ShortScience.org Latest Summaries
http://www.shortscience.org/

Sun, 22 Sep 2019 10:31:02 +0000 | **Meta-Learning with Implicit Gradients** (arXiv:1909.04630) | summary by Prateek Gupta

This paper builds upon the previous work in gradient-based meta-learning methods.
The objective of meta-learning is to find meta-parameters ($\theta$) which can be "adapted" to yield "task-specific" ($\phi$) parameters.
Thus, $\theta$ and $\phi$ lie in the same parameter space.
A meta-learning problem deals with several tasks, where each task is specified by its respective training and test datasets.
At the inference time of gradient-based meta-learning methods, before the start of each task, one ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1909.04630#prateekgupta

Sat, 21 Sep 2019 22:14:45 +0000 | **Temporal Cycle-Consistency Learning** (arXiv:1904.07846) | summary by jerpint

# Overview
This paper presents a novel way to temporally align frames across videos of similar actions in a self-supervised setting. To do so, they leverage the concept of cycle-consistency. They introduce two formulations of cycle-consistency which are differentiable and solvable using standard gradient descent approaches. They name their method Temporal Cycle Consistency (TCC). They introduce a dataset that they use to evaluate their approach and show that their learned embeddings allow for few ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1904-07846#jeremypinto

Fri, 20 Sep 2019 19:27:45 +0000 | **Videos as Space-Time Region Graphs** (DOI 10.1007/978-3-030-01228-1_25) | summary by Oleksandr Bailo

This paper tackles the challenge of action recognition by representing a video as space-time graphs: the **similarity graph** captures the relationship between correlated objects in the video, while the **spatial-temporal graph** captures the interaction between objects.
The algorithm is composed of several modules:
1. **Inflated 3D (I3D) network**. In essence, it is a usual 2D CNN (e.g. ResNet-50) converted to a 3D CNN by copying the 2D weights along an additional dimension, with subsequent renormalizatio...
http://www.shortscience.org/paper?bibtexKey=10.1007/978-3-030-01228-1_25#ukrdailo

Fri, 20 Sep 2019 05:56:42 +0000 | **Certifying Some Distributional Robustness with Principled Adversarial Training** (arXiv:1710.10571) | summary by Jan RocketMan

A novel method for adversarially robust learning with theoretical guarantees under small perturbations.
1) Given the default distribution $P_0$, it defines a neighborhood of $P_0$ as the set of distributions which are $\rho$-close to it in terms of the Wasserstein metric with a predefined cost function $c$ (e.g. $L_2$);
2) it formulates the robust learning problem as minimization of the loss on the worst-case example in this neighborhood, and proposes a Lagrangian relaxation of it;
3) given this, it provides a data-dependent upper bound on...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1710.10571#janrocketman

Thu, 12 Sep 2019 12:38:11 +0000 | **"Why Should I Trust You?": Explaining the Predictions of Any Classifier** (arXiv:1602.04938) | summary by Apoorva Shetty

Although machine learning models have been widely accepted as the next step towards simplifying complex problems, the inner workings of a machine learning model are still unclear; making these details accessible can increase trust in the model's predictions, and in the model itself.
**Idea:** A good explanation system that can justify a classifier's predictions and help diagnose the reasoning behind a model can greatly raise one's trust in the predictive model.
**Solution:** T...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1602.04938#apoorvashetty

Tue, 10 Sep 2019 12:31:58 +0000 | **Sanity Checks for Saliency Maps** (arXiv:1810.03292) | summary by Apoorva Shetty

**Idea:** With the growing use of visual explanation systems for machine learning models, such as saliency maps, there needs to be a standardized method of verifying whether a saliency method correctly describes the underlying ML model.
**Solution:** In this paper two Sanity Checks have been proposed to verify the accuracy and the faithfulness of a saliency method:
* *Model parameter randomization test:* In this sanity check, the outputs of a saliency method on a trained model are compared to those o...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1810.03292#apoorvashetty

Wed, 04 Sep 2019 15:16:21 +0000 | **Benchmarking Model-Based Reinforcement Learning** (arXiv:1907.02057) | summary by dav1309

This is not a detailed summary, just general notes:
The authors make an excellent and extensive comparison of model-free and model-based methods in 18 environments. In general, the authors compare 3 classes of Model-Based Reinforcement Learning (MBRL) algorithms, using as the comparison metric the total return in the environment after 200K steps (reporting the mean and std by taking windows of 5000 steps throughout the whole training, and averaging across 4 seeds for each algorithm). They compare MBRL ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1907-02057#dav1309

Tue, 27 Aug 2019 15:39:34 +0000 | **An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks** (arXiv:1312.6211) | summary by Andrea Walter Ruggerini

The paper discusses and empirically investigates "catastrophic forgetting" (**CF**), i.e. the inability of a model to perform a task it was previously trained on after being retrained to perform a second task.
An illuminating example is what happens in ML systems with convex objectives: regardless of the initialization (i.e. of what was learnt by doing the first task), the training of the second task will always end in the global minimum, thus totally "forgett...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1312.6211#andreaw

Mon, 26 Aug 2019 12:36:51 +0000 | **Fast Instance and Semantic Segmentation Exploiting Local Connectivity, Metric Learning, and One-Shot Detection for Robotics** | summary by Hadrien Bertrand

The paper proposes a method to perform joint instance and semantic segmentation. The method is fast, as it is meant to run in an embedded environment (such as a robot). While the semantic map may seem redundant given the instance one, it is not: semantic segmentation is a key part of obtaining the instance map.
# Architecture
The image is first put through a typical CNN encoder (specifically a ResNet derivative), followed by 3 separate decoders. The output of the decoder is at a l...
http://www.shortscience.org/paper?bibtexKey=conf/icra/MiliotoMS19#hbertrand

Mon, 19 Aug 2019 19:30:51 +0000 | **Online Continual Learning with Maximally Interfered Retrieval** (arXiv:1908.04742) | summary by Massimo Caccia

Disclaimer: I am an author
# Intro
Experience replay (ER) and generative replay (GEN) are two effective continual learning strategies. In the former, samples from a stored memory are replayed to the continual learner to reduce forgetting. In the latter, old data is compressed with a generative model and generated data is replayed to the continual learner. Both of these strategies assume a random sampling of the memories. But learning a new task doesn't cause **equal** interference (forgetting)...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1908.04742#mcaccia

Wed, 14 Aug 2019 14:49:54 +0000 | **WAIC, but Why? Generative Ensembles for Robust Anomaly Detection** (arXiv:1810.01392) | summary by Massimo Caccia

### Summary
Knowing when a model is qualified to make a prediction is critical to safe deployment of ML technology. Model-independent / Unsupervised Out-of-Distribution (OoD) detection is appealing mostly because it doesn't require task-specific labels to train. It is tempting to suggest a simple one-tailed test in which lower likelihoods are OoD (assigned by a Likelihood Model), but the intuition that In-Distribution (ID) inputs should have highest likelihoods _does not hold in higher dimension...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1810.01392#mcaccia

Thu, 01 Aug 2019 22:45:16 +0000 | **Explainable AI for Trees: From Local Explanations to Global Understanding** (arXiv:1905.04610) | summary by Apoorva Shetty

Tree-based ML models are becoming increasingly popular, but the explanation space for these types of models is woefully lacking in local-level explanations. Local explanations can give a clearer picture of specific use cases and help pinpoint exact areas where the ML model may be lacking in accuracy.
**Idea**: We need a local explanation system for trees that is not based on the simple decision path, but rather weighs each feature against every other feature to gain better insig...
http://www.shortscience.org/paper?bibtexKey=lundberg2019explainable#apoorvashetty

Wed, 31 Jul 2019 18:23:34 +0000 | **Show, Attend and Tell: Neural Image Caption Generation with Visual Attention** | summary by jerpint

# Summary
The authors present a way to generate captions describing the content of images using attention-based mechanisms. They present two ways of training the network, one via standard backpropagation techniques and another using stochastic processes. They also show how their model can selectively "focus" on the relevant parts of an image to generate appropriate captions, as shown in the classic example of the famous woman throwing a frisbee. Finally, they validate their model on Flickr8k, ...
http://www.shortscience.org/paper?bibtexKey=conf/icml/XuBKCCSZB15#jeremypinto

Thu, 25 Jul 2019 19:00:11 +0000 | **Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering** (DOI 10.1109/cvpr.2018.00636) | summary by jerpint

# Summary
This paper presents state-of-the-art methods for both caption generation of images and visual question answering (VQA). The authors build on previous methods by adding what they call a "bottom-up" approach to previous "top-down" attention mechanisms. They show that using their approach they obtain SOTA on both image captioning (MSCOCO) and visual question answering (2017 VQA challenge). They propose a specific network configuration for each. Their biggest contribution is usin...
http://www.shortscience.org/paper?bibtexKey=10.1109/cvpr.2018.00636#jeremypinto

Thu, 25 Jul 2019 17:06:02 +0000 | **Metadata Embeddings for User and Item Cold-start Recommendations** (arXiv:1507.08439) | summary by Martin Thoma

The idea is to combine collaborative filtering with content-based recommenders to mitigate the user and item cold-start problems.
The author distinguishes between positive and negative interactions.
The representation of a user and of items is the sum of all their latent representations. This sounds similar to "**Asymmetric factor models**" as described in the BellKor Netflix Prize solution. **The key idea is to encode the latent user (or item) vector as a sum of latent attribute vectors.**...
http://www.shortscience.org/paper?bibtexKey=journals/corr/Kula15#martinthoma

Tue, 23 Jul 2019 14:01:54 +0000 | **Collaborative Filtering for Implicit Feedback Datasets** | summary by Martin Thoma

This paper is about a recommendation system approach using collaborative filtering (CF) on implicit feedback datasets.
The core of it is the minimization problem
$$\min_{x_*, y_*} \sum_{u,i} c_{ui} (p_{ui} - x_u^T y_i)^2 + \underbrace{\lambda \left ( \sum_u || x_u ||^2 + \sum_i || y_i ||^2\right )}_{\text{Regularization}}$$
with
* $\lambda \in [0, \infty[$ is a hyperparameter which defines how strongly the model is regularized
* $u$ denotes a user; $x_*$ denotes all user factors $x_u$ combined
*...
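The minimization objective above can be sketched in numpy. This is a minimal illustration, not the paper's implementation: the sizes, the preferences $p_{ui}$, and the confidence weights $c_{ui}$ below are hypothetical toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, lam = 4, 5, 3, 0.1

P = (rng.random((n_users, n_items)) > 0.5).astype(float)  # preferences p_ui
C = 1.0 + 40.0 * rng.random((n_users, n_items))           # confidences c_ui
X = rng.normal(size=(n_users, k))                         # user factors x_u
Y = rng.normal(size=(n_items, k))                         # item factors y_i

def implicit_cf_loss(P, C, X, Y, lam):
    """Confidence-weighted squared error plus L2 regularization, term by term as in the formula."""
    err = P - X @ Y.T                   # p_ui - x_u^T y_i for all u, i at once
    data_term = np.sum(C * err ** 2)    # sum_{u,i} c_ui (p_ui - x_u^T y_i)^2
    reg = lam * (np.sum(X ** 2) + np.sum(Y ** 2))
    return data_term + reg

loss = implicit_cf_loss(P, C, X, Y, lam)
```

In the paper this objective is minimized by alternating least squares, re-solving for $x_*$ with $y_*$ fixed and vice versa; the sketch only evaluates the objective.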
http://www.shortscience.org/paper?bibtexKey=koren:icdm08#martinthoma

Tue, 23 Jul 2019 06:09:59 +0000 | **Sanity Checks for Saliency Maps** | summary by Hadrien Bertrand

The paper designs some basic tests to compare saliency methods. It finds that some of the most popular methods are independent of the model parameters and the data, meaning they are effectively useless.
## Methods compared
The paper compares the following methods: gradient explanation, gradient x input, integrated gradients, guided backprop, guided GradCam and SmoothGrad. They provide a refresher on those methods in the appendix.
All those methods can be put in the same framework. They require a ...
http://www.shortscience.org/paper?bibtexKey=conf/nips/AdebayoGMGHK18#hbertrand

Wed, 17 Jul 2019 20:19:14 +0000 | **Robustness and generalization** (DOI 10.1007/s10994-011-5268-1) | summary by David Stutz

Xu and Mannor provide a theoretical paper on robustness and generalization, where their notion of robustness is based on the idea that the difference in loss should be small for samples that are close to each other. This implies that, e.g., for a test sample close to a training sample, the loss on both samples should be similar. The authors formalize this notion as follows:
Definition: Let $A$ be a learning algorithm and $S \subset Z$ be a training set such that $A(S)$ denotes the model learned on $S$ by $A$;...
http://www.shortscience.org/paper?bibtexKey=10.1007/s10994-011-5268-1#davidstutz

Tue, 16 Jul 2019 17:19:43 +0000 | **Second-Order Adversarial Attack and Certifiable Robustness** (arXiv:1809.03113) | summary by David Stutz

Li et al. propose an adversarial attack motivated by second-order optimization, and use input randomization as a defense. Based on a Taylor expansion, the optimal adversarial perturbation should be aligned with the dominant eigenvector of the Hessian matrix of the loss. As the eigenvectors of the Hessian cannot be computed efficiently, the authors propose an approximation; this is mainly based on evaluating the gradient under Gaussian noise. The gradient is then normalized before taking a projecte...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1809-03113#davidstutz

Tue, 16 Jul 2019 17:13:29 +0000 | **Certified Robustness to Adversarial Examples with Differential Privacy** (arXiv:1802.03471) | summary by David Stutz

Lecuyer et al. propose a defense against adversarial examples based on differential privacy. Their main insight is that a differentially private algorithm is also robust to slight perturbations. In practice, this amounts to injecting noise into some layer (or into the image directly) and using Monte Carlo estimation to compute the expected prediction. The approach is compared to adversarial training against the Carlini+Wagner attack.
Also find this summary at davidstutz.de.
http://www.shortscience.org/paper?bibtexKey=journals/corr/1802.03471#davidstutz

Tue, 16 Jul 2019 16:53:19 +0000 | **ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness** | summary by David Stutz

Geirhos et al. show that state-of-the-art convolutional neural networks put too much importance on texture information. This claim is confirmed in a controlled study comparing convolutional neural network and human performance on variants of ImageNet images with texture removed (silhouettes) or reduced to edges. Additionally, networks considering only local information can perform nearly as well as other networks. To avoid this bias, they propose a stylized ImageNet variant where textures are replaced ra...
http://www.shortscience.org/paper?bibtexKey=geirhos2018imagenettrained#davidstutz

Tue, 16 Jul 2019 16:36:24 +0000 | **Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet** (arXiv:1904.00760) | summary by David Stutz

Brendel and Bethge show empirically that state-of-the-art deep neural networks on ImageNet rely to a large extent on local features, without any notion of interaction between them. To this end, they propose a bag-of-local-features model by applying a ResNet-like architecture to small patches of ImageNet images. The predictions on these local features are then averaged and a linear classifier is trained on top. Due to the locality, this model makes it possible to inspect which areas in an image contribute t...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1904-00760#davidstutz

Tue, 16 Jul 2019 16:10:57 +0000 | **Towards Stable and Efficient Training of Verifiably Robust Neural Networks** (arXiv:1906.06316) | summary by David Stutz

Zhang et al. combine interval bound propagation and CROWN, both approaches to obtain bounds on a network's output, to efficiently train robust networks. Both interval bound propagation (IBP) and CROWN allow one to bound a network's output for a specific set of allowed perturbations around clean input examples. These bounds can be used for adversarial training. The motivation to combine CROWN and IBP stems from the fact that training using IBP bounds usually results in instabilities, while traini...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1906-06316#davidstutz

Tue, 16 Jul 2019 16:01:19 +0000 | **Efficient Neural Network Robustness Certification with General Activation Functions** | summary by David Stutz

Zhang et al. propose CROWN, a method for certifying adversarial robustness based on bounding activation functions using linear functions. Informally, the main result can be stated as follows: if the activation functions used in a deep neural network can be bounded above and below by linear functions (the activation function may also be segmented first), the network output can also be bounded by linear functions. These linear functions can be computed explicitly, as stated in the paper. Then, gi...
http://www.shortscience.org/paper?bibtexKey=conf/nips/ZhangWCHD18#davidstutz

Tue, 16 Jul 2019 15:55:18 +0000 | **Generalization in Deep Networks: The Role of Distance from Initialization** (arXiv:1901.01672) | summary by David Stutz

Nagarajan and Kolter show that neural networks are implicitly regularized by stochastic gradient descent to have small distance from their initialization. This implicit regularization may explain the good generalization performance of over-parameterized neural networks; specifically, more complex models usually generalize better, which contradicts the general trade-off between expressivity and generalization in machine learning. On MNIST, the authors show that the distance of the network's par...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1901-01672#davidstutz

Tue, 16 Jul 2019 15:51:29 +0000 | **On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models** (arXiv:1810.12715) | summary by David Stutz

Gowal et al. propose interval bound propagation to obtain certified robustness against adversarial examples. In particular, given a neural network consisting of linear layers and monotonically increasing activation functions, a set of allowed perturbations is propagated to obtain upper and lower bounds at each layer. These lead to bounds on the logits of the network, which are used to verify whether the network changes its prediction on the allowed perturbations. Specifically, Gowal et al. consider ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1810.12715#davidstutz

Tue, 16 Jul 2019 15:47:34 +0000 | **Batch Normalization is a Cause of Adversarial Vulnerability** (arXiv:1905.02161) | summary by David Stutz

Galloway et al. argue that batch normalization reduces robustness against noise and adversarial examples. On various vision datasets, including SVHN and ImageNet, with popular self-trained and pre-trained models, they empirically demonstrate that networks with batch normalization show reduced accuracy on noise and adversarial examples. As noise, they consider additive Gaussian noise as well as the different noise types included in the Cifar-C dataset. Similarly, for adversarial examples, they conside...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1905-02161#davidstutz

Tue, 16 Jul 2019 15:41:54 +0000 | **Radial basis function neural networks: a topical state-of-the-art survey** | summary by David Stutz

Dash et al. present a reasonably recent survey on radial basis function (RBF) networks. RBF networks can be understood as two-layer perceptrons, consisting of an input layer, a hidden layer and an output layer. Instead of using a linear operation to compute the hidden layer, RBF kernels are used; as a simple example, the hidden units are computed as
$h_i = \phi_i(x) = \exp\left(-\frac{\|x - \mu_i\|^2}{2\sigma_i^2}\right)$
where $\mu_i$ and $\sigma_i^2$ are parameters of the kernel. In a clust...
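The Gaussian hidden unit above can be sketched in a few lines of numpy. This is a minimal illustration, not the survey's code; the centers, widths, and input below are hypothetical toy values.

```python
import numpy as np

def rbf_hidden_layer(x, mu, sigma2):
    """h_i = exp(-||x - mu_i||^2 / (2 sigma_i^2)) for each hidden unit i."""
    sq_dist = np.sum((x[None, :] - mu) ** 2, axis=1)  # ||x - mu_i||^2 per unit
    return np.exp(-sq_dist / (2.0 * sigma2))

mu = np.array([[0.0, 0.0], [1.0, 1.0]])  # two centers mu_i in a 2-D input space
sigma2 = np.array([0.5, 0.5])            # widths sigma_i^2
h = rbf_hidden_layer(np.array([0.0, 0.0]), mu, sigma2)
# The unit centered at the input responds maximally (1.0); the other is attenuated.
```

An output layer would then take a linear combination of these $h_i$; the localized response is what makes each unit act like a prototype detector.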
http://www.shortscience.org/paper?bibtexKey=journals/cejcs/DashBDC16#davidstutz

Sun, 14 Jul 2019 17:38:25 +0000 | **How Can We Be So Dense? The Benefits of Using Highly Sparse Representations** (arXiv:1903.11257) | summary by David Stutz

Ahmad and Scheinkman propose a simple sparse layer in order to improve robustness against random noise. Specifically, considering a general linear network layer, i.e.
$\hat{y}^l = W^l y^{l-1} + b^l$ and $y^l = f(\hat{y}^l)$
where $f$ is an activation function, the weights are first initialized using a sparse distribution; then, the activation function (commonly ReLU) is replaced by a top-$k$ ReLU version where only the top-$k$ activations are propagated. In experiments, this is shown to improve...
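The top-$k$ ReLU described above can be sketched as follows; this is a minimal illustration with a hypothetical $k$ and pre-activation vector, not the authors' implementation.

```python
import numpy as np

def top_k_relu(y_hat, k):
    """Apply ReLU, then propagate only the k largest activations (rest are zeroed)."""
    y = np.maximum(y_hat, 0.0)
    if k < y.size:
        # argpartition places the top-k indices last; zero out all the others.
        drop = np.argpartition(y, -k)[:-k]
        y[drop] = 0.0
    return y

y_hat = np.array([0.3, -1.2, 2.5, 0.7, 1.1])
y = top_k_relu(y_hat, k=2)  # only the two largest activations (2.5 and 1.1) survive
```

Combined with sparse weight initialization, this keeps the layer's representation highly sparse by construction rather than relying on regularization.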
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1903-11257#davidstutz

Sun, 14 Jul 2019 17:29:34 +0000 | **Deep-RBF Networks Revisited: Robust Classification with Rejection** (arXiv:1812.03190) | summary by David Stutz

Zadeh et al. propose a layer similar to radial basis functions (RBFs) to increase a network's robustness against adversarial examples by rejection. Based on a deep feature extractor, the RBF units compute
$d_k(x) = \|A_k^T x + b_k\|_p^p$
with parameters $A_k$ and $b_k$. The decision rule remains unchanged, but the outputs do not resemble probabilities anymore. The full network, i.e., feature extractor and RBF layer, is trained using an adapted loss that resembles a max-margin loss:
$J = \sum_i ...
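The distance unit $d_k(x)$ above can be sketched in numpy. This is a minimal illustration under my own assumptions: the sizes are hypothetical toy values, and predicting via the smallest distance (with rejection when even the smallest distance is large) is my reading of the distance-based decision rule, not code from the paper.

```python
import numpy as np

def deep_rbf_distances(x, A, b, p=2):
    """d_k(x) = ||A_k^T x + b_k||_p^p for every class k.

    x: extracted features, shape (d,); A: shape (K, d, m); b: shape (K, m).
    """
    z = np.einsum('kdm,d->km', A, x) + b   # A_k^T x + b_k for each class k
    return np.sum(np.abs(z) ** p, axis=1)  # p-norm raised to the p

rng = np.random.default_rng(0)
x = rng.normal(size=4)          # output of a (here omitted) feature extractor
A = rng.normal(size=(3, 4, 2))  # per-class parameters A_k
b = rng.normal(size=(3, 2))     # per-class parameters b_k
d = deep_rbf_distances(x, A, b)
pred = int(np.argmin(d))        # closest class wins; a large min distance suggests rejection
```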
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1812-03190#davidstutz

Sun, 14 Jul 2019 17:25:34 +0000 | **Neural Networks with Structural Resistance to Adversarial Attacks** (arXiv:1809.09262) | summary by David Stutz

De Alfaro proposes a deep radial basis function (RBF) network to obtain robustness against adversarial examples. In contrast to "regular" RBF networks, which usually consist of only one hidden layer containing RBF units, de Alfaro proposes to stack multiple layers with RBF units. Specifically, a Gaussian unit utilizing the $L_\infty$ norm is used:
$\exp\left(-\max_i (u_i(x_i - w_i))^2\right)$
where $u_i$ and $w_i$ are parameters and $x_i$ are the inputs to the unit – so the network in...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1809.09262#davidstutz

Sun, 14 Jul 2019 17:21:11 +0000 | **Adversarial Examples Are Not Bugs, They Are Features** (arXiv:1905.02175) | summary by David Stutz

Ilyas et al. present a follow-up work to their paper on the trade-off between accuracy and robustness. Specifically, given a feature $f(x)$ computed from input $x$, the feature is considered predictive if
$\mathbb{E}_{(x,y) \sim \mathcal{D}}[y f(x)] \geq \rho$;
similarly, a predictive feature is robust if
$\mathbb{E}_{(x,y) \sim \mathcal{D}}\left[\inf_{\delta \in \Delta(x)} yf(x + \delta)\right] \geq \gamma$.
This means that a feature is considered robust if the worst-case correlation with the l...
http://www.shortscience.org/paper?bibtexKey=ilyas2019adversarial#davidstutz

Sun, 14 Jul 2019 17:13:32 +0000 | **Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search** (arXiv:1903.12269) | summary by David Stutz

Rakin et al. introduce the bit-flip attack, aimed at degrading a network's performance by flipping a few weight bits. On Cifar10 and ImageNet, common architectures such as ResNets or AlexNet are quantized into 8 bits per weight value (or fewer). Then, on a subset of the validation set, gradients with respect to the training loss are computed, and in each layer bits are selected based on their gradient value. Afterwards, the layer which incurs the maximum increase in training loss is selected. Thi...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1903-12269#davidstutz

Sun, 14 Jul 2019 17:05:05 +0000 | **Deep Adversarial Networks for Biomedical Image Segmentation Utilizing Unannotated Images** | summary by Joseph Paul Cohen

This work improves the performance of a segmentation network by utilizing unlabelled data. They use a discriminator (which they call EN) to distinguish between annotated and unannotated examples. They then train the segmentation generator (which they call SN) based on what will fool the discriminator.
Three training phases are shown in the paper.
This work is really great. They are using the segmentation to condition the discriminator which will learn to point out flaws when applying the segmentation to the un...
http://www.shortscience.org/paper?bibtexKey=conf/miccai/ZhangYCFHC17#joecohen

Sun, 14 Jul 2019 16:04:19 +0000 | **Benchmarking Deep Learning Hardware and Frameworks: Qualitative Metrics** (arXiv:1907.03626) | summary by Wei Dai
Previous papers on benchmarking deep neural networks offer knowledge of deep learning hardware devices and software frameworks. This paper introduces benchmarking principles, surveys machine learning devices including GPUs, FPGAs, and ASICs, and reviews deep learning software frameworks. It also qualitatively compares these technologies with respect to benchmarking from the angles of our 7-metric approach to deep learning ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1907.03626#weidai

Fri, 12 Jul 2019 02:41:50 +0000 | **AI Safety Gridworlds** (arXiv:1711.09883) | summary by dniku

The paper proposes a standardized benchmark for a number of safety-related problems, and provides an implementation that can be used by other researchers. The problems fall into two categories: specification and robustness. Specification refers to cases where it is difficult to specify a reward function that encodes our intentions. Robustness means that an agent's actions should be robust when facing various complexities of a real-world environment. Here is a list of problems:
1. Specification:
1....
http://www.shortscience.org/paper?bibtexKey=journals/corr/1711.09883#dniku

Thu, 11 Jul 2019 14:01:20 +0000 | **The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks** (arXiv:1803.03635) | summary by David Stutz

Frankle and Carbin discover so-called winning tickets, subsets of a neural network's weights that are sufficient to obtain state-of-the-art accuracy. The lottery ticket hypothesis states that dense networks contain subnetworks (the winning tickets) that can reach the same accuracy when trained in isolation, from scratch. The key insight is that these subnetworks seem to have received an optimal initialization. Then, given a complex trained network for, e.g., Cifar, weights are pruned based on their ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1803.03635#davidstutz

Tue, 09 Jul 2019 19:50:56 +0000 | **Certified Adversarial Robustness via Randomized Smoothing** (arXiv:1902.02918) | summary by David Stutz

Cohen et al. study robustness bounds of randomized smoothing, a region-based classification scheme where the prediction is averaged over Gaussian samples around the test input. Specifically, given a test input, the predicted class is the class whose decision region has the largest overlap with a normal distribution of pre-defined variance. The intuition of this approach is that, for small perturbations, the decision regions of classes can't vary too much. In practice, randomized smoothing is a...
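The smoothed prediction itself can be sketched as a Monte Carlo majority vote; this is only an illustration of the idea (the toy base classifier, noise level, and sample count below are hypothetical), not the certification procedure from the paper.

```python
import numpy as np

def smoothed_predict(classify, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority vote of the base classifier over Gaussian samples around x."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    votes = np.array([classify(x + eps) for eps in noise])
    return int(np.argmax(np.bincount(votes)))  # most frequent class wins

# Toy 1-D base classifier: class 1 iff the scalar input exceeds 0.
classify = lambda z: int(z[0] > 0.0)
label = smoothed_predict(classify, np.array([0.8]))  # well inside class 1
```

The certified radius in the paper then grows with the margin by which the top class wins this vote.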
http://www.shortscience.org/paper?bibtexKey=journals/corr/1902.02918#davidstutz

Tue, 09 Jul 2019 19:44:07 +0000 | **Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks** (arXiv:1706.02690) | summary by David Stutz

Liang et al. propose a perturbation-based approach for detecting out-of-distribution examples using a network's confidence predictions. In particular, the approach is based on the observation that neural networks make more confident predictions on images from the original data distribution (in-distribution examples) than on examples taken from a different distribution, i.e., a different dataset (out-distribution examples). This effect can further be amplified by using a temperature-scaled so...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1706.02690#davidstutz

Tue, 09 Jul 2019 19:31:52 +0000 | **Adding Gradient Noise Improves Learning for Very Deep Networks** (arXiv:1511.06807) | summary by David Stutz

Neelakantan et al. study gradient noise for improving neural network training. In particular, they add Gaussian noise to the gradients in each iteration:
$\tilde{\nabla}f = \nabla f + \mathcal{N}(0, \sigma^2)$
where the variance $\sigma^2$ is adapted throughout training as follows:
$\sigma^2 = \frac{\eta}{(1 + t)^\gamma}$
where $\eta$ and $\gamma$ are hyper-parameters and $t$ the current iteration. In experiments, the authors show that gradient noise has the potential to improve accuracy, es...
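The annealed-noise update above can be sketched as follows; the particular $\eta$ and $\gamma$ values here are illustrative choices, not a recommendation from the paper.

```python
import numpy as np

def noisy_gradient(grad, t, eta=0.3, gamma=0.55, rng=None):
    """Return grad + N(0, sigma^2) noise with sigma^2 = eta / (1 + t)^gamma."""
    if rng is None:
        rng = np.random.default_rng(0)
    sigma2 = eta / (1.0 + t) ** gamma
    return grad + rng.normal(0.0, np.sqrt(sigma2), size=grad.shape)

grad = np.zeros(5)
early = noisy_gradient(grad, t=0)       # relatively large noise early in training
late = noisy_gradient(grad, t=10_000)   # the noise variance decays as t grows
```

The schedule injects exploration early on and anneals it away, similar in spirit to simulated annealing.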
http://www.shortscience.org/paper?bibtexKey=journals/corr/1511.06807#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1511.06807#davidstutzTue, 09 Jul 2019 19:23:12 +0000conf/iclr/LeeLLS182Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution SamplesDavid StutzLee et al. propose a generative model for obtaining confidence-calibrated classifiers. Neural networks are known to be overconfident in their predictions – not only on examples from the task’s data distribution, but also on other examples taken from different distributions. The authors propose a GAN-based approach to force the classifier to predict uniform predictions on examples not taken from the data distribution. In particular, in addition to the target classifier, a generator and a disc...
http://www.shortscience.org/paper?bibtexKey=conf/iclr/LeeLLS18#davidstutz
http://www.shortscience.org/paper?bibtexKey=conf/iclr/LeeLLS18#davidstutzTue, 09 Jul 2019 19:12:24 +00001901.04684journals/corr/abs-1901-046842The Limitations of Adversarial Training and the Blind-Spot AttackDavid StutzZhang et al. search for “blind spots” in the data distribution and show that blind spot test examples can be used to find adversarial examples easily. On MNIST, the data distribution is approximated using kernel density estimation where the distance metric is computed in dimensionality-reduced feature space (of an adversarially trained model). For dimensionality reduction, t-SNE is used. Blind spots are found by slightly shifting pixels or changing the gray value of the background. Based on t...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1901-04684#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1901-04684#davidstutzTue, 09 Jul 2019 19:02:32 +00001612.00334journals/corr/1612.003342A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial ExamplesDavid StutzWang et al. discuss an alternative definition of adversarial examples, taking into account an oracle classifier. Adversarial perturbations are usually constrained in their norm (e.g., $L_\infty$ norm for images); however, the main goal of this constraint is to ensure label invariance – if the image didn’t change notably, the label didn’t change either. As an alternative formulation, the authors consider an oracle for the task, e.g., humans for image classification tasks. Then, an adversarial ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1612.00334#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1612.00334#davidstutzTue, 09 Jul 2019 18:57:29 +000010.1145/3128572.31404512Towards Poisoning of Deep Learning Algorithms with Back-gradient OptimizationDavid StutzMunoz-Gonzalez et al. propose a multi-class data poisoning attack against deep neural networks based on back-gradient optimization. They consider the common poisoning formulation stated as follows:
$ \max_{D_c} \min_w \mathcal{L}(D_c \cup D_{tr}, w)$
where $D_c$ denotes a set of poisoned training samples and $D_{tr}$ the corresponding clean dataset. Here, the loss $\mathcal{L}$ used for training is minimized as the inner optimization problem. As a result, as long as learning itself does not have ...
http://www.shortscience.org/paper?bibtexKey=10.1145/3128572.3140451#davidstutz
http://www.shortscience.org/paper?bibtexKey=10.1145/3128572.3140451#davidstutzTue, 09 Jul 2019 18:41:53 +0000conf/ccs/MengC172MagNet: A Two-Pronged Defense against Adversarial ExamplesDavid StutzMeng and Chen propose MagNet, a combination of adversarial example detection and removal. At test time, given a clean or adversarial test image, the proposed defense works as follows: First, the input is passed through one or multiple detectors. If one of these detectors fires, the input is rejected. To this end, the authors consider detection based on the reconstruction error of an auto-encoder or detection based on the divergence between probability predictions (on adversarial vs. clean exampl...
http://www.shortscience.org/paper?bibtexKey=conf/ccs/MengC17#davidstutz
http://www.shortscience.org/paper?bibtexKey=conf/ccs/MengC17#davidstutzTue, 09 Jul 2019 18:38:40 +00001707.01159journals/corr/SarkarBMC172UPSET and ANGRI : Breaking High Performance Image ClassifiersDavid StutzSarkar et al. propose two “learned” adversarial example attacks, UPSET and ANGRI. The former, UPSET, learns to predict universal, targeted adversarial examples. The latter, ANGRI, learns to predict (non-universal) targeted adversarial attacks. For UPSET, a network takes the target label as input and learns to predict a perturbation, which added to the original image results in mis-classification; for ANGRI, a network takes both the target label and the original image as input to predict a pe...
http://www.shortscience.org/paper?bibtexKey=journals/corr/SarkarBMC17#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/SarkarBMC17#davidstutzMon, 08 Jul 2019 19:49:38 +00001803.06959journals/corr/1803.069592On the importance of single directions for generalizationDavid StutzMorcos et al. study the influence of ablating single units as a proxy to generalization performance. On Cifar10, for example, an 11-layer convolutional network is trained on the clean dataset, as well as on versions of Cifar10 where a fraction of $p$ samples have corrupted labels. In the latter cases, the network is forced to memorize examples, as there is no inherent structure in the label assignment. Then, it is experimentally shown that these memorizing networks are less robust to setting who...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1803.06959#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1803.06959#davidstutzMon, 08 Jul 2019 19:47:59 +00001803.06978journals/corr/1803.069782Improving Transferability of Adversarial Examples with Input DiversityDavid StutzXie et al. propose to improve the transferability of adversarial examples by computing them based on transformed input images. In particular, they adapt I-FGSM such that, in each iteration, the update is computed on a transformed version of the current image with probability $p$. When, at the same time attacking an ensemble of networks, this is shown to improve transferability.
Also find this summary at [davidstutz.de]().
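The per-iteration transformation described above can be sketched as follows. This is a minimal sketch under stated assumptions: `grad_fn` stands in for the gradient of a real model's loss w.r.t. the input, and the default `transform` is a placeholder for the paper's random resize-and-pad transformation.

```python
import numpy as np

def diverse_ifgsm(grad_fn, x, eps=0.03, alpha=0.01, steps=10, p=0.5,
                  transform=None, rng=None):
    """I-FGSM with input diversity (sketch): in each iteration, with
    probability p the gradient is computed on a randomly transformed
    copy of the current image; the result is projected back into the
    L_inf ball of radius eps around x."""
    rng = np.random.default_rng(0) if rng is None else rng
    transform = transform or (lambda z: z + rng.normal(0.0, 0.01, z.shape))
    x_adv = x.copy()
    for _ in range(steps):
        inp = transform(x_adv) if rng.random() < p else x_adv
        x_adv = x_adv + alpha * np.sign(grad_fn(inp))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # L_inf projection
    return x_adv

# With a constant "gradient", the iterate walks to the L_inf boundary.
res = diverse_ifgsm(lambda z: np.ones_like(z), np.zeros(4))
```

In the paper's setting, `grad_fn` would additionally average gradients over an ensemble of networks to improve transferability.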
http://www.shortscience.org/paper?bibtexKey=journals/corr/1803.06978#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1803.06978#davidstutzSat, 06 Jul 2019 11:53:26 +00001712.00699journals/corr/abs-1712-006992Improving Network Robustness against Adversarial Attacks with Compact ConvolutionDavid StutzRanjan et al. propose to constrain deep features to lie on hyperspheres in order to improve robustness against adversarial examples. For the last fully-connected layer, this is achieved by the L2-softmax, which forces the features to lie on the hypersphere. For intermediate convolutional or fully-connected layer, the same effect is achieved analogously, i.e., by normalizing inputs, scaling them and applying the convolution/weight multiplication. In experiments, the authors argue that this improv...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1712-00699#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1712-00699#davidstutzSat, 06 Jul 2019 11:44:19 +00001701.06548journals/corr/1701.065482Regularizing Neural Networks by Penalizing Confident Output DistributionsDavid StutzPereyra et al. propose an entropy regularizer for penalizing over-confident predictions of deep neural networks. Specifically, given the predicted distribution $p_\theta(y_i|x)$ for labels $y_i$ and network parameters $\theta$, a regularizer
$-\beta \max(0, \Gamma - H(p_\theta(y|x)))$
is added to the learning objective. Here, $H$ denotes the entropy and $\beta$, $\Gamma$ are hyper-parameters allowing to weight and limit the regularizers influence. In experiments, this regularizer showed sligh...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1701.06548#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1701.06548#davidstutzSat, 06 Jul 2019 11:34:51 +00001808.02651journals/corr/1808.026512Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable RendererDavid StutzLiu et al. propose adversarial attacks on physical parameters of images, which can be manipulated efficiently through differentiable renderer. In particular, they propose adversarial lighting and adversarial geometry; in both cases, an image is assumed to be a function of lighting and geometry, generated by a differentiable renderer. By directly manipulating these latent variables, more realistic looking adversarial examples can be generated for synthetic images as shown in Figure 1.
Figure 1:...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1808.02651#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1808.02651#davidstutzSat, 06 Jul 2019 11:25:01 +00001711.05934journals/corr/abs-1711-059342Enhanced Attacks on Defensively Distilled Deep Neural NetworksDavid StutzLiu et al. propose a white-box attack against defensive distillation. In particular, the proposed attack combines the objective of the Carlini+Wagner attack [1] with a slightly different reparameterization to enforce an $L_\infty$-constraint on the perturbation. In experiments, defensive distillation is shown not to be robust.
[1] Nicholas Carlini, David A. Wagner: Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy 2017: 39-57
Also find this summary at ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1711-05934#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1711-05934#davidstutzSat, 06 Jul 2019 11:19:52 +00001805.04613journals/corr/abs-1805-046132Breaking Transferability of Adversarial Samples with RandomnessDavid StutzZhou et al. study transferability of adversarial examples against ensembles of randomly perturbed networks. Specifically, they consider randomly perturbing the weights using Gaussian additive noise. Using an ensemble of these perturbed networks, the authors show that transferability of adversarial examples decreases significantly. However, the authors do not consider adapting their attack to this defense scenario.
Also find this summary at [davidstutz.de]().
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1805-04613#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1805-04613#davidstutzFri, 05 Jul 2019 19:26:46 +00001810.09225journals/corr/abs-1810-092252Cost-Sensitive Robustness against Adversarial ExamplesDavid StutzZhang and Evans propose cost-sensitive certified robustness where different adversarial examples can be weighted based on their actual impact for the application. Specifically, they consider the certified robustness formulation (and the corresponding training scheme) by Wong and Kolter. This formulation is extended by acknowledging that different adversarial examples have different impact for specific applications; this is formalized through a cost matrix which quantifies which source-target la...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1810-09225#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1810-09225#davidstutzFri, 05 Jul 2019 19:23:39 +00001711.11279journals/corr/1711.112792Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)David StutzKim et al. propose Concept Activation Vectors (CAV) that represent the direction of features corresponding to specific human-interpretable concepts. In particular, given a network for a classification task, a concept is defined as a set of images with that concept. A linear classifier is then trained to distinguish images with the concept from random images without the concept based on a chosen feature layer. The normal of the obtained linear classification boundary corresponds to the learned Concep...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1711.11279#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1711.11279#davidstutzFri, 05 Jul 2019 19:16:28 +00001804.08598journals/corr/1804.085982Black-box Adversarial Attacks with Limited Queries and InformationDavid StutzIlyas et al. propose three query-efficient black-box adversarial example attacks using distribution-based gradient estimation. In particular, their simplest attack involves estimating the gradient locally using a search distribution:
$ \nabla_x \mathbb{E}_{\pi(\theta|x)} [F(\theta)] = \mathbb{E}_{\pi(\theta|x)} [F(\theta) \nabla_x \log(\pi(\theta|x))]$
where $F(\cdot)$ is a loss function – e.g., using the cross-entropy loss which is maximized to obtain an adversarial example. The above equa...
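The estimator above admits a compact Monte-Carlo sketch with a Gaussian search distribution and antithetic sampling; only loss *values* are queried, never true gradients. `sigma` and `n_samples` are assumed illustrative values.

```python
import numpy as np

def estimate_gradient(loss, x, sigma=0.1, n_samples=2000, rng=None):
    """Query-based gradient estimate of loss at x via Gaussian samples
    u ~ N(0, I), using the symmetric difference (antithetic) form."""
    rng = np.random.default_rng(0) if rng is None else rng
    grad = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        grad += (loss(x + sigma * u) - loss(x - sigma * u)) / (2 * sigma) * u
    return grad / n_samples

# Sanity check: for loss(x) = <x, w> the true gradient is w.
w = np.array([1.0, -2.0, 3.0])
g = estimate_gradient(lambda z: z @ w, np.zeros(3))
```

In the attack setting, `loss` would be the (maximized) cross-entropy of the target network, evaluated through black-box queries.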
http://www.shortscience.org/paper?bibtexKey=journals/corr/1804.08598#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1804.08598#davidstutzFri, 05 Jul 2019 19:02:12 +00001809.02861journals/corr/1809.028612On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning AttacksDavid StutzDemontis et al. study transferability of adversarial examples and data poisoning attacks in light of the targeted model’s gradients. In particular, they experimentally validate the following hypotheses: First, susceptibility to these attacks depends on the size of the model’s gradients; the higher the gradient, the smaller the perturbation needed to increase the loss. Second, the size of the gradient depends on regularization. And third, the cosine between the target model’s gradients ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1809.02861#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1809.02861#davidstutzFri, 05 Jul 2019 18:58:39 +0000conf/nips/TaoMLZ182Attacks Meet Interpretability: Attribute-steered Detection of Adversarial SamplesDavid StutzTao et al. propose Attacks Meet Interpretability, an adversarial example detection scheme based on the interpretability of individual neurons. In the context of face recognition, in a first step, the authors identify neurons that correspond to specific face attributes. This is achieved by constructing sets of images where only specific attributes change, and then investigating the firing neurons. In a second step, all other neurons, i.e., neurons not corresponding to any meaningful face attribute...
http://www.shortscience.org/paper?bibtexKey=conf/nips/TaoMLZ18#davidstutz
http://www.shortscience.org/paper?bibtexKey=conf/nips/TaoMLZ18#davidstutzWed, 03 Jul 2019 21:04:16 +0000conf/aaai/ParkPSM182Adversarial Dropout for Supervised and Semi-Supervised LearningDavid StutzPark et al. introduce adversarial dropout, a variant of adversarial training based on adversarially computing dropout masks. Specifically, instead of training on adversarial examples, the authors propose an efficient method to compute adversarial dropout masks during training. In experiments, this approach seems to improve generalization performance in semi-supervised settings.
Also find this summary at [davidstutz.de]().
http://www.shortscience.org/paper?bibtexKey=conf/aaai/ParkPSM18#davidstutz
http://www.shortscience.org/paper?bibtexKey=conf/aaai/ParkPSM18#davidstutzWed, 03 Jul 2019 21:01:02 +0000conf/raid/0017DG182Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural NetworksDavid StutzLiu et al. propose fine-pruning, a combination of weight pruning and fine-tuning to defend against backdoor attacks on neural networks. Specifically, they consider a setting where training is outsourced to a machine learning service; the attacker has access to the network and training set, however, any change in network architecture would be easily detected. Thus, the attacker tries to inject backdoors through data poisoning. As a defense against such attacks, the authors propose to identify and p...
http://www.shortscience.org/paper?bibtexKey=conf/raid/0017DG18#davidstutz
http://www.shortscience.org/paper?bibtexKey=conf/raid/0017DG18#davidstutzWed, 03 Jul 2019 20:49:55 +00001811.00525journals/corr/abs-1811-005252On the Geometry of Adversarial ExamplesDavid StutzKhoury and Hadfield-Menell provide two important theoretical insights regarding adversarial robustness: it is impossible to be robust in terms of all norms, and adversarial training is sample inefficient. Specifically, they study robustness in relation to the problem’s codimension, i.e., the difference between the dimensionality of the embedding space (e.g., image space) and the dimensionality of the manifold (where the data is assumed to actually live on). Then, adversarial training is shown ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1811-00525#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1811-00525#davidstutzWed, 03 Jul 2019 20:44:03 +00001812.02606journals/corr/abs-1812-026062The Limitations of Model Uncertainty in Adversarial SettingsDavid StutzGrosse et al. show that Gaussian Processes allow to reject some adversarial examples based on their confidence and uncertainty; however, attacks maximizing confidence and minimizing uncertainty are still successful. While some state-of-the-art adversarial examples seem to result in significantly different confidence and uncertainty estimates compared to benign examples, Gaussian Processes can still be fooled through particularly crafted adversarial examples. To this end, the confidence is explic...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1812-02606#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1812-02606#davidstutzWed, 03 Jul 2019 20:40:19 +00001901.09035journals/corr/abs-1901-090352Towards Interpretable Deep Neural Networks by Leveraging Adversarial ExamplesDavid StutzDong et al. study interpretability in the context of adversarial examples and propose a variant of adversarial training to improve interpretability. First the authors argue that neurons do not preserve their interpretability on adversarial examples; e.g., neurons corresponding to high-level concepts such as “bird” or “dog” do not fire consistently on adversarial examples. This result is also validated experimentally, by considering deep representations at different layers. To improve int...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1901-09035#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1901-09035#davidstutzWed, 03 Jul 2019 20:36:32 +00001802.08232journals/corr/abs-1802-082322The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting SecretsDavid StutzCarlini et al. propose several attacks to extract secrets from trained black-box models. Additionally, they show that state-of-the-art neural networks memorize secrets early during training. Particularly on the Penn treebank, after inserting a secret of a specific format, the authors validate that the secret can be identified based on the model’s output probabilities (i.e., black-box access). Several metrics based on the log-perplexity of the secret show that secrets are memorized early during trai...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1802-08232#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1802-08232#davidstutzWed, 03 Jul 2019 20:31:25 +000010.1145/3134600.31346062Mitigating Evasion Attacks to Deep Neural Networks via Region-based ClassificationDavid StutzCao and Gong introduce region-based classification as a defense against adversarial examples. In particular, given an input (benign test input or adversarial example), the method samples random points in the neighborhood and classifies the test sample according to the majority vote of the obtained labels.
Also find this summary at [davidstutz.de]().
http://www.shortscience.org/paper?bibtexKey=10.1145/3134600.3134606#davidstutz
http://www.shortscience.org/paper?bibtexKey=10.1145/3134600.3134606#davidstutzWed, 03 Jul 2019 20:25:05 +000010.24963/ijcai.2018/5202Curriculum Adversarial TrainingDavid StutzCai et al. propose so-called curriculum adversarial training where adversarial training is applied to increasingly strong attacks. Specifically, considering a gradient-based, iterative attack such as projected gradient descent, a common proxy for the strength of the attack is the number of iterations. To avoid issues with forgetting old adversarial examples and reduced accuracy, the authors propose to apply adversarial training with different numbers of iterations. In each turn (called lesson in...
http://www.shortscience.org/paper?bibtexKey=10.24963/ijcai.2018/520#davidstutz
http://www.shortscience.org/paper?bibtexKey=10.24963/ijcai.2018/520#davidstutzWed, 03 Jul 2019 20:07:38 +0000conf/sp/GehrMDTCV182AI2: Safety and Robustness Certification of Neural Networks with Abstract InterpretationDavid StutzGehr et al. propose a method based on abstract interpretations in order to verify robustness guarantees of neural networks. First of all, I want to note that (in contrast to most work in adversarial robustness) the proposed method is not intended to improve robustness, but to get robustness certificates. Without going into details, abstract interpretations allow to verify conditions (e.g., robustness) of a function (e.g., a neural network) based on abstractions of the input. In particular, by ab...
http://www.shortscience.org/paper?bibtexKey=conf/sp/GehrMDTCV18#davidstutz
http://www.shortscience.org/paper?bibtexKey=conf/sp/GehrMDTCV18#davidstutzWed, 03 Jul 2019 19:55:38 +0000conf/nips/Alvarez-MelisJ182Towards Robust Interpretability with Self-Explaining Neural NetworksDavid StutzAlvarez-Melis and Jaakkola propose three requirements for self-explainable models, explicitness, faithfulness and stability, and construct a self-explainable, generalized linear model optimizing for these properties. In particular, the proposed model has the form
$f(x) = \theta(x)^T h(x)$
where $\theta(x)$ are features (e.g., from a deep network) and $h(x)$ are interpretable features/concepts. In practice, these concepts are learned using an auto-encoder from the raw input while the latent cod...
http://www.shortscience.org/paper?bibtexKey=conf/nips/Alvarez-MelisJ18#davidstutz
http://www.shortscience.org/paper?bibtexKey=conf/nips/Alvarez-MelisJ18#davidstutzWed, 03 Jul 2019 19:45:25 +000010.1145/3196494.31965172Efficient Repair of Polluted Machine Learning Systems via Causal UnlearningDavid StutzCao et al. propose KARMA, a method to defend against data poisoning in an online learning system where training examples are obtained through crowdsourcing. The setting, however, is somewhat constrained and can be described as human-in-the-loop. In particular, there is the system, which is maintained by an administrator, and there are users – among them there might be users with malicious intents, i.e. attackers. KARMA consists of two steps: identifying (possibly polluted) training examples th...
http://www.shortscience.org/paper?bibtexKey=10.1145/3196494.3196517#davidstutz
http://www.shortscience.org/paper?bibtexKey=10.1145/3196494.3196517#davidstutzSun, 30 Jun 2019 19:51:20 +0000conf/sp/HerleyO172SoK: Science, Security and the Elusive Goal of Security as a Scientific PursuitDavid StutzHerley and van Oorschot explore how to make security research more scientific. In particular, they discuss different historic notions of what “scientific” means and relate these insights to current practices in security research. I want to discuss only two points that I found very insightful. First, there seems to be a misalignment between formal methods and empirical methods. While some researchers argue for more mathematically verifiable security methods, others claim that attackers do n...
http://www.shortscience.org/paper?bibtexKey=conf/sp/HerleyO17#davidstutz
http://www.shortscience.org/paper?bibtexKey=conf/sp/HerleyO17#davidstutzSun, 30 Jun 2019 19:45:37 +000010.1145/3243734.32437572Model-Reuse Attacks on Deep Learning SystemsDavid StutzJi et al. propose a model-reuse, or trojaning, attack against neural networks by deliberately manipulating specific weights. In particular, given a specific input, the attacker intends to manipulate the model into mis-classifying this input. This is achieved by first generating semantic neighbors of the input, e.g. through transformations or noise, and then identifying salient features for these inputs. These features are correlated to the classifier’s output, i.e. some of them have positive impa...
http://www.shortscience.org/paper?bibtexKey=10.1145/3243734.3243757#davidstutz
http://www.shortscience.org/paper?bibtexKey=10.1145/3243734.3243757#davidstutzSun, 30 Jun 2019 19:28:29 +00001809.07802journals/corr/1809.078022Playing the Game of Universal Adversarial PerturbationsDavid StutzPérolat et al. propose a game-theoretic variant of adversarial training on universal adversarial perturbations. In particular, in each training iteration, the model is trained for a specific number of iterations on the current training set. Afterwards, a universal perturbation is found (and the corresponding test images) that fools the network. The found adversarial examples are added to the training set. In the next iteration, the network is trained on the new training set which includes adver...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1809.07802#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1809.07802#davidstutzSun, 30 Jun 2019 19:22:41 +000010.1145/2996758.29967712Secure Kernel Machines against Evasion AttacksDavid StutzRussu et al. discuss robustness of linear and non-linear kernel machines through regularization. In particular, they show that linear classifiers can easily be regularized to be robust. In fact, robustness against $L_\infty$-bounded adversarial examples can be achieved through $L_1$ regularization on the weights. More generally, robustness against $L_p$ attacks is countered by $L_q$ regularization of the weights, with $\frac{1}{p} + \frac{1}{q} = 1$. These insights are generalized to the case o...
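The duality behind this claim can be checked numerically for a linear score $\langle w, x\rangle$: the worst-case change under $\|\delta\|_p \leq \epsilon$ equals $\epsilon \|w\|_q$. A minimal sketch for the pair $p = \infty$, $q = 1$ only:

```python
import numpy as np

def worst_case_change(w, eps):
    """Worst-case change of a linear score <w, x> under an L_inf-bounded
    perturbation ||delta||_inf <= eps: eps * ||w||_1, the dual norm
    (1/p + 1/q = 1 with p = inf, q = 1). This is why penalizing the
    L_1 norm of w shrinks the attacker's leverage."""
    return eps * np.sum(np.abs(w))

# The maximizing perturbation is delta = eps * sign(w).
w = np.array([2.0, -1.0, 0.5])
delta = 0.1 * np.sign(w)
```

The achieved change `delta @ w` matches the bound exactly, confirming the dual-norm relation for this case.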
http://www.shortscience.org/paper?bibtexKey=10.1145/2996758.2996771#davidstutz
http://www.shortscience.org/paper?bibtexKey=10.1145/2996758.2996771#davidstutzSun, 30 Jun 2019 19:19:22 +00001606.04671journals/corr/1606.046712Progressive Neural NetworksDavid StutzRusu et al. propose progressive networks, sets of networks allowing transfer learning over multiple tasks without forgetting. The key idea of progressive networks is very simple. Instead of fine-tuning a model (for transfer learning), the pre-trained model is taken and its weights fixed. Another network is then trained from scratch while receiving features from the pre-trained network as additional input.
Specifically, the authors consider a sequence of tasks. For the first task, a deep neural ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1606.04671#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1606.04671#davidstutzSun, 30 Jun 2019 19:16:24 +00001809.02104journals/corr/abs-1809-021042Are adversarial examples inevitable?David StutzShafahi et al. discuss fundamental limits of adversarial robustness, showing that adversarial examples are – to some extent – inevitable. Specifically, for the unit sphere, the unit cube as well as for different attacks (e.g., sparse attacks and dense attacks), the authors show that adversarial examples likely exist. The provided theoretical arguments also provide some insights on which problems are more (or less) robust. For example, more concentrated class distributions seem to be more rob...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1809-02104#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1809-02104#davidstutzSun, 30 Jun 2019 18:42:09 +00001811.11304journals/corr/abs-1811-113042Universal Adversarial TrainingDavid StutzShafahi et al. propose universal adversarial training, meaning training on universal adversarial examples. In contrast to regular adversarial examples, universal ones represent perturbations that cause a network to mis-classify many test images. In contrast to regular adversarial training, where several additional iterations are required on each batch of images, universal adversarial training only needs one additional forward/backward pass on each batch. The obtained perturbations for each batch...
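The single additional forward/backward pass per batch described above can be sketched as one sign-gradient update of the shared perturbation. This is a sketch under stated assumptions: `grad_fn` stands in for the gradient of the batch loss w.r.t. the input, and `alpha`, `eps` are assumed values.

```python
import numpy as np

def update_universal_perturbation(delta, grad_fn, batch, alpha=0.01, eps=0.03):
    """One universal-adversarial-training step: a single sign-gradient
    ascent update of the *shared* perturbation delta on the current
    batch, followed by an L_inf projection onto ||delta||_inf <= eps."""
    g = np.mean([grad_fn(x + delta) for x in batch], axis=0)
    return np.clip(delta + alpha * np.sign(g), -eps, eps)

# With a constant "gradient", delta moves one alpha-step per update.
d = update_universal_perturbation(np.zeros(4), lambda z: np.ones_like(z),
                                  [np.zeros(4)])
```

The model itself would then be trained on the batch perturbed by the same `delta`, alternating the two updates.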
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1811-11304#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1811-11304#davidstutzSun, 30 Jun 2019 18:36:22 +00001703.08245journals/corr/CheneySK173On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight PerturbationsDavid StutzCheney et al. study the robustness of deep neural networks, especially AlexNet, with regard to randomly dropping or perturbing weights. In particular, the authors consider three types of perturbations: synapse knockouts set random weights to zero, node knockouts set all weights corresponding to a set of neurons to zero, and weight perturbations add random Gaussian noise to the weights of a specific layer. These perturbations are studied on AlexNet, considering the top-5 accuracy on ImageNet; per...
http://www.shortscience.org/paper?bibtexKey=journals/corr/CheneySK17#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/CheneySK17#davidstutzSun, 30 Jun 2019 18:02:55 +00001902.03020journals/corr/abs-1902-030202Adversarial Initialization - when your network performs the way I wantDavid StutzGrosse et al. propose an adversarial attack on a deep neural network’s weight initialization in order to damage accuracy or convergence. An attacker with access to the used deep learning library is assumed. The attacker has no knowledge about the training data or the addressed task; however, the attacker has knowledge (through the library’s API) about the network architecture and its initialization. The goal of the attacker is to permute the initialized weights, without being detected, in or...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1902-03020#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1902-03020#davidstutzSun, 30 Jun 2019 17:59:56 +000010.1109/iccad.2017.82037702Fault injection attack on deep neural networkDavid StutzLiu et al. propose slight perturbations of a deep neural network’s weights in order to cause mis-classification on a specific input. Specifically, the authors propose two attacks: the single bias attack, where a single bias value is manipulated in order to cause mis-classification, and the gradient descent attack, where the network’s weights of a particular layer are manipulated through gradient descent to cause mis-classification. In both cases, a specific input example is considered to be ...
http://www.shortscience.org/paper?bibtexKey=10.1109/iccad.2017.8203770#davidstutz
http://www.shortscience.org/paper?bibtexKey=10.1109/iccad.2017.8203770#davidstutzSun, 30 Jun 2019 17:51:15 +00001902.00577journals/corr/1902.005772Robustness of Generalized Learning Vector Quantization Models against Adversarial AttacksDavid StutzSaralajew et al. evaluate learning vector quantization (LVQ) approaches regarding their robustness against adversarial examples. In particular, they consider generalized LVQ where examples are classified based on their distance to the closest prototype of the same class and the closest prototype of another class. The prototypes are learned during training; I refer to the paper for details. Robustness is compared to adversarial training and evaluated against several attacks, including FGSM, DeepF...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1902.00577#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1902.00577#davidstutzSun, 30 Jun 2019 17:29:43 +0000conf/ccs/ZhangGJWSHM182Protecting Intellectual Property of Deep Neural Networks with WatermarkingDavid StutzZhang et al. propose a watermarking approach to protect the intellectual property of deep neural network models. Here, the watermarking concept is generalized from multimedia; specifically, the purpose of a watermark is to uniquely identify a neural network model as the original owner’s property to avoid plagiarism. The problem is illustrated in Figure 1. As watermarks, the authors consider perturbed input images. During training, these perturbations are trained to produce very specific output...
http://www.shortscience.org/paper?bibtexKey=conf/ccs/ZhangGJWSHM18#davidstutz
http://www.shortscience.org/paper?bibtexKey=conf/ccs/ZhangGJWSHM18#davidstutzSun, 30 Jun 2019 17:22:23 +00001804.02485journals/corr/abs-1804-024852Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden RepresentationsDavid StutzLamb et al. introduce fortified networks with denoising auto encoders as hidden layers. These denoising auto encoders are meant to learn the manifold of hidden representations, project adversarial input back to the manifold and improve robustness. The main idea is illustrated in Figure 1. The denoising auto encoders can be added at any layer and are trained jointly with the classification network – either on the original input, or on adversarial examples as done in adversarial training.
Figu...
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1804-02485#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1804-02485#davidstutzSun, 30 Jun 2019 17:13:34 +00001805.09190journals/corr/1805.091902Towards the first adversarially robust neural network model on MNISTDavid StutzSchott et al. propose an analysis-by-synthesis approach for adversarially robust MNIST classification. In particular, as illustrated in Figure 1, class-conditional variational auto-encoders (i.e., one variational auto-encoder per class) are learned. The respective recognition models, i.e., encoders, are discarded. For classification, the optimization problem
$l_y^*(x) = \max_z \log p(x|z) - \text{KL}(\mathcal{N}(z, \sigma I)|\mathcal{N}(0,1))$
is solved for each class $y$. Here, $p(x|z)$ repre...
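The per-class inference step above can be sketched as gradient ascent on the latent code. This is a toy illustration only: `decode` stands in for a trained per-class VAE decoder (an assumption, not the paper's model), and gradients are taken by central finite differences to keep the sketch dependency-free.

```python
import numpy as np

def class_score(x, decode, z_dim, sigma=1.0, steps=200, lr=0.1, seed=0):
    # Gradient ascent on log p(x|z) - KL(N(z, sigma^2 I) || N(0, I)) over z.
    # `decode` is a stand-in for a trained per-class VAE decoder (assumption).
    rng = np.random.default_rng(seed)
    z = rng.normal(size=z_dim)

    def objective(z):
        recon = decode(z)
        log_px_z = -0.5 * np.sum((x - recon) ** 2)  # Gaussian log-likelihood, up to a constant
        kl = 0.5 * np.sum(z ** 2 + sigma ** 2 - 1.0 - 2.0 * np.log(sigma))
        return log_px_z - kl

    for _ in range(steps):
        grad = np.zeros(z_dim)
        for i in range(z_dim):
            e = np.zeros(z_dim)
            e[i] = 1e-4
            grad[i] = (objective(z + e) - objective(z - e)) / 2e-4  # central difference
        z = z + lr * grad
    return objective(z)

# Toy check: with an identity "decoder" and sigma = 1, the optimum is z = x/2.
score = class_score(np.array([2.0, 0.0]), decode=lambda z: z, z_dim=2)
```

The class with the highest optimized score would be predicted; the paper's actual decoders are learned networks, one per class.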
http://www.shortscience.org/paper?bibtexKey=journals/corr/1805.09190#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1805.09190#davidstutzSun, 30 Jun 2019 16:52:40 +00001707.05474journals/corr/ShenJGZ172AE-GAN: adversarial eliminating with GANDavid StutzShen et al. introduce APE-GAN, a generative adversarial network (GAN) trained to remove adversarial noise from adversarial examples. Specifically, as illustrated in Figure 1, a GAN is trained to distinguish clean/real images from adversarial images. The generator is conditioned on the input image and can be seen as an auto-encoder. Then, during testing, the generator is applied to remove the adversarial noise.
Figure 1: The proposed adversarial perturbation eliminating GAN (APE-GAN), ...
http://www.shortscience.org/paper?bibtexKey=journals/corr/ShenJGZ17#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/ShenJGZ17#davidstutzSat, 29 Jun 2019 16:09:15 +0000conf/nips/SongSKE182Constructing Unrestricted Adversarial Examples with Generative ModelsDavid StutzSong et al. propose generative adversarial examples, crafted using a generative adversarial network (GAN) from scratch. In particular a GAN is trained on the original images in order to approximate the generative data distribution. Then, adversarial examples can be found in the learned latent space by finding a latent code that minimizes a loss consisting of fooling the target classifier, not fooling an auxiliary classifier (to not change the actual class) and (optionally) staying close to some ...
http://www.shortscience.org/paper?bibtexKey=conf/nips/SongSKE18#davidstutz
http://www.shortscience.org/paper?bibtexKey=conf/nips/SongSKE18#davidstutzSat, 29 Jun 2019 15:50:05 +000010.1007/978-3-030-01258-8_392Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification ModelsDavid StutzSu et al. present an extensive robustness study of 18 different ImageNet networks. Among these networks, popular architectures such as AlexNet, VGG, Inception or ResNet can be found. Their main result shows a trade-off between robustness and accuracy. A possible explanation is that recent increases in accuracy are only possible when sacrificing network robustness. In particular, as shown in Figure 1, the robustness scales linearly in the logarithm of the classification error (note that Figure 1 show...
http://www.shortscience.org/paper?bibtexKey=10.1007/978-3-030-01258-8_39#davidstutz
http://www.shortscience.org/paper?bibtexKey=10.1007/978-3-030-01258-8_39#davidstutzSat, 29 Jun 2019 15:43:21 +0000conf/iclr/ZhaoDS183Generating Natural Adversarial ExamplesDavid StutzZhao et al. propose a generative adversarial network (GAN) based approach to generate meaningful and natural adversarial examples for images and text. With natural adversarial examples, the authors refer to meaningful changes in the image content instead of adding seemingly random/adversarial noise – as illustrated in Figure 1. These natural adversarial examples can be crafted by first learning a generative model of the data, e.g., using a GAN together with an inverter (similar to an encoder),...
http://www.shortscience.org/paper?bibtexKey=conf/iclr/ZhaoDS18#davidstutz
http://www.shortscience.org/paper?bibtexKey=conf/iclr/ZhaoDS18#davidstutzSat, 29 Jun 2019 15:35:27 +000010.1109/cvpr.2016.5142DisturbLabel: Regularizing CNN on the Loss LayerDavid StutzXie et al. propose to regularize deep neural networks by randomly disturbing (i.e., changing) training labels. In particular, for each training batch, they randomly change the label of each sample with probability $\alpha$ - when changing a label, it’s sampled uniformly from the set of labels. In experiments, the authors show that this sort of loss regularization improves generalization. However, Dropout usually performs better; in their case, only the combination with Dropout leads to noticeable impr...
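The label-disturbing step is simple enough to sketch directly. The function name and signature below are illustrative, not from the paper's code; the key detail is that the replacement label is drawn uniformly from the full label set, so it may coincide with the original label.

```python
import numpy as np

def disturb_labels(labels, num_classes, alpha, seed=None):
    # DisturbLabel-style regularization (sketch): with probability alpha,
    # replace each training label by one drawn uniformly from all classes
    # (possibly the original label). Applied independently per batch.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    mask = rng.random(labels.shape) < alpha
    labels[mask] = rng.integers(0, num_classes, size=int(mask.sum()))
    return labels

# One training batch with a 10% disturbance rate:
batch_labels = disturb_labels([3, 1, 4, 1, 5], num_classes=10, alpha=0.1, seed=0)
```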
http://www.shortscience.org/paper?bibtexKey=10.1109/cvpr.2016.514#davidstutz
http://www.shortscience.org/paper?bibtexKey=10.1109/cvpr.2016.514#davidstutzSat, 29 Jun 2019 15:28:27 +00001607.04311journals/corr/CarliniW162Defensive Distillation is Not Robust to Adversarial ExamplesDavid StutzCarlini and Wagner show that defensive distillation as defense against adversarial examples does not work. Specifically, they show that the attack by Papernot et al [1] can easily be modified to attack distilled networks. Interestingly, the main change is to introduce a temperature in the last softmax layer. This temperature, when chosen high enough, will take care of aligning the gradients from the softmax layer and from the logit layer – otherwise, they will have significantly different magn...
http://www.shortscience.org/paper?bibtexKey=journals/corr/CarliniW16#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/CarliniW16#davidstutzSat, 29 Jun 2019 15:22:14 +00001804.03308journals/corr/1804.033082Adversarial Training Versus Weight DecayDavid StutzGalloway et al. provide a theoretical and experimental discussion of adversarial training and weight decay with respect to robustness as well as generalization. In the following I want to try and highlight the most important findings based on their discussion of linear logistic regression. Considering the softplus loss $\mathcal{L}(z) = \log(1 + e^{-z})$, the learning problem takes the form:
$\min_w \mathbb{E}_{x,y \sim p_{data}} [\mathcal{L}(y(w^Tx + b))]$
where $y \in \{-1,1\}$. This optimiza...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1804.03308#davidstutz
http://www.shortscience.org/paper?bibtexKey=journals/corr/1804.03308#davidstutzSat, 29 Jun 2019 15:14:46 +000010.1109/icip.2016.75330482Adaptive data augmentation for image classificationDavid StutzFawzi et al. propose an adaptive data augmentation scheme based on adversarial transformations similar to adversarial training. In particular, in each training iteration – and for each sample/batch – they compute an adversarial version by finding a transformation that maximizes the training loss. The transformation is usually constrained to a specific class of transformations – on MNIST, for example, they consider affine transformations. Additionally, only small transformations are conside...
http://www.shortscience.org/paper?bibtexKey=10.1109/icip.2016.7533048#davidstutz
http://www.shortscience.org/paper?bibtexKey=10.1109/icip.2016.7533048#davidstutzSat, 29 Jun 2019 15:06:35 +0000conf/icml/WangDWK192SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solverHadrien BertrandThis paper considers "the problem of learning logical structure [...] as expressed by satisfiability problems". This is an attempt at incorporating symbolic AI into neural networks. The key contribution of the paper is the introduction of "a differentiable smoothed MAXSAT solver", that is able to learn logical relationships from examples.
The example given in the paper is Sudoku. The proposed model is able to learn jointly the rules of the game and how to solve the puzzles, **without prior on ...
http://www.shortscience.org/paper?bibtexKey=conf/icml/WangDWK19#hbertrand
http://www.shortscience.org/paper?bibtexKey=conf/icml/WangDWK19#hbertrandThu, 20 Jun 2019 19:55:02 +00001803.08494journals/corr/1803.084942Group NormalizationHadrien BertrandBatch Normalization doesn't work well when using small batch sizes, which is often required for memory intensive tasks such as detection or segmentation, or memory intensive data such as 3D images, videos or high-res images.
Group Normalization is a simple alternative that is independent of the batch size:
*(figure not included in feed)*
It works like BN, except it computes the mean and std over a different set of features:
*(figure not included in feed)*
The $\gamma$ and $\beta$ are learned per channel and applied as usual:
*(figure not included in feed)*
...
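The normalization step can be sketched in a few lines of NumPy. This is an illustrative implementation, not the paper's code: per sample, statistics are computed over each group of channels and all spatial positions, so nothing depends on the batch axis.

```python
import numpy as np

def group_norm(x, num_groups, gamma, beta, eps=1e-5):
    # Group Normalization sketch: per sample, normalize over the channels of
    # each group and all spatial positions -- independent of batch size.
    # x: (N, C, H, W); gamma, beta: (C,) per-channel affine parameters.
    N, C, H, W = x.shape
    g = x.reshape(N, num_groups, C // num_groups, H, W)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    out = ((g - mean) / np.sqrt(var + eps)).reshape(N, C, H, W)
    return gamma.reshape(1, C, 1, 1) * out + beta.reshape(1, C, 1, 1)

x = np.random.default_rng(0).normal(size=(2, 8, 4, 4))
y = group_norm(x, num_groups=4, gamma=np.ones(8), beta=np.zeros(8))
```

With `num_groups = C` this reduces to Instance Norm, and with `num_groups = 1` to Layer Norm (over spatial and channel dimensions).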
http://www.shortscience.org/paper?bibtexKey=journals/corr/1803.08494#hbertrand
http://www.shortscience.org/paper?bibtexKey=journals/corr/1803.08494#hbertrandWed, 19 Jun 2019 20:24:52 +00001904.05049journals/corr/1904.050492Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave ConvolutionHadrien BertrandNatural images can be decomposed in frequencies, higher frequencies contain small changes and details, while lower frequencies contain the global structure. We can see an example in this image:
*(figure not included in feed)*
Each filter of a convolutional layer focuses on different frequencies of the image. This paper proposes a way to group them explicitly into high and low frequency filters.
To do that, the low frequency group is reduced spatially by 2 in all dimensions (which they define as an octave), before...
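The channel split described above can be sketched as follows. This is an illustration of the idea, not the paper's implementation: a fraction `alpha` of the channels is routed to the low-frequency map and downsampled by 2 in each spatial dimension with average pooling, while the rest stay at full resolution (all names are mine).

```python
import numpy as np

def octave_split(x, alpha=0.5):
    # Octave-style channel split (sketch): the first alpha * C channels become
    # the low-frequency map, average-pooled by 2 in each spatial dimension;
    # the remaining channels keep full resolution. x: (C, H, W).
    C, H, W = x.shape
    c_low = int(alpha * C)
    high = x[c_low:]
    low = x[:c_low].reshape(c_low, H // 2, 2, W // 2, 2).mean(axis=(2, 4))
    return high, low

high, low = octave_split(np.arange(8 * 4 * 4, dtype=float).reshape(8, 4, 4))
# high keeps full resolution; low is half-resolution in both spatial dims
```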
http://www.shortscience.org/paper?bibtexKey=journals/corr/1904.05049#hbertrand
http://www.shortscience.org/paper?bibtexKey=journals/corr/1904.05049#hbertrandWed, 19 Jun 2019 20:24:02 +0000conf/icml/NoklandE192Training Neural Networks with Local Error SignalsHadrien BertrandThis paper was presented at ICML 2019.
Do you remember greedy layer-wise training? Are you curious what a modern take on the idea can achieve? This is the paper for you then. And it has its own very good summary:
> We use standard convolutional and fully connected network architectures, but instead of globally back-propagating errors, each weight layer is trained by a local learning signal, that is not back-propagated down the network. The learning signal is provided by two separate single-laye...
http://www.shortscience.org/paper?bibtexKey=conf/icml/NoklandE19#hbertrand
http://www.shortscience.org/paper?bibtexKey=conf/icml/NoklandE19#hbertrandWed, 19 Jun 2019 20:10:34 +00001901.08256journals/corr/abs-1901-082562Large-Batch Training for LSTM and BeyondsudharsansaiOften the best learning rate for a DNN is sensitive to batch size and hence needs significant tuning when scaling batch sizes for large-scale training. Theory suggests that when you scale the batch size by a factor of $k$ (in the case of multi-GPU training), the learning rate should be scaled by $\sqrt{k}$ to keep the variance of the gradient estimator constant (remember the variance of an estimator is inversely proportional to the sample size?). But in practice, often linear learning rate scalin...
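The two scaling rules contrasted above fit in a few lines. A sketch (function and parameter names are mine, not from the paper): `sqrt` keeps the gradient-estimator variance constant as theory suggests, while `linear` is the common practical heuristic.

```python
import math

def scaled_lr(base_lr, base_batch_size, batch_size, rule="sqrt"):
    # Scale the learning rate when growing the batch by k = batch / base batch:
    # "sqrt" keeps gradient-estimator variance constant (theory);
    # "linear" is the widely used practical heuristic. Illustrative names.
    k = batch_size / base_batch_size
    return base_lr * (math.sqrt(k) if rule == "sqrt" else k)

lr_sqrt = scaled_lr(0.1, 256, 1024, rule="sqrt")      # 0.1 * sqrt(4) = 0.2
lr_linear = scaled_lr(0.1, 256, 1024, rule="linear")  # 0.1 * 4 = 0.4
```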
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1901-08256#sudharsansai
http://www.shortscience.org/paper?bibtexKey=journals/corr/abs-1901-08256#sudharsansaiSun, 16 Jun 2019 17:59:59 +000010.2478/pralin-2018-00022Training Tips for the Transformer Modelsudharsansai**TL;DR:** This paper summarizes some of the practical tips for training a transformer model for MT task, though I believe some of the tips are task-agnostic. The parameters considered include number of GPUs, batch size, learning rate schedule, warmup steps, checkpoint averaging and maximum sequence lengths.
**Framework used for the experiments:** [Tensor2Tensor]()
The effect of varying the most important hyper-parameters on the performances are as follows:
**Early Stopping:** Usually pape...
http://www.shortscience.org/paper?bibtexKey=10.2478/pralin-2018-0002#sudharsansai
http://www.shortscience.org/paper?bibtexKey=10.2478/pralin-2018-0002#sudharsansaiSun, 16 Jun 2019 03:15:27 +00001810.02334journals/corr/1810.023342Unsupervised Learning via Meta-LearningJoseph Paul CohenWhat is stopping us from applying meta-learning to new tasks? Where do the tasks come from? Designing task distributions is laborious. We should automatically learn tasks!
Unsupervised Learning via Meta-Learning: The idea is to use a distance metric in an out-of-the-box unsupervised embedding space created by BiGAN/ALI or DeepCluster to construct tasks in an unsupervised way. If you cluster points to randomly define classes (e.g. random k-means) you can then sample tasks of 2 or 3 classes and us...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1810.02334#joecohen
http://www.shortscience.org/paper?bibtexKey=journals/corr/1810.02334#joecohenThu, 13 Jun 2019 04:50:47 +0000conf/corl/ClaveraRS0AA182Model-Based Reinforcement Learning via Meta-Policy OptimizationJoseph Paul CohenIn terms of model based RL, learning dynamics models is imperfect, which often leads to the learned policy overfitting to the learned dynamics model, doing well in the learned simulator but not in the real world.
Key solution idea: No need to try to learn one accurate simulator. We can learn an ensemble of models that together will sufficiently represent the space. If we learn an ensemble of models (to be used as many learned simulators) we can denoise estimates of performance. In a meta-learni...
http://www.shortscience.org/paper?bibtexKey=conf/corl/ClaveraRS0AA18#joecohen
http://www.shortscience.org/paper?bibtexKey=conf/corl/ClaveraRS0AA18#joecohenWed, 12 Jun 2019 15:19:31 +00001901.10912journals/corr/1901.1091215A Meta-Transfer Objective for Learning to Disentangle Causal MechanismsJoseph Paul CohenHow can we learn causal relationships that explain data? We can learn from non-stationary distributions. If we experiment with different factorizations of relationships between variables we can observe which ones provide better sample complexity when adapting to distributional shift and therefore are likely to be causal.
If we consider the variables A and B we can factor them in two ways:
$P(A,B) = P(A)P(B|A)$ representing a causal graph like $A\rightarrow B$
$P(A,B) = P(A|B)P(B)$ representin...
http://www.shortscience.org/paper?bibtexKey=journals/corr/1901.10912#joecohen
http://www.shortscience.org/paper?bibtexKey=journals/corr/1901.10912#joecohenMon, 10 Jun 2019 21:33:13 +0000