Efficient Repair of Polluted Machine Learning Systems via Causal Unlearning. Yinzhi Cao, Alexander Fangxiao Yu, Andrew Aday, Eric Stahl, Jon Merwine, Junfeng Yang. 2018.
Cao et al. propose KARMA, a method to defend against data poisoning in online learning systems whose training examples are obtained through crowdsourcing. The setting is somewhat constrained and can be described as human-in-the-loop: there is the system, maintained by an administrator, and there are users, among whom there may be users with malicious intent, i.e. attackers. KARMA consists of two steps: identifying (possibly polluted) training examples that cause misclassification of samples within a small trusted oracle set, and then repairing the system by removing clusters of polluted samples.
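The two-step search-and-remove idea can be sketched as a greedy loop: repeatedly find the candidate cluster of training samples whose removal most reduces misclassifications on the trusted oracle set, and delete it. This is only a toy illustration under my own assumptions (a 1-nearest-neighbour stand-in for the online model, precomputed candidate clusters), not the paper's actual implementation:

```python
# Hypothetical sketch of KARMA-style causal unlearning (all names are my own,
# not from the paper): greedily remove clusters of suspect training samples
# whose removal reduces errors on a small trusted oracle set.
import numpy as np

def predict_1nn(train_x, train_y, x):
    """Toy 1-nearest-neighbour classifier standing in for the online model."""
    return train_y[np.argmin(np.linalg.norm(train_x - x, axis=1))]

def oracle_errors(train_x, train_y, oracle_x, oracle_y):
    """Count oracle samples the current training set misclassifies."""
    return sum(predict_1nn(train_x, train_y, x) != y
               for x, y in zip(oracle_x, oracle_y))

def causal_unlearn(train_x, train_y, clusters, oracle_x, oracle_y):
    """Greedy search: drop the cluster whose deletion most reduces oracle
    errors; repeat until no cluster helps. Returns a keep-mask."""
    keep = np.ones(len(train_x), dtype=bool)
    while True:
        base = oracle_errors(train_x[keep], train_y[keep], oracle_x, oracle_y)
        best_gain, best_cluster = 0, None
        for c in clusters:
            trial = keep.copy()
            trial[c] = False
            if not trial.any():
                continue  # never delete the entire training set
            errs = oracle_errors(train_x[trial], train_y[trial],
                                 oracle_x, oracle_y)
            if base - errs > best_gain:
                best_gain, best_cluster = base - errs, c
        if best_cluster is None:
            return keep
        keep[best_cluster] = False

# Two clean clusters plus one polluted cluster mislabeled near class 0.
train_x = np.array([[0.0], [0.1], [1.0], [1.1], [0.05], [0.15]])
train_y = np.array([0, 0, 1, 1, 1, 1])   # last two samples are polluted
clusters = [[0, 1], [2, 3], [4, 5]]      # candidate removal units
oracle_x = np.array([[0.07], [1.0]])
oracle_y = np.array([0, 1])

keep = causal_unlearn(train_x, train_y, clusters, oracle_x, oracle_y)
print(keep.tolist())  # → [True, True, True, True, False, False]
```

Removing whole clusters rather than individual samples is what keeps the search tractable: a single oracle misclassification can implicate many correlated polluted samples at once.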
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).