Explaining and Interpreting LSTMs
Leila Arras, Jose A. Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller, Sepp Hochreiter, Wojciech Samek
arXiv e-Print archive, 2019
Keywords: cs.LG, cs.NE, stat.ML
First published: 2019/09/25

Abstract: While neural networks have acted as a strong unifying force in the design of
modern AI systems, the neural network architectures themselves remain highly
heterogeneous due to the variety of tasks to be solved. In this chapter, we
explore how to adapt the Layer-wise Relevance Propagation (LRP) technique used
for explaining the predictions of feed-forward networks to the LSTM
architecture used for sequential data modeling and forecasting. The special
accumulators and gated interactions present in the LSTM require both a new
propagation scheme and an extension of the underlying theoretical framework to
deliver faithful explanations.
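The abstract's core idea, redistributing a prediction score backwards through the network while handling the LSTM's multiplicative gates specially, can be sketched with two toy rules: an epsilon-stabilized LRP rule for linear layers, and a "signal-take-all" rule for gated products in which the gate receives no relevance. This is a minimal, hedged illustration, not the paper's full propagation scheme; function names and shapes are assumptions.

```python
import numpy as np

def lrp_linear(x, w, z, r_out, eps=1e-6):
    """Epsilon-LRP for a linear map z = w @ x.

    Redistributes the output relevance r_out onto the inputs x in
    proportion to each input's contribution w[j, i] * x[i] to z[j].
    """
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer avoids division by ~0
    return x * (w.T @ (r_out / denom))

def lrp_gated_product(r_product):
    """Toy 'signal-take-all' rule for a gated product c = gate * signal:
    the signal inherits all relevance, the (sigmoid) gate gets none."""
    r_signal = r_product
    r_gate = np.zeros_like(r_product)
    return r_signal, r_gate

# Usage: relevance is (approximately) conserved by the linear rule.
x = np.array([1.0, 2.0])
w = np.array([[0.5, -0.3],
              [0.2,  0.1]])
z = w @ x
r_out = np.array([1.0, 0.5])
r_in = lrp_linear(x, w, z, r_out)
```

With a small stabilizer, the relevance assigned to the inputs sums (approximately) to the relevance that entered the layer, which is the conservation property LRP aims for; the gate rule is one of the design choices the chapter motivates for the LSTM's gated interactions.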