The contribution of this paper is three-fold:
1. We present a method to use *process models* as interpretable sequence models, offering a stronger notion of interpretability than is generally used in the machine learning field (see the *Process Models* section below),
2. We show that this approach enables the comparison of traditional sequence models (RNNs, LSTMs, Markov Models) with techniques from the research field of *automated process discovery*,
3. We show on a collection of three real-life datasets that a better fit to sequence data can be obtained with LSTMs than with techniques from the *automated process discovery* field.
# Process Models
Process models are visually interpretable models of sequence data whose notation has *formal semantics*: it is well-defined which sequences are and which are not allowed by the model. Below you see an example of a Petri net (a type of model with formal semantics) that allows exactly the sequences <A,B,C>, <A,C,B>, <D,B,C>, and <D,C,B>.
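To make the formal semantics concrete, here is a minimal sketch (not the paper's implementation) of how one could replay a sequence on such a Petri net to decide whether it is allowed. The place names (`p0`–`p4`) and the net encoding are hypothetical; the net is chosen so that a choice between A and D is followed by B and C in parallel, matching the four allowed sequences above.

```python
from collections import Counter

# Hypothetical encoding of the example Petri net: each transition maps
# to (places consumed, places produced). A or D fires first, putting
# tokens in p1 and p2 so that B and C can fire in either order.
TRANSITIONS = {
    "A": ({"p0": 1}, {"p1": 1, "p2": 1}),
    "D": ({"p0": 1}, {"p1": 1, "p2": 1}),
    "B": ({"p1": 1}, {"p3": 1}),
    "C": ({"p2": 1}, {"p4": 1}),
}
INITIAL = Counter({"p0": 1})          # initial marking: one token in p0
FINAL = Counter({"p3": 1, "p4": 1})   # accepting marking

def allows(sequence):
    """Replay a sequence of labels; True iff every firing is enabled
    and the run ends in the final marking."""
    marking = Counter(INITIAL)
    for label in sequence:
        consume, produce = TRANSITIONS[label]
        if any(marking[p] < n for p, n in consume.items()):
            return False  # transition not enabled in current marking
        for p, n in consume.items():
            marking[p] -= n
        for p, n in produce.items():
            marking[p] += n
    return +marking == FINAL  # +marking drops zero counts before comparing

print(allows(["A", "B", "C"]))  # True: one of the four allowed sequences
print(allows(["A", "C", "B"]))  # True: B and C may occur in either order
print(allows(["A", "B", "B"]))  # False: B cannot fire twice
```

This replay semantics is what makes the model's language well-defined: a sequence is allowed if and only if it can be fired step by step from the initial to the final marking.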
For an overview of automated process discovery algorithms that mine a process model from sequence data, we refer to [this recent survey and benchmark paper](https://ieeexplore.ieee.org/abstract/document/8368306/).