{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Handling Event Data\n",
"*by: Sebastiaan J. van Zelst*"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"Process mining exploits Event Logs to generate knowledge of a process.\n",
"A wide variety of information systems, e.g., SAP, Oracle, Salesforce, etc., allow us to extract, in one way or another,\n",
"event logs similar to the example event logs.\n",
"All the examples we show in this notebook and all algorithms implemented in pm4py assume that we have already extracted\n",
"the event data into an appropriate event log format.\n",
"Hence, the core of pm4py does not support any data extraction features.\n",
"\n",
"In order to support interoperability between different process mining tools and libraries, two standard data formats are\n",
"used to capture event logs, i.e., Comma Separated Value (CSV) files and eXtensible Event Stream (XES) files.\n",
"CSV files resemble the example tables shown in the previous section, i.e., Table 1 and Table 2. Each line in such a file\n",
"describes an event that occurred. The columns represent the same type of data, as shown in the examples, e.g., the case\n",
"for which the event occurred, the activity, the timestamp, the resource executing the activity, etc.\n",
"The XES file format is an XML-based format that allows us to describe process behavior.\n",
"We will not go into specific details w.r.t. the format of XES files, i.e., we refer to http://xes-standard.org/ for an\n",
"overview.\n",
"\n",
"In this tutorial, we will use a frequently used, illustrative example event log to explain the basic process mining operations.\n",
"The process that we are considering is a simplified customer complaint handling process, taken from the\n",
"book of van der Aalst (https://www.springer.com/de/book/9783662498507). The process, and the event data we are going to\n",
"use, look as follows."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"*Figure 3: the process model of the running example (image not included).*"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Importing CSV Files"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"Let’s get started!\n",
"We have prepared a small sample event log, containing behavior similar to the process model in Figure 3.\n",
"You can find the sample event log [here](data/running_example.csv).\n",
"\n",
"We are going to load the event data and count how many cases, as well as how many events, are present in the event log.\n",
"Note that, for all this, we are effectively using a third-party library called pandas.\n",
"We do so because pandas is the de-facto standard library for loading and manipulating csv-based data.\n",
"Hence, any process mining algorithm implemented in pm4py that uses an event log as an input can work directly with a\n",
"pandas DataFrame!\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/plain": " case_id activity timestamp costs org:resource\n0 3 register request 2010-12-30 14:32:00+01:00 50 Pete\n1 3 examine casually 2010-12-30 15:06:00+01:00 400 Mike\n2 3 check ticket 2010-12-30 16:34:00+01:00 100 Ellen\n3 3 decide 2011-01-06 09:18:00+01:00 200 Sara\n4 3 reinitiate request 2011-01-06 12:18:00+01:00 200 Sara\n5 3 examine thoroughly 2011-01-06 13:06:00+01:00 400 Sean\n6 3 check ticket 2011-01-08 11:43:00+01:00 100 Pete\n7 3 decide 2011-01-09 09:55:00+01:00 200 Sara\n8 3 pay compensation 2011-01-15 10:45:00+01:00 200 Ellen\n9 2 register request 2010-12-30 11:32:00+01:00 50 Mike\n10 2 check ticket 2010-12-30 12:12:00+01:00 100 Mike\n11 2 examine casually 2010-12-30 14:16:00+01:00 400 Sean\n12 2 decide 2011-01-05 11:22:00+01:00 200 Sara\n13 2 pay compensation 2011-01-08 12:05:00+01:00 200 Ellen\n14 1 register request 2010-12-30 11:02:00+01:00 50 Pete\n15 1 examine thoroughly 2010-12-31 10:06:00+01:00 400 Sue\n16 1 check ticket 2011-01-05 15:12:00+01:00 100 Mike\n17 1 decide 2011-01-06 11:18:00+01:00 200 Sara\n18 1 reject request 2011-01-07 14:24:00+01:00 200 Pete\n19 6 register request 2011-01-06 15:02:00+01:00 50 Mike\n20 6 examine casually 2011-01-06 16:06:00+01:00 400 Ellen\n21 6 check ticket 2011-01-07 16:22:00+01:00 100 Mike\n22 6 decide 2011-01-07 16:52:00+01:00 200 Sara\n23 6 pay compensation 2011-01-16 11:47:00+01:00 200 Mike\n24 5 register request 2011-01-06 09:02:00+01:00 50 Ellen\n25 5 examine casually 2011-01-07 10:16:00+01:00 400 Mike\n26 5 check ticket 2011-01-08 11:22:00+01:00 100 Pete\n27 5 decide 2011-01-10 13:28:00+01:00 200 Sara\n28 5 reinitiate request 2011-01-11 16:18:00+01:00 200 Sara\n29 5 check ticket 2011-01-14 14:33:00+01:00 100 Ellen\n30 5 examine casually 2011-01-16 15:50:00+01:00 400 Mike\n31 5 decide 2011-01-19 11:18:00+01:00 200 Sara\n32 5 reinitiate request 2011-01-20 12:48:00+01:00 200 Sara\n33 5 examine casually 2011-01-21 09:06:00+01:00 400 Sue\n34 5 check ticket 2011-01-21 11:34:00+01:00 100 Pete\n35 5 decide 2011-01-23 13:12:00+01:00 200 Sara\n36 5 reject request 2011-01-24 14:56:00+01:00 200 Mike\n37 4 register request 2011-01-06 15:02:00+01:00 50 Pete\n38 4 check ticket 2011-01-07 12:06:00+01:00 100 Mike\n39 4 examine thoroughly 2011-01-08 14:43:00+01:00 400 Sean\n40 4 decide 2011-01-09 12:02:00+01:00 200 Sara\n41 4 reject request 2011-01-12 15:44:00+01:00 200 Ellen"
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"\n",
"df = pd.read_csv('data/running_example.csv', sep=';')\n",
"df"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"Let's inspect the small event log.\n",
"The first line (i.e., row) specifies the name of each column (i.e., event attribute).\n",
"Observe that, in the data table described by the file, we have five columns: *case_id*, *activity*,\n",
"*timestamp*, *costs* and *org:resource*.\n",
"The first column represents the *case identifier*, allowing us to identify in the context of what instance of the\n",
"process an activity has been logged.\n",
"The second column (*activity*) records the activity that has been performed.\n",
"The third column shows at what point in time the activity was recorded (*timestamp*).\n",
"In this example data, additional information is present as well.\n",
"In this case, the fourth column tracks the costs of the activity (*costs* attribute), whereas the fifth column tracks what\n",
"resource has performed the activity (*org:resource*).\n",
"\n",
"Observe that rows 2-10 show the events that have been recorded for the process instance identified by *case identifier* 3.\n",
"We observe that first a register request activity was performed, followed by the examine casually, check ticket, decide,\n",
"reinitiate request, examine thoroughly, check ticket, decide, and finally, pay compensation activities.\n",
"Note that, in this case, the recorded process instance behaves as described by the model depicted in Figure 3.\n",
"\n",
"Let's investigate some basic statistics of our log, e.g., the total number of cases described and the total number of events."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/plain": [
"6"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# number of cases\n",
"len(df['case_id'].unique())"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/plain": [
"42"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# number of events\n",
"len(df)\n"
]
},
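{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"Beyond these two counts, pandas makes it easy to compute per-case statistics. As a minimal, self-contained sketch, the cell below builds a hypothetical mini-log with the same columns (so it runs independently of the file loaded above) and counts the number of events per case with ```groupby```:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"import pandas as pd\n",
"\n",
"# hypothetical mini-log with the same columns as running_example.csv\n",
"mini = pd.DataFrame({\n",
"    'case_id': [1, 1, 2],\n",
"    'activity': ['register request', 'decide', 'register request'],\n",
"    'timestamp': ['2010-12-30 11:02:00+01:00', '2011-01-06 11:18:00+01:00', '2010-12-30 11:32:00+01:00']\n",
"})\n",
"# number of events recorded per case\n",
"mini.groupby('case_id').size()"
]
},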
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Formatting Data Frames"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"Now that we have loaded our first event log, it is time to put some pm4py into the mix.\n",
"pm4py uses standardized column names to represent the *case identifier*, the *activity name* and the *timestamp*.\n",
"These are, respectively, ```case:concept:name```, ```concept:name``` and ```time:timestamp```.\n",
"Hence, to make pm4py work with the provided csv file, we need to rename the ```case_id```, ```activity``` and ```timestamp``` columns.\n",
"pm4py provides a dedicated utility function for this:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/plain": [
" case:concept:name concept:name time:timestamp costs \\\n",
"14 1 register request 2010-12-30 10:02:00+00:00 50 \n",
"15 1 examine thoroughly 2010-12-31 09:06:00+00:00 400 \n",
"16 1 check ticket 2011-01-05 14:12:00+00:00 100 \n",
"17 1 decide 2011-01-06 10:18:00+00:00 200 \n",
"18 1 reject request 2011-01-07 13:24:00+00:00 200 \n",
"9 2 register request 2010-12-30 10:32:00+00:00 50 \n",
"10 2 check ticket 2010-12-30 11:12:00+00:00 100 \n",
"11 2 examine casually 2010-12-30 13:16:00+00:00 400 \n",
"12 2 decide 2011-01-05 10:22:00+00:00 200 \n",
"13 2 pay compensation 2011-01-08 11:05:00+00:00 200 \n",
"0 3 register request 2010-12-30 13:32:00+00:00 50 \n",
"1 3 examine casually 2010-12-30 14:06:00+00:00 400 \n",
"2 3 check ticket 2010-12-30 15:34:00+00:00 100 \n",
"3 3 decide 2011-01-06 08:18:00+00:00 200 \n",
"4 3 reinitiate request 2011-01-06 11:18:00+00:00 200 \n",
"5 3 examine thoroughly 2011-01-06 12:06:00+00:00 400 \n",
"6 3 check ticket 2011-01-08 10:43:00+00:00 100 \n",
"7 3 decide 2011-01-09 08:55:00+00:00 200 \n",
"8 3 pay compensation 2011-01-15 09:45:00+00:00 200 \n",
"37 4 register request 2011-01-06 14:02:00+00:00 50 \n",
"38 4 check ticket 2011-01-07 11:06:00+00:00 100 \n",
"39 4 examine thoroughly 2011-01-08 13:43:00+00:00 400 \n",
"40 4 decide 2011-01-09 11:02:00+00:00 200 \n",
"41 4 reject request 2011-01-12 14:44:00+00:00 200 \n",
"24 5 register request 2011-01-06 08:02:00+00:00 50 \n",
"25 5 examine casually 2011-01-07 09:16:00+00:00 400 \n",
"26 5 check ticket 2011-01-08 10:22:00+00:00 100 \n",
"27 5 decide 2011-01-10 12:28:00+00:00 200 \n",
"28 5 reinitiate request 2011-01-11 15:18:00+00:00 200 \n",
"29 5 check ticket 2011-01-14 13:33:00+00:00 100 \n",
"30 5 examine casually 2011-01-16 14:50:00+00:00 400 \n",
"31 5 decide 2011-01-19 10:18:00+00:00 200 \n",
"32 5 reinitiate request 2011-01-20 11:48:00+00:00 200 \n",
"33 5 examine casually 2011-01-21 08:06:00+00:00 400 \n",
"34 5 check ticket 2011-01-21 10:34:00+00:00 100 \n",
"35 5 decide 2011-01-23 12:12:00+00:00 200 \n",
"36 5 reject request 2011-01-24 13:56:00+00:00 200 \n",
"19 6 register request 2011-01-06 14:02:00+00:00 50 \n",
"20 6 examine casually 2011-01-06 15:06:00+00:00 400 \n",
"21 6 check ticket 2011-01-07 15:22:00+00:00 100 \n",
"22 6 decide 2011-01-07 15:52:00+00:00 200 \n",
"23 6 pay compensation 2011-01-16 10:47:00+00:00 200 \n",
"\n",
" org:resource @@index \n",
"14 Pete 14 \n",
"15 Sue 15 \n",
"16 Mike 16 \n",
"17 Sara 17 \n",
"18 Pete 18 \n",
"9 Mike 9 \n",
"10 Mike 10 \n",
"11 Sean 11 \n",
"12 Sara 12 \n",
"13 Ellen 13 \n",
"0 Pete 0 \n",
"1 Mike 1 \n",
"2 Ellen 2 \n",
"3 Sara 3 \n",
"4 Sara 4 \n",
"5 Sean 5 \n",
"6 Pete 6 \n",
"7 Sara 7 \n",
"8 Ellen 8 \n",
"37 Pete 37 \n",
"38 Mike 38 \n",
"39 Sean 39 \n",
"40 Sara 40 \n",
"41 Ellen 41 \n",
"24 Ellen 24 \n",
"25 Mike 25 \n",
"26 Pete 26 \n",
"27 Sara 27 \n",
"28 Sara 28 \n",
"29 Ellen 29 \n",
"30 Mike 30 \n",
"31 Sara 31 \n",
"32 Sara 32 \n",
"33 Sue 33 \n",
"34 Pete 34 \n",
"35 Sara 35 \n",
"36 Mike 36 \n",
"19 Mike 19 \n",
"20 Ellen 20 \n",
"21 Mike 21 \n",
"22 Sara 22 \n",
"23 Mike 23 "
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pm4py\n",
"log = pm4py.format_dataframe(df, case_id='case_id', activity_key='activity',\n",
" timestamp_key='timestamp')\n",
"log\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
},
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"Observe that the column names are updated as expected.\n",
"\n",
"Let us assume that we are not only interested in the number of events and cases, but that we also want to figure out what\n",
"activities occur first, and what activities occur last in the traces described by the event log.\n",
"pm4py has specific built-in functions for this, i.e., ```pm4py.get_start_activities()``` and ```pm4py.get_end_activities()```, respectively."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/plain": [
"{'register request': 6}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pm4py.get_start_activities(log)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/plain": [
"{'pay compensation': 3, 'reject request': 3}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pm4py.get_end_activities(log)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"The ```pm4py.get_start_activities()``` and ```pm4py.get_end_activities()``` functions both return a dictionary with the activities\n",
"as keys and the number of observations (i.e., the number of traces in which they occur first, respectively last) as\n",
"values.\n",
"\n",
"pm4py exploits a built-in pandas function to detect the format of the timestamps in the input data automatically.\n",
"However, pandas looks at the timestamp values in each row in isolation.\n",
"In some cases, this can lead to problems.\n",
"For example, if the dates are written with the year first, then the month, and then the day, a value such as\n",
"2020-02-01 may wrongly be interpreted as January 2nd rather than February 1st.\n",
"To alleviate this problem, an additional parameter can be provided to the ```format_dataframe()``` method, i.e.,\n",
"the ```timest_format``` parameter. The standard Python timestamp format codes can be used to specify the timestamp format.\n",
"In this example, the timestamp format is ```%Y-%m-%d %H:%M:%S%z```.\n",
"In general, we advise always specifying the timestamp format."
]
},
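{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"As a minimal sketch of this advice: the ```pm4py.format_dataframe()``` call below is commented out, since it assumes the ```df``` loaded earlier in this notebook; the format string itself can be sanity-checked with the standard library."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"from datetime import datetime\n",
"\n",
"# sketch (assumes the df and column names used earlier in this notebook):\n",
"# log = pm4py.format_dataframe(df, case_id='case_id', activity_key='activity',\n",
"#                              timestamp_key='timestamp', timest_format='%Y-%m-%d %H:%M:%S%z')\n",
"\n",
"# sanity-check the format string against a timestamp from the example log\n",
"datetime.strptime('2010-12-30 14:32:00+01:00', '%Y-%m-%d %H:%M:%S%z')"
]
},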
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
},
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Importing XES Files"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"Besides CSV files, event data can also be stored in an XML-based format, i.e., in XES files.\n",
"An XES file describes a containment relation: a log contains a number of traces, which in turn contain several events.\n",
"Furthermore, each object, i.e., a log, trace, or event, is allowed to have attributes.\n",
"The advantage is that data attributes that are constant for a log or a trace can be stored at that level.\n",
"For example, assume that we only know the total costs of a case, rather than the costs of the individual events.\n",
"If we want to store this information in a CSV file, we either need to replicate it (i.e., we can only\n",
"store data in rows, which directly refer to events), or we need to explicitly define that certain columns only get a\n",
"value once, i.e., referring to case-level attributes.\n",
"The XES standard supports the storage of this type of information more naturally.\n",
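"\n",
"To give an impression of this containment relation, a heavily abbreviated, illustrative sketch of an XES file (not the literal contents of the running example file) could look as follows:\n",
"\n",
"```xml\n",
"<log>\n",
"  <trace>\n",
"    <string key=\"concept:name\" value=\"3\"/>\n",
"    <event>\n",
"      <string key=\"concept:name\" value=\"register request\"/>\n",
"      <date key=\"time:timestamp\" value=\"2010-12-30T14:32:00+01:00\"/>\n",
"    </event>\n",
"  </trace>\n",
"</log>\n",
"```\n",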
"Click [here](data/running_example.xes) to obtain the .xes file of the running_example.\n",
"\n",
"Importing an XES file is fairly straightforward.\n",
"pm4py has a special ```read_xes()``` function that parses a given xes file and loads it into pm4py, i.e., as an Event Log object.\n",
"Consider the following code snippet, in which we show how to import an XES event log.\n",
"Like the previous example, the script outputs the activities that can start and end a trace."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "a356969d9a9b4ffa928c5670f630d3fc",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"parsing log, completed traces :: 0%| | 0/6 [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"{'register request': 6}"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"log_xes = pm4py.read_xes('data/running_example.xes', return_legacy_log_object=True)\n",
"pm4py.get_start_activities(log_xes)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/plain": [
"{'pay compensation': 3, 'reject request': 3}"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pm4py.get_end_activities(log_xes)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Exporting Event Data"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"Now that we have seen how to import event data into pm4py, let’s take a look at the opposite, i.e., exporting event data.\n",
"Exporting event logs can be very useful, e.g., we might want to convert a ```.csv``` file into a ```.xes``` file, or we might\n",
"want to filter out certain (noisy) cases and save the filtered event log. Like importing, exporting event data is\n",
"possible in two ways, i.e., exporting to ```csv``` (using ```pandas```) and exporting event logs to ```xes```. In the upcoming\n",
"sections, we show how to export an event log stored as a pandas ```DataFrame``` into a ```csv``` file, a pandas ```DataFrame``` as an\n",
"```xes``` file, a pm4py event log object as a ```csv``` file and, finally, a pm4py event log object as an ```xes``` file."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Storing a Pandas Data Frame as a csv file"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"Storing an event log that is represented as a pandas dataframe is straightforward, i.e., we can directly use the ```to_csv```\n",
" ([full reference here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html)) function\n",
" of the pandas DataFrame object. Consider the following example snippet of code, in which we show this functionality."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"log.to_csv('running_example_exported.csv')"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Storing a Pandas DataFrame as a .xes file"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"It is also possible to store a pandas data frame as an ```.xes``` file. This is simply done by calling the ```pm4py.write_xes()```\n",
"function. You can pass the dataframe as an input parameter to the function, i.e., pm4py handles the internal conversion\n",
"of the dataframe to an event log object prior to writing it to disk. Note that this construct only works if you have\n",
"formatted the data frame, i.e., as highlighted earlier in the importing CSV section."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"is_executing": true,
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"pm4py.write_xes(log, 'running_example_csv_exported_as_xes.xes')\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Storing an Event Log object as a .csv file"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
},
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"In some cases, we might want to store an event log object, e.g., obtained by importing a .xes file, as a csv file.\n",
"For example, certain (commercial) process mining tools only support csv importing. \n",
"For this purpose, pm4py offers conversion functionality that allows you to convert your event log object into a data frame,\n",
"which you can subsequently export using pandas.\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"df = pm4py.convert_to_dataframe(log_xes)\n",
"df.to_csv('running_example_xes_exported_as_csv.csv')\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
},
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Storing an Event Log Object as a .xes File"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
},
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"Storing an event log object as a .xes file is rather straightforward: in pm4py, the ```write_xes()``` method allows us to do so.\n",
"Consider the simple example script below in which we show an example of this functionality."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"pm4py.write_xes(log_xes, 'running_example_exported.xes')"
]
}
],
"metadata": {
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
}
},
"nbformat": 4,
"nbformat_minor": 1
}