---
language: en
tags:
- log-analysis
- hdfs
- anomaly-detection
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: event_encoded
    dtype: string
  - name: tokenized_block
    sequence: int64
  - name: block_id
    dtype: string
  - name: label
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 1159074302
    num_examples: 460048
  - name: validation
    num_bytes: 145089712
    num_examples: 57506
  - name: test
    num_bytes: 144844752
    num_examples: 57507
  download_size: 173888975
  dataset_size: 1449008766
---

# HDFS Logs Train/Val/Test Splits

This dataset contains preprocessed HDFS log sequences split into train, validation, and test sets for anomaly detection tasks.

## Dataset Description

The dataset is derived from the HDFS log dataset, which contains system logs from a Hadoop Distributed File System (HDFS).
Each sequence represents a block of log messages, labeled as either normal or anomalous. The dataset has been preprocessed
using the Drain algorithm to extract structured fields and identify event types. 

### Data Fields

- `block_id`: Unique identifier for each HDFS block, used to group log messages into blocks
- `event_encoded`: The preprocessed log sequence with event IDs and parameters
- `tokenized_block`: The tokenized log sequence, used for training
- `label`: Classification label (`'Normal'` or `'Anomaly'`)
- `__index_level_0__`: Integer index column carried over from the original dataframe during export
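For orientation, a record has roughly this shape. The field names match the schema above, but every value below is invented for illustration and is not drawn from the dataset:

```python
# Illustrative record; field names match the dataset schema,
# but the values are made up for demonstration.
example = {
    "block_id": "blk_-1608999687919862906",  # HDFS block identifier
    "event_encoded": "E5 E22 E11 E9",        # Drain event-ID sequence (invented)
    "tokenized_block": [3, 17, 42, 9],       # token IDs for the sequence (invented)
    "label": "Normal",                       # 'Normal' or 'Anomaly'
}
```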

### Data Splits

- Training set: 460,048 sequences (80%)
- Validation set: 57,506 sequences (10%)
- Test set: 57,507 sequences (10%)

The splits are stratified by the `label` field to maintain the class distribution across splits.
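The stratification idea can be sketched in a few lines. This is a minimal pure-Python illustration of a stratified 80/10/10 split, not the script that produced these splits:

```python
import random
from collections import defaultdict

def stratified_split(labels, fracs=(0.8, 0.1, 0.1), seed=42):
    """Split example indices into train/val/test while preserving
    the per-class label ratio in each split."""
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)

    rng = random.Random(seed)
    train, val, test = [], [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n_train = int(len(idxs) * fracs[0])
        n_val = int(len(idxs) * fracs[1])
        train += idxs[:n_train]
        val += idxs[n_train:n_train + n_val]
        test += idxs[n_train + n_val:]
    return train, val, test

# Toy labels with a 90/10 class imbalance: each split keeps that ratio.
labels = ["Normal"] * 90 + ["Anomaly"] * 10
train, val, test = stratified_split(labels)
```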

## Source Data

Original data source: https://zenodo.org/records/8196385/files/HDFS_v1.zip?download=1

## Preprocessing

We preprocess the logs using the Drain algorithm to extract structured fields and identify event types.
We then encode the logs using a pretrained tokenizer and add special tokens to separate event types. The
resulting dataset is ready to use for training and evaluating log-based anomaly detection models.
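As a rough sketch of the template-extraction step, variable fields in a raw log line can be masked so that lines sharing a template collapse to a single event type. The regexes below are a toy stand-in for Drain, not the actual implementation used here:

```python
import re

# Toy parameter masking; the real pipeline uses the Drain algorithm.
PATTERNS = [
    (re.compile(r"blk_-?\d+"), "<BLK>"),                   # HDFS block ids
    (re.compile(r"\d+\.\d+\.\d+\.\d+(?::\d+)?"), "<IP>"),  # IPs, optional port
    (re.compile(r"\b\d+\b"), "<NUM>"),                     # remaining numbers
]

def to_template(line: str) -> str:
    """Replace variable fields with placeholders, leaving the event template."""
    for pattern, placeholder in PATTERNS:
        line = pattern.sub(placeholder, line)
    return line

template = to_template(
    "Receiving block blk_-1608999687919862906 src: /10.250.19.102:54106"
)
# template == "Receiving block <BLK> src: /<IP>"
```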

## Intended Uses

This dataset is designed for:
- Training log anomaly detection models
- Evaluating log sequence prediction models
- Benchmarking different approaches to log-based anomaly detection

See [honicky/pythia-14m-hdfs-logs](https://huggingface.co/honicky/pythia-14m-hdfs-logs) for an example model trained on this dataset.

## Citation

If you use this dataset, please cite the original HDFS paper:
```bibtex
@inproceedings{xu2009detecting,
  title={Detecting large-scale system problems by mining console logs},
  author={Xu, Wei and Huang, Ling and Fox, Armando and Patterson, David and Jordan, Michael I},
  booktitle={Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles},
  pages={117--132},
  year={2009}
}
```