TReconLM synthetic test sets

This dataset contains the synthetic test sets used to evaluate TReconLM, a transformer-based model for trace reconstruction of noisy DNA sequences (see our paper).

The real-world datasets used for fine-tuning are available here:

The corresponding test sets used in the paper can be reproduced using the preprocessing scripts in our GitHub repository under data/.

Synthetic Dataset Generation

Synthetic datasets are generated using data_generation.py. Each test set is created by the following steps (a minimal sketch follows the list):

  • Sampling a ground-truth sequence of length L
  • Introducing insertions, deletions, and substitutions with rates sampled uniformly from [0.01, 0.1]
  • Randomly selecting the number of noisy reads N between 2 and 10
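To make the procedure concrete, here is a minimal sketch of these three steps. The function names (corrupt, generate_example) and the sequence length passed in the example are illustrative assumptions, not the actual implementation in data_generation.py:

```python
import random

ALPHABET = "ACGT"

def corrupt(sequence, sub_rate, ins_rate, del_rate):
    """Apply substitutions, insertions, and deletions independently per base."""
    out = []
    for base in sequence:
        if random.random() < del_rate:   # deletion: drop the base
            continue
        if random.random() < sub_rate:   # substitution: replace with a different base
            base = random.choice([b for b in ALPHABET if b != base])
        out.append(base)
        if random.random() < ins_rate:   # insertion: add a random base after this one
            out.append(random.choice(ALPHABET))
    return "".join(out)

def generate_example(seq_length):
    """Sample one ground-truth sequence and its cluster of noisy reads."""
    ground_truth = "".join(random.choices(ALPHABET, k=seq_length))
    # Error rates drawn uniformly from [0.01, 0.1].
    sub, ins, dele = (random.uniform(0.01, 0.1) for _ in range(3))
    # Number of noisy reads N drawn uniformly between 2 and 10.
    n_reads = random.randint(2, 10)
    reads = [corrupt(ground_truth, sub, ins, dele) for _ in range(n_reads)]
    return ground_truth, reads

gt, reads = generate_example(seq_length=100)  # 100 is an illustrative length L
```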

Files Included

  • ground_truth.txt: Contains the original DNA sequences, one per line.

  • reads.txt: Contains the noisy traces (corrupted copies of the ground-truth sequences).

    • Each line is a single read
    • Clusters are separated by: ===============================
    • The i-th cluster corresponds to the i-th line in ground_truth.txt
  • test_x.pt: A PyTorch tensor containing tokenized and padded input sequences used as model input, formatted as: read1|read2|...|readN : ground_truth (a short loading sketch follows this list)
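The text files can be read back into clusters with standard Python, and test_x.pt can be loaded with torch.load. The parsing below follows the separator convention described above; it is a sketch with illustrative variable names, not the repository's loading code:

```python
import torch

# One ground-truth sequence per line.
with open("ground_truth.txt") as f:
    ground_truths = [line.strip() for line in f if line.strip()]

# Group reads into clusters; a line of '=' characters separates clusters,
# and the i-th cluster corresponds to the i-th ground-truth sequence.
clusters, current = [], []
with open("reads.txt") as f:
    for line in f:
        line = line.strip()
        if line.startswith("==="):
            clusters.append(current)
            current = []
        elif line:
            current.append(line)
if current:  # last cluster may not be followed by a separator
    clusters.append(current)

assert len(clusters) == len(ground_truths)

# Tokenized, padded model inputs ("read1|read2|...|readN : ground_truth").
test_x = torch.load("test_x.pt")
print(test_x.shape, len(clusters))
```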

Usage

Instructions for running inference using these datasets and our pretrained models are provided in the trace_reconstruction.ipynb notebook in our GitHub repository.