suzuki-2001 committed (verified)
Commit aa71f73 · Parent: 7a6642c

Update README.md

Files changed (1): README.md (+78 -84)
@@ -5,90 +5,89 @@ tags:
  - corpus
  ---
 
- # Model Plant Multi-Species Genomes
- ***
 
- ## Summary
 
- This dataset is derived from multiple plant genome assemblies available from NCBI RefSeq.
- It contains genomes from 11 different plant species.
- The dataset is pre-processed into fixed-length nucleotide sequence chunks (default 1 kbp) with a 100 bp overlap at both ends to preserve sequence context.
- All sequences are normalized to contain only `A`, `T`, `C`, `G`, and `N` characters.
- This dataset builds on the infrastructure provided by the [InstaDeepAI/plant-multi-species-genomes](https://huggingface.co/datasets/InstaDeepAI/plant-multi-species-genomes) repository, adapting the splitting strategy and configuration.
- Specifically, it introduces two splitting modes: `chromosome` (original record-based splitting within each species) and `genome` (split by selecting the smallest genomes for the validation and test sets).
 
- ## Supported Configurations
 
- - **Configuration Name:** `1kbp`
- - **Chunk Length:** 1,000 base pairs
- - **Overlap:** 100 base pairs on each side
 
- ## Split Modes
 
- - **Chromosome Mode (Default)**:
-   Within each genome file (which can contain multiple records, e.g. chromosomes or contigs), approximately 10% of the tail records are assigned to validation and another 10% (just before the validation chunk) to test. The remainder is used for training. This approach keeps the splitting consistent within the same species and ensures that no single chromosome is split across training and evaluation sets.
 
- - **Genome Mode**:
-   Instead of splitting within each species, the entire set of genomes is sorted by total sequence length. The two smallest genomes are assigned to validation, the next two smallest to test, and the remaining genomes are used for training. This approach evaluates how well a model generalizes across species when confronted with unseen genomes.
 
- You can specify the mode when loading the dataset:
 
- ```python
- from datasets import load_dataset
-
- overlap_length = 100
- chunk_length = 1000  # total length: 1000 + overlap*2 = 1200 bp
- split_mode = "chromosome"  # or "genome"
-
- dataset = load_dataset(
-     'suzuki-2001/multi-model-plant-genome-corpus',
-     chunk_length=chunk_length,
-     overlap=overlap_length,
-     split_mode=split_mode,
-     trust_remote_code=True,
- )
- ```
 
- ## Dataset Structure
 
- ### Data Fields
 
- - `sequence` (`string`): The DNA sequence chunk (A, T, C, G, N only).
- - `description` (`string`): Description or FASTA header line associated with the original record.
- - `start_pos` (`int32`): Starting position of the chunk in the original genome sequence.
- - `end_pos` (`int32`): Ending position of the chunk in the original genome sequence.
 
- ### Splits
 
- Depending on the chosen mode, the dataset provides `train`, `validation`, and `test` splits:
 
- - `train`: Training chunks.
- - `validation` (`val`): Validation chunks.
- - `test`: Test chunks.
 
- In chromosome mode, the splits are made by dividing records within each genome file. In genome mode, they are made by selecting different genome files based on their total length.
 
- ## Source Data
 
- ### Original Source
- The genome sequences are sourced from the NCBI RefSeq database. The reference is:
 
- ```bibtex
- @article{o2016reference,
-   title={Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation},
-   author={O'Leary, Nuala A and Wright, Mathew W and Brister, J Rodney and Ciufo, Stacy and Haddad, Diana and McVeigh, Rich and Rajput, Bhanu and Robbertse, Barbara and Smith-White, Brian and Ako-Adjei, Danso and others},
-   journal={Nucleic acids research},
-   volume={44},
-   number={D1},
-   pages={D733--D745},
-   year={2016},
-   publisher={Oxford University Press}
- }
- ```
 
- This dataset is based on the Agronomic Nucleotide Transformer paper:
  ```bibtex
- @article{mendoza2023foundational,
    title={A Foundational Large Language Model for Edible Plant Genomes},
    author={Mendoza-Revilla, Javier and Trop, Evan and Gonzalez, Liam and Roller, Masa and Dalla-Torre, Hugo and de Almeida, Bernardo P and Richard, Guillaume and Caton, Jonathan and Lopez Carranza, Nicolas and Skwark, Marcin and others},
    journal={bioRxiv},
@@ -98,26 +97,21 @@ This dataset is based on the Agronomic Nucleotide Transformer paper:
    }
  ```
 
- ### License
-
- Refer to NCBI policies: [https://www.ncbi.nlm.nih.gov/home/about/policies/](https://www.ncbi.nlm.nih.gov/home/about/policies/)
-
- ## Processing
-
- - Non-ACGT letters are converted to `N`.
- - Uppercasing is enforced.
- - Each record is split into overlapping chunks of 1 kbp with 100 bp overlap, giving the model context continuity across chunks.
-
- ## Intended Use
-
- This dataset is suitable for pre-training DNA language models, evaluating genome assembly tools, or exploring genomic features across multiple plant species. The two split modes address different research questions: intra-species generalization (`chromosome` mode) or inter-species generalization (`genome` mode).
-
- ## Limitations and Considerations
-
- - The chromosome-based approach might not fully evaluate inter-species generalization, as train and test data might still come from the same species.
- - The genome-based approach might exclude some species from training, which could be seen as wasteful of data, but it provides a strict test of cross-species generalization.
- - Users can choose the most appropriate mode depending on their research objectives.
 
- ## Citation
 
- If you use this dataset in your research, please cite the original RefSeq database paper mentioned above.
 
  - corpus
  ---
 
+ # Dataset Card: Multi-Model Plant Genome Corpus
 
+ This is a modified version of the Plant Multi-Species Genomes dataset created by InstaDeepAI, focusing on 11 major crop species with adaptive splitting based on genome sizes.
 
+ ## Dataset Summary
+ The Multi-Model Plant Genome Corpus contains DNA sequences from 11 plant species:
 
+ 1. Arabidopsis (GCF_000001735.4_TAIR10.1)
+ 2. Tomato (GCF_000188115.4_SL3.0)
+ 3. Rice (GCF_001433935.1_IRGSP-1.0)
+ 4. Soybean (GCF_000004515.6)
+ 5. Sorghum (GCF_000003195.3)
+ 6. Maize (GCF_902167145.1)
+ 7. Tobacco (GCF_000715135.1)
+ 8. Wheat (GCF_018294505.1)
+ 9. Cabbage (GCF_000695525.1)
+ 10. Foxtail millet (GCF_000263155.2)
+ 11. Cucumber (GCF_000004075.3)
 
+ Each sequence is processed into chunks with configurable length and overlap. The dataset employs an adaptive splitting strategy in which the validation and test set ratios are determined by genome size, ensuring balanced representation while maintaining efficiency.
 
 
 
+ ## Dataset Structure
+ ### Data Instances
+ Each instance contains:
+ ```python
+ {
+     'sequence': 'ACGTACGT...',  # DNA sequence string
+     'description': 'Species and genome information',
+     'start_pos': 0,    # start position in the original sequence
+     'end_pos': 1000,   # end position in the original sequence
+ }
+ ```
 
+ - `sequence`: A string containing the DNA sequence
+ - `description`: A string containing the species information and NCBI identifier
+ - `start_pos`: Integer indicating the start position in the original sequence
+ - `end_pos`: Integer indicating the end position in the original sequence
 
+ ### Data Splits
+ The dataset is split into train, validation, and test sets using an adaptive strategy:
 
+ - Very large genomes (>20% of total): 0.2% for validation and test
+ - Large genomes (10-20%): 0.5% for validation and test
+ - Medium genomes (5-10%): 1% for validation and test
+ - Small genomes (<5%): 2% for validation and test
 
+ The combined size of the validation and test sets is capped at 10% for each genome.
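The tiering above can be written as a small helper. This is an illustrative sketch only, not the dataset's actual loading script: the function names, the exclusive treatment of threshold values, and the reading that each percentage applies separately to validation and to test are all assumptions.

```python
def eval_fraction(genome_size: int, total_size: int) -> float:
    """Return the validation (and, symmetrically, test) fraction for one
    genome, following the size tiers described above (assumed exclusive)."""
    share = genome_size / total_size
    if share > 0.20:   # very large genomes
        return 0.002
    if share > 0.10:   # large genomes
        return 0.005
    if share > 0.05:   # medium genomes
        return 0.01
    return 0.02        # small genomes


def combined_eval_share(genome_size: int, total_size: int) -> float:
    """Combined validation + test share, with the 10% per-genome cap."""
    return min(2 * eval_fraction(genome_size, total_size), 0.10)
```

For example, a genome holding 15% of the corpus would get 0.5% for validation and 0.5% for test under this reading.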
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ ## Dataset Creation
+ ### Source Data
 
+ The source genomes are from the NCBI RefSeq database, accessed through the original Plant Multi-Species Genomes dataset by InstaDeepAI.
 
+ ### Preprocessing
 
+ - Sequences are cleaned to contain only A, T, C, G, and N
+ - Sequences are split into chunks with configurable length and overlap
+ - Adaptive splitting is applied based on genome sizes
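The first two preprocessing steps can be sketched as follows. This is a hypothetical reconstruction under simple assumptions (a plain sliding window with symmetric overlap, truncated at sequence boundaries), not the repository's actual preprocessing code:

```python
import re


def clean_sequence(seq: str) -> str:
    """Uppercase the sequence and map any non-ACGT letter to N."""
    return re.sub(r"[^ACGT]", "N", seq.upper())


def chunk_sequence(seq: str, chunk_length: int = 1000, overlap: int = 100):
    """Yield (start, end, subsequence) tuples: each chunk_length-bp core
    chunk is extended by up to `overlap` bp of context on each side."""
    for core_start in range(0, len(seq), chunk_length):
        start = max(0, core_start - overlap)
        end = min(len(seq), core_start + chunk_length + overlap)
        yield start, end, seq[start:end]


chunks = list(chunk_sequence(clean_sequence("acgtx" * 500)))
```

Under these assumptions an interior chunk spans 1,200 bp (a 1,000 bp core plus 100 bp of context on each side), matching the default configuration.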
 
+ ## Usage
 
+ ### Default configuration (1000bp chunks)
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("suzuki-2001/multi-model-plant-genome-corpus", trust_remote_code=True)
+ ```
 
+ ### Custom chunk length
+ ```python
+ dataset = load_dataset("suzuki-2001/multi-model-plant-genome-corpus", chunk_length=6000, trust_remote_code=True)
+ ```
 
+ ## Limitations
 
+ - The dataset is limited to 11 species compared to the original 48 species
+ - Sequence length is restricted by the chunking process
+ - The adaptive splitting strategy may result in different split sizes across species
 
+ ## Citation
+ Please cite both the original dataset and this modified version:
 
  ```bibtex
+ @article{mendoza2023foundational,
    title={A Foundational Large Language Model for Edible Plant Genomes},
    author={Mendoza-Revilla, Javier and Trop, Evan and Gonzalez, Liam and Roller, Masa and Dalla-Torre, Hugo and de Almeida, Bernardo P and Richard, Guillaume and Caton, Jonathan and Lopez Carranza, Nicolas and Skwark, Marcin and others},
    journal={bioRxiv},
 
    }
  ```
 
+ ```bibtex
+ @article{o2016reference,
+   title={Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation},
+   author={O'Leary, Nuala A and Wright, Mathew W and Brister, J Rodney and Ciufo, Stacy and Haddad, Diana and McVeigh, Rich and Rajput, Bhanu and Robbertse, Barbara and Smith-White, Brian and Ako-Adjei, Danso and others},
+   journal={Nucleic acids research},
+   volume={44},
+   number={D1},
+   pages={D733--D745},
+   year={2016},
+   publisher={Oxford University Press}
+ }
+ ```
 
 
 
 
 
 
 
 
+ ## License
+ This dataset follows the same license as the original NCBI RefSeq data and the InstaDeepAI plant-multi-species-genomes dataset. For detailed terms, please refer to:
 
+ - NCBI Terms and Conditions
+ - InstaDeepAI/plant-multi-species-genomes License
+ - InstaDeepAI/plant-multi-species-genomes License