tags:
- corpus
---

# Dataset Card: Multi Model Plant Genome Corpus

This is a modified version of the Plant Multi-Species Genomes dataset created by InstaDeepAI, focusing on 11 major crop species with adaptive splitting based on genome sizes.

## Dataset Summary

The Plant Multi-Species Genome dataset contains DNA sequences from 11 different plant species:

1. Arabidopsis (GCF_000001735.4_TAIR10.1)
2. Tomato (GCF_000188115.4_SL3.0)
3. Rice (GCF_001433935.1_IRGSP-1.0)
4. Soybean (GCF_000004515.6)
5. Sorghum (GCF_000003195.3)
6. Maize (GCF_902167145.1)
7. Tobacco (GCF_000715135.1)
8. Wheat (GCF_018294505.1)
9. Cabbage (GCF_000695525.1)
10. Foxtail millet (GCF_000263155.2)
11. Cucumber (GCF_000004075.3)

Each sequence is processed into chunks of configurable length and overlap. The dataset employs an adaptive splitting strategy: the validation and test set ratios are determined by genome size, ensuring balanced representation across species while keeping most of each genome available for training.

## Dataset Structure

### Data Instances

Each instance contains:

```python
{
    'sequence': 'ACGTACGT...',  # DNA sequence string
    'description': 'Species and genome information',
    'start_pos': 0,     # start position in the original sequence
    'end_pos': 1000     # end position in the original sequence
}
```

- `sequence` (`string`): the DNA sequence
- `description` (`string`): species information and NCBI identifier
- `start_pos` (`int32`): start position of the chunk in the original sequence
- `end_pos` (`int32`): end position of the chunk in the original sequence
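
For reference, the corresponding `datasets` feature schema would look roughly like this (a sketch assuming the types listed above; the actual loading script may define it differently):

```python
from datasets import Features, Value

# Hypothetical schema mirroring the documented fields
features = Features({
    "sequence": Value("string"),
    "description": Value("string"),
    "start_pos": Value("int32"),
    "end_pos": Value("int32"),
})
```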

### Data Splits

The dataset is split into train, validation, and test sets using an adaptive strategy:

- Very large genomes (>20% of total): 0.2% for validation and test
- Large genomes (10-20%): 0.5% for validation and test
- Medium genomes (5-10%): 1% for validation and test
- Small genomes (<5%): 2% for validation and test

The combined size of the validation and test sets is capped at 10% of each genome.
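
The thresholds above amount to a simple lookup on each genome's share of the total corpus. A minimal sketch (the function name and exact comparisons are illustrative, not the dataset script's actual code):

```python
def adaptive_split_ratio(genome_size: int, total_size: int) -> float:
    """Map a genome's share of the corpus to a validation/test ratio.

    The combined validation + test share is capped at 10% per genome;
    the ratios below stay well under that cap.
    """
    share = genome_size / total_size
    if share > 0.20:
        return 0.002  # very large genomes: 0.2%
    if share > 0.10:
        return 0.005  # large genomes: 0.5%
    if share > 0.05:
        return 0.01   # medium genomes: 1%
    return 0.02       # small genomes: 2%
```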

## Dataset Creation

### Source Data

The source genomes come from the NCBI RefSeq database, accessed through the original Plant Multi-Species Genomes dataset by InstaDeepAI.

### Preprocessing

- Sequences are cleaned to contain only `A`, `T`, `C`, `G`, and `N`
- Sequences are split into chunks of configurable length and overlap (see the sketch below)
- Adaptive splitting is applied based on genome size
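
As a rough illustration of the first two steps, assuming the default 1,000 bp chunks with 100 bp of overlap (the helper names are hypothetical; the actual loading script may differ):

```python
import re

def clean_sequence(seq: str) -> str:
    """Replace anything other than A, T, C, G, N with N
    (an assumption about how non-standard codes are handled)."""
    return re.sub(r"[^ATCGN]", "N", seq.upper())

def chunk_sequence(seq: str, chunk_length: int = 1000, overlap: int = 100):
    """Yield (chunk, start, end) tuples. Each chunk carries `overlap` bp of
    flanking context on each side, so a 1,000 bp chunk spans up to 1,200 bp."""
    for start in range(0, len(seq), chunk_length):
        ctx_start = max(0, start - overlap)
        ctx_end = min(len(seq), start + chunk_length + overlap)
        yield seq[ctx_start:ctx_end], ctx_start, ctx_end
```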

## Usage

```python
from datasets import load_dataset
```

### Default configuration (1000bp chunks)

```python
# the custom loading script requires trust_remote_code
dataset = load_dataset("your-username/multi-model-plant-genome-corpus", trust_remote_code=True)
```

### Custom chunk length

```python
dataset = load_dataset("your-username/multi-model-plant-genome-corpus", chunk_length=6000, trust_remote_code=True)
```
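
Once loaded, records can be inspected like any other `datasets` object. Continuing from the example above (a usage sketch; the printed values are illustrative):

```python
train = dataset["train"]
example = train[0]
print(example["description"])                    # species and NCBI identifier
print(example["start_pos"], example["end_pos"])  # chunk coordinates
print(example["sequence"][:60])                  # first 60 bases of the chunk
```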

## Limitations

- The dataset is limited to 11 species, compared to the original 48 species
- Sequence length is restricted by the chunking process
- The adaptive splitting strategy may result in different split sizes across species

## Citation

Please cite both the original dataset and this modified version:

```bibtex
@article{mendoza2023foundational,
  title={A Foundational Large Language Model for Edible Plant Genomes},
  author={Mendoza-Revilla, Javier and Trop, Evan and Gonzalez, Liam and Roller, Masa and Dalla-Torre, Hugo and de Almeida, Bernardo P and Richard, Guillaume and Caton, Jonathan and Lopez Carranza, Nicolas and Skwark, Marcin and others},
  journal={bioRxiv},
  year={2023},
  publisher={Cold Spring Harbor Laboratory}
}
```

```bibtex
@article{o2016reference,
  title={Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation},
  author={O'Leary, Nuala A and Wright, Mathew W and Brister, J Rodney and Ciufo, Stacy and Haddad, Diana and McVeigh, Rich and Rajput, Bhanu and Robbertse, Barbara and Smith-White, Brian and Ako-Adjei, Danso and others},
  journal={Nucleic acids research},
  volume={44},
  number={D1},
  pages={D733--D745},
  year={2016},
  publisher={Oxford University Press}
}
```

## License

This dataset follows the same license as the original NCBI RefSeq data and the InstaDeepAI plant-multi-species-genomes dataset. For detailed terms, please refer to:

- NCBI Terms and Conditions
- InstaDeepAI/plant-multi-species-genomes License