---
license: apache-2.0
tags:
- pretrained
- mistral
- DNA
- codon
---

# Model Card for Mistral-Codon-v1-13M (Mistral for coding DNA)

The Mistral-Codon-v1-13M Large Language Model (LLM) is a pretrained generative DNA sequence model with 13M parameters.
It is derived from the Mixtral-8x7B-v0.1 model, simplified for DNA: the number of layers and the hidden size were reduced.
The model was pretrained on 24M coding DNA sequences (300 bp) from many different species (vertebrates, plants, bacteria, viruses, ...).
Compared to the v1 models, the v2 models have a much larger number of experts (128), which makes them faster to run.

## Model Architecture

Like Mixtral-8x7B-v0.1, it is a transformer model with the following architecture choices (a configuration-inspection sketch follows the list):
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
- Mixture of Experts
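
As a quick local check of these choices, the sketch below loads only the configuration and prints its main fields. The attribute names (`hidden_size`, `num_hidden_layers`, `num_local_experts`, `sliding_window`, ...) follow the standard Mixtral config and are assumptions for this remote-code model; `print(config)` shows the actual fields.

```python
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect the architecture.
config = AutoConfig.from_pretrained("RaphaelMourad/Mistral-Codon-v1-13M", trust_remote_code=True)

# Attribute names follow the standard Mixtral config; they are assumptions here,
# so missing fields are reported rather than raising an error.
for name in ["hidden_size", "num_hidden_layers", "num_attention_heads",
             "num_key_value_heads", "num_local_experts", "sliding_window"]:
    print(name, getattr(config, name, "not present in this config"))
```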

## Load the model from Hugging Face

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the codon tokenizer and the pretrained model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Codon-v1-13M", trust_remote_code=True)
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Codon-v1-13M", trust_remote_code=True)
```
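
To see how a coding sequence is segmented, you can tokenize a short space-separated codon string directly. The example string below is arbitrary, and the printed tokens depend on the model's vocabulary, so the output is only illustrative.

```python
# Inspect how a space-separated codon string is tokenized (output is vocabulary-dependent).
example = "ATG GCC AAA TTT GGG"
print(tokenizer.tokenize(example))
print(tokenizer(example)["input_ids"])
```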

## Calculate the embedding of a coding sequence

```python
# Example coding sequence written as space-separated codons.
insulin = "TGA TGA TTG GCG CGG CTA GGA TCG GCT"
inputs = tokenizer(insulin, return_tensors="pt")["input_ids"]

# Forward pass: the first output is the last hidden state, shape [1, sequence_length, 256].
hidden_states = model(inputs)[0]

# Sequence-level embedding via max pooling over the token dimension.
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape)  # expected: torch.Size([256])
```
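
Max pooling is one choice; mean pooling and batching several sequences at once work along the same lines. The sketch below is a minimal example under the assumption that the tokenizer supports padding in the usual way (falling back to the EOS token if no pad token is defined); the two sequences are arbitrary illustrations.

```python
# Embed several coding sequences at once with mean pooling over tokens.
sequences = ["TGA TGA TTG GCG CGG CTA GGA TCG GCT",
             "ATG GCC AAA TTT GGG CCC AAA TTT GGG"]

if tokenizer.pad_token is None:  # some tokenizers ship without a pad token
    tokenizer.pad_token = tokenizer.eos_token

batch = tokenizer(sequences, return_tensors="pt", padding=True)

with torch.no_grad():
    hidden = model(batch["input_ids"], attention_mask=batch["attention_mask"])[0]

# Mask out padding positions before averaging over the token dimension.
mask = batch["attention_mask"].unsqueeze(-1).float()   # [batch, seq_len, 1]
embedding_mean = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding_mean.shape)  # expected: torch.Size([2, 256])
```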

## Troubleshooting

Ensure you are using a stable version of the Transformers library (4.34.0 or newer).
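
A quick way to check the installed version:

```python
import transformers
print(transformers.__version__)  # should be 4.34.0 or newer
```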

## Notice

Mistral-Codon-v1-13M is a pretrained base model for coding DNA.

## Contact
 
Raphaël Mourad. [email protected]