Model Card for Mistral-Codon-v1-1M (Mistral for coding DNA)
The Mistral-Codon-v1-1M Large Language Model (LLM) is a pretrained generative DNA sequence model with 1M parameters. It is derived from the Mixtral-8x7B-v0.1 model, simplified for DNA: the number of layers and the hidden size were reduced. The model was pretrained on 24M coding DNA sequences (3,000 bp) from many different species (vertebrates, plants, bacteria, viruses, etc.).
Model Architecture
Like Mixtral-8x7B-v0.1, it is a transformer model with the following architecture choices (a config-inspection sketch follows the list):
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
- Mixture of Experts
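These choices are reflected in the model configuration. Below is a minimal sketch for inspecting them, assuming the checkpoint exposes the standard Mixtral config fields (num_key_value_heads, sliding_window, num_local_experts); adjust if this checkpoint differs.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("RaphaelMourad/Mistral-Codon-v1-1M", trust_remote_code=True)

# Field names below follow the standard Mixtral config; they are assumptions, not guaranteed by the card.
print(config.num_hidden_layers)                      # reduced number of layers
print(config.hidden_size)                            # reduced hidden size (matches the 256-dim embeddings below)
print(getattr(config, "num_key_value_heads", None))  # grouped-query attention
print(getattr(config, "sliding_window", None))       # sliding-window attention span
print(getattr(config, "num_local_experts", None))    # number of experts in the MoE layers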
Load the model from huggingface:
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Codon-v1-1M", trust_remote_code=True)
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Codon-v1-1M", trust_remote_code=True)
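As an optional check (not part of the original card), you can look at how the tokenizer splits a space-separated codon sequence:
# Optional: inspect the tokenization of a codon-formatted DNA string
print(tokenizer.tokenize("TGA TGA TTG GCG CGG CTA GGA TCG GCT"))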
Calculate the embedding of a coding sequence
codon_dna = "TGA TGA TTG GCG CGG CTA GGA TCG GCT"
inputs = tokenizer(codon_dna, return_tensors="pt")["input_ids"]
hidden_states = model(inputs)[0] # [1, sequence_length, 256]
# embedding with max pooling
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape) # expect torch.Size([256])
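Max pooling keeps the largest activation per hidden dimension. As an alternative sketch (not from the original card), mean pooling over token positions also yields a fixed-size embedding:
# embedding with mean pooling (alternative to max pooling)
embedding_mean = torch.mean(hidden_states[0], dim=0)
print(embedding_mean.shape) # torch.Size([256])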
Troubleshooting
Ensure you are using a recent, stable version of Transformers (4.34.0 or newer).
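A minimal sketch to check the installed version (uses the packaging library, which Transformers already depends on):
import transformers
from packaging import version

# Fail early if the installed Transformers is older than 4.34.0
assert version.parse(transformers.__version__) >= version.parse("4.34.0"), \
    "Please upgrade: pip install -U 'transformers>=4.34.0'"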
Notice
Mistral-Codon-v1-1M is a pretrained base model for coding DNA.
Contact
Raphaël Mourad. [email protected]