Model Card for ModerBert-Codon-v1-34M (ModernBERT for coding DNA)

ModerBert-Codon-v1-34M is a pretrained DNA sequence model with 34M parameters. It is derived from the ModernBERT model, which was simplified for DNA: the number of layers and the hidden size were reduced. The model was pretrained on 24M coding DNA sequences (3,000 bp each) from many different species (vertebrates, plants, bacteria, viruses, ...).
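The reduced dimensions can be inspected directly from the model configuration. A minimal sketch, assuming the attribute names of the standard ModernBERT config (the 256 value matches the hidden-state size in the embedding example below):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("RaphaelMourad/ModerBert-Codon-v1-34M", trust_remote_code=True)
# Print the reduced hidden size and layer count of this DNA variant.
print(config.hidden_size, config.num_hidden_layers)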

Load the model from Hugging Face:

import torch
from transformers import AutoTokenizer, AutoModel

# Load the codon tokenizer and the pretrained encoder.
tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/ModerBert-Codon-v1-34M", trust_remote_code=True)
model = AutoModel.from_pretrained("RaphaelMourad/ModerBert-Codon-v1-34M", trust_remote_code=True)
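The embedding example below feeds the tokenizer codons separated by spaces. If you start from a raw DNA string, it can be split into non-overlapping triplets first; a minimal sketch (the helper name is hypothetical, not part of the original card):

def to_codons(seq):
    # Split a raw DNA string into space-separated, non-overlapping codons,
    # dropping trailing bases that do not complete a triplet.
    usable = len(seq) - len(seq) % 3
    return " ".join(seq[i:i+3] for i in range(0, usable, 3))

print(to_codons("TGATGATTGGCGCGGCTAGGATCGGCT"))  # "TGA TGA TTG GCG CGG CTA GGA TCG GCT"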

Calculate the embedding of a coding sequence

codon_dna = "TGA TGA TTG GCG CGG CTA GGA TCG GCT"
inputs = tokenizer(codon_dna, return_tensors='pt')["input_ids"]
hidden_states = model(inputs)[0]  # shape: [1, sequence_length, 256]

# embedding with max pooling over the sequence dimension
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape)  # expect torch.Size([256])
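Other pooling strategies work on the same hidden states; mean pooling is a common alternative, and pooled embeddings can be compared with cosine similarity. A minimal sketch building on the variables above (the second sequence and the choice of similarity measure are illustrative, not prescribed by the card):

import torch.nn.functional as F

# embedding with mean pooling (an alternative to max pooling)
embedding_mean = torch.mean(hidden_states[0], dim=0)
print(embedding_mean.shape)  # torch.Size([256])

# compare two coding sequences via the cosine similarity of their embeddings
other_dna = "ATG GCT GGA TCG GCT TAA"  # hypothetical second sequence
other_inputs = tokenizer(other_dna, return_tensors='pt')["input_ids"]
other_hidden = model(other_inputs)[0]
other_mean = torch.mean(other_hidden[0], dim=0)
print(F.cosine_similarity(embedding_mean, other_mean, dim=0).item())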

Troubleshooting

Ensure you are using a stable release of Transformers, 4.34.0 or newer.

Notice

ModerBert-Codon-v1-34M is a pretrained base model for coding DNA.

Contact

Raphaël Mourad. [email protected]
