Model Card for Mistral-Chem-v1-417M (Mistral for chemistry)

The Mistral-Chem-v1-417M Large Language Model (LLM) is a pretrained generative chemical molecule model with 417M parameters. It is derived from the Mixtral-8x7B-v0.1 model, simplified for molecules: the number of layers and the hidden size were reduced. The model was pretrained on 10M molecule SMILES strings from the ZINC 15 database.

Model Architecture

Like Mixtral-8x7B-v0.1, it is a transformer model with the following architecture choices (see the configuration sketch after the list):

  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer
  • Mixture of Experts
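
The reduced depth and hidden size can be confirmed from the published configuration. The snippet below is a sketch; attribute names such as num_hidden_layers, hidden_size, and num_local_experts follow the standard Mixtral-style configuration and are assumptions here, not guarantees from the model card.

from transformers import AutoConfig

# Load the model configuration from the Hugging Face Hub
config = AutoConfig.from_pretrained("RaphaelMourad/Mistral-Chem-v1-417M", trust_remote_code=True)

print(config.num_hidden_layers)                    # reduced number of transformer layers
print(config.hidden_size)                          # reduced hidden size (256, matching the embedding example below)
print(getattr(config, "num_local_experts", None))  # number of experts per MoE layer, if defined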

Load the model from Hugging Face:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Chem-v1-417M", trust_remote_code=True) 
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Chem-v1-417M", trust_remote_code=True)

Calculate the embedding of a molecule SMILES string

chem = "CCCCC[C@H](Br)CC"
inputs = tokenizer(chem, return_tensors = 'pt')["input_ids"]
hidden_states = model(inputs)[0] # [1, sequence_length, 256]

# embedding with max pooling
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape) # expect to be 256
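
Max pooling is one way to collapse the token dimension; mean pooling is a common alternative. The snippet below is an illustrative sketch, not a pooling strategy prescribed by the model card.

# embedding with mean pooling over the token dimension (illustrative alternative)
embedding_mean = torch.mean(hidden_states[0], dim=0)
print(embedding_mean.shape)  # expect torch.Size([256])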

Troubleshooting

Ensure you are using a stable version of Transformers, 4.34.0 or newer.
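
If you are unsure which version is installed, you can check it directly (a minimal check using the standard package attribute):

import transformers

# Print the installed Transformers version; it should be 4.34.0 or newer.
print(transformers.__version__)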

Notice

Mistral-Chem-v1-417M is a pretrained base model for chemistry.

Contact

Raphaël Mourad. [email protected]
