|
--- |
|
language: |
|
- multilingual |
|
- en |
|
license: apache-2.0 |
|
library_name: transformers |
|
tags: |
|
- nlp |
|
- code |
|
- vision |
|
- chemistry |
|
- engineering |
|
- biology |
|
- bio-inspired |
|
- text-generation-inference |
|
- materials science |
|
- mixture-of-experts |
|
- science |
|
- latex |
|
datasets: |
|
- lamm-mit/Cephalo-Bioinspired-Mechanics-Materials |
|
- lamm-mit/Cephalo-Wikipedia-Materials |
|
pipeline_tag: image-text-to-text |
|
inference: |
|
parameters: |
|
temperature: 0.3 |
|
widget: |
|
- messages: |
|
- role: user |
|
content: <|image_1|>Can you describe what you see in the image? |
|
--- |
|
## Model Summary |
|
|
|
Cephalo is a series of multimodal, materials-science-focused vision large language models (V-LLMs) designed to integrate visual and linguistic data for advanced understanding and interaction in human-AI or multi-agent AI frameworks.
|
|
|
The model is developed to process diverse inputs, including images and text, facilitating a broad range of applications such as image captioning, visual question answering, and multimodal content generation. The architecture combines a vision encoder with an autoregressive transformer, supporting complex natural language understanding and generation grounded in both visual and textual inputs.
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/kl5GWBP9WS0D4uwd1t3S7.png) |
|
|
|
Cephalo provides a robust framework for multimodal interaction and understanding, including the development of complex generative pipelines to create 2D and 3D renderings of material microstructures as input for additive manufacturing methods. |
|
|
|
This version of Cephalo, lamm-mit/Cephalo-Idefics2-3x8b-beta, is a Mixture-of-Experts model based on variants and fine-tuned versions of the Idefics-2 model. The basic model architecture is as follows:
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/b7BK8ZtDzTMsyFDi0wP3w.png) |
|
|
|
The model has 20b parameters (3 experts, 8b each, with 8b parameters active during inference).
|
|
|
### Download Idefics-2 MoE Model and Sample inference code |
|
|
|
```bash
|
pip install transformers -U |
|
``` |
|
|
|
```python |
|
import torch |
|
from transformers import AutoModelForCausalLM, AutoProcessor, AutoConfig |
|
|
|
def count_parameters(model):
    # Return total and trainable parameter counts, in billions
    total_params = sum(p.numel() for p in model.parameters())
    trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return total_params / 1e9, trainable_params / 1e9
|
|
|
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") |
|
|
|
model_name_moe = f"lamm-mit/Cephalo-Idefics2-3x8b-beta" |
|
config = AutoConfig.from_pretrained(model_name_moe, trust_remote_code=True) |
|
processor = AutoProcessor.from_pretrained(model_name_moe, trust_remote_code=True) |
|
moe_model = AutoModelForCausalLM.from_pretrained( |
|
model_name_moe,config=config, |
|
trust_remote_code=True, torch_dtype=torch.bfloat16, |
|
# quantization_config=quantization_config, |
|
).to(device) |
|
|
|
count_parameters(moe_model) |
|
``` |
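To see how those parameters are distributed across the model, a small helper can break the count down by top-level module. This helper is not part of the repository; it is only an illustrative sketch using standard PyTorch calls:

```python
# Illustrative helper (not part of the repository): parameter count per top-level module, in billions
def parameters_by_top_module(model):
    counts = {}
    for name, p in model.named_parameters():
        top = name.split(".")[0]
        counts[top] = counts.get(top, 0) + p.numel()
    return {k: v / 1e9 for k, v in counts.items()}

parameters_by_top_module(moe_model)
```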
|
|
|
Now use the downloaded model for inference:
|
|
|
```python |
|
from transformers.image_utils import load_image |
|
DEVICE='cuda' |
|
image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg") |
|
|
|
# Create inputs |
|
messages = [ |
|
{ |
|
"role": "user", |
|
"content": [ |
|
{"type": "image"}, |
|
{"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."}, |
|
] |
|
}, |
|
] |
|
prompt = processor.apply_chat_template(messages, add_generation_prompt=True) |
|
|
|
# Get inputs using the processor |
|
inputs = processor(text=prompt, images=[image], return_tensors="pt") |
|
inputs = {k: v.to(DEVICE) for k, v in inputs.items()} |
|
|
|
# Generate |
|
generated_ids = moe_model.generate(**inputs, max_new_tokens=500) |
|
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True) |
|
|
|
print(generated_texts) |
|
``` |
|
Output: |
|
|
|
<pre style="white-space: pre-wrap;"> |
|
The image shows a group of ants climbing over a vertical surface. The ants are using their legs and antennae to navigate the surface, demonstrating their ability to adapt to different environments and overcome obstacles. This behavior is relevant for materials design because it highlights the ants' ability to optimize their movements and interactions with their surroundings, which can inspire the development of advanced materials that mimic these natural adaptations. |
|
|
|
Multi-agent AI refers to the use of artificial intelligence algorithms to simulate and analyze the behavior of multiple agents, such as ants, in a system. This approach allows for the study of complex interactions and emergent properties that arise from the collective actions of individual agents. By understanding how ants navigate and interact with their environment, researchers can gain insights into the design of materials that exhibit similar properties, such as self-healing, adaptive behavior, and enhanced functionality. |
|
</pre> |
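The metadata above lists an inference temperature of 0.3. If you prefer sampled decoding over the greedy default, a hedged variant of the generation call looks like this (`do_sample` and `temperature` are standard `generate` arguments; the values are illustrative choices, not requirements):

```python
# Optional: sampled decoding with the temperature suggested in the model card metadata
generated_ids = moe_model.generate(
    **inputs,
    max_new_tokens=500,
    do_sample=True,
    temperature=0.3,
)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts)
```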
|
|
|
## Make an Idefics-2-MoE model from scratch using several pre-trained models
|
|
|
Download the .py files that implement the Idefics-2 Mixture-of-Experts vision model:
|
|
|
```bash
|
pip install huggingface_hub |
|
``` |
|
|
|
```python |
|
from huggingface_hub import HfApi, hf_hub_download |
|
from tqdm.notebook import tqdm |
|
import os |
|
import shutil |
|
|
|
# Repository details |
|
repo_id = "lamm-mit/Cephalo-Idefics2-3x8b-beta" |
|
api = HfApi() |
|
|
|
# List all files in the repository |
|
files_in_repo = api.list_repo_files(repo_id) |
|
|
|
# Filter for .py files |
|
py_files = [file for file in files_in_repo if file.endswith('.py')] |
|
|
|
# Directory to save the downloaded files |
|
save_dir = "./Idefics2_MoE/" |
|
os.makedirs(save_dir, exist_ok=True) |
|
|
|
# Download each .py file |
|
for file_name in tqdm(py_files): |
|
file_path = hf_hub_download(repo_id=repo_id, filename=file_name) |
|
new_path = os.path.join(save_dir, file_name) |
|
shutil.move(file_path, new_path) |
|
print(f"Downloaded: {file_name}") |
|
|
|
print("Download completed.") |
|
``` |
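The MoE construction step below uses classes that are assumed to be defined in the downloaded .py files (Idefics2ForCausalLMMoE and Idefics2ForCausalLMMoEConfig). One way to make them importable is to add the save directory to the Python path:

```python
import sys

# Make the downloaded .py files importable; the MoE classes used below are assumed to live there
sys.path.append(save_dir)
```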
|
|
|
Download the models that will form the experts, as well as the base model. As a simple example, we use:

1) A materials-science fine-tuned model: lamm-mit/Cephalo-Idefics-2-vision-8b-beta (model_1)
2) A chatty version: HuggingFaceM4/idefics2-8b-chatty (model_2)
3) A basic variant: HuggingFaceM4/idefics2-8b (model_3)
|
|
|
```python |
|
import torch
from transformers import AutoProcessor, AutoConfig, Idefics2ForConditionalGeneration
|
from transformers import BitsAndBytesConfig |
|
|
|
DEVICE='cuda' |
|
|
|
quantization_config = BitsAndBytesConfig( |
|
load_in_4bit=True, |
|
bnb_4bit_quant_type="nf4", |
|
bnb_4bit_use_double_quant=True, |
|
bnb_4bit_compute_dtype=torch.bfloat16 |
|
) |
|
|
|
model_id_1='lamm-mit/Cephalo-Idefics-2-vision-8b-beta' |
|
|
|
model_1 = Idefics2ForConditionalGeneration.from_pretrained( model_id_1, |
|
torch_dtype=torch.bfloat16, #if your GPU allows |
|
_attn_implementation="flash_attention_2", #make sure Flash Attention 2 is installed |
|
trust_remote_code=True, |
|
#quantization_config=quantization_config, |
|
) |
|
processor = AutoProcessor.from_pretrained( |
|
f"{model_id_1}", |
|
do_image_splitting=True |
|
) |
|
|
|
config = AutoConfig.from_pretrained(model_id_1, trust_remote_code=True) |
|
|
|
IDEFICS2_CHAT_TEMPLATE = "{% for message in messages %}{{message['role'].capitalize()}}{% if message['content'][0]['type'] == 'image' %}{{':'}}{% else %}{{': '}}{% endif %}{% for line in message['content'] %}{% if line['type'] == 'text' %}{{line['text']}}{% elif line['type'] == 'image' %}{{ '<image>' }}{% endif %}{% endfor %}<end_of_utterance>\n{% endfor %}{% if add_generation_prompt %}{{ 'Assistant:' }}{% endif %}" |
|
processor.chat_template = IDEFICS2_CHAT_TEMPLATE |
|
``` |
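As an optional sanity check, you can render a prompt with the custom chat template to confirm it produces the expected User/Assistant format (the sample message below is only for illustration):

```python
# Render a sample prompt with the custom chat template (sanity check only)
sample_messages = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What is shown in this image?"}]}
]
print(processor.apply_chat_template(sample_messages, add_generation_prompt=True))
```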
|
|
|
Now, load the rest of the models: |
|
```python |
|
model_id_2='HuggingFaceM4/idefics2-8b-chatty' |
|
|
|
model_2 = Idefics2ForConditionalGeneration.from_pretrained( model_id_2, |
|
torch_dtype=torch.bfloat16, #if your GPU allows |
|
_attn_implementation="flash_attention_2", #make sure Flash Attention 2 is installed |
|
trust_remote_code=True, |
|
#quantization_config=quantization_config, |
|
) |
|
|
|
model_id_3='HuggingFaceM4/idefics2-8b' |
|
|
|
model_3 = Idefics2ForConditionalGeneration.from_pretrained( model_id_3, |
|
torch_dtype=torch.bfloat16, #if your GPU allows |
|
_attn_implementation="flash_attention_2", #make sure Flash Attention 2 is installed |
|
trust_remote_code=True, |
|
#quantization_config=quantization_config, |
|
) |
|
``` |
|
Put on device: |
|
```python |
|
model_1.to(DEVICE) |
|
model_2.to(DEVICE) |
|
model_3.to(DEVICE) |
|
``` |
|
|
|
### Construct MoE |
|
|
|
Here we show how the MoE is constructed from the set of expert models loaded earlier. We consider three experts: model_1, model_2, and model_3. With k=1, the gating layers select a single 8b expert at a time, consistent with the 8b active parameters noted above.
|
|
|
```python |
|
import copy

# Idefics2ForCausalLMMoE and Idefics2ForCausalLMMoEConfig are defined in the .py files downloaded above

dtype = torch.bfloat16                          # desired dtype for the new layers
base_model = copy.deepcopy(model_1)             # base model around which the MoE is built
expert_models = [model_1, model_2, model_3]     # list of expert models

moe_config = Idefics2ForCausalLMMoEConfig(config=config, k=1, num_expert_models=len(expert_models))
moe_model = Idefics2ForCausalLMMoE(moe_config, base_model, expert_models, layer_dtype=dtype)
|
|
|
count_parameters(expert_models[0]),count_parameters(moe_model) |
|
``` |
|
Delete models no longer needed: |
|
```python |
|
del model_1 |
|
del model_2 |
|
del model_3 |
|
``` |
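Deleting the Python references does not by itself return GPU memory; optionally, you can trigger garbage collection and clear the CUDA cache (standard Python/PyTorch calls, shown here as an optional step):

```python
import gc

gc.collect()              # collect objects that are no longer referenced
torch.cuda.empty_cache()  # release cached GPU memory back to the driver
```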
|
Put MoE model on device: |
|
```python |
|
moe_model.to(DEVICE) |
|
``` |
|
Test whether the model works (since the gating layers have not been trained yet, the output may not be desirable):
|
```python |
|
from transformers.image_utils import load_image |
|
|
|
image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg") |
|
|
|
# Create inputs |
|
messages = [ |
|
{ |
|
"role": "user", |
|
"content": [ |
|
{"type": "image"}, |
|
{"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."}, |
|
] |
|
}, |
|
] |
|
prompt = processor.apply_chat_template(messages, add_generation_prompt=True) |
|
|
|
# Get inputs using the processor |
|
inputs = processor(text=prompt, images=[image], return_tensors="pt") |
|
inputs = {k: v.to(DEVICE) for k, v in inputs.items()} |
|
|
|
# Generate |
|
generated_ids = moe_model.generate(**inputs, max_new_tokens=500) |
|
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True) |
|
|
|
print(generated_texts) |
|
``` |
|
|
|
### Now train MoE gating function |
|
|
|
We train the gating layers by providing sample images/prompts for each of the three experts. Here is a simple example training set: |
|
|
|
```python |
|
from PIL import Image
import requests

image_1 = Image.open("./VALIDATION/Q15.jpg")
|
image_1a = Image.open("./VALIDATION/Q31.jpg") |
|
|
|
image_2 = Image.open(requests.get("https://media.wired.com/photos/5aa32b912ba43111d1213e0c/master/w_2240,c_limit/akhacouple.jpg", stream=True).raw) |
|
image_2a = Image.open(requests.get("https://media.wired.com/photos/5aa32b912ba43111d1213e0c/master/w_2240,c_limit/akhacouple.jpg", stream=True).raw) |
|
|
|
image_3 = Image.open(requests.get("https://i5.walmartimages.com/seo/Amazing-Andrea-Apple-Tree-Seeds-20-Seeds-Grow-Fresh-Apples_ff218043-bcd4-4437-8418-6631d8e97bb3.638ac0120ff05c8913e85ebb74f45f6c.jpeg?odnHeight=640&odnWidth=640&odnBg=FFFFFF", stream=True).raw) |
|
image_3a = Image.open(requests.get("https://i5.walmartimages.com/seo/Amazing-Andrea-Apple-Tree-Seeds-20-Seeds-Grow-Fresh-Apples_ff218043-bcd4-4437-8418-6631d8e97bb3.638ac0120ff05c8913e85ebb74f45f6c.jpeg?odnHeight=640&odnWidth=640&odnBg=FFFFFF", stream=True).raw) |
|
|
|
prompts_per_expert = [ |
|
[{"text": "User:<image>What is shown in this image. Explain the importance for materials design.<end_of_utterance>Assistant: The image shows", "image": [image_1]}, |
|
{"text": "User:<image>What is shown in this image. Explain the importance for materials design.<end_of_utterance>Assistant: The image shows", "image": [image_1a]}, |
|
], |
|
|
|
[{"text": "User:<image>What is shown in this image. <end_of_utterance>Assistant: The image shows a human.", "image": [image_2]}, |
|
{"text": "User:<image>What is shown in this image, and what does it mean in terms of human history? <end_of_utterance>Assistant: The image shows a historical image of human development.", "image": [image_2a]}, |
|
], |
|
|
|
[{"text": "User:<image>What is shown in this image. Provide a brief answer. <end_of_utterance>Assistant: This is an apple.", "image": [image_3]}, |
|
{"text": "User:<image>What is shown in this image. Brief and concise answer. <end_of_utterance>Assistant: The image shows an apple.", "image": [image_3a]}, |
|
], |
|
] |
|
|
|
gating_layer_params = moe_model.train_gating_layer_params_from_hidden_states(processor, prompts_per_expert, |
|
epochs=1000, loss_steps=100, lr=5e-5, layer_offset=0) |
|
|
|
# Set parameters for a specific layer |
|
moe_model.set_gating_layer_params(gating_layer_params) |
|
``` |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/mh4eFDuFsTBOYbjc38PYz.png) |
|
|
|
|
|
Now that the MoE gating layers have been trained, we can run inference again:
|
|
|
```python |
|
from transformers.image_utils import load_image |
|
|
|
image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg") |
|
|
|
# Create inputs |
|
messages = [ |
|
{ |
|
"role": "user", |
|
"content": [ |
|
{"type": "image"}, |
|
{"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI."}, |
|
] |
|
}, |
|
] |
|
prompt = processor.apply_chat_template(messages, add_generation_prompt=True) |
|
|
|
# Get inputs using the processor |
|
inputs = processor(text=prompt, images=[image], return_tensors="pt") |
|
inputs = {k: v.to(DEVICE) for k, v in inputs.items()} |
|
|
|
# Generate |
|
generated_ids = moe_model.generate(**inputs, max_new_tokens=500) |
|
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True) |
|
|
|
print(generated_texts) |
|
``` |
|
|
|
### Push to hub and save locally |
|
|
|
We can save the MoE model either on the Hugging Face Hub or locally:
|
|
|
```python |
|
repo_id='...' |
|
moe_name='Cephalo-Idefics2-3x8b-beta' |
|
|
|
processor.push_to_hub(f'{repo_id}/{moe_name}')
moe_model.push_to_hub(f'{repo_id}/{moe_name}')
|
``` |
|
|
|
Save locally: |
|
```python |
|
processor.save_pretrained(moe_name)
moe_model.save_pretrained(moe_name)
|
|
|
``` |
|
|
|
Loading the model works as shown above (the same call also accepts the local directory, i.e. moe_name, instead of the Hub path); it is included here again for completeness:
|
```python |
|
model_name_moe = f'{repo_id}/'+moe_name |
|
config = AutoConfig.from_pretrained(model_name_moe, trust_remote_code=True) |
|
processor = AutoProcessor.from_pretrained(model_name_moe, trust_remote_code=True) |
|
moe_model = AutoModelForCausalLM.from_pretrained( |
|
model_name_moe,config=config, |
|
trust_remote_code=True, torch_dtype=torch.bfloat16, |
|
# quantization_config=quantization_config, |
|
).to(device) |
|
|
|
count_parameters(moe_model) |
|
``` |