---
license: apache-2.0
---
### BioinspiredMixtral: Large Language Model for the Mechanics of Biological and Bio-Inspired Materials using Mixture-of-Experts

To accelerate discovery and guide insights, we report BioinspiredMixtral, an open-source autoregressive transformer large language model (LLM) trained on expert knowledge in the field of biological materials, with a particular focus on mechanics and structural properties.

The model is fine-tuned on a corpus of over a thousand peer-reviewed articles on structural biological and bio-inspired materials, and can be prompted to recall information, assist with research tasks, and serve as an engine for creativity.

The model is based on mistralai/Mixtral-8x7B-Instruct-v0.1. 

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/K0GifLVENb8G0nERQAzeQ.png)

This model is based on work reported in https://doi.org/10.1002/advs.202306724, but uses a mixture-of-experts strategy. 

To load the model with llama-cpp-python from a local copy of the GGUF file:

```
from llama_cpp import Llama

# Path to the quantized GGUF file on disk
model_path = 'lamm-mit/BioinspiredMixtral/ggml-model-q5_K_M.gguf'
chat_format = "mistral-instruct"

llm = Llama(model_path=model_path,
            n_gpu_layers=-1,   # offload all layers to the GPU
            verbose=True,
            n_ctx=10000,       # context window size
            #main_gpu=0,
            chat_format=chat_format,
            #split_mode=llama_cpp.LLAMA_SPLIT_LAYER
            )
```
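
Note that `Llama(model_path=...)` expects the GGUF file to exist locally. If it has not been fetched yet, one way to do so is via `huggingface_hub` (a minimal sketch; the filename matches the quantization above):

```
from huggingface_hub import hf_hub_download

# Download the quantized weights once; returns the local cache path to pass to Llama()
model_path = hf_hub_download(
    repo_id='lamm-mit/BioinspiredMixtral',
    filename='ggml-model-q5_K_M.gguf',
)
```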

Or, download directly from Hugging Face:

```
from llama_cpp import Llama

# Use the repository id only; from_pretrained() resolves the file by pattern
repo_id = 'lamm-mit/BioinspiredMixtral'
chat_format = "mistral-instruct"

llm = Llama.from_pretrained(
    repo_id=repo_id,
    filename="*q5_K_M.gguf",   # glob pattern matching the quantized GGUF file
    verbose=True,
    n_gpu_layers=-1,
    n_ctx=10000,
    #main_gpu=0,
    chat_format=chat_format,
)
```
For inference:
```
import time

def generate_BioMixtral(system_prompt='You are an expert in biological materials, mechanics and related topics.',
                        prompt="What is spider silk?",
                        temperature=0.0,
                        max_tokens=10000,
                        ):
    # Include a system message only if a system prompt is provided
    if system_prompt is None:
        messages = [
            {"role": "user", "content": prompt},
        ]
    else:
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ]

    result = llm.create_chat_completion(
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    return result

start_time = time.time()
result = generate_BioMixtral(system_prompt='You respond accurately.',
                             prompt="What is graphene? Answer with detail.",
                             max_tokens=512, temperature=0.7)
print(result['choices'][0]['message']['content'])
deltat = time.time() - start_time
print("--- %s seconds ---" % deltat)
# llama-cpp-python reports token counts in the completion's usage field
n_generated = result['usage']['completion_tokens']
print("Tokens per second (generation): ", n_generated / deltat)
```
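
For interactive use, the same `create_chat_completion` call can stream tokens as they are generated by passing `stream=True` (a brief sketch, reusing the `llm` object from above):

```
# Stream the reply token-by-token instead of waiting for the full completion
stream = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is spider silk?"}],
    temperature=0.7,
    max_tokens=512,
    stream=True,
)
for chunk in stream:
    delta = chunk['choices'][0]['delta']
    if 'content' in delta:
        print(delta['content'], end='', flush=True)
```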

arXiv: https://arxiv.org/abs/2309.08788