---
license: apache-2.0
datasets:
- OEvortex/Vortex-50k
language:
- en
pipeline_tag: text-generation
tags:
- HelpingAI
- vortex
---
**Model Overview**

vortex-3b is a 2.9-billion-parameter causal language model created by OEvortex. It is derived from EleutherAI's Pythia-2.8b and fine-tuned on the Vortex-50k dataset.

**Usage**

To use this model, you can run the provided Colab notebook, which supports both CPU and GPU execution. Feel free to adapt it to your specific requirements.

**CPU and GPU code**

```bash
!pip install transformers
!pip install sentencepiece
!pip install accelerate
```
```python
import torch                        # tensor computation with strong GPU acceleration
from transformers import pipeline   # fast way to run pre-trained models for inference
```
```python
# load the model with the Transformers pipeline API
HL_pipeline = pipeline(model="OEvortex/vortex-3b",
                       torch_dtype=torch.bfloat16,   # half-precision weights to cut memory use
                       trust_remote_code=True,       # the repo ships custom code
                       device_map="auto")            # place layers on GPU/CPU automatically
```
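On a CPU-only machine, bfloat16 can be slow or unsupported, so float32 is the safer default. A minimal sketch of a CPU fallback (these settings are an assumption, not from the original notebook):

```python
# CPU-only fallback (assumption: no CUDA device available;
# settings not from the original notebook)
HL_pipeline = pipeline(model="OEvortex/vortex-3b",
                       torch_dtype=torch.float32,  # full precision; safer on CPU
                       trust_remote_code=True,
                       device=-1)                  # -1 pins the pipeline to the CPU
```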
```python
# define a helper function that wraps the prompt template
def get_completion_HL(user_input):
    system = """
    You are an expert Physicist.
    You are good at explaining Physics concepts in simple words.
    Help as much as you can.
    """
    prompt = f"#### System: {system}\n#### User: \n{user_input}\n\n#### Response from Lite:"
    print(prompt)
    HL_response = HL_pipeline(prompt,
                              max_new_tokens=500)
    return HL_response[0]["generated_text"]
```
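Generation behavior can be tuned by forwarding `generate()` keyword arguments through the pipeline call. A sketch with sampling enabled (the specific values are illustrative, not from the original notebook):

```python
# sampling variant of the pipeline call (illustrative values)
HL_response = HL_pipeline(prompt,
                          max_new_tokens=500,
                          do_sample=True,     # sample instead of greedy decoding
                          temperature=0.7,    # soften the token distribution
                          top_p=0.9)          # nucleus sampling cutoff
```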
```python
# let's prompt
prompt = "Explain the difference between nuclear fission and fusion."
# prompt = "Why is the Sky blue?"

print(get_completion_HL(prompt))
```
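By default, the text-generation pipeline returns the full text, prompt included. A small sketch for keeping only the model's answer, assuming the `#### Response from Lite:` marker from the template above:

```python
# keep only the text after the response marker (sketch; assumes the
# prompt template defined in get_completion_HL above)
def extract_response(generated_text):
    marker = "#### Response from Lite:"
    return generated_text.split(marker, 1)[-1].strip()

print(extract_response(get_completion_HL(prompt)))
```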