dittops committed
Commit 0014911 · 1 Parent(s): c28bfbc

Update README.md

Files changed (1)
  1. README.md +3 -29
README.md CHANGED
@@ -5,7 +5,7 @@ license: llama2

  # Introducing Code Millenials 13B

- Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks, aiming to revolutionize how systems understand and translate natural language instructions into code. Built on CodeLLaMa 13B, our model has been meticulously fine-tuned on curated code generation instructions, ensuring quality and precision. Thanks to the implementation of lambda attention, the model supports sequence lengths of 120K+ tokens without affecting perplexity.
+ Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks, aiming to revolutionize how systems understand and translate natural language instructions into code. Built on CodeLLaMa 13B, our model has been meticulously fine-tuned on curated code generation instructions, ensuring quality and precision.


  ## Generate responses
@@ -16,8 +16,8 @@ Inference code using the pre-trained model from the Hugging Face model hub
  import torch
  from transformers import AutoTokenizer, AutoModelForCausalLM

- tokenizer = AutoTokenizer.from_pretrained("budecosystem/sql-millennials-13b")
- model = AutoModelForCausalLM.from_pretrained("budecosystem/sql-millennials-13b")
+ tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-13b")
+ model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-13b")

  template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
  ### Instruction: {instruction} ### Response:"""
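The diff elides the middle of the README's inference snippet, between the prompt template above and the final `print(tokenizer.decode(sample[0]))` referenced in the next hunk. For reference, here is a minimal runnable sketch of the complete flow, assuming the standard transformers generation API; the instruction text and the `max_new_tokens` setting are illustrative assumptions, not from the README:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-13b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-13b")

template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction} ### Response:"""

# Fill the template with a concrete instruction (illustrative example).
prompt = template.format(instruction="Write a Python function that checks whether a number is prime.")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# max_new_tokens is an assumed setting, not from the README.
sample = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(sample[0]))
```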
@@ -32,32 +32,6 @@ print(tokenizer.decode(sample[0]))

  ```

- To get extended context length, use the generate.py file from the [GitHub repo](https://github.com/BudEcosystem/code-millenials):
-
- ```
- python generate.py --base_model budecosystem/code-millenials-13b
- ```
-
- You can integrate the model into your code by loading the convert_llama_model function:
-
- ```python
- import torch
- from transformers import GenerationConfig, AutoModelForCausalLM, AutoTokenizer
- from model.llama import convert_llama_model
-
- local_branch = 2048
- global_branch = 10
- limit_distance = 2048
-
- model = AutoModelForCausalLM.from_pretrained(
-     "budecosystem/code-millenials-13b",
-     torch_dtype=torch.float16,
-     device_map="auto",
- )
- model = convert_llama_model(model, local_branch, global_branch)
- ```

  ## Training details
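The last hunk removes the extended-context instructions from the README. For reference, a minimal end-to-end sketch of that removed path: it assumes `convert_llama_model` (from the BudEcosystem/code-millenials GitHub repo) wraps a loaded model with the repo's lambda attention as the removed snippet shows; the final generation call and its settings are illustrative assumptions, and `limit_distance` is left out because the removed snippet never passes it:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from model.llama import convert_llama_model  # from the BudEcosystem/code-millenials repo

# Window parameters as given in the removed snippet; exact semantics
# are an assumption based on the lambda-attention scheme.
local_branch = 2048
global_branch = 10

tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-13b")
model = AutoModelForCausalLM.from_pretrained(
    "budecosystem/code-millenials-13b",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = convert_llama_model(model, local_branch, global_branch)

# Illustrative generation call; prompt and settings are assumptions.
prompt = "### Instruction: Explain what this function does. ### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```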