michaelfeil committed on
Commit
8fe89bd
1 Parent(s): 8a7642c

Upload mosaicml/mpt-7b-instruct ctranslate fp16 weights

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -16,14 +16,14 @@ Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on
 
 quantized version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct)
 ```bash
-pip install hf-hub-ctranslate2>=2.0.8
+pip install hf-hub-ctranslate2>=2.0.8 ctranslate2>=3.14.0
 ```
-Converted on 2023-05-30 using
+Converted on 2023-05-31 using
 ```
-ct2-transformers-converter --model mosaicml/mpt-7b-instruct --output_dir /home/michael/tmp-ct2fast-mpt-7b-instruct --force --copy_files configuration_mpt.py meta_init_context.py tokenizer.json hf_prefixlm_converter.py README.md tokenizer_config.json blocks.py adapt_tokenizer.py attention.py norm.py generation_config.json flash_attn_triton.py special_tokens_map.json param_init_fns.py .gitattributes --quantization float16 --trust_remote_code
+ct2-transformers-converter --model mosaicml/mpt-7b-instruct --output_dir /home/michael/tmp-ct2fast-mpt-7b-instruct --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization float16 --trust_remote_code
 ```
 
-Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
+Checkpoint compatible to [ctranslate2>=3.14.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.8](https://github.com/michaelfeil/hf-hub-ctranslate2)
 - `compute_type=int8_float16` for `device="cuda"`
 - `compute_type=int8` for `device="cpu"`
 
@@ -42,7 +42,8 @@ model = GeneratorCT2fromHfHub(
 )
 outputs = model.generate(
     text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
-    max_length=64
+    max_length=64,
+    include_prompt_in_result=False
 )
 print(outputs)
 ```
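Put together, the README snippet being edited above amounts to the following usage. This is a minimal sketch, not a verified run: it assumes the `GeneratorCT2fromHfHub` API as shown in the diff, and the repository id `michaelfeil/ct2fast-mpt-7b-instruct` is inferred from the commit context, not stated in the diff itself.

```python
# Sketch assembled from the README snippet in the diff above.
# Assumes: pip install hf-hub-ctranslate2>=2.0.8 ctranslate2>=3.14.0
# The repo id below is an assumption inferred from the commit context.
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-mpt-7b-instruct",
    device="cuda",
    compute_type="int8_float16",  # per the README: use "int8" on device="cpu"
)
outputs = model.generate(
    text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
    max_length=64,
    include_prompt_in_result=False,  # return only the completion, not the prompt
)
print(outputs)
```

Downloading and running the 7B checkpoint requires a GPU and several GB of disk, so treat this as illustrative rather than something to run in a constrained environment.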