GGUF
English
Inference Endpoints
mav23 committed (verified)
Commit f123ea7 · 1 Parent(s): c0c880b

Upload folder using huggingface_hub

Files changed (3):
  1. .gitattributes +1 -0
  2. README.md +89 -0
  3. chessgpt-chat-v1.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+chessgpt-chat-v1.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,89 @@
---
license: apache-2.0
language:
- en
datasets:
- Waterhorse/chess_data
- anon8231489123/ShareGPT_Vicuna_unfiltered
- OpenAssistant/oasst1
- vicgalle/alpaca-gpt4
---
# Chessgpt-Chat-v1

Chessgpt-Chat-v1 is the SFT-tuned (supervised fine-tuned) chat model built on Chessgpt-Base-v1.

- Base Model: [Chessgpt-base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1)
- Chat Version: [Chessgpt-chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1)

We are also actively developing the next-generation model, ChessGPT-V2, and we welcome contributions, especially chess-related datasets. For related matters, please contact [email protected].
## Model Details

- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B-parameter language model pretrained and fine-tuned on chess data.
## GPU Inference

This requires a GPU with at least 8 GB of memory. Note that this example loads the original full-precision `Waterhorse/chessgpt-chat-v1` checkpoint with `transformers`; the Q4_0 GGUF file added in this commit is meant for llama.cpp-compatible runtimes, as sketched after the code block.
```python
import torch
import transformers
from packaging import version
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# Check the transformers version numerically (a plain string comparison
# would mis-order versions such as '4.9' vs '4.25').
assert version.parse(transformers.__version__) >= version.parse(MIN_TRANSFORMERS_VERSION), \
    f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# Load the tokenizer and the fp16 model, then move the model to the GPU.
tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-chat-v1")
model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-chat-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')

# Inference: turns are separated by <|endoftext|> and attributed to numbered speakers.
# Conversation between two speakers:
prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1:"
# Conversation between more than two speakers:
# prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1: Sicilian defense.<|endoftext|>Human 2:"

inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True,
)
# Decode only the newly generated tokens (everything after the prompt).
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
```
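Since this repository ships the model as a Q4_0 GGUF file rather than transformers weights, a llama.cpp-style runtime can also run it. Below is a minimal sketch using `llama-cpp-python`; the file name comes from this commit, the sampling parameters mirror the transformers example above, and everything else (install step, context size, GPU offload setting) is an assumption rather than an official recipe.

```python
# Minimal sketch: inference on the Q4_0 GGUF with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that chessgpt-chat-v1.Q4_0.gguf
# (the file added in this commit) is present in the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="chessgpt-chat-v1.Q4_0.gguf",
    n_ctx=2048,        # context window; an assumed value, adjust to taste
    n_gpu_layers=-1,   # offload all layers if built with GPU support; 0 for CPU-only
)

# Same conversation format as the transformers example above.
prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1:"

out = llm(prompt, max_tokens=128, temperature=0.7, top_p=0.7, top_k=50)
print(out["choices"][0]["text"])
```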
# Uses

Direct and out-of-scope uses are described below.
### Direct Use

`chessgpt-chat-v1` is mainly intended for research on large language models, especially research on policy learning and language modeling.

#### Out-of-Scope Use

`chessgpt-chat-v1` is a language model trained on chess-related data and may not perform well on use cases beyond the chess domain.

#### Bias, Risks, and Limitations

Just as with any language model, chessgpt-chat-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases.
# Evaluation

Please refer to our [paper](https://arxiv.org/abs/2306.09200) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results.

# Citation Information

```bibtex
@article{feng2023chessgpt,
  title={ChessGPT: Bridging Policy Learning and Language Modeling},
  author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun},
  journal={arXiv preprint arXiv:2306.09200},
  year={2023}
}
```
chessgpt-chat-v1.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f2283a9ef92a0566b3932b61cab2691fc4680909be5ebfdba0ae17474129d42
+size 1600180832
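The entry above is a Git LFS pointer file: it records only the SHA-256 and size (about 1.6 GB) of the actual weights, which live in LFS storage. A minimal sketch of fetching the real file with `huggingface_hub` follows; the repo id is a hypothetical placeholder for this repository's id.

```python
# Minimal sketch: download the real GGUF file behind the LFS pointer.
# Assumption: the repo id is "mav23/chessgpt-chat-v1-GGUF"; replace it with
# the actual repository this commit belongs to.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mav23/chessgpt-chat-v1-GGUF",   # hypothetical repo id
    filename="chessgpt-chat-v1.Q4_0.gguf",   # file added in this commit
)
print(path)  # local cache path to the ~1.6 GB quantized model
```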