Update README.md
README.md
CHANGED
@@ -6,7 +6,8 @@ language:
 - en
 - zh
 - ur
-base_model:
+base_model:
+- openai-community/gpt2-xl
 tags:
 - reasoning
 - tiny
@@ -18,46 +19,30 @@ tags:
 pipeline_tag: text-generation
 ---
 
-# XeTute/Intellect_V0.2-1.6B-Q8_0-GGUF
-This model was converted to GGUF format from [XeTute/Intellect_V0.2-1.6B](https://huggingface.co/XeTute/Intellect_V0.2-1.6B) using llama.cpp.
-Refer to the [original model card](https://huggingface.co/XeTute/Intellect_V0.2-1.6B) for more details on the model.
-
-## Use with llama.cpp
-Install llama.cpp through brew (works on Mac and Linux):
-
-```bash
-brew install llama.cpp
-```
-
-Invoke the llama.cpp server or the CLI.
-
-### CLI:
-```bash
-llama-cli --hf-repo XeTute/Intellect_V0.2-1.6B-Q8_0-GGUF --hf-file intellect_v0.2-1.6b-q8_0.gguf -p "The meaning to life and the universe is"
-```
-
-### Server:
-```bash
-llama-server --hf-repo XeTute/Intellect_V0.2-1.6B-Q8_0-GGUF --hf-file intellect_v0.2-1.6b-q8_0.gguf -c 2048
-```
-
-Note: You can also use this checkpoint directly via the usage steps listed in the llama.cpp repo.
-
-Step 1: Clone llama.cpp from GitHub.
-```
-git clone https://github.com/ggerganov/llama.cpp
-```
-
-Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag.
-```
-cd llama.cpp && LLAMA_CURL=1 make
-```
-
-Step 3: Run inference through the main binary.
-```
-./llama-cli --hf-repo XeTute/Intellect_V0.2-1.6B-Q8_0-GGUF --hf-file intellect_v0.2-1.6b-q8_0.gguf -p "The meaning to life and the universe is"
-```
-or
-```
-./llama-server --hf-repo XeTute/Intellect_V0.2-1.6B-Q8_0-GGUF --hf-file intellect_v0.2-1.6b-q8_0.gguf -c 2048
-```
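
For programmatic access, the `llama-server` commands above also expose an HTTP API. Below is a minimal Python sketch, assuming a server started as shown is listening on llama.cpp's default port 8080; the `/completion` endpoint and its `prompt`/`n_predict` fields follow llama.cpp's bundled server:

```python
# Query a locally running llama-server over HTTP (standard library only).
import json
import urllib.request

payload = {
    "prompt": "The meaning to life and the universe is",
    "n_predict": 128,  # upper bound on generated tokens
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])  # generated continuation
```
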
+> [!TIP]
+> Intellect V0.2 (1.6B) is a small model that is still under development and has not been extensively tested. We do not recommend deploying it for production use, but it performs well for private applications. Feedback is welcome.
+
+# Introduction
+We introduce **Intellect 1.6B (V0.2)**, our first-ever reasoning model. It is a full-parameter fine-tune of **GPT2-XL** (licensed under MIT), trained using the **Pakistan-China-Alpaca** dataset (licensed under MIT).
+Intellect V0.2 (1.6B) is licensed under **Apache 2.0**, meaning you are free to use it in personal projects. However, this fine-tune is highly experimental, and we do not recommend it for serious, production-ready deployments.
+[You can find the FP32 version here.](https://huggingface.co/XeTute/Intellect_V0.2-1.6B)
+
+# Usage
+Because the training data consisted only of one-message-in, one-message-out pairs, the model often repeats itself once the user sends a follow-up question.
+The chat template is Alpaca, which looks like the following:
+```txt
+### Instruction:
+{{{ INPUT }}}
+### Response:
+{{{ OUTPUT }}}
+```
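
As a sketch of how this template is applied in practice, the snippet below formats a single-turn prompt and generates with the FP32 checkpoint via Hugging Face `transformers`. The helper name and sampling settings are illustrative assumptions; `repetition_penalty` only partially mitigates the repetition issue noted above.

```python
# Build a single-turn Alpaca prompt and generate a response.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "XeTute/Intellect_V0.2-1.6B"  # FP32 checkpoint linked above
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

def alpaca_prompt(instruction: str) -> str:
    # Mirrors the template above; the model continues after "### Response:".
    return f"### Instruction:\n{instruction}\n### Response:\n"

inputs = tokenizer(alpaca_prompt("Why is the sky blue?"), return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,  # dampen the repetition noted above
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 defines no pad token
)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Keep only the first answer; the model may start a new "### Instruction:".
print(text.split("### Response:")[1].split("### Instruction:")[0].strip())
```
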
+# Training Details
+We used **SGD** (instead of AdamW) with an initial learning rate of **1.0e-5**, which allowed us to train the model with a batch size of **1** and a maximum context length of **1,024 tokens** (the maximum GPT2-XL supports) while staying within the **64 GB of memory** allocated to this project.
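
As a rough sketch of this setup in plain PyTorch: the optimizer, learning rate, batch size, and context length below come from the paragraph above, while the data pipeline and loop structure are our own assumptions. SGD keeps no per-parameter moment estimates, so it avoids the two extra optimizer tensors per weight that AdamW would add, which is what makes the 64 GB budget workable.

```python
# Full-parameter fine-tune of GPT2-XL with SGD, batch size 1, 1,024-token context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-xl")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2-xl")
model.train()

optimizer = torch.optim.SGD(model.parameters(), lr=1.0e-5)

def train_step(text: str) -> float:
    batch = tokenizer(text, truncation=True, max_length=1024, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])  # causal-LM cross-entropy
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

# Hypothetical single-turn Alpaca-formatted pair; substitute the real dataset.
example = "### Instruction:\nWhy is the sky blue?\n### Response:\nRayleigh scattering."
print(train_step(example))
```
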
+Training was completed in **under a day**, which is why **[PhantasiaAI](https://xetute.com/PhantasiaAI)** was unavailable on **05/02/2025 from 00:00 to 19:00**. The service is now fully operational.
+
+---
+
+[Visit our website.](https://xetute.com)
+[Check out our Character.AI alternative.](https://xetute.com/PhantasiaAI)
+[Support us financially.](https://ko-fi.com/XeTute)