Update README.md

README.md CHANGED
@@ -1,77 +1,59 @@

Removed:

```jsx
return (
  <div className="w-full max-w-4xl mx-auto p-6">
    <Card className="bg-gradient-to-r from-red-500 to-pink-500">
      <CardHeader className="space-y-4">
        <div className="flex items-center justify-between">
          <div className="flex items-center space-x-2">
            <span className="text-4xl">π»</span>
            <h1 className="text-3xl font-bold text-white">Cakrawala AI</h1>
          </div>
          <Badge className="bg-white/20 text-white hover:bg-white/30">
            SDK: static
          </Badge>
        </div>
        <p className="text-white/90 text-lg italic">
          "Where Worlds Converge and Adventures Begin!"
        </p>
      </CardHeader>

      <div className="bg-black/20 rounded-lg p-4">
        <h2 className="text-xl font-semibold mb-3 flex items-center gap-2">
          <BrainCircuit className="h-5 w-5" />
          Latest Models
        </h2>

        <div className="space-y-4">
          <div className="border-l-4 border-white/30 pl-4">
            <h3 className="text-lg font-semibold">π Cakrawala-70B</h3>
            <p className="text-sm text-white/80 mt-1">
              Fine-tuned variant of Llama-3.1-70B-Instruct optimized for rich roleplaying conversations
            </p>
            <div className="mt-2 flex items-center gap-2">
              <Cpu className="h-4 w-4" />
              <span className="text-sm">8 x H100 NVL GPUs</span>
            </div>
          </div>

            <p className="text-sm text-white/80 mt-1">
              Fine-tuned variant of Llama-3.1-8B-Instruct for detailed character interactions
            </p>
            <div className="mt-2 flex items-center gap-2">
              <Cpu className="h-4 w-4" />
              <span className="text-sm">2 x H100 SXM GPUs</span>
            </div>
          </div>
        </div>
      </div>

      <h2 className="text-xl font-semibold mb-3">Training Highlights</h2>
      <ul className="list-disc list-inside space-y-2 text-sm">
        <li>5,867 conversation pairs</li>
        <li>Minimum 12-13 turns per conversation</li>
        <li>Focus on expressions & character consistency</li>
        <li>QLoRA fine-tuning over 3 epochs</li>
      </ul>
      </div>

      <span>Built with love for roleplayers, by roleplayers</span>
      </div>
    </CardContent>
  </Card>
  </div>
);
};
```

Added:

---
title: NarrativAI
emoji: π
colorFrom: red
colorTo: purple
sdk: static
pinned: true
---

# NarrativAI π

Welcome to NarrativAI, where we're aiming to revolutionize AI roleplaying. Our mission is to create AI models that don't just participate in roleplay, but truly understand and convey emotional depth, subtle character dynamics, and complex interpersonal relationships.

## Our Vision
We believe that meaningful storytelling goes beyond just words; it's about understanding the intricate tapestry of human emotions, relationships, and motivations. Our models are specifically trained to grasp emotional subtext, character development, and interpersonal dynamics in ways that make roleplay feel genuinely human.

## Latest Models

### π Cakrawala-70B
Our flagship model, built on Llama-3.1-70B-Instruct and trained on 8 x H100 NVL GPUs, is optimized for emotional intelligence in storytelling and for generating rich roleplaying conversations and character interactions. It has been trained to excel at producing detailed, contextually appropriate character dialogues with rich descriptions of physical actions, expressions, and emotional states, while maintaining consistent character voices and perspectives throughout extended interactions.

#### Training Specifications:
- Specialized dataset of 5,867 emotionally rich conversations
- Minimum 12-13 turns per conversation for deep emotional development
- Fine-tuned using QLoRA over 3 epochs
- Focus on emotional consistency and character growth
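
To make the turn-count requirement concrete, here is a purely hypothetical sketch of what one multi-turn record could look like in a generic chat format; the actual dataset schema and field names are not described on this page.

```python
# Hypothetical shape of a single training record: one long roleplay
# conversation with alternating user/assistant turns. Field names and
# content are illustrative only; the real dataset schema is not published here.
example_record = {
    "conversations": [
        {"role": "system", "content": "Character card: Kael, a guarded mercenary with a dry sense of humour."},
        {"role": "user", "content": "*lowers her hood and studies him* You don't look like much of a bodyguard."},
        {"role": "assistant", "content": "*keeps sharpening his blade* Looks are cheap. Dead clients are expensive."},
        # ...and so on, for at least 12-13 turns per conversation
    ]
}
```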

#### Technical Parameters:
- Gradient Accumulation Steps: 16
- Micro Batch Size: 4
- Learning Rate: 0.0003
- Optimizer: AdamW
- Mixed Precision: BF16 & FP16 with TF32 support
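
As a rough illustration of how these settings fit together, below is a minimal QLoRA sketch using `transformers` and `peft`. Only the hyperparameters listed above come from this page; the base-model repo id, LoRA rank, alpha, dropout, and target modules are placeholder assumptions.

```python
# Illustrative QLoRA setup that mirrors the parameters listed above.
# The LoRA rank/alpha/target_modules and the base-model repo id are
# placeholder assumptions, not confirmed values from the Cakrawala run.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-3.1-70B-Instruct"  # assumed base checkpoint

# 4-bit quantization keeps the frozen base weights small enough to fit in GPU
# memory while only the low-rank adapters are trained (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters are the only trainable weights.
lora_config = LoraConfig(
    r=64,                   # placeholder rank
    lora_alpha=16,          # placeholder scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # placeholder
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hyperparameters taken from the list above.
training_args = TrainingArguments(
    output_dir="cakrawala-qlora",
    num_train_epochs=3,
    per_device_train_batch_size=4,   # micro batch size
    gradient_accumulation_steps=16,
    learning_rate=3e-4,              # 0.0003
    optim="adamw_torch",             # AdamW
    bf16=True,                       # BF16 mixed precision
    tf32=True,                       # TF32 matmuls on H100-class GPUs
)
```

The prepared `model`, `tokenizer`, the conversation dataset, and `training_args` would then go to a standard `transformers.Trainer` (or TRL's `SFTTrainer`) to run the three epochs.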

### π Cakrawala-8B
Our compact model, built on Llama-3.1-8B-Instruct and trained on 2 x H100 SXM GPUs, delivers high-quality roleplay in a lighter package.
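
For anyone who wants to try the models, a minimal chat-style generation sketch with the `transformers` pipeline is shown below; the repo id and the prompt are illustrative assumptions, so substitute the actual model id published on the Hub.

```python
# Minimal roleplay-style generation sketch. The model id below is an
# assumption; replace it with the actual Cakrawala repository on the Hub.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="NarrativAI/Cakrawala-8B",  # assumed repo id
    device_map="auto",
    torch_dtype="auto",
)

messages = [
    {"role": "system", "content": "You are Mira, a wry tavern keeper. Stay in character."},
    {"role": "user", "content": "*pushes the door open, shaking rain from my cloak* Rough night out there."},
]

result = chat(messages, max_new_tokens=256, do_sample=True, temperature=0.8)
print(result[0]["generated_text"][-1]["content"])
```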

## What Sets Us Apart

### Emotional Depth Training
Our models are specifically trained to understand:
- Complex emotional states and transitions
- Non-verbal emotional cues
- Psychological motivation and character growth
- Consistent emotional growth arcs
- Memory of emotional experiences
- Authentic personality expression
- Behavioral consistency with emotional state

## License & Credits
- Licensed under MIT
- Based on meta-llama/Llama-3.1 models
- *Created with empathy, driven by understanding*

---

*Join us in revolutionizing AI storytelling through emotional intelligence and deeper character understanding.*

For more information, collaboration opportunities, or to contribute to our mission, please reach out to our team.