---
license: wtfpl
datasets:
- Biddls/Onion_News
language:
- en
metrics:
- f1
- accuracy
- precision
- perplexity
base_model:
- Wonder-Griffin/TraXL
library_name: transformers
---

# TraXLMistral

Created by: Morgan Griffin & WongrifferousAI (Wonder-Griffin)

## Model Description

TraXLMistral is a custom language model based on the GPT-2 architecture, with additional enhancements for tasks including causal language modeling, sequence classification, and question answering. The model incorporates several advanced techniques such as sparse attention, memory-augmented neural networks (MANN), adaptive computation time (ACT), and latent space clustering, making it suitable for both reasoning and general-purpose text generation.

## Key Features

- **Sparse Attention:** An efficient attention mechanism inspired by Mistral that focuses computational resources on the most important elements of the sequence (see the sketch after this list).
- **Memory-Augmented Neural Networks (MANN):** Enhances model capacity by adding external memory to better handle long-term dependencies and complex reasoning tasks.
- **Adaptive Computation Time (ACT):** Dynamically adjusts the number of computation steps based on the complexity of the input.
- **Latent Space Clustering:** Clusters latent representations for improved interpretability and task-specific adjustments.
- **Logical Transformer Layer:** Improves the model's reasoning capabilities by integrating logical transformations.

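For intuition, the sparse attention idea can be illustrated with a Mistral-style sliding-window mask. The sketch below is a minimal, assumed implementation for illustration only (the window size and tensor shapes are assumptions); it is not the model's actual code.

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask where position i may attend to positions [i - window + 1, i]."""
    idx = torch.arange(seq_len)
    causal = idx[None, :] <= idx[:, None]            # never attend to future tokens
    local = (idx[:, None] - idx[None, :]) < window   # only the most recent `window` tokens
    return causal & local

def sparse_attention(q, k, v, window: int = 64):
    # q, k, v: (batch, heads, seq_len, head_dim); illustrative sliding-window attention
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    mask = sliding_window_mask(q.size(-2), window).to(q.device)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```
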
## Intended Uses & Limitations

### Use Cases

- **Text Generation:** Generating coherent and contextually relevant text across a wide range of domains, including conversational agents, story generation, and creative writing.
- **Question Answering:** Providing accurate and concise answers to natural language questions.
- **Sequence Classification:** Classifying text into predefined categories, e.g. for sentiment analysis or document categorization.
- **Conversational AI:** Suitable for applications requiring interactive, context-aware conversation.

### Limitations

- The model may require additional fine-tuning for domain-specific tasks where the input data differs significantly from the training data.
- Due to the sparse attention and memory modules, the model may require more resources (GPU memory) than simpler architectures.

## Training Procedure

The model was trained on the Wikitext-raw-01 dataset (details needed) and fine-tuned for tasks such as causal language modeling, question answering, and sequence classification.

### Training Hyperparameters

- Learning Rate: 5e-05
- Train Batch Size: 8
- Eval Batch Size: 8
- Optimizer: Adam (betas = (0.9, 0.999), epsilon = 1e-08)
- LR Scheduler: Linear
- Training Steps: 100,000
- Seed: 42

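For reference, these settings correspond roughly to the following `transformers.TrainingArguments`; the output directory and anything not listed above are illustrative assumptions rather than the exact configuration used.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the training setup described above.
# "./traxlmistral-out" is an assumed output path, not the original one.
training_args = TrainingArguments(
    output_dir="./traxlmistral-out",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=100_000,
    seed=42,
)
```
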
### Training Environment

- Transformers version: 4.45.0.dev0
- PyTorch version: 2.4.0+cu124
- Datasets version: 2.20.0
- Tokenizers version: 0.19.1
- GPU: The model is trained with GPU acceleration, with checks for CUDA availability and multiple GPUs (a generic device-selection sketch follows this list).

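The device checks mentioned above typically look like the following generic sketch (not the project's exact code):

```python
import torch

# Generic device selection: use CUDA when available and report the visible GPU count.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
num_gpus = torch.cuda.device_count() if torch.cuda.is_available() else 0
print(f"Using device: {device} ({num_gpus} GPU(s) visible)")
```
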
## Model Architecture

### Configuration

- Model Type: Hybrid Transformer with GPT/Mistral/TransformerXL (Causal LM)
- Vocab Size: 50256
- Hidden Size: 768
- Number of Layers: 4
- Number of Attention Heads: 4
- Feedforward Expansion Factor: 4
- RNN Units: 128
- Max Sequence Length: 256
- Dropout Rate: 0.1
- Sparse Attention: Enabled
- Memory Size: 256
- Max Computation Steps: 5
- Dynamic Routing: Enabled

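Collected as a plain configuration dictionary, these settings look roughly as follows; the key names are illustrative assumptions, not the repository's actual config schema.

```python
# Illustrative summary of the architecture configuration above.
# Key names are assumed for readability and may not match the real config fields.
traxlmistral_config = {
    "model_type": "hybrid GPT/Mistral/Transformer-XL causal LM",
    "vocab_size": 50256,
    "hidden_size": 768,
    "num_layers": 4,
    "num_attention_heads": 4,
    "ff_expansion_factor": 4,
    "rnn_units": 128,
    "max_seq_length": 256,
    "dropout": 0.1,
    "sparse_attention": True,
    "memory_size": 256,
    "max_computation_steps": 5,
    "dynamic_routing": True,
}
```
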
### Special Modules

- **Sparse Attention Layer:** Improves efficiency by reducing unnecessary attention computation.
- **Adaptive Computation Time (ACT):** Adjusts computation time based on input complexity.
- **Memory-Augmented Neural Networks (MANN):** Provides external memory to help with long-term dependencies.
- **Latent Space Clustering:** Clusters latent representations for improved task-specific behavior.
- **Logical Transformer Layer:** Improves reasoning and logic-based tasks.

### Supported Tasks

- **Causal Language Modeling (`causal_lm`):** Generates text sequences from a given prompt.
- **Question Answering (`qa`):** Extracts relevant answers from a context given a question.
- **Sequence Classification:** Classifies input sequences into one of the predefined labels.

## Evaluation

The model was evaluated on several NLP benchmarks, but detailed results are pending. The primary evaluation metrics are:

- Accuracy
- F1-score
- Precision

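These metrics can be computed with the `evaluate` library; the sketch below assumes you already have model predictions and reference labels as integer lists (the values shown are placeholders).

```python
import evaluate

# Hypothetical predictions/references for illustration only.
predictions = [1, 0, 1, 1]
references = [1, 0, 0, 1]

accuracy = evaluate.load("accuracy").compute(predictions=predictions, references=references)
f1 = evaluate.load("f1").compute(predictions=predictions, references=references)
precision = evaluate.load("precision").compute(predictions=predictions, references=references)

print(accuracy, f1, precision)
```
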
## Intended Users

This model is designed for researchers, developers, and organizations looking to implement advanced NLP models in production. It can be used for building conversational agents, question-answering systems, text generation applications, and more.

## How to Use

Inference example:

```python
from transformers import BertTokenizerFast, TraXLMistral

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
model = TraXLMistral.from_pretrained('Wonder-Griffin/TraXLMistral')

input_text = "What is the capital of France?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
# Decode the generated token ids back into text.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

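Note that `TraXLMistral` is a custom model class rather than a built-in `transformers` class. If it is not importable in your environment, a reasonable alternative (assuming the repository ships its custom modeling code) is to load through the Auto classes with remote code enabled:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumes the Hub repo provides custom modeling code; otherwise this will fail.
tokenizer = AutoTokenizer.from_pretrained("Wonder-Griffin/TraXLMistral", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Wonder-Griffin/TraXLMistral", trust_remote_code=True)
```
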
## Limitations and Future Work

- **Limited Training Data:** Future iterations should focus on expanding the dataset and improving performance across different languages and domains.
- **Memory Usage:** Due to its complex architecture, this model might require optimizations for resource-constrained environments.

## Acknowledgements

**Created by Morgan Griffin and WongrifferousAI (Wonder-Griffin)**