Gregory-L committed on
Commit b048b96 · verified · 1 Parent(s): 243bbeb

Update README.md


https://github.com/pythaiml/automindx

Files changed (1)
  1. README.md +39 -2
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- title: aGLM A General Learing Model
  emoji: 🔥
  colorFrom: black
  colorTo: green
@@ -7,4 +7,41 @@ sdk: static
  pinned: false

  machine learning as a process
- research into machine learning intelligence principals and application
  ---
+ title: aGLM Autonomous General Learning Model
  emoji: 🔥
  colorFrom: black
  colorTo: green

  pinned: false

  machine learning as a process
+ research into machine learning intelligence principles and application
+
+ The first iteration of aglm.py is available in the Professor-Codephreak LLM codebase as <a href="https://github.com/pythaiml/automindx">automindx</a>.
+ automindx is my earliest solution to machine memory, with aglm as the memory parser.
+
+ aglm.py - Autonomous General Learning Model Overview
+
+ The aglm.py module implements an Autonomous General Learning Model (AGLM) that uses a pre-trained language model to generate contextual responses from a conversation history stored in memory files.
+
+ Classes and Functions
+
+ LlamaModel
+
+ This class represents the AGLM. It is responsible for initializing the language model and tokenizer, and for generating contextual responses based on conversation history.
+
+ __init__(self, model_name, models_folder): Constructor that creates the AGLM with the specified model_name and models_folder.
+
+ initialize_model(self): Loads the language model and tokenizer from the specified model_name and models_folder.
+
+ generate_contextual_output(self, conversation_context): Generates a contextual response for the given conversation context. It formats the conversation history with format_to_llama_chat_style and produces a response with the pre-trained language model.
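A minimal sketch of what such a class could look like, assuming the constructor arguments above. The generation settings (e.g. max_new_tokens) and the behavior of format_to_llama_chat_style are illustrative assumptions, and the heavyweight imports are deferred into the methods so the class definition stands alone:

```python
import os


class LlamaModel:
    """Sketch of the AGLM wrapper described above, not the exact automindx code."""

    def __init__(self, model_name, models_folder):
        self.model_name = model_name
        self.models_folder = models_folder
        self.model = None
        self.tokenizer = None

    def initialize_model(self):
        # Deferred import: transformers is only needed once a model is loaded.
        from transformers import AutoModelForCausalLM, AutoTokenizer
        path = os.path.join(self.models_folder, self.model_name)
        self.tokenizer = AutoTokenizer.from_pretrained(path)
        self.model = AutoModelForCausalLM.from_pretrained(path)

    def generate_contextual_output(self, conversation_context):
        # format_to_llama_chat_style comes from the automind module; it is
        # assumed here to turn the conversation history into a prompt string.
        from automind import format_to_llama_chat_style
        prompt = format_to_llama_chat_style(conversation_context)
        inputs = self.tokenizer(prompt, return_tensors="pt")
        output_ids = self.model.generate(**inputs, max_new_tokens=256)
        return self.tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Calling initialize_model() before generate_contextual_output() is the caller's responsibility in this sketch, mirroring the two-step description above.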
+
+ determine_batch_size()
+
+ A utility function that determines an appropriate batch size for processing memory files based on available system memory. It calculates the batch size from the total available memory and a predefined maximum memory usage threshold.
+
+ main()
+
+ The main entry point of the script. It reads conversation history from memory files in batches, generates a contextual response using the LlamaModel class, and prints the response.
+
+ Usage
+
+ Import the necessary modules: os, glob, ujson, psutil, AutoModelForCausalLM and AutoTokenizer from the transformers library, and format_to_llama_chat_style from automind.
+
+ Define the LlamaModel class, which encapsulates the AGLM's behavior: it initializes the language model and tokenizer and generates responses from conversation context.
+
+ Define the utility function determine_batch_size(), which calculates an appropriate batch size based on system memory.
+
+ Define the main() function, which reads memory files in batches, generates responses, and prints the generated response.
+
+ If the script is executed as the main program (if __name__ == '__main__':), it calls main() to run the AGLM.
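Taken together, the steps above could be sketched as a minimal module skeleton. The memory folder name, file pattern, and sizing constants are illustrative assumptions, stdlib json stands in when ujson is absent, and the model call inside main() is left as a comment because it depends on the LlamaModel class described earlier:

```python
import glob
import os

import psutil

try:
    import ujson as json  # aglm.py uses ujson; stdlib json is a drop-in fallback
except ImportError:
    import json

MAX_MEMORY_FRACTION = 0.5          # illustrative threshold, not the exact constant
AVG_MEMORY_FILE_SIZE = 512 * 1024  # illustrative per-file size estimate, bytes


def determine_batch_size():
    """Choose how many memory files to load per batch from available RAM."""
    budget = psutil.virtual_memory().available * MAX_MEMORY_FRACTION
    # Always process at least one file per batch.
    return max(1, int(budget // AVG_MEMORY_FILE_SIZE))


def read_memory_batches(memory_folder, batch_size):
    """Yield parsed conversation entries, batch_size files at a time."""
    files = sorted(glob.glob(os.path.join(memory_folder, "*.json")))
    for i in range(0, len(files), batch_size):
        batch = []
        for path in files[i:i + batch_size]:
            with open(path) as fh:
                batch.append(json.load(fh))
        yield batch


def main():
    # A LlamaModel instance would be initialized here and fed each batch
    # of conversation history; the generation step is sketched as comments.
    batch_size = determine_batch_size()
    for batch in read_memory_batches("memory", batch_size):
        # response = model.generate_contextual_output(batch)
        # print(response)
        pass


if __name__ == "__main__":
    main()
```

Batching by file count rather than loading every memory file at once is what keeps peak memory bounded on machines with little free RAM.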
+
+ Example Use Case
+
+ The aglm.py script could be used as part of a larger system that draws on conversation memory to generate context-aware responses in a chatbot or virtual assistant application. It reads conversation history from memory files, processes the data in batches to manage memory usage, generates responses with a pre-trained language model, and prints the generated response to the console.
+
+ By integrating the aglm.py module with other components, developers can create more intelligent, contextually aware conversational agents.