Uploaded model
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model: LeroyDyer/Mixtral_AI_CyberTron_Ultra
OK, it's a great model!
Highly trained on math, as well as on many textbooks and lessons, tightly fit datasets, and highly tuned coding datasets!
This model has absorbed all of its previous generations as well as all the high performers and specialist models (Mistral). It has absorbed many foreign-language models and still remains an English model!
Very impressive responses, short and long: it was trained on some binary datasets to return a direct answer, on others to produce step-by-step responses, and on others to hold interactive conversations with clients for various tasks, such as product design and system design discussions.
Financial information and other financial tasks have also been highly tuned. In fact, when returning to previously aligned datasets, the model stayed in line and was still able to achieve high tuning! Hence the process: merge with a model for a specific topic or role, then train for that role and topic on themed data. Previous iterations were heavily tuned for medical, law, or role play, as the conception was that integrating everything into a single entity might even corrupt the models, so the decision was taken to separate concerns. This enabled strategic merging and tuning (see the sketch below)!
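As a rough illustration of that merge-then-tune loop, here is a minimal sketch using PEFT; the adapter path, output names, and LoRA settings are placeholders, not the exact recipe used for this model:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel, LoraConfig, get_peft_model

# Load the current base and a previously trained specialist adapter
# (the adapter path here is illustrative).
base = AutoModelForCausalLM.from_pretrained(
    "LeroyDyer/Mixtral_AI_CyberTron_Ultra", torch_dtype=torch.float16
)
specialist = PeftModel.from_pretrained(base, "loras/medical_lora")

# Bake the specialist's weights into the base, then save the merged checkpoint.
merged = specialist.merge_and_unload()
merged.save_pretrained("CyberTron_plus_medical")

# Attach a fresh adapter and continue tuning on the next themed dataset.
model = get_peft_model(merged, LoraConfig(r=16, lora_alpha=16, task_type="CAUSAL_LM"))
```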
Concepts: chain of thought, function calling, and self-RAG! Thoughts and emotive responses have been enhanced where possible with the data given. Even sexy books have been highly tuned into the model; I also think American genre books (sci-fi, fantasy, romance novels) are required for the great role play that some expect. :) I have recently seen a strategy in which prompts can be embedded into the adapter to trigger specific roles. I have tried to replace such "you are a helpful AI" prompting with a character theme instead, such as "you are a cyber hacker by day and a businessman by night", i.e. to give the model various internal personas! After some training I noticed it was also talking to itself (rehearsing), but the tokens for thought were missing, so it looked strange until I noticed the bug: the tokenizer was masking the thought tokens. After removing the thought tokens from the special-tokens list, they were displayed in the output (see the example below).
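To show what that masking looks like in practice, here is a small sketch; the `<thought>` markers are hypothetical stand-ins for whatever thought tokens the training data actually used:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("LeroyDyer/Mixtral_AI_CyberTron_Ultra")
# Suppose the thought markers were registered as special tokens during training.
tok.add_special_tokens({"additional_special_tokens": ["<thought>", "</thought>"]})

ids = tok("<thought>rehearsing my reply</thought> Final answer.")["input_ids"]

# Decoding with skip_special_tokens=True drops the markers but keeps the
# rehearsal text, so the output reads like the model talking to itself:
print(tok.decode(ids, skip_special_tokens=True))
# Keeping special tokens makes the <thought> span visible again:
print(tok.decode(ids, skip_special_tokens=False))
```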
But still a great model! Given a task-based dataset it converges super quickly, hence my enjoyment of the model, as training it is super quick! Now when I load up datasets, there are generally only a few bad steps before the loss begins to drop, holding steady around 0.6 or so while the unseen new dataset is being learned, hence not needing so many epochs to adjust the matrix to the new information!
I'm not sure if LoRAs actually work when you save them, but I do save some and use them to load models for training! They are jump starts for a model which did not receive that fine-tuning; they can be merged and aligned! (They are probably good!) A sketch of this jump-start pattern follows.
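Something like this, assuming a previously saved adapter directory (the paths below are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load a fresh base model that never received this fine-tuning...
base = AutoModelForCausalLM.from_pretrained(
    "LeroyDyer/Mixtral_AI_CyberTron_Ultra", torch_dtype=torch.float16
)
# ...and jump-start it with a previously saved adapter, kept trainable
# so it can be aligned further before (optionally) merging it in.
model = PeftModel.from_pretrained(base, "loras/task_jumpstart", is_trainable=True)

# Once happy with it, bake the adapter in and save the result.
model = model.merge_and_unload()
model.save_pretrained("CyberTron_jumpstarted")
```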
MOTTO FOR THE MODEL!
**Models are the same as LoRAs: take them lightly, like tablets of knowledge!**
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
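For reference, that Unsloth + TRL recipe looks roughly like the sketch below; the dataset file, sequence length, and LoRA hyperparameters are placeholders rather than the exact settings used here:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the model in 4-bit for fast, memory-light fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/Mixtral_AI_CyberTron_Ultra",
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach a LoRA adapter over the usual attention/MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    # Any dataset with a "text" column works; this file name is a placeholder.
    train_dataset=load_dataset("json", data_files="themed_data.jsonl", split="train"),
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```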