DavidGF committed
Commit 2e9a091
1 parent: 97b61c4

Update README.md

Files changed (1)
  1. README.md +1 -6
README.md CHANGED
@@ -41,17 +41,12 @@ Extensible Configuration: Leverages a custom configuration setup that can be eas
 **How to load and call Kraken-LoRA model :**
 ```
 from transformers import AutoConfig, AutoModelForCausalLM
-from configuration_kraken_lora import KrakenConfig
-from modeling_kraken_lora import KrakenForCausalLM
-
-AutoConfig.register("kraken", KrakenConfig)
-AutoModelForCausalLM.register(KrakenConfig, KrakenForCausalLM)
 
 device = "cuda:0" ## Setup "cuda:0" if NVIDIA, "mps" if on Mac
 
 # Load the model and config:
 config = AutoConfig.from_pretrained("./kraken_model")
-model = AutoModelForCausalLM.from_pretrained("./kraken_model", config=config, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained("./kraken_model", trust_remote_code=True)
 ```
 
 # Call the Reasoning LoRA-expert:
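
The removed lines performed manual registration: they mapped the `"kraken"` model-type string to its config class and that config class to its model class, so the `Auto*` factories could resolve them. With `trust_remote_code=True`, `from_pretrained` instead discovers those classes from the code shipped in the model repository, making the manual `register` calls redundant. As a rough illustration of the registry pattern those calls rely on, here is a minimal pure-Python sketch (illustrative only — not the actual `transformers` implementation; the class names merely mirror the README):

```python
# Minimal sketch of the model-type registry pattern behind
# AutoConfig.register / AutoModelForCausalLM.register.
# Illustrative only -- not the real transformers internals.

CONFIG_REGISTRY = {}   # model_type string -> config class
MODEL_REGISTRY = {}    # config class -> model class

class KrakenConfig:
    model_type = "kraken"

class KrakenForCausalLM:
    def __init__(self, config):
        self.config = config

def register_config(model_type, config_cls):
    CONFIG_REGISTRY[model_type] = config_cls

def register_model(config_cls, model_cls):
    MODEL_REGISTRY[config_cls] = model_cls

def auto_model_from_config(config):
    # Resolve the concrete model class via the config's class,
    # as the AutoModelForCausalLM factory does.
    return MODEL_REGISTRY[type(config)](config)

register_config("kraken", KrakenConfig)
register_model(KrakenConfig, KrakenForCausalLM)

config = CONFIG_REGISTRY["kraken"]()
model = auto_model_from_config(config)
print(type(model).__name__)  # -> KrakenForCausalLM
```

Because `trust_remote_code=True` populates this mapping automatically from the repo's custom code, the README after this commit only needs the single `from_pretrained` call.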