Commit History
Update README.md
225d66b
Update README.md
63caac3
Update README.md
13e1339
Update README.md
a40d15f
add fine-tuned model tokenizer trained on multilingual conversations based on decapoda-research/llama-13b-hf model
b3bdbb1
delete base model tokenizer
d2d602e
commit updated notebook
c092b56
commit updated notebook
8c80ed9
add fine-tuned model trained on multilingual conversations based on decapoda-research/llama-13b-hf model
f8de50b
update readme with instructions on how to load and test the model
6e1cb11
update README.md with new instructions to call and run the fine-tuned model
325d3f4
delete finetuned folder
004ee3c
add base model in separate folder
407f08c
Update README.md
821b86d
Update README.md
0addb44
Update README.md with improved way to load and use the model
590318d
Update README.md
c2eb01d
Update README.md
4e35cec
Update README.md
e498874
Update README.md
48470f0
Update README.md
73222f7
Update README.md
47b5241
Update README.md
e6a0448
Update README.md with license
d8c436e
Update README.md
66ae84f
Update README.md with example
50a5adc
Update README.md
309db04
add tokenizer_config.json file from original model
2dd994b
sandiago21
committed on
Update README.md
b89e29d
Create README.md after uploading original bin files
9e486be
committing original decapoda-research/llama-13b-hf model
a82427a
sandiago21
committed on
committing adapters for first finetuned model
f42fd22
sandiago21
committed on
committing first finetuned model - bin file
f54749d
sandiago21
committed on
committing first finetuned model
0972dff
sandiago21
committed on