Commit 06d94e1 (verified) by nikhil07prakash · Parent: faa4824

Update README.md

Updated model card details.

Files changed (1): README.md (+41 −0)
---
license: mit
---

# Model Card for Float-7B

<!-- Provide a quick summary of what the model is/does. -->

This model is a vanilla fine-tuned version of the [Llama-7B](https://huggingface.co/huggyllama/llama-7b) model on synthetically generated arithmetic tasks. It was introduced in [this paper](https://openreview.net/forum?id=8sKcAWOf2D). It is very similar to [Goat-7B](https://github.com/liutiedong/goat), except that it was trained without LoRA.
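As an illustration of what "synthetically generated arithmetic tasks" can look like, here is a minimal sketch of generating prompt/completion pairs. The task mix, operand ranges, and prompt format below are assumptions for illustration, not the exact recipe used to train this model:

```python
import random

def make_arithmetic_example(rng: random.Random) -> dict:
    """Generate one synthetic arithmetic prompt/completion pair (hypothetical format)."""
    op = rng.choice(["+", "-", "*"])          # assumed task mix
    a = rng.randint(0, 10**4)                 # assumed operand range
    b = rng.randint(0, 10**4)
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]
    return {"prompt": f"{a} {op} {b} = ", "completion": str(answer)}

rng = random.Random(0)
for example in (make_arithmetic_example(rng) for _ in range(3)):
    print(example["prompt"] + example["completion"])
```

Pairs like these can then be fed to a standard causal-LM fine-tuning loop, with the loss computed on the completion tokens.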
## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [Nikhil Prakash](https://nix07.github.io/)
- **Model type:** Autoregressive decoder-only language model
- **License:** MIT License
- **Finetuned from model [optional]:** [Llama-7B](https://huggingface.co/huggyllama/llama-7b)

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** TODO
- **Paper [optional]:** [Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking](https://openreview.net/forum?id=8sKcAWOf2D)

## How to Get Started with the Model

Use the code below to get started with the model:

```python
from transformers import AutoModelForCausalLM

# Load as a causal LM, which attaches the generation head of this
# decoder-only model (AutoModel would return hidden states only).
model = AutoModelForCausalLM.from_pretrained("nikhil07prakash/float-7b")
```
## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

TODO