Karthik2510 committed on
Commit dc86a9a · verified · 1 Parent(s): 92ba95c

Update README.md

Files changed (1)
  1. README.md +1 -34
README.md CHANGED
@@ -47,39 +47,6 @@ The fine-tuning process involves using **QLoRA** to adapt the pre-trained model
 - **Paper [optional]:** [More Information Needed]
 - **Demo [optional]:** [More Information Needed]
 
-## Uses
-
-<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
-### Direct Use
-
-<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
-[More Information Needed]
-
-### Downstream Use [optional]
-
-<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
-[More Information Needed]
-
-### Out-of-Scope Use
-
-<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
-[More Information Needed]
-
-## Bias, Risks, and Limitations
-
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
-[More Information Needed]
-
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
 ## How to Get Started with the Model
 
@@ -99,7 +66,7 @@ input_text = "What is the medical definition of pneumonia?"
 inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
 outputs = model.generate(**inputs, max_new_tokens=100)
 print(tokenizer.decode(outputs[0], skip_special_tokens=True))
-
+```
 
 ## Training Details
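Besides deleting the unfilled template sections, the commit's one added line closes a Python code fence that the README's "How to Get Started" snippet had left open, which would otherwise swallow the "## Training Details" heading into the code block. As an illustration, a small hypothetical helper (not part of this repo) can check a Markdown file for balanced ``` fences:

```python
def fences_balanced(markdown: str) -> bool:
    """Return True if every ``` code fence opened in the text is also closed.

    Simplification: treats any line starting with ``` as a fence toggle,
    which is how CommonMark-style renderers pair fences in simple documents.
    """
    open_fence = False
    for line in markdown.splitlines():
        if line.strip().startswith("```"):
            open_fence = not open_fence
    # A well-formed file ends with no fence left open.
    return not open_fence


# Before this commit, the README's snippet had no closing fence:
broken = "## How to Get Started\n```python\nprint('hi')\n\n## Training Details\n"
fixed = broken + "```\n"
```

Running `fences_balanced` on the two strings shows the state before and after the fix: `broken` leaves a fence open, while `fixed` does not.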