---
library_name: transformers
license: apache-2.0
---
# Model Card for MiniCPM-2B-RAFT-lora-hotpotqa-dev
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub: a LoRA fine-tune of MiniCPM-2B on a RAFT-style subset of the HotpotQA dev set (see Training Details below). This model card has been automatically generated.
- **Developed by:** [Isaac Chung](https://huggingface.co/isaacchung)
<!-- - **Funded by [optional]:** [More Information Needed] -->
<!-- - **Shared by [optional]:** [More Information Needed] -->
<!-- - **Model type:** [More Information Needed] -->
<!-- - **Language(s) (NLP):** [More Information Needed] -->
- **License:** Apache 2.0
- **Finetuned from model:** [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)
<!-- ### Model Sources [optional] -->
<!-- Provide the basic links for the model. -->
<!--
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
-->
<!-- ## Uses -->
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
<!-- ### Direct Use -->
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- [More Information Needed] -->
<!-- ### Downstream Use [optional] -->
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- [More Information Needed] -->
<!-- ### Out-of-Scope Use -->
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- [More Information Needed] -->
<!-- ## Bias, Risks, and Limitations -->
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
<!-- [More Information Needed] -->
<!-- ### Recommendations -->
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed before further recommendations can be made.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# Load the model and tokenizer directly from the Hub.
# trust_remote_code is required because MiniCPM ships custom modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "isaacchung/MiniCPM-2B-RAFT-lora-hotpotqa-dev"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
```
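Once loaded, the model can be queried through the standard transformers `generate` API. Below is a minimal sketch; the prompt shown is illustrative only, since RAFT-style training pairs a question with retrieved context, and inference prompts should follow whatever template was used in training.

```python
import torch

# Illustrative prompt shape only: RAFT-style inputs supply retrieved context
# followed by a question. This is a sketch, not the exact training template.
prompt = "Context: ...\n\nQuestion: Which magazine was started first, Arthur's Magazine or First for Women?\n\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```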
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[isaacchung/hotpotqa-dev-raft-subset](https://huggingface.co/datasets/isaacchung/hotpotqa-dev-raft-subset)
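For reference, the training data can be inspected with the 🤗 `datasets` library. A minimal sketch; the available splits and column names are whatever the dataset card defines:

```python
from datasets import load_dataset

# Load the RAFT-formatted HotpotQA dev subset used for fine-tuning.
ds = load_dataset("isaacchung/hotpotqa-dev-raft-subset")
print(ds)  # shows splits and columns
```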
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
<!-- #### Preprocessing [optional] -->
<!-- [More Information Needed] -->
#### Training Hyperparameters
<!-- - **Training regime:** [More Information Needed] (fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision) -->
The exact hyperparameters are recorded in [this commit](https://github.com/isaac-chung/MiniCPM/commit/213282b679eb8eb054bb13f02af71b9d71ad3721).
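As a rough illustration of what such a LoRA setup looks like with the `peft` library; the values below (`r`, `lora_alpha`, `lora_dropout`, `target_modules`) are placeholders, not the actual hyperparameters, which live in the commit linked above:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "openbmb/MiniCPM-2B-sft-bf16", trust_remote_code=True
)

# Placeholder values for illustration only; the real settings are
# defined in the linked commit.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```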
#### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- `train_runtime`: 4607.6477 s (≈1 h 17 min)
- `train_samples_per_second`: 5.209
- `train_steps_per_second`: 0.651
- `train_loss`: 0.5153028686841329
- `epoch`: 9.52

These figures imply roughly 3,000 optimizer steps over roughly 24,000 training samples, i.e. an effective batch size of about 8.
#### Training Loss
From the last epoch:
```
{'loss': 0.4504, 'grad_norm': 2.259155507591921, 'learning_rate': 2.7586206896551725e-06, 'epoch': 9.02}
{'loss': 0.431, 'grad_norm': 1.7071411656099411, 'learning_rate': 2.586206896551724e-06, 'epoch': 9.05}
{'loss': 0.4627, 'grad_norm': 1.7915555416805786, 'learning_rate': 2.413793103448276e-06, 'epoch': 9.08}
{'loss': 0.4528, 'grad_norm': 1.9988269942330565, 'learning_rate': 2.2413793103448275e-06, 'epoch': 9.11}
{'loss': 0.445, 'grad_norm': 1.8423666856380017, 'learning_rate': 2.0689655172413796e-06, 'epoch': 9.14}
{'loss': 0.4424, 'grad_norm': 1.7539963730934427, 'learning_rate': 1.896551724137931e-06, 'epoch': 9.17}
{'loss': 0.3817, 'grad_norm': 1.755668315740134, 'learning_rate': 1.724137931034483e-06, 'epoch': 9.21}
{'loss': 0.4012, 'grad_norm': 1.8214703589809635, 'learning_rate': 1.5517241379310346e-06, 'epoch': 9.24}
{'loss': 0.4567, 'grad_norm': 1.6490771602855827, 'learning_rate': 1.3793103448275862e-06, 'epoch': 9.27}
{'loss': 0.491, 'grad_norm': 1.5838108179327266, 'learning_rate': 1.206896551724138e-06, 'epoch': 9.3}
{'loss': 0.516, 'grad_norm': 1.7848893180960532, 'learning_rate': 1.0344827586206898e-06, 'epoch': 9.33}
{'loss': 0.3674, 'grad_norm': 1.6589815898285354, 'learning_rate': 8.620689655172415e-07, 'epoch': 9.37}
{'loss': 0.455, 'grad_norm': 1.6377170040397837, 'learning_rate': 6.896551724137931e-07, 'epoch': 9.4}
{'loss': 0.4322, 'grad_norm': 1.7061632686271986, 'learning_rate': 5.172413793103449e-07, 'epoch': 9.43}
{'loss': 0.3934, 'grad_norm': 1.784527156508834, 'learning_rate': 3.4482758620689656e-07, 'epoch': 9.46}
{'loss': 0.4457, 'grad_norm': 1.5131773700813846, 'learning_rate': 1.7241379310344828e-07, 'epoch': 9.49}
{'loss': 0.4026, 'grad_norm': 1.8239453129182908, 'learning_rate': 0.0, 'epoch': 9.52}
```
<!-- ## Evaluation -->
<!-- This section describes the evaluation protocols and provides the results. -->
<!-- ### Testing Data, Factors & Metrics -->
<!-- #### Testing Data -->
<!-- This should link to a Dataset Card if possible. -->
<!-- [More Information Needed] -->
<!-- #### Factors -->
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
<!-- [More Information Needed] -->
<!-- #### Metrics -->
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
<!-- [More Information Needed] -->
<!-- ### Results -->
<!-- [More Information Needed] -->
<!-- #### Summary -->
<!-- ## Model Examination [optional] -->
<!-- Relevant interpretability work for the model goes here -->
<!-- [More Information Needed] -->
<!-- ## Environmental Impact -->
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
<!--
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
-->
## Technical Specifications
<!-- ### Model Architecture and Objective -->
<!-- [More Information Needed] -->
### Compute Infrastructure
<!-- [More Information Needed] -->
#### Hardware
- 1x NVIDIA RTX 6000 Ada
<!-- #### Software -->
<!-- [More Information Needed] -->
<!-- ## Citation [optional] -->
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
<!-- **BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional] -->
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
<!-- [More Information Needed] -->
<!-- ## More Information [optional] -->
<!-- [More Information Needed] -->
## Model Card Authors
[Isaac Chung](https://huggingface.co/isaacchung)
## Model Card Contact
[Isaac Chung](https://huggingface.co/isaacchung)