|
--- |
|
library_name: peft |
|
base_model: meta-llama/Llama-2-7b-hf |
|
license: mit |
|
tags: |
|
- API |
|
- Testing |
|
--- |
|
|
|
# Model Card
|
|
|
Llama 2 model fine-tuned on a variety of APIs for generating test cases.
|
|
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
Testing is a major concern for developers. With this in mind, this model is a LLaMA-2-7B fine-tuned on a variety of APIs. Given the details of your API, the model generates test case scenarios for it.
|
|
|
|
|
|
|
- **Developed by:** Anish Vantagodi |
|
- **Funded by:** Kusho

- **Shared by:** Anish Vantagodi

- **Model type:** LLaMA-2-7B, PEFT fine-tuned

- **Language(s) (NLP):** English

- **License:** MIT

- **Finetuned from model:** meta-llama/Llama-2-7b-hf
|
|
|
|
|
## Uses |
|
|
|
|
Used for generating test case scenarios for APIs, as in the sketch below.
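
A minimal inference sketch, assuming the adapter is loaded on top of the `meta-llama/Llama-2-7b-hf` base model with PEFT. The `adapter_id` and the prompt format below are placeholders, not the actual repo name or training format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "your-username/llama2-api-testcases"  # placeholder adapter repo id

# 4-bit NF4 quantization, matching the config used during training (see below).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Describe the API endpoint and ask for test case scenarios.
prompt = (
    "API: POST /users {name: string, email: string}\n"
    "Generate test case scenarios:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```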
|
|
|
## Model Card Authors |
|
|
|
Anish Vantagodi: https://github.com/anish2105 |
|
|
|
## Training procedure |
|
|
|
|
|
The following `bitsandbytes` quantization config was used during training: |
|
- quant_method: bitsandbytes |
|
- load_in_8bit: False |
|
- load_in_4bit: True |
|
- llm_int8_threshold: 6.0 |
|
- llm_int8_skip_modules: None |
|
- llm_int8_enable_fp32_cpu_offload: False |
|
- llm_int8_has_fp16_weight: False |
|
- bnb_4bit_quant_type: nf4 |
|
- bnb_4bit_use_double_quant: False |
|
- bnb_4bit_compute_dtype: float16 |
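
For reference, the settings listed above expressed as a `transformers` `BitsAndBytesConfig`; this is a sketch of how the config maps to code, not the exact training script:

```python
import torch
from transformers import BitsAndBytesConfig

# Quantization settings from the list above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```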
|
|
|
### Framework versions |
|
|
|
|
|
- PEFT 0.6.2 |