---
datasets:
- iamtarun/python_code_instructions_18k_alpaca
widget:
- text: >
    Below is an instruction that describes a task. Write a response that
    appropriately completes the request.
    ### Instruction:
    Create a function to calculate the sum of a sequence of integers.
    ### Input:
    [1, 2, 3, 4, 5]
    ### Output:
pipeline_tag: text-generation
tags:
- code
---
## Model Details
This is a fine-tuned version of GPT-2 trained on a Python code-instruction dataset.
### Model Description
- **Model type:** text-generation
- **Finetuned from model:** [GPT2](https://huggingface.co/gpt2)
### Model Sources
- **Repository:** https://huggingface.co/gpt2
## Uses
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="not-lain/PyGPT")

# Alpaca-style prompt matching the format the model was fine-tuned on
prompt = """
Below is an instruction that describes a task. Write a response that
appropriately completes the request.
### Instruction:
Create a function to calculate the sum of a sequence of integers.
### Input:
[1, 2, 3, 4, 5]
### Output:
"""
print(pipe(prompt)[0]["generated_text"])
```
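For finer control over decoding, the model can also be loaded directly with `AutoModelForCausalLM`. The snippet below is a minimal sketch using the standard `transformers` APIs; the generation parameters are illustrative defaults, not tuned values:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("not-lain/PyGPT")
model = AutoModelForCausalLM.from_pretrained("not-lain/PyGPT")

prompt = """
Below is an instruction that describes a task. Write a response that
appropriately completes the request.
### Instruction:
Create a function to calculate the sum of a sequence of integers.
### Input:
[1, 2, 3, 4, 5]
### Output:
"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,                   # illustrative cap on output length
    do_sample=False,                      # greedy decoding for reproducibility
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```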
## Bias, Risks, and Limitations
The model may produce biased or erroneous output.
### Recommendations
Using this model in production is not advised; it is the product of testing a fine-tuning script.
## Training Details
### Training Data
The model was fine-tuned on the [iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca) dataset, a collection of roughly 18k Python coding instructions in the Alpaca format.
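The sketch below shows one plausible way an example could be rendered into a training prompt matching the template in the Uses section; it assumes the standard Alpaca column names (`instruction`, `input`, `output`) and is not necessarily the exact preprocessing used:

```python
from datasets import load_dataset

ds = load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train")

def format_example(example):
    # Render one record into the Alpaca-style prompt shown in the Uses section
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n"
        f"### Instruction:\n{example['instruction']}\n"
        f"### Input:\n{example['input']}\n"
        f"### Output:\n{example['output']}"
    )

print(format_example(ds[0]))
```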
## Evaluation
Please refer to the TensorBoard tab of this repository for full training metrics.