---
library_name: transformers
tags:
- code
- ReactJS
language:
- en
base_model:
- Qwen/Qwen3-1.7B-Base
base_model_relation: finetune
pipeline_tag: text-generation
---
# Model Information
Qwen3-ReactJs-code is a quantized, fine-tuned version of the Qwen3-1.7B-Base model, designed specifically for generating ReactJS code.
- **Base model:** Qwen/Qwen3-1.7B-Base
# How to use
Starting with `transformers` version 4.51.0, you can run conversational inference using the Transformers `pipeline` API.
Make sure your installation is up to date via `pip install --upgrade transformers`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
```
```python
def get_pipeline():
    model_name = "nirusanan/Qwen3-ReactJs-code"

    # Load the tokenizer and reuse the EOS token for padding.
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token

    # Load the model weights in float16 on the first CUDA device.
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="cuda:0",
        trust_remote_code=True
    )

    pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=3500)
    return pipe

pipe = get_pipeline()
```
```python
def generate_prompt(project_title, description):
    prompt = f"""Below is an instruction that describes a project. Write Reactjs code to accomplish the project described below.
### Instruction:
Project:
{project_title}
Project Description:
{description}
### Response:
"""
    return prompt
```
```python
prompt = generate_prompt(project_title="Your ReactJs project", description="Your ReactJs project description")
result = pipe(prompt)
generated_text = result[0]['generated_text']

# The model emits an "### End" marker; keep only the text before it.
print(generated_text.split("### End")[0])
```
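As the snippet above suggests, generated text can contain an `### End` marker followed by stray tokens. A minimal, self-contained post-processing helper can be sketched as follows; the sample string is hand-written for illustration, not real model output:

```python
def extract_response(generated_text: str) -> str:
    """Keep only the text before the '### End' marker, if present."""
    return generated_text.split("### End")[0].strip()

# Illustrative stand-in for a model completion:
sample = "const App = () => <div>Hello</div>;\n### End\nextra tokens"
print(extract_response(sample))  # prints: const App = () => <div>Hello</div>;
```

`str.split` returns the whole string unchanged when the marker is absent, so the helper is safe to apply to every completion.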