Commit 8a66cc5 by ponytail (verified) · 1 parent: fac7467

Update README.md

Files changed (1): README.md (+2 -0)
README.md CHANGED
@@ -65,6 +65,8 @@ output = model.generate(**inputs, max_new_tokens=400, do_sample=False)
 predict = processor.decode(output[0][:], skip_special_tokens=True)
 print(predict)
 ```
+
+Our code will be publicly available on GitHub: [ddw2AIGROUP2CQUPT/Human-LLaVA-8B](https://github.com/ddw2AIGROUP2CQUPT/Human-LLaVA-8B)
 ## Get the Dataset
 #### Dataset Example
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64259db7d3e6fdf87e4792d0/vRojQxm8IMybBV0X5CKbf.png)
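
For reference, the `predict`/`print` context lines in this hunk are the tail of the README's Python inference example. Below is a minimal sketch of the full flow they belong to; the hub model id, image URL, and prompt template are assumptions for illustration and may differ from the actual model card.

```python
# Minimal sketch of the inference flow whose last three lines appear as
# context in the diff above. The hub model id, image URL, and prompt
# template are assumptions, not taken from this commit.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "ddw2AIGROUP2CQUPT/Human-LLaVA-8B"  # assumed hub id; check the model card

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Hypothetical input image and prompt; LLaVA-style checkpoints expect the
# <image> placeholder token inside the text prompt.
image = Image.open(requests.get("https://example.com/person.jpg", stream=True).raw)
prompt = "USER: <image>\nPlease describe this person in detail. ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

# These three lines mirror the README snippet shown in the diff.
output = model.generate(**inputs, max_new_tokens=400, do_sample=False)
predict = processor.decode(output[0][:], skip_special_tokens=True)
print(predict)
```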