---
language:
  - en
  - de
  - fr
  - fa
  - ar
  - tr
  - es
  - it
metrics:
  - accuracy
pipeline_tag: document-question-answering
tags:
  - text-generation-inference
---

This model is a 4-bit quantized version of the glm-4v-9b model, with some errors fixed so that it runs on Google Colab.

It produces strong results with less than 10 GB of VRAM, while remaining multimodal and multilingual.
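The under-10 GB figure is consistent with a back-of-the-envelope estimate for 4-bit weights. A rough sketch (the numbers are illustrative, ignoring the vision tower, KV cache, and activation overhead):

```python
# Back-of-the-envelope VRAM estimate for quantized model weights.
# Illustrative arithmetic only, not a measurement of this model.

def weight_vram_gib(n_params: float, bits_per_param: float) -> float:
    """Approximate VRAM needed just for the weights, in GiB."""
    return n_params * bits_per_param / 8 / 2**30

fp16 = weight_vram_gib(9e9, 16)  # half-precision baseline for 9B params
int4 = weight_vram_gib(9e9, 4)   # 4-bit quantized weights

print(f"fp16 weights: {fp16:.1f} GiB, 4-bit weights: {int4:.1f} GiB")
```

At roughly 4 GiB for the quantized weights, there is headroom left for the vision encoder and KV cache within a 10 GB budget.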

You can try this model on the free tier of Google Colab: Open In Colab
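A minimal loading sketch with the Hugging Face transformers API, following the usage pattern of the base glm-4v-9b model. The repo id and image/question are assumptions for illustration; this requires a CUDA GPU and downloads the weights, so it is not run here.

```python
# Sketch: load the quantized model and ask a question about a document image.
# MODEL_ID and the example inputs are assumptions, not verified values.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nikravan/glm-4vq"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map="auto",
).eval()

# Multimodal query: an image plus a text question, as in glm-4v-9b.
image = Image.open("invoice.png").convert("RGB")  # hypothetical input file
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "image": image, "content": "What is the total amount?"}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

On the Colab free tier, the 4-bit weights should fit within the GPU memory limit that blocks the unquantized glm-4v-9b.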