stevengrove committed
Commit 96a07ef · 1 Parent(s): cff46c1

Update README.md

Files changed (1)
  1. README.md +11 -9
README.md CHANGED
@@ -1,13 +1,15 @@
  ---
- title: GPT4Tools
- emoji: 👀
- colorFrom: indigo
- colorTo: pink
- sdk: gradio
- sdk_version: 3.32.0
- app_file: app.py
- pinned: false
  license: apache-2.0
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # GPT4Tools: Teaching LLM to Use Tools via Self-instruction
+
+ [Lin Song](http://linsong.info/), [Yanwei Li](https://yanwei-li.com/), [Rui Yang](https://github.com/Yangr116), Sijie Zhao, [Yixiao Ge](https://geyixiao.com/), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en)
+
+ GPT4Tools is a centralized system that can control multiple visual foundation models.
+ It is based on Vicuna (LLaMA) and a self-built dataset of 71K instructions.
+ By analyzing the language content of a conversation, GPT4Tools automatically decides on, controls, and invokes different visual foundation models, allowing the user to interact with images during a conversation.
+ With this approach, GPT4Tools provides a seamless and efficient way to fulfill various image-related requests.
+ Unlike previous work, we enable users to teach their own LLM to use tools through simple refinement via self-instruction and LoRA.
+
+ <a href='https://gpt4tools.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://huggingface.co/stevengrove/gpt4tools-vicuna-13b-lora'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a> [![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/Qrj94ibQIT8) [![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)]()
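
Since the new README points to the released LoRA weights, here is a minimal sketch of how they might be loaded onto a Vicuna base model with the `transformers` and `peft` libraries. This is not part of the commit: the base-model path and the prompt format are illustrative assumptions; only the LoRA repo name comes from the badge above.

```python
# Hypothetical sketch: apply the GPT4Tools LoRA adapter to a Vicuna base model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

BASE_MODEL = "path/to/vicuna-13b"  # assumption: locally converted Vicuna weights
LORA_WEIGHTS = "stevengrove/gpt4tools-vicuna-13b-lora"  # model linked above

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter: only the small low-rank refinement weights are
# loaded, while the frozen base weights stay untouched.
model = PeftModel.from_pretrained(model, LORA_WEIGHTS)
model.eval()

# Illustrative prompt only: the real system wraps user requests with tool
# descriptions so the model can decide which visual foundation model to invoke.
prompt = "Human: Detect all the dogs in image/example.png\nAI:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```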