---
license: apache-2.0
title: 'GPT4Tools: Teaching LLM to Use Tools via Self-instruction'
sdk: gradio
emoji: 🔥
colorFrom: red
colorTo: yellow
pinned: true
---

# GPT4Tools: Teaching LLM to Use Tools via Self-instruction

Lin Song, Yanwei Li, Rui Yang, Sijie Zhao, Yixiao Ge, Ying Shan

GPT4Tools is a centralized system that controls multiple visual foundation models. It is built upon Vicuna (LLaMA) and 71K self-built instruction-following data. By analyzing the language content, GPT4Tools automatically decides which visual foundation models to invoke, controls them, and uses their outputs, allowing users to interact with images during a conversation. In this way, GPT4Tools provides a seamless and efficient way to handle various image-related requests in conversation. Unlike previous work, we enable users to teach their own LLM to use tools through simple refinement via self-instruction and LoRA.

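As a rough illustration of the LoRA-based refinement mentioned above (not the project's actual training script), the sketch below shows how a LoRA adapter can be attached to a Vicuna/LLaMA checkpoint with the Hugging Face PEFT library before fine-tuning on the self-built instruction data; the checkpoint name and LoRA hyperparameters are assumptions for illustration.

```python
# Minimal sketch: attach a LoRA adapter to a Vicuna/LLaMA base model with PEFT.
# The checkpoint name and hyperparameters below are illustrative assumptions,
# not the official GPT4Tools configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "lmsys/vicuna-13b-v1.1"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (illustrative)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA weights are trained
```

The resulting PEFT model can then be fine-tuned on the tool-use instruction data with a standard causal-LM training loop, keeping the base LLM frozen.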
YouTube arXiv