# together we advance_AI

AI is increasingly pervasive across the modern world.
It’s driving our smart technology in retail, cities, factories and healthcare,
and transforming our digital homes.
AMD offers advanced AI acceleration from data center to edge,
enabling high performance and high efficiency to make the world smarter.

# Getting Started with Hugging Face Transformers

This section describes how to use the most common transformers on Hugging Face
for inference workloads on AMD accelerators using the AMD ROCm software ecosystem.
This base knowledge can be leveraged to start fine-tuning from a base model, or even to start developing your own model.
General Linux and ML experience is a prerequisite.

## 1. Confirm you have a supported AMD hardware platform

Is my [hardware supported](https://rocm.docs.amd.com/en/latest/release/gpu_os_support.html#gpu-support-table) with ROCm?
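If you are not sure which GPU your machine has, one quick way to check on most Linux systems (a sketch, assuming the standard `lspci` utility is available) is:

```shell
# List display adapters and keep the AMD/ATI ones; compare the device name
# against the ROCm GPU support table linked above.
lspci -nn | grep -Ei 'vga|display|3d' | grep -Ei 'amd|ati' || echo "No AMD GPU found"
```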

## 2. Install ROCm driver, libraries and tools

Follow the detailed [installation instructions](https://rocm.docs.amd.com/en/latest/deploy/linux/index.html) for your Linux-based platform.
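Once the installation completes (a reboot may be required), you can sanity-check it with the ROCm command-line tools. A sketch, assuming the default `/opt/rocm` install location:

```shell
# rocminfo lists the detected agents; a discrete GPU shows up with a gfx
# architecture name (e.g. gfx90a). rocm-smi reports utilization, temperature and VRAM.
/opt/rocm/bin/rocminfo | grep -i gfx
/opt/rocm/bin/rocm-smi
```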

## 3. Install Machine Learning Frameworks

Pip installation is an easy way to acquire all the required packages and is described in more detail below.

> If you prefer a container strategy, check out the pre-built images at
> [ROCm Docker Hub](https://hub.docker.com/u/rocm/)
> and [AMD Infinity Hub](https://www.amd.com/en/technologies/infinity-hub)
> after installing the required [dependencies](https://rocm.docs.amd.com/en/latest/deploy/docker.html).

### PyTorch

AMD ROCm is fully integrated into the mainline PyTorch ecosystem. Pip wheels are built and tested as part of the stable and nightly releases.
Go to [pytorch.org](https://pytorch.org) and use the 'Install PyTorch' widget.
Select 'Stable + Linux + Pip + Python + ROCm' to get the specific pip installation command.

An example command line (note the ROCm version encoded in the wheel URL):

> `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2`
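After installing the wheel, a quick check (a minimal sketch; the exact version strings depend on the wheel you installed) confirms that a ROCm build is active:

```python
# On a ROCm wheel, torch.version.hip is a version string (it is None on
# CPU-only and CUDA builds), and AMD GPUs are exposed through the regular
# torch.cuda API, so existing CUDA-style code runs unchanged.
try:
    import torch
    print("torch:", torch.__version__)          # e.g. "2.0.1+rocm5.4.2"
    rocm_build = torch.version.hip is not None
    gpu_visible = torch.cuda.is_available()
    print("ROCm build:", rocm_build, "| GPU visible:", gpu_visible)
except ImportError:
    rocm_build = gpu_visible = False
    print("PyTorch is not installed in this environment")
```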

### TensorFlow

AMD ROCm is upstreamed into the TensorFlow GitHub repository.
Pre-built wheels are hosted on [PyPI](https://pypi.org/project/tensorflow-rocm/).

The latest version can be installed with this command:

> `pip install tensorflow-rocm`
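A similar sanity check (a sketch) confirms that TensorFlow can enumerate the AMD GPU(s):

```python
# On a working tensorflow-rocm install this lists one PhysicalDevice per AMD GPU;
# an empty list means TensorFlow fell back to the CPU.
try:
    import tensorflow as tf
    gpus = tf.config.list_physical_devices("GPU")
    print("TensorFlow:", tf.__version__, "| GPUs found:", len(gpus))
except ImportError:
    gpus = []
    print("TensorFlow is not installed in this environment")
```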

## 4. Use a Hugging Face Model

Now that you have the base requirements installed, get the latest transformer models.

> `pip install transformers`

This allows you to easily import any of the base models into your Python application.
Here is an example using [GPT2](https://huggingface.co/gpt2) in PyTorch:

```python
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

All of the 200+ standard transformer models are regularly tested with our supported hardware platforms.
Note that this also implies that all derivatives of those core models should also function correctly.
Let us know if you run into issues at our [ROCm Community page](https://github.com/RadeonOpenCompute/ROCm/discussions).

Here are a few of the more popular ones to get you started:

- [BERT](https://huggingface.co/bert-base-uncased)
- [BLOOM](https://huggingface.co/bigscience/bloom)
- [LLaMA](https://huggingface.co/huggyllama/llama-7b)
- [OPT](https://huggingface.co/facebook/opt-66b)
- [T5](https://huggingface.co/t5-base)

Click on the 'Use in Transformers' button to see the exact code to import a specific model into your Python application.

# Useful Links and Blogs

- Check out our blog titled [Run a Chatgpt-like Chatbot on a Single GPU with ROCm](https://huggingface.co/blog/chatbot-amd-gpu)
- Complete ROCm [Documentation](https://rocm.docs.amd.com/en/latest/) for installation and usage
- Extended training content and connect with the development community at the [Developer Hub](https://www.amd.com/en/developer/rocm-hub.html)