Upload README.md with huggingface_hub
---
title: InternViT Development Test
emoji: 🔧
colorFrom: indigo
colorTo: purple
sdk: docker
pinned: false
---

# InternViT-6B with CUDA Development Tools

This Space uses the PyTorch CUDA development image so that flash-attn can be built with NVCC available.

## Changes in this Version

- Uses the PyTorch CUDA development image instead of the runtime image
- Includes NVCC (the NVIDIA CUDA compiler), which flash-attn needs at build time
- Pins flash-attn to version 1.0.9, compatible with CUDA 11.7
- Adds diagnostics to verify the flash-attn installation
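The diagnostics mentioned above can be sketched as a small Python check. This is a minimal illustration, not the Space's actual code; the function name `diagnose` and the exact checks are assumptions.

```python
# Minimal sketch of startup diagnostics (illustrative, not the Space's
# actual code): check that NVCC is on PATH and flash-attn is importable.
import importlib.util
import shutil


def diagnose() -> dict:
    """Report whether the CUDA compiler and flash-attn are available."""
    return {
        # flash-attn compiles CUDA kernels at install time, so it needs nvcc.
        "nvcc_on_path": shutil.which("nvcc") is not None,
        # The pip package "flash-attn" installs under the module name "flash_attn".
        "flash_attn_installed": importlib.util.find_spec("flash_attn") is not None,
    }


if __name__ == "__main__":
    print(diagnose())
```

Running this at container startup makes a missing NVCC or a failed flash-attn build visible immediately instead of surfacing later as an import error.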

## Dependencies Added

- einops: Required for vision transformer operations
- flash-attn: Required for efficient attention computation
- CUDA build tools: Required to compile flash-attn from source

## Instructions

1. Click the "Test Model Loading" button.
2. Wait for the model to load and the test to run.
3. Check the results for success or errors.
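The load test behind the button can be sketched roughly as follows. This is a hedged sketch: the function name and the model id `OpenGVLab/InternViT-6B-224px` are assumptions, not taken from this Space's code.

```python
# Rough sketch of a model-loading test (illustrative; the default model id
# below is an assumption, not taken from this Space's code).
def test_model_loading(model_id: str = "OpenGVLab/InternViT-6B-224px") -> dict:
    """Try to load the model and return a success/error report for the UI."""
    try:
        # trust_remote_code is needed because InternViT ships custom model code.
        from transformers import AutoModel

        AutoModel.from_pretrained(model_id, trust_remote_code=True)
        return {"ok": True, "error": None}
    except Exception as exc:  # report the failure instead of crashing the Space
        return {"ok": False, "error": str(exc)}
```

Returning a report dict rather than raising lets the Space display the exact failure (for example, a flash-attn import error) in the results panel.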