Yongdong Wang committed
Commit 8e887ef · 1 Parent(s): 1ef829e

Modify operating equipment instructions.

Files changed (1): app.py (+10 −14)
app.py CHANGED
@@ -277,18 +277,16 @@ with gr.Blocks(
 
  Choose from **three fine-tuned models** specialized for **robot task planning** using QLoRA technique:
 
- - **🚀 Dart-llm-model-1B** (Default): Fastest inference, optimized for speed
- - **⚖️ Dart-llm-model-3B**: Balanced performance and quality
- - **🎯 Dart-llm-model-8B**: Best quality output, higher latency
+ - **🚀 Dart-llm-model-1B**: Ready for Jetson Nano deployment
+ - **⚖️ Dart-llm-model-3B**: Ready for Jetson Xavier NX deployment
+ - **🎯 Dart-llm-model-8B**: Ready for Jetson AGX Xavier/Orin deployment
 
- **Capabilities**: Convert natural language robot commands into structured task sequences for excavators, dump trucks, and other construction robots. **Now with DAG Visualization!**
+ **Capabilities**: Convert natural language robot commands into structured task sequences for excavators, dump trucks, and other construction robots. **Edge-ready for Jetson devices with DAG Visualization!**
 
  **Models**:
  - [YongdongWang/llama-3.2-1b-lora-qlora-dart-llm](https://huggingface.co/YongdongWang/llama-3.2-1b-lora-qlora-dart-llm) (Default)
  - [YongdongWang/llama-3.2-3b-lora-qlora-dart-llm](https://huggingface.co/YongdongWang/llama-3.2-3b-lora-qlora-dart-llm)
  - [YongdongWang/llama-3.1-8b-lora-qlora-dart-llm](https://huggingface.co/YongdongWang/llama-3.1-8b-lora-qlora-dart-llm)
-
- ⚡ **Using ZeroGPU**: This Space uses dynamic GPU allocation (Nvidia H200). First generation might take a bit longer.
  """)
 
  with gr.Tabs():
@@ -324,7 +322,7 @@ with gr.Blocks(
  choices=[(config["name"], key) for key, config in MODEL_CONFIGS.items()],
  value=DEFAULT_MODEL,
  label="Model Size",
- info="Select model size (1B = fastest, 8B = best quality)",
+ info="Select model for your Jetson device (1B = Nano, 3B = Xavier NX, 8B = AGX)",
  interactive=True
  )
 
@@ -338,13 +336,11 @@
  )
 
  gr.Markdown("""
- ### 📊 Model Status
- - **Hardware**: ZeroGPU (Dynamic Nvidia H200)
- - **Status**: Ready
- - **Note**: First generation allocates GPU resources
- - **Dart-llm-model-1B**: Fastest inference (Default)
- - **Dart-llm-model-3B**: Balanced speed/quality
- - **Dart-llm-model-8B**: Best quality, slower
+ ### 🔧 Jetson Deployment Ready
+ Choose the model that fits your Jetson device:
+ - **1B**: Deployable on Jetson Nano (4GB RAM)
+ - **3B**: Deployable on Jetson Xavier NX (8GB RAM)
+ - **8B**: Deployable on Jetson AGX Xavier/Orin (32GB RAM)
  """)
 
  with gr.Tab("📊 DAG Visualization"):
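The dropdown in the diff builds its choices as `(display name, key)` tuples from a `MODEL_CONFIGS` mapping — Gradio shows the first element and returns the second. A minimal sketch of that pattern; the `MODEL_CONFIGS` shape below is an assumption for illustration (the real dict lives elsewhere in app.py and may differ):

```python
# Hypothetical MODEL_CONFIGS shape -- illustrative only, not copied from app.py.
MODEL_CONFIGS = {
    "1b": {"name": "Dart-llm-model-1B", "repo": "YongdongWang/llama-3.2-1b-lora-qlora-dart-llm"},
    "3b": {"name": "Dart-llm-model-3B", "repo": "YongdongWang/llama-3.2-3b-lora-qlora-dart-llm"},
    "8b": {"name": "Dart-llm-model-8B", "repo": "YongdongWang/llama-3.1-8b-lora-qlora-dart-llm"},
}
DEFAULT_MODEL = "1b"

# Same comprehension as in the diff: each choice is a (label, value) pair,
# so the UI displays config["name"] while the callback receives the key.
choices = [(config["name"], key) for key, config in MODEL_CONFIGS.items()]
print(choices[0])  # ('Dart-llm-model-1B', '1b')
```

Keeping the key (not the display name) as the dropdown value lets the rest of the app look models up in `MODEL_CONFIGS` directly.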
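The new help text maps model size to a Jetson tier (1B → Nano 4 GB, 3B → Xavier NX 8 GB, 8B → AGX Xavier/Orin 32 GB). That mapping could be sketched as a selection helper — `pick_model` and its RAM thresholds are hypothetical, taken only from the tiers listed in the diff, not from app.py:

```python
def pick_model(ram_gb: float) -> str:
    """Pick the largest Dart-llm model that fits a device's RAM.

    Thresholds follow the Jetson tiers in the updated UI text:
    Nano (4GB) -> 1B, Xavier NX (8GB) -> 3B, AGX Xavier/Orin (32GB) -> 8B.
    """
    if ram_gb >= 32:
        return "Dart-llm-model-8B"
    if ram_gb >= 8:
        return "Dart-llm-model-3B"
    return "Dart-llm-model-1B"

print(pick_model(4))   # Dart-llm-model-1B (Jetson Nano)
print(pick_model(8))   # Dart-llm-model-3B (Jetson Xavier NX)
print(pick_model(32))  # Dart-llm-model-8B (Jetson AGX Orin)
```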