# Installation
<!-- TOC -->
- [Requirements](#requirements)
- [Prepare environment](#prepare-environment)
- [Data Preparation](#data-preparation)
<!-- TOC -->
## Requirements
- Linux
- Python 3.7+
- PyTorch 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.9.0 or 1.9.1.
- CUDA 9.2+
- GCC 5+
- [MMCV](https://github.com/open-mmlab/mmcv) (Please install mmcv-full>=1.3.17,<1.6.0 for GPU)
## Prepare environment
a. Create a conda virtual environment and activate it.
```shell
conda create -n motiondiffuse python=3.7 -y
conda activate motiondiffuse
```
b. Install PyTorch and torchvision following the [official instructions](https://pytorch.org/).
```shell
conda install pytorch={torch_version} torchvision cudatoolkit={cu_version} -c pytorch
```
E.g., to install PyTorch 1.7.1 with CUDA 10.1:
```shell
conda install pytorch=1.7.1 torchvision cudatoolkit=10.1 -c pytorch
```
**Important:** Make sure that your compilation CUDA version and runtime CUDA version match.
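A quick way to check both sides (a sketch; assumes `python3` is on your PATH, and prints a notice instead of failing if PyTorch or `nvcc` is not available yet):

```shell
# Runtime side: print the CUDA version PyTorch was built with.
python3 - <<'EOF'
try:
    import torch
    print("torch", torch.__version__, "built for CUDA", torch.version.cuda)
except ImportError:
    print("torch not installed yet")
EOF
# Compilation side: print the local CUDA toolkit version (requires nvcc on PATH).
nvcc --version | grep release || echo "nvcc not found"
```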
c. Install mmcv-full
We recommend installing the pre-built package as below.
For CPU:
```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cpu/{torch_version}/index.html
```
Please replace `{torch_version}` in the URL with your desired version.
For GPU:
```shell
pip install "mmcv-full>=1.3.17,<=1.5.3" -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```
Please replace `{cu_version}` and `{torch_version}` in the URL with your desired versions.
For example, to install mmcv-full with CUDA 10.1 and PyTorch 1.7.1, use the following command:
```shell
pip install "mmcv-full>=1.3.17,<=1.5.3" -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.7.1/index.html
```
See [here](https://mmcv.readthedocs.io/en/latest/get_started/installation.html) for the MMCV versions compatible with different PyTorch and CUDA versions.
For download links to more versions, refer to [openmmlab-download](https://download.openmmlab.com/mmcv/dist/index.html).
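After installation, a short sanity check (a sketch) confirms that mmcv-full imports and reports the CUDA version it was compiled against; `get_compiling_cuda_version` lives in `mmcv.ops` in mmcv-full builds:

```shell
python3 - <<'EOF'
# Report the mmcv version and, for GPU builds, the CUDA version it was compiled with.
try:
    import mmcv
    from mmcv.ops import get_compiling_cuda_version
    print("mmcv", mmcv.__version__, "compiled with CUDA", get_compiling_cuda_version())
except ImportError:
    print("mmcv-full not installed (or installed without ops)")
EOF
```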
d. Install other requirements
```shell
pip install -r requirements.txt
```
## Data Preparation
a. Download datasets
For both the HumanML3D dataset and the KIT-ML dataset, you can find the details as well as the download links [here](https://github.com/EricGuo5513/HumanML3D).
b. Download pretrained weights for evaluation
We use the same evaluation protocol as [this repo](https://github.com/EricGuo5513/text-to-motion). You should download the pretrained weights of the contrastive models for [t2m](https://drive.google.com/file/d/1DSaKqWX2HlwBtVH5l7DdW96jeYUIXsOP/view) and [kit](https://drive.google.com/file/d/1tX79xk0fflp07EZ660Xz1RAFE33iEyJR/view) to calculate FID and precision. To dynamically estimate the length of the target motion, `length_est_bigru` and the [GloVe data](https://drive.google.com/drive/folders/1qxHtwffhfI4qMwptNW6KJEDuT6bduqO7?usp=sharing) are also required.
c. Download pretrained weights for **MotionDiffuse**
The pretrained weights for our proposed MotionDiffuse can be downloaded from [here](https://drive.google.com/drive/folders/1qxHtwffhfI4qMwptNW6KJEDuT6bduqO7?usp=sharing).
Download the above resources and arrange them in the following file structure:
```text
MotionDiffuse
└── text2motion
    ├── checkpoints
    │   ├── kit
    │   │   └── kit_motiondiffuse
    │   │       ├── meta
    │   │       │   ├── mean.npy
    │   │       │   └── std.npy
    │   │       ├── model
    │   │       │   └── latest.tar
    │   │       └── opt.txt
    │   └── t2m
    │       └── t2m_motiondiffuse
    │           ├── meta
    │           │   ├── mean.npy
    │           │   └── std.npy
    │           ├── model
    │           │   └── latest.tar
    │           └── opt.txt
    └── data
        ├── glove
        │   ├── our_vab_data.npy
        │   ├── our_vab_idx.pkl
        │   └── our_vab_words.pkl
        ├── pretrained_models
        │   ├── kit
        │   │   └── text_mot_match
        │   │       └── model
        │   │           └── finest.tar
        │   └── t2m
        │       ├── text_mot_match
        │       │   └── model
        │       │       └── finest.tar
        │       └── length_est_bigru
        │           └── model
        │               └── finest.tar
        ├── HumanML3D
        │   ├── new_joint_vecs
        │   │   └── ...
        │   ├── new_joints
        │   │   └── ...
        │   ├── texts
        │   │   └── ...
        │   ├── Mean.npy
        │   ├── Std.npy
        │   ├── test.txt
        │   ├── train_val.txt
        │   ├── train.txt
        │   └── val.txt
        └── KIT-ML
            ├── new_joint_vecs
            │   └── ...
            ├── new_joints
            │   └── ...
            ├── texts
            │   └── ...
            ├── Mean.npy
            ├── Std.npy
            ├── test.txt
            ├── train_val.txt
            ├── train.txt
            └── val.txt
```
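To avoid path typos while arranging the downloads, the directory skeleton can be created up front, e.g. (a sketch, run from the repository root with bash; the downloaded files still need to be moved into the matching folders):

```shell
# Pre-create the expected checkpoint and data directories (uses bash brace expansion).
mkdir -p text2motion/checkpoints/kit/kit_motiondiffuse/{meta,model}
mkdir -p text2motion/checkpoints/t2m/t2m_motiondiffuse/{meta,model}
mkdir -p text2motion/data/glove
mkdir -p text2motion/data/pretrained_models/kit/text_mot_match/model
mkdir -p text2motion/data/pretrained_models/t2m/{text_mot_match,length_est_bigru}/model
mkdir -p text2motion/data/{HumanML3D,KIT-ML}/{new_joint_vecs,new_joints,texts}
```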