pipeline_tag: image-segmentation
tags:
- climate
---

# V-BeachNet

This repository contains the official PyTorch implementation of the paper "A New Framework for Quantifying Alongshore Variability of Swash Motion Using Fully Convolutional Networks." V-BeachNet is built on V-FloodNet.

**V-BeachNet paper:**
Salatin, R., Chen, Q., Raubenheimer, B., Elgar, S., Gorrell, L., & Li, X. (2024). A new framework for quantifying alongshore variability of swash motion using fully convolutional networks. Coastal Engineering, 104542.

**V-FloodNet paper:**
Liang, Y., Li, X., Tsai, B., Chen, Q., & Jafari, N. (2023). V-FloodNet: A video segmentation system for urban flood detection and quantification. Environmental Modelling & Software, 160, 105586.

## Prerequisites

This code was tested on a fresh installation of Ubuntu 24.04 with the default version of Python and an NVIDIA GPU.

1. Install Anaconda's system prerequisites (also listed in the [official installation guide](https://docs.anaconda.com/anaconda/install/linux/)):
```sh
sudo apt update && \
sudo apt install libgl1-mesa-dri libegl1 libglu1-mesa libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2-data libasound2-plugins libxi6 libxtst6
```

2. Download the Anaconda3 installer:
```sh
curl -O https://repo.anaconda.com/archive/Anaconda3-2024.06-1-Linux-x86_64.sh
```

3. Locate the downloaded file and run it to install Anaconda:
```sh
bash Anaconda3-2024.06-1-Linux-x86_64.sh
```

## Steps

1. Clone this repository and change into the new directory:
```sh
git clone https://huggingface.co/rezasalatin/V-BeachNet.git
cd V-BeachNet
```

2. Create the virtual environment with the required packages and activate it:
```sh
conda env create -f environment.yml
conda activate vbeach
```
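
Optionally, confirm that PyTorch can see your GPU before training. A minimal check, assuming the `vbeach` environment provides PyTorch with CUDA support:
```python
# Sanity check: confirm PyTorch is installed and a CUDA-capable GPU is visible.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```
If `CUDA available` prints `False`, check your NVIDIA driver installation before training.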

3. Copy your manually segmented dataset (annotated with [labelme](https://github.com/labelmeai/labelme); a sketch for rasterizing labelme's JSON output follows this step) into the `Training_Station` folder. Edit the variables in the following script as needed, save it, and run it to train the model:
```sh
./train_video_seg.sh
```
Access your trained model from the `log/` directory.
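
If your annotations are still in labelme's JSON format, they may need to be rasterized into label masks first. The sketch below shows one way to do that; the class mapping and dataset path are hypothetical, and the exact mask format the training script expects may differ:
```python
# Rasterize labelme polygon annotations into single-channel PNG label masks.
import json
from pathlib import Path

from PIL import Image, ImageDraw

label_to_value = {"water": 1}  # hypothetical class mapping; background stays 0

for json_path in Path("Training_Station/dataset").glob("*.json"):  # hypothetical path
    with open(json_path) as f:
        ann = json.load(f)
    # labelme stores image size and a list of labeled polygons ("shapes").
    mask = Image.new("L", (ann["imageWidth"], ann["imageHeight"]), 0)
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:
        points = [tuple(pt) for pt in shape["points"]]
        draw.polygon(points, fill=label_to_value.get(shape["label"], 0))
    mask.save(json_path.with_suffix(".png"))
```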

4. Copy your data into the `Testing_Station` folder. Edit the variables in the following script (especially the model path, which should point to your trained model under `log/`), save it, and run it to test the model:
```sh
./test_video_seg.sh
```
Access your segmented data from the `output` directory.
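
As a starting point for quantifying swash motion from the segmented frames, the sketch below extracts a cross-shore waterline position for every alongshore pixel column and stacks the results over time. The mask location, orientation, and label convention are assumptions; adapt them to your output format:
```python
# Build a waterline timestack from binary segmentation masks.
# Assumes single-channel masks with nonzero water pixels and a vertical
# (top-to-bottom) cross-shore axis.
from pathlib import Path

import numpy as np
from PIL import Image

rows = []
for mask_path in sorted(Path("output").glob("*.png")):  # hypothetical file layout
    mask = np.array(Image.open(mask_path).convert("L")) > 0
    has_water = mask.any(axis=0)
    # Row index of the first water pixel in each column; -1 where none exists.
    waterline = np.where(has_water, mask.argmax(axis=0), -1)
    rows.append(waterline)

timestack = np.stack(rows)  # shape: (frames, alongshore columns)
np.save("waterline_timestack.npy", timestack)
```
The variability of each column of the timestack over time then gives a simple per-location measure of swash motion.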