Henry Scheible committed
Commit ec35a33 · 1 Parent(s): 24de92f

initial commit
.gitignore ADDED
@@ -0,0 +1,2 @@
+ .idea/
+ venv/
README.md CHANGED
@@ -1,12 +1,73 @@
  ---
- title: Barnacle Counter Fast Sam
- emoji: 💻
- colorFrom: red
- colorTo: yellow
  sdk: gradio
- sdk_version: 3.27.0
  app_file: app.py
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: Barnacle Counter
  sdk: gradio
  app_file: app.py
  pinned: false
  ---

+ # 👩🏾‍💻 Project Starter Template
+
+ [Project Description]
+
+ ## Designs
+ [Screenshot description]
+
+ [Link to the project Figma](https://apple.com)
+
+ [2-4 screenshots from the app]
+
+ ## Architecture
+ ### Tech Stack 🥞
+ The app is built using [tech stack]
+
+ [Description of any notable added services]
+
+ [Link to other repos that comprise the project (optional)](https://github.com/)
+
+ #### Packages 📦
+ * [List of notable packages with links]
+
+ ### Style
+ [Describe notable code style conventions]
+
+ We are using [typically a configuration like [CS52's React-Native ESLint Configuration](https://gist.github.com/timofei7/c8df5cc69f44127afb48f5d1dffb6c84) or [CS52's ES6 and Node ESLint Configuration](https://gist.github.com/timofei7/21ac43d41e506429495c7368f0b40cc7)]
+
+ ### Data Models
+ [Brief description of typical data models.]
+
+ [Detailed description should be moved to the repo's Wiki page]
+
+ ### File Structure
+
+ ```
+ ├──[Top Level]/ # root directory
+ |  └──[File] # brief description of file
+ |  └──[Folder1]/ # brief description of folder
+ |  └──[Folder2]/ # brief description of folder
+ [etc...]
+ ```
+
+ For more detailed documentation on our file structure and specific functions in the code, feel free to check the project files themselves.
+
+ ## Setup Steps (example)
+ 1. Clone the repo by running `git clone https://github.com/dali-lab/<REPONAME>.git` in your terminal, then `cd <REPONAME>`
+ 2. Run [`npm install` or equivalent] to install all of the necessary packages
+    * If you don't have [npm or equivalent] installed, you can install it by following the instructions <[here](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) OR AT THE RELEVANT HYPERLINK>
+ 3. Make sure you have [package names] installed. You can install them by running `npm install <PACKAGE NAMES IF NECESSARY> <--global IF NECESSARY>`
+ 4. To start the app locally, run [`npm start` or the relevant start command].
+
+ ## Deployment 🚀
+ [Where is the app deployed? i.e. Expo, Surge, TestFlight etc.]
+
+ [What are the steps to re-deploy the project with any new changes?]
+
+ [How does one get access to the deployed project?]
+
+ ## Authors
+ * Firstname Lastname 'YY, role
+
+ ## Acknowledgments 🤝
+ We would like to thank [anyone you would like to acknowledge] for [what you would like to acknowledge them for].
+
+ ---
+ Designed and developed by [@DALI Lab](https://github.com/dali-lab)
__pycache__/app.cpython-37.pyc ADDED
Binary file (3.89 kB)
annotated.png ADDED
app.py ADDED
@@ -0,0 +1,367 @@
+ import cv2
+ import numpy as np
+ import math
+ import torch
+ import random
+
+ from torch.utils.data import DataLoader
+ from torchvision.transforms import Resize
+
+ torch.manual_seed(12345)
+ random.seed(12345)
+ np.random.seed(12345)
+
+ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
+
+ class WireframeExtractor:
+
+     def __call__(self, image: np.ndarray):
+         """
+         Extract corners of the wireframe from a barnacle image
+         :param image: Numpy RGB image of shape (H, W, 3)
+         :return: [x1, y1, x2, y2]
+         """
+         h, w = image.shape[:2]
+         imghsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
+         hsvblur = cv2.GaussianBlur(imghsv, (9, 9), 0)
+
+         lower = np.array([70, 20, 20])
+         upper = np.array([130, 255, 255])
+
+         color_mask = cv2.inRange(hsvblur, lower, upper)
+
+         invert = cv2.bitwise_not(color_mask)
+
+         contours, _ = cv2.findContours(invert, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
+
+         max_contour = contours[0]
+         largest_area = 0
+         for index, contour in enumerate(contours):
+             area = cv2.contourArea(contour)
+             if area > largest_area:
+                 if cv2.pointPolygonTest(contour, (w / 2, h / 2), False) == 1:
+                     largest_area = area
+                     max_contour = contour
+
+         x, y, w, h = cv2.boundingRect(max_contour)
+         # return corner coordinates, as expected by the crop in count_barnacles()
+         return [x, y, x + w, y + h]
+
+ wireframe_extractor = WireframeExtractor()
+
+ def show_anns(anns):
+     if len(anns) == 0:
+         return
+     sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True)
+     ax = plt.gca()
+     ax.set_autoscale_on(False)
+     polygons = []
+     color = []
+     for ann in sorted_anns:
+         m = ann['segmentation']
+         img = np.ones((m.shape[0], m.shape[1], 3))
+         color_mask = np.random.random((1, 3)).tolist()[0]
+         for i in range(3):
+             img[:, :, i] = color_mask[i]
+         ax.imshow(np.dstack((img, m * 0.35)))
+
+
+ # def find_contours(img, color):
+ #     low = color - 10
+ #     high = color + 10
+
+ #     mask = cv2.inRange(img, low, high)
+ #     contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
+
+ #     print(f"Total Contours: {len(contours)}")
+ #     nonempty_contours = list()
+ #     for i in range(len(contours)):
+ #         if hierarchy[0,i,3] == -1 and cv2.contourArea(contours[i]) > cv2.arcLength(contours[i], True):
+ #             nonempty_contours += [contours[i]]
+ #     print(f"Nonempty Contours: {len(nonempty_contours)}")
+ #     contour_plot = img.copy()
+ #     contour_plot = cv2.drawContours(contour_plot, nonempty_contours, -1, (0,255,0), -1)
+
+ #     sorted_contours = sorted(nonempty_contours, key=cv2.contourArea, reverse=True)
+
+ #     bounding_rects = [cv2.boundingRect(cnt) for cnt in contours]
+
+ #     for (i, c) in enumerate(sorted_contours):
+ #         M = cv2.moments(c)
+ #         cx = int(M['m10']/M['m00'])
+ #         cy = int(M['m01']/M['m00'])
+ #         cv2.putText(contour_plot, text=str(i), org=(cx, cy),
+ #                     fontFace=cv2.FONT_HERSHEY_SIMPLEX, fontScale=0.25, color=(255,255,255),
+ #                     thickness=1, lineType=cv2.LINE_AA)
+
+ #     N = len(sorted_contours)
+ #     H, W, C = img.shape
+ #     boxes_array_xywh = [cv2.boundingRect(cnt) for cnt in sorted_contours]
+ #     boxes_array_corners = [[x, y, x+w, y+h] for x, y, w, h in boxes_array_xywh]
+ #     boxes = torch.tensor(boxes_array_corners)
+
+ #     labels = torch.ones(N)
+ #     masks = np.zeros([N, H, W])
+ #     for idx in range(len(sorted_contours)):
+ #         cnt = sorted_contours[idx]
+ #         cv2.drawContours(masks[idx,:,:], [cnt], 0, (255), -1)
+ #     masks = masks / 255.0
+ #     masks = torch.tensor(masks)
+
+ #     # for box in boxes:
+ #     #     cv2.rectangle(contour_plot, (box[0].item(), box[1].item()), (box[2].item(), box[3].item()), (255,0,0), 2)
+
+ #     return contour_plot, (boxes, masks)
+
+
+ # def get_dataset_x(blank_image, filter_size=50, filter_stride=2):
+ #     full_image_tensor = torch.tensor(blank_image).type(torch.FloatTensor).permute(2, 0, 1).unsqueeze(0)
+ #     num_windows_h = math.floor((full_image_tensor.shape[2] - filter_size) / filter_stride) + 1
+ #     num_windows_w = math.floor((full_image_tensor.shape[3] - filter_size) / filter_stride) + 1
+ #     windows = torch.nn.functional.unfold(full_image_tensor, (filter_size, filter_size), stride=filter_stride).reshape(
+ #         [1, 3, 50, 50, num_windows_h * num_windows_w]).permute([0, 4, 1, 2, 3]).squeeze()
+
+ #     dataset_images = [windows[idx] for idx in range(len(windows))]
+ #     dataset = list(dataset_images)
+ #     return dataset
+
+
+ # def get_dataset(labeled_image, blank_image, color, filter_size=50, filter_stride=2, label_size=5):
+ #     contour_plot, (blue_boxes, blue_masks) = find_contours(labeled_image, color)
+
+ #     mask = torch.sum(blue_masks, 0)
+
+ #     label_dim = int((labeled_image.shape[0] - filter_size) / filter_stride + 1)
+ #     labels = torch.zeros(label_dim, label_dim)
+ #     mask_labels = torch.zeros(label_dim, label_dim, filter_size, filter_size)
+
+ #     for lx in range(label_dim):
+ #         for ly in range(label_dim):
+ #             mask_labels[lx, ly, :, :] = mask[
+ #                 lx * filter_stride: lx * filter_stride + filter_size,
+ #                 ly * filter_stride: ly * filter_stride + filter_size
+ #             ]
+
+ #     print(labels.shape)
+ #     for box in blue_boxes:
+ #         x = int((box[0] + box[2]) / 2)
+ #         y = int((box[1] + box[3]) / 2)
+
+ #         window_x = int((x - int(filter_size / 2)) / filter_stride)
+ #         window_y = int((y - int(filter_size / 2)) / filter_stride)
+
+ #         clamp = lambda n, minn, maxn: max(min(maxn, n), minn)
+
+ #         labels[
+ #             clamp(window_y - label_size, 0, labels.shape[0] - 1):clamp(window_y + label_size, 0, labels.shape[0] - 1),
+ #             clamp(window_x - label_size, 0, labels.shape[0] - 1):clamp(window_x + label_size, 0, labels.shape[0] - 1),
+ #         ] = 1
+
+ #     positive_labels = labels.flatten() / labels.max()
+ #     negative_labels = 1 - positive_labels
+ #     pos_mask_labels = torch.flatten(mask_labels, end_dim=1)
+ #     neg_mask_labels = 1 - pos_mask_labels
+ #     mask_labels = torch.stack([pos_mask_labels, neg_mask_labels], dim=1)
+ #     dataset_labels = torch.tensor(list(zip(positive_labels, negative_labels)))
+ #     dataset = list(zip(
+ #         get_dataset_x(blank_image, filter_size=filter_size, filter_stride=filter_stride),
+ #         dataset_labels,
+ #         mask_labels
+ #     ))
+ #     return dataset, (labels, mask_labels)
+
+
+ # from torchvision.models.resnet import resnet50
+ # from torchvision.models.resnet import ResNet50_Weights
+
+ # print("Loading resnet...")
+ # model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
+ # hidden_state_size = model.fc.in_features
+ # model.fc = torch.nn.Linear(in_features=hidden_state_size, out_features=2, bias=True)
+ # model.to(device)
+ # model.load_state_dict(torch.load("model_best_epoch_4_59.62.pth", map_location=torch.device(device)))
+ # model.to(device)
+ from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor
+
+ model = sam_model_registry["default"](checkpoint="./sam_vit_h_4b8939.pth")
+ model.to(device)
+
+ predictor = SamPredictor(model)
+
+ mask_generator = SamAutomaticMaskGenerator(model)
+
+ import gradio as gr
+
+ import matplotlib.pyplot as plt
+ import io
+
+ def count_barnacles(image_raw, progress=gr.Progress()):
+     progress(0, desc="Finding bounding wire")
+
+     # crop image
+     # h, w = raw_input_img.shape[:2]
+     # imghsv = cv2.cvtColor(raw_input_img, cv2.COLOR_RGB2HSV)
+     # hsvblur = cv2.GaussianBlur(imghsv, (9, 9), 0)
+
+     # lower = np.array([70, 20, 20])
+     # upper = np.array([130, 255, 255])
+
+     # color_mask = cv2.inRange(hsvblur, lower, upper)
+
+     # invert = cv2.bitwise_not(color_mask)
+
+     # contours, _ = cv2.findContours(invert, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
+
+     # max_contour = contours[0]
+     # largest_area = 0
+     # for index, contour in enumerate(contours):
+     #     area = cv2.contourArea(contour)
+     #     if area > largest_area:
+     #         if cv2.pointPolygonTest(contour, (w / 2, h / 2), False) == 1:
+     #             largest_area = area
+     #             max_contour = contour
+
+     # x, y, w, h = cv2.boundingRect(max_contour)
+
+
+     image = cv2.cvtColor(image_raw, cv2.COLOR_BGR2RGB)
+     corners = wireframe_extractor(image)
+     cropped_image = image[corners[1]:corners[3], corners[0]:corners[2], :]
+     cropped_image = cropped_image[100:400, 100:400]
+     # print(cropped_image)
+
+
+     # progress(0, desc="Generating Masks by point in window")
+
+     # # get center point of windows
+     # predictor.set_image(image)
+     # mask_counter = 0
+     # masks = []
+
+     # for x in range(1, 20, 2):
+     #     for y in range(1, 20, 2):
+     #         point = np.array([[x*25, y*25]])
+     #         input_label = np.array([1])
+     #         mask, score, logit = predictor.predict(
+     #             point_coords=point,
+     #             point_labels=input_label,
+     #             multimask_output=False,
+     #         )
+     #         if score[0] > 0.8:
+     #             mask_counter += 1
+     #             masks.append(mask)
+
+     # return mask_counter
+
+     mask_counter = 0
+     good_masks = []
+     coords = []
+     progress(0, desc="Generating Masks")
+     # masks = mask_generator.generate(cropped_image)
+     masks = mask_generator.generate(cropped_image)
+     for mask in masks:
+         if mask['predicted_iou'] > 0.95:
+             mask_counter += 1
+             good_masks.append(mask)
+             coords.append(mask['point_coords'])
+
+
+     # Create a figure with a size of 10 inches by 10 inches
+     fig = plt.figure(figsize=(10, 10))
+
+     # Display the image using the imshow() function
+     plt.imshow(cropped_image)
+
+     # Call the custom function show_anns() to plot annotations on top of the image
+     show_anns(good_masks)
+
+     # Turn off the axis
+     plt.axis('off')
+
+     # Get the plot as a numpy array
+     buf = io.BytesIO()
+     plt.savefig(buf, format='png', bbox_inches='tight', pad_inches=0)
+     buf.seek(0)
+     img_arr = np.frombuffer(buf.getvalue(), dtype=np.uint8)
+     buf.close()
+
+     # Decode the numpy array to an image
+     annotated = cv2.imdecode(img_arr, 1)
+     annotated = cv2.cvtColor(annotated, cv2.COLOR_BGR2RGB)
+
+     # Close the figure
+     plt.close(fig)
+
+
+     # cropped_copy = torch.transpose(cropped_image, 0, 2).to("cpu").detach().numpy().copy()
+     return annotated, mask_counter
+
+
+     # return len(masks)
+
+     # progress(0, desc="Resizing Image")
+     # cropped_img = raw_input_img[x:x+w, y:y+h]
+     # cropped_image_tensor = torch.transpose(torch.tensor(cropped_img).to(device), 0, 2)
+     # resize = Resize((1500, 1500))
+     # input_img = cropped_image_tensor
+     # blank_img_copy = torch.transpose(input_img, 0, 2).to("cpu").detach().numpy().copy()
+
+     # progress(0, desc="Generating Windows")
+     # test_dataset = get_dataset_x(input_img)
+     # test_dataloader = DataLoader(test_dataset, batch_size=1024, shuffle=False)
+     # model.eval()
+     # predicted_labels_list = []
+     # for data in progress.tqdm(test_dataloader):
+     #     with torch.no_grad():
+     #         data = data.to(device)
+     #         predicted_labels_list += [model(data)]
+     # predicted_labels = torch.cat(predicted_labels_list)
+     # x = int(math.sqrt(predicted_labels.shape[0]))
+     # predicted_labels = predicted_labels.reshape([x, x, 2]).detach()
+     # label_img = predicted_labels[:, :, :1].cpu().numpy()
+     # label_img -= label_img.min()
+     # label_img /= label_img.max()
+     # label_img = (label_img * 255).astype(np.uint8)
+     # mask = np.array(label_img > 180, np.uint8)
+     # contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
+
+     # gt_contours = find_contours(labeled_input_img[x:x+w, y:y+h], cropped_img, np.array([59, 76, 160]))
+
+
+
+     # def extract_contour_center(cnt):
+     #     M = cv2.moments(cnt)
+     #     cx = int(M['m10'] / M['m00'])
+     #     cy = int(M['m01'] / M['m00'])
+     #     return cx, cy
+
+     # filter_width = 50
+     # filter_stride = 2
+
+     # def rev_window_transform(point):
+     #     wx, wy = point
+     #     x = int(filter_width / 2) + wx * filter_stride
+     #     y = int(filter_width / 2) + wy * filter_stride
+     #     return x, y
+
+     # nonempty_contours = filter(lambda cnt: cv2.contourArea(cnt) != 0, contours)
+     # windows = map(extract_contour_center, nonempty_contours)
+     # points = list(map(rev_window_transform, windows))
+     # for x, y in points:
+     #     blank_img_copy = cv2.circle(blank_img_copy, (x, y), radius=4, color=(255, 0, 0), thickness=-1)
+     # print(f"pointlist: {len(points)}")
+     # return blank_img_copy, len(points)
+
+
+ demo = gr.Interface(count_barnacles,
+                     inputs=[
+                         gr.Image(shape=(500, 500), type="numpy", label="Input Image"),
+                     ],
+                     outputs=[
+                         gr.Image(shape=(500, 500), type="numpy", label="Annotated Image"),
+                         gr.Number(label="Predicted Number of Barnacles"),
+                         # gr.Number(label="Actual Number of Barnacles"),
+                         # gr.Number(label="Custom Metric")
+                     ])
+                     # examples="examples")
+ demo.queue(concurrency_count=10).launch()
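
The core of `count_barnacles` is the pair of steps at its center: generate SAM masks for the cropped plot, then count only the masks whose `predicted_iou` exceeds 0.95. The snippet below is a minimal, self-contained sketch of that step for local experimentation. It is not part of the commit; it assumes the `sam_vit_h_4b8939.pth` checkpoint sits in the working directory, uses one of the example images added in this commit, and skips the wire-frame crop that the app performs first.

```python
# Hedged local sketch: SAM automatic masks + predicted-IoU threshold, as in count_barnacles().
import cv2
import torch
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"
sam = sam_model_registry["default"](checkpoint="./sam_vit_h_4b8939.pth")
sam.to(device)
mask_generator = SamAutomaticMaskGenerator(sam)

# examples/new_blank_image.png is added in this commit; any RGB barnacle photo works.
image = cv2.cvtColor(cv2.imread("examples/new_blank_image.png"), cv2.COLOR_BGR2RGB)

masks = mask_generator.generate(image)
count = sum(m["predicted_iou"] > 0.95 for m in masks)
print(f"{count} masks above the 0.95 predicted-IoU threshold")
```

Note that the ViT-H checkpoint is roughly 2.4 GB (see the LFS pointer for `sam_vit_h_4b8939.pth` below), and automatic mask generation is considerably slower on CPU, which is why `app.py` selects CUDA when `torch.cuda.is_available()`.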
examples/new_blank_image.png ADDED
examples/without_crop.png ADDED
examples/without_crop2.png ADDED
flagged/log.csv ADDED
@@ -0,0 +1,2 @@
+ input_img,output 0,output 1,flag,username,timestamp
+ ,,0,,,2023-02-22 15:46:27.797108
model_best_epoch_4_59.62.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f8ff81d32b5d8e4d9776386e6cbbe6baada9ea7ad95584d871bac1fea0a843cd
+ size 94371235
requirements.txt ADDED
@@ -0,0 +1,7 @@
+ opencv-python
+ numpy
+ --extra-index-url https://download.pytorch.org/whl/cu113
+ torch
+ torchvision
+ gradio
+ git+https://github.com/facebookresearch/segment-anything.git
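
The `--extra-index-url` line lets pip resolve CUDA 11.3 builds of `torch` and `torchvision`; at runtime `app.py` still checks `torch.cuda.is_available()` and falls back to CPU. A small, hypothetical post-install check (not part of the commit) that the pinned stack imports cleanly:

```python
# Hypothetical sanity check after `pip install -r requirements.txt`.
import cv2
import gradio
import torch
import segment_anything  # installed from the GitHub URL above

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("opencv", cv2.__version__, "| gradio", gradio.__version__)
```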
sam_vit_h_4b8939.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7bf3b02f3ebf1267aba913ff637d9a2d5c33d3173bb679e46d9f338c26f262e
+ size 2564550879