a_id (int64) | a_body (string) | a_creation_date (string) | a_last_activity_date (string) | a_last_edit_date (string, nullable) | a_tags (float64) | q_id (int64) | q_body (string) | q_creation_date (string) | q_last_activity_date (string) | q_last_edit_date (string, nullable) | q_tags (string) | _arxiv_links (string) | _n_arxiv_links (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
56,261,828 | <p>It is important to understand how bounding box regression works in object detection. <strong>In bounding box regression, what the model predicts is the <em>OFFSET</em> of the prediction box w.r.t. the anchor box (or proposal box)</strong>. Anchor boxes and proposal boxes serve a similar function, but they are generated in different ways. Anchor boxes serve as <code>references</code> for the final prediction boxes (that is possibly why they are named anchor boxes).</p>
<p><a href="https://i.stack.imgur.com/jQ1pV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jQ1pV.png" alt="enter image description here"></a> </p>
<p>As shown in the above figure, the model's output is <code>Delta(x1,y1,x2,y2)</code>; given this <strong>offset</strong> together with the anchor box, the coordinates of the prediction box can be calculated.</p>
<p>So <code>box_concat</code> is actually the model's <strong>offset</strong> prediction; together with <code>anchor_concat</code>, the final bounding box coordinates can be calculated. This is illustrated in the decoding function for the above model's predictions. See <a href="https://github.com/lvaleriu/ssd_keras-1/blob/cdc514867db34c0c0b231c1efc15862f5ff30f9d/ssd_box_encode_decode_utils.py#L268" rel="nofollow noreferrer">here</a>.</p>
<pre><code>y_pred (array): The prediction output of the SSD model, expected to be a Numpy array
of shape `(batch_size, #boxes, #classes + 4 + 4 + 4)`, where `#boxes` is the total number of
boxes predicted by the model per image and the last axis contains
`[one-hot vector for the classes, 4 predicted coordinate offsets, 4 anchor box coordinates, 4 variances]`.
</code></pre>
<p>As illustrated above, <code>box_concat</code> contains <code>4 predicted coordinate offsets</code>.</p>
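<p>For concreteness, here is a minimal sketch (my own illustration, not the repository's code, and ignoring the variance terms) of how such offsets are commonly decoded, assuming anchors in centroid <code>(cx, cy, w, h)</code> format:</p>
<pre><code>import numpy as np

def decode_boxes(offsets, anchors):
    # offsets: (n, 4) predicted (dx, dy, dw, dh); anchors: (n, 4) as (cx, cy, w, h)
    dx, dy, dw, dh = offsets.T
    acx, acy, aw, ah = anchors.T
    cx = acx + dx * aw       # shift center by a fraction of anchor width
    cy = acy + dy * ah       # shift center by a fraction of anchor height
    w = aw * np.exp(dw)      # scale width multiplicatively
    h = ah * np.exp(dh)      # scale height multiplicatively
    return np.stack([cx, cy, w, h], axis=-1)
</code></pre>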
<p>If you wonder where this parameterization comes from, it dates back to the famous <code>R-CNN</code> <a href="https://arxiv.org/pdf/1311.2524.pdf" rel="nofollow noreferrer">paper</a> (Appendix C, Bounding Box Regression).</p> | 2019-05-22 17:05:42.840000+00:00 | 2019-05-22 17:05:42.840000+00:00 | null | null | 56,259,670 | <p>I am trying to understand why we need both anchor boxes and box coordinates.</p>
<p>What I understand so far is that SSD gives you an output of two things. One is the class score and the other is the bounding box coordinates. My understanding of anchor boxes so far is that the model generates bounding boxes of different aspect ratios and does some NMS to get good bounding boxes. I thought that anchor boxes and box coordinates were the same. But in this code we have three outputs: class scores, box coordinates, and anchor boxes. More specifically, what are the anchor boxes returning? Are they returning the set of all bounding boxes of different aspect ratios? Then how is that different from the box coordinates? Maybe I am misunderstanding anchor boxes. Are the anchor boxes acting like a region proposal network, with the box coordinates returning the best boxes from that anchor box list?</p>
<p>My main confusion here is the difference between anchor_concat and boxes_concat.</p>
<p>I am trying to understand the code from:</p>
<p><a href="https://github.com/lvaleriu/ssd_keras-1/blob/master/keras_ssd7.py" rel="nofollow noreferrer">https://github.com/lvaleriu/ssd_keras-1/blob/master/keras_ssd7.py</a></p>
<pre class="lang-py prettyprint-override"><code># Build the convolutional predictor layers on top of conv layers 4, 5, 6, and 7.
# We build two predictor layers on top of each of these layers: One for class prediction (classification), one for box coordinate prediction (localization)
# We predict `n_classes` confidence values for each box, hence the `classes` predictors have depth `n_boxes * n_classes`
# We predict 4 box coordinates for each box, hence the `boxes` predictors have depth `n_boxes * 4`
# Output shape of `classes`: `(batch, height, width, n_boxes * n_classes)`
classes4 = Conv2D(n_boxes[0] * n_classes, (3, 3), strides=(1, 1), padding="valid", kernel_initializer='he_normal', kernel_regularizer=l2(l2_reg), name='classes4')(conv4)
classes5 = Conv2D(n_boxes[1] * n_classes, (3, 3), strides=(1, 1), padding="valid", kernel_initializer='he_normal', kernel_regularizer=l2(l2_reg), name='classes5')(conv5)
classes6 = Conv2D(n_boxes[2] * n_classes, (3, 3), strides=(1, 1), padding="valid", kernel_initializer='he_normal', kernel_regularizer=l2(l2_reg), name='classes6')(conv6)
classes7 = Conv2D(n_boxes[3] * n_classes, (3, 3), strides=(1, 1), padding="valid", kernel_initializer='he_normal', kernel_regularizer=l2(l2_reg), name='classes7')(conv7)
# Output shape of `boxes`: `(batch, height, width, n_boxes * 4)`
boxes4 = Conv2D(n_boxes[0] * 4, (3, 3), strides=(1, 1), padding="valid", kernel_initializer='he_normal', kernel_regularizer=l2(l2_reg), name='boxes4')(conv4)
boxes5 = Conv2D(n_boxes[1] * 4, (3, 3), strides=(1, 1), padding="valid", kernel_initializer='he_normal', kernel_regularizer=l2(l2_reg), name='boxes5')(conv5)
boxes6 = Conv2D(n_boxes[2] * 4, (3, 3), strides=(1, 1), padding="valid", kernel_initializer='he_normal', kernel_regularizer=l2(l2_reg), name='boxes6')(conv6)
boxes7 = Conv2D(n_boxes[3] * 4, (3, 3), strides=(1, 1), padding="valid", kernel_initializer='he_normal', kernel_regularizer=l2(l2_reg), name='boxes7')(conv7)
# Generate the anchor boxes
# Output shape of `anchors`: `(batch, height, width, n_boxes, 8)`
anchors4 = AnchorBoxes(img_height, img_width, this_scale=scales[0], next_scale=scales[1], aspect_ratios=aspect_ratios[0], two_boxes_for_ar1=two_boxes_for_ar1, this_steps=steps[0], this_offsets=offsets[0], limit_boxes=limit_boxes, variances=variances, coords=coords, normalize_coords=normalize_coords, name='anchors4')(boxes4)
anchors5 = AnchorBoxes(img_height, img_width, this_scale=scales[1], next_scale=scales[2], aspect_ratios=aspect_ratios[1], two_boxes_for_ar1=two_boxes_for_ar1, this_steps=steps[1], this_offsets=offsets[1], limit_boxes=limit_boxes, variances=variances, coords=coords, normalize_coords=normalize_coords, name='anchors5')(boxes5)
anchors6 = AnchorBoxes(img_height, img_width, this_scale=scales[2], next_scale=scales[3], aspect_ratios=aspect_ratios[2], two_boxes_for_ar1=two_boxes_for_ar1, this_steps=steps[2], this_offsets=offsets[2], limit_boxes=limit_boxes, variances=variances, coords=coords, normalize_coords=normalize_coords, name='anchors6')(boxes6)
anchors7 = AnchorBoxes(img_height, img_width, this_scale=scales[3], next_scale=scales[4], aspect_ratios=aspect_ratios[3], two_boxes_for_ar1=two_boxes_for_ar1, this_steps=steps[3], this_offsets=offsets[3], limit_boxes=limit_boxes, variances=variances, coords=coords, normalize_coords=normalize_coords, name='anchors7')(boxes7)
# Reshape the class predictions, yielding 3D tensors of shape `(batch, height * width * n_boxes, n_classes)`
# We want the classes isolated in the last axis to perform softmax on them
classes4_reshaped = Reshape((-1, n_classes), name='classes4_reshape')(classes4)
classes5_reshaped = Reshape((-1, n_classes), name='classes5_reshape')(classes5)
classes6_reshaped = Reshape((-1, n_classes), name='classes6_reshape')(classes6)
classes7_reshaped = Reshape((-1, n_classes), name='classes7_reshape')(classes7)
# Reshape the box coordinate predictions, yielding 3D tensors of shape `(batch, height * width * n_boxes, 4)`
# We want the four box coordinates isolated in the last axis to compute the smooth L1 loss
boxes4_reshaped = Reshape((-1, 4), name='boxes4_reshape')(boxes4)
boxes5_reshaped = Reshape((-1, 4), name='boxes5_reshape')(boxes5)
boxes6_reshaped = Reshape((-1, 4), name='boxes6_reshape')(boxes6)
boxes7_reshaped = Reshape((-1, 4), name='boxes7_reshape')(boxes7)
# Reshape the anchor box tensors, yielding 3D tensors of shape `(batch, height * width * n_boxes, 8)`
anchors4_reshaped = Reshape((-1, 8), name='anchors4_reshape')(anchors4)
anchors5_reshaped = Reshape((-1, 8), name='anchors5_reshape')(anchors5)
anchors6_reshaped = Reshape((-1, 8), name='anchors6_reshape')(anchors6)
anchors7_reshaped = Reshape((-1, 8), name='anchors7_reshape')(anchors7)
# Concatenate the predictions from the different layers and the associated anchor box tensors
# Axis 0 (batch) and axis 2 (n_classes or 4, respectively) are identical for all layer predictions,
# so we want to concatenate along axis 1
# Output shape of `classes_merged`: (batch, n_boxes_total, n_classes)
classes_concat = Concatenate(axis=1, name='classes_concat')([classes4_reshaped, classes5_reshaped, classes6_reshaped, classes7_reshaped])
# Output shape of `boxes_final`: (batch, n_boxes_total, 4)
boxes_concat = Concatenate(axis=1, name='boxes_concat')([boxes4_reshaped, boxes5_reshaped, boxes6_reshaped, boxes7_reshaped])
# Output shape of `anchors_final`: (batch, n_boxes_total, 8)
anchors_concat = Concatenate(axis=1, name='anchors_concat')([anchors4_reshaped, anchors5_reshaped, anchors6_reshaped, anchors7_reshaped])
# The box coordinate predictions will go into the loss function just the way they are,
# but for the class predictions, we'll apply a softmax activation layer first
classes_softmax = Activation('softmax', name='classes_softmax')(classes_concat)
# Concatenate the class and box coordinate predictions and the anchors to one large predictions tensor
# Output shape of `predictions`: (batch, n_boxes_total, n_classes + 4 + 8)
predictions = Concatenate(axis=2, name='predictions')([classes_softmax, boxes_concat, anchors_concat])
</code></pre> | 2019-05-22 14:55:29.107000+00:00 | 2019-05-22 17:05:42.840000+00:00 | 2019-05-22 15:29:22.460000+00:00 | keras|deep-learning|object-detection | ['https://i.stack.imgur.com/jQ1pV.png', 'https://github.com/lvaleriu/ssd_keras-1/blob/cdc514867db34c0c0b231c1efc15862f5ff30f9d/ssd_box_encode_decode_utils.py#L268', 'https://arxiv.org/pdf/1311.2524.pdf'] | 3 |
55,735,130 | <p>One CNN architecture that achieves your goal is the U-Net, originally introduced by <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">this</a> paper.</p>
<p>It uses a sequence of convolutional and pooling layers to create a pyramid. Note that it's not an image pyramid of the input image, but the idea is to learn what is useful at different scales, not directly feed the pyramid.</p>
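<p>If you do want an explicit pyramid of the input itself, a minimal sketch (my own illustration, assuming 256x256 RGB inputs) could look like this:</p>
<pre><code>from keras.layers import Input, AveragePooling2D
from keras.models import Model

inp = Input(shape=(256, 256, 3))
level1 = AveragePooling2D(pool_size=(2, 2))(inp)     # 128x128: averaging = smoothing, stride 2 = subsampling
level2 = AveragePooling2D(pool_size=(2, 2))(level1)  # 64x64
pyramid = Model(inputs=inp, outputs=[inp, level1, level2])
</code></pre>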
<p>Now, think about how <code>AveragePooling2D</code> works. You select a patch of the original image, replace it with its average, and then move to the next patch. This is exactly what you describe in generating an image pyramid: smoothing is achieved by the averaging, and replacing the patch with one pixel is the downsampling.</p> | 2019-04-17 19:58:56.493000+00:00 | 2019-04-17 19:58:56.493000+00:00 | null | null | 55,734,974 | <p>To get scale invariance (or to detect objects at any scale) in my CNN model, I want to implement <a href="https://en.wikipedia.org/wiki/Pyramid_(image_processing)" rel="nofollow noreferrer">Image Pyramids</a>. As the article explains, while creating image pyramids, the image is subjected to repeated smoothing and subsampling.</p>
<p>I am implementing a CNN in Keras. Is there a way with Keras to implement image pyramids? I read <a href="https://stackoverflow.com/questions/48420092/how-to-down-scale-image-in-keras-for-an-image-pyramid">one of the SO posts</a> that says to use <a href="https://keras.io/layers/pooling/#averagepooling2d" rel="nofollow noreferrer">AveragePooling2D</a> to achieve the pyramid effect.</p>
<p>Is that even correct? How could the <code>AveragePooling2D</code> layer give the pyramid effect?</p> | 2019-04-17 19:47:27.650000+00:00 | 2019-04-17 19:58:56.493000+00:00 | null | python|opencv|keras|computer-vision|conv-neural-network | ['https://arxiv.org/abs/1505.04597'] | 1 |
71,748,356 | <p>It seems that the images in your dataset might not all have the same size. Note that in the ViT model (<a href="https://arxiv.org/abs/2010.11929" rel="nofollow noreferrer">https://arxiv.org/abs/2010.11929</a>) you are using, the head is an MLP model.</p>
<p>If that is not the case, it is worth checking whether your labels all have the expected dimensions: presumably, MMSegmentation expects the output to be just the annotation map (a 2D array).</p>
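<p>As a quick sanity check, something like the following sketch could be run over your dataset (a hypothetical helper, not MMSegmentation API; <code>samples</code> stands in for your own (image, annotation) path pairs):</p>
<pre><code>import numpy as np
from PIL import Image

for img_path, ann_path in samples:
    img = np.array(Image.open(img_path))
    ann = np.array(Image.open(ann_path))
    assert ann.ndim == 2, f"{ann_path}: shape {ann.shape}, expected a 2D map"
    assert ann.shape == img.shape[:2], f"{ann_path}: {ann.shape} != image {img.shape[:2]}"
</code></pre>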
<p>It is recommended that you revise your dataset and prepare the annotation maps accordingly.</p> | 2022-04-05 08:16:04.603000+00:00 | 2022-04-05 08:16:04.603000+00:00 | null | null | 71,742,524 | <p>I am using the MMSegmentation library to train my model for instance image segmentation. During training, I create the model (Vision Transformer), and when I try to train the model using the code shown below:</p>
<p>I get this error:</p>
<blockquote>
<p>RuntimeError: Caught RuntimeError in DataLoader worker process 0. Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 81, in collate
for key in batch[0]
File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 81, in <dictcomp>
for key in batch[0]
File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 59, in collate
stacked.append(default_collate(padded_samples))
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
return torch.stack(batch, 0, out=out)</p>
<p>RuntimeError: stack expects each tensor to be equal size, but got [1, 256, 256, 256] at entry 0 and [1, 256, 256] at entry 3</p>
</blockquote>
<p>I must also mention that I have tested my own dataset with other models available in their library, and all of them work properly.</p>
<p>tried :</p>
<pre><code>
model = build_segmentor(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
train_segmentor(model, datasets, cfg, distributed=False, validate=True, meta=dict())
</code></pre> | 2022-04-04 19:07:57.553000+00:00 | 2022-05-30 18:25:29.667000+00:00 | 2022-05-30 18:25:29.667000+00:00 | python|image-processing|deep-learning|pytorch|image-segmentation | ['https://arxiv.org/abs/2010.11929'] | 1 |
1,270,361 | <p>The best I could find was this paper: <a href="http://arxiv.org/abs/math/9411215" rel="nofollow noreferrer">Tiling a rectangle with the fewest squares</a>.
The paper is an interesting read, though at times it delves deep into theory territory with talk of "universal constants". I am not certain whether the question of "can a rectangle of size m by n be tiled with k squares" is NP-complete. As noted in another response, your question resembles packing problems which are NP-complete. And, of course, your problem is a generalization of the one addressed in this paper, since you are dealing with non-rectangular areas. You could start by breaking your area up into the minimum number of rectangles, another interesting problem in itself. And finally, even if you could do that efficiently, I'm not sure if tiling those rectangles optimally would result in an overall optimal tiling.</p>
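<p>The paper's suggestion is greedy; below is a minimal sketch (my own illustration, not from the paper) that repeatedly places the largest square fitting at each uncovered cell of a boolean occupancy grid:</p>
<pre><code>def greedy_tile(free):
    # free: 2D list of bools, True = uncovered; returns a list of (row, col, size)
    rows, cols = len(free), len(free[0])

    def fits(r, c, s):
        return (r + s <= rows and c + s <= cols and
                all(free[i][j] for i in range(r, r + s) for j in range(c, c + s)))

    squares = []
    for r in range(rows):
        for c in range(cols):
            if free[r][c]:
                s = 1
                while fits(r, c, s + 1):  # grow the square while it still fits
                    s += 1
                for i in range(r, r + s):
                    for j in range(c, c + s):
                        free[i][j] = False  # mark the placed square as covered
                squares.append((r, c, s))
    return squares
</code></pre>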
<p>As the author notes, such a greedy algorithm is a good place to start: just put down the biggest square you can until the area is full.</p> | 2009-08-13 06:31:35.997000+00:00 | 2009-08-13 06:31:35.997000+00:00 | null | null | 1,268,826 | <p>Imagine you have a canvas, and in this canvas there are already some objects. How can you find the minimal way to cover the "uncovered" area with squares, not overlapping each other, completely filling the canvas?</p>
<p>In my case the "canvas" is an HTML div container and the objects are nested div containers.
Could look like this: <a href="http://www.encodechain.com/demo/200908_optimize.png" rel="nofollow noreferrer">http://www.encodechain.com/demo/200908_optimize.png</a>
On the left there's the "start" and on the right there's one possible first "step"...</p>
<p>I know that there's an algorithm for this, but currently I can't remember the name.</p> | 2009-08-12 21:29:49.650000+00:00 | 2014-05-07 17:20:49.637000+00:00 | 2014-05-07 17:20:49.637000+00:00 | algorithm|mathematical-optimization | ['http://arxiv.org/abs/math/9411215'] | 1 |
65,548,833 | <p>One more reason for unstable training could be that you are using a very small batch size, i.e., <code>batch_size=2</code>. At least use <code>batch_size=32</code>. A value of 2 is too small for batch normalization to reliably estimate the training distribution statistics (mean and variance). These mean and variance values are used to first normalize the distribution, followed by learning of the <code>beta</code> and <code>gamma</code> parameters (the actual distribution).</p>
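<p>A quick toy illustration (my own, not from the linked material) of why tiny batches hurt: the per-batch mean estimate is far noisier at batch size 2 than at 32.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=100000)  # stand-in for one neuron's activations

for bs in (2, 32):
    n = (len(acts) // bs) * bs
    batch_means = acts[:n].reshape(-1, bs).mean(axis=1)
    print(f"batch_size={bs:2d}: std of per-batch means = {batch_means.std():.3f}")
# prints roughly 0.707 for bs=2 vs 0.177 for bs=32 (scales as 1/sqrt(bs))
</code></pre>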
<p>Check the following links for more details:</p>
<ol>
<li><p>In the <strong>introduction</strong> and <strong>related works</strong>, the authors criticized BatchNorm and do check <strong>figure 1</strong>: <a href="https://arxiv.org/pdf/1803.08494.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1803.08494.pdf</a></p>
</li>
<li><p>Nice article on "Curse of Batch Norm": <a href="https://towardsdatascience.com/curse-of-batch-normalization-8e6dd20bc304" rel="nofollow noreferrer">https://towardsdatascience.com/curse-of-batch-normalization-8e6dd20bc304</a></p>
</li>
</ol> | 2021-01-03 10:34:32.897000+00:00 | 2021-01-03 10:34:32.897000+00:00 | null | null | 44,805,600 | <p>I am trying to fine-tune a model using keras, according to this description: <a href="https://keras.io/applications/#inceptionv3" rel="nofollow noreferrer">https://keras.io/applications/#inceptionv3</a><br>
However, during training I discovered that the output of the network does not remain constant after training when using the same input (while all relevant layers were frozen), which I do not want.</p>
<p>I constructed the following toy example to investigate this:</p>
<pre><code>import keras.applications.resnet50 as resnet50
from keras.layers import Dense, Flatten, Input
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
import numpy as np
# data
i = np.random.rand(1,224,224,3)
X = np.random.rand(32,224,224,3)
y = to_categorical(np.random.randint(751, size=32), num_classes=751)
# model
base_model = resnet50.ResNet50(weights='imagenet', include_top=False, input_tensor=Input(shape=(224,224,3)))
layer = base_model.output
layer = Flatten(name='myflatten')(layer)
layer = Dense(751, activation='softmax', name='fc751')(layer)
model = Model(inputs=base_model.input, outputs=layer)
# freeze all layers
for layer in model.layers:
    layer.trainable = False
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
# features and predictions before training
feat0 = base_model.predict(i)
pred0 = model.predict(i)
weights0 = model.layers[-1].get_weights()
# before training output is consistent
feat00 = base_model.predict(i)
pred00 = model.predict(i)
print(np.allclose(feat0, feat00)) # True
print(np.allclose(pred0, pred00)) # True
# train
model.fit(X, y, batch_size=2, epochs=3, shuffle=False)
# features and predictions after training
feat1 = base_model.predict(i)
pred1 = model.predict(i)
weights1 = model.layers[-1].get_weights()
# these are not the same
print(np.allclose(feat0, feat1)) # False
# Optionally: printing shows they are in fact very different
# print(feat0)
# print(feat1)
# these are not the same
print(np.allclose(pred0, pred1)) # False
# Optionally: printing shows they are in fact very different
# print(pred0)
# print(pred1)
# these are the same and loss does not change during training
# so layers were actually frozen
print(np.allclose(weights0[0], weights1[0])) # True
# Check again if all layers were in fact untrainable
for layer in model.layers:
    assert layer.trainable == False # All succeed
# Being overly cautious also checking base_model
for layer in base_model.layers:
    assert layer.trainable == False # All succeed
</code></pre>
<p>Since I froze all layers I fully expected both the predictions and the features to be equal, but surprisingly they aren't.</p>
<p>So I am probably making some kind of mistake, but I can't figure out what... Any suggestions would be greatly appreciated!</p>
48,236,022 | <p>The following is a generic procedure of trimming down R objects from data that might not be necessary for the target use. It's heuristic in nature, but I've already applied it successfully twice, and with a bit of luck it works quite well.</p>
<p>You can measure object size using a function called <a href="https://stat.ethz.ch/R-manual/R-devel/library/utils/html/object.size.html" rel="noreferrer"><code>object.size</code></a>:</p>
<pre><code>> object.size(mod_fit)
528616 bytes
</code></pre>
<p>Indeed, quite a lot for a linear model with four predictors. You can inspect what's inside the object using, for example, the <a href="https://stat.ethz.ch/R-manual/R-devel/library/utils/html/str.html" rel="noreferrer"><code>str</code></a> function:</p>
<pre><code>> str(mod_fit)
List of 23
$ method : chr "glm"
$ modelInfo :List of 15
..$ label : chr "Generalized Linear Model"
..$ library : NULL
..$ loop : NULL
..$ type : chr [1:2] "Regression" "Classification"
..$ parameters:'data.frame': 1 obs. of 3 variables:
.. ..$ parameter: Factor w/ 1 level "parameter": 1
.. ..$ class : Factor w/ 1 level "character": 1
[…]
$ coefnames : chr [1:4] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width"
$ xlevels : Named list()
- attr(*, "class")= chr [1:2] "train" "train.formula"
</code></pre>
<p>Quite a lot of data. So, let's check how much space each of these elements take:</p>
<pre><code>> sort(sapply(mod_fit, object.size))
pred preProcess yLimits dots maximize method
0 0 0 40 48 96
modelType metric perfNames xlevels coefnames levels
104 104 160 192 296 328
call bestTune results times resample resampledCM
936 1104 1584 2024 2912 4152
trainingData terms control modelInfo finalModel
5256 6112 29864 211824 259456
</code></pre>
<p>Now we can try removing elements from this object one-by-one, and check which are necessary for <code>predict</code> to work, starting from the largest:</p>
<pre><code>> test_obj <- mod_fit; test_obj$finalModel <- NULL; predict(test_obj, iris2)
Error in if (modelFit$problemType == "Classification") { :
argument is of length zero
</code></pre>
<p>Whoops, <code>finalModel</code> seems important. <em>Any</em> kind of error here tells you that you can't remove the element. How about, let say, <code>control</code>?</p>
<pre><code>> test_obj <- mod_fit; test_obj$control <- NULL; predict(test_obj, iris2)
[1] versicolor versicolor versicolor versicolor versicolor versicolor
[7] versicolor versicolor versicolor versicolor versicolor versicolor
[13] versicolor versicolor versicolor versicolor versicolor versicolor
[…]
[97] virginica virginica virginica virginica
Levels: versicolor virginica
</code></pre>
<p>So, it seems that <code>control</code> is not needed. You can perform this process recursively, for example:</p>
<pre><code>> sort(sapply(mod_fit$finalModel, object.size))
offset contrasts param rank
0 0 40 48
[…]
model family
17056 163936
> sort(sapply(mod_fit$finalModel$family, object.size))
link family valideta linkfun linkinv mu.eta dev.resids
96 104 272 560 560 560 1992
variance validmu initialize aic simulate
2064 6344 18712 27512 103888
> test_obj <- mod_fit; test_obj$finalModel$family$simulate <- NULL; predict(test_obj, iris2)
[1] versicolor versicolor versicolor versicolor versicolor versicolor
[…]
[97] virginica virginica virginica virginica
Levels: versicolor virginica
</code></pre>
<p>With enough attempts you will know which parts of the object are necessary and which are not, and remove them before storing the model.</p>
<p>Note: while this may reduce unnecessary parts of the object, you may accidentally remove parts that are only <em>sometimes</em> used in prediction. For simple models that always work the same way, like <code>glm</code>, this should not happen, though.</p>
<p>Also, the result of this process is not guaranteed not to leak information about the model that you don't want the model's user to see. There is no such guarantee in general, and there are methods of <a href="https://arxiv.org/abs/1609.02943" rel="noreferrer">reconstructing significant information about models and training data even from black-box models that are not usually easy to interpret</a>.</p> | 2018-01-13 01:06:38.110000+00:00 | 2018-01-13 01:06:38.110000+00:00 | null | null | 47,633,350 | <p>I would like to export the model below so another user can open it and use the <code>predict</code> function to predict classes on new observations. That is the only thing it will be used for. I can save mod_fit, but it will take up lots of space, and the end user can access information which I don't want. Is there any easy way?</p>
<pre><code>library(caret)
library(dplyr)
iris2 <- iris %>% filter(Species != "setosa") %>% mutate(Species = as.character(Species))
mod_fit <- train(Species ~., data = iris2, method = "glm")
</code></pre> | 2017-12-04 12:25:09.033000+00:00 | 2018-01-13 01:06:38.110000+00:00 | 2018-01-13 00:25:14.623000+00:00 | r|r-caret | ['https://stat.ethz.ch/R-manual/R-devel/library/utils/html/object.size.html', 'https://stat.ethz.ch/R-manual/R-devel/library/utils/html/str.html', 'https://arxiv.org/abs/1609.02943'] | 3 |
42,529,601 | <p>You can start by reading this paper: <a href="https://arxiv.org/abs/1412.6572" rel="nofollow noreferrer">https://arxiv.org/abs/1412.6572</a> (for example)</p>
<p>It explains one of the ways to generate adversarial examples by computing gradients of the loss function with respect to inputs. </p>
<p>Have a look at <a href="https://www.tensorflow.org/versions/master/api_docs/python/train/gradient_computation" rel="nofollow noreferrer"><code>tf.gradients()</code></a></p>
<p>Once you have defined your loss function, which is for example cross entropy, you do something like:</p>
<pre><code>grads = tf.gradients(loss, [x])[0]   # gradient of the loss w.r.t. the input image
signs = tf.sign(grads)               # keep only the direction of each pixel's gradient
epsilon = tf.constant(0.25)          # perturbation magnitude
x_adversarial = tf.add(tf.multiply(epsilon, signs), x)  # x + epsilon * sign(grad)
</code></pre>
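<p>One detail worth adding (my suggestion, not from the paper excerpt): clip the result back to the valid input range, assuming inputs are normalized to [0, 1]:</p>
<pre><code>x_adversarial = tf.clip_by_value(x_adversarial, 0.0, 1.0)  # keep perturbed pixels valid
</code></pre>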
<p><code>x_adversarial</code> will be your sneaky image. You can play with the <code>epsilon</code> value, which sets the magnitude of the added noise. </p> | 2017-03-01 10:45:01.700000+00:00 | 2017-03-01 10:45:01.700000+00:00 | null | null | 42,524,009 | <p>In case any of you don't know, adversarial images are images that belong to a certain class, but then are distorted without any visually perceptive difference to the human eye, but the network misunderstandingly recognizes it in a completely different class.</p>
<p>More information about it here:
<a href="http://karpathy.github.io/2015/03/30/breaking-convnets/" rel="nofollow noreferrer">http://karpathy.github.io/2015/03/30/breaking-convnets/</a></p>
<p>Using TensorFlow, I have learned a lot about convolutional neural networks.</p>
<pre><code>def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
</code></pre>
<p>The challenge is to input an image of the number 2, also labelled as '2', and somehow perturb this image so that the network recognizes it as '6', changing the pixels so slightly that the difference is imperceptible.</p>
<p>Anyone have any idea where to start with this?</p> | 2017-03-01 05:37:26.857000+00:00 | 2017-03-01 10:45:01.700000+00:00 | null | python|image-processing|machine-learning|neural-network|conv-neural-network | ['https://arxiv.org/abs/1412.6572', 'https://www.tensorflow.org/versions/master/api_docs/python/train/gradient_computation'] | 2 |
52,231,721 | <blockquote>
<p><strong>Exact match.</strong> This metric measures the percentage of predictions
that match any one of the ground truth answers exactly.</p>
</blockquote>
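<p>In code, that amounts to something like the following sketch (my own illustration with simple whitespace/case normalization; the official evaluation script also strips punctuation and articles):</p>
<pre><code>def exact_match(prediction, ground_truths):
    normalize = lambda s: " ".join(s.lower().strip().split())
    return float(any(normalize(prediction) == normalize(gt) for gt in ground_truths))

exact_match("Denver  Broncos", ["Denver Broncos", "The Broncos"])  # 1.0
</code></pre>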
<p>(Definition from the <a href="https://arxiv.org/pdf/1606.05250.pdf" rel="nofollow noreferrer">SQuAD paper</a>.)</p> | 2018-09-08 03:28:31.383000+00:00 | 2022-03-23 15:53:24.270000+00:00 | 2022-03-23 15:53:24.270000+00:00 | null | 52,229,059 | <p>The <a href="https://rajpurkar.github.io/SQuAD-explorer/" rel="noreferrer">SQuAD Challenge</a> ranks the results against the F1 and EM scores. There is a lot of information about the F1 score (a function of precision and recall). But what would the EM score be?</p>
61,585,275 | <p>I know I'm a little late to answer your question, but I hope this answer will help others! The package you're using is <code>DMwR</code>, which uses a <a href="https://arxiv.org/pdf/1106.1813.pdf" rel="nofollow noreferrer">combination of SMOTE and under-sampling of the majority class</a>.</p>
<p>I'd suggest you use <code>smotefamily::SMOTE</code> instead, as it only over-samples the minority class, so you wouldn't lose your majority-class observations.</p>
<p>I'm running my analysis in R-3.5.1 on Windows 10. I used SMOTE algorithm to work with this imbalanced data set.</p>
<p>I used following code to handle imbalanced data set</p>
<pre><code>library(DMwR)
data_code$target=as.factor(data_code$target) #Converted to factor as
# SMOTE works with factor data type
smoted_data <- SMOTE(target~., data_code, perc.over=100)
</code></pre>
<p>But after executing the code, I'm seeing that the count for 0 is 212 and for 1 is also 212, which is a significant reduction of my sample size. Can you suggest how I can handle this imbalanced data set with SMOTE without changing my data size?</p>
60,513,634 | <p>This question is a little old, so here goes an <em>updated</em> answer.</p>
<p>You should take a look into this paper <a href="https://arxiv.org/pdf/1507.03196v1.pdf" rel="nofollow noreferrer">DeepFont: Identify Your Font from An Image</a>. Basically it's a neural network trained on tons of images. It was presented commercially in <a href="https://www.youtube.com/watch?v=5eJ3IXYcw3M" rel="nofollow noreferrer">this video</a>.</p>
<p>Unfortunately, there is no code available. However, there is an independent implementation available <a href="https://github.com/robinreni96/Font_Recognition-DeepFont" rel="nofollow noreferrer">here</a>. You'll need to train it yourself, since weights are not provided, but the code is really easy to follow. In addition to this, consider that this implementation is only for a few fonts.</p>
<p>There is also a link to the dataset and a repo to generate more data.</p>
<p>Hope it helps.</p> | 2020-03-03 18:39:57.723000+00:00 | 2020-03-03 18:39:57.723000+00:00 | null | null | 1,708,858 | <p>As you may have heard of, there is an online font recognition service call WhatTheFont</p>
<p>I'm curious about the tech behind this tool. I think basically we can separate this into two parts:</p>
<ol>
<li><p>Generate images from font files of various formats; refer to <a href="http://www.fileinfo.com/filetypes/font" rel="noreferrer">http://www.fileinfo.com/filetypes/font</a> for a list of font file extensions.</p></li>
<li><p>Compare submitted image with all generated images</p></li>
</ol>
<p>I would appreciate any advice or Python code that implements the two steps above.</p>
52,392,344 | <p><em>Model interpretability</em> is a hyper-active and hyper-hot area of current research (think of holy grail, or something), which has been brought forward lately not least due to the (often tremendous) success of deep learning models in various tasks, plus the necessity of algorithmic fairness & accountability...</p>
<p>Apart from the intense theoretical research, there have been some toolboxes & libraries on a <strong>practical</strong> level lately, both for neural networks as well as for other general ML models; here is a partial list which arguably should keep you busy for some time:</p>
<ul>
<li><p>The Layer-wise Relevance Propagation (LRP) toolbox for neural networks (<a href="http://www.jmlr.org/papers/v17/15-618.html" rel="nofollow noreferrer">paper</a>, <a href="http://heatmapping.org/" rel="nofollow noreferrer">project page</a>, <a href="https://github.com/sebastian-lapuschkin/lrp_toolbox" rel="nofollow noreferrer">code</a>, <a href="https://github.com/VigneshSrinivasan10/interprettensor" rel="nofollow noreferrer">TF Slim wrapper</a>)</p>
</li>
<li><p>FairML: Auditing Black-Box Predictive Models, by Cloudera Fast Forward Labs (<a href="http://blog.fastforwardlabs.com/2017/03/09/fairml-auditing-black-box-predictive-models.html" rel="nofollow noreferrer">blog post</a>, <a href="https://arxiv.org/abs/1611.04967" rel="nofollow noreferrer">paper</a>, <a href="https://github.com/adebayoj/fairml" rel="nofollow noreferrer">code</a>)</p>
</li>
<li><p>LIME: Local Interpretable Model-agnostic Explanations (<a href="https://arxiv.org/abs/1602.04938" rel="nofollow noreferrer">paper</a>, <a href="https://github.com/marcotcr/lime" rel="nofollow noreferrer">code</a>, <a href="https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime" rel="nofollow noreferrer">blog post</a>, <a href="https://cran.r-project.org/web/packages/lime/index.html" rel="nofollow noreferrer">R port</a>)</p>
</li>
<li><p><a href="https://arxiv.org/abs/1602.07043" rel="nofollow noreferrer">Black Box Auditing</a> and <a href="https://arxiv.org/abs/1412.3756" rel="nofollow noreferrer">Certifying and Removing Disparate Impact</a> (authors' <a href="https://github.com/algofairness/BlackBoxAuditing" rel="nofollow noreferrer">Python code</a>)</p>
</li>
<li><p>A recent (November 2017) paper by Geoff Hinton, <a href="https://arxiv.org/abs/1711.09784" rel="nofollow noreferrer">Distilling a Neural Network Into a Soft Decision Tree</a>, with various independent <a href="https://paperswithcode.com/paper/distilling-a-neural-network-into-a-soft" rel="nofollow noreferrer">PyTorch implementations</a></p>
</li>
<li><p>SHAP: A Unified Approach to Interpreting Model Predictions (<a href="https://arxiv.org/abs/1705.07874" rel="nofollow noreferrer">paper</a>, authors' <a href="https://github.com/slundberg/shap" rel="nofollow noreferrer">Python code</a>, <a href="https://github.com/redichh/ShapleyR" rel="nofollow noreferrer">R package</a>)</p>
</li>
<li><p>Interpretable Convolutional Neural Networks (<a href="https://arxiv.org/abs/1710.00935" rel="nofollow noreferrer">paper</a>, authors' Matlab <a href="https://github.com/zqs1022/interpretableCNN" rel="nofollow noreferrer">code</a>)</p>
</li>
<li><p>Lucid, a collection of infrastructure and tools for research in neural network interpretability by Google (<a href="https://github.com/tensorflow/lucid" rel="nofollow noreferrer">code</a>; papers: <a href="https://distill.pub/2017/feature-visualization/" rel="nofollow noreferrer">Feature Visualization</a>, <a href="https://distill.pub/2018/building-blocks/" rel="nofollow noreferrer">The Building Blocks of Interpretability</a>)</p>
</li>
<li><p>Transparecy-by-Design (TbD) networks (<a href="https://arxiv.org/abs/1803.05268" rel="nofollow noreferrer">paper</a>, <a href="https://github.com/davidmascharka/tbd-nets" rel="nofollow noreferrer">code</a>, <a href="https://mybinder.org/v2/gh/davidmascharka/tbd-nets/binder?filepath=full-vqa-example.ipynb" rel="nofollow noreferrer">demo</a>)</p>
</li>
<li><p>SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability (<a href="https://arxiv.org/abs/1706.05806" rel="nofollow noreferrer">paper</a>, <a href="https://github.com/google/svcca" rel="nofollow noreferrer">code</a>, <a href="https://ai.googleblog.com/2017/11/interpreting-deep-neural-networks-with.html" rel="nofollow noreferrer">Google blog post</a>)</p>
</li>
<li><p>TCAV: Testing with Concept Activation Vectors (<a href="https://arxiv.org/abs/1711.11279" rel="nofollow noreferrer">ICML 2018 paper</a>, <a href="https://github.com/tensorflow/tcav" rel="nofollow noreferrer">Tensorflow code</a>)</p>
</li>
<li><p>Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization (<a href="https://arxiv.org/abs/1610.02391" rel="nofollow noreferrer">paper</a>, authors' <a href="https://github.com/ramprs/grad-cam" rel="nofollow noreferrer">Torch code</a>, <a href="https://github.com/Ankush96/grad-cam.tensorflow" rel="nofollow noreferrer">Tensorflow code</a>, <a href="https://github.com/meliketoy/gradcam.pytorch" rel="nofollow noreferrer">PyTorch code</a>, Keras <a href="http://nbviewer.jupyter.org/github/fchollet/deep-learning-with-python-notebooks/blob/master/5.4-visualizing-what-convnets-learn.ipynb" rel="nofollow noreferrer">example notebook</a>)</p>
</li>
<li><p>Network Dissection: Quantifying Interpretability of Deep Visual Representations, by MIT CSAIL (<a href="http://netdissect.csail.mit.edu/" rel="nofollow noreferrer">project page</a>, <a href="https://github.com/CSAILVision/NetDissect" rel="nofollow noreferrer">Caffe code</a>, <a href="https://github.com/CSAILVision/NetDissect-Lite" rel="nofollow noreferrer">PyTorch port</a>)</p>
</li>
<li><p>GAN Dissection: Visualizing and Understanding Generative Adversarial Networks, by MIT CSAIL (<a href="https://gandissect.csail.mit.edu/" rel="nofollow noreferrer">project page</a>, with links to paper & code)</p>
</li>
<li><p>Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions (<a href="https://arxiv.org/abs/1811.08011" rel="nofollow noreferrer">paper</a>, <a href="https://github.com/gudovskiy/e2x" rel="nofollow noreferrer">code</a>)</p>
</li>
<li><p>Anchors: High-Precision Model-Agnostic Explanations (<a href="https://homes.cs.washington.edu/%7Emarcotcr/aaai18.pdf" rel="nofollow noreferrer">paper</a>, <a href="https://github.com/marcotcr/anchor" rel="nofollow noreferrer">code</a>)</p>
</li>
<li><p>Diverse Counterfactual Explanations (DiCE) by Microsoft (<a href="https://arxiv.org/abs/1905.07697" rel="nofollow noreferrer">paper</a>, <a href="https://github.com/microsoft/dice" rel="nofollow noreferrer">code</a>, <a href="https://www.microsoft.com/en-us/research/blog/open-source-library-provides-explanation-for-machine-learning-through-diverse-counterfactuals/" rel="nofollow noreferrer">blog post</a>)</p>
</li>
<li><p>Axiom-based Grad-CAM (XGrad-CAM): Towards Accurate Visualization and Explanation of CNNs, a refinement of the existing Grad-CAM method (<a href="https://arxiv.org/abs/2008.02312" rel="nofollow noreferrer">paper</a>, <a href="https://github.com/Fu0511/XGrad-CAM" rel="nofollow noreferrer">code</a>)</p>
</li>
</ul>
<p>As interpretability moves toward the mainstream, there are already frameworks and toolboxes that incorporate more than one of the algorithms and techniques mentioned and linked above; here is an (again, partial) list for Python stuff:</p>
<ul>
<li>The ELI5 Python library (<a href="https://github.com/TeamHG-Memex/eli5" rel="nofollow noreferrer">code</a>, <a href="https://eli5.readthedocs.io/en/latest/" rel="nofollow noreferrer">documentation</a>)</li>
<li>The What-If tool by Google, a brand new (September 2018) feature of the open-source TensorBoard web application, which let users analyze an ML model without writing code (<a href="https://pair-code.github.io/what-if-tool/" rel="nofollow noreferrer">project page</a>, <a href="https://github.com/pair-code/what-if-tool" rel="nofollow noreferrer">code</a>, <a href="https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html" rel="nofollow noreferrer">blog post</a>)</li>
<li>tf-explain - interpretability methods as Tensorflow 2.0 callbacks (<a href="https://github.com/sicara/tf-explain" rel="nofollow noreferrer">code</a>, <a href="https://tf-explain.readthedocs.io/en/latest/" rel="nofollow noreferrer">docs</a>, <a href="https://blog.sicara.com/tf-explain-interpretability-tensorflow-2-9438b5846e35" rel="nofollow noreferrer">blog post</a>)</li>
<li>InterpretML by Microsoft (<a href="https://interpret.ml/" rel="nofollow noreferrer">homepage</a>, <a href="https://github.com/Microsoft/interpret" rel="nofollow noreferrer">code</a> still in alpha, <a href="https://arxiv.org/abs/1909.09223" rel="nofollow noreferrer">paper</a>)</li>
<li>Captum by Facebook AI - model interpetability for Pytorch (<a href="https://captum.ai/" rel="nofollow noreferrer">homepage</a>, <a href="https://github.com/pytorch/captum" rel="nofollow noreferrer">code</a>, <a href="https://ai.facebook.com/blog/open-sourcing-captum-a-model-interpretability-library-for-pytorch/" rel="nofollow noreferrer">intro blog post</a>)</li>
<li>Skater, by Oracle (<a href="https://github.com/oracle/Skater" rel="nofollow noreferrer">code</a>, <a href="https://oracle.github.io/Skater/" rel="nofollow noreferrer">docs</a>)</li>
<li>Alibi, by SeldonIO (<a href="https://github.com/SeldonIO/alibi" rel="nofollow noreferrer">code</a>, <a href="https://docs.seldon.io/projects/alibi/en/stable/" rel="nofollow noreferrer">docs</a>)</li>
<li><a href="http://aix360.mybluemix.net/" rel="nofollow noreferrer">AI Explainability 360</a>, by IBM (<a href="https://github.com/IBM/AIX360" rel="nofollow noreferrer">code</a>, <a href="https://www.ibm.com/blogs/research/2019/08/ai-explainability-360/" rel="nofollow noreferrer">blog post</a>)</li>
</ul>
<p>See also:</p>
<ul>
<li><p><a href="https://christophm.github.io/interpretable-ml-book/" rel="nofollow noreferrer">Interpretable Machine Learning</a>, an online Gitbook by Christoph Molnar with <a href="https://github.com/christophM/iml" rel="nofollow noreferrer">R code</a> available</p>
</li>
<li><p><a href="https://pbiecek.github.io/ema/" rel="nofollow noreferrer">Explanatory Model Analysis</a>, another online book by Przemyslaw Biecek and Tomasz Burzykowski, with both R & Python code snippets</p>
</li>
<li><p>A <a href="https://twitter.com/ledell/status/995930308947140608" rel="nofollow noreferrer">Twitter thread</a>, linking to several interpretation tools available for R.</p>
</li>
<li><p>A short (4 hrs) online course by Dan Becker at Kaggle, <a href="https://www.kaggle.com/learn/machine-learning-explainability" rel="nofollow noreferrer">Machine Learning Explainability</a>, and the accompanying <a href="https://towardsdatascience.com/why-model-explainability-is-the-next-data-science-superpower-b11b6102a5e0" rel="nofollow noreferrer">blog post</a></p>
</li>
<li><p>... and a <strong>whole bunch</strong> of resources in the <a href="https://github.com/jphall663/awesome-machine-learning-interpretability" rel="nofollow noreferrer">Awesome Machine Learning Interpetability</a> repo</p>
</li>
</ul>
<p><strong>NOTE</strong>: I no longer keep this answer updated; for updates, see my answer in the AI SE thread <a href="https://ai.stackexchange.com/questions/12870/which-explainable-artificial-intelligence-techniques-are-there/24138#24138">Which explainable artificial intelligence techniques are there?</a></p>
<p>Now, I am trying to find out "why" did the model predicted certain Y-variable? Meaning if I have weather data: X Variable: city, state, zip code, temp, year; Y Variable: rain, sun, cloudy, snow. I want to find out "why" did the model predict: rain, sun, cloudy, or snow respectfully. I used classification algorithms like multi-nominal, decision tree, ... etc</p>
<p>This may be a broad question but I need somewhere I can start researching. I can predict "what" but I can't see "why" it was predicted as rain, sun, cloudy, or snow label. Basically, I am trying to find the links between the variables that caused to predict the variable.</p>
<p>So far I thought of using correlation matrix, principal component analysis (that happened during model building process)...at least to see which are good predictors and which ones are not. Is there is a way to figure out "why" factor?</p> | 2018-09-18 17:32:35.923000+00:00 | 2021-04-16 10:10:29.017000+00:00 | 2020-08-18 11:03:43.940000+00:00 | machine-learning|data-science | ['http://www.jmlr.org/papers/v17/15-618.html', 'http://heatmapping.org/', 'https://github.com/sebastian-lapuschkin/lrp_toolbox', 'https://github.com/VigneshSrinivasan10/interprettensor', 'http://blog.fastforwardlabs.com/2017/03/09/fairml-auditing-black-box-predictive-models.html', 'https://arxiv.org/abs/1611.04967', 'https://github.com/adebayoj/fairml', 'https://arxiv.org/abs/1602.04938', 'https://github.com/marcotcr/lime', 'https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime', 'https://cran.r-project.org/web/packages/lime/index.html', 'https://arxiv.org/abs/1602.07043', 'https://arxiv.org/abs/1412.3756', 'https://github.com/algofairness/BlackBoxAuditing', 'https://arxiv.org/abs/1711.09784', 'https://paperswithcode.com/paper/distilling-a-neural-network-into-a-soft', 'https://arxiv.org/abs/1705.07874', 'https://github.com/slundberg/shap', 'https://github.com/redichh/ShapleyR', 'https://arxiv.org/abs/1710.00935', 'https://github.com/zqs1022/interpretableCNN', 'https://github.com/tensorflow/lucid', 'https://distill.pub/2017/feature-visualization/', 'https://distill.pub/2018/building-blocks/', 'https://arxiv.org/abs/1803.05268', 'https://github.com/davidmascharka/tbd-nets', 'https://mybinder.org/v2/gh/davidmascharka/tbd-nets/binder?filepath=full-vqa-example.ipynb', 'https://arxiv.org/abs/1706.05806', 'https://github.com/google/svcca', 'https://ai.googleblog.com/2017/11/interpreting-deep-neural-networks-with.html', 'https://arxiv.org/abs/1711.11279', 'https://github.com/tensorflow/tcav', 'https://arxiv.org/abs/1610.02391', 'https://github.com/ramprs/grad-cam', 'https://github.com/Ankush96/grad-cam.tensorflow', 'https://github.com/meliketoy/gradcam.pytorch', 'http://nbviewer.jupyter.org/github/fchollet/deep-learning-with-python-notebooks/blob/master/5.4-visualizing-what-convnets-learn.ipynb', 'http://netdissect.csail.mit.edu/', 'https://github.com/CSAILVision/NetDissect', 'https://github.com/CSAILVision/NetDissect-Lite', 'https://gandissect.csail.mit.edu/', 'https://arxiv.org/abs/1811.08011', 'https://github.com/gudovskiy/e2x', 'https://homes.cs.washington.edu/%7Emarcotcr/aaai18.pdf', 'https://github.com/marcotcr/anchor', 'https://arxiv.org/abs/1905.07697', 'https://github.com/microsoft/dice', 'https://www.microsoft.com/en-us/research/blog/open-source-library-provides-explanation-for-machine-learning-through-diverse-counterfactuals/', 'https://arxiv.org/abs/2008.02312', 'https://github.com/Fu0511/XGrad-CAM', 'https://github.com/TeamHG-Memex/eli5', 'https://eli5.readthedocs.io/en/latest/', 'https://pair-code.github.io/what-if-tool/', 'https://github.com/pair-code/what-if-tool', 'https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html', 'https://github.com/sicara/tf-explain', 'https://tf-explain.readthedocs.io/en/latest/', 'https://blog.sicara.com/tf-explain-interpretability-tensorflow-2-9438b5846e35', 'https://interpret.ml/', 'https://github.com/Microsoft/interpret', 'https://arxiv.org/abs/1909.09223', 'https://captum.ai/', 'https://github.com/pytorch/captum', 
'https://ai.facebook.com/blog/open-sourcing-captum-a-model-interpretability-library-for-pytorch/', 'https://github.com/oracle/Skater', 'https://oracle.github.io/Skater/', 'https://github.com/SeldonIO/alibi', 'https://docs.seldon.io/projects/alibi/en/stable/', 'http://aix360.mybluemix.net/', 'https://github.com/IBM/AIX360', 'https://www.ibm.com/blogs/research/2019/08/ai-explainability-360/', 'https://christophm.github.io/interpretable-ml-book/', 'https://github.com/christophM/iml', 'https://pbiecek.github.io/ema/', 'https://twitter.com/ledell/status/995930308947140608', 'https://www.kaggle.com/learn/machine-learning-explainability', 'https://towardsdatascience.com/why-model-explainability-is-the-next-data-science-superpower-b11b6102a5e0', 'https://github.com/jphall663/awesome-machine-learning-interpretability', 'https://ai.stackexchange.com/questions/12870/which-explainable-artificial-intelligence-techniques-are-there/24138#24138'] | 79 |
56,829,106 | <p>I think you have an extra dense layer. <a href="https://arxiv.org/pdf/1512.03385" rel="nofollow noreferrer">ResNet</a> uses a single fully-connected layer with softmax and <code>size=num_classes</code>.</p>
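<p>For illustration, a minimal single-classifier-layer setup might look like the following sketch (untested; it assumes the slim <code>resnet_v1_50</code> signature with <code>num_classes</code> and <code>global_pool</code> arguments):</p>
<pre><code># let resnet_v1_50 produce the logits itself via its single classifier layer
net, end_points = resnet_v1.resnet_v1_50(X, num_classes=100, is_training=True, global_pool=True)
logits = tf.reshape(net, [-1, 100])  # no-op if already (batch, 100); flattens (batch, 1, 1, 100)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))
</code></pre>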
<p>You might also need to make sure that your hyperparameters are set correctly, like <code>learning_rate</code> and <code>weight_decay</code>, and that your input processing pipeline is correct as well.</p>
<p>Here is an extra link to see if your pipeline is similar to a <a href="https://github.com/tensorpack/tensorpack/blob/master/examples/ResNet/cifar10-resnet.py" rel="nofollow noreferrer">working solution</a>.</p> | 2019-07-01 01:06:46.083000+00:00 | 2019-07-01 01:06:46.083000+00:00 | null | null | 56,827,028 | <p>I am trying to train a ResNet-50 model in TensorFlow on the CIFAR-100 dataset. I have used the built-in resnet_v1_50 to create the model, with two fully connected layers on its head, but my validation accuracy is stuck at nearly 37%. What is the problem? Did I define and configure resnet_v1_50 wrongly? My model creation code is given below.</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.contrib.slim.python.slim.nets import resnet_v1
X = tf.placeholder(dtype=tf.float32, shape=[None, 32, 32, 3])
Y = tf.placeholder(dtype=tf.float32, shape=[None, 100])
net, end_points = resnet_v1.resnet_v1_50(X,global_pool=False,is_training=True)
flattened = tf.contrib.layers.flatten(net)
dense_fc1 = tf.layers.dense(inputs=flattened,units=625, activation=tf.nn.relu,kernel_initializer=tf.contrib.layers.xavier_initializer())
dropout_fc1 = tf.layers.dropout(inputs=dense_fc1, rate=0.5, training=True)  # enable dropout at training time
logits = tf.layers.dense(inputs=dropout_fc1, units=num_classes,kernel_initializer = tf.contrib.layers.xavier_initializer())
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
</code></pre> | 2019-06-30 18:12:37.283000+00:00 | 2019-07-01 01:06:46.083000+00:00 | 2019-06-30 22:48:20.120000+00:00 | tensorflow|resnet | ['https://arxiv.org/pdf/1512.03385', 'https://github.com/tensorpack/tensorpack/blob/master/examples/ResNet/cifar10-resnet.py'] | 2 |
62,884,007 | <p>The general answer here is to try all of them and select the one that performs best on validation.</p>
<p>As for EvoNorm from <a href="https://arxiv.org/abs/2004.02967" rel="nofollow noreferrer">this paper</a>, it depends on your problem. The authors tested the new layer on classification problems with a limited set of models. For image synthesis, the results weren't as good as for classification.</p>
<p>In my opinion, batchnorm is a good starting point for constructing a baseline solution, because it is time-tested; then try more advanced things.</p> | 2020-07-13 20:47:44.093000+00:00 | 2020-07-13 20:47:44.093000+00:00 | null | null | 62,706,421 | <p>When should I decide that I need a BatchNorm or EvoNorm layer, and how do I decide?
I am currently using PyTorch, and I want to understand how I can decide which layer to add.</p> | 2020-07-02 23:28:17.583000+00:00 | 2020-07-13 20:47:44.093000+00:00 | null | python|pytorch|torch|conv-neural-network | ['https://arxiv.org/abs/2004.02967'] | 1 |
72,096,117 | <p>Because these word vectors are dense distributional representations, it is often difficult or impossible to interpret individual neurons, and such models often do not localize interpretable features to a single neuron (though this is an active area of research). For example, see <a href="https://arxiv.org/abs/2010.02695" rel="nofollow noreferrer">Analyzing Individual Neurons in Pre-trained Language Models</a> for a discussion of this with respect to pre-trained language models.</p>
<p>A common method for studying how individual dimensions contribute to a particular phenomenon / task of interest is to train a linear model (i.e., logistic regression if the task is classification) to perform the task from fixed vectors, and then analyze the weights of the trained linear model.</p>
<p>For example, if you're interested in part of speech, you can train a linear model to map from the word vector to the POS [1]. Then, the weights of the linear model represent a linear combination of the dimensions that are predictive of the feature. For example, if the weight on the 5th neuron has large magnitude (very positive or very negative), you might expect that neuron to be somewhat correlated with the phenomenon of interest.</p>
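<p>A minimal probing sketch (hypothetical; <code>vectors</code> with shape (n_words, 200) and <code>pos_labels</code> are assumed given, with "NOUN" among the tags):</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(max_iter=1000).fit(vectors, pos_labels)
noun_row = list(clf.classes_).index("NOUN")   # weight row for the NOUN class
weights = clf.coef_[noun_row]
top_dims = np.argsort(-np.abs(weights))[:10]  # dimensions most predictive of NOUN
print(top_dims)
</code></pre>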
<p>[1]: Note that defining a POS for a particular word is nontrivial, since the POS often depends on context. For example, "play" can be a noun ("he saw a play") or a verb ("I will play in the grass").</p> | 2022-05-03 07:34:06.883000+00:00 | 2022-05-04 06:15:29.590000+00:00 | 2022-05-04 06:15:29.590000+00:00 | null | 72,095,099 | <p>Let's imagine we generated a 200-dimensional word vector for the word ('hello') using some pre-trained model, as shown in the image below.</p>
<p><a href="https://i.stack.imgur.com/5TfJp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5TfJp.png" alt="Word_Vector" /></a></p>
<p>So, is there any way to tell which linguistic feature is represented by each d_i of this vector?</p>
<p>For example, d1 might be looking at whether the word is a noun; d2 might tell whether the word is a named entity or not; and so on.</p>
58,868,383 | <p>Regularizers that'll work best will depend on your specific architecture, data, and problem; as usual, there isn't a single cut to rule all, but there <em>are</em> do's and (especially) don't's, as well as <em>systematic means</em> of determining what'll work best - via careful introspection and evaluation.</p>
<hr>
<p><strong>How does RNN regularization work?</strong></p>
<p>Perhaps the best approach to understanding it is <em>information</em>-based. First, see "How does 'learning' work?" and "RNN: Depth vs. Width". To understand RNN regularization, one must understand how RNN handles information and learns, which the referred sections describe (though not exhaustively). Now to answer the question:</p>
<p>RNN regularization's goal is any regularization's goal: maximizing information utility and traversal of the test loss function. The specific <em>methods</em>, however, tend to differ substantially for RNNs per their recurrent nature - and some work better than others; see below.</p>
<hr>
<p><strong>RNN regularization methods</strong>: </p>
<p><strong>WEIGHT DECAY</strong> </p>
<ol>
<li><p><strong>General</strong>: shrinks the norm ('average') of the weight matrix</p>
<ul>
<li><em>Linearization</em>, depending on activation; e.g. <code>sigmoid</code>, <code>tanh</code>, but less so <code>relu</code></li>
<li><em>Gradient boost</em>, depending on activation; e.g. <code>sigmoid</code>, <code>tanh</code> grads flatten out for large activations - linearizing enables neurons to keep learning</li>
</ul></li>
<li><p><strong>Recurrent weights</strong>: default <code>recurrent_activation='sigmoid'</code></p>
<ul>
<li><strong>Pros</strong>: linearizing can help BPTT (remedy vanishing gradient), hence also <em>learning long-term dependencies</em>, as <em>recurrent information utility</em> is increased</li>
<li><strong>Cons</strong>: linearizing can harm representational power - however, this can be offset by stacking RNNs</li>
</ul></li>
<li><p><strong>Kernel weights</strong>: for many-to-one (<code>return_sequences=False</code>), they work similar to weight decay on a typical layer (e.g. <code>Dense</code>). For many-to-many (<code>=True</code>), however, kernel weights operate on every timestep, so pros & cons similar to above will apply.</p></li>
</ol>
<p><strong>Dropout</strong>:</p>
<ul>
<li><strong>Activations</strong> (kernel): can benefit, but only if limited; values are usually kept less than <code>0.2</code> in practice. Problem: tends to introduce too much noise, and erase important context information, especially in problems w/ limited timesteps.</li>
<li><strong>Recurrent activations</strong> (<code>recurrent_dropout</code>): the <a href="https://stackoverflow.com/questions/44924690/keras-the-difference-between-lstm-dropout-and-lstm-recurrent-dropout">recommended dropout</a></li>
</ul>
<p><strong>Batch Normalization</strong>:</p>
<ul>
<li><strong>Activations</strong> (kernel): worth trying. Can benefit substantially, or not.</li>
<li><strong>Recurrent activations</strong>: should work better; see <a href="https://arxiv.org/abs/1603.09025" rel="noreferrer">Recurrent Batch Normalization</a>. No Keras implementations yet as far as I know, but I may implement it in the future.</li>
</ul>
<p><strong>Weight Constraints</strong>: set a hard upper bound on the weights' l2-norm; a possible alternative to weight decay. </p>
<p><strong>Activity Constraints</strong>: don't bother; for most purposes, if you have to manually constrain your outputs, the layer itself is probably learning poorly, and the solution is elsewhere.</p>
<hr>
<p><strong>What should I do?</strong> Lots of info - so here's some concrete advice (a minimal Keras sketch applying several of these items follows the list):</p>
<ol>
<li><p><strong>Weight decay</strong>: try <code>1e-3</code>, <code>1e-4</code>, see which works better. Do <em>not</em> expect the same value of decay to work for <code>kernel</code> and <code>recurrent_kernel</code>, especially depending on architecture. Check weight shapes - if one is much smaller than the other, apply the smaller decay to the former</p></li>
<li><p><strong>Dropout</strong>: try <code>0.1</code>. If you see improvement, try <code>0.2</code> - else, scrap it</p></li>
<li><p><strong>Recurrent Dropout</strong>: start with <code>0.2</code>. Improvement --> <code>0.4</code>. Improvement --> <code>0.5</code>, else <code>0.3</code>.</p></li>
<li><strong>Batch Normalization</strong>: try. Improvement --> keep it - else, scrap it.</li>
<li><strong>Recurrent Batchnorm</strong>: same as 4.</li>
<li><strong>Weight constraints</strong>: advisable w/ higher learning rates to prevent exploding gradients - else use higher weight decay</li>
<li><strong>Activity constraints</strong>: probably not (see above)</li>
<li><strong>Residual RNNs</strong>: introduce significant changes, along a regularizing effect. See application in <a href="https://arxiv.org/abs/1803.04831" rel="noreferrer">IndRNNs</a></li>
<li><strong>Biases</strong>: weight decay and constraints become important upon attaining good backpropagation properties; without them on bias weights but <em>with</em> them on kernel (K) & recurrent kernel (RK) weights, bias weights may grow much faster than the latter two, and dominate the transformation - also leading to exploding gradients. I recommend weight decay / constraint less than or equal to that used on K & RK. Also, with <code>BatchNormalization</code>, you <s>can</s> <em>cannot</em> set <code>use_bias=False</code> as an "equivalent"; BN applies to <em>outputs</em>, not <em>hidden-to-hidden transforms</em>.</li>
<li><strong>Zoneout</strong>: don't know, never tried, might work - see <a href="https://arxiv.org/abs/1606.01305" rel="noreferrer">paper</a>.</li>
<li><strong>Layer Normalization</strong>: some report it working better than BN for RNNs - but my application found it otherwise; <a href="https://arxiv.org/abs/1607.06450" rel="noreferrer">paper</a></li>
<li><strong>Data shuffling</strong>: is a strong regularizer. Also shuffle <em>batch samples</em> (samples in batch). See relevant info on <a href="https://stackoverflow.com/questions/58276337/proper-way-to-feed-time-series-data-to-stateful-lstm/58277760#58277760">stateful RNNs</a></li>
<li><strong>Optimizer</strong>: can be an inherent regularizer. Don't have a full explanation, but in my application, Nadam (& NadamW) has stomped every other optimizer - worth trying.</li>
</ol>
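<p>To make items 1-3 and 6 concrete, here is a minimal Keras sketch (the values are just the starting points suggested above, not universally correct settings):</p>

<pre><code>from tensorflow.keras import layers, regularizers, constraints

lstm = layers.LSTM(
    64,
    kernel_regularizer=regularizers.l2(1e-4),     # 1. weight decay on the kernel
    recurrent_regularizer=regularizers.l2(1e-4),  # 1. weight decay on the recurrent kernel
    dropout=0.1,                                  # 2. dropout on inputs/kernel
    recurrent_dropout=0.2,                        # 3. recurrent dropout
    kernel_constraint=constraints.MaxNorm(3),     # 6. hard norm cap, alternative to decay
)
</code></pre>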
<p><strong>Introspection</strong>: bottom section on 'learning' isn't worth much without this; don't just look at validation performance and call it a day - <em>inspect</em> the effect that adjusting a regularizer has on <em>weights</em> and <em>activations</em>. Evaluate using info toward bottom & relevant theory.</p>
<p><strong>BONUS</strong>: weight decay can be powerful - even more powerful when done right; turns out, <em>adaptive optimizers</em> like Adam can harm its effectiveness, as described in <a href="https://arxiv.org/abs/1711.05101" rel="noreferrer">this paper</a>. <em>Solution</em>: use AdamW. My Keras/TensorFlow implementation <a href="https://github.com/OverLordGoldDragon/keras-adamw" rel="noreferrer">here</a>.</p>
<hr>
<p><strong>This is too much!</strong> Agreed - welcome to Deep Learning. Two tips here:</p>
<ol>
<li><a href="https://philipperemy.github.io/visualization/" rel="noreferrer"><em>Bayesian Optimization</em></a>; will save you time especially on prohibitively expensive training.</li>
<li><code>Conv1D(strides > 1)</code>, for many timesteps (<code>>1000</code>); slashes dimensionality, shouldn't harm performance (may in fact improve it).</li>
</ol>
<hr>
<p><strong>Introspection Code</strong>: </p>
<p><strong>Gradients</strong>: see <a href="https://stackoverflow.com/questions/59017288/how-to-visualize-rnn-lstm-gradients-in-keras-tensorflow/59017289#59017289">this answer</a></p>
<p><strong>Weights</strong>: see <a href="https://stackoverflow.com/questions/59275959/how-to-visualize-rnn-lstm-weights-in-keras-tensorflow/59275960#59275960">this answer</a></p>
<p><strong>Weight norm tracking</strong>: see <a href="https://stackoverflow.com/questions/61481921/how-to-set-and-track-weight-decays/61481922#61481922">this Q & A</a></p>
<p><strong>Activations</strong>: see <a href="https://stackoverflow.com/questions/58356868/how-visualize-attention-lstm-using-keras-self-attention-package/58357581#58357581">this answer</a></p>
<p><strong>Weights</strong>: <a href="https://github.com/OverLordGoldDragon/see-rnn/blob/master/see_rnn/visuals_rnn.py#L10" rel="noreferrer"><code>see_rnn.rnn_histogram</code></a> or <a href="https://github.com/OverLordGoldDragon/see-rnn/blob/master/see_rnn/visuals_rnn.py#L264" rel="noreferrer"><code>see_rnn.rnn_heatmap</code></a> (examples in README)</p>
<hr>
<p><strong>How does 'learning' work?</strong></p>
<p>The 'ultimate truth' of machine learning that is seldom discussed or emphasized is, <strong>we don't have access to the function we're trying to optimize</strong> - the <em>test loss function</em>. <em>All</em> of our work is with what are <em>approximations</em> of the true loss surface - both the train set and the validation set. This has some critical implications:</p>
<ol>
<li>Train set global optimum can lie <em>very far</em> from test set global optimum</li>
<li>Local optima are unimportant, and irrelevant:
<ul>
<li>Train set local optimum is almost always a better test set optimum</li>
<li>Actual local optima are almost impossible for high-dimensional problems; for the case of the "saddle", you'd need the gradients w.r.t. <em>all of the millions of parameters</em> to equal zero at once</li>
<li><a href="https://www.wikiwand.com/en/Attractor" rel="noreferrer">Local attractors</a> are lot more relevant; the analogy then shifts from "falling into a pit" to "gravitating into a strong field"; once in that field, your loss surface topology is bound to that set up by the field, which defines its own local optima; high LR can help exit a field, much like "escape velocity"</li>
</ul></li>
</ol>
<p>Further, loss functions are way too complex to analyze directly; a better approach is to <em>localize</em> analysis to individual layers, their weight matrices, and roles relative to the entire NN. Two key considerations are:</p>
<ol start="3">
<li><p><strong>Feature extraction capability</strong>. <em>Ex</em>: the driving mechanism of deep classifiers is, given input data, to <em>increase class separability</em> with each layer's transformation. Higher quality features will filter out irrelevant information, and deliver what's essential for the output layer (e.g. softmax) to learn a separating hyperplane.</p></li>
<li><p><strong>Information utility</strong>. <em>Dead neurons</em>, and <em>extreme activations</em> are major culprits of poor information utility; no single neuron should dominate information transfer, and too many neurons shouldn't lie purposeless. Stable activations and weight distributions enable gradient propagation and continued learning.</p></li>
</ol>
<hr>
<p><strong>How does regularization work?</strong> read above first</p>
<p>In a nutshell, via maximizing NN's information utility, and improving estimates of the test loss function. Each regularization method is unique, and no two exactly alike - see "RNN regularizers".</p>
<hr>
<p><strong>RNN: Depth vs. Width</strong>: not as simple as "one is more nonlinear, the other works in higher dimensions".</p>
<ul>
<li><strong>RNN width</strong> is defined by (1) # of input channels; (2) # of cell's filters (output channels). As with CNN, each RNN filter is an <em>independent feature extractor</em>: <em>more</em> is suited for higher-complexity information, including but not limited to: dimensionality, modality, noise, frequency.</li>
<li><strong>RNN depth</strong> is defined by (1) # of stacked layers; (2) # of timesteps. Specifics will vary by architecture, but from information standpoint, unlike CNNs, RNNs are <em>dense</em>: every timestep influences the ultimate output of a layer, hence the ultimate output of the next layer - so it again isn't as simple as "more nonlinearity"; stacked RNNs exploit both spatial and temporal information.</li>
</ul>
<hr>
<p><strong>Update</strong>:</p>
<p>Here is an example of a near-ideal RNN gradient propagation for 170+ timesteps:</p>
<p><img src="https://i.stack.imgur.com/71seM.png" width="550"></p>
<p>This is rare, and was achieved via careful regularization, normalization, and hyperparameter tuning. Usually we see a large gradient for the last few timesteps, which drops off sharply toward the left - as <a href="https://stackoverflow.com/questions/59017288/how-to-visualize-rnn-lstm-gradients-in-keras-tensorflow/59017289#59017289">here</a>. Also, since the model is stateful and fits 7 equivalent windows, the gradient effectively spans <strong>1200 timesteps</strong>.</p>
<p><strong>Update 2</strong>: see 9 w/ new info & correction</p>
<p><strong>Update 3</strong>: add weight norms & weights introspection code</p> | 2019-11-15 00:02:31.530000+00:00 | 2020-04-30 19:55:37.990000+00:00 | 2020-04-30 19:55:37.990000+00:00 | null | 48,714,407 | <p>I am building an RNN for classification (there is a softmax layer after the RNN). There are so many options for what to regularize, and I am not sure whether to just try all of them; would the effect be the same? Which components do I regularize, and in which situations?</p>
<p>The components being:</p>
<ul>
<li>Kernel weights (layer input)</li>
<li>Recurrent weights</li>
<li>Bias</li>
<li>Activation function (layer output)</li>
</ul> | 2018-02-09 21:34:06.850000+00:00 | 2020-04-30 19:55:37.990000+00:00 | 2019-11-15 00:04:12.623000+00:00 | python|keras|deep-learning|recurrent-neural-network|regularized | ['https://stackoverflow.com/questions/44924690/keras-the-difference-between-lstm-dropout-and-lstm-recurrent-dropout', 'https://arxiv.org/abs/1603.09025', 'https://arxiv.org/abs/1803.04831', 'https://arxiv.org/abs/1606.01305', 'https://arxiv.org/abs/1607.06450', 'https://stackoverflow.com/questions/58276337/proper-way-to-feed-time-series-data-to-stateful-lstm/58277760#58277760', 'https://arxiv.org/abs/1711.05101', 'https://github.com/OverLordGoldDragon/keras-adamw', 'https://philipperemy.github.io/visualization/', 'https://stackoverflow.com/questions/59017288/how-to-visualize-rnn-lstm-gradients-in-keras-tensorflow/59017289#59017289', 'https://stackoverflow.com/questions/59275959/how-to-visualize-rnn-lstm-weights-in-keras-tensorflow/59275960#59275960', 'https://stackoverflow.com/questions/61481921/how-to-set-and-track-weight-decays/61481922#61481922', 'https://stackoverflow.com/questions/58356868/how-visualize-attention-lstm-using-keras-self-attention-package/58357581#58357581', 'https://github.com/OverLordGoldDragon/see-rnn/blob/master/see_rnn/visuals_rnn.py#L10', 'https://github.com/OverLordGoldDragon/see-rnn/blob/master/see_rnn/visuals_rnn.py#L264', 'https://www.wikiwand.com/en/Attractor', 'https://stackoverflow.com/questions/59017288/how-to-visualize-rnn-lstm-gradients-in-keras-tensorflow/59017289#59017289'] | 17 |
49,525,501 | <p>You can take a look at this survey: <a href="https://arxiv.org/pdf/1611.10012.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.10012.pdf</a>, which contains a comprehensive overview of recent detection architectures. Basically there are 3 meta-architectures, and all models fall into one of those categories:</p>
<ol>
<li>Faster R-CNN: Similar to the paper you have referenced, this is the improved version of Fast R-CNN which does not use selective search, but instead integrates proposal generation directly into the network, known as the region proposal network (RPN).</li>
<li>R-FCN: similar in architecture to 1, except that RoI pooling is performed differently, via position-sensitive RoI pooling.</li>
<li>SSD: modifies the RPN in Faster R-CNN to directly output class probabilities, eliminating the need for the per-RoI computation done in RoI pooling. This is the fastest architecture type; YOLO falls into this category.</li>
</ol>
<p>Based on my rough read-through of the paper you have referenced, I think type 3 is the one you are looking for. However, in terms of implementation, equation 3 can be a little tricky: you may need to stop backpropagating gradients to regions that do not overlap with the primary region (or at least think about how they could affect the final results), since this architecture type computes probabilities for the whole image.</p>
<p>I also note that there are in fact no primary/secondary "classifiers". The paper describes primary/secondary "regions": the primary region is the region that contains the person (i.e., use a person detector to find the primary region first), and the secondary regions are those that overlap with the primary region. For activity classification <strong>there is only one classifier</strong>, except that the primary region carries more weight and the secondary regions each contribute a little to the final prediction score.</p>
<p>Both classifiers will be used on image regions. I need the first classifier to be used on a primary region, while the secondary classifier will be used on assistive regions to support the decision made by the first classifier with further evidence.</p>
<p>Thus the primary image region and the assistive ones will be used to infer <strong>one class label at a time</strong>.</p>
<p>What other ways or architectures exist these days to perform such a task, instead of ROI Pooling?</p>
<p>Ideally, I would like to have a classifier scheme similar to the one of this paper but without the use of ROI Pooling.</p>
<p><a href="https://arxiv.org/pdf/1505.01197.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1505.01197.pdf</a></p> | 2018-03-13 01:38:18.527000+00:00 | 2018-04-03 17:21:46.020000+00:00 | 2018-03-13 01:49:13.507000+00:00 | computer-vision|deep-learning|classification | ['https://arxiv.org/pdf/1611.10012.pdf'] | 1 |
49,635,840 | <p>Yaw Lin's answer contains a good amount of information, I'll just build on what he said in his last paragraph. I think the essence of what you want to do is not so much to process the person and the background independently and compare the results (that's clearly what you said you're doing), but to process the background first and infer from it the kinds of expectations you have for the primary region. Once you have some expectations, you can compare the primary region to the most significant expectations. </p>
<p>For example, from Figure 1 (b) in your Arxiv link, if you can process the background and determine that it's outdoors in a highly populated region, then you can focus a lot of the probability density function of what the person is doing in social outdoor activities, making jogging much more likely as a guess before you even process the figure you're interested in. In contrast, for Figure 1 (a), if you can process the background and tell that it's indoors and contains computers, then you can focus probability on lone indoor computer-based activities, skyrocketing the probability of "working on a computer".</p> | 2018-04-03 17:21:46.020000+00:00 | 2018-04-03 17:21:46.020000+00:00 | null | null | 49,247,011 | <p>I want to have a primary CNN based classifier and a similar secondary classifier for image regions.</p>
<p>Both classifiers will be used on image regions. I need the first classifier to be used on a primary region while the secondary classifier to be used on assistive regions and will be used to support the decision made by the first classifier with further evidence.</p>
<p>Thus the primary image region and the assistive ones will be used to infer <strong>one class label at a time</strong>.</p>
<p>What other ways or architectures exist these days to perform such a task, instead of ROI Pooling?</p>
<p>Ideally, I would like to have a classifier scheme similar to the one of this paper but without the use of ROI Pooling.</p>
<p><a href="https://arxiv.org/pdf/1505.01197.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1505.01197.pdf</a></p> | 2018-03-13 01:38:18.527000+00:00 | 2018-04-03 17:21:46.020000+00:00 | 2018-03-13 01:49:13.507000+00:00 | computer-vision|deep-learning|classification | [] | 0 |
23,373,350 | <p>(Quoting): </p>
<blockquote>
<p>I don't know how to determine if a word is a verb with an easy heuristic like adverbs, adjectives, etc.</p>
</blockquote>
<p>I can't speak to any issues in your Go implementation, but I'll address the larger problem of POS tagging in general. It sounds like you're attempting to build a rule-based unigram tagger. To elaborate a bit on those terms:</p>
<ul>
<li>"unigram" means you're considering each word in the sentence separately. Note that a unigram tagger is inherently limited, in that it cannot disambiguate words which can take on multiple POS tags. E.g., should you tag 'fish' as a noun or a verb? Is 'last' a verb or an adverb? </li>
<li>"rule-based" means exactly what it sounds like: a set of rules to determine the tag for each word. Rule-based tagging is limited in a different way - it requires considerable development effort to assemble a ruleset that will handle a reasonable portion of the ambiguity in common language. This effort might be appropriate if you're working in a language for which we don't have good training resources, but in most common languages, we now have enough tagged text to train high-accuracy tagging models.</li>
</ul>
<p>State-of-the-art for POS tagging is above 97% accuracy on well-formed newswire text (accuracy on less formal genres is naturally lower). A rule-based tagger will probably perform considerably worse (you'll have to determine the accuracy level needed to meet your requirements). If you want to continue down the rule-based path, I'd recommend reading <a href="http://nlpwp.org/book/chap-tagging.xhtml">this tutorial</a>. The code is based on Haskell, but it will help you learn the concepts and issues in rule-based tagging.</p>
<p>That said, I'd strongly recommend you look at other tagging methods. I mentioned the weaknesses of unigram tagging. Related approaches would be 'bigram', meaning that we consider the previous word when tagging word n, 'trigram' (usually the previous 2 words, or the previous word, the current word, and the following word); more generally, 'n-gram' refers to considering a sequence of n words (often, a sliding window around the word we're currently tagging). That context can help us disambiguate 'fish', 'last', 'flies', etc. </p>
<p>E.g., in </p>
<blockquote>
<p>We fish</p>
</blockquote>
<p>we probably want to tag fish as a verb, whereas in</p>
<blockquote>
<p>ate fish</p>
</blockquote>
<p>it's certainly a noun. </p>
<p><a href="http://www.nltk.org/book/ch05.html">The NLTK tutorial</a> might be a good reference here. An solid n-gram tagger should get you above 90% accuracy; likely above 95% (again on newswire text).</p>
<p>More sophisticated methods (known as 'structured inference') consider the entire tag sequence as a whole. That is, instead of trying to predict the most probable tag for each word separately, they attempt to predict the most probable sequence of tags for the entire input sequence. Structured inference is of course more difficult to implement and train, but will usually improve accuracy vs. n-gram approaches. If you want to read up on this area, I suggest <a href="http://arxiv.org/abs/1011.4088">Sutton and McCallum's excellent introduction</a>.</p> | 2014-04-29 19:14:25.693000+00:00 | 2014-04-29 19:14:25.693000+00:00 | null | null | 23,319,311 | <p>This script is compiling without errors on play.golang.org: <a href="http://play.golang.org/p/Hlr-IAc_1f" rel="nofollow">http://play.golang.org/p/Hlr-IAc_1f</a></p>
<p>But when I run it on my machine, it takes much longer than I expect, with nothing happening in the terminal.</p>
<p>What I am trying to build is a PartOfSpeech Tagger.</p>
<p>I think the longest part is loading lexicon.txt into a map and then comparing each word with every word there to see if it has already been tagged in the lexicon. The lexicon only contains verbs. But doesn't every word need to be checked to see if it is a verb?</p>
<p>The larger problem is that I don't know how to determine if a word is a verb with an easy heuristic like adverbs, adjectives, etc.</p> | 2014-04-27 04:13:58.467000+00:00 | 2014-04-29 19:14:25.693000+00:00 | null | go|nlp|part-of-speech | ['http://nlpwp.org/book/chap-tagging.xhtml', 'http://www.nltk.org/book/ch05.html', 'http://arxiv.org/abs/1011.4088'] | 3 |
55,489,698 | <p>The official GitHub repos show that both SPP-net and Fast R-CNN used the same region proposal method as R-CNN, namely 'selective search':</p>
<p><a href="https://github.com/ShaoqingRen/SPP_net" rel="nofollow noreferrer">SPP_net</a> and <a href="https://github.com/rbgirshick/fast-rcnn" rel="nofollow noreferrer">Fast R-CNN</a>. In SPP_net repo, there is a selective search module for computing region proposals, in fast r-cnn repo, the author specifically mentioned the method for computing object proposals is selective search.</p>
<p>But again, generating region proposals can also use other methods, since R-CNN and Fast R-CNN adopted object proposal methods as <strong>external modules independent of the detectors</strong>. </p>
<p>Generally speaking, if a method generates more proposals, it can benefit the final detection accuracy, but this of course limits the detection speed.
In section 2 ('Related Work') of the <a href="https://arxiv.org/abs/1506.01497" rel="nofollow noreferrer">Faster R-CNN paper</a>, there is a nice summary of object proposal generation methods.</p>
<p>As for the follow-up question, namely how to intuitively picture region proposals on the feature map, the following picture illustrates it well (<a href="https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks-Part-2/" rel="nofollow noreferrer">ref</a>):
<a href="https://i.stack.imgur.com/y81er.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y81er.png" alt="image_ref"></a></p>
<p>In the picture, the red box on the left becomes, after the convolution operation, the red square in the output volume on the right, and the green box corresponds to the green square, etc. Now imagine that the whole 7x7 patch on the left is the region proposal; then on the output feature map, it is still a region proposal!
Of course, in reality the image on the left has many more pixels, so there can be many region proposals, and each of these proposals will still look like a region proposal on the output feature map!</p>
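<p>For a rough idea of how the mapping works in code, here is a minimal sketch (assuming a network whose layers accumulate a total stride of 16, e.g. VGG-16 up to conv5; the exact rounding rule in the SPP-net appendix differs slightly):</p>

<pre><code>def project_roi(box, total_stride=16):
    """Map an (x1, y1, x2, y2) proposal from image coordinates to
    feature-map coordinates: floor the top-left, ceil the bottom-right."""
    x1, y1, x2, y2 = box
    return (x1 // total_stride, y1 // total_stride,
            -(-x2 // total_stride), -(-y2 // total_stride))

print(project_roi((33, 65, 240, 300)))  # -> (2, 4, 15, 19)
</code></pre>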
<p>Finally, in the original <a href="https://arxiv.org/abs/1406.4729" rel="nofollow noreferrer">SPP_net paper</a>, the author explains exactly how they performed the transformation of region proposals from the original image to candidate windows on the feature map.
<a href="https://i.stack.imgur.com/ZYWb5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZYWb5.png" alt="enter image description here"></a></p> | 2019-04-03 07:45:39.017000+00:00 | 2019-04-04 13:49:40.047000+00:00 | 2019-04-04 13:49:40.047000+00:00 | null | 55,488,665 | <p>I understood that we need selective search as an external algorithm for generating region of interest proposals in R-CNN, but in Fast R-CNN we can simply take in the entire image, and then passes it to the convolutional network to create a feature map, and then used a single layer of SPP (RoI pooling layer).</p>
<p>On the other hand, multi-layer SPP is used in SPP-net. For quick reference & understanding:
<a href="https://i.stack.imgur.com/3uZ3b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3uZ3b.png" alt="enter image description here"></a></p>
<p>In slow R-CNN, SPP-net & Fast R-CNN alike, the regions of interest (RoIs) came from <strong>a proposal method</strong> ("selective search", <strong>??</strong>, <strong>??</strong> respectively).</p>
<p>Could anyone explain in detail, with citations, <strong><em>what proposal methods are explicitly used in SPP-net & Fast R-CNN</em></strong>? I didn't find this mentioned clearly in the research papers.</p>
60,370,659 | <p><strong>EDIT:</strong> I realized that my earlier response was highly misleading, which was thankfully pointed out by @xdurch0 and @Timbus Calin. Here is an edited answer.</p>
<ol>
<li><p>Check that all your input values are valid. Are there any <code>nan</code> or <code>inf</code> values in your training data?</p></li>
<li><p>Try using different activation functions. <code>ReLU</code> is good, but it is prone to what is known as the <a href="https://arxiv.org/abs/1903.06733" rel="nofollow noreferrer">dying ReLU problem</a>, where the network basically learns nothing because no updates are made to its weights. One possibility is to use <a href="https://keras.io/layers/advanced-activations/" rel="nofollow noreferrer">Leaky ReLU or PReLU</a>. </p></li>
<li><p>Try using gradient clipping, a technique for tackling vanishing or exploding gradients (which is likely what is happening in your case). <a href="https://keras.io/optimizers/" rel="nofollow noreferrer">Keras</a> allows users to configure <code>clipnorm</code> and <code>clipvalue</code> for optimizers; see the sketch after this list. </p></li>
</ol>
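<p>Putting the three points together, a rough sketch (assuming <code>X_train</code> is prepared as in the question's code below; the layer sizes are illustrative):</p>

<pre><code>import numpy as np
import tensorflow as tf

# 1. Check the training data for invalid values before fitting.
assert not np.isnan(X_train).any() and not np.isinf(X_train).any()

model = tf.keras.Sequential([
    tf.keras.layers.Dense(6, input_shape=(3,)),
    tf.keras.layers.LeakyReLU(alpha=0.1),      # 2. leaky activation avoids dead units
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(clipnorm=1.0),  # 3. gradient clipping
              loss='binary_crossentropy', metrics=['accuracy'])
</code></pre>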
<p>There are posts on SO that report similar problems, such as <a href="https://stackoverflow.com/questions/37232782/nan-loss-when-training-regression-network">this one</a>, which might also be of interest to you.</p> | 2020-02-24 06:37:03.373000+00:00 | 2020-02-24 10:53:35.697000+00:00 | 2020-02-24 10:53:35.697000+00:00 | null | 60,367,118 | <p>I'm making a simple classification algo with a Keras neural network. The goal is to take 3 data points on weather and decide whether or not there's a wildfire. Here's an image of the .csv dataset that I'm using to train the model (this image shows only the top few lines, not the entire file):
<a href="https://i.stack.imgur.com/cPyti.png" rel="nofollow noreferrer">wildfire weather dataset</a>
As you can see, there are 4 columns with the fourth being either a "1" which means "fire", or a "0" which means "no fire". I want the algo to predict either a 1 or a 0. This is the code that I wrote:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import csv
#THIS IS USED TO TRAIN THE MODEL
# Importing the dataset
dataset = pd.read_csv('Fire_Weather.csv')
dataset.head()
X=dataset.iloc[:,0:3]
Y=dataset.iloc[:,3]
X.head()
obj=StandardScaler()
X=obj.fit_transform(X)
X_train,X_test,y_train,y_test=train_test_split(X, Y, test_size=0.25)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 3))
# classifier.add(Dropout(p = 0.1))
# Adding the second hidden layer
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
# classifier.add(Dropout(p = 0.1))
# Adding the output layer
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
classifier.fit(X_train, y_train, batch_size = 3, epochs = 10)
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
print(y_pred)
classifier.save("weather_model.h5")
</code></pre>
<p>The problem is that whenever I run this, my accuracy is always "0.0000e+00" and my training output looks like this:</p>
<pre><code> Epoch 1/10
2146/2146 [==============================] - 2s 758us/step - loss: nan - accuracy: 0.0238
Epoch 2/10
2146/2146 [==============================] - 1s 625us/step - loss: nan - accuracy: 0.0000e+00
Epoch 3/10
2146/2146 [==============================] - 1s 604us/step - loss: nan - accuracy: 0.0000e+00
Epoch 4/10
2146/2146 [==============================] - 1s 609us/step - loss: nan - accuracy: 0.0000e+00
Epoch 5/10
2146/2146 [==============================] - 1s 624us/step - loss: nan - accuracy: 0.0000e+00
Epoch 6/10
2146/2146 [==============================] - 1s 633us/step - loss: nan - accuracy: 0.0000e+00
Epoch 7/10
2146/2146 [==============================] - 1s 481us/step - loss: nan - accuracy: 0.0000e+00
Epoch 8/10
2146/2146 [==============================] - 1s 476us/step - loss: nan - accuracy: 0.0000e+00
Epoch 9/10
2146/2146 [==============================] - 1s 474us/step - loss: nan - accuracy: 0.0000e+00
Epoch 10/10
2146/2146 [==============================] - 1s 474us/step - loss: nan - accuracy: 0.0000e+00
</code></pre>
<p>Does anyone know why this is happening and what I could do to my code to fix this?
Thank You!</p> | 2020-02-23 21:39:30.993000+00:00 | 2020-02-24 10:53:35.697000+00:00 | null | python|tensorflow|machine-learning|keras|neural-network | ['https://arxiv.org/abs/1903.06733', 'https://keras.io/layers/advanced-activations/', 'https://keras.io/optimizers/', 'https://stackoverflow.com/questions/37232782/nan-loss-when-training-regression-network'] | 4 |
49,126,505 | <p><code>space_to_depth</code> is a convolutional practice used very often for lossless spatial dimensionality reduction. Applied to a tensor <code>(example_dim, width, height, channels)</code> with <code>block_size = k</code>, it produces a tensor with shape <code>(example_dim, width / block_size, height / block_size, channels * block_size ** 2)</code>. It works in the following manner (<code>example_dim</code> is skipped for simplicity; a NumPy sketch follows the walkthrough):</p>
<ol>
<li><p><strong>Cut image / feature map into chunks of size (block_size, block_size, channels)</strong>: e.g. the following image (with <code>block_size = 2</code>):</p>
<pre><code>[[[1], [2], [3], [4]],
[[5], [6], [7], [8]],
[[9], [10], [11], [12]],
[[13], [14], [15], [16]]]
</code></pre>
<p>is divided into the following chunks:</p>
<pre><code>[[[1], [2]], [[[3], [4]],
[[5], [6]]] [[7], [8]]]
[[[9], [10],] [[[11], [12]],
[[13], [14]]] [[15], [16]]]
</code></pre></li>
<li><p><strong>Flatten each chunk to a single array</strong>:</p>
<pre><code>[[1, 2, 5, 6]], [[3, 4, 7, 8]]
[[9 10, 13, 14]], [[11, 12, 15, 16]]
</code></pre></li>
<li><p><strong>Spatially rearrange chunks according to their initial position:</strong></p>
<pre><code>[[[1, 2, 5, 6]], [[3, 4, 7, 8]],
[[9 10, 13, 14]], [[11, 12, 15, 16]]]
</code></pre></li>
</ol>
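<p>A NumPy sketch of the walkthrough above (for a single input channel this matches TensorFlow's ordering; for multi-channel inputs the exact channel interleaving can differ between frameworks):</p>

<pre><code>import numpy as np

def space_to_depth(x, block_size):
    """Rearrange (H, W, C) -> (H // b, W // b, C * b * b)."""
    h, w, c = x.shape
    b = block_size
    x = x.reshape(h // b, b, w // b, b, c)       # split H and W into (blocks, within-block)
    x = x.transpose(0, 2, 1, 3, 4)               # group both within-block axes together
    return x.reshape(h // b, w // b, c * b * b)  # flatten each chunk into channels

img = np.arange(1, 17).reshape(4, 4, 1)
print(space_to_depth(img, 2)[0, 0])  # -> [1 2 5 6], the first flattened chunk
</code></pre>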
<p>So - as you may see - the initial image with size <code>(4, 4, 1)</code> was rearranged into a feature map with shape <code>(2, 2, 4)</code>. This strategy is usually used in applications like object detection, segmentation or super-resolution, where it's important to decrease the spatial size of an image without losing information (as <code>pooling</code> does). An example application of this technique may be found e.g. <a href="https://arxiv.org/pdf/1612.08242.pdf" rel="noreferrer">here</a>.</p> | 2018-03-06 08:38:29.753000+00:00 | 2018-03-06 08:38:29.753000+00:00 | null | null | 49,124,814 | <p>I am reading this document on <a href="https://www.tensorflow.org/api_docs/python/tf/space_to_depth" rel="nofollow noreferrer"><code>tf.space_to_depth</code></a>. There, it says the following about the use of the function: </p>
<blockquote>
<p>This operation is useful for resizing the activations between
convolutions (but keeping all data), e.g. instead of pooling. It is
also useful for training purely convolutional models.</p>
</blockquote>
<p>However, I still don't get a clear understanding of this. Why is it sometimes necessary to resize the activations in a model?</p> | 2018-03-06 06:42:34.423000+00:00 | 2019-10-11 16:23:21.610000+00:00 | 2019-10-11 16:23:21.610000+00:00 | python|tensorflow|machine-learning|keras|deep-learning | ['https://arxiv.org/pdf/1612.08242.pdf'] | 1 |
37,078,242 | <p>I worked on retinal vessel detection a few years ago, and there are different ways to do it:</p>
<ul>
<li>If you don't need a top result but something fast, you can use oriented openings, <a href="https://www.researchgate.net/publication/3327391_Segmentation_of_vessel-like_patterns_using_mathematical_morphology_and_curvature_evaluation" rel="nofollow">see here</a> and <a href="http://cmm.ensmp.fr/Anciens/zana/" rel="nofollow">here</a>.</li>
<li>Then there is another version using mathematical morphology; see the <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0ahUKEwiJ1qDb-cXMAhUMOsAKHaXXBcoQFggkMAA&url=http%3A%2F%2Fcmm.ensmp.fr%2F~walter%2Farticles_walter%2Fwalterklein.pdf.gz&usg=AFQjCNG3e1Ueke67eY9JEw6ub00Y8roQ-A&sig2=dzVYD_5_2BKMaOQpRLjjOQ&cad=rja" rel="nofollow">version here</a>.</li>
</ul>
<p>For better results, here are some ideas:</p>
<ul>
<li>Personally, I used a combination of Gabor filters, and the results were pretty good (a minimal OpenCV sketch follows this list). See <a href="http://www.thibault.biz/StackOverflow/RetinaVesselsGabor.png" rel="nofollow">the segmentation result here on the first image of DRIVE</a>.</li>
<li>And <a href="https://arxiv.org/pdf/cs/0510001.pdf" rel="nofollow">Gabor can be combined with learning for a good result</a>, or <a href="http://www.icmlc.org/icmlc2011/014_icmlc2011.pdf" rel="nofollow">here</a>.</li>
<li>A few years ago, <a href="https://www.researchgate.net/publication/5896442_Retinal_Blood_Vessel_Segmentation_Using_Line_Operators_and_Support_Vector_Classification" rel="nofollow">they claimed to have the best algorithm</a>, but I've never had the opportunity to test it. I was sceptical about the performance gap, and the way they thresholded the line detector results was kind of obscure.</li>
<li>But I know that nowadays, many people try to tackle the problem using CNN, but I've not heard about significant improvements.</li>
</ul>
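<p>To illustrate the Gabor-bank idea from the list above, a rough OpenCV sketch (the kernel size, sigma and wavelength are illustrative guesses that need tuning per dataset, not the exact filters behind the linked results):</p>

<pre><code>import cv2
import numpy as np

def gabor_vessel_response(gray, n_orientations=12):
    """Max response over a bank of oriented Gabor kernels."""
    gray = gray.astype(np.float32) / 255.0
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kernel = cv2.getGaborKernel((15, 15), sigma=3.0, theta=theta,
                                    lambd=8.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    resp = np.max(responses, axis=0)
    # Contrast-stretch to [0, 255] so faint vessels become visible.
    return cv2.normalize(resp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
</code></pre>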
<p>[EDIT] To answer your specific question, you can erase the bright ring and then apply histogram stretching. But I think that the methods I introduced before will work better than the filter you are using.</p> | 2016-05-06 17:30:23.527000+00:00 | 2016-07-16 19:34:05.083000+00:00 | 2016-07-16 19:34:05.083000+00:00 | null | 37,071,335 | <p>I've used a Kirsch filter to try to obtain the blood vessels, but the result isn't the best, as shown below:</p>
<p><a href="https://i.stack.imgur.com/ATcPV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ATcPV.jpg" alt="enter image description here"></a></p>
<p>Although the vessels have been obtained, they aren't bright enough. How do I go about making them 'more visible'? </p> | 2016-05-06 11:33:02.380000+00:00 | 2016-07-16 19:34:05.083000+00:00 | null | python|opencv|image-processing|scipy | ['https://www.researchgate.net/publication/3327391_Segmentation_of_vessel-like_patterns_using_mathematical_morphology_and_curvature_evaluation', 'http://cmm.ensmp.fr/Anciens/zana/', 'https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0ahUKEwiJ1qDb-cXMAhUMOsAKHaXXBcoQFggkMAA&url=http%3A%2F%2Fcmm.ensmp.fr%2F~walter%2Farticles_walter%2Fwalterklein.pdf.gz&usg=AFQjCNG3e1Ueke67eY9JEw6ub00Y8roQ-A&sig2=dzVYD_5_2BKMaOQpRLjjOQ&cad=rja', 'http://www.thibault.biz/StackOverflow/RetinaVesselsGabor.png', 'https://arxiv.org/pdf/cs/0510001.pdf', 'http://www.icmlc.org/icmlc2011/014_icmlc2011.pdf', 'https://www.researchgate.net/publication/5896442_Retinal_Blood_Vessel_Segmentation_Using_Line_Operators_and_Support_Vector_Classification'] | 7 |
69,667,877 | <p>This paper does exactly this.</p>
<p>Compressing Multisets with Large Alphabets</p>
<p>Paper: <a href="https://arxiv.org/abs/2107.09202" rel="nofollow noreferrer">https://arxiv.org/abs/2107.09202</a></p>
<p>Code: <a href="https://github.com/facebookresearch/multiset-compression" rel="nofollow noreferrer">https://github.com/facebookresearch/multiset-compression</a></p>
<p>Summary: <a href="https://twitter.com/_dsevero/status/1419661190750425102" rel="nofollow noreferrer">https://twitter.com/_dsevero/status/1419661190750425102</a></p> | 2021-10-21 19:37:49.213000+00:00 | 2021-10-21 19:37:49.213000+00:00 | null | null | 14,550,174 | <p>I need to store a large number of integers in a file. The order of the integers does not matter, so the total information content should be lower than that of an ordered list. Is there a more space-efficient way to store the numbers than as an arbitrarily ordered array?</p>
<p>Edit: I assume the integers to be completely random. I am really looking for a universal way to squeeze out the redundant information which is introduced by fixing a permutation.</p> | 2013-01-27 17:42:16.357000+00:00 | 2021-10-21 19:37:49.213000+00:00 | 2013-01-27 21:00:14.217000+00:00 | list|data-structures|encoding|language-agnostic|multiset | ['https://arxiv.org/abs/2107.09202', 'https://github.com/facebookresearch/multiset-compression', 'https://twitter.com/_dsevero/status/1419661190750425102'] | 3 |
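<p>For context, a simple baseline that the paper above improves on: sorting the integers discards the (irrelevant) order, and the sorted sequence compresses well as variable-length deltas. A rough Python sketch, assuming non-negative integers (this is not the paper's method, just the obvious reference point):</p>

<pre><code>def encode_multiset(nums):
    out = bytearray()
    prev = 0
    for n in sorted(nums):
        delta = n - prev
        prev = n
        # Varint: 7 payload bits per byte, high bit marks continuation.
        while delta >= 0x80:
            out.append((delta & 0x7F) | 0x80)
            delta >>= 7
        out.append(delta)
    return bytes(out)
</code></pre>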
69,487,194 | <blockquote>
<p>Can a Python iterator/generator be considered a stream?</p>
</blockquote>
<p>No.</p>
<p>Basically, an iterator lazily traverses a sequence while a stream is a lazy sequence.</p>
<p>The difference between iterators and streams becomes clearer with the example you gave, an infinite sequence of natural numbers.</p>
<h2>Iterator</h2>
<p>This is a rather simplistic iterator, with only the method <code>next()</code> implemented. This implementation is not iterable and does not conform to Python's <a href="https://docs.python.org/3/library/stdtypes.html#iterator-types" rel="nofollow noreferrer">iterator protocol</a>; however, it makes the differences from the stream implementation below evident.</p>
<pre><code>class Iterator:
def __init__(self, state=1, compute=lambda x: x + 1):
self.state = state
self.compute = compute
    def next(self):
        # Return the current element and advance the iterator's state.
        current, self.state = self.state, self.compute(self.state)
        return current
natural_number = Iterator()
print(natural_number.next()) # 1
print(natural_number.next()) # 2
print(natural_number.next()) # 3
</code></pre>
<p>Every time <code>next()</code> is invoked, it returns the current element of the sequence and mutates the iterator's state.</p>
<h2>Stream</h2>
<p>According to <a href="https://xuanji.appspot.com/isicp/3-5-streams.html" rel="nofollow noreferrer">SICP</a>, a stream can be defined by its first element, <code>car()</code>, and the rest of the stream, <code>cdr()</code>. In Python, streams can be implemented as lazy linked lists constructed recursively.</p>
<pre><code>class Stream:
def __init__(self, state=1, f=lambda x: x+1):
self.state = state
self.compute = f
def car(self):
return self.state
    def cdr(self):
        # Lazily construct the rest of the stream; self is left unchanged.
        return Stream(self.compute(self.state), self.compute)
sequence_of_natural_numbers = Stream()
sequence_of_natural_numbers.car() # 1
sequence_of_natural_numbers.car() # 1
sequence_of_natural_numbers.cdr().car() # 2
sequence_of_natural_numbers.cdr().cdr().car() # 3
</code></pre>
<p>Whenever <code>car()</code> is invoked, it always returns the same element, the first element of the sequence. <code>cdr()</code> returns a new <code>Stream</code> with the computed state; in this case the stream's state is never mutated.</p>
<p>If you wonder whether streams are useful, I recommend reading the paper <a href="https://arxiv.org/abs/2103.06913" rel="nofollow noreferrer">Classical (Co)Recursion: Programming</a>, which provides a practical implementation and some applications of streams.</p> | 2021-10-07 20:02:11.743000+00:00 | 2021-10-07 20:02:11.743000+00:00 | null | null | 59,867,011 | <p>I'm reading SICP and ended up at the part on streams.</p>
<p>Can a Python iterator/generator be considered a stream?</p>
<p>This iterator for example:</p>
<pre><code>class MyNumbers:
def __iter__(self):
self.a = 1
return self
def __next__(self):
x = self.a
self.a += 1
return x
myclass = MyNumbers()
myiter = iter(myclass)
print(next(myiter))
print(next(myiter))
print(next(myiter))
print(next(myiter))
print(next(myiter))
</code></pre>
<p>satisfies the definition: </p>
<blockquote>
<p>a stream is a sequence of data elements made available over time. A stream can be thought of as items on a conveyor belt being processed one at a time rather than in large batches.</p>
</blockquote>
<p><a href="https://en.wikipedia.org/wiki/Stream_(computing)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Stream_(computing)</a></p> | 2020-01-22 19:30:39.707000+00:00 | 2021-10-07 20:18:28.557000+00:00 | null | python|functional-programming | ['https://docs.python.org/3/library/stdtypes.html#iterator-types', 'https://xuanji.appspot.com/isicp/3-5-streams.html', 'https://arxiv.org/abs/2103.06913'] | 3 |
55,679,085 | <p>To answer my own question:</p>
<p>The authors of the <a href="https://arxiv.org/pdf/1505.04597.pdf" rel="nofollow noreferrer">U-Net paper</a> used a pre-computed weight-map to handle imbalanced classes.</p>
<p>The Institute for Astronomy at ETH Zurich provides a <a href="https://tf-unet.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">TensorFlow-based U-Net package</a> which contains a weighted version of the softmax function (not sparse, but they flatten their labels and logits first):</p>
<pre><code>class_weights = tf.constant(np.array(class_weights, dtype=np.float32))
# Per-pixel weight: for one-hot labels this picks out the weight of the true class.
weight_map = tf.multiply(flat_labels, class_weights)
weight_map = tf.reduce_sum(weight_map, axis=1)
# Unweighted per-pixel cross-entropy, then scaled by the class weights.
loss_map = tf.nn.softmax_cross_entropy_with_logits_v2(logits=flat_logits, labels=flat_labels)
weighted_loss = tf.multiply(loss_map, weight_map)
loss = tf.reduce_mean(weighted_loss)
</code></pre> | 2019-04-14 19:13:25.943000+00:00 | 2019-04-14 19:27:39.350000+00:00 | 2019-04-14 19:27:39.350000+00:00 | null | 55,663,783 | <p>I'm working on a binary semantic segmentation task where the distribution of one class is very small across any input image, hence there are only a few pixels which are labeled. When using <a href="https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy" rel="nofollow noreferrer">sparse_softmax_cross_entropy</a>
the overall error is easily decreased by ignoring this class. Now, I'm looking for a way to weight the classes by a coefficient which penalizes misclassifications of that class more heavily than those of the other class.</p>
<p>The doc of the loss function states:</p>
<blockquote>
<p><strong>weights</strong> acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If <strong>weights</strong> is a tensor of shape [batch_size], then the loss <strong>weights</strong> apply to each corresponding sample.</p>
</blockquote>
<p>If I understand this correctly, it says that specific samples in a batch get weighted differently from others. But this is actually not what I'm looking for. Does anyone know how to implement a weighted version of this loss function where the weights scale the importance of a specific class rather than of specific samples?</p>
56,815,796 | <p>You should check out end-to-end speaker verification systems, which are essentially siamese networks for speaker verification.</p>
<p><a href="https://arxiv.org/abs/1710.10467" rel="nofollow noreferrer">L. Wan, et al., "Generalized end-to-end loss for speaker verification," in Proc. ICASSP, 2018.</a></p>
<p>I think the literature above may give you some intuition for your problem.</p> | 2019-06-29 07:20:30.177000+00:00 | 2019-07-17 04:00:17.240000+00:00 | 2019-07-17 04:00:17.240000+00:00 | null | 55,865,724 | <p>I want to build a siamese network for speaker verification using <code>python</code>. This network consists of 2 identical Convolutional Neural Networks (CNNs) that learn a similarity function which can distinguish whether 2 input voices belong to the same person or not. <br><br></p>
<h3>Data</h3>
<p>I have 10 people recording their voices in <code>.wav</code> files, saying the 9 digits in Bahasa <code>(satu, dua, tiga, empat, lima, enam, tujuh, delapan, sembilan)</code>. Each person records each number 5 times, so each person has 45 recordings <code>(9 x 5)</code>. I used MFCC to get feature vectors and got the shape <code>(450, 250, 13)</code> -- (rows, number_frames, number_cepstral) -- and now I want to make pairs from my data.</p>
<p>I have seen these links: <br>
- <a href="https://www.kaggle.com/arpandhatt/siamese-neural-networks" rel="nofollow noreferrer">https://www.kaggle.com/arpandhatt/siamese-neural-networks</a> <br>
- <a href="https://keras.io/examples/mnist_siamese/" rel="nofollow noreferrer">https://keras.io/examples/mnist_siamese/</a></p>
<p>But I cannot understand what kind of method is being used to create the pairs. Given my data, how can I create good pairs to train the siamese network?</p>
<p><b>Note</b>: I want to build text-dependent speaker verification, which means a recording saying <code>'satu'</code> will only be compared to another recording saying <code>'satu'</code>.</p> | 2019-04-26 10:32:07.897000+00:00 | 2019-07-17 04:00:17.240000+00:00 | null | python|deep-learning|conv-neural-network | ['https://arxiv.org/abs/1710.10467'] | 1
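<p>One common way to build the pairs for the text-dependent setup described in the question above (a rough sketch; the data layout and the balancing strategy are assumptions, not a prescribed recipe): positive pairs share both speaker and word, negative pairs share only the word.</p>

<pre><code>import itertools
import random

def make_pairs(examples):
    """examples: list of (features, speaker_id, word_id) tuples."""
    positives, negatives = [], []
    for (f1, s1, w1), (f2, s2, w2) in itertools.combinations(examples, 2):
        if w1 != w2:
            continue  # text-dependent: only compare recordings of the same word
        (positives if s1 == s2 else negatives).append(((f1, f2), int(s1 == s2)))
    # Balance the classes; otherwise negatives vastly outnumber positives.
    negatives = random.sample(negatives, min(len(negatives), len(positives)))
    return positives + negatives
</code></pre>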
60,444,861 | <p>I don't have the code, but you could take a look at this arXiv paper: <a href="https://arxiv.org/abs/1905.05667" rel="nofollow noreferrer">https://arxiv.org/abs/1905.05667</a> and its references.</p>
<p>There is also demo code for the DBSCAN algorithm in sklearn: <a href="https://scikit-learn.org/stable/auto_examples/cluster/plot_dbscan.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/auto_examples/cluster/plot_dbscan.html</a></p>
<p>I came across these while looking for unsupervised clustering evaluation metrics.</p>
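<p>For example, an internal metric such as the silhouette score can replace eyeballing the top terms when choosing the cluster count (a rough sketch; <code>tfidf_matrix</code> is assumed to be the vectorized corpus from the question):</p>

<pre><code>from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

scores = {}
for k in range(3, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(tfidf_matrix)
    scores[k] = silhouette_score(tfidf_matrix, labels)

best_k = max(scores, key=scores.get)
print(scores, "-> best k:", best_k)
</code></pre>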
<p>Edit: this post is also useful - <a href="https://stats.stackexchange.com/questions/79028/performance-metrics-to-evaluate-unsupervised-learning">https://stats.stackexchange.com/questions/79028/performance-metrics-to-evaluate-unsupervised-learning</a></p> | 2020-02-28 03:30:48.587000+00:00 | 2020-02-28 03:30:48.587000+00:00 | null | null | 37,396,727 | <p>I'm using the <a href="http://scikit-learn.org/stable/auto_examples/text/document_clustering.html#example-text-document-clustering-py" rel="nofollow">sklearn tutorials</a> on text clustering to find any interesting groupings in reviews of beers. So far it has been working out fine for me; however, when it comes to testing or finding the right parameters, I've tried looping through different cluster numbers:</p>
<pre><code>for clusters in range(3, 10):
km = KMeans(n_clusters = clusters, init="k-means++", max_iter=100, n_init=1)
km.fit(vectorizer_fit)
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
for i in range(clusters):
print("Cluster %d:" %i, end="")
for ind in order_centroids[i, :10]:
print(" %s"%terms[ind],end=",")
print()
</code></pre>
<p>It helps weed out some obvious choices: clusters 6+ start getting weird, or contain the beer's name:</p>
<pre><code>Cluster 1: julius, th, treehouse, ego, alter, papaya, dipa, melon, canned, dankness
Cluster 7: citra, farmstead, dipa, hf, apa, passion, abner, nelson, mosaic, papaya
</code></pre>
<p>And then I tried using different max/min_df values:</p>
<pre><code>for x in range(40):
tfidf_vector = TfidfVectorizer(max_df = (.8-(x * .01)), min_df = .2, \
stop_words = "english", use_idf = True)
tfidf_matrix = tfidf_vector.fit_transform(documents)
km = KMeans(5)
km.fit(tfidf_matrix)
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
print("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~")
print("With max_df @ %d% " % ()(.80 - (x*.01))*100))
for i in range(5):
print("cluster %d words: " % i, end = "")
for ind in order_centroids[i, :10]: #top N number of words
print("%s "% vocab.ix[ind].values, end = ",") #lookup centroid number as vocab index
print("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~")
</code></pre>
<p>which helped find good points to eliminate most of the stop words:</p>
<pre><code>With max_df @ 80%: ['the'] ,['here'] ,['a'] ,['true'] ,['roasted'] ,['imparted'] ,['some'] ,['through'] ,['beer'] ,['such']
With max_df @ 60%: ['do'] ,['coffee'] ,['is'] ,['alcohol'] ,['retention'] ,['dry'] ['with'] ,['regular'] ,['little'] ,['speedway']
</code></pre>
<p>And I could hypothetically try looping through both at the same time, but at some point reading it and making a human judgement on the best doesn't seem the...computer scientist way. Is there any way to evaluate unsupervised methods, without an exact business question in mind?</p> | 2016-05-23 17:06:51.453000+00:00 | 2020-02-28 03:30:48.587000+00:00 | null | python-3.x|scikit-learn|cluster-analysis|k-means | ['https://arxiv.org/abs/1905.05667', 'https://scikit-learn.org/stable/auto_examples/cluster/plot_dbscan.html', 'https://stats.stackexchange.com/questions/79028/performance-metrics-to-evaluate-unsupervised-learning'] | 3 |
53,874,459 | <p>This is slightly tricky since I am not aware of any formal definition of an <code>S3 class</code>. For R objects, S3 classes are governed by a very simple character vector of class names stored in the <code>class</code> attribute. Method dispatch is then done by matching element(s) of that attribute with a function name.</p>
<p>You could essentially do:</p>
<pre><code>x <- 1:5
class(x) <- "MyMadeUpClass"
x
# [1] 1 2 3 4 5
# attr(,"class")
# [1] "MyMadeUpClass"
</code></pre>
<p>Does the above really define a class in the intuitive, formal understanding of the term?</p>
<p>You can create a <code>print</code> method for objects of this class like this (silly example incoming):</p>
<pre><code>print.MyMadeUpClass <- function(x, ...) {
print(sprintf("Pretty vector: %s", paste(x, collapse = ",")))
}
x
# [1] "Pretty vector: 1,2,3,4,5"
</code></pre>
<p>The important distinction here is that methods in S3 </p>
<ul>
<li>"belong to" (generic) functions, not classes</li>
<li>are chosen based on classes of the arguments provided to the function call</li>
</ul>
<p>The point I am trying to make is that S3 does not really have formally defined inheritance (which I assume is what you are looking for), in contrast to <code>S4</code>, which implements it via the <code>contains</code> concept, so I am not really sure what you would like to see as a result.</p>
<p>A very good read on the topic is Object-Oriented Programming, Functional Programming and R by John M. Chambers: <a href="https://arxiv.org/pdf/1409.3531.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1409.3531.pdf</a></p>
<p>Edit (after question edit) - the <strong>sloop</strong> package: </p>
<p>From the S3 perspective, I think it makes a lot of sense to examine the structure of generics and methods. I found the <code>sloop</code> package to be a very useful tool for this: <a href="https://github.com/r-lib/sloop" rel="nofollow noreferrer">https://github.com/r-lib/sloop</a>.</p>
<pre><code>objs <- mget(ls("package:base"), inherits = TRUE)
</code></pre>
<p>You can select the functions from these with </p>
<pre><code>funs <- objs[is.function(objs)]
</code></pre>
<p>You can get a complete list of the dependencies of the listed functions in a package by applying <code>codetools::findGlobals()</code>, <code>miniCRAN::makeDepGraph</code>, <code>pkgnet::CreatePackageReport</code> (or others) to the function list. All of these functions either graph the resulting dependencies or return an object easily plotable with, e.g., <code>igraph</code> or <code>DependenciesGraph</code>.</p>
<p>Is there a comparable set of commands to find all the classes created by a package and the inheritance structure of those classes? I know that for most packages the resulting web of class inheritance would be relatively simple, but I think that in a few cases, such as <code>ggplot2</code> and the <code>survey</code> package, a picture of that web could be quite helpful.</p>
<p>I have found a package, <code>classGraph</code>, that creates directed acyclic graphs for S4 class structures, but I am more interested in the much more common S3 structures.</p>
<p>This seems brute-force and sloppy, but I suppose if I had a list of all the <code>class</code> attributes used by objects in the base packages, and all the <code>class</code> attributes of objects in a package, then any of the latter which is not among the former would be new classes created by the package or inherited from another non-base package.</p> | 2018-12-20 17:49:21.917000+00:00 | 2018-12-22 08:20:15.273000+00:00 | 2018-12-21 23:24:03.583000+00:00 | r|oop|inheritance|package|directed-acyclic-graphs | ['https://arxiv.org/pdf/1409.3531.pdf', 'https://github.com/r-lib/sloop'] | 2 |
56,822,575 | <h3>TL;DR: <a href="https://www.geeksforgeeks.org/unbounded-knapsack-repetition-items-allowed/" rel="nofollow noreferrer">easiest algorithm is here</a>, most efficient algorithm is explained below.</h3>
<p>When you restrict the coefficients to positive integers, this problem is NP-complete (as long as <code>len</code> is part of the input and not fixed). So a truly <em>efficient</em> solution isn't going to happen. (It's called the Unbounded Subset Sum Problem, if you want to google around; <a href="https://folk.idi.ntnu.no/mlh/algkon/hard.pdf" rel="nofollow noreferrer">a proof of its hardness is here</a>.)</p>
<p><a href="https://arxiv.org/pdf/1610.04712.pdf" rel="nofollow noreferrer">The most efficient algorithm I've found is from this paper</a>:</p>
<p><a href="https://i.stack.imgur.com/wgqo0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wgqo0.png" alt="algorithm in pseudocode" /></a></p>
<p>The ⊕<sub>t</sub> operation is a "capped sumset", also described in the paper: it basically operates like this (sketched in Python):</p>
<pre><code>def capped_sumset(a, b, t): # a, b sets of naturals, t natural
a0 = a | {0}
b0 = b | {0}
return {
x+y
for x in a0
for y in b0
if x+y <= t
}
</code></pre>
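<p>For example (a made-up call, just to show how the cap <code>t</code> drops sums that are too large; printed set order may vary):</p>
<pre><code>print(capped_sumset({2, 3}, {4, 9}, 8))
# -> {0, 2, 3, 4, 6, 7}   (the sums 9, 11 and 12 exceed t=8 and are discarded)
</code></pre>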
<p>The hardest part about implementing this in C is going to be all the set operations; once you have a good implementation of sets of integers, the algorithm itself isn't too bad.</p>
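<p>For reference, here is a minimal Python sketch of the "classic dynamic program" mentioned in the next paragraph -- a correctness reference you can test against before porting anything to C, not an efficient implementation:</p>
<pre><code>def reachable(a, n):
    # dp[v] is True iff v is a sum of elements of a, each usable any number of times
    dp = [False] * (n + 1)
    dp[0] = True
    for v in range(1, n + 1):
        dp[v] = any(x <= v and dp[v - x] for x in a)
    return dp[n]

print(reachable([4, 5, 6], 7))   # -> False (matches the example in the question below)
print(reachable([4, 5, 6], 14))  # -> True  (4 + 4 + 6)
</code></pre>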
<p>If you don't care about efficiency, of course, you can use the "classic dynamic program" mentioned in the base case of the algorithm; <a href="https://www.geeksforgeeks.org/unbounded-knapsack-repetition-items-allowed/" rel="nofollow noreferrer">you can find a detailed explanation with examples in several programming languages here</a>. But be prepared for a pseudo-polynomial running time, i.e. exponential in the bit-length of the target!</p> | 2019-06-30 05:16:52.143000+00:00 | 2019-06-30 05:23:51.080000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 56,822,463 | <p>I'm trying to solve a problem where, given an integer <code>n</code> and an array of integers <code>a</code>, I must decide whether <code>n</code> can be represented as a linear combination of elements from <code>a</code> such that the coefficients are <strong>positive</strong> integers as well.</p>
<p>I saw <a href="https://stackoverflow.com/questions/47583925/c-check-if-an-integer-is-linear-combination-of-elements-in-an-array">C: check if an integer is linear combination of elements in an array</a> and implemented it as such in C, but it doesn't work in all cases.</p>
<pre class="lang-c prettyprint-override"><code>#include <stdbool.h>

int gcd(int a, int b) {
if (a == 0) {
return b;
}
return gcd(b % a, a);
}
bool can_be_changed(const int a[], int len, int val) {
for (int i = 0; i < len - 1; i++) {
for (int j = i + 1; j < len; j++) {
if (val % gcd(a[i], a[j]) == 0) {
return true;
}
}
}
return false;
}
</code></pre>
<p>But if <code>a = {4,5,6}</code> and <code>val=7</code>, the code will return true, as <code>gcd(4,5) = 1</code> and <code>7 % gcd(4,5) == 0</code> will evaluate to <code>true</code>, thus returning <code>true</code>, which it shouldn't.</p>
<p>Any help is appreciated, thanks!</p> | 2019-06-30 04:48:26.033000+00:00 | 2019-06-30 05:23:51.080000+00:00 | 2019-06-30 04:52:42.207000+00:00 | c|arrays | ['https://www.geeksforgeeks.org/unbounded-knapsack-repetition-items-allowed/', 'https://folk.idi.ntnu.no/mlh/algkon/hard.pdf', 'https://arxiv.org/pdf/1610.04712.pdf', 'https://i.stack.imgur.com/wgqo0.png', 'https://www.geeksforgeeks.org/unbounded-knapsack-repetition-items-allowed/'] | 5 |
51,587,020 | <p>When doing binary classification, using <a href="http://caffe.help/manual/layers/softmaxwithloss.html" rel="nofollow noreferrer"><code>"SoftmaxWithLoss"</code></a> with two outputs is mathematically equivalent to using <a href="http://caffe.help/manual/layers/sigmoidcrossentropyloss.html" rel="nofollow noreferrer"><code>"SigmoidCrossEntropyLoss"</code></a>. So, if you really only need one output you can set your last layer to <code>num_output: 1</code> and use <code>"SigmoidCrossEntropyLoss"</code>. However, if you want to take advantage of caffe's <code>"Accuracy"</code> layer, you need to use two outputs and a <code>"SoftmaxWithLoss"</code> layer. </p>
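<p>A quick numerical check of that equivalence (plain NumPy, logits made up): the softmax probability of class 1 over two logits equals the sigmoid of their difference.</p>
<pre><code>import numpy as np

z = np.array([1.3, -0.7])                          # two logits: class 0, class 1
softmax_p1 = np.exp(z[1]) / np.exp(z).sum()        # two-output softmax probability of class 1
sigmoid_p1 = 1.0 / (1.0 + np.exp(-(z[1] - z[0])))  # sigmoid of the logit difference
print(np.isclose(softmax_p1, sigmoid_p1))          # -> True
</code></pre>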
<p>Regarding your questions:<br>
1. If you opt to use <code>"SoftmaxWithLoss"</code> and you only need one output, take the second output for each pixel, as this entry represents the probability of class 1.<br>
I'll leave it to you as an exercise to figure out what you'll get if you take the sum (hint: <code>"Softmax"</code> outputs probabilities...)<br>
2. The loss is very small most likely because you have severe class imbalance - most of your pixels are 0 while only very few are 1 (or vice versa); therefore always predicting 0 does not incur a large penalty. If this is your case, I suggest looking at <a href="https://arxiv.org/abs/1708.02002" rel="nofollow noreferrer">Focal Loss</a>, which addresses this issue.</p> | 2018-07-30 04:33:52.710000+00:00 | 2018-07-30 04:33:52.710000+00:00 | null | null | 51,553,480 | <p>I have set up Caffe and I am using the FCN-8s model with a small change in the output classes:</p>
<pre><code>layer {
name: "score_5classes"
type: "Convolution"
bottom: "score"
top: "score_5classes"
convolution_param {
num_output: 2
pad: 0
kernel_size: 1
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "score_5classes"
bottom: "label"
top: "loss"
loss_param {
normalize: true
}
}
</code></pre>
<p>I have changed the last layer's output number to 2, because I want to classify my input images into 2 classes, 0 and 1. (So it seems I should have 2 outputs! I can't understand why?! It could be an output matrix with zeros and ones, couldn't it?)</p>
<p>So my questions are:</p>
<p>1. Should I sum these 2 classes? Because I need 1 output.</p>
<p>2. The loss is so small! Even when the output is far away from the desired one! How does Caffe compute the loss?</p>
<p>Thanks</p> | 2018-07-27 08:03:41.453000+00:00 | 2018-07-30 04:33:52.710000+00:00 | 2018-07-30 04:26:34.520000+00:00 | machine-learning|computer-vision|caffe|image-segmentation|matcaffe | ['http://caffe.help/manual/layers/softmaxwithloss.html', 'http://caffe.help/manual/layers/sigmoidcrossentropyloss.html', 'https://arxiv.org/abs/1708.02002'] | 3 |
50,984,159 | <h2>Suggested Solution</h2>
<p>Reusing the code from the <a href="https://github.com/r0nn13/conditional-dcgan-keras" rel="noreferrer">repository you shared</a>, here are some suggested modifications to train a classifier along your generator and discriminator (their architectures and other losses are left untouched):</p>
<pre class="lang-python prettyprint-override"><code>from keras import backend as K
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
# NB: Input, merge, Model, Adagrad, np (numpy) and cv2 are assumed to be
# imported by the original repository script that this snippet extends.
def lenet_classifier_model(nb_classes):
# Snipped by Fabien Tanc - https://www.kaggle.com/ftence/keras-cnn-inspired-by-lenet-5
# Replace with your favorite classifier...
model = Sequential()
    model.add(Convolution2D(12, 5, 5, activation='relu', input_shape=in_shape, init='he_normal'))  # in_shape: input image shape, assumed defined elsewhere
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(25, 5, 5, activation='relu', init='he_normal'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(180, activation='relu', init='he_normal'))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu', init='he_normal'))
model.add(Dropout(0.5))
    model.add(Dense(nb_classes, activation='softmax', init='he_normal'))
    return model  # without this return, train() below would be handed None
def generator_containing_discriminator_and_classifier(generator, discriminator, classifier):
inputs = Input((IN_CH, img_cols, img_rows))
x_generator = generator(inputs)
merged = merge([inputs, x_generator], mode='concat', concat_axis=1)
discriminator.trainable = False
x_discriminator = discriminator(merged)
classifier.trainable = False
x_classifier = classifier(x_generator)
model = Model(input=inputs, output=[x_generator, x_discriminator, x_classifier])
return model
def train(BATCH_SIZE):
(X_train, Y_train, LABEL_train) = get_data('train') # replace with your data here
X_train = (X_train.astype(np.float32) - 127.5) / 127.5
Y_train = (Y_train.astype(np.float32) - 127.5) / 127.5
discriminator = discriminator_model()
generator = generator_model()
classifier = lenet_classifier_model(6)
generator.summary()
discriminator_and_classifier_on_generator = generator_containing_discriminator_and_classifier(
generator, discriminator, classifier)
d_optim = Adagrad(lr=0.005)
g_optim = Adagrad(lr=0.005)
generator.compile(loss='mse', optimizer="rmsprop")
discriminator_and_classifier_on_generator.compile(
loss=[generator_l1_loss, discriminator_on_generator_loss, "categorical_crossentropy"],
optimizer="rmsprop")
discriminator.trainable = True
discriminator.compile(loss=discriminator_loss, optimizer="rmsprop")
classifier.trainable = True
classifier.compile(loss="categorical_crossentropy", optimizer="rmsprop")
for epoch in range(100):
print("Epoch is", epoch)
print("Number of batches", int(X_train.shape[0] / BATCH_SIZE))
for index in range(int(X_train.shape[0] / BATCH_SIZE)):
image_batch = Y_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE]
label_batch = LABEL_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE] # replace with your data here
generated_images = generator.predict(X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE])
if index % 20 == 0:
image = combine_images(generated_images)
image = image * 127.5 + 127.5
image = np.swapaxes(image, 0, 2)
cv2.imwrite(str(epoch) + "_" + str(index) + ".png", image)
# Image.fromarray(image.astype(np.uint8)).save(str(epoch)+"_"+str(index)+".png")
# Training D:
real_pairs = np.concatenate((X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], image_batch),
axis=1)
fake_pairs = np.concatenate(
(X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], generated_images), axis=1)
X = np.concatenate((real_pairs, fake_pairs))
y = np.zeros((20, 1, 64, 64)) # [1] * BATCH_SIZE + [0] * BATCH_SIZE
d_loss = discriminator.train_on_batch(X, y)
print("batch %d d_loss : %f" % (index, d_loss))
discriminator.trainable = False
# Training C:
c_loss = classifier.train_on_batch(image_batch, label_batch)
print("batch %d c_loss : %f" % (index, c_loss))
classifier.trainable = False
# Train G:
g_loss = discriminator_and_classifier_on_generator.train_on_batch(
X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :],
[image_batch, np.ones((10, 1, 64, 64)), label_batch])
discriminator.trainable = True
classifier.trainable = True
print("batch %d g_loss : %f" % (index, g_loss[1]))
if index % 20 == 0:
generator.save_weights('generator', True)
discriminator.save_weights('discriminator', True)
</code></pre>
<hr>
<h2>Theoretical Details</h2>
<p>I believe there are some misunderstandings regarding how conditional GANs work and what the discriminator's role is in such schemes.</p>
<h3>Role of the Discriminator</h3>
<p>In the min-max game which is GAN training [4], the discriminator <code>D</code> is playing against the generator <code>G</code> (the network you actually care about) so that under <code>D</code>'s scrutiny, <code>G</code> becomes better at outputting realistic results.</p>
<p>For that, <code>D</code> is trained to tell apart real samples from samples from <code>G</code>, while <code>G</code> is trained to fool <code>D</code> by generating realistic results / results following the target distribution. </p>
<blockquote>
<p><em>Note: in the case of conditional GANs, i.e. GANs mapping an input sample from one domain <code>A</code> (e.g. real picture) to another domain <code>B</code>
(e.g. sketch), <code>D</code> is usually fed with the pairs of samples stacked
together and has to discriminate "real" pairs (input sample from <code>A</code> +
corresponding target sample from <code>B</code>) and "fake" pairs (input sample
from <code>A</code> + corresponding output from <code>G</code>)</em> [1, 2]</p>
</blockquote>
<p>Training a conditional generator against <code>D</code> (as opposed to simply training <code>G</code> alone with an L1/L2 loss only, e.g. a DAE) improves the sampling capability of <code>G</code>, forcing it to output crisp, realistic results instead of trying to average the distribution.</p>
<p>Even though discriminators can have multiple sub-networks to cover other tasks (see next paragraphs), <code>D</code> should keep at least one sub-network/output to cover its main task: <strong>telling real samples from generated ones apart</strong>. Asking <code>D</code> to regress further semantic information (e.g. classes) alongside may interfere with this main purpose.</p>
<blockquote>
<p><em>Note: <code>D</code> output is often not a simple scalar / boolean. It is common to have a discriminator (e.g. PatchGAN [1, 2]) returning a matrix of
probabilities, evaluating how realistic patches made from its input
are.</em></p>
</blockquote>
<hr>
<h3>Conditional GANs</h3>
<p>Traditional GANs are trained in an unsupervised manner to generate realistic data (e.g. images) from a random noise vector as input. [4]</p>
<p>As previously mentioned, conditional GANs have further input <em>conditions</em>. Along with/instead of the noise vector, they take as input a sample from a domain <code>A</code> and return a corresponding sample from a domain <code>B</code>. <code>A</code> can be a completely different modality, e.g. <code>B = sketch image</code> while <code>A = discrete label</code>; <code>B = volumetric data</code> while <code>A = RGB image</code>, etc. [3]</p>
<p>Such GANs can also be conditioned on multiple inputs, e.g. <code>A = real image + discrete label</code> while <code>B = sketch image</code>. A famous work introducing such methods is <strong>InfoGAN</strong> [5]. It presents how to condition GANs on multiple continuous or discrete inputs (e.g. <code>A = digit class + writing type</code>, <code>B = handwritten digit image</code>), <strong>using a more advanced discriminator whose second task is to force <code>G</code> to maximize the mutual information between its conditioning inputs and its corresponding outputs</strong>.</p>
<hr>
<h3>Maximizing the Mutual Information for cGANs</h3>
<p>The InfoGAN discriminator has 2 heads/sub-networks to cover its 2 tasks [5]:</p>
<ul>
<li>One head <code>D1</code> does the traditional real/generated discrimination -- <code>G</code> has to minimize this result, i.e. it has to fool <code>D1</code> so that it can't tell real from generated data apart;</li>
<li>Another head <code>D2</code> (also named <code>Q</code> network) tries to regress the input <code>A</code> information -- <code>G</code> has to maximize this result, i.e. it has to output data which "show" the requested semantic information (c.f. mutual-information maximization between <code>G</code> conditional inputs and its outputs).</li>
</ul>
<p>You can find a Keras implementation here for instance: <a href="https://github.com/eriklindernoren/Keras-GAN/tree/master/infogan" rel="noreferrer">https://github.com/eriklindernoren/Keras-GAN/tree/master/infogan</a>.</p>
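<p>To make the two-headed structure concrete, here is a minimal sketch of such a discriminator using the Keras 2 functional API (layer sizes and shapes are arbitrary choices for illustration; note the snippet at the top of this answer uses the older Keras 1 API):</p>
<pre><code>from keras.layers import Input, Conv2D, LeakyReLU, Flatten, Dense
from keras.models import Model

def two_headed_discriminator(img_shape=(64, 64, 3), n_classes=6):
    x_in = Input(shape=img_shape)
    # Shared convolutional trunk
    x = Conv2D(32, 3, strides=2, padding='same')(x_in)
    x = LeakyReLU(0.2)(x)
    x = Conv2D(64, 3, strides=2, padding='same')(x)
    x = LeakyReLU(0.2)(x)
    x = Flatten()(x)
    validity = Dense(1, activation='sigmoid', name='validity')(x)    # head D1: real/fake
    label = Dense(n_classes, activation='softmax', name='label')(x)  # head D2/Q: class posterior
    return Model(inputs=x_in, outputs=[validity, label])

d = two_headed_discriminator()
d.compile(optimizer='rmsprop',
          loss={'validity': 'binary_crossentropy', 'label': 'categorical_crossentropy'})
</code></pre>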
<p>Several works are using similar schemes to improve control over what a GAN is generating, by using provided labels and maximizing the mutual information between these inputs and <code>G</code> outputs [6, 7]. The basic idea is always the same though:</p>
<ul>
<li>Train <code>G</code> to generate elements of domain <code>B</code>, given some inputs of domain(s) <code>A</code>;</li>
<li>Train <code>D</code> to discriminate "real"/"fake" results -- <code>G</code> has to minimize this;</li>
<li>Train <code>Q</code> (e.g. a classifier; can share layers with <code>D</code>) to estimate the original <code>A</code> inputs from <code>B</code> samples -- <code>G</code> has to maximize this.</li>
</ul>
<hr>
<h3>Wrapping Up</h3>
<p>In your case, it seems you have the following training data:</p>
<ul>
<li>real images <code>Ia</code></li>
<li>corresponding sketch images <code>Ib</code></li>
<li>corresponding class labels <code>c</code></li>
</ul>
<p>And you want to train a generator <code>G</code> so that given an image <code>Ia</code> and its class label <code>c</code>, it outputs a proper sketch image <code>Ib'</code>. </p>
<p>All in all, that's a lot of information you have, and you can supervise your training both on the conditioned images and the conditioned labels...
Inspired by the aforementioned methods [1, 2, 5, 6, 7], here is a possible way of using all this information to train your conditional <code>G</code>:</p>
<strong>Network <code>G</code>:</strong>
<ul>
<li>Inputs: <code>Ia</code> + <code>c</code></li>
<li>Output: <code>Ib'</code></li>
<li>Architecture: up-to-you (e.g. U-Net, ResNet, ...)</li>
<li>Losses: L1/L2 loss between <code>Ib'</code> & <code>Ib</code>, <code>-D</code> loss, <code>Q</code> loss</li>
</ul>
<strong>Network <code>D</code>:</strong>
<ul>
<li>Inputs: <code>Ia</code> + <code>Ib</code> (real pair), <code>Ia</code> + <code>Ib'</code> (fake pair)</li>
<li>Output: "fakeness" scalar/matrix</li>
<li>Architecture: up-to-you (e.g. PatchGAN)</li>
<li>Loss: cross-entropy on the "fakeness" estimation</li>
</ul>
<strong>Network <code>Q</code>:</strong>
<ul>
<li>Inputs: <code>Ib</code> (real sample, for training <code>Q</code>), <code>Ib'</code> (fake sample, when back-propagating through <code>G</code>)</li>
<li>Output: <code>c'</code> (estimated class)</li>
<li>Architecture: up-to-you (e.g. LeNet, ResNet, VGG, ...)</li>
<li>Loss: cross-entropy between <code>c</code> and <code>c'</code></li>
</ul>
<strong>Training Phase:</strong>
<ol>
<li>Train <code>D</code> on a batch of real pairs <code>Ia</code> + <code>Ib</code> then on a batch of fake pairs <code>Ia</code> + <code>Ib'</code>;</li>
<li>Train <code>Q</code> on a batch of real samples <code>Ib</code>;</li>
<li>Fix <code>D</code> and <code>Q</code> weights;</li>
<li>Train <code>G</code>, passing its generated outputs <code>Ib'</code> to <code>D</code> and <code>Q</code> to back-propagate through them.</li>
</ol>
<blockquote>
<p><em>Note: this is a really rough architecture description. I'd recommend going through the literature ([1, 5, 6, 7] as a good start) to get
more details and maybe a more elaborate solution.</em></p>
</blockquote>
<hr>
<h3>References</h3>
<ol>
<li>Isola, Phillip, et al. "Image-to-image translation with conditional adversarial networks." arXiv preprint (2017). <a href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.pdf" rel="noreferrer">http://openaccess.thecvf.com/content_cvpr_2017/papers/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.pdf</a></li>
<li>Zhu, Jun-Yan, et al. "Unpaired image-to-image translation using cycle-consistent adversarial networks." arXiv preprint arXiv:1703.10593 (2017). <a href="http://openaccess.thecvf.com/content_ICCV_2017/papers/Zhu_Unpaired_Image-To-Image_Translation_ICCV_2017_paper.pdf" rel="noreferrer">http://openaccess.thecvf.com/content_ICCV_2017/papers/Zhu_Unpaired_Image-To-Image_Translation_ICCV_2017_paper.pdf</a></li>
<li>Mirza, Mehdi, and Simon Osindero. "Conditional generative adversarial nets." arXiv preprint arXiv:1411.1784 (2014). <a href="https://arxiv.org/pdf/1411.1784" rel="noreferrer">https://arxiv.org/pdf/1411.1784</a></li>
<li>Goodfellow, Ian, et al. "Generative adversarial nets." Advances in neural information processing systems. 2014. <a href="http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf" rel="noreferrer">http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf</a></li>
<li>Chen, Xi, et al. "Infogan: Interpretable representation learning by information maximizing generative adversarial nets." Advances in Neural Information Processing Systems. 2016. <a href="http://papers.nips.cc/paper/6399-infogan-interpretable-representation-learning-by-information-maximizing-generative-adversarial-nets.pdf" rel="noreferrer">http://papers.nips.cc/paper/6399-infogan-interpretable-representation-learning-by-information-maximizing-generative-adversarial-nets.pdf</a></li>
<li>Lee, Minhyeok, and Junhee Seok. "Controllable Generative Adversarial Network." arXiv preprint arXiv:1708.00598 (2017). <a href="https://arxiv.org/pdf/1708.00598.pdf" rel="noreferrer">https://arxiv.org/pdf/1708.00598.pdf</a></li>
<li>Odena, Augustus, Christopher Olah, and Jonathon Shlens. "Conditional image synthesis with auxiliary classifier gans." arXiv preprint arXiv:1610.09585 (2016). <a href="http://proceedings.mlr.press/v70/odena17a/odena17a.pdf" rel="noreferrer">http://proceedings.mlr.press/v70/odena17a/odena17a.pdf</a></li>
</ol> | 2018-06-22 08:50:13.173000+00:00 | 2018-06-22 13:35:10.787000+00:00 | 2018-06-22 13:35:10.787000+00:00 | null | 50,909,007 | <p>I am trying to figure out how I will use the label information of my dataset with Generative Adversarial Networks. I am trying to use the following implementation of conditional GANs that <a href="https://github.com/r0nn13/conditional-dcgan-keras" rel="nofollow noreferrer">can be found here</a>. My dataset contains two different image domains (real objects and sketches) with common class information (chair, tree, orange etc). I opted for this implementation which only considers the two different domains as different "classes" for the correspondence (train samples <code>X</code> correspond to the real images while target samples <code>y</code> correspond to the sketch images).</p>
<p>Is there a way to modify my code and take the class information (chair, tree, etc.) into account in my whole architecture? What I actually want is for my discriminator to predict whether or not the images generated by my generator belong to a specific class, and not only whether they are real or not. As it is, with the current architecture, the system learns to create similar sketches in all cases. </p>
<p><strong>Update:</strong> The discriminator returns a tensor of size <code>1x7x7</code>, then both <code>y_true</code> and <code>y_pred</code> are passed through a flatten layer before calculating the loss:</p>
<pre><code>def discriminator_loss(y_true, y_pred):
BATCH_SIZE=100
return K.mean(K.binary_crossentropy(K.flatten(y_pred), K.concatenate([K.ones_like(K.flatten(y_pred[:BATCH_SIZE,:,:,:])),K.zeros_like(K.flatten(y_pred[:BATCH_SIZE,:,:,:])) ]) ), axis=-1)
</code></pre>
<p>and the loss function of the discriminator over the generator:</p>
<pre><code>def discriminator_on_generator_loss(y_true,y_pred):
BATCH_SIZE=100
return K.mean(K.binary_crossentropy(K.flatten(y_pred), K.ones_like(K.flatten(y_pred))), axis=-1)
</code></pre>
<p>Furthermore, my modification of the discriminator model to output a single unit:</p>
<pre><code>model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
#model.add(Activation('sigmoid'))
</code></pre>
<p>Now the discriminator outputs 1 layer. How can I modify the above-mentioned loss functions correspondingly? Should I have 7 instead of 1, for the <code>n_classes = 6</code> + one class for predicting real and fake pairs?</p> | 2018-06-18 11:56:41.847000+00:00 | 2018-08-27 08:49:20.547000+00:00 | 2018-08-27 08:49:20.547000+00:00 | python|keras|conv-neural-network|loss-function|generative-adversarial-network | ['https://github.com/r0nn13/conditional-dcgan-keras', 'https://github.com/eriklindernoren/Keras-GAN/tree/master/infogan', 'http://openaccess.thecvf.com/content_cvpr_2017/papers/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.pdf', 'http://openaccess.thecvf.com/content_ICCV_2017/papers/Zhu_Unpaired_Image-To-Image_Translation_ICCV_2017_paper.pdf', 'https://arxiv.org/pdf/1411.1784', 'http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf', 'http://papers.nips.cc/paper/6399-infogan-interpretable-representation-learning-by-information-maximizing-generative-adversarial-nets.pdf', 'https://arxiv.org/pdf/1708.00598.pdf', 'http://proceedings.mlr.press/v70/odena17a/odena17a.pdf'] | 9 |
60,371,316 | <p>There are two possible approaches: </p>
<ul>
<li><strong><em>Feature engineering</em></strong> (user-defined global and/or <a href="https://doi.org/10.5194/isprsannals-II-3-181-2014" rel="nofollow noreferrer">local features</a>) </li>
<li><strong><em>Feature learning</em></strong> (models such as <a href="https://arxiv.org/abs/1612.00593" rel="nofollow noreferrer">PointNet</a>, <a href="https://arxiv.org/abs/1706.02413" rel="nofollow noreferrer">PointNet++</a>, and <a href="https://doi.org/10.1109/IROS.2015.7353481" rel="nofollow noreferrer">VoxNet</a>)</li>
</ul>
<p>If the feature engineering approach is selected, you can train your model with several different classifiers, such as Multilayer Perceptron (MLP), Support Vector Machines (SVM), and Random Forest (RF), for instance. </p> | 2020-02-24 07:34:54.107000+00:00 | 2020-02-24 07:34:54.107000+00:00 | null | null | 58,569,180 | <p>I'm an automotive engineer student, and at the moment I'm working in a project for an autonomous bus at the university with 3D point clouds from a lidar sensor. My job here is to train the point cloud with deep learning algorithms. But I do not know exactly how to start. I found many sources on the internet. But it is also too diverse for me as a beginner, I do not know how to start first. Can someone give me some tips? Or good source for beginners. </p>
<p>Thank you in advance!</p> | 2019-10-26 08:22:13.570000+00:00 | 2020-11-19 14:11:07.890000+00:00 | 2020-02-24 09:37:12.503000+00:00 | 3d|deep-learning|point-cloud-library|lidar-data | ['https://doi.org/10.5194/isprsannals-II-3-181-2014', 'https://arxiv.org/abs/1612.00593', 'https://arxiv.org/abs/1706.02413', 'https://doi.org/10.1109/IROS.2015.7353481'] | 4 |
64,913,440 | <p>According to your question, I assume that you are working on your own LiDAR point cloud data rather than the public datasets.
Initially, I would suggest annotating the data with 3D bounding boxes on point cloud data.</p>
<p>As concerned about deep learning algorithms I would prefer understanding <a href="https://arxiv.org/abs/1803.06199" rel="nofollow noreferrer">Complex YOLO</a>, <a href="https://arxiv.org/abs/1711.06396" rel="nofollow noreferrer">VoxelNet</a>, and PointNet.
To understand the implementation PointNet algorithm you can refer to the <a href="https://keras.io/examples/vision/pointnet/" rel="nofollow noreferrer">Keras webpage</a> trained on the ModelNet10 dataset.</p> | 2020-11-19 14:11:07.890000+00:00 | 2020-11-19 14:11:07.890000+00:00 | null | null | 58,569,180 | <p>I'm an automotive engineer student, and at the moment I'm working in a project for an autonomous bus at the university with 3D point clouds from a lidar sensor. My job here is to train the point cloud with deep learning algorithms. But I do not know exactly how to start. I found many sources on the internet. But it is also too diverse for me as a beginner, I do not know how to start first. Can someone give me some tips? Or good source for beginners. </p>
<p>Thank you in advance!</p> | 2019-10-26 08:22:13.570000+00:00 | 2020-11-19 14:11:07.890000+00:00 | 2020-02-24 09:37:12.503000+00:00 | 3d|deep-learning|point-cloud-library|lidar-data | ['https://arxiv.org/abs/1803.06199', 'https://arxiv.org/abs/1711.06396', 'https://keras.io/examples/vision/pointnet/'] | 3 |
24,519,734 | <p>I have been searching around a bit and I think the following is a candidate answer to your question.</p>
<p>Quoting N. Bell and J. Hoberock, "Thrust: a productivity-oriented library for CUDA", in GPU Computing Gems Jade Edition:</p>
<blockquote>
<p>Thrust statically selects a highly-optimized <strong>Radix Sort</strong> algorithm for sorting primitive types (<code>char</code>, <code>int</code>, <code>float</code> and <code>double</code>) with the standard <code>less</code> and <code>greater</code> comparison operators. For all other types (e.g., user-defined data types) and comparison operators, Thrust uses a general <strong>Merge Sort</strong> algorithm. Because sorting primitives with Radix Sort is considerably faster than Merge Sort, this static optimization has significant value.</p>
</blockquote>
<p>Now, <strong>Merge Sort</strong> requires <code>O(N)</code> memory space, see <a href="https://stackoverflow.com/questions/2967153/space-requirements-of-a-merge-sort">Space requirements of a merge-sort</a>.</p>
<p>Furthermore, <strong>Radix Sort</strong> still requires <code>O(N)</code> memory space, see <a href="http://arxiv.org/pdf/1206.3511.pdf" rel="nofollow noreferrer">Comparison of Bucket Sort and RADIX Sort</a>.</p>
<p>Which of the two consumes more memory is not defined and depends on the input sequence to be sorted as well as on algorithm tuning parameters, see the comments to one of the answers to <a href="https://stackoverflow.com/questions/3539265/why-quicksort-is-more-popular-than-radix-sort">Why quicksort is more popular than radix-sort?</a>.</p>
<p>Opposite to that, <strong>Quick Sort</strong> requires <code>O(logN)</code> memory space if performed <em>in-place</em>, otherwise it needs <code>O(N)</code>. For a CUDA implementation of the Quick Sort algorithm, you may have a look at <a href="http://blogs.nvidia.com/blog/2012/09/12/how-tesla-k20-speeds-up-quicksort-a-familiar-comp-sci-code/" rel="nofollow noreferrer">How Tesla K20 Speeds Quicksort</a>.</p>
<p>For other <em>in-place</em> sorting algorithms (the <em>in-place</em> strategy is worth to be explored as it saves memory as compared to the <em>non-in-place</em> counterpart), have a look at the <strong>Bitonic Sort</strong>, see <a href="http://www.informatik.uni-kiel.de/fileadmin/arbeitsgruppen/comsys/files/public/ppam09.pdf" rel="nofollow noreferrer">Fast in-place sorting with CUDA based on bitonic sort</a>.</p> | 2014-07-01 21:42:47.420000+00:00 | 2014-07-01 21:42:47.420000+00:00 | 2017-05-23 10:26:56.467000+00:00 | null | 24,493,810 | <p>In <a href="https://stackoverflow.com/questions/13306793/cuda-thrust-trying-to-sort-by-key-2-8gb-of-data-in-6gb-of-gpu-ram-throws-bad-al">cuda/thrust: Trying to sort_by_key 2.8GB of data in 6GB of GPU RAM throws bad_alloc</a>, I have read that <code>sort_by_key</code> consumed most of the memory for the test case considered therein.</p>
<p>Is there an alternative that can do exactly what <code>sort_by_key</code> is doing even if it is a little bit slower but that can sort bigger datasets?</p> | 2014-06-30 15:35:11.190000+00:00 | 2014-07-01 21:45:26.583000+00:00 | 2017-05-23 12:05:58.690000+00:00 | c++|cuda|thrust | ['https://stackoverflow.com/questions/2967153/space-requirements-of-a-merge-sort', 'http://arxiv.org/pdf/1206.3511.pdf', 'https://stackoverflow.com/questions/3539265/why-quicksort-is-more-popular-than-radix-sort', 'http://blogs.nvidia.com/blog/2012/09/12/how-tesla-k20-speeds-up-quicksort-a-familiar-comp-sci-code/', 'http://www.informatik.uni-kiel.de/fileadmin/arbeitsgruppen/comsys/files/public/ppam09.pdf'] | 5 |
56,007,099 | <p>No, if you are interested in an image segmentation, you should <strong>not</strong> flatten and then reshape your tensors. Instead, use a <em>fully convolutional</em> model, like the <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-Net</a>. You find a lot of example implementations of it on github, e.g. <a href="https://github.com/zhixuhao/unet/blob/master/model.py" rel="nofollow noreferrer">here</a> </p> | 2019-05-06 14:20:56.030000+00:00 | 2019-05-06 14:20:56.030000+00:00 | null | null | 56,006,630 | <p>I am using keras and python for satellite image segmentation. It is my understanding that to get (pixel level)predictions for image segmentation, model reshapes layer of dimension(-1,num_classes,height,width) to shape (-1,num_classes,height*width).This is then followed by applying activation function like softmax or sigmoid. My question is how to recover images after this step back in the format either channel first or channel last?
example code</p>
<pre><code>o = (Reshape(( num_classes , outputHeight*outputWidth)))(o)
o = (Permute((2, 1)))(o)
o = (Activation('softmax'))(o)
</code></pre>
<p>I have tried adding following layer to the model at the end</p>
<pre><code>o = (Reshape((outputHeight, outputWidth, num_classes)))(o)
</code></pre>
<p>Is this correct? will this reorient the image pixels in the same order as original or not?
Another alternative may be to use following code on individual images.</p>
<pre><code>array.reshape(height, width, num_classes)
</code></pre>
<p>Which method should i use to get pixel level segmentation result?</p> | 2019-05-06 13:52:07.920000+00:00 | 2019-05-08 10:33:58.620000+00:00 | 2019-05-08 10:33:58.620000+00:00 | python-3.x|keras|deep-learning|image-segmentation | ['https://arxiv.org/abs/1505.04597', 'https://github.com/zhixuhao/unet/blob/master/model.py'] | 2 |
37,548,562 | <p>This would be useful to look at: <a href="https://arxiv.org/pdf/1603.07285v1.pdf" rel="nofollow">A guide to convolutional arithmetic</a></p>
<p>In your case you would want to use: </p>
<p><code>o = floor((i - k) / s) + 1</code></p>
<p><em>i</em> being the input size, <em>k</em> the kernel size, <em>s</em> the stride, and <em>o</em> the output size, so:</p>
<p><code>o = floor((24 - 3) / 2) + 1</code></p>
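<p>As a quick sanity check, the same formula in a couple of lines of Python:</p>
<pre><code>import math

def conv_output_size(i, k, s):
    """Output size of a convolution/pooling layer with no padding."""
    return math.floor((i - k) / s) + 1

print(conv_output_size(24, 3, 2))  # -> 11
</code></pre>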
<p>Your answer would be: 11*11</p> | 2016-05-31 14:32:31.970000+00:00 | 2016-05-31 15:49:38.733000+00:00 | 2016-05-31 15:49:38.733000+00:00 | null | 37,547,815 | <p>Could anyone help me to calculate each step output of cifar10 example of tensorflow?
The input image size is 24*24. But in the max pooling step, the kernel size is 3 and the stride is 2, so (24-3)/2+1 is not an integer. What is wrong here? I am new to deep learning. Thank you for helping</p> | 2016-05-31 13:58:20.743000+00:00 | 2016-05-31 15:49:38.733000+00:00 | null | tensorflow|deep-learning | ['https://arxiv.org/pdf/1603.07285v1.pdf'] | 1
69,406,186 | <p>There are probably quite a few ways to tackle this:</p>
<ol>
<li><p>I suppose your problem could generally fall under the <em><a href="https://arxiv.org/abs/2010.02587" rel="nofollow noreferrer">span identification</a></em> umbrella. A classic approach, used in NER for example, would be to do token-level labeling of the titles within each text and train a model to predict those labels.</p>
</li>
<li><p>You could also use chunking to get verb or noun 'chunks', which may suffice as titles. <a href="https://streamhacker.com/2009/02/23/chunk-extraction-with-nltk/" rel="nofollow noreferrer">Here</a> is an article on noun chunking with NLTK and <a href="https://www.nltk.org/book/ch07.html#fig-chunk-segmentation" rel="nofollow noreferrer">here</a> is NLTK's own chapter on chunking (shared in <a href="https://stackoverflow.com/a/2642400/8758205">this answer</a> to another question).</p>
</li>
<li><p>You could also use a rule-based system if the language used is
fairly predictable. For example, you can hand-craft custom regexes
to capture the essential parts of the task (e.g. everything after
phrases <em>like</em> 'remind me to' and excluding temporal expressions <em>like</em>
'tomorrow at 6pm'); see the toy sketch after this list. This is a limited
and less flexible approach but can work well in certain cases.</p>
</li>
<li><p>Another approach would be to create a dataset of full texts paired with their 'titles', and train a language model (maybe even
QA-style?) to predict these titles from texts. <a href="https://medium.com/analytics-vidhya/extract-the-right-phrase-from-sentence-29aa5f8b9182" rel="nofollow noreferrer">Here</a> is a blog
post about someone working on a task involving extraction of a phrase from a larger text, which is quite different but may still have relevant sections.</p>
</li>
<li><p>It could also be worth looking into text summarization (e.g. <a href="https://huggingface.co/transformers/usage.html#summarization" rel="nofollow noreferrer">with huggingface</a>), although I'm not sure how well that would work on mostly short texts like these and how useful the outputs would be.</p>
</li>
</ol>
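<p>To illustrate the rule-based option, here is a toy Python sketch; the trigger phrases and the temporal pattern are assumptions made up for the example, not an exhaustive grammar:</p>
<pre><code>import re

TRIGGERS = r"(?:remind me to|remember to|don't forget to)"  # hypothetical trigger phrases
TEMPORAL = r"(?:today|tomorrow|tonight|at \d{1,2}(?::\d{2})?\s?(?:am|pm)?)"  # very rough

def extract_title(utterance):
    text = utterance.lower().strip().rstrip(".!?")
    m = re.search(TRIGGERS + r"\s+(.*)", text)  # keep only what follows the trigger
    if m:
        text = m.group(1)
    # strip trailing temporal expressions such as "tomorrow at 6pm"
    text = re.sub(r"\s*" + TEMPORAL + r"(?:\s+" + TEMPORAL + r")*\s*$", "", text)
    return text.capitalize()

print(extract_title("Remind me to water the plants tomorrow at 6PM"))
# -> "Water the plants"
</code></pre>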
<p>There are likely more options out there too, but these are the first that come to mind.</p> | 2021-10-01 12:50:04.913000+00:00 | 2021-10-01 12:50:04.913000+00:00 | null | null | 69,365,307 | <p>I am working on a project in NLP, but I have found a problem. I saw that Google Assistant or <strong>Cortana</strong> has a feature where you tell them to remind you of a task. For example:
You tell Cortana "<em>Remind me to water the plants tomorrow at 6PM</em>". Then <strong>Cortana</strong> creates a task named "<em>Water the plants</em>".</p>
<p>This is the thing I am trying to understand:</p>
<p><a href="https://i.stack.imgur.com/3ZmWk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3ZmWk.png" alt="enter image description here" /></a></p>
<p>It then sets the time, date, etc. But how does it find the title "Water the plants"? Is there any way I can extract the title of a given task using NLP in Python? Please help me if you know anything about this or what it is called.</p> | 2021-09-28 16:24:56.197000+00:00 | 2021-10-01 12:50:04.913000+00:00 | 2021-09-28 17:36:56.193000+00:00 | python-3.x|deep-learning|nlp | ['https://arxiv.org/abs/2010.02587', 'https://streamhacker.com/2009/02/23/chunk-extraction-with-nltk/', 'https://www.nltk.org/book/ch07.html#fig-chunk-segmentation', 'https://stackoverflow.com/a/2642400/8758205', 'https://medium.com/analytics-vidhya/extract-the-right-phrase-from-sentence-29aa5f8b9182', 'https://huggingface.co/transformers/usage.html#summarization'] | 6
49,052,299 | <p>Because it was proposed this way. Residual Connections have been investigated in the following work: <a href="https://arxiv.org/pdf/1603.05027.pdf" rel="noreferrer">https://arxiv.org/pdf/1603.05027.pdf</a>, and they found that Skip -> BN -> RELU -> Conv -> BN -> RELU -> Conv -> Add works best. </p>
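<p>For illustration, a minimal Keras sketch of that full pre-activation ordering (filter counts and input shape are made up; the input channel count must match <code>filters</code> for the addition):</p>
<pre><code>from keras.layers import Input, Conv2D, BatchNormalization, Activation, Add
from keras.models import Model

def preact_residual_block(x, filters=64):
    # Full pre-activation: BN -> ReLU -> Conv -> BN -> ReLU -> Conv, then Add
    shortcut = x
    y = BatchNormalization()(x)
    y = Activation('relu')(y)
    y = Conv2D(filters, 3, padding='same')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(filters, 3, padding='same')(y)
    return Add()([shortcut, y])

inp = Input((32, 32, 64))
model = Model(inp, preact_residual_block(inp))
</code></pre>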
<p>However, the differences in performance are negligible and therefore the original ResNet formulation prevailed. Still, you can read the paper if you want to know what works and what does not.</p> | 2018-03-01 14:50:50.243000+00:00 | 2018-03-01 14:50:50.243000+00:00 | null | null | 49,045,843 | <p>In the ResNet architecture, why is the ReLU activation applied after the element-wise addition with the residual in a residual block, instead of before it?</p> | 2018-03-01 08:49:08.600000+00:00 | 2018-03-01 14:50:50.243000+00:00 | null | computer-vision|deep-learning|resnet | ['https://arxiv.org/pdf/1603.05027.pdf'] | 1 |
35,015,167 | <h3>Hey Ryan!</h3>
<p>I know it's late but I just came across your question hope it's not too late or that you still find some knowledge here.</p>
<p><strong>First</strong> of all, Stackoverflow may not be the best place for this kind of question. First reason to that is you have a conceptual question that is not this site's purpose. Moreover your code runs so it's not even a matter of general programming. <strong>Have a look at <a href="https://stats.stackexchange.com/">stats</a></strong>.</p>
<p><strong>Second</strong> from what I see there is no conceptual error. You're using everything necessary that is:</p>
<ul>
<li>lstm with propper dimensions</li>
<li><code>return_sequences=false</code> just before your <code>Dense</code> layer</li>
<li>linear activation for your output</li>
<li><code>mse</code> cost/loss/objective function</li>
</ul>
<p><strong>Third</strong> However, I find it extremely unlikely that your network learns anything with so few pieces of data. You have to understand that you have less data than parameters here! For the great majority of supervised learning algorithms, the first thing you need is not a good model, it's good data. You cannot learn from so few examples, especially not with a complex model such as LSTM networks. </p>
<p><strong>Fourth</strong> It seems like your target data is made of relatively high values. A first step of pre-processing here could be to standardize the data: center it around zero - that is, translate your data by its mean - and rescale it by its standard deviation. This really helps learning! </p>
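<p>Using the variable names from your script, that standardization could look like the following sketch (fit the statistics on the training targets only, and guard against a zero standard deviation):</p>
<pre><code>import numpy as np

mean, std = y_train.mean(axis=0), y_train.std(axis=0)
std[std == 0] = 1.0                      # avoid division by zero
y_train_std = (y_train - mean) / std
y_test_std = (y_test - mean) / std

# train on y_train_std, then map predictions back to the original scale:
predicted = model.predict(X_test) * std + mean
</code></pre>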
<p><strong>Fifth</strong> In general, here are a few things you should look into to improve learning and reduce overfitting: </p>
<ul>
<li><a href="https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf" rel="nofollow noreferrer">Dropout</a></li>
<li><a href="http://arxiv.org/pdf/1502.03167v3.pdf" rel="nofollow noreferrer">Batch Normalization</a></li>
<li>Other optimizers (such as <a href="https://arxiv.org/pdf/1412.6980" rel="nofollow noreferrer">Adam</a>)</li>
<li><a href="https://arxiv.org/pdf/1211.5063v2.pdf" rel="nofollow noreferrer">Gradient clipping</a></li>
<li><a href="http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf" rel="nofollow noreferrer">Random hyper parameter search</a></li>
<li>(This is not exhaustive, if you're reading this and think something should be added, comment it so it's useful for future readers!)</li>
</ul>
<p><strong>Last but NOT least</strong> I suggest you look at this <a href="http://vict0rsch.github.io/tutorials" rel="nofollow noreferrer">tutorial on Github</a>, especially the <a href="http://vict0rsch.github.io/tutorials/keras/recurrent" rel="nofollow noreferrer">recurrent tutorial for time series with keras</a>.</p>
<p>PS: Daniel Hnyk updated his <a href="http://danielhnyk.cz/predicting-sequences-vectors-keras-using-rnn-lstm/" rel="nofollow noreferrer">post</a> ;)</p> | 2016-01-26 13:40:46.283000+00:00 | 2017-03-30 09:17:23.307000+00:00 | 2017-04-13 12:44:17.480000+00:00 | null | 32,514,704 | <p>I have a problem and at this point I'm completely lost as to how to solve it. I'm using Keras with an LSTM layer to project a time series. I'm trying to use the previous 10 data points to predict the 11th.</p>
<p>Here's the code:</p>
<pre><code>from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import LSTM
def _load_data(data):
"""
data should be pd.DataFrame()
"""
n_prev = 10
docX, docY = [], []
for i in range(len(data)-n_prev):
docX.append(data.iloc[i:i+n_prev].as_matrix())
docY.append(data.iloc[i+n_prev].as_matrix())
if not docX:
pass
else:
alsX = np.array(docX)
alsY = np.array(docY)
return alsX, alsY
X, y = _load_data(df_test)
X_train = X[:25]
X_test = X[25:]
y_train = y[:25]
y_test = y[25:]
in_out_neurons = 2
hidden_neurons = 300
model = Sequential()
model.add(LSTM(in_out_neurons, hidden_neurons, return_sequences=False))
model.add(Dense(hidden_neurons, in_out_neurons))
model.add(Activation("linear"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")
model.fit(X_train, y_train, nb_epoch=10, validation_split=0.05)
predicted = model.predict(X_test)
</code></pre>
<p>So I'm taking the input data (a two-column dataframe), creating X, which is an n by 10 by 2 array, and y, which is an n by 2 array that is one step ahead of the last row in each array of X (labeling the data with the point directly ahead of it).</p>
<p>predicted is returning </p>
<pre><code>[[ 7.56940445, 5.61719704],
[ 7.57328357, 5.62709032],
[ 7.56728049, 5.61216415],
[ 7.55060187, 5.60573629],
[ 7.56717342, 5.61548522],
[ 7.55866942, 5.59696181],
[ 7.57325984, 5.63150951]]
</code></pre>
<p>but I should be getting</p>
<pre><code>[[ 73, 48],
[ 74, 42],
[ 91, 51],
[102, 64],
[109, 63],
[ 93, 65],
[ 92, 58]]
</code></pre>
<p>The original data set only has 42 rows, so I'm wondering if there just isn't enough there to work with? Or am I missing a key step in the modeling process maybe? I've seen some examples using Embedding layers etc, is that something I should be looking at?</p>
<p>Thanks in advance for any help!</p> | 2015-09-10 14:10:56.377000+00:00 | 2017-03-30 09:17:23.307000+00:00 | null | machine-learning|python | ['https://stats.stackexchange.com/', 'https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf', 'http://arxiv.org/pdf/1502.03167v3.pdf', 'https://arxiv.org/pdf/1412.6980', 'https://arxiv.org/pdf/1211.5063v2.pdf', 'http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf', 'http://vict0rsch.github.io/tutorials', 'http://vict0rsch.github.io/tutorials/keras/recurrent', 'http://danielhnyk.cz/predicting-sequences-vectors-keras-using-rnn-lstm/'] | 9 |
58,472,929 | <p>For anyone looking into this topic, some hints.
Lattice-Boltzmann is generally <strong>bandwidth limited</strong>. This means its performance depends mainly on the amount of data that can be loaded from and written to memory.</p>
<ul>
<li><p>Use a highly efficient compiled programming language: <strong>C or C++</strong> are good choices for CPU-based implementations.</p></li>
<li><p>Choose an <strong>architecture with a high bandwidth</strong>. For a CPU this means high clock RAM and a lot of memory channels (quad-channel or more).</p></li>
<li><p>This makes it crucial to use an appropriate <strong>linear memory layout</strong> that makes effective use of <em>cache prefetching</em>: The data is arranged in memory in small portions, so-called cache lines. Whenever a processor accesses an element, the entire cache line it lies in (64 Bytes on modern architectures) is loaded. This means 8 double or 16 float values are loaded at once! While I have not found this to be a problem for multi-core processors, as they share the L3 cache, this should lead to problems on many-core architectures: changes to the same cache line have to be kept in sync, and problems arise when one processor is working on data that another processor is working on (<em>false sharing</em>). This can be avoided by introducing <strong>padding</strong>, meaning you add elements you won't use to fill the rest of the cache line. Assume you want to update a cell with a discretisation of 27 speeds for the D3Q27 lattice; then in the case of doubles (8 Bytes) the data spans 4 distinct cache lines. You should add 5 doubles of padding to fill the last cache line, reaching 32 doubles in total (4 cache lines of 8 doubles each).</p></li>
</ul>
<pre><code>unsigned int const PAD = (64 - sizeof(double)*D3Q27.SPEEDS % 64); ///< padding: number of doubles
size_t const MEM_SIZE_POP = sizeof(double)*NZ*NY*NX*(SPEEDS+PAD); ///< amount of memory to be allocated
</code></pre>
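<p>A quick sanity check of that padding arithmetic (in Python, since the numbers are easier to eyeball than in the C++ constant expression):</p>
<pre><code>CACHE_LINE = 64   # bytes
DOUBLE = 8        # bytes
SPEEDS = 27       # D3Q27

pad_doubles = (CACHE_LINE - (DOUBLE * SPEEDS) % CACHE_LINE) // DOUBLE
print(pad_doubles)  # -> 5  (27 + 5 = 32 doubles = exactly 4 cache lines)
</code></pre>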
<p>Most compilers naturally align the start of the array with a cache line so you don't have to take care of that.</p>
<ul>
<li>The linear indices are inconvenient for accessing. Therefore you should design the accessors as efficiently as possible. You could write a wrapper class. In any case, <strong>inline</strong> those <strong>functions</strong>, meaning every call is replaced by its definition in the code.</li>
</ul>
<pre><code>inline size_t const D3Q27_PopIndex(unsigned int const x, unsigned int const y, unsigned int const z, unsigned int const d)
{
    return (D3Q27.SPEEDS + D3Q27.PAD)*(NX*(NY*z + y) + x) + d; // no population index p here; see the merged-grid variant below
}
</code></pre>
<ul>
<li><p>Furthermore, cache locality can be increased by maximising the ratio between computation and communication, for example using three-dimensional <strong>spatial loop blocking</strong> (<a href="https://stackoverflow.com/questions/50847746/scaling-issues-with-openmp">Scaling issues with OpenMP</a>), meaning the code works on a cube of cells instead of a single cell at a time.</p></li>
<li><p>Generally implementations make use of two distinct populations A and B and perform collision and streaming from one population into the other. This means every value in memory exists twice, once pre- and once post-collision. There exist different strategies for <strong>recombining steps and storing in such a way that you only have to keep one population copy in memory</strong>. For instance see the <em>A-A pattern</em> as proposed by P. Bailey et al. - "Accelerating Lattice Boltzmann Fluid Flow Simulations Using Graphics Processors" (<a href="https://www2.cs.arizona.edu/people/pbailey/Accelerating_GPU_LBM.pdf" rel="nofollow noreferrer">https://www2.cs.arizona.edu/people/pbailey/Accelerating_GPU_LBM.pdf</a>) or the <em>Esoteric Twist</em> by M. Geier &amp; M. Schönherr - "Esoteric Twist: An Efficient in-Place Streaming Algorithmus for the Lattice Boltzmann Method on Massively Parallel Hardware" (<a href="https://pdfs.semanticscholar.org/ea64/3d63667900b60e6ff49f2746211700e63802.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/ea64/3d63667900b60e6ff49f2746211700e63802.pdf</a>). I have implemented the first one using macros, meaning every access of a population calls a macro of the form:</p></li>
</ul>
<pre><code>#define O_E(a,b) a*odd + b*(!odd)
#define READ_f_0 D3Q27_PopIndex(x, y, z, 0, p)
#define READ_f_1 D3Q27_PopIndex(O_E(x_m, x), y, z, O_E( 1, 2), p)
#define READ_f_2 D3Q27_PopIndex(O_E(x_p, x), y, z, O_E( 2, 1), p)
...
#define WRITE_f_0 D3Q27_PopIndex(x, y, z, 0, p)
#define WRITE_f_1 D3Q27_PopIndex(O_E(x_p, x), y, z, O_E( 2, 1), p)
#define WRITE_f_2 D3Q27_PopIndex(O_E(x_m, x), y, z, O_E( 1, 2), p)
...
</code></pre>
<ul>
<li>If you have multiple interacting populations, use <strong>grid merging</strong>. Lay the indices out linearly in memory and put two distinct populations side by side. The accessing of population p then works as follows:</li>
</ul>
<pre><code>inline size_t const D3Q27_PopIndex(unsigned int const x, unsigned int const y, unsigned int const z, unsigned int const d, unsigned int const p = 0)
{
return (D3Q27.SPEEDS*D3Q27.NPOP + D3Q27.PAD)*(NX*(NY*z + y) + x) + D3Q27.SPEEDS*p + d;
}
</code></pre>
<ul>
<li><p>For a regular grid, make the <em>algorithm as predictable as possible</em>. Let <strong>every cell perform collision and streaming</strong> and then do the <em>boundary conditions in reverse</em> afterwards. If you have many cells that do not contribute directly to the algorithm, omit them with a <em>logical mask</em> that you can <em>store in the padding</em> as well!</p></li>
<li><p>Make everything <strong>known to the compiler at compilation time</strong>: Template, for example, boundary conditions with a function that takes care of index changes so you don't have to rewrite every boundary condition.</p></li>
<li><p>Modern architectures have registers that can perform <strong>SIMD</strong> operations, i.e. the same instruction on multiple data. Some processors (AVX-512) can process up to 512 bits of data, and thus 8 doubles (or 16 floats), almost as fast as a single number. This seems to be very attractive for LBM in particular ever since gathering and scattering have been introduced (<a href="https://en.wikipedia.org/wiki/Gather-scatter_(vector_addressing)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Gather-scatter_(vector_addressing)</a>), but with the current bandwidth limitations (maybe it is worth it with DDR5 and processors with few cores) this is in my opinion not worth the hassle: the single-core performance and parallel scaling are better (M. Wittmann et al. - "Lattice Boltzmann Benchmark Kernels as a Testbed for Performance Analysis" - <a href="https://arxiv.org/abs/1711.11468" rel="nofollow noreferrer">https://arxiv.org/abs/1711.11468</a>) but the overall algorithm performs no better, as it is bandwidth limited. So it only makes sense on architectures that are limited by computing capacity rather than bandwidth. On the Xeon Phi architecture the results seem to be remarkable, see Robertsen et al. - "High-performance SIMD implementation of the lattice-Boltzmann method on the Xeon Phi processor" (<a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/cpe.5072" rel="nofollow noreferrer">https://onlinelibrary.wiley.com/doi/abs/10.1002/cpe.5072</a>).</p></li>
</ul>
<p>In my opinion most of this is not worth the effort for simple 2D implementations. Do the easy optimisations there - loop blocking, a linear memory layout - but forget about the more complex access patterns. In 3D the effect can be enormous: I have achieved <em>up to 95% parallel scalability and an overall performance of over 150 Mlups with a D3Q19 on a modern 12-core processor</em>. For more performance switch to more adequate architectures like <em>GPUs</em> with <em>CUDA C</em> that are <em>optimised for bandwidth</em>.</p> | 2019-10-20 12:19:50.053000+00:00 | 2019-10-20 12:38:21.197000+00:00 | 2019-10-20 12:38:21.197000+00:00 | null | 9,684,747 | <p>I'm trying to optimize an algorithm (Lattice Boltzmann) for parallel computing using C++ AMP. I am looking for some suggestions to optimize the memory layout; I just found out that moving one parameter from the structure into another vector (the blocked vector) gave an increase of about 10%.</p>
<p>Anyone got any tips that can further improve this, or something I should take into consideration?
Below is the most time-consuming function that is executed for each timestep, and the structure used for the layout.</p>
<pre><code>struct grid_cell {
// int blocked; // Define if blocked
float n; // North
float ne; // North-East
float e; // East
float se; // South-East
float s;
float sw;
float w;
float nw;
float c; // Center
};
int collision(const struct st_parameters param, vector<struct grid_cell> &node, vector<struct grid_cell> &tmp_node, vector<int> &obstacle) {
int x,y;
int i = 0;
float c_sq = 1.0f/3.0f; // Square of speed of sound
float w0 = 4.0f/9.0f; // Weighting factors
float w1 = 1.0f/9.0f;
float w2 = 1.0f/36.0f;
int chunk = param.ny/20;
float total_density = 0;
    float u_x,u_y; // Average velocities in x and y direction
float u[9]; // Directional velocities
    float d_equ[9]; // Equilibrium densities
float u_sq; // Squared velocity
float local_density; // Sum of densities in a particular node
for(y=0;y<param.ny;y++) {
for(x=0;x<param.nx;x++) {
i = y*param.nx + x; // Node index
// Dont consider blocked cells
if (obstacle[i] == 0) {
// Calculate local density
local_density = 0.0;
local_density += tmp_node[i].n;
local_density += tmp_node[i].e;
local_density += tmp_node[i].s;
local_density += tmp_node[i].w;
local_density += tmp_node[i].ne;
local_density += tmp_node[i].se;
local_density += tmp_node[i].sw;
local_density += tmp_node[i].nw;
local_density += tmp_node[i].c;
// Calculate x velocity component
u_x = (tmp_node[i].e + tmp_node[i].ne + tmp_node[i].se -
(tmp_node[i].w + tmp_node[i].nw + tmp_node[i].sw))
/ local_density;
// Calculate y velocity component
u_y = (tmp_node[i].n + tmp_node[i].ne + tmp_node[i].nw -
(tmp_node[i].s + tmp_node[i].sw + tmp_node[i].se))
/ local_density;
// Velocity squared
u_sq = u_x*u_x +u_y*u_y;
// Directional velocity components;
u[1] = u_x; // East
u[2] = u_y; // North
u[3] = -u_x; // West
u[4] = - u_y; // South
u[5] = u_x + u_y; // North-East
u[6] = -u_x + u_y; // North-West
u[7] = -u_x - u_y; // South-West
u[8] = u_x - u_y; // South-East
            // Equilibrium densities
// Zero velocity density: weight w0
d_equ[0] = w0 * local_density * (1.0f - u_sq / (2.0f * c_sq));
// Axis speeds: weight w1
d_equ[1] = w1 * local_density * (1.0f + u[1] / c_sq
+ (u[1] * u[1]) / (2.0f * c_sq * c_sq)
- u_sq / (2.0f * c_sq));
d_equ[2] = w1 * local_density * (1.0f + u[2] / c_sq
+ (u[2] * u[2]) / (2.0f * c_sq * c_sq)
- u_sq / (2.0f * c_sq));
d_equ[3] = w1 * local_density * (1.0f + u[3] / c_sq
+ (u[3] * u[3]) / (2.0f * c_sq * c_sq)
- u_sq / (2.0f * c_sq));
d_equ[4] = w1 * local_density * (1.0f + u[4] / c_sq
+ (u[4] * u[4]) / (2.0f * c_sq * c_sq)
- u_sq / (2.0f * c_sq));
// Diagonal speeds: weight w2
d_equ[5] = w2 * local_density * (1.0f + u[5] / c_sq
+ (u[5] * u[5]) / (2.0f * c_sq * c_sq)
- u_sq / (2.0f * c_sq));
d_equ[6] = w2 * local_density * (1.0f + u[6] / c_sq
+ (u[6] * u[6]) / (2.0f * c_sq * c_sq)
- u_sq / (2.0f * c_sq));
d_equ[7] = w2 * local_density * (1.0f + u[7] / c_sq
+ (u[7] * u[7]) / (2.0f * c_sq * c_sq)
- u_sq / (2.0f * c_sq));
d_equ[8] = w2 * local_density * (1.0f + u[8] / c_sq
+ (u[8] * u[8]) / (2.0f * c_sq * c_sq)
- u_sq / (2.0f * c_sq));
// Relaxation step
node[i].c = (tmp_node[i].c + param.omega * (d_equ[0] - tmp_node[i].c));
node[i].e = (tmp_node[i].e + param.omega * (d_equ[1] - tmp_node[i].e));
node[i].n = (tmp_node[i].n + param.omega * (d_equ[2] - tmp_node[i].n));
node[i].w = (tmp_node[i].w + param.omega * (d_equ[3] - tmp_node[i].w));
node[i].s = (tmp_node[i].s + param.omega * (d_equ[4] - tmp_node[i].s));
node[i].ne = (tmp_node[i].ne + param.omega * (d_equ[5] - tmp_node[i].ne));
node[i].nw = (tmp_node[i].nw + param.omega * (d_equ[6] - tmp_node[i].nw));
node[i].sw = (tmp_node[i].sw + param.omega * (d_equ[7] - tmp_node[i].sw));
node[i].se = (tmp_node[i].se + param.omega * (d_equ[8] - tmp_node[i].se));
}
}
}
return 1;
}
</code></pre> | 2012-03-13 13:19:12.017000+00:00 | 2019-10-20 12:38:21.197000+00:00 | null | c++|parallel-processing|gpgpu|c++-amp | ['https://stackoverflow.com/questions/50847746/scaling-issues-with-openmp', 'https://www2.cs.arizona.edu/people/pbailey/Accelerating_GPU_LBM.pdf', 'https://pdfs.semanticscholar.org/ea64/3d63667900b60e6ff49f2746211700e63802.pdf', 'https://en.wikipedia.org/wiki/Gather-scatter_(vector_addressing)', 'https://arxiv.org/abs/1711.11468', 'https://onlinelibrary.wiley.com/doi/abs/10.1002/cpe.5072'] | 6 |
70,073,994 | <p>This is quite a broad question. There are many ways you could build a new OpenIE model.</p>
<p>Are you trying to build a rule-based model? A deep learning model? A combination of both? What aspect of Stanford CoreNLP's OpenIE annotator do you want to improve?</p>
<p>It's quite difficult to answer this question without your level of domain knowledge, deep learning skills, etc. I suggest you first review a survey on open information extraction, like this one: <a href="https://arxiv.org/abs/1806.05599" rel="nofollow noreferrer">https://arxiv.org/abs/1806.05599</a>, and decide on an architecture.</p>
<p>Then, you could build your own supplementary training set, or a dataset from scratch that suits your needs, which can be used to train an already existing architecture.
Of course, you are free to modify the architecture as well. If you feel a rule-based model would suffice, you could either edit the appropriate code, or post-process results from a rule-based model.</p> | 2021-11-23 00:24:14.667000+00:00 | 2021-11-23 00:24:14.667000+00:00 | null | null | 66,460,244 | <p>I am using stanford-corenlp-4.2.0 for extracting data from unstructured text. It seems OpenIE is helpful but should be improved for my specific scenario.
Is it possible to train a new OpenIE model, and how?
Thanks</p> | 2021-03-03 15:39:35.307000+00:00 | 2021-11-23 00:24:14.667000+00:00 | null | stanford-nlp | ['https://arxiv.org/abs/1806.05599'] | 1 |
53,968,114 | <p>There's an <a href="https://distill.pub/2016/deconv-checkerboard/" rel="nofollow noreferrer">excellent article</a> about this on Distill. They state there:</p>
<blockquote>
<p>One approach is to make sure you use a kernel size that is divided by
your stride, avoiding the overlap issue. This is equivalent to
"sub-pixel convolution," a technique which has recently had success in
image super-resolution [8]. However, while this approach helps, it is
still easy for deconvolution to fall into creating artifacts.</p>
<p>Another approach is to separate out upsampling to a higher resolution
from convolution to compute features. For example, you might resize
the image (using nearest-neighbor interpolation or bilinear
interpolation) and then do a convolutional layer. This seems like a
natural approach, and roughly similar methods have worked well in
image super-resolution (eg. [9]).</p>
</blockquote>
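<p>For illustration, here is a minimal sketch (my own, not from the article) of the "resize then convolve" upsampling block in tf.keras, as a drop-in alternative to a transposed convolution; the layer sizes are placeholders:</p>
<pre><code>from tensorflow.keras import layers

def upsample_block(x, filters):
    # Upsample spatially first (nearest-neighbor or bilinear interpolation) ...
    x = layers.UpSampling2D(size=2, interpolation="nearest")(x)
    # ... then compute features with an ordinary convolution, which avoids
    # the uneven kernel overlap that Conv2DTranspose can produce.
    return layers.Conv2D(filters, kernel_size=3, padding="same",
                         activation="relu")(x)
</code></pre>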
<p>A quick Google search also reveals <a href="https://arxiv.org/pdf/1806.02658.pdf" rel="nofollow noreferrer">a</a> <a href="https://arxiv.org/pdf/1707.02937.pdf" rel="nofollow noreferrer">number</a> of papers and <a href="https://github.com/prajjwal1/super_resolution" rel="nofollow noreferrer">source code</a> dealing with this issue.</p> | 2018-12-29 08:53:36.663000+00:00 | 2018-12-29 08:53:36.663000+00:00 | null | null | 53,968,062 | <p>I am studying a DBPN model, which is a deep-learning model for super resolution, with the TensorFlow framework. <strong>I am suffering from the "checkerboard problem" at scale x4 restoration.</strong> I know that the cause is using input patches (32x32 size) due to limited computing power, so that there is a problem at every 32-pixel stride.</p>
<p>Could anyone suggest solutions to cope with this problem? Thanks!!</p>
<p>I already checked that the kernel size should be a multiple of the stride size.</p>
<p><img src="https://i.stack.imgur.com/P0I9J.png" alt="enter image description here"></p> | 2018-12-29 08:45:01.120000+00:00 | 2018-12-29 08:54:12.860000+00:00 | 2018-12-29 08:54:12.860000+00:00 | python|conv-neural-network | ['https://distill.pub/2016/deconv-checkerboard/', 'https://arxiv.org/pdf/1806.02658.pdf', 'https://arxiv.org/pdf/1707.02937.pdf', 'https://github.com/prajjwal1/super_resolution'] | 4 |
44,981,645 | <p>There's a paper by Barlow+04 <a href="https://arxiv.org/abs/physics/0406120" rel="nofollow noreferrer">https://arxiv.org/abs/physics/0406120</a> on finding the mean of variables with asymmetric error bars. You could perhaps use these techniques.</p>
<p>The brute force route that I take is to draw many realisations of the variable from a split-normal distribution (<a href="https://en.wikipedia.org/wiki/Split_normal_distribution" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Split_normal_distribution</a>) and store the best-fitting polynomial parameters for them. I then compute the median and 1-sigma upper/lower error bars (from the 84th and 16th percentile from the median, respectively) for each polynomial parameter.</p>
<p>The code below, in Python 3, does this. There are functions for computing split-normal values, the errors from the percentiles, and the linear fit.</p>
<p>Hope this helps. </p>
<pre><code>#!/usr/bin/env python3
from random import choice, gauss

from numpy import polyfit


def split_normal(mus, sigmas_u68, sigmas_l68):
    """
    RET: A list of split-normal values, one per input point.
    """
    values = []
    for mu, sigma_u68, sigma_l68 in zip(mus, sigmas_u68, sigmas_l68):
        sigma = choice([sigma_u68, -sigma_l68])  # upper or lower branch
        g = abs(gauss(0.0, 1.0)) * sigma + mu
        values.append(g)
    return values


def errors_84_16(x):
    """
    RET: 1-sigma upper/lower error bars from the 84/16th percentile
         from the median.
    """
    n = len(x)
    index_med = n // 2                   # median.
    index_84 = int(round(n * 0.84135))   # 84th percentile from median.
    index_16 = int(round(n * 0.15865))
    x_sorted = sorted(x)
    x_med = x_sorted[index_med]
    x_u68 = x_sorted[index_84] - x_med   # 1-sigma upper error.
    x_l68 = x_med - x_sorted[index_16]   # 1-sigma lower error.
    return x_med, x_u68, x_l68


def asymmetric_polyfit(x, y, y_u68, y_l68, n_mc=500):
    """
    DES: Solves y = a + b * x for asymmetric y error bars.
    RET: [a, a_u68, a_l68, b, b_u68, b_l68].
    """
    a_mc = []
    b_mc = []
    for i in range(0, n_mc):
        y_mc = split_normal(y, y_u68, y_l68)
        pars = polyfit(x, y_mc, 1)       # linear fit: pars = [b, a]
        a_mc.append(pars[1])
        b_mc.append(pars[0])
    a, a_u68, a_l68 = errors_84_16(a_mc)
    b, b_u68, b_l68 = errors_84_16(b_mc)
    return a, a_u68, a_l68, b, b_u68, b_l68


def example():
    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [5.0, 8.0, 11.0, 14.0, 17.0]     # 2 + 3x
    y_u68 = [0.5, 0.5, 0.5, 0.5, 0.5]
    y_l68 = [1.5, 1.5, 1.5, 1.5, 1.5]
    pars = asymmetric_polyfit(x, y, y_u68, y_l68)
    print(pars)


if __name__ == '__main__':
    example()
</code></pre>
Data on the y-axis have asymmetric errors, i.e., </p>
<p><img src="https://latex.codecogs.com/png.latex?y_i%3D10%5E%7B%2B2%7D_%7B-1.5%7D" alt="y_i=10^{+2}_{-1.5}"></p>
<p>I want to fit these data with a linear function.
I can do this fit in a number of way in python, but all of them have the same problem, that is, how to get the errors of the fit parameters too.</p>
<p>Very related to <a href="https://stackoverflow.com/questions/12706528/how-to-calculate-error-for-polynomial-fitting-in-slope-and-intercept">this</a>, but impossible (or at least, not straightforward) to use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html" rel="nofollow noreferrer">scipy.optimize.curve_fit</a> since my data have asymmetric errors on the ordinate (y-axis) in order to get the slope error as well.</p>
<p>Then, how can I calculate errors on slopes of linear fits when y-error bars are asymmetric?</p>
<p>Is there any python function for this?</p> | 2014-05-28 03:04:42.733000+00:00 | 2017-07-08 01:27:09.910000+00:00 | 2017-05-23 11:53:27.760000+00:00 | python|numpy|scipy | ['https://arxiv.org/abs/physics/0406120', 'https://en.wikipedia.org/wiki/Split_normal_distribution'] | 2 |
59,497,315 | <p>There is already some work on that, you can use either Gramian Angular Fields (GAF) or Markov Transition Fields (MTF), a good description is in <a href="https://arxiv.org/abs/1506.00327" rel="nofollow noreferrer">Imaging Time-Series to Improve Classification and Imputation</a>. Also, some other works used recurrent plots as <a href="https://arxiv.org/abs/1911.07625" rel="nofollow noreferrer">Deep-Gap: deep learning framework</a>. Imaging TS is an interesting way to think about them so you can use e.g. CNNs easily. But which method you like to use? BTW be aware this might not be an "efficient" way to classify time series :)</p> | 2019-12-27 07:14:30.897000+00:00 | 2020-07-15 09:34:31.270000+00:00 | 2020-07-15 09:34:31.270000+00:00 | null | 59,496,603 | <p>I have a dataset where I have 12000+ data points and 25 features out of which last feature is the class label. This is classification problem. Now, I want to convert every data points into image, . I have no idea how to do that. Please help. I work on Python. If anyone have could provide sample code I will be grateful. Thanks in advance. </p> | 2019-12-27 05:59:39.463000+00:00 | 2020-07-15 09:34:31.270000+00:00 | null | python|machine-learning|classification|supervised-learning | ['https://arxiv.org/abs/1506.00327', 'https://arxiv.org/abs/1911.07625'] | 2 |
27,636,540 | <p>Word-vectors, or so-called distributed representations, have a long history by now, starting perhaps from the work of Y. Bengio (<em>Bengio, Y., Ducharme, R., & Vincent, P. (2001). A neural probabilistic language model. NIPS.</em>), where he obtained word-vectors as a by-product of training a neural-net language model.</p>
<p>A lot of research has demonstrated that these vectors do capture semantic relationships between words (see for example <a href="http://research.microsoft.com/pubs/206777/338_Paper.pdf" rel="noreferrer">http://research.microsoft.com/pubs/206777/338_Paper.pdf</a>). Also, this important paper (<a href="http://arxiv.org/abs/1103.0398" rel="noreferrer">http://arxiv.org/abs/1103.0398</a>) by Collobert et al. is a good starting point for understanding word vectors, the way they are obtained, and the way they are used.</p>
<p>Besides word2vec there are a lot of methods to obtain them. Examples include SENNA embeddings by Collobert et al. (<a href="http://ronan.collobert.com/senna/" rel="noreferrer">http://ronan.collobert.com/senna/</a>), RNN embeddings by T. Mikolov that can be computed using RNNToolkit (<a href="http://www.fit.vutbr.cz/~imikolov/rnnlm/" rel="noreferrer">http://www.fit.vutbr.cz/~imikolov/rnnlm/</a>), and many more. For English, ready-made embeddings can be downloaded from these web-sites. word2vec really uses a shallow skip-gram model (not a deep neural network model). Another fast code for computing word representations is GloVe (<a href="http://www-nlp.stanford.edu/projects/glove/" rel="noreferrer">http://www-nlp.stanford.edu/projects/glove/</a>). It is an open question whether deep neural networks are essential for obtaining good embeddings or not.</p>
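<p>To make this concrete, here is a minimal sketch of training word vectors with gensim's word2vec implementation (toy corpus; parameter names follow gensim 4.x — older versions use <code>size</code> instead of <code>vector_size</code>):</p>
<pre><code>from gensim.models import Word2Vec

# Each "sentence" is a list of tokens; real training needs a large corpus.
sentences = [["the", "cat", "sat"], ["the", "dog", "barked"]]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)  # sg=1: skip-gram
vec = model.wv["cat"]                   # the learned 100-dimensional word vector
similar = model.wv.most_similar("cat")  # nearest words by cosine similarity
</code></pre>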
<p>Depending on your application, you may prefer using different types of word-vectors, so it's a good idea to try several popular algorithms and see what works better for you. </p> | 2014-12-24 11:54:36.997000+00:00 | 2014-12-24 11:54:36.997000+00:00 | null | null | 27,561,971 | <p>How do I create word vectors? I used one-hot encoding to create word vectors, but they are very large and do not generalize to semantically similar words. So I have heard about word vectors built with neural networks that capture word similarity. I wanted to know how to generate such vectors (the algorithm), or good material to start with for creating word vectors.</p> | 2014-12-19 08:07:54.573000+00:00 | 2018-07-13 09:18:21.050000+00:00 | 2014-12-19 08:42:09.953000+00:00 | nlp|neural-network|word2vec | ['http://research.microsoft.com/pubs/206777/338_Paper.pdf', 'http://arxiv.org/abs/1103.0398', 'http://ronan.collobert.com/senna/', 'http://www.fit.vutbr.cz/~imikolov/rnnlm/', 'http://www-nlp.stanford.edu/projects/glove/'] | 5
56,360,396 | <p>Let me explain the operations you mentioned in a bit of detail so you understand the differences between their intuition and usage:</p>
<h3>Cascaded cross-channel parametric pooling:</h3>
<p>This is introduced in the <a href="https://arxiv.org/pdf/1312.4400.pdf" rel="nofollow noreferrer">Network-in-Network</a> paper, where it is shown to be equivalent to a convolution with a <code>1x1</code> kernel (see the <code>Conv2D(num_filters, (1, 1))</code> section below). The same paper also introduces <em>global average pooling</em>, implemented in Keras as <code>GlobalAveragePooling2D()</code>, which averages over the output of each feature map in the previous layer.</p>
<p>Global average pooling is a structural regularizer that enforces correspondence between feature maps
and categories, so feature maps can be interpreted as category confidence maps. It reduces the parameter count and sums up spatial information, and hence is more robust to spatial translations of the input.</p>
<p><code>GlobalAveragePooling2D()</code> is generally used without <code>Dense()</code> layers in the model before it.</p>
<h3>Conv1D:</h3>
<p><code>Conv1D()</code> is a convolution operation exactly analogous to <code>Conv2D()</code>, but it applies along only one dimension. <code>Conv1D()</code> is generally used on sequences or other 1D data, not as much on images.</p>
<h3>Depthwise Separable Convolution:</h3>
<p>Quoting from the Keras <a href="https://keras.io/layers/convolutional/" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>Separable convolutions consist in first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes together the resulting output channels. The depth_multiplier argument controls how many output channels are generated per input channel in the depthwise step.</p>
</blockquote>
<p>This <a href="https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728" rel="nofollow noreferrer">blog</a> explains the depthwise separable convolution pretty well.</p>
<h3>Conv2D(num_filters, (1, 1)):</h3>
<p>This is generally known as <code>1x1</code> convolution, introduced in the <a href="https://arxiv.org/pdf/1312.4400.pdf" rel="nofollow noreferrer">Network-in-Network</a> paper.</p>
<p>The <code>1x1</code> convolutional filters are used to reduce/increase dimensionality in the filter dimension, without affecting the spatial dimensions. This is also used in the <a href="https://arxiv.org/pdf/1409.4842.pdf" rel="nofollow noreferrer">Google Inception</a> architecture for dimensionality reduction in filter space.</p>
<p>In your particular case, I am not exactly sure which of these techniques you can use. I do not think <code>Conv1D</code> would be of much use. You can definitely use <code>GlobalMaxPooling</code> or <code>GlobalAveragePooling</code> as long as you do not use <code>Dense</code> before them; this helps summarize the spatial information. Depthwise separable convolution can be used as well in place of your <code>Conv2D</code> layers. <code>Conv2D(num_filters, (1, 1))</code> is very helpful for dimensionality reduction in filter space, mostly towards the end of your model architecture.</p>
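<p>For instance, here is one hedged sketch (illustrative layer sizes, not a tuned architecture) of how these pieces could be combined for a 10-band input like yours, with a <code>1x1</code> convolution mixing the spectral bands right at the start:</p>
<pre><code>from tensorflow.keras import layers, models

num_classes = 15
model = models.Sequential([
    # 1x1 convolution: mixes the 10 spectral bands per pixel,
    # before any spatial filtering happens.
    layers.Conv2D(32, (1, 1), activation='relu', input_shape=(32, 32, 10)),
    layers.SeparableConv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.SeparableConv2D(128, (3, 3), activation='relu'),
    layers.GlobalAveragePooling2D(),   # no Dense layer before it
    layers.Dense(num_classes, activation='softmax'),
])
</code></pre>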
<p>If you follow these resources, you should get a better understanding of the operations and see how they apply to your problem.</p> | 2019-05-29 12:14:53.823000+00:00 | 2019-05-29 12:14:53.823000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 55,926,841 | <p>I am developing a CNN in keras to classify satellite imagery that has 10 spectral bands. I'm getting decent accuracy with the network below (~60% val accuracy across 15 classes) but I want to better incorporate the relationships between spectral bands at a single pixel which can yield a lot of information on the pixel's class. I see a lot of papers doing this but it is often called different things. For example:</p>
<ul>
<li>Cascaded cross-channel parametric pooling</li>
<li>Conv1D</li>
<li>Depthwise Separable Convolution</li>
<li>Conv2D(num_filters, (1, 1))</li>
</ul>
<p>And I'm not certain about the differences between these approaches (if there are any) and how I should implement this in my simple CNN below. I'm also not clear if I should do this at the very beginning or towards the end. I'm inclined to do it right at the start when the channels are still the raw spectral data rather than the feature maps.</p>
<pre><code>input_shape = (32,32,10)
num_classes = 15
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=input_shape))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
</code></pre> | 2019-04-30 18:30:19.700000+00:00 | 2019-05-29 12:14:53.823000+00:00 | null | python|tensorflow|keras|conv-neural-network | ['https://arxiv.org/pdf/1312.4400.pdf', 'https://keras.io/layers/convolutional/', 'https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728', 'https://arxiv.org/pdf/1312.4400.pdf', 'https://arxiv.org/pdf/1409.4842.pdf'] | 5 |
40,447,745 | <p>Let me quote the setoids subsection (sect. 2.4) from a paper by J. Gross, A. Chlipala, D.I. Spivak -- <a href="https://arxiv.org/pdf/1401.7694.pdf" rel="nofollow noreferrer">Experience Implementing a Performant Category-Theory Library in Coq</a> (2014):</p>
<blockquote>
<p>A setoid [5] is a carrier type equipped with an equivalence relation; a map of setoids is a function between the carrier types and a proof that the function respects the equivalence relations of its domain and codomain. Many authors [11, 12, 15, 18] choose to use a <strong>setoid of morphisms</strong>, which allows for the definition of the category of set(oid)s, as well as the category of (small) categories, without assuming functional extensionality, and allows for the definition of categories where the objects are quotient types.</p>
</blockquote>
<p>The source referred to above as [12] is the Math-Classes library.
However, the authors then proceed with a caveat:</p>
<blockquote>
<p>However, there is significant overhead associated with using setoids everywhere, which can lead to slower compile times. Every type that we talk about needs to come with a relation and a proof that this relation is an equivalence relation. Every function that we use needs to come with a proof that it sends equivalent elements to equivalent elements. Even worse, if we need an equivalence relation on the universe of βtypes with equivalence relations,β we need to provide a transport function between equivalent types that respects the equivalence relations of those types.</p>
</blockquote> | 2016-11-06 09:19:38.077000+00:00 | 2016-11-06 09:28:30.890000+00:00 | 2016-11-06 09:28:30.890000+00:00 | null | 40,445,387 | <p>I am having trouble understanding the following Coq definition of Categories (defined <a href="https://github.com/math-classes/math-classes/blob/v8.5/interfaces/abstract_algebra.v" rel="nofollow noreferrer">here</a>), which involves <code>Setoid</code>. And I don't understand why <code>Setoid</code> is necessary or its role here. </p>
<pre><code>Class Category O `{!Arrows O} `{∀ x y: O, Equiv (x ⟶ y)}
`{!CatId O} `{!CatComp O}: Prop :=
{ arrow_equiv :> ∀ x y, Setoid (x ⟶ y)
; comp_proper :> ∀ x y z, Proper ((=) ==> (=) ==> (=)) (comp x y z)
; comp_assoc :> ArrowsAssociative O
; id_l :> ∀ x y, LeftIdentity (comp x y y) cat_id
; id_r :> ∀ x y, RightIdentity (comp x x y) cat_id }.
(* note: no equality on objects. *)
</code></pre>
<p>The basic notion of Categories I learned so far only requires that </p>
<ol>
<li>there are arrows between objects, </li>
<li>the arrows compose (respects associativity) and </li>
<li>identity arrows exist and behave.</li>
</ol>
<p>I understand that Setoid is about equivalence classes, but I can't see where Setoids come in. Can someone please help explain the definition above and explain the difference from the usual category definition without Setoids? </p>
65,143,043 | <p>Model parallelism implementation for the GPT-2 model:
as per my understanding, parallelism is implemented as shown in the picture below, where the marked blocks are computed in parallel.
<a href="https://i.stack.imgur.com/KFhte.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KFhte.png" alt="enter image description here" /></a></p>
<p>Fig. (a), the MLP, says:</p>
<blockquote>
<p>f and g are conjugate, f is an identity operator in the forward pass
and all-reduce in the backward pass while g is an all-reduce in
forward and identity in backward.</p>
</blockquote>
<p>Similarly, the self-attention block works as shown in the picture below:
<a href="https://i.stack.imgur.com/H00ht.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H00ht.png" alt="enter image description here" /></a></p>
<p>From the experiments given in the <a href="https://arxiv.org/pdf/1909.08053.pdf" rel="nofollow noreferrer">paper</a>, 1.2 billion parameters fit on a single GPU, whereas 8 billion parameters require 8 GPUs (8-way).</p>
<p>96 is the constant hidden size per attention head.
As per Table 2 from the <a href="https://arxiv.org/pdf/1909.08053.pdf" rel="nofollow noreferrer">paper</a>, the total hidden size (number of heads × 96) is scaled with the parameter count.</p> | 2020-12-04 11:49:23.283000+00:00 | 2020-12-04 11:49:23.283000+00:00 | null | null | 64,040,071 | <p>I am trying to understand the implementation details of <a href="https://github.com/NVIDIA/Megatron-LM#inverse-cloze-task-ict-pretraining" rel="nofollow noreferrer">MegatronLM</a>, which has both model and data parallelism. On their <a href="https://nv-adlr.github.io/MegatronLM" rel="nofollow noreferrer">site</a> and in their research <a href="https://arxiv.org/pdf/1909.08053.pdf" rel="nofollow noreferrer">paper</a>, they mention how they used intra-layer parallelism, which is similar to Mesh TensorFlow. I am confused by some details.</p>
<p>As shown in the picture below, my understanding is that the computation inside each of the 4 red circles can be parallelized by intra-layer splitting, but the MLP must happen after self-attention, so only 2 red-circled blocks can be parallelized at the same time. The paper says the model parallelism is 8-way. My first question is: <strong>does this indicate they split each block into 4 intra-layer parts (8/2)?</strong></p>
<p>(8-way divided by 2-blocks)
<a href="https://i.stack.imgur.com/5agbD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5agbD.jpg" alt="enter image description here" /></a></p>
<p>The paper also mentioned</p>
<blockquote>
<p>To have consistent GEMM sizes in the self attention layer, the hidden size per attention head is kept constant at 96 while the number of heads and layers are varied to obtain configurations ranging from 1 billion to 8 billion parameters.</p>
</blockquote>
<p>My second question is <strong>What does the 96 hidden size refer to here?</strong></p>
<p>I am totally new to distributed training, I probably misunderstood something. Any clarification on this topic would be very appreciated! Thanks!</p> | 2020-09-24 05:29:01.447000+00:00 | 2020-12-04 11:49:23.283000+00:00 | 2020-09-24 05:35:29.517000+00:00 | tensorflow|parallel-processing|nlp|distributed-computing|transformer-model | ['https://i.stack.imgur.com/KFhte.png', 'https://i.stack.imgur.com/H00ht.png', 'https://arxiv.org/pdf/1909.08053.pdf', 'https://arxiv.org/pdf/1909.08053.pdf'] | 4 |
58,807,650 | <p>There is a paper <a href="https://arxiv.org/pdf/1802.06360.pdf" rel="nofollow noreferrer">Anomaly Detection Using One-Class Neural Networks</a>
which combines One-Class SVM and Neural Networks.</p>
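<p>For reference, the one-class SVM half of that combination is available directly in scikit-learn; a minimal sketch (toy data) that trains on a single class and flags new points as similar (+1) or anomalous (-1):</p>
<pre><code>import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))             # only "normal" samples
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
labels = clf.predict(rng.normal(size=(10, 5)))  # +1 = similar, -1 = anomaly
</code></pre>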
<p>Here is <a href="https://github.com/raghavchalapathy/oc-nn" rel="nofollow noreferrer">source code</a>. However, I've had difficulty connecting the source code and the paper.</p> | 2019-11-11 19:39:45.273000+00:00 | 2019-11-11 19:39:45.273000+00:00 | null | null | 39,019,567 | <p>I just want to know if a neural network can be trained with a single class of data. I have a set of data that I want to train a neural network with. After training it, I want to give new data (for testing) to the trained neural network to check if it can recognize it as being similar to the training sample or not. </p>
<p>Is this possible with neural network? If yes, will that be a supervised learning or unsupervised.</p>
<p>I know neural networks can be used for classification if there are multiple classes but I have not seen with a single class before. A good explanation and link to any example will be much appreciated. Thanks </p> | 2016-08-18 13:31:09.790000+00:00 | 2021-05-23 08:35:37.073000+00:00 | 2018-10-24 09:55:14.470000+00:00 | neural-network | ['https://arxiv.org/pdf/1802.06360.pdf', 'https://github.com/raghavchalapathy/oc-nn'] | 2 |
55,784,654 | <p>First, you are not creating your permutation correctly. The correct syntax, best seen on page 23 in <a href="https://arxiv.org/pdf/1307.7042.pdf" rel="nofollow noreferrer">your link</a>, is</p>
<pre><code>a = Perm()(1, 2, 3)(4, 15, 6)(7, 8, 9)
</code></pre>
<p>Next, that module is set up so that theoretically it permutes all non-negative integers, with finitely many of them mapping to values other than themselves. So theoretically there is no highest number in a permutation in that module. As your link states on page 5,</p>
<blockquote>
<p>The perm size <em>n</em> is undeο¬ned because keys not deο¬ned explicitly are equal to their values (<em>p[i] == i</em>).</p>
</blockquote>
<p>So in one respect your quest to "extract the highest number from a permutation" is meaningless. However, at any given time, the data structure representing a permutation in that module does have a largest number. The module tries to hide that information from the user, to keep the theoretical viewpoint of acting on all non-negative integers. But since the <code>Perm</code> class is derived from the <code>dict</code> built-in type, you can find the current highest number in that structure with</p>
<pre><code>highestnum = max(a)
</code></pre>
<p>In your example, that does return the value <code>15</code>. But be aware that the largest value could easily change, without changing the permutation that is being represented. For example, if you execute <code>print(a[20])</code>, that does not seem to change the permutation <code>a</code>, and comparing the value of <code>a</code> to its previous value using <code>==</code> yields <code>True</code>. But now <code>max(a)</code> yields the value <code>20</code>. Thus <code>max(a)</code> is not consistent and depends on the current internal representation of the permutation, so it is not wise to use this value.</p>
<p>Fortunately, you can find a more consistent "highest number", namely the highest number that is changed by the permutation:</p>
<pre><code>highestnum = a.max()
</code></pre>
<p>This also returns the result you want, <code>15</code>. Accessing <code>a[20]</code> or any other value does not change <code>a.max()</code>, so you should satisfy yourself with the <code>max()</code> value.</p>
<p>By the way, regarding your linked document, here is <a href="http://ojs.pythonpapers.org/index.php/tpp/article/viewFile/258/229" rel="nofollow noreferrer">a better link</a> to the documentation, which is a finished version of the pre-print you linked to. And here is <a href="https://code.google.com/archive/p/perms-dict/source/default/source" rel="nofollow noreferrer">a link to the source code</a>. However, I referred to your link in what I wrote above. The Python code in that document uses Python 2.6: I made some changes so it runs in Python 3.7 and used that to check my answer.</p> | 2019-04-21 16:38:25.600000+00:00 | 2019-04-22 18:05:51.350000+00:00 | 2019-04-22 18:05:51.350000+00:00 | null | 55,776,777 | <p>I want to extract the highest number from a permutation. I am using the groups module right now so the output in the below code should be 15 </p>
<pre><code>
from groups import *
a = Perm((1, 2, 3), (4, 15, 6), (7, 8, 9))
max([x for x in a])
</code></pre> | 2019-04-20 19:02:42.490000+00:00 | 2019-04-22 18:05:51.350000+00:00 | 2019-04-21 17:14:33.327000+00:00 | python|math|permutation|finite-group-theory | ['https://arxiv.org/pdf/1307.7042.pdf', 'http://ojs.pythonpapers.org/index.php/tpp/article/viewFile/258/229', 'https://code.google.com/archive/p/perms-dict/source/default/source'] | 3 |
53,078,908 | <p>The benefits of long skip connections (the term used for those long connections) have been studied by Drozdzal et al. (<a href="https://arxiv.org/pdf/1608.04117.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1608.04117.pdf</a>), who showed that they improve gradient flow and therefore keep the values of the weight matrices stable. It is a somewhat hand-wavy explanation, but it is the best one we have. However, they do improve results in segmentation tasks.</p>
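<p>For concreteness, a long skip connection with the crop-and-concatenate step (explained below) might look like this in tf.keras — a minimal sketch with illustrative shapes taken from the valid-convolution setting:</p>
<pre><code>from tensorflow.keras import layers, Input

enc = Input(shape=(64, 64, 128))   # feature map from the contracting path
dec = Input(shape=(56, 56, 128))   # upsampled expansive-path feature map

crop = (64 - 56) // 2              # remove 4 border pixels on each side
enc_cropped = layers.Cropping2D(cropping=crop)(enc)
merged = layers.Concatenate(axis=-1)([enc_cropped, dec])  # channel axis (BHWC)
</code></pre>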
<p>The crop operation is used in this example because the author uses valid convolutions, which reduce the output size, so the dimensions no longer match. In Tensorflow you would use something like <code>tf.image.resize_images</code> to get the same height/width dimensions as your feature maps in the expansive part and then use <code>tf.concat</code> to concatenate the feature maps along the channel axis (typically the 4th axis in BHWC layout).</p> | 2018-10-31 08:10:41.753000+00:00 | 2018-10-31 08:10:41.753000+00:00 | null | null | 53,068,877 | <p>I have a question regarding the copy and crop arrows in the Unet architecture picture:<a href="https://i.stack.imgur.com/SnQYM.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/SnQYM.jpg" alt="enter image description here"></a></p>
<p>I understand the implementation of the contracting path and the transfer to the expansive path, but I don't get these intra-connections. What do copy and crop mean, and what benefit does this operation bring? Is it like a skip connection in ResNet? </p>
27,478,966 | <p>I have worked on this topic as part of my PhD research. You can find a few optimization algorithms <a href="http://www.wisdom.weizmann.ac.il/~bagon/matlab.html#LSCC" rel="nofollow">here</a>. You might find that combining these optimization algorithms with a <a href="http://arxiv.org/abs/1210.7070" rel="nofollow">multiscale framework</a> can help both speed up the run-time and improve the convergence of the optimization process.</p> | 2014-12-15 07:09:41.133000+00:00 | 2014-12-15 07:09:41.133000+00:00 | null | null | 27,452,092 | <p>I am trying to do image segmentation by correlation clustering. I need to partition a graph. Is there a suitable library in MATLAB or Python to implement this?</p> | 2014-12-12 21:04:51.133000+00:00 | 2014-12-15 07:09:41.133000+00:00 | null | graph|cluster-analysis|partitioning|image-segmentation | ['http://www.wisdom.weizmann.ac.il/~bagon/matlab.html#LSCC', 'http://arxiv.org/abs/1210.7070'] | 2
50,188,567 | <p>No, you are not limited to linear activation functions. An example of that is <a href="https://arxiv.org/pdf/1706.08838.pdf" rel="nofollow noreferrer">this work</a>, where they use the hidden state of the GRU layers as an embedding for the input. The hidden state is obtained by using non-linear tanh and sigmoid functions in its computation.</p>
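<p>As a minimal sketch (illustrative sizes, in Keras), a non-linear bottleneck simply means using a non-linear activation on the code layer:</p>
<pre><code>from tensorflow.keras import layers, models

inp = layers.Input(shape=(784,))
h = layers.Dense(128, activation="relu")(inp)
code = layers.Dense(32, activation="relu", name="bottleneck")(h)  # non-linear bottleneck
h = layers.Dense(128, activation="relu")(code)
out = layers.Dense(784, activation="sigmoid")(h)

autoencoder = models.Model(inp, out)
encoder = models.Model(inp, code)  # encoder.predict(x) gives the compressed feature
</code></pre>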
<p>Also, there is nothing wrong with 'ignoring' the negative values. The sparsity may, in fact, be beneficial. It can enhance the representation. The noise that can be created by other functions such as identity or sigmoid function may introduce false dependencies where there are none. By using ReLU we can represent the lack of dependency properly (as a zero) as opposed to some near zero value which is likely for e.g. sigmoid function. </p> | 2018-05-05 10:44:04.670000+00:00 | 2018-05-05 10:44:04.670000+00:00 | null | null | 50,187,127 | <p>I'm currently trying to use an autoencoder network for dimensionality reduction.
(i.e. using the bottleneck activation as the compressed feature)</p>
<p>I noticed that a lot of studies that used autoencoder for this task uses a linear bottleneck layer.</p>
<p>By intuition, I think this makes sense, since the use of a non-linear activation function may reduce the bottleneck feature's capability to represent the principal information contained within the original feature.
(e.g., ReLU ignores the negative values and sigmoid suppresses values too high or too low)</p>
<p>However, is this correct? And is using linear bottleneck layer for autoencoder necessary?</p>
<p>If it's possible to use a non-linear bottleneck layer, what activation function would be the best choice?</p>
<p>Thanks.</p> | 2018-05-05 07:58:23.890000+00:00 | 2018-05-05 10:44:04.670000+00:00 | null | neural-network|autoencoder | ['https://arxiv.org/pdf/1706.08838.pdf'] | 1 |
47,177,692 | <p>Short answer: <strong>no</strong>, neither in <a href="https://keras.io/optimizers/#adam" rel="noreferrer">Keras</a> nor in <a href="https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer" rel="noreferrer">Tensorflow</a> [EDIT: see UPDATE at the end]</p>
<p>Long answer: as already mentioned in the comments, Adam already incorporates something like momentum. Here is some relevant corroboration:</p>
<p>From the highly recommended <a href="http://ruder.io/optimizing-gradient-descent/index.html#adam" rel="noreferrer"><em>An overview of gradient descent optimization algorithms</em></a> (available also as a <a href="https://arxiv.org/abs/1609.04747" rel="noreferrer">paper</a>):</p>
<blockquote>
<p>In addition to storing an exponentially decaying average of past squared gradients u[t] like Adadelta and RMSprop, Adam also keeps an exponentially decaying average of past gradients m[t], similar to momentum</p>
</blockquote>
<p>From <a href="http://cs231n.github.io/neural-networks-3/#ada" rel="noreferrer">Stanford CS231n: CNNs for Visual Recognition</a>:</p>
<blockquote>
<p>Adam is a recently proposed update that looks a bit like RMSProp with momentum</p>
</blockquote>
<p>Notice that some frameworks actually include a <code>momentum</code> parameter for Adam, but this is actually the <code>beta1</code> parameter; here is <a href="https://cntk.ai/pythondocs/cntk.learners.html#cntk.learners.adam" rel="noreferrer">CNTK</a>:</p>
<blockquote>
<p><strong>momentum</strong> (float, list, output of <code>momentum_schedule()</code>) β momentum schedule. Note that this is the beta1 parameter in the Adam paper. For additional information, please refer to the <a href="https://docs.microsoft.com/en-us/cognitive-toolkit/BrainScript-SGD-Block#converting-learning-rate-and-momentum-parameters-from-other-toolkits" rel="noreferrer">this CNTK Wiki article</a>.</p>
</blockquote>
<p>That said, there is an ICLR 2016 paper titled <a href="https://openreview.net/forum?id=OM0jvwB8jIp57ZJjtNEZ" rel="noreferrer">Incorporating Nesterov momentum into Adam</a>, along with an <a href="https://github.com/tdozat/Optimization" rel="noreferrer">implementation skeleton</a> in Tensorflow by the author - cannot offer any opinion on this, though.</p>
<p><strong>UPDATE</strong>: Keras indeed includes now an optimizer called <code>Nadam</code>, based on the ICLR 2016 paper mentioned above; from the <a href="https://keras.io/optimizers/#nadam" rel="noreferrer">docs</a>:</p>
<blockquote>
<p>Much like Adam is essentially RMSprop with momentum, Nadam is Adam RMSprop with Nesterov momentum.</p>
</blockquote>
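<p>Usage is a drop-in replacement for Adam; a minimal sketch (assuming an already-built <code>model</code>; 0.002 is the documented default learning rate):</p>
<pre><code>from keras.optimizers import Nadam

model.compile(optimizer=Nadam(lr=0.002),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
</code></pre>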
<p>It is also included in Tensorflow as a contributed module <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/opt/NadamOptimizer" rel="noreferrer"><code>NadamOptimizer</code></a>.</p> | 2017-11-08 10:47:41.577000+00:00 | 2019-08-13 14:35:19.627000+00:00 | 2019-08-13 14:35:19.627000+00:00 | null | 47,168,616 | <p>The question says it all. Since Adam performs well with most datasets, I want to try momentum tuning for the Adam optimizer. So far I have only found a momentum option for SGD in Keras.</p>
63,321,697 | <p>Here is the same first order Taylor series method as given in minorlogic's answer (using the JOML API):</p>
<pre><code>public static Quaterniond integrateAngularVelocity(
double avx, double avy, double avz, double dt) {
double len = Math.sqrt(avx * avx + avy * avy + avz * avz);
double theta = len * dt * 0.5;
if (len > 1.0e-12) {
double w = Math.cos(theta);
double s = Math.sin(theta) / len;
return new Quaterniond(avx * s, avy * s, avz * s, w);
} else {
return new Quaterniond(0, 0, 0, 1);
}
}
</code></pre>
<p>The origin and meaning of this method is explained in the background section of <a href="https://www.researchgate.net/publication/257451700_A_novel_Quaternion_integration_approach_for_describing_the_behaviour_of_non-spherical_particles" rel="nofollow noreferrer">A novel Quaternion integration approach for describing the behaviour of non-spherical particles</a> by Facheng Zhao and Berend van Wachem (2013), and in significant detail in sections 4.5 (Time Derivatives) and 4.6 (Time-integration of Rotation Rates) of the excellent <a href="https://arxiv.org/pdf/1711.02508.pdf" rel="nofollow noreferrer">Quaternion kinematics for the error-state Kalman filter</a> by Joan Solà (2017).</p>
<p>This is only a first-order Taylor series expansion of the time integral of angular velocity. A second-order correction is given in Eq. 227 of Solà:</p>
<pre><code>double avgAVX = 0.5 * (avPrev.x + avCurr.x);
double avgAVY = 0.5 * (avPrev.y + avCurr.y);
double avgAVZ = 0.5 * (avPrev.z + avCurr.z);
Quaterniond integral = IMUIntegration.integrateAngularVelocity(
avgAVX, avgAVY, avgAVZ, dt);
Vector3d correction = avPrev.cross(avCurr, new Vector3d()).mul(dt * dt / 24.0);
Quaterniond qCorrection = new Quaterniond(
correction.x, correction.y, correction.z, 0.0);
Quaterniond deltaOrientation = integral.add(qCorrection).normalize();
</code></pre>
<p>However, the quaternion addition in the final step has no meaningful geometric interpretation, and increases the numerical error over the error introduced by integration at discrete time steps. Therefore, it is better to use a multiplication-only method, such as the predictor-corrector
direct multiplication (PCDM) method introduced in Zhao and van Wachem.</p>
<p>This method was improved upon in <a href="https://link.springer.com/content/pdf/10.1007/s00707-016-1670-x.pdf" rel="nofollow noreferrer">Improved quaternion-based integration scheme for rigid body motion</a> by L. J. H. Seelen, J. T. Padding, and J. A. M. Kuipers (2016). If I'm reading the math right, and if you simplify by assuming there are no external torques or forces, and if you throw away the linear velocity components to focus only on angular velocity and rotation, and if you work only in local coordinates rather than switching back and forth between local and world coordinates (which the authors do in order to add in the contributions of external torques, velocities and forces), then you end up with a method that simply integrates the angular velocities between the previous and current IMU readings, and then between the current and next IMU readings, sampling at any desired resolution along these two interpolations, and applying the <code>integrateAngularVelocity</code> routine shown above with the smaller <code>dt</code> value induced by the integration sampling interval:</p>
<pre><code>private static final int NUM_INTEGRATION_STEPS = 6; // Should be even
public static List<IMUIntegrated> integrateAngularVelocities(List<IMURaw> raw,
double dt) {
List<IMUIntegrated> integrated = new ArrayList<>();
for (int i = 0; i < raw.size(); i++) {
// Find average between curr and prev/next angular velocities
IMURaw prevRawAV = i == 0 ? null : raw.get(i - 1);
IMURaw currRawAV = raw.get(i);
IMURaw nextRawAV = i == raw.size() - 1 ? null
: raw.get(i + 1);
Vector3d prevAV = prevRawAV == null ? new Vector3d()
: prevRawAV.getAngularVelocity();
Vector3d currAV = currRawAV.getAngularVelocity();
Vector3d nextAV = nextRawAV == null ? new Vector3d()
: nextRawAV.getAngularVelocity();
Vector3d prevAvgAV = prevAV.add(currAV, new Vector3d()).mul(0.5);
Vector3d nextAvgAV = currAV.add(nextAV, new Vector3d()).mul(0.5);
// Find integration interval
double integrationDt = dt / NUM_INTEGRATION_STEPS;
// Linearly interpolate and integrate angular velocities
Quaterniond deltaOrientation = new Quaterniond();
for (int j = 0; j <= NUM_INTEGRATION_STEPS; j++) {
double frac;
Vector3d startAV, endAV;
if (j < NUM_INTEGRATION_STEPS / 2) {
frac = (j / (double) (NUM_INTEGRATION_STEPS / 2));
startAV = prevAvgAV;
endAV = currAV;
} else {
frac = ((j - NUM_INTEGRATION_STEPS / 2)
/ (double) (NUM_INTEGRATION_STEPS / 2));
startAV = currAV;
endAV = nextAvgAV;
}
// Linearly interpolate angular velocities
Vector3d interpolatedAV = startAV.mul(1.0 - frac, new Vector3d())
.add(endAV.mul(frac, new Vector3d()));
// Integrate to produce a quaternion
deltaOrientation = integrateAngularVelocity(
interpolatedAV.x, interpolatedAV.y, interpolatedAV.z,
integrationDt)
// Concatenate onto cumulative transformation
.mul(deltaOrientation);
}
integrated.add(new IMUIntegrated(currRawAV.timestamp,
deltaOrientation.normalize()));
}
return integrated;
}
</code></pre> | 2020-08-09 01:34:13.573000+00:00 | 2020-08-09 08:04:49.730000+00:00 | 2020-08-09 08:04:49.730000+00:00 | null | 24,197,182 | <p>I have an orientation expressed with a quaternion and an angular velocity expressed as either a quaternion or a number (radians per second around the original orientation). I understand how to do this using conversion to axis-angle but that method is rather computationally expensive and is not a realistic option. How would I go about modifying the orientation quaternion given a time interval (in seconds)? I need a solution for both cases (the quaternion and the number). However, converting one case into the other is acceptable and may be preferable depending on the computational complexity of the various algorithms/formulae required for conversions.</p> | 2014-06-13 03:22:30.873000+00:00 | 2020-08-09 08:04:49.730000+00:00 | 2014-06-13 04:51:45.977000+00:00 | 3d|rotation|quaternions | ['https://www.researchgate.net/publication/257451700_A_novel_Quaternion_integration_approach_for_describing_the_behaviour_of_non-spherical_particles', 'https://arxiv.org/pdf/1711.02508.pdf', 'https://link.springer.com/content/pdf/10.1007/s00707-016-1670-x.pdf'] | 3 |
27,810,813 | <p>If you have an adjacency matrix, you can find triangles by squaring the matrix and seeing if the original matrix and square matrix have a non-zero entry in the same place.</p>
<p>A naive matrix multiplication takes time <code>O(n^3)</code>, but there are fast matrix multiplication algorithms that do better. One of the best known is the <a href="http://en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_algorithm" rel="nofollow noreferrer">Coppersmith-Winograd</a> algorithm, which runs in <code>O(n^2.4)</code> time. That means the algorithm goes something like this (a NumPy sketch follows the list below):</p>
<ul>
<li>Use <code>O(V^2)</code> time to convert to an adjacency matrix.</li>
<li>Use <code>O(V^2.4)</code> time to compute the square of the adjacency matrix.</li>
<li>Use <code>O(V^2)</code> time to check over the matrices for coinciding non-zero entries.</li>
<li>The index of the row and column where you find coinciding non-zero entries in (if any) tell you two of the involved nodes.</li>
<li>Use <code>O(V)</code> time to narrow down the third node common to both the known nodes.</li>
</ul>
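<p>A NumPy sketch of that idea (using ordinary cubic-time matrix multiplication, since NumPy does not expose Coppersmith-Winograd; it assumes a 0/1 adjacency matrix with no self-loops):</p>
<pre><code>import numpy as np

def find_triangle(adj):
    A = np.asarray(adj)
    A2 = A @ A                              # A2[i, j] = number of 2-step paths i -> j
    hits = np.argwhere((A > 0) & (A2 > 0))  # edge (i, j) plus a 2-step path i -> j
    if hits.size == 0:
        return None                         # graph is triangle-free
    i, j = hits[0]
    k = np.flatnonzero((A[i] > 0) & (A[j] > 0))[0]  # common neighbour of i and j
    return i, j, k
</code></pre>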
<p>So overall this takes <code>O(V^2.4)</code> time; more precisely it takes however long matrix multiplication takes. You can dynamically switch between this algorithm and the if-any-edge-end-points-have-a-common-neighbor algorithm <a href="https://stackoverflow.com/a/10193372/52239">that amit explains in their answer</a> to improve that to <code>O(V min(V^1.4, E))</code>.</p>
<p>Here's a <a href="http://i11www.iti.uni-karlsruhe.de/extra/publications/sw-fclt-05_wea.pdf" rel="nofollow noreferrer">paper that goes more in-depth into the problem</a>.</p>
<p>It's kind of neat how dependent-on-theoretical-discoveries this problem is.
If conjectures about matrix multiplication actually being quadratic turn out to be true, then you would get a really nice time bound of <code>O(V^2)</code> or <code>O(V^2 log(V))</code> or something like that. But if quantum computers work out, we'll be able to do <a href="http://arxiv.org/abs/quant-ph/0310134" rel="nofollow noreferrer">even better than that</a> (something like <code>O(V^1.3)</code>)!</p> | 2015-01-07 03:01:06.823000+00:00 | 2015-01-07 03:08:13.453000+00:00 | 2017-05-23 12:17:57.623000+00:00 | null | 10,193,228 | <p>Here is an exercise in the <a href="http://www.algorist.com/">Algorithm Design Manual</a>.</p>
<blockquote>
<p>Consider the problem of determining whether a given undirected graph G
= (V, E) contains a triangle or cycle of length 3.</p>
<p>(a) Give an O(|V|^3) algorithm to find a triangle if one exists. </p>
<p>(b) Improve
your algorithm to run in time O(|V|·|E|). You may assume |V| ≤ |E|.</p>
<p>Observe that these bounds gives you time to convert between the
adjacency matrix and adjacency list representations of G.</p>
</blockquote>
<p>Here are my thoughts:</p>
<p>(a) If the graph is given as an adjacency list, I can convert the list to a matrix in O(|V|^2) time. Then I do:</p>
<pre><code>for (int i = 0;i < n;i++)
for (int j = i+1;j < n;j++)
if (matrix[i][j] == 1)
for (int k = j+1;k < n;k++)
if (matrix[i][k] == 1 && matrix[j][k] == 1)
return true;
</code></pre>
<p>This should give an O(|V|^3) test for a triangle.</p>
<p>(b) My first intuition is that if the graph is given as an adjacency list, then I will do a BFS. Whenever a cross edge is found, for example, <code>if y-x is a cross edge</code>, then I will <code>check whether parent[y] == parent[x], if true, then a triangle is found</code>.</p>
<p>Could anyone tell me whether my thinking is correct or not?</p>
<p>Also, for (b), I am not sure about its complexity. Should it be O(|V| + |E|)?</p>
<p>How can I do it in O(|V|*|E|)?</p> | 2012-04-17 14:30:28.773000+00:00 | 2016-05-19 03:04:00.090000+00:00 | 2015-09-11 03:35:15.467000+00:00 | algorithm|data-structures|graph-theory | ['http://en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_algorithm', 'https://stackoverflow.com/a/10193372/52239', 'http://i11www.iti.uni-karlsruhe.de/extra/publications/sw-fclt-05_wea.pdf', 'http://arxiv.org/abs/quant-ph/0310134'] | 4 |
63,316,678 | <p>Griffin-Lim is an iterative method to estimate the phase information needed when going from a magnitude-only spectrogram back to a waveform. The number of iterations in the librosa implementation can be adjusted (<code>n_iter</code>). Reducing it will speed things up a bit, but it is in general slow.</p>
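<p>A minimal sketch of adjusting the iteration count, and of skipping the estimation entirely when the original phase is available (more on both options below); it assumes a magnitude STFT <code>S</code> of compatible shape and, for the second variant, the unmodified signal <code>y</code>:</p>
<pre><code>import numpy as np
import librosa

y_fast = librosa.griffinlim(S, n_iter=16)   # fewer iterations than the default 32

# Skip estimation entirely and reuse the original phase (see point 2 below).
phase = np.angle(librosa.stft(y))
y_keep_phase = librosa.istft(S * np.exp(1j * phase))
</code></pre>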
<p>Going back to a waveform after spectral processing can be sped up by:</p>
<ol>
<li>Using one-shot approximate methods, like a neural network. For example <a href="https://arxiv.org/abs/1808.06719" rel="nofollow noreferrer">Fast Spectrogram Inversion using Multi-head Convolutional Neural Networks</a></li>
<li>By using the original phase information instead of estimating it from the modified magnitude spectrogram. This requires that the phase spectrogram is available (not just the magnitude), but that is often the case when doing spectral processing on audio files.</li>
</ol> | 2020-08-08 14:45:02.307000+00:00 | 2020-08-08 14:45:02.307000+00:00 | null | null | 63,294,375 | <p>I am currently trying to convert a mel spectrogram back into an audio file; however, librosa's mel_to_stft function is taking a long time (upwards of 15 minutes) to read in a 30 second .wav file sampled at 384kHz.</p>
<p>The following is my code:</p>
<pre><code>import librosa
import samplerate
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

# Code for high pass filter
def butter_highpass(cutoff, fs, order=5):
    nyq = 0.5 * fs
    normal_cutoff = cutoff / nyq
    b, a = butter(order, normal_cutoff, btype='high', analog=False)
    return b, a

def butter_highpass_filter(data, cutoff, fs, order=5):
    b, a = butter_highpass(cutoff, fs, order=order)
    y = filtfilt(b, a, data)
    return y

def high_pass_filter(data, sr):
    # set as a highpass filter for 500 Hz
    filtered_signal = butter_highpass_filter(data, 500, sr, order=5)
    return filtered_signal

example_dir = '/Test/test.wav'
sr, data = wavfile.read(example_dir)
des_sr = 44100
data_resamp = samplerate.resample(data, des_sr / sr, 'sinc_best')
data_hp = high_pass_filter(data_resamp, des_sr)
mel_spect = librosa.feature.melspectrogram(y=data_resamp, sr=des_sr)
S = librosa.feature.inverse.mel_to_stft(mel_spect)
y = librosa.griffinlim(S)
</code></pre> | 2020-08-07 02:34:57.653000+00:00 | 2022-08-15 10:27:13.477000+00:00 | null | python|audio|spectrogram|librosa|mfcc | ['https://arxiv.org/abs/1808.06719'] | 1 |
71,237,215 | <p>Milletari et al. already explain this in <a href="https://arxiv.org/pdf/1606.04797.pdf" rel="nofollow noreferrer">the V-Net paper</a>, where they propose this squared formulation. They suggest that the ROI may occupy only a very small region of the whole scan, so the learning process is likely to be biased towards the background. Since you say your ROI is about 4% of the whole image, maybe you're facing a similar issue.</p>
<p><a href="https://i.stack.imgur.com/L9P4K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L9P4K.png" alt="enter image description here" /></a></p> | 2022-02-23 12:47:05.777000+00:00 | 2022-02-23 12:47:05.777000+00:00 | null | null | 66,536,963 | <p>I'm experiencing an interesting and frustrating issue with the Dice loss used in image segmentation with Unet.</p>
<p>I have to segment images in two classes: background and region of interest. The region of interest is typically 4% of the pixels of the whole image. Images are about 1600x1600 pixels.
I found that the Dice loss works much better than cross-entropy.
However, if I use the standard Dice loss formula my Unet does not provide a correct output, i.e. all the pixels are predicted as background.</p>
<p>With standard Dice loss I mean:</p>
<p><a href="https://i.stack.imgur.com/HO2Ix.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HO2Ix.png" alt="enter image description here" /></a></p>
<p>where x_{c,i} is the probability predicted by Unet for pixel i and for channel c, and y_{c,i} is the corresponding ground-truth label.
The modified version I use is:</p>
<p><a href="https://i.stack.imgur.com/srLBM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/srLBM.png" alt="enter image description here" /></a></p>
<p>Note the squared x at the denominator.</p>
<p>For some reason the latter one makes the net produce a correct output, although the loss converges to ~0.5.</p>
<p>I do not understand why the latter works and the former doesn't. The latter works even if I use the power of three at the denominator.</p>
<p>Here below my implementation:</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F


def make_one_hot(labels, classes):
    one_hot = torch.FloatTensor(labels.size()[0], classes, labels.size()[2], labels.size()[3]).zero_().to(labels.device)
    target = one_hot.scatter_(1, labels.data, 1)
    return target


class DiceLoss(nn.Module):
    def __init__(self):
        super(DiceLoss, self).__init__()

    def forward(self, output, target):
        target = make_one_hot(target.unsqueeze(dim=1), classes=output.size()[1])
        output = F.softmax(output, dim=1)
        numerator = (output * target).sum(dim=(2, 3))
        denominator = output.pow(2).sum(dim=(2, 3)) + target.sum(dim=(2, 3))
        iou = numerator / denominator
        return 1 - iou.mean()
</code></pre> | 2021-03-08 20:46:47.047000+00:00 | 2022-02-23 12:47:05.777000+00:00 | 2021-03-09 06:03:00.673000+00:00 | deep-learning|computer-vision|pytorch|image-segmentation|loss-function | ['https://arxiv.org/pdf/1606.04797.pdf', 'https://i.stack.imgur.com/L9P4K.png'] | 2 |
19,159,731 | <p>You may check my 2009 paper, where I derived an "exact" approach to generate "random points" inside different lattice shapes: "hexagonal", "rhombus", and "triangular". As far as I know it is the "most optimized approach" because for every 2D position you only need two random samples. Other works derived earlier require 3 samples for each 2D position!</p>
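<p>As a rough illustration, here is a simple rejection-free sampler based on the standard three-rhombus decomposition of the hexagon (it uses an extra discrete draw to pick a rhombus, rather than the exact two-sample scheme of the paper):</p>
<pre><code>import math
import random

def random_point_in_hexagon(radius=1.0):
    """Uniform point in a regular hexagon centred at the origin."""
    # Three rhombi spanned by pairs of these edge vectors tile the hexagon.
    a = (radius, 0.0)
    b = (-0.5 * radius, math.sqrt(3.0) / 2.0 * radius)
    c = (-0.5 * radius, -math.sqrt(3.0) / 2.0 * radius)
    u, v = random.choice([(a, b), (b, c), (c, a)])
    s, t = random.random(), random.random()
    return (s * u[0] + t * v[0], s * u[1] + t * v[1])
</code></pre>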
<p>Hope this answers the question!</p>
<p><a href="http://arxiv.org/abs/1306.0162" rel="nofollow">http://arxiv.org/abs/1306.0162</a></p> | 2013-10-03 12:52:45.313000+00:00 | 2013-10-03 13:07:54.217000+00:00 | 2013-10-03 13:07:54.217000+00:00 | null | 3,239,611 | <p>I'm using procedural techniques to generate graphics for a game I am writing.</p>
<p>To generate some woods I would like to scatter trees randomly within a regular hexagonal area centred at <0,0>.</p>
<p>What is the best way to generate these points in a uniform way?</p> | 2010-07-13 17:14:46.920000+00:00 | 2013-10-03 13:07:54.217000+00:00 | 2010-07-13 19:45:38.530000+00:00 | algorithm|language-agnostic|random|procedural-generation | ['http://arxiv.org/abs/1306.0162'] | 1 |
42,696,088 | <p>If your intention is to model lists a la Haskell (aka "lazy lists"), then you should use something like:</p>
<pre><code>domain 'a list = Nil ("[]") | Cons (lazy 'a) (lazy "'a list") (infix ":" 65)
</code></pre>
<p>(note the "lazy" annotations for <code>Cons</code>). Then you do not need the assumptions on your third equation. E.g.,</p>
<pre><code>fixrec append :: "'a list → 'a list → 'a list"
where
"append $ [] $ ys = ys"
| "append $ (x : xs) $ ys = x : (append $ xs $ ys)"
</code></pre>
<p>for what you called <code>ssnoc</code> and</p>
<pre><code>fixrec reverse :: "'a list → 'a list"
where
"reverse $ [] = []"
| "reverse $ (x : xs) = append $ xs $ (x : [])"
</code></pre>
<p>for reverse.</p>
<p>However, since this type of list allows for "infinite" values, you will not be able to prove that <code>reverse $ (reverse $ xs) = xs</code> holds in general (because it doesn't). This only holds for finite lists, which can be characterized inductively. (See, e.g., <a href="https://arxiv.org/abs/1306.1340" rel="nofollow noreferrer">https://arxiv.org/abs/1306.1340</a> for a more detailed discussion.)</p>
<p>If, however, you do not want to model lazy lists (i.e., really don't want the "lazy" annotations in your datatype), then your equations might not hold without the assumptions. Now if the equations have those assumptions, they can only be applied in cases where the assumptions are satisfied. So again, you will not be able to prove (without additional assumptions) that <code>reverse $ (reverse $ xs) = xs</code>. It might again be possible to obtain the appropriate assumptions by an inductive predicate, but I did not investigate further.</p>
<p><strong>Update:</strong> After playing a bit with strict lists in HOLCF, I have some more comments:</p>
<p>First, my guess is that the preconditions in the fixrec specifications are necessary due to the internal construction, but we are able to get rid of them afterwards.</p>
<p>I managed to prove your lemma as follows. For completeness I give the whole content of my theory file. First make sure that notation doesn't clash with existing one:</p>
<pre><code>no_notation
List.Nil ("[]") and
Set.member ("op :") and
Set.member ("(_/ : _)" [51, 51] 50)
</code></pre>
<p>Then define the type of strict lists</p>
<pre><code>domain 'a list = Nil ("[]") | Cons 'a "'a list" (infixr ":" 65)
</code></pre>
<p>and the function <code>snoc</code>.</p>
<pre><code>fixrec snoc :: "'a list → 'a → 'a list"
where
"snoc $ [] $ y = y : []"
| "x β β₯ βΉ xs β β₯ βΉ snoc $ (x:xs) $ y = x : snoc $ xs $ y"
</code></pre>
<p>Now, we obtain an unconditional variant of the second equation by:</p>
<ol>
<li>Showing that <code>snoc</code> is strict in its first argument (note the usage of <code>fixrec_simp</code>).</li>
<li>Showing that <code>snoc</code> is strict in its second argument (here induction is needed).</li>
<li>And finally, obtaining the equation by case analysis on all three variables.</li>
</ol>
<p><b></b></p>
<pre><code>lemma snoc_bot1 [simp]: "snoc $ ⊥ $ y = ⊥" by fixrec_simp
lemma snoc_bot2 [simp]: "snoc $ xs $ ⊥ = ⊥" by (induct xs) simp_all
lemma snoc_Cons [simp]: "snoc $ (x:xs) $ y = x : snoc $ xs $ y"
  by (cases "x = ⊥"; cases "xs = ⊥"; cases "y = ⊥"; simp)
</code></pre>
<p>Then the function <code>reverse</code></p>
<pre><code>fixrec reverse :: "'a list → 'a list"
where
"reverse $ [] = []"
| "x β β₯ βΉ xs β β₯ βΉ reverse $ (x : xs) = snoc $ (reverse $ xs) $ x"
</code></pre>
<p>and again an unconditional variant of its second equation:</p>
<pre><code>lemma reverse_bot [simp]: "reverse $ ⊥ = ⊥" by fixrec_simp
lemma reverse_Cons [simp]: "reverse $ (x : xs) = snoc $ (reverse $ xs) $ x"
  by (cases "x = ⊥"; cases "xs = ⊥"; simp)
</code></pre>
<p>Now the lemma about <code>reverse</code> and <code>snoc</code> you also had:</p>
<pre><code>lemma reverse_snoc [simp]: "reverse $ (snoc $ xs $ y) = y : reverse $ xs"
by (induct xs) simp_all
</code></pre>
<p>And finally the desired lemma:</p>
<pre><code>lemma reverse_reverse [simp]:
"reverse $ (reverse $ xs) = xs"
by (induct xs) simp_all
</code></pre>
<p>The way I obtained this solution was by just looking into the remaining subgoals of your failed attempts, getting more failed attempts, looking into the remaining subgoals again, and repeating ...</p> | 2017-03-09 13:00:53.803000+00:00 | 2017-03-10 11:39:32.937000+00:00 | 2017-03-10 11:39:32.937000+00:00 | null | 42,694,830 | <p>Here is a simple theory written in plain HOL:</p>
<pre><code>theory ToyList
imports Main
begin
no_notation Nil ("[]") and Cons (infixr "#" 65) and append (infixr "@" 65)
hide_type list
hide_const rev
datatype 'a list = Nil ("[]") | Cons 'a "'a list" (infixr "#" 65)
primrec snoc :: "'a list => 'a => 'a list" (infixr "#>" 65)
where
"[] #> y = y # []" |
"(x # xs) #> y = x # (xs #> y)"
primrec rev :: "'a list => 'a list"
where
"rev [] = []" |
"rev (x # xs) = (rev xs) #> x"
lemma rev_snoc [simp]: "rev(xs #> y) = y # (rev xs)"
apply(induct_tac xs)
apply(auto)
done
theorem rev_rev [simp]: "rev(rev xs) = xs"
apply(induct_tac xs)
apply(auto)
done
end
</code></pre>
<p><code>snoc</code> is the opposite of <code>cons</code>. It adds an item to the end of the list.</p>
<p>I want to prove a similar lemma via HOLCF. As a first stage I consider only strict lists. I declared the domain of strict lists in HOLCF. Also I declared two recursive functions:</p>
<ul>
<li><code>ssnoc</code> - appends an item to the end of a list</li>
<li><code>srev</code> - reverses a list</li>
</ul>
<p>Prefix <code>s</code> means "strict".</p>
<pre><code>theory Test
imports HOLCF
begin
domain 'a SList = SNil | SCons "'a" "'a SList"
fixrec ssnoc :: "'a SList → 'a → 'a SList"
where
"ssnoc⋅SNil⋅x = SCons⋅x⋅SNil" |
"ssnoc⋅⊥⋅x = ⊥" |
"x ≠ ⊥ ∧ xs ≠ ⊥ ⟹ ssnoc⋅(SCons⋅x⋅xs)⋅y = SCons⋅x⋅(ssnoc⋅xs⋅y)"
fixrec srev :: "'a SList → 'a SList"
where
"srev⋅⊥ = ⊥" |
"srev⋅SNil = SNil" |
"x ≠ ⊥ ∧ xs ≠ ⊥ ⟹ srev⋅(SCons⋅x⋅xs) = ssnoc⋅(srev⋅xs)⋅x"
lemma srev_singleton [simp]:
"srev β
(SCons β
a β
SNil) = SCons β
a β
SNil"
apply(induct)
apply(simp_all)
done
lemma srev_ssnoc [simp]:
"srev β
(ssnoc β
xs β
a) = SCons β
a β
(srev β
xs)"
apply(induct xs)
apply(simp_all)
done
lemma srev_srev [simp]:
"srev β
(srev β
xs) = xs"
apply(induct xs)
apply(simp_all)
done
end
</code></pre>
<p>I'm trying to prove that reversing a list twice yields the original list (the <code>srev_srev</code> lemma). I have declared two helper lemmas:</p>
<ul>
<li><code>srev_singleton</code> - reverse of the singleton list is the original singleton list</li>
<li><code>srev_ssnoc</code> - the reversal of a list equals the last item of the original list followed by the reversal of the remaining items</li>
</ul>
<p>But I can't prove any of the lemmas. Could you point out the errors?</p>
<p>Also, why is the precondition <code>"x ≠ ⊥ ∧ xs ≠ ⊥"</code> necessary in the function definitions? And why should I declare <code>"srev⋅⊥ = ⊥"</code> and <code>"ssnoc⋅⊥⋅x = ⊥"</code> explicitly? I guess that in HOLCF, by default, functions are undefined if any of the arguments is undefined.</p> | 2017-03-09 11:58:09.257000+00:00 | 2017-03-10 11:41:20.273000+00:00 | 2017-03-10 11:41:20.273000+00:00 | isabelle | ['https://arxiv.org/abs/1306.1340'] | 1
72,261,948 | <p>The author of the <a href="https://arxiv.org/abs/1810.04805" rel="nofollow noreferrer">original BERT paper</a> answered it (kind of) in a <a href="https://github.com/google-research/bert/issues/43#issuecomment-435980269" rel="nofollow noreferrer">comment on GitHub</a>.</p>
<blockquote>
<p>The tanh() thing was done early to try to make it more interpretable but it probably doesn't matter either way.</p>
</blockquote>
<p>I agree it doesn't fully answer "whether" <code>tanh</code> is preferable, but from the looks of it, it'll probably work with any activation.</p> | 2022-05-16 15:45:45.057000+00:00 | 2022-05-16 15:45:45.057000+00:00 | null | null | 71,331,411 | <pre><code>class BERTPooler(nn.Module):
    def __init__(self, config):
        super(BERTPooler, self).__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.activation = nn.Tanh()

    def forward(self, hidden_states):
        # We "pool" the model by simply taking the hidden state corresponding
        # to the first token.
        first_token_tensor = hidden_states[:, 0]
        pooled_output = self.dense(first_token_tensor)
        pooled_output = self.activation(pooled_output)
        return pooled_output
</code></pre> | 2022-03-03 02:22:36.107000+00:00 | 2022-05-16 15:45:45.057000+00:00 | null | nlp|bert-language-model | ['https://arxiv.org/abs/1810.04805', 'https://github.com/google-research/bert/issues/43#issuecomment-435980269'] | 2
28,152,569 | <p>I've started writing a deep convolutional neural network library for OpenCL at <a href="https://github.com/hughperkins/ClConvolve/tree/master" rel="nofollow">https://github.com/hughperkins/ClConvolve/tree/master</a> . So far, it supports: </p>
<ul>
<li>convolutional layers</li>
<li>max-pooling</li>
<li>softmax</li>
<li>random translations layer</li>
<li>random patches layer</li>
<li>normalization layer</li>
<li>multinet (aka 'multi-column', per <a href="http://arxiv.org/pdf/1202.2745.pdf" rel="nofollow">http://arxiv.org/pdf/1202.2745.pdf</a> )</li>
<li>fully-connected</li>
</ul>
<p>... running on the GPU, using OpenCL. You can specify the network architecture on the commandline, like <code>100C5-MP2-100C5-MP2-100C4-MP2-300N-100N-6N</code>.</p>
<p>Edit: can get 99.55% test accuracy on MNIST now :-)</p> | 2015-01-26 14:54:14.717000+00:00 | 2015-02-07 09:33:59.333000+00:00 | 2015-02-07 09:33:59.333000+00:00 | null | 15,609,216 | <p>I am searching for a neural network sample code in <code>OpenCL</code>, that I might optimize using GPU kernels. Please help me as I am a beginner in <code>OpenCL</code>. </p> | 2013-03-25 06:50:11.460000+00:00 | 2015-02-07 09:33:59.333000+00:00 | 2013-03-28 01:53:08.450000+00:00 | opencl | ['https://github.com/hughperkins/ClConvolve/tree/master', 'http://arxiv.org/pdf/1202.2745.pdf'] | 2 |
57,729,373 | <p><a href="https://www.iro.umontreal.ca/%7Evincentp/ift3395/lectures/backprop_old.pdf" rel="nofollow noreferrer">Backpropagation</a> was developed by Rumelhart, Hinton et al. and published <a href="https://www.nature.com/articles/323533a0" rel="nofollow noreferrer">in Nature</a> in 1986.</p>
<p>As stated in section <a href="https://www.deeplearningbook.org/contents/mlp.html#page=200" rel="nofollow noreferrer">6.5: Back-Propagation and Other Differentiation Algorithms</a> of the <a href="https://www.deeplearningbook.org/" rel="nofollow noreferrer">deep learning book</a>, there are two types of approaches for back-propagating gradients through computational graphs: symbol-to-number differentiation and symbol-to-symbol derivatives. The one more relevant to TensorFlow, as stated in the paper <a href="https://arxiv.org/abs/1610.01178" rel="nofollow noreferrer">A Tour of TensorFlow</a>, is the latter, which can be illustrated using this diagram:</p>
<p><a href="https://i.stack.imgur.com/f0THJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f0THJ.png" alt="enter image description here" /></a></p>
<p>Source: Section II Part D of <a href="https://arxiv.org/abs/1610.01178" rel="nofollow noreferrer">A Tour of TensorFlow</a></p>
<p>On the left side of Fig. 7 above, w represents the weights (or Variables) in TensorFlow, and x and y are two intermediary operations (or nodes; w, x, y and z are all operations) used to get the scalar loss z.</p>
<p>TensorFlow will add a gradient node for each node in the graph (if we print the names of the variables at a certain checkpoint, we can see some additional variables for such nodes; they are eliminated if we freeze the model into a protocol buffer file for deployment), which can be seen in diagram (b) on the right side: dz/dy, dy/dx, dx/dw.</p>
<p>During the backward traversal, at each node we multiply its gradient with that of the previous one, and finally we get a symbolic handle to the overall target derivative dz/dw = dz/dy * dy/dx * dx/dw, which applies exactly the chain rule. Once the gradient is worked out, w can update itself with a learning rate.</p>
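<p>To make the chain rule above concrete, here is a minimal NumPy sketch (an illustration only, not TensorFlow internals; the scalar ops are made up for this example):</p>
<pre><code>import numpy as np

# hypothetical forward pass: w -> x -> y -> z (scalars for clarity)
w = 0.5
x = 3.0 * w           # x = h(w),  dx/dw = 3
y = x ** 2            # y = g(x),  dy/dx = 2x
z = np.sin(y)         # z = f(y),  dz/dy = cos(y)

# backward pass: multiply the local gradients along the graph (chain rule)
dz_dy = np.cos(y)
dy_dx = 2.0 * x
dx_dw = 3.0
dz_dw = dz_dy * dy_dx * dx_dw   # dz/dw = dz/dy * dy/dx * dx/dw

# gradient-descent style update with a learning rate
learning_rate = 0.01
w = w - learning_rate * dz_dw
</code></pre>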
<p>For more detailed information please read this paper: <a href="https://arxiv.org/pdf/1603.04467.pdf" rel="nofollow noreferrer">TensorFlow:
Large-Scale Machine Learning on Heterogeneous Distributed Systems</a></p> | 2019-08-30 15:15:39.337000+00:00 | 2019-08-30 15:40:13.157000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 44,210,561 | <p>In tensorflow it seems that the entire backpropagation algorithm is performed by a single running of an optimizer on a certain cost function, which is the output of some MLP or a CNN.</p>
<p>I do not fully understand how tensorflow knows from the cost that it is indeed an output of a certain NN? A cost function can be defined for any model. How should I "tell" it that a certain cost function derives from a NN?</p> | 2017-05-26 21:50:49.300000+00:00 | 2019-08-30 15:40:13.157000+00:00 | 2017-10-13 16:42:50.407000+00:00 | tensorflow | ['https://www.iro.umontreal.ca/%7Evincentp/ift3395/lectures/backprop_old.pdf', 'https://www.nature.com/articles/323533a0', 'https://www.deeplearningbook.org/contents/mlp.html#page=200', 'https://www.deeplearningbook.org/', 'https://arxiv.org/abs/1610.01178', 'https://i.stack.imgur.com/f0THJ.png', 'https://arxiv.org/abs/1610.01178', 'https://arxiv.org/pdf/1603.04467.pdf'] | 8 |
66,840,564 | <p>The keras documentation for <a href="https://keras.io/api/layers/normalization_layers/batch_normalization/" rel="nofollow noreferrer"><code>BatchNormalization</code></a> gives an answer to your question:</p>
<blockquote>
<p>Importantly, batch normalization works differently during training and
during inference.</p>
</blockquote>
<p>What happens during <em>training</em>, i.e. when calling <code>model.fit()</code>?</p>
<blockquote>
<p><strong>During training</strong> [...], the layer normalizes its output
using the mean and standard deviation of the current batch of inputs.</p>
</blockquote>
<p>But what will happen during inference, i.e. when calling <code>model.predict()</code> as in your examples?</p>
<blockquote>
<p><strong>During inference</strong> [...], the layer normalizes its output using a moving average of
the mean and standard deviation of the batches it has seen during
training. That is to say, it returns <code>(batch - self.moving_mean) / (self.moving_var + epsilon) * gamma + beta</code>.</p>
<p><code>self.moving_mean</code> and <code>self.moving_var</code> are non-trainable variables that
are <strong>updated each time the layer in called in training mode</strong> [...].</p>
</blockquote>
<p>It's important to understand that batch normalization will calculate the statistics (mean and variance) of your whole training data during training by looking at statistics of single batches and internally updating the <code>moving_mean</code> and <code>moving_variance</code> parameters by a running average computed form the single batch statistics. Therefore they're not affected by backpropagation. Ideally, after your model has seen enough training examples (or did enough training epochs), <code>moving_mean</code> and <code>moving_variance</code> will correspond to the statistics of your whole training set. These two parameters are then used during inference to normalize test examples. At the start of training the two parameters will be initialized to 0 and 1. Further batch norm has two more parameters called gamma and beta, which will be updated by the optimizer and therefore depend on your loss.</p>
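<p>A rough sketch of that running-average bookkeeping in plain NumPy (gamma and beta are omitted; the momentum and epsilon values are assumptions, not pulled from your model):</p>
<pre><code>import numpy as np

momentum = 0.99                       # assumed momentum for the running averages
moving_mean, moving_var = 0.0, 1.0    # initialized to 0 and 1, as noted above

def train_step(batch):
    global moving_mean, moving_var
    batch_mean, batch_var = batch.mean(), batch.var()
    # updated each time the layer is called in training mode
    moving_mean = momentum * moving_mean + (1 - momentum) * batch_mean
    moving_var = momentum * moving_var + (1 - momentum) * batch_var
    # training uses the current batch's own statistics
    return (batch - batch_mean) / np.sqrt(batch_var + 1e-3)

def inference(batch):
    # inference uses the accumulated statistics instead
    return (batch - moving_mean) / np.sqrt(moving_var + 1e-3)
</code></pre>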
<p>In essence, <strong>yes</strong>, the output of batch normalization <em>during inference</em> is dependent on the number of epochs you have trained your model. Firstly, due to changing moving averages for mean and variance and second due to learned parameters gamma and beta.</p>
<p>For a deeper understanding of how batch normalization works and why it is needed, have a look at the <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">original publication</a>.</p> | 2021-03-28 10:43:11.150000+00:00 | 2021-03-28 14:32:14.333000+00:00 | 2021-03-28 14:32:14.333000+00:00 | null | 66,832,395 | <p>I am finding output of batchnormalization in Keras.
My model is:</p>
<p>#Import libraries</p>
<pre><code>import numpy as np
import keras
from keras import layers
from keras.layers import Input, Dense, Activation, BatchNormalization, Flatten, Conv2D
from keras.models import Model
</code></pre>
<p>#Model</p>
<pre><code>def HappyModel3(input_shape):
X_input = Input(input_shape, name='input_layer')
X = BatchNormalization(axis = 1, name = 'batchnorm_layer')(X_input)
X = Dense(1, activation='sigmoid', name='sigmoid_layer')(X)
model = Model(inputs = X_input, outputs = X, name='HappyModel3')
return model
</code></pre>
<p>Compiling Model | here number of epochs is 1</p>
<pre><code>X_train=np.array([[1,1,-1],[2,1,1]])
Y_train=np.array([0,1])
happyModel_1=HappyModel3(X_train[0].shape)
happyModel_1.compile(optimizer=keras.optimizers.RMSprop(), loss=keras.losses.mean_squared_error)
happyModel_1.fit(x = X_train, y = Y_train, epochs = 1 , batch_size = 2, verbose=0 )
</code></pre>
<p>finding Batch Normalisation layer's output for model with epochs=1:</p>
<pre><code>for i in range(0, len(happyModel_1.layers)):
tmp_model = Model(happyModel_1.layers[0].input, happyModel_1.layers[i].output)
tmp_output = tmp_model.predict(X_train)
if i in (0,1) :
print(happyModel_1.layers[i].name)
print(tmp_output.shape)
print(tmp_output)
print('\n')
</code></pre>
<p>Code Output is:</p>
<pre><code>input_layer
(2, 3)
[[ 1. 1. -1.]
[ 2. 1. 1.]]
batchnorm_layer
(2, 3)
[[ 0.99003249 0.99388224 -0.99551398]
[ 1.99647105 0.99388224 0.9971655 ]]
</code></pre>
<p>We've normalized at axis=1 |
Batch Norm Layer Output: At axis=1, 1st dimension mean is 1.5, 2nd dimension mean is 1, 3rd dimension mean is 0.
Since it's batch norm, I expect the mean to be close to 0 for all 3 dimensions</p>
<p>This happens when I increase epochs to 1000:</p>
<pre><code>happyModel_2=HappyModel3(X_train[0].shape)
happyModel_2.compile(optimizer=keras.optimizers.RMSprop(), loss=keras.losses.mean_squared_error)
happyModel_2.fit(x = X_train, y = Y_train, epochs = 1000 , batch_size = 2, verbose=0 )
</code></pre>
<p>finding Batch Normalisation layer's output for model with epochs=1000:</p>
<pre><code>for i in range(0, len(happyModel_2.layers)):
tmp_model = Model(happyModel_2.layers[0].input, happyModel_2.layers[i].output)
tmp_output = tmp_model.predict(X_train)
if i in (0,1) :
print(happyModel_2.layers[i].name)
print(tmp_output.shape)
print(tmp_output)
print('\n')
</code></pre>
<p>#Code output</p>
<pre><code>input_layer
(2, 3)
[[ 1. 1. -1.]
[ 2. 1. 1.]]
batchnorm_layer
(2, 3)
[[ -1.95576239e+00 8.08715820e-04 -1.86621261e+00]
[ 1.95795488e+00 8.08715820e-04 1.86590290e+00]]
</code></pre>
<p>We've normalized at axis=1 | Now At axis=1, batch norm layer output is: 1st dimension mean is 0, 2nd dimension mean is 0, 3rd dimension mean is 0. THIS IS AN EXPECTED OUTPUT NOW</p>
<p>My question is: Is the output of Batch Normalization in Keras dependent on the number of epochs?
(Probably YES, as we do backpropagation, batch Normalization parameters will be affected by increasing number of epochs)</p> | 2021-03-27 14:25:13.483000+00:00 | 2021-03-28 14:32:14.333000+00:00 | 2021-03-28 07:41:31.143000+00:00 | python|tensorflow|keras|deep-learning | ['https://keras.io/api/layers/normalization_layers/batch_normalization/', 'https://arxiv.org/abs/1502.03167'] | 2 |
51,981,082 | <p>Happy news for you, there is.</p>
<p>A package called <strong>"SHAP"</strong> (<em>SHapley Additive exPlanation</em>) was recently released just for that purpose.
<a href="https://github.com/slundberg/shap" rel="nofollow noreferrer">Here's a link</a> to the github.</p>
<p>It supports visualization of complicated models (which are hard to intuitively explain) like boosted trees (and XGBOOST in particular!)</p>
<p>It can show you "real" feature importance which is better than the <code>"gain"</code>, <code>"weight"</code>, and <code>"cover"</code> <em>xgboost</em> supplies as they are not consistent.</p>
<p>You can read all about why SHAP is better for feature evaluation <a href="https://arxiv.org/pdf/1802.03888.pdf" rel="nofollow noreferrer">here</a>.</p>
<p>It will be hard to give you code that will work for you out of the box, but the documentation is good and you should be able to write code that suits you.</p>
<p>Here are the guidelines for building your first graph:</p>
<pre><code>import shap
import xgboost as xgb
# Assume X_train and y_train are both features and labels of data samples
dtrain = xgb.DMatrix(X_train, label=y_train, feature_names=feature_names, weight=weights_trn)
# Train your xgboost model
bst = xgb.train(params0, dtrain, num_boost_round=2500, evals=watchlist, early_stopping_rounds=200)
# "explainer" object of shap
explainer = shap.TreeExplainer(bst)
# "Values you explain, I took them from my training set but you can "explain" here what ever you want
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
shap.summary_plot(shap_values, X_test, plot_type="bar")
</code></pre>
<p>To plot the "<em>Why a certain sample got its score</em>" you can either use built in SHAP function for it (only works on a Jupyter Notebook). <a href="https://github.com/slundberg/shap#tree-ensemble-example-with-treeexplainer-xgboostlightgbmcatboostscikit-learn-models" rel="nofollow noreferrer">Perfect example here</a></p>
<p>I personally wrote a function that will plot it using <code>matplotlib</code>, which will take some effort.</p>
<p>Here is an example of a plot I've made using the shap values (features are confidential so all erased)
<a href="https://i.stack.imgur.com/b8E7z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b8E7z.png" alt="enter image description here"></a></p>
<p>You can see a 97% prediction for <code>label=1</code>, and, for that specific sample, each feature and how much it added to or subtracted from the log-loss.</p> | 2018-08-23 08:09:50.807000+00:00 | 2018-08-23 11:25:35.120000+00:00 | 2018-08-23 11:25:35.120000+00:00 | null | 51,965,203 | <p>Let's say I'm trying to predict an apartment price. So, I have a lot of labeled data, where on each apartment I have features that could affect the price like:</p>
<ul>
<li>city</li>
<li>street</li>
<li>floor</li>
<li>year built</li>
<li>socioeconomic status </li>
<li>square feet </li>
<li>etc.</li>
</ul>
<p>And I train a model, let's say XGBOOST. Now, I want to predict the price of a new apartment. Is there a good way to show what is "good" in this apartment, and what is bad, and by how much (scaled 0-1)?</p>
<p>For example: The floor number is a "strong" feature (i.e. in this area this floor number is desired, so it affects the price of the apartment positively), but the socioeconomic status is a weak feature (i.e. the socioeconomic status is low, so it affects the price of the apartment negatively).</p>
<p>What I want is to illustrate more or less why my model decided on this price, and I want the user to get a feel of the apartment value by those indicators.</p>
<p>I thought of exhaustive search on each feature - but I'm afraid that will take too much time.</p>
<p>Is there a more brilliant way of doing this?</p>
<p>Any help would be much appreciated...</p> | 2018-08-22 10:50:03.347000+00:00 | 2018-08-23 11:25:35.120000+00:00 | null | python|algorithm|machine-learning|data-visualization|xgboost | ['https://github.com/slundberg/shap', 'https://arxiv.org/pdf/1802.03888.pdf', 'https://github.com/slundberg/shap#tree-ensemble-example-with-treeexplainer-xgboostlightgbmcatboostscikit-learn-models', 'https://i.stack.imgur.com/b8E7z.png'] | 4 |
52,677,748 | <p>In your question, you mention that you want to achieve this using only a single view of the object. In that case, homographies or Essential/Fundamental matrices won't help you, because these require at least two views of the scene to make sense. If you don't have any priors on the shape of the objects that you want to reconstruct, the key information that you'll be missing is (relative) depth, and in that case I think those are the two possible solutions:</p>
<ul>
<li><p>Leverage a learning algorithm. There is a rich literature on 6dof object pose estimation with deep networks, see <a href="https://arxiv.org/pdf/1711.00199.pdf" rel="nofollow noreferrer">this paper</a> for example. You won't have to deal with depth directly if you use those, since those networks are trained end to end to estimate a pose in <code>SO(3)</code>.</p></li>
<li><p>Add <em>many</em> more images and use a dense photometric SLAM/SFM pipeline, such as <a href="http://www.roboticsproceedings.org/rss11/p01.pdf" rel="nofollow noreferrer">elastic fusion</a>. However, in that case you will need to segment the resulting models since the estimation they produce is of the entire environment, which can be difficult depending on the scene. </p></li>
</ul>
<p>However, as you mentioned in your comment, it is possible to reconstruct the model up to scale if you have very strong priors on its geometry. In the case of a planar object (a cuboid will just be an extension of that), you can use this simple algorithm (that is more or less what they do <a href="https://hal.inria.fr/inria-00525674/document" rel="nofollow noreferrer">here</a>, there are other methods but I find them a bit messy, equation-wise):</p>
<pre><code>//let's note A,B,C,D the rectangle in 3d that we are after, such that
//AB is parallel with CD. Let's also note a,b,c,d their respective
//reprojections in the image, i.e. a=KA where K is the calibration matrix, and so on.
1) Compute the common vanishing point of AB and CD. This is just the intersection
of ab and cd in the image plane. Let's call it v_1.
2) Do the same for the two other edges, i.e. bc and da. Let's call this
vanishing point v_2.
3) Now, you can compute the vanishing line, which will just be
crossproduct(v_1, v_2), i.e. the line going through both v_1 and v_2. This gives
you the orientation of your plane. Let's write its normal N.
4) All you need to find now is the boundaries of the rectangle. To do
that, just consider any plane with normal N that doesn't go through
the camera center. Now find the intersections of K^{-1}a, K^{-1}b,
K^{-1}c, K^{-1}d with that plane.
</code></pre>
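<p>For the vanishing-point computations in steps 1-3, a minimal NumPy sketch using homogeneous coordinates could look like this (the corner coordinates are made up; both lines through points and intersections of lines are cross products in homogeneous coordinates):</p>
<pre><code>import numpy as np

def to_h(p):
    # pixel point (u, v) -> homogeneous coordinates
    return np.array([p[0], p[1], 1.0])

def vanishing_point(p1, p2, p3, p4):
    # line through two points = cross product of the points;
    # intersection of two lines = cross product of the lines
    line_1 = np.cross(to_h(p1), to_h(p2))
    line_2 = np.cross(to_h(p3), to_h(p4))
    return np.cross(line_1, line_2)

a, b, c, d = (100, 120), (400, 130), (420, 300), (80, 280)  # made-up corners
v_1 = vanishing_point(a, b, c, d)      # common vanishing point of ab and cd
v_2 = vanishing_point(b, c, d, a)      # common vanishing point of bc and da
vanishing_line = np.cross(v_1, v_2)    # with calibration K, N ~ K.T @ vanishing_line
</code></pre>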
<p>If you need a refresher on vanishing points and lines, I suggest you take a look at pages 213 and 216 of <a href="http://cvrs.whu.edu.cn/downloads/ebooks/Multiple%20View%20Geometry%20in%20Computer%20Vision%20(Second%20Edition).pdf" rel="nofollow noreferrer">Hartley-Zisserman's book</a>.</p> | 2018-10-06 09:45:56.650000+00:00 | 2018-10-06 21:09:29.977000+00:00 | 2018-10-06 21:09:29.977000+00:00 | null | 52,677,164 | <p>I have a non-planar object with 9 points <strong>with known dimensions</strong> in 3D i.e. length of all sides is known. Now given a 2D projection of this shape, I want to reconstruct the 3D model of it. I basically want to retrieve the shape of this object in the real world i.e. angles between different sides in 3D. For eg: given all the dimensions of every part of the table and a 2D image, I'm trying to reconstruct its 3D model. </p>
<p><a href="https://i.stack.imgur.com/sBaaP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sBaaP.jpg" alt="enter image description here"></a></p>
<p>I've read about homography, perspective transform, procrustes and fundamental/essential matrix so far but haven't found a solution that'll apply here. I'm new to this, so might have missed out something. Any direction on this will be really helpful. </p> | 2018-10-06 08:27:56.650000+00:00 | 2018-10-11 21:04:23.660000+00:00 | 2018-10-11 21:04:23.660000+00:00 | image-processing|computer-vision|transformation|robotics|projective-geometry | ['https://arxiv.org/pdf/1711.00199.pdf', 'http://www.roboticsproceedings.org/rss11/p01.pdf', 'https://hal.inria.fr/inria-00525674/document', 'http://cvrs.whu.edu.cn/downloads/ebooks/Multiple%20View%20Geometry%20in%20Computer%20Vision%20(Second%20Edition).pdf'] | 4 |
58,583,080 | <p><strong>TL;DR: <code>perform_analysis</code> wants to double-check unusually small epsilon results by using more granular computation.</strong></p>
<p>The <code>pate.perform_analysis</code> function iterates through the data (technically the privacy loss random variable) and computes various epsilons. It uses the <code>moments</code> parameter to know how granular this iteration should be. When using the default 8 <code>moments</code>, it will compute 8 epsilons. Then it returns the minimum of the computed epsilons, as you can see <a href="https://github.com/OpenMined/PySyft/blob/dev/syft/frameworks/torch/differential_privacy/pate.py#L261" rel="nofollow noreferrer">in the source code</a>.</p>
<p>When this function returns a very small data-dependent epsilon, it could be because A) the data has a high amount of agreement, or B) the computation wasn't granular enough, and the true epsilon is higher. When only 8 epsilons are computed, it's possible that they happened to be anomalies in the data that paint an overly-optimistic picture of the overall epsilon! So the function sees a surprisingly small epsilon and warns you - you may want to increase the <code>moments</code> variable to compute more epsilons and make sure you've found the real minimum. If you still get the same result when you increase your <code>moments</code> parameter, your data probably has a high amount of agreement, so it truly has a small data-dependent epsilon compared to its data-independent epsilon.</p>
<p>Hopefully that makes sense to you at a high level. If you want more details on the math behind this, you can check out <a href="https://arxiv.org/pdf/1610.05755.pdf" rel="nofollow noreferrer">the research paper</a> that inspired the source code.</p> | 2019-10-27 20:26:31.817000+00:00 | 2019-10-27 20:31:53.187000+00:00 | 2019-10-27 20:31:53.187000+00:00 | null | 56,975,953 | <p>Got warning while doing PATE analysis:</p>
<blockquote>
<p>Warning: May not have used enough values of l. Increase 'moments' variable and run again.</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>from syft.frameworks.torch.differential_privacy import pate
</code></pre>
<pre class="lang-py prettyprint-override"><code>data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1)
print("Data Independent Epsilon:", data_ind_eps)
print("Data Dependent Epsilon:", data_dep_eps)
</code></pre>
<p>The warning went away after increasing the value of the "moments" parameter of the "pate.perform_analysis" function. But I want to know why this was so.</p>
<pre class="lang-py prettyprint-override"><code>data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1,moments=20)
print("Data Independent Epsilon:", data_ind_eps)
print("Data Dependent Epsilon:", data_dep_eps)
</code></pre> | 2019-07-10 17:38:33.713000+00:00 | 2020-11-28 11:09:11.383000+00:00 | 2020-11-28 11:09:11.383000+00:00 | python|pytorch|pysyft | ['https://github.com/OpenMined/PySyft/blob/dev/syft/frameworks/torch/differential_privacy/pate.py#L261', 'https://arxiv.org/pdf/1610.05755.pdf'] | 2 |
71,775,120 | <p>"Other algorithms" doesn't necessarily mean "other practical implementations". M-I is still essentially the theoretical best in terms of comparisons made for an infinite number of elements to sort. That's just as long as you don't have to worry about implementation, memory use, or any other real world concerns.</p>
<p>The other algorithms are improvements to merge insertion rather than completely different concepts. They use small tweaks like changing the order elements are compared or inserting <a href="https://arxiv.org/abs/1705.00849" rel="nofollow noreferrer">two elements</a> together more efficiently. <a href="https://link.springer.com/article/10.1007/s00224-020-09987-4" rel="nofollow noreferrer">This paper</a> discusses the worst cases.</p>
<p><a href="https://en.wikipedia.org/wiki/Timsort" rel="nofollow noreferrer">Timsort</a> uses similar concepts (specifically in <a href="https://svn.python.org/projects/python/trunk/Objects/listsort.txt" rel="nofollow noreferrer">merging runs</a> of already-sorted data in "galloping mode") but it's more focused on performance in real-world data than asymptotic complexity on infinite random inputs. <a href="https://en.wikipedia.org/wiki/Block_sort" rel="nofollow noreferrer">Block sort</a> is another fancy algorithm that actually sorts in-place but I don't have a comparison count at the moment. It's implemented in <a href="https://github.com/BonzaiThePenguin/WikiSort" rel="nofollow noreferrer">Wikisort</a> and <a href="https://github.com/HolyGrailSortProject/Rewritten-Grailsort" rel="nofollow noreferrer">Grailsort</a>.</p>
<p>Out of curiosity, here are some worst-case comparison counts for the merge-insertion variations from the paper linked above:</p>
<ul>
<li><code>n log n β 1.4427n + O(log n)</code> - Lowest mathematically possible</li>
<li><code>n log n β 1.3999n + o(n)</code> - Ford-Johnson Merge-Insertion (original)</li>
<li><code>n log n β 1.4005n + O(n)</code> - Iwama-Teruyama Merge-Insertion (1,2-insertion)</li>
</ul> | 2022-04-07 00:50:37.310000+00:00 | 2022-04-07 00:50:37.310000+00:00 | null | null | 71,605,548 | <p>On the Wikipedia page for <a href="https://en.wikipedia.org/wiki/Merge-insertion_sort#Relation_to_other_comparison_sorts" rel="nofollow noreferrer">Merge-Insertion Sort</a> it says</p>
<blockquote>
<p>Manacher's algorithm and later record-breaking sorting algorithms have all used modifications of the merge-insertion sort ideas.</p>
</blockquote>
<p>in reference to sorting algorithms that use the fewest comparisons. But it does not explain what algorithms it's referring to. The page the citation links to also just says "other algorithms".</p> | 2022-03-24 15:51:28.057000+00:00 | 2022-04-07 00:50:37.310000+00:00 | 2022-04-07 00:40:43.567000+00:00 | algorithm|sorting | ['https://arxiv.org/abs/1705.00849', 'https://link.springer.com/article/10.1007/s00224-020-09987-4', 'https://en.wikipedia.org/wiki/Timsort', 'https://svn.python.org/projects/python/trunk/Objects/listsort.txt', 'https://en.wikipedia.org/wiki/Block_sort', 'https://github.com/BonzaiThePenguin/WikiSort', 'https://github.com/HolyGrailSortProject/Rewritten-Grailsort'] | 7 |
68,948,219 | <p>Your model needs to be trained with negative examples (i.e. images without any cars in them). It needs to understand that it is possible that an image doesn't have any car.</p>
<p>Further, it looks like the model has learned some unrelated features as car features, e.g. the parking lot area. To avoid this, use random erasing augmentation while training the model.</p>
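<p>As a minimal sketch of that augmentation on a single image array (plain NumPy; the parameters are hypothetical, and libraries such as torchvision also ship a ready-made transform):</p>
<pre><code>import numpy as np

def random_erase(img, min_frac=0.02, max_frac=0.2):
    """Blank out a random rectangle so the model cannot rely on one region."""
    h, w = img.shape[:2]
    area = h * w * np.random.uniform(min_frac, max_frac)
    eh = min(int(np.sqrt(area)), h - 1)
    ew = min(int(area / max(eh, 1)), w - 1)
    y = np.random.randint(0, h - eh)
    x = np.random.randint(0, w - ew)
    out = img.copy()
    # fill the erased patch with random noise
    out[y:y + eh, x:x + ew] = np.random.randint(0, 256, (eh, ew) + img.shape[2:])
    return out
</code></pre>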
<p><a href="https://arxiv.org/abs/1708.04896" rel="nofollow noreferrer">Random erasing</a> -Inspired by the mechanisms of dropout regularization, random erasing can be seen as analogous to dropout except in the input data space rather than embedded into the network architecture. By removing certain input patches, the model is forced to find other descriptive characteristics.</p> | 2021-08-27 05:00:35.737000+00:00 | 2021-08-27 06:12:07.893000+00:00 | 2021-08-27 06:12:07.893000+00:00 | null | 68,947,291 | <p>I have trained TensorFlow model with 200 car images, they can success detection:</p>
<p><a href="https://i.stack.imgur.com/jFXsS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jFXsS.png" alt="enter image description here" /></a></p>
<p>But when there is no car in the image, a false car detection occurs. Why, and how can I prevent this?</p>
<p><a href="https://i.stack.imgur.com/9KWsC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9KWsC.png" alt="enter image description here" /></a></p> | 2021-08-27 02:21:56.027000+00:00 | 2021-08-27 12:38:10.460000+00:00 | 2021-08-27 12:38:10.460000+00:00 | tensorflow|keras|deep-learning|computer-vision|object-detection | ['https://arxiv.org/abs/1708.04896'] | 1 |
64,365,532 | <p>It seems like you are facing a severe "class imbalance" problem.</p>
<ol>
<li><p>Have a look at <a href="https://stackoverflow.com/a/52161194/1714410">focal loss</a>. This loss is designed for binary classification with severe class imbalance.</p>
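<p>A minimal PyTorch-style sketch of binary focal loss (the hyper-parameters here are hypothetical; an <code>alpha</code> close to 1 up-weights the rare positives):</p>
<pre><code>import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.97):
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)             # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # (1 - p_t) ** gamma down-weights the easy, all-zero predictions
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
</code></pre>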
</li>
<li><p>Consider "hard negative mining": that is, propagate gradients only for part of the training examples - the "hard" ones.<br />
see, e.g.:<br />
<em>Abhinav Shrivastava, Abhinav Gupta and Ross Girshick</em> <strong><a href="https://arxiv.org/abs/1604.03540" rel="nofollow noreferrer">Training Region-based Object Detectors with Online Hard Example Mining</a></strong> (CVPR 2016).</p>
</li>
</ol> | 2020-10-15 05:45:59.490000+00:00 | 2020-10-15 05:45:59.490000+00:00 | null | null | 64,364,684 | <p>My dataset consists of vectors that are massive. The data points are all mostly zeros with ~3% of the features being 1. Essentially my data is super sparse and I am attempting to train an autoencoder however my model is learning just to recreate vectors of all zeros.</p>
<p>Are there any techniques to prevent this? I have tried replacing mean squared error with dice loss but it completely stopped learning. My other thoughts would be to use a loss function that favors guessing 1s correctly rather than zeros. I have also tried using a sigmoid and linear last activation with no clear winner. Any ideas would be awesome.</p> | 2020-10-15 04:04:31.830000+00:00 | 2020-10-15 05:45:59.490000+00:00 | null | machine-learning|computer-vision|pytorch|data-science | ['https://stackoverflow.com/a/52161194/1714410', 'https://arxiv.org/abs/1604.03540'] | 2 |
25,664,336 | <p>Some ideas:</p>
<ol>
<li>you can store those distances off-heap or on-disc via MapDB or
GraphHopper's simplistic DataAccess implementations, making it RAM-independent</li>
<li>you can use float, which should be only ~30MB, or even short and just store the kilometers</li>
<li>you could try on-demand routing, without storing anything, as it takes only a few ms to calculate a route. Disabling instructions and point calculation makes it even twice as fast. You could even disable calculating the distance and just use path.weight - this will give you another good speedup, but it requires a bit lower-level GraphHopper usage and is only recommended if you know what you do.</li>
</ol>
<p>Now to your question. GraphHopper uses a graph model consisting of nodes (junctions) and edges (streets connecting junctions). Still, a roundabout consists of multiple nodes, but in general it should be possible to use such a 'leaving' node as a 'hub-id'.</p>
<p>I see two approaches to calculate those nodes:</p>
<ul>
<li>either by running the Contraction-Hierarchy and picking the highest 1000 nodes and define them as hubs - this would be similar to what is described in the '<a href="http://arxiv.org/pdf/1302.5611.pdf" rel="nofollow">transit node routing</a>' paper</li>
<li>or you calculate routes from one city to e.g. all other cities (or just 8 geographic directions) and find the last common nodes of two routes to identify some</li>
</ul>
<p>For both approaches you'll have to dig a bit deeper into GraphHopper and you'll probably need the <a href="https://github.com/graphhopper/graphhopper/blob/master/docs/core/low-level-api.md" rel="nofollow">lower level API</a>.</p> | 2014-09-04 11:29:43.120000+00:00 | 2014-09-04 11:29:43.120000+00:00 | null | null | 25,648,425 | <p>I have 2750 city centers in Belgium. I need to know the distances between every 2 city centers.
But that results in a matrix of 57MB, just to remember those distances (not even the routes), so that scales terribly.</p>
<p>Instead, I am looking at using highway intersections as hubs. Basically, every city knows its nearby cities and its nearby hubs (= highway intersections). All hubs know the distance to each other.</p>
<p>So the distance from one city A to another, non-nearby city B can be calculated as the distance of <code>cityA -> hubX -> hubY -> cityB</code>. Because most cities typically have 3 hubs nearby, I might need to look at all 9 combinations and take the shortest (a sketch of this lookup follows below). But in any case it should scale better memory-wise.</p>
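<p>A sketch of the lookup I have in mind (the dictionaries for the precomputed city-to-hub and hub-to-hub distances are hypothetical):</p>
<pre><code>import itertools

def hub_distance(city_a, city_b, city_to_hubs, hub_dist):
    # try every pair of nearby hubs (typically 3 x 3 = 9 combinations)
    return min(
        city_to_hubs[city_a][hx] + hub_dist[hx][hy] + city_to_hubs[city_b][hy]
        for hx, hy in itertools.product(city_to_hubs[city_a], city_to_hubs[city_b])
    )

city_to_hubs = {'A': {'h1': 5.0, 'h2': 7.5}, 'B': {'h3': 4.0}}
hub_dist = {'h1': {'h3': 40.0}, 'h2': {'h3': 40.5}}
print(hub_distance('A', 'B', city_to_hubs, hub_dist))  # 49.0
</code></pre>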
<p>Now the problem:
<strong>Can I describe a highway intersection as a single point?</strong> Think about it: a highway consists of 2 roads (one in each direction), so a highway intersection center has 4 roads (not even counting the arms).</p> | 2014-09-03 15:38:01.617000+00:00 | 2014-09-04 11:29:43.120000+00:00 | null | graphhopper | ['http://arxiv.org/pdf/1302.5611.pdf', 'https://github.com/graphhopper/graphhopper/blob/master/docs/core/low-level-api.md'] | 2
54,209,330 | <p>The vanilla way of doing it is by using the Web API that's been built for it: <a href="https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition" rel="nofollow noreferrer">SpeechRecognition</a>. It is currently only supported in Chrome (I don't really know why), and currently not in iframes, so doing a live example is unfortunately not possible...</p>
<p>Anyway, here is a basic example you can use on your own page.</p>
<pre class="lang-js prettyprint-override"><code>const magic_word = ###Some magic word###;
// initialize our SpeechRecognition object
const recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
recognition.lang = 'en-US';
recognition.interimResults = false;
recognition.maxAlternatives = 1;
recognition.continuous = true;
// when we get some results
recognition.onresult = e => {
// extract all the transcripts
const transcripts = [].concat.apply([], [...e.results]
.map(res => [...res]
.map(alt => alt.transcript)
)
);
// do something with the transcripts,
// here we are searching for our magic word
if(transcripts.some(t => t.indexOf(magic_word) > -1)){
//do something awesome, like starting your own command listeners
console.log('hello user');
recognition.stop();
}
else{
// didn't understood...
console.log("didn't got what you said", transcripts)
}
}
// start on click of a button
btn.onclick = e => {
recognition.stop();
recognition.start();
};
</code></pre>
<p>To have a grasp of how it works under the hood, you may want to check Mozilla's open-source project <a href="https://github.com/mozilla/DeepSpeech" rel="nofollow noreferrer">DeepSpeech</a>, based on <a href="https://arxiv.org/abs/1412.5567" rel="nofollow noreferrer">Baidu's Deep Speech research papers</a>. </p>
<p>So to make it clear, that part is not JavaScript, and Chrome's implementation still outsources the recognition to their servers. If you wish to build something yourself, be prepared to spend long nights ;-)</p> | 2019-01-16 01:44:47.803000+00:00 | 2019-01-16 01:44:47.803000+00:00 | null | null | 54,208,315 | <p>I want to add speech recognition to my simple ESL apps and games. I'd like to find a solution that is as close to vanilla javascript as possible that works in both Chrome and Safari.</p>
<p>This is more of an approach discussion than a fix to specific code.</p>
<p>I have been learning how to program using vanilla javascript for about the past year and a half. I have been giving myself projects to build ESL educational apps and games as a way to apply what I'm learning. For these reasons, I would like to find an approach to implementing speech recognition that works for both Chrome and Safari (I imagine most of my students will be able to access the games using one of these two browsers on PC or Mac) in a way that is as close to vanilla javascript as possible, to help me learn how to do the coding myself and learn what goes on under the hood, as opposed to just using third party software or libraries. However, with some of the complications I've read about and taking this approach to other problems, I do understand that this might not be possible. So again, as close to vanilla javascript as possible.</p>
<p>Ideally, I'd like the speech recognition to process as quickly as possible so as to give a responsive feel for games. I imagine an offline solution may work best for this. I'm also guessing that publishing the program/game as a downloadable app might be better than a website, and if that is the case, if someone could point me in a good direction for accomplishing that, that would be great.</p>
<p>If the above is not really possible, or even as just another approach, I could make less responsive programs, and even turn-style based games. So I'm open to this approach as well.</p>
<p>From my googling, it seems like I might need to use Swift to implement the Mac/Safari SFSpeechRecognizer, and I'd like to avoid that if possible. However, if someone knows of a simple way to go about this, that could work. I just would rather not learn an entire other language just to use one feature. Although, this may be more common than I realize given that I'm a newb. So if it's simpler than it sounds, I'm all ears.</p>
<p>Thanks!</p> | 2019-01-15 23:28:51.077000+00:00 | 2019-01-16 01:44:47.803000+00:00 | null | javascript|google-chrome|safari|speech-recognition | ['https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition', 'https://github.com/mozilla/DeepSpeech', 'https://arxiv.org/abs/1412.5567'] | 3 |
58,547,026 | <p>Here's a simple explanation of what is going on in a special case that is used in <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer"><code>U-Net</code></a> - that's one of the main use cases for transposed convolution. </p>
<p>We're interested in the following layer:</p>
<pre><code>Conv2DTranspose(64, (2, 2), strides=(2, 2))
</code></pre>
<p>What does this layer do <strong>exactly</strong>? Can we reproduce its work?</p>
<p>Here's the <strong>answer</strong>:</p>
<ul>
<li>First of all the default padding in this case is valid. This means we have no padding.</li>
<li>The size of the output will be twice as big: if the input is (m, n), the output will be (2m, 2n). Why is that? See the next point.</li>
<li>Take the first element from the input and multiply by the filter weights with shape (2,2). Put it into the output. Take the next element, multiply and put in the output next to the first result without overlapping. Why is that? We have strides (2, 2).</li>
</ul>
<p>Here's an example input and output (see details <a href="https://medium.com/@ilyarudyak/transposed-convolution-3a4725872a9a?source=friends_link&sk=ecd7a3f8568cb0ac327920c9676d788e" rel="nofollow noreferrer">here</a> and <a href="https://gist.github.com/ilyarudyak/8fdec3a6111226a5123fa0988dc36303" rel="nofollow noreferrer">here</a>):</p>
<pre><code>In [15]: X.reshape(n, m)
Out[15]:
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]])
In [16]: y_resh
Out[16]:
array([[ 0., 0., 1., 1., 2., 2., 3., 3., 4., 4.],
[ 0., 0., 1., 1., 2., 2., 3., 3., 4., 4.],
[ 5., 5., 6., 6., 7., 7., 8., 8., 9., 9.],
[ 5., 5., 6., 6., 7., 7., 8., 8., 9., 9.],
[10., 10., 11., 11., 12., 12., 13., 13., 14., 14.],
[10., 10., 11., 11., 12., 12., 13., 13., 14., 14.]], dtype=float32)
</code></pre>
<p>This slide from Stanford's cs231n is useful for our question:</p>
<p><a href="https://i.stack.imgur.com/T5EyN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T5EyN.png" alt="enter image description here"></a></p> | 2019-10-24 18:13:16.703000+00:00 | 2019-10-24 18:13:16.703000+00:00 | null | null | 39,373,230 | <p>The documentation for the <code>conv2d_transpose()</code> operation does not clearly explain what it does:</p>
<blockquote>
<p>The transpose of conv2d.</p>
<p>This operation is sometimes called "deconvolution" after
<a href="http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf" rel="noreferrer">Deconvolutional Networks</a>, but is actually the transpose (gradient) of
conv2d rather than an actual deconvolution.</p>
</blockquote>
<p>I went through the paper that the doc points to, but it did not help.</p>
<p>What does this operation do and what are examples of why you would want to use it?</p> | 2016-09-07 14:55:18.783000+00:00 | 2020-11-07 20:48:24.533000+00:00 | null | tensorflow|conv-neural-network | ['https://arxiv.org/abs/1505.04597', 'https://medium.com/@ilyarudyak/transposed-convolution-3a4725872a9a?source=friends_link&sk=ecd7a3f8568cb0ac327920c9676d788e', 'https://gist.github.com/ilyarudyak/8fdec3a6111226a5123fa0988dc36303', 'https://i.stack.imgur.com/T5EyN.png'] | 4 |
44,350,789 | <p>Here's another viewpoint from the "gradients" perspective, i.e. why TensorFlow documentation says <code>conv2d_transpose()</code> is "actually the transpose (<strong>gradient</strong>) of conv2d rather than an actual deconvolution". <strong>For more details on the actual computation done in <code>conv2d_transpose</code>, I would highly recommend <a href="https://arxiv.org/pdf/1603.07285.pdf" rel="noreferrer">this article</a>, starting from page 19.</strong></p>
<h3>Four Related Functions</h3>
<p>In <code>tf.nn</code>, there are 4 closely related and rather confusing functions for 2d convolution:</p>
<ul>
<li><code>tf.nn.conv2d</code></li>
<li><code>tf.nn.conv2d_backprop_filter</code></li>
<li><code>tf.nn.conv2d_backprop_input</code></li>
<li><code>tf.nn.conv2d_transpose</code></li>
</ul>
<p>One sentence summary: <strong>they are all just 2d convolutions</strong>. Their differences are in their input arguments ordering, input rotation or transpose, strides (including fractional stride size), paddings and etc. With <code>tf.nn.conv2d</code> in hand, one can implement all of the 3 other ops by transforming inputs and changing the <code>conv2d</code> arguments.</p>
<h3>Problem Settings</h3>
<ul>
<li>Forward and backward computations:</li>
</ul>
<pre class="lang-py prettyprint-override"><code># forward
out = conv2d(x, w)
# backward, given d_out
=> find d_x?
=> find d_w?
</code></pre>
<p>In the forward computation, we compute the convolution of input image <code>x</code> with the filter <code>w</code>, and the result is <code>out</code>.
In the backward computation, assume we're given <code>d_out</code>, which is the gradient w.r.t. <code>out</code>. Our goal is to find <code>d_x</code> and <code>d_w</code>, which are the gradient w.r.t. <code>x</code> and <code>w</code> respectively.</p>
<p>For the ease of discussion, we assume:</p>
<ul>
<li>All stride size to be <code>1</code></li>
<li>All <code>in_channels</code> and <code>out_channels</code> are <code>1</code></li>
<li>Use <code>VALID</code> padding</li>
<li>Odd number filter size, this avoids some asymmetric shape problem</li>
</ul>
<h3>Short Answer</h3>
<p>Conceptually, with the assumptions above, we have the following relations:</p>
<pre class="lang-py prettyprint-override"><code>out = conv2d(x, w, padding='VALID')
d_x = conv2d(d_out, rot180(w), padding='FULL')
d_w = conv2d(x, d_out, padding='VALID')
</code></pre>
<p>Where <code>rot180</code> is a 2d matrix rotated 180 degrees (a left-right flip and a top-down flip), and <code>FULL</code> means "apply filter wherever it partly overlaps with the input" (see <a href="http://deeplearning.net/software/theano/library/tensor/nnet/conv.html#theano.tensor.nnet.conv2d" rel="noreferrer">theano docs</a>). Note that <em>this is only valid with the above assumptions</em>; however, one can change the conv2d arguments to generalize it.</p>
<p>The key takeaways:</p>
<ul>
<li>The input gradient <code>d_x</code> is the convolution of the output gradient <code>d_out</code> and the weight <code>w</code>, with some modifications.</li>
<li>The weight gradient <code>d_w</code> is the convolution of the input <code>x</code> and the output gradient <code>d_out</code>, with some modifications.</li>
</ul>
<h3>Long Answer</h3>
<p>Now, let's give an actual working code example of how to use the 4 functions above to compute <code>d_x</code> and <code>d_w</code> given <code>d_out</code>. This shows how
<code>conv2d</code>,
<code>conv2d_backprop_filter</code>,
<code>conv2d_backprop_input</code>, and
<code>conv2d_transpose</code> are related to each other.
<a href="https://gist.github.com/yxlao/ef50416011b9587835ac752aa3ce3530" rel="noreferrer">Please find the full scripts here</a>. </p>
<p><strong>Computing <code>d_x</code> in 4 different ways:</strong></p>
<pre class="lang-py prettyprint-override"><code># Method 1: TF's autodiff
d_x = tf.gradients(f, x)[0]
# Method 2: manually using conv2d
d_x_manual = tf.nn.conv2d(input=tf_pad_to_full_conv2d(d_out, w_size),
filter=tf_rot180(w),
strides=strides,
padding='VALID')
# Method 3: conv2d_backprop_input
d_x_backprop_input = tf.nn.conv2d_backprop_input(input_sizes=x_shape,
filter=w,
out_backprop=d_out,
strides=strides,
padding='VALID')
# Method 4: conv2d_transpose
d_x_transpose = tf.nn.conv2d_transpose(value=d_out,
filter=w,
output_shape=x_shape,
strides=strides,
padding='VALID')
</code></pre>
<p><strong>Computing <code>d_w</code> in 3 different ways:</strong></p>
<pre class="lang-py prettyprint-override"><code># Method 1: TF's autodiff
d_w = tf.gradients(f, w)[0]
# Method 2: manually using conv2d
d_w_manual = tf_NHWC_to_HWIO(tf.nn.conv2d(input=x,
filter=tf_NHWC_to_HWIO(d_out),
strides=strides,
padding='VALID'))
# Method 3: conv2d_backprop_filter
d_w_backprop_filter = tf.nn.conv2d_backprop_filter(input=x,
filter_sizes=w_shape,
out_backprop=d_out,
strides=strides,
padding='VALID')
</code></pre>
<p>Please see the <a href="https://gist.github.com/yxlao/ef50416011b9587835ac752aa3ce3530" rel="noreferrer">full scripts</a> for the implementation of <code>tf_rot180</code>, <code>tf_pad_to_full_conv2d</code>, <code>tf_NHWC_to_HWIO</code>. In the scripts, we check that the final output values of different methods are the same; a numpy implementation is also available. </p> | 2017-06-04 04:33:10.340000+00:00 | 2018-06-05 23:59:57.103000+00:00 | 2018-06-05 23:59:57.103000+00:00 | null | 39,373,230 | <p>The documentation for the <code>conv2d_transpose()</code> operation does not clearly explain what it does:</p>
<blockquote>
<p>The transpose of conv2d.</p>
<p>This operation is sometimes called "deconvolution" after
<a href="http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf" rel="noreferrer">Deconvolutional Networks</a>, but is actually the transpose (gradient) of
conv2d rather than an actual deconvolution.</p>
</blockquote>
<p>I went through the paper that the doc points to, but it did not help.</p>
<p>What does this operation do and what are examples of why you would want to use it?</p> | 2016-09-07 14:55:18.783000+00:00 | 2020-11-07 20:48:24.533000+00:00 | null | tensorflow|conv-neural-network | ['https://arxiv.org/pdf/1603.07285.pdf', 'http://deeplearning.net/software/theano/library/tensor/nnet/conv.html#theano.tensor.nnet.conv2d', 'https://gist.github.com/yxlao/ef50416011b9587835ac752aa3ce3530', 'https://gist.github.com/yxlao/ef50416011b9587835ac752aa3ce3530'] | 4 |
67,080,977 | <p>Fairlearn maintainer here. The answer is yes, you can use <code>fairlearn.reductions.Moment</code>, or more precisely <code>fairlearn.reductions.ClassificationMoment</code>, to implement any constraints of the form described in the paper "<a href="https://arxiv.org/pdf/1803.02453.pdf" rel="nofollow noreferrer">A Reductions Approach to Fair Classification</a>". Apologies for the mistake in the documentation. We should replace "parity constraints" by "constraints".</p>
<p>If you want to implement this kind of constraint or something more generic along the lines you describe, please open a GitHub issue, and we can help you along! Currently the internal API around creation of new Moment classes is not very well documented.</p> | 2021-04-13 19:04:48.943000+00:00 | 2021-04-13 19:04:48.943000+00:00 | null | null | 66,910,107 | <p>I'd like to use fairlearn to encode a binned monotonicity constraint on a binned continuous feature, e.g. income. That is, for input x, model h, and income groups {G_1...G_k}, I'd like to enforce:</p>
<p>E[h(x) | x \in G_i] <= E[h(x) | x \in G_{i+1}] for i from 1 to k-1.</p>
<p>This constraint fits into the form required in the Fair Reductions paper, where we have our vector mu, which is</p>
<p>mu_j=E[h(x)|x \in G_j] for all j,</p>
<p>and our matrix M which has M_i,i=1 and M_i,(i+1)=-1, and zeros elsewhere.</p>
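<p>For concreteness, here is a sketch of the M I mean, for k binned groups (plain NumPy; k is arbitrary):</p>
<pre><code>import numpy as np

k = 5  # e.g. five income bins
M = np.zeros((k - 1, k))
for i in range(k - 1):
    M[i, i] = 1.0        # +E[h(x) | x in G_i]
    M[i, i + 1] = -1.0   # -E[h(x) | x in G_{i+1}]
# monotonicity: M @ mu <= 0 encodes E[h|G_i] <= E[h|G_{i+1}] for all i
</code></pre>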
<p>I was about to try to make an implementation of the Moment class to make a moment for this binned monotonicity constraint, but the documentation for other parts of the code has me worried that fairlearn is only implemented for parity constraints in particular.</p>
<p>For example, in the documentation the _Lagrangian class fairlearn, in fairlearn/fairlearn/reductions/_exponentiated_gradient/_lagrangian.py, it says:</p>
<p>"constraints : fairlearn.reductions.Moment
Object describing the <strong>parity</strong> constraints. This provides the reweighting
and relabelling."</p>
<p>Does this mean that even if I find a way to write a Moment class for binned monotonicity, I may still run into problems from the rest of the code being geared towards parity constraints only?</p>
<p>Additionally, is there any hope (or existing implementation?) of getting a general implementation of fairlearn where you input mu and M, as opposed to a pre-determined constraint?</p> | 2021-04-01 18:55:53.390000+00:00 | 2021-04-13 19:04:48.943000+00:00 | null | fairlearn | ['https://arxiv.org/pdf/1803.02453.pdf'] | 1 |
42,708,592 | <p>Take a look at the <a href="https://cran.r-project.org/web/packages/queuecomputer/index.html" rel="nofollow noreferrer">R package <code>queuecomputer</code></a>. <code>queuecomputer</code> is a computationally efficient method for simulating queues with arbitrary arrival and service times. There is a submitted paper on <a href="https://arxiv.org/abs/1703.02151" rel="nofollow noreferrer">arXiv</a> describing the algorithm used in the package. Examples can be found within the arXiv paper and the <a href="https://cran.r-project.org/web/packages/queuecomputer/vignettes/Howto.html" rel="nofollow noreferrer">vignette</a>. A web app based on the package is available at <a href="https://ace-ebert.shinyapps.io/queue_simulator_mmk/" rel="nofollow noreferrer">https://ace-ebert.shinyapps.io/queue_simulator_mmk/</a> .</p> | 2017-03-10 01:32:37.717000+00:00 | 2017-03-10 01:32:37.717000+00:00 | null | null | 1,238,757 | <p>I have been trying to make <a href="http://www-rcf.usc.edu/%7Ekhoshnev/software.html" rel="nofollow noreferrer">EZSIM</a> work, with no luck; it is a piece of software for building discrete event simulators in a graphical DOS environment. In this software, my simulator and many others (of the other people in the course I'm taking) don't work, but the teacher's simulator (and the examples in the downloaded files) does work.</p>
<p>So, I began to distrust the software.</p>
<p>Do you know any software that resolves the same kind of problems but <em>really works</em>? It will be good if it is free, or I can download an evaluation copy or something like that.</p>
<p>If you don't know any software, do you know any library which might work? Preferably in C#, Ansi C, Java or Delphi.</p> | 2009-08-06 13:08:21.580000+00:00 | 2021-02-26 16:37:52.320000+00:00 | 2021-02-26 16:37:52.320000+00:00 | simulation | ['https://cran.r-project.org/web/packages/queuecomputer/index.html', 'https://arxiv.org/abs/1703.02151', 'https://cran.r-project.org/web/packages/queuecomputer/vignettes/Howto.html', 'https://ace-ebert.shinyapps.io/queue_simulator_mmk/'] | 4 |
55,067,091 | <p>You can start with SqueezeNet, a compact architecture that reaches AlexNet-level accuracy with far fewer parameters:</p>
<p>SqueezeNet: AlexNet-Level accuracy with 50 X fewer parameters and < 0.5MB Model Size:</p>
<p><a href="https://arxiv.org/pdf/1602.07360.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1602.07360.pdf</a></p>
<p>Code:</p>
<pre><code>import numpy as np
import pandas as pd
from keras.datasets import mnist
from keras.models import Model
from keras.layers import (Input, Dense, Lambda, Reshape, Dropout, Activation,
                          Concatenate, Conv2D, MaxPooling2D,
                          GlobalAveragePooling2D)
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import ELU
from keras.optimizers import SGD
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
x_train_CNN=x_train.reshape(60000,28,28,1)
y_train2=pd.get_dummies(y_train)
epochs=3
learning_rate = 0.07
decay_rate = 5e-5
momentum = 0.6
sgd = SGD(lr=learning_rate,momentum=momentum, decay=decay_rate, nesterov=False)
input_shape=(28,28,1)
input_img = Input(batch_shape=(None, 28,28,1))
squeeze=Lambda(lambda x: x ** 2,input_shape=(784,),output_shape=(1,784))(input_img)
squeeze=Reshape((28,28,1))(squeeze)
squeeze=Conv2D(64, 3,3,
border_mode='valid',
input_shape=input_shape)(squeeze)
squeeze=BatchNormalization()(squeeze)
squeeze=ELU(alpha=1.0)(squeeze)
squeeze=MaxPooling2D(pool_size=(2,2))(squeeze)
squeeze=Conv2D(32, 1, 1,
init='glorot_uniform')(squeeze)
squeeze=BatchNormalization()(squeeze)
squeeze=ELU(alpha=1.0)(squeeze)
squeeze_left=squeeze
squeeze_left=Conv2D(64, 3,3,
border_mode='valid',
input_shape=input_shape)(squeeze_left)
squeeze_left=ELU(alpha=1.0)(squeeze_left)
squeeze_right=squeeze
squeeze_right=Conv2D(64, 3,3,
border_mode='valid',
input_shape=input_shape)(squeeze_right)
squeeze_right=ELU(alpha=1.0)(squeeze_right)
squeeze0=Concatenate()([squeeze_left,squeeze_right])
squeeze0=Dropout(0.2)(squeeze0)
squeeze0=GlobalAveragePooling2D()(squeeze0)
squeeze0=Dense(10)(squeeze0)
squeeze0=Activation('sigmoid')(squeeze0)
model = Model(inputs = input_img, outputs = squeeze0)
model.compile(loss='categorical_crossentropy', optimizer=sgd,metrics = ['accuracy'])
model.summary()
model.fit(x_train_CNN,np.array(y_train2),
nb_epoch=15,
batch_size=30,verbose=1)
predictions=np.argmax(model.predict(x_train_CNN,verbose=1),axis=1)
</code></pre>
<p>Neural Network Architecture:</p>
<pre><code>__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_22 (InputLayer) (None, 28, 28, 1) 0
__________________________________________________________________________________________________
lambda_39 (Lambda) (None, 1, 784) 0 input_22[0][0]
__________________________________________________________________________________________________
reshape_39 (Reshape) (None, 28, 28, 1) 0 lambda_39[0][0]
__________________________________________________________________________________________________
conv2d_144 (Conv2D) (None, 26, 26, 64) 640 reshape_39[0][0]
__________________________________________________________________________________________________
batch_normalization_73 (BatchNo (None, 26, 26, 64) 256 conv2d_144[0][0]
__________________________________________________________________________________________________
elu_143 (ELU) (None, 26, 26, 64) 0 batch_normalization_73[0][0]
__________________________________________________________________________________________________
max_pooling2d_37 (MaxPooling2D) (None, 13, 13, 64) 0 elu_143[0][0]
__________________________________________________________________________________________________
conv2d_145 (Conv2D) (None, 13, 13, 32) 2080 max_pooling2d_37[0][0]
__________________________________________________________________________________________________
batch_normalization_74 (BatchNo (None, 13, 13, 32) 128 conv2d_145[0][0]
__________________________________________________________________________________________________
elu_144 (ELU) (None, 13, 13, 32) 0 batch_normalization_74[0][0]
__________________________________________________________________________________________________
conv2d_146 (Conv2D) (None, 11, 11, 64) 18496 elu_144[0][0]
__________________________________________________________________________________________________
conv2d_147 (Conv2D) (None, 11, 11, 64) 18496 elu_144[0][0]
__________________________________________________________________________________________________
elu_145 (ELU) (None, 11, 11, 64) 0 conv2d_146[0][0]
__________________________________________________________________________________________________
elu_146 (ELU) (None, 11, 11, 64) 0 conv2d_147[0][0]
__________________________________________________________________________________________________
concatenate_34 (Concatenate) (None, 11, 11, 128) 0 elu_145[0][0]
elu_146[0][0]
__________________________________________________________________________________________________
dropout_28 (Dropout) (None, 11, 11, 128) 0 concatenate_34[0][0]
__________________________________________________________________________________________________
global_average_pooling2d_21 (Gl (None, 128) 0 dropout_28[0][0]
__________________________________________________________________________________________________
dense_15 (Dense) (None, 10) 1290 global_average_pooling2d_21[0][0]
__________________________________________________________________________________________________
activation_15 (Activation) (None, 10) 0 dense_15[0][0]
==================================================================================================
Total params: 41,386
Trainable params: 41,194
Non-trainable params: 192
__________________________________________________________________________________________________
</code></pre> | 2019-03-08 16:19:10.813000+00:00 | 2019-03-08 17:52:49.993000+00:00 | 2019-03-08 17:52:49.993000+00:00 | null | 55,066,812 | <p>How can I build a computer-vision-based object identification system using AlexNet in Python with Keras and TensorFlow?</p>
<p>Is there anyone who is familiar with <strong>AlexNet</strong>? Please help me build an image classifier from my custom image directory dataset using the AlexNet CNN model.</p> | 2019-03-08 16:00:19.993000+00:00 | 2019-03-08 17:52:49.993000+00:00 | 2019-03-08 16:15:27.523000+00:00 | python-3.x|tensorflow|keras | ['https://arxiv.org/pdf/1602.07360.pdf'] | 1
71,086,582 | <p>Ok, it seems I was finally able to solve it, and I'm posting in case it would interest anyone in the future:</p>
<p>First, I pip installed the <a href="https://scaron.info/doc/pypoman" rel="nofollow noreferrer">pypoman library</a>.
With it, we can move easily between vertices and halfspaces with <code>compute_polytope_halfspaces</code> (i.e., the H-representation of a polytope). So I get the representation P_i: H_i x < h_i for i=1,2 from the vertices (or skip this step if the input is already in that format).</p>
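<p>Putting both steps together - the halfspace construction here and the projection described in the next paragraph - here is a rough Python sketch. It assumes pypoman's <code>compute_polytope_halfspaces</code> and <code>project_polytope</code> behave as documented; the helper name and the toy squares are my own:</p>
<pre><code>import numpy as np
import pypoman

def minkowski_sum_vertices(verts1, verts2):
    # Step 1: H-representations P_i = {x : A_i x <= b_i} from the vertex lists
    A1, b1 = pypoman.compute_polytope_halfspaces(verts1)
    A2, b2 = pypoman.compute_polytope_halfspaces(verts2)
    n = A1.shape[1]
    # Step 2: P_sum = {[x1; x2] : blockdiag(A1, A2) [x1; x2] <= [b1; b2]}
    A = np.block([[A1, np.zeros((A1.shape[0], n))],
                  [np.zeros((A2.shape[0], n)), A2]])
    b = np.hstack([b1, b2])
    # Step 3: project with y = [I I] [x1; x2], i.e. y = x1 + x2
    # (for the difference P1 - P2, use E = np.hstack([np.eye(n), -np.eye(n)]))
    E = np.hstack([np.eye(n), np.eye(n)])
    f = np.zeros(n)
    return pypoman.project_polytope((E, f), (A, b))

# Toy example: two axis-aligned squares in the plane
square1 = [np.array(v) for v in [(0, 0), (1, 0), (1, 1), (0, 1)]]
square2 = [np.array(v) for v in [(0, 0), (2, 0), (2, 2), (0, 2)]]
print(minkowski_sum_vertices(square1, square2))
# compute_polytope_halfspaces on the result recovers H_sum x <= h_sum
</code></pre>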
<p>Now if we set P_sum = {[x1;x2] \in R^2n | [H_1 0; 0 H_2] [x1;x2]' < [h_1,h_2]'}, notice that the Minkowski sum is equivalent to P1+P2 = [I,I] P_sum (idea from Section IV.B of this <a href="https://arxiv.org/pdf/1903.05214.pdf" rel="nofollow noreferrer">paper</a>). So I can use pypoman's <code>project_polytope</code> function to get the Minkowski sum with H_sum x < h_sum in the original dimensions.</p> | 2022-02-11 21:14:33.950000+00:00 | 2022-02-11 21:14:33.950000+00:00 | null | null | 71,068,688 | <p>My goal is to obtain the representations of all faces (in the form of A[x,y,z]'>b) of a polyhedron that is the result of the convex difference between two convex polyhedra. Meaning, finding the intersection of all planes that are the result of the Minkowski difference of P1 - P2 = { x - y | x \in P1, y \in P2 }.</p>
<p>I'm looking for either an established library (Python?) or an idea on how to do this efficiently. I thought about doing something similar to the <a href="https://en.wikipedia.org/wiki/Gilbert%E2%80%93Johnson%E2%80%93Keerthi_distance_algorithm" rel="nofollow noreferrer">GJK algorithm</a>, but I need all of the faces, and not just to quickly compute whether the origin is inside. Moreover, it seems inefficient to use this support function in a methodical way in 3D, or higher dimensions. Also, let's say I got the vertices - do I now need to form the plane equation from two vectors on it with the cross product, for every face, or is there a way to obtain it from the Minkowski sum itself? (keeping in mind the need for higher dimensions)</p> | 2022-02-10 16:25:08.670000+00:00 | 2022-03-11 15:03:00.507000+00:00 | null | python|computational-geometry|point-in-polygon|convex-polygon | ['https://scaron.info/doc/pypoman', 'https://arxiv.org/pdf/1903.05214.pdf'] | 2
50,189,281 | <p>It looks like your OAI server does not accept POST requests for these verbs. OAI servers are expected to treat POST and GET requests in the same way.</p>
<p>For instance, these two requests give the same result:</p>
<pre><code>curl -d "verb=Identify" http://export.arxiv.org/oai2
curl http://export.arxiv.org/oai2?verb=Identify
</code></pre>
<p>It should be the same for your server.</p> | 2018-05-05 12:11:35.753000+00:00 | 2018-05-05 12:11:35.753000+00:00 | null | null | 50,186,043 | <p>I validated my OAI code on openarchives.org. There were many errors; I have cleared most of them, but I still have 2 errors left. One shows an error like 'FAIL POST test 1 for Identify was unsuccessful, an OAI error response was received'. Does anyone know what kind of error this is? I have attached an <a href="https://i.stack.imgur.com/ruYez.png" rel="nofollow noreferrer">error image</a>.
Thank you.</p> | 2018-05-05 05:28:14.983000+00:00 | 2018-05-05 12:11:35.753000+00:00 | null | php|xml|oai|oai-pmh | [] | 0
53,890,033 | <p>Based on this <a href="https://arxiv.org/abs/1607.06450" rel="nofollow noreferrer">paper</a>: <em>"Layer Normalization" - Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton</em></p>
<p>TensorFlow now comes with <a href="https://github.com/tensorflow/tensorflow/blob/r1.12/tensorflow/contrib/rnn/python/ops/rnn_cell.py" rel="nofollow noreferrer"><code>tf.contrib.rnn.LayerNormBasicLSTMCell</code></a>, an LSTM cell with layer normalization and recurrent dropout.</p>
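<p>As a quick, hedged sketch of how it could slot into a <code>dynamic_rnn</code> setup like the one in the question (the shape and hyperparameter values are placeholders; the constructor arguments are from the contrib API, so check your TF version):</p>
<pre><code>import tensorflow as tf

TIME_STEP, INPUT_SIZE, CELL_SIZE = 10, 1, 32  # assumed, as in the question's setup
tf_x = tf.placeholder(tf.float32, [None, TIME_STEP, INPUT_SIZE])

cell = tf.contrib.rnn.LayerNormBasicLSTMCell(
    num_units=CELL_SIZE,
    layer_norm=True,          # apply layer normalization inside the cell
    dropout_keep_prob=0.9)    # recurrent dropout on the cell state
init_s = cell.zero_state(batch_size=1, dtype=tf.float32)
outputs, final_s = tf.nn.dynamic_rnn(cell, tf_x, initial_state=init_s,
                                     time_major=False)
</code></pre>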
<p>Find the documentation <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LayerNormBasicLSTMCell" rel="nofollow noreferrer">here</a>.</p> | 2018-12-21 19:44:27.667000+00:00 | 2018-12-21 19:44:27.667000+00:00 | null | null | 46,915,354 | <p>My current LSTM network looks like this.</p>
<pre class="lang-py prettyprint-override"><code>rnn_cell = tf.contrib.rnn.BasicRNNCell(num_units=CELL_SIZE)
init_s = rnn_cell.zero_state(batch_size=1, dtype=tf.float32) # very first hidden state
outputs, final_s = tf.nn.dynamic_rnn(
rnn_cell, # cell you have chosen
tf_x, # input
initial_state=init_s, # the initial hidden state
time_major=False, # False: (batch, time step, input); True: (time step, batch, input)
)
# reshape 3D output to 2D for fully connected layer
outs2D = tf.reshape(outputs, [-1, CELL_SIZE])
net_outs2D = tf.layers.dense(outs2D, INPUT_SIZE)
# reshape back to 3D
outs = tf.reshape(net_outs2D, [-1, TIME_STEP, INPUT_SIZE])
</code></pre>
<p>Usually, I apply <a href="https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization" rel="noreferrer"><code>tf.layers.batch_normalization</code></a> as batch normalization. But I am not sure if this works in an LSTM network.</p>
<pre class="lang-py prettyprint-override"><code>b1 = tf.layers.batch_normalization(outputs, momentum=0.4, training=True)
d1 = tf.layers.dropout(b1, rate=0.4, training=True)
# reshape 3D output to 2D for fully connected layer
outs2D = tf.reshape(d1, [-1, CELL_SIZE])
net_outs2D = tf.layers.dense(outs2D, INPUT_SIZE)
# reshape back to 3D
outs = tf.reshape(net_outs2D, [-1, TIME_STEP, INPUT_SIZE])
</code></pre> | 2017-10-24 16:13:37.813000+00:00 | 2019-08-14 07:43:49.180000+00:00 | 2018-12-21 23:09:45.233000+00:00 | python|tensorflow|neural-network|lstm|rnn | ['https://arxiv.org/abs/1607.06450', 'https://github.com/tensorflow/tensorflow/blob/r1.12/tensorflow/contrib/rnn/python/ops/rnn_cell.py', 'https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LayerNormBasicLSTMCell'] | 3 |
57,489,798 | <p>If you want to use batch norm for an RNN (LSTM or GRU), you can check out <a href="https://github.com/OlavHN/bnlstm/blob/master/lstm.py" rel="nofollow noreferrer">this implementation</a>, or read the full description in this <a href="http://olavnymoen.com/2016/07/07/rnn-batch-normalization" rel="nofollow noreferrer">blog post</a>. </p>
<p>However, layer normalization has more advantages than batch norm for sequence data. Specifically, "the effect of batch normalization is dependent on the mini-batch size and it is not obvious how to apply it to recurrent networks" (from the paper <a href="https://arxiv.org/abs/1607.06450" rel="nofollow noreferrer">Ba, et al., Layer Normalization</a>).</p>
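<p>As a rough TF1-style sketch of the core operation - per-time-step statistics taken over the feature axis, so the result is independent of mini-batch size (the variable names and epsilon are my own choices):</p>
<pre><code>import tensorflow as tf

def layer_norm(x, scope="ln", eps=1e-5):
    # x: (batch, time, features); normalize each time step over its features
    num_units = x.get_shape()[-1].value
    with tf.variable_scope(scope):
        mean, var = tf.nn.moments(x, axes=[2], keep_dims=True)
        gain = tf.get_variable("gain", [num_units],
                               initializer=tf.ones_initializer())
        shift = tf.get_variable("shift", [num_units],
                                initializer=tf.zeros_initializer())
        return gain * (x - mean) / tf.sqrt(var + eps) + shift
</code></pre>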
<p>For layer normalization, it normalizes the summed inputs within each layer. You can check out this <a href="https://gist.github.com/danijar/ff8b4b81da55c99b5096913c4953d29b" rel="nofollow noreferrer">implementation</a> of layer normalization for a GRU cell.</p> | 2019-08-14 07:13:28.073000+00:00 | 2019-08-14 07:43:49.180000+00:00 | 2019-08-14 07:43:49.180000+00:00 | null | 46,915,354 | <p>My current LSTM network looks like this.</p>
<pre class="lang-py prettyprint-override"><code>rnn_cell = tf.contrib.rnn.BasicRNNCell(num_units=CELL_SIZE)
init_s = rnn_cell.zero_state(batch_size=1, dtype=tf.float32) # very first hidden state
outputs, final_s = tf.nn.dynamic_rnn(
rnn_cell, # cell you have chosen
tf_x, # input
initial_state=init_s, # the initial hidden state
time_major=False, # False: (batch, time step, input); True: (time step, batch, input)
)
# reshape 3D output to 2D for fully connected layer
outs2D = tf.reshape(outputs, [-1, CELL_SIZE])
net_outs2D = tf.layers.dense(outs2D, INPUT_SIZE)
# reshape back to 3D
outs = tf.reshape(net_outs2D, [-1, TIME_STEP, INPUT_SIZE])
</code></pre>
<p>Usually, I apply <a href="https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization" rel="noreferrer"><code>tf.layers.batch_normalization</code></a> as batch normalization. But I am not sure if this works in an LSTM network.</p>
<pre class="lang-py prettyprint-override"><code>b1 = tf.layers.batch_normalization(outputs, momentum=0.4, training=True)
d1 = tf.layers.dropout(b1, rate=0.4, training=True)
# reshape 3D output to 2D for fully connected layer
outs2D = tf.reshape(d1, [-1, CELL_SIZE])
net_outs2D = tf.layers.dense(outs2D, INPUT_SIZE)
# reshape back to 3D
outs = tf.reshape(net_outs2D, [-1, TIME_STEP, INPUT_SIZE])
</code></pre> | 2017-10-24 16:13:37.813000+00:00 | 2019-08-14 07:43:49.180000+00:00 | 2018-12-21 23:09:45.233000+00:00 | python|tensorflow|neural-network|lstm|rnn | ['https://github.com/OlavHN/bnlstm/blob/master/lstm.py', 'http://olavnymoen.com/2016/07/07/rnn-batch-normalization', 'https://arxiv.org/abs/1607.06450', 'https://gist.github.com/danijar/ff8b4b81da55c99b5096913c4953d29b'] | 4 |
41,905,566 | <p>You don't need to use the exact same format - this is just a tutorial. All you need to do is provide one or multiple data layers, with a total of three top blobs: <code>data</code>, <code>data_p</code>, and <code>sim</code>. You can do that in any way you'd like, e.g. LMDB (like in the MNIST example), HDF5, or whatever.</p>
<h3>General explanation</h3>
<p>In the tutorial, they further show an easy way to load the image pairs: you concatenate two images in the channel dimension. For gray-scale, you take two input images, where each has for example the dimension <code>[1, 1, 28, 28]</code> (i.e. 1 image, 1 channel, 28x28 resolution). Then you concatenate them to be one image of size <code>[1, 2, 28, 28]</code> and save them e.g. to an LMDB.</p>
<p>In the network, the first step after loading the data is a "Slice" layer, which takes this image, and slices it (i.e. it splits it up) along that axis, thus creating two Top blobs, <code>data</code> and <code>data_p</code>.</p>
<h3>How to create the data files?</h3>
<p>There is no single right way to do that. The <a href="https://github.com/BVLC/caffe/blob/master/examples/siamese/convert_mnist_siamese_data.cpp" rel="nofollow noreferrer">code</a> from the tutorial is only for the MNIST set, so unless you have the <em>exact</em> same format, you can't use it without changes. You have a couple of possibilities:</p>
<ol>
<li><p>Convert your images to the MNIST format. Then, the code from the Caffe tutorial works out-of-the-box. It appears that you are trying this - if you need help with that, please be more specific: explain what "mnisten" is, include your code, and so on.</p>
</li>
<li><p>Write your own script to convert the images.
This is actually very simple: all you need to do is read the images in your favorite programming language, select the pairs, calculate the labels, and re-save as LMDB.
This is definitely the more flexible way.</p>
</li>
<li><p>Create HDF5 files with multiple Top blobs. This is very simple to do, but will probably be a bit slower than using LMDB.</p>
</li>
</ol>
<p>What you use is up to you - I'd probably go with HDF5, as this is an easy and very flexible way to start.</p>
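<p>As a rough illustration of the HDF5 route (the dataset names are chosen to match the three top blobs above; the random pairing is only a placeholder for a smarter sampling strategy, as discussed in the next section):</p>
<pre><code>import h5py
import numpy as np

def make_random_pairs(images, labels, n_pairs):
    # images: (N, 1, H, W) float32, labels: (N,) int class ids
    idx_a = np.random.randint(0, len(images), n_pairs)
    idx_b = np.random.randint(0, len(images), n_pairs)
    sim = (labels[idx_a] == labels[idx_b]).astype(np.float32)
    return images[idx_a], images[idx_b], sim

def write_pairs_hdf5(path, data, data_p, sim):
    # dataset names must match the top blob names of Caffe's HDF5Data layer
    with h5py.File(path, "w") as f:
        f.create_dataset("data", data=data.astype(np.float32))
        f.create_dataset("data_p", data=data_p.astype(np.float32))
        f.create_dataset("sim", data=sim)

# The HDF5Data layer then reads a text file listing one .h5 path per line.
</code></pre>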
<h3>How to generate the pairs?</h3>
<p>Now, this is the difficult question here. The code from the tutorial just selects random pairs, which is not really optimal, and will make learning rather slow. You don't just need random pairs, you need meaningful, difficult, but still solvable pairs. How to do that depends entirely on your dataset.</p>
<p>A very sophisticated example is presented in (Radenović, 2016): they use a Siamese network for learning a representation for image retrieval on buildings. They use a Structure-from-Motion (SfM) algorithm to create a 3-D reconstruction of a building, and then sample image pairs from those reconstructions.</p>
<p>How exactly you create the pairs depends on your data - maybe you are fine with random pairs - maybe you need a sophisticated method.</p>
<p><strong>Literature:</strong></p>
<p>F. Radenović, G. Tolias, and O. Chum. "CNN Image Retrieval Learns from BoW: Unsupervised Fine-Tuning with Hard Examples". In: European Conference on Computer Vision (ECCV), 2016. arXiv: <a href="http://arxiv.org/abs/1604.02426" rel="nofollow noreferrer">1604.02426</a>.</p> | 2017-01-28 01:40:34.760000+00:00 | 2017-01-28 01:40:34.760000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 41,904,521 | <p>I have a dataset of grayscale images that I created, which I want to use with the Siamese network example in Caffe, where the documentation uses the MNIST dataset. I want to replace the MNIST dataset with my own dataset.</p>
<p>I see that for doing this I need my dataset to be in the format required by the Siamese network. This can be created using 'create_mnist_siamese.sh', which loads the MNIST dataset in the idx3-ubyte format and creates an LMDB database with two images and a matching/non-matching label at each location.</p>
<p>So I figured that to use the 'create_mnist_siamese.sh' script, my dataset also needs to be in the idx-ubyte format. I tried to convert my dataset to the idx-ubyte format using 'mnisten'. However, I get the error 'error:total images are less than num_tests'. I guess the script is not identifying my images. The folder structure of the dataset is like this:</p>
<pre><code>parent-directory
- subfolder
- subfolder
.
.
.
-txt file
</code></pre>
<p>parent directory name - 'generated dataset'<br>
subfolders - 1, 2, 3 ... (the subfolders are titled 1 - 30, as I want to label the data in each subfolder by the name of the subfolder)<br>
The txt file contains an image title on each row with the class label.</p>
<p>How do I work with my dataset on the Siamese network in Caffe? Is there a direct way to convert my dataset to the LMDB format for the Siamese network? Or do I have to use mnisten? If I do, then how do I fix my error? Any help will be much appreciated. Thanks.</p> | 2017-01-27 23:11:30.540000+00:00 | 2017-04-02 09:29:03.900000+00:00 | null | dataset|deep-learning|caffe|conv-neural-network|mnist | ['https://github.com/BVLC/caffe/blob/master/examples/siamese/convert_mnist_siamese_data.cpp', 'http://arxiv.org/abs/1604.02426'] | 2
65,029,572 | <p>You first have to define your problem and your objectives clearly:</p>
<ul>
<li>If you only want to detect if your image has a defect or not, it's a <strong>binary classification</strong> problem and you affect a label (0 or 1) to each image.</li>
<li>If you want to localise the defect approximatively (like a bounding box), it's an <strong>object detection</strong> problem and it can be realised with one or more classes.</li>
<li>If you want to localise precisely the defect (in order to performe measures for instance) the best is <strong>semantic segmentation</strong> or <strong>instance segmentation</strong>.</li>
<li>If you want to classify the defect, you will need to create classes for each defect you want to classify.</li>
</ul>
<p>There is no magical solution because it depends on the objectives of your project. I can give you the following advice, because I did an internship on a similar project:</p>
<ul>
<li>Look carefully at your data: if you have thousands of images, it will take a long time to create your semantic segmentation dataset. Be smarter by using <a href="https://towardsdatascience.com/data-augmentation-techniques-in-python-f216ef5eed69" rel="nofollow noreferrer">data augmentation techniques</a>.</li>
<li>If you want to classify the defects, be sure to have enough defects of each type to train your network. If your network only sees one defect type per epoch, it can't learn to detect it.</li>
<li>Be sure that your network can detect the defects you're providing (not, for instance, a two-pixel scratch or alignment defects).</li>
</ul>
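<p>If you do go the segmentation route after all, the image-level decision mentioned in the next paragraph can be as simple as thresholding the number of pixels predicted as defective - a tiny sketch, where both threshold values are assumptions to tune on your data:</p>
<pre><code>import numpy as np

def image_has_defect(pred_mask, prob_threshold=0.5, pixel_threshold=50):
    # pred_mask: (H, W) per-pixel defect probabilities from the segmentation net
    defect_pixels = int((pred_mask > prob_threshold).sum())
    return defect_pixels >= pixel_threshold
</code></pre>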
<p>Performing semantic segmentation only to know whether there is a defect or not seems overkill, because it's a long and complex process (rebuilding the image, keeping intermediate images in memory in a <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-Net</a>, lots of computation). If you really want to apply this method, you may set a threshold on the number of pixels detected as defective to decide whether the image counts as 'presenting a defect' or not.</p> | 2020-11-26 21:44:14.057000+00:00 | 2020-11-26 21:44:14.057000+00:00 | null | null | 65,027,545 | <p>I am kind of new to semantic segmentation. I am trying to perform segmentation of images that have defects.
I have the defect images annotated using an annotation tool and I created the mask for each image. I want to predict whether an image has a defect and where exactly it is located. But my problem is that my defects do not look the same in all the images. Example: defects on steel - steel breakage, eroded surface, etc. I am just trying to classify whether the image has a defect or not and where it is located. So is it wrong to train the neural network with all of these types considered as defects even though they don't all look alike?</p>
<p>I thought of doing a binary segmentation of defect vs. no defect. If I am not correct, how can I perform segmentation for defect and non-defect images?</p> | 2020-11-26 18:31:02.817000+00:00 | 2020-11-27 13:36:00.843000+00:00 | null | deep-learning|neural-network|conv-neural-network|semantic-segmentation | ['https://towardsdatascience.com/data-augmentation-techniques-in-python-f216ef5eed69', 'https://arxiv.org/abs/1505.04597'] | 2
62,257,119 | <p>I don't think there is one name used by everyone, but, for example, in this paper <a href="https://arxiv.org/pdf/1805.01035" rel="nofollow noreferrer">https://arxiv.org/pdf/1805.01035</a> they call it split-and-rephrase (this term is used in several other papers too). </p> | 2020-06-08 07:33:06.670000+00:00 | 2020-06-08 07:33:06.670000+00:00 | null | null | 62,254,778 | <p>I've got a question about the name of an NLP task - splitting up a complex sentence into simple ones.
For example, if I have this sentence:</p>
<p>"Input t on the username input box and password input box."</p>
<p>I'd like to split this sentence into simpler sentences:</p>
<p>"Input t on the username input box"
"Input t on the password input box"</p>
<p>What would this problem be called? I've tried clause extraction <a href="https://stackoverflow.com/questions/39320015/how-to-split-an-nlp-parse-tree-to-clauses-independent-and-subordinate">here</a>, but I don't want clauses - rather, fully formed sentences. I've also tried 'sentence simplification', but it goes beyond what I'm trying to do, with its lexical simplification and all. </p>
<p>Thanks </p> | 2020-06-08 03:57:35.353000+00:00 | 2020-06-08 07:33:06.670000+00:00 | null | nlp|nltk|stanford-nlp|spacy | ['https://arxiv.org/pdf/1805.01035'] | 1 |
63,844,246 | <blockquote>
<p>I am trying to speed up this training process to run the 2 models at
the same time using multiple devices</p>
</blockquote>
<p>I doubt that would bring any speed-up, especially in the case of:</p>
<blockquote>
<p>(e.g., loading one model on CPU and not interrupting the GPU training
of the other model).</p>
</blockquote>
<p>as deep learning is a pipeline which also utilizes the CPU, possibly multiple cores (say, for data loading, but also for receiving metrics, gathering them, etc.).</p>
<p>Furthermore, the CPU is rather ineffective for neural network training when compared to a GPU/TPU, unless you use an architecture tailored for it (something like <a href="https://arxiv.org/abs/1801.04381" rel="nofollow noreferrer">MobileNet</a>). If you were to train the student on the CPU, you might significantly slow down the pipeline elements of the <code>teacher</code>.</p>
<blockquote>
<p>What is the proper way to run 2 models in parallel?</p>
</blockquote>
<p>Again, depending on the model, but it would be best to utilize <code>2</code> GPUs for training and split CPU cores for other tasks between them. In your case you would have to synchronize teacher and student predictions across two devices though.</p>
<blockquote>
<p>Can I use Python multiprocessing library to start 2 processes for the 2 models, i.e., loading 2 model instances and running forward()?</p>
</blockquote>
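<p>In principle, yes - here is a rough, untested sketch using PyTorch's multiprocessing wrapper, assuming a frozen, CPU-resident teacher handing soft targets to the main training loop through queues (the worker layout, queue protocol, and the toy stand-in models are my own choices, and in practice the hand-off cost may eat the gains):</p>
<pre><code>import torch
import torch.multiprocessing as mp

def teacher_worker(teacher, in_q, out_q):
    # frozen teacher: forward passes only, kept on CPU
    teacher.eval()
    with torch.no_grad():
        while True:
            batch = in_q.get()
            if batch is None:  # poison pill stops the worker
                break
            out_q.put(teacher(batch))

if __name__ == "__main__":
    mp.set_start_method("spawn")
    in_q, out_q = mp.Queue(), mp.Queue()
    teacher = torch.nn.Linear(784, 10)  # stand-in for the pretrained teacher
    worker = mp.Process(target=teacher_worker, args=(teacher, in_q, out_q))
    worker.start()
    for _ in range(10):  # stand-in for the training loop over a DataLoader
        batch = torch.randn(32, 784)
        in_q.put(batch)
        # ... student forward/backward on GPU would overlap here ...
        soft_targets = out_q.get()  # blocks until the teacher catches up
    in_q.put(None)
    worker.join()
</code></pre>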
<p>PyTorch provides primitives (e.g. "their" <code>multiprocessing</code> wrapper, futures, etc.) which could possibly be used for that; I'm not sure about <code>mxnet</code> or the like.</p> | 2020-09-11 09:24:00.973000+00:00 | 2020-09-11 09:24:00.973000+00:00 | null | null | 63,843,254 | <p>I am implementing fast DNN model training using knowledge distillation, as illustrated in the figure below, to run the teacher and student models in parallel.</p>
<p>I checked some popular repos like <a href="https://github.com/NervanaSystems/distiller/blob/master/distiller/knowledge_distillation.py#L105" rel="nofollow noreferrer">NervanaSystems/distiller</a> and <a href="https://github.com/peterliht/knowledge-distillation-pytorch/blob/e4c40132fed5a45e39a6ef7a77b15e5d389186f8/train.py#L277" rel="nofollow noreferrer">peterliht/knowledge-distillation-pytorch</a>. They execute the forward operations of the student and teacher models step by step, i.e., not in parallel on different devices (GPU or CPU).</p>
<p>I am trying to speed up this training process to run the 2 models at the same time using multiple devices (e.g., loading one model on CPU and not interrupting the GPU training of the other model).</p>
<p>What is the proper way to run 2 models in parallel? Can I use Python <code>multiprocessing</code> library to start 2 processes for the 2 models, i.e., loading 2 model instances and running <code>forward()</code>? I am using MXNet but this is a general question for all ML frameworks.</p>
<p>Edit:<br />
My plan is to put a light-weight pre-trained teacher model on the CPU, which only runs the forward pass with frozen parameters.<br />
The student model is a large model to be trained on a GPU (in a distributed fashion).
This task is not for model compression.
I suppose moving a light task (teacher's forward pass) to CPU can increase the overlap and make this pipeline faster.<br />
The idea is from a workshop paper: <a href="http://learningsys.org/nips18/assets/papers/24CameraReadySubmissionInfer2Train.pdf" rel="nofollow noreferrer">Infer2Train: leveraging inference for better training of deep networks</a>.</p>
<p><a href="https://i.stack.imgur.com/uOrit.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uOrit.png" alt="knowledge distillation illustration" /></a></p> | 2020-09-11 08:13:06.683000+00:00 | 2020-09-11 10:36:54.820000+00:00 | 2020-09-11 10:36:54.820000+00:00 | tensorflow|machine-learning|neural-network|pytorch|mxnet | ['https://arxiv.org/abs/1801.04381'] | 1 |
48,103,935 | <p>Zeiler and Fergus (<a href="https://arxiv.org/pdf/1311.2901.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1311.2901.pdf</a>) have a good figure showing kernel responses to different parts of a picture. </p>
<p>Each kernel convolves over the image, so all the kernels (potentially) see all the pixels. Each of your 6 filters will "learn" a different feature. In the first layer, some will typically learn features that look like lines (horizontal, vertical, diagonal) and some will learn colour blobs. In the next layer, these get combined: pixels into edges into shapes.</p>
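<p>To make "each filter learns one pattern" concrete, here is a small NumPy sketch convolving one image with two hand-set kernels - the Prewitt pair discussed just below (the random image is only a stand-in):</p>
<pre><code>import numpy as np
from scipy.signal import convolve2d

# Two fixed 3x3 kernels; each responds to a different edge orientation,
# just as each learned filter in a conv layer responds to its own pattern.
prewitt_x = np.array([[1, 0, -1],
                      [1, 0, -1],
                      [1, 0, -1]])   # responds to vertical edges
prewitt_y = prewitt_x.T              # responds to horizontal edges

image = np.random.rand(28, 28)       # stand-in for a 28x28 input
vertical_map = convolve2d(image, prewitt_x, mode="valid")
horizontal_map = convolve2d(image, prewitt_y, mode="valid")
# A layer with 6 filters would produce 6 such feature maps, one per kernel.
</code></pre>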
<p>It might help to look up Prewitt filters: <a href="https://en.m.wikipedia.org/wiki/Prewitt_operator" rel="nofollow noreferrer">https://en.m.wikipedia.org/wiki/Prewitt_operator</a>.
In this case, it is a single 3x3 kernel which convolves over the whole image and gives a feature map showing horizontal (or vertical) edges. You need one filter for horizontal and a different filter for vertical, but you can combine them to give both. In a neural network, the kernel values are learned from data but the feature maps at each layer are still produced by convolving the kernel over the input.</p> | 2018-01-04 21:36:58.813000+00:00 | 2018-01-04 21:36:58.813000+00:00 | null | null | 48,102,906 | <p>What is the advantage of using multiples of the same filter in convolutional networks in deep learning?</p>
<p>For example:
We use 6 filters of size [5,5] at the first layer to scan the image data, which is a matrix of size [28,28].
The question is why we do not use only a single filter of size [5,5] but use 6 or more of them. In the end they will scan the exact same pixels. I can see that the random weights might be different, but the DL model will adjust to them anyway.</p>
<p>So, specifically what is the main advantage and purpose of using multiple filters of the same shape then in convnets?</p> | 2018-01-04 20:15:17.343000+00:00 | 2018-01-05 11:42:02.947000+00:00 | 2018-01-04 21:57:37.630000+00:00 | tensorflow|machine-learning|neural-network|deep-learning|conv-neural-network | ['https://arxiv.org/pdf/1311.2901.pdf', 'https://en.m.wikipedia.org/wiki/Prewitt_operator'] | 2 |
48,103,096 | <h2>Why is filter shape the same?</h2>
<p>First, the kernel shape is the same merely to speed up computation. This allows applying the convolution in a batch, for example using an im2col transformation and matrix multiplication. This also makes it convenient to store all the weights in one multidimensional array. Though mathematically one can imagine using several filters of different shapes. </p>
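<p>As a rough NumPy sketch of that trick (a naive im2col - real implementations use strided views; the 6 filters of size 5x5 on a 28x28 input mirror the question's numbers):</p>
<pre><code>import numpy as np

def im2col(image, k):
    # stack every k x k patch as a column, so convolving with any
    # number of filters becomes a single matrix multiplication
    h, w = image.shape
    return np.stack([image[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1)
                     for j in range(w - k + 1)], axis=1)

image = np.random.rand(28, 28)        # the question's 28x28 input
filters = np.random.randn(6, 5 * 5)   # 6 filters of size 5x5, flattened
patches = im2col(image, 5)            # (25, 24*24) patch matrix
feature_maps = (filters @ patches).reshape(6, 24, 24)  # one map per filter
</code></pre>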
<p>Some architectures, such as the Inception network, use this idea and apply different convolutional layers (with different kernels) in parallel and in the end stack up the feature maps. This turned out to be very useful.</p>
<h2>Why isn't one filter enough?</h2>
<p>Because each filter is going to learn <em>exactly one</em> pattern that will excite it, e.g., a Gabor-like vertical line. A single filter <em>can't be equally excited</em> by a horizontal and a vertical line. So to recognize an object, one such filter is not enough.</p>
<p>For example, in order to recognize a cat, a neural network might need to recognize the eyes, the tail, ... of all which are composed of different lines and edges. The network can be confident about the object on the image if it can recognize a whole variety of different shapes and patterns in the image. This will be true even for a simple data set like MNIST.</p>
<h2>Why do filters learn different patterns?</h2>
<p>A simple analogy: imagine a linear regression network with one hidden layer. Each neuron in the hidden layer is connected to each input feature, so they are all symmetrical. But after some training, different neurons are going to learn different high-level features, which are useful to make a correct prediction.</p>
<p>There's a catch: if the network is initialized with zeros, it's going to suffer from <a href="https://stackoverflow.com/q/20027598/712995">symmetry issues</a> and in general won't converge to the target distribution. So it's essential to create asymmetry in the neurons from the very beginning and let different neurons get excited differently from the same input data. This in turn leads to different gradients getting applied to the weights, usually increasing the asymmetry even more. That's why different neurons are trained differently.</p>
<p>It's important to mention another issue that is still possible with random init, called <em>co-adaptation</em>: when different neurons learn to adapt and depend on each other. This problem has been addressed by the <a href="https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf" rel="nofollow noreferrer">dropout technique</a> and later by <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">batch normalization</a>, essentially by adding noise to the training process in various ways. Putting it all together, neurons are much more likely to learn different latent representations of the data.</p>
<h2>Further links</h2>
<p>I highly recommend reading the <a href="http://cs231n.github.io/convolutional-networks/" rel="nofollow noreferrer">CS231n tutorial by Stanford</a> to gain better intuition about convolutional neural networks.</p> | 2018-01-04 20:29:31.923000+00:00 | 2018-01-05 11:42:02.947000+00:00 | 2018-01-05 11:42:02.947000+00:00 | null | 48,102,906 | <p>What is the advantage of using multiples of the same filter in convolutional networks in deep learning?</p>
<p>For example:
We use 6 filters of size [5,5] at the first layer to scan the image data, which is a matrix of size [28,28].
The question is why we do not use only a single filter of size [5,5] but use 6 or more of them. In the end they will scan the exact same pixels. I can see that the random weights might be different, but the DL model will adjust to them anyway.</p>
<p>So, specifically what is the main advantage and purpose of using multiple filters of the same shape then in convnets?</p> | 2018-01-04 20:15:17.343000+00:00 | 2018-01-05 11:42:02.947000+00:00 | 2018-01-04 21:57:37.630000+00:00 | tensorflow|machine-learning|neural-network|deep-learning|conv-neural-network | ['https://stackoverflow.com/q/20027598/712995', 'https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf', 'https://arxiv.org/abs/1502.03167', 'http://cs231n.github.io/convolutional-networks/'] | 4 |
72,854,779 | <p>I know this is old but I see nobody mentioning HOTA (<a href="https://arxiv.org/pdf/2009.07736.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2009.07736.pdf</a>). This has become the new standard for multi-object tracking as can be seen here: <a href="https://arxiv.org/abs/2202.13514" rel="nofollow noreferrer">https://arxiv.org/abs/2202.13514</a> and <a href="https://arxiv.org/pdf/2110.06864.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2110.06864.pdf</a></p>
<p>MOTA and IDF1 overemphasize detection and association, respectively. HOTA explicitly measures both types of errors and combines them in a balanced way. HOTA also incorporates measuring the localization accuracy of tracking results, which isn't present in either MOTA or IDF1.</p> | 2022-07-04 09:48:21.857000+00:00 | 2022-07-04 09:48:21.857000+00:00 | null | null | 65,304,345 | <p>I want to compare multiple computer vision <strong>Multi-Object Tracking (MOT)</strong> methods on my own dataset, so first I want to choose the best metrics for this task. I have carried out some research in the scientific literature and I came to the conclusion that there are three main sets of metrics:</p>
<ol>
<li>Metrics from <a href="https://www.researchgate.net/publication/4246190_Tracking_of_Multiple_Partially_Occluded_Humans_based_on_Static_Body_Part_Detection" rel="nofollow noreferrer">"Tracking of Multiple, Partially Occluded Humans based on Static Body Part
Detection"</a></li>
<li><a href="https://www.researchgate.net/publication/26523191_Evaluating_multiple_object_tracking_performance_The_CLEAR_MOT_metrics" rel="nofollow noreferrer">CLEAR MOT metrics</a></li>
<li><a href="https://arxiv.org/abs/1609.01775" rel="nofollow noreferrer">ID scores</a></li>
</ol>
<p>Therefore, I wonder which of the above metrics I should attach the greatest importance to.</p>
<p>And I would like to ask if anyone has encountered a similar issue and has any thoughts on this topic that could justify and help me to choose the best metrics for the above task.</p> | 2020-12-15 10:46:44.110000+00:00 | 2022-07-04 09:48:21.857000+00:00 | null | object|deep-learning|computer-vision|object-detection|tracking | ['https://arxiv.org/pdf/2009.07736.pdf', 'https://arxiv.org/abs/2202.13514', 'https://arxiv.org/pdf/2110.06864.pdf'] | 3 |
66,688,997 | <p>You can refer to the metrics used in the <strong>MOT Challenge.</strong></p>
<p>Here are the results for the MOT20 Challenge, and they have included the metrics used:
<a href="https://motchallenge.net/results/MOT20/" rel="nofollow noreferrer">https://motchallenge.net/results/MOT20/</a></p>
<p>Based on the <a href="https://arxiv.org/pdf/2003.09003.pdf" rel="nofollow noreferrer">MOT20 paper</a>, they say in section 4.1.7 (page 7):</p>
<blockquote>
<p>As we have seen in this section, there are a number of reasonable performance measures to assess the quality of a tracking system, which makes it rather difficult to reduce the evaluation to one single number. To nevertheless give an intuition on how each tracker performs compared to its competitors, we compute and show the average rank for each one by ranking all trackers according to each metric and then averaging across all performance measures.</p>
</blockquote> | 2021-03-18 10:15:54.693000+00:00 | 2021-03-18 10:15:54.693000+00:00 | null | null | 65,304,345 | <p>I want to compare multiple computer vision <strong>Multi-Object Tracking (MOT)</strong> methods on my own dataset, so first I want to choose the best metrics for this task. I have carried out some research in the scientific literature and I came to the conclusion that there are three main sets of metrics:</p>
<ol>
<li>Metrics from <a href="https://www.researchgate.net/publication/4246190_Tracking_of_Multiple_Partially_Occluded_Humans_based_on_Static_Body_Part_Detection" rel="nofollow noreferrer">"Tracking of Multiple, Partially Occluded Humans based on Static Body Part
Detection"</a></li>
<li><a href="https://www.researchgate.net/publication/26523191_Evaluating_multiple_object_tracking_performance_The_CLEAR_MOT_metrics" rel="nofollow noreferrer">CLEAR MOT metrics</a></li>
<li><a href="https://arxiv.org/abs/1609.01775" rel="nofollow noreferrer">ID scores</a></li>
</ol>
<p>Therefore, I wonder which of the above metrics I should attach the greatest importance to.</p>
<p>And I would like to ask if anyone has encountered a similar issue and has any thoughts on this topic that could justify and help me to choose the best metrics for the above task.</p> | 2020-12-15 10:46:44.110000+00:00 | 2022-07-04 09:48:21.857000+00:00 | null | object|deep-learning|computer-vision|object-detection|tracking | ['https://motchallenge.net/results/MOT20/', 'https://arxiv.org/pdf/2003.09003.pdf'] | 2 |
43,961,655 | <p>This is a very interesting problem and I have been working on it for a while. The first thing to consider is that the binary knapsack problem with dependent item weights/values is not trivial at all. You may consider using Bayesian networks, Markov models, and other similar techniques for solving this problem. Nonetheless, any practical approach to this problem has to make some assumptions either about the optimization model or its input. Here is an example of formulating the binary knapsack problem with value-dependent items: <a href="https://arxiv.org/pdf/1702.06662.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1702.06662.pdf</a></p>
<p>In this work, the authors propose modeling the input (value-related dependencies) using fuzzy graphs and then solving the optimization problem with the proposed integer linear programming model. An extended version of the work has been accepted for publication and will soon be available online. </p>
<p>Please do not hesitate to contact me if you need further information. I can also provide you with the source code of the model if needed. </p> | 2017-05-14 07:58:08.940000+00:00 | 2017-05-14 07:58:08.940000+00:00 | null | null | 38,868,205 | <p>The standard 0/1 knapsack problem requires that the weight of every item is independent of the others. DP is then an efficient algorithm for the solution. But now I have met a similar but extended version of this problem, in which </p>
<blockquote>
<p>the weights of new items are dependent on the items already in
the knapsack.</p>
</blockquote>
<p><strong>For example</strong>, we have 5 items <code>a</code>, <code>b</code>, <code>c</code>, <code>d</code> and <code>e</code> with weights <code>w_a</code>, ..., <code>w_e</code>. Items <code>b</code> and <code>c</code> have a weight dependency.</p>
<p>When <code>b</code> is already in the knapsack, the weight of item <code>c</code> will be <em>smaller</em> than <code>w_c</code> because it can share some space with <code>b</code>, i.e. <code>weight(b&c) < w_b + w_c</code>. Symmetrically, when <code>c</code> is already in the knapsack, the weight of <code>b</code> will be smaller than <code>w_b</code>.</p>
<p>This uncertainty breaks the original DP algorithm, since DP relies on the correctness of previous iterations, which may no longer hold. I have read some papers about knapsack problems, but they either have dependencies on <em>profit</em> (the <em>quadratic knapsack problem</em>), or have variable weights that follow a random distribution (the <em>stochastic knapsack problem</em>). I am also aware of the previous question <a href="https://stackoverflow.com/questions/27928153/1-0-knapsack-variation-with-weighted-edges">1/0 Knapsack Variation with Weighted Edges</a>, but there is only a very generic answer available, and no answer about what this knapsack variant is called.</p>
<p><strong>One existing solution:</strong></p>
<p>I have also read one approximate solution in <a href="http://www.nec-labs.com/uploads/Documents/Data-Management/miso-sigmod2014.pdf" rel="noreferrer">a paper</a> about DBMS optimizations, where they <code>group the related items as one combined item for the knapsack</code>. If we use this technique on our example, the items for the knapsack will be <code>a</code>, <code>bc</code>, <code>d</code>, <code>e</code>, so there are no more dependencies between any two of these four items. However, it is easy to construct an example where this does not give the optimal result, such as when <code>an item with "small weight and benefit" is grouped with another item with "large weight and benefit"</code>. In this example, the "small" item should not be selected in the solution, but it is selected together with the "large" item.</p>
<p><strong>Question:</strong></p>
<p>Is there any kind of efficient solving technique that can get the optimal result, or at least a result with some error guarantee? Or am I taking the wrong direction in modelling this problem?</p> | 2016-08-10 08:37:28.823000+00:00 | 2017-08-04 14:52:17.613000+00:00 | 2017-05-23 12:02:43.927000+00:00 | algorithm|knapsack-problem | ['https://arxiv.org/pdf/1702.06662.pdf'] | 1
7,072,206 | <p>A distributive law solution l : MN -> NM is enough to guarantee monadicity of NM. To see this you need a unit and a mult; I'll focus on the mult (the unit is unit_N unit_M):</p>
<pre><code>NMNM - l -> NNMM - mult_N mult_M -> NM
</code></pre>
<p>This does <em>not</em> guarantee that MN is a monad.</p>
<p>The crucial observation, however, comes into play when you have distributive law solutions</p>
<pre><code>l1 : ML -> LM
l2 : NL -> LN
l3 : NM -> MN
</code></pre>
<p>thus, LM, LN and MN are monads. The question arises as to whether LMN is a monad, either by</p>
<pre><code>(MN)L -> L(MN)
</code></pre>
<p>or by</p>
<pre><code>N(LM) -> (LM)N
</code></pre>
<p>We have enough structure to make these maps. However, as <a href="http://arxiv.org/abs/0710.1120" rel="noreferrer">Eugenia Cheng observes</a>, we need a hexagonal condition (that amounts to a presentation of the Yang-Baxter equation) to guarantee monadicity of either construction. In fact, with the hexagonal condition, the two different monads coincide.</p> | 2011-08-16 00:07:07.910000+00:00 | 2011-08-16 00:07:07.910000+00:00 | null | null | 7,040,844 | <blockquote>
<p>Applicatives compose, monads don't.</p>
</blockquote>
<p>What does the above statement mean? And when is one preferable to the other?</p> | 2011-08-12 13:35:41.860000+00:00 | 2021-05-19 13:20:36.780000+00:00 | 2015-06-18 06:11:53.830000+00:00 | haskell|functional-programming|monads|monad-transformers|applicative | ['http://arxiv.org/abs/0710.1120'] | 1
46,156,981 | <p>I see you've found the <a href="https://github.com/tensorflow/magenta/blob/master/magenta/models/sketch_rnn/model.py#L262" rel="nofollow noreferrer">sketch-rnn code</a> from Magenta; I'm working on something similar. I found that this piece of code is not stable by itself. You'll need to stabilize it using constraints, so the <code>tf_2d_normal</code> code can't be used or interpreted in isolation. <code>NaN</code>s and <code>Inf</code>s will start appearing all over the place if your data isn't normalized properly, either in advance or in your loss function. </p>
<p>Below is a more stable version of the loss function that I'm building with Keras. There may be some redundancy in here and it may not be perfect for your needs, but I found it to be working and you can test/adapt it. I included some inline comments on how large negative log values can arise:</p>
<pre><code>import numpy as np
from keras import backend as K
from keras.backend import epsilon

def r3_bivariate_gaussian_loss(true, pred):
    """
    Rank 3 bivariate gaussian loss function
    Returns results of eq # 24 of http://arxiv.org/abs/1308.0850
    :param true: truth values with at least [mu1, mu2, sigma1, sigma2, rho]
    :param pred: values predicted from a model with the same shape requirements as truth values
    :return: the log of the summed max likelihood
    """
    x_coord = true[:, :, 0]
    y_coord = true[:, :, 1]
    mu_x = pred[:, :, 0]
    mu_y = pred[:, :, 1]

    # exponentiate the sigmas and also make correlative rho between -1 and 1.
    # eq. # 21 and 22 of http://arxiv.org/abs/1308.0850
    # analogous to https://github.com/tensorflow/magenta/blob/master/magenta/models/sketch_rnn/model.py#L326
    sigma_x = K.exp(K.abs(pred[:, :, 2]))
    sigma_y = K.exp(K.abs(pred[:, :, 3]))
    rho = K.tanh(pred[:, :, 4]) * 0.1  # avoid drifting to -1 or 1 to prevent NaN, you will have to tweak this multiplier value to suit the shape of your data

    norm1 = K.log(1 + K.abs(x_coord - mu_x))
    norm2 = K.log(1 + K.abs(y_coord - mu_y))

    variance_x = K.softplus(K.square(sigma_x))
    variance_y = K.softplus(K.square(sigma_y))
    s1s2 = K.softplus(sigma_x * sigma_y)  # very large if sigma_x and/or sigma_y are very large

    # eq 25 of http://arxiv.org/abs/1308.0850
    z = ((K.square(norm1) / variance_x) +
         (K.square(norm2) / variance_y) -
         (2 * rho * norm1 * norm2 / s1s2))  # z -> -inf if rho * norm1 * norm2 -> inf and/or s1s2 -> 0
    neg_rho = 1 - K.square(rho)  # -> 0 if rho -> {1, -1}
    numerator = K.exp(-z / (2 * neg_rho))  # -> inf if z -> -inf and/or neg_rho -> 0
    denominator = (2 * np.pi * s1s2 * K.sqrt(neg_rho)) + epsilon()  # -> 0 if s1s2 -> 0 and/or neg_rho -> 0
    pdf = numerator / denominator  # -> inf if denominator -> 0 and/or if numerator -> inf
    return K.log(K.sum(-K.log(pdf + epsilon())))  # -> -inf if pdf -> inf
<p>Hope you find this of value.</p> | 2017-09-11 13:32:06.933000+00:00 | 2017-09-11 13:32:06.933000+00:00 | null | null | 43,031,731 | <p>I am trying to implement a loss function which tries to minimize the negative log likelihood of obtaining ground truth values (x,y) from predicted bivariate gaussian distribution parameters. I am implementing this in tensorflow -
Here is the code - </p>
<pre><code>import numpy as np
import tensorflow as tf

def tf_2d_normal(self, x, y, mux, muy, sx, sy, rho):  # method of a model class
    '''
    Function that implements the PDF of a 2D normal distribution
    params:
    x : input x points
    y : input y points
    mux : mean of the distribution in x
    muy : mean of the distribution in y
    sx : std dev of the distribution in x
    sy : std dev of the distribution in y
    rho : Correlation factor of the distribution
    '''
    # eq 3 in the paper
    # and eq 24 & 25 in Graves (2013)
    # Calculate (x - mux) and (y - muy)
    normx = tf.subtract(x, mux)
    normy = tf.subtract(y, muy)
    # Calculate sx*sy
    sxsy = tf.multiply(sx, sy)
    # Calculate the exponential factor
    z = tf.square(tf.div(normx, sx)) + tf.square(tf.div(normy, sy)) - 2*tf.div(tf.multiply(rho, tf.multiply(normx, normy)), sxsy)
    negRho = 1 - tf.square(rho)
    # Numerator
    result = tf.exp(tf.div(-z, 2*negRho))
    # Normalization constant
    denom = 2 * np.pi * tf.multiply(sxsy, tf.sqrt(negRho))
    # Final PDF calculation: negative log-likelihood of the 2D normal
    result = -tf.log(tf.div(result, denom))
    return result
</code></pre>
<p>When I am doing the training, I can see the loss value decreasing, but it goes well below 0. I understand that this should happen because we are minimizing the 'negative' likelihood. But even though the loss values are decreasing, I can't get accurate results. Can someone help verify whether the code that I have written for the loss function is correct?</p>
<p>Also, is such a loss behaviour desirable for training neural nets (specifically RNNs)?</p>
<p>Thanks</p> | 2017-03-26 16:57:10.173000+00:00 | 2017-09-11 13:32:06.933000+00:00 | null | tensorflow|statistics|deep-learning|lstm | ['https://github.com/tensorflow/magenta/blob/master/magenta/models/sketch_rnn/model.py#L262'] | 1
72,850,258 | <p>Stumbled across this looking for the same thing. I ended up writing the following:</p>
<pre><code>#include <bitset>
#include <limits>
#include <type_traits>
#include <boost/multiprecision/cpp_int.hpp>
// N.B.: Prior to Boost 1.79 MinBits and MaxBits were unsigned,
// not size_t - using auto to paper over this difference.
template <
auto MinBits,
auto MaxBits,
boost::multiprecision::cpp_integer_type SignType,
boost::multiprecision::cpp_int_check_type Checked,
class Allocator>
size_t popcount(
const boost::multiprecision::number<
boost::multiprecision::cpp_int_backend<
MinBits,
MaxBits,
SignType,
Checked,
Allocator>>& bits) {
const auto& backend = bits.backend();
// Using std::bitset::count to access a native popcnt.
// In principle the limb type could be larger than what a
// bitset can natively handle, in practice it likely isn't.
using BitsetNativeType = unsigned long long;
constexpr size_t kNativeBits = std::numeric_limits<BitsetNativeType>::digits;
using LimbType = std::decay_t<decltype(*backend.limbs())>;
constexpr size_t kLimbBits = std::numeric_limits<LimbType>::digits;
constexpr size_t kNativesToCount = (kLimbBits + kNativeBits - 1) / kNativeBits;
constexpr size_t kShiftPerNative = kNativesToCount > 1 ? kNativeBits : 0;
static_assert(kNativesToCount > 0, "bad bit counts");
size_t result = 0;
for (size_t i = 0; i != backend.size(); ++i) {
auto limb_value = backend.limbs()[i];
for (size_t j = 0; j != kNativesToCount; ++j) {
const std::bitset<kNativeBits> limb_bitset{BitsetNativeType(limb_value)};
result += limb_bitset.count();
limb_value >>= kShiftPerNative;
}
}
return result;
}
</code></pre>
<p>For native word sizes (i.e., <= 64 bits) this compiles to a single <code>popcnt</code>. Above that it compiles to two <code>popcnt</code> up to 128 bits. Beyond that it auto-vectorizes in clang - presumably to something inspired by <a href="https://arxiv.org/pdf/1611.07612.pdf" rel="nofollow noreferrer">Faster Population Counts Using AVX2 Instructions</a> but frankly I didn't investigate further. In gcc it just doesn't unroll the loop and still uses <code>popcnt</code>.</p>
<hr />
<p>clang demo: <a href="https://godbolt.org/z/fve195TeM" rel="nofollow noreferrer">https://godbolt.org/z/fve195TeM</a><br />
gcc demo: <a href="https://godbolt.org/z/P4ndY8ccq" rel="nofollow noreferrer">https://godbolt.org/z/P4ndY8ccq</a></p> | 2022-07-03 22:16:42.553000+00:00 | 2022-07-03 22:16:42.553000+00:00 | null | null | 24,824,681 | <p>I'm using boost::multiprecision to have fixed but arbitrary precision integers.
I was planning to use <code>number<cpp_int<W, W, unsigned_magnitude, unchecked, void>></code>. The first obvious question is:</p>
<p>Does this datatype have the standard bit pattern of any unsigned integer of a given precision? I heard that signed extended precision numbers don't use 2's complement, but I think that unsigned ones should use the standard representation, or am I missing something?</p>
<p>If this is the case, then how can I get the population count of the integer? There doesn't seem to be any public interface to do that. I would also be happy with a way to get at the internal
memory so I can count the population of the single words used as storage.</p>
<p>Thank you</p> | 2014-07-18 12:08:52.730000+00:00 | 2022-07-03 22:16:42.553000+00:00 | null | c++|boost | ['https://arxiv.org/pdf/1611.07612.pdf', 'https://godbolt.org/z/fve195TeM', 'https://godbolt.org/z/P4ndY8ccq'] | 3 |
49,480,748 | <p>HMM and RNN-LSTM based solutions are not considered highly accurate for SER. I believe the top-ranking algorithm to date is one based on Deep Retinal Convolution Neural Networks (DRCNNs). See <a href="https://arxiv.org/ftp/arxiv/papers/1707/1707.09917.pdf" rel="nofollow noreferrer">Speech emotion recognition using Deep Retinal Convolution Neural Networks</a>, authored by Niu, Yafeng; Zou, Dongsheng; Niu, Yadong; He, Zhongshi; Tan, Hua and published in July of 2017. The authors achieved an average accuracy of over 99% on the following databases: IEMOCAP, EMO-DB, and SAVEE. </p> | 2018-03-25 20:44:25.813000+00:00 | 2018-03-25 20:44:25.813000+00:00 | null | null | 49,476,289 | <p>For building a Speech Emotion Detection and Recognition system, which approach would be better: a Hidden Markov Model or a deep learning (RNN-LSTM) approach? I have to build an SER system and I am confused between the two. If there are better models than these two, kindly tell me.</p> | 2018-03-25 13:17:48.133000+00:00 | 2021-12-07 13:30:34.630000+00:00 | 2018-03-25 20:10:42.970000+00:00 | machine-learning|deep-learning|speech-recognition|recurrent-neural-network|hidden-markov-models | ['https://arxiv.org/ftp/arxiv/papers/1707/1707.09917.pdf'] | 1
57,740,822 | <p>It appears that approaches for optimising directly for these types of metrics have been devised and used successfully, improving scoring and/or training times:</p>
<p><a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/77289" rel="noreferrer">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/77289</a></p>
<p><a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/70328" rel="noreferrer">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/70328</a></p>
<p><a href="https://www.kaggle.com/rejpalcz/best-loss-function-for-f1-score-metric" rel="noreferrer">https://www.kaggle.com/rejpalcz/best-loss-function-for-f1-score-metric</a></p>
<p>One such method involves using sums of probabilities, in place of counts, for the sets of true positives, false positives, and false negatives. For example, F-beta loss (the generalisation of F1) can be calculated with PyTorch as follows:</p>
<pre><code>def forward(self, y_logits, y_true):
    y_pred = self.sigmoid(y_logits)
    # soft counts: per-class probabilities stand in for hard 0/1 predictions
    TP = (y_pred * y_true).sum(dim=1)
    FP = (y_pred * (1 - y_true)).sum(dim=1)
    FN = ((1 - y_pred) * y_true).sum(dim=1)
    fbeta = (1 + self.beta**2) * TP / ((1 + self.beta**2) * TP + (self.beta**2) * FN + FP + self.epsilon)
    fbeta = fbeta.clamp(min=self.epsilon, max=1 - self.epsilon)
    return 1 - fbeta.mean()
</code></pre>
<p>An alternative method is described in this paper:</p>
<p><a href="https://arxiv.org/abs/1608.04802" rel="noreferrer">https://arxiv.org/abs/1608.04802</a></p>
<p>The approach taken optimises for a lower bound on the statistic. Other metrics such as AUROC and AUCPR are also discussed. An implementation in TF of such an approach can be found here:</p>
<p><a href="https://github.com/tensorflow/models/tree/master/research/global_objectives" rel="noreferrer">https://github.com/tensorflow/models/tree/master/research/global_objectives</a></p> | 2019-08-31 18:50:34.640000+00:00 | 2019-08-31 18:58:25.667000+00:00 | 2019-08-31 18:58:25.667000+00:00 | null | 53,354,176 | <p>I am pretty new to neural networks. I am training a network in tensorflow, but the number of positive examples is much much less than negative examples in my dataset (it is a medical dataset).
So, I know that F-score calculated from precision and recall is a good measure of how well the model is trained.
I have used error functions like cross-entropy loss or MSE before, but they are all based on accuracy calculation (if I am not wrong). But how do I use this F-score as an error function? Is there a tensorflow function for that? Or I have to create a new one?</p>
<p>Thanks in advance. </p> | 2018-11-17 18:20:40.807000+00:00 | 2020-05-12 00:21:16.277000+00:00 | null | python|tensorflow|loss-function|precision-recall | ['https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/77289', 'https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/70328', 'https://www.kaggle.com/rejpalcz/best-loss-function-for-f1-score-metric', 'https://arxiv.org/abs/1608.04802', 'https://github.com/tensorflow/models/tree/master/research/global_objectives'] | 5 |
44,841,116 | <blockquote>
<p>Is there an existing pattern or model that describes this?</p>
</blockquote>
<p>What you describe sounds like a network data model, also known as an object or object-oriented data model.</p>
<blockquote>
<p>Are there any drawbacks to this approach?</p>
</blockquote>
<p>Your model doesn't support ternary and higher-arity relationships. It also creates fixed access paths between nodes, which supports node-to-node navigation but can make many queries convoluted. I also don't see any support for subtyping.</p>
<p>Without composite determinants, some situations will be difficult to model or query. You don't support predicates like <code>(Object, Language) -> Name</code> (or <code>(Company, Role) -> Person</code>, etc). One workaround is to create special relationship types, but then your model becomes asymmetric and more complicated to query.</p>
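<p>For contrast, here is a minimal sketch of how a relational schema captures such a composite determinant directly (the table and column names here are hypothetical):</p>
<pre><code>import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE object_name (
    object_id INTEGER NOT NULL,
    language  TEXT    NOT NULL,
    name      TEXT    NOT NULL,
    PRIMARY KEY (object_id, language)  -- the composite determinant (Object, Language) -> Name
);
""")
</code></pre>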
<blockquote>
<p>Are there better approaches?</p>
</blockquote>
<p>The relational model of data handles n-ary relations between object types / domains and allows for the representation of complex predicates. N-ary relations mean it supports object hypergraphs, and user-defined joins provide ad-hoc access paths. Composite determinants are supported, and most implementations offer a variety of integrity constraints.</p>
<p>In particular, look at Object-Role Modeling (<a href="http://www.orm.net" rel="nofollow noreferrer">http://www.orm.net</a>, <a href="https://www.ormfoundation.org" rel="nofollow noreferrer">https://www.ormfoundation.org</a>).</p>
<blockquote>
<p>I want to avoid having to modify the model/schema whenever new object types are added.</p>
</blockquote>
<p>Try doing a web search for "universal schema for knowledge representation". Facts about the world aren't limited to simple atomic observations like "John Smith has a dog named Spot". We have to deal with facts like "Company A is not allowed to distribute product B in regions within 100km of point C after date D if that product contains ingredients E or F". The most powerful knowledge representation we've got so far is natural language, and as far as I know we don't yet have a simple model of its structure.</p>
<p>I'm currently reading <a href="https://arxiv.org/pdf/1102.1889.pdf" rel="nofollow noreferrer">Ologs: A Categorical Framework For Knowledge
Representation</a>. Perhaps this will be of interest to you too.</p> | 2017-06-30 07:37:22.680000+00:00 | 2017-06-30 12:16:27.017000+00:00 | 2017-06-30 12:16:27.017000+00:00 | null | 44,832,654 | <p>I am trying to develop a data model for a very diverse set of interconnected objects. As the application matures, the types of objects supported will increase significantly. I want to avoid having to modify the model/schema whenever new object types are added. </p>
<p>As a simple example, let's say I'm starting with a model of people and buildings. A building can have multiple owners; a person can own multiple buildings; a person can live in a house and work in an office... Future versions might add cars and corporations. Cars can have owners, corporations can manufacture cars, people can work for corporations, etc. Most of the relationships will be many-to-many, some will be one-to-many, very few will be one-to-one. </p>
<p>While concepts like "owner", "employer", or "manufacturer" can be considered properties of a "building", "corporation", or "car" object, I don't want to redefine the data model to support a new property type.</p>
<p>My current idea is to model this similar to a graph, where each piece of data is its own node. The node object would be very simple:</p>
<ul>
<li>Unique identifier</li>
<li>Name (human representation)</li>
<li>Node type</li>
<li>Relationships</li>
</ul>
<p>Extending the previous example, the possible node types would be:</p>
<ul>
<li>Person</li>
<li>Car</li>
<li>Company</li>
<li>Building</li>
</ul>
<p>A relationship would be:</p>
<ul>
<li>Node A</li>
<li>Node B</li>
<li>Relationship type - uses, owns, has, is, etc</li>
</ul>
<p>I have a few questions:</p>
<ul>
<li>Are there any drawbacks to this approach?</li>
<li>Is there an existing pattern or model that describes this?</li>
<li>Are there better approaches?</li>
</ul> | 2017-06-29 18:38:09.237000+00:00 | 2017-06-30 12:16:27.017000+00:00 | null | schema|data-modeling|object-relationships | ['http://www.orm.net', 'https://www.ormfoundation.org', 'https://arxiv.org/pdf/1102.1889.pdf'] | 3 |
69,062,123 | <p>Here is how you can implement blind deconvolution in Python with the <strong>Richardson-Lucy</strong> algorithm:</p>
<p>The iterative update steps for <strong>blind deblurring</strong> (as proposed in <a href="https://arxiv.org/ftp/arxiv/papers/1206/1206.3594.pdf" rel="nofollow noreferrer">[3]</a>), with <strong>unknown</strong> PSF <strong>H</strong>, corrupted image <strong>X</strong>, and restored image <strong>S</strong>, are shown in the equation below:</p>
<p><a href="https://i.stack.imgur.com/a6I9t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a6I9t.png" alt="enter image description here" /></a></p>
<p>The following code shows my implementation of the iterative Bayesian blind deconvolution algorithm proposed in <a href="https://courses.cs.duke.edu/cps258/fall06/references/Nonnegative-iteration/Richardson-alg.pdf" rel="nofollow noreferrer">[1]</a>, mostly with frequency-domain operations (as opposed to the spatial-domain implementation in <a href="https://scikit-image.org/docs/dev/api/skimage.restoration.html#skimage.restoration.richardson_lucy" rel="nofollow noreferrer">[2]</a>). It is similar to that non-blind implementation, except that the blur PSF, which <a href="https://scikit-image.org/docs/dev/api/skimage.restoration.html#skimage.restoration.richardson_lucy" rel="nofollow noreferrer">[2]</a> assumes to be known, must be estimated at each iteration (starting from a random PSF).</p>
<pre><code>import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_blind(image, psf, num_iter=50):
    # `psf` is the initial guess; here it is assumed to have the same shape as `image`
    im_deconv = np.full(image.shape, 0.1, dtype=float)  # initial estimate of the restored image
    eps = 1e-12  # guards against division by zero
    for _ in range(num_iter):
        psf_mirror = np.flip(psf)
        conv = fftconvolve(im_deconv, psf, mode='same')
        relative_blur = image / (conv + eps)
        im_deconv *= fftconvolve(relative_blur, psf_mirror, mode='same')  # image update
        im_deconv_mirror = np.flip(im_deconv)
        psf *= fftconvolve(relative_blur, im_deconv_mirror, mode='same')  # PSF update
    return im_deconv
</code></pre>
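<p>A minimal usage sketch (my own toy example, with a synthetic Gaussian blur standing in for the unknown degradation):</p>
<pre><code>import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
blurred = gaussian_filter(clean, sigma=2)  # simulate an unknown blur
psf0 = rng.random(blurred.shape)
psf0 /= psf0.sum()  # normalized random initial PSF guess
restored = richardson_lucy_blind(blurred, psf0, num_iter=50)
</code></pre>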
<p>The next animation shows image restoration with the non-blind and blind versions of the RL algorithm, respectively.</p>
<p><a href="https://i.stack.imgur.com/h2SuP.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h2SuP.gif" alt="enter image description here" /></a>
<strong>References:</strong></p>
<ol>
<li><a href="https://courses.cs.duke.edu/cps258/fall06/references/Nonnegative-iteration/Richardson-alg.pdf" rel="nofollow noreferrer">https://courses.cs.duke.edu/cps258/fall06/references/Nonnegative-iteration/Richardson-alg.pdf</a></li>
<li><a href="https://scikit-image.org/docs/dev/api/skimage.restoration.html#skimage.restoration.richardson_lucy" rel="nofollow noreferrer">https://scikit-image.org/docs/dev/api/skimage.restoration.html#skimage.restoration.richardson_lucy</a></li>
<li><a href="https://arxiv.org/ftp/arxiv/papers/1206/1206.3594.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/arxiv/papers/1206/1206.3594.pdf</a></li>
</ol> | 2021-09-05 09:26:24.850000+00:00 | 2021-09-08 06:50:54.200000+00:00 | 2021-09-08 06:50:54.200000+00:00 | null | 68,270,030 | <p>I'm working on blind deconvolution.</p>
<p>While iterating L2-norm regularization, I want to update the PSF at the same time. When I looked it up, I found a function called <a href="https://www.mathworks.com/help/images/ref/deconvblind.html" rel="nofollow noreferrer"><code>deconvblind</code></a> in Matlab:</p>
<blockquote>
<p>[J,PSF] = deconvblind(I,INITPSF) deconvolves image I using the maximum likelihood algorithm,
returning both the deblurred image, J, and a restored point-spread function, PSF.
The input array, I, and your initial guess at the PSF, INITPSF, can be numeric arrays or cell arrays.
(Use cell arrays when you want to be able to perform additional
deconvolutions that start where your initial deconvolution finished.
See Resuming Deconvolution for more information.)
The restored PSF is a positive array that is the same size as INITPSF, normalized so its sum adds up to 1.</p>
</blockquote>
<p>Is there a function similar to <code>deconvblind</code> in Python?</p> | 2021-07-06 11:49:34.490000+00:00 | 2021-09-08 06:50:54.200000+00:00 | 2021-09-05 15:06:10.163000+00:00 | python|matlab|image-processing|convolution|deconvolution | ['https://i.stack.imgur.com/a6I9t.png', 'https://arxiv.org/ftp/arxiv/papers/1206/1206.3594.pdf', 'https://i.stack.imgur.com/a6I9t.png', 'https://i.stack.imgur.com/h2SuP.gif', 'https://scikit-image.org/docs/dev/api/skimage.restoration.html#skimage.restoration.richardson_lucy', 'https://scikit-image.org/docs/dev/api/skimage.restoration.html#skimage.restoration.richardson_lucy', 'https://scikit-image.org/docs/dev/api/skimage.restoration.html#skimage.restoration.richardson_lucy', 'https://i.stack.imgur.com/h2SuP.gif', 'https://courses.cs.duke.edu/cps258/fall06/references/Nonnegative-iteration/Richardson-alg.pdf', 'https://scikit-image.org/docs/dev/api/skimage.restoration.html#skimage.restoration.richardson_lucy', 'https://arxiv.org/ftp/arxiv/papers/1206/1206.3594.pdf'] | 11 |
30,249,576 | <p>Codd would simply have excluded tuples containing nulls from the application of functional dependencies. As far as I know, there is no self-consistent way to cope with functional dependencies in pure SQL using Codd's 3-valued logic.</p>
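<p>As a concrete illustration of that convention, here is a minimal Python sketch (my own, not from the literature) that checks whether an FD holds while excluding tuples with null markers:</p>
<pre><code>def fd_holds(rows, X, Y):
    """Check X -> Y, skipping rows where any attribute of X or Y is None."""
    seen = {}
    for row in rows:
        if any(row[a] is None for a in X + Y):
            continue  # Codd's convention: tuples with nulls don't participate
        key = tuple(row[a] for a in X)
        val = tuple(row[a] for a in Y)
        if seen.setdefault(key, val) != val:
            return False
    return True

rows = [
    {"bid": 1, "rentdate": "2014-01-01", "returndate": "2014-01-10", "cid": 7},
    {"bid": 1, "rentdate": None, "returndate": None, "cid": 8},  # excluded
]
print(fd_holds(rows, ["bid", "rentdate"], ["returndate", "cid"]))  # True
</code></pre>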
<p>I would therefore expect most people to tell you to avoid nulls. Obviously, that is not always received as a practically useful recommendation.</p>
<p>There has been academic work on the topic, however, if you are interested. We have a paper that covers this specific issue:</p>
<blockquote>
<p>Antonia Badia and Daniel Lemire, Functional dependencies with null
markers, The Computer Journal (2015) 58 (5): 1160-1168. <a href="http://arxiv.org/abs/1404.4963" rel="nofollow">http://arxiv.org/abs/1404.4963</a></p>
</blockquote> | 2015-05-15 00:21:06.110000+00:00 | 2015-05-15 00:28:41.973000+00:00 | 2015-05-15 00:28:41.973000+00:00 | null | 27,007,398 | <p>I have a set of functional dependencies F over R = {cid, cname, bid, name, rentdate, returndate, cost} for a bookstore; there is just one table.</p>
<blockquote>
<p>customerid, bookid, bookname, rent and return date of this book by this person.</p>
</blockquote>
<p>Obviously, it's not in BCNF.</p>
<p>But how do I identify the set F of non-trivial functional dependencies for this?</p>
<p>In my opinion:</p>
<blockquote>
<p>cid -> cname</p>
<p>bid -> bname</p>
<p>bid, rentdate -> returndate, cid</p>
</blockquote>
<p>Is that OK? For the last functional dependency, I think each order (one book rented at a specific time) will have a unique return date and belong to just one person.</p>
<p>But I am also confused about this functional dependency, because in this table the rentdate and returndate can also be set to null!</p>
<p>In this case, is the</p>
<blockquote>
<p>bid, rentdate -> returndate, cid</p>
</blockquote>
<p>correct?</p> | 2014-11-19 01:30:40.480000+00:00 | 2015-05-15 00:28:41.973000+00:00 | 2020-06-20 09:12:55.060000+00:00 | database-design|database-schema|functional-dependencies|bcnf | ['http://arxiv.org/abs/1404.4963'] | 1 |
62,092,296 | <p>The ReLU activation solves the kind of vanishing gradient that is due to sigmoid-like non-linearities (the gradient vanishes because of the flat regions of the sigmoid).</p>
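<p>A tiny sanity check of that claim in PyTorch (my own toy example):</p>
<pre><code>import torch

x = torch.tensor([10.0], requires_grad=True)
torch.sigmoid(x).backward()
print(x.grad)  # ~4.5e-05: the sigmoid saturates, so the gradient nearly vanishes

y = torch.tensor([10.0], requires_grad=True)
torch.relu(y).backward()
print(y.grad)  # 1.0: ReLU passes the gradient through unchanged on its active side
</code></pre>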
<p>The other kind of "vanishing" gradient seems to be related to the depth of the network (see <a href="https://arxiv.org/abs/1803.01719" rel="noreferrer">this paper</a>, for example). Basically, when backpropagating the gradient from layer <code>N</code> to <code>N-k</code>, the gradient vanishes as a function of depth (in vanilla architectures). The idea of resnets is to help with gradient backpropagation (see for example <a href="https://arxiv.org/abs/1603.05027" rel="noreferrer">Identity mappings in deep residual networks</a>, where they present resnet v2 and argue that identity skip connections are better at this).</p>
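<p>A minimal sketch of such a residual block in PyTorch (my own illustration of the identity-mapping idea, not code from the papers):</p>
<pre><code>import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(  # pre-activation ordering, as in resnet v2
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # the identity skip passes gradients through unchanged
</code></pre>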
<p>A very interesting and relatively recent paper that sheds light on the workings of resnets is <a href="https://arxiv.org/abs/1605.06431" rel="noreferrer">Residual networks behave like ensembles of relatively shallow networks</a>. The tl;dr of this paper could be (very roughly) summarized as follows: residual networks behave as an ensemble. Removing a single layer (i.e. a single residual branch, not its skip connection) doesn't really affect performance, but performance decreases in a smooth manner as a function of the number of layers that are removed, which is the way in which ensembles behave. Most of the gradient during training comes from short paths. They show that training only these short paths doesn't affect performance in a statistically significant way compared to when all paths are trained. This means that the effect of residual networks doesn't really come from depth, as the effect of long paths is almost non-existent.</p> | 2020-05-29 18:14:19.247000+00:00 | 2020-05-29 18:14:19.247000+00:00 | null | null | 62,091,567 | <p>I read that ResNet solves the vanishing gradient problem by using skip connections. But is that not already solved by using ReLU? Is there some other important thing I'm missing about ResNet, or does the vanishing gradient problem occur even after using ReLU?</p> | 2020-05-29 17:31:16.053000+00:00 | 2020-05-30 16:04:58.503000+00:00 | 2020-05-30 16:04:58.503000+00:00 | optimization|deep-learning|neural-network|backpropagation|activation-function | ['https://arxiv.org/abs/1803.01719', 'https://arxiv.org/abs/1603.05027', 'https://arxiv.org/abs/1605.06431'] | 3
44,968,631 | <p>Obviously, this is a huge and busy research area, but I'd say there are two broad types of approaches you could look into:</p>
<p>First, there are some methods that learn sentence embeddings in an unsupervised manner, such as <a href="https://arxiv.org/pdf/1405.4053v2.pdf" rel="nofollow noreferrer">Le and Mikolov's (2014) Paragraph Vectors</a>, which are implemented in <a href="https://radimrehurek.com/gensim/models/doc2vec.html" rel="nofollow noreferrer">gensim</a>, or <a href="https://arxiv.org/abs/1506.06726" rel="nofollow noreferrer">Kiros et al.'s (2015) SkipThought vectors</a>, with an <a href="https://github.com/ryankiros/skip-thoughts" rel="nofollow noreferrer">implementation on Github</a>.</p>
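<p>As a quick illustration, here is a minimal Doc2Vec sketch with gensim (a toy corpus; the gensim 4.x API is assumed):</p>
<pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [TaggedDocument(words=["a", "short", "sentence"], tags=[0]),
        TaggedDocument(words=["another", "short", "article"], tags=[1])]
model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=20)
similarity = model.dv.similarity(0, 1)  # cosine similarity of the two paragraph vectors
</code></pre>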
<p>Then there also exist supervised methods that learn sentence embeddings from labelled data. The most recent one is <a href="https://arxiv.org/abs/1705.02364" rel="nofollow noreferrer">Conneau et al.'s (2017)</a>, which trains sentence embeddings on the Stanford Natural Language Inference dataset, and shows these embeddings can be used successfully across a range of NLP tasks. The code is <a href="https://github.com/facebookresearch/InferSent" rel="nofollow noreferrer">available on Github</a>.</p>
<p>You might also find some inspiration in <a href="http://nlp.yvespeirsman.be/blog/anything2vec/" rel="nofollow noreferrer">a blog post I wrote earlier this year on the topic of embeddings</a>.</p> | 2017-07-07 10:34:20.463000+00:00 | 2017-07-07 10:34:20.463000+00:00 | null | null | 44,967,009 | <p>I'm working on finding similarities between short sentences and articles. I have used many existing methods such as tf-idf and word2vec, but the results are just okay. The most relevant measure I found was Word Mover's Distance; however, its results are not much better than those of the other measures. I know it's a challenging problem, but I am wondering if there are any new methods to find an approximate similarity at a higher, more conceptual level than just matching words. In particular, are there any alternative new methods like Word Mover's Distance that look at the slightly higher-level semantics of a sentence or article?</p> | 2017-07-07 09:15:55.020000+00:00 | 2017-07-26 16:04:18.217000+00:00 | 2017-07-26 16:04:18.217000+00:00 | machine-learning|nlp|artificial-intelligence|nltk|similarity | ['https://arxiv.org/pdf/1405.4053v2.pdf', 'https://radimrehurek.com/gensim/models/doc2vec.html', 'https://arxiv.org/abs/1506.06726', 'https://github.com/ryankiros/skip-thoughts', 'https://arxiv.org/abs/1705.02364', 'https://github.com/facebookresearch/InferSent', 'http://nlp.yvespeirsman.be/blog/anything2vec/'] | 7