a_id (int64) | a_body (string) | a_creation_date (string) | a_last_activity_date (string) | a_last_edit_date (string, nullable) | a_tags (float64) | q_id (int64) | q_body (string) | q_creation_date (string) | q_last_activity_date (string) | q_last_edit_date (string, nullable) | q_tags (string) | _arxiv_links (string) | _n_arxiv_links (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
49,077,521 | <p>As it turned out, the problem is in low-level implementation of SpatialDepthWiseConvolution in torch/nn. I've created an issue: <a href="https://github.com/torch/nn/issues/1307" rel="nofollow noreferrer">https://github.com/torch/nn/issues/1307</a></p>
<p>For the moment (as of 3 March 2018) this issue has not been solved yet.
Of course, I hope that somebody will fix the mistake in the low-level implementation.
But for now I know two ways to work around this problem:</p>
<ul>
<li>emulate it via torch containers</li>
<li>use my pure-Lua implementation</li>
</ul>
<p>Here is a way to emulate this module using containers <strong>Concat</strong> and <strong>Parallel</strong> along with standard <strong>SpatialConvolution</strong> module:</p>
<pre><code> local depth_wise_conv = nn.Concat(2)
for o = 1, nOutputPlane do
local out = nn.Parallel(2, 2)
for i = 1, nInputPlane do
local seq = nn.Sequential()
local conv = nn.SpatialConvolution(1, 1, kW, kH, dW, dH, pW, pH):noBias()
seq:add( nn.Reshape(1, inputHeight, inputWidth) )
seq:add( conv )
out:add( seq )
end
depth_wise_conv:add( out )
end
</code></pre>
<p>Note that the depth_wise_conv module in the code above expects batch input with 4 dimensions: <code>batchSize x nInputPlane x inputHeight x inputWidth</code>.</p>
<p>I also spent some time and created a pure-Lua implementation of <strong>SpatialDepthWiseConvolution</strong> as a module. You may find it here:
<a href="https://gist.github.com/diovisgood/36ce5a6c5e9dd4cb20b13dd2a28c1f71" rel="nofollow noreferrer">https://gist.github.com/diovisgood/36ce5a6c5e9dd4cb20b13dd2a28c1f71</a></p>
<p>It also includes a <strong>SpatialConvolution</strong> implementation and unit tests.
I published these modules under the MIT license, so anyone can use them.</p>
<p>Please note that these modules have not been heavily tested!
So any help or advice is appreciated.</p>
<p>One more thing. There is a great paper that explains convolution and transposed convolution by Vincent Dumoulin and Francesco Visin:
<a href="https://arxiv.org/pdf/1603.07285.pdf" rel="nofollow noreferrer">'A guide to convolution arithmetic for deep learning'</a></p> | 2018-03-02 21:13:51.847000+00:00 | 2018-03-02 21:19:03.087000+00:00 | 2018-03-02 21:19:03.087000+00:00 | null | 48,948,457 | <p>I'm trying to implement NN inspired by Xception ideas.
I can't understand what is wrong with my model...</p>
<pre><code>local torch = require 'torch'
local nn = require 'nn'
dofile('GlobalAveragePooling.lua')
local model = nn.Sequential()
-- Entry convolution
model:add( nn.SpatialConvolution(3, 64, 3, 3, 2, 2, 1, 1) )
model:add( nn.SpatialBatchNormalization(64) )
model:add( nn.ReLU() )
-- Xception Unit with "skip-path"
local seq = nn.Sequential()
seq:add( nn.SpatialDepthWiseConvolution(64, 1, 3, 3, 1, 1, 1, 1) )
seq:add( nn.SpatialConvolution(64, 512, 1, 1, 1, 1, 0, 0) )
seq:add( nn.SpatialBatchNormalization(512) )
seq:add( nn.SpatialMaxPooling(3, 3, 2, 2, 1, 1) )
local con = nn.ConcatTable()
con:add( nn.SpatialConvolution(64, 512, 1, 1, 2, 2, 0, 0) )
con:add( seq )
model:add( con )
model:add( nn.CAddTable() )
model:add( nn.ReLU() )
-- Exit fully-connected layers for softmax(3) output
model:add( nn.GlobalAveragePooling() )
model:add( nn.Reshape(512) )
model:add( nn.Linear(512, 3) )
model:add( nn.LogSoftMax() )
print(tostring(model))
local X = torch.randn(10, 3, 16, 8)
local Y = torch.LongTensor(10):random(1,3)
local criterion = nn.ClassNLLCriterion()
local Yhat = model:forward(X)
local loss = criterion:forward(Yhat, Y)
local gradLoss = criterion:backward(Yhat, Y)
model:backward(X, gradLoss)
</code></pre>
<p>The model works fine at the forward() step.
But it fails when it comes to model:backward(X, gradLoss) with this error:</p>
<pre><code> /nn/THNN.lua:110: Need gradOutput of dimension 5 and gradOutput.size[3] == 8 but got gradOutput to be of shape: [10 x 64 x 1 x 4 x 8] at ../THNN/generic/SpatialDepthWiseConvolution.c:53
stack traceback:
[C]: in function 'v' ../nn/THNN.lua:110: in function 'SpatialDepthWiseConvolution_updateGradInput' ../nn/SpatialDepthWiseConvolution.lua:80:
in function 'updateGradInput' ../Module.lua:31:
in function <../nn/Module.lua:29>
[C]: in function 'xpcall' ../nn/Container.lua:63:
in function 'rethrowErrors' ../nn/Sequential.lua:88:
in function <../nn/Sequential.lua:78>
[C]: in function 'xpcall' ../Container.lua:63:
in function 'rethrowErrors' ../nn/ConcatTable.lua:66:
in function <../ConcatTable.lua:30>
[C]: in function 'xpcall' ../nn/Container.lua:63:
in function 'rethrowErrors' ../nn/Sequential.lua:84:
in function 'backward' test.lua:45:
in main chunk [C]: at 0x00405d50
</code></pre> | 2018-02-23 12:57:32.823000+00:00 | 2018-09-27 10:24:54.717000+00:00 | 2018-09-27 10:24:54.717000+00:00 | lua|neural-network|deep-learning|torch | ['https://github.com/torch/nn/issues/1307', 'https://gist.github.com/diovisgood/36ce5a6c5e9dd4cb20b13dd2a28c1f71', 'https://arxiv.org/pdf/1603.07285.pdf'] | 3 |
69,982,897 | <p>Interesting approach using RGB to get the average colour.</p>
<p>In the past I've answered a vaguely similar question but <a href="https://stackoverflow.com/questions/15583514/image-retrieval-system-by-colour-from-the-web-using-c-with-openframeworks/15721464#15721464">doing basic image search based on average colour</a>. Instead of RGB colour space I've used <a href="https://en.wikipedia.org/wiki/CIELAB_color_space" rel="nofollow noreferrer">L<em>a</em>b* colour space</a> which is a perceptual colour space. I simply used this implementation of the <code>rgb2xyz -> xyz2lab</code> and back formulas from <a href="http://web.archive.org/web/20111126054554/http://cookbooks.adobe.com/post_Useful_color_equations__RGB_to_LAB_converter-14227.html" rel="nofollow noreferrer">here</a> (if you take out a few keywords and types that syntax is pretty much javascript btw)</p>
<p>You might get slightly better results, but based on the demo you posted hopefully not extremely dissimilar. Does it justify the complexity: not sure.</p>
<p>Speaking of complexity you could go all the way to deep neural networks. Doing a quick search, <a href="https://arxiv.org/abs/1710.00756v2" rel="nofollow noreferrer">Progressive Color Transfer with Dense Semantic Correspondences</a> along with a <a href="https://github.com/rassilon712/Neural_Color_Transfer" rel="nofollow noreferrer">related implementation</a>. Maybe in a roundabout way that PyTorch model could be trained and exported to Tensorflow.js (<code>PyTorch -> ONNX -> TensorFlow -> TensorFlow.js</code>) and used directly or integrated with <a href="https://learn.ml5js.org/#/reference/style-transfer" rel="nofollow noreferrer">ml5.js similar to the StyleTransfer model</a>.
Maybe it could produce interesting results, but it will surely be a complex approach.</p>
<p>If you already know the average RGB colour of an image and you're after an approximation/similar look, how about "faking it" by simply tinting the image via <a href="https://p5js.org/reference/#/p5/tint" rel="nofollow noreferrer"><code>tint()</code></a>? You could even control the amount of tint using the 4th (alpha) argument:</p>
<pre><code>// apply 50% of refRgb
tint(refRgb[0], refRgb[1], refRgb[2], 128);
image(theImageYouWanTinted, 0, 0);
</code></pre>
<p>Sure the output will be a mixture of the source image and refRgb, but it's super easy to test if visually it achieves what you're after.</p>
<p>You could then expand and try other things, for example:</p>
<ul>
<li>use the grayscale version of the image you want to tint instead of the rgb one</li>
<li>based on the image's content, perhaps a single colour channel would have more dominant / appealing features (e.g. instead of true grayscale use either the red, green or blue channel as grayscale)</li>
<li>further filter the source image to try and extract relevant information (e.g. smooth the image a bit with a median filter, try a low pass filter, etc.)</li>
</ul>
<p>It's hard to gauge how precise and complex things need to be: I'd simply go for <code>tint()</code> first. (If you need to "freeze" the tinted result to pixels remember you could always get a "snapshot" of what's been drawn using <code>get()</code> and more complex things could be achieved using a p5.Graphics layer (see <a href="https://p5js.org/reference/#/p5/createGraphics" rel="nofollow noreferrer"><code>createGraphics()</code></a>));</p> | 2021-11-16 01:48:52.880000+00:00 | 2021-11-16 11:31:03.497000+00:00 | 2021-11-16 11:31:03.497000+00:00 | null | 69,950,412 | <p>I have a 100x100 image:</p>
<pre><code><img id="my-face" src="/my-face.jpg" />
</code></pre>
<p>I get all of its pixels and I <a href="https://stackoverflow.com/a/2541680/2272048">calculate the average RGB</a> of that image:</p>
<pre><code>let img = document.getElementById('my-face')
let avgRgbOfImg = getAverageRGb(img)
</code></pre>
<p>I also have a reference RGB of a different color:</p>
<pre><code>let refRgb = [255, 244, 50] // yellow
</code></pre>
<p>I now want to add a filter to the image, so that the <code>avgRgbOfImg</code> gets pretty close to my <code>refRgb</code>:</p>
<pre><code>addFilter(refRgb).to(img)
let newAvgRgb = getAverageRGb(img) // should be pretty close to `refRgb` (yellow)
</code></pre>
<p>In simpler terms, I have an image and I want to use canvas (or p5.js) to add a color filter to it so that its <code>avgRgbOfImg</code> gets pretty close to that color.</p>
<p>Is there some canvas/p5 sets of methods to achieve this ?</p> | 2021-11-13 00:21:46.333000+00:00 | 2021-11-16 11:31:03.497000+00:00 | null | javascript|image-processing|canvas|html5-canvas|p5.js | ['https://stackoverflow.com/questions/15583514/image-retrieval-system-by-colour-from-the-web-using-c-with-openframeworks/15721464#15721464', 'https://en.wikipedia.org/wiki/CIELAB_color_space', 'http://web.archive.org/web/20111126054554/http://cookbooks.adobe.com/post_Useful_color_equations__RGB_to_LAB_converter-14227.html', 'https://arxiv.org/abs/1710.00756v2', 'https://github.com/rassilon712/Neural_Color_Transfer', 'https://learn.ml5js.org/#/reference/style-transfer', 'https://p5js.org/reference/#/p5/tint', 'https://p5js.org/reference/#/p5/createGraphics'] | 8 |
41,394,808 | <p>TensorRT 3.0 supports import/conversion of TensorFlow graphs via its UFF (universal framework format). Some layer implementations are missing and will require custom implementations via the IPlugin interface.</p>
<p>Previous versions didn't support native import of TensorFlow models/checkpoints.</p>
<p>What you can also do is export the layers/network description into your own intermediate format (such as a text file) and then use the TensorRT C++ API to construct the graph for inference. You'd have to export the convolution weights/biases separately. Make sure to pay attention to the weight format - TensorFlow uses NHWC while TensorRT uses NCHW. And for the weights, TF uses RSCK ([filter_height, filter_width, input_depth, output_depth]) and TensorRT uses KCRS.</p>
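<p>For illustration, converting a kernel and an activation tensor between those layouts with NumPy could look roughly like this (<code>w_tf</code> and <code>x_nhwc</code> are hypothetical arrays, not from the question):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# Hypothetical TF conv kernel in RSCK order: [filter_height, filter_width, input_depth, output_depth]
w_tf = np.zeros((3, 3, 64, 128), dtype=np.float32)

# TensorRT expects KCRS: [output_depth, input_depth, filter_height, filter_width]
w_trt = np.transpose(w_tf, (3, 2, 0, 1))

# Activations: TF NHWC -> TensorRT NCHW
x_nhwc = np.zeros((1, 224, 224, 3), dtype=np.float32)
x_nchw = np.transpose(x_nhwc, (0, 3, 1, 2))
</code></pre>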
<p>See this paper for an extended discussion of tensor formats:
<a href="https://arxiv.org/abs/1410.0759" rel="noreferrer">https://arxiv.org/abs/1410.0759</a></p>
<p>Also this link has useful relevant info:
<a href="https://www.tensorflow.org/versions/master/extend/tool_developers/" rel="noreferrer">https://www.tensorflow.org/versions/master/extend/tool_developers/</a></p> | 2016-12-30 10:38:45.470000+00:00 | 2017-11-09 09:22:04.443000+00:00 | 2017-11-09 09:22:04.443000+00:00 | null | 41,142,284 | <p>I would like to use NVIDIA TensorRT to run my Tensorflow models. Currenly, TensorRT supports Caffe prototxt network descriptor files.</p>
<p>I was not able to find source code to convert Tensorflow models to Caffe models. Are there any workarounds?</p> | 2016-12-14 12:09:12.387000+00:00 | 2017-11-09 09:22:04.443000+00:00 | 2017-08-10 21:59:35.720000+00:00 | tensorflow|nvidia|tensorrt | ['https://arxiv.org/abs/1410.0759', 'https://www.tensorflow.org/versions/master/extend/tool_developers/'] | 2 |
51,363,091 | <p>Nobody can simply determine if an observation dataset follows a particular distribution. Based on your situation, here is what you need:</p>
<p>Fit an empirical distribution using:
<a href="http://www.statsmodels.org/stable/generated/statsmodels.distributions.empirical_distribution.ECDF.html" rel="nofollow noreferrer">statsmodels.ECDF</a> </p>
<p>Then compare this (nonparametrically) with your data to see whether the null hypothesis can be rejected.</p>
<p>For the 20/80 rule:
rescale your X to the range [0, 1] and simply read off the point 0.2 on the x-axis.</p>
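<p>A rough sketch of these steps in Python (assuming <code>wealth</code> is your 1-D array of observations; file and variable names are just illustrative):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy import stats
from statsmodels.distributions.empirical_distribution import ECDF

wealth = np.loadtxt('wealth.txt')   # hypothetical file with your observations

# Empirical CDF of the data
ecdf = ECDF(wealth)

# Fit a Pareto distribution and test the fit against the sample
b, loc, scale = stats.pareto.fit(wealth, floc=0)
ks_stat, p_value = stats.kstest(wealth, 'pareto', args=(b, loc, scale))
print(ks_stat, p_value)             # a tiny p-value means the Pareto fit is rejected

# 20/80 point: rescale x to [0, 1] and read the cumulative value at 0.2
x = np.sort(wealth)
x01 = (x - x.min()) / (x.max() - x.min())
idx = np.searchsorted(x01, 0.2)
print('cumulative fraction at rescaled x = 0.2:', ecdf(x[idx]))
</code></pre>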
<p>source: <a href="https://arxiv.org/pdf/1306.0100.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1306.0100.pdf</a> </p> | 2018-07-16 13:32:20.250000+00:00 | 2018-07-16 13:32:20.250000+00:00 | null | null | 51,362,331 | <p>I have a figure as shown below, I want to know whether it conforms to the Pareto distribution, or not? Its a cumulative plot.
And I want to find out the point on the x-axis which marks the 80-20 rule, i.e. the x-axis point which bifurcates the plot into 20 percent having 80 percent of the wealth.</p>
<p>Also, I'm really confused by the scipy.stats Pareto function; it would be great if someone could give some intuitive explanation of it, since the documentation is pretty confusing.</p>
<p><a href="https://i.stack.imgur.com/QFp1l.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QFp1l.jpg" alt="enter image description here"></a></p> | 2018-07-16 12:53:38.640000+00:00 | 2018-07-16 13:32:20.250000+00:00 | null | python|numpy|scipy|statistics|pareto-chart | ['http://www.statsmodels.org/stable/generated/statsmodels.distributions.empirical_distribution.ECDF.html', 'https://arxiv.org/pdf/1306.0100.pdf'] | 2 |
12,706,390 | <p>There is another way to <strong>evaluate the clustering quality by computing a stability metric</strong> on subfolds, a bit like cross validation for supervised models:</p>
<ul>
<li><p>Split the dataset into 3 folds A, B and C. Compute two clusterings with your algorithm on A+B and A+C. Compute the Adjusted Rand Index or Adjusted Mutual Information of the 2 labelings on their intersection A and consider this value as an estimate of the stability score of the algorithm.</p></li>
<li><p>Rinse-repeat by shuffling the data and splitting it into 3 other folds A', B' and C' and recompute a stability score.</p></li>
<li><p>Average the stability scores over 5 or 10 runs to have a rough estimate of the standard error of the stability score.</p></li>
</ul>
<p>As you can guess, this is a very compute-intensive evaluation method.</p>
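<p>For concreteness, a minimal scikit-learn sketch of the procedure above could look like this (here <code>cluster()</code> is a stand-in for your own clustering step, e.g. affinity propagation on your weighted difference matrix):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import adjusted_rand_score

def cluster(data):
    # stand-in for your clustering of choice
    return AffinityPropagation().fit_predict(data)

def stability_score(X, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(X))
        A, B, C = np.array_split(idx, 3)
        labels_ab = cluster(X[np.concatenate([A, B])])
        labels_ac = cluster(X[np.concatenate([A, C])])
        # compare the two labelings restricted to the shared fold A
        scores.append(adjusted_rand_score(labels_ab[:len(A)], labels_ac[:len(A)]))
    return np.mean(scores), np.std(scores) / np.sqrt(n_repeats)
</code></pre>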
<p>It is still an open research area to know whether or not this Stability-based evaluation of clustering algorithms is really useful in practice and to identify when it can fail to produce a valid criterion for model selection. Please refer to <a href="http://arxiv.org/abs/1007.1075" rel="nofollow">Clustering Stability: An Overview</a> by Ulrike von Luxburg and references therein for an overview of the state of the art on those matters.</p>
<p>Note: it is important to use Adjusted for Chance metrics such as ARI or AMI if you want to use this strategy to select the best value of k in k-means for instance. Non adjusted metrics such as NMI and V-measure will tend to favor models with higher k arbitrarily.</p> | 2012-10-03 10:15:40.393000+00:00 | 2012-10-03 14:07:25.600000+00:00 | 2012-10-03 14:07:25.600000+00:00 | null | 12,680,038 | <p><strong>Is there an objective way to validate the output of a clustering algorithm?</strong></p>
<p>I'm using scikit-learn's affinity propagation clustering against a dataset composed of objects with many attributes. The difference matrix supplied to the clustering algorithm is composed of the weighted difference of these attributes. I'm looking for a way to objectively validate tweaks in the distance weightings as reflected in the resulting clusters. The dataset is large and has enough attributes that manual examination of small examples is not a reasonable way to verify the produced clusters.</p> | 2012-10-01 19:50:03.413000+00:00 | 2015-04-27 16:18:24.120000+00:00 | 2015-04-27 16:18:24.120000+00:00 | machine-learning|scipy|data-mining|cluster-analysis|scikit-learn | ['http://arxiv.org/abs/1007.1075'] | 1 |
18,675,752 | <p>There are many possible ways of making clustering repeatable:</p>
<ul>
<li>The most basic method of dealing with k-means randomness is simply running it multiple times and selecting the best one (the one that minimizes the inner cluster distances/maximizes the between clusters distance).</li>
<li>One can use some <a href="http://arxiv.org/abs/1304.7465" rel="noreferrer">fixed initialization</a> for your data instead of randomization. There are many heuristics for starting the k-means. Or at least minimize the variance by using algorithms like <a href="http://en.wikipedia.org/wiki/K-means++" rel="noreferrer">k-means++.</a></li>
<li>Use modification of k-means which guarantees global minimum of regularized function, ie. <a href="http://www.control.isy.liu.se/research/reports/2011/2992.pdf" rel="noreferrer">convex k-means</a></li>
<li>Use different clustering method, which is deterministic, ie. <a href="https://www.google.pl/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCwQFjAA&url=http://eccc.hpi-web.de/report/2004/050/download/&ei=Z1srUqWSNqSt4ASAwICoDA&usg=AFQjCNGT67wor9Z322tZBJsJGICq0JHnSA&sig2=uS4_K146acIJgawM70El5g" rel="noreferrer">Data Nets</a></li>
</ul> | 2013-09-07 17:00:41.280000+00:00 | 2013-09-07 17:00:41.280000+00:00 | null | null | 18,674,701 | <p>I'm developing an algorithm to classify different types of dogs based off of image data. The steps of the algorithm are:</p>
<ol>
<li><p>Go through all training images, detect image features (ie SURF), and extract descriptors. Collect all descriptors for all images.</p></li>
<li><p>Cluster within the collected image descriptors and find k "words" or centroids within the collection.</p></li>
<li><p>Reiterate through all images, extract SURF descriptors, and match the extracted descriptor with the closest "word" found via clustering.</p></li>
<li><p>Represent each image as a histogram of the words found in clustering.</p></li>
<li><p>Feed these image representations (feature vectors) to a classifier and train...</p></li>
</ol>
<p>Now, I have run into a bit of a problem. Finding the "words" within the collection of image descriptors is a very important step. Due to the random nature of clustering, different clusters are found each time I run my program. The unfortunate result is that sometimes the accuracy of my classifier will be very good, and other times, very bad. I have chalked this up to the clustering algorithm finding "good" words sometimes, and "bad" words other times.</p>
<p>Does anyone know how I can prevent the clustering algorithm from finding "bad" words? Currently I just cluster several times and take the mean accuracy of my classifier, but there must be a better way.</p>
<p>Thanks for taking time to read through this, and thank you for your help!</p>
<p>EDIT:</p>
<p>I am not using KMeans for classification; I am using a Support Vector Machine for classification. I am using KMeans for finding image descriptor "words", and then using these words to create histograms which describe each image. These histograms serve as feature vectors that are fed to the Support Vector Machine for classification.</p> | 2013-09-07 15:09:58.310000+00:00 | 2013-09-09 15:49:17.723000+00:00 | 2013-09-07 16:08:46.310000+00:00 | machine-learning|computer-vision|cluster-analysis|k-means | ['http://arxiv.org/abs/1304.7465', 'http://en.wikipedia.org/wiki/K-means++', 'http://www.control.isy.liu.se/research/reports/2011/2992.pdf', 'https://www.google.pl/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCwQFjAA&url=http://eccc.hpi-web.de/report/2004/050/download/&ei=Z1srUqWSNqSt4ASAwICoDA&usg=AFQjCNGT67wor9Z322tZBJsJGICq0JHnSA&sig2=uS4_K146acIJgawM70El5g'] | 4 |
60,172,829 | <p>There are indeed a few recent developments that try to counteract this problem. The most notable ones are probably <em>subword units</em> (also known as Byte Pair Encodings, or BPEs), which you can imagine as a notion similar to syllables in a word (but not the same!); a word like <code>basketball</code> could then be transformed into variations like <code>bas @@ket @@ball</code> or <code>basket @@ball</code>. Note that this is a constructed example and might not reflect the actually chosen subwords.</p>
<p>The idea itself is <a href="https://dl.acm.org/doi/10.5555/177910.177914" rel="nofollow noreferrer">relatively old</a> (an article from 1994), but has been recently popularized by <a href="https://arxiv.org/pdf/1508.07909.pdf" rel="nofollow noreferrer">Sennrich et al.</a>, and is basically used in every state-of-the-art NLP library that has to deal with large vocabularies.</p>
<p>The two biggest implementations of this idea are probably <a href="https://github.com/glample/fastBPE" rel="nofollow noreferrer">fastBPE</a> and Google's <a href="https://github.com/google/sentencepiece" rel="nofollow noreferrer">SentencePiece</a>.</p>
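<p>As a quick illustration, training and applying a subword model with SentencePiece from Python looks roughly like this (file names and vocabulary size are just placeholders):</p>
<pre class="lang-py prettyprint-override"><code>import sentencepiece as spm

# Learn a subword vocabulary of a fixed size from a raw text corpus
spm.SentencePieceTrainer.Train('--input=corpus.txt --model_prefix=subword --vocab_size=8000')

sp = spm.SentencePieceProcessor()
sp.Load('subword.model')

print(sp.EncodeAsPieces('basketball players'))   # e.g. ['▁basket', 'ball', '▁players']
</code></pre>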
<p>With subword units, you now basically have the freedom to determine a fix vocabulary size, and the algorithm will then try to optimize towards a mix of word diversity, and basically splitting "more complex words" into several pieces, such that your desired vocabulary size can cover any word in the corpus. For the exact algorithm, though, I highly recommend you to look into the linked paper or implementations.</p> | 2020-02-11 15:55:22.273000+00:00 | 2020-02-11 15:55:22.273000+00:00 | null | null | 60,166,663 | <p>While working on tasks like text classification, QA, the original vocabulary generated from the corpus is usually too large, containing a lot of 'unimportant' words. The most popular ways I've seen to reduce the vocabulary size are discarding stop words and words with low frequencies.</p>
<p>For example, in <code>gensim</code></p>
<pre><code>gensim.utils.prune_vocab(vocab, min_reduce, trim_rule=None):
Remove all entries from the vocab dictionary with count smaller than min_reduce.
Modifies vocab in place, returns the sum of all counts that were pruned.
</code></pre>
<p>But in practice, setting the minimum count is empirical and does not seems quite exact. I notice that the term frequency of each word in the vocabulary often follows long-tail distribution, is it a good way if I only keep the top-K words that occupies X% (95%, 90%, 85%, ...) of the total term frequency? Or are there any sensible ways to reduce the vocabulary, without seriously influencing the NLP task? </p> | 2020-02-11 10:22:14.173000+00:00 | 2021-11-11 21:44:56.180000+00:00 | null | machine-learning|deep-learning|nlp|gensim | ['https://dl.acm.org/doi/10.5555/177910.177914', 'https://arxiv.org/pdf/1508.07909.pdf', 'https://github.com/glample/fastBPE', 'https://github.com/google/sentencepiece'] | 4 |
62,377,675 | <p>From the article (arxiv.org/pdf/1904.05862.pdf): "The output of the encoder is a low frequency feature representation zi ∈Z which encodes about 30 ms of 16 kHz of audio and the striding results in representations zi every 10ms." => The windows are overlapping and this explains why you are getting 2 frames fewer.
Indeed we are moving a 30 ms window by 10ms steps. In your example, the 30 ms window takes 60 different positions.</p> | 2020-06-14 20:12:30.610000+00:00 | 2020-06-14 20:24:20.820000+00:00 | 2020-06-14 20:24:20.820000+00:00 | null | 62,376,687 | <p>I am using the fairseq library to run an example code for feature extraction with the VQ-Wav2Vec code as written below:</p>
<pre><code>In [6]: import torch
...: from fairseq.models.wav2vec import Wav2VecModel
In [7]: cp = torch.load('wav2vec_models/checkpoint_best.pt')
...: model = Wav2VecModel.build_model(cp['args'], task=None)
...: model.load_state_dict(cp['model'])
...: model.eval()
In [9]: wav_input_16khz = torch.randn(1,10000)
...: z = model.feature_extractor(wav_input_16khz)
...: f, idxs = model.vector_quantizer.forward_idx(z)
...: print(idxs.shape, f.shape)
>>>> torch.Size([1, 60, 4]) torch.Size([1, 512, 60])
</code></pre>
<p>My understanding is that vq-wav2vec processes the input speech (assumed to be sampled at 16K samples/sec) in 10 ms steps and outputs a feature vector of size [512] for each of these 10 ms of speech. So given that the input speech is 10000 samples, we are supposed to get 62 frames (62 * 160 = 9920 samples).</p>
<p>Why do I see only 60 frames?</p> | 2020-06-14 18:34:39.477000+00:00 | 2020-06-14 20:24:20.820000+00:00 | null | deep-learning|pytorch|fairseq | [] | 0 |
62,379,133 | <p>I encountered a modified CRF named <a href="https://arxiv.org/pdf/1809.03599.pdf" rel="nofollow noreferrer">fuzzy CRF</a> as shown below. </p>
<p><a href="https://i.stack.imgur.com/JcAfW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JcAfW.png" alt="enter image description here"></a></p>
<p>Its mathematics is quite simple as we can see from equation 2 in the paper: </p>
<p><a href="https://i.stack.imgur.com/fttY8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fttY8.png" alt="enter image description here"></a></p>
<p>We just sum all the energies of the paths in the numerator, and the denominator remains the same. For inference, we can apply Viterbi or beam search. </p> | 2020-06-14 22:56:52.550000+00:00 | 2020-06-14 22:56:52.550000+00:00 | null | null | 37,531,532 | <p>Is it possible to use Conditional Random Field for MultiLabel Classification? I saw a python CRF implementation at <a href="https://pystruct.github.io/user_guide.html" rel="noreferrer">https://pystruct.github.io/user_guide.html</a>, but couldn't figure a way to do multilabel classification.</p> | 2016-05-30 18:11:31.167000+00:00 | 2020-06-14 22:56:52.550000+00:00 | null | python|machine-learning|classification|multilabel-classification|crf | ['https://arxiv.org/pdf/1809.03599.pdf', 'https://i.stack.imgur.com/JcAfW.png', 'https://i.stack.imgur.com/fttY8.png'] | 3 |
73,610,716 | <h1>TLDR</h1>
<p>To compute embeddings on large graphs use <a href="https://github.com/AnacletoLAB/grape" rel="nofollow noreferrer">GRAPE</a>.</p>
<pre><code>pip install grape
</code></pre>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>from grape import Graph
from grape.embedders import Node2VecGloVeEnsmallen
graph = Graph.from_csv(
## The path to the edges list tsv
edge_path="edges.csv",
sources_column="source",
destinations_column="destination",
directed=False,
)
embedding = Node2VecGloVeEnsmallen().fit_transform(graph)
</code></pre>
<h1>Longer answer</h1>
<p>To solve this issue, we developed <a href="https://github.com/AnacletoLAB/grape" rel="nofollow noreferrer">GRAPE</a>, our Rust library with Python bindings. We needed to run node2vec on big graphs and the libraries we found weren't fast enough. We re-implemented and optimized many models, including Node2vec's, from the ground up without using Tensorflow or Pytorch.</p>
<p>Here are some benchmarks on our server with 12 cores (24 threads) and 128 GB of RAM.</p>
<p><a href="https://i.stack.imgur.com/d2i91.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d2i91.png" alt="enter image description here" /></a>
Info about the tested graphs:</p>
<ul>
<li>WikiEN has 130M edges and 17M nodes.</li>
<li>CTD has 45M edges and 100K nodes.</li>
<li>PheKnowLator has 7M edges and 800K nodes.</li>
</ul>
<p>Here's the tutorial on how to load your own custom graph; we support CSV-like formats:
<a href="https://github.com/AnacletoLAB/grape/blob/main/tutorials/Loading_a_Graph_in_Ensmallen.ipynb" rel="nofollow noreferrer">https://github.com/AnacletoLAB/grape/blob/main/tutorials/Loading_a_Graph_in_Ensmallen.ipynb</a></p>
<p>and here's the complete tutorial on how to run and visualize node2vec with Glove on a given graph:
<a href="https://github.com/AnacletoLAB/grape/blob/main/tutorials/Using_Node2Vec_GloVe_to_embed_Cora.ipynb" rel="nofollow noreferrer">https://github.com/AnacletoLAB/grape/blob/main/tutorials/Using_Node2Vec_GloVe_to_embed_Cora.ipynb</a>
<a href="https://i.stack.imgur.com/v3uU2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v3uU2.jpg" alt="enter image description here" /></a></p>
<p>Our paper is still under review ( <a href="https://arxiv.org/abs/2110.06196" rel="nofollow noreferrer">https://arxiv.org/abs/2110.06196</a> ) and we are just two developers, so, if you need any help contact us on Discord, Github, or Twitter <a href="https://twitter.com/GRAPElib" rel="nofollow noreferrer">@GRAPElib</a>.</p> | 2022-09-05 14:09:35.023000+00:00 | 2022-09-05 14:09:35.023000+00:00 | null | null | 60,276,191 | <p>I have a graph with 480k nodes and 34M edges. I want to create node embeddings using Node2Vec on this graph. But, It is not even able to calculate transition probabilities. I am using a Google Cloud Machine with 32 cores and 120 GB RAM. Infrastructure is not the problem, the problem is that the function _precompute_probabilities in the node2vec pip library is not paraller. It is using only a single thread to calculate the transition probabilities. Is there a way to make this parallel or is they any other parallel version of Node2Vec ?</p> | 2020-02-18 07:55:02.677000+00:00 | 2022-09-05 14:09:35.023000+00:00 | null | graph|parallel-processing|embedding | ['https://github.com/AnacletoLAB/grape', 'https://github.com/AnacletoLAB/grape', 'https://i.stack.imgur.com/d2i91.png', 'https://github.com/AnacletoLAB/grape/blob/main/tutorials/Loading_a_Graph_in_Ensmallen.ipynb', 'https://github.com/AnacletoLAB/grape/blob/main/tutorials/Using_Node2Vec_GloVe_to_embed_Cora.ipynb', 'https://i.stack.imgur.com/v3uU2.jpg', 'https://arxiv.org/abs/2110.06196', 'https://twitter.com/GRAPElib'] | 8 |
48,137,427 | <p>A common initializer for sigmoid-based networks is the <strong>Xavier initializer</strong> (a.k.a. <strong>Glorot initializer</strong>), named after Xavier Glorot, one of the authors of the <a href="http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf" rel="nofollow noreferrer">"Understanding the difficulty of training deep feedforward neural networks"</a> paper. The formula takes into account not only the number of incoming connections, but also the number of outgoing ones. The authors prove that with this initialization the distribution of activations is approximately normal, which helps gradient flow in the backward pass.</p>
<p>For relu-based networks, a better initializer is the <strong>He initializer</strong> from <a href="https://arxiv.org/pdf/1502.01852v1.pdf" rel="nofollow noreferrer">"Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification"</a> by Kaiming He et al., which proves the same properties for the relu activation.</p>
<p>Dense and convolutional layers aren't that different in this case, but it's important to remember that kernel weights are shared across the input image and batch, so the number of incoming connections depends on several parameters, including kernel size and striding, and might not be easy to calculate by hand.</p>
<p>In tensorflow, He initialization is implemented in <code>variance_scaling_initializer()</code> function (which is, in fact, a more general initializer, but by default performs He initialization), while Xavier initializer is logically <code>xavier_initializer()</code>.</p>
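<p>A short sketch of how that looks in TF1-style code (exact module paths vary between TensorFlow versions):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

# He / variance-scaling initialization for a relu conv kernel
w_conv = tf.get_variable(
    'w_conv', shape=[3, 3, 64, 128],
    initializer=tf.contrib.layers.variance_scaling_initializer())

# Xavier / Glorot initialization for a sigmoid or tanh dense layer
w_fc = tf.get_variable(
    'w_fc', shape=[1024, 256],
    initializer=tf.contrib.layers.xavier_initializer())
</code></pre>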
<p>See also <a href="https://stats.stackexchange.com/q/319323/130598">this discussion on CrossValidated</a>.</p> | 2018-01-07 13:12:38.043000+00:00 | 2018-01-07 13:12:38.043000+00:00 | null | null | 48,137,312 | <p>In a dense layer, one should initialize the weights according to some rule of thumb. For example, with RELU, the weights should come from a normal distribution and should be rescaled by 2/n where n is the number of inputs to the layer (<a href="https://youtu.be/s2coXdufOzE?t=2m21s" rel="nofollow noreferrer">according to Andrew Ng</a>).</p>
<p>Does the same hold for convolutional layers? What is the right way to initialize weights (and biases) in a convolutional layer?</p> | 2018-01-07 12:56:54.693000+00:00 | 2018-06-16 09:31:51.673000+00:00 | 2018-06-16 09:31:51.673000+00:00 | machine-learning|neural-network|conv-neural-network|data-science|activation-function | ['http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf', 'https://arxiv.org/pdf/1502.01852v1.pdf', 'https://stats.stackexchange.com/q/319323/130598'] | 3 |
45,920,899 | <p>The missing value parameter works as whatever value you provide for 'missing' parameter it treats it as missing value. For example if you provide 0.5 as missing value, then wherever it finds 0.5 in your data it treats it as missing value. Default is NaN. So what XGBoost does is based on the data it defines one of the path as default path. For example based on one parameter say it can go in two directions either left or right, so one of that will be made default based on the data. So whenever one of the missing value comes as input for a parameter, say you defined 0.5 as missing, then whenever 0.5 comes in the data it takes the default path. Initially I thought it imputes the missing value but it does not. It just defines one of the path as default and whenever any missing value come it takes that default path. This is defined in the paper <a href="https://arxiv.org/abs/1603.02754" rel="noreferrer">XGBoost: A Scalable Tree Boosting System</a></p> | 2017-08-28 14:03:42.773000+00:00 | 2017-08-28 14:03:42.773000+00:00 | null | null | 42,136,276 | <p>I am working on a dataset which contains missing values in certain columns. I am trying to use XGBRegressor of Scikit-Learn wrapper interface for XGBoost. There it provides a parameter called 'missing' in which you can enter float values or otherwise it takes NaN of python as default. So i need help like how can i use this parameter to fill missing values of the columns in my dataset. It will be helpful if one can provide me a simple example as well.</p> | 2017-02-09 12:04:33.140000+00:00 | 2017-08-28 14:03:42.773000+00:00 | null | python|scikit-learn|xgboost | ['https://arxiv.org/abs/1603.02754'] | 1 |
60,271,489 | <p>A good way to evaluate an RL agent is to run it in the environment N times, and calculate the average return from the N runs.</p>
<p>It is common to perform the above evaluation step throughout your training process, and graph the average return as training happens. You would expect the average return to go up, indicating that the training is doing something useful.</p>
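<p>A minimal sketch of that evaluation loop (assuming a Gym-style <code>env</code> and a trained <code>model</code> with a <code>predict()</code> method, as in stable baselines):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def evaluate(model, env, n_episodes=100):
    returns = []
    for _ in range(n_episodes):
        obs, done, total_reward = env.reset(), False, 0.0
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, done, info = env.step(action)
            total_reward += reward
        returns.append(total_reward)
    return np.mean(returns), np.std(returns)
</code></pre>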
<p>For example, in Figure 3 of the <a href="https://arxiv.org/pdf/1707.06347.pdf" rel="nofollow noreferrer">PPO paper</a>, the authors graphed the average return with training steps, to show that PPO performs better than other algorithms.</p> | 2020-02-17 22:55:55.267000+00:00 | 2020-02-17 22:55:55.267000+00:00 | null | null | 58,626,404 | <p>I am new to reinforcement learning agent training. I have read about PPO algorithm and used stable baselines library to train an agent using PPO. So my question here is how do I evaluate a trained RL agent. Consider for a regression or classification problem I have metrics like r2_score or accuracy etc.. Are there any such parameters or how do I test the agent, conclude that the agent is trained well or bad.</p>
<p>Thanks</p> | 2019-10-30 13:24:57.050000+00:00 | 2020-02-17 22:55:55.267000+00:00 | null | artificial-intelligence|reinforcement-learning|montecarlo|policy-gradient-descent | ['https://arxiv.org/pdf/1707.06347.pdf'] | 1 |
56,200,581 | <p>Here is a good review paper on this topic "A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay" by Leslie N. Smith
<a href="https://arxiv.org/pdf/1803.09820.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1803.09820.pdf</a></p> | 2019-05-18 15:51:32.140000+00:00 | 2019-05-18 15:51:32.140000+00:00 | null | null | 53,841,041 | <p>Many CNN papers use momentum=0.9 when using Stochastic Gradient Descent in weight update. There is a good logic for using it, but what I am looking for is a thorough exploration of effects of that parameter. I've searched across many papers, and there are some insights here and there, but I have not been able a comprehensive exploration. Also, does it usefulness vary across different computer vision tasks like classification, segmentation, detection?</p> | 2018-12-18 21:05:09.103000+00:00 | 2019-05-18 15:51:32.140000+00:00 | null | optimization|computer-vision|conv-neural-network|gradient-descent | ['https://arxiv.org/pdf/1803.09820.pdf'] | 1 |
50,062,806 | <p>I would recommend trying OpenFace, a popular open-source face recognition system built on top of dlib. It uses techniques such as a Haar classifier and a CNN to classify faces. The neural network is trained to create face embeddings for each face; then, given a test image, it can calculate the distance (error) to match it with the closest of the known embeddings.</p>
<p>OpenFace worked really well for me and it serves its purpose; however, you have to make sure face embeddings are generated for faces which are clearly visible.
There are many wrappers of OpenFace available for Python. I recently developed a project using it, which you can just clone. Here is the link:
<a href="https://github.com/Narasimha1997/FaceRecognition-tool" rel="nofollow noreferrer">Face Recognition</a>
I have uploaded a few screenshots from testing; it can clearly distinguish multiple faces in the same image.</p>
<p>There is another way of achieving this , (Experimental). Neural networks can identify multiple objects in same image (Video Frame), there are techniques and standard models like Yolo and SSD, which are designed for single-shot-multiple object detection. <a href="https://arxiv.org/abs/1506.02640" rel="nofollow noreferrer">Yolo v2 Paper</a>. But identifying faces is much more complex than identifying objects, because most of the time faces look alike, but it's not the same case with objects. You can research and develop your own Yolo based neural network for single shot multiple face detection. But I think Dlib and OpenFace can provide a better solution. </p> | 2018-04-27 12:50:54.523000+00:00 | 2018-04-27 12:50:54.523000+00:00 | null | null | 50,061,579 | <p>I'm new to opencv and in general neural networks, but I have a small background on the topic.</p>
<p>I'm trying to build a system that can detect a certain individual's face in a photo. </p>
<p>Actually I'm training a model using opencv and haar with about 20 photos; I'm using photos of Elvis Presley as an example. So first of all I run an algorithm that detects and extracts the face of Elvis from all the photos and stores the faces in a folder. Then I train the model with these faces. After that I send some images of different individuals to the program, and using haar and the model I try to detect Elvis.</p>
<p>The thing is that I have a huge amount of false positives. </p>
<p>Of course, when I train the model with faces of Elvis -> labeled 0 and faces of, let's say, Frank Sinatra (after doing the same face-extraction process with Frank), I get a good classification; the model then works fine for classifying Frank between Elvis and Frank.</p>
<p>But if I'm only looking for Elvis, training the network with only Elvis photos, the model just doesn't work...</p>
<p>What could be an approach for identifying one person's face in photos among many other different individuals? Is the only solution training my network with a huge amount of Elvis faces?</p>
<p>I'm using Python with opencv and numpy.</p>
<p>Can you point me to some examples for further study?</p>
<p>Thanks!</p> | 2018-04-27 11:36:51.273000+00:00 | 2018-04-27 12:50:54.523000+00:00 | null | python|opencv|neural-network | ['https://github.com/Narasimha1997/FaceRecognition-tool', 'https://arxiv.org/abs/1506.02640'] | 2 |
61,294,931 | <p>This sounds related to the learning rate.</p>
<p>Scaling the inputs probably does not have too much effect on the network. Yes, normalizing the inputs e.g. with a z-score is standard and will help with convergence. But I think this is not what is driving your results.</p>
<p>For the VAE case I think it is the fact that your 10x or 100x factor will also scale the <em>outputs</em>. That will cause larger losses, which will send a stronger signal back through the rest of the network. You'll get faster convergence if your original learning rate was too low. Which, by the sound of things, it was. A very long period with nothing happening, followed by eventual convergence, often means you can train faster.</p>
<p>So, instead of scaling your inputs by 10x or 100x, try increasing your learning rate by 10x or 100x. The <a href="https://arxiv.org/abs/1708.07120" rel="nofollow noreferrer">Super-Convergence paper</a> by Leslie N. Smith has some great insights on high learning rates and a good LR schedule for fast convergence. If you want to go even deeper into hyperparameter tuning, he also has an excellent paper on <a href="http://%20https://arxiv.org/abs/1803.09820" rel="nofollow noreferrer">a disciplined approach to neural network hyper-parameters</a>.</p> | 2020-04-18 19:20:47.590000+00:00 | 2020-04-18 19:20:47.590000+00:00 | null | null | 61,293,041 | <p>I am training a variational autoencoder with tensorflow 2.0 using the Keras high-end API. The aim is to resonctruct images, which consist of a <em>shape</em> with homogenous intensity <code>int_shape</code> not equal to zero on a zero background, see following image with <code>int_shape = -0.25</code>:</p>
<p><a href="https://i.stack.imgur.com/8aYfJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8aYfJ.png" alt="enter image description here"></a></p>
<p>The variational autoencoder has the following architecture, <em>with the latent space being of 50 dimensions, as opposed to the 16 stated in the image</em>:</p>
<p><a href="https://i.stack.imgur.com/7dJs4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7dJs4.png" alt="enter image description here"></a></p>
<p>The lambda layer uses a function to sample from a normal distribution, which looks like this:</p>
<pre><code>def sampling(args):
z_mean, z_log_var = args
epsilon = K.random_normal(shape =(1,1,latent_dim))
return z_mean + K.exp(0.5 * z_log_var) * epsilon
</code></pre>
<p>The loss is a combination of the KL divergence and an MSE loss:</p>
<pre><code> def vae_loss(y_pred, y_gt):
mse_loss = mse(y_pred, y_gt)
original_dim = GLOBAL.input_res**2
mse_loss *= original_dim
z_mean = model.get_layer('z_mean_layer').output
z_log_var = model.get_layer('z_log_var_layer').output
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = K.sum(kl_loss, axis=-1)
kl_loss *= -0.5
return K.mean(mse_loss + GLOBAL.beta*kl_loss)
</code></pre>
<p>Currently, <code>int_shape</code> takes values in the range (-0.5, 0.5), however of different magnitude (1e-1, 1e-2 and rarely 1e-3). The aim of the VAE is to take the image as input and reconstruct the same image.
For the range (-0.5, 0.5) the convergence of the loss is not deterministic: Sometimes it stays constant the whole time, sometimes it stays constant for 5 / 50/180 epochs and then drops and converges. As a result, reconstructions are not consistently good, some are messy, sometimes only noise comes out. In my opinion, the convergence depends strongly on the initial starting point for optimization.</p>
<p>However, if I scale the data with 10, i.e. range (-5, 5) or even 100, i.e. (-50, 50), the loss converges and I <em>constantly</em> get good reconstructions. In the range (-5,5) higher absolute values tend to have better reconstructions, whereas in the range (-50,50) virtually every input is reconstructed correctly.</p>
<p><strong>My question is: Is there a link between the range of the input data and the convergence of the loss in the case of a VAE?</strong> </p>
<p>From my experiments, shapes with a higher <code>int_shape</code> tend to have better reconstructions. I would attribute this to higher gradients and thus bigger steps in the parameter space, since the value of the gradients is dependent on the value of the input. Bigger steps lead to the point where the loss finds a minimum and converges. In this sense, smaller values would produce smaller gradients and thus smaller optimization steps will be taken.</p>
<p>Another theory that I have is that for a small range of (-0.5, 0.5) the points in the latent space representing the input are closer to each other and during random sampling they are confused and the "wrong" point is picked for the decoder. I have verified by plotting histograms of the latent space components. The higher the range of the input values, the higher the variance of the distribution in the latent space.</p>
<p>Update:
I tried various learning rates, ranging from 1e-4 to 5e-2, with and without learning rate decay (decrease by 10 % every 10 epochs). I also trained for a suficient amount of epochs (300), batch size 64, training set 3800 images, val on 100. The loss did not improve and reconstruction is not good. Both 32-bit and 64-bit floating point precision were utilized without any success. For learning rates bigger than 2e-2, loss ocasionally turns to Nan. The highest learning rate providing stable training is 1e-2. Also, latent space is 50 (should not have any effect on the problem though, IMO).</p> | 2020-04-18 17:05:15.557000+00:00 | 2020-04-19 17:09:43.797000+00:00 | 2020-04-19 12:54:14.413000+00:00 | tensorflow|keras|deep-learning|neural-network|autoencoder | ['https://arxiv.org/abs/1708.07120', 'http://%20https://arxiv.org/abs/1803.09820'] | 2 |
38,978,125 | <p>Because square images are pleasing to the eye. But there are applications on non-square images when domain requires it. For instance SVHN original dataset is an image of several digits, and hence rectangular images are used as input to convnet, as <a href="http://arxiv.org/abs/1312.6082" rel="nofollow">here</a></p> | 2016-08-16 14:52:53.803000+00:00 | 2016-08-16 20:19:58.273000+00:00 | 2016-08-16 20:19:58.273000+00:00 | null | 38,972,156 | <p>I have been doing deep learning with CNN for a while and I realize that the inputs for a model are always squared images. </p>
<p>I see that neither the convolution operation nor the neural network architecture itself requires such a property.</p>
<p>So, what is the reason for that?</p> | 2016-08-16 10:11:31.467000+00:00 | 2019-02-11 19:31:07.307000+00:00 | 2019-02-11 19:31:07.307000+00:00 | neural-network|artificial-intelligence|deep-learning | ['http://arxiv.org/abs/1312.6082'] | 1 |
59,667,475 | <p>This is called the <strong>tree to tree correction problem</strong> or the <strong>tree to tree editing problem</strong>. Most of the literature dealing with this explicitly relates to comparing XML trees for some reason, so searching for "XML diffing algorithm" yields a lot of results. In addition to Nikos's list of links, I found these:</p>
<ul>
<li><a href="https://osr.cs.fau.de/wp-content/uploads/2015/11/doc014f-dohrn.pdf" rel="nofollow noreferrer">Fine-grained Change Detection in Structured Text
Documents</a> (2014)</li>
<li><a href="https://www.researchgate.net/publication/241180098_Change_Detection_by_Level_CDL_An_efficient_algorithm_to_detect_change_on_XML_documents/link/554795870cf2e2031b37203c/download" rel="nofollow noreferrer">Change Detection by Level (CDL): An Efficient Algorithm to Detect Change on XML Documents</a> (2010)</li>
<li><a href="https://era.library.ualberta.ca/items/377060f4-3a31-4dc9-b7f0-ea2dda3cf38a" rel="nofollow noreferrer">Comparing XML Documents as Reference-aware Labeled Ordered Trees</a> (2011) <strike>The code for this - <a href="https://users.encs.concordia.ca/~nikolaos/vtracker.html" rel="nofollow noreferrer">VTracker</a> still exists!</strike> Edit: actually the interesting bit of code is not included. This pointed me to...</li>
<li><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.66.3343&rep=rep1&type=pdf" rel="nofollow noreferrer">UMLDiff: An Algorithm for Object-Oriented
Design Differencing </a> (2005).</li>
<li><a href="https://arxiv.org/pdf/1805.06869.pdf" rel="nofollow noreferrer">Revisiting the tree edit distance and its backtracing: A tutorial</a> (2018) - looks like a good tutorial for the Zhang-Shasha algorithm, which seems to be the "classic" solution, but has terrible time complexity because it compares every sub-tree with every other sub-tree.</li>
</ul>
<p>I also strongly recommend reading <a href="https://pdfs.semanticscholar.org/40f3/fb27f0708a69153006ee1debee7c94663dca.pdf" rel="nofollow noreferrer">Change Detection in XML Trees: a Survey</a> but it is from 2005 so barely any of the tools it mentions exist anymore. <em>Comparing XML Documents as Reference-aware Labeled Ordered Trees</em> has the best intuitive description of some of the algorithms that I have found so far (start at section 2.1.2).</p>
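<p>For the plain tree edit distance (insert/delete/relabel only, no MOVE), there is at least the small Python <code>zss</code> package implementing the Zhang-Shasha algorithm mentioned above; a minimal sketch:</p>
<pre class="lang-py prettyprint-override"><code>from zss import Node, simple_distance

# Two small labeled ordered trees
a = Node('root', [Node('x'), Node('y', [Node('z')])])
b = Node('root', [Node('y', [Node('z'), Node('x')])])

# Minimal number of insert/delete/relabel operations to turn a into b
print(simple_distance(a, b))
</code></pre>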
<p>Unfortunately there doesn't seem to be much open source code available that does this and isn't ancient. Just a lot of overly-complex papers. :-/</p> | 2020-01-09 15:39:58.780000+00:00 | 2020-01-09 17:09:39.393000+00:00 | 2020-01-09 17:09:39.393000+00:00 | null | 5,894,879 | <p>This is more of a CS question, but an interesting one :</p>
<p>Let's say we have 2 tree structures with more or less the same nodes reorganized. How would you find</p>
<ol>
<li><em>any</em> </li>
<li>in some sense <em>minimal</em></li>
</ol>
<p>sequence of operations </p>
<ul>
<li><code>MOVE(A, B)</code> - moves node A under node B (with the whole subtree)</li>
<li><code>INSERT(N, B)</code> - inserts a <strong>new</strong> node N under node B </li>
<li><code>DELETE (A)</code> - deletes the node A (with the whole subtree)</li>
</ul>
<p>that transforms one tree to the other.</p>
<p>There might obviously be cases where such a transformation is not possible (a trivial one being root A with child B to root B with child A, etc.). In such cases, the algorithm would simply deliver the result "<em>not possible</em>".</p>
<p>An even more spectacular version is a generalization to networks, i.e. when we assume that a node can occur multiple times in the tree (effectively having multiple "parents"), while cycles are forbidden.</p>
<p>Disclaimer : This is <strong>not</strong> a homework, actually it comes from a real business problem and I found it quite interesting wondering if somebody might know a solution.</p> | 2011-05-05 08:41:23.557000+00:00 | 2021-05-02 01:20:14.253000+00:00 | 2020-04-06 12:33:55.723000+00:00 | algorithm|tree|comparison|diff|computer-science | ['https://osr.cs.fau.de/wp-content/uploads/2015/11/doc014f-dohrn.pdf', 'https://www.researchgate.net/publication/241180098_Change_Detection_by_Level_CDL_An_efficient_algorithm_to_detect_change_on_XML_documents/link/554795870cf2e2031b37203c/download', 'https://era.library.ualberta.ca/items/377060f4-3a31-4dc9-b7f0-ea2dda3cf38a', 'https://users.encs.concordia.ca/~nikolaos/vtracker.html', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.66.3343&rep=rep1&type=pdf', 'https://arxiv.org/pdf/1805.06869.pdf', 'https://pdfs.semanticscholar.org/40f3/fb27f0708a69153006ee1debee7c94663dca.pdf'] | 7 |
54,596,289 | <p>What you are going to do is a time-consuming task. It will not be easy, but if you try hard, you can do it. And don't forget to share the code with us.</p>
<p>First, you will need to get a basic understanding of the <code>YOLO</code> network. I would suggest reading the research papers.<br> The original <code>YOLO</code> paper and the second paper discuss many details about the network and how it works. They will give you a better understanding of the network and how it is working, and will be helpful when debugging your own network.</p>
<p>The third paper is easier than the other two. It will only explain the modifications that they have done. So, in order to get a full understanding of the network, you still have to read all three research papers.</p>
<ol>
<li><a href="https://arxiv.org/abs/1506.02640" rel="nofollow noreferrer">Original Yolo Paper</a></li>
<li><a href="https://arxiv.org/abs/1612.08242" rel="nofollow noreferrer">Yolo9000 (Yolo version 2)</a></li>
<li><a href="https://arxiv.org/abs/1804.02767" rel="nofollow noreferrer">Yolov3</a></li>
</ol>
<p>After you have downloaded <code>YOLO</code>, you will find a file called <code>yolo.cfg</code>. You can open that file in a text editor.</p>
<p>At the top of the file, they have defined some hyperparameters. You can learn the meaning of those parameters by reading the papers. <br>
After that, they describe their <code>YOLO</code> network the way <code>caffe</code> people do in their <code>prototxt</code> files. It is not exactly the same as the <code>prototxt</code> file, but you can get the idea. It will be very helpful when building your own network.</p>
<p>They have written the <code>YOLO</code> network in such a way that the network changes a lot when it changes the mode from training to testing. You can find all that information in their research papers. Keep that in your mind too.</p>
<p>Happy Coding !!!</p> | 2019-02-08 16:10:16.367000+00:00 | 2019-02-08 16:10:16.367000+00:00 | null | null | 54,380,951 | <p>I would like to implement YOLO from scratch. I have seen codes available in github but I want to try from scratch. Is it possible to implement YOLO in ordinary Python script without using dark flow? I am planning to implement it in keras.</p> | 2019-01-26 17:38:02.390000+00:00 | 2019-02-09 05:16:26.953000+00:00 | null | deep-learning|conv-neural-network|object-detection|yolo | ['https://arxiv.org/abs/1506.02640', 'https://arxiv.org/abs/1612.08242', 'https://arxiv.org/abs/1804.02767'] | 3 |
63,369,159 | <p>There are two closely related works from the last two years:</p>
<p>[1] Boutilier, Craig, et al. "Planning and learning with stochastic action sets." arXiv preprint arXiv:1805.02363 (2018).</p>
<p>[2] Chandak, Yash, et al. "Reinforcement Learning When All Actions Are Not Always Available." AAAI. 2020.</p> | 2020-08-12 03:06:01.913000+00:00 | 2020-08-12 03:06:01.913000+00:00 | null | null | 50,012,295 | <p>How do people deal with problems where the legal actions in different states are different? In my case I have about 10 actions total, the legal actions are not overlapping, meaning that in certain states, the same 3 states are always legal, and those states are never legal in other types of states. </p>
<p>I'm also interested in seeing if the solutions would be different if the legal actions were overlapping.</p>
<p>For Q learning (where my network gives me the values for state/action pairs), I was thinking maybe I could just be careful about which Q value to choose when I'm constructing the target value (i.e. instead of choosing the max, I choose the max among legal actions...).</p>
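<p>A tiny sketch of that masking idea (assuming <code>q_values</code> is the network output for the next state and <code>legal_mask</code> is a boolean vector of currently legal actions; the numbers are made up):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

q_values = np.array([0.3, 1.2, -0.5, 0.8])            # hypothetical Q(s', a) for all actions
legal_mask = np.array([True, False, True, True])      # which actions are legal in s'
reward, gamma = 1.0, 0.99                             # example values

masked_q = np.where(legal_mask, q_values, -np.inf)    # illegal actions can never be the max
td_target = reward + gamma * masked_q.max()
</code></pre>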
<p>For Policy-Gradient type of methods I'm less sure of what the appropriate setup is. Is it okay to just mask the output layer when computing the loss? </p> | 2018-04-25 00:07:08.853000+00:00 | 2020-08-12 03:06:01.913000+00:00 | null | machine-learning|reinforcement-learning|q-learning | [] | 0 |
47,653,898 | <p>Hope this helps future readers.</p>
<p><a href="https://arxiv.org/pdf/1609.03415.pdf" rel="noreferrer">Active Canny: Edge Detection and Recovery with Open Active Contour Models</a></p>
<p>Here is an image showing its performance <a href="https://i.stack.imgur.com/JAYz6.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JAYz6.png" alt="image"></a> </p>
<p>Implementing it is a pain.
I'm trying to implement it using OpenCV and Python.</p>
<p>Here's another paper I found.</p>
<p><a href="https://www.researchgate.net/profile/Da_Chen9/publication/322799012_Anisotropic_Edge-Based_Balloon_Eikonal_Active_Contours/links/5a709c530f7e9ba2e1caffe5/Anisotropic-Edge-Based-Balloon-Eikonal-Active-Contours.pdf" rel="noreferrer">Anisotropic Edge-Based Balloon Eikonal Active Contours</a></p>
<p><a href="https://i.stack.imgur.com/MGPmL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/MGPmL.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/3OdnK.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/3OdnK.jpg" alt="enter image description here"></a></p> | 2017-12-05 12:41:41.757000+00:00 | 2020-04-03 12:35:59.953000+00:00 | 2020-04-03 12:35:59.953000+00:00 | null | 22,064,982 | <p>Is there an Edge Detection Method that performs <strong>significantly better</strong> than the Canny Edge Detector ??</p> | 2014-02-27 09:59:35+00:00 | 2020-04-03 12:35:59.953000+00:00 | null | image-processing|computer-vision|edge-detection | ['https://arxiv.org/pdf/1609.03415.pdf', 'https://i.stack.imgur.com/JAYz6.png', 'https://www.researchgate.net/profile/Da_Chen9/publication/322799012_Anisotropic_Edge-Based_Balloon_Eikonal_Active_Contours/links/5a709c530f7e9ba2e1caffe5/Anisotropic-Edge-Based-Balloon-Eikonal-Active-Contours.pdf', 'https://i.stack.imgur.com/MGPmL.png', 'https://i.stack.imgur.com/3OdnK.jpg'] | 5 |
72,843,301 | <p><em>I am assuming you want all keywords in a 'group' of keywords to be present in a single <strong>line</strong>, in order for you to extract that single line. If you want the keywords to be present in a single <strong>file</strong> to extract all text from that file, let me know so I can adjust the answer.</em></p>
<p>Indeed <code>pdfsearch::keyword_search()</code> searches only individual words. Luckily it does give us a page number and a line number for each result, so we can match those and check if all words from a single group are present in the search results on the same line:</p>
<h5>Preparation</h5>
<p>We start by defining our keywords grouped into vectors, and loading an example file:</p>
<pre class="lang-r prettyprint-override"><code>library(pdfsearch)
library(dplyr)
# Our list of keywords, grouped in vectors
grouped_keywords <- list(c('saturated','model'),
c('vector','specification'),
c('framework','inferences'),
c('test','that','gives','no','results'),
c('population','degree','types'))
# Example file supplied with `pdfsearch`, also available at https://arxiv.org/pdf/1610.00147.pdf
file <- system.file('pdf', '1610.00147.pdf', package = 'pdfsearch')
</code></pre>
<h5>Step 1: search all words individually</h5>
<p>To start the search, we perform <code>keyword_search()</code> on a flattened version of <code>grouped_keywords</code>. This will yield all results we want, but also many results we don't want (lines that only contain one or a few of the keywords in a group).</p>
<pre class="lang-r prettyprint-override"><code># Search for individual keywords
individual_results <- keyword_search(file,
keyword = unlist(grouped_keywords), # combine our keyword list into a single 1-dimensional vector
path = TRUE)
cat(nrow(individual_results), 'results for individual words\n')
head(individual_results, n=3)
</code></pre>
<p>Result:</p>
<pre class="lang-r prettyprint-override"><code>367 results for individual words
# A tibble: 3 × 5
keyword page_num line_num line_text token_text
<chr> <int> <int> <list> <list>
1 saturated 5 112 <chr [1]> <list [1]>
2 saturated 5 114 <chr [1]> <list [1]>
3 saturated 5 119 <chr [1]> <list [1]>
</code></pre>
<h5>Step 2: Merge results for keywords in the same subgroup</h5>
<p>For each group of keywords, we look for results that have the same line number and the same page number, and that match all keywords in the group:</p>
<pre class="lang-r prettyprint-override"><code>combined_results <- lapply(grouped_keywords, \(keyword_group) {
individual_results %>%
filter(keyword %in% keyword_group) %>%
group_by(page_num, line_num) %>%
filter(length(unique(keyword)) == length(unique(keyword_group))) %>%
summarise(keywords = paste(keyword_group, collapse=' + '),
line_text = line_text[1],
token_text = token_text[1],
.groups="keep")
})
# Merge list of tibbles to a single tibble
combined_results <- do.call(rbind, combined_results)
# Output result
cat(nrow(combined_results), 'results for combined words\n')
combined_results
</code></pre>
<p>Result:</p>
<pre class="lang-r prettyprint-override"><code>8 results for combined words
# A tibble: 8 × 5
# Groups: page_num, line_num [8]
page_num line_num keywords line_text token_text
<int> <int> <chr> <list> <list>
1 5 112 saturated + model <chr [1]> <list [1]>
2 5 114 saturated + model <chr [1]> <list [1]>
3 5 119 saturated + model <chr [1]> <list [1]>
4 7 184 saturated + model <chr [1]> <list [1]>
5 5 124 vector + specification <chr [1]> <list [1]>
6 2 32 framework + inferences <chr [1]> <list [1]>
7 7 168 framework + inferences <chr [1]> <list [1]>
8 7 187 population + degree + types <chr [1]> <list [1]>
</code></pre>
<hr />
<p><strong>Edit 19 July 2022:</strong></p>
<p>To get only exact matches, it's not enough to add the additional filter constraint which I previously described in the comments; we also need to process that filter rowwise:</p>
<pre><code>combined_results <- lapply(grouped_keywords, \(keyword_group) {
individual_results %>%
rowwise() %>%
filter(keyword %in% keyword_group, tolower(keyword) %in% tolower(unlist(token_text))) %>%
group_by(pdf_name, line_num) %>%
filter(length(unique(keyword)) == length(unique(keyword_group))) %>%
summarise(keywords = paste(keyword_group, collapse=' + '),
line_text = line_text[1],
token_text = token_text[1],
.groups="keep")
})
</code></pre>
<p>The complete code, also using <code>keyword_directory()</code> instead of <code>keyword_search()</code> and matching case-insensitive, becomes:</p>
<pre><code># Preparation -------------------------------------------------------------
library(pdfsearch)
library(dplyr)
# Our list of keywords, grouped in vectors
grouped_keywords <- list(c('individuals','model'),
c('information','abundance'),
c('individual','ranking'),
c('Test','That','Gives','No','Results'),
c('population','degree','types'))
grouped_keywords <- lapply(grouped_keywords, tolower)
# Directory containing a few example PDFs:
# https://arxiv.org/pdf/2207.00011.pdf
# https://arxiv.org/pdf/2207.00039.pdf
# https://arxiv.org/pdf/2207.00076.pdf
directory <- "~/Desktop/Rtemp/pdf/"
# Search for individual keywords ------------------------------------------
individual_results <- keyword_directory(directory,
keyword = unlist(grouped_keywords), # combine our keyword list into a single 1-dimensional vector
split_pdf = TRUE)
cat(nrow(individual_results), 'results for individual words\n')
View(individual_results)
# Merge results for keywords in the same subgroup and file ----------------
combined_results <- lapply(grouped_keywords, \(keyword_group) {
individual_results %>%
rowwise() %>%
filter(keyword %in% keyword_group, tolower(keyword) %in% tolower(unlist(token_text))) %>%
group_by(pdf_name, line_num) %>%
filter(length(unique(keyword)) == length(unique(keyword_group))) %>%
summarise(keywords = paste(keyword_group, collapse=' + '),
line_text = line_text[1],
token_text = token_text[1],
.groups="keep")
})
# Merge list of tibbles to a single tibble
combined_results <- do.call(rbind, combined_results)
# Output result
cat(nrow(combined_results), 'results for combined words\n')
combined_results
</code></pre>
<pre><code>> combined_results
# A tibble: 4 × 5
# Groups: pdf_name, line_num [4]
pdf_name line_num keywords line_text token_text
<chr> <int> <chr> <list> <list>
1 2207.00039.pdf 299 individuals + model <chr [1]> <list [1]>
2 2207.00076.pdf 4 individuals + model <chr [1]> <list [1]>
3 2207.00076.pdf 16 individuals + model <chr [1]> <list [1]>
4 2207.00039.pdf 10 information + abundance <chr [1]> <list [1]>
</code></pre>
<p>You'll notice I also added <code>split_pdf = TRUE</code> to the <code>keyword_directory()</code>-call, to improve handling of multi-column PDFs. I have also removed matching on the page number; matching on line number alone is enough.</p> | 2022-07-03 00:44:49.380000+00:00 | 2022-07-19 06:45:19.993000+00:00 | 2022-07-19 06:45:19.993000+00:00 | null | 72,833,955 | <p>I have a list of keywords to find text in a group of PDF files, some of the keyword must appear combined to extract the text even if they are not together.</p>
<p>I used the pdfsearch library and it finds text with the separated keywords. I read the documentation but I am not able to find a way to combine the keywords.</p>
<p>My code is as shown below:</p>
<pre><code>
library(pdftools)
library(pdfsearch)
keywords <- c("LOTE","VOLUMEN",
"LOTE","SOLVENCIA",
"LOTE","SEGURO",
"VOLUMEN","TRES ÚLTIMOS",
"VOLUMEN","3 ÚLTIMOS",
"VOLUMEN","(3) ÚLTIMOS",
"NO", "APLICA", "SOLVENCIA")
Results <- keyword_directory(directory,
keyword = keywords,
surround_lines = 1, full_names = TRUE,
ignore_case = TRUE, remove_hyphen = TRUE)
</code></pre>
<p>In the keyword assignment, every line is a combination:</p>
<pre><code>"LOTE" + "SOLVENCIA",
"LOTE" + "SEGURO",
"VOLUMEN" + "TRES ÚLTIMOS",
"VOLUMEN"+ "3 ÚLTIMOS",
"VOLUMEN" + "(3) ÚLTIMOS",
"NO" + "APLICA" + "SOLVENCIA"
</code></pre>
<p>For example, take the combination "NO" + "APLICA" + "SOLVENCIA".</p>
<p>This text should be extracted:
"No siempre aplica el uso de solvencia para el proyecto"</p>
<p>This text should not be extracted, even though it contains the keyword "NO":
"No pueden contar con las listas antes de tiempo"</p>
<p>At the moment I am only able to get the text where the individual keywords appear.</p> | 2022-07-01 19:39:27.500000+00:00 | 2022-07-19 06:45:19.993000+00:00 | null | r|pdf|text|keyword | [] | 0
31,132,209 | <p>For loss layers, there is no next layer, and so the top diff blob is technically undefined and unused - but Caffe uses this preallocated space to store unrelated data: Caffe supports multiplying the output of loss layers by a user-defined weight (loss_weight in the prototxt), and this information (a single scalar floating-point number) is stored in the first element of the diff array of the top blob. That's why you'll see every loss layer multiply by that amount to support this functionality. This is explained in <a href="http://caffe.berkeleyvision.org/tutorial/loss.html">Caffe's tutorial about the loss layer</a>.</p>
<p>This weight is usually used to add auxiliary losses to the network. You can read more about it in Google's <a href="http://arxiv.org/abs/1409.4842">Going Deeper with Convolutions</a> or in <a href="http://arxiv.org/abs/1409.5185">Deeply-Supervised Nets</a>.</p> | 2015-06-30 07:32:27.687000+00:00 | 2015-06-30 07:32:27.687000+00:00 | null | null | 31,099,233 | <p>I am currently trying to implement my own loss layer in caffe, and while attempting to do so, am using other layers as a reference. One thing that puzzles me, however, is the use of <code>top[0]->cpu_diff()</code> in <code>Backward_cpu</code>. I will be using the <code>EuclideanLossLayer</code> as a reference. Here are my questions:</p>
<ul>
<li><p>It is my understanding that <code>top[0]->cpu_diff()</code> holds the error derivative from the next layer, but what if there is no other layer, how is it initialised? since it is used in <code>EuclideanLossLayer</code> without performing any checks:</p>
<pre><code>const Dtype alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num();
</code></pre></li>
<li><p>Again, in the <code>EuclideanLossLayer</code>, the derivative for the error with respect to the activations is calculated using the following code snippet:</p>
<pre><code>const Dtype alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num();
caffe_cpu_axpby(
bottom[i]->count(), // count
alpha, // alpha
diff_.cpu_data(), // a
Dtype(0), // beta
bottom[i]->mutable_cpu_diff()); // b
</code></pre>
<p>If my first assumption is correct, and <code>top[0]->cpu_diff()</code> does indeed hold the error derivative for the layer above, why do we only use the first element i.e. <code>top[0]->cpu_diff()[0]</code> as opposed to multiplying by the whole vector i.e. <code>top[0]->cpu_diff()</code>?</p></li>
</ul> | 2015-06-28 11:25:13.047000+00:00 | 2015-06-30 07:32:27.687000+00:00 | null | c++|deep-learning|caffe | ['http://caffe.berkeleyvision.org/tutorial/loss.html', 'http://arxiv.org/abs/1409.4842', 'http://arxiv.org/abs/1409.5185'] | 3 |
50,623,144 | <p>What you're trying to do here is non-convex optimisation, and this is a notoriously difficult problem. Once you think about it, it makes sense, because just about any practical mathematical problem can be formulated as an optimisation problem.</p>
<p><strong>1. Prelude</strong><br>
So, before giving you hints as to where to find a solution to your particular problem, I want to illustrate why certain optimisation problems are easy to solve.</p>
<p>I'm going to start by discussing convex problems. These are easy to solve even in the constrained case, and the reason is that when you compute the gradient you actually get a lot of information about where the minimum cannot be (the first-order Taylor expansion of a convex function f is always an underestimate of f); additionally, there is only one minimum and no saddle points. If you're interested in learning more about convex optimisation, I recommend Stephen Boyd's class on convex optimisation on <a href="https://youtu.be/McLq1hEq3UY" rel="nofollow noreferrer">YouTube</a>.</p>
<p>Now then, if non-convex optimisation is so difficult, how come we are able to solve it in deep learning? The answer is simply that the non-convex function we are minimising in deep learning is quite well behaved, as demonstrated by <a href="https://arxiv.org/abs/1412.0233" rel="nofollow noreferrer">Henaff et al</a>.</p>
<p>It is therefore important that machine learning practitioners realise that the optimisation procedures used in deep learning will most likely not yield a good minimum on other non-convex problems, if they converge to a minimum in the first place.</p>
<p><strong>2. Answer to your question</strong><br>
Now then, to answer your problem: you are probably not going to find a fast solution, as general non-convex optimisation is NP-hard. But fear not, SciPy has a few global optimisation algorithms to choose from. <a href="https://stackoverflow.com/questions/21670080/how-to-find-global-minimum-in-python-optimization-with-bounds">Here</a> is a link to another Stack Overflow thread with a good answer to your question.</p>
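<p>To give a flavour (this is only a hedged sketch; the objective and bounds below are placeholders, not your actual Rule 110 problem), one of those SciPy global optimisers can be called like this:</p>
<pre><code>import numpy as np
from scipy.optimize import differential_evolution

# Placeholder non-convex objective; you would plug in your own sum-of-squares here
def objective(x):
    return np.sum(x ** 2) + 10.0 * np.sum(np.sin(3.0 * x) ** 2)

bounds = [(0.0, 1.0)] * 25  # one (low, high) pair per variable

result = differential_evolution(objective, bounds, seed=0)
print(result.x, result.fun)
</code></pre>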
<p><strong>3. Moral of the story</strong><br>
Finally, I want to remind you that convergence guarantees are important; forgetting them has led to an <a href="https://en.m.wikipedia.org/wiki/Sleipner_A" rel="nofollow noreferrer">oil rig collapsing</a>.</p>
<p><strong>PS.</strong> Please forgive typos, I'm using my phone for this.</p>
<p><strong>Update:</strong> As to why BFGS works with dlib, there might be two reasons: firstly, BFGS is better at using curvature information than L-BFGS, and secondly it uses a line search to find an optimal step size. I'd recommend checking whether PyTorch allows line searches and, if not, setting a decreasing step size (or just a really low one).</p> | 2018-05-31 11:21:48.317000+00:00 | 2018-05-31 11:35:06.430000+00:00 | 2018-05-31 11:35:06.430000+00:00 | null | 50,621,786 | <p>I am playing with Rule 110 of Wolfram cellular automata. Given a line of zeroes and ones, you can calculate the next line with these rules:</p>
<p><a href="https://i.stack.imgur.com/TOjI7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TOjI7.png" alt="enter image description here"></a></p>
<p>Starting with 00000000....1 in the end you get this sequence:</p>
<p><a href="https://i.stack.imgur.com/fvRUL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fvRUL.png" alt="enter image description here"></a></p>
<p>Just out of curiosity, I decided to approximate these rules with a polynomial, so that cells could be not only 0 and 1, but also gray values in between:</p>
<pre><code>def triangle(x,y,z,v0):
v=(y + y * y + y * y * y - 3. * (1. + x) * y * z + z * (1. + z + z * z)) / 3.
return (v-v0)*(v-v0)
</code></pre>
<p>so if x, y, z and v0 match any of these rules from the table, it will return 0, and a positive nonzero value otherwise.</p>
<p>Next I've added all possible groups of 4 neighbors into a single sum, which will be zero for integer solutions:</p>
<pre><code>def eval():
s = 0.
for i in range(W - 1):
for j in range(1, W + 1):
xx = x[i, (j - 1) % W]
yy = x[i, j % W]
zz = x[i, (j + 1) % W]
r = x[i + 1, j % W]
s += triangle(xx, yy, zz, r)
for j in range(W - 1): s += x[0, j] * x[0, j]
s += (1 - x[0, W - 1]) * (1 - x[0, W - 1])
return torch.sqrt(s)
</code></pre>
<p>Also, at the bottom of this function I add ordinary conditions for the first line, so that all elements are 0 except the last one, which is 1. Finally, I've decided to minimize this sum of squares on a W*W matrix with PyTorch:</p>
<pre><code>x = Variable(torch.DoubleTensor(W,W).zero_(), requires_grad=True)
opt = torch.optim.LBFGS([x],lr=.1)
for i in range(15500):
def closure():
opt.zero_grad()
s=eval()
s.backward()
return s
opt.step(closure)
</code></pre>
<p>Here is <a href="https://gist.github.com/stiv-yakovenko/f35e31cbf076b6a1359cf43afa5040f8" rel="nofollow noreferrer">full code</a>, you can try it yourself. The problem is that for 10*10 it converges to correct solution in ~20 steps:</p>
<p><a href="https://i.stack.imgur.com/ZPLHa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZPLHa.png" alt="enter image description here"></a></p>
<p>But if I take 15*15 board, it never finishes convergence:</p>
<p><a href="https://i.stack.imgur.com/aljAx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aljAx.png" alt="enter image description here"></a></p>
<p>The graph on the right shows how the sum of squares changes with each iteration, and you can see that it never reaches zero. My question is why this happens and how I can fix it. I tried different PyTorch optimisers, but all of them perform worse than LBFGS. I also tried different learning rates. Any ideas why this happens and how I can reach the final point during optimisation?</p>
<p>UPD: improved convergence graph, log of SOS:</p>
<p><a href="https://i.stack.imgur.com/Qo1ff.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qo1ff.png" alt="enter image description here"></a></p>
<p>UPD2: I also tried doing the same in C++ with dlib, and I don't have any convergence issues there; it goes much deeper in much less time:</p>
<p><a href="https://i.stack.imgur.com/kJHf6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kJHf6.png" alt="enter image description here"></a></p>
<p>I am using this code for optimisation in C++:</p>
<pre><code>find_min_using_approximate_derivatives(bfgs_search_strategy(),
objective_delta_stop_strategy(1e-87),
s, x, -1)
</code></pre> | 2018-05-31 10:09:36.940000+00:00 | 2018-10-26 15:08:52.690000+00:00 | 2018-05-31 10:50:51.340000+00:00 | python|tensorflow|pytorch|nonlinear-optimization | ['https://youtu.be/McLq1hEq3UY', 'https://arxiv.org/abs/1412.0233', 'https://stackoverflow.com/questions/21670080/how-to-find-global-minimum-in-python-optimization-with-bounds', 'https://en.m.wikipedia.org/wiki/Sleipner_A'] | 4 |
61,984,351 | <p>I think the best solution is deep learning, because the object to be detected always has <em>different backgrounds</em>. You can use Faster R-CNN, or, if you want speed, you can build a good detector with proper training using the YOLO algorithm. You can find GitHub repos easily. The mathematics behind these methods is described in these links.</p>
<ul>
<li>Faster RCNN <a href="https://arxiv.org/abs/1506.01497" rel="nofollow noreferrer">https://arxiv.org/abs/1506.01497</a></li>
<li>Yolo <a href="https://pjreddie.com/darknet/yolo/" rel="nofollow noreferrer">https://pjreddie.com/darknet/yolo/</a></li>
</ul> | 2020-05-24 10:03:20.830000+00:00 | 2020-05-24 10:03:20.830000+00:00 | null | null | 61,982,592 | <p>Hope you guys are doing well.</p>
<p>I have a question about opencv and extracting the same point from number of images.</p>
<p>Say, we have a dataset of around 100 images. (maybe more but for this purpose it will suffice).</p>
<p>Image will look something like:</p>
<p><a href="https://i.stack.imgur.com/6Dr6D.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Dr6D.jpg" alt="enter image description here"></a></p>
<p>As you can see in the image, there is an area marked in RED. I have marked that using Paint for this purpose. It indicates the highest point of the heap of soil. All the 100 images we have look more or less the same (without that crane in the background. But it can be removed using some opencv techniques, so that is not an issue). But this heap of soil can be either on the left hand side or the right hand side. According to its position, the coordinate of the highest point in the heap will change. </p>
<p>So, my question is, how do I find this position given that the heap can be either on the left or the right side?
Note that this position can be relative to some object (for example, in this image, the midpoint of the crane), or, if the images are of different sizes, then we can resize the images to have the same dimensions and take the coordinates of the point w.r.t. the image itself. </p>
<p>How do we find out the highest point of the heap though? Should we manually go through each image, label that point and make a dataset with the images and bounding boxes? Or is there another decent solution to this?</p>
<p>Also, if the soil heap is labelled manually (like shading the required area, i.e. the heap, in an image) using Paint or some other software, would that help too? But I cannot think of anything to do after this. </p>
<p>Thank You.</p> | 2020-05-24 06:53:11.760000+00:00 | 2020-05-24 14:18:25.693000+00:00 | null | python|opencv|image-processing|python-imaging-library | ['https://arxiv.org/abs/1506.01497', 'https://pjreddie.com/darknet/yolo/'] | 2 |
50,963,512 | <p>I found an answer to my own question in Section 3.2 of <a href="https://arxiv.org/abs/1710.07654" rel="nofollow noreferrer">the paper</a> (Deep Voice 3).
So, they trained both a phoneme-based model and a character-based model, mainly using phoneme inputs, except that the character-based model is used if words cannot be converted to their phoneme representations.</p>
<p>I learned the basics of seq2seq with attention model, especially for Speech Synthesis such as <a href="https://github.com/Rayhane-mamah/Tacotron-2" rel="nofollow noreferrer">Tacotron-2</a>.
Using a distributed well-trained model showed me how naturally our computer could speak with the seq2seq (end-to-end) model (you can listen to some audio samples <a href="https://r9y9.github.io/blog/2018/05/20/tacotron2/" rel="nofollow noreferrer">here</a>). But still, the model fails to read some words properly, e.g., it fails to read "obey [əˈbā]" in multiple ways like [əˈbī] and [əˈbē].</p>
<p>The reason is obvious because the word "obey" appears too little, only three times out of 225,715 words, in our dataset (<a href="https://keithito.com/LJ-Speech-Dataset/" rel="nofollow noreferrer">LJ Speech</a>), and the model had no luck.</p>
<p>So, how can we re-train the model to overcome the error? Adding extra audio clips containing the "obey" pronunciation sounds impractical, but reusing the three audio clips has the danger of overfitting. And also, I suppose we use a well-trained model and "simply training more" is not an effective solution.</p>
<p>Now, this is one of the drawbacks of the seq2seq model, which is not talked about much. The model successfully simplified the pipelines of the traditional models, e.g., for Speech Synthesis, it replaced an acoustic model, a text analysis frontend, etc. with a single neural network. But we lost the controllability of our model entirely. It's impossible to make the system read in a specific way.</p>
<p>Again, if you use a seq2seq model in any field and get an undesirable output, how do you fix that? Is there a data-scientific workaround to this problem, or maybe a cutting-edge Neural Network mechanism to gain more controllability in seq2seq model?</p>
<p>Thanks.</p> | 2018-06-02 13:45:53.170000+00:00 | 2018-06-21 08:09:50.813000+00:00 | 2018-06-03 12:49:37.703000+00:00 | tensorflow|machine-learning|neural-network|nlp|text-to-speech | ['https://arxiv.org/abs/1710.07654'] | 1 |
58,833,197 | <p>In short, a dropout layer randomly ignores a set of neurons, as one can see in the picture below. This is normally used to prevent the net from overfitting. </p>
<p>The Dense layer is a normal fully connected layer in a neural network. <a href="https://i.stack.imgur.com/c2O5w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c2O5w.png" alt="enter image description here"></a></p>
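<p>As a small illustrative sketch (the layer sizes here are arbitrary, not a recommendation), the two layer types are typically combined like this in Keras:</p>
<pre><code>import tensorflow as tf

model = tf.keras.Sequential([
    # Dense: fully connected layer with trainable weights
    tf.keras.layers.Dense(128, activation='relu', input_shape=(20,)),
    # Dropout: no trainable weights, randomly zeroes 50% of the activations during training only
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.summary()
</code></pre>
<p>Note that Dropout has no effect at inference time; it only perturbs activations while training.</p>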
<p>Resources:</p>
<p><a href="https://arxiv.org/abs/1207.0580" rel="nofollow noreferrer">Improving neural networks by preventing co-adaptation of feature detectors</a></p>
<p><a href="http://jmlr.org/papers/v15/srivastava14a.html" rel="nofollow noreferrer">Dropout: A Simple Way to Prevent Neural Networks from Overfitting </a></p>
<p><a href="https://machinelearningmastery.com/dropout-for-regularizing-deep-neural-networks/" rel="nofollow noreferrer">A Gentle Introduction to Dropout for Regularizing Deep Neural Networks</a></p>
<p>Let me know if you need a more precise explanation.</p> | 2019-11-13 08:41:56.183000+00:00 | 2019-11-13 17:45:33.670000+00:00 | 2019-11-13 17:45:33.670000+00:00 | null | 58,830,573 | <p>what is the main difference between dense and dropout layers in Keras </p> | 2019-11-13 04:59:54.740000+00:00 | 2019-11-13 17:45:33.670000+00:00 | null | machine-learning|keras|artificial-intelligence|transfer-learning | ['https://i.stack.imgur.com/c2O5w.png', 'https://arxiv.org/abs/1207.0580', 'http://jmlr.org/papers/v15/srivastava14a.html', 'https://machinelearningmastery.com/dropout-for-regularizing-deep-neural-networks/'] | 4 |
54,719,164 | <p>As this question is rather open-ended I will start from the last parts, moving towards the more general answer to the main question posed in the title.</p>
<p><strong>Quick note:</strong> as pointed out in the comments by <a href="https://stackoverflow.com/users/9094710/qusai-alothman">@Qusai Alothman</a>, you should find a better resource on the topic; this one is rather sparse when it comes to the necessary information.</p>
<p><strong>Additional note:</strong> full code for the process described in the last section would take <strong>way too much space</strong> to provide as an exact answer; it would be more of a blog post. I will highlight possible steps one should take to create such a network, with helpful links as we go along.</p>
<p><strong>Final note:</strong> If there is anything dumb down below (or you would like to expand the answer in any way or form), please do correct me/add info by posting a comment below.</p>
<h1>Question about the input</h1>
<p>Input here is generated from the <strong>random normal distribution</strong> and has <strong>no connection</strong> to the actual words. It is supposed to represent <a href="https://en.wikipedia.org/wiki/Word_embedding" rel="nofollow noreferrer">word embeddings</a>, e.g. representation of words as numbers carrying semantic (this is important!) meaning (sometimes depending on the context as well (see one of the current State Of The Art approaches, e.g. <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="nofollow noreferrer">BERT</a>)).</p>
<h2>Shape of the input</h2>
<p>In your example it is provided as:</p>
<p><code>seq_len, batch_size, embedding_size</code>,</p>
<p>where</p>
<ul>
<li><code>seq_len</code> - means length of a single sentence (varies across your
dataset), we will get to it later.</li>
<li><code>batch_size</code> - how many sentences
should be processed in one step of <code>forward</code> pass (in case of
<code>PyTorch</code> it is the forward method of class inheriting from
<a href="https://pytorch.org/docs/stable/nn.html#module" rel="nofollow noreferrer">torch.nn.Module</a>)</li>
<li><code>embedding_size</code> - vector with which one word is represented (it
might range from the usual <code>100/300</code> using <code>word2vec</code> up to <code>4096</code> or
so using the more recent approaches like the <strong>BERT</strong> mentioned
above)</li>
</ul>
<p>In this case it's all hard-coded of size one, which is not really useful for a newcomer, it only outlines the idea that way.</p>
<h3>Why does one need beginning of sentence and end of sentence tags in this case?</h3>
<p>Correct me if I'm wrong, but you don't need it <strong>if your input is separated into sentences</strong>. It is used if you provide <strong>multiple sentences</strong> to the model, and want to indicate <strong>unambiguously</strong> the beginning and end of each (used with models which depend on the previous/next sentences, it seems to not be the case here). Those are encoded by special tokens (the ones which are not present in the entire corpus), so neural network "could learn" they represent end and beginning of sentence (one special token for this approach would be enough).</p>
<p>If you were to use serious dataset, I would advise to split your text using libraries like <a href="https://spacy.io/" rel="nofollow noreferrer">spaCy</a> or <a href="https://www.nltk.org/" rel="nofollow noreferrer">nltk</a> (the first one is a pleasure to use IMO), they do a really good job for this task.</p>
<p>Your dataset might already be split into sentences; in that case you are pretty much ready to go.</p>
<h3>Why don't I see the input being a corpus on which the model is trained like other classic NLP problems?</h3>
<p>I don't recall models being trained on the corpuses <strong>as is</strong>, e.g. using strings. Usually those are represented by floating-point numbers using:</p>
<ul>
<li>Simple approaches, e.g. <a href="https://en.wikipedia.org/wiki/Bag-of-words_model" rel="nofollow noreferrer">Bag Of
Words</a> or
<a href="https://pl.wikipedia.org/wiki/TFIDF" rel="nofollow noreferrer">TF-IDF</a></li>
<li>More sophisticated ones, which provide some information about word
relationships (e.g. <code>king</code> is more semantically related to <code>queen</code>
than to a, say, <code>banana</code>). Those were already linked above, some
other noticeable might be
<a href="https://nlp.stanford.edu/projects/glove/" rel="nofollow noreferrer">GloVe</a> or
<a href="https://arxiv.org/abs/1802.05365" rel="nofollow noreferrer">ELMo</a> and tons of other creative
approaches.</li>
</ul>
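<p>As a tiny, hedged example of the last point (the model name below is just one of spaCy's pretrained packages and has to be downloaded first with <code>python -m spacy download en_core_web_md</code>):</p>
<pre><code>import spacy

nlp = spacy.load("en_core_web_md")  # medium English model shipping with word vectors
doc = nlp("king queen banana")
print(doc[0].vector.shape)          # the 300-dimensional vector for "king"
print(doc[0].similarity(doc[1]))    # king vs queen: relatively high
print(doc[0].similarity(doc[2]))    # king vs banana: lower
</code></pre>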
<h1>Question about the output</h1>
<p>One should output <strong>indices into <a href="https://pytorch.org/docs/stable/nn.html#embedding" rel="nofollow noreferrer">embeddings</a></strong>, which in turn correspond to words represented by a vector (more sophisticated approach mentioned above).</p>
<p>Each row in such an embedding represents a unique word and its respective columns are its unique representation (in PyTorch, the first index might be reserved for the words for which a representation is unknown [if using pretrained embeddings]; you may also delete those words, or represent them as an average of the sentence/document; there are some other viable approaches as well).</p>
<h2>Loss provided in the example</h2>
<pre><code># for language models, use cross-entropy :)
loss = nn.MSELoss()
</code></pre>
<p>For this task it makes no sense, as <a href="https://en.wikipedia.org/wiki/Mean_squared_error" rel="nofollow noreferrer">Mean Squared Error</a> is a regression metric, not a classification one.</p>
<p>We want to use one for classification, so <a href="https://en.wikipedia.org/wiki/Softmax_function" rel="nofollow noreferrer">softmax</a> should be used for the multiclass case (we should be outputting numbers spanning <code>[0, N]</code>, where <code>N</code> is the number of unique words in our corpus).</p>
<p>PyTorch's <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss" rel="nofollow noreferrer">CrossEntropyLoss</a> already takes logits (output of the last layer <strong>without</strong> activation like softmax) and returns the loss value for each example. I would advise this approach as it's numerically stable (and I like it as the most minimal one).</p>
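<p>As a minimal sketch (the shapes below are made up just for illustration), the loss is applied directly to the logits:</p>
<pre><code>import torch

vocab_size = 10000
logits = torch.randn(64, vocab_size)             # raw outputs of the last linear layer, no softmax
targets = torch.randint(0, vocab_size, (64,))    # index of the correct word for each of 64 tokens

criterion = torch.nn.CrossEntropyLoss()
loss = criterion(logits, targets)                # softmax + negative log-likelihood in one stable op
</code></pre>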
<h1>I am trying to fill in the blank using a bidirectional RNN and pytorch</h1>
<p>This is a long one, I will only highlight steps I would undertake in order to create a model whose idea represents the one outlined in the post.</p>
<h2>Basic preparation of dataset</h2>
<p>You may use the one you mentioned above or start with something easier like <a href="https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html" rel="nofollow noreferrer">20 newsgroups</a> from <strong>scikit-learn</strong>.</p>
<p>First steps should be roughly this:</p>
<ul>
<li>scrape the metadata (if any) from your dataset (those might be HTML tags, some headers etc.)</li>
<li>split your text into sentences using a pre-made library (mentioned above)</li>
</ul>
<p>Next, you would like to create your target (e.g. words to be filled) in each sentence.
Each word should be replaced by a special token (say <code><target-token></code>) and moved to target.</p>
<p>Example:</p>
<ul>
<li>sentence: <code>Neural networks can do some stuff.</code></li>
</ul>
<p>would give us the following sentences and it's respective targets:</p>
<ul>
<li>sentence: <code><target-token> networks can do some stuff.</code> target: <code>Neural</code></li>
<li>sentence: <code>Neural <target-token> can do some stuff.</code> target: <code>networks</code></li>
<li>sentence: <code>Neural networks <target-token> do some stuff.</code> target: <code>can</code></li>
<li>sentence: <code>Neural networks can <target-token> some stuff.</code> target: <code>do</code></li>
<li>sentence: <code>Neural networks can do <target-token> stuff.</code> target: <code>some</code></li>
<li>sentence: <code>Neural networks can do some <target-token>.</code> target: <code>stuff</code></li>
<li>sentence: <code>Neural networks can do some stuff <target-token></code> target: <code>.</code></li>
</ul>
<p>You should adjust this approach to the problem at hand by correcting typos if there are any, tokenizing, lemmatizing and others, experiment!</p>
<h2>Embeddings</h2>
<p>Each word in each sentence should be replaced by an integer, which in turn points to its embedding.</p>
<p>I would advise you to use a pre-trained one. spaCy provides word vectors, but another interesting approach I would highly recommend is in the open source library <a href="https://github.com/zalandoresearch/flair" rel="nofollow noreferrer">flair</a>.</p>
<p>You may train your own, but it would take a lot of time + a lot of data for unsupervised training, and I think it is way beyond the scope of this question.</p>
<h2>Data batching</h2>
<p>One should use PyTorch's <a href="https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset" rel="nofollow noreferrer">torch.utils.data.Dataset</a> and <a href="https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader" rel="nofollow noreferrer">torch.utils.data.DataLoader</a>.</p>
<p>In my case, a good idea was to provide a custom <code>collate_fn</code> to <code>DataLoader</code>, which is responsible for creating padded batches of data (or represented as <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.PackedSequence" rel="nofollow noreferrer">torch.nn.utils.rnn.PackedSequence</a> already).</p>
<p><strong>Important:</strong> currently, you have to sort the batch by length (word-wise) and keep the indices needed to "unsort" the batch into its original form; you should remember that during implementation. You may use <code>torch.sort</code> for that task. In future versions of PyTorch, there is a chance one might not have to do that, see <a href="https://github.com/pytorch/pytorch/issues/3584" rel="nofollow noreferrer">this issue</a>.</p>
<p>Oh, and remember to shuffle your dataset using <code>DataLoader</code>, while we're at it.</p>
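<p>A rough sketch of such a <code>collate_fn</code> (everything here is placeholder data, and the padding/sorting details will depend on your exact pipeline) could look like this:</p>
<pre><code>import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

# Placeholder dataset: 100 (sequence, target) pairs with varying sequence lengths
fake_data = [
    (torch.randn(torch.randint(5, 15, (1,)).item(), 300), torch.tensor(i % 10))
    for i in range(100)
]

def collate(batch):
    sequences, targets = zip(*batch)
    lengths = torch.tensor([len(s) for s in sequences])
    lengths, order = lengths.sort(descending=True)   # sort by length, keep indices to "unsort" later
    padded = pad_sequence(sequences)                  # [max_len, batch, embedding_size]
    return padded[:, order], torch.stack(targets)[order], lengths

loader = DataLoader(fake_data, batch_size=32, shuffle=True, collate_fn=collate)
</code></pre>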
<h2>Model</h2>
<p>You should create a proper model by inheriting from <code>torch.nn.Module</code>. I would advise you to create a more general model, where you can provide PyTorch's cells (like GRU, LSTM or RNN), multilayered and <strong>bidirectional</strong> (as is described in the post).</p>
<p>Something along those lines when it comes to model construction:</p>
<pre><code>import torch
class Filler(torch.nn.Module):
def __init__(self, cell, embedding_words_count: int):
        super().__init__()
        self.cell = cell
# We want to output vector of N
self.linear = torch.nn.Linear(self.cell.hidden_size, embedding_words_count)
def forward(self, batch):
# Assuming batch was properly prepared before passing into the network
output, _ = self.cell(batch)
# Batch shape[0] is the length of longest already padded sequence
# Batch shape[1] is the length of batch, e.g. 32
# Here we create a view, which allows us to concatenate bidirectional layers in general manner
output = output.view(
batch.shape[0],
batch.shape[1],
2 if self.cell.bidirectional else 1,
self.cell.hidden_size,
)
# Here outputs of bidirectional RNNs are summed, you may concatenate it
# It makes up for an easier implementation, and is another often used approach
summed_bidirectional_output = output.sum(dim=2)
# Linear layer needs batch first, we have to permute it.
# You may also try with batch_first=True in self.cell and prepare your batch that way
# In such case no need to permute dimensions
linear_input = summed_bidirectional_output.permute(1, 0, 2)
        return self.linear(linear_input)
</code></pre>
<p>As you can see, information about shapes can be obtained in a general fashion. Such an approach will allow you to create a model with as many layers as you want, bidirectional or not (the <code>batch_first</code> argument is problematic, but you can get around it too in a general way; I left it out for improved clarity), see below:</p>
<pre><code>model = Filler(
torch.nn.GRU(
# Size of your embeddings, for BERT it could be 4096, for spaCy's word2vec 300
input_size=300,
hidden_size=100,
num_layers=3,
batch_first=False,
dropout=0.4,
bidirectional=True,
),
# How many unique words are there in your dataset
embedding_words_count=10000,
)
</code></pre>
<p>You may pass <code>torch.nn.Embedding</code> into your model (if pretrained and already filled), create it from a numpy matrix, or use a plethora of other approaches; <strong>it's highly dependent on how you structure your code exactly</strong>. Still, please, make your code more general, <strong>do not hardcode shapes</strong> unless it's totally necessary (usually it's not).</p>
<p><strong>Remember it's only a showcase, you will have to tune and fix it on your own</strong>.
This implementation returns logits and no <code>softmax</code> layer is used. If you wish to calculate perplexity, you may have to add it in order to obtain a correct probability distribution across all possible vectors.</p>
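<p>As a hedged sketch, perplexity is then just the exponential of the mean cross-entropy (the tensors below are placeholders):</p>
<pre><code>import torch

vocab_size = 10000
logits = torch.randn(64, vocab_size)            # placeholder network outputs for 64 tokens
targets = torch.randint(0, vocab_size, (64,))   # placeholder correct word indices

cross_entropy = torch.nn.functional.cross_entropy(logits, targets)  # averaged over tokens by default
perplexity = torch.exp(cross_entropy)
print(perplexity.item())
</code></pre>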
<p>BTW: <a href="https://discuss.pytorch.org/t/concatenation-of-the-hidden-states-produced-by-a-bidirectional-lstm/3686" rel="nofollow noreferrer">Here</a> is some info on concatenation of bidirectional output of RNN.</p>
<h2>Model training</h2>
<p>I would <strong>highly recommend</strong> <a href="https://pytorch.org/ignite/" rel="nofollow noreferrer">PyTorch ignite</a> as it's quite customizable, you can log a lot of info using it, perform validation and abstract cluttering parts like for loops in training.</p>
<p>Oh, and split your model, training and others into separate modules, don't put everything into one unreadable file.</p>
<h1>Final notes</h1>
<p>This is the outline of how I would approach this problem, you may have more fun using attention networks instead of merely using the last output layer as in this example, though you shouldn't start with that.</p>
<p>And please check PyTorch's 1.0 documentation and <strong>do not</strong> follow blindly tutorials or blog posts you see online as they might be out of date really fast and quality of the code varies enormously. For example <a href="https://pytorch.org/docs/stable/autograd.html#variable-deprecated" rel="nofollow noreferrer">torch.autograd.Variable</a> is <strong>deprecated</strong> as can be seen in the link.</p> | 2019-02-16 01:58:45.453000+00:00 | 2021-12-11 09:40:25.277000+00:00 | 2021-12-11 09:40:25.277000+00:00 | null | 54,323,427 | <p>I am trying to fill in the blank using a bidirectional RNN and pytorch. </p>
<p>The input will be like: <code>The dog is _____, but we are happy he is okay.</code></p>
<p>The output will be like: </p>
<pre><code>1. hyper (Perplexity score here)
2. sad (Perplexity score here)
3. scared (Perplexity score here)
</code></pre>
<p>I discovered this idea here: <a href="https://medium.com/@plusepsilon/the-bidirectional-language-model-1f3961d1fb27" rel="noreferrer">https://medium.com/@plusepsilon/the-bidirectional-language-model-1f3961d1fb27</a></p>
<pre><code>import torch, torch.nn as nn
from torch.autograd import Variable
text = ['BOS', 'How', 'are', 'you', 'EOS']
seq_len = len(text)
batch_size = 1
embedding_size = 1
hidden_size = 1
output_size = 1
random_input = Variable(
torch.FloatTensor(seq_len, batch_size, embedding_size).normal_(), requires_grad=False)
bi_rnn = torch.nn.RNN(
input_size=embedding_size, hidden_size=hidden_size, num_layers=1, batch_first=False, bidirectional=True)
bi_output, bi_hidden = bi_rnn(random_input)
# stagger
forward_output, backward_output = bi_output[:-2, :, :hidden_size], bi_output[2:, :, hidden_size:]
staggered_output = torch.cat((forward_output, backward_output), dim=-1)
linear = nn.Linear(hidden_size * 2, output_size)
# only predict on words
labels = random_input[1:-1]
# for language models, use cross-entropy :)
loss = nn.MSELoss()
output = loss(linear(staggered_output), labels)
</code></pre>
<p>I am trying to reimplement the code above found at the bottom of the blog post. I am new to pytorch and nlp, and can't understand what the input and output to the code is.</p>
<p>Question about the input: I am guessing the input is the few words that are given. Why does one need beginning-of-sentence and end-of-sentence tags in this case? Why don't I see the input being a corpus on which the model is trained, like in other classic NLP problems? I would like to use the Enron email corpus to train the RNN.</p>
<p>Question about the output: I see the output is a tensor. My understanding is the tensor is a vector, so maybe a word vector in this case. How can you use the tensor to output the words themselves? </p> | 2019-01-23 09:00:41.573000+00:00 | 2021-12-11 09:40:25.277000+00:00 | 2019-02-13 18:52:15.410000+00:00 | python|nlp|pytorch | ['https://stackoverflow.com/users/9094710/qusai-alothman', 'https://en.wikipedia.org/wiki/Word_embedding', 'https://arxiv.org/pdf/1810.04805.pdf', 'https://pytorch.org/docs/stable/nn.html#module', 'https://spacy.io/', 'https://www.nltk.org/', 'https://en.wikipedia.org/wiki/Bag-of-words_model', 'https://pl.wikipedia.org/wiki/TFIDF', 'https://nlp.stanford.edu/projects/glove/', 'https://arxiv.org/abs/1802.05365', 'https://pytorch.org/docs/stable/nn.html#embedding', 'https://en.wikipedia.org/wiki/Mean_squared_error', 'https://en.wikipedia.org/wiki/Mean_squared_error', 'https://en.wikipedia.org/wiki/Mean_squared_error', 'https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html', 'https://github.com/zalandoresearch/flair', 'https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset', 'https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader', 'https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.PackedSequence', 'https://github.com/pytorch/pytorch/issues/3584', 'https://discuss.pytorch.org/t/concatenation-of-the-hidden-states-produced-by-a-bidirectional-lstm/3686', 'https://pytorch.org/ignite/', 'https://pytorch.org/docs/stable/autograd.html#variable-deprecated'] | 23 |
66,931,354 | <p>I think that the problem lies rather in the architecture itself and I would first consider the overall quality of generated images rather than their brightness or darkness. The generations clearly get better as you train for more epochs. I agree that the images get darker but even in the early epochs, the generated images are significantly darker than the ones in the training samples. (At least compared to ones that you posted.)</p>
<p>And now coming back to your architecture, 30k samples are actually enough to obtain very convincing results, as achieved by state-of-the-art models in face generation. The generations do get better, but they are still far away from being "very good".</p>
<p>I think <strong>the generator is definitely not strong enough and is the problematic part.</strong> (The fact that your generator loss skyrockets can also be a hint for this.) In the generator, all you do is just upsampling and upsampling. You should note that the transposed convolution is more like a heuristic and it does not provide much learnability. This is related to the nature of the problem. When you are doing convolutions, you have all the information and you are trying to learn to encode but in the decoder, you are trying to recover information that was previously lost :). So, in a way, it is harder to learn because the information taken as input is limited and lacking.</p>
<p>In fact, <strong>deterministic bilinear interpolation</strong> methods do perform similarly to, or even better than, transposed convolutions, and these are purely based on scaling/extending <strong>with zero learnability.</strong> (<a href="https://arxiv.org/pdf/1707.05847.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1707.05847.pdf</a>)</p>
<p>To observe the transposed convolutions' limits, I suggest that you replace all the <code>ConvTranspose2d</code> layers with <code>UpSampling2D</code> (<a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling2D" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling2D</a>) and I claim that the results will not be much different. UpSampling2D is one of those deterministic methods that I mentioned.</p>
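<p>As a hedged illustration of what such a replacement could look like in PyTorch (channel sizes are placeholders, and this only loosely mirrors the <code>deconv_convtranspose</code> helper from your code), the learned transposed convolution is swapped for deterministic upsampling, with a plain convolution kept only to adjust the channel count:</p>
<pre><code>import torch.nn as nn

def upsample_block(in_dim, out_dim):
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode='nearest'),                      # deterministic x2 upsampling
        nn.Conv2d(in_dim, out_dim, kernel_size=3, stride=1, padding=1),   # adjusts channels / refines
        nn.BatchNorm2d(out_dim),
    )
</code></pre>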
<p>To improve your generator, you can try to insert convolutional layers between upsampling layers. These layers would refine the features/images and correct some of the mistakes that occurred during the up-sampling. In addition to corrections, the next upsampling layer would take a more informative input. What I mean is to try a UNet like decoding that you can find in this link (<a href="https://arxiv.org/pdf/1505.04597.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1505.04597.pdf</a>). Of course, that would be a primary step to explore. There are many more GAN architectures that you can try and probably perform better.</p> | 2021-04-03 12:53:01.453000+00:00 | 2021-04-03 13:14:51.820000+00:00 | 2021-04-03 13:14:51.820000+00:00 | null | 57,119,171 | <p>I created a simple <code>DCGAN</code> with 6 layers and trained it on CelebA dataset (a portion of it containing 30K images).<br>
I noticed my network's generated images look dim, and as the network trains more, the bright colors fade into dim ones! </p>
<p>here are some example:<br>
This is how CelebA images look like (real images used for training) :<br>
<a href="https://i.stack.imgur.com/VTD4p.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/VTD4p.jpg" alt="enter image description here"></a></p>
<p>and these are the generated ones ,the number shows the epoch number(they were trained for 30 epochs ultimately) :<br>
<a href="https://i.stack.imgur.com/nWPud.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nWPud.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/Ju0e5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Ju0e5.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/KJaMH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KJaMH.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/sKvZD.png" rel="noreferrer"><img src="https://i.stack.imgur.com/sKvZD.png" alt="enter image description here"></a></p>
<p>What is the cause for this phenomenon?<br>
I tried all the general tricks concerning <code>GAN</code>s, such as rescaling the input image between -1 and 1, not using <code>BatchNorm</code> in the first layer of the <code>Discriminator</code> and in the last layer of the <code>Generator</code>, and
using <code>LeakyReLU(0.2)</code> in the <code>Discriminator</code> and <code>ReLU</code> for the <code>Generator</code>. Yet I have no idea why the images are this dim/dark!<br>
Is this caused simply by having fewer training images?<br>
Or is it caused by the networks' deficiencies? If so, what is the source of such deficiencies?<br>
Here is how these networks are implemented: </p>
<pre><code>def conv_batch(in_dim, out_dim, kernel_size, stride, padding, batch_norm=True):
layers = nn.ModuleList()
conv = nn.Conv2d(in_dim, out_dim, kernel_size, stride, padding, bias=False)
layers.append(conv)
if batch_norm:
layers.append(nn.BatchNorm2d(out_dim))
return nn.Sequential(*layers)
class Discriminator(nn.Module):
def __init__(self, conv_dim=32, act = nn.ReLU()):
super().__init__()
self.conv_dim = conv_dim
self.act = act
self.conv1 = conv_batch(3, conv_dim, 4, 2, 1, False)
self.conv2 = conv_batch(conv_dim, conv_dim*2, 4, 2, 1)
self.conv3 = conv_batch(conv_dim*2, conv_dim*4, 4, 2, 1)
self.conv4 = conv_batch(conv_dim*4, conv_dim*8, 4, 1, 1)
self.conv5 = conv_batch(conv_dim*8, conv_dim*10, 4, 2, 1)
self.conv6 = conv_batch(conv_dim*10, conv_dim*10, 3, 1, 1)
self.drp = nn.Dropout(0.5)
self.fc = nn.Linear(conv_dim*10*3*3, 1)
def forward(self, input):
batch = input.size(0)
output = self.act(self.conv1(input))
output = self.act(self.conv2(output))
output = self.act(self.conv3(output))
output = self.act(self.conv4(output))
output = self.act(self.conv5(output))
output = self.act(self.conv6(output))
output = output.view(batch, self.fc.in_features)
output = self.fc(output)
output = self.drp(output)
return output
def deconv_convtranspose(in_dim, out_dim, kernel_size, stride, padding, batchnorm=True):
layers = []
deconv = nn.ConvTranspose2d(in_dim, out_dim, kernel_size = kernel_size, stride=stride, padding=padding)
layers.append(deconv)
if batchnorm:
layers.append(nn.BatchNorm2d(out_dim))
return nn.Sequential(*layers)
class Generator(nn.Module):
def __init__(self, z_size=100, conv_dim=32):
super().__init__()
self.conv_dim = conv_dim
# make the 1d input into a 3d output of shape (conv_dim*4, 4, 4 )
self.fc = nn.Linear(z_size, conv_dim*4*4*4)#4x4
# conv and deconv layer work on 3d volumes, so we now only need to pass the number of fmaps and the
# input volume size (its h,w which is 4x4!)
self.drp = nn.Dropout(0.5)
self.deconv1 = deconv_convtranspose(conv_dim*4, conv_dim*3, kernel_size =4, stride=2, padding=1)
self.deconv2 = deconv_convtranspose(conv_dim*3, conv_dim*2, kernel_size =4, stride=2, padding=1)
self.deconv3 = deconv_convtranspose(conv_dim*2, conv_dim, kernel_size =4, stride=2, padding=1)
self.deconv4 = deconv_convtranspose(conv_dim, conv_dim, kernel_size =3, stride=2, padding=1)
self.deconv5 = deconv_convtranspose(conv_dim, 3, kernel_size =4, stride=1, padding=1, batchnorm=False)
def forward(self, input):
output = self.fc(input)
output = self.drp(output)
output = output.view(-1, self.conv_dim*4, 4, 4)
output = F.relu(self.deconv1(output))
output = F.relu(self.deconv2(output))
output = F.relu(self.deconv3(output))
output = F.relu(self.deconv4(output))
# we create the image using tanh!
output = F.tanh(self.deconv5(output))
return output
# testing nets
dd = Discriminator()
zd = np.random.rand(2,3,64,64)
zd = torch.from_numpy(zd).float()
# print(dd)
print(dd(zd).shape)
gg = Generator()
z = np.random.uniform(-1,1,size=(2,100))
z = torch.from_numpy(z).float()
print(gg(z).shape)
</code></pre> | 2019-07-19 20:18:07.813000+00:00 | 2021-04-03 13:14:51.820000+00:00 | 2019-07-22 08:05:22.683000+00:00 | python|pytorch|generative-adversarial-network | ['https://arxiv.org/pdf/1707.05847.pdf', 'https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling2D', 'https://arxiv.org/pdf/1505.04597.pdf'] | 3 |
54,044,859 | <p>Some of this previous work might be relevant:</p>
<ul>
<li>OCR from Video Stream of Book Flipping (<a href="https://ieeexplore.ieee.org/document/6778296" rel="nofollow noreferrer">https://ieeexplore.ieee.org/document/6778296</a>)</li>
<li>Video OCR for Video Indexing, Sankirti S. and P. M. Kamade (<a href="https://arxiv.org/pdf/1109.6862.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1109.6862.pdf</a>)</li>
<li>Video OCR API (<a href="https://rapidapi.com/SemaMediaData/api/video-ocr" rel="nofollow noreferrer">https://rapidapi.com/SemaMediaData/api/video-ocr</a>)</li>
</ul> | 2019-01-04 19:23:38.660000+00:00 | 2019-01-04 19:23:38.660000+00:00 | null | null | 19,224,150 | <p>I am working in win form C# application.
Application grabs the images from video device like (TV/Camera or input video file).
Then processed it and save it in the text document.</p>
<p>My question is, "How to create Digital Video OCR ?"</p>
<p>please suggest me any tutorial/link/source code.</p> | 2013-10-07 11:50:26.547000+00:00 | 2019-04-07 20:38:14.173000+00:00 | null | image-processing|ocr | ['https://ieeexplore.ieee.org/document/6778296', 'https://arxiv.org/pdf/1109.6862.pdf', 'https://rapidapi.com/SemaMediaData/api/video-ocr'] | 3 |
45,482,590 | <p>It is not "ignoring" a 90% of the input, the problem is simply that if you perform a 1-dimensional convolution with a kernel of size <em>K</em> over an input of size <em>X</em> the result of the convolution will have size <em>X</em> - <em>K</em> + 1. If you want the output to have the same size as the input, then you need to extend or "pad" your data. There are several strategies for that, such as add zeros, replicate the value at the ends or wrap around. Keras' <a href="https://keras.io/layers/convolutional/#conv1d" rel="noreferrer"><code>Convolution1D</code></a> has a <code>padding</code> parameter that you can set to <code>"valid"</code> (the default, no padding), <code>"same"</code> (add zeros at both sides of the input to obtain the same output size as the input) and <code>"causal"</code> (padding with zeros at one end only, idea taken from <a href="https://arxiv.org/abs/1609.03499" rel="noreferrer">WaveNet</a>).</p>
<p><em>Update</em></p>
<p>About the questions in your comments. So you say your input is <code>(600, 10)</code>. That, I assume, is the size of one example, and you have a batch of examples with size <code>(N, 600, 10)</code>. From the point of view of the convolution operation, this means you have <code>N</code> examples, each of with a length of at most <code>600</code> (this "length" may be time or whatever else, it's just the dimension across which the convolution works) and, at each of these <code>600</code> points, you have vectors of size <code>10</code>. Each of these vectors is considered an atomic sample with <code>10</code> features (e.g. price, heigh, size, whatever), or, as is sometimes called in the context of convolution, "channels" (from the RGB channels used in 2D image convolution).</p>
<p>The point is, the convolution has a kernel size and a number of output channels, which is the <code>filters</code> parameter in Keras. In your example, what the convolution does is take every possible slice of 25 contiguous 10-vectors and produce a single 40-vector for each (that, for every example in the batch, of course). So you pass from having 10 features or channels in your input to having 40 after the convolution. It's not that it's using only one of the 10 elements in the last dimension, it's using all of them to produce the output.</p>
<p>If the meaning of the dimensions in your input is not what the convolution is interpreting, or if the operation it is performing is not what you were expecting, you may need to either reshape your input or use a different kind of layer.</p> | 2017-08-03 11:13:58.707000+00:00 | 2017-08-03 12:05:15.407000+00:00 | 2017-08-03 12:05:15.407000+00:00 | null | 45,482,209 | <p>The first layer of my neural network is like this:</p>
<pre><code>model.add(Conv1D(filters=40,
kernel_size=25,
input_shape=x_train.shape[1:],
activation='relu',
kernel_regularizer=regularizers.l2(5e-6),
strides=1))
</code></pre>
<p>if my input shape is <code>(600,10)</code></p>
<p>i get <code>(None, 576, 40)</code> as output shape</p>
<p>if my input shape is <code>(6000,1)</code></p>
<p>i get <code>(None, 5976, 40)</code> as output shape</p>
<p>so my question is what exactly is happening here? is the first example simply ignoring 90% of the input?</p> | 2017-08-03 10:56:10.550000+00:00 | 2017-08-03 12:05:15.407000+00:00 | null | python|keras | ['https://keras.io/layers/convolutional/#conv1d', 'https://arxiv.org/abs/1609.03499'] | 2 |
68,464,181 | <p>Just like changing a single pixel in an image doesn't change the image, changing one bit in an array doesn't significantly adjust the prediction.</p>
<p>I ran mobilenet on a black 224x224 image and it predicted class 819 (whatever that is). Then I changed the top-left pixel to white and re-ran mobilenet and it still classifies as class 819.</p>
<p><a href="https://www.tensorplayground.com/1.0.0/?code=//+TensorPlayground.com%0A//+RGB+Tensor%0A//+INPUT+TENSOR+SHAPE:+%5B622,1024,3%5D%0A//+MODEL:+Mobilenet+v2+Expects+%5Bbatch,+224,+224,+3%5D%0A%0A(tf,+aTensor,+model)+%3D%3E+%7B%0A++//+Black+Box%0A++const+blackBox+%3D+tf.zeros(%5B224,+224,+3%5D)%0A++const+blackBatch+%3D+blackBox.expandDims(0)%0A++const+result1+%3D+model.predict(blackBatch)%0A++console.log(%27predicted+from+zeros%27,+tf.topk(result1,+1).indices.dataSync())%0A++%0A++//+Black+box+with+a+dot%0A++const+arrayRep+%3D+blackBatch.arraySync()%0A++arrayRep%5B0%5D%5B0%5D%5B0%5D+%3D+%5B1,+1,+1%5D%0A++const+singleDotBox+%3D+tf.tensor4d(arrayRep)%0A++const+result2+%3D+model.predict(singleDotBox)%0A++console.log(%27predicted+from+zeros%27,+tf.topk(result1,+1).indices.dataSync())%0A++%0A++return+singleDotBox%0A%7D&inputTensor=bella&modelInfo=%7B%22value%22:%22mobilenetv2%22,%22label%22:%22Mobilenet+v2%22,%22info%22:%22Expects+%5Bbatch,+224,+224,+3%5D%22,%22link%22:%22https://arxiv.org/abs/1801.04381%22,%22url%22:%22https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/classification/2%22,%22fromTFHub%22:true,%22type%22:%22graph%22%7D" rel="nofollow noreferrer">See example code here</a></p>
<p>Changing a single bit does not have a cascading effect like a hash function. MobileNet, by its nature, is resilient to noise.</p> | 2021-07-21 04:56:59.030000+00:00 | 2021-08-02 22:43:16.600000+00:00 | 2021-08-02 22:43:16.600000+00:00 | null | 68,383,643 | <p>I'm trying to get a Keras model I converted to TensorFlow.js to work in React Native, but the model keeps giving bad responses. I did some digging and realized that the tensor I passed into model.predict is somehow being changed, causing it to give the same incorrect prediction. Any suggestions would be appreciated. I'm pretty much hard stuck. Code below:</p>
<pre><code>import React, {useState, useEffect} from 'react';
import {View, Text, Button} from 'react-native';
import * as tf from '@tensorflow/tfjs';
import {
bundleResourceIO
} from '@tensorflow/tfjs-react-native';
import * as mobilenet from '@tensorflow-models/mobilenet';
function thing() {
const [model, setModel] = useState(null);
const [tensor, setTensor] = useState(null);
async function loadModel() {
const modelJson = require('./assets/model.json');
const weight = require('./assets/group1-shard1of1.bin');
const backend = await tf.ready();
const item = await tf.loadLayersModel(
bundleResourceIO(modelJson, weight)
);
const tfTensor = tf.tensor([[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]);
setModel(item);
setTensor(tfTensor);
}
useEffect(() => {
loadModel();
}, []);
async function test() {
if(tensor !== null && model !== null) {
const result = await model.predict(tensor);
console.log(result.dataSync())
}
}
return (
<View>
<Button
onPress={test}
title="click"
color="#841584"
accessibilityLabel="Learn more about this purple button"
/>
</View>
);
}
export default thing;
</code></pre> | 2021-07-14 18:49:18.200000+00:00 | 2021-08-02 22:43:16.600000+00:00 | null | react-native|tensorflow|tensorflow.js | ['https://www.tensorplayground.com/1.0.0/?code=//+TensorPlayground.com%0A//+RGB+Tensor%0A//+INPUT+TENSOR+SHAPE:+%5B622,1024,3%5D%0A//+MODEL:+Mobilenet+v2+Expects+%5Bbatch,+224,+224,+3%5D%0A%0A(tf,+aTensor,+model)+%3D%3E+%7B%0A++//+Black+Box%0A++const+blackBox+%3D+tf.zeros(%5B224,+224,+3%5D)%0A++const+blackBatch+%3D+blackBox.expandDims(0)%0A++const+result1+%3D+model.predict(blackBatch)%0A++console.log(%27predicted+from+zeros%27,+tf.topk(result1,+1).indices.dataSync())%0A++%0A++//+Black+box+with+a+dot%0A++const+arrayRep+%3D+blackBatch.arraySync()%0A++arrayRep%5B0%5D%5B0%5D%5B0%5D+%3D+%5B1,+1,+1%5D%0A++const+singleDotBox+%3D+tf.tensor4d(arrayRep)%0A++const+result2+%3D+model.predict(singleDotBox)%0A++console.log(%27predicted+from+zeros%27,+tf.topk(result1,+1).indices.dataSync())%0A++%0A++return+singleDotBox%0A%7D&inputTensor=bella&modelInfo=%7B%22value%22:%22mobilenetv2%22,%22label%22:%22Mobilenet+v2%22,%22info%22:%22Expects+%5Bbatch,+224,+224,+3%5D%22,%22link%22:%22https://arxiv.org/abs/1801.04381%22,%22url%22:%22https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/classification/2%22,%22fromTFHub%22:true,%22type%22:%22graph%22%7D'] | 1 |
60,147,852 | <p>Batch size is a hyperparameter like e.g. the learning rate. It is really hard to say what the perfect size is for your problem.
The problem you are mentioning might exist, but it is only really relevant in specific problems where you can't just use random sampling, such as face/person re-identification.</p>
<p>For "normal" problems random sampling is sufficient. The reason behind minibatch training is to get more stable training. You want your weight updates to go in the right direction with regard to the global minimum of the loss function for the whole dataset. A minibatch is an approximation of this.</p>
<p>With an increasing batch size you get fewer updates but "better" updates. With a small batch size you get more updates, but they will more often go in the wrong direction. If the batch size is too small (e.g. 1) the network might take a long time to converge and thus increase the training time. Too large a batch size can hurt the generalization of the network. A good paper about the topic is <a href="https://arxiv.org/abs/1609.04836" rel="nofollow noreferrer">On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
</a></p>
<p>Another interesting paper on the topic is: <a href="https://arxiv.org/abs/1711.00489" rel="nofollow noreferrer">Don't Decay the Learning Rate, Increase the Batch Size</a>. Which analyzes the effect of batch size on the training. In general learning rate and batch size have effects on each other.</p>
<p>In general batch size is more a factor for reducing training time, because you can make use of parallelism and have fewer weight updates with increasing batch size, plus more stability. As with everything, look at what others did for a task comparable to your problem, take it as a baseline and experiment with it a little. Also, with huge networks the available memory often limits the maximum batch size anyway.</p> | 2020-02-10 09:56:43.330000+00:00 | 2020-02-10 09:56:43.330000+00:00 | null | null | 60,142,351 | <p>I have a broad question, but it should still be relevant.
Let's say I am doing a 2-class image classification using a CNN. A batch size of 32-64 should be sufficient for training purposes. However, if I had data with about 13 classes, surely a batch size of 32 would not be sufficient for a good model, as each batch might get 2-3 images of each class. Is there a generic or approximate formula to determine the batch size for training? Or should that be determined as a hyperparameter using techniques like grid search or Bayesian methods?</p>
<p>sedy</p> | 2020-02-09 23:25:45.640000+00:00 | 2020-02-10 09:56:43.330000+00:00 | null | machine-learning | ['https://arxiv.org/abs/1609.04836', 'https://arxiv.org/abs/1711.00489'] | 2 |
49,990,868 | <p>There is no "easy" way to remove clues from a completed Sudoku grid as the removal process is not linear.</p>
<p>After each removal of a cell or clue you need to check if the Sudoku only has a unique solution.</p>
<p>To check this you need to run a solver that can count all possible solutions (you can stop it after 2 possibilities are found to save time).</p>
<p>The two most popular kinds of algorithm, used both for solving a Sudoku, counting all of its solutions, and removing cells, are backtracking algorithms and dancing links algorithms.</p>
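<p>A minimal sketch of that removal loop (Python here, but the idea is language-agnostic; <code>count_solutions</code> is a hypothetical backtracking/dancing-links solver that stops counting once it finds a second solution):</p>
<pre><code>import random

def make_puzzle(solution, count_solutions, max_removals=50):
    """Dig clues out of a solved 9x9 grid while the puzzle keeps a unique solution."""
    grid = [row[:] for row in solution]
    cells = [(r, c) for r in range(9) for c in range(9)]
    random.shuffle(cells)
    removed = 0
    for r, c in cells:
        if removed >= max_removals:
            break
        backup = grid[r][c]
        grid[r][c] = None
        if count_solutions(grid, limit=2) != 1:
            grid[r][c] = backup   # removal broke uniqueness: put the clue back
        else:
            removed += 1
    return grid
</code></pre>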
<p>This article explains really well how the dancing links algorithm can be used in Sudokus:
<a href="http://garethrees.org/2007/06/10/zendoku-generation/#section-2" rel="nofollow noreferrer">http://garethrees.org/2007/06/10/zendoku-generation/#section-2</a></p>
<p>Here is another description of a dancing link algorithm in Sudokus written in JavaScript:
<a href="http://www.sudokubum.com/documentation.html" rel="nofollow noreferrer">http://www.sudokubum.com/documentation.html</a> </p>
<p>And here is the full paper about dancing links algorithms in general:
<a href="http://lanl.arxiv.org/pdf/cs/0011047" rel="nofollow noreferrer">http://lanl.arxiv.org/pdf/cs/0011047</a> </p> | 2018-04-23 22:35:03.303000+00:00 | 2018-04-23 22:35:03.303000+00:00 | null | null | 14,858,994 | <p>I am writing a Sudoku application and am currently working on the game generation algorithm. I managed to figure out how to quickly generate a solution (not solve). I am stumped on how to remove some of the numbers to actually make it into a puzzle, though. My first inclination was to randomly remove a certain number of cells based on the difficulty, but that is not the correct algorithm, because it often renders a puzzle that is unsolvable or has multiple solutions. It also might generate puzzles that don't reflect the requested difficulty.</p>
<p>Here is the code that I have so far. I removed most of the irrelevant code, but if you would like to see something that isn't implemented but used below, please let me know. I can also provide my attempt at the <code>Puzzlefy</code> method if you would like, but I opted out of immediately posting it since it's blatantly wrong (even though it "works").</p>
<pre><code>using System;
using System.Collections.Generic;
using System.Linq;
namespace Sudoku
{
public class Game
{
public enum Difficulty
{
VeryEasy,
Easy,
Medium,
Difficult,
Evil
}
private readonly int?[,] _currentItems = new int?[9,9];
private readonly int?[,] _solution = new int?[9,9];
private readonly int?[,] _startingItems = new int?[9,9];
private readonly Difficulty _difficulty;
public Game(Difficulty difficulty)
{
_difficulty = difficulty;
GenerateSolution();
Puzzlefy();
}
private void GenerateSolution()
{
var random = new Random();
var availableNumbers = new Stack<List<int?>>(81);
var x = 0;
var y = 0;
availableNumbers.Push(AllowableNumbers(_solution, 0, 0).ToList());
while (x < 9 && y < 9)
{
var currentAvailableNumbers = AllowableNumbers(_solution, x, y).ToList();
availableNumbers.Push(currentAvailableNumbers);
// back trace if the board is in an invalid state
while (currentAvailableNumbers.Count == 0)
{
_solution[x, y] = null;
availableNumbers.Pop();
currentAvailableNumbers = availableNumbers.Peek();
x -= y >= 1 ? 0 : 1;
y = y >= 1 ? y - 1 : 8;
}
var index = random.Next(currentAvailableNumbers.Count);
_solution[x, y] = currentAvailableNumbers[index];
currentAvailableNumbers.RemoveAt(index);
x += y < 8 ? 0 : 1;
y = y < 8 ? y + 1 : 0;
}
}
private void Puzzlefy()
{
CopyCells(_solution, _startingItems);
// remove some stuff from _startingItems
CopyCells(_startingItems, _currentItems);
}
}
}
</code></pre>
<p>I am not looking for code, rather an algorithm. How would I go about removing the numbers from the solution to make it into a puzzle?</p> | 2013-02-13 17:08:09.643000+00:00 | 2018-04-23 22:35:03.303000+00:00 | 2013-02-13 17:14:53.803000+00:00 | c#|.net|algorithm|sudoku | ['http://garethrees.org/2007/06/10/zendoku-generation/#section-2', 'http://www.sudokubum.com/documentation.html', 'http://lanl.arxiv.org/pdf/cs/0011047'] | 3 |
43,106,498 | <blockquote>
<p>Can someone recommend to me a high-level scheme that allows me to generate strongly-connected, uniformly-distributed, random di-graphs?</p>
</blockquote>
<p>I had a similar problem generating expression trees for test data. I found that if you find out how to count unique trees then the problem becomes easy. What I mean by that is that I found for full binary trees with N internal nodes the number of unique trees based on N is the <a href="https://en.wikipedia.org/wiki/Catalan_number" rel="nofollow noreferrer">Catalan Numbers</a>. Then for binary trees that have unary branches with N total nodes the number of unique trees based on N is the <a href="https://en.wikipedia.org/wiki/Motzkin_number" rel="nofollow noreferrer">Motzkin Numbers</a>. </p>
<p>Then I found <a href="http://oeis.org/" rel="nofollow noreferrer">The On-Line Encyclopedia of Integer Sequences®</a>. So if you know a value, N, that can uniquely identify a graph, and you know the corresponding count of unique graphs for that N, then putting those counts into the OEIS search should get you back a page that will help you in your search, e.g. <a href="http://oeis.org/A000108" rel="nofollow noreferrer">Catalan Numbers</a> for full binary trees or <a href="http://oeis.org/A001006" rel="nofollow noreferrer">Motzkin Numbers</a> for regular binary trees. Along the way I found that one of the keys to generating them was a <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">recurrence relation</a>. </p>
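<p>For example, the Catalan counts can be produced from their recurrence and then pasted into the OEIS search box (a short sketch, not tied to any particular graph family):</p>
<pre><code># Sketch: Catalan numbers via the recurrence C(0) = 1, C(n+1) = sum_i C(i) * C(n-i).
def catalan(upto):
    c = [1]
    for n in range(upto):
        c.append(sum(c[i] * c[n - i] for i in range(n + 1)))
    return c

print(catalan(9))  # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862] -> search these on oeis.org
</code></pre>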
<p>Or you can use keywords in the search but this may not get an exact hit. I only found Motzkin numbers using the number sequence and not via keywords.</p>
<p>Here is an OEIS query for <a href="http://oeis.org/search?q=+strongly+connected+digraph+&sort=&language=&go=Search" rel="nofollow noreferrer">strongly connected digraph</a></p>
<p>Now, if you know the count for a given N, and you either generate all graphs for that N or have a one-to-one correspondence between a value and a graph, then you just generate random integers and get/generate the corresponding graph. If I understand your problem correctly, this should solve it. </p>
<p>My best guess to the OEIS sequence for this question is:</p>
<p>Number of acyclic digraphs with n unlabeled nodes. <a href="https://oeis.org/A003087" rel="nofollow noreferrer">A003087</a></p>
<p>Which has a reference to <a href="https://arxiv.org/pdf/1202.6590.pdf" rel="nofollow noreferrer">Uniform random generation of large acyclic digraphs</a></p>
<h2>TL;DR</h2>
<p>For some related history see my question: <a href="https://stackoverflow.com/q/42627198/1243762">Algorithm improvement for enumerating binary trees</a></p> | 2017-03-30 00:34:23.637000+00:00 | 2017-03-31 12:11:52.367000+00:00 | 2017-05-23 10:31:04.593000+00:00 | null | 29,631,508 | <p>So I've been building a program that uses Monte Carlo simulations to find properties of evolutionary graph theory. One of the key functions of this is to be able to generate uniformly-distributed random graphs, so that we can determine the generalised properties of graphs. For the case of connected undirected graphs I have implemented the solution outlined in <a href="https://stackoverflow.com/a/14618505">this</a> answer. </p>
<p>However for directed graphs, generating the one-directional uniform spanning tree you get from Wilson's algorithm doesn't ensure that the graph is strongly-connected, and it seems that adding extra edges to make the spanning tree bi-directional would introduce a bias into the graphs that you generate. </p>
<p>I feel like I might be missing something obvious/misunderstanding something, but essentially my request is, can someone recommend to me a high-level scheme that allows me to generate strongly-connected, uniformly-distributed, random di-graphs?</p> | 2015-04-14 15:32:42.067000+00:00 | 2018-06-02 15:03:55.237000+00:00 | 2018-06-02 15:03:55.237000+00:00 | algorithm|random|directed-graph|motzkin-numbers | ['https://en.wikipedia.org/wiki/Catalan_number', 'https://en.wikipedia.org/wiki/Motzkin_number', 'http://oeis.org/', 'http://oeis.org/A000108', 'http://oeis.org/A001006', 'https://en.wikipedia.org/wiki/Recurrence_relation', 'http://oeis.org/search?q=+strongly+connected+digraph+&sort=&language=&go=Search', 'https://oeis.org/A003087', 'https://arxiv.org/pdf/1202.6590.pdf', 'https://stackoverflow.com/q/42627198/1243762'] | 10 |
32,748,652 | <p>[This solution is similar to Louis Ricci's, except inverted to the Topics table - <em>which could make subscription updates less practical, be warned!</em>] </p>
<p><em>(The probabilistic data structure approach is cool, but unnecessary for your current data-size. I was initially looking at compressed bitsets for a non-probabilistic solution, as they are great at performing set operations in-memory, but I think that's overkill as well. <a href="http://arxiv.org/pdf/1402.6407.pdf" rel="nofollow">Here is a good implementation for this type of use-case</a>, if you're interested.)</em></p>
<p>But looking at the <em>sparsity</em> of your data, bitsets waste space over integer-arrays. And even with integer-arrays, the <code>union</code> operation is still pretty inexpensive given that <strong>you only have an average of 10,000 subscriptions per topic.</strong> </p>
<hr>
<p>So maybe, just maybe, a dead-simple data-structure given your use-case is simply:</p>
<pre><code>Topic 1 => [array of subscriber IDs]
Topic 2 => [array of subscriber IDs]
...
Topic 20,000 => [array of subscriber IDs]
</code></pre>
<p><em>Storing (avg) 10,000 subscriber IDs (assuming 32-bit integers) only requires about <strong>40kb</strong> of space per topic.</em> </p>
<p>[In an array-type or BLOB, depending on your database]</p>
<p><em>With 20,000 topics, this adds only 800mb of data to your topic table ... and very little of this (~200kb avg) needs to be loaded to memory when a notification event occurs!</em></p>
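<p>An in-memory sketch of the lookup itself (plain Python; <code>subscribers_by_topic</code> stands in for the per-topic ID arrays that would actually be pulled from the topic table):</p>
<pre><code># Sketch: de-duplicate subscribers across the topics hit by one event.
subscribers_by_topic = {
    1: [101, 102, 103],
    2: [102, 104],
    3: [105],
}

def users_to_notify(event_topics, subscribers_by_topic):
    notified = set()                      # the Set handles de-duplication
    for topic in event_topics:
        notified.update(subscribers_by_topic.get(topic, ()))
    return notified

print(users_to_notify([1, 2, 3], subscribers_by_topic))  # {101, 102, 103, 104, 105}
</code></pre>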
<hr>
<p>Then when an average event (affecting 5 topics) occurs, all that needs to happen is:</p>
<ol>
<li><p>Query / Pull the data for the relevant topics (avg 5 records) into memory
(<strong>avg ~200kb</strong> of I/O)</p></li>
<li><p>Dump them into a Set data structure (de-duplicates subscriber list)</p></li>
<li><p>Alert the subscribers in the Set.</p></li>
</ol> | 2015-09-23 20:13:16.390000+00:00 | 2015-09-23 20:13:16.390000+00:00 | null | null | 32,540,956 | <p>I have database with user subscriptions to topics.
There are currently about 20,000 topics,
20 mln users and 200 mln subscriptions stored in an SQL database.
Because of its size, the database is partitioned by topics,
so I can't get the info in one database query.
There are a couple of topics with 10 mln subscriptions, a couple with 100,000, and others have hundreds or fewer.</p>
<p>When an event occurs, it usually matches a couple of topics, so to inform users, I need to perform a query like "give me all users subscribed to topics x, y, z and perform a union of the sets", so that one user gets the news once even if he subscribed to both topics x and z.</p>
<p>The constraints are:</p>
<ul>
<li>There must be no duplicates in the union set. (users can't get the content twice)</li>
<li>There can be a bounded number of users missing from the union set. (If a user sometimes doesn't get the content, it is not that bad, but it can't always be the same user for the same topic.)</li>
<li>It is possible to subscribe to a new topic without rebuilding the whole thing.</li>
</ul>
<p>I thought about using a set of Bloom filters, one for every topic, but their constraints are the other way round: "user either not subscribed for sure or probably subscribed". I need something like "user subscribed for sure or probably not".</p>
<p>Lossy hash tables might be a good idea, but I am not sure if they can be as memory efficient as Bloom filters, and I am afraid that it would always be the same user missing the content in his topic.</p>
<p>Do you know any other data structures that may be good for solving this problem?</p> | 2015-09-12 16:13:03.453000+00:00 | 2015-10-01 17:43:52.140000+00:00 | null | algorithm|data-structures|set|storing-data | ['http://arxiv.org/pdf/1402.6407.pdf'] | 1 |
211,912 | <p>The main disadvantage I often hear about using blobs is that, above a certain size, the file system is much more efficient at storing and retrieving large files. It sounds like you've already taken this into account with your list of requirements.</p>
<p>There's a <a href="http://arxiv.org/ftp/cs/papers/0701/0701168.pdf" rel="noreferrer">good reference (PDF) here</a> that covers the pros and cons of blobs.</p> | 2008-10-17 12:16:01.973000+00:00 | 2008-10-17 12:22:20.180000+00:00 | 2008-10-17 12:22:20.180000+00:00 | null | 211,895 | <p>The requirements for my document management system were:</p>
<ol>
<li>Must be secure from theft by simple copying of directories, files etc.</li>
<li>Must be secure against traditional virus infection (infection of physical file)</li>
<li>Must be fast to retrieve</li>
<li>The repository must not be visible to casual (directory) browsing users etc.</li>
</ol>
<p>I have decided to store all documents (and scanned images) as blobs in the database and so far my experience is wonderful and document retrieval is blindingly fast as well - it meets all the criteria from above and there are even a couple of additional advantages, such as autostoring documents together with the entity they relate to, easy and fast searching of contents, removing all sorts of user activities around opening and naming of documents etc. etc.</p>
<p>My question is - are there any serious risks or things that I overlooked with this design and implementation?</p>
<p>EDIT Note: DB is PostgreSQL, handles BLOBS very well and scales exceptionally well. The environment is Multi-user.</p> | 2008-10-17 12:09:46.383000+00:00 | 2015-11-12 18:50:31.327000+00:00 | 2009-05-28 10:10:13.240000+00:00 | performance|security|document|blob|document-management | ['http://arxiv.org/ftp/cs/papers/0701/0701168.pdf'] | 1 |
65,611,051 | <p>Here is a good primer to start your research.
<a href="https://arxiv.org/abs/1907.09236" rel="nofollow noreferrer">https://arxiv.org/abs/1907.09236</a></p>
<p>There are many open-source implementations. Try some and ask on stackoverflow when you get stuck.</p> | 2021-01-07 10:56:33.937000+00:00 | 2021-01-07 10:56:33.937000+00:00 | null | null | 65,610,474 | <p>Can you please recommend papers/github or smth about object detection on RGB-D images (NOT 3d cloud points).The result should still be objects in rectangles in the 2d image, as in the usual methods for object detection like YOLO and others. All I can find is Silent Object detection methods, but it seems like not what I'm looking for.</p> | 2021-01-07 10:20:01.677000+00:00 | 2021-01-07 10:56:33.937000+00:00 | null | deep-learning|computer-vision | ['https://arxiv.org/abs/1907.09236'] | 1 |
62,178,375 | <p>Interpreting GAN losses is a bit of a black art, because the actual loss values by themselves say little about how good the generated samples are.</p>
<p><strong>Question 1:</strong> The frequency of swinging between a discriminator/generator dominance will vary based on a few factors primarily (in my experience): learning rates and batch sizes which will impact the propagated loss. The particular loss metrics used will impact variance in how the D & G networks train. The EnhanceNet paper (for baseline) and the tutorial use a Mean Squared Error loss too - you're using a Binary Cross Entropy loss which will change the rate at which the networks converge. I'm no expert so here's a pretty good <a href="https://rohanvarma.me/Loss-Functions/" rel="nofollow noreferrer">link to Rohan Varma's article that explains the difference between loss functions</a>. Would be curious to see if your network behaves differently when you change the loss function - try it and update the question?</p>
<p><strong>Question 2:</strong> Over time both the D and G losses <em>should</em> settle to a value, however it's somewhat difficult to tell whether they've converged on strong performance or whether they've converged due to something like mode collapse/diminishing gradients (<a href="https://medium.com/@jonathan_hui/gan-why-it-is-so-hard-to-train-generative-advisory-networks-819a86b3750b" rel="nofollow noreferrer">Jonathan Hui's explanation on problems in training GANs</a>). The best way I've found is to actually inspect a cross section of the generated images and either visually inspect the output or use some kind of perceptual metrics (SSIM, PSNR, PIQ, etc.) across the generated image set.</p>
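<p>For the perceptual-metric route, a minimal sketch (assuming scikit-image and two same-sized grayscale images scaled to [0, 1]; the random arrays below are only placeholders for a reference image and a generated one):</p>
<pre><code># Sketch: score a generated image against a reference with SSIM and PSNR.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

reference = np.random.rand(64, 64)
generated = np.clip(reference + 0.05 * np.random.randn(64, 64), 0, 1)

print('SSIM:', structural_similarity(reference, generated, data_range=1.0))
print('PSNR:', peak_signal_noise_ratio(reference, generated, data_range=1.0))
</code></pre>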
<p>Some other leads that you might find useful in finding an answer:</p>
<p><a href="https://stackoverflow.com/questions/42690721/how-to-interpret-the-discriminators-loss-and-the-generators-loss-in-generative">This post</a> has a couple of reasonably good pointers on interpreting GAN Losses.</p>
<p>Ian Goodfellow's <a href="https://arxiv.org/pdf/1701.00160.pdf" rel="nofollow noreferrer">NIPS2016 tutorial</a> also has some solid ideas on how to balance D & G training.</p> | 2020-06-03 16:55:28.440000+00:00 | 2020-06-03 16:55:28.440000+00:00 | null | null | 62,174,141 | <p>It's the first time I'm working with GANs and I am facing an issue regarding the Discriminator repeatedly outperforming the Generator. I am trying to reproduce the <code>PA</code> model from <a href="http://openaccess.thecvf.com/content_ICCV_2017/papers/Sajjadi_EnhanceNet_Single_Image_ICCV_2017_paper.pdf" rel="nofollow noreferrer">this article</a> and I'm looking at <a href="https://github.com/erikqu/EnhanceNet-PyTorch/blob/e4f1eb2a89497d9205463a0e9c54f334389d9405/train.py" rel="nofollow noreferrer">this slightly different implementation</a> to help me out.</p>
<p>I have read quite a lot of papers on how GANs work and also followed some tutorials to understand them better. Moreover, I've read articles on how to overcome the major instabilities, but I can't find a way to overcome this behavior.</p>
<p>In my environment, I'm using <code>PyTorch</code> and <code>BCELoss()</code>. Following the <a href="https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html" rel="nofollow noreferrer">DCGAN PyTorch tutorial</a>, I'm using the following training loop:</p>
<pre><code>criterion = nn.BCELoss()
train_d = False
# Discriminator true
optim_d.zero_grad()
disc_train_real = target.to(device)
batch_size = disc_train_real.size(0)
label = torch.full((batch_size,), 1, device=device).cuda()
output_d = discriminator(disc_train_real).view(-1)
loss_d_real = criterion(output_d, label).cuda()
if lossT:
loss_d_real *= 2
if loss_d_real.item() > 0.3:
loss_d_real.backward()
train_d = True
D_x = output_d.mean().item()
# Discriminator false
output_g = generator(image)
output_d = discriminator(output_g.detach()).view(-1)
label.fill_(0)
loss_d_fake = criterion(output_d, label).cuda()
D_G_z1 = output_d.mean().item()
if lossT:
loss_d_fake *= 2
loss_d = loss_d_real + loss_d_fake
if loss_d_fake.item() > 0.3:
loss_d_fake.backward()
train_d = True
if train_d:
optim_d.step()
# Generator
label.fill_(1)
output_d = discriminator(output_g).view(-1)
loss_g = criterion(output_d, label).cuda()
D_G_z2 = output_d.mean().item()
if lossT:
loss_g *= 2
loss_g.backward()
optim_g.step()
</code></pre>
<p>and, after a period of settlement, everything seems to work fine:</p>
<pre><code>Epoch 1/5 - Step: 1900/9338 Loss G: 3.057388 Loss D: 0.214545 D(x): 0.940985 D(G(z)): 0.114064 / 0.114064
Time for the last step: 51.55 s Epoch ETA: 01:04:13
Epoch 1/5 - Step: 2000/9338 Loss G: 2.984724 Loss D: 0.222931 D(x): 0.879338 D(G(z)): 0.159163 / 0.159163
Time for the last step: 52.68 s Epoch ETA: 01:03:24
Epoch 1/5 - Step: 2100/9338 Loss G: 2.824713 Loss D: 0.241953 D(x): 0.905837 D(G(z)): 0.110231 / 0.110231
Time for the last step: 50.91 s Epoch ETA: 01:02:29
Epoch 1/5 - Step: 2200/9338 Loss G: 2.807455 Loss D: 0.252808 D(x): 0.908131 D(G(z)): 0.218515 / 0.218515
Time for the last step: 51.72 s Epoch ETA: 01:01:37
Epoch 1/5 - Step: 2300/9338 Loss G: 2.470529 Loss D: 0.569696 D(x): 0.620966 D(G(z)): 0.512615 / 0.350175
Time for the last step: 51.96 s Epoch ETA: 01:00:46
Epoch 1/5 - Step: 2400/9338 Loss G: 2.148863 Loss D: 1.071563 D(x): 0.809529 D(G(z)): 0.114487 / 0.114487
Time for the last step: 51.59 s Epoch ETA: 00:59:53
Epoch 1/5 - Step: 2500/9338 Loss G: 2.016863 Loss D: 0.904711 D(x): 0.621433 D(G(z)): 0.440721 / 0.435932
Time for the last step: 52.03 s Epoch ETA: 00:59:02
Epoch 1/5 - Step: 2600/9338 Loss G: 2.495639 Loss D: 0.949308 D(x): 0.671085 D(G(z)): 0.557924 / 0.420826
Time for the last step: 52.66 s Epoch ETA: 00:58:12
Epoch 1/5 - Step: 2700/9338 Loss G: 2.519842 Loss D: 0.798667 D(x): 0.775738 D(G(z)): 0.246357 / 0.265839
Time for the last step: 51.20 s Epoch ETA: 00:57:19
Epoch 1/5 - Step: 2800/9338 Loss G: 2.545630 Loss D: 0.756449 D(x): 0.895455 D(G(z)): 0.403628 / 0.301851
Time for the last step: 51.88 s Epoch ETA: 00:56:27
Epoch 1/5 - Step: 2900/9338 Loss G: 2.458109 Loss D: 0.653513 D(x): 0.820105 D(G(z)): 0.379199 / 0.103250
Time for the last step: 53.50 s Epoch ETA: 00:55:39
Epoch 1/5 - Step: 3000/9338 Loss G: 2.030103 Loss D: 0.948208 D(x): 0.445385 D(G(z)): 0.303225 / 0.263652
Time for the last step: 51.57 s Epoch ETA: 00:54:47
Epoch 1/5 - Step: 3100/9338 Loss G: 1.721604 Loss D: 0.949721 D(x): 0.365646 D(G(z)): 0.090072 / 0.232912
Time for the last step: 52.19 s Epoch ETA: 00:53:55
Epoch 1/5 - Step: 3200/9338 Loss G: 1.438854 Loss D: 1.142182 D(x): 0.768163 D(G(z)): 0.321164 / 0.237878
Time for the last step: 50.79 s Epoch ETA: 00:53:01
Epoch 1/5 - Step: 3300/9338 Loss G: 1.924418 Loss D: 0.923860 D(x): 0.729981 D(G(z)): 0.354812 / 0.318090
Time for the last step: 52.59 s Epoch ETA: 00:52:11
</code></pre>
<p>that is, the gradients on the Generator are higher and start to decrease after a while, and in the meanwhile the gradients on the Discriminator rise up. As for the losses, the Generator goes down while the Discriminator goes up. If compared to the tutorial, I guess this can be acceptable.</p>
<p>Here's my <strong>first question</strong>: I've noticed that in the tutorial (usually) as <code>D_G_z1</code> rises, <code>D_G_z2</code> decreases (and vice versa), while in my example this happens a lot less. Is it just a coincidence or am I doing something wrong?</p>
<p>Given that, I've let the training procedure go on, but now I'm noticing this:</p>
<pre><code>Epoch 3/5 - Step: 1100/9338 Loss G: 4.071329 Loss D: 0.031608 D(x): 0.999969 D(G(z)): 0.024329 / 0.024329
Time for the last step: 51.41 s Epoch ETA: 01:11:24
Epoch 3/5 - Step: 1200/9338 Loss G: 3.883331 Loss D: 0.036354 D(x): 0.999993 D(G(z)): 0.043874 / 0.043874
Time for the last step: 51.63 s Epoch ETA: 01:10:29
Epoch 3/5 - Step: 1300/9338 Loss G: 3.468963 Loss D: 0.054542 D(x): 0.999972 D(G(z)): 0.050145 / 0.050145
Time for the last step: 52.47 s Epoch ETA: 01:09:40
Epoch 3/5 - Step: 1400/9338 Loss G: 3.504971 Loss D: 0.053683 D(x): 0.999972 D(G(z)): 0.052180 / 0.052180
Time for the last step: 50.75 s Epoch ETA: 01:08:41
Epoch 3/5 - Step: 1500/9338 Loss G: 3.437765 Loss D: 0.056286 D(x): 0.999941 D(G(z)): 0.058839 / 0.058839
Time for the last step: 52.20 s Epoch ETA: 01:07:50
Epoch 3/5 - Step: 1600/9338 Loss G: 3.369209 Loss D: 0.062133 D(x): 0.955688 D(G(z)): 0.058773 / 0.058773
Time for the last step: 51.05 s Epoch ETA: 01:06:54
Epoch 3/5 - Step: 1700/9338 Loss G: 3.290109 Loss D: 0.065704 D(x): 0.999975 D(G(z)): 0.056583 / 0.056583
Time for the last step: 51.27 s Epoch ETA: 01:06:00
Epoch 3/5 - Step: 1800/9338 Loss G: 3.286248 Loss D: 0.067969 D(x): 0.993238 D(G(z)): 0.063815 / 0.063815
Time for the last step: 52.28 s Epoch ETA: 01:05:09
Epoch 3/5 - Step: 1900/9338 Loss G: 3.263996 Loss D: 0.065335 D(x): 0.980270 D(G(z)): 0.037717 / 0.037717
Time for the last step: 51.59 s Epoch ETA: 01:04:16
Epoch 3/5 - Step: 2000/9338 Loss G: 3.293503 Loss D: 0.065291 D(x): 0.999873 D(G(z)): 0.070188 / 0.070188
Time for the last step: 51.85 s Epoch ETA: 01:03:25
Epoch 3/5 - Step: 2100/9338 Loss G: 3.184164 Loss D: 0.070931 D(x): 0.999971 D(G(z)): 0.059657 / 0.059657
Time for the last step: 52.14 s Epoch ETA: 01:02:34
Epoch 3/5 - Step: 2200/9338 Loss G: 3.116310 Loss D: 0.080597 D(x): 0.999850 D(G(z)): 0.074931 / 0.074931
Time for the last step: 51.85 s Epoch ETA: 01:01:42
Epoch 3/5 - Step: 2300/9338 Loss G: 3.142180 Loss D: 0.073999 D(x): 0.995546 D(G(z)): 0.054752 / 0.054752
Time for the last step: 51.76 s Epoch ETA: 01:00:50
Epoch 3/5 - Step: 2400/9338 Loss G: 3.185711 Loss D: 0.072601 D(x): 0.999992 D(G(z)): 0.076053 / 0.076053
Time for the last step: 50.53 s Epoch ETA: 00:59:54
Epoch 3/5 - Step: 2500/9338 Loss G: 3.027437 Loss D: 0.083906 D(x): 0.997390 D(G(z)): 0.082501 / 0.082501
Time for the last step: 52.06 s Epoch ETA: 00:59:03
Epoch 3/5 - Step: 2600/9338 Loss G: 3.052374 Loss D: 0.085030 D(x): 0.999924 D(G(z)): 0.073295 / 0.073295
Time for the last step: 52.37 s Epoch ETA: 00:58:12
</code></pre>
<p>not only has <code>D(x)</code> increased again and become stuck at almost one, but also both <code>D_G_z1</code> and <code>D_G_z2</code> always show the same value. Moreover, looking at the losses it seems pretty clear that the Discriminator has outperformed the Generator. This behavior went on and on for the rest of the epoch and for all of the next one, until the end of the training.</p>
<p>Hence my <strong>second question</strong>: is this normal? If not, what am I doing wrong within the procedure? How can I achieve a more stable training?</p>
<p><strong>EDIT:</strong> I've tried to train the network using the <code>MSELoss()</code> as suggested and this is the output:</p>
<pre><code>Epoch 1/1 - Step: 100/9338 Loss G: 0.800785 Loss D: 0.404525 D(x): 0.844653 D(G(z)): 0.030439 / 0.016316
Time for the last step: 55.22 s Epoch ETA: 01:25:01
Epoch 1/1 - Step: 200/9338 Loss G: 1.196659 Loss D: 0.014051 D(x): 0.999970 D(G(z)): 0.006543 / 0.006500
Time for the last step: 51.41 s Epoch ETA: 01:21:11
Epoch 1/1 - Step: 300/9338 Loss G: 1.197319 Loss D: 0.000806 D(x): 0.999431 D(G(z)): 0.004821 / 0.004724
Time for the last step: 51.79 s Epoch ETA: 01:19:32
Epoch 1/1 - Step: 400/9338 Loss G: 1.198960 Loss D: 0.000720 D(x): 0.999612 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.47 s Epoch ETA: 01:18:09
Epoch 1/1 - Step: 500/9338 Loss G: 1.212810 Loss D: 0.000021 D(x): 0.999938 D(G(z)): 0.000000 / 0.000000
Time for the last step: 52.18 s Epoch ETA: 01:17:11
Epoch 1/1 - Step: 600/9338 Loss G: 1.216168 Loss D: 0.000000 D(x): 0.999945 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.24 s Epoch ETA: 01:16:02
Epoch 1/1 - Step: 700/9338 Loss G: 1.212301 Loss D: 0.000000 D(x): 0.999970 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.61 s Epoch ETA: 01:15:02
Epoch 1/1 - Step: 800/9338 Loss G: 1.214397 Loss D: 0.000005 D(x): 0.999973 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.58 s Epoch ETA: 01:14:04
Epoch 1/1 - Step: 900/9338 Loss G: 1.212016 Loss D: 0.000003 D(x): 0.999932 D(G(z)): 0.000000 / 0.000000
Time for the last step: 52.20 s Epoch ETA: 01:13:13
Epoch 1/1 - Step: 1000/9338 Loss G: 1.215162 Loss D: 0.000000 D(x): 0.999988 D(G(z)): 0.000000 / 0.000000
Time for the last step: 52.28 s Epoch ETA: 01:12:23
Epoch 1/1 - Step: 1100/9338 Loss G: 1.216291 Loss D: 0.000000 D(x): 0.999983 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.78 s Epoch ETA: 01:11:28
Epoch 1/1 - Step: 1200/9338 Loss G: 1.215526 Loss D: 0.000000 D(x): 0.999978 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.88 s Epoch ETA: 01:10:35
</code></pre>
<p>As can be seen, the situation gets even worse. Moreover, reading the <a href="http://openaccess.thecvf.com/content_ICCV_2017/papers/Sajjadi_EnhanceNet_Single_Image_ICCV_2017_paper.pdf" rel="nofollow noreferrer">EnhanceNet paper</a> all over again, Section 4.2.4 (Adversarial Training) states that the adversarial loss function used is a <code>BCELoss()</code>, as I would expect to solve the vanishing gradients problem that I get with <code>MSELoss()</code>.</p> | 2020-06-03 13:33:31.003000+00:00 | 2020-06-04 06:06:16.890000+00:00 | 2020-06-04 06:06:16.890000+00:00 | python|machine-learning|deep-learning|pytorch|generative-adversarial-network | ['https://rohanvarma.me/Loss-Functions/', 'https://medium.com/@jonathan_hui/gan-why-it-is-so-hard-to-train-generative-advisory-networks-819a86b3750b', 'https://stackoverflow.com/questions/42690721/how-to-interpret-the-discriminators-loss-and-the-generators-loss-in-generative', 'https://arxiv.org/pdf/1701.00160.pdf'] | 4 |
59,517,070 | <p>It would be nice for you to explain a bit further what you have tried so far and give more details about what you want to do. So, I'm answering this based on the assumption that you want to work with emotion-based sentiment analysis.</p>
<p>In many cases, the problem is still treated as a multiclass classification problem, but instead of predicting sentiment polarity (positive, negative or neutral), people try to find emotions. The emotion sets vary across studies and annotated datasets, but in general they look like the ones you mentioned.</p>
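<p>As a concrete starting point, here is a minimal sketch of that multiclass setup (assuming scikit-learn; the tiny inline corpus is just a placeholder that any of the datasets listed below would replace):</p>
<pre><code># Sketch: treat emotion detection as ordinary multiclass text classification.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["I can't stop smiling today", "This is so unfair, I'm furious",
         "I miss her so much", "Wow, I did not see that coming"]
labels = ["happy", "angry", "sad", "surprised"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["Why would they do this to me, I'm so mad"]))
</code></pre>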
<p>Your best chance to understand this area further is to look for papers and existing datasets. I'll list a few here for you and the emotions they work with:</p>
<ul>
<li><p><a href="https://www.aclweb.org/anthology/C18-1179.pdf" rel="nofollow noreferrer">An Analysis of Annotated Corpora for Emotion Classification in Text</a>. Literature review of methods and corpus for such analysis.</p>
</li>
<li><p><a href="https://arxiv.org/pdf/1901.08458.pdf" rel="nofollow noreferrer">Emotion Detection and Analysis on Social Media</a>. Happiness, Sadness, Fear, Anger, Surprise and Disgust</p>
</li>
<li><p>This <a href="https://www.kaggle.com/c/sa-emotions/overview" rel="nofollow noreferrer">dataset</a> is a good source for training data. Sadness, Enthusiasm, Neutral, Worry, Love, Fun, Hate, Happiness.</p>
</li>
</ul> | 2019-12-29 05:28:57.933000+00:00 | 2021-01-13 08:27:13.297000+00:00 | 2021-01-13 08:27:13.297000+00:00 | null | 59,511,392 | <p>I have searched the internet and there is more or less the same sentiment analysis of a sentence i.e Positive, Negative or Neutral. I want to build a sentiment analyzer that look for the following sentiments/emotions for a sentence. </p>
<pre><code>happy , sad , angry , disaapointed , surprised, proud, in love, scared
</code></pre> | 2019-12-28 13:37:45.353000+00:00 | 2021-01-13 08:27:13.297000+00:00 | null | python-3.x|nlp|sentiment-analysis | ['https://www.aclweb.org/anthology/C18-1179.pdf', 'https://arxiv.org/pdf/1901.08458.pdf', 'https://www.kaggle.com/c/sa-emotions/overview'] | 3 |
68,980,183 | <p>Seems like your question relies on the assumption that SGD with Nesterov would definitely perform better than Adam. However, there is no learning algorithm that is better than another no matter what. You always have to check it given your model (layers, activation functions, loss, etc.) and dataset.</p>
<p>Are you increasing the number of epochs for SGD? Usually, SGD takes much longer to converge than Adam. Note that recent studies show that despite training faster, Adam generalizes worse to the validation and test datasets (<a href="https://arxiv.org/abs/1712.07628" rel="nofollow noreferrer">https://arxiv.org/abs/1712.07628</a>). An alternative to that is to start the optimization with Adam, and then after some epochs, change the optimizer to SGD.</p> | 2021-08-30 07:10:45.140000+00:00 | 2021-12-13 14:38:52.370000+00:00 | 2021-12-13 14:38:52.370000+00:00 | null | 68,978,614 | <p>I am running an image segmentation code on Pytorch, based on the architecture of Linknet.
The optimizer is initially set as:</p>
<pre><code>self.optimizer = torch.optim.Adam(params=self.net.parameters(), lr=lr)
</code></pre>
<p>Then I change it to Nesterov to improve the performance, like:</p>
<pre><code>self.optimizer = torch.optim.SGD(params=self.net.parameters(), lr=lr, momentum=0.9, nesterov=True)
</code></pre>
<p>However, the performance is poorer using Nesterov. When I use Adam the loss function can converge to 0.19. But the loss function can only converge to 0.34 when I use Nesterov.</p>
<p>By the way, the learning rate is divided by 5 if there is no decrease in loss for 3 consecutive epochs, and the lr can be adjusted 3 times. After that, the training process finishes.</p>
<p>I am wondering why this happens and what I should do to optimize it. Thanks a lot for the replies :)</p> | 2021-08-30 03:22:01.183000+00:00 | 2021-12-13 14:38:52.370000+00:00 | null | optimization|deep-learning|pytorch | ['https://arxiv.org/abs/1712.07628'] | 1 |
57,252,898 | <h3>TL;DR</h3>
<p>Your input data is not normalized.</p>
<ol>
<li>use <code>x_data = (x_data - x_data.mean()) / x_data.std() </code></li>
<li>increase the learning rate <code>optimizer = torch.optim.Adam(model.parameters(), lr=0.01)</code></li>
</ol>
<p>You'll get<br />
<a href="https://i.stack.imgur.com/vt4VK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vt4VK.png" alt="enter image description here" /></a></p>
<p>convergence in only 1000 iterations.</p>
<h3>More details</h3>
<p>The key difference between the two examples you have is that the data <code>x</code> in the first example is centered around (0, 0) and has very low variance.<br />
On the other hand, the data in the second example is centered around 92 and has relatively large variance.</p>
<p>This initial bias in the data is not taken into account when you randomly <a href="https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html#Linear" rel="noreferrer">initialize the weights</a> which is done based on the assumption that the inputs are roughly normally distributed around <em>zero</em>.<br />
It is almost impossible for the optimization process to compensate for this gross deviation - thus the model gets stuck in a sub-optimal solution.</p>
<p>Once you normalize the inputs, by subtracting the mean and dividing by the std, the optimization process becomes stable again and rapidly converges to a good solution.</p>
<p>For more details about input normalization and weights initialization, you can read section 2.2 in <em>He et al</em> <a href="https://arxiv.org/abs/1502.01852" rel="noreferrer"><strong>Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification</strong></a> (ICCV 2015).</p>
<h3>What if I cannot normalize the data?</h3>
<p>If, for some reason, you cannot compute mean and std data in advance, you can still use <a href="https://pytorch.org/docs/stable/nn.html#batchnorm1d" rel="noreferrer"><code>nn.BatchNorm1d</code></a> to estimate and normalize the data as part of the training process. For example</p>
<pre class="lang-py prettyprint-override"><code>class Model(nn.Module):
def __init__(self, input_size, H1, output_size):
super().__init__()
self.bn = nn.BatchNorm1d(input_size) # adding batchnorm
self.linear = nn.Linear(input_size, H1)
self.linear2 = nn.Linear(H1, output_size)
def forward(self, x):
x = torch.sigmoid(self.linear(self.bn(x))) # batchnorm the input x
x = torch.sigmoid(self.linear2(x))
return x
</code></pre>
<p>This modification, <em>without</em> any change to the input data, yields similar convergence after only 1000 epochs:<br />
<a href="https://i.stack.imgur.com/Q9XRw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Q9XRw.png" alt="enter image description here" /></a></p>
<h3>A minor comment</h3>
<p>For numerical stability, it is better to use <a href="https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss" rel="noreferrer"><code>nn.BCEWithLogitsLoss</code></a> instead of <a href="https://pytorch.org/docs/stable/nn.html#bceloss" rel="noreferrer"><code>nn.BCELoss</code></a>. For this end, you need to remove the <code>torch.sigmoid</code> from the <code>forward()</code> output, the <code>sigmoid</code> will be computed inside the loss.<br />
See, for example, <a href="https://stackoverflow.com/q/57253841/1714410">this thread</a> regarding the related sigmoid + cross entropy loss for binary predictions.</p> | 2019-07-29 11:28:39.460000+00:00 | 2019-08-04 04:56:15.120000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 57,161,576 | <p>To get to grips with PyTorch (and deep learning in general) I started by working through some basic classification examples. One such example was classifying a non-linear dataset created using sklearn (full code available as notebook <a href="https://github.com/philipobrien/colab-notebooks/blob/master/Non_Linear_Data_Classification.ipynb" rel="noreferrer">here</a>)</p>
<pre><code>n_pts = 500
X, y = datasets.make_circles(n_samples=n_pts, random_state=123, noise=0.1, factor=0.2)
x_data = torch.FloatTensor(X)
y_data = torch.FloatTensor(y.reshape(500, 1))
</code></pre>
<p><a href="https://i.stack.imgur.com/w20Tb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/w20Tb.png" alt="enter image description here"></a></p>
<p>This is then accurately classified using a pretty basic neural net</p>
<pre><code>class Model(nn.Module):
def __init__(self, input_size, H1, output_size):
super().__init__()
self.linear = nn.Linear(input_size, H1)
self.linear2 = nn.Linear(H1, output_size)
def forward(self, x):
x = torch.sigmoid(self.linear(x))
x = torch.sigmoid(self.linear2(x))
return x
def predict(self, x):
pred = self.forward(x)
if pred >= 0.5:
return 1
else:
return 0
</code></pre>
<p>As I have an interest in health data I then decided to try and use the same network structure to classify some a basic real-world dataset. I took heart rate data for one patient from <a href="https://physionet.org/physiobank/database/bidmc/" rel="noreferrer">here</a>, and altered it so all values > 91 would be labelled as anomalies (e.g. a <code>1</code> and everything <= 91 labelled a <code>0</code>). This is completely arbitrary, but I just wanted to see how the classification would work. The complete notebook for this example is <a href="https://github.com/philipobrien/colab-notebooks/blob/master/Basic_Heart_rate_Classification.ipynb" rel="noreferrer">here</a>.</p>
<p><a href="https://i.stack.imgur.com/6izzI.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6izzI.png" alt="enter image description here"></a></p>
<p>What is not intuitive to me is why the first example reaches <strong>a loss of 0.0016 after 1,000 epochs</strong>, whereas the second example only reaches <strong>a loss of 0.4296 after 10,000 epochs</strong></p>
<p><a href="https://i.stack.imgur.com/rDag4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rDag4.png" alt="Training Loss for Example 1"></a></p>
<p><a href="https://i.stack.imgur.com/PMubT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PMubT.png" alt="Training Loss for Heart Rate Example"></a></p>
<p>Perhaps I am being naive in thinking that the heart rate example would be much easier to classify. Any insights to help me understand why this is not what I am seeing would be great!</p> | 2019-07-23 10:03:48.617000+00:00 | 2019-08-04 04:56:15.120000+00:00 | null | python|machine-learning|deep-learning|artificial-intelligence|pytorch | ['https://i.stack.imgur.com/vt4VK.png', 'https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html#Linear', 'https://arxiv.org/abs/1502.01852', 'https://pytorch.org/docs/stable/nn.html#batchnorm1d', 'https://i.stack.imgur.com/Q9XRw.png', 'https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss', 'https://pytorch.org/docs/stable/nn.html#bceloss', 'https://stackoverflow.com/q/57253841/1714410'] | 8 |
52,279,560 | <p>If you are looking to do optimization or sensitivity analysis on NetLogo models from Python you might want to consider using <strong><a href="https://github.com/chathika/NL4Py" rel="nofollow noreferrer">NL4Py</a></strong>. NL4Py is a Python package that lets you control NetLogo models from Python. The <a href="https://pypi.org/project/NL4Py/" rel="nofollow noreferrer">package is released on PyPI.org</a> and you can get started with a simple <code>pip install nl4py</code>.</p>
<p><a href="https://arxiv.org/pdf/1808.03292.pdf" rel="nofollow noreferrer">Article Preprint</a></p>
<p><a href="https://github.com/chathika/NL4Py" rel="nofollow noreferrer">Repository and Documentation</a></p>
<p><a href="https://github.com/chathika/NL4Py/blob/master/examples/ParameterCalibrationWithDEAP.ipynb" rel="nofollow noreferrer">Example of model calibration with NL4Py and DEAP</a></p>
<p><a href="https://github.com/chathika/NL4Py/blob/master/examples/SensitivityAnalysis.ipynb" rel="nofollow noreferrer">Example of sensitivity analysis with NL4Py and SALib</a></p> | 2018-09-11 15:35:43.883000+00:00 | 2018-09-11 15:35:43.883000+00:00 | null | null | 25,536,270 | <p>NetLogo is excellent for agent-based modeling...except for the language. I always find myself contorting my brain trying to figure out how to do something that <em>should</em> be simple to code (such as implementing a simple case statement) in NetLogo's Logo implementation. Logo is just not a programmer's language (apologies to those infuriated by this assertion).</p>
<p>I saw Abe Gong's Tengolo project that purported to do just this (<a href="http://compsocsci.blogspot.com/2012/02/announcing-tengolo-python-alternative.html" rel="noreferrer">http://compsocsci.blogspot.com/2012/02/announcing-tengolo-python-alternative.html</a>) but the project appears to have been abandoned. Also another question in stack overflow (<a href="https://stackoverflow.com/questions/4905873/agent-based-simulation-performance-issue-python-vs-netlogo-repast">agent-based simulation: performance issue: Python vs NetLogo & Repast</a>) seems to indicate that Python would be slower. </p>
<p>Seems like it would be quite possible to use Jython to compile into modules that NetLogo could use, but I wondered if anyone is aware of something that would let me do NetLogo simulations in a sensible language like Python. Thoughts?</p> | 2014-08-27 20:21:34.193000+00:00 | 2021-01-04 19:56:38.957000+00:00 | 2020-08-17 16:59:26.110000+00:00 | python|netlogo|agent-based-modeling | ['https://github.com/chathika/NL4Py', 'https://pypi.org/project/NL4Py/', 'https://arxiv.org/pdf/1808.03292.pdf', 'https://github.com/chathika/NL4Py', 'https://github.com/chathika/NL4Py/blob/master/examples/ParameterCalibrationWithDEAP.ipynb', 'https://github.com/chathika/NL4Py/blob/master/examples/SensitivityAnalysis.ipynb'] | 6 |
5,979,671 | <p>You can also see <a href="http://arxiv.org/pdf/cond-mat/0308217" rel="nofollow">Finding and evaluating community structure in networks</a> by Newman and Girvan, where they propose an approach for evaluating communities in networks (and a set of algorithms based on this approach) and a measure of the quality of a network's division into communities (graph modularity). </p> | 2011-05-12 14:35:52.543000+00:00 | 2011-05-12 14:35:52.543000+00:00 | null | null | 84,820 | <p>Are there any algorithms that can help with hierarchical clustering?
Google's map-reduce has only an example of k-clustering. In case of hierarchical clustering, I'm not sure how it's possible to divide the work between nodes.
Other resource that I found is: <a href="http://issues.apache.org/jira/browse/MAHOUT-19" rel="noreferrer">http://issues.apache.org/jira/browse/MAHOUT-19</a>
But it's not apparent, which algorithms are used.</p> | 2008-09-17 16:00:53.167000+00:00 | 2014-09-04 12:54:24.027000+00:00 | 2008-09-19 14:57:46.537000+00:00 | algorithm|cluster-analysis|hierarchical-clustering | ['http://arxiv.org/pdf/cond-mat/0308217'] | 1 |
54,183,323 | <p>According to Donald Knuth, it's the same.
Here is the link to his paper about the Dancing Links algorithm, which is used to solve such "non-tree" problems as N-queens and Sudoku solving.</p>
<blockquote>
<p><a href="https://arxiv.org/pdf/cs/0011047.pdf" rel="noreferrer">Backtracking, also called depth-first search</a></p>
</blockquote> | 2019-01-14 14:23:38.797000+00:00 | 2019-01-14 14:23:38.797000+00:00 | null | null | 1,294,720 | <p>What's the difference between backtracking and depth first search?</p> | 2009-08-18 15:40:27.437000+00:00 | 2022-02-23 04:28:40.507000+00:00 | null | algorithm | ['https://arxiv.org/pdf/cs/0011047.pdf'] | 1 |
64,071,909 | <p>My advice is to train your model using either of these two techniques:</p>
<ul>
<li><strong>Metric Learning:</strong> use siamese or triplet networks to compare images with each other. If your dataset is small it is a good option. <a href="https://arxiv.org/abs/1503.03832" rel="nofollow noreferrer">https://arxiv.org/abs/1503.03832</a></li>
<li><strong>Use center loss + softmax:</strong> Present the problem as a classification and train. <a href="https://ydwen.github.io/papers/WenECCV16.pdf" rel="nofollow noreferrer">https://ydwen.github.io/papers/WenECCV16.pdf</a></li>
</ul>
<p>Finally, you should keep in mind that you are not going to keep that model as a classifier, but are only going to use it as a feature extractor. In other words, the model will be very good at extracting facial information, not at classifying it (that will not be its task).</p>
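<p>As a rough idea of what "only using its features" can look like in practice, here is a sketch of the distance-based option below (plain NumPy; <code>identify</code> and the <code>gallery</code> of stored embeddings are hypothetical stand-ins around your trained feature extractor):</p>
<pre><code># Sketch: nearest-neighbour face matching on frozen embeddings via cosine similarity.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_embedding, gallery, threshold=0.5):
    """gallery: dict mapping person name -> stored embedding (np.ndarray)."""
    name, score = max(((n, cosine_similarity(query_embedding, e))
                       for n, e in gallery.items()), key=lambda t: t[1])
    return name if score >= threshold else "unknown"
</code></pre>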
<p>Later you fix that model and you can do different things:</p>
<ul>
<li><p><strong>Directly use a distance</strong> such as the cosine distance to identify the closest face with respect to a dataset (it works if you don't require extremely high accuracy). You directly use KNN between the features of your face and your entire database and take the closest face as the recognition result.</p>
</li>
<li><p>You train an additional model that uses the features of the previous one. You can use a neural network again, but in this case my suggestion is to use an SVM (since SVM variants such as LaSVM approximate online learning). Thus, you only have to train a small classifier every so often. <a href="https://machinelearningmastery.com/how-to-develop-a-face-recognition-system-using-facenet-in-keras-and-an-svm-classifier/" rel="nofollow noreferrer">https://machinelearningmastery.com/how-to-develop-a-face-recognition-system-using-facenet-in-keras-and-an-svm-classifier/</a></p>
</li>
</ul> | 2020-09-25 22:13:37.340000+00:00 | 2020-09-25 22:13:37.340000+00:00 | null | null | 64,055,539 | <p>I am trying to implement incremental/online learning for a face recognition application. I've trained a model on a dataset and it works perfectly fine, however, I need to capture new faces(classes) over time and add them to the existing dataset. Is there any way that I can train the model with new classes without retraining from the scratch?</p>
<p>I've not found any rich resources so far and really appreciated if anyone can point me out somewhere.</p> | 2020-09-24 23:03:35.257000+00:00 | 2020-09-26 08:32:53.813000+00:00 | 2020-09-26 08:32:53.813000+00:00 | python|deep-learning|face-recognition|online-machine-learning | ['https://arxiv.org/abs/1503.03832', 'https://ydwen.github.io/papers/WenECCV16.pdf', 'https://machinelearningmastery.com/how-to-develop-a-face-recognition-system-using-facenet-in-keras-and-an-svm-classifier/'] | 3 |
71,946,833 | <p>You might find the following analogy helpful:</p>
<blockquote>
<p>Bloom filter is to <em>set</em> as Bloomier filter is to <em>map</em>.</p>
</blockquote>
<p>A Bloom filter can be thought of as a way of storing a compressed version of a set of items in significantly less space than the original set would normally take. The tradeoff involved is that you might have some false positives - items that aren't actually in the set but which the Bloom filter says are in the set.</p>
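<p>To make the set analogy concrete, here is a toy sketch of a Bloom filter (Python, deliberately simplified to a small bit array and a handful of hash functions):</p>
<pre><code># Toy Bloom filter: membership tests may give false positives, never false negatives.
import hashlib

class TinyBloom:
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        return all(self.bits[p] for p in self._positions(item))

bf = TinyBloom()
bf.add("alice")
print(bf.might_contain("alice"))  # True
print(bf.might_contain("bob"))    # False (or, rarely, a false positive)
</code></pre>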
<p>A <em><strong>Bloomier</strong></em> filter can be thought of as a way of storing a compressed version of a map (or associative array, or dictionary, depending on what terminology you're using). That is, it stores a compressed representation of an association from keys to values. The advantage of the Bloomier filter is that it uses significantly less space than what would normally be required to store the map. The drawback is that the Bloomier filter can have false positives - if you look something up that isn't in the map, the Bloomier filter might accidentally report that it's there and give back a nonsense associated value. (This is a bit different from what you described in your question. In your question, you characterized a Bloomier filter as a Bloom filter that can also hand back the items stored in the filter.)</p>
<p>So to directly answer your question, yes, you absolutely could use a hash table instead of a Bloomier filter. However, doing so would use considerably more space. In most cases, we don't care about that space, but there are applications where space is at a premium and reducing storage would be valuable. As an example, check out the paper <a href="https://arxiv.org/abs/1711.04686" rel="nofollow noreferrer"><em>Weightless: Lossy Weight Encoding For Deep Neural Network Compression</em></a>, which uses Bloomier filters to reduce the storage space needed to store large neural networks without too much of a decrease in the quality of the network.</p>
<p>As for how a Bloomier filter works, it's based on a technique that's closely related to that of the <a href="https://stackoverflow.com/questions/67527507/what-is-an-xor-filter">XOR filter</a>, and the linked question gives a rough overview of the key techniques.</p> | 2022-04-20 22:27:55.603000+00:00 | 2022-04-20 22:27:55.603000+00:00 | null | null | 30,137,855 | <p>This question is about the <em><strong>Bloomier</strong></em> filter, which is not the same as a standard Bloom filter.</p>
<p>I'm learning about the Bloomier filter and I don't see the advantage of using one. As far as I'm concerned, a Bloomier filter is a generalization of a Bloom filter. It can return the specific items themselves.</p>
<p>However, you could accomplish this by simply using hash tables and they seems faster and more space-efficient.</p>
<p>Given this, what's the purpose of a Bloomier filter?</p> | 2015-05-09 08:39:33.200000+00:00 | 2022-04-21 14:16:06.723000+00:00 | 2022-04-20 22:29:38.743000+00:00 | data-structures|bloom-filter | ['https://arxiv.org/abs/1711.04686', 'https://stackoverflow.com/questions/67527507/what-is-an-xor-filter'] | 2 |
1,160,433 | <p>I would also recommend that you look for examples of having multiple readers and only 1 writer in a critical section; here's a short paper with a good solution: <a href="http://arxiv.org/PS_cache/cs/pdf/0303/0303005v1.pdf" rel="nofollow noreferrer">http://arxiv.org/PS_cache/cs/pdf/0303/0303005v1.pdf</a></p>
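<p>As a very small illustration of the single-writer side of that idea (a Python/Unix sketch using an advisory <code>fcntl</code> lock; it assumes every writer cooperates by taking the same lock and leaves the multiple-reader part out):</p>
<pre><code># Sketch: serialize writers with an advisory exclusive lock (Unix-only, cooperative).
import fcntl

def append_line(path, line):
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until no other writer holds the lock
        try:
            f.write(line + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
</code></pre>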
<p>Alternatively you could look at creating copies of the file each time it is requested and when it comes time to write any changes to the file you merge with the original file.</p> | 2009-07-21 16:59:51.210000+00:00 | 2009-07-21 16:59:51.210000+00:00 | null | null | 1,160,233 | <p>how to write to a text file that can be accessed by multiple sources (possibly in a concurrent way) ensuring that no write operation gets lost?</p>
<p>Like, if two different processes are writing to the file at the same moment, this can lead to problems. The simplest solution (not very fast and not very elegant) would be locking the file when beginning the process (create a .lock file or similar) and releasing it (delete the lock) when the writing is done.</p>
<p>When beginning to write, I would check if the .lock file exists and delay the writing till the file is released.</p>
<p>What is the recommended pattern to follow for this kind of situation?</p>
<p>Thanks</p>
<p><strong>EDIT</strong>
I mean processes, like different programs from different clients, different users and so on, not threads within the same program</p> | 2009-07-21 16:18:59.793000+00:00 | 2018-12-05 08:40:16.213000+00:00 | 2009-07-21 17:12:35.647000+00:00 | c#|logging|concurrency | ['http://arxiv.org/PS_cache/cs/pdf/0303/0303005v1.pdf'] | 1 |
56,318,256 | <p>You can do it by sharing some (or all) layers of their networks. If you do, however, you are assuming that there is a common state representation (the intermediate layer output) that is optimal w.r.t. both. This is a very strong assumption and it usually doesn't hold. It has been shown to work for learning from images, where you put (for instance) an autoencoder on top of both the actor and the critic network and train it using the sum of their loss functions.</p>
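<p>For illustration, a minimal <code>tf.keras</code> sketch of what "sharing layers" means here (this is my own sketch, not from the papers; the input shape, layer sizes and <code>n_actions</code> are all assumptions):</p>
<pre><code>import tensorflow as tf

n_actions = 4                                   # assumption
obs = tf.keras.Input(shape=(84, 84, 4))         # e.g. stacked frames (assumption)

# shared "state representation" trunk
x = tf.keras.layers.Conv2D(32, 8, strides=4, activation="relu")(obs)
x = tf.keras.layers.Conv2D(64, 4, strides=2, activation="relu")(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(256, activation="relu")(x)

policy_logits = tf.keras.layers.Dense(n_actions, name="actor")(x)   # actor head
value = tf.keras.layers.Dense(1, name="critic")(x)                  # critic head

model = tf.keras.Model(obs, [policy_logits, value])
# training minimizes e.g. policy_loss + c * value_loss, so gradients from both
# heads flow into (and must agree on) the shared trunk
</code></pre>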
<p>This is mentioned in the <a href="https://arxiv.org/pdf/1707.06347.pdf" rel="noreferrer">PPO paper</a> (just before Eq. (9)). However, they just say that they share layers only for learning Atari games, not for continuous control problems. They don't say why, but this can be explained as I said above: Atari games have a low-dimensional state representation that is optimal for both the actor and the critic (e.g., the encoded image learned by an autoencoder), while for continuous control you usually pass a low-dimensional state directly (coordinates, velocities, ...).</p>
<p>A3C, which you mentioned, was also used mostly for games (Doom, I think).</p>
<p>From my experience, in control sharing layers never worked if the state is already compact.</p> | 2019-05-26 23:57:09.293000+00:00 | 2019-05-27 12:04:29.403000+00:00 | 2019-05-27 12:04:29.403000+00:00 | null | 56,312,962 | <p>Hello StackOverflow Community!</p>
<p>I have a question about Actor-Critic Models in Reinforcement Learning.</p>
<p>While listening to the policy gradient methods classes from Berkeley University, it is said in the lecture that in actor-critic algorithms, where we optimize both our policy with some policy parameters and our value function with some value function parameters, some algorithms (e.g. A2C/A3C) use the same parameters in both optimization problems (i.e. policy parameters = value function parameters).</p>
<p>I could not understand how this works. I was thinking that we should optimize them separately. How does this shared parameter solution help us?</p>
<p>Thanks in advance :)</p> | 2019-05-26 11:09:35.787000+00:00 | 2019-05-27 12:04:29.403000+00:00 | null | reinforcement-learning | ['https://arxiv.org/pdf/1707.06347.pdf'] | 1 |
52,802,576 | <p>Randomness can be handled by replacing the single average-reward output with a distribution over possible values. By introducing a new learning rule, reflecting the transition from Bellman's (average) equation to its distributional counterpart, <a href="https://arxiv.org/pdf/1707.06887.pdf" rel="nofollow noreferrer">the value distribution approach</a> has been able to surpass the performance of all other comparable approaches.</p>
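<p>A rough <code>tf.keras</code> sketch of that idea (a C51-style categorical value head; all sizes and the support are my own assumptions, not taken from the paper):</p>
<pre><code>import tensorflow as tf

n_atoms = 51                                  # assumption
atoms = tf.linspace(-10.0, 10.0, n_atoms)     # support of the value distribution (assumption)

state = tf.keras.Input(shape=(4,))            # toy state size (assumption)
h = tf.keras.layers.Dense(64, activation="relu")(state)
probs = tf.keras.layers.Dense(n_atoms, activation="softmax")(h)   # distribution over atoms

expected_value = tf.reduce_sum(probs * atoms, axis=-1)   # the usual "average" is recovered if needed
model = tf.keras.Model(state, probs)
</code></pre>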
<p><a href="https://www.deepmind.com/blog/going-beyond-average-for-reinforcement-learning" rel="nofollow noreferrer">https://www.deepmind.com/blog/going-beyond-average-for-reinforcement-learning</a></p> | 2018-10-14 12:20:22.700000+00:00 | 2022-04-09 21:33:52.243000+00:00 | 2022-04-09 21:33:52.243000+00:00 | null | 52,744,919 | <p>I have a <strong>fundamental</strong> question on the applicability of reinforcement learning (RL) on a problem we are trying to solve.</p>
<p>We are trying to use RL for inventory management - where the demand is entirely <strong>random</strong> (it probably has a pattern in real life but for now let us assume that we have been forced to treat as purely random). </p>
<p>As I understand, RL can help learn how to play a game (say chess) or help a robot learn to walk. <em>But</em> all games have <strong>rules</strong> and so does the ‘cart-pole’ (of OpenAI Gym) – there are rules of ‘physics’ that govern when the cart-pole will tip and fall over. </p>
<p>For our problem there are no rules – the environment changes randomly (demand made for the product).</p>
<p>Is RL really applicable to such situations?</p>
<p>If it does - then what will improve the performance?</p>
<p>Further details:
- The only two stimuli available from the ‘environment’ are the currently available level of product 'X' and the current demand 'Y'
- And the ‘action’ is binary - do I order a quantity 'Q' to refill or do I not (discrete action space).
- We are using DQN and an Adam optimizer.</p>
<p>Our results are poor - I admit I have trained only for about 5,000 or 10,000 - should I let it train on for days because it is a random environment?</p>
<p>thank you
Rajesh</p> | 2018-10-10 16:32:27.653000+00:00 | 2022-04-09 21:33:52.243000+00:00 | 2018-10-11 16:49:21.533000+00:00 | machine-learning|reinforcement-learning | ['https://arxiv.org/pdf/1707.06887.pdf', 'https://www.deepmind.com/blog/going-beyond-average-for-reinforcement-learning'] | 2 |
50,855,329 | <p>This is a really interesting problem —and fundamental to how you implement data analytics at a scale where failures are inevitable.</p>
<h2>Hadoop V1 Algorithm</h2>
<p>HDFS: <code>O(1)</code> to commit a task, resilient to failure in task commit. Job commit is ~<code>O(files)</code> with lots of files; if it fails partway through, output status unknown.</p>
<p>S3: <code>O(data)</code> to commit task, very slow to commit job (<code>O(data)</code> for whole job's output). Lack of atomic rename potentially dangerous.</p>
<h2>Hadoop V2 commit algorithm</h2>
<p>HDFS: <code>O(files)</code> to commit a task, can't handle failure. Job commit is an O(1) <code>touch _SUCCESS</code> call.
S3: <code>O(data)</code> to commit a task, can't handle failure, and with a longer COPY operation to commit, chance of task commit failure higher.</p>
<p>I don't personally think the failure semantics of the V2 algorithm work; both MapReduce and Spark assume a task which fails during the commit process can be repeated...this does not hold here.</p>
<p>There are some extra details which you don't want to know about, like how the drivers conclude a task has failed, or how MapReduce decides that it has been partitioned from YARN and so must not commit, but generally it all comes down to heartbeats and the assumption that once a task has timed out it's not going to resurface. If you are implementing a commit algorithm yourself, make sure that a task committer which has hung until after the entire job has committed <em>will not affect the output</em>.</p>
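<p>For reference, which of the two classic algorithms a job uses is controlled by a Hadoop property; a minimal PySpark sketch (the property name is the standard Hadoop one, everything else here is illustrative):</p>
<pre><code>from pyspark.sql import SparkSession

# select the MapReduce file output committer algorithm (1 = V1, 2 = V2)
spark = (SparkSession.builder
         .appName("committer-demo")   # illustrative
         .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "1")
         .getOrCreate())
</code></pre>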
<h2>For object stores:</h2>
<ul>
<li>Databricks DBIO. Not seen the code, sounds like they use DynamoDB for the XAs.</li>
<li>IBM Stocator: read the paper, <a href="https://arxiv.org/abs/1709.01812" rel="noreferrer">
Stocator: A High Performance Object Store Connector for Spark</a>. Focus is on minimising HTTP requests and being able to roll back from failed job/task commits.</li>
<li>Hadoop 3.1's S3A Committers, read: <a href="https://github.com/steveloughran/zero-rename-committer/releases/download/tag_draft_003/a_zero_rename_committer.pdf" rel="noreferrer">A Zero Rename Committer</a>. Time to commit a task depends on which committer is chosen; at worst, time to upload data from VM to S3. Task failures recoverable. Job commit: one HTTP POST per file created, parallelizable, so <code>O(files/threads)</code>. Failure during job commit not recoverable.</li>
</ul>
<p>To round things off: Azure and Google cloud stores do have directory renames, though they are usually O(files), not O(1) - but at least not O(data), like S3. You can safely use the Hadoop V1 committer.</p> | 2018-06-14 10:29:41.657000+00:00 | 2018-06-14 10:29:41.657000+00:00 | null | null | 50,815,710 | <p>According to <a href="https://databricks.com/blog/2017/05/31/transactional-writes-cloud-storage.html" rel="nofollow noreferrer">this</a> blog at databricks, Spark relies on commit protocol classes from Hadoop, so if the job is not finished because of some failure, the output directory does not change (partial output files do not occur).</p>
<p>So my questions are:</p>
<p>Does Spark prevent partial writes to different storages in case of failures (HDFS, S3, etc.)?</p>
<p>Is it possible for different Spark jobs to use the same temporary location before the final write operation?</p>
<p>Is it possible for the same Spark job, submitted more than once, to use the same temporary location?</p> | 2018-06-12 11:09:32.203000+00:00 | 2018-06-14 10:29:41.657000+00:00 | 2018-06-12 14:59:43.590000+00:00 | apache-spark|amazon-s3|hdfs | ['https://arxiv.org/abs/1709.01812', 'https://github.com/steveloughran/zero-rename-committer/releases/download/tag_draft_003/a_zero_rename_committer.pdf'] | 2
58,674,343 | <p>In a comment, OP clarified that the desired function should map two IEEE-754 <code>binary64</code> operands into a single such operand. One way to accomplish this is to map each <code>binary64</code> (double-precision) number into a positive integer in [0, 2<sup>26</sup>-2], and then use a well-known <em>pairing function</em> to map two such integers into a single positive integer the interval [0,2<sup>52</sup>), which is exactly representable in a <code>binary64</code> which has 52 stored significand ("mantissa") bits.</p>
<p>As the <code>binary64</code> operands are unrestricted in range per a comment by OP, all binades should be representable with equal relative accuracy, and we need to handle zeroes, infinities, and NaNs as well. For this reason I chose <code>log2()</code> to compress the data. Zeros are treated as the smallest <code>binary64</code> subnormal, <code>0x1.0p-1074</code>, which has the consequence that both <code>0x1.0p-1074</code> and zero will decompress into zero. The result from <code>log2</code> falls into the range [-1074, 1024). Since we need to store the sign of the operand, we bias the logarithm value by 1074, giving a result in [0, 2098), then scale that to almost [0, 2<sup>25</sup>), <em>round</em> to the nearest integer, and finally affix the sign of the original <code>binary64</code> operand. The motivation for almost utilizing the complete range is to leave a little bit of room at the top of the range for special encodings for infinity and NaN (so a single canonical NaN encoding is used).</p>
<p>Since pairing functions known from the literature operate on natural numbers only, we need a mapping from whole numbers to natural numbers. This is easily accomplished by mapping negative whole numbers to odd natural numbers, while positive whole numbers are mapped to even natural numbers. Thus our operands are mapped from (-2<sup>25</sup>, +2<sup>25</sup>) to [0, 2<sup>26</sup>-2]. The pairing function then combines two integers in [0, 2<sup>26</sup>-2] into a single integer in [0, 2<sup>52</sup>).</p>
<p>Different pairing functions known from the literature differ in their scrambling behavior, which may impact the hashing functionality mentioned in the question. They may also differ in their performance. Therefore I am offering a selection of four different pairing functions for the <code>pair()</code> / <code>unpair()</code> implementations in the code below. Please see the comments in the code for corresponding literature references.</p>
<p>Unpacking of the packed operand involves applying the inverse of each packing step in reverse order. The unpairing function gives us two natural integers. These are mapped to two whole numbers, which are mapped to two logarithm values, which are then exponentiated with <code>exp2()</code> to recover the original numbers, with a bit of added work to get special values and the sign correct.</p>
<p>While logarithms are represented with a relative accuracy on the order of 10<sup>-8</sup>, the expected maximum relative error in final results is on the order of 10<sup>-5</sup> due to the well-known error magnification property of exponentiation. Maximum relative error observed for a <code>pack()</code> / <code>unpack()</code> round-trip in extensive testing was 2.167e-5.</p>
<p>Below is my ISO C99 implementation of the algorithm together with a portion of my test framework. This should be portable to other programming languages with a modicum of effort.</p>
<pre class="lang-c prettyprint-override"><code>#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <math.h>
#include <float.h>
#define SZUDZIK_ELEGANT_PAIRING (1)
#define ROZSA_PETER_PAIRING (2)
#define ROSENBERG_STRONG_PAIRING (3)
#define CHRISTOPH_MICHEL_PAIRING (4)
#define PAIRING_FUNCTION (ROZSA_PETER_PAIRING)
/*
https://groups.google.com/forum/#!original/comp.lang.c/qFv18ql_WlU/IK8KGZZFJx4J
From: geo <[email protected]>
Newsgroups: sci.math,comp.lang.c,comp.lang.fortran
Subject: 64-bit KISS RNGs
Date: Sat, 28 Feb 2009 04:30:48 -0800 (PST)
This 64-bit KISS RNG has three components, each nearly
good enough to serve alone. The components are:
Multiply-With-Carry (MWC), period (2^121+2^63-1)
Xorshift (XSH), period 2^64-1
Congruential (CNG), period 2^64
*/
static uint64_t kiss64_x = 1234567890987654321ULL;
static uint64_t kiss64_c = 123456123456123456ULL;
static uint64_t kiss64_y = 362436362436362436ULL;
static uint64_t kiss64_z = 1066149217761810ULL;
static uint64_t kiss64_t;
#define MWC64 (kiss64_t = (kiss64_x << 58) + kiss64_c, \
kiss64_c = (kiss64_x >> 6), kiss64_x += kiss64_t, \
kiss64_c += (kiss64_x < kiss64_t), kiss64_x)
#define XSH64 (kiss64_y ^= (kiss64_y << 13), kiss64_y ^= (kiss64_y >> 17), \
kiss64_y ^= (kiss64_y << 43))
#define CNG64 (kiss64_z = 6906969069ULL * kiss64_z + 1234567ULL)
#define KISS64 (MWC64 + XSH64 + CNG64)
double uint64_as_double (uint64_t a)
{
double r;
memcpy (&r, &a, sizeof r);
return r;
}
#define LOG2_BIAS (1074.0)
#define CLAMP_LOW (exp2(-LOG2_BIAS))
#define SCALE (15993.5193125)
#define NAN_ENCODING (33554430)
#define INF_ENCODING (33554420)
/* check whether argument is an odd integer */
int is_odd_int (double a)
{
return (-2.0 * floor (0.5 * a) + a) == 1.0;
}
/* compress double-precision number into an integer in (-2**25, +2**25) */
double compress (double x)
{
double t;
t = fabs (x);
t = (t < CLAMP_LOW) ? CLAMP_LOW : t;
t = rint ((log2 (t) + LOG2_BIAS) * SCALE);
if (isnan (x)) t = NAN_ENCODING;
if (isinf (x)) t = INF_ENCODING;
return copysign (t, x);
}
/* expand integer in (-2**25, +2**25) into double-precision number */
double decompress (double x)
{
double s, t;
s = fabs (x);
t = s / SCALE;
if (s == (NAN_ENCODING)) t = NAN;
if (s == (INF_ENCODING)) t = INFINITY;
return copysign ((t == 0) ? 0 : exp2 (t - LOG2_BIAS), x);
}
/* map whole numbers to natural numbers. Here: (-2^25, +2^25) to [0, 2^26-2] */
double map_Z_to_N (double x)
{
return (x < 0) ? (-2 * x - 1) : (2 * x);
}
/* Map natural numbers to whole numbers. Here: [0, 2^26-2] to (-2^25, +2^25) */
double map_N_to_Z (double x)
{
return is_odd_int (x) ? (-0.5 * (x + 1)) : (0.5 * x);
}
#if PAIRING_FUNCTION == SZUDZIK_ELEGANT_PAIRING
/* Matthew Szudzik, "An elegant pairing function." In Wolfram Research (ed.)
Special NKS 2006 Wolfram Science Conference, pp. 1-12.
Here: map two natural numbers in [0, 2^26-2] to natural number in [0, 2^52),
and vice versa
*/
double pair (double x, double y)
{
return (x != fmax (x, y)) ? (y * y + x) : (x * x + x + y);
}
void unpair (double z, double *x, double *y)
{
double sqrt_z = trunc (sqrt (z));
double sqrt_z_diff = z - sqrt_z * sqrt_z;
*x = (sqrt_z_diff < sqrt_z) ? sqrt_z_diff : sqrt_z;
*y = (sqrt_z_diff < sqrt_z) ? sqrt_z : (sqrt_z_diff - sqrt_z);
}
#elif PAIRING_FUNCTION == ROZSA_PETER_PAIRING
/*
Rozsa Peter, "Rekursive Funktionen" (1951), p. 44. Via David R. Hagen's blog,
https://drhagen.com/blog/superior-pairing-function/
Here: map two natural numbers in [0, 2^26-2] to natural number in [0, 2^52),
and vice versa
*/
double pair (double x, double y)
{
double mx = fmax (x, y);
double mn = fmin (x, y);
double sel = (mx == x) ? 0 : 1;
return mx * mx + mn * 2 + sel;
}
void unpair (double z, double *x, double *y)
{
double sqrt_z = trunc (sqrt (z));
double sqrt_z_diff = z - sqrt_z * sqrt_z;
double half_diff = trunc (sqrt_z_diff * 0.5);
*x = is_odd_int (sqrt_z_diff) ? half_diff : sqrt_z;
*y = is_odd_int (sqrt_z_diff) ? sqrt_z : half_diff;
}
#elif PAIRING_FUNCTION == ROSENBERG_STRONG_PAIRING
/*
A. L. Rosenberg and H. R. Strong, "Addressing arrays by shells",
IBM Technical Disclosure Bulletin, Vol. 14, No. 10, March 1972,
pp. 3026-3028.
Arnold L. Rosenberg, "Allocating storage for extendible arrays,"
Journal of the ACM, Vol. 21, No. 4, October 1974, pp. 652-670.
Corrigendum, Journal of the ACM, Vol. 22, No. 2, April 1975, p. 308.
Matthew P. Szudzik, "The Rosenberg-Strong Pairing Function", 2019
https://arxiv.org/abs/1706.04129
Here: map two natural numbers in [0, 2^26-2] to natural number in [0, 2^52),
and vice versa
*/
double pair (double x, double y)
{
double mx = fmax (x, y);
return mx * mx + mx + x - y;
}
void unpair (double z, double *x, double *y)
{
double sqrt_z = trunc (sqrt (z));
double sqrt_z_diff = z - sqrt_z * sqrt_z;
*x = (sqrt_z_diff < sqrt_z) ? sqrt_z_diff : sqrt_z;
*y = (sqrt_z_diff < sqrt_z) ? sqrt_z : (2 * sqrt_z - sqrt_z_diff);
}
#elif PAIRING_FUNCTION == CHRISTOPH_MICHEL_PAIRING
/*
Christoph Michel, "Enumerating a Grid in Spiral Order", September 7, 2016,
https://cmichel.io/enumerating-grid-in-spiral-order. Via German Wikipedia,
https://de.wikipedia.org/wiki/Cantorsche_Paarungsfunktion
Here: map two natural numbers in [0, 2^26-2] to natural number in [0, 2^52),
and vice versa
*/
double pair (double x, double y)
{
double mx = fmax (x, y);
return mx * mx + mx + (is_odd_int (mx) ? (x - y) : (y - x));
}
void unpair (double z, double *x, double *y)
{
double sqrt_z = trunc (sqrt (z));
double sqrt_z_diff = z - sqrt_z * (sqrt_z + 1);
double min_clamp = fmin (sqrt_z_diff, 0);
double max_clamp = fmax (sqrt_z_diff, 0);
*x = is_odd_int (sqrt_z) ? (sqrt_z + min_clamp) : (sqrt_z - max_clamp);
*y = is_odd_int (sqrt_z) ? (sqrt_z - max_clamp) : (sqrt_z + min_clamp);
}
#else
#error unknown PAIRING_FUNCTION
#endif
/* Lossy pairing function for double precision numbers. The maximum round-trip
relative error is about 2.167e-5
*/
double pack (double a, double b)
{
double c, p, q, s, t;
p = compress (a);
q = compress (b);
s = map_Z_to_N (p);
t = map_Z_to_N (q);
c = pair (s, t);
return c;
}
/* Unpairing function for double precision numbers. The maximum round-trip
relative error is about 2.167e-5 */
void unpack (double c, double *a, double *b)
{
double s, t, u, v;
unpair (c, &s, &t);
u = map_N_to_Z (s);
v = map_N_to_Z (t);
*a = decompress (u);
*b = decompress (v);
}
int main (void)
{
double a, b, c, ua, ub, relerr_a, relerr_b;
double max_relerr_a = 0, max_relerr_b = 0;
#if PAIRING_FUNCTION == SZUDZIK_ELEGANT_PAIRING
printf ("Testing with Szudzik's elegant pairing function\n");
#elif PAIRING_FUNCTION == ROZSA_PETER_PAIRING
printf ("Testing with Rozsa Peter's pairing function\n");
#elif PAIRING_FUNCTION == ROSENBERG_STRONG_PAIRING
printf ("Testing with Rosenberg-Strong pairing function\n");
#elif PAIRING_FUNCTION == CHRISTOPH_MICHEL_PAIRING
printf ("Testing with C. Michel's spiral pairing function\n");
#else
#error unknown PAIRING_FUNCTION
#endif
do {
a = uint64_as_double (KISS64);
b = uint64_as_double (KISS64);
c = pack (a, b);
unpack (c, &ua, &ub);
if (!isnan(ua) && !isinf(ua) && (ua != 0)) {
relerr_a = fabs ((ua - a) / a);
if (relerr_a > max_relerr_a) {
printf ("relerr_a= %15.8e a=% 23.16e ua=% 23.16e\n",
relerr_a, a, ua);
max_relerr_a = relerr_a;
}
}
if (!isnan(ub) && !isinf(ub) && (ub != 0)) {
relerr_b = fabs ((ub - b) / b);
if (relerr_b > max_relerr_b) {
printf ("relerr_b= %15.8e b=% 23.16e ub=% 23.16e\n",
relerr_b, b, ub);
max_relerr_b = relerr_b;
}
}
} while (1);
return EXIT_SUCCESS;
}
</code></pre> | 2019-11-02 19:02:15.560000+00:00 | 2019-11-08 17:14:53.987000+00:00 | 2019-11-08 17:14:53.987000+00:00 | null | 58,511,854 | <p>Some time ago, I came across a pair of functions in some CAD code to encode a set of coordinates (a pair of <code>float</code>s) as a single <code>float</code> (to use as a hash key), and then to unpack that single float back into the original pair.</p>
<p>The forward and backward functions only used standard mathematical operations -- no magic fiddling with bit-level representations of floats, no extracting and interleaving individual digits or anything like that. Obviously the reversal is not perfect in practice, because you lose considerable precision going from two floats to one, but according to the Wikipedia page for the function it should have been exactly invertible given infinite precision arithmetic.</p>
<p>Unfortunately, I don't work on that code anymore, and I've forgotten the name of the function so I can't look it up on Wikipedia again. Anybody know of a named mathematical function that meets that description?</p> | 2019-10-22 20:29:44.790000+00:00 | 2019-11-08 17:14:53.987000+00:00 | 2019-10-25 01:48:02.720000+00:00 | math|floating-point | [] | 0
44,573,990 | <p>You are applying batch normalization on the first (input) layer, which is most likely a mistake. Why would you do this? Your inputs are images and you know very well how to normalize your input - in fact that's what you're doing in the first line. It makes no sense to apply a normalization again.</p>
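<p>To make the placement concrete, here is a hedged <code>tf.keras</code> sketch (shapes and layer sizes are made up) with batch normalization only after hidden layers, as discussed next:</p>
<pre><code>from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, 3, strides=2, input_shape=(64, 64, 3)),  # input already rescaled to [0, 1]
    layers.BatchNormalization(),   # normalizes hidden conv activations
    layers.Activation("relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128),
    layers.BatchNormalization(),   # normalizes hidden dense activations
    layers.Activation("relu"),
    layers.Dense(2, activation="softmax"),
])
</code></pre>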
<p>Batch normalization is applied in hidden layers so that the data does not get too big or too small. There's no simple, universal way of doing this, hence this special layer as introduced by <a href="https://arxiv.org/pdf/1502.03167.pdf" rel="nofollow noreferrer">Sergey Ioffe and Christian Szegedy</a>.</p> | 2017-06-15 17:57:51.783000+00:00 | 2017-06-15 18:06:02.830000+00:00 | 2017-06-15 18:06:02.830000+00:00 | null | 44,572,978 | <p>I have high classification on training but low classification on validation even though I am using the same dataset. This problem only occurred when using batch normalization. Am I implementing it correctly?</p>
<p>Code using batch normalization:</p>
<pre><code>train_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
directory = '../ImageFilter/Images/',
target_size=(img_rows, img_cols),
batch_size=batch_size,
class_mode='categorical',
shuffle=True)
model = Sequential()
model.add(Convolution2D(16,
kernel_size=(3, 3),
strides=(2,2),
activation='relu',
input_shape=(img_rows, img_cols, 3)))
model.add(BatchNormalization())
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics = ['accuracy'])
epochs = 100
patience = 6
n_images = 91
file_path = 'imageFilterCNN.hdf5'
checkpointer = ModelCheckpoint(file_path, monitor='val_acc', verbose=0, save_best_only=True)
earlystop = EarlyStopping(monitor='val_acc', patience=patience, verbose=0, mode='auto')
tboard = TensorBoard('./logs')
model.fit_generator(
train_generator,
steps_per_epoch=n_images// batch_size,
epochs=epochs,
callbacks=[checkpointer, earlystop, tboard],
validation_data=train_generator,
validation_steps=n_images// batch_size)
</code></pre>
<blockquote>
<p>Outputs:
Epoch 15/100 11/11 [==============================] - 2s - loss: 0.0092 - acc: 1.0000 -
val_loss: 3.0321 - val_acc: 0.5568</p>
</blockquote>
 | 2017-06-15 16:56:01.550000+00:00 | 2017-06-15 18:06:02.830000+00:00 | null | machine-learning|tensorflow|keras | ['https://arxiv.org/pdf/1502.03167.pdf'] | 1
69,180,711 | <p>Your classification is correct if the purpose is to show why they were invented initially. However, rather than a task-based taxonomy, CNNs are better studied on the basis of what they do differently. Initially CNNs were designed for image classification, but the same network works for object detection with slight modifications in the last layer. For example, Faster R-CNN (designed for object detection) can use any of the architectures designed for classification, such as VGG, ResNet, etc. (<a href="https://www.researchgate.net/figure/Comparison-of-multiple-faster-R-CNN-backbones-The-evaluation-metric-is-MAP05IoU_tbl2_337017443" rel="nofollow noreferrer">link</a>). Similarly, Faster R-CNN can be modified to do the segmentation task in the Mask R-CNN architecture (<a href="https://arxiv.org/abs/1703.06870" rel="nofollow noreferrer">link</a>).</p>
<p>Here is a chart showing evolutionary history of deep CNNs showing architectural innovations (<a href="https://arxiv.org/pdf/1901.06032.pdf" rel="nofollow noreferrer">source</a>)
<a href="https://i.stack.imgur.com/PCZhc.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PCZhc.jpg" alt="enter image description here" /></a></p>
<p>Here is another taxonomy showing different categories based on architecture style.
<a href="https://i.stack.imgur.com/ZOAyZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZOAyZ.png" alt="enter image description here" /></a></p> | 2021-09-14 15:36:32.783000+00:00 | 2021-09-14 15:36:32.783000+00:00 | null | null | 69,179,967 | <p>Is my classification correct?</p>
<p>LeNet-5: Image classification,<br />
AlexNet: Image classification,<br />
VGG-16: Image classification,<br />
ResNet: Image classification,<br />
Inception module: Image classification,<br />
MobileNet: Image classification,<br />
EfficientNet: Image classification,<br />
Neural Style Transfer: Image generation,<br />
Sliding Windows Detection algorithm: Object detection,<br />
R-CNN: Object detection,<br />
YOLO: Object detection,<br />
Siamese network: Image recognition,<br />
U-Net: Semantic segmentation</p>
<p>If wrong, please correct me. THANKS!</p> | 2021-09-14 14:48:54.737000+00:00 | 2021-09-14 15:36:32.783000+00:00 | null | computer-vision|object-detection|image-recognition|image-classification|image-generation | ['https://www.researchgate.net/figure/Comparison-of-multiple-faster-R-CNN-backbones-The-evaluation-metric-is-MAP05IoU_tbl2_337017443', 'https://arxiv.org/abs/1703.06870', 'https://arxiv.org/pdf/1901.06032.pdf', 'https://i.stack.imgur.com/PCZhc.jpg', 'https://i.stack.imgur.com/ZOAyZ.png'] | 5 |
53,577,756 | <p>I think the non-convergence may be caused by vanishing gradients. You can trace the gradients using <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/layers/optimize_loss" rel="nofollow noreferrer">tf.contrib.layers.optimize_loss</a> and TensorBoard. You can refer to this <a href="https://stackoverflow.com/a/43948624/3552975">answer</a> for more details.</p>
<p>Several possible optimizations:</p>
<p>1) don't write the cross entropy yourself.
You can employ the sigmoid cross entropy with logits API, since it ensures stability <a href="https://github.com/tensorflow/tensorflow/blob/aaaf1908758c2c1fadf534b6dccf6fdea1bdc18b/tensorflow/python/ops/nn_impl.py#L137" rel="nofollow noreferrer">as documented</a>: </p>
<pre><code> max(x, 0) - x * z + log(1 + exp(-abs(x)))
</code></pre>
<p>2) doing some <a href="https://arxiv.org/pdf/1602.07868.pdf" rel="nofollow noreferrer">weight normalization</a> may help. </p>
<p>3) keep the regularization loss small.
You can read <a href="https://stats.stackexchange.com/a/305331/103153">this answer</a> for more information. </p>
<p>4) I don't see the necessity of applying tf.abs to the L1 distance. </p>
<p>And here is the code I modified. Hope it helps. </p>
<pre><code>mode = "training"
rl_rate = .1
with tf.device('/gpu:0'):
h1 = siamese(feed_image1)
h2 = siamese(feed_image2)
l1_dist = tf.subtract(h1, h2)
# is it necessary to use abs?
l1_dist_norm = tf.layers.batch_normalization(l1_dist, training=(mode=="training"))
with tf.variable_scope('logits') as scope:
        # get_variable expects a static shape and takes initializer/regularizer keyword arguments
        w = tf.get_variable('fully_connected_weights', [l1_dist.get_shape().as_list()[-1], 1],
                        initializer = tf.contrib.layers.xavier_initializer(uniform=False),
                        regularizer = tf.contrib.layers.l2_regularizer(0.001)
                       )
        logits = tf.tensordot(l1_dist_norm, w, axes=1)  # the keyword is "axes", not "axis"
    # reduce the per-example cross entropy to a scalar before mixing it with the regularization term
    xent_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=feed_labels))
    reg_loss = tf.reduce_sum(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
    total_loss = tf.add(rl_rate * reg_loss, (1 - rl_rate) * xent_loss, name='total_loss')
# or:
# weights = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# l1_regularizer = tf.contrib.layers.l1_regularizer()
# regularization_loss = tf.contrib.layers.apply_regularization(l1_regularizer, weights)
# total_loss = xent_loss + regularization_loss
with tf.variable_scope('adam_optimizer') as scope:
        # global_step, the learning rate and the clipping norm were undefined in the original snippet
        global_step = tf.train.get_or_create_global_step()
        learning_rate = 0.0005
        max_grad_norm = 5.0   # illustrative value
        opt = tf.contrib.layers.optimize_loss(total_loss, global_step, learning_rate=learning_rate, optimizer="Adam", clip_gradients=max_grad_norm, summaries=["gradients"])
</code></pre> | 2018-12-02 05:46:02.667000+00:00 | 2018-12-03 06:15:10.110000+00:00 | 2018-12-03 06:15:10.110000+00:00 | null | 50,724,377 | <p>I am trying to implement a Siamese Network, as in this <a href="http://www.google.com/url?q=http%3A%2F%2Fwww.cs.toronto.edu%2F~rsalakhu%2Fpapers%2Foneshot1.pdf&sa=D&sntz=1&usg=AFQjCNH241Qx3Exepcsf2-7dtPVnb_17lA" rel="nofollow noreferrer">paper</a> </p>
<p>In this paper, they have used cross entropy for the Loss function</p>
<p>I am using STL-10 Dataset for training and instead of the 3 layer network used in the paper, I replaced it with VGG-13 CNN network, except the last logit layer.</p>
<p>Here is my loss function code</p>
<pre><code>def loss(pred,true_pred):
cross_entropy_loss = tf.multiply(-1.0,tf.reduce_mean(tf.add(tf.multiply(true_pred,tf.log(pred)),tf.multiply((1-true_pred),tf.log(tf.subtract(1.0,pred))))))
total_loss = tf.add(tf.reduce_sum(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)),cross_entropy_loss,name='total_loss')
return cross_entropy_loss,total_loss
with tf.device('/gpu:0'):
h1 = siamese(feed_image1)
h2 = siamese(feed_image2)
l1_dist = tf.abs(tf.subtract(h1,h2))
with tf.variable_scope('pred') as scope:
predictions = tf.contrib.layers.fully_connected(l1_dist,1,activation_fn = tf.sigmoid,weights_initializer = tf.contrib.layers.xavier_initializer(uniform=False),weights_regularizer = tf.contrib.layers.l2_regularizer(tf.constant(0.001, dtype=tf.float32)))
celoss,cost = loss(predictions,feed_labels)
with tf.variable_scope('adam_optimizer') as scope:
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
opt = optimizer.minimize(cost)
</code></pre>
<p>However, when I run the training, the cost remains almost constant at 0.6932</p>
<p>I have used Adam Optimizer here. </p>
<p>But previously I used Momentum Optimizer.
I have tried changing the learning rate but the cost still behaves the same.</p>
<p>And <strong>all the prediction values converge to 0.5 after a few iterations</strong>.</p>
<p>After taking the output for two batches of images (input1 and input2), I take their L1 distance and to that I have connected a fully connected layer with a single output and sigmoid activation function.</p>
<p>[h1 and h2 contains the output of the last fully connected layer(not the logit layer) of the VGG-13 network]</p>
<p>Since the output activation function is sigmoid, and since the prediction values are around 0.5, we can calculate and say that the sum of the weighted L1 distance of the output of the two networks is near to zero.</p>
<p>I can't understand where I am going wrong.
A little help will be very much appreciated.</p> | 2018-06-06 15:38:22.817000+00:00 | 2018-12-03 06:15:10.110000+00:00 | 2018-06-07 07:41:55.980000+00:00 | python|tensorflow|conv-neural-network | ['https://www.tensorflow.org/api_docs/python/tf/contrib/layers/optimize_loss', 'https://stackoverflow.com/a/43948624/3552975', 'https://github.com/tensorflow/tensorflow/blob/aaaf1908758c2c1fadf534b6dccf6fdea1bdc18b/tensorflow/python/ops/nn_impl.py#L137', 'https://arxiv.org/pdf/1602.07868.pdf', 'https://stats.stackexchange.com/a/305331/103153'] | 5 |
50,100,807 | <p>Here is a detailed mathematical explanation:
<a href="https://arxiv.org/abs/1804.10253" rel="nofollow noreferrer">https://arxiv.org/abs/1804.10253</a></p>
<p>This paper also shows that using a linear autoencoder, it is possible not only to compute the subspace spanned by the PCA vectors, but it is actually possible to compute the principal components themselves.</p> | 2018-04-30 12:43:05.207000+00:00 | 2018-04-30 12:43:05.207000+00:00 | null | null | 42,607,471 | <p>I would like the mathematical proof of it. Does anyone know of a paper on it, or can anyone work out the math?</p> | 2017-03-05 10:54:21.437000+00:00 | 2018-04-30 12:43:05.207000+00:00 | 2017-03-05 12:54:25.963000+00:00 | machine-learning|neural-network|deep-learning|pca|autoencoder | ['https://arxiv.org/abs/1804.10253'] | 1
53,102,173 | <p>Check out <a href="https://github.com/google/cityhash" rel="nofollow noreferrer">Cityhash</a> and <a href="https://github.com/google/highwayhash" rel="nofollow noreferrer">HighwayHash</a>. Both have 256-bit variants and are much faster than SHA256. Cityhash is faster, but it is a non-cryptographic hash. HighwayHash is slower (but still faster than SHA256), and a <a href="https://arxiv.org/pdf/1612.06257.pdf" rel="nofollow noreferrer">secure</a> hash.</p>
<p>All modern non-cryptographic hashes are <strong>much</strong> faster than SHA256. If you're willing to use a 128-bit hash, you'll have more <a href="https://github.com/rurban/smhasher" rel="nofollow noreferrer">options</a>.</p>
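<p>The collision figure quoted in the next paragraph is just the birthday bound; a quick back-of-the-envelope check in Python (the numbers are taken from that paragraph):</p>
<pre><code>n = 10**10           # number of objects
b = 128              # hash width in bits
p_collision = n * n / 2.0 ** (b + 1)   # birthday-bound approximation: n^2 / 2^(b+1)
print(p_collision)   # ~1.5e-19, i.e. below 1e-18
</code></pre>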
<p>Note, that you may want to consider using a 128-bit hash, as it may be adequate for your purpose. For example, if you have 10<sup>10</sup> different objects, the probability that you have a collision with a quality 128-bit hash is less than 10<sup>-18</sup>. Check out the table <a href="https://en.wikipedia.org/wiki/Birthday_attack" rel="nofollow noreferrer">here</a>.</p> | 2018-11-01 13:21:28.087000+00:00 | 2018-11-01 18:43:28.907000+00:00 | 2018-11-01 18:43:28.907000+00:00 | null | 53,041,646 | <p>I'm working on design of <a href="https://en.wikipedia.org/wiki/Content-addressable_storage" rel="nofollow noreferrer">Content-addressable storage</a>, so I'm looking for a hash function to generate object identifiers. Every object should get short ID based on its content in that way: <code>object_id = hash(object_content)</code>.</p>
<p>Prerequisites:</p>
<ol>
<li>Hash-function should be fast. </li>
<li>Collision probability must be as low as possible. </li>
<li>Optimal ID length is <code>32</code> bytes in order to address <code>256^32</code> objects at max (but this requirement may be relaxed).</li>
</ol>
<p>Taking into account these requirements, I picked the <code>SHA256</code> hash, but unfortunately it's not fast enough for my purposes. The fastest implementations of <code>SHA256</code> that I was able to benchmark were <code>openssl</code> and <code>boringssl</code>: on my desktop <code>Intel Core I5 6400</code> it gave about <code>420 MB/s</code> per core. Other implementations (like <code>crypto/rsa</code> in Go) are even slower. I would like to replace <code>SHA256</code> with another hash function that provides the same collision guarantees as <code>SHA256</code>, but gives better throughput (at least <code>600 MB/s</code> per core).</p>
<p>Please share your opinion about possible options to solve this problem.</p>
<p>Also I would like to note that hardware update (like purchasing modern CPU with <code>AVX512</code> instruction set) is not possible. The main point is to find hash function that will provide better performance on commodity hardware.</p> | 2018-10-29 08:41:34.563000+00:00 | 2019-11-17 17:50:14.863000+00:00 | 2019-07-30 15:43:32.130000+00:00 | hash|blob|identity|sha256 | ['https://github.com/google/cityhash', 'https://github.com/google/highwayhash', 'https://arxiv.org/pdf/1612.06257.pdf', 'https://github.com/rurban/smhasher', 'https://en.wikipedia.org/wiki/Birthday_attack'] | 5 |
68,157,693 | <p>First, I note you are using FastDTW, but "FastDTW is approximate and Generally Slower than the Algorithm it Approximates."</p>
<p>Before you answer the question in code, you need to answer the question semantically.
Consider the following two cases:</p>
<ol>
<li><p>A = CAT and B = CAAAT.
In this case, you can interpolate the time series to the same length (see the sketch after this list).</p>
</li>
<li><p>C = CAT and B = CATXXXX.
In this case, you need to use open-ended DTW [b].</p>
</li>
</ol>
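<p>For the first case, a minimal NumPy sketch of interpolating the shorter series to the longer length before alignment (the variable names mirror the question; everything else is an assumption):</p>
<pre><code>import numpy as np

x = np.cos(2 * np.pi * np.power(3 * np.linspace(1, 1000, 1000) / 1000, 2))
y = np.cos(2 * np.pi * (9 * np.linspace(1, 399, 399) / 400))

# resample y onto the length of x
y_resampled = np.interp(np.linspace(0.0, 1.0, len(x)),   # target positions
                        np.linspace(0.0, 1.0, len(y)),   # original positions
                        y)
# both series now have length 1000 and can be aligned directly
</code></pre>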
<p>[a] <a href="https://arxiv.org/abs/2003.11246" rel="nofollow noreferrer">https://arxiv.org/abs/2003.11246</a>
[b] <a href="https://www.cs.unm.edu/%7Emueen/DTW.pdf" rel="nofollow noreferrer">https://www.cs.unm.edu/~mueen/DTW.pdf</a></p> | 2021-06-28 04:44:19.857000+00:00 | 2021-06-28 04:44:19.857000+00:00 | null | null | 68,153,370 | <p>I have the two following time series signals:</p>
<pre><code>import numpy as np
x = np.cos(2*np.pi*np.power(3*(np.linspace(1, 1000, 1000))/1000, 2))
y = np.cos(2*np.pi*(9*(np.linspace(1, 399, 399))/400))
</code></pre>
<p>so <code>x</code> and <code>y</code> are in shape <code>(1000,)</code> and <code>(399,)</code>, respectively. I want to do two following dynamic time warping with <code>fastdtw</code> python package:</p>
<ol>
<li><code>x</code> is the reference signal (the longer signal):</li>
</ol>
<p>I want to map <code>y</code> into the longer signal shape (<code>x.shape=(1000,)</code>). It's done by the following code:</p>
<pre><code>from scipy.spatial.distance import euclidean
from fastdtw import fastdtw
distance, path = fastdtw(x, y, dist=euclidean) # x:reference signal
inds = [ind[1] for ind in path]
y_warped = y[inds]
</code></pre>
<p>in this case, the above code works correctly and maps <code>y:(399,)</code> into the <code>y_warped:(1000,)</code>.</p>
<ol start="2">
<li><code>y</code> is the reference signal (the shorter signal):</li>
</ol>
<p>I want to map <code>x</code> into the shorter signal shape (<code>y.shape=(399,)</code>).</p>
<pre><code>from scipy.spatial.distance import euclidean
from fastdtw import fastdtw
distance, path = fastdtw(y, x, dist=euclidean) # y:reference signal
inds = [ind[1] for ind in path]
x_warped = x[inds]
</code></pre>
<p>but in this case, I get <code>x_warped</code> in the same shape as <code>x</code>, but I expect to get <code>x.shape=(399,)</code>. How can I warp a longer signal into a shorter signal?
Thanks in advance!</p> | 2021-06-27 16:48:32.423000+00:00 | 2021-06-28 04:44:19.857000+00:00 | null | python|time-series|data-science|signal-processing|dtw | ['https://arxiv.org/abs/2003.11246', 'https://www.cs.unm.edu/%7Emueen/DTW.pdf'] | 2 |
73,092,326 | <p>You already have the fastest solution.</p>
<p>To understand why, we need to dig a bit deeper.</p>
<p>The memory bandwidth of a GPU is different for coalesced/misaligned reads, and this is different for read/write operations. <a href="https://doi.org/10.48550/arXiv.2112.08926" rel="nofollow noreferrer">On most GPUs</a>, while misaligned reads are almost as fast as coalesced reads, for misaligned writes there is a large performance penalty. So misaligned reads are ok, but misaligned writes should be avoided.<a href="https://i.stack.imgur.com/wgzGE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wgzGE.png" alt="enter image description here" /></a></p>
<p>In your example, you have coalesced writes to <code>A</code> and misaligned reads from <code>B</code>, so you already get peak memory bandwidth.</p>
<pre class="lang-c prettyprint-override"><code>kernel void reverse_kernel(global float* A, global float* B) { // equivalent to "for(uint i=0u; i<N; i++) {", but executed in parallel
const uint i = get_global_id(0);
A[i] = B[N-1-i];
}
</code></pre>
<p>For coalesced memory access, generally contiguous threads must access contiguous memory addresses. But there are some special cases where coalesced access is still allowed for different access patterns: strided access and broadcasting. I am not sure if reverse access also falls into that class, but for the reads it doesn't matter anyway.</p>
<p>To your initial question with shared memory: This is how you would do it. But there probably won't be any significant speedup here. Speedup with shared memory is only large if the shared memory is accessed multiple times; here you write and read it only once.</p>
<pre class="lang-c prettyprint-override"><code>kernel void reverse_kernel(global float* A, global float* B) { // equivalent to "for(uint i=0u; i<N; i++) {", but executed in parallel
const uint i = get_global_id(0);
const uint lid = get_local_id(0);
const uint gid = get_group_id(0);
  local float cache[64]; // workgroup size on the C++ side must be set to 64 as well
cache[lid] = B[64*gid+lid]; // coalesced read from B
barrier(CLK_LOCAL_MEM_FENCE);
A[i] = cache[64-1-lid]; // coalesced write to A
}
</code></pre> | 2022-07-23 16:18:17.467000+00:00 | 2022-07-23 16:23:50.160000+00:00 | 2022-07-23 16:23:50.160000+00:00 | null | 73,089,436 | <p>I have this function written in C, that reverse an array:</p>
<pre><code>for (int i=0; i<N;i++){
A[i] = B[N-i-1];
}
</code></pre>
<p>I have to write a kernel function that does not suffer from uncoalesced memory access (caused by B[N-i-1]), using tiling and local memory. So the idea is: do the reverse in local memory and write the result back into the array A. How can I do it? I'm a newbie.
Assumption: the input size matches the global size.</p> | 2022-07-23 09:07:57.207000+00:00 | 2022-07-23 16:23:50.160000+00:00 | null | c|gpu|opencl | ['https://doi.org/10.48550/arXiv.2112.08926', 'https://i.stack.imgur.com/wgzGE.png'] | 2
51,225,082 | <p>This should do it. </p>
<p>Source 1: <a href="https://arxiv.org/pdf/1706.03409" rel="nofollow noreferrer">https://arxiv.org/pdf/1706.03409</a></p>
<pre><code># Clustered Wilcox test (clusWilcox.test comes from the clusrank package)
library(clusrank)
clusWilcox.test(x ~ grp + cluster(cid), dat.cl, method = "rgl")
</code></pre>
<p>Source 2: <a href="https://www.statmethods.net/stats/nonparametric.html" rel="nofollow noreferrer">https://www.statmethods.net/stats/nonparametric.html</a></p>
<pre><code># independent 2-group Mann-Whitney U Test
wilcox.test(y~A)
# where y is numeric and A is a binary factor
# independent 2-group Mann-Whitney U Test
wilcox.test(y,x) # where y and x are numeric
# dependent 2-group Wilcoxon Signed Rank Test
wilcox.test(y1,y2,paired=TRUE) # where y1 and y2 are numeric
</code></pre>
<p>Next time please include a <a href="https://stackoverflow.com/q/5963269/1422451">reproducible</a> example.</p> | 2018-07-07 16:42:37.947000+00:00 | 2018-07-07 16:49:00.640000+00:00 | 2018-07-07 16:49:00.640000+00:00 | null | 51,225,023 | <p>I used the hierarchical clustering approach on my dataset and created 3 clusters, which I added as a variable to my dataset.
I would like to perform the Wilcoxon Mann Whitney test on the three clusters (clusterID) and their respective variables.</p>
<p>This is the dataset </p>
<p><a href="https://i.stack.imgur.com/DEfFt.png" rel="nofollow noreferrer">dataset</a></p> | 2018-07-07 16:34:43.663000+00:00 | 2018-07-07 16:49:00.640000+00:00 | 2018-07-07 16:40:37.703000+00:00 | r|hierarchical-clustering | ['https://arxiv.org/pdf/1706.03409', 'https://www.statmethods.net/stats/nonparametric.html', 'https://stackoverflow.com/q/5963269/1422451'] | 3 |
60,394,246 | <h2>Which learning algorithm does spaCy use?</h2>
<p>spaCy has its own deep learning library called <a href="https://github.com/explosion/thinc" rel="noreferrer">thinc</a>, used under the hood for different NLP models. For most (if not all) tasks, spaCy uses a deep neural network based on CNNs with a few tweaks. Specifically for Named Entity Recognition, spaCy uses:</p>
<ol>
<li><p>A <strong>transition based approach</strong> borrowed from shift-reduce parsers, which is described in the paper <a href="https://arxiv.org/pdf/1603.01360.pdf" rel="noreferrer">Neural Architectures for Named Entity Recognition</a> by Lample et al.
Matthew Honnibal describes how spaCy uses this on a <a href="https://youtu.be/sqDHBH9IjRU?t=975" rel="noreferrer">YouTube video</a>.</p>
</li>
<li><p>A framework that's called <strong>"Embed. Encode. Attend. Predict"</strong> (Starting <a href="https://youtu.be/sqDHBH9IjRU?t=1254" rel="noreferrer">here</a> on the video), slides <a href="https://github.com/explosion/talks/blob/master/2018-04-12_Embed-Encode-Attend-Predict.pdf" rel="noreferrer">here</a>.</p>
<ul>
<li><p><strong>Embed</strong>: Words are embedded using a Bloom filter, which means that word hashes are kept as keys in the embedding dictionary, instead of the word itself. This maintains a more compact embeddings dictionary, with words potentially colliding and ending up with the same vector representations.</p>
</li>
<li><p><strong>Encode</strong>: List of words is encoded into a sentence matrix, to take context into account. spaCy uses CNN for encoding.</p>
</li>
<li><p><strong>Attend</strong>: Decide which parts are more informative given a query, and get problem specific representations.</p>
</li>
<li><p><strong>Predict</strong>: spaCy uses a multi layer perceptron for inference.</p>
</li>
</ul>
</li>
</ol>
<p>Advantages of this framework, per Honnibal are:</p>
<ol>
<li>Mostly equivalent to sequence tagging (another task spaCy offers models for)</li>
<li>Shares code with the parser</li>
<li>Easily excludes invalid sequences</li>
<li>Arbitrary features are easily defined</li>
</ol>
<p>For a full overview, Matthew Honnibal describes how the model works in <a href="https://www.youtube.com/watch?v=sqDHBH9IjRU" rel="noreferrer">this YouTube video</a>. Slides could be found <a href="https://github.com/explosion/talks/blob/master/2017-11-02_Practical-and-Effective-Neural-NER.pdf" rel="noreferrer">here</a>.</p>
<p><em>Note</em>: This information is based on slides from 2017. The engine might have changed since then.</p>
<h2>When adding a new entity type, should we create a blank model or train an existing one?</h2>
<p>Theoretically, when fine-tuning a spaCy model with new entities, you have to make sure the model doesn't forget representations for previously learned entities. The best thing, if possible, is to train a model from scratch, but that might not be easy or possible due to lack of data or resources.</p>
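<p>In practice, if you do fine-tune an existing pipeline rather than starting blank, the new entity type is registered on the NER component first; a hedged sketch using the spaCy v2-style API (the label name is hypothetical and the model must already be installed):</p>
<pre><code>import spacy

nlp = spacy.load("en_core_web_sm")   # requires the en_core_web_sm model to be installed
ner = nlp.get_pipe("ner")
ner.add_label("GADGET")              # hypothetical new entity type

# then fine-tune on examples that also contain the *old* labels,
# so the model doesn't forget previously learned entities
</code></pre>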
<p><strong>EDIT Feb 2021</strong>: spaCy version 3 now uses the Transformer architecture as its deep learning model.</p> | 2020-02-25 12:04:59.947000+00:00 | 2021-08-24 12:40:23.633000+00:00 | 2021-08-24 12:40:23.633000+00:00 | null | 60,381,170 | <p>When we train custom model, I do see we have dropout and n_iter parameters to tune, but which deep learning algorithm does Spacy Uses to train Custom Models? Also, when Adding new Entity type is it good to create blank or train it on existing model?</p> | 2020-02-24 17:33:52.413000+00:00 | 2021-08-24 12:40:23.633000+00:00 | null | nlp|spacy|named-entity-recognition | ['https://github.com/explosion/thinc', 'https://arxiv.org/pdf/1603.01360.pdf', 'https://youtu.be/sqDHBH9IjRU?t=975', 'https://youtu.be/sqDHBH9IjRU?t=1254', 'https://github.com/explosion/talks/blob/master/2018-04-12_Embed-Encode-Attend-Predict.pdf', 'https://www.youtube.com/watch?v=sqDHBH9IjRU', 'https://github.com/explosion/talks/blob/master/2017-11-02_Practical-and-Effective-Neural-NER.pdf'] | 7 |
62,935,380 | <p>The face_recognition problem is framed as follows:</p>
<p>The features are two pictures for each data point, and the label is whether those two pictures show the same person or not (binary classification), but the network is constructed without the classification layer. After training the model, the output is called an embedding. The network is trained such that the distance between the model outputs (embeddings) for the same person is small and for different persons is big. You can use cosine distance as a metric to get the distance between two vectors (embeddings), and so on.</p>
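<p>A small NumPy sketch of the "distance between embeddings" idea (the vectors are random stand-ins for the network outputs; the 0.6 threshold is the one mentioned in the question):</p>
<pre><code>import numpy as np

emb_a = np.random.rand(128)   # stand-in for the embedding of image A
emb_b = np.random.rand(128)   # stand-in for the embedding of image B

euclidean = np.linalg.norm(emb_a - emb_b)
cosine = 1.0 - emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))

same_person = euclidean < 0.6   # smaller distance => more likely the same person
</code></pre>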
<p>Note: this is a very abstract idea of how face_recognition works; if you need more details you can read this <a href="https://arxiv.org/abs/1503.03832" rel="nofollow noreferrer">paper</a>.</p> | 2020-07-16 13:03:22.227000+00:00 | 2021-08-25 16:37:30.667000+00:00 | 2021-08-25 16:37:30.667000+00:00 | null | 62,934,665 | <p>I am using a face recognition library to detect faces. The model gets 128 embeddings from the image. To check if two faces match, it checks if the distance between those two points is less than 0.6. I am not sure what it means by distance between two images. As per my understanding, does it mean comparing the distance between two points in known images and also again in the image we want it to recognize. I could not find any documentation on this online. Please Help</p> | 2020-07-16 12:26:05.673000+00:00 | 2021-08-25 16:37:30.667000+00:00 | null | deep-learning|face-recognition|dlib | ['https://arxiv.org/abs/1503.03832'] | 1
2,431,326 | <p>You will be interested in </p>
<ol>
<li><p><a href="http://arxiv.org/abs/1002.4482" rel="noreferrer"><strong>Exploring the Limits of GPUs With Parallel Graph Algorithms</strong></a> <br></p></li>
<li><p><a href="http://cvit.iiit.ac.in/papers/Pawan07accelerating.pdf" rel="noreferrer"><strong>Accelerating large graph algorithms on the GPU using CUDA</strong></a>.</p></li>
</ol> | 2010-03-12 08:22:43.807000+00:00 | 2013-09-03 19:08:57.523000+00:00 | 2013-09-03 19:08:57.523000+00:00 | null | 2,431,310 | <p>The current GPU execution and memory models are somehow limited (memory limit, limit of data structures, no recursion...).</p>
<p>Do you think it would be feasible to implement a graph theory problem on a GPU? For example, vertex cover? dominating set? independent set? max clique?....</p>
<p>Is it also feasible to have branch-and-bound algorithms on GPUs? Recursive backtracking?</p> | 2010-03-12 08:17:00.233000+00:00 | 2020-07-19 17:02:58.753000+00:00 | 2020-07-19 17:02:58.753000+00:00 | cuda|graph-theory|gpu | ['http://arxiv.org/abs/1002.4482', 'http://cvit.iiit.ac.in/papers/Pawan07accelerating.pdf'] | 2 |
60,872,321 | <p>As this is a very specific question, I won't go into the mathematical details of Adam. I guess in the article, the line <strong>it computes individual learning rates for different parameters</strong> threw you off.</p>
<p>This is the screenshot of the actual Adam algorithm proposed in the paper <a href="https://arxiv.org/pdf/1412.6980.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1412.6980.pdf</a></p>
<p><a href="https://i.stack.imgur.com/UmcK2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UmcK2.png" alt="enter image description here"></a></p>
<p>Adam keeps an exponentially decaying average of past gradients, so it behaves like a heavy ball with friction, which helps it achieve faster convergence and stability.</p>
<p>But if you look into the algorithm, there's an <strong>alpha</strong> (step size); this is the Keras equivalent of the learning rate = 0.001 we provide. So the algorithm needs a step size to update the parameters (simply put, it's a scaling factor for the weight update). As for the varying learning rate (or update), you can see the last equation (it uses <strong>m_t</strong> and <strong>v_t</strong>, which are updated in the loop), but <strong>alpha</strong> stays fixed in the whole algorithm. This is the Keras learning rate that we have to provide.</p>
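<p>A hedged <code>tf.keras</code> sketch of this point: the single <code>learning_rate</code> you pass is alpha, and scheduling it is separate from Adam's internal per-parameter scaling (the numbers are illustrative):</p>
<pre><code>import tensorflow as tf

opt = tf.keras.optimizers.Adam(learning_rate=0.001)   # this is alpha (the fixed step size)

def schedule(epoch, lr):
    # illustrative decay: halve alpha every 10 epochs
    return lr * 0.5 if epoch > 0 and epoch % 10 == 0 else lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(schedule)
# model.compile(optimizer=opt, ...)
# model.fit(..., callbacks=[lr_callback])
</code></pre>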
<p>As alpha stays the same, we sometimes have to use learning rate scheduling, where we actually decrease the learning rate after a few epochs. There are other variations where we increase the learning rate first and then decrease it.</p> | 2020-03-26 16:58:24.873000+00:00 | 2020-03-26 16:58:24.873000+00:00 | null | null | 60,871,794 | <p>So through my limited understanding of Adam (mainly through this post: <a href="https://towardsdatascience.com/adam-latest-trends-in-deep-learning-optimization-6be9a291375c" rel="nofollow noreferrer">https://towardsdatascience.com/adam-latest-trends-in-deep-learning-optimization-6be9a291375c</a>) I gather that the Adam optimizer computes individual learning rates for each parameter in a network.</p>
<p>But in the Keras docs (<a href="https://keras.io/optimizers/" rel="nofollow noreferrer">https://keras.io/optimizers/</a>) the Adam optimizer takes a learning rate parameter.</p>
<p>My question is how does the learning rate parameter taken by the Adam object correlate to these computed learning rates? As far as I can tell, this isn't covered in the post linked (Or it is but it went over my head).</p> | 2020-03-26 16:28:51.497000+00:00 | 2020-03-27 02:17:28.893000+00:00 | null | machine-learning|keras|neural-network | ['https://arxiv.org/pdf/1412.6980.pdf', 'https://i.stack.imgur.com/UmcK2.png'] | 2 |
50,143,782 | <p>There's a CRAN package called <a href="https://cran.r-project.org/web/packages/knor/index.html" rel="noreferrer">knor</a> that is derived from a <a href="https://arxiv.org/abs/1606.08905" rel="noreferrer">research paper</a> that improves the performance using a memory efficient variant of Elkan's pruning algorithm. It's an order of magnitude faster than everything in these answers.</p>
<pre><code>install.packages("knor")
require(knor)
iris.mat <- as.matrix(iris[,1:4])
k <- length(unique(iris[, dim(iris)[2]])) # Number of unique classes
nthread <- 4
kms <- Kmeans(iris.mat, k, nthread=nthread)
</code></pre> | 2018-05-02 21:27:04.650000+00:00 | 2018-05-02 21:27:04.650000+00:00 | null | null | 20,416,944 | <p>I am trying to understand how to parallelize some of my code using R. So, in the following example I want to use k-means to cluster data using 2,3,4,5,6 centers, while using 20 iterations.
Here is the code: </p>
<pre><code>library(parallel)
library(BLR)
data(wheat)
parallel.function <- function(i) {
kmeans( X[1:100,100], centers=?? , nstart=i )
}
out <- mclapply( c(5, 5, 5, 5), FUN=parallel.function )
</code></pre>
<p>How can we parallelize simultaneously over the iterations and the centers?
How to track the outputs, assuming I want to keep all the outputs from k-means across all, iterations and centers, just to learn how?</p> | 2013-12-06 05:49:31.213000+00:00 | 2018-05-02 21:27:04.650000+00:00 | 2016-06-09 20:41:54.807000+00:00 | r|parallel-processing|parallel-foreach | ['https://cran.r-project.org/web/packages/knor/index.html', 'https://arxiv.org/abs/1606.08905'] | 2 |
43,407,588 | <p>I'm not familiar with the GRU architecture. However, this paper compares the LSTM and GRU architectures; I think it's exactly what you need.</p>
<p><a href="https://arxiv.org/pdf/1412.3555v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1412.3555v1.pdf</a></p> | 2017-04-14 08:01:25.237000+00:00 | 2017-04-14 08:01:25.237000+00:00 | null | null | 43,398,024 | <p>Could somebody explain the similarities and dissimilarities between Long Short Term Memory(LSTM) and Gated Recurrent Unit(GRU) architectures. I know the definitions of each and that GRU lack an output gate and therefore have fewer parameters. Could somebody please give an intuitive explanation / analogy.</p> | 2017-04-13 17:07:03.863000+00:00 | 2017-04-14 08:01:25.237000+00:00 | 2017-04-13 17:12:14.830000+00:00 | neural-network | ['https://arxiv.org/pdf/1412.3555v1.pdf'] | 1 |
64,450,767 | <p>The <code>distilbert</code> model in <strong>ktrain</strong> is created using Hugging Face <strong>transformers</strong>, which means you can use that library to prune the model. See <a href="https://huggingface.co/transformers/bertology.html" rel="nofollow noreferrer">this link</a> for more information and <a href="https://github.com/huggingface/transformers/blob/master/examples/research_projects/bertology/run_bertology.py" rel="nofollow noreferrer">the example script</a>. You may need to convert the model to PyTorch before using the script (in addition to making some modifications to the script itself). The approach is based on the paper <a href="https://arxiv.org/abs/1905.10650" rel="nofollow noreferrer">Are Sixteen Heads Really Better Than One?</a>.</p> | 2020-10-20 17:52:50.110000+00:00 | 2022-01-11 10:09:44.373000+00:00 | 2022-01-11 10:09:44.373000+00:00 | null | 64,445,784 | <p>I have trained a <a href="https://en.wikipedia.org/wiki/BERT_(language_model)" rel="nofollow noreferrer">BERT</a> model using ktrain (TensorFlow wrapper) to recognize emotion on text. It works, but it suffers from really slow inference. That makes my model not suitable for a production environment. I have done some research, and it seems pruning could help.</p>
<p>TensorFlow provides some options for pruning, e.g., <em>tf.contrib.model_pruning</em>. The problem is that it is not a widely used technique. What would be a simple enough example that could help me to understand how to use it?</p>
<p>I provide my working code below for reference.</p>
<pre><code>import pandas as pd
import numpy as np
import preprocessor as p
import emoji
import re
import ktrain
from ktrain import text
from unidecode import unidecode
import nltk
# Text preprocessing class
class TextPreprocessing:
def __init__(self):
p.set_options(p.OPT.MENTION, p.OPT.URL)
def _punctuation(self, val):
val = re.sub(r'[^\w\s]', ' ', val)
val = re.sub('_', ' ', val)
return val
def _whitespace(self, val):
return " ".join(val.split())
def _removenumbers(self, val):
val = re.sub('[0-9] + ', '', val)
return val
def _remove_unicode(self, text):
text = unidecode(text).encode("ascii")
text = str(text, "ascii")
return text
def _split_to_sentences(self, body_text):
sentences = re.split(r"(?<!\w\.\w.)(?<![A-Z][a-z]\.)(?<=\.|\?)\s", body_text)
return sentences
def _clean_text(self, val):
val = val.lower()
val = self._removenumbers(val)
val = p.clean(val)
val = ' '.join(self._punctuation(emoji.demojize(val)).split())
val = self._remove_unicode(val)
val = self._whitespace(val)
return val
def text_preprocessor(self, body_text):
body_text_df = pd.DataFrame({"body_text": body_text}, index=[1])
sentence_split_df = body_text_df.copy()
sentence_split_df["body_text"] = sentence_split_df["body_text"].apply(
self._split_to_sentences)
lst_col = "body_text"
sentence_split_df = pd.DataFrame(
{
col: np.repeat(
sentence_split_df[col].values, sentence_split_df[lst_col].str.len(
)
)
for col in sentence_split_df.columns.drop(lst_col)
}
).assign(**{lst_col: np.concatenate(sentence_split_df[lst_col].values)})[
sentence_split_df.columns
]
body_text_df["body_text"] = body_text_df["body_text"].apply(self._clean_text)
final_df = (
pd.concat([sentence_split_df, body_text_df])
.reset_index()
.drop(columns=["index"])
)
return final_df["body_text"]
# Instantiate data preprocessing object
text1 = TextPreprocessing()
# Import data
data_train = pd.read_csv('data_train_v5.csv', encoding='utf8', engine='python')
data_test = pd.read_csv('data_test_v5.csv', encoding='utf8', engine='python')
# Clean the data
data_train['Text'] = data_train['Text'].apply(text1._clean_text)
data_test['Text'] = data_test['Text'].apply(text1._clean_text)
X_train = data_train.Text.tolist()
X_test = data_test.Text.tolist()
y_train = data_train.Emotion.tolist()
y_test = data_test.Emotion.tolist()
data = data_train.append(data_test, ignore_index=True)
class_names = ['joy', 'sadness', 'fear', 'anger', 'neutral']
encoding = {
'joy': 0,
'sadness': 1,
'fear': 2,
'anger': 3,
'neutral': 4
}
# Integer values for each class
y_train = [encoding[x] for x in y_train]
y_test = [encoding[x] for x in y_test]
trn, val, preproc = text.texts_from_array(x_train=X_train, y_train=y_train,
x_test=X_test, y_test=y_test,
class_names=class_names,
preprocess_mode='distilbert',
maxlen=350)
model = text.text_classifier('distilbert', train_data=trn, preproc=preproc)
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)
predictor = ktrain.get_predictor(learner.model, preproc)
# Save the model on a file for later use
predictor.save("models/bert_model")
message = "This is a happy message"
# Cleaning - takes 5 ms to run
clean = text1._clean_text(message)
# Prediction - takes 325 ms to run
predictor.predict_proba(clean)
</code></pre> | 2020-10-20 13:04:26.517000+00:00 | 2022-01-11 10:09:44.373000+00:00 | 2022-01-10 13:21:42.943000+00:00 | python|tensorflow|nlp|bert-language-model|huggingface-transformers | ['https://huggingface.co/transformers/bertology.html', 'https://github.com/huggingface/transformers/blob/master/examples/research_projects/bertology/run_bertology.py', 'https://arxiv.org/abs/1905.10650'] | 3 |
66,860,346 | <p>Amazing question (and welcome to StackOverflow)! <a href="https://arxiv.org/pdf/1502.01852.pdf" rel="nofollow noreferrer">Research paper for quick reference</a>.</p>
<h2>TLDR</h2>
<ul>
<li>Try wider networks (<code>64</code> channels)</li>
<li>Add Batch Normalization after activation (or even before, shouldn't make much difference)</li>
<li>Add residual connections (shouldn't improve much over batch norm, last resort)</li>
</ul>
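<p>To make the three suggestions above concrete, here is a minimal sketch (my illustration only, assuming PyTorch as in the question, not the answerer's code) of a wider conv block with batch normalization and a residual connection:</p>
<pre><code>import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Conv -> PReLU -> BatchNorm with a skip connection (illustrative sketch only)."""
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2)
        self.act = nn.PReLU(init=0.25)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        # the residual connection keeps the signal magnitude stable even in very deep stacks
        return x + self.bn(self.act(self.conv(x)))
</code></pre>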
<p>Please try these out in this order and leave a comment on what (if anything) worked in your case (as I'm also curious).</p>
<h2>Things you do differently</h2>
<ul>
<li>Your neural network is very deep, yet <strong>very narrow</strong> (<code>81</code> parameters per layer only!)</li>
</ul>
<p>Due to the above, one cannot reliably create those weights from a normal distribution, as the sample is just too small.</p>
<p><strong>Try wider networks, <code>64</code> channels or more</strong></p>
<ul>
<li><strong>You are trying a much deeper network than they did</strong></li>
</ul>
<p><strong>Section: Comparison Experiments</strong></p>
<blockquote>
<p>We conducted comparisons on a deep but efficient model with 14 weight
layers (actually <code>22</code> was also tested in comparison with <code>Xavier</code>)</p>
</blockquote>
<p>That was due to the release date of this paper (<code>2015</code>) and the hardware limitations "back in the day", so to speak.</p>
<h2>Is this normal?</h2>
<p>The approach itself is quite unusual for layers of this depth, at least currently;</p>
<ul>
<li>each conv block is usually followed by activation like <code>ReLU</code> <strong>and Batch Normalization</strong> (which normalizes signal and helps with exploding/vanishing signals)</li>
<li>usually networks of this depth (even of depth half of what you've got) use also residual connections (though this is not directly linked to vanishing/small signal, more connected to degradation problem of even deep networks, like <code>1000</code> layers)</li>
</ul> | 2021-03-29 19:21:04.577000+00:00 | 2021-03-29 23:41:56.877000+00:00 | 2021-03-29 23:41:56.877000+00:00 | null | 66,853,692 | <p>I was implementing a conv block in pytorch with an activation function (prelu). I used Kaiming initialization to initialize all my weights and set all the biases to zero. However, as I tested these blocks (by stacking 100 such conv and activation blocks on top of each other), I noticed that the output values I am getting are of the order of 10^(-10). Is this normal, considering I am stacking up to 100 layers? Adding a small bias to each layer fixes the problem. But in Kaiming initialization the biases are supposed to be zero.</p>
<p>Here is the conv block code</p>
<pre><code>import torch.nn as nn
from collections import Iterable
def convBlock(
input_channels, output_channels, kernel_size=3, padding=None, activation="prelu"
):
"""
Initializes a conv block using Kaiming Initialization
"""
padding_par = 0
if padding == "same":
padding_par = same_padding(kernel_size)
conv = nn.Conv2d(input_channels, output_channels, kernel_size, padding=padding_par)
relu_negative_slope = 0.25
act = None
if activation == "prelu" or activation == "leaky_relu":
nn.init.kaiming_normal_(conv.weight, a=relu_negative_slope, mode="fan_in")
if activation == "prelu":
act = nn.PReLU(init=relu_negative_slope)
else:
act = nn.LeakyReLU(negative_slope=relu_negative_slope)
if activation == "relu":
nn.init.kaiming_normal_(conv.weight, nonlinearity="relu")
act = nn.ReLU()
nn.init.constant_(conv.bias.data, 0)
block = nn.Sequential(conv, act)
return block
def flatten(lis):
for item in lis:
if isinstance(item, Iterable) and not isinstance(item, str):
for x in flatten(item):
yield x
else:
yield item
def Sequential(args):
flattened_args = list(flatten(args))
return nn.Sequential(*flattened_args)
</code></pre>
<p>This is the test Code</p>
<pre><code>import numpy as np
import torch

ls=[]
for i in range(100):
ls.append(convBlock(3,3,3,"same"))
model=Sequential(ls)
test=np.ones((1,3,5,5))
model(torch.Tensor(test))
</code></pre>
<p>And the output I am getting is</p>
<pre><code>tensor([[[[-1.7771e-10, -3.5088e-10, 5.9369e-09, 4.2668e-09, 9.8803e-10],
[ 1.8657e-09, -4.0271e-10, 3.1189e-09, 1.5117e-09, 6.6546e-09],
[ 2.4237e-09, -6.2249e-10, -5.7327e-10, 4.2867e-09, 6.0034e-09],
[-1.8757e-10, 5.5446e-09, 1.7641e-09, 5.7018e-09, 6.4347e-09],
[ 1.2352e-09, -3.4732e-10, 4.1553e-10, -1.2996e-09, 3.8971e-09]],
[[ 2.6607e-09, 1.7756e-09, -1.0923e-09, -1.4272e-09, -1.1840e-09],
[ 2.0668e-10, -1.8130e-09, -2.3864e-09, -1.7061e-09, -1.7147e-10],
[-6.7161e-10, -1.3440e-09, -6.3196e-10, -8.7677e-10, -1.4851e-09],
[ 3.1475e-09, -1.6574e-09, -3.4180e-09, -3.5224e-09, -2.6642e-09],
[-1.9703e-09, -3.2277e-09, -2.4733e-09, -2.3707e-09, -8.7598e-10]],
[[ 3.5573e-09, 7.8113e-09, 6.8232e-09, 1.2285e-09, -9.3973e-10],
[ 6.6368e-09, 8.2877e-09, 9.2108e-10, 9.7531e-10, 7.0011e-10],
[ 6.6954e-09, 9.1019e-09, 1.5128e-08, 3.3151e-09, 2.1899e-10],
[ 1.2152e-08, 7.7002e-09, 1.6406e-08, 1.4948e-08, -6.0882e-10],
[ 6.9930e-09, 7.3222e-09, -7.4308e-10, 5.2505e-09, 3.4365e-09]]]],
grad_fn=<PreluBackward>)
</code></pre> | 2021-03-29 11:50:35.173000+00:00 | 2021-03-29 23:41:56.877000+00:00 | 2021-03-29 19:05:17.523000+00:00 | pytorch|gradient | ['https://arxiv.org/pdf/1502.01852.pdf'] | 1 |
44,626,127 | <p>In the BPTT algorithm, when a word did not play an important role in determining the final output, its gradient will be small and the corresponding weight will become smaller as training goes on. This is automatic, as the LSTM mechanism determines it.</p>
<p>Regarding your concern, you may be misunderstanding the LSTM: the LSTM can solve the vanishing gradient problem because it converts the <code>continual multiplication</code> into <code>continual addition</code>. Simply speaking, hi = a1*h1+a2*h2+a3*h3+..., i.e. the later output is a function of each previous output, so the gradient is preserved. You can refer to <a href="http://proceedings.mlr.press/v37/jozefowicz15.pdf" rel="nofollow noreferrer">An Empirical Exploration of Recurrent Network Architectures</a> for details of the gradient accumulation theory. In addition, nowadays the <strong>attention mechanism</strong> is widely applied and may be more appropriate for your needs; see <a href="https://arxiv.org/pdf/1409.0473.pdf" rel="nofollow noreferrer">Neural Machine Translation By Jointly Learning To Align and Translate</a>.</p> | 2017-06-19 08:55:29.383000+00:00 | 2017-06-19 14:11:46.843000+00:00 | 2017-06-19 14:11:46.843000+00:00 | null | 44,622,253 | <p>First off, I apologize if this isn't appropriate for Stack Overflow. This isn't really a code-related question, but rather a theory question. </p>
<p>This isn't completely clear to me. Say you have a massive passage that you want your LSTM to learn off of: how does it make sure it doesn't remove details from the first paragraph?</p> | 2017-06-19 04:25:12.367000+00:00 | 2017-06-19 14:11:46.843000+00:00 | null | neural-network|lstm|recurrent-neural-network | ['http://proceedings.mlr.press/v37/jozefowicz15.pdf', 'https://arxiv.org/pdf/1409.0473.pdf'] | 2
50,966,000 | <p>It looks like you need to start from the basics; ok, no fear. I will try to suggest a simple route to start using YOLO techniques efficiently. Luckily, the web has a lot of examples.</p>
<ol>
<li>Understand <b> <em>WHAT</em></b> a YOLO method is.
<br> <a href="https://www.youtube.com/watch?v=YQYtgzOf9g4" rel="nofollow noreferrer">Andrew NG's YOLO explanation</a> is a good start, but only if you already know what <em>classification</em> and <em>detection</em> are.</li>
<li>Understand the YOLO <b>Loss</b> function, the heart of the algorithm.<br>
Check the <a href="https://arxiv.org/pdf/1506.02640.pdf" rel="nofollow noreferrer">YOLO</a> paper itself, don't be scared. On page 2, in the <em>Unified Detection</em> section, you will find the <b>information about the bounding box</b> detection used, but be aware that you can use whatever notation you want (even invent a new one), as long as it is compatible with the Loss function, which is the real heart of this algorithm.</li>
<li>Start to implement an example<br>
As I wrote above, there are plenty of examples. You can check <a href="https://pythonprogramming.net/introduction-use-tensorflow-object-detection-api-tutorial/" rel="nofollow noreferrer">this one</a> if you are familiar with python and tensorflow.
Inside it you will find <b>a</b> way to prepare the dataset, that is your target for this question, I think. In this case a tool named labelImg is used.</li>
</ol>
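<p>As a concrete illustration of step 3: most darknet-style YOLO tools (including labelImg's YOLO export) expect one <code>.txt</code> label file per image, with one line per box in the form <code>class_id x_center y_center width height</code>, all normalized to [0, 1]. A rough conversion sketch (my illustration only, the file names and box values are made up):</p>
<pre><code>def to_yolo_line(class_id, box, img_w, img_h):
    """box = (x_min, y_min, x_max, y_max) in pixels -> one normalized YOLO label line."""
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2.0 / img_w
    y_c = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / float(img_w)
    h = (y_max - y_min) / float(img_h)
    return "%d %.6f %.6f %.6f %.6f" % (class_id, x_c, y_c, w, h)

# e.g. a box on a 1920x1080 frame, class 0
print(to_yolo_line(0, (850, 400, 1100, 650), 1920, 1080))
</code></pre>
<p>With this format you keep the full frame and describe the box in the label file rather than cropping the object out, and the training images do not all need to have the same dimensions, since the framework rescales them to the network input size.</p>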
<p>I hope it will be useful. Please share your code when it will be ready, I'm curious :). Good luck!</p> | 2018-06-21 10:15:02.643000+00:00 | 2018-06-21 10:15:02.643000+00:00 | null | null | 49,059,424 | <p>For a project I am using <a href="https://pjreddie.com/darknet/yolo/" rel="nofollow noreferrer">YOLO</a> to detect phallusia (microbial organisms) that swim into focus in a video. The issue is that I have to train YOLO on my own data. The data needs to be segmented so I can isolate the phallusia. I am not sure how to properly segment/cut-out the phallusia to fit the format that YOLO needs. For example in the picture below I want YOLO to detect when a phallusia is in focus similar to the one I have boxed in red. Do I just cut-out that segment of the image and save it as its own image and feed to that to YOLO? Do all segmented images need to have the same dimensions? Not sure what I am doing and could use some guidance. <a href="https://i.stack.imgur.com/htx16.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/htx16.jpg" alt="In focus phallusia"></a></p> | 2018-03-01 22:04:32.387000+00:00 | 2019-07-09 04:23:12.493000+00:00 | 2019-07-09 04:23:12.493000+00:00 | image|conv-neural-network|image-segmentation|training-data|yolo | ['https://www.youtube.com/watch?v=YQYtgzOf9g4', 'https://arxiv.org/pdf/1506.02640.pdf', 'https://pythonprogramming.net/introduction-use-tensorflow-object-detection-api-tutorial/'] | 3 |
70,683,760 | <p>Several problems:</p>
<ul>
<li><p><code>\citep</code> is a natbib macro. If you want to use it in biblatex, you must use the <code>natbib</code> option when you load biblatex.</p>
</li>
<li><p>you shouldn't load packages more than once. You MUSTN'T load them more than once with different options. An error message will explicitly tell you about the option clash for the geometry package.</p>
</li>
<li><p>the syntax <code>\begin{filecontents*}[overwrite]{\references.bib}</code> is wrong, <code>references.bib</code> should just be the filename, not a (non-existent) macro</p>
</li>
<li><p>the <code>note</code> field in the wikipedia entry caused some problems, so I moved it to another field.</p>
</li>
</ul>
<hr />
<pre><code>\documentclass[12pt]{report}
\usepackage{setspace}
%\doublespacing
\usepackage[paperwidth=21cm,paperheight=29.7cm,includehead,headheight=1.5cm,pdftex,hmargin={3cm,2.5cm},vmargin={0cm,2cm},]{geometry}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{filecontents}
\usepackage{sectsty}
\usepackage{fancyhdr}
\usepackage{enumerate}
\usepackage{amsmath,amssymb}
\usepackage[english]{babel}
\usepackage[nottoc]{tocbibind}
\usepackage[autostyle,english=american]{csquotes}
\usepackage{pdfpages}
\usepackage{float}
\usepackage{verbatim}
\usepackage{fancyvrb}
\usepackage{here}
\usepackage{amsfonts}
\usepackage{url}
\usepackage{listings}
\usepackage{xcolor}
\usepackage{courier}
\usepackage{subfigure}
\usepackage{booktabs, cellspace, hhline}
\definecolor{mygreen}{rgb}{0,0.6,0}
\definecolor{mygray}{rgb}{0.5,0.5,0.5}
\definecolor{mymauve}{rgb}{0.58,0,0.82}
\usepackage{hyperref}
\hypersetup{
colorlinks=true,
linkcolor=blue,
filecolor=magenta,
urlcolor=cyan,
}
\lstset{
mathescape=true,
basicstyle = \ttfamily
}
%\usepackage[style=ieee,backend=biber]{biblatex}
\usepackage[backend=biber,style=chicago-authordate,natbib=true]{biblatex}
\addbibresource{references.bib}
%\usepackage[backend=biber,style=nature]{biblatex}
\begin{filecontents*}[overwrite]{references.bib}
@article{chandrakasan1995minimizing,
title={Minimizing power consumption in digital CMOS circuits},
author={Chandrakasan, Anantha P and Brodersen, Robert W},
journal={Proceedings of the IEEE},
volume={83},
number={4},
pages={498--523},
year={1995},
publisher={IEEE}
}
@misc{enwiki1062224546,
author = "{Wikipedia contributors}",
title = "CMOS --- {Wikipedia}{,} The Free Encyclopedia",
year = "2021",
howpublished = "\url{https://en.wikipedia.org/w/index.php?title=CMOS&oldid=1062224546} [Online; accessed 30-December-2021]"
}
@article{Bankman2018AnA3,
title={An always-on 3.8$\mu$J/86\% CIFAR-10 mixed-signal binary CNN processor with all memory on chip in 28nm CMOS},
author={Daniel Bankman and Lita Yang and Bert Moons and Marian Verhelst and Boris Murmann},
journal={2018 IEEE International Solid - State Circuits Conference - (ISSCC)},
year={2018},
pages={222-224}
}
@book{ann, title = {Artificial Neural Networks}, author={Ajith Abraham}, journal={Oklahoma State University, Stillwater, OK, USA}}
@article{courbariaux2016binarized,
title={Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1},
author={Courbariaux, Matthieu and Hubara, Itay and Soudry, Daniel and El-Yaniv, Ran and Bengio, Yoshua},
journal={arXiv preprint arXiv:1602.02830},
year={2016}
}
@ARTICLE{104196,
author={Boser, B.E. and Sackinger, E. and Bromley, J. and Le Cun, Y. and Jackel, L.D.},
journal={IEEE Journal of Solid-State Circuits},
title={An analog neural network processor with programmable topology},
year={1991},
volume={26},
number={12},
pages={2017-2025},
doi={10.1109/4.104196}}
@article{article,
author = {Li, Ji and Yuan, Zihao and Li, Zhe and Ding, Caiwen and Ren, Ao and Qiu, Qinru and Draper, Jeffrey and Wang, Yetang},
year = {2017},
month = {03},
pages = {},
title = {Hardware-Driven Nonlinear Activation for Stochastic Computing Based Deep Convolutional Neural Networks}
}
@article{forssell2014hardware,
title={Hardware implementation of artificial neural networks},
author={Forssell, Mats},
journal={Information Flow in Networks},
volume={18},
pages={1--4},
year={2014}
}
@article{davari1995cmos,
title={CMOS scaling for high performance and low power-the next ten years},
author={Davari, Bijan and Dennard, Robert H and Shahidi, Ghavam G},
journal={Proceedings of the IEEE},
volume={83},
number={4},
pages={595--606},
year={1995},
publisher={IEEE}
}
@article{ng1996performance,
title={Performance of CMOS differential circuits},
author={Ng, Pius and Balsara, Poras T and Steiss, Don},
journal={IEEE Journal of Solid-State Circuits},
volume={31},
number={6},
pages={841--846},
year={1996},
publisher={IEEE}
}
\end{filecontents*}
\usepackage{cleveref}
\begin{document}
\printbibliography
\citep{chandrakasan1995minimizing}
\citep{enwiki1062224546}
\citep{Bankman2018AnA3}
\citep{courbariaux2016binarized}
\citep{104196}
\citep{article}
\citep{ann}
\citep{davari1995cmos}
\citep{forssell2014hardware}
\citep{ng1996performance}
\end{document}
</code></pre>
<p><a href="https://i.stack.imgur.com/vlphD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vlphD.png" alt="enter image description here" /></a></p> | 2022-01-12 14:58:22.097000+00:00 | 2022-01-12 15:03:32.757000+00:00 | 2022-01-12 15:03:32.757000+00:00 | null | 70,683,370 | <p>Hi I have tried a lot of things and gone through several questions posted earlier but I can't seem to get my bibliography to print. I get the following errors:</p>
<ol>
<li>Empty Bibliography (when I write \printbibliography)</li>
<li>Undefined Control Sequence (when I overwrite file contents for reference.bib in my main.tex)</li>
</ol>
<p>Things I have tried:</p>
<ol>
<li>Changing the backend to biber and biblatex both. None worked.</li>
<li>Adding overwrite file contents and reinputting the bib file content in main.tex and then cite them one by one using \citep{}</li>
<li>Changing styles</li>
</ol>
<p>I have posted all of my code here (main.tex) in case there are some other code lines that might be messing with the use package of bibliography.</p>
<pre><code>\documentclass[12pt]{report}
\usepackage{setspace}
%\doublespacing
\usepackage[paperwidth=21cm,paperheight=29.7cm,includehead,headheight=1.5cm,pdftex,hmargin={3cm,2.5cm},vmargin={0cm,2cm},]{geometry}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{filecontents}
\usepackage{sectsty}
\usepackage{fancyhdr}
\usepackage{enumerate}
\usepackage{amsmath,amssymb}
\usepackage[english]{babel}
\usepackage[nottoc]{tocbibind}
\usepackage[autostyle,english=american]{csquotes}
\usepackage{pdfpages}
\usepackage{float}
\usepackage{verbatim}
\usepackage{fancyvrb}
\usepackage{here}
\usepackage{amsfonts}
\usepackage{url}
\usepackage{listings}
\usepackage{xcolor}
\usepackage{courier}
\usepackage{subfigure}
\usepackage{booktabs, cellspace, hhline}
\definecolor{mygreen}{rgb}{0,0.6,0}
\definecolor{mygray}{rgb}{0.5,0.5,0.5}
\definecolor{mymauve}{rgb}{0.58,0,0.82}
\usepackage{hyperref}
\hypersetup{
colorlinks=true,
linkcolor=blue,
filecolor=magenta,
urlcolor=cyan,
}
\lstset{
mathescape=true,
basicstyle = \ttfamily
}
%\usepackage[style=ieee,backend=biber]{biblatex}
\usepackage[backend=biber,style=chicago-authordate]{biblatex}
\addbibresource{references.bib}
%\usepackage[backend=biber,style=nature]{biblatex}
\begin{filecontents*}[overwrite]{\references.bib}
@article{chandrakasan1995minimizing,
title={Minimizing power consumption in digital CMOS circuits},
author={Chandrakasan, Anantha P and Brodersen, Robert W},
journal={Proceedings of the IEEE},
volume={83},
number={4},
pages={498--523},
year={1995},
publisher={IEEE}
}
@misc{ enwiki:1062224546,
author = "{Wikipedia contributors}",
title = "CMOS --- {Wikipedia}{,} The Free Encyclopedia",
year = "2021",
howpublished = "\url{https://en.wikipedia.org/w/index.php?title=CMOS&oldid=1062224546}",
note = "[Online; accessed 30-December-2021]"
}
@article{Bankman2018AnA3,
title={An always-on 3.8$\mu$J/86\% CIFAR-10 mixed-signal binary CNN processor with all memory on chip in 28nm CMOS},
author={Daniel Bankman and Lita Yang and Bert Moons and Marian Verhelst and Boris Murmann},
journal={2018 IEEE International Solid - State Circuits Conference - (ISSCC)},
year={2018},
pages={222-224}
}
@book{ann, title = {Artificial Neural Networks}, author={Ajith Abraham}, journal={Oklahoma State University, Stillwater, OK, USA}}
@article{courbariaux2016binarized,
title={Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1},
author={Courbariaux, Matthieu and Hubara, Itay and Soudry, Daniel and El-Yaniv, Ran and Bengio, Yoshua},
journal={arXiv preprint arXiv:1602.02830},
year={2016}
}
@ARTICLE{104196,
author={Boser, B.E. and Sackinger, E. and Bromley, J. and Le Cun, Y. and Jackel, L.D.},
journal={IEEE Journal of Solid-State Circuits},
title={An analog neural network processor with programmable topology},
year={1991},
volume={26},
number={12},
pages={2017-2025},
doi={10.1109/4.104196}}
@article{article,
author = {Li, Ji and Yuan, Zihao and Li, Zhe and Ding, Caiwen and Ren, Ao and Qiu, Qinru and Draper, Jeffrey and Wang, Yetang},
year = {2017},
month = {03},
pages = {},
title = {Hardware-Driven Nonlinear Activation for Stochastic Computing Based Deep Convolutional Neural Networks}
}
@article{forssell2014hardware,
title={Hardware implementation of artificial neural networks},
author={Forssell, Mats},
journal={Information Flow in Networks},
volume={18},
pages={1--4},
year={2014}
}
@article{davari1995cmos,
title={CMOS scaling for high performance and low power-the next ten years},
author={Davari, Bijan and Dennard, Robert H and Shahidi, Ghavam G},
journal={Proceedings of the IEEE},
volume={83},
number={4},
pages={595--606},
year={1995},
publisher={IEEE}
}
@article{ng1996performance,
title={Performance of CMOS differential circuits},
author={Ng, Pius and Balsara, Poras T and Steiss, Don},
journal={IEEE Journal of Solid-State Circuits},
volume={31},
number={6},
pages={841--846},
year={1996},
publisher={IEEE}
}
\end{filecontents*}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
\DeclareUnicodeCharacter{202F}{\,}
\setlength{\parindent}{0pt}
\linespread{1.6}
\renewcommand{\familydefault}{\sfdefault}
\usepackage{lastpage}
\pagestyle{fancy}
\fancyhead{}
\fancyhead[RO, LE]{Interim Report - 20203108}
\fancypagestyle{plain}{}
\fancyfoot[C]{Page {\thepage} of \pageref{LastPage}}
\renewcommand{\footrulewidth}{0.3pt}
\makeatletter
\renewcommand{\l@chapter}{\bfseries\@dottedtocline{1}{0em}{2.3em}}
\renewcommand{\l@section}{\normalfont\@dottedtocline{2}{2em}{2.3em}}
\renewcommand{\l@subsection}{\normalfont\@dottedtocline{3}{2em}{2.3em}}
\renewcommand{\l@subsubsection}{\normalfont\@dottedtocline{4}{2em}{2.3em}}
\makeatother
\usepackage{cleveref}
\begin{document}
\crefname{chapter}{Chapter}{Chapters}
\crefname{section}{Section}{Sections}
\crefname{subsection}{Section}{Sections}
\crefname{figure}{Figure}{Figures}
\crefname{table}{Table}{Tables}
\input{Section/0.Title}
\clearpage
\renewcommand{\contentsname}{Table of Contents}
\tableofcontents
\clearpage
\chapter{Introduction}
\input{Section/1.Introduction & Project Aim}
\chapter{Project Background}
\input{Section/2.Project Background}
\chapter{Literature Review}
\input{Section/3.Literature Review}
\chapter{Outline of Approach \& Preliminary Results}
\input{Section/4.Outline of Approach}
\chapter{Ethics \& Sustainability}
\input{Section/5.Ethics & Sustainability}
\chapter{Project Work Plan}
\input{Section/6.Project Work Plan}
\clearpage
\addcontentsline{toc}{chapter}{Bibliography}
\printbibliography
\citep{chandrakasan1995minimizing}
\citep{enwiki:1062224546}
\citep{Bankman2018AnA3}
\citep{courbariaux2016binarized}
\citep{104196}
\citep{article}
\citep{ann}
\citep{davari1995cmos}
\citep{forssell2014hardware}
\citep{ng1996performance}
\end{document}
</code></pre> | 2022-01-12 14:31:52.763000+00:00 | 2022-08-18 10:21:08.243000+00:00 | 2022-08-18 10:21:08.243000+00:00 | latex|bibliography|biblatex | ['https://i.stack.imgur.com/vlphD.png'] | 1 |
54,558,864 | <p>According to this Oct 2018 paper: <a href="https://arxiv.org/pdf/1810.01109.pdf" rel="nofollow noreferrer">AI Benchmark: Running Deep Neural Networks
on Android Smartphones</a>, the NNAPI defaults to the CPU path when no specific hardware and/or no drivers are available. Toward the end of the paper it notes that a number of devices have implementation issues.</p>
<p>As the paper's authors include representatives from Qualcomm, ARM, Huawei, MediaTek, and ETH Zurich, it is probably the most comprehensive overview of the state of machine learning on Android.</p>
<p>In Jan 2019 Google announced <a href="https://medium.com/tensorflow/tensorflow-lite-now-faster-with-mobile-gpus-developer-preview-e15797e6dee7?linkId=62443226" rel="nofollow noreferrer">TensorFlow Lite with GPU acceleration in developer preview</a> which will address some of the issues raised in the paper.</p>
<p><strong>2020 July Update:</strong></p>
<p>The researchers have a site at: <a href="http://ai-benchmark.com/" rel="nofollow noreferrer">http://ai-benchmark.com/</a></p>
<p>And have updated their paper in Oct 2019:
<a href="https://arxiv.org/pdf/1910.06663.pdf" rel="nofollow noreferrer">AI Benchmark: All About Deep Learning on Smartphones in 2019</a></p> | 2019-02-06 17:01:48.967000+00:00 | 2020-07-30 17:16:40.550000+00:00 | 2020-07-30 17:16:40.550000+00:00 | null | 54,557,026 | <p>I'm trying to use the new Google machine learning sdk, ML Kit, on an Android devices that run Android 9.
From the official site:</p>
<blockquote>
<p>ML Kit makes it easy to apply ML techniques in your apps by bringing
Google's ML technologies, such as the Google Cloud Vision API,
TensorFlow Lite, and the Android Neural Networks API together in a
single SDK. Whether you need the power of cloud-based processing, the
real-time capabilities of mobile-optimized on-device models, or the
flexibility of custom TensorFlow Lite models, ML Kit makes it possible
with just a few lines of code.</p>
</blockquote>
<p>I think it means that on a device with at least Android 8.1 (according to the documentation of NNAPI) the SDK can use NNAPI. But when I run the same app on a device with Android 7.1 (where NNAPI is not supported) I obtain the same performance as on the device that runs Android 9 (and in theory uses the NNAPI). How can I use ML Kit with NNAPI? Am I doing something wrong?
Link to documentation of mlkit: <a href="https://firebase.google.com/docs/ml-kit/" rel="nofollow noreferrer">https://firebase.google.com/docs/ml-kit/</a></p> | 2019-02-06 15:25:06.733000+00:00 | 2020-07-30 17:16:40.550000+00:00 | 2019-06-05 17:04:40.163000+00:00 | android|machine-learning|firebase-mlkit|nnapi | ['https://arxiv.org/pdf/1810.01109.pdf', 'https://medium.com/tensorflow/tensorflow-lite-now-faster-with-mobile-gpus-developer-preview-e15797e6dee7?linkId=62443226', 'http://ai-benchmark.com/', 'https://arxiv.org/pdf/1910.06663.pdf'] | 4 |
73,704,819 | <p>I believe this is because of Qiskit <code>initialize</code>, and not specific to the IonQ backend. You'd likely see the same (or worse!) gate depth with any backend.</p>
<p>Qiskit <code>initialize</code> uses a very general-purpose and therefore relatively naive algorithm for state encoding. Specifically, that from <a href="https://arxiv.org/abs/quant-ph/0406176v5" rel="nofollow noreferrer">Synthesis of Quantum Logic Circuits (2004)</a>; this is mentioned <a href="https://qiskit.org/documentation/_modules/qiskit/circuit/library/data_preparation/state_preparation.html#StatePreparation" rel="nofollow noreferrer">in the source for the method</a>.</p>
<p>Specifically, they say in the paper that using the method, an arbitrary n-qubit quantum state can be prepared by a circuit containing no more than 2<sup>n+1</sup> − 2n CNOT gates. For an n of 8, like you have here, that's 496 CNOTs, which is about what you're seeing.</p>
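<p>A quick way to see this yourself (a sketch of mine, not from the original post) is to initialize a random 8-qubit state and count the two-qubit gates after transpiling to a CNOT-based basis:</p>
<pre><code>import numpy as np
from qiskit import QuantumCircuit, transpile

n = 8
state = np.random.rand(2**n) + 1j * np.random.rand(2**n)
state /= np.linalg.norm(state)              # normalize to a valid statevector

qc = QuantumCircuit(n)
qc.initialize(state, range(n))              # Qiskit's general-purpose state preparation

compiled = transpile(qc, basis_gates=["cx", "rz", "sx", "x"], optimization_level=1)
print(compiled.count_ops())                 # expect on the order of 2**(n+1) - 2*n CX gates
</code></pre>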
<p>Any encoding approach that is more specifically tailored to what you're trying to do will likely work better.</p> | 2022-09-13 14:34:27.310000+00:00 | 2022-09-20 16:10:04.760000+00:00 | 2022-09-20 16:10:04.760000+00:00 | null | 73,270,764 | <p>I'm encoding data into a QuantumCircuit via the Initialize method for QFTs. In doing this and transpiling for IonQ backends, I'm getting rather complex circuits. Is there a way to encode this data more efficiently for IonQ backends or a method to approximate this circuit? Thanks in advance!</p>
<p><a href="https://i.stack.imgur.com/J3NLB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3NLB.png" alt="Encoding a sine wave into an 8 qbit register, then transpiling for IonQ sim backend" /></a></p> | 2022-08-07 20:22:45.370000+00:00 | 2022-09-20 16:10:04.760000+00:00 | 2022-08-07 21:42:04.140000+00:00 | quantum-computing|qiskit|azure-quantum | ['https://arxiv.org/abs/quant-ph/0406176v5', 'https://qiskit.org/documentation/_modules/qiskit/circuit/library/data_preparation/state_preparation.html#StatePreparation'] | 2 |
48,116,217 | <p>Short answer: <strong>NO!</strong></p>
<p>You can't use an internal index to justify the choice of one algorithm over another. <strong>Why?</strong></p>
<p>Because evaluation indexes were designed to evaluate clustering results, i.e., partitions and hierarchies. You can only use them to assess the quality of a clustering and therefore justify its choice over the other options. But again, you can't use them to justify choosing a particular algorithm to apply on a <strong>different dataset</strong> based on a single previous experiment.</p>
<p>For this task, several benchmarks are needed to determine which algorithms are generally better and should be tried first. Here some paper about it: <a href="https://arxiv.org/abs/0908.1062" rel="nofollow noreferrer">Community detection algorithms: a comparative analysis</a>.</p>
<p><strong>Edit:</strong> What I am saying is, your validation indexes may show that the <code>fast.greed</code>'s solution is better than the <code>walk.trap</code>'s. However, they do not explain why you chose these algorithms instead of any others. Only your data, your assumptions, and your constraints could do that.</p>
<blockquote>
<p>Also, is there any way to validate a disconnected graph?</p>
</blockquote>
<p>Theoretically, any evaluation index can do this. Technically, some implementations don't handle disconnected components.</p> | 2018-01-05 15:07:05.830000+00:00 | 2018-01-10 05:00:15.600000+00:00 | 2018-01-10 05:00:15.600000+00:00 | null | 48,057,714 | <p>From what I understand, it is usually difficult to select the best possible clustering method for your data priori, and we can use cluster validity to compare the results of different clustering algorithms and choose the one with the best validation scores.</p>
<p>I use an internal validation function from R <code>stats</code> package on my clustering result (for clustering methods I used R <code>igraph</code> <code>fast.greedy</code> and <code>walk.trap</code>).
The outcome is a list of many validation scores.</p>
<p>In the list, the fast greedy method has better scores than walk trap in almost every validation measure, except for <code>entropy</code>, where the walk trap method has a better score.</p>
<p>Can I use this validation result list as one of my reasons to explain to others why I chose the fast greedy method rather than the walk trap method? </p>
<p>Also, is there any way to validate a disconnected graph?</p> | 2018-01-02 08:23:24.073000+00:00 | 2018-01-10 05:00:15.600000+00:00 | 2018-01-02 08:44:37.793000+00:00 | r|validation|social-networking | ['https://arxiv.org/abs/0908.1062'] | 1 |
50,780,854 | <p><strong>Edit:</strong> see also <a href="https://github.com/tensorflow/tensorflow/pull/17438" rel="noreferrer">this PR</a> which just got merged into TF.</p>
<p>When using pure SGD (without momentum) as an optimizer, weight decay is the same thing as adding a L2-regularization term to the loss. <strong>When using any other optimizer, this is not true.</strong></p>
<p>Weight decay (don't know how to TeX here, so excuse my pseudo-notation):</p>
<pre><code>w[t+1] = w[t] - learning_rate * dw - weight_decay * w
</code></pre>
<p>L2-regularization:</p>
<pre><code>loss = actual_loss + lambda * 1/2 sum(||w||_2 for w in network_params)
</code></pre>
<p>Computing the gradient of the extra term in L2-regularization gives <code>lambda * w</code> and thus inserting it into the SGD update equation</p>
<pre><code>dloss_dw = dactual_loss_dw + lambda * w
w[t+1] = w[t] - learning_rate * dw
</code></pre>
<p>gives the same as weight decay, but mixes <code>lambda</code> with the <code>learning_rate</code>. Any other optimizer, even SGD with momentum, gives a different update rule for weight decay than for L2-regularization! See the paper <a href="https://arxiv.org/abs/1711.05101" rel="noreferrer">Fixing weight decay in Adam</a> for more details. (Edit: AFAIK, <a href="http://www.cs.toronto.edu/~hinton/absps/parle.pdf" rel="noreferrer">this 1987 Hinton paper</a> introduced "weight decay", literally as "each time the weights are updated, their magnitude is also decremented by 0.4%" on page 10)</p>
<p>That being said, there doesn't seem to be support for "proper" weight decay in TensorFlow yet. There are a few issues discussing it, specifically because of above paper.</p>
<p>One possible way to implement it is by writing an op that does the decay step manually after every optimizer step. A different way, which is what I'm currently doing, is using an additional SGD optimizer just for the weight decay, and "attaching" it to your <code>train_op</code>. Both of these are just crude work-arounds, though. My current code:</p>
<pre><code># In the network definition:
with arg_scope([layers.conv2d, layers.dense],
weights_regularizer=layers.l2_regularizer(weight_decay)):
# define the network.
loss = # compute the actual loss of your problem.
train_op = optimizer.minimize(loss, global_step=global_step)
if args.weight_decay not in (None, 0):
with tf.control_dependencies([train_op]):
sgd = tf.train.GradientDescentOptimizer(learning_rate=1.0)
train_op = sgd.minimize(tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)))
</code></pre>
<p>This somewhat makes use of TensorFlow's provided bookkeeping. Note that the <code>arg_scope</code> takes care of appending an L2-regularization term for every layer to the <code>REGULARIZATION_LOSSES</code> graph-key, which I then all sum up and optimize using SGD which, as shown above, corresponds to actual weight-decay.</p>
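<p>For completeness, the first work-around mentioned above (an explicit decay op executed after every optimizer step) could look roughly like the following. This is my sketch in the same TF1 graph style, not the original answer's code:</p>
<pre><code>with tf.control_dependencies([train_op]):
    # after each optimizer step, shrink every trainable weight by weight_decay * w
    decay_ops = [tf.assign_sub(w, weight_decay * w) for w in tf.trainable_variables()]
    train_op = tf.group(*decay_ops)
</code></pre>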
<p>Hope that helps, and if anyone gets a nicer code snippet for this, or TensorFlow implements it better (i.e. in the optimizers), please share.</p> | 2018-06-10 05:35:18.150000+00:00 | 2019-11-20 10:24:13.427000+00:00 | 2019-11-20 10:24:13.427000+00:00 | null | 44,452,571 | <p>Since the Adam Optimizer keeps a pair of running averages like mean/variance for the gradients, I wonder how it should properly handle weight decay. I have seen two ways of implementing it.</p>
<ol>
<li><p>Only update mean/variance from the gradients based on the objective loss, decay weight explicitly at each mini-batch. (the following code is taken from <a href="https://github.com/dmlc/mxnet/blob/v0.7.0/python/mxnet/optimizer.py" rel="noreferrer">https://github.com/dmlc/mxnet/blob/v0.7.0/python/mxnet/optimizer.py</a>)</p>
<pre><code>weight[:] -= lr*mean/(sqrt(variance) + self.epsilon)
wd = self._get_wd(index)
if wd > 0.:
weight[:] -= (lr * wd) * weight
</code></pre></li>
<li><p>Update mean/variance from the gradients based on the objective loss + regularization loss, and update weights like usual. (the following code is taken from <a href="https://github.com/dmlc/mxnet/blob/master/src/operator/optimizer_op-inl.h#L210" rel="noreferrer">https://github.com/dmlc/mxnet/blob/master/src/operator/optimizer_op-inl.h#L210</a>)</p>
<pre><code>grad = scalar<DType>(param.rescale_grad) * grad +
scalar<DType>(param.wd) * weight;
// stuff
Assign(out, req[0],
weight -
scalar<DType>(param.lr) * mean /
(F<square_root>(var) + scalar<DType>(param.epsilon)));
</code></pre></li>
</ol>
<p>These two approaches sometimes show a significant difference in training results. And I actually think the first one makes more sense (and find that it gives better results from time to time). </p>
<p>Really appreciate your help!</p> | 2017-06-09 08:08:41.840000+00:00 | 2019-11-20 10:24:13.427000+00:00 | null | tensorflow|deep-learning|caffe|torch|mxnet | ['https://github.com/tensorflow/tensorflow/pull/17438', 'https://arxiv.org/abs/1711.05101', 'http://www.cs.toronto.edu/~hinton/absps/parle.pdf'] | 3 |
52,904,901 | <p>I've used intermediate layers from CNNs to do that sort of robust comparison in some projects in the past. Basically, you take a CNN that has been trained for some task like image segmentation, and then try to identify layers or combinations of layers that offer a good balance of geometric/photometric features for your matching. Then at test time, you pass the images in the CNN, and compare those features with for example, a Euclidean distance. My images were similar to yours, and I needed something that was fast, so at that time <a href="https://arxiv.org/abs/1606.02147" rel="nofollow noreferrer">Enet</a> was a good choice for me (well, there are better choices now). I ended up using a combination of features from its 21st and 5th layers that ended up working well in practice. However, if your images are from a sequence where you can exploit temporal information, I strongly recommend that you take a look at <a href="https://ieeexplore.ieee.org/document/6224623" rel="nofollow noreferrer">SeqSLAM</a> (sorry, couldn't find a non-paywall version. The interesting thing with this is that it <strong>doesn't require any CNNs</strong>, is real-time, and if memory serves, uses just very simple <strong>pyramidal intensity based comparisons</strong> for the matching, similar to SPP), as well as <a href="https://arxiv.org/pdf/1704.05016.pdf" rel="nofollow noreferrer">this</a> paper, which improves SeqSLAM with layers from CNNs. </p> | 2018-10-20 10:56:36.017000+00:00 | 2018-10-20 19:37:00.300000+00:00 | 2018-10-20 19:37:00.300000+00:00 | null | 52,898,528 | <p>For two images, one in a sunny weather and another in rainy weather with virtually no difference in the content and objects except the weather, Is there any metric to say that they are highly similar?
<a href="https://i.stack.imgur.com/Zx0Xy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zx0Xy.png" alt="Rainy image"></a></p>
<p><a href="https://i.stack.imgur.com/VuMXo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VuMXo.png" alt="Normal Image"></a></p>
<p>Vs... an image which is visibly not so similar..
<a href="https://i.stack.imgur.com/h3NGB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h3NGB.png" alt="Not similar image"></a></p> | 2018-10-19 19:05:28.707000+00:00 | 2018-10-20 22:56:09.217000+00:00 | 2018-10-20 10:14:29.397000+00:00 | image|image-processing|computer-vision|similarity|weather | ['https://arxiv.org/abs/1606.02147', 'https://ieeexplore.ieee.org/document/6224623', 'https://arxiv.org/pdf/1704.05016.pdf'] | 3 |
1,975,243 | <p>This is called a <a href="http://en.wikipedia.org/wiki/Partition_(number_theory)" rel="nofollow noreferrer">partition problem</a> and approaches are discussed <a href="http://home.att.net/~numericana/answer/numbers.htm#partitions" rel="nofollow noreferrer">here</a>, <a href="http://www.site.uottawa.ca/~ivan/F49-int-part.pdf" rel="nofollow noreferrer">here</a> and <a href="http://arxiv.org/PS_cache/arxiv/pdf/0909/0909.2331v1.pdf" rel="nofollow noreferrer">here</a>.</p> | 2009-12-29 15:38:54.300000+00:00 | 2009-12-29 15:38:54.300000+00:00 | null | null | 1,975,201 | <p>I am writing a program to try to solve a math problem. I need to generate a unique list of all of the numbers that add up to another number. For example, all of the unqiue combinations of 4 numbers that add up to 5 are:</p>
<pre><code>5 0 0 0
4 1 0 0
3 2 0 0
3 1 1 0
2 2 1 0
2 1 1 1
</code></pre>
<p>This is easy to brute force in perl but I am working in C and would like to find a more elegant solution.</p>
<p>In perl I would generate every possible combination of numbers 0-N in each column, discard the ones that don't add up to the target number, then sort the numbers in each row and remove the duplicate rows.</p>
<p>I've been trying all morning to write this in C but can't seem to come up with a satisfactory solution. I need it to work up to a <strong>maximum N of about 25</strong>. Do you guys have any ideas? </p>
<p>Here is an example of the kind of thing I have been trying (this produces duplicate combinations):</p>
<pre><code>#include <stdio.h>

// target is the number each row should sum to.
// Don't worry about overflows, I am only using small values for target
void example(int target)
{
int row[4];
for (int a=target; a>=0; a--) {
row[0] = a;
for (int b=target-a; b>=0; b--) {
row[1] = b;
for (int c=target-(a+b); c>=0; c--) {
row[2] = c;
row[3] = target-(a+b+c);
printf ("%2d %2d %2d %2d sum: %d\n", row[0],row[1],row[2],row[3],
row[0]+row[1]+row[2]+row[3]);
}
}
}
}
</code></pre> | 2009-12-29 15:29:50.530000+00:00 | 2009-12-29 15:59:38.553000+00:00 | 2009-12-29 15:36:24.400000+00:00 | c++|c|math | ['http://en.wikipedia.org/wiki/Partition_(number_theory)', 'http://home.att.net/~numericana/answer/numbers.htm#partitions', 'http://www.site.uottawa.ca/~ivan/F49-int-part.pdf', 'http://arxiv.org/PS_cache/arxiv/pdf/0909/0909.2331v1.pdf'] | 4 |
61,580,287 | <h1>Suspect #1 - Regularization</h1>
<p>Neural networks are great at overfitting the training data; in fact, there is an <a href="https://arxiv.org/pdf/1611.03530.pdf" rel="nofollow noreferrer">experiment</a> that replaces the CIFAR10 (image classification task) labels (y values) with random labels on the training dataset, and the network still fits the random labels, resulting in almost zero loss.</p>
<p><a href="https://i.stack.imgur.com/pzaXj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pzaXj.png" alt="enter image description here"></a></p>
<blockquote>
<p>on the left side we can see that, given enough epochs, random labels
get around 0 loss - a perfect score (from <a href="https://arxiv.org/pdf/1611.03530.pdf" rel="nofollow noreferrer">understanding deep learning
requires re-thinking generalization by zhang et al 2016</a>)</p>
</blockquote>
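<p>In the question's own Keras setup, that sanity check would amount to something like this (my illustration only, not tested on the author's data):</p>
<pre><code>import numpy as np

y_random = np.random.permutation(y_train)   # shuffle the targets, destroying any x->y relation
model.fit(x_train, y_random, batch_size=64, epochs=500)   # a big enough, unregularized net can still push the loss toward 0
</code></pre>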
<p>So why is this not happening all the time? <strong>Regularization</strong>.</p>
<p>Regularization is (roughly) trying to solve a harder problem than the optimization problem (the loss) we defined for the model.</p>
<p>Some common regularization methods in neural networks:</p>
<ul>
<li>early stopping</li>
<li>dropout</li>
<li>batch normalization</li>
<li>weight decay (e.g. l1 l2 norms)</li>
<li>data augmentation</li>
<li>adding random/gaussian noise</li>
</ul>
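<p>In the question's Keras code several of these can be switched on directly, for example (my illustration, not the original answer's code and not tuned):</p>
<pre><code>from keras import regularizers

model.add(LSTM(32, activation='sigmoid',
               dropout=0.2, recurrent_dropout=0.2,              # dropout
               kernel_regularizer=regularizers.l2(1e-4),        # weight decay (L2 norm)
               input_shape=(x_train.shape[1], x_train.shape[2])))
</code></pre>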
<p>These methods help reduce overfitting and usually result in better validation and test performance, but they result in lower train performance (which doesn't actually matter, as explained in the last paragraph).</p>
<p>Training-data performance is usually not so important, and for that reason we use the validation set.</p>
<h2>Suspect #2 - Model Size</h2>
<p>You are using a single LSTM layer with 32 units. That's pretty small.
Try increasing the size and even stacking two LSTM layers (or a bidirectional one), and I'm sure the model and the optimizer will overfit your data as long as you let them - i.e. remove the early stopping, <code>restore_best_weights</code> and any other regularization specified above; a rough sketch is given below.</p>
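<p>A rough sketch of what that could look like, based on the question's code (illustration only):</p>
<pre><code>model = Sequential()
model.add(LSTM(128, return_sequences=True, input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(LSTM(64))
model.add(Dense(y_train.shape[1]))
model.compile(optimizer='adam', loss='mse')
# deliberately no EarlyStopping / restore_best_weights here - we *want* it to overfit the train set
model.fit(x_train, y_train, batch_size=64, epochs=200)
</code></pre>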
<h2>Note on Problem Complexity</h2>
<p>Trying to predict future stock prices just by looking at the history is not an easy task, and even if the model can (over)fit the training set perfectly, it will probably not do anything useful on the test set or in the real world.</p>
<p>ML is not black magic: the x samples need to be correlated in some way with the y tags; we usually assume that (x,y) are drawn from some joint distribution.</p>
<p>A more intuitive way to think about it: when you need to tag an image manually as dog/cat, that's pretty straightforward. But can you manually "tag" the stock price by looking at the history of that stock alone?</p>
<p>That's some intuition on how hard this problem is.</p>
<h2>Note on Overfitting</h2>
<p><strong>One should not chase higher training performance.</strong> It is almost useless to try to overfit the training data, as we usually want a model to perform well on new unseen data with properties similar to the train data. The whole idea is to try to generalize and learn the properties of the data and its correlation with the target; that's what learning is :)</p> | 2020-05-03 19:16:59.187000+00:00 | 2020-05-03 23:03:26.300000+00:00 | 2020-05-03 23:03:26.300000+00:00 | null | 61,425,296 | <p>I made an LSTM (RNN) neural network with supervised learning for stock data prediction. The problem is: why does it predict wrong on its own training data? (note: <strong>reproducible example</strong> below)</p>
<p>I created a simple model to predict the stock price for the next 5 days:</p>
<pre><code>model = Sequential()
model.add(LSTM(32, activation='sigmoid', input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(Dense(y_train.shape[1]))
model.compile(optimizer='adam', loss='mse')
es = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model.fit(x_train, y_train, batch_size=64, epochs=25, validation_data=(x_test, y_test), callbacks=[es])
</code></pre>
<p>The correct results are in <code>y_test</code> (5 values), so the model trains, looking back at the 90 previous days, and then restores the weights from the best (<code>val_loss=0.0030</code>) result with <code>patience=3</code>:</p>
<pre><code>Train on 396 samples, validate on 1 samples
Epoch 1/25
396/396 [==============================] - 1s 2ms/step - loss: 0.1322 - val_loss: 0.0299
Epoch 2/25
396/396 [==============================] - 0s 402us/step - loss: 0.0478 - val_loss: 0.0129
Epoch 3/25
396/396 [==============================] - 0s 397us/step - loss: 0.0385 - val_loss: 0.0178
Epoch 4/25
396/396 [==============================] - 0s 399us/step - loss: 0.0398 - val_loss: 0.0078
Epoch 5/25
396/396 [==============================] - 0s 391us/step - loss: 0.0343 - val_loss: 0.0030
Epoch 6/25
396/396 [==============================] - 0s 391us/step - loss: 0.0318 - val_loss: 0.0047
Epoch 7/25
396/396 [==============================] - 0s 389us/step - loss: 0.0308 - val_loss: 0.0043
Epoch 8/25
396/396 [==============================] - 0s 393us/step - loss: 0.0292 - val_loss: 0.0056
</code></pre>
<p>Prediction result is pretty awesome, isn't it?</p>
<p><a href="https://i.stack.imgur.com/5z91q.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5z91q.png" alt="enter image description here"></a></p>
<p>That's because algorithm restored best weights from #5 epoch. Okey, let's now save this model to <code>.h5</code> file, move back -10 days and predict last 5 days (at first example we made model and validate on 17-23 April including day off weekends, now let's test on 2-8 April). Result:</p>
<p><a href="https://i.stack.imgur.com/vXIvp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vXIvp.png" alt="enter image description here"></a></p>
<p>It shows an absolutely wrong direction. As we can see, that's because the model was trained with epoch #5 being best for the validation set on 17-23 April, but not on 2-8 April. If I try to train more, playing with which epoch to choose, whatever I do there are always a lot of time intervals in the past that have a wrong prediction.</p>
<p>Why does the model show wrong results on its own training data? It was trained on this data, so it must remember how to predict on this piece of the set, but it predicts wrong. What I also tried:</p>
<ul>
<li>Use large data sets with 50k+ rows, 20 years stock prices, adding more or less features</li>
<li>Create different types of model, like adding more hidden layers, different batch_sizes, different layers activations, dropouts, batchnormalization</li>
<li>Create custom EarlyStopping callback, get average val_loss from many validation data sets and choose the best</li>
</ul>
<p>Maybe I miss something? What can I improve?</p>
<p>Here is very simple and <strong>reproducible</strong> example. <code>yfinance</code> downloads S&P 500 stock data.</p>
<pre><code>"""python 3.7.7
tensorflow 2.1.0
keras 2.3.1"""
import numpy as np
import pandas as pd
from keras.callbacks import EarlyStopping, Callback
from keras.models import Model, Sequential, load_model
from keras.layers import Dense, Dropout, LSTM, BatchNormalization
from sklearn.preprocessing import MinMaxScaler
import plotly.graph_objects as go
import yfinance as yf
np.random.seed(4)
num_prediction = 5
look_back = 90
new_s_h5 = True # change it to False when you created model and want test on other past dates
df = yf.download(tickers="^GSPC", start='2018-05-06', end='2020-04-24', interval="1d")
data = df.filter(['Close', 'High', 'Low', 'Volume'])
# drop last N days to validate saved model on past
df.drop(df.tail(0).index, inplace=True)
print(df)
class EarlyStoppingCust(Callback):
def __init__(self, patience=0, verbose=0, validation_sets=None, restore_best_weights=False):
super(EarlyStoppingCust, self).__init__()
self.patience = patience
self.verbose = verbose
self.wait = 0
self.stopped_epoch = 0
self.restore_best_weights = restore_best_weights
self.best_weights = None
self.validation_sets = validation_sets
def on_train_begin(self, logs=None):
self.wait = 0
self.stopped_epoch = 0
self.best_avg_loss = (np.Inf, 0)
def on_epoch_end(self, epoch, logs=None):
loss_ = 0
for i, validation_set in enumerate(self.validation_sets):
predicted = self.model.predict(validation_set[0])
loss = self.model.evaluate(validation_set[0], validation_set[1], verbose = 0)
loss_ += loss
if self.verbose > 0:
print('val' + str(i + 1) + '_loss: %.5f' % loss)
avg_loss = loss_ / len(self.validation_sets)
print('avg_loss: %.5f' % avg_loss)
if self.best_avg_loss[0] > avg_loss:
self.best_avg_loss = (avg_loss, epoch + 1)
self.wait = 0
if self.restore_best_weights:
print('new best epoch = %d' % (epoch + 1))
self.best_weights = self.model.get_weights()
else:
self.wait += 1
if self.wait >= self.patience or self.params['epochs'] == epoch + 1:
self.stopped_epoch = epoch
self.model.stop_training = True
if self.restore_best_weights:
if self.verbose > 0:
print('Restoring model weights from the end of the best epoch')
self.model.set_weights(self.best_weights)
def on_train_end(self, logs=None):
print('best_avg_loss: %.5f (#%d)' % (self.best_avg_loss[0], self.best_avg_loss[1]))
def multivariate_data(dataset, target, start_index, end_index, history_size, target_size, step, single_step=False):
data = []
labels = []
start_index = start_index + history_size
if end_index is None:
end_index = len(dataset) - target_size
for i in range(start_index, end_index):
indices = range(i-history_size, i, step)
data.append(dataset[indices])
if single_step:
labels.append(target[i+target_size])
else:
labels.append(target[i:i+target_size])
return np.array(data), np.array(labels)
def transform_predicted(pr):
pr = pr.reshape(pr.shape[1], -1)
z = np.zeros((pr.shape[0], x_train.shape[2] - 1), dtype=pr.dtype)
pr = np.append(pr, z, axis=1)
pr = scaler.inverse_transform(pr)
pr = pr[:, 0]
return pr
step = 1
# creating datasets with look back
scaler = MinMaxScaler()
df_normalized = scaler.fit_transform(df.values)
dataset = df_normalized[:-num_prediction]
x_train, y_train = multivariate_data(dataset, dataset[:, 0], 0,len(dataset) - num_prediction + 1, look_back, num_prediction, step)
indices = range(len(dataset)-look_back, len(dataset), step)
x_test = np.array(dataset[indices])
x_test = np.expand_dims(x_test, axis=0)
y_test = np.expand_dims(df_normalized[-num_prediction:, 0], axis=0)
# creating past datasets to validate with EarlyStoppingCust
number_validates = 50
step_past = 5
validation_sets = [(x_test, y_test)]
for i in range(1, number_validates * step_past + 1, step_past):
indices = range(len(dataset)-look_back-i, len(dataset)-i, step)
x_t = np.array(dataset[indices])
x_t = np.expand_dims(x_t, axis=0)
y_t = np.expand_dims(df_normalized[-num_prediction-i:len(df_normalized)-i, 0], axis=0)
validation_sets.append((x_t, y_t))
if new_s_h5:
model = Sequential()
model.add(LSTM(32, return_sequences=False, activation = 'sigmoid', input_shape=(x_train.shape[1], x_train.shape[2])))
# model.add(Dropout(0.2))
# model.add(BatchNormalization())
# model.add(LSTM(units = 16))
model.add(Dense(y_train.shape[1]))
model.compile(optimizer = 'adam', loss = 'mse')
# EarlyStoppingCust is custom callback to validate each validation_sets and get average
# it takes epoch with best "best_avg" value
# es = EarlyStoppingCust(patience = 3, restore_best_weights = True, validation_sets = validation_sets, verbose = 1)
# or there is keras extension with built-in EarlyStopping, but it validates only 1 set that you pass through fit()
es = EarlyStopping(monitor = 'val_loss', patience = 3, restore_best_weights = True)
model.fit(x_train, y_train, batch_size = 64, epochs = 25, shuffle = True, validation_data = (x_test, y_test), callbacks = [es])
model.save('s.h5')
else:
model = load_model('s.h5')
predicted = model.predict(x_test)
predicted = transform_predicted(predicted)
print('predicted', predicted)
print('real', df.iloc[-num_prediction:, 0].values)
print('val_loss: %.5f' % (model.evaluate(x_test, y_test, verbose=0)))
fig = go.Figure()
fig.add_trace(go.Scatter(
x = df.index[-60:],
y = df.iloc[-60:,0],
mode='lines+markers',
name='real',
line=dict(color='#ff9800', width=1)
))
fig.add_trace(go.Scatter(
x = df.index[-num_prediction:],
y = predicted,
mode='lines+markers',
name='predict',
line=dict(color='#2196f3', width=1)
))
fig.update_layout(template='plotly_dark', hovermode='x', spikedistance=-1, hoverlabel=dict(font_size=16))
fig.update_xaxes(showspikes=True)
fig.update_yaxes(showspikes=True)
fig.show()
</code></pre> | 2020-04-25 12:02:09.327000+00:00 | 2020-05-04 13:15:13.913000+00:00 | 2020-04-25 12:24:46.190000+00:00 | python|tensorflow|machine-learning|keras|neural-network | ['https://arxiv.org/pdf/1611.03530.pdf', 'https://i.stack.imgur.com/pzaXj.png', 'https://arxiv.org/pdf/1611.03530.pdf', 'https://en.wikipedia.org/wiki/Regularization_(mathematics)'] | 4 |
72,476,699 | <p>This approach has some difficulties. First, in the article you sent they use a two-stage detection model with separate classification "branches". At the same time, YOLO is a one-stage detector and is <strong>fully convolutional</strong>, which means there are no fully connected layers, and the class predictions (1d) are taken from the whole 3d tensor (see the image).
<a href="https://i.stack.imgur.com/0QlQV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0QlQV.png" alt="yolo tensors" /></a></p>
<p>You can take a look at the <a href="https://arxiv.org/pdf/1612.08242.pdf" rel="nofollow noreferrer">YOLO9000</a> paper; the model was trained on detection and classification datasets at the same time - only the loss function was changed.</p> | 2022-06-02 12:59:50.913000+00:00 | 2022-06-02 12:59:50.913000+00:00 | null | null | 72,472,975 | <p>I am interested in building a yolo detector trained on multiple datasets, where each dataset has its own detection head. It is a multi-task learning approach. I am not sure how to convert the yolo detector architecture to support multiple heads.</p>
<p>I came across the following projects; however, I need your help to implement a similar approach.</p>
<p><a href="https://github.com/xingyizhou/UniDet" rel="nofollow noreferrer">https://github.com/xingyizhou/UniDet</a></p>
<p><a href="https://link.springer.com/chapter/10.1007/978-981-16-6963-7_27" rel="nofollow noreferrer">https://link.springer.com/chapter/10.1007/978-981-16-6963-7_27</a></p> | 2022-06-02 08:21:12.553000+00:00 | 2022-06-02 12:59:50.913000+00:00 | null | object-detection|yolo | ['https://i.stack.imgur.com/0QlQV.png', 'https://arxiv.org/pdf/1612.08242.pdf'] | 2 |
45,371,638 | <p>I think your question is on the right track but there are so many more ways in which Category Theory connects to other concepts. I also find that relating Category Theory to Type Theory brings more to the table than relating Category Theory to transformations. I say relating, because while Math and Computer Science may use the same terminology, they are not the same; one is not interchangeable with the other.</p>
<h2><a href="https://arxiv.org/pdf/0903.0340v3.pdf" rel="nofollow noreferrer">Physics, Topology, Logic and Computation: A Rosetta Stone</a></h2>
<p>by John C. Baez and Mike Stay</p>
<p>Category Theory: object X<br>
Computation: data type X </p>
<p>Category Theory: morphism f: X → Y<br>
Computation: program f: X → Y </p>
<p>Category Theory: tensor product of objects: X ⊗ Y<br>
Computation: product of data types: X ⊗ Y </p>
<p>Category Theory: tensor product of morphisms: f ⊗ g<br>
Computation: programs executing in parallel: f ⊗ g </p>
<p>Category Theory: internal hom: X ⊸ Y<br>
Computation: function type: X ⊸ Y </p>
<h2><a href="https://ncatlab.org/nlab/show/relation+between+type+theory+and+category+theory" rel="nofollow noreferrer">relation between type theory and category theory</a></h2>
<p>from nLab</p>
<p>Category Theory: counit for hom-tensor adjunction<br>
Type Theory: beta reduction </p>
<p>Category Theory: unit for hom-tensor adjunction<br>
Type Theory: eta conversion </p>
<h2>Yoneda embedding</h2>
<blockquote>
<p>The Yoneda embedding is familiar in category theory. The continuation passing transform is familiar in computer programming.
<a href="https://golem.ph.utexas.edu/category/2008/01/the_continuation_passing_trans.html" rel="nofollow noreferrer">They’re the same thing!</a> Why doesn’t anyone ever say so?</p>
</blockquote>
<p>by Mike Stay</p>
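<p>As a loose illustration of the continuation-passing side of that claim (my own Python sketch, not taken from the posts above): a value can be replaced by the function that feeds it to an arbitrary continuation, and the value is recovered by passing the identity continuation.</p>
<pre><code># Continuation-passing style in miniature: a plain value x is represented by
# the higher-order function that takes a continuation k and applies it to x.
def to_cps(x):
    return lambda k: k(x)

# Passing the identity continuation recovers the original value,
# so the CPS representation determines x uniquely.
assert to_cps(42)(lambda v: v) == 42
</code></pre>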
<h2>Other references</h2>
<p>There is so much more to this question than can possibly be put into an SO answer.</p>
<p>When I was looking into this in the past I asked most of my questions at <a href="https://cs.stackexchange.com/">StackExchange: Computer Science</a> and updated the most useful references as part of the <a href="https://cs.stackexchange.com/tags/category-theory/info">Category Theory tag</a>. Most of what you seek is found in those references.</p>
<h2>TL;DR</h2>
<p>If I could have created tables with the SO Markdown I would have added a lot more, but without being in a table seeing them in a list just loses its impact.</p>
<p>If you find Category Theory of interest, then you should also look at <a href="https://homotopytypetheory.org/book/" rel="nofollow noreferrer">HoTT</a> (Homotopy Type Theory)</p> | 2017-07-28 10:54:50.417000+00:00 | 2017-07-28 10:54:50.417000+00:00 | null | null | 45,213,359 | <p>I became interested and did not find in one place a list of corresponding terms:</p>
<p><code>Map <-> Morphism</code></p>
<p><code>Foldable <-> Catamorphism</code></p>
<p>...</p>
<p>Who can supplement the list of terms</p> | 2017-07-20 11:26:11.460000+00:00 | 2017-07-28 11:01:12.590000+00:00 | 2017-07-28 11:01:12.590000+00:00 | math|functional-programming|lambda-calculus|category-theory | ['https://arxiv.org/pdf/0903.0340v3.pdf', 'https://ncatlab.org/nlab/show/relation+between+type+theory+and+category+theory', 'https://golem.ph.utexas.edu/category/2008/01/the_continuation_passing_trans.html', 'https://cs.stackexchange.com/', 'https://cs.stackexchange.com/tags/category-theory/info', 'https://homotopytypetheory.org/book/'] | 6 |
45,759,215 | <p>ToF cameras suffer from many systematic errors from both the sensor and the illumination unit, which result, for example, in inhomogeneous lighting. </p>
<p>I did a detailed study on this topic that can be found at this address:
<a href="https://arxiv.org/pdf/1505.05459.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1505.05459.pdf</a></p>
<p>The error gets a lot worse at the border of your frame, as the active illumination never has:</p>
<ol>
<li>same fov as camera</li>
<li>same power on center and border of the frame</li>
</ol>
<p>For your case I think the problem is happening at the border, where the measured distance becomes extremely nonlinear for the border pixels. Check figures 5 and 15 in the paper.</p>
<p>Possible solutions:</p>
<ol>
<li>Try to remain between 1 and 3 meter distance.</li>
<li>Concentrate more on the central 2/3 of the frame.</li>
</ol> | 2017-08-18 14:34:22.647000+00:00 | 2017-08-18 18:54:42.867000+00:00 | 2017-08-18 18:54:42.867000+00:00 | null | 29,679,278 | <p>I was comparing Kinect V2 with my own ToF sensor and I found a different place.</p>
<p>Below is a point cloud with RGB information produced by Kinect V2, which is placed in front of a object, and the object is placed between two walls. This means the two walls are parallel to Kinect V2's view line.</p>
<p><img src="https://i.stack.imgur.com/0qOcR.png" alt="Front View"></p>
<p>If I drag the point cloud, you can see the point cloud of two walls in the .ply file are parallel with each other and parallel with the Kinect' view line.</p>
<p>Look from top of the point cloud
<img src="https://i.stack.imgur.com/qOQns.png" alt="The Left Wall">
<img src="https://i.stack.imgur.com/eiU8L.png" alt="The Right Wall"></p>
<p>However, if I use my own ToF sensor to catch the point cloud under same environment and same view(please ignore the difference of the object in the middle, it has been changed), the point cloud looks like this(color camera hasn't been implemented)
<img src="https://i.stack.imgur.com/KMgoT.png" alt="My ToF Sensor"></p>
<p>The left wall(red circle area) somehow distorted like a "/"(The right wall cannot visualize due to my sensor's FOV)</p>
<p>I was confused by this phenomenon, I pretty sure Kinect V2 did some processes to fix this issue, but I cannot figure it out.</p>
<p>Can someone give me some clues about the scene I saw?</p>
<p>If there is any further information need to be provided, feel free to ask. </p> | 2015-04-16 15:25:31.563000+00:00 | 2017-08-18 18:54:42.867000+00:00 | null | opencv|3d|kinect|point-cloud-library|point-clouds | ['https://arxiv.org/pdf/1505.05459.pdf'] | 1 |
45,430,444 | <p>See <a href="https://arxiv.org/pdf/1707.09725.pdf#page=73" rel="nofollow noreferrer">my master's thesis, page 59</a> for some of the reasons to choose a bigger or smaller batch size. You want to look at:</p>
<ul>
<li>epochs until convergence</li>
<li>time per epoch: higher is better</li>
<li>resulting model quality: lower is better (in my experiments)</li>
</ul>
<p>A batch size of 32 was good for my datasets / models / training algorithm.</p> | 2017-08-01 06:28:55.713000+00:00 | 2017-08-01 06:28:55.713000+00:00 | null | null | 43,702,133 | <p>I understand that bigger batch size gives more accurate results from <a href="https://stackoverflow.com/questions/33684648/what-is-batch-size-in-caffe-or-convnets">here</a>. But I'm not sure which batch size is "good enough". I guess bigger batch sizes will always be better but it seems like at a certain point you will only get a slight improvement in accuracy for every increase in batch size. Is there a heuristic or a rule of thumb on finding the optimal batch size?</p>
<p>Currently, I have 40000 training data and 10000 test data. My batch size is the default which is 256 for training and 50 for the test. I am using NVIDIA GTX 1080 which has 8Gigs of memory.</p> | 2017-04-30 01:47:00.227000+00:00 | 2017-08-01 06:28:55.713000+00:00 | 2017-05-23 12:10:47.363000+00:00 | neural-network|deep-learning|caffe|gradient-descent|imagenet | ['https://arxiv.org/pdf/1707.09725.pdf#page=73'] | 1 |
58,854,567 | <p>For Syntactic Similarity, there are 3 easy ways of detecting similarity:</p>
<ul>
<li>Word2Vec</li>
<li>Glove</li>
<li>Tfidf or countvectorizer (a small sketch follows this list)</li>
</ul>
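<p>As a small, concrete sketch of the Tfidf option above, using scikit-learn (the documents here are just placeholders):</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy documents; replace with your own texts.
docs = ["The cat sat on the mat.",
        "A cat was sitting on a mat.",
        "Stock prices fell sharply today."]

# Fit TF-IDF on the whole collection, then compare all documents pairwise.
tfidf = TfidfVectorizer().fit_transform(docs)
similarities = cosine_similarity(tfidf)
print(similarities)  # similarities[i][j] = TF-IDF cosine similarity of doc i and doc j
</code></pre>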
<p>For Semantic Similarity, one can use BERT embeddings and try different word-pooling strategies to get a document embedding, and then apply cosine similarity on the document embeddings.</p>
<p>An advanced methodology can use BERT SCORE to get similarity.
<a href="https://i.stack.imgur.com/Yg8c0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Yg8c0.png" alt="BERT SCORE"></a></p>
<p>Research Paper Link: <a href="https://arxiv.org/abs/1904.09675" rel="noreferrer">https://arxiv.org/abs/1904.09675</a></p> | 2019-11-14 10:28:10.863000+00:00 | 2019-11-14 10:28:10.863000+00:00 | null | null | 8,897,593 | <p>I am looking at working on an NLP project, in any programming language (though Python will be my preference).</p>
<p>I want to take two documents and determine how similar they are.</p> | 2012-01-17 15:51:09.063000+00:00 | 2022-08-29 05:24:49.783000+00:00 | 2022-08-29 05:24:49.783000+00:00 | python|nlp | ['https://i.stack.imgur.com/Yg8c0.png', 'https://arxiv.org/abs/1904.09675'] | 2 |
55,732,255 | <p>If you are looking for something very accurate, you need to use some better tool than tf-idf. <a href="https://arxiv.org/abs/1803.11175" rel="noreferrer">Universal sentence encoder</a> is one of the most accurate ones to find the similarity between any two pieces of text. Google provided pretrained models that you can use for your own application without a need to train from scratch anything. First, you have to install tensorflow and tensorflow-hub:</p>
<pre><code> pip install tensorflow
pip install tensorflow_hub
</code></pre>
<p>The code below lets you convert any text to a fixed length vector representation and then you can use the dot product to find out the similarity between them</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
module_url = "https://tfhub.dev/google/universal-sentence-encoder/1?tf-hub-format=compressed"
# Import the Universal Sentence Encoder's TF Hub module
embed = hub.Module(module_url)
# sample text
messages = [
# Smartphones
"My phone is not good.",
"Your cellphone looks great.",
# Weather
"Will it snow tomorrow?",
"Recently a lot of hurricanes have hit the US",
# Food and health
"An apple a day, keeps the doctors away",
"Eating strawberries is healthy",
]
similarity_input_placeholder = tf.placeholder(tf.string, shape=(None))
similarity_message_encodings = embed(similarity_input_placeholder)
with tf.Session() as session:
session.run(tf.global_variables_initializer())
session.run(tf.tables_initializer())
message_embeddings_ = session.run(similarity_message_encodings, feed_dict={similarity_input_placeholder: messages})
corr = np.inner(message_embeddings_, message_embeddings_)
print(corr)
heatmap(messages, messages, corr)
</code></pre>
<p>and the code for plotting:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

def heatmap(x_labels, y_labels, values):
fig, ax = plt.subplots()
im = ax.imshow(values)
# We want to show all ticks...
ax.set_xticks(np.arange(len(x_labels)))
ax.set_yticks(np.arange(len(y_labels)))
# ... and label them with the respective list entries
ax.set_xticklabels(x_labels)
ax.set_yticklabels(y_labels)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", fontsize=10,
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
for i in range(len(y_labels)):
for j in range(len(x_labels)):
text = ax.text(j, i, "%.2f"%values[i, j],
ha="center", va="center", color="w",
fontsize=6)
fig.tight_layout()
plt.show()
</code></pre>
<p>the result would be:
<a href="https://i.stack.imgur.com/ftGqC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ftGqC.png" alt="the similarity matrix between pairs of texts"></a></p>
<p>As you can see, the highest similarity is between each text and itself, and then with the texts closest to it in meaning.</p>
<p><strong>IMPORTANT</strong>: the first time you run the code it will be slow, because it needs to download the model. If you want to prevent it from downloading the model again and use the local model instead, you have to create a folder for the cache, add it to the environment variable, and then after the first run use that path:
</p>
<pre><code>tf_hub_cache_dir = "universal_encoder_cached/"
os.environ["TFHUB_CACHE_DIR"] = tf_hub_cache_dir
# pointing to the folder inside cache dir, it will be unique on your system
module_url = tf_hub_cache_dir+"/d8fbeb5c580e50f975ef73e80bebba9654228449/"
embed = hub.Module(module_url)
</code></pre>
<p>More information: <a href="https://tfhub.dev/google/universal-sentence-encoder/2" rel="noreferrer">https://tfhub.dev/google/universal-sentence-encoder/2</a></p> | 2019-04-17 16:31:03.437000+00:00 | 2020-02-11 02:12:36.770000+00:00 | 2020-02-11 02:12:36.770000+00:00 | null | 8,897,593 | <p>I am looking at working on an NLP project, in any programming language (though Python will be my preference).</p>
<p>I want to take two documents and determine how similar they are.</p> | 2012-01-17 15:51:09.063000+00:00 | 2022-08-29 05:24:49.783000+00:00 | 2022-08-29 05:24:49.783000+00:00 | python|nlp | ['https://arxiv.org/abs/1803.11175', 'https://i.stack.imgur.com/ftGqC.png', 'https://tfhub.dev/google/universal-sentence-encoder/2'] | 3 |
41,560,883 | <p>I believe both problems that you mention (segmentation and detection) are still considered open problems; therefore, there isn't a final solution. However, in recent years much work has been done to solve object detection and semantic segmentation using deep learning, with great performance and speed. </p>
<p>For object detection in real time, I recommend you check the results of <a href="http://pjreddie.com/darknet/yolo/" rel="nofollow noreferrer">YOLO</a> and <a href="https://arxiv.org/abs/1512.02325" rel="nofollow noreferrer">SSD</a>, and also take a look at <a href="https://arxiv.org/pdf/1506.01497.pdf" rel="nofollow noreferrer">Faster R-CNN</a>, since your requirement of 10 Hz can be achieved with it.</p>
<p>In the case of Object Segmentation you can try with <a href="https://arxiv.org/abs/1412.7062" rel="nofollow noreferrer">DCNN</a> that claims 8 fps. There are others, such as, DeepLab or FCN but I am not clear what is the speed of those systems/architectures.</p> | 2017-01-10 03:57:45.803000+00:00 | 2017-01-10 03:57:45.803000+00:00 | null | null | 41,558,099 | <p>I want to segment indoor area and find objects. Then, I want to use stereo vision to find Cartesian position of objects. The final goal is picking objects on a table (and controlling the trajectory) by a robot.</p>
<p>Example of object: chair, table, pen, syringe, stapler, cup, screw, toy doll, ruler, small box, milk, fruits, ....</p>
<p>My first priority is being real time (10 Hz).</p>
<p>I use ZED Stereo Camera to capture images in windows 10 64 bit, MATLAB 2016b 64 bit, on Intel core i7-3820 (3.6 GHz).</p>
<p>The camera output is color 720x2560 pixel which is combination of two (right and left image) 720x1280.</p>
<p>I prefer to use unsupervised algorithms for finding position of unknown object on table. However, it should be down in real time. If it is no possible in real time, I will degrade my expectation and will use supervised algorithms to find predefined object.</p> | 2017-01-09 22:33:31.447000+00:00 | 2017-01-10 03:57:45.803000+00:00 | null | computer-vision|real-time|image-segmentation|object-detection|object-recognition | ['http://pjreddie.com/darknet/yolo/', 'https://arxiv.org/abs/1512.02325', 'https://arxiv.org/pdf/1506.01497.pdf', 'https://arxiv.org/abs/1412.7062'] | 4 |
71,178,429 | <ol>
<li>You can create a ClientData for your dataset, see <a href="https://www.tensorflow.org/federated/tutorials/working_with_client_data" rel="nofollow noreferrer">Working with tff's ClientData</a>.</li>
<li>The dataset doesn't have to be balanced to build a federated learning model. In <a href="https://arxiv.org/abs/1602.05629" rel="nofollow noreferrer">https://arxiv.org/abs/1602.05629</a>, the server takes a weighted federated average of the clients' model updates, where the weights are the number of samples each client has.</li>
<li>A few hundred records per client is no less than the <a href="https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/emnist/load_data" rel="nofollow noreferrer">EMNIST dataset</a>, so that would be fine. About the total number of clients: this <a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification" rel="nofollow noreferrer">tutorial</a> shows FL with 10 clients, you can run the colab with smaller <code>NUM_CLIENTS</code> to see how it works on the example dataset.</li>
</ol> | 2022-02-18 18:45:11.320000+00:00 | 2022-02-19 00:49:19.377000+00:00 | 2022-02-19 00:49:19.377000+00:00 | null | 71,167,600 | <p>I am working to build a federated learning model using TFF and I have some questions:</p>
<ol>
<li><p>I am preparing the dataset, I have separate files of data, with same features and different samples. I would consider each of these files as a single client. How can I maintain this in TFF?</p>
</li>
<li><p>The data is not balanced, meaning, the size of data varies in each file. Is this affecting the modeling process?</p>
</li>
<li><p>The size of the data is a bit small, one file (client) is having 300 records and another is 1500 records, is it suitable to build a federated learning model?</p>
</li>
</ol>
<p>Thanks in advance</p> | 2022-02-18 02:05:39.090000+00:00 | 2022-02-19 00:49:19.377000+00:00 | 2022-02-18 02:15:52.080000+00:00 | machine-learning|tensorflow-federated|federated-learning | ['https://www.tensorflow.org/federated/tutorials/working_with_client_data', 'https://arxiv.org/abs/1602.05629', 'https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/emnist/load_data', 'https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification'] | 4 |
62,919,760 | <p>See this for all distributions: <a href="https://pytorch.org/docs/stable/distributions.html#torch.distributions.uniform.Uniform" rel="nofollow noreferrer">https://pytorch.org/docs/stable/distributions.html#torch.distributions.uniform.Uniform</a></p>
<p>This is the way I found works:</p>
<pre><code># generating uniform variables
import numpy as np
num_samples = 3
Din = 1
lb, ub = -1, 1
xn = np.random.uniform(low=lb, high=ub, size=(num_samples,Din))
print(xn)
import torch
sampler = torch.distributions.Uniform(low=lb, high=ub)
r = sampler.sample((num_samples,Din))
print(r)
r2 = torch.torch.distributions.Uniform(low=lb, high=ub).sample((num_samples,Din))
print(r2)
# process input
from collections import OrderedDict
import torch.nn as nn

Dout = 1  # hypothetical output dimension, just for this toy example
f = nn.Sequential(OrderedDict([
('f1', nn.Linear(Din,Dout)),
('out', nn.SELU())
]))
Y = f(r2)
print(Y)
</code></pre>
<p>But I have to admit I don't know what the point of generating a sampler is, and why not just call it directly as I do in the one-liner (last line of code).</p>
<p>Comments:</p>
<ul>
<li>samplers are good because you can transform/compose/cache/etc. distributions; see <a href="https://arxiv.org/abs/1711.10604" rel="nofollow noreferrer">https://arxiv.org/abs/1711.10604</a>, the top of the docs at <a href="https://pytorch.org/docs/stable/distributions.html#" rel="nofollow noreferrer">https://pytorch.org/docs/stable/distributions.html#</a>, and <a href="https://arxiv.org/abs/1506.05254" rel="nofollow noreferrer">https://arxiv.org/abs/1506.05254</a></li>
<li>you can feed in tensors to uniform to let it know the high dimensional interval (hypercube) to generate the uniform samples (that's why it receives tensors as input rather than simply numbers)</li>
</ul>
<hr />
<p>Reference:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/44328530/how-to-get-a-uniform-distribution-in-a-range-r1-r2-in-pytorch/62919760#62919760">How to get a uniform distribution in a range [r1,r2] in PyTorch?</a></li>
<li><a href="https://discuss.pytorch.org/t/generating-random-tensors-according-to-the-uniform-distribution-pytorch/53030/8" rel="nofollow noreferrer">https://discuss.pytorch.org/t/generating-random-tensors-according-to-the-uniform-distribution-pytorch/53030/8</a></li>
<li><a href="https://github.com/pytorch/pytorch/issues/24162" rel="nofollow noreferrer">https://github.com/pytorch/pytorch/issues/24162</a></li>
</ul> | 2020-07-15 16:44:08.083000+00:00 | 2021-04-04 12:20:13.547000+00:00 | 2021-04-04 12:20:13.547000+00:00 | null | 44,328,530 | <p>I want to get a 2-D <code>torch.Tensor</code> with size <code>[a,b]</code> filled with values from a uniform distribution (in range <code>[r1,r2]</code>) in PyTorch.</p> | 2017-06-02 12:05:32.340000+00:00 | 2022-07-11 07:36:30.733000+00:00 | 2021-05-09 13:01:59.610000+00:00 | python|pytorch|uniform-distribution | ['https://pytorch.org/docs/stable/distributions.html#torch.distributions.uniform.Uniform', 'https://arxiv.org/abs/1711.10604', 'https://pytorch.org/docs/stable/distributions.html#', 'https://arxiv.org/abs/1506.05254', 'https://stackoverflow.com/questions/44328530/how-to-get-a-uniform-distribution-in-a-range-r1-r2-in-pytorch/62919760#62919760', 'https://discuss.pytorch.org/t/generating-random-tensors-according-to-the-uniform-distribution-pytorch/53030/8', 'https://github.com/pytorch/pytorch/issues/24162'] | 7 |
38,414,873 | <p>I worked on retina vessel detection for a while a few years ago, and there are different ways to do it (a small morphology sketch follows the first list):</p>
<ul>
<li>If you don't need a top result but something fast, you can use oriented openings, <a href="https://www.researchgate.net/publication/3327391_Segmentation_of_vessel-like_patterns_using_mathematical_morphology_and_curvature_evaluation" rel="nofollow">see here</a> and <a href="http://cmm.ensmp.fr/Anciens/zana/" rel="nofollow">here</a>.</li>
<li>Then you have an other version using mathematical morphology <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0ahUKEwiJ1qDb-cXMAhUMOsAKHaXXBcoQFggkMAA&url=http%3A%2F%2Fcmm.ensmp.fr%2F~walter%2Farticles_walter%2Fwalterklein.pdf.gz&usg=AFQjCNG3e1Ueke67eY9JEw6ub00Y8roQ-A&sig2=dzVYD_5_2BKMaOQpRLjjOQ&cad=rja" rel="nofollow">version here</a>.</li>
</ul>
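<p>To make the morphology-based options above concrete, here is a minimal OpenCV sketch (my own illustration, with a hypothetical input file name): vessels are thin dark structures on the green channel, so a black-hat transform with a disk-shaped structuring element enhances them before thresholding.</p>
<pre><code>import cv2

img = cv2.imread('retina.png')          # hypothetical input image
green = img[:, :, 1]                    # vessels have the best contrast in the green channel
green = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(green)

# Black-hat = closing(green) - green: highlights thin dark structures such as vessels.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
blackhat = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, kernel)

# A simple Otsu threshold on the enhanced image gives a rough vessel mask.
_, mask = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite('vessel_mask.png', mask)
</code></pre>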
<p>For better results, here are some ideas:</p>
<ul>
<li>Personally, I used a combination of Gabor filters, and the results were pretty good. See <a href="http://www.thibault.biz/StackOverflow/RetinaVesselsGabor.png" rel="nofollow">the segmentation result here on the first image of DRIVE</a>.</li>
<li>And <a href="https://arxiv.org/pdf/cs/0510001.pdf" rel="nofollow">Gabor can be combined with learning for a good result</a>, or <a href="http://www.icmlc.org/icmlc2011/014_icmlc2011.pdf" rel="nofollow">here</a>.</li>
<li>Few years ago, <a href="https://www.researchgate.net/publication/5896442_Retinal_Blood_Vessel_Segmentation_Using_Line_Operators_and_Support_Vector_Classification" rel="nofollow">they claimed to have the best algorithm</a>, but I've never had the opportunity to test it. I was sceptic about the performance gap and the way they thresholded the line detector results, it was kind of obscure.</li>
<li>But I know that nowadays many people try to tackle the problem using CNNs, but I haven't heard about significant improvements.</li>
</ul> | 2016-07-16 19:33:16.630000+00:00 | 2016-07-16 19:33:16.630000+00:00 | null | null | 38,403,205 | <p>I am trying to segment the blood vessels in retinal images using Python and OpenCV. Here is the original image:</p>
<p><a href="https://i.stack.imgur.com/aB4Iz.jpg"><img src="https://i.stack.imgur.com/aB4Iz.jpg" alt="enter image description here"></a></p>
<p>Ideally I want all the blood vessels to be very visible like this (different image):</p>
<p><a href="https://i.stack.imgur.com/Yv2Ki.png"><img src="https://i.stack.imgur.com/Yv2Ki.png" alt="enter image description here"></a></p>
<p>Here is what I have tried so far. I took the green color channel of the image.</p>
<pre><code>img = cv2.imread('images/HealthyEyeFundus.jpg')
b,g,r = cv2.split(img)
</code></pre>
<p>Then I tried to create a matched filter by following <a href="http://funcvis.org/blog/?p=51">this article</a> and this is what the output image is:</p>
<p><a href="https://i.stack.imgur.com/5ZX7g.png"><img src="https://i.stack.imgur.com/5ZX7g.png" alt="enter image description here"></a></p>
<p>Then I tried doing max entropy thresholding:</p>
<pre><code>def max_entropy(data):
# calculate CDF (cumulative density function)
cdf = data.astype(np.float).cumsum()
# find histogram's nonzero area
valid_idx = np.nonzero(data)[0]
first_bin = valid_idx[0]
last_bin = valid_idx[-1]
# initialize search for maximum
max_ent, threshold = 0, 0
for it in range(first_bin, last_bin + 1):
# Background (dark)
hist_range = data[:it + 1]
hist_range = hist_range[hist_range != 0] / cdf[it] # normalize within selected range & remove all 0 elements
tot_ent = -np.sum(hist_range * np.log(hist_range)) # background entropy
# Foreground/Object (bright)
hist_range = data[it + 1:]
# normalize within selected range & remove all 0 elements
hist_range = hist_range[hist_range != 0] / (cdf[last_bin] - cdf[it])
tot_ent -= np.sum(hist_range * np.log(hist_range)) # accumulate object entropy
# find max
if tot_ent > max_ent:
max_ent, threshold = tot_ent, it
return threshold
img = skimage.io.imread('image.jpg')
# obtain histogram
hist = np.histogram(img, bins=256, range=(0, 256))[0]
# get threshold
th = max_entropy.max_entropy(hist)
print th
ret,th1 = cv2.threshold(img,th,255,cv2.THRESH_BINARY)
</code></pre>
<p>This is the result I'm getting, which is obviously not showing all the blood vessels:</p>
<p><a href="https://i.stack.imgur.com/5HIoE.png"><img src="https://i.stack.imgur.com/5HIoE.png" alt="enter image description here"></a></p>
<p>I've also tried taking the matched filter version of the image and taking the magnitude of its sobel values. </p>
<pre><code>img0 = cv2.imread('image.jpg',0)
sobelx = cv2.Sobel(img0,cv2.CV_64F,1,0,ksize=5) # x
sobely = cv2.Sobel(img0,cv2.CV_64F,0,1,ksize=5) # y
magnitude = np.sqrt(sobelx**2+sobely**2)
</code></pre>
<p>This makes the vessels pop out more:</p>
<p><a href="https://i.stack.imgur.com/9fZDJ.png"><img src="https://i.stack.imgur.com/9fZDJ.png" alt="enter image description here"></a></p>
<p>Then I tried Otsu thresholding on it:</p>
<pre><code>img0 = cv2.imread('image.jpg',0)
# # Otsu's thresholding
ret2,th2 = cv2.threshold(img0,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Otsu's thresholding after Gaussian filtering
blur = cv2.GaussianBlur(img0,(9,9),5)
ret3,th3 = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
one = Image.fromarray(th2).show()
one = Image.fromarray(th3).show()
</code></pre>
<p>Otsu doesn't give adequate results. It ends up including noise in the results: </p>
<p><a href="https://i.stack.imgur.com/XkkYd.png"><img src="https://i.stack.imgur.com/XkkYd.png" alt="enter image description here"></a></p>
<p>Any help is appreciated on how I can segment the blood vessels successfully.</p> | 2016-07-15 18:48:13.927000+00:00 | 2018-04-01 17:35:36.627000+00:00 | 2016-07-15 22:58:32.950000+00:00 | python|image|opencv|computer-vision|edge-detection | ['https://www.researchgate.net/publication/3327391_Segmentation_of_vessel-like_patterns_using_mathematical_morphology_and_curvature_evaluation', 'http://cmm.ensmp.fr/Anciens/zana/', 'https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0ahUKEwiJ1qDb-cXMAhUMOsAKHaXXBcoQFggkMAA&url=http%3A%2F%2Fcmm.ensmp.fr%2F~walter%2Farticles_walter%2Fwalterklein.pdf.gz&usg=AFQjCNG3e1Ueke67eY9JEw6ub00Y8roQ-A&sig2=dzVYD_5_2BKMaOQpRLjjOQ&cad=rja', 'http://www.thibault.biz/StackOverflow/RetinaVesselsGabor.png', 'https://arxiv.org/pdf/cs/0510001.pdf', 'http://www.icmlc.org/icmlc2011/014_icmlc2011.pdf', 'https://www.researchgate.net/publication/5896442_Retinal_Blood_Vessel_Segmentation_Using_Line_Operators_and_Support_Vector_Classification'] | 7 |
60,281,036 | <p>There are a few things coming to mind you may try:</p>
<h2>GRU instead of LSTM</h2>
<p>This cell type has one gate fewer and is thus more computationally efficient.</p>
<p>Furthermore, GRUs have slightly fewer hyperparameters, so the possible search space is smaller.</p>
<p>Nothing conclusive, but anecdotally they perform on par with LSTMs, at least for moderate-length sequences.</p>
<h2>Use CUDA</h2>
<p>If possible, use a CUDA-enabled device for training your network. In particular, for <code>tensorflow</code> use the <code>CuDNNGRU</code> version (for <code>tf2.x</code> see <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/GRU" rel="nofollow noreferrer">this</a>; there are options for <code>1.x</code> as well).</p>
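<p>A minimal sketch of that swap in Keras (input shape and layer size here are placeholders): in <code>tf2.x</code> the plain <code>GRU</code> layer selects the fused cuDNN kernel automatically on GPU as long as you keep its default arguments, while <code>tf1.x</code> exposes the explicit <code>CuDNNGRU</code> layer.</p>
<pre><code>import tensorflow as tf

model = tf.keras.Sequential([
    # TF 2.x: uses the cuDNN implementation on GPU with the default arguments.
    tf.keras.layers.GRU(64, input_shape=(100, 8)),
    # TF 1.x alternative: tf.keras.layers.CuDNNGRU(64, input_shape=(100, 8))
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
</code></pre>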
<h2>Narrow your search space</h2>
<p>The fewer hyperparameters to search over, the faster it will go. <code>learning_rate</code> is usually very important to get right.</p>
<p>There is a well-known trick for approximating the optimal learning rate, originated by Leslie N. Smith in <a href="https://arxiv.org/abs/1506.01186" rel="nofollow noreferrer">Cyclical Learning Rates for Training Neural Networks</a> and popularised by the <a href="https://docs.fast.ai/callbacks.lr_finder.html" rel="nofollow noreferrer">fastai</a> community.</p>
<p>The basic idea: you set the learning rate really low (e.g. <code>1e-7</code>) and increase it exponentially every <code>n</code> batches until a threshold is hit (highest <code>lr</code>, usually around <code>1.0</code>).</p>
<p>Monitoring the <code>train</code> or <code>validation</code> loss, you will see that it probably skyrockets at some learning rate. You take this point and set your approximately optimal learning rate about 10x lower than that.</p>
<p>You can read about it <a href="https://www.pyimagesearch.com/2019/08/05/keras-learning-rate-finder/" rel="nofollow noreferrer">here</a>.</p>
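<p>A bare-bones version of that LR range test as a Keras callback (my own sketch; the fastai implementation and the post linked above are more polished):</p>
<pre><code>import tensorflow as tf

class LRRangeTest(tf.keras.callbacks.Callback):
    """Increase the learning rate exponentially each batch and record the loss."""

    def __init__(self, start_lr=1e-7, end_lr=1.0, num_batches=1000):
        super().__init__()
        self.start_lr = start_lr
        self.factor = (end_lr / start_lr) ** (1.0 / num_batches)
        self.history = []  # (lr, loss) pairs

    def on_train_begin(self, logs=None):
        tf.keras.backend.set_value(self.model.optimizer.lr, self.start_lr)

    def on_train_batch_end(self, batch, logs=None):
        lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
        self.history.append((lr, logs["loss"]))
        tf.keras.backend.set_value(self.model.optimizer.lr, lr * self.factor)

# Run one short training pass with this callback, plot self.history, and pick a
# learning rate roughly 10x below the point where the loss starts to explode.
</code></pre>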
<p>Other hyperparameters could be narrowed as well when you see some values are probably no-go early on, though bayesian optimisation takes care of that partially.</p> | 2020-02-18 12:35:21.420000+00:00 | 2020-02-18 12:35:21.420000+00:00 | null | null | 60,280,057 | <p>I use GPyOpt to calculate the optimized parameters of an LSTM model based on Tensorflow. Unlike typical process, however, the validation of the parameters is implemented by minimizing variance, not bias. Hence, it is required to iterate the LSTM model running with a parameters set. In fact, my script does work well without any errors, but it takes up too much time to return the optimum parameters. I tried to find out any methods to improve the processing speed, but couldn't. </p>
<p>I think the other modules, for instance GridSearch or RandomizedSearch are perhaps faster than one of GpyOp, but they use Bayesian Optimization to obtain the parameters.</p>
<p>Is there a way to reduce the computational cost of this operation? </p>
<p>My PC: <code>MacBook, 2.3 GHz Intel Core i5</code></p>
<p>My script:</p>
<pre><code>import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
os.environ['PYTHONHASHSEED']='0'
os.environ['OMP_NUM_THREADS']='4'
os.environ['KMP_BLOCKTIME']='30'
os.environ['KMP_SETTINGS']='1'
os.environ['KMP_AFFINITY']='granularity=fine,verbose,compact,1,0'
import tensorflow as tf
import numpy as np
import GPyOpt
import random
NUM_PARALLEL_EXEC_UNITS=4
config=tf.ConfigProto(intra_op_parallelism_threads=NUM_PARALLEL_EXEC_UNITS,
inter_op_parallelism_threads=2,
allow_soft_placement=True,
device_count={'CPU':NUM_PARALLEL_EXEC_UNITS})
def parameter_opt(df_x,df_y):
x_set_list=[]
y_set_list=[]
for i in range(10): #create 10 data sets for calculating variance
t=random.randint(1,2300)
x_set_=df_x[t:t+10,:,:] #x_set_.shape is (10,7,5)
y_set_=df_y[t:t+10,:]
x_set_list.append(x_set_)
y_set_list.append(y_set_)
pred_x=df_x[-10:,:,:]
bounds=[{'name':'hidden_layer','type':'discrete','domain':(10,50,100,300,500)},
{'name':'learn_rate','type':'discrete','domain':(0.1,0.01,0.001,0.0001)},
{'name':'forget','type':'continuous','domain':(0.1,1.0)},
{'name':'std','type':'continuous','domain':(0.01,1.0)},
{'name':'epo','type':'discrete','domain':(50,100,200,400)},
{'name':'cell_drop_','type':'discrete','domain':(0,1)},
{'name':'output_keep','type':'continuous','domain':(0.1,1.0)}]
def f(x):
tf.reset_default_graph()
def LSTMmse(hidden_layer,learn_rate,forget,std,epo,cell_drop_, output_keep):
pred_array=np.empty((10,0))
for x_set,y_set in zip(x_set_list,y_set_list):
tf.reset_default_graph()
.......
#LSTM model
#generate prediction_list consisting of 10 predictionvalue
.........
variance_list=np.var(prediction_list,axis=1)
variance_ave=sum(var_list)/len(var_list)
return variance_ave
for x_ in x:
vari=LSTMmse(hidden_layer=int(x_[0]),
learn_rate=np.float32(x_[1]),
forget=np.float32(x_[2]),
std=np.float32(x_[3]),
epo=int(x_[4]),
cell_drop_=bool(x_[5]),
output_keep=np.float32(x_[6]))
return vari
myBopt=GPyOpt.methods.BayesianOptimization(f=f,domain=bounds,acquisition_type='MPI')
myBopt.run_optimization(max_iter=10)
opt=myBopt.x_opt
opt=[int(opt[0]),opt[1],opt[2],opt[3],int(opt[4]),opt[5],opt[6]]
key=['hidden_layer','learn_rate','forget','std','epo','cell_drop','output_keep']
value=opt
para_dict=dict(zip(key,value))
return para_dict
parameter_opt=parameter_opt(df_x,df_y)
print(parameter_opt) # spending too much time
</code></pre> | 2020-02-18 11:38:26.817000+00:00 | 2020-02-18 12:35:21.420000+00:00 | 2020-02-18 12:30:23.170000+00:00 | python|optimization|parameters|lstm|bayesian | ['https://www.tensorflow.org/api_docs/python/tf/keras/layers/GRU', 'https://arxiv.org/abs/1506.01186', 'https://docs.fast.ai/callbacks.lr_finder.html', 'https://www.pyimagesearch.com/2019/08/05/keras-learning-rate-finder/'] | 4 |
44,461,910 | <p>The data format for storing training/test is defined in the FSNS paper <a href="https://arxiv.org/pdf/1702.03970.pdf" rel="noreferrer">https://arxiv.org/pdf/1702.03970.pdf</a> (Table 4). </p>
<p>To store tfrecord files with tf.Example protos you can use <code>tf.python_io.TFRecordWriter</code>. There is <a href="http://warmspringwinds.github.io/tensorflow/tf-slim/2016/12/21/tfrecords-guide/" rel="noreferrer">a nice tutorial</a>, an existing <a href="https://stackoverflow.com/questions/33849617/how-do-i-convert-a-directory-of-jpeg-images-to-tfrecords-file-in-tensorflow">answer on Stack Overflow</a> and a <a href="https://gist.github.com/gvanhorn38/ac19b85a4f7b5fb9e82e04f4ac6d5566" rel="noreferrer">short gist</a>.</p>
<p>Assume you have a numpy ndarray <code>img</code> which has <code>num_of_views</code> images stored side-by-side (see Fig. 3 in the <a href="https://arxiv.org/pdf/1702.03970.pdf" rel="noreferrer">paper</a>):
<img src="https://i.stack.imgur.com/OIdak.png" alt="enter image description here">
and a corresponding text in a variable <code>text</code>. You will need to define some function to convert a unicode string into a list of character ids padded to a fixed length and unpadded as well. For example:</p>
<pre class="lang-py prettyprint-override"><code>char_ids_padded, char_ids_unpadded = encode_utf8_string(
text='abc',
charset={'a':0, 'b':1, 'c':2},
length=5,
null_char_id=3)
</code></pre>
<p>the result should be:</p>
<pre class="lang-py prettyprint-override"><code>char_ids_padded = [0,1,2,3,3]
char_ids_unpadded = [0,1,2]
</code></pre>
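<p>The snippet above assumes such an <code>encode_utf8_string</code> helper exists; one possible implementation consistent with the behaviour shown (a sketch, not the official FSNS tooling) is:</p>
<pre class="lang-py prettyprint-override"><code># Sketch only: map every character to its id, then pad with the null character id.
def encode_utf8_string(text, charset, length, null_char_id):
  char_ids_unpadded = [charset[c] for c in text]
  char_ids_padded = char_ids_unpadded + [null_char_id] * (length - len(char_ids_unpadded))
  return char_ids_padded, char_ids_unpadded
</code></pre>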
<p>If you use functions <code>_int64_feature</code> and <code>_bytes_feature</code> defined in the <a href="https://gist.github.com/gvanhorn38/ac19b85a4f7b5fb9e82e04f4ac6d5566" rel="noreferrer">gist</a> you can create a FSNS compatible tf.Example proto using a following snippet:</p>
<pre class="lang-py prettyprint-override"><code>char_ids_padded, char_ids_unpadded = encode_utf8_string(
text, charset, length, null_char_id)
example = tf.train.Example(features=tf.train.Features(
feature={
'image/format': _bytes_feature("PNG"),
'image/encoded': _bytes_feature(img.tostring()),
'image/class': _int64_feature(char_ids_padded),
'image/unpadded_class': _int64_feature(char_ids_unpadded),
'height': _int64_feature(img.shape[0]),
'width': _int64_feature(img.shape[1]),
'orig_width': _int64_feature(img.shape[1]/num_of_views),
'image/text': _bytes_feature(text)
}
))
</code></pre> | 2017-06-09 15:46:00.327000+00:00 | 2017-06-09 15:46:00.327000+00:00 | null | null | 44,430,310 | <p>I'm working on this <a href="https://github.com/tensorflow/models/tree/master/attention_ocr" rel="noreferrer">project</a> based on TensorFlow.</p>
<p>I just want to train an OCR model by attention_ocr based on my own datasets, but I don't know how to store my images and ground truth in the same format as FSNS datasets.</p>
<p>Is there anybody also work on this project or know how to solve this problem?</p> | 2017-06-08 08:19:14.633000+00:00 | 2019-01-22 10:35:05.207000+00:00 | 2018-06-19 07:19:42.443000+00:00 | tensorflow|dataset | ['https://arxiv.org/pdf/1702.03970.pdf', 'http://tf.python_io.tfrecordwriter/', 'http://warmspringwinds.github.io/tensorflow/tf-slim/2016/12/21/tfrecords-guide/', 'https://stackoverflow.com/questions/33849617/how-do-i-convert-a-directory-of-jpeg-images-to-tfrecords-file-in-tensorflow', 'https://gist.github.com/gvanhorn38/ac19b85a4f7b5fb9e82e04f4ac6d5566', 'https://arxiv.org/pdf/1702.03970.pdf', 'https://gist.github.com/gvanhorn38/ac19b85a4f7b5fb9e82e04f4ac6d5566'] | 7 |
57,890,878 | <h2>How not to involve IO in random number generation:</h2>
<p>This question has received excellent answers. However, it might leave some readers under the impression that pseudo-random number generation (PRNG) within Haskell is necessarily linked to IO.<br></p>
<p>Well, it's <strong>not</strong>. It is just that in Haskell, the default random number generator happens to be "hosted" in the IO type. But this is by choice, not by necessity.</p>
<p>For reference, here is a <a href="https://arxiv.org/pdf/1811.04035" rel="nofollow noreferrer">recent review paper on the subject of PRNGs</a>.
PRNGs are deterministic mathematical automata. They do not involve IO. Using PRNGs in Haskell does not need to involve the IO type. At the bottom of this answer, I provide code that solves the problem at hand without involving the IO type, except for printing the result.</p>
<p>The Haskell libraries provide functions such as <code>mkStdGen</code> that take an integer <em>seed</em> and return a pseudo-random number generator, that is an object of the <code>RandomGen</code> class, whose state is dependent on the value of seed. Note that there is nothing magic about <code>mkStdGen</code>. If for some reason you do not like it, there are alternatives, such as <a href="http://hackage.haskell.org/package/tf-random-0.5/docs/System-Random-TF.html" rel="nofollow noreferrer">mkTFGen</a> which is based on the <a href="https://www.schneier.com/academic/skein/threefish.html" rel="nofollow noreferrer">Threefish block cipher</a>.
<br></p>
<p>Now, pseudo-random number generation is not managed in the same way in imperative languages such as C++ and in Haskell. In C++, you would extract a random value like this: <code>rval = rng.nextVal();</code>. On top of just returning the value, calling nextVal() has the <em>side effect</em> of altering the state of the <code>rng</code> object, ensuring that next time it will return a different random number.</p>
<p>But in Haskell, functions have no side effects. So you need to have something like this:</p>
<pre><code>(rval, rng2) = nextVal rng1
</code></pre>
<p>That is, the evaluation function needs to <em>return</em> both the pseudo-random value and the updated state of the generator. A minor consequence is that, if the state is large (such as for the common <a href="https://en.wikipedia.org/wiki/Mersenne_Twister" rel="nofollow noreferrer">Mersenne Twister</a> generator), Haskell might need a bit more memory than C++.</p>
<p>So, we expect that solving the problem at hand, that is randomly transforming a list of strings, will involve a function with the following type signature: <code>RandomGen tg => [String] -> tg -> ([String], tg)</code>.</p>
<p>For illustration purposes, let's get a generator and use it to generate a couple of "random" integers between 0 and 100. For this, we need the <code>randomR</code> function:</p>
<pre><code>$ ghci
Prelude> import System.Random
Prelude System.Random> :t randomR
randomR :: (RandomGen g, Random a) => (a, a) -> g -> (a, g)
Prelude System.Random>
Prelude System.Random> let rng1 = mkStdGen 544
Prelude System.Random> let (v, rng2) = randomR (0,100) rng1
Prelude System.Random> v
23
Prelude System.Random> let (v, rng2) = randomR (0,100) rng1
Prelude System.Random> v
23
Prelude System.Random> let (w, rng3) = randomR (0,100) rng2
Prelude System.Random> w
61
Prelude System.Random>
</code></pre>
<p>Note that above, when we forget to feed the <em>updated</em> state of the generator, rng2, into the next computation, we get the same "random" number 23 a second time. This is a very common mistake and a very common complaint. Function <code>randomR</code> is a pure Haskell function that does not involve IO. Hence it has <em>referential transparency</em>, that is when given the same arguments, it returns the same output value.</p>
<p>A possible way to deal with this situation is to pass the updated state around manually within the source code. This is cumbersome and error prone, but can be managed. That gives this style of code:</p>
<pre><code>-- stateful map of randomize function for a list of strings:
fmapRandomize :: RandomGen tg => [String] -> tg -> ([String], tg)
fmapRandomize [] rng = ([], rng)
fmapRandomize(str:rest) rng = let (str1, rng1) = randomize str rng
(rest1, rng2) = fmapRandomize rest rng1
in (str1:rest1, rng2)
</code></pre>
<p>Thankfully, there is a better way, which involves the <code>runRand</code> function or its <code>evalRand</code> sibling. Function <code>runRand</code> takes a <em>monadic computation</em> plus (an initial state of) a generator. It returns the pseudo-random value and the updated state of the generator. It is much easier to write the code for monadic computations than to pass the generator state manually around. </p>
<p>This is a possible way to solve the random string substitution problem from the question text:</p>
<pre><code>import System.Random
import Control.Monad.Random
-- generic monadic computation to get a sequence of "count" random items:
mkRandSeqM :: (RandomGen tg, Random tv) => (tv,tv) -> Int -> Rand tg [tv]
mkRandSeqM range count = sequence (replicate count (getRandomR range))
-- monadic computation to get our sort of random string:
mkRandStrM :: RandomGen tg => Rand tg String
mkRandStrM = mkRandSeqM ('a', 'z') 10
-- monadic single string transformation:
randomizeM :: RandomGen tg => String -> Rand tg String
randomizeM str = if (str == "random") then mkRandStrM else (pure str)
-- monadic list-of-strings transformation:
mapRandomizeM :: RandomGen tg => [String] -> Rand tg [String]
mapRandomizeM = mapM randomizeM
-- non-monadic function returning the altered string list and generator:
mapRandomize :: RandomGen tg => [String] -> tg -> ([String], tg)
mapRandomize lstr rng = runRand (mapRandomizeM lstr) rng
main = do
let inpList = ["random", "foo", "random", "bar", "random", "boo", "qux"]
-- get a random number generator:
let mySeed = 54321
let rng1 = mkStdGen mySeed
-- execute the string substitutions:
let (outList, rng2) = mapRandomize inpList rng1
-- display results:
putStrLn $ "inpList = " ++ (show inpList)
putStrLn $ "outList = " ++ (show outList)
</code></pre>
<p><br>
Note that above, RandomGen is the class of the generator, while Random is just the class of the generated value.</p>
<h2>Program output:</h2>
<pre><code>$ random1.x
inpList = ["random","foo","random","bar","random","boo","qux"]
outList = ["gahuwkxant","foo","swuxjgapni","bar","zdjqwgpgqa","boo","qux"]
$
</code></pre> | 2019-09-11 14:01:07.943000+00:00 | 2019-10-24 00:05:07.270000+00:00 | 2019-10-24 00:05:07.270000+00:00 | null | 57,836,652 | <p>Suppose that I have a list like this:</p>
<pre class="lang-hs prettyprint-override"><code>let list = ["random", "foo", "random", "bar", "random", "boo"]
</code></pre>
<p>I want to iterate over a list and map all "random" elements to different random strings:</p>
<pre class="lang-hs prettyprint-override"><code>let newList = fmap randomize list
print newList
-- ["dasidias", "foo", "gasekir", "bar", "nabblip", "boo"]
</code></pre>
<p>My randomize function looks like this:</p>
<pre class="lang-hs prettyprint-override"><code>randomize :: String -> String
randomize str =
case str of
"random" -> randStr
_ -> str
where
randStr = take 10 $ randomRs ('a','z') $ unsafePerformIO newStdGen
</code></pre>
<p>But I get the same random string for every "random" element:</p>
<pre class="lang-hs prettyprint-override"><code>["abshasb", "foo", "abshasb", "bar", "abshasb", "boo"]
</code></pre>
<p>I can't figure out why is this happening and how to get a different random value for each occurrence of "random".</p> | 2019-09-07 18:59:24.603000+00:00 | 2019-10-24 00:05:07.270000+00:00 | null | haskell|random | ['https://arxiv.org/pdf/1811.04035', 'http://hackage.haskell.org/package/tf-random-0.5/docs/System-Random-TF.html', 'https://www.schneier.com/academic/skein/threefish.html', 'https://en.wikipedia.org/wiki/Mersenne_Twister'] | 4 |
65,207,342 | <p>Another possibility is to associate, with each element of the array, a random number drawn from an <a href="https://en.wikipedia.org/wiki/Exponential_distribution" rel="nofollow noreferrer">exponential distribution</a> with parameter given by the weight for that element. Then pick the element with the lowest such ‘ordering number’. In this case, the probability that a particular element has the lowest ordering number of the array is proportional to the array element's weight.</p>
<p>This is O(n), doesn't involve any reordering or extra storage, and the selection can be done in the course of a single pass through the array. The weights must be greater than zero, but don't have to sum to any particular value.</p>
<p>This has the further advantage that, if you store the ordering number with each array element, you have the option to sort the array by increasing ordering number, to get a random ordering of the array in which elements with higher weights have a higher probability of coming early (I've found this useful when deciding which DNS SRV record to pick, to decide which machine to query).</p>
<p>Repeated random sampling with replacement requires a new pass through the array each time; for random selection without replacement, the array can be sorted in order of increasing ordering number, and <em>k</em> elements can be read out in that order.</p>
<p>See the <a href="https://en.wikipedia.org/wiki/Exponential_distribution" rel="nofollow noreferrer">Wikipedia page about the exponential distribution</a> (in particular the remarks about the distribution of the minima of an ensemble of such variates) for the proof that the above is true, and also for the pointer towards the technique of generating such variates: if <em>T</em> has a uniform random distribution in [0,1), then <em>Z=-log(1-T)/w</em> (where <em>w</em> is the parameter of the distribution; here the weight of the associated element) has an exponential distribution.</p>
<p>That is:</p>
<ol>
<li>For each element <em>i</em> in the array, calculate <em>zi = -log(T)/wi</em> (or <em>zi = -log(1-T)/wi</em>), where T is drawn from a uniform distribution in [0,1), and <em>wi</em> is the weight of the I'th element.</li>
<li>Select the element which has the lowest <em>zi</em>.</li>
</ol>
<p>The element <em>i</em> will be selected with probability <em>wi/(w1+w2+...+wn)</em>.</p>
<p>See below for an illustration of this in Python, which takes a single pass through the array of weights, for each of 10000 trials.</p>
<pre><code>import math, random
random.seed()
weights = [10, 20, 50, 20]
nw = len(weights)
results = [0 for i in range(nw)]
n = 10000
while n > 0: # do n trials
smallest_i = 0
smallest_z = -math.log(1-random.random())/weights[0]
for i in range(1, nw):
z = -math.log(1-random.random())/weights[i]
if z < smallest_z:
smallest_i = i
smallest_z = z
results[smallest_i] += 1 # accumulate our choices
n -= 1
for i in range(nw):
print("{} -> {}".format(weights[i], results[i]))
</code></pre>
<hr />
<p><strong>Edit (for history):</strong> after posting this, I felt sure I couldn't be the first to have thought of it, and another search with this solution in mind shows that this is indeed the case.</p>
<ul>
<li>In an <a href="https://stackoverflow.com/a/30226926/375147">answer to a similar question</a>, <a href="https://stackoverflow.com/users/854793/joe-k">Joe K</a> suggested this algorithm (and also noted that someone else must have thought of it before).</li>
<li>Another <a href="https://stackoverflow.com/a/20548895/375147">answer to that question</a>, meanwhile, pointed to <a href="https://doi.org/10.1016/j.ipl.2005.11.003" rel="nofollow noreferrer">Efraimidis and Spirakis</a> (<a href="https://docdro.id/XTglD09" rel="nofollow noreferrer">preprint</a>), which describes a similar method.</li>
<li>I'm pretty sure, looking at it, that the Efraimidis and Spirakis is in fact the same exponential-distribution algorithm in disguise, and this is corroborated by a passing remark in the <a href="https://en.wikipedia.org/wiki/Reservoir_sampling" rel="nofollow noreferrer">Wikipedia page about Reservoir sampling</a> that ‘[e]quivalently, a more numerically stable formulation of this algorithm’ is the exponential-distribution algorithm above. The reference there is to <a href="https://arxiv.org/abs/1305.0941" rel="nofollow noreferrer">a sequence of lecture notes by Richard Arratia</a>; the relevant property of the exponential distribution is mentioned in Sect.1.3 (which mentions that something similar to this is a ‘familiar fact’ in some circles), but not its relationship to the Efraimidis and Spirakis algorithm.</li>
</ul> | 2020-12-08 21:41:27.783000+00:00 | 2021-04-19 07:45:28.407000+00:00 | 2021-04-19 07:45:28.407000+00:00 | null | 4,463,561 | <p>I would like to randomly select one element from an array, but each element has a known probability of selection.</p>
<p>All chances together (within the array) sums to 1.</p>
<p>What algorithm would you suggest as the fastest and most suitable for huge calculations?</p>
<p>Example:</p>
<pre><code>id => chance
array[
0 => 0.8
1 => 0.2
]
</code></pre>
<p>for this pseudocode, the algorithm in question should on multiple calls statistically return four elements on id <code>0</code> for one element on id <code>1</code>.</p> | 2010-12-16 17:20:44.353000+00:00 | 2022-06-08 15:02:58.027000+00:00 | 2014-02-22 01:16:15.093000+00:00 | arrays|algorithm|random | ['https://en.wikipedia.org/wiki/Exponential_distribution', 'https://en.wikipedia.org/wiki/Exponential_distribution', 'https://stackoverflow.com/a/30226926/375147', 'https://stackoverflow.com/users/854793/joe-k', 'https://stackoverflow.com/a/20548895/375147', 'https://doi.org/10.1016/j.ipl.2005.11.003', 'https://docdro.id/XTglD09', 'https://en.wikipedia.org/wiki/Reservoir_sampling', 'https://arxiv.org/abs/1305.0941'] | 9 |
46,390,590 | <p>Stable GAN training is an open research problem. Nevertheless, I can give you two tips. If you stick with the original GAN training routine and do not have absolute knowledge of what you are doing, use the DCGAN architecture with the hyperparameters described in their paper (<a href="https://arxiv.org/pdf/1511.06434.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1511.06434.pdf</a>). GAN training is highly volatile, and using other hyperparameters will lead to mode collapse or vanishing gradients. </p>
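<p>For reference, the DCGAN settings boil down to Adam with a small learning rate and a reduced beta_1 (plus LeakyReLU(0.2) in the discriminator, batch norm, and strided convolutions instead of pooling). A minimal sketch, reusing the <code>dis</code> and <code>GAN</code> models defined in the question:</p>
<pre><code>from keras.optimizers import Adam

# DCGAN-paper-style optimizer settings: lr=0.0002, beta_1=0.5.
# 'dis' and 'GAN' are the models from the question, not defined here.
dis.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.0002, beta_1=0.5))
GAN.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.0002, beta_1=0.5))
</code></pre>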
<p>The easier route with GANs is to use the Wasserstein GAN. Those are quite stable with abritrary architectures. However, I strongly suggest to use the hyperparameter suggested in their paper, because for me the training also collapsed for different hyperparameters. Improved Wasserstein GAN: [<a href="https://arxiv.org/pdf/1704.00028.pdf]" rel="nofollow noreferrer">https://arxiv.org/pdf/1704.00028.pdf]</a></p> | 2017-09-24 13:32:50.340000+00:00 | 2017-09-24 13:32:50.340000+00:00 | null | null | 46,386,948 | <p><a href="https://i.stack.imgur.com/hWfC5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hWfC5.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/ZA0Wi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZA0Wi.png" alt="enter image description here"></a></p>
<p>As shown in the two pictures above, while training a DCGAN model, the gradient is not stable and fluctuates greatly. Because of this, the model can't draw a perfect image, or even an image that can be recognized by human eyes. Can anybody tell me how to adjust the parameters, such as the dropout rate or learning rate or something else, to make the model run better? I will be greatly thankful to you!
Here is the model I have made before (built with Keras):</p>
<p>the discriminator :</p>
<p>learn rate is 0.0005</p>
<p>dropout rate is 0.6</p>
<p>batch_size is 25</p>
<pre><code>dis=Sequential()
dis.add(Conv2D(depth*1, 5, strides=2, input_shape=(56,56,3),padding='same',kernel_initializer='RandomNormal', bias_initializer='zeros'))
dis.add(LeakyReLU(alpha=alp))
dis.add(Dropout(dropout))
dis.add(Conv2D(depth*2, 5, strides=2, padding='same',kernel_initializer='RandomNormal', bias_initializer='zeros'))
dis.add(LeakyReLU(alpha=alp))
dis.add(Dropout(dropout))
dis.add(Conv2D(depth*4, 5, strides=2, padding='same',kernel_initializer='RandomNormal', bias_initializer='zeros'))
dis.add(LeakyReLU(alpha=alp))
dis.add(Dropout(dropout))
dis.add(Conv2D(depth*8,5,strides=1,padding='same',kernel_initializer='RandomUniform', bias_initializer='zeros'))
dis.add(LeakyReLU(alpha=alp))
dis.add(Dropout(dropout))
dis.add(Flatten())
dis.add(Dense(1))
dis.add(Activation('sigmoid'))
dis.summary()
dis.compile(loss='binary_crossentropy',optimizer=RMSprop(lr=d_lr))
</code></pre>
<p>the generator and GAN model:</p>
<p>learning rate is 0.0001</p>
<p>momentum is 0.9</p>
<pre><code>gen=Sequential()
gen.add(Dense(dim*dim*dep,input_dim=100))
gen.add(BatchNormalization(momentum=momentum))
gen.add(Activation('relu'))
gen.add(Reshape((dim,dim,dep)))
gen.add(Dropout(dropout))
gen.add(UpSampling2D())
gen.add(Conv2DTranspose(int(dep/2),5,padding='same',kernel_initializer='RandomNormal', bias_initializer='RandomNormal'))
gen.add(BatchNormalization(momentum=momentum))
gen.add(Activation('relu'))
gen.add(UpSampling2D())
gen.add(Conv2DTranspose(int(dep/4),5,padding='same',kernel_initializer='RandomNormal', bias_initializer='RandomNormal'))
gen.add(BatchNormalization(momentum=momentum))
gen.add(Activation('relu'))
gen.add(UpSampling2D())
gen.add(Conv2DTranspose(int(dep/8),5,padding='same',kernel_initializer='RandomNormal', bias_initializer='RandomNormal'))
gen.add(BatchNormalization(momentum=momentum))
gen.add(Activation('relu'))
gen.add(Conv2DTranspose(3,5,padding='same',kernel_initializer='RandomNormal', bias_initializer='RandomNormal'))
gen.add(Activation('sigmoid'))
gen.summary()
GAN=Sequential()
GAN.add(gen)
GAN.add(dis)
GAN.compile(loss='binary_crossentropy',optimizer=RMSprop(lr=g_lr))
</code></pre> | 2017-09-24 05:35:52.373000+00:00 | 2020-12-07 11:03:20.047000+00:00 | 2018-06-10 17:16:01.087000+00:00 | keras|deep-learning|generative-adversarial-network | ['https://arxiv.org/pdf/1511.06434.pdf%C3%AF%C2%BC%E2%80%B0', 'https://arxiv.org/pdf/1704.00028.pdf]'] | 2 |
69,468,274 | <p>The <code>doi</code> example can be addressed by using the <a href="https://www.sphinx-doc.org/en/master/usage/extensions/extlinks.html" rel="nofollow noreferrer">extlinks</a> Sphinx extension, by adding these contents to <code>conf.py</code>:</p>
<pre><code>extlinks = {
'doi': ('https://dx.doi.org/%s', 'doi:'),
}
</code></pre>
<p>(<a href="https://gist.github.com/jonls/b910afc46473b02597c4#gistcomment-1986251" rel="nofollow noreferrer">Source</a>)</p>
<p>And for <code>arxiv</code>, something similar.</p>
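<p>For example (the exact URL pattern is your choice; abs links are one reasonable option):</p>
<pre><code>extlinks = {
    'doi': ('https://dx.doi.org/%s', 'doi:'),
    'arxiv': ('https://arxiv.org/abs/%s', 'arXiv:'),
}
</code></pre>
<p>after which <code>:arxiv:`1309.0238`</code> in the document renders as a link.</p>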
<hr />
<p>For a more generic case, you would need to either</p>
<ol>
<li>create a "custom role", see for example <a href="https://www.sphinx-doc.org/en/master/development/tutorials/helloworld.html" rel="nofollow noreferrer">https://www.sphinx-doc.org/en/master/development/tutorials/helloworld.html</a> (this tutorial creates a custom directive, but it should be similar), or</li>
<li>find a Sphinx extension that already does it for you.</li>
</ol> | 2021-10-06 15:10:09.497000+00:00 | 2021-10-06 15:10:09.497000+00:00 | null | null | 69,460,847 | <p>How can I link directive with :doi: or :arxiv: in Rst.</p>
<p>I'm making a document with Rst, but when I try to link with arxiv, it shows broken link.</p>
<p>What I tried is below but not working.</p>
<pre><code><:doi:`10.1145/2487575.2487591>`
</code></pre>
<p>or</p>
<pre><code>:arxiv:`1309.0238`
</code></pre> | 2021-10-06 06:31:07.270000+00:00 | 2021-10-06 15:10:38.593000+00:00 | 2021-10-06 15:10:38.593000+00:00 | hyperlink|python-sphinx|restructuredtext|docutils|doi | ['https://www.sphinx-doc.org/en/master/usage/extensions/extlinks.html', 'https://gist.github.com/jonls/b910afc46473b02597c4#gistcomment-1986251', 'https://www.sphinx-doc.org/en/master/development/tutorials/helloworld.html'] | 3 |
37,317,447 | <p>What you are looking for is called <strong>object detection</strong>. A starting point can be <a href="https://papers.nips.cc/paper/5207-deep-neural-networks-for-object-detection.pdf" rel="noreferrer">Deep Neural Networks for Object Detection</a> or <a href="https://www.cs.berkeley.edu/~rbg/papers/pami/rcnn_pami.pdf" rel="noreferrer">Region-based Convolutional Networks for Accurate Object Detection and Segmentation</a>.</p>
<p>A similar, but much more difficult task is <strong>instance segmentation</strong>. One of the latest papers I've seen in this area is <a href="http://arxiv.org/abs/1604.05096" rel="noreferrer">Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling</a>.</p>
<p>Instance segmentation is probably the hardest task in Computer Vision. When you're new to machine learning / computer vision, you might first want to do image classification. If you want to go in the direction of instance segmentation, you should continue with semantic segmentation and then instance segmentation.</p>
<p>A simple sliding window approach, where you only predict "car" / "no car" will not work, because in the image the cars are not separated by any "no car".</p> | 2016-05-19 08:09:44.047000+00:00 | 2016-05-19 08:20:23.973000+00:00 | 2016-05-19 08:20:23.973000+00:00 | null | 37,307,198 | <p>I am new to machine learning. I got a task to find the total number of vehicles from an image using machine learning concept. I am using neural network. My image of worst case is given here.</p>
<p><a href="http://i.stack.imgur.com/A36Zw.jpg" rel="nofollow">Traffic Image</a> </p>
<p>I need to find the total number of cars from this image. My idea is to cut this big image into small patches and train the network to count the vehicles in each small patch. Each patch will have a count of fewer than 5. Then, when processing a new image, I could use a sliding window to get the total vehicle count.</p>
<p>I just want to know whether this idea is possible or not OR should I go for feature extraction and training neural network with those features. If possible, whether there is any conditions for the dataset and training.</p> | 2016-05-18 18:10:21.580000+00:00 | 2016-05-19 08:20:23.973000+00:00 | 2016-05-19 08:15:41.797000+00:00 | machine-learning|computer-vision|neural-network|conv-neural-network | ['https://papers.nips.cc/paper/5207-deep-neural-networks-for-object-detection.pdf', 'https://www.cs.berkeley.edu/~rbg/papers/pami/rcnn_pami.pdf', 'http://arxiv.org/abs/1604.05096'] | 3 |
14,774,732 | <p>Here is a benchmark comparing sparse matrix multiplication performance:</p>
<p><a href="http://uk.arxiv.org/abs/1302.1078" rel="nofollow">http://uk.arxiv.org/abs/1302.1078</a></p>
<p>It partly answers my question, but I would rather see more than one algorithm, and I would like to see how portable OpenCL performance is, I will still accept any answers which can provide that information.</p> | 2013-02-08 14:28:09.680000+00:00 | 2013-02-11 13:51:12.690000+00:00 | 2013-02-11 13:51:12.690000+00:00 | null | 14,409,778 | <p>To my surprise, I cannot find a comparison of these products using open source OpenCL benchmark suites, such as <a href="https://www.cs.virginia.edu/~skadron/wiki/rodinia/index.php/Main_Page" rel="nofollow">rodinia</a> and <a href="https://github.com/spaffy/shoc/wiki" rel="nofollow">SHOC</a>. Such a comparison could be more interesting than comparisons of theoretical peak performance, or of performance in simple matrix multiplication kernels, which I have been able to find.</p>
<p>Does anyone know where such results might be available? Failing that, do any stack overflow users have access to one or both products, and the time and inclination to run the benchmarks and share the results? Results for any of the versions of either card would be interesting.</p> | 2013-01-19 00:21:58.040000+00:00 | 2014-04-13 12:34:25.587000+00:00 | 2014-04-13 12:34:25.587000+00:00 | opencl|gpu|benchmarking|nvidia|xeon-phi | ['http://uk.arxiv.org/abs/1302.1078'] | 1 |
66,344,028 | <p>While HuggingFace is very good for NLP, I would not recommend using it for any time series problem. With respect to tokens, there is no reason to use CLS or SEP, as you don't need them. The simplest way would be to feed the model data in the format (batch_size, seq_len, n_features) and have it predict (batch_size, seq_len); in this case that would be an input of (batch_size, 90, 100) and an output tensor of shape (batch_size, 90). That is, unless you think there are temporal dependencies between windows, in which case you could use a rolling historical window. Secondly, I suggest you look at some papers that discuss <a href="https://arxiv.org/pdf/2001.08317.pdf" rel="noreferrer">transformers for time series</a>.</p>
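<p>For concreteness, here is a minimal PyTorch sketch of that shape contract (the layer sizes are arbitrary placeholders, not a tuned architecture):</p>
<pre><code>import torch
import torch.nn as nn

class PerSecondClassifier(nn.Module):
    def __init__(self, n_features=100, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        # project the 100 raw values per second into the model dimension
        self.input_proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # one logit per time step
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                        # x: (batch_size, 90, 100)
        h = self.encoder(self.input_proj(x))     # (batch_size, 90, d_model)
        return self.head(h).squeeze(-1)          # (batch_size, 90) logits

model = PerSecondClassifier()
logits = model(torch.randn(8, 90, 100))          # train against (8, 90) labels with BCEWithLogitsLoss
</code></pre>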
<p>If you are looking for time series libraries that include the transformer check out <a href="https://github.com/AIStream-Peelout/flow-forecast" rel="noreferrer">Flow Forecast</a> or <a href="https://github.com/oliverguhr/transformer-time-series-prediction" rel="noreferrer">transformer time series prediction</a> for actual examples of using the transformer for time series data.</p> | 2021-02-24 02:53:17.187000+00:00 | 2021-02-24 02:53:17.187000+00:00 | null | null | 66,321,085 | <p>I’d like to train a transformer encoder (e.g. BERT) on time-series data for a task that can be modeled as classification. Let met briefly describe the data I’m using before talking about the issue I’m facing.</p>
<p>I’m working with 90 seconds windows, and I have access to 100 values for each second (i.e. 90 vectors of size 100). My goal is to predict a binary label (0 or 1) <strong>for each second</strong> (i.e. produce a final vector of 0s ans 1s of length 90).</p>
<p>My first idea was to model this as a multi-label classification problem, where I would use BERT to produce a vector of size 90 filled with numbers between 0 and 1 and regress using nn.BCELoss and the groundtruth label (y_true looks like [0,0,0,1,1,1,0,0,1,1,1,0...,0]). A simple analogy would be to consider each second as a <em>word</em>, and the 100 values I have access to as the corresponding <em>word embedding</em>. The goal is then to train BERT (from scratch) on these sequences of 100-dim embedding (all sequence lengths are the same: 90).</p>
<p>The problem: when dealing with textual inputs, we simply add the CLS and SEP tokens to the input sequences, and let the tokenizer and the model do the rest of the job. When training directly on embeddings, what should we do to account for CLS and SEP tokens?</p>
<p>One idea I had was to add a 100-dim embedding at position 0 standing for the CLS token, as well as a 100-dim embedding on position 90+1=91 standing for the SEP token. But I don’t know what embeddings I should use for these two tokens. And I’m not sure that’s a good solution either.</p>
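<p>For concreteness, the idea would look something like this (with randomly initialized learnable vectors, which is exactly the part I'm unsure about):</p>
<pre><code>import torch
import torch.nn as nn

cls_emb = nn.Parameter(torch.randn(1, 1, 100))   # stands in for [CLS]
sep_emb = nn.Parameter(torch.randn(1, 1, 100))   # stands in for [SEP]

x = torch.randn(8, 90, 100)                      # a batch of 90-second windows
b = x.size(0)
x_with_specials = torch.cat(
    [cls_emb.expand(b, -1, -1), x, sep_emb.expand(b, -1, -1)], dim=1
)                                                # shape (8, 92, 100)
</code></pre>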
<p>Any ideas?</p>
<p>(I tried asking this question on Huggingface forums but didn't get any response.)</p> | 2021-02-22 18:05:57.940000+00:00 | 2021-02-24 02:53:17.187000+00:00 | 2021-02-22 22:07:25.990000+00:00 | python|deep-learning|time-series|bert-language-model|huggingface-transformers | ['https://arxiv.org/pdf/2001.08317.pdf', 'https://github.com/AIStream-Peelout/flow-forecast', 'https://github.com/oliverguhr/transformer-time-series-prediction'] | 3 |
70,588,450 | <p>In <a href="https://arxiv.org/pdf/2105.07576.pdf" rel="nofollow noreferrer">Rethinking “Batch” in BatchNorm</a>, research carried out by FAIR, the relationship between batch normalization and batch size is discussed. In the graph below you can see that relationship: when you use a smaller batch size, you do not need batch normalization; batch normalization is helpful when you have a bigger batch size. Using a smaller batch size with batch normalization leads to training/testing inconsistency.</p>
<p><a href="https://i.stack.imgur.com/aP50D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aP50D.png" alt="enter image description here" /></a></p>
<p>Classification error under different normalization batch
sizes, with a fixed total batch size of 1024. <strong>Green</strong>: error rate on
unaugmented training set using mini-batch statistics; <strong>Red</strong>: error
rate on validation set using population statistics estimated by PreciseBN; <strong>Blue</strong>: <strong>error rate on validation set</strong> using mini-batch statistics of random batches (with the same normalization batch size
used in training). The gap between red and blue curves is caused
by <em><strong>train-test inconsistency</strong></em>, while the gap between blue and green
curves is the generalization gap on unseen dataset.</p> | 2022-01-05 06:06:37.530000+00:00 | 2022-01-05 06:06:37.530000+00:00 | null | null | 70,579,741 | <p>I'm working on image super-resolution tasks with EDSR as a baseline model. Following EDSR, I'm not using any batch-norm layers in my model. I suddenly came up with a stupid question about batch-sizes.</p>
<p>Currently, I'm training my model with batch-size=32 (as in EDSR). But since I'm not using any batch-normalization technique, I can't see any reason for using batch sizes greater than 1. But I'm not confident in my reasoning, since the authors' implementations <strong>are</strong> using batch sizes greater than 1.</p>
<p>Could someone help me with this? What am I missing?</p> | 2022-01-04 13:44:01.837000+00:00 | 2022-01-05 06:06:37.530000+00:00 | 2022-01-04 14:39:20.807000+00:00 | deep-learning|pytorch|batch-normalization|batchsize|batchnorm | ['https://arxiv.org/pdf/2105.07576.pdf', 'https://i.stack.imgur.com/aP50D.png'] | 2 |
31,076,706 | <p>There is some literature in that area:</p>
<p><a href="http://interscience.in/IJEEE_Vol1Iss3/Paper7.pdf" rel="nofollow">Content Based Image Retrieval Using SVM Algorithm</a></p>
<p><a href="http://link.springer.com/chapter/10.1007/978-1-4614-3872-4_37?no-access=true" rel="nofollow">An Approach for Image Retrieval Using SVM </a></p>
<p><a href="http://www.cs.sfu.ca/~mori/research/papers/lan-eccv12.pdf" rel="nofollow">Image Retrieval with Structured Object Queries Using Latent Ranking SVM </a></p>
<p>As far as I know, the simplest approach is to have a feature extraction phase (e.g. using PCA) and then do a <a href="http://rvlasveld.github.io/blog/2013/07/12/introduction-to-one-class-support-vector-machines/" rel="nofollow">one-class SVM</a> classification.</p>
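<p>A minimal sketch of that pipeline with scikit-learn (assuming the image feature vectors have already been extracted):</p>
<pre><code>import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

# X: feature vectors of the database images, shape (n_images, n_features)
X = np.random.rand(200, 512)

pca = PCA(n_components=50)
X_reduced = pca.fit_transform(X)

# the one-class SVM learns the region of feature space covered by the database images
clf = OneClassSVM(kernel='rbf', gamma='auto', nu=0.1)
clf.fit(X_reduced)

# score a query image: a higher decision_function value means "more similar"
query = pca.transform(np.random.rand(1, 512))
score = clf.decision_function(query)
</code></pre>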
<p>K-NN usually uses euclidean distance anyway so the algorithm is offering you a more consistent decision boundary and a feature extraction phase on top of that. You can see an example <a href="http://arxiv.org/abs/1307.4717" rel="nofollow">here</a></p> | 2015-06-26 15:12:51.073000+00:00 | 2015-06-26 15:12:51.073000+00:00 | null | null | 31,074,838 | <p>Generally CBIR works with Euclidean distance for comparing a query image and a database image feature vectors.</p>
<p>However in math works, I got a source code that instead of Euclidean distance it is done with SVM, like a content based image retrieval using two techniques:</p>
<ol>
<li>Using knn for image retrieval;</li>
<li>Using svm for image retrieval.</li>
</ol>
<p>How does it work?</p> | 2015-06-26 13:44:20.293000+00:00 | 2015-06-26 15:12:51.073000+00:00 | 2015-06-26 14:21:11.767000+00:00 | svm|cbir | ['http://interscience.in/IJEEE_Vol1Iss3/Paper7.pdf', 'http://link.springer.com/chapter/10.1007/978-1-4614-3872-4_37?no-access=true', 'http://www.cs.sfu.ca/~mori/research/papers/lan-eccv12.pdf', 'http://rvlasveld.github.io/blog/2013/07/12/introduction-to-one-class-support-vector-machines/', 'http://arxiv.org/abs/1307.4717'] | 5 |
68,885,046 | <p>First of all, you should note that <code>google/reformer-enwik8</code> is not a properly trained language model and that you will probably not get decent results from fine-tuning it. enwik8 is a compression challenge and <a href="https://arxiv.org/abs/2001.04451" rel="nofollow noreferrer">the reformer authors</a> used this dataset for exactly that purpose:</p>
<blockquote>
<p>To verify that the Reformer can indeed fit large models on a single
core and train fast on long sequences, we train up to 20-layer big
Reformers on enwik8 and imagenet64...</p>
</blockquote>
<p>This is also the reason why they haven't trained a sub-word tokenizer and why the model operates on the character level.</p>
<p>You should also note that the <code>LMHead</code> is usually used for predicting the next token of a sequence (CLM). You probably want to use a token classification head instead (i.e. use an encoder ReformerModel and add a linear layer with 9 classes on top, plus maybe a dropout layer).</p>
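<p>A rough sketch of such a head (note: I'm assuming here that the hidden states returned by HF's Reformer are 2 * hidden_size wide because of the reversible residual streams; check <code>last_hidden_state.shape[-1]</code> to be sure):</p>
<pre class="lang-py prettyprint-override"><code>import torch
from transformers import ReformerModel

class ReformerForTokenClassification(torch.nn.Module):
    def __init__(self, num_labels=9):
        super().__init__()
        self.encoder = ReformerModel.from_pretrained("google/reformer-enwik8")
        hidden = 2 * self.encoder.config.hidden_size  # assumed output width
        self.dropout = torch.nn.Dropout(0.1)
        self.classifier = torch.nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask=None):
        out = self.encoder(input_ids, attention_mask=attention_mask)
        h = self.dropout(out.last_hidden_state)   # (batch, seq_len, hidden)
        return self.classifier(h)                 # (batch, seq_len, num_labels) logits
</code></pre>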
<p>Anyway, in case you want to try it still, you can do the following to reduce the memory footprint of the <code>google/reformer-enwik8</code> reformer:</p>
<ol>
<li>Reduce the number of hashes during training:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from transformers import ReformerConfig, ReformerModel
conf = ReformerConfig.from_pretrained('google/reformer-enwik8')
conf.num_hashes = 2 # or maybe even 1
model = ReformerModel.from_pretrained("google/reformer-enwik8", config=conf)
</code></pre>
<p>After you have finetuned your model, you can increase the number of hashes again to increase the performance (compare Table 2 of the reformer paper).</p>
<ol start="2">
<li>Replace axial-position embeddings:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from transformers import ReformerConfig, ReformerModel
conf = ReformerConfig.from_pretrained('google/reformer-enwik8')
conf.axial_pos_embds = False
model = ReformerModel.from_pretrained("google/reformer-enwik8", config=conf)
</code></pre>
<p>This will replace the learned axial positional embeddings with learnable position embeddings like Bert's and do not require the full sequence length of 65536. They are untrained and randomly initialized (i.e. consider a longer training).</p> | 2021-08-22 20:50:36.847000+00:00 | 2021-08-22 21:36:37.800000+00:00 | 2021-08-22 21:36:37.800000+00:00 | null | 68,742,863 | <p>I'm trying to fine-tune the ReformerModelWithLMHead (google/reformer-enwik8) for NER. I used the padding sequence length same as in the encode method (max_length = max([len(string) for string in list_of_strings])) along with attention_masks. And I got this error:</p>
<p><strong>ValueError: If training, make sure that config.axial_pos_shape factors: (128, 512) multiply to sequence length. Got prod((128, 512)) != sequence_length: 2248. You might want to consider padding your sequence length to 65536 or changing config.axial_pos_shape.</strong></p>
<ul>
<li>When I changed the sequence length to 65536, my Colab session crashed, since every input was then padded to length 65536.</li>
<li>As for the second option (changing config.axial_pos_shape), I cannot change it.</li>
</ul>
<p>I would like to know: is there any way to change config.axial_pos_shape while fine-tuning the model? Or am I missing something in encoding the input strings for reformer-enwik8?</p>
<p>Thanks!</p>
<p><strong>Question Update: I have tried the following methods:</strong></p>
<ol>
<li>By giving parameters at the time of model instantiation:</li>
</ol>
<blockquote>
<p>model = transformers.ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8", num_labels=9, max_position_embeddings=1024, axial_pos_shape=[16,64], axial_pos_embds_dim=[32,96],hidden_size=128)</p>
</blockquote>
<p>It gives me the following error:</p>
<blockquote>
<p>RuntimeError: Error(s) in loading state_dict for ReformerModelWithLMHead:
size mismatch for reformer.embeddings.word_embeddings.weight: copying a param with shape torch.Size([258, 1024]) from checkpoint, the shape in current model is torch.Size([258, 128]).
size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([128, 1, 256]) from checkpoint, the shape in current model is torch.Size([16, 1, 32]).</p>
</blockquote>
<p>This is quite a long error.</p>
<ol start="2">
<li>Then I tried this code to update the config:</li>
</ol>
<blockquote>
<p>model1 = transformers.ReformerModelWithLMHead.from_pretrained('google/reformer-enwik8', num_labels = 9)</p>
</blockquote>
<h4>Reshape Axial Position Embeddings layer to match desired max seq length</h4>
<pre><code>model1.reformer.embeddings.position_embeddings.weights[1] = torch.nn.Parameter(model1.reformer.embeddings.position_embeddings.weights[1][0][:128])
</code></pre>
<h4>Update the config file to match custom max seq length</h4>
<pre><code>model1.config.axial_pos_shape = 16,128
model1.config.max_position_embeddings = 16*128 #2048
model1.config.axial_pos_embds_dim= 32,96
model1.config.hidden_size = 128
output_model_path = "model"
model1.save_pretrained(output_model_path)
</code></pre>
<p>By this implementation, I am getting this error:</p>
<blockquote>
<p>RuntimeError: The expanded size of the tensor (512) must match the existing size (128) at non-singleton dimension 2. Target sizes: [1, 128, 512, 768]. Tensor sizes: [128, 768]</p>
</blockquote>
<p>This is because the updated size/shape doesn't match the original config parameters of the pretrained model. The original parameters are: axial_pos_shape = 128,512; max_position_embeddings = 128*512 (= 65536); axial_pos_embds_dim = 256,768; hidden_size = 1024.</p>
<p>Is this the right way to change the config parameters, or do I have to do something else?</p>
<p>Is there any example where a ReformerModelWithLMHead('google/reformer-enwik8') model is fine-tuned?</p>
<p>My main code implementation is as follows:</p>
<pre><code>class REFORMER(torch.nn.Module):
def __init__(self):
super(REFORMER, self).__init__()
self.l1 = transformers.ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8", num_labels=9)
def forward(self, input_ids, attention_masks, labels):
output_1= self.l1(input_ids, attention_masks, labels = labels)
return output_1
model = REFORMER()
def train(epoch):
model.train()
for _, data in enumerate(training_loader,0):
ids = data['input_ids'][0] # input_ids from encode method of the model https://huggingface.co/google/reformer-enwik8#:~:text=import%20torch%0A%0A%23%20Encoding-,def%20encode,-(list_of_strings%2C%20pad_token_id%3D0
input_shape = ids.size()
targets = data['tags']
print("tags: ", targets, targets.size())
least_common_mult_chunk_length = 65536
padding_length = least_common_mult_chunk_length - input_shape[-1] % least_common_mult_chunk_length
#pad input
input_ids, inputs_embeds, attention_mask, position_ids, input_shape = _pad_to_mult_of_chunk_length(self=model.l1,
input_ids=ids,
inputs_embeds=None,
attention_mask=None,
position_ids=None,
input_shape=input_shape,
padding_length=padding_length,
padded_seq_length=None,
device=None,
)
outputs = model(input_ids, attention_mask, labels=targets) # sending inputs to the forward method
print(outputs)
loss = outputs.loss
logits = outputs.logits
if _%500==0:
print(f'Epoch: {epoch}, Loss: {loss}')
for epoch in range(1):
train(epoch)
</code></pre> | 2021-08-11 13:20:44.563000+00:00 | 2021-08-22 21:36:37.800000+00:00 | 2021-08-18 15:04:04.067000+00:00 | python|nlp|pytorch|huggingface-transformers|named-entity-recognition | ['https://arxiv.org/abs/2001.04451'] | 1 |