a_id (int64) | a_body (string) | a_creation_date (string) | a_last_activity_date (string) | a_last_edit_date (string, nullable) | a_tags (float64) | q_id (int64) | q_body (string) | q_creation_date (string) | q_last_activity_date (string) | q_last_edit_date (string, nullable) | q_tags (string) | _arxiv_links (string) | _n_arxiv_links (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
66,070,459 | <p><strong>I could be wrong,</strong> but it should not matter whether it is a classification or a regression problem. Think about it mathematically.</p>
<p><strong>Generally speaking</strong>, having <code>softmax</code> in the hidden layers is not preferred because we want every neuron to be independent of the others. If you apply <code>softmax</code>, they become linearly dependent, since the activation forces their sum to equal one. That does not mean it is never used; you can refer to <a href="https://arxiv.org/pdf/1410.5401v2.pdf" rel="nofollow noreferrer">this paper</a>.</p>
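<p>As a minimal illustration of that sum-to-one constraint, a quick numpy sketch:</p>
<pre><code>import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

out = softmax(np.array([1.0, 2.0, 3.0]))
print(out)          # roughly [0.09, 0.24, 0.67]
print(out.sum())    # 1.0 -- pushing one activation up must push the others down
</code></pre>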
<p>Compare this with an advanced activation such as <code>LeakyReLU</code>: there the neurons stay under control, because the alpha rate can be tuned. With <code>softmax</code> that is not possible.</p>
<p><strong>Now back to the question:</strong> I think this depends on the dataset. The model is able to generalize this dataset with <code>softmax</code>, but I don't think it will always work that way. As mentioned above, you are making the neurons linearly dependent on each other, so if one neuron learns something wrong, it will affect the whole network's generalization, because the other values will be affected as well.</p>
<p><strong>Edit</strong>: I tested two models. With some data, <code>softmax</code> worked as well as <code>relu</code>. But the point is that all neurons become dependent on each other, and that is not a risk worth taking, especially in large networks.</p>
<p>Data:</p>
<pre><code>import numpy as np

X_train = np.random.randn(10000,20)
y_train = np.random.randn(10000,1)
X_test = np.random.randn(5000,20)
y_test = np.random.randn(5000,1)
</code></pre>
<p>With <strong>Softmax</strong>:</p>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(512, activation='relu',input_shape=(20,)))
model.add(Dense(256,activation='softmax'))
model.add(Dense(512,activation='softmax'))
model.add(Dense(256,activation='softmax'))
model.add(Dense(128,activation='softmax'))
model.add(Dense(1,activation='linear'))
model.compile(loss='mse',optimizer='adam')
model.fit(X_train, y_train, epochs = 16, validation_data= (X_test, y_test))
</code></pre>
<p><strong>Result</strong>: The model was not able to learn this data. The loss stopped improving and stayed stuck in the same region; it seems as if one neuron wants to learn but the others are not letting it.</p>
<pre><code>Epoch 15/16
313/313 [==============================] - 1s 3ms/step - loss: 1.0259 - val_loss: 1.0269
Epoch 16/16
313/313 [==============================] - 1s 3ms/step - loss: 1.0020 - val_loss: 1.0271
</code></pre>
<p>With <strong>relu</strong>:</p>
<pre><code>model = Sequential()
model.add(Dense(512, activation='relu',input_shape=(20,)))
model.add(Dense(256,activation='relu'))
model.add(Dense(512,activation='relu'))
model.add(Dense(256,activation='relu'))
model.add(Dense(128,activation='relu'))
model.add(Dense(1,activation='linear'))
model.compile(loss='mse',optimizer='adam')
model.fit(X_train, y_train, epochs = 16, validation_data= (X_test, y_test))
# Obviously overfitting, but that's not the point here.
</code></pre>
<p><strong>Result</strong>: The model with <code>relu</code> was able to fit the data.</p>
<pre><code>Epoch 15/16
313/313 [==============================] - 1s 3ms/step - loss: 0.5580 - val_loss: 1.3091
Epoch 16/16
313/313 [==============================] - 1s 3ms/step - loss: 0.4808 - val_loss: 1.3290
</code></pre> | 2021-02-05 20:56:20.433000+00:00 | 2021-02-05 21:27:39.437000+00:00 | 2021-02-05 21:27:39.437000+00:00 | null | 66,069,636 | <p>I have done manual hyperparameter optimization for ML models before and always defaulted to <em>tanh</em> or <em>relu</em> as hidden layer activation functions. Recently, I started trying out Keras Tuner to optimize my architecture and accidentally left <em>softmax</em> as a choice for hidden layer activation.</p>
<p>I have only ever seen <em>softmax</em> used in classification models in the output layer, never as a hidden layer activation, especially for regression. This model has really good performance in predicting temperature, but I am having a tough time justifying using this model.</p>
<p>I have seen posts like <a href="https://stackoverflow.com/questions/37588632/why-use-softmax-only-in-the-output-layer-and-not-in-hidden-layers">this one</a> which talk about why it should be used only for the output, but is there any justification in my case? I am showing the overall architecture below, for reference.</p>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LayerNormalization

model = Sequential()
model.add(Dense(648, activation='relu',input_shape=(train_x.shape[1],)))
model.add(Dropout(0.3))
model.add(LayerNormalization())
model.add(Dense(152,activation='relu'))
model.add(Dropout(0.15))
model.add(LayerNormalization())
model.add(Dense(924,activation='softsign'))
model.add(Dropout(0.37))
model.add(LayerNormalization())
model.add(Dense(248,activation='softmax'))
model.add(Dropout(0.12))
model.add(LayerNormalization())
model.add(Dense(1,activation='linear'))
model.compile(loss='mse',optimizer='Adam')
</code></pre> | 2021-02-05 19:43:28.030000+00:00 | 2021-02-05 21:27:39.437000+00:00 | null | python|keras|softmax | ['https://arxiv.org/pdf/1410.5401v2.pdf'] | 1 |
13,017,935 | <p>Daniel Lemire has a couple of papers on pre-sorting to increase compression and performance.
Here's the latest: <a href="http://arxiv.org/abs/1207.2189" rel="nofollow">http://arxiv.org/abs/1207.2189</a></p>
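<p>As a toy illustration of why pre-sorting helps (a quick Python sketch, not taken from the linked papers): sorting makes a bit column much "runnier", which is exactly what word-aligned run-length schemes such as WAH/EWAH exploit.</p>
<pre><code>import random

bits = [random.randint(0, 1) for _ in range(1000)]

def runs(b):  # the number of runs is a rough proxy for the compressed size
    return 1 + sum(b[i] != b[i - 1] for i in range(1, len(b)))

print(runs(bits))          # roughly 500 runs for random bits
print(runs(sorted(bits)))  # 2 runs after sorting
</code></pre>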
<p>You might look at his EWAH variant as well.</p>
<p>The prevailing feeling is that bitmap compression techniques work great when the dataset changes slowly, as most implementations discard and rebuild the index on each change. For datasets that change more often, traditional index approaches (such as B-Tree variants) are still king.</p>
<p>Implementations: <a href="https://github.com/lemire/javaewah" rel="nofollow">https://github.com/lemire/javaewah</a> and <a href="https://github.com/lemire/EWAHBoolArray" rel="nofollow">https://github.com/lemire/EWAHBoolArray</a></p> | 2012-10-22 18:59:16.480000+00:00 | 2013-02-12 22:35:03.423000+00:00 | 2013-02-12 22:35:03.423000+00:00 | null | 11,924,954 | <p>I have been developing a word-aligned bitmap compression algorithm for data indexing. The algorithm is based on the WAH compression research paper. Compressed bitmaps perform well on bit-wise operations and are very space efficient, but modifying a compressed bitmap is not very efficient, because a modification requires splitting a compressed word-sized block, and the resulting memmove calls cause a performance bottleneck. </p>
<blockquote>
<p>Please look at the following example.</p>
<p>Example data set:
[1000000, 34, 9, 23456, 6543, 10000000, 23440004, 100, 345]</p>
</blockquote>
<p>Performance degrades due to the random nature of the data set; in a real application scenario this can happen.</p>
<ol>
<li>Can anyone give me a hint on how to overcome this performance problem?</li>
</ol> | 2012-08-12 19:01:42.053000+00:00 | 2013-02-12 22:35:03.423000+00:00 | null | c++|database|performance|compression|bit-manipulation | ['http://arxiv.org/abs/1207.2189', 'https://github.com/lemire/javaewah', 'https://github.com/lemire/EWAHBoolArray'] | 3 |
64,773,818 | <p>Many people use "Doc2Vec" to refer to the word2vec-like algorithm introduced by the paper titled <a href="https://arxiv.org/abs/1405.4053" rel="nofollow noreferrer">Distributed Representations of Sentences and Documents</a> (by Le & Mikolov). That paper calls the algorithm 'Paragraph Vector', without using the name 'Doc2Vec', and indeed introduces an extra vector per document, like you describe. (That is, the doc-vector is trained a bit like a 'floating' pseudoword-vector that contributes to the input 'context' for every training prediction in that document.)</p>
<p>I'm not familiar with R or that R <code>word2vec</code> package, but from the docs you forwarded, it does <strong>not</strong> sound like that <code>doc2vec</code> function implements the 'Paragraph Vector' algorithm that others call 'Doc2Vec'. In particular:</p>
<ul>
<li><p>'Paragraph Vector' doc-vectors are <strong>not</strong> a simple sum-of-word-vectors</p>
</li>
<li><p>'Paragraph Vector' doc-vectors are created by a separate word2vec-like training process that co-creates any necessary word-vectors simultaneously with that training. Specifically: that process does <strong>not</strong> normally use some other pre-trained word-vectors as input, nor create word-vectors as a 1st step. (And further: the PV-DBOW option of the 'Paragraph Vector' paper doesn't create traditional word-vectors at all.)</p>
</li>
</ul>
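<p>For reference, gensim's <code>Doc2Vec</code> class does implement the Le & Mikolov 'Paragraph Vector' algorithm, including the per-document vector trained jointly with the word-vectors. A minimal sketch (the hyperparameters here are arbitrary choices of mine):</p>
<pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [TaggedDocument(words=["some", "tokenised", "text"], tags=[0]),
          TaggedDocument(words=["another", "short", "document"], tags=[1])]

# trains per-document vectors jointly with word-vectors (PV-DM by default)
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

vec = model.infer_vector(["a", "new", "document"])   # 50-dimensional doc-vector
</code></pre>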
<p>It appears that function is poorly-named, and if you need to use the actual 'Paragraph Vector' algorithm, you will need to look elsewhere.</p> | 2020-11-10 17:32:04.837000+00:00 | 2020-11-11 20:12:59.113000+00:00 | 2020-11-11 20:12:59.113000+00:00 | null | 64,772,221 | <p>I am a student (computer science). This is my first question in stackoverflow. I really would appreciate your help! (The package I am referring to is called 'word2vec', thats why the tags/title are a bit confusing to choose.)</p>
<p>In the description of the doc2vec function (here <a href="https://cran.r-project.org/web/packages/word2vec/word2vec.pdf" rel="nofollow noreferrer">https://cran.r-project.org/web/packages/word2vec/word2vec.pdf</a>) it says:</p>
<blockquote>
<p>Document vectors are the sum of the vectors of the words which are part of the document standardised by the scale of the vector space. This scale is the sqrt of the average inner product of the vector
elements.</p>
</blockquote>
<p>From what I understood, doc2vec adds one additional vector for every paragraph, which, in my eyes, seems to be different from the above description.</p>
<p>Is my understanding of doc2vec correct, or close enough?
And: Does the cited implementation work like the doc2vec-algorithm?</p> | 2020-11-10 15:52:26.460000+00:00 | 2020-11-21 08:27:51.943000+00:00 | 2020-11-21 08:27:51.943000+00:00 | r|word2vec|doc2vec | ['https://arxiv.org/abs/1405.4053'] | 1 |
43,505,854 | <p>A way to get around this without rooting the phone is to send your packets via multicast UDP*. These packets will make it from GO1 to GO2. </p>
<p>There are some side effects to this: </p>
<ul>
<li><p>To use this for networking you must perform encapsulation and routing at the OSI Application level (not efficient). </p></li>
<li><p>You will also need to route based on MAC addresses since every device has the same 192.168.49.1 address.</p></li>
<li><p>"It is important to note that the multicast socket encapsulates a one-to-many unicast communication and, as a result of this, cannot fully utilize the total available WiFi and WiFi Direct bandwidth" *</p></li>
</ul>
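<p>For illustration, here is the bare idea of a multicast sender and receiver sketched in Python (the group address and port below are arbitrary choices of mine; on Android you would do the equivalent with the platform's multicast socket APIs):</p>
<pre><code>import socket

GROUP, PORT = "239.255.42.99", 4242   # arbitrary multicast group and port

# sender (e.g. on GO1): the datagram reaches every device that joined GROUP
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
tx.sendto(b"hello from GO1", (GROUP, PORT))

# receiver (e.g. on GO2): join the multicast group, then read datagrams
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
data, addr = rx.recvfrom(1024)
</code></pre>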
<p>Something else worth noting: </p>
<ul>
<li>As you scale up the number of GOs, you will run into a problem of all nodes operating on the same wifi channel. This isn't a problem with a few devices, but with hundreds of devices, it will be a huge problem.</li>
</ul>
<p>*This method was mentioned in Colin Funai, Cristiano Tapparello, and Wendi Heinzelman paper titled "Supporting Multi-hop Device-to-Device Networks Through WiFi Direct Multi-group Networking" found here: <a href="https://arxiv.org/pdf/1601.00028.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1601.00028.pdf</a></p> | 2017-04-19 20:53:29.257000+00:00 | 2017-04-19 20:53:29.257000+00:00 | null | null | 36,867,687 | <p>I have two Android KitKat phones, both are running WiFi-Direct groups as Group Owners, let's call them GO1 and GO2</p>
<p>I managed to connect GO1 as a legacy client to GO2 without breaking any of the (previously set) wifi-direct groups.</p>
<p>The problem is that, as you might know, the GO IP address is hardcoded in Android source, and is set to 192.168.49.1</p>
<p>Therefore, both of my devices, GO1 and GO2 have the same IP address (**)... each on his local network.</p>
<p>My app is both client and server at the same time. But both networks are using the same IP range (192.168.49.XXX), which, apparently, I cannot change.</p>
<p>As a result I cannot create a TCP connection between them if they are both hosting a WiFi-Direct Group, since any device will connect to itself when trying to connect to 192.168.49.1</p>
<p>So the questions are:</p>
<ul>
<li>Is there a way to change the IP range used in Wifi-Direct?</li>
<li>Is there a way to use IPv6 instead of IPv4 in Wifi-Direct?</li>
<li>Can any of this be done without rooting the phone?</li>
<li>Any other suggestion?</li>
</ul>
<p>** : Actually, because GO1 is connecting as a legacy client to GO2, GO1 is known as 192.168.49.227 (for example) to GO2, and GO2 is known as 192.168.49.1 to GO1. But because GO1 is ALSO a GO, it is also known as 192.168.49.1 to its own clients (and itself).</p>
48,436,520 | <p>You might be interested in my paper <a href="https://arxiv.org/pdf/1801.07779.pdf" rel="noreferrer">The WiLI benchmark dataset for written
language identification</a>. I also benchmarked a couple of tools.</p>
<p>TL;DR:</p>
<ul>
<li>CLD-2 is pretty good and extremely fast</li>
<li><a href="https://pypi.python.org/pypi/langdetect?" rel="noreferrer">lang-detect</a> is a tiny bit better, but much slower</li>
<li>langid is good, but CLD-2 and lang-detect are much better</li>
<li>NLTK's Textcat is neither efficient nor effective.</li>
</ul>
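<p>If you just want a quick filter from Python, the <a href="https://pypi.python.org/pypi/langdetect?" rel="noreferrer">lang-detect</a> package listed above is enough; a minimal sketch (note that predictions on very short strings can be unreliable):</p>
<pre><code>from langdetect import detect   # pip install langdetect

docs = ["this is some text written in English",
        "this is some more text written in English",
        "Ce n'est pas en anglais"]

english_only = [d for d in docs if detect(d) == "en"]
</code></pre>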
<p>You can install <a href="https://github.com/MartinThoma/lidtk" rel="noreferrer"><code>lidtk</code></a> and classify languages:</p>
<pre><code>$ lidtk cld2 predict --text "this is some text written in English"
eng
$ lidtk cld2 predict --text "this is some more text written in English"
eng
$ lidtk cld2 predict --text "Ce n'est pas en anglais"
fra
</code></pre> | 2018-01-25 05:58:05.390000+00:00 | 2018-02-06 05:43:02.820000+00:00 | 2018-02-06 05:43:02.820000+00:00 | null | 43,377,265 | <p>I am using both <a href="http://www.nltk.org/" rel="noreferrer">Nltk</a> and <a href="http://scikit-learn.org/stable/" rel="noreferrer">Scikit Learn</a> to do some text processing. However, within my list of documents I have some documents that are not in English. For example, the following could be true:</p>
<pre><code>[ "this is some text written in English",
"this is some more text written in English",
"Ce n'est pas en anglais" ]
</code></pre>
<p>For the purposes of my analysis, I want all sentences that are not in English to be removed as part of pre-processing. However, is there a good way to do this? I have been Googling, but cannot find anything specific that will let me recognize if strings are in English or not. Is this something that is not offered as functionality in either <code>Nltk</code> or <code>Scikit learn</code>? <b>EDIT</b> I've seen questions both like <a href="https://stackoverflow.com/questions/29099621/how-to-find-out-wether-a-word-exists-in-english-using-nltk">this</a> and <a href="https://stackoverflow.com/questions/3788870/how-to-check-if-a-word-is-an-english-word-with-python">this</a> but both are for individual words... Not a "document". Would I have to loop through every word in a sentence to check if the whole sentence is in English?</p>
<p>I'm using Python, so libraries that are in Python would be preferable, but I can switch languages if needed, just thought that Python would be the best for this.</p> | 2017-04-12 18:41:32.477000+00:00 | 2022-04-14 01:45:44.743000+00:00 | 2017-11-29 08:27:32.963000+00:00 | python|scikit-learn|nlp|nltk | ['https://arxiv.org/pdf/1801.07779.pdf', 'https://pypi.python.org/pypi/langdetect?', 'https://github.com/MartinThoma/lidtk'] | 3 |
51,525,403 | <p><strong>Segmentation Accuracy</strong></p>
<p>This is a pretty common problem addressed in the image segmentation literature; e.g., <a href="https://stackoverflow.com/questions/13974167/how-to-test-accuracy-of-segmentation-algorithm">here is a StackOverflow post</a>.</p>
<p>One common approach is to consider the ratio of "correct pixels" to "incorrect pixels," which is common in <strong>image segmentation</strong> for the safety domain, e.g., <a href="https://arxiv.org/abs/1703.06870" rel="nofollow noreferrer">Mask RCNN</a>, <a href="http://www.cs.cmu.edu/~aayushb/pixelNet/" rel="nofollow noreferrer">PixelNet</a>. </p>
<p>Treating it as more of an <strong>object detection</strong> task, you could take the overlap of the hull of the objects and just measure <a href="https://dspace2.flinders.edu.au/xmlui/bitstream/handle/2328/27165/Powers%20Evaluation.pdf?sequence=1&isAllowed=y" rel="nofollow noreferrer">accuracy</a> (commonly broken down into <a href="https://en.wikipedia.org/wiki/Precision_and_recall" rel="nofollow noreferrer">precision, recall</a>, <a href="https://en.wikipedia.org/wiki/F1_score" rel="nofollow noreferrer">f-score</a>, and other measures with <a href="https://dspace2.flinders.edu.au/xmlui/handle/2328/27165" rel="nofollow noreferrer">various bias/skews</a>). This allows you to produce an <a href="https://en.wikipedia.org/wiki/Receiver_operating_characteristic" rel="nofollow noreferrer">ROC curve</a> that can be calibrated for false positives/false negatives.</p>
<p>There is no domain-agnostic consensus on what's correct. <a href="http://www.cvlibs.net/datasets/kitti/eval_semantics.php" rel="nofollow noreferrer">KITTI provides both.</a></p>
<p>Mask RCNN is open-source state-of-the-art, and provides implementations
<strong>in python</strong> of</p>
<ul>
<li><a href="https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/utils.py#L665" rel="nofollow noreferrer">Computing image matching between segmented and original</a></li>
<li><a href="https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/visualize.py#L171" rel="nofollow noreferrer">Displaying the differences</a></li>
</ul>
<p>In your domain (medicine), standard statistical rules apply. Use a holdout set. Cross validate. Etc. (*)</p>
<p><em>Note:</em> although the literature space is dauntingly large, I'd caution you to take a look at some domain-relevant papers, as they may take fewer "statistical short cuts" than other vision (digit recognition e.g.) projects accept. </p>
<ul>
<li>"<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4533825/" rel="nofollow noreferrer">Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool</a>" provides some summary methods in your your domain </li>
<li>"<a href="https://www.annualreviews.org/doi/abs/10.1146/annurev.bioeng.2.1.315" rel="nofollow noreferrer">Current methods in image segmentation</a>" has about 2500 citations but is a little older. </li>
<li>"<a href="https://aapm.onlinelibrary.wiley.com/doi/abs/10.1118/1.597000" rel="nofollow noreferrer">Review of MR image segmentation techniques using pattern recognition</a>" is a little older still and will get you safely into "traditional" vision models. </li>
<li><a href="https://pubs.rsna.org/doi/abs/10.1148/radiology.218.2.r01fe44586" rel="nofollow noreferrer">Automated Segmentation of MR Images of Brain Tumors</a> is largely about its segmentation validation process </li>
</ul>
<hr>
<p><strong>Python</strong></p>
<p>Besides the mask rcnn links above, <a href="http://scikit-learn.org/stable/" rel="nofollow noreferrer">scikit-learn</a> provides some extremely user friendly tools and is considered part of the standard science "stack" for python.</p>
<p>Implementing the difference between images in python is trivial (using numpy). Here's an overkill <a href="https://stackoverflow.com/questions/189943/how-can-i-quantify-difference-between-two-images">SO link</a>. </p>
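<p>For example, pixel accuracy and intersection-over-union between a predicted binary mask and a ground-truth mask take only a few lines (a sketch, assuming both masks are HxW numpy arrays):</p>
<pre><code>import numpy as np

def pixel_accuracy(pred, truth):
    return (pred == truth).mean()           # fraction of correctly labelled pixels

def iou(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0  # two empty masks count as a perfect match
</code></pre>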
<p>Bounding box intersection in python <a href="https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/" rel="nofollow noreferrer">is easy to implement on one's own</a>; I'd use a library like <a href="https://stackoverflow.com/questions/14697442/faster-way-of-polygon-intersection-with-shapely">shapely if you want to measure general polygon intersection</a>.</p>
<p>Scikit-learn has some nice machine-learning evaluation tools, for example,</p>
<ul>
<li><a href="http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html" rel="nofollow noreferrer">ROC curves</a></li>
<li><a href="http://scikit-learn.org/stable/modules/cross_validation.html" rel="nofollow noreferrer">Cross validation</a></li>
<li><a href="http://scikit-learn.org/stable/model_selection.html" rel="nofollow noreferrer">Model selection</a></li>
<li><a href="http://scikit-learn.org/stable/modules/classes.html" rel="nofollow noreferrer">A million others</a></li>
</ul>
<hr>
<p><strong>Literature Searching</strong></p>
<p>One reason that you may have trouble searching for the answer is because you're trying to measure performance of an unsupervised method, clustering, in a <strong>supervised learning</strong> arena. "Clusters" are fundamentally under-defined in mathematics (**). You want to be looking at the supervised learning literature for accuracy measures.</p>
<p>There is literature on unsupervised learning/clustering, too, which looks for topological structure, generally. <a href="https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html" rel="nofollow noreferrer">Here's a very introductory summary</a>. I don't think that is what you want.</p>
<p>A common problem, especially at scale, is that supervised methods require labels, which can be <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Zlateski_On_the_Importance_CVPR_2018_paper.pdf" rel="nofollow noreferrer">time consuming to produce accurately</a> for dense segmentation. Object detection <a href="http://vision.stanford.edu/documents/Russakovsky_PhD_thesis_2015.pdf" rel="nofollow noreferrer">makes it a little easier</a>.</p>
<p>There are some existing datasets for medicine (<a href="https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/" rel="nofollow noreferrer">[1]</a>, <a href="https://www.kaggle.com/paultimothymooney/identification-and-segmentation-of-nuclei-in-cells" rel="nofollow noreferrer">[2]</a>, e.g.) and <a href="https://arxiv.org/pdf/1702.03407.pdf" rel="nofollow noreferrer">some ongoing research in label-less metrics</a>. If none of these are options for you, then you may have to revert to considering it an unsupervised problem, but evaluation becomes very different in scope and utility.</p>
<hr>
<p><strong>Footnotes</strong></p>
<p>[*] Vision people sometimes skip cross validation even though they shouldn't, mainly because the models are slow to fit and they're a lazy bunch. <em>Please don't skip a <a href="https://stats.stackexchange.com/questions/19048/what-is-the-difference-between-test-set-and-validation-set">train/test/validation split</a></em>, or your results may be dangerously useless</p>
<p>[**] You can find all sorts of "formal" definitions, but never two people to agree on which one is correct or most useful. <a href="http://www.stat.cmu.edu/~larry/=sml/clustering.pdf" rel="nofollow noreferrer">Here's denser reading</a></p> | 2018-07-25 18:21:27.220000+00:00 | 2018-08-03 18:39:27.487000+00:00 | 2018-08-03 18:39:27.487000+00:00 | null | 51,525,036 | <p>I have implemented several clustering algorithms on an image dataset.
I'm interested in deriving the success rate of the clustering. I have to detect the tumor area: in the original image I know where the tumor is located, and I would like to compare the two images and obtain a percentage of success.
See the following images:</p>
<p>Original image: I know the position of cancer<img src="https://i.stack.imgur.com/TlMJl.png" alt=""></p>
<p>Image after clustering algorithm<img src="https://i.stack.imgur.com/hDT1b.png" alt=""></p>
<p>I'm using python 2.7.</p> | 2018-07-25 17:56:05.493000+00:00 | 2018-08-03 18:39:27.487000+00:00 | 2018-07-27 05:45:45.623000+00:00 | python|image-processing|cluster-analysis|analysis | ['https://stackoverflow.com/questions/13974167/how-to-test-accuracy-of-segmentation-algorithm', 'https://arxiv.org/abs/1703.06870', 'http://www.cs.cmu.edu/~aayushb/pixelNet/', 'https://dspace2.flinders.edu.au/xmlui/bitstream/handle/2328/27165/Powers%20Evaluation.pdf?sequence=1&isAllowed=y', 'https://en.wikipedia.org/wiki/Precision_and_recall', 'https://en.wikipedia.org/wiki/F1_score', 'https://dspace2.flinders.edu.au/xmlui/handle/2328/27165', 'https://en.wikipedia.org/wiki/Receiver_operating_characteristic', 'http://www.cvlibs.net/datasets/kitti/eval_semantics.php', 'https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/utils.py#L665', 'https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/visualize.py#L171', 'https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4533825/', 'https://www.annualreviews.org/doi/abs/10.1146/annurev.bioeng.2.1.315', 'https://aapm.onlinelibrary.wiley.com/doi/abs/10.1118/1.597000', 'https://pubs.rsna.org/doi/abs/10.1148/radiology.218.2.r01fe44586', 'http://scikit-learn.org/stable/', 'https://stackoverflow.com/questions/189943/how-can-i-quantify-difference-between-two-images', 'https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/', 'https://stackoverflow.com/questions/14697442/faster-way-of-polygon-intersection-with-shapely', 'http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html', 'http://scikit-learn.org/stable/modules/cross_validation.html', 'http://scikit-learn.org/stable/model_selection.html', 'http://scikit-learn.org/stable/modules/classes.html', 'https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html', 'http://openaccess.thecvf.com/content_cvpr_2018/papers/Zlateski_On_the_Importance_CVPR_2018_paper.pdf', 'http://vision.stanford.edu/documents/Russakovsky_PhD_thesis_2015.pdf', 'https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/', 'https://www.kaggle.com/paultimothymooney/identification-and-segmentation-of-nuclei-in-cells', 'https://arxiv.org/pdf/1702.03407.pdf', 'https://stats.stackexchange.com/questions/19048/what-is-the-difference-between-test-set-and-validation-set', 'http://www.stat.cmu.edu/~larry/=sml/clustering.pdf'] | 31 |
47,011,724 | <p>This is normal behaviour and happens because your network is too confident in the quality of the input and doesn't learn to rely on the past (on its internal state) enough, relying solely on the input. When you apply the network to its own output in the generation setting, the input to the network is not as reliable as it was in the training or validation case, where it got the true input. </p>
<p>I have two possible solutions for you:</p>
<ul>
<li><p>The first is the simplest but less intuitive one: Add a little bit of Gaussian noise to your input. This will force the network to rely more on its hidden state. </p></li>
<li><p>The second is the most obvious solution: during training, feed it not the true input but its generated output with a certain probability p. Start out training with p=0 and gradually increase it, so that it learns to generate longer and longer sequences independently. This is called scheduled sampling, and you can read more about it here: <a href="https://arxiv.org/abs/1506.03099" rel="noreferrer">https://arxiv.org/abs/1506.03099</a> (a rough sketch follows below). </p></li>
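<li><p>A rough sketch of that second idea (my own illustration, written around a hypothetical one-step-ahead LSTM predictor; <code>p</code> is the probability of feeding back the model's own prediction instead of the ground truth):</p>
<pre><code>import random
import torch
import torch.nn as nn

class Predictor(nn.Module):
    def __init__(self, hidden=51):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.head = nn.Linear(hidden, 1)
    def step(self, x, state=None):
        state = self.cell(x, state)
        return self.head(state[0]), state

def train_step(model, seq, p, optimizer):
    # seq: (batch, time) tensor of target values, e.g. sampled sine waves
    criterion = nn.MSELoss()
    state, prev, loss = None, seq[:, 0:1], 0.0
    for t in range(1, seq.size(1)):
        pred, state = model.step(prev, state)
        loss = loss + criterion(pred, seq[:, t:t + 1])
        # scheduled sampling: sometimes condition on our own output
        prev = pred.detach() if random.random() < p else seq[:, t:t + 1]
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
</code></pre></li>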
</ul> | 2017-10-30 09:24:15.610000+00:00 | 2017-10-30 09:24:15.610000+00:00 | null | null | 43,459,013 | <p>For several days now, I am trying to build a simple sine-wave sequence generation using LSTM, without any glimpse of success so far.</p>
<p>I started from the <a href="https://github.com/pytorch/examples/tree/master/time_sequence_prediction" rel="noreferrer">time sequence prediction example</a></p>
<p>All what I wanted to do differently is:</p>
<ul>
<li>Use different optimizers (e.g RMSprob) than LBFGS</li>
<li>Try different signals (more sine-wave components)</li>
</ul>
<p>This is the link to <a href="https://github.com/osm3000/sequence_generation_pytorch.git" rel="noreferrer">my code</a>. "experiment.py" is the main file</p>
<p>What I do is:</p>
<ul>
<li>I generate artificial time-series data (sine waves)</li>
<li>I cut those time-series data into small sequences</li>
<li>The input to my model is a sequence of time 0...T, and the output is a sequence of time 1...T+1</li>
</ul>
<p>What happens is:</p>
<ul>
<li>The training and the validation losses goes down smoothly</li>
<li>The test loss is very low</li>
<li>However, when I try to generate arbitrary-length sequences, starting from a seed (a random sequence from the test data), everything goes wrong. The output always flats out</li>
</ul>
<p><a href="https://i.stack.imgur.com/crO8z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/crO8z.png" alt="Shape of the generated signal"></a></p>
<p>I simply don't see what the problem is. I have been playing with this for a week now, with no progress in sight.
I would be very grateful for any help.</p>
<p>Thank you</p> | 2017-04-17 20:23:11.273000+00:00 | 2018-03-07 08:36:10.770000+00:00 | 2018-03-07 08:36:10.770000+00:00 | python|machine-learning|deep-learning|lstm|pytorch | ['https://arxiv.org/abs/1506.03099'] | 1 |
55,873,035 | <h2>Update</h2>
<p><a href="https://github.com/apple/swift/pull/25286" rel="nofollow noreferrer">This</a> implementation of a random number generator in an interval has been merged into the standard library and should perform better than before:</p>
<pre><code>// s = upperBound; r1, r2 = random numbers from generator
func bounded(s: UInt64, r1:UInt64, r2: UInt64) -> UInt64 {
// r1 would come from invoking generator's next()
var m = r1.multipliedFullWidth(by: s)
if m.low < s {
// let t = (0 &- s) % s // Lemire's original form
var t = 0 &- s // O'Neill's modulo optimization
if t >= s {
t &-= s
if t >= s {
t %= s
}
}
while m.low < t {
// r2 would come from invoking generator's next()
m = r2.multipliedFullWidth(by: s)
}
}
return m.high
}
</code></pre>
<p>See the answer below for more details.</p>
<h2>Answer</h2>
<p>An answer to your second question:</p>
<blockquote>
<p>"Are there any faster methods for random number generation in swift?"</p>
</blockquote>
<p>I've <a href="https://stackoverflow.com/questions/55548872/shuffle-struct-by-int/55549494#55549494">previously used</a> the <a href="http://xoshiro.di.unimi.it/" rel="nofollow noreferrer"><strong>Xoshiro</strong></a> Pseudo-Random Number Generator which is pretty fast.</p>
<p>Here is the code used for benchmarking:</p>
<ul>
<li><strong>randomGen1</strong></li>
</ul>
<pre><code>import Foundation
public func randomGen1() {
let n = 1_000_000
var sum: UInt32 = 0
let startTime = CFAbsoluteTimeGetCurrent()
for _ in 0..<n {
sum = sum &+ arc4random_uniform(10)
}
let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime
print(sum, timeElapsed)
}
do {
randomGen1()
}
</code></pre>
<ul>
<li><strong>randomGen2</strong></li>
</ul>
<pre><code>public func randomGen2() {
let n = 1_000_000
var sum: UInt32 = 0
let startTime = CFAbsoluteTimeGetCurrent()
for _ in 0..<n {
sum = sum &+ UInt32.random(in: 0..<10)
}
let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime
print(sum, timeElapsed)
}
do {
randomGen2()
}
</code></pre>
<ul>
<li>Xoshiro random number generator from <a href="https://github.com/mattgallagher/CwlUtils/blob/e4186dae4ba55ffa478264c8477d01a48fd2b459/Sources/CwlUtils/CwlRandom.swift#L80" rel="nofollow noreferrer">this library</a>:</li>
</ul>
<pre><code>struct Xoshiro: RandomNumberGenerator {
public typealias StateType = (UInt32, UInt32, UInt32, UInt32)
private var state: StateType
public init(seed: StateType) {
self.state = seed
}
public mutating func next() -> Int {
let x = state.1 &* 5
let result = ((x &<< 7) | (x &>> 25)) &* 9
let t = state.1 &<< 9
state.2 ^= state.0
state.3 ^= state.1
state.1 ^= state.2
state.0 ^= state.3
state.2 ^= t
state.3 = (state.3 &<< 21) | (state.3 &>> 11)
return Int(result)
}
}
var x = Xoshiro(seed: (UInt32.random(in: 0..<10), //Other upper limits could be used to increase randomness
UInt32.random(in: 0..<10),
UInt32.random(in: 0..<10),
UInt32.random(in: 0..<10)))
public func randomGen3() {
let n = 1_000_000
var sum: UInt32 = 0
let startTime = CFAbsoluteTimeGetCurrent()
for _ in 0..<n {
sum = sum &+ UInt32(abs(x.next()) % 10)
}
let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime
print(sum, timeElapsed)
}
do {
randomGen3()
}
</code></pre>
<p>Xoshiro is fast but does not pass all randomness tests. If security is a concern, you could use <a href="https://lemire.me/blog/2019/03/19/the-fastest-conventional-random-number-generator-that-can-pass-big-crush/" rel="nofollow noreferrer">Wyhash</a>.</p>
<p><a href="https://lemire.me/en/" rel="nofollow noreferrer"><strong>Daniel Lemire</strong></a> (the author of <a href="https://arxiv.org/pdf/1805.10941.pdf" rel="nofollow noreferrer">this</a> paper) has kindly just sent me a <a href="https://github.com/lemire/SwiftWyhash" rel="nofollow noreferrer">Swift implementation</a> of Wyhash:</p>
<pre><code>class WyhashGenerator {
var seed : UInt64
let multiplier1 : UInt64 = 0xa3b195354a39b70d
let multiplier2 : UInt64 = 0x1b03738712fad5c9
let increment : UInt64 = 0x60bee2bee120fc15
init(userSeed : UInt64) {
seed = userSeed;
}
func random() -> UInt64 {
seed &+= increment
let fullmult1 = seed.multipliedFullWidth(by: multiplier1)
let m1 = fullmult1.high ^ fullmult1.low;
let fullmult2 = m1.multipliedFullWidth(by: multiplier2)
let m2 = fullmult2.high ^ fullmult2.low;
return m2
}
}
</code></pre>
<p>It can be used like so:</p>
<pre><code>public func randomGen4() {
let n = 1_000_000
var sum: UInt64 = 0
let startTime = CFAbsoluteTimeGetCurrent()
let gen = WyhashGenerator(userSeed: 0)
for _ in 0..<n {
sum = sum &+ gen.random() % 10
}
let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime
print(sum, timeElapsed)
}
do {
randomGen4()
}
</code></pre>
<hr>
<p>And here are the benchmark results, with the code compiled in the terminal with optimizations (<code>-O</code>):</p>
<pre><code>arc4random_uniform() : 0.034s
UInt32.random(in:) : 0.243s
WyHash64 : 0.002s
Xoshiro : 0.001s
</code></pre>
<hr>
<p><sup>You can find more random number generators <a href="https://github.com/nvzqz/RandomKit" rel="nofollow noreferrer">here</a>.</sup></p> | 2019-04-26 18:17:09.863000+00:00 | 2019-07-23 21:01:31.747000+00:00 | 2019-07-23 21:01:31.747000+00:00 | null | 55,872,415 | <p>I have used Int.random() method and arc4random_uniform() for number generation speed tests.<br>
Both tests were run in the macOS console with the build configuration set to release.
Below is the code I used for testing. </p>
<pre><code>public func randomGen1() {
let n = 1_000_000
let startTime = CFAbsoluteTimeGetCurrent()
for i in 0..<n {
_ = arc4random_uniform(10)
}
let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime
print(timeElapsed)
}
public func randomGen2() {
let n = 1_000_000
let startTime = CFAbsoluteTimeGetCurrent()
for i in 0..<n {
_ = Int.random(in: 0..<10)
}
let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime
print(timeElapsed)
}
</code></pre>
<p>The times I got are <br>
0.029475092887878418 (for arc4random_uniform(10))<br>
0.20298802852630615 (for Int.random(in: 0..<10))</p>
<p>Why is Int.random() so much slower?<br>
Is there a way to optimise it?<br>
Are there any faster methods for random number generation in swift?</p> | 2019-04-26 17:26:41.613000+00:00 | 2019-07-23 21:01:31.747000+00:00 | 2019-04-26 17:33:32.537000+00:00 | swift|random | ['https://github.com/apple/swift/pull/25286', 'https://stackoverflow.com/questions/55548872/shuffle-struct-by-int/55549494#55549494', 'http://xoshiro.di.unimi.it/', 'https://github.com/mattgallagher/CwlUtils/blob/e4186dae4ba55ffa478264c8477d01a48fd2b459/Sources/CwlUtils/CwlRandom.swift#L80', 'https://lemire.me/blog/2019/03/19/the-fastest-conventional-random-number-generator-that-can-pass-big-crush/', 'https://lemire.me/en/', 'https://arxiv.org/pdf/1805.10941.pdf', 'https://github.com/lemire/SwiftWyhash', 'https://github.com/nvzqz/RandomKit'] | 9 |
59,938,045 | <p>40% accuracy is not good. The model needs to train more. You should rescale the images to <code>128 or 256</code> to save time. Also try increasing the epoch count to something like 100, or keep training until the loss drops to at least around 1 before testing. Another thing to check is class imbalance. </p>
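<p>For instance, the 1024x1024 inputs can be downscaled before training; a rough sketch of mine (it assumes the <code>train_images</code> / <code>test_images</code> arrays from the question, already scaled to [0, 1]):</p>
<pre><code>import numpy as np
import tensorflow as tf

def downscale(images, size=128):
    # images: (N, 1024, 1024) grayscale array
    x = images[..., np.newaxis]                   # add a channel axis
    x = tf.image.resize(x, (size, size)).numpy()  # (N, size, size, 1)
    return x.squeeze(-1)

train_small = downscale(train_images)
test_small = downscale(test_images)
# then train the same model with input_shape=(128, 128) and epochs=100
</code></pre>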
<p>According to <a href="https://arxiv.org/abs/1708.07747" rel="nofollow noreferrer">https://arxiv.org/abs/1708.07747</a>, <code>Fashion MNIST</code> contains <code>7000</code> images per class, <code>70000</code> images in total. If your dataset has class imbalance, which seems likely, then you should look into other metrics and methods.</p> | 2020-01-27 19:57:38.153000+00:00 | 2020-01-27 19:57:38.153000+00:00 | null | null | 59,937,540 | <p>I am following this guide to learn image classification with neural networks:</p>
<p><a href="https://www.tensorflow.org/tutorials/keras/classification" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/classification</a></p>
<p>And I implemented this code for my custom dataset. I have 2300 grayscale 1024x1024 pictures to train the model. I hold all my images in 3D numpy arrays named train_images and test_images. I have 4 classes, which are 0, 1, 2, 3, and I hold those in a list named "labels".</p>
<pre><code>train_images.shape # returns (2300,1024,1024)
test_images.shape # returns (384,1024,1024)
# normalize values
train_images = train_images / 255.0
test_images = test_images / 255.0
model = keras.Sequential([
keras.layers.Flatten(input_shape=(1024, 1024)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(4, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_images, labels, epochs=10)
</code></pre>
<p>Everything is almost the same as in the guide, but my epoch accuracy is around 0.4.</p>
<pre><code>Epoch 10/10
...
2176/2300 [===========================>..] - ETA: 0s - loss: 9.5701 - acc: 0.4062
2208/2300 [===========================>..] - ETA: 0s - loss: 9.5628 - acc: 0.4067
2240/2300 [============================>.] - ETA: 0s - loss: 9.5485 - acc: 0.4076
2272/2300 [============================>.] - ETA: 0s - loss: 9.5417 - acc: 0.4080
2300/2300 [==============================] - 12s 5ms/step - loss: 9.5307 - acc: 0.4087
</code></pre>
<p>Also, in the guide some predictions are fractional, but when I try to make predictions, my model's predictions are only 0 or 1. It says the input is 100% class (x), but it's wrong.</p>
<pre><code>predictions = model.predict(test_images)
print(predictions)
# 0 | 0 | 1 | 0
# 0 | 0 | 1 | 0
# 1 | 0 | 0 | 0
</code></pre>
<p><strong>UPDATED</strong></p>
<p>Here are the epoch results for 256x256 images with 2 classes and 100 images per class:</p>
<pre><code>32/200 [===>..........................] - ETA: 0s - loss: 8.5627 - acc: 0.4688
200/200 [==============================] - 0s 317us/step - loss: 8.0590 - acc: 0.5000
Epoch 10/10
</code></pre>
<p>Also, I lowered the number of classes to 2, but my predictions still return 100% for the wrong class.</p>
<hr>
<p>I don't know what I am doing wrong. If you have any advice or ideas I would be grateful. Thank you in advance.</p>
62,836,902 | <p>This question has been answered by <a href="https://arxiv.org/pdf/1605.05274.pdf" rel="nofollow noreferrer">R. Grigore (2016)</a> in his paper <em>Java Generics are Turing Complete</em>.</p>
<p>Take the following Java code as an example of his construction:</p>
<pre><code>//an empty interface
interface Z {}
//4 generic interfaces, one argument each
interface N<X> {}
interface L<X> {}
interface Qlr<X> {}
interface Qrl<X> {}
//one complex generic, inheriting from instantiations of two other
interface E<X> extends
Qlr<N<? super Qr<? super E<? super E<? super X>>>>>,
Qrl<N<?super Ql<? super E<? super E<? super X>>>>>
{}
//main class with a single function
class Main{
//heavily nested return type
L<? super N<? super L<? super N<? super L<? super N<? super E<? super E<? super Z>>>>>>>>
f(Qr<? super E<? super E<? super Z>>> v) {return v;}
}
</code></pre> | 2020-07-10 14:53:30.250000+00:00 | 2020-07-10 14:53:30.250000+00:00 | null | null | 3,451,519 | <p>The JLS mentions in the type inference algorithm (§15.12.2):</p>
<blockquote>
<p>It is possible that the process above yields an infinite type. This is permissible,
and Java compilers must recognize such situations and represent them appropriately using cyclic data structures.</p>
</blockquote>
<p>However, I'm unable to find an actual example where javac produces an infinite type.
I think it ought to produce one in the following case:</p>
<pre><code><T> T pick(T a, T b) { ... }
pick("string", 3);
</code></pre>
<p>Both String and Integer are Comparable<themselves>, so their common supertype should be <code>Comparable<? extends Comparable<? extends Comparable<? ...>>></code> (infinite).</p>
<p>I can do:</p>
<pre><code>Comparable<? extends Comparable<?>> x = pick("string", 3);
</code></pre>
<p>but then I tried:</p>
<pre><code>Comparable<? extends Comparable<? extends Comparable<?>>> x = pick("string", 3);
</code></pre>
<p>and this doesn't compile.
It seems that the recursion is aborted after 2 steps.</p>
<p>Do you know of any case to make Java actually produce an infinite type?</p>
<p>--</p>
<p>Edit: it seems that the above is a compiler bug. Reading the specification, let's see how the calculation of <code>lub(String, Integer)</code> works out:</p>
<pre><code>ST(String) = { String, Comparable<String>, Serializable, CharSequence, Object }
ST(Integer) = { Integer, Comparable<Integer>, Serializable, Number, Object }
EC = { Comparable, Serializable, Object }
MEC = { Comparable, Serializable }
Inv(Comparable) = { Comparable<String>, Comparable<Integer> }
lcta(String, Integer) = ? extends lub(String, Integer)
lci(Inv(Comparable)) = Comparable<? extends lub(String, Integer)>
lub(String, Integer) = Serializable & Comparable<? extends lub(String, Integer)>
</code></pre>
<p>So <code>lub(String, Integer)</code> should be an infinite type. Javac seems to be wrong here. Maybe it doesn't implement infinite types after all?</p> | 2010-08-10 17:08:20.650000+00:00 | 2020-07-10 14:53:30.250000+00:00 | 2010-08-11 13:28:39.157000+00:00 | java|programming-languages|wildcard|type-inference | ['https://arxiv.org/pdf/1605.05274.pdf'] | 1 |
60,628,756 | <p>I like your approach! When you mention your optimization, I think a good way to go about it is to rotate the hexagonal grid and translate it until you find the smallest number of circles that cover the region. You don't need to rotate through a full 360 degrees, since the pattern is symmetric; 360/6 is enough.</p>
<p>I've been working on this problem for a while and have just published a paper that contains code to solve this problem! It uses genetic algorithms and BFGS optimization. You can find a link to the paper here: <a href="https://arxiv.org/abs/2003.04839" rel="nofollow noreferrer">https://arxiv.org/abs/2003.04839</a></p> | 2020-03-11 03:15:12.600000+00:00 | 2020-03-13 00:59:05.403000+00:00 | 2020-03-13 00:59:05.403000+00:00 | null | 10,648,621 | <p><strong>The following problem:</strong>
Given is an arbitrary polygon. It shall be covered 100% with the minimum number of circles of a given radius.</p>
<p><strong>Note:</strong>
1) Naturally the circles have to overlap.
2) I try to solve the problem for ARBITRARY polygons. But also solutions for CONVEX polygons are appreciated.
3) As far as I'm informed, this problem is NP-hard ( <a href="https://stackoverflow.com/questions/4282003/an-algorithm-to-find-the-minimum-size-set-cover-for-the-set-cover-problem">an algorithm to find the minimum size set cover for the Set-cover problem</a> )
Choose: U = polygon and S1...Sk = circles with arbitrary centers.</p>
<p><strong>My solution:</strong>
I've already read some papers and tried a few things on my own. The most promising idea that I came up with was in fact one already indicated in <a href="https://stackoverflow.com/questions/1404944/covering-an-arbitrary-area-with-circles-of-equal-radius">Covering an arbitrary area with circles of equal radius</a>.</p>
<p>So I guess it’s best I quickly try to describe my own idea and then refine my questions.</p>
<p>The picture gives you already a pretty good idea of what I do</p>
<p><img src="https://i.stack.imgur.com/D58Yc.jpg" alt="enter image description here"></p>
<p><strong>IDEA and Problem Formulation</strong>
1. I approximate the circles with their corresponding hexagons and tessellate the whole R2, i.e. a sufficiently large area; keyword: hexagonal closest packing. (cyan: tessellation; red dotted: centers of the cyan hexagons)
2. I put the polygon somewhere in the middle of this tessellated area and compute the number of hexagons that are needed to cover the polygon.</p>
<p>In the following I'm trying to minimize N, which is the number of hexagons needed to cover the polygon, by moving the polygon around step by step and "counting" N after each step.</p>
<p><strong>Solving the problem:</strong>
So that’s when it gets difficult (for me). I don’t know any optimizers that solve this problem properly, since they all terminate after moving the polygon around a bit and not observing any change.</p>
<p><strong>My solution is the following:</strong>
First note that this is a periodic problem:
1. The polygon can be moved in horizontal direction x with a period of 3*r (side length = radius r) of the hexagon.
2. The polygon can be moved in vertical direction y with a period of sqrt(r^2 + r^2 - 2*r*r*cos(2/3*pi)) = sqrt(3)*r of the hexagon.
3. The polygon can be rotated phi with a period of 2/3*pi.</p>
<p>That means, one has to search a finite area of possible solutions to find the optimal solution.
So what I do is, I choose a stepsize for (x,y,phi) and simply brute force compute all possible solutions, picking out the optimum.</p>
<p><strong>Refining my questions</strong>
1) Is the problem formulated intelligently? Right now I'm working on an algorithm that only tessellates a very small area, so that as few hexagons as possible have to be computed.
2) Is there a more intelligent optimizer to solve the problem?
3) FINALLY: I also have difficulties finding appropriate literature, since I guess I don't know the right keywords to look for. So if anybody can provide me with literature, it would also be appreciated a lot.</p>
<p>Actually, I could go on about other things I've tried, but I don't think anyone wants to spend the whole afternoon just reading my question.</p>
<p>Thx in advance to everybody who takes the time to think about it.</p>
<p>mat</p>
<p>PS: I implement my algorithms in Matlab.</p>
46,173,528 | <p>You can use the idea of face-embeddings, which for example is proposed in the highly-cited paper <a href="https://arxiv.org/abs/1503.03832" rel="noreferrer">FaceNet</a> and implemented in <a href="https://cmusatyalab.github.io/openface/" rel="noreferrer">OpenFace</a> (which also comes pre-trained).</p>
<p>The general idea: take some preprocessed face (frontal, cropped, ...) and embed it into some lower-dimensional space with the property that similar input faces have a low Euclidean distance in the output.</p>
<p>So in your case: use the embedding-CNN to map your faces to the reduced space (usually a vector of size 128) and calculate the distance there in Euclidean space. Of course you could also cluster faces then, but that's not your task.</p>
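<p>Concretely, once you have the two embeddings, the comparison is just a distance and a threshold (a minimal sketch of mine; the threshold value has to be tuned for your model and data):</p>
<pre><code>import numpy as np

def same_person(emb_a, emb_b, threshold=1.0):
    # emb_a, emb_b: 128-dimensional embeddings produced by the network
    distance = np.linalg.norm(emb_a - emb_b)   # euclidean distance
    return distance < threshold                # threshold must be tuned
</code></pre>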
<p>The good thing here, besides the general idea: openface is a nice implementation ready to use, and its homepage also explains the idea:</p>
<blockquote>
<p>Use a deep neural network to represent (or embed) the face on a 128-dimensional unit hypersphere.</p>
<p>The embedding is a generic representation for anybody's face. Unlike other face representations, this embedding has the nice property that a larger distance between two face embeddings means that the faces are likely not of the same person.</p>
<p>This property makes clustering, similarity detection, and classification tasks easier than other face recognition techniques where the Euclidean distance between features is not meaningful.</p>
</blockquote>
<p>They even have a comparison-demo <a href="https://cmusatyalab.github.io/openface/demo-2-comparison/" rel="noreferrer">here</a>.</p> | 2017-09-12 10:03:32.393000+00:00 | 2017-09-12 10:03:32.393000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 46,168,182 | <p>First of all here is my <a href="https://github.com/alucard001/OpenCV-Face-Recognition-and-Comparison/blob/master/Open%20CV.ipynb" rel="noreferrer">github link for the question</a>.</p>
<p>And here is my question:</p>
<p>I would like to do a face comparison function using Python. And I can successfully(?) recognize faces using OpenCV. Now, <strong>how do I do the comparison thing</strong>?</p>
<p>What I understand is this:</p>
<p>In the general machine learning approach, I would need to gather lots of data about that particular person and train a CNN on it. </p>
<p>However, I only have 2 images, so how do I do the comparison? Should I think of it in terms of classification or clustering (using KNN)?</p>
<p>Thank you very much in advance for all your help.</p> | 2017-09-12 04:55:55.683000+00:00 | 2021-10-19 14:41:04.830000+00:00 | 2018-09-27 10:02:08.430000+00:00 | python|opencv|neural-network|convolution|face-recognition | ['https://arxiv.org/abs/1503.03832', 'https://cmusatyalab.github.io/openface/', 'https://cmusatyalab.github.io/openface/demo-2-comparison/'] | 3 |
58,753,908 | <p>In the original paper, the author pushes one sample to the experience replay buffer at each step and randomly samples 32 transitions to train the model in minibatch fashion. The samples taken from interacting with the environment are not fed directly to the model. To increase the speed of training, the author stores samples every step but updates the model only every four steps. </p>
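<p>That schedule looks roughly like this (a sketch of mine; <code>env</code>, <code>select_action</code> and <code>train_on_batch</code> are placeholders, not a real API):</p>
<pre><code>import random
from collections import deque

replay = deque(maxlen=100_000)        # experience replay buffer
batch_size, update_every = 32, 4

state = env.reset()
for step in range(1_000_000):
    action = select_action(state)                 # e.g. epsilon-greedy on the online net
    next_state, reward, done, info = env.step(action)
    replay.append((state, action, reward, next_state, done))   # store every transition
    if step % update_every == 0 and len(replay) >= batch_size:
        batch = random.sample(replay, batch_size) # uniform random minibatch of 32
        train_on_batch(batch)                     # one gradient step on the online network
    state = env.reset() if done else next_state
</code></pre>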
<p>Use OpenAI's <a href="https://github.com/openai/baselines" rel="nofollow noreferrer">Baseline project</a>; this single-process method can master easy games like Atari Pong (Pong-v4) in about 2.5 hours using a single GPU. Of course, training in this kind of single-process way leaves the resources of a multi-core, multi-GPU (or single-GPU) system underutilised. So newer publications have decoupled action-selection and model optimisation. They use multiple "Actors" to interact with environments simultaneously and a single GPU "Learner" to optimise the model, or multiple Learners with multiple models on various GPUs. The multi-actor-single-learner setup is described in Deepmind's Apex-DQN (<a href="https://arxiv.org/abs/1803.00933" rel="nofollow noreferrer">Distributed Prioritized Experience Replay, D. Horgan et al., 2018</a>) method and the multi-actor-multi-learner setup in (<a href="https://arxiv.org/abs/1803.00933" rel="nofollow noreferrer">Accelerated Methods for Deep Reinforcement Learning, Stooke and Abbeel, 2018</a>). When using multiple learners, parameter sharing across processes becomes essential. An older trail is described in Deepmind's PDQN (<a href="https://arxiv.org/abs/1507.04296" rel="nofollow noreferrer">Massively Parallel Methods for Deep Reinforcement Learning, Nair et al., 2015</a>), which was proposed in the period between DQN and A3C. However, that work was performed entirely on CPUs, so it uses massive resources, and its results can easily be outperformed by PPAC's batched action-selection-on-GPU method. </p>
<p>You can't optimise only at the end of each episode, because the episode length isn't fixed; a better model usually produces longer episodes. The amount of learning per step would then shrink just as the model starts to perform a little better, and the learning progress would be unstable. </p>
<p>We also don't train the model only at the moment the target model is cloned, because the point of the target is to stabilise the training process by keeping an older set of parameters. If you update only when the parameters are cloned, the target model's parameters will be the same as the model's, and this causes instability: if we use the same parameters, one model update will cause the next state's value to rise as well. </p>
<p>In Deepmind's 2015 Nature paper, it states that:</p>
<blockquote>
<p>The second modification to online Q-learning aimed at further improving the stability of our method with neural networks is to use a separate network for generating the target yj in the Q-learning update. More precisely, every C updates we clone the network Q to obtain a target network Q' and use Q' for generating the Q-learning targets y<sub>j</sub> for the following C updates to Q.
This modification makes the algorithm more stable compared to standard online Q-learning, where an update that increases Q(s<sub>t</sub>,a<sub>t</sub>) often also increases Q(s<sub>t+1</sub>, a) for all a and hence also increases the target y<sub>j</sub>, possibly leading to oscillations or divergence of the policy. Generating the targets using the older set of parameters adds a delay between the time an update to Q is made and the time the update affects the targets y<sub>j</sub>, making divergence or oscillations much more unlikely. </p>
</blockquote>
<p><a href="https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf" rel="nofollow noreferrer">Human-level control through deep reinforcement
learning, Mnih et al., 2015</a> </p> | 2019-11-07 17:13:49.143000+00:00 | 2019-11-09 14:55:36.403000+00:00 | 2019-11-09 14:55:36.403000+00:00 | null | 58,600,089 | <p>I am confused about why the DQN with experience replay algorithm performs a gradient descent step for every step in a given episode. This will fit only one step, right? This would make it extremely slow. Why not after each episode ends, or every time the model is cloned?</p>
36,406,057 | <p>In my experience, NaNs when training a network usually happen because of two problems:</p>
<ul>
<li>First, a mathematical error, e.g. the log of a negative value. This can happen when you are using log() in your loss function.</li>
<li>Second, a value becomes so big that Python can't handle it (overflow).</li>
</ul>
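<p>For reference, the scaled initialisation suggested in the next paragraph can be sketched in a few lines of numpy (the Glorot-uniform limit is sqrt(6 / (fan_in + fan_out))):</p>
<pre><code>import numpy as np

def glorot_uniform(n_in, n_out, rng=np.random):
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out)).astype('float32')
</code></pre>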
<p>In your case, from your good observation, I think it's the second case. Your loss value may have become too big for Python to handle. Try to initialize smaller weights when you expand your network, or just use a different approach to initialize the weights, like the ones explained by <a href="http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf" rel="nofollow">Glorot (2010)</a> or <a href="http://arxiv.org/abs/1502.01852" rel="nofollow">He (2015)</a>. Hope it helps.</p> | 2016-04-04 14:54:07.443000+00:00 | 2016-04-06 22:30:51.983000+00:00 | 2016-04-06 22:30:51.983000+00:00 | null | 36,381,488 | <p>I am training a simple feed-forward model with 3 or 4 hidden layers and dropouts between each (hidden layer + non linearity) combination.
Sometimes after a few epochs (about 10-11) the model starts outputting Infs and NaNs as the error of the NLL and the accuracy falls to 0.0%. This problem does not happen when I do not use dropouts. Is this a known issue with dropouts in Theano? The way I implement dropouts is:</p>
<pre><code>def drop(self, input):
mask = self.theano_rng.binomial(n=1, p=self.p, size=input.shape, dtype=theano.config.floatX)
return input * mask
</code></pre>
<p>where input is the feature-vector on which we want to apply dropouts.
I have also observed that the occurrence of NaNs happens earlier if the dropout probability (self.p) is higher. p = 0.5 would cause NaNs to occur around epoch 1 or 2, but p = 0.7 would cause NaNs to occur around epoch 10 or 11.
Also the occurrence of NaNs happens only when hidden layer sizes are large. For example (800,700,700) gives NaNs whereas (500,500,500) does not.</p> | 2016-04-03 04:03:04.203000+00:00 | 2016-04-06 22:30:51.983000+00:00 | null | python|theano|deep-learning | ['http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf', 'http://arxiv.org/abs/1502.01852'] | 2 |
46,021,189 | <blockquote>
<p>use a kd-tree</p>
</blockquote>
<p>Unfortunately, in high dimensions this data structure suffers severely from the <a href="https://en.wikipedia.org/wiki/Curse_of_dimensionality" rel="nofollow noreferrer">curse of dimensionality</a>, which causes its search time to be comparable to the brute force search.</p>
<blockquote>
<p>reduce the number of dimensions</p>
</blockquote>
<p><a href="https://en.wikipedia.org/wiki/Dimensionality_reduction" rel="nofollow noreferrer">Dimensionality reduction</a> is a good approach, which offers a fair trade-off between accuracy and speed. You lose some information when you reduce your dimensions, but gain some speed.</p>
<p>By accuracy I mean finding the exact Nearest Neighbor (NN).</p>
<p>Principal Component Analysis(<a href="https://en.wikipedia.org/wiki/Dimensionality_reduction#Principal_component_analysis_.28PCA.29" rel="nofollow noreferrer">PCA</a>) is a good idea when you want to reduce the dimensional space your data live on.</p>
<blockquote>
<p>Is there some clever algorithm or data structure to solve this exactly in reasonable time?</p>
</blockquote>
<p>Approximate nearest neighbor search (<a href="https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximate_nearest_neighbor" rel="nofollow noreferrer">ANNS</a>) is an approach where you are satisfied with finding a point that might not be the exact Nearest Neighbor, but rather a good approximation of it (for example, the 4th NN to your query, while you are looking for the 1st NN).</p>
<p>That approach costs you accuracy, but increases performance significantly. Moreover, the probability of finding a good NN (close enough to the query) is relatively high.</p>
<p>You could read more about ANNS in the introduction of our kd-GeRaF <a href="https://arxiv.org/pdf/1603.09596.pdf" rel="nofollow noreferrer">paper</a>.</p>
<p>A good idea is to combine ANNS with dimensionality reduction.</p>
<p>Locality Sensitive Hashing (<a href="https://en.wikipedia.org/wiki/Locality-sensitive_hashing" rel="nofollow noreferrer">LSH</a>) is a modern approach to solve the Nearest Neighbor problem in high dimensions. The key idea is that points that lie close to each other are hashed to the same bucket. So when a query arrives, it will be hashed to a bucket, and that bucket (and usually its neighboring ones) contains good NN candidates.</p>
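<p>For intuition, here is a minimal sketch of the sign-random-projection flavour of LSH (this variant targets cosine/angular similarity; the number of hyperplanes and all other values are purely illustrative):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(16000, 75))            # the dataset
q = rng.normal(size=75)                     # the query point

planes = rng.normal(size=(16, 75))          # 16 random hyperplanes -> 16-bit hash

def lsh_hash(v):
    return tuple((planes @ v > 0).astype(int))   # sign of each projection

buckets = {}
for i, x in enumerate(X):
    buckets.setdefault(lsh_hash(x), []).append(i)

# only points in the query's bucket are candidate neighbours
candidates = buckets.get(lsh_hash(q), range(len(X)))
nearest = min(candidates, key=lambda i: np.linalg.norm(X[i] - q))
</code></pre>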
<p><a href="https://github.com/FALCONN-LIB/FALCONN" rel="nofollow noreferrer">FALCONN</a> is a good C++ implementation, which focuses in cosine similarity. Another good implementation is our <a href="https://github.com/gsamaras/Dolphinn" rel="nofollow noreferrer">DOLPHINN</a>, which is a more general library.</p> | 2017-09-03 07:25:31.820000+00:00 | 2017-09-03 07:25:31.820000+00:00 | null | null | 3,962,775 | <p>So I have about 16,000 75-dimensional data points, and for each point I want to find its k nearest neighbours (using euclidean distance, currently k=2 if this makes it easiser)</p>
<p>My first thought was to use a kd-tree for this, but as it turns out they become rather inefficient as the number of dimensions grows. In my sample implementation, it's only slightly faster than exhaustive search.</p>
<p>My next idea would be using PCA (Principal Component Analysis) to reduce the number of dimensions, but I was wondering: Is there some clever algorithm or data structure to solve this exactly in reasonable time?</p> | 2010-10-18 19:46:36.487000+00:00 | 2017-09-03 07:26:10.790000+00:00 | 2017-09-03 07:26:10.790000+00:00 | algorithm|data-structures|computational-geometry|nearest-neighbor|dimensionality-reduction | ['https://en.wikipedia.org/wiki/Curse_of_dimensionality', 'https://en.wikipedia.org/wiki/Dimensionality_reduction', 'https://en.wikipedia.org/wiki/Dimensionality_reduction#Principal_component_analysis_.28PCA.29', 'https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximate_nearest_neighbor', 'https://arxiv.org/pdf/1603.09596.pdf', 'https://en.wikipedia.org/wiki/Locality-sensitive_hashing', 'https://github.com/FALCONN-LIB/FALCONN', 'https://github.com/gsamaras/Dolphinn'] | 8 |
61,577,834 | <p>Document-based databases have a big advantage over relational databases: they do not require defining a schema upfront before you can enter any data.</p>
<p>Also, you should use a document database if your data is not relational and cannot be stored in a table but rather is a set of images, or for example newspaper articles.</p>
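<p>For example, a minimal sketch with a document store (MongoDB via <code>pymongo</code> here purely for illustration; CouchDB stores JSON documents analogously):</p>
<pre><code>from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumes a local instance
articles = client.newsdb.articles                   # database/collection names are made up

# no schema declared upfront: documents in the same collection can differ
articles.insert_one({"title": "Flood in town X", "body": "...", "tags": ["weather"]})
articles.insert_one({"title": "Match report", "score": {"home": 2, "away": 1}})
</code></pre>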
<p>Another advantage is the ease of using document-based databases in web development.
For more in-depth NoSQL database models comparison check this source: <a href="https://arxiv.org/ftp/arxiv/papers/1509/1509.08035.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/arxiv/papers/1509/1509.08035.pdf</a></p> | 2020-05-03 16:22:55.093000+00:00 | 2022-03-11 08:28:16.757000+00:00 | 2022-03-11 08:28:16.757000+00:00 | null | 441,441 | <p>Why should I use document based database like CouchDB instead of using relational database.
Are there any typical kinds of applications or domains where the document based database is more suitable than the relational database?</p> | 2009-01-14 00:21:48.840000+00:00 | 2022-04-07 21:05:03.183000+00:00 | null | database|couchdb|relational|non-relational-database | ['https://arxiv.org/ftp/arxiv/papers/1509/1509.08035.pdf'] | 1 |
71,943,162 | <p>There is this interesting paper <a href="https://arxiv.org/pdf/1802.06222.pdf" rel="nofollow noreferrer">Efficient GAN-based anomaly detection</a>.<br />
To evaluate the anomaly detection, they use the following experimental setting:</p>
<blockquote>
<p>MNIST: We generated 10 different datasets from MNIST by successively
making each digit class an anomaly and treating the remaining 9 digits
as normal examples. The training set consists of 80% of the normal data
and the test set consists of the remaining 20% of normal data and all
of the anomalous data. All models were trained only with normal data and
tested with both normal and anomalous data.</p>
</blockquote> | 2022-04-20 16:28:32.400000+00:00 | 2022-04-20 16:28:32.400000+00:00 | null | null | 71,942,290 | <p>There's something about GAN's training that i don't understand. I am making a GAN for Anomaly Detection. To start I followed this guide <a href="https://www.tensorflow.org/tutorials/generative/dcgan" rel="nofollow noreferrer">here</a> to create a DCGAN (and understand how it works) and then move into the Anomaly Detection stuff.</p>
<p>I understand how the two training phases work for GANs, and after nearly 2000 epochs the generator generates some good fake images. The problem is that the discriminator is not good at detecting anomalies: if I feed it a real image, it produces a value between 0.5 and 1, no matter whether the image has an anomaly or not.</p>
<p>So basically, the discriminator is good at distinguishing real images from fake images, but not good at discriminating real images with anomalies.</p>
<p>I tried to train the model some more but the results don't change (if anything, they seem worse than before!). The two losses keep varying around 0 and 1; for example, the model now has:</p>
<pre><code>gen_loss: 0.97844017, disc_loss: 0.9973822
</code></pre>
<p>What should I do to improve my network and perform anomaly detection? Does it need to be trained even more to get a better discriminator, or should I add something else to perform anomaly detection?</p>
<p>Thanks in advance; I'm definitely doing something wrong. If needed I can post some code and more information about my net.</p>
<p>P.S. My notebook is very similar to the one i linked before, the only difference is that i tried to feed test images to the discriminator after the training.</p> | 2022-04-20 15:24:05.590000+00:00 | 2022-04-20 16:28:32.400000+00:00 | null | python|tensorflow|deep-learning|generative-adversarial-network|anomaly-detection | ['https://arxiv.org/pdf/1802.06222.pdf'] | 1 |
57,136,880 | <p>L2 utilization and hit rate are orthogonal concepts.</p>
<p>L2 utilization % measures how many operations (reads/writes/atomics) the L2 cache performed, compared to its peak performance. You can alternatively think of this as a proxy for "how much L2 bandwidth did I use" given there is a fixed bandwidth between L1 and L2 on a given GPU. Note, this metric is not measuring the % of L2 capacity used. (to simplify, in the diagram below, think of it as measuring the throughput of arrows next to the red dots)</p>
<p>L2 cache hit rate measures when an L1 miss occurs, how often was it found in L2. (in the diagram, think of L2 cache tags at the green X)</p>
<p><em>Original diagram from <a href="https://arxiv.org/pdf/1903.07486.pdf" rel="nofollow noreferrer">Dissecting the NVidia Turing T4 GPU via Microbenchmarking</a></em>
<a href="https://i.stack.imgur.com/k3aG1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k3aG1.png" alt="enter image description here"></a></p>
<p>Hypothetically:</p>
<ul>
<li>Some CUDA kernel could read a single L1 cacheline (128B) per SM once, incurring a single L2 read that always hits. The L2 utilization would be ~0%, with L2 hit-rate of 100%.</li>
<li>A different CUDA kernel could achieve ~100% L2 utilization and 100% L2 hit-rate, by performing tons of loads that either miss in L1, or were <a href="https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions" rel="nofollow noreferrer">"cache global" loads</a>, where the set of accessed addresses fit within the size of the L2.</li>
<li>Yet another CUDA kernel could achieve high L2 utilization and low L2 hit-rate, by performing tons of loads that either miss in L1, or were <a href="https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions" rel="nofollow noreferrer">"cache global" loads</a> that are scattered throughout a Gigabyte sized buffer (i.e. that don't all fit simultaneously in L2).</li>
</ul>
<p>See also</p>
<ul>
<li>The <a href="https://docs.nvidia.com/cuda/profiler-users-guide/index.html#metrics-reference-7x" rel="nofollow noreferrer">tables of metrics in the CUDA Toolkit Profiler documentation</a>.</li>
<li><a href="https://arxiv.org/pdf/1903.07486.pdf" rel="nofollow noreferrer">Dissecting the NVidia Turing T4 GPU via Microbenchmarking</a></li>
<li><a href="https://arxiv.org/pdf/1804.06826.pdf" rel="nofollow noreferrer">Dissecting the NVIDIA Volta GPU Architecture via Microbenchmarking</a></li>
</ul> | 2019-07-21 20:42:24.077000+00:00 | 2019-07-21 20:42:24.077000+00:00 | null | null | 57,135,152 | <p>I'm doing expriments by using cuda.</p>
<p>I thought that if L2 cache hit ratio is high, performance will increase.</p>
<p>However, from nvprof, L2 cache utilization is low even though L2 cache hit rate is about 93%.</p>
<p>Why this happens? Are there examples that make it happen?</p> | 2019-07-21 16:47:08.780000+00:00 | 2019-07-21 20:42:24.077000+00:00 | null | caching|cuda|gpu | ['https://arxiv.org/pdf/1903.07486.pdf', 'https://i.stack.imgur.com/k3aG1.png', 'https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions', 'https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions', 'https://docs.nvidia.com/cuda/profiler-users-guide/index.html#metrics-reference-7x', 'https://arxiv.org/pdf/1903.07486.pdf', 'https://arxiv.org/pdf/1804.06826.pdf'] | 7 |
70,006,822 | <p>So, in this paper: <a href="https://arxiv.org/pdf/2004.07464.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2004.07464.pdf</a>
They have combined image embedding and text embedding by concatenating them.</p>
<pre><code>X = TE + IE
</code></pre>
<p>Here X is the fusion embedding, with TE and IE as the text and image embeddings respectively.
If your TE and IE have dimensions of, say, 2048 each, your X will be of length 2*2048. Then maybe you can use this directly, or if you want to reduce the dimension you can use t-SNE/PCA or <a href="https://arxiv.org/abs/1708.03629" rel="nofollow noreferrer">https://arxiv.org/abs/1708.03629</a> (implemented here: <a href="https://github.com/vyraun/Half-Size" rel="nofollow noreferrer">https://github.com/vyraun/Half-Size</a>)</p> | 2021-11-17 15:04:52.033000+00:00 | 2021-11-17 15:04:52.033000+00:00 | null | null | 44,786,174 | <p>I know the meaning and methods of word embedding (skip-gram, CBOW) completely. And I know that Google has a word2vec API that, given a word, can produce its vector.
But my problem is this: we have a clause that includes a subject, object, verb, etc., where each word has previously been embedded by the Google API. Now, how can we combine these vectors together to create a vector that is equal to the clause?
Example:
Clause: V= "dog bites man"
After word embedding by the Google API, we have V1, V2, V3, each of which maps to "dog", "bites", "man" respectively, and we know that:
V = V1+ V2 +V3
How can we obtain V?
I will appreciate if you explain it by taking an example of real vectors.</p> | 2017-06-27 17:12:36.157000+00:00 | 2021-11-17 15:04:52.033000+00:00 | null | nlp|information-retrieval|word2vec|google-api-python-client|word-embedding | ['https://arxiv.org/pdf/2004.07464.pdf', 'https://arxiv.org/abs/1708.03629', 'https://github.com/vyraun/Half-Size'] | 3 |
44,093,296 | <p>There is nothing wrong. The problem is that increasing the number of layers does not automatically mean higher accuracy (otherwise machine learning would be kind of solved, because if you needed better accuracy in an image classifier you would just add one more layer to an Inception model and claim victory).</p>
<p>To show you that this is not only your problem, take a look at this high-profile paper: <a href="https://arxiv.org/pdf/1512.03385.pdf" rel="noreferrer">Deep Residual Learning for Image Recognition</a>, where the authors observe that simply increasing the number of layers degrades performance (which is the less important part) and propose an architecture to overcome this problem (which is the important part). Here is a small excerpt from it:<a href="https://i.stack.imgur.com/msoTt.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/msoTt.jpg" alt="enter image description here"></a></p>
<blockquote>
<p>The deeper network has higher training error and thus test error.</p>
</blockquote> | 2017-05-21 04:47:03.857000+00:00 | 2017-05-21 04:47:03.857000+00:00 | null | null | 44,092,936 | <p>I'm learning TensorFlow, and trying to create a simple two layer neural network.</p>
<p>The tutorial code <a href="https://www.tensorflow.org/get_started/mnist/pros" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/mnist/pros</a> starts with this simple network, to get 92% accuracy:</p>
<pre><code>W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
</code></pre>
<p>I tried replacing it with this very simple network, adding a new layer, <strong>but accuracy now drops to 84%</strong>!!!</p>
<pre><code>layer1_len = 10
w1 = weight_var([784, layer1_len])
b1 = bias_var([layer1_len])
o1 = tf.nn.relu(tf.matmul(x, w1) + b1)
w2 = weight_var([layer1_len, 10])
b2 = bias_var([10])
y = tf.nn.softmax(tf.matmul(o1, w2) + b2)
</code></pre>
<p>I get that result with several different values for <code>layer1_len</code> as well as different numbers of training steps. (Note that if I omit the <code>weight_var</code> and <code>bias_var</code> random initialization, and keep everything at zero, accuracy drops to close to 10%, essentially no better than guessing.)</p>
<p><strong>What am I doing wrong?</strong></p> | 2017-05-21 03:40:05.703000+00:00 | 2017-05-21 04:47:03.857000+00:00 | null | machine-learning|tensorflow|neural-network | ['https://arxiv.org/pdf/1512.03385.pdf', 'https://i.stack.imgur.com/msoTt.jpg'] | 2 |
46,787,296 | <p>The fact that the model trains on its own predictions is the whole point of Q-learning: it is a concept called bootstrapping, which means updating your estimates using your own current estimates. The insight behind this is:</p>
<ul>
<li>The Agent is initialized with some weights</li>
<li>These weights represent the Agent's current representation of the Q-Value function it is trying to approximate</li>
<li>Then it acts on the environment, performing the action it believes to be of highest Q-Value (with some randomness for exploration)</li>
<li>Then it receives some feedback from the environment: a reward, and the new state it is in</li>
<li>By comparing the difference between the Agent's Q-Value approximation for state <code>t</code> (= <code>[s_t_batch, a_batch]</code>) and its (discounted) approximation for state <code>t+1</code> <strong><em>plus</em></strong> the reward (=<code>y_batch</code>), it is able to measure how wrong its prediction for <code>Qt</code> is.</li>
<li>From this measure of mistake (called the TD-Error) the weights are updated in the direction of lower MSE, as for any other gradient-based optimization. </li>
<li>(One could wait for more than one step to have more information from the environment and update the weights in an even better direction. One could actually wait for the whole episode to be over and train on that. This continuum between training instantly and waiting for the end is called TD(Lambda); you should look into it.)</li>
</ul>
<p>Your loss means exactly this: for one batch, it is the mean-squared error between your model's prediction for time <code>t</code> (from its own Q-Value approximation) and its target for time <code>t</code>, built from its Q-Value approximation for the <strong><em>next</em></strong> state plus some "ground truth" from the environment, namely the <em>reward</em> for this timestep.</p>
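<p>As a tiny numeric illustration of how that bootstrapped target is built (all numbers made up):</p>
<pre><code>import numpy as np

gamma = 0.99
q_s_a = 0.5                            # current estimate Q(s_t, a_t)
q_next = np.array([0.2, 0.9])          # current estimates Q(s_{t+1}, a) for every action a
r = 1.0                                # reward observed from the environment

y = r + gamma * q_next.max()           # bootstrapped target (only r is "ground truth")
td_error = y - q_s_a                   # the quantity the MSE loss pushes towards zero
print(y, td_error)                     # about 1.891 and 1.391
</code></pre>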
<p>Your loss does seem to go down; however, it is very unstable, which is a known issue of vanilla Q-Learning, and especially of vanilla Deep Q-Learning. Look at the overview paper below to get an idea of how more sophisticated algorithms deal with this.</p>
<p>I advise you to look into <a href="https://en.wikipedia.org/wiki/Temporal_difference_learning" rel="nofollow noreferrer">Temporal Difference Learning</a>.
Good resources are also:</p>
<ul>
<li><a href="https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0" rel="nofollow noreferrer">Simple Reinforcement Learning with Tensorflow</a></li>
<li>The RL Bible: Sutton & Barto, Reinforcement Learning: An Introduction (2015 Edition)</li>
<li>This <a href="http://arxiv.org/abs/1701.07274" rel="nofollow noreferrer">overview paper</a> summarizing insights and implementation of recent algorithms</li>
<li>I wrote my <a href="http://vict0rsch.github.io/thesis/thesisVictorSchmidt.pdf" rel="nofollow noreferrer">Master Thesis</a> on RL, you can checkout the part 2: background theory for more detailed insights</li>
</ul> | 2017-10-17 09:50:12.563000+00:00 | 2017-10-17 10:28:19.490000+00:00 | 2017-10-17 10:28:19.490000+00:00 | null | 46,783,760 | <p>When I am training my model I have the following segment:</p>
<pre><code>s_t_batch, a_batch, y_batch = train_data(minibatch, model2)
# perform gradient step
loss.append(model.train_on_batch([s_t_batch, a_batch], y_batch))
</code></pre>
<p>where <code>s_t, a_</code> corresponds to current states and actions that were taken in those states respectively. <code>model2</code> is the same as <code>model</code> except that <code>model2</code> has an output of <code>num_actions</code> and <code>model</code> only outputs the value of the action that was taken in that state. </p>
<p>What I find strange (and is really the focus of this question) is in the function <code>train_data</code> I have the line:</p>
<pre><code>y_batch = r_batch + GAMMA * np.max(model.predict(s_t_batch), axis=1)
</code></pre>
<p>The strange part is the fact that I am using the model to generate my <code>y_batch</code> as well as training on them. Doesn't this become some sort of self fulfilling prophecy? If I understand correctly, the model tries to predict the expected maximum reward. Using the <strong>same</strong> model to try and generate <code>y_batch</code> is implying that it is the true model doesn't it?</p>
<p><strong>The question is</strong>, 1. what is the intuition behind using the same model to generate y_batch as it is to train them. 2. (optional) does loss value mean anything. When I plot it, it seems doesn't seem to be converging, however the sum of rewards seem to be increasing (see plots in link below).</p>
<p>The full code can be found <a href="https://github.com/sachinruk/deepschool.io/blob/master/Lesson%2020%20-%20Deep%20Q%20Learning%20-%20Solutions.ipynb" rel="nofollow noreferrer">here</a>, which is an implementation of Deep Q Learning on the CartPole-v0 problem: </p>
<h2>Comments from other forums:</h2>
<ol>
<li>y = r + gamma*np.max(model.predict(s_t_batch), axis=1) is totally natural and y will converge to the true state-action value. And if you don't break down the correlation between consecutive updates with something like experience replay (or better prioritized exp replay) your model WILL diverge. And there are better variants like DDQN, Duelling Network which performs better.</li>
<li>y_batch includes the reward. Both the target and online networks are estimates. It is indeed a somewhat self fulfilling prophecy as DQN's value function is overly optimistic. That is why Double DQN was added a few months later.</li>
<li>y will converge, but not necessarily to the true (I assume you mean optimal) state-action value. No one has proven that the converged value is the optimal value but it is the best approximation we have. However will converge to the the true value for simple enough problems (e.g. grid-world)</li>
</ol> | 2017-10-17 06:25:25.803000+00:00 | 2019-10-19 08:00:48.420000+00:00 | 2019-10-19 08:00:48.420000+00:00 | deep-learning|reinforcement-learning|openai-gym|q-learning | ['https://en.wikipedia.org/wiki/Temporal_difference_learning', 'https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0', 'http://arxiv.org/abs/1701.07274', 'http://vict0rsch.github.io/thesis/thesisVictorSchmidt.pdf'] | 4 |
18,682,668 | <p>First of all, take a look at the file size; here are detailed <a href="http://arxiv.org/pdf/cs/0502012.pdf" rel="nofollow">performance measurements</a>.</p>
<p>Still I would like the fastest possible method in order to improve the user's experience. </p>
<p>Is the speed of reading data from the disk purely a function of the disk speed and thus StreamReader.ReadAllLines() is as performant as other c# code? Or is there something 'fancy' I can do to boost performance programmatically. This does not have to be described in detail. If so what approximate percentage improvement might be achieved? </p>
<p>I am purely interested in read speed and not concerned with the speed of code that may process the data once loaded. </p> | 2013-09-08 08:38:16.257000+00:00 | 2013-09-08 19:10:23.917000+00:00 | 2013-09-08 19:10:23.917000+00:00 | c# | ['http://arxiv.org/pdf/cs/0502012.pdf'] | 1 |
52,743,448 | <p>The general answer whenever it comes to the question of "which is faster?" is always: measure how fast each approach runs your application scenario to find out. In this case, I would say that the first approach would seem preferable most of the time (if you had to pick one of those two options for some reason). Unless you have some very tiny convolution kernels, the second approach would have lots of threads idle in the parts that do much of the actual work. Be sure to avoid bank conflicts within your tiles and think about the memory access patterns you get from your warps when moving data to and from global memory.</p>
<p>In the end, convolution is basically just computing sums over all possible combinations of kernel coefficients and input elements. Since the workload is essentially just repeatedly fetching these values in some order, convolution is almost necessarily going to be limited by bandwidth. Thus, doing convolution efficiently comes down to optimizing memory access and reducing bandwidth as much as possible.</p>
<blockquote>
<p>[…] which version is used more often in practice, like in deep learning?</p>
</blockquote>
<p>Neither. The naïve approach of throwing nested loops at it to brute-force convolution in the spatial domain is almost never an efficient way of computing convolutions. Convolution is such a fundamental operation for so many things that it has been studied extensively. There are literally hundreds, if not thousands of papers and books you could read on the subject. In deep learning, the problem of convolution has <a href="https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/" rel="nofollow noreferrer">commonly been formulated in terms of <em>general matrix multiplications</em> (GEMMs)</a> since this approach leads to rather nice memory access patterns and many efficient GEMM implementations are available for the GPU. But also FFT-based approaches as well as <a href="https://arxiv.org/abs/1509.09308" rel="nofollow noreferrer">other algorithms</a> are increasingly used depending on the application.</p> | 2018-10-10 15:12:21.693000+00:00 | 2018-10-10 15:12:21.693000+00:00 | null | null | 52,729,965 | <p>Based on my study, there are 2 different strategies to implement tiled version of convolution with cuda. I want to know more about this, and would like to see how they compare with each other, what is the advantage and disadvantage of each strategy, and how to choose. Below is the implementations of the two different strategies.</p>
<p>Strategy 1: the tile size matches with the output size, and needs multiple steps to load the input.</p>
<pre><code>#define MASK_WIDTH 3
#define MASK_RADIUS 1
#define TILE_WIDTH 8
#define SHAREDMEM_DIM (TILE_WIDTH + (MASK_RADIUS * 2))
__constant__ float deviceMask[MASK_WIDTH * MASK_WIDTH * MASK_WIDTH];
__global__ void conv3d(float *inputArray,
float *outputArray,
const int z_size,
const int y_size,
const int x_size) {
__shared__ float subTile[SHAREDMEM_DIM][SHAREDMEM_DIM][SHAREDMEM_DIM];
int bx = blockIdx.x, tx = threadIdx.x;
int by = blockIdx.y, ty = threadIdx.y;
int bz = blockIdx.z, tz = threadIdx.z;
int destination = (tz * TILE_WIDTH * TILE_WIDTH) + (ty * TILE_WIDTH) + tx;
int destTmp = destination;
int dX = destTmp % SHAREDMEM_DIM;
destTmp = destTmp / SHAREDMEM_DIM;
int dY = destTmp % SHAREDMEM_DIM;
destTmp = destTmp / SHAREDMEM_DIM;
int dZ = destTmp;
int inputZ = dZ + (bz * TILE_WIDTH) - MASK_RADIUS;
int inputY = dY + (by * TILE_WIDTH) - MASK_RADIUS;
int inputX = dX + (bx * TILE_WIDTH) - MASK_RADIUS;
int input = (inputZ * y_size * x_size) + (inputY * x_size) + inputX;
if( inputZ >= 0 && inputZ < z_size
&& inputY >= 0 && inputY < y_size
&& inputX >= 0 && inputX < x_size){
subTile[dZ][dY][dX] = inputArray[input];
}
else{
subTile[dZ][dY][dX] = 0;
}
destination = TILE_WIDTH * TILE_WIDTH * TILE_WIDTH
+ (tz * TILE_WIDTH * TILE_WIDTH) + (ty * TILE_WIDTH) + tx;
destTmp = destination;
dX = destTmp % SHAREDMEM_DIM;
destTmp = destTmp / SHAREDMEM_DIM;
dY = destTmp % SHAREDMEM_DIM;
destTmp = destTmp / SHAREDMEM_DIM;
dZ = destTmp;
inputZ = dZ + (bz * TILE_WIDTH) - MASK_RADIUS;
inputY = dY + (by * TILE_WIDTH) - MASK_RADIUS;
inputX = dX + (bx * TILE_WIDTH) - MASK_RADIUS;
input = (inputZ * y_size * x_size) + (inputY * x_size) + inputX;
if(dZ < SHAREDMEM_DIM){
if( inputZ >= 0 && inputZ < z_size
&& inputY >= 0 && inputY < y_size
&& inputX >= 0 && inputX < x_size ) {
subTile[dZ][dY][dX] = inputArray[input];
}
else{
subTile[dZ][dY][dX] = 0;
}
}
__syncthreads();
float sum = 0;
int z, y, x;
for(z = 0; z < MASK_WIDTH; z++){
for(y = 0; y < MASK_WIDTH; y++){
for(x = 0; x < MASK_WIDTH; x++){
sum += subTile[tz + z][ty + y][tx + x]
* deviceMask[x + (y * MASK_WIDTH) + (z * MASK_WIDTH * MASK_WIDTH)];
}
}
}
z = tz + (bz * TILE_WIDTH);
y = ty + (by * TILE_WIDTH);
x = tx + (bx * TILE_WIDTH);
if(z < z_size && y < y_size && x < x_size){
outputArray[x + (y * x_size) + (z * y_size * x_size)] = sum;
}
__syncthreads();
}
</code></pre>
<p>The second strategy is to set the block size to be the same with input tile. In calculating output, some threads are turned off.</p>
<pre><code>#define TILE_X 14
#define TILE_Y 6
#define TILE_Z 6
#define MASK_WIDTH 3
#define MASK_SIZE MASK_WIDTH * MASK_WIDTH * MASK_WIDTH
__constant__ float mask[MASK_WIDTH][MASK_WIDTH][MASK_WIDTH];
__global__ void conv3d(float *input, float *output, const int z_size, const int y_size, const int x_size) {
__shared__ float inputTile [TILE_Z+MASK_WIDTH-1][TILE_Y+MASK_WIDTH-1][TILE_X+MASK_WIDTH-1];
int tx = threadIdx.x; int ty = threadIdx.y; int tz = threadIdx.z;
int bx = blockIdx.x; int by = blockIdx.y; int bz = blockIdx.z;
int x_o = bx * TILE_X + tx;
int y_o = by * TILE_Y + ty;
int z_o = bz * TILE_Z + tz;
int x_i = x_o - MASK_WIDTH/2;
int y_i = y_o - MASK_WIDTH/2;
int z_i = z_o - MASK_WIDTH/2;
if (x_i >= 0 && y_i >= 0 && z_i >= 0 && x_i < x_size && y_i < y_size && z_i < z_size)
inputTile[tz][ty][tx] = input[(z_i * y_size + y_i) * x_size + x_i];
else
inputTile[tz][ty][tx] = 0.0;
__syncthreads();
float acc = 0.0;
if(tz < TILE_Z && ty < TILE_Y && tx < TILE_X) {
for(int z_mask = 0; z_mask < MASK_WIDTH; z_mask++) {
for(int y_mask = 0; y_mask < MASK_WIDTH; y_mask++) {
for(int x_mask = 0; x_mask < MASK_WIDTH; x_mask++) {
acc += mask[z_mask][y_mask][x_mask] *
inputTile[tz+z_mask][ty+y_mask][tx+x_mask];
}
}
}
if(z_o < z_size && y_o < y_size && x_o < x_size)
output[(z_o * y_size + y_o) * x_size + x_o] = acc;
}
}
</code></pre>
<p>Any idea about how to choose between these? In addition, which version is used more often in practice, like in deep learning? Also if you have any comments on the code, please also let me know!</p> | 2018-10-09 22:02:59.140000+00:00 | 2018-10-10 15:12:21.693000+00:00 | 2018-10-09 22:17:29.353000+00:00 | c++|3d|cuda|deep-learning|convolution | ['https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/', 'https://arxiv.org/abs/1509.09308'] | 2 |
69,508,740 | <p>Looks great; you have already followed most of the solutions for resolving the exploding gradient problem. Below is a list of solutions you can try:</p>
<p><strong>Solutions to avoid Gradient Exploding problem</strong></p>
<ol>
<li><p><em>Appropriate weight initialization:</em> utilise an appropriate weight initialization scheme based on the activation function used.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Initialization</th>
<th>Activation Function</th>
</tr>
</thead>
<tbody>
<tr>
<td>He</td>
<td>ReLU & variants</td>
</tr>
<tr>
<td>LeCun</td>
<td>SELU</td>
</tr>
<tr>
<td>Glorot</td>
<td>Softmax, Logistic, None, Tanh</td>
</tr>
</tbody>
</table>
</div></li>
<li><p><em>Redesigning your neural network:</em> use fewer layers in the neural network and/or use a smaller batch size</p>
</li>
<li><p><em>Choosing a non-saturating activation function</em>: choose the right activation function, possibly with a reduced learning rate</p>
<ul>
<li>ReLU</li>
<li>Leaky ReLU</li>
<li>randomized leaky ReLU (RReLU)</li>
<li>parametric leaky ReLU (PReLU)</li>
<li>exponential linear unit (ELU)</li>
</ul>
</li>
<li><p><em>Batch Normalisation:</em> Ideally using batch normalisation before/after each layer, based on what works best for your dataset.</p>
<ul>
<li><p>after each layer <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">Paper reference</a></p>
<pre><code>model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
</code></pre>
</li>
<li><p>before each layer</p>
<pre><code> model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, kernel_initializer="he_normal", use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("elu"),
keras.layers.Dense(100, kernel_initializer="he_normal", use_bias=False),
keras.layers.Activation("elu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
</code></pre>
</li>
</ul>
</li>
<li><p><em>Gradient clipping</em>: good default values are clipnorm=1.0 and clipvalue=0.5 (a minimal Keras sketch follows after this list)</p>
</li>
<li><p><em>Ensure right optimizer is utilised</em>: Since you have utilised Adam optimizer, check if other optimizer works best for your case. Refer <a href="https://keras.io/api/optimizers/" rel="nofollow noreferrer">this documentation</a> for info on the available optimizers [SGD, RMSprop, Adam, Adadelta, Adagrad, Adamax, Nadam, Ftrl]</p>
</li>
<li><p><em>Truncated backpropagation through time</em>: often works for RNNs; refer to this <a href="https://machinelearningmastery.com/gentle-introduction-backpropagation-time/" rel="nofollow noreferrer">documentation</a></p>
</li>
<li><p><em>Use LSTMs</em> (a solution for RNNs)</p>
</li>
<li><p><em>Use weight regularizers on layers</em>: set <code>kernel_regularizer</code> to L1 or L2. <a href="https://keras.io/api/layers/regularizers/" rel="nofollow noreferrer">Weight regularizer document reference</a></p>
</li>
</ol>
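<p>A short, hedged sketch of what points 1, 5 and 9 above can look like in Keras (layer sizes and rates are placeholders, not recommendations):</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.Dense(64, activation="elu",
                 kernel_initializer="he_normal",              # point 1: init matched to the ReLU family
                 kernel_regularizer=regularizers.l2(1e-4),    # point 9: weight regularizer
                 input_shape=(20,)),
    layers.Dense(1),
])

opt = keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0) # point 5: gradient clipping
# (clipvalue=0.5 is the element-wise alternative)
model.compile(loss="mse", optimizer=opt)
</code></pre>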
<p>For more information refer to chapter 11 in <em>Hands on Machine learning with scikit-learn, keras and tensorflow</em> book by <em>Aurélien</em></p> | 2021-10-09 16:42:13.367000+00:00 | 2021-10-29 16:33:42.460000+00:00 | 2021-10-29 16:33:42.460000+00:00 | null | 69,427,103 | <p>I have a gradient exploding problem which I couldn't solve after trying for several days. I implemented a custom message passing graph neural network in TensorFlow which is used to predict a continuous value from graph data. Each graph is associated with one target value. Each node of a graph is represented by a node attribute vector, and the edges between nodes are represented by an edge attribute vector.</p>
<p>Within a message passing layer, node attributes are updated in a certain way (e.g., by aggregating other node/edge attributes), and these updated node attributes are returned.</p>
<p>Now, I managed to figure out where the gradient problem occurs in my code. I have the below snippet.</p>
<pre><code>to_concat = [neighbors_mean, e]
z = K.concatenate(to_concat, axis=-1)
output = self.Net(z)
</code></pre>
<p>Here, <code>neighbors_mean</code> is the element-wise mean between two node attributes <code>vi</code>, <code>vj</code> that form the edge having an edge attribute <code>e</code>. <code>Net</code> is a single layer feed-forward network. With this, the training loss suddenly jumps to NaN after about 30 epochs with a batch size of 32. If the batch size is 128, still the gradients explode after about 200 epochs.</p>
<p>I found that, in this case, the gradients explode because of the edge attribute <code>e</code>. If I didn't concatenate <code>neighbors_mean</code> with <code>e</code> and just used the below code, there would be no gradient explosion.</p>
<pre><code>output = self.Net(neighbors_mean)
</code></pre>
<p>Also I can avoid gradient explosion by sending <code>e</code> through a sigmoid function as follows. But this degrades the performance (final MAE), because the values in <code>e</code> are mapped to 0-1 range non-linearly. Note that <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)" rel="nofollow noreferrer">Rectified Linear Unit</a> (ReLU) instead of sigmoid didn't work.</p>
<pre><code>to_concat = [neighbors_mean, tf.math.sigmoid(e)]
z = K.concatenate(to_concat, axis=-1)
output = self.Net(z)
</code></pre>
<p>Just to mention that <code>e</code> carries a single value relating to the distance between the two corresponding nodes and this distance is always in the range 0.5-4. There are no large values or NaNs in <code>e</code>.</p>
<p>I have a custom loss function to train this model, but I found that this is not a problem with loss (other losses also led to the same problem). Below is my custom loss function. Note that although this is a single output regression network, the final layer of my NN has two neurons, relating to the mean and log(sigma) of the prediction.</p>
<pre><code>def robust_loss(y_true, y_pred):
"""
Computes the robust loss between labels and predictions.
"""
mean, sigma = tf.split(y_pred, 2, axis=-1)
# tried limiting 'sigma' with sigma = tf.clip_by_value(sigma,-4,1.0) but the gradients still explode
loss = np.sqrt(2.0) * K.abs(mean - y_true) * K.exp(-sigma) + sigma
return K.mean(loss)
</code></pre>
<p>I basically tried everything suggested online to avoid gradient explosion.</p>
<ol>
<li>Applied gradient clipping - with <code>Adam(lr, clipnorm=1, clipvalue=5)</code> and also with <code>tf.clip_by_global_norm(gradients, 1.0)</code></li>
<li>My target variables are always scaled</li>
<li>Weights are initialized with <code>glorot_uniform</code> distribution</li>
<li>Applied regularisation to weights</li>
<li>Tried larger batch sizes (till 256, although delayed gradient explosion happens at some point)</li>
<li>Tried with reduced learning rate</li>
</ol>
<p>What am I missing here? I definitely know it has something to do with concatenating <code>e</code>. But given that 0.5<e<4, why do the gradients explode in this case? This feature <code>e</code> is important to me. What else can I do to avoid numerical overflow in my model?</p> | 2021-10-03 17:05:28.273000+00:00 | 2021-10-29 16:33:42.460000+00:00 | 2021-10-24 12:36:43.910000+00:00 | python|tensorflow|machine-learning|keras|gradient | ['https://arxiv.org/abs/1502.03167', 'https://keras.io/api/optimizers/', 'https://machinelearningmastery.com/gentle-introduction-backpropagation-time/', 'https://keras.io/api/layers/regularizers/'] | 4 |
57,385,673 | <p>Batch normalization in LSTMs is not that easy to implement. Some papers present amazing results, e.g. <a href="https://arxiv.org/pdf/1603.09025.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1603.09025.pdf</a>, called Recurrent Batch Normalization. The authors apply the following equations:</p>
<p><a href="https://i.stack.imgur.com/2FhjQ.png" rel="nofollow noreferrer">BATCH-NORMALIZED LSTM</a></p>
<p>Unfortunately, this model is not implemented in keras yet only in tensorflow <a href="https://github.com/OlavHN/bnlstm" rel="nofollow noreferrer">https://github.com/OlavHN/bnlstm</a></p>
<p>However, I was able to get good results using (default) batch normalization after the activation function, without centering and scaling. This approach is different from the paper above, which applies BN to c_t and h_t; maybe it is worth a try.</p>
<pre><code>model = Sequential()
model.add(LSTM(neurons1,
activation=tf.nn.relu,
return_sequences=True,
input_shape=(timesteps, data_dim)))
model.add(BatchNormalization(momentum=m, scale=False, center=False))
model.add(LSTM(neurons2,
activation=tf.nn.relu))
model.add(BatchNormalization(momentum=m, scale=False, center=False))
model.add(Dense(1))
</code></pre> | 2019-08-07 00:55:29.967000+00:00 | 2019-08-07 00:55:29.967000+00:00 | null | null | 48,544,953 | <p>I am trying to use batch normalization in LSTM using keras in R. In my dataset the target/output variable is the <code>Sales</code> column, and every row in the dataset records the <code>Sales</code> for each day in a year (2008-2017). The dataset looks like below:</p>
<p><a href="https://i.stack.imgur.com/mFJgq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mFJgq.png" alt="Sales data"></a></p>
<p>My objective is to build a LSTM model based on such dataset, which should be able to provide prediction at the end of training. I am training this model on the data from 2008-2016, and using half of the 2017 data as validation, and the rest as test set.</p>
<p>Previously, I tried creating a model using dropout and early stopping. This looks like below:</p>
<pre><code>mdl1 <- keras_model_sequential()
mdl1 %>%
layer_lstm(units = 512, input_shape = c(1, 3), return_sequences = T ) %>%
layer_dropout(rate = 0.3) %>%
layer_lstm(units = 512, return_sequences = FALSE) %>%
layer_dropout(rate = 0.2) %>%
layer_dense(units = 1, activation = "linear")
mdl1 %>% compile(loss = 'mse', optimizer = 'rmsprop')
</code></pre>
<p>The model looks as follows</p>
<pre><code>___________________________________________________________
Layer (type) Output Shape Param #
===========================================================
lstm_25 (LSTM) (None, 1, 512) 1056768
___________________________________________________________
dropout_25 (Dropout) (None, 1, 512) 0
___________________________________________________________
lstm_26 (LSTM) (None, 512) 2099200
___________________________________________________________
dropout_26 (Dropout) (None, 512) 0
___________________________________________________________
dense_13 (Dense) (None, 1) 513
===========================================================
Total params: 3,156,481
Trainable params: 3,156,481
Non-trainable params: 0
___________________________________________________________
</code></pre>
<p>To train the model, early stopping is used with a validation set.</p>
<pre><code>mdl1.history <- mdl1 %>%
fit(dt.tr, dt.tr.out, epochs=500, shuffle=F,
validation_data = list(dt.val, dt.val.out),
callbacks = list(
callback_early_stopping(min_delta = 0.000001, patience = 10, verbose = 1)
))
</code></pre>
<p>On top of this, I want to use batch normalization to speed up the training. As per my understanding, to use batch normalization, I need to divide the data into batches, and apply <code>layer_batch_normalization</code> for the input of each hidden layer. The model layers looks like as follows:</p>
<pre><code>batch_size <- 32
mdl2 <- keras_model_sequential()
mdl2 %>%
layer_batch_normalization(input_shape = c(1, 3), batch_size = batch_size) %>%
layer_lstm(units = 512, return_sequences = T) %>%
layer_dropout(rate = 0.3) %>%
layer_batch_normalization(batch_size = batch_size) %>%
layer_lstm(units = 512, return_sequences = F) %>%
layer_dropout(rate = 0.2) %>%
layer_batch_normalization(batch_size = batch_size) %>%
layer_dense(units = 1, activation = "linear")
mdl2 %>% compile(loss = 'mse', optimizer = 'rmsprop')
</code></pre>
<p>This model looks as follows:</p>
<pre><code>______________________________________________________________________________
Layer (type) Output Shape Param #
==============================================================================
batch_normalization_34 (BatchNormalization) (32, 1, 3) 12
______________________________________________________________________________
lstm_27 (LSTM) (32, 1, 512) 1056768
______________________________________________________________________________
dropout_27 (Dropout) (32, 1, 512) 0
______________________________________________________________________________
batch_normalization_35 (BatchNormalization) (32, 1, 512) 2048
______________________________________________________________________________
lstm_28 (LSTM) (32, 1, 512) 2099200
______________________________________________________________________________
dropout_28 (Dropout) (32, 1, 512) 0
______________________________________________________________________________
batch_normalization_36 (BatchNormalization) (32, 1, 512) 2048
______________________________________________________________________________
dense_14 (Dense) (32, 1, 1) 513
==============================================================================
Total params: 3,160,589
Trainable params: 3,158,535
Non-trainable params: 2,054
______________________________________________________________________________
</code></pre>
<p>Training the model looks like before. Only difference lies in the training and validation dataset, which are made of sizes that are multiple of <code>batch_size</code> (32 here), by resampling data from the 2nd last batch to the last batch.</p>
<p>However, the performance of <code>mdl1</code> is much better than that of <code>mdl2</code>, as can be seen below.</p>
<p><a href="https://i.stack.imgur.com/cyzak.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cyzak.png" alt="models"></a></p>
<p>I am not sure exactly what I am doing wrong, as I am starting with keras (and practical neural net in general). Additionally, the performance of first model is not so good as well; any suggestion on how to improve that would also be great.</p> | 2018-01-31 14:49:03.390000+00:00 | 2019-08-07 00:55:29.967000+00:00 | null | r|tensorflow|keras|recurrent-neural-network|batch-normalization | ['https://arxiv.org/pdf/1603.09025.pdf', 'https://i.stack.imgur.com/2FhjQ.png', 'https://github.com/OlavHN/bnlstm'] | 3 |
60,792,910 | <p>Target tracking is a very difficult problem. In target tracking you will have <strong>two main issues</strong>: the motion uncertainty problem, and the origin uncertainty problem. The first one refers to the way you model object motion so you can predict its future state, and the second refers to the issue of data association (what measurement corresponds to what track; the literature is filled with <a href="https://www.mdpi.com/1424-8220/20/4/1110" rel="nofollow noreferrer">scientific</a> ways in which this issue can be approached).</p>
<p>Before you can come up with a solution to your problem you will have to answer some questions yourself regarding the tracking problem you want to solve. For example: what are the values that you want to track (this will define your state vector), how are those values related to one another, are you trying to perform single-object tracking or multiple-object tracking, how are the objects moving (do they have a relatively constant acceleration or velocity or not), do objects make turns, can objects also be occluded or not, and so on.</p>
<p>The <strong>Kalman Filter</strong> is good solution to predict the next state of your system (once you have identified your process model). A deep learning alternative to the Kalman filter is the so called <a href="https://arxiv.org/abs/1511.05121" rel="nofollow noreferrer">Deep Kalman Filter</a> which essentially is used to do the same thing. In case your process or measurement models are not linear, you will have to linearize them before predicting the next state. Some solutions that deal with non-linear process or measurement models are the <strong>Extended Kalman Filter</strong> (EKF) or <strong><a href="https://towardsdatascience.com/the-unscented-kalman-filter-anything-ekf-can-do-i-can-do-it-better-ce7c773cf88d" rel="nofollow noreferrer">Unscented Kalman Filter</a></strong> (UKF). </p>
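<p>To make the Kalman filter part concrete, here is a minimal constant-velocity filter in 2D (a sketch only; the noise values are purely illustrative and would need tuning for fast targets):</p>
<pre><code>import numpy as np

dt = 1.0 / 30.0                               # e.g. one video frame
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)          # constant-velocity process model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)           # we only measure position
Q = np.eye(4) * 1e-2                          # process noise (larger for agile targets)
R = np.eye(2) * 1e-1                          # measurement noise

def kf_step(x, P, z):
    # predict
    x, P = F @ x, F @ P @ F.T + Q
    # update with the position measurement z = (px, py)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = kf_step(x, P, np.array([1.0, 2.0]))    # feed one detection per frame
</code></pre>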
<p>Now, related to fast-moving objects, an idea you can use is to have a larger covariance matrix, since the objects can move a lot more if they are fast, so the search space for the correct association has to be a bit larger. Additionally, you can use multiple motion models in case a single model cannot describe the motion well.
In case of occlusions I will leave you <a href="https://stackoverflow.com/questions/2764238/image-processing-what-are-occlusions/60644446#60644446">this</a> stack overflow thread, where I have given an answer covering more details regarding occlusion handling in case of tracking. I have added some references for you to read. You will have to provide more details in your question, if you would like to receive more information regarding a solution (for example you should define fast moving objects with respect to camera frame rate). </p>
<p>I personally do not think there is a silver bullet solution for the tracking problem, I prefer to tailor a solution to the problem I am trying to solve. </p> | 2020-03-21 20:27:41.103000+00:00 | 2020-03-21 20:34:19.133000+00:00 | 2020-03-21 20:34:19.133000+00:00 | null | 60,592,851 | <p>I'm trying to create an application that will be able to track rapidly moving objects in video/camera feed, however have not found any CV/DL solution that is good enough. Can you recommend any computer vision solution for tracking fast moving objects on regular laptop computer and web cam? A demo app would be ideal.</p>
<p>For example see this video where the tracking is done in hardware (I'm looking for software solution) : <a href="https://www.youtube.com/watch?v=qn5YQVvW-hQ" rel="nofollow noreferrer">https://www.youtube.com/watch?v=qn5YQVvW-hQ</a></p> | 2020-03-08 22:54:46.580000+00:00 | 2020-06-08 11:39:46.317000+00:00 | null | opencv|deep-learning | ['https://www.mdpi.com/1424-8220/20/4/1110', 'https://arxiv.org/abs/1511.05121', 'https://towardsdatascience.com/the-unscented-kalman-filter-anything-ekf-can-do-i-can-do-it-better-ce7c773cf88d', 'https://stackoverflow.com/questions/2764238/image-processing-what-are-occlusions/60644446#60644446'] | 4 |
72,182,806 | <p>As you said, <code>train_test_split</code> interprets each list of tags as a single label; it doesn't matter what it contains. A sample with tags <code>[1, 2, 3]</code> will not be identified the same as a sample with tags <code>[1, 2]</code>. Hence, you cannot flatten the <code>tags</code> column to check the label counts.</p>
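<p>You can verify this by counting whole tag <em>combinations</em>, which is what <code>stratify</code> actually sees (a hedged sketch, assuming <code>df["tags"]</code> holds lists as in your screenshot):</p>
<pre><code>combo_counts = df["tags"].apply(tuple).value_counts()
print(combo_counts.min())   # if this prints 1, stratified splitting has to fail

# optionally keep only tag combinations that occur at least twice
df_filtered = df[df["tags"].apply(tuple).map(combo_counts) > 1]
</code></pre>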
<p>The solution, if you want to keep these labels, is to drop the observations whose label combinations are not sufficiently represented (e.g., with <code>value_counts() == 1</code>). In fact, this is also what they do in the article you linked (see the last code snippet of the "Perform exploratory data analysis" paragraph):</p>
<pre><code># Filtering the rare terms.
arxiv_data_filtered = arxiv_data.groupby("terms").filter(lambda x: len(x) > 1)
</code></pre> | 2022-05-10 08:14:05.597000+00:00 | 2022-05-10 08:14:05.597000+00:00 | null | null | 72,182,217 | <p>I have multilabel dataset (<code>pd.DataFrame</code>) which looks like this:
<a href="https://i.stack.imgur.com/AuOxq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AuOxq.png" alt="" /></a></p>
<p>This is value_counts of flatten <code>tags</code> column:</p>
<pre><code>101 4450171
86 3933972
45 3468383
0 2801217
46 2621773
...
4681 1000
2923 1000
4580 1000
7569 1000
6955 1000
Length: 7657, dtype: int64
</code></pre>
<p>Then I use <code>train_test_split</code> from <code>sklearn</code> with <code>stratify</code> argument to split dataset with balanced distribution:</p>
<pre class="lang-py prettyprint-override"><code>train_df, test_df = train_test_split(
df,
test_size=0.02,
stratify=df["tags"].values,
)
</code></pre>
<p>And I get this error:</p>
<pre><code>ValueError: The least populated class in y has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2.
</code></pre>
<p>Why? I see that the least populated class has 1000 samples. Does it actually compare lists instead of list values? I based on this article: <a href="https://keras.io/examples/nlp/multi_label_classification/" rel="nofollow noreferrer">https://keras.io/examples/nlp/multi_label_classification/</a></p> | 2022-05-10 07:26:35.447000+00:00 | 2022-05-10 08:14:05.597000+00:00 | 2022-05-10 07:56:21.367000+00:00 | python|pandas|scikit-learn|split|dataset | [] | 0 |
66,679,501 | <p>I was able to bring your code to a version where it would at least converge. In summary, I think there might be multiple problems with it: the normalization (why those values?), some unnecessary ReLUs, a too-high learning rate, MSE loss instead of cross-entropy and, mainly, the fact that I don't think the softmax in the bottleneck layer works that way, for vanishing-gradient reasons; see here:
<p><a href="https://www.quora.com/Does-anyone-ever-use-a-softmax-layer-mid-neural-network-rather-than-at-the-end" rel="nofollow noreferrer">https://www.quora.com/Does-anyone-ever-use-a-softmax-layer-mid-neural-network-rather-than-at-the-end</a></p>
<p>Maybe one could fix this using the Gumbel softmax: <a href="https://arxiv.org/abs/1611.01144" rel="nofollow noreferrer">https://arxiv.org/abs/1611.01144</a></p>
<p>Moreover, there are papers already achieving this, but as a Variational Autoencoder rather than a vanilla autoencoder, see here: <a href="https://arxiv.org/abs/1609.02200" rel="nofollow noreferrer">https://arxiv.org/abs/1609.02200</a>.</p>
<p>For now you can use this modification, which at least converges, and then change things step by step and see what breaks it.</p>
<p>As for the classification, the standard way would be to use the trained encoder to generate features from images and then use a normal classifier (an SVM or the like) on top of that, as sketched below.</p>
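<p>A hedged sketch of that feature-extraction idea (it assumes the trained <code>model.encoder</code> and the <code>dataloader</code> from the code below, and uses scikit-learn's <code>LinearSVC</code> purely as an example classifier):</p>
<pre><code>import numpy as np
import torch
from sklearn.svm import LinearSVC

feats, labels = [], []
with torch.no_grad():
    for img, y in dataloader:                      # any labelled loader
        z = model.encoder(img)                     # bottleneck features
        feats.append(z.flatten(start_dim=1).numpy())
        labels.append(y.numpy())

clf = LinearSVC().fit(np.concatenate(feats), np.concatenate(labels))
</code></pre>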
<pre><code>import torch
import torch.nn as nn
from torch.autograd import Variable
import torchvision.transforms as transforms
from torchvision.datasets import MNIST
from tqdm import tqdm

batch_size = 16
transform = transforms.Compose([
transforms.ToTensor(),
])
trainset = MNIST(root='./data/', train=True, download=True, transform=transform)
dataloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=8)
class Autoencoder(nn.Module):
def __init__(self):
super(Autoencoder,self).__init__()
self.encoder = nn.Sequential(
nn.Conv2d(1, 2, kernel_size=5),
nn.ReLU(),
nn.Conv2d(2, 4, kernel_size=5),
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(4, 10, kernel_size=5),
nn.ReLU(),
nn.ConvTranspose2d(10, 1, kernel_size=5),
nn.Sigmoid(),
)
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
model = Autoencoder().cpu()
distance = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001,weight_decay=1e-5)
num_epochs = 20
outputs = []
for epoch in tqdm(range(num_epochs)):
for data in dataloader:
img, _ = data
img = Variable(img).cpu()
output = model(img)
loss = distance(output, img)
optimizer.zero_grad()
loss.backward()
optimizer.step()
outputs.append(output)
print('epoch [{}/{}], loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))
import matplotlib.pyplot as plt
# plotting epoch outputs
for k in range(0, 20):
plt.figure(figsize=(9, 2))
imgs = outputs[k].detach().numpy()
for i, item in enumerate(imgs):
plt.imshow(item[0])
plt.title(str(i))
plt.show()
</code></pre> | 2021-03-17 18:55:39.553000+00:00 | 2021-03-18 12:25:51.280000+00:00 | 2021-03-18 12:25:51.280000+00:00 | null | 66,667,949 | <p>I'm trying to build a simple autoencoder for MNIST, where the middle layer is just 10 neurons. My hope is that it will learn to classify the 10 digits, and I assume that would lead to the lowest error in the end (wrt reproducing the original image).</p>
<p>I have the following code, which I've already played around with a fair amount. If I run it for up-to 100 epochs, the loss doesn't really go below 1.0, and if I evaluate it, it's obviously not working. What am I missing?</p>
<p>Training:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torchvision as tv
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torchvision.utils import save_image
num_epochs = 100
batch_size = 64
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
trainset = tv.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
dataloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=4)
class Autoencoder(nn.Module):
def __init__(self):
super(Autoencoder,self).__init__()
self.encoder = nn.Sequential(
# 28 x 28
nn.Conv2d(1, 4, kernel_size=5),
nn.Dropout2d(p=0.2),
# 4 x 24 x 24
nn.ReLU(True),
nn.Conv2d(4, 8, kernel_size=5),
nn.Dropout2d(p=0.2),
# 8 x 20 x 20 = 3200
nn.ReLU(True),
nn.Flatten(),
nn.Linear(3200, 10),
nn.ReLU(True),
# 10
nn.Softmax(),
# 10
)
self.decoder = nn.Sequential(
# 10
nn.Linear(10, 400),
nn.ReLU(True),
# 400
nn.Unflatten(1, (1, 20, 20)),
# 20 x 20
nn.Dropout2d(p=0.2),
nn.ConvTranspose2d(1, 10, kernel_size=5),
# 24 x 24
nn.ReLU(True),
nn.Dropout2d(p=0.2),
nn.ConvTranspose2d(10, 1, kernel_size=5),
# 28 x 28
nn.ReLU(True),
nn.Sigmoid(),
)
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
model = Autoencoder().cpu()
distance = nn.MSELoss()
#optimizer = torch.optim.Adam(model.parameters(), weight_decay=1e-5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
for epoch in range(num_epochs):
for data in dataloader:
img, _ = data
img = Variable(img).cpu()
output = model(img)
loss = distance(output, img)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('epoch [{}/{}], loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))
</code></pre>
<p>Already the training loss indicates that the thing is not working, but I also print out the confusion matrix (which in this case should not necessarily be the identity matrix, since the neurons can be ordered arbitrarily, but should be row/column-reorderable into an approximation of the identity, if this worked):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
confusion_matrix = np.zeros((10, 10))
batch_size = 20*1000
testset = tv.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
dataloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=True, num_workers=4)
for data in dataloader:
imgs, labels = data
imgs = Variable(imgs).cpu()
encs = model.encoder(imgs).detach().numpy()
for i in range(len(encs)):
predicted = np.argmax(encs[i])
actual = labels[i]
confusion_matrix[actual][predicted] += 1
print(confusion_matrix)
</code></pre> | 2021-03-17 06:22:52.777000+00:00 | 2021-03-19 01:09:50.773000+00:00 | 2021-03-17 09:53:51.540000+00:00 | python|pytorch|autoencoder|mnist | ['https://www.quora.com/Does-anyone-ever-use-a-softmax-layer-mid-neural-network-rather-than-at-the-end', 'https://arxiv.org/abs/1611.01144', 'https://arxiv.org/abs/1609.02200'] | 3 |
41,315,489 | <p>Stochastic Gradient Descent seems to require significant overparameterization in order to learn, here's one paper along those lines -- <a href="https://arxiv.org/abs/1301.3583" rel="nofollow noreferrer">"Big Neural Networks Waste Capacity"</a></p> | 2016-12-24 17:33:52.760000+00:00 | 2016-12-24 17:33:52.760000+00:00 | null | null | 41,314,819 | <p>I think I'm missing something obvious here but would love some help figuring this out. </p>
<p>Say I have a million words and want to embed them as part of my model.
With TF I can do an embedding lookup, though I need to provide a matrix of size [1m*space_size]. So for 50 dimensions that comes out to 50M trainable parameters.
On the other hand I can one-hot encode the million words with a vector of dimension 20. I can embed that into a space of dimension 50 with a [20*50] matrix, for 1K parameters. Much cheaper. Since the weights of this matrix are still trainable, I'd expect to learn something about the words, and if I need more capacity I can increase the size of the space. </p>
<p>That's the theory; in practice I tried it and the model didn't learn anything. So my question is: why?
Thanks</p> | 2016-12-24 16:08:11.587000+00:00 | 2016-12-26 19:19:35.393000+00:00 | null | tensorflow|deep-learning | ['https://arxiv.org/abs/1301.3583'] | 1 |
45,297,317 | <p>As of SUMO 0.29.0, acceleration is not one of the <a href="http://www.sumo.dlr.de/wiki/TraCI/Vehicle_Value_Retrieval" rel="nofollow noreferrer">variables exposed by the SUMO TraCI API of a vehicle</a> - primarily because it is not one of the state variables of the most common car following models.</p>
<p>You will need to compute acceleration yourself, by comparing the current speed of a vehicle to its speed before the last update.</p>
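<p>For illustration, here is a minimal sketch of that finite-difference idea using the TraCI Python client (the same value retrieval is available through Veins' C++ TraCI interface). The per-vehicle bookkeeping, the <code>step_length</code> argument and the function name are assumptions of this sketch, not part of any API, and it presumes a TraCI connection has already been started:</p>
<pre><code>import traci  # SUMO's TraCI Python client

prev_speed = {}  # vehicle id -> speed at the previous simulation step

def step_accelerations(step_length):
    """Estimate acceleration per vehicle with a backward difference over one step."""
    acc = {}
    for veh_id in traci.vehicle.getIDList():
        v_now = traci.vehicle.getSpeed(veh_id)   # current speed in m/s
        v_prev = prev_speed.get(veh_id, v_now)   # first sighting: assume no change
        acc[veh_id] = (v_now - v_prev) / step_length
        prev_speed[veh_id] = v_now
    return acc
</code></pre>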
<p>Note that there is more than one way of deriving acceleration from speed, depending on what you assume about the underlying process. For more details, there is <a href="http://arxiv.org/abs/1403.4881" rel="nofollow noreferrer">a 2015 paper by Treiber and Kanagaraj</a> that discusses this.</p> | 2017-07-25 08:01:40.243000+00:00 | 2017-07-25 08:01:40.243000+00:00 | null | null | 45,287,511 | <p>I am using <code>veins-4.5</code> <code>omnet++ 5</code> and <code>sumo 0.29.0</code>.
How can I access the <em>acceleration</em> of a vehicle in veins?</p>
<p>Thanks a lot.</p> | 2017-07-24 18:23:29.063000+00:00 | 2017-07-25 08:01:40.243000+00:00 | 2017-07-25 06:27:26.650000+00:00 | omnet++|veins | ['http://www.sumo.dlr.de/wiki/TraCI/Vehicle_Value_Retrieval', 'http://arxiv.org/abs/1403.4881'] | 2 |
55,811,323 | <p>This is a well-studied problem: <strong>Journey planning in public transportation networks</strong>.<br>
Your approach based on Bellman-Ford might become problematic and too expensive depending on the network, since you can't consider that a vertex has been 'visited', or that the shortest path to a vertex has already been computed, during the algorithm's execution.<br>
These concepts (of 'visited', or of 'the shortest') only apply to the single-objective shortest path problem. That is because, given a pair of vertices <code>u, v</code>, there is a potentially exponential number of interesting paths: you can't consider only the fastest or only the cheapest option. You have to keep in memory every path for which no other path is both cheaper AND faster, and this number of paths can quickly grow out of control if you start working on realistic networks (which can be pretty big: ~100k stops and millions of trips). </p>
<p>I suggest you read about the <strong>multi-objective shortest path problem</strong>, keeping in mind that the graph representing the network is usually also a time-dependent graph.<br>
It is worth reading <a href="http://www.filipyoo.com/multi-objectives-shortest-paths-algorithms-for-multi-transfer-flight-routes/" rel="nofollow noreferrer">this page</a> on multi-objective shortest paths to get an idea of the main techniques used in the field (the notion of a Pareto set, or Pareto frontier, is quite important to grasp for this problem), and especially sections 2 and 4 of <a href="https://arxiv.org/pdf/1504.05140.pdf" rel="nofollow noreferrer">this paper</a>, which describe the current state of the art for such techniques. </p>
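<p>To make the Pareto idea concrete, here is a small illustrative sketch (in Python, with labels of the form (cost, time); the function names are mine, and this is only the dominance test and pruning step that label-based approaches rely on, not one of the algorithms from the paper):</p>
<pre><code>def dominates(a, b):
    """Label a = (cost, time) dominates b if it is no worse on both criteria
    and the two labels are not identical."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def insert_label(labels, new):
    """Keep only Pareto-optimal labels for a vertex; returns (labels, was_kept)."""
    if any(dominates(old, new) for old in labels):
        return labels, False                          # dominated: discard it
    kept = [old for old in labels if not dominates(new, old)]
    kept.append(new)
    return kept, True
</code></pre>
<p>A multi-objective variant of your relaxation step then maintains such a label set per vertex instead of a single distance value.</p>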
<p>Despite seeming complex, most of them can run incredibly fast (hundreds of thousand times faster than a Dijkstra, and still much faster than any A*-based approach), and for some of them are not too hard to implement (for instance, the <a href="https://i11www.iti.kit.edu/extra/publications/dpsw-isftr-13.pdf" rel="nofollow noreferrer">CSA</a> is not too complex, and runs pretty fast, it can compute a simple query in a few miliseconds on a country sized network). </p> | 2019-04-23 12:33:19.597000+00:00 | 2019-04-23 12:33:19.597000+00:00 | null | null | 55,802,336 | <p>So suppose you are searching for a train ride. You would be interested in the price of the ride and also the amount of time the ride will take. Now suppose that you have a graph where each edge has a cost and a duration and you want to find the shortest duration path in the graph that doesn't go over a given maximum cost (There might be multiple edges between any two vertices).</p>
<p>This is the problem I have. I think the best way to approach this problem is to modify the bellman-ford algorithm.</p>
<p>This is what I have so far:</p>
<pre><code> // A struct to represent an Edge in graph
struct Edge
{
int source, dest, cost, duration;
};
// A struct to represent a connected, directed
// and weighted graph
struct Graph
{
int V, E;
struct Edge* edge;
};
// Creates Graph with V vertices and E edges
struct Graph* createGraph(int V, int E)
{
struct Graph* graph = new (struct Graph);
graph -> V = V;
graph -> E = E;
graph -> edge = new Edge[E];
return graph;
}
</code></pre>
<p>I have already filled the structs with all the information they need. Now I just need to "organize" the data based on cost. So i realize that for each vertex i need to store a list of paths that lead to it. For each edge i consider ill need to copy the paths from the first vertex list to the second vertex list (adding cost and distance). But how do I actually go about coding this, that the part I am stuck on. </p> | 2019-04-22 23:27:57.550000+00:00 | 2019-04-23 23:59:48.697000+00:00 | 2019-04-23 23:59:48.697000+00:00 | c++|algorithm|bellman-ford | ['http://www.filipyoo.com/multi-objectives-shortest-paths-algorithms-for-multi-transfer-flight-routes/', 'https://arxiv.org/pdf/1504.05140.pdf', 'https://i11www.iti.kit.edu/extra/publications/dpsw-isftr-13.pdf'] | 3 |
29,853,535 | <p>Yes, the <a href="http://eclipseclp.org" rel="noreferrer" title="ECLiPSe">ECLiPSe</a> system does this.</p>
<p>As you suggest, it takes into account a number of simple built-in predicates (such as <code>integer/1, =/2, !/0</code>) for indexing purposes. Your example then executes deterministically, without choicepoints, for all calls of <code>foo/2</code> with the first argument instantiated. Moreover, ECLiPSe would do this on any argument, not just the first.</p>
<p>You can find a little more detail in the paper <a href="http://arxiv.org/abs/1012.4240" rel="noreferrer" title="ECLiPSe TPLP">ECLiPSe - from LP to CLP</a>.</p>
<p>To answer your followup question: No extra VM features are necessary, the generated VM code looks like this:</p>
<pre><code>foo / 2:
switch_on_type a(1)
list: ref(L5)
structure: ref(L1)
bignum: ref(L7)
[]: ref(L4)
integer: ref(L7)
meta: ref(L0)
label(L0):
try 0 2 ref(L1)
retry 0 ref(L3)
trust 0 ref(L5)
label(L1):
get_structure a(1) h / 1 ref(L2)
write_value a(2)
ret
label(L2):
read_value a(2)
ret
label(L3):
get_nil a(1)
label(L4):
get_atom a(2) nil
ret
label(L5):
get_list a(1) ref(L6)
write_void 2
label(L6):
get_atom a(2) cons
ret
label(L7):
get_structure a(2) n / 1 ref(L8)
write_value a(1)
ret
label(L8):
read_value a(1)
ret
</code></pre> | 2015-04-24 17:11:33.927000+00:00 | 2015-04-25 17:04:36.530000+00:00 | 2015-04-25 17:04:36.530000+00:00 | null | 29,605,132 | <p>I want to know how smart first argument indexing is implemented on various Prolog implementations.</p>
<p>In particular, simple type-test goals like <code>integer/1</code> right after a clause "neck" <em>could</em> contribute to better indexing.
Consider:</p>
<pre><code>foo(h(X),X).
foo([],nil).
foo([_|_],cons).
foo(X,Y) :- integer(X), Y = n(X).
</code></pre>
<p>With this clause ordering I would like the goal <code>foo([],_)</code> to succeed <strong>without</strong> leaving any useless choicepoints.</p>
<p>Unfortunately, SWI Prolog does not figure it out:</p>
<pre><code>?- length(Xs,10),
maplist(=([]),Xs),
statistics(trailused,T1),
maplist(foo,Xs,Ys),
statistics(trailused,T2).
T1 = 5792,
T2 = 5968,
Xs = [[], [], [], [], [], [], [], [], [], []],
Ys = [nil, nil, nil, nil, nil, nil, nil, nil, nil, nil] ...
</code></pre>
<p>Do other Prolog implementations do better?</p> | 2015-04-13 12:18:26.433000+00:00 | 2019-01-27 16:03:04.750000+00:00 | 2015-04-13 17:12:52.820000+00:00 | indexing|prolog | ['http://eclipseclp.org', 'http://arxiv.org/abs/1012.4240'] | 2 |
20,955,187 | <p>An alternative approach is to use something like generative backpropagation. In this scenario, you train a neural network updating the weights AND the input values. The given values are used as the output values since you can compute an error value directly. This approach has been used in dimensionality reduction, matrix completion (missing value imputation) among other applications. For more information, see <a href="http://bioinformatics.oxfordjournals.org/cgi/reprint/21/20/3887" rel="nofollow">non-linear principal component analysis (NLPCA)</a> and <a href="http://arxiv.org/abs/1312.5394" rel="nofollow">unsupervised backpropagation (UBP)</a> which uses the idea of generative backpropagation. UBP extends NLPCA by introducing a pre-training stage. An implementation of UBP and NLPCA and unsupervised backpropagation can be found in the waffles machine learning toolkit. The documentation for UBP and NLPCA can be found using the nlpca command.</p> | 2014-01-06 17:02:27.733000+00:00 | 2014-01-06 17:02:27.733000+00:00 | null | null | 15,514,618 | <p>I'm having trouble with some of the concepts in machine learning through neural networks. One of them is <a href="http://en.wikipedia.org/wiki/Delta_Rule" rel="noreferrer">backpropagation</a>. In the weight updating equation, </p>
<pre><code>delta_w = a*(t - y)*g'(h)*x
</code></pre>
<p><code>t</code> is the "target output", which would be your class label, or something, in the case of supervised learning. But what would the "target output" be for unsupervised learning?</p>
<p>Can someone kindly provide an example of how you'd use BP in unsupervised learning, specifically for clustering of classification?</p>
<p>Thanks in advance.</p> | 2013-03-20 03:12:20.327000+00:00 | 2019-04-02 08:42:01.440000+00:00 | 2017-04-19 04:50:59.180000+00:00 | machine-learning|neural-network|unsupervised-learning | ['http://bioinformatics.oxfordjournals.org/cgi/reprint/21/20/3887', 'http://arxiv.org/abs/1312.5394'] | 2 |
15,514,709 | <p>The most common thing to do is train <a href="http://en.wikipedia.org/wiki/Autoencoder" rel="noreferrer">an autoencoder</a>, where the desired outputs are equal to the inputs. This makes the network try to learn a representation that best "compresses" the input distribution.</p>
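<p>As a small, hedged illustration of that setup (a made-up toy example in Keras, not tied to any particular paper): the only change compared to supervised training is that the input itself is passed as the target, so the usual backpropagation machinery applies unchanged:</p>
<pre><code>import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x = np.random.rand(1000, 50)                          # unlabeled data

model = Sequential([
    Dense(10, activation='relu', input_shape=(50,)),  # bottleneck "code"
    Dense(50, activation='sigmoid'),                  # reconstruction
])
model.compile(optimizer='adam', loss='mse')

# t (the "target output") is simply the input itself
model.fit(x, x, epochs=10, batch_size=32)
</code></pre>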
<p><a href="http://www.freepatentsonline.com/5590218.html" rel="noreferrer">Here's a patent</a> describing a different approach, where the output labels are assigned randomly and then sometimes flipped based on convergence rates. It seems weird to me, but okay.</p>
<p>I'm not familiar with other methods that use backpropogation for clustering or other unsupervised tasks. Clustering approaches with ANNs seem to use other algorithms (<a href="http://arxiv.org/pdf/cs/0608115.pdf" rel="noreferrer">example 1</a>, <a href="http://www.rimtengg.com/coit2007/proceedings/pdfs/40.pdf" rel="noreferrer">example 2</a>).</p> | 2013-03-20 03:22:39.767000+00:00 | 2013-03-20 03:22:39.767000+00:00 | null | null | 15,514,618 | <p>I'm having trouble with some of the concepts in machine learning through neural networks. One of them is <a href="http://en.wikipedia.org/wiki/Delta_Rule" rel="noreferrer">backpropagation</a>. In the weight updating equation, </p>
<pre><code>delta_w = a*(t - y)*g'(h)*x
</code></pre>
<p><code>t</code> is the "target output", which would be your class label, or something, in the case of supervised learning. But what would the "target output" be for unsupervised learning?</p>
<p>Can someone kindly provide an example of how you'd use BP in unsupervised learning, specifically for clustering of classification?</p>
<p>Thanks in advance.</p> | 2013-03-20 03:12:20.327000+00:00 | 2019-04-02 08:42:01.440000+00:00 | 2017-04-19 04:50:59.180000+00:00 | machine-learning|neural-network|unsupervised-learning | ['http://en.wikipedia.org/wiki/Autoencoder', 'http://www.freepatentsonline.com/5590218.html', 'http://arxiv.org/pdf/cs/0608115.pdf', 'http://www.rimtengg.com/coit2007/proceedings/pdfs/40.pdf'] | 4 |
47,311,604 | <p>The fundamental difficulty when it comes to adding new users in your system is that you need retraining to be able to give meaningful predictions to new users. Even if you were able to dynamically resize the embedding matrices, what values would you use for the parameters describing the new user?</p>
<p>Taking this into account, you have a couple of options.</p>
<ol>
<li>Save the weights of the graph, then create a new graph with adjusted dimensions and retrain it on data that includes information on the new user. As you say, this may be too costly to be in your critical path.</li>
<li>Use some sort of fold-in approach. For example, you could initialise the new user's embedding using the average of embeddings of users that have interacted with similar items (a minimal sketch of this idea follows this list).</li>
<li>Use a model that doesn't have this problem and that can incorporate new users in a more natural manner.</li>
</ol>
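<p>For the second option, a minimal fold-in sketch could look like the following (plain numpy; the variable names and dummy data are mine, and it approximates the new user from the embeddings of the items they touched, a closely related variant of averaging similar users):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
w_item = rng.normal(size=(1000, 5))   # stand-in for the trained item embeddings
liked_items = [3, 17, 42]             # items the new user has interacted with

def fold_in_new_user(w_item, liked_items):
    if len(liked_items) == 0:
        return np.zeros(w_item.shape[1])      # cold start: neutral embedding
    return w_item[liked_items].mean(axis=0)   # cheap approximation, no retraining

new_user_vec = fold_in_new_user(w_item, liked_items)
scores = w_item @ new_user_vec        # dot-product scores, ignoring the bias terms
</code></pre>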
<p>My recommendation would be the third option. There are classes of models that take the sequence (or set) of user interactions directly when making predictions, and do not rely on you declaring the number of users ahead of time. For example, you could use on of the following:</p>
<ol>
<li>The <a href="https://pdfs.semanticscholar.org/6c02/053805434162e0fed26e1d5e035eb1071249.pdf" rel="nofollow noreferrer">AutoRec</a> model: a simple autoencoder model that takes the set of items the user has interacted with as the input.</li>
<li><a href="https://arxiv.org/pdf/1511.06939.pdf" rel="nofollow noreferrer">Session-based Recommendations with Recurrent Neural Networks</a>: a recurrent model that takes as input the sequence of user interactions at prediction time.</li>
</ol>
<p>Both models naturally handle new users without changes to the computation graph; adding new items will require re-training.</p>
<p>One implementation for the first class of models is <a href="https://github.com/mesuvash/NNRec" rel="nofollow noreferrer">here</a>; for the second class, check out my recommender system package <a href="https://maciejkula.github.io/spotlight/sequence/implicit.html" rel="nofollow noreferrer">Spotlight</a>.</p> | 2017-11-15 15:43:49.173000+00:00 | 2017-11-15 15:43:49.173000+00:00 | null | null | 47,272,031 | <p>I have a TensorFlow recommendation system based off <a href="https://github.com/songgc/TF-recomm" rel="nofollow noreferrer"><code>TF-recomm</code></a>. Each user has <code>1+numFactors</code> numbers associated with her: a vector of <code>numFactors</code>, and an offset of a single number. Each task also has a bias and a vector of <code>numFactors</code> assigned. The TF-recomm code is</p>
<pre><code>def inference_svd(user_batch, item_batch, user_num, item_num, dim=5):
bias_global = tf.get_variable("bias_global", shape=[])
w_bias_user = tf.get_variable("embd_bias_user", shape=[user_num])
w_bias_item = tf.get_variable("embd_bias_item", shape=[item_num])
bias_user = tf.nn.embedding_lookup(w_bias_user, user_batch, name="bias_user")
bias_item = tf.nn.embedding_lookup(w_bias_item, item_batch, name="bias_item")
w_user = tf.get_variable("embd_user", shape=[user_num, dim], initializer=tf.truncated_normal_initializer(stddev=0.02))
w_item = tf.get_variable("embd_item", shape=[item_num, dim], initializer=tf.truncated_normal_initializer(stddev=0.02))
embd_user = tf.nn.embedding_lookup(w_user, user_batch, name="embedding_user")
embd_item = tf.nn.embedding_lookup(w_item, item_batch, name="embedding_item")
infer = tf.reduce_sum(tf.multiply(embd_user, embd_item), 1)
infer = tf.add(infer, bias_global)
infer = tf.add(infer, bias_user)
infer = tf.add(infer, bias_item, name="svd_inference")
regularizer = tf.add(tf.nn.l2_loss(embd_user), tf.nn.l2_loss(embd_item), name="svd_regularizer")
return infer, regularizer
</code></pre>
<p>I have been able to get this code to work, and have been able to link it up with a REST-API. </p>
<p>The problem that I encounter is when I get new users. I know what I want to do:</p>
<ul>
<li>Add a row to the <code>bias_user</code>, initialized to 0</li>
<li>Add a row to the <code>embd_user</code>, initialized to 0</li>
<li>When users rate new items, we use the same graph but <code>freeze</code> the weights on the items (which I can do with <code>var_list</code> on <code>optimizer.minimize</code>)</li>
</ul>
<p>However, the weights and biases have their shapes declared ahead of time. All the material I have seen on tensorflow (running or deploying) allows the weights to change, but doesn't seem to allow the network to grow.</p>
<p>If I implemented this in <code>numpy</code> I would simply add new rows to the appropriate matrices. There are a couple of ways of doing this, such as creating new graphs and variables, but it seems best to reuse the graph used to train the model in the first place (to ensure consistency). </p>
<p>I am looking for a system of "best practices" for dealing with changing the size of embedding tensors, especially for a system that is online where it will have to serve predictions quickly (which prevents expensive operations).</p> | 2017-11-13 19:26:54.860000+00:00 | 2017-11-15 15:43:49.173000+00:00 | null | python|tensorflow|recommendation-engine | ['https://pdfs.semanticscholar.org/6c02/053805434162e0fed26e1d5e035eb1071249.pdf', 'https://arxiv.org/pdf/1511.06939.pdf', 'https://github.com/mesuvash/NNRec', 'https://maciejkula.github.io/spotlight/sequence/implicit.html'] | 4 |
65,950,643 | <p>Here is a more efficient and more stable implementation. Assuming <code>zi</code> and <code>zj</code> are interlaced!</p>
<pre><code>class NT_Xent(tf.keras.layers.Layer):
""" Normalized temperature-scaled CrossEntropy loss [1]
[1] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” arXiv. 2020, Accessed: Jan. 15, 2021. [Online]. Available: https://github.com/google-research/simclr.
"""
def __init__(self, tau=1, **kwargs):
super().__init__(**kwargs)
self.tau = tau
self.similarity = tf.keras.losses.CosineSimilarity(axis=-1, reduction=tf.keras.losses.Reduction.NONE)
self.criterion = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
def get_config(self):
return {"tau": self.tau}
def call(self, zizj):
""" zizj is [B,N] tensor with order z_i1 z_j1 z_i2 z_j2 z_i3 z_j3 ...
batch_size is twice the original batch_size
"""
batch_size = tf.shape(zizj)[0]
mask = tf.repeat(tf.repeat(~tf.eye(batch_size/2, dtype=tf.bool), 2, axis=0), 2, axis=1)
sim = -1*self.similarity(tf.expand_dims(zizj, 1), tf.expand_dims(zizj, 0))/self.tau
sim_i_j = -1*self.similarity(zizj[0::2], zizj[1::2])/self.tau
pos = tf.reshape(tf.repeat(sim_i_j, repeats=2), (batch_size, -1))
neg = tf.reshape(sim[mask], (batch_size, -1))
logits = tf.concat((pos, neg), axis=-1)
labels = tf.one_hot(tf.zeros((batch_size,), dtype=tf.int32), depth=batch_size-1)
return self.criterion(labels, logits)
</code></pre>
<p>source: <a href="https://github.com/gabriel-vanzandycke/tf_layers" rel="nofollow noreferrer">https://github.com/gabriel-vanzandycke/tf_layers</a></p> | 2021-01-29 07:55:30.137000+00:00 | 2021-12-27 15:29:01.250000+00:00 | 2021-12-27 15:29:01.250000+00:00 | null | 62,793,043 | <p>As the title suggests, I'm trying train a model based on the SimCLR framework (seen in this paper: <a href="https://arxiv.org/pdf/2002.05709.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2002.05709.pdf</a> - the NT_Xent loss is stated in equation (1) and Algorithm 1).</p>
<p>I have managed to create a numpy version of the loss function, but this is not suitable to train the model on, as numpy arrays cannot store the required information for back propagation. I am having difficulty converting my numpy code over to Tensorflow. Here is my numpy version:</p>
<pre><code>import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
# Define the contrastive loss function, NT_Xent
def NT_Xent(zi, zj, tau=1):
""" Calculates the contrastive loss of the input data using NT_Xent. The
equation can be found in the paper: https://arxiv.org/pdf/2002.05709.pdf
Args:
zi: One half of the input data, shape = (batch_size, feature_1, feature_2, ..., feature_N)
zj: Other half of the input data, must have the same shape as zi
tau: Temperature parameter (a constant), default = 1.
Returns:
loss: The complete NT_Xent constrastive loss
"""
z = np.concatenate((zi, zj), 0)
loss = 0
for k in range(zi.shape[0]):
# Numerator (compare i,j & j,i)
i = k
j = k + zi.shape[0]
sim_ij = np.squeeze(cosine_similarity(z[i].reshape(1, -1), z[j].reshape(1, -1)))
sim_ji = np.squeeze(cosine_similarity(z[j].reshape(1, -1), z[i].reshape(1, -1)))
numerator_ij = np.exp(sim_ij / tau)
numerator_ji = np.exp(sim_ji / tau)
# Denominator (compare i & j to all samples apart from themselves)
sim_ik = np.squeeze(cosine_similarity(z[i].reshape(1, -1), z[np.arange(z.shape[0]) != i]))
sim_jk = np.squeeze(cosine_similarity(z[j].reshape(1, -1), z[np.arange(z.shape[0]) != j]))
denominator_ik = np.sum(np.exp(sim_ik / tau))
denominator_jk = np.sum(np.exp(sim_jk / tau))
# Calculate individual and combined losses
loss_ij = - np.log(numerator_ij / denominator_ik)
loss_ji = - np.log(numerator_ji / denominator_jk)
loss += loss_ij + loss_ji
# Divide by the total number of samples
loss /= z.shape[0]
return loss
</code></pre>
<p>I am fairly confident that this function produces the correct results (albeit slowly, as I have seen other implementations of it online that were vectorised versions - such as this one for Pytorch: <a href="https://github.com/Spijkervet/SimCLR/blob/master/modules/nt_xent.py" rel="nofollow noreferrer">https://github.com/Spijkervet/SimCLR/blob/master/modules/nt_xent.py</a> (my code produces the same result for identical inputs), but I do not see how their version is mathematically equivalent to the formula in the paper, hence why I am trying to build my own).</p>
<p>As a first try I have converted the numpy functions to their TF equivalents (tf.concat, tf.reshape, tf.math.exp, tf.range, etc.), but I believe my only/main problem is that sklearn's cosine_similarity function returns a numpy array, and I do not know how to build this function myself in Tensorflow. Any ideas?</p> | 2020-07-08 10:45:50.177000+00:00 | 2021-12-27 15:29:01.250000+00:00 | null | python|tensorflow|scikit-learn|backpropagation|cosine-similarity | ['https://github.com/gabriel-vanzandycke/tf_layers'] | 1 |
62,793,878 | <p>I managed to figure it out myself!
I did not realise there was a TensorFlow implementation of the cosine similarity function, "tf.keras.losses.CosineSimilarity".</p>
<p>Here is my code:</p>
<pre><code>import tensorflow as tf
# Define the contrastive loss function, NT_Xent (Tensorflow version)
def NT_Xent_tf(zi, zj, tau=1):
""" Calculates the contrastive loss of the input data using NT_Xent. The
equation can be found in the paper: https://arxiv.org/pdf/2002.05709.pdf
(This is the Tensorflow implementation of the standard numpy version found
in the NT_Xent function).
Args:
zi: One half of the input data, shape = (batch_size, feature_1, feature_2, ..., feature_N)
zj: Other half of the input data, must have the same shape as zi
tau: Temperature parameter (a constant), default = 1.
Returns:
loss: The complete NT_Xent constrastive loss
"""
z = tf.cast(tf.concat((zi, zj), 0), dtype=tf.float32)
loss = 0
for k in range(zi.shape[0]):
# Numerator (compare i,j & j,i)
i = k
j = k + zi.shape[0]
# Instantiate the cosine similarity loss function
cosine_sim = tf.keras.losses.CosineSimilarity(axis=-1, reduction=tf.keras.losses.Reduction.NONE)
sim = tf.squeeze(- cosine_sim(tf.reshape(z[i], (1, -1)), tf.reshape(z[j], (1, -1))))
numerator = tf.math.exp(sim / tau)
# Denominator (compare i & j to all samples apart from themselves)
sim_ik = - cosine_sim(tf.reshape(z[i], (1, -1)), z[tf.range(z.shape[0]) != i])
sim_jk = - cosine_sim(tf.reshape(z[j], (1, -1)), z[tf.range(z.shape[0]) != j])
denominator_ik = tf.reduce_sum(tf.math.exp(sim_ik / tau))
denominator_jk = tf.reduce_sum(tf.math.exp(sim_jk / tau))
# Calculate individual and combined losses
loss_ij = - tf.math.log(numerator / denominator_ik)
loss_ji = - tf.math.log(numerator / denominator_jk)
loss += loss_ij + loss_ji
# Divide by the total number of samples
loss /= z.shape[0]
return loss
</code></pre>
<p>As you can see, I have essentially just swapped out the numpy functions for the TF equivalents. One main point of note is that I had to use "reduction=tf.keras.losses.Reduction.NONE" within the "cosine_sim" function; this was to keep the shapes of "sim_ik" and "sim_jk" consistent, because otherwise the resulting loss did not match my original numpy implementation.</p>
<p>I also noticed that individually calculating the numerator for i,j and j,i was redundant as the answers were the same, so I have removed one instance of that calculation.</p>
<p>Of course if anybody has a quicker implementation I am more than happy to hear about it!</p> | 2020-07-08 11:33:39.013000+00:00 | 2020-07-08 11:33:39.013000+00:00 | null | null | 62,793,043 | <p>As the title suggests, I'm trying train a model based on the SimCLR framework (seen in this paper: <a href="https://arxiv.org/pdf/2002.05709.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2002.05709.pdf</a> - the NT_Xent loss is stated in equation (1) and Algorithm 1).</p>
<p>I have managed to create a numpy version of the loss function, but this is not suitable to train the model on, as numpy arrays cannot store the required information for back propagation. I am having difficulty converting my numpy code over to Tensorflow. Here is my numpy version:</p>
<pre><code>import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
# Define the contrastive loss function, NT_Xent
def NT_Xent(zi, zj, tau=1):
""" Calculates the contrastive loss of the input data using NT_Xent. The
equation can be found in the paper: https://arxiv.org/pdf/2002.05709.pdf
Args:
zi: One half of the input data, shape = (batch_size, feature_1, feature_2, ..., feature_N)
zj: Other half of the input data, must have the same shape as zi
tau: Temperature parameter (a constant), default = 1.
Returns:
loss: The complete NT_Xent constrastive loss
"""
z = np.concatenate((zi, zj), 0)
loss = 0
for k in range(zi.shape[0]):
# Numerator (compare i,j & j,i)
i = k
j = k + zi.shape[0]
sim_ij = np.squeeze(cosine_similarity(z[i].reshape(1, -1), z[j].reshape(1, -1)))
sim_ji = np.squeeze(cosine_similarity(z[j].reshape(1, -1), z[i].reshape(1, -1)))
numerator_ij = np.exp(sim_ij / tau)
numerator_ji = np.exp(sim_ji / tau)
# Denominator (compare i & j to all samples apart from themselves)
sim_ik = np.squeeze(cosine_similarity(z[i].reshape(1, -1), z[np.arange(z.shape[0]) != i]))
sim_jk = np.squeeze(cosine_similarity(z[j].reshape(1, -1), z[np.arange(z.shape[0]) != j]))
denominator_ik = np.sum(np.exp(sim_ik / tau))
denominator_jk = np.sum(np.exp(sim_jk / tau))
# Calculate individual and combined losses
loss_ij = - np.log(numerator_ij / denominator_ik)
loss_ji = - np.log(numerator_ji / denominator_jk)
loss += loss_ij + loss_ji
# Divide by the total number of samples
loss /= z.shape[0]
return loss
</code></pre>
<p>I am fairly confident that this function produces the correct results (albeit slowly, as I have seen other implementations of it online that were vectorised versions - such as this one for Pytorch: <a href="https://github.com/Spijkervet/SimCLR/blob/master/modules/nt_xent.py" rel="nofollow noreferrer">https://github.com/Spijkervet/SimCLR/blob/master/modules/nt_xent.py</a> (my code produces the same result for identical inputs), but I do not see how their version is mathematically equivalent to the formula in the paper, hence why I am trying to build my own).</p>
<p>As a first try I have converted the numpy functions to their TF equivalents (tf.concat, tf.reshape, tf.math.exp, tf.range, etc.), but I believe my only/main problem is that sklearn's cosine_similarity function returns a numpy array, and I do not know how to build this function myself in Tensorflow. Any ideas?</p> | 2020-07-08 10:45:50.177000+00:00 | 2021-12-27 15:29:01.250000+00:00 | null | python|tensorflow|scikit-learn|backpropagation|cosine-similarity | [] | 0 |
69,420,043 | <p>Hyperparameter tuning is typically done on the validation set of a train-val-test split, where each split will have something along the lines of 70%, 10%, and 20% of the entire dataset respectively. As a baseline, random search can be used while <a href="https://arxiv.org/abs/1206.2944" rel="nofollow noreferrer">Bayesian optimization with Gaussian processes</a> has been shown to be more compute efficient. <a href="https://scikit-optimize.github.io/stable/auto_examples/bayesian-optimization.html" rel="nofollow noreferrer">scikit-optimize</a> is a good package for this.</p> | 2021-10-02 20:29:58.507000+00:00 | 2021-10-02 20:29:58.507000+00:00 | null | null | 69,419,809 | <p>I almost finished my time series model, collected enough data and now I am stuck at hyperparameter optimization.</p>
<p>After lots of googling I found a new and good library called ultraopt, but the problem is how large a fragment of my total data (~150 GB) I should use for hyperparameter tuning. I also want to try lots of algorithms and combinations; is there any faster and easier way?</p>
<p>Or</p>
<p>Is there any math involved, something like,
mydata = 100%size</p>
<p>hyperparameter optimization with 5% of mydatasize,</p>
<p>optimized hyperparameter *or+ or something with left 95% of datasize #something like this</p>
<p>To get a similar result as full data used for optimization at a time. Is there any shortcut for these?</p>
<p>I am using Python 3.7,
CPU: AMD ryzen5 3400g,
GPU: AMD Vega 11,
RAM: 16 GB</p> | 2021-10-02 19:55:56.360000+00:00 | 2021-10-03 08:58:59.760000+00:00 | 2021-10-03 08:58:59.760000+00:00 | python|performance|machine-learning|large-data|hyperparameters | ['https://arxiv.org/abs/1206.2944', 'https://scikit-optimize.github.io/stable/auto_examples/bayesian-optimization.html'] | 2 |
69,420,445 | <p>A good python library for hyper-parameter tuning is <a href="https://arxiv.org/pdf/1603.06560.pdf" rel="nofollow noreferrer"><code>keras tuner</code></a>. You can utilize different tuners in this library, but for the large data, as you've mentioned, <a href="https://arxiv.org/pdf/1603.06560.pdf" rel="nofollow noreferrer"><code>Hyperband Optimization</code></a> can be state-of-the-art and appropriate one.</p> | 2021-10-02 21:37:08.380000+00:00 | 2021-10-02 21:37:08.380000+00:00 | null | null | 69,419,809 | <p>I almost finished my time series model, collected enough data and now I am stuck at hyperparameter optimization.</p>
<p>And after lots of googling I found new & good library called ultraopt, but problem is that how much amount of fragment of data should I use from my total data (~150 GB) for hyperparameter tuning. And I want to try lots of algorithm and combinations, is there any faster and easy way?</p>
<p>Or</p>
<p>Is there any math involved, something like,
mydata = 100%size</p>
<p>hyperparameter optimization with 5% of mydatasize,</p>
<p>optimized hyperparameter *or+ or something with left 95% of datasize #something like this</p>
<p>To get a similar result as full data used for optimization at a time. Is there any shortcut for these?</p>
<p>I am using Python 3.7,
CPU: AMD ryzen5 3400g,
GPU: AMD Vega 11,
RAM: 16 GB</p> | 2021-10-02 19:55:56.360000+00:00 | 2021-10-03 08:58:59.760000+00:00 | 2021-10-03 08:58:59.760000+00:00 | python|performance|machine-learning|large-data|hyperparameters | ['https://arxiv.org/pdf/1603.06560.pdf', 'https://arxiv.org/pdf/1603.06560.pdf'] | 2 |
64,588,418 | <p>Remember that fine-tuning a pre-trained model like Bert usually requires a much smaller number of epochs than models trained from scratch. In fact <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="nofollow noreferrer">the authors of Bert recommend between 2 and 4 epochs</a>. Further training often translates to overfitting to your data and forgetting the pre-trained weights (see <em>catastrophic forgetting</em>).</p>
<p>In my experience, this affects small datasets especially as it's easy to overfit on them, even at the 2nd epoch. Besides, you haven't commented on your custom layers on top of Bert, but adding much complexity there might increase overfitting also -- note that the common architecture for text classification only adds a linear transformation.</p> | 2020-10-29 09:37:58.480000+00:00 | 2020-10-29 09:37:58.480000+00:00 | null | null | 63,096,908 | <p>I'm training a classification model with custom layers on top of BERT. During this, the training performance of this model is going down with increasing epochs ( after the first epoch ) .. I'm not sure what to fix here - is it the model or the data?</p>
<p>( for the data it's binary labels, and balanced in the number of data points for each label).</p>
<p>Any quick pointers on what the problem could be? Has anyone come across this before?</p>
<p>Edit: Turns out there was a mismatch in the transformers library and tf version I was using. Once I fixed that, the training performance was fine!</p>
<p>Thanks!</p> | 2020-07-26 06:36:45.657000+00:00 | 2020-11-03 23:43:59.867000+00:00 | 2020-11-03 23:43:59.867000+00:00 | tensorflow|machine-learning|nlp|language-model | ['https://arxiv.org/pdf/1810.04805.pdf'] | 1 |
52,443,301 | <p>In the context of autoencoders the input and output of the model is the same. So, if the input values are in the range [0,1] then it is acceptable to use <code>sigmoid</code> as the activation function of last layer. Otherwise, you need to use an appropriate activation function for the last layer (e.g. <code>linear</code> which is the default one).</p>
<p>As for the loss function, it comes back to the values of input data again. If the input data are <s>only</s> between zeros and ones <s>(and not the values between them)</s>, then <code>binary_crossentropy</code> is acceptable as the loss function. Otherwise, you need to use other loss functions such as <code>'mse'</code> (i.e. mean squared error) or <code>'mae'</code> (i.e. mean absolute error). Note that in the case of input values in range <code>[0,1]</code> you can use <code>binary_crossentropy</code>, as it is usually used (e.g. <a href="https://blog.keras.io/building-autoencoders-in-keras.html" rel="noreferrer">Keras autoencoder tutorial</a> and <a href="https://arxiv.org/abs/1708.08487" rel="noreferrer">this paper</a>). However, don't expect that the loss value becomes zero since <code>binary_crossentropy</code> does not return zero when both prediction and label are not either zero or one (no matter they are equal or not). <a href="https://www.youtube.com/watch?v=xTU79Zs4XKY" rel="noreferrer">Here</a> is a video from <a href="http://www.dmi.usherb.ca/~larocheh/index_en.html" rel="noreferrer">Hugo Larochelle</a> where he explains the loss functions used in autoencoders (the part about using <code>binary_crossentropy</code> with inputs in range [0,1] starts at <a href="https://youtu.be/xTU79Zs4XKY?t=330" rel="noreferrer">5:30</a>)</p>
<p>Concretely, in your example, you are using the MNIST dataset. So by default the values of MNIST are integers in the range [0, 255]. Usually you need to normalize them first:</p>
<pre><code>trainX = trainX.astype('float32')
trainX /= 255.
</code></pre>
<p>Now the values would be in range [0,1]. So <code>sigmoid</code> can be used as the activation function and either of <code>binary_crossentropy</code> or <code>mse</code> as the loss function.</p>
<hr>
<p><strong>Why <code>binary_crossentropy</code> can be used even when the true label values (i.e. ground-truth) are in the range [0,1]?</strong></p>
<p>Note that we are trying to minimize the loss function in training. So if the loss function we have used reaches its minimum value (which may not necessarily be equal to zero) when the prediction is equal to the true label, then it is an acceptable choice. Let's verify this is the case for binary cross-entropy, which is defined as follows:</p>
<pre><code>bce_loss = -y*log(p) - (1-y)*log(1-p)
</code></pre>
<p>where <code>y</code> is the true label and <code>p</code> is the predicted value. Let's consider <code>y</code> as fixed and see what value of <code>p</code> minimizes this function: we need to take the derivative with respect to <code>p</code> (I have assumed the <code>log</code> is the natural logarithm function for simplicity of calculations):</p>
<pre><code>bce_loss_derivative = -y*(1/p) - (1-y)*(-1/(1-p)) = 0 =>
-y/p + (1-y)/(1-p) = 0 =>
-y*(1-p) + (1-y)*p = 0 =>
-y + y*p + p - y*p = 0 =>
p - y = 0 => y = p
</code></pre>
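<p>If you prefer to check this numerically rather than by calculus, a tiny sketch (plain numpy, with an arbitrary example value for <code>y</code>) shows the same thing:</p>
<pre><code>import numpy as np

y = 0.3                                   # a non-binary "true" value in [0,1]
p = np.linspace(0.001, 0.999, 999)        # candidate predictions
bce = -y*np.log(p) - (1-y)*np.log(1-p)
print(p[np.argmin(bce)])                  # ~0.3: the loss is minimized at p = y
print(bce.min())                          # > 0: the minimum is not zero
</code></pre>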
<p>As you can see binary cross-entropy have the minimum value when <code>y=p</code>, i.e. when the true label is equal to predicted label and this is exactly what we are looking for. </p> | 2018-09-21 11:58:45.940000+00:00 | 2019-06-25 07:22:23.607000+00:00 | 2019-06-25 07:22:23.607000+00:00 | null | 52,441,877 | <p>I wrote a vanilla autoencoder using only <code>Dense</code> layer.
Below is my code:</p>
<pre class="lang-py prettyprint-override"><code>iLayer = Input ((784,))
layer1 = Dense(128, activation='relu' ) (iLayer)
layer2 = Dense(64, activation='relu') (layer1)
layer3 = Dense(28, activation ='relu') (layer2)
layer4 = Dense(64, activation='relu') (layer3)
layer5 = Dense(128, activation='relu' ) (layer4)
layer6 = Dense(784, activation='softmax' ) (layer5)
model = Model (iLayer, layer6)
model.compile(loss='binary_crossentropy', optimizer='adam')
(trainX, trainY), (testX, testY) = mnist.load_data()
print ("shape of the trainX", trainX.shape)
trainX = trainX.reshape(trainX.shape[0], trainX.shape[1]* trainX.shape[2])
print ("shape of the trainX", trainX.shape)
model.fit (trainX, trainX, epochs=5, batch_size=100)
</code></pre>
<h2>Questions:</h2>
<p>1) <code>softmax</code> provides probability distribution. Understood. This means, I would have a vector of 784 values with probability between 0 and 1. For example [ 0.02, 0.03..... upto 784 items], summing all 784 elements provides 1. </p>
<p>2) I don't understand how the binary crossentropy works with these values. Binary cross entropy is for two values of output, right?</p> | 2018-09-21 10:35:59.863000+00:00 | 2019-06-25 07:22:23.607000+00:00 | 2018-09-21 12:37:43.647000+00:00 | machine-learning|neural-network|keras|autoencoder|cross-entropy | ['https://blog.keras.io/building-autoencoders-in-keras.html', 'https://arxiv.org/abs/1708.08487', 'https://www.youtube.com/watch?v=xTU79Zs4XKY', 'http://www.dmi.usherb.ca/~larocheh/index_en.html', 'https://youtu.be/xTU79Zs4XKY?t=330'] | 5 |
44,936,598 | <p>From what I read, I'd be surprised if they're using neural networks. Here's how they say they detect anomalies:</p>
<blockquote>
<p>Detect outliers in a population by building a profile of a “typical” user or machine to know when one starts to stray from the pack.</p>
</blockquote>
<p>Doing anomaly detection like that requires nothing more than a statistical test of whether or not observed behavior is within 2-3 standard deviations of the expected behavior.</p>
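<p>For illustration, a bare-bones version of that test (plain numpy, with an arbitrary 3-sigma threshold and made-up data; the profile-building and windowing that a real system needs are left out) could look like this:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
history = rng.normal(loc=100.0, scale=5.0, size=1000)   # "typical" behaviour profile
new_value = 130.0                                        # latest observation

mu, sigma = history.mean(), history.std()
z = abs(new_value - mu) / sigma
is_anomaly = z > 3      # flag anything more than 3 standard deviations away
print(z, is_anomaly)
</code></pre>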
<p>If you want to use neural networks for some reason, you could go with CNNs, RNNs, or attention-only networks. <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">Google recently showed that you don't need RNNs or CNNs to do state-of-the-art translation</a>.</p> | 2017-07-05 21:53:38.927000+00:00 | 2017-07-05 21:53:38.927000+00:00 | null | null | 44,935,473 | <p>I'm very impressed from the new <a href="https://www.elastic.co/products/x-pack/machine-learning" rel="nofollow noreferrer">x-pack</a> ML of the elastic stack. It seems their technique learns data patterns over time and can predict anomalies in multiple domains.</p>
<p><a href="https://i.stack.imgur.com/TrGx5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TrGx5.png" alt="enter image description here"></a></p>
<p>Zoomed in:
<a href="https://i.stack.imgur.com/CfBdJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CfBdJ.png" alt="enter image description here"></a></p>
<p>I was wondering what approach and network topology could be used, in order to create a similar feature. Is it fair to assume that since x-pack works on time series data, RNN would be a good start?</p>
<p>Interested in your opinions and references.</p> | 2017-07-05 20:30:32.270000+00:00 | 2018-07-23 16:14:58.170000+00:00 | 2017-07-06 06:39:31.113000+00:00 | elasticsearch|machine-learning|anomaly-detection|rnn | ['https://arxiv.org/abs/1706.03762'] | 1 |
49,606,866 | <blockquote>
<ol>
<li>Use Convolution2D layers and LSTM layer</li>
</ol>
</blockquote>
<p>In this technique, you stack convolution and LSTM layers. The convolutional layers help you to learn the spatial features and the LSTM helps you learn the correlation in time.</p>
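<p>A minimal sketch of that stacking in Keras (the shapes and layer sizes are made up for illustration):</p>
<pre><code>from keras.models import Sequential
from keras.layers import TimeDistributed, Conv2D, MaxPooling2D, Flatten, LSTM, Dense

model = Sequential()
# apply the same CNN to every frame of the (time, height, width, channels) input
model.add(TimeDistributed(Conv2D(16, (3, 3), activation='relu'),
                          input_shape=(10, 64, 64, 1)))
model.add(TimeDistributed(MaxPooling2D((2, 2))))
model.add(TimeDistributed(Flatten()))
# the LSTM then learns the correlation over the 10 time steps
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
</code></pre>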
<blockquote>
<p>2. Use ConvLSTM2D</p>
</blockquote>
<p>ConvLSTM is an LSTM in which the gates (the input-to-state and state-to-state transitions) are convolution operations.<br>
Research paper: <a href="https://arxiv.org/abs/1506.04214" rel="nofollow noreferrer">Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting</a> </p>
<p><a href="https://stackoverflow.com/questions/49468918/appplication-of-convlstm2d-layers/49472074#49472074">More about ConvLSTM in this SO answer</a></p> | 2018-04-02 07:11:14.890000+00:00 | 2018-04-11 08:01:36.633000+00:00 | 2018-04-11 08:01:36.633000+00:00 | null | 49,603,498 | <p>Are <code>1</code> and <code>2</code> the same?</p>
<ol>
<li>Use <code>Convolution2D</code> layers and <code>LSTM</code> layers </li>
<li>Use <code>ConvLSTM2D</code></li>
</ol>
<p>If there is any difference, could you explain it for me?</p> | 2018-04-01 23:14:53.400000+00:00 | 2018-04-13 16:40:43.003000+00:00 | 2018-04-13 16:40:43.003000+00:00 | python|tensorflow|keras | ['https://arxiv.org/abs/1506.04214', 'https://stackoverflow.com/questions/49468918/appplication-of-convlstm2d-layers/49472074#49472074'] | 2 |
49,770,553 | <p>They are not exactly the same, here is why:</p>
<h3>1. Use <code>Convolution2D</code> layers and <code>LSTM</code> layers</h3>
<p>As is well known, <code>Convolution2D</code> layers serve well for capturing image or spatial features, whilst <code>LSTM</code> layers are used to detect correlations over time. However, by stacking these kinds of layers, the correlation between spatial and temporal features may not be captured properly.</p>
<h3>2. Use <code>ConvLSTM2D</code></h3>
<p>To solve this, <a href="https://arxiv.org/abs/1506.04214v1" rel="noreferrer">Xingjian Shi et al.</a> proposed a network structure able to capture spatiotemporal correlations, namely <code>ConvLSTM</code>. In Keras, this is reflected in the <a href="https://keras.io/layers/recurrent/#convlstm2d" rel="noreferrer"><code>ConvLSTM2D</code></a> class, which computes convolutional operations in both the input and the recurrent transformations.</p>
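<p>Usage-wise, the layer takes 5D input of shape (samples, time, rows, cols, channels); a tiny, made-up example (layer sizes are arbitrary):</p>
<pre><code>from keras.models import Sequential
from keras.layers import ConvLSTM2D, BatchNormalization, Conv2D

model = Sequential()
model.add(ConvLSTM2D(filters=16, kernel_size=(3, 3), padding='same',
                     return_sequences=False,
                     input_shape=(10, 64, 64, 1)))   # (time, rows, cols, channels)
model.add(BatchNormalization())
model.add(Conv2D(filters=1, kernel_size=(3, 3), padding='same', activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
</code></pre>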
<h3>Keras code</h3>
<p>To illustrate this, you can see the <code>LSTM</code> code <a href="https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py" rel="noreferrer">here</a>; if you go to the <code>call</code> method of <code>LSTMCell</code>, you'd only see:</p>
<pre class="lang-python prettyprint-override"><code> x_i = K.dot(inputs_i, self.kernel_i)
x_f = K.dot(inputs_f, self.kernel_f)
x_c = K.dot(inputs_c, self.kernel_c)
x_o = K.dot(inputs_o, self.kernel_o)
</code></pre>
<p>Instead, the <a href="https://github.com/keras-team/keras/blob/master/keras/layers/convolutional_recurrent.py" rel="noreferrer"><code>ConvLSTM2DCell</code></a> class calls:</p>
<pre class="lang-python prettyprint-override"><code> x_i = self.input_conv(inputs_i, self.kernel_i, self.bias_i, padding=self.padding)
x_f = self.input_conv(inputs_f, self.kernel_f, self.bias_f, padding=self.padding)
x_c = self.input_conv(inputs_c, self.kernel_c, self.bias_c, padding=self.padding)
x_o = self.input_conv(inputs_o, self.kernel_o, self.bias_o, padding=self.padding)
h_i = self.recurrent_conv(h_tm1_i, self.recurrent_kernel_i)
h_f = self.recurrent_conv(h_tm1_f, self.recurrent_kernel_f)
h_c = self.recurrent_conv(h_tm1_c, self.recurrent_kernel_c)
h_o = self.recurrent_conv(h_tm1_o, self.recurrent_kernel_o)
</code></pre>
<p>Where:</p>
<pre class="lang-python prettyprint-override"><code>def input_conv(self, x, w, b=None, padding='valid'):
conv_out = K.conv2d(x, w, strides=self.strides,
padding=padding,
data_format=self.data_format,
dilation_rate=self.dilation_rate)
if b is not None:
conv_out = K.bias_add(conv_out, b,
data_format=self.data_format)
return conv_out
def recurrent_conv(self, x, w):
conv_out = K.conv2d(x, w, strides=(1, 1),
padding='same',
data_format=self.data_format)
return conv_out
</code></pre>
<p>In <code>LSTM</code>, the equivalent for <code>h_x</code> (recurrent transformations) would be:</p>
<pre class="lang-python prettyprint-override"><code>K.dot(h_tm1_x, self.recurrent_kernel_x)
</code></pre>
<p>Instead of <code>ConvLSTM2D</code>'s:</p>
<pre class="lang-python prettyprint-override"><code>self.recurrent_conv(h_tm1_x, self.recurrent_kernel_x)
</code></pre>
<p>These kind of transformations could not be computed with stacked <code>Conv2D</code> and <code>LSTM</code> layers.</p> | 2018-04-11 08:46:57.363000+00:00 | 2018-04-11 08:46:57.363000+00:00 | null | null | 49,603,498 | <p>Are <code>1</code> and <code>2</code> the same?</p>
<ol>
<li>Use <code>Convolution2D</code> layers and <code>LSTM</code> layers </li>
<li>Use <code>ConvLSTM2D</code></li>
</ol>
<p>If there is any difference, could you explain it for me?</p> | 2018-04-01 23:14:53.400000+00:00 | 2018-04-13 16:40:43.003000+00:00 | 2018-04-13 16:40:43.003000+00:00 | python|tensorflow|keras | ['https://arxiv.org/abs/1506.04214v1', 'https://keras.io/layers/recurrent/#convlstm2d', 'https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py', 'https://github.com/keras-team/keras/blob/master/keras/layers/convolutional_recurrent.py'] | 4 |
65,766,155 | <p>As Ivan already noted you have a class imbalance problem. This can be resolved via:</p>
<ol>
<li><p><strong>Online hard negative mining:</strong> at each iteration after computing the loss, you can sort all elements in the batch belonging to the "no DR" class and keep only the worst <code>k</code>. Then you estimate the gradient using only these worst <code>k</code> and <em>discard</em> all the rest (a minimal sketch of this is given after the list).<br />
see, e.g.:<br />
<em>Abhinav Shrivastava, Abhinav Gupta and Ross Girshick</em> <a href="https://arxiv.org/abs/1604.03540" rel="nofollow noreferrer"><strong>Training Region-based Object Detectors with Online Hard Example Mining</strong></a> (CVPR 2016)</p>
</li>
<li><p><a href="https://stackoverflow.com/a/52161194/1714410"><strong>Focal loss:</strong></a> a modification for the "vanilla" cross entropy loss can be used to tackle class imbalance.</p>
</li>
</ol>
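<p>A minimal sketch of the hard-example-mining option in PyTorch (the helper name and the value of <code>k</code> are mine; for simplicity it mines over the whole batch, and restricting it to the majority class is a straightforward extension):</p>
<pre><code>import torch
import torch.nn.functional as F

def ohem_loss(logits, targets, k):
    """Cross entropy averaged over only the k hardest samples in the batch."""
    per_sample = F.cross_entropy(logits, targets, reduction='none')
    hardest, _ = torch.topk(per_sample, k)   # k must not exceed the batch size
    return hardest.mean()

# inside the training loop:
# loss = ohem_loss(model(images), labels, k=16)
# loss.backward()
</code></pre>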
<hr />
<p>Related posts <a href="https://stackoverflow.com/a/58213245/1714410">this</a> and <a href="https://stackoverflow.com/a/64365532/1714410">this</a>.</p> | 2021-01-17 21:30:06.740000+00:00 | 2021-01-18 07:31:56.007000+00:00 | 2021-01-18 07:31:56.007000+00:00 | null | 65,762,961 | <p>I am relatively new to the deep learning landscape, so please don't be as mean as Reddit! It seems like a general question so I won't be giving my code here as it doesn't seem necessary (if it is, here's the link to <a href="https://colab.research.google.com/drive/1x6DOqow1dnZvy29_UZ3nHAKZLaNKr9hM?usp=sharing" rel="nofollow noreferrer">colab</a>)</p>
<p>A bit about the data: You can find the original data <a href="https://www.kaggle.com/tanlikesmath/diabetic-retinopathy-resized" rel="nofollow noreferrer">here</a>. It is a downsized version of the original dataset of 82 GB.</p>
<p>Once I trained my CNN on this, it predicts 'No Diabetic Retinopathy' (No DR) every single time, leading to an accuracy of 73%. Is the reason for this is just the vast amount of No DR images or something else? I have no idea! The 5 classes I have for prediction are <code>["Mild", "Moderate", "No DR", "Proliferative DR", "Severe"]</code>.</p>
<p>It's probably just bad code, was hoping you guys could help</p> | 2021-01-17 16:12:12.567000+00:00 | 2022-08-05 16:05:04.970000+00:00 | 2021-01-18 00:28:51.830000+00:00 | python|deep-learning|pytorch|conv-neural-network | ['https://arxiv.org/abs/1604.03540', 'https://stackoverflow.com/a/52161194/1714410', 'https://stackoverflow.com/a/58213245/1714410', 'https://stackoverflow.com/a/64365532/1714410'] | 4 |
29,876,936 | <p>Solved it by giving absolute path. I was trying all combinations of paths and also gave absolute path /var/www/arxiv/static/data/name.json and it worked.</p> | 2015-04-26 11:18:38.003000+00:00 | 2015-04-26 11:18:38.003000+00:00 | null | null | 29,876,315 | <p>I am using f = open('name.json','w+') to create a new file and write to it. But i am unable to create the file. Apache server logs show "No such file exists."</p> | 2015-04-26 10:10:09.927000+00:00 | 2015-04-26 11:18:38.003000+00:00 | 2015-04-26 10:31:43.320000+00:00 | python|apache|flask | [] | 0 |
39,249,179 | <p>You will need backtracking, because it is possible to add numbers to the Sudoku board which don't violate any rules immediately, but which will lead to a contradiction later on. If you take any unique-solution Sudoku problem and arbitrarily place any number anywhere, you are bound to experience just this.</p>
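<p>To show the backtracking idea itself (sketched here in Python rather than the JavaScript of the question; the structure carries over directly), a full-board generator can try shuffled candidates per cell and undo a placement whenever it leads to a dead end:</p>
<pre><code>import random

def valid(board, r, c, v):
    if v in board[r]: return False
    if any(board[i][c] == v for i in range(9)): return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != v for i in range(3) for j in range(3))

def fill(board, pos=0):
    if pos == 81:
        return True
    r, c = divmod(pos, 9)
    candidates = list(range(1, 10))
    random.shuffle(candidates)
    for v in candidates:
        if valid(board, r, c, v):
            board[r][c] = v
            if fill(board, pos + 1):
                return True
            board[r][c] = 0      # backtrack: undo and try the next candidate
    return False

board = [[0] * 9 for _ in range(9)]
fill(board)                      # board is now a complete random Sudoku solution
</code></pre>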
<p>I suggest you investigate the <a href="https://arxiv.org/abs/cs/0011047" rel="nofollow noreferrer">Dancing Links</a> algorithm. You can easily formulate Sudoku as a set cover problem, and that algorithm can find a solution if there exists one. For the completely empty board, there has to be a solution. Randomize the matrix if you want to obtain a random result.</p>
<p>Also investigate <a href="https://stackoverflow.com/questions/tagged/sudoku?sort=votes">all the other sudoku-tagged questions</a>, since you are not the first trying to generate such boards, and translating from one language to another doesn't really change the game that much.</p> | 2016-08-31 12:04:53.170000+00:00 | 2016-08-31 12:04:53.170000+00:00 | 2017-05-23 11:47:14.977000+00:00 | null | 39,246,124 | <p>I am trying to to generate a Sudoku board using this script:</p>
<p>The problem is that I don't know how to validate to generate unique numbers on columns and squares.</p>
<p>Actually, it is just validating and generating unique numbers on rows.</p>
<p>Here is the code:
<div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>function generate(count, values) {
return Array.apply(null, { length: count }).map(function () {
var r = [],
array = values.slice();
while (array.length) {
r.push(array.splice(Math.floor(Math.random() * array.length), 1)[0]);
}
return r;
});
};
var myStringArray = generate(9, [1, 2, 3, 4, 5, 6, 7, 8, 9]);
Array.from(document.getElementsByClassName('cell')).forEach(function(e, i){
var y = Math.floor(i/myStringArray.length);
var x = i % myStringArray.length;
e.textContent = myStringArray[y][x];
});</code></pre>
<pre class="snippet-code-css lang-css prettyprint-override"><code>.container{
min-height: 100vh;
width: 100%;
display: flex;
align-items : center;
justify-content : center;
margin-bottom: 0;
}
.table {
display:table;
border: 2px solid #444;
border-collapse: collapse;
position: relative;
}
.row {
display:table-row;
position: relative;
z-index:-1;
}
.cell {
display:table-cell;
padding:8px;
border: 1px solid #ff0000;
text-align: center;
}
.cell:nth-child(3), .cell:nth-child(6){border-right: 5px solid #555; } /*vertical*/
.row:nth-child(3) .cell, .row:nth-child(6) .cell{border-bottom: 5px solid #555;} /*horizontal*/
.header {
text-align:center;
position: relative;
}
.header {
counter-increment: colno;
}
.header::before {
content: counter(colno);
position: absolute;
top: -30px;
font-weight:bold;
color: #777;
}
.row:nth-child(n+1) {
counter-increment: rowno;
}
.row:nth-child(n+1)::before{
content: counter(rowno);
position: absolute;
left: -30px;
top:10px;
font-weight:bold;
color: #777;
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><div class="container">
<div class="table">
<div id="mytab1" class="row">
<span class="cell header">***</span>
<span class="cell header">***</span>
<span class="cell header">***</span>
<span class="cell header">***</span>
<span class="cell header">***</span>
<span class="cell header">***</span>
<span class="cell header">***</span>
<span class="cell header">***</span>
<span class="cell header">***</span>
</div>
<div class="row">
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
</div>
<div class="row">
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
</div>
<div class="row">
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
</div>
<div class="row">
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
</div>
<div class="row">
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
</div>
<div class="row">
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
</div>
<div class="row">
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
</div>
<div class="row">
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
<span class="cell">***</span>
</div>
</div>
</div></code></pre>
</div>
</div>
</p>
<p>Please visit fiddle <strong><a href="https://jsfiddle.net/707bshLy/12/" rel="nofollow noreferrer">here</a></strong></p>
<p>Appreciate your help,
Thanks</p> | 2016-08-31 09:43:43.907000+00:00 | 2018-08-16 16:38:54.950000+00:00 | 2018-08-16 16:38:54.950000+00:00 | javascript|arrays|validation|math|sudoku | ['https://arxiv.org/abs/cs/0011047', 'https://stackoverflow.com/questions/tagged/sudoku?sort=votes'] | 2 |
4,635,560 | <p>Asking the question in such a general way does not permit very specific advice to be given.</p>
<p>I'd begin the analysis by looking for ways to evaluate or rewrite the function using groups of variables that interact closely, creating intermediate expressions that can be used to make the final evaluation. You may find a way to do this involving a hierarchy of subexpressions that leads from the variables themselves to the final function.</p>
<p>In general the shorter and wider such an evaluation tree is, the greater the degree of parallelism. There are two cautionary notes to keep in mind that detract from "more parallelism is better."</p>
<p>For one thing a highly parallel approach may actually involve more total computation than your original "serial" approach. In fact some loss of efficiency in this regard is to be expected, since a serial approach can take advantage of all prior subexpression evaluations and maximize their reuse.</p>
<p>For another thing the parallel evaluation will often have worse rounding/accuracy behavior than a serial evaluation chosen to give good or optimal error estimates. </p>
<p>A lot of work has been done on evaluations that involve matrices, where there is usually a lot of symmetry to how the function value depends on its arguments. So it helps to be familiar with numerical linear algebra and parallel algorithms that have been developed there. </p>
<p>Another area where a lot is known is for multivariate polynomial and rational functions. </p>
<p>When the function is transcendental, one might hope for some transformations or refactoring that makes the dependence more tractable (algebraic). </p>
<p>Not directly relevant to your question are algorithms that amortize the cost of computing function values across a number of arguments. For example in computing solutions to ordinary differential equations, there may be "multi-step" methods that share the cost of evaluating derivatives at intermediate points by reusing those values several times.</p>
<p>I'd suggest that your concern to speed up the evaluation of the function suggests that you plan to perform more than one evaluation. So you might think about ways to take advantage of prior evaluations or perform evaluations at related arguments in a way that contributes to your search for parallelism.</p>
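<p>As a toy illustration of grouping variables into independent subexpressions (the functions g1, g2 and g3 below are placeholders, not derived from your actual function):</p>
<pre><code># Toy sketch: f is rewritten as a combination of subexpressions that do not
# depend on each other, so they can be evaluated concurrently and then combined.
import math
from concurrent.futures import ProcessPoolExecutor

def g1(x, y):   # expensive subexpression over (x, y)
    return math.sin(x) * math.cos(y)

def g2(y, z):   # expensive subexpression over (y, z)
    return math.exp(-y * z)

def g3(x, z):   # expensive subexpression over (x, z)
    return math.sqrt(x * x + z * z)

def f(x, y, z):
    # The three subexpressions are independent, so evaluate them in parallel
    # and combine the results serially at the end.
    with ProcessPoolExecutor() as pool:
        a = pool.submit(g1, x, y)
        b = pool.submit(g2, y, z)
        c = pool.submit(g3, x, z)
        return a.result() + b.result() * c.result()

if __name__ == "__main__":
    print(f(1.0, 2.0, 3.0))
</code></pre>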
<p><strong>Added:</strong> Some links and discussion of search strategy</p>
<p>Most authors use the phrase "parallel function evaluation" to
mean evaluating the same function at multiple argument points.</p>
<p>See for example:</p>
<p>[Coarse Grained Parallel Function Evaluation -- Rulon and Youssef]<br>
<a href="http://cdsweb.cern.ch/record/401028/files/p837.pdf" rel="nofollow">http://cdsweb.cern.ch/record/401028/files/p837.pdf</a></p>
<p>A search strategy to find the kind of material Gaurav Kalra asks
about should try to avoid those. For example, we might include
"fine-grained" in our search terms.</p>
<p>It's also effective to focus on specific kinds of functions, e.g.
"polynomial evaluation" rather than "function evaluation".</p>
<p>Here for example we have a treatment of some well-known techniques
for "fast" evaluations applied to design for GPU-based computation:</p>
<p>[How to obtain efficient GPU kernels -- Cruz, Layton, and Barba]<br>
<a href="http://arxiv.org/PS_cache/arxiv/pdf/1009/1009.3457v1.pdf" rel="nofollow">http://arxiv.org/PS_cache/arxiv/pdf/1009/1009.3457v1.pdf</a></p>
<p>(from their Abstract) "Here, we have tackled fast summation
algorithms (fast multipole method and fast Gauss transform),
and applied algorithmic redesign for attaining performance on
GPUs. The progression of performance improvements attained
illustrates the exercise of formulating algorithms for the
massively parallel architecture of the GPU."</p>
<p>Another search term that might be worth excluding is "pipelined".
This term invariably discusses the sort of parallelism that can
be used when multiple function evaluations are to be done. Early
stages of the computation can be done in parallel with later
stages, but on different inputs.</p>
<p>So that's a search term that one might want to exclude. Or not.</p>
<p>Here's a paper that discusses n-fold speedup for n-variate
polynomial evaluation over finite fields GF(p). This might be
of direct interest for cryptographic applications, but the
approach via modified Horner's method may be interesting for
its potential for generalization:</p>
<p>[Comparison of Bit and Word Level Algorithms for Evaluating
Unstructured Functions over Finite Rings -- Sunar and Cyganski]<br>
<a href="http://www.iacr.org/archive/ches2005/018.pdf" rel="nofollow">http://www.iacr.org/archive/ches2005/018.pdf</a></p>
<p>"We present a modification to Horner’s algorithm for evaluating
arbitrary n-variate functions defined over finite rings and fields.
... If the domain is a finite field GF(p) the complexity of
multivariate Horner polynomial evaluation is improved from O(p^n)
to O((p^n)/(2n)). We prove the optimality of the presented algorithm."</p>
<p>Multivariate rational functions can be considered simply as the
ratio of two such polynomial functions. For the special case
of univariate rational functions, which can be particularly
effective in approximating elementary transcendental functions
and others, can be evaluated via finite (resp. truncated)
continued fractions, whose convergents (partial numerators
and denominators) can be defined recursively.</p>
<p>The topic of continued fraction evaluations allows us to segue
to a final link that connects that topic with some familiar
parallelism of numerical linear algebra:</p>
<p>[LU Factorization and Parallel Evaluation of Continued Fractions
-- Ömer Egecioglu]<br>
<a href="http://www.cs.ucsb.edu/~omer/DOWNLOADABLE/lu-cf98.pdf" rel="nofollow">http://www.cs.ucsb.edu/~omer/DOWNLOADABLE/lu-cf98.pdf</a></p>
<p>"The first n convergents of a general continued fraction
(CF) can be computed optimally in logarithmic parallel
time using O(n/log(n))processors."</p> | 2011-01-08 18:54:26.003000+00:00 | 2011-01-09 09:22:47.240000+00:00 | 2011-01-09 09:22:47.240000+00:00 | null | 4,635,254 | <p>The question may seem vague, but let me explain it.</p>
<p>Suppose we have a function f(x,y,z ....) and we need to find its value at the point (x1,y1,z1 .....).</p>
<p>The most trivial approach is to just replace (x,y,z ...) with (x1,y1,z1 .....).</p>
<p>Now suppose that the function is taking a lot of time in evaluation and I want to parallelize the algorithm to evaluate it. Obviously it will depend on the nature of function, too.</p>
<p>So my question is: what are the constraints that I have to look for while "thinking" to parallelize f(x,y,z...)?</p>
<p>If possible, please share links to study.</p> | 2011-01-08 17:48:44.687000+00:00 | 2011-01-09 09:22:47.240000+00:00 | 2011-01-09 00:52:02.030000+00:00 | algorithm|math|parallel-processing | ['http://cdsweb.cern.ch/record/401028/files/p837.pdf', 'http://arxiv.org/PS_cache/arxiv/pdf/1009/1009.3457v1.pdf', 'http://www.iacr.org/archive/ches2005/018.pdf', 'http://www.cs.ucsb.edu/~omer/DOWNLOADABLE/lu-cf98.pdf'] | 4 |
56,115,600 | <p>You can read the official paper here <a href="https://arxiv.org/pdf/1412.6980.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1412.6980.pdf</a></p>
<p>Your update looks somewhat like this (for brevity's sake, I have omitted the warm-up phase):</p>
<p><code>new_theta = old_theta-learning_rate*momentum/(velocity+eps)
</code></p>
<p>The intuition here is that if <code>momentum</code>><code>velocity</code>, then the optimizer is in a plateau, so the <code>learning_rate</code> is increased because <code>momentum/velocity > 1</code>. On the other hand, if <code>momentum</code><<code>velocity</code>, then the optimizer is on a steep slope or in a noisy region, so the <code>learning_rate</code> is decreased.</p>
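<p>For reference, a minimal NumPy sketch of one bias-corrected Adam update in the notation of the paper (variable names and hyper-parameter values are only illustrative):</p>
<pre><code>import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # first moment ("momentum")
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment ("velocity")
    m_hat = m / (1 - beta1 ** t)                # bias correction, t starts at 1
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
</code></pre>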
<p>The <code>learning_rate</code> isn't necessarily decreased throughout the training, as you have mentioned in your question.</p> | 2019-05-13 15:35:47.110000+00:00 | 2020-02-05 14:29:34.257000+00:00 | 2020-02-05 14:29:34.257000+00:00 | null | 56,114,706 | <p>In the Adam optimization algorithm, the learning speed is adjusted according to the number of iterations. I don't quite understand Adam's design, especially when using batch training. When using batch training, if there are 19,200 pictures and 64 pictures are trained each time, that is equivalent to 300 iterations. If we train for 200 epochs, there are a total of 60,000 iterations. I don't know if so many iterations will reduce the learning speed to a very small size. So when we are training, shall we re-initialize the optimizer after each epoch, or do nothing throughout the process?</p>
<p>I am using PyTorch. I have tried re-initializing the optimizer after each epoch when I use batch training, and I do nothing when the amount of data is small.</p>
<p>For example, I don't know whether the two pieces of code below are correct:</p>
<pre class="lang-py prettyprint-override"><code>optimizer = optim.Adam(model.parameters(), lr=0.1)
for epoch in range(100):
###Some code
    optimizer.step()
</code></pre>
<p>Another piece of code:</p>
<pre class="lang-py prettyprint-override"><code>for epoch in range(100):
optimizer = optim.Adam(model.parameters(), lr=0.1)
###Some code
    optimizer.step()
</code></pre> | 2019-05-13 14:42:54.983000+00:00 | 2020-02-05 14:29:34.257000+00:00 | 2019-05-15 19:51:16.053000+00:00 | pytorch | ['https://arxiv.org/pdf/1412.6980.pdf'] | 1 |
49,187,418 | <blockquote>
<p>Is deep learning the only way to detect humans in a picture?</p>
</blockquote>
<p><strong>No.</strong> Is it the best way we know? Depends on your conditions.</p>
<p>The simplest way of detection is to generate lots of random bounding boxes and then solving the classification problem of the crop. Here is some pythonic pseudo-code:</p>
<pre><code>def detect_people(image):
"""
Find all people in image.
Parameters
----------
image : image object
Returns
-------
people : list of axis-aligned bounding boxes (aabb)
Each bounding box contains a person
"""
people = []
for aabb in generate_random_aabb(image):
crop = crop_image(image, aabb)
if is_person(crop):
            people.append(aabb)  # keep the box itself; the docstring promises bounding boxes, not crops
return people
</code></pre>
<p>In this case <code>is_person</code> can be any classifier, e.g. boosted decision stumps as used in the <a href="https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework" rel="nofollow noreferrer">Viola–Jones object detection framework</a>. Speaking of which: That would likely be the way to go without DL, but is much more complicated to explain.</p>
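<p>If you just want a classical baseline to try without training anything yourself, OpenCV ships a related non-deep-learning pipeline (HOG features with a linear SVM pedestrian model, not Viola–Jones itself). A minimal sketch, with the image path as a placeholder:</p>
<pre><code># Classical person detection with OpenCV's built-in HOG + linear SVM model.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("bus.jpg")  # placeholder path
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)

print("Found %d people" % len(boxes))
for (x, y, w, h) in boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
</code></pre>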
<h2>Object Detection vs Segmentation</h2>
<p>Your question mixes both. Object detection gives you bounding boxes (coarse) for instances. Semantic segmentation labels all pixels by classes, but does not distinguish different instances of the same class (e.g. different people). Instance segmentation is like object detection, but is fine-grained and aims for pixel-exact results.</p>
<p>If you are interested in segmentation, I can recommend my paper: <a href="https://arxiv.org/pdf/1602.06541.pdf" rel="nofollow noreferrer">A Survey of Semantic Segmentation</a></p>
<p><a href="https://i.stack.imgur.com/5m6Ew.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5m6Ew.jpg" alt="enter image description here"></a></p>
<p>AFAIK, such a problem can rather easily be solved by training deep neural networks. However, my coworkers would like me to also implement a detection technique based on general image processing techniques. I've spent several days looking for techniques designed by researchers but I couldn't find anything else than saliency-based techniques (which may be fine, but I'd like to test several techniques based on old-fashioned image processing).</p>
<p>I'd like to mention that I'm not new to the topic of image segmentation & I used to segment aortas in medical scans. However, this task was easier IMHO since scanners have similar features: in this use-case (human detection in a bus, for instance), the pictures will have very different characteristics (e.g. image contrast can strongly vary, whether it's been taken during the day or at night).</p>
<p><strong>Long story short, I'd like to know if there's some segmentation technique for human detection for which it'd be interesting giving a shot, given the fact that the images features vary a lot?</strong> </p> | 2018-03-07 16:30:15.100000+00:00 | 2018-03-09 05:52:29.863000+00:00 | null | image-processing|deep-learning|image-segmentation | ['https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework', 'https://arxiv.org/pdf/1602.06541.pdf'] | 2 |
70,489,887 | <p>Fabric nodes use gRPC for communication and as such, it uses protobuf data serialization.</p>
<p>In Fabric, there is no need for the data sent over the wire to be human readable, as the data is anyway being constructed without human intervention.</p>
<p>You usually use JSON when you have payloads that are either expected to be constructed by users, such as a REST API that is simple enough for users to drive with curl/wget/scripts, or when you want to be able to debug the data sent over the wire.</p>
<p>As @jpa pointed out in his comment, sending JSON has its drawbacks: the data scheme itself travels with every message along with the actual data, which is inefficient compared to protobuf, which sends only the data and keeps the scheme out of the wire format.</p>
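<p>As a rough illustration of that overhead (plain Python, not actual protobuf or Fabric code; <code>struct</code> merely stands in for a schema-based binary encoding):</p>
<pre><code>import json
import struct

payload = {"tx_id": 42, "amount": 1000, "fee": 1}

as_json = json.dumps(payload).encode("utf-8")
# Stand-in for a schema-driven binary encoding: three unsigned 64-bit integers.
as_binary = struct.pack("<QQQ", payload["tx_id"], payload["amount"], payload["fee"])

print(len(as_json))    # ~40 bytes: the field names travel with every message
print(len(as_binary))  # 24 bytes: the schema lives in code, not on the wire
</code></pre>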
<p>However, there is also a drawback to using protobuf, and that is that protobuf serialization is not deterministic.</p>
<p>In a system with signature checks everywhere, such as Fabric, if you send a signature over a message, the message must be sent as a blob of bytes.</p>
<p>This makes the Fabric transaction structure extremely inefficient: it contains nested encapsulations of messages, each "layer" is represented in the layer above it as an opaque blob of bytes, and to access an inner field of a transaction message you need to unmarshal that blob of bytes into the actual message object.</p>
<p><a href="https://i.stack.imgur.com/Egb9t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Egb9t.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/eUdmj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eUdmj.png" alt="enter image description here" /></a></p>
<p>This makes Fabric transaction processing very slow, as <a href="https://arxiv.org/abs/1901.00910" rel="nofollow noreferrer">observed in a paper</a> a couple of years ago:</p>
<blockquote>
<p>Fabric uses gRPC for communication between nodes in the network. To
prepare data for transmission, Protocol Buffers are used for
serialization. To be able to deal with application and software
upgrades over time, Fabric’s block structure is highly layered, where
each layer is marshaled and unmarshaled separately. This leads to a
vast amount of memory allocated to convert byte arrays into data
structures.</p>
</blockquote>
<p>Had Fabric transaction messaging been based on a deterministic marshaling scheme such as ASN1, then you would've been able to send a transaction without intermediate opaque byte blobs because signature checks would've involved serializing the message via the deterministic ASN1 serialization to obtain the blob of bytes that Fabric messages carry.</p> | 2021-12-26 21:09:08.340000+00:00 | 2021-12-26 21:09:08.340000+00:00 | null | null | 70,473,568 | <p>What are the advantages that protobuf provides which JSON cannot?</p>
<p>"<em>protobuff is faster than JSON</em>" - What does this mean?</p>
<p>Note: I am aware of the differences between protobuf and JSON.</p> | 2021-12-24 13:51:32.763000+00:00 | 2021-12-26 21:09:08.340000+00:00 | null | json|protocol-buffers|hyperledger-fabric|hyperledger | ['https://i.stack.imgur.com/Egb9t.png', 'https://i.stack.imgur.com/eUdmj.png', 'https://arxiv.org/abs/1901.00910'] | 3 |
62,361,833 | <p>A good overview of the different norms is shown in the <a href="https://arxiv.org/abs/1803.08494" rel="nofollow noreferrer">Group Normalization paper</a>.</p>
<p><a href="https://i.stack.imgur.com/EBEms.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EBEms.png" alt="Norms overview"></a></p>
<p>Instance normalisation is summarised as:</p>
<blockquote>
<p>[...] IN computes µ and σ along the (H, W) axes for each sample and each channel.</p>
</blockquote>
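<p>In PyTorch terms, that statement can be checked directly with a batch size of 1 (a minimal sketch; the manual computation is compared against <code>nn.InstanceNorm2d</code> with its defaults):</p>
<pre><code>import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)                       # (N, C, H, W) with N = 1

out = nn.InstanceNorm2d(3, affine=False)(x)

mean = x.mean(dim=(2, 3), keepdim=True)           # one mean per sample and channel
var = ((x - mean) ** 2).mean(dim=(2, 3), keepdim=True)
manual = (x - mean) / torch.sqrt(var + 1e-5)      # 1e-5 is the default eps

print(torch.allclose(out, manual, atol=1e-6))     # True
</code></pre>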
<p>The mean and standard deviation are computed on the spatial dimensions (H, W) only and are independent of the batch size and channels (there are <em>N x C</em> different norms). Hence, you can use it with a batch size of 1.</p> | 2020-06-13 15:23:20.913000+00:00 | 2020-06-13 15:23:20.913000+00:00 | null | null | 62,356,985 | <p>I am really confused with the meaning of Instance Norm and whether I can use it with a batch size of 1. I am using PyTorch and nothing in the documentation says that batch size should be greater than 1.</p>
<p>I know that for BatchNorm the performance is adversely affected when batch size is less than 8 and hence it puts a sort of soft bound on the batch size. However, I did not see any such analysis on Instance Norm and am a bit confused now. Should I remove the norm layer if my batch size is 1 then?</p> | 2020-06-13 07:52:04.063000+00:00 | 2020-06-13 15:23:20.913000+00:00 | null | machine-learning|deep-learning|pytorch | ['https://arxiv.org/abs/1803.08494', 'https://i.stack.imgur.com/EBEms.png'] | 2 |
45,822,511 | <p>Common word2vec libraries, like gensim, do not provide such a facility for merging models. Inherently, the coordinates of words within a model are only comparable, in terms of distances and directions, with other words in the same model – only by being incrementally trained together do they get nudged into meaningful relative positions. </p>
<p>The most straightforward approach is, as @mujiga's answer suggests, to combine the two training corpuses which include all desired words, and train a new model on the combined texts. (And ideally, you'd shuffle the two corpuses together, rather than simply concatenate them, so that no words appear only in the beginning or end of the full set-of-texts.)</p>
<p>A more complicated approach is possible, when there are a lot of overlapping words. You can pick one of the two "spaces" as the coordinate-system you'd like to retain – probably the one with more words, having been trained on more texts. Call that the 'reference' model. </p>
<p>You would take a large number of the words (maybe all) that are shared between the two models, and learn a 'translation' operation that projects those words' coordinates in the smaller model to roughly the right places for those same words in the reference model. (This is itself typically a mathematical optimization problem.) Finally, you'd use that translation operation on the non-shared words, to convert coordinates of the smaller model into the reference-model coordinate space – and then construct a new data structure that includes all the original reference vectors, plus the translated vectors. </p>
<p>This is the technique used by <a href="https://arxiv.org/abs/1309.4168" rel="nofollow noreferrer">one of the original word2vec papers</a> for machine translation. It's also mentioned in <a href="http://arxiv.org/abs/1506.06726" rel="nofollow noreferrer">section 2.2 of the skip-thoughts paper</a> as a way to leverage words from another source when they don't appear in a local corpus. There's currently (August 2017) <a href="https://github.com/RaRe-Technologies/gensim/pull/1434" rel="nofollow noreferrer">some work-in-progress to add a facility for learning-the-translation</a> in gensim, but it's not yet fully-tested/documented or part of any formal release.</p>
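<p>To make the translation step concrete, here is a hedged NumPy sketch with toy random vectors standing in for the two models' word vectors (in practice you would fit the map on many shared words and real embeddings, e.g. from gensim <code>KeyedVectors</code>):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
# Toy stand-ins for the two trained models' word vectors.
reference = {w: rng.normal(size=50) for w in ["hi", "hello", "world"]}
small = {w: rng.normal(size=50) for w in ["hi", "hello", "king", "human"]}

shared = [w for w in small if w in reference]
X = np.vstack([small[w] for w in shared])        # coordinates in the small model
Y = np.vstack([reference[w] for w in shared])    # coordinates in the reference model

W, *_ = np.linalg.lstsq(X, Y, rcond=None)        # least-squares linear map: X @ W ~ Y

merged = dict(reference)                         # keep all reference vectors as-is
for w in small:
    if w not in merged:
        merged[w] = small[w] @ W                 # project new words into the reference space

print(sorted(merged))                            # ['hello', 'hi', 'human', 'king', 'world']
</code></pre>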
<p>But really: the safe and straightforward thing is to train a new model on a common corpus. </p> | 2017-08-22 16:10:21.853000+00:00 | 2017-08-22 16:10:21.853000+00:00 | null | null | 45,810,954 | <p>I have created two models using gensim word2vec. Now I want to merge these two models in a way that I get the union of these two models.</p>
<p>Eg: </p>
<ul>
1. Model one has following vocabulary
</ul>
<pre><code>{"Hi", "Hello", "World"}
</code></pre>
<ul>
2. Model two has the following vocabulary
</ul>
<pre><code>{"Hi", "King", "Hello", "Human"}
</code></pre>
<p>Now I want to use these two models and create a new model which will have the following vocabulary</p>
<pre><code>{"Hi", "Hello", "World", "King", "Human"}
</code></pre> | 2017-08-22 07:03:01.030000+00:00 | 2017-10-12 12:46:28.437000+00:00 | 2017-08-22 08:20:23.910000+00:00 | machine-learning|nlp|deep-learning|gensim|word2vec | ['https://arxiv.org/abs/1309.4168', 'http://arxiv.org/abs/1506.06726', 'https://github.com/RaRe-Technologies/gensim/pull/1434'] | 3 |
55,794,895 | <p>It is a known thing with high-capacity models. They are surprisingly resistant to overfitting, which contradicts the classical statistical learning theory that says that without explicit regularization you are going to overfit. For example, <a href="https://www.padl.ws/papers/Paper%2010.pdf" rel="nofollow noreferrer">this paper</a> says</p>
<blockquote>
<p>most of deep neural networks with learned parameters often generalize
very well empirically, even equipped with much more effective
parameters than the number of training samples, i.e. high capacity...
Thus, statistical learning theory cannot explain the generalization
ability of deep learning models.</p>
</blockquote>
<p>Also, <a href="https://arxiv.org/pdf/1705.08741.pdf" rel="nofollow noreferrer">this</a> and <a href="https://arxiv.org/pdf/1710.10345.pdf" rel="nofollow noreferrer">this</a> paper talk about it. You can keep following the references in these papers to read more.</p>
<p>Personally, I have never seen a high-capacity model overfit even after training for tens of thousands of epochs. If you want an example that does overfit: take LeNet-5 for Cifar10 with ReLU activations and without dropout and train it using SGD with learning rate <code>0.01</code>. The number of trainable parameters in this model is ~60,000, which is about the same as the number of samples in Cifar10 (a low-capacity model). After at most 500-1000 epochs you are going to see very clear overfitting, with increasing loss and error over time.</p> | 2019-04-22 13:04:59.640000+00:00 | 2019-04-22 13:10:09.337000+00:00 | 2019-04-22 13:10:09.337000+00:00 | null | 55,794,687 | <p>I trained a model and got decent results, but then I got greedy and wanted even more accuracy, so I trained the model for longer, and longer and longer, but to no avail: nothing happened! According to theory, at some point the validation accuracy must start to decrease after too much training (the loss starts to INCREASE)! But this never seems to happen. So I figured maybe the NN is too simple to ever be able to overfit, so I increased its capacity and ended up with millions of parameters, and I trained it for 10,000 epochs, and still no overfitting happens. </p>
<p>The same question was asked <a href="https://stackoverflow.com/questions/54489549/why-does-my-neural-network-never-overfit">here</a>, but the answers there are anything but satisfying.</p>
<p>What does that mean?</p> | 2019-04-22 12:47:52.120000+00:00 | 2019-04-22 13:31:39.567000+00:00 | 2019-04-22 13:31:39.567000+00:00 | python|tensorflow|machine-learning|keras|deep-learning | ['https://www.padl.ws/papers/Paper%2010.pdf', 'https://arxiv.org/pdf/1705.08741.pdf', 'https://arxiv.org/pdf/1710.10345.pdf'] | 3 |
57,994,105 | <p>Papers on analyzing deep learning models for sentiment analysis (<a href="https://arxiv.org/pdf/1809.08037.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1809.08037.pdf</a>, <a href="https://arxiv.org/pdf/1907.04613.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1907.04613.pdf</a>) show that the networks really learn domain-specific information.</p>
<p>For instance, if you train a model on the IMDB dataset, the name <em>Tarantino</em> is always associated with positive sentiment. If you say "It was like Tarantino.", the model classifies it positively, although it really should not in the political domain.</p>
<p>If you have time to play around with it, you can try to gather as many sentiment datasets as you can, train a simple classifier over BERT, and see if it works.</p>
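<p>One possible setup for that experiment, as a hedged sketch (the encoder name and the toy data are placeholders; <code>sentence-transformers</code> is just one convenient way to get BERT-style sentence embeddings):</p>
<pre><code>from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

texts = ["The coalition talks collapsed again.", "The budget deal passed smoothly."]
labels = [0, 1]                                     # 0 = negative, 1 = positive (toy data)

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # any BERT-style sentence encoder
X = encoder.encode(texts)

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(encoder.encode(["A snap election now looks unavoidable."])))
</code></pre>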
<p>If you need a quick solution, you can still fall back on a pre-machine-learning approach and rely on a sentiment lexicon. It works surprisingly well.</p> | 2019-09-18 13:38:13.217000+00:00 | 2019-09-18 13:38:13.217000+00:00 | null | null | 57,991,521 | <p>We need to analyse a lot of articles relevant to political instability in a given country (things like the possibility of a coalition / a snap election etc).
The problem is that I could not find any labeled datasets which could be plugged into a neural network (CNN/LSTM in TensorFlow) so as to supervise it for real-time events (news articles, tweets etc).</p>
<p>I believe we can't use publicly available big datasets - like IMDB film reviews - for training the models to accurately identify and predict the occurrence of such events (or can we?).</p>
<p>Are there other ways to solve this problem?</p>
<p>I also thought of using unsupervised learning - libraries like VADER - but that gives me a more generic sentiment-score, rather than attuned to the specific corpora relevant to the problem.</p> | 2019-09-18 11:16:42.517000+00:00 | 2019-09-18 13:38:13.217000+00:00 | null | machine-learning|neural-network|nlp|sentiment-analysis | ['https://arxiv.org/pdf/1809.08037.pdf', 'https://arxiv.org/pdf/1907.04613.pdf'] | 2 |
61,386,604 | <p>Although the expression "<em>Adam is RMSProp with momentum</em>" is widely used indeed, it is just a very rough shorthand description, and it should not be taken at face value; already in the original <a href="https://arxiv.org/pdf/1412.6980.pdf" rel="nofollow noreferrer">Adam paper</a>, it was explicitly clarified (p. 6):</p>
<blockquote>
<p>There are a few important differences between RMSProp with momentum and Adam: RMSProp with momentum generates its parameter updates using a momentum on the rescaled gradient, whereas Adam updates are directly estimated using a running average of first and second moment of the gradient.</p>
</blockquote>
<p>Sometimes, authors make clear that the subject expression is just a loose description, e.g. in the (highly recommended) <a href="https://ruder.io/optimizing-gradient-descent/index.html#adam" rel="nofollow noreferrer">Overview of gradient descent optimization algorithms</a> (emphasis added):</p>
<blockquote>
<p>Adam also keeps an exponentially decaying average of past gradients mt, <strong>similar</strong> to momentum.</p>
</blockquote>
<p>or in <a href="https://cs231n.github.io/neural-networks-3/#ada" rel="nofollow noreferrer">Stanford CS231n: CNNs for Visual Recognition</a> (again, emphasis added):</p>
<blockquote>
<p>Adam is a recently proposed update that <strong>looks a bit</strong> like RMSProp with momentum.</p>
</blockquote>
<p>That said, it's true that some other frameworks actually include a <code>momentum</code> parameter for Adam, but this is actually the <code>beta1</code> parameter; here is <a href="https://cntk.ai/pythondocs/cntk.learners.html#cntk.learners.adam" rel="nofollow noreferrer">CNTK</a>:</p>
<blockquote>
<p><strong>momentum</strong> (float, list, output of <code>momentum_schedule()</code>) – momentum schedule. Note that this is the beta1 parameter in the Adam paper. For additional information, please refer to the <a href="https://docs.microsoft.com/en-us/cognitive-toolkit/BrainScript-SGD-Block#converting-learning-rate-and-momentum-parameters-from-other-toolkits" rel="nofollow noreferrer">this CNTK Wiki article</a>.</p>
</blockquote>
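<p>A minimal <code>tf.keras</code> sketch of where the analogous knobs live (hyper-parameter values are only illustrative):</p>
<pre><code>import tensorflow as tf

# RMSprop exposes a `momentum` argument directly ...
rmsprop = tf.keras.optimizers.RMSprop(learning_rate=1e-3, rho=0.9, momentum=0.9)

# ... while in Adam the analogous knob is the first-moment decay rate `beta_1`.
adam = tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.999)
</code></pre>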
<p>So, don't take this too literally, and don't lose your sleep over it.</p> | 2020-04-23 11:53:32.427000+00:00 | 2020-04-23 12:18:25.137000+00:00 | 2020-04-23 12:18:25.137000+00:00 | null | 61,381,648 | <p>Here is a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/optimizers" rel="nofollow noreferrer">link</a> to the tensorflow optimizers. There you can see that RMSprop takes momentum as an argument while Adam does not. So I am confused. Adam optimization is supposed to be RMSprop optimization with momentum, like this: </p>
<p><em>Adam = RMSprop + Momentum</em> </p>
<p>But why then RMSprop does have the momentum parameter and Adam does not?</p> | 2020-04-23 07:30:58.123000+00:00 | 2020-04-23 13:08:07.157000+00:00 | 2020-04-23 13:08:07.157000+00:00 | tensorflow|optimization|keras|adam | ['https://arxiv.org/pdf/1412.6980.pdf', 'https://ruder.io/optimizing-gradient-descent/index.html#adam', 'https://cs231n.github.io/neural-networks-3/#ada', 'https://cntk.ai/pythondocs/cntk.learners.html#cntk.learners.adam', 'https://docs.microsoft.com/en-us/cognitive-toolkit/BrainScript-SGD-Block#converting-learning-rate-and-momentum-parameters-from-other-toolkits'] | 5 |
48,486,864 | <p>The usual representation is</p>
<pre><code> PREDICTED
[[41 2 0 0 0 0]
T [ 1 11 4 1 0 0]
R [ 0 1 12 0 0 0]
U [ 0 0 0 22 1 0]
E [ 0 0 0 0 7 0]
[ 0 0 0 0 0 20]]
</code></pre>
<p>As pointed out by M. Rath (+1), this is also what Tensorflow does. This means for 41 samples you correctly predicted class 0. For 2 samples, you predicted class 1, but it actually was class 0.</p>
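<p>The same rows-are-true / columns-are-predicted convention is easy to check with scikit-learn, shown here only as a quick illustration (TensorFlow's confusion matrix uses the same orientation):</p>
<pre><code>from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 2]

print(confusion_matrix(y_true, y_pred))
# [[2 1 0]    row 0: two true-0 samples predicted as 0, one true-0 predicted as 1
#  [0 2 0]    row 1: both true-1 samples predicted as 1
#  [0 0 1]]   row 2: the single true-2 sample predicted as 2
</code></pre>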
<p>Please note that you can also manipulate the order for visualizations. So instead of</p>
<pre><code>class 0, class 1, class 2
</code></pre>
<p>you could have (for both, prediction and true value) the order</p>
<pre><code>class 0, class 2, class 1
</code></pre>
<p>This contains the same information, but a visualization might convey a different story. See my masters thesis <a href="https://arxiv.org/pdf/1707.09725.pdf#page=48" rel="nofollow noreferrer">Analysis and Optimization of
Convolutional Neural Network Architectures</a> page 48 (Confusion Matrix Ordering), especially figure 5.12 and 5.13.</p>
<p>An implementation can be found in the tool <a href="https://github.com/MartinThoma/clana" rel="nofollow noreferrer"><code>clana</code></a></p> | 2018-01-28 13:29:21.860000+00:00 | 2018-01-28 13:29:21.860000+00:00 | null | null | 48,457,140 | <p>I have 6 classes and I used tf-slim in Tensorflow to obtained the confusion matrix such as</p>
<pre><code>[[41 2 0 0 0 0]
[ 1 11 4 1 0 0]
[ 0 1 12 0 0 0]
[ 0 0 0 22 1 0]
[ 0 0 0 0 7 0]
[ 0 0 0 0 0 20]]
</code></pre>
<p>My question is: what is the confusion matrix order of the above table? Is it right to say that the columns represent the predicted label, while the rows represent the true label? Some references say the opposite.</p> | 2018-01-26 07:19:44.490000+00:00 | 2018-01-28 13:29:21.860000+00:00 | null | tensorflow|deep-learning|tf-slim | ['https://arxiv.org/pdf/1707.09725.pdf#page=48', 'https://github.com/MartinThoma/clana'] | 2
69,304,325 | <p>You might have misread the source code: the first sample you gave is not averaging the result of <code>D</code> to compute its loss but instead uses the binary cross-entropy.</p>
<p>To be more precise:</p>
<ul>
<li><p>The first method (<strong>"GAN"</strong>) uses the BCE loss to compute the loss terms for <code>D</code> and <code>G</code>. The standard GAN optimization objective for <code>D</code> is to maximize <code>E_x[log(D(x))] + E_z[log(1-D(G(z)))]</code>, which is exactly what minimizing the BCE loss achieves. <br><a href="https://github.com/NUS-Tim/Pytorch-WGAN/blob/master/models/gan.py#L83" rel="nofollow noreferrer"><em>Source code</em></a>:</p>
<pre><code>outputs = self.D(images)
d_loss_real = self.loss(outputs.flatten(), real_labels) # <- bce loss
real_score = outputs
# Compute BCELoss using fake images
fake_images = self.G(z)
outputs = self.D(fake_images)
d_loss_fake = self.loss(outputs.flatten(), fake_labels) # <- bce loss
fake_score = outputs
# Optimizie discriminator
d_loss = d_loss_real + d_loss_fake
self.D.zero_grad()
d_loss.backward()
self.d_optimizer.step()
</code></pre>
<p>For <code>d_loss_real</code> you optimize towards <code>1</code>s (output is considered <em>real</em>), while <code>d_loss_fake</code> optimizes towards <code>0</code>s (output is considered <em>fake</em>).</p>
</li>
<li><p>While the second (<strong>"WCGAN"</strong>) uses the Wasserstein loss (<a href="https://arxiv.org/pdf/1704.00028v3.pdf" rel="nofollow noreferrer">ref</a>) whereby we maximise for D the loss: <code>E_x[D(x)] - E_z[D(G(z))]</code>. <br><a href="https://github.com/NUS-Tim/Pytorch-WGAN/blob/master/models/wgan_gradient_penalty.py#L174" rel="nofollow noreferrer"><em>Source code</em></a>:</p>
<pre><code># Train discriminator
# WGAN - Training discriminator more iterations than generator
# Train with real images
d_loss_real = self.D(images)
d_loss_real = d_loss_real.mean()
d_loss_real.backward(mone)
# Train with fake images
z = self.get_torch_variable(torch.randn(self.batch_size, 100, 1, 1))
fake_images = self.G(z)
d_loss_fake = self.D(fake_images)
d_loss_fake = d_loss_fake.mean()
d_loss_fake.backward(one)
# [...]
Wasserstein_D = d_loss_real - d_loss_fake
</code></pre>
<p>By doing <code>d_loss_real.backward(mone)</code> you backpropagate with a gradient of opposite sign, <em>i.e.</em> it's a gradient ascent, and you end up maximizing <code>d_loss_real</code>.</p>
</li>
</ul> | 2021-09-23 17:13:07.637000+00:00 | 2021-09-23 17:13:07.637000+00:00 | null | null | 69,302,763 | <p>I just find that in the code here:</p>
<p><a href="https://github.com/NUS-Tim/Pytorch-WGAN/tree/master/models" rel="nofollow noreferrer">https://github.com/NUS-Tim/Pytorch-WGAN/tree/master/models</a></p>
<p>The "generator" loss, <code>G</code>, between WGAN and WGAN-GP is different, for WGAN:</p>
<pre><code>g_loss = self.D(fake_images)
g_loss = g_loss.mean().mean(0).view(1)
g_loss.backward(one) # !!!
g_cost = -g_loss
</code></pre>
<p>But for WGAN-GP:</p>
<pre><code>g_loss = self.D(fake_images)
g_loss = g_loss.mean()
g_loss.backward(mone) # !!!
g_cost = -g_loss
</code></pre>
<p>Why one is <code>one=1</code> and another is <code>mone=-1</code>?</p> | 2021-09-23 15:18:59.933000+00:00 | 2022-01-06 15:57:12.017000+00:00 | 2021-09-24 08:16:26.943000+00:00 | deep-learning|neural-network|pytorch|backpropagation|generative-adversarial-network | ['https://github.com/NUS-Tim/Pytorch-WGAN/blob/master/models/gan.py#L83', 'https://arxiv.org/pdf/1704.00028v3.pdf', 'https://github.com/NUS-Tim/Pytorch-WGAN/blob/master/models/wgan_gradient_penalty.py#L174'] | 3 |
63,191,640 | <p>My understanding is that the authors are saying that one does not need to downsample the image (or any intermediate feature map) before applying, say, a 3x3 convolution for feature extraction, which is typical in DCNNs (e.g., VGG16 or ResNet), followed by upsampling for semantic segmentation. In a typical encoder-decoder network (e.g. UNet or SegNet), one first downsamples the feature map by half, applies a convolution, and then upsamples the feature map again by 2x.</p>
<p>All of these effects (downsampling, feature extraction and upsampling) can be captured in a single atrous convolution (of course with stride=1). Moreover, the output of an atrous convolution is a dense feature map, whereas the same "downsampling, feature extraction and upsampling" sequence results in a sparse feature map. See the following figure for more details; it is from the <a href="https://arxiv.org/pdf/1606.00915.pdf" rel="nofollow noreferrer">DeepLabV1 paper</a>. Therefore, you can control the size of a feature map by replacing any normal convolution with an atrous convolution in an intermediate layer.</p>
<p>That's also why there is a constant "output_stride (input resolution / feature map resolution)" of 16 in all the atrous convolutions in the picture (cascaded model) you posted above.</p>
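<p>A minimal PyTorch check of that claim (an atrous 3x3 convolution with stride 1 and padding matched to the dilation leaves H and W unchanged):</p>
<pre><code>import torch
import torch.nn as nn

x = torch.randn(1, 64, 65, 65)                    # (N, C, H, W)

# effective kernel = dilation * (k - 1) + 1 = 5, so padding = 2 preserves the size
atrous = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=2, dilation=2)

print(atrous(x).shape)                            # torch.Size([1, 64, 65, 65])
</code></pre>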
<p><a href="https://i.stack.imgur.com/Aeqv9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Aeqv9.png" alt="enter image description here" /></a></p> | 2020-07-31 12:14:12.700000+00:00 | 2020-07-31 12:14:12.700000+00:00 | null | null | 55,007,114 | <p>I am trying to understand dilated convolution. I am already familiar with increasing the size of the kernel by filling the gaps with zeros. It is useful to cover a bigger area and get a better understanding of larger objects.
But can someone please explain how it is possible that dilated convolutional layers keep the original resolution of the receptive field? They are used in the deeplabV3+ structure with an atrous rate from 2 to 16. How is it possible to use a dilated convolution, with its obviously bigger kernel, without zero padding and still have a consistent output size?</p>
<p>deeplabV3+ structure:</p>
<p><a href="https://i.stack.imgur.com/QMs29.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QMs29.png" alt="enter image description here"></a></p>
<p>I'm confused because when I have a look at this explanation here: </p>
<p><a href="https://i.stack.imgur.com/WfD07.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WfD07.gif" alt="enter image description here"></a></p>
<p>Isn't the output size (3x3) of the dilated convolution layer smaller?</p>
<p>Thank you so much for your help!</p>
<p>Lukas</p> | 2019-03-05 16:14:10.223000+00:00 | 2020-07-31 12:14:12.700000+00:00 | 2019-03-05 18:59:24.123000+00:00 | deep-learning|conv-neural-network|deeplab | ['https://arxiv.org/pdf/1606.00915.pdf', 'https://i.stack.imgur.com/Aeqv9.png'] | 2 |
67,849,743 | <p>The interpretation in terms of holes applies to the filtration defined as a union of balls around each data point, or equivalently to the Čech or α-complex filtrations. The Rips filtration is often used as an easier to compute approximation of that filtration, but it does not have the same theoretical properties. There are actually several publications studying higher dimensional persistent homology of the Vietoris-Rips filtration of a circle (<a href="https://arxiv.org/abs/1503.03669" rel="nofollow noreferrer">example</a>).</p>
<p>By the way, if you are working with points in low dimension with the Euclidean metric, it would probably make more sense to use an α-complex filtration, which is significantly smaller. You can get it with <a href="https://gudhi.inria.fr/" rel="nofollow noreferrer">Gudhi</a> or <a href="https://github.com/mrzv/diode" rel="nofollow noreferrer">DioDe</a>/<a href="https://www.mrzv.org/software/dionysus2/" rel="nofollow noreferrer">Dionysus 2</a> for instance.</p> | 2021-06-05 12:45:54.790000+00:00 | 2021-06-05 12:45:54.790000+00:00 | null | null | 59,648,575 | <p>I use <code>RIPSER</code> library in python to analyze my <code>data(TDA)</code>. The <code>plot(data)</code> i have used is <code>2-d</code>, but when i use <code>ripser</code> to analyze, it gives me output with three homology <code>groups(H0, H1, H2)</code>. But this is not possible, since <code>H0</code> represents the connected components and <code>H1</code> represents the <code>HOLES</code> in <code>2-d</code> plot, whereas <code>H2</code> represents the <code>VOID</code> in the plot which is not possible in a <code>2-d plot</code>.</p>
<p><a href="https://i.stack.imgur.com/3Z2vy.png" rel="nofollow noreferrer">Example</a></p>
<p>The code that i have used:</p>
<pre><code>import matplotlib
import numpy as np
from ripser import ripser
from persim import plot_diagrams
from matplotlib import pyplot as plt
data = np.loadtxt('t=25_data(0.38).dat')
fig = plt.figure()
plt.title("PERSISTENCE DIAGRAM")
diagrams = ripser(data, maxdim=2)['dgms'] #### persistence diagram
### LIFETIME plot
plot_diagrams(diagrams, show=True)
plot_diagrams(diagrams, lifetime=True)
</code></pre> | 2020-01-08 15:01:48.543000+00:00 | 2021-06-05 12:45:54.790000+00:00 | 2020-01-08 16:34:47.820000+00:00 | python|r|python-3.x|topology | ['https://arxiv.org/abs/1503.03669', 'https://gudhi.inria.fr/', 'https://github.com/mrzv/diode', 'https://www.mrzv.org/software/dionysus2/'] | 4 |
50,257,008 | <p>First, a bit of terminology:</p>
<ul>
<li><p>In <strong>ensembles</strong> (as I understand them) you have N models at test time and you combine their <strong>predictions</strong> (by voting, or even better combining probabilistic distributions and using as input for further decoding in case of autoregressive seq2seq decoders). You can have <em>independent ensembles</em> (training each model independently from scratch, with different random initialization) or <em>checkpoint ensembles</em> (taking N last checkpoints, or possibly N checkpoints with best validation score). See e.g. <a href="http://www.statmt.org/wmt17/pdf/WMT39.pdf" rel="nofollow noreferrer">Sennrich et al., 2017</a> for a comparison of these two types of ensembles.</p></li>
<li><p>In <strong>averaging</strong> you average the <strong>weights</strong> of N models, so at test time you have just one averaged model (see the sketch just after this list). This usually gives worse results than real ensembles, but it is much faster, so you can afford a higher N. If the models are trained completely independently with different random initializations, averaging does not work at all. However, if the models share a reasonable number of initial training steps, averaging may work. A special case is <em>checkpoint averaging</em>, where the last N checkpoints are averaged, but you can even try "forking" the training and using "semi-independent" models for averaging (in addition to checkpoint averaging). It may be very useful to use a constant or cyclical learning rate, see <a href="https://arxiv.org/pdf/1803.05407.pdf" rel="nofollow noreferrer">Izmailov et al., 2018</a>.</p></li>
</ul>
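<p>To make the weight-averaging idea concrete, here is a hedged Keras-style sketch (the tiny architecture and the checkpoint paths are placeholders, and this is not the TF1 checkpoint format used by the old seq2seq tutorial; for that, see the scripts linked below):</p>
<pre><code>import numpy as np
import tensorflow as tf

def build_model():
    # Placeholder architecture: substitute your own model builder here.
    return tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,)),
                                tf.keras.layers.Dense(1)])

paths = ["ckpt_epoch_28.h5", "ckpt_epoch_29.h5", "ckpt_epoch_30.h5"]  # placeholders

models = []
for path in paths:
    m = build_model()
    m.load_weights(path)            # every checkpoint must match the architecture
    models.append(m)

weight_lists = [m.get_weights() for m in models]
averaged = [np.mean(layer, axis=0) for layer in zip(*weight_lists)]

avg_model = build_model()
avg_model.set_weights(averaged)     # one model carrying the averaged weights
</code></pre>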
<p>As for your question, how to do the averaging of Tensorflow checkpoints:
See <a href="https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/avg_checkpoints.py" rel="nofollow noreferrer">avg_checkpoints.py</a> or <a href="https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/bin/t2t_avg_all.py" rel="nofollow noreferrer">t2t-avg-all</a>.</p> | 2018-05-09 15:27:59.090000+00:00 | 2018-05-09 15:27:59.090000+00:00 | null | null | 42,837,829 | <p>I have a trained a tensorflow <a href="https://www.tensorflow.org/tutorials/seq2seq" rel="nofollow noreferrer">seq2seq</a> model for 30 epochs, and saved a checkpoint for each epoch. What I want to do now is combining the best X of those checkpoints (based on results on a development set). Specifically, I'm looking for a way that lets me average different model weights and merge them into a new model that can be used for decoding. However, there does not seem to be a set way for this, and loading different models can be a bit tricky. But even if this succeeds, I can't find a good answer on how to combine the weights in a new model. </p>
<p>Any help would be greatly appreciated. </p>
<p>Related questions (that do not sufficiently answer in my opinion):</p>
<p><a href="https://stackoverflow.com/questions/38750635/building-multiple-models-in-the-same-graph">Building multiple models in the same graph</a></p>
<p><a href="https://stackoverflow.com/questions/36828600/how-to-load-several-identical-models-from-save-files-into-one-session-in-tensorf?noredirect=1&lq=1">How to load several identical models from save files into one session in Tensorflow</a></p>
<p><a href="https://stackoverflow.com/questions/35464652/how-to-create-ensemble-in-tensorflow">How to create ensemble in tensorflow?</a></p> | 2017-03-16 15:06:42.943000+00:00 | 2018-05-09 15:27:59.090000+00:00 | 2017-05-23 10:29:57.447000+00:00 | tensorflow|deep-learning|ensemble-learning|sequence-to-sequence | ['http://www.statmt.org/wmt17/pdf/WMT39.pdf', 'https://arxiv.org/pdf/1803.05407.pdf', 'https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/avg_checkpoints.py', 'https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/bin/t2t_avg_all.py'] | 4 |
51,576,094 | <p>This paper contains some formulas that relate convolutional layer parameters to input and output shapes: <a href="https://arxiv.org/pdf/1603.07285.pdf" rel="nofollow noreferrer">A guide to convolution arithmetic for deep
learning</a></p> | 2018-07-28 23:45:27.293000+00:00 | 2018-07-28 23:45:27.293000+00:00 | null | null | 51,572,107 | <p>I'd like to generate an image of size (176,132), from a (60,1) Vector. </p>
<p>I don't know how to choose the Stride, Kernels, Padding and output paddings to best fit my problem... Is there any online resource that could help ?</p>
<p>I believe stacking a few TransposedConvolution2D layers helps, but how do each layer's parameters relate to each other ? Should they go increasing or decreasing ?</p>
<p>Thanks a lot !</p> | 2018-07-28 14:03:53.060000+00:00 | 2018-07-28 23:45:27.293000+00:00 | null | python-3.x|machine-learning|keras | ['https://arxiv.org/pdf/1603.07285.pdf'] | 1 |
66,084,306 | <p>I am a physicist and programmer who has worked extensively on Qiskit. I have limited experience with things like machine learning but if I am not mistaken <a href="https://arxiv.org/pdf/1401.2142.pdf" rel="nofollow noreferrer">figure 13 on page 22 of this paper on Nearest-Neighbor methods</a> is precisely the circuit you are creating.</p>
<p>You have a dramatic performance hit because you are simulating quantum hardware using classical algorithms. This is commented out:</p>
<pre><code># IBMQ Configure
# IBMQ.save_account(os.environ.get('IBM'))
# IBMQ.load_account()
# provider = IBMQ.get_provider('ibm-q')
# qcomp = provider.get_backend('ibmq_16_melbourne')
</code></pre>
<p>Where the "ibmq_16_melbourne" refers to a physical quantum computer with <a href="https://www.researchgate.net/figure/a-IBM-Q16-Melbourne-architecture-b-Previous-available-IBM-Q20-Tokyo-architecture_fig2_340963391" rel="nofollow noreferrer">the ibm architecture which is partially documented here</a>. That makes total sense because IBM restricts access for most accounts. That is why later on you have this:</p>
<pre><code># Get backend using the Aer provider
backend = Aer.get_backend('qasm_simulator')
</code></pre>
<p>"Aer" refers to the <em><strong>quantum computer simulation software which is running locally on your client-side computer</strong></em>. To my knowledge there is not yet something in qiskit which could simulate a specific physical quantum computer. That is what would presumably tell you what the simulated/theoretical speedup would be (despite the fact that it would take longer to simulate on a classical computer).</p>
<p>IMPORTANT:
Many of the standards defined as part of the Qiskit ecosystem (like the OpenQASM format) are meant to be hardware agnostic. You can describe a circuit that has any two qubits interacting at any time. But the truth is that physical quantum computers of any size (in terms of 10+ qubits) will not have direct any-qubit-to-any-other-qubit connections. You have to swap things around in ways specific to that architecture (like the Melbourne 16-qubit architecture).</p>
<p>I don't understand why its happening, it should be either similar to classical or better.</p>
<p>DataSet Explanation: 1 test data point and 3 training data points, dimension = 2.
Goal: our goal is to classify the test data point in one of training data point category.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
from numpy import pi
from qiskit import Aer, execute
from qiskit import QuantumCircuit
from qiskit import QuantumRegister, ClassicalRegister
from qiskit import IBMQ
import os
import time
# IBMQ Configure
# IBMQ.save_account(os.environ.get('IBM'))
# IBMQ.load_account()
# provider = IBMQ.get_provider('ibm-q')
# qcomp = provider.get_backend('ibmq_16_melbourne')
##
fig, ax = plt.subplots()
ax.set(xlabel='Data Feature 1', ylabel='Data Feature 2')
# Get the data from the .csv file
data = pd.read_csv('data.csv',
usecols=['Feature 1', 'Feature 2', 'Class'])
# Create binary variables to filter data
isGreen = data['Class'] == 'Green'
isBlue = data['Class'] == 'Blue'
isBlack = data['Class'] == 'Black'
# Filter data
greenData = data[isGreen].drop(['Class'], axis=1)
blueData = data[isBlue].drop(['Class'], axis=1)
blackData = data[isBlack].drop(['Class'], axis=1)
# This is the point we need to classify
y_p = 0.141
x_p = -0.161
# Finding the x-coords of the centroids
xgc = sum(greenData['Feature 1']) / len(greenData['Feature 1'])
xbc = sum(blueData['Feature 1']) / len(blueData['Feature 1'])
xkc = sum(blackData['Feature 1']) / len(blackData['Feature 1'])
# Finding the y-coords of the centroids
ygc = sum(greenData['Feature 2']) / len(greenData['Feature 2'])
ybc = sum(blueData['Feature 2']) / len(blueData['Feature 2'])
ykc = sum(blackData['Feature 2']) / len(blackData['Feature 2'])
# Plotting the centroids
plt.plot(xgc, ygc, 'gx')
plt.plot(xbc, ybc, 'bx')
plt.plot(xkc, ykc, 'kx')
# Plotting the new data point
plt.plot(x_p, y_p, 'ro')
# Setting the axis ranges
plt.axis([-1, 1, -1, 1])
plt.show()
# Calculating theta and phi values
phi_list = [((x + 1) * pi / 2) for x in [x_p, xgc, xbc, xkc]]
theta_list = [((x + 1) * pi / 2) for x in [y_p, ygc, ybc, ykc]]
#----- quantum start time -------#
st = time.time()
# Create a 2 qubit QuantumRegister - two for the vectors, and
# one for the ancillary qubit
qreg = QuantumRegister(3)
# Create a one bit ClassicalRegister to hold the result
# of the measurements
creg = ClassicalRegister(1)
qc = QuantumCircuit(qreg, creg, name='qc')
# Get backend using the Aer provider
backend = Aer.get_backend('qasm_simulator')
# Create list to hold the results
results_list = []
# Estimating distances from the new point to the centroids
for i in range(1, 4):
# Apply a Hadamard to the ancillary
qc.h(qreg[2])
# Encode new point and centroid
qc.u(theta_list[0], phi_list[0], 0, qreg[0])
qc.u(theta_list[i], phi_list[i], 0, qreg[1])
# Perform controlled swap
qc.cswap(qreg[2], qreg[0], qreg[1])
# Apply second Hadamard to ancillary
qc.h(qreg[2])
# Measure ancillary
qc.measure(qreg[2], creg[0])
# run on quantum computer
# job = execute(qc, backend=qcomp, shots=1024)
# job_monitor(job)
# Reset qubits
qc.reset(qreg)
# Register and execute job
job = execute(qc, backend=backend, shots=1024)
result = job.result().get_counts(qc)
results_list.append(result['1'])
et = time.time()
# --------- end time ----------
print(results_list)
print('final circuit fig')
print(qc.draw())
# Create a list to hold the possible classes
class_list = ['Green', 'Blue', 'Black']
# Find out which class the new data point belongs to
# according to our distance estimation algorithm
quantum_p_class = class_list[results_list.index(min(results_list))]
# Find out which class the new data point belongs to
# according to classical euclidean distance calculation
# classical start time
cst = time.time()
distances_list = [((x_p - i[0]) ** 2 + (y_p - i[1]) ** 2) ** 0.5 for i in [(xgc, ygc), (xbc, ybc), (xkc, ykc)]]
cet = time.time()
classical_p_class = class_list[distances_list.index(min(distances_list))]
# Print time taken
print("classical time => ", cet-cst)
print("quantum time => ", et-st)
# Print results
print("""According to our distance algorithm, the new data point belongs to the""", quantum_p_class, 'class.\n')
print('Euclidean distances: ', distances_list, '\n')
print("""According to euclidean distance calculations, the new data point belongs to the""", classical_p_class,
'class.')
</code></pre>
<p>Output:</p>
<pre><code>classical time => **1.0967254638671875e-05**
quantum time => **0.2530648708343506** // more time
According to our distance algorithm, the new data point belongs to the Blue class.
Euclidean distances: [0.520285324797846, 0.4905204028376393, 0.7014755294377704]
According to euclidean distance calculations, the new data point belongs to the Blue class.
</code></pre>
<p>I am not able to understand, why quantum computing taking so much time.</p> | 2021-02-05 22:46:37.240000+00:00 | 2021-02-07 03:34:54.610000+00:00 | 2021-02-05 22:55:50.577000+00:00 | python|machine-learning|euclidean-distance|quantum-computing | ['https://arxiv.org/pdf/1401.2142.pdf', 'https://www.researchgate.net/figure/a-IBM-Q16-Melbourne-architecture-b-Previous-available-IBM-Q20-Tokyo-architecture_fig2_340963391'] | 2 |
62,723,657 | <p>The use of the <code>[CLS]</code> token to represent the entire sentence comes from the <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="noreferrer">original BERT paper</a>, section 3:</p>
<blockquote>
<p>The first token of every sequence is always a special classification token ([CLS]). The final hidden state corresponding to this token is used as the aggregate sequence representation for classification tasks.</p>
</blockquote>
<p>Your intuition is correct that averaging the vectors of all the tokens may produce superior results. In fact, that is exactly what is mentioned in the Huggingface documentation for <a href="https://huggingface.co/transformers/v3.0.2/model_doc/bert.html#bertmodel" rel="noreferrer">BertModel</a>:</p>
<blockquote>
<p><strong>Returns</strong></p>
<p>pooler_output (<code>torch.FloatTensor</code>: of shape <code>(batch_size, hidden_size)</code>):</p>
<p>Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pre-training.</p>
<p>This output is usually not a good summary of the semantic content of the input, <em>you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence</em>.</p>
</blockquote>
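<p>For completeness, a hedged sketch of the two options with the <code>transformers</code> library (the model name is just an example, and the exact API may differ slightly between versions):</p>
<pre><code>import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

enc = tokenizer("BERT sentence representations", return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

cls_vec = out.last_hidden_state[:, 0]                        # the [CLS] token only
mask = enc["attention_mask"].unsqueeze(-1).float()           # ignore padding tokens
mean_vec = (out.last_hidden_state * mask).sum(1) / mask.sum(1)

print(cls_vec.shape, mean_vec.shape)                         # both torch.Size([1, 768])
</code></pre>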
<p><strong>Update</strong>: Huggingface removed that statement ("This output is usually not a good summary of the semantic content ...") in v3.1.0. You'll have to ask them why.</p> | 2020-07-03 23:10:29.270000+00:00 | 2020-09-09 00:15:55.043000+00:00 | 2020-09-09 00:15:55.043000+00:00 | null | 62,705,268 | <p>I am doing experiments on bert architecture and found out that most of the fine-tuning task takes the final hidden layer as text representation and later they pass it to other models for the further downstream task.</p>
<p>Bert's last layer looks like this :</p>
<p><a href="https://i.stack.imgur.com/m0jrg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/m0jrg.png" alt="enter image description here" /></a></p>
<p>Where we take the [CLS] token of each sentence :</p>
<p><a href="https://i.stack.imgur.com/1OklZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1OklZ.png" alt="enter image description here" /></a></p>
<p><a href="https://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/" rel="noreferrer">Image source</a></p>
<p>I went through many discussion on this <a href="https://github.com/huggingface/transformers/issues/1950" rel="noreferrer">huggingface issue</a>, <a href="https://datascience.stackexchange.com/questions/66207/what-is-purpose-of-the-cls-token-and-why-its-encoding-output-is-important">datascience forum question</a>, <a href="https://github.com/google-research/bert/issues/196" rel="noreferrer">github issue</a> Most of the data scientist gives this explanation :</p>
<blockquote>
<p>BERT is bidirectional, the [CLS] is encoded including all
representative information of all tokens through the multi-layer
encoding procedure. The representation of [CLS] is individual in
different sentences.</p>
</blockquote>
<p>My question is: why did the authors ignore the other information (each token's vector) and use only the [CLS] token for classification, rather than taking the average, max_pool, or other methods to make use of all the information?</p>
<p>How does this [CLS] token help compare to the average of all token vectors?</p> | 2020-07-02 21:25:04.807000+00:00 | 2020-09-09 00:15:55.043000+00:00 | 2020-07-03 01:18:12.007000+00:00 | tensorflow|machine-learning|keras|deep-learning|bert-language-model | ['https://arxiv.org/pdf/1810.04805.pdf', 'https://huggingface.co/transformers/v3.0.2/model_doc/bert.html#bertmodel'] | 2 |
57,612,327 | <p>u r right. That method only takes the data uncertainty into account (assuming u don't fit the neural net while applying the noise). As a side note, alternatively when fitting the data using a neural net u may also apply mixture density networks (see one of the many tutorials). </p>
<p>More importantly, in order to account for model uncertainty you should apply Bayesian neural nets. You could start e.g. with Monte-Carlo dropout. Also very interesting is this work on performing sampling-free inference when using Monte-Carlo dropout:</p>
<p><a href="https://arxiv.org/abs/1908.00598" rel="nofollow noreferrer">https://arxiv.org/abs/1908.00598</a></p>
<p>This work explicitly uses error propagation through neural networks and should be very interesting for you!</p>
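<p>As a rough illustration (a generic sketch, not code from the linked paper), Monte-Carlo dropout for an already trained Keras model that contains dropout layers just keeps dropout active at prediction time and samples repeatedly:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def mc_dropout_predict(model, x, n_samples=100):
    # `model` is assumed to be a trained tf.keras model with Dropout layers;
    # training=True keeps dropout active at inference time.
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)  # predictive mean and spread
</code></pre>
<p>Combining this with the input-noise sampling you describe gives you both the model and the data contribution to the predictive uncertainty.</p>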
<p>Best</p> | 2019-08-22 15:05:46.253000+00:00 | 2019-08-22 15:05:46.253000+00:00 | null | null | 33,323,571 | <p>I have inputs <code>x_1, ..., x_n</code> that have known 1-sigma uncertainties <code>e_1, ..., e_n</code>. I am using them to predict outputs <code>y_1, ..., y_m</code> on a trained neural network. How can I obtain 1-sigma uncertainties on my predictions? </p>
<p>My idea is to randomly perturb each input <code>x_i</code> with normal noise having mean 0 and standard deviation <code>e_i</code> a large number of times (say, 10000), and then take the median and standard deviation of each prediction <code>y_i</code>. Does this work? </p>
<p>I fear that this only takes into account the "random" error (from the measurements) and not the "systematic" error (from the network), i.e., each prediction inherently has some error to it that is not being considered in this approach. How can I properly obtain 1-sigma error bars on my predictions?</p> | 2015-10-24 21:29:35.420000+00:00 | 2019-08-22 15:05:46.253000+00:00 | 2015-10-24 21:36:15.323000+00:00 | machine-learning|neural-network | ['https://arxiv.org/abs/1908.00598'] | 1 |
61,037,182 | <p>I'm assuming you're using the scikit-learn random forest model, since it has that <code>feature_importances_</code> attribute. Though you might see similar results by checking the correlation between features and your target variable, <code>feature_importances_</code> uses a more sophisticated approach. From the <a href="https://scikit-learn.org/stable/modules/ensemble.html#feature-importance-evaluation" rel="nofollow noreferrer">user guide</a>:</p>
<blockquote>
<p>The relative rank (i.e. depth) of a feature used as a decision node in
a tree can be used to assess the relative importance of that feature
with respect to the predictability of the target variable. Features
used at the top of the tree contribute to the final prediction
decision of a larger fraction of the input samples. The expected
fraction of the samples they contribute to can thus be used as an
estimate of the relative importance of the features. In scikit-learn,
the fraction of samples a feature contributes to is combined with the
decrease in impurity from splitting them to create a normalized
estimate of the predictive power of that feature.</p>
<p>By averaging the estimates of predictive ability over several
randomized trees one can reduce the variance of such an estimate and
use it for feature selection. This is known as the mean decrease in
impurity, or MDI. Refer to [L2014] for more information on MDI and
feature importance evaluation with Random Forests.</p>
</blockquote>
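<p>To see the difference in practice, here is a small illustrative sketch (toy data, not from your setup) that computes both quantities side by side:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Toy data for illustration only.
rng = np.random.RandomState(0)
X = pd.DataFrame(rng.randn(500, 3), columns=["a", "b", "c"])
y = X["a"] + X["b"] ** 2 + 0.1 * rng.randn(500)   # nonlinear in "b"

# Pearson correlation with the target (captures linear association only)
print(X.corrwith(y))

# Impurity-based importances from a random forest (captures nonlinear splits too)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(dict(zip(X.columns, rf.feature_importances_)))
</code></pre>
<p>A feature like <code>b</code> can have near-zero linear correlation with the target while still getting a large importance, so the two measures are not the same thing even if they happen to agree on your particular dataset.</p>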
<p>The reference there was to <em><a href="https://arxiv.org/pdf/1407.7502.pdf" rel="nofollow noreferrer">Understanding Random Forests: From Theory to Practice</a></em>. Of specific interest to you will be Chapter 6 (p. 135), "Understanding Variable Importances".</p> | 2020-04-05 02:12:21.030000+00:00 | 2020-04-05 02:12:21.030000+00:00 | null | null | 61,037,114 | <p>I want to see the correlation between variables. So first, i used Correlation Matrix. It showed me the correlation between all variables. Then i create my <code>random forest regressor</code> model. In an article i found that it has function of <code>feature_importances_</code>. It tells the correlation between the independent variables and the dependent variable. So i tried it, then i saw that it shows the same correlation values with the values of Correlation Matrix. My question is, then what is the difference between Correlation Matrix and Random Forest Feature Importance?</p> | 2020-04-05 01:58:35.490000+00:00 | 2020-04-05 16:43:38.870000+00:00 | 2020-04-05 02:26:05.780000+00:00 | python|machine-learning|scikit-learn | ['https://scikit-learn.org/stable/modules/ensemble.html#feature-importance-evaluation', 'https://arxiv.org/pdf/1407.7502.pdf'] | 2 |
58,602,515 | <p>There is a paper about SAC with discrete action spaces. It says SAC for discrete action spaces doesn't need re-parametrization tricks like Gumbel softmax; instead, SAC needs some modifications. Please refer to the paper for more details.</p>
<p><a href="https://arxiv.org/abs/1910.07207" rel="nofollow noreferrer">Paper</a> /
<a href="https://github.com/p-christ/Deep-Reinforcement-Learning-Algorithms-with-PyTorch" rel="nofollow noreferrer">Author's implementation (without codes for atari)</a> /
<a href="https://github.com/ku2482/sac-discrete.pytorch" rel="nofollow noreferrer">Reproduction (with codes for atari)</a></p>
<p>I hope it helps you.</p> | 2019-10-29 06:44:08.427000+00:00 | 2019-10-30 07:08:03.470000+00:00 | 2019-10-30 07:08:03.470000+00:00 | null | 56,226,133 | <p>I'm trying to implement the soft actor critic algorithm for discrete action space and I have trouble with the loss function.</p>
<p>Here is the link from SAC with continuous action space:
<a href="https://spinningup.openai.com/en/latest/algorithms/sac.html" rel="nofollow noreferrer">https://spinningup.openai.com/en/latest/algorithms/sac.html</a></p>
<p>I do not know what I'm doing wrong.</p>
<p>The problem is that the network does not learn anything on the CartPole environment.</p>
<p>The full code on github: <a href="https://github.com/tk2232/sac_discrete/blob/master/sac_discrete.py" rel="nofollow noreferrer">https://github.com/tk2232/sac_discrete/blob/master/sac_discrete.py</a></p>
<p>Here is my idea how to calculate the loss for discrete actions.</p>
<h2>Value Network</h2>
<pre class="lang-py prettyprint-override"><code>class ValueNet:
def __init__(self, sess, state_size, hidden_dim, name):
self.sess = sess
with tf.variable_scope(name):
self.states = tf.placeholder(dtype=tf.float32, shape=[None, state_size], name='value_states')
self.targets = tf.placeholder(dtype=tf.float32, shape=[None, 1], name='value_targets')
x = Dense(units=hidden_dim, activation='relu')(self.states)
x = Dense(units=hidden_dim, activation='relu')(x)
self.values = Dense(units=1, activation=None)(x)
optimizer = tf.train.AdamOptimizer(0.001)
loss = 0.5 * tf.reduce_mean((self.values - tf.stop_gradient(self.targets)) ** 2)
self.train_op = optimizer.minimize(loss, var_list=_params(name))
def get_value(self, s):
return self.sess.run(self.values, feed_dict={self.states: s})
def update(self, s, targets):
self.sess.run(self.train_op, feed_dict={self.states: s, self.targets: targets})
</code></pre>
<p>In the Q_Network I'm gathering the values at the collected actions:</p>
<h2>Example</h2>
<pre><code>q_out = [[0.5533, 0.4444], [0.2222, 0.6666]]
collected_actions = [0, 1]
gather = [[0.5533], [0.6666]]
</code></pre>
<h2>gather function</h2>
<pre class="lang-py prettyprint-override"><code>def gather_tensor(params, idx):
idx = tf.stack([tf.range(tf.shape(idx)[0]), idx[:, 0]], axis=-1)
params = tf.gather_nd(params, idx)
return params
</code></pre>
<h2>Q Network</h2>
<pre class="lang-py prettyprint-override"><code>class SoftQNetwork:
def __init__(self, sess, state_size, action_size, hidden_dim, name):
self.sess = sess
with tf.variable_scope(name):
self.states = tf.placeholder(dtype=tf.float32, shape=[None, state_size], name='q_states')
self.targets = tf.placeholder(dtype=tf.float32, shape=[None, 1], name='q_targets')
self.actions = tf.placeholder(dtype=tf.int32, shape=[None, 1], name='q_actions')
x = Dense(units=hidden_dim, activation='relu')(self.states)
x = Dense(units=hidden_dim, activation='relu')(x)
x = Dense(units=action_size, activation=None)(x)
self.q = tf.reshape(gather_tensor(x, self.actions), shape=(-1, 1))
optimizer = tf.train.AdamOptimizer(0.001)
loss = 0.5 * tf.reduce_mean((self.q - tf.stop_gradient(self.targets)) ** 2)
self.train_op = optimizer.minimize(loss, var_list=_params(name))
def update(self, s, a, target):
self.sess.run(self.train_op, feed_dict={self.states: s, self.actions: a, self.targets: target})
def get_q(self, s, a):
return self.sess.run(self.q, feed_dict={self.states: s, self.actions: a})
</code></pre>
<h2>Policy Net</h2>
<pre class="lang-py prettyprint-override"><code>class PolicyNet:
def __init__(self, sess, state_size, action_size, hidden_dim):
self.sess = sess
with tf.variable_scope('policy_net'):
self.states = tf.placeholder(dtype=tf.float32, shape=[None, state_size], name='policy_states')
self.targets = tf.placeholder(dtype=tf.float32, shape=[None, 1], name='policy_targets')
self.actions = tf.placeholder(dtype=tf.int32, shape=[None, 1], name='policy_actions')
x = Dense(units=hidden_dim, activation='relu')(self.states)
x = Dense(units=hidden_dim, activation='relu')(x)
self.logits = Dense(units=action_size, activation=None)(x)
dist = Categorical(logits=self.logits)
optimizer = tf.train.AdamOptimizer(0.001)
# Get action
self.new_action = dist.sample()
self.new_log_prob = dist.log_prob(self.new_action)
# Calc loss
log_prob = dist.log_prob(tf.squeeze(self.actions))
loss = tf.reduce_mean(tf.squeeze(self.targets) - 0.2 * log_prob)
self.train_op = optimizer.minimize(loss, var_list=_params('policy_net'))
def get_action(self, s):
action = self.sess.run(self.new_action, feed_dict={self.states: s[np.newaxis, :]})
return action[0]
def get_next_action(self, s):
next_action, next_log_prob = self.sess.run([self.new_action, self.new_log_prob], feed_dict={self.states: s})
return next_action.reshape((-1, 1)), next_log_prob.reshape((-1, 1))
def update(self, s, a, target):
self.sess.run(self.train_op, feed_dict={self.states: s, self.actions: a, self.targets: target})
</code></pre>
<h2>Update function</h2>
<pre class="lang-py prettyprint-override"><code>def soft_q_update(batch_size, frame_idx):
gamma = 0.99
alpha = 0.2
state, action, reward, next_state, done = replay_buffer.sample(batch_size)
action = action.reshape((-1, 1))
reward = reward.reshape((-1, 1))
done = done.reshape((-1, 1))
</code></pre>
<h3>Q_target</h3>
<p><img src="https://latex.codecogs.com/gif.latex?r&space;+&space;%5Cgamma&space;(1-d)V_%7B%5Cpsi&space;targ%7D(s%7B%7D%27)" title="r + \gamma (1-d)V_{\psi targ}(s{}')" /></p>
<pre class="lang-py prettyprint-override"><code>v_ = value_net_target.get_value(next_state)
q_target = reward + (1 - done) * gamma * v_
</code></pre>
<h3>V_target</h3>
<p><img src="https://latex.codecogs.com/gif.latex?%5Cmin_%7Bi=1,2%7D&space;Q_%7B%5Cphi_i%7D(s,%5Ctilde%7Ba%7D)&space;-&space;%5Calpha&space;%5Clog&space;%5Cpi_%7B%5Ctheta%7D(%5Ctilde%7Ba%7D|s)" title="\min_{i=1,2} Q_{\phi_i}(s,\tilde{a}) - \alpha \log \pi_{\theta}(\tilde{a}|s)" /></p>
<pre class="lang-py prettyprint-override"><code>next_action, next_log_prob = policy_net.get_next_action(state)
q1 = soft_q_net_1.get_q(state, next_action)
q2 = soft_q_net_2.get_q(state, next_action)
q = np.minimum(q1, q2)
v_target = q - alpha * next_log_prob
</code></pre>
<h3>Policy_target</h3>
<p><img src="https://latex.codecogs.com/gif.latex?Q%5E%7B%5Cpi&space;%7D(s,a))" title="Q^{\pi }(s,a))" /></p>
<pre class="lang-py prettyprint-override"><code>q1 = soft_q_net_1.get_q(state, action)
q2 = soft_q_net_2.get_q(state, action)
policy_target = np.minimum(q1, q2)
</code></pre> | 2019-05-20 18:11:23.630000+00:00 | 2021-04-11 01:12:01.010000+00:00 | 2020-06-20 09:12:55.060000+00:00 | python|tensorflow|machine-learning|reinforcement-learning | ['https://arxiv.org/abs/1910.07207', 'https://github.com/p-christ/Deep-Reinforcement-Learning-Algorithms-with-PyTorch', 'https://github.com/ku2482/sac-discrete.pytorch'] | 3 |
61,294,337 | <p>Try this:</p>
<ol>
<li>Train the first model, which sets <code>trainable</code> to <code>False</code>. You don't have to train it to saturation, so I would start with your 5 epochs.</li>
<li>Go back and set <code>trainable</code> to <code>True</code> for all the <code>vgg19</code> parameters. Then, <a href="https://keras.io/getting-started/faq/#how-can-i-freeze-keras-layers" rel="nofollow noreferrer">per the documentation</a>, you can rebuild and recompile the model to have these changes take effect.</li>
<li>Continue training on the rebuilt model, which now has all parameters available for tuning.</li>
</ol>
<p>It is very common in transfer learning to completely freeze the transferred layers in order to preserve them. In the early stages of training your additional layers don't know what to do. That means a noisy gradient by the time it gets to the transferred layers, which will quickly "detune" them away from their previously well-tuned weights.</p>
<p>Putting it all together into some code, it would look something like this.</p>
<pre><code># Original code. Transfer VGG and freeze the weights.
vgg19 = keras.applications.vgg19.VGG19(
weights='imagenet',
include_top=False,
input_shape=(img_height, img_width, img_channels))
for layer in vgg19.layers:
layer.trainable = False
model = Sequential(layers=vgg19.layers)
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(10, activation='softmax'))
opt = Adam(learning_rate=0.001, beta_1=0.9)
model.compile(
loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
model.fit()
# New second stage: unfreeze and continue training.
for layer in vgg19.layers:
layer.trainable = True
full_model = Sequential(layers=model.layers)
full_model.compile(
loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
full_model.fit()
</code></pre>
<p>You may want to tune the learning rate for the fine-tuning stage. It's not essential at the start, just something to keep in mind.</p>
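<p>For instance (the value here is only an illustration, not a prescription), you could recompile the unfrozen model with a smaller learning rate before the second round of training:</p>
<pre><code># Stage 2 with a reduced learning rate for fine-tuning (1e-5 is an illustrative choice).
finetune_opt = Adam(learning_rate=1e-5, beta_1=0.9)
full_model.compile(
    loss='categorical_crossentropy',
    optimizer=finetune_opt,
    metrics=['accuracy'])
full_model.fit()
</code></pre>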
<hr>
<p>A third option is to use discriminative learning rates, as introduced by Jeremy Howard and Sebastian Ruder in the <a href="https://arxiv.org/abs/1801.06146" rel="nofollow noreferrer">ULMFiT paper</a>. The idea is that, in Transfer Learning, you usually want the later layers to learn faster than the earlier, transferred layers. So you actually set the learning rates to be different for different sets of layers. The fastai library has <a href="https://docs.fast.ai/basic_train.html#Discriminative-layer-training" rel="nofollow noreferrer">a PyTorch implementation</a> that works by dividing the model into "layer groups" and allowing different parameters for each.</p> | 2020-04-18 18:40:42.910000+00:00 | 2020-04-19 16:58:38.890000+00:00 | 2020-04-19 16:58:38.890000+00:00 | null | 61,292,890 | <p>I have two models initialized like this</p>
<pre><code>vgg19 = keras.applications.vgg19.VGG19(
weights='imagenet',
include_top=False,
input_shape=(img_height, img_width, img_channels))
for layer in vgg19.layers:
layer.trainable = False
model = Sequential(layers=vgg19.layers)
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(10, activation='softmax'))
opt = Adam(learning_rate=0.001, beta_1=0.9)
model.compile(
loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
</code></pre>
<p>and</p>
<pre><code>vgg19_2 = keras.applications.vgg19.VGG19(
weights='imagenet',
include_top=False,
input_shape=(img_height, img_width, img_channels))
model2 = Sequential(layers=vgg19_2.layers)
model2.add(Dense(1024, activation='relu'))
model2.add(Dense(512, activation='relu'))
model2.add(Dense(10, activation='softmax'))
opt = Adam(learning_rate=0.001, beta_1=0.9)
model2.compile(
loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
</code></pre>
<p>In other words the only difference is the second model doesn't set vgg19 layers' trainable parameter to false. Unfortunately the model with trainable set to true does not learn the data.</p>
<p>When I use model.fit I get</p>
<pre><code>Trainable set to false:
Epoch 1/51
2500/2500 [==============================] - 49s 20ms/step - loss: 1.4319 - accuracy: 0.5466 - val_loss: 1.3951 - val_accuracy: 0.5693
Epoch 2/51
2500/2500 [==============================] - 47s 19ms/step - loss: 1.1508 - accuracy: 0.6009 - val_loss: 0.7832 - val_accuracy: 0.6023
Epoch 3/51
2500/2500 [==============================] - 48s 19ms/step - loss: 1.0816 - accuracy: 0.6256 - val_loss: 0.6782 - val_accuracy: 0.6153
Epoch 4/51
2500/2500 [==============================] - 47s 19ms/step - loss: 1.0396 - accuracy: 0.6450 - val_loss: 1.3045 - val_accuracy: 0.6103
</code></pre>
<p>The model trains to about 65% accuracy within a few epochs. However using model2 which should be able to make even better predictions (since there are more trainable parameters) I get:</p>
<pre><code>Epoch 1/5
2500/2500 [==============================] - 226s 90ms/step - loss: 2.3028 - accuracy: 0.0980 - val_loss: 2.3038 - val_accuracy: 0.1008
Epoch 2/5
2500/2500 [==============================] - 311s 124ms/step - loss: 2.3029 - accuracy: 0.0980 - val_loss: 2.2988 - val_accuracy: 0.1017
Epoch 3/5
2500/2500 [==============================] - 306s 123ms/step - loss: 2.3029 - accuracy: 0.0980 - val_loss: 2.3052 - val_accuracy: 0.0997
Epoch 4/5
2500/2500 [==============================] - 321s 129ms/step - loss: 2.3029 - accuracy: 0.0972 - val_loss: 2.3028 - val_accuracy: 0.0997
Epoch 5/5
2500/2500 [==============================] - 300s 120ms/step - loss: 2.3028 - accuracy: 0.0988 - val_loss: 2.3027 - val_accuracy: 0.1007
</code></pre>
<p>When I then try to compute weights gradients on my data I get only zeros. I understand that it may take a long time to train such a big neural net like vgg to optimum but considering the calculated gradients for the last 3 layers should be very similar in both cases why is the accuracy so low? Training for more time gives no improvement.</p> | 2020-04-18 16:54:44.350000+00:00 | 2020-04-19 16:58:38.890000+00:00 | null | machine-learning|keras|neural-network|transfer-learning|vgg-net | ['https://keras.io/getting-started/faq/#how-can-i-freeze-keras-layers', 'https://arxiv.org/abs/1801.06146', 'https://docs.fast.ai/basic_train.html#Discriminative-layer-training'] | 3 |
23,161,886 | <p>According to Joaquín Pérez-Iglesias in <a href="https://arxiv.org/abs/0911.5046" rel="nofollow noreferrer">Integrating the Probabilistic Model BM25/BM25F into Lucene</a>, the score function R should be defined as follows:</p>
<p><img src="https://i.stack.imgur.com/sgOGC.png" alt="enter image description here"></p>
<p>where:</p>
<ul>
<li><code>occurs_t^d</code> is the term frequency of <code>t</code> in <code>d</code>,</li>
<li><code>l_d</code> is the document <code>d</code> length.</li>
<li><code>avl_d</code> is the document average length along the collection</li>
<li><code>k_1</code> is a free parameter usually 2 and <code>b</code> in [0,1] (usually 0.75). </li>
</ul>
<p>Assigning 0 to <code>b</code> is equivalent to avoiding the normalisation process, and therefore the document length will not affect the final score.</p>
<p>If <code>b</code> takes 1, we will be carrying out a full length normalisation.</p>
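<p>Putting the pieces together, a small illustrative Python sketch of this per-term score (my own code, not from the paper; the idf value is treated as given, with its definition pictured below) might be:</p>
<pre><code>def bm25_term_score(tf_td, doc_len, avg_doc_len, idf_t, k1=2.0, b=0.75):
    # tf_td: occurrences of term t in document d (illustrative helper)
    length_norm = (1.0 - b) + b * (doc_len / avg_doc_len)
    return idf_t * tf_td / (tf_td + k1 * length_norm)

def bm25_score(query_terms, doc_tf, doc_len, avg_doc_len, idf, k1=2.0, b=0.75):
    # doc_tf: dict term -> frequency in d; idf: dict term -> idf weight
    return sum(bm25_term_score(doc_tf.get(t, 0), doc_len, avg_doc_len, idf[t], k1, b)
               for t in query_terms)
</code></pre>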
<p><img src="https://i.stack.imgur.com/XEBhE.png" alt="enter image description here"></p>
<p>where <code>N</code> is the number of document in the collection and <code>df</code> is the number of documents where appears the term <code>t</code>.</p> | 2014-04-18 20:35:15.557000+00:00 | 2017-06-13 15:31:38.190000+00:00 | 2017-06-13 15:31:38.190000+00:00 | null | 23,161,677 | <p>I am studying the Okapi BMS25 model. I understand everything but two confusion. While calculating document length (dl) and average document length (avdl). I found the document length is </p>
<p><img src="https://i.stack.imgur.com/3UYl4.jpg" alt="enter image description here"></p>
<p>So it is a summation of my keywords/terms in a particular document. But when I see wiki's def:</p>
<p><img src="https://i.stack.imgur.com/Y7gjY.png" alt="enter image description here"></p>
<p>So |D| is the length of the document D in words (i.e. the total word count).
Now, the question is: what is dl actually?</p>
<p>Now, second question how to calculate avdl? (just calculating (doc1+doc2+...N)/N where N is my total no documents in collection? (and avdl is fixed for whole collection?)</p> | 2014-04-18 20:20:34.857000+00:00 | 2017-06-13 15:31:38.190000+00:00 | 2014-04-18 20:24:03.840000+00:00 | information-retrieval | ['https://arxiv.org/abs/0911.5046'] | 1 |
57,708,625 | <p>@j23 has given a very reasonable operational answer. By way of possible explanation of the behaviour you observed, the number you are printing is not a single <code>$RANDOM</code> value, but the product of two such values. Pairs of consecutive outputs of a pseudorandom number generator (PRNG) are not necessarily as independent as you might like. E.g., Matlab <code>randn</code> had a correlation problem in 2006 (arXiv:math/0603058).</p> | 2019-08-29 10:53:47.187000+00:00 | 2019-08-29 10:53:47.187000+00:00 | null | null | 57,705,578 | <p>I wrote a script with bash in while loop. The code:</p>
<pre><code>number=0
while [ 1500 -gt $number ]
do
var="abcdefghijklmnopqrstuvxyz"
letter1="${var:$(( RANDOM % ${#var} )):1}"
letter2="${var:$(( RANDOM % ${#var} )):1}"
a=$RANDOM
b=$RANDOM
c=$(( $a * $b))
echo "$letter1$letter2 $c" >> a.txt
number=$(( 1 + $number ))
done
</code></pre>
<p>but now I see duplicate numbers in result:</p>
<pre><code>Result:
ab 15474
at 15474
yh 15474
gd 15474
re 18696
jg 18696
</code></pre>
<p>The numbers are duplicated.</p>
<p>I guess the <code>$RANDOM</code> value changes only after a fixed interval, and my script starts the next iteration of the <code>while</code> loop faster than <code>$RANDOM</code> changes.</p>
<p>Can you help me an other randomization way?</p> | 2019-08-29 07:53:50.393000+00:00 | 2019-08-30 03:50:34.093000+00:00 | 2019-08-29 10:15:47.567000+00:00 | linux|bash|random|debian | [] | 0 |
36,675,792 | <p>I will first go through some general links and resources and then try to describe the general idea of the algorithm.</p>
<p><strong>SEEDS implementations:</strong></p>
<p>You obviously already saw the documentation <a href="http://docs.opencv.org/3.0-beta/modules/ximgproc/doc/superpixels.html" rel="noreferrer">here</a>. A usage example for OpenCV's SEEDS implementation can be found here: <a href="https://github.com/Itseez/opencv_contrib/blob/6cd8e9f556c8c55c05178dec05d5277ae00020d9/modules/ximgproc/samples/seeds.cpp" rel="noreferrer">Itseez/opencv_contrib/modules/ximgproc/samples/seeds.cpp</a>, and allows to adapt the number of superpixels, the number of levels and other parameters live - so after reading up on the idea behind SEEDS you should definitely try the example. The original implementation, as well as a revised implementation (part of my bachelor thesis), can be found on GitHub: <a href="https://github.com/davidstutz/superpixels-revisited/tree/master/lib_seeds" rel="noreferrer">davidstutz/superpixels-revisited/lib_seeds</a> and <a href="https://github.com/davidstutz/seeds-revised" rel="noreferrer">davidstutz/seeds-revised</a>. The implementations should be pretty comparable, though.</p>
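<p>If you just want to experiment quickly from Python, a minimal sketch using OpenCV's <code>ximgproc</code> module (assuming <code>opencv-contrib-python</code> is installed; the parameter values below are only examples) looks roughly like this:</p>
<pre><code>import cv2

img = cv2.imread("image.png")
height, width, channels = img.shape

# Arguments: width, height, channels, num_superpixels, num_levels, prior,
# histogram_bins, double_step (the chosen values are illustrative).
seeds = cv2.ximgproc.createSuperpixelSEEDS(width, height, channels, 400, 4, 2, 5, False)
seeds.iterate(cv2.cvtColor(img, cv2.COLOR_BGR2HSV), 10)

labels = seeds.getLabels()              # per-pixel superpixel index
contours = seeds.getLabelContourMask()  # boundary mask for visualization
</code></pre>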
<p><strong>Publication and other resources:</strong></p>
<p>The paper was released on arxiv: <a href="http://arxiv.org/abs/1309.3848" rel="noreferrer">arxiv.org/abs/1309.3848</a>. A somewhat shorter description (which may be easier to follow) is available on my website: <a href="http://davidstutz.de/efficient-high-quality-superpixels-seeds-revised/" rel="noreferrer">davidstutz.de/efficient-high-quality-superpixels-seeds-revised</a>. The provided algorithm description should be easy to follow and -- in the best case -- allow to implement SEEDS (see the "Algorithm" section of the article). A more precise description can also be found in my <a href="http://davidstutz.de/wordpress/wp-content/uploads/2014/09/thesis.pdf" rel="noreferrer">bachelor thesis</a>, in particular in section 3.1.</p>
<p><strong>General description:</strong></p>
<p><em>Note that this description is based on both the above mentioned article and my bachelor thesis. Both offer a mathematically concise description.</em></p>
<p>Given an image with width <code>W</code> and height <code>H</code>, SEEDS starts by grouping pixels into blocks of size <code>w x h</code>. These blocks are further arranged into groups of <code>2 x 2</code>. This scheme is repeated for <code>L</code> levels (this is the number of levels parameter). So at level <code>l</code>, you have blocks of size</p>
<pre><code>w*2^(l - 1) x h*2^(l - 1).
</code></pre>
<p>The number of superpixels is determined by the blocks at level <code>L</code>, i.e. letting <code>w_L</code> and <code>h_L</code> denote the width and height of the blocks at level <code>L</code>, the number of superpixels is</p>
<pre><code>S = W/w_L * H/h_L
</code></pre>
<p>where we use integer divisions.</p>
<p>The initial superpixel segmentation which is now iteratively refined by exchanging blocks of pixels and individual pixels between neighboring superpixels. To this end, color histograms of the superpixels and all blocks are computed (the histograms are determined by the number of bins parameter in the implementation). This can be done efficiently by seeing that the histogram of a superpixel is just the sum of the histograms of the <code>2 x 2</code> blocks it consists of, and the histogram of one of these blocks is the sum of the histograms of the <code>2 x 2</code> underlying blocks (and so on). So let <code>h_i</code> be the histogram of a block of pixels belonging to superpixel <code>j</code>, and <code>h_j</code> the histogram of this superpixel. Then, the similarity of the block <code>j</code> to superpixel <code>j</code> is computed by the histogram intersection of <code>h_i</code> and <code>h_j</code> (see one of the above resources for the equation). Similarly, the similarity of a pixel and a superpixel is either the Euclidean distance of the pixel color to the superpixel mean color (this is the better performing option), or the probability of the pixel's color belonging to the superpixel (which is simply the normalized entry of the superpixel's histogram at the pixel's color). With this background, the algorithm can be summarized as follow:</p>
<pre><code>initialize block hierarchy and the initial superpixel segmentation
for l = L - 1 to 1 // go through all levels
// for level l = L these are the initial superpixels
for each block in level l
initialize the color histogram of this block
// as described this is done using the histograms of the level below
// now we start exchanging blocks between superpixels
for l = L - 1 to 1
for each block at level l
if the block lies at the border to a superpixel it does not belong to
compute the histogram intersection with both superpixels
assign the block to the superpixel with the highest intersection
// now we exchange individual pixels between superpixels
for all pixels
if the pixel lies at the border to a superpixel it does not belong to
compute the Euclidean distance of the pixel to both superpixel's mean color
assign the pixel to the closest superpixel
</code></pre>
<p>In practice, the block updates and pixel updates are iterated more than once (this is the number of iterations parameter), and often twice as many iterations per level are done (this is the double step parameter). In the original implementation, the number of superpixels is computed from <code>w</code>, <code>h</code>, <code>L</code> and the image size. In OpenCV, using the above equations, <code>w</code> and <code>h</code> are computed from the desired number of superpixels and the number of levels (which are determined by the corresponding parameters).</p>
<p>One parameter remains unclear: the prior tries to enforce smooth boundaries. In practice this is done by considering the <code>3 x 3</code> neighborhood around a pixel which is going to be updated. If most of the pixels in this neighborhood belong to superpixel <code>j</code>, the pixel to be updated is also more likely to belong to superpixel <code>j</code> (and vice versa). OpenCV's implementation as well as my implementation (SEEDS revised), allow to consider larger neighborhoods <code>k x k</code> with <code>k in {0,...,5}</code> in the case of OpenCV.</p> | 2016-04-17 11:17:55.450000+00:00 | 2016-04-17 11:17:55.450000+00:00 | null | null | 36,672,019 | <p>I am interested in superpixels extracted via energy-driven sampling (SEEDS) which is a method of image segmentation using superpixels. This is also what OpenCV uses to create superpixels. I am having troubles finding documentation behind the SEEDS algorithm. OpenCV gives a very general description which can be found <a href="http://docs.opencv.org/3.0-beta/modules/ximgproc/doc/superpixels.html" rel="nofollow">here</a>. </p>
<p>I am looking for a more in depth description on how SEEDS functions (either a general walk through or a mathematical explanation). Any links or thoughts concerning the algorithm would be much appreciated! I can't seem to find any good material. Thanks!</p> | 2016-04-17 02:22:16.217000+00:00 | 2016-04-17 11:17:55.450000+00:00 | null | opencv|image-segmentation|superpixels | ['http://docs.opencv.org/3.0-beta/modules/ximgproc/doc/superpixels.html', 'https://github.com/Itseez/opencv_contrib/blob/6cd8e9f556c8c55c05178dec05d5277ae00020d9/modules/ximgproc/samples/seeds.cpp', 'https://github.com/davidstutz/superpixels-revisited/tree/master/lib_seeds', 'https://github.com/davidstutz/seeds-revised', 'http://arxiv.org/abs/1309.3848', 'http://davidstutz.de/efficient-high-quality-superpixels-seeds-revised/', 'http://davidstutz.de/wordpress/wp-content/uploads/2014/09/thesis.pdf'] | 7 |
59,618,044 | <p>You did not provide the full code of the algorithm.
Check the <a href="https://arxiv.org/abs/1509.02971" rel="nofollow noreferrer">DDPG paper</a> for the network architectures and hyperparameters; the paper shows that the same algorithm configuration works very well across different problems and environments.
Make sure you use target networks, experience replay and exploration correctly...</p>
<ul>
<li><p><strong>Target networks</strong> make learning stable. For the critic network update you should use the outputs of the target actor and target critic networks and compute the TD error based on Q-learning, using a sample of data from the replay buffer.</p></li>
<li><p>For off-policy algorithms such as DDPG, exploration can be handled by simply adding noise directly to the action. You can choose the noise function depending on the environment (refer again to the paper and check the Ornstein-Uhlenbeck noise process).</p></li>
</ul> | 2020-01-06 19:44:23.100000+00:00 | 2020-01-06 19:44:23.100000+00:00 | null | null | 59,470,798 | <p>I'm trying to implement DDPG with tensorflow 2.
The problem is that it doesn't learn: even after adding some noise and an exploitation-vs-exploration factor, the agent seems to get stuck in a generic direction every time, only changing its intensity.</p>
<p>This is my Actor neural network:</p>
<pre><code> d1 = self.dense(states, weights[0], weights[1])
d1 = tf.nn.relu(d1)
d2 = self.dense(d1, weights[2], weights[3])
d2 = tf.nn.relu(d2)
d3 = self.dense(d2, weights[4], weights[5])
d3 = tf.nn.tanh(d3)
return d3*self.action_bounds
</code></pre>
<p>and this is its training function: </p>
<pre><code>def train(self, states, critic_gradients):
with tf.GradientTape() as t:
actor_pred = self.network(states)
actor_gradients = \
t.gradient(actor_pred, self.weights, -critic_gradients)
actor_gradients = list(map(lambda x: x/self.batch_size, actor_gradients))
self.opt.apply_gradients(zip(actor_gradients, self.weights))
</code></pre>
<p>Where critic_gradients are taken by the critic class.</p>
<p>The critic net is similar to the actor's one:</p>
<pre><code>def _network(self, states, actions, weights, axis):
x = tf.concat([states, actions], axis=axis)
d1 = self.dense(x, weights[0], weights[1])
d1 = tf.nn.relu(d1)
d2 = self.dense(d1, weights[2], weights[3])
d2 = tf.nn.relu(d2)
d3 = self.dense(d2, weights[4], weights[5])
d3 = tf.nn.relu(d3)
return d3
</code></pre>
<p>With weights:
<code>self.shapes = [
[self.state_size+self.action_size, 64],
[64],
[64, 32],
[32],
[32, 1],
[1]
]</code></p>
<p>The critic trains with a simple minimize step over a mean squared error loss.</p>
<p>I can't tell whether the error is in the main script (which I wrote following the original paper) or in the classes.
One thing to note is that I tested the critic's network with a simple dataset and it converges.
I don't know how to try the actor network, i'm just using Gym with Pendulum environment.</p> | 2019-12-24 15:25:28.733000+00:00 | 2020-01-06 19:44:23.100000+00:00 | null | tensorflow|neural-network|reinforcement-learning | ['https://arxiv.org/abs/1509.02971'] | 1 |
54,587,814 | <p>You would like to combine the pretrained model labels with your own labels, or in other words you are augmenting the pretrained model with new classes. The practical approach is to use the basis of transfer learning.</p>
<p>But let me tell you this, this is still a hot research topic. It is easier to retrain with your own classes than to add additional classes. Hard, not impossible!</p>
<p>What you should be doing: One way to do this is to change the last softmax layer to identify more classes than it is designed to label. <strong>Network surgery</strong>. You will have to train the model again and it will take more time.</p>
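<p>A minimal Keras sketch of that kind of surgery (my own illustration under an assumed Keras setup, not code from the links below) would keep the convolutional base, replace only the classification head, and then retrain on data covering all 1020 labels:</p>
<pre><code>from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# Illustrative head replacement: input shape and optimizer are example choices.
base = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
x = GlobalAveragePooling2D()(base.output)
outputs = Dense(1020, activation='softmax')(x)   # 1000 ImageNet classes + your 20
model = Model(base.input, outputs)

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(...) on a dataset that includes examples of all 1020 classes
</code></pre>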
<p>Another way is to create a new custom model with all 1020 labels and train it on the whole dataset, which is not very efficient and you cannot exploit the weights from the pretrained model and you have to perform full training again.</p>
<p>A hack can be to use the checkpoint which already predicts 1000 classes. Add your data for the new class. Now you should combine the new class' data along with Imagenet data set, create TFRecords for all 1020 classes and train from the network checkpoint. </p>
<p>What you are trying to do is called "<strong>learning without forgetting</strong>". Please refer to the below paper for more information about how to implement this.</p>
<p><a href="https://arxiv.org/abs/1606.09282" rel="nofollow noreferrer">https://arxiv.org/abs/1606.09282</a></p>
<p>And the matlab code is available here.</p>
<p><a href="https://github.com/lizhitwo/LearningWithoutForgetting" rel="nofollow noreferrer">https://github.com/lizhitwo/LearningWithoutForgetting</a></p>
<p>You may also try tweaking the below file to get the result you want.</p>
<p><a href="https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py" rel="nofollow noreferrer">https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py</a></p>
<p>Additionally,
To get more information on retraining with pretrained models refer to the below link.</p>
<p><a href="https://www.tensorflow.org/tutorials/images/hub_with_keras" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/images/hub_with_keras</a></p>
<p>Now coming to the answer,</p>
<p>How do we do it:</p>
<p>Rebuild the same model with increased number of labels in the final classification layer then restore all weights from the pretrained inception V3 except the final layer and fine tune the model.</p>
<p>We need to specify two flags for this:</p>
<pre><code>1. --pretrained_model_checkpoint_path
2. --fine_tune
</code></pre>
<p>The code will look like this.</p>
<pre><code># Build the model. Note that we need to make sure the TensorFlow is ready to
# use before this as this command will not build TensorFlow.
cd tensorflow-models/inception
bazel build //inception:custom_train
# Path to the downloaded Inception-v3 model.
MODEL_PATH="${INCEPTION_MODEL_DIR}/inception-v3/model.ckpt-1456935"
# Directory where the data resides.
DATA_DIR=/tmp/custom-data/
# Directory where to save the checkpoint and events files.
TRAIN_DIR=/tmp/custom_train/
# Run the fine-tuning on the flowers data set starting from the pre-trained
# Imagenet-v3 model.
bazel-bin/inception/flowers_train \
--train_dir="${TRAIN_DIR}" \
--data_dir="${DATA_DIR}" \
--pretrained_model_checkpoint_path="${MODEL_PATH}" \
--fine_tune=True \
--initial_learning_rate=0.001 \
--input_queue_memory_factor=1
</code></pre>
<p>Refer to the below documentation for more information.</p>
<p><a href="https://github.com/tensorflow/models/tree/master/research/inception" rel="nofollow noreferrer">https://github.com/tensorflow/models/tree/master/research/inception</a></p> | 2019-02-08 07:31:55.783000+00:00 | 2019-02-08 08:02:14.117000+00:00 | 2019-02-08 08:02:14.117000+00:00 | null | 54,548,361 | <p>I have two Pre-Trained and saved Inception Models.</p>
<p>Model 1 = Inception Model with Imagenet Classes - It can predict an image in 1000 classes.</p>
<p>Model 2 = Inception Model with my own classification (20 Classes) - It can predict an image in 20 classes. Performed transfer learning and saved the model.</p>
<p>I would like to combine both of these to predict images across 1020 classes.</p>
<pre><code>Model1 = inception_v3.InceptionV3(weights='imagenet')
</code></pre>
<p>Predicts image in 1000 classes</p>
<pre><code>predictions1 = Model1.predict(processed_image)
Model2 = InceptionV3(weights='imagenet',
include_top=False,
input_shape=(224, 224, 3))
</code></pre>
<p>I have performed transfer learning with my own 20 classes. Same input shape for both models. Predicts image in 20 classes</p>
<pre><code>predictions = Model2.predict_classes(precessed_image)
</code></pre>
<h1>How do I combine 2 Pre-Trained Inception Model to predict Imagenet classes (1000) and my own classifiers (20 Classes) = predict images on 1020 classes?</h1>
<p>Please give me your solution with a small snippet(code) as an example for better understanding. I am pretty new to Keras.</p> | 2019-02-06 07:12:47.770000+00:00 | 2022-01-09 19:11:50.310000+00:00 | 2019-02-07 00:36:51.703000+00:00 | python|keras | ['https://arxiv.org/abs/1606.09282', 'https://github.com/lizhitwo/LearningWithoutForgetting', 'https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py', 'https://www.tensorflow.org/tutorials/images/hub_with_keras', 'https://github.com/tensorflow/models/tree/master/research/inception'] | 5 |
61,218,755 | <p>Let me talk about random integer generating algorithms that are "optimal" in terms of the number of random bits it uses on average. In the rest of this post, we will assume we have a "true" random generator that can produce unbiased and independent random bits.</p>
<p>In 1976, D. E. Knuth and A. C. Yao showed that any algorithm that produces random integers with a given probability, using only random bits, can be represented as a binary tree, where random bits indicate which way to traverse the tree and each leaf (endpoint) corresponds to an outcome. They also gave lower bounds on the number of bits a given algorithm will need on average for this task. In this case, an <em>optimal</em> algorithm to generate integers in <code>[0, n)</code> uniformly, will need at most <code>log2(n) + 2</code> bits on average. There are many examples of <em>optimal</em> algorithms in this sense. One of them is the <a href="https://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Fast Dice Roller</a> by J. Lumbroso (2013) (implemented below), and another is perhaps the algorithm given in the <a href="http://mathforum.org/library/drmath/view/65653.html" rel="nofollow noreferrer">Math Forum</a> in 2004 (see also "<a href="https://arxiv.org/abs/1012.4290" rel="nofollow noreferrer">Bit Recycling for Scaling Random Number Generators</a>"). On the other hand, none of the algorithms <a href="https://www.pcg-random.org/posts/bounded-rands.html" rel="nofollow noreferrer">surveyed by M. O'Neill</a> are optimal, since all of them rely on generating blocks of random bits at a time.</p>
<p>However, any <em>optimal</em> integer generator that is also <em>unbiased</em> will, in general, run forever in the worst case, as also shown by Knuth and Yao. Going back to the binary tree, each one of the n outcomes labels leaves in the binary tree so that each integer in [0, n) can occur with probability 1/n. But if 1/n has a non-terminating binary expansion (which will be the case if n is not a power of 2), this binary tree will necessarily either—</p>
<ul>
<li>Have an "infinite" depth, or</li>
<li>include "rejection" leaves at the end of the tree,</li>
</ul>
<p>and in either case, the algorithm will run forever in the worst case, even if it uses very few random bits on average. (On the other hand, when n is a power of 2, the optimal binary tree will have a finite depth and no rejection nodes.) The Fast Dice Roller is an example of an algorithm that uses "rejection" events to ensure it's unbiased; see the comment in the code below.</p>
<p>Thus, in general, <strong>a random integer generator can be <em>either</em> unbiased <em>or</em> constant-time (or even neither), but not both.</strong> In particular, there is no way to "fix" the worst case of an indefinite running time without introducing bias. For instance, modulo reductions (e.g., <code>Random64Bits() % n</code>) are equivalent to a binary tree in which rejection leaves are replaced with labeled outcomes — but since there are more possible outcomes than rejection leaves, only some of the outcomes can take the place of the rejection leaves, introducing bias. The same kind of binary tree — and the same kind of bias — results if you stop rejecting after a set number of iterations. (However, this bias may be negligible depending on the application. There are also security aspects to random integer generation, which are too complicated to discuss in this answer.)</p>
<h3>Fast Dice Roller Implementation</h3>
<p>The following is a JavaScript implementation of the Fast Dice Roller. Note that it uses rejection events and a loop to ensure it's unbiased. <code>RandomBit()</code> is a method that produces an independent unbiased random bit (e.g., <code>(Math.random() < 0.5 ? 1 : 0)</code> in JavaScript, even though this idiom is not necessarily efficient in terms of the random bits it ultimately relies on).</p>
<pre class="lang-js prettyprint-override"><code>function randomInt(minInclusive, maxExclusive) {
var maxInclusive = (maxExclusive - minInclusive) - 1
var x = 1
var y = 0
while(true) {
x = x * 2
var randomBit = Math.random()<0.5 ? 1 : 0
y = y * 2 + randomBit
if(x > maxInclusive) {
if (y <= maxInclusive) { return y + minInclusive }
// Rejection
x = x - maxInclusive - 1
y = y - maxInclusive - 1
}
}
}
</code></pre>
<p>The following version returns a BigInt, an arbitrary-precision integer supported in recent versions of JavaScript:</p>
<pre class="lang-js prettyprint-override"><code>function randomInt(minInclusive, maxExclusive) {
minInclusive=BigInt(minInclusive)
maxExclusive=BigInt(maxExclusive)
var maxInclusive = (maxExclusive - minInclusive) - BigInt(1)
var x = BigInt(1)
var y = BigInt(0)
while(true) {
x = x * BigInt(2)
var randomBit = BigInt(Math.random()<0.5 ? 1 : 0)
y = y * BigInt(2) + randomBit
if(x > maxInclusive) {
if (y <= maxInclusive) { return y + minInclusive }
// Rejection
x = x - maxInclusive - BigInt(1)
y = y - maxInclusive - BigInt(1)
}
}
}
</code></pre> | 2020-04-14 23:09:50.483000+00:00 | 2022-01-12 13:43:19.307000+00:00 | 2022-01-12 13:43:19.307000+00:00 | null | 18,101,973 | <p>If I have a true random number generator (TRNG) which can give me either a 0 or a 1 each time I call it, then it is trivial to then generate any number in a range with a length equal to a power of 2. For example, if I wanted to generate a random number between 0 and 63, I would simply poll the TRNG 5 times, for a maximum value of 11111 and a minimum value of 00000. The problem is when I want a number in a rangle not equal to 2^n. Say I wanted to simulate the roll of a dice. I would need a range between 1 and 6, with equal weighting. Clearly, I would need three bits to store the result, but polling the TRNG 3 times would introduce two eroneous values. We could simply ignore them, but then that would give one side of the dice a much lower odds of being rolled.</p>
<p>My question of ome most effectively deals with this.</p> | 2013-08-07 11:16:38.410000+00:00 | 2022-01-12 13:43:19.307000+00:00 | null | algorithm|random|statistics | ['https://arxiv.org/abs/1304.1916', 'http://mathforum.org/library/drmath/view/65653.html', 'https://arxiv.org/abs/1012.4290', 'https://www.pcg-random.org/posts/bounded-rands.html'] | 4 |
73,809,965 | <p>The answer is much harder than one might think: <a href="https://arxiv.org/pdf/1201.2374.pdf" rel="nofollow noreferrer">this paper</a> contains a 50-page proof of an algorithm that does this.</p> | 2022-09-22 06:00:58.077000+00:00 | 2022-09-22 06:00:58.077000+00:00 | null | null | 73,788,683 | <p>There have been similar questions, like <a href="https://stackoverflow.com/q/27649735/2281318">this</a> or <a href="https://stackoverflow.com/questions/39850826/how-can-i-convert-pcfg-in-cnf-for-this-grammar">this</a>, but mine is different:</p>
<p>Let's say, I have a probabilistic context-free grammar (PCFG)</p>
<pre><code>S --> A [1/2] | B [1/2]
A --> eps [p] | AA [q] | x [r]
B --> y [1]
</code></pre>
<p>where eps is an empty string, p + q + r = 1 (and q <= 1/2, so that generating process finishes with probability 1).</p>
<ol>
<li>What is Chomsky normal form for this PCFG?</li>
</ol>
<p>I found it very difficult to get rid of the null production rule <code>A --> eps</code> and at the same time keep all the probabilities of generating a given string intact. For example, <code>a := P(eps) = (1 - sqrt(1 - 4 p q)) / (2 q)</code> and <code>P(x) = r / (1 - 2 a q)</code> (as well as the others) should not be changed after creating a CNF(PCFG).</p>
<ol start="2">
<li>Is there a source for an algorithm that does a conversion PCFG --> CNF(PCFG) for any PCFG? (<strong>or proves this is not possible</strong>)</li>
</ol>
<p>Searching for this, I found numerous sources claiming that this is possible; however, I saw no proof of this.
Also, following the procedure for CNF(CFG) (where no probability is assigned to rules) does not work (or at least I do not see how one could generalize this to any PCFG).</p>
<hr />
<p>EDIT: <a href="https://cs.brown.edu/courses/csci1460/assets/files/parsing.pdf" rel="nofollow noreferrer">This pdf</a> (page 112) claims that <em>It
turns out that every epsilon-free PCFG G has a corresponding binarized
PCFG G′ that generates the same language as G</em>.</p>
<p>The binarized form of PCFGs is slightly less restrictive than CNF. Again, the pdf provides no sources / proofs for this claim.</p>
<p>Epsilon free means that there are not rules <code>X --> eps</code> (which is not true for the toy grammar above).</p> | 2022-09-20 14:51:13.297000+00:00 | 2022-09-22 06:00:58.077000+00:00 | 2022-09-21 08:27:46.903000+00:00 | nlp|context-free-grammar|probability-distribution | ['https://arxiv.org/pdf/1201.2374.pdf'] | 1 |
60,204,849 | <p>This is not an easy problem.</p>
<p>You suggest replacing the words on the data preprocessing level. To do so, you would need word alignment that would tell you what source words match your target word. There are tools for that like <a href="https://github.com/clab/fast_align" rel="nofollow noreferrer">FastAlign</a>. Even when you have the alignment, there is no guarantee that the copied source word will be in the target vocabulary.</p>
<p>Some people tried to solve this issue on the modeling level and include explicit copy mechanisms in their network (like <a href="https://arxiv.org/abs/1603.06393" rel="nofollow noreferrer">this paper</a>), however it makes the network rather complicated and gives only a little improvement.</p>
<p>The most common workaround for this issue is using subword-based vocabulary like <a href="https://github.com/glample/fastBPE" rel="nofollow noreferrer">BPE</a> or <a href="https://github.com/google/sentencepiece" rel="nofollow noreferrer">SentencePiece</a>. With these methods, unfrequent words get segmented into smaller units, so nothing is in the end out of vocabulary. If the word is the same both on the source and target side (this often happens with proper names), it will get segmented in the same way on the source and target side and the model will learn that copying the word fragments is what it usually should do.</p> | 2020-02-13 09:48:13.833000+00:00 | 2020-02-13 09:48:13.833000+00:00 | null | null | 60,202,019 | <p>I'm working on Machine learning AI translation system, and I want to make it more adaptable my code now when the word is new will place <em>UNK</em> which stands for <em>UNKNOWN</em> and leave it, but I want to copy the same word and past it back instead of printing <em>UNK</em>, so if a new word comes it should pass back the same word as translation instead of <em>UNK</em> my code looks like this for now:</p>
<p>any ideas what shall I change : </p>
<pre><code># Adding the word 'UNK' to the end of the array (stands for UNKNOWN words)
X_ix_to_word.append('UNK')
# Creating the word-to-index dictionary from the array created above
X_word_to_ix = {word:ix for ix, word in enumerate(X_ix_to_word)}
# Converting each word to its index value
for i, sentence in enumerate(X):
for j, word in enumerate(sentence):
if word in X_word_to_ix:
X[i][j] = X_word_to_ix[word]
else:
X[i][j] = X_word_to_ix['UNK']
y_ix_to_word = [word[0] for word in y_vocab]
y_ix_to_word.insert(0, 'ZERO')
y_ix_to_word.append('UNK')
y_word_to_ix = {word:ix for ix, word in enumerate(y_ix_to_word)}
for i, sentence in enumerate(y):
for j, word in enumerate(sentence):
if word in y_word_to_ix:
y[i][j] = y_word_to_ix[word]
else:
y[i][j] = y_word_to_ix['UNK']
return (X, len(X_vocab)+2, X_word_to_ix, X_ix_to_word, y, len(y_vocab)+2, y_word_to_ix, y_ix_to_word)
def load_test_data(source, X_word_to_ix, max_len):
f = open(source, 'r')
X_data = f.read()
f.close()
X = [text_to_word_sequence(x)[::-1] for x in X_data.split('\n') if len(x) > 0 and len(x) <= max_len]
for i, sentence in enumerate(X):
for j, word in enumerate(sentence):
if word in X_word_to_ix:
X[i][j] = X_word_to_ix[word]
else:
X[i][j] = X_word_to_ix['UNK']
return X
</code></pre> | 2020-02-13 06:52:39.317000+00:00 | 2020-02-13 09:48:13.833000+00:00 | 2020-02-13 09:38:42.347000+00:00 | python|tensorflow|keras|nlp|machine-translation | ['https://github.com/clab/fast_align', 'https://arxiv.org/abs/1603.06393', 'https://github.com/glample/fastBPE', 'https://github.com/google/sentencepiece'] | 4 |
47,697,583 | <p>Looks like you're facing the problem of exploding gradients with ReLu activation function (that what <code>NaN</code> means -- very big activations). There are several techniques to deal with this issue, e.g. <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">batch normalization</a> (changes the network architecture) or a delicate variable initialization (that's what I'd try first).</p>
<p>You are using Xavier initialization for <code>V</code> variables in different layers, which indeed works fine for logistic sigmoid activation (see <a href="http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf" rel="nofollow noreferrer">the paper by Xavier Glorot and Yoshua Bengio</a>), or, in other words, <code>tanh</code>. </p>
<p>The preferred initialization strategy for the ReLU activation function (and its variants, including ELU) is He initialization. In tensorflow it's implemented via <a href="https://www.tensorflow.org/api_docs/python/tf/variance_scaling_initializer" rel="nofollow noreferrer"><code>tf.variance_scaling_initializer</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>initializer = tf.variance_scaling_initializer()
v = tf.get_variable(name=var_name, initializer=initializer, ...)
</code></pre>
<p>You might also want to try smaller values for <code>b</code> and <code>g</code> variables, but it's hard to say the exact value just by looking at your model. If nothing helps, consider adding batch-norm layers to your model to control activation distribution.</p> | 2017-12-07 14:44:40.883000+00:00 | 2017-12-07 14:44:40.883000+00:00 | null | null | 47,685,341 | <p>The neural network I trained is the critic network for deep reinforcement learning. The problem is when one of the layer's activation is set to be relu or elu, the output would be nan after some training step, while the output is normal if the activation is tanh. And the code is as follows(based on tensorflow):</p>
<pre class="lang-py prettyprint-override"><code>with tf.variable_scope('critic'):
self.batch_size = tf.shape(self.tfs)[0]
l_out_x = denseWN(x=self.tfs, name='l3', num_units=self.cell_size, nonlinearity=tf.nn.tanh, trainable=True,shape=[det*step*2, self.cell_size])
l_out_x1 = denseWN(x=l_out_x, name='l3_1', num_units=32, trainable=True,nonlinearity=tf.nn.tanh, shape=[self.cell_size, 32])
l_out_x2 = denseWN(x=l_out_x1, name='l3_2', num_units=32, trainable=True,nonlinearity=tf.nn.tanh,shape=[32, 32])
l_out_x3 = denseWN(x=l_out_x2, name='l3_3', num_units=32, trainable=True,shape=[32, 32])
self.v = denseWN(x=l_out_x3, name='l4', num_units=1, trainable=True, shape=[32, 1])
</code></pre>
<p>Here is the code for basic layer construction:</p>
<pre class="lang-py prettyprint-override"><code>def get_var_maybe_avg(var_name, ema, trainable, shape):
if var_name=='V':
initializer = tf.contrib.layers.xavier_initializer()
v = tf.get_variable(name=var_name, initializer=initializer, trainable=trainable, shape=shape)
if var_name=='g':
initializer = tf.constant_initializer(1.0)
v = tf.get_variable(name=var_name, initializer=initializer, trainable=trainable, shape=[shape[-1]])
if var_name=='b':
initializer = tf.constant_initializer(0.1)
v = tf.get_variable(name=var_name, initializer=initializer, trainable=trainable, shape=[shape[-1]])
if ema is not None:
v = ema.average(v)
return v
def get_vars_maybe_avg(var_names, ema, trainable, shape):
vars=[]
for vn in var_names:
vars.append(get_var_maybe_avg(vn, ema, trainable=trainable, shape=shape))
return vars
def denseWN(x, name, num_units, trainable, shape, nonlinearity=None, ema=None, **kwargs):
with tf.variable_scope(name):
V, g, b = get_vars_maybe_avg(['V', 'g', 'b'], ema, trainable=trainable, shape=shape)
x = tf.matmul(x, V)
scaler = g/tf.sqrt(tf.reduce_sum(tf.square(V),[0]))
x = tf.reshape(scaler,[1,num_units])*x + tf.reshape(b,[1,num_units])
if nonlinearity is not None:
x = nonlinearity(x)
return x
</code></pre>
<p>Here is the code to train the network:</p>
<pre class="lang-py prettyprint-override"><code>self.tfdc_r = tf.placeholder(tf.float32, [None, 1], 'discounted_r')
self.advantage = self.tfdc_r - self.v
l1_regularizer = tf.contrib.layers.l1_regularizer(scale=0.005, scope=None)
self.weights = tf.trainable_variables()
regularization_penalty_critic = tf.contrib.layers.apply_regularization(l1_regularizer, self.weights)
self.closs = tf.reduce_mean(tf.square(self.advantage))
self.optimizer = tf.train.RMSPropOptimizer(0.0001, 0.99, 0.0, 1e-6)
self.grads_and_vars = self.optimizer.compute_gradients(self.closs)
self.grads_and_vars = [[tf.clip_by_norm(grad,5), var] for grad, var in self.grads_and_vars if grad is not None]
self.ctrain_op = self.optimizer.apply_gradients(self.grads_and_vars, global_step=tf.contrib.framework.get_global_step())
</code></pre> | 2017-12-06 23:41:56.423000+00:00 | 2017-12-07 16:36:30.227000+00:00 | 2017-12-07 16:36:30.227000+00:00 | machine-learning|tensorflow|neural-network|deep-learning|reinforcement-learning | ['https://arxiv.org/abs/1502.03167', 'http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf', 'https://www.tensorflow.org/api_docs/python/tf/variance_scaling_initializer'] | 3 |
43,142,186 | <p>Why don't you want to join the prediction at feature level? </p>
<p>If you really want to stick to your idea of not joining any layers of the network, you can apply a CRF or SVM on top of the overall prediction pipeline to learn cross-correlations between the predictions. For any other method you will need to combine features inside the network, one way or another. However I would strongly recommend, that you consider doing this. It is a general theme in deep learning, that doing stuff inside the network works better then doing it outside.</p>
<p>From what I have learned by experimenting with joint prediction, you will get the most performance gain, if you share weights between all convolutional layers of the network. You can then apply independent fc-layers, followed by a softmax regression and separate loss functions on top of the jointly predicted features. This will allow the network to learn cross-correlation between features while it is still able to make separate predictions.</p>
<p>Have a look at my <a href="https://arxiv.org/abs/1612.07695" rel="nofollow noreferrer">MultiNet paper</a> as a good starting point. All our training code is on <a href="https://github.com/MarvinTeichmann/MultiNet#multinet" rel="nofollow noreferrer">github</a>.</p> | 2017-03-31 13:39:48.350000+00:00 | 2017-03-31 15:36:59.993000+00:00 | 2017-03-31 15:36:59.993000+00:00 | null | 43,128,935 | <p>I have trained a network on two different modals of the same image. I pass the data together in one layer but after that, it is pretty much two networks in parallel, they don't share a layer and the two tasks have different set of labels, therefore I have two different loss and accuracy layers (I use caffe btw). I would like to learn these tasks jointly. For example, the prediction of a class of task 1 should be higher in presence of the task 2 predicting a certain class label. I don't want to join them at feature level but at prediction level. How do I get to do this?</p> | 2017-03-30 21:46:51.827000+00:00 | 2018-09-19 06:53:53.693000+00:00 | 2018-09-19 06:53:53.693000+00:00 | neural-network|deep-learning|conv-neural-network | ['https://arxiv.org/abs/1612.07695', 'https://github.com/MarvinTeichmann/MultiNet#multinet'] | 2 |
68,837,545 | <p>The variances of several of your latent variables are very small. For example, Dlbi appears to be effectively zero. That's the source of the issue here.</p>
<p>There are two things you can try to remedy this.</p>
<p>First, it may work better to identify the model by fixing the latent variable variances to 1, rather than fixing the first indicator factor loadings to 1. Do this by specifying <code>std.lv = TRUE</code>.</p>
<p>Even then, it will likely be the case that one or more of the group factors will have very small loadings. This indicates that there really isn't much of a distinct group factor in your data for these items that is distinct from the general factor. You should consider estimating a model that drops that group factor (as well as comparing with models dropping the other group factors one at a time). We discuss this issue some here: <a href="https://psyarxiv.com/q356f/" rel="nofollow noreferrer">https://psyarxiv.com/q356f/</a></p>
<p>Additionally, you should constrain item loadings so that they are in the theoretically expected direction (e.g., all positive with a lower bound of 0). It is common for bifactor models to overextract variance in items and produce uninterpretable group factors that have a mix of positive and negative loadings. This can also cause convergence issues.</p>
<p>In general, this sort of unconstrained bifactor model tends to be overly flexible and tends to overfit to a similar degree as exploratory factor analysis. You should be sure to evaluate the bifactor model based not only on global model fit statistics, but also on whether the factor loadings actually resemble a true bifactor model--do the items each show substantial loadings on both the general factor and their group factor in the expected directions, or do items tend to load on only one or the other? See some examples in the paper linked above about this issue.</p>
<p>Another option would be to switch to exploratory bifactor modeling. This is implemented in R in the <em>fungible</em> package in the <code>fungible::BiFAD()</code> function. This approach is discussed here:
<a href="https://www.sciencedirect.com/science/article/pii/S0001879120300555" rel="nofollow noreferrer">https://www.sciencedirect.com/science/article/pii/S0001879120300555</a></p>
<p>Exploratory bifactor models are useful because they rely on targeted EFA rotation to estimate loadings. This makes convergence much more likely and can help to diagnose when a group factor is too weak to identify in the data.</p> | 2021-08-18 18:21:04.677000+00:00 | 2021-08-18 18:21:04.677000+00:00 | null | null | 68,837,355 | <p>Title basically explains it but I'm trying to build a bifactor model with psychopathy as one factor and subtypes as the other. I believe that I have everything constrained properly but that might be the issue.</p>
<p><strong>Current code:</strong></p>
<pre class="lang-r prettyprint-override"><code>BifactorModel <- 'psychopathyBi =~ YPIS_1 + YPIS_2 + YPIS_3 + YPIS_4 + YPIS_5 + YPIS_6 + YPIS_7 + YPIS_8 + YPIS_9 +YPIS_10 + YPIS_11 + YPIS_12 + YPIS_13 + YPIS_14 + YPIS_15 + YPIS_16 + YPIS_17 + YPIS_18
GMbi =~ YPIS_4 + YPIS_5 + YPIS_8 + YPIS_9 + YPIS_14 + YPIS_16
CUbi =~ YPIS_3 + YPIS_6 + YPIS_10 + YPIS_15 + YPIS_17 + YPIS_18
DIbi =~ YPIS_1 + YPIS_2 + YPIS_7 + YPIS_11 + YPIS_12 + YPIS_13
psychopathyBi ~~ 0*GMbi
psychopathyBi ~~ 0*CUbi
psychopathyBi ~~ 0*DIbi
GMbi ~~ 0*CUbi
GMbi ~~ 0*DIbi
CUbi ~~ 0*DIbi
'
#fit bifactor model
bifactorFit <- cfa(BifactorModel, data = YPIS_Data)
#get summary of bifactor model
summary(bifactorFit, fit.measures = TRUE, standardized = TRUE)
</code></pre>
<p><strong>This produces the following:</strong></p>
<blockquote>
<p>lavaan 0.6-9 did NOT end normally after 862 iterations</p>
</blockquote>
<p><strong>this is what the model should ultimately look like once converged</strong>
<a href="https://i.stack.imgur.com/Rxgbo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rxgbo.png" alt="What the model should ultimately look like" /></a></p>
<p>Any suggestions or comments would be greatly appreciated. Thanks in advance.</p> | 2021-08-18 18:05:39.517000+00:00 | 2021-08-18 18:21:04.677000+00:00 | 2021-08-18 18:13:26.287000+00:00 | r|statistics|r-lavaan|factor-analysis | ['https://psyarxiv.com/q356f/', 'https://www.sciencedirect.com/science/article/pii/S0001879120300555'] | 2 |
46,810,226 | <p>The document <a href="https://github.com/commaai/research/blob/3429b061cf2a15dc37661552775aa983206f7561/DriveSim.md" rel="nofollow noreferrer">DriveSim.md</a> in the repository links to a paper titled <a href="https://arxiv.org/pdf/1608.01230.pdf" rel="nofollow noreferrer">Learning a Driving Simulator</a>. In the paper, they state:</p>
<blockquote>
<p>Due to the problem complexity we decided to learn video prediction with separable networks. </p>
</blockquote>
<p>They also mention the frame rate they used is 5 Hz.</p>
<p>While that sentence is the only one that addresses your question, and it isn't exactly crystal clear, let's break down the task in question:</p>
<ul>
<li>Grab an image from a camera </li>
<li>Preprocess/downsample/normalize the image pixels</li>
<li>Pass the image through an autoencoder to extract representative feature vector</li>
<li>Pass the output of the autoencoder on to an RNN that will predict proper steering angle</li>
</ul>
<p>The "problem complexity" refers to the fact that they're dealing with a long sequence of large images that are (as they say in the paper) "highly uncorrelated." There are lots of different tasks that are going on, so the network approach is more modular - in addition to allowing them to work in parallel, it also allows scaling up the components without getting bottlenecked by a single piece of hardware reaching its threshold computational abilities. (And just think: this is only the <em>steering</em> aspect. The <a href="https://github.com/commaai/research/blob/master/Logs.md" rel="nofollow noreferrer">Logs.md</a> file lists other components of the vehicle to worry about that aren't addressed by this neural network - gas, brakes, blinkers, acceleration, etc.).</p>
<p>Now let's fast forward to the practical implementation in a self-driving vehicle. There will definitely be more than one neural network operating onboard the vehicle, and each will need to be limited in size - microcomputers or embedded hardware, with limited computational power. So, there's a natural ceiling to how much work one component can do.</p>
<p>Tying all of this together is the fact that cars <em>already</em> operate using a network architecture - a <a href="https://en.wikipedia.org/wiki/CAN_bus" rel="nofollow noreferrer">CAN bus</a> is literally a computer network inside of a vehicle. So, this work simply plans to farm out pieces of an enormously complex task to a number of distributed components (which will be limited in capability) using a network that's already in place.</p> | 2017-10-18 12:29:17.730000+00:00 | 2017-10-18 12:29:17.730000+00:00 | null | null | 46,787,099 | <p>In comma.ai's <a href="https://github.com/commaai/research" rel="nofollow noreferrer">self-driving car software</a> they use a client/server architecture. Two processes are started separately, <code>server.py</code> and <code>train_steering_model.py.</code> </p>
<p><code>server.py</code> sends data to <code>train_steering_model.py</code> via http and sockets. </p>
<p>Why do they use this technique? Isn't this a complicated way of sending data? Isn't this easier to make <code>train_steering_model.py</code> load the data set by it self?</p> | 2017-10-17 09:39:40.777000+00:00 | 2017-10-18 15:43:06.183000+00:00 | 2017-10-18 15:43:06.183000+00:00 | machine-learning|tensorflow|client-server|keras | ['https://github.com/commaai/research/blob/3429b061cf2a15dc37661552775aa983206f7561/DriveSim.md', 'https://arxiv.org/pdf/1608.01230.pdf', 'https://github.com/commaai/research/blob/master/Logs.md', 'https://en.wikipedia.org/wiki/CAN_bus'] | 4 |
17,254,565 | <blockquote>
<p>Is it a bug?</p>
</blockquote>
<p>No, a bug is something that violates the specification. The <a href="http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#pow(double,%20double)" rel="noreferrer">specification</a> states:</p>
<blockquote>
<p>Returns the value of the first argument raised to the power of the second argument. Special cases:</p>
<ul>
<li>If the second argument is positive or negative zero, then the result is 1.0.</li>
</ul>
</blockquote>
<p>Finally, mathematically, <a href="https://en.wikipedia.org/wiki/Exponentiation#Zero_to_the_power_of_zero" rel="noreferrer">some do define</a> <code>0^0</code> as <code>1</code>. In fact, Knuth <a href="http://arxiv.org/abs/math/9205211" rel="noreferrer">says</a> that it <em>has</em> to be <code>1</code>.</p>
<blockquote>
<p>The number of mappings from the empty set to the empty set is <code>0^0</code>. It <em>has</em> to be <code>1</code>.</p>
</blockquote>
<p>His reasoning is as follows. If you have two sets, <code>A</code> and <code>B</code>, the number of functions from <code>A</code> to <code>B</code> is <code>|B|^|A|</code>. How many functions are there from the empty set to the empty set? Well, there is exactly one. By this logic, <code>0^0</code> <em>should</em> be <code>1</code>.</p> | 2013-06-22 19:33:14.970000+00:00 | 2013-06-22 19:33:14.970000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 17,254,533 | <p><code>Math.pow(0.0, 0.0)</code> in Java returns 1 which is wrong. 0^0 is undefined. The same problem exists also in the windows calculator (I am using Win7). Why is that?</p>
<p>Mathematica declares it as an error as well as my Casio scientific calculator, why not java or the Win calculator... Is it a bug? </p> | 2013-06-22 19:29:07.417000+00:00 | 2013-06-22 19:47:28.097000+00:00 | 2013-06-22 19:43:12.600000+00:00 | java|math|pow | ['http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#pow(double,%20double)', 'https://en.wikipedia.org/wiki/Exponentiation#Zero_to_the_power_of_zero', 'http://arxiv.org/abs/math/9205211'] | 3 |
44,476,646 | <p>The paper <a href="https://arxiv.org/abs/1403.2056" rel="nofollow noreferrer">Engineering Parallel String Sorting</a> has, surprisingly, benchmarks of a large number of single-threaded algorithms on an URL dataset (see page 29). Rantala's variant of multikey quicksort with caching comes out ahead; you can test multikey_cache8 in <a href="https://github.com/rantala/string-sorting" rel="nofollow noreferrer">this repository cited in the paper</a>. </p>
<p>I've tested the dataset in that paper, and if it's any indication you're seeing barely one bit of entropy in the first ten characters, and distinguishing prefixes in the 100-character range. Doing 100 passes of radix sort will thrash the cache with little benefit, eg sorting a million urls means you're looking for ~20 distinguishing bits in each key at a cost of ~100 cache misses.</p>
<p>While in general radix sort will not perform well on long strings, the optimizations that Kärkkäinen and Rantala describe in <a href="https://link.springer.com/chapter/10.1007%2F978-3-540-89097-3_3" rel="nofollow noreferrer">Engineering Radix Sort for Strings</a> are sufficient for the URL dataset. In particular they read ahead 8 characters and cache them with the string pointers; sorting on these cached values yields enough entropy to get past the cache-miss problem.</p>
<p>For longer strings try some of the LCP-based algorithms in that repository; in my experience the URL dataset is about at the break-even point between highly-optimized radix-type sorts and LCP-based algorithms which do asymptotically better on longer strings. </p> | 2017-06-10 18:22:27.283000+00:00 | 2017-06-10 18:22:27.283000+00:00 | null | null | 16,245,946 | <p>I have a set of strings. 90% of them are URLs start with <code>"http://www."</code>. I want to sort them alphabetically.</p>
<p>Currently I use C++ std::sort(), but std::sort is a variant of quicksort based on comparison, and comparing two strings with a long common prefix is not efficient. However (I think) a radix sort won't work either, since most strings are put in the same bucket because of the long common prefix.</p>
<p>Is there any better algorithm than normal quick-sort/radix-sort for this problem?</p> | 2013-04-26 22:08:45.513000+00:00 | 2017-06-10 18:22:27.283000+00:00 | null | string|algorithm|sorting|data-structures | ['https://arxiv.org/abs/1403.2056', 'https://github.com/rantala/string-sorting', 'https://link.springer.com/chapter/10.1007%2F978-3-540-89097-3_3'] | 3 |
68,626,258 | <p>The addition of noise affects the backward pass through that layer. For example, for quantization layers, people typically use straight-through estimator (which is basically just gradient clipping if I remember correctly).</p>
<p>Please note that gradient clipping, if used, should be coordinated with the noise layers: the paper below clips the gradient first and then adds the noise. <em>See section 3.1</em>, <strong>Differentially Private SGD Algorithm</strong>, "<em><strong>Algorithm 1</strong></em>" on <em>page 3</em>: <a href="https://arxiv.org/pdf/1607.00133.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1607.00133.pdf</a></p>
<p>But to answer your question, this is the code you will need to add noise:</p>
<pre><code>import torch
import torch.nn as nn

class GaussianNoise(nn.Module):
"""Gaussian noise regularizer.
Args:
sigma (float, optional): relative standard deviation used to generate the
noise. Relative means that it will be multiplied by the magnitude of
the value your are adding the noise to. This means that sigma can be
the same regardless of the scale of the vector.
is_relative_detach (bool, optional): whether to detach the variable before
computing the scale of the noise. If `False` then the scale of the noise
won't be seen as a constant but something to optimize: this will bias the
network to generate vectors with smaller values.
"""
def __init__(self, sigma=0.1, is_relative_detach=True):
super().__init__()
self.sigma = sigma
self.is_relative_detach = is_relative_detach
self.register_buffer('noise', torch.tensor(0))
def forward(self, x):
if self.training and self.sigma != 0:
scale = self.sigma * x.detach() if self.is_relative_detach else self.sigma * x
sampled_noise = self.noise.expand(*x.size()).float().normal_() * scale
x = x + sampled_noise
return x
</code></pre>
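<p>A hypothetical usage of the module above (the shapes here are arbitrary):</p>
<pre><code>import torch

noise = GaussianNoise(sigma=0.1)
x = torch.randn(8, 32)   # e.g. a batch of activations
noise.train()            # noise is only applied in training mode
y = noise(x)             # same shape as x, with relative Gaussian noise added
</code></pre>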
<p>*<em>You might also want to scroll through this thread as well:</em> <a href="https://discuss.pytorch.org/t/how-to-add-noise-to-mnist-dataset-when-using-pytorch/59745/17" rel="nofollow noreferrer">https://discuss.pytorch.org/t/how-to-add-noise-to-mnist-dataset-when-using-pytorch/59745/17</a></p> | 2021-08-02 18:13:53+00:00 | 2021-08-02 18:22:28.163000+00:00 | 2021-08-02 18:22:28.163000+00:00 | null | 68,620,396 | <p>accroding to paper by OpenAI(<a href="https://arxiv.org/pdf/1706.02275.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1706.02275.pdf</a>),we should add noise to policy to ensure exploration. While in an example code, there is a method to add noise:</p>
<p><code>u = torch.rand_like(model_out)</code></p>
<p><code>policy = F.softmax(model_out - torch.log(-torch.log(u)), dim=-1)</code></p>
<p>It works very well with the simple_spread env, while when I simply add a scalar of Gaussian noise to model_out, the time of convergence becomes quite long.
How it works?</p> | 2021-08-02 10:54:53.303000+00:00 | 2021-08-02 18:22:28.163000+00:00 | null | pytorch|reinforcement-learning|multi-agent | ['https://arxiv.org/pdf/1607.00133.pdf', 'https://discuss.pytorch.org/t/how-to-add-noise-to-mnist-dataset-when-using-pytorch/59745/17'] | 2 |
63,267,234 | <p>This is almost surely a duplicate but I am in-between tasks and cannot search now...</p>
<p>"It's still complicated" but Rcpp 1.0.5 now also ship this <a href="https://arxiv.org/abs/1911.06416" rel="nofollow noreferrer">arXiv paper I wrote on this</a> as this <a href="https://cloud.r-project.org/web/packages/Rcpp/vignettes/Rcpp-libraries.pdf" rel="nofollow noreferrer">vignette</a>. In short, you need to differentiate between</p>
<ul>
<li>header-only you can ship or include</li>
<li>small-ish library you include and build</li>
<li>an external library</li>
</ul>
<p>The third one is hardest as you now have the problem of getting it to CRAN and your users.</p> | 2020-08-05 14:21:48.940000+00:00 | 2020-08-05 14:21:48.940000+00:00 | null | null | 63,266,717 | <p>I can't seem to find anything on the internet which show you how to build and compile a C++ library such that it can be used in R via the Rcpp package. I am missing some steps where the library is somehow linked to R tools</p>
<p>For instance, how do you get the Boost library working with R, or any other such library?</p>
<p>Normal instructions:</p>
<p><a href="https://andres.jaimes.net/718/how-to-install-the-c-boost-libraries-on-windows/" rel="nofollow noreferrer">https://andres.jaimes.net/718/how-to-install-the-c-boost-libraries-on-windows/</a>
<a href="https://www.boost.org/doc/libs/1_55_0/more/getting_started/windows.html#or-build-from-the-command-prompt" rel="nofollow noreferrer">https://www.boost.org/doc/libs/1_55_0/more/getting_started/windows.html#or-build-from-the-command-prompt</a></p> | 2020-08-05 13:54:23.270000+00:00 | 2020-08-05 14:21:48.940000+00:00 | null | c++|r|boost|rcpp | ['https://arxiv.org/abs/1911.06416', 'https://cloud.r-project.org/web/packages/Rcpp/vignettes/Rcpp-libraries.pdf'] | 2 |
62,840,563 | <p>I have to disagree with the other answer that quotes Barr et al (2013) to "keep it maximal". This has been proven to be bad advice on so many occasions that the authors of lme4 had to introduce code to check for singular fit. Not only that, but Doug Bates (primary author of lme4 and probably the world's leading authority on mixed models) and colleagues wrote a paper in 2015 specifically addressing problems brought about by the wish to "keep it maximal" - <a href="https://arxiv.org/pdf/1506.04967.pdf" rel="nofollow noreferrer">Parsimonious Mixed Models</a>.</p>
<p>Of course I am not saying that checking for singular fits is a bad thing - it's a great thing, so something good certainly came from it.</p>
<p>So in this case we have <code>(task * stimulation|subject)</code> where <code>task</code> has 4 levels and <code>stimulation</code> has 3 so we are asking the software to estimate 8 variance-covariance parameters with only 20 subjects. I am not saying that this is impossible, but it just seems bizarre to me for this to be the goal. The Bates et al (2015) paper goes into considerable detail about how to handle the resulting problems and I have answered some questions on CV about how to do so <a href="https://stats.stackexchange.com/questions/378939/dealing-with-singular-fit-in-mixed-models/379068#379068">here</a> and <a href="https://stats.stackexchange.com/questions/449095/how-to-simplify-a-singular-random-structure-when-reported-correlations-are-not-n/449122#449122">here</a></p>
<p>So in summary, the other answer isn't necessarily wrong, but it can lead to a lot of problems.</p> | 2020-07-10 18:45:59.247000+00:00 | 2020-07-10 18:45:59.247000+00:00 | null | null | 55,124,966 | <p>I have a question regarding the correct model setup in R using lmer.</p>
<p>This is a repeated measures experiment.</p>
<p>Each subject (20 in total) completed 4 different tasks for each stimulation condition (anode, cathode, and sham). The dependent variable is reaction time (rt).</p>
<p>I used this model but I am not sure if it's correct. I am mainly concerned about whether the random effects are correctly assigned.</p>
<pre><code>model<- lmer(rt ~ task * stimulation + (task * stimulation|subject), data=dat)
</code></pre>
<p>Any help will be appreciated.</p>
<p>Thank you</p> | 2019-03-12 15:17:29.430000+00:00 | 2022-01-16 21:30:10.213000+00:00 | 2022-01-16 21:30:10.213000+00:00 | r|statistics|regression|lme4|mixed-models | ['https://arxiv.org/pdf/1506.04967.pdf', 'https://stats.stackexchange.com/questions/378939/dealing-with-singular-fit-in-mixed-models/379068#379068', 'https://stats.stackexchange.com/questions/449095/how-to-simplify-a-singular-random-structure-when-reported-correlations-are-not-n/449122#449122'] | 3 |
44,602,084 | <p>A reduction from 0,1 knapsack to subset-sum is described in Theorem 2 of the paper <a href="https://arxiv.org/abs/1208.4225" rel="nofollow noreferrer">"Reducing a Target Interval to a Few Exact Queries"</a>. It proceeds in three steps. First, reduce knapsack to a decision problem that tests whether there is a subset with weight at most b and value at least t. This can be done by binary search over thresholds. Second, reduce this decision problem to a small set of exact queries of the form "is there a subset with weight exactly b and value exactly t?" This is a clever reduction that does not require enumerating all possible b's and t's. Third, reduce this exact query to subset-sum by mapping each (weight,value) pair into the integer (weight*C + value) for sufficiently large C.</p> | 2017-06-17 07:45:03.333000+00:00 | 2017-06-17 07:45:03.333000+00:00 | null | null | 44,167,855 | <p>I know how to reduce subset sum to 0,1 knapsack. But is it possible to reduce knapsack to subset sum? How?</p> | 2017-05-24 20:24:42.180000+00:00 | 2020-07-08 17:30:08.993000+00:00 | null | knapsack-problem|reduction|np-complete|subset-sum | ['https://arxiv.org/abs/1208.4225'] | 1 |
62,038,806 | <p>Every programming language that takes concurrency seriously needs a memory model - and here is why.</p>
<blockquote>
<p>The memory model is the crux of the concurrency semantics of shared-memory systems. It defines the possible values that a read operation is allowed to return for any given set of write operations performed by a concurrent program, thereby defining the basic semantics of shared variables. In other words, the memory model specifies the set of allowed outputs of a program's read and write operations, and constrains an implementation to produce only (but at least one) such allowed executions. The memory model may and often does allow executions where the outcome cannot be inferred from the order in which read and write operations occur in the program. It is impossible to meaningfully reason about a program or any part of the programming language implementation without an unambiguous memory model. The memory model defines the possible outcomes of a concurrent programs read and write operations. Conversely, the memory model also defines which instruction reorderings may be permitted, either by the processor, the memory system, or the compiler.</p>
</blockquote>
<p>This is an excerpt from the paper <a href="https://arxiv.org/pdf/1803.04432.pdf" rel="nofollow noreferrer">Memory Models for C/C++ Programmers</a> which I have co-authored. Even though a large part of it is dedicated to the C++ memory model, it also covers more general areas -
starting with the reason why we need a memory model in the first place, explaining the (intuitive) sequential consistent model, and finally the weaker memory models provided by current hardware in x86 and ARM/POWER CPUs.</p> | 2020-05-27 08:35:51.610000+00:00 | 2020-05-27 08:35:51.610000+00:00 | null | null | 62,036,945 | <p>Java's multithreaded code is finally mapped to the operating system thread for execution. </p>
<p>Is the operating system thread not thread safe? </p>
<p>Why use the Java memory model to ensure thread safety? Why define the Java memory model? </p>
<p>I hope someone can answer this question; I have looked up a lot of information on the Internet but still do not understand! </p>
<p>The material on the web is all about atomicity, visibility, orderliness, and using the cache consistency model as an example, but I don't think it really answers the question. </p>
<p>thank you very much!</p> | 2020-05-27 06:42:51.037000+00:00 | 2020-05-27 13:43:10.213000+00:00 | null | java|multithreading|jvm|memory-model|java-memory-model | ['https://arxiv.org/pdf/1803.04432.pdf'] | 1 |
48,084,719 | <p>There are at least 5 reasons which might cause such behavior:</p>
<ol>
<li><p><strong>Outliers:</strong> imagine that you have 10 identical images, 9 of which belong to class <strong>A</strong> and one to class <strong>B</strong>. In this case, the model will start to assign a high probability of class <strong>A</strong> to this example because of the majority of examples. But then the signal from the outlier might destabilize the model and make accuracy decrease. In theory, the model should stabilize at assigning a score of <strong>90%</strong> to class <strong>A</strong>, but that might take many epochs.</p>
<p><strong>Solutions:</strong> In order to deal with such examples I advise you to use <em>gradient clipping</em> (you may add such an option in your optimizer). If you want to check whether this phenomenon occurs, inspect the distribution of losses (losses of individual examples from the training set) and look for outliers.</p></li>
<li><p><strong>Bias</strong>: Now imagine that you have 10 identical images, but 5 of them are assigned class <strong>A</strong> and 5 class <strong>B</strong>. In this case, the model will try to assign an approximately <strong>50%-50%</strong> distribution over both of these classes. Your model can then achieve at most 50% accuracy here, by choosing one of the two valid classes.</p>
<p><strong>Solution:</strong> Try to increase the model capacity - very often you have a set of really similar images, and adding expressive power might help to discriminate between similar examples. Beware of overfitting though. Another solution is to try <a href="https://keras.io/callbacks/#reducelronplateau" rel="noreferrer">this</a> strategy in your training. If you want to check whether such a phenomenon occurs, check the distribution of losses of individual examples. If the distribution is skewed toward higher values, you are probably suffering from <strong>bias</strong>.</p></li>
<li><p><strong>Class imbalance</strong>: Now imagine that <strong>90%</strong> of your images belong to class <strong>A</strong>. In an early stage of your training, your model mainly concentrates on assigning this class to almost all examples. This might make individual losses reach really high values and destabilize your model by making the predicted distribution more unstable.</p>
<p><strong>Solution:</strong> once again - gradient clipping. A second thing is patience: try simply leaving your model to train for more epochs. The model should learn subtler patterns in later phases of training. And of course, try class balancing - by assigning either <code>sample_weights</code> or <code>class_weights</code>. If you want to check whether this phenomenon occurs, check your class distribution.</p></li>
<li><p><strong>Too strong regularization:</strong> if you set your regularization to be too strict, the training process concentrates mainly on making your weights have a smaller norm rather than actually learning interesting insights.</p>
<p><strong>Solution:</strong> add <code>categorical_crossentropy</code> as a metric and observe whether it is also decreasing. If not, it means that your regularization is too strict - try assigning a smaller weight penalty.</p></li>
<li><p><strong>Bad model design</strong> - such behavior might be caused by a wrong model design. There are several good practices which one might apply in order to improve your model:</p>
<p><strong>Batch Normalization</strong> - thanks to this technique you prevent radical changes of the inner network activations. This makes training much more stable and efficient. With a small batch size, this might also be a genuine way of regularizing your model.</p>
<p><strong>Gradient clipping</strong> - this makes your model training much more stable and efficient.</p>
<p><strong>Reduce bottleneck effect</strong> - read <a href="https://arxiv.org/abs/1512.00567" rel="noreferrer">this</a> fantastic paper and check if your model might suffer from the <strong>bottleneck problem</strong>.</p>
<p><strong>Add auxiliary classifiers</strong> - if you are training your network from scratch, this should make your features much more meaningful and your training faster and more efficient.</p></li>
</ol> | 2018-01-03 20:33:59.797000+00:00 | 2018-01-03 20:33:59.797000+00:00 | null | null | 48,025,267 | <p>Is it practically possible to have decreasing loss and decreasing accuracy at each epoch when training a CNN model?
I am getting the below result while training.</p>
<p><a href="https://i.stack.imgur.com/LL9k4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LL9k4.png" alt="enter image description here"></a></p>
<p>Can someone explain the possible reasons why this is happening?</p> | 2017-12-29 16:10:48.313000+00:00 | 2019-08-25 10:48:02.777000+00:00 | 2019-08-25 10:48:02.777000+00:00 | machine-learning|neural-network|deep-learning|keras|conv-neural-network | ['https://keras.io/callbacks/#reducelronplateau', 'https://arxiv.org/abs/1512.00567'] | 2 |
2,352,873 | <p><em><strong>About an O(n) time, O(1) space algorithm for out-shuffle</strong></em></p>
<hr />
<p>Doing an out-shuffle in O(n) time and O(1) space is possible, but it is <em><strong>tough</strong></em>. Not sure why people think it is easy and are suggesting you try something else.</p>
<p>The following paper has an O(n) time and O(1) space solution (though it is for in-shuffle, doing in-shuffle makes out-shuffle trivial):</p>
<p><a href="http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.1598v1.pdf" rel="nofollow noreferrer">http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.1598v1.pdf</a></p>
<br/>
<p><em><strong>About a method to tackle in-place array modification algorithms</strong></em></p>
<hr />
<p>In-place modification algorithms could become very <em>hard</em> to handle.</p>
<p>Consider a couple:</p>
<ul>
<li>Inplace out-shuffle in linear time. Uses number theory.</li>
<li>In-place merge sort was open for a few years. An algorithm came along but was too complicated to be practical. Uses very complicated bookkeeping.</li>
</ul>
<p>Sorry if this sounds discouraging, but there is no magic elixir that will solve all in-place algorithm problems for you. You need to work with the problem, figure out its properties, and try to exploit them (as is the case with most algorithms).</p>
<p>That said, for array modifications where the result is a <em>permutation</em> of the original array, you can try the method of <strong>following the cycles of the permutation</strong>. Basically, any permutation can be written as a disjoint set of cycles (see John's answer too). For instance the permutation:</p>
<pre><code>1 4 2 5 3 6
</code></pre>
<p>of <code>1 2 3 4 5 6</code> can be written as</p>
<pre><code>1 -> 1
2 -> 3 -> 5 -> 4 -> 2
6 -> 6.
</code></pre>
<p>you can read the arrow as 'goes to'.</p>
<p>So to permute the array <code>1 2 3 4 5 6</code> you follow the three cycles:</p>
<p>1 goes to 1.</p>
<p>6 goes to 6.</p>
<p>2 goes to 3, 3 goes to 5, 5 goes to 4, and 4 goes to 2.</p>
<p>To follow this long cycle, you can use just one <code>temp</code> variable. Store 3 in it. Put 2 where 3 was. Now put 3 in 5 and store 5 in the <code>temp</code> and so on. Since you only use constant extra <code>temp</code> space to follow a particular cycle, you are doing an in-place modification of the array for that cycle.</p>
<p>Now if I gave you a formula for computing where an element goes to, all you now need is the set of starting elements of each cycle.</p>
<p>A judicious choice of the starting points of the cycles can make the algorithm easy. If you come up with the starting points in O(1) space, you now have a complete in-place algorithm. This is where you might actually have to get familiar with the problem and exploit its properties.</p>
<p>Even if you didn't know how to compute the starting points of the cycles, but had a formula to compute the next element, you could use this method to get an O(n) time in-place algorithm in some special cases.</p>
<p>For instance: suppose you knew the array of unsigned integers held only positive integers.</p>
<p>You can now follow the cycles, but negate the numbers in them as an indicator of 'visited' elements. Now you can walk the array and pick the first positive number you come across and follow the cycles for that, making the elements of the cycle negative and continue to find untouched elements. In the end, you just make all the elements positive again to get the resulting permutation.</p>
<p>You get an O(n) time and O(1) space algorithm! Of course, we kind of 'cheated' by using the sign bits of the array integers as our personal 'visited' bitmap.</p>
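<p>To make the bookkeeping concrete, here is a rough Python sketch of that sign-bit trick (an illustration, not taken from the paper above); <code>dest(i)</code> stands for whatever formula tells you where the element at index <code>i</code> has to go:</p>
<pre><code>def permute_in_place(a, dest):
    # Rearranges a so that the old a[i] ends up at index dest(i).
    # Assumes every a[i] > 0 so the sign bit can mark visited slots.
    n = len(a)
    for start in range(n):
        if a[start] < 0:                # already moved as part of an earlier cycle
            continue
        value, i = a[start], start
        while True:
            j = dest(i)                 # where the value currently in hand must go
            value, a[j] = a[j], -value  # drop it there (negated = visited), pick up the old one
            i = j
            if i == start:
                break
    for i in range(n):                  # clear the 'visited' marks
        if a[i] < 0:
            a[i] = -a[i]

# Example: the cyclic-shift permutation mentioned below, i -> (i + k) % n with k = 2
a = [1, 2, 3, 4, 5]
permute_in_place(a, lambda i: (i + 2) % 5)
print(a)   # [4, 5, 1, 2, 3]
</code></pre>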
<p>Even if the array was not necessarily integers, this method (of following the cycles, not the hack of sign bits :-)) can actually be used to tackle the two problems you state:</p>
<ul>
<li><p><code>The in-shuffle (or out-shuffle) problem</code>: When <code>2n+1</code> is a power of <code>3</code>, it can be shown (using number theory) that <code>1,3,3^2,</code> etc are in different cycles and all cycles are covered using those. Combine this with the fact that the in-shuffle is susceptible to divide and conquer, you get an O(n) time, O(1) space algorithm (the formula is <code>i -> 2*i modulo 2n+1</code>). Refer to the above paper for more details.</p>
</li>
<li><p><code>The cyclic shift an array problem</code>: Cyclic shift an array of size <code>n</code> by <code>k</code> also gives a permutation of the resulting array (given by the formula <code>i</code> goes to <code>i+k modulo n</code>), and can also be solved in linear time and in-place using the following the cycle method. In fact, in terms of the number of element exchanges this following cycle method is <em>better</em> than the 3 reverses algorithm. Of course, following the cycle method can kill the cache because of the access patterns, and in practice, the 3 reverses algorithm might actually fare better.</p>
</li>
</ul>
<hr />
<p>As for interviews, if the interviewer is a reasonable person, they will be looking at how you <em>think</em> and approach the problem and not whether you actually solve it. So even if you don't solve a problem, I think you should not be discouraged.</p> | 2010-02-28 22:07:37.763000+00:00 | 2021-02-05 06:40:35.520000+00:00 | 2021-02-05 06:40:35.520000+00:00 | null | 2,352,542 | <p>I am preparing for a software job interview, and I am having trouble with in-place array modifications.</p>
<p>For example, in the out-shuffle problem you interleave two halves of an array so that <code>1 2 3 4 5 6 7 8</code> would become <code>1 5 2 6 3 7 4 8</code>. <a href="http://www.careercup.com/question?id=3002" rel="nofollow noreferrer">This question</a> asks for a constant-memory solution (and linear-time, although I'm not sure that's even possible).</p>
<p>First I thought a linear algorithm is trivial, but then I couldn't work it out. Then I did find a simple <code>O(n^2)</code> algorithm but it took me a long time. And I still don't find a faster solution.</p>
<p>I remember also having trouble solving a similar problem from Bentley's Programming Pearls, column 2:</p>
<blockquote>
<p>Rotate an array left by <code>i</code> positions (e.g. <code>abcde</code> rotated by 2 becomes <code>cdeab</code>), in time <code>O(n)</code> and with just a couple of bytes extra space.</p>
</blockquote>
<p>Does anyone have tips to help wrap my head around such problems?</p> | 2010-02-28 20:29:38.973000+00:00 | 2021-02-05 07:31:20.587000+00:00 | 2021-02-05 06:42:07.533000+00:00 | algorithm|arrays|in-place | ['http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.1598v1.pdf'] | 1 |
69,530,100 | <p>The Federated Learning for Image Classification tutorial would be a good start to learn TFF: <a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification" rel="noreferrer">https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification</a> The "Preparing the input data" section is related to your third question. About vertical vs. horizontal: I know there are many types of federated learning defined in recent publications. Personally I would call what you described as cross-silo federated learning, see Section 2.2 in this paper for more information: <a href="https://arxiv.org/abs/1912.04977" rel="noreferrer">https://arxiv.org/abs/1912.04977</a></p>
<p>To answer your other questions:</p>
<p>See the above tutorial for how to create an iterative_process with federated averaging while setting SGD learning rate on both the server and client side. You can also implement customized federated learning algorithms: <a href="https://www.tensorflow.org/federated/tutorials/building_your_own_federated_learning_algorithm" rel="noreferrer">https://www.tensorflow.org/federated/tutorials/building_your_own_federated_learning_algorithm</a> (this tutorial might also answer your second question about customized local training?)</p> | 2021-10-11 17:13:13.587000+00:00 | 2021-10-11 17:13:13.587000+00:00 | null | null | 69,499,432 | <p>I am a newbie in federated learning and just getting to know TensorFlow Federated TFF framework. I have some questions in my mind I would be really appreciated it if anybody can clarify them:</p>
<ol>
<li>Is the Federated Averaging algorithm the only aggregation algorithm supported in TFF? And how does it differ from Federated Stochastic Gradient Descent?</li>
<li>Does Federated Averaging require each client to be trained with neural networks? Or is it possible for the local data to be trained with any machine learning algorithm?</li>
<li>I have big data, and I am planning to partition it into smaller datasets and simulate each part as one client. Does this work in TFF? And is this considered horizontal or vertical federated learning?</li>
</ol>
<p>Thanks in advance</p> | 2021-10-08 17:03:26.013000+00:00 | 2021-10-11 17:13:13.587000+00:00 | null | tensorflow|tensorflow-federated|federated-learning | ['https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification', 'https://arxiv.org/abs/1912.04977', 'https://www.tensorflow.org/federated/tutorials/building_your_own_federated_learning_algorithm'] | 3 |
34,654,016 | <p>The depth of the network is not the only thing that affects its number of parameters. The number of parameters per layer has a huge effect. This means that for each convolutional layer, the size of the filter and the number of filters (features learned) would make a huge difference. Have a look at the paper of the VGG group at this link <a href="http://arxiv.org/pdf/1409.1556.pdf" rel="nofollow">http://arxiv.org/pdf/1409.1556.pdf</a></p>
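<p>To make this concrete, here is a hedged breakdown (the hyper-parameter values below are assumptions chosen to reproduce the reported totals; they are not stated in the question): the Dense layer right after <code>Flatten()</code> dominates the count, and the extra conv/pool block in the second net shrinks that layer's input from 4608 to 1024 features.</p>
<pre class="lang-py prettyprint-override"><code># Conv2D params = (kernel_h * kernel_w * in_channels + 1) * n_filters
# Dense  params = (n_inputs + 1) * n_units
# Assumed values: 28x28x1 input, nb_filters=32, nb_conv=3, nb_pool=2, nb_classes=35
vgg1 = ((3*3*1 + 1)*32           # Conv2D, 32 filters
        + (3*3*32 + 1)*32        # Conv2D, 32 filters
        + (12*12*32 + 1)*128     # Dense(128) after Flatten -> 589,952 params alone
        + (128 + 1)*35)          # Dense(nb_classes)
vgg2 = ((3*3*1 + 1)*32 + (3*3*32 + 1)*32
        + (3*3*32 + 1)*64 + (3*3*64 + 1)*64   # two extra conv layers
        + (4*4*64 + 1)*256       # Dense(256) now sees only 4*4*64 = 1024 features
        + (256 + 1)*35)
print(vgg1, vgg2)   # 604035 336387
</code></pre>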
<p>Also please look at the paper "<a href="http://arxiv.org/pdf/1512.03385v1.pdf" rel="nofollow">Deep Residual Learning for image recognition</a>" which represents networks that have up to 152 layers i.e. 8x deeper than VGG nets while still having lower complexity.</p> | 2016-01-07 11:29:23.583000+00:00 | 2016-01-07 11:35:44.940000+00:00 | 2016-01-07 11:35:44.940000+00:00 | null | 34,650,307 | <p>I trained two convolutional neural nets in Keras. The first is net is as below</p>
<pre><code>def VGG1(weights_path):
model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
border_mode='valid',
input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
if weights_path:
model.load_weights(weights_path)
return model
</code></pre>
<p>The second net</p>
<pre><code>def VGG2(weights_path):
model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv, border_mode='valid', input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Convolution2D(64, nb_conv, nb_conv, border_mode='valid'))
model.add(Activation('relu'))
model.add(Convolution2D(64, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
if weights_path:
model.load_weights(weights_path)
return model
</code></pre>
<p>When I call the <code>model.count_params()</code> method, the first net results in 604035 parameters and the second net in 336387.</p>
<p>How is this possible? The second net is deeper and should contain more parameters. Is there any mistake?</p> | 2016-01-07 08:21:26.943000+00:00 | 2016-01-13 03:35:33.307000+00:00 | 2016-01-13 03:35:33.307000+00:00 | python|machine-learning|conv-neural-network|keras | ['http://arxiv.org/pdf/1409.1556.pdf', 'http://arxiv.org/pdf/1512.03385v1.pdf'] | 2 |
46,888,556 | <p>Yes, you could do it based on <a href="https://arxiv.org/pdf/1703.06870.pdf" rel="nofollow noreferrer">this paper</a></p>
<p>There is an implementation <a href="https://github.com/CharlesShang/FastMaskRCNN" rel="nofollow noreferrer">here</a> but note that this is not related to the object detection API.</p> | 2017-10-23 11:45:06.567000+00:00 | 2017-10-23 11:45:06.567000+00:00 | null | null | 46,784,295 | <p>in TensorFlow object detection API, is there any possibility to get other shape of recognized object than rectangle? </p>
<p>For example, when I'm teaching my computer to recognize a forest, it would mostly be more useful to get some kind of polygon instead of a rectangle. </p>
<p>Other thing is for example river, but polygon seems much easier than line. </p> | 2017-10-17 07:03:07.817000+00:00 | 2019-03-11 11:41:56.523000+00:00 | null | python|tensorflow|object-detection|object-detection-api | ['https://arxiv.org/pdf/1703.06870.pdf', 'https://github.com/CharlesShang/FastMaskRCNN'] | 2 |
46,784,633 | <p>The simple answer is no. This framework is based on <a href="https://arxiv.org/abs/1506.01497" rel="nofollow noreferrer">Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks</a> and <a href="https://arxiv.org/abs/1512.02325" rel="nofollow noreferrer">SSD: Single Shot MultiBox Detector</a>, which train an additional output of the CNN for <strong>bounding box</strong> regression. Can you create a CNN with some structure that detects polygons instead of bounding boxes? Maybe, but that would be another network.</p> | 2017-10-17 07:23:45.300000+00:00 | 2017-10-17 07:23:45.300000+00:00 | null | null | 46,784,295 | <p>in TensorFlow object detection API, is there any possibility to get other shape of recognized object than rectangle? </p>
<p>For example, when I'm teaching my computer to recognize a forest, it would mostly be more useful to get some kind of polygon instead of a rectangle. </p>
<p>Other thing is for example river, but polygon seems much easier than line. </p> | 2017-10-17 07:03:07.817000+00:00 | 2019-03-11 11:41:56.523000+00:00 | null | python|tensorflow|object-detection|object-detection-api | ['https://arxiv.org/abs/1506.01497', 'https://arxiv.org/abs/1512.02325'] | 2 |
57,845,877 | <p>It highly depends on your dataset and is part of the data scientist's job to find which model is more suitable for a particular task in terms of selected performance metric, training cost, model complexity etc.</p>
<p>When you work on the problem you will probably test all of the above models and compare them. Which one of them should you choose first? Andrew Ng in <a href="https://www.deeplearning.ai/machine-learning-yearning/" rel="nofollow noreferrer">"Machine Learning Yearning"</a> suggests starting with a simple model so you can quickly iterate and test your ideas, data preprocessing pipeline, etc.</p>
<blockquote>
<p>Don’t start off trying to design and build the perfect system.
Instead, build and train a basic system quickly—perhaps in just a few
days</p>
</blockquote>
<p>According to this suggestion, you can start with a simpler model such as <a href="https://arxiv.org/abs/1801.06146" rel="nofollow noreferrer">ULMFiT</a> as a baseline, verify your ideas and then move on to more complex models and see how they can improve your results.</p>
<p>Note that modern NLP models contain a large number of parameters and it is difficult to train them from scratch without a large dataset. That's why you may want to use <em>transfer learning</em>: you can download pre-trained model and use it as a basis and fine-tune it to your task-specific dataset to achieve better performance and reduce training time.</p> | 2019-09-08 21:28:37.183000+00:00 | 2020-02-21 02:07:23.390000+00:00 | 2020-02-21 02:07:23.390000+00:00 | null | 57,845,439 | <p>I'm trying to train a model for a sentence classification task. The input is a sentence (a vector of integers) and the output is a label (0 or 1). I've seen some articles here and there about using Bert and GPT2 for text classification tasks. However, I'm not sure which one should I pick to start with. Which of these recent models in NLP such as original Transformer model, Bert, GPT2, XLNet would you use to start with? And why? I'd rather to implement in Tensorflow, but I'm flexible to go for PyTorch too.
Thanks!</p> | 2019-09-08 20:14:09.827000+00:00 | 2021-07-08 00:48:27.637000+00:00 | null | tensorflow|nlp|language-model|bert-language-model | ['https://www.deeplearning.ai/machine-learning-yearning/', 'https://arxiv.org/abs/1801.06146'] | 2 |
34,696,626 | <p>Caffeine does not implement LRU as its cache eviction policy. Instead, Caffeine uses a policy called <a href="http://arxiv.org/pdf/1512.00727.pdf" rel="noreferrer">TinyLFU</a>. The Caffeine documentation includes a page on <a href="https://github.com/ben-manes/caffeine/wiki/Efficiency" rel="noreferrer">Efficiency</a>, which discusses the rationale for this design choice. Quoting that page:</p>
<blockquote>
<p><code>TinyLfu</code> relies on a frequency sketch to probabilistically estimate the historic usage of an entry.</p>
</blockquote>
<p>Since Caffeine does not in fact implement LRU, I don't think you can reliably expect it to exhibit strict LRU behavior when you inspect the entries in the cache.</p>
<p>If you absolutely must have LRU behavior, then the JDK standard <a href="http://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html" rel="noreferrer"><code>LinkedHashMap</code></a> is a good, straightforward choice. You would need to subclass it and override <a href="http://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html#removeEldestEntry(java.util.Map.Entry)" rel="noreferrer"><code>removeEldestEntry</code></a> with logic to signal when the cache has grown larger than you want it. If multi-threaded usage is necessary, then you'll need to wrap operations with appropriate synchronization.</p>
<p>Caffeine was heavily inspired by the <a href="https://code.google.com/p/guava-libraries/wiki/CachesExplained" rel="noreferrer">Guava Cache</a>, which similarly provides concurrent access and has approximate LRU behavior. A quick test of your code against the Guava cache shows similar results. I am not aware of any standard library that would provide predictable, externally observable LRU results and true concurrent access without a coarse-grained lock.</p>
<p>You might reconsider whether it's really a requirement to have strict, externally observable LRU results. By its nature, a cache is fast temporary storage to provide optimized lookups. I would not expect program behavior to change drastically dependent on whether the cache implements strict LRU, an approximation of LRU, LFU, or some other eviction policy.</p>
<p>This prior question also has a great discussion of LRU cache options in Java.</p>
<p><a href="https://stackoverflow.com/questions/221525/how-would-you-implement-an-lru-cache-in-java-6">How would you implement an LRU cache in Java?</a></p> | 2016-01-09 17:26:56.323000+00:00 | 2016-01-10 23:38:37.203000+00:00 | 2017-05-23 11:46:16.967000+00:00 | null | 34,694,920 | <p>I'm trying to use Caffeine as LRU cache, so entries that were added first will be evicted first.
Ran this code:</p>
<pre><code>final Cache<Object, Object> map = Caffeine.newBuilder()
.maximumSize(10)
.initialCapacity(10)
.build();
for (long i=0; i<20;i++) {
map.put(i, i);
}
map.cleanUp();
System.out.println(map.getAllPresent(map.asMap().keySet()));
</code></pre>
<p>Which prints :</p>
<pre><code>{0=0, 1=1, 2=2, 3=3, 4=4, 5=5, 6=6, 7=7, 8=8, 19=19}
</code></pre>
<p>But I expected </p>
<pre><code>{10=10, 11=11, 12=12, 13=13, 14=14, 15=15, 16=16, 17=17, 18=18, 19=19}
</code></pre>
<p>What am I doing wrong?</p> | 2016-01-09 14:51:15.053000+00:00 | 2016-01-14 05:24:38.227000+00:00 | 2016-01-14 05:24:38.227000+00:00 | java|lru|caffeine | ['http://arxiv.org/pdf/1512.00727.pdf', 'https://github.com/ben-manes/caffeine/wiki/Efficiency', 'http://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html', 'http://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html#removeEldestEntry(java.util.Map.Entry)', 'https://code.google.com/p/guava-libraries/wiki/CachesExplained', 'https://stackoverflow.com/questions/221525/how-would-you-implement-an-lru-cache-in-java-6'] | 6 |
13,221,327 | <p>It's possible to train tesseract to recognize handwriting. Here are the instructions: <a href="https://tesseract-ocr.github.io/tessdoc/Training-Tesseract" rel="nofollow noreferrer">https://tesseract-ocr.github.io/tessdoc/Training-Tesseract</a></p>
<p>But don't expect very good results. Academics have typically gotten accuracy results topping out about 90%. Here are a couple references for <a href="https://arxiv.org/pdf/1003.5893.pdf" rel="nofollow noreferrer">words</a> and <a href="https://arxiv.org/pdf/1003.5898.pdf" rel="nofollow noreferrer">numbers</a>. So if your use case can deal with at least 1/10 errors, this might work for you.</p> | 2012-11-04 18:03:56.083000+00:00 | 2021-05-06 01:28:11.220000+00:00 | 2021-05-06 01:28:11.220000+00:00 | null | 12,310,287 | <p>I have a dictionary of words in a text file, separated by newlines. And I want to recognize the handwriting using Tesseract, and output the nearest matching line in the text file.</p>
<p>This is the first time I'll be using Tesseract, and it's already in my project workspace, I just need the training data.</p>
<p>Is it possible to train Tesseract to do this?</p> | 2012-09-07 00:39:20.670000+00:00 | 2021-05-06 01:28:11.220000+00:00 | 2012-09-07 03:41:52.583000+00:00 | android|tesseract|handwriting | ['https://tesseract-ocr.github.io/tessdoc/Training-Tesseract', 'https://arxiv.org/pdf/1003.5893.pdf', 'https://arxiv.org/pdf/1003.5898.pdf'] | 3 |
61,700,910 | <p>Learning rates of the client and server optimizer can be very important in determining the final accuracy of the model.</p>
<p>Very high client learning rates can result in high <em>training</em> accuracy, since the <em>client local</em> training process is overfitting to the <em>client local</em> data (the two classes created in the synthetic split). However, these overfit models can average out to very little update of the <em>global</em> model. Lowering the client learning rate can be helpful here.</p>
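<p>As a rough sketch of where these knobs live (assuming the <code>tff.learning</code> API of that era and an existing <code>model_fn</code>; the names and values here are placeholders), both learning rates can be set when building the iterative process:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import tensorflow_federated as tff

iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,  # your tff.learning.Model constructor
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0, momentum=0.9))
</code></pre>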
<p>In <a href="https://arxiv.org/abs/2003.00295" rel="nofollow noreferrer">Reddi 2020</a> it was found that if using an adaptive optimizer (e.g. Adam/Yogi) on the server, tuning the epsilon parameter was necessary for best performance. Adding momentum to SGD also had significant improvements in convergence rates.</p> | 2020-05-09 17:28:22.087000+00:00 | 2020-05-09 17:28:22.087000+00:00 | null | null | 61,219,479 | <p>I create a Non-IID data set where I divide 60000 examples(10 classes and every class has 6000 examples) to 200 fragments, and every fragment has 300 examples. There are 100 clients and I allocate 2 fragments randomly to every client. This is the situation of some clients.
<a href="https://i.stack.imgur.com/P8Uhu.png" rel="nofollow noreferrer">the situation of some clients</a></p>
<p>I use this data set to train my TFF model. The accuracy on the training set is about 0.99 but the accuracy on the test set is only about 0.5. I have tried many times with no change.
I think maybe the model is overfitting, so I added two dropout layers to test, but I get the same result. Then I changed the relu() activation to leakyrelu(), and changed the optimizer from SGD to Adam, but accuracy is still about 0.5. I don't know why. I know Non-IID data will cause a drop in accuracy and that FedAvg can relieve it. TFF uses FedAvg to aggregate client models, which means I have already used FedAvg as my underlying structure, is that right? But why do I get such a low accuracy? </p> | 2020-04-15 00:35:21.863000+00:00 | 2020-05-09 17:28:22.087000+00:00 | null | python|tensorflow|imbalanced-data|tensorflow-federated | ['https://arxiv.org/abs/2003.00295'] | 1
55,455,537 | <p>You can learn more in the publication by <a href="https://arxiv.org/abs/1301.3781" rel="nofollow noreferrer">Tomas Mikolov</a> where he introduces vector embedding called Word2Vec.</p>
<p>The idea behind an embedding is to compress a sparse vector into a low-dimensional dense vector. Let's consider a dictionary of 100k words. Each word can be represented as a vector of size 100000 with all 0s and a single 1 depending on the word's position/index. Representing a word directly by its index is not effective (consider ordering or mathematical operations on indexes), but large sparse vectors are not very suitable to work with in the context of a NN either. Embeddings try to encapsulate words in a lower-dimensional continuous space, say k=100, where similar words have a lower distance between each other. Mikolov defined similarity by how often words appear near each other, and trained/tuned the embedding matrix E in such a way that E(i)*E(j) would roughly represent the probability of words <em>i and j</em> appearing nearby. Later publications showed that you can achieve similar results by randomly initializing the embedding layer in a multi-layer network; after training, the embeddings still represent some sort of <em>similarity</em>.</p>
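<p>Concretely, in Keras the embedding layer is just a trainable lookup table; a tiny sketch (the sizes here are arbitrary):</p>
<pre><code>import numpy as np
import tensorflow as tf

emb = tf.keras.layers.Embedding(input_dim=6, output_dim=3)  # 6 possible indices -> 3-d vectors
out = emb(np.array([[4], [2]]))   # integer indices in, dense rows of the table out
print(out.shape)                  # (2, 1, 3)
</code></pre>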
<p>Roughly speaking: the NN randomly projects every index to some point in a low-dimensional space. During training with backpropagation, these projections are organized into a sort of loose structure, helpful to the task at hand. </p>
<pre><code>Index Sparse N=6 -> Embedding K=3
----------------------------------------
0 [1 0 0 0 0 0] [0.425 0.233 0.556]
1 [0 1 0 0 0 0] [0.046 0.975 0.058]
2 [0 0 1 0 0 0] [0.306 0.424 0.651]
3 [0 0 0 1 0 0] [0.293 0.12 0.546]
4 [0 0 0 0 1 0] [0.236 0.657 0.046]
5 [0 0 0 0 0 1] [0.907 0.321 0.882]
</code></pre> | 2019-04-01 12:49:28.187000+00:00 | 2019-04-02 06:32:13.187000+00:00 | 2019-04-02 06:32:13.187000+00:00 | null | 55,453,716 | <p>In this <a href="https://keras.io/layers/embeddings/" rel="nofollow noreferrer">documentation</a> it says that it "Turns positive integers (indexes) into dense vectors of fixed size. eg. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]"</p>
<p>Could anyone please make it clearer? </p>
<p>How does the function convert integers into dense vectors? </p> | 2019-04-01 11:08:17.763000+00:00 | 2019-04-02 06:53:35.247000+00:00 | 2019-04-02 06:53:35.247000+00:00 | python|tensorflow|keras | ['https://arxiv.org/abs/1301.3781'] | 1 |
52,613,961 | <p>You can use Autoencoder on Textual data as explained <a href="https://machinelearningmastery.com/encoder-decoder-models-text-summarization-keras/" rel="nofollow noreferrer">here</a>.</p>
<p>Autoencoders usually work better on image data, but recent approaches have changed the autoencoder in a way that makes it also work well on text data.</p>
<p>Have a look at <a href="https://arxiv.org/abs/1705.02033" rel="nofollow noreferrer">this</a>.</p>
<p>the code is also available in GitHub.</p> | 2018-10-02 18:05:00.933000+00:00 | 2018-10-02 19:18:26.457000+00:00 | 2018-10-02 19:18:26.457000+00:00 | null | 33,794,615 | <p>I am doing my project based on health care.I am going to train my autoencoders with the symptoms and the diseases i.e my input is in textual form. Will that work? (I am using Rstudio).Please anyone help me with this</p> | 2015-11-19 03:14:28.260000+00:00 | 2018-10-02 19:18:26.457000+00:00 | null | autoencoder | ['https://machinelearningmastery.com/encoder-decoder-models-text-summarization-keras/', 'https://arxiv.org/abs/1705.02033'] | 2 |
65,784,745 | <p>The <strong>time-distributed dense layer</strong>, as the name suggests, is just an ordinary dense layer applied to every temporal slice of an input. You can think of it as a special form of <strong>RNN cell</strong>, i.e. one without a recurrent hidden state.</p>
<p>So you can use any time-distributed layer as the output layer of an autoencoder that deals with time-distributed inputs, e.g. an RNN layer with an <strong>LSTM Cell</strong>, <strong>GRU Cell</strong>, <strong>Simple RNN Cell</strong>, or a <strong>time-distributed dense layer</strong>. In the <a href="https://arxiv.org/abs/1502.04681" rel="nofollow noreferrer">research paper that proposes the LSTM-Autoencoder</a>, the basic model for reconstructing a sequence of vectors (image patches or features) uses only one LSTM layer in both the encoder and the decoder; the model structure is:</p>
<p><a href="https://i.stack.imgur.com/Mffem.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mffem.png" alt="enter image description here" /></a></p>
<p>The following is an example of using a <strong>time-distributed dense layer</strong> in the decoder:</p>
<pre><code>import torch
import torch.nn as nn

class Decoder(nn.Module):
def __init__(self, seq_len, input_dim=64, n_features=1):
super(Decoder, self).__init__()
self.seq_len, self.input_dim = seq_len, input_dim
self.hidden_dim, self.n_features = 2 * input_dim, n_features
self.rnn = nn.LSTM(
input_size=input_dim,
hidden_size=self.hidden_dim,
num_layers=1,
batch_first=True)
self.output_layer = nn.Linear(self.hidden_dim, n_features)
def forward(self, x):
x = x.repeat(self.seq_len, self.n_features)
x = x.reshape((self.n_features, self.seq_len, self.input_dim))
x, (hidden_n, cell_n) = self.rnn(x)
x = x.reshape((self.seq_len, self.hidden_dim))
return self.output_layer(x)
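
# Hypothetical usage (an illustration added here; shapes assumed, not from the paper):
#   decoder = Decoder(seq_len=30, input_dim=64, n_features=1)
#   out = decoder(torch.randn(64))   # reconstructs a (30, 1) sequence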
</code></pre> | 2021-01-19 02:32:08.730000+00:00 | 2021-01-19 02:32:08.730000+00:00 | null | null | 65,783,297 | <p>I am trying to get a LSTM autoencoder to recreate its inputs. So far I have:</p>
<pre><code>import torch.nn as nn

class getSequence(nn.Module):
def forward(self, x):
out, _ = x
return out
class getLast(nn.Module):
def forward(self, x):
out, states = x
states = states[len(states) - 1]
return states
class AEncoder(nn.Module):
def __init__(self, input_size, first_layer, second_layer, n_layers):
super(AEncoder, self).__init__()
self.n_layers = n_layers
self.encode = nn.Sequential(nn.LSTM(input_size, first_layer, batch_first=True),
getSequence(),
nn.ReLU(True),
nn.LSTM(first_layer, second_layer),
getLast())
self.decode = nn.Sequential(nn.LSTM(second_layer, first_layer),
getSequence(),
nn.ReLU(True),
nn.LSTM(first_layer, input_size),
getSequence())
def forward(self, x):
x = x.float()
x = self.encode(x)
x = x.repeat(32, 1, 1) # repeating last hidden state of self.encode
x = self.decode(x)
return x
</code></pre>
<p>While researching I have been seeing some people adding a time-distributed dense layer at the end of the <code>self.decode</code>. I am confused if that final layer is specific to other tasks autoencoders are used for, if so, can I ignore that layer if I am only trying to recreate inputs?</p> | 2021-01-18 23:06:26.273000+00:00 | 2021-01-19 02:32:08.730000+00:00 | null | deep-learning|pytorch|recurrent-neural-network|autoencoder | ['https://arxiv.org/abs/1502.04681', 'https://i.stack.imgur.com/Mffem.png'] | 2 |