column                  dtype           min     max
a_id                    int64           7.84k   73.8M
a_body                  stringlengths   61      33k
a_creation_date         stringlengths   25      32
a_last_activity_date    stringlengths   25      32
a_last_edit_date        stringlengths   25      32
a_tags                  float64
q_id                    int64           826     73.8M
q_body                  stringlengths   61      29.9k
q_creation_date         stringlengths   25      32
q_last_activity_date    stringlengths   25      32
q_last_edit_date        stringlengths   25      32
q_tags                  stringlengths   1       103
_arxiv_links            stringlengths   2       6.69k
_n_arxiv_links          int64           0       94
66,070,459
<p><strong>I could be wrong,</strong> but it should not matter whether it is a classification or a regression problem. Think about it mathematically.</p> <p><strong>Generally speaking</strong>, having <code>softmax</code> in the hidden layers is not preferred because we want every neuron to be independent of the others. If you apply <code>softmax</code>, the neurons become linearly dependent, since the activation forces their sum to equal one. That does not mean it is never used; see, for example, <a href="https://arxiv.org/pdf/1410.5401v2.pdf" rel="nofollow noreferrer">this paper</a>.</p> <p>Compare this with an advanced activation such as <code>LeakyReLU</code>: there each neuron stays under control individually, since its alpha rate can be tuned. With <code>softmax</code> that is not possible.</p> <p><strong>Now back to the question:</strong> I think this is dataset dependent. The model is able to generalize on this dataset with <code>softmax</code>, but I don't think it will always work that way. As mentioned above, you are making the neurons linearly dependent on each other, so if one neuron learns something wrong, it will affect the whole network's generalization because the other values will be affected as well.</p> <p><strong>Edit</strong>: I tested two models. On some data <code>softmax</code> worked as well as <code>relu</code>, but the point remains that all the neurons are dependent on each other, and making them dependent is not a risk worth taking, especially in large networks.</p> <p>Data:</p> <pre><code>X_train = np.random.randn(10000,20) y_train = np.random.randn(10000,1) X_test = np.random.randn(5000,20) y_test = np.random.randn(5000,1) </code></pre> <p>With <strong>Softmax</strong>:</p> <pre><code>model = Sequential() model.add(Dense(512, activation='relu',input_shape=(20,))) model.add(Dense(256,activation='softmax')) model.add(Dense(512,activation='softmax')) model.add(Dense(256,activation='softmax')) model.add(Dense(128,activation='softmax')) model.add(Dense(1,activation='linear')) model.compile(loss='mse',optimizer='adam') model.fit(X_train, y_train, epochs = 16, validation_data= (X_test, y_test)) </code></pre> <p><strong>Result</strong>: the model was not able to learn this data. The loss stalled and stayed stuck in the same region. It seems as if one neuron wants to learn but the others will not let it.</p> <pre><code>Epoch 15/16 313/313 [==============================] - 1s 3ms/step - loss: 1.0259 - val_loss: 1.0269 Epoch 16/16 313/313 [==============================] - 1s 3ms/step - loss: 1.0020 - val_loss: 1.0271 </code></pre> <p>With <strong>relu</strong>:</p> <pre><code>model = Sequential() model.add(Dense(512, activation='relu',input_shape=(20,))) model.add(Dense(256,activation='relu')) model.add(Dense(512,activation='relu')) model.add(Dense(256,activation='relu')) model.add(Dense(128,activation='relu')) model.add(Dense(1,activation='linear')) model.compile(loss='mse',optimizer='adam') model.fit(X_train, y_train, epochs = 16, validation_data= (X_test, y_test)) # Obviously overfitting but that's not the point here. </code></pre> <p><strong>Result</strong>: the model with <code>relu</code> was able to learn the data.</p> <pre><code>Epoch 15/16 313/313 [==============================] - 1s 3ms/step - loss: 0.5580 - val_loss: 1.3091 Epoch 16/16 313/313 [==============================] - 1s 3ms/step - loss: 0.4808 - val_loss: 1.3290 </code></pre>
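<p>To make the coupling concrete, here is a minimal numpy sketch (toy logits, not tied to the models above) showing that a softmax layer's outputs always sum to one, so pushing one pre-activation up pulls every other activation down:</p> <pre><code>import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([1.0, 2.0, 3.0, 4.0])
a = softmax(z)
print(a, a.sum())          # the outputs sum to 1.0

z2 = z.copy()
z2[0] += 2.0               # increase a single pre-activation
a2 = softmax(z2)
print(a2 - a)              # every other unit's activation goes down
</code></pre>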
2021-02-05 20:56:20.433000+00:00
2021-02-05 21:27:39.437000+00:00
2021-02-05 21:27:39.437000+00:00
null
66,069,636
<p>I have done manual hyperparameter optimization for ML models before and always defaulted to <em>tanh</em> or <em>relu</em> as hidden layer activation functions. Recently, I started trying out Keras Tuner to optimize my architecture and accidentally left <em>softmax</em> as a choice for hidden layer activation.</p> <p>I have only ever seen <em>softmax</em> used in classification models in the output layer, never as a hidden layer activation, especially for regression. This model has really good performance in predicting temperature, but I am having a tough time justifying using this model.</p> <p>I have seen posts like <a href="https://stackoverflow.com/questions/37588632/why-use-softmax-only-in-the-output-layer-and-not-in-hidden-layers">this one</a> which talk about why it should be used only for the output, but is there any justification in my case? I am showing the overall architecture below, for reference.</p> <pre><code>model = Sequential() model.add(Dense(648, activation='relu',input_shape=(train_x.shape[1],))) model.add(Dropout(0.3)) model.add(LayerNormalization()) model.add(Dense(152,activation='relu')) model.add(Dropout(0.15)) model.add(LayerNormalization()) model.add(Dense(924,activation='softsign')) model.add(Dropout(0.37)) model.add(LayerNormalization()) model.add(Dense(248,activation='softmax')) model.add(Dropout(0.12)) model.add(LayerNormalization()) model.add(Dense(1,activation='linear')) model.compile(loss='mse',optimizer='Adam') </code></pre>
2021-02-05 19:43:28.030000+00:00
2021-02-05 21:27:39.437000+00:00
null
python|keras|softmax
['https://arxiv.org/pdf/1410.5401v2.pdf']
1
13,017,935
<p>Daniel Lemire has a couple of papers on pre-sorting to increase compression and performance. Here's the latest: <a href="http://arxiv.org/abs/1207.2189" rel="nofollow">http://arxiv.org/abs/1207.2189</a></p> <p>You might look at his EWAH variant as well.</p> <p>The prevailing feeling is that bitmap compression techniques work great when the dataset changes slowly, since most implementations discard and rebuild the index on each change. For datasets that change more often, traditional index approaches (such as B-Tree variants) are still king.</p> <p>Implementations: <a href="https://github.com/lemire/javaewah" rel="nofollow">https://github.com/lemire/javaewah</a> and <a href="https://github.com/lemire/EWAHBoolArray" rel="nofollow">https://github.com/lemire/EWAHBoolArray</a></p>
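<p>As a rough illustration of why pre-sorting helps (a toy numpy sketch with random data, not Lemire's actual algorithm): sorting the rows of a bitmap index lexicographically produces longer runs in each column, and fewer 0/1 transitions per column is roughly what word-aligned schemes like EWAH exploit:</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
bitmap = rng.random((10000, 16)) < 0.05          # sparse boolean bitmap index, 16 columns

def transitions(b):
    # number of 0/1 value changes down each column; fewer changes -> longer runs
    return int(np.sum(b[1:] != b[:-1]))

order = np.lexsort(bitmap.T[::-1])               # sort rows lexicographically
print("runs proxy, original:", transitions(bitmap))
print("runs proxy, sorted:  ", transitions(bitmap[order]))
</code></pre>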
2012-10-22 18:59:16.480000+00:00
2013-02-12 22:35:03.423000+00:00
2013-02-12 22:35:03.423000+00:00
null
11,924,954
<p>I have been developing a word-aligned bitmap compression algorithm for data indexing. The algorithm is based on the WAH compression research paper. Compressed bitmaps perform well on bit-wise operations and are very space efficient, but modifying a compressed bitmap is not very efficient, because a modification requires splitting a compressed word-sized block, and the resulting memmove calls cause a performance bottleneck.</p> <blockquote> <p>Please look at the following example.</p> <p>Example data set: [1000000,34,9,23456,6543,10000000,23440004,100,345]</p> </blockquote> <p>Performance drops due to the random nature of the data set; in a real application scenario this can happen.</p> <ol> <li>Can anyone give me a hint on how to overcome this performance problem?</li> </ol>
2012-08-12 19:01:42.053000+00:00
2013-02-12 22:35:03.423000+00:00
null
c++|database|performance|compression|bit-manipulation
['http://arxiv.org/abs/1207.2189', 'https://github.com/lemire/javaewah', 'https://github.com/lemire/EWAHBoolArray']
3
64,773,818
<p>Many people use &quot;Doc2Vec&quot; to refer to the word2vec-like algorithm introduced by a paper titled <a href="https://arxiv.org/abs/1405.4053" rel="nofollow noreferrer">Distributed Representation of Sentences and Documents</a> (by Le &amp; Mikolov). That paper calls the algorithm 'Paragraph Vector', without using the name 'Doc2Vec', and indeed introduces an extra vector per document, like you describe. (That is, the doc-vector is trained a bit like a 'floating' pseudoword-vector, which contributes to the input 'context' for every training prediction in that document.)</p> <p>I'm not familiar with R or that R <code>word2vec</code> package, but from the docs you forwarded, it does <strong>not</strong> sound like that <code>doc2vec</code> function implements the 'Paragraph Vector' algorithm that others call 'Doc2Vec'. In particular:</p> <ul> <li><p>'Paragraph Vector' doc-vectors are <strong>not</strong> a simple sum-of-word-vectors</p> </li> <li><p>'Paragraph Vector' doc-vectors are created by a separate word2vec-like training process that co-creates any necessary word-vectors simultaneously with that training. Specifically: that process does <strong>not</strong> normally use as input some other pre-trained word-vectors, nor create word-vectors as a 1st step. (And further: the PV-DBOW option of the 'Paragraph Vector' paper doesn't create traditional word-vectors at all.)</p> </li> </ul> <p>It appears that function is poorly named, and if you need to use the actual 'Paragraph Vector' algorithm, you will need to look elsewhere.</p>
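<p>For comparison, if switching languages is an option, the Python <code>gensim</code> library does implement the 'Paragraph Vector' algorithm under the name <code>Doc2Vec</code>. A minimal sketch (toy corpus, illustrative parameters) showing that the trained doc-vector is not simply the sum of the word-vectors:</p> <pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument

texts = [["the", "cat", "sat", "on", "the", "mat"],
         ["dogs", "chase", "cats"],
         ["the", "mat", "was", "red"]]
docs = [TaggedDocument(words, [i]) for i, words in enumerate(texts)]

# PV-DM: doc-vectors and word-vectors are co-trained in one word2vec-like process
model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=50, dm=1)

trained_doc_vec = model.dv[0]                         # the learned paragraph vector
summed_words = sum(model.wv[w] for w in texts[0])     # what the R function's docs describe
print(trained_doc_vec[:5])
print(summed_words[:5])                               # generally not the same thing
</code></pre>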
2020-11-10 17:32:04.837000+00:00
2020-11-11 20:12:59.113000+00:00
2020-11-11 20:12:59.113000+00:00
null
64,772,221
<p>I am a student (computer science). This is my first question on Stack Overflow. I really would appreciate your help! (The package I am referring to is called 'word2vec'; that's why the tags/title were a bit confusing to choose.)</p> <p>In the description of the doc2vec function (here <a href="https://cran.r-project.org/web/packages/word2vec/word2vec.pdf" rel="nofollow noreferrer">https://cran.r-project.org/web/packages/word2vec/word2vec.pdf</a>) it says:</p> <blockquote> <p>Document vectors are the sum of the vectors of the words which are part of the document standardised by the scale of the vector space. This scale is the sqrt of the average inner product of the vector elements.</p> </blockquote> <p>From what I understood, doc2vec adds one additional vector for every paragraph, which, in my eyes, seems to be different from the description above.</p> <p>Is my understanding of doc2vec correct, or close enough? And: does the cited implementation work like the doc2vec algorithm?</p>
2020-11-10 15:52:26.460000+00:00
2020-11-21 08:27:51.943000+00:00
2020-11-21 08:27:51.943000+00:00
r|word2vec|doc2vec
['https://arxiv.org/abs/1405.4053']
1
43,505,854
<p>A way to get around this without rooting the phone is to send your packets via multicast UDP*. These packets will make it from GO1 to GO2. </p> <p>There are some side effects to this: </p> <ul> <li><p>To use this for networking you must perform encapsulation and routing at the OSI Application level (not efficient). </p></li> <li><p>You will also need to route based on MAC addresses since every device has the same 192.168.49.1 address.</p></li> <li><p>"It is important to note that the multicast socket encapsulates a one-to-many unicast communication and, as a result of this, cannot fully utilize the total available WiFi and WiFi Direct bandwidth" *</p></li> </ul> <p>Something else worth noting: </p> <ul> <li>As you scale up the number of GOs, you will run into a problem of all nodes operating on the same wifi channel. This isn't a problem with a few devices, but with hundreds of devices, it will be a huge problem.</li> </ul> <p>*This method was mentioned in Colin Funai, Cristiano Tapparello, and Wendi Heinzelman paper titled "Supporting Multi-hop Device-to-Device Networks Through WiFi Direct Multi-group Networking" found here: <a href="https://arxiv.org/pdf/1601.00028.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1601.00028.pdf</a></p>
2017-04-19 20:53:29.257000+00:00
2017-04-19 20:53:29.257000+00:00
null
null
36,867,687
<p>I have two Android KitKat phones, both running WiFi-Direct groups as Group Owners; let's call them GO1 and GO2.</p> <p>I managed to connect GO1 as a legacy client to GO2 without breaking any of the (previously set) wifi-direct groups.</p> <p>The problem is that, as you might know, the GO IP address is hardcoded in the Android source and is set to 192.168.49.1</p> <p>Therefore, both of my devices, GO1 and GO2, have the same IP address (**)... each on its own local network.</p> <p>My app is both client and server at the same time. But both networks are using the same IP range (192.168.49.XXX), which, apparently, I cannot change.</p> <p>As a result I cannot create a TCP connection between them if they are both hosting a WiFi-Direct group, since any device will connect to itself when trying to connect to 192.168.49.1</p> <p>So the questions are:</p> <ul> <li>Is there a way to change the IP range used in Wifi-Direct?</li> <li>Is there a way to use IPv6 instead of IPv4 in Wifi-Direct?</li> <li>Can any of this be done without rooting the phone?</li> <li>Any other suggestion?</li> </ul> <p>** : Actually, because GO1 is connecting as a legacy client to GO2, GO1 is known as 192.168.49.227 (for example) to GO2, and GO2 is known as 192.168.49.1 to GO1. But because GO1 is ALSO a GO, it is also known as 192.168.49.1 to its own clients (and to itself).</p>
2016-04-26 14:19:00.243000+00:00
2017-04-19 20:53:29.257000+00:00
2016-04-27 10:52:02.297000+00:00
android|ip|ipv6|wifi-direct|wifip2p
['https://arxiv.org/pdf/1601.00028.pdf']
1
48,436,520
<p>You might be interested in my paper <a href="https://arxiv.org/pdf/1801.07779.pdf" rel="noreferrer">The WiLI benchmark dataset for written language identification</a>. I also benchmarked a couple of tools.</p> <p>TL;DR:</p> <ul> <li>CLD-2 is pretty good and extremely fast</li> <li><a href="https://pypi.python.org/pypi/langdetect?" rel="noreferrer">lang-detect</a> is a tiny bit better, but much slower</li> <li>langid is good, but CLD-2 and lang-detect are much better</li> <li>NLTK's Textcat is neither efficient nor effective.</li> </ul> <p>You can install <a href="https://github.com/MartinThoma/lidtk" rel="noreferrer"><code>lidtk</code></a> and classify languages:</p> <pre><code>$ lidtk cld2 predict --text "this is some text written in English" eng $ lidtk cld2 predict --text "this is some more text written in English" eng $ lidtk cld2 predict --text "Ce n'est pas en anglais" fra </code></pre>
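<p>For the original filtering use case, here is a minimal sketch with the <code>langdetect</code> package (the lang-detect port benchmarked above); the seed line just makes its probabilistic output repeatable:</p> <pre><code>from langdetect import detect, DetectorFactory
from langdetect.lang_detect_exception import LangDetectException

DetectorFactory.seed = 0      # make results repeatable

docs = [
    "this is some text written in English",
    "this is some more text written in English",
    "Ce n'est pas en anglais",
]

english_only = []
for d in docs:
    try:
        if detect(d) == "en":
            english_only.append(d)
    except LangDetectException:   # raised for empty / featureless strings
        pass

print(english_only)
</code></pre>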
2018-01-25 05:58:05.390000+00:00
2018-02-06 05:43:02.820000+00:00
2018-02-06 05:43:02.820000+00:00
null
43,377,265
<p>I am using both <a href="http://www.nltk.org/" rel="noreferrer">Nltk</a> and <a href="http://scikit-learn.org/stable/" rel="noreferrer">Scikit Learn</a> to do some text processing. However, within my list of documents I have some documents that are not in English. For example, the following could be true:</p> <pre><code>[ "this is some text written in English", "this is some more text written in English", "Ce n'est pas en anglais" ] </code></pre> <p>For the purposes of my analysis, I want all sentences that are not in English to be removed as part of pre-processing. However, is there a good way to do this? I have been Googling, but cannot find anything specific that will let me recognize if strings are in English or not. Is this something that is not offered as functionality in either <code>Nltk</code> or <code>Scikit learn</code>? <b>EDIT</b> I've seen questions both like <a href="https://stackoverflow.com/questions/29099621/how-to-find-out-wether-a-word-exists-in-english-using-nltk">this</a> and <a href="https://stackoverflow.com/questions/3788870/how-to-check-if-a-word-is-an-english-word-with-python">this</a> but both are for individual words... Not a "document". Would I have to loop through every word in a sentence to check if the whole sentence is in English?</p> <p>I'm using Python, so libraries that are in Python would be preferable, but I can switch languages if needed, just thought that Python would be the best for this.</p>
2017-04-12 18:41:32.477000+00:00
2022-04-14 01:45:44.743000+00:00
2017-11-29 08:27:32.963000+00:00
python|scikit-learn|nlp|nltk
['https://arxiv.org/pdf/1801.07779.pdf', 'https://pypi.python.org/pypi/langdetect?', 'https://github.com/MartinThoma/lidtk']
3
51,525,403
<p><strong>Segmentation Accuracy</strong></p> <p>This is a pretty common problem addressed in image segmentation literature, e.g., <a href="https://stackoverflow.com/questions/13974167/how-to-test-accuracy-of-segmentation-algorithm">here is a StackOverflow post</a></p> <p>One common approach is to consider the ratio of "correct pixels" to "incorrect pixels," which is common in <strong>image segmentation</strong> for safety domain, e.g., <a href="https://arxiv.org/abs/1703.06870" rel="nofollow noreferrer">Mask RCNN</a>, <a href="http://www.cs.cmu.edu/~aayushb/pixelNet/" rel="nofollow noreferrer">PixelNet</a>. </p> <p>Treating it as more of an <strong>object detection</strong> task, you could take the overlap of the hull of the objects and just measure <a href="https://dspace2.flinders.edu.au/xmlui/bitstream/handle/2328/27165/Powers%20Evaluation.pdf?sequence=1&amp;isAllowed=y" rel="nofollow noreferrer">accuracy</a> (commonly broken down into <a href="https://en.wikipedia.org/wiki/Precision_and_recall" rel="nofollow noreferrer">precision, recall</a>, <a href="https://en.wikipedia.org/wiki/F1_score" rel="nofollow noreferrer">f-score</a>, and other measures with <a href="https://dspace2.flinders.edu.au/xmlui/handle/2328/27165" rel="nofollow noreferrer">various bias/skews</a>). This allows you to produce an <a href="https://en.wikipedia.org/wiki/Receiver_operating_characteristic" rel="nofollow noreferrer">ROC curve</a> that can be calibrated for false positives/false negatives.</p> <p>There is no domain-agnostic consensus on what's correct. <a href="http://www.cvlibs.net/datasets/kitti/eval_semantics.php" rel="nofollow noreferrer">KITTI provides both.</a></p> <p>Mask RCNN is open source state-of-the-art, and provides implemenations <strong>in python</strong> of</p> <ul> <li><a href="https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/utils.py#L665" rel="nofollow noreferrer">Computing image matching between segmented and original</a></li> <li><a href="https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/visualize.py#L171" rel="nofollow noreferrer">Displaying the differences</a></li> </ul> <p>In your domain (medicine), standard statistical rules apply. Use a holdout set. Cross validate. Etc. (*)</p> <p><em>Note:</em> although the literature space is dauntingly large, I'd caution you to take a look at some domain-relevant papers, as they may take fewer "statistical short cuts" than other vision (digit recognition e.g.) projects accept. </p> <ul> <li>"<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4533825/" rel="nofollow noreferrer">Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool</a>" provides some summary methods in your your domain </li> <li>"<a href="https://www.annualreviews.org/doi/abs/10.1146/annurev.bioeng.2.1.315" rel="nofollow noreferrer">Current methods in image segmentation</a>" has about 2500 citations but is a little older. </li> <li>"<a href="https://aapm.onlinelibrary.wiley.com/doi/abs/10.1118/1.597000" rel="nofollow noreferrer">Review of MR image segmentation techniques using pattern recognition</a>" is a little older still and will get you safely into "traditional" vision models. 
</li> <li><a href="https://pubs.rsna.org/doi/abs/10.1148/radiology.218.2.r01fe44586" rel="nofollow noreferrer">Automated Segmentation of MR Images of Brain Tumors</a> is largely about its segmentation validation process </li> </ul> <hr> <p><strong>Python</strong></p> <p>Besides the mask rcnn links above, <a href="http://scikit-learn.org/stable/" rel="nofollow noreferrer">scikit-learn</a> provides some extremely user friendly tools and is considered part of the standard science "stack" for python.</p> <p>Implementing the difference between images in python is trivial (using numpy). Here's an overkill <a href="https://stackoverflow.com/questions/189943/how-can-i-quantify-difference-between-two-images">SO link</a>. </p> <p>Bounding box intersection in python <a href="https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/" rel="nofollow noreferrer">is easy to implement on one's own</a>; I'd use a library like <a href="https://stackoverflow.com/questions/14697442/faster-way-of-polygon-intersection-with-shapely">shapely if you want to measure general polygon intersection</a>.</p> <p>Scikit-learn has some nice machine-learning evaluation tools, for example,</p> <ul> <li><a href="http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html" rel="nofollow noreferrer">ROC curves</a></li> <li><a href="http://scikit-learn.org/stable/modules/cross_validation.html" rel="nofollow noreferrer">Cross validation</a></li> <li><a href="http://scikit-learn.org/stable/model_selection.html" rel="nofollow noreferrer">Model selection</a></li> <li><a href="http://scikit-learn.org/stable/modules/classes.html" rel="nofollow noreferrer">A million others</a></li> </ul> <hr> <p><strong>Literature Searching</strong></p> <p>One reason that you may have trouble searching for the answer is because you're trying to measure performance of an unsupervised method, clustering, in a <strong>supervised learning</strong> arena. "Clusters" are fundamentally under-defined in mathematics (**). You want to be looking at the supervised learning literature for accuracy measures.</p> <p>There is literature on unsupervised learning/clustering, too, which looks for topological structure, generally. <a href="https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html" rel="nofollow noreferrer">Here's a very introductory summary</a>. I don't think that is what you want.</p> <p>A common problem, especially at scale, is that supervised methods require labels, which can be <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Zlateski_On_the_Importance_CVPR_2018_paper.pdf" rel="nofollow noreferrer">time consuming to produce accurately</a> for dense segmentation. Object detection <a href="http://vision.stanford.edu/documents/Russakovsky_PhD_thesis_2015.pdf" rel="nofollow noreferrer">makes it a little easier</a>.</p> <p>There are some existing datasets for medicine (<a href="https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/" rel="nofollow noreferrer">[1]</a>, <a href="https://www.kaggle.com/paultimothymooney/identification-and-segmentation-of-nuclei-in-cells" rel="nofollow noreferrer">[2]</a>, e.g.) and <a href="https://arxiv.org/pdf/1702.03407.pdf" rel="nofollow noreferrer">some ongoing research in label-less metrics</a>. 
If none of these are options for you, then you may have to revert to considering it an unsupervised problem, but evaluation becomes very different in scope and utility.</p> <hr> <p><strong>Footnotes</strong></p> <p>[*] Vision people sometimes skip cross validation even though they shouldn't, mainly because the models are slow to fit and they're a lazy bunch. <em>Please don't skip a <a href="https://stats.stackexchange.com/questions/19048/what-is-the-difference-between-test-set-and-validation-set">train/test/validation split</a></em>, or your results may be dangerously useless</p> <p>[**] You can find all sorts of "formal" definitions, but never two people to agree on which one is correct or most useful. <a href="http://www.stat.cmu.edu/~larry/=sml/clustering.pdf" rel="nofollow noreferrer">Here's denser reading</a></p>
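<p>As a concrete starting point for the pixel-level route, a small numpy sketch (toy masks standing in for the annotated tumor mask and the clustering output) computing precision, recall, IoU and Dice between two binary masks:</p> <pre><code>import numpy as np

def segmentation_scores(gt, pred):
    """gt, pred: boolean arrays of the same shape (True = tumor pixel)."""
    tp = np.logical_and(gt, pred).sum()
    fp = np.logical_and(~gt, pred).sum()
    fn = np.logical_and(gt, ~pred).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    iou       = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    dice      = 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else 0.0
    return precision, recall, iou, dice

# toy example; in practice load the annotated mask and the clustering output instead
gt = np.zeros((100, 100), dtype=bool); gt[30:60, 30:60] = True
pred = np.zeros((100, 100), dtype=bool); pred[35:65, 35:65] = True
print(segmentation_scores(gt, pred))
</code></pre>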
2018-07-25 18:21:27.220000+00:00
2018-08-03 18:39:27.487000+00:00
2018-08-03 18:39:27.487000+00:00
null
51,525,036
<p>I have implemented several clustering algorithms on an image dataset, and I'm interested in deriving the success rate of the clustering. I have to detect the tumor area; in the original image I know where the tumor is located, so I would like to compare the two images and obtain the percentage of success. See the following images:</p> <p>Original image: I know the position of the cancer<img src="https://i.stack.imgur.com/TlMJl.png" alt=""></p> <p>Image after the clustering algorithm<img src="https://i.stack.imgur.com/hDT1b.png" alt=""></p> <p>I'm using python 2.7.</p>
2018-07-25 17:56:05.493000+00:00
2018-08-03 18:39:27.487000+00:00
2018-07-27 05:45:45.623000+00:00
python|image-processing|cluster-analysis|analysis
['https://stackoverflow.com/questions/13974167/how-to-test-accuracy-of-segmentation-algorithm', 'https://arxiv.org/abs/1703.06870', 'http://www.cs.cmu.edu/~aayushb/pixelNet/', 'https://dspace2.flinders.edu.au/xmlui/bitstream/handle/2328/27165/Powers%20Evaluation.pdf?sequence=1&isAllowed=y', 'https://en.wikipedia.org/wiki/Precision_and_recall', 'https://en.wikipedia.org/wiki/F1_score', 'https://dspace2.flinders.edu.au/xmlui/handle/2328/27165', 'https://en.wikipedia.org/wiki/Receiver_operating_characteristic', 'http://www.cvlibs.net/datasets/kitti/eval_semantics.php', 'https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/utils.py#L665', 'https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/visualize.py#L171', 'https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4533825/', 'https://www.annualreviews.org/doi/abs/10.1146/annurev.bioeng.2.1.315', 'https://aapm.onlinelibrary.wiley.com/doi/abs/10.1118/1.597000', 'https://pubs.rsna.org/doi/abs/10.1148/radiology.218.2.r01fe44586', 'http://scikit-learn.org/stable/', 'https://stackoverflow.com/questions/189943/how-can-i-quantify-difference-between-two-images', 'https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/', 'https://stackoverflow.com/questions/14697442/faster-way-of-polygon-intersection-with-shapely', 'http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html', 'http://scikit-learn.org/stable/modules/cross_validation.html', 'http://scikit-learn.org/stable/model_selection.html', 'http://scikit-learn.org/stable/modules/classes.html', 'https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html', 'http://openaccess.thecvf.com/content_cvpr_2018/papers/Zlateski_On_the_Importance_CVPR_2018_paper.pdf', 'http://vision.stanford.edu/documents/Russakovsky_PhD_thesis_2015.pdf', 'https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/', 'https://www.kaggle.com/paultimothymooney/identification-and-segmentation-of-nuclei-in-cells', 'https://arxiv.org/pdf/1702.03407.pdf', 'https://stats.stackexchange.com/questions/19048/what-is-the-difference-between-test-set-and-validation-set', 'http://www.stat.cmu.edu/~larry/=sml/clustering.pdf']
31
47,011,724
<p>This is normal behaviour and happens because your network is too confident in the quality of the input and doesn't learn to rely on the past (on its internal state) enough, relying solely on the input. When you apply the network to its own output in the generation setting, the input to the network is not as reliable as it was in the training or validation case, where it got the true input.</p> <p>I have two possible solutions for you:</p> <ul> <li><p>The first is the simplest but less intuitive one: add a little bit of Gaussian noise to your input. This will force the network to rely more on its hidden state.</p></li> <li><p>The second is the most obvious solution: during training, feed it not the true input but its generated output with a certain probability p. Start training with p=0 and gradually increase it so that it learns to generate longer and longer sequences independently. This is called scheduled sampling, and you can read more about it here: <a href="https://arxiv.org/abs/1506.03099" rel="noreferrer">https://arxiv.org/abs/1506.03099</a>.</p></li> </ul>
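<p>A minimal PyTorch sketch of the second suggestion (scheduled sampling); the tiny model, shapes and schedule below are illustrative placeholders, not the asker's network. With probability p the cell is fed its own previous prediction instead of the true next input, and p is ramped up over training:</p> <pre><code>import torch
import torch.nn as nn

class Seq2One(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.hidden = hidden
        self.cell = nn.LSTMCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, p_own=0.0):
        # x: (batch, time, 1); with probability p_own, feed back the model's own prediction
        h = x.new_zeros(x.size(0), self.hidden)
        c = x.new_zeros(x.size(0), self.hidden)
        inp, preds = x[:, 0], []
        for t in range(x.size(1)):
            h, c = self.cell(inp, (h, c))
            y = self.out(h)
            preds.append(y)
            if t + 1 < x.size(1):
                inp = y.detach() if torch.rand(()) < p_own else x[:, t + 1]
        return torch.stack(preds, dim=1)

model = Seq2One()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
wave = torch.sin(torch.linspace(0, 20, 101)).repeat(8, 1).unsqueeze(-1)  # toy sine batch
inputs, targets = wave[:, :-1], wave[:, 1:]
for epoch in range(50):
    p = min(0.5, epoch / 100)            # gradually rely more on the model's own output
    loss = nn.functional.mse_loss(model(inputs, p_own=p), targets)
    opt.zero_grad(); loss.backward(); opt.step()
</code></pre>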
2017-10-30 09:24:15.610000+00:00
2017-10-30 09:24:15.610000+00:00
null
null
43,459,013
<p>For several days now, I have been trying to build a simple sine-wave sequence generator using an LSTM, without any glimpse of success so far.</p> <p>I started from the <a href="https://github.com/pytorch/examples/tree/master/time_sequence_prediction" rel="noreferrer">time sequence prediction example</a>.</p> <p>All I wanted to do differently is:</p> <ul> <li>Use a different optimizer (e.g. RMSprop) instead of LBFGS</li> <li>Try different signals (more sine-wave components)</li> </ul> <p>This is the link to <a href="https://github.com/osm3000/sequence_generation_pytorch.git" rel="noreferrer">my code</a>. "experiment.py" is the main file.</p> <p>What I do is:</p> <ul> <li>I generate artificial time-series data (sine waves)</li> <li>I cut those time-series data into small sequences</li> <li>The input to my model is a sequence of time 0...T, and the output is a sequence of time 1...T+1</li> </ul> <p>What happens is:</p> <ul> <li>The training and the validation losses go down smoothly</li> <li>The test loss is very low</li> <li>However, when I try to generate arbitrary-length sequences, starting from a seed (a random sequence from the test data), everything goes wrong. The output always flattens out</li> </ul> <p><a href="https://i.stack.imgur.com/crO8z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/crO8z.png" alt="Shape of the generated signal"></a></p> <p>I simply don't see what the problem is. I have been playing with this for a week now, with no progress in sight. I would be very grateful for any help.</p> <p>Thank you</p>
2017-04-17 20:23:11.273000+00:00
2018-03-07 08:36:10.770000+00:00
2018-03-07 08:36:10.770000+00:00
python|machine-learning|deep-learning|lstm|pytorch
['https://arxiv.org/abs/1506.03099']
1
55,873,035
<h2>Update</h2> <p><a href="https://github.com/apple/swift/pull/25286" rel="nofollow noreferrer">This</a> implementation of a random number generator in an interval has been merged into the standard library and should perform better than before:</p> <pre><code>// s = upperBound; r1, r2 = random numbers from generator func bounded(s: UInt64, r1:UInt64, r2: UInt64) -&gt; UInt64 { // r1 would come from invoking generator's next() var m = r1.multipliedFullWidth(by: s) if m.low &lt; s { // let t = (0 &amp;- s) % s // Lemire's original form var t = 0 &amp;- s // O'Neill's modulo optimization if t &gt;= s { t &amp;-= s if t &gt;= s { t %= s } } while m.low &lt; t { // r2 would come from invoking generator's next() m = r2.multipliedFullWidth(by: s) } } return m.high } </code></pre> <p>See the answer below for more details.</p> <h2>Answer</h2> <p>An answer to your second question :</p> <blockquote> <p>"Are there any faster methods for random number generation in swift?"</p> </blockquote> <p>I've <a href="https://stackoverflow.com/questions/55548872/shuffle-struct-by-int/55549494#55549494">previously used</a> the <a href="http://xoshiro.di.unimi.it/" rel="nofollow noreferrer"><strong>Xoshiro</strong></a> Pseudo-Random Number Generator which is pretty fast.</p> <p>Here the code used for benchmarking :</p> <ul> <li><strong>randomGen1</strong></li> </ul> <pre><code>import Foundation public func randomGen1() { let n = 1_000_000 var sum: UInt32 = 0 let startTime = CFAbsoluteTimeGetCurrent() for _ in 0..&lt;n { sum = sum &amp;+ arc4random_uniform(10) } let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime print(sum, timeElapsed) } do { randomGen1() } </code></pre> <ul> <li><strong>randomGen2</strong></li> </ul> <pre><code>public func randomGen2() { let n = 1_000_000 var sum: UInt32 = 0 let startTime = CFAbsoluteTimeGetCurrent() for _ in 0..&lt;n { sum = sum &amp;+ UInt32.random(in: 0..&lt;10) } let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime print(sum, timeElapsed) } do { randomGen2() } </code></pre> <ul> <li>Xoshiro random number generator from <a href="https://github.com/mattgallagher/CwlUtils/blob/e4186dae4ba55ffa478264c8477d01a48fd2b459/Sources/CwlUtils/CwlRandom.swift#L80" rel="nofollow noreferrer">this library</a>:</li> </ul> <pre><code>struct Xoshiro: RandomNumberGenerator { public typealias StateType = (UInt32, UInt32, UInt32, UInt32) private var state: StateType public init(seed: StateType) { self.state = seed } public mutating func next() -&gt; Int { let x = state.1 &amp;* 5 let result = ((x &amp;&lt;&lt; 7) | (x &amp;&gt;&gt; 25)) &amp;* 9 let t = state.1 &amp;&lt;&lt; 9 state.2 ^= state.0 state.3 ^= state.1 state.1 ^= state.2 state.0 ^= state.3 state.2 ^= t state.3 = (state.3 &amp;&lt;&lt; 21) | (state.3 &amp;&gt;&gt; 11) return Int(result) } } var x = Xoshiro(seed: (UInt32.random(in: 0..&lt;10), //Other upper limits could be used to increase randomness UInt32.random(in: 0..&lt;10), UInt32.random(in: 0..&lt;10), UInt32.random(in: 0..&lt;10))) public func randomGen3() { let n = 1_000_000 var sum: UInt32 = 0 let startTime = CFAbsoluteTimeGetCurrent() for _ in 0..&lt;n { sum = sum &amp;+ UInt32(abs(x.next()) % 10) } let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime print(sum, timeElapsed) } do { randomGen3() } </code></pre> <p>Xoshiro is fast but does not pass all randomness tests. 
If security is of concern then you could use <a href="https://lemire.me/blog/2019/03/19/the-fastest-conventional-random-number-generator-that-can-pass-big-crush/" rel="nofollow noreferrer">Wyhash</a>.</p> <p><a href="https://lemire.me/en/" rel="nofollow noreferrer"><strong>Daniel Lemire</strong></a> (the author of <a href="https://arxiv.org/pdf/1805.10941.pdf" rel="nofollow noreferrer">this</a> paper) has kindly just sent me a <a href="https://github.com/lemire/SwiftWyhash" rel="nofollow noreferrer">Swift implementation</a> of Wyhash:</p> <pre><code>class WyhashGenerator { var seed : UInt64 let multiplier1 : UInt64 = 0xa3b195354a39b70d let multiplier2 : UInt64 = 0x1b03738712fad5c9 let increment : UInt64 = 0x60bee2bee120fc15 init(userSeed : UInt64) { seed = userSeed; } func random() -&gt; UInt64 { seed &amp;+= increment let fullmult1 = seed.multipliedFullWidth(by: multiplier1) let m1 = fullmult1.high ^ fullmult1.low; let fullmult2 = m1.multipliedFullWidth(by: multiplier2) let m2 = fullmult2.high ^ fullmult2.low; return m2 } } </code></pre> <p>It can be used like so:</p> <pre><code>public func randomGen4() { let n = 1_000_000 var sum: UInt64 = 0 let startTime = CFAbsoluteTimeGetCurrent() let gen = WyhashGenerator(userSeed: 0) for _ in 0..&lt;n { sum = sum &amp;+ gen.random() % 10 } let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime print(sum, timeElapsed) } do { randomGen4() } </code></pre> <hr> <p>And here are the benchmark results, with the code compiled in the terminal with optimizations (<code>-O</code>) :</p> <pre><code>arc4random_uniform() : 0.034s UInt32.random(in:) : 0.243s WyHash64 : 0.002s Xoshiro : 0.001s </code></pre> <hr> <p><sup>You can find more random number generators <a href="https://github.com/nvzqz/RandomKit" rel="nofollow noreferrer">here</a>.</sup></p>
2019-04-26 18:17:09.863000+00:00
2019-07-23 21:01:31.747000+00:00
2019-07-23 21:01:31.747000+00:00
null
55,872,415
<p>I have used Int.random() method and arc4random_uniform() for number generation speed tests.<br> Both tests were run in macOS console with build configuration set to release. Below are codes which I have used for testing. </p> <pre><code>public func randomGen1() { let n = 1_000_000 let startTime = CFAbsoluteTimeGetCurrent() for i in 0..&lt;n { _ = arc4random_uniform(10) } let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime print(timeElapsed) } public func randomGen2() { let n = 1_000_000 let startTime = CFAbsoluteTimeGetCurrent() for i in 0..&lt;n { _ = Int.random(in: 0..&lt;10) } let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime print(timeElapsed) } </code></pre> <p>The times I got are <br> 0.029475092887878418 (for arc4random_uniform(10))<br> 0.20298802852630615 (for Int.random(in: 0..&lt;10))</p> <p>Why is Int.random() so much slower?<br> Is there a way to optimise it?<br> Are there any faster methods for random number generation in swift?</p>
2019-04-26 17:26:41.613000+00:00
2019-07-23 21:01:31.747000+00:00
2019-04-26 17:33:32.537000+00:00
swift|random
['https://github.com/apple/swift/pull/25286', 'https://stackoverflow.com/questions/55548872/shuffle-struct-by-int/55549494#55549494', 'http://xoshiro.di.unimi.it/', 'https://github.com/mattgallagher/CwlUtils/blob/e4186dae4ba55ffa478264c8477d01a48fd2b459/Sources/CwlUtils/CwlRandom.swift#L80', 'https://lemire.me/blog/2019/03/19/the-fastest-conventional-random-number-generator-that-can-pass-big-crush/', 'https://lemire.me/en/', 'https://arxiv.org/pdf/1805.10941.pdf', 'https://github.com/lemire/SwiftWyhash', 'https://github.com/nvzqz/RandomKit']
9
59,938,045
<p>40% accuracy is not good. The model needs to train more. You should also rescale the images to <code>128 or 256</code> to save time. Try increasing the epoch count to something like 100, or keep training until the loss drops to at least around 1 before testing. Another thing to check is class imbalance.</p> <p>According to <a href="https://arxiv.org/abs/1708.07747" rel="nofollow noreferrer">https://arxiv.org/abs/1708.07747</a>, <code>Fashion MNIST</code> contains <code>7000</code> images per class, with <code>70000</code> images in total. If your dataset has class imbalance, which seems likely, then you should look into other metrics and methods (see the sketch below).</p>
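<p>To see why plain accuracy can mislead when classes are imbalanced, here is a self-contained sklearn sketch (toy labels and a degenerate predictor, not your model):</p> <pre><code>import numpy as np
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# toy labels: class 0 dominates, and the "model" just predicts 0 for everything
y_true = np.array([0] * 90 + [1] * 5 + [2] * 3 + [3] * 2)
y_pred = np.zeros_like(y_true)

print(accuracy_score(y_true, y_pred))          # 0.90 looks great...
print(confusion_matrix(y_true, y_pred))        # ...but three classes are never predicted
print(classification_report(y_true, y_pred, zero_division=0))
</code></pre> <p>In Keras, one common counter-measure is passing a <code>class_weight</code> dictionary to <code>model.fit</code> so that rare classes contribute more to the loss.</p>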
2020-01-27 19:57:38.153000+00:00
2020-01-27 19:57:38.153000+00:00
null
null
59,937,540
<p>I am following this guide to learn image classification with neural networks:</p> <p><a href="https://www.tensorflow.org/tutorials/keras/classification" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/classification</a></p> <p>I implemented this code for my custom dataset. I have 2300 gray-scale 1024x1024 pictures to train the model. I hold all my images in a 3D numpy array as train_images and test_images. I have 4 classes, which are 0, 1, 2 and 3, and I hold those in a list named "labels".</p> <pre><code>train_images.shape # returns (2300,1024,1024) test_images.shape # returns (384,1024,1024) # normalize values train_images = train_images / 255.0 test_images = test_images / 255.0 model = keras.Sequential([ keras.layers.Flatten(input_shape=(1024, 1024)), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(4, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(train_images, labels, epochs=10) </code></pre> <p>Everything is almost the same as in the guide, but my epoch accuracy is around 0.4:</p> <pre><code>Epoch 10/10 ... 2176/2300 [===========================&gt;..] - ETA: 0s - loss: 9.5701 - acc: 0.4062 2208/2300 [===========================&gt;..] - ETA: 0s - loss: 9.5628 - acc: 0.4067 2240/2300 [============================&gt;.] - ETA: 0s - loss: 9.5485 - acc: 0.4076 2272/2300 [============================&gt;.] - ETA: 0s - loss: 9.5417 - acc: 0.4080 2300/2300 [==============================] - 12s 5ms/step - loss: 9.5307 - acc: 0.4087 </code></pre> <p>Also, in the guide some predictions are fractional, but when I try to make a prediction, my model's outputs are only 0 or 1. It says this is 100% (x), but it's wrong.</p> <pre><code>predictions = model.predict(test_images) print(predictions) # 0 | 0 | 1 | 0 # 0 | 0 | 1 | 0 # 1 | 0 | 0 | 0 </code></pre> <p><strong>UPDATED</strong></p> <p>Here are the epoch results for 256*256, 2 classes, 100 images per class:</p> <pre><code>32/200 [===&gt;..........................] - ETA: 0s - loss: 8.5627 - acc: 0.4688 200/200 [==============================] - 0s 317us/step - loss: 8.0590 - acc: 0.5000 Epoch 10/10 </code></pre> <p>Also, I lowered the number of classes to 2, but my predictions still return 100% and the wrong class.</p> <hr> <p>I don't know where I am going wrong. If you have any advice or ideas I would be grateful. Thank you in advance.</p>
2020-01-27 19:19:10.547000+00:00
2020-01-28 16:09:05.880000+00:00
2020-01-28 16:09:05.880000+00:00
python|tensorflow|machine-learning|keras|neural-network
['https://arxiv.org/abs/1708.07747']
1
62,836,902
<p>This question has been answered by <a href="https://arxiv.org/pdf/1605.05274.pdf" rel="nofollow noreferrer">R. Grigore (2016)</a> in his paper <em>Java Generics are Turing Complete</em>.</p> <p>Take the following Java code as an example of his construction:</p> <pre><code>//an empty interface interface Z {} //4 generic interfaces, one argument each interface N&lt;X&gt; {} interface L&lt;X&gt; {} interface Qlr&lt;X&gt; {} interface Qrl&lt;X&gt; {} //one complex generic, inheriting from instantiations of two other interface E&lt;X&gt; extends Qlr&lt;N&lt;? super Qr&lt;? super E&lt;? super E&lt;? super X&gt;&gt;&gt;&gt;&gt;, Qrl&lt;N&lt;?super Ql&lt;? super E&lt;? super E&lt;? super X&gt;&gt;&gt;&gt;&gt; {} //main class with a single function class Main{ //heavily nested return type L&lt;? super N&lt;? super L&lt;? super N&lt;? super L&lt;? super N&lt;? super E&lt;? super E&lt;? super Z&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; f(Qr&lt;? super E&lt;? super E&lt;? super Z&gt;&gt;&gt; v) {return v;} } </code></pre>
2020-07-10 14:53:30.250000+00:00
2020-07-10 14:53:30.250000+00:00
null
null
3,451,519
<p>The JLS mentions in the type inference algorithm (§15.12.2):</p> <blockquote> <p>It is possible that the process above yields an infinite type. This is permissible, and Java compilers must recognize such situations and represent them appropriately using cyclic data structures.</p> </blockquote> <p>However, I'm unable to find an actual example where javac produces an infinite type. I think it ought to produce one in the following case:</p> <pre><code>&lt;T&gt; T pick(T a, T b) { ... } pick("string", 3); </code></pre> <p>Both String and Integer are Comparable&lt;themselve>, so their common supertype should be <code>Comparable&lt;? extends Comparable&lt;? extends Comparable&lt;? ...&gt;&gt;&gt;</code> (infinite).</p> <p>I can do:</p> <pre><code>Comparable&lt;? extends Comparable&lt;?&gt;&gt; x = pick("string", 3); </code></pre> <p>but then I tried:</p> <pre><code>Comparable&lt;? extends Comparable&lt;? extends Comparable&lt;?&gt;&gt;&gt; x = pick("string", 3); </code></pre> <p>and this doesn't compile. It seems that the recursion is aborted after 2 steps.</p> <p>Do you know of any case to make Java actually produce an infinite type?</p> <p>--</p> <p>Edit: it seems that the above is a compiler bug. Reading the specification, let's see how the calculation of <code>lub(String, Integer)</code> works out:</p> <pre><code>ST(String) = { String, Comparable&lt;String&gt;, Serializable, CharSequence, Object } ST(Integer) = { Integer, Comparable&lt;Integer&gt;, Serializable, Number, Object } EC = { Comparable, Serializable, Object } MEC = { Comparable, Serializable } Inv(Comparable) = { Comparable&lt;String&gt;, Comparable&lt;Integer&gt; } lcta(String, Integer) = ? extends lub(String, Integer) lci(Inv(Comparable)) = Comparable&lt;? extends lub(String, Integer)&gt; lub(String, Integer) = Serializable &amp; Comparable&lt;? extends lub(String, Integer)&gt; </code></pre> <p>So <code>lub(String, Integer)</code> should be an infinite type. Javac seems to be wrong here. Maybe it doesn't implement infinite types after all?</p>
2010-08-10 17:08:20.650000+00:00
2020-07-10 14:53:30.250000+00:00
2010-08-11 13:28:39.157000+00:00
java|programming-languages|wildcard|type-inference
['https://arxiv.org/pdf/1605.05274.pdf']
1
60,628,756
<p>I like your approach! When you mention your optimization, I think a good way to go about it is by rotating the hexagonal grid and translating it until you find the smallest number of circles that covers the region. You don't need to rotate through the full 360 degrees, since the pattern is symmetric: 360/6 is enough.</p> <p>I've been working on this problem for a while and have just published a paper that contains code to solve it! It uses genetic algorithms and BFGS optimization. You can find a link to the paper here: <a href="https://arxiv.org/abs/2003.04839" rel="nofollow noreferrer">https://arxiv.org/abs/2003.04839</a></p>
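<p>A rough Python sketch of that brute-force idea (using shapely; coarse step counts and a toy square polygon, so treat it as an outline rather than a tuned solver). Each hexagonal cell of circumradius r can be covered by one circle of radius r, so counting the lattice cells that touch the polygon while sweeping shifts and rotations gives an upper bound on the number of circles:</p> <pre><code>import numpy as np
from shapely.geometry import Polygon
from shapely.affinity import rotate, translate

def hexagon(cx, cy, r):
    # pointy-top hexagon with circumradius r (so one circle of radius r covers it)
    a = np.deg2rad(np.arange(6) * 60 + 30)
    return Polygon(list(zip(cx + r * np.cos(a), cy + r * np.sin(a))))

def circles_needed(poly, r):
    # count hexagonal lattice cells touching the polygon; one circle per cell
    minx, miny, maxx, maxy = poly.bounds
    w, h = np.sqrt(3) * r, 1.5 * r            # lattice spacings for pointy-top hexagons
    count, row, y = 0, 0, miny - r
    while y <= maxy + r:
        x = minx - r + (w / 2 if row % 2 else 0.0)
        while x <= maxx + r:
            if hexagon(x, y, r).intersects(poly):
                count += 1
            x += w
        y += h
        row += 1
    return count

def best_cover(poly, r, n_shift=5, n_rot=6):
    # coarse brute force over lattice shifts and rotations (moving the polygon instead)
    best = None
    for k in range(n_rot):
        p_rot = rotate(poly, 60.0 * k / n_rot, origin='centroid')
        for i in range(n_shift):
            for j in range(n_shift):
                p = translate(p_rot, np.sqrt(3) * r * i / n_shift, 1.5 * r * j / n_shift)
                n = circles_needed(p, r)
                best = n if best is None else min(best, n)
    return best

square = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
print(best_cover(square, r=2.0))
</code></pre>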
2020-03-11 03:15:12.600000+00:00
2020-03-13 00:59:05.403000+00:00
2020-03-13 00:59:05.403000+00:00
null
10,648,621
<p><strong>The following problem:</strong> Given an arbitrary polygon, it shall be covered 100% with the minimum number of circles of a given radius.</p> <p><strong>Note:</strong> 1) Naturally the circles have to overlap. 2) I am trying to solve the problem for ARBITRARY polygons, but solutions for CONVEX polygons are also appreciated. 3) As far as I am informed, this problem is NP-hard ( <a href="https://stackoverflow.com/questions/4282003/an-algorithm-to-find-the-minimum-size-set-cover-for-the-set-cover-problem">an algorithm to find the minimum size set cover for the Set-cover problem</a> ). Choose: U = polygon and S1...Sk = circles with arbitrary centers.</p> <p><strong>My solution:</strong> I've already read some papers and tried a few things on my own. The most promising idea that I came up with was in fact one already indicated in <a href="https://stackoverflow.com/questions/1404944/covering-an-arbitrary-area-with-circles-of-equal-radius">Covering an arbitrary area with circles of equal radius</a>.</p> <p>So I guess it's best I quickly try to describe my own idea and then refine my questions.</p> <p>The picture already gives you a pretty good idea of what I do</p> <p><img src="https://i.stack.imgur.com/D58Yc.jpg" alt="enter image description here"></p> <p><strong>IDEA and Problem Formulation</strong> 1. I approximate the circles with their corresponding hexagons and tessellate the whole of R2, i.e. a sufficiently large area; keyword: hexagonal closest packing. (cyan … tessellation, red dotted … centers of the cyan hexagons) 2. I put the polygon somewhere in the middle of this tessellated area and compute the number of hexagons that are needed to cover the polygon.</p> <p>In the following I am trying to minimize N, which is the number of hexagons needed to cover the polygon, by moving the polygon around step by step and "counting" N after each step.</p> <p><strong>Solving the problem:</strong> That's where it gets difficult (for me). I don't know any optimizers that solve this problem properly, since they all terminate after moving the polygon around a bit without observing any change.</p> <p><strong>My solution is the following:</strong> First note that this is a periodic problem: 1. The polygon can be moved in the horizontal direction x with a period of 3*r (side length = radius r) of the hexagon. 2. The polygon can be moved in the vertical direction y with a period of r^2+r^2-2*r*r*cos(2/3*pi) of the hexagon. 3. The polygon can be rotated by phi with a period of 2/3*pi.</p> <p>That means one only has to search a finite area of possible solutions to find the optimal one. So what I do is: I choose a step size for (x,y,phi) and simply brute-force compute all possible solutions, picking out the optimum.</p> <p><strong>Refining my questions</strong> 1) Is the problem formulated intelligently? Right now I am working on an algorithm that only tessellates a very small area, so that as few hexagons as possible have to be computed. 2) Is there a more intelligent optimizer to solve the problem? 3) FINALLY: I also have difficulties finding appropriate literature, since I guess I don't know the right keywords to look for. So if anybody can provide me with literature, it would also be appreciated a lot.</p> <p>Actually I could go on about other things I've tried, but I don't think anyone wants to spend the whole afternoon just reading my question.</p> <p>Thanks in advance to everybody who takes the time to think about it.</p> <p>mat</p> <p>PS I implement my algorithms in MATLAB</p>
2012-05-18 07:40:55.300000+00:00
2020-03-13 00:59:05.403000+00:00
2017-05-23 12:32:32.410000+00:00
matlab|geometry
['https://arxiv.org/abs/2003.04839']
1
46,173,528
<p>You can use the idea of face embeddings, which is proposed for example in the highly-cited paper <a href="https://arxiv.org/abs/1503.03832" rel="noreferrer">FaceNet</a> and implemented in <a href="https://cmusatyalab.github.io/openface/" rel="noreferrer">OpenFace</a> (which also comes pre-trained).</p> <p>The general idea: take some preprocessed face (frontal, cropped, ...) and embed it into some lower-dimensional space with the characteristic that similar faces in the input should have a low Euclidean distance in the output.</p> <p>So in your case: use the embedding CNN to map your faces to the reduced space (usually a vector of size 128) and calculate the distance in Euclidean space. Of course you could also cluster faces then, but that's not your task.</p> <p>The good thing here, besides the general idea: OpenFace is a nice implementation ready to use, and its homepage also explains the idea:</p> <blockquote> <p>Use a deep neural network to represent (or embed) the face on a 128-dimensional unit hypersphere.</p> <p>The embedding is a generic representation for anybody's face. Unlike other face representations, this embedding has the nice property that a larger distance between two face embeddings means that the faces are likely not of the same person.</p> <p>This property makes clustering, similarity detection, and classification tasks easier than other face recognition techniques where the Euclidean distance between features is not meaningful.</p> </blockquote> <p>They even have a comparison demo <a href="https://cmusatyalab.github.io/openface/demo-2-comparison/" rel="noreferrer">here</a>.</p>
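<p>A minimal numpy sketch of the comparison step; the two vectors below are random stand-ins for the 128-d embeddings that OpenFace/FaceNet would produce for two aligned face crops, and the threshold value is model- and dataset-dependent:</p> <pre><code>import numpy as np

# stand-ins for the 128-d outputs of the embedding CNN on two aligned face crops;
# in practice these come from OpenFace / FaceNet, not from random numbers
emb_a = np.random.randn(128); emb_a /= np.linalg.norm(emb_a)
emb_b = np.random.randn(128); emb_b /= np.linalg.norm(emb_b)

distance = np.linalg.norm(emb_a - emb_b)   # Euclidean distance on the unit hypersphere
same_person = distance < 0.9               # threshold must be calibrated on your data
print(distance, same_person)
</code></pre>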
2017-09-12 10:03:32.393000+00:00
2017-09-12 10:03:32.393000+00:00
2020-06-20 09:12:55.060000+00:00
null
46,168,182
<p>First of all, here is my <a href="https://github.com/alucard001/OpenCV-Face-Recognition-and-Comparison/blob/master/Open%20CV.ipynb" rel="noreferrer">github link for the question</a>.</p> <p>And here is my question:</p> <p>I would like to build a face comparison function using Python, and I can successfully(?) recognize faces using OpenCV. Now, <strong>how do I do the comparison</strong>?</p> <p>What I understand is this:</p> <p>In the general machine learning approach, I need to gather lots of data about that particular person and finalize it using a CNN.</p> <p>However, I only have 2 images; how do I do the comparison? Should I think of it in terms of classification or clustering (using KNN)?</p> <p>Thank you very much in advance for all your help.</p>
2017-09-12 04:55:55.683000+00:00
2021-10-19 14:41:04.830000+00:00
2018-09-27 10:02:08.430000+00:00
python|opencv|neural-network|convolution|face-recognition
['https://arxiv.org/abs/1503.03832', 'https://cmusatyalab.github.io/openface/', 'https://cmusatyalab.github.io/openface/demo-2-comparison/']
3
58,753,908
<p>In the original paper, the author pushes one sample to the experience replay buffer and randomly samples 32 transitions to train the model in minibatch fashion. The samples taken from interacting with the environment are not fed to the model directly. To increase the speed of training, the author stores a sample every step but updates the model only every four steps.</p> <p>Using OpenAI's <a href="https://github.com/openai/baselines" rel="nofollow noreferrer">Baselines project</a>, this single-process method can master easy games like Atari Pong (Pong-v4) in about 2.5 hours on a single GPU. Of course, training in this single-process way leaves the resources of a multi-core, multi-GPU (or single-GPU) system underutilised. So newer publications have decoupled action selection from model optimisation. They use multiple "actors" to interact with environments simultaneously and a single GPU "learner" to optimise the model, or multiple learners with multiple models on various GPUs. The multi-actor-single-learner setup is described in Deepmind's Ape-X DQN (<a href="https://arxiv.org/abs/1803.00933" rel="nofollow noreferrer">Distributed Prioritized Experience Replay, D. Horgan et al., 2018</a>), and the multi-actor-multi-learner setup in (<a href="https://arxiv.org/abs/1803.00933" rel="nofollow noreferrer">Accelerated Methods for Deep Reinforcement Learning, Stooke and Abbeel, 2018</a>). When using multiple learners, parameter sharing across processes becomes essential. An older trail is described in Deepmind's PDQN (<a href="https://arxiv.org/abs/1507.04296" rel="nofollow noreferrer">Massively Parallel Methods for Deep Reinforcement Learning, Nair et al., 2015</a>), which was proposed in the period between DQN and A3C. However, that work was performed entirely on CPUs, so it looks like it uses massive resources, and its results can easily be outperformed by PPAC's batched action-selection-on-GPU method.</p> <p>You can't optimise only at the end of each episode, because the episode length isn't fixed; a better model usually produces longer episodes. The model's learning capability would drop just as it starts to perform a little better, and the learning progress would become unstable.</p> <p>We also don't train the model only when the target model is cloned, because the target was introduced to stabilise the training process by keeping an older set of parameters. If you updated only on parameter clones, the target model's parameters would be the same as the model's, which causes instability: with identical parameters, one model update will cause the next state to have a higher value.</p> <p>In Deepmind's 2015 Nature paper, it states that:</p> <blockquote> <p>The second modification to online Q-learning aimed at further improving the stability of our method with neural networks is to use a separate network for generating the target yj in the Q-learning update. More precisely, every C updates we clone the network Q to obtain a target network Q' and use Q' for generating the Q-learning targets y<sub>j</sub> for the following C updates to Q. This modification makes the algorithm more stable compared to standard online Q-learning, where an update that increases Q(s<sub>t</sub>,a<sub>t</sub>) often also increases Q(s<sub>t+1</sub>, a) for all a and hence also increases the target y<sub>j</sub>, possibly leading to oscillations or divergence of the policy.
Generating the targets using the older set of parameters adds a delay between the time an update to Q is made and the time the update affects the targets y<sub>j</sub>, making divergence or oscillations much more unlikely. </p> </blockquote> <p><a href="https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf" rel="nofollow noreferrer">Human-level control through deep reinforcement learning, Mnih et al., 2015</a> </p>
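<p>A schematic Python loop for the cadence described above (store a transition every step, sample a 32-transition minibatch every 4 steps, refresh the target network every C steps); the environment, network and training functions are empty placeholders, not a real DQN:</p> <pre><code>import random
from collections import deque

replay = deque(maxlen=100_000)                 # experience replay buffer
BATCH, TRAIN_EVERY, SYNC_EVERY = 32, 4, 10_000

def env_reset():
    """Placeholder: return the initial state of the environment."""
    return 0

def env_step(state, action):
    """Placeholder environment: returns (next_state, reward, done)."""
    return state, 0.0, random.random() < 0.01

def act(state):
    """Placeholder epsilon-greedy action selection with the online Q-network."""
    return 0

def train_step(batch):
    """Placeholder: one minibatch gradient step on the online Q-network."""
    pass

def sync_target():
    """Placeholder: copy the online network's weights into the target network."""
    pass

step = 0
for episode in range(100):
    state, done = env_reset(), False
    while not done:
        action = act(state)
        next_state, reward, done = env_step(state, action)
        replay.append((state, action, reward, next_state, done))    # store every step
        state = next_state
        step += 1
        if step % TRAIN_EVERY == 0 and len(replay) >= BATCH:
            train_step(random.sample(replay, BATCH))                 # learn every 4th step
        if step % SYNC_EVERY == 0:
            sync_target()                                            # refresh target every C steps
</code></pre>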
2019-11-07 17:13:49.143000+00:00
2019-11-09 14:55:36.403000+00:00
2019-11-09 14:55:36.403000+00:00
null
58,600,089
<p>I am confused about why the DQN with experience replay algorithm performs a gradient descent step for every step in a given episode. Each update only fits a single step, right? That would make it extremely slow. Why not update after each episode ends, or every time the model is cloned?</p>
2019-10-29 00:37:53.163000+00:00
2019-11-09 14:55:36.403000+00:00
2019-10-29 01:02:33.857000+00:00
deep-learning|reinforcement-learning
['https://github.com/openai/baselines', 'https://arxiv.org/abs/1803.00933', 'https://arxiv.org/abs/1803.00933', 'https://arxiv.org/abs/1507.04296', 'https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf']
5
36,406,057
<p>In my experience, NaNs when training a network usually happen because of two problems:</p> <ul> <li>First, a mathematical error, e.g. the log of a negative value. This can happen when you use log() in your loss function.</li> <li>Second, a value that becomes too big for Python to handle.</li> </ul> <p>In your case, from your good observation, I think it's the second case: your loss value may become too big to be handled by Python. Try to initialize smaller weights when you expand your network, or use a different approach to initialize the weights, as explained by <a href="http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf" rel="nofollow">Glorot (2010)</a> or <a href="http://arxiv.org/abs/1502.01852" rel="nofollow">He (2015)</a>. Hope it helps.</p>
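<p>A small numpy sketch of the two cited initialization schemes for a dense layer with n_in inputs and n_out outputs (the sizes below just echo the (800,700,...) configuration mentioned in the question):</p> <pre><code>import numpy as np

def glorot_uniform(n_in, n_out):
    # Glorot & Bengio (2010): Var(W) = 2 / (n_in + n_out)
    limit = np.sqrt(6.0 / (n_in + n_out))
    return np.random.uniform(-limit, limit, size=(n_in, n_out))

def he_normal(n_in, n_out):
    # He et al. (2015), suited to ReLU-like units: Var(W) = 2 / n_in
    return np.random.randn(n_in, n_out) * np.sqrt(2.0 / n_in)

W1 = glorot_uniform(800, 700)
W2 = he_normal(800, 700)
print(W1.std(), W2.std())
</code></pre>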
2016-04-04 14:54:07.443000+00:00
2016-04-06 22:30:51.983000+00:00
2016-04-06 22:30:51.983000+00:00
null
36,381,488
<p>I am training a simple feed-forward model with 3 or 4 hidden layers and dropout between each (hidden layer + non-linearity) combination. Sometimes after a few epochs (about 10-11) the model starts outputting Infs and NaNs as the error of the NLL, and the accuracy falls to 0.0%. This problem does not happen when I do not use dropout. Is this a known issue with dropout in Theano? The way I implement dropout is:</p> <pre><code>def drop(self, input): mask = self.theano_rng.binomial(n=1, p=self.p, size=input.shape, dtype=theano.config.floatX) return input * mask </code></pre> <p>where input is the feature vector to which we want to apply dropout. I have also observed that the occurrence of NaNs happens earlier if the dropout probability (self.p) is higher. p = 0.5 would cause NaNs to occur around epoch 1 or 2, but p = 0.7 would cause NaNs to occur around epoch 10 or 11. Also, the occurrence of NaNs happens only when the hidden layer sizes are large. For example (800,700,700) gives NaNs whereas (500,500,500) does not.</p>
2016-04-03 04:03:04.203000+00:00
2016-04-06 22:30:51.983000+00:00
null
python|theano|deep-learning
['http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf', 'http://arxiv.org/abs/1502.01852']
2
46,021,189
<blockquote> <p>use a kd-tree</p> </blockquote> <p>Unfortunately, in high dimensions this data structure suffers severely from the <a href="https://en.wikipedia.org/wiki/Curse_of_dimensionality" rel="nofollow noreferrer">curse of dimensionality</a>, which causes its search time to be comparable to the brute force search.</p> <blockquote> <p>reduce the number of dimensions</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Dimensionality_reduction" rel="nofollow noreferrer">Dimensionality reduction</a> is a good approach, which offers a fair trade-off between accuracy and speed. You lose some information when you reduce your dimensions, but gain some speed.</p> <p>By accuracy I mean finding the exact Nearest Neighbor (NN).</p> <p>Principal Component Analysis (<a href="https://en.wikipedia.org/wiki/Dimensionality_reduction#Principal_component_analysis_.28PCA.29" rel="nofollow noreferrer">PCA</a>) is a good idea when you want to reduce the dimensional space your data live in.</p> <blockquote> <p>Is there some clever algorithm or data structure to solve this exactly in reasonable time?</p> </blockquote> <p>Approximate nearest neighbor search (<a href="https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximate_nearest_neighbor" rel="nofollow noreferrer">ANNS</a>), where you are satisfied with finding a point that might not be the exact Nearest Neighbor, but rather a good approximation of it (for example the 4th NN to your query, while you are looking for the 1st NN).</p> <p>That approach costs you accuracy, but increases performance significantly. Moreover, the probability of finding a good NN (close enough to the query) is relatively high.</p> <p>You can read more about ANNS in the introduction of our kd-GeRaF <a href="https://arxiv.org/pdf/1603.09596.pdf" rel="nofollow noreferrer">paper</a>.</p> <p>A good idea is to combine ANNS with dimensionality reduction.</p> <p>Locality Sensitive Hashing (<a href="https://en.wikipedia.org/wiki/Locality-sensitive_hashing" rel="nofollow noreferrer">LSH</a>) is a modern approach to the Nearest Neighbor problem in high dimensions. The key idea is that points that lie close to each other are hashed to the same bucket. So when a query arrives, it is hashed to a bucket, and that bucket (and usually its neighboring ones) contains good NN candidates.</p> <p><a href="https://github.com/FALCONN-LIB/FALCONN" rel="nofollow noreferrer">FALCONN</a> is a good C++ implementation, which focuses on cosine similarity. Another good implementation is our <a href="https://github.com/gsamaras/Dolphinn" rel="nofollow noreferrer">DOLPHINN</a>, which is a more general library.</p>
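<p>A toy numpy sketch of the random-hyperplane flavour of LSH (sign patterns as bucket keys, candidates re-ranked by exact distance). The sizes match the question, but the number of hyperplanes and the brute-force fallback are arbitrary choices, and real libraries such as FALCONN use multiple tables and much more careful schemes:</p> <pre><code>import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
data = rng.standard_normal((16000, 75))          # the 16k 75-dimensional points
planes = rng.standard_normal((75, 16))           # 16 random hyperplanes -> 16-bit keys

def key(x):
    bits = (x @ planes) > 0                      # which side of each hyperplane
    return bits.tobytes()                        # bucket id = sign pattern

buckets = defaultdict(list)
for i, x in enumerate(data):
    buckets[key(x)].append(i)

def query(q, k=2):
    cand = buckets.get(key(q), [])
    if len(cand) < k:                            # tiny bucket: fall back to brute force
        cand = range(len(data))
    cand = np.fromiter(cand, dtype=int)
    d = np.linalg.norm(data[cand] - q, axis=1)   # exact Euclidean re-ranking
    return cand[np.argsort(d)[:k]]

print(query(data[0], k=3))                       # the point itself plus 2 near candidates
</code></pre>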
2017-09-03 07:25:31.820000+00:00
2017-09-03 07:25:31.820000+00:00
null
null
3,962,775
<p>So I have about 16,000 75-dimensional data points, and for each point I want to find its k nearest neighbours (using Euclidean distance; currently k=2, if this makes it easier).</p> <p>My first thought was to use a kd-tree for this, but as it turns out they become rather inefficient as the number of dimensions grows. In my sample implementation, it's only slightly faster than exhaustive search.</p> <p>My next idea would be using PCA (Principal Component Analysis) to reduce the number of dimensions, but I was wondering: Is there some clever algorithm or data structure to solve this exactly in reasonable time?</p>
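<p>Building on the PCA idea in the question, a minimal scikit-learn sketch could look like the following; the number of retained components is an arbitrary placeholder, and note that searching in the reduced space only approximates the neighbours of the original 75-dimensional space.</p> <pre><code>import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

X = np.random.rand(16000, 75)  # placeholder for the real data

# Project to fewer dimensions first (20 components is an assumption to tune)
X_reduced = PCA(n_components=20).fit_transform(X)

# Exact k-NN (brute force or tree-based) on the reduced data
nn = NearestNeighbors(n_neighbors=3).fit(X_reduced)   # 3 = the point itself + 2 neighbours
distances, indices = nn.kneighbors(X_reduced)
neighbours = indices[:, 1:]                           # drop each point itself
</code></pre>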
2010-10-18 19:46:36.487000+00:00
2017-09-03 07:26:10.790000+00:00
2017-09-03 07:26:10.790000+00:00
algorithm|data-structures|computational-geometry|nearest-neighbor|dimensionality-reduction
['https://en.wikipedia.org/wiki/Curse_of_dimensionality', 'https://en.wikipedia.org/wiki/Dimensionality_reduction', 'https://en.wikipedia.org/wiki/Dimensionality_reduction#Principal_component_analysis_.28PCA.29', 'https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximate_nearest_neighbor', 'https://arxiv.org/pdf/1603.09596.pdf', 'https://en.wikipedia.org/wiki/Locality-sensitive_hashing', 'https://github.com/FALCONN-LIB/FALCONN', 'https://github.com/gsamaras/Dolphinn']
8
61,577,834
<p>Document-based databases have a big advantage over relational databases in that they do not require you to define a schema up front before you can enter any data.</p> <p>Also, you should use a document database if your data is not relational and cannot be stored in a table, but rather is, for example, a set of images or newspaper articles.</p> <p>Another advantage is how easily document-based databases fit into web development. For a more in-depth comparison of NoSQL database models, check this source: <a href="https://arxiv.org/ftp/arxiv/papers/1509/1509.08035.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/arxiv/papers/1509/1509.08035.pdf</a></p>
2020-05-03 16:22:55.093000+00:00
2022-03-11 08:28:16.757000+00:00
2022-03-11 08:28:16.757000+00:00
null
441,441
<p>Why should I use a document-based database like CouchDB instead of a relational database? Are there any typical kinds of applications or domains where a document-based database is more suitable than a relational database?</p>
2009-01-14 00:21:48.840000+00:00
2022-04-07 21:05:03.183000+00:00
null
database|couchdb|relational|non-relational-database
['https://arxiv.org/ftp/arxiv/papers/1509/1509.08035.pdf']
1
71,943,162
<p>There is this interesting paper <a href="https://arxiv.org/pdf/1802.06222.pdf" rel="nofollow noreferrer">Efficient GAN-based anomaly detection</a>.<br /> To evaluate the anomaly detection, they use the following experimental setting</p> <blockquote> <p>MNIST: We generated 10 different datasets from MNIST by successively making each digit class an anomaly and treating the remaining 9 digits as normal examples. The training set consists of 80% of the normal data and the test set consists of the remaining 20% of normal data and all of the anomalous data. All models were trained only with normal data and tested with both normal and anomalous data.</p> </blockquote>
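<p>As a rough illustration of that protocol (one digit treated as the anomaly class, training on 80% of the normal digits only), a possible data-preparation sketch in Python/NumPy could look like the following; the variable names and the exact way of merging the MNIST splits are assumptions taken from the quoted description, not code from the paper.</p> <pre><code>import numpy as np
from tensorflow.keras.datasets import mnist

def make_anomaly_split(anomaly_digit, seed=0):
    (x_a, y_a), (x_b, y_b) = mnist.load_data()
    x = np.concatenate([x_a, x_b]).astype(&quot;float32&quot;) / 255.0
    y = np.concatenate([y_a, y_b])

    normal = x[y != anomaly_digit]
    anomalous = x[y == anomaly_digit]

    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(normal))
    cut = int(0.8 * len(normal))

    train = normal[idx[:cut]]                      # 80% of normal data, no anomalies
    test = np.concatenate([normal[idx[cut:]], anomalous])
    test_labels = np.concatenate([np.zeros(len(normal) - cut),   # 0 = normal
                                  np.ones(len(anomalous))])      # 1 = anomaly
    return train, test, test_labels

train, test, test_labels = make_anomaly_split(anomaly_digit=0)
</code></pre>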
2022-04-20 16:28:32.400000+00:00
2022-04-20 16:28:32.400000+00:00
null
null
71,942,290
<p>There's something about GAN training that I don't understand. I am making a GAN for anomaly detection. To start, I followed the guide <a href="https://www.tensorflow.org/tutorials/generative/dcgan" rel="nofollow noreferrer">here</a> to create a DCGAN (and understand how it works) before moving on to the anomaly detection part.</p> <p>I understand how the two training phases of a GAN work, and after nearly 2000 epochs the generator produces some good fake images. The problem is that the discriminator is not good at detecting anomalies: if I feed it a real image, it produces a value between 0.5 and 1, no matter whether the image contains an anomaly or not.</p> <p>So basically, the discriminator is good at distinguishing real images from fake images, but not good at discriminating real images with anomalies.</p> <p>I tried to train the model some more but the results don't change (instead, it seems worse than before!). The two losses keep varying around 0 and 1; for example, the model now has:</p> <pre><code>gen_loss: 0.97844017, disc_loss: 0.9973822 </code></pre> <p>What should I do to improve my net and perform anomaly detection? Does it need to be trained even more to get a better discriminator, or should I add something more to do anomaly detection?</p> <p>Thanks in advance; I'm definitely doing something wrong. If needed, I can post some code and more information about my net.</p> <p>P.S. My notebook is very similar to the one I linked before; the only difference is that I tried to feed test images to the discriminator after the training.</p>
2022-04-20 15:24:05.590000+00:00
2022-04-20 16:28:32.400000+00:00
null
python|tensorflow|deep-learning|generative-adversarial-network|anomaly-detection
['https://arxiv.org/pdf/1802.06222.pdf']
1
57,136,880
<p>L2 utilization and hit rate are orthogonal concepts.</p> <p>L2 utilization % measures how many operations (reads/writes/atomics) the L2 cache performed, compared to its peak performance. You can alternatively think of this as a proxy for "how much L2 bandwidth did I use" given there is a fixed bandwidth between L1 and L2 on a given GPU. Note, this metric is not measuring the % of L2 capacity used. (to simplify, in the diagram below, think of it as measuring the throughput of arrows next to the red dots)</p> <p>L2 cache hit rate measures when an L1 miss occurs, how often was it found in L2. (in the diagram, think of L2 cache tags at the green X)</p> <p><em>Original diagram from <a href="https://arxiv.org/pdf/1903.07486.pdf" rel="nofollow noreferrer">Dissecting the NVidia Turing T4 GPU via Microbenchmarking</a></em> <a href="https://i.stack.imgur.com/k3aG1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k3aG1.png" alt="enter image description here"></a></p> <p>Hypothetically:</p> <ul> <li>Some CUDA kernel could read a single L1 cacheline (128B) per SM once, incurring a single L2 read that always hits. The L2 utilization would be ~0%, with L2 hit-rate of 100%.</li> <li>A different CUDA kernel could achieve ~100% L2 utilization and 100% L2 hit-rate, by performing tons of loads that either miss in L1, or were <a href="https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions" rel="nofollow noreferrer">"cache global" loads</a>, where the set of accessed addresses fit within the size of the L2.</li> <li>Yet another CUDA kernel could achieve high L2 utilization and low L2 hit-rate, by performing tons of loads that either miss in L1, or were <a href="https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions" rel="nofollow noreferrer">"cache global" loads</a> that are scattered throughout a Gigabyte sized buffer (i.e. that don't all fit simultaneously in L2).</li> </ul> <p>See also</p> <ul> <li>The <a href="https://docs.nvidia.com/cuda/profiler-users-guide/index.html#metrics-reference-7x" rel="nofollow noreferrer">tables of metrics in the CUDA Toolkit Profiler documentation</a>.</li> <li><a href="https://arxiv.org/pdf/1903.07486.pdf" rel="nofollow noreferrer">Dissecting the NVidia Turing T4 GPU via Microbenchmarking</a></li> <li><a href="https://arxiv.org/pdf/1804.06826.pdf" rel="nofollow noreferrer">Dissecting the NVIDIA Volta GPU Architecture via Microbenchmarking</a></li> </ul>
2019-07-21 20:42:24.077000+00:00
2019-07-21 20:42:24.077000+00:00
null
null
57,135,152
<p>I'm doing experiments using CUDA.</p> <p>I thought that if the L2 cache hit ratio is high, performance would increase.</p> <p>However, according to nvprof, L2 cache utilization is low even though the L2 cache hit rate is about 93%.</p> <p>Why does this happen? Are there examples that make it happen?</p>
2019-07-21 16:47:08.780000+00:00
2019-07-21 20:42:24.077000+00:00
null
caching|cuda|gpu
['https://arxiv.org/pdf/1903.07486.pdf', 'https://i.stack.imgur.com/k3aG1.png', 'https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions', 'https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions', 'https://docs.nvidia.com/cuda/profiler-users-guide/index.html#metrics-reference-7x', 'https://arxiv.org/pdf/1903.07486.pdf', 'https://arxiv.org/pdf/1804.06826.pdf']
7
70,006,822
<p>So, in this paper: <a href="https://arxiv.org/pdf/2004.07464.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2004.07464.pdf</a> they combined the image embedding and the text embedding by concatenating them.</p> <pre><code>X = TE + IE </code></pre> <p>Here X is the fused embedding, with TE and IE being the text and image embeddings respectively. If your TE and IE each have a dimension of, say, 2048, your X will be of length 2*2048. Then maybe you can use this as is, or, if you want to reduce the dimension, you can use t-SNE/PCA or <a href="https://arxiv.org/abs/1708.03629" rel="nofollow noreferrer">https://arxiv.org/abs/1708.03629</a> (implemented here: <a href="https://github.com/vyraun/Half-Size" rel="nofollow noreferrer">https://github.com/vyraun/Half-Size</a>)</p>
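<p>A minimal sketch of that fusion step in NumPy/scikit-learn is shown below; the embedding sizes and the PCA target dimension are placeholders, not values from the paper.</p> <pre><code>import numpy as np
from sklearn.decomposition import PCA

n_samples = 1000
TE = np.random.rand(n_samples, 2048)   # text embeddings (placeholder values)
IE = np.random.rand(n_samples, 2048)   # image embeddings (placeholder values)

# In the notation above, X = TE + IE denotes concatenation, not element-wise sum
X = np.concatenate([TE, IE], axis=1)   # shape: (n_samples, 4096)

# Optional: reduce the fused embedding to a smaller dimension
X_small = PCA(n_components=256).fit_transform(X)
</code></pre>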
2021-11-17 15:04:52.033000+00:00
2021-11-17 15:04:52.033000+00:00
null
null
44,786,174
<p>I fully understand the meaning and methods of word embedding (skip-gram, CBOW). And I know that Google has a word2vec API that, given a word, can produce its vector. But my problem is this: we have a clause that includes a subject, object, verb..., where each word has already been embedded by the Google API. Now, how can we combine these vectors to create a single vector that represents the clause? Example: for the clause V = "dog bites man", after word embedding by Google we have V1, V2, V3, which map to dog, bites, and man respectively, and we know that V = V1 + V2 + V3. How can we produce V? I would appreciate it if you could explain with an example of real vectors.</p>
2017-06-27 17:12:36.157000+00:00
2021-11-17 15:04:52.033000+00:00
null
nlp|information-retrieval|word2vec|google-api-python-client|word-embedding
['https://arxiv.org/pdf/2004.07464.pdf', 'https://arxiv.org/abs/1708.03629', 'https://github.com/vyraun/Half-Size']
3
44,093,296
<p>There is nothing wrong. The problem is that increasing the number of layers does not automatically mean higher accuracy (otherwise machine learning would be more or less solved, because whenever you needed better accuracy in an image classifier you would just add one more layer to an Inception and claim victory).</p> <p>To show you that this is not only your problem, take a look at this high-profile paper: <a href="https://arxiv.org/pdf/1512.03385.pdf" rel="noreferrer">Deep Residual Learning for Image Recognition</a>, where they observe that simply increasing the number of layers degrades performance (which is not the important part) and propose an architecture to overcome this problem (which is the important part). Here is a small excerpt from it:<a href="https://i.stack.imgur.com/msoTt.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/msoTt.jpg" alt="enter image description here"></a></p> <blockquote> <p>The deeper network has higher training error and thus test error.</p> </blockquote>
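<p>For intuition, the core trick of that paper is the residual (skip) connection, which lets extra layers default to an identity mapping instead of hurting optimization. Below is a minimal, illustrative Keras sketch of one dense residual block; it is not the paper's exact architecture, and the layer sizes are arbitrary.</p> <pre><code>import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, units):
    # Two dense layers whose output is added back onto the input (the shortcut)
    h = layers.Dense(units, activation=&quot;relu&quot;)(x)
    h = layers.Dense(units)(h)
    return layers.Activation(&quot;relu&quot;)(layers.Add()([x, h]))

inputs = tf.keras.Input(shape=(784,))
x = layers.Dense(64, activation=&quot;relu&quot;)(inputs)
x = residual_block(x, 64)            # extra depth, but it can fall back to identity
outputs = layers.Dense(10, activation=&quot;softmax&quot;)(x)
model = tf.keras.Model(inputs, outputs)
</code></pre>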
2017-05-21 04:47:03.857000+00:00
2017-05-21 04:47:03.857000+00:00
null
null
44,092,936
<p>I'm learning TensorFlow, and trying to create a simple two layer neural network.</p> <p>The tutorial code <a href="https://www.tensorflow.org/get_started/mnist/pros" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/mnist/pros</a> starts with this simple network, to get 92% accuracy:</p> <pre><code>W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
</code></pre> <p>I tried replacing it with this very simple network, adding a new layer, <strong>but accuracy now drops to 84%</strong>!!!</p> <pre><code>layer1_len = 10
w1 = weight_var([784, layer1_len])
b1 = bias_var([layer1_len])
o1 = tf.nn.relu(tf.matmul(x, w1) + b1)
w2 = weight_var([layer1_len, 10])
b2 = bias_var([10])
y = tf.nn.softmax(tf.matmul(o1, w2) + b2)
</code></pre> <p>I get that result with several different values for <code>layer1_len</code> as well as different numbers of training steps. (Note that if I omit the <code>weight_var</code> and <code>bias_var</code> random initialization, and keep everything at zero, accuracy drops to close to 10%, essentially no better than guessing.)</p> <p><strong>What am I doing wrong?</strong></p>
2017-05-21 03:40:05.703000+00:00
2017-05-21 04:47:03.857000+00:00
null
machine-learning|tensorflow|neural-network
['https://arxiv.org/pdf/1512.03385.pdf', 'https://i.stack.imgur.com/msoTt.jpg']
2
46,787,296
<p>The fact that the model trains on its own predictions is the whole point of Q-learning: it is a concept called bootstrapping, which means reusing your experience. The insight behind this is:</p> <ul> <li>The Agent is initialized with some weights</li> <li>These weights represent the Agent's current representation of the Q-Value function it is trying to approximate</li> <li>Then it acts on the environment, performing the action it believes to be of highest Q-Value (with some randomness for exploration)</li> <li>Then it receives some feedback from the environment: a reward, and the new state it is in</li> <li>By comparing the difference between the Agent's Q-Value approximation for state <code>t</code> (= <code>[s_t_batch, a_batch]</code>) and its (discounted) approximation for state <code>t+1</code> <strong><em>plus</em></strong> the reward (=<code>y_batch</code>), it is able to measure how wrong its prediction for <code>Qt</code> is.</li> <li>From this measure of mistake (called the TD-Error), the weights are updated in the direction of lower MSE, as for any other gradient-based optimization. </li> <li>(One could wait for more than one step to have more information from the environment and update the weights in an even better direction. One could actually wait for the whole episode to be over and train on that. This continuum between training instantly and waiting for the end is called TD(Lambda); you should look into it)</li> </ul> <p>Your loss means exactly this: for one batch, it is the mean-squared error between your model's prediction for time <code>t</code> from its own Q-Value approximation and its prediction for time <code>t</code> from its Q-Value approximation for the <strong><em>next</em></strong> state, taking into account some "ground truth" from the environment, that is the <em>reward</em> for this timestep (see the short sketch after this answer for how this target is typically formed).</p> <p>Your loss does seem to go down; however, it is very unstable, which is a known issue of vanilla Q-Learning and especially vanilla Deep Q-Learning. Look at the overview paper below to get an idea of how more complex algorithms work.</p> <p>I advise you to look into <a href="https://en.wikipedia.org/wiki/Temporal_difference_learning" rel="nofollow noreferrer">Temporal Difference Learning</a>. Other good resources are </p> <ul> <li><a href="https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0" rel="nofollow noreferrer">Simple Reinforcement Learning with Tensorflow</a></li> <li>The RL Bible: Sutton &amp; Barto, Reinforcement Learning: An Introduction (2015 Edition)</li> <li>This <a href="http://arxiv.org/abs/1701.07274" rel="nofollow noreferrer">overview paper</a> summarizing insights and implementations of recent algorithms</li> <li>I wrote my <a href="http://vict0rsch.github.io/thesis/thesisVictorSchmidt.pdf" rel="nofollow noreferrer">Master Thesis</a> on RL; you can check out part 2 (background theory) for more detailed insights</li> </ul>
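<p>To make the bootstrapped target concrete, here is a short, illustrative NumPy sketch of how the TD target is typically formed, including the common stabilization of using a separate, periodically updated target network (a standard DQN trick that is not present in the question's code). The batch arrays and <code>target_model</code> are assumed to exist; this is a sketch, not the asker's implementation.</p> <pre><code>import numpy as np

GAMMA = 0.99

def td_targets(target_model, r_batch, s_next_batch, done_batch):
    # Bootstrapped target: reward plus the discounted value of the best next action,
    # estimated by a frozen copy of the network (the &quot;target network&quot;).
    next_q = target_model.predict(s_next_batch, verbose=0)   # shape (batch, num_actions)
    max_next_q = np.max(next_q, axis=1)
    # Do not bootstrap past the end of an episode
    return r_batch + GAMMA * max_next_q * (1.0 - done_batch)

# Periodically (e.g. every few hundred steps) sync the frozen copy:
# target_model.set_weights(model.get_weights())
</code></pre>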
2017-10-17 09:50:12.563000+00:00
2017-10-17 10:28:19.490000+00:00
2017-10-17 10:28:19.490000+00:00
null
46,783,760
<p>When I am training my model I have the following segment:</p> <pre><code>s_t_batch, a_batch, y_batch = train_data(minibatch, model2)
# perform gradient step
loss.append(model.train_on_batch([s_t_batch, a_batch], y_batch))
</code></pre> <p>where <code>s_t, a_</code> correspond to the current states and the actions that were taken in those states, respectively. <code>model2</code> is the same as <code>model</code> except that <code>model2</code> has an output of <code>num_actions</code> while <code>model</code> only outputs the value of the action that was taken in that state. </p> <p>What I find strange (and this is really the focus of this question) is that in the function <code>train_data</code> I have the line:</p> <pre><code>y_batch = r_batch + GAMMA * np.max(model.predict(s_t_batch), axis=1)
</code></pre> <p>The strange part is the fact that I am using the model to generate my <code>y_batch</code> as well as training on it. Doesn't this become some sort of self-fulfilling prophecy? If I understand correctly, the model tries to predict the expected maximum reward. Using the <strong>same</strong> model to try and generate <code>y_batch</code> implies that it is the true model, doesn't it?</p> <p><strong>The questions are:</strong> 1. What is the intuition behind using the same model to generate <code>y_batch</code> as the one being trained on it? 2. (optional) Does the loss value mean anything? When I plot it, it doesn't seem to be converging; however, the sum of rewards seems to be increasing (see plots in the link below).</p> <p>The full code can be found <a href="https://github.com/sachinruk/deepschool.io/blob/master/Lesson%2020%20-%20Deep%20Q%20Learning%20-%20Solutions.ipynb" rel="nofollow noreferrer">here</a>, which is an implementation of Deep Q Learning on the CartPole-v0 problem: </p> <h2>Comments from other forums:</h2> <ol> <li>y = r + gamma*np.max(model.predict(s_t_batch), axis=1) is totally natural and y will converge to the true state-action value. And if you don't break down the correlation between consecutive updates with something like experience replay (or, better, prioritized experience replay) your model WILL diverge. And there are better variants like DDQN and Duelling Networks, which perform better.</li> <li>y_batch includes the reward. Both the target and online networks are estimates. It is indeed a somewhat self-fulfilling prophecy, as DQN's value function is overly optimistic. That is why Double DQN was added a few months later.</li> <li>y will converge, but not necessarily to the true (I assume you mean optimal) state-action value. No one has proven that the converged value is the optimal value, but it is the best approximation we have. However, it will converge to the true value for simple enough problems (e.g. grid-world).</li> </ol>
2017-10-17 06:25:25.803000+00:00
2019-10-19 08:00:48.420000+00:00
2019-10-19 08:00:48.420000+00:00
deep-learning|reinforcement-learning|openai-gym|q-learning
['https://en.wikipedia.org/wiki/Temporal_difference_learning', 'https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0', 'http://arxiv.org/abs/1701.07274', 'http://vict0rsch.github.io/thesis/thesisVictorSchmidt.pdf']
4
18,682,668
<p>First of all, take a look at the file size; here are detailed <a href="http://arxiv.org/pdf/cs/0502012.pdf" rel="nofollow">performance measurements</a>.</p>
2013-09-08 10:26:03.417000+00:00
2013-09-08 10:26:03.417000+00:00
null
null
18,681,936
<p>I have a requirement to load a file containing up to 1 million lines of string data. My first thought is to use C# 5.0 async to load the data while not blocking the UI thread. If the user tries to access something that relies on the data, they will get a loading message.</p> <p>Still, I would like the fastest possible method in order to improve the user's experience. </p> <p>Is the speed of reading data from the disk purely a function of the disk speed, and thus is StreamReader.ReadAllLines() as performant as any other C# code? Or is there something 'fancy' I can do to boost performance programmatically? This does not have to be described in detail. If so, what approximate percentage improvement might be achieved? </p> <p>I am purely interested in read speed and not concerned with the speed of the code that may process the data once loaded. </p>
2013-09-08 08:38:16.257000+00:00
2013-09-08 19:10:23.917000+00:00
2013-09-08 19:10:23.917000+00:00
c#
['http://arxiv.org/pdf/cs/0502012.pdf']
1
52,743,448
<p>The general answer whenever it comes to the question of "which is faster?" is always: measure how fast each approach runs your application scenario to find out. In this case, I would say that the first approach would seem preferable most of the time (if you had to pick one of those two options for some reason). Unless you have some very tiny convolution kernels, the second approach would have lots of threads idle in the parts that do much of the actual work. Be sure to avoid bank conflicts within your tiles and think about the memory access patterns you get from your warps when moving data to and from global memory.</p> <p>In the end, convolution is basically just computing sums over all possible combinations of kernel coefficients and input elements. Since the workload is essentially just repeatedly fetching these values in some order, convolution is almost necessarily going to be limited by bandwidth. Thus, doing convolution efficiently comes down to optimizing memory access and reducing bandwidth as much as possible.</p> <blockquote> <p>[…] which version is used more often in practice, like in deep learning?</p> </blockquote> <p>Neither. The naïve approach of throwing nested loops at it to brute-force convolution in the spatial domain is almost never an efficient way of computing convolutions. Convolution is such a fundamental operation for so many things that it has been studied extensively. There are literally hundreds, if not thousands of papers and books you could read on the subject. In deep learning, the problem of convolution has <a href="https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/" rel="nofollow noreferrer">commonly been formulated in terms of <em>general matrix multiplications</em> (GEMMs)</a> since this approach leads to rather nice memory access patterns and many efficient GEMM implementations are available for the GPU. But also FFT-based approaches as well as <a href="https://arxiv.org/abs/1509.09308" rel="nofollow noreferrer">other algorithms</a> are increasingly used depending on the application.</p>
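<p>To illustrate the GEMM formulation mentioned above: the input patches are unrolled into the columns of a matrix (the &quot;im2col&quot; step), after which the convolution becomes a single matrix multiplication with the flattened kernel. The NumPy sketch below shows the idea for a 2D, single-channel case only; it is a conceptual illustration (and, like most deep-learning libraries, it computes cross-correlation without flipping the kernel), not how cuDNN or any particular library implements it.</p> <pre><code>import numpy as np

def conv2d_gemm(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1

    # im2col: each output position becomes one column of kh*kw input values
    cols = np.empty((kh * kw, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = image[i:i + kh, j:j + kw].ravel()

    # The convolution is now a single (1 x kh*kw) times (kh*kw x oh*ow) GEMM
    return (kernel.ravel() @ cols).reshape(oh, ow)

image = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)
print(conv2d_gemm(image, kernel).shape)   # (6, 6)
</code></pre>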
2018-10-10 15:12:21.693000+00:00
2018-10-10 15:12:21.693000+00:00
null
null
52,729,965
<p>Based on my study, there are 2 different strategies to implement tiled version of convolution with cuda. I want to know more about this, and would like to see how they compare with each other, what is the advantage and disadvantage of each strategy, and how to choose. Below is the implementations of the two different strategies.</p> <p>Strategy 1: the tile size matches with the output size, and needs multiple steps to load the input.</p> <pre><code>#define MASK_WIDTH 3 #define MASK_RADIUS 1 #define TILE_WIDTH 8 #define SHAREDMEM_DIM (TILE_WIDTH + (MASK_RADIUS * 2)) __constant__ float deviceMask[MASK_WIDTH * MASK_WIDTH * MASK_WIDTH]; __global__ void conv3d(float *inputArray, float *outputArray, const int z_size, const int y_size, const int x_size) { __shared__ float subTile[SHAREDMEM_DIM][SHAREDMEM_DIM][SHAREDMEM_DIM]; int bx = blockIdx.x, tx = threadIdx.x; int by = blockIdx.y, ty = threadIdx.y; int bz = blockIdx.z, tz = threadIdx.z; int destination = (tz * TILE_WIDTH * TILE_WIDTH) + (ty * TILE_WIDTH) + tx; int destTmp = destination; int dX = destTmp % SHAREDMEM_DIM; destTmp = destTmp / SHAREDMEM_DIM; int dY = destTmp % SHAREDMEM_DIM; destTmp = destTmp / SHAREDMEM_DIM; int dZ = destTmp; int inputZ = dZ + (bz * TILE_WIDTH) - MASK_RADIUS; int inputY = dY + (by * TILE_WIDTH) - MASK_RADIUS; int inputX = dX + (bx * TILE_WIDTH) - MASK_RADIUS; int input = (inputZ * y_size * x_size) + (inputY * x_size) + inputX; if( inputZ &gt;= 0 &amp;&amp; inputZ &lt; z_size &amp;&amp; inputY &gt;= 0 &amp;&amp; inputY &lt; y_size &amp;&amp; inputX &gt;= 0 &amp;&amp; inputX &lt; x_size){ subTile[dZ][dY][dX] = inputArray[input]; } else{ subTile[dZ][dY][dX] = 0; } destination = TILE_WIDTH * TILE_WIDTH * TILE_WIDTH + (tz * TILE_WIDTH * TILE_WIDTH) + (ty * TILE_WIDTH) + tx; destTmp = destination; dX = destTmp % SHAREDMEM_DIM; destTmp = destTmp / SHAREDMEM_DIM; dY = destTmp % SHAREDMEM_DIM; destTmp = destTmp / SHAREDMEM_DIM; dZ = destTmp; inputZ = dZ + (bz * TILE_WIDTH) - MASK_RADIUS; inputY = dY + (by * TILE_WIDTH) - MASK_RADIUS; inputX = dX + (bx * TILE_WIDTH) - MASK_RADIUS; input = (inputZ * y_size * x_size) + (inputY * x_size) + inputX; if(dZ &lt; SHAREDMEM_DIM){ if( inputZ &gt;= 0 &amp;&amp; inputZ &lt; z_size &amp;&amp; inputY &gt;= 0 &amp;&amp; inputY &lt; y_size &amp;&amp; inputX &gt;= 0 &amp;&amp; inputX &lt; x_size ) { subTile[dZ][dY][dX] = inputArray[input]; } else{ subTile[dZ][dY][dX] = 0; } } __syncthreads(); float sum = 0; int z, y, x; for(z = 0; z &lt; MASK_WIDTH; z++){ for(y = 0; y &lt; MASK_WIDTH; y++){ for(x = 0; x &lt; MASK_WIDTH; x++){ sum += subTile[tz + z][ty + y][tx + x] * deviceMask[x + (y * MASK_WIDTH) + (z * MASK_WIDTH * MASK_WIDTH)]; } } } z = tz + (bz * TILE_WIDTH); y = ty + (by * TILE_WIDTH); x = tx + (bx * TILE_WIDTH); if(z &lt; z_size &amp;&amp; y &lt; y_size &amp;&amp; x &lt; x_size){ outputArray[x + (y * x_size) + (z * y_size * x_size)] = sum; } __syncthreads(); } </code></pre> <p>The second strategy is to set the block size to be the same with input tile. 
In calculating output, some threads are turned off.</p> <pre><code>#define TILE_X 14 #define TILE_Y 6 #define TILE_Z 6 #define MASK_WIDTH 3 #define MASK_SIZE MASK_WIDTH * MASK_WIDTH * MASK_WIDTH __constant__ float mask[MASK_WIDTH][MASK_WIDTH][MASK_WIDTH]; __global__ void conv3d(float *input, float *output, const int z_size, const int y_size, const int x_size) { __shared__ float inputTile [TILE_Z+MASK_WIDTH-1][TILE_Y+MASK_WIDTH-1][TILE_X+MASK_WIDTH-1]; int tx = threadIdx.x; int ty = threadIdx.y; int tz = threadIdx.z; int bx = blockIdx.x; int by = blockIdx.y; int bz = blockIdx.z; int x_o = bx * TILE_X + tx int y_o = by * TILE_Y + ty; int z_o = bz * TILE_Z + tz; int x_i = x_o - MASK_WIDTH/2; int y_i = y_o - MASK_WIDTH/2; int z_i = z_o - MASK_WIDTH/2; if (x_i &gt;= 0 &amp;&amp; y_i &gt;= 0 &amp;&amp; z_i &gt;= 0 &amp;&amp; x_i &lt; x_size &amp;&amp; y_i &lt; y_size &amp;&amp; z_i &lt; z_size) inputTile[tz][ty][tx] = input[(z_i * y_size + y_i) * x_size + x_i]; else inputTile[tz][ty][tx] = 0.0; __syncthreads(); float acc = 0.0; if(tz &lt; TILE_Z &amp;&amp; ty &lt; TILE_Y &amp;&amp; tx &lt; TILE_X) { for(int z_mask = 0; z_mask &lt; Z_MASK_WIDTH; z_mask++) { for(int y_mask = 0; y_mask &lt; Y_MASK_WIDTH; y_mask++) { for(int x_mask = 0; x_mask &lt; X_MASK_WIDTH; x_mask++) { acc += mask[z_mask][y_mask][x_mask] * inputTile[tz+z_mask][ty+y_mask][tx+x_mask]; } } } if(z_o &lt; z_size &amp;&amp; y_o &lt; y_size &amp;&amp; x_o &lt; x_size) output[(z_o * y_size + y_o) * x_size + x_o] = acc; } } </code></pre> <p>Any idea about how to choose between these? In addition, which version is used more often in practice, like in deep learning? Also if you have any comments on the code, please also let me know!</p>
2018-10-09 22:02:59.140000+00:00
2018-10-10 15:12:21.693000+00:00
2018-10-09 22:17:29.353000+00:00
c++|3d|cuda|deep-learning|convolution
['https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/', 'https://arxiv.org/abs/1509.09308']
2
69,508,740
<p>Looks great, as you have already followed most of the solutions to resolve the gradient exploding problem. Below is a list of all the solutions you can try.</p> <p><strong>Solutions to avoid the Gradient Exploding problem</strong></p> <ol> <li><p><em>Appropriate weight initialization:</em> utilise an appropriate weight initialization based on the activation function used.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Initialization</th> <th>Activation Function</th> </tr> </thead> <tbody> <tr> <td>He</td> <td>ReLU &amp; variants</td> </tr> <tr> <td>LeCun</td> <td>SELU</td> </tr> <tr> <td>Glorot</td> <td>Softmax, Logistic, None, Tanh</td> </tr> </tbody> </table> </div></li> <li><p><em>Redesigning your neural network:</em> use fewer layers in the neural network and/or a smaller batch size.</p> </li> <li><p><em>Choosing a non-saturating activation function</em>: choose the right activation function, with a reduced learning rate:</p> <ul> <li>ReLU</li> <li>Leaky ReLU</li> <li>randomized leaky ReLU (RReLU)</li> <li>parametric leaky ReLU (PReLU)</li> <li>exponential linear unit (ELU)</li> </ul> </li> <li><p><em>Batch Normalisation:</em> ideally, use batch normalisation before/after each layer, based on what works best for your dataset.</p> <ul> <li><p>after each layer (<a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">paper reference</a>)</p> <pre><code>model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.BatchNormalization(),
    keras.layers.Dense(300, activation=&quot;elu&quot;, kernel_initializer=&quot;he_normal&quot;),
    keras.layers.BatchNormalization(),
    keras.layers.Dense(100, activation=&quot;elu&quot;, kernel_initializer=&quot;he_normal&quot;),
    keras.layers.BatchNormalization(),
    keras.layers.Dense(10, activation=&quot;softmax&quot;)
])
</code></pre> </li> <li><p>before each layer</p> <pre><code>model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.BatchNormalization(),
    keras.layers.Dense(300, kernel_initializer=&quot;he_normal&quot;, use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.Activation(&quot;elu&quot;),
    keras.layers.Dense(100, kernel_initializer=&quot;he_normal&quot;, use_bias=False),
    keras.layers.Activation(&quot;elu&quot;),
    keras.layers.BatchNormalization(),
    keras.layers.Dense(10, activation=&quot;softmax&quot;)
])
</code></pre> </li> </ul> </li> <li><p><em>Gradient Clipping</em>: good default values are clipnorm=1.0 and clipvalue=0.5 (see the short Keras sketch after this list).</p> </li> <li><p><em>Ensure the right optimizer is utilised</em>: since you have utilised the Adam optimizer, check whether another optimizer works better for your case. Refer to <a href="https://keras.io/api/optimizers/" rel="nofollow noreferrer">this documentation</a> for info on the available optimizers [SGD, RMSprop, Adam, Adadelta, Adagrad, Adamax, Nadam, Ftrl].</p> </li> <li><p><em>Truncated Backpropagation Through Time</em>: often works for RNNs; refer to this <a href="https://machinelearningmastery.com/gentle-introduction-backpropagation-time/" rel="nofollow noreferrer">documentation</a>.</p> </li> <li><p><em>Use an LSTM</em> (a solution for RNNs).</p> </li> <li><p><em>Use weight regularizers on layers</em>: set <code>kernel_regularizer</code> to L1 or L2. <a href="https://keras.io/api/layers/regularizers/" rel="nofollow noreferrer">Weight regularizer document reference</a></p> </li> </ol> <p>For more information, refer to chapter 11 of the book <em>Hands-On Machine Learning with Scikit-Learn, Keras and TensorFlow</em> by <em>Aurélien Géron</em>.</p>
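<p>As a concrete illustration of the gradient clipping point above (point 5), in Keras the clipping thresholds are simply passed to the optimizer; the learning rate below is a placeholder, not a recommended value, and <code>model</code> is assumed to already exist.</p> <pre><code>from tensorflow import keras

# Clip gradients whose norm exceeds 1.0; alternatively, clipvalue=0.5 clips each
# gradient component individually
optimizer = keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)

model.compile(loss=&quot;mse&quot;, optimizer=optimizer)
</code></pre>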
2021-10-09 16:42:13.367000+00:00
2021-10-29 16:33:42.460000+00:00
2021-10-29 16:33:42.460000+00:00
null
69,427,103
<p>I have a gradient exploding problem which I couldn't solve after trying for several days. I implemented a custom message passing graph neural network in TensorFlow which is used to predict a continuous value from graph data. Each graph is associated with one target value. Each node of a graph is represented by a node attribute vector, and the edges between nodes are represented by an edge attribute vector.</p> <p>Within a message passing layer, node attributes are updated in a certain way (e.g., by aggregating other node/edge attributes), and these updated node attributes are returned.</p> <p>Now, I managed to figure out where the gradient problem occurs in my code. I have the below snippet.</p> <pre><code>to_concat = [neighbors_mean, e] z = K.concatenate(to_concat, axis=-1) output = self.Net(z) </code></pre> <p>Here, <code>neighbors_mean</code> is the element-wise mean between two node attributes <code>vi</code>, <code>vj</code> that form the edge having an edge attribute <code>e</code>. <code>Net</code> is a single layer feed-forward network. With this, the training loss suddenly jumps to NaN after about 30 epochs with a batch size of 32. If the batch size is 128, still the gradients explode after about 200 epochs.</p> <p>I found that, in this case, the gradients explode because of the edge attribute <code>e</code>. If I didn't concatenate <code>neighbors_mean</code> with <code>e</code> and just used the below code, there would be no gradient explosion.</p> <pre><code>output = self.Net(neighbors_mean) </code></pre> <p>Also I can avoid gradient explosion by sending <code>e</code> through a sigmoid function as follows. But this degrades the performance (final MAE), because the values in <code>e</code> are mapped to 0-1 range non-linearly. Note that <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)" rel="nofollow noreferrer">Rectified Linear Unit</a> (ReLU) instead of sigmoid didn't work.</p> <pre><code>to_concat = [neighbors_mean, tf.math.sigmoid(e)] z = K.concatenate(to_concat, axis=-1) output = self.Net(z) </code></pre> <p>Just to mention that <code>e</code> carries a single value relating to the distance between the two corresponding nodes and this distance is always in the range 0.5-4. There are no large values or NaNs in <code>e</code>.</p> <p>I have a custom loss function to train this model, but I found that this is not a problem with loss (other losses also led to the same problem). Below is my custom loss function. Note that although this is a single output regression network, the final layer of my NN has two neurons, relating to the mean and log(sigma) of the prediction.</p> <pre><code>def robust_loss(y_true, y_pred): &quot;&quot;&quot; Computes the robust loss between labels and predictions. 
&quot;&quot;&quot; mean, sigma = tf.split(y_pred, 2, axis=-1) # tried limiting 'sigma' with sigma = tf.clip_by_value(sigma,-4,1.0) but the gradients still explode loss = np.sqrt(2.0) * K.abs(mean - y_true) * K.exp(-sigma) + sigma return K.mean(loss) </code></pre> <p>I basically tried everything suggested online to avoid gradient explosion.</p> <ol> <li>Applied gradient clipping - with <code>Adam(lr, clipnorm=1, clipvalue=5)</code> and also with <code>tf.clip_by_global_norm(gradients, 1.0)</code></li> <li>My target variables are always scaled</li> <li>Weights are initialized with <code>glorot_uniform</code> distribution</li> <li>Applied regularisation to weights</li> <li>Tried larger batch sizes (till 256, although delayed gradient explosion happens at some point)</li> <li>Tried with reduced learning rate</li> </ol> <p>What am I missing here? I definitely know it has something to do with concatenating <code>e</code>. But given that 0.5&lt;e&lt;4, why do the gradients explode in this case? This feature <code>e</code> is important to me. What else can I do to avoid numerical overflow in my model?</p>
2021-10-03 17:05:28.273000+00:00
2021-10-29 16:33:42.460000+00:00
2021-10-24 12:36:43.910000+00:00
python|tensorflow|machine-learning|keras|gradient
['https://arxiv.org/abs/1502.03167', 'https://keras.io/api/optimizers/', 'https://machinelearningmastery.com/gentle-introduction-backpropagation-time/', 'https://keras.io/api/layers/regularizers/']
4
57,385,673
<p>Batch normalization in LSTMs is not that easy to implement. Some papers present amazing results, e.g. <a href="https://arxiv.org/pdf/1603.09025.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1603.09025.pdf</a>, called Recurrent Batch Normalization. The authors apply the following equations</p> <p><a href="https://i.stack.imgur.com/2FhjQ.png" rel="nofollow noreferrer">BATCH-NORMALIZED LSTM</a></p> <p>Unfortunately, this model is not implemented in Keras yet, only in TensorFlow: <a href="https://github.com/OlavHN/bnlstm" rel="nofollow noreferrer">https://github.com/OlavHN/bnlstm</a></p> <p>However, I was able to get good results using (default) batch normalization after the activation function, without centering and shifting. This approach is different from the paper above, which applies BN to c_t and h_t; maybe it is worth a try.</p> <pre><code>model = Sequential()
model.add(LSTM(neurons1, activation=tf.nn.relu, return_sequences=True, input_shape=(timesteps, data_dim)))
model.add(BatchNormalization(momentum=m, scale=False, center=False))
model.add(LSTM(neurons2, activation=tf.nn.relu))
model.add(BatchNormalization(momentum=m, scale=False, center=False))
model.add(Dense(1))
</code></pre>
2019-08-07 00:55:29.967000+00:00
2019-08-07 00:55:29.967000+00:00
null
null
48,544,953
<p>I am trying to use batch normalization in LSTM using keras in R. In my dataset the target/output variable is the <code>Sales</code> column, and every row in the dataset records the <code>Sales</code> for each day in a year (2008-2017). The dataset looks like below:</p> <p><a href="https://i.stack.imgur.com/mFJgq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mFJgq.png" alt="Sales data"></a></p> <p>My objective is to build a LSTM model based on such dataset, which should be able to provide prediction at the end of training. I am training this model on the data from 2008-2016, and using half of the 2017 data as validation, and the rest as test set.</p> <p>Previously, I tried creating a model using dropout and early stopping. This looks like below:</p> <pre><code>mdl1 &lt;- keras_model_sequential() mdl1 %&gt;% layer_lstm(units = 512, input_shape = c(1, 3), return_sequences = T ) %&gt;% layer_dropout(rate = 0.3) %&gt;% layer_lstm(units = 512, return_sequences = FALSE) %&gt;% layer_dropout(rate = 0.2) %&gt;% layer_dense(units = 1, activation = "linear") mdl1 %&gt;% compile(loss = 'mse', optimizer = 'rmsprop') </code></pre> <p>The model looks as follows</p> <pre><code>___________________________________________________________ Layer (type) Output Shape Param # =========================================================== lstm_25 (LSTM) (None, 1, 512) 1056768 ___________________________________________________________ dropout_25 (Dropout) (None, 1, 512) 0 ___________________________________________________________ lstm_26 (LSTM) (None, 512) 2099200 ___________________________________________________________ dropout_26 (Dropout) (None, 512) 0 ___________________________________________________________ dense_13 (Dense) (None, 1) 513 =========================================================== Total params: 3,156,481 Trainable params: 3,156,481 Non-trainable params: 0 ___________________________________________________________ </code></pre> <p>To train the model, early stopping is used with a validation set.</p> <pre><code>mdl1.history &lt;- mdl1 %&gt;% fit(dt.tr, dt.tr.out, epochs=500, shuffle=F, validation_data = list(dt.val, dt.val.out), callbacks = list( callback_early_stopping(min_delta = 0.000001, patience = 10, verbose = 1) )) </code></pre> <p>On top of this, I want to use batch normalization to speed up the training. As per my understanding, to use batch normalization, I need to divide the data into batches, and apply <code>layer_batch_normalization</code> for the input of each hidden layer. 
The model layers looks like as follows:</p> <pre><code>batch_size &lt;- 32 mdl2 &lt;- keras_model_sequential() mdl2 %&gt;% layer_batch_normalization(input_shape = c(1, 3), batch_size = batch_size) %&gt;% layer_lstm(units = 512, return_sequences = T) %&gt;% layer_dropout(rate = 0.3) %&gt;% layer_batch_normalization(batch_size = batch_size) %&gt;% layer_lstm(units = 512, return_sequences = F) %&gt;% layer_dropout(rate = 0.2) %&gt;% layer_batch_normalization(batch_size = batch_size) %&gt;% layer_dense(units = 1, activation = "linear") mdl2 %&gt;% compile(loss = 'mse', optimizer = 'rmsprop') </code></pre> <p>This model looks as follows:</p> <pre><code>______________________________________________________________________________ Layer (type) Output Shape Param # ============================================================================== batch_normalization_34 (BatchNormalization) (32, 1, 3) 12 ______________________________________________________________________________ lstm_27 (LSTM) (32, 1, 512) 1056768 ______________________________________________________________________________ dropout_27 (Dropout) (32, 1, 512) 0 ______________________________________________________________________________ batch_normalization_35 (BatchNormalization) (32, 1, 512) 2048 ______________________________________________________________________________ lstm_28 (LSTM) (32, 1, 512) 2099200 ______________________________________________________________________________ dropout_28 (Dropout) (32, 1, 512) 0 ______________________________________________________________________________ batch_normalization_36 (BatchNormalization) (32, 1, 512) 2048 ______________________________________________________________________________ dense_14 (Dense) (32, 1, 1) 513 ============================================================================== Total params: 3,160,589 Trainable params: 3,158,535 Non-trainable params: 2,054 ______________________________________________________________________________ </code></pre> <p>Training the model looks like before. Only difference lies in the training and validation dataset, which are made of sizes that are multiple of <code>batch_size</code> (32 here), by resampling data from the 2nd last batch to the last batch.</p> <p>However, the performance of <code>mdl1</code> is much better than that of <code>mdl2</code>, as can be seen below.</p> <p><a href="https://i.stack.imgur.com/cyzak.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cyzak.png" alt="models"></a></p> <p>I am not sure exactly what I am doing wrong, as I am starting with keras (and practical neural net in general). Additionally, the performance of first model is not so good as well; any suggestion on how to improve that would also be great.</p>
2018-01-31 14:49:03.390000+00:00
2019-08-07 00:55:29.967000+00:00
null
r|tensorflow|keras|recurrent-neural-network|batch-normalization
['https://arxiv.org/pdf/1603.09025.pdf', 'https://i.stack.imgur.com/2FhjQ.png', 'https://github.com/OlavHN/bnlstm']
3
60,792,910
<p>Target tracking is a very difficult problem. In target tracking you will have <strong>two main issues</strong>: the motion uncertainty problem, and the origin uncertainty problem. The first one refers to the way you model object motion so you can predict its future state, and the second refers to the issue of data association (which measurement corresponds to which track; the literature is filled with <a href="https://www.mdpi.com/1424-8220/20/4/1110" rel="nofollow noreferrer">scientific</a> ways in which this issue can be approached).</p> <p>Before you can come up with a solution to your problem you will have to answer some questions yourself regarding the tracking problem you want to solve. For example: what are the values that you want to track (this will define your state vector), how are those values related to one another, are you trying to perform single-object tracking or multiple-object tracking, how are the objects moving (do they have a relatively constant acceleration or velocity or not), do objects make turns, can objects also be occluded or not, and so on.</p> <p>The <strong>Kalman Filter</strong> is a good solution for predicting the next state of your system (once you have identified your process model); a minimal sketch is given after this answer. A deep learning alternative to the Kalman filter is the so-called <a href="https://arxiv.org/abs/1511.05121" rel="nofollow noreferrer">Deep Kalman Filter</a>, which is essentially used to do the same thing. In case your process or measurement models are not linear, you will have to linearize them before predicting the next state. Some solutions that deal with non-linear process or measurement models are the <strong>Extended Kalman Filter</strong> (EKF) or the <strong><a href="https://towardsdatascience.com/the-unscented-kalman-filter-anything-ekf-can-do-i-can-do-it-better-ce7c773cf88d" rel="nofollow noreferrer">Unscented Kalman Filter</a></strong> (UKF). </p> <p>Now, regarding fast-moving objects, an idea you can use is to have a larger covariance matrix, since the objects can move a lot more if they are fast, so the search space for the correct association has to be a bit larger. Additionally, you can use multiple motion models in case a single model cannot capture your objects' motion. For occlusions I will leave you <a href="https://stackoverflow.com/questions/2764238/image-processing-what-are-occlusions/60644446#60644446">this</a> Stack Overflow thread, where I have given an answer covering more details regarding occlusion handling in tracking. I have added some references for you to read. You will have to provide more details in your question if you would like to receive more information regarding a solution (for example, you should define fast-moving objects with respect to the camera frame rate). </p> <p>I personally do not think there is a silver-bullet solution for the tracking problem; I prefer to tailor a solution to the problem I am trying to solve. </p>
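<p>For concreteness, here is a minimal constant-velocity Kalman filter predict/update step for a 2D position measurement, written with plain NumPy. All matrices and noise values are illustrative assumptions; a real tracker would tune them to the camera frame rate and object dynamics discussed above.</p> <pre><code>import numpy as np

dt = 1 / 30.0                       # assumed camera frame interval (30 fps)

# State: [x, y, vx, vy]; constant-velocity motion model
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],         # we only measure position
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2                # process noise (larger for fast/erratic objects)
R = np.eye(2) * 1e-1                # measurement noise

x = np.zeros(4)                     # initial state
P = np.eye(4)                       # initial state covariance

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z):
    global x, P
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

predict()
update(np.array([1.0, 2.0]))        # one measured (x, y) detection
</code></pre>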
2020-03-21 20:27:41.103000+00:00
2020-03-21 20:34:19.133000+00:00
2020-03-21 20:34:19.133000+00:00
null
60,592,851
<p>I'm trying to create an application that will be able to track rapidly moving objects in a video/camera feed; however, I have not found any CV/DL solution that is good enough. Can you recommend any computer vision solution for tracking fast-moving objects on a regular laptop computer and webcam? A demo app would be ideal.</p> <p>For example, see this video where the tracking is done in hardware (I'm looking for a software solution): <a href="https://www.youtube.com/watch?v=qn5YQVvW-hQ" rel="nofollow noreferrer">https://www.youtube.com/watch?v=qn5YQVvW-hQ</a></p>
2020-03-08 22:54:46.580000+00:00
2020-06-08 11:39:46.317000+00:00
null
opencv|deep-learning
['https://www.mdpi.com/1424-8220/20/4/1110', 'https://arxiv.org/abs/1511.05121', 'https://towardsdatascience.com/the-unscented-kalman-filter-anything-ekf-can-do-i-can-do-it-better-ce7c773cf88d', 'https://stackoverflow.com/questions/2764238/image-processing-what-are-occlusions/60644446#60644446']
4
72,182,806
<p>As you said, <code>train_test_split</code> interprets each list of tags as a label; it doesn't matter what it contains. A sample with tags <code>[1, 2, 3]</code> will not be treated the same as a sample with tags <code>[1, 2]</code>. Hence, you cannot flatten the <code>tags</code> column to check the label counts.</p> <p>The solution, if you want to keep these labels, is to drop the observations with labels that are not represented enough (e.g., with <code>value_counts() == 1</code>). In fact, this is also what they do in the article you linked (see the last code snippet of the &quot;Perform exploratory data analysis&quot; paragraph):</p> <pre><code># Filtering the rare terms.
arxiv_data_filtered = arxiv_data.groupby(&quot;terms&quot;).filter(lambda x: len(x) &gt; 1)
</code></pre>
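<p>Applied to a DataFrame like the one in the question, a possible sketch is below; it assumes the <code>tags</code> column holds Python lists, which are converted to tuples so that identical combinations can be counted and filtered before the stratified split.</p> <pre><code>import pandas as pd
from sklearn.model_selection import train_test_split

# df is assumed to have a &quot;tags&quot; column containing lists of tag ids
df[&quot;tags_key&quot;] = df[&quot;tags&quot;].apply(tuple)          # lists are unhashable, tuples are not

# Keep only tag combinations that appear at least twice
counts = df[&quot;tags_key&quot;].value_counts()
df_filtered = df[df[&quot;tags_key&quot;].isin(counts[counts &gt; 1].index)]

train_df, test_df = train_test_split(
    df_filtered,
    test_size=0.02,
    stratify=df_filtered[&quot;tags_key&quot;],
)
</code></pre>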
2022-05-10 08:14:05.597000+00:00
2022-05-10 08:14:05.597000+00:00
null
null
72,182,217
<p>I have a multilabel dataset (<code>pd.DataFrame</code>) which looks like this: <a href="https://i.stack.imgur.com/AuOxq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AuOxq.png" alt="" /></a></p> <p>This is the value_counts of the flattened <code>tags</code> column:</p> <pre><code>101     4450171
86      3933972
45      3468383
0       2801217
46      2621773
        ...
4681       1000
2923       1000
4580       1000
7569       1000
6955       1000
Length: 7657, dtype: int64
</code></pre> <p>Then I use <code>train_test_split</code> from <code>sklearn</code> with the <code>stratify</code> argument to split the dataset with a balanced distribution:</p> <pre class="lang-py prettyprint-override"><code>train_df, test_df = train_test_split(
    df,
    test_size=0.02,
    stratify=df[&quot;tags&quot;].values,
)
</code></pre> <p>And I get this error:</p> <pre><code>ValueError: The least populated class in y has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2.
</code></pre> <p>Why? I see that the least populated class has 1000 samples. Does it actually compare lists instead of list values? I based this on this article: <a href="https://keras.io/examples/nlp/multi_label_classification/" rel="nofollow noreferrer">https://keras.io/examples/nlp/multi_label_classification/</a></p>
2022-05-10 07:26:35.447000+00:00
2022-05-10 08:14:05.597000+00:00
2022-05-10 07:56:21.367000+00:00
python|pandas|scikit-learn|split|dataset
[]
0
66,679,501
<p>I was able to bring your code to a version where it would at least converge. In summary, I think there might be multiple problems with it: the normalization (why those values?), some unnecessary relus, a too-high learning rate, MSE loss instead of cross-entropy, and mainly, I don't think the softmax in the bottleneck layer works that way, for vanishing-gradient reasons; see here:</p> <p><a href="https://www.quora.com/Does-anyone-ever-use-a-softmax-layer-mid-neural-network-rather-than-at-the-end" rel="nofollow noreferrer">https://www.quora.com/Does-anyone-ever-use-a-softmax-layer-mid-neural-network-rather-than-at-the-end</a></p> <p>Maybe one could fix this using the Gumbel softmax: <a href="https://arxiv.org/abs/1611.01144" rel="nofollow noreferrer">https://arxiv.org/abs/1611.01144</a></p> <p>Moreover, there are papers already achieving this, but as a Variational Autoencoder rather than a vanilla autoencoder; see here: <a href="https://arxiv.org/abs/1609.02200" rel="nofollow noreferrer">https://arxiv.org/abs/1609.02200</a>.</p> <p>For now you can use this modification, which at least converges, and then modify it step by step and see what breaks it.</p> <p>As for the classification, the standard way would be to use the trained encoder to generate features from images and then use a normal classifier (an SVM or so) on top of that.</p> <pre><code>import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torchvision.datasets import MNIST
from torch.autograd import Variable
from tqdm import tqdm

batch_size = 16

transform = transforms.Compose([
    transforms.ToTensor(),
])

trainset = MNIST(root='./data/', train=True, download=True, transform=transform)
dataloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=8)

class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder,self).__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 2, kernel_size=5),
            nn.ReLU(),
            nn.Conv2d(2, 4, kernel_size=5),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 10, kernel_size=5),
            nn.ReLU(),
            nn.ConvTranspose2d(10, 1, kernel_size=5),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

model = Autoencoder().cpu()
distance = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)

num_epochs = 20
outputs = []
for epoch in tqdm(range(num_epochs)):
    for data in dataloader:
        img, _ = data
        img = Variable(img).cpu()
        output = model(img)
        loss = distance(output, img)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    outputs.append(output)
    print('epoch [{}/{}], loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))

import matplotlib.pyplot as plt

# plotting epoch outputs
for k in range(0, 20):
    plt.figure(figsize=(9, 2))
    imgs = outputs[k].detach().numpy()
    for i, item in enumerate(imgs):
        plt.imshow(item[0])
        plt.title(str(i))
    plt.show()
</code></pre>
2021-03-17 18:55:39.553000+00:00
2021-03-18 12:25:51.280000+00:00
2021-03-18 12:25:51.280000+00:00
null
66,667,949
<p>I'm trying to build a simple autoencoder for MNIST, where the middle layer is just 10 neurons. My hope is that it will learn to classify the 10 digits, and I assume that would lead to the lowest error in the end (wrt reproducing the original image).</p> <p>I have the following code, which I've already played around with a fair amount. If I run it for up-to 100 epochs, the loss doesn't really go below 1.0, and if I evaluate it, it's obviously not working. What am I missing?</p> <p>Training:</p> <pre class="lang-py prettyprint-override"><code>import torch import torchvision as tv import torchvision.transforms as transforms import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable from torchvision.utils import save_image num_epochs = 100 batch_size = 64 transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) trainset = tv.datasets.MNIST(root='./data', train=True, download=True, transform=transform) dataloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=4) class Autoencoder(nn.Module): def __init__(self): super(Autoencoder,self).__init__() self.encoder = nn.Sequential( # 28 x 28 nn.Conv2d(1, 4, kernel_size=5), nn.Dropout2d(p=0.2), # 4 x 24 x 24 nn.ReLU(True), nn.Conv2d(4, 8, kernel_size=5), nn.Dropout2d(p=0.2), # 8 x 20 x 20 = 3200 nn.ReLU(True), nn.Flatten(), nn.Linear(3200, 10), nn.ReLU(True), # 10 nn.Softmax(), # 10 ) self.decoder = nn.Sequential( # 10 nn.Linear(10, 400), nn.ReLU(True), # 400 nn.Unflatten(1, (1, 20, 20)), # 20 x 20 nn.Dropout2d(p=0.2), nn.ConvTranspose2d(1, 10, kernel_size=5), # 24 x 24 nn.ReLU(True), nn.Dropout2d(p=0.2), nn.ConvTranspose2d(10, 1, kernel_size=5), # 28 x 28 nn.ReLU(True), nn.Sigmoid(), ) def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x model = Autoencoder().cpu() distance = nn.MSELoss() #optimizer = torch.optim.Adam(model.parameters(), weight_decay=1e-5) optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5) for epoch in range(num_epochs): for data in dataloader: img, _ = data img = Variable(img).cpu() output = model(img) loss = distance(output, img) optimizer.zero_grad() loss.backward() optimizer.step() print('epoch [{}/{}], loss: {:.4f}'.format(epoch+1, num_epochs, loss.item())) </code></pre> <p>Already the training loss indicates that the thing is not working, but printing out the confusion matrix (which in this case should not necessarily be the identity matrix, since the neurons can be ordered arbitrarily, but should be row-col-reordarable and approximate the identity, if this would work):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np confusion_matrix = np.zeros((10, 10)) batch_size = 20*1000 testset = tv.datasets.MNIST(root='./data', train=False, download=True, transform=transform) dataloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=True, num_workers=4) for data in dataloader: imgs, labels = data imgs = Variable(imgs).cpu() encs = model.encoder(imgs).detach().numpy() for i in range(len(encs)): predicted = np.argmax(encs[i]) actual = labels[i] confusion_matrix[actual][predicted] += 1 print(confusion_matrix) </code></pre>
2021-03-17 06:22:52.777000+00:00
2021-03-19 01:09:50.773000+00:00
2021-03-17 09:53:51.540000+00:00
python|pytorch|autoencoder|mnist
['https://www.quora.com/Does-anyone-ever-use-a-softmax-layer-mid-neural-network-rather-than-at-the-end', 'https://arxiv.org/abs/1611.01144', 'https://arxiv.org/abs/1609.02200']
3
41,315,489
<p>Stochastic Gradient Descent seems to require significant overparameterization in order to learn; here's one paper along those lines: <a href="https://arxiv.org/abs/1301.3583" rel="nofollow noreferrer">"Big Neural Networks Waste Capacity"</a></p>
2016-12-24 17:33:52.760000+00:00
2016-12-24 17:33:52.760000+00:00
null
null
41,314,819
<p>I think I'm missing something obvious here but would love some help figuring this out. </p> <p>Say I have a million words and want to embed them as part of my model. With TF I can do an embedding lookup, though I need to provide a matrix of size [1m*space_size]. So for 50 dimensions that comes out to 50M trainable parameters. On the other hand, I can encode the million words with a vector of dimension 20 (e.g., a binary code, since 2^20 is roughly 1M). I can embed that into a space of dimension 50 with a [20*50] matrix, for 1K parameters. Much cheaper. Since the weights of this matrix are still trainable, I'd expect to learn something about the words, and if I need more capacity I can increase the size of the space. </p> <p>That's the theory; in practice I tried it and the model didn't learn anything. So my question is: why? Thanks</p>
2016-12-24 16:08:11.587000+00:00
2016-12-26 19:19:35.393000+00:00
null
tensorflow|deep-learning
['https://arxiv.org/abs/1301.3583']
1
45,297,317
<p>As of SUMO 0.29.0, acceleration is not one of the <a href="http://www.sumo.dlr.de/wiki/TraCI/Vehicle_Value_Retrieval" rel="nofollow noreferrer">variables exposed by the SUMO TraCI API of a vehicle</a> - primarily because it is not one of the state variables of the most common car following models.</p> <p>You will need to compute acceleration yourself, by comparing the current speed of a vehicle to its speed before the last update.</p> <p>Note that there is more than one way of deriving acceleration from speed, depending on what you assume about the underlying process. For more details, there is <a href="http://arxiv.org/abs/1403.4881" rel="nofollow noreferrer">a 2015 paper by Treiber and Kanagaraj</a> that discusses this.</p>
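<p>As an illustration of that finite-difference approach (shown here with SUMO's Python TraCI client for brevity, rather than Veins' C++ API), one could do something along these lines; the configuration file name is a placeholder, and the units returned by <code>getDeltaT()</code> depend on the SUMO version, so treat the scaling as an assumption to verify.</p> <pre><code>import traci

traci.start([&quot;sumo&quot;, &quot;-c&quot;, &quot;scenario.sumocfg&quot;])   # placeholder config name

last_speed = {}
step_length = traci.simulation.getDeltaT()        # simulation step length

while traci.simulation.getMinExpectedNumber() &gt; 0:
    traci.simulationStep()
    for veh_id in traci.vehicle.getIDList():
        v = traci.vehicle.getSpeed(veh_id)
        if veh_id in last_speed:
            # Simple backward finite difference of the speed
            accel = (v - last_speed[veh_id]) / step_length
            print(veh_id, accel)
        last_speed[veh_id] = v

traci.close()
</code></pre>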
2017-07-25 08:01:40.243000+00:00
2017-07-25 08:01:40.243000+00:00
null
null
45,287,511
<p>I am using <code>veins-4.5</code> <code>omnet++ 5</code> and <code>sumo 0.29.0</code>. How can I access the <em>acceleration</em> of a vehicle in veins?</p> <p>Thanks a lot.</p>
2017-07-24 18:23:29.063000+00:00
2017-07-25 08:01:40.243000+00:00
2017-07-25 06:27:26.650000+00:00
omnet++|veins
['http://www.sumo.dlr.de/wiki/TraCI/Vehicle_Value_Retrieval', 'http://arxiv.org/abs/1403.4881']
2
55,811,323
<p>This is a well-studied problem: <strong>journey planning in public transportation networks</strong>.<br> Your approach based on Bellman-Ford might become problematic and too expensive depending on the network, since you can't consider that a vertex has been 'visited', or that the shortest path to a vertex has already been computed during the algorithm's execution.<br> These concepts (of 'visited', or of 'the shortest') apply only to the single-objective shortest path problem. That is because, given a pair of vertices <code>u, v</code>, there is a potentially exponential number of interesting paths, since you can't consider only the faster or the cheaper option. You have to keep in memory any path such that there is no other path that is cheaper AND faster, and this number of paths can quickly grow out of control if you start working on realistic networks (which can be pretty big: ~100k stops and millions of trips). </p> <p>I suggest you read about the <strong>multi-objective shortest path problem</strong>, with the additional fact that, usually, the graph representing the network is a time-dependent graph.<br> I think it is worth reading <a href="http://www.filipyoo.com/multi-objectives-shortest-paths-algorithms-for-multi-transfer-flight-routes/" rel="nofollow noreferrer">this page</a> on multi-objective shortest paths to get an idea of the main techniques used in the field (the notion of a Pareto set, or Pareto frontier, is quite important to grasp for this problem), and even more sections 2 and 4 of <a href="https://arxiv.org/pdf/1504.05140.pdf" rel="nofollow noreferrer">this paper</a>, which describes the actual state of the art regarding such techniques. </p> <p>Despite seeming complex, most of them run incredibly fast (hundreds of thousands of times faster than Dijkstra, and still much faster than any A*-based approach), and some of them are not too hard to implement (for instance, the <a href="https://i11www.iti.kit.edu/extra/publications/dpsw-isftr-13.pdf" rel="nofollow noreferrer">CSA</a> is not too complex, runs pretty fast, and can compute a simple query in a few milliseconds on a country-sized network). </p>
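To make the Pareto-set idea above concrete, here is a minimal label-correcting sketch in Python (my own illustration, not the CSA; the edge-list format and names are assumptions). Each vertex keeps the set of (cost, duration) labels not dominated by any other known path to it:

from collections import defaultdict, deque

def pareto_paths(edges, source):
    # edges: iterable of (u, v, cost, duration) directed edges
    graph = defaultdict(list)
    for u, v, cost, dur in edges:
        graph[u].append((v, cost, dur))

    labels = defaultdict(set)            # vertex -> non-dominated (cost, duration) labels
    labels[source].add((0, 0))
    queue = deque([(source, 0, 0)])

    def dominated(c, d, existing):
        # a candidate is dominated if some known label is no worse on both criteria
        return any(c2 <= c and d2 <= d for (c2, d2) in existing)

    while queue:
        u, c, d = queue.popleft()
        for v, ec, ed in graph[u]:
            nc, nd = c + ec, d + ed
            if not dominated(nc, nd, labels[v]):
                # drop labels at v that the new label dominates, then keep it
                labels[v] = {(c2, d2) for (c2, d2) in labels[v] if not (nc <= c2 and nd <= d2)}
                labels[v].add((nc, nd))
                queue.append((v, nc, nd))
    return labels

edges = [("A", "B", 2, 10), ("A", "B", 5, 3), ("B", "C", 1, 1)]
print(pareto_paths(edges, "A")["C"])     # {(3, 11), (6, 4)}: neither dominates the other

Answering the original question (fastest path under a cost cap) then amounts to scanning the Pareto set of the target vertex for the smallest duration whose cost fits the budget.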
2019-04-23 12:33:19.597000+00:00
2019-04-23 12:33:19.597000+00:00
null
null
55,802,336
<p>So suppose you are searching for a train ride. You would be interested in the price of the ride and also the amount of time the ride will take. Now suppose that you have a graph where each edge has a cost and a duration and you want to find the shortest-duration path in the graph that doesn't go over a given maximum cost (there might be multiple edges between any two vertices).</p> <p>This is the problem I have. I think the best way to approach this problem is to modify the Bellman-Ford algorithm.</p> <p>This is what I have so far:</p> <pre><code> // A struct to represent an Edge in the graph struct Edge { int source, dest, cost, duration; }; // A struct to represent a connected, directed // and weighted graph struct Graph { int V, E; struct Edge* edge; }; // Creates a Graph with V vertices and E edges struct Graph* createGraph(int V, int E) { struct Graph* graph = new (struct Graph); graph -&gt; V = V; graph -&gt; E = E; graph -&gt; edge = new Edge[E]; return graph; } </code></pre> <p>I have already filled the structs with all the information they need. Now I just need to "organize" the data based on cost. So I realize that for each vertex I need to store a list of paths that lead to it. For each edge I consider, I'll need to copy the paths from the first vertex's list to the second vertex's list (adding cost and distance). But how do I actually go about coding this? That's the part I am stuck on. </p>
2019-04-22 23:27:57.550000+00:00
2019-04-23 23:59:48.697000+00:00
2019-04-23 23:59:48.697000+00:00
c++|algorithm|bellman-ford
['http://www.filipyoo.com/multi-objectives-shortest-paths-algorithms-for-multi-transfer-flight-routes/', 'https://arxiv.org/pdf/1504.05140.pdf', 'https://i11www.iti.kit.edu/extra/publications/dpsw-isftr-13.pdf']
3
29,853,535
<p>Yes, the <a href="http://eclipseclp.org" rel="noreferrer" title="ECLiPSe">ECLiPSe</a> system does this.</p> <p>As you suggest, it takes into account a number of simple built-in predicates (such as <code>integer/1, =/2, !/0</code>) for indexing purposes. Your example then executes deterministically, without choicepoints, for all calls of <code>foo/2</code> with the first argument instantiated. Moreover, ECLiPSe would do this on any argument, not just the first.</p> <p>You can find a little more detail in the paper <a href="http://arxiv.org/abs/1012.4240" rel="noreferrer" title="ECLiPSe TPLP">ECLiPSe - from LP to CLP</a>.</p> <p>To answer your followup question: No extra VM features are necessary, the generated VM code looks like this:</p> <pre><code>foo / 2: switch_on_type a(1) list: ref(L5) structure: ref(L1) bignum: ref(L7) []: ref(L4) integer: ref(L7) meta: ref(L0) label(L0): try 0 2 ref(L1) retry 0 ref(L3) trust 0 ref(L5) label(L1): get_structure a(1) h / 1 ref(L2) write_value a(2) ret label(L2): read_value a(2) ret label(L3): get_nil a(1) label(L4): get_atom a(2) nil ret label(L5): get_list a(1) ref(L6) write_void 2 label(L6): get_atom a(2) cons ret label(L7): get_structure a(2) n / 1 ref(L8) write_value a(1) ret label(L8): read_value a(1) ret </code></pre>
2015-04-24 17:11:33.927000+00:00
2015-04-25 17:04:36.530000+00:00
2015-04-25 17:04:36.530000+00:00
null
29,605,132
<p>I want to know how smart first argument indexing is implemented on various Prolog implementations.</p> <p>In particular, simple type-test goals like <code>integer/1</code> right after a clause "neck" <em>could</em> contribute to better indexing. Consider:</p> <pre><code>foo(h(X),X). foo([],nil). foo([_|_],cons). foo(X,Y) :- integer(X), Y = n(X). </code></pre> <p>With this clause ordering I would like the goal <code>foo([],_)</code> to succeed <strong>without</strong> leaving any useless choicepoints.</p> <p>Unfortunately, SWI Prolog does not figure it out:</p> <pre><code>?- length(Xs,10), maplist(=([]),Xs), statistics(trailused,T1), maplist(foo,Xs,Ys), statistics(trailused,T2). T1 = 5792, T2 = 5968, Xs = [[], [], [], [], [], [], [], [], [], []], Ys = [nil, nil, nil, nil, nil, nil, nil, nil, nil, nil] ... </code></pre> <p>Do other Prolog implementations do better?</p>
2015-04-13 12:18:26.433000+00:00
2019-01-27 16:03:04.750000+00:00
2015-04-13 17:12:52.820000+00:00
indexing|prolog
['http://eclipseclp.org', 'http://arxiv.org/abs/1012.4240']
2
20,955,187
<p>An alternative approach is to use something like generative backpropagation. In this scenario, you train a neural network updating the weights AND the input values. The given values are used as the output values since you can compute an error value directly. This approach has been used in dimensionality reduction, matrix completion (missing value imputation) among other applications. For more information, see <a href="http://bioinformatics.oxfordjournals.org/cgi/reprint/21/20/3887" rel="nofollow">non-linear principal component analysis (NLPCA)</a> and <a href="http://arxiv.org/abs/1312.5394" rel="nofollow">unsupervised backpropagation (UBP)</a> which uses the idea of generative backpropagation. UBP extends NLPCA by introducing a pre-training stage. An implementation of UBP and NLPCA and unsupervised backpropagation can be found in the waffles machine learning toolkit. The documentation for UBP and NLPCA can be found using the nlpca command.</p>
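A tiny PyTorch sketch of the core idea of updating the inputs as well as the weights (my own illustration, not the waffles/UBP or NLPCA code): the latent inputs are free parameters optimized jointly with the decoder so that the observed values are reproduced.

import torch
import torch.nn.functional as F

n_samples, latent_dim, observed_dim = 100, 2, 10
observed = torch.randn(n_samples, observed_dim)            # the given values play the role of the outputs
latent = torch.nn.Parameter(0.1 * torch.randn(n_samples, latent_dim))
decoder = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, observed_dim),
)
opt = torch.optim.Adam(list(decoder.parameters()) + [latent], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = F.mse_loss(decoder(latent), observed)
    loss.backward()                                        # gradients flow into the weights AND the latent inputs
    opt.step()
print(loss.item())                                         # latent now holds a learned low-dimensional representation

After training, the latent matrix is the low-dimensional representation of each sample, which is how this family of methods can be used for dimensionality reduction or missing-value imputation.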
2014-01-06 17:02:27.733000+00:00
2014-01-06 17:02:27.733000+00:00
null
null
15,514,618
<p>I'm having trouble with some of the concepts in machine learning through neural networks. One of them is <a href="http://en.wikipedia.org/wiki/Delta_Rule" rel="noreferrer">backpropagation</a>. In the weight updating equation, </p> <pre><code>delta_w = a*(t - y)*g'(h)*x </code></pre> <p><code>t</code> is the "target output", which would be your class label, or something, in the case of supervised learning. But what would the "target output" be for unsupervised learning?</p> <p>Can someone kindly provide an example of how you'd use BP in unsupervised learning, specifically for clustering of classification?</p> <p>Thanks in advance.</p>
2013-03-20 03:12:20.327000+00:00
2019-04-02 08:42:01.440000+00:00
2017-04-19 04:50:59.180000+00:00
machine-learning|neural-network|unsupervised-learning
['http://bioinformatics.oxfordjournals.org/cgi/reprint/21/20/3887', 'http://arxiv.org/abs/1312.5394']
2
15,514,709
<p>The most common thing to do is train <a href="http://en.wikipedia.org/wiki/Autoencoder" rel="noreferrer">an autoencoder</a>, where the desired outputs are equal to the inputs. This makes the network try to learn a representation that best "compresses" the input distribution.</p> <p><a href="http://www.freepatentsonline.com/5590218.html" rel="noreferrer">Here's a patent</a> describing a different approach, where the output labels are assigned randomly and then sometimes flipped based on convergence rates. It seems weird to me, but okay.</p> <p>I'm not familiar with other methods that use backpropagation for clustering or other unsupervised tasks. Clustering approaches with ANNs seem to use other algorithms (<a href="http://arxiv.org/pdf/cs/0608115.pdf" rel="noreferrer">example 1</a>, <a href="http://www.rimtengg.com/coit2007/proceedings/pdfs/40.pdf" rel="noreferrer">example 2</a>).</p>
2013-03-20 03:22:39.767000+00:00
2013-03-20 03:22:39.767000+00:00
null
null
15,514,618
<p>I'm having trouble with some of the concepts in machine learning through neural networks. One of them is <a href="http://en.wikipedia.org/wiki/Delta_Rule" rel="noreferrer">backpropagation</a>. In the weight updating equation, </p> <pre><code>delta_w = a*(t - y)*g'(h)*x </code></pre> <p><code>t</code> is the "target output", which would be your class label, or something, in the case of supervised learning. But what would the "target output" be for unsupervised learning?</p> <p>Can someone kindly provide an example of how you'd use BP in unsupervised learning, specifically for clustering of classification?</p> <p>Thanks in advance.</p>
2013-03-20 03:12:20.327000+00:00
2019-04-02 08:42:01.440000+00:00
2017-04-19 04:50:59.180000+00:00
machine-learning|neural-network|unsupervised-learning
['http://en.wikipedia.org/wiki/Autoencoder', 'http://www.freepatentsonline.com/5590218.html', 'http://arxiv.org/pdf/cs/0608115.pdf', 'http://www.rimtengg.com/coit2007/proceedings/pdfs/40.pdf']
4
47,311,604
<p>The fundamental difficulty when it comes to adding new users to your system is that you need retraining to be able to give meaningful predictions to new users. Even if you were able to dynamically resize the embedding matrices, what values would you use for the parameters describing the new user?</p> <p>Taking this into account, you have a couple of options.</p> <ol> <li>Save the weights of the graph, then create a new graph with adjusted dimensions and retrain it on data that includes information on the new user. As you say, this may be too costly to be in your critical path.</li> <li>Use some sort of fold-in approach. For example, you could initialise the new user's embedding using the average of embeddings of users that have interacted with similar items.</li> <li>Use a model that doesn't have this problem and that can incorporate new users in a more natural manner.</li> </ol> <p>My recommendation would be the third option. There are classes of models that take the sequence (or set) of user interactions directly when making predictions, and do not rely on you declaring the number of users ahead of time. For example, you could use one of the following:</p> <ol> <li>The <a href="https://pdfs.semanticscholar.org/6c02/053805434162e0fed26e1d5e035eb1071249.pdf" rel="nofollow noreferrer">AutoRec</a> model: a simple autoencoder model that takes the set of items the user has interacted with as the input.</li> <li><a href="https://arxiv.org/pdf/1511.06939.pdf" rel="nofollow noreferrer">Session-based Recommendations with Recurrent Neural Networks</a>: a recurrent model that takes as input the sequence of user interactions at prediction time.</li> </ol> <p>Both models naturally handle new users without changes to the computation graph; adding new items will require re-training.</p> <p>One implementation for the first class of models is <a href="https://github.com/mesuvash/NNRec" rel="nofollow noreferrer">here</a>; for the second class, check out my recommender system package <a href="https://maciejkula.github.io/spotlight/sequence/implicit.html" rel="nofollow noreferrer">Spotlight</a>.</p>
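A rough numpy sketch of the fold-in idea from option 2 above (the binary interaction matrix, the choice of 10 neighbours, and all names are my own assumptions):

import numpy as np

def fold_in_new_user(user_embeddings, interactions, new_user_items, n_neighbours=10):
    # interactions: binary (num_users, num_items) matrix of past user-item interactions
    overlap = interactions[:, new_user_items].sum(axis=1)       # items shared with the new user
    similar = np.argsort(overlap)[-n_neighbours:]               # most similar existing users
    return user_embeddings[similar].mean(axis=0)                # average their embeddings

user_embeddings = np.random.rand(1000, 32)
interactions = (np.random.rand(1000, 500) < 0.05).astype(int)
new_embedding = fold_in_new_user(user_embeddings, interactions, new_user_items=[3, 7, 42])
print(new_embedding.shape)                                      # (32,)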
2017-11-15 15:43:49.173000+00:00
2017-11-15 15:43:49.173000+00:00
null
null
47,272,031
<p>I have a TensorFlow recommendation system based off <a href="https://github.com/songgc/TF-recomm" rel="nofollow noreferrer"><code>TF-recomm</code></a>. Each user has <code>1+numFactors</code> numbers associated with her: a vector of <code>numFactors</code>, and an offset of a single number. Each task also has a bias and a vector of <code>numFactors</code> assigned. The TF-recomm code is</p> <pre><code>def inference_svd(user_batch, item_batch, user_num, item_num, dim=5): bias_global = tf.get_variable("bias_global", shape=[]) w_bias_user = tf.get_variable("embd_bias_user", shape=[user_num]) w_bias_item = tf.get_variable("embd_bias_item", shape=[item_num]) bias_user = tf.nn.embedding_lookup(w_bias_user, user_batch, name="bias_user") bias_item = tf.nn.embedding_lookup(w_bias_item, item_batch, name="bias_item") w_user = tf.get_variable("embd_user", shape=[user_num, dim], initializer=tf.truncated_normal_initializer(stddev=0.02)) w_item = tf.get_variable("embd_item", shape=[item_num, dim], initializer=tf.truncated_normal_initializer(stddev=0.02)) embd_user = tf.nn.embedding_lookup(w_user, user_batch, name="embedding_user") embd_item = tf.nn.embedding_lookup(w_item, item_batch, name="embedding_item") infer = tf.reduce_sum(tf.multiply(embd_user, embd_item), 1) infer = tf.add(infer, bias_global) infer = tf.add(infer, bias_user) infer = tf.add(infer, bias_item, name="svd_inference") regularizer = tf.add(tf.nn.l2_loss(embd_user), tf.nn.l2_loss(embd_item), name="svd_regularizer") return infer, regularizer </code></pre> <p>I have been able to get this code to work, and have been able to link it up with a REST-API. </p> <p>The problem that I encounter is when I get new users. I know what I want to do:</p> <ul> <li>Add a row to the <code>bias_user</code>, initialized to 0</li> <li>Add a row to the <code>embd_user</code>, initialized to 0</li> <li>When users rate new items, we use the same graph but <code>freeze</code> the weights on the items (which I can do with <code>var_list</code> on <code>optimizer.minimize</code>)</li> </ul> <p>However, the weights and biases have their shapes declared ahead of time. All the material I have seen on tensorflow (running or deploying) allows the weights to change, but doesn't seem to allow the network to grow.</p> <p>If I implemented this in <code>numpy</code> I would simply add new rows to the appropriate matrices. There are a couple of ways of doing this, such as creating new graphs and variables, but it seems best to reuse the graph used to train the model in the first place (to ensure consistency). </p> <p>I am looking for a system of "best practices" for dealing with changing the size of embedding tensors, especially for a system that is online where it will have to serve predictions quickly (which prevents expensive operations).</p>
2017-11-13 19:26:54.860000+00:00
2017-11-15 15:43:49.173000+00:00
null
python|tensorflow|recommendation-engine
['https://pdfs.semanticscholar.org/6c02/053805434162e0fed26e1d5e035eb1071249.pdf', 'https://arxiv.org/pdf/1511.06939.pdf', 'https://github.com/mesuvash/NNRec', 'https://maciejkula.github.io/spotlight/sequence/implicit.html']
4
65,950,643
<p>Here is a more efficient and more stable implementation. Assuming <code>zi</code> and <code>zj</code> are interlaced!</p> <pre><code>class NT_Xent(tf.keras.layers.Layer): &quot;&quot;&quot; Normalized temperature-scaled CrossEntropy loss [1] [1] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” arXiv. 2020, Accessed: Jan. 15, 2021. [Online]. Available: https://github.com/google-research/simclr. &quot;&quot;&quot; def __init__(self, tau=1, **kwargs): super().__init__(**kwargs) self.tau = tau self.similarity = tf.keras.losses.CosineSimilarity(axis=-1, reduction=tf.keras.losses.Reduction.NONE) self.criterion = tf.keras.losses.CategoricalCrossentropy(from_logits=True) def get_config(self): return {&quot;tau&quot;: self.tau} def call(self, zizj): &quot;&quot;&quot; zizj is [B,N] tensor with order z_i1 z_j1 z_i2 z_j2 z_i3 z_j3 ... batch_size is twice the original batch_size &quot;&quot;&quot; batch_size = tf.shape(zizj)[0] mask = tf.repeat(tf.repeat(~tf.eye(batch_size/2, dtype=tf.bool), 2, axis=0), 2, axis=1) sim = -1*self.similarity(tf.expand_dims(zizj, 1), tf.expand_dims(zizj, 0))/self.tau sim_i_j = -1*self.similarity(zizj[0::2], zizj[1::2])/self.tau pos = tf.reshape(tf.repeat(sim_i_j, repeats=2), (batch_size, -1)) neg = tf.reshape(sim[mask], (batch_size, -1)) logits = tf.concat((pos, neg), axis=-1) labels = tf.one_hot(tf.zeros((batch_size,), dtype=tf.int32), depth=batch_size-1) return self.criterion(labels, logits) </code></pre> <p>source: <a href="https://github.com/gabriel-vanzandycke/tf_layers" rel="nofollow noreferrer">https://github.com/gabriel-vanzandycke/tf_layers</a></p>
2021-01-29 07:55:30.137000+00:00
2021-12-27 15:29:01.250000+00:00
2021-12-27 15:29:01.250000+00:00
null
62,793,043
<p>As the title suggests, I'm trying train a model based on the SimCLR framework (seen in this paper: <a href="https://arxiv.org/pdf/2002.05709.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2002.05709.pdf</a> - the NT_Xent loss is stated in equation (1) and Algorithm 1).</p> <p>I have managed to create a numpy version of the loss function, but this is not suitable to train the model on, as numpy arrays cannot store the required information for back propagation. I am having difficulty converting my numpy code over to Tensorflow. Here is my numpy version:</p> <pre><code>import numpy as np from sklearn.metrics.pairwise import cosine_similarity # Define the contrastive loss function, NT_Xent def NT_Xent(zi, zj, tau=1): &quot;&quot;&quot; Calculates the contrastive loss of the input data using NT_Xent. The equation can be found in the paper: https://arxiv.org/pdf/2002.05709.pdf Args: zi: One half of the input data, shape = (batch_size, feature_1, feature_2, ..., feature_N) zj: Other half of the input data, must have the same shape as zi tau: Temperature parameter (a constant), default = 1. Returns: loss: The complete NT_Xent constrastive loss &quot;&quot;&quot; z = np.concatenate((zi, zj), 0) loss = 0 for k in range(zi.shape[0]): # Numerator (compare i,j &amp; j,i) i = k j = k + zi.shape[0] sim_ij = np.squeeze(cosine_similarity(z[i].reshape(1, -1), z[j].reshape(1, -1))) sim_ji = np.squeeze(cosine_similarity(z[j].reshape(1, -1), z[i].reshape(1, -1))) numerator_ij = np.exp(sim_ij / tau) numerator_ji = np.exp(sim_ji / tau) # Denominator (compare i &amp; j to all samples apart from themselves) sim_ik = np.squeeze(cosine_similarity(z[i].reshape(1, -1), z[np.arange(z.shape[0]) != i])) sim_jk = np.squeeze(cosine_similarity(z[j].reshape(1, -1), z[np.arange(z.shape[0]) != j])) denominator_ik = np.sum(np.exp(sim_ik / tau)) denominator_jk = np.sum(np.exp(sim_jk / tau)) # Calculate individual and combined losses loss_ij = - np.log(numerator_ij / denominator_ik) loss_ji = - np.log(numerator_ji / denominator_jk) loss += loss_ij + loss_ji # Divide by the total number of samples loss /= z.shape[0] return loss </code></pre> <p>I am fairly confident that this function produces the correct results (albeit slowly, as I have seen other implementations of it online that were vectorised versions - such as this one for Pytorch: <a href="https://github.com/Spijkervet/SimCLR/blob/master/modules/nt_xent.py" rel="nofollow noreferrer">https://github.com/Spijkervet/SimCLR/blob/master/modules/nt_xent.py</a> (my code produces the same result for identical inputs), but I do not see how their version is mathematically equivalent to the formula in the paper, hence why I am trying to build my own).</p> <p>As a first try I have converted the numpy functions to their TF equivalents (tf.concat, tf.reshape, tf.math.exp, tf.range, etc.), but I believe my only/main problem is that sklearn's cosine_similarity function returns a numpy array, and I do not know how to build this function myself in Tensorflow. Any ideas?</p>
2020-07-08 10:45:50.177000+00:00
2021-12-27 15:29:01.250000+00:00
null
python|tensorflow|scikit-learn|backpropagation|cosine-similarity
['https://github.com/gabriel-vanzandycke/tf_layers']
1
62,793,878
<p>I managed to figure it out myself! I did not realise there was a Tensorflow implementation of the cosine similarity function &quot;tf.keras.losses.CosineSimilarity&quot;</p> <p>Here is my code:</p> <pre><code>import tensorflow as tf # Define the contrastive loss function, NT_Xent (Tensorflow version) def NT_Xent_tf(zi, zj, tau=1): &quot;&quot;&quot; Calculates the contrastive loss of the input data using NT_Xent. The equation can be found in the paper: https://arxiv.org/pdf/2002.05709.pdf (This is the Tensorflow implementation of the standard numpy version found in the NT_Xent function). Args: zi: One half of the input data, shape = (batch_size, feature_1, feature_2, ..., feature_N) zj: Other half of the input data, must have the same shape as zi tau: Temperature parameter (a constant), default = 1. Returns: loss: The complete NT_Xent constrastive loss &quot;&quot;&quot; z = tf.cast(tf.concat((zi, zj), 0), dtype=tf.float32) loss = 0 for k in range(zi.shape[0]): # Numerator (compare i,j &amp; j,i) i = k j = k + zi.shape[0] # Instantiate the cosine similarity loss function cosine_sim = tf.keras.losses.CosineSimilarity(axis=-1, reduction=tf.keras.losses.Reduction.NONE) sim = tf.squeeze(- cosine_sim(tf.reshape(z[i], (1, -1)), tf.reshape(z[j], (1, -1)))) numerator = tf.math.exp(sim / tau) # Denominator (compare i &amp; j to all samples apart from themselves) sim_ik = - cosine_sim(tf.reshape(z[i], (1, -1)), z[tf.range(z.shape[0]) != i]) sim_jk = - cosine_sim(tf.reshape(z[j], (1, -1)), z[tf.range(z.shape[0]) != j]) denominator_ik = tf.reduce_sum(tf.math.exp(sim_ik / tau)) denominator_jk = tf.reduce_sum(tf.math.exp(sim_jk / tau)) # Calculate individual and combined losses loss_ij = - tf.math.log(numerator / denominator_ik) loss_ji = - tf.math.log(numerator / denominator_jk) loss += loss_ij + loss_ji # Divide by the total number of samples loss /= z.shape[0] return loss </code></pre> <p>As you can see, I have essentially just swapped out the numpy functions for the TF equivalents. One main point of note is that I had to use &quot;reduction=tf.keras.losses.Reduction.NONE&quot; within the &quot;cosine_sim&quot; function, this was to keep the shapes consistent in the &quot;sim_ik&quot; and &quot;sim_jk&quot;, because otherwise the resulting loss did not match up with my original numpy implementation.</p> <p>I also noticed that individually calculating the numerator for i,j and j,i was redundant as the answers were the same, so I have removed one instance of that calculation.</p> <p>Of course if anybody has a quicker implementation I am more than happy to hear about it!</p>
2020-07-08 11:33:39.013000+00:00
2020-07-08 11:33:39.013000+00:00
null
null
62,793,043
<p>As the title suggests, I'm trying train a model based on the SimCLR framework (seen in this paper: <a href="https://arxiv.org/pdf/2002.05709.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2002.05709.pdf</a> - the NT_Xent loss is stated in equation (1) and Algorithm 1).</p> <p>I have managed to create a numpy version of the loss function, but this is not suitable to train the model on, as numpy arrays cannot store the required information for back propagation. I am having difficulty converting my numpy code over to Tensorflow. Here is my numpy version:</p> <pre><code>import numpy as np from sklearn.metrics.pairwise import cosine_similarity # Define the contrastive loss function, NT_Xent def NT_Xent(zi, zj, tau=1): &quot;&quot;&quot; Calculates the contrastive loss of the input data using NT_Xent. The equation can be found in the paper: https://arxiv.org/pdf/2002.05709.pdf Args: zi: One half of the input data, shape = (batch_size, feature_1, feature_2, ..., feature_N) zj: Other half of the input data, must have the same shape as zi tau: Temperature parameter (a constant), default = 1. Returns: loss: The complete NT_Xent constrastive loss &quot;&quot;&quot; z = np.concatenate((zi, zj), 0) loss = 0 for k in range(zi.shape[0]): # Numerator (compare i,j &amp; j,i) i = k j = k + zi.shape[0] sim_ij = np.squeeze(cosine_similarity(z[i].reshape(1, -1), z[j].reshape(1, -1))) sim_ji = np.squeeze(cosine_similarity(z[j].reshape(1, -1), z[i].reshape(1, -1))) numerator_ij = np.exp(sim_ij / tau) numerator_ji = np.exp(sim_ji / tau) # Denominator (compare i &amp; j to all samples apart from themselves) sim_ik = np.squeeze(cosine_similarity(z[i].reshape(1, -1), z[np.arange(z.shape[0]) != i])) sim_jk = np.squeeze(cosine_similarity(z[j].reshape(1, -1), z[np.arange(z.shape[0]) != j])) denominator_ik = np.sum(np.exp(sim_ik / tau)) denominator_jk = np.sum(np.exp(sim_jk / tau)) # Calculate individual and combined losses loss_ij = - np.log(numerator_ij / denominator_ik) loss_ji = - np.log(numerator_ji / denominator_jk) loss += loss_ij + loss_ji # Divide by the total number of samples loss /= z.shape[0] return loss </code></pre> <p>I am fairly confident that this function produces the correct results (albeit slowly, as I have seen other implementations of it online that were vectorised versions - such as this one for Pytorch: <a href="https://github.com/Spijkervet/SimCLR/blob/master/modules/nt_xent.py" rel="nofollow noreferrer">https://github.com/Spijkervet/SimCLR/blob/master/modules/nt_xent.py</a> (my code produces the same result for identical inputs), but I do not see how their version is mathematically equivalent to the formula in the paper, hence why I am trying to build my own).</p> <p>As a first try I have converted the numpy functions to their TF equivalents (tf.concat, tf.reshape, tf.math.exp, tf.range, etc.), but I believe my only/main problem is that sklearn's cosine_similarity function returns a numpy array, and I do not know how to build this function myself in Tensorflow. Any ideas?</p>
2020-07-08 10:45:50.177000+00:00
2021-12-27 15:29:01.250000+00:00
null
python|tensorflow|scikit-learn|backpropagation|cosine-similarity
[]
0
69,420,043
<p>Hyperparameter tuning is typically done on the validation set of a train-val-test split, where each split will have something along the lines of 70%, 10%, and 20% of the entire dataset respectively. As a baseline, random search can be used while <a href="https://arxiv.org/abs/1206.2944" rel="nofollow noreferrer">Bayesian optimization with Gaussian processes</a> has been shown to be more compute efficient. <a href="https://scikit-optimize.github.io/stable/auto_examples/bayesian-optimization.html" rel="nofollow noreferrer">scikit-optimize</a> is a good package for this.</p>
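As a sketch of what a scikit-optimize run might look like (the search space, the dummy objective, and all names are assumptions; in practice the objective would train on the training split and return the validation loss):

from skopt import gp_minimize
from skopt.space import Real, Integer

space = [Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
         Integer(16, 256, name="hidden_units")]

def objective(params):
    learning_rate, hidden_units = params
    # placeholder: train on the training split, evaluate on the validation split,
    # and return the validation loss for these hyperparameters
    return (learning_rate - 0.01) ** 2 + abs(hidden_units - 64) / 1000.0

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print(result.x, result.fun)       # best hyperparameters and best (lowest) objective value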
2021-10-02 20:29:58.507000+00:00
2021-10-02 20:29:58.507000+00:00
null
null
69,419,809
<p>I almost finished my time series model, collected enough data and now I am stuck at hyperparameter optimization.</p> <p>After lots of googling I found a new and good library called ultraopt, but the problem is how large a fragment of my total data (~150 GB) I should use for hyperparameter tuning. And I want to try lots of algorithms and combinations; is there a faster and easier way?</p> <p>Or</p> <p>Is there any math involved, something like, mydata = 100%size</p> <p>hyperparameter optimization with 5% of mydatasize,</p> <p>optimized hyperparameter *or+ or something with left 95% of datasize #something like this</p> <p>To get a similar result as if the full data were used for optimization at once. Is there any shortcut for these?</p> <p>I am using Python 3.7, CPU: AMD ryzen5 3400g, GPU: AMD Vega 11, RAM: 16 GB</p>
2021-10-02 19:55:56.360000+00:00
2021-10-03 08:58:59.760000+00:00
2021-10-03 08:58:59.760000+00:00
python|performance|machine-learning|large-data|hyperparameters
['https://arxiv.org/abs/1206.2944', 'https://scikit-optimize.github.io/stable/auto_examples/bayesian-optimization.html']
2
69,420,445
<p>A good Python library for hyper-parameter tuning is <a href="https://arxiv.org/pdf/1603.06560.pdf" rel="nofollow noreferrer"><code>keras tuner</code></a>. You can utilize different tuners in this library, but for large data such as yours, <a href="https://arxiv.org/pdf/1603.06560.pdf" rel="nofollow noreferrer"><code>Hyperband Optimization</code></a> can be a state-of-the-art and appropriate choice.</p>
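A minimal sketch of what a Hyperband search with keras-tuner might look like (the model, the search space, and the random data are placeholders; older releases use the import name kerastuner instead of keras_tuner):

import numpy as np
import tensorflow as tf
import keras_tuner as kt

x_train, y_train = np.random.rand(256, 20), np.random.rand(256, 1)
x_val, y_val = np.random.rand(64, 20), np.random.rand(64, 1)

def build_model(hp):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(hp.Int("units", min_value=32, max_value=256, step=32), activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    lr = hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model

tuner = kt.Hyperband(build_model, objective="val_loss", max_epochs=20,
                     directory="kt_dir", project_name="ts_model")
tuner.search(x_train, y_train, validation_data=(x_val, y_val))
best_hp = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hp.values)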
2021-10-02 21:37:08.380000+00:00
2021-10-02 21:37:08.380000+00:00
null
null
69,419,809
<p>I almost finished my time series model, collected enough data and now I am stuck at hyperparameter optimization.</p> <p>After lots of googling I found a new and good library called ultraopt, but the problem is how large a fragment of my total data (~150 GB) I should use for hyperparameter tuning. And I want to try lots of algorithms and combinations; is there a faster and easier way?</p> <p>Or</p> <p>Is there any math involved, something like, mydata = 100%size</p> <p>hyperparameter optimization with 5% of mydatasize,</p> <p>optimized hyperparameter *or+ or something with left 95% of datasize #something like this</p> <p>To get a similar result as if the full data were used for optimization at once. Is there any shortcut for these?</p> <p>I am using Python 3.7, CPU: AMD ryzen5 3400g, GPU: AMD Vega 11, RAM: 16 GB</p>
2021-10-02 19:55:56.360000+00:00
2021-10-03 08:58:59.760000+00:00
2021-10-03 08:58:59.760000+00:00
python|performance|machine-learning|large-data|hyperparameters
['https://arxiv.org/pdf/1603.06560.pdf', 'https://arxiv.org/pdf/1603.06560.pdf']
2
64,588,418
<p>Remember that fine-tuning a pre-trained model like Bert usually requires a much smaller number of epochs than models trained from scratch. In fact <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="nofollow noreferrer">the authors of Bert recommend between 2 and 4 epochs</a>. Further training often translates to overfitting to your data and forgetting the pre-trained weights (see <em>catastrophic forgetting</em>).</p> <p>In my experience, this affects small datasets especially as it's easy to overfit on them, even at the 2nd epoch. Besides, you haven't commented on your custom layers on top of Bert, but adding much complexity there might increase overfitting also -- note that the common architecture for text classification only adds a linear transformation.</p>
2020-10-29 09:37:58.480000+00:00
2020-10-29 09:37:58.480000+00:00
null
null
63,096,908
<p>I'm training a classification model with custom layers on top of BERT. During this, the training performance of this model is going down with increasing epochs ( after the first epoch ) .. I'm not sure what to fix here - is it the model or the data?</p> <p>( for the data it's binary labels, and balanced in the number of data points for each label).</p> <p>Any quick pointers on what the problem could be? Has anyone come across this before?</p> <p>Edit: Turns out there was a mismatch in the transformers library and tf version I was using. Once I fixed that, the training performance was fine!</p> <p>Thanks!</p>
2020-07-26 06:36:45.657000+00:00
2020-11-03 23:43:59.867000+00:00
2020-11-03 23:43:59.867000+00:00
tensorflow|machine-learning|nlp|language-model
['https://arxiv.org/pdf/1810.04805.pdf']
1
52,443,301
<p>In the context of autoencoders the input and output of the model are the same. So, if the input values are in the range [0,1] then it is acceptable to use <code>sigmoid</code> as the activation function of the last layer. Otherwise, you need to use an appropriate activation function for the last layer (e.g. <code>linear</code> which is the default one).</p> <p>As for the loss function, it comes back to the values of the input data again. If the input data are <s>only</s> between zeros and ones <s>(and not the values between them)</s>, then <code>binary_crossentropy</code> is acceptable as the loss function. Otherwise, you need to use other loss functions such as <code>'mse'</code> (i.e. mean squared error) or <code>'mae'</code> (i.e. mean absolute error). Note that in the case of input values in range <code>[0,1]</code> you can use <code>binary_crossentropy</code>, as it is usually used (e.g. <a href="https://blog.keras.io/building-autoencoders-in-keras.html" rel="noreferrer">Keras autoencoder tutorial</a> and <a href="https://arxiv.org/abs/1708.08487" rel="noreferrer">this paper</a>). However, don't expect that the loss value becomes zero since <code>binary_crossentropy</code> does not return zero when both prediction and label are not either zero or one (no matter whether they are equal or not). <a href="https://www.youtube.com/watch?v=xTU79Zs4XKY" rel="noreferrer">Here</a> is a video from <a href="http://www.dmi.usherb.ca/~larocheh/index_en.html" rel="noreferrer">Hugo Larochelle</a> where he explains the loss functions used in autoencoders (the part about using <code>binary_crossentropy</code> with inputs in range [0,1] starts at <a href="https://youtu.be/xTU79Zs4XKY?t=330" rel="noreferrer">5:30</a>).</p> <hr> <p><strong>Why can <code>binary_crossentropy</code> be used even when the true label values (i.e. ground-truth) are in the range [0,1]?</strong></p> <p>Note that we are trying to minimize the loss function in training. So if the loss function we have used reaches its minimum value (which may not necessarily be equal to zero) when the prediction is equal to the true label, then it is an acceptable choice. Let's verify this is the case for binary cross-entropy which is defined as follows:</p> <pre><code>bce_loss = -y*log(p) - (1-y)*log(1-p) </code></pre> <p>where <code>y</code> is the true label and <code>p</code> is the predicted value. Let's consider <code>y</code> as fixed and see what value of <code>p</code> minimizes this function: we need to take the derivative with respect to <code>p</code> (I have assumed the <code>log</code> is the natural logarithm function for simplicity of calculations):</p> <pre><code>bce_loss_derivative = -y*(1/p) - (1-y)*(-1/(1-p)) = 0 =&gt; -y/p + (1-y)/(1-p) = 0 =&gt; -y*(1-p) + (1-y)*p = 0 =&gt; -y + y*p + p - y*p = 0 =&gt; p - y = 0 =&gt; y = p </code></pre> <p>As you can see, binary cross-entropy has its minimum value when <code>y=p</code>, i.e. when the true label is equal to the predicted label, and this is exactly what we are looking for. </p>
2018-09-21 11:58:45.940000+00:00
2019-06-25 07:22:23.607000+00:00
2019-06-25 07:22:23.607000+00:00
null
52,441,877
<p>I wrote a vanilla autoencoder using only <code>Dense</code> layer. Below is my code:</p> <pre class="lang-py prettyprint-override"><code>iLayer = Input ((784,)) layer1 = Dense(128, activation='relu' ) (iLayer) layer2 = Dense(64, activation='relu') (layer1) layer3 = Dense(28, activation ='relu') (layer2) layer4 = Dense(64, activation='relu') (layer3) layer5 = Dense(128, activation='relu' ) (layer4) layer6 = Dense(784, activation='softmax' ) (layer5) model = Model (iLayer, layer6) model.compile(loss='binary_crossentropy', optimizer='adam') (trainX, trainY), (testX, testY) = mnist.load_data() print ("shape of the trainX", trainX.shape) trainX = trainX.reshape(trainX.shape[0], trainX.shape[1]* trainX.shape[2]) print ("shape of the trainX", trainX.shape) model.fit (trainX, trainX, epochs=5, batch_size=100) </code></pre> <h2>Questions:</h2> <p>1) <code>softmax</code> provides probability distribution. Understood. This means, I would have a vector of 784 values with probability between 0 and 1. For example [ 0.02, 0.03..... upto 784 items], summing all 784 elements provides 1. </p> <p>2) I don't understand how the binary crossentropy works with these values. Binary cross entropy is for two values of output, right?</p>
2018-09-21 10:35:59.863000+00:00
2019-06-25 07:22:23.607000+00:00
2018-09-21 12:37:43.647000+00:00
machine-learning|neural-network|keras|autoencoder|cross-entropy
['https://blog.keras.io/building-autoencoders-in-keras.html', 'https://arxiv.org/abs/1708.08487', 'https://www.youtube.com/watch?v=xTU79Zs4XKY', 'http://www.dmi.usherb.ca/~larocheh/index_en.html', 'https://youtu.be/xTU79Zs4XKY?t=330']
5
44,936,598
<p>From what I read, I'd be surprised if they're using neural networks. Here's how they say they detect anomalies:</p> <blockquote> <p>Detect outliers in a population by building a profile of a “typical” user or machine to know when one starts to stray from the pack.</p> </blockquote> <p>Doing anomaly detection like that requires nothing more than a statistical test of whether or not observed behavior is within 2-3 standard deviations of the expected behavior.</p> <p>If you want to use neural networks for some reason, you could go with CNNs, RNNs, or attention-only networks. <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">Google recently showed that you don't need RNNs or CNNs to do state-of-the-art translation</a>.</p>
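That baseline can be as little as a z-score check; a small numpy sketch (the 3-standard-deviation threshold and the synthetic data are assumptions):

import numpy as np

def zscore_anomalies(history, new_values, threshold=3.0):
    # flag observations that stray more than `threshold` standard deviations
    # from the profile built on historical behaviour
    mu, sigma = history.mean(), history.std()
    z = np.abs((new_values - mu) / sigma)
    return new_values[z > threshold]

history = np.random.normal(10, 2, size=10_000)                   # "typical" behaviour
print(zscore_anomalies(history, np.array([9.5, 11.0, 25.0])))    # flags 25.0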
2017-07-05 21:53:38.927000+00:00
2017-07-05 21:53:38.927000+00:00
null
null
44,935,473
<p>I'm very impressed from the new <a href="https://www.elastic.co/products/x-pack/machine-learning" rel="nofollow noreferrer">x-pack</a> ML of the elastic stack. It seems their technique learns data patterns over time and can predict anomalies in multiple domains.</p> <p><a href="https://i.stack.imgur.com/TrGx5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TrGx5.png" alt="enter image description here"></a></p> <p>Zoomed in: <a href="https://i.stack.imgur.com/CfBdJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CfBdJ.png" alt="enter image description here"></a></p> <p>I was wondering what approach and network topology could be used, in order to create a similar feature. Is it fair to assume that since x-pack works on time series data, RNN would be a good start?</p> <p>Interested in your opinions and references.</p>
2017-07-05 20:30:32.270000+00:00
2018-07-23 16:14:58.170000+00:00
2017-07-06 06:39:31.113000+00:00
elasticsearch|machine-learning|anomaly-detection|rnn
['https://arxiv.org/abs/1706.03762']
1
49,606,866
<blockquote> <ol> <li>Use Convolution2D layers and LSTM layer</li> </ol> </blockquote> <p>In this technique, you stack convolution and LSTM layers. The convolutional layers help you to learn the spatial features and the LSTM helps you learn the correlation in time.</p> <blockquote> <p>2.Use ConvLSTM2D</p> </blockquote> <p>ConvLSTM is a LSTM in which the gates (input to state and state to state transitions) are convolution operations.<br> Research paper- <a href="https://arxiv.org/abs/1506.04214" rel="nofollow noreferrer">Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting</a> </p> <p><a href="https://stackoverflow.com/questions/49468918/appplication-of-convlstm2d-layers/49472074#49472074">More about ConvLSTM in this SO answer</a></p>
2018-04-02 07:11:14.890000+00:00
2018-04-11 08:01:36.633000+00:00
2018-04-11 08:01:36.633000+00:00
null
49,603,498
<p>Are <code>1</code> and <code>2</code> the same?</p> <ol> <li>Use <code>Convolution2D</code> layers and <code>LSTM</code> layers </li> <li>Use <code>ConvLSTM2D</code></li> </ol> <p>If there is any difference, could you explain it for me?</p>
2018-04-01 23:14:53.400000+00:00
2018-04-13 16:40:43.003000+00:00
2018-04-13 16:40:43.003000+00:00
python|tensorflow|keras
['https://arxiv.org/abs/1506.04214', 'https://stackoverflow.com/questions/49468918/appplication-of-convlstm2d-layers/49472074#49472074']
2
49,770,553
<p>They are not exactly the same, here is why:</p> <h3>1. Use <code>Convolution2D</code> layers and <code>LSTM</code> layers</h3> <p>As it is known, <code>Convolution2D</code> serves well for capturing image or spatial features, whilst <code>LSTM</code> are used to detect correlations over time. However, by stacking these kind of layers, the correlation between space and time features may not be captured properly.</p> <h3>2. Use <code>ConvLSTM2D</code></h3> <p>To solve this, <a href="https://arxiv.org/abs/1506.04214v1" rel="noreferrer">Xingjian Shi et al.</a> proposed a network structure able to capture spatiotemporal correlations, namely <code>ConvLSTM</code>. In Keras, this is reflected in the <a href="https://keras.io/layers/recurrent/#convlstm2d" rel="noreferrer"><code>ConvLSTM2D</code></a> class, which computes convolutional operations in both the input and the recurrent transformations.</p> <h3>Keras code</h3> <p>Too illustrate this, you can see <a href="https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py" rel="noreferrer">here</a> the <code>LSTM</code> code, if you go to the <code>call</code> method from <code>LSTMCell</code>, you'd only see:</p> <pre class="lang-python prettyprint-override"><code> x_i = K.dot(inputs_i, self.kernel_i) x_f = K.dot(inputs_f, self.kernel_f) x_c = K.dot(inputs_c, self.kernel_c) x_o = K.dot(inputs_o, self.kernel_o) </code></pre> <p>Instead, the <a href="https://github.com/keras-team/keras/blob/master/keras/layers/convolutional_recurrent.py" rel="noreferrer"><code>ConvLSTM2DCell</code></a> class calls:</p> <pre class="lang-python prettyprint-override"><code> x_i = self.input_conv(inputs_i, self.kernel_i, self.bias_i, padding=self.padding) x_f = self.input_conv(inputs_f, self.kernel_f, self.bias_f, padding=self.padding) x_c = self.input_conv(inputs_c, self.kernel_c, self.bias_c, padding=self.padding) x_o = self.input_conv(inputs_o, self.kernel_o, self.bias_o, padding=self.padding) h_i = self.recurrent_conv(h_tm1_i, self.recurrent_kernel_i) h_f = self.recurrent_conv(h_tm1_f, self.recurrent_kernel_f) h_c = self.recurrent_conv(h_tm1_c, self.recurrent_kernel_c) h_o = self.recurrent_conv(h_tm1_o, self.recurrent_kernel_o) </code></pre> <p>Where:</p> <pre class="lang-python prettyprint-override"><code>def input_conv(self, x, w, b=None, padding='valid'): conv_out = K.conv2d(x, w, strides=self.strides, padding=padding, data_format=self.data_format, dilation_rate=self.dilation_rate) if b is not None: conv_out = K.bias_add(conv_out, b, data_format=self.data_format) return conv_out def recurrent_conv(self, x, w): conv_out = K.conv2d(x, w, strides=(1, 1), padding='same', data_format=self.data_format) return conv_out </code></pre> <p>In <code>LSTM</code>, the equivalent for <code>h_x</code> (recurrent transformations) would be:</p> <pre class="lang-python prettyprint-override"><code>K.dot(h_tm1_x, self.recurrent_kernel_x) </code></pre> <p>Instead of <code>ConvLSTM2D</code>'s:</p> <pre class="lang-python prettyprint-override"><code>self.recurrent_conv(h_tm1_x, self.recurrent_kernel_x) </code></pre> <p>These kind of transformations could not be computed with stacked <code>Conv2D</code> and <code>LSTM</code> layers.</p>
2018-04-11 08:46:57.363000+00:00
2018-04-11 08:46:57.363000+00:00
null
null
49,603,498
<p>Are <code>1</code> and <code>2</code> the same?</p> <ol> <li>Use <code>Convolution2D</code> layers and <code>LSTM</code> layers </li> <li>Use <code>ConvLSTM2D</code></li> </ol> <p>If there is any difference, could you explain it for me?</p>
2018-04-01 23:14:53.400000+00:00
2018-04-13 16:40:43.003000+00:00
2018-04-13 16:40:43.003000+00:00
python|tensorflow|keras
['https://arxiv.org/abs/1506.04214v1', 'https://keras.io/layers/recurrent/#convlstm2d', 'https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py', 'https://github.com/keras-team/keras/blob/master/keras/layers/convolutional_recurrent.py']
4
65,766,155
<p>As Ivan already noted you have a class imbalance problem. This can be resolved via:</p> <ol> <li><p><strong>Online hard negative mining:</strong> at each iteration after computing the loss, you can sort all elements in the batch belonging to &quot;no DR&quot; class and keep only the worst <code>k</code>. Then you estimate the gradient only using these worse k and <em>discard</em> all the rest.<br /> see, e.g.:<br /> <em>Abhinav Shrivastava, Abhinav Gupta and Ross Girshick</em> <a href="https://arxiv.org/abs/1604.03540" rel="nofollow noreferrer"><strong>Training Region-based Object Detectors with Online Hard Example Mining</strong></a> (CVPR 2016)</p> </li> <li><p><a href="https://stackoverflow.com/a/52161194/1714410"><strong>Focal loss:</strong></a> a modification for the &quot;vanilla&quot; cross entropy loss can be used to tackle class imbalance.</p> </li> </ol> <hr /> <p>Related posts <a href="https://stackoverflow.com/a/58213245/1714410">this</a> and <a href="https://stackoverflow.com/a/64365532/1714410">this</a>.</p>
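A rough PyTorch sketch of option 1 (online hard example mining on the majority class; the batch layout, class index, and k are assumptions, and this is not the cited paper's code):

import torch
import torch.nn.functional as F

def ohem_loss(logits, targets, majority_class, k):
    per_sample = F.cross_entropy(logits, targets, reduction="none")    # loss per example
    is_majority = targets == majority_class
    minority_loss = per_sample[~is_majority]                           # keep all minority-class examples
    hard_majority, _ = torch.topk(per_sample[is_majority],
                                  min(k, int(is_majority.sum())))      # keep only the k hardest majority examples
    return torch.cat([minority_loss, hard_majority]).mean()

logits = torch.randn(32, 5, requires_grad=True)             # e.g. 5 DR severity classes
targets = torch.randint(0, 5, (32,))
loss = ohem_loss(logits, targets, majority_class=2, k=4)    # class 2 stands in for "No DR" here
loss.backward()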
2021-01-17 21:30:06.740000+00:00
2021-01-18 07:31:56.007000+00:00
2021-01-18 07:31:56.007000+00:00
null
65,762,961
<p>I am relatively new to the deep learning landscape, so please don't be as mean as Reddit! It seems like a general question so I won't be giving my code here as it doesn't seem necessary (if it is, here's the link to <a href="https://colab.research.google.com/drive/1x6DOqow1dnZvy29_UZ3nHAKZLaNKr9hM?usp=sharing" rel="nofollow noreferrer">colab</a>)</p> <p>A bit about the data: You can find the original data <a href="https://www.kaggle.com/tanlikesmath/diabetic-retinopathy-resized" rel="nofollow noreferrer">here</a>. It is a downsized version of the original dataset of 82 GB.</p> <p>Once I trained my CNN on this, it predicts 'No Diabetic Retinopathy' (No DR) every single time, leading to an accuracy of 73%. Is the reason for this is just the vast amount of No DR images or something else? I have no idea! The 5 classes I have for prediction are <code>[&quot;Mild&quot;, &quot;Moderate&quot;, &quot;No DR&quot;, &quot;Proliferative DR&quot;, &quot;Severe&quot;]</code>.</p> <p>It's probably just bad code, was hoping you guys could help</p>
2021-01-17 16:12:12.567000+00:00
2022-08-05 16:05:04.970000+00:00
2021-01-18 00:28:51.830000+00:00
python|deep-learning|pytorch|conv-neural-network
['https://arxiv.org/abs/1604.03540', 'https://stackoverflow.com/a/52161194/1714410', 'https://stackoverflow.com/a/58213245/1714410', 'https://stackoverflow.com/a/64365532/1714410']
4
29,876,936
<p>Solved it by giving the absolute path. I had been trying all combinations of paths, and giving the absolute path /var/www/arxiv/static/data/name.json worked.</p>
2015-04-26 11:18:38.003000+00:00
2015-04-26 11:18:38.003000+00:00
null
null
29,876,315
<p>I am using <code>f = open('name.json','w+')</code> to create a new file and write to it. But I am unable to create the file. Apache server logs show "No such file exists."</p>
2015-04-26 10:10:09.927000+00:00
2015-04-26 11:18:38.003000+00:00
2015-04-26 10:31:43.320000+00:00
python|apache|flask
[]
0
39,249,179
<p>You will need backtracking, because it is possible to add numbers to the Sudoku board which don't violate any rules immediately, but which will lead to a contradiction later on. If you take any unique-solution Sudoku problem and arbitrarily place any number anywhere, you are bound to experience just this.</p> <p>I suggest you investigate the <a href="https://arxiv.org/abs/cs/0011047" rel="nofollow noreferrer">Dancing Links</a> algorithm. You can easily formulate Sudoku as a set cover problem, and that algorithm can find a solution if there exists one. For the completely empty board, there has to be a solution. Randomize the matrix if you want to obtain a random result.</p> <p>Also investigate <a href="https://stackoverflow.com/questions/tagged/sudoku?sort=votes">all the other sudoku-tagged questions</a>, since you are not the first trying to generate such boards, and translating from one language to another doesn't really change the game that much.</p>
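For the backtracking part, here is a plain recursive filler sketched in Python rather than the question's JavaScript (and not Dancing Links); it fills an empty board with a random valid solution and backtracks out of dead ends:

import random

def valid(board, r, c, v):
    if v in board[r]:
        return False
    if any(board[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != v for i in range(3) for j in range(3))

def fill(board, cell=0):
    if cell == 81:                       # every cell placed: a full valid board
        return True
    r, c = divmod(cell, 9)
    values = list(range(1, 10))
    random.shuffle(values)               # randomise so each run yields a different board
    for v in values:
        if valid(board, r, c, v):
            board[r][c] = v
            if fill(board, cell + 1):
                return True
            board[r][c] = 0              # dead end: undo and try the next value
    return False

board = [[0] * 9 for _ in range(9)]
fill(board)
for row in board:
    print(row)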
2016-08-31 12:04:53.170000+00:00
2016-08-31 12:04:53.170000+00:00
2017-05-23 11:47:14.977000+00:00
null
39,246,124
<p>I am trying to to generate a Sudoku board using this script:</p> <p>The problem is that I don't know how to validate to generate unique numbers on columns and squares.</p> <p>Actually is just validating and generating unique numbers only on rows.</p> <p>Here is that code : <div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>function generate(count, values) { return Array.apply(null, { length: count }).map(function () { var r = [], array = values.slice(); while (array.length) { r.push(array.splice(Math.floor(Math.random() * array.length), 1)[0]); } return r; }); }; var myStringArray = generate(9, [1, 2, 3, 4, 5, 6, 7, 8, 9]); Array.from(document.getElementsByClassName('cell')).forEach(function(e, i){ var y = Math.floor(i/myStringArray.length); var x = i % myStringArray.length; e.textContent = myStringArray[y][x]; });</code></pre> <pre class="snippet-code-css lang-css prettyprint-override"><code>.container{ min-height: 100vh; width: 100%; display: flex; align-items : center; justify-content : center; margin-bottom: 0; } .table { display:table; border: 2px solid #444; border-collapse: collapse; position: relative; } .row { display:table-row; position: relative; z-index:-1; } .cell { display:table-cell; padding:8px; border: 1px solid #ff0000; text-align: center; } .cell:nth-child(3), .cell:nth-child(6){border-right: 5px solid #555; } /*vertical*/ .row:nth-child(3) .cell, .row:nth-child(6) .cell{border-bottom: 5px solid #555;} /*horizontal*/ .header { text-align:center; position: relative; } .header { counter-increment: colno; } .header::before { content: counter(colno); position: absolute; top: -30px; font-weight:bold; color: #777; } .row:nth-child(n+1) { counter-increment: rowno; } .row:nth-child(n+1)::before{ content: counter(rowno); position: absolute; left: -30px; top:10px; font-weight:bold; color: #777; }</code></pre> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;div class="container"&gt; &lt;div class="table"&gt; &lt;div id="mytab1" class="row"&gt; &lt;span class="cell header"&gt;***&lt;/span&gt; &lt;span class="cell header"&gt;***&lt;/span&gt; &lt;span class="cell header"&gt;***&lt;/span&gt; &lt;span class="cell header"&gt;***&lt;/span&gt; &lt;span class="cell header"&gt;***&lt;/span&gt; &lt;span class="cell header"&gt;***&lt;/span&gt; &lt;span class="cell header"&gt;***&lt;/span&gt; &lt;span class="cell header"&gt;***&lt;/span&gt; &lt;span class="cell header"&gt;***&lt;/span&gt; &lt;/div&gt; &lt;div class="row"&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;/div&gt; &lt;div class="row"&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;/div&gt; &lt;div class="row"&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span 
class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;/div&gt; &lt;div class="row"&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;/div&gt; &lt;div class="row"&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;/div&gt; &lt;div class="row"&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;/div&gt; &lt;div class="row"&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;/div&gt; &lt;div class="row"&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;span class="cell"&gt;***&lt;/span&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt;</code></pre> </div> </div> </p> <p>Please visit fiddle <strong><a href="https://jsfiddle.net/707bshLy/12/" rel="nofollow noreferrer">here</a></strong></p> <p>Appreciate your help, Thanks</p>
2016-08-31 09:43:43.907000+00:00
2018-08-16 16:38:54.950000+00:00
2018-08-16 16:38:54.950000+00:00
javascript|arrays|validation|math|sudoku
['https://arxiv.org/abs/cs/0011047', 'https://stackoverflow.com/questions/tagged/sudoku?sort=votes']
2

StackExchange Dataset

Working doc: https://docs.google.com/document/d/1h585bH5sYcQW4pkHzqWyQqA4ape2Bq6o1Cya0TkMOQc/edit?usp=sharing

BigQuery query (see so_bigquery.ipynb):

CREATE TEMP TABLE answers AS
SELECT *
FROM `bigquery-public-data.stackoverflow.posts_answers`
WHERE LOWER(Body) LIKE '%arxiv%';

CREATE TEMPORARY TABLE questions AS
SELECT *
FROM bigquery-public-data.stackoverflow.posts_questions;

SELECT *
FROM answers
JOIN questions ON questions.id = answers.parent_id;

NOTE: BigQuery only has the StackOverflow site data, not the other sites. So if we want to query the other sites, we would probably want to download the data dump to a cluster and run a SQL server.
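A sketch of running an equivalent single-statement query from Python with the google-cloud-bigquery client (the project id and output file name are placeholders; the query actually used for this dataset lives in so_bigquery.ipynb):

from google.cloud import bigquery

client = bigquery.Client(project="your-gcp-project")   # placeholder project id
query = """
SELECT a.id AS a_id, a.body AS a_body, q.id AS q_id, q.body AS q_body, q.tags AS q_tags
FROM `bigquery-public-data.stackoverflow.posts_answers` AS a
JOIN `bigquery-public-data.stackoverflow.posts_questions` AS q
  ON q.id = a.parent_id
WHERE LOWER(a.body) LIKE '%arxiv%'
"""
df = client.query(query).to_dataframe()                 # runs the job and waits for the result
df.to_csv("stackoverflow_arxiv.csv", index=False)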

Columns in the raw query output:

'id', 
'title', 
'body', 
'accepted_answer_id', 
'answer_count', 
'comment_count', 
'community_owned_date', # present only if post is community wiki'd
'creation_date',
'favorite_count',
'last_activity_date', 
'last_edit_date',
'last_editor_display_name', 
'last_editor_user_id',
'owner_display_name', 
'owner_user_id',
'parent_id', # if post is answer, then this is the question id; if post is question, this is None
'post_type_id', # 1 = QUESTION, 2 = ANSWER
'score',
'tags',
'view_count',

[Official database schema](https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede/2678#2678)

File structure

Each folder corresponds to a different StackExchange site (~182 sites in total). The largest one is StackOverflow.
