a_id (int64: 7.84k–73.8M) | a_body (string: 61–33k chars) | a_creation_date (string: 25–32) | a_last_activity_date (string: 25–32) | a_last_edit_date (string: 25–32, nullable ⌀) | a_tags (float64) | q_id (int64: 826–73.8M) | q_body (string: 61–29.9k chars) | q_creation_date (string: 25–32) | q_last_activity_date (string: 25–32) | q_last_edit_date (string: 25–32, nullable ⌀) | q_tags (string: 1–103 chars) | _arxiv_links (string: 2–6.69k chars) | _n_arxiv_links (int64: 0–94)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
47,529,556 | <p>Quick answer:</p>
<p>No you can't</p>
<p>Longer answer:</p>
<p>Pooling is like a reduce function: applying it to a layer reduces the dimensions. But different input shapes do not produce the same output shapes. However, with zero padding you can simulate this; that is exactly what padding to max_len does. So, in the second paper, the idea is to have a dynamic computational graph. It is not the same thing as before: it basically creates several networks with different depths (depending on their input size). The generalized version for encoder-decoder architectures is called <a href="https://arxiv.org/abs/1610.10099" rel="nofollow noreferrer">bytenet</a>.</p> | 2017-11-28 10:43:46.350000+00:00 | 2017-11-28 10:43:46.350000+00:00 | null | null | 47,517,506 | <p>I implemented the CNN model for text classification based on this <a href="http://www.aclweb.org/anthology/D14-1181" rel="nofollow noreferrer">paper</a>. Since the CNN can only deal with sentences of a fixed size, I set the input size to the maximum sentence length in my dataset and zero-pad the shorter sentences. But to my understanding, no matter how long the input sentence is, the max-pooling strategy will always extract only one value per filter map. So it should not matter whether the input sentence is long or short, because after convolution/pooling the output will have the same size. In that case, why should I zero-pad all the short sentences to a fixed size?</p>
<p>For example, my code for feeding data into the CNN model is <code>self.input_data = tf.placeholder(tf.int32,[None,max_len],name="input_data")</code>. Can I leave <code>max_len</code> unspecified and use <code>None</code> instead, so that the shape follows the length of the current training sentence?</p>
<p>In addition, I was wondering whether there is any other new approach that can handle variable-length input for a CNN model. I also found another <a href="http://www.aclweb.org/anthology/P14-1062" rel="nofollow noreferrer">paper</a> that addresses this problem, but to my understanding it only uses the top k values for max-pooling instead of a single max value. How does that let it deal with variable-length sentences?</p> | 2017-11-27 18:27:41.733000+00:00 | 2017-11-28 10:43:46.350000+00:00 | null | tensorflow|nlp|convolution|text-classification | ['https://arxiv.org/abs/1610.10099'] | 1
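<p>As a concrete illustration of the zero padding to <code>max_len</code> discussed in this pair, here is a minimal sketch using Keras' <code>pad_sequences</code> (the sentence data and the chosen length are hypothetical, not taken from the question):</p>
<pre><code>import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

# hypothetical tokenized sentences of different lengths (word indices)
sentences = [[4, 17, 9], [12, 3], [8, 21, 5, 30, 2]]

max_len = 10  # fixed input size expected by the static graph
padded = pad_sequences(sentences, maxlen=max_len, padding='post', value=0)
print(padded.shape)  # (3, 10): every sentence now has the same length
</code></pre>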
56,181,751 | <p>Your network is quite deep for a fully connected architecture. Most likely you have been hit by a <a href="https://en.wikipedia.org/wiki/Vanishing_gradient_problem" rel="nofollow noreferrer">vanishing or exploding gradient</a>, i.e. numerical problems caused by multiplying very small or very large numbers repeatedly. I'd recommend a shallower but wider network; with dense layers, something like 2-3 layers is often enough in my experience. If you prefer working with the deeper architecture you could try out something like <a href="https://arxiv.org/abs/1512.03385" rel="nofollow noreferrer">skip connections</a>.</p> | 2019-05-17 07:36:34.907000+00:00 | 2019-05-17 07:36:34.907000+00:00 | null | null | 49,616,395 | <p>I have a model that learns to classify (binary classification) at almost 100% accuracy after 7-14 epochs, but after reaching the minimum loss of 0.0004, in the next epoch the loss jumps to as much as 7.5 (which means it has a 50% chance of classifying correctly, the same as pure chance) and then stays near 7 for all subsequent epochs.</p>
<p>I use adam optimiser which should take care of the learning rate.</p>
<p>How can I prevent the training loss from increasing? </p>
<p>This huge jump doesn't happen for SGD optimiser.</p>
<pre class="lang-python prettyprint-override"><code>inputs = Input(shape=(X_train.shape[1],))
Dx = Dense(32, activation="relu")(inputs)
Dx = Dense(32, activation="relu")(Dx)
for i in range(20):
Dx = Dense(32, activation="relu")(Dx)
Dx = Dense(1, activation="sigmoid")(Dx)
D = Model(input=[inputs], output=[Dx])
D.compile(loss="binary_crossentropy", optimizer="adam")
D.fit(X_train, y_train, nb_epoch=20)
</code></pre> | 2018-04-02 18:08:30.917000+00:00 | 2019-05-17 07:36:34.907000+00:00 | 2018-04-03 10:03:46.207000+00:00 | tensorflow|neural-network|keras | ['https://en.wikipedia.org/wiki/Vanishing_gradient_problem', 'https://arxiv.org/abs/1512.03385'] | 2 |
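<p>Since the answer above suggests skip connections, here is a minimal sketch (not the original poster's model, and with a hypothetical input size) of how they could be added with the Keras functional API:</p>
<pre><code>from tensorflow.keras.layers import Input, Dense, Add
from tensorflow.keras.models import Model

inputs = Input(shape=(64,))              # hypothetical feature size
x = Dense(32, activation='relu')(inputs)
for _ in range(10):
    block = Dense(32, activation='relu')(x)
    block = Dense(32, activation='relu')(block)
    x = Add()([x, block])                # skip connection around every pair of layers
outputs = Dense(1, activation='sigmoid')(x)

model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='binary_crossentropy', optimizer='adam')
</code></pre>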
32,482,177 | <p>For whatever it's worth, a straightforward, unoptimized Python implementation of the "complete Karmarkar Karp" (CKK) search procedure in [Korf88] -- modified only slightly to bail out of the search after a given time limit (say, 4.95 seconds) and return the best solution found so far -- is sufficient to score <strong>14.204234</strong> on the SPOJ problem, beating the score for Karmarkar-Karp. <strike>As of this writing, this is <a href="http://www.spoj.com/ranks/JOHNNY/" rel="nofollow noreferrer">#3 on the rankings</a></strike> (<strong><em>see Edit #2 below</em></strong>)</p>
<p>A slightly more readable presentation of Korf's CKK algorithm can be found in [Mert99]. </p>
<hr>
<p><strong>EDIT #2</strong> - I've implemented <a href="https://stackoverflow.com/a/32467262/594729">Evgeny Kluev's</a> hybrid heuristic of applying Karmarkar-Karp until the list of numbers is below some threshold and then switching over to the exact Horowitz-Sahni subset enumeration method [HS74] (a concise description may be found in [Korf88]). As suspected, my Python implementation required lowering the switchover threshold versus his C++ implementation. With some trial and error, I found that a threshold of 37 was the maximum that allowed my program to finish within the time limit. Yet, even at that lower threshold, I was able to achieve a score of <strong>15.265633</strong>, good enough for <a href="http://www.spoj.com/ranks/JOHNNY/" rel="nofollow noreferrer">second place</a>. </p>
<p>I further attempted to incorporate this hybrid KK/HS method into the CKK tree search, basically by using HS as a very aggressive and expensive pruning strategy. In plain CKK, I was unable to find a switchover threshold that even matched the KK/HS method. However, using the ILDS (see below) search strategy for CKK and HS (with a threshold of 25) to prune, I was able to yield a very small gain over the previous score, up to <strong>15.272802</strong>. It probably should not be surprising that CKK+ILDS would outperform plain CKK in this context since it would, by design, provide a greater diversity of inputs to the HS phase.</p>
<hr>
<p><strong>EDIT #1</strong> -
I've tried two further refinements to the base CKK algorithm:</p>
<ol>
<li><p>"Improved Limited Discrepancy Search" (ILDS) [Korf96] This is an alternative to the natural DFS ordering of paths within the search tree. It has a tendency to explore more diverse solutions earlier on than regular Depth-First Search. </p></li>
<li><p>"Speeding up 2-Way Number Partitioning" [Cerq12] This generalizes one of the pruning criteria in CKK from nodes within 4 levels of the leaf nodes to nodes within 5, 6, and 7 levels above leaf nodes. </p></li>
</ol>
<p>In my test cases, both of these refinements generally provided noticeable benefits over the original CKK in reducing the number of nodes explored (in the case of the latter) and in arriving at better solutions sooner (in the case of the former). However, within the confines of the SPOJ problem structure, neither of these were sufficient to improve my score. </p>
<p>Given the idiosyncratic nature of this SPOJ problem (i.e.: 5-second time limit and only one specific and undisclosed problem instance), it is hard to give advice on what may actually improve the score<sup>*</sup>. For example, should we continue to pursue alternate search ordering strategies (e.g.: many of the papers by Wheeler Ruml <a href="http://www.cs.unh.edu/~ruml/papers/index.html" rel="nofollow noreferrer">listed here</a>)? Or should we try incorporating some form of local improvement heuristic to solutions found by CKK in order to help pruning? Or maybe we should abandon CKK-based approaches altogether and try for a dynamic programming approach? How about a PTAS? Without knowing more about the specific shape of the instance used in the SPOJ problem, it's very difficult to guess at what kind of approach would yield the most benefit. Each one has its strengths and weaknesses, depending on the specific properties of a given instance. </p>
<p><sup>*</sup> <em>Aside from simply running the same thing faster, say, by implementing in C++ instead of Python.</em></p>
<hr>
<h2>References</h2>
<p>[Cerq12] Cerquides, Jesús, and Pedro Meseguer. "Speeding Up 2-way Number Partitioning." ECAI. 2012, doi:<a href="http://dx.doi.org/10.3233/978-1-61499-098-7-223" rel="nofollow noreferrer">10.3233/978-1-61499-098-7-223</a></p>
<p>[HS74] Horowitz, Ellis, and Sartaj Sahni. "<a href="http://www.cise.ufl.edu/~sahni/papers/computingPartitions.pdf" rel="nofollow noreferrer">Computing partitions with applications to the knapsack problem.</a>" Journal of the ACM (JACM) 21.2 (1974): 277-292.</p>
<p>[Korf88] Korf, Richard E. (1998), "<a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.90.993" rel="nofollow noreferrer">A complete anytime algorithm for number partitioning</a>", Artificial Intelligence 106 (2): 181–203, doi:<a href="https://dx.doi.org/10.1016%2FS0004-3702%2898%2900086-1" rel="nofollow noreferrer">10.1016/S0004-3702(98)00086-1</a>,</p>
<p>[Korf96] Korf, Richard E. "<a href="http://www.aaai.org/Papers/AAAI/1996/AAAI96-043.pdf" rel="nofollow noreferrer">Improved limited discrepancy search</a>." AAAI/IAAI, Vol. 1. 1996.</p>
<p>[Mert99] Mertens, Stephan (1999), A complete anytime algorithm for balanced number partitioning, arXiv:<a href="https://arxiv.org/abs/cs/9903011" rel="nofollow noreferrer">cs/9903011</a></p> | 2015-09-09 14:22:28.323000+00:00 | 2015-09-11 07:44:32.113000+00:00 | 2017-05-23 12:34:30.577000+00:00 | null | 32,354,215 | <p><a href="https://en.wikipedia.org/wiki/Partition_problem">Partition problem</a> is known to be NP-hard. Depending on the particular instance of the problem we can try dynamic programming or some heuristics like differencing (also known as Karmarkar-Karp algorithm).</p>
<p>The latter seems to be very useful for instances with big numbers (which make dynamic programming intractable), but it is not always perfect. What is an efficient way to find a better solution (random search, tabu search, other approximations)?</p>
<p>PS: The question has some story behind it. There is a challenge <a href="http://www.spoj.com/problems/JOHNNY/">Johnny Goes Shopping</a> available at SPOJ since July 2004. Till now, the challenge has been solved by 1087 users, but only 11 of them scored better than correct Karmarkar-Karp algorithm implementation (with current scoring, Karmarkar-Karp gives 11.796614 points). How to do better? (Answers supported by accepted submission most wanted but please do not reveal your code.) </p> | 2015-09-02 13:13:38.887000+00:00 | 2015-09-11 07:44:32.113000+00:00 | 2015-09-04 13:09:04.613000+00:00 | algorithm|optimization|partition-problem | ['http://www.spoj.com/ranks/JOHNNY/', 'https://stackoverflow.com/a/32467262/594729', 'http://www.spoj.com/ranks/JOHNNY/', 'http://www.cs.unh.edu/~ruml/papers/index.html', 'http://dx.doi.org/10.3233/978-1-61499-098-7-223', 'http://www.cise.ufl.edu/~sahni/papers/computingPartitions.pdf', 'http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.90.993', 'https://dx.doi.org/10.1016%2FS0004-3702%2898%2900086-1', 'http://www.aaai.org/Papers/AAAI/1996/AAAI96-043.pdf', 'https://arxiv.org/abs/cs/9903011'] | 10 |
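<p>For reference, here is a minimal sketch of the plain Karmarkar-Karp differencing heuristic that the CKK search above builds on (the baseline heuristic only, not the complete anytime algorithm; it returns the final set difference, not the partition itself):</p>
<pre><code>import heapq

def karmarkar_karp(numbers):
    """Greedy differencing: returns the final difference between the two sets."""
    heap = [-n for n in numbers]          # max-heap via negated values
    heapq.heapify(heap)
    for _ in range(len(numbers) - 1):
        largest = -heapq.heappop(heap)
        second = -heapq.heappop(heap)
        # commit the two largest numbers to opposite sets; keep their difference
        heapq.heappush(heap, -(largest - second))
    return -heap[0]

print(karmarkar_karp([8, 7, 6, 5, 4]))    # prints 2 (the optimal split has difference 0)
</code></pre>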
51,261,582 | <p>If segmentation is difficult, then simply avoid it. Handwriting recognition faces the same problem: how would you segment this image (source: IAM dataset) into characters?</p>
<p><a href="https://i.stack.imgur.com/BvaaB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BvaaB.png" alt="enter image description here"></a></p>
<p>End-to-end trainable neural networks (NN) are able to recognize text in such images.
These NNs are trained with pairs of images and ground-truth texts. You don't have to do any segmentation and you also don't have to specify the character positions.</p>
<p>Here is an illustration of how such a neural network for text recognition may look (for an implementation, see <a href="https://github.com/githubharald/SimpleHTR" rel="nofollow noreferrer">https://github.com/githubharald/SimpleHTR</a>).
<a href="https://i.stack.imgur.com/5Mre9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Mre9.png" alt="enter image description here"></a></p>
<p>It contains CNN layers, RNN layers and a final CTC layer. This CTC layer is the ingredient which enables training in a segmentation-free manner.</p>
<p>I don't want to repeat myself too much; have a look at this article to get an understanding of how such a NN looks and how it works: <a href="https://towardsdatascience.com/2326a3487cd5" rel="nofollow noreferrer">https://towardsdatascience.com/2326a3487cd5</a></p>
<p>And read this paper for a more in-depth discussion and for further references: <a href="https://arxiv.org/pdf/1507.05717.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1507.05717.pdf</a></p> | 2018-07-10 09:21:50.593000+00:00 | 2018-07-10 12:51:45.590000+00:00 | 2018-07-10 12:51:45.590000+00:00 | null | 51,260,184 | <p>I want to recognize the numbers in an image; the numbers are not placed in a line and have some "noise", as in the images below (just a part of my data).
I have searched some projects and papers, but did not find a good way to solve the problem. Can anyone give me some tips on how to solve it, or point me to a useful paper?
Thanks!!! </p>
<p><a href="https://i.stack.imgur.com/NJHrt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NJHrt.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/j6b80.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j6b80.png" alt="enter image description here"></a></p> | 2018-07-10 08:10:28.920000+00:00 | 2018-07-10 12:51:45.590000+00:00 | 2018-07-10 10:05:20.867000+00:00 | image-processing|ocr | ['https://i.stack.imgur.com/BvaaB.png', 'https://github.com/githubharald/SimpleHTR', 'https://i.stack.imgur.com/5Mre9.png', 'https://towardsdatascience.com/2326a3487cd5', 'https://arxiv.org/pdf/1507.05717.pdf'] | 5 |
41,363,305 | <p>I hadn't heard of this before either, so I am a little defensive, since I <a href="https://arxiv.org/abs/1603.09596" rel="nofollow noreferrer">have seen that real and synthetic datasets in high dimensions</a> really do not support the claim of the paper in question.</p>
<p>As a result, what I would suggest as a first, dirty, clumsy, and maybe not-so-good attempt is to generate a sphere in a dimension of your choice (I <a href="https://gsamaras.wordpress.com/code/create-pointset-on-a-sphere-or-a-klein-bottle/" rel="nofollow noreferrer">do it like this</a>) and then place a query at the center of the sphere.</p>
<p>In that case, every point lies at the same distance from the query point, so the Nearest Neighbor has a distance equal to the Farthest Neighbor.</p>
<p>This, of course, is independent of the dimension, but it's what came to mind after looking at the figures of the paper. It should be enough to get you started, but surely better datasets may be generated, if any exist.</p>
<hr>
<p>Edit about:</p>
<blockquote>
<p>distances for each point got bigger with more dimensions!!!!</p>
</blockquote>
<p>this is expected: the higher the dimensionality of the space, the sparser the space is, and thus the greater the distances are. Moreover, this is expected if you consider, for example, the Euclidean distance, which gets greater as the dimensions grow.</p> | 2016-12-28 13:39:09.777000+00:00 | 2016-12-28 17:33:17.367000+00:00 | 2016-12-28 17:33:17.367000+00:00 | null | 41,341,431 | <p>In the paper "<a href="https://cis.temple.edu/~vasilis/Courses/CIS750/Papers/beyer99when_17.pdf" rel="nofollow noreferrer">When Is 'Nearest Neighbor' Meaningful?</a>" we read that, "We show that under certain broad conditions (in terms of data and query distributions, or workload), as dimensionality increases, the distance to the nearest
neighbor approaches the distance to the farthest neighbor. In other words, the contrast in distances to different data points becomes nonexistent. The conditions
we have identified in which this happens are much broader than the independent and identically distributed (IID) dimensions assumption that other work
assumes."</p>
<p>My question is: how should I generate a dataset that exhibits this effect? I have created three points, each with 1000 dimensions and random values ranging from 0-255 in each dimension, but the points end up at clearly different distances and do not reproduce what is described above. It seems changing the dimensions (e.g. 10 or 100 or 1000 dimensions) and ranges (e.g. [0,1]) does not change anything. I still get distinct distances, which should not be any problem for e.g. clustering algorithms! </p> | 2016-12-27 08:14:33.773000+00:00 | 2017-01-03 23:26:49.597000+00:00 | 2016-12-28 14:06:58.313000+00:00 | algorithm|math|dataset|nearest-neighbor|dimensional-modeling | ['https://arxiv.org/abs/1603.09596', 'https://gsamaras.wordpress.com/code/create-pointset-on-a-sphere-or-a-klein-bottle/'] | 2
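<p>A minimal sketch of the construction suggested in the answer above (points sampled on a unit sphere with the query at its center; the dimension and sample count are arbitrary choices):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
d, n = 1000, 500                                          # dimension and number of points
points = rng.standard_normal((n, d))
points /= np.linalg.norm(points, axis=1, keepdims=True)   # project onto the unit sphere

query = np.zeros(d)                                       # the sphere's centre
dists = np.linalg.norm(points - query, axis=1)
print(dists.min(), dists.max())                           # nearest and farthest coincide (both 1.0)
</code></pre>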
39,336,201 | <p>It is not exactly clear to me what you want to accomplish with your application, but I assume that you are trying to output the font from a database of fonts that best matches a user's handwriting.</p>
<p>In Machine Learning this would be a classification problem. The <strong>number of classes</strong> will be equal to the <strong>number of different fonts</strong> in your database. </p>
<p>You could solve this with the help of a <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network" rel="nofollow">Convolutional neural network</a>, which is widely used for image and video recognition tasks. If you've never implemented a CNN before I would suggest that you look up these resources to learn about <code>Torch</code>, which is an easy-to-start-with toolkit to implement CNNs. (Of course there are more frameworks, such as <code>Tensor Flow</code>, <code>Caffe</code>, <code>Lasagne</code>, ...)</p>
<ul>
<li><a href="http://torch.ch" rel="nofollow">Torch Homepage</a></li>
<li><a href="https://github.com/soumith/cvpr2015/blob/master/Deep%20Learning%20with%20Torch.ipynb" rel="nofollow">Deep learning with Torch: 60 minutes blitz</a></li>
<li><a href="https://github.com/torch/torch7/wiki/Cheatsheet" rel="nofollow">Torch Cheatsheet</a></li>
</ul>
<p>The main obstacle you will face is that Neural Networks need thousands of images <code>(>100.000)</code> to be trained properly and to achieve satisfying results. Furthermore, you not only need the images but also a correct label for each image. That is, you would need a training image such as a handwritten character, and as its label the font from your database that it matches most closely. </p>
<p>I would suggest that you read about so-called <a href="http://cs231n.github.io/transfer-learning/" rel="nofollow">transfer learning</a>, which can give you an initial boost as you do not need to set up a CNN model completely by yourself. In addition, people have <code>pre-trained</code> such a model for a related task, so that you save extra time as you would not need to train it for many hours on a <a href="https://en.wikipedia.org/wiki/Graphics_processing_unit" rel="nofollow">GPU</a>. (see <a href="https://en.wikipedia.org/wiki/CUDA" rel="nofollow">CUDA</a>)</p>
<p>A great resource to start with is the paper: <a href="https://arxiv.org/pdf/1411.1792v1.pdf" rel="nofollow">How transferable are features in deep neural networks?</a>, which could be helpful for the stated reasons.</p>
<p>To get tons of training and testing data you can look up the following open datasets that provide all types of <strong>characters</strong> that can be helpful for your task:</p>
<ul>
<li><a href="http://archive.ics.uci.edu/ml/datasets/Artificial+Characters" rel="nofollow">Artificial Characters Data Set</a></li>
<li><a href="http://archive.ics.uci.edu/ml/datasets/UJI+Pen+Characters" rel="nofollow">UJI Pen Characters Data Set</a></li>
<li><a href="http://www.ee.surrey.ac.uk/CVSSP/demos/chars74k/" rel="nofollow">The Chars74K dataset</a></li>
<li><a href="http://tc11.cvc.uab.es/datasets/type/" rel="nofollow">Hand written - Datasets</a></li>
<li><a href="https://lvdmaaten.github.io/publications/papers/TR%20New_Dataset_2009.pdf" rel="nofollow">A New Benchmark Dataset for Handwritten Character Recognition</a></li>
</ul>
<p>For access to a lot of fonts, and maybe even the possibility to create further datasets on your own, you can have a look at <a href="https://fonts.google.com" rel="nofollow">Google Fonts</a>.</p> | 2016-09-05 18:50:11.357000+00:00 | 2016-09-05 18:55:32.343000+00:00 | 2016-09-05 18:55:32.343000+00:00 | null | 39,164,883 | <p>I have been working on an application that involves font recognition based on a user's freehand drawing of characters in an Android Canvas.</p>
<p>In this application the user is asked to enter some predefined characters in a predefined order <code>(A,a,B,c)</code>. Based on this, is there any way to show the font that most closely matches the user's handwriting? </p>
<p>I have researched this topic and found some papers & articles, but most of them recognize the font from a captured image. In that case they have a lot of problems with segmenting paragraphs, individual letters and so on. But in my scenario I know which letter the user is drawing.</p>
<p>I have some knowledge in OpenCV and Machine Learning. Need help on how to proceed with this problem.</p> | 2016-08-26 11:08:20.613000+00:00 | 2016-09-05 19:02:23.427000+00:00 | 2016-09-05 19:02:23.427000+00:00 | android|opencv|fonts|machine-learning|pattern-matching | ['https://en.wikipedia.org/wiki/Convolutional_neural_network', 'http://torch.ch', 'https://github.com/soumith/cvpr2015/blob/master/Deep%20Learning%20with%20Torch.ipynb', 'https://github.com/torch/torch7/wiki/Cheatsheet', 'http://cs231n.github.io/transfer-learning/', 'https://en.wikipedia.org/wiki/Graphics_processing_unit', 'https://en.wikipedia.org/wiki/CUDA', 'https://arxiv.org/pdf/1411.1792v1.pdf', 'http://archive.ics.uci.edu/ml/datasets/Artificial+Characters', 'http://archive.ics.uci.edu/ml/datasets/UJI+Pen+Characters', 'http://www.ee.surrey.ac.uk/CVSSP/demos/chars74k/', 'http://tc11.cvc.uab.es/datasets/type/', 'https://lvdmaaten.github.io/publications/papers/TR%20New_Dataset_2009.pdf', 'https://fonts.google.com'] | 14 |
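<p>As an illustration of the transfer-learning suggestion in the answer above, here is a minimal sketch in tf.keras rather than Torch (the backbone choice, input size and number of font classes are assumptions made for the example):</p>
<pre><code>import tensorflow as tf

num_fonts = 50                                   # hypothetical number of fonts in the database
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights='imagenet')
backbone.trainable = False                       # keep the pre-trained features frozen

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_fonts, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
</code></pre>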
40,871,108 | <p>Before explaining the distinction between tensors and variables, we should be precise about what the word "tensor" means in the context of TensorFlow:</p>
<ul>
<li><p>In the <strong>Python API</strong>, a <a href="https://www.tensorflow.org/versions/r0.12/api_docs/python/framework.html#Tensor" rel="noreferrer"><code>tf.Tensor</code></a> object represents the symbolic result of a TensorFlow operation. For example, in the expression <code>t = tf.matmul(x, y)</code>, <code>t</code> is a <code>tf.Tensor</code> object representing the result of multiplying <code>x</code> and <code>y</code> (which may themselves be symbolic results of other operations, concrete values such as NumPy arrays, or variables).</p>
<p>In this context, a "symbolic result" is more complicated than a pointer to the result of an operation. It is more analogous to a function object that, when called (i.e. passed to <code>tf.Session.run()</code>) will run the necessary computation to produce the result of that operation, and return it to you as a concrete value (e.g. a NumPy array).</p></li>
<li><p>In the <strong>C++ API</strong>, a <a href="https://github.com/tensorflow/tensorflow/blob/5657d0dee8d87f4594b3e5902ed3e3ca8d6dfc0a/tensorflow/core/framework/tensor.h" rel="noreferrer"><code>tensorflow::Tensor</code></a> object represents the concrete value of a multi-dimensional array. For example, the <code>MatMul</code> kernel takes two two-dimensional <code>tensorflow::Tensor</code> objects as inputs, and produces a single two-dimensional <code>tensorflow::Tensor</code> object as its output.</p></li>
</ul>
<p>This distinction is a little confusing, and we might choose different names if we started over (in other language APIs, we prefer the name <code>Output</code> for a symbolic result and <code>Tensor</code> for a concrete value).</p>
<p>A similar distinction exists for variables. In the Python API, a <a href="https://www.tensorflow.org/versions/r0.12/api_docs/python/state_ops.html#Variable" rel="noreferrer"><code>tf.Variable</code></a> is the symbolic representation of a variable, which has methods for creating operations that read the current value of the variable, and assign values to it. In the C++ implementation, a <a href="https://github.com/tensorflow/tensorflow/blob/5657d0dee8d87f4594b3e5902ed3e3ca8d6dfc0a/tensorflow/core/kernels/variable_ops.h#L29" rel="noreferrer"><code>tensorflow::Var</code></a> object is a wrapper around a shared, mutable <code>tensorflow::Tensor</code> object.</p>
<p>With that context out the way, we can address your specific questions:</p>
<ol>
<li><p><strong>What is the meaning of "in-memory buffers"?</strong></p>
<p>An in-memory buffer is simply a contiguous region of memory that has been allocated with a TensorFlow allocator. <code>tensorflow::Tensor</code> objects contain a pointer to an in-memory buffer, which holds the values of that tensor. The buffer could be in host memory (i.e. accessible from the CPU) or device memory (e.g. accessible only from a GPU), and TensorFlow has operations to move data between these memory spaces.</p></li>
<li><p><strong>What is the meaning of a "handle"?</strong> </p>
<p>In the explanation in <a href="https://arxiv.org/pdf/1610.01178.pdf" rel="noreferrer">the paper</a>, the word "handle" is used in a couple of different ways, which are slightly different from how TensorFlow uses the term. The paper uses "symbolic handle" to refer to a <code>tf.Tensor</code> object, and "persistent, mutable handle" to refer to a <code>tf.Variable</code> object. The TensorFlow codebase uses "handle" to refer to a name for a stateful object (like a <code>tf.FIFOQueue</code> or <code>tf.TensorArray</code>) that can be passed around without copying all of the values (i.e. <a href="https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_reference" rel="noreferrer">call-by-reference</a>).</p></li>
<li><p><strong>Is my initial assumption about the internal of a tensor correct?</strong></p>
<p>Your assumption most closely matches the definition of a (C++) <code>tensorflow::Tensor</code> object. The (Python) <code>tf.Tensor</code> object is more complicated because it refers to a function for computing a value, rather than the value itself.</p></li>
<li><p><strong>What is the essential internal implementation difference between a tensor and a variable?</strong></p>
<p>In C++, a <code>tensorflow::Tensor</code> and <code>tensorflow::Var</code> are very similar; the only different is that <code>tensorflow::Var</code> also has a <code>mutex</code> that can be used to lock the variable when it is being updated.</p>
<p>In Python, the essential difference is that a <code>tf.Tensor</code> is implemented as a dataflow graph, and it is read-only (i.e. by calling <code>tf.Session.run()</code>). A <code>tf.Variable</code> can be both read (i.e. by evaluating its read operation) and written (e.g. by running an assign operation).</p>
<p><strong>Why are they declared differently and why is that difference essential to TensorFlow?</strong></p>
<p>Tensors and variables serve different purposes. Tensors (<code>tf.Tensor</code> objects) can represent complex compositions of mathematical expressions, like loss functions in a neural network, or symbolic gradients. Variables represent state that is updated over time, like weight matrices and convolutional filters during training. While in principle you could represent the evolving state of a model without variables, you would end up with a very large (and repetitive) mathematical expression, so variables provide a convenient way to materialize the state of the model, and, for example, share it with other machines for parallel training.</p></li>
</ol> | 2016-11-29 16:39:00.350000+00:00 | 2016-11-29 16:39:00.350000+00:00 | null | null | 40,866,675 | <p>First of all, I am aware that a related question has been asked <a href="https://stackoverflow.com/questions/38556078/in-tensorflow-what-is-the-difference-between-a-variable-and-a-tensor">here</a>.</p>
<p>However, this question is about the implementation and internals.
I was reading the paper "<a href="https://arxiv.org/pdf/1610.01178.pdf" rel="noreferrer">A Tour of TensorFlow</a>". The following two points are quoted from there:</p>
<p>1.</p>
<blockquote>
<p>A tensor itself does not hold or store values in memory, but provides
only an interface for retrieving the value referenced by the tensor.</p>
</blockquote>
<p>This suggests to me that a Tensor is an object that simply stores the pointer to a result of an operation and, on retrieving the result or value of the tensor, it simply dereferences that pointer.</p>
<p>2.</p>
<blockquote>
<p>Variables can be described as persistent, mutable handles to in-memory buffers storing tensors. As such, variables are characterized by a certain shape and a fixed type.</p>
</blockquote>
<p>Here I get confused, because I thought, based on the previous point, that Tensors simply store a pointer. If they were simply pointers, they could be mutable as well.</p>
<p>To be precise these are my questions:</p>
<ol>
<li>What is the meaning of "in-memory buffers"?</li>
<li>What is the meaning of a "handle"?</li>
<li>Is my initial assumption about the internals of a tensor correct? </li>
<li>What is the essential internal implementation difference between a tensor and a variable? Why are they declared differently and why is that difference essential to TensorFlow?</li>
</ol> | 2016-11-29 13:06:36.810000+00:00 | 2016-11-29 16:54:06.853000+00:00 | 2017-05-23 12:34:42.623000+00:00 | tensorflow | ['https://www.tensorflow.org/versions/r0.12/api_docs/python/framework.html#Tensor', 'https://github.com/tensorflow/tensorflow/blob/5657d0dee8d87f4594b3e5902ed3e3ca8d6dfc0a/tensorflow/core/framework/tensor.h', 'https://www.tensorflow.org/versions/r0.12/api_docs/python/state_ops.html#Variable', 'https://github.com/tensorflow/tensorflow/blob/5657d0dee8d87f4594b3e5902ed3e3ca8d6dfc0a/tensorflow/core/kernels/variable_ops.h#L29', 'https://arxiv.org/pdf/1610.01178.pdf', 'https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_reference'] | 6 |
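<p>A small sketch of the distinction described above, using the TF 1.x-style API this question is about (the values are made up for illustration):</p>
<pre><code>import tensorflow as tf

x = tf.constant([[1.0, 2.0]])
w = tf.Variable([[3.0], [4.0]])        # persistent, mutable state
y = tf.matmul(x, w)                    # y is a symbolic tf.Tensor; it holds no value yet

assign_op = w.assign([[5.0], [6.0]])   # variables can be written to...

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y))                 # ...tensors are only evaluated: [[11.]]
    sess.run(assign_op)
    print(sess.run(y))                 # re-evaluating y reflects the variable's new state: [[17.]]
</code></pre>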
71,397,384 | <p>I think what you're looking for is in <em><a href="https://arxiv.org/abs/1803.08494" rel="nofollow noreferrer">Group Normalization</a>, by Yuxin Wu and Kaiming He (IJCV'20)</em>.</p>
<p>Especially <strong>Fig. 2</strong>:</p>
<p><a href="https://i.stack.imgur.com/sQR7B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sQR7B.png" alt="enter image description here" /></a></p> | 2022-03-08 15:19:17.063000+00:00 | 2022-03-08 15:19:17.063000+00:00 | null | null | 71,392,482 | <p>Please illustrate batch normalisation and layer normalisation with a clear notation involving tensors. Also comment on when each one is required/recommended.</p> | 2022-03-08 09:09:35.380000+00:00 | 2022-03-08 15:19:17.063000+00:00 | null | pytorch|tensor | ['https://arxiv.org/abs/1803.08494', 'https://i.stack.imgur.com/sQR7B.png'] | 2 |
56,454,694 | <p>Sklearn is built for generic algorithms; TSP/VRP are too specific for it. Are you open to trying more specific libraries than Sklearn?</p>
<p>Recent advances in Reinforcement Learning seem to address TSP and VRP problems in a way that challenges the traditional Combinatorial Optimization approach.</p>
<p>To start with, you can look at <a href="https://github.com/higgsfield/np-hard-deep-reinforcement-learning" rel="nofollow noreferrer">this tutorial</a>.</p>
<p>A <a href="https://arxiv.org/abs/1802.04240" rel="nofollow noreferrer">recent paper</a> shows a method for VRP. They also shared their code on Github.</p>
<p>A <a href="https://www.researchgate.net/publication/333102662_TauRieL_Targeting_Traveling_Salesman_Problem_with_a_deep_reinforcement_learning_inspired_architecture" rel="nofollow noreferrer">more recent paper</a> claims to have a shorter training period.</p>
<p>Generally speaking, the architecture proposed in these papers treats the VRP job as a whole and improves on a greedy approach in two ways:</p>
<ol>
<li>The training phase goes back and forth to include future
rewards.</li>
<li>The solution architecture includes (at least) two NNs, an
Encoder and a Decoder. The Encoder goes through the entire input BEFORE the Decoder starts producing the output.</li>
</ol>
<p>To summarize, if you want a quick and robust solution you can use existing open libraries such as <a href="https://github.com/graphhopper/jsprit" rel="nofollow noreferrer">Jsprit</a>. If you have time for research, the resources for training a NN, and can take the risk of failing, go after Reinforcement Learning.</p> | 2019-06-05 05:30:07.283000+00:00 | 2019-06-05 08:07:02.647000+00:00 | 2019-06-05 08:07:02.647000+00:00 | null | 56,406,387 | <p>With sklearn, I am trying to model a pickup and dropoff vehicle routing problem. If you can recommend one of its classifiers, it would be appreciated. For simplicity, there is one vehicle and there are 5 customers. The training data has 20 features and 10 outputs.</p>
<p>Features include the x-y coordinates of the 5 customers. Each customer has pickup and dropoff locations.</p>
<pre><code>c1p_x, c1p_y,c2p_x, c2p_y,c3p_x, c3p_y,c4p_x, c4p_y,c5p_x, c5p_y,
c1d_x, c1d_y,c2d_x, c2d_y,c3d_x, c3d_y,c4d_x, c4d_y,c5d_x, c5d_y,
</code></pre>
<p>c1<strong>p</strong>_x, c1<strong>p</strong>_y: customer 1 <strong>pickup</strong> x-y coordinates.</p>
<p>c1<strong>d</strong>_x, c1<strong>d</strong>_y: customer 1 <strong>dropoff</strong> x-y coordinates.</p>
<p>For example,</p>
<pre><code>123,106,332,418,106,477,178,363,381,349,54,214,297,34,5,122,3,441,455,322
</code></pre>
<p>Outputs include the optimal sequence of visits. For example, 5,10,2,7,1,6,4,9,3,8</p>
<p>Customer 5 (pkup) => 10 (drop) => 2 (pkup) => 7 (drop) ... => 8 (drop)</p>
<p>Note that each pickup is immediately followed by its dropoff.</p>
<p>Here are codes I tried.</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
train = pd.read_csv('ML_DARP_train.txt',header=None,sep=',')
print (train.head())
x = train[range(0,20)]   # columns 0-19 hold the 20 features
y = train[range(20,30)]
classifier = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(15,), random_state=1)
MLPClassifier(activation='relu', alpha=1e-05, batch_size='auto',
beta_1=0.9, beta_2=0.999, early_stopping=False,
epsilon=1e-08, hidden_layer_sizes=(15,),
learning_rate='constant', learning_rate_init=0.001,
max_iter=200, momentum=0.9, n_iter_no_change=10,
nesterovs_momentum=True, power_t=0.5, random_state=1,
shuffle=True, solver='lbfgs', tol=0.0001,
validation_fraction=0.1, verbose=False, warm_start=False)
classifier.fit(x, y)
print(classifier.score(x, y))
test = pd.read_csv('ML_DARP_test.txt',header=None,sep=',')
test = test[range(0,20)]
print (classifier.predict(test))
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
import pandas as pd
train = pd.read_csv('ML_DARP_train.txt',header=None,sep=',')
print (train.head())
x = train[range(0,20)]   # columns 0-19 hold the 20 features
y = train[range(20,30)]
print (y)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
classifier = MultiOutputClassifier(forest, n_jobs=-1)
classifier.fit(x, y)
print(classifier.score(x, y))
test = pd.read_csv('ML_DARP_test.txt',header=None,sep=',')
test = test[range(0,20)]
print (classifier.predict(test))
</code></pre>
<p>Here are training data.</p>
<pre><code>123,106,332,418,106,477,178,363,381,349,54,214,297,34,5,122,3,441,455,322,5,10,2,7,1,6,4,9,3,8
154,129,466,95,135,191,243,13,289,227,300,40,171,286,219,403,232,113,378,428,5,10,2,7,1,6,4,9,3,8
215,182,163,321,259,500,434,304,355,276,77,414,93,83,42,292,101,459,488,237,5,10,4,9,3,8,2,7,1,6
277,220,313,29,304,229,500,454,263,154,339,255,484,351,287,87,330,147,411,343,1,6,3,8,2,7,4,9,5,10
308,258,464,223,349,460,64,120,188,62,100,96,374,118,16,368,73,352,365,480,2,7,1,6,5,10,3,8,4,9
369,296,97,385,363,174,161,317,128,472,346,423,217,338,246,163,349,87,335,132,2,7,4,9,1,6,5,10,3,8
400,318,263,94,471,467,321,45,146,475,107,264,139,136,53,36,155,370,382,380,3,8,2,7,4,9,5,10,1,6
477,387,461,350,62,244,417,242,102,399,401,137,76,451,330,364,431,90,368,47,3,8,1,6,4,9,2,7,5,10
38,441,95,12,45,412,452,361,496,276,162,479,420,155,12,112,128,263,290,138,4,9,1,6,3,8,2,7,5,10
69,447,245,205,106,157,79,89,467,216,393,289,311,422,273,440,435,30,291,323,2,7,4,9,3,8,1,6,5,10
115,0,427,430,214,451,207,302,439,172,185,178,232,220,64,282,210,266,292,22,2,7,5,10,1,6,3,8,4,9
192,53,92,123,259,180,273,468,363,81,447,19,122,488,310,77,454,471,246,159,3,8,1,6,5,10,2,7,4,9
223,91,227,317,304,411,385,180,319,5,208,361,498,239,54,389,245,222,231,328,5,10,2,7,4,9,1,6,3,8
269,113,424,57,396,188,12,378,322,493,470,218,435,52,331,231,20,474,263,59,4,9,5,10,1,6,2,7,3,8
315,151,42,204,410,387,78,43,215,355,215,28,278,273,44,11,264,178,170,149,5,10,2,7,3,8,4,9,1,6
393,236,239,444,487,148,191,240,202,326,23,417,200,71,321,338,39,414,203,365,4,9,2,7,3,8,1,6,5,10
454,274,390,153,62,410,303,453,173,266,286,259,106,354,96,165,331,165,203,48,4,9,2,7,3,8,5,10,1,6
15,327,86,378,154,187,447,181,160,237,62,131,27,152,389,23,137,448,220,264,3,8,4,9,2,7,1,6,5,10
61,365,237,71,184,417,43,379,131,178,324,474,403,388,133,334,413,167,205,417,5,10,3,8,1,6,4,9,2,7
123,418,403,280,261,178,124,59,56,86,101,331,309,170,394,145,172,404,175,70,3,8,2,7,5,10,4,9,1,6
169,441,36,458,275,378,190,194,466,465,332,141,167,406,108,426,385,76,97,176,3,8,2,7,1,6,5,10,4,9
215,494,249,213,398,186,365,470,500,483,124,29,120,236,431,315,238,391,161,439,3,8,5,10,4,9,1,6,2,7
246,0,337,345,396,370,399,88,377,329,355,324,449,441,113,48,420,32,68,28,2,7,3,8,1,6,5,10,4,9
339,100,49,84,489,131,496,286,317,253,163,213,370,238,390,375,195,268,37,181,5,10,2,7,3,8,4,9,1,6
385,122,215,294,80,424,139,29,320,240,425,70,292,36,181,233,17,50,70,413,2,7,4,9,1,6,5,10,3,8
416,144,366,2,125,154,236,211,291,180,170,396,182,304,427,44,277,286,86,112,5,10,3,8,2,7,4,9,1,6
477,198,15,180,170,384,348,424,231,105,448,253,41,39,171,356,68,22,56,265,1,6,4,9,2,7,5,10,3,8
38,251,181,390,231,130,414,58,171,13,209,95,448,322,417,151,281,211,10,387,2,7,5,10,1,6,3,8,4,9
69,258,347,98,276,360,495,239,111,439,455,437,354,89,161,447,40,447,480,55,3,8,1,6,4,9,5,10,2,7
131,327,28,323,384,153,169,500,130,426,248,309,275,388,469,321,379,229,27,286,1,6,5,10,2,7,4,9,3,8
161,334,116,454,351,305,172,102,492,272,463,88,87,76,120,37,60,387,419,361,1,6,5,10,3,8,4,9,2,7
238,403,313,194,428,82,300,331,479,227,255,462,9,375,412,396,367,154,420,45,2,7,1,6,4,9,5,10,3,8
285,456,10,419,35,375,460,74,498,230,47,350,447,189,219,254,189,437,484,308,2,7,5,10,4,9,1,6,3,8
346,494,144,112,95,120,71,287,469,170,294,176,322,441,480,81,480,188,469,476,3,8,2,7,4,9,5,10,1,6
393,47,310,322,156,367,168,453,378,63,71,33,227,223,240,393,208,377,423,113,5,10,1,6,3,8,4,9,2,7
470,100,22,77,264,159,312,197,380,34,364,406,180,52,47,266,30,159,440,329,4,9,2,7,5,10,1,6,3,8
15,138,157,239,294,358,362,332,289,429,109,248,23,273,246,14,243,348,393,466,5,10,4,9,3,8,2,7,1,6
61,176,323,449,355,119,490,59,261,384,371,89,430,39,22,357,49,99,394,134,2,7,4,9,1,6,3,8,5,10
92,198,427,110,353,303,39,210,201,293,117,400,274,260,220,121,278,304,348,272,1,6,4,9,5,10,3,8,2,7
185,267,154,367,476,111,183,439,172,249,410,288,226,89,27,496,84,70,365,488,5,10,4,9,1,6,3,8,2,7
231,321,305,59,36,357,264,119,112,157,187,145,117,357,288,291,345,291,319,124,4,9,1,6,5,10,2,7,3,8
262,327,440,238,66,71,345,270,36,66,417,456,477,92,17,86,72,480,273,262,1,6,2,7,4,9,3,8,5,10
323,381,89,431,96,287,411,436,477,476,179,298,367,359,231,367,317,200,242,415,1,6,3,8,2,7,5,10,4,9
369,418,286,171,219,94,85,195,10,494,456,170,304,173,54,256,154,499,321,192,1,6,2,7,3,8,4,9,5,10
416,456,437,365,249,325,166,377,452,403,217,12,179,425,299,51,415,218,275,330,3,8,2,7,4,9,5,10,1,6
477,9,86,42,294,55,248,58,360,296,479,354,53,176,28,347,158,423,214,452,3,8,2,7,4,9,1,6,5,10
7,31,237,251,355,301,360,255,332,236,240,179,460,459,289,158,434,158,214,151,2,7,3,8,4,9,5,10,1,6
84,84,418,476,447,78,473,468,319,207,17,52,381,241,65,0,225,426,231,335,5,10,4,9,2,7,3,8,1,6
146,154,83,169,477,277,22,102,180,37,295,410,256,493,279,265,438,83,107,395,4,9,5,10,2,7,1,6,3,8
177,176,234,363,21,7,134,299,183,24,40,236,131,244,24,76,213,334,155,125,4,9,2,7,1,6,3,8,5,10
238,214,416,87,144,332,309,74,201,27,318,108,68,58,347,466,66,148,202,372,1,6,2,7,4,9,5,10,3,8
316,298,128,344,236,108,438,287,173,468,126,498,5,372,154,340,357,400,188,40,4,9,1,6,5,10,3,8,2,7
346,305,215,475,219,276,441,391,50,330,341,292,334,76,337,57,38,41,126,162,3,8,1,6,4,9,5,10,2,7
408,358,397,199,296,37,68,103,37,285,118,133,239,359,97,384,330,309,127,346,3,8,5,10,1,6,4,9,2,7
439,381,47,377,357,284,180,316,494,225,380,491,114,110,358,211,120,44,112,30,1,6,3,8,5,10,2,7,4,9
15,450,228,117,449,60,309,28,465,165,156,348,51,425,150,69,412,311,97,199,3,8,4,9,5,10,2,7,1,6
77,2,363,295,463,260,358,163,358,43,434,205,411,160,348,318,124,469,35,305,5,10,1,6,3,8,2,7,4,9
123,40,59,19,23,21,487,392,345,500,211,62,317,428,124,160,416,236,36,4,5,10,3,8,1,6,2,7,4,9
169,62,194,197,83,267,83,72,317,455,441,389,207,194,385,472,175,472,37,189,2,7,1,6,5,10,4,9,3,8
231,131,376,438,160,28,164,254,241,348,234,261,129,493,145,298,435,176,492,326,3,8,5,10,2,7,4,9,1,6
277,154,25,115,221,274,276,452,213,304,480,87,3,244,406,109,210,428,493,10,3,8,2,7,4,9,5,10,1,6
323,191,176,309,266,489,358,132,153,213,241,429,394,11,135,405,470,147,463,179,5,10,1,6,3,8,2,7,4,9
354,214,311,487,296,203,423,283,46,90,488,255,253,247,349,185,198,336,401,300,3,8,1,6,4,9,5,10,2,7
447,298,7,211,372,481,66,11,64,93,296,143,175,45,141,43,4,103,449,31,3,8,4,9,2,7,5,10,1,6
493,336,157,420,449,242,178,224,35,33,41,470,65,312,417,370,296,370,434,200,5,10,3,8,1,6,2,7,4,9
7,343,323,129,40,19,291,421,477,458,287,311,488,110,193,197,71,90,404,369,2,7,5,10,4,9,3,8,1,6
69,396,474,307,54,218,372,102,432,382,64,168,346,346,407,477,331,326,389,21,1,6,3,8,5,10,4,9,2,7
146,465,155,31,115,465,469,284,357,291,342,25,252,113,167,304,90,30,343,158,3,8,1,6,5,10,4,9,2,7
192,2,305,240,191,226,96,11,375,278,103,368,158,396,444,146,397,313,375,390,3,8,1,6,4,9,5,10,2,7
238,24,471,450,268,488,209,209,315,202,365,209,64,178,220,474,156,33,360,58,3,8,2,7,4,9,1,6,5,10
285,78,105,111,282,202,274,359,224,80,126,50,424,415,434,238,385,222,298,179,5,10,3,8,1,6,2,7,4,9
346,116,286,336,358,464,387,71,195,35,388,393,330,197,210,80,176,474,299,364,1,6,3,8,5,10,2,7,4,9
424,185,484,92,466,256,46,316,229,38,196,281,283,26,17,454,499,272,347,110,3,8,4,9,2,7,5,10,1,6
470,223,133,270,11,471,111,482,138,432,442,122,157,278,231,218,242,476,301,248,3,8,2,7,4,9,5,10,1,6
15,261,284,464,56,201,208,163,94,357,203,465,32,29,477,29,1,196,271,401,4,9,1,6,5,10,2,7,3,8
46,267,418,141,70,400,258,313,2,250,434,275,392,265,190,310,230,401,209,6,4,9,3,8,5,10,1,6,2,7
107,336,130,381,193,224,417,72,5,237,242,178,345,79,12,183,68,183,257,253,2,7,4,9,1,6,3,8,5,10
154,358,234,43,176,392,452,176,415,114,473,474,188,284,195,417,250,341,179,359,5,10,3,8,1,6,4,9,2,7
200,412,400,252,252,153,79,405,370,54,250,331,94,66,472,259,56,92,165,27,1,6,4,9,5,10,3,8,2,7
277,465,81,462,298,383,129,38,295,448,26,188,485,334,201,54,269,281,134,196,3,8,4,9,1,6,2,7,5,10
339,18,262,186,405,176,304,314,313,451,304,60,406,131,8,429,122,94,182,428,4,9,1,6,2,7,3,8,5,10
370,56,429,411,498,453,432,26,269,375,65,403,328,430,300,271,398,331,152,80,1,6,5,10,4,9,2,7,3,8
431,94,78,88,11,152,466,161,162,252,327,244,187,166,499,19,110,3,74,186,5,10,3,8,1,6,4,9,2,7
462,116,228,282,71,414,94,390,180,239,72,70,93,449,274,362,417,286,122,433,2,7,4,9,1,6,5,10,3,8
38,185,410,6,164,190,237,102,136,179,366,459,500,231,66,220,208,22,107,101,5,10,4,9,2,7,3,8,1,6
100,238,75,215,225,421,319,284,92,104,142,300,390,499,311,15,468,258,77,238,3,8,5,10,1,6,2,7,4,9
131,245,179,362,192,73,337,387,454,451,358,95,233,203,478,249,149,400,485,329,1,6,3,8,5,10,4,9,2,7
177,298,392,118,331,397,27,178,3,469,150,484,186,32,317,154,18,229,47,91,3,8,2,7,1,6,5,10,4,9
254,352,57,311,392,143,108,344,460,409,428,341,76,299,61,450,262,450,48,275,3,8,4,9,1,6,5,10,2,7
285,390,191,489,406,358,174,9,369,302,173,167,436,35,291,229,6,138,2,413,5,10,2,7,3,8,1,6,4,9
362,443,373,229,482,119,318,253,387,289,451,23,357,334,67,71,329,437,34,143,4,9,1,6,3,8,2,7,5,10
408,481,54,439,105,428,462,466,358,245,227,397,279,131,375,446,119,188,35,328,5,10,3,8,1,6,4,9,2,7
470,34,204,131,134,142,42,147,283,138,489,238,154,383,104,241,364,393,475,450,5,10,1,6,2,7,3,8,4,9
15,71,355,325,164,357,92,282,176,15,250,64,28,134,318,5,76,65,413,71,3,8,5,10,2,7,4,9,1,6
61,94,4,18,225,102,204,495,178,488,12,406,435,417,78,317,368,333,429,286,1,6,5,10,3,8,2,7,4,9
123,163,202,259,348,411,364,254,181,475,289,279,357,215,402,206,205,115,462,487,2,7,4,9,1,6,5,10,3,8
185,201,336,437,362,125,429,405,90,352,50,120,231,452,115,487,434,320,400,107,1,6,5,10,3,8,2,7,4,9
231,238,17,145,407,356,41,117,77,323,312,463,121,218,360,298,225,70,432,323,1,6,4,9,2,7,5,10,3,8
262,276,167,355,484,101,138,298,1,216,73,320,27,0,120,108,485,291,370,461,1,6,4,9,3,8,2,7,5,10
292,283,302,32,44,347,203,449,442,141,304,130,403,252,366,420,213,480,356,129,4,9,3,8,1,6,2,7,5,10
</code></pre> | 2019-06-01 11:39:13.373000+00:00 | 2019-06-05 08:07:02.647000+00:00 | 2019-06-01 12:03:59.460000+00:00 | scikit-learn | ['https://github.com/higgsfield/np-hard-deep-reinforcement-learning', 'https://arxiv.org/abs/1802.04240', 'https://www.researchgate.net/publication/333102662_TauRieL_Targeting_Traveling_Salesman_Problem_with_a_deep_reinforcement_learning_inspired_architecture', 'https://github.com/graphhopper/jsprit'] | 4 |
42,533,963 | <ol>
<li>Take a look at the following papers. Both are examples that use the Tesseract training process for handwriting recognition. </li>
</ol>
<p><a href="https://arxiv.org/ftp/arxiv/papers/1003/1003.5893.pdf" rel="nofollow noreferrer">Tesseract Training for Handwritten Digit Recognition</a></p>
<p><a href="https://arxiv.org/abs/1003.5898" rel="nofollow noreferrer">Training Tesseract for Roman Font Handwriting</a></p>
<ol start="2">
<li><p>Check out the official Tesseract Training page.</p></li>
<li><p>The following link takes you through the training process; it helped me a lot.
<a href="https://web.archive.org/web/20170820212334/http://www.resolveradiologic.com:80/blog/2013/01/15/training-tesseract" rel="nofollow noreferrer">https://web.archive.org/web/20170820212334/http://www.resolveradiologic.com:80/blog/2013/01/15/training-tesseract</a></p></li>
<li><p>Use a third-party GUI for Tesseract training; it will make your life much easier. I recommend tesseract4java and jTessBoxEditor (both work on OS X).</p></li>
</ol> | 2017-03-01 14:10:57.230000+00:00 | 2018-09-06 08:54:32.210000+00:00 | 2018-09-06 08:54:32.210000+00:00 | null | 42,526,607 | <p>I'm unable to read the form exactly using node-tesseract. Only the printed text of the form is recognized and returned correctly, whereas the handwritten text is returned with some special characters.</p>
<p>My code is,</p>
<pre><code>var options = {
l: 'deu',
psm: 6,
env: {
maxBuffer: 4096 * 4096
}
};
tesseract.process('./server/images/form.jpg', options, function (err,text) {
if (err) {
return console.log("An error occured: ", err);
}
console.log("Recognized text:");
console.log(text);
});
</code></pre>
<p>my <code>input ------> OWNER Brian Dude
output------> OW_NER ägga ] )ggé;= ‘</code></p>
<p>Here, OWNER is a text field in the form.</p> | 2017-03-01 08:22:44.490000+00:00 | 2018-09-06 08:54:32.210000+00:00 | 2017-03-01 15:59:58.967000+00:00 | node.js|tesseract | ['https://arxiv.org/ftp/arxiv/papers/1003/1003.5893.pdf', 'https://arxiv.org/abs/1003.5898', 'https://web.archive.org/web/20170820212334/http://www.resolveradiologic.com:80/blog/2013/01/15/training-tesseract'] | 3
<p>According to the paper <a href="https://arxiv.org/pdf/1606.00915.pdf" rel="nofollow noreferrer">Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation</a> (3.1 DeeplabV3+ as an encoder), output_stride simply means the ratio between the image input size and the feature map output size (before global pooling). So changing output_stride will change the output result.</p>
<p>Just copied from this <a href="https://github.com/tensorflow/models/issues/4233" rel="nofollow noreferrer">link</a>.</p> | 2021-02-24 13:25:05.943000+00:00 | 2021-02-24 13:25:05.943000+00:00 | null | null | 58,936,953 | <p>I am using DeepLabv3+ and I am running some tests. For my first run I used <code>output_stride=16</code> and <code>atrous_rates=[6, 12, 18]</code>, and in the 2nd run I used <code>output_stride=8</code> and <code>atrous_rates=[12,24, 36]</code>. Then I used tensorboard to see the results and noticed that the heatmaps look larger and one "unit" is 4x bigger than in the run with <code>output_stride=16</code>. <a href="https://i.stack.imgur.com/iOXuB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iOXuB.png" alt="enter image description here"></a> </p>
<p><code>output_stride=16</code></p>
<p><a href="https://i.stack.imgur.com/tKbn5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tKbn5.png" alt="enter image description here"></a></p>
<p><code>output_stride=8</code>
I would like to know the reason behind this behaviour and its consequences for my mIoU metric.</p>
<p>regards</p> | 2019-11-19 15:01:38.510000+00:00 | 2021-02-24 13:25:05.943000+00:00 | null | tensorflow|deeplab | ['https://arxiv.org/pdf/1606.00915.pdf', 'https://github.com/tensorflow/models/issues/4233'] | 2 |
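<p>A quick worked example of the ratio described in the answer above (the 513 crop size is simply the value commonly used in the DeepLab code, chosen here for illustration); it also explains why the stride-8 heatmaps show roughly 4x as many cells:</p>
<pre><code>input_size = 513                              # crop size commonly used in the DeepLab code
for output_stride in (16, 8):
    feature_map = input_size // output_stride + 1
    print(output_stride, feature_map)         # 16 -> 33, 8 -> 65
# stride 8 gives a feature map twice as large per spatial dimension,
# i.e. roughly 4x the number of cells, matching the 4x larger heatmaps observed
</code></pre>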
68,263,932 | <h1 id="fine-tuning-approach-81pu">Fine Tuning Approach</h1>
<p>There are multiple approaches to fine-tune BERT for the target tasks.</p>
<ol>
<li>Further Pre-training the base BERT model</li>
<li>Custom classification layer(s) on top of the base BERT model being trainable</li>
<li>Custom classification layer(s) on top of the base BERT model being non-trainable (frozen)</li>
</ol>
<p>Note that the BERT base model has been pre-trained only for two tasks as in the original paper.</p>
<ul>
<li><a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></li>
</ul>
<blockquote>
<p>3.1 Pre-training BERT ...we pre-train BERT using two unsupervised tasks<br></p>
<ul>
<li>Task #1: Masked LM<br></li>
<li>Task #2: Next Sentence Prediction (NSP)<br></li>
</ul>
</blockquote>
<p>Hence, the base BERT model is, so to speak, half-baked, and it can be fully baked for the target domain (1st approach). Alternatively, we can use it as part of our custom model training with the base trainable (2nd) or non-trainable (3rd).</p>
<hr />
<h1 id="st-approach-wp3e">1st approach</h1>
<p><a href="https://arxiv.org/abs/1905.05583" rel="noreferrer">How to Fine-Tune BERT for Text Classification?</a> demonstrated the 1st approach of Further Pre-training, and pointed out the learning rate is the key to avoid <strong>Catastrophic Forgetting</strong> where the pre-trained knowledge is erased during learning of new knowledge.</p>
<blockquote>
<p>We find that a lower learning rate, such as 2e-5,
is necessary to make BERT overcome the catastrophic forgetting problem. With an aggressive learn rate of 4e-4, the training set fails to converge.<br>
<a href="https://i.stack.imgur.com/pm2EV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pm2EV.png" alt="enter image description here" /></a></p>
</blockquote>
<p>Probably this is the reason why the <a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT paper</a> used 5e-5, 4e-5, 3e-5, and 2e-5 for <strong>fine-tuning</strong>.</p>
<blockquote>
<p>We use a batch size of 32 and fine-tune for 3 epochs over the data for all GLUE tasks. For each task, we selected the best fine-tuning learning rate (among 5e-5, 4e-5, 3e-5, and 2e-5) on the Dev set</p>
</blockquote>
<p>Note that the base model pre-training itself used a higher learning rate.</p>
<ul>
<li><a href="https://huggingface.co/bert-base-uncased#pretraining" rel="noreferrer">bert-base-uncased - pretraining</a></li>
</ul>
<blockquote>
<p>The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of <code>1e-4</code>, β1=<code>0.9</code> and β2=<code>0.999</code>, a weight decay of <code>0.01</code>, learning rate warmup for 10,000 steps and linear decay of the learning rate after.</p>
</blockquote>
<p>I will describe the 1st way as part of the 3rd approach below.</p>
<p>FYI:
<a href="https://huggingface.co/transformers/model_doc/distilbert.html#tfdistilbertmodel" rel="noreferrer">TFDistilBertModel</a> is the bare base model with the name <code>distilbert</code>.</p>
<pre><code>Model: "tf_distil_bert_model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
distilbert (TFDistilBertMain multiple 66362880
=================================================================
Total params: 66,362,880
Trainable params: 66,362,880
Non-trainable params: 0
</code></pre>
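<p>For reference, a summary like the one above can be produced with a couple of lines (a minimal sketch, assuming the transformers and tensorflow packages are installed):</p>
<pre><code>from transformers import TFDistilBertModel

# loads the bare base model; from_pretrained builds it, so summary() works
base = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
base.summary()
</code></pre>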
<hr />
<h1 id="nd-approach-oakk">2nd approach</h1>
<p>Huggingface takes the 2nd approach, as in <a href="https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-native-pytorch-tensorflow" rel="noreferrer">Fine-tuning with native PyTorch/TensorFlow</a>, where <code>TFDistilBertForSequenceClassification</code> adds the custom classification layer <code>classifier</code> on top of the base <code>distilbert</code> model, with the base kept trainable. The small-learning-rate requirement applies here as well, to avoid catastrophic forgetting.</p>
<pre><code>import tensorflow as tf
from transformers import TFDistilBertForSequenceClassification
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss) # can also use any keras loss fn
model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16)
</code></pre>
<pre><code>Model: "tf_distil_bert_for_sequence_classification_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
distilbert (TFDistilBertMain multiple 66362880
_________________________________________________________________
pre_classifier (Dense) multiple 590592
_________________________________________________________________
classifier (Dense) multiple 1538
_________________________________________________________________
dropout_59 (Dropout) multiple 0
=================================================================
Total params: 66,955,010
Trainable params: 66,955,010 <--- All parameters are trainable
Non-trainable params: 0
</code></pre>
<h3 id="implementation-of-the-2nd-approach-uizy">Implementation of the 2nd approach</h3>
<pre><code>import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from transformers import (
DistilBertTokenizerFast,
TFDistilBertForSequenceClassification,
)
DATA_COLUMN = 'text'
LABEL_COLUMN = 'category_index'
MAX_SEQUENCE_LENGTH = 512
LEARNING_RATE = 5e-5
BATCH_SIZE = 16
NUM_EPOCHS = 3
NUM_LABELS = 3    # set to the number of categories in train.csv
# --------------------------------------------------------------------------------
# Tokenizer
# --------------------------------------------------------------------------------
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
def tokenize(sentences, max_length=MAX_SEQUENCE_LENGTH, padding='max_length'):
"""Tokenize using the Huggingface tokenizer
Args:
sentences: String or list of string to tokenize
padding: Padding method ['do_not_pad'|'longest'|'max_length']
"""
return tokenizer(
sentences,
truncation=True,
padding=padding,
max_length=max_length,
return_tensors="tf"
)
# --------------------------------------------------------------------------------
# Load data
# --------------------------------------------------------------------------------
raw_train = pd.read_csv("./train.csv")
train_data, validation_data, train_label, validation_label = train_test_split(
raw_train[DATA_COLUMN].tolist(),
raw_train[LABEL_COLUMN].tolist(),
test_size=.2,
shuffle=True
)
# --------------------------------------------------------------------------------
# Prepare TF dataset
# --------------------------------------------------------------------------------
train_dataset = tf.data.Dataset.from_tensor_slices((
dict(tokenize(train_data)), # Convert BatchEncoding instance to dictionary
train_label
)).shuffle(1000).batch(BATCH_SIZE).prefetch(1)
validation_dataset = tf.data.Dataset.from_tensor_slices((
dict(tokenize(validation_data)),
validation_label
)).batch(BATCH_SIZE).prefetch(1)
# --------------------------------------------------------------------------------
# training
# --------------------------------------------------------------------------------
model = TFDistilBertForSequenceClassification.from_pretrained(
'distilbert-base-uncased',
num_labels=NUM_LABELS
)
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(
optimizer=optimizer,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(
    x=train_dataset,            # batches come from the tf.data pipeline, so no batch_size here
    validation_data=validation_dataset,
    epochs=NUM_EPOCHS,
)
</code></pre>
<hr />
<h1 id="rd-approach-zy3n">3rd approach</h1>
<h2 id="basics-rn1v">Basics</h2>
<p>Please note that the images are taken from <a href="http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/" rel="noreferrer">A Visual Guide to Using BERT for the First Time</a> and modified.</p>
<h3 id="tokenizer-oz6j">Tokenizer</h3>
<p><a href="https://huggingface.co/transformers/main_classes/tokenizer.html" rel="noreferrer">Tokenizer</a> generates an instance of BatchEncoding, which can be used like a Python dictionary and serves as the input to the BERT model.</p>
<ul>
<li><a href="https://huggingface.co/transformers/main_classes/tokenizer.html#batchencoding" rel="noreferrer">BatchEncoding</a></li>
</ul>
<blockquote>
<p>Holds the output of the encode_plus() and batch_encode() methods (tokens, attention_masks, etc).
<br>
This class is derived from a python dictionary and <strong>can be used as a dictionary</strong>. In addition, this class exposes utility methods to map from word/character space to token space.<br><br>
Parameters<br></p>
<ul>
<li>data (dict) – Dictionary of lists/arrays/tensors returned by the encode/batch_encode methods (‘input_ids’, ‘attention_mask’, etc.).</li>
</ul>
</blockquote>
<p>The <code>data</code> attribute of the class holds the generated tokens, which include the <code>input_ids</code> and <code>attention_mask</code> elements.</p>
<h3 id="input_ids-usmq">input_ids</h3>
<ul>
<li><a href="https://huggingface.co/transformers/glossary.html#input-ids" rel="noreferrer">input_ids</a></li>
</ul>
<blockquote>
<p>The input ids are often the only required parameters to be passed to the model as input. They are <strong>token indices, numerical representations of tokens</strong> building the sequences that will be used as input by the model.</p>
</blockquote>
<h3 id="attention_mask-fe8f">attention_mask</h3>
<ul>
<li><a href="https://huggingface.co/transformers/glossary.html#attention-mask" rel="noreferrer">Attention mask</a></li>
</ul>
<blockquote>
<p>This argument indicates to the model which tokens should be attended to, and which should not.</p>
</blockquote>
<p>If the attention_mask is <code>0</code>, the token id is ignored. For instance, if a sequence is padded to adjust the sequence length, the padded words should be ignored, hence their attention_mask values are 0.</p>
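<p>For illustration, a minimal sketch of what the tokenizer returns (the exact token ids depend on the vocabulary, so treat the numbers in the comments as placeholders):</p>
<pre><code>from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
encoding = tokenizer(
    ["hello world"],
    padding='max_length',
    truncation=True,
    max_length=8,
    return_tensors="tf"
)
print(encoding['input_ids'])       # e.g. [[101, 7592, 2088, 102, 0, 0, 0, 0]]
print(encoding['attention_mask'])  # [[1, 1, 1, 1, 0, 0, 0, 0]] - padded positions are masked out
</code></pre>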
<h3 id="special-tokens-9ral">Special Tokens</h3>
<p>BertTokenizer adds special tokens, enclosing a sequence with <code>[CLS]</code> and <code>[SEP]</code>. <code>[CLS]</code> represents <strong>Classification</strong> and <code>[SEP]</code> separates sequences. For Question Answering or Paraphrase tasks, <code>[SEP]</code> separates the two sentences to compare.</p>
<p><a href="https://huggingface.co/transformers/model_doc/bert.html#berttokenizer" rel="noreferrer">BertTokenizer</a></p>
<blockquote>
<ul>
<li>cls_token (str, optional, defaults to "<strong>[CLS]</strong>")<BR>The <strong>Classifier Token which is used when doing sequence classification</strong> (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.</li>
<li>sep_token (str, optional, defaults to "[SEP]")<BR>The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.</li>
</ul>
</blockquote>
<p><a href="http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/" rel="noreferrer">A Visual Guide to Using BERT for the First Time</a> shows the tokenization.</p>
<p><a href="https://i.stack.imgur.com/zQtff.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zQtff.png" alt="enter image description here" /></a></p>
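<p>A quick way to see the special tokens that the tokenizer adds (a small sketch; the output is shown as a comment):</p>
<pre><code>ids = tokenizer("this is a test")['input_ids']
print(tokenizer.convert_ids_to_tokens(ids))
# ['[CLS]', 'this', 'is', 'a', 'test', '[SEP]']
</code></pre>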
<h3 id="cls-1tyh">[CLS]</h3>
<p>The embedding vector for <strong><code>[CLS]</code></strong> in the output from the base model's final layer represents the classification that has been learned by the base model. Hence, feed the embedding vector of the <strong><code>[CLS]</code></strong> token into the classification layer added on top of the base model.</p>
<ul>
<li><a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></li>
</ul>
<blockquote>
<p>The first token of every sequence is always <code>a special classification token ([CLS])</code>. The final hidden state corresponding to this token is <strong>used as the aggregate sequence representation for classification tasks</strong>. Sentence pairs are packed together into a single sequence. We differentiate the sentences in two ways. First, we separate them with a special token ([SEP]). Second, we add a learned embedding to every token indicating whether it belongs to sentence A or sentence B.</p>
</blockquote>
<p>The model structure is illustrated below.</p>
<p><a href="https://i.stack.imgur.com/VAq7v.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/VAq7v.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/tjpn4.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/tjpn4.jpg" alt="enter image description here" /></a></p>
<h3 id="vector-size-qfoj">Vector size</h3>
<p>In the model <code>distilbert-base-uncased</code>, each token is embedded into a vector of size <strong>768</strong>. The shape of the output from the base model is <code>(batch_size, max_sequence_length, embedding_vector_size=768)</code>. This accords with the BERT paper about the BERT/BASE model (as indicated in distilbert-<em><strong>base</strong></em>-uncased).</p>
<ul>
<li><a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></li>
</ul>
<blockquote>
<p>BERT/BASE (L=12, H=<strong>768</strong>, A=12, Total Parameters=110M) and BERT/LARGE (L=24, H=1024, A=16, Total Parameters=340M).</p>
</blockquote>
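<p>A small sketch to confirm the output shape and the position of the <code>[CLS]</code> embedding (assuming the tokenizer from the previous snippets):</p>
<pre><code>from transformers import TFDistilBertModel

base = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
enc = tokenizer(["hello world"], padding='max_length', truncation=True, max_length=128, return_tensors="tf")
out = base(dict(enc))                         # TFBaseModelOutput
print(out.last_hidden_state.shape)            # (1, 128, 768) = (batch_size, max_sequence_length, 768)
cls_vector = out.last_hidden_state[:, 0, :]   # the [CLS] embedding
print(cls_vector.shape)                       # (1, 768)
</code></pre>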
<h3 id="base-model-tfdistilbertmodel-x5ki">Base Model - TFDistilBertModel</h3>
<ul>
<li><a href="https://towardsdatascience.com/hugging-face-transformers-fine-tuning-distilbert-for-binary-classification-tasks-490f1d192379" rel="noreferrer">Hugging Face Transformers: Fine-tuning DistilBERT for Binary Classification Tasks</a></li>
</ul>
<blockquote>
<p>TFDistilBertModel class to instantiate the base DistilBERT model <strong>without any specific head on top</strong> (as opposed to other classes such as TFDistilBertForSequenceClassification that do have an added classification head). <br><br>
We do not want any task-specific head attached because we simply want the pre-trained weights of the base model to provide a general understanding of the English language, and it will be our job to add our own classification head during the fine-tuning process in order to help the model distinguish between toxic comments.</p>
</blockquote>
<p><code>TFDistilBertModel</code> generates an instance of <code>TFBaseModelOutput</code> whose <code>last_hidden_state</code> parameter is the output from the model's last layer.</p>
<pre><code>TFBaseModelOutput([(
'last_hidden_state',
<tf.Tensor: shape=(batch_size, sequence_lendgth, 768), dtype=float32, numpy=array([[[...]]], dtype=float32)>
)])
</code></pre>
<ul>
<li><a href="https://huggingface.co/transformers/main_classes/output.html#tfbasemodeloutput" rel="noreferrer">TFBaseModelOutput</a></li>
</ul>
<blockquote>
<p>Parameters<br></p>
<ul>
<li>last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.</li>
</ul>
</blockquote>
<h2 id="implementation-8eyf">Implementation</h2>
<h3 id="python-modules-ndqs">Python modules</h3>
<pre><code>import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from transformers import (
DistilBertTokenizerFast,
TFDistilBertModel,
)
</code></pre>
<h3 id="configuration-ioqo">Configuration</h3>
<pre><code>TIMESTAMP = datetime.datetime.now().strftime("%Y%b%d%H%M").upper()
DATA_COLUMN = 'text'
LABEL_COLUMN = 'category_index'
MAX_SEQUENCE_LENGTH = 512 # Max length allowed for BERT is 512.
raw_train = pd.read_csv("./train.csv")  # must be loaded before deriving NUM_LABELS (also used below)
NUM_LABELS = len(raw_train[LABEL_COLUMN].unique())
MODEL_NAME = 'distilbert-base-uncased'
NUM_BASE_MODEL_OUTPUT = 768
# Flag to freeze base model
FREEZE_BASE = True
# Flag to add custom classification heads
USE_CUSTOM_HEAD = True
if USE_CUSTOM_HEAD == False:
# Make the base trainable when no classification head exists.
FREEZE_BASE = False
BATCH_SIZE = 16
NUM_EPOCHS = 3  # number of training epochs (assumed value); fit() below expects this constant
LEARNING_RATE = 1e-2 if FREEZE_BASE else 5e-5
L2 = 0.01
</code></pre>
<h3 id="tokenizer-1-xigx">Tokenizer</h3>
<pre><code>tokenizer = DistilBertTokenizerFast.from_pretrained(MODEL_NAME)
def tokenize(sentences, max_length=MAX_SEQUENCE_LENGTH, padding='max_length'):
"""Tokenize using the Huggingface tokenizer
Args:
sentences: String or list of string to tokenize
padding: Padding method ['do_not_pad'|'longest'|'max_length']
"""
return tokenizer(
sentences,
truncation=True,
padding=padding,
max_length=max_length,
return_tensors="tf"
)
</code></pre>
<h3 id="input-layer-7zwx">Input layer</h3>
<p>The base model expects <code>input_ids</code> and <code>attention_mask</code>, each with shape <code>(max_sequence_length,)</code>. Generate Keras tensors for them with an <code>Input</code> layer, respectively.</p>
<pre><code># Inputs for token indices and attention masks
input_ids = tf.keras.layers.Input(shape=(MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='input_ids')
attention_mask = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='attention_mask')
</code></pre>
<h3 id="base-model-layer-39vu">Base model layer</h3>
<p>Generate the output from the base model. The base model generates <code>TFBaseModelOutput</code>. Feed the embedding of <strong><code>[CLS]</code></strong> to the next layer.</p>
<pre><code>base = TFDistilBertModel.from_pretrained(
MODEL_NAME,
num_labels=NUM_LABELS
)
# Freeze the base model weights.
if FREEZE_BASE:
for layer in base.layers:
layer.trainable = False
base.summary()
# [CLS] embedding is last_hidden_state[:, 0, :]
output = base([input_ids, attention_mask]).last_hidden_state[:, 0, :]
</code></pre>
<h3 id="classification-layers-756t">Classification layers</h3>
<pre><code>if USE_CUSTOM_HEAD:
# -------------------------------------------------------------------------------
    # Classification layer 01
# --------------------------------------------------------------------------------
output = tf.keras.layers.Dropout(
rate=0.15,
name="01_dropout",
)(output)
output = tf.keras.layers.Dense(
units=NUM_BASE_MODEL_OUTPUT,
kernel_initializer='glorot_uniform',
activation=None,
name="01_dense_relu_no_regularizer",
)(output)
output = tf.keras.layers.BatchNormalization(
name="01_bn"
)(output)
output = tf.keras.layers.Activation(
"relu",
name="01_relu"
)(output)
# --------------------------------------------------------------------------------
    # Classification layer 02
# --------------------------------------------------------------------------------
output = tf.keras.layers.Dense(
units=NUM_BASE_MODEL_OUTPUT,
kernel_initializer='glorot_uniform',
activation=None,
name="02_dense_relu_no_regularizer",
)(output)
output = tf.keras.layers.BatchNormalization(
name="02_bn"
)(output)
output = tf.keras.layers.Activation(
"relu",
name="02_relu"
)(output)
</code></pre>
<h3 id="softmax-layer-79sx">Softmax Layer</h3>
<pre><code>output = tf.keras.layers.Dense(
units=NUM_LABELS,
kernel_initializer='glorot_uniform',
kernel_regularizer=tf.keras.regularizers.l2(l2=L2),
activation='softmax',
name="softmax"
)(output)
</code></pre>
<h3 id="final-custom-model-bclf">Final Custom Model</h3>
<pre><code>name = f"{TIMESTAMP}_{MODEL_NAME.upper()}"
model = tf.keras.models.Model(inputs=[input_ids, attention_mask], outputs=output, name=name)
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
metrics=['accuracy']
)
model.summary()
---
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_ids (InputLayer) [(None, 256)] 0
__________________________________________________________________________________________________
attention_mask (InputLayer) [(None, 256)] 0
__________________________________________________________________________________________________
tf_distil_bert_model (TFDistilB TFBaseModelOutput(la 66362880 input_ids[0][0]
attention_mask[0][0]
__________________________________________________________________________________________________
tf.__operators__.getitem_1 (Sli (None, 768) 0 tf_distil_bert_model[1][0]
__________________________________________________________________________________________________
01_dropout (Dropout) (None, 768) 0 tf.__operators__.getitem_1[0][0]
__________________________________________________________________________________________________
01_dense_relu_no_regularizer (D (None, 768) 590592 01_dropout[0][0]
__________________________________________________________________________________________________
01_bn (BatchNormalization) (None, 768) 3072 01_dense_relu_no_regularizer[0][0
__________________________________________________________________________________________________
01_relu (Activation) (None, 768) 0 01_bn[0][0]
__________________________________________________________________________________________________
02_dense_relu_no_regularizer (D (None, 768) 590592 01_relu[0][0]
__________________________________________________________________________________________________
02_bn (BatchNormalization) (None, 768) 3072 02_dense_relu_no_regularizer[0][0
__________________________________________________________________________________________________
02_relu (Activation) (None, 768) 0 02_bn[0][0]
__________________________________________________________________________________________________
softmax (Dense) (None, 2) 1538 02_relu[0][0]
==================================================================================================
Total params: 67,551,746
Trainable params: 1,185,794
Non-trainable params: 66,365,952 <--- Base BERT model is frozen
</code></pre>
<h3 id="data-allocation-e53l">Data allocation</h3>
<pre><code># --------------------------------------------------------------------------------
# Split data into training and validation
# --------------------------------------------------------------------------------
raw_train = pd.read_csv("./train.csv")
train_data, validation_data, train_label, validation_label = train_test_split(
raw_train[DATA_COLUMN].tolist(),
raw_train[LABEL_COLUMN].tolist(),
test_size=.2,
shuffle=True
)
# X = dict(tokenize(train_data))
# Y = tf.convert_to_tensor(train_label)
X = tf.data.Dataset.from_tensor_slices((
dict(tokenize(train_data)), # Convert BatchEncoding instance to dictionary
train_label
)).batch(BATCH_SIZE).prefetch(1)
V = tf.data.Dataset.from_tensor_slices((
dict(tokenize(validation_data)), # Convert BatchEncoding instance to dictionary
validation_label
)).batch(BATCH_SIZE).prefetch(1)
</code></pre>
<h3 id="train-w43l">Train</h3>
<pre><code># --------------------------------------------------------------------------------
# Train the model
# https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit
# Input data x can be a dict mapping input names to the corresponding array/tensors,
# if the model has named inputs. Beware of the "names". y should be consistent with x
# (you cannot have Numpy inputs and tensor targets, or inversely).
# --------------------------------------------------------------------------------
history = model.fit(
    x=X,                        # tf.data.Dataset yielding (tokenized features dict, label) batches
    validation_data=V,
    epochs=NUM_EPOCHS,
)
</code></pre>
<p>To implement the 1st approach, change the configuration as below.</p>
<pre><code>USE_CUSTOM_HEAD = False
</code></pre>
<p>Then <code>FREEZE_BASE</code> is changed to <code>False</code> and <code>LEARNING_RATE</code> is changed to <code>5e-5</code>, which will run further pre-training on the base BERT model.</p>
<h3 id="saving-the-model-ib9t">Saving the model</h3>
<p>For the 3rd approach, saving the model will cause issues. The <a href="https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.save_pretrained" rel="noreferrer">save_pretrained</a> method of the Huggingface Model cannot be used, as the model is not a direct subclass of the Huggingface <a href="https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel" rel="noreferrer">PreTrainedModel</a>.</p>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model" rel="noreferrer">Keras save_model</a> causes an error with the default <code>save_traces=True</code>, and causes a different error with <code>save_traces=False</code> when loading the model with <a href="https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model" rel="noreferrer">Keras load_model</a>.</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-71-01d66991d115> in <module>()
----> 1 tf.keras.models.load_model(MODEL_DIRECTORY)
11 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/load.py in _unable_to_call_layer_due_to_serialization_issue(layer, *unused_args, **unused_kwargs)
865 'recorded when the object is called, and used when saving. To manually '
866 'specify the input shape/dtype, decorate the call function with '
--> 867 '`@tf.function(input_signature=...)`.'.format(layer.name, type(layer)))
868
869
ValueError: Cannot call custom layer tf_distil_bert_model of type <class 'tensorflow.python.keras.saving.saved_model.load.TFDistilBertModel'>, because the call function was not serialized to the SavedModel.Please try one of the following methods to fix this issue:
(1) Implement `get_config` and `from_config` in the layer/model class, and pass the object to the `custom_objects` argument when loading the model. For more details, see: https://www.tensorflow.org/guide/keras/save_and_serialize
(2) Ensure that the subclassed model or layer overwrites `call` and not `__call__`. The input shape and dtype will be automatically recorded when the object is called, and used when saving. To manually specify the input shape/dtype, decorate the call function with `@tf.function(input_signature=...)`.
</code></pre>
<p>Only <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#save_weights" rel="noreferrer">Keras Model save_weights</a> worked as far as I tested.</p>
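<p>A possible workaround (a sketch): save only the weights, rebuild the same architecture in code, and load the weights back. <code>build_model()</code> below is a hypothetical helper that re-runs the model-building code above.</p>
<pre><code># Save only the weights (the architecture is not serialized).
model.save_weights('./checkpoints/distilbert_custom_head')

# Later / in another process: rebuild the exact same architecture in code ...
restored_model = build_model()   # hypothetical helper wrapping the model definition above
# ... then load the weights back.
restored_model.load_weights('./checkpoints/distilbert_custom_head')
</code></pre>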
<h1 id="experiments-5gms">Experiments</h1>
<p>As far as I tested with the <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge" rel="noreferrer">Toxic Comment Classification Challenge</a>, the 1st approach gave better recall (identifying true toxic and true non-toxic comments). The code can be accessed below. Please provide corrections/suggestions if you spot anything.</p>
<ul>
<li><a href="https://nbviewer.jupyter.org/github/omontasama/nlp-huggingface/blob/main/fine_tuning/huggingface_fine_tuning.ipynb" rel="noreferrer">Code for 1st and 3rd approach</a></li>
</ul>
<hr />
<h1 id="related-qa1d">Related</h1>
<ul>
<li><a href="https://www.youtube.com/watch?v=_eSGWNqKeeY" rel="noreferrer">BERT Document Classification Tutorial with Code</a> - Fine tuning using TFDistilBertForSequenceClassification and Pytorch</li>
<li><a href="https://towardsdatascience.com/hugging-face-transformers-fine-tuning-distilbert-for-binary-classification-tasks-490f1d192379" rel="noreferrer">Hugging Face Transformers: Fine-tuning DistilBERT for Binary Classification Tasks</a> - Fine tuning using TFDistilBertModel</li>
</ul> | 2021-07-06 02:49:09.273000+00:00 | 2021-07-20 13:17:18.390000+00:00 | 2021-07-20 13:17:18.390000+00:00 | null | 60,463,829 | <p>I am working on a TextClassification problem, for which I am trying to traing my model on TFBertForSequenceClassification given in huggingface-transformers library.</p>
<p>I followed the example given on their <a href="https://github.com/huggingface/transformers#quick-tour-tf-20-training-and-pytorch-interoperability" rel="nofollow noreferrer">github</a> page, and I am able to run the sample code with the given sample data using <code>tensorflow_datasets.load('glue/mrpc')</code>.
However, I am unable to find an example on how to load my own custom data and pass it in
<code>model.fit(train_dataset, epochs=2, steps_per_epoch=115, validation_data=valid_dataset, validation_steps=7)</code>. </p>
<p>How can I define my own X, tokenize it, and prepare train_dataset with my X and Y, where X represents my input text and Y represents the classification category of a given X?</p>
<p>Sample Training dataframe : </p>
<pre><code> text category_index
0 Assorted Print Joggers - Pack of 2 ,/ Gray Pri... 0
1 "Buckle" ( Matt ) for 35 mm Width Belt 0
2 (Gagam 07) Barcelona Football Jersey Home 17 1... 2
3 (Pack of 3 Pair) Flocklined Reusable Rubber Ha... 1
4 (Summer special Offer)Firststep new born baby ... 0
</code></pre> | 2020-02-29 09:49:46.150000+00:00 | 2022-06-16 17:42:44.827000+00:00 | 2020-08-03 01:00:29.153000+00:00 | nlp|pytorch|tensorflow2.0|huggingface-transformers|bert-language-model | ['https://arxiv.org/abs/1810.04805', 'https://arxiv.org/abs/1905.05583', 'https://i.stack.imgur.com/pm2EV.png', 'https://arxiv.org/abs/1810.04805', 'https://huggingface.co/bert-base-uncased#pretraining', 'https://huggingface.co/transformers/model_doc/distilbert.html#tfdistilbertmodel', 'https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-native-pytorch-tensorflow', 'http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/', 'https://huggingface.co/transformers/main_classes/tokenizer.html', 'https://huggingface.co/transformers/main_classes/tokenizer.html#batchencoding', 'https://huggingface.co/transformers/glossary.html#input-ids', 'https://huggingface.co/transformers/glossary.html#attention-mask', 'https://huggingface.co/transformers/model_doc/bert.html#berttokenizer', 'http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/', 'https://i.stack.imgur.com/zQtff.png', 'https://arxiv.org/abs/1810.04805', 'https://i.stack.imgur.com/VAq7v.jpg', 'https://i.stack.imgur.com/tjpn4.jpg', 'https://arxiv.org/abs/1810.04805', 'https://towardsdatascience.com/hugging-face-transformers-fine-tuning-distilbert-for-binary-classification-tasks-490f1d192379', 'https://huggingface.co/transformers/main_classes/output.html#tfbasemodeloutput', 'https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.save_pretrained', 'https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel', 'https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model', 'https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model', 'https://www.tensorflow.org/api_docs/python/tf/keras/Model#save_weights', 'https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge', 'https://nbviewer.jupyter.org/github/omontasama/nlp-huggingface/blob/main/fine_tuning/huggingface_fine_tuning.ipynb', 'https://www.youtube.com/watch?v=_eSGWNqKeeY', 'https://towardsdatascience.com/hugging-face-transformers-fine-tuning-distilbert-for-binary-classification-tasks-490f1d192379'] | 30 |
47,153,496 | <blockquote>
<p>The choice of the softmax function seems <strong>somehow arbitrary</strong> as there are many other possible normalizing functions. It is thus unclear why the log-softmax loss would perform better than other loss alternatives.</p>
</blockquote>
<p>From "<strong>An Exploration of Softmax Alternatives Belonging to the Spherical Loss Family</strong>" <a href="https://arxiv.org/abs/1511.05042" rel="noreferrer">https://arxiv.org/abs/1511.05042</a></p>
<p>The authors explored some other functions among which are Taylor expansion of <code>exp</code> and so called spherical softmax and found out that sometimes they might perform better than usual <code>softmax</code>.</p> | 2017-11-07 08:49:07.220000+00:00 | 2017-11-07 08:49:07.220000+00:00 | null | null | 17,187,507 | <p>In the output layer of a neural network, it is typical to use the softmax function to approximate a probability distribution:</p>
<p><img src="https://i.stack.imgur.com/r1MZm.png" alt="enter image description here"></p>
<p>This is expensive to compute because of the exponents. Why not simply perform a Z transform so that all outputs are positive, and then normalise just by dividing all outputs by the sum of all outputs?</p> | 2013-06-19 09:20:26.780000+00:00 | 2021-05-03 15:07:05.163000+00:00 | 2017-01-09 16:18:48.030000+00:00 | math|neural-network|softmax | ['https://arxiv.org/abs/1511.05042'] | 1 |
65,961,634 | <p>So I solved this issue a while ago but forgot to post an answer on Stack Overflow, so I will simply post my code here; it should work reasonably well.
Some disclaimers:</p>
<ul>
<li>I am not quite sure if it works since I did this a year ago</li>
<li>it's for 128x128 px MNIST images</li>
<li>it's not a vanilla GAN; I used various optimization techniques</li>
<li>If you want to use it you need to change various details, such as the training dataset</li>
</ul>
<p>Resources:</p>
<ul>
<li><a href="https://arxiv.org/abs/1903.06048" rel="nofollow noreferrer">Multi-Scale Gradients</a></li>
<li><a href="https://www.inference.vc/instance-noise-a-trick-for-stabilising-gan-training/" rel="nofollow noreferrer">Instance Noise</a></li>
<li><a href="https://github.com/soumith/ganhacks/blob/master/README.md" rel="nofollow noreferrer">Various tricks I used</a></li>
<li><a href="https://towardsdatascience.com/10-lessons-i-learned-training-generative-adversarial-networks-gans-for-a-year-c9071159628" rel="nofollow noreferrer">More tricks</a></li>
</ul>
<pre><code>import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import pytorch_lightning as pl
from pytorch_lightning import loggers
from numpy.random import choice
import os
from pathlib import Path
import shutil
from collections import OrderedDict
# custom weights initialization called on netG and netD
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
nn.init.constant_(m.bias.data, 0)
# randomly flip some labels
def noisy_labels(y, p_flip=0.05):  # flip labels with 5% probability
# determine the number of labels to flip
n_select = int(p_flip * y.shape[0])
# choose labels to flip
flip_ix = choice([i for i in range(y.shape[0])], size=n_select)
# invert the labels in place
y[flip_ix] = 1 - y[flip_ix]
return y
class AddGaussianNoise(object):
def __init__(self, mean=0.0, std=0.1):
self.std = std
self.mean = mean
def __call__(self, tensor):
tensor = tensor.cuda()
return tensor + (torch.randn(tensor.size()) * self.std + self.mean).cuda()
def __repr__(self):
return self.__class__.__name__ + '(mean={0}, std={1})'.format(self.mean, self.std)
def resize2d(img, size):
return (F.adaptive_avg_pool2d(img, size).data).cuda()
def get_valid_labels(img):
return ((0.8 - 1.1) * torch.rand(img.shape[0], 1, 1, 1) + 1.1).cuda() # soft labels
def get_unvalid_labels(img):
return (noisy_labels((0.0 - 0.3) * torch.rand(img.shape[0], 1, 1, 1) + 0.3)).cuda() # soft labels
class Generator(pl.LightningModule):
def __init__(self, ngf, nc, latent_dim):
super(Generator, self).__init__()
self.ngf = ngf
self.latent_dim = latent_dim
self.nc = nc
self.fc0 = nn.Sequential(
# input is Z, going into a convolution
nn.utils.spectral_norm(nn.ConvTranspose2d(latent_dim, ngf * 16, 4, 1, 0, bias=False)),
nn.LeakyReLU(0.2, inplace=True),
nn.BatchNorm2d(ngf * 16)
)
self.fc1 = nn.Sequential(
# state size. (ngf*8) x 4 x 4
nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 16, ngf * 8, 4, 2, 1, bias=False)),
nn.LeakyReLU(0.2, inplace=True),
nn.BatchNorm2d(ngf * 8)
)
self.fc2 = nn.Sequential(
# state size. (ngf*4) x 8 x 8
nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False)),
nn.LeakyReLU(0.2, inplace=True),
nn.BatchNorm2d(ngf * 4)
)
self.fc3 = nn.Sequential(
# state size. (ngf*2) x 16 x 16
nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False)),
nn.LeakyReLU(0.2, inplace=True),
nn.BatchNorm2d(ngf * 2)
)
self.fc4 = nn.Sequential(
# state size. (ngf) x 32 x 32
nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False)),
nn.LeakyReLU(0.2, inplace=True),
nn.BatchNorm2d(ngf)
)
self.fc5 = nn.Sequential(
# state size. (nc) x 64 x 64
nn.utils.spectral_norm(nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False)),
nn.Tanh()
)
# state size. (nc) x 128 x 128
# For Multi-Scale Gradient
# Converting the intermediate layers into images
self.fc0_r = nn.Conv2d(ngf * 16, self.nc, 1)
self.fc1_r = nn.Conv2d(ngf * 8, self.nc, 1)
self.fc2_r = nn.Conv2d(ngf * 4, self.nc, 1)
self.fc3_r = nn.Conv2d(ngf * 2, self.nc, 1)
self.fc4_r = nn.Conv2d(ngf, self.nc, 1)
def forward(self, input):
x_0 = self.fc0(input)
x_1 = self.fc1(x_0)
x_2 = self.fc2(x_1)
x_3 = self.fc3(x_2)
x_4 = self.fc4(x_3)
x_5 = self.fc5(x_4)
# For Multi-Scale Gradient
# Converting the intermediate layers into images
x_0_r = self.fc0_r(x_0)
x_1_r = self.fc1_r(x_1)
x_2_r = self.fc2_r(x_2)
x_3_r = self.fc3_r(x_3)
x_4_r = self.fc4_r(x_4)
return x_5, x_0_r, x_1_r, x_2_r, x_3_r, x_4_r
class Discriminator(pl.LightningModule):
def __init__(self, ndf, nc):
super(Discriminator, self).__init__()
self.nc = nc
self.ndf = ndf
self.fc0 = nn.Sequential(
# input is (nc) x 128 x 128
nn.utils.spectral_norm(nn.Conv2d(nc, ndf, 4, 2, 1, bias=False)),
nn.LeakyReLU(0.2, inplace=True)
)
self.fc1 = nn.Sequential(
# state size. (ndf) x 64 x 64
nn.utils.spectral_norm(nn.Conv2d(ndf + nc, ndf * 2, 4, 2, 1, bias=False)),
# "+ nc" because of multi scale gradient
nn.LeakyReLU(0.2, inplace=True),
nn.BatchNorm2d(ndf * 2)
)
self.fc2 = nn.Sequential(
# state size. (ndf*2) x 32 x 32
nn.utils.spectral_norm(nn.Conv2d(ndf * 2 + nc, ndf * 4, 4, 2, 1, bias=False)),
# "+ nc" because of multi scale gradient
nn.LeakyReLU(0.2, inplace=True),
nn.BatchNorm2d(ndf * 4)
)
self.fc3 = nn.Sequential(
# state size. (ndf*4) x 16 x 16e
nn.utils.spectral_norm(nn.Conv2d(ndf * 4 + nc, ndf * 8, 4, 2, 1, bias=False)),
# "+ nc" because of multi scale gradient
nn.LeakyReLU(0.2, inplace=True),
nn.BatchNorm2d(ndf * 8),
)
self.fc4 = nn.Sequential(
# state size. (ndf*8) x 8 x 8
nn.utils.spectral_norm(nn.Conv2d(ndf * 8 + nc, ndf * 16, 4, 2, 1, bias=False)),
nn.LeakyReLU(0.2, inplace=True),
nn.BatchNorm2d(ndf * 16)
)
self.fc5 = nn.Sequential(
# state size. (ndf*8) x 4 x 4
nn.utils.spectral_norm(nn.Conv2d(ndf * 16 + nc, 1, 4, 1, 0, bias=False)),
nn.Sigmoid()
)
# state size. 1 x 1 x 1
def forward(self, input, detach_or_not):
        # When we train in combination with the generator we use multi-scale gradients.
x, x_0_r, x_1_r, x_2_r, x_3_r, x_4_r = input
if detach_or_not:
x = x.detach()
x_0 = self.fc0(x)
x_0 = torch.cat((x_0, x_4_r), dim=1) # Concat Multi-Scale Gradient
x_1 = self.fc1(x_0)
x_1 = torch.cat((x_1, x_3_r), dim=1) # Concat Multi-Scale Gradient
x_2 = self.fc2(x_1)
x_2 = torch.cat((x_2, x_2_r), dim=1) # Concat Multi-Scale Gradient
x_3 = self.fc3(x_2)
x_3 = torch.cat((x_3, x_1_r), dim=1) # Concat Multi-Scale Gradient
x_4 = self.fc4(x_3)
x_4 = torch.cat((x_4, x_0_r), dim=1) # Concat Multi-Scale Gradient
x_5 = self.fc5(x_4)
return x_5
class DCGAN(pl.LightningModule):
def __init__(self, hparams, checkpoint_folder, experiment_name):
super().__init__()
self.hparams = hparams
self.checkpoint_folder = checkpoint_folder
self.experiment_name = experiment_name
# networks
self.generator = Generator(ngf=hparams.ngf, nc=hparams.nc, latent_dim=hparams.latent_dim)
self.discriminator = Discriminator(ndf=hparams.ndf, nc=hparams.nc)
self.generator.apply(weights_init)
self.discriminator.apply(weights_init)
# cache for generated images
self.generated_imgs = None
self.last_imgs = None
# For experience replay
self.exp_replay_dis = torch.tensor([])
def forward(self, z):
return self.generator(z)
def adversarial_loss(self, y_hat, y):
return F.binary_cross_entropy(y_hat, y)
def training_step(self, batch, batch_nb, optimizer_idx):
# For adding Instance noise for more visit: https://www.inference.vc/instance-noise-a-trick-for-stabilising-gan-training/
std_gaussian = max(0, self.hparams.level_of_noise - (
(self.hparams.level_of_noise * 2) * (self.current_epoch / self.hparams.epochs)))
AddGaussianNoiseInst = AddGaussianNoise(std=std_gaussian) # the noise decays over time
imgs, _ = batch
imgs = AddGaussianNoiseInst(imgs) # Adding instance noise to real images
self.last_imgs = imgs
# train generator
if optimizer_idx == 0:
# sample noise
z = torch.randn(imgs.shape[0], self.hparams.latent_dim, 1, 1).cuda()
# generate images
self.generated_imgs = self(z)
# ground truth result (ie: all fake)
g_loss = self.adversarial_loss(self.discriminator(self.generated_imgs, False), get_valid_labels(self.generated_imgs[0])) # adversarial loss is binary cross-entropy; [0] is the image of the last layer
tqdm_dict = {'g_loss': g_loss}
log = {'g_loss': g_loss, "std_gaussian": std_gaussian}
output = OrderedDict({
'loss': g_loss,
'progress_bar': tqdm_dict,
'log': log
})
return output
# train discriminator
if optimizer_idx == 1:
# Measure discriminator's ability to classify real from generated samples
# how well can it label as real?
real_loss = self.adversarial_loss(
self.discriminator([imgs, resize2d(imgs, 4), resize2d(imgs, 8), resize2d(imgs, 16), resize2d(imgs, 32), resize2d(imgs, 64)],
False), get_valid_labels(imgs))
fake_loss = self.adversarial_loss(self.discriminator(self.generated_imgs, True), get_unvalid_labels(
self.generated_imgs[0])) # how well can it label as fake?; [0] is the image of the last layer
# discriminator loss is the average of these
d_loss = (real_loss + fake_loss) / 2
tqdm_dict = {'d_loss': d_loss}
log = {'d_loss': d_loss, "std_gaussian": std_gaussian}
output = OrderedDict({
'loss': d_loss,
'progress_bar': tqdm_dict,
'log': log
})
return output
def configure_optimizers(self):
lr_gen = self.hparams.lr_gen
lr_dis = self.hparams.lr_dis
b1 = self.hparams.b1
b2 = self.hparams.b2
opt_g = torch.optim.Adam(self.generator.parameters(), lr=lr_gen, betas=(b1, b2))
opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=lr_dis, betas=(b1, b2))
return [opt_g, opt_d], []
def backward(self, trainer, loss, optimizer, optimizer_idx: int) -> None:
loss.backward(retain_graph=True)
def train_dataloader(self):
# transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)),
# transforms.ToTensor(),
# transforms.Normalize([0.5], [0.5])])
# dataset = torchvision.datasets.MNIST(os.getcwd(), train=False, download=True, transform=transform)
# return DataLoader(dataset, batch_size=self.hparams.batch_size)
# transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)),
# transforms.ToTensor(),
# transforms.Normalize([0.5], [0.5])
# ])
# train_dataset = torchvision.datasets.ImageFolder(
# root="./drive/My Drive/datasets/flower_dataset/",
# # root="./drive/My Drive/datasets/ghibli_dataset_small_overfit/",
# transform=transform
# )
# return DataLoader(train_dataset, num_workers=self.hparams.num_workers, shuffle=True,
# batch_size=self.hparams.batch_size)
transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)),
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5])
])
train_dataset = torchvision.datasets.ImageFolder(
root="ghibli_dataset_small_overfit/",
transform=transform
)
return DataLoader(train_dataset, num_workers=self.hparams.num_workers, shuffle=True,
batch_size=self.hparams.batch_size)
def on_epoch_end(self):
z = torch.randn(4, self.hparams.latent_dim, 1, 1).cuda()
# match gpu device (or keep as cpu)
if self.on_gpu:
z = z.cuda(self.last_imgs.device.index)
# log sampled images
sample_imgs = self.generator(z)[0]
torchvision.utils.save_image(sample_imgs, f'generated_images_epoch{self.current_epoch}.png')
# save model
if self.current_epoch % self.hparams.save_model_every_epoch == 0:
trainer.save_checkpoint(
self.checkpoint_folder + "/" + self.experiment_name + "_epoch_" + str(self.current_epoch) + ".ckpt")
from argparse import Namespace
args = {
'batch_size': 128, # batch size
    'lr_gen': 0.0003,  # TTUR; learning rate of both networks; tested value: 0.0002
    'lr_dis': 0.0003,  # TTUR; learning rate of both networks; tested value: 0.0002
'b1': 0.5, # Momentum for adam; tested value(dcgan paper): 0.5
'b2': 0.999, # Momentum for adam; tested value(dcgan paper): 0.999
'latent_dim': 256, # tested value which worked(in V4_1): 100
'nc': 3, # number of color channels
'ndf': 8, # number of discriminator features
'ngf': 8, # number of generator features
    'epochs': 4,  # the maximal amount of epochs the algorithm should run
'save_model_every_epoch': 1, # how often we save our model
'image_size': 128, # size of the image
'num_workers': 3,
    'level_of_noise': 0.1,  # how much instance noise we introduce (std; tested values: 0.15 and 0.1)
'experience_save_per_batch': 1, # this value should be very low; tested value which works: 1
    'experience_batch_size': 50  # this value shouldn't be too high; tested value which works: 50
}
hparams = Namespace(**args)
# Parameters
experiment_name = "DCGAN_6_2_MNIST_128px"
dataset_name = "mnist"
checkpoint_folder = "DCGAN/"
tags = ["DCGAN", "128x128"]
dirpath = Path(checkpoint_folder)
# defining net
net = DCGAN(hparams, checkpoint_folder, experiment_name)
torch.autograd.set_detect_anomaly(True)
trainer = pl.Trainer( # resume_from_checkpoint="DCGAN_V4_2_GHIBLI_epoch_999.ckpt",
max_epochs=args["epochs"],
gpus=1
)
trainer.fit(net)
</code></pre>
<p>``</p> | 2021-01-29 20:59:48.240000+00:00 | 2021-01-29 20:59:48.240000+00:00 | null | null | 60,421,475 | <p><strong>Introduction:</strong></p>
<p>I am trying to get a CDCGAN (Conditional Deep Convolutional Generative Adversarial Network) to work on the MNIST dataset which should be fairly easy considering that the library (PyTorch) I am using has a tutorial on its website.<br />
But I can't seem to get it working; it just produces garbage, or the model collapses, or both.</p>
<p><strong>What I tried:</strong></p>
<ul>
<li>making the model Conditional semi-supervised learning</li>
<li>using batch norm</li>
<li>using dropout on each layer besides the input/output layer on the generator and discriminator</li>
<li>label smoothing to combat overconfidence</li>
<li>adding noise to the images (I guess you call this instance noise) to get a better data distribution</li>
<li>use leaky relu to avoid vanishing gradients</li>
<li>using a replay buffer to combat forgetting of learned stuff and overfitting</li>
<li>playing with hyperparameters</li>
<li>comparing it to the model from PyTorch tutorial</li>
<li><a href="https://github.com/soumith/ganhacks" rel="noreferrer">basically what I did besides some things like the Embedding layer etc.</a></li>
</ul>
<p><strong>Images my Model generated:</strong></p>
<p><em>Hyperparameters:</em></p>
<p>batch_size=50, learning_rate_discriminator=0.0001, learning_rate_generator=0.0003, shuffle=True, ndf=64, ngf=64, dropout=0.5<br />
<img src="https://i.stack.imgur.com/k4kXy.png" alt="enter image description here" />
<img src="https://i.stack.imgur.com/DhpDI.png" alt="enter image description here" />
<img src="https://i.stack.imgur.com/uUtUG.png" alt="enter image description here" />
<img src="https://i.stack.imgur.com/kDvdN.png" alt="enter image description here" /></p>
<p>batch_size=50, learning_rate_discriminator=0.0003, learning_rate_generator=0.0003, shuffle=True, ndf=64, ngf=64, dropout=0<br />
<img src="https://i.stack.imgur.com/ZARxo.png" alt="enter image description here" />
<img src="https://i.stack.imgur.com/2DTJX.png" alt="enter image description here" />
<img src="https://i.stack.imgur.com/9rA5O.png" alt="enter image description here" />
<img src="https://i.stack.imgur.com/UCfen.png" alt="enter image description here" /></p>
<p><strong>Images <a href="https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html" rel="noreferrer">Pytorch tutorial Model</a> generated:</strong></p>
<p><a href="https://github.com/pytorch/examples/tree/master/dcgan" rel="noreferrer">Code for the pytorch tutorial dcgan model</a><br />
As comparison here are the images from the DCGAN from the pytorch turoial:<br />
<img src="https://i.stack.imgur.com/l1Vtl.png" alt="enter image description here" />
<img src="https://i.stack.imgur.com/Tg9jW.png" alt="enter image description here" />
<img src="https://i.stack.imgur.com/R7etI.png" alt="enter image description here" /></p>
<p><strong>My Code:</strong></p>
<pre><code>import torch
import torch.nn as nn
import torchvision
from torchvision import transforms, datasets
import torch.nn.functional as F
from torch import optim as optim
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import os
import time
class Discriminator(torch.nn.Module):
def __init__(self, ndf=16, dropout_value=0.5): # ndf feature map discriminator
super().__init__()
self.ndf = ndf
self.droupout_value = dropout_value
self.condi = nn.Sequential(
nn.Linear(in_features=10, out_features=64 * 64)
)
self.hidden0 = nn.Sequential(
nn.Conv2d(in_channels=2, out_channels=self.ndf, kernel_size=4, stride=2, padding=1, bias=False),
nn.LeakyReLU(0.2),
)
self.hidden1 = nn.Sequential(
nn.Conv2d(in_channels=self.ndf, out_channels=self.ndf * 2, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(self.ndf * 2),
nn.LeakyReLU(0.2),
nn.Dropout(self.droupout_value)
)
self.hidden2 = nn.Sequential(
nn.Conv2d(in_channels=self.ndf * 2, out_channels=self.ndf * 4, kernel_size=4, stride=2, padding=1, bias=False),
#nn.BatchNorm2d(self.ndf * 4),
nn.LeakyReLU(0.2),
nn.Dropout(self.droupout_value)
)
self.hidden3 = nn.Sequential(
nn.Conv2d(in_channels=self.ndf * 4, out_channels=self.ndf * 8, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(self.ndf * 8),
nn.LeakyReLU(0.2),
nn.Dropout(self.droupout_value)
)
self.out = nn.Sequential(
nn.Conv2d(in_channels=self.ndf * 8, out_channels=1, kernel_size=4, stride=1, padding=0, bias=False),
torch.nn.Sigmoid()
)
def forward(self, x, y):
y = self.condi(y.view(-1, 10))
y = y.view(-1, 1, 64, 64)
x = torch.cat((x, y), dim=1)
x = self.hidden0(x)
x = self.hidden1(x)
x = self.hidden2(x)
x = self.hidden3(x)
x = self.out(x)
return x
class Generator(torch.nn.Module):
def __init__(self, n_features=100, ngf=16, c_channels=1, dropout_value=0.5): # ngf feature map of generator
super().__init__()
self.ngf = ngf
self.n_features = n_features
self.c_channels = c_channels
self.droupout_value = dropout_value
self.hidden0 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.n_features + 10, out_channels=self.ngf * 8,
kernel_size=4, stride=1, padding=0, bias=False),
nn.BatchNorm2d(self.ngf * 8),
nn.LeakyReLU(0.2)
)
self.hidden1 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.ngf * 8, out_channels=self.ngf * 4,
kernel_size=4, stride=2, padding=1, bias=False),
#nn.BatchNorm2d(self.ngf * 4),
nn.LeakyReLU(0.2),
nn.Dropout(self.droupout_value)
)
self.hidden2 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.ngf * 4, out_channels=self.ngf * 2,
kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(self.ngf * 2),
nn.LeakyReLU(0.2),
nn.Dropout(self.droupout_value)
)
self.hidden3 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.ngf * 2, out_channels=self.ngf,
kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(self.ngf),
nn.LeakyReLU(0.2),
nn.Dropout(self.droupout_value)
)
self.out = nn.Sequential(
# "out_channels=1" because gray scale
nn.ConvTranspose2d(in_channels=self.ngf, out_channels=1, kernel_size=4,
stride=2, padding=1, bias=False),
nn.Tanh()
)
def forward(self, x, y):
x_cond = torch.cat((x, y), dim=1) # Combine flatten image with conditional input (class labels)
x = self.hidden0(x_cond) # Image goes into a "ConvTranspose2d" layer
x = self.hidden1(x)
x = self.hidden2(x)
x = self.hidden3(x)
x = self.out(x)
return x
class Logger:
def __init__(self, model_name, model1, model2, m1_optimizer, m2_optimizer, model_parameter, train_loader):
self.out_dir = "data"
self.model_name = model_name
self.train_loader = train_loader
self.model1 = model1
self.model2 = model2
self.model_parameter = model_parameter
self.m1_optimizer = m1_optimizer
self.m2_optimizer = m2_optimizer
# Exclude Epochs of the model name. This make sense e.g. when we stop a training progress and continue later on.
self.experiment_name = '_'.join("{!s}={!r}".format(k, v) for (k, v) in model_parameter.items())\
.replace("Epochs" + "=" + str(model_parameter["Epochs"]), "")
self.d_error = 0
self.g_error = 0
self.tb = SummaryWriter(log_dir=str(self.out_dir + "/log/" + self.model_name + "/runs/" + self.experiment_name))
self.path_image = os.path.join(os.getcwd(), f'{self.out_dir}/log/{self.model_name}/images/{self.experiment_name}')
self.path_model = os.path.join(os.getcwd(), f'{self.out_dir}/log/{self.model_name}/model/{self.experiment_name}')
try:
os.makedirs(self.path_image)
except Exception as e:
print("WARNING: ", str(e))
try:
os.makedirs(self.path_model)
except Exception as e:
print("WARNING: ", str(e))
def log_graph(self, model1_input, model2_input, model1_label, model2_label):
self.tb.add_graph(self.model1, input_to_model=(model1_input, model1_label))
self.tb.add_graph(self.model2, input_to_model=(model2_input, model2_label))
def log(self, num_epoch, d_error, g_error):
self.d_error = d_error
self.g_error = g_error
self.tb.add_scalar("Discriminator Train Error", self.d_error, num_epoch)
self.tb.add_scalar("Generator Train Error", self.g_error, num_epoch)
def log_image(self, images, epoch, batch_num):
grid = torchvision.utils.make_grid(images)
torchvision.utils.save_image(grid, f'{self.path_image}\\Epoch_{epoch}_batch_{batch_num}.png')
self.tb.add_image("Generator Image", grid)
def log_histogramm(self):
for name, param in self.model2.named_parameters():
self.tb.add_histogram(name, param, self.model_parameter["Epochs"])
self.tb.add_histogram(f'gen_{name}.grad', param.grad, self.model_parameter["Epochs"])
for name, param in self.model1.named_parameters():
self.tb.add_histogram(name, param, self.model_parameter["Epochs"])
self.tb.add_histogram(f'dis_{name}.grad', param.grad, self.model_parameter["Epochs"])
def log_model(self, num_epoch):
torch.save({
"epoch": num_epoch,
"model_generator_state_dict": self.model1.state_dict(),
"model_discriminator_state_dict": self.model2.state_dict(),
"optimizer_generator_state_dict": self.m1_optimizer.state_dict(),
"optimizer_discriminator_state_dict": self.m2_optimizer.state_dict(),
}, str(self.path_model + f'\\{time.time()}_epoch{num_epoch}.pth'))
def close(self, logger, images, num_epoch, d_error, g_error):
logger.log_model(num_epoch)
logger.log_histogramm()
logger.log(num_epoch, d_error, g_error)
self.tb.close()
def display_stats(self, epoch, batch_num, dis_error, gen_error):
print(f'Epoch: [{epoch}/{self.model_parameter["Epochs"]}] '
f'Batch: [{batch_num}/{len(self.train_loader)}] '
f'Loss_D: {dis_error.data.cpu()}, '
f'Loss_G: {gen_error.data.cpu()}')
def get_MNIST_dataset(num_workers_loader, model_parameter, out_dir="data"):
compose = transforms.Compose([
transforms.Resize((64, 64)),
transforms.CenterCrop((64, 64)),
transforms.ToTensor(),
torchvision.transforms.Normalize(mean=[0.5], std=[0.5])
])
dataset = datasets.MNIST(
root=out_dir,
train=True,
download=True,
transform=compose
)
train_loader = torch.utils.data.DataLoader(dataset,
batch_size=model_parameter["batch_size"],
num_workers=num_workers_loader,
shuffle=model_parameter["shuffle"])
return dataset, train_loader
def train_discriminator(p_optimizer, p_noise, p_images, p_fake_target, p_real_target, p_images_labels, p_fake_labels, device):
p_optimizer.zero_grad()
# 1.1 Train on real data
pred_dis_real = discriminator(p_images, p_images_labels)
error_real = loss(pred_dis_real, p_real_target)
error_real.backward()
# 1.2 Train on fake data
fake_data = generator(p_noise, p_fake_labels).detach()
fake_data = add_noise_to_image(fake_data, device)
pred_dis_fake = discriminator(fake_data, p_fake_labels)
error_fake = loss(pred_dis_fake, p_fake_target)
error_fake.backward()
p_optimizer.step()
return error_fake + error_real
def train_generator(p_optimizer, p_noise, p_real_target, p_fake_labels, device):
p_optimizer.zero_grad()
fake_images = generator(p_noise, p_fake_labels)
fake_images = add_noise_to_image(fake_images, device)
pred_dis_fake = discriminator(fake_images, p_fake_labels)
error_fake = loss(pred_dis_fake, p_real_target) # because
"""
We use "p_real_target" instead of "p_fake_target" because we want to
maximize that the discriminator is wrong.
"""
error_fake.backward()
p_optimizer.step()
return fake_images, pred_dis_fake, error_fake
# TODO change to a Truncated normal distribution
def get_noise(batch_size, n_features=100):
return torch.FloatTensor(batch_size, n_features, 1, 1).uniform_(-1, 1)
# We flip the labels of real and fake data. Better gradient flow, I have been told.
def get_real_data_target(batch_size):
return torch.FloatTensor(batch_size, 1, 1, 1).uniform_(0.0, 0.2)
def get_fake_data_target(batch_size):
return torch.FloatTensor(batch_size, 1, 1, 1).uniform_(0.8, 1.1)
def image_to_vector(images):
return torch.flatten(images, start_dim=1, end_dim=-1)
def vector_to_image(images):
return images.view(images.size(0), 1, 28, 28)
def get_rand_labels(batch_size):
return torch.randint(low=0, high=9, size=(batch_size,))
def load_model(model_load_path):
if model_load_path:
checkpoint = torch.load(model_load_path)
discriminator.load_state_dict(checkpoint["model_discriminator_state_dict"])
generator.load_state_dict(checkpoint["model_generator_state_dict"])
dis_opti.load_state_dict(checkpoint["optimizer_discriminator_state_dict"])
gen_opti.load_state_dict(checkpoint["optimizer_generator_state_dict"])
return checkpoint["epoch"]
else:
return 0
def init_model_optimizer(model_parameter, device):
# Initialize the Models
discriminator = Discriminator(ndf=model_parameter["ndf"], dropout_value=model_parameter["dropout"]).to(device)
generator = Generator(ngf=model_parameter["ngf"], dropout_value=model_parameter["dropout"]).to(device)
# train
dis_opti = optim.Adam(discriminator.parameters(), lr=model_parameter["learning_rate_dis"], betas=(0.5, 0.999))
gen_opti = optim.Adam(generator.parameters(), lr=model_parameter["learning_rate_gen"], betas=(0.5, 0.999))
return discriminator, generator, dis_opti, gen_opti
def get_hot_vector_encode(labels, device):
return torch.eye(10)[labels].view(-1, 10, 1, 1).to(device)
def add_noise_to_image(images, device, level_of_noise=0.1):
return images[0].to(device) + (level_of_noise) * torch.randn(images.shape).to(device)
if __name__ == "__main__":
# Hyperparameter
model_parameter = {
"batch_size": 500,
"learning_rate_dis": 0.0002,
"learning_rate_gen": 0.0002,
"shuffle": False,
"Epochs": 10,
"ndf": 64,
"ngf": 64,
"dropout": 0.5
}
# Parameter
r_frequent = 10 # How many samples we save for replay per batch (batch_size / r_frequent).
model_name = "CDCGAN" # The name of you model e.g. "Gan"
num_workers_loader = 1 # How many workers should load the data
sample_save_size = 16 # How many numbers your saved imaged should show
device = "cuda" # Which device should be used to train the neural network
model_load_path = "" # If set load model instead of training from new
num_epoch_log = 1 # How frequent you want to log/
torch.manual_seed(43) # Sets a seed for torch for reproducibility
dataset_train, train_loader = get_MNIST_dataset(num_workers_loader, model_parameter) # Get dataset
# Initialize the Models and optimizer
discriminator, generator, dis_opti, gen_opti = init_model_optimizer(model_parameter, device) # Init model/Optimizer
start_epoch = load_model(model_load_path) # when we want to load a model
# Init Logger
logger = Logger(model_name, generator, discriminator, gen_opti, dis_opti, model_parameter, train_loader)
loss = nn.BCELoss()
images, labels = next(iter(train_loader)) # For logging
# For testing
# pred = generator(get_noise(model_parameter["batch_size"]).to(device), get_hot_vector_encode(get_rand_labels(model_parameter["batch_size"]), device))
# dis = discriminator(images.to(device), get_hot_vector_encode(labels, device))
logger.log_graph(get_noise(model_parameter["batch_size"]).to(device), images.to(device),
get_hot_vector_encode(get_rand_labels(model_parameter["batch_size"]), device),
get_hot_vector_encode(labels, device))
# Array to store
exp_replay = torch.tensor([]).to(device)
for num_epoch in range(start_epoch, model_parameter["Epochs"]):
for batch_num, data_loader in enumerate(train_loader):
images, labels = data_loader
images = add_noise_to_image(images, device) # Add noise to the images
# 1. Train Discriminator
dis_error = train_discriminator(
dis_opti,
get_noise(model_parameter["batch_size"]).to(device),
images.to(device),
get_fake_data_target(model_parameter["batch_size"]).to(device),
get_real_data_target(model_parameter["batch_size"]).to(device),
get_hot_vector_encode(labels, device),
get_hot_vector_encode(
get_rand_labels(model_parameter["batch_size"]), device),
device
)
# 2. Train Generator
fake_image, pred_dis_fake, gen_error = train_generator(
gen_opti,
get_noise(model_parameter["batch_size"]).to(device),
get_real_data_target(model_parameter["batch_size"]).to(device),
get_hot_vector_encode(
get_rand_labels(model_parameter["batch_size"]),
device),
device
)
# Store a random point for experience replay
perm = torch.randperm(fake_image.size(0))
r_idx = perm[:max(1, int(model_parameter["batch_size"] / r_frequent))]
r_samples = add_noise_to_image(fake_image[r_idx], device)
exp_replay = torch.cat((exp_replay, r_samples), 0).detach()
if exp_replay.size(0) >= model_parameter["batch_size"]:
# Train on experienced data
dis_opti.zero_grad()
r_label = get_hot_vector_encode(torch.zeros(exp_replay.size(0)).numpy(), device)
pred_dis_real = discriminator(exp_replay, r_label)
error_real = loss(pred_dis_real, get_fake_data_target(exp_replay.size(0)).to(device))
error_real.backward()
dis_opti.step()
print(f'Epoch: [{num_epoch}/{model_parameter["Epochs"]}] '
f'Batch: Replay/Experience batch '
f'Loss_D: {error_real.data.cpu()}, '
)
exp_replay = torch.tensor([]).to(device)
logger.display_stats(epoch=num_epoch, batch_num=batch_num, dis_error=dis_error, gen_error=gen_error)
if batch_num % 100 == 0:
logger.log_image(fake_image[:sample_save_size], num_epoch, batch_num)
logger.log(num_epoch, dis_error, gen_error)
if num_epoch % num_epoch_log == 0:
logger.log_model(num_epoch)
logger.log_histogramm()
logger.close(logger, fake_image[:sample_save_size], num_epoch, dis_error, gen_error)
</code></pre>
<p><a href="https://pastebin.com/XCd4rwRW" rel="noreferrer">First link to my Code (Pastebin)</a><br />
<a href="https://0bin.net/paste/Cc-mqarmRD6Jij3W#BO7gB8PanNOg4rtTkTgeC0lm3NOG1TNauwZ0Nzkny0c" rel="noreferrer">Second link to my Code (0bin)</a></p>
<p><strong>Conclusion:</strong></p>
<p>I implemented all these things (e.g. label smoothing) which are considered beneficial to a GAN/DCGAN,<br />
and my model still performs worse than the tutorial DCGAN from PyTorch, so I think I might have a bug in my code, but I can't seem to find it.</p>
<p><strong>Reproducibility:</strong></p>
<p>You should be able to just copy the code and run it, provided you have the libraries I imported installed, and look for yourself whether you can spot anything.</p>
<p>I appreciate any feedback.</p> | 2020-02-26 19:47:51.707000+00:00 | 2022-01-21 10:47:01.750000+00:00 | 2021-01-02 20:23:18.517000+00:00 | python|neural-network|pytorch|generative-adversarial-network | ['https://arxiv.org/abs/1903.06048', 'https://www.inference.vc/instance-noise-a-trick-for-stabilising-gan-training/', 'https://github.com/soumith/ganhacks/blob/master/README.md', 'https://towardsdatascience.com/10-lessons-i-learned-training-generative-adversarial-networks-gans-for-a-year-c9071159628'] | 4 |
64,120,696 | <p><strong>CrossEntropy</strong> is not the best loss function when you deal with Top-k accuracy because cross-entropy may be prone to <strong>overfitting on small datasets or noisy labels</strong>.</p>
<p>As you have already pointed out, "smooth loss" functions are developed for top-k classification with SVM. To my knowledge, there is no "off-the-shelf" loss function in Keras/TF that is best suited for top-k. However, I suggest you try <strong>Smooth Surrogate Loss</strong> <strong>(SSL)</strong> presented in the <a href="https://arxiv.org/pdf/1802.07595.pdf" rel="nofollow noreferrer">article</a> and implemented in <em>Pytorch</em> for use with deep neural networks (see <a href="https://github.com/oval-group/smooth-topk" rel="nofollow noreferrer">Github</a>). It derives from multi-class SVMs, as SSL creates a margin between the correct top-k predictions and the incorrect ones. The training time of SSL is comparable to that of cross-entropy thanks to a divide-and-conquer approach and the use of polynomials (see <a href="https://github.com/oval-group/smooth-topk/tree/master/topk/polynomial" rel="nofollow noreferrer">implementation</a>).</p> | 2020-09-29 13:32:02.827000+00:00 | 2020-09-29 14:08:56.663000+00:00 | 2020-09-29 14:08:56.663000+00:00 | null | 63,972,516 | <p>For multiclass classification problems, Keras and tf.keras have metrics like SparseTopKCategoricalAccuracy and TopKCategoricalAccuracy. However, if one uses loss functions like SparseCategoricalCrossentropy or CategoricalCrossentropy, they cannot achieve the max values for these two metrics.</p>
<p>What is a good loss function to use when one wants to maximize SparseTopKCategoricalAccuracy or TopKCategoricalAccuracy?</p>
<p>I understand that SparseTopKCategoricalAccuracy is not differentiable, just like Accuracy. I am trying to find a function that can approximate the smooth loss function and yield a higher number for SparseTopKCategoricalAccuracy.</p> | 2020-09-19 19:11:38.850000+00:00 | 2020-09-29 14:08:56.663000+00:00 | 2020-09-19 22:25:57.883000+00:00 | tensorflow|keras|tensorflow2.0|metrics|loss-function | ['https://arxiv.org/pdf/1802.07595.pdf', 'https://github.com/oval-group/smooth-topk', 'https://github.com/oval-group/smooth-topk/tree/master/topk/polynomial'] | 3 |
61,521,654 | <p>I don't think you can directly apply vanilla GPs to your problem in the way you formulated it. </p>
<p>A GP gives you a distribution over possible functions, conditioned on samples from one stochastic function. If I understand your setting correctly you have a collection of functions (Time series) and want to learn the underlying dynamics of those functions when conditioning on your X (the [-28, 32]).</p>
<p>Let's reframe your problem setting:</p>
<pre><code>list_1 = [....]
list_2 = [....]
...
k = [-28, 32, ...]
X1 = np.arange(0, len(list_1))
Y1 = list_1
X2 = np.arange(0, len(list_2))
Y2 = list_2
...
</code></pre>
<p>In the framework of GPs I would say you want to learn a kernel that depends on your k.
Then, given that kernel, you have a standard GP where the <code>x</code>s are the time indexes and the <code>y=f(x)+noise</code> are the time series values. For each k you have some training data you condition on (that is, e.g. <code>(X1, Y1)</code>) and want to predict the values f(x) for some other time points (maybe in between your x, or in the future for forecasting).</p>
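<p>For that "standard GP" part, a minimal sketch with the GPflow 2.x API (kernel choice arbitrary, the dependence on k left out, and a tiny stand-in series in place of your real data):</p>
<pre><code>import numpy as np
import gpflow

list_1 = [0., 0., 1., 2., 4., 8., 11., 14., 20., 22., 24., 22., 18., 0.]  # stand-in series
X1 = np.arange(len(list_1), dtype=float).reshape(-1, 1)   # time indexes
Y1 = np.asarray(list_1, dtype=float).reshape(-1, 1)       # observed values

# Standard GP regression on (time index -> value) for one series.
model = gpflow.models.GPR(data=(X1, Y1), kernel=gpflow.kernels.SquaredExponential())
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

# Predict at unseen time points (interpolation or short-range forecasting).
X_new = np.linspace(0.0, len(list_1), 100).reshape(-1, 1)
mean, var = model.predict_f(X_new)
</code></pre>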
<p>So what you could do is try something like a deep kernel, where a neural network (or some other function) predicts the kernel given k (see for example [1]). You could train it by maximizing the marginal likelihood of your training data in GPflow. There are probably some other, more data-efficient methods for learning a kernel, but you'll have to research that.</p>
<p>Alternatively, I've recently seen a few papers about so-called 'Neural Processes', where they have a similar setting: <a href="https://github.com/deepmind/neural-processes" rel="nofollow noreferrer">https://github.com/deepmind/neural-processes</a>
Here the stochastic processes are predicted exclusively with Neural Networks, and not GPs.</p>
<p>Note however that both methods would probably require quite a bit of data.</p>
<p>[1] Calandra, Roberto, et al. "Manifold Gaussian processes for regression." 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016.
<a href="https://arxiv.org/abs/1402.5876" rel="nofollow noreferrer">https://arxiv.org/abs/1402.5876</a></p> | 2020-04-30 10:48:23.150000+00:00 | 2020-04-30 10:48:23.150000+00:00 | null | null | 60,363,620 | <p>First, I would like to describe what I am trying to achieve:
My input X consists of 2 values and output Y consists of 2 lists with different lengths:</p>
<pre><code>list_1 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 4, 4, 8, 10, 11, 11, 11, 14, 14, 15, 15, 20, 20, 22, 22, 22, 22, 22, 10, 10, 10, 10, 15, 15, 15, 15, 15, 15, 20, 22, 22, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 22, 22, 22, 18, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
list_2 =[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 6, 20, 20, 25, 25, 32, 33, 33, 33, 33, 33, 27, 27, 7, 7, 7, 7, 15, 15, 15, 22, 22, 22, 27, 30, 30, 30, 30, 30, 30, 31, 31, 31, 31, 33, 33, 33, 33, 34, 34, 34, 34, 35, 35, 35, 35, 37, 37, 37, 37, 37, 37, 37, 38, 38, 38, 38, 38, 38, 38, 38, 47, 47, 47, 47, 47, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
X= [-28,32]
Y= [list_1,list_2]
</code></pre>
<p>In most cases, a regression problem maps an input x to a single output y. However, in my case I need to map an input x to a time series y. I am trying to implement this in Python using GPflow. I would like your opinions on whether GPR is an appropriate choice here, and if someone has already done similar tasks, could you give me some introduction?</p> | 2020-02-23 15:12:17.503000+00:00 | 2020-04-30 10:48:23.150000+00:00 | null | python|machine-learning|regression|gaussian|gpflow | ['https://github.com/deepmind/neural-processes', 'https://arxiv.org/abs/1402.5876'] | 2 |
36,945,690 | <p>In the last year a few research groups have started using the character sequence of a word to generate word embedding vectors. See this paper "<a href="http://www.cs.cmu.edu/~lingwang/papers/emnlp2015.pdf" rel="nofollow">Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation</a>" for an example. There is also an earlier paper "<a href="http://arxiv.org/pdf/1405.4273v1.pdf" rel="nofollow">Compositional Morphology for Word Representations and Language Modelling</a>" that specifically models morphological differences such as those between singular and plural word forms.</p>
<p>I'm not aware of any open source implementations of these types of models.</p> | 2016-04-29 19:01:06.610000+00:00 | 2016-04-29 19:01:06.610000+00:00 | null | null | 36,915,236 | <p>I wonder if you know any word2vec implementation that takes into account that car and cars represents nearly the same concept, or lehrer and lehrerin (German for male and female teacher respectively) are almost the same. The implementations I have seen largely ignore this fact, and therefore the quality of the results is poor.</p>
<p>Thank you in advance.</p> | 2016-04-28 12:50:30.140000+00:00 | 2016-04-29 19:01:06.610000+00:00 | null | word2vec | ['http://www.cs.cmu.edu/~lingwang/papers/emnlp2015.pdf', 'http://arxiv.org/pdf/1405.4273v1.pdf'] | 2 |
41,021,174 | <p>You can either initialize the filters randomly or pretrain them on some other data set.</p>
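<p>As a minimal illustration, assuming the tf.keras API (the point itself is framework-agnostic), the two options look roughly like this:</p>
<pre><code>from tensorflow.keras import layers, applications

# Option 1: random initialization - the filters start as structured noise
# (Glorot uniform is the Keras default) and are learned during training.
conv = layers.Conv2D(64, (3, 3), activation='relu',
                     kernel_initializer='glorot_uniform')

# Option 2: start from filters pretrained on a large dataset (e.g. ImageNet)
# and either freeze them (fixed feature extractor) or fine-tune them.
base = applications.VGG16(weights='imagenet', include_top=False)
base.trainable = False  # use as a fixed feature extractor
</code></pre>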
<hr>
<p>Some references:</p>
<p><a href="http://deeplearning.net/tutorial/lenet.html" rel="noreferrer">http://deeplearning.net/tutorial/lenet.html</a>:</p>
<blockquote>
<p>Notice that a randomly initialized filter acts very much like an edge
detector!</p>
<p>Note that we use the same weight initialization formula as with the
MLP. Weights are sampled randomly from a uniform distribution in the
range [-1/fan-in, 1/fan-in], where fan-in is the number of inputs to a
hidden unit. For MLPs, this was the number of units in the layer
below. For CNNs however, we have to take into account the number of
input feature maps and the size of the receptive fields.</p>
</blockquote>
<p><a href="http://cs231n.github.io/transfer-learning/" rel="noreferrer">http://cs231n.github.io/transfer-learning/</a> :</p>
<blockquote>
<h2>Transfer Learning</h2>
<p>In practice, very few people train an entire Convolutional Network
from scratch (with random initialization), because it is relatively
rare to have a dataset of sufficient size. Instead, it is common to
pretrain a ConvNet on a very large dataset (e.g. ImageNet, which
contains 1.2 million images with 1000 categories), and then use the
ConvNet either as an initialization or a fixed feature extractor for
the task of interest. The three major Transfer Learning scenarios look
as follows:</p>
<ul>
<li><strong>ConvNet as fixed feature extractor</strong>. Take a ConvNet pretrained on ImageNet, remove the last fully-connected layer (this layer's outputs
are the 1000 class scores for a different task like ImageNet), then
treat the rest of the ConvNet as a fixed feature extractor for the new
dataset. In an AlexNet, this would compute a 4096-D vector for every
image that contains the activations of the hidden layer immediately
before the classifier. We call these features <strong>CNN codes</strong>. It is
important for performance that these codes are ReLUd (i.e. thresholded
at zero) if they were also thresholded during the training of the
ConvNet on ImageNet (as is usually the case). Once you extract the
4096-D codes for all images, train a linear classifier (e.g. Linear
SVM or Softmax classifier) for the new dataset.</li>
<li><strong>Fine-tuning the ConvNet</strong>. The second strategy is to not only replace and retrain the classifier on top of the ConvNet on the new
dataset, but to also fine-tune the weights of the pretrained network
by continuing the backpropagation. It is possible to fine-tune all the
layers of the ConvNet, or it's possible to keep some of the earlier
layers fixed (due to overfitting concerns) and only fine-tune some
higher-level portion of the network. This is motivated by the
observation that the earlier features of a ConvNet contain more
generic features (e.g. edge detectors or color blob detectors) that
should be useful to many tasks, but later layers of the ConvNet
becomes progressively more specific to the details of the classes
contained in the original dataset. In case of ImageNet for example,
which contains many dog breeds, a significant portion of the
representational power of the ConvNet may be devoted to features that
are specific to differentiating between dog breeds.</li>
</ul>
<p><strong>Pretrained models</strong>. Since modern ConvNets take 2-3 weeks to train across multiple GPUs on ImageNet, it is common to see people release
their final ConvNet checkpoints for the benefit of others who can use
the networks for fine-tuning. For example, the Caffe library has a
<a href="https://github.com/BVLC/caffe/wiki/Model-Zoo" rel="noreferrer">Model Zoo</a> where people
share their network weights.</p>
<p><strong>When and how to fine-tune?</strong> How do you decide what type of transfer learning you should perform on a new dataset? This is a function of
several factors, but the two most important ones are the size of the
new dataset (small or big), and its similarity to the original dataset
(e.g. ImageNet-like in terms of the content of images and the classes,
or very different, such as microscope images). Keeping in mind that
ConvNet features are more generic in early layers and more
original-dataset-specific in later layers, here are some common rules
of thumb for navigating the 4 major scenarios:</p>
<ol>
<li><em>New dataset is small and similar to original dataset</em>. Since the data is small, it is not a good idea to fine-tune the ConvNet due to
overfitting concerns. Since the data is similar to the original data,
we expect higher-level features in the ConvNet to be relevant to this
dataset as well. Hence, the best idea might be to train a linear
classifier on the CNN codes.</li>
<li><em>New dataset is large and similar to the original dataset</em>. Since we have more data, we can have more confidence that we won't overfit
if we were to try to fine-tune through the full network.</li>
<li><em>New dataset is small but very different from the original dataset</em>. Since the data is small, it is likely best to only train a
linear classifier. Since the dataset is very different, it might not
be best to train the classifier form the top of the network, which
contains more dataset-specific features. Instead, it might work better
to train the SVM classifier from activations somewhere earlier in the
network.</li>
<li><em>New dataset is large and very different from the original dataset</em>. Since the dataset is very large, we may expect that we can
afford to train a ConvNet from scratch. However, in practice it is
very often still beneficial to initialize with weights from a
pretrained model. In this case, we would have enough data and
confidence to fine-tune through the entire network.</li>
</ol>
<p><strong>Practical advice</strong>. There are a few additional things to keep in mind when performing Transfer Learning:</p>
<ul>
<li><em>Constraints from pretrained models</em>. Note that if you wish to use a pretrained network, you may be slightly constrained in terms of the
architecture you can use for your new dataset. For example, you can't
arbitrarily take out Conv layers from the pretrained network. However,
some changes are straight-forward: Due to parameter sharing, you can
easily run a pretrained network on images of different spatial size.
This is clearly evident in the case of Conv/Pool layers because their
forward function is independent of the input volume spatial size (as
long as the strides "fit"). In case of FC layers, this still holds
true because FC layers can be converted to a Convolutional Layer: For
example, in an AlexNet, the final pooling volume before the first FC
layer is of size [6x6x512]. Therefore, the FC layer looking at this
volume is equivalent to having a Convolutional Layer that has
receptive field size 6x6, and is applied with padding of 0.</li>
<li><em>Learning rates</em>. It's common to use a smaller learning rate for ConvNet weights that are being fine-tuned, in comparison to the
(randomly-initialized) weights for the new linear classifier that
computes the class scores of your new dataset. This is because we
expect that the ConvNet weights are relatively good, so we don't wish
to distort them too quickly and too much (especially while the new
Linear Classifier above them is being trained from random
initialization).</li>
</ul>
<p></p>
<h2>Additional References</h2>
<ul>
<li><a href="http://arxiv.org/abs/1403.6382" rel="noreferrer">CNN Features off-the-shelf: an Astounding Baseline for Recognition</a> trains SVMs on features
from ImageNet-pretrained ConvNet and reports several state of the art
results.</li>
<li><a href="http://arxiv.org/abs/1310.1531" rel="noreferrer">DeCAF</a> reported similar findings in 2013. The framework in this paper (DeCAF) was a Python-based precursor to the C++ Caffe library.</li>
<li><a href="http://arxiv.org/abs/1411.1792" rel="noreferrer">How transferable are features in deep neural networks?</a> studies the transfer
learning performance in detail, including some unintuitive findings
about layer co-adaptations.</li>
</ul>
</blockquote> | 2016-12-07 15:21:39.423000+00:00 | 2016-12-07 15:21:39.423000+00:00 | null | null | 41,020,524 | <p>I read a lot of papers on convnets, but there is one thing I don't understand: how are the filters in a convolutional layer initialized?
Because, for example, in the first layer the filters should detect edges etc.
But if they are initialized randomly, wouldn't they be inaccurate? The same goes for the next layers and the high-level features.
And another question: what is the range of the values in those filters?</p>
<p>Many thanks to you!</p> | 2016-12-07 14:50:41.350000+00:00 | 2016-12-07 15:21:39.423000+00:00 | null | tensorflow|deep-learning|theano|keras | ['http://deeplearning.net/tutorial/lenet.html', 'http://cs231n.github.io/transfer-learning/', 'https://github.com/BVLC/caffe/wiki/Model-Zoo', 'http://arxiv.org/abs/1403.6382', 'http://arxiv.org/abs/1310.1531', 'http://arxiv.org/abs/1411.1792'] | 6 |
33,821,542 | <p>If you are looking for another POS tagger with fast performances in Python, you might want to try <a href="http://rdrpostagger.sourceforge.net/" rel="nofollow">RDRPOSTagger</a>. For example, on English POS tagging, the tagging speed is 8K words/second for a single threaded implementation in Python, using a computer of Core 2Duo 2.4GHz. You can get faster tagging speed by simply using the multi-threaded mode. RDRPOSTagger obtains very competitive accuracies in comparison to state-of-the-art taggers and now supports pre-trained models for 40 languages. See experimental results in <a href="http://arxiv.org/abs/1412.4021" rel="nofollow">this paper</a>.</p> | 2015-11-20 07:51:48.197000+00:00 | 2016-10-20 04:06:47.217000+00:00 | 2016-10-20 04:06:47.217000+00:00 | null | 33,676,526 | <p>I am using <code>nltk</code> to generate n-grams from sentences by first removing given stop words. However, <code>nltk.pos_tag()</code> is extremely slow taking up to 0.6 sec on my CPU (Intel i7).</p>
<p>The output:</p>
<pre><code>['The first time I went, and was completely taken by the live jazz band and atmosphere, I ordered the Lobster Cobb Salad.']
0.620481014252
["It's simply the best meal in NYC."]
0.640982151031
['You cannot go wrong at the Red Eye Grill.']
0.644664049149
</code></pre>
<p>The code:</p>
<pre><code>for sentence in source:
nltk_ngrams = None
if stop_words is not None:
start = time.time()
sentence_pos = nltk.pos_tag(word_tokenize(sentence))
print time.time() - start
filtered_words = [word for (word, pos) in sentence_pos if pos not in stop_words]
else:
filtered_words = ngrams(sentence.split(), n)
</code></pre>
<p>Is this really that slow or am I doing something wrong here?</p> | 2015-11-12 16:32:19.997000+00:00 | 2016-10-20 04:06:47.217000+00:00 | 2015-11-12 16:59:18.870000+00:00 | python|nlp|nltk|pos-tagger | ['http://rdrpostagger.sourceforge.net/', 'http://arxiv.org/abs/1412.4021'] | 2 |
54,389,479 | <blockquote>
<p>How random is the Adam optimizer?</p>
</blockquote>
<p><em>The randomness in your result <code>y</code> is not something Adam brings for fixed values of the hyper-parameters. It comes from the initial parameters <code>W</code> and biases <code>b</code> that TensorFlow fills in, which are governed by <code>np.random.seed(0)</code> or <code>tf.set_random_seed(0)</code>.</em></p>
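<p>A minimal sketch of pinning that randomness down (TF 1.x seeding as referenced above; in TF 2.x the call is <code>tf.random.set_seed</code>):</p>
<pre><code>import numpy as np
import tensorflow as tf

np.random.seed(0)      # NumPy-side randomness (e.g. data shuffling)
tf.set_random_seed(0)  # graph-level seed for variable initializers
# ...build the model and optimizer as usual; with fixed seeds and fixed
# hyper-parameters, the Adam updates themselves are deterministic.
</code></pre>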
<p>As described in <a href="https://arxiv.org/pdf/1412.6980.pdf" rel="nofollow noreferrer">Adam</a>, it is RMSProp combined with Gradient Descent with momentum.</p>
<p>If you check out the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam" rel="nofollow noreferrer">arguments</a>:</p>
<ul>
<li>lr: float >= 0. Learning rate.</li>
<li>beta_1: float, 0 < beta < 1. Generally close to 1.</li>
<li>beta_2: float, 0 < beta < 1. Generally close to 1.</li>
<li>epsilon: float >= 0. Fuzz factor. If None, defaults to K.epsilon().</li>
<li>decay: float >= 0. Learning rate decay over each update.</li>
<li>amsgrad: boolean. Whether to apply the AMSGrad variant of this algorithm from the paper "On the Convergence of Adam and Beyond".</li>
</ul>
<p>There are quite a few, and by default:</p>
<pre><code>__init__(
lr=0.001,
beta_1=0.9,
beta_2=0.999,
epsilon=None,
decay=0.0,
amsgrad=False, **kwargs
)
</code></pre>
<p>For the fixed set of the default hyper-parameters the results will be the same.</p> | 2019-01-27 14:57:26.480000+00:00 | 2019-01-27 15:03:16.880000+00:00 | 2019-01-27 15:03:16.880000+00:00 | null | 54,389,279 | <p>Suppose:</p>
<ol>
<li>I feed the data in the same order to 10 AdamOptimizers.</li>
<li>All AdamOptimizers try to minimize the same objective function.</li>
<li>The initial values for the variables are different for each of the 10 AdamOptimizers.</li>
<li>Some of the variables (let's call them set b) should have no effect on the minimal value of the objective function. But I don't know which variables are in set b before the minimization.</li>
<li>The objective function is deterministic.</li>
</ol>
<p>Would the variables in set b have different values across the 10 minimizations?</p>
<p>I am trying to run the 10 minimizations concurrently on a GPU.
The training data is large.</p> | 2019-01-27 14:39:08.800000+00:00 | 2019-01-27 15:03:16.880000+00:00 | 2019-01-27 14:51:40.097000+00:00 | python|tensorflow | ['https://arxiv.org/pdf/1412.6980.pdf', 'https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam'] | 2 |
58,189,943 | <p>One start would be to use data from Wikidata. It has some information on Chinese companies (I suppose you are referring to companies listed on Chinese stock exchanges). For instance, <a href="https://www.wikidata.org/wiki/Q831445" rel="nofollow noreferrer">https://www.wikidata.org/wiki/Q831445</a> displays information about Sinopec.</p>
<p>The data from Wikidata can be downloaded from the API, the large dumps files at <a href="https://dumps.wikimedia.org/wikidatawiki/" rel="nofollow noreferrer">https://dumps.wikimedia.org/wikidatawiki/</a> or the SPARQL endpoint at <a href="https://query.wikidata.org/" rel="nofollow noreferrer">https://query.wikidata.org/</a>.</p>
<p>You can get a list of companies listed on the Shenzhen Stock Exchange with the SPARQL query:</p>
<pre><code>SELECT
?company ?companyLabel
?industry ?industryLabel
{
?company wdt:P414 wd:Q517750 .
OPTIONAL { ?company wdt:P452 ?industry }
SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en,zh". }
}
</code></pre>
<p>The result is (also) available at <a href="https://w.wiki/9DM" rel="nofollow noreferrer">https://w.wiki/9DM</a> . This result can be extended by modifying the query and it can be downloaded in various formats. With the DESCRIBE SPARQL keyword you can get the triple format that may be useful for the TransE algorithm, e.g., <code>DESCRIBE wd:Q831445</code> with the result at <a href="https://w.wiki/9DN" rel="nofollow noreferrer">https://w.wiki/9DN</a> .</p>
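<p>A minimal Python sketch for fetching such query results programmatically (assuming the <code>requests</code> library; the query is a shortened version of the one above):</p>
<pre><code>import requests

query = """
SELECT ?company ?companyLabel WHERE {
  ?company wdt:P414 wd:Q517750 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,zh". }
}
"""
r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": query, "format": "json"})
for row in r.json()["results"]["bindings"]:
    print(row["company"]["value"], row["companyLabel"]["value"])
</code></pre>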
<p>It is possible to process the large dump files and make a knowledge graph embedding with Gensim's Word2Vec, see "Wembedder: Wikidata entity embedding web service" at <a href="https://arxiv.org/abs/1710.04099" rel="nofollow noreferrer">https://arxiv.org/abs/1710.04099</a> . You can explore one result of this approach with the Wembedder webapp, e.g., <a href="https://tools.wmflabs.org/wembedder/most-similar/Q51747" rel="nofollow noreferrer">https://tools.wmflabs.org/wembedder/most-similar/Q51747</a> displays the result of a "most similar" query in the knowledge graph embedding with Air China </p> | 2019-10-01 17:50:46.337000+00:00 | 2019-10-01 17:50:46.337000+00:00 | null | null | 56,963,129 | <p>now I am building a knowledge graph of the Chinese stock and want to build a news recommendation system. And I want to use TransE algorithm for the entity embedding and relationship embedding. But I do not have the dataset and don't know clearly how to build a dataset using my own knowledge graph?</p> | 2019-07-10 03:42:36.170000+00:00 | 2019-10-01 17:50:46.337000+00:00 | null | dataset|embedding|knowledge-graph | ['https://www.wikidata.org/wiki/Q831445', 'https://dumps.wikimedia.org/wikidatawiki/', 'https://query.wikidata.org/', 'https://w.wiki/9DM', 'https://w.wiki/9DN', 'https://arxiv.org/abs/1710.04099', 'https://tools.wmflabs.org/wembedder/most-similar/Q51747'] | 7 |
60,023,348 | <p>You can experiment with various loss functions instead of cross entropy. For multi-class segmentation, you can try:</p>
<ul>
<li>generalized dice loss</li>
<li>dice loss (summed across all classes; a sketch follows this list)</li>
<li><a href="https://github.com/umbertogriffo/focal-loss-keras/blob/master/losses.py" rel="nofollow noreferrer">categorical focal loss</a></li>
<li><a href="https://arxiv.org/abs/1812.07032" rel="nofollow noreferrer">boundary loss</a></li>
</ul>
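<p>Of the options above, here is a minimal sketch of the summed (soft) dice loss, assuming one-hot <code>y_true</code> shaped <code>(batch, pixels, n_classes)</code>; the smoothing constant is illustrative:</p>
<pre><code>import tensorflow.keras.backend as K

def dice_loss(y_true, y_pred, smooth=1e-6):
    # y_true, y_pred: (batch, pixels, n_classes); dice per class, then averaged
    intersection = K.sum(y_true * y_pred, axis=(0, 1))
    union = K.sum(y_true, axis=(0, 1)) + K.sum(y_pred, axis=(0, 1))
    dice = (2.0 * intersection + smooth) / (union + smooth)
    return 1.0 - K.mean(dice)
</code></pre>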
<p>The winner of brats 2018 used autoencoder regularization (<a href="https://github.com/IAmSuyogJadhav/3d-mri-brain-tumor-segmentation-using-autoencoder-regularization" rel="nofollow noreferrer">https://github.com/IAmSuyogJadhav/3d-mri-brain-tumor-segmentation-using-autoencoder-regularization</a>). You could try this as well. The idea in that paper is that the model is also learning how to better encode the features in the latent space, and that helps the model with segmentation somehow.</p> | 2020-02-02 05:00:30.300000+00:00 | 2020-02-02 05:00:30.300000+00:00 | null | null | 60,019,869 | <p>I have been using U-nets for a while now, and notice that in most of my applications, it generates an over-estimation surrounding a specific class. </p>
<p>For example, here's a grayscale image:</p>
<p><a href="https://i.stack.imgur.com/cxZgy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cxZgy.png" alt="enter image description here"></a></p>
<p>And a manual segmentation of 3 classes (lesion [green], tissue [magenta], background [all else]):</p>
<p><a href="https://i.stack.imgur.com/t9I9W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t9I9W.png" alt="enter image description here"></a></p>
<p>The issue I notice on prediction (over-estimation at boundaries):</p>
<p><a href="https://i.stack.imgur.com/Pzogc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pzogc.png" alt="enter image description here"></a></p>
<p>The typical architecture used looks something like this:</p>
<pre><code>def get_unet(dim=128, dropout=0.5, n_classes=3):
inputs = Input((dim, dim, 1))
conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
conv4 = Dropout(dropout)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
conv5 = Dropout(dropout)(conv5)
up6 = concatenate([UpSampling2D(size=(2, 2))(conv5), conv4], axis=3)
conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up6)
conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
up7 = concatenate([UpSampling2D(size=(2, 2))(conv6), conv3], axis=3)
conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up7)
conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)
up8 = concatenate([UpSampling2D(size=(2, 2))(conv7), conv2], axis=3)
conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up8)
conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)
up9 = concatenate([UpSampling2D(size=(2, 2))(conv8), conv1], axis=3)
conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up9)
conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv10 = Conv2D(n_classes, (1, 1), activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
conv10 = Reshape((dim * dim, n_classes))(conv10)
output = Activation('softmax')(conv10)
model = Model(inputs=[inputs], outputs=[output])
return model
</code></pre>
<p>Plus:</p>
<pre><code>mgpu_model.compile(optimizer='adadelta', loss='categorical_crossentropy',
metrics=['accuracy'], sample_weight_mode='temporal')
open(p, 'w').write(json_string)
model_checkpoint = callbacks.ModelCheckpoint(f, save_best_only=True)
reduce_lr_cback = callbacks.ReduceLROnPlateau(
monitor='val_loss', factor=0.2,
patience=5, verbose=1,
min_lr=0.05 * 0.0001)
h = mgpu_model.fit(train_gray, train_masks,
batch_size=64, epochs=50,
verbose=1, shuffle=True, validation_split=0.2, sample_weight=sample_weights,
callbacks=[model_checkpoint, reduce_lr_cback])
</code></pre>
<p><strong>My Question:</strong>
Do you have any insight or suggestion on how to change either the architecture or hyperparameters to mitigate the over-estimation? This could include even using a different architecture that may be better at more precise segmentation. (Please note I already do class balancing/weighting to compensate for imbalances in class frequency)</p> | 2020-02-01 18:24:38.070000+00:00 | 2020-02-03 03:29:08.763000+00:00 | 2020-02-03 03:29:08.763000+00:00 | python|tensorflow|keras|conv-neural-network|image-segmentation | ['https://github.com/umbertogriffo/focal-loss-keras/blob/master/losses.py', 'https://arxiv.org/abs/1812.07032', 'https://github.com/IAmSuyogJadhav/3d-mri-brain-tumor-segmentation-using-autoencoder-regularization'] | 3 |
67,313,863 | <p>Sorry to say, but if the face recognition is good it should not recognize cartoon faces: it's designed to recognize human faces and therefore should only tell you how many human faces there are in the image; otherwise it's a badly designed algorithm. If you want a machine-learning algorithm to recognize cartoon faces, you would have to train it yourself for that specific task.</p>
<p>I did a quick search on Google and the first thing I found was an article named "Cartoon Face Recognition: A Benchmark Dataset" at <a href="https://arxiv.org/pdf/1907.13394.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1907.13394.pdf</a> . Maybe you can find an already existing machine-learning algorithm that has been trained to recognize cartoon faces.</p>
<p>Hope this helped and I hope you find what you're looking for.</p>
<p>--------------------------------EDIT--------------------------------</p>
<p>I found these two git repositories, could be worth looking into more</p>
<p><a href="https://github.com/srvCodes/Cartoon-Face-Detection-and-Recognition" rel="nofollow noreferrer">https://github.com/srvCodes/Cartoon-Face-Detection-and-Recognition</a>
<a href="https://github.com/hako/dissertation" rel="nofollow noreferrer">https://github.com/hako/dissertation</a></p>
<p>The last link is about emotions of cartoon characters.</p> | 2021-04-29 08:38:40.950000+00:00 | 2021-04-29 08:46:34.637000+00:00 | 2021-04-29 08:46:34.637000+00:00 | null | 67,313,707 | <p>I am trying to do face recognition with the Python <a href="https://pypi.org/project/face-recognition/" rel="nofollow noreferrer">Face-recognition</a> library</p>
<p>I have tried the code below on the image below</p>
<p>Code :</p>
<pre><code>import face_recognition
image = face_recognition.load_image_file("img/bill.jpeg")
property(image)
face_locations = face_recognition.face_locations(image)
print(len(face_locations))
</code></pre>
<p>For the image below I am getting a total face count of 6</p>
<p>Image :</p>
<p><a href="https://i.stack.imgur.com/AUVbn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AUVbn.jpg" alt="enter image description here" /></a></p>
<p>But when I try it on a cartoon image</p>
<p><a href="https://i.stack.imgur.com/oaY5v.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oaY5v.jpg" alt="enter image description here" /></a></p>
<p>I am getting an output of 0</p>
<p>How can I recognise cartoon faces with face-recognition?</p> | 2021-04-29 08:27:36.487000+00:00 | 2021-04-29 08:46:34.637000+00:00 | null | python|machine-learning|face-recognition | ['https://arxiv.org/pdf/1907.13394.pdf', 'https://github.com/srvCodes/Cartoon-Face-Detection-and-Recognition', 'https://github.com/hako/dissertation'] | 3 |
66,329,722 | <p>The answer depends very much on what you aim to use the representation from the autoencoder for. Each autoencoder needs something that makes the autoencoding task hard, so it needs a rich intermediate representation to solve the task. It can be either a bottleneck in the architecture (as in the case of the vanilla encoder-decoder model) or adding noise in the source side (you can view BERT as a special case of denoising autoencoder where some input tokens are masked).</p>
<p>If you do not introduce any noise on the source side, the autoencoder would learn to simply copy the input without learning anything beyond the identity of input/output symbols – the attention would break the bottleneck property of the vanilla model. The same holds also for the case of labeling the encoder states.</p>
<p>There are sequence-to-sequence autoencoders (<a href="https://arxiv.org/abs/1910.13461" rel="nofollow noreferrer">BART</a>, <a href="https://arxiv.org/abs/1905.02450" rel="nofollow noreferrer">MASS</a>) that use encoder-decoder attention. The generated noise includes masking and randomly permuting tokens. The representation that they learn is then more suitable for sequence-to-sequence tasks (such as text summarization or low-resource machine translation) than representations from encoder-only models such as BERT.</p> | 2021-02-23 08:40:51.693000+00:00 | 2021-02-23 08:40:51.693000+00:00 | null | null | 58,145,570 | <p>I am struggling with the concept of attention in the the context of autoencoders. I believe I understand the usage of attention with regards to seq2seq translation - after training the combined encoder and decoder, we can use both encoder and decoder to create (for example) a language translator. Because we are still using the decoder in production, we can take advantage of the attention mechanism.</p>
<p>However, what if the main goal of the autoencoder is mainly to produce a latent compressed representation of the input vector? I am talking about cases where we can essentially dispose of the decoder part of the model after training.</p>
<p>For example, if I use an LSTM without attention, the "classic" approach is to use the last hidden state as the context vector - it should represent the main features of my input sequence. If I were to use an LSTM with attention, my latent representation would have to be <em>all</em> hidden states per time step. This doesn't seem to fit into the notion of input compression and of keeping the main features. Its likely that the dimensionality may even be siginificantly higher.</p>
<p>Additionally, if I needed to use all hidden states as my latent representation (like in the attention case) - why use attention at all? I could just use all hidden states to initialize the decoder.</p> | 2019-09-28 10:49:20.987000+00:00 | 2021-02-23 08:40:51.693000+00:00 | null | lstm|recurrent-neural-network|autoencoder|dimensionality-reduction|attention-model | ['https://arxiv.org/abs/1910.13461', 'https://arxiv.org/abs/1905.02450'] | 2 |
72,655,486 | <p>No it won't work like this. A Neural Network is a non-invertible function.</p>
<p>If instead you start from internal representations, apparently it's possible to do something: <a href="https://arxiv.org/abs/2107.06304" rel="nofollow noreferrer">https://arxiv.org/abs/2107.06304</a></p> | 2022-06-17 07:02:11.227000+00:00 | 2022-06-17 07:02:11.227000+00:00 | null | null | 72,655,348 | <p>If i reverse image classification NN and feed it with some class label will it generate image?</p> | 2022-06-17 06:48:22.057000+00:00 | 2022-06-17 07:02:11.227000+00:00 | null | machine-learning|deep-learning|neural-network|classification | ['https://arxiv.org/abs/2107.06304'] | 1 |
69,976,668 | <p>Here is a list of references on multiple intent detection (some with links to GitHub repos):</p>
<ol>
<li>GL-GIN: Fast and Accurate Non-Autoregressive Model for Joint Multiple Intent Detection and Slot Filling (2021)</li>
</ol>
<ul>
<li><p><a href="https://arxiv.org/pdf/2106.01925v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2106.01925v1.pdf</a></p>
</li>
<li><p><a href="https://github.com/yizhen20133868/GL-GIN" rel="nofollow noreferrer">https://github.com/yizhen20133868/GL-GIN</a></p>
</li>
</ul>
<ol start="2">
<li>Joint Multiple Intent Detection and Slot Labeling for Goal-Oriented Dialog (2019)</li>
</ol>
<ul>
<li><a href="https://aclanthology.org/N19-1055.pdf" rel="nofollow noreferrer">https://aclanthology.org/N19-1055.pdf</a></li>
</ul>
<ol start="3">
<li>Towards Open Intent Discovery for Conversational Text (2019)</li>
</ol>
<ul>
<li><a href="https://arxiv.org/pdf/1904.08524.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1904.08524.pdf</a></li>
</ul>
<ol start="4">
<li>MULTI-CLASS CLASSIFICATION WITHOUT MULTICLASS LABELS (2019)</li>
</ol>
<ul>
<li><p><a href="https://openreview.net/pdf?id=SJzR2iRcK7" rel="nofollow noreferrer">https://openreview.net/pdf?id=SJzR2iRcK7</a></p>
</li>
<li><p><a href="https://github.com/yizhen20133868/GL-GIN" rel="nofollow noreferrer">https://github.com/yizhen20133868/GL-GIN</a></p>
</li>
</ul> | 2021-11-15 15:15:30.690000+00:00 | 2021-11-15 15:15:30.690000+00:00 | null | null | 58,135,568 | <p>I am trying to build a Model that can take the User conversation, that involves dialogues, as Input and Find all the Intents involved in it. This is Basically an Intent Detection Problem. However, Normally labeling Sentences and extracting the features out of it and building an Intent classifier wouldn't work here because Multiple Intents might be Available in a Single Conversation. Is there any Tool / Way / any pipeline that I should follow to achieve this Use case.</p> | 2019-09-27 13:30:02.110000+00:00 | 2021-11-15 15:15:30.690000+00:00 | null | machine-learning|nlp|text-classification|multiclass-classification | ['https://arxiv.org/pdf/2106.01925v1.pdf', 'https://github.com/yizhen20133868/GL-GIN', 'https://aclanthology.org/N19-1055.pdf', 'https://arxiv.org/pdf/1904.08524.pdf', 'https://openreview.net/pdf?id=SJzR2iRcK7', 'https://github.com/yizhen20133868/GL-GIN'] | 6 |
55,280,438 | <p>FastText is based on <a href="https://arxiv.org/abs/1607.01759" rel="nofollow noreferrer">WordNGrams</a>, which means that you need to feed <strong><em>a complete sentence</em></strong> as input to the algorithm.</p>
<p>In your example, you're passing only a unigram to the algorithm, and depending on the number of <code>WordNGrams</code> that you're using in the parameters, your model is not able to learn. </p>
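<p>A hedged sketch of what training on complete sentences with word n-grams enabled could look like, using the fasttext Python bindings (the file name and the example lines are made up):</p>
<pre><code>import fasttext

# train.txt would contain one labelled *sentence* per line, e.g.
#   __label__URL the datasheet is available at http://ws.com/pdfs/m412b.pdf
#   __label__PN please quote for part number 74ACT399MTC
model = fasttext.train_supervised(input="train.txt",
                                  wordNgrams=2, epoch=25, lr=0.5)
print(model.predict("can you source 54LS76A for us"))
</code></pre>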
<p><a href="https://www.urbandictionary.com/define.php?term=ELI5" rel="nofollow noreferrer">ELI5</a>: The algorithm it's saying: I'm able to learn complex sentences because the structure of the words and their combination, but you're sending to me only words. I cannot handle that.</p> | 2019-03-21 12:27:49.290000+00:00 | 2019-03-21 12:27:49.290000+00:00 | null | null | 54,291,161 | <p>am trying to use fasttext to label some data <code>[url]</code>or<code>[PN]</code> just to test it
after training on <strong>6k</strong> of each label and upon predicting it keeps predicting [PN]</p>
<p>training command</p>
<pre><code>fasttext supervised -input input.txt -output model -minn 0 -maxn 0 -epoch 100 -lr 0.1
</code></pre>
<p>sample training data </p>
<pre><code>__label__PN 5962-8904XA
__label__PN 585DD4P54ZP
__label__PN GQ0B11400FCT
__label__URL http://ws.com/qd/lat/ispls32883.pdf
__label__URL http://ws.com/pdfs//2004/0423/ds/m412b.pdf
__label__URL http://ws.com/pdfs//2004/0423/mc68.pdf
</code></pre>
<p>sample test data</p>
<pre><code>945
74ACT399MTC
http://www.msn.com/mylink.pdf
MQ8797BH
74AC1153
ICL762PA+
54LS3482A
54LS76A/B
54HC27/A
www.google.com
</code></pre> | 2019-01-21 13:33:32.870000+00:00 | 2019-03-21 12:27:49.290000+00:00 | null | text-classification|fasttext | ['https://arxiv.org/abs/1607.01759', 'https://www.urbandictionary.com/define.php?term=ELI5'] | 2 |
42,923,723 | <p>I've gone through numerous books and articles and here are my findings. Maybe they will help others like me.<br />
Regarding theory - I found an article <a href="https://arxiv.org/abs/1302.6613" rel="nofollow noreferrer">An Introductory Study on Time Series Modeling and Forecasting</a> very well written. That doesn't mean I understood all of its contents, but it's a really good overview of available time series models. <br />
If you're like me and like to see some actual code - there's <a href="https://www.quantstart.com/articles#time-series-analysis" rel="nofollow noreferrer">article series on QuantStart</a>. Examples are in R, but I guess many of them are portable to Python.
<br />
I can highly recommend <a href="https://www.quantstart.com/" rel="nofollow noreferrer">QuantStart blog</a> by Michael Halls-Moore, I found articles easy to read and the author has done a great job trying not to overwhelm a reader with math. I also read Michael's <a href="https://www.quantstart.com/successful-algorithmic-trading-ebook" rel="nofollow noreferrer">first book</a> and it's a good one for a beginner in the space like me. <br />
Textbooks on the topic are extremely hard for me to read. I tried <a href="http://amzn.to/2n32JKn" rel="nofollow noreferrer">Time Series Analysis by Hamilton</a>, but haven't gotten far. <br />
Regarding outlier detection I mentioned - I've found <a href="https://stackoverflow.com/questions/3390458/simple-algorithm-for-online-outlier-detection-of-a-generic-time-series">this question on SO</a> and <a href="https://stats.stackexchange.com/questions/1142/simple-algorithm-for-online-outlier-detection-of-a-generic-time-series">its stats counterpart</a>. By the looks of it, it's not something you can study and implement in a couple of evenings, at least not for me.</p> | 2017-03-21 09:59:34.437000+00:00 | 2017-03-21 09:59:34.437000+00:00 | 2017-05-23 12:17:17.153000+00:00 | null | 39,733,723 | <p>I'm a programmer who is interested in processing and analyzing time-series data. I know basic statistics and math, but I'm afraid that's all.<br />
Can you please recommend good books and/or articles that <strong>do not require a Ph.D. to understand</strong>? <br />
As for my concrete tasks - I want to be able to spot trends, eliminate outliers, be able to make predictions and calculate stats over a range of values. We have quite a bit of events coming off our systems. <br />
I started reading "Introduction to Time Series and Forecasting" by Brockwell and Davis - and I'm completely lost in math. <br />
<strong>update on outliers</strong> by outliers I mean data points that doesn't necessarily make sense. e.g. the exchange rate is 1.5$(+-10 cents) for a pound on average, but a guy around the corner offers 1.09$ and says he's completely legit. <br /></p> | 2016-09-27 20:27:54.920000+00:00 | 2017-03-21 09:59:34.437000+00:00 | 2016-09-27 22:08:09.333000+00:00 | time-series|article | ['https://arxiv.org/abs/1302.6613', 'https://www.quantstart.com/articles#time-series-analysis', 'https://www.quantstart.com/', 'https://www.quantstart.com/successful-algorithmic-trading-ebook', 'http://amzn.to/2n32JKn', 'https://stackoverflow.com/questions/3390458/simple-algorithm-for-online-outlier-detection-of-a-generic-time-series', 'https://stats.stackexchange.com/questions/1142/simple-algorithm-for-online-outlier-detection-of-a-generic-time-series'] | 7 |
52,211,241 | <p>There cannot be stricto sensu an equivalent of <code>asm</code>, because it is essentially for <em>compiled</em> languages (and <code>asm</code> is possible in C because C compilers emit assembler code!).</p>
<p>I have published in my <a href="https://arxiv.org/pdf/1109.0779.pdf" rel="nofollow noreferrer">DSL2011</a> paper a description of
<em>MELT - a Translated Domain Specific Language Embedded in the GCC Compiler</em></p>
<p>I describe in that paper several traits which help in generating C code from MELT (which is a Lisp-like language translated to C or C++).</p>
<p>But interpreted languages with a bytecode interpreter (e.g. Lua, Guile, Nim, Ocaml) provide hooks to add new primitives into that bytecode interpreter. Usually, the bytecode operation would be something like <em>invoke primitive#N with arguments arg1 arg2 arg3</em>.</p>
<p>You could implement your language (some DSL) as a translator to C. This is <a href="https://softwareengineering.stackexchange.com/a/257873/40065">usual practice</a>, and quite fun to do. You then code some "naive" compiler from your language to C. You could consider instead using some JIT-compiling library like <a href="https://gcc.gnu.org/onlinedocs/jit/" rel="nofollow noreferrer">libgccjit</a> or <a href="http://llvm.org/" rel="nofollow noreferrer">LLVM</a> or libjit or lightning or <a href="https://github.com/asmjit/asmjit" rel="nofollow noreferrer">asmjit</a>.</p>
<p>And some languages are <a href="https://github.com/asmjit/asmjit" rel="nofollow noreferrer">homoiconic</a>; they then somehow expose their bytecode or some good-enough IR. Learn <a href="https://en.wikipedia.org/wiki/Lisp_(programming_language)" rel="nofollow noreferrer">Lisp</a> (at least read <a href="https://mitpress.mit.edu/sites/default/files/sicp/index.html" rel="nofollow noreferrer">SICP</a>), then read <a href="https://en.wikipedia.org/wiki/Lisp_in_Small_Pieces" rel="nofollow noreferrer"><em>Lisp In Small Pieces</em></a>.</p>
<p>Be aware of <a href="https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule" rel="nofollow noreferrer"><em>Greenspun's tenth rule</em></a>. Look into <a href="https://archive.fosdem.org/2018/schedule/event/alternative_histories/" rel="nofollow noreferrer"><em>The circuit less traveled</em></a> talk by Liam Proven at FOSDEM 2018.</p> | 2018-09-06 19:50:11.490000+00:00 | 2018-09-06 20:03:11.037000+00:00 | 2018-09-06 20:03:11.037000+00:00 | null | 52,211,170 | <p>I'm curious to know if any language exists that gives the programmer the possibility to 'emit' bytecode in the middle of the source code. To be more clear, is there any interpreted language that has a facility similar to the <code>asm</code> keyword for c/c++ ? </p> | 2018-09-06 19:45:02.587000+00:00 | 2018-09-06 23:15:37.163000+00:00 | 2018-09-06 20:12:00.200000+00:00 | assembly|keyword|bytecode|inline-assembly|interpreted-language | ['https://arxiv.org/pdf/1109.0779.pdf', 'https://softwareengineering.stackexchange.com/a/257873/40065', 'https://gcc.gnu.org/onlinedocs/jit/', 'http://llvm.org/', 'https://github.com/asmjit/asmjit', 'https://github.com/asmjit/asmjit', 'https://en.wikipedia.org/wiki/Lisp_(programming_language)', 'https://mitpress.mit.edu/sites/default/files/sicp/index.html', 'https://en.wikipedia.org/wiki/Lisp_in_Small_Pieces', 'https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule', 'https://archive.fosdem.org/2018/schedule/event/alternative_histories/'] | 11 |
61,835,511 | <p>I have been dealing with a very similar problem and came across a reasonably robust solution. <a href="https://arxiv.org/pdf/1309.4168.pdf" rel="noreferrer">This paper</a> shows that a linear relationship can be defined between two Word2Vec models that have been trained on different languages. This means you can derive a translation matrix to convert word embeddings from one language model into the vector space of another language model. What does all of that mean? It means I can take a word from one language, and find words in the other language that have a similar meaning.</p>
<p>I've written a small Python package that implements this for you: <a href="https://pypi.org/project/transvec/" rel="noreferrer">transvec</a>. Here's an example where I use pre-trained models to search for Russian words and find English words with a similar meaning:</p>
<pre><code>import gensim.downloader
from transvec.transformers import TranslationWordVectorizer
# Pretrained models in two different languages.
ru_model = gensim.downloader.load("word2vec-ruscorpora-300")
en_model = gensim.downloader.load("glove-wiki-gigaword-300")
# Training data: pairs of English words with their Russian translations.
# The more you can provide, the better.
train = [
("king", "царь_NOUN"), ("tsar", "царь_NOUN"),
("man", "мужчина_NOUN"), ("woman", "женщина_NOUN")
]
bilingual_model = TranslationWordVectorizer(en_model, ru_model).fit(train)
# Find words with similar meanings across both languages.
bilingual_model.similar_by_word("царица_NOUN", 1) # "queen"
# [('king', 0.7763221263885498)]
</code></pre>
<p>Don't worry about the weird POS tags on the Russian words - this is just a quirk of the particular pre-trained model I used.</p>
<p>So basically, if you can provide a list of words with their translations, then you can train a <code>TranslationWordVectorizer</code> to translate <em>any</em> word that exists in your source language corpus into the target language. When I used this for real, I produced some training data by extracting all the individual Russian words from my data, running them through Google Translate and then keeping everything that translated to a single word in English. The results were pretty good (sorry I don't have any more detail for the benchmark yet; it's still a work in progress!).</p> | 2020-05-16 10:43:50.913000+00:00 | 2020-05-16 10:43:50.913000+00:00 | null | null | 51,233,632 | <p>This problem is going completely over my head. I am training a Word2Vec model using gensim. I have provided data in multiple languages i.e. English and Hindi. When I am trying to find the words closest to 'man', this is what I am getting:</p>
<pre><code>model.wv.most_similar(positive = ['man'])
Out[14]:
[('woman', 0.7380284070968628),
('lady', 0.6933152675628662),
('monk', 0.6662989258766174),
('guy', 0.6513140201568604),
('soldier', 0.6491742134094238),
('priest', 0.6440571546554565),
('farmer', 0.6366012692451477),
('sailor', 0.6297377943992615),
('knight', 0.6290514469146729),
('person', 0.6288090944290161)]
--------------------------------------------
</code></pre>
<p>Problem is, these are all English words. Then I tried to find similarity between same meaning Hindi and English words, </p>
<pre><code>model.similarity('man', 'आदमी')
__main__:1: DeprecationWarning: Call to deprecated `similarity` (Method will
be removed in 4.0.0, use self.wv.similarity() instead).
Out[13]: 0.078265618974427215
</code></pre>
<p>This accuracy should have been better than all the other accuracies. The Hindi corpus I have has been made by translating the English one. Hence the words appear in similar contexts. Hence they should be close.</p>
<p>This is what I am doing here:</p>
<pre><code>#Combining all the words together.
all_reviews=HindiWordsList + EnglishWordsList
#Training FastText model
cpu_count=multiprocessing.cpu_count()
model=Word2Vec(size=300,window=5,min_count=1,alpha=0.025,workers=cpu_count,max_vocab_size=None,negative=10)
model.build_vocab(all_reviews)
model.train(all_reviews,total_examples=model.corpus_count,epochs=model.iter)
model.save("word2vec_combined_50.bin")
</code></pre> | 2018-07-08 15:46:00.683000+00:00 | 2020-05-16 10:43:50.913000+00:00 | 2018-07-10 10:20:52.643000+00:00 | python|nlp|artificial-intelligence|word2vec|gensim | ['https://arxiv.org/pdf/1309.4168.pdf', 'https://pypi.org/project/transvec/'] | 2 |
25,778,367 | <p>Work by Leskovec, Kleinberg, and Faloutsos has examined this question specifically [<a href="http://www.cs.cmu.edu/afs/cs/Web/People/jure/pubs/powergrowth-kdd05.pdf" rel="nofollow">1</a>,<a href="http://arxiv.org/pdf/physics/0603229.pdf" rel="nofollow">2</a>]. They find:</p>
<blockquote>
<p>"First, graphs densify over time, with the number of edges crowing super-linearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes."</p>
</blockquote> | 2014-09-11 03:02:11.107000+00:00 | 2014-09-11 03:02:11.107000+00:00 | null | null | 25,769,752 | <p>My question is with regard to the increase/decrease of the diameter of a network. I'm thinking that as one adds more nodes to an existing network, the density should effectively increase and the probability of the edges created by the new nodes could result in higher degree of clustering. If this is the case, my assumption is that the diameter of the network should decrease as we add more nodes, owing to the probability that shorter geodesic paths can now exist and become the new diameter. Am I wrong with this logic? Or is there a better explanation or perhaps something I'm missing?</p> | 2014-09-10 15:48:22.047000+00:00 | 2014-09-11 03:02:11.107000+00:00 | null | networking|graph-theory|network-analysis | ['http://www.cs.cmu.edu/afs/cs/Web/People/jure/pubs/powergrowth-kdd05.pdf', 'http://arxiv.org/pdf/physics/0603229.pdf'] | 2 |
66,806,542 | <p>Update: The Fairlearn documentation now has a FAQ section on this topic: <a href="https://fairlearn.org/main/faq.html" rel="nofollow noreferrer">https://fairlearn.org/main/faq.html</a>. Search for "Does Fairlearn support multi-class classification?"</p>
<p>Previous answer:
Fairlearn's metrics are designed for binary classification or regression. You could evaluate the various labels individually, of course. If you have a specific idea of what you'd like to see please open a <a href="https://github.com/fairlearn/fairlearn/issues/new/choose" rel="nofollow noreferrer">new feature request</a>.</p>
<p>Fairlearn does support a variety of metrics, not just accuracy. The user guide has a full list: <a href="https://fairlearn.org/v0.6.0/user_guide/assessment.html#scalar-results-from-metricframe" rel="nofollow noreferrer">https://fairlearn.org/v0.6.0/user_guide/assessment.html#scalar-results-from-metricframe</a></p>
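<p>For instance, a rough sketch of the "evaluate each label individually" idea, scoring one class (one-vs-rest) with average precision per sensitive group via <code>MetricFrame</code> (toy data and names, just to show the shape of the call):</p>
<pre><code>import numpy as np
from sklearn.metrics import average_precision_score
from fairlearn.metrics import MetricFrame

# Toy data: binary indicator for one class plus a sensitive feature.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)            # is this sample class c?
y_score = rng.random(200)                        # model score for class c
sensitive = rng.choice(["group_a", "group_b"], size=200)

mf = MetricFrame(metrics=average_precision_score,
                 y_true=y_true, y_pred=y_score,
                 sensitive_features=sensitive)
print(mf.overall)       # AP for class c over everyone
print(mf.by_group)      # AP per sensitive group
print(mf.difference())  # largest between-group gap
</code></pre>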
<p>One example that comes to mind for a paper doing multi-class classification while thinking about fairness is <a href="https://arxiv.org/abs/2003.00827" rel="nofollow noreferrer">CheXclusion</a> by Seyyed-Kalantari et al. They mostly look into TPR differences when classifying chest x-rays.</p>
<p>The Fairlearn community would definitely be interested in hearing about your use case. Perhaps there's some way we can help. Feel free to reach out via Gitter or by creating your feature request (as mentioned above).</p> | 2021-03-25 19:44:50.550000+00:00 | 2021-06-30 17:05:58.513000+00:00 | 2021-06-30 17:05:58.513000+00:00 | null | 66,574,745 | <p>Are there any metrics implemented in Fairlearn or any published papers that I can refer to for use-cases around fairness measurement of multi-class classification where the metrics are AP and not accuracy? Thanks!</p> | 2021-03-11 00:15:03.980000+00:00 | 2021-06-30 17:05:58.513000+00:00 | null | fairlearn | ['https://fairlearn.org/main/faq.html', 'https://github.com/fairlearn/fairlearn/issues/new/choose', 'https://fairlearn.org/v0.6.0/user_guide/assessment.html#scalar-results-from-metricframe', 'https://arxiv.org/abs/2003.00827'] | 4 |
70,160,474 | <p>You can refer to the gradient reversal idea from <a href="https://arxiv.org/abs/1409.7495" rel="nofollow noreferrer">https://arxiv.org/abs/1409.7495</a>.</p>
<p>But the crux of the idea is this: you have some loss function l(X,Y) where X and Y are parameters. Now you want to update X to minimize loss and update Y to maximize loss, which can be seen as minimizing -l(X,Y).</p>
<p>Essentially you want to update parameters X with dl/dX and Y with d(-l)/dY = -dl/dY. You can do this by doing a backpropagation step, modifying the gradients of Y, and applying the update. In PyTorch terms, that would be:</p>
<pre><code>loss = compute_loss()
loss.backward()
# modify gradients of Y
Y.grad.data = -Y.grad.data
optimizer.step()
optimizer.zero_grad()
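
# An equivalent alternative (my own sketch, not from the linked paper): register a
# backward hook on Y so its gradient is negated every time backward() runs, and
# then a single optimizer.step() descends on X while ascending on Y:
#
#   Y.register_hook(lambda grad: -grad)   # do this once, before training
#   loss = compute_loss()
#   loss.backward()                       # now Y.grad holds -dl/dY, X.grad holds dl/dX
#   optimizer.step()
#   optimizer.zero_grad()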
</code></pre> | 2021-11-29 19:45:05.850000+00:00 | 2021-11-29 19:45:05.850000+00:00 | null | null | 70,135,992 | <p>I have a loss function that includes two sets of parameters to learn. One is a matrix, wrt which I want to maximize the loss, other is the set of parameters for logistic regression, wrt which I want to minimize the loss.
In pytorch whenever I use loss.backward(), the loss is minimized wrt both sets of parameters and (-loss).backward() maximizes wrt both. How do I do minimax optimization wrt the sets of parameters in pytorch?
Tensorflow probably has this concept of gradient_tape and tape.watch() concept. What's the alternative in pytorch?</p> | 2021-11-27 15:09:20.543000+00:00 | 2021-11-29 19:45:05.850000+00:00 | null | optimization|pytorch|minimax | ['https://arxiv.org/abs/1409.7495'] | 1 |
8,409,211 | <p>Perhaps the best ending to an abstract I've read yet: <a href="http://arxiv.org/abs/1103.2841">"Multiplate only requires rank 3 polymorphism in addition to the normal type class mechanism of Haskell."</a>. (Oh, only rank-3 polymorphism, no big deal!)</p> | 2011-12-07 01:08:43.793000+00:00 | 2011-12-07 01:08:43.793000+00:00 | null | null | 8,405,364 | <p>I've seen a few use cases for rank-2 polymorphism (the most prominent example being the <a href="http://www.haskell.org/haskellwiki/Monad/ST" rel="noreferrer">ST monad</a>), but none for a higher rank than that. Does anyone know of such a use case?</p> | 2011-12-06 19:06:56.727000+00:00 | 2011-12-07 01:08:43.793000+00:00 | null | haskell|types|polymorphism|higher-rank-types|parametric-polymorphism | ['http://arxiv.org/abs/1103.2841'] | 1 |
8,035,940 | <p>Neighboring fractions in Farey sequences are described in Sec. 3 of Neighboring Fractions in Farey Subsequences, <a href="http://arxiv.org/abs/0801.1981" rel="nofollow">http://arxiv.org/abs/0801.1981</a> .</p> | 2011-11-07 11:38:11.063000+00:00 | 2011-11-07 11:38:11.063000+00:00 | null | null | 8,013,153 | <p>I am going to implement a Farey fraction approximation for converting limited-precision user input into possibly-repeating rationals.<br>
<a href="http://mathworld.wolfram.com/FareySequence.html" rel="nofollow">http://mathworld.wolfram.com/FareySequence.html</a></p>
<p>I can easily locate the closest Farey fraction in a sequence, and I can find Fn by recursively searching for mediant fractions by building the Stern-Brocot tree.<br>
<a href="http://mathworld.wolfram.com/Stern-BrocotTree.html" rel="nofollow">http://mathworld.wolfram.com/Stern-BrocotTree.html</a></p>
<p>However, the method I've come up with for finding the fractions in the sequence Fn seems very inefficient:<br>
(pseudo)<br></p>
<pre><code>For int i = 0 to fractions.count -2
{
if fractions[i].denominator + fractions[i+1].denominator < n
{
insert new fraction(
numerator = fractions[i].numerator + fractions[i+1].numerator
,denominator = fractions[i].denominator + fractions[i+1].denominator)
//note that fraction will reduce itself
addedAnElement = true
}
}
if addedAnElement
repeat
</code></pre>
<p>I will almost always be defining the sequence Fn where n = 10^m where m >1</p>
<p>So perhaps it might be best to build the sequence one time and cache it... but it still seems like there should be a better way to derive it. </p>
<p><strong>EDIT:</strong><br>
This paper has a promising algorithm:<br>
<a href="http://www.math.harvard.edu/~corina/publications/farey.pdf" rel="nofollow">http://www.math.harvard.edu/~corina/publications/farey.pdf</a></p>
<p>I will try to implement. <br>
The trouble is that their "most efficient" algorithm requires knowing the prior two elements. I know element one of any sequence is 1/n but finding the second element seems a challenge...</p>
<p><strong>EDIT2:</strong><br>
I'm not sure how I overlooked this:<br>
Given F0 = 1/n<br>
If n > 2 then <br>
F1 = 1/(n-1)</p>
<p>Therefore for all n > 2, the first two fractions will always be<br>
1/n, 1/(n-1) and I can implement the solution from Patrascu.<br></p>
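<p>For concreteness, a minimal sketch in Python of the standard next-term recurrence for consecutive Farey neighbours a/b and c/d in Fn (my own illustration; drop the initial 0/1 to start at 1/n, 1/(n-1) as above):</p>
<pre><code>def farey(n):
    # given neighbours a/b < c/d in F_n, the next term is
    # (k*c - a) / (k*d - b) with k = (n + b) // d
    a, b, c, d = 0, 1, 1, n
    yield (a, b)
    while c <= n:
        yield (c, d)
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b

print(list(farey(5)))
# [(0, 1), (1, 5), (1, 4), (1, 3), (2, 5), (1, 2), (3, 5), (2, 3), (3, 4), (4, 5), (1, 1)]
</code></pre>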
<p>So now, the answer to this question should prove that this solution is or isn't optimal using benchmarks.</p> | 2011-11-04 17:07:31.777000+00:00 | 2014-05-14 20:31:47.110000+00:00 | 2014-05-14 20:31:47.110000+00:00 | algorithm|fractions|rational-numbers | ['http://arxiv.org/abs/0801.1981'] | 1
72,199,526 | <p>There are multiple factors contributing to the bad predictions of your model:</p>
<ul>
<li>The dataset is small</li>
<li>The model itself you are using is quite simple</li>
<li>The training time is very short</li>
<li><s>Predicting Shakespeare sonnets will produce random gibberish even if trained right </s></li>
</ul>
<p>Try to train it for longer. This will ultimately lead to better results, although predicting correct speech based on text may be one of the hardest tasks in Machine Learning in general. For example, GPT-3, one of the models which solves this problem almost perfectly, consists of billions of parameters (see <a href="https://arxiv.org/abs/2005.14165" rel="nofollow noreferrer">here</a>).</p>
<p>EDIT: The reason why your model is performing worse than the one in the tutorial although you have more stacked RNN layers may be, that more layers need more training time. Simply increasing the number of layers will not necessarily increase your prediction quality. As I said, try to increase training time or play around with hyper parameters (learning rate, Nomralization layers, etc.).</p> | 2022-05-11 10:35:29.310000+00:00 | 2022-05-11 11:37:12.557000+00:00 | 2022-05-11 11:37:12.557000+00:00 | null | 72,198,069 | <p>I've been trying to implement a character-level language model in tensorflow based on <a href="https://www.tensorflow.org/text/tutorials/text_generation" rel="nofollow noreferrer">this tutorial</a>.</p>
<p>I would like to extend the model by allowing multiple RNN layers to be stacked. So far I've come up with this:</p>
<pre class="lang-py prettyprint-override"><code>class MyModel(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, rnn_type, rnn_units, num_layers, dropout):
super().__init__(self)
self.rnn_type = rnn_type.lower()
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
if self.rnn_type == 'gru':
rnn_layer = tf.keras.layers.GRU
elif self.rnn_type == 'lstm':
rnn_layer = tf.keras.layers.LSTM
elif self.rnn_type == 'rnn':
rnn_layer = tf.keras.layers.SimpleRNN
else:
raise ValueError(f'Unsupported RNN layer: {rnn_type}')
setattr(self, self.rnn_type, rnn_layer(rnn_units, return_sequences=True, return_state=True, dropout=dropout))
for i in range(1, num_layers):
setattr(self, f'{self.rnn_type}_{i}', rnn_layer(rnn_units, return_sequences=True, return_state=True, dropout=dropout))
self.dense = tf.keras.layers.Dense(vocab_size)
def call(self, inputs, states=None, return_state=False, training=False):
x = inputs
x = self.embedding(x, training=training)
rnn = self.get_layer(self.rnn_type)
if states is None:
states = rnn.get_initial_state(x)
x, states = rnn(x, initial_state=states, training=training)
for i in range(1, self.num_layers):
layer = self.get_layer(f'{self.rnn_type}_{i}')
x, states = layer(x, initial_state=states, training=training)
x = self.dense(x, training=training)
if return_state:
return x, states
else:
return x
model = MyModel(
vocab_size=vocab_size,
embedding_dim=embedding_dim,
rnn_type='gru',
rnn_units=512,
num_layers=3,
dropout=dropout)
</code></pre>
<p>When trained for 30 epochs on the dataset in the tutorial, this model generates some random gibberish. Now I don't know if I'm doing the stacking wrong or if the dataset is just too small.</p> | 2022-05-11 08:48:42.373000+00:00 | 2022-05-11 11:37:12.557000+00:00 | null | tensorflow | ['https://arxiv.org/abs/2005.14165'] | 1 |
32,182,854 | <p>The problem here is that the feed example you give doesn't include <a href="http://www.rssboard.org/rss-profile#element-channel-item-pubdate" rel="nofollow">pubDate</a> as a sub-element of each item... which normally helps RSS readers to detect new items. And the feed has <a href="https://validator.w3.org/feed/check.cgi?url=http%3A%2F%2Fexport.arxiv.org%2Frss%2Fastro-ph.IM" rel="nofollow">a date fault</a>.</p>
<p>If you are (as you perhaps already are) in an autopublishing process (i.e. RSS to WordPress), you could employ a <a href="http://www.thesitewizard.com/general/set-cron-job.shtml" rel="nofollow">Cron Job</a> on your web server that periodically checks whether there's a new item and, if yes, fetches it.</p>
<p>My coding skills are not sufficient, sorry, to explain how (on a site I manage, a plugin does this task).</p> | 2015-08-24 12:53:25.897000+00:00 | 2015-08-24 12:53:25.897000+00:00 | null | null | 32,181,024 | <p>I have a webpage that connects to an external site and tries to save some of the information in its RSS feed into MySQL, every time I visit this webpage. The problem is that this site updates its RSS feed daily and so if one day I forget to visit my webpage, the information from the RSS from the external site is lost. Is there a way to retrieve or find the RSS from yesterday if a website daily updates its RSS feed?</p> | 2015-08-24 11:18:54.537000+00:00 | 2015-08-24 17:45:28.103000+00:00 | 2015-08-24 13:08:37.347000+00:00 | php|mysql|rss|feed | ['http://www.rssboard.org/rss-profile#element-channel-item-pubdate', 'https://validator.w3.org/feed/check.cgi?url=http%3A%2F%2Fexport.arxiv.org%2Frss%2Fastro-ph.IM', 'http://www.thesitewizard.com/general/set-cron-job.shtml'] | 3 |
46,676,960 | <p>A logical qubit is one you can use for programming, which holds a superposition of the |0> and |1> states. It could be implemented by a simulator running on a normal binary CPU in your desktop or laptop, to let you develop and debug quantum algorithms. (<a href="https://en.wikipedia.org/wiki/Qubit#Bit_versus_qubit" rel="noreferrer">Representing an n-qubit quantum state takes 2<sup>n</sup>-1 complex numbers.</a> Presumably a simulator would use fixed width integer or floating-point representations, if rounding error is ok.)</p>
<p>A physical qubit is an actual quantum implementation of a qubit. Wikipedia has a table of various possibilities: <a href="https://en.wikipedia.org/wiki/Qubit#Physical_representation" rel="noreferrer">https://en.wikipedia.org/wiki/Qubit#Physical_representation</a>. For example, an electron that can have a superposition of spin up / spin down states.</p>
<p>Real physical qubits suffer from unwanted decoherence. This is a problem if you use them directly as logical qubits. Instead, you can implement a logical qubit on top of multiple physical qubits to get redundancy.</p>
<blockquote>
<p>From <a href="https://arxiv.org/pdf/0905.2794.pdf" rel="noreferrer">Quantum Error Correction for Beginners</a>, Devitt, Munro, and Nemoto (2013).</p>
<p><strong>THE 3-QUBIT CODE: A GOOD STARTING POINT FOR QUANTUM ERROR CORRECTION</strong></p>
<p>...</p>
<p>The 3-qubit code encodes a single logical qubit into
three physical qubits with the property that it can correct
for a single, σ<sub>x</sub>, bit-flip error.</p>
<p>The two logical basis states |0><sub>L</sub> and |1><sub>L</sub> are defined as</p>
<pre><code>|0>L = |000>, |1>L = |111>
</code></pre>
</blockquote>
<p>That paper goes on to describe other error-correction schemes that can handle more errors.</p>
<p>I barely looked at more than this in the paper myself, but this sounds very similar to classical fail-safe redundant computing where you correct for hardware failure / cosmic-ray glitches by having <a href="https://en.wikipedia.org/wiki/Triple_modular_redundancy" rel="noreferrer">triple redundancy and taking the 2 results that agree.</a> You can do this on a per-bit level for error-correction, especially in a high-error environment like space flight where cosmic rays will flip bits.</p>
<p>You can also build and program 3 separate computers (different hardware from different manufacturers, with software written by teams that don't talk to each other). Only compare their final results for the same inputs. This is what you want <a href="https://aviation.stackexchange.com/a/44353/7767">for airliner fly-by-wire control systems, and manned space flight</a>.</p>
<p>Anyway, we're getting off topic here, but I hope the analogy is useful for understanding the idea of <strong>using multiple unreliable physical computations to produce one (more) reliable logical computation</strong>.</p>
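<p>A purely classical toy sketch of that idea (my own, in Python, not from the paper): encode one logical bit as three noisy physical copies and read it out by majority vote. The logical error rate drops from p to roughly 3p<sup>2</sup>:</p>
<pre><code>import random

def noisy_copy(bit, p):
    return bit ^ (random.random() < p)   # flip the physical copy with probability p

def logical_error_rate(bit, p=0.05, trials=100000):
    errors = 0
    for _ in range(trials):
        votes = sum(noisy_copy(bit, p) for _ in range(3))
        errors += (votes >= 2) != bit    # majority-vote readout
    return errors / trials

print(logical_error_rate(1))   # ~0.007 = 3p^2 - 2p^3, much better than p = 0.05
</code></pre>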
<hr>
<p>This is sort of the opposite of what we do with modern NAND flash storage. Instead of using only one bit per cell (low or high voltage), <a href="https://en.wikipedia.org/wiki/Multi-level_cell" rel="noreferrer">https://en.wikipedia.org/wiki/Multi-level_cell</a> flash uses 4 or 8 voltage levels to store 2 or 3 bits per cell. (Or I guess 3 levels could store more than 1 bit per cell total across multiple cells, using an encoding scheme.)</p>
<p>Not that you'd want to (decoherence is enough of a problem without trying to pack more logical bits per physical thing), but some quantum systems could maybe do this. Wikipedia gives an example of a nonlinear oscillator where one level is the ground state, and another level is the first excited state. Using the 2nd and 3rd excited states could let you store 2 qubits in it. But like I said, this is not useful in real systems.</p> | 2017-10-10 22:25:56.527000+00:00 | 2017-10-11 00:22:50.817000+00:00 | 2017-10-11 00:22:50.817000+00:00 | null | 46,664,653 | <p>What is the difference between a physical and a logical qubit?</p>
<p>I hope someone can help me with this question, I can't figure out exactly what the difference is.</p>
<p>Best, Dirma</p> | 2017-10-10 10:38:43.713000+00:00 | 2019-02-16 13:26:01.607000+00:00 | null | logic|computer-science|cpu-architecture|quantum-computing|qubit | ['https://en.wikipedia.org/wiki/Qubit#Bit_versus_qubit', 'https://en.wikipedia.org/wiki/Qubit#Physical_representation', 'https://arxiv.org/pdf/0905.2794.pdf', 'https://en.wikipedia.org/wiki/Triple_modular_redundancy', 'https://aviation.stackexchange.com/a/44353/7767', 'https://en.wikipedia.org/wiki/Multi-level_cell'] | 6 |
13,634,073 | <p>You could use the <a href="http://cran.r-project.org/web/packages/SPOT/index.html" rel="nofollow">SPOT</a> packet (R programming language). It allows you to find (near-)optimal parameter settings using significantly less runs than brute force. You can use any programming language for your fitness function code, SPOT has an adapter for that, and offers an automatic mode with default setup (You don't have to worry about the design types and prediction models). It has a steep learning curve, but once you understood the basics, it is a powerful tool. <a href="http://arxiv.org/pdf/1006.4645v1.pdf" rel="nofollow">Here</a> is a quick guide; chapter 2.6 offers a concrete example. The SPOT package comes with several examples.</p> | 2012-11-29 20:22:58.900000+00:00 | 2012-11-29 20:22:58.900000+00:00 | null | null | 6,959,928 | <p>I've got a an audio processing app that takes an input audio file, processes it, and spits out a modified output audio file. This audio processing app has 10-15 parameters that affect how it processes the audio, and thus affects the content of the output audio file (it might have, say, a different frequency response, be louder, quieter, etc.). All these parameters have constrained ranges (x0 must be < 1 and > -1 for example).</p>
<p>The output audio file is evaluated by a tool that gives it a score. This tool knows what the "ideal" output should sound like, and scores the output file accordingly. A score of 1.0 means the output is ideal, i.e. the input file was processed with the best possible parameter set. A score of 0 means the output is completely wrong.</p>
<p>So with 10-15 parameters with their valid ranges, the combinations are endless! I'd be sitting here manually tweaking these parameters forever until I got the best solution. I've checked out some LP/MIP solvers (CBC, MS Solver Foundation, GLPK) but these use a mathematical equation as an objective function... you don't "plug in" an external evaluation function as far as I can see.</p>
<p>Is a LP/MIP solver the right tool to aid in the parameter tuning? Any ideas?</p>
<p>Thanks,</p>
<p>akevan</p> | 2011-08-05 17:01:31.163000+00:00 | 2013-11-26 17:30:22.620000+00:00 | 2011-08-05 17:09:11.477000+00:00 | c#|.net|optimization|constraint-programming|ms-solver-foundation | ['http://cran.r-project.org/web/packages/SPOT/index.html', 'http://arxiv.org/pdf/1006.4645v1.pdf'] | 2 |
64,830,699 | <p>There's a PR in progress that addresses these limitations. <a href="https://github.com/linux-test-project/lcov/pull/86" rel="noreferrer">https://github.com/linux-test-project/lcov/pull/86</a>.</p>
<p><a href="https://arxiv.org/abs/2008.07947" rel="noreferrer">This paper</a> explains the theory behind the implementation.</p>
<p><a href="https://user-images.githubusercontent.com/471374/99132883-bc70d780-25cc-11eb-84da-4cf92b4bdc69.png" rel="noreferrer"><img src="https://user-images.githubusercontent.com/471374/99132883-bc70d780-25cc-11eb-84da-4cf92b4bdc69.png" alt="Result" /></a></p> | 2020-11-14 03:35:39.753000+00:00 | 2020-11-14 03:35:39.753000+00:00 | null | null | 42,003,783 | <p>We are using LCOV/GCOV to produce test coverage of our projects. Recently we tried to enable branch-coverage additionally. But it looks like, this just doesn't yield the results we expected from a high-level developer view.</p>
<p>Using branch-coverage with C++ blows the report up with branches all over the place. We suspect (as the searching for the issues indicates) that mostly exception handling code creates these "hidden branches". And GCOV/LCOV doesn't seem to skip over these.</p>
<p>I created a small test project to show the problem: <a href="https://github.com/ghandmann/lcov-branch-coverage-weirdness" rel="noreferrer">https://github.com/ghandmann/lcov-branch-coverage-weirdness</a></p>
<p>Currently we use Ubuntu 16.04. with:</p>
<ul>
<li>gcc v5.4</li>
<li>lcov & genhtml v1.12</li>
</ul>
<p>Our production code is built with c++11 enabled. The minimal example isn't built with c++11 enabled, but as we experimented a bit with all different options (c++ standard, optimization, <code>-fno-exceptions</code>) we didn't come up with a passable result.</p>
<p>Anybody got some ideas? Tipps? Are we using anything the wrong way? Is this - as stated somewhere else - really expected behavior?</p>
<p><strong>Update:</strong></p>
<p>As also pointed out on the <a href="https://gcc.gnu.org/ml/gcc-help/2017-02/msg00008.html" rel="noreferrer">gcc-help mailing list</a>, these "hidden branches" occur because of exception handling. So adding the <code>-fno-exceptions</code> switch to gcc produces 100% branch coverage for "simple" programs. But when exceptions are disabled, gcc refuses to compile code which actually uses exceptions (e.g. try-catch, throw). Therefore for real production code this is not an option. Looks like, you have to simply declare ~50% coverage to be the new 100% in this case. ;)</p> | 2017-02-02 13:42:45.630000+00:00 | 2020-11-14 03:35:39.753000+00:00 | 2018-09-17 10:18:51.547000+00:00 | c++|code-coverage|gcov|lcov | ['https://github.com/linux-test-project/lcov/pull/86', 'https://arxiv.org/abs/2008.07947', 'https://user-images.githubusercontent.com/471374/99132883-bc70d780-25cc-11eb-84da-4cf92b4bdc69.png'] | 3 |
60,746,082 | <p>MiniSat is quite an old program at this point. At the very least, you should look into one of the solvers entered into a <a href="http://sat2018.forsyte.tuwien.ac.at/index.php?cat=results" rel="nofollow noreferrer">recent SAT competition</a>, e.g. <a href="https://www.labri.fr/perso/lsimon/glucose/" rel="nofollow noreferrer">Glucose</a>. The competitions have required SAT solvers to emit <a href="https://arxiv.org/pdf/1610.06229.pdf" rel="nofollow noreferrer">DRAT proofs</a> of unsatisfiability since 2013. Run whichever solver you choose and have it dump its DRAT proof into proof.out. Feed proof.out into the <a href="https://github.com/marijnheule/drat-trim" rel="nofollow noreferrer">drat-trim</a> utility with the -c option, which will produce an UNSAT core in DIMACS format. I.e.</p>
<pre><code>drat-trim originalproblem.cnf proof.out -c core.cnf
</code></pre>
<p>Note that you don't have to use a MiniSat clone; any modern solver that emits DRAT proofs can have its proof fed into drat-trim to yield an UNSAT core.</p> | 2020-03-18 19:11:34.620000+00:00 | 2020-03-18 19:11:34.620000+00:00 | null | null | 60,701,258 | <p>Is there any API call in minisat to extract unsat core or any other method for the same.</p>
<p>I want to extract the unsat core for every invocation of the solver and then work on the unsat core. </p> | 2020-03-16 06:26:25.667000+00:00 | 2020-03-18 19:11:34.620000+00:00 | null | constraint-programming|sat|sat-solvers | ['http://sat2018.forsyte.tuwien.ac.at/index.php?cat=results', 'https://www.labri.fr/perso/lsimon/glucose/', 'https://arxiv.org/pdf/1610.06229.pdf', 'https://github.com/marijnheule/drat-trim'] | 4 |
68,538,153 | <p>Yes, but be careful. You can average two tensors with <a href="https://js.tensorflow.org/api/latest/#mean" rel="nofollow noreferrer"><code>tf.mean</code></a> like <a href="https://stackoverflow.com/users/5069957/edkeveked">https://stackoverflow.com/users/5069957/edkeveked</a> said. However, remember <code>axis=0</code> should be shortened to just <code>0</code> in JavaScript.</p>
<p>Just to rewrite his code in a second way:</p>
<pre class="lang-js prettyprint-override"><code>const x = tf.tensor([1, 2, 3, 2, 3, 4], [2, 3]);
x.mean(0).print()
</code></pre>
<p><em><strong>However,</strong></em> you asked if you're doing it right, and that depends on if you're averaging as you go or not. There's an issue with a rolling average.</p>
<h2 id="example">Example:</h2>
<p>If you average (10, 20) then 30, you get (22.5) a different number than averaging (20, 30) then 10 (17.5), which is of course different from averaging all three at the same time, which would give you 20.</p>
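<p>A quick check of those numbers (my own sketch, in Python rather than the answer's JavaScript):</p>
<pre><code>def rolling_avg(values):
    avg = values[0]
    for v in values[1:]:
        avg = (avg + v) / 2      # naive rolling average: order changes the result
    return avg

print(rolling_avg([10, 20, 30]))   # 22.5
print(rolling_avg([20, 30, 10]))   # 17.5
print(sum([10, 20, 30]) / 3)       # 20.0
</code></pre>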
<p>Averages do not adhere to an order-irrelevance principle once they've been calculated. It's the division part that removes the associative property. So you'll need to either:</p>
<p><strong>A:</strong> Store all model weights and calculate a new average each time based on all previous models</p>
<p>or</p>
<p><strong>B:</strong> Add a weighting system to your federated average so more recent models do not significantly affect the system.</p>
<h2 id="which-makes-sense">Which makes sense?</h2>
<p>I recommend <strong>B</strong> in the situation that you:</p>
<ol>
<li>Don't want to or cannot store every model and weight ever submitted.</li>
<li>You know some models have seen more valid data, and should be weighted appropriately compared to blind models.</li>
</ol>
<p>You can compute a weighted average by adjusting the denominator for your existing model vs your incoming model.</p>
<p>In JavaScript you can do something simple like this to compute a weighted average between two values:</p>
<pre class="lang-js prettyprint-override"><code>const modelVal1 = 0
const modelVal2 = 1
const weight1 = 0.5
const weight2 = 1 - weight1
const average = (modelVal1 * weight1) + (modelVal2 * weight2)
</code></pre>
<p>The above code is your common evenly weighted average, but as you adjust the weight1, you are rebalancing the scales to significantly adjust the outcome in favor of <code>modelVal1</code> or <code>modelVal2</code>.</p>
<p>Obviously, you'll need to convert the JavaScript I have shown into tensor mathematical functions, but that's trivial.</p>
<p>Iterate averaging (or weighted average) with weights decaying is often used in Federated learning. See <a href="https://arxiv.org/abs/1802.08009" rel="nofollow noreferrer">Iterate averaging as regularization for stochastic gradient descent</a>, and <a href="https://arxiv.org/pdf/2103.11619.pdf" rel="nofollow noreferrer">Server Averaging for Federated Learning</a>.</p> | 2021-07-27 02:12:28.230000+00:00 | 2021-07-27 02:12:28.230000+00:00 | null | null | 55,788,696 | <p>I am implementing federated learning with tensorflowjs. But i am kind of stuck in the federated averaging process. The idea is simple: get updated weights from multiple clients and average it in the server.</p>
<p>I have trained a model on browser, got the updated weights via model.getWeights() method and sent the weights to server for averaging.</p>
<pre><code>
//get weights from multiple clients(happens i client-side)
w1 = model.getWeights(); //weights from client 1
w2 = model.getWeights(); //weights from client 2
//calculate average of the weights(server-side)
var mean_weights= [];
let length = w1.length; // length of all weights_array is same
for(var i=0; i<length; i++){
let sum = w1[i].add(w2[i]);
let mean = sum.divide(2); //got confused here, how to calculate mean of tensors ??
mean_weights.push(mean);
}
//apply updates to the model(both client-side and server-side)
model.setWeights(mean_weights);
</code></pre>
<p><strong>So my question is:
How do I calculate the mean of tensor array ?
Also, is this the right approach to perform federated averaging via tensorflowjs ?</strong></p> | 2019-04-22 03:07:58.987000+00:00 | 2021-07-27 02:12:28.230000+00:00 | 2019-04-23 07:33:50.763000+00:00 | tensorflow|tensorflow.js | ['https://js.tensorflow.org/api/latest/#mean', 'https://stackoverflow.com/users/5069957/edkeveked', 'https://arxiv.org/abs/1802.08009', 'https://arxiv.org/pdf/2103.11619.pdf'] | 4 |
70,291,940 | <p>It is a misconception that non-blocking communication was motivated by performance: it was mostly to be able to express deadlock/serialization-free communication patterns. (Indeed, actually getting overlap of communication/computation was only possible through the "Iprobe trick". Only recently with "progress threads" has it become a more systematic possibility.)</p>
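<p>To make that concrete, a minimal sketch (my own illustration, using mpi4py and assuming exactly two ranks) of the classic symmetric exchange: with blocking sends this pattern can deadlock or serialize for large messages, while posting the send as non-blocking expresses it safely:</p>
<pre><code>from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
partner = 1 - rank                                     # assumes exactly 2 ranks

req = comm.isend({"from": rank}, dest=partner, tag=0)  # post the send, don't wait
msg = comm.recv(source=partner, tag=0)                 # receive the partner's data
req.wait()                                             # complete our own send
print(rank, "received", msg)
</code></pre>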
<p>Partitioned communication is intended for multi-threaded contexts. You may call that a performance argument, or a completely new use case.</p>
<p>Persistent sends have indeed the potential for performance improvement, since various setups and buffer allocations can be amortized. However, I don't see much evidence that MPI implementations actually do this. <a href="https://arxiv.org/abs/1809.10778" rel="nofollow noreferrer">https://arxiv.org/abs/1809.10778</a></p>
<p>Finally, you're missing buffered, synchronized, and ready sends. These indeed have the potential to improve performance, though again I don't see much evidence that they do.</p> | 2021-12-09 14:51:54.640000+00:00 | 2021-12-09 14:51:54.640000+00:00 | null | null | 70,291,489 | <p>The latest version of MPI includes these types of point to point (p-p) communication:</p>
<ul>
<li>blocking</li>
<li>ordinary non-blocking</li>
<li>persistent non-blocking</li>
<li>partitioned non-blocking</li>
</ul>
<p>As far as I know, historically blocking p-p communication was the first type that existed. Then different types of non-blocking p-p communication were introduced one after the other to increase performance. For example, they allow overlap of computation and communication. But are there cases where blocking p-p communication is actually faster than the non-blocking alternatives? If no, what does justify their existence? Simply backward compatibility and their simplicity of use?</p> | 2021-12-09 14:17:39.287000+00:00 | 2021-12-09 14:51:54.640000+00:00 | null | mpi | ['https://arxiv.org/abs/1809.10778'] | 1 |
47,758,776 | <p>The best choice of model depends on your exact requirements and deployment environment. SSD Mobilenet (and other SSD models) perform inference very quickly, but with less accuracy. They are well suited to cases where fast/realtime inference is desirable or situations where computing power is limited (ie. Mobile phones or IoT). In contrast, Faster RCNN or RFCN models will yield more accurate results, but run slower.</p>
<p>Consider trying the Faster RCNN Resnet 101 models. If you want more details check out the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md" rel="nofollow noreferrer">model zoo</a> or <a href="https://arxiv.org/abs/1611.10012" rel="nofollow noreferrer">this paper</a> on the speed accuracy trade-offs of object detection architectures.</p> | 2017-12-11 18:05:48.447000+00:00 | 2017-12-11 18:05:48.447000+00:00 | null | null | 47,728,306 | <p>I have a dataset which has satellite images. So dataset is quite different from usual image datasets used for object detection. I trained ssd_mobilenet_v1_pets model, but trained model performs really bad.</p>
<p>Does it mean that ssd_mobilenet_v1_pets is not a good candidate for satellite imagery? And which one of all other available models in TF object detection would be a better performer in my case?</p> | 2017-12-09 11:54:24.660000+00:00 | 2017-12-11 18:05:48.447000+00:00 | null | tensorflow|object-detection | ['https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md', 'https://arxiv.org/abs/1611.10012'] | 2 |
61,362,386 | <p>I suggest you use a type of <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">UNet</a>. This kind of architecture has downsampling layers, followed by up sampling layers to get back to the original spatial dimensions.</p> | 2020-04-22 10:06:24.070000+00:00 | 2020-04-24 06:55:13.083000+00:00 | 2020-04-24 06:55:13.083000+00:00 | null | 59,335,245 | <p>I'm trying to do depth estimation with CNNs (this is my ultimate goal), but a problem that i found is: I just did image classifications with CNNs, using for example "CIFAR-10", "MNIST", "Cats vs Dogs", etc. To do depth estimation I need to output a new image (the NYUv2 dataset has the labeled images). So, I'll input an image like 256x256x3 and need to output another image with for example 228x228x3.</p>
<p>What do I need to do? Can I just do convolutions for a while and after that decrease the feature maps and increase the dimensions? Thanks</p>
<p>obs: I'm using Tensorflow 2.0</p> | 2019-12-14 12:49:46.337000+00:00 | 2020-04-24 06:55:13.083000+00:00 | null | python|tensorflow|keras|deep-learning|conv-neural-network | ['https://arxiv.org/abs/1505.04597'] | 1 |
42,434,106 | <p>SegNet uses the 13 convolutional layers from VGG. (2+2+3+3+3)</p>
<p>Check <a href="http://ethereon.github.io/netscope/#/preset/vgg-16" rel="nofollow noreferrer">this visualization</a> and <a href="https://arxiv.org/abs/1409.1556" rel="nofollow noreferrer">the paper</a> for more information.</p>
<p>From the paper:</p>
<blockquote>
<p>It is easy to see that a stack of two 3×3 conv. layers (without spatial pooling in between) has an effective receptive field of 5×5; three such layers have a 7 × 7 effective receptive field. So what have we gained by using, for instance, a stack of three 3×3 conv. layers instead of a single 7×7 layer? First, we incorporate three non-linear rectification layers instead of a single one, which makes the decision function more discriminative. Second, we decrease the number of parameters: assuming that both the input and the output of a three-layer 3 × 3 convolution stack has C channels, the stack is parametrised by <a href="https://i.stack.imgur.com/1ikBj.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1ikBj.gif" alt="enter image description here"></a> weights; at the same time, a single 7 × 7 conv. layer would require <a href="https://i.stack.imgur.com/5TImF.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5TImF.gif" alt="enter image description here"></a> parameters, i.e. 81% more. This can be seen as imposing a regularisation on the 7 × 7 conv. filters, forcing them to have a decomposition through the 3 × 3 filters (with non-linearity injected in between).</p>
</blockquote> | 2017-02-24 08:38:01.237000+00:00 | 2017-02-24 08:38:01.237000+00:00 | null | null | 41,969,388 | <p>In <a href="https://arxiv.org/pdf/1511.00561.pdf" rel="nofollow noreferrer">SegNet</a>, the architecture proposed by authors is shown as follows. <a href="https://i.stack.imgur.com/DhAjw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DhAjw.jpg" alt="enter image description here"></a>.</p>
<p>What confuses me is that there are two convolutional layers following with each other in each building block, just as what shown in the figure as 1 and 2. What are the major motivations to place convolution layers this way instead of aggregating them into a single convolutional layer?</p> | 2017-01-31 23:30:07.210000+00:00 | 2017-02-24 08:38:01.237000+00:00 | null | machine-learning|tensorflow|computer-vision|deep-learning|caffe | ['http://ethereon.github.io/netscope/#/preset/vgg-16', 'https://arxiv.org/abs/1409.1556', 'https://i.stack.imgur.com/1ikBj.gif', 'https://i.stack.imgur.com/5TImF.gif'] | 4 |
38,335,125 | <p>Gradient descent, by its nature, is looking at the function locally (local gradient). Hence, there is absolutely no guarantee that it will find the global minimum. In fact, it probably will not, unless the function is convex. This is also the reason GD-like methods are sensitive to the initial position you start from. Having said that, there was a recent paper which said that in high-dimensional solution spaces, the number of maxima/minima is not as large as previously thought.</p>
<p>Finding global minima in high-dimensional space in a reasonable way seems very much an unsolved problem. However, you might wanna focus more on <em>saddle points</em> rather than minima. See this post for example:</p>
<p><a href="http://kdnuggets.com/2015/11/theoretical-deep-learning.html" rel="noreferrer">High level description for saddle point problem</a></p>
<p>A more detailed paper is here (<a href="https://arxiv.org/pdf/1406.2572.pdf" rel="noreferrer">https://arxiv.org/pdf/1406.2572.pdf</a>)</p> | 2016-07-12 17:08:58.253000+00:00 | 2016-07-12 17:08:58.253000+00:00 | null | null | 38,333,121 | <p>I was playing with tensorflow for quite a while, and I have more of a theoretical question. In general, when we train a network we usually use GradientDescentOptimizer (probably its variations like adagrad or adam) to minimize the loss function. In general it looks like we are trying to adjust weights and biases so that we get the global minimum of this loss function. But the issue is that I assume that this function has an extremely complicated look if you plot it, with lots of local optimums. What I wonder is how can we be sure that Gradient Descent finds global optimum and that we are not getting instantly stuck in some local optimum instead which is far away from global optimum?</p>
<p>I recollect that for example when you are performing clustering in sklearn it usually runs clustering algorithm several times with random initialization of cluster centers, and by doing this we ensure that we are not getting stuck with not optimal result. But we are not doing something like this while training ANNs in tensorflow - we start with some random weights and just travel along the slope of the function.</p>
<p>So, any insight into this? Why we can be more or less sure that the results of training via gradient descent are close to global minimum once the loss stops to decrease significantly?</p>
<p>Just to clarify, why I am wondering about this matter is that if we can't be sure that we get at least close to global minimum we can't easily judge which of 2 different models is actually better. Because we could run experiment, get some model evaluation which shows that model is not good... But actually it just stuck in local minimum shortly after training started. While other model which seemed for us to be better was just more lucky to start training from a better starting point and didn't stuck in local minimum fast. Moreover, this issue means that we can't even be sure that we get maximum from the network architecture we currently could be testing. For example, it could have really good global minimum but it is hard to find it and we mostly get stuck with poor solutions at local minimums, which would be far away from global optimum and never see the full potential of network at hand.</p> | 2016-07-12 15:27:06.647000+00:00 | 2018-01-13 08:30:38.620000+00:00 | 2016-07-12 17:17:29.007000+00:00 | machine-learning|neural-network|tensorflow|deep-learning | ['http://kdnuggets.com/2015/11/theoretical-deep-learning.html', 'https://arxiv.org/pdf/1406.2572.pdf'] | 2 |
54,344,089 | <p>The loss function of the Mask R-CNN paper combines a <strong>weighted sum of 3 losses</strong> (the 3 outputs): classification, localization and segmentation mask:</p>
<p><img src="https://latex.codecogs.com/gif.latex?L=w_%7Bcls%7D%5Ccdot&space;L_%7Bcls%7D+w_%7Bbbox%7D%5Ccdot&space;L_%7Bbbox%7D+w_%7Bmask%7D%5Ccdot&space;L_%7Bmask%7D," title="L=w_{cls}\cdot L_{cls}+w_{bbox}\cdot L_{bbox}+w_{mask}\cdot L_{mask}," /></p>
<p>The classification and bounding-box (localization) losses are the same as in Faster R-CNN.</p>
<p><strong>What is added is a per-pixel sigmoid + binary loss for the mask</strong>.
The mask branch generates a mask for each class, without competition among classes (so if you have 10 classes the mask branch predicts 10 masks). The loss being used is per-pixel sigmoid + binary loss.</p>
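<p>To make that concrete, an illustrative sketch (my own, written PyTorch-style rather than with the TF Object Detection API; all shapes and names are made up): only the mask predicted for the ground-truth class of each RoI enters the per-pixel sigmoid + binary cross-entropy:</p>
<pre><code>import torch
import torch.nn.functional as F

num_rois, num_classes, m = 4, 10, 28
mask_logits = torch.randn(num_rois, num_classes, m, m)    # raw mask-head output
gt_classes = torch.tensor([2, 5, 5, 9])                   # ground-truth class per RoI
gt_masks = torch.randint(0, 2, (num_rois, m, m)).float()  # binary target masks

# pick the mask of the ground-truth class for each RoI -> (num_rois, m, m)
selected = mask_logits[torch.arange(num_rois), gt_classes]
L_mask = F.binary_cross_entropy_with_logits(selected, gt_masks)
print(L_mask)
</code></pre>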
<p>If you want to dive in a little bit deeper into the mask loss, the paper states that "Multinomial vs. Independent Masks: Mask R-CNN decouples mask and class prediction: as the existing box
branch predicts the class label, we generate a mask for each
class without competition among classes (by a per-pixel sigmoid and a binary loss). In Table 2b, we compare this to
using a per-pixel softmax and a multinomial loss (as commonly used in FCN [30])."</p>
<p>you can see it in the <a href="https://arxiv.org/pdf/1703.06870.pdf" rel="nofollow noreferrer">paper</a> at page number 6, table number 2.b ("Multinomial vs. Independent Masks").</p> | 2019-01-24 10:11:11.807000+00:00 | 2019-01-24 10:11:11.807000+00:00 | null | null | 54,341,514 | <p>I am training for <strong>Custom Object Detection</strong> using <strong>Mask RCNN</strong> in <strong>TensorFlow Object Detection</strong>. Therefore, I am to predict the object instance mask along with the bounding box.</p>
<p>Pre-trained model : <code>mask_rcnn_inception_v2_coco</code></p>
<p>Following is a snapshot of my training.</p>
<blockquote>
<p>INFO:tensorflow:global step 4181: loss = 0.0031 (3.290 sec/step)</p>
<p>INFO:tensorflow:global step 4181: loss = 0.0031 (3.290 sec/step)</p>
<p>INFO:tensorflow:global step 4182: loss = 0.0030 (2.745 sec/step)</p>
<p>INFO:tensorflow:global step 4182: loss = 0.0030 (2.745 sec/step)</p>
</blockquote>
<p><strong>In this case, can you please tell me what is the loss here?</strong></p>
<p>My question is not related to the training loss and its variation w.r.t. the steps.</p>
<p>I am just unclear about what is meant by this loss while training a Mask RCNN. In a Mask RCNN, there are 3 parallel heads at the last layer,</p>
<ul>
<li>for detecting the class</li>
<li>for predicting bounding box</li>
<li>for predicting instance masks</li>
</ul>
<p><strong>In such a case, what is loss?</strong></p> | 2019-01-24 07:38:08.663000+00:00 | 2019-01-24 11:55:57.227000+00:00 | 2019-01-24 11:55:57.227000+00:00 | tensorflow|computer-vision|object-detection | ['https://arxiv.org/pdf/1703.06870.pdf'] | 1 |
57,725,977 | <p>I tried researching this, but currently there is nothing with the full-fledged integration that you want. Basically, what I understood is that a multi-tenant blockchain platform (with some tweaking) will address your purpose (I am not sure whether you would achieve your 3rd requirement).</p>
<p>Here is a paper which could help you start on your idea.</p>
<p><a href="https://arxiv.org/pdf/1901.11219.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1901.11219.pdf</a></p> | 2019-08-30 11:33:57.670000+00:00 | 2019-08-30 11:33:57.670000+00:00 | null | null | 57,706,788 | <blockquote>
<p>How to build the bridge between two different consensus blockchain platforms?</p>
</blockquote>
<p>use case:</p>
<p>1. We need to ensure visibility of the transaction on both platforms to assure its integrity</p>
<ol start="2">
<li><p>When a smart contract is established on one platform, that contract can be referred to, transacted and transferred to the other platform easily and securely</p></li>
<li><p>The bridge should be able to leverage one platform or the other depending upon the transaction type and the TPS requirement. If a faster TPS is required, then use one consensus framework; otherwise, the normal mining framework of the other consensus</p></li>
</ol>
<blockquote>
<p>How to achieve consensus on the multiple decentralized oracles?</p>
</blockquote> | 2019-08-29 09:09:16.783000+00:00 | 2019-08-30 11:33:57.670000+00:00 | 2019-08-29 18:45:42.620000+00:00 | blockchain|ethereum | ['https://arxiv.org/pdf/1901.11219.pdf'] | 1 |
57,319,618 | <p>Your first question:<br>
Yes, <code>input_shape</code> must be specified in the first layer, which is, in your case, the <code>Flatten</code> layer. This is because the different #params (weights & biases) need to be initialised after compiling the model. In your case, the #params of your first <code>Dense</code> layer will depend on the <code>input_shape</code> you specified in the <code>Flatten</code> layer.</p>
<p>Second question:<br>
If the <code>Desnse</code> layer is the last layer of your model, the <code>unit</code> should clearly be #classes / #outputs you want the model to perform. However, when it comes to <code>hidden layers</code>, as far as I know, there is no such a universal rule/formula to guarantee an optimal number of <code>unit</code>. It really depends on the data you feed into the model & the complexity of the task & etc... I would say it should be chosen on trial and error basis. </p>
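<p>To make the first point concrete, a quick sketch (my own illustration, not the question's Kaggle code): with a 28x28 input, <code>Flatten</code> produces 784 features, so <code>Dense(64)</code> needs 784*64 + 64 = 50,240 parameters, a number Keras can only work out once <code>input_shape</code> is known:</p>
<pre><code>import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()   # the first Dense layer reports 50,240 params
</code></pre>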
<p>Edit:<br>
Here I found some info for your second question if you really cannot be satisfied by my answer.<br>
<a href="https://arxiv.org/abs/1905.11946" rel="nofollow noreferrer">EfficientNet paper</a>: but it is for Convolutional Neural Networks<br>
<a href="https://www.r-bloggers.com/selecting-the-number-of-neurons-in-the-hidden-layer-of-a-neural-network/" rel="nofollow noreferrer">"Rlue of thumb"</a>: ...</p> | 2019-08-02 03:41:31.553000+00:00 | 2019-08-02 03:50:15.233000+00:00 | 2019-08-02 03:50:15.233000+00:00 | null | 57,318,863 | <p>I'm new to Keras and am doing a basic Kaggle tutorial (<a href="https://www.kaggle.com/c/digit-recognizer" rel="nofollow noreferrer">The Digit Recognizer</a>). I am struggling to understand what to actually put into a <code>Dense</code> layer. I have found <a href="https://stackoverflow.com/questions/44747343/keras-input-explanation-input-shape-units-batch-size-dim-etc">this</a> post to be very helpful, but my understanding isn't quite there yet.</p>
<p>In my <code>Sequential</code> model, I'm starting off with a <code>Dense</code> layer. But, I see some posts saying that the first layer <em>must</em> have an <code>input_shape</code> whereas I see plenty of Kaggle submissions and other examples that don't adhere to this.</p>
<ol>
<li>Is an <code>input_shape</code> actually required in the first layer? Is it required at all?</li>
<li>A <code>Dense</code> layer's first argument is <code>units</code>. For the life of me, I cannot find a solid explanation on what this argument should actually be. Is there some formula to run here based on the <code>shape</code> of your data? Sometimes I see rather large numbers (something like 784) in the first <code>Dense</code> layer whereas in other cases it's small (something like 10). Or, is it a total guess?</li>
</ol>
<p>I understand that there isn't a "this is what you do for this type of data" approach to building a predictive model, but I can't understand how to even take an educated guess at what numbers to plug in here.</p>
<p>Here is my current model:</p>
<pre><code>model = Sequential()
model.add(Flatten())
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dense(10, activation=tf.nn.softmax))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
x_val = x_train[:10000]
y_val = labels[:10000]
model.fit(x_train, y_train, epochs=3) # , validation_data=(x_val, y_val))
</code></pre>
<p>My model performs <em>decent</em> (I think) as I only have about 350 misses out of 8400 images. I've got it down to about 220 with adding more layers, changing numbers, using <code>Dropout</code>, etc...</p>
<p>I'd really like to get a better understanding about the best way to understand what numbers I should be plugging in. And also, do I need an <code>input_shape</code>?</p> | 2019-08-02 01:28:41.070000+00:00 | 2019-08-02 03:50:15.233000+00:00 | 2019-08-02 01:54:03.093000+00:00 | python|tensorflow|keras | ['https://arxiv.org/abs/1905.11946', 'https://www.r-bloggers.com/selecting-the-number-of-neurons-in-the-hidden-layer-of-a-neural-network/'] | 2 |
47,618,682 | <p>The model-based GLM tree you have specified tries to fit the following model:
logit(prob) = alpha_tree(x1-x4) * t, where alpha_tree(x1-x4) is a subgroup-specific coefficient from the tree (based on partitioning variables x1-x4) for the treatment indicator t. Thus, this model does not include the possibility for an intercept or global main effects of x1 and x2 - as simulated in your data. As these main effects are quite pronounced the model has no other possibility except to capture these by further splits. Hence, the large tree in your example.</p>
<p>In the classic MOB framework (Zeileis <em>et al.</em> 2008, Journal of Computational and Graphical Statistics, <a href="http://dx.doi.org/10.1198/106186008X319331" rel="nofollow noreferrer">doi:10.1198/106186008X319331</a>), the only option would be to include every relevant regressor in the model to be partitioned, i.e., logit(mu) = beta0_tree(x1-x4) + beta1_tree(x1_x4) * x1 + beta2_tree(x1-x4) * x2 + alpha_tree(x1-x4) * t. This works and will detect subgroups with respect to only the treatment effect alpha * t as well but, of course, loses some efficiency because it re-estimates the beta coefficients in every subgroup as well. A discussion of this approach specifically for subgroup analyses is available in Seibold <em>et al.</em> (2016a), The International Journal of Biostatistics, <a href="http://dx.doi.org/10.1515/ijb-2015-0032" rel="nofollow noreferrer">doi:10.1515/ijb-2015-0032</a>.</p>
<p>Recently, we have suggested an adaptation of MOB that we called PALM tree for partially additive linear model trees (Seibold <em>et al.</em> 2016b, <a href="http://arxiv.org/abs/1612.07498" rel="nofollow noreferrer">http://arxiv.org/abs/1612.07498</a>). This allows to fit models of the type logit(mu) = beta0 + beta1 * x1 + beta2 * x2 + alpha_tree(x1-x4) * t as you simulated in your question.</p>
<p>Implementations of both the classic GLM-based MOB tree and the PALM tree are available as <code>glmtree()</code> and <code>palmtree()</code>, respectively, in <code>partykit</code> which is recommended over the old <code>party</code> implementation. Applying these to your simulated data above, yields the following. First, it is important that all categorical partitioning variables are also flagged as <code>factor</code> variables (in order to choose the right parameter instability tests):</p>
<pre><code>dt <- transform(dt, x3 = factor(x3))
</code></pre>
<p>Then, we can fit the model from which you have simulated with only a subgroup-specific treatment effect, a global main effect of x1 and x2, and partitioning based on x1, x2, x3, x4.</p>
<pre><code>library("partykit")
tr1 <- palmtree(y ~ t - 1 | x1 + x2 | x1 + x2 + x3 + x4, data = dt,
family = binomial, minsize = 100)
tr1
## Partially additive generalized linear model tree (family: binomial)
##
## Model formula:
## y ~ t - 1 | x1 + x2 + x3 + x4
##
## Fitted party:
## [1] root
## | [2] x1 <= -0.21797: n = 399
## | t
## | -0.9245345
## | [3] x1 > -0.21797: n = 601
## | t
## | 0.6033979
##
## Number of inner nodes: 1
## Number of terminal nodes: 2
## Number of parameters per node: 1
## Objective function (negative log-likelihood): 432.5717
##
## Linear fixed effects (from palm model):
## (Intercept) x1 x2
## 0.7140397 1.7589675 -1.1335779
</code></pre>
<p>Thus, this covers the most important parts of the data-generating process but fails to detect the interaction with x2.</p>
<pre><code>plot(tr1)
</code></pre>
<p><a href="https://i.stack.imgur.com/JZ65h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JZ65h.png" alt="tr1"></a></p>
<p>I played around with setting other seeds or using BIC-based post-pruning (combined with a large significance level) which sometimes could also discover the interaction with x2. Presumably, a larger sample would yield the "true" tree more often, as well. Thus, the model seems to be in principle capable of fitting the model you simulated, just not in this particular setting.</p>
<p>Personally, I would always let both the intercept <em>and</em> the treatment effect vary across subgroups, because if there are any main effects that I overlooked this is likely to yield a better model. If an intercept is included in every subgroup, then it is nicer to code both <code>y</code> and <code>t</code> as factors, yielding nicer plots in the visualization:</p>
<pre><code>dt <- transform(dt, y = factor(y), t = factor(t))
tr2 <- palmtree(y ~ t | x1 + x2 | x1 + x2 + x3 + x4, data = dt,
family = binomial, minsize = 100)
tr2
## Partially additive generalized linear model tree (family: binomial)
##
## Model formula:
## y ~ t | x1 + x2 + x3 + x4
##
## Fitted party:
## [1] root
## | [2] x1 <= -0.26515: n = 382
## | (Intercept) t1
## | 0.9839353 -1.1376981
## | [3] x1 > -0.26515: n = 618
## | (Intercept) t1
## | 0.5331386 0.6566111
##
## Number of inner nodes: 1
## Number of terminal nodes: 2
## Number of parameters per node: 2
## Objective function (negative log-likelihood): 432.3303
##
## Linear fixed effects (from palm model):
## x1 x2
## 1.964397 -1.078958
</code></pre>
<p>For this data, this fits almost the same model as above. But the display is much easier to read, showing the large difference in absolute treatment effect between the two groups:</p>
<pre><code>plot(tr2)
</code></pre>
<p><a href="https://i.stack.imgur.com/1mKSY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1mKSY.png" alt="tr2"></a></p>
<p>Finally, if using a plain old MOB without the possibility to include additive main effects, we should include the regressors x1 and x2 in every subgroup:</p>
<pre><code>tr3 <- glmtree(y ~ t + x1 + x2 | x1 + x2 + x3 + x4, data = dt,
family = binomial, minsize = 100)
tr3
## Generalized linear model tree (family: binomial)
##
## Model formula:
## y ~ t + x1 + x2 | x1 + x2 + x3 + x4
##
## Fitted party:
## [1] root
## | [2] x1 <= -0.26515: n = 382
## | (Intercept) t1 x1 x2
## | 0.5570219 -1.0511317 1.2533975 -1.3899679
## | [3] x1 > -0.26515: n = 618
## | (Intercept) t1 x1 x2
## | 0.3573041 0.6943206 2.2910053 -0.9570403
##
## Number of inner nodes: 1
## Number of terminal nodes: 2
## Number of parameters per node: 4
## Objective function (negative log-likelihood): 429.2406
</code></pre>
<p>Again, this essentially finds the same subgroups as before. However, it loses a bit of efficiency because more regression coefficients have to be estimated in each subgroup while only the <code>t</code> coefficient really changes between the subgroups.</p>
<pre><code>plot(tr3, tp_args = list(which = 1))
</code></pre>
<p><a href="https://i.stack.imgur.com/Bo2KC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bo2KC.png" alt="tr3"></a></p> | 2017-12-03 13:06:34.920000+00:00 | 2018-06-08 17:59:20.560000+00:00 | 2018-06-08 17:59:20.560000+00:00 | null | 47,613,304 | <p>I am very interested in a recent paper on "Tutorial in biostatistics: data-driven subgroup identification and analysis in clinical trials " (DOI: 10.1002/sim.7064), and I want to reproduce results in the section "Performance of the tree-based regression approach". However, the partitioning tree seems not get the result as I want.</p>
<pre><code>set.seed(123)
n <- 1000
x1 <- rnorm(n)
x2 <- runif(n)
t <- rbinom(n,1,0.5)
x3 <- rbinom(n,1,0.3)
x4 <- rnorm(n)
z <-1+ 2*x1-1.8*x2+(x1>=-0.3)*(x2>=0.4)*t-(x1< -0.3)*(x2<0.4)*t
pr = 1/(1+exp(-z))
y = rbinom(n,1,pr)
dt <- data.frame(x1,x2,x3,x4,t,y)
library(party)
mb <- mob(y~t-1|x1+x2+x3+x4,
data=dt,
model = glinearModel,
family = binomial(),
control=mob_control(minsplit=100))
plot(mb)
</code></pre>
<p><a href="https://i.stack.imgur.com/xlarg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xlarg.png" alt="regression tree"></a></p>
<p>The figure is shown above. It is supposed to split on x1 and x2 at cutoff values of -0.3 and 0.4 defined in the simulation. However, it doesn't appear to do so.x1 dominated the node partition and x2 seems not an important determinant of the partitioning process. What's wrong with the code? </p> | 2017-12-02 22:31:56.243000+00:00 | 2018-06-08 17:59:20.560000+00:00 | null | r|tree|regression|partitioning|party | ['http://dx.doi.org/10.1198/106186008X319331', 'http://dx.doi.org/10.1515/ijb-2015-0032', 'http://arxiv.org/abs/1612.07498', 'https://i.stack.imgur.com/JZ65h.png', 'https://i.stack.imgur.com/1mKSY.png', 'https://i.stack.imgur.com/Bo2KC.png'] | 6 |
57,540,363 | <blockquote>
<p>I haven't been able to find the original code on the PyTorch website anymore.</p>
</blockquote>
<pre><code>gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
print(x.grad)
</code></pre>
<p>The problem with the code above is that there is no function from which to calculate the gradients. This means we don't know how many parameters (arguments) the function takes, nor the dimensions of those parameters.</p>
<p>To fully understand this I created an example close to the original:</p>
<blockquote>
<p>Example 1:</p>
</blockquote>
<pre><code>a = torch.tensor([1.0, 2.0, 3.0], requires_grad = True)
b = torch.tensor([3.0, 4.0, 5.0], requires_grad = True)
c = torch.tensor([6.0, 7.0, 8.0], requires_grad = True)
y=3*a + 2*b*b + torch.log(c)
gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients,retain_graph=True)
print(a.grad) # tensor([3.0000e-01, 3.0000e+00, 3.0000e-04])
print(b.grad) # tensor([1.2000e+00, 1.6000e+01, 2.0000e-03])
print(c.grad) # tensor([1.6667e-02, 1.4286e-01, 1.2500e-05])
</code></pre>
<p>I assumed our function is <code>y=3*a + 2*b*b + torch.log(c)</code> and the parameters are tensors with three elements inside.</p>
<p>You can think of <code>gradients = torch.FloatTensor([0.1, 1.0, 0.0001])</code> as the accumulator of upstream gradients: the vector the Jacobian gets multiplied with.</p>
<p>As you may have heard, the PyTorch autograd calculation is equivalent to a vector-Jacobian product.</p>
<p><a href="https://i.stack.imgur.com/sDlmj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sDlmj.png" alt="Jacobian" /></a></p>
<p>In case you have a function, like we did:</p>
<pre><code>y=3*a + 2*b*b + torch.log(c)
</code></pre>
<p>Jacobian would be <code>[3, 4*b, 1/c]</code>. However, this <a href="https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant" rel="nofollow noreferrer">Jacobian</a> is not how PyTorch is doing things to calculate the gradients at a certain point.</p>
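<p>As a quick hand-check (my own sketch): because y is element-wise here, the Jacobian with respect to each input is diagonal, so the vector-Jacobian product reduces to multiplying the <code>gradients</code> vector element-wise with dy/da = 3, dy/db = 4b and dy/dc = 1/c, which reproduces the values printed in Example 1:</p>
<pre><code>import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([3.0, 4.0, 5.0])
c = torch.tensor([6.0, 7.0, 8.0])
v = torch.tensor([0.1, 1.0, 0.0001])   # the "gradients" argument

print(v * 3.0)        # matches a.grad: tensor([3.0000e-01, 3.0000e+00, 3.0000e-04])
print(v * 4.0 * b)    # matches b.grad: tensor([1.2000e+00, 1.6000e+01, 2.0000e-03])
print(v * (1.0 / c))  # matches c.grad: tensor([1.6667e-02, 1.4286e-01, 1.2500e-05])
</code></pre>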
<p>PyTorch uses forward pass and <a href="https://arxiv.org/pdf/1502.05767.pdf" rel="nofollow noreferrer">backward mode automatic differentiation</a> (AD) in tandem.</p>
<p>There is no symbolic math involved and no numerical differentiation.</p>
<blockquote>
<p>Numerical differentiation would be to calculate <code>δy/δb</code>, for <code>b=1</code> and <code>b=1+ε</code> where ε is small.</p>
</blockquote>
<p>If you don't use gradients in <code>y.backward()</code>:</p>
<blockquote>
<p>Example 2</p>
</blockquote>
<pre><code>a = torch.tensor(0.1, requires_grad = True)
b = torch.tensor(1.0, requires_grad = True)
c = torch.tensor(0.1, requires_grad = True)
y=3*a + 2*b*b + torch.log(c)
y.backward()
print(a.grad) # tensor(3.)
print(b.grad) # tensor(4.)
print(c.grad) # tensor(10.)
</code></pre>
<p>You will simply get the result at a point, based on how you set your <code>a</code>, <code>b</code>, <code>c</code> tensors initially.</p>
<p>Be careful how you initialize your <code>a</code>, <code>b</code>, <code>c</code>:</p>
<blockquote>
<p>Example 3:</p>
</blockquote>
<pre><code>a = torch.empty(1, requires_grad = True, pin_memory=True)
b = torch.empty(1, requires_grad = True, pin_memory=True)
c = torch.empty(1, requires_grad = True, pin_memory=True)
y=3*a + 2*b*b + torch.log(c)
gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
print(a.grad) # tensor([3.3003])
print(b.grad) # tensor([0.])
print(c.grad) # tensor([inf])
</code></pre>
<p>If you use <code>torch.empty()</code> and don't use <code>pin_memory=True</code> you may have different results each time.</p>
<p>Also, note that gradients accumulate across backward passes, so zero them when needed.</p>
<blockquote>
<p>Example 4:</p>
</blockquote>
<pre><code>a = torch.tensor(1.0, requires_grad = True)
b = torch.tensor(1.0, requires_grad = True)
c = torch.tensor(1.0, requires_grad = True)
y=3*a + 2*b*b + torch.log(c)
y.backward(retain_graph=True)
y.backward()
print(a.grad) # tensor(6.)
print(b.grad) # tensor(8.)
print(c.grad) # tensor(2.)
</code></pre>
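<p>That accumulation is also why training loops reset the gradients between iterations; a minimal sketch of my own:</p>
<pre><code>import torch

a = torch.tensor(1.0, requires_grad=True)
y = 3 * a
y.backward(retain_graph=True)
print(a.grad)      # tensor(3.)

a.grad.zero_()     # reset the accumulator before the next backward pass
y.backward()
print(a.grad)      # tensor(3.) again, instead of the accumulated tensor(6.)
</code></pre>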
<p>Lastly, a few tips on the terms PyTorch uses:</p>
<p>PyTorch builds a <strong>dynamic computational graph</strong> during the forward pass; the gradients are then calculated on this graph. It looks much like a tree.</p>
<p>So you will often hear that the <em>leaves</em> of this tree are the <strong>input tensors</strong> and the <em>root</em> is the <strong>output tensor</strong>.</p>
<p>Gradients are calculated by tracing the graph from the root to the leaves and multiplying every gradient along the way using the <strong>chain rule</strong>. This multiplication happens in the backward pass.</p>
<p>Some time back I created a <a href="https://programming-review.com/pytorch/ad" rel="nofollow noreferrer">PyTorch Automatic Differentiation tutorial</a> that you may find interesting; it explains all the tiny details about AD.</p> | 2019-08-17 22:25:19.260000+00:00 | 2021-12-03 21:55:12.377000+00:00 | 2021-12-03 21:55:12.377000+00:00 | null | 43,451,125 | <p>I am reading through the documentation of PyTorch and found an example where they write </p>
<pre><code>gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
print(x.grad)
</code></pre>
<p>where x was an initial variable, from which y was constructed (a 3-vector). The question is, what are the 0.1, 1.0 and 0.0001 arguments of the gradients tensor ? The documentation is not very clear on that.</p> | 2017-04-17 12:04:14.160000+00:00 | 2021-12-03 21:55:12.377000+00:00 | 2019-04-08 16:00:41.473000+00:00 | neural-network|gradient|pytorch|torch|gradient-descent | ['https://i.stack.imgur.com/sDlmj.png', 'https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant', 'https://arxiv.org/pdf/1502.05767.pdf', 'https://programming-review.com/pytorch/ad'] | 4 |
57,329,702 | <p>Yes, it is possible to use OpenAI gym environments for multi-agent games. Although in the OpenAI gym community <a href="https://github.com/openai/gym/issues/934" rel="noreferrer">there is no standardized interface</a> for multi-agent environments, it is easy enough to build an OpenAI gym that supports this. For instance, in OpenAI's <a href="https://arxiv.org/pdf/1706.02275.pdf" rel="noreferrer">recent work</a> on multi-agent particle environments <a href="https://github.com/openai/multiagent-particle-envs/blob/master/multiagent/environment.py" rel="noreferrer">they make a multi-agent environment</a> that inherits from <code>gym.Env</code> which takes the following form:</p>
<pre class="lang-py prettyprint-override"><code>class MultiAgentEnv(gym.Env):
def step(self, action_n):
obs_n = list()
reward_n = list()
done_n = list()
info_n = {'n': []}
# ...
return obs_n, reward_n, done_n, info_n
</code></pre>
<p>We can see that the <code>step</code> function takes a list of actions (one for each agent) and returns a list of observations, a list of rewards and a list of dones, while stepping the environment forwards. This interface is representative of a <a href="https://www2.cs.duke.edu/courses/spring07/cps296.3/littman94markov.pdf" rel="noreferrer">Markov Game</a>, in which all agents take actions <em>at the same time</em> and each observes its own subsequent observation and reward.</p>
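<p>To show how such an interface is driven, here is a toy, self-contained sketch (my own illustration, not the particle-envs code) of a two-agent environment plus a rollout loop:</p>
<pre class="lang-py prettyprint-override"><code>import gym
import numpy as np

class ToyMultiAgentEnv(gym.Env):
    """Two agents; an agent is rewarded when its action matches the
    other agent's previous action. Purely illustrative."""
    def __init__(self, n_agents=2):
        self.n_agents = n_agents
        self.last = [0] * n_agents
        self.t = 0

    def reset(self):
        self.last = [0] * self.n_agents
        self.t = 0
        return [np.array(self.last) for _ in range(self.n_agents)]

    def step(self, action_n):
        reward_n = [float(a == self.last[(i + 1) % self.n_agents])
                    for i, a in enumerate(action_n)]
        self.last = list(action_n)
        self.t += 1
        obs_n = [np.array(self.last) for _ in range(self.n_agents)]
        done_n = [self.t >= 10] * self.n_agents
        return obs_n, reward_n, done_n, {'n': []}

env = ToyMultiAgentEnv()
obs_n = env.reset()
done_n = [False] * env.n_agents
while not all(done_n):
    action_n = [np.random.randint(2) for _ in range(env.n_agents)]   # random policies
    obs_n, reward_n, done_n, _ = env.step(action_n)
</code></pre>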
<p>However, this kind of Markov Game interface may not be suitable for all multi-agent environments. In particular, turn-based games (such as card games) might be better cast as an <em>alternating</em> Markov Game, in which agents <em>take turns</em> (i.e. actions) one at a time. For this kind of environment, you may need to include which agent's turn it is in the representation of state, and your step function would then just take a single action, and return a single observation, reward and done.</p> | 2019-08-02 15:41:40.353000+00:00 | 2019-08-02 19:07:01.783000+00:00 | 2019-08-02 19:07:01.783000+00:00 | null | 44,369,938 | <p>Is it possible to use <a href="https://openai.com/" rel="noreferrer">openai</a>'s <a href="https://gym.openai.com/docs" rel="noreferrer">gym environments</a> for multi-agent games? Specifically, I would like to model a card game with four players (agents). The player scoring a turn starts the next turn. How would I model the necessary coordination between the players (e.g. who's turn it is next)? Ultimately, I would like to use reinforcement learning on four agents that play against each other.</p> | 2017-06-05 13:19:47.303000+00:00 | 2021-10-09 07:52:11.047000+00:00 | null | reinforcement-learning|openai-gym | ['https://github.com/openai/gym/issues/934', 'https://arxiv.org/pdf/1706.02275.pdf', 'https://github.com/openai/multiagent-particle-envs/blob/master/multiagent/environment.py', 'https://www2.cs.duke.edu/courses/spring07/cps296.3/littman94markov.pdf'] | 4 |
70,951,384 | <p>It will only work in combination with another layer, for example a <code>Dense</code> layer. Also, the <code>Maxout</code> layer itself does not have any trainable weights as you can see in the model summary but it does have a hyperparameter <code>num_units</code>:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import tensorflow_addons as tfa
d=3
x_in=tf.keras.layers.Input(shape=d)
x = tf.keras.layers.Dense(3)(x_in)
x_out = tfa.layers.Maxout(3)(x)
model = tf.keras.Model(inputs=x_in, outputs=x_out)
model.compile(optimizer='adam', loss='MeanAbsoluteError')
X=tf.random.normal([200,3])
Y=tf.random.normal([200,3])
model.fit(X, Y, epochs=5, batch_size=32)
print(model.summary())
</code></pre>
<pre><code>Epoch 1/5
7/7 [==============================] - 0s 2ms/step - loss: 1.0404
Epoch 2/5
7/7 [==============================] - 0s 3ms/step - loss: 1.0361
Epoch 3/5
7/7 [==============================] - 0s 2ms/step - loss: 1.0322
Epoch 4/5
7/7 [==============================] - 0s 2ms/step - loss: 1.0283
Epoch 5/5
7/7 [==============================] - 0s 3ms/step - loss: 1.0244
Model: "model_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_6 (InputLayer) [(None, 3)] 0
dense_5 (Dense) (None, 3) 12
maxout_4 (Maxout) (None, 3) 0
=================================================================
Total params: 12
Trainable params: 12
Non-trainable params: 0
_________________________________________________________________
None
</code></pre>
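<p>To see what the activation itself does, here is a rough plain-NumPy sketch of a single maxout layer (my illustration, not the tensorflow-addons implementation): a maxout unit takes the max over k affine "pieces" of its input, and it is those pieces (here, the preceding dense weights) that carry the trainable parameters:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, k = 3, 3, 4                       # k linear pieces per output unit

W = rng.normal(size=(k, d_in, d_out))          # trainable weights of the pieces
b = rng.normal(size=(k, d_out))

def maxout(x):
    pieces = np.einsum('i,kio->ko', x, W) + b  # shape (k, d_out)
    return pieces.max(axis=0)                  # elementwise max over the pieces

print(maxout(rng.normal(size=d_in)))
</code></pre>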
<p>Maybe also take a look at the <a href="https://arxiv.org/pdf/1302.4389.pdf" rel="nofollow noreferrer">paper</a> regarding <code>Maxout</code>:</p>
<blockquote>
<p>The maxout model is simply a feed-forward architecture, such as a multilayer perceptron or deep convolutional neural network, that uses a new type of activation function: the maxout unit.</p>
</blockquote> | 2022-02-02 06:35:53.973000+00:00 | 2022-02-02 06:42:30.287000+00:00 | 2022-02-02 06:42:30.287000+00:00 | null | 70,950,212 | <p>I am trying to use the tensorflow maxout implementation (<a href="https://www.tensorflow.org/addons/api_docs/python/tfa/layers/Maxout" rel="nofollow noreferrer">https://www.tensorflow.org/addons/api_docs/python/tfa/layers/Maxout</a>) but struggle with it;</p>
<p>I try to illustrate my problem: If I have the following</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

d=3
x_in=Input(shape=d)
x_out=Dense(d, activation='relu')(x_in)
model = Model(inputs=x_in, outputs=x_out)
model.compile(optimizer='adam', loss='MeanAbsoluteError')
X=tf.random.normal([200,3])
Y=tf.random.normal([200,3])
model.fit(X, Y, epochs=5, batch_size=32)
</code></pre>
<p>Then it is working normally, i.e. the loss is continuously getting smaller and I can get the estimated weights:</p>
<pre><code>model.layers[1].get_weights()
Out[141]:
[array([[-0.15133516, -0.14892222, -0.64674205],
[ 0.34437487, 0.7822309 , -0.08931279],
[-0.8330534 , -0.13827904, -0.23096593]], dtype=float32),
array([-0.03069788, -0.03311999, -0.02603031], dtype=float32)]
</code></pre>
<p>However, when I want to use a maxout activation instead, things do not work out</p>
<pre><code>import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model

d=3
x_in=Input(shape=d)
x_out = tfa.layers.Maxout(3)(x_in)
model = Model(inputs=x_in, outputs=x_out)
model.compile(optimizer='adam', loss='MeanAbsoluteError')
X=tf.random.normal([200,3])
Y=tf.random.normal([200,3])
model.fit(X, Y, epochs=5, batch_size=32)
</code></pre>
<p>The loss stays constant for all Epochs and</p>
<pre><code>model.layers[1].get_weights()
Out[141]: []
</code></pre>
<p>Where is my mistake?</p> | 2022-02-02 03:28:09.683000+00:00 | 2022-02-02 06:42:30.287000+00:00 | 2022-02-02 04:27:42.567000+00:00 | python|tensorflow|keras | ['https://arxiv.org/pdf/1302.4389.pdf'] | 1 |
21,950,577 | <p>We've integrated the catch-all Protocol Buffer schema for R objects from RHIPE into RProtoBuf and added new functions <code>serialize_pb</code> and <code>unserialize_pb</code> to convert arbitrary R objects such as data.frames into protocol buffers. For example:</p>
<pre><code>msg <- tempfile();
serialize_pb(iris, msg);
obj <- unserialize_pb(msg);
identical(iris, obj);
</code></pre>
<p>This functionality was introduced in <a href="http://cran.r-project.org/web/packages/RProtoBuf/index.html" rel="nofollow">RProtoBuf 0.4</a> which came out after your question was originally asked. See a preprint of our JSS paper that introduces these new features on arXiv : <a href="http://arxiv.org/abs/1401.7372" rel="nofollow">RProtoBuf: Efficient Cross-Language Data Serialization in R</a></p> | 2014-02-22 06:19:06.143000+00:00 | 2014-02-22 06:19:06.143000+00:00 | null | null | 19,059,640 | <p>I think it is possible but I am looking for a way to map base types in R using rprotobuf package. What I want is to create a network/server very similar to Rserve but using protocol buffers to serialize the data rather than Rserve's QAP protocol. My question is how would it be possible to map something like a data.frame into a protocol buffer. Here is an example of kind of what I would like it to look like but let me know if I am going about it the wrong way.</p>
<pre><code>message TextCell {
  required string name = 1;
}
message NumericCell {
  repeated int32 num = 1;
}
message TextColumn {
  repeated TextCell text = 1;
}
message NumericColumn {
  repeated NumericCell number = 1;
}
message DataFrame {
  optional NumericColumn numericColumn = 1;
  optional TextColumn textColumns = 2;
}
</code></pre>
<p>I mocked this up just now, so it will probably have errors, but this is the concept I am looking at. It doesn't take things like doubles into account, which seems like a problem. Would it possibly be a better solution to use a bytes type and deserialize the column on the other side? I'm not sure how to attack this problem yet, and feedback from more knowledgeable people would be greatly appreciated.</p>
<p>Note, I wish to use protocol buffers due to their storage efficiency and the possibility of using many more languages, but there is nothing wrong with the QAP protocol. It is very fast and efficient.</p>
<p>Thanks in advance</p> | 2013-09-27 20:35:02.530000+00:00 | 2014-02-22 06:19:06.143000+00:00 | 2013-09-27 20:57:21.627000+00:00 | r|protocol-buffers | ['http://cran.r-project.org/web/packages/RProtoBuf/index.html', 'http://arxiv.org/abs/1401.7372'] | 2 |
59,869,479 | <p>Reinforcement learning is very different from traditional supervised learning because the training data distribution <em>changes</em> as the policy improves. In optimization terms, the objective function can be said to be <em>non-stationary</em>. For this reason, I suspect your intuition is likely correct -- that a "one-cycle" optimizer would perform poorly after a while in your application.</p>
<p>My question is, what is wrong with Adam? Typically, the choice of optimizer is a minor detail for deep reinforcement learning; other factors like the exploration policy, algorithmic hyperparameters, or network architecture tend to have a much greater impact on performance.</p>
<p>Nevertheless, if you really want to try other optimizers, you could experiment with RMSProp, Adadelta, or Nesterov Momentum. However, my guess is that you will see incremental improvements, if any. Perhaps searching for better hyperparameters to use with Adam would be a more effective use of time.</p>
<hr>
<p><strong>EDIT:</strong> In my original answer, I made the claim that the choice of a particular optimizer is not primarily important for reinforcement learning speed, and neither is generalization. I want to add some discussion that helps illustrate these points.</p>
<p>Consider how most deep policy gradient methods operate: they sample a trajectory of experience from the environment, estimate returns, and then conduct one or more gradient steps to improve the parameterized policy (<em>e.g.</em> a neural network). This process repeats until convergence (to a locally optimal policy).</p>
<p>Why must we continuously sample new experience from the environment? Because our current data can only provide a reasonable first-order approximation within a small <em>trust region</em> around the policy parameters that were used to collect that data. Hence, whenever we update the policy, we need to sample more data.</p>
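<p>A self-contained toy sketch of that loop (my own illustration, not any particular algorithm's reference code): sample with the <em>current</em> policy, build a surrogate from that batch only, take a gradient step, and repeat:</p>
<pre><code>import torch

true_reward = torch.tensor([0.2, 0.8])          # expected reward of each of 2 arms
logits = torch.zeros(2, requires_grad=True)     # policy parameters
opt = torch.optim.Adam([logits], lr=0.1)

for it in range(200):
    # 1) sample fresh experience from the current policy
    probs = torch.softmax(logits, dim=0)
    actions = torch.multinomial(probs, num_samples=64, replacement=True)
    rewards = torch.bernoulli(true_reward[actions])

    # 2) policy-gradient surrogate built from this batch only
    loss = -(torch.log(probs[actions]) * rewards).mean()

    # 3) one gradient step, then go back to sampling
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.softmax(logits, dim=0))   # probability mass shifts to the better arm
</code></pre>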
<p>A good way to visualize this is to consider an <a href="https://en.wikipedia.org/wiki/MM_algorithm" rel="nofollow noreferrer">MM algorithm</a>. At each iteration, a surrogate objective is constructed based on the data we have now and then maximized. Each time, we will get closer to the true optimum, but the speed at which we approach it is determined only by the number of surrogates we construct -- <strong><em>not by the specific optimizer we use to maximize each surrogate</em></strong>. Adam might maximize each surrogate in fewer gradient steps than, say, RMSProp does, but this does not affect the learning speed of the agent (with respect to environment samples). It just reduces the number of minibatch updates you need to conduct.</p>
<p><a href="https://upload.wikimedia.org/wikipedia/commons/thumb/c/cc/Mmalgorithm.jpg/440px-Mmalgorithm.jpg" rel="nofollow noreferrer"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/cc/Mmalgorithm.jpg/440px-Mmalgorithm.jpg" alt="MM Algorithm"></a></p>
<p>SAC is a little more complicated than this, as it learns Q-values in an off-policy manner and conducts updates using experience replay, but the general idea holds. The best attainable policy is subject to whatever the current data in our replay memory are; regardless of the optimizer we use, we will need to sample roughly the same amount of data from the environment to converge to the optimal policy.</p>
<p>So, how do you make a faster (more sample-efficient) policy gradient method? You need to fundamentally change the RL algorithm itself. For example, <a href="https://arxiv.org/abs/1707.06347" rel="nofollow noreferrer">PPO</a> almost always learns faster than <a href="https://arxiv.org/abs/1502.05477" rel="nofollow noreferrer">TRPO</a>, because John Schulman and co-authors found a different and empirically better way to generate policy gradient steps.</p>
<p>Finally, notice that there is no notion of generalization here. We have an objective function that we want to optimize, and once we do optimize it, we have solved the task as well as we can. This is why I suspect that the <a href="https://arxiv.org/abs/1705.08292" rel="nofollow noreferrer">"Adam-generalizes-worse-than-SGD"</a> issue is actually irrelevant for RL.</p> | 2020-01-22 23:03:53.873000+00:00 | 2020-02-04 00:12:18.003000+00:00 | 2020-02-04 00:12:18.003000+00:00 | null | 59,833,217 | <p>I'm running a SAC reinforcement learner for a robotics application with some pretty decent results. One of the reasons I opted for reinforcement learning is for the ability for learning in the field, e.g. to adjust to a mechanical change, such as worn tires or a wheel going a little out of alignment.</p>
<p>My reinforcement learner restores its last saved weights and replay buffer upon startup, so it doesn't need to retrain every time I turn it on. However, one concern I have is with respect to the optimizer.</p>
<p>Optimizers have come a long way since ADAM, but everything I read and all the RL code samples I see still seem to use ADAM with a fixed learning rate. I'd like to take advantage of some of the advances in optimizers, e.g. one cycle AdamW. However, a one-cycle optimizer seems inappropriate for a continuous real-world reinforcement learning problem: I imagine it's pretty good for the initial training/calibration, but I expect the low final learning rate would react too slowly to mechanical changes.</p>
<p>One thought I had was perhaps to do a one-cycle approach for initial training, and triggering a smaller one-cycle restart if a change in error that indicates something has changed (perhaps the size of the restart could be based on the size of the change in error).</p>
<p>Has anyone experimented with optimizers other than ADAM for reinforcement learning or have any suggestions for dealing with this sort of problem?</p> | 2020-01-21 01:58:41.400000+00:00 | 2020-02-05 04:51:49.060000+00:00 | null | machine-learning|reinforcement-learning | ['https://en.wikipedia.org/wiki/MM_algorithm', 'https://upload.wikimedia.org/wikipedia/commons/thumb/c/cc/Mmalgorithm.jpg/440px-Mmalgorithm.jpg', 'https://arxiv.org/abs/1707.06347', 'https://arxiv.org/abs/1502.05477', 'https://arxiv.org/abs/1705.08292'] | 5 |
34,808,354 | <p>I have rewritten your code with <a href="https://github.com/iabudiab/HTMLKit" rel="nofollow">HTMLKit</a>. It looks like this:</p>
<pre><code>NSURL *papersURL = [NSURL URLWithString:@"http://www.arxiv.org/list/cond-mat/recent"];
NSData *papersHTMLData = [NSData dataWithContentsOfURL:papersURL];
NSString *htmlString = [[NSString alloc] initWithData:papersHTMLData encoding:NSUTF8StringEncoding];
HTMLDocument *document = [HTMLDocument documentWithString:htmlString];
NSArray *divs = [document querySelectorAll:@"div[class='list-title']"];
for (HTMLElement *element in divs) {
NSLog(@"%@", element.textContent);
}
</code></pre>
<p>Back to your question in the comment: </p>
<blockquote>
<p>Could you give some useful links that you find good to learn about HTMLKit?</p>
</blockquote>
<p>You can check out the examples on the project's GitHub page. The source code is documented and using it is relatively straightforward. If you have basic HTML & CSS experience then using HTMLKit would be just as easy. Unfortunately there are no other resources it to learn it yet.</p> | 2016-01-15 09:50:45.893000+00:00 | 2016-01-15 09:50:45.893000+00:00 | null | null | 34,702,731 | <p>I'm trying to write a very simple iOS app that will parse a webpage (<a href="http://arxiv.org/list/cond-mat/recent" rel="nofollow">http://arxiv.org/list/cond-mat/recent</a>) and display a simplified version of it. I chose to use TFHpple to parse this page. I want to get titles of papers and display them in the TableViewController. The HTML container for paper descriptions looks like:</p>
<pre><code><div class="list-title">
<span class="descriptor">Title:</span> Encoding Complexity within Supramolecular Analogues of Frustrated Magnets
</div>
</code></pre>
<p>Function that I use to parse and get the values is the following (thanks to raywenderlich.com):</p>
<pre><code>- (void) loadPapers{
NSURL *papersURL = [NSURL URLWithString:@"http://www.arxiv.org/list/cond-mat/recent"];
NSData *papersHTMLData = [NSData dataWithContentsOfURL:papersURL];
TFHpple *papersParser = [TFHpple hppleWithHTMLData:papersHTMLData];
NSString *papersXpathQueryString = @"//div[@class='list-title']";
NSArray *papersNodes = [papersParser searchWithXPathQuery:papersXpathQueryString];
NSMutableArray *newPapers = [[NSMutableArray alloc] initWithCapacity:0];
for (TFHppleElement *element in papersNodes){
Paper *paper = [[Paper alloc] init];
[newPapers addObject:paper];
paper.title = [[element firstChild] content];
}
_objects = newPapers;
[self.tableView reloadData];
}
</code></pre>
<p>This function is supposed to parse the entire HTML page and return data to the TableView. However, when I try it, it returns empty objects in the paperNodes array. Basically, the number of elements is correct (~25), but they're all empty and I am not sure why.</p>
<p>Any help is greatly appreciated! Thanks!</p> | 2016-01-10 06:24:52.360000+00:00 | 2016-01-15 09:50:45.893000+00:00 | 2016-01-10 17:25:41.153000+00:00 | html|ios|objective-c|tfhpple | ['https://github.com/iabudiab/HTMLKit'] | 1 |
51,374,291 | <p>Although the other answer basically already gives you the correct result, I would like to clarify a few points you made in your post, and correct them.<br/>
The (commonly accepted) definitions of the different terms are as follows.</p>
<ul>
<li><strong>Gradient Descent (GD)</strong>: Iterative method to find a (local or global) optimum in your function. Default Gradient Descent will <em>go through all examples</em> (one epoch), then update <strong>once</strong>.</li>
<li><strong>Stochastic Gradient Descent (SGD)</strong>: Unlike regular GD, it will <em>go through one example</em>, then <strong>immediately update</strong>. This way, you get a way higher update rate.</li>
<li><strong>Mini Batching</strong>: Since the frequent updates of SGD are quite costly (updating the gradients is kind of tedious to perform), and can lead to worse results in certain circumstances, it is helpful to aggregate <em>multiple (but not all) examples into one update</em>. This means, you would go through <em>n</em> examples (where <em>n</em> is your batch size), and <strong>then update</strong>. This will still result in multiple updates within one epoch, but not necessarily as many as with SGD.</li>
<li><strong>Epoch</strong>: One epoch simply refers to a pass through all of your training data. You can generally perform as many epochs as you would like.</li>
</ul>
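<p>A rough sketch (my own illustration, plain NumPy) of how often the weights get updated under each of the definitions above, during one epoch over 200 examples:</p>
<pre><code>import numpy as np

X, y = np.random.randn(200, 3), np.random.randn(200)
w = np.zeros(3)
lr, batch_size = 0.01, 32

def grad(w, Xb, yb):                  # gradient of the mean squared error
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

# Batch GD: one update per epoch
w -= lr * grad(w, X, y)

# SGD: one update per example (200 updates per epoch)
for i in range(len(X)):
    w -= lr * grad(w, X[i:i+1], y[i:i+1])

# Mini-batch: one update per batch (here ceil(200/32) = 7 updates per epoch)
for i in range(0, len(X), batch_size):
    w -= lr * grad(w, X[i:i+batch_size], y[i:i+batch_size])
</code></pre>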
<p>One another note, you are correct about <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">ADAM</a>. It is generally seen as a more powerful variant of vanilla gradient descent, since it uses more sophisticated heuristics (first order derivatives) to speed up and stabilize convergence.</p> | 2018-07-17 06:08:05.663000+00:00 | 2018-07-17 06:08:05.663000+00:00 | null | null | 51,373,903 | <p>I am taking a course on Deep Learning in Python and I am stuck on the following lines of an example:</p>
<pre><code>regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
regressor.fit(X_train, y_train, epochs = 100, batch_size = 32)
</code></pre>
<p>From the definitions I know,
1 epoch = going through all training examples once to do one weight update.</p>
<p><code>batch_size</code> is used by the optimizer to divide the training examples into mini batches. Each mini batch is of size <code>batch_size</code>. </p>
<p>I am not familiar with adam optimization, but I believe it is a variation of the GD or Mini batch GD. Gradient Descent - has one big batch (all the data), but multiple epochs. Mini Batch Gradient Descent - uses multiple mini batches, but only 1 epoch.</p>
<p>Then, how come the code has both multiple mini batches and multiple epochs?
Does epoch in this code have a different meaning than the definition above?</p> | 2018-07-17 05:36:32.097000+00:00 | 2019-04-04 16:58:18.673000+00:00 | 2019-04-04 16:58:18.673000+00:00 | python|machine-learning|regression | ['https://arxiv.org/abs/1412.6980'] | 1 |
30,142,273 | <p>COP is a programming paradigm supporting software adaptation to the execution context.</p>
<p>It's an alternative to the use of hard-coded conditional statements spread over the application to encode context-dependent behavior.</p>
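<p>As a toy illustration (plain Python, not tied to any of the COP libraries listed below), the idea is that context-dependent behavior lives in "layers" that can be activated and deactivated at run time, instead of scattering if-statements over the code:</p>
<pre><code>active_layers = []          # which contexts are currently active

class Greeter:
    def greet(self):
        base = "Hello"
        for layer in active_layers:                 # active layers refine behavior
            base = layer.get("greet", lambda s: s)(base)
        return base

formal_layer = {"greet": lambda s: s + ", dear customer"}
debug_layer = {"greet": lambda s: "[debug] " + s}

g = Greeter()
print(g.greet())                      # "Hello"

active_layers.append(formal_layer)    # context: formal UI
print(g.greet())                      # "Hello, dear customer"

active_layers.append(debug_layer)     # context: debugging as well
print(g.greet())                      # "[debug] Hello, dear customer"
</code></pre>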
<p>Over the years, several COP extensions to various languages have been proposed:</p>
<ul>
<li><a href="https://www.hpi.uni-potsdam.de/hirschfeld/trac/Cop/wiki/ContextJ" rel="noreferrer">ContextJ</a> and <a href="https://www.hpi.uni-potsdam.de/hirschfeld/trac/Cop/wiki/JCop" rel="noreferrer">JCop</a> for Java</li>
<li><a href="http://www.guidosalvaneschi.com/wp/software/contexterlang/" rel="noreferrer">Context Erlang</a> for Erlang</li>
<li><a href="https://common-lisp.net/project/closer/contextl.html" rel="noreferrer">ContextL</a> for Common Lisp (the first COP extension to a programming language)</li>
<li><a href="https://released.info.ucl.ac.be/Tools/Subjective-C" rel="noreferrer">SubjectiveC</a> for Objective C</li>
<li>ContextS for Smalltalk</li>
<li><a href="https://pypi.python.org/pypi/PyContext" rel="noreferrer">PyContext</a> for Python</li>
<li><a href="https://github.com/schmidt/contextr/" rel="noreferrer">ContextR</a> for Ruby</li>
<li><a href="https://www.hpi.uni-potsdam.de/hirschfeld/trac/Cop/wiki/ContextJS" rel="noreferrer">ContextJS</a> for Javascript</li>
</ul>
<p>and probably many others.</p>
<p>Each concrete language design and implementation comes with different variations of the features of the COP paradigm. For further details you can see <a href="http://soft.vub.ac.be/cop09/papers/a6-appeltauer.pdf" rel="noreferrer">A Comparison of Context-oriented
Programming Languages</a> (Malte Appeltauer, Robert Hirschfeld, Michael Haupt, Jens Lincke, Michael Perscheid - 2010).</p>
<p>Also a good introduction / starting point is <a href="http://www.jot.fm/issues/issue_2008_03/article4/" rel="noreferrer">Context-oriented Programming</a> (Robert Hirschfeld, Pascal Costanza, Oscar Nierstrasz) or <a href="http://arxiv.org/pdf/1105.0069.pdf" rel="noreferrer">Context-Oriented Programming: A Programming Paradigm for Autonomic Systems</a> (Guido Salvaneschi, Carlo Ghezzi, Matteo Pradella - 2013).</p> | 2015-05-09 16:16:05.530000+00:00 | 2015-05-09 16:38:49.997000+00:00 | 2015-05-09 16:38:49.997000+00:00 | null | 30,141,883 | <p>I heard someone talking about Context-Oriented Programming, so I googled it to found out what that means, and it seems like a new paradigm of programming, but also all I found are academic papers talking about the concept.</p>
<p>So I would like to know if there's any language that implements context-orientation and what is this good for?</p> | 2015-05-09 15:39:55.610000+00:00 | 2022-07-19 09:17:41.927000+00:00 | 2022-07-19 09:17:41.927000+00:00 | programming-languages|paradigms | ['https://www.hpi.uni-potsdam.de/hirschfeld/trac/Cop/wiki/ContextJ', 'https://www.hpi.uni-potsdam.de/hirschfeld/trac/Cop/wiki/JCop', 'http://www.guidosalvaneschi.com/wp/software/contexterlang/', 'https://common-lisp.net/project/closer/contextl.html', 'https://released.info.ucl.ac.be/Tools/Subjective-C', 'https://pypi.python.org/pypi/PyContext', 'https://github.com/schmidt/contextr/', 'https://www.hpi.uni-potsdam.de/hirschfeld/trac/Cop/wiki/ContextJS', 'http://soft.vub.ac.be/cop09/papers/a6-appeltauer.pdf', 'http://www.jot.fm/issues/issue_2008_03/article4/', 'http://arxiv.org/pdf/1105.0069.pdf'] | 11 |
30,427,475 | <p>If all the conversions are integers, and there is a least common measure which can be identified with 1 unit of value (it looks like your coin E would be such a thing), then the problem reduces to the classic change-making problem. </p>
<p>In North America we have 1 cent, 5 cent, 10 cent, 25 cent (ignoring higher valued coins). With that system, a greedy algorithm works: take the largest coin you can at each step. The result of that process is the minimum number of coins to make change. We say the system {1, 5, 10, 25} is <strong>canonical</strong> because the greedy algorithm works.</p>
<p>For other systems of coins, the greedy algorithm does not work. For example, if there are no 5 cent pieces, the greedy algorithm applied to 30 cents yields 25 + 1 + 1 + 1 + 1 + 1, six coins, whereas the minimum is 10 + 10 + 10, three coins. We say the system {1, 10, 25} is not canonical.</p>
<p>The simplest way to approach your problem is to set up a canonical system of coins, then just use the greedy algorithm. A simple canonical system is {1, 5, 10, 25} mentioned above. If you want something funkier you can use arithmetic progressions, geometric progressions, or Fibonacci numbers. Other examples and a general discussion can be found at <a href="http://arxiv.org/pdf/0809.0400.pdf" rel="nofollow">http://arxiv.org/pdf/0809.0400.pdf</a>.</p>
<p>If you want to use a non-canonical system, or if you want to use a system and can't prove that it's canonical, there's a dynamic programming solution. Let <code>n[i]</code> be an array from <code>0</code> to <code>v</code>, the amount for which you want to make change (e.g., in the example I gave above, <code>v = 30</code>). <code>n[i]</code> represents the minimum number of coins needed to make change for value <code>i</code>. We know <code>n[0] = 0</code>, and <code>n[1] = 1</code> (because there is a 1 cent piece). Then we calculate the other <code>n[i]</code> in order. <code>n[i] = min { n[i-c]+1 where c is a coin in the set}</code>. So in the example {1, 10, 25}, we have <code>n[2] = min {n[2-1]+1} = 2</code>, <code>n[3] = min {n[3-1]+1} = 3</code>, <code>n[4] = min{n[4-1]+1} = 4</code>, ..., <code>n[9] = 9</code>, and <code>n[10] = min {n[10-1]+1, n[10-10]+1} = min {10,1} = 1</code>, ... . Once you have <code>n[v]</code>, you work backwards, figuring out which coin <code>c</code> results in <code>n[v-c] < n[v]</code>, and continue in that manner until you hit zero.</p>
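<p>A small sketch of that dynamic programming procedure (illustrative Python rather than your C++ setup), for the non-canonical system {1, 10, 25}:</p>
<pre><code>def min_coins(v, coins=(1, 10, 25)):
    INF = float("inf")
    n = [0] + [INF] * v                     # n[i] = min coins to make value i
    for i in range(1, v + 1):
        n[i] = min(n[i - c] + 1 for c in coins if c <= i)
    # walk backwards to recover which coins were used
    used, i = [], v
    while i > 0:
        c = next(c for c in coins if c <= i and n[i - c] == n[i] - 1)
        used.append(c)
        i -= c
    return n[v], used

print(min_coins(30))    # (3, [10, 10, 10]) where the greedy algorithm needs 6 coins
</code></pre>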
<p>The dynamic programming solution is slower than the greedy algorithm ... much slower for large values <code>v</code> ... and it's more complicated to program and more error-prone. So I suggest you first check whether your system is canconical. If it isn't, you can change the system. If you are stuck with a non-canonical system in circulation, you can introduce new coin values to it to make it canonical. Then you can use the greedy algorithm.</p> | 2015-05-24 19:29:09.083000+00:00 | 2015-05-24 19:47:56.743000+00:00 | 2015-05-24 19:47:56.743000+00:00 | null | 30,425,674 | <p>I came up with this algorithmic problem while trying to solve a problem in my (adventure-based) program.
There are 5 different types of coins called, A,B,C,D,E (from most valuable to least valuable). The conversions between
the coin values are AtoE, BtoE, CtoE, DtoE (i.e. AtoE means that a coin of type A is worth AtoE times the value of a
coin of type E). The struct <code>Currency</code> represents how much money a customer has. The goal of the function</p>
<pre><code>template <int AtoE, int BtoE, int CtoE, int DtoE>
void purchase (int numCoins, CoinType coinType, int a, int b, int c, int d, int e)
</code></pre>
<p>is to have the customer (who has <code>a</code> coins of type <code>A</code>, <code>b</code> coins of type <code>B</code>, etc...) to purchase an item whose
price is <code>numCoins</code> <code>coinType</code>s while minimizing the amount of coins he has after receiving the change.
Can someone suggest the pseudocode for the body of this function to get the correct resulting change to
minimize the resulting number of coins? Optimization would be nice to, but first how to get it working?
I'm really stuck here. Here I've written the starting code in C++, but the problem is language-independent.</p>
<pre><code>#include <iostream>
#include <array>
#include <algorithm>
enum CoinType {A, B, C, D, E, NumCoinTypes};
struct Currency {
std::array<int, NumCoinTypes> coins;
Currency (int a, int b, int c, int d, int e) : coins ({a,b,c,d,e}) {}
void print() const {
for (int x : coins) std::cout << x << ' ';
std::cout << " total coins = " << std::accumulate (coins.begin(), coins.end(), 0) << '\n';
}
};
struct Item {
struct Value { int numCoins; CoinType coinType; };
Value value;
};
template <int AtoE, int BtoE, int CtoE, int DtoE>
void purchase (int numCoins, CoinType coinType, int a, int b, int c, int d, int e) {
const Item item {numCoins, coinType};
Currency currency(a,b,c,d,e);
std::cout << "Before paying for the item: "; currency.print();
// Goal: Purchase 'item' so that the change in 'currency' due to paying for 'item'
// and receiving the change minimizes the number of coins in 'currency'.
// Modify 'currency' somehow here.
std::cout << "After paying for the item: "; currency.print();
}
int main() {
purchase<5,10,8,15>(50, C, 1,2,5,40,30); // Sample test run.
}
</code></pre>
<p>There have been some references to the Knapsack Problem, but I'm not sure it applies here. The amount of money S that is given to the cashier is not known. Thus the change received, which is <code>S - price</code>, is not fixed, so it does not appear to me that the knapsack problem applies. Perhaps, once could try all possible (reasonable) values of S and then apply a Knapsack algorithm to each S value. But the amount of change comprising the currency not given to the cashier also depends on what S was (and the currency used to hand over the amount S). The amount of coins being minimized is not just that which adds up to <code>S - price</code>, but rather ALL the coins, including those not given to the cashier (which, again, depends on S and the currency to make up S). Also, the number of coins for each coin type in the result is not just 1 or 0.</p>
<p><strong>Update:</strong> Thanks to Edward Doolittle's algorithm, the problem has been solved (my implemented code in one the answers below), but the solution makes one assumption: that the customer pays for the item with ALL the coins he possesses. Mathematically, the optimized change does give the correct answer, but it doesn't simulate the real world too well. Would a customer carrying a huge bag of change really pour out ALL his change to buy a candy???</p>
<p>So now I stipulate a condition that will seek a second solution. This second solution will not minimize the resulting number of coins like the first solution, but it does give a more realistic result. This new condition is:</p>
<p><strong>The customer shall pay for the item with some of his coins such that he pays enough to purchase the item without paying any redundant coins.</strong></p>
<p>For example, if 4 quarters is enough to purchase the item, he shall NOT pay a 5th quarter (nor shall he add any pennies or whatever on top of these 4 quarters). This condition is pretty much what the typical customer in the real world follows when purchasing an item. So here is the algorithm I've thought of for determining what coins the customer shall pay to minimize his number of coins at the end while following the above condition: The total payment will be with as many of the cheapest coins as possible, then (if these are not enough), with as many of the second cheapest coin as possible, then (if these are also not enough), with as many of the third cheapest coin as possible, and so forth. However, I'm not sure if this is the correct algorithm, and even if it is, it needs mathematical proof. I've written a solution using this algorithm and provided it as another answer.</p> | 2015-05-24 16:21:21.243000+00:00 | 2015-05-28 22:45:15.143000+00:00 | 2015-05-26 02:47:14.013000+00:00 | c++|algorithm | ['http://arxiv.org/pdf/0809.0400.pdf'] | 1 |
73,519,043 | <p>One way to choose the weight is to base it on your confidence in the dirty data and assign the weight accordingly. For example, if you think that 90% of the dirty data is labeled correctly, then choosing 0.9 as the weight for the noisy data is a reasonable option.</p>
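<p>For example (a small PyTorch sketch with made-up numbers), you could keep weight 1.0 for the original clean examples and use that confidence for the pseudo-labeled ones:</p>
<pre><code>import torch

estimated_clean_accuracy = 0.9          # your confidence in the dirty labels

y_true = torch.tensor([1.0, 0.0, 1.0, 1.0])
y_pred = torch.tensor([0.8, 0.1, 0.4, 0.9])
is_pseudo_label = torch.tensor([0.0, 0.0, 1.0, 1.0])   # came from the dirty set

weights = torch.where(is_pseudo_label.bool(),
                      torch.full_like(y_true, estimated_clean_accuracy),
                      torch.ones_like(y_true))

loss = torch.mean(torch.abs(y_true - y_pred) * weights)
</code></pre>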
<p>Additionally, there is a whole literature on learning from noisy labels, you can check this survey for more information: <a href="https://arxiv.org/abs/2007.08199" rel="nofollow noreferrer">https://arxiv.org/abs/2007.08199</a></p> | 2022-08-28 13:23:30.943000+00:00 | 2022-08-28 13:23:30.943000+00:00 | null | null | 73,512,467 | <p>I have two datasets, one with clean data and one with dirty data. I train a Roberta model on the clean dataset and then get predictions for the dirty dataset. Those predictions with a probability greater than 0.9 go to the clean dataset. I then retrain the Roberta model with this new dataset (clean + dirty moving to clean).</p>
<p>For the retraining I am using the MAE loss function (more robust to noisy labels) and I use weights to give less value to the data that passes from the dirty to the clean dataset, as follows:</p>
<pre><code>loss = torch.mean(torch.abs(y_true - y_pred) * weights)
</code></pre>
<p>Initially I am using an arbitrary weight of 0.5 for all the dirty data that gets passed into the clean dataset. However, I would like to assign them a weight in a more academic way, not so arbitrary.</p>
<p>How can I do that?</p> | 2022-08-27 15:58:10.807000+00:00 | 2022-08-29 14:25:00.117000+00:00 | 2022-08-29 14:25:00.117000+00:00 | tensorflow|keras|deep-learning|nlp|pytorch | ['https://arxiv.org/abs/2007.08199'] | 1 |
58,168,583 | <p>The question's code won't compile due to misspellings. Even if those errors were fixed, the code doesn't do anything useful - <code>xmlToList</code> is applied to the <em>URL</em>, not to the result of a GET request. That's enough to generate the error :</p>
<pre><code>query<-"http://export.arxiv.org/api/query?search_query=(au:( \"Benoit Bertrand\"))&start=0&max_results=2000"
xmlToList(query)
</code></pre>
<p>No amount of URL encoding and conversions will fix that. No conversion is needed either, since the URL falls in the US-ASCII range. In that range a UTF8 string is indistinguishable from an ASCII string. </p>
<p>The correct code to get and parse this Arxiv page is :</p>
<pre><code># Just a URL
query<-"http://export.arxiv.org/api/query?search_query=(au:( \"Benoit Bertrand\"))&start=0&max_results=2000"
# Get the contents
r <- GET(query)
# Extract the text from the response
xml<-content(r, "text")
# Read as lists
l<-xmlToList(xml)
</code></pre>
<p>The response <code>r</code> isn't just a string, it's an object that contains headers (including the encoding), the response status and the response content. One of the headers is the Content-Type :</p>
<pre><code>> r
Response [http://export.arxiv.org/api/query?search_query=(au:( "Benoit Bertrand"))&start=0&max_results=2000]
Date: 2019-09-30 12:54
Status: 200
Content-Type: application/atom+xml; charset=UTF-8
Size: 786 B
</code></pre>
<p><code>content(r, "text")</code> converts the content to text using the encoding stored in that header.</p>
<p>After that, <code>xmlToList</code> can parse the XML string </p> | 2019-09-30 13:08:53.813000+00:00 | 2019-09-30 13:08:53.813000+00:00 | null | null | 58,168,093 | <p>Here is my code, I have a query that I transform in UTF8 but finally I get an error that the query is not in UTF8 I don't manage to fix it:</p>
<pre><code>library("XML")
library("methods")
library("httr")
query = http://export.arxiv.org/api/query?search_query=(au:( \"Benoit Bertrand\"))&start=0&max_results=2000
xml_data = xmlToList(iconv(URLencode(query),to="UTF-8"))
</code></pre>
<blockquote>
<p>Error: 1: Input is not proper UTF-8, indicate encoding !<br>
Bytes: 0xC9 0x70 0x69 0x6A</p>
</blockquote>
<p>I find that's space character that made the code crash but that is all I got </p> | 2019-09-30 12:39:01.087000+00:00 | 2019-09-30 16:19:21.663000+00:00 | 2019-09-30 16:19:21.663000+00:00 | r|xml|database|encoding|utf-8 | [] | 0 |
26,783,313 | <p>Hamiltonian cycle from your graph: <a href="http://figshare.com/articles/Hamiltonian_Cycle/1228800" rel="noreferrer">http://figshare.com/articles/Hamiltonian_Cycle/1228800</a></p>
<p>How to find Hamiltonian cycle in your graph in C#:</p>
<p>First file:</p>
<pre><code>using System;
using System.Collections.Generic;
namespace Graph
{
partial class Program
{
static List<string> vertices;
static void Main(string[] args)
{
List<int>[] graph = GetGraph();
List<int> HamiltonianCycle = Algorithm(graph);
string a = Translate(HamiltonianCycle);
Console.Write(a);
Console.ReadKey();
}
static List<int>[] GetGraph()
{
List<string> list = new List<string>(){"A","B","C","D","E","F","G","H","I","J"};
vertices = new List<string>();
for(int a=0;a<10;++a)
for(int b=0;b<10;++b)
for(int c=0;c<10;++c)
for(int d=0;d<10;++d)
{
if(a==b || a== c || a==d || b == c || b == d|| c==d)
continue;
string vertex = list[a] + list[b] + list[c] + list[d];
vertices.Add(vertex);
}
List<int>[] graph = new List<int>[vertices.Count];
for(int i=0;i<graph.Length;++i)
graph[i] = new List<int>();
foreach(string s1 in vertices)
foreach(string s2 in vertices)
if(s1 != s2)
if(s1[s1.Length-3] == s2[0] && s1[s1.Length-2] == s2[1] && s1[s1.Length-1] == s2[2])
{
int v1 = vertices.IndexOf(s1);
int v2 = vertices.IndexOf(s2);
graph[v1].Add(v2);
}
return graph;
}
static string Translate(List<int> HamiltonianCycle)
{
string a = "";
foreach(int b in HamiltonianCycle)
a += vertices[b] + " -> ";
return a;
}
}
}
</code></pre>
<p>Second file:</p>
<pre><code>using System;
using System.Collections.Generic;
using System.Linq;
namespace Graph
{
partial class Program
{
static List<int>[] graph, oppositeGraph;
static List<int> HamiltonianCycle;
static bool endOfAlgorithm;
static int level, v1, v2;
static List<int> Algorithm(List<int>[] graphArgument)
{
graph = SaveGraph(graphArgument);
HamiltonianCycle = new List<int>();
endOfAlgorithm = false;
level = 0;
RemoveMultipleEdgesAndLoops(graph); //3.1
CreateOppositeGraph(graph);
bool HamiltonianCycleCantExist = AnalyzeGraph(new List<Edge>()); //6.1.a
ReverseGraph();
if (!HamiltonianCycleCantExist)
FindHamiltonianCycle(GetNextVertex()); //5.3
HamiltonianCycle.Reverse();
return HamiltonianCycle;
}
static void ReverseGraph()
{
graph = SaveGraph(oppositeGraph);
CreateOppositeGraph(graph);
}
static void FindHamiltonianCycle(int a)
{
if (!endOfAlgorithm)
{
++level;
if (HamiltonianCycleFound())
endOfAlgorithm = true;
SortList(a); //5.4
while (graph[a].Count > 0 && !endOfAlgorithm)
{
List<Edge> removedEdges = new List<Edge>();
int chosenVertex = graph[a][0];
graph[a].Remove(chosenVertex);
List<int>[] currentGraph = SaveGraph(graph);
#region 6.2
foreach (int b in graph[a])
{
removedEdges.Add(new Edge(a, b));
oppositeGraph[b].Remove(a);
}
graph[a].Clear();
#endregion
graph[a].Add(chosenVertex);
v1 = a;
v2 = chosenVertex;
bool HamiltonianCycleCantExist = AnalyzeGraph(removedEdges); //6.1.b
if (!HamiltonianCycleCantExist)
{
FindHamiltonianCycle(GetNextVertex()); //5.5
RestoreGraphs(currentGraph); //6.4
}
else
{
foreach (Edge e in removedEdges) //6.3
{
graph[e.from].Add(e.to);
oppositeGraph[e.to].Add(e.from);
}
RemoveEdge(new Edge(a, chosenVertex), graph, oppositeGraph);
}
}
if (!endOfAlgorithm)
{
--level;
if (level == 0)
endOfAlgorithm = true;
}
}
}
static bool HamiltonianCycleFound()
{
foreach (List<int> list in graph)
if (list.Count != 1)
return false;
HamiltonianCycle = GetHamiltonianCycle(graph);
return true;
}
static List<int> GetHamiltonianCycle(List<int>[] graphArgument)
{
List<int> cycle = new List<int>() { 0 };
while (true)
{
if (cycle.Count == graphArgument.Length && graphArgument[cycle.Last()].Contains(cycle[0]))
return cycle;
if (cycle.Contains(graphArgument[cycle.Last()][0]))
return new List<int>();
else
cycle.Add(graphArgument[cycle.Last()][0]);
}
}
static int GetNextVertex()
{
List<int> correctOrder = GetCorrectOrder(graph);
foreach (int a in correctOrder)
if (graph[a].Count != 1)
return a;
return 0;
}
static bool AnalyzeGraph(List<Edge> removedEdges)
{
bool HamiltonianCycleCantExist = false;
int a;
do
{
a = removedEdges.Count;
HamiltonianCycleCantExist = RemoveUnnecessaryEdges(graph, oppositeGraph, removedEdges, false);
if (!HamiltonianCycleCantExist)
HamiltonianCycleCantExist = RemoveUnnecessaryEdges(oppositeGraph, graph, removedEdges, true);
}
while (a != removedEdges.Count && !HamiltonianCycleCantExist);
if (!HamiltonianCycleCantExist)
HamiltonianCycleCantExist = GraphIsDisconnected(graph);
return HamiltonianCycleCantExist;
}
static bool RemoveUnnecessaryEdges(List<int>[] graphArgument, List<int>[] oppositeGraphArgument, List<Edge> removedEdges, bool oppositeGraph)
{
bool HamiltonianCycleCantExist = false;
for (int a = 0; a < graphArgument.Length; ++a)
{
if (graphArgument[a].Count == 0 //4.1
|| (graphArgument[a].Count == 1 && SearchForCycleAmongVerticesOfDegreeEqual1(graphArgument, a)) //4.2.1
|| (graphArgument[a].Count > 1 && SearchForCycleAmongVerticesOfDegreeGreaterThan1(a, graphArgument, oppositeGraphArgument))) //4.2.2
return true;
List<Edge> edges = new List<Edge>();
#region 3.2
if (graphArgument[a].Count == 1 && oppositeGraphArgument[graphArgument[a][0]].Count != 1)
{
foreach (int c in oppositeGraphArgument[graphArgument[a][0]])
if (c != a)
if (!oppositeGraph)
edges.Add(new Edge(c, graphArgument[a][0]));
else
edges.Add(new Edge(graphArgument[a][0], c));
}
#endregion
#region 3.4
if (graphArgument[a].Count == 1 && graphArgument[graphArgument[a][0]].Contains(a))
{
if (!oppositeGraph)
edges.Add(new Edge(graphArgument[a][0], a));
else
edges.Add(new Edge(a, graphArgument[a][0]));
}
#endregion
foreach (Edge edge in edges)
{
removedEdges.Add(edge);
if (!oppositeGraph)
RemoveEdge(edge, graphArgument, oppositeGraphArgument);
else
RemoveEdge(edge, oppositeGraphArgument, graphArgument);
}
}
return HamiltonianCycleCantExist;
}
static bool SearchForCycleAmongVerticesOfDegreeEqual1(List<int>[] graphArgument, int a)
{
if(!(a==v1 || a == v2))
return false;
List<int> cycle = new List<int>() { a };
while (true)
if (graphArgument[cycle.Last()].Count == 1 && cycle.Count < graphArgument.Length)
if (cycle.Contains(graphArgument[cycle.Last()][0]))
return true;
else
cycle.Add(graphArgument[cycle.Last()][0]);
else
return false;
}
static bool SearchForCycleAmongVerticesOfDegreeGreaterThan1(int a, List<int>[] graphArgument, List<int>[] oppossiteGraphArgument)
{
if (!ListsAreEqual(graphArgument[a], oppossiteGraphArgument[a], true))
return false;
int b = 1;
for (int c = 0; c < graphArgument.Length && graphArgument.Length - c > graphArgument[a].Count - b; ++c)
{
if (c == a)
continue;
if (ListsAreEqual(graphArgument[c], graphArgument[a], false) && ListsAreEqual(graphArgument[c], oppossiteGraphArgument[c], true))
++b;
if (b == graphArgument[a].Count)
return true;
}
return false;
}
static bool ListsAreEqual(List<int> firstList, List<int> secondList, bool EqualCount)
{
if (EqualCount && firstList.Count != secondList.Count)
return false;
foreach (int a in firstList)
if (!secondList.Contains(a))
return false;
return true;
}
static void SortList(int a)
{
List<int> correctOrder = GetCorrectOrder(oppositeGraph);
for (int b = 1; b < graph[a].Count; ++b)
for (int c = 0; c < graph[a].Count - 1; ++c)
if (correctOrder.IndexOf(graph[a][c]) > correctOrder.IndexOf(graph[a][c + 1]))
{
int n = graph[a][c];
graph[a][c] = graph[a][c + 1];
graph[a][c + 1] = n;
}
}
static List<int> GetCorrectOrder(List<int>[] graphArgument) //5.1
{
Dictionary<int, int> vertices = new Dictionary<int, int>();
List<int> order = new List<int>();
for (int a = 0; a < graphArgument.Length; ++a)
vertices.Add(a, graphArgument[a].Count);
IEnumerable<int> v = from pair in vertices orderby pair.Value ascending select pair.Key;
foreach (int a in v)
order.Add(a);
return order;
}
static void RemoveEdge(Edge e, List<int>[] graphArgument, List<int>[] oppositeGraphArgument)
{
graphArgument[e.from].Remove(e.to);
oppositeGraphArgument[e.to].Remove(e.from);
}
static void RemoveMultipleEdgesAndLoops(List<int>[] graphArgument)
{
for (int a = 0; a < graphArgument.Length; ++a)
{
graphArgument[a] = graphArgument[a].Distinct().ToList();
graphArgument[a].Remove(a);
}
}
static void CreateOppositeGraph(List<int>[] graphArgument)
{
oppositeGraph = new List<int>[graphArgument.Length];
for (int a = 0; a < graphArgument.Length; ++a)
oppositeGraph[a] = new List<int>();
for (int a = 0; a < graphArgument.Length; ++a)
foreach (int b in graphArgument[a])
oppositeGraph[b].Add(a);
}
static void RestoreGraphs(List<int>[] graphArgument)
{
graph = new List<int>[graphArgument.Length];
for (int a = 0; a < graphArgument.Length; ++a)
{
graph[a] = new List<int>();
graph[a].AddRange(graphArgument[a]);
}
CreateOppositeGraph(graph);
}
static List<int>[] SaveGraph(List<int>[] graphArgument)
{
List<int>[] savedGraph = new List<int>[graphArgument.Length];
for (int a = 0; a < graphArgument.Length; ++a)
{
savedGraph[a] = new List<int>();
savedGraph[a].AddRange(graphArgument[a]);
}
return savedGraph;
}
static bool GraphIsDisconnected(List<int>[] graphArgument)
{
Stack<int> stack = new Stack<int>();
Color[] colors = new Color[graphArgument.Length];
colors[0] = Color.Gray;
stack.Push(0);
while (stack.Count > 0)
{
int a = stack.Pop();
foreach (int b in graphArgument[a])
{
if (colors[b] == Color.White)
{
colors[b] = Color.Gray;
stack.Push(b);
}
}
colors[a] = Color.Black;
}
foreach (Color c in colors)
if (c != Color.Black)
return true;
return false;
}
}
class Edge
{
public int from, to;
public Edge(int f, int t)
{
from = f;
to = t;
}
}
enum Color { White, Gray, Black };
}
</code></pre>
<p>I found the Hamiltonian cycle with a modified version of my algorithm: <a href="http://arxiv.org/abs/1405.6347" rel="noreferrer">http://arxiv.org/abs/1405.6347</a> The modifications that were made are:</p>
<ul>
<li>Algorithm searched in opposite graph</li>
<li>Algorithm tested if graph is disconnected</li>
<li>Algorithm did not test "unique neighbours" rule</li>
<li>Algorithm searched for cycles that are not Hamiltonian, starting only from the vertices that create the currently visited edge - only in the function SearchForCycleAmongVerticesOfDegreeEqual1</li>
</ul> | 2014-11-06 15:35:39.493000+00:00 | 2014-11-06 15:46:12.477000+00:00 | 2014-11-06 15:46:12.477000+00:00 | null | 26,596,672 | <p>Suppose that there is a directed graph consists of vertices named below:</p>
<pre><code>"ABC", "ABD", "ACB", "ACD", "ADB", "ADC", "BAC", "BAD",
"BCA", "BCD", "BDA", "BDC", "CAB", "CAD", "CBA", "CBD",
"CDA", "CDB", "DAB", "DAC", "DBA", "DBC", "DCA", "DCB"
</code></pre>
<p>These are the 3 letter permutations over 4 different letters. (<code>total = 4*3*2=24</code>)
The names of the vertices also describe the edges between them. Any two vertices are connected to each other if the last two characters of the source are equal to the first two characters of the destination, such as </p>
<p>A<strong>BC</strong> -> <strong>BC</strong>D</p>
<p>or </p>
<p>D<strong>CB</strong> -> <strong>CB</strong>A</p>
<p>The graph is very similar to De Bruijn's or Kautz's, but not the same. It is strongly connected and I know that it has a Hamiltonian cycle.</p>
<p>I'm not an expert at algorithms, so to solve the problem I simply went through the latest Boost Graph Library and found the hawick_unique_circuits() function, which enumerates all cycles; here is my example code:</p>
<pre><code>#include <iostream>
#include <cstdint>
#include <vector>
#include <string>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/hawick_circuits.hpp>
#include "combination.hpp" // from http://howardhinnant.github.io/combinations.html
using namespace std;
using namespace boost;
typedef boost::adjacency_list<vecS, vecS, directedS, no_property, property<edge_weight_t, uint32_t> > TGraph;
TGraph m_Graph;
vector<string> m_StrVertexList;
void CreateStringVertexList(vector<string>& vl, uint32_t n, uint32_t k)
{
vl.clear();
if ((k > 0) && (n > k))
{
string code = "A";
while (--n)
{
code += code.back() + 1;
}
// for_each_permutation from Howard Hinnant
// http://howardhinnant.github.io/combinations.html
for_each_permutation(code.begin(), code.begin() + k, code.end(),
[&](string::iterator first, string::iterator last)->bool{ vl.push_back(string(first, last)); return(false); });
}
}
void AddEdgesFromStringVertex(TGraph& g, const vector<string>& vl)
{
uint32_t connection_len = vl.begin()->size() - 1;
g.clear();
for (uint32_t f = 0; f < vl.size(); f++)
for (uint32_t t = 0; t < vl.size(); t++)
{
if ((f != t) &&
(vl[f].substr(1, connection_len) == vl[t].substr(0, connection_len)))
{
add_edge(f, t, 1, g);
}
}
}
class hawick_visitor
{
public:
void cycle(const vector<TGraph::vertex_descriptor>& circuit, const TGraph& graph) const
{
if (circuit.size() == m_StrVertexList.size())
{
for (auto ii = circuit.begin(); ii != circuit.end(); ++ii)
{
cout << m_StrVertexList[*ii] << " -> ";
}
cout << endl;
}
}
};
void Circuits(const TGraph& g)
{
hawick_unique_circuits(g, hawick_visitor());
cout << "- end of hawick_unique_circuits() -" << endl;
}
void main(void)
{
//CreateStringVertexList(m_StrVertexList, 10, 4);
CreateStringVertexList(m_StrVertexList, 4, 3);
AddEdgesFromStringVertex(m_Graph, m_StrVertexList);
Circuits(m_Graph);
}
</code></pre>
<p>hawick_visitor class simply checks whether cycle found has same vertices as Graph's. If it has, that means we find one of Hamiltonian cycle we need.</p>
<p>It works perfectly for 24 vertices which is 3 char chosen from 4 unique char and here is one of outputs:</p>
<pre><code>ABC -> BCA -> CAD -> ADB -> DBC -> BCD -> CDA -> DAC ->
ACB -> CBD -> BDC -> DCB -> CBA -> BAC -> ACD -> CDB ->
DBA -> BAD -> ADC -> DCA -> CAB -> ABD -> BDA -> DAB -> ABC
</code></pre>
<p>But when I try to solve a similar graph that has 5040 vertices, named as 4 characters chosen from 10 unique characters, this function never returns. There should be a far better algorithm than hawick_unique_circuits() to do that, because I know people do similar calculations for 10,000 vertices in less than a minute, but I don't know how. Any idea is highly appreciated.</p>
<p>Here is the graph has 5040 vertices that I need to solve:
<img src="https://i.stack.imgur.com/bXWEH.png" alt="enter image description here"></p> | 2014-10-27 20:54:53.403000+00:00 | 2014-11-06 15:46:12.477000+00:00 | 2014-10-27 23:16:58.157000+00:00 | graph-theory|hamiltonian-cycle | ['http://figshare.com/articles/Hamiltonian_Cycle/1228800', 'http://arxiv.org/abs/1405.6347'] | 2 |
68,904,577 | <p>In <code>scikit-learn</code> there are 3 kinds of classes that share a common interface: <em>Estimators, Transformers</em> and <em>Predictors</em>.</p>
<p><strong>Estimators</strong> have a <code>fit()</code> function, which always serves the same purpose: it estimates parameters based on the dataset.</p>
<p><strong>Transformers</strong> have a <code>transform()</code> function, which returns the transformed dataset. Some Estimators are also Transformers, e.g. <code>MinMaxScaler()</code>.</p>
<p><strong>Predictors</strong> have a <code>predict()</code> function, which returns predictions on new instances, e.g. <code>KNeighborsClassifier()</code>.</p>
<p>Both <code>MinMaxScaler()</code> and <code>KNeighborsClassifier()</code> contain a <code>fit()</code> method, because they share the <strong>Estimator</strong> interface.</p>
<blockquote>
<p>However, there are cases when there is no 'learning' involved with <code>fit()</code></p>
</blockquote>
<p>There is 'learning' involved. The Transformer <code>MinMaxScaler()</code> has to 'learn' the min and max values of each numerical feature.
When you call <code>min_max_scaler.fit(X_train)</code>, your scaler estimates these values for each numerical column in your train set. <code>min_max_scaler.transform(X_train)</code> scales your train set based on those estimates, and <code>min_max_scaler.transform(X_test)</code> scales the test set with the estimates learned from the train set. It is important to scale both the train and the test set with the same estimates.</p>
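<p>A small example (made-up numbers) showing that the test set is scaled with the min and max learned from the train set:</p>
<pre><code>from sklearn.preprocessing import MinMaxScaler
import numpy as np

X_train = np.array([[1.0], [5.0], [10.0]])
X_test = np.array([[2.0], [12.0]])

scaler = MinMaxScaler()
scaler.fit(X_train)               # 'learns' min=1 and max=10 from the train set

print(scaler.transform(X_train))  # approx. [[0.], [0.444], [1.]]
print(scaler.transform(X_test))   # approx. [[0.111], [1.222]] - same min/max reused
</code></pre>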
<p>For further reading, you can check this: <a href="https://arxiv.org/abs/1309.0238" rel="nofollow noreferrer">https://arxiv.org/abs/1309.0238</a></p> | 2021-08-24 08:56:17.540000+00:00 | 2021-08-24 08:56:17.540000+00:00 | null | null | 68,896,760 | <p>The <code>fit()</code> method in <code>sklearn</code> appears to be serving different purposes in same interface.</p>
<p>When applied to the training set, like so:</p>
<pre><code>model.fit(X_train, y_train)
</code></pre>
<p><code>fit()</code> is used to learn parameters that will later be used on the test set with <code>predict(X_test)</code></p>
<hr />
<p>However, there are cases when there is no 'learning' involved with <code>fit()</code>, but only some normalization to transform the data, like so:</p>
<pre><code>min_max_scaler = preprocessing.MinMaxScaler()
min_max_scaler.fit(X_train)
</code></pre>
<p>which will simply scale feature values between, say, 0 and 1, to avoid features with higher variance having a disproportionate influence on the model.</p>
<hr />
<p>To make things even less intuitive, sometimes the <code>fit()</code> method that scales (and already appears to be transforming) needs to be followed by further <code>transform()</code> method, before being called again with the <code>fit()</code> that actually learns and builds the model, like so:</p>
<pre><code>X_train2 = min_max_scaler.transform(X_train)
X_test2 = min_max_scaler.transform(X_test)
# the model being used
knn = KNeighborsClassifier(n_neighbors=3,metric="euclidean")
# learn parameters
knn.fit(X_train2, y_train)
# predict
y_pred = knn.predict(X_test2)
</code></pre>
<hr />
<p>Could someone please clarify the use, or multiple uses, of <code>fit()</code>, as well as the difference of scaling and transforming the data?</p> | 2021-08-23 17:32:24.863000+00:00 | 2021-08-28 02:34:12.250000+00:00 | 2021-08-28 02:34:12.250000+00:00 | python|scikit-learn | ['https://arxiv.org/abs/1309.0238'] | 1 |
56,247,299 | <p>Just to add to Mark's answer above, when using nat grads in non-conjugate models it can take a bit of tuning to get the best performance, and instability is potentially a problem. As Mark points out, the large steps that provide potentially faster convergence can also lead to the parameters ending up in in bad regions of the parameter space. When the variational approximation is good (i.e. the true and approximate posterior are close) then there is good reason to expect that the nat grad will perform well, but unfortunately there is no silver bullet in the general case. See <a href="https://arxiv.org/abs/1903.02984" rel="nofollow noreferrer">https://arxiv.org/abs/1903.02984</a> for some intuition.</p> | 2019-05-21 22:57:21.817000+00:00 | 2019-05-21 22:57:21.817000+00:00 | null | null | 56,236,466 | <p>When optimizing a SVGP with Poisson Likelihood for a big data set I see what I think are exploding gradients.
After a few epochs I see a sharp drop of the ELBO, which then only very slowly recovers after losing all the progress made before.
Roughly 21 iterations correspond to an epoch.</p>
<p><img src="https://i.imgur.com/VbZMKqT.png" alt="ELBO"></p>
<p>This spike (at least the second one) resulted in a complete shift of the parameters (for vectors of parameters I just plotted the norm to see changes):
<img src="https://i.imgur.com/o7y4ySh.png" alt="Parameters"></p>
<p>How can I deal with that? My first approach would be to clip the gradient, but that seems to require digging around the gpflow code.</p>
<p><b>My Setup:</b></p>
<p>Training works via Natural Gradients for the variational parameters and ADAM for the rest, with a slowly (linearly) increasing schedule for the Natural Gradient Gamma.</p>
<p>The batch and inducing point sizes are as large as possible for my setup
(both 2^12, with the data set consisting of ~88k samples). I include 1e-5 jitter and initialize the inducing points with kmeans.</p>
<p>I use a combined kernel, consisting of a combination of RBF, Matern52, a periodic and a linear kernel on a total of 95 features (a lot of them due to a one-hot encoding), all learnable.
The lengthscales are transformed with gpflow.transforms.</p>
<pre><code> with gpflow.defer_build():
k1 = Matern52(input_dim=len(kernel_idxs["coords"]), active_dims=kernel_idxs["coords"], ARD=False)
k2 = Periodic(input_dim=len(kernel_idxs["wday"]), active_dims=kernel_idxs["wday"])
k3 = Linear(input_dim=len(kernel_idxs["onehot"]), active_dims=kernel_idxs["onehot"], ARD=True)
k4 = RBF(input_dim=len(kernel_idxs["rest"]), active_dims=kernel_idxs["rest"], ARD=True)
#
k1.lengthscales.transform = gpflow.transforms.Exp()
k2.lengthscales.transform = gpflow.transforms.Exp()
k3.variance.transform = gpflow.transforms.Exp()
k4.lengthscales.transform = gpflow.transforms.Exp()
m = gpflow.models.SVGP(X, Y, k1 + k2 + k3 + k4, gpflow.likelihoods.Poisson(), Z,
mean_function=gpflow.mean_functions.Constant(c=np.ones(1)),
minibatch_size=MB_SIZE, name=NAME)
m.mean_function.set_trainable(False)
m.compile()
</code></pre>
<p><b>UPDATE: Using only ADAM</b>
Following the suggestion by Mark, I switched to ADAM only,
which helped me get rid of that sudden explosion. However, I still initialized with an epoch of natgrad only, which seems to save a lot of time.</p>
<p><img src="https://i.imgur.com/zS4wh9c.png" alt="Comparison of ADAM+NATGRAD(grey) vs ADAM only(red)"></p>
<p>In addition, the variational parameters seem to change a lot less abrupt (in terms of their norm at least). I guess they'll converge way slower now, but at least it's stable.
<img src="https://i.imgur.com/pMvv2LS.png" alt="Comparison of ADAM+NATGRAD(grey) vs ADAM only(red)"></p> | 2019-05-21 10:38:01.160000+00:00 | 2019-05-22 10:52:42.093000+00:00 | 2019-05-22 10:52:42.093000+00:00 | gradient|gradient-descent|gpflow | ['https://arxiv.org/abs/1903.02984'] | 1 |
50,552,539 | <p>If you want to have multiple layers that pass the information backward or forward in time, there are two ways how to design this. Assume the forward layer consists of two layers F1, F2 and the backward layer consists of two layers B1, B2.</p>
<p>If you use <code>tf.nn.bidirectional_dynamic_rnn</code> the model will look like this (time flows from left to right):</p>
<p><a href="https://i.stack.imgur.com/wrpA9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/wrpA9.png" alt="enter image description here"></a></p>
<p>If you use <code>tf.contrib.rnn.stack_bidirectional_dynamic_rnn</code> the model will look like this:</p>
<p><a href="https://i.stack.imgur.com/qODfk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qODfk.png" alt="enter image description here"></a></p>
<p>Here the black dot between first and second layer represents a concatentation. I.e., the outputs of the forward and backward cells are concatenated together and fed to the backward and forward layers of the next upper layer. This means both F2 and B2 receive exactly the same input and there is an explicit connection between backward and forward layers. In <a href="https://arxiv.org/pdf/1303.5778.pdf" rel="noreferrer">"Speech Recognition with Deep Recurrent Neural Networks"</a> Graves et al. summarize this as follows:</p>
<blockquote>
<p>... every hidden layer receives input from both the
forward and backward layers at the level below.</p>
</blockquote>
<p>This connection only happens implicitly in the unstacked BiRNN (first image), namely when mapping back to the output. The stacked BiRNN usually performed better for my purposes, but I guess that depends on your problem setting. But for sure it is worthwhile to try it out!</p>
<p><strong>EDIT</strong></p>
<p>In response to your comment: I base my answer on the documentation of the function <code>tf.contrib.rnn.stack_bidirectional_dynamic_rnn</code> which says:</p>
<blockquote>
<p>Stacks several bidirectional rnn layers. The combined forward and
backward layer outputs are used as input of the next layer.
tf.bidirectional_rnn does not allow to share forward and backward
information between layers.</p>
</blockquote>
<p>Also, I looked at the implementation available under <a href="https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/contrib/rnn/python/ops/rnn.py" rel="noreferrer">this link</a>. </p> | 2018-05-27 13:14:55.493000+00:00 | 2018-08-20 12:41:53.847000+00:00 | 2018-08-20 12:41:53.847000+00:00 | null | 49,242,266 | <p>I am building a dynamic RNN network with stacking multiple LSTMs. I see there are 2 options</p>
<pre><code># cells_fw and cells_bw are list of cells eg LSTM cells
stacked_cell_fw = tf.contrib.rnn.MultiRNNCell(cells_fw)
stacked_cell_bw = tf.contrib.rnn.MultiRNNCell(cells_bw)
output = tf.nn.bidirectional_dynamic_rnn(
stacked_cell_fw, stacked_cell_bw, INPUT,
sequence_length=LENGTHS, dtype=tf.float32)
</code></pre>
<p>vs </p>
<pre><code>output = tf.contrib.rnn.stack_bidirectional_dynamic_rnn(cells_fw, cells_bw, INPUT,
sequence_length=LENGTHS, dtype=tf.float32)
</code></pre>
<p>What is the difference between the 2 approaches and is one better than the other?</p> | 2018-03-12 18:35:36.587000+00:00 | 2019-04-09 17:02:29.233000+00:00 | 2019-04-09 17:02:29.233000+00:00 | tensorflow|recurrent-neural-network | ['https://i.stack.imgur.com/wrpA9.png', 'https://i.stack.imgur.com/qODfk.png', 'https://arxiv.org/pdf/1303.5778.pdf', 'https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/contrib/rnn/python/ops/rnn.py'] | 4 |
30,527,549 | <p>It is a research problem called <strong>wrapper induction</strong> or <strong>web data extraction</strong>. I don't know any library for this, but there are a lot of research papers (see below the list of good ones IMHO) and some research projects like <a href="http://diadem.cs.ox.ac.uk/" rel="nofollow">DIADEM</a> (their site contains list of publications as well).</p>
<ul>
<li>Muslea, Ion, Steven Minton, and Craig A. Knoblock. “<a href="http://www.ai.sri.com/~muslea/PS/jaamas-2k.pdf" rel="nofollow">Hierarchical
Wrapper Induction for Semistructured Information Sources</a>.” Autonomous
Agents and Multi-Agent Systems 4, no. 1–2 (2001): 93–114. </li>
<li>Dalvi, Nilesh, Ravi Kumar, and Mohamed Soliman. “<a href="http://arxiv.org/pdf/1103.2406" rel="nofollow">Automatic Wrappers for
Large Scale Web Extraction.</a>” Proceedings of the VLDB Endowment 4, no.
4 (2011): 219–230. </li>
<li>Dalvi, Nilesh, Ashwin Machanavajjhala, and Bo Pang. “An Analysis of
Structured Data on the Web.” Proceedings of the VLDB Endowment 5, no.
7 (2012): 680–691.</li>
<li>Gentile, Anna Lisa, Ziqi Zhang, Isabelle Augenstein, and Fabio
Ciravegna. “<a href="http://arxiv.org/pdf/1203.6406" rel="nofollow">Unsupervised Wrapper Induction Using Linked Data</a>.” In
Proceedings of the Seventh International Conference on Knowledge
Capture, 41–48, 2013. </li>
<li>Weninger, Tim, and Jiawei Han. “Exploring Structure and Content on
the Web: Extraction and Integration of the Semi-Structured Web.” In
Proceedings of the Sixth ACM International Conference on Web Search
and Data Mining, 779–780, 2013.
<a href="http://dl.acm.org/citation.cfm?id=2433499" rel="nofollow">http://dl.acm.org/citation.cfm?id=2433499</a>.</li>
</ul> | 2015-05-29 10:42:53.923000+00:00 | 2015-05-29 10:42:53.923000+00:00 | null | null | 30,524,874 | <p>It is a pattern recognition task in web crawler. The traditional crawler gets the data of the whole page. If there is any way to make the crawler a litter intelligence, like just to identify and capture the the information part.</p> | 2015-05-29 08:33:24.390000+00:00 | 2015-05-29 10:42:53.923000+00:00 | null | machine-learning|web-crawler | ['http://diadem.cs.ox.ac.uk/', 'http://www.ai.sri.com/~muslea/PS/jaamas-2k.pdf', 'http://arxiv.org/pdf/1103.2406', 'http://arxiv.org/pdf/1203.6406', 'http://dl.acm.org/citation.cfm?id=2433499'] | 5 |
55,763,637 | <p>A (very) partial answer (and solution) to your question is to use <a href="https://arxiv.org/pdf/1703.09307.pdf" rel="nofollow noreferrer">Fluid Communities algorithm</a> implemented by Networkx as <a href="https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.community.asyn_fluid.asyn_fluidc.html#networkx.algorithms.community.asyn_fluid.asyn_fluidc" rel="nofollow noreferrer"><code>asyn_fluidc</code></a>.</p>
<p>Note that it works on connected, undirected, unweighted graphs, so if your graph has n connected components, you should run it n times. In fact this could be a significant issue as you should have some sort of preliminary knowledge of each component to choose the corresponding k.</p>
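<p>A minimal sketch (assuming an undirected NetworkX graph <code>G</code>; the per-component <code>k</code> below is just a naive choice you would tune):</p>
<pre><code>import networkx as nx
from networkx.algorithms.community import asyn_fluidc

K = 42  # total number of communities you expect (departments here)

communities = []
for nodes in nx.connected_components(G):
    component = G.subgraph(nodes)
    k = min(K, len(component))        # naive cap: k cannot exceed the component size
    communities.extend(asyn_fluidc(component, k))

largest = max(communities, key=len)   # node set of the biggest detected community
</code></pre>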
<p>Anyway, it is worth a try.</p> | 2019-04-19 14:45:57.080000+00:00 | 2019-04-19 14:45:57.080000+00:00 | null | null | 55,745,157 | <p>I have a network that is a graph network and it is the Email-Eu network that is available in <a href="https://snap.stanford.edu/data/email-Eu-core.html" rel="nofollow noreferrer">here</a>.</p>
<p>This dataset contains the actual network, which is a graph of around 1005 nodes with the edges that form this giant graph. It also has the ground-truth labels for the nodes and their corresponding communities (departments). Each one of these nodes belongs to one of the 42 departments.</p>
<p>I want to run a community detection algorithm on the graph to find the corresponding department for each node. My main objective is to find the nodes in the largest community.</p>
<p>So, first I need to find the first 42 departments (Communities), then find the nodes in the biggest one of them.</p>
<p>I started with the Girvan-Newman algorithm to find the communities. The beauty of Girvan-Newman is that it is easy to implement: I repeatedly find the edge with the highest betweenness and remove it until I am left with the 42 departments (communities) I want.</p>
<p>I am struggling to find other Community Detection Algorithms that give me the option of specifying how many communities/partitions I need to break down my graph into.</p>
<p>Is there any community detection function/technique that I can use which gives me the option of specifying how many communities I need to uncover from my graph? Any ideas are very much appreciated.</p>
<p>I am using Python and NetworkX.</p> | 2019-04-18 11:34:27.950000+00:00 | 2020-07-17 22:28:49.660000+00:00 | 2020-07-17 22:28:49.660000+00:00 | python|python-3.x|algorithm|networkx|graph-theory | ['https://arxiv.org/pdf/1703.09307.pdf', 'https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.community.asyn_fluid.asyn_fluidc.html#networkx.algorithms.community.asyn_fluid.asyn_fluidc'] | 2 |
61,710,709 | <p><a href="https://i.stack.imgur.com/wexKD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wexKD.png" alt="enter image description here"></a></p>
<p>Technically, if you are only looking for the points at maximum distance, you can build a polygon (convex hull) from the points; the maximum distance is attained between points on the border.</p>
<p>You can calculate the convex hull in O(k·log(k)).</p>
<p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html</a></p>
<p>After that, you need to just test points on the border.</p>
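<p>A minimal sketch with SciPy (assuming <code>points</code> is a (k, 2) NumPy array of your k points):</p>
<pre><code>import numpy as np
from scipy.spatial import ConvexHull

hull = ConvexHull(points)          # O(k log k) in the plane
border = points[hull.vertices]     # keep only the points on the hull boundary

# brute-force the diameter over the (usually few) hull vertices
diffs = border[:, None, :] - border[None, :, :]
max_dist = np.sqrt((diffs ** 2).sum(-1)).max()
</code></pre>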
<p>This is the deterministic approach; you can apply heuristic or randomized search to do it faster, but those are not guaranteed to provide the correct solution.</p>
<p>Here's a paper which discusses the topic with another algorithm: <a href="https://arxiv.org/ftp/arxiv/papers/1708/1708.02758.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/arxiv/papers/1708/1708.02758.pdf</a></p> | 2020-05-10 10:45:16.967000+00:00 | 2020-05-10 10:45:16.967000+00:00 | null | null | 61,710,480 | <p>given a set of n points , i take k points randomly. I need to compute in the most efficient way the <strong>maximum distance</strong> of the <strong>k</strong> points from the <strong>n</strong> points with a <strong>2-approx factor</strong> (exploiting in some way the triangular inequality).
A first idea I had was to use the Manhattan distance instead of the Euclidean Distance, but this does not reduce complexity as it is still <strong>O(n*k)</strong>.
What could be some ideas?</p>
<p>EDIT: what if i first compute the 2 farthest point in the k points and then calculate the distance of the 2 points from all the n points?</p> | 2020-05-10 10:29:10.063000+00:00 | 2020-05-10 11:07:15.593000+00:00 | 2020-05-10 11:07:15.593000+00:00 | python|time-complexity|approximation|pairwise-distance | ['https://i.stack.imgur.com/wexKD.png', 'https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html', 'https://arxiv.org/ftp/arxiv/papers/1708/1708.02758.pdf'] | 3 |
60,146,856 | <p>100k words is too few to train such a large model as BERT or RoBERTa. The main claim of <a href="https://arxiv.org/pdf/1907.11692.pdf" rel="nofollow noreferrer">the RoBERTa paper</a> is that BERT is actually undertrained: whereas BERT was trained on 16 GB of text data, RoBERTa used 160 GB of plain text.</p>
<p>For a small domain-specific data as you describe, you can try fine-tuning an existing model. In this case, I would go for RoBERTa because it seems to be better pre-trained, does not have the next-sentence-objective (which is a hassle to pre-process data for it) and it uses SentencePiece for tokenization, which allows loss-less detokenization.</p> | 2020-02-10 08:49:19.093000+00:00 | 2020-02-13 22:37:42.353000+00:00 | 2020-02-13 22:37:42.353000+00:00 | null | 60,137,162 | <p>I want to pre-train BERT and RoBERTa MLM using domain corpus (sentiment-related text). How long it gonna take for using 50k~100k words. Since RoBERTa is not trained on predicting the next sentence objective, one training objective less than BERT and with larger mini-batches and learning rates, I assume RoBERTa will be much faster?</p> | 2020-02-09 13:33:21.890000+00:00 | 2020-03-26 11:30:15.040000+00:00 | 2020-03-26 11:30:15.040000+00:00 | language-model|bert-language-model|huggingface-transformers | ['https://arxiv.org/pdf/1907.11692.pdf'] | 1 |
52,704,019 | <p>The algorithm has some requirements on the relation between the number of training instances and the dimensions of your vectors, but you can try <a href="https://databricks.com/blog/2014/10/20/efficient-similarity-algorithm-now-in-spark-twitter.html" rel="nofollow noreferrer">DIMSUM</a>.</p>
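<p>For reference, a minimal sketch of the Spark implementation (DIMSUM is exposed as <code>columnSimilarities</code>, which compares columns, so the document vectors are assumed here to be arranged as columns of the matrix):</p>
<pre><code>from pyspark.mllib.linalg.distributed import RowMatrix

# rows: an RDD of vectors whose columns correspond to your documents
mat = RowMatrix(rows)

# a threshold above 0 enables the DIMSUM sampling scheme; larger values are
# faster but less accurate for low-similarity pairs
sims = mat.columnSimilarities(0.1)

# entries are MatrixEntry(i, j, cosine_similarity) for sampled column pairs
top = sims.entries.filter(lambda e: e.value > 0.8).take(10)
</code></pre>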
<p>You can find the paper <a href="https://arxiv.org/abs/1304.1467" rel="nofollow noreferrer">here</a>.</p> | 2018-10-08 14:02:49.837000+00:00 | 2018-10-08 14:02:49.837000+00:00 | null | null | 52,659,152 | <p><strong><em>The problem:</em></strong></p>
<p>Suppose I have a group of around 1,000,000 short documents D (no more than 50 words each), and I want to let users to supply a document from the same group D, and and get the top K similar documents from D.</p>
<p><strong><em>My approach:</em></strong></p>
<p>My first approach was to preprocess the group D by applying simple tf-idf, and after I have a vector for each document, which is extremely sparse, to use a simple nearest-neighbours algorithm based on cosine similarity.
Then, at query time, to just use my static nearest-neighbours table, whose size is 1,000,000 x K, without any further calculations.</p>
<p>After applying tf-idf, I got vectors of size ~200,000, which means I now have a very sparse table (that can be stored efficiently in memory using sparse vectors) of size 1,000,000 x 200,000.
However, calculating the nearest-neighbours model took me more than one day, and it still hasn't finished.
I tried to lower the vector dimension by applying HashingTF, which utilizes the <a href="https://en.wikipedia.org/wiki/Feature_hashing" rel="noreferrer">hashing trick</a>, instead, so I can set the dimension to a constant one (in my case, I used 2^13 for unified hashing), but I still get the same bad performance.</p>
<p>Some technical information:</p>
<p>I use Spark 2.0 for the tf-idf calculation, and sklearn NearestNeighbours on the collected data.</p>
<p>Is there any more efficient way to achieve that goal?</p>
<p>Thanks in advance.</p>
<p><strong>Edit:</strong></p>
<p>I had an idea to try a <a href="https://en.wikipedia.org/wiki/Locality-sensitive_hashing" rel="noreferrer">LSH</a> based approximation similarity algorithm like those implemented in spark as described <a href="https://spark.apache.org/docs/latest/ml-features.html#locality-sensitive-hashing" rel="noreferrer">here</a>, but could not find one that supports the 'cosine' similarity metric.</p> | 2018-10-05 05:57:28.320000+00:00 | 2018-10-08 14:02:49.837000+00:00 | 2018-10-05 07:21:10.443000+00:00 | apache-spark|scikit-learn|pyspark | ['https://databricks.com/blog/2014/10/20/efficient-similarity-algorithm-now-in-spark-twitter.html', 'https://arxiv.org/abs/1304.1467'] | 2 |
3,158,136 | <p>It is possible; people calling it homework probably haven't tried solving it yet.</p>
<p>We use the following as a sub-routine:</p>
<pre><code>Given an array a1 a2 ... an b1 b2 .. bn, convert in O(n) time and O(1) space to
b1 a1 b2 a2 ... bn an
</code></pre>
<p>A solution for that can be found here: <a href="http://arxiv.org/abs/0805.1598" rel="noreferrer">http://arxiv.org/abs/0805.1598</a></p>
<p>We use that as follows.</p>
<p>Do the above interleaving for the first 2^(k+1) - 2 elements, starting at k=1 and repeating for k=2, 3, etc., till you go past the end of the array.</p>
<p>For example in your array we get (interleaving sets identified by brackets)</p>
<pre><code> 8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15
[ ][ ]
4, 8, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15 (k = 1, interleave 2)
[ ][ ]
2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15 (k = 2, interleave 6)
[ ][ ]
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 (k = 3, interleave 14)
</code></pre>
<p>So the total time is n + n/2 + n/4 + ... = O(n).
Space used is O(1).</p>
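<p>A small Python sketch of that iteration structure (for clarity the interleave here uses O(m) extra space; in a real solution it would be replaced by the O(1)-space cycle-leader interleave from the linked paper):</p>
<pre><code>def interleave(arr, m):
    # a1..am b1..bm  ->  b1 a1 b2 a2 ... bm am  (on the first 2*m elements)
    a, b = arr[:m], arr[m:2 * m]
    arr[:2 * m] = [x for pair in zip(b, a) for x in pair]

def sort_bst_array(arr):
    n = len(arr)
    k = 1
    while n >= 2 ** (k + 1) - 2:      # first 2^(k+1)-2 elements, i.e. m = 2^k - 1
        interleave(arr, 2 ** k - 1)
        k += 1
    return arr

print(sort_bst_array([8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
</code></pre>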
<p>That this works can be proved by induction.</p> | 2010-07-01 13:17:42.007000+00:00 | 2010-07-01 13:51:26.543000+00:00 | 2010-07-01 13:51:26.543000+00:00 | null | 3,158,014 | <p>This is not a homework. Just an interesting task :)</p>
<p>Given a complete binary search three represensted by array. Sort the array in O(n) using constant memory.</p>
<p>Example:</p>
<p>Tree:</p>
<pre><code> 8
/ \
4 12
/\ / \
2 6 10 14
/\ /\ /\ /\
1 3 5 7 9 11 13 15
</code></pre>
<p>Array: 8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15</p>
<p>Output: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15</p> | 2010-07-01 13:00:57.630000+00:00 | 2010-08-09 20:19:16.890000+00:00 | 2010-08-09 20:19:16.890000+00:00 | arrays|algorithm|sorting|binary-tree | ['http://arxiv.org/abs/0805.1598'] | 1 |
50,892,512 | <p>Be aware that <a href="https://arxiv.org/pdf/1608.05859.pdf" rel="noreferrer">Press and Wolf</a> don't propose to freeze the weights to some pretrained ones, but to tie them. That means ensuring that the input and output weights are always the same during training (i.e. kept synchronized).</p>
<p>In a typical NLP model (e.g. language modelling/translation), you have an input dimension (vocabulary) of size <code>V</code> and a hidden representation size <code>H</code>. Then, you start with an <code>Embedding</code> layer, which is a matrix <code>VxH</code>. And the output layer is (probably) something like <code>Dense(V, activation='softmax')</code>, which is a matrix <code>H2xV</code>. When tying the weights, we want those matrices to be the same (therefore, <code>H==H2</code>).
For doing this in Keras, I think the way to go is via shared layers:</p>
<p>In your model, you need to instantiate a shared embedding layer (of dimension <code>VxH</code>), and apply it to both your input and output. But you need to transpose it to get the desired output dimensions (<code>HxV</code>). So, we declare a <code>TiedEmbeddingsTransposed</code> layer, which transposes the embedding matrix from a given layer (and applies an activation function):</p>
<pre><code>class TiedEmbeddingsTransposed(Layer):
"""Layer for tying embeddings in an output layer.
A regular embedding layer has the shape: V x H (V: size of the vocabulary. H: size of the projected space).
In this layer, we'll go: H x V.
With the same weights than the regular embedding.
In addition, it may have an activation.
# References
- [ Using the Output Embedding to Improve Language Models](https://arxiv.org/abs/1608.05859)
"""
def __init__(self, tied_to=None,
activation=None,
**kwargs):
super(TiedEmbeddingsTransposed, self).__init__(**kwargs)
self.tied_to = tied_to
self.activation = activations.get(activation)
def build(self, input_shape):
self.transposed_weights = K.transpose(self.tied_to.weights[0])
self.built = True
def compute_mask(self, inputs, mask=None):
return mask
def compute_output_shape(self, input_shape):
return input_shape[0], K.int_shape(self.tied_to.weights[0])[0]
def call(self, inputs, mask=None):
output = K.dot(inputs, self.transposed_weights)
if self.activation is not None:
output = self.activation(output)
return output
def get_config(self):
config = {'activation': activations.serialize(self.activation)
}
base_config = super(TiedEmbeddingsTransposed, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
</code></pre>
<p>The usage of this layer is:</p>
<pre><code># Declare the shared embedding layer
shared_embedding_layer = Embedding(V, H)
# Obtain word embeddings
word_embedding = shared_embedding_layer(input)
# Do stuff with your model
# Compute output (e.g. a vocabulary-size probability vector) with the shared layer:
output = TimeDistributed(TiedEmbeddingsTransposed(tied_to=shared_embedding_layer, activation='softmax'))(intermediate_rep)
</code></pre>
<p>I have tested this in <a href="https://github.com/lvapeab/nmt-keras" rel="noreferrer">NMT-Keras</a> and it trains properly. But, as I try to load a trained model, it gets an error, related to the way Keras loads the models: it doesn't load the weights from the <code>tied_to</code>. I've found several questions regarding this (<a href="https://stackoverflow.com/questions/42015116/decoders-weights-of-autoencoder-with-tied-weights-in-keras">1</a>, <a href="https://stackoverflow.com/questions/47106830/keras-get-weights-returning-empty">2</a>, <a href="https://gist.github.com/dswah/c6b3e4d47d933b057aab32c9c29c4221#gistcomment-2298473" rel="noreferrer">3</a>), but I haven't managed to solve this issue. If someone have any ideas on the next steps to take, I'd be very glad to hear them :)</p> | 2018-06-16 23:24:22.767000+00:00 | 2018-06-16 23:24:22.767000+00:00 | null | null | 47,095,673 | <p>Its commonplace for various neural network architectures in NLP and vision-language problems to tie the weights of an initial word embedding layer to that of an output softmax. Usually this produces a boost to sentence generation quality. (see example <a href="https://arxiv.org/pdf/1608.05859.pdf" rel="noreferrer">here</a>)</p>
<p>In Keras its typical to embed word embedding layers using the Embedding class, however there seems to be no easy way to tie the weights of this layer to the output softmax. Would anyone happen to know how this could be implemented ?</p> | 2017-11-03 12:20:38.443000+00:00 | 2018-06-16 23:24:22.767000+00:00 | 2018-04-24 12:29:02.223000+00:00 | machine-learning|neural-network|nlp|deep-learning|keras | ['http://%20%20%20[1]:%20https://arxiv.org/pdf/1608.05859.pdf', 'https://github.com/lvapeab/nmt-keras', 'https://stackoverflow.com/questions/42015116/decoders-weights-of-autoencoder-with-tied-weights-in-keras', 'https://stackoverflow.com/questions/47106830/keras-get-weights-returning-empty', 'https://gist.github.com/dswah/c6b3e4d47d933b057aab32c9c29c4221#gistcomment-2298473'] | 5 |
8,916,615 | <p>In a nutshell: OpenCPU is a layer on top of the regular tools (e.g. RApache, rpy2) that defines a framework and protocol for interacting with R. It handles stuff like object serialization, security, resource control, reproducibility etc, while abstracting away technicalities. </p>
<p>This <a href="http://arxiv.org/abs/1406.4806">paper on arxiv</a> goes into more detail on the motivation and design of the system and API.</p> | 2012-01-18 20:02:01.727000+00:00 | 2014-12-08 19:02:47.753000+00:00 | 2014-12-08 19:02:47.753000+00:00 | null | 8,858,429 | <p>Lately I was pointed to <a href="http://opencpu.org/" rel="nofollow">http://opencpu.org/</a> . Nifty website, but after browsing for a little while I wasn't so sure where it is located in the R landscape compared to e.g. <a href="http://rapache.net/" rel="nofollow">rApache</a> or <a href="http://rpy2.bitbucket.org/" rel="nofollow">RPy2</a>.</p>
<p>After waiting a long time for the server to come back I was finally able to read the architecture section, but that wasn't too comprehensive. <strong>I'm looking for a more detailed explanation of what OpenCPU is, how it is intended to be used, and how this compares with existing tools such as <a href="http://rapache.net/" rel="nofollow">rApache</a> and <a href="http://rpy2.bitbucket.org/" rel="nofollow">RPy2</a>.</strong></p> | 2012-01-13 23:00:30.947000+00:00 | 2017-06-19 08:36:40.233000+00:00 | 2017-06-19 08:36:40.233000+00:00 | r|rpy2|opencpu|rapache | ['http://arxiv.org/abs/1406.4806'] | 1 |
42,911,408 | <p>Your loss function value is not going in the right direction. This means your model is not able to capture the features it needs to focus on. There are two things you can try:</p>
<ol>
<li>Change your model. I'd suggest first picking up some widely used model for text classification, then experimenting with it to improve accuracy.</li>
<li>Although your input representation makes sense, you can try some specialized sentence-to-vector models like <a href="https://arxiv.org/abs/1506.06726" rel="nofollow noreferrer">skip-thought</a> (one implementation <a href="https://github.com/ryankiros/skip-thoughts" rel="nofollow noreferrer">here</a>).</li>
</ol> | 2017-03-20 18:35:05.887000+00:00 | 2017-03-20 18:35:05.887000+00:00 | null | null | 42,897,510 | <p>I am trying to train a convolutional neural network with Keras at recognizing tags for Stack Exchange questions about cooking. </p>
<p>The i-th question element of my data-set is like this:</p>
<pre><code>id 2
title How should I cook bacon in an oven?
content <p>I've heard of people cooking bacon in an ov...
tags oven cooking-time bacon
Name: 1, dtype: object
</code></pre>
<p>I have removed tags with BeautifulSoup and removed punctuation too.
Since the questions' content is very big I have decided to focus on titles.
I have used sklearn's CountVectorizer to vectorize the words in titles. However, there were more than 8000 words (excluding stop words), so I decided to apply part-of-speech tagging and retrieve only nouns and gerunds.</p>
<pre><code>from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(stop_words='english')
titles = dataframes['cooking']['title']
pos_titles = []
for i,title in enumerate(titles):
pos = []
pt_titl = nltk.pos_tag(word_tokenize(title))
for pt in pt_titl:
if pt[1]=='NN' or pt[1]=='NNS' or pt[1]=='VBG':# or pt[1]=='VBP' or pt[1]=='VBS':
pos.append(pt[0])
pos_titles.append(" ".join(pos))
</code></pre>
<p>This represents my input vector. I have vectorized tags too and extract dense matrixes for both input and tags. </p>
<pre><code>tags = [" ".join(x) for x in dataframes['cooking']['tags']]
Xd = X.todense()
Y = vectorizer.fit_transform(tags)
Yd = Y.todense()
</code></pre>
<p>Split data into train and validation set </p>
<pre><code>from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(Xd, Yd, test_size=0.33, random_state=42)
</code></pre>
<p>Now I am trying to train a Conv1D network</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Activation,Flatten
from keras.layers import Conv2D, MaxPooling2D,Conv1D, Embedding,GlobalMaxPooling1D,Dropout,MaxPooling1D
model = Sequential()
model.add(Embedding(Xd.shape[1],
128,
input_length=Xd.shape[1]))
model.add(Conv1D(32,5,activation='relu'))
model.add(MaxPooling1D(100,stride=50))
model.add(Conv1D(32,5,activation='relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(Yd.shape[1], activation ='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32,verbose=1)
</code></pre>
<p>But it gets stucked on a very low accuracy and it shows a barely increasing loss along the epochs</p>
<pre><code>Epoch 1/10
10320/10320 [==============================] - 401s - loss: 15.8098 - acc: 0.0604
Epoch 2/10
10320/10320 [==============================] - 339s - loss: 15.5671 - acc: 0.0577
Epoch 3/10
10320/10320 [==============================] - 314s - loss: 15.5509 - acc: 0.0578
Epoch 4/10
10320/10320 [==============================] - 34953s - loss: 15.5493 - acc: 0.0578
Epoch 5/10
10320/10320 [==============================] - 323s - loss: 15.5587 - acc: 0.0578
Epoch 6/10
6272/10320 [=================>............] - ETA: 133s - loss: 15.6005 - acc: 0.0550
</code></pre> | 2017-03-20 06:58:22.160000+00:00 | 2017-03-20 18:35:05.887000+00:00 | null | machine-learning|nlp|deep-learning|keras | ['https://arxiv.org/abs/1506.06726', 'https://github.com/ryankiros/skip-thoughts'] | 2 |
44,889,219 | <p>1: Each element (or group of elements) in an embedding vector has some meaning, but it is mostly unknown to humans. Depending on what algorithm you use, a word embedding vector may have a different meaning, but it is usually useful.
For example, with <a href="https://nlp.stanford.edu/projects/glove/" rel="noreferrer">Glove</a>, similar words like 'frog' and 'toad' stay near each other in vector space. King - Man + Woman results in a vector similar to Queen.</p>
<ol start="3">
<li><p>Turn the vocab into indices. For example, you have a vocabulary list:
[dog, cat, mouse, feed, play, with]
Then the sentence: Dog play with cat => 0, 4, 5, 1
Meanwhile, you have an embedding matrix as follows</p>
<p>[0.1, 0.1, 0] # comment: this is dog <br>
[0.2, 0.5, 0.1] # this is cat <br>
[...] <br>
[...] <br>
[...] <br>
[...] <br></p></li>
</ol>
<p>where the first row is the embedding vector of dog, the second row is cat, and so on.
Then, the index list (0, 4, 5, 1) after lookup would become the matrix [[0.1, 0.1, 0][...][...][0.2, 0.5, 0.1]]</p>
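<p>A tiny NumPy sketch of that lookup (the values are just the made-up example numbers above):</p>
<pre><code>import numpy as np

embedding = np.array([[0.1, 0.1, 0.0],    # dog
                      [0.2, 0.5, 0.1],    # cat
                      [0.3, 0.2, 0.4],    # mouse (made-up)
                      [0.0, 0.6, 0.1],    # feed (made-up)
                      [0.5, 0.1, 0.2],    # play (made-up)
                      [0.2, 0.2, 0.2]])   # with (made-up)

sentence = [0, 4, 5, 1]            # "dog play with cat" as indices
vectors = embedding[sentence]      # lookup: one row per word, shape (4, 3)
</code></pre>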
<ol start="4">
<li>either or both
<ul>
<li>You can randomly init the embedding vectors and train them with gradient descent</li>
<li>You can take pretrained word vectors and keep them fixed (i.e. read-only, no change).
You can train your word vectors in one model and use them in another model. Or you can download pretrained word vectors online. Example: Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download): glove.840B.300d.zip on <a href="https://nlp.stanford.edu/projects/glove/" rel="noreferrer">Glove</a></li>
<li>You can init with pretrained word vectors and train them with your model by gradient descent</li>
</ul></li>
</ol>
<p>Update:
A <strong>one-hot vector</strong> does not contain any information beyond the word's identity. You can think of a one-hot vector as the index of that word in the vocabulary.
For example, Dog => [1, 0, 0, 0, 0, 0] and cat => [0, 1, 0, 0, 0, 0]. There are some differences between one-hot and index: </p>
<ul>
<li><p>If you input a list of indices, [0, 4, 5, 1], to your multi-layer perceptron, it cannot learn anything (I tried...). But if you input a matrix of one-hot vectors [[...1][1...][...][...]], it learns something. However, it is costly in terms of RAM and CPU. </p></li>
<li><p>One-hot costs a lot of memory to store zeros. Thus, I suggest randomly initializing the embedding matrix if you don't have one. Store the dataset as indices, and use the indices to look up embedding vectors.</p></li>
</ul>
<blockquote>
<p>"its mean that lookup table is just a matrix of embedded vectors
(already been trained seperately via word2vec or...) for each word in
the vocabulary. and while in the process of neural network either we
can use an Embedding Layer or we can just refer to embedded vector in
lookup table for that particular embedded vector against particular
one-hot vector."</p>
</blockquote>
<p>Use the "INDEX" to look-up in lookup table. Turn dog into 0, cat into 1. One-hot vector and index contain same information, but one-hot cost more memory to store. Moreover, a lot of deeplearning framework accept index as input to embedding layer (which, output is a vector represent for a word in that index.)</p>
<blockquote>
<p>". How we get this embedding vector..."</p>
</blockquote>
<p>=> read paper. Here is paper about <a href="https://arxiv.org/abs/1301.3781" rel="noreferrer">Word2vec</a> and <a href="https://nlp.stanford.edu/projects/glove/" rel="noreferrer">Glove</a>. Ask your lecturers for more detail, they are willing to help you. </p> | 2017-07-03 15:24:22.327000+00:00 | 2017-07-04 16:44:38.497000+00:00 | 2017-07-04 16:44:38.497000+00:00 | null | 44,881,999 | <p>I need to ask few questions regarding word embeddings.....could be basic.</p>
<ol>
<li>When we convert a one-hot vector of a word for instance king <code>[0 0 0 1 0]</code> into an embedded vector <code>E = [0.2, 0.4, 0.2, 0.2]</code>.... is there any importance for each index in resultant word vector? For instance <code>E[1]</code> which is 0.2.... what specifically <code>E[1]</code> defines (although I know its basically a transformation into another space).... or word vector collectively defines context but not individually...</li>
<li>How the dimension (reduced or increased) of a word vector matters as compared to the original one-hot vector ?</li>
<li>How can we define lookup table in term of embedding layer?</li>
<li>Is the lookup table a kind of randomly generated table, or has it already been trained separately with respect to the data instances, so that we just use it later on in neural network operations?</li>
<li>Is there any method to visualize an embedded vector at a hidden layer (as we have in image-based neural network processing)?</li>
</ol>
<p>Thanks in advance</p> | 2017-07-03 09:24:22.947000+00:00 | 2019-03-24 17:13:31.060000+00:00 | 2019-03-24 17:13:31.060000+00:00 | deep-learning|text-mining|word2vec|word-embedding | ['https://nlp.stanford.edu/projects/glove/', 'https://nlp.stanford.edu/projects/glove/', 'https://arxiv.org/abs/1301.3781', 'https://nlp.stanford.edu/projects/glove/'] | 4 |
70,159,953 | <p>The reason is how hardware makes the process. In deep learning matrix operations are the main computations and source of floating point operations (FLOPs).</p>
<p>Single Instruction Multiple Data (SIMD) operations in CPUs happen in batch sizes, which are powers of 2. Consider take a look if you are interested:</p>
<p><a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37631.pdf" rel="nofollow noreferrer">https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37631.pdf</a></p>
<p>And for GPUs:</p>
<p><a href="https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html" rel="nofollow noreferrer">https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html</a></p>
<blockquote>
<p>Memory allocated through the CUDA Runtime API, such as via
cudaMalloc(), is guaranteed to be aligned to at least 256 bytes.
Therefore, choosing sensible thread block sizes, such as multiples of
the warp size (i.e., 32 on current GPUs), facilitates memory accesses
by warps that are properly aligned. (Consider what would happen to the
memory addresses accessed by the second, third, and subsequent thread
blocks if the thread block size was not a multiple of warp size, for
example.)</p>
</blockquote>
<p>This means that any multiple of 32 will optimize the memory access, and thus, the processing speed, while you are using a gpu.</p>
<p>About the right value, pyramidal shape usually works better, because as you go deeper, the neural network tends to create internal representations of the transformed data, in an expected hierarchical, thus, pyramidal shape. So a good guess is to use decreasing amounts of neurons at each layer as you come close to the output, e.g:</p>
<pre><code>self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 128),
nn.ReLU(),
nn.Linear(128, 10),
nn.ReLU()
)
</code></pre>
<p>But there is no general rule and you can find whole fields of study (like Neural Architecture Search) about how to find optimal hyper-parameters for neural networks.</p>
<p>you can take a look here for some deeper information:</p>
<p><a href="https://arxiv.org/pdf/1608.04064.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1608.04064.pdf</a></p> | 2021-11-29 18:55:58.303000+00:00 | 2021-12-09 20:13:40.520000+00:00 | 2021-12-09 20:13:40.520000+00:00 | null | 70,159,370 | <p>I'm working through the lessons on <a href="https://docs.microsoft.com/en-us/learn/modules/intro-machine-learning-pytorch/5-model" rel="nofollow noreferrer">building a neural network</a> and I'm confused as to why 512 is used for the linear_relu_stack in the example code:</p>
<pre><code>class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10),
nn.ReLU()
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
</code></pre>
<p>I started googling around and saw many examples of the <code>torch.nn.Linear</code> function using various values of <code>2**N</code> but it isn't clear to me why they are using powers of 2 nor how they are choosing which value to use.</p> | 2021-11-29 18:10:02.960000+00:00 | 2021-12-09 20:13:40.520000+00:00 | null | python|deep-learning|neural-network|pytorch | ['https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37631.pdf', 'https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html', 'https://arxiv.org/pdf/1608.04064.pdf'] | 3 |
49,530,799 | <p>It seems like your loss doesn't fit your problem. You use binary cross entropy loss here:</p>
<pre><code>model.compile(optimizer=Adadelta(), loss='binary_crossentropy')
</code></pre>
<p>But you have more than two classes. So I would suggest using the <code>categorical_crossentropy</code> loss (it appears in the list of losses <a href="https://keras.io/losses/" rel="nofollow noreferrer">here</a>; read at the bottom of the page how to prepare your data in order to use this loss).</p>
<p>There are additional types of losses which may fit class-imbalance situations better. You may try the dice loss, which is a differentiable approximation of the IoU (intersection over union).</p>
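<p>A minimal Keras sketch of such a loss (a soft-Dice formulation; the smoothing constant is just an assumption to avoid division by zero):</p>
<pre><code>from keras import backend as K

def dice_loss(y_true, y_pred, smooth=1.0):
    # soft Dice: 1 - 2*|intersection| / (|y_true| + |y_pred|)
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return 1.0 - (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

# it would then replace the loss in the existing compile call:
model.compile(optimizer=Adadelta(), loss=dice_loss)
</code></pre>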
<p>This loss is described on page 6, section 3 <a href="https://arxiv.org/pdf/1606.04797.pdf%E2%80%8B" rel="nofollow noreferrer">here</a>.</p> | 2018-03-28 09:26:34.260000+00:00 | 2018-03-28 09:31:41.247000+00:00 | 2018-03-28 09:31:41.247000+00:00 | null | 49,518,935 | <p>I have pretrained VGG16 based FCN-32s like model, defined like:</p>
<pre class="lang-python prettyprint-override"><code>def pop_layer(model):
if not model.outputs:
raise Exception('Sequential model cannot be popped: model is empty.')
model.layers.pop()
if not model.layers:
model.outputs = []
model.inbound_nodes = []
model.outbound_nodes = []
else:
model.layers[-1].outbound_nodes = []
model.outputs = [model.layers[-1].output]
model.built = False
def get_model():
#Fully convolutional part of VGG16
model = VGG16(include_top=False, weights='imagenet')
#Remove last max pooling layer
pop_layer(model)
#Freeze pretrained layers
for layer in model.layers:
layer.trainable = False
model = Model(inputs=model.inputs, outputs=model.outputs)
#print('len(model.layers)', len(model.layers)) #
#print(model.summary()) #
x = Conv2D(512, (3, 3), activation='relu', padding='same')(model.output)
x = Conv2DTranspose(NUMBER_OF_CLASSES, kernel_size=(32, 32), strides=(16, 16), activation='sigmoid', padding='same')(x)
head = Reshape((-1,NUMBER_OF_CLASSES))(x)
model = Model(inputs=model.inputs, outputs=head)
model.compile(optimizer=Adadelta(), loss='binary_crossentropy')
print('len(model.layers)', len(model.layers)) #
print(model.summary()) #
return model
</code></pre>
<p>Model summary:</p>
<pre class="lang-python prettyprint-override"><code>len(model.layers) 21
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, None, None, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, None, None, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, None, None, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, None, None, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, None, None, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, None, None, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, None, None, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, None, None, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, None, None, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, None, None, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, None, None, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, None, None, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, None, None, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
conv2d_1 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, None, None, 3) 1572867
_________________________________________________________________
reshape_1 (Reshape) (None, None, 3) 0
=================================================================
Total params: 18,647,363
Trainable params: 3,932,675
Non-trainable params: 14,714,688
_________________________________________________________________
None
</code></pre>
<p>But when I train model it only predict most dominant class, my dataset is unbalanced:</p>
<pre class="lang-python prettyprint-override"><code>Pixel area per class ratio:
class1 : 62.93 %
class2 : 25.46 %
class3 : 11.61 %
</code></pre>
<p>So my questions are: is my model definition ok? How to deal with class inbalanced? maybe batch should be constructed in some special way?</p> | 2018-03-27 17:21:07.983000+00:00 | 2019-03-07 19:06:00.043000+00:00 | 2018-03-28 08:28:06.963000+00:00 | deep-learning|keras | ['https://keras.io/losses/', 'https://arxiv.org/pdf/1606.04797.pdf%E2%80%8B'] | 2 |
57,266,362 | <p>You can also use random erasing, so that the network learns from different regions of the face rather than focusing on the same specific parts for the entire training.</p>
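<p>For intuition, a minimal NumPy sketch of the idea (this is not the linked implementation; sizes and probabilities here are arbitrary assumptions), which could be plugged into <code>ImageDataGenerator</code> through its <code>preprocessing_function</code> argument:</p>
<pre><code>import numpy as np

def random_erase(img, p=0.5, max_frac=0.3):
    # img: HxWxC array; with probability p, overwrite a random rectangle with noise
    if np.random.rand() > p:
        return img
    h, w = img.shape[:2]
    eh = np.random.randint(1, max(2, int(h * max_frac)))
    ew = np.random.randint(1, max(2, int(w * max_frac)))
    y = np.random.randint(0, h - eh)
    x = np.random.randint(0, w - ew)
    img = img.copy()
    img[y:y + eh, x:x + ew] = np.random.uniform(img.min(), img.max(), (eh, ew) + img.shape[2:])
    return img
</code></pre>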
<p>Paper for random erasing can be found at <a href="https://arxiv.org/abs/1708.04896" rel="nofollow noreferrer">https://arxiv.org/abs/1708.04896</a>
and its implementation can be found at <a href="https://github.com/yu4u/cutout-random-erasing" rel="nofollow noreferrer">https://github.com/yu4u/cutout-random-erasing</a>.</p> | 2019-07-30 07:25:04.117000+00:00 | 2019-07-30 07:25:04.117000+00:00 | null | null | 44,959,311 | <p>I'm working on facial expression recognition using Keras, the dataset I'm using does not have a big amount of data available, So I'm going to use Keras's image preprocessing for data augmentation.</p>
<p>I want to know the best parameters of ImageDataGenerator to generate normal faces wich I can use to train my neural network with.</p>
<p>Here's the code I'm using for Data augmentation :</p>
<pre><code>def data_augmentation(subdir):
datagen = ImageDataGenerator(
featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False,
rotation_range=30,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True,
vertical_flip=False)
print ("\nData augmentation...")
print ("\nProcess...")
for file in glob.glob(subdir+"*/*.jpg"):
img = load_img(file)
print ("\nProcessing..." + str(file))
x = img_to_array(img)
x = x.reshape((1,) + x.shape)
i = 0
for batch in datagen.flow(x, batch_size=1, save_to_dir='data_aug', save_prefix='Fig', save_format='jpg'):
i += 1
if i > 20:
break
</code></pre>
<p>Here's all ImageDataGenerator's parameters</p>
<pre><code>keras.preprocessing.image.ImageDataGenerator(featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False,
zca_epsilon=1e-6,
rotation_range=0.,
width_shift_range=0.,
height_shift_range=0.,
shear_range=0.,
zoom_range=0.,
channel_shift_range=0.,
fill_mode='nearest',
cval=0.,
horizontal_flip=False,
vertical_flip=False,
rescale=None,
preprocessing_function=None,
data_format=K.image_data_format())
</code></pre>
<p>And here's an example of images generated using my code :</p>
<p><a href="https://i.stack.imgur.com/2Q8sF.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/2Q8sF.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/ijiqJ.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/ijiqJ.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/3d3kv.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/3d3kv.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/wo42s.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/wo42s.jpg" alt="enter image description here"></a></p>
<p>As you can see, the images are distorted and not good enough to train my network.</p>
<p>I want to know what's the best parameters of ImageDataGenerator for human faces or is there any better methods for data augmentation ?</p> | 2017-07-06 21:39:12.623000+00:00 | 2019-07-30 07:25:04.117000+00:00 | null | image-processing|keras | ['https://arxiv.org/abs/1708.04896', 'https://github.com/yu4u/cutout-random-erasing'] | 2 |
67,138,028 | <p>I want to mention few things here which may be useful to others:</p>
<p><strong>1) Data stratification / random sampling</strong></p>
<p>When you use <code>validation_split</code> Keras uses the last x percent of data as validation data. This means that if the data is ordered by class, e.g. because "pairs" or "triplets" are made in a sequence, validation data will only come from classes (or the class) contained in the last x percent of data. In this case, the validation set will be of no use. Thus <strong>it is essential to shuffle the input</strong> data to make sure that the validation set contains random samples from each class.</p>
<p>The docs for <code>validation_split</code> say:</p>
<blockquote>
<p>Float between 0 and 1. Fraction of the training data to be used as
validation data. The model will set apart this fraction of the
training data, will not train on it, and will evaluate the loss and
any model metrics on this data at the end of each epoch. The
validation data is selected from the <strong>last samples</strong> in the x and y data
provided, before shuffling</p>
</blockquote>
<p><strong>2) Choice of optimizer</strong></p>
<p>In <code>model.compile()</code> choosing <code>optimizer='sgd'</code> may not be the best approach since <code>sgd</code> can get stuck in local minima etc. <code>Adam</code> (<a href="https://keras.io/api/optimizers/adam/" rel="nofollow noreferrer">see docs</a>) seems to be a good choice to start with since it...</p>
<blockquote>
<p>[...] combines the advantages of [...] AdaGrad to deal with sparse
gradients, and the ability of RMSProp to deal with non-stationary
objectives.</p>
</blockquote>
<p>according to <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">Kingma and Ba</a> (2014, page 10).</p>
<pre><code>from keras.optimizers import Adam
...
model.compile(loss=contrastive_loss, optimizer=Adam(lr=0.0001))
</code></pre>
<p><strong>3) Early stopping / learning rate</strong></p>
<p>Using <a href="https://keras.io/api/callbacks/early_stopping/" rel="nofollow noreferrer">early stopping</a> and <a href="https://keras.io/api/callbacks/reduce_lr_on_plateau/" rel="nofollow noreferrer">adjusting the learning rate</a> during training may also be highly useful to achieve good results. So the model can train until there is no more success (stop automatically in this case).</p>
<pre><code>from keras.callbacks import EarlyStopping
from keras.callbacks import ReduceLROnPlateau
...
early_stopping = EarlyStopping(monitor='val_loss', patience=50, mode='auto', restore_best_weights=True)
reduce_on_plateau = ReduceLROnPlateau(monitor="val_loss", factor=0.8, patience=15, cooldown=5, verbose=0)
...
hist = model.fit([img_1, img_2], y,
validation_split=.2,
batch_size=128,
verbose=1,
epochs=9999,
callbacks=[early_stopping])
</code></pre>
<p><strong>4) Kernel initialization</strong></p>
<p>Kernel initialization (with a small SD) may be helpful as well.</p>
<pre><code># Layer 1
seq.add(Conv2D(8, (5,5), input_shape=input_shape,
kernel_initializer=keras.initializers.TruncatedNormal(mean=0.0, stddev=0.01, seed=None),
data_format="channels_first"))
seq.add(Activation('relu'))
seq.add(MaxPooling2D(pool_size=(2, 2)))
seq.add(Dropout(0.1))
</code></pre>
<p><strong>5) Overfitting</strong></p>
<p>I noticed that instead of using dropout to fight overfitting, adding some noise can be rather helpful. In this case simply add some <a href="https://keras.io/api/layers/regularization_layers/gaussian_noise/" rel="nofollow noreferrer">GaussianNoise</a> at the top of the network.</p> | 2021-04-17 12:11:56.780000+00:00 | 2021-04-17 12:11:56.780000+00:00 | null | null | 59,442,922 | <p>I am trying to train a Siamese neural network using Keras, with the goal of identifying if 2 images belong to same class or not. My data is shuffled and has equal number of positive examples and negative examples. My model is not learning anything and it is predicting the same output always. I am getting the same loss, validation accuracy, and validation loss every time. </p>
<p><a href="https://i.stack.imgur.com/dRfLF.png" rel="nofollow noreferrer">Training Output</a></p>
<pre class="lang-py prettyprint-override"><code>def convert(row):
return imread(row)
def contrastive_loss(y_true, y_pred):
margin = 1
square_pred = K.square(y_pred)
margin_square = K.square(K.maximum(margin - y_pred, 0))
return K.mean(y_true * square_pred + (1 - y_true) * margin_square)
def SiameseNetwork(input_shape):
top_input = Input(input_shape)
bottom_input = Input(input_shape)
# Network
model = Sequential()
model.add(Conv2D(96,(7,7),activation='relu',input_shape=input_shape))
model.add(MaxPooling2D())
model.add(Conv2D(256,(5,5),activation='relu',input_shape=input_shape))
model.add(MaxPooling2D())
model.add(Conv2D(256,(5,5),activation='relu',input_shape=input_shape))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(4096,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1024,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(512,activation='relu'))
model.add(Dropout(0.5))
encoded_top = model(top_input)
encoded_bottom = model(bottom_input)
L1_layer = Lambda(lambda tensors:K.abs(tensors[0] - tensors[1]))
L1_distance = L1_layer([encoded_top, encoded_bottom])
prediction = Dense(1,activation='sigmoid')(L1_distance)
siamesenet = Model(inputs=[top_input,bottom_input],outputs=prediction)
return siamesenet
data = pd.read_csv('shuffleddata.csv')
print('Converting X1....')
X1 = [convert(x) for x in data['X1']]
print('Converting X2....')
X2 = [convert(x) for x in data['X2']]
print('Converting Y.....')
Y = [0 if data['Y'][i] == 'Negative' else 1 for i in range(len(data['Y']))]
input_shape = (53,121,3,)
model = SiameseNetwork(input_shape)
model.compile(loss=contrastive_loss,optimizer='sgd',metrics=['accuracy'])
print(model.summary())
model.fit(X1,Y,batch_size=32,epochs=20,shuffle=True,validation_split = 0.2)
model.save('Siamese.h5')
</code></pre> | 2019-12-22 09:09:08.873000+00:00 | 2021-04-17 12:11:56.780000+00:00 | 2019-12-22 09:17:39.143000+00:00 | python|tensorflow|keras|neural-network | ['https://keras.io/api/optimizers/adam/', 'https://arxiv.org/abs/1412.6980', 'https://keras.io/api/callbacks/early_stopping/', 'https://keras.io/api/callbacks/reduce_lr_on_plateau/', 'https://keras.io/api/layers/regularization_layers/gaussian_noise/'] | 5 |
67,749,084 | <p>Edit: if the problem is restricted to using an MLP exclusively, I think you're looking for differentiable objectives for clustering. (The K-Means objective is not differentiable because of the centroid-finding part.) This is not a 'mainstream' approach to clustering, but there certainly seems to be some work on using deep networks to optimize (differentiable) clustering objectives:</p>
<ol>
<li><a href="https://arxiv.org/pdf/1910.09036.pdf" rel="nofollow noreferrer">Differentiable Deep Clustering with Cluster Size Constraints
</a>: <em>"we exploit the connection between optimal transport and k-means, and rely on entropic regularization to
derive a fully-differentiable clustering loss that can be
used in (P) and directly optimized with SGD"</em>. So you can apply SGD to an MLP, is an MLP the best architecture for using this loss? Depends on your data.</li>
</ol>
<p>Another approach I could think of using ANNs is <a href="http://www.pitt.edu/%7Eis2470pb/Spring05/FinalProjects/Group1a/tutorial/som.html#:%7E:text=Self%2DOrganizing%20Map,by%20grouping%20similar%20data%20together." rel="nofollow noreferrer">self-organizing maps (or Kohonen maps)</a>. It depends on how relaxed your definition of MLP is; you can certainly add a bunch of layers between the input layer and the output feature maps.</p>
<s>
You can potentially use an MLP to embed your data into a vector space, which you can then use to compute some metric during KMeans (e.g. Euclidean distance); this might or might not make sense, depending on how you compute the embeddings and on the dataset.
<p>You could do this with an Autoencoder in the absence of labels, though that is a bit more complex than a simple MLP:</p>
<p><a href="https://i.stack.imgur.com/3BsLQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3BsLQ.png" alt="enter image description here" /></a></p>
<p>This could be overkill though; it really depends on the problem. Consider doing KMeans on your data first (no MLP). If the problem is complicated enough, moving the data to a latent space could work; this is essentially what word2vec does, and people do clustering and all sorts of things with it (see <a href="https://ai.intelligentonlinetools.com/ml/k-means-clustering-example-word2vec/" rel="nofollow noreferrer">this</a>)
</s></p> | 2021-05-29 07:58:40.617000+00:00 | 2021-05-30 22:35:23.177000+00:00 | 2021-05-30 22:35:23.177000+00:00 | null | 67,748,978 | <p>How can I use a Multilayer Perceptron for clustering, like K-Means, on a non-labeled dataset?
I have the MNIST dataset with labels, but I want to perform a clustering algorithm with an MLP.
Any idea?</p> | 2021-05-29 07:45:35.723000+00:00 | 2021-05-30 22:35:23.177000+00:00 | null | python|machine-learning|cluster-analysis|k-means|mlp | ['https://arxiv.org/pdf/1910.09036.pdf', 'http://www.pitt.edu/%7Eis2470pb/Spring05/FinalProjects/Group1a/tutorial/som.html#:%7E:text=Self%2DOrganizing%20Map,by%20grouping%20similar%20data%20together.', 'https://i.stack.imgur.com/3BsLQ.png', 'https://ai.intelligentonlinetools.com/ml/k-means-clustering-example-word2vec/'] | 4 |
51,046,737 | <p>Too-large changes to the current policy are the main cause of instability in the A3C algorithm. There are methods to stabilize it, e.g. <a href="https://arxiv.org/abs/1502.05477" rel="nofollow noreferrer">TRPO</a> or <a href="https://arxiv.org/abs/1707.06347" rel="nofollow noreferrer">PPO</a>. I'd suggest looking at PPO - it is very easy to implement, although less effective.</p>
<p>In PPO, you would simply change the loss function (based on the blog's notation) to:</p>
<p><a href="https://i.stack.imgur.com/6Wetc.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Wetc.gif" alt="math formula"></a></p>
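<p>For concreteness, a minimal Keras-style sketch of this clipped objective (an illustration, not the blog's exact code; it assumes the advantage and the old policy's probabilities are fed to the model as extra input tensors):</p>
<pre><code># Hypothetical sketch of a PPO clipped surrogate loss for a Keras policy network.
import keras.backend as K

def ppo_loss(advantage, old_prob, epsilon=0.2):
    def loss(y_true, y_pred):
        # y_true: one-hot encoding of the action taken; y_pred: current policy probabilities
        prob = K.sum(y_true * y_pred, axis=-1)
        old = K.sum(y_true * old_prob, axis=-1) + 1e-10
        ratio = prob / old
        clipped = K.clip(ratio, 1.0 - epsilon, 1.0 + epsilon)
        return -K.mean(K.minimum(ratio * advantage, clipped * advantage))
    return loss
</code></pre>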
<p>Here, <code>e=0.2</code> is the recommended value. Good luck with your implementation!</p> | 2018-06-26 15:40:10.190000+00:00 | 2018-06-26 15:40:10.190000+00:00 | null | null | 50,964,051 | <p>I've built an A3C implementation in Keras using this as reference: <a href="https://jaromiru.com/2017/03/26/lets-make-an-a3c-implementation/" rel="nofollow noreferrer">https://jaromiru.com/2017/03/26/lets-make-an-a3c-implementation/</a>
And I'm using a custom environment, where an agent has a choice of purchasing some items, or selling or exchanging them, given their prices as the state. It is given positive rewards for good deals and negative rewards for bad deals. I have tested it on DQN in the past and it successfully converged, showing really good results. But when I use the same environment in A3C, the model just chooses the same action over and over. I tried changing some hyper-parameters, but no result. I also tried using a target model and updating it every n episodes, which resulted in better convergence with the gym CartPole environment, but still no effect on the performance of my model in my custom environment. I have found a few discussions on reddit about the same problem, but none of them were answered. </p> | 2018-06-21 08:39:20.157000+00:00 | 2019-07-10 13:07:26.823000+00:00 | 2019-07-10 13:07:26.823000+00:00 | python-3.x|tensorflow|keras|reinforcement-learning | ['https://arxiv.org/abs/1502.05477', 'https://arxiv.org/abs/1707.06347', 'https://i.stack.imgur.com/6Wetc.gif'] | 3
59,534,359 | <p>I think it depends on what you want to compare. If you just want to compare different models with regards to prediction power (classifier and regressor alike), nested cross validation is usually good in order to not report overly optimistic metrics: <a href="https://scikit-learn.org/stable/auto_examples/model_selection/plot_nested_cross_validation_iris.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/auto_examples/model_selection/plot_nested_cross_validation_iris.html</a> while allowing you to find the best set of hyperparameters.</p>
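<p>A minimal scikit-learn sketch of that nested scheme (illustrative only; the estimators, parameter grids, and data below are placeholder assumptions):</p>
<pre><code># Hypothetical sketch: nested cross-validation with the same outer folds for both regressors.
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

X, y = np.random.rand(200, 250), np.random.rand(200)        # placeholder data

outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)  # validation folds (shared by both models)
inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)  # hyperparameter-tuning folds

models = {
    "rf": GridSearchCV(RandomForestRegressor(random_state=0),
                       {"n_estimators": [100, 300]}, cv=inner_cv),
    "svr": GridSearchCV(SVR(), {"C": [0.1, 1, 10]}, cv=inner_cv),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=outer_cv, scoring="neg_mean_squared_error")
    print(name, scores.mean(), scores.std())
</code></pre>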
<p>However, sometimes it seems like it is just overkill: <a href="https://arxiv.org/abs/1809.09446" rel="nofollow noreferrer">https://arxiv.org/abs/1809.09446</a></p>
<p>Also, depending on how the ML algorithms behave, what datasets you are talking about, their characteristics, etc., your "comparison" might need to take into consideration a lot of other things rather than just prediction power. Maybe if you give some more details we will be able to help more.</p> | 2019-12-30 17:05:45.320000+00:00 | 2019-12-30 17:05:45.320000+00:00 | null | null | 59,534,265 | <p>I'm working on an assignment in which I have to compare two regressors (random forest and SVR) that I implement with <code>scikit-learn</code>. I want to evaluate both regressors, and while googling around a lot I came across nested cross-validation, where you use the inner loop to tune the hyperparameters and the outer loop to validate on the k folds of the training set. I would like to use the inner loop to tune both my regressors and the outer loop to validate both, so I will have the same test and train folds for both regressors.<br>
Is this a proper way to compare two ML algorithms with each other? Are there better ways to compare two algorithms with each other, especially regressors?</p>
<p>I found some entries in blogs but I could not find any scientific paper stating this is a good technique to compare two algorithms with each other, which would be important to me. If there are some links to current papers I would be glad if you could post them, too.
Thanks for the help in advance!</p>
<p><strong>EDIT</strong><br>
I have a very low amount of data (approx. <strong>200 samples</strong>) with a high number of features (<strong>approx. 250 after using feature selection, otherwise about 4500</strong>), so I decided to use cross-validation. My dependent variable is a continuous value from 0 to 1.
The problem is a recommender problem so it makes no sense to test for accuracy in this case. As this is only an assignment I can only measure the ml algorithms with statistical methods rather than asking users for their opinion or measure the purchases done by them. </p> | 2019-12-30 16:56:21.760000+00:00 | 2019-12-30 17:16:44.343000+00:00 | 2019-12-30 17:16:44.343000+00:00 | machine-learning|scikit-learn|regression | ['https://scikit-learn.org/stable/auto_examples/model_selection/plot_nested_cross_validation_iris.html', 'https://arxiv.org/abs/1809.09446'] | 2 |
72,957,961 | <p>Shapley values are very difficult to calculate exactly. Kernel SHAP and Deep SHAP are two different approximation methods to calculate the Shapley values efficiently, and so one shouldn't expect them to necessarily agree.</p>
<p>You can read the <a href="https://arxiv.org/pdf/1705.07874.pdf" rel="nofollow noreferrer">authors' paper</a> for more details.</p>
<blockquote>
<p>While Kernel SHAP can be used on any model, including deep models, it is natural to ask whether
there is a way to leverage extra knowledge about the compositional nature of deep networks to improve
computational performance. [...] This motivates our adapting DeepLIFT to become a compositional approximation
of SHAP values, leading to Deep SHAP.</p>
</blockquote>
<p>In section 5, they compare the performance of Kernel SHAP and Deep SHAP. From their example it seems like Kernel SHAP performs better than Deep SHAP. So I guess if you aren't running into computational issues, you can stick with Kernel SHAP.</p>
<p><a href="https://i.stack.imgur.com/6AppU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6AppU.png" alt="Figure 5B" /></a></p>
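<p>(A small sketch for illustration, reusing the question's own calls with stand-in data and a stand-in network — the point is simply that both explainers should receive the identical trained model object:)</p>
<pre><code># Hypothetical sketch: build both explainers from one and the same trained model.
import numpy as np
import torch
import torch.nn as nn
import shap

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))  # stand-in for the trained net
model.eval()

train_data = np.random.rand(100, 10).astype(np.float32)  # stand-ins for the real datasets
test_data = np.random.rand(20, 10).astype(np.float32)

f = lambda x: model(torch.from_numpy(x)).detach().numpy()
kernel_sv = shap.KernelExplainer(f, train_data).shap_values(test_data)
deep_sv = shap.DeepExplainer(model, torch.from_numpy(train_data)).shap_values(torch.from_numpy(test_data))
</code></pre>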
<p>p.s. Just to make sure, you're inputting the exact same <strong>trained</strong> model to SHAP right? You shouldn't be training separate models, because they'll learn different weights.</p> | 2022-07-12 20:20:02.163000+00:00 | 2022-07-12 20:20:02.163000+00:00 | null | null | 70,510,341 | <p>I haven't been able to find much in the way of examples on SHAP values with PyTorch. I've used two techniques to generate SHAP values, however, their results don't appear to agree with each other.</p>
<h1>SHAP KernelExplainer with PyTorch</h1>
<pre><code>import torch
from torch.autograd import Variable
import shap
import numpy
import pandas
torch.set_grad_enabled(False)
# Get features
train_features_df = ... # pandas dataframe
test_features_df = ... # pandas dataframe
# Define function to wrap model to transform data to tensor
f = lambda x: model_list[0]( Variable( torch.from_numpy(x) ) ).detach().numpy()
# Convert my pandas dataframe to numpy
data = test_features_df.to_numpy(dtype=np.float32)
# The explainer doesn't like tensors, hence the f function
explainer = shap.KernelExplainer(f, data)
# Get the shap values from my test data
shap_values = explainer.shap_values(data)
# Enable the plots in jupyter
shap.initjs()
feature_names = test_features_df.columns
# Plots
#shap.force_plot(explainer.expected_value, shap_values[0], feature_names)
#shap.dependence_plot("b1_price_avg", shap_values[0], data, feature_names)
shap.summary_plot(shap_values[0], data, feature_names)
</code></pre>
<p><a href="https://i.stack.imgur.com/BuhCX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BuhCX.png" alt="SHAP summary plot from KernelExplainer with PyTorch" /></a></p>
<h1>SHAP DeepExplainer with PyTorch</h1>
<pre><code># It wants gradients enabled, and uses the training set
torch.set_grad_enabled(True)
e = shap.DeepExplainer(model, Variable( torch.from_numpy( train_features_df.to_numpy(dtype=np.float32) ) ) )
# Get the shap values from my test data (this explainer likes tensors)
shap_values = e.shap_values( Variable( torch.from_numpy(data) ) )
# Plots
#shap.force_plot(explainer.expected_value, shap_values, feature_names)
#shap.dependence_plot("b1_price_avg", shap_values, data, feature_names)
shap.summary_plot(shap_values, data, feature_names)
</code></pre>
<p><a href="https://i.stack.imgur.com/lPNpa.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lPNpa.png" alt="enter image description here" /></a></p>
<h1>Comparing results</h1>
<p>As you can see from the summary plots, the value given to the features from the same PyTorch model, with the same test data, are noticeably different.</p>
<p>For example, the feature b1_addresses_avg is ranked one position from last with the KernelExplainer, but with the DeepExplainer it is ranked third from the top.</p>
<p>I'm not sure where to go from here.</p> | 2021-12-28 17:14:23.160000+00:00 | 2022-07-12 20:20:02.163000+00:00 | null | python|pytorch|shap | ['https://arxiv.org/pdf/1705.07874.pdf', 'https://i.stack.imgur.com/6AppU.png'] | 2 |
37,190,900 | <p>From your log, it seems that your model kept predicting the same label throughout training; in other words, your training diverged. I advise you to make the following checks.</p>
<ol>
<li>Check your labels when converting the train/validation lmdb data. Also, in your CNN architecture, the Dropout layer is better placed after the InnerProduct layer "fc6" instead of after the Pooling layer "pool5".</li>
<li>I don't know how you sampled your training data during training. In principle, if you just use a Softmax cost (multinomial cross-entropy loss), you should shuffle your training data when preparing your train/val lmdb data and set a properly large <strong>batch size</strong>, for example 256, during training (see the short sketch after this list). </li>
<li>Maybe your learning rate (base_lr) was too large; you may further reduce it from 0.001 to 0.0001. However, I noticed that the CASIA WebFace baseline (<a href="http://arxiv.org/abs/1411.7923" rel="nofollow">http://arxiv.org/abs/1411.7923</a>) used a 0.01 learning rate, and the input data scale, activation function, and the depth and width of your model are similar to that, so the problem is less likely caused by the learning rate (but you should check whether the weight initialization method matters much).</li>
<li>Try a smaller convolutional kernel size. Sometimes this may help due to reducing the information loss resulting from the alignment problem between convolution kernel and its corresponding input feature map.</li>
</ol>
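<p>Regarding point 2, a minimal sketch of shuffling the (image, label) list before converting it to LMDB (plain Python, added for illustration; the file names are placeholders):</p>
<pre><code># Hypothetical sketch: shuffle the training list before the LMDB conversion step.
import random

with open("train_list.txt") as f:            # each line: "<image_path> <label>"
    lines = f.readlines()

random.seed(0)
random.shuffle(lines)                        # avoid long runs of a single identity/class

with open("train_list_shuffled.txt", "w") as f:
    f.writelines(lines)
# then feed train_list_shuffled.txt to your LMDB conversion tool (e.g. convert_imageset)
</code></pre>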
<p>By the way, you are training a classification task over 10575 classes with only about 40 training samples per class, so to some extent the training data is insufficient. So, as in the baseline work, to enhance the model's ability to distinguish same and different samples, it's better to add a <strong>Contrastive cost</strong> besides the Softmax cost.</p>
<p>Reference
Sun Y, Chen Y, Wang X, et al. Deep learning face representation by joint identification-verification[C]//Advances in Neural Information Processing Systems. 2014: 1988-1996.</p> | 2016-05-12 15:07:56.503000+00:00 | 2016-09-21 15:54:26.817000+00:00 | 2016-09-21 15:54:26.817000+00:00 | null | 37,063,965 | <p>This is the result I get when I train my own model</p>
<pre><code>I0510 20:53:16.677439 3591 solver.cpp:337] Iteration 0, Testing net (#0)
I0510 20:57:20.822933 3591 solver.cpp:404] Test net output #0: accuracy = 3.78788e-05
I0510 20:57:20.823001 3591 solver.cpp:404] Test net output #1: loss = 9.27223 (* 1 = 9.27223 loss)
I0510 20:57:21.423084 3591 solver.cpp:228] Iteration 0, loss = 9.29181
I0510 20:57:21.423110 3591 solver.cpp:244] Train net output #0: loss = 9.29181 (* 1 = 9.29181 loss)
I0510 20:57:21.423120 3591 sgd_solver.cpp:106] Iteration 0, lr = 0.001
I0510 21:06:57.498831 3591 solver.cpp:337] Iteration 1000, Testing net (#0)
I0510 21:10:59.477396 3591 solver.cpp:404] Test net output #0: accuracy = 0.00186553
I0510 21:10:59.477463 3591 solver.cpp:404] Test net output #1: loss = 8.86572 (* 1 = 8.86572 loss)
I0510 21:20:35.828510 3591 solver.cpp:337] Iteration 2000, Testing net (#0)
I0510 21:24:42.838196 3591 solver.cpp:404] Test net output #0: accuracy = 0.00144886
I0510 21:24:42.838245 3591 solver.cpp:404] Test net output #1: loss = 8.83859 (* 1 = 8.83859 loss)
I0510 21:24:43.412120 3591 solver.cpp:228] Iteration 2000, loss = 8.81461
I0510 21:24:43.412145 3591 solver.cpp:244] Train net output #0: loss = 8.81461 (* 1 = 8.81461 loss)
I0510 21:24:43.412150 3591 sgd_solver.cpp:106] Iteration 2000, lr = 0.001
I0510 21:38:50.990823 3591 solver.cpp:337] Iteration 3000, Testing net (#0)
I0510 21:42:52.918418 3591 solver.cpp:404] Test net output #0: accuracy = 0.00140152
I0510 21:42:52.918493 3591 solver.cpp:404] Test net output #1: loss = 8.81789 (* 1 = 8.81789 loss)
I0510 22:00:09.519151 3591 solver.cpp:337] Iteration 4000, Testing net (#0)
I0510 22:09:13.918016 3591 solver.cpp:404] Test net output #0: accuracy = 0.00149621
I0510 22:09:13.918102 3591 solver.cpp:404] Test net output #1: loss = 8.80909 (* 1 = 8.80909 loss)
I0510 22:09:15.127683 3591 solver.cpp:228] Iteration 4000, loss = 8.8597
I0510 22:09:15.127722 3591 solver.cpp:244] Train net output #0: loss = 8.8597 (* 1 = 8.8597 loss)
I0510 22:09:15.127729 3591 sgd_solver.cpp:106] Iteration 4000, lr = 0.001
I0510 22:28:39.320019 3591 solver.cpp:337] Iteration 5000, Testing net (#0)
I0510 22:37:43.847064 3591 solver.cpp:404] Test net output #0: accuracy = 0.00118371
I0510 22:37:43.847173 3591 solver.cpp:404] Test net output #1: loss = 8.80527 (* 1 = 8.80527 loss)
I0510 23:58:17.120088 3591 solver.cpp:454] Snapshotting to binary proto file /home/wang/caffe-master/examples/NN2_iter_10000.caffemodel
I0510 23:58:17.238307 3591 sgd_solver.cpp:273] Snapshotting solver state to binary proto file /home/wang/caffe-master/examples/NN2_iter_10000.solverstate
I0510 23:58:17.491825 3591 solver.cpp:337] Iteration 10000, Testing net (#0)
I0511 00:02:19.412715 3591 solver.cpp:404] Test net output #0: accuracy = 0.00186553
I0511 00:02:19.412762 3591 solver.cpp:404] Test net output #1: loss = 8.79114 (* 1 = 8.79114 loss)
I0511 00:02:19.986547 3591 solver.cpp:228] Iteration 10000, loss = 8.83457
I0511 00:02:19.986570 3591 solver.cpp:244] Train net output #0: loss = 8.83457 (* 1 = 8.83457 loss)
I0511 00:02:19.986578 3591 sgd_solver.cpp:106] Iteration 10000, lr = 0.001
I0511 00:11:55.546052 3591 solver.cpp:337] Iteration 11000, Testing net (#0)
I0511 00:15:57.490486 3591 solver.cpp:404] Test net output #0: accuracy = 0.00164773
I0511 00:15:57.490532 3591 solver.cpp:404] Test net output #1: loss = 8.78702 (* 1 = 8.78702 loss)
I0511 00:25:33.666496 3591 solver.cpp:337] Iteration 12000, Testing net (#0)
I0511 00:29:35.603062 3591 solver.cpp:404] Test net output #0: accuracy = 0.0016572
I0511 00:29:35.603109 3591 solver.cpp:404] Test net output #1: loss = 8.7848 (* 1 = 8.7848 loss)
I0511 00:29:36.177078 3591 solver.cpp:228] Iteration 12000, loss = 9.00561
I0511 00:29:36.177105 3591 solver.cpp:244] Train net output #0: loss = 9.00561 (* 1 = 9.00561 loss)
I0511 00:29:36.177114 3591 sgd_solver.cpp:106] Iteration 12000, lr = 0.001
I0511 00:39:11.729369 3591 solver.cpp:337] Iteration 13000, Testing net (#0)
I0511 00:43:13.678067 3591 solver.cpp:404] Test net output #0: accuracy = 0.001875
I0511 00:43:13.678113 3591 solver.cpp:404] Test net output #1: loss = 8.78359 (* 1 = 8.78359 loss)
I0511 00:52:49.851985 3591 solver.cpp:337] Iteration 14000, Testing net (#0)
I0511 00:56:51.767343 3591 solver.cpp:404] Test net output #0: accuracy = 0.00154356
I0511 00:56:51.767390 3591 solver.cpp:404] Test net output #1: loss = 8.77998 (* 1 = 8.77998 loss)
I0511 00:56:52.341564 3591 solver.cpp:228] Iteration 14000, loss = 8.83385
I0511 00:56:52.341591 3591 solver.cpp:244] Train net output #0: loss = 8.83385 (* 1 = 8.83385 loss)
I0511 00:56:52.341598 3591 sgd_solver.cpp:106] Iteration 14000, lr = 0.001
I0511 02:14:38.224290 3591 solver.cpp:454] Snapshotting to binary proto file /home/wang/caffe-master/examples/NN2_iter_20000.caffemodel
I0511 02:14:38.735008 3591 sgd_solver.cpp:273] Snapshotting solver state to binary proto file /home/wang/caffe-master/examples/NN2_iter_20000.solverstate
I0511 02:14:38.805809 3591 solver.cpp:337] Iteration 20000, Testing net (#0)
I0511 02:18:40.681993 3591 solver.cpp:404] Test net output #0: accuracy = 0.00179924
I0511 02:18:40.682086 3591 solver.cpp:404] Test net output #1: loss = 8.78129 (* 1 = 8.78129 loss)
I0511 02:18:41.255969 3591 solver.cpp:228] Iteration 20000, loss = 8.82502
I0511 02:18:41.255995 3591 solver.cpp:244] Train net output #0: loss = 8.82502 (* 1 = 8.82502 loss)
I0511 02:18:41.256001 3591 sgd_solver.cpp:106] Iteration 20000, lr = 0.001
I0511 04:30:58.924096 3591 solver.cpp:454] Snapshotting to binary proto file /home/wang/caffe-master/examples/NN2_iter_30000.caffemodel
I0511 04:31:00.742739 3591 sgd_solver.cpp:273] Snapshotting solver state to binary proto file /home/wang/caffe-master/examples/NN2_iter_30000.solverstate
I0511 04:31:01.151980 3591 solver.cpp:337] Iteration 30000, Testing net (#0)
I0511 04:35:03.075263 3591 solver.cpp:404] Test net output #0: accuracy = 0.00186553
I0511 04:35:03.075307 3591 solver.cpp:404] Test net output #1: loss = 8.77867 (* 1 = 8.77867 loss)
I0511 04:35:03.649479 3591 solver.cpp:228] Iteration 30000, loss = 8.82915
I0511 04:35:03.649507 3591 solver.cpp:244] Train net output #0: loss = 8.82915 (* 1 = 8.82915 loss)
I0511 04:35:03.649513 3591 sgd_solver.cpp:106] Iteration 30000, lr = 0.001
I0511 07:55:36.848265 3591 solver.cpp:337] Iteration 45000, Testing net (#0)
I0511 07:59:38.834043 3591 solver.cpp:404] Test net output #0: accuracy = 0.00179924
I0511 07:59:38.834095 3591 solver.cpp:404] Test net output #1: loss = 8.77432 (* 1 = 8.77432 loss)
I0511 09:03:48.141854 3591 solver.cpp:454] Snapshotting to binary proto file /home/wang/caffe-master/examples/NN2_iter_50000.caffemodel
I0511 09:03:49.736464 3591 sgd_solver.cpp:273] Snapshotting solver state to binary proto file /home/wang/caffe-master/examples/NN2_iter_50000.solverstate
I0511 09:03:49.797582 3591 solver.cpp:337] Iteration 50000, Testing net (#0)
I0511 09:07:51.777150 3591 solver.cpp:404] Test net output #0: accuracy = 0.001875
I0511 09:07:51.777207 3591 solver.cpp:404] Test net output #1: loss = 8.77058 (* 1 = 8.77058 loss)
I0511 09:07:52.351323 3591 solver.cpp:228] Iteration 50000, loss = 9.11435
I0511 09:07:52.351351 3591 solver.cpp:244] Train net output #0: loss = 9.11435 (* 1 = 9.11435 loss)
I0511 09:07:52.351357 3591 sgd_solver.cpp:106] Iteration 50000, lr = 0.001
I0511 09:17:28.188742 3591 solver.cpp:337] Iteration 51000, Testing net (#0)
I0511 09:21:30.200623 3591 solver.cpp:404] Test net output #0: accuracy = 0.00186553
I0511 09:21:30.200716 3591 solver.cpp:404] Test net output #1: loss = 8.77026 (* 1 = 8.77026 loss)
I0511 09:31:06.596501 3591 solver.cpp:337] Iteration 52000, Testing net (#0)
I0511 09:35:08.580215 3591 solver.cpp:404] Test net output #0: accuracy = 0.00182765
I0511 09:35:08.580313 3591 solver.cpp:404] Test net output #1: loss = 8.76917 (* 1 = 8.76917 loss)
I0511 09:35:09.154428 3591 solver.cpp:228] Iteration 52000, loss = 8.89758
I0511 09:35:09.154453 3591 solver.cpp:244] Train net output #0: loss = 8.89758 (* 1 = 8.89758 loss)
I0511 09:35:09.154459 3591 sgd_solver.cpp:106] Iteration 52000, lr = 0.001
I0511 09:44:44.906309 3591 solver.cpp:337] Iteration 53000, Testing net (#0)
I0511 09:48:46.866353 3591 solver.cpp:404] Test net output #0: accuracy = 0.00185606
I0511 09:48:46.866430 3591 solver.cpp:404] Test net output #1: loss = 8.7708 (* 1 = 8.7708 loss)
I0511 09:58:23.097244 3591 solver.cpp:337] Iteration 54000, Testing net (#0)
I0511 10:02:25.056555 3591 solver.cpp:404] Test net output #0: accuracy = 0.00192235
I0511 10:02:25.056605 3591 solver.cpp:404] Test net output #1: loss = 8.76884 (* 1 = 8.76884 loss)
I0511 10:02:25.630312 3591 solver.cpp:228] Iteration 54000, loss = 8.90552
I0511 10:02:25.630337 3591 solver.cpp:244] Train net output #0: loss = 8.90552 (* 1 = 8.90552 loss)
I0511 10:02:25.630342 3591 sgd_solver.cpp:106] Iteration 54000, lr = 0.001
I0511 14:44:51.563555 3591 solver.cpp:337] Iteration 75000, Testing net (#0)
I0511 14:48:53.573640 3591 solver.cpp:404] Test net output #0: accuracy = 0.0016572
I0511 14:48:53.573724 3591 solver.cpp:404] Test net output #1: loss = 8.76967 (* 1 = 8.76967 loss)
I0511 14:58:30.080453 3591 solver.cpp:337] Iteration 76000, Testing net (#0)
I0511 15:02:32.076011 3591 solver.cpp:404] Test net output #0: accuracy = 0.001875
I0511 15:02:32.076077 3591 solver.cpp:404] Test net output #1: loss = 8.7695 (* 1 = 8.7695 loss)
I0511 15:02:32.650342 3591 solver.cpp:228] Iteration 76000, loss = 9.0084
I0511 15:02:32.650367 3591 solver.cpp:244] Train net output #0: loss = 9.0084 (* 1 = 9.0084 loss)
I0511 15:02:32.650373 3591 sgd_solver.cpp:106] Iteration 76000, lr = 0.001
I0511 15:12:08.597450 3591 solver.cpp:337] Iteration 77000, Testing net (#0)
I0511 15:16:10.636613 3591 solver.cpp:404] Test net output #0: accuracy = 0.00181818
I0511 15:16:10.636693 3591 solver.cpp:404] Test net output #1: loss = 8.76889 (* 1 = 8.76889 loss)
I0511 15:25:47.167667 3591 solver.cpp:337] Iteration 78000, Testing net (#0)
I0511 15:29:49.204596 3591 solver.cpp:404] Test net output #0: accuracy = 0.00185606
I0511 15:29:49.204649 3591 solver.cpp:404] Test net output #1: loss = 8.77059 (* 1 = 8.77059 loss)
I0511 15:29:49.779094 3591 solver.cpp:228] Iteration 78000, loss = 8.73139
I0511 15:29:49.779119 3591 solver.cpp:244] Train net output #0: loss = 8.73139 (* 1 = 8.73139 loss)
I0511 15:29:49.779124 3591 sgd_solver.cpp:106] Iteration 78000, lr = 0.001
I0511 15:39:25.730358 3591 solver.cpp:337] Iteration 79000, Testing net (#0)
I0511 15:43:27.756417 3591 solver.cpp:404] Test net output #0: accuracy = 0.00192235
I0511 15:43:27.756485 3591 solver.cpp:404] Test net output #1: loss = 8.76846 (* 1 = 8.76846 loss)
I0511 15:53:04.419961 3591 solver.cpp:454] Snapshotting to binary proto file /home/wang/caffe-master/examples/NN2_iter_80000.caffemodel
I0511 15:53:06.138357 3591 sgd_solver.cpp:273] Snapshotting solver state to binary proto file /home/wang/caffe-master/examples/NN2_iter_80000.solverstate
I0511 15:53:06.519551 3591 solver.cpp:337] Iteration 80000, Testing net (#0)
I0511 15:57:08.719681 3591 solver.cpp:404] Test net output #0: accuracy = 0.00164773
I0511 15:57:08.719737 3591 solver.cpp:404] Test net output #1: loss = 8.77126 (* 1 = 8.77126 loss)
I0511 15:57:09.294163 3591 solver.cpp:228] Iteration 80000, loss = 8.56576
I0511 15:57:09.294188 3591 solver.cpp:244] Train net output #0: loss = 8.56576 (* 1 = 8.56576 loss)
I0511 15:57:09.294193 3591 sgd_solver.cpp:106] Iteration 80000, lr = 0.001
I0511 17:01:19.190099 3591 solver.cpp:337] Iteration 85000, Testing net (#0)
I0511 17:05:21.148668 3591 solver.cpp:404] Test net output #0: accuracy = 0.00185606
I0511 17:05:21.148733 3591 solver.cpp:404] Test net output #1: loss = 8.77196 (* 1 = 8.77196 loss)
I0511 17:14:57.670343 3591 solver.cpp:337] Iteration 86000, Testing net (#0)
I0511 17:18:59.659850 3591 solver.cpp:404] Test net output #0: accuracy = 0.00181818
I0511 17:18:59.659907 3591 solver.cpp:404] Test net output #1: loss = 8.77126 (* 1 = 8.77126 loss)
I0511 17:19:00.234335 3591 solver.cpp:228] Iteration 86000, loss = 8.72875
I0511 17:19:00.234359 3591 solver.cpp:244] Train net output #0: loss = 8.72875 (* 1 = 8.72875 loss)
I0511 17:19:00.234364 3591 sgd_solver.cpp:106] Iteration 86000, lr = 0.001
I0511 17:28:36.196920 3591 solver.cpp:337] Iteration 87000, Testing net (#0)
I0511 17:32:38.181174 3591 solver.cpp:404] Test net output #0: accuracy = 0.00181818
I0511 17:32:38.181231 3591 solver.cpp:404] Test net output #1: loss = 8.771 (* 1 = 8.771 loss)
I0511 17:42:14.658293 3591 solver.cpp:337] Iteration 88000, Testing net (#0)
I0511 17:46:16.614358 3591 solver.cpp:404] Test net output #0: accuracy = 0.00188447
I0511 17:46:16.614415 3591 solver.cpp:404] Test net output #1: loss = 8.76964 (* 1 = 8.76964 loss)
I0511 17:46:17.188212 3591 solver.cpp:228] Iteration 88000, loss = 8.80409
I0511 17:46:17.188233 3591 solver.cpp:244] Train net output #0: loss = 8.80409 (* 1 = 8.80409 loss)
I0511 17:46:17.188240 3591 sgd_solver.cpp:106] Iteration 88000, lr = 0.001
I0511 17:55:53.358322 3591 solver.cpp:337] Iteration 89000, Testing net (#0)
I0511 17:59:55.305763 3591 solver.cpp:404] Test net output #0: accuracy = 0.00186553
I0511 17:59:55.305868 3591 solver.cpp:404] Test net output #1: loss = 8.76909 (* 1 = 8.76909 loss)
I0511 18:09:31.658655 3591 solver.cpp:454] Snapshotting to binary proto file /home/wang/caffe-master/examples/NN2_iter_90000.caffemodel
I0511 18:09:33.138741 3591 sgd_solver.cpp:273] Snapshotting solver state to binary proto file /home/wang/caffe-master/examples/NN2_iter_90000.solverstate
I0511 18:09:33.691995 3591 solver.cpp:337] Iteration 90000, Testing net (#0)
I0511 18:13:35.626065 3591 solver.cpp:404] Test net output #0: accuracy = 0.00168561
I0511 18:13:35.626148 3591 solver.cpp:404] Test net output #1: loss = 8.76973 (* 1 = 8.76973 loss)
I0511 18:13:36.200448 3591 solver.cpp:228] Iteration 90000, loss = 8.97326
I0511 18:13:36.200469 3591 solver.cpp:244] Train net output #0: loss = 8.97326 (* 1 = 8.97326 loss)
I0511 18:13:36.200474 3591 sgd_solver.cpp:106] Iteration 90000, lr = 0.001
I0511 19:31:23.715662 3591 solver.cpp:337] Iteration 96000, Testing net (#0)
I0511 19:35:25.677780 3591 solver.cpp:404] Test net output #0: accuracy = 0.00188447
I0511 19:35:25.677836 3591 solver.cpp:404] Test net output #1: loss = 8.7695 (* 1 = 8.7695 loss)
I0511 19:35:26.251850 3591 solver.cpp:228] Iteration 96000, loss = 8.74232
I0511 19:35:26.251875 3591 solver.cpp:244] Train net output #0: loss = 8.74232 (* 1 = 8.74232 loss)
I0511 19:35:26.251880 3591 sgd_solver.cpp:106] Iteration 96000, lr = 0.001
I0511 19:45:02.057610 3591 solver.cpp:337] Iteration 97000, Testing net (#0)
I0511 19:49:04.029269 3591 solver.cpp:404] Test net output #0: accuracy = 0.00188447
I0511 19:49:04.029357 3591 solver.cpp:404] Test net output #1: loss = 8.77655 (* 1 = 8.77655 loss)
I0511 19:58:40.265120 3591 solver.cpp:337] Iteration 98000, Testing net (#0)
I0511 20:02:42.182787 3591 solver.cpp:404] Test net output #0: accuracy = 0.00183712
I0511 20:02:42.182859 3591 solver.cpp:404] Test net output #1: loss = 8.77069 (* 1 = 8.77069 loss)
I0511 20:02:42.756922 3591 solver.cpp:228] Iteration 98000, loss = 8.61745
I0511 20:02:42.756944 3591 solver.cpp:244] Train net output #0: loss = 8.61745 (* 1 = 8.61745 loss)
</code></pre>
<p>Due to the character limit, I had to delete some rows of the log. However, it doesn't matter.
As you can see, there is no difference between "<em>Iteration 98000</em>" and "<em>Iteration 0</em>". I am really puzzled by this situation.</p>
<p>This is the architecture of my model</p>
<pre><code>name: "NN2"
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
mirror: true
  mean_file: "/home/jiayi-wei/caffe/examples/NN2/image_train_mean.binaryproto"
  }
data_param {
source: "/home/jiayi-wei/caffe/examples/NN2/img_train_lmdb"
batch_size: 30
backend: LMDB
}
}
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
mirror: false
  mean_file: "/home/jiayi-wei/caffe/examples/NN2/image_train_mean.binaryproto"
  }
data_param {
source: "/home/jiayi-wei/caffe/examples/NN2/img_val_lmdb"
batch_size: 11
backend: LMDB
}
}
#first layers
layer {
name: "conv11"
type: "Convolution"
bottom: "data"
top: "conv11"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu11"
type: "ReLU"
bottom: "conv11"
top: "conv11"
}
layer {
name: "conv12"
type: "Convolution"
bottom: "conv11"
top: "conv12"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu12"
type: "ReLU"
bottom: "conv12"
top: "conv12"
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv12"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
#second layers
layer {
name: "conv21"
type: "Convolution"
bottom: "pool1"
top: "conv21"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu21"
type: "ReLU"
bottom: "conv21"
top: "conv21"
}
layer {
name: "conv22"
type: "Convolution"
bottom: "conv21"
top: "conv22"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu22"
type: "ReLU"
bottom: "conv22"
top: "conv22"
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv22"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
#third layers
layer {
name: "conv31"
type: "Convolution"
bottom: "pool2"
top: "conv31"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad:1
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu31"
type: "ReLU"
bottom: "conv31"
top: "conv31"
}
layer {
name: "conv32"
type: "Convolution"
bottom: "conv31"
top: "conv32"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad:1
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu32"
type: "ReLU"
bottom: "conv32"
top: "conv32"
}
layer {
name: "pool3"
type: "Pooling"
bottom: "conv32"
top: "pool3"
pooling_param {
pool: MAX
pad:1
kernel_size: 2
stride: 2
}
}
#fourth layer
layer {
name: "conv41"
type: "Convolution"
bottom: "pool3"
top: "conv41"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad:1
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu41"
type: "ReLU"
bottom: "conv41"
top: "conv41"
}
layer {
name: "conv42"
type: "Convolution"
bottom: "conv41"
top: "conv42"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad:1
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu42"
type: "ReLU"
bottom: "conv42"
top: "conv42"
}
layer {
name: "conv43"
type: "Convolution"
bottom: "conv42"
top: "conv43"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad:1
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu43"
type: "ReLU"
bottom: "conv43"
top: "conv43"
}
layer {
name: "pool4"
type: "Pooling"
bottom: "conv43"
top: "pool4"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
#fifth layer
layer {
name: "conv51"
type: "Convolution"
bottom: "pool4"
top: "conv51"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad:1
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu51"
type: "ReLU"
bottom: "conv51"
top: "conv51"
}
layer {
name: "conv52"
type: "Convolution"
bottom: "conv51"
top: "conv52"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad:1
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu52"
type: "ReLU"
bottom: "conv52"
top: "conv52"
}
layer {
name: "conv53"
type: "Convolution"
bottom: "conv52"
top: "conv53"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad:1
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "pool5"
type: "Pooling"
bottom: "conv53"
top: "pool5"
pooling_param {
pool: AVE
pad:1
kernel_size: 2
stride: 2
}
}
#drop_Fc
layer {
name: "dropout"
type: "Dropout"
bottom: "pool5"
top: "pool5"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "fc6"
type: "InnerProduct"
bottom: "pool5"
top: "fc6"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output:1000
weight_filler {
type: "gaussian"
std: 0.005
}
bias_filler {
type: "constant"
value: 1
}
}
}
layer {
name: "fc7"
type: "InnerProduct"
bottom: "fc6"
top: "fc7"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output:10575
weight_filler {
type: "gaussian"
std: 0.005
}
bias_filler {
type: "constant"
value: 1
}
}
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "fc7"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "SoftMax"
type: "SoftmaxWithLoss"
bottom: "fc7"
bottom: "label"
top: "SoftMax"
}
</code></pre>
<p>The following is my solver. I have changed base_lr to <em>"0.001"</em>.</p>
<pre><code>net: "train_val.prototxt"
test_iter: 10000
test_interval: 1000
base_lr: 0.001
lr_policy: "step"
gamma: 0.1
stepsize: 100000
display: 20
max_iter: 450000
momentum: 0.9
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "/home/jiayi-wei/caffe/examples/NN2"
solver_mode: GPU
</code></pre>
<p>I have tried to change some parameters, and I have already tried to remove one "conv" layer from the block that has three "conv" layers. However, the result always stays the same, as shown above.</p>
<p>Please tell me how I can figure out the problem. Thanks!</p> | 2016-05-06 03:56:47.720000+00:00 | 2017-08-23 01:39:55.040000+00:00 | 2016-05-11 12:49:23.040000+00:00 | machine-learning|neural-network|deep-learning|caffe | ['http://arxiv.org/abs/1411.7923'] | 1
62,711,304 | <p>The most immediate way of doing this that occurs to me is to set up a datatype for your modal language <code>datatype fml = ...</code>, and a satisfaction predicate <code>sat :: ('w ⇒ 'w ⇒ bool) ⇒ 'w ⇒ fml ⇒ bool</code>, and embed the logic in Isabelle. This is (essentially) the approach taken by Xu and Norrish (<a href="https://tqft.net/web/research/students/YimingXu/thesis.pdf" rel="nofollow noreferrer">Xu's Honour's Thesis</a>; <a href="https://doi.org/10.1007/978-3-030-51074-9_30" rel="nofollow noreferrer">IJCAR Paper</a>).
The downside to this approach is that it's a deep embedding, which means you would have to deal with variable binding and substitution yourself.</p>
<p>You could also declare that worlds and an accessibility relation exist, using <code>typedecl</code> and <code>consts</code>, and define 'lifted connectives' as functions which take a world, and return a boolean, e.g. <code>TT ≡ λw. True</code> and <code>◊φ ≡ λw. ∃v. (w R v) ∧ φ v</code> (where <code>R</code> is defined by <code>consts</code>). This is the approach taken by Christoph Benzmüller in this <a href="https://export.arxiv.org/abs/2001.04701" rel="nofollow noreferrer">preprint on Gödel's Ontological Argument</a>.</p> | 2020-07-03 08:18:39.447000+00:00 | 2020-07-03 08:18:39.447000+00:00 | null | null | 62,709,073 | <p>I am trying to formalize business rules in Isabelle/HOL, but business rules involve the change of the values of some variables, like <code>if x=5 => x=6</code>. <code>=></code> is not the usual implication there, because x is 5 in one world (interpretation, semantic domain) and after the execution of the rule the x is 6 and it is already a different world (different interpretation, assignment function, semantic domain). So - technically <code>is x=5 => x=6</code> is not the formula in First Order Logic and it is not also the formula of Higher Order Logic of Isabelle/HOL. More possible is that it is the formula in dynamic/action logic <code>[rule1](x=5)=(x=6)</code> (that defines that upon application of <code>rule1</code> to the formula in some world yields to the truthful formula in some other world (dynamic/action logic is essentially a kind of modal logic and Kripke/possible world semantics can be applied here).</p>
<p>Now, back to Isabelle/HOL or Coq: an FOL/HOL theory is a set of formulas in all the worlds or in one particular world. If we stay with FOL/HOL then there is no change of the world, no change of the values of the variables. We should go to modal logics if we would like to model and reason about such changes. But can it be done in HOL? This change of world?</p>
<p><a href="http://matryoshka.gforge.inria.fr/pubs/fernandez_burgos_bsc_thesis.pdf" rel="nofollow noreferrer">http://matryoshka.gforge.inria.fr/pubs/fernandez_burgos_bsc_thesis.pdf</a> is good work about the formalization of sorting algorithms in Isabelle/HOL; functional programming is used there, so there is no need for variables and hence no such change between worlds. But if we try to model Reinforcement Learning, Markov processes, imperative/object-oriented programming, or business rules in Isabelle/HOL, then we should express this change. How to do that?</p>
<p>How to express business rule <code>if x=5 => x=6</code> in Isabelle/HOL?</p>
<p>The same approach can be used in Coq, that is why answers from the Coq community are welcome as well.</p> | 2020-07-03 05:32:31.200000+00:00 | 2020-07-03 08:18:39.447000+00:00 | null | logic|computer-science|coq|symbolic-math|isabelle | ['https://tqft.net/web/research/students/YimingXu/thesis.pdf', 'https://doi.org/10.1007/978-3-030-51074-9_30', 'https://export.arxiv.org/abs/2001.04701'] | 3 |
19,266,345 | <p>This paper gives a list of major ad libraries, which may be helpful in setting up your DB: <a href="http://arxiv.org/pdf/1303.0857.pdf" rel="nofollow">http://arxiv.org/pdf/1303.0857.pdf</a></p> | 2013-10-09 08:05:51.593000+00:00 | 2013-10-09 08:05:51.593000+00:00 | null | null | 18,544,883 | <p>I have downloaded the source codes of open source android apps. I have around 2000 of them. I wish to do an analysis on ad libraries used by android apps. I have 2 questions,</p>
<ol>
<li>How can I find whether an app uses an ad library?</li>
<li>If it does, how can I find the name of the ad library (e.g. AdMob, InMobi, etc.)?</li>
</ol> | 2013-08-31 05:34:56.590000+00:00 | 2014-06-13 10:37:38.760000+00:00 | null | android|static-analysis|android-library|android-lint|advertisement-server | ['http://arxiv.org/pdf/1303.0857.pdf'] | 1 |
73,820,535 | <p>Neo optimizes inference using compilation, which is different from, and often orthogonal to, compression.</p>
<ul>
<li><p><strong>compilation</strong> makes inference faster and lighter by specializing the prediction application, notably: (1) changing the environment in which the model runs, in particular replacing training frameworks with the minimal set of necessary math libraries, (2) optimizing the model graph to be prediction-only and fusing operators that can be grouped together, (3) specializing the runtime to make the best use of the specific hardware and instructions available on a given target machine. Compilation is not supposed to change the model math, and therefore doesn't change its footprint on disk.</p>
</li>
<li><p><strong>compression</strong> makes inference faster by removing model weights or making them smaller (quantization). Weights can be removed by pruning (dropping weights that do not influence results much) or by distillation (training a small model to mimic a big model).</p>
</li>
</ul>
<p>At the time of this writing, SageMaker Neo is a managed compilation service. That being said, compilation and compression can be combined, and you can prune or distill your network before feeding it to Neo.</p>
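<p>As an illustration of the compression side (added here as a sketch, not part of Neo itself): a crude magnitude-pruning pass in PyTorch that zeroes the smallest weights before you export and compile the model. The 30% ratio and the stand-in model are arbitrary assumptions:</p>
<pre><code># Hypothetical sketch: zero out the smallest-magnitude weights before export/compilation.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))  # stand-in model
prune_ratio = 0.3

with torch.no_grad():
    for p in model.parameters():
        if p.dim() > 1:  # prune weight matrices, leave biases alone
            k = max(1, int(prune_ratio * p.numel()))
            threshold = p.abs().flatten().kthvalue(k).values
            p.mul_((p.abs() > threshold).float())
</code></pre>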
<p>SageMaker Neo covers a large grid of hardware targets and model architectures, and consequently leverages numerous backends and optimizations. Neo internals are publicly documented in many places:</p>
<ul>
<li><p>According to <a href="https://aws.amazon.com/blogs/machine-learning/unlock-performance-gains-with-xgboost-amazon-sagemaker-neo-and-serverless-artillery/" rel="nofollow noreferrer">this blog</a>, Neo uses <a href="https://treelite.readthedocs.io/en/latest/" rel="nofollow noreferrer"><strong>Treelite</strong></a> for tree models optimization (<a href="https://mlsys.org/Conferences/doc/2018/196.pdf" rel="nofollow noreferrer"><em>Treelite: toolbox for decision tree deployment</em></a>, Cho et Li)</p>
</li>
<li><p>According to its <a href="https://aws.amazon.com/sagemaker/neo/" rel="nofollow noreferrer">landing page</a>, Neo uses <a href="https://tvm.apache.org/" rel="nofollow noreferrer"><strong>Apache TVM</strong></a> too. TVM is the leading open-source compiler, developed by <a href="https://tqchen.com/" rel="nofollow noreferrer">Tianqi Chen</a> and the <a href="https://github.com/dmlc" rel="nofollow noreferrer">DMLC</a> community (that also co-authored <a href="https://arxiv.org/abs/1603.02754" rel="nofollow noreferrer">XGBoost</a> and <a href="https://arxiv.org/abs/1512.01274" rel="nofollow noreferrer">MXNet</a>). TVM tricks are abundantly documented in <a href="https://arxiv.org/pdf/1802.04799.pdf" rel="nofollow noreferrer"><em>TVM: An Automated End-to-End Optimizing Compiler for Deep Learning</em></a> (Chen et al)</p>
</li>
<li><p>According to <a href="https://aws.amazon.com/blogs/machine-learning/amazon-sagemaker-neo-makes-it-easier-to-get-faster-inference-for-more-ml-models-with-nvidia-tensorrt/" rel="nofollow noreferrer">this blog</a>, Neo also sometimes leverages <a href="https://developer.nvidia.com/tensorrt" rel="nofollow noreferrer"><strong>NVIDIA TensorRT</strong></a>, the official inference optimization stack from NVIDIA</p>
</li>
<li><p>Neo also uses a number of Amazon-developed optimization:</p>
<ul>
<li><a href="https://arxiv.org/pdf/1907.02154.pdf" rel="nofollow noreferrer"><em>A Unified Optimization Approach for CNN Model Inference on Integrated GPUs</em></a> (Wang et al): <em>"Our work is already deployed
in Amazon SageMaker Neo Service"</em></li>
<li><a href="https://www.usenix.org/system/files/atc19-liu-yizhi.pdf" rel="nofollow noreferrer"><em>Optimizing CNN Model Inference on CPUs</em></a> (Liu et al): <em>"NeoCPU is used in Amazon SageMaker Neo Service"</em></li>
</ul>
</li>
</ul> | 2022-09-22 20:50:01.720000+00:00 | 2022-09-22 20:50:01.720000+00:00 | null | null | 73,788,252 | <p>Does SageMaker Neo (SageMaker compilation job) use any techniques for model optimization? Are there any compression techniques used (distillation, quantization etc) to reduce the model size?</p>
<p>I found some description here (<a href="https://docs.aws.amazon.com/sagemaker/latest/dg/neo.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/sagemaker/latest/dg/neo.html</a>) regarding quantization but it's not clear how it could be used.</p>
<p>Thanks very much for any insight.</p> | 2022-09-20 14:20:15.587000+00:00 | 2022-09-22 20:50:01.720000+00:00 | null | amazon-web-services|amazon-sagemaker|amazon-machine-learning | ['https://aws.amazon.com/blogs/machine-learning/unlock-performance-gains-with-xgboost-amazon-sagemaker-neo-and-serverless-artillery/', 'https://treelite.readthedocs.io/en/latest/', 'https://mlsys.org/Conferences/doc/2018/196.pdf', 'https://aws.amazon.com/sagemaker/neo/', 'https://tvm.apache.org/', 'https://tqchen.com/', 'https://github.com/dmlc', 'https://arxiv.org/abs/1603.02754', 'https://arxiv.org/abs/1512.01274', 'https://arxiv.org/pdf/1802.04799.pdf', 'https://aws.amazon.com/blogs/machine-learning/amazon-sagemaker-neo-makes-it-easier-to-get-faster-inference-for-more-ml-models-with-nvidia-tensorrt/', 'https://developer.nvidia.com/tensorrt', 'https://arxiv.org/pdf/1907.02154.pdf', 'https://www.usenix.org/system/files/atc19-liu-yizhi.pdf'] | 14 |
68,975,633 | <p><code>workers=-1</code> is not a valid parameter value. If there's an example suggesting that negative-count somewhere, it's a bad example. If you got the impression that would work from something in Gensim's official docs, please report that documentation as a bug to be fixed.</p>
<p>More generally: enabling logging at the <code>INFO</code> level will show a lot more detail about what's happening, and something like "misguided parameter that prevents any training from happening" may become more obvious when using such logging.</p>
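<p>A minimal sketch of both points (illustrative only; the generated placeholder corpus and the worker count of 4 are assumptions):</p>
<pre><code># Hypothetical sketch: enable INFO logging and use a valid, positive workers count.
import logging
from gensim.models import Word2Vec

logging.basicConfig(format="%(asctime)s : %(levelname)s : %(message)s", level=logging.INFO)

sentences = [["query_foo", "sku_%d" % (i % 10), "brand_a"] for i in range(100)]  # placeholder corpus

model = Word2Vec(
    sentences=sentences,
    vector_size=100,
    window=1000,
    min_count=5,     # the default; consider raising rather than lowering it
    workers=4,       # must be a positive integer
    epochs=10,
)
</code></pre>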
<p>Separately:</p>
<ul>
<li><p>The <code>Word2Vec</code> loss-tracking of Gensim has a lot of open issues (including the failure to tally by epoch, which your <code>Callback</code> tries to correct). I'd suggest not futzing with loss-display unless/until you've already achieved some success without it.</p>
</li>
<li><p>Such a low <code>min_count=2</code> is usually a bad idea with the word2vec algorithm, at least in normal natural-language settings. Words with so few occurrences lack the variety of contrasting usage examples to achieve a generalizable word-vector, or, individually, to influence the model much compared to the far-more-numerous other words. But such rare words are, altogether, quite numerous - essentially serving as 'noise' worsening other words. Discarding more such rare words often improves the remaining words, and overall model, noticeably. So, if you have enough raw training data to make word2vec worth applying, it should be more common to <em>increase</em> this cutoff higher than the default <code>min_count=5</code> than reduce it.</p>
</li>
<li><p>For recommendation-like systems being fed by pseudotexts that aren't exactly natural-language-like, it may be especially worthwhile to experiment with the <code>ns_exponent</code> parameter. As per <a href="https://arxiv.org/abs/1804.04212" rel="nofollow noreferrer">the research paper</a> linked in <a href="https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec" rel="nofollow noreferrer">the class docs</a>, the original <code>ns_exponent=0.75</code> value, which was an unchangeable constant in early word2vec implementations, might not be ideal for other applications like recommender systems.</p>
</li>
</ul> | 2021-08-29 18:01:04.453000+00:00 | 2021-08-29 18:01:04.453000+00:00 | null | null | 68,971,791 | <p>I have a dataset of millions of arrays like the following:</p>
<pre><code> sentences=[
[
'query_foo bar',
'split_query_foo',
'split_query_bar',
'sku_qwre',
'brand_A B C',
'split_brand_A',
'split_brand_B',
'split_brand_C',
'color_black',
'category_C1',
'product_group_clothing',
'silhouette_t_shirt_top',
],
[...]
]
</code></pre>
<p>where you find a query, a sku that was acquired by the user doing the query and a few attributes of the SKU. My idea was to do a very basic model based on word2vec where I could find similar things together.</p>
<p>In a simple way, if I search for <code>t-shirt</code> on the model I would expect to have t-shirt SKUs near the query.</p>
<p>I try to use gensim (I'm new to this library) with different attributes to build a model:</p>
<pre><code>from gensim.models.callbacks import CallbackAny2Vec
class callback(CallbackAny2Vec):
'''Callback to print loss after each epoch.'''
def __init__(self):
self.epoch = 0
self.loss_to_be_subed = 0
def on_epoch_end(self, model):
loss = model.get_latest_training_loss()
loss_now = loss - self.loss_to_be_subed
self.loss_to_be_subed = loss
print('Loss after epoch {}: {}'.format(self.epoch, loss_now))
self.epoch += 1
model = Word2Vec(
sentences=sentences,
vector_size=100,
window=1000,
min_count=2,
workers=-1,
epochs=10,
# negative=5,
compute_loss=True,
callbacks=[callback()]
)
</code></pre>
<p>I got this output:</p>
<pre><code>Loss after epoch 0: 0.0
Loss after epoch 1: 0.0
Loss after epoch 2: 0.0
Loss after epoch 3: 0.0
Loss after epoch 4: 0.0
Loss after epoch 5: 0.0
Loss after epoch 6: 0.0
Loss after epoch 7: 0.0
Loss after epoch 8: 0.0
Loss after epoch 9: 0.0
</code></pre>
<p>All losses of 0!!!
I start to get very suspicious at this point.</p>
<p>Note: Each element of <code>sentences</code> is independent; I hope the library doesn't try to mix terms across different arrays.</p>
<p>To test the model, I tried a very frequent query like <code>model.wv.most_similar('query_t-shirt', topn=100)</code> and the results are completely absurd.</p>
<p>Is my idea crazy, or am I using the library incorrectly?</p> | 2021-08-29 09:58:03.240000+00:00 | 2021-08-29 18:01:04.453000+00:00 | null | python|nlp|gensim|information-retrieval | ['https://arxiv.org/abs/1804.04212', 'https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec'] | 2
49,011,667 | <p>The square filter is often selected just because there's no preference for the direction in which a pattern can be found. For example, a pattern can be a horizontal or a vertical line; both can be important features in an image, and the network should be able to capture either of them. In other words, you might want your network to be <em>symmetric</em>.</p>
<p>Asymmetric filters became much more popular in recent years, after they were successfully used in the <a href="https://arxiv.org/abs/1409.4842" rel="nofollow noreferrer">Inception network</a>. The idea is that an <code>n x n</code> filter has the same receptive field as a sequence of <code>1 x n</code> and <code>n x 1</code> convolutions (the <em>effective receptive field</em>, see the <a href="http://cs231n.github.io/convolutional-networks/#layerpat" rel="nofollow noreferrer">CS231n tutorial</a> for details), but the latter requires fewer floating-point operations and stores fewer parameters. The architecture is still symmetric in both directions (vertical patterns can be discovered as easily as horizontal ones), but the trick makes it more efficient.</p>
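<p>A small Keras sketch of that factorization (added for illustration; shapes and filter counts are arbitrary): a single <code>3 x 3</code> convolution versus the <code>1 x 3</code> followed by <code>3 x 1</code> pair with the same effective receptive field but fewer parameters:</p>
<pre><code># Hypothetical sketch: n x n convolution vs. its 1 x n + n x 1 factorization.
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(32, 32, 64))

square = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(inputs)

x = layers.Conv2D(64, (1, 3), padding="same", activation="relu")(inputs)
factorized = layers.Conv2D(64, (3, 1), padding="same", activation="relu")(x)

keras.Model(inputs, [square, factorized]).summary()  # compare the parameter counts
</code></pre>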
<p>Here's a picture of inception module from <a href="https://arxiv.org/abs/1512.00567" rel="nofollow noreferrer">Inception v2</a>:</p>
<p><a href="https://i.stack.imgur.com/1fnBw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1fnBw.jpg" alt="inception-module"></a></p>
<p>In smaller applications, where this kind of optimization isn't critical, there's no big reason to go with such complex architectures and simply use <code>n x n</code> filters.</p> | 2018-02-27 14:58:34.137000+00:00 | 2018-02-27 16:32:52.340000+00:00 | 2018-02-27 16:32:52.340000+00:00 | null | 49,003,346 | <p>Given a filter of shape <code>(f1, f2, depth)</code>, what's the implications if <code>f1 != f2</code>?</p> | 2018-02-27 07:29:54.673000+00:00 | 2018-02-27 19:36:42.953000+00:00 | 2018-02-27 19:36:42.953000+00:00 | tensorflow|machine-learning|deep-learning|conv-neural-network|convolution | ['https://arxiv.org/abs/1409.4842', 'http://cs231n.github.io/convolutional-networks/#layerpat', 'https://arxiv.org/abs/1512.00567', 'https://i.stack.imgur.com/1fnBw.jpg'] | 4 |
72,623,068 | <p>There is nothing "obvious" about skip connections; they are something that, as a community, we learned the hard way. The basic premise is that, in the neural-network parametrisation of feed-forward layers, it is surprisingly hard to learn the identity function. Skip connections make this special function (<code>f(x)=x</code>) extremely easy to learn, which improves learning stability and overall performance in a wide range of applications, at pretty much no extra computational cost. You are essentially giving the network an easy way of not using the convoluted, complex part of the computation when it does not need to, and thus allowing us to use complex and big architectures without an in-depth understanding of the dynamics of the problem (which are beyond our current understanding of the math!).</p>
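<p>A minimal sketch of the idea (illustrative; the dense block and sizes are arbitrary assumptions): the block only has to learn a residual <code>F(x)</code>, and driving <code>F(x)</code> to zero recovers the identity for free:</p>
<pre><code># Hypothetical sketch: a residual block where the skip path carries the identity.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.body(x)  # output = identity + learned residual

x = torch.randn(8, 64)
print(ResidualBlock(64)(x).shape)  # torch.Size([8, 64])
</code></pre>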
<p>You can look at older papers like <a href="https://arxiv.org/pdf/1505.00387.pdf" rel="nofollow noreferrer">highway networks</a>, which show how this allows training very deep models that otherwise would be too ill-conditioned to train.</p> | 2022-06-14 20:34:27.053000+00:00 | 2022-06-14 20:34:27.053000+00:00 | null | null | 72,620,563 | <p>I've recently been learning about self-attention transformers and the "Attention is All You Need" paper. When describing the architecture of the neural network used in the paper, one breakdown of the paper included this explanation for residual connections:</p>
<p>"Residual layer connections are used (of course) in both encoder and decoder blocks"
(origin: <a href="https://www.kaggle.com/code/residentmario/transformer-architecture-self-attention/notebook" rel="nofollow noreferrer">https://www.kaggle.com/code/residentmario/transformer-architecture-self-attention/notebook</a>)</p>
<p>This was, unfortunately, not obvious to me. What is the purpose of residual connections, and why should this be standard practice?</p> | 2022-06-14 16:39:03.913000+00:00 | 2022-06-14 20:34:27.053000+00:00 | null | machine-learning|neural-network|transformer-model|self-attention | ['https://arxiv.org/pdf/1505.00387.pdf'] | 1 |
46,803,973 | <p>Quoting <a href="https://arxiv.org/pdf/1506.01497.pdf" rel="nofollow noreferrer">faster-RCNN paper</a>:</p>
<blockquote>
<p>An important property of our approach is that it is
translation invariant, both in terms of the anchors and the
functions that compute proposals relative to the anchors. If
one translates an object in an image, the proposal should
translate and the same function should be able to predict the
proposal in either location. This translation-invariant property
is guaranteed by our method*</p>
<p>*As is the case of FCNs [7], our network is translation invariant up to the network’s total stride</p>
</blockquote>
<p>So the short answer is that you'll probably be OK if the object is mostly at a certain location in the train set and somewhere else in the test set. </p>
<p>A slightly longer answer is that the location may have side effects that affect the accuracy, and it will probably be better to have the object in different locations; however, you can try to add - for testing purposes - N test samples to the train set and see what the accuracy change is on the remaining test-set samples.</p> | 2017-10-18 06:25:08.117000+00:00 | 2017-10-18 06:25:08.117000+00:00 | null | null | 46,794,578 | <p>Has anyone tried the effect of the location per class in Faster R-CNN?</p>
<p>In case my train data has one of the object classes always in one area of the frame, let's say the top right of the image, and in the evaluation dataset I have an image where this object is in another area, say the bottom left:
is Faster R-CNN capable of handling this case? </p>
<p>Or, if I want my network to find all of the classes in all of the frame areas, do I need to provide examples in the train dataset that cover all the areas?</p> | 2017-10-17 16:09:18.097000+00:00 | 2017-10-18 06:25:08.117000+00:00 | 2017-10-18 02:29:43.440000+00:00 | tensorflow|caffe|detection|object-detection | ['https://arxiv.org/pdf/1506.01497.pdf'] | 1
46,390,284 | <p>You got M samples and N features, with <code>M=950 , N=5000</code>.</p>
<p>The takeaway here is: <strong>But when p>n, the lasso criterion is not strictly convex, and hence it may not have a unique minimum.</strong> <a href="https://arxiv.org/abs/1206.0313" rel="nofollow noreferrer">reference</a>.</p>
<p>This complicates optimization a bit (keep in mind: it's not the simplest of all problems, being non-smooth by nature!) and most solvers will be tuned for the other cases. </p>
<p>In your case there is a clear warning and a recommendation: increase the number of iterations! And make sure your <em>alphas</em> are not too small. I'm not sure how you initialized the latter, but if those <code>1e-15</code> magnitudes are hand-made, re-think your problem formulation!</p>
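<p>As a hedged sketch of that recommendation (<code>X</code> and <code>y</code> are placeholders for your feature matrix and target, and the <code>alpha</code> value is purely illustrative), "more iterations, less extreme alpha" looks roughly like this with scikit-learn:</p>
<pre><code>from sklearn.linear_model import Lasso

# X: (950, 5000) feature matrix, y: (950,) target -- placeholders here
lasso = Lasso(alpha=1e-2,        # avoid alphas like 1e-15; they are numerically meaningless
              max_iter=500000,   # give the coordinate-descent solver room to converge
              tol=1e-6)
lasso.fit(X, y)
print(lasso.n_iter_)             # check how many iterations were actually used
</code></pre>
<p>If the convergence warning disappears and the coefficients stop depending on the starting point, the earlier differences were an artifact of stopping a not-yet-converged solver.</p>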
<p>The warning is reason enough not to take those solutions as optimized ones (so: <em>my lasso has different solutions for different inits</em> is technically not correct; only your approximated solution behaves like that).</p> | 2017-09-24 12:58:13.550000+00:00 | 2017-09-24 13:04:54.377000+00:00 | 2017-09-24 13:04:54.377000+00:00 | null | 46,390,193 | <p>I am trying to use lasso optimization on data with 950 samples and about 5000 features. The lasso function is $(1 / (2 * numberofsamples)) * ||y - Xw||^2_2 + alpha * ||w||_1$. Once I try the minimization with an initialization, I get a totally different w, which is odd because lasso is convex and the initialization should not affect the result. Here is the result of the lasso with and without initialization. tol is the tolerance. If the change of w falls below the tolerance, convergence has happened.</p>
<pre><code>tol=0.00000001
##### lasso model errors #####
gene: 5478 matrix error: 0.069611732213
with initialization: alpha: 1e-20 promotion: -3.58847815733e-13
coef: [-0.00214732 -0.00509795 0.00272167 -0.00651548 -0.00164646 -0.00115342
0.00553346 0.01047653 0.00139832]
without initialization: alpha: 1e-20 promotion: -19.0735249749
coef: [-0.03650629 0.08992003 -0.01287155 0.03203973 0.1567577 -0.03708655
-0.13710957 -0.01252736 -0.21710334]
with initialization: alpha: 1e-15 promotion: 1.06179081478e-10
coef: [-0.00214732 -0.00509795 0.00272167 -0.00651548 -0.00164646 -0.00115342
0.00553346 0.01047653 0.00139832]
without initialization: alpha: 1e-15 promotion: -19.0735249463
coef: [-0.03650629 0.08992003 -0.01287155 0.03203973 0.1567577 -0.03708655
-0.13710957 -0.01252736 -0.21710334]
Warning (from warnings module):
File "/usr/local/lib/python2.7/site-packages/sklearn/linear_model/coordinate_descent.py", line 491
ConvergenceWarning)
ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
with initialization: alpha: 1e-10 promotion: 0.775144987537
coef: [-0.00185139 -0.0048819 0.00218349 -0.00622618 -0.00145647 -0.00115857
0.0055919 0.01072924 0.00043773]
without initialization: alpha: 1e-10 promotion: -17.8649603301
coef: [-0.03581581 0.0892119 -0.01232829 0.03151441 0.15606195 -0.03734093
-0.13604286 -0.01247732 -0.21233529]
with initialization: alpha: 1e-08 promotion: -5.87121366314
coef: [-0. 0. -0. -0.01064477 0. -0.00116167
-0. 0.01114746 0. ]
without initialization: alpha: 1e-08 promotion: 4.05593555389
coef: [ 0. 0.04505117 0.00668611 0. 0.07731668 -0.03537848
-0.03151995 0. -0.00310122]
max promote:
4.05593555389
</code></pre>
<p>For the implementation, I used the lasso function of the python package sklearn.linear_model. I also changed the data, but the results on the new data vary with the initialization too. I think this is odd, but I could not analyze it and find the explanation.</p>
<p>Here is the part of my code which is related to the lasso. My data is gene expression data. I tested the code on both normalized and un-normalized data; on both of them the initial point made a difference.</p>
<pre><code> alpha_lasso = [1e-20,1e-15, 1e-10, 1e-8, 1e-7,1e-6,1e-5,1e-4, 1e-3,1e-2, 1, 5 ,20]
lassoreg = Lasso(alpha=alpha_lasso[i],warm_start=True,tol=0.00000001,max_iter=100000)
lassoreg.coef_ = mybeta[:,j-c]
lassoreg.fit(train[:,predictors],train[:,y])
y_train_pred = lassoreg.predict(A)#train[:,predictors])
y_test_pred = lassoreg.predict(C)#test[:,predictors])
</code></pre>
<p>Here also is my whole code:</p>
<pre><code>import pandas as pd
import random
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import os
from GEOparse.GEOTypes import (GSE, GSM, GPL, GDS,
GDSSubset, GEODatabase,
DataIncompatibilityException,
NoMetadataException,
)
import GEOparse as GEO
import numpy as np
import copy
import sys
import math
from sklearn import linear_model
from sklearn.linear_model import Lasso
from sklearn.linear_model import LassoLars
from sklearn.linear_model import MultiTaskLassoCV
from sklearn.linear_model import coordinate_descent
from sklearn.linear_model import lasso_path, enet_path
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
from copy import deepcopy
miss_percent = 0.1
alpha_lasso = [1e-20,1e-15, 1e-10, 1e-8, 1e-7,1e-6,1e-5,1e-4, 1e-3,1e-2, 1, 5 ,20]
mins=[]
maxs=[]
mean_err=[]
alphas=[]
mins1=[]
maxs1=[]
mean_err1=[]
alphas1=[]
#mnist = input_data.read_data_sets('../../MNIST_data', one_hot=True)
def getdata(percent):
gsd = GEO.get_GEO(geo="GDS4971")
ngsd = gsd.table.replace('null', np.NaN)
ngsd = ngsd.dropna(axis=0, how='any')
ngsd =ngsd.transpose()
dataarray = ngsd.values
data = np.delete(dataarray, [0,1], 0)
x = data.astype(np.float)
r_df = x.shape[0]
c_df = x.shape[1]
r = int(r_df-math.sqrt((1-percent)*r_df))
c = int(c_df-math.sqrt((1-percent)*c_df))
train = x[0:r,:]
test = x[r:r_df,:]
return x,train,test,r_df,c_df,r,c
genedata,train,test,r_df,c_df,r,c = getdata(miss_percent)
predictors = range(0,c)
promotion =[[0.001 for x in range(len(alpha_lasso))] for y in range(c_df-c)]
promotion = np.asmatrix(promotion)
#error of ax-b
error_aw_b = [[0.001 for x in range(len(alpha_lasso))] for y in range(c_df-c)]
error_aw_b = np.asmatrix(error_aw_b)
#error of cw-x
error_cw_x = [[0.001 for x in range(len(alpha_lasso))] for y in range(c_df-c)]
error_cw_x = np.asmatrix(error_cw_x)
#error of lasso function
error_lasso = [[0.001 for x in range(len(alpha_lasso))] for y in range(c_df-c)]
error_lasso = np.asmatrix(error_lasso)
promotion1 =[[0.001 for x in range(len(alpha_lasso))] for y in range(c_df-c)]
promotion1 = np.asmatrix(promotion)
#error of ax-b
error_aw_b1 = [[0.001 for x in range(len(alpha_lasso))] for y in range(c_df-c)]
error_aw_b1 = np.asmatrix(error_aw_b)
#error of cw-x
error_cw_x1 = [[0.001 for x in range(len(alpha_lasso))] for y in range(c_df-c)]
error_cw_x1 = np.asmatrix(error_cw_x)
#error of lasso function
error_lasso1 = [[0.001 for x in range(len(alpha_lasso))] for y in range(c_df-c)]
error_lasso1 = np.asmatrix(error_lasso)
mybeta = #any initialization
###################### LASSO #####################
print("##### lasso model errors #####")
for j in range(c,c+1):
mean_err=[]
print("\n")
y=j
eachMeanError= math.sqrt((np.power(errorC[:,j-c],2)).sum()/(r_df-r))
print("gene: "+str(j)+ " matrix error: "+ str(eachMeanError))
for i in range(0,4):#len(alpha_lasso)):
lassoreg = Lasso(alpha=alpha_lasso[i],warm_start=True,tol=0.00000001,max_iter=100000)
lassoreg.coef_ = mybeta[:,j-c]
lassoreg.fit(train[:,predictors],train[:,y])
y_train_pred = lassoreg.predict(A)#train[:,predictors])
y_test_pred = lassoreg.predict(C)#test[:,predictors])
y_lasso_func = (1/(2*r))*sum(y_train_pred)+sum(abs(lassoreg.coef_))
################## RMS ##################
error_aw_b[j-c,i] = math.sqrt(sum((y_train_pred-train[:,y])**2)/r)
error_lasso[j-c,i] = y_lasso_func
error_cw_x[j-c,i] = math.sqrt(sum((y_test_pred-test[:,y])**2)/(r_df-r))
mins.extend([(error_cw_x.min())])
maxs.extend([(error_cw_x.max())])
promotion[j-c,i] = (((eachMeanError-error_cw_x[j-c,i])/eachMeanError)*100)
print("alpha: "+str(alpha_lasso[i])+ " error_aw_b: "+str(error_aw_b[j-c,i]) + " error_cw_x: " + str(error_cw_x[j-c,i])+" error_lasso: "+str(error_lasso[j-c,i]) + " promotion: " + str(promotion[j-c,i]) )
print("coef: " + str(lassoreg.coef_[1:10]))
lassoreg1 = Lasso(alpha=alpha_lasso[i],tol=0.00000001,max_iter=100000)
lassoreg1.fit(train[:,predictors],train[:,y])
y_train_pred1 = lassoreg1.predict(A)#train[:,predictors])
y_test_pred1 = lassoreg1.predict(C)#test[:,predictors])
y_lasso_func1 = (1/(2*r))*sum(y_train_pred1)+sum(abs(lassoreg1.coef_))
################## RMS ##################
error_aw_b1[j-c,i] = math.sqrt(sum((y_train_pred1-train[:,y])**2)/r)
error_lasso1[j-c,i] = y_lasso_func1
error_cw_x1[j-c,i] = math.sqrt(sum((y_test_pred1-test[:,y])**2)/(r_df-r))
mins1.extend([(error_cw_x1.min())])
maxs1.extend([(error_cw_x1.max())])
promotion1[j-c,i] = (((eachMeanError-error_cw_x1[j-c,i])/eachMeanError)*100)
print("alpha: "+str(alpha_lasso[i])+ " error_aw_b: "+str(error_aw_b1[j-c,i]) + " error_cw_x: " + str(error_cw_x1[j-c,i])+" error_lasso: "+str(error_lasso1[j-c,i]) + " promotion: " + str(promotion1[j-c,i]) )
print("coef: " + str(lassoreg1.coef_[1:10]))
print("\n")
print("max promote:")
print((promotion[j-c,:].max()))
f = open('analyse_col', 'wb')
np.save(f, [promotion,alphas,error_cw_x,mins,maxs])
f.close()
plt.plot(promotion[:,j-c])
plt.ylabel('coef for ')
plt.xlabel('each gene')
plt.show()
</code></pre> | 2017-09-24 12:47:58.993000+00:00 | 2017-09-24 17:13:50.913000+00:00 | 2017-09-24 17:13:50.913000+00:00 | python|optimization|scikit-learn|linear-regression|lasso-regression | ['https://arxiv.org/abs/1206.0313'] | 1 |
33,924,955 | <p>NaN as the only value <code>x</code> with the property <code>x!=x</code> is an IEEE 754 guarantee. Whether it is a faithful test to recognize <code>NaN</code> in C boils down to how closely the representation of variables and the operations are mapped to IEEE 754 formats and operations in the compiler(s) you intend to use.</p>
<p>You should in particular worry about “excess precision” and the way compilers deal with it. Excess precision is what happens when the FPU only conveniently supports computations in a wider format than the compiler would like to use for <code>float</code> and <code>double</code> types. In this case computations can be made at the wider precision, and rounded to the type's precision when the compiler feels like it in an unpredictable way.</p>
<p>The C99 standard defined a way to handle this excess precision that preserved the property that only NaN was different from itself, but for a long time after 1999 (and even nowadays when the compiler's authors do not care), in presence of excess precision, <code>x != x</code> could possibly be true for any variable <code>x</code> that contains the finite result of a computation, if the compiler chooses to round the excess-precision result of the computation in-between the evaluation of the first <code>x</code> and the second <code>x</code>.</p>
<p>This <a href="http://arxiv.org/abs/cs/0701192" rel="noreferrer">report</a> describes the dark times of compilers that made no effort to implement C99 (either because it wasn't 1999 yet or because they didn't care enough).</p>
<p>This <a href="https://gcc.gnu.org/ml/gcc-patches/2008-11/msg00105.html" rel="noreferrer">2008 post</a> describes how GCC started to implement the C99 standard for excess precision in 2008. Before that, GCC could provide one with all the surprises described in the aforementioned report.</p>
<p>Of course, if the target platform does not implement IEEE 754 at all, a NaN value may not even exist, or exist and have different properties than specified by IEEE 754. The common cases are a compiler that implements IEEE 754 quite faithfully with FLT_EVAL_METHOD set to 0, 1 or 2 (all of which guarantee that <code>x != x</code> iff <code>x</code> is NaN), or a compiler with a non-standard implementation of excess precision, where <code>x != x</code> is not a reliable test for NaN.</p> | 2015-11-25 19:30:48.297000+00:00 | 2015-11-25 19:46:25.077000+00:00 | 2015-11-25 19:46:25.077000+00:00 | null | 33,924,866 | <p>In C you can test to see if a double if NaN using <code>isnan(x)</code>. However many places online, including for example this <a href="https://stackoverflow.com/a/1923933/2179021">SO answer</a> say that you can simply use <code>x!=x</code> instead.</p>
<p>Is <code>x!=x</code> in any C specification as a method that is guaranteed to test if x is NaN? I can't find it myself and I would like my code to work with different compilers.</p> | 2015-11-25 19:25:41.543000+00:00 | 2015-11-25 19:51:05.057000+00:00 | 2017-05-23 12:25:17.597000+00:00 | c|floating-point|nan | ['http://arxiv.org/abs/cs/0701192', 'https://gcc.gnu.org/ml/gcc-patches/2008-11/msg00105.html'] | 2 |
43,432,146 | <p>ROI (region of interest) layer is introduced in <a href="https://arxiv.org/pdf/1504.08083.pdf" rel="noreferrer">Fast R-CNN</a> and is a special case of spatial pyramid pooling layer which is introduced in <a href="https://arxiv.org/pdf/1406.4729.pdf" rel="noreferrer">Spatial Pyramid Pooling in Deep Convolutional
Networks for Visual Recognition</a>. The main function of the ROI layer is to reshape inputs of arbitrary size into a fixed-length output, because of the size constraint in Fully Connected layers. </p>
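<p>As an illustrative sketch (not taken from the papers; the 2x2 output grid is an arbitrary choice), fixed-size pooling over a variable-sized region can be written in plain NumPy by splitting the region into a fixed grid of bins and taking the max of each bin:</p>
<pre><code>import numpy as np

def roi_max_pool(feature_map, out_h=2, out_w=2):
    """Max-pool an (H, W) region of arbitrary size into a fixed (out_h, out_w) grid.
    Assumes the region is at least out_h x out_w."""
    h, w = feature_map.shape
    rows = np.array_split(np.arange(h), out_h)   # split rows into out_h bins
    cols = np.array_split(np.arange(w), out_w)   # split cols into out_w bins
    out = np.empty((out_h, out_w))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            out[i, j] = feature_map[np.ix_(r, c)].max()
    return out

print(roi_max_pool(np.random.rand(7, 5)).shape)    # (2, 2)
print(roi_max_pool(np.random.rand(11, 9)).shape)   # (2, 2) -- same output size
</code></pre>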
<p>How the ROI layer works is shown below:</p>
<p><a href="https://i.stack.imgur.com/4uCN1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4uCN1.png" alt="enter image description here"></a></p>
<p>In this image, the input image with arbitrary size is fed into this layer, which has 3 different windows: 4x4 (blue), 2x2 (green), 1x1 (gray), to produce outputs with fixed sizes of 16 x F, 4 x F, and 1 x F, respectively, where F is the number of filters. Then, those outputs are concatenated into a vector to be fed to the Fully Connected layer. </p> | 2017-04-15 23:11:39.290000+00:00 | 2018-09-28 10:04:31.783000+00:00 | 2018-09-28 10:04:31.783000+00:00 | null | 43,430,056 | <p>In <a href="https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/object_localization_and_detection.html" rel="noreferrer">this</a> tutorial about object detection, the fast R-CNN is mentioned. The ROI (region of interest) layer is also mentioned.</p>
<p>What is happening, mathematically, when region proposals get resized according to final convolution layer activation functions (in each cell)?</p> | 2017-04-15 18:58:41.750000+00:00 | 2018-09-28 10:04:31.783000+00:00 | 2018-09-28 09:57:49.863000+00:00 | deep-learning|computer-vision|conv-neural-network|object-detection | ['https://arxiv.org/pdf/1504.08083.pdf', 'https://arxiv.org/pdf/1406.4729.pdf', 'https://i.stack.imgur.com/4uCN1.png'] | 3 |
55,607,008 | <p>It is a common claim that automatic differentiation and symbolic differentiation are different. However, this is not true. Forward mode automatic differentiation and symbolic differentiation are in fact equivalent. Please see this <a href="https://arxiv.org/abs/1904.02990" rel="noreferrer">paper</a>.</p>
<p>In short, they both apply the chain rule from the input variables to the output variables of an expression graph. It is often said that symbolic differentiation operates on mathematical expressions and automatic differentiation on computer programs. In the end, both are actually represented as expression graphs.</p>
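<p>To see the "chain rule on an expression graph" concretely, here is a small hand-rolled sketch (illustrative only, not from the paper) of forward-mode automatic differentiation with dual numbers:</p>
<pre><code>class Dual:
    """Carries a value and its derivative; arithmetic applies the chain rule."""
    def __init__(self, val, dot):
        self.val, self.dot = val, dot
    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)
    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def f(x):
    return x * x + x           # f(x) = x^2 + x

x = Dual(3.0, 1.0)             # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)            # 12.0 7.0  -> f(3) = 12, f'(3) = 2*3 + 1 = 7
</code></pre>
<p>Evaluating the same graph symbolically would produce the expression <code>2*x + 1</code>; the dual-number evaluation produces its value at a point -- the same chain-rule propagation, just carried out numerically.</p>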
<p>On the other hand, automatic differentiation also provides more modes. For instance, when applying the chain rule from output variables to input variables then this is called reverse mode automatic differentiation.</p> | 2019-04-10 07:28:25.113000+00:00 | 2019-04-10 07:28:25.113000+00:00 | null | null | 43,455,320 | <p>I just cannot seem to understand the difference. For me it looks like both just go through an expression and apply the chain rule.. What am I missing?</p> | 2017-04-17 16:24:26.127000+00:00 | 2020-04-06 17:54:14.747000+00:00 | null | symbolic-math|automatic-differentiation | ['https://arxiv.org/abs/1904.02990'] | 1 |
47,893,204 | <p>Both ways are possible and the choice mostly depends on the way you read the data.</p>
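<p>As a rough sketch of the two options (the data, shapes and variable names below are placeholders; when each option is convenient is discussed in the list that follows):</p>
<pre><code>import numpy as np
import tensorflow as tf

# Option 1: whole-training-set normalization (dataset fits in memory as a NumPy array)
train_images = np.random.rand(1000, 32, 32, 3).astype(np.float32)  # placeholder data
mean = train_images.mean(axis=0)
std = train_images.std(axis=0) + 1e-8
train_images_norm = (train_images - mean) / std   # reuse the same mean/std at test time

# Option 2: per-image standardization inside the input pipeline
image = tf.convert_to_tensor(train_images[0])
image_norm = tf.image.per_image_standardization(image)
</code></pre>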
<ul>
<li><p><em>Whole training set</em> normalization is convenient when you can load the whole dataset at once into a numpy array. E.g., <a href="https://www.tensorflow.org/get_started/mnist/beginners#the_mnist_data" rel="nofollow noreferrer">MNIST dataset</a> is usually loaded fully into memory. This way is also preferable in terms of convergence, when the individual images vary significantly: two training images, one is mostly white and the other is mostly black, will have very different means.</p></li>
<li><p><em>Per image</em> normalization is convenient when the images are loaded one by one or in small batches, for example from a TFRecord. It's also the only viable option when the dataset is too large to fit in memory. In this case, it's better to organize the <a href="https://www.tensorflow.org/versions/r0.12/how_tos/reading_data/" rel="nofollow noreferrer">input pipeline in tensorflow</a> and transform the image tensors just like other tensors in the graph. I've seen pretty good accuracy with this normalization in CIFAR-10, so it's a viable way, despite the issues stated earlier. Also note that you can reduce the negative effect via <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">batch normalization</a>.</p></li>
</ul> | 2017-12-19 18:36:02.160000+00:00 | 2017-12-21 13:57:38.873000+00:00 | 2017-12-21 13:57:38.873000+00:00 | null | 47,865,695 | <p>When preparing the train set for neural network training, I find two possible ways.</p>
<ol>
<li>The traditional way: calculate the mean on the <strong>whole training set</strong>, and subtract this <strong>fixed mean value</strong> from each image before sending it to the network. Process the standard deviation in a similar way.</li>
<li>I find that tensorflow provides a function <code>tf.image.per_image_standardization</code> that does normalization on a <strong>single image</strong>.</li>
</ol>
<p>I wonder which way is more appropriate?</p> | 2017-12-18 09:47:17.437000+00:00 | 2017-12-21 13:57:38.873000+00:00 | 2017-12-19 19:49:01.660000+00:00 | python|tensorflow|neural-network|deep-learning|data-science | ['https://www.tensorflow.org/get_started/mnist/beginners#the_mnist_data', 'https://www.tensorflow.org/versions/r0.12/how_tos/reading_data/', 'https://arxiv.org/abs/1502.03167'] | 3 |
54,172,562 | <ul>
<li>Your assumptions are correct: the data acquisition flow is <code>sensor -> driver -> camera library -> other libraries built on top of it</code> (see OpenCV support for Intel RealSense) <code>-> captured image.</code> Once you have the image, you can do whatever you want with it, of course.</li>
<li>The various libraries allow you to work easily with the device. In particular, OpenCV compiled with Intel RealSense support allows you to use the standard OpenCV data acquisition stream, without bothering about the image format coming from the sensor and used by the Intel library. 10/10 use these libraries, they make your life easier.</li>
<li>You can start from the OpenCV wrapper documentation for Intel RealSense (<a href="https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv" rel="nofollow noreferrer">https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv</a>). Once you are able to capture the RGBD images, you can create the input pipeline for your model using <code>tf.data</code> (see the sketch after this list) and develop in tensorflow any application that uses CNNs on RGBD images (just google it and look on arxiv for ideas about possible applications).</li>
</ul>
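<p>As a hedged sketch of that last point (assuming the camera library hands you aligned RGB-D frames as NumPy arrays; <code>get_rgbd_frame()</code> below is a hypothetical placeholder, not part of any SDK, and the frame shape is an assumption), a <code>tf.data</code> pipeline in TF 2.x could look like this:</p>
<pre><code>import numpy as np
import tensorflow as tf

def get_rgbd_frame():
    # Placeholder: replace with the actual frame grab from librealsense /
    # Astra SDK (e.g. via their OpenCV wrappers). Returns an aligned RGB-D frame.
    return np.zeros((480, 640, 4), dtype=np.float32)

def frame_generator():
    while True:
        yield get_rgbd_frame()

dataset = tf.data.Dataset.from_generator(
    frame_generator,
    output_signature=tf.TensorSpec(shape=(480, 640, 4), dtype=tf.float32)
).batch(1)

# for batch in dataset.take(10): run your trained model on `batch`
</code></pre>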
<p>Once your model has been trained, just export the trained graph and use it for inference; your pipeline then becomes: <code>sensor -> driver -> camera library -> libs -> RGBD image -> trained model -> model output</code></p> | 2019-01-13 19:43:58.453000+00:00 | 2019-01-13 19:43:58.453000+00:00 | null | null | 54,171,922 | <p>I would like to experiment with machine learning (especially CNNs) on the aligned RGB and depth stream of either an Intel RealSense or an Orbbec Astra camera. My goal is to do some object recognition and highlight/mark the objects in the output video stream (as a starting point). </p>
<p>But after having read many articles I am still confused about the involved frameworks and how the data flows from the camera through the involved software components. I just can't get a high level picture.</p>
<p>This is my assumption regarding the processing flow: </p>
<p>Sensor => Driver => libRealSense / Astra SDK => TensorFlow </p>
<p><strong>Questions</strong></p>
<ul>
<li>Is my assumption correct regarding the processing?</li>
<li>Orbbec provides an additional <code>Astra OpenNI SDK</code> besides the <code>Astra SDK</code>, whereas Intel has wrappers (?) for <code>OpenCV</code> and <code>OpenNI</code>. When or why would I need these additional libraries/support?</li>
<li>What would be the quickest way to get started? I would prefer C# over C++ </li>
</ul> | 2019-01-13 18:30:57.617000+00:00 | 2019-01-31 09:29:49.310000+00:00 | 2019-01-31 09:29:49.310000+00:00 | opencv|tensorflow|openni|realsense|orbbec | ['https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv'] | 1 |
61,455,701 | <p>So there are 3 questions:</p>
<p>First,</p>
<blockquote>
<p>So at the pre-processing stage, should I change the text into rainy
days lead to [MASK] or something like rainy days lead to [MASK]
[MASK]?</p>
</blockquote>
<p>From a word point of view, you should set [MASK] [MASK]. But remember that in BERT, the mask is set at the token level. In fact, 'wet weather' may be tokenized into something like [wet] [weath] [##er], and in that case you should have [MASK] [MASK] [MASK]. So: one [MASK] per token.</p>
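<p>As a small illustration (using the HuggingFace <code>transformers</code> tokenizer; the exact wordpieces depend on the vocabulary, so don't take the split above literally), you can count how many [MASK] tokens you need like this:</p>
<pre><code>from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

answer_tokens = tokenizer.tokenize("wet weather")   # wordpiece tokens of the answer span
masked = "rainy days lead to " + " ".join(["[MASK]"] * len(answer_tokens))

print(answer_tokens)   # number of pieces = number of [MASK] tokens to insert
print(masked)
</code></pre>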
<p>Second,</p>
<blockquote>
<p>I know that the masked LM works well on the single token prediction,
do you think the masked LM can work well on the multiple tokens
prediction?</p>
</blockquote>
<p>As you can read in <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="nofollow noreferrer">the original paper</a>, they said:</p>
<blockquote>
<p>The training data generator chooses 15% of the token positions at
random for prediction. If the i-th token is chosen, we replace the
i-th token with (1) the [MASK] token 80% of the time (2) a random
token 10% of the time (3) the unchanged i-th token 10% of the time.</p>
</blockquote>
<p>They mention no limitation on the number of MASKED tokens per sentence; you have several MASKED tokens during BERT pre-training.
In my own experience, I pre-trained BERT several times and I noticed that there was almost no difference between the predictions made on MASKED tokens whether there was only one MASKED token or several in my input.</p>
<p>Third,</p>
<blockquote>
<p>If no, do you have any suggestions on how to pre-process and train
this kind of data?</p>
</blockquote>
<p>So the answer is yes, but if you really want to MASK elements you choose (and not randomly like in the paper), you should adapt the MASKs when the data is tokenized, because the number of MASKED tokens will be greater than (or equal to) the number of MASKs in the word space you set (as in the example I gave you: 1 word does not equal 1 token, so basically 1 MASKED word will be 1 or more MASK tokens). But honestly, the labelling process will be so huge that I recommend you either increase the 15% probability for MASK tokens or add a process that MASKs the 1 or 2 following tokens for each MASKED token (or something like this).</p> | 2020-04-27 09:31:29.457000+00:00 | 2020-04-27 09:31:29.457000+00:00 | null | null | 61,419,089 | <p>I'm looking for suggestions on using Bert and Bert's masked language model to predict multiple tokens.</p>
<p>My data looks like:</p>
<p>context: <code>some very long context paragraph</code></p>
<p>question: <code>rainy days lead to @placeholder</code> and the answer for this <code>@placeholder</code> is <code>wet weather</code>. In the model, <code>wet environment</code> is the answer to predict. </p>
<p>So at the pre-processing stage, should I change the text into <code>rainy days lead to [MASK]</code> or something like <code>rainy days lead to [MASK] [MASK]</code>? I know that the masked LM works well on the single token prediction, do you think the masked LM can work well on the multiple tokens prediction? If no, do you have any suggestions on how to pre-process and train this kind of data?</p>
<p>Thanks so much!</p> | 2020-04-24 23:41:34+00:00 | 2020-04-27 09:31:29.457000+00:00 | 2020-04-24 23:51:38.747000+00:00 | python|bert-language-model | ['https://arxiv.org/pdf/1810.04805.pdf'] | 1 |
27,463,661 | <p>Even if you had the (pseudo-)distances between each pair of possible trees, this is actually not what you're after. You actually want to do <strong>unsupervised learning</strong> (clustering) in which you combine structure learning with parameter learning. The types of data structures you want to perform inference on are trees. To postulate "some metric space" for your clustering method, you introduce something that is not really necessary. To find the proper <strong>distance measure</strong> is a very difficult problem. I'll point in different directions in the following paragraphs and hope they can help you on your way.</p>
<p>The following is not the only way to represent this problem... You can see your problem as <strong>Bayesian inference</strong> over all possible trees with all possible values at the tree nodes. You probably would have some prior knowledge on what kind of trees are more likely than others and/or what kind of values are more likely than others. The Bayesian approach would allow you to define priors for both.</p>
<p>One article you might like to read is "Learning with Mixtures of Trees" by Meila and Jordan, 2000 (<a href="http://machinelearning.wustl.edu/mlpapers/paper_files/MeilaJ00.pdf" rel="nofollow">pdf</a>). It explains that it is possible to use a <strong>decomposable prior</strong>: the tree structure has a different prior from the values/parameters (this of course means that there is some assumption of independence at play here).</p>
<p>I know you were hinting at heuristics such as the average fan-out etc., but you might find it worthwhile to check out these new applications of Bayesian inference. Note, for example, that within nonparametric Bayesian methods it is also feasible to reason about infinite trees, as done e.g. by Hutter, 2004 (<a href="http://arxiv.org/pdf/math.ST/0411515" rel="nofollow">pdf</a>)!</p> | 2014-12-13 21:10:09.403000+00:00 | 2015-02-25 23:43:31.230000+00:00 | 2015-02-25 23:43:31.230000+00:00 | null | 21,897,879 | <p>I am working on a problem of Clustering of Results of Keyword Search on Graph. The results are in the form of trees and I need to cluster those trees into groups based on their similarities. Every node of the tree has two keys: one is the table name in the SQL database (semantic form) and the second is the actual values of a record of that table (label).</p>
<p>I have used the Zhang and Shasha, Klein, Demaine and RTED algorithms to find the Tree Edit Distance between the trees based on these two keys. All of these algorithms use the number of deletion/insertion/relabel operations needed to modify the trees to make them look the same. </p>
<p>**I want some more metrics to check the similarities between two trees, e.g. number of nodes, average fan-out, and more, so that I can take a weighted average of these metrics to reach a very good similarity matrix which takes into account both the semantic form of the tree (structure) and the information contained in the tree (labels at the nodes).</p>
<p>Can you please suggest a way forward or some literature which could be of some help?**</p>
<p>Can anyone suggest some good papers?</p> | 2014-02-20 04:04:53.733000+00:00 | 2015-02-25 23:43:31.230000+00:00 | null | algorithm|tree|string-matching|similarity | ['http://machinelearning.wustl.edu/mlpapers/paper_files/MeilaJ00.pdf', 'http://arxiv.org/pdf/math.ST/0411515'] | 2 |