a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
37,153,743 | <p>I haven't heard of this in the wild, but <strong>it may be a good idea</strong> if you choose the right consistent hash implementation. Specifically, <a href="http://arxiv.org/abs/1406.2294" rel="nofollow">Jump Consistent Hashing</a> by Google et al. First I'll go into why Jump, then I'll go into how it can be useful in a local data structure.</p>
<h3>Jump Consistent Hashing</h3>
<p>Jump Consistent Hashing (which I'll shorten to Jump) is great for this space for a few reasons. Jump assumes that nodes don't fail, which is great for local data structures because they, well, don't fail! This allows Jump to merely be a mapping to a range of numbers <code>[0, numBuckets)</code>, requiring only 2-4 bytes of space.</p>
<p>Further, the implementation is simple and fast. And it is even faster if we remove the reference implementation's floating-point divides and replace them with half as many integer divides (which we can, by the way).</p>
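<p>For concreteness, here is a rough Python translation of the paper's reference implementation (an illustrative sketch only; the integer-divide optimization mentioned above is not applied):</p>
<pre><code>def jump_consistent_hash(key: int, num_buckets: int) -> int:
    """Map a 64-bit key to a bucket index in [0, num_buckets)."""
    b, j = -1, 0
    while j < num_buckets:
        b = j
        key = (key * 2862933555777941757 + 1) % (1 << 64)  # 64-bit LCG step
        j = int((b + 1) * ((1 << 31) / ((key >> 33) + 1)))
    return b
</code></pre>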
<p>All this can be used for a variation on...</p>
<h3>ConcurrentHashMap</h3>
<p>But first, Java's <a href="https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentHashMap.html#ConcurrentHashMap-int-float-int-" rel="nofollow">Concurrent Hash Map</a> at a high-level.</p>
<p>Java's ConcurrentHashMap is parameterized by a number of <em>buckets</em>. This sharding factor is constant through the life of the map. Each of these buckets is itself a hash map with its own lock.</p>
<p>When inserting a key-value pair into the map, the key is hashed into one of the buckets. The lock for that bucket is taken, and the item is inserted into the bucket's hash map before releasing the lock. Whilst inserting into bucket <code>x</code> another thread can be inserting concurrently into bucket <code>y</code>, but it will wait for the lock if inserting into bucket <code>x</code>. Thus <strong>Java's ConcurrentHashMap has n-way concurrency</strong>, where <em>n</em> is the <em>bucket</em> parameter of the constructor.</p>
<p>Just like any hash map, a bucket in ConcurrentHashMap can fill up and need to grow. Just like the regular hash map, it does this by doubling its size and rehashing everything in the bucket back into its bigger self. Except that 'its bigger self' is only the bucket's 'self'. If a bucket is a hot spot and gets more than its fair share of keys, the bucket will grow disproportionately compared to the other buckets. And each time a bucket grows it takes longer and longer to rehash into itself. This last point is not only a problem for hot spots, but also when the hash table simply accumulates more keys.</p>
<p>Imagine if we could grow the number of buckets as the number of keys grows. With this we could dampen how much each individual bucket has to grow.</p>
<p><strong>Enter consistent hashing</strong>, which allows us to add more buckets!</p>
<h3>ConcurrentHashMap take 2: Consistent Hashing Style</h3>
<p>We can get ConcurrentHashMap to grow its number of buckets in two easy steps.</p>
<p>First, replace the function that maps to each bucket with the jump consistent hash function. So far everything should work the same.</p>
<p>Second, split off a new bucket when a bucket is filled; also grow the filled bucket. Actually, only split off a new bucket if the filled bucket becomes the largest in terms of capacity. That can be calculated without iterating the buckets.</p>
<p>With consistent hashing the split will only direct keys into the new bucket and not backwards into any of the old buckets.</p>
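<p>To make the splitting behaviour concrete, here is a toy (non-concurrent, deliberately simplified) Python sketch built on the <code>jump_consistent_hash</code> function above. It splits whenever any bucket exceeds a fixed capacity, rather than tracking the largest bucket, purely to keep the example short:</p>
<pre><code>class JumpBucketMap:
    """Toy map whose bucket count grows via jump consistent hashing."""

    def __init__(self, bucket_capacity=8):
        self.bucket_capacity = bucket_capacity
        self.buckets = [dict()]

    def _bucket_of(self, key, num_buckets):
        return jump_consistent_hash(hash(key) & 0xFFFFFFFFFFFFFFFF, num_buckets)

    def __setitem__(self, key, value):
        b = self._bucket_of(key, len(self.buckets))
        self.buckets[b][key] = value
        if len(self.buckets[b]) > self.bucket_capacity:
            self._split()

    def __getitem__(self, key):
        return self.buckets[self._bucket_of(key, len(self.buckets))][key]

    def _split(self):
        # Add exactly one bucket; jump hashing guarantees keys only move
        # forward into the new bucket, never between the old buckets.
        new_count = len(self.buckets) + 1
        self.buckets.append(dict())
        for bucket in self.buckets[:-1]:
            moved = [k for k in bucket
                     if self._bucket_of(k, new_count) == new_count - 1]
            for k in moved:
                self.buckets[-1][k] = bucket.pop(k)
</code></pre>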
<h3>End notes</h3>
<p>I'm sure there can be improvements on this scheme. To wit, splitting off a bucket requires a full table scan to move keys into the new bucket. This is surely no worse than a vanilla hash map, and likely better, but it is at a disadvantage to the ConcurrentHashMap implementation which likely doesn't have to do a full scan. </p> | 2016-05-11 05:27:32.557000+00:00 | 2016-05-11 05:27:32.557000+00:00 | null | null | 14,164,298 | <p>I'm not talking about distributed key/value systems, such as typically used with memcached, which use consistent hashing to make adding/removing nodes a relatively cheap procedure.</p>
<p>I'm talking about your standard in-memory hashtable like python's dict or perl's hash.</p>
<p>It would seem like the benefits of using consistent hashing would also apply to these standard data structures, by lowering the cost of resizing the hashtable. Real-time systems (and other latency-sensitive systems) would benefit from / require hashtables optimized for low-cost growth, even if overall throughput declines slightly.</p>
<p>Wikipedia alludes to "incremental resizing" but basically talks about a hot/cold replacement approach to resizing; there is a separate article about "extendible hashing" that uses a trie for bucket lookup to accomplish cheap rehashing. </p>
<p>Just curious if anyone's heard of in-core, single-node hashtables that use consistent hashing to lower growth cost. Or is this requirement better met using some other approach (a la the two wikipedia bits listed above)?</p>
<p>or ... is my whole question misguided? Do memory paging considerations make the complexity not worth it? That is, the extra indirection of consistent hashing lets you rehash only a fraction of the total keys, but perhaps that doesn't matter because you'll probably have to read from each existing page, so memory latency is your primary factor, and whether you rehash some or all of the keys doesn't matter compared to the cost of the memory access.... but on the other hand, with consistent hashing, all of your key remaps have the same destination page, so there's going to be less memory thrashing than if your keys remap to any of the existing pages.</p>
<p>EDIT: added "data-structures" tag, clarified final sentence to say "page" instead of "bucket".</p> | 2013-01-04 20:17:57.013000+00:00 | 2016-05-11 05:27:32.557000+00:00 | 2013-01-04 20:48:05.673000+00:00 | data-structures|hashtable|consistent-hashing | ['http://arxiv.org/abs/1406.2294', 'https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentHashMap.html#ConcurrentHashMap-int-float-int-'] | 2 |
49,179,305 | <p>Basically, the difference lies in the different way in which the final layer is influenced by intermediate features.</p>
<p>Standard architectures with skip-connection using element-wise summation (e.g. <a href="http://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf" rel="noreferrer">ResNet</a>) can be viewed as an iterative estimation procedure to some extent (see for instance <a href="https://arxiv.org/pdf/1612.07771.pdf" rel="noreferrer">this work</a>), where the features are refined through the various layers of the network. The main benefits of this choice are that it works and is a compact solution (it keeps the number of features fixed across a block).</p>
<p>Architectures with concatenated skip-connections (e.g. <a href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Huang_Densely_Connected_Convolutional_CVPR_2017_paper.pdf" rel="noreferrer">DenseNet</a>) allow the subsequent layers to re-use intermediate representations, maintaining more information, which can lead to better performance. Apart from the feature re-use, another consequence is the implicit deep supervision (as in <a href="http://proceedings.mlr.press/v38/lee15a.pdf" rel="noreferrer">this work</a>), which allows better gradient propagation across the network, especially for deep networks (in fact it has been used for the <a href="http://openaccess.thecvf.com/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf" rel="noreferrer">Inception</a> architecture).</p>
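<p>A minimal Keras sketch (assuming the <code>tf.keras</code> functional API; shapes are illustrative only) contrasting the two kinds of skip connection:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(32, 32, 64))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)

# ResNet-style: element-wise sum, channel count stays at 64
residual = layers.Add()([inputs, x])

# DenseNet-style: concatenation, channel count grows to 128
dense = layers.Concatenate()([inputs, x])

print(residual.shape, dense.shape)  # (None, 32, 32, 64) (None, 32, 32, 128)
</code></pre>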
<p>Obviously, if not properly designed, concatenating features can lead to an exponential growth of the parameters (this explains, in part, the hierarchical aggregation used in the work you pointed out) and, depending on the problem, using a lot of information could lead to overfitting.</p> | 2018-03-08 17:42:37.903000+00:00 | 2018-03-08 17:42:37.903000+00:00 | null | null | 49,164,230 | <p>In deep neural network, we can implement the skip connections to help:</p>
<ul>
<li><p>Solve problem of vanishing gradient, training faster</p></li>
<li><p>The network learns a combination of low level and high level features</p></li>
<li><p>Recover info loss during downsampling like max pooling.</p></li>
</ul>
<p><a href="https://medium.com/@mikeliao/deep-layer-aggregation-combining-layers-in-nn-architectures-2744d29cab8" rel="noreferrer">https://medium.com/@mikeliao/deep-layer-aggregation-combining-layers-in-nn-architectures-2744d29cab8</a></p>
<p>However, i read some source code, some implemented skip connections as concatenation, some as summation. So my question is what are the benefits of each of these implementations?</p> | 2018-03-08 02:03:58.787000+00:00 | 2018-03-08 17:42:37.903000+00:00 | null | tensorflow|computer-vision|deep-learning|keras | ['http://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf', 'https://arxiv.org/pdf/1612.07771.pdf', 'http://openaccess.thecvf.com/content_cvpr_2017/papers/Huang_Densely_Connected_Convolutional_CVPR_2017_paper.pdf', 'http://proceedings.mlr.press/v38/lee15a.pdf', 'http://openaccess.thecvf.com/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf'] | 5 |
13,602,681 | <p><a href="http://arxiv.org/ftp/arxiv/papers/1007/1007.1012.pdf" rel="nofollow noreferrer">Rudmin (2010)</a> states that exact variance of pooled data set is the mean of the variances plus the variance of the means. <a href="https://stackoverflow.com/questions/9222056/existing-function-to-combine-standard-deviations-in-r?rq=1">flodel has already provided an answer and function</a> that gives similar values to Rudmin's statement. Using Rudmin's data set and <a href="https://stackoverflow.com/questions/9222056/existing-function-to-combine-standard-deviations-in-r?rq=1">flodel's function</a> based on <a href="http://en.wikipedia.org/wiki/Standard_deviation#Combining_standard_deviations" rel="nofollow noreferrer">Wikipedia</a>:</p>
<pre><code>df <- data.frame(mean = c(30.66667, 31.14286, 40.33333), variance = c(8.555555, 13.26531, 1.555555), n = c(6,7,3))
grand.sd <- function(S, M, N) {sqrt(weighted.mean(S^2 + M^2, N) -
weighted.mean(M, N)^2)}
grand.sd(sqrt(df$variance), df$mean, df$n)^2
#[1] 22.83983 = Dp variance in Rudmin (2010).
</code></pre>
<p>However this solution gives slightly different values compared to the <a href="http://www.talkstats.com/showthread.php/14523-An-average-of-standard-deviations" rel="nofollow noreferrer">function 5.38 from Headrick (2010)</a> (unless there is a mistake somewhere):</p>
<pre><code>dat <- data.frame(variable = c(rep("x", 2), rep("y", 3)), replicate = c(1,2,1,2,3),
mean = c(3.4, 2.5, 6.5, 5.7, 5.1), sd = c(1.2, 0.7, 2.4, 4.0, 3.5),
n = c(3,3,5,4,6))
x <- subset(dat, variable == "x")
((x$n[1]^2)*(x$sd[1]^2)+
(x$n[2]^2)*(x$sd[2]^2)-
(x$n[2])*(x$sd[1]^2) -
(x$n[2])*(x$sd[2]^2) -
(x$n[1])*(x$sd[1]^2) -
(x$n[1])*(x$sd[2]^2) +
(x$n[1])*(x$n[2])*(x$sd[1]^2) +
(x$n[1])*(x$n[2])*(x$sd[2]^2) +
(x$n[1])*(x$n[2])*(x$mean[1] - x$mean[2])^2)/
((x$n[1] + x$n[2] - 1)*(x$n[1] + x$n[2]))
#[1] 1.015
grand.sd(x$sd, x$mean, x$n)^2
#[1] 1.1675
</code></pre>
<p>To answer my own question, the desired <code>data.frame</code> would be acquired followingly:</p>
<pre><code>library(plyr)
ddply(dat, c("variable"), function(dat) c(mean=with(dat,weighted.mean(mean, n)), sd = with(dat, grand.sd(sd, mean, n))))
variable mean sd
1 x 2.950000 1.080509
2 y 5.726667 3.382793
</code></pre> | 2012-11-28 10:16:20.083000+00:00 | 2012-11-28 12:17:40.923000+00:00 | 2017-05-23 10:24:44.177000+00:00 | null | 13,593,196 | <p>I have a data set with mean values, standard deviations and n. One of the variables has an equal sample size, while the sample size for the other one varies.</p>
<pre><code>dat <- data.frame(variable = c(rep("x", 2), rep("y", 3)), replicate = c(1,2,1,2,3),
mean = c(3.4, 2.5, 6.5, 5.7, 5.1), sd = c(1.2, 0.7, 2.4, 4.0, 3.5),
n = c(3,3,5,4,6))
</code></pre>
<p>I need to combine <code>x</code> and <code>y</code> variables and am trying to find a code-sparing way to calculate combined standard deviation for instance using by <code>aggregate</code> function. <a href="http://www.talkstats.com/showthread.php/14523-An-average-of-standard-deviations" rel="nofollow noreferrer">The equation for combined standard deviation</a> is following: </p>
<p><img src="https://i.stack.imgur.com/bcfph.gif" alt="enter image description here"></p>
<p>And for unequal sample sizes (<a href="http://www.talkstats.com/showthread.php/14523-An-average-of-standard-deviations" rel="nofollow noreferrer">same source</a>):</p>
<p><img src="https://i.stack.imgur.com/qcoht.gif" alt="enter image description here"></p>
<p>My combined data frame should look like this:</p>
<pre><code>variable mean sd
x 2.95 sd_x
y 5.76 sd_y
</code></pre>
<p><strong>How to make a function in R that calculates the combined standard deviation?</strong> Or alternatively, if there is a package designed for this, it counts as an answer too =)</p> | 2012-11-27 21:15:21.403000+00:00 | 2021-02-06 04:42:28.443000+00:00 | null | r|variance|propagation|standard-deviation | ['http://arxiv.org/ftp/arxiv/papers/1007/1007.1012.pdf', 'https://stackoverflow.com/questions/9222056/existing-function-to-combine-standard-deviations-in-r?rq=1', 'https://stackoverflow.com/questions/9222056/existing-function-to-combine-standard-deviations-in-r?rq=1', 'http://en.wikipedia.org/wiki/Standard_deviation#Combining_standard_deviations', 'http://www.talkstats.com/showthread.php/14523-An-average-of-standard-deviations'] | 5 |
68,966,578 | <p>Hello Luka, answering your question: yes, it is possible if you add a gyro into the mix to estimate the position and hence the velocity, but it is hard and not too reliable (depending on the accuracy and precision of your sensors).</p>
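<p>As a very rough illustration of that idea (not a full INS pipeline), the naive dead-reckoning step for one axis can be done with cumulative trapezoidal integration; this sketch assumes a known sample rate, zero initial velocity and position, gravity already removed, and it ignores the drift that makes this unreliable over longer spans:</p>
<pre><code>import numpy as np
from scipy.integrate import cumulative_trapezoid

fs = 50.0                                      # assumed sample rate (Hz)
t = np.arange(0, 5, 1 / fs)
acc_x = np.random.normal(0.0, 0.05, t.size)    # stand-in for one accelerometer axis (m/s^2)

vel_x = cumulative_trapezoid(acc_x, t, initial=0)  # velocity, assuming v(0) = 0
pos_x = cumulative_trapezoid(vel_x, t, initial=0)  # position, assuming x(0) = 0
</code></pre>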
<p>What you are proposing is an inertial navigation system, which works on the principle of detecting the subtle changes in acceleration with respect to a given inertial-frame, in this case the planet Earth which rotates at a constant speed. For more in depth info I found this <a href="https://arxiv.org/pdf/1704.06053.pdf" rel="nofollow noreferrer">article</a> hope this helps!</p> | 2021-08-28 17:09:20.887000+00:00 | 2021-08-28 17:09:20.887000+00:00 | null | null | 68,895,447 | <p>I get a list of acceleration and gyroscope data from the myo armband. The acceleration data is given like <code>[[-0.11474609375, -0.13037109375, 0.9873046875], [-0.111328125, -0.12890625, 0.9892578125], [-0.11376953125, -0.12255859375, 0.98828125]...]</code> where the values represent each the x,y and z axis values. I know to get the velocity, you have to integrate it once and for the location you have to double integrate it.</p>
<p>The problems I encounter are:</p>
<ul>
<li>In some questions/articles/answers it's stated you need the initial velocity to correctly calculate it. But how do i get it from the accelerometer (and/or gyroscope) data?</li>
<li>For the <a href="https://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html" rel="nofollow noreferrer">quad function from scipy</a> a function is required to integrate the data. How do i find out which function is correct? Can i just take f(x) where x is the current value and iterate over all values?</li>
<li>The same question is for <a href="https://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html" rel="nofollow noreferrer">dblquad</a>, just that it takes multiple values.</li>
</ul>
<p>The time period for those collected data is relatively short (around 5-7 seconds), so i don't think that the error drift will sum up much in this time.</p>
<p>Background: Those data is needed as an feature for ML algos with detecting handwritten letters. So velocity might vary due to the point since each person writes with a different tempo, but an average tempo could be used.</p>
<p>Some question already asked like <a href="https://stackoverflow.com/questions/7829097/android-accelerometer-accuracy-inertial-navigation/7835988#7835988">this</a> refer to real positioning, which isn't my goal. Also would <a href="https://stackoverflow.com/a/6645563/3715677">this</a> be enough aka just summing up without using the integration at all?</p> | 2021-08-23 15:41:17.687000+00:00 | 2021-08-28 17:09:20.887000+00:00 | null | python|scipy|physics|accelerometer|gyroscope | ['https://arxiv.org/pdf/1704.06053.pdf'] | 1 |
38,469,349 | <p>Yes it's certainly possible but requires a bit more work. Stan (and popular MCMC tools that I know of) are not designed to be run in a distributed setting, via Spark or otherwise. In general, distributed MCMC is an area of active research. For a recent review, I'd recommend section 4 of <a href="http://arxiv.org/abs/1602.05221" rel="noreferrer">Patterns of Scalable Bayesian Inference</a> (PoFSBI). There are multiple possible ways you might want to split up a big MCMC computation but I think one of the more straightforward ways would be splitting up the data and running an off-the-shelf tool like Stan, with the same model, on each partition. Each model will produce a <em>subposterior</em> which can be reduce'd together to form a posterior. PoFSBI discusses several ways of combining such subposteriors.</p>
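<p>A hedged PySpark skeleton of that "partition, sample, combine" idea might look like the following; <code>run_chain_on_partition</code> and <code>full_dataset</code> are placeholders (you would call Stan there, e.g. via pystan), and the simple averaging at the end stands in for a proper consensus rule:</p>
<pre><code>import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("consensus-mcmc-sketch").getOrCreate()
sc = spark.sparkContext

full_dataset = list(np.random.normal(1.0, 2.0, 10_000))  # stand-in for your observations

def run_chain_on_partition(rows):
    """Placeholder: fit the same Stan model on this shard and yield its posterior draws."""
    shard = np.array(list(rows))
    # e.g. with pystan you would run the sampler here and extract the draws
    draws = np.random.normal(shard.mean(), shard.std() / np.sqrt(len(shard)), size=1000)
    yield draws

subposteriors = (sc.parallelize(full_dataset, numSlices=8)
                   .mapPartitions(run_chain_on_partition)
                   .collect())

# Simplest combine: average the subposterior draws; weighted consensus
# rules (as discussed in PoFSBI) generally do better.
combined_draws = np.mean(np.stack(subposteriors), axis=0)
</code></pre>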
<p>I've <a href="https://gist.github.com/strongh/0ba143d21e2382e3ec61f6a0bdc2f55d" rel="noreferrer">put together</a> a very rough proof of concept using pyspark and pystan (python is the common language with the most Stan and Spark support). It's a rough and limited implementation of the weighted-average consensus algorithm in PoFSBI, running on the tiny 8-schools dataset. I don't think this example would be practically very useful but it should provide some idea of what might be necessary to run Stan as a Spark program: partition data, run stan on each partition, combine the subposteriors.</p> | 2016-07-19 22:09:12.140000+00:00 | 2016-07-19 22:09:12.140000+00:00 | null | null | 36,498,491 | <p>I hope to use <a href="http://mc-stan.org/" rel="nofollow">MC-Stan</a> on <a href="http://spark.apache.org/" rel="nofollow">Spark</a>, but it seems there is no related page searched by Google.</p>
<p>I wonder if this approach is even possible on Spark, therefore I would appreciate if someone let me know.</p>
<p>Moreover, I also wonder what is the widely-used approach to use MCMC on Spark. I heard Scala is widely used, but I need some language that has a decent MCMC library such as MC-Stan.</p> | 2016-04-08 11:30:31.223000+00:00 | 2018-08-10 13:53:42.090000+00:00 | 2016-04-08 11:46:43.337000+00:00 | apache-spark|stan | ['http://arxiv.org/abs/1602.05221', 'https://gist.github.com/strongh/0ba143d21e2382e3ec61f6a0bdc2f55d'] | 2 |
69,535,932 | <p>Two things come to my mind. First, your max-pooling layers are reducing the size of the input to the next convolutional layers every time, and eventually the size is too small to run another max-pooling operation. Try running</p>
<pre><code>model.summary()
</code></pre>
<p>after each max-pooling operation and you will quickly find out that your tensor cannot be further reduced. You can then consider using a different <code>pool_size</code> in your max-pooling layers.</p>
<p>The second thing I notice (I am not sure if it is intentional) is that <em>MaxPooling1D != Global Max Pooling</em>. Keras supports both <a href="https://keras.io/api/layers/pooling_layers/" rel="nofollow noreferrer">operations</a>. Take a look at the documentation.</p>
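<p>A minimal sketch of what that could look like (illustrative sizes only, assuming <code>tf.keras</code>): a single <code>GlobalMaxPooling1D</code> takes the maximum over the whole time axis, so no chain of strided poolings can shrink the sequence below the kernel size.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers, models

MAX_SEQUENCE_LENGTH, VOCAB_SIZE, EMBEDDING_DIM, NUM_CLASSES = 1000, 20000, 100, 5  # illustrative

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMBEDDING_DIM, input_length=MAX_SEQUENCE_LENGTH),
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),            # max over the whole time axis
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.summary()
</code></pre>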
<p>On a side note, sentence classification with CNNs was widely popularized by the <a href="https://arxiv.org/pdf/1408.5882.pdf" rel="nofollow noreferrer">work</a> of Yoon Kim. In his work, he shows that global max-pooling operations perform much better than striding max-pooling operations in sentence classification (when using word embeddings, as you are doing).</p> | 2021-10-12 06:21:57.563000+00:00 | 2021-10-12 06:21:57.563000+00:00 | null | null | 69,533,961 | <p>I have worked with text classification using Glove and CNN and found the problem below:</p>
<pre><code>File "c:\programfiles_anaconda\anaconda3\envs\math_stat_class\lib\site-packages\tensorflow\python\framework\ops.py", line 1657, in _create_c_op
raise ValueError(str(e))
ValueError: Negative dimension size caused by subtracting 5 from 1 for '{{node max_pooling1d_9/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 5, 1, 1], padding="VALID", strides=[1, 5, 1, 1]](max_pooling1d_9/ExpandDims)' with input shapes: [?,1,1,128].
</code></pre>
<h3>Glove input</h3>
<pre class="lang-py prettyprint-override"><code>EMBEDDING_DIM = 100
embeddings_index = {}
f = open(glove_path, encoding='utf-8')
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
embedding_matrix = np.zeros((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
</code></pre>
<h3>Layer input of CNN</h3>
<pre class="lang-py prettyprint-override"><code># apply embedding matrix into an Embedding layer
# trainable=False to prevent the weights from being updated during training
embedding_layer = Embedding(len(word_index) + 1,
EMBEDDING_DIM,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
</code></pre>
<h3>Training 1D CNN</h3>
<pre class="lang-py prettyprint-override"><code>sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(128, 5, activation='relu')(embedded_sequences)
print("x shape = ", x)
x = MaxPooling1D(5)(x)
print("x shape = ", x)
x = Conv1D(128, 5, activation='relu')(x)
print("x shape = ", x)
#-----This line below produced error-----
x = MaxPooling1D(5)(x) #Error this line
#-----This line above produced error-----
print("x shape = ", x)
x = Conv1D(128, 5, activation='relu')(x)
print("x shape = ", x)
x = MaxPooling1D(35)(x) # global max pooling
print("x shape = ", x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
preds = Dense(len(labels_index), activation='softmax')(x)
model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['acc'])
# Learning
model.fit(X_train, y_train, validation_data=(X_val, y_val),
epochs=2, batch_size=128)
</code></pre>
<h3>My ideas</h3>
<p><strong>1) Are there some issues/problems with Glove input?</strong></p>
<p><strong>2) Conv1D:</strong></p>
<ul>
<li>Change the "kernel_size" from 5 to a New Value.</li>
</ul>
<p><strong>3) MaxPooling1D:</strong></p>
<ul>
<li>Change pool_size from 5 to a New Value.</li>
<li>Specify other parameters: strides, padding and so on.</li>
</ul>
<p><strong>4) I currently use keras on tensorflow 2.20 and python 3.6</strong></p>
<ul>
<li>Do I need to upgrade tensorflow and python?</li>
</ul>
<p>However, I could not figure out any better way to do. May I have your suggestions?</p> | 2021-10-12 01:01:05.317000+00:00 | 2021-10-28 21:45:50.160000+00:00 | 2021-10-28 21:45:50.160000+00:00 | python|tensorflow|conv-neural-network|stanford-nlp|text-classification | ['https://keras.io/api/layers/pooling_layers/', 'https://arxiv.org/pdf/1408.5882.pdf'] | 2 |
49,675,362 | <p>The recent "Understanding the Lomb-Scargle Periodogram -
Jacob T. VanderPlas" that you can find on <a href="https://arxiv.org/pdf/1703.09824.pdf" rel="nofollow noreferrer">This link to arxiv</a> is a very good reference on how to chose the right set of parameters and how to test if your results are reliable.</p> | 2018-04-05 14:51:24.727000+00:00 | 2018-04-05 14:51:24.727000+00:00 | null | null | 20,782,489 | <p>I am using two methods,fft and lomb-scargle, to find periods for a timing sequence points.<br/>
However, the fundamental frequency changes a lot even for the same dataset.<br/>
I can get the right result only when I fine-tune the sampling rate.<br/></p>
<p>How to choose the parameters of the two methods to get a consistent fundamental frequency?<br/>
How to know the result is reliable?<br/></p> | 2013-12-26 09:40:57.160000+00:00 | 2018-04-05 14:51:24.727000+00:00 | null | signals|fft | ['https://arxiv.org/pdf/1703.09824.pdf'] | 1 |
10,218,691 | <p>Djinn is a theorem prover. It seems your question is: what does theorem proving have to do with programming?</p>
<p>Strongly typed programming has a very close relationship to logic. In particular, traditional functional languages in the ML tradition are closely related to <a href="http://en.wikipedia.org/wiki/Intuitionistic_logic">Intuitionist</a> <a href="http://en.wikipedia.org/wiki/Propositional_calculus">Propositional Logic</a>. </p>
<p>The slogan is "programs are proofs, the proposition that a program proves is its type."<br>
In general you can think of</p>
<pre><code> foo :: Foo
</code></pre>
<p>as saying that <code>foo</code> is a proof of the formula <code>Foo</code>. For example the type</p>
<pre><code> a -> b
</code></pre>
<p>corresponds to functions from <code>a</code> to <code>b</code>, so if you have a proof of <code>a</code> and a proof of <code>a -> b</code> you have a proof of <code>b</code>. So, function correspond perfectly to implication in logic. Similarly</p>
<pre><code>(a,b)
</code></pre>
<p>Corresponds to conjunction (logic and). So the logic tautology <code>a -> b -> a & b</code> corresponds to the Haskell type <code>a -> b -> (a,b)</code>
and has the proof:</p>
<pre><code>\a b -> (a,b)
</code></pre>
<p>This is the "and introduction rule".
Meanwhile, <code>fst :: (a,b) -> a</code> and <code>snd :: (a,b) -> b</code> correspond to the two "and elimination rules".</p>
<p>similarly, <code>a OR b</code> corresponds to the Haskell type <code>Either a b</code>.</p>
<p>This correspondence is sometimes referred to as the "Curry-Howard Isomorphism" or "<a href="http://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence">Curry-Howard Correspondence</a>" after <a href="http://en.wikipedia.org/wiki/Haskell_Curry">Haskell Curry</a> and <a href="http://en.wikipedia.org/wiki/William_Alvin_Howard">William Alvin Howard</a></p>
<p>This story is complicated by non-totality in Haskell.</p>
<p>Djinn is "just" a theorem prover. </p>
<p>If you are interested in trying to write a clone, the first page of google results for "Simple Theorem Prover" has <a href="http://arxiv.org/pdf/cs.LO/9301110.pdf">this</a> paper which describes writing a theorem prover for LK that appears to be written in SML. </p>
<p>Edit:
as to "how is theorem proving possible?" The answer is that in some sense it isn't hard. It is just a search problem:</p>
<p>Consider the problem restated as this: we have a set S of propositions we know how to prove, and a proposition P we want to prove. What do we do?
First of all, we ask: do we already have a proof of P in S? If so, we can use that; if not, we can <em>pattern match</em> on P:</p>
<pre><code>case P of
(a -> b) -> add a to S, and prove b (-> introduction)
(a ^ b) -> prove a, then prove b (and introduction)
(a v b) -> try to prove a, if that doesn't work prove b (or introduction)
</code></pre>
<p>if none of those work</p>
<pre><code>for each conjunction `a ^ b` in S, add a and b to S (and elimination)
for each disjunction `a v b` in S, try proving `(a -> P) ^ (b -> P)` (or elimination)
for each implication `a -> P` in S, try proving `a` (-> elimination)
</code></pre>
<p>Real theorem provers have some smarts, but the idea is the same. The research area of "Decision Procedures" examines strategies for finding proofs of certain kinds of formulas that are guaranteed to work. On the other hand, "Tactics" looks at how to optimally order the proof search.</p>
<p>As to: "How can proofs be translated into Haskell?"</p>
<p>Each inference rule in a formal system corresponds to some simple Haskell construct, so if you have a tree of inference rules, you can construct a corresponding program--Haskell is a proof language after all.</p>
<p>Implication introduction:</p>
<pre><code>\s -> ?
</code></pre>
<p>Or introduction</p>
<pre><code>Left
Right
</code></pre>
<p>And introduction</p>
<pre><code>\a b -> (a,b)
</code></pre>
<p>And elimination</p>
<pre><code>fst
snd
</code></pre>
<p>etc </p>
<p>augustss says in his answer that they way he implemented this in Djinn is a little tedious for an SO answer. I bet though, that you can figure it how to implement it on your own.</p> | 2012-04-18 22:00:03.763000+00:00 | 2014-04-15 12:05:35.173000+00:00 | 2014-04-15 12:05:35.173000+00:00 | null | 10,217,931 | <p>OK, so I realise that I will probably regret this for the rest of my life, but... How does Djinn actually work?</p>
<p>The documentation says that it uses an algorithm which is "an extension of LJ" and points to a long confusing paper about LJT. As best as I can tell, this is a big complicated system of highly formalised rules for figuring out which logical statements are true or false. But this doesn't even <em>begin</em> to explain how you turn a type signature into an executable expression. Presumably all the complicated formal reasoning is <em>involved</em> somehow, but the picture is crucially incomplete.</p>
<hr/>
<p>It's a bit like that time I tried to write a Pascal interpreter in BASIC. (Don't laugh! I was only twelve...) I spent hours trying to figure it out, and in the end I had to give up. I just couldn't figure out how the heck you get from a giant string containing an entire program to something you can compare against known program fragments in order to decide what to actually do.</p>
<p>The answer, of course, is that you need to write a thing called a "parser". Once you comprehend what this is and what it does, suddenly everything becomes <em>obvious</em>. Oh, it's still not trivial to code it, but the <em>idea</em> is simple. You just have to write the actual code. If I'd known about parsers when I was twelve, then maybe I wouldn't have spent two hours just staring at a blank screen.</p>
<p>I suspect that what Djinn is doing is fundamentally simple, but I'm missing some important detail which explains how all this complicated logical gymnastics relates to Haskell source code...</p> | 2012-04-18 21:01:10.237000+00:00 | 2014-04-15 12:05:35.173000+00:00 | 2012-04-18 21:08:50.310000+00:00 | haskell|types|logic | ['http://en.wikipedia.org/wiki/Intuitionistic_logic', 'http://en.wikipedia.org/wiki/Propositional_calculus', 'http://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence', 'http://en.wikipedia.org/wiki/Haskell_Curry', 'http://en.wikipedia.org/wiki/William_Alvin_Howard', 'http://arxiv.org/pdf/cs.LO/9301110.pdf'] | 6 |
66,317,805 | <p>Your problem is that when you get to the convolution, your time dimension (2) is smaller than the filter that you have specified (5).</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Permute
from tensorflow.keras.layers import Concatenate
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv1D
# dummy data w/batch 32
X_train = tf.random.normal([32, 6249, 1])
X_train_phase = tf.random.normal([32, 6249, 1])
signal1 = Input(shape=(X_train.shape[1:]))
signal2 = Input(shape=(X_train_phase.shape[1:]))
concat_signal = Concatenate()([signal1, signal2])
x = Permute(dims=(2, 1))(concat_signal)
x = BatchNormalization()(x)
print(x.shape)
# (None, 2, 6249)
</code></pre>
<p>If you see the docs for <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D" rel="nofollow noreferrer"><code>tf.keras.layers.Conv1D</code></a>, you'll notice that <code>"valid"</code> is the default padding, which means there is no padding. There is a great reference, <a href="https://arxiv.org/pdf/1603.07285.pdf" rel="nofollow noreferrer">"A guide to convolution arithmetic for deep
learning"</a>, which does a good job of illustrating the relationship between input size, kernel size, strides, and padding.</p>
<p>While I am not sure what you're trying to accomplish with this network, adding the argument <code>padding="same"</code> to your convolution layers will send the input through without issue.</p>
<pre class="lang-py prettyprint-override"><code>x = Conv1D(
filters=64,
kernel_size=5,
activation="relu",
padding="same", # <= add this.
kernel_initializer="glorot_normal")(x)
</code></pre> | 2021-02-22 14:40:24.497000+00:00 | 2021-02-22 14:40:24.497000+00:00 | null | null | 66,312,033 | <p>I have two sensor inputs for which I have applied the Concatenate layer previously for fusion. Both of them are time series data for which I'm now trying to apply a permutation layer. However, when I do so, I get the error:</p>
<blockquote>
<p>Negative dimension size caused by subtracting 3 from 2 for '{{node conv1d_334/conv1d}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](conv1d_334/conv1d/ExpandDims, conv1d_334/conv1d/ExpandDims_1)' with input shapes: [?,1,2,6249], [1,3,6249,128].</p>
</blockquote>
<p>My inputs are both time series data with input dimension <code>(1176, 6249, 1)</code>. Can anybody tell me what I'm doing wrong? Here is a sample code:</p>
<pre><code>lr = 0.0005
n_timesteps = 3750
n_features = 1
n_outputs = 3
def small_model(optimizer='rmsprop', init='glorot_uniform'):
signal1 = Input(shape=(X_train.shape[1:]))
signal2 = Input(shape=(X_train_phase.shape[1:]))
concat_signal = Concatenate()([signal1, signal2])
# x = InputLayer(input_shape=(None, X_train.shape[1:][0],1))(inputA)
x = Permute(dims=(2, 1))(concat_signal)
x = BatchNormalization()(x)
x = Conv1D(64, 5, activation='relu', kernel_initializer='glorot_normal')(x) #, input_shape=(None, 3750, n_features)
x = Conv1D(64, 5, activation='relu', kernel_initializer='glorot_normal')(x)
x = MaxPooling1D(5)(x)
x = Dropout(0.3)(x)
</code></pre> | 2021-02-22 08:07:15.017000+00:00 | 2021-02-22 14:40:24.497000+00:00 | 2021-02-22 14:22:14.140000+00:00 | python|tensorflow|keras | ['https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D', 'https://arxiv.org/pdf/1603.07285.pdf'] | 2 |
55,821,619 | <p>This is only a toy model with no neurons. There is no optimization going on here. Batch normalization won't change your <code>X</code> variable because by definition it is a constant.</p>
<p>What it does is: <em>in the process of training a neural network</em>, it <em>transforms</em> your outputs from some layer into normalized inputs to the next layer, which helps to train the next layer's weights. I am not a Keras user, but I'd guess you might be able to check the normalized outputs of some layer only by inspecting the TensorFlow nodes directly, if even then.</p>
<p>To answer the title of your question, Batch Normalization in itself is just <a href="https://en.wikipedia.org/wiki/Standard_score#Calculation" rel="nofollow noreferrer">standard z-score normalization</a>.
It is the same as subtracting the mean and dividing by the standard deviation of the series. </p>
<p>In mathematical notation,</p>
<p><a href="https://i.stack.imgur.com/NbpBu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NbpBu.png" alt="enter image description here"></a></p>
<p>In code, where <code>arr</code> is a numpy array,</p>
<pre><code>(arr - arr.mean(axis=0))/arr.std(axis=0, ddof=1)
</code></pre>
<p>The idea of normalizing is to get your distribution closer to a standard normal with mean 0 and standard deviation 1 i.e. ~ N(0,1).</p>
<p>It has been discussed lately (e.g. <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">here</a> and <a href="https://papers.nips.cc/paper/7996-understanding-batch-normalization.pdf" rel="nofollow noreferrer">here</a>) that by renormalizing your batches you can train your Neural Networks faster by reducing internal covariate shift.</p> | 2019-04-24 02:39:16.800000+00:00 | 2019-04-24 03:00:29.337000+00:00 | 2019-04-24 03:00:29.337000+00:00 | null | 55,821,575 | <p>I tried batch normalization for toy set [[1,2],[5,4].
Normalizing along axis=0, we get
<pre><code>#[[-1/sqrt(2),-1/sqrt(2)],[1/sqrt(2), 1/sqrt(2)]]
</code></pre>
<p>However, my layer(axis=0) and layer(axis=1) both give incorrect result.</p>
<pre><code>X = tf.constant([[1,2],[5,4]],dtype = tf.float32)
layer = keras.layers.BatchNormalization()
hidden = layer(X)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer(axis=0))
print(sess.run(layer.trainable_weights))
print(sess.run(hidden))
#results
#[array([1., 1.], dtype=float32), array([0., 0.], dtype=float32)]
#[[0.9995004 4.997502 ]
# [1.9990008 3.9980016]]
X = tf.constant([[1,2],[5,4]],dtype = tf.float32)
layer = keras.layers.BatchNormalization()
hidden = layer(X)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer(axis=1))
print(sess.run(layer.trainable_weights))
print(sess.run(hidden))
#results
#[array([1., 1.], dtype=float32), array([0., 0.], dtype=float32)]
#[[0.9995004 4.997502 ]
# [1.9990008 3.9980016]]
</code></pre>
<p>gamma=1 and beta=0 as trainable_weights shows. Then how this layer works? </p> | 2019-04-24 02:33:11.057000+00:00 | 2019-04-24 03:00:29.337000+00:00 | null | python|keras|batch-normalization | ['https://en.wikipedia.org/wiki/Standard_score#Calculation', 'https://i.stack.imgur.com/NbpBu.png', 'https://arxiv.org/abs/1502.03167', 'https://papers.nips.cc/paper/7996-understanding-batch-normalization.pdf'] | 4 |
61,094,158 | <p>The learning rate of your model is poor. You are certainly over-fitting it. <a href="https://i.stack.imgur.com/k79do.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k79do.png" alt="this is what you should expect."></a></p>
<p>Some changes are:</p>
<ol>
<li>do cross-validation first</li>
<li>try removing some features</li>
<li>make sure your labels are coded right</li>
</ol>
<p>There could be multiple things that might be wrong; it's hard to tell.
For reference, I can give you the architecture.</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from layers import *
from data import voc, coco
import os
class SSD(nn.Module):
"""Single Shot Multibox Architecture
The network is composed of a base VGG network followed by the
added multibox conv layers. Each multibox layer branches into
1) conv2d for class conf scores
2) conv2d for localization predictions
3) associated priorbox layer to produce default bounding
boxes specific to the layer's feature map size.
See: https://arxiv.org/pdf/1512.02325.pdf for more details.
Args:
phase: (string) Can be "test" or "train"
size: input image size
base: VGG16 layers for input, size of either 300 or 500
extras: extra layers that feed to multibox loc and conf layers
head: "multibox head" consists of loc and conf conv layers
"""
def __init__(self, phase, size, base, extras, head, num_classes):
super(SSD, self).__init__()
self.phase = phase
self.num_classes = num_classes
self.cfg = (coco, voc)[num_classes == 21]
self.priorbox = PriorBox(self.cfg)
self.priors = Variable(self.priorbox.forward(), volatile=True)
self.size = size
# SSD network
self.vgg = nn.ModuleList(base)
# Layer learns to scale the l2 normalized features from conv4_3
self.L2Norm = L2Norm(512, 20)
self.extras = nn.ModuleList(extras)
self.loc = nn.ModuleList(head[0])
self.conf = nn.ModuleList(head[1])
if phase == 'test':
self.softmax = nn.Softmax(dim=-1)
self.detect = Detect(num_classes, 0, 200, 0.01, 0.45)
def forward(self, x):
"""Applies network layers and ops on input image(s) x.
Args:
x: input image or batch of images. Shape: [batch,3,300,300].
Return:
Depending on phase:
test:
Variable(tensor) of output class label predictions,
confidence score, and corresponding location predictions for
each object detected. Shape: [batch,topk,7]
train:
list of concat outputs from:
1: confidence layers, Shape: [batch*num_priors,num_classes]
2: localization layers, Shape: [batch,num_priors*4]
3: priorbox layers, Shape: [2,num_priors*4]
"""
sources = list()
loc = list()
conf = list()
# apply vgg up to conv4_3 relu
for k in range(23):
x = self.vgg[k](x)
s = self.L2Norm(x)
sources.append(s)
# apply vgg up to fc7
for k in range(23, len(self.vgg)):
x = self.vgg[k](x)
sources.append(x)
# apply extra layers and cache source layer outputs
for k, v in enumerate(self.extras):
x = F.relu(v(x), inplace=True)
if k % 2 == 1:
sources.append(x)
# apply multibox head to source layers
for (x, l, c) in zip(sources, self.loc, self.conf):
loc.append(l(x).permute(0, 2, 3, 1).contiguous())
conf.append(c(x).permute(0, 2, 3, 1).contiguous())
loc = torch.cat([o.view(o.size(0), -1) for o in loc], 1)
conf = torch.cat([o.view(o.size(0), -1) for o in conf], 1)
if self.phase == "test":
output = self.detect(
loc.view(loc.size(0), -1, 4), # loc preds
self.softmax(conf.view(conf.size(0), -1,
self.num_classes)), # conf preds
self.priors.type(type(x.data)) # default boxes
)
else:
output = (
loc.view(loc.size(0), -1, 4),
conf.view(conf.size(0), -1, self.num_classes),
self.priors
)
return output
def load_weights(self, base_file):
other, ext = os.path.splitext(base_file)
if ext == '.pkl' or '.pth':
print('Loading weights into state dict...')
self.load_state_dict(torch.load(base_file,
map_location=lambda storage, loc: storage))
print('Finished!')
else:
print('Sorry only .pth and .pkl files supported.')
def vgg(cfg, i, batch_norm=False):
layers = []
in_channels = i
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
elif v == 'C':
layers += [nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
pool5 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)
conv7 = nn.Conv2d(1024, 1024, kernel_size=1)
layers += [pool5, conv6,
nn.ReLU(inplace=True), conv7, nn.ReLU(inplace=True)]
return layers
def add_extras(cfg, i, batch_norm=False):
# Extra layers added to VGG for feature scaling
layers = []
in_channels = i
flag = False
for k, v in enumerate(cfg):
if in_channels != 'S':
if v == 'S':
layers += [nn.Conv2d(in_channels, cfg[k + 1],
kernel_size=(1, 3)[flag], stride=2, padding=1)]
else:
layers += [nn.Conv2d(in_channels, v, kernel_size=(1, 3)[flag])]
flag = not flag
in_channels = v
return layers
def multibox(vgg, extra_layers, cfg, num_classes):
loc_layers = []
conf_layers = []
vgg_source = [21, -2]
for k, v in enumerate(vgg_source):
loc_layers += [nn.Conv2d(vgg[v].out_channels,
cfg[k] * 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(vgg[v].out_channels,
cfg[k] * num_classes, kernel_size=3, padding=1)]
for k, v in enumerate(extra_layers[1::2], 2):
loc_layers += [nn.Conv2d(v.out_channels, cfg[k]
* 4, kernel_size=3, padding=1)]
conf_layers += [nn.Conv2d(v.out_channels, cfg[k]
* num_classes, kernel_size=3, padding=1)]
return vgg, extra_layers, (loc_layers, conf_layers)
base = {
'300': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'C', 512, 512, 512, 'M',
512, 512, 512],
'512': [],
}
extras = {
'300': [256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256],
'512': [],
}
mbox = {
'300': [4, 6, 6, 6, 4, 4], # number of boxes per feature map location
'512': [],
}
def build_ssd(phase, size=300, num_classes=21):
if phase != "test" and phase != "train":
print("ERROR: Phase: " + phase + " not recognized")
return
if size != 300:
print("ERROR: You specified size " + repr(size) + ". However, " +
"currently only SSD300 (size=300) is supported!")
return
base_, extras_, head_ = multibox(vgg(base[str(size)], 3),
add_extras(extras[str(size)], 1024),
mbox[str(size)], num_classes)
return SSD(phase, size, base_, extras_, head_, num_classes)
</code></pre> | 2020-04-08 05:58:09.233000+00:00 | 2020-04-08 05:58:09.233000+00:00 | null | null | 61,091,655 | <p>I am <strong>training from scratch</strong> an SSD based object detection network. I am training on 250,000 images. The dataset has a skew for some classes, but I have decent representation for minority classes as well (2000 or so).</p>
<p>I see that the model is not training well: with 150k steps, it has only reached 8% precision and 25% recall. The learning rate is also not a smooth graph. What kind of learning rate graph should I expect, and what other things can I try to improve my training?</p>
<pre><code> optimizer {
rms_prop_optimizer {
learning_rate {
exponential_decay_learning_rate {
initial_learning_rate: 0.004000000189989805
decay_steps: 800720
decay_factor: 0.949999988079071
}
}
momentum_optimizer_value: 0.8999999761581421
decay: 0.8999999761581421
epsilon: 1.0
}
}
</code></pre>
<p><a href="https://i.stack.imgur.com/epJgR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/epJgR.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/QWmc8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QWmc8.png" alt="enter image description here"></a><a href="https://i.stack.imgur.com/YDbWa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YDbWa.png" alt="enter image description here"></a><a href="https://i.stack.imgur.com/mMkVI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mMkVI.png" alt="enter image description here"></a></p> | 2020-04-08 00:48:17.860000+00:00 | 2020-04-20 13:40:03.917000+00:00 | 2020-04-08 04:42:25.683000+00:00 | python|tensorflow|computer-vision|object-detection | ['https://i.stack.imgur.com/k79do.png'] | 1 |
17,202,375 | <p>crosspost of this: <a href="https://scicomp.stackexchange.com/questions/1681/what-is-the-fastest-way-to-calculate-the-largest-eigenvalue-of-a-general-matrix/7487#7487">https://scicomp.stackexchange.com/questions/1681/what-is-the-fastest-way-to-calculate-the-largest-eigenvalue-of-a-general-matrix/7487#7487</a></p>
<p>There has been some good research on this recently. The new approaches use "randomized algorithms" which only require a few reads of your matrix to get good accuracy on the largest eigenvalues. This is in contrast to power iterations which require several matrix-vector multiplications to reach high accuracy.</p>
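<p>If you want a feel for how little is involved, here is a rough NumPy sketch of that randomized approach (the basic randomized range-finder followed by a small SVD, as in the Halko/Martinsson/Tropp line of work linked below); it is an illustration, not a tuned implementation:</p>
<pre><code>import numpy as np

def randomized_svd(A, k, oversample=10, n_power_iter=2, seed=0):
    """Approximate top-k SVD of A using a random projection."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Random projection captures (most of) the dominant column space of A
    Y = A @ rng.standard_normal((n, k + oversample))
    # A couple of power iterations sharpen the subspace estimate
    # (re-orthonormalizing between steps would be more stable)
    for _ in range(n_power_iter):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # Small SVD on the projected problem, then map back
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# For PCA, center the columns first; s**2 / (n_samples - 1) gives the
# variances along the principal components.
X = np.random.standard_normal((2000, 500))
U, s, Vt = randomized_svd(X - X.mean(axis=0), k=10)
</code></pre>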
<p>You can read more about the new research here:</p>
<p><a href="http://math.berkeley.edu/~strain/273.F10/martinsson.tygert.rokhlin.randomized.decomposition.pdf" rel="nofollow noreferrer">http://math.berkeley.edu/~strain/273.F10/martinsson.tygert.rokhlin.randomized.decomposition.pdf</a></p>
<p><a href="http://arxiv.org/abs/0909.4061" rel="nofollow noreferrer">http://arxiv.org/abs/0909.4061</a></p>
<p>This code will do it for you:</p>
<p><a href="http://cims.nyu.edu/~tygert/software.html" rel="nofollow noreferrer">http://cims.nyu.edu/~tygert/software.html</a></p>
<p><a href="https://bitbucket.org/rcompton/pca_hgdp/raw/be45a1d9a7077b60219f7017af0130c7f43d7b52/pca.m" rel="nofollow noreferrer">https://bitbucket.org/rcompton/pca_hgdp/raw/be45a1d9a7077b60219f7017af0130c7f43d7b52/pca.m</a></p>
<p><a href="http://code.google.com/p/redsvd/" rel="nofollow noreferrer">http://code.google.com/p/redsvd/</a></p>
<p><a href="https://cwiki.apache.org/MAHOUT/stochastic-singular-value-decomposition.html" rel="nofollow noreferrer">https://cwiki.apache.org/MAHOUT/stochastic-singular-value-decomposition.html</a></p>
<p>If your language of choice isn't in there you can roll your own randomized SVD pretty easily; it only requires a matrix vector multiplication followed by a call to an off-the-shelf SVD.</p> | 2013-06-19 22:20:12.580000+00:00 | 2013-06-19 22:20:12.580000+00:00 | 2017-04-13 12:53:54.717000+00:00 | null | 12,287,796 | <p>I would like to perform a Principal Component Analysis on a dataset composed of approximately 40 000 samples, each sample displaying about 10 000 features.</p>
<p>Using Matlab's princomp function takes ages ... What would be the fastest algorithm? How long would it take on an i7 dual core / 4GB RAM?</p>
<p>Thanks for your support</p> | 2012-09-05 18:42:23.990000+00:00 | 2013-06-19 22:20:12.580000+00:00 | null | data-mining | ['https://scicomp.stackexchange.com/questions/1681/what-is-the-fastest-way-to-calculate-the-largest-eigenvalue-of-a-general-matrix/7487#7487', 'http://math.berkeley.edu/~strain/273.F10/martinsson.tygert.rokhlin.randomized.decomposition.pdf', 'http://arxiv.org/abs/0909.4061', 'http://cims.nyu.edu/~tygert/software.html', 'https://bitbucket.org/rcompton/pca_hgdp/raw/be45a1d9a7077b60219f7017af0130c7f43d7b52/pca.m', 'http://code.google.com/p/redsvd/', 'https://cwiki.apache.org/MAHOUT/stochastic-singular-value-decomposition.html'] | 7 |
25,983,827 | <p>Facebook doesn't provide an endpoint that returns such a list.</p>
<p>We (Hangtime) calculate the list ourselves using the data from the Facebook stream and analysis of the mutual friend graph.</p>
<p>A simple algorithm can get you most of the way there. Getting it to give good results across a wide variety of users can be difficult depending on how you define a suggested friend in your app. It can also require a fairly large number of API calls for each user.</p>
<p>The most basic version is to sum up the number of interactions between a user and each friend on the Facebook stream and do a descending sort. Facebook published a paper recently with an interesting algorithm using only social graph analysis: <a href="http://arxiv.org/pdf/1310.6753.pdf" rel="nofollow">Romantic Partnerships and the Dispersion of Social Ties:
A Network Analysis of Relationship Status on Facebook</a></p> | 2014-09-22 22:05:32.817000+00:00 | 2014-10-16 00:35:30.983000+00:00 | 2014-10-16 00:35:30.983000+00:00 | null | 25,965,856 | <p>I saw in Hangtime app or DrinkAdvisor app, when I used Facebook to register & login, and in Friends tab, I can see list of Suggested Friends, I guess this list include people that I interract often, ie: like their photos, chat with them ...
I'm not sure how they can get that list?
Would you please help?</p> | 2014-09-22 02:19:13.540000+00:00 | 2014-10-16 00:35:30.983000+00:00 | null | ios|facebook|friend | ['http://arxiv.org/pdf/1310.6753.pdf'] | 1 |
51,954,940 | <p>From their published paper at <a href="https://arxiv.org/pdf/1801.04381.pdf" rel="nofollow noreferrer">MobileNetV2: Inverted Residuals and Linear Bottlenecks</a>,</p>
<p>under subtopic number 5: Implementation Notes, 5.1. Memory efficient inference;</p>
<blockquote>
<p>The inverted residual bottleneck layers allow a particularly
memory efficient implementation which is very
important for mobile applications. (and more in paper)</p>
</blockquote>
<p>According to the TensorFlow team, it's optimized to be smaller in size and can also be used as TF Lite. As far as we know, TF Lite is indeed meant for mobile use. It's much slower on a desktop GPU probably because V2 has more conv layers compared to V1, which makes sense if the training takes more time to finish. For now, we didn't do training and inferencing of data on mobile because of the hunger for computational speed, which leads to power hunger as well.</p>
<p>Hope I answer the question.</p> | 2018-08-21 18:48:10.903000+00:00 | 2018-08-21 18:48:10.903000+00:00 | null | null | 50,385,735 | <p>I am studying about Google's brandnew <code>MobileNetV2</code> architecture.</p>
<p>During studying, I've read this string at Tensorflow model zoo Github</p>
<p>'For example Mobilenet V2 is faster on mobile devices than Mobilenet V1, but is slightly slower on desktop GPU.'</p>
<p>So, my question is,</p>
<p>How that could be possible? I really want to know why.</p> | 2018-05-17 07:31:02.033000+00:00 | 2019-03-31 09:50:54.373000+00:00 | 2019-03-31 09:50:54.373000+00:00 | tensorflow|mobile|gpu | ['https://arxiv.org/pdf/1801.04381.pdf'] | 1 |
55,395,294 | <p>From <a href="https://arxiv.org/abs/1903.08469v1" rel="noreferrer">https://arxiv.org/abs/1903.08469v1</a> :</p>
<p>"However, MobileNet V2 uses <strong>depthwise separable convolutions which are not directly supported in GPU firmware</strong> (the cuDNN library). Therefore, MobileNet V2 tends to be slower than ResNet18 in most experimental setups. Note that the same issue disqualifies usage of the DenseNet architecture [12], since it requires efficient convolution over a non-contiguous tensor, which is still not supported in cuDNN."</p> | 2019-03-28 10:26:39.877000+00:00 | 2019-03-28 10:26:39.877000+00:00 | null | null | 50,385,735 | <p>I am studying about Google's brandnew <code>MobileNetV2</code> architecture.</p>
<p>During studying, I've read this string at Tensorflow model zoo Github</p>
<p>'For example Mobilenet V2 is faster on mobile devices than Mobilenet V1, but is slightly slower on desktop GPU.'</p>
<p>So, my question is,</p>
<p>How that could be possible? I really want to know why.</p> | 2018-05-17 07:31:02.033000+00:00 | 2019-03-31 09:50:54.373000+00:00 | 2019-03-31 09:50:54.373000+00:00 | tensorflow|mobile|gpu | ['https://arxiv.org/abs/1903.08469v1'] | 1 |
35,459,892 | <p>The other answers are interesting and fairly good, but I believe that I can provide some additional elements of answer, point per point:</p>
<ul>
<li>Is it worth the effort? Well, if you need to sort small collections of integers and the sorting networks are tuned to take advantage of some instructions as much as possible, it might be worth the effort. The following graph presents the results of sorting a million arrays of <code>int</code> of size 0-14 with different sorting algorithms. As you can see, the sorting networks can provide a significant speedup if you really need it.</li>
</ul>
<p><img src="https://camo.githubusercontent.com/474b4e969d8043f0b8d83b073ff77bcd032b916a/68747470733a2f2f692e696d6775722e636f6d2f476152486e39782e706e67" alt=""></p>
<ul>
<li><p>No standard implementation of <code>std::sort</code> I know of uses sorting networks; when they are not fine-tuned, they might be slower than a straight insertion sort. libc++'s <code>std::sort</code> has dedicated algorithms to sort 0 thru 5 values at once, but it doesn't use sorting networks either. The only sorting algorithm I know of which uses sorting networks to sort a few values is <a href="https://github.com/BonzaiThePenguin/WikiSort" rel="noreferrer">Wikisort</a>. That said, the research paper <a href="http://arxiv.org/abs/1505.01962" rel="noreferrer"><em>Applying Sorting Networks to Synthesize Optimized Sorting Libraries</em></a> suggests that sorting networks could be used to sort small arrays or to improve recursive sorting algorithms such as quicksort, but only if they are fine-tuned to take advantage of specific hardware instructions.</p>
<p>The <a href="https://github.com/khegeman/floki" rel="noreferrer">access aligned sort</a> algorithm is some kind of bottom-up mergesort that apparently uses bitonic sorting networks implemented with SIMD instructions for the first pass. Apparently, the algorithm could be faster than the standard library one for some scalar types.</p></li>
<li><p>I can actually provide such information for the simple reason that I developed <a href="https://github.com/Morwenn/cpp-sort" rel="noreferrer">a C++14 sorting library</a> that happens to provide efficient sorting networks of size 0 thru 32 that implement the optimizations described in the previous section. I used it to generate the graph in the first section. I am still working on the sorting networks part of the library to provide size-optimal, depth-optimal and swaps-optimal networks. Small optimal sorting networks are found with brute force while bigger sorting networks use results from the litterature.</p>
<p>Note that none of the sorting algorithms in the library directly use sorting networks, but you can adapt them so that a sorting network will be picked whenever the sorting algorithm is given a small <code>std::array</code> or a small fixed-size C array:</p>
<pre><code>using namespace cppsort;
// Sorters are function objects that can be
// adapted with sorter adapters from the
// library
using sorter = small_array_adapter<
std_sorter,
sorting_network_sorter
>;
// Now you can use it as a function
sorter sort;
// Instead of a size-agnostic sorting algorithm,
// sort will use an optimal sorting network for
// 5 inputs since the bound of the array can be
// deduced at compile time
int arr[] = { 2, 4, 7, 9, 3 };
sort(arr);
</code></pre>
<p>As mentioned above, the library provides efficient sorting networks for built-in integers, but you're probably out of luck if you need to sort small arrays of something else (<em>e.g.</em> my latest benchmarks show that they are not better than a straight insertion sort even for <code>long long int</code>).</p></li>
<li><p>You could probably use template metaprogramming to generate sorting networks of any size, but no known algorithm can generate the best sorting networks, so you might as well write the best ones by hand. I don't think the ones generated by simple algorithms can actually provide usable and efficient networks anyway (Batcher's odd-even sort and pairwise sorting networks might be the only usable ones) <em>[Another answer seems to show that generated networks could actually work]</em>.</p></li>
</ul> | 2016-02-17 14:47:16.480000+00:00 | 2016-04-26 14:11:50.013000+00:00 | 2016-04-26 14:11:50.013000+00:00 | null | 19,790,522 | <p>I have some performance critical code that involves sorting a very short fixed-length array with between around 3 and 10 elements in C++ (the parameter changes at compile time).</p>
<p>It occurred to me that a static sorting network specialised to each possible input size would perhaps be a very efficient way to do this: We do all the comparisons necessary to figure out which case we are in, then do the optimal number of swaps to sort the array. </p>
<p>To apply this, we use a bit of template magic to deduce the array length and apply the correct network:</p>
<pre><code>#include <iostream>
using namespace std;
template< int K >
void static_sort(const double(&array)[K])
{
cout << "General static sort\n" << endl;
}
template<>
void static_sort<3>(const double(&array)[3])
{
cout << "Static sort for K=3" << endl;
}
int main()
{
double array[3];
// performance critical code.
// ...
static_sort(array);
// ...
}
</code></pre>
<p>Obviously it's quite a hassle to code all this up, so:</p>
<ul>
<li>Does anyone have any opinions on whether or not this is worth the effort?</li>
<li>Does anyone know if this optimisation exists in any standard implementations of, for example, std::sort? </li>
<li>Is there an easy place to get hold of code implementing this kind of sorting network?</li>
<li>Perhaps it would be possible to generate a sorting network like this statically using template magic..</li>
</ul>
<p>For now I just use insertion sort with a static template parameter (as above), in the hope that it will encourage unrolling and other compile-time optimisations.</p>
<p>Your thoughts welcome.</p>
<hr>
<p><strong><em>Update:</em></strong>
I wrote some testing code to compare a 'static' insertion sort and std::sort. (When I say static, I mean that the array size is fixed and deduced at compile time, presumably allowing loop unrolling etc.)
I get at least a 20% NET improvement (note that the generation is included in the timing). Platform: clang, OS X 10.9.</p>
<p>The code is here <a href="https://github.com/rosshemsley/static_sorting">https://github.com/rosshemsley/static_sorting</a> if you would like to compare it to your implementations of stdlib.</p>
<p>I have still yet to find a nice set of implementations for comparator network sorters.</p>
<hr> | 2013-11-05 13:48:07.460000+00:00 | 2020-06-09 23:50:38.137000+00:00 | 2015-12-02 01:41:57.020000+00:00 | c++|arrays|sorting|template-meta-programming|sorting-network | ['https://github.com/BonzaiThePenguin/WikiSort', 'http://arxiv.org/abs/1505.01962', 'https://github.com/khegeman/floki', 'https://github.com/Morwenn/cpp-sort'] | 4 |
48,851,435 | <p>I've increased the learning rate to <code>0.01</code> and it seems like the network is able to learn something (I get <code>Epoch 1000/5000- 0s - loss: 0.2330</code>).</p>
<p>I think it's worth noting the following from the abstract of the original <a href="https://arxiv.org/pdf/1502.03167.pdf" rel="nofollow noreferrer">Batch Normalization</a> paper:</p>
<blockquote>
<p>Batch Normalization allows us to use much higher learning rates and
be less careful about initialization. It also acts as a regularizer (...)</p>
</blockquote>
<p>That hinted at an increased learning rate (that's something you might want to experiment with).</p>
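<p>For reference, a minimal sketch of that change against the compile step from the question's own code (only the learning rate differs):</p>
<pre><code>import keras.optimizers

# same model, loss and data as in the question, only the learning rate is raised
opti = keras.optimizers.Adam(lr=0.01)   # was 0.0001
model.compile(loss='mean_absolute_error', optimizer=opti, metrics=['acc'])
model.fit(normx, normy, epochs=5000, verbose=2, batch_size=128)
</code></pre>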
<p>Be aware that since it works like regularization, BatchNorm <strong>should</strong> make your training loss worse - it's supposed to prevent overfitting and thus close the gap between the train and test/valid errors.</p> | 2018-02-18 12:03:39.200000+00:00 | 2018-02-18 12:03:39.200000+00:00 | null | null | 48,842,172 | <p>I'm new to keras and have been experimenting with various things such as BatchNormalization but it is not working at all. When the BatchNormalization line is commented out it will converge to around 0.04 loss or better, but with it as it is it will converge to 0.71 and get stuck around there, I'm not sure what's wrong.</p>
<pre><code>from sklearn import preprocessing
from sklearn.datasets import load_boston
from keras.models import Model
from keras.layers import Input, Dense
from keras.layers.normalization import BatchNormalization
import keras.optimizers
boston = load_boston()
x = boston.data
y = boston.target
normx = preprocessing.scale(x)
normy = preprocessing.scale(y)
# doesnt construct output layer
def layer_looper(inputs, number_of_loops, neurons):
inputs_copy = inputs
for i in range(number_of_loops):
inputs_copy = Dense(neurons, activation='relu')(inputs_copy)
inputs_copy = BatchNormalization()(inputs_copy)
return inputs_copy
inputs = Input(shape = (13,))
x = layer_looper(inputs, 40, 20)
predictions = Dense(1, activation='linear')(x)
model = Model(inputs=inputs, outputs=predictions)
opti = keras.optimizers.Adam(lr=0.0001)
model.compile(loss='mean_absolute_error', optimizer=opti, metrics=['acc'])
print(model.summary())
model.fit(normx, normy, epochs=5000, verbose=2, batch_size=128)
</code></pre>
<p>I have tried experimenting with batch sizes and the optimizer but it doesn't seem very effective. Am I doing something wrong?</p> | 2018-02-17 14:06:54.087000+00:00 | 2018-02-18 12:03:39.200000+00:00 | null | keras|batch-normalization | ['https://arxiv.org/pdf/1502.03167.pdf'] | 1 |
52,807,117 | <p>So, after lots of Googling around, I figured out that I, in fact, can't use clustering techniques, because I lack feature variables on which I can cluster the words. A table noting how often each word occurs together with the other words (in effect a Cartesian product) is really an adjacency matrix, and clustering doesn't work well on it.</p>
<p>So, the solution I was looking for is graph community detection. I used the igraph library (via its Python wrapper, python-igraph) to find the clusters, and it runs very well and fast.</p>
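<p>For illustration, a minimal sketch with python-igraph on the word lists from the question (the Louvain method <code>community_multilevel</code> is just one of several detection algorithms you could pick):</p>
<pre><code>from itertools import combinations
import igraph as ig

baskets = [['apple', 'banana'],
           ['apple', 'orange'],
           ['banana', 'orange'],
           ['rice', 'potatoes', 'orange'],
           ['potatoes', 'rice']]

# one edge per pair of words that appear together in a list
edges = [pair for basket in baskets for pair in combinations(basket, 2)]

g = ig.Graph.TupleList(edges)
g.simplify()                           # merge duplicate edges
clusters = g.community_multilevel()    # Louvain community detection
for cluster in clusters:
    print([g.vs[i]['name'] for i in cluster])
</code></pre>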
<p>More information:</p>
<ul>
<li>similar question: <a href="https://stats.stackexchange.com/questions/142297/finding-natural-groups-clusters-in-an-undirected-graph-over-several-undirect">https://stats.stackexchange.com/questions/142297/finding-natural-groups-clusters-in-an-undirected-graph-over-several-undirect</a></li>
<li>Community detection in graphs paper: <a href="https://arxiv.org/pdf/0906.0612.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/0906.0612.pdf</a></li>
<li>Basic description of various algorithms: <a href="https://stackoverflow.com/questions/9471906/what-are-the-differences-between-community-detection-algorithms-in-igraph">What are the differences between community detection algorithms in igraph?</a></li>
</ul> | 2018-10-14 21:08:09.693000+00:00 | 2018-10-14 21:08:09.693000+00:00 | null | null | 52,764,682 | <p>Let's say I have a list of lists of words, for example</p>
<pre><code>[['apple','banana'],
['apple','orange'],
['banana','orange'],
['rice','potatoes','orange'],
['potatoes','rice']]
</code></pre>
<p>The set is much bigger. I want to cluster the words so that words which usually occur together end up in the same cluster. So in this case the clusters will be <code>['apple', 'banana', 'orange']</code> and <code>['rice','potatoes']</code>.<br>
What is the best approach to archive this kind of clustering?</p> | 2018-10-11 16:16:45.070000+00:00 | 2018-10-25 10:07:16.937000+00:00 | 2018-10-11 16:22:48.543000+00:00 | python|machine-learning|cluster-analysis|information-retrieval | ['https://stats.stackexchange.com/questions/142297/finding-natural-groups-clusters-in-an-undirected-graph-over-several-undirect', 'https://arxiv.org/pdf/0906.0612.pdf', 'https://stackoverflow.com/questions/9471906/what-are-the-differences-between-community-detection-algorithms-in-igraph'] | 3 |
56,911,875 | <p>The <code>include_top=False</code> may be used because the last 3 layers (for that specific model) are fully connected layers which are not typically good feature vectors. If the model directly outputs a feature vector, then you don't need it.</p>
<p>Most people use the last layer for transfer learning, but it may depend on your application. For example, <a href="https://arxiv.org/abs/1508.06576" rel="nofollow noreferrer">Gatys et. al.</a> show that the first few layers of VGG are sensitive to the style of the image and later layers are sensitive to the content. </p>
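<p>As a hedged sketch of what picking a layer looks like in code (the layer name <code>block4_pool</code> follows the standard Keras VGG19 naming):</p>
<pre><code>import tensorflow as tf

base = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
# expose any intermediate layer's output as the feature
extract_model = tf.keras.Model(base.input, base.get_layer('block4_pool').output)
# extract_model.predict(preprocessed_images) yields feature maps that you can
# pool/flatten and compare, e.g. with cosine similarity
</code></pre>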
<p>I would probably try all of them in a hyperparameter search and see which gives the best performance. If by image similarity you mean the similarity of objects contained inside, I would probably start with the last layer.</p> | 2019-07-06 06:44:09.553000+00:00 | 2019-07-06 06:56:08.377000+00:00 | 2019-07-06 06:56:08.377000+00:00 | null | 56,911,622 | <p>Now, I want feature of image to compute their similarity. We can get feature using pre-trained VGG19 model in tensorflow easily. But VGG19 model has many layers, and I don't know which layer should I use to get feature. Which layer's output is appropriate for this problem?</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

# I think this is how to correctly extract features
model = tf.keras.applications.VGG19(include_top=True,
                                    weights='imagenet')
input = model.input
output = model.layers[-2].output
extract_model = tf.keras.Model(input, output)
</code></pre>
<p>My guess is that the closer a layer is to the final output, the more powerful the features the model produces. But some tutorials say 'use <code>include_top=False</code> to extract features' (e.g. <a href="https://www.tensorflow.org/beta/tutorials/text/image_captioning" rel="nofollow noreferrer">Image Captioning with Attention TensorFlow</a>)</p>
<p>So, I don't know which layer should I use. Please try to help me here in this thread.</p> | 2019-07-06 05:53:12.033000+00:00 | 2019-07-06 07:15:02.833000+00:00 | 2019-07-06 07:15:02.833000+00:00 | python|tensorflow|machine-learning|tf.keras | ['https://arxiv.org/abs/1508.06576'] | 1 |
49,731,169 | <p>You can use a CNN:</p>
<ul>
<li>The input is then not 3 * w * h but (3 * number of images) * w * h - so you can just concatenate the frames along the depth (channel) dimension; see the sketch after this list</li>
<li>The output is just an image instead of a class. So no flattening in between... or a reshape has to be added.</li>
</ul>
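<p>A minimal Keras sketch of that idea (the frame count and image size are made-up placeholders):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

n_frames, h, w = 4, 64, 64                          # assumed sizes
inputs = layers.Input(shape=(h, w, 3 * n_frames))   # past frames stacked in depth
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
outputs = layers.Conv2D(3, 3, padding='same', activation='sigmoid')(x)  # next frame
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
</code></pre>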
<p>Have a look at <a href="https://arxiv.org/abs/1605.06211" rel="nofollow noreferrer">
Fully Convolutional Networks for Semantic Segmentation</a> and Image-to-Image translation.</p>
<p>If you haven't seen it already: <a href="https://keras.io" rel="nofollow noreferrer">Keras</a> is pretty handy.</p>
<p>You might also be interested in the concept of <a href="https://en.wikipedia.org/wiki/Optical_flow" rel="nofollow noreferrer">Optical Flow</a></p> | 2018-04-09 10:47:36.663000+00:00 | 2018-04-09 11:05:57.890000+00:00 | 2018-04-09 11:05:57.890000+00:00 | null | 49,714,969 | <p>I am planing to predict the next image from an image sequence. I have searched on the internet (Google/YouTube) for tutorials and for similar work. but I couldn't find any.
I want to know whether it is possible to find the pattern and predict the next image and can I find some tutorials for that.</p> | 2018-04-08 06:07:06.953000+00:00 | 2018-04-09 11:05:57.890000+00:00 | 2018-04-09 10:54:05.133000+00:00 | image-processing|machine-learning|artificial-intelligence | ['https://arxiv.org/abs/1605.06211', 'https://keras.io', 'https://en.wikipedia.org/wiki/Optical_flow'] | 3 |
7,989,837 | <p>You can try this:
(this is from <a href="http://arxiv.org/PS_cache/cond-mat/pdf/0308/0308217v1.pdf" rel="nofollow">http://arxiv.org/PS_cache/cond-mat/pdf/0308/0308217v1.pdf</a>, Pg 5).</p>
<p>We will give weights to the vertices, starting with s, based on the number of shortest paths from s to the vertex, and also set their distance based on the number of edges they are away from s.</p>
<ol>
<li>Start with w(s) = 1 and d(s) = 0;</li>
<li>Every vertex i adjacent to s is given weight w(i) = w(s) = 1 and distance d(i) = d(s) + 1 = 1;</li>
<li><p>For each vertex j adjacent to one of those vertices i:
if d(j) is not yet defined, set d(j) = d(i) + 1 and w(j) = w(i) /<em>that is, assign the weight of the parent to j</em>/;
else if d(j) = d(i) + 1, then set w(j) = w(j) + w(i).</p></li>
<li><p>Repeat step 3 till t is reached. Then w(t) gives the number of shortest paths from s to t (see the sketch below for an implementation).</p></li>
</ol>
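<p>A small Python sketch of those steps (<code>adj</code>, a dict mapping each vertex to its neighbours, is an assumption about how the graph is stored):</p>
<pre><code>from collections import deque

def count_shortest_paths(adj, s, t):
    dist = {s: 0}
    weight = {s: 1}                        # w(v): number of shortest s->v paths
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:              # first time v is reached
                dist[v] = dist[u] + 1
                weight[v] = weight[u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:   # another shortest path into v
                weight[v] += weight[u]
    return weight.get(t, 0)
</code></pre>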
<p>Accordingly, the paper explains why this is linear.</p> | 2011-11-03 03:07:33.077000+00:00 | 2011-11-03 03:07:33.077000+00:00 | null | null | 7,989,046 | <p>Wouldn't it keep finding t if we start at s?</p>
<blockquote>
<p>Give a linear-time algorithm that takes as input a directed acyclic graph G = (V,E) and two vertices s and t, and returns the number of paths from s to t in G.</p>
</blockquote>
<hr>
<p>solution:</p>
<blockquote>
<p>The basic idea here is to start at vertex t, and use depth-first search in reverse direction until we reach vertex s. Each vertex maintains a counter which indicates the number of unique reverse paths found from vertex t.</p>
<ol>
<li>Initialize counters to 0 for all vertices.</li>
<li>Start depth-first-search in reverse direction using vertex t as a root.</li>
<li>For each edge (u, y) examined in the breadth-first search. <code>Counter(v) = max{ Counter(v) + 1, Counter(v) + Counter(u) }</code></li>
<li>Return Counter(s).</li>
</ol>
</blockquote> | 2011-11-03 00:37:49.303000+00:00 | 2011-11-03 18:26:10.367000+00:00 | 2011-11-03 02:00:00.450000+00:00 | algorithm|depth-first-search | ['http://arxiv.org/PS_cache/cond-mat/pdf/0308/0308217v1.pdf'] | 1 |
<p>As a complement to <a href="https://stackoverflow.com/a/41270097/6281477">Prune's answer</a>, during testing the batch normalization layer will use the average <code>mean/variance</code> values accumulated over the training iterations to normalize its input (subtract the mean and divide by the standard deviation), together with the learned <code>scale/shift</code> parameters.</p>
<p>And the original <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">Google batch normalization paper</a> only said that it should be a moving average method, without providing a more thorough explanation. Both caffe and tensorflow use an exponential moving average method.</p>
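<p>To make the two schemes concrete, a toy sketch (not the actual framework code) of how a batch statistic could be averaged either way:</p>
<pre><code>def exponential_moving_average(batch_means, momentum=0.99):
    avg = batch_means[0]
    for m in batch_means[1:]:
        avg = momentum * avg + (1.0 - momentum) * m   # what caffe/tensorflow do
    return avg

def simple_moving_average(batch_means):
    return sum(batch_means) / len(batch_means)        # plain average over iterations
</code></pre>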
<p>In my experience, a <a href="https://en.wikipedia.org/wiki/Moving_average#Simple_moving_average" rel="nofollow noreferrer">simple moving average method</a> usually better than an exponential moving average method, as far as to the validation accuracy(Maybe it need more experiments). For a compare, you can refer to <a href="https://github.com/knsong/caffe-mt" rel="nofollow noreferrer">here</a>(I tried the two moving average methods implementations in <code>channel_wise_bn_layer</code>, compared with the <code>batch norm</code> layer in <a href="https://github.com/BVLC/caffe" rel="nofollow noreferrer">BVLC/caffe</a>). </p> | 2016-12-22 01:51:01.233000+00:00 | 2016-12-22 01:51:01.233000+00:00 | 2017-05-23 11:46:08.043000+00:00 | null | 41,269,570 | <p>Recently, many deep architectures use "batch normalization" for training.</p>
<p>What is "batch normalization"? What does it do mathematically? In what way does it help the training process?</p>
<p>How is batch normalization used during training? is it a special layer inserted into the model? Do I need to normalize before each layer, or only once? </p>
<p>Suppose I used batched normalization for training. Does this affect my test-time model? Should I replace the batch normalization with some other/equivalent layer/operation in my "deploy" network?</p>
<hr>
<p><a href="https://stackoverflow.com/q/29979251/1714410">This question</a> about batch normalization only covers part of this question, I was aiming and hoping for a more detailed answer. More specifically, I would like to know how training with batch normalization affect test time prediction, i.e., the "deploy" network and the TEST phase of the net.</p> | 2016-12-21 18:30:03.507000+00:00 | 2022-03-16 19:48:01.177000+00:00 | 2017-09-23 05:56:56.683000+00:00 | machine-learning|neural-network|deep-learning|normalization|caffe | ['https://stackoverflow.com/a/41270097/6281477', 'https://arxiv.org/abs/1502.03167', 'https://en.wikipedia.org/wiki/Moving_average#Simple_moving_average', 'https://github.com/knsong/caffe-mt', 'https://github.com/BVLC/caffe'] | 5 |
50,326,888 | <p>It is not commonly used, though I found <a href="https://arxiv.org/pdf/1603.09025.pdf" rel="nofollow noreferrer">this paper</a> from 2017 shows a way to use batch normalization in the input-to-hidden and the hidden-to-hidden transformations trains faster and generalizes better on some problems.</p> | 2018-05-14 09:17:06.360000+00:00 | 2021-03-25 01:46:51.457000+00:00 | 2021-03-25 01:46:51.457000+00:00 | null | 45,493,384 | <p>I know in regular neural nets people use batch norm before activation and it will reduce the reliance on good weight initialization. I wonder if it would do the same to RNN/lstm RNN when i use it. Does anyone have any experience with it?</p> | 2017-08-03 19:50:55.467000+00:00 | 2021-03-25 01:46:51.457000+00:00 | 2021-03-25 01:27:00.590000+00:00 | machine-learning|deep-learning | ['https://arxiv.org/pdf/1603.09025.pdf'] | 1 |
56,263,233 | <p>The answer is Yes and No.</p>
<p>Why yes? The <a href="https://arxiv.org/pdf/1607.06450.pdf" rel="nofollow noreferrer">layer normalization</a> paper clearly indicates the usage of BN in RNNs.</p>
<p>Why no? The distribution of the output at each timestep has to be stored and calculated to conduct BN. Imagine that you pad the sequence input so all examples have the same length; if the prediction case is longer than all training cases, at some time step you have no mean/std of the output distribution summarized from the SGD training procedure.</p>
<p>Meanwhile, at least in Keras, I believe the BN layer only considers normalization in the vertical direction, i.e., the sequence output. The horizontal direction, i.e., the hidden state and cell state, is not normalized. Correct me if I am wrong here.</p>
<p>In multiple-layer RNNs, you may consider using layer normalization tricks.</p> | 2019-05-22 18:52:43.160000+00:00 | 2019-05-22 18:52:43.160000+00:00 | null | null | 45,493,384 | <p>I know in regular neural nets people use batch norm before activation and it will reduce the reliance on good weight initialization. I wonder if it would do the same to RNN/lstm RNN when i use it. Does anyone have any experience with it?</p> | 2017-08-03 19:50:55.467000+00:00 | 2021-03-25 01:46:51.457000+00:00 | 2021-03-25 01:27:00.590000+00:00 | machine-learning|deep-learning | ['https://arxiv.org/pdf/1607.06450.pdf'] | 1 |
<p>No, you cannot use Batch Normalization on a recurrent neural network, as the statistics are computed per batch, which does not take the recurrent part of the network into account. Weights are shared in an RNN, and the activation response for each "recurrent loop" might have completely different statistical properties.</p>
<p>Other techniques similar to Batch Normalization that take these limitations into account have been developed, for example <a href="https://arxiv.org/abs/1607.06450" rel="noreferrer">Layer Normalization</a>. There are also reparametrizations of the LSTM layer that allow Batch Normalization to be used, for example as described in <a href="https://arxiv.org/abs/1603.09025" rel="noreferrer">Recurrent Batch Normalization by Coijmaans et al. 2016.</a></p> | 2017-08-03 22:10:28.003000+00:00 | 2019-06-12 13:31:15.017000+00:00 | 2019-06-12 13:31:15.017000+00:00 | null | 45,493,384 | <p>I know in regular neural nets people use batch norm before activation and it will reduce the reliance on good weight initialization. I wonder if it would do the same to RNN/lstm RNN when i use it. Does anyone have any experience with it?</p> | 2017-08-03 19:50:55.467000+00:00 | 2021-03-25 01:46:51.457000+00:00 | 2021-03-25 01:27:00.590000+00:00 | machine-learning|deep-learning | ['https://arxiv.org/abs/1607.06450', 'https://arxiv.org/abs/1603.09025'] | 2 |
64,047,528 | <p>According to the <a href="https://arxiv.org/help/api/user-manual#query_details" rel="nofollow noreferrer">arXiv documentation</a>, there is no <code>published</code> or <code>date</code> field available.</p>
<p>What you can do is to <a href="https://arxiv.org/help/api/user-manual#sort" rel="nofollow noreferrer">sort the results</a> by date (by adding <code>&sortBy=submittedDate&sortOrder=descending</code> to your query parameters) and stop making requests when you reach 2018.</p>
<p>Basically your code should be modified like this:</p>
<pre class="lang-py prettyprint-override"><code>import urllib.request
import time
import feedparser
# Base api query url
base_url = 'http://export.arxiv.org/api/query?';
# Search parameters
search_query = urllib.parse.quote("ti:machine learning")
i = 0
results_per_iteration = 1000
wait_time = 3
papers = []
year = ""
print('Searching arXiv for %s' % search_query)
while (year != "2018"): #stop requesting when papers date reach 2018
print("Results %i - %i" % (i,i+results_per_iteration))
query = 'search_query=%s&start=%i&max_results=%i&sortBy=submittedDate&sortOrder=descending' % (search_query,
i,
results_per_iteration)
# perform a GET request using the base_url and query
response = urllib.request.urlopen(base_url+query).read()
# parse the response using feedparser
feed = feedparser.parse(response)
# Run through each entry, and print out information
for entry in feed.entries:
#print('arxiv-id: %s' % entry.id.split('/abs/')[-1])
#print('Title: %s' % entry.title)
#feedparser v4.1 only grabs the first author
#print('First Author: %s' % entry.author)
paper = {}
paper["date"] = entry.published
year = paper["date"][0:4]
paper["title"] = entry.title
paper["first_author"] = entry.author
paper["summary"] = entry.summary
papers.append(paper)
# Sleep a bit before calling the API again
print('Bulk: %i' % 1)
i += results_per_iteration
time.sleep(wait_time)
</code></pre>
<p>For the "post-filtering" approach, once enough results are collected, I'd do something like this:</p>
<pre class="lang-py prettyprint-override"><code>papers2019 = [item for item in papers if item["date"][0:4] == "2019"]
</code></pre> | 2020-09-24 13:28:29.830000+00:00 | 2020-09-24 15:53:07.813000+00:00 | 2020-09-24 15:53:07.813000+00:00 | null | 64,047,299 | <p>I'm using the code shown below in order to retrieve papers from arXiv. I want to retrieve papers that have words "machine" and "learning" in the title. The number of papers is large, therefore I want to implement a slicing by year (<code>published</code>).</p>
<p>How can I request records of 2020 and 2019 in <code>search_query</code>? Please notice that I'm not interested in post-filtering.</p>
<pre><code>import urllib.request
import time
import feedparser
# Base api query url
base_url = 'http://export.arxiv.org/api/query?';
# Search parameters
search_query = urllib.parse.quote("ti:machine learning")
start = 0
total_results = 5000
results_per_iteration = 1000
wait_time = 3
papers = []
print('Searching arXiv for %s' % search_query)
for i in range(start,total_results,results_per_iteration):
print("Results %i - %i" % (i,i+results_per_iteration))
query = 'search_query=%s&start=%i&max_results=%i' % (search_query,
i,
results_per_iteration)
# perform a GET request using the base_url and query
response = urllib.request.urlopen(base_url+query).read()
# parse the response using feedparser
feed = feedparser.parse(response)
# Run through each entry, and print out information
for entry in feed.entries:
#print('arxiv-id: %s' % entry.id.split('/abs/')[-1])
#print('Title: %s' % entry.title)
#feedparser v4.1 only grabs the first author
#print('First Author: %s' % entry.author)
paper = {}
paper["date"] = entry.published
paper["title"] = entry.title
paper["first_author"] = entry.author
paper["summary"] = entry.summary
papers.append(paper)
# Sleep a bit before calling the API again
print('Bulk: %i' % 1)
time.sleep(wait_time)
</code></pre> | 2020-09-24 13:16:32.127000+00:00 | 2020-09-24 15:53:07.813000+00:00 | null | python|api|urllib|feedparser | ['https://arxiv.org/help/api/user-manual#query_details', 'https://arxiv.org/help/api/user-manual#sort'] | 2 |
40,852,966 | <ol>
<li>Do we need to handle <code>row_pooling_sequence</code> and <code>col_pooling_sequence</code>?</li>
</ol>
<p>Even though the <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#fractional_max_pool" rel="noreferrer"><code>tf.nn.fractional_max_pool</code> documentation</a> says it returns 2 extra tensors which are needed to calculate the gradient, I believe we <strong>do not</strong> need to specially handle these 2 extra tensors or add them into the gradient calculation operation. The backpropagation of <code>tf.nn.fractional_max_pool</code> in TensorFlow is already registered into the gradient calculation flow by the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/nn_grad.py" rel="noreferrer"><code>_FractionalMaxPoolGrad</code> function</a>. As you can see in <code>_FractionalMaxPoolGrad</code>, the <code>row_pooling_sequence</code> and <code>col_pooling_sequence</code> are extracted by <code>op.outputs[1]</code> and <code>op.outputs[2]</code> and used to calculate the gradient.</p>
<pre><code>@ops.RegisterGradient("FractionalMaxPool")
def _FractionalMaxPoolGrad(op, grad_0, unused_grad_1, unused_grad_2):
"""..."""
return gen_nn_ops._fractional_max_pool_grad(op.inputs[0], op.outputs[0],
grad_0, op.outputs[1],
op.outputs[2],
op.get_attr("overlapping"))
</code></pre>
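<p>So from the caller's side you can simply ignore the two sequences, e.g. (a sketch against the TF 1.x API referenced above; the <code>pooling_ratio</code> values are only an example):</p>
<pre><code>import tensorflow as tf

# x: a [batch, height, width, channels] float tensor
h_pool, _, _ = tf.nn.fractional_max_pool(
    x,
    pooling_ratio=[1.0, 1.44, 1.44, 1.0],   # ~sqrt(2) downsizing of height/width
    pseudo_random=True,
    overlapping=True)
# the registered _FractionalMaxPoolGrad takes care of the sequences on backprop
</code></pre>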
<ol start="2">
<li>Possible reasons for poorer performance after using <code>fractional_max_pool</code> (in my personal opinions). </li>
</ol>
<p>In the <a href="https://arxiv.org/pdf/1412.6071v4.pdf" rel="noreferrer">fractional max pooling paper</a>, the author used fractional max pooling in a <a href="https://arxiv.org/pdf/1409.6070v1.pdf" rel="noreferrer">spatially-sparse convolutional network</a>. According to his spatially-sparse convolutional network design, he actually extended the image input spatial size by padding zeros. Additionally, fractional max pooling downsizes the input by a factor of <code>pooling_ratio</code> which is often less than 2. These two combined together allowed stacking more convolutional layers than using regular max pooling and hence building a deeper network. (i.e. imagine using CIFAR-10 dataset, the (non-padding) input spatial size is 32x32, the spatial size drops to 4x4 after 3 convolutional layers and 3 max pooling operations. If using fractional max pooling with <code>pooling_ratio=1.4</code>, the spatial size drops to 4x4 after 6 convolutional and 6 fractional max pooling layers). I experimented with building a CNN with 2-conv-layer+2-pooling-layer(regular max pool vs. fractional max pool with <code>pooling_ratio=1.47</code>)+2-fully-connected-layer on MNIST dataset. The one using regular max pooling also produced a better performance than the one using fractional max pooling (down by 15~20% performance). Comparing the spatial size before feeding into fully connected layers, the model with regular max pooling has spatial size of 7x7, the one with fractional max pooling has spatial size of 12x12. Adding one more conv+fractional_max_pool into the latter model (final spatial size dropped to be 8x8) improved the performance to a more comparative level with the former model with regular max pooling. </p>
<p>In summary, I personally think the good performance in the Fractional Max-Pooling paper is achieved by a combination of using spatially-sparse CNN with fractional max-pooling and small filters (and network in network) which enable building a deep network even when the input image spatial size is small. Hence in regular CNN network, simply replace regular max pooling with fractional max pooling does not necessarily give you a better performance.</p> | 2016-11-28 20:35:10.710000+00:00 | 2016-11-28 20:42:10.397000+00:00 | 2016-11-28 20:42:10.397000+00:00 | null | 39,886,715 | <p>When using the function <code>tf.nn.fractional_max_pool</code> in Tensorflow, in addition to the output pooled tensor it returns, it also returns a <code>row_pooling_sequence</code> and a <code>col_pooling_sequence</code>, which I presume is used in backpropagation to find the gradient of. This is in contrast to the normal $2 \times 2$ max pooling, which just returns the pooled tensor.</p>
<p>My question is: do we have to handle the row_pooling and col_pooling values ourselves? How would we include them into a network to get backpropagation working properly? I modified a simple convolutional neural network to use fractional max pooling instead of 2 x 2 max pooling without making use of these values and the results were much poorer, leading me to believe we must explicitly handle these. </p>
<p>Here's the relevant portion of my code that makes use of the FMP:</p>
<pre><code>def add_layer_ops_FMP(conv_func, x_input, W, keep_prob_layer, training_phase):
h_conv = conv_func(x_input, W, stride_l = 1)
h_BN = batch_norm(h_conv, training_phase, epsilon)
h_elu = tf.nn.elu(h_BN) # Rectified unit layer - change accordingly
def dropout_no_training(h_elu=h_elu):
return dropout_op(h_elu, keep_prob = 1.0)
def dropout_in_training(h_elu=h_elu, keep_prob_layer=keep_prob_layer):
return dropout_op(h_elu, keep_prob = keep_prob_layer)
h_drop = tf.cond(training_phase, dropout_in_training, dropout_no_training)
h_pool, row_pooling_sequence, col_pooling_sequence = tf.nn.fractional_max_pool(h_drop) # FMP layer. See Ben Graham's paper
return h_pool
</code></pre>
<p><a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.fractional_max_pool.md" rel="nofollow">Link to function on github.</a></p> | 2016-10-06 02:47:21.433000+00:00 | 2016-11-28 20:42:10.397000+00:00 | null | machine-learning|tensorflow | ['https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#fractional_max_pool', 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/nn_grad.py', 'https://arxiv.org/pdf/1412.6071v4.pdf', 'https://arxiv.org/pdf/1409.6070v1.pdf'] | 4 |
<p><a href="https://stackoverflow.com/a/44225502/5728089">Salvador Dali's answer</a> already explains the differences between some popular methods (i.e. optimizers), but I will try to elaborate on them some more.<br/>
(Note that our answers disagree about some points, especially regarding ADAGRAD.)</p>
<h3>Classical Momentum (CM) vs Nesterov's Accelerated Gradient (NAG)</h3>
<p>(Mostly based on section 2 in the paper <a href="http://proceedings.mlr.press/v28/sutskever13.pdf" rel="noreferrer">On the importance of initialization and momentum in deep learning</a>.)<br/></p>
<p>Each step in both CM and NAG is actually composed of two sub-steps:</p>
<ul>
<li>A momentum sub-step - This is simply a fraction (typically in the range <code>[0.9,1)</code>) of the last step.</li>
<li>A gradient dependent sub-step - This is like the usual step in <a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Iterative_method" rel="noreferrer">SGD</a> - it is the product of the learning rate and the vector opposite to the gradient, while the gradient is computed where this sub-step starts from.</li>
</ul>
<p>CM takes the gradient sub-step first, while NAG takes the momentum sub-step first.</p>
<p>Here is a demonstration from <a href="https://stats.stackexchange.com/a/368179/215801">an answer about intuition for CM and NAG</a>:<br/></p>
<p><a href="https://i.stack.imgur.com/pAwIf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pAwIf.png" alt="CM vs NAG example"></a></p>
<p>So NAG seems to be better (at least in the image), but why?<br/></p>
<p>The important thing to note is that it doesn't matter when the momentum sub-step comes - it would be the same either way. Therefore, we might as well behave is if the momentum sub-step has already been taken.<br/></p>
<p>Thus, the question actually is: Assuming that the gradient sub-step is taken after the momentum sub-step, should we calculate the gradient sub-step as if it started in the position before or after taking the momentum sub-step?<br/></p>
<p>"After it" seems like the right answer, as generally, the gradient at some point <code>θ</code> roughly points you in the direction from <code>θ</code> to a minimum (with the relatively right magnitude), while the gradient at some other point is less likely to point you in the direction from <code>θ</code> to a minimum (with the relatively right magnitude).<br/></p>
<p>Here is a demonstration (from the gif below):</p>
<p><a href="https://i.stack.imgur.com/T7Ori.png" rel="noreferrer"><img src="https://i.stack.imgur.com/T7Ori.png" alt="CM vs NAG in a specific moment in the awesome gif"></a></p>
<ul>
<li>The minimum is where the star is, and the curves are <a href="https://en.wikipedia.org/wiki/Contour_line" rel="noreferrer">contour lines</a>. (For an explanation about contour lines and why they are perpendicular to the gradient, see videos <a href="https://www.youtube.com/watch?v=WsZj5Rb6do8" rel="noreferrer">1</a> and <a href="https://www.youtube.com/watch?v=ZTbTYEMvo10" rel="noreferrer">2</a> by the legendary <a href="https://www.youtube.com/c/3blue1brown" rel="noreferrer">3Blue1Brown</a>.)</li>
<li>The (long) purple arrow is the momentum sub-step.</li>
<li>The transparent red arrow is the gradient sub-step if it starts before the momentum sub-step.</li>
<li>The black arrow is the gradient sub-step if it starts after the momentum sub-step.</li>
<li>CM would end up in the target of the dark red arrow.</li>
<li>NAG would end up in the target of the black arrow.</li>
</ul>
<p>Note that this argument for why NAG is better is independent of whether the algorithm is close to a minimum.<br/>
In general, both NAG and CM often have the problem of accumulating more momentum than is good for them, so whenever they should change direction, they have an embarrassing "response time". The advantage of NAG over CM that we explained doesn't prevent the problem, but only makes the "response time" of NAG less embarrassing (but embarrassing still).<br/></p>
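<p>In code, the difference is only where the gradient is evaluated (a toy sketch, not any library's API; <code>grad</code> is a function returning the gradient at a point):</p>
<pre><code>def cm_step(theta, velocity, grad, lr=0.01, mu=0.9):
    """Classical momentum: gradient sub-step evaluated at theta itself."""
    velocity = mu * velocity - lr * grad(theta)
    return theta + velocity, velocity

def nag_step(theta, velocity, grad, lr=0.01, mu=0.9):
    """Nesterov: gradient sub-step evaluated after the momentum sub-step."""
    lookahead = theta + mu * velocity
    velocity = mu * velocity - lr * grad(lookahead)
    return theta + velocity, velocity
</code></pre>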
<p>This "response time" problem is beautifully demonstrated in the gif by <a href="https://twitter.com/AlecRad" rel="noreferrer">Alec Radford</a> (which appeared in <a href="https://stackoverflow.com/a/44225502/5728089">Salvador Dali's answer</a>):<br/>
<a href="https://i.stack.imgur.com/ihlS2.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/ihlS2.gif" alt="An example of the embarrassing response time of both momentum methods"></a></p>
<h3>ADAGRAD</h3>
<p>(Mostly based on section 2.2.2 in <a href="https://arxiv.org/abs/1212.5701" rel="noreferrer">ADADELTA: An Adaptive Learning Rate Method</a> (the original ADADELTA paper), as I find it much more accessible than <a href="http://www.jmlr.org/papers/v12/duchi11a.html" rel="noreferrer">Adaptive Subgradient Methods for Online Learning and Stochastic Optimization</a> (the original ADAGRAD paper).)<br/></p>
<p>In <a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Iterative_method" rel="noreferrer">SGD</a>, the step is given by <code>- learning_rate * gradient</code>, while <code>learning_rate</code> is a hyperparameter.<br/>
ADAGRAD also has a <code>learning_rate</code> hyperparameter, but the actual learning rate for each component of the gradient is calculated individually.<br/>
The <code>i</code>-th component of the <code>t</code>-th step is given by:<br/></p>
<pre><code> learning_rate
- --------------------------------------- * gradient_i_t
norm((gradient_i_1, ..., gradient_i_t))
</code></pre>
<p>while:</p>
<ul>
<li><code>gradient_i_k</code> is the <code>i</code>-th component of the gradient in the <code>k</code>-th step</li>
<li><code>(gradient_i_1, ..., gradient_i_t)</code> is a vector with <code>t</code> components. This isn't intuitive (at least to me) that constructing such a vector makes sense, but that's what the algorithm does (conceptually).</li>
<li><code>norm(vector)</code> is the <a href="https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm" rel="noreferrer">Euclidean norm</a> (aka <code>l2</code> norm) of <code>vector</code>, which is our intuitive notion of the length of <code>vector</code>.</li>
<li>Confusingly, in ADAGRAD (as well as in some other methods) the expression that is multiplied by <code>gradient_i_t</code> (in this case, <code>learning_rate / norm(...)</code>) is often called "the learning rate" (in fact, I called it "the actual learning rate" in the previous paragraph). I guess this is because in <a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Iterative_method" rel="noreferrer">SGD</a> the <code>learning_rate</code> hyperparameter and this expression are one and the same.</li>
<li>In a real implementation, some constant would be added to the denominator, to prevent a division by zero.</li>
</ul>
<p>E.g. if:</p>
<ul>
<li>The <code>i</code>-th component of the gradient in the first step is <code>1.15</code></li>
<li>The <code>i</code>-th component of the gradient in the second step is <code>1.35</code></li>
<li>The <code>i</code>-th component of the gradient in the third step is <code>0.9</code></li>
</ul>
<p>Then the norm of <code>(1.15, 1.35, 0.9)</code> is the length of the yellow line, which is:<br/>
<code>sqrt(1.15^2 + 1.35^2 + 0.9^2) = 1.989</code>.<br/>
And so the <code>i</code>-th component of the third step is: <code>- learning_rate / 1.989 * 0.9</code><br/></p>
<p><a href="https://i.stack.imgur.com/pjyp7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pjyp7.png" alt="l2 norm example"></a></p>
<p>Note two things about the <code>i</code>-th component of the step:</p>
<ol>
<li>It is proportional to <code>learning_rate</code>.</li>
<li>In the calculations of it, the norm is increasing, and so the learning rate is decreasing.</li>
</ol>
<p>This means that ADAGRAD is sensitive to the choice of the hyperparameter <code>learning_rate</code>.<br/>
In addition, it might be that after some time the steps become so small, that ADAGRAD virtually gets stuck.</p>
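<p>A toy sketch of the per-component update described above (with the small constant in the denominator):</p>
<pre><code>import numpy as np

def adagrad(theta, grads, learning_rate=0.1, eps=1e-8):
    """`grads` is a list of gradient vectors, one per step (a sketch)."""
    accum = np.zeros_like(theta, dtype=float)   # sum of squared gradients, per component
    for g in grads:
        accum += g ** 2                         # squared norm((g_i_1, ..., g_i_t))
        theta = theta - learning_rate / (np.sqrt(accum) + eps) * g
    return theta
</code></pre>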
<h3>ADADELTA and RMSProp</h3>
<p>From the <a href="https://arxiv.org/abs/1212.5701" rel="noreferrer">ADADELTA paper</a>:</p>
<blockquote>
<p>The idea presented in this paper was derived from ADAGRAD in order to improve upon the two main drawbacks
of the method: 1) the continual decay of learning rates
throughout training, and 2) the need for a manually selected
global learning rate.</p>
</blockquote>
<p>The paper then explains an improvement that is meant to tackle the first drawback:</p>
<blockquote>
<p>Instead of accumulating the sum of squared gradients over all
time, we restricted the window of past gradients that are accumulated
to be some fixed size <code>w</code> [...]. This ensures that learning continues
to make progress even after many iterations of updates have
been done.<br/>
Since storing <code>w</code> previous squared gradients is inefficient,
our methods implements this accumulation as an exponentially
decaying average of the squared gradients.</p>
</blockquote>
<p>By "exponentially decaying average of the squared gradients" the paper means that for each <code>i</code> we compute a weighted average of all of the squared <code>i</code>-th components of all of the gradients that were calculated.<br/>
The weight of each squared <code>i</code>-th component is bigger than the weight of the squared <code>i</code>-th component in the previous step.</p>
<p>This is an approximation of a window of size <code>w</code> because the weights in earlier steps are very small.</p>
<p>(When I think about an exponentially decaying average, I like to visualize a <a href="https://en.wikipedia.org/wiki/Comet" rel="noreferrer">comet's</a> trail, which becomes dimmer and dimmer as it gets further from the comet:<br/></p>
<p><a href="https://i.stack.imgur.com/UIzkI.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UIzkI.png" alt="a comet's trail as an intuition for a moving average"></a>)</p>
<p>If you make only this change to ADAGRAD, then you will get RMSProp, which is a method that was proposed by Geoff Hinton in <a href="http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf" rel="noreferrer">Lecture 6e of his Coursera Class</a>.</p>
<p>So in RMSProp, the <code>i</code>-th component of the <code>t</code>-th step is given by:<br/></p>
<pre><code> learning_rate
- ------------------------------------------------ * gradient_i_t
sqrt(exp_decay_avg_of_squared_grads_i + epsilon)
</code></pre>
<p>while:</p>
<ul>
<li><code>epsilon</code> is a hyperparameter that prevents a division by zero.</li>
<li><code>exp_decay_avg_of_squared_grads_i</code> is an exponentially decaying average of the squared <code>i</code>-th components of all of the gradients calculated (including <code>gradient_i_t</code>).</li>
</ul>
<p>But as aforementioned, ADADELTA also aims to get rid of the <code>learning_rate</code> hyperparameter, so there must be more stuff going on in it.<br/></p>
<p>In ADADELTA, the <code>i</code>-th component of the <code>t</code>-th step is given by:<br/></p>
<pre><code> sqrt(exp_decay_avg_of_squared_steps_i + epsilon)
- ------------------------------------------------ * gradient_i_t
sqrt(exp_decay_avg_of_squared_grads_i + epsilon)
</code></pre>
<p>while <code>exp_decay_avg_of_squared_steps_i</code> is an exponentially decaying average of the squared <code>i</code>-th components of all of the steps calculated (until the <code>t-1</code>-th step).<br/>
<code>sqrt(exp_decay_avg_of_squared_steps_i + epsilon)</code> is somewhat similar to momentum, and according to <a href="https://arxiv.org/abs/1212.5701" rel="noreferrer">the paper</a>, it "acts as an acceleration term". (The paper also gives another reason for why it was added, but my answer is already too long, so if you are curious, check out section 3.2.)</p>
<h3>Adam</h3>
<p>(Mostly based on <a href="https://arxiv.org/abs/1412.6980" rel="noreferrer">Adam: A Method for Stochastic Optimization</a>, the original Adam paper.)</p>
<p>Adam is short for Adaptive Moment Estimation (see <a href="https://stats.stackexchange.com/a/367987/215801">this answer</a> for an explanation about the name).<br/>
The <code>i</code>-th component of the <code>t</code>-th step is given by:<br/></p>
<pre><code> learning_rate
- ------------------------------------------------ * exp_decay_avg_of_grads_i
sqrt(exp_decay_avg_of_squared_grads_i) + epsilon
</code></pre>
<p>while:</p>
<ul>
<li><code>exp_decay_avg_of_grads_i</code> is an exponentially decaying average of the <code>i</code>-th components of all of the gradients calculated (including <code>gradient_i_t</code>).</li>
<li>Actually, both <code>exp_decay_avg_of_grads_i</code> and <code>exp_decay_avg_of_squared_grads_i</code> are also corrected to account for a bias toward <code>0</code> (for more about that, see section 3 in <a href="https://arxiv.org/abs/1412.6980" rel="noreferrer">the paper</a>, and also <a href="https://stats.stackexchange.com/a/234686/215801">an answer in stats.stackexchange</a>).</li>
</ul>
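<p>Putting the step formula and the two decaying averages together, a toy sketch (the default hyperparameters are the ones suggested in the paper):</p>
<pre><code>import numpy as np

def adam(theta, grads, learning_rate=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """`grads` is a list of gradient vectors, one per step (a sketch)."""
    m = np.zeros_like(theta, dtype=float)   # decaying avg of gradients
    v = np.zeros_like(theta, dtype=float)   # decaying avg of squared gradients
    for t, g in enumerate(grads, start=1):
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)        # bias corrections toward 0
        v_hat = v / (1 - beta2 ** t)
        theta = theta - learning_rate / (np.sqrt(v_hat) + eps) * m_hat
    return theta
</code></pre>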
<p>Note that Adam uses an exponentially decaying average of the <code>i</code>-th components of the gradients where most <a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Iterative_method" rel="noreferrer">SGD</a> methods use the <code>i</code>-th component of the current gradient. This causes Adam to behave like "a heavy ball with friction", as explained in the paper <a href="https://arxiv.org/abs/1706.08500" rel="noreferrer">GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium</a>.<br/>
See <a href="https://datascience.stackexchange.com/a/39224/56981">this answer</a> for more about how Adam's momentum-like behavior is different from the usual momentum-like behavior.</p> | 2018-09-24 06:56:40.883000+00:00 | 2018-10-05 12:47:15.473000+00:00 | 2018-10-05 12:47:15.473000+00:00 | null | 36,162,180 | <p>I'm studying <em>TensorFlow</em> and how to use it, even if I'm not an expert of neural networks and deep learning (just the basics).</p>
<p>Following tutorials, I don't understand the real and practical differences between the three optimizers for loss. I look at the <a href="https://www.tensorflow.org/versions/r0.7/api_docs/python/train.html#optimizers" rel="noreferrer">API</a> and I understand the principles, but my questions are:</p>
<p><strong>1. When is it preferable to use one instead of the others ?</strong></p>
<p><strong>2. Are there important differences to know ?</strong></p> | 2016-03-22 18:16:38.643000+00:00 | 2019-04-12 17:38:07.683000+00:00 | 2018-04-12 02:56:38.057000+00:00 | tensorflow|deep-learning | ['https://stackoverflow.com/a/44225502/5728089', 'http://proceedings.mlr.press/v28/sutskever13.pdf', 'https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Iterative_method', 'https://stats.stackexchange.com/a/368179/215801', 'https://i.stack.imgur.com/pAwIf.png', 'https://i.stack.imgur.com/T7Ori.png', 'https://en.wikipedia.org/wiki/Contour_line', 'https://www.youtube.com/watch?v=WsZj5Rb6do8', 'https://www.youtube.com/watch?v=ZTbTYEMvo10', 'https://www.youtube.com/c/3blue1brown', 'https://twitter.com/AlecRad', 'https://stackoverflow.com/a/44225502/5728089', 'https://i.stack.imgur.com/ihlS2.gif', 'https://arxiv.org/abs/1212.5701', 'http://www.jmlr.org/papers/v12/duchi11a.html', 'https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Iterative_method', 'https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm', 'https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Iterative_method', 'https://i.stack.imgur.com/pjyp7.png', 'https://arxiv.org/abs/1212.5701', 'https://en.wikipedia.org/wiki/Comet', 'https://i.stack.imgur.com/UIzkI.png', 'http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf', 'https://arxiv.org/abs/1212.5701', 'https://arxiv.org/abs/1412.6980', 'https://stats.stackexchange.com/a/367987/215801', 'https://arxiv.org/abs/1412.6980', 'https://stats.stackexchange.com/a/234686/215801', 'https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Iterative_method', 'https://arxiv.org/abs/1706.08500', 'https://datascience.stackexchange.com/a/39224/56981'] | 31 |
50,929,833 | <p>Okay. I know that you're more concerned about throughput from Keras and your hardware, but there are a few things I'd like to mention here:</p>
<ol>
<li><strong>smaller batch sizes gave me better results</strong></li>
</ol>
<p>Given your case, where the data is not huge, and assuming you're running the training for a fixed number of epochs (say 5), training with a smaller batch size is naturally expected to give you a slightly better result, as it means a higher number of back-prop steps overall compared to a larger batch size. If you're training for a fixed number of training steps instead, I don't know why this is happening.</p>
<ol start="2">
<li><strong>loss values will occasionally flip to NaN with batch sizes of 4</strong> </li>
</ol>
<p>Again, I'm assuming you're using batch-normalization here, with CNNs. While using BN, it's never actually recommended to use a smaller batch-size like 2 or 4 (or even 8). And probably, one of the reasons why you can be facing NaN with smaller batch-size is if you have low-variance in the current batch and if you take the epsilon value too small, you might have very small values that can lead to numerical instability going forward. But more generally, this might be a case of gradient instability like you mentioned. Consider using gradient clipping to see if it helps.</p>
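<p>In Keras, gradient clipping is just an optimizer argument, e.g. (a sketch; <code>model</code> stands for your own network and the loss is a placeholder):</p>
<pre><code>from keras import optimizers

opt = optimizers.Adam(lr=1e-4, clipnorm=1.0)   # or clipvalue=0.5 for element-wise clipping
model.compile(loss='categorical_crossentropy', optimizer=opt)
</code></pre>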
<ol start="3">
<li><strong>GPU Workflow</strong> </li>
</ol>
<p>Here, I assume that you have only 1 GPU. And unfortunately, you can't parallelise using single-GPU. To clarify, you shouldn't be concerned about the size of your data for GPU RAM. In most of the single-GPU cases, the current batch stays on the CPU and GPU would only take up the operations. Rather, you should be concerned about the size of parameters that GPU would be computing. Since for 1-layer experiment and 3-layers experiment your operations differ a lot, I don't think it's possible as you can't place multiple ops on same device simultaneously. The best case for you here would be to use a larger batch-size (not too large - as this would reduce the number of back-prop steps in case of training for fixed-epochs), so that you'd cover more data in a single-go.</p>
<p>Just a tip for hyper-parameter tuning: you can consider using <a href="https://arxiv.org/abs/1507.06228" rel="nofollow noreferrer">Highway-CNNs</a>. These are inspired by the gating mechanism of LSTMs: you specify a large number of hidden layers and the network figures out itself how to control the information flow among the layers. So in short, this practically eliminates the effort of tuning the depth of the network, letting you tune other hyper-params like the learning rate or filter sizes instead.</p>
<p>I hope at least some of this is relevant and helpful to you ;)</p> | 2018-06-19 13:35:53.857000+00:00 | 2018-06-19 13:35:53.857000+00:00 | null | null | 50,919,074 | <p>I am working on CNN models which are intended to predict a protein's structure from its amino acid sequence. I am implementing my CNN's in Keras. The Keras API is the one that comes bundled with TensorFlow 1.4.0, so obviously TensorFlow is my backend. I have installed the GPU version of TensorFlow, and I have verified that the GPU is being used. My GPU is somewhat older, an NVidia GTX 760.</p>
<p>When I perform 3X cross-validation to help select architectures and hyperparameters, I have 50K examples in my training folds and 25K samples in my validation folds. These are decently large data sets, however they're small in comparison to the RAM available in my computer (16 GB) or on my GPU (2 GB). Fully unpacked and expressed as float32 values, with redundancy introduced because of sliding windows, all the folds taken together, input plus target values, occupies 316 MB. I have pre-calculated my folds, and saved files of each fold to disk. When I experiment with architectures and hyperparameters, the same folds are being used in every trial.</p>
<p>I started with networks containing a single hidden layer to see what I could achieve, and then switched to two hidden layers. I used a fixed batch size of 64 for all of my early experiments. Training proceeded quickly enough that I didn't concern myself with speed. Performing a 3X cross-validation for a given architecture typically took about 12 minutes.</p>
<p>But in the last experiment that I did with two-layer networks, I decided to start investigating the effect of batch size. I learned that smaller batch sizes gave me better results, up to a point. Batch sizes of 8 were the smallest ones that I could count on not to crash. My loss values will occasionally flip to NaN with batch sizes of 4, and they will frequently flip to NaN with batch sizes of 1 or 2. After that occurs, the network becomes untrainable. I am aware of the possibility of gradient instability. I think I was getting some.</p>
<p>So why not just use batch sizes of 8 and keep going? The problem is speed. Using two hidden layers, batches of eight took me approximately 35 minutes to cross-validate. Batches of 64, as I mentioned above, took one third that much time. My first experiments with three hidden layers have taken 45 to 65 minutes per trial. And I want to investigate potentially hundreds of architectures and hyperparameters, using still deeper networks. With small batches, I can see that the batch-by-batch progress bar in Keras progresses more slowly. I can see much longer pauses when an epoch ends.</p>
<p>Yes, I can upgrade my GPU to a 10 series. I think that will only double my throughput at most? Yes, I can rent GPU time in the cloud. Eventually I might do that. But if my software is inefficient, I definitely don't want to set it loose in the cloud to burn my money.</p>
<p>It is my understanding (please correct me if I am wrong) that when the GPU is used in a normal TF / Keras work flow, each individual batch is sent separately from the GPU to the CPU. If I am training 50 networks in a 3X cross-validation scheme, this would mean that I'm sending the same data to my GPU 150 times. As I mentioned earlier, all my data occupies at most 316 MB, about 15% of the RAM available on the GPU. Can I devise a workflow which sends this 316 MB to the GPU <strong>once</strong>, and if so, will that have a useful impact on my throughput? Intuitively, it feels like it should.</p>
<p>Are there other bottlenecks I should be thinking about? Is there a way to profile TF or Keras operations?</p>
<p>Thanks for any advice you may have!</p> | 2018-06-19 00:24:34.303000+00:00 | 2018-06-19 13:35:53.857000+00:00 | null | tensorflow|keras|profiling|cross-validation | ['https://arxiv.org/abs/1507.06228'] | 1 |
30,706,578 | <p>You are actually implementing the idea published in this <a href="http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf" rel="nofollow noreferrer">paper</a>.</p>
<p>An (extended) sample code can be found at <a href="http://www.ics.uci.edu/~dramanan/software/pose/pose-release1.2-full.tgz" rel="nofollow noreferrer">UCI</a></p>
<p>To summarize:</p>
<ol>
<li>You have to generate a positive and a negative training set. This means in the positive training images you have to know where the players are located.</li>
<li>Then you have to extract the HoG features at the players' positions. Note that the original HoG method takes input patches of size 128x64, so ensure that your players are all scaled to the same size. And important: the HoG feature size depends on the extraction window size, so keep it fixed!
Store the information in a data structure with the corresponding label 1 (a sketch follows after this list).</li>
<li>Then extract negative features from negative images and store them with corresponding label 0 or -1.</li>
<li>Use some training method. I currently work with a linear support vector machine similar to liblinear: <a href="http://arxiv.org/abs/1312.1743" rel="nofollow noreferrer">SVM</a></li>
<li>Then use the test set to ensure you are getting correct results. For testing use a sliding window and slide it all over the image and score the extracted features. Take the best score, as it is most likely, that the player is located there.</li>
<li>If you want to detect several players in one image use non maximum suppression.</li>
</ol>
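<p>For step 2, the feature extraction boils down to something like the following (shown with OpenCV's Python bindings for brevity; the javacv API mirrors it, and the file name is a placeholder):</p>
<pre><code>import cv2

# fixed 64x128 detection window, as in the original HoG setup
hog = cv2.HOGDescriptor((64, 128), (16, 16), (8, 8), (8, 8), 9)

patch = cv2.imread('player_patch.png', cv2.IMREAD_GRAYSCALE)
patch = cv2.resize(patch, (64, 128))        # every sample scaled to the same size
features = hog.compute(patch).flatten()     # fixed-length descriptor
label = 1                                   # use 0 or -1 for negative patches
</code></pre>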
<p>Note: HoG features are quite difficult to handle, as small changes in extraction might have a great impact on performance. For example openCV ships with an (undocumented) HoG detector. <a href="http://www.juergenwiki.de/old_wiki/doku.php?id=public:hog_descriptor_computation_and_visualization" rel="nofollow noreferrer" title="HoG visualization">HoG visualization</a> helped me to understand how it works. </p>
<p><strong>EDIT: fixed HoG visualization link</strong></p> | 2015-06-08 10:20:22.827000+00:00 | 2017-01-28 21:55:37.433000+00:00 | 2017-01-28 21:55:37.433000+00:00 | null | 30,705,477 | <p>I'm trying to detect players in soccer game with javacv using HOG Descriptor. I already implemented the method with the default people detector, but, the results are not satisfying. So, I extracted positive and negative images and I want to extract features using this images.
Do anyone have any ideas on how to do this please? Thanks!</p> | 2015-06-08 09:25:48.470000+00:00 | 2017-01-28 21:55:37.433000+00:00 | null | java|eclipse|opencv | ['http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf', 'http://www.ics.uci.edu/~dramanan/software/pose/pose-release1.2-full.tgz', 'http://arxiv.org/abs/1312.1743', 'http://www.juergenwiki.de/old_wiki/doku.php?id=public:hog_descriptor_computation_and_visualization'] | 4 |
61,700,838 | <p>Note: there are many notions of "non-iid-ness" that may be interesting to explore.</p>
<ul>
<li><p><em>Label non-iid</em>: you might want to make the distribution of labels very unbalanced across clients. Evenly distributing the number of examples, we can still get non-iid distribution such as <code>[(35, 35), (10, 60), (50, 20), (45, 25)]</code>. The <a href="https://arxiv.org/abs/1602.05629" rel="nofollow noreferrer">McMahan 2016</a> paper takes a similar approach, but takes a problem with 10 classes and gives most clients only two classes (the exact method is on Page 5 of the paper).</p></li>
<li><p><em>Amount of data</em>: you might want to give some clients more data than others. With 280 examples, perhaps the split is <code>(180, 80, 10, 10)</code> examples (ignoring how the labels are distributed). The StackOverflow dataset in TensorFlow Federated also exhibits this, as some clients have tens of thousands of examples, while others only have 100.</p></li>
<li><p><em>Feature non-iid</em>: If there are patterns in the feature space, it may be useful to restrict certain patterns to certain users. For example, in an image recognition task, perhaps some camera had a different white balance, rotation, or color saturation than others (even though they have most or all labels). Instead of shuffling these randomly across synthetic clients, grouping the similar feature patterns into a single client can give a different form of non-iid data.</p></li>
</ul> | 2020-05-09 17:22:33.433000+00:00 | 2020-05-09 17:22:33.433000+00:00 | null | null | 61,419,057 | <p>I have 2 classes and every class has 140 examples, and I have 4 clients. I would like to create a non-iid dataset like the one in the McMahan paper. How do I divide the examples into fragments?</p> | 2020-04-24 23:37:48.163000+00:00 | 2020-05-09 17:22:33.433000+00:00 | null | tensorflow-federated | ['https://arxiv.org/abs/1602.05629'] | 1
48,571,811 | <blockquote>
<p>For example, when I inspect the CSR using openssl req -noout -text -in
sslcert.csr, the CSR generated by the first method contains much more
detailed information about the key, with a section for pub, Prime, A,
B, Generator, Order, Cofactor, Seed, but there is no mention of
secp521r1.</p>
</blockquote>
<p>These are called "domain parameters". They explicitly list the modulus, the coefficients, the generator, the public point, etc.</p>
<blockquote>
<p>However, the CSR generated by the second method contains only pub and
a ASN1 OID: secp521r1. Are these differences important if I'm creating
certificates for HTTPS use?</p>
</blockquote>
<p>A name like <em>ASN1 OID: secp521r1</em> is called a "named curve".</p>
<p>The IETF's PKIX and TLS working group say domain parameters and named curves <strong><em>are not</em></strong> the same thing. The PKIX group is responsible for the internet's PKI and certificates. There have been several discussions about this on the TLS working group mailing lists.</p>
<p>If you <strong><em>don't</em></strong> use a named curve, then your clients and servers will fail to connect to one another even if the domain parameters and named curves are equivalent. You will get errors similar to "no shared cipher suites", which will result in a TLS alert.</p>
<p>Here are the errors you get when using <code>s_client</code> and <code>s_server</code> during testing if you mix and match domain parameters with named curves:</p>
<p><em>Client (s_client)</em>:</p>
<pre><code>139925962778272:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1256:SSL alert number 40
139925962778272:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:s3_pkt.c:596:
</code></pre>
<p><em>Server (s_server)</em>:</p>
<pre><code>140339533272744:error:1408A0C1:SSL routines:SSL3_GET_CLIENT_HELLO:no shared cipher:s3_srvr.c:1353:
</code></pre>
<p>For interoperability you should <em>always</em> explicitly set <code>named_curve</code>. Also see <a href="https://wiki.openssl.org/index.php/Elliptic_Curve_Cryptography#Named_Curves" rel="nofollow noreferrer">Elliptic Curve Cryptography | Named Curves</a> on the OpenSSL wiki.</p>
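<p>If you generate the key and CSR programmatically, the pyca/cryptography library gives you the same named-curve behavior (as far as I know, it does not support explicit parameters at all). A minimal sketch, with a placeholder subject name:</p>
<pre class="lang-py prettyprint-override"><code>from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP521R1())
csr = (x509.CertificateSigningRequestBuilder()
       .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.com")]))
       .sign(key, hashes.SHA256()))
print(key.curve.name)                                         # secp521r1
print(csr.public_bytes(serialization.Encoding.PEM).decode())  # PEM-encoded CSR
</code></pre>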
<hr>
<blockquote>
<p>Both of these methods seem to create valid CSR files (I have tested them here).</p>
</blockquote>
<p>They kind of do, but they don't inter-operate well if they are mixed/matched.</p>
<hr>
<blockquote>
<p>For modern compatibility, I've gone with EC (<code>secp521r1</code>) certificates...</p>
</blockquote>
<p>I have not surveyed it recently, but <code>secp256r1</code> is (was?) the most popular one. <strike>That may have changed but I don't recall reading it anywhere. Maybe a scan of the Alexa top 1 Million will give you an idea or answer.</strike></p>
<p>The 2016 paper <a href="https://arxiv.org/pdf/1511.00341.pdf" rel="nofollow noreferrer">TLS in the wild: An Internet-wide analysis of TLS-based protocols for electronic communication</a> says:</p>
<blockquote>
<p>Looking at the elliptic curves that are used in ECDHE key exchanges
reveals that 97.2% of connections use the secp256r1 curve, followed
by 2% using secp384r1 and 0.78% using sect571r1.</p>
</blockquote> | 2018-02-01 21:12:59.007000+00:00 | 2018-02-01 22:13:20.667000+00:00 | 2018-02-01 22:13:20.667000+00:00 | null | 48,559,711 | <p>I'm creating CSRs for new certificates using OpenSSL. For modern compatibility, I've gone with EC (<code>secp521r1</code>) certificates. While googling around, I found two different ways of creating the CSR.</p>
<p><strong>I can create a private key explicitly</strong></p>
<pre><code>openssl ecparam -name secp521r1 -genkey -param_enc explicit -out private.key
openssl req -new -sha256 -nodes -key private.key -out sslcert.csr -config san.cnf
</code></pre>
<p><strong>or I can create a private key with the request</strong></p>
<pre><code>openssl ecparam -name secp521r1 > ec.file
openssl req -new -sha256 -nodes -newkey ec:ec.file -keyout private.key -out sslcert.csr -config san.cnf
</code></pre>
<p>Both of these methods seem to create valid CSR files (I have tested them <a href="http://www.entrust.net/ssl-technical/csr-viewer.cfm" rel="nofollow noreferrer">here</a>).</p>
<p>My question is whether one of the methods above is better/safer? I noticed that the private key file generated by the first method is larger, and so is the CSR file.</p>
<p>For example, when I inspect the CSR using <code>openssl req -noout -text -in sslcert.csr</code>, the CSR generated by the first method contains much more detailed information about the key, with a section for <code>pub</code>, <code>Prime</code>, <code>A</code>, <code>B</code>, <code>Generator</code>, <code>Order</code>, <code>Cofactor</code>, <code>Seed</code>, but there is no mention of <code>secp521r1</code>.</p>
<p>However, the CSR generated by the second method contains only <code>pub</code> and a <code>ASN1 OID: secp521r1</code>. Are these differences important if I'm creating certificates for HTTPS use?</p>
<p>Many thanks!</p> | 2018-02-01 09:48:58.643000+00:00 | 2018-02-01 22:13:20.667000+00:00 | 2018-02-01 21:15:54.530000+00:00 | ssl|https|openssl|elliptic-curve|csr | ['https://wiki.openssl.org/index.php/Elliptic_Curve_Cryptography#Named_Curves', 'https://arxiv.org/pdf/1511.00341.pdf'] | 2 |
50,183,865 | <p>With deep-learning models you typically have enough degrees of freedom for the model to learn structure in the feature space that's useful for predictions. If there are two groups with different characteristics (e.g. apes and humans), and knowing this group is useful in making a prediction, the model should be able to learn this.</p>
<p>Additionally, if your ultimate aim is to classify, it's common in deep-learning models to have a <a href="https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html?highlight=softmax#mxnet.ndarray.softmax" rel="nofollow noreferrer">softmax layer</a> as the output, which can be interpreted as a probability of a given class; the higher this probability is, the more confident you can be in the prediction. You should calibrate and evaluate this probability as suggested in <a href="https://arxiv.org/pdf/1706.04599.pdf" rel="nofollow noreferrer">this paper</a>.</p>
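<p>A rough sketch of what evaluating that calibration can look like (the labels and probabilities below are made up; in practice they would come from a held-out set):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn.calibration import calibration_curve

y_true = np.array([0, 1, 1, 0, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.8, 0.7, 0.3, 0.9, 0.6, 0.2, 0.95])  # predicted P(class 1)
frac_positive, mean_predicted = calibration_curve(y_true, y_prob, n_bins=4)
print(frac_positive, mean_predicted)  # well calibrated when these roughly match
</code></pre>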
<p>On the other hand, if you're looking to apply simpler models (e.g. linear models), you may want to perform unsupervised learning beforehand and include this as a categorical feature in your model. As suggested by Viacheslav, a clustering algorithm like K-Means could work for your dataset; otherwise you may want to look at Gaussian Mixture Models or DBSCAN.</p> | 2018-05-04 22:31:43.080000+00:00 | 2018-05-04 22:31:43.080000+00:00 | null | null | 44,717,363 | <p>I have a training set that looks something like this.</p>
<p>features:categorical/numerical</p>
<p>output:binary 1/0</p>
<pre><code>[1] feature[1][1] feature[1][2] ... feature[1][j]
[2] feature[2][1] feature[2][2] ... feature[2][j]
.
.
.
[i] feature[i][1] feature[i][2] ... feature[i][j]
</code></pre>
<p>Suppose some samples(row) have "good" value combinations that are likely to yield similar output, whereas others have "bad" value combinations thus difficult to predict.</p>
<p>My goal is to improve the final accuracy by getting rid of those bad samples which lack regularity. Can someone tell me what the best algorithm or preprocessing step would be to automatically detect those samples, so that only the good samples are used for training? Thank you in advance!</p>
<p>ENV: MXNet, R</p> | 2017-06-23 09:03:36.563000+00:00 | 2018-05-04 22:31:43.080000+00:00 | 2017-06-23 10:56:31.663000+00:00 | r|ruby|machine-learning|deep-learning|mxnet | ['https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html?highlight=softmax#mxnet.ndarray.softmax', 'https://arxiv.org/pdf/1706.04599.pdf'] | 2 |
64,912,865 | <p>There is mixed precision training, as described in the link you mentioned.
Nvidia has more in-depth information on what that means here:
<a href="https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html" rel="nofollow noreferrer">https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html</a></p>
<p>Here is the paper describing the actual process, as well as how and why an FP32 copy of the FP16 weights is kept as master weights (hence the mix of precisions):
<a href="https://arxiv.org/pdf/1710.03740.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1710.03740.pdf</a></p>
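<p>A quick way to see that "mix" in code (this uses the stable API from TF 2.4 onwards; TF 2.3 exposes the same thing under the <code>mixed_precision.experimental</code> module used in the question):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

mixed_precision.set_global_policy('mixed_float16')
dense = layers.Dense(8)
dense.build((None, 4))
print(dense.compute_dtype)   # float16: the matmul runs in half precision
print(dense.variable_dtype)  # float32: the (master) weights stay in full precision
</code></pre>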
<p>But there are also mixed precision operations in hardware; these can speed up your inference when your data is FP32 and your weights/biases are FP16. On hardware that supports mixed precision operations, this can accelerate inference a lot.</p>
<p>For example, with an Nvidia T4 I had a speedup of ~2x on YOLOv3, but no speedup on an Nvidia 1080.</p> | 2020-11-19 13:38:18.830000+00:00 | 2020-11-19 13:38:18.830000+00:00 | null | null | 63,716,372 | <p>I am not sure if I understand the idea of tensorflow keras <a href="https://www.tensorflow.org/guide/mixed_precision" rel="nofollow noreferrer">mixed precision</a>. My goal is to run a <code>tf.keras</code> model with floating point 16 precision to improve inference speed. Can this be done with mixed precision?</p>
<p>I am setting this policy before training my model:</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.mixed_precision import experimental as mixed_precision
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
</code></pre>
<p>Or this is just to speed-up training. If this is the case, how could I achieve weights/activations of my tf.keras model to have FP16 precision?</p>
<p><em>Note: I am using <code>tensorflow==2.3.0</code></em></p> | 2020-09-03 03:30:09.143000+00:00 | 2021-11-14 16:54:54.513000+00:00 | 2020-09-03 17:58:33.667000+00:00 | tensorflow|keras|tensorflow2.0|tf.keras|tensorflow2.x | ['https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html', 'https://arxiv.org/pdf/1710.03740.pdf'] | 2 |
60,372,559 | <p>There is a <a href="https://arxiv.org/pdf/1612.03467.pdf" rel="nofollow noreferrer">paper</a> which handles the case of equal-area tiling (almost square tiles around the equator) and makes it relatively easy to pre-compute neighbouring tiles and to determine which tile a specific set of coordinates falls in. It doesn't fare well with the requirement for equal distance between the vertices, though.</p>
<p>Copying here the abstract:</p>
<blockquote>
<p>A new method is proposed to divide a spherical surface into equal-area cells. The method is based on dividing a sphere into several latitudinal bands of near-constant span with further division of each band into equal-area cells. It is simple in construction and provides more uniform latitude step between latitudinal bands than other methods of isolatitudinal equal-area tessellation of a spherical surface.</p>
</blockquote>
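<p>A toy sketch of the general flavour (not the paper's exact construction): equal-area latitudinal bands correspond to equal steps in sin(latitude), and each band is then split into cells. A real equal-area scheme also varies the number of cells per band.</p>
<pre class="lang-py prettyprint-override"><code>import math

def cell_of(lat_deg, lon_deg, n_bands=8, cells_per_band=16):
    z = math.sin(math.radians(lat_deg))   # uniform in [-1, 1] under equal area
    band = min(int((z + 1.0) / 2.0 * n_bands), n_bands - 1)
    col = min(int((lon_deg % 360.0) / 360.0 * cells_per_band), cells_per_band - 1)
    return band, col

print(cell_of(48.8566, 2.3522))           # some (band, column) pair
</code></pre>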
<p>(I've used its ideas trying to find closest geolocation neighbours from a long list of locations).</p> | 2020-02-24 09:08:30.303000+00:00 | 2020-02-24 09:08:30.303000+00:00 | null | null | 749,264 | <p>Many strategy games use hexagonal tiles. One of the main advantages is that the distance between the center of any tile and all its neighboring tiles is the same.</p>
<p>I was wondering if anyone has any thoughts on marrying a hexagonal tile system with the traditional geographic system (longitude/latitude). I think it would be interesting to cover a globe with hexagonal tiles and be able to map a geographic coordinate to a tile.</p>
<p>Has anyone seen anything remotely close to this before?</p>
<p><strong>UPDATE</strong></p>
<p>I'm looking for a way to subdivide the surface of a sphere so that each division has the same surface area. Ideally, the centers of adjacent sub-divisions would be equidistant.</p> | 2009-04-14 20:34:55.467000+00:00 | 2020-02-24 09:08:30.303000+00:00 | 2009-04-14 20:52:15.690000+00:00 | math|coordinates|tesselation|hexagonal-tiles | ['https://arxiv.org/pdf/1612.03467.pdf'] | 1 |
56,560,189 | <p>What the <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">original paper</a> explains is how to reduce overfitting using Batch Normalization.</p>
<blockquote>
<p>Where should you splice the normalization when designing a network?</p>
</blockquote>
<p>Set the normalization early, on the inputs. Unbalanced, extreme input values can cause instability. </p>
<p>If you normalize only the outputs, that will not prevent the inputs from causing the instability all over again.</p>
<p>Here is a little code snippet that shows what BN does:</p>
<pre><code>import torch
import torch.nn as nn
m = nn.BatchNorm1d(100, affine=False)
input = 1000*torch.randn(3, 100)
print(input)
output = m(input)
print(output)
print(output.mean()) # should be ~ 0
print(output.std()) # should be ~ 1
</code></pre>
<blockquote>
<p>Does it make sense to normalize any time after you have a dense layer</p>
</blockquote>
<p>Yes, you may do so, as matrix multiplication may produce extreme values. The same applies after convolution layers, because these are also matrix multiplications, similar to but less intense than a dense (<code>nn.Linear</code>) layer. If you, for instance, print the resnet model, you will see that batch norms are set every time after the conv layer, like this:</p>
<pre><code>(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
</code></pre>
<p>To print the full resnet you may use this:</p>
<pre><code>import torchvision.models as models
r = models.resnet18()
print(r)
</code></pre> | 2019-06-12 10:39:23.970000+00:00 | 2019-07-10 22:29:36.887000+00:00 | 2019-07-10 22:29:36.887000+00:00 | null | 56,535,040 | <p>Where should you splice the normalization when designing a network? E.g. if you have a stacked Transformer or Attention network, does it make sense to normalize any time after you have a dense layer?</p> | 2019-06-11 00:39:37.340000+00:00 | 2019-07-10 22:29:36.887000+00:00 | null | neural-network|deep-learning|pytorch | ['https://arxiv.org/abs/1502.03167'] | 1 |
30,995,572 | <p>A few comments before answering the actual question:</p>
<ol>
<li>When using an RBF kernel, you really have to tune <code>gamma</code> to get good results. Only tuning misclassification penalties (<code>C</code> and <code>weights</code>) is not sufficient.</li>
<li>The <a href="http://optunity.readthedocs.org/en/latest/api/optunity.html" rel="nofollow">main API functions</a> are <code>optunity.maximize</code>, <code>optunity.minimize</code> and <code>optunity.optimize</code>, not the solver-specific methods you are using. Though both offer similar functionality, the API functions are probably easier to use.</li>
<li>For real tuning tasks, I strongly recommend to use the default particle swarm optimizer over grid search. You will get better results in far fewer function evaluations (= time).</li>
<li>It might be easier to use Optunity's cross-validation facilities instead of scikit-learn's. This is entirely optional, though. You can find more information about this <a href="http://optunity.readthedocs.org/en/latest/user/cross_validation.html" rel="nofollow">here</a>.</li>
<li>The hyperparameters <code>m</code> and <code>w</code> are somewhat redundant. You don't have to balance classes if you're going to optimize class weights. I would stop optimizing class balance (which necessarily means under- or over-sampling, i.e. changing your data).</li>
</ol>
<h1>The solution</h1>
<p>The function you specify for <code>optimize</code> has to be the objective function, that means the only arguments to this function must be the hyperparameters you want to optimize. For more information on this, please refer to <a href="http://arxiv.org/abs/1412.1114" rel="nofollow">Optunity's paper</a>. In your specific example, this means that the arguments should be <code>c</code>, <code>m</code> and <code>w</code>.</p>
<p>To fix <code>X</code> and <code>y</code>, you can use any of the standard Python approaches, such as <code>functools.partial</code> or closures. In my opinion, closures are the cleanest method:</p>
<pre><code>def fix_data(X_fixed, y_fixed):
def svm_acc(m, w, c):
X, y = balanceClasses(X_fixed, y_fixed, m)
clf = svm.SVC(kernel='rbf', C=c, class_weight = {1: w})
        scores = cross_validation.cross_val_score(clf, X, y, cv=5)  # cv must be passed by keyword
return( scores.mean() )
return svm_acc
</code></pre>
<p>The function, <code>fix_data</code> fixes a certain data set <code>X_fixed</code> and <code>y_fixed</code> and produces a function which only has the hyperparameters as arguments, as required. Then you can do something like this (assuming you've constructed the solver etc.):</p>
<pre><code>svm_acc_with_fixed_data = fix_data(X, y)
best_pars, _ = s.optimize(svm_acc_with_fixed_data)
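# Or, using the top-level API recommended in point 2 above (particle swarm is
# the default solver; the num_evals budget here is just an example), with your
# `import optunity as opt` alias:
# best = opt.maximize(svm_acc_with_fixed_data, num_evals=100,
#                     m=[1, 10], w=[1, 10], c=[1, 10])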
</code></pre> | 2015-06-23 06:41:27.617000+00:00 | 2015-06-23 06:48:22.813000+00:00 | 2015-06-23 06:48:22.813000+00:00 | null | 30,990,342 | <h2>Background</h2>
<p>I am using a support vector machine for binary classification on unbalanced classes (i.e. the ratio of positive to negative labels in my training set is ~100). I would like to optimize the following parameters: m (the ratio of positive to negative labels I sample from my training data), w (the class weight), and the SVM parameter C.</p>
<h2>Problem</h2>
<p>I would like to optimize these parameters by doing a gridsearch, and have defined the score function as follows:</p>
<pre><code>def svm_acc(X, y, m, w, c):
X, y = balanceClasses(X, y, m)
clf = svm.SVC(kernel='rbf', C=c, class_weight = {1: w})
scores = cross_validation.cross_val_score(clf, X, y, 5)
return( scores.mean() )
</code></pre>
<p>where X is the feature matrix, y are binary classification labels, and <code>svm_acc</code> returns the mean accuracy from 5-fold cross-validation. I have tried the following in optunity:</p>
<pre><code>import optunity as opt
s = opt.solvers.GridSearch(mult=[1,10], w=[1,10], c=[1,10])
best_pars, _ = s.optimize(svm_acc, X=X, y=y)
</code></pre>
<p>but I get this error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: optimize() got an unexpected keyword argument 'X'
</code></pre>
<p>I gathered from the documentation that <code>optimize</code> does not take any additional keyword arguments (<code>X</code> and <code>y</code>). I have tried different variations of the above but have not been able to figure out how to pass additional parameters that should not be optimized to the routine.</p>
<p>As far as I can see, I cannot use scikit-learn's gridsearch because I want to optimize the <code>m</code> parameter, which is not 'intrinsic' to the estimator. Could anybody point me to a solution or to other python packages for doing gridsearch?</p> | 2015-06-22 21:42:07.687000+00:00 | 2015-06-23 06:48:22.813000+00:00 | null | python|optimization|scikit-learn | ['http://optunity.readthedocs.org/en/latest/api/optunity.html', 'http://optunity.readthedocs.org/en/latest/user/cross_validation.html', 'http://arxiv.org/abs/1412.1114'] | 3 |
71,316,051 | <p>I have approached problems like this by using a fixed input-size CNN to do a simple classification and then called the CNN multiple times as you scan across your variable length sample (1-5 sec sound bite).</p>
<p>For example, let's say you create a CNN that inputs 0.2s of data, the input size is now fixed. You can compute a {0, 1} label for that 0.2s based on whether the center point of the sample is within an <em>event</em> as you defined in your question. You could try different input sizes using the same method.</p>
<p>Now you ask the CNN to make a prediction at every point in your 1-5 second sample. To start with you pass the CNN the first 0.2s of data, then step forward one or more data points (your step size is a hyper-parameter you can tune). Let's say your step size is 0.1s, your second step would produce a CNN classification using the data from 0.1s to 0.3s in your sample. Continue until you reach the end of your sample. You now have classifications across the sample. In principle you could get a classification at every data point so you have as many predictions as you have data points. A rolling median filter (see pandas) is a great way to smooth out the predictions.</p>
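<p>A small sketch of that windowing/labelling scheme (the names and sample rate below are made up; a window gets label 1 when its centre falls inside an annotated event):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def make_windows(signal, events, sr, win_s=0.2, step_s=0.1):
    win, step = int(win_s * sr), int(step_s * sr)
    X, y = [], []
    for start in range(0, len(signal) - win + 1, step):
        centre = (start + win // 2) / sr
        X.append(signal[start:start + win])
        y.append(int(any(onset <= centre <= offset for onset, offset in events)))
    return np.array(X), np.array(y)

sr = 16000
signal = np.random.randn(5 * sr)      # stand-in for a 5 s recording
events = [(1.2, 2.0), (3.5, 4.1)]     # labelled (onset, offset) pairs in seconds
X, y = make_windows(signal, events, sr)
</code></pre>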
<p>This is a very simple CNN to set up. You also benefit by increasing your training data quite a bit because each sound file is now many training samples. Your resolution for predictions is very granular with this method.</p>
<p>Here's a paper that describes the approach in greater depth (there's also a slightly earlier version on arXiv by the same title if that's pay walled for you), start reading at Section 3 onward:</p>
<p><a href="https://academic.oup.com/mnras/article/476/1/1151/4828364" rel="nofollow noreferrer">https://academic.oup.com/mnras/article/476/1/1151/4828364</a></p>
<p>In that paper we're working with 1D astronomy data, which is structured basically the same as 1D audio data, so the technique will apply. In that paper I'm doing a bit more than just classification, using the same technique I'm localizing zero or more events as well as characterizing those events (I would start with just the classification for your purposes). So you can see that this approach extends quite well. In fact even multiple events that partially overlap each other in time can be identified and extracted effectively.</p> | 2022-03-02 00:15:34.070000+00:00 | 2022-03-02 00:22:13.827000+00:00 | 2022-03-02 00:22:13.827000+00:00 | null | 71,311,977 | <p>I am trying to perform detection of a certain type of sound in audio files. These audio recordings have variable lengths and the type of sound that I want to detect is usually around 1~5 seconds long and I have the labels of the dataset (onset and offset of when events happen).</p>
<p>My initial approach was to treat it as a binary classification problem, where I compute the mel spectrogram every half a second (for example). I would label that spectrogram with a 0 if there wasn't an event in those 0.5s and with a 1 otherwise.</p>
<p>In what way could I fight this? I am trying to change it by passing 0.1 instead of 1 (assuming the previous example). Basically, labeling the percentage of the event happening in the image: labels in [0~1] instead of {0,1}.</p>
<p>Many thanks.</p> | 2022-03-01 16:52:48.223000+00:00 | 2022-03-08 09:18:56.163000+00:00 | 2022-03-08 09:18:56.163000+00:00 | deep-learning|conv-neural-network|regression|classification | ['https://academic.oup.com/mnras/article/476/1/1151/4828364'] | 1 |
63,972,026 | <p>Real datasets will have many <em>other</em> examples of contextual usage of all of <code>'beautiful'</code>, <code>'boy'</code>, and <code>'lady'</code>, each somewhat different.</p>
<p>In fact, in order for a dataset to train useful high-dimensional vectors, they must have many varied uses - & far more variety than your synthetic examples. As a result, you couldn't simply look at a real dataset, check that <code>'lady'</code> & <code>'beautiful'</code> appear near-each-other 5x more than <code>'boy'</code> & <code>'beautiful'</code>, & be sure that <code>'lady'</code> and <code>'beautiful'</code> will have more-similar word-vectors. Their actual positions will depend on all the <em>other</em> uses of those words, with all the <em>other</em> words in the corpus, which could override the simple relationship you've observed.</p>
<p>The value of word-vectors is simply in how well they perform on downstream tasks, and no set of word-vectors is best for all tasks. That is, there's no one 'right' way to create them - just a bunch of different algorithms, each also parameterizable, which might yield vectors better or worse for a particular purpose.</p>
<p>If you've decided, or have as the external constraint for some reason, that <code>'beautiful'</code> must be equidistant from both <code>'lady'</code> and <code>'boy'</code>, no matter what the training data teaches about usage, there are lots of things you could try.</p>
<p>Some are crude, like just patching the vectors at the very end. Others could add extra steps to training that maintain the desired relationship, or at least give it an advantage that the data & other parameters could still weaken, at every update.</p>
<p>As you seem to be interested in equalizing relationships between words of different genders, you'd likely be interested in a paper which highlighted gender-biases in word-vectors, and proposed ways to modify word-vectors in response:</p>
<p><a href="https://arxiv.org/abs/1607.06520" rel="nofollow noreferrer">"Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings"</a></p>
<p>Note, though, that this paper, and much other work noticing the same effects, has overclaimed the strength of such biases by overlooking that typical routines for solving analogies tend to <em>rule out</em> any word that appears as a prompt.</p>
<p>For example, consider the headline claim of the paper, that the <code>GoogleNews</code> vectors released by Google along with the original word2vec paper will answer the analogy...</p>
<p><code>man : computer_programmer :: woman : ___?___</code></p>
<p>with <code>homemaker</code>.</p>
<p>Using something like the original <code>word-analogy</code> tool in the Google <code>word2vec</code> release, or the similar functionality of the <code>.most_similar()</code> method of Gensim Python library for working with word-vectors, you can naively reproduce that claim:</p>
<pre class="lang-py prettyprint-override"><code>from gensim.models import KeyedVectors
gkv = KeyedVectors.load_word2vec_format(
'/home/gojomo/Downloads/GoogleNews-vectors-negative300.bin',
binary=True
)
gkv.most_similar(
positive=['computer_programmer', 'woman'],
negative=['man']
)
</code></pre>
<pre class="lang-py prettyprint-override"><code>[('homemaker', 0.5627118945121765),
('housewife', 0.5105046629905701),
('graphic_designer', 0.505180299282074),
('schoolteacher', 0.497949481010437),
('businesswoman', 0.493489146232605),
('paralegal', 0.49255111813545227),
('registered_nurse', 0.4907974600791931),
('saleswoman', 0.48816272616386414),
('electrical_engineer', 0.4797726571559906),
('mechanical_engineer', 0.4755399227142334)]
</code></pre>
<p>(To solve an analogy of the form <code>A : B :: C : _?_</code> using <code>.most_similar()</code>, the <code>B</code> & <code>C</code> should become <code>positive</code> examples, and <code>A</code> the <code>negative</code> example.)</p>
<p>But each of these analogy-solving routines has a coded-in assumption that the asker <em>never</em> wants to see <em>any</em> of the prompt-words. So even if <code>computer_programmer</code> were still the top analogy answer, it would be filtered from the results.</p>
<p>With gensim, we can override that behavior by supplying the prompts as raw vectors, rather than lookup keys - and then we get a different result:</p>
<pre class="lang-py prettyprint-override"><code>gkv.most_similar(
positive=[gkv.get_vector('computer_programmer', norm=True), 'woman'],
negative=['man']
)
</code></pre>
<pre class="lang-py prettyprint-override"><code>[('computer_programmer', 0.8276970982551575),
('homemaker', 0.5627118945121765),
('housewife', 0.5105046629905701),
('graphic_designer', 0.505180299282074),
('schoolteacher', 0.497949481010437),
('businesswoman', 0.493489146232605),
('paralegal', 0.49255111813545227),
('registered_nurse', 0.4907974600791931),
('saleswoman', 0.48816272616386414),
('electrical_engineer', 0.4797726571559906)]
</code></pre>
<p>Now, the model recognizes <code>computer_programmer</code> as the top match, with the unrestricted analogical-answer not massively shifted by the gender-influence. The results retain a gendered-character, suggesting the 'target point' found by the analogy-calculations is on-the-side of <code>'computer_programmer'</code> towards a neighborhood of stereotypically-female roles.</p>
<p>Also interesting to note: if you check the top-10 most-similar words to <code>'computer_programmer'</code> in the <code>GoogleNews</code> vectors, <code>'homemaker'</code> is among them, as well as a woman author/programmer by name. So the gender contrasts among these roles and words, as captured by that particular vector-set, may be more subtle than some of the headline claims of papers emphasizing bias.</p> | 2020-09-19 18:24:19.067000+00:00 | 2020-09-19 18:24:19.067000+00:00 | null | null | 63,970,338 | <p>Distributional semantic models such as word2vec and Glove generate the word vector by capturing the context occurrence of words with each other. Then if one word appears more than others next to the focus word, it has a closer vector to the focus word, and it dominates the directions of the focus word's vector. For example:</p>
<p>Consider beautiful as a focus word, and the sequences are as follows:<br />
<strong>You are a beautiful lady. (this sequence appears 100 times)<br />
You are a beautiful boy. (this sequence appears 10 times)<br />
As a result, beautiful is closer to the lady than the boy.</strong></p>
<p>If the distribution of two context words (co-occur with the focus word) is the same in the entire dataset, then the focus word's vector might have the same closeness to those two context word's vector.</p>
<p>Consider the previous example:<br />
<strong>You are a beautiful lady. (this sequence appears 10 times)<br />
You are a beautiful boy. (this sequence appears 10 times)<br />
As a result, beautiful has the same closeness to lady and boy.</strong></p>
<p>My question is related to this point: how close should the word distributions be to get the second example's situation (a vector that has the same closeness to the other two vectors)? Is there research literature that covers how word distributions dominate the generated vector?</p>
<p>Update: Imagine beautiful has only two contexts, lady and boy in the entire dataset.</p> | 2020-09-19 15:21:19.973000+00:00 | 2020-09-19 20:43:06.473000+00:00 | 2020-09-19 20:43:06.473000+00:00 | vector|stanford-nlp|word2vec | ['https://arxiv.org/abs/1607.06520'] | 1 |
46,161,639 | <p>The 'Paragraph Vectors' algorithms behind <code>Doc2Vec</code> simply give documents vectors that are interesting in their distance/directional arrangement in comparison to other co-trained document vectors. </p>
<p>The individual dimensions don't have specific interpretable meanings. As with <code>Word2Vec</code>, there may be 'neighborhoods' of related items, and certain <code>directions</code> may vaguely map to understandable concepts.</p>
<p>But those directions aren't directly aligned with the individual perpendicular dimensions of the coordinate space. And there's nothing in the process that helps you describe those directional tendencies. (They tend to come up when differencing vectors, as in the analogy-solving problems.)</p>
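<p>A rough gensim (4.x) sketch of probing such a direction by differencing vectors; the model file and document tags here are hypothetical:</p>
<pre class="lang-py prettyprint-override"><code>from gensim.models import Doc2Vec

model = Doc2Vec.load("my_doc2vec.model")                  # hypothetical model
print(model.dv.most_similar(positive=["doc_japan_pop", "doc_lady_gaga"],
                            negative=["doc_usa_pop"]))    # hypothetical tags
</code></pre>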
<p>You can see an example in the '<a href="https://arxiv.org/abs/1507.07998" rel="nofollow noreferrer">Document Embedding With Paragraph Vectors</a>' paper, Table 2, where Japanese pop artists who are (perhaps) similar to 'Lady Gaga' are discovered by shifting in space in the directions of <code>-'American'+'Japanese'</code>. That is, there's no one dimension that Japanese-vs-American – but there is a directional tendency across all dimensions. </p> | 2017-09-11 17:51:30.010000+00:00 | 2017-09-11 17:51:30.010000+00:00 | null | null | 46,157,805 | <p>I have created a doc2vec model of size of 100 dimensions. From what I understand from my reading that these dimensions are features of my model. How can I identify what these dimensions are exactly.</p> | 2017-09-11 14:12:49.893000+00:00 | 2017-09-11 17:51:30.010000+00:00 | null | python|gensim|doc2vec | ['https://arxiv.org/abs/1507.07998'] | 1 |
61,257,031 | <p>I know of two common ways to manage multiple input sequences, and your approach lands somewhere between them.</p>
<hr>
<p>One approach is to design a multi-input model with each of your text columns as a different input. They can share the vocabulary and/or embedding layer later, but for now you still need a distinct input sub-model for each of description, category, etc.</p>
<p>Each of these becomes an input to the network, using the <code>Model(inputs=[...], outputs=rest_of_nn)</code> syntax. You will need to design <code>rest_of_nn</code> so it can take multiple inputs. This can be as simple as your current concatenation, or you could use additional layers to do the synthesis.</p>
<p>It could look something like this:</p>
<pre class="lang-py prettyprint-override"><code># Imports assumed by this sketch (tf.keras).
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Flatten, concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.text import Tokenizer
import tensorflow.keras.backend as K

# Build separate vocabularies. This could be shared.
desc_tokenizer = Tokenizer()
desc_tokenizer.fit_on_texts(training_sentence)
desc_vocab_size = len(desc_tokenizer.word_index) + 1  # +1: Tokenizer indices start at 1
categ_tokenizer = Tokenizer()
categ_tokenizer.fit_on_texts(training_category)
categ_vocab_size = len(categ_tokenizer.word_index) + 1  # +1: Tokenizer indices start at 1
# Inputs.
desc = Input(shape=(desc_maxlen,))
categ = Input(shape=(categ_maxlen,))
# Input encodings, opting for different embeddings.
# Descriptions go through an LSTM as a demo of extra processing.
embedded_desc = Embedding(desc_vocab_size, desc_embed_size, input_length=desc_maxlen)(desc)
encoded_desc = LSTM(categ_embed_size, return_sequences=True)(embedded_desc)
encoded_categ = Embedding(categ_vocab_size, categ_embed_size, input_length=categ_maxlen)(categ)
# Rest of the NN, which knows how to put everything together to get an output.
merged = concatenate([encoded_desc, encoded_categ], axis=1)
rest_of_nn = Dense(hidden_size, activation='relu')(merged)
rest_of_nn = Flatten()(rest_of_nn)
rest_of_nn = Dense(output_size, activation='softmax')(rest_of_nn)
# Create the model, assuming some sort of classification problem.
model = Model(inputs=[desc, categ], outputs=rest_of_nn)
model.compile(optimizer='adam', loss=K.categorical_crossentropy)
</code></pre>
<hr>
<p>The second approach is to concatenate all of your data before encoding it, and then treat everything as a more standard single-sequence problem after that. It is common to use a unique token to separate or define the different fields, similar to <code>BOS</code> and <code>EOS</code> for the beginning and end of the sequence.</p>
<p>It would look something like this:</p>
<pre><code>XXBOS XXDESC This event will be fun. XXCATEG leisure XXLOC Seattle, WA XXEOS
</code></pre>
<p>You can also do end tags for the fields like <code>DESCXX</code>, omit the <code>BOS</code> and <code>EOS</code> tokens, and generally mix and match however you want. You can even use this to combine some of your input sequences, but then use a multi-input model as above to merge the rest.</p>
<hr>
<p>Speaking of mixing and matching, you also have the option to treat some of your inputs directly as an embedding. Low-cardinality fields like <code>category</code> and <code>location</code> do not need to be tokenized, and can be embedded directly without any need to split into tokens. That is, they don't need to be a sequence.</p>
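<p>A minimal sketch of that option (assuming <code>num_categories</code> and <code>embed_size</code> are known):</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.layers import Input, Embedding, Flatten

categ_id = Input(shape=(1,), dtype='int32')   # a single integer id, not a sequence
encoded_categ = Flatten()(Embedding(num_categories, embed_size)(categ_id))
</code></pre>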
<p>If you are looking for a reference, I enjoyed this paper on <a href="https://arxiv.org/abs/1903.04254" rel="nofollow noreferrer">Large Scale Product Categorization using Structured and Unstructured Attributes</a>. It tests all or most of the ideas I have just outlined, on real data at scale.</p> | 2020-04-16 18:08:15.413000+00:00 | 2020-04-16 18:17:24.890000+00:00 | 2020-04-16 18:17:24.890000+00:00 | null | 61,252,972 | <p>I am just a beginner in this subject, I have tested some NN for image recognition as well as using NLP for sequence classification.</p>
<p>This second topic is interesting for me.
using </p>
<pre><code>sentences = [
'some test sentence',
'and the second sentence'
]
tokenizer = Tokenizer(num_words=100, oov_token='<OOV>')
tokenizer.fit_on_texts(sentences)
sentences = tokenizer.texts_to_sequences(sentences)
</code></pre>
<p>will result in an array of size <code>[n,1]</code>, where n is the number of words in the sentence. And assuming I have implemented padding correctly, each training example in the set will be of size <code>[n,1]</code>, where n is the max sentence length.</p>
<p>that prepared training set I can pass into keras <code>model.fit</code></p>
<p>What about when I have multiple features in my data set?
Let's say I would like to build an event prioritization algorithm and my data structure would look like:</p>
<p><code>[event_description, event_category, event_location, label]</code></p>
<p>trying to tokenize such array would result in [n,m] matrix where n is maximum word length and m is the feature number</p>
<p>how to prepare such a dataset so a model could be trained on such data?</p>
<p>would this approach be ok:</p>
<pre><code># Going through training set to get all features into specific arrays
for data in dataset:
training_sentence.append(data['event_description'])
training_category.append(data['event_category'])
training_location.append(data['event_location'])
training_labels.append(data['label'])
# Tokenize each array which contains tokenized value
tokenizer.fit_on_texts(training_sentence)
tokenizer.fit_on_texts(training_category)
tokenizer.fit_on_texts(training_location)
sequences = tokenizer.texts_to_sequences(training_sentence)
categories = tokenizer.texts_to_sequences(training_category)
locations = tokenizer.texts_to_sequences(training_location)
# Concatenating arrays with features into one
training_example = numpy.concatenate([sequences,categories, locations])
#ommiting model definition, training the model
model.fit(training_example, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))
</code></pre>
<p>I haven't tested it yet. I just want to make sure that I understand everything correctly and that my assumptions are correct.</p>
<p>Is this a correct approach to doing NLP with a NN?</p> | 2020-04-16 14:37:51.873000+00:00 | 2020-04-16 18:17:24.890000+00:00 | null | python|tensorflow|machine-learning|deep-learning|neural-network | ['https://arxiv.org/abs/1903.04254'] | 1
31,796,971 | <p>Based on a quick Google search, parallelization of Huffman encoding is possible, and the papers date back decades.</p>
<ul>
<li><a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.23.3927" rel="nofollow">Constructing Huffman Trees in Parallel</a></li>
<li><a href="http://www.ittc.ku.edu/~jsv/Papers/HoV95.pdcfull.pdf" rel="nofollow">Parallel Lossless Image Compression
Using Huffman and Arithmetic Coding</a></li>
<li><a href="http://arxiv.org/pdf/1107.1525.pdf" rel="nofollow">http://arxiv.org/pdf/1107.1525.pdf</a></li>
</ul>
<p>I have not read any of them (I just took a short glance to check their relevance).</p>
<p>Here is a <a href="http://jmyers-gnu.blogspot.com/2007/02/parallelizing-huffman-coding-in-that.html" rel="nofollow">small blog post on the topic</a> where the guy claims that:</p>
<blockquote>
<p>though, parallel Huffman Decoding shouldn't be hard if we have a suitably large number of processors, and we assume that communication time between processor elements is very short.</p>
</blockquote> | 2015-08-03 21:22:26.473000+00:00 | 2015-08-03 21:22:26.473000+00:00 | null | null | 31,796,662 | <p>The steps involved in Huffman encoding are quite sequential. So, I was wondering how could I introduce parallelism while implementing Huffman encoding on any platforms supporting parallel implementation, like GPUs, many core processor etc.?</p> | 2015-08-03 20:59:42.083000+00:00 | 2015-08-04 07:53:30.703000+00:00 | 2015-08-03 21:15:39.407000+00:00 | algorithm|parallel-processing|hardware|processor|huffman-code | ['http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.23.3927', 'http://www.ittc.ku.edu/~jsv/Papers/HoV95.pdcfull.pdf', 'http://arxiv.org/pdf/1107.1525.pdf', 'http://jmyers-gnu.blogspot.com/2007/02/parallelizing-huffman-coding-in-that.html'] | 4 |
41,881,067 | <p>A very late answer, but it might help someone searching for this on Google, since this question popped up as the first result for "best segmentation algorithm".</p>
<p>Fully convolutional networks seem to do exactly the task you're asking for. Check the paper in <a href="https://arxiv.org/pdf/1605.06211.pdf" rel="nofollow noreferrer">arXiv</a>, and an implementation in <a href="http://www.vlfeat.org/matconvnet/models/pascal-fcn8s-dag.mat" rel="nofollow noreferrer">MatConvNet</a>.</p>
<p>The following image illustrates a segmentation example from these CNNs (the paper I linked actually proposes 3 different architectures, FCN-8s being the best).
<a href="https://i.stack.imgur.com/OShpH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OShpH.png" alt="Segmentation results"></a></p> | 2017-01-26 19:07:42.953000+00:00 | 2017-01-26 19:07:42.953000+00:00 | null | null | 5,042,445 | <p>I'm trying to develop a system, which recognizes various objects present in an image based on their primitive features like texture, shape & color.</p>
<p>The first stage of this process is to extract individual objects from an image and later do image processing on each one, one by one.</p>
<p>However, the segmentation algorithms I've studied so far are not even near perfect, let alone the so-called ideal image segmentation algorithm.</p>
<p>Segmentation accuracy will decide how much better the system responds to given query.</p>
<p>Segmentation should be fast as well as accurate.</p>
<p>Can anyone suggest a segmentation algorithm, developed or implemented so far, that won't be too complicated to implement but will be good enough to complete my project?</p>
<p>Any help is appreciated. </p> | 2011-02-18 14:26:39.370000+00:00 | 2019-05-31 08:16:55.043000+00:00 | 2013-01-14 16:15:33.930000+00:00 | image-processing|image-segmentation|cbir | ['https://arxiv.org/pdf/1605.06211.pdf', 'http://www.vlfeat.org/matconvnet/models/pascal-fcn8s-dag.mat', 'https://i.stack.imgur.com/OShpH.png'] | 3
26,825,029 | <p>I pondered the various ideas I received in comments and also ran a few experiments based on that feedback. In the end I decided that a refined heuristic search was the best way forward. I have now managed to reduce the maximum error for <code>atanf_poly()</code> to 1.01036 ulps, with just three arguments exceeding my stated goal of a 1 ulp error bound:</p>
<pre><code>ulp = -1.00829 @ |a| = 9.80738342e-001 0x1.f62356p-1 (3f7b11ab)
ulp = -1.01036 @ |a| = 9.87551928e-001 0x1.f9a068p-1 (3f7cd034)
ulp = 1.00050 @ |a| = 9.99375939e-001 0x1.ffae34p-1 (3f7fd71a)
</code></pre>
<p>Based on the manner of generating the improved approximation there is no guarantee that this is a best approximation; no scientific breakthrough here. As the ulp error of the current solution is not yet perfectly balanced, and since continuing the search continues to deliver better approximations (albeit at exponentially increasing time intervals) my <em>guess</em> is that a 1 ulp error bound is achievable, but at the same time we seem to be very close to the best machine-optimized approximation already.</p>
<p>The better quality of the new approximation is the result of a refined search process. I observed that all of the largest ulp errors in the polynomial occur close to unity, say in [0.75,1.0] to be conservative. This allows for a fast scan for interesting coefficient sets whose maximum error is smaller than some bound, say 1.08 ulps. I can then test in detail and exhaustively all coefficient sets within a heuristically chosen hyper-cone anchored at that point. This second step searches for minimum ulp error as the primary goal, and maximum percentage of correctly rounded results as a secondary objective. By using this two-step process across all four cores of my CPU I was able to significantly speed up the search process: I have been able to check about 2<sup>21</sup> coefficient sets so far. </p>
<p>Based on the range of each coefficient across all "close" solutions I now estimate that the total useful search space for this approximation problem is >= 2<sup>24</sup> coefficient sets rather than the more optimistic number of 2<sup>20</sup> I threw out before. This seems like a feasible problem to solve for someone who is either very patient or has lots of computational horse-power at their disposal.</p>
<p>My updated code is as follows:</p>
<pre><code>// max ulp err = 1.01036
float atanf_poly (float a)
{
float r, s;
s = a * a;
r = 0x1.7ed22cp-9f;
r = fmaf (r, s, -0x1.0c2c2ep-6f);
r = fmaf (r, s, 0x1.61fdf6p-5f);
r = fmaf (r, s, -0x1.3556b4p-4f);
r = fmaf (r, s, 0x1.b4e12ep-4f);
r = fmaf (r, s, -0x1.230ae0p-3f);
r = fmaf (r, s, 0x1.9978eep-3f);
r = fmaf (r, s, -0x1.5554dap-2f);
r = r * s;
r = fmaf (r, a, a);
return r;
}
// max ulp err = 1.51871
float my_atanf (float a)
{
float r, t;
t = fabsf (a);
r = t;
if (t > 1.0f) {
r = 1.0f / r;
}
r = atanf_poly (r);
if (t > 1.0f) {
r = fmaf (0x1.ddcb02p-1f, 0x1.aee9d6p+0f, -r); // pi/2 - r
}
r = copysignf (r, a);
return r;
}
</code></pre>
<hr>
<p><strong>Update</strong> (after revisiting the issue two-and-a-half years later)</p>
<p>Using T. Myklebust's <a href="https://arxiv.org/abs/1508.03211" rel="nofollow noreferrer">draft publication</a> as a starting point, I found the arctangent approximation on [-1,1] that has the smallest error to have a maximum error of 0.94528 ulp.</p>
<pre><code>/* Based on: Tor Myklebust, "Computing accurate Horner form approximations
to special functions in finite precision arithmetic", arXiv:1508.03211,
August 2015. maximum ulp err = 0.94528
*/
float atanf_poly (float a)
{
float r, s;
s = a * a;
r = 0x1.6d2086p-9f; // 2.78569828e-3
r = fmaf (r, s, -0x1.03f2ecp-6f); // -1.58660226e-2
r = fmaf (r, s, 0x1.5beebap-5f); // 4.24722321e-2
r = fmaf (r, s, -0x1.33194ep-4f); // -7.49753043e-2
r = fmaf (r, s, 0x1.b403a8p-4f); // 1.06448799e-1
r = fmaf (r, s, -0x1.22f5c2p-3f); // -1.42070308e-1
r = fmaf (r, s, 0x1.997748p-3f); // 1.99934542e-1
r = fmaf (r, s, -0x1.5554d8p-2f); // -3.33331466e-1
r = r * s;
r = fmaf (r, a, a);
return r;
}
</code></pre> | 2014-11-09 04:33:35.200000+00:00 | 2017-10-18 19:14:58.220000+00:00 | 2017-10-18 19:14:58.220000+00:00 | null | 26,692,859 | <p>For the simple and efficient implementation of fast math functions with reasonable accuracy, polynomial minimax approximations are often the method of choice. Minimax approximations are typically generated with a variant of the Remez algorithm. Various widely available tools such as Maple and Mathematica have built-in functionality for this. The generated coefficients are typically computed using high-precision arithmetic. It is well-known that simply rounding those coefficients to machine precision leads to suboptimal accuracy in the resulting implementation.</p>
<p>Instead, one searches for closely related sets of coefficients that are exactly representable as machine numbers to generate a machine-optimized approximation. Two relevant papers are:</p>
<p><em>Nicolas Brisebarre, Jean-Michel Muller, and Arnaud Tisserand, "Computing Machine-Efficient Polynomial Approximations", ACM Transactions on Mathematical Software, Vol. 32, No. 2, June 2006, pp. 236–256.</em></p>
<p><em>Nicolas Brisebarre and Sylvain Chevillard, "Efficient polynomial L∞-approximations", 18th IEEE Symposium on Computer Arithmetic (ARITH-18), Montpellier (France), June 2007, pp. 169-176.</em></p>
<p>An implementation of the LLL-algorithm from the latter paper is available as the <code>fpminimax()</code> command of the <a href="http://sollya.gforge.inria.fr/sollya-3.0/help.php?name=fpminimax">Sollya tool</a>. It is my understanding that all algorithms proposed for the generation of machine-optimized approximations are based on heuristics, and that it is therefore generally unknown what accuracy can be achieved by an optimal approximation. It is not clear to me whether the availability of FMA (fused multiply-add) for the evaluation of the approximation has an influence on the answer to that question. It seems to me naively that it should.</p>
<p>I am currently looking at a simple polynomial approximation for arctangent on [-1,1] that is evaluated in IEEE-754 single-precision arithmetic, using the Horner scheme and FMA. See function <code>atan_poly()</code> in the C99 code below. For lack of access to a Linux machine at the moment, I did not use Sollya to generate these coefficients, but used my own heuristic that could be loosely described as a mixture of steepest descent and simulated annealing (to avoid getting stuck on local minima). The maximum error of my machine-optimized polynomial is very close to 1 ulp, but ideally I would like the maximum ulp error to be below 1 ulp. </p>
<p>I am aware that I could change my computation to increase the accuracy, for example by using a leading coefficient represented to more than single-precision precision, but I would like to keep the code exactly as is (that is, as simple as possible) adjusting only the coefficients to deliver the most accurate result possible.</p>
<p>A "proven" optimal set of coefficients would be ideal, pointers to relevant literature are welcome. I did a literature search but could not find any paper that advances the state of the art meaningfully beyond Sollya's <code>fpminimax()</code>, and none that examine the role of FMA (if any) in this issue.</p>
<pre><code>// max ulp err = 1.03143
float atan_poly (float a)
{
float r, s;
s = a * a;
r = 0x1.7ed1ccp-9f;
r = fmaf (r, s, -0x1.0c2c08p-6f);
r = fmaf (r, s, 0x1.61fdd0p-5f);
r = fmaf (r, s, -0x1.3556b2p-4f);
r = fmaf (r, s, 0x1.b4e128p-4f);
r = fmaf (r, s, -0x1.230ad2p-3f);
r = fmaf (r, s, 0x1.9978ecp-3f);
r = fmaf (r, s, -0x1.5554dcp-2f);
r = r * s;
r = fmaf (r, a, a);
return r;
}
// max ulp err = 1.52637
float my_atanf (float a)
{
float r, t;
t = fabsf (a);
r = t;
if (t > 1.0f) {
r = 1.0f / r;
}
r = atan_poly (r);
if (t > 1.0f) {
r = fmaf (0x1.ddcb02p-1f, 0x1.aee9d6p+0f, -r); // pi/2 - r
}
r = copysignf (r, a);
return r;
}
</code></pre> | 2014-11-01 20:22:01.337000+00:00 | 2017-10-18 19:14:58.220000+00:00 | 2014-11-02 17:06:20.573000+00:00 | c|algorithm|floating-point | ['https://arxiv.org/abs/1508.03211'] | 1 |
60,813,522 | <p>When you train an image recognition model, you train it for a specific image size (and resolution), let's say n_dims = <code>[256, 256]</code>. Now, in the prediction phase, you have images of different sizes (with respect to pixels), e.g. <code>[1024, 1024]</code>. You extract patches (you can resize the image first by lowering the resolution) and slide your model over the image patches, and for each patch, you obtain a prediction for all classes (in a patch, more than one of the objects might be present), which you have to average somehow for the whole image at the end.</p>
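<p>As a toy illustration of the averaging step: a class score map is just a per-class score at every spatial location, and spatially averaging it yields one fixed-size vector.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

num_classes, h, w = 1000, 7, 9                 # map size depends on the input image size
class_score_map = np.random.randn(num_classes, h, w)
fixed_size_vector = class_score_map.mean(axis=(1, 2))
print(fixed_size_vector.shape)                 # (1000,)
</code></pre>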
<p>See <a href="https://arxiv.org/abs/1312.6229" rel="nofollow noreferrer">OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks</a>.</p>
<blockquote>
<p>Instead, we explore the entire image by densely running the network at each location and at multiple
scales. While the sliding window approach may be computationally prohibitive for certain types
of model, it is inherently efficient in the case of ConvNets (see section 3.5). This approach yields
significantly more views for voting, which increases robustness while remaining efficient. The result
of convolving a ConvNet on an image of arbitrary size is a spatial map of C-dimensional vectors at
each scale.</p>
</blockquote> | 2020-03-23 12:24:25.990000+00:00 | 2020-03-23 12:24:25.990000+00:00 | null | null | 60,812,966 | <p>I was going through <a href="https://arxiv.org/pdf/1409.1556.pdf" rel="nofollow noreferrer">vggnet</a> paper and i came across the testing phase of vggnet. </p>
<p>During the testing phase, the test image goes through the VGGNet and a class score map is obtained. This class score map is spatially averaged to produce a fixed-size vector. </p>
<p>I have googled "class score map", but I couldn't find any relevant results. I wish to know what the role of the class score map is.</p>
<p>Any hint would be greatly helpful. Thanks</p>
<p><a href="https://i.stack.imgur.com/EaRaY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EaRaY.png" alt="enter image description here"></a></p> | 2020-03-23 11:45:46.167000+00:00 | 2020-03-23 12:24:25.990000+00:00 | null | deep-learning|vgg-net | ['https://arxiv.org/abs/1312.6229'] | 1 |
54,805,105 | <p>What you call your <em>bias</em> is actually the greatest strength you have. You can include your knowledge of the system. Machine learning, including glorious <em>deep learning</em> is, to put it bluntly, <strong>stupid</strong>. Although it can figure out features for you, interpretation of these will be difficult.</p>
<p>Also, deep learning especially has a great capacity to memorise (not learn!) patterns, making it easy to overfit to the training data. Making machine learning models that generalise well in the real world is tough.</p>
<p>In most successful approaches (check with Master Kagglers), people create features. In your case I'd probably want to calculate the magnitude and direction of the force. Depending on the type of scenario, I might transform (Lat, Long) into the distance from a specific point (say, the point of origin / activation, or one established every minute), or maybe use a different coordinate system. </p>
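<p>A tiny sketch of that kind of hand-crafted feature (the column names and reference point are made up):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"ax": [0.1, 0.3], "ay": [0.0, 0.2], "az": [9.8, 9.6],
                   "lat": [47.600, 47.610], "lon": [-122.330, -122.320]})
df["force_magnitude"] = np.sqrt(df.ax**2 + df.ay**2 + df.az**2)
lat0, lon0 = 47.600, -122.330                      # assumed point of origin / activation
df["dist_from_origin"] = np.hypot(df.lat - lat0, df.lon - lon0)  # crude, in degrees
print(df)
</code></pre>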
<p>Since your data is a time series, I'd probably use something well suited for time-series modelling that you can understand and troubleshoot. CNNs and such are typically your last resort in the majority of cases. </p>
<p>If you really would like to automate it, check e.g. <a href="https://autokeras.com/" rel="nofollow noreferrer">Auto Keras</a> or <a href="https://uber.github.io/ludwig/" rel="nofollow noreferrer">ludwig</a>. When it comes to learning which features matter most, I'd recommend going with <a href="https://en.wikipedia.org/wiki/Gradient_boosting" rel="nofollow noreferrer">gradient boosting</a> (GBDT).</p>
<p>I'd recommend reading <a href="https://arxiv.org/pdf/1810.09591.pdf" rel="nofollow noreferrer">this article from AirBnB</a> that takes deeper dive into journey of building such systems and feature engineering.</p> | 2019-02-21 10:40:54.503000+00:00 | 2019-02-21 10:57:21.403000+00:00 | 2019-02-21 10:57:21.403000+00:00 | null | 54,803,825 | <p>I'm new to machine learning, and I understand that there are parameters and choices that apply to the <em>model</em> you attach to a certain set of inputs, which can be tuned/optimised, but those inputs obviously tie back to fields you generated by slicing and dicing whatever source data you had in a way that makes sense to <em>you</em>. But what if the way you decided to model and cut up your source data, and therefore training data, isn't optimal? Are there ways or tools that extend the power of machine learning into, not only the model, but the way training data was created in the first place?</p>
<p>Say you're analysing the accelerometer, GPS, heartrate and surrounding topography data of someone moving. You want to try to determine where this person is likely to become exhausted and stop, assuming they'll continue moving in a straight line based on their trajectory, and that going up any hill will increase heartrate to some point where they must stop. Whether they're running or walking obviously modifies these things.</p>
<p>So you cut up your data, and feel free to correct how you'd do this, but it's less relevant to the main question:</p>
<ul>
<li>Slice up raw accelerometer data along X, Y, Z axis for the past <strong><em>A</em></strong> number of seconds into <strong><em>B</em></strong> number of slices to try and profile it, probably applying a CNN to it, to determine if running or walking</li>
<li>Cut up the recent <strong><em>C</em></strong> seconds of raw GPS data into a sequence of <strong><em>D</em></strong> (Lat, Long) pairs, each pair representing the average of <strong><em>E</em></strong> seconds of raw data</li>
<li><em>Based on the previous sequence</em>, determine speed and trajectory, and determine the upcoming slope, by slicing the next <strong><em>F</em></strong> distance (or seconds, another option to determine, of <strong><em>G</em></strong>) into <strong><em>H</em></strong> number of slices, profiling each, etc...</li>
</ul>
<p>You get the idea. How do you effectively determine <strong><em>A</em></strong> through <strong><em>H</em></strong>, some of which would completely change the number and behaviour of model inputs? I want to take out any bias I may have about what's right, and let it determine end-to-end. Are there practical solutions to this? Each time it changes the parameters of data creation, go back, re-generate the training data, feed it into the model, train it, tune it, over and over again until you get the best result.</p> | 2019-02-21 09:40:42.777000+00:00 | 2019-02-21 10:58:23.283000+00:00 | 2019-02-21 10:58:23.283000+00:00 | machine-learning|deep-learning|data-modeling | ['https://autokeras.com/', 'https://uber.github.io/ludwig/', 'https://en.wikipedia.org/wiki/Gradient_boosting', 'https://arxiv.org/pdf/1810.09591.pdf'] | 4 |
29,338,214 | <p>I don't know of any document frequency lists for large corpora such as the web, but there are some term frequency lists available. For example, there are the <a href="http://wacky.sslmit.unibo.it/doku.php?id=frequency_lists" rel="nofollow">frequency lists from the web corpora compiled by the Web-As-Corpus Kool Yinitiative</a>, which include the 2-billion ukWaC English web corpus. Alternatively, there are the <a href="http://storage.googleapis.com/books/ngrams/books/datasetsv2.html" rel="nofollow">n-grams from the Google Books Corpus</a>. </p>
<p>It <a href="http://arxiv.org/abs/0807.3755" rel="nofollow">has been shown</a> that such term frequency counts can be used to reliably approximate document frequency counts. </p> | 2015-03-30 04:02:57.360000+00:00 | 2015-03-30 04:02:57.360000+00:00 | null | null | 29,337,199 | <p>I am looking for resources that provides the number of documents a term is covered in. For example, there is about 25 billion documents that contains the term "the" in the indexed internet.</p> | 2015-03-30 01:45:37.290000+00:00 | 2015-03-30 15:43:45.983000+00:00 | null | nlp|stanford-nlp|opennlp|corpus | ['http://wacky.sslmit.unibo.it/doku.php?id=frequency_lists', 'http://storage.googleapis.com/books/ngrams/books/datasetsv2.html', 'http://arxiv.org/abs/0807.3755'] | 3 |
17,522,280 | <p>Some answers can be found in this article: <a href="http://arxiv.org/pdf/0801.0120v2.pdf" rel="nofollow">http://arxiv.org/pdf/0801.0120v2.pdf</a></p>
<p>(At least according to their abstract, I didn't read the whole paper)</p> | 2013-07-08 08:41:08.327000+00:00 | 2013-07-08 08:41:08.327000+00:00 | null | null | 17,521,661 | <p>To make $6.39, you can choose:</p>
<ul>
<li>$5 bill </li>
<li>$1 bill to make $6. </li>
<li>25¢ coin, to make $6.25 </li>
<li>10¢ coin, to make $6.35 </li>
<li>Four 1¢ coins, to make $6.39.</li>
</ul>
<p>However it doesn't work if the currency has coins with weights of 1,7, and 10. My question is, why does the greedy algorithm work [efficiently] only for a few weights? What are the conditions to be satisfied for the given set of weights to satisfy the greedy algorithm <strong>and</strong> be optimal at the same time?</p> | 2013-07-08 08:02:50.227000+00:00 | 2015-10-20 23:27:58.187000+00:00 | 2015-10-20 23:27:58.187000+00:00 | algorithm|greedy | ['http://arxiv.org/pdf/0801.0120v2.pdf'] | 1 |
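<p>A quick way to see the failure described above for denominations {1, 7, 10} is to compare greedy with an exhaustive dynamic-programming answer; a small self-contained sketch:</p>
<pre><code>def greedy_change(amount, coins):
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used

def optimal_change(amount, coins):
    # classic DP: best[a] = fewest coins needed to make amount a
    best = [0] + [None] * amount
    choice = [None] * (amount + 1)
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] is not None:
                if best[a] is None or best[a - c] + 1 < best[a]:
                    best[a], choice[a] = best[a - c] + 1, c
    out = []
    while amount > 0:
        out.append(choice[amount])
        amount -= choice[amount]
    return out

print(greedy_change(14, [1, 7, 10]))   # [10, 1, 1, 1, 1] -> 5 coins
print(optimal_change(14, [1, 7, 10]))  # [7, 7]           -> 2 coins
</code></pre>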
10,670,934 | <p>Modern Portfolio theory is a subject in its own right, with books such as "Modern Portfolio Theory and Investment Analysis", and an introduction at <a href="http://en.wikipedia.org/wiki/Modern_portfolio_theory" rel="nofollow">http://en.wikipedia.org/wiki/Modern_portfolio_theory</a>.</p>
<p>One way to get problems you can actually solve is to treat it as a mathematical optimization problem. If you have a vector which gives you the amount of each stock you buy, then - under various assumptions - the return is a linear function of this vector, and the risk is a quadratic function of this vector. Maximising the return for given risk, or minimising the risk for given return, is a well-understood mathematical problem, even for very large numbers of stocks - <a href="http://en.wikipedia.org/wiki/Quadratic_programming" rel="nofollow">http://en.wikipedia.org/wiki/Quadratic_programming</a>.</p>
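<p>For intuition, here is a minimal sketch of that mean-variance formulation; the expected returns and covariance below are made-up numbers and the solver is a general-purpose one rather than a dedicated QP package:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10, 0.07])            # hypothetical expected returns
cov = np.array([[0.10, 0.02, 0.04, 0.00],
                [0.02, 0.12, 0.01, 0.03],
                [0.04, 0.01, 0.09, 0.02],
                [0.00, 0.03, 0.02, 0.08]])          # hypothetical covariance matrix
target_return = 0.09

def risk(w):                                        # portfolio variance: the quadratic objective
    return w @ cov @ w

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},             # fully invested
    {"type": "ineq", "fun": lambda w: w @ mu - target_return},  # return at least the target
]
bounds = [(0.0, 1.0)] * len(mu)                     # long-only weights

result = minimize(risk, x0=np.full(len(mu), 0.25), bounds=bounds, constraints=constraints)
print(result.x, result.x @ mu, risk(result.x))
</code></pre>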
<p>One practical problem with this is that the answer you get will probably tell you buy some fraction of almost all the stocks on the market. My guess is that real life programs use some "secret sauce" heuristic that doesn't guarantee the perfect answer, subject to a constraint on the number of stocks you are actually prepared to buy, but works pretty well in practice. Returning the perfect answer appears to be a hard problem - see e.g. <a href="http://arxiv.org/ftp/arxiv/papers/1105/1105.3594.pdf" rel="nofollow">http://arxiv.org/ftp/arxiv/papers/1105/1105.3594.pdf</a></p> | 2012-05-20 05:11:05.657000+00:00 | 2012-05-20 05:11:05.657000+00:00 | null | null | 10,670,268 | <p>I hope this does not get closed because it is related to algorithms that I have not been able to figure out(its also pretty long because I'm so confused about how its being done). Basically many years back I used to work at a mutual fund and we used different tools to select optimize portfolios as well as hedge existing ones. We would take these results and make our own modifications then sell them to clients. After my company downsized, I decided I wanted to give it a try(to create the software and include my customizations) but I have no clue how combinations are actually generated for the software. </p>
<p>After 6 months of trying, I'm accepting that my approach is impossible. I was trying to use combination algorithms like the ones from Knuth's book, as well as doing <code>bit</code> combinations, to try to find every possible portfolio (I limited it to 30 stocks) on the NYSE (5,000+ stocks). But as per everyone I have spoken to, this will take me billions of billions of years to get just one day's results (on a GPU I stopped it after 2 days of straight processing).</p>
<p>So what am I missing? We would enter our risk tolerance and view of the market(stock market growth expectations, inflation expectations, fed funds expectations,etc..) and it would give us the ideal portfolio(in theory..) within a few seconds/minutes. With thousands of possibilities and quadrillion possible combinations of weights of stocks, how are they able to calculate results so quickly(or even at all)? As the admin of the system, I know we downloaded a file everyday(less than 100 mb and loaded in a mssql database probably just market data..so its not like we had every possibility. Using my approach above I would get a 5 gig file in a min of doing my version of Knuth's combination algo) and the applications worked offline(so it must have been doing it locally on the desktop/laptop cpu not on a massive supercomputer somewhere and took a min or two to run..15 minutes was the longest for a global fund which includes every stock in the world). Its so confusing because their work required correlation of the entire fund(I don't think they were just sending the top stocks they pre-calculated because everyone got different results). So if I wanted a 30 stock fund that gave me 2% returns and had a negative correlation with the market, and was 60% hedged how could the software generate that portfolio out of billions of possibilities so quickly? note, I'm not asking about the math or the finance part, I'm asking how it was able to generate 30 stocks from the entire market that gave 2% returns when in order to do that it would need to know the returns of all 30 stock portfolio(That alone would make it run for billions of years, right? the other restrictions make it more complex).</p>
<p>So How is this being done programmatically? I'm starting to believe they are not using Knuth's combination algorithm to generate every possibility yet their results don't seem randomly selected and individually selecting the stocks seems to miss the correlation part. How can so many investment softwares do things like this? </p> | 2012-05-20 02:14:57.037000+00:00 | 2012-05-20 05:11:05.657000+00:00 | 2012-05-20 04:35:24.557000+00:00 | algorithm|math|statistics|finance|portfolio | ['http://en.wikipedia.org/wiki/Modern_portfolio_theory', 'http://en.wikipedia.org/wiki/Quadratic_programming', 'http://arxiv.org/ftp/arxiv/papers/1105/1105.3594.pdf'] | 3 |
48,728,761 | <p>According to this study: </p>
<blockquote>
<p>Gupta, S., Agrawal, A., Gopalakrishnan, K., & Narayanan, P. (2015,
June). Deep learning with limited numerical precision. In
International Conference on Machine Learning (pp. 1737-1746). At:
<a href="https://arxiv.org/pdf/1502.02551.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1502.02551.pdf</a></p>
</blockquote>
<p>stochastic rounding was required to obtain convergence when using half-precision floating point (float16); however, when that rounding technique was used, they claimed to get very good results.</p>
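<p>For intuition, a small numpy sketch of stochastic rounding (my own illustration, not code from the paper):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, step=2.0**-8):
    """Round each value to a multiple of `step`, picking the upper or lower
    neighbour at random with probability proportional to proximity."""
    scaled = np.asarray(x) / step
    floor = np.floor(scaled)
    frac = scaled - floor                    # fractional part in [0, 1)
    round_up = rng.random(scaled.shape) < frac
    return (floor + round_up) * step

x = np.array([0.30017, -0.12459, 0.00113])
print(stochastic_round(x))
# In expectation the rounded value equals x, so tiny updates are not
# systematically lost the way they are with round-to-nearest at low precision.
</code></pre>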
<p>Here's a relevant quotation from that paper:</p>
<p>"A recent work (Chen et al., 2014) presents a hardware accelerator
for deep neural network training that employs
fixed-point computation units, but finds it necessary to
use 32-bit fixed-point representation to achieve convergence
while training a convolutional neural network on
the MNIST dataset. In contrast, our results show that
it is possible to train these networks using only 16-bit
fixed-point numbers, so long as stochastic rounding is used
during fixed-point computations."</p>
<p>For reference, here's the citation for Chen et al., 2014:</p>
<blockquote>
<p>Chen, Y., Luo, T., Liu, S., Zhang, S., He, L., Wang, J., ... & Temam,
O. (2014, December). Dadiannao: A machine-learning supercomputer. In
Proceedings of the 47th Annual IEEE/ACM International Symposium on
Microarchitecture (pp. 609-622). IEEE Computer Society. At:
<a href="http://ieeexplore.ieee.org/document/7011421/?part=1" rel="nofollow noreferrer">http://ieeexplore.ieee.org/document/7011421/?part=1</a></p>
</blockquote> | 2018-02-11 05:55:08.387000+00:00 | 2018-02-11 05:55:08.387000+00:00 | null | null | 46,613,748 | <p>The standard is float32 but I'm wondering under what conditions it's ok to use float16?</p>
<p>I've compared running the same covnet with both datatypes and haven't noticed any issues. With large dataset I prefer float16 because I can worry less about memory issues..</p> | 2017-10-06 20:50:48.530000+00:00 | 2018-02-11 05:55:08.387000+00:00 | null | numpy|tensorflow|neural-network|keras|conv-neural-network | ['https://arxiv.org/pdf/1502.02551.pdf', 'http://ieeexplore.ieee.org/document/7011421/?part=1'] | 2 |
46,822,632 | <p>float16 training is tricky: your model might not converge when using standard float16, but float16 does save memory, and is also faster if you are using the latest Volta GPUs. Nvidia recommends "Mixed Precision Training" in the latest <a href="http://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#mptrain" rel="nofollow noreferrer">doc</a> and <a href="https://arxiv.org/abs/1710.03740" rel="nofollow noreferrer">paper</a>.</p>
<p>To better use float16, you need to manually and carefully choose the <strong>loss_scale</strong>. If loss_scale is too large, you may get NANs and INFs; if loss_scale is too small, the model might not converge. Unfortunately, there is no common loss_scale for all models, so you have to choose it carefully for your specific model.</p>
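<p>A tiny numpy illustration of why the scale matters (illustrative numbers only):</p>
<pre><code>import numpy as np

loss_scale = 1024.0
tiny_grad = np.float32(1e-8)         # perfectly representable in float32

# Without scaling, the value underflows to zero once stored as float16:
print(np.float16(tiny_grad))         # 0.0

# With scaling, the loss (and therefore every gradient) is multiplied first,
# survives the float16 cast, and is divided back in float32 before the update:
scaled = np.float16(tiny_grad * loss_scale)
recovered = np.float32(scaled) / loss_scale
print(scaled, recovered)
# Too large a scale overflows to inf/NaN instead, which is why it must be tuned.
</code></pre>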
<p>If you just want to reduce the memory usage, you could also try tf. to_bfloat16, which might converge better. </p> | 2017-10-19 04:00:18.220000+00:00 | 2017-10-19 04:00:18.220000+00:00 | null | null | 46,613,748 | <p>The standard is float32 but I'm wondering under what conditions it's ok to use float16?</p>
<p>I've compared running the same covnet with both datatypes and haven't noticed any issues. With large dataset I prefer float16 because I can worry less about memory issues..</p> | 2017-10-06 20:50:48.530000+00:00 | 2018-02-11 05:55:08.387000+00:00 | null | numpy|tensorflow|neural-network|keras|conv-neural-network | ['http://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#mptrain', 'https://arxiv.org/abs/1710.03740'] | 2 |
46,630,858 | <p>Surprisingly, it's totally OK to use 16 bits, even not just for fun, but in production as well. For example, in <a href="https://www.youtube.com/watch?v=sUzQpd-Ku4o" rel="noreferrer">this video</a> Jeff Dean talks about <strong>16-bit</strong> calculations at Google, around 52:00. A quote from the slides:</p>
<blockquote>
<p>Neural net training very tolerant of reduced precision</p>
</blockquote>
<p>Since GPU memory is the main bottleneck in ML computation, there has been a lot of research on precision reduction. E.g. </p>
<ul>
<li><p><a href="https://arxiv.org/abs/1502.02551" rel="noreferrer">Gupta at al paper</a> "Deep Learning with Limited Numerical Precision" about <strong>fixed</strong> (not floating) <strong>16-bit</strong> training but with <em>stochastic rounding</em>. </p></li>
<li><p><a href="https://arxiv.org/abs/1412.7024" rel="noreferrer">Courbariaux at al</a> "Training Deep Neural Networks with Low Precision Multiplications" about <strong>10-bit</strong> activations and <strong>12-bit</strong> parameter updates. </p></li>
<li><p>And this is not the limit. <a href="https://arxiv.org/abs/1602.02830" rel="noreferrer">Courbariaux et al</a>, "BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1". Here they discuss <strong>1-bit</strong> activations and weights (though higher precision for the gradients), which makes the forward pass super fast.</p></li>
</ul>
<p>Of course, I can imagine some networks may require high precision for training, but I would recommend at least to try 16 bits for training a big network and switch to 32 bits if it proves to work worse.</p> | 2017-10-08 11:59:31.793000+00:00 | 2017-10-08 11:59:31.793000+00:00 | null | null | 46,613,748 | <p>The standard is float32 but I'm wondering under what conditions it's ok to use float16?</p>
<p>I've compared running the same covnet with both datatypes and haven't noticed any issues. With large dataset I prefer float16 because I can worry less about memory issues..</p> | 2017-10-06 20:50:48.530000+00:00 | 2018-02-11 05:55:08.387000+00:00 | null | numpy|tensorflow|neural-network|keras|conv-neural-network | ['https://www.youtube.com/watch?v=sUzQpd-Ku4o', 'https://arxiv.org/abs/1502.02551', 'https://arxiv.org/abs/1412.7024', 'https://arxiv.org/abs/1602.02830'] | 4 |
40,495,730 | <p>The following paper has scaling results for pde constrained parameter estimation problems but not up to anywhere near the number of cores you seem to be interested in: <a href="https://arxiv.org/abs/1606.07399" rel="nofollow noreferrer">https://arxiv.org/abs/1606.07399</a>. I haven't seen any examples going up to thousands of cores. </p>
<p>Re infiniband: By default Julia uses shared memory for communication within a node and TCP/IP across nodes, so by default infiniband is not supported. However, the language allows for the implementation of custom transports and I imagine someone will add infiniband support at some point but I couldn't find any implementations with a quick google search.</p> | 2016-11-08 20:22:33.243000+00:00 | 2016-11-08 20:22:33.243000+00:00 | null | null | 40,389,342 | <p><strong>General context:</strong></p>
<p>I have developed a fairly large Navier-Stokes (finite difference) solver written in FORTRAN90. It has adaptive grids (hence a load-balancing issue), and I have tried various techniques (MPI, OpenMP and an OpenMP-MPI hybrid) to parallelize it. However, it does not scale well enough: according to Amdahl's law it runs 96-97% of the computations in parallel. Also, the general size of the mesh is a couple of hundred million points, which will need to increase in the future.</p>
<p><strong>Query:</strong></p>
<p>Now, I am thinking of switching to Julia, since it has become very tedious to maintain and add further functionalities to the existing code.</p>
<p>The problem is that I am unable to find a good answer about the parallel performance of Julia. I have searched on the internet as well as have watched a lot of youtube videos. What I have noticed is that most people say that Julia is very much suitable for the parallel computing, some even provide a bar chart showing the reduction in the elapsed time compared to the serial code. However, some of the answers/videos are quite old, which make them a little unreliable due to the growing nature of this new language.</p>
<p>Therefore, I would like to know if the language has the ability to scale even for a few thousand cores?</p>
<p><strong>Extra information:</strong></p>
<p>I am still trying hard to improve the speedup of my existing code to achieve almost linear performance for a couple of thousand cores. The solver needs to exchange overlapping points 3-4 times per timestep. Hence, it involves a huge communication overhead. However, the non-adaptive grid version of the code easily scales up to 20k cores.</p>
<p>I have also read somewhere that Julia does not use InfiniBand standard for data communication in parallel.</p> | 2016-11-02 20:36:47.617000+00:00 | 2016-11-08 20:22:33.243000+00:00 | null | parallel-processing|julia | ['https://arxiv.org/abs/1606.07399'] | 1 |
62,975,640 | <p>This is actually a great problem to try deep learning. As you probably already know: language models are few shot learners (<a href="https://arxiv.org/abs/2005.14165" rel="nofollow noreferrer">https://arxiv.org/abs/2005.14165</a>)</p>
<p>If you are not familiar with language models, I can explain a little bit here. Otherwise, you can skip this section.
Basically, the area of NLP has made great progress through generative pre-training on unlabeled data. A popular example is BERT. The idea is that you can train a model on a language modeling task (e.g. next-word prediction). By training on such tasks, the model learns "world knowledge" well. Then, when you want to use the model for other tasks, you do not need as much labeled training data. You can take a look at this video (<a href="https://www.youtube.com/watch?v=SY5PvZrJhLE" rel="nofollow noreferrer">https://www.youtube.com/watch?v=SY5PvZrJhLE</a>) if you are interested in knowing more.</p>
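<p>For readers who want to see the general shape of that fine-tuning step, here is a minimal sketch. It is my own illustration using the Hugging Face <code>transformers</code> API rather than the linked Colab, and the example sentences and label ids are invented:</p>
<pre><code>import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

texts = ["what is the price of this watch", "i agree on your offer"]
labels = torch.tensor([0, 1])             # e.g. 0 = ask_for_price, 1 = success

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                        # a few passes over the tiny batch
    out = model(**batch, labels=labels)   # pre-trained encoder + new classification head
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
query = tokenizer("how much does it cost", return_tensors="pt")
print(model(**query).logits.argmax(-1))   # predicted intent id
</code></pre>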
<p>For your problem specifically, I have adapted a Colab notebook (that I prepared for my UC class) for your application:
<a href="https://colab.research.google.com/drive/1dKCqwNwPCsLfLHw9KkScghBJkOrU9PAs?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1dKCqwNwPCsLfLHw9KkScghBJkOrU9PAs?usp=sharing</a>
In this colab, we use a pre-trained BERT provided by Google Research, and fine-tune on your labeled data. The fine-tuning process is very fast and takes about 1 minute. The colab should work out-of-the-box for you as colab provides GPU supports to train the model. Practically, I think you many need to hand generate a more diverse set of training data, but I do not think you need to have huge data sets.</p> | 2020-07-19 02:23:14.930000+00:00 | 2020-07-19 02:23:14.930000+00:00 | null | null | 62,970,861 | <p>I am trying to make a chatbot and to do that i have to perform two main task
The first is intent classification and the other is entity recognition, but I am stuck on intent classification. Basically, I am developing a chatbot for an e-commerce site and it has a very specific use case: it has to negotiate with customers on the price of products, that's it. To keep things simple and easy I am just considering 5 intents.</p>
<ol>
<li>Ask for price</li>
<li>Counter Offer</li>
<li>negotiation</li>
<li>success</li>
<li>Buy a product</li>
</ol>
<p>To train a classifier on these intents I have trained a Naive Bayes classifier on my little hand-written corpus of data, but that data is far too little to train a good classifier. I have searched the internet a lot and looked into every machine learning data repository (Kaggle, UCI, etc.) but cannot find any data for such a specific use case. Can you guys guide me on what I should do in that case? If I get a big dataset like I want, then I will try a deep learning classifier, which will work far better for me.
Any help would be highly appreciated.</p>
<pre><code>from textblob.classifiers import NaiveBayesClassifier
import joblib # This is used to save the trained classifier in pickle format
training_data = [
('i want to buy a jeans pent', 'Buy_a_product'),
('i want to purchase a pair of shoes', 'Buy_a_product'),
('are you selling laptops', 'Buy_a_product'),
('i need an apple jam', 'Buy_a_product'),
('can you please tell me the price of this product', 'Buy_a_product'),
('please give me some discount.', 'negotition'),
("i cannot afford such price", 'negotition'),
("could you negotiate", "negotition"),
("i agree on your offer", "success"),
("yes i accepcted your offer", "success"),
("offer accepted", "success"),
("agreed", "success"),
("what is the price of this watch", "ask_for_price"),
("How much it's cost", "ask_for_price"),
("i will only give you 3000 for this product", "counter_offer"),
("Its too costly i can only pay 1500 for it", "counter_offer"),
]
clf = NaiveBayesClassifier(training_data)
joblib.dump(clf, 'intentClassifier.pkl')
</code></pre> | 2020-07-18 16:17:20.343000+00:00 | 2020-07-19 02:23:14.930000+00:00 | null | python|chatbot | ['https://arxiv.org/abs/2005.14165', 'https://www.youtube.com/watch?v=SY5PvZrJhLE', 'https://colab.research.google.com/drive/1dKCqwNwPCsLfLHw9KkScghBJkOrU9PAs?usp=sharing'] | 3 |
40,019,480 | <p>You can use gradient boosting packages which handle missing values and are ideal for your case.Since you asked for packages gbm in R and xgboost in python can be used.If you want to know how missing values are handled automatically in xgboost go through section 3.4 of <a href="https://arxiv.org/pdf/1603.02754v3.pdf" rel="nofollow">this paper</a> to get an insight.</p> | 2016-10-13 11:16:17.327000+00:00 | 2016-10-13 11:16:17.327000+00:00 | null | null | 39,993,738 | <p>I am currently working with a quite particular dataset : it has about 1000 columns and 1M rows, but about 90% of the values are Nan.
This is not because the records are bad, but because the data represent measurement made on individuals and only about 100 features are relevant for each individual. As such, imputing missing values would completely destroy the information in the data. </p>
<p>It is not easy either to just group together individuals that have the same features and only consider the column relevant to each subgroup, as this would actually yield extremely small groups for each set of columns (almost any combination of filled in columns is possible for a given individual). </p>
<p>The issue is, scikit learn dimension reduction methods cannot handle missing values. Is there a package that does, or should I use a different method and skip dimension reduction?
I </p> | 2016-10-12 08:18:54.340000+00:00 | 2016-10-13 11:16:17.327000+00:00 | null | python|machine-learning|pca|missing-data | ['https://arxiv.org/pdf/1603.02754v3.pdf'] | 1 |
42,498,221 | <p>In addition to the benefits you already mentioned, such as a larger <strong>receptive field</strong>, <strong>efficient computation</strong> and <strong>lower memory consumption</strong>, dilated causal convolutions also have the following benefits:</p>
<ul>
<li>it <strong>preserves the resolution/dimensions of data</strong> at the output layer. This is because the layers are dilated instead of pooling, hence the name <em>dilated causal convolutions</em>.</li>
<li>it <strong>maintains the ordering of data</strong>. For example, in 1D dilated causal convolutions when the prediction of output depends on previous inputs then the structure of convolution helps in maintaining the ordering of data. </li>
</ul>
<p>I'd refer you to read this amazing paper <a href="https://arxiv.org/pdf/1609.03499.pdf" rel="noreferrer">WaveNet</a> which applies dilated causal convolutions to raw audio waveform for generating speech, music and even recognize speech from raw audio waveform.</p>
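<p>As a sketch of how such a stack looks in code (this assumes <code>tf.keras</code> and is my own illustration, not code from the paper):</p>
<pre><code>import tensorflow as tf

inputs = tf.keras.Input(shape=(None, 1))          # variable-length 1-D signal
x = inputs
for d in (1, 2, 4, 8):                            # dilation doubles at every layer
    x = tf.keras.layers.Conv1D(filters=16, kernel_size=2,
                               dilation_rate=d, padding="causal",
                               activation="relu")(x)
outputs = tf.keras.layers.Conv1D(1, 1)(x)         # 1x1 conv keeps the resolution
model = tf.keras.Model(inputs, outputs)
model.summary()
# With kernel_size=2 and dilations 1, 2, 4, 8 the receptive field covers
# 1 + 1 + 2 + 4 + 8 = 16 past samples, while output length and ordering match the input.
</code></pre>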
<p>I hope you find this answer helpful.</p> | 2017-02-28 00:24:07.500000+00:00 | 2017-02-28 00:24:07.500000+00:00 | null | null | 41,178,576 | <p>I refer to <a href="https://arxiv.org/abs/1511.07122" rel="noreferrer">Multi-Scale Context Aggregation by Dilated Convolutions</a>.</p>
<ul>
<li>A 2x2 kernel would have holes in it such that it becomes a 3x3 kernel.</li>
<li>A 3x3 kernel would have holes in it such that it becomes a 5x5 kernel.</li>
<li>Above assumes interval 1 of course.</li>
</ul>
<p>I can clearly see that this allows you to effectively use 4 parameters but have a receptive field of 3x3 and 9 parameters but have a receptive field of 5x5. </p>
<p>Is the case of dilated convolution simply to save on parameters while reaping the benefit of a larger receptive field and thus save memory and computations?</p> | 2016-12-16 06:35:32.783000+00:00 | 2017-06-19 14:30:41.470000+00:00 | 2016-12-16 17:00:11.463000+00:00 | deep-learning | ['https://arxiv.org/pdf/1609.03499.pdf'] | 1 |
44,147,945 | <p><strong>TLDR</strong></p>
<ol>
<li>Dilated convolutions have generally improved performance (see the better semantic segmentation results in <a href="https://arxiv.org/pdf/1511.07122.pdf" rel="noreferrer">Multi-Scale Context Aggregation by Dilated Convolutions</a>)</li>
<li><p>The more important point is that <strong>the architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage</strong>.</p></li>
<li><p>Allows one to have <strong>larger receptive field</strong> with <strong>same computation and memory costs</strong> while also <strong>preserving resolution</strong>.</p></li>
<li><strong>Pooling</strong> and <strong>Strided Convolutions</strong> are similar concepts but both <strong>reduce the resolution</strong>. </li>
</ol>
<p>@Rahul referenced <a href="https://arxiv.org/pdf/1609.03499.pdf" rel="noreferrer">WaveNet</a>, which puts it very succinctly in section 2.1, Dilated Causal Convolutions. It is also worth looking at <a href="https://arxiv.org/pdf/1511.07122.pdf" rel="noreferrer">Multi-Scale Context Aggregation by Dilated Convolutions</a>. I break it down further here:</p>
<p><a href="https://i.stack.imgur.com/tOg0g.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tOg0g.png" alt="Screenshot from https://arxiv.org/pdf/1511.07122.pdf"></a></p>
<ul>
<li>Figure (a) is a 1-dilated 3x3 convolution filter. In other words, it's a standard 3x3 convolution filter.</li>
<li>Figure (b) is a 2-dilated 3x3 convolution filter. The red dots are where the weights are and everywhere else is 0. In other words, it's a <strong>5x5 convolution filter with 9 non-zero weights and everywhere else 0</strong>, as mentioned in the question. The receptive field in this case is 7x7 because each unit in the previous output has a receptive field of 3x3. The highlighted portions in blue show the receptive field and <strong>NOT</strong> the convolution filter (you could see it as a convolution filter if you wanted to but it's not helpful).</li>
<li>Figure (c) is a 4-dilated 3x3 convolution filter. It's a <strong>9x9 convolution filter with 9 non-zeros weights and everywhere else 0</strong>. From (b), we have it that each unit now has a 7x7 receptive field, and hence you can see a 7x7 blue portion around each red dot.</li>
</ul>
<p>To draw an explicit contrast, consider this:</p>
<ul>
<li>If we use 3 successive layers of 3x3 convolution filters with stride of 1, the effective receptive field will only be 7x7 at the end of it. However, with the same computation and memory costs, we can achieve 15x15 with dilated convolutions. Both operations preserve resolution.</li>
<li>If we use 3 successive layers of 3x3 convolution filters with increasing stride at an exponential rate at exactly the same rate as dilated convolutions in the paper, we will get a 15x15 receptive field at the end of it <strong>but with loss of coverage</strong> eventually as the stride gets larger. What this loss of coverage means is that the effective receptive field at some point will not be what we see above. Some parts will not be overlapping.</li>
</ul> | 2017-05-24 02:27:37.947000+00:00 | 2017-06-19 14:30:41.470000+00:00 | 2017-06-19 14:30:41.470000+00:00 | null | 41,178,576 | <p>I refer to <a href="https://arxiv.org/abs/1511.07122" rel="noreferrer">Multi-Scale Context Aggregation by Dilated Convolutions</a>.</p>
<ul>
<li>A 2x2 kernel would have holes in it such that it becomes a 3x3 kernel.</li>
<li>A 3x3 kernel would have holes in it such that it becomes a 5x5 kernel.</li>
<li>Above assumes interval 1 of course.</li>
</ul>
<p>I can clearly see that this allows you to effectively use 4 parameters but have a receptive field of 3x3 and 9 parameters but have a receptive field of 5x5. </p>
<p>Is the case of dilated convolution simply to save on parameters while reaping the benefit of a larger receptive field and thus save memory and computations?</p> | 2016-12-16 06:35:32.783000+00:00 | 2017-06-19 14:30:41.470000+00:00 | 2016-12-16 17:00:11.463000+00:00 | deep-learning | ['https://arxiv.org/pdf/1511.07122.pdf', 'https://arxiv.org/pdf/1609.03499.pdf', 'https://arxiv.org/pdf/1511.07122.pdf', 'https://i.stack.imgur.com/tOg0g.png'] | 4 |
45,684,700 | <p><strong>I've revisited this topic recently in my work and thought I'd update with my current learnings for any who visit in the future.</strong></p>
<p>The topic appeared on <a href="https://github.com/tensorflow/models/issues/2544" rel="noreferrer">Tensorflow's Models repo issue tracker</a>. SSD allows you to set the ratio of how many negative:postive examples to mine (<code>max_negatives_per_positive: 3</code>), but you can also set a minimum number for images with no postives (<code>min_negatives_per_image: 3</code>). Both of these are defined in the model-ssd-loss config section. </p>
<p>That said, I don't see the same option in Faster-RCNN's model configuration. It's mentioned in the issue that <code>models/research/object_detection/core/balanced_positive_negative_sampler.py</code> contains the code used for Faster-RCNN.</p>
<p>One other option discussed in the issue is creating a second class specifically for lookalikes. During training, the model will attempt to learn class differences which should help serve your purpose.</p>
<p>Lastly, I came across this <a href="https://arxiv.org/pdf/1802.07845.pdf" rel="noreferrer">article</a> on Filter Amplifier Networks (FAN) that may be informative for your work on aerial imagery.</p>
<p>===================================================================</p>
<p>The following paper describes hard negative mining for the same purpose you describe:
<a href="https://arxiv.org/abs/1604.03540" rel="noreferrer">Training Region-based Object Detectors with Online Hard Example Mining</a></p>
<p>In section 3.1 they describe using a foreground and background class:</p>
<blockquote>
<p>Background RoIs. A region is labeled background (bg) if its maximum
IoU with ground truth is in the interval [bg lo, 0.5). A lower
threshold of bg lo = 0.1 is used by both FRCN and SPPnet, and is
hypothesized in [14] to crudely approximate hard negative mining; the
assumption is that regions with some overlap with the ground truth are
more likely to be the confusing or hard ones. We show in Section 5.4
that although this heuristic helps convergence and detection accuracy,
it is suboptimal because it ignores some infrequent, but important,
difficult background regions. Our method removes the bg lo threshold.</p>
</blockquote>
<p>In fact this paper is referenced and its ideas are used in Tensorflow's object detection losses.py code for hard mining:</p>
<pre><code>class HardExampleMiner(object):
"""Hard example mining for regions in a list of images.
Implements hard example mining to select a subset of regions to be
back-propagated. For each image, selects the regions with highest losses,
subject to the condition that a newly selected region cannot have
an IOU > iou_threshold with any of the previously selected regions.
This can be achieved by re-using a greedy non-maximum suppression algorithm.
A constraint on the number of negatives mined per positive region can also be
enforced.
Reference papers: "Training Region-based Object Detectors with Online
Hard Example Mining" (CVPR 2016) by Srivastava et al., and
"SSD: Single Shot MultiBox Detector" (ECCV 2016) by Liu et al.
"""
</code></pre>
<p>Based on your model config file, the HardMinerObject is returned by losses_builder.py in this bit of code:</p>
<pre><code>def build_hard_example_miner(config,
classification_weight,
localization_weight):
"""Builds hard example miner based on the config.
Args:
config: A losses_pb2.HardExampleMiner object.
classification_weight: Classification loss weight.
localization_weight: Localization loss weight.
Returns:
Hard example miner.
"""
loss_type = None
if config.loss_type == losses_pb2.HardExampleMiner.BOTH:
loss_type = 'both'
if config.loss_type == losses_pb2.HardExampleMiner.CLASSIFICATION:
loss_type = 'cls'
if config.loss_type == losses_pb2.HardExampleMiner.LOCALIZATION:
loss_type = 'loc'
max_negatives_per_positive = None
num_hard_examples = None
if config.max_negatives_per_positive > 0:
max_negatives_per_positive = config.max_negatives_per_positive
if config.num_hard_examples > 0:
num_hard_examples = config.num_hard_examples
hard_example_miner = losses.HardExampleMiner(
num_hard_examples=num_hard_examples,
iou_threshold=config.iou_threshold,
loss_type=loss_type,
cls_loss_weight=classification_weight,
loc_loss_weight=localization_weight,
max_negatives_per_positive=max_negatives_per_positive,
min_negatives_per_image=config.min_negatives_per_image)
return hard_example_miner
</code></pre>
<p>which is returned by model_builder.py and called by train.py. So basically, it seems to me that simply generating your true positive labels (with a tool like LabelImg or RectLabel) should be enough for the train algorithm to find hard negatives within the same images. The related question gives an excellent <a href="https://stackoverflow.com/questions/44973184/train-tensorflow-object-detection-on-own-dataset">walkthrough</a>.</p>
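<p>For the "image with no positives" case mentioned just below, here is a minimal sketch of the record you would write. The field names follow the Object Detection API's usual TFRecord conventions, the file names are hypothetical, and the snippet assumes TF 2.x for <code>tf.io.TFRecordWriter</code>:</p>
<pre><code>import tensorflow as tf

with open("negative_tile.jpg", "rb") as f:        # hypothetical 200x200 background-only tile
    encoded_jpg = f.read()

def bytes_feature(v): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[v]))
def int64_feature(v): return tf.train.Feature(int64_list=tf.train.Int64List(value=[v]))
def float_list(v): return tf.train.Feature(float_list=tf.train.FloatList(value=v))
def int64_list(v): return tf.train.Feature(int64_list=tf.train.Int64List(value=v))
def bytes_list(v): return tf.train.Feature(bytes_list=tf.train.BytesList(value=v))

example = tf.train.Example(features=tf.train.Features(feature={
    "image/encoded": bytes_feature(encoded_jpg),
    "image/format": bytes_feature(b"jpeg"),
    "image/height": int64_feature(200),
    "image/width": int64_feature(200),
    # empty lists: nothing to detect in this image
    "image/object/bbox/xmin": float_list([]),
    "image/object/bbox/xmax": float_list([]),
    "image/object/bbox/ymin": float_list([]),
    "image/object/bbox/ymax": float_list([]),
    "image/object/class/text": bytes_list([]),
    "image/object/class/label": int64_list([]),
}))

with tf.io.TFRecordWriter("negatives.record") as writer:
    writer.write(example.SerializeToString())
</code></pre>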
<p>In the event you want to feed in data that has no true positives (i.e. nothing should be classified in the image), just add the negative image to your tfrecord with no bounding boxes. </p> | 2017-08-14 23:57:52.720000+00:00 | 2018-04-29 18:59:31.477000+00:00 | 2018-04-29 18:59:31.477000+00:00 | null | 45,666,499 | <p>I'm setting up the new <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="noreferrer">Tensorflow Object Detection API</a> to find small objects in large areas of satellite imagery. It works quite well - it finds all 10 objects I want, but I also get 50-100 false positives [things that look a little like the target object, but aren't].</p>
<p>I'm using the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/faster_rcnn_resnet101_pets.config" rel="noreferrer">sample config</a> from the <a href="https://cloud.google.com/blog/big-data/2017/06/training-an-object-detector-using-cloud-machine-learning-engine" rel="noreferrer">'pets' tutorial</a>, to fine-tune the <code>faster_rcnn_resnet101_coco</code> model they offer. I've started small, with only 100 training examples of my objects (just 1 class). 50 examples in my validation set. Each example is a 200x200 pixel image with a labeled object (~40x40) in the center. I train until my precision & loss curves plateau.</p>
<p>I'm relatively new to using deep learning for object detection. What is the best strategy to increase my precision? e.g. Hard-negative mining? Increase my training dataset size? I've yet to try the most accurate model they offer <code>faster_rcnn_inception_resnet_v2_atrous_coco</code> as i'd like to maintain some speed, but will do so if needed.</p>
<p>Hard-negative mining seems to be a logical step. If you agree, how do I implement it w.r.t setting up the tfrecord file for my training dataset? Let's say I make 200x200 images for each of the 50-100 false positives:</p>
<ul>
<li>Do I create 'annotation' xml files for each, with no 'object' element?</li>
<li>...or do I label these hard negatives as a second class?</li>
<li>If I then have 100 negatives to 100 positives in my training set - is that a healthy ratio? How many negatives can I include?</li>
</ul> | 2017-08-14 01:54:44.927000+00:00 | 2021-09-04 07:18:46.157000+00:00 | 2018-04-30 00:09:19.593000+00:00 | machine-learning|tensorflow|computer-vision|deep-learning|object-detection | ['https://github.com/tensorflow/models/issues/2544', 'https://arxiv.org/pdf/1802.07845.pdf', 'https://arxiv.org/abs/1604.03540', 'https://stackoverflow.com/questions/44973184/train-tensorflow-object-detection-on-own-dataset'] | 4 |
17,895,809 | <pre class="lang-c prettyprint-override"><code>_space = cpSpaceNew(); // setup the world in which the simulation takes place
_space->elasticIterations = _space->iterations;
_space->gravity = cpv(0.0f, GRAVITY); // initial gravity vector
// CGSize size = [[self view] bounds].size;
//setup the 'edges' of our world so the bouncing balls don't move offscreen
cpBody *edge = cpBodyNewStatic();
cpShape *shape = NULL;
int x1=384;
int x2=73;
int y1=191;
int y2=191;
for (int x=0 ; x<312; x++)
{
CGPoint Point1 = CGPointMake(x1, y1);
CGPoint Point2 = CGPointMake(x2, y2);
NSLog(@"(%i,%i) And (%i,%i) ",x1,y1,x2,y2);
shape = cpSegmentShapeNew(edge, Point1, Point2, 0.0f);
shape->u = 0.1f; // minimal friction on the ground
shape->e = 0.7f;
cpSpaceAddStaticShape(_space, shape);
x1--;
y2++;
        // a body can be represented by multiple shapes
}
x1=384;
x2=695;
y1=191;
y2=191;
for (int x=0 ; x<312; x++)
{
CGPoint Point1 = CGPointMake(x1, y1);
CGPoint Point2 = CGPointMake(x2, y2);
shape = cpSegmentShapeNew(edge, Point1, Point2, 0.0f);
shape->u = 0.1f; // minimal friction on the ground
shape->e = 0.9f;
cpSpaceAddStaticShape(_space, shape);
x1++;
y2++;
        // a body can be represented by multiple shapes
}
x1=73;
x2=73;
y1=502;
y2=813;
for (int x=0 ; x<312; x++)
{
CGPoint Point1 = CGPointMake(x1, y1);
CGPoint Point2 = CGPointMake(x2, y2);
shape = cpSegmentShapeNew(edge, Point1, Point2, 0.0f);
shape->u = 0.1f; // minimal friction on the ground
shape->e = 0.9f;
cpSpaceAddStaticShape(_space, shape);
y1++;
x2++;
        // a body can be represented by multiple shapes
}
x1=384;
x2=695;
y1=813;
y2=813;
for (int x=0 ; x<312; x++)
{
CGPoint Point1 = CGPointMake(x1, y1);
CGPoint Point2 = CGPointMake(x2, y2);
shape = cpSegmentShapeNew(edge, Point1, Point2, 0.0f);
shape->u = 0.1f; // minimal friction on the ground
shape->e = 0.7f;
cpSpaceAddStaticShape(_space, shape);
x1++;
y2--;
        // a body can be represented by multiple shapes
}
</code></pre>
<p>i Got a Solution <a href="http://arxiv.org/ftp/arxiv/papers/0811/0811.4121.pdf" rel="nofollow">Using this Document</a> i make A Rounded Space. Thank You</p> | 2013-07-27 08:25:17.053000+00:00 | 2013-09-24 10:10:41.970000+00:00 | 2013-09-24 10:10:41.970000+00:00 | null | 17,874,715 | <pre class="lang-c prettyprint-override"><code>_space = cpSpaceNew(); // setup the world in which the simulation takes place
_space->elasticIterations = _space->iterations;
_space->gravity = cpv(0.0f, GRAVITY); // initial gravity vector
CGSize size = [[self view] bounds].size;
//setup the 'edges' of our world so the bouncing balls don't move offscreen
cpBody *edge = cpBodyNewStatic();
cpShape *shape = NULL;
// left
shape = cpSegmentShapeNew(edge, cpvzero, cpv(0.0f, size.height-10), 0.0f);
shape->u = 0.1f; // minimal friction on the ground
shape->e = 0.7f;
cpSpaceAddStaticShape(_space, shape);
// a body can be represented by multiple shapes
// top
shape = cpSegmentShapeNew(edge, cpvzero, cpv(size.width-10, 0.0f), 0.0f);
shape->u = 0.1f;
shape->e = 0.7f;
cpSpaceAddStaticShape(_space, shape);
// right
shape = cpSegmentShapeNew(edge, cpv(size.width-10, 0.0f), cpv(size.width-10, size.height-10), 0.0f);
shape->u = 0.1f;
shape->e = 0.7f;
cpSpaceAddStaticShape(_space, shape);
// bottom
shape = cpSegmentShapeNew(edge, cpv(0.0f, size.height-10), cpv(size.width-10, size.height-10), 0.0f);
shape->u = 0.5f;
shape->e = 0.15f;
cpSpaceAddStaticShape(_space, shape);
</code></pre>
<p>Using this makes a rectangular space around the screen, but I want to make it rounded.
Any suggestions are welcome</p> | 2013-07-26 06:50:10.073000+00:00 | 2013-09-24 10:10:41.970000+00:00 | 2013-09-24 10:07:44.550000+00:00 | objective-c|chipmunk | ['http://arxiv.org/ftp/arxiv/papers/0811/0811.4121.pdf'] | 1 |
35,519,254 | <p><code>HoughCircles</code> alone is not a strong enough way to detect circles in a complex image like yours.</p>
<p>SO has already had some discussion about this. You could refer to these posts with good accepted answers:</p>
<p><strong>Standard way:</strong></p>
<p><a href="https://stackoverflow.com/questions/21612258/filled-circle-detection-using-cv2-in-python">Filled circle detection using CV2 in Python?</a></p>
<p><a href="https://stackoverflow.com/questions/15878325/what-are-the-possible-fast-ways-to-detect-circle-in-an-image">What are the possible fast ways to detect circle in an image?</a></p>
<p><strong>Noise image:</strong></p>
<p><a href="https://dsp.stackexchange.com/questions/5930/find-circle-in-noisy-data">https://dsp.stackexchange.com/questions/5930/find-circle-in-noisy-data</a></p>
<p><strong>Another method:</strong> </p>
<p><a href="http://staff.itee.uq.edu.au/lovell/aprs/dicta2003/pdf/0879.pdf" rel="noreferrer">Gradient Pair Vectors</a></p>
<p><a href="http://arxiv.org/ftp/arxiv/papers/1405/1405.5531.pdf" rel="noreferrer"> Learning Automata</a></p> | 2016-02-20 04:51:05.670000+00:00 | 2016-02-20 13:55:17.537000+00:00 | 2017-05-23 11:47:01.187000+00:00 | null | 35,519,102 | <p>everyone i'm fairly new to OpenCV and computer vision and i'm stuck at this problem , which might seem like a fairly trivial but forgive my noobness :)</p>
<p>I'm trying to detect Rebars from a cross-sectional image.</p>
<p><a href="https://i.stack.imgur.com/HAssA.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/HAssA.jpg" alt="Original Color Image"></a></p>
<p>I'm using this code:</p>
<pre><code>import cv2
import cv2.cv as cv
import numpy as np
img = cv2.imread('test/t2.jpg',0)
img = cv2.equalizeHist(img)
cimg = cv2.cvtColor(img,cv2.COLOR_GRAY2BGR)
circles = cv2.HoughCircles(img,cv.CV_HOUGH_GRADIENT,1,10,param1=50,param2=30,minRadius=0,maxRadius=25)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
# draw the outer circle
cv2.circle(cimg,(i[0],i[1]),i[2],(0,255,0),2)
cv2.imshow('detected circles',cimg)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>This is the result I'm getting currently, which is not good:
<a href="https://i.stack.imgur.com/rDmq2.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/rDmq2.jpg" alt="Result"></a></p>
<p>I'm looking for pointers on how to proceed with this problem and on how to learn more about CV, as I'm really interested!</p>
<p>Thanks a ton!</p> | 2016-02-20 04:26:09.767000+00:00 | 2017-03-18 09:30:59.027000+00:00 | 2016-02-20 11:01:42.093000+00:00 | python|opencv|computer-vision|artificial-intelligence | ['https://stackoverflow.com/questions/21612258/filled-circle-detection-using-cv2-in-python', 'https://stackoverflow.com/questions/15878325/what-are-the-possible-fast-ways-to-detect-circle-in-an-image', 'https://dsp.stackexchange.com/questions/5930/find-circle-in-noisy-data', 'http://staff.itee.uq.edu.au/lovell/aprs/dicta2003/pdf/0879.pdf', 'http://arxiv.org/ftp/arxiv/papers/1405/1405.5531.pdf'] | 5 |
57,290,109 | <p>What you need is Pointer Networks (<a href="https://arxiv.org/abs/1506.03134" rel="noreferrer">https://arxiv.org/abs/1506.03134</a>)</p>
<p>Here is a introduction quote from a post about it:</p>
<blockquote>
<p>Pointer networks are a new neural architecture that learns pointers to positions in an input sequence. This is new because existing techniques need to have a fixed number of target classes, which isn't generally applicable— consider the Travelling Salesman Problem, in which the number of classes is equal to the number of inputs. An additional example would be sorting a variably sized sequence.
- <a href="https://finbarr.ca/pointer-networks/" rel="noreferrer">https://finbarr.ca/pointer-networks/</a></p>
</blockquote>
<p>It's an attention-based model.</p>
<p>Essentially a pointer network is used to predict pointers back to the input, meaning your output layer isn't actually fixed, but variable.</p>
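<p>A toy numpy sketch of that pointing step, with random weights just to show the shapes:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
enc = rng.normal(size=(7, 16))        # encoder states for 7 input tokens
query = rng.normal(size=(16,))        # current decoder state
W1, W2, v = rng.normal(size=(16, 16)), rng.normal(size=(16, 16)), rng.normal(size=(16,))

scores = np.tanh(enc @ W1 + query @ W2) @ v   # one additive-attention score per input position
probs = np.exp(scores - scores.max())
probs /= probs.sum()                          # softmax over the 7 positions
print(probs, probs.argmax())                  # the argmax is the "pointer" into the input
</code></pre>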
<p>A use case where I have used them is for translating raw text into SQL queries. </p>
<ul>
<li>Input: "HOW MANY CARS WERE SOLD IN US IN 1983" </li>
<li>Output: SELECT COUNT(Car_id) FROM Car_table WHERE (Country='US' AND
Year=='1983')</li>
</ul>
<p>The issue with raw text such as this is that it only makes sense w.r.t. a specific table (in this case a car table with a set of variables around car sales, similar to your different boards for board games), meaning the question can't be the only input. So the input that actually goes into the pointer network is a combination of -</p>
<p>Input - </p>
<ol>
<li>Query</li>
<li>Metadata of the table (column names)</li>
<li>Token vocabulary for all categorical columns</li>
<li>Keywords from SQL syntax (SELECT, WHERE etc..)</li>
</ol>
<p>All of these are appended together.</p>
<p>The output layer then simply points back to specific indexes of the input. It points to Country and Year (from column names in metadata), it points to US and 1983 (from tokens in vocabulary of categorical columns), it points to SELECT, WHERE etc from the SQL syntax component of the input.</p>
<p>The sequence of these indexes into the appended input is then used as the output of your computation graph, and optimized using the WikiSQL training dataset.</p>
<p>Your case is quite similar, you need to pass the inputs, metadata of the game, and the stuff you need as part of your output as an appended index. Then the pointer network simply makes selections from the input (points to them).</p> | 2019-07-31 11:51:54.400000+00:00 | 2019-07-31 16:14:33.437000+00:00 | 2019-07-31 16:14:33.437000+00:00 | null | 49,655,891 | <p>I'm trying to use the approach described in this paper <a href="https://arxiv.org/abs/1712.01815" rel="noreferrer">https://arxiv.org/abs/1712.01815</a> to make the algorithm learn a new game. </p>
<p>There is only one problem that does not directly fit into this approach. The game I am trying to learn has no fixed board size. So currently the input tensor has dimensions <code>m*n*11</code>, where m and n are the dimensions of the game board and can vary each time the game is played. So first of all I need a neural network able to make use of such varying input sizes.</p>
<p>The size of the output is also a function of the board size, as it has a vector with entries for every possible move on the board, and so the output vector will be bigger if the board size increases. </p>
<p>I have read about recurrent and recursive neural networks but they all seem to relate to NLP, and I'm not sure on how to translate that to my problem. </p>
<p>Any ideas on NN architectures able to handle my case would be welcome. </p> | 2018-04-04 16:20:20.977000+00:00 | 2019-08-03 03:06:28.287000+00:00 | null | machine-learning|neural-network|conv-neural-network|rnn | ['https://arxiv.org/abs/1506.03134', 'https://finbarr.ca/pointer-networks/'] | 2 |
58,384,488 | <p>I know this question is about Visual Studio, but I'm going to try to answer for as many compilers as I can (including Visual Studio)…</p>
<p>A decade later there is progress! As of Visual Studio 2019 MSVC still doesn't support anything like this (even though it's <a href="https://arxiv.org/pdf/1907.00863.pdf" rel="nofollow noreferrer">the most popular builtin/intrinsic</a>), but as Pauli Nieminen mentioned above C++20 has <a href="http://wg21.link/p0479r5" rel="nofollow noreferrer"><code>likely</code> / <code>unlikely</code> attributes</a> which can be used to create likely/unlikely macros and MSVC usually adds support for new C++ standards pretty quickly (unlike C) so I expect Visual Studio 2021 to support them.</p>
<p>Currently (2019-10-14) only GCC supports these attributes, and even then only applied to labels, but it is sufficient to at least do some basic testing. Here is a quick implementation which you can <a href="https://godbolt.org/z/n32vRw" rel="nofollow noreferrer">test on Compiler Explorer</a>:</p>
<pre class="lang-cpp prettyprint-override"><code>#define LIKELY(expr) \
( \
([](bool value){ \
switch (value) { \
[[likely]] case true: \
return true; \
[[unlikely]] case false: \
return false; \
} \
}) \
(expr))
#define UNLIKELY(expr) \
( \
([](bool value){ \
switch (value) { \
[[unlikely]] case true: \
return true; \
[[likely]] case false: \
return false; \
} \
}) \
(expr))
</code></pre>
<p><em>Edit (2022-05-02)</em>: MSVC 2022 supports C++20, including <code>[[likely]]</code>/<code>[[unlikely]]</code>, but generates absolutely terrible code for this (see the comments on this post)... don't use it there.</p>
<p>You'll probably want to #ifdef around it to support compilers that can't handle it, but luckily most compilers support <code>__builtin_expect</code>:</p>
<ul>
<li>GCC 3.0</li>
<li>clang</li>
<li>ICC since at least 13, probably much longer.</li>
<li>Oracle Development Studio 12.6+, but only in C++ mode.</li>
<li>ARM 4.1</li>
<li>IBM XL C/C++ since at least 10.1, probably longer.</li>
<li>TI since 6.1</li>
<li>TinyCC since 0.9.27</li>
</ul>
<p>GCC 9+ also supports <a href="https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html#index-_005f_005fbuiltin_005fexpect_005fwith_005fprobability" rel="nofollow noreferrer"><code>__builtin_expect_with_probability</code></a>. It's not available anywhere else, but hopefully one day… It takes a lot of the guesswork out of deciding whether to use likely/unlikely: you just set the probability and the compiler (theoretically) does the right thing.</p>
<p>Also, clang supports a <a href="https://clang.llvm.org/docs/LanguageExtensions.html#builtin-unpredictable" rel="nofollow noreferrer"><code>__builtin_unpredictable</code></a> (since 3.8, but test for it with <code>__has_builtin(__builtin_unpredictable)</code>). Since a lot of compilers are based on clang these days it probably works in them, too.</p>
<p>If you want this all wrapped up and ready to go, you might be interested in one of my projects, <a href="https://nemequ.github.io/hedley/" rel="nofollow noreferrer">Hedley</a>. It's a single public-domain C/C++ header which works on pretty much all compilers and contains lots of useful macros, including <a href="https://nemequ.github.io/hedley/api-reference.html#HEDLEY_LIKELY" rel="nofollow noreferrer"><code>HEDLEY_LIKELY</code></a>, <a href="https://nemequ.github.io/hedley/api-reference.html#HEDLEY_UNLIKELY" rel="nofollow noreferrer"><code>HEDLEY_UNLIKELY</code></a>, <a href="https://nemequ.github.io/hedley/api-reference.html#HEDLEY_UNPREDICTABLE" rel="nofollow noreferrer"><code>HEDLEY_UNPREDICTABLE</code></a>, <a href="https://nemequ.github.io/hedley/api-reference.html#HEDLEY_PREDICT" rel="nofollow noreferrer"><code>HEDLEY_PREDICT</code></a>, <a href="https://nemequ.github.io/hedley/api-reference.html#HEDLEY_PREDICT_TRUE" rel="nofollow noreferrer"><code>HEDLEY_PREDICT_TRUE</code></a>, and <a href="https://nemequ.github.io/hedley/api-reference.html#HEDLEY_PREDICT_FALSE" rel="nofollow noreferrer"><code>HEDLEY_PREDICT_FALSE</code></a>. It doesn't have the C++20 version quite yet, but <a href="https://github.com/nemequ/hedley/issues/29" rel="nofollow noreferrer">it should be there soon</a>…</p>
<p>Even if you don't want to use Hedley in your project, you might want to check the the implementations there instead of relying on the lists above; I'll probably forget to update this answer with new information, but Hedley should always be up-to-date.</p> | 2019-10-14 21:50:55.037000+00:00 | 2022-05-02 13:04:21.217000+00:00 | 2022-05-02 13:04:21.217000+00:00 | null | 1,440,570 | <p>GCC compiler supports __builtin_expect statement that is used to define likely and unlikely macros.</p>
<p>eg.</p>
<pre><code>#define likely(expr) (__builtin_expect(!!(expr), 1))
#define unlikely(expr) (__builtin_expect(!!(expr), 0))
</code></pre>
<p>Is there an equivalent statement for the Microsoft Visual C compiler, or something equivalent ?</p> | 2009-09-17 18:32:10.397000+00:00 | 2022-05-02 13:04:21.217000+00:00 | 2018-10-22 16:50:33.697000+00:00 | visual-studio|gcc|optimization|compiler-construction|likely-unlikely | ['https://arxiv.org/pdf/1907.00863.pdf', 'http://wg21.link/p0479r5', 'https://godbolt.org/z/n32vRw', 'https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html#index-_005f_005fbuiltin_005fexpect_005fwith_005fprobability', 'https://clang.llvm.org/docs/LanguageExtensions.html#builtin-unpredictable', 'https://nemequ.github.io/hedley/', 'https://nemequ.github.io/hedley/api-reference.html#HEDLEY_LIKELY', 'https://nemequ.github.io/hedley/api-reference.html#HEDLEY_UNLIKELY', 'https://nemequ.github.io/hedley/api-reference.html#HEDLEY_UNPREDICTABLE', 'https://nemequ.github.io/hedley/api-reference.html#HEDLEY_PREDICT', 'https://nemequ.github.io/hedley/api-reference.html#HEDLEY_PREDICT_TRUE', 'https://nemequ.github.io/hedley/api-reference.html#HEDLEY_PREDICT_FALSE', 'https://github.com/nemequ/hedley/issues/29'] | 13 |
6,599,972 | <p>I read a research paper about that. You can find it at <a href="http://arxiv.org/abs/1105.5170" rel="nofollow">http://arxiv.org/abs/1105.5170</a></p>
<p>The researchers were interested in studying the "strong relationships" in the Twitter social graph, and they use metrics based on the number of messages between users (@... mentions).</p>
<p>I'm not a twitter user, but this looks like what you want to do. They add access to the whole social network, which is not available to everyone, but it could probably be adapted for a single user.</p> | 2011-07-06 16:38:19.723000+00:00 | 2011-07-06 16:38:19.723000+00:00 | null | null | 2,830,264 | <p>What's the best way to find someone's top friends on twitter? </p>
<p>I'm trying to figure out a way to see who they interact with the most, but I am not sure what the best way is of doing this. Also, there are obvious things to check (is the username following that person, etc etc). </p>
<p>I imagine someone else has thought about this, so I am trying to get a little smarter on it. I am using Python to figure this out. Any help would be great.</p> | 2010-05-13 20:58:15.350000+00:00 | 2011-07-06 16:38:19.723000+00:00 | null | python|twitter|graph | ['http://arxiv.org/abs/1105.5170'] | 1 |
50,605,338 | <p>Yes, <code>tf.layers.batch_normalization()</code> works with batches of single elements. Doing <em>batch normalization</em> on such batches is actually named <em>instance normalization</em> (i.e. normalization of a single instance).</p>
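<p>A small numpy illustration of what that normalisation computes for a single NHWC example (my own sketch, not from the linked paper):</p>
<pre><code>import numpy as np

x = np.random.randn(1, 32, 32, 8)                       # a single example, NHWC
mean = x.mean(axis=(0, 1, 2), keepdims=True)            # per-channel statistics
var = x.var(axis=(0, 1, 2), keepdims=True)
normalised = (x - mean) / np.sqrt(var + 1e-5)
print(normalised.mean(axis=(0, 1, 2)))                  # ~0 per channel
print(normalised.var(axis=(0, 1, 2)))                   # ~1 per channel
</code></pre>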
<p>@Maxim made a great <a href="https://stackoverflow.com/a/48118940/624547">post</a> about <em>instance normalization</em> if you want to know more. You can also find more theory on the web and in the literature, e.g. <a href="https://arxiv.org/abs/1607.08022" rel="nofollow noreferrer">Instance Normalization: The Missing Ingredient for Fast Stylization</a>.</p> | 2018-05-30 12:57:20.613000+00:00 | 2018-05-30 12:57:20.613000+00:00 | null | null | 50,602,885 | <p>everyone. I am using tensorflow 1.4 to train a model like <strong>U-net</strong> for my purpose. Due to the constraints of my hardware, when training, the <strong>batch_size could only set to be 1 otherwise there will be OOM error.</strong> </p>
<p>Here comes my question. In this case, when the batch_size equals 1, will <code>tf.layers.batch_normalization()</code> work correctly (regarding the moving average, moving variance, gamma and beta)? Will a small batch_size make it unstable?</p>
<p>In my work, I set <code>training=True</code> <strong>when training</strong>, and <code>training=False</code> <strong>when testing</strong>. When training, I use</p>
<pre><code>logits = mymodel.inference()
loss = tf.losses.mean_squared_error(labels, logits)
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
train_op = optimizer.minimize(loss)
...
saver = tf.train.Saver(tf.global_variables())
with tf.Session() as sess:
sess.run(tf.group(tf.global_variables_initializer(),
tf.local_variables_initializer()))
sess.run(train_op)
...
saver.save(sess, save_path, global_step)
</code></pre>
<p>when testing, I use:</p>
<pre><code>logits = model.inference()
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
sess.run(tf.local_variables_initializer())
results = sess.run(logits)
</code></pre>
<p>Could anyone tell me that am I using this wrong? And how much influence with <strong>batch_size equals to 1</strong> in <strong>tf.layers.batch_normalization()</strong>?</p>
<p>Any help will be appreciated! Thanks in advance.</p> | 2018-05-30 10:55:47.980000+00:00 | 2018-05-30 12:57:20.613000+00:00 | 2018-05-30 11:02:39.073000+00:00 | tensorflow|batch-normalization | ['https://stackoverflow.com/a/48118940/624547', 'https://arxiv.org/abs/1607.08022'] | 2 |
45,857,569 | <p>Actually, Ross Girshick's py-faster-rcnn repo includes an implementation for the SVD step: <a href="https://github.com/rbgirshick/py-faster-rcnn/blob/master/tools/compress_net.py" rel="nofollow noreferrer"><code>compress_net.py</code></a>.</p>
<p>BTW, you usually need to fine-tune the compressed model to recover the accuracy (or to compress in a more sophisticated way, see for example "<a href="https://arxiv.org/abs/1505.06798" rel="nofollow noreferrer">Accelerating Very Deep Convolutional Networks for Classification and Detection</a>", Zhang et al).</p>
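<p>The core idea fits in a few lines of numpy (a minimal sketch; the layer sizes and <code>k</code> below are made up, and in Caffe the two factors would become two smaller fully connected layers):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 4096)).astype(np.float32)  # fc weights (out x in)
b = rng.standard_normal(1024).astype(np.float32)          # fc bias
k = 128                                                   # singular values kept

U, S, Vt = np.linalg.svd(W, full_matrices=False)
W1 = Vt[:k, :]            # first small layer:  k x 4096, no bias
W2 = U[:, :k] * S[:k]     # second small layer: 1024 x k, keeps the original bias

x = rng.standard_normal(4096).astype(np.float32)
y_full = W @ x + b
y_trunc = W2 @ (W1 @ x) + b   # close to y_full; parameters drop from 1024*4096 to k*(1024+4096)
</code></pre>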
<p>Also, for me scipy.linalg.svd worked faster than numpy's svd.</p> | 2017-08-24 09:13:43.377000+00:00 | 2017-08-31 11:47:20.297000+00:00 | 2017-08-31 11:47:20.297000+00:00 | null | 40,480,827 | <p>In the paper <a href="https://arxiv.org/abs/1504.08083" rel="noreferrer"><em>Girshick, R</em> <strong>Fast-RCNN</strong> (ICCV 2015)</a>, section "3.1 Truncated SVD for faster detection", the author proposes to use <a href="https://en.wikipedia.org/wiki/Singular_value_decomposition" rel="noreferrer">SVD</a> trick to reduce the size and computation time of a fully connected layer. </p>
<p>Given a <em>trained</em> model (<code>deploy.prototxt</code> and <code>weights.caffemodel</code>), how can I use this trick to replace a fully connected layer with a truncated one?</p> | 2016-11-08 07:02:24.750000+00:00 | 2019-07-09 05:17:38.920000+00:00 | 2016-11-14 06:31:08.150000+00:00 | machine-learning|neural-network|linear-algebra|deep-learning|caffe | ['https://github.com/rbgirshick/py-faster-rcnn/blob/master/tools/compress_net.py', 'https://arxiv.org/abs/1505.06798'] | 2 |
34,036,805 | <p>Some suggestions for improving the Brill's tagger were presented in the papers "Independence and Commitment: Assumptions for Rapid Training and Execution of Rule-based POS Taggers" and "Transformation-Based Learning in the Fast Lane." In addition, the rule-based POS and morphological tagging toolkit <a href="http://rdrpostagger.sourceforge.net/" rel="nofollow">RDRPOSTagger</a> also provides improvements for the Brill's tagger, where transformation-based rules are stored in the form of a binary decision tree. So RDRPOSTagger obtains very fast training and tagging performance with better accuracy than Brill's. See results <a href="http://arxiv.org/abs/1412.4021" rel="nofollow">here</a>.</p> | 2015-12-02 07:08:26.343000+00:00 | 2015-12-02 07:08:26.343000+00:00 | null | null | 2,341,907 | <p>What are the weaknesses and strengths of the Brill Tagger? Can you suggest some possible improvements for the tagger?</p> | 2010-02-26 13:30:55.077000+00:00 | 2015-12-02 07:08:26.343000+00:00 | null | nlp|tagging|part-of-speech | ['http://rdrpostagger.sourceforge.net/', 'http://arxiv.org/abs/1412.4021'] | 2 |
34,583,004 | <p>OK, I'm sorry but sometimes this happens (I'm the author).</p>
<p>Originally the function had two memcpy(). Then I realised that a circular copy was needed. But I replaced just the <em>first</em> memcpy(). Stupid, stupid, stupid. All files on the site have been fixed. The arXiv copy is undergoing update. See <a href="http://xorshift.di.unimi.it/xorshift1024star.c">http://xorshift.di.unimi.it/xorshift1024star.c</a> </p>
<p>Incidentally: I didn't "publish" anything wrong in the scientific sense, as the jump() function is not part of the ACM Trans. Math. Soft. paper—it was just added a few weeks ago on the site and on the arXiv/WWW version. The fast publication path of the web and arXiv means that, sometimes, one distributes unpolished papers. I can only thank the reporter for reporting this bug (OK, technically StackOverflow is not reporting bugs, but I got an email, too).</p>
<p>Unfortunately, the unit test I had did not consider the case p ≠ 0. My main concern was the correctness of the computed polynomial. The function, as noted above, is correct when p = 0.</p>
<p>As for the computation: to each generator corresponds a characteristic polynomial P(x). The jump polynomial for k is just x^k mod P(x). I use fermat to compute such powers, and then I have some scripts generating the C code.</p>
<p>Of course I can't test 2^512, but since my generation code works perfectly from 2 to 2^30 (the range you can easily test), I'm confident it works at 2^512, too. It's just fermat computing x^{2^512} instead of x^{2^30}. But independent verifications are more than welcome.</p>
<p>I have code working only for powers of the form x^{2^t}. This is what I need to compute useful jump functions. Computing polynomials modulo P(x) is not difficult, so one could conceivably have a completely generic jump function for any value, but frankly I find this totally overkill.</p>
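<p>For the curious, the modular arithmetic itself is short. Here is a minimal Python sketch (not the fermat-based scripts mentioned above) that computes x^(2^t) mod P(x) over GF(2), storing a polynomial as an integer whose bit i is the coefficient of x^i; for xorshift1024* you would plug in the degree-1024 polynomial given at the end of this answer and t = 512:</p>
<pre><code>def gf2_mulmod(a, b, p):
    """Carry-less product a*b reduced modulo the polynomial p (all over GF(2))."""
    deg = p.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> deg) & 1:
            a ^= p
    return r

def jump_poly(p, t):
    """x^(2^t) mod p, obtained by squaring the polynomial x, t times."""
    x = 0b10                      # the polynomial 'x'
    for _ in range(t):
        x = gf2_mulmod(x, x, p)
    return x

# Tiny example with P(x) = x^7 + x + 1 (NOT the xorshift1024* polynomial):
P = (1 << 7) | (1 << 1) | 1
print(bin(jump_poly(P, 3)))       # x^8 mod P = x^2 + x, i.e. 0b110
</code></pre>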
<p>If anybody is interested in getting other jump polynomials, I can provide the scripts. They will be part, as it happens for all other code, of the next xorshift distribution, but I need to complete the documentation before giving them out.</p>
<p>For the record, the characteristic polynomial of xorshift1024* is x^1024 + x^974 + x^973 + x^972 + x^971 + x^966 + x^965 + x^964 + x^963 + x^960 + x^958 + x^957 + x^956 + x^955 + x^950 + x^949 + x^948 + x^947 + x^942 + x^941 + x^940 + x^939 + x^934 + x^933 + x^932 + x^931 + x^926 + x^925 + x^923 + x^922 + x^920 + x^917 + x^916 + x^915 + x^908 + x^906 + x^904 + x^902 + x^890 + x^886 + x^873 + x^870 + x^857 + x^856 + x^846 + x^845 + x^844 + x^843 + x^841 + x^840 + x^837 + x^835 + x^830 + x^828 + x^825 + x^824 + x^820 + x^816 + x^814 + x^813 + x^811 + x^810 + x^803 + x^798 + x^797 + x^790 + x^788 + x^787 + x^786 + x^783 + x^774 + x^772 + x^771 + x^770 + x^769 + x^768 + x^767 + x^765 + x^760 + x^758 + x^753 + x^749 + x^747 + x^746 + x^743 + x^741 + x^740 + x^738 + x^737 + x^736 + x^735 + x^728 + x^726 + x^723 + x^722 + x^721 + x^720 + x^718 + x^716 + x^715 + x^714 + x^710 + x^709 + x^707 + x^694 + x^687 + x^686 + x^685 + x^684 + x^679 + x^678 + x^677 + x^674 + x^670 + x^669 + x^667 + x^666 + x^665 + x^663 + x^658 + x^655 + x^651 + x^639 + x^638 + x^635 + x^634 + x^632 + x^630 + x^623 + x^621 + x^618 + x^617 + x^616 + x^615 + x^614 + x^613 + x^609 + x^606 + x^604 + x^601 + x^600 + x^598 + x^597 + x^596 + x^594 + x^593 + x^592 + x^590 + x^589 + x^588 + x^584 + x^583 + x^582 + x^581 + x^579 + x^577 + x^575 + x^573 + x^572 + x^571 + x^569 + x^567 + x^565 + x^564 + x^563 + x^561 + x^559 + x^557 + x^556 + x^553 + x^552 + x^550 + x^544 + x^543 + x^542 + x^541 + x^537 + x^534 + x^532 + x^530 + x^528 + x^526 + x^523 + x^521 + x^520 + x^518 + x^516 + x^515 + x^512 + x^511 + x^510 + x^508 + x^507 + x^506 + x^505 + x^504 + x^502 + x^501 + x^499 + x^497 + x^494 + x^493 + x^492 + x^491 + x^490 + x^487 + x^485 + x^483 + x^482 + x^480 + x^479 + x^477 + x^476 + x^475 + x^473 + x^469 + x^468 + x^465 + x^463 + x^461 + x^460 + x^459 + x^458 + x^455 + x^453 + x^451 + x^448 + x^447 + x^446 + x^445 + x^443 + x^438 + x^437 + x^431 + x^430 + x^429 + x^428 + x^423 + x^417 + x^416 + x^415 + x^414 + x^412 + x^410 + x^409 + x^408 + x^400 + x^398 + x^396 + x^395 + x^391 + x^390 + x^386 + x^385 + x^381 + x^380 + x^378 + x^375 + x^373 + x^372 + x^369 + x^368 + x^365 + x^360 + x^358 + x^357 + x^354 + x^350 + x^348 + x^346 + x^345 + x^344 + x^343 + x^342 + x^340 + x^338 + x^337 + x^336 + x^335 + x^333 + x^332 + x^325 + x^323 + x^318 + x^315 + x^313 + x^309 + x^308 + x^305 + x^303 + x^302 + x^300 + x^294 + x^290 + x^281 + x^279 + x^276 + x^275 + x^273 + x^272 + x^267 + x^263 + x^262 + x^261 + x^260 + x^258 + x^257 + x^256 + x^249 + x^248 + x^243 + x^242 + x^240 + x^238 + x^236 + x^233 + x^232 + x^230 + x^228 + x^225 + x^216 + x^214 + x^212 + x^210 + x^208 + x^206 + x^205 + x^200 + x^197 + x^196 + x^184 + x^180 + x^176 + x^175 + x^174 + x^173 + x^168 + x^167 + x^166 + x^157 + x^155 + x^153 + x^152 + x^151 + x^150 + x^144 + x^143 + x^136 + x^135 + x^125 + x^121 + x^111 + x^109 + x^107 + x^105 + x^92 + x^90 + x^79 + x^78 + x^77 + x^76 + x^60 + 1</p> | 2016-01-03 23:51:38.277000+00:00 | 2016-01-03 23:51:38.277000+00:00 | null | null | 34,574,701 | <p>I've been porting Sebastiano Vigna's <a href="http://xorshift.di.unimi.it/xorshift1024star.c" rel="noreferrer"><strong>xorshift1024*</strong></a> PRNG to be compatible with the standard C++11 uniform random number generator contract and noticed some strange behavior with the <code>jump()</code> function he provides.</p>
<p>According to Vigna, a call to <code>jump()</code> should be equivalent to 2^512 calls to <code>next()</code>. Therefore a series of calls to <code>jump()</code> and <code>next()</code> should be commutative. For example, assuming the generator starts in some known state,</p>
<pre><code>jump();
next();
</code></pre>
<p>should leave the generator in the same state as</p>
<pre><code>next();
jump();
</code></pre>
<p>since both should be equivalent to</p>
<pre><code>for (bigint i = 0; i < (bigint(1) << 512) + 1; ++i)
next();
</code></pre>
<p>assuming <code>bigint</code> is some integer type with an extremely large maximum value (and assuming you are a very, very, very patient person).</p>
<p>Unfortunately, this doesn't work with the reference implementation Vigna provides (which I will include at the end for posterity; in case the implementation linked above changes or is taken down in the future). When testing the first two options using the following test code:</p>
<pre><code>memset(s, 0xFF, sizeof(s));
p = 0;
// jump() and/or next() calls...
std::cout << p << ';';
for (int i = 0; i < 16; ++i)
std::cout << ' ' << s[i];
</code></pre>
<p>calling <code>jump()</code> before <code>next()</code> outputs:</p>
<pre><code>1; 9726214034378009495 13187905351877324975 10033047168458208082 990371716258730972 965585206446988056 74622805968655940 11468976784638207029 3005795712504439672 6792676950637600526 9275830639065898170 6762742930827334073 16862800599087838815 13481924545051381634 16436948992084179560 6906520316916502096 12790717607058950780
</code></pre>
<p>while calling <code>next()</code> first results in:</p>
<pre><code>1; 13187905351877324975 10033047168458208082 990371716258730972 965585206446988056 74622805968655940 11468976784638207029 3005795712504439672 6792676950637600526 9275830639065898170 6762742930827334073 16862800599087838815 13481924545051381634 16436948992084179560 6906520316916502096 12790717607058950780 9726214034378009495
</code></pre>
<p>Clearly either my understanding of what <code>jump()</code> is doing is wrong, or there's a bug in the <code>jump()</code> function, or the jump polynomial data is wrong. Vigna claims that such a jump function can be calculated for any stride of the period, but doesn't elaborate on how to calculate it (including in his <a href="http://vigna.di.unimi.it/ftp/papers/xorshift.pdf" rel="noreferrer">paper on xorshift* generators</a>). How can I calculate the correct jump data to verify that there's not a typo somewhere in it?</p>
<hr>
<p>Xorshift1024* reference implementation; <a href="http://xorshift.di.unimi.it/xorshift1024star.c" rel="noreferrer">http://xorshift.di.unimi.it/xorshift1024star.c</a></p>
<pre><code>/* Written in 2014-2015 by Sebastiano Vigna ([email protected])
To the extent possible under law, the author has dedicated all copyright
and related and neighboring rights to this software to the public domain
worldwide. This software is distributed without any warranty.
See <http://creativecommons.org/publicdomain/zero/1.0/>. */
#include <stdint.h>
#include <string.h>
/* This is a fast, top-quality generator. If 1024 bits of state are too
much, try a xorshift128+ generator.
The state must be seeded so that it is not everywhere zero. If you have
a 64-bit seed, we suggest to seed a splitmix64 generator and use its
output to fill s. */
uint64_t s[16];
int p;
uint64_t next(void) {
const uint64_t s0 = s[p];
uint64_t s1 = s[p = (p + 1) & 15];
s1 ^= s1 << 31; // a
s[p] = s1 ^ s0 ^ (s1 >> 11) ^ (s0 >> 30); // b,c
return s[p] * UINT64_C(1181783497276652981);
}
/* This is the jump function for the generator. It is equivalent
to 2^512 calls to next(); it can be used to generate 2^512
non-overlapping subsequences for parallel computations. */
void jump() {
static const uint64_t JUMP[] = { 0x84242f96eca9c41dULL,
0xa3c65b8776f96855ULL, 0x5b34a39f070b5837ULL, 0x4489affce4f31a1eULL,
0x2ffeeb0a48316f40ULL, 0xdc2d9891fe68c022ULL, 0x3659132bb12fea70ULL,
0xaac17d8efa43cab8ULL, 0xc4cb815590989b13ULL, 0x5ee975283d71c93bULL,
0x691548c86c1bd540ULL, 0x7910c41d10a1e6a5ULL, 0x0b5fc64563b3e2a8ULL,
0x047f7684e9fc949dULL, 0xb99181f2d8f685caULL, 0x284600e3f30e38c3ULL
};
uint64_t t[16] = { 0 };
for(int i = 0; i < sizeof JUMP / sizeof *JUMP; i++)
for(int b = 0; b < 64; b++) {
if (JUMP[i] & 1ULL << b)
for(int j = 0; j < 16; j++)
t[j] ^= s[(j + p) & 15];
next();
}
memcpy(s, t, sizeof t);
}
</code></pre> | 2016-01-03 08:32:38.057000+00:00 | 2016-11-28 22:28:21.393000+00:00 | 2016-01-03 10:38:25.703000+00:00 | c++|random | ['http://xorshift.di.unimi.it/xorshift1024star.c'] | 1 |
62,353,534 | <p>The goal of <strong>machine learning</strong> methods is to learn rules from data and make predictions and/or decisions based on them. </p>
<p>The learning process can be done in a <strong>supervised</strong>, <strong>semi-supervised</strong>, <strong>unsupervised</strong>, or <strong>reinforcement</strong> learning fashion. </p>
<p>In <strong>reinforcement learning</strong> (RL), an agent interacts with an environment and learns an optimal policy, by trial and error (using reward points for successful actions and penalties for errors). It is used in sequential decision making problems [1].</p>
<p><strong>Deep learning</strong> as a sub-field of machine learning is a mathematical framework for learning latent rules in the data or new representations of the data at hand. The term "deep" refers to the number of learning layers in the framework. Deep learning can be used with any of the aforementioned learning strategies, i.e., supervised, semi-supervised, unsupervised, and reinforcement learning. </p>
<p>A <strong>deep reinforcement learning</strong> technique is obtained when deep learning is utilized by any of the components of reinforcement learning [1]. Note that <strong>Q-learning</strong> is a component of RL used to tell an agent what action needs to be taken in what situation. Detailed information can be found in [1].</p>
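<p>As a toy illustration (not taken from [1]), the tabular Q-learning update behind such an agent looks like this in Python; the state/action counts and hyper-parameters are made up:</p>
<pre><code>import random

n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1            # learning rate, discount, exploration
Q = [[0.0] * n_actions for _ in range(n_states)]

def choose_action(s):
    if random.random() < epsilon:                 # explore
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[s][a])   # exploit

def update(s, a, reward, s_next):
    best_next = max(Q[s_next])
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])

update(s=0, a=choose_action(0), reward=1.0, s_next=1)
</code></pre>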
<p>[1] Li, Yuxi. "Deep reinforcement learning: An overview." arXiv preprint arXiv:1701.07274 (2017).</p> | 2020-06-12 22:48:41.027000+00:00 | 2020-06-14 05:24:17.070000+00:00 | 2020-06-14 05:24:17.070000+00:00 | null | 50,542,818 | <p>What's the difference between reinforcement learning, deep learning, and deep reinforcement learning? Where does Q-learning fit in?</p> | 2018-05-26 12:34:31.090000+00:00 | 2022-06-29 21:40:53.407000+00:00 | null | machine-learning|neural-network|deep-learning|reinforcement-learning|q-learning | [] | 0 |
46,358,005 | <p>It's because the original LSTM model only applies dropout to the input and output layers (i.e. only to the non-recurrent connections). This paper is considered a "textbook" that describes the LSTM with dropout: <a href="https://arxiv.org/pdf/1409.2329.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1409.2329.pdf</a></p>
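<p>In other words, the scheme of that paper corresponds to wrapping only the non-recurrent connections; a minimal TF 1.x sketch (the keep probabilities are made up) would be:</p>
<pre><code>import tensorflow as tf   # TF 1.x style API

cell = tf.nn.rnn_cell.LSTMCell(num_units=256)
cell = tf.nn.rnn_cell.DropoutWrapper(
    cell,
    input_keep_prob=0.8,    # dropout on the non-recurrent input connection
    output_keep_prob=0.8)   # dropout on the non-recurrent output connection
# The recurrent (state-to-state) connections are left untouched.
</code></pre>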
<p>Recently some people tried applying dropout in recurrent layers as well. If you want to look at the implementation and the math behind it, search for "A Theoretically Grounded Application of Dropout in Recurrent Neural Networks" by Yarin Gal. I'm not sure Tensorflow or Keras already implemented this approach though.</p> | 2017-09-22 06:14:23.457000+00:00 | 2017-09-22 06:36:59.813000+00:00 | 2017-09-22 06:36:59.813000+00:00 | null | 46,242,660 | <p>Tensorflow's <code>DropoutWrapper</code> allows to apply dropout to either the cell's inputs, outputs or states. However, I haven't seen an option to do the same thing for the recurrent weights of the cell (4 out of the 8 different matrices used in the original LSTM formulation). I just wanted to check that this is the case before implementing a Wrapper of my own.</p>
<p>EDIT:</p>
<p>Apparently this functionality has been added in newer versions (my original comment referred to v1.4): <a href="https://github.com/tensorflow/tensorflow/issues/13103" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/13103</a></p> | 2017-09-15 15:07:46.340000+00:00 | 2018-04-12 09:53:51.060000+00:00 | 2018-04-12 09:53:51.060000+00:00 | python|tensorflow|lstm | ['https://arxiv.org/pdf/1409.2329.pdf'] | 1 |
<p>As far as I know this noise is not natural, and estimating it is almost impossible. As a simple and handy method, a median filter is the best choice. Since the noise level is so high, the images are heavily corrupted and a lot of image information is lost. In this situation, the best practice is to use a deep learning method, because such methods can restore the impaired parts of the image to a desirable level. </p>
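<p>For the median filter part, a minimal OpenCV sketch (the file name and the kernel size of 3 are placeholders; heavier noise may need 5 or 7 at the cost of more blurring):</p>
<pre><code>import cv2

noisy = cv2.imread("noisy.png")        # hypothetical input image
denoised = cv2.medianBlur(noisy, 3)    # 3x3 median filter
cv2.imwrite("denoised.png", denoised)
</code></pre>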
<p>I found the following paper promising for the deep learning route, but you will find more resources as well:</p>
<p><a href="https://arxiv.org/pdf/1810.10039.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1810.10039.pdf</a></p>
<p>On this page you will also find many strong deep learning methods and code for image denoising:</p>
<p><a href="https://github.com/z-bingo/awesome-image-denoising-state-of-the-art" rel="nofollow noreferrer">https://github.com/z-bingo/awesome-image-denoising-state-of-the-art</a></p> | 2020-01-18 15:32:20.030000+00:00 | 2020-01-18 15:32:20.030000+00:00 | null | null | 59,800,932 | <p>I am creating a generic method to work on salt and pepper noise and variants. The example images are as shown below :</p>
<p><a href="https://i.stack.imgur.com/qp7qO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qp7qO.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/KthYa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KthYa.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/7qYKc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7qYKc.png" alt="enter image description here"></a></p>
<p>I tried few methods, such as</p>
<ul>
<li>Median filter from scipy</li>
<li>Selective Adaptive Median Filter by Jayanta Das et al.</li>
</ul>
<p>The closest result was on Image 3, with Median filter, giving the closest result to the original image with no noise.</p>
<p>These are my following doubts :</p>
<ol>
<li>Can we consider these noises as salt and pepper noise? Is there something else that I am missing?</li>
<li>What could be the better suggested method? Currently, I am planning to implement Switching Median filter by Pei-Eng et al, but I would like to know if this could be the right track.</li>
</ol>
<p>The originals that I am trying to get closer to:</p>
<p><a href="https://i.stack.imgur.com/lXxDI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lXxDI.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/8GhrU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8GhrU.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/pDzAz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pDzAz.png" alt="enter image description here"></a></p> | 2020-01-18 13:14:10.007000+00:00 | 2020-01-18 15:32:20.030000+00:00 | 2020-01-18 13:57:54.207000+00:00 | opencv|image-processing|filtering|median|noise-reduction | ['https://arxiv.org/pdf/1810.10039.pdf', 'https://github.com/z-bingo/awesome-image-denoising-state-of-the-art'] | 2 |
40,851,298 | <p>For posterity, quoting myself from <a href="https://stackoverflow.com/a/30934839/4776939">an older answer</a>:</p>
<blockquote>
<p>Isabelle itself is implemented in Standard ML, but for communicating with the external world, it uses a protocol called <em>PIDE</em> (= Prover IDE). The reference implementation of PIDE is bundled with Isabelle and written in Scala, so it can be used with any JVM language. The primary application of PIDE is Isabelle/jEdit, which uses the jEdit editor to build an IDE for Isabelle, including markup, continuous checking, ...</p>
</blockquote>
<p>By reusing the underlying protocol, you can implement your own application on top of Isabelle.</p>
<p>As far as I'm aware, the current most advanced example for this is <a href="http://lara.epfl.ch/w/leon" rel="noreferrer">Leon</a>, which is an automated verification & synthesis toolkit for Scala programs. Internally, it uses <a href="https://github.com/larsrh/libisabelle" rel="noreferrer">libisabelle</a> to communicate with Isabelle. <em>(Full disclosure: I'm the author of libisabelle.)</em> An overview of how this works is given in <a href="https://arxiv.org/abs/1607.01539" rel="noreferrer">a paper</a>.</p>
<p>libisabelle itself is available as a stand-alone library including some basic documentation that should allow you to get started. See <a href="https://github.com/larsrh/libisabelle" rel="noreferrer">the repository</a> for more details. In essence, it allows you to</p>
<ul>
<li>manage Isabelle installations from within Scala (download, unpacking)</li>
<li>abstract over different Isabelle versions (currently supported: 2016 and the 2016-1 release candidates)</li>
<li>lifecycle management of an Isabelle session (building, starting, stopping)</li>
<li>treat Isabelle/ML functions as Scala functions</li>
<li>goodies like Isabelle term syntax in Scala (<code>term"$n > 0 --> ($b & ${HOLogic.True})"</code>)</li>
</ul>
<p>There is no built-in routine to set up a goal state and apply some proof steps, but the necessary infrastructure is all there.</p> | 2016-11-28 18:43:50.033000+00:00 | 2016-11-28 18:43:50.033000+00:00 | null | null | 40,848,265 | <p>I am looking at creating an Eclipse PDE and need to communicate with with Isabelle. I do find some publication stating that Scala can be used to communicate to Isabelle. </p>
<p>I am looking for an example how to use Scala to create proves in Isabelle.</p> | 2016-11-28 15:46:19.013000+00:00 | 2016-11-29 19:36:16.013000+00:00 | 2016-11-29 19:36:16.013000+00:00 | scala|isabelle | ['https://stackoverflow.com/a/30934839/4776939', 'http://lara.epfl.ch/w/leon', 'https://github.com/larsrh/libisabelle', 'https://arxiv.org/abs/1607.01539', 'https://github.com/larsrh/libisabelle'] | 5 |
65,405,898 | <p>The following paper proposes an algorithm that <em>uniformly</em> samples <strong>connected</strong> random graphs with prescribed degree sequence, with an efficient implementation. It is available in several libraries, like Networkit or igraph.</p>
<p><a href="https://arxiv.org/abs/cs/0502085" rel="nofollow noreferrer">Fast generation of random connected graphs with prescribed degrees.</a>
Fabien Viger, Matthieu Latapy</p>
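<p>For example, in python-igraph the Viger-Latapy sampler is exposed (as far as I know) through <code>method="vl"</code> of <code>Graph.Degree_Sequence</code>; the degree sequence below is just an illustration:</p>
<pre><code>import igraph as ig

degrees = [3, 3, 2, 2, 2, 2]                        # must admit a connected simple graph
g = ig.Graph.Degree_Sequence(degrees, method="vl")  # uniform over connected simple realizations
print(g.is_connected(), g.degree())
</code></pre>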
<p>Be careful when you make simulations on random graphs: if they are not sampled uniformly, then they may have hidden properties that impact simulations; alternatively, uniformly sampled graphs may be very different from the ones your code will meet in practice...</p> | 2020-12-22 09:07:46.010000+00:00 | 2020-12-22 09:07:46.010000+00:00 | null | null | 20,171,901 | <p>I want to be able to generate random, undirected, and connected graphs in Java. In addition, I want to be able to control the maximum number of vertices in the graph. I am not sure what would be the best way to approach this problem, but here are a few I can think of:</p>
<p>(1) Generate a number between <code>0</code> and <code>n</code> and let that be the number of vertices. Then, somehow randomly link vertices together (maybe generate a random number per vertex and let that be the number of edges coming out of said vertex). Traverse the graph starting from an arbitrary vertex (say with Breadth-First-Search) and let our random graph <code>G</code> be all the visited nodes (this way, we make sure that <code>G</code> is connected).</p>
<p>(2) Generate a random square matrix (of <code>0</code>'s and <code>1</code>'s) with side length between <code>0</code> and <code>n</code> (somehow). This would be the adjacency matrix for our graph (the diagonal of the matrix should then either be all <code>1</code>'s or all <code>0</code>'s). Make a data structure from the graph and traverse the graph from any node to get a connected list of nodes and call that the graph <code>G</code>.</p>
<p>Any other way to generate a sufficiently random graph is welcomed. <strong>Note</strong>: I do not need a purely random graph, i.e., the graph you generate doesn't have to have any special mathematical properties (like uniformity of some sort). I simply need lots and lots of graphs for testing purposes of something else.</p>
<p>Here is the Java <code>Node</code> class I am using:</p>
<pre><code>public class Node<T> {
T data;
ArrayList<Node> children= new ArrayList<Node>();
...}
</code></pre>
<p>Here is the <code>Graph</code> class I am using (you can tell why I am only interested in connected graphs at the moment):</p>
<pre><code>public class Graph {
Node mainNode;
ArrayList<Node> V= new ArrayList<Node>();
public Graph(Node node){
mainNode= node;
}
...}
</code></pre>
<p>As an example, this is how I make graphs for testing purposes right now:</p>
<pre><code>//The following makes a "kite" graph G (with "a" as the main node).
/* a-b
|/|
c-d
*/
Node<String> a= new Node("a");
Node<String> b= new Node("b");
Node<String> c= new Node("c");
Node<String> d= new Node("d");
a.addChild(b);
a.addChild(c);
b.addChild(a);
b.addChild(c);
b.addChild(d);
c.addChild(a);
c.addChild(b);
c.addChild(d);
d.addChild(c);
d.addChild(b);
Graph G1= new Graph(a);
</code></pre> | 2013-11-24 06:44:00.577000+00:00 | 2020-12-22 09:56:15.763000+00:00 | 2020-12-22 09:56:15.763000+00:00 | java|algorithm|random|graph | ['https://arxiv.org/abs/cs/0502085'] | 1 |
59,377,202 | <p>Another option is to use neural networks such as PixelLink: Detecting Scene Text via Instance Segmentation.</p>
<p><a href="https://arxiv.org/pdf/1801.01315.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1801.01315.pdf</a> </p> | 2019-12-17 15:18:51.433000+00:00 | 2019-12-17 15:18:51.433000+00:00 | null | null | 58,679,475 | <p>I am training a model to extract all the necessary fields from a resume for which I am using mask rcnn to detect the fields in image. I have trained my mask RCNN model for 1000 training samples with 49 fields to extract. I am unable to improve the accuracy. How to improve the model? Is there any pretrained weights that may help?</p>
<p><a href="https://i.stack.imgur.com/tb9Ke.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tb9Ke.jpg" alt="enter image description here"></a></p>
<p>Difficulty in reading following text - </p>
<p><a href="https://i.stack.imgur.com/EGsmr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EGsmr.png" alt="enter image description here"></a></p> | 2019-11-03 10:56:23.563000+00:00 | 2020-01-17 13:26:28.743000+00:00 | 2020-01-17 13:26:28.743000+00:00 | python|keras|deep-learning|object-detection | ['https://arxiv.org/pdf/1801.01315.pdf'] | 1 |
3,969,847 | <p>The following article (PDF download) is a comparative study of parallel sorting algorithms on various architectures:</p>
<p><a href="http://parasol.tamu.edu/publications/download.php?file_id=191" rel="nofollow noreferrer">Parallel sorting algorithms on various architectures</a></p>
<p>According to the article, <strong>sample sort</strong> seems to be best on many parallel architecture types.</p>
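<p>The gist of sample sort is easy to sketch on a single machine (a rough Python illustration, not taken from the article; a real implementation would also parallelize the sampling and partitioning steps): sample the input, pick p-1 splitters, scatter the data into p buckets, sort the buckets in parallel, and concatenate, since the buckets are already mutually ordered.</p>
<pre><code>import random
from multiprocessing import Pool

def sample_sort(data, p=4, oversample=8):
    sample = sorted(random.sample(data, min(len(data), p * oversample)))
    splitters = [sample[i * len(sample) // p] for i in range(1, p)]
    buckets = [[] for _ in range(p)]
    for x in data:
        i = sum(x > s for s in splitters)   # index of the bucket x belongs to
        buckets[i].append(x)
    with Pool(p) as pool:
        return [x for b in pool.map(sorted, buckets) for x in b]

if __name__ == "__main__":
    data = [random.random() for _ in range(100_000)]
    assert sample_sort(data) == sorted(data)
</code></pre>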
<p>Update to address Mark's concern of age:</p>
<p>Here are more recent articles introducing something more novel (from 2007, which, btw, still get compared with sample sort):</p>
<p><a href="http://gauss.cs.ucsb.edu/publication/psort.pdf" rel="nofollow noreferrer">Improvements on sample sort</a> <br/>
<a href="http://www.dia.eui.upm.es/asignatu/pro_par/articulos/AASort.pdf" rel="nofollow noreferrer">AA-Sort</a></p>
<p>The bleeding edge (circa 2010, some only a couple months old):</p>
<p><a href="http://parlab.eecs.berkeley.edu/wiki/_media/patterns/sortingpattern.pdf" rel="nofollow noreferrer">Parallel sorting pattern</a><br/>
<a href="http://lap.epfl.ch/webdav/site/lap/shared/publications/YeApr10_HighPerformanceComparisonBasedSortingAlgorithmOnManyCoreGpus_IPDPS10.pdf" rel="nofollow noreferrer">Many-core GPU based parallel sorting</a><br/>
<a href="http://cass-mt.pnl.gov/docs/pubs/gpusort-2010-08-25.pdf" rel="nofollow noreferrer">Hybrid CPU/GPU parallel sort</a><br/>
<a href="http://www.cc.gatech.edu/%7Ebader/COURSES/UNM/ece638-Fall2004/papers/HBJ98.pdf" rel="nofollow noreferrer">Randomized Parallel Sorting Algorithm with an Experimental Study</a><br/>
<a href="http://charm.cs.uiuc.edu/charmWorkshop/slides/CharmWorkshop2010_Sorting.pdf" rel="nofollow noreferrer">Highly scalable parallel sorting</a><br/>
<a href="http://www.scipub.org/fulltext/jcs/jcs62163-167.pdf" rel="nofollow noreferrer">Sorting N-Elements Using Natural Order: A New Adaptive Sorting Approach</a><br/></p>
<p><strong>Update for 2013:</strong>
Here is the bleeding edge circa January, 2013. (Note: A few of the links are to papers at Citeseer and require registration which is free):<br/></p>
<p>University lectures:<br/>
<a href="http://www.cs.sunysb.edu/%7Erezaul/Spring-2012/CSE613/CSE613-lecture-9.pdf" rel="nofollow noreferrer">Parallel Partitioning for Selection and Sorting</a><br/>
<a href="http://homepages.math.uic.edu/%7Ejan/mcs572/parallelsorting.pdf" rel="nofollow noreferrer">Parallel Sorting Algorithms Lecture</a><br/>
<a href="http://www.clear.rice.edu/comp422/lecture-notes/comp422-2012-Lecture21-Sorting.pdf" rel="nofollow noreferrer">Parallel Sorting Algorithms Lecture 2</a><br/>
<a href="http://www.redgenes.com/Lecture-Sorting.pdf" rel="nofollow noreferrer">Parallel Sorting Algorithms Lecture 3</a><br/>
<br/>
Other sources and papers:
<br/>
<a href="http://hgpu.org/?p=7227" rel="nofollow noreferrer">A novel sorting algorithm for many-core architectures based on adaptive bitonic sort</a><br/>
<a href="http://charm.cs.illinois.edu/talks/SortingIPDPS10.pdf" rel="nofollow noreferrer">Highly Scalable Parallel Sorting 2</a><br/>
<a href="http://www.economyinformatics.ase.ro/content/EN4/alecu.pdf" rel="nofollow noreferrer">Parallel Merging</a><br/>
<a href="http://www.drdobbs.com/parallel/parallel-merge/229204454?queryText=parallel%2Bsort" rel="nofollow noreferrer">Parallel Merging 2</a><br/>
<a href="http://arxiv.org/ftp/arxiv/papers/1209/1209.3050.pdf" rel="nofollow noreferrer">Parallel Self-Sorting System for Objects</a><br/>
<a href="http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.252.9314" rel="nofollow noreferrer">Performance Comparison of Sequential Quick Sort and Parallel Quick Sort Algorithms</a><br/>
<a href="http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.217.7866" rel="nofollow noreferrer">Shared Memory, Message Passing, and Hybrid Merge Sorts for Standalone and Clustered SMPs</a><br/>
<a href="http://pages.videotron.com/aminer/" rel="nofollow noreferrer">Various parallel algorithms (sorting et al) including implementations</a><br/>
<br/>
GPU and CPU/GPU hybrid sources and papers:
<br/>
<a href="http://hgpu.org/?p=8085" rel="nofollow noreferrer">An OpenCL Method of Parallel Sorting Algorithms for GPU Architecture</a><br/>
<a href="http://hgpu.org/?p=8198" rel="nofollow noreferrer">Data Sorting Using Graphics Processing Units</a><br/>
<a href="http://hgpu.org/?p=8031" rel="nofollow noreferrer">Efficient Algorithms for Sorting on GPUs</a><br/>
<a href="http://hgpu.org/?p=1203" rel="nofollow noreferrer">Designing efficient sorting algorithms for manycore GPUs</a><br/>
<a href="http://hgpu.org/?p=1422" rel="nofollow noreferrer">Deterministic Sample Sort For GPUs</a><br/>
<a href="http://hgpu.org/?p=3331" rel="nofollow noreferrer">Fast in-place sorting with CUDA based on bitonic sort</a><br/>
<a href="http://hgpu.org/?p=923" rel="nofollow noreferrer">Fast parallel GPU-sorting using a hybrid algorithm</a><br/>
<a href="http://hgpu.org/?p=8602" rel="nofollow noreferrer">Fast Parallel Sorting Algorithms on GPUs</a><br/>
<a href="http://hgpu.org/?p=1533" rel="nofollow noreferrer">Fast sort on CPUs and GPUs: a case for bandwidth oblivious SIMD sort</a><br/>
<a href="http://hgpu.org/?p=892" rel="nofollow noreferrer">GPU sample sort</a><br/>
<a href="http://hgpu.org/?p=2033" rel="nofollow noreferrer">GPU-ABiSort: Optimal Parallel Sorting on Stream Architectures</a><br/>
<a href="http://hgpu.org/?p=1177" rel="nofollow noreferrer">GPUTeraSort: high performance graphics co-processor sorting for large database management</a><br/>
<a href="http://hgpu.org/?p=4182" rel="nofollow noreferrer">High performance comparison-based sorting algorithm on many-core GPUs</a><br/>
<a href="http://hgpu.org/?p=1084" rel="nofollow noreferrer">Parallel external sorting for CUDA-enabled GPUs with load balancing and low transfer overhead</a><br/>
<a href="http://hgpu.org/?p=6687" rel="nofollow noreferrer">Sorting on GPUs for large scale datasets: A thorough comparison</a><br/></p>
<p><strong>Update for 2022</strong>: I have not forgotten this answer and like all things computer related, it has not aged well. I will do my best to update and refresh it for current trends as well as the state of the art, at some point prior to the end of this year (2022). If you have interest in this topic and would like to see the update sooner, please either reply to or better yet, upvote the <em>comment I made below this answer</em>, so that I can gauge interest in this topic over others that also are in need of an update.</p> | 2010-10-19 15:11:51.743000+00:00 | 2022-06-16 15:08:58.607000+00:00 | 2022-06-16 15:08:58.607000+00:00 | null | 3,969,813 | <p>Sorting takes O(n log n) in the serial case. If we have O(n) processors we would hope for a linear speedup. O(log n) parallel algorithms exist but they have a very high constant. They also aren't applicable on commodity hardware which doesn't have anywhere near O(n) processors. With p processors, reasonable algorithms should take O(n/p log n) time.</p>
<p>In the serial case, quick sort has the best runtime complexity on average. A parallel quick sort algorithm is easy to implement (see <a href="https://stackoverflow.com/questions/1897458/parallel-sort-algorithm">here</a> and <a href="https://stackoverflow.com/questions/1784028/which-sorting-method-is-most-suitable-for-parallel-processing">here</a>). However it doesn't perform well since the very first step is to partition the whole collection on a single core. I have found information on many parallel sort algorithms but so far I have not seen anything pointing to a clear winner.</p>
<p>I'm looking to sort lists of 1 million to 100 million elements in a JVM language running on 8 to 32 cores.</p> | 2010-10-19 15:07:32.260000+00:00 | 2022-06-16 15:08:58.607000+00:00 | 2017-05-23 12:10:45.110000+00:00 | algorithm|sorting|concurrency | ['http://parasol.tamu.edu/publications/download.php?file_id=191', 'http://gauss.cs.ucsb.edu/publication/psort.pdf', 'http://www.dia.eui.upm.es/asignatu/pro_par/articulos/AASort.pdf', 'http://parlab.eecs.berkeley.edu/wiki/_media/patterns/sortingpattern.pdf', 'http://lap.epfl.ch/webdav/site/lap/shared/publications/YeApr10_HighPerformanceComparisonBasedSortingAlgorithmOnManyCoreGpus_IPDPS10.pdf', 'http://cass-mt.pnl.gov/docs/pubs/gpusort-2010-08-25.pdf', 'http://www.cc.gatech.edu/%7Ebader/COURSES/UNM/ece638-Fall2004/papers/HBJ98.pdf', 'http://charm.cs.uiuc.edu/charmWorkshop/slides/CharmWorkshop2010_Sorting.pdf', 'http://www.scipub.org/fulltext/jcs/jcs62163-167.pdf', 'http://www.cs.sunysb.edu/%7Erezaul/Spring-2012/CSE613/CSE613-lecture-9.pdf', 'http://homepages.math.uic.edu/%7Ejan/mcs572/parallelsorting.pdf', 'http://www.clear.rice.edu/comp422/lecture-notes/comp422-2012-Lecture21-Sorting.pdf', 'http://www.redgenes.com/Lecture-Sorting.pdf', 'http://hgpu.org/?p=7227', 'http://charm.cs.illinois.edu/talks/SortingIPDPS10.pdf', 'http://www.economyinformatics.ase.ro/content/EN4/alecu.pdf', 'http://www.drdobbs.com/parallel/parallel-merge/229204454?queryText=parallel%2Bsort', 'http://arxiv.org/ftp/arxiv/papers/1209/1209.3050.pdf', 'http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.252.9314', 'http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.217.7866', 'http://pages.videotron.com/aminer/', 'http://hgpu.org/?p=8085', 'http://hgpu.org/?p=8198', 'http://hgpu.org/?p=8031', 'http://hgpu.org/?p=1203', 'http://hgpu.org/?p=1422', 'http://hgpu.org/?p=3331', 'http://hgpu.org/?p=923', 'http://hgpu.org/?p=8602', 'http://hgpu.org/?p=1533', 'http://hgpu.org/?p=892', 'http://hgpu.org/?p=2033', 'http://hgpu.org/?p=1177', 'http://hgpu.org/?p=4182', 'http://hgpu.org/?p=1084', 'http://hgpu.org/?p=6687'] | 36 |
61,881,687 | <p>It's not foolproof, but you can try several language identification tools.</p>
<h1>Using <code>langid.py</code></h1>
<p>One of the most popular and easiest to use, being <code>langid.py</code> <a href="https://github.com/saffsd/langid.py" rel="nofollow noreferrer">https://github.com/saffsd/langid.py</a></p>
<p>To install: <code>python -m pip install -U langid</code></p>
<pre class="lang-py prettyprint-override"><code>>>> import langid
>>> text = "Hallo, wie gehts?"
>>> lang, log_prob = langid.classify(text)
>>> print(lang)
de
</code></pre>
<h1>Using <code>pyCLD2</code></h1>
<p><code>pycld2</code> is a wrapper around <code>chromium-compact-language-detector</code>; see <a href="https://github.com/aboSamoor/pycld2" rel="nofollow noreferrer">https://github.com/aboSamoor/pycld2</a></p>
<p>Install: <code>python -m pip install -U pycld2</code></p>
<pre><code>>>> import pycld2 as cld2
>>> text = "Hallo, wie gehts?"
>>> isReliable, textBytesFound, details = cld2.detect(text)
>>> lang = details[0][1]
>>> print(lang)
de
</code></pre>
<h1>Using <code>cld3</code></h1>
<p>Install: <code>python -m pip install -U pycld3</code></p>
<pre><code>>>> import cld3
>>> text = "Hallo, wie gehts?"
>>> prediction = cld3.get_language(text)
>>> print(prediction.language)
de
</code></pre>
<p>Here's a pretty nice recent summary (2019) from <a href="https://arxiv.org/pdf/1910.06748.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1910.06748.pdf</a></p> | 2020-05-19 01:21:27.170000+00:00 | 2020-05-19 02:25:25+00:00 | 2020-05-19 02:25:25+00:00 | null | 61,872,214 | <p>For a research-purpose work I should:</p>
<ol>
<li>Read a .csv file</li>
<li>Detect the language of the text by the title</li>
<li>Identifying the argument of the text by some keywords
ex. lobotomy --> brain</li>
</ol>
<p>I am trying to do the 2nd and 3rd point using Python with its library NLTK,
Could you give me some tips if you ever did something like it?</p>
<p>Thank you in advance!</p> | 2020-05-18 14:43:50.340000+00:00 | 2020-05-19 02:25:25+00:00 | null | python|text|nlp|nltk | ['https://github.com/saffsd/langid.py', 'https://github.com/aboSamoor/pycld2', 'https://arxiv.org/pdf/1910.06748.pdf'] | 3 |
73,214,999 | <p>Yes -- you can pub/sub to clients, but those still have to be <em>Redis</em> clients that <em>also know about pub/sub</em>. Per your account handle, you probably need to look for a simple-but-sufficient Java library.</p>
<p>I wrote about a simple application I created for my own use: a listener process fetches market data (for less than a handful of contracts) and <em>publishes</em> it to Redis, from where clients (in a different process and/or on a different machine) can subscribe. Works <em>great</em>. I do this in R because I had already written a (C++ based) client for R (around hiredis) to which we then added pub/sub.</p>
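<p>The same pattern is only a few lines in Python with the redis-py client (the channel name and payload here are made up; start the subscriber before publishing):</p>
<pre><code>import json
import redis   # redis-py client

r = redis.Redis(host="localhost", port=6379)

# Publisher side (e.g. the process that receives market data):
r.publish("ticker:ES", json.dumps({"bid": 4501.25, "ask": 4501.50}))

# Subscriber side, possibly another process or another machine:
sub = redis.Redis(host="localhost", port=6379).pubsub()
sub.subscribe("ticker:ES")
for msg in sub.listen():
    if msg["type"] == "message":
        print(json.loads(msg["data"]))
</code></pre>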
<p>The blog post is <a href="https://dirk.eddelbuettel.com/blog/2022/02/23/#036_pub_sub_for_market_monitoring_with_r_and_redis" rel="nofollow noreferrer">here</a>, and I also wrote about Redis for market monitoring in this <a href="https://arxiv.org/abs/2203.08323" rel="nofollow noreferrer">short arXiv paper</a> -- and there is a corresponding <a href="https://arxiv.org/abs/2203.06559" rel="nofollow noreferrer">short arXiv paper introducing Redis</a>.</p> | 2022-08-03 00:09:32.637000+00:00 | 2022-08-03 00:09:32.637000+00:00 | null | null | 73,214,935 | <p>I have a scenario like I would be storing details on the redis-database and this would publish messages to external API's.</p>
<p><strong>Question:</strong></p>
<ol>
<li>Is that feasible to publish message from Redis to outside world? - All pub/sub I have seen are within <strong>redis-cli</strong>.</li>
</ol>
<p><a href="https://i.stack.imgur.com/ixiPR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ixiPR.png" alt="enter image description here" /></a></p> | 2022-08-02 23:57:39.353000+00:00 | 2022-08-03 00:36:23.633000+00:00 | 2022-08-03 00:36:23.633000+00:00 | redis|publish-subscribe|redis-streams | ['https://dirk.eddelbuettel.com/blog/2022/02/23/#036_pub_sub_for_market_monitoring_with_r_and_redis', 'https://arxiv.org/abs/2203.08323', 'https://arxiv.org/abs/2203.06559'] | 3 |
48,542,758 | <p>The answer I want to give you is: <a href="https://arxiv.org/abs/1710.09829" rel="nofollow noreferrer">CapsNets</a></p>
<p>You should definitely check out the paper, where you will be introduced to some shortcomings of CNNs and how they tried to fix them.</p>
<p>That said, I find it hard to believe that your architecture cannot solve the problem successfully when the perspective changes. Is your dataset extremely small? I'd expect the neural network to learn filters for the riffled edges, which can be seen from all perspectives. </p>
<p>If you're not limited to one camera, you could train a "normal" classifier, feed it multiple images in production, and average the predictions. Or you could build an architecture that takes in multiple perspectives at once. You have to try for yourself what works best.</p>
<p>Also, never underestimate the power of old school image preprocessing. If you have 3 different perspectives, you could take the one that comes closest to the "flat" perspective. This is probably as easy as using the image with the largest colored area, where <code>img.sum()</code> is the highest.</p>
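<p>Both ideas are one-liners in practice; here is a rough numpy sketch where the views are random stand-ins and <code>model.predict</code> is a placeholder for your trained classifier:</p>
<pre><code>import numpy as np

# Hypothetical stand-ins: three camera views of the same falling cap.
rng = np.random.default_rng(0)
views = [rng.integers(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(3)]

# Option 1: keep only the view with the largest colored area ("flattest").
flattest = max(views, key=lambda img: img.sum())

# Option 2: average the class probabilities the model gives for each view
# (model.predict is a placeholder for your trained classifier):
# probs = np.mean([model.predict(v[None, ...]) for v in views], axis=0)
# predicted_class = int(np.argmax(probs))
</code></pre>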
<p>Another idea is to figure out the color through explicit programming, which should be fairly easy and then feed the network a grayscale image. Maybe your network is confused by the strong correlation of the color and ignores the shape altogether.</p> | 2018-01-31 12:56:43.740000+00:00 | 2018-01-31 12:56:43.740000+00:00 | null | null | 48,190,418 | <p>I have a problem statement to recognize 10 classes of different variations(variations in color and size) of same object (bottle cap) while falling taking into account the camera sees different viewpoint of the object. I have split this into sub-tasks</p>
<p>1) Trained a deep learning model to classify only the flat surface of the object and successful in this attempt.</p>
<p>Flat Faces of sample 2 class</p>
<p><a href="https://i.stack.imgur.com/cz3XS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cz3XS.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/492S5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/492S5.png" alt="enter image description here"></a></p>
<p>2) Instead of taking fall into account, trained a model for possible perspective changes - not successful.</p>
<p>Perception changes of sample 2 class</p>
<p><a href="https://i.stack.imgur.com/SPbQS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SPbQS.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/CMRpn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CMRpn.png" alt="enter image description here"></a></p>
<p>What are the approaches to recognize the object even for perspective changes. I am not constrained to arrive with a single camera solution. Open to ideas in approaching towards this problem of variable perceptions.</p>
<p>Any help could be really appreciated, Thanks in advance!</p> | 2018-01-10 15:10:28.270000+00:00 | 2018-01-31 12:56:43.740000+00:00 | 2018-01-22 06:58:12.550000+00:00 | machine-learning|computer-vision|deep-learning|svm|object-recognition | ['https://arxiv.org/abs/1710.09829'] | 1 |
68,993,654 | <p>The question you are asking falls under the topic of <strong>continual learning</strong>, which is an active area of research nowadays. Since you need to add more classes to your model, you need to combine the new class with the previous data and retrain the model from scratch. If you don't do that, i.e., you train only on the new class, your model will completely forget about the previous data (the learned features); this forgetting is known as <strong>Catastrophic Forgetting</strong>.</p>
<p>Many people have suggested various ways to avoid this catastrophic forgetting; I personally feel that <a href="https://arxiv.org/abs/1606.04671" rel="nofollow noreferrer">Progressive Neural Networks</a> are highly immune to forgetting. Apart from that, you can find other methods <a href="https://paperswithcode.com/task/continual-learning" rel="nofollow noreferrer">here</a>.</p>
<p>As I told you, this is currently a highly active area of research; There is no full-proof solution. For now, the best way is to add the new data to the previous data and retrain your model.</p> | 2021-08-31 06:08:12.660000+00:00 | 2021-08-31 06:23:42.097000+00:00 | 2021-08-31 06:23:42.097000+00:00 | null | 68,993,575 | <p>I trained my custom data set in the yoloV5s model and I got 80% accuracy on my inference. Now I need to increase the accuracy by adding more images and labels.</p>
<p>My question here is, I already trained 10,000+ labels to reach 80% it took 7 hours for me. <strong>Shall I need to include the old 10,000+ data with my new data which is only 1000 to train and improve my accuracy?</strong></p>
<p>Is there any way that I can include the new data only to retrain the model even I add a new class?</p>
<p>How can I save my time and space?</p> | 2021-08-31 06:00:16.233000+00:00 | 2021-09-03 00:55:03.630000+00:00 | 2021-09-03 00:55:03.630000+00:00 | machine-learning|pytorch|training-data|yolov5 | ['https://arxiv.org/abs/1606.04671', 'https://paperswithcode.com/task/continual-learning'] | 2 |
66,483,804 | <p>It is sort of expected that on numerical applications, Haskell code tends to be, say, 2X to 5X slower than equivalent code written in classic imperative languages such as C/C++/Fortran. However, Haskell code being 100 times slower is very unexpected!</p>
<h2>Sanity check: reproducing the result</h2>
<p>Using a somewhat oldish notebook with an Intel Celeron N2840 64 bits cpu, running Linux kernel 5.3, GLIBC 2.28, GCC 8.3, GHC 8.2.2, GHC random-1.1, we get the timings:</p>
<pre>
C code, gcc -O3: 950 msec
Haskell code, ghc -O3: 104 sec
</pre>
<p><strong>So indeed, with these configurations, the Haskell code runs about 100 times slower than the C code.</strong></p>
<h2>Why is that ?</h2>
<p>A first remark is that the π-computing arithmetic on top of the random number generation looks quite simple, hence the runtimes are probably dominated by the generation of 2*10 million pseudo-random numbers. Furthermore, we have every reason to expect that <strong>Haskell and C are using completely unrelated random number generation algorithms.</strong></p>
<p>So, rather than comparing Haskell with C, we could be comparing the random number generation algorithms these 2 languages happen to be using, as well as their respective implementations.</p>
<p>In C, the language standard does not specify which algorithm function <code>srand()</code> is supposed to use, but these tend to be old, simple and fast algorithms.</p>
<p>On the other hand, in Haskell, we have traditionally seen a lot of complaints about the poor efficiency of <code>StdGen</code>, like here back in 2006:</p>
<p><a href="https://gitlab.haskell.org/ghc/ghc/-/issues/427" rel="nofollow noreferrer">https://gitlab.haskell.org/ghc/ghc/-/issues/427</a></p>
<p>where one of the leading Haskell luminaries mentions that <code>StdGen</code> could possibly be made 30 times faster.</p>
<p>Fortunately, help is already on the way. This <a href="https://alexey.kuleshevi.ch/blog/2021/01/29/random-interface/" rel="nofollow noreferrer">recent blog post</a> explains how the <a href="https://hackage.haskell.org/package/random-1.2.0" rel="nofollow noreferrer"><strong>new Haskell random-1.2 package</strong></a> is solving the problem of <code>StdGen</code> lack of speed, using a completely different algorithm known as <em>splitmix</em>.</p>
<p>The field of pseudo-random number generation (PRNG) is a rather active one. Algorithms routinely get obsoleted by newer and better ones. For the sake of perspective, here is a relatively recent (2018) <a href="https://arxiv.org/pdf/1811.04035" rel="nofollow noreferrer">review paper on the subject</a>.</p>
<h2>Moving to more recent, better Haskell components:</h2>
<p>Using another, slightly more powerful machine with an Intel Core i5-4440 3GHz 64 bits cpu, running Linux kernel 5.10, GLIBC 2.32, GCC 10.2, and critically <strong>Haskell package random-1.2</strong>:</p>
<pre>
C code, gcc -O3: 164 msec
Haskell code, ghc -O3: 985 msec
</pre>
<p>So Haskell is now “just” 6 times slower instead of 100 times.</p>
<p>And we still have to address the injustice of Haskell having to use <code>x**2+y**2</code> versus C getting <code>x*x+y*y</code>, a detail which did not really matter before with <strong>random-1.1</strong>. This gives us 379 msec! <strong>So we are back into the usual 2X-5X ballpark for Haskell to C speed comparisons.</strong></p>
<p>Note that if we ask the Haskell executable to run with statistics on, we get the following output:</p>
<pre>
$ time q66441802.x +RTS -s -RTS 10000000
3.1415616
925,771,512 bytes allocated in the heap
...
Alloc rate 2,488,684,937 bytes per MUT second
Productivity 99.3% of total user, 99.3% of total elapsed
real 0m0,379s
user 0m0,373s
sys 0m0,004s
</pre>
<p>so Haskell is found to allocate close to one gigabyte of memory along the way, which helps to understand the speed difference.</p>
<h2>A code fix:</h2>
<p>We note that the C code uses a single random serie while the Haskell code is using two, with two calls to <code>mkStdGen</code> with numbers 1 and 2 as seeds. This is not only unfair but also incorrect. Applied mathematicians who design PRNG algorithms take great care to ensure any single serie has the right statistical properties, but give essentially no guarantees about possible correlations between different series. It is not unheard of to even use the seed as an offset into a single global sequence.</p>
<p>This is fixed in the following code, which does not alter performance significantly:</p>
<pre><code>computeNorms :: [Double] -> [Double]
computeNorms [] = []
computeNorms [x] = []
computeNorms (x:y:xs2) = (x*x + y*y) : (computeNorms xs2)
monteCarloCircle :: Int -> Double
monteCarloCircle nn =
let
randomSeed = 1
gen0 = mkStdGen randomSeed
-- use a single random serie:
uxys = (randomRs (-1.0, 1.0) gen0) :: [Double]
norms = take nn (computeNorms uxys)
insiderCount = length $ filter (<= 1.0) norms
in
(4.0::Double) * ((fromIntegral insiderCount) / (fromIntegral nn))
</code></pre>
<h2>Side note:</h2>
<p>The new Haskell random-1.2 package has been mentioned in this <a href="https://stackoverflow.com/questions/66404604/using-system-random-over-multiple-data-types-the-haskell-way">recent SO question</a>, though in the context of the new monadic interface.</p>
<h2>A word of conclusion:</h2>
<p>Assuming the goal of the exercise is to gauge the relative runtime speeds of the C and Haskell languages, there are essentially two possibilities:</p>
<p>One is to avoid using PRNG altogether, because of the different algorithms in use.</p>
<p>The other one is to control random number generation by manually providing one and the same algorithm into both languages. For example, one could use the publicly available <a href="https://www.iro.umontreal.ca/%7Elecuyer/myftp/papers/streams00.pdf" rel="nofollow noreferrer">MRG32k3a</a> algorithm designed by Pierre L'Ecuyer. <a href="https://hackage.haskell.org/package/mrg-random-0.1.0.0/candidate" rel="nofollow noreferrer">Candidate Haskell MRG32k3a implementation here</a> (as of March 2021). Left as an exercise for the reader.</p> | 2021-03-04 22:10:24.153000+00:00 | 2021-03-05 17:44:30.717000+00:00 | 2021-03-05 17:44:30.717000+00:00 | null | 66,441,802 | <p><em>update:</em></p>
<p>My compilation command is <code>ghc -O2 Montecarlo.hs</code>. My random version is <code>random-1.1</code>, ghc version is <code>8.6.4</code>, and my system is macOS Big Sur 11.1 (Intel chip). The command I used to test the speed is <code>time ./Montecarlo 10000000</code>, and the result it returns is <code>real 0m17.463s, user 0m17.176s, sys 0m0.162s</code>.</p>
<hr />
<p>The following is a Haskell program that uses Monte Carlo to calculate <code>pi</code>. However, when the input is 10 million, the program ran for 20 seconds. The C program written in the same logic took only 0.206 seconds. Why is this, and how can I speed it up? Thank you all.</p>
<p>This is the Haskell version:</p>
<pre><code>import System.Random
import Data.List
import System.Environment
montecarloCircle :: Int -> Double
montecarloCircle x
= 4*fromIntegral
(foldl' (\x y -> if y <= 1 then x+1 else x) 0
$ zipWith (\x y -> (x**2 + y**2))
(take x $ randomRs (-1.0,1) (mkStdGen 1) :: [Double])
(take x $ randomRs (-1.0,1) (mkStdGen 2) :: [Double]) )
/ fromIntegral x
main = do
num <- getArgs
let n = read (num !! 0)
print $ montecarloCircle n
</code></pre>
<p>This is the C version:</p>
<pre><code>#include <stdio.h>
#include <math.h>
#include <time.h>
#include <stdlib.h>
#define N 10000000
#define int_t long // the type of N and M
// Rand from 0.0 to 1.0
double rand01()
{
return rand()*1.0/RAND_MAX;
}
int main()
{
srand((unsigned)time(NULL));
double x,y;
int_t M = 0;
for (int_t i = 0;i < N;i++)
{
x = rand01();
y = rand01();
if (x*x+y*y<1) M++;
}
double pi = (double)4*M/N;
printf("%lf\n", pi);
}
</code></pre> | 2021-03-02 15:01:15.260000+00:00 | 2021-03-05 17:44:30.717000+00:00 | 2021-03-04 22:13:30.640000+00:00 | performance|haskell|random|montecarlo | ['https://gitlab.haskell.org/ghc/ghc/-/issues/427', 'https://alexey.kuleshevi.ch/blog/2021/01/29/random-interface/', 'https://hackage.haskell.org/package/random-1.2.0', 'https://arxiv.org/pdf/1811.04035', 'https://stackoverflow.com/questions/66404604/using-system-random-over-multiple-data-types-the-haskell-way', 'https://www.iro.umontreal.ca/%7Elecuyer/myftp/papers/streams00.pdf', 'https://hackage.haskell.org/package/mrg-random-0.1.0.0/candidate'] | 7 |
45,250,170 | <p>I asked myself the same thing today and found this question. I have never implemented an attention mechanism myself, but from <a href="https://arxiv.org/abs/1409.0473" rel="noreferrer">this paper</a> it seems a little bit more than just a straight softmax. For each output y<sub>i</sub> of the decoder network, a context vector <strong>c</strong><sub>i</sub> is computed as a weighted sum of the encoder hidden states <strong>h</strong><sub>1</sub>, ..., <strong>h</strong><sub>T</sub>:</p>
<p><strong>c</strong><sub>i</sub> = α<sub>i1</sub><strong>h</strong><sub>1</sub>+...+α<sub>iT</sub><strong>h</strong><sub>T</sub></p>
<p>The number of time steps T may be different for each sample because the coefficients α<sub>ij</sub> are not vector of fixed size. In fact, they are computed by softmax(e<sub>i1</sub>, ..., e<sub>iT</sub>), where each e<sub>ij</sub> is the output of a neural network whose input is the encoder hidden state <strong>h</strong><sub>j</sub> and the decoder hidden state <strong>s</strong><sub>i-1</sub>:</p>
<p>e<sub>ij</sub> = f(<strong>s</strong><sub>i-1</sub>, <strong>h</strong><sub>j</sub>)</p>
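<p>A toy numpy sketch of that computation (the shapes and the form of f are made up) may help: nothing with a fixed input size ever sees all T states at once, because f only looks at one pair (s<sub>i-1</sub>, h<sub>j</sub>) at a time.</p>
<pre><code>import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

T, hidden_enc, hidden_dec = 7, 16, 16            # T can differ per sample
rng = np.random.default_rng(0)
H = rng.standard_normal((T, hidden_enc))         # encoder states h_1..h_T
s_prev = rng.standard_normal(hidden_dec)         # decoder state s_{i-1}
W = rng.standard_normal((hidden_dec + hidden_enc, 1))   # stand-in for the score net f

# One score e_ij per encoder state: f is evaluated T times, once per h_j.
e = np.array([np.tanh(np.concatenate([s_prev, h]) @ W)[0] for h in H])
alpha = softmax(e)                               # alpha_i1 .. alpha_iT
c = alpha @ H                                    # context vector c_i
</code></pre>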
<p>Thus, before y<sub>i</sub> is computed, this neural network must be evaluated T times, producing T weights α<sub>i1</sub>,...,α<sub>iT</sub>. Also, <a href="https://github.com/ilivans/tf-rnn-attention/blob/master/attention.py" rel="noreferrer">this tensorflow impementation</a> might be useful.</p> | 2017-07-22 02:49:30.163000+00:00 | 2017-07-24 00:32:18.307000+00:00 | 2017-07-24 00:32:18.307000+00:00 | null | 44,443,433 | <p>The attention mechanism of LSTM is a straight softmax feed forward network that takes in the hidden states of each time step of the encoder and the decoder's current state.</p>
<p>These two steps seem to contradict each other and I can't wrap my head around it:
1) the number of inputs to a feed-forward network needs to be predefined, yet
2) the number of hidden states of the encoder is variable (it depends on the number of time steps during encoding).</p>
<p>Am I misunderstanding something? Also would training be the same as if I were to train a regular encoder/decoder network or would I have to train the attention mechanism separately?</p>
<p>Thanks in Advance</p> | 2017-06-08 18:48:21.513000+00:00 | 2018-03-08 03:18:36.677000+00:00 | null | machine-learning|neural-network|text-processing|lstm|recurrent-neural-network | ['https://arxiv.org/abs/1409.0473', 'https://github.com/ilivans/tf-rnn-attention/blob/master/attention.py'] | 2 |
21,950,655 | <p>We have recently published a preprint on arXiv of a JSS paper we wrote with more examples of using RProtoBuf, including sending RPC requests to remote web services. For more exposition of sharing data between R and other languages with RProtoBuf, see <a href="http://arxiv.org/abs/1401.7372" rel="nofollow">RProtoBuf: Efficient Cross-Language Data Serialization in R</a>.</p>
<p>You can use RProtoBuf with any transport mechanism, as explained in the article -- You can save serialized protocol buffers to files to be read by other applications written in other languages, or you can send them over connections/sockets or other higher level RPC systems. Protocol Buffers are widely used in everything from Sony Playstations to large scale web services, but they do not include an RPC system -- you use them as your serialization format with whatever transport system you are already using.</p> | 2014-02-22 06:27:42.070000+00:00 | 2014-02-22 06:27:42.070000+00:00 | null | null | 7,022,275 | <p>It is not entirely obvious how to go about using RProtoBuf for communicating between R and other languages (Java, in my case).</p>
<p>The RProtoBuf developers worked on something that is still here - <a href="https://r-forge.r-project.org/scm/viewvc.php/java/?root=rprotobuf" rel="nofollow">https://r-forge.r-project.org/scm/viewvc.php/java/?root=rprotobuf</a>, but it seems very outdated. I am not sure if this is the way to go. Here are two conversations between the authors of RProtoBuf that might help with understanding the code -</p>
<p><a href="http://lists.r-forge.r-project.org/pipermail/rprotobuf-yada/2009-December/000116.html" rel="nofollow">http://lists.r-forge.r-project.org/pipermail/rprotobuf-yada/2009-December/000116.html</a></p>
<p><a href="http://lists.r-forge.r-project.org/pipermail/rprotobuf-yada/2009-December/000119.html" rel="nofollow">http://lists.r-forge.r-project.org/pipermail/rprotobuf-yada/2009-December/000119.html</a></p>
<p>It seems that they started work with Java and then abandoned it in C++'s favour!</p>
<p>Is there anyone using R-RProtoBuf-Java combination? How do you do it? Is there a tutorial or example available?</p>
<p>My exposure to Java is very very limited. I want to use a few programs written in Java.</p>
<p>Edit :
To clarify, I suppose I want to see an example of an R rpc client being used with RProtobuf. Pointers towards Java RPC servers would be welcome.</p>
<p>Edit2 :
The first link actually points to some documentation generator code, as Dirk pointed out.</p> | 2011-08-11 07:26:03.787000+00:00 | 2014-02-22 06:27:42.070000+00:00 | 2011-08-11 14:12:33.563000+00:00 | java|r|rpc|protocol-buffers | ['http://arxiv.org/abs/1401.7372'] | 1 |
3,543,780 | <p>The full PDF content is in the Amazon cloud.</p>
<p>While there are > 600k papers on arXiv, the total size of the PDFs is < 1/2 TB.</p>
<p><a href="http://arxiv.org/help/bulk_data_s3" rel="nofollow noreferrer">http://arxiv.org/help/bulk_data_s3</a></p>
<p>T.</p> | 2010-08-22 22:49:16.177000+00:00 | 2010-08-22 22:49:16.177000+00:00 | null | null | 1,206,166 | <p>The arXiv e-print archive has several terabytes of papers from various fields of science. Some users would like to maintain a full copy of this data on their own computers, while others just want to download the most recent papers in a particular category. They are looking to reduce bandwidth load using some kind of distributed download system (e.g. BitTorrent). I'm looking for ideas for a program or set of programs that would cover all of this.</p> | 2009-07-30 12:04:32.710000+00:00 | 2010-08-22 22:49:16.177000+00:00 | 2009-07-30 16:37:48.437000+00:00 | pdf|dataset|sync | ['http://arxiv.org/help/bulk_data_s3'] | 1 |
1,206,757 | <p><a href="http://arxiv.org/help/faq/cache" rel="nofollow noreferrer" title="arXiv recommends squid">arXiv recommends squid</a> in httpd accelerator mode for precisely this purpose. Any particular reason why this is not good enough?</p> | 2009-07-30 13:52:28.637000+00:00 | 2009-07-30 13:52:28.637000+00:00 | null | null | 1,206,166 | <p>The arXiv e-print archive has several terabytes of papers from various fields of science. Some users would like to maintain a full copy of this data on their own computers, while others just want to download the most recent papers in a particular category. They are looking to reduce bandwidth load using some kind of distributed download system (e.g. BitTorrent). I'm looking for ideas for a program or set of programs that would cover all of this.</p> | 2009-07-30 12:04:32.710000+00:00 | 2010-08-22 22:49:16.177000+00:00 | 2009-07-30 16:37:48.437000+00:00 | pdf|dataset|sync | ['http://arxiv.org/help/faq/cache'] | 1 |
60,722,328 | <p>If the class distribution of the dataset is uneven, this may cause trouble in later phases of training and classification, because classifiers will have very little data from which to learn the features of the minority class.</p>
<p>Unlike plain random oversampling, SMOTE uses the nearest-neighbor algorithm to generate new, synthetic data points that can be used to train the models.</p>
<p>As said in <a href="https://arxiv.org/pdf/1106.1813.pdf" rel="nofollow noreferrer">this original paper of SMOTE</a>, "<em>The minority class is over-sampled by taking each minority class sample and introducing synthetic examples along the line segments joining any/all of the k minority class nearest neighbors.</em>"</p>
<p>So <strong>yes</strong>, these newly generated synthetic data points are important and you do not have to worry about them much. SMOTE is one of the best techniques out there to perform this task, so I would suggest using this.</p>
<p>Consider the following image for example:
<a href="https://i.stack.imgur.com/kDtRH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kDtRH.jpg" alt="enter image description here"></a>
<strong><em>Figure a</em></strong> has more data points for <strong>class 0</strong> and very few for <strong>class 1</strong>.</p>
<p>As you can see, after applying SMOTE (<strong><em>Figure b</em></strong>), it will generate new data points for <strong>minority</strong> class (in this case, for <strong>class 1</strong>) in order to balance the dataset.</p>
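<p>As a rough illustration of the interpolation idea (a hand-rolled sketch, not the actual <code>imblearn</code> internals; the two minority samples are made up):</p>
<pre><code>import numpy as np

x = np.array([2.0, 3.0, 1.0])          # a minority-class sample
neighbor = np.array([4.0, 3.0, 2.0])   # one of its k nearest minority neighbors
gap = np.random.rand()                 # random number in [0, 1)

synthetic = x + gap * (neighbor - x)   # new point on the segment joining x and its neighbor
print(synthetic)                       # e.g. [2.7, 3.0, 1.35] when gap = 0.35
</code></pre>
<p>Because the synthetic point is a convex combination of two real samples, integer-encoded categorical features can land between their original codes, which is exactly what the question observes; rounding them back (or using a categorical-aware variant) is a common workaround.</p>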
<hr>
<p>Try reading:</p>
<ol>
<li><p><a href="http://rikunert.com/SMOTE_explained" rel="nofollow noreferrer">http://rikunert.com/SMOTE_explained</a> </p></li>
<li><p><a href="https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/" rel="nofollow noreferrer">https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/</a></p></li>
</ol> | 2020-03-17 12:20:27.787000+00:00 | 2020-03-17 12:32:48.560000+00:00 | 2020-03-17 12:32:48.560000+00:00 | null | 60,721,496 | <p>I am trying to solve an imbalanced classification problem, all the input features are categorical.
Here are the value counts of each feature:</p>
<pre><code> for i in X_train.columns:
print(i+':',X_train[i].value_counts().shape[0])
Pclass: 3
Sex: 2
IsAlone: 2
Title: 5
IsCabin: 2
AgeBin: 4
FareBin: 4
</code></pre>
<p>After applying SMOTE to the training data (after <code>train_test_split</code>), new values are created that are not present in the X_train dataset.</p>
<pre><code> from imblearn.over_sampling import SMOTE
from collections import Counter
#removing the random_state dosent help
sm = SMOTE(random_state=0)
X_res, y_res = sm.fit_resample(X_train, y_train)
print('Resampled dataset shape %s' % Counter(y_res))
Resampled dataset shape Counter({1: 381, 0: 381})
</code></pre>
<p>Value counts of the resampled dataset:</p>
<pre><code> Pclass: 16
Sex: 7
IsAlone: 2
Title: 12
IsCabin: 2
AgeBin: 4
FareBin: 4
</code></pre>
<p>New values are created by using SMOTE; this was also the case with under-sampling. These new values are not present in the test dataset.</p>
<p>Example:</p>
<pre><code>X_train-Pclass 1-20,2-15,3-40
X_res-Pclass 1-20,0.999999-3,2-11,1.9999999-4,3-34,2.9999999-6
</code></pre>
<p>My question:</p>
<ol>
<li><p>Why are these values created and do they hold some-importance?</p></li>
<li><p>How to deal with them? Should I just round them off or remove them</p></li>
<li><p>Is there a way to perform over and under sampling without creating these new values?</p></li>
</ol> | 2020-03-17 11:29:36.933000+00:00 | 2020-03-17 12:32:48.560000+00:00 | 2020-03-17 12:22:25.180000+00:00 | python|oversampling|smote | ['https://arxiv.org/pdf/1106.1813.pdf', 'https://i.stack.imgur.com/kDtRH.jpg', 'http://rikunert.com/SMOTE_explained', 'https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/'] | 4 |
69,018,415 | <p>It seems that there are two distinct problems here:</p>
<p>Problem #1: getting <code>gtsummary()</code> to produce a table with p values and confidence intervals for the pooled, matched data.</p>
<p>Problem #2: getting <code>ggforest()</code> to produce a plot of the pooled estimates.</p>
<p><strong>Problem #1:</strong></p>
<p>Let us follow the instructions in the paper "MatchThem:: Matching and Weighting after Multiple Imputation" (<a href="https://arxiv.org/ftp/arxiv/papers/2009/2009.11772.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/arxiv/papers/2009/2009.11772.pdf</a>) [page 15]</p>
<p>and modify your block #3. Instead of calling <code>pool_and_tidy_mice()</code> we do the following:</p>
<pre><code>matched.results <- pool(mimira_object)
summary(matched.results, conf.int = TRUE)
</code></pre>
<p>This produces the following:</p>
<pre><code> term estimate std.error statistic df p.value 2.5 % 97.5 %
1 age -0.0005997864 0.001448251 -0.4141453 55.266353 6.803707e-01 -0.003501832 0.00230226
2 is_smoker 1.1157796620 0.077943244 14.3152839 9.961064 5.713387e-08 0.942019234 1.28954009
3 disease_stage 0.2360965310 0.051799813 4.5578645 3.879879 1.111782e-02 0.090504018 0.38168904
</code></pre>
<p>This means that performing the imputation with <code>mice</code> and then matching with <code>MatchThem</code> works, since you do get the p values and the confidence intervals.</p>
<p>Compare to the output from <code>pool_and_tidy_mice()</code>:</p>
<pre><code> term estimate std.error statistic p.value b df dfcom fmi lambda m
1 age -0.0005997864 0.001448251 -0.4141453 NaN 2.992395e-07 NaN Inf NaN 0.1902260 3
2 is_smoker 1.1157796620 0.077943244 14.3152839 NaN 2.041627e-03 NaN Inf NaN 0.4480827 3
3 disease_stage 0.2360965310 0.051799813 4.5578645 NaN 1.444843e-03 NaN Inf NaN 0.7179644 3
riv ubar
1 0.2349124 1.698446e-06
2 0.8118657 3.352980e-03
3 2.5456522 7.567636e-04
</code></pre>
<p>Where everything is the same except for df and p.value which were not calculated in the latter table.</p>
<p>I therefore think this is an issue with the <code>pool_and_tidy_mice()</code> and you should post this as an issue on GitHub at gtsummary.</p>
<p>For right now, you can bypass this problem by changing <code>svycoxph()</code> to <code>survival::coxph()</code> in block #3 when you call the <code>with()</code> function. If you do that, you will eventually get a gtsummary table with p-values and confidence intervals. Ultimately, the problem is probably some interaction between <code>svycoxph()</code> and <code>pool_and_tidy_mice()</code>, hence why I believe you should post this on GitHub.</p>
<p><strong>Problem #2:</strong></p>
<p>The short answer is that there cannot be a ggforest plot with all the data that you are looking for.</p>
<p><a href="https://www.rdocumentation.org/packages/mice/versions/3.13.0/topics/pool" rel="nofollow noreferrer">https://www.rdocumentation.org/packages/mice/versions/3.13.0/topics/pool</a> reads:</p>
<p><em>A common error is to reverse steps 2 and 3, i.e., <strong>to pool the multiply-imputed data instead of the estimates</strong>. Doing so may severely bias the estimates of scientific interest and yield incorrect statistical intervals and p-values. The pool() function will detect this case.</em></p>
<p>This means that there is no "real" dataset for the pooled estimates (i.e. you cannot really combine the datasets for imputations 1-3), which means that <code>ggforest()</code> cannot compute the desired plot (since it needs to have a dataset and that cannot be used because it would lead to erroneous estimates).</p>
<p>What you could do, is present all the ggforest plots for each imputation (so if you did 3 imputations, you will get 3 slightly different ggforest plots) and finally add the pooled estimates plot by using <code>plot()</code> as suggested above.</p>
<p>To create each ggforest plot you need the following line of code:</p>
<pre><code>ggforest(mimira_object$analyses[[1]], complete(mydata_imp_m3_psm, 1))
</code></pre>
<p>This will create the ggforest plot for your first imputation. Change the numbers to 2 and 3 to check the remaining imputations.</p>
<p>I hope this helped,</p>
<p>Alex</p> | 2021-09-01 17:42:27.883000+00:00 | 2021-09-01 18:31:55.243000+00:00 | 2021-09-01 18:31:55.243000+00:00 | null | 68,936,768 | <p>Dear StackOverflow community,
as a surgeon who has been teaching himself R with great enthusiasm for 6 months (StackOverflow, and so many websites), I beg your indulgence for the triviality of my concern.</p>
<p><strong>The background:</strong>
Briefly, my objective is to run a survival cox model regression for a dataset of cancer patients. Due to the retrospective aspect, I planned to make a matching 1:3 with propensity score matching (PSM). The missing data were dealt with multiple imputations ("mice" pkg). The PSM was managed with "MatchThem" pkg.
I used "survey" pkg for pooling the survival (svycoxph() pooled through with() function). This leads us to a mimira object, which I can easily print out into a beautiful Table, with tbl_regression ("gtsummary" pkg).</p>
<p><strong>The issue:</strong>
As I usually print my cox regressions as a hazard-ratios table plus a graphical version (forest plot with ggforest(), from the "survminer" pkg), this time I am really stuck. The function ggforest doesn't recognize the mimira object as a "coxph" object and sends this error:</p>
<pre><code>Error in ggforest(tbl_regression_object, data = mimira_object) :
inherits(model, "coxph") is not TRUE
</code></pre>
<p>I guess that adding PSM to my multiple imputations is the problem, as I had no problem printing the cox regression of multiple imputations as a forest plot (ggforest can deal with mira objects without problem via the pool_and_tidy_mice() function).</p>
<p>Here is <strong>the script:</strong></p>
<pre><code>#Data
library(fabricatr)
library(simsurv)
# Simulate patient data in a clinical trial
participant_data <- fabricate(
N = 2000,
age = runif(N, min = 18, max = 85),
is_female = draw_binary(prob = 0.5, N = N),
is_smoker = draw_binary(prob = 0.2 + 0.2 * (age > 50), N = N),
disease_stage = round(runif(N, min = 1 + 0.5 * (age > 65), max = 4)),
treatment = draw_binary(prob = 0.5, N = N),
kps = runif(N, min = 40, max = 100)
)
# Simulate data in the survival context
survival_data <- simsurv(
lambdas = 0.1, gammas = 1.8,
x = participant_data,
betas = c(is_female = -0.2, is_smoker = 1.2,
treatment = -0.4, kps = -0.005,
disease_stage = 0.2),
maxt = 5)
# Merging df
library(dplyr)
mydata_complete <- bind_cols(survival_data, participant_data)
# generating missing value
library(missMethods)
mydata_uncomp <- delete_MCAR(mydata_complete, 0.3)
mydata <- mydata_uncomp
#1 imputation with "mice"
library(mice)
mydata$nelsonaalen <- nelsonaalen(mydata, eventtime, status)
mydata_mice_imp_m3 <- mice(mydata, maxit = 2, m = 3, seed = 20200801) # m=3 is for testing
#2 matching (PSM 1:3) with "MatchThem"
library(MatchThem)
mydata_imp_m3_psm <- matchthem(treatment ~ age + is_female + disease_stage, data = mydata_mice_imp_m3, approach = "within" ,ratio= 1, method = "optimal")
#3 Pooling Coxph models in multiple imputed datasets and PSM with "survey"
library(survey)
mimira_object <- with(data = mydata_imp_m3_psm, expr = svycoxph(Surv(eventtime, status) ~ age+ is_smoker + disease_stage))
pool_and_tidy_mice(mimira_object, exponentiate = TRUE, conf.int=TRUE) -> pooled_imp_m3_cph
# estimates with pool_and_tidy_mice() works with mimira_object but cannot bring me de degree of freedoms. Warning message :
In get.dfcom(object, dfcom) : Infinite sample size assumed.
> pooled_imp_m3_cph
term estimate std.error statistic p.value conf.low conf.high b df dfcom fmi lambda m riv ubar
1 age 0.9995807 0.001961343 -0.2138208 NaN NaN NaN 1.489769e-06 NaN Inf NaN 0.5163574 3 1.067643 1.860509e-06
2 is_smoker 2.8626952 0.093476026 11.2516931 NaN NaN NaN 4.182884e-03 NaN Inf NaN 0.6382842 3 1.764601 3.160589e-03
3 disease_stage 1.2386947 0.044092483 4.8547535 NaN NaN NaN 8.995628e-04 NaN Inf NaN 0.6169374 3 1.610540 7.447299e-04
#4 Table summary of the pooled results
library(gtsummary)
tbl_regression_object <- tbl_regression(mimira_object, exp=TRUE, conf.int = TRUE) # 95% CI and p-value are missing due to an issue with an other issue in the pooling of the mimira_object. The Matchthem:::get.2dfcom function gives a dfcom = 999999 (another issue to be solved in my concern)
#5 What it should looks like as graphical summary
library(survival)
mydata.cox <- coxph(Surv(eventtime, status) ~ age+ is_smoker + disease_stage, mydata_uncomp) # (df mydata_uncomp is without imputation and PSM)
#with gtsummary
forestGT <-
mydata.cox %>%
tbl_regression(exponentiate = TRUE,
add_estimate_to_reference_rows = TRUE) %>%
plot()
(forestGT) # See picture GT_plot1. Almost perfect. Would have been great to know how to add N, 95% CI, HR, p-value and parameters of the model (AIC, events, concordance, etc.)
#with survminer
HRforest <-
survminer::ggforest(mydata.cox, data = mydata_uncomp)
(HRforest) # See picture Ggforest. Everything I need to know about my cox regression is all in there. For me it is just a great regression cox forest plot.
#6 Actually what happens when I do the same thing with imputed and matched df
#with gtsummary
forestGT_imp_psm <-
mimira_object %>%
tbl_regression(exponentiate = TRUE,
add_estimate_to_reference_rows = TRUE) %>%
plot() # WARNING message : In get.dfcom(object, dfcom) : Infinite sample size assumed.
(forestGT_imp_psm) # See picture GT_plot2. The plot is rendered but without 95% IC
#with survminer
HRforest_imp_psm <-
ggforest(mimira_object, data = mydata_imp_m3_psm) # ERROR:in ggforest(mimira_object, data = mydata_imp_m3_psm) : inherits(model, "coxph") is not TRUE
(HRforest_imp_psm)
#7 The lucky and providential step
# your solution/advise
</code></pre>
<p>Would greatly appreciate your help.</p>
<p>cheers.</p>
<p>AK</p>
<p>Picture GT_plot1
(not allowed to embed images in this post, here is sharelink : <a href="https://drive.google.com/file/d/1cn2wMAmkBxTVDeqmtcE-gRvfDaOW-HT6/view?usp=sharing" rel="nofollow noreferrer">GT_plot1</a></p>
<p>Picture Ggforest_plot
<a href="https://drive.google.com/file/d/1NOawx8cK9jshrtFeFguMJ0KXE-T9H4K8/view?usp=sharing" rel="nofollow noreferrer">Ggforest_plot</a></p>
<p>Picture GT_plot2
<a href="https://drive.google.com/file/d/1yUtBbOtSALpQZaV2hnsZXe8AmiVodA6r/view?usp=sharing" rel="nofollow noreferrer">GT_plot2</a></p> | 2021-08-26 10:22:41.633000+00:00 | 2021-09-01 18:31:55.243000+00:00 | 2021-08-31 08:05:38.740000+00:00 | r-mice|cox-regression|gtsummary|forestplot|propensity-score-matching | ['https://arxiv.org/ftp/arxiv/papers/2009/2009.11772.pdf', 'https://www.rdocumentation.org/packages/mice/versions/3.13.0/topics/pool'] | 2 |
47,143,624 | <p>The original <a href="https://arxiv.org/abs/1502.03167" rel="noreferrer">batch-norm paper</a> prescribes using the batch-norm before ReLU activation. But there is evidence that it's probably better to use batchnorm <em>after</em> the activation. Here's a comment on <a href="https://github.com/fchollet/keras/issues/1802#issuecomment-187966878" rel="noreferrer">Keras GitHub</a> by Francois Chollet:</p>
<blockquote>
<p>... I can guarantee that recent code written by Christian [Szegedy]
applies relu
before BN. It is still occasionally a topic of debate, though.</p>
</blockquote>
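<p>As a minimal illustration of the two orderings (a <code>tf.keras</code> sketch written for this answer, not code from either source; the layer sizes are arbitrary):</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers

# Conv -> BatchNorm -> ReLU (the ordering prescribed in the original paper)
pre_activation = keras.Sequential([
    layers.Conv2D(32, 3, input_shape=(28, 28, 1)),
    layers.BatchNormalization(),
    layers.ReLU(),
])

# Conv -> ReLU -> BatchNorm (the ordering debated in the Keras thread)
post_activation = keras.Sequential([
    layers.Conv2D(32, 3, input_shape=(28, 28, 1)),
    layers.ReLU(),
    layers.BatchNormalization(),
])
</code></pre>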
<p>To your second question: in tensorflow, you can use a high-level <a href="https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization" rel="noreferrer"><code>tf.layers.batch_normalization</code></a> function, or a low-level <a href="https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization" rel="noreferrer"><code>tf.nn.batch_normalization</code></a>.</p> | 2017-11-06 18:43:38.553000+00:00 | 2017-11-06 18:43:38.553000+00:00 | null | null | 47,143,521 | <p>I have the following architecture:</p>
<pre><code>Conv1
Relu1
Pooling1
Conv2
Relu2
Pooling3
FullyConnect1
FullyConnect2
</code></pre>
<p>My question is, where do I apply batch normalization? And what would be the best function to do this in TensorFlow?</p> | 2017-11-06 18:37:01.683000+00:00 | 2021-01-15 18:20:08.610000+00:00 | 2017-11-06 19:50:00.910000+00:00 | python|machine-learning|tensorflow|conv-neural-network|batch-normalization | ['https://arxiv.org/abs/1502.03167', 'https://github.com/fchollet/keras/issues/1802#issuecomment-187966878', 'https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization', 'https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization'] | 4 |
42,548,087 | <p>Good question. I'm not an expert; in fact, I got intrigued when you asked the question. </p>
<p><strong>tl;dr:</strong> more accurate dependency parsers would allow one to propagate sentiment through a graph and thus better determine sentiment polarity, at least in theory.</p>
<p>It seems from my reading that sentiment analysis using dependency tree graphs propagates the independent (prior -- the sentiment you might get from a lexicon) sentiment of words to compose the overall sentiment polarity of the text. </p>
<p>This approach uses the composition of language (its grammatical structure) to determine sentiment. This is somewhat* opposed to a statistical (naive Bayes, logistic regression, neural networks) approach to understanding sentiment.</p>
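<p>As a toy illustration of what propagating lexicon sentiment over a dependency parse can look like (a hypothetical sketch using spaCy and a made-up three-word lexicon; this is not how SyntaxNet or the papers below do it):</p>
<pre><code>import spacy

nlp = spacy.load("en_core_web_sm")                  # any English dependency parser would do
lexicon = {"good": 1.0, "bad": -1.0, "great": 2.0}  # made-up prior polarities

def sentence_polarity(text):
    score = 0.0
    for token in nlp(text):
        polarity = lexicon.get(token.lemma_.lower(), 0.0)
        # flip the prior polarity if the parser attaches a negation to this word
        if any(child.dep_ == "neg" for child in token.children):
            polarity = -polarity
        score += polarity
    return score

print(sentence_polarity("The movie was not good"))  # typically -1.0: "good" flipped by "not"
</code></pre>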
<p>Here's the paper I scanned:</p>
<p><a href="http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS14/paper/viewFile/7869/7837" rel="nofollow noreferrer">http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS14/paper/viewFile/7869/7837</a></p>
<p>For a deeper exploration of what's possible, this might help:</p>
<p><a href="https://arxiv.org/pdf/1401.6330.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1401.6330.pdf</a></p>
<p>A more thorough introduction to dependency parsing, if you're interested, might be <a href="https://web.stanford.edu/~jurafsky/slp3/14.pdf" rel="nofollow noreferrer">https://web.stanford.edu/~jurafsky/slp3/14.pdf</a> </p>
<p>*somewhat in the sense that (in particular) convolution networks do learn a certain composition of language so do rnns.</p> | 2017-03-02 06:15:12.870000+00:00 | 2017-03-02 06:15:12.870000+00:00 | null | null | 42,546,538 | <p>With the announcement from Google on release of Parsey McParseface <a href="https://research.googleblog.com/2016/05/announcing-syntaxnet-worlds-most.html" rel="nofollow noreferrer">syntaxnet</a>
which is claimed to be the most accurate dependency parser. I want to understand how this parser can be used for more accurate sentiment analysis ? If someone can share some blogs or research papers or tutorials which can help me understand the overall flow.</p> | 2017-03-02 03:52:24.517000+00:00 | 2017-03-02 06:15:12.870000+00:00 | null | tensorflow|nlp|stanford-nlp|sentiment-analysis|syntaxnet | ['http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS14/paper/viewFile/7869/7837', 'https://arxiv.org/pdf/1401.6330.pdf', 'https://web.stanford.edu/~jurafsky/slp3/14.pdf'] | 3 |
5,107,007 | <p><a href="http://www.cs.arizona.edu/~kobourov/PROJECTS/maps.html" rel="nofollow">GMap</a> is exactly what I want. They're combining a variety of techniques, including Voronoi diagrams (<a href="http://arxiv.org/pdf/0907.2585v1" rel="nofollow">here's a paper on the algorithm</a>). Now I just have to figure out how to get it...</p> | 2011-02-24 15:52:12.490000+00:00 | 2012-12-31 16:05:31.720000+00:00 | 2012-12-31 16:05:31.720000+00:00 | null | 5,074,024 | <p>I'm writing a Risk-like board game in java. A feature is that players can design their own maps which they store in a text file. The text file lists all territories (== countries) in the world map followed by their direct neighbors. The game then scans the file and creates a collection of the territories with their corresponding adjacency lists.</p>
<p>The next step would be to translate this graph into a graphical representation. That means I want to represent each territory by a rectangle or some other simple shape. I don't want to go into complex, edgy borders between territories yet. So basically the territories will look like some African or North American nations with horizontal and vertical borders.</p>
<p>Now my problem is: While it would be easy to visualize a graph where the borders are represented by drawn edges between them, I find it difficult to place the territories (== vertices) directly adjacent to each other. In other words the territories should "touch" each other, like in the real world.</p>
<p>In particular, it is difficult because of such places where 4 or more territories border with each other (Consider Four Corners in USA with Arizona, Colorado, New Mexico, and Utah).</p>
<p>Now I was wondering if anybody ever tried to do something similar or if there are even existing algorithms dealing with this problem. I would appreciate any help and creative input. Thanks!</p> | 2011-02-22 04:33:43.477000+00:00 | 2012-12-31 16:05:31.720000+00:00 | 2011-02-22 05:35:06.877000+00:00 | algorithm|graph|map | ['http://www.cs.arizona.edu/~kobourov/PROJECTS/maps.html', 'http://arxiv.org/pdf/0907.2585v1'] | 2 |
63,253,377 | <p>Use</p>
<pre><code>/^(.*?)(v\d+)?$/
</code></pre>
<p>See <a href="https://regex101.com/r/quRVSn/1" rel="nofollow noreferrer">proof</a>.</p>
<p><strong>EXPLANATION</strong></p>
<pre><code>NODE EXPLANATION
--------------------------------------------------------------------------------
^ the beginning of the string
--------------------------------------------------------------------------------
( group and capture to \1:
--------------------------------------------------------------------------------
.*? any character except \n (0 or more times
(matching the least amount possible))
--------------------------------------------------------------------------------
) end of \1
--------------------------------------------------------------------------------
( group and capture to \2 (optional
(matching the most amount possible)):
--------------------------------------------------------------------------------
v 'v'
--------------------------------------------------------------------------------
\d+ digits (0-9) (1 or more times (matching
the most amount possible))
--------------------------------------------------------------------------------
)? end of \2 (NOTE: because you are using a
quantifier on this capture, only the LAST
repetition of the captured pattern will be
stored in \2)
--------------------------------------------------------------------------------
$ before an optional \n, and the end of the
string
</code></pre>
<p>JavaScript:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>let url = 'https://arxiv.org/abs/2004.10934v3'
let regURL = /^(.*?)(v\d+)?$/;
let [_, url_f, version] = regURL.exec(url);
console.log(url_f);</code></pre>
</div>
</div>
</p> | 2020-08-04 18:52:51.430000+00:00 | 2020-08-04 18:52:51.430000+00:00 | null | null | 63,250,297 | <p>I have a url, it can either be</p>
<pre><code>https://arxiv.org/abs/2004.10934
</code></pre>
<p>or with a suffix v[0-9]</p>
<pre><code>https://arxiv.org/abs/2004.10934v3
</code></pre>
<p>I want to filter out the v[0-9] part with regex.</p>
<p>That is, getting 'https://arxiv.org/abs/2004.10934', no matter which type of url is given.</p>
<p>Below is what I have for now, it works but seems hacky...</p>
<pre><code>let url = 'https://arxiv.org/abs/2004.10934v3'
let regURL = /(.*(?<!v[0-9])(?<!v))/g;
let url_f = regURL.exec(url)[0];
</code></pre>
<p>Is there a better regex pattern for this?</p> | 2020-08-04 15:37:02.783000+00:00 | 2020-08-04 18:52:51.430000+00:00 | null | javascript|regex|string | ['https://regex101.com/r/quRVSn/1'] | 1 |
49,884,662 | <p>You can use embedding dropout like this:</p>
<pre><code>with tf.variable_scope('embedding'):
self.embedding_matrix = tf.get_variable( "embedding", shape=[self.vocab_size, self.embd_size], dtype=tf.float32, initializer=self.initializer)
with tf.name_scope("embedding_dropout"):
self.embedding_matrix = tf.nn.dropout(self.embedding_matrix, keep_prob=self.embedding_dropout, noise_shape=[self.vocab_size,1])
with tf.name_scope('input'):
self.input_batch = tf.placeholder(tf.int64, shape=(None, None))
self.inputs = tf.nn.embedding_lookup(self.embedding_matrix, self.input_batch)
</code></pre>
<p>This randomly sets rows of the embedding matrix to zero, as described in <a href="https://arxiv.org/pdf/1512.05287.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1512.05287.pdf</a>, which is cited in the paper you mentioned.</p>
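<p>A tiny NumPy illustration of what that <code>noise_shape</code> does (illustrative only, with made-up sizes):</p>
<pre><code>import numpy as np

vocab_size, embd_size, keep_prob = 5, 3, 0.8
emb = np.random.randn(vocab_size, embd_size)

mask = (np.random.rand(vocab_size, 1) < keep_prob).astype(emb.dtype)  # one keep/drop decision per word
emb_dropped = emb * mask / keep_prob  # whole rows (words) are zeroed, survivors rescaled by 1/keep_prob
</code></pre>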
<p>Source:</p>
<p><a href="https://github.com/tensorflow/tensorflow/issues/14746" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/14746</a></p>
<p>Similar pytorch implementation:</p>
<p><a href="https://github.com/salesforce/awd-lstm-lm/blob/master/embed_regularize.py" rel="nofollow noreferrer">https://github.com/salesforce/awd-lstm-lm/blob/master/embed_regularize.py</a></p> | 2018-04-17 18:01:42.270000+00:00 | 2018-04-17 18:01:42.270000+00:00 | null | null | 48,693,587 | <p>I am reading this paper on "<a href="https://arxiv.org/pdf/1708.02182.pdf" rel="nofollow noreferrer">Regularizing and Optimizing LSTM Language Models</a>" and they talk about <code>Embedding Dropout</code> which says "As the dropout occurs on the embedding matrix that is used for a full forward and backward pass, this means that all occurrences of a specific word will disappear within that pass, equivalent to performing variational dropout on the connection between the one-hot embedding and the embedding lookup." However, I cannot seem to figure out a great approach to do this within a tensorflow experiment. For each new batch, I currently embed my sequence with the following code:</p>
<pre><code>embedding_sequence = tf.contrib.layers.embed_sequence(features['input_sequence'], vocab_size=n_tokens, embed_dim=word_embedding_size)
</code></pre>
<p>Now I could easily apply dropout to the <code>embedding_sequence</code>, however my read of the paper says that the same words should be dropped from the entire forward/backward pass. Any suggestions on a simple approach that would still allow me to use <code>embed_sequence</code>? Here is what I think my approach should be after breaking down <a href="https://github.com/tensorflow/tensorflow/blob/r1.5/tensorflow/contrib/layers/python/layers/encoders.py#L91" rel="nofollow noreferrer">embed_sequence</a> but I'm still not convinced it is correct...</p>
<p><strong>PROPOSED SOLUTION</strong></p>
<pre><code>embedding_matrix = tf.get_variable("embeddings", shape=[vocab_size, embed_dim], dtype = tf.float32, initializer = None, trainable=True)
embedding_matrix_dropout = tf.nn.dropout(embedding_matrix, keep_prob=keep_prob)
embedding_sequence = tf.nn.embedding_lookup(embedding_matrix_dropout, features['input_sequence'])
</code></pre>
<p>Is there a more appropriate way to handle this? Is there anything I am getting from <code>embed_sequence</code> that I will not get from my proposed solution?</p>
<p>Secondary things I'm unsure about:</p>
<ol>
<li>what should my embedding_matrix initializer be? Default is set to None?</li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/nn/dropout" rel="nofollow noreferrer">tf.nn.dropout</a> appears to handle scaling by 1/keep_prob as mentioned is necessary in the paper, correct?</li>
</ol> | 2018-02-08 19:45:52.043000+00:00 | 2019-09-12 14:16:51.977000+00:00 | 2018-02-22 17:51:33.167000+00:00 | python|tensorflow | ['https://arxiv.org/pdf/1512.05287.pdf', 'https://github.com/tensorflow/tensorflow/issues/14746', 'https://github.com/salesforce/awd-lstm-lm/blob/master/embed_regularize.py'] | 3 |
61,017,035 | <p>Much better solution for modern versions of mySQL: skip the base64 encoding, and use <code>_binary %s</code> to send binary data, or just add <code>binary_prefix = True</code> option when setting up the pymysql connection. For example, </p>
<pre><code>import pymysql
import requests
myPDF = requests.get("https://arxiv.org/pdf/2004.00627.pdf")
conn = pymysql.connect(
host = "127.0.0.1",
user = user,
passwd = password,
db = "myDB",
binary_prefix = True)
cur = conn.cursor()
insertLine = "INSERT INTO myDB (PDF) VALUES (%s)"
cur.execute(insertLine, myPDF.content)  # pass the raw PDF bytes, not the Response object
conn.commit()
</code></pre> | 2020-04-03 17:12:54.677000+00:00 | 2020-04-03 17:12:54.677000+00:00 | null | null | 61,012,951 | <p>I use the <code>requests</code> library to retrieve a binary file from a website. I now want to store it in MySQL as a BLOB. I don't want to take the intermediate step of writing the file to disk. What is the best way to do this?</p>
<p>At present, I am using <code>base64</code> to encode the binary file so that MySQL will accept it, as in <a href="https://stackoverflow.com/a/46093925/697473">this suggestion</a>. Is this the best strategy, or is there a way that permits me to skip the encoding (and the subsequent decoding when I retrieve the file)?</p>
<p>Minimal example:</p>
<pre><code>import base64
import pymysql
import requests
myPDF = requests.get("https://arxiv.org/pdf/2004.00627.pdf")
myPDF_encoded = base64.b64encode(myPDF.content)
conn = pymysql.connect(
host = "127.0.0.1",
user = user,
passwd = password,
db = "myDB")
cur = conn.cursor()
insertLine = "INSERT INTO myDB (PDF) VALUES (%s)"
cur.execute(insertLine, myPDF_encoded)
conn.commit()
</code></pre>
<p>Many posts speak to the general problem of writing a binary file to a BLOB, but as best I can tell, all start from the assumption that the file is to be read from disk.</p> | 2020-04-03 13:26:04.287000+00:00 | 2020-04-03 17:12:54.677000+00:00 | null | python|mysql|python-requests|blob | [] | 0 |
63,021,392 | <p>The default learning rate is too high for BERT. Try setting it to one of the recommended learning rates from the <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="noreferrer">original paper</a> Appendix A.3 of 5e-5, 3e-5 or 2e-5.</p> | 2020-07-21 19:04:40.780000+00:00 | 2020-07-21 19:04:40.780000+00:00 | null | null | 61,389,018 | <p>I am using huggingface <code>TFBertModel</code> to do a classification task (from <a href="https://huggingface.co/transformers/model_doc/bert.html#tfbertmodel" rel="nofollow noreferrer">here</a>: ), I am using the bare <code>TFBertModel</code> with an added head dense layer and not <code>TFBertForSequenceClassification</code> since I didn't see how I could use the latter using pretrained weights to only fine-tune the model.</p>
<p>As far as I know, fine tuning should give me about 80% or more accuracy in both BERT and ALBERT, but I am not coming even near that number:</p>
<pre><code>Train on 3600 samples, validate on 400 samples
Epoch 1/2
3600/3600 [==============================] - 177s 49ms/sample - loss: 0.6531 - accuracy: 0.5792 - val_loss: 0.5296 - val_accuracy: 0.7675
Epoch 2/2
3600/3600 [==============================] - 172s 48ms/sample - loss: 0.6288 - accuracy: 0.6119 - val_loss: 0.5020 - val_accuracy: 0.7850
</code></pre>
<p>More epochs don't make much difference.</p>
<p>I am using the public CoLA <a href="https://nyu-mll.github.io/CoLA/" rel="nofollow noreferrer">data set</a> to fine-tune; this is what the data looks like:</p>
<pre><code>gj04 1 Our friends won't buy this analysis, let alone the next one we propose.
gj04 1 One more pseudo generalization and I'm giving up.
gj04 1 One more pseudo generalization or I'm giving up.
gj04 1 The more we study verbs, the crazier they get.
...
</code></pre>
<p>And this is the code that loads the data into python:</p>
<pre><code>import csv
def get_cola_data(max_items=None):
csv_file = open('cola_public/raw/in_domain_train.tsv')
reader = csv.reader(csv_file, delimiter='\t')
x = []
y = []
for row in reader:
x.append(row[3])
y.append(float(row[1]))
if max_items is not None:
x = x[:max_items]
y = y[:max_items]
return x, y
</code></pre>
<p>I verified that the data is in the format that I want it to be in the lists, and this is the code of the model itself:</p>
<pre><code>#!/usr/bin/env python
import tensorflow as tf
from tensorflow import keras
from transformers import BertTokenizer, TFBertModel
import numpy as np
from cola_public import get_cola_data
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
bert_model = TFBertModel.from_pretrained('bert-base-uncased')
bert_model.trainable = False
x_input = keras.Input(shape=(512,), dtype=tf.int64)
x_mask = keras.Input(shape=(512,), dtype=tf.int64)
_, output = bert_model([x_input, x_mask])
output = keras.layers.Dense(1)(output)
model = keras.Model(
inputs=[x_input, x_mask],
outputs=output,
name='bert_classifier',
)
model.compile(
loss=keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'],
)
train_data_x, train_data_y = get_cola_data(max_items=4000)
encoded_data = [tokenizer.encode_plus(data, add_special_tokens=True, pad_to_max_length=True) for data in train_data_x]
train_data_x = np.array([data['input_ids'] for data in encoded_data])
mask_data_x = np.array([data['attention_mask'] for data in encoded_data])
train_data_y = np.array(train_data_y)
model.fit(
[train_data_x, mask_data_x],
train_data_y,
epochs=2,
validation_split=0.1,
)
cmd_input = ''
while True:
print("Type an opinion: ")
cmd_input = input()
# print('Your opinion is: %s' % cmd_input)
if cmd_input == 'exit':
break
cmd_input_tokens = tokenizer.encode_plus(cmd_input, add_special_tokens=True, pad_to_max_length=True)
cmd_input_ids = np.array([cmd_input_tokens['input_ids']])
cmd_mask = np.array([cmd_input_tokens['attention_mask']])
model.reset_states()
result = model.predict([cmd_input_ids, cmd_mask])
print(result)
</code></pre>
<p>Now, no matter if I use other dataset, other number of items from the datasets, if I use a dropout layer before the last dense layer, if I give another dense layer before the last one with higher number of units or if I use Albert instead of BERT, I always have low accuracy and high loss, and often, the validation accuracy is higher than training accuracy.</p>
<p>I have the same results if I try to use BERT/ALBERT for NER task, always the same result, which makes me believe I systematically make some fundamental mistake in fine tuning.</p>
<p>I know that I have <code>bert_model.trainable = False</code> and it is what I want, since I want to train only the last head and not the pretrained weights and I know that people train that way successfully. Even if I train with the pretrained weights, the results are much worse.</p>
<p>I see I have a very high underfit, but I just can't put my finger where I could improve here, especially seeing that people tend tohave good results with just a single dense layer on top of the model.</p> | 2020-04-23 13:57:41.583000+00:00 | 2020-07-22 02:57:22.783000+00:00 | 2020-07-22 02:57:22.783000+00:00 | python|tensorflow|machine-learning|keras|deep-learning | ['https://arxiv.org/pdf/1810.04805.pdf'] | 1 |
2,489,670 | <p>To map a word to a number, you should probably just use an <a href="http://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/natlog/util/Index.html" rel="nofollow noreferrer">index</a>. Using hashcodes is just asking for trouble, since completely unrelated words could end up using the same value.</p>
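<p>A minimal sketch of the index idea (illustrative only):</p>
<pre><code>index = {}                          # word -> small integer id

def word_id(word):
    if word not in index:
        index[word] = len(index)    # next unused id
    return index[word]

ids = [word_id(w) for w in "every good boy deserves fruit".split()]
print(ids)                          # [0, 1, 2, 3, 4]
</code></pre>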
<p>There are a number of ways to get a numerical measure of how semantically related words are, such as <a href="http://en.wikipedia.org/wiki/Latent_semantic_analysis" rel="nofollow noreferrer">latent semantic analysis (LSA)</a> or using some measure of relatedness within a lexical resource like <a href="http://wordnet.princeton.edu/" rel="nofollow noreferrer">WordNet</a> (e.g. <a href="http://webdocs.cs.ualberta.ca/~lindek/papers/sim.pdf" rel="nofollow noreferrer">Lin</a>, <a href="http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/resnik99a.pdf" rel="nofollow noreferrer">Resnik</a>, or <a href="http://arxiv.org/PS_cache/cmp-lg/pdf/9709/9709008v1.pdf" rel="nofollow noreferrer">Jiang-Conrath</a>).</p>
<p>To get what you're calling lexical categories, you'll need to use <a href="http://nlp.stanford.edu/software/tagger.shtml" rel="nofollow noreferrer">a part-of-speech (POS) tagger</a>. The POS tags will also give you tense information (e.g., VBD means the word is a past-tense verb). </p>
<p>To assign words to topics, you could make use of <a href="http://en.wikipedia.org/wiki/Hypernym" rel="nofollow noreferrer">hypernym information</a> from WordNet. This will give you stuff like 'red' is a 'color'. Or, you could make use of <a href="http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation" rel="nofollow noreferrer">Latent Dirichlet allocation (LDA)</a>, if you would like to have a softer assignment of words to topics such that each word can be assigned to numerous topics to varying degrees. </p> | 2010-03-22 02:18:02+00:00 | 2010-03-24 17:49:57.577000+00:00 | 2010-03-24 17:49:57.577000+00:00 | null | 2,489,595 | <p>As part of a larger project, I need to read in text and represent each word as a number. For example, if the program reads in "<em>Every good boy deserves fruit</em>", then I would get a table that converts '<strong><em>every</em></strong>' to '<strong>1742</strong>', '<strong><em>good</em></strong>' to '<strong>977513</strong>', etc. </p>
<p>Now, obviously I can just use a hashing algorithm to get these numbers. However, it would be more useful if words with similar meanings had numerical values close to each other, so that '<strong><em>good</em></strong>' becomes '<strong>6827</strong>' and '<strong><em>great</em></strong>' becomes '<strong>6835</strong>', etc. </p>
<p>As another option, instead of a simple integer representing each number, it would be even better to have a vector made up of multiple numbers, eg (<i>lexical_category</i>, <em>tense</em>, <em>classification</em>, <i>specific_word</i>) where <i>lexical_category</i> is noun/verb/adjective/etc, <em>tense</em> is future/past/present, <em>classification</em> defines a wide set of general topics and <i>specific_word</i> is much the same as described in the previous paragraph.</p>
<p>Does any such an algorithm exist? If not, can you give me any tips on how to get started on developing one myself? I code in C++.</p> | 2010-03-22 01:48:01.773000+00:00 | 2012-05-04 17:33:12.623000+00:00 | 2012-05-04 17:33:12.623000+00:00 | c++|nlp|hash|semantic-markup | ['http://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/natlog/util/Index.html', 'http://en.wikipedia.org/wiki/Latent_semantic_analysis', 'http://wordnet.princeton.edu/', 'http://webdocs.cs.ualberta.ca/~lindek/papers/sim.pdf', 'http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/resnik99a.pdf', 'http://arxiv.org/PS_cache/cmp-lg/pdf/9709/9709008v1.pdf', 'http://nlp.stanford.edu/software/tagger.shtml', 'http://en.wikipedia.org/wiki/Hypernym', 'http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation'] | 9 |
2,898,373 | <p><strong>Design Patterns for Functional Strategic Programming</strong><br>
<a href="http://arxiv.org/abs/cs.PL/0204015" rel="nofollow noreferrer">http://arxiv.org/abs/cs.PL/0204015</a></p>
<blockquote>
<p>In previous work, we introduced the fundamentals and a supporting
combinator library for strategic programming. This is an idiom for
generic programming based on the notion of a functional strategy: a
first-class generic function that can not only be applied to terms of
any type, but which also allows generic traversal into subterms and
can be customized with type-specific behaviour. </p>
<p>This paper seeks to provide practicing functional programmers with
pragmatic guidance in crafting their own strategic programs. We
present the fundamentals and the support from a user's perspective,
and we initiate a catalogue of strategy design patterns. These design
patterns aim at consolidating strategic programming expertise in
accessible form.</p>
</blockquote>
<p><strong>Incorporating Functional Design Patterns In Software Development</strong><br>
<a href="http://essay.utwente.nl/631/" rel="nofollow noreferrer">http://essay.utwente.nl/631/</a></p>
<blockquote>
<p>This thesis proposes a method for the incorporation of Functional
Design Patterns in the software development process. The goal of the
method is to enable functional and technical designers to make more
efficient use of Functional Design Patterns at different phases of
development. The method does not focus solely on functional design but
ranges from acquisition all the way to maintenance.</p>
</blockquote> | 2010-05-24 16:11:07.420000+00:00 | 2013-07-22 17:12:15.627000+00:00 | 2013-07-22 17:12:15.627000+00:00 | null | 2,898,366 | <p>I was wondering if all design Patterns are only used in Object-Oriented design? Are there any design patterns used in non Object-Oriented design?</p>
<p>Thanks and regards!</p> | 2010-05-24 16:09:58.857000+00:00 | 2013-07-22 17:12:15.627000+00:00 | 2010-06-02 18:27:26.523000+00:00 | design-patterns|oop | ['http://arxiv.org/abs/cs.PL/0204015', 'http://essay.utwente.nl/631/'] | 2 |
57,044,636 | <p>There is a similar paper:<a href="https://arxiv.org/pdf/1810.10325.pdf" rel="nofollow noreferrer">Multi-stage Reinforcement Learning for Object Detection</a> which does the same thing.</p>
<p>The code for the implementation of the paper can be found <a href="https://github.com/qq456cvb/multi-stage-detection" rel="nofollow noreferrer">here</a>.</p> | 2019-07-15 17:33:15.313000+00:00 | 2019-07-15 17:33:15.313000+00:00 | null | null | 57,010,404 | <p>I have a model which detects an object and makes a bounding box over it. The problem is that those bounding boxes are not accurate and need to be a little more tight on the object rather than some body parts exceeding the box or some boxes bigger than the object size. I want to apply reinforcement learning to make bounding boxes more accurate as I have the information of perfect bounding boxes which is the target and the input images which have the inaccurate bounding boxes or the inaccurate coordinates. I found a paper online on the exact same topic but I cant find the code for the it builds an environment with defined states, actions and awards. As I am very new to reinforcement learning I can not make the environment from scratch. </p>
<p>Here is the paper <a href="https://melaniemitchell.me/ResearchGroupContent/MastersTheses/AndrewClelandThesis.pdf" rel="nofollow noreferrer">https://melaniemitchell.me/ResearchGroupContent/MastersTheses/AndrewClelandThesis.pdf</a></p>
<p>Is this whole approach of using and changing grid size measures as states and actions doable? If yes then can someone please link me to a code preferably on github which builds quite similar environment? If not then can someone give any suggestion of building the environment or what other approach I could use?</p> | 2019-07-12 15:57:40.367000+00:00 | 2019-07-15 17:33:15.313000+00:00 | 2019-07-12 16:04:13.400000+00:00 | python|opencv|reinforcement-learning|bounding-box|q-learning | ['https://arxiv.org/pdf/1810.10325.pdf', 'https://github.com/qq456cvb/multi-stage-detection'] | 2 |
55,341,770 | <p>You should look at the <code>doc2vec-</code> Jupyter notebooks bundled with gensim in its <code>docs/notebooks</code> directory (or <a href="https://github.com/RaRe-Technologies/gensim/tree/develop/docs/notebooks" rel="nofollow noreferrer">viewable online</a>) for more examples of proper use. Looking through existing SO answers on the <a href="https://stackoverflow.com/questions/tagged/doc2vec">tag <code>doc2vec</code></a> (and perhaps especially <a href="https://stackoverflow.com/search?q=user:130288%20[doc2vec]">my answers</a>) may also give you an idea of common mistakes.) </p>
<p>To tune the model in an unsupervised setting, you essentially need some domain-specific repeatable evaluation score. This might require going through your whole clustering & end-application, then counting its success on certain results it "should" give for a hand-created subset of your data. </p>
<p>For comparison, if you look at the original 'Paragraph Vector' paper, it used existing batches of top-10 search-results snippets from an existing search engine as the training documents, but then scored any model by how well it put snippets that were in a shared-top-10 closer to each other than to random 3rd documents. The followup paper 'Document Embedding with Paragraph Vectors' trained on Wikipedia articles or Arxiv papers, and tuned their model based on how well the resulting model put documents into the same pre-curated categories that exist on those systems. </p>
<p>You can use any clustering algorithms on the per-document vectors. The output of <code>Doc2Vec</code>, as a document-per-vector, can become the input of downstream algorithms. (I'm not sure what you mean about "separate word and document classification models". You've only described document-level final needs, you might not need word-vectors at all... though some <code>Doc2Vec</code> modes will create such vectors.)</p>
<p>You use the <code>infer_vector()</code> method to create vectors for novel documents, after the model has been trained and frozen. </p>
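<p>A bare-bones gensim sketch of that workflow (one unique tag per document, epochs set at initialization, then <code>infer_vector()</code> for unseen text; the toy corpus and parameters are made up):</p>
<pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument

texts = [["user", "can", "reset", "password"],
         ["system", "sends", "email", "notification"]]
corpus = [TaggedDocument(words, [str(i)]) for i, words in enumerate(texts)]

model = Doc2Vec(vector_size=100, min_count=1, epochs=20)  # min_count=5+ is more typical on real data
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

new_vec = model.infer_vector(["reset", "user", "password"])  # vector for a novel document
</code></pre>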
<p>Looking at the specifics of your data/code, some observations:</p>
<ul>
<li>it's not clear what your multiple columns are, or that they should be separate docs (as opposed to coalesced into one doc). Seeing some full rows might help make clear the essence of your data.</li>
<li>that's a tiny dataset - most published <code>Doc2Vec</code> work operates on tens-of-thousands to millions of documents. This algorithm works best with more data.</li>
<li>the original work gave each document just a single unique-ID tag. While gensim <code>Doc2Vec</code> supports giving documents multiple tags, as you've done here, it's best considered an advanced technique. It essentially dilutes what can be learned from a doc across the multiple tags, which could weaken the results, especially in small datasets.</li>
<li>10-20 training epochs is most common in published work, though more may especially be helpful for smaller datasets. It's best to set the epochs on the model initialization, as well, as that value will also be the default used for future <code>infer_vector()</code> operations (unless another value is explicitly passed there).</li>
<li>the structure of your two methods is a bit odd - saving an untrained model, but then perhaps training and overwriting it right away? (Or are you just trying to re-use a saved model with pre-built vocabulary for multiple training runs with different data?)</li>
<li><code>Word2Vec</code> and <code>Doc2Vec</code> often do better discarding rare words (with the default <code>min_count=5</code> or larger when practical) than trying to train on them. Words that only appear one or a few times are often idiosyncratic in their usage, compared to the "true" importance of the word in the larger world. Keeping them makes models larger, slower to train, and more likely to reflect idiosyncracies of the data than generalizable patterns. </li>
</ul> | 2019-03-25 15:57:29.073000+00:00 | 2019-03-25 15:57:29.073000+00:00 | null | null | 55,335,354 | <p>I've been using doc2vec in the most basic way so far with limited success. I'm able to find similar documents however often I get a lot of false positives. My primary goal is to build a classification algorithm for user requirements. This is to help with user requirement analysis and search.</p>
<p>I know this is not really a large enough dataset so there are a few questions I'd like help with:</p>
<ol>
<li>How can I train on one set of documents and build vectors on another?</li>
<li>How do I go about tuning the model, specifically selecting the right number of dimensions for the vector space</li>
<li>How can I create a hierarchical clustering for the word vectors? Should I do this with one model or should I create separate word and document classification models?</li>
<li>I don't have ground truth, this is unsupervised learning when tuning how do I measure the quality of the result?</li>
<li>And finally, are there any recommended online resource that might cover some of the above.</li>
</ol>
<p>I've been calling train once with 100 vectors on 2000 documents, each with about 100 words, each document has 22 columns which are tagged by both cell and row.</p>
<pre><code>def tag_dataframe(df, selected_cols):
tagged_cells = []
headers = list(df.columns.values)
for index, row in df.iterrows():
row_tag = 'row_' + str(index)
for col_name in headers:
if col_name in selected_cols:
col_tag = 'col_' + col_name
cell_tag = 'cell_' + str(index) + '_' + col_name
cell_val = str(row[col_name])
if cell_val == 'nan':
continue
cleaned_text = clean_str(cell_val)
if len(cleaned_text) == 0:
continue
tagged_cells.append(
gensim.models.doc2vec.TaggedDocument(
cleaned_text,
[row_tag, cell_tag]))
print('tagged rows')
return tagged_cells
def load_or_build_vocab(model_path, tagged_cells):
if os.path.exists(model_path):
print('Loading vocab')
d2vm = gensim.models.Doc2Vec.load(model_path)
else:
print('building vocab')
d2vm = gensim.models.Doc2Vec(
vector_size=100,
min_count=0,
alpha=0.025,
min_alpha=0.001)
d2vm.build_vocab(tagged_cells)
print(' built')
d2vm.save(model_path)
return d2vm
def load_or_train_model(model_path, d2vm, tagged_cells):
if os.path.exists(model_path):
print('Loading Model')
d2vm = gensim.models.Doc2Vec.load(model_path)
else:
print('Training Model')
d2vm.train(
tagged_cells,
total_examples=len(tagged_cells),
epochs=100)
print(' trained')
d2vm.save(model_path)
return d2vm
</code></pre>
<p>What I hope to achieve is a set of document vectors which will help with finding similar user requirements from a free text and a Hierarchical Clustering to build navigation of the existing requirements.</p> | 2019-03-25 10:07:38.270000+00:00 | 2019-03-25 15:57:29.073000+00:00 | 2019-03-25 10:54:21.233000+00:00 | python|dataframe|gensim|doc2vec | ['https://github.com/RaRe-Technologies/gensim/tree/develop/docs/notebooks', 'https://stackoverflow.com/questions/tagged/doc2vec', 'https://stackoverflow.com/search?q=user:130288%20[doc2vec]'] | 3 |
70,299,033 | <p>There are mainly two kinds of metrics/loss functions used in generative models and image-restoration models (a short sketch of the first kind follows the list below):</p>
<ol>
<li>reference based
<ul>
<li>l2 loss (~rmse loss)</li>
<li>l1 loss</li>
<li>PSNR : peak Signal-to-Noise ratio</li>
</ul>
</li>
<li>structure-based (SSIM-family; note these are also computed against a reference image)
<ul>
<li>SSIM : structural similarity index,</li>
<li>MS-SSIM : Multi-scale SSIM
<br />
*There are also metrics that combine both classes (e.g. a mixed MS-SSIM + l1 loss)<br />
For more detailed understanding you can read <a href="https://arxiv.org/pdf/1511.08861.pdf" rel="nofollow noreferrer">this paper</a>.</li>
</ul>
</li>
</ol>
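<p>A small NumPy sketch of the reference-based metrics (illustrative only; <code>img</code> and <code>recon</code> stand for an original and a reconstructed image scaled to [0, 1]):</p>
<pre><code>import numpy as np

img = np.random.rand(28, 28)                                 # stand-in for an original image
recon = np.clip(img + 0.05 * np.random.randn(28, 28), 0, 1)  # stand-in for its reconstruction

mse = np.mean((img - recon) ** 2)   # l2-style reconstruction error
psnr = 10 * np.log10(1.0 / mse)     # peak signal-to-noise ratio, with peak value 1.0
print(mse, psnr)
</code></pre>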
<p>I think the error you got can be resolved by calling<br />
<code>mse(vec1, vec2)</code><br />
*It could also depends on library you are using.</p> | 2021-12-10 02:22:52.953000+00:00 | 2021-12-10 02:29:20.090000+00:00 | 2021-12-10 02:29:20.090000+00:00 | null | 70,288,876 | <p>I want to train autoencoder on mnist dataset to generate images similar to input. That's code I found:</p>
<pre><code>% Load the training data.
XTrain = digitTrainCellArrayData;
%%
% Train an autoencoder with a hidden layer containing 50 neurons.
hiddenSize = 50;
autoenc = trainAutoencoder(XTrain,hiddenSize,...
'MaxEpochs', 200, ...
'L2WeightRegularization', 10^(-3),...
'SparsityRegularization',1,...
'SparsityProportion',0.45);
% Load the test data.
XTest = digitTestCellArrayData;
% Reconstruct the test image data using the trained autoencoder.
xReconstructed = predict(autoenc, XTest);
</code></pre>
<p>And now I want to somehow evaluate performance of my autoencoder and I'm not sure how to do it. I found <a href="https://nl.mathworks.com/help/deeplearning/ref/trainautoencoder.html" rel="nofollow noreferrer">here</a> that MSE can be used between real and reconstructed data set. However when I tried this:</p>
<pre><code>mse(XTest - xReconstructed)
</code></pre>
<p>I obtain error</p>
<pre><code>Operator '-' is not supported for operands of type 'cell'.
</code></pre>
<p>How can I properly perform evaluation of this autoencoder using mse?</p> | 2021-12-09 11:11:15.790000+00:00 | 2021-12-10 02:29:20.090000+00:00 | 2021-12-10 01:17:54.123000+00:00 | matlab|deep-learning|neural-network|autoencoder | ['https://arxiv.org/pdf/1511.08861.pdf'] | 1 |
36,647,605 | <p>I think MinG's view that the maximum clique problem can be solved efficiently with a branch-and-bound method on sparse graphs is fully supported by the paper of Rossi, Gleich and Gebremedhin, <a href="http://arxiv.org/abs/1302.6256" rel="nofollow">http://arxiv.org/abs/1302.6256</a>. They show that with heavy pruning, the maximum clique can be found very quickly on sparse graphs, even when those graphs are huge.</p>
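<p>For intuition, here is a minimal branch-and-bound sketch (my own Python illustration, not the authors' code) of the core pruning idea: a branch is abandoned as soon as the current clique plus all remaining candidates can no longer beat the best clique found so far. Real solvers use much stronger bounds (e.g. greedy coloring), but the structure is the same.</p>
<pre><code>def max_clique(adj):
    """adj maps each vertex to the set of its neighbours."""
    best = []

    def expand(clique, candidates):
        nonlocal best
        # Bound: even taking every remaining candidate cannot beat the incumbent.
        if len(clique) + len(candidates) <= len(best):
            return
        if not candidates:
            best = list(clique)  # new incumbent (guaranteed larger by the bound above)
            return
        for v in sorted(candidates):
            expand(clique + [v], candidates & adj[v])
            candidates = candidates - {v}  # siblings must not re-explore v
            if len(clique) + len(candidates) <= len(best):
                return

    expand([], set(adj))
    return best

# Example: a triangle {0, 1, 2} plus a pendant vertex 3.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(max_clique(graph))  # [0, 1, 2]
</code></pre>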
<p>Of course the problem remains NP-complete, but since the approach scales to graphs with millions of edges, one might say that branch-and-bound methods are promising for this problem.</p> | 2016-04-15 12:48:36.807000+00:00 | 2016-04-15 12:48:36.807000+00:00 | null | null | 16,625,964 | <p>The maximum clique problem (MC-problem) is a classical NP-hard problem, and we can use branch and bound to solve it effectively. Recently, I have been trying to develop an algorithm that finds the clique with the maximum total edge weight in a graph, known as the maximum edge-weighted clique problem (MEC-problem).</p>
<p>I have found some properties of this problem. First, the optimal clique must be a maximal clique, i.e. one that is not contained in any larger clique. Second, its sum of edge weights must be the largest among all maximal cliques.</p>
<p>However, traditional MC-problem algorithms are not effective for the MEC-problem. Therefore, I want to find an effective algorithm for the MEC-problem, especially a branch-and-bound algorithm.</p> | 2013-05-18 15:34:28.987000+00:00 | 2020-01-31 13:58:17.577000+00:00 | 2020-01-31 13:58:17.577000+00:00 | algorithm|optimization|weighted|clique | ['http://arxiv.org/abs/1302.6256'] | 1
47,787,231 | <p><em>Single-threaded</em> memory bandwidth on modern CPUs is limited by <code>max_concurrency / latency</code> of the transfers from L1D to the rest of the system, not by DRAM-controller bottlenecks. Each core has 10 Line-Fill Buffers (LFBs) which track outstanding requests to/from L1D. (And 16 "superqueue" entries which track lines to/from L2).</p>
<p>(Update: experiments show that Skylake probably has 12 LFBs, up from 10 in Broadwell. e.g. Fig7 in <a href="https://arxiv.org/pdf/1905.05726.pdf" rel="noreferrer">the ZombieLoad paper</a>, and other performance experiments including <a href="https://github.com/Kobzol/hardware-effects/issues/1#issuecomment-467629288" rel="noreferrer">@BeeOnRope's testing of multiple store streams</a>)</p>
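<p>As a rough back-of-the-envelope illustration of that <code>max_concurrency / latency</code> limit (the latency figure below is an assumed example, not a measured number for any specific CPU):</p>
<pre><code># Single-thread bandwidth ceiling ~= outstanding lines * line size / latency.
line_bytes = 64      # size of one cache line
lfbs = 10            # outstanding L1D misses per core (10-12 depending on uarch)
latency_ns = 80.0    # assumed round-trip latency to DRAM, for illustration only
ceiling_gb_per_s = lfbs * line_bytes / latency_ns   # bytes/ns == GB/s
print(f"~{ceiling_gb_per_s:.0f} GB/s per thread")   # ~8 GB/s with these numbers
</code></pre>
<p>With the same concurrency but a longer latency (as on the big Xeons discussed below), the ceiling drops proportionally.</p>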
<hr>
<p><strong>Intel's many-core chips have higher latency to L3 / memory than quad-core or dual-core desktop / laptop chips, so <em>single-threaded</em> memory bandwidth is actually much worse</strong> on a big Xeon, even though the max aggregate bandwidth with many threads is much better. They have many more hops on the ring bus that connects cores, memory controllers, and the System Agent (PCIe and so on).</p>
<p>SKX (Skylake-server / AVX512, including the i9 "high-end desktop" chips) is really bad for this: L3 / memory latency is significantly higher than for Broadwell-E / Broadwell-EP, so single-threaded bandwidth is even worse than on a Broadwell with a similar core count. (SKX uses a mesh instead of a ring bus because that scales better, <a href="https://www.anandtech.com/show/11550/the-intel-skylakex-review-core-i9-7900x-i7-7820x-and-i7-7800x-tested/5" rel="noreferrer">see this for details on both</a>. But apparently the constant factors are bad in the new design; maybe future generations will have better L3 bandwidth/latency for small / medium core counts. The private per-core L2 is bumped up to 1MiB though, so maybe L3 is intentionally slow to save power.)</p>
<p>(Skylake-client (SKL) like in the question, and later quad/hex-core desktop/laptop chips like Kaby Lake and Coffee Lake, still use the simpler ring-bus layout. Only the server chips changed. We don't yet know for sure what Ice Lake client will do.)</p>
<hr>
<p>A quad or dual core chip only needs a couple threads (especially if the cores + uncore (L3) are clocked high) to saturate its memory bandwidth, and a Skylake with fast DDR4 dual channel has quite a lot of bandwidth.</p>
<p>For more about this, see the Latency-bound Platforms section of <a href="https://stackoverflow.com/questions/43343231/enhanced-rep-movsb-for-memcpy/43574756#43574756">this answer</a> about x86 memory bandwidth. (And read the other parts for memcpy/memset with SIMD loops vs. <code>rep movs/rep stos</code>, and NT stores vs. regular RFO stores, and more.)</p>
<p>Also related: <a href="https://stackoverflow.com/questions/8126311/what-every-programmer-should-know-about-memory/47714514#47714514">What Every Programmer Should Know About Memory?</a> (2017 update on what's still true and what's changed in that excellent article from 2007).</p> | 2017-12-13 06:58:29.780000+00:00 | 2019-05-25 20:41:01.587000+00:00 | 2019-05-25 20:41:01.587000+00:00 | null | 39,260,020 | <p>We've got a simple memory throughput benchmark. All it does is memcpy repeatedly for a large block of memory.</p>
<p>Looking at the results (compiled for 64-bit) on a few different machines, Skylake machines do significantly better than Broadwell-E, keeping OS (Win10-64), processor speed, and RAM speed (DDR4-2133) the same. We're not talking a few percentage points, <strong>but rather a factor of about 2</strong>. Skylake is configured dual-channel, and the results for Broadwell-E don't vary for dual/triple/quad-channel.</p>
<p>Any ideas why this might be happening? The code that follows is compiled in Release in VS2015, and reports average time to complete each memcpy at:</p>
<p><strong>64-bit: 2.2ms for Skylake vs 4.5ms for Broadwell-E</strong></p>
<p><strong>32-bit: 2.2ms for Skylake vs 3.5ms for Broadwell-E</strong>.</p>
<p>We can get greater memory throughput on a quad-channel Broadwell-E build by utilizing multiple threads, and that's nice, but to see such a drastic difference for single-threaded memory access is frustrating. <strong>Any thoughts on why the difference is so pronounced?</strong></p>
<p>We've also used various benchmarking software, and they validate what this simple example shows - single-threaded memory throughput is way better on Skylake.</p>
<pre><code>#include <memory>
#include <Windows.h>
#include <iostream>
//Prevent the memcpy from being optimized out of the for loop
_declspec(noinline) void MemoryCopy(void *destinationMemoryBlock, void *sourceMemoryBlock, size_t size)
{
memcpy(destinationMemoryBlock, sourceMemoryBlock, size);
}
int main()
{
const int SIZE_OF_BLOCKS = 25000000;
const int NUMBER_ITERATIONS = 100;
void* sourceMemoryBlock = malloc(SIZE_OF_BLOCKS);
void* destinationMemoryBlock = malloc(SIZE_OF_BLOCKS);
LARGE_INTEGER Frequency;
QueryPerformanceFrequency(&Frequency);
while (true)
{
LONGLONG total = 0;
LONGLONG max = 0;
LARGE_INTEGER StartingTime, EndingTime, ElapsedMicroseconds;
for (int i = 0; i < NUMBER_ITERATIONS; ++i)
{
QueryPerformanceCounter(&StartingTime);
MemoryCopy(destinationMemoryBlock, sourceMemoryBlock, SIZE_OF_BLOCKS);
QueryPerformanceCounter(&EndingTime);
ElapsedMicroseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
ElapsedMicroseconds.QuadPart *= 1000000;
ElapsedMicroseconds.QuadPart /= Frequency.QuadPart;
total += ElapsedMicroseconds.QuadPart;
max = max(ElapsedMicroseconds.QuadPart, max);
}
std::cout << "Average is " << total*1.0 / NUMBER_ITERATIONS / 1000.0 << "ms" << std::endl;
std::cout << "Max is " << max / 1000.0 << "ms" << std::endl;
}
getchar();
}
</code></pre> | 2016-08-31 22:32:31.127000+00:00 | 2019-05-25 20:41:01.587000+00:00 | 2016-09-01 16:20:09.427000+00:00 | performance|x86|benchmarking|intel|cpu-architecture | ['https://arxiv.org/pdf/1905.05726.pdf', 'https://github.com/Kobzol/hardware-effects/issues/1#issuecomment-467629288', 'https://www.anandtech.com/show/11550/the-intel-skylakex-review-core-i9-7900x-i7-7820x-and-i7-7800x-tested/5', 'https://stackoverflow.com/questions/43343231/enhanced-rep-movsb-for-memcpy/43574756#43574756', 'https://stackoverflow.com/questions/8126311/what-every-programmer-should-know-about-memory/47714514#47714514'] | 5 |
62,440,333 | <p>I explain how to do it in <a href="https://medium.com/@estebanuri/real-time-face-recognition-with-android-tensorflow-lite-14e9c6cc53a5" rel="nofollow noreferrer">this article</a>. I used TensorFlow Lite with a <a href="https://arxiv.org/pdf/1804.07573.pdf" rel="nofollow noreferrer">MobileFaceNet</a> implementation, achieving very accurate results at surprisingly high speed.</p>
<p>You'll find the <strong>source code</strong> and an <strong>APK</strong> in <a href="https://github.com/estebanuri/face_recognition" rel="nofollow noreferrer">this repo</a></p>
<p><a href="https://i.stack.imgur.com/PTTAw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PTTAw.jpg" alt="enter image description here"></a></p> | 2020-06-18 00:13:49.470000+00:00 | 2020-06-18 00:13:49.470000+00:00 | null | null | 29,901,643 | <p>i'm new to developing android apps in general.
I'm trying to create an application that, given an image, detects faces and gives me the eye locations and other info.</p>
<p>I've done some research and found some options, such as the Android FaceDetector API and OpenCV.</p>
<p>Could anyone give me some advice on how to make an app like this, or send me a link with any related info? All help would be great!</p>
<p>Thanks, Daniel.</p> | 2015-04-27 16:52:54.563000+00:00 | 2020-07-08 22:27:03.683000+00:00 | 2015-04-28 08:53:45.270000+00:00 | android|face-detection | ['https://medium.com/@estebanuri/real-time-face-recognition-with-android-tensorflow-lite-14e9c6cc53a5', 'https://arxiv.org/pdf/1804.07573.pdf', 'https://github.com/estebanuri/face_recognition', 'https://i.stack.imgur.com/PTTAw.jpg'] | 4 |