a_id (int64, 7.84k–73.8M) | a_body (string, lengths 61–33k) | a_creation_date (string, lengths 25–32) | a_last_activity_date (string, lengths 25–32) | a_last_edit_date (string, lengths 25–32, nullable ⌀) | a_tags (float64) | q_id (int64, 826–73.8M) | q_body (string, lengths 61–29.9k) | q_creation_date (string, lengths 25–32) | q_last_activity_date (string, lengths 25–32) | q_last_edit_date (string, lengths 25–32, nullable ⌀) | q_tags (string, lengths 1–103) | _arxiv_links (string, lengths 2–6.69k) | _n_arxiv_links (int64, 0–94) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
44,543,481 | <p>As Oliver mentioned, in order to use beam search in the training procedure we have to use <strong>beam search optimization</strong>, which is described in the paper <a href="https://arxiv.org/abs/1606.02960" rel="noreferrer">Sequence-to-Sequence Learning as Beam-Search Optimization</a>.</p>
<p>We can't use beam search in the training procedure with the current loss function, because that loss is a log loss taken at each time step, which is a greedy approach. This is also clearly explained in the paper <a href="https://arxiv.org/pdf/1409.3215v1.pdf%3B" rel="noreferrer">Sequence to Sequence Learning with Neural Networks</a>; Section 3.2 describes this case in detail.</p>
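<p>To make the "per-time-step log loss" point concrete, here is a minimal, framework-agnostic sketch (my own illustration, not code from either paper) of the standard teacher-forced training objective; note that no beam search appears anywhere in it:</p>
<pre><code># hedged sketch: the standard seq2seq training loss is a sum of per-step log losses
import numpy as np

def per_step_log_loss(step_probs, target_ids):
    # step_probs: one predicted vocabulary distribution per time step (teacher forcing)
    # target_ids: the gold token id at each time step
    return -sum(np.log(p[t]) for p, t in zip(step_probs, target_ids))
</code></pre>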
<p><a href="https://i.stack.imgur.com/6IqKL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6IqKL.png" alt="enter image description here"></a> </p>
<p><strong><em>"where
S
is the training set. Once training is complete, we produce tr
anslations by finding the most
likely translation according to the LSTM:"</em></strong></p>
<p>So the original seq2seq architecture uses beam search only at test time. If we want to use beam search at training time, we have to use a different loss and optimization method, as in the paper. </p> | 2017-06-14 11:27:57.010000+00:00 | 2017-06-14 11:27:57.010000+00:00 | null | null | 44,228,271 | <p>If we use beam search in a seq2seq model it will give better results. There are several TensorFlow implementations.
But with the softmax function in each cell you can't use beam search in the training process. So is there any other modified optimization function when using beam search? </p> | 2017-05-28 14:17:47.550000+00:00 | 2020-06-28 02:51:21.777000+00:00 | null | tensorflow|deep-learning | ['https://arxiv.org/abs/1606.02960', 'https://arxiv.org/pdf/1409.3215v1.pdf%3B', 'https://i.stack.imgur.com/6IqKL.png'] | 3 |
44,251,631 | <p>Sequence-to-Sequence Learning as Beam-Search Optimization is a paper that describes the steps necessary to use beam search in the training process.
<a href="https://arxiv.org/abs/1606.02960" rel="nofollow noreferrer">https://arxiv.org/abs/1606.02960</a></p>
<p>The following issue contains a script that can perform the beam search; however, it does not contain any of the training logic:
<a href="https://github.com/tensorflow/tensorflow/issues/654" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/654</a></p> | 2017-05-30 00:02:27.110000+00:00 | 2017-05-30 00:02:27.110000+00:00 | null | null | 44,228,271 | <p>If we use beam search in seq2seq model it will give more proper results. There are several tensorflow implementations.
But with the softmax function in each cell you can't use beam search in the training process. So is there any other modified optimization function when using beam search? </p> | 2017-05-28 14:17:47.550000+00:00 | 2020-06-28 02:51:21.777000+00:00 | null | tensorflow|deep-learning | ['https://arxiv.org/abs/1606.02960', 'https://github.com/tensorflow/tensorflow/issues/654'] | 2 |
60,764,998 | <p>See <a href="https://doi.org/10.1145/3009909" rel="nofollow noreferrer">https://doi.org/10.1145/3009909</a> for a careful analysis of the number of random bits required to generate a random permutation. (It's open-access, but it's not easy reading! Bottom line: if carefully implemented, all of the usual methods for generating random permutations are efficient in their use of random bits.)</p>
<p>And... if your goal is to generate a random permutation rapidly for large N, I'd suggest you try the MergeShuffle algorithm. An <a href="https://arxiv.org/abs/1508.03167" rel="nofollow noreferrer">article published in 2015</a> claimed a factor-of-two speedup over Fisher-Yates in both parallel and sequential implementations, and a significant speedup in sequential computations over the other standard algorithm they tested (Rao-Sandelius). </p>
<p>An implementation of MergeShuffle (and of the usual Fisher-Yates and Rao-Sandelius algorithms) is available at <a href="https://github.com/axel-bacher/mergeshuffle" rel="nofollow noreferrer">https://github.com/axel-bacher/mergeshuffle</a>. But caveat emptor! The authors are theoreticians, not software engineers. They have published their experimental code to <a href="https://github.com/axel-bacher/mergeshuffle" rel="nofollow noreferrer">github</a> but aren't maintaining it. Someday, I imagine someone (perhaps you!) will add MergeShuffle to GSL. At present <code>gsl_ran_shuffle()</code> is an implementation of Fisher-Yates, see <a href="https://www.gnu.org/software/gsl/doc/html/randist.html?highlight=gsl_ran_shuffle" rel="nofollow noreferrer">https://www.gnu.org/software/gsl/doc/html/randist.html?highlight=gsl_ran_shuffle</a>.</p>
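<p>For reference, here is a minimal sketch of the Fisher-Yates shuffle itself (my own illustration): it is O(n) but consumes one random draw per element, which is exactly the cost that MergeShuffle and the paper above try to reduce.</p>
<pre><code>import random

def fisher_yates(a):
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)   # one random number per element
        a[i], a[j] = a[j], a[i]
    return a

print(fisher_yates(list(range(10))))
</code></pre>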
<p><a href="https://i.stack.imgur.com/7Idp6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Idp6.jpg" alt="Runtimes of Fisher-Yates, Rao-Sandelius, and MergeShuffle algorithms, in parallel and serial implementations"></a></p>
<p><a href="https://i.stack.imgur.com/cB4se.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cB4se.jpg" alt="Average number of random bits used by our implementation of various random permutation algorithms over 100 trials."></a></p> | 2020-03-19 21:01:12.473000+00:00 | 2020-03-19 21:01:12.473000+00:00 | null | null | 3,079,633 | <p>I would like to genrate a random permutation as fast as possible.
The problem: The knuth shuffle which is O(n) involves generating n random numbers.
Since generating random numbers is quite expensive.
I would like to find an O(n) function involving a fixed O(1) amount of random numbers.</p>
<p>I realize that this question has been asked before, but I did not see any relevant answers.</p>
<p>Just to stress a point: I am not looking for anything less than O(n), just an algorithm involving less generation of random numbers.</p>
<p>Thanks</p> | 2010-06-20 14:46:13.067000+00:00 | 2020-03-19 21:01:12.473000+00:00 | 2010-06-20 14:53:31.483000+00:00 | algorithm|random|permutation | ['https://doi.org/10.1145/3009909', 'https://arxiv.org/abs/1508.03167', 'https://github.com/axel-bacher/mergeshuffle', 'https://github.com/axel-bacher/mergeshuffle', 'https://www.gnu.org/software/gsl/doc/html/randist.html?highlight=gsl_ran_shuffle', 'https://i.stack.imgur.com/7Idp6.jpg', 'https://i.stack.imgur.com/cB4se.jpg'] | 7 |
57,323,359 | <p>A suitably constructed STFT spectrogram containing both magnitude and phase can be converted back to a time-domain waveform using the <a href="https://ccrma.stanford.edu/~jos/sasp/Overlap_Add_OLA_STFT_Processing.html" rel="nofollow noreferrer">Overlap Add method</a>. Important thing is that the spectrogram construction must have the <a href="https://ccrma.stanford.edu/~jos/sasp/Mathematical_Definition_STFT.html#19930" rel="nofollow noreferrer">constant-overlap-add</a> property.</p>
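<p>For the case where you keep the complex STFT (magnitude and phase), the round trip is direct and needs no phase estimation - a minimal sketch (my own illustration, using the same test file as below):</p>
<pre><code>import librosa

path = '436951__arnaud-coutancier__old-ladies-pets-and-train-02.flac'
y, sr = librosa.load(path, sr=22050)
D = librosa.stft(y, n_fft=1024, hop_length=256)   # complex STFT: magnitude and phase
# ... modify D here, keeping it complex ...
y_rec = librosa.istft(D, hop_length=256)          # overlap-add reconstruction
</code></pre>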
<p>It can be challenging to have your modifications correctly manipulate both magnitude and phase of a spectrogram. So sometimes the phase is discarded, and magnitude manipulated independently. In order to convert this back into a waveform one must then <em>estimate</em> phase information during reconstruction (phase reconstruction). This is a lossy process, and usually pretty computationally intensive. Established approaches use an iterative algorithm, usually a variation on Griffin-Lim. But there are now also <a href="https://arxiv.org/abs/1808.06719" rel="nofollow noreferrer">new methods</a> using Convolutional Neural Networks.</p>
<h2>Waveform from mel-spectrogram or MFCC using librosa</h2>
<p><a href="http://librosa.github.io/librosa/changelog.html#v0-7-0" rel="nofollow noreferrer">librosa version 0.7.0</a> contains a fast Griffin-Lim implementation as well as helper functions to invert a mel-spectrogram of MFCC.</p>
<p>Below is a code example. The input test file is found at <a href="https://github.com/jonnor/machinehearing/blob/ab7fe72807e9519af0151ec4f7ebfd890f432c83/handson/spectrogram-inversion/436951__arnaud-coutancier__old-ladies-pets-and-train-02.flac" rel="nofollow noreferrer">https://github.com/jonnor/machinehearing/blob/ab7fe72807e9519af0151ec4f7ebfd890f432c83/handson/spectrogram-inversion/436951__arnaud-coutancier__old-ladies-pets-and-train-02.flac</a></p>
<pre class="lang-py prettyprint-override"><code>import numpy
import librosa
import soundfile
# parameters
sr = 22050
n_mels = 128
hop_length = 512
n_iter = 32
n_mfcc = None # can try n_mfcc=20
# load audio and create Mel-spectrogram
path = '436951__arnaud-coutancier__old-ladies-pets-and-train-02.flac'
y, _ = librosa.load(path, sr=sr)
S = numpy.abs(librosa.stft(y, hop_length=hop_length, n_fft=hop_length*2))
mel_spec = librosa.feature.melspectrogram(S=S, sr=sr, n_mels=n_mels, hop_length=hop_length)
# optional, compute MFCCs in addition
if n_mfcc is not None:
mfcc = librosa.feature.mfcc(S=librosa.power_to_db(S), sr=sr, n_mfcc=n_mfcc)
mel_spec = librosa.feature.inverse.mfcc_to_mel(mfcc, n_mels=n_mels)
# Invert mel-spectrogram
S_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr, n_fft=hop_length*4)
y_inv = librosa.griffinlim(S_inv, n_iter=n_iter,
hop_length=hop_length)
soundfile.write('orig.wav', y, samplerate=sr)
soundfile.write('inv.wav', y_inv, samplerate=sr)
</code></pre>
<h2>Results</h2>
<p>The reconstructed waveform will have some artifacts.</p>
<p>The above example got a lot of repetitive noise, more than I expected. It was possible to reduce it quite a lot using the standard Noise Reduction algorithm in Audacity. </p>
<p><a href="https://i.stack.imgur.com/orlyg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/orlyg.jpg" alt="Spectrograms of original audio, reconstructed audio and reconstructed audio with noise removed"></a></p> | 2019-08-02 09:12:16.403000+00:00 | 2019-09-08 16:18:41.910000+00:00 | 2019-09-08 16:18:41.910000+00:00 | null | 56,931,834 | <p>As you might notice, i am really new to python and sound processing. I (hopefully) extracted FFT data from a wave file using python and the logfbank and mfcc function. (The logfbank seems to give the most promising data, mfcc output looked a bit weird for me).</p>
<p>In my program I want to change the logfbank/mfcc data and then create wave data from it (and write it into a file). I didn't really find any information about the process of creating wave data from FFT data. Does any of you have an idea of how to solve this? I would appreciate it a lot :)</p>
<p>This is my code so far:</p>
<pre><code>from scipy.io import wavfile
import numpy as np
from python_speech_features import mfcc, logfbank
rate, signal = wavfile.read('orig.wav')
fbank = logfbank(signal, rate, nfilt=100, nfft=1400).T
mfcc = mfcc(signal, rate, numcep=13, nfilt=26, nfft=1103).T
#magic data processing of fbank or mfcc here
#creating wave data and writing it back to a .wav file here
</code></pre> | 2019-07-08 09:20:22.910000+00:00 | 2019-09-08 16:18:41.910000+00:00 | 2019-08-02 10:42:47.267000+00:00 | python|signal-processing|fft|spectrogram|mfcc | ['https://ccrma.stanford.edu/~jos/sasp/Overlap_Add_OLA_STFT_Processing.html', 'https://ccrma.stanford.edu/~jos/sasp/Mathematical_Definition_STFT.html#19930', 'https://arxiv.org/abs/1808.06719', 'http://librosa.github.io/librosa/changelog.html#v0-7-0', 'https://github.com/jonnor/machinehearing/blob/ab7fe72807e9519af0151ec4f7ebfd890f432c83/handson/spectrogram-inversion/436951__arnaud-coutancier__old-ladies-pets-and-train-02.flac', 'https://i.stack.imgur.com/orlyg.jpg'] | 6 |
53,831,893 | <p>My pipeline (in general) for a similar task.</p>
<h3>I don't use NNs to solve the whole task</h3>
<p>First, I don't use NNs directly to label separate entities like "camera", "screen", etc. There are some good approaches which might be useful, like <a href="https://arxiv.org/abs/1506.03134" rel="nofollow noreferrer">pointer networks</a> or just <a href="http://akosiorek.github.io/ml/2017/10/14/visual-attention.html" rel="nofollow noreferrer">attention</a>, but they just didn't work in my case.<br />
I guess these architectures don't work well because there is a lot of noise in my dataset, e.g. "I'm so glad I bought this TV" and the like. Approx. 75% overall, and the rest of the data is not so clean either.</p>
<p>Because of this, I do some additional actions:</p>
<ol>
<li>Split sentences into chunks (<em>sometimes</em> they contain the desired entities)</li>
<li>Label these chunks by hand as "non-useful" (e.g. "I'm so happy/so upset" etc.) or useful: "good camera", "bad phone", etc.</li>
<li>Train classifier(s) to classify this data.</li>
</ol>
<h3>Details about a pipeline</h3>
<p><strong>How to "recognize" entities</strong><br />
I just used regexps and part-of-speech tags to split my data. But I work with a Russian-language dataset, so there's no good free syntax parser / library for Russian. If you work with English or another language that is well supported in the spacy or nltk libraries, you can use those for parsing to separate entities. Also, English grammar is quite strict in contrast to Russian, which probably makes your task easier.<br />
Anyway, try to start with regexes and parsing.</p>
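<p>A crude sketch of such a POS-based chunker for English (my own illustration; the example sentence is made up, and it assumes the nltk tokenizer and tagger models are downloaded):</p>
<pre><code>import nltk  # assumes 'punkt' and 'averaged_perceptron_tagger' are available

def candidate_chunks(review):
    tagged = nltk.pos_tag(nltk.word_tokenize(review))
    # keep adjacent "adjective + noun" pairs such as "good camera"
    return [f"{w1} {w2}" for (w1, t1), (w2, t2) in zip(tagged, tagged[1:])
            if t1.startswith('JJ') and t2.startswith('NN')]

print(candidate_chunks("I really liked this phone, it has a good camera but a bad battery."))
# expected something like: ['good camera', 'bad battery']
</code></pre>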
<p>Vocabularies with keywords for topics like "camera", "battery", ... are very helpful, too.</p>
<p>Another approach to recognizing entities is topic modelling - PLSA/LDA (<a href="https://radimrehurek.com/gensim/" rel="nofollow noreferrer">gensim</a> rocks), but it's hard to tune, imo, because there is a lot of noise in texts. You'll get a lot of topics like <code>{"happy", "glad", "bought", "family", ...}</code> and so on - but you can try topic modelling anyway.</p>
<p>You can also create a dataset with entity labels for each text and train an NN with attention, so you can recognize entities by high attention weights, but creating this dataset is very tedious.</p>
<p><strong>Create a dataset and train NNs</strong><br />
I start creating the dataset only when I've got acceptable quality for the "named entities" - because if you change this (foundational) part later, you will probably have to throw away the dataset and start from scratch again.</p>
<p>Better to decide once which labels you will use and then not change them - it's a critical part of the work.</p>
<p>Training NNs on such data is probably the easiest part of the work - just use any good classifier, as you would for whole texts. Even simpler, non-NN classifiers might be useful - use blending, bagging, etc.</p>
<p><strong>Possible troubles</strong><br />
There's a trap - some reviews / features are not so obvious for an NN classifier or even for a human, like "loud sound" or "gets very hot". Often they are context-dependent. So, I used a little help from our team to mark the dataset - each entry was labeled by a group of humans to get better quality. I also use <em>context labels</em> - the category of a product - adding context for each entity: "loud sound" carries a different sentiment for an audio system than for a washing machine, and the model can learn that. In most cases category labels are easily accessible through databases/web parsing.</p>
<p>Hope it helps, also I hope someone knows a better approach.</p> | 2018-12-18 11:19:45.843000+00:00 | 2018-12-18 11:19:45.843000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 53,829,938 | <p>I tagged a dataset of texts with independent categories. When running a CNN classifier in Keras, I receive an accuracy of > 90%. </p>
<p>My texts are customer reviews "I really liked the camera of this phone." Classes are e.g. "phone camera", "memory" etc.</p>
<p>What I am searching for is whether I can tag the sentences with categories that appear in them while the classifier marks the entities that indicate the class. Or more specifically: How can I extract those parts of the input sentence that made a CNN network in Keras opt (i.e. classify) for 1, 2 or more categories?</p> | 2018-12-18 09:29:16.103000+00:00 | 2018-12-18 13:43:44.717000+00:00 | 2018-12-18 13:43:44.717000+00:00 | python|keras|neural-network|conv-neural-network|multilabel-classification | ['https://arxiv.org/abs/1506.03134', 'http://akosiorek.github.io/ml/2017/10/14/visual-attention.html', 'https://radimrehurek.com/gensim/'] | 3 |
45,426,747 | <p>A distributed transaction can include records from multiple classes/clusters. The protocol used is an optimistic 2-phase commit, very similar to the one used in Google BigTable (<a href="https://arxiv.org/html/1106.3325" rel="nofollow noreferrer">https://arxiv.org/html/1106.3325</a>). The consensus is based on a <code>writeQuorum</code> that is a majority by default, but can be relaxed (eventual consistency) or increased (=all) to have no dirty reads between servers.</p>
<p><a href="http://orientdb.com/docs/last/Distributed-Architecture.html#distributed-transactions" rel="nofollow noreferrer">http://orientdb.com/docs/last/Distributed-Architecture.html#distributed-transactions</a></p>
<p>Seems to be describing transactionally updating replicas of a cluster using a consensus protocol, but does not describe anything about updating multiple clusters on multiple servers. </p>
<p>Are distributed transactions in OrientDB limited to executing on a single cluster, or can ACID transactions be executed on multiple clusters on multiple servers? If so, what is the mechanism that OrientDB uses to accomplish this? </p> | 2017-03-16 21:37:52.677000+00:00 | 2017-07-31 23:27:25.500000+00:00 | null | transactions|orientdb | ['https://arxiv.org/html/1106.3325'] | 1 |
43,938,588 | <p>When training the generator step, you can think of the network as involving the discriminator too, but for the backpropagation you only consider the generator weights. A good explanation of this is found <a href="http://www.rricard.me/machine/learning/generative/adversarial/networks/2017/04/05/gans-part1.html" rel="noreferrer">here</a>.</p>
<p>As mentioned in <a href="https://arxiv.org/abs/1406.2661" rel="noreferrer">original paper</a>, the Discriminator cost is:</p>
<p><a href="https://i.stack.imgur.com/hsGm2.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/hsGm2.gif" alt="enter image description here"></a></p>
<p>And the generator cost is:</p>
<p><a href="https://i.stack.imgur.com/G7jMP.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/G7jMP.gif" alt="enter image description here"></a></p>
<p>Of course, you don't need to calculate it by hand; TensorFlow already handles it. To do the whole process, you can implement the following:</p>
<pre><code>G_sample = generator(z)
D_real = discriminator(X)
D_fake = discriminator(G_sample)
D_loss = tf.reduce_mean(-tf.log(D_real)-tf.log(1-D_fake))
G_loss = tf.reduce_mean(-tf.log(D_fake))
</code></pre>
<p>where D_real, D_fake and G_sample are the last layers of your network.
Then you can implement the training process in the standard way:</p>
<pre><code>D_solver = (tf.train.AdamOptimizer(learning_rate=0.0001,beta1=0.5)
.minimize(D_loss, var_list=theta_D))
G_solver = (tf.train.AdamOptimizer(learning_rate=0.0001,beta1=0.5)
.minimize(G_loss, var_list=theta_G))
</code></pre>
<p>And just run the solvers in a session.</p> | 2017-05-12 13:08:39.363000+00:00 | 2017-05-12 13:08:39.363000+00:00 | null | null | 43,622,771 | <p>I would like to build a <a href="https://arxiv.org/abs/1511.06434" rel="nofollow noreferrer">DCGAN</a> for MNIST by myself in TensorFlow. However, I'm struggling to find out how I should set up the loss function for the generator. In a <a href="https://github.com/jacobgil/keras-dcgan" rel="nofollow noreferrer">Keras DCGAN implementation</a> the author used a little "workaround" for this problem: he simply built 3 models. The generator (G), the discriminator (D) and third one, where he just combined G with D, while setting the train-ability of D to false there.</p>
<p>This way, he can feed D with real images + generated images to train D and train the G+D-combined model, because the loss of D is propagated to G, since D is not trainable in the G+D-combined model.</p>
<p>In TensorFlow, I've built G and D already. Training D is relatively simple, since I just need to combine a batch of real MNIST training images with generated ones and call the training op:</p>
<pre><code>session.run(D_train_op,
feed_dict={x: batch_x, y: batch_y})
</code></pre>
<p>The training op in this example is a binary <a href="https://www.tensorflow.org/api_docs/python/tf/losses/softmax_cross_entropy" rel="nofollow noreferrer">cross entropy</a>:</p>
<pre><code>tf.losses.softmax_cross_entropy(y, D_out)
</code></pre>
<p>...but how would I set up the loss function for G, when I do not have a "stacked" model, combining "G and D" to single, third model?</p>
<p>I know that I have to generate a batch of images with G, feed them into D and then I can obtain the loss of D...however, the output of G is of shape <code>(batch_size, 28, 28, 1)</code>. How would I set up a loss function for G by hand?</p>
<p>Without the "G and D"-combined model "workaround" for this, I have to propagate the loss of D, which has an output shape of <code>(batch_size, 1)</code> to the output layer of G.</p>
<p>If G would do some classification for example, this wouldn't be that hard to figure out...but G outputs images. Thus, I can not directly map the loss of D to the output layer of G.</p>
<p>Do I have to set up a third model combining G+D? Or is there a way to calculate the loss for G by hand?</p>
<p>Any help is highly appreciated :)</p> | 2017-04-25 23:54:32.903000+00:00 | 2017-05-12 13:08:39.363000+00:00 | null | tensorflow|neural-network|conv-neural-network|mnist|dcgan | ['http://www.rricard.me/machine/learning/generative/adversarial/networks/2017/04/05/gans-part1.html', 'https://arxiv.org/abs/1406.2661', 'https://i.stack.imgur.com/hsGm2.gif', 'https://i.stack.imgur.com/G7jMP.gif'] | 4 |
70,590,040 | <p>There are two main properties of transformers that make them so appealing compared to convolutions:</p>
<ol>
<li>A transformer is <em>permutation</em> equivariant. This makes transformers very useful for set predictions. For sequences and images where order does matter, positional encoding/embedding are used.</li>
<li>The receptive field of a transformer is the <em>entire</em> input (!) as opposed to the very limited receptive field of a convolution layer.</li>
</ol>
<p>See sec. 3 and fig. 3 in:<br />
<em>Shir Amir, Yossi Gandelsman, Shai Bagon and Tali Dekel</em> <a href="https://arxiv.org/pdf/2112.05814.pdf" rel="nofollow noreferrer"><strong>Deep ViT Features as Dense Visual Descriptors</strong></a> (arXiv 2021).</p> | 2022-01-05 08:50:55.997000+00:00 | 2022-01-05 08:50:55.997000+00:00 | null | null | 70,589,452 | <p>Today my teacher ask me a question: he said the CNN is use the translation invariance of the images or matrixs. So what is the properties Transformer uses ???</p> | 2022-01-05 07:54:25.510000+00:00 | 2022-01-05 08:50:55.997000+00:00 | null | conv-neural-network|transformer-model|self-attention | ['https://arxiv.org/pdf/2112.05814.pdf'] | 1 |
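<p>The first point above is easy to check numerically - a small self-contained sketch (my own illustration, not from the paper) showing that plain self-attention without positional encoding commutes with a permutation of its inputs:</p>
<pre><code>import numpy as np

def self_attention(X):                      # single-head, no projections
    A = X @ X.T / np.sqrt(X.shape[1])
    W = np.exp(A - A.max(axis=1, keepdims=True))
    W /= W.sum(axis=1, keepdims=True)       # row-wise softmax
    return W @ X

X = np.random.randn(5, 8)
perm = np.random.permutation(5)
assert np.allclose(self_attention(X)[perm], self_attention(X[perm]))
</code></pre>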
61,474,807 | <p>You can easily try this and see that you will get quite bad results. Even if you add some positional encoding to the input embeddings, the result will be pretty bad.</p>
<p>The order matters. Sentences:</p>
<ul>
<li>John loves Marry.</li>
<li>Marry loves John.</li>
</ul>
<p>indeed have a different meaning. Also, the order is not the only information you get from the encoder. The encoder does also input disambiguation: words can be homonymous such as "train" (see <a href="https://arxiv.org/pdf/1908.11771.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1908.11771.pdf</a>). Also, the probing of trained neural networks shows that the encoder develops a pretty abstract representation of the input sentence (see <a href="https://arxiv.org/pdf/1911.00317.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1911.00317.pdf</a>) and a large part of the translation actually already happens in the encoder (see <a href="https://arxiv.org/pdf/2003.09586.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2003.09586.pdf</a>).</p> | 2020-04-28 07:25:21.477000+00:00 | 2020-04-28 07:25:21.477000+00:00 | null | null | 61,466,789 | <p>I was wondering how useful the encoder's hidden state is for an attention network. When I looked into the structure of an attention model, this is what I found a model generally looks like:</p>
<ul>
<li>x: Input.</li>
<li>h: Encoder's hidden state which feeds forward to the next
encoder's hidden state.</li>
<li>s: Decoder's hidden state which has a
weighted sum of all the encoder's hidden states as input and feeds
forward to the next decoder's hidden state.</li>
<li>y: Output.</li>
</ul>
<p>With a process like translation, why is it important for the encoder's hidden states to feed forward or exist in the first place? We already know what the next x is going to be. Therefore, the order of the input isn't necessarily important for the order of the output, and neither is what has been memorized from the previous input, as the attention model looks at all inputs simultaneously. Couldn't you just use attention directly on the embedding of x?</p>
<p>Thank you!</p> | 2020-04-27 19:23:46.540000+00:00 | 2020-04-28 07:25:21.477000+00:00 | 2020-04-27 22:19:40.577000+00:00 | machine-learning|recurrent-neural-network|translate|attention-model | ['https://arxiv.org/pdf/1908.11771.pdf', 'https://arxiv.org/pdf/1911.00317.pdf', 'https://arxiv.org/pdf/2003.09586.pdf'] | 3 |
71,864,172 | <p>Wasn't the T5 model also trained on BoolQ which would make this difficult and kind of fishy to test/evaluate because the later test data would not really be unseen data for the model? You can see it listed in the <a href="https://huggingface.co/t5-base" rel="nofollow noreferrer">model card on huggingface</a> as well as Google's <a href="https://arxiv.org/abs/1905.10044" rel="nofollow noreferrer">original paper</a>.</p>
<p>What I do find strange is that giving the pretrained T5-base a question from the dataset <a href="https://huggingface.co/t5-base?text=question%3A%20is%20there%20a%20now%20you%20see%20me%203%20coming%20out%0A%09%09context%3A%20Now%20You%20See%20Me%20is%20a%20series%20of%20heist%20thriller%20film%20written%20by%20%0AEd%20Solomon%2C%20Boaz%20Yakin%2C%20and%20Edward%20Ricourt.%20They%20focus%20on%20the%20actions%20of%0A%20a%20team%20of%20illusionists%20named%20%22The%20Four%20Horsemen%22%20who%20pull%20off%20near%20%0Aimpossible%20heists.%20The%20series%20features%20an%20ensemble%20cast%20including%20Jesse%20%0AEisenberg%2C%20Mark%20Ruffalo%2C%20Woody%20Harrelson%2C%20Isla%20Fisher%2C%20Dave%20Franco%2C%20%0AMichael%20Caine%2C%20Lizzy%20Caplan%2C%20and%20Morgan%20Freeman.%20The%20first%20film%20was%20%0Areleased%20in%202013%2C%20while%20the%20second%20was%20released%20in%202016%2C%20and%20a%20third%20%0Afilm%20is%20currently%20in%20development%20and%20set%20to%20be%20released%20in%202019.%20The%20%0Aseries%20has%20received%20mixed%20reviews%20from%20critics%20and%20audiences%20and%20grossed%0A%20nearly%20%24700%20million%20worldwide." rel="nofollow noreferrer">does not yield the expected answer or answer format</a>. There is a fine-tuned version of t5 for BoolQ which gives <a href="https://huggingface.co/mrm8488/t5-small-finetuned-boolq?text=question%3A%20is%20there%20a%20now%20you%20see%20me%203%20coming%20out%0A%09%09context%3A%20Now%20You%20See%20Me%20is%20a%20series%20of%20heist%20thriller%20film%20written%20by%20%0AEd%20Solomon%2C%20Boaz%20Yakin%2C%20and%20Edward%20Ricourt.%20They%20focus%20on%20the%20actions%20of%0A%20a%20team%20of%20illusionists%20named%20%22The%20Four%20Horsemen%22%20who%20pull%20off%20near%20%0Aimpossible%20heists.%20The%20series%20features%20an%20ensemble%20cast%20including%20Jesse%20%0AEisenberg%2C%20Mark%20Ruffalo%2C%20Woody%20Harrelson%2C%20Isla%20Fisher%2C%20Dave%20Franco%2C%20%0AMichael%20Caine%2C%20Lizzy%20Caplan%2C%20and%20Morgan%20Freeman.%20The%20first%20film%20was%20%0Areleased%20in%202013%2C%20while%20the%20second%20was%20released%20in%202016%2C%20and%20a%20third%20%0Afilm%20is%20currently%20in%20development%20and%20set%20to%20be%20released%20in%202019.%20The%20%0Aseries%20has%20received%20mixed%20reviews%20from%20critics%20and%20audiences%20and%20grossed%0A%20nearly%20%24700%20million%20worldwide." rel="nofollow noreferrer">a more acceptable answer</a>. 
Same problem with the pretrained model for <a href="https://huggingface.co/t5-base?text=question%3A%20What%20does%20increased%20oxygen%20concentrations%20in%20the%20patient%E2%80%99s%20lungs%20displace%3F%20context%3A%20Hyperbaric%20%28high-pressure%29%20medicine%20uses%20special%20oxygen%20chambers%20to%20increase%20the%20partial%20pressure%20of%20O%202%20around%20the%20patient%20and%2C%20when%20needed%2C%20the%20medical%20staff.%20Carbon%20monoxide%20poisoning%2C%20gas%20gangrene%2C%20and%20decompression%20sickness%20%28the%20%E2%80%99bends%E2%80%99%29%20are%20sometimes%20treated%20using%20these%20devices.%20Increased%20O%202%20concentration%20in%20the%20lungs%20helps%20to%20displace%20carbon%20monoxide%20from%20the%20heme%20group%20of%20hemoglobin.%20Oxygen%20gas%20is%20poisonous%20to%20the%20anaerobic%20bacteria%20that%20cause%20gas%20gangrene%2C%20so%20increasing%20its%20partial%20pressure%20helps%20kill%20them.%20Decompression%20sickness%20occurs%20in%20divers%20who%20decompress%20too%20quickly%20after%20a%20dive%2C%20resulting%20in%20bubbles%20of%20inert%20gas%2C%20mostly%20nitrogen%20and%20helium%2C%20forming%20in%20their%20blood.%20Increasing%20the%20pressure%20of%20O%202%20as%20soon%20as%20possible%20is%20part%20of%20the%20treatment." rel="nofollow noreferrer">Question answering in the SQuAD format</a> even when using the exact example and format from the paper.</p>
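<p>For reference, this is roughly how the pretrained model can be queried with the BoolQ-style prompt linked above - a sketch (the truncated context and generation settings are my own choices, not from the model card):</p>
<pre><code>from transformers import T5Tokenizer, T5ForConditionalGeneration

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

prompt = ("question: is there a now you see me 3 coming out "
          "context: Now You See Me is a series of heist thriller films ...")
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_length=8)
print(tok.decode(out[0], skip_special_tokens=True))  # often not a clean True/False
</code></pre>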
<p>Which leads me to think the fine-tuning on question answering is unlike some other tasks not actually included in the released version of the model or at least does not seem to have enough of an effect for the model to remember how the task works. In which case fine-tuning on it (again/more) would make sense again.</p> | 2022-04-13 21:48:45.363000+00:00 | 2022-04-13 21:48:45.363000+00:00 | null | null | 71,861,922 | <p>I want to use the pre-trained T5 model <a href="https://huggingface.co/docs/transformers/model_doc/t5" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/model_doc/t5</a> on the task of Question Answering on the <a href="https://huggingface.co/datasets/boolq" rel="nofollow noreferrer">https://huggingface.co/datasets/boolq</a> knowing that my inputs will be the passage and the question and the output is the boolean true or false that is the answer for the question.</p>
<p>I have seen some people fine-tuning the model for this specific task. But I want to know if there is a way to do it with the pre-trained model to get some outputs and then compare them with the model after tuning.</p>
<p>Thanks!</p> | 2022-04-13 17:55:49.173000+00:00 | 2022-04-13 21:48:45.363000+00:00 | null | python|nlp|huggingface-transformers | ['https://huggingface.co/t5-base', 'https://arxiv.org/abs/1905.10044', 'https://huggingface.co/t5-base?text=question%3A%20is%20there%20a%20now%20you%20see%20me%203%20coming%20out%0A%09%09context%3A%20Now%20You%20See%20Me%20is%20a%20series%20of%20heist%20thriller%20film%20written%20by%20%0AEd%20Solomon%2C%20Boaz%20Yakin%2C%20and%20Edward%20Ricourt.%20They%20focus%20on%20the%20actions%20of%0A%20a%20team%20of%20illusionists%20named%20%22The%20Four%20Horsemen%22%20who%20pull%20off%20near%20%0Aimpossible%20heists.%20The%20series%20features%20an%20ensemble%20cast%20including%20Jesse%20%0AEisenberg%2C%20Mark%20Ruffalo%2C%20Woody%20Harrelson%2C%20Isla%20Fisher%2C%20Dave%20Franco%2C%20%0AMichael%20Caine%2C%20Lizzy%20Caplan%2C%20and%20Morgan%20Freeman.%20The%20first%20film%20was%20%0Areleased%20in%202013%2C%20while%20the%20second%20was%20released%20in%202016%2C%20and%20a%20third%20%0Afilm%20is%20currently%20in%20development%20and%20set%20to%20be%20released%20in%202019.%20The%20%0Aseries%20has%20received%20mixed%20reviews%20from%20critics%20and%20audiences%20and%20grossed%0A%20nearly%20%24700%20million%20worldwide.', 'https://huggingface.co/mrm8488/t5-small-finetuned-boolq?text=question%3A%20is%20there%20a%20now%20you%20see%20me%203%20coming%20out%0A%09%09context%3A%20Now%20You%20See%20Me%20is%20a%20series%20of%20heist%20thriller%20film%20written%20by%20%0AEd%20Solomon%2C%20Boaz%20Yakin%2C%20and%20Edward%20Ricourt.%20They%20focus%20on%20the%20actions%20of%0A%20a%20team%20of%20illusionists%20named%20%22The%20Four%20Horsemen%22%20who%20pull%20off%20near%20%0Aimpossible%20heists.%20The%20series%20features%20an%20ensemble%20cast%20including%20Jesse%20%0AEisenberg%2C%20Mark%20Ruffalo%2C%20Woody%20Harrelson%2C%20Isla%20Fisher%2C%20Dave%20Franco%2C%20%0AMichael%20Caine%2C%20Lizzy%20Caplan%2C%20and%20Morgan%20Freeman.%20The%20first%20film%20was%20%0Areleased%20in%202013%2C%20while%20the%20second%20was%20released%20in%202016%2C%20and%20a%20third%20%0Afilm%20is%20currently%20in%20development%20and%20set%20to%20be%20released%20in%202019.%20The%20%0Aseries%20has%20received%20mixed%20reviews%20from%20critics%20and%20audiences%20and%20grossed%0A%20nearly%20%24700%20million%20worldwide.', 'https://huggingface.co/t5-base?text=question%3A%20What%20does%20increased%20oxygen%20concentrations%20in%20the%20patient%E2%80%99s%20lungs%20displace%3F%20context%3A%20Hyperbaric%20%28high-pressure%29%20medicine%20uses%20special%20oxygen%20chambers%20to%20increase%20the%20partial%20pressure%20of%20O%202%20around%20the%20patient%20and%2C%20when%20needed%2C%20the%20medical%20staff.%20Carbon%20monoxide%20poisoning%2C%20gas%20gangrene%2C%20and%20decompression%20sickness%20%28the%20%E2%80%99bends%E2%80%99%29%20are%20sometimes%20treated%20using%20these%20devices.%20Increased%20O%202%20concentration%20in%20the%20lungs%20helps%20to%20displace%20carbon%20monoxide%20from%20the%20heme%20group%20of%20hemoglobin.%20Oxygen%20gas%20is%20poisonous%20to%20the%20anaerobic%20bacteria%20that%20cause%20gas%20gangrene%2C%20so%20increasing%20its%20partial%20pressure%20helps%20kill%20them.%20Decompression%20sickness%20occurs%20in%20divers%20who%20decompress%20too%20quickly%20after%20a%20dive%2C%20resulting%20in%20bubbles%20of%20inert%20gas%2C%20mostly%20nitrogen%20and%20helium%2C%20forming%20in%20their%20blood.%20Increasing%20the%20pressure%20of%20O%202%20as%20soon%20as%20possible%20is%20part%20of%20the%20treatment.'] | 5 |
57,678,923 | <p>Only you can decide whether a result is adequate for your purposes. These kinds of scores are most meaningful when comparing one model against another, as a rough guide as to whether other changes – new parameters, new preprocessing, more/different data – are helping or hurting. </p>
<p>You could look at the paper introducing the evaluation dataset you're using to see how to interpret the scores:</p>
<p><a href="https://arxiv.org/abs/1408.3456v1" rel="nofollow noreferrer">https://arxiv.org/abs/1408.3456v1</a></p>
<p>You could also download some off-the-shelf word-vector sets, checking their evaluation scores, to compare with yours. </p> | 2019-08-27 16:31:30.563000+00:00 | 2019-08-27 16:31:30.563000+00:00 | null | null | 57,674,722 | <p>I have evaluated my model with SimLex-999 and wordsim353 but i don't know if the result is ok or not?<a href="https://i.stack.imgur.com/U0fi4.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U0fi4.jpg" alt="please see the result"></a></p>
<p>wordsim353 result </p>
<pre><code> Pearson correlation coefficient against C:\ProgramData\Anaconda3\lib\site-packages\gensim\test\test_data\wordsim353.tsv: 0.4895
2019-08-27 08:30:06,655 : INFO : Spearman rank-order correlation coefficient against C:\ProgramData\Anaconda3\lib\site-packages\gensim\test\test_data\wordsim353.tsv: 0.4799
2019-08-27 08:30:06,656 : INFO : Pairs with unknown words ratio: 7.1%
((0.4894983099817645, 3.6324947252392034e-21), SpearmanrResult(correlation=0.4798812637344527, pvalue=2.6991867797169835e-20), 7.0821529745042495)
</code></pre>
<p>SimLex-999 result</p>
<pre><code> 2019-08-27 15:43:13,000 : INFO : Pearson correlation coefficient against C:\ProgramData\Anaconda3\lib\site-packages\gensim\test\test_data\simlex999.txt: 0.3138
2019-08-27 15:43:13,001 : INFO : Spearman rank-order correlation coefficient against C:\ProgramData\Anaconda3\lib\site-packages\gensim\test\test_data\simlex999.txt: 0.2992
2019-08-27 15:43:13,002 : INFO : Pairs with unknown words ratio: 1.2%
((0.31381174440491943, 5.375150591505246e-24), SpearmanrResult(correlation=0.29915866880742126, pvalue=7.433265418805336e-22), 1.2012012012012012)
</code></pre> | 2019-08-27 12:29:01.777000+00:00 | 2019-08-27 16:31:30.563000+00:00 | 2019-08-27 12:36:55.067000+00:00 | word2vec | ['https://arxiv.org/abs/1408.3456v1'] | 1 |
65,612,364 | <p>As I wrote in the comments, if you have a univariate problem you should use <code>cumulants(dm,3,1)</code> as the cumulants are calculated using tensors and the tensors are saved in a block structure, where the blocks are of size bxb, i.e. the third argument in the function call. However, if you have only one column, the size of the tensors will be 1, so that it doesn't make sense to save it in a 2x2 block.</p>
<p>To access the cumulants in Array form you have to convert them first. This is done by <code>Array(cumulants(data, nc, b)[c])</code>, where nc is the number of cumulants you want to calculate, b is the block size (for efficient storage of the tensors), and c is the cumulant you need.
Summing up:</p>
<pre><code>using Cumulants
# univariate data
unidata = rand(1000,1)
uc = cumulants(unidata, 3, 1)
Array(uc[1])
#1-element Array{Float64,1}:
# 0.48772026299259374
Array(uc[2])
#1×1 Array{Float64,2}:
# 0.0811428357438324
Array(uc[3])
#[:, :, 1] =
# 0.0008653019738796724
# multivariate data
multidata = rand(1000,3)
mc = cumulants(multidata, 3, 2)
Array(mc[1])
#3-element Array{Float64,1}:
# 0.5024511157116442
# 0.4904838734508787
# 0.48286680648519215
Array(mc[2])
#3×3 Array{Float64,2}:
# 0.0834021 -0.00368562 -0.00151614
# -0.00368562 0.0835084 0.00233202
# -0.00151614 0.00233202 0.0808521
Array(mc[3])
# [:, :, 1] =
# -0.000506926 -0.000763061 -0.00183751
# -0.000763061 -0.00104804 -0.00117227
# -0.00183751 -0.00117227 0.00112968
#
# [:, :, 2] =
# -0.000763061 -0.00104804 -0.00117227
# -0.00104804 0.000889305 -0.00116559
# -0.00117227 -0.00116559 -0.000106866
#
# [:, :, 3] =
# -0.00183751 -0.00117227 0.00112968
# -0.00117227 -0.00116559 -0.000106866
# 0.00112968 -0.000106866 0.00131965
</code></pre>
<p>The optimal size of the blocks can be found in their software paper (<a href="https://arxiv.org/pdf/1701.05420.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1701.05420.pdf</a>), where they write (for proper latex formatting have a look at the paper):</p>
<blockquote>
<p>5.2.1. <strong>The optimal size of blocks.</strong>
The number of coefficients required to store a super-symmetric tensor of order d and n dimensions is equal to (d+n−1 over n). The storage of tensor disregarding the super-symmetry requires n^d coefficients. The block structure introduced in [49] uses more than minimal amount of memory but allows for easier further processing of super-symmetric tensors.If we store the super-symmetric tensor in the block structure, the block size parameter b appears. In our implementation in order to store a super-symmetric tensor in the block structure we need, assuming n|b, an array of (n over b)^d pointers to blocks and an array of the same size of flags that contain the information if a pointer points to a valid block. Recall that diagonal blocks contain redundant information.Therefore on the one hand, the smaller the value of b, the less redundant elements on diagonals of the block structure. On the other hand, the larger the value of b,the smaller the number of blocks, the smaller the blocks’ operation overhead, and the fewer the number of pointers pointing to empty blocks. For detailed discussion of memory usage see [49]. The analysis of the influence of the parameter b on the computational time of cumulants for some parameters are presented in Fig. 2. We obtain the shortest computation time for b = 2 in almost all test cases, and this value will be set as default and used in all efficiency tests. Note that for b = 1we loose all the memory savings.</p>
</blockquote> | 2021-01-07 12:26:07.303000+00:00 | 2021-01-07 12:26:07.303000+00:00 | null | null | 65,602,034 | <p>I cannot for the life of me figure out how to use Cumulants.jl to get moments or cumulants from some data. I find the docs (<a href="https://juliahub.com/docs/Cumulants/Vrq25/1.0.4/" rel="nofollow noreferrer">https://juliahub.com/docs/Cumulants/Vrq25/1.0.4/</a>) completely over my head.</p>
<p>Suppose I have a vector of some data e.g.:</p>
<pre><code>using Distributions
d = rand(Exponential(1), 1000)
</code></pre>
<p>The documentation suggests, so far as I can understand it, that <code>cumulants(d, 3)</code> should return the first three cumulants. The function is defined like so:</p>
<pre><code>cumulants(data::Matrix{T}, m::Int = 4, b::Int = 2) where T<: AbstractFloat
</code></pre>
<p>a Matrix in Julia is, so far as I understand, a 2D array. So I convert my data to a 2D array:</p>
<pre><code>dm = reshape(d, length(d), 1)
</code></pre>
<p>But I get:</p>
<pre><code>julia> cumulants(dm,3)
ERROR: DimensionMismatch("bad block size 2 > 1")
</code></pre>
<p>My question concisely: how do I use Cumulants.jl to get the first <code>m</code> cumulants and the first <code>m</code> moments from some simulated data?</p>
<p>Thanks!</p>
<p>EDIT: In the above example, <code>c = cumulants(dm,3,1)</code> as suggested in a comment will give, for <code>c</code>:</p>
<pre><code>3-element Array{SymmetricTensors.SymmetricTensor{Float64,N} where N,1}:
SymmetricTensors.SymmetricTensor{Float64,1}(Union{Nothing, Array{Float64,1}}[[1.0122452678071678]], 1, 1, 1, true)
SymmetricTensors.SymmetricTensor{Float64,2}(Union{Nothing, Array{Float64,2}}[[1.0336298356976195]], 1, 1, 1, true)
SymmetricTensors.SymmetricTensor{Float64,3}(Union{Nothing, Array{Float64,3}}[[2.5438037582591146]], 1, 1, 1, true)
</code></pre>
<p>I find that I can access the first, second, and third cumulants by:</p>
<pre><code>c[1][1]
c[2][1,1]
c[3][1,1,1]
</code></pre>
<p>Which I arrived at essentially by guessing. I have no idea why this nutty output format exists. I still cannot figure out how to get the first <code>m</code> cumulants as a vector easily.</p> | 2021-01-06 19:14:41.857000+00:00 | 2021-01-07 20:26:45.833000+00:00 | 2021-01-07 04:01:10.700000+00:00 | julia | ['https://arxiv.org/pdf/1701.05420.pdf'] | 1 |
59,874,553 | <p>BERT provides contextual representation, i.e., a joint representation of <em>a word and the context</em>. Unlike non-contextual embeddings, it is not as clear what the closest word should mean.</p>
<p>A good approximation of close words is certainly the prediction that BERT does as a (masked) language model. It basically says what similar words could be in the same context. However, this is not in the <a href="https://bert-as-service.readthedocs.io/en/latest/source/client.html#module-client" rel="noreferrer">client API of bert-as-service</a>. You can either implement the prediction layer yourself (I think it is just multiplication of the last layer with the embedding matrix + softmax, but maybe there is some additional projection going on) or use a different implementation such as <a href="https://github.com/huggingface/transformers" rel="noreferrer">Hugging Face's Transformers</a>.</p>
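<p>A rough sketch of that second route (using Hugging Face's Transformers rather than bert-as-service; the model name and the example sentence are just placeholders):</p>
<pre><code>import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")

inputs = tok("The cat sat on the [MASK].", return_tensors="pt")
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = mlm(**inputs).logits
top = logits[0, mask_pos].topk(5).indices.tolist()
print(tok.convert_ids_to_tokens(top))  # the in-context "closest words"
</code></pre>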
<p>The most theoretically correct (and computationally expensive) solution would be running BERT on a large dataset, storing pairs of words and their corresponding contextual representations, and then using e.g. <a href="https://github.com/facebookresearch/faiss" rel="noreferrer">faiss</a> to retrieve nearest neighbors that also include the context, similarly to <a href="https://arxiv.org/abs/1911.00172" rel="noreferrer">nearest neighbors language models</a>.</p>
<pre><code>>>> your_word_vector = array([-0.00449447, -0.00310097, 0.02421786, ...], dtype=float32)
>>> model.most_similar(positive=[your_word_vector], topn=1))
</code></pre>
<p>So far, I have been able to generate contextual word embedding using <a href="https://github.com/hanxiao/bert-as-service#getting-elmo-like-contextual-word-embedding" rel="noreferrer">bert-as-service</a> but can't figure out how to get closest words to this embedding. I have used pre-trained bert model (uncased_L-12_H-768_A-12) and haven't done any fine tuning.</p> | 2020-01-22 18:00:05.313000+00:00 | 2020-03-22 20:59:20.450000+00:00 | null | nlp|word-embedding|bert-language-model | ['https://bert-as-service.readthedocs.io/en/latest/source/client.html#module-client', 'https://github.com/huggingface/transformers', 'https://github.com/facebookresearch/faiss', 'https://arxiv.org/abs/1911.00172'] | 4 |
29,858,901 | <p>An edge whose deletion splits a connected component into two is called a <strong><a href="http://en.wikipedia.org/wiki/Bridge_%28graph_theory%29" rel="nofollow">bridge</a></strong>, and there are linear-time algorithms for finding all the bridges in a graph (usually based on depth-first search). The Wikipedia article lists one of them (due to Tarjan) as an example. <a href="http://arxiv.org/pdf/1209.0700.pdf" rel="nofollow">This paper</a> also gives a simple algorithm for listing all the bridges in a graph, and it seems to be a bit simpler than Tarjan's algorithm.</p>
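<p>For reference, a compact sketch of the usual DFS "low-link" approach (my own illustration, not the exact algorithm from either reference): an edge (u, v) is a bridge exactly when the DFS subtree under v cannot reach u or any ancestor of u through some other edge.</p>
<pre><code># language-agnostic sketch; a C++ version would follow the same structure
def find_bridges(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    disc, low, bridges, timer = [-1] * n, [0] * n, [], 0

    def dfs(u, parent):
        nonlocal timer
        disc[u] = low[u] = timer
        timer += 1
        for v in adj[u]:
            if v == parent:
                continue
            if disc[v] != -1:                 # back edge
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:          # v's subtree can't reach above u
                    bridges.append((u, v))

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges

print(find_bridges(5, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]))  # [(3, 4), (2, 3)]
</code></pre>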
<p>Hope this helps!</p> | 2015-04-24 23:01:33.157000+00:00 | 2015-04-24 23:01:33.157000+00:00 | null | null | 29,856,062 | <p>I have a set of edges E, and I want to know if I can safely remove edge i in E, meaning if I remove it from the graph, the graph should still be connected.</p>
<p>In my understanding that implies that edge i has to lie on a cycle.
The output should be a list of indices of all edges I can't remove.</p>
<p>The problem:</p>
<p>My different solutions seem to do the right thing, but are far too slow (inefficient).</p>
<p>One of my solutions was:</p>
<pre><code>1. Loop through all edges i in E
2. Loop through all edges x in V
3. Add edge x to the graph (excluding edge i) until nodes of i are connected or end reached
4. If no connection possible, edge is not removable and added to the list
</code></pre>
<p>This way was way too slow.</p>
<p>I then decided to rewrite my code and use breadth-first-search to look if another path is possible without edge i.</p>
<p>I thought it would be performant enough, but it seems it's not. Either I have implemented it in a very bad way or it's also the wrong algorithm for this task.</p>
<p>Here is the algorithm in the C++ code I have (removed some not important parts):</p>
<pre><code>struct connection {
int a, b;
};
void expand(int x, connection *&arr, std::set<int> &exp, int size) {
for (int i = 0; i < size; i++) {
if (x == arr[i].a) {
exp.insert(arr[i].b);
}
else if (x == arr[i].b) {
exp.insert(arr[i].a);
}
}
return;
}
// recursive breadth-first-search
bool BFSr(std::set<int> &group, std::set<int> &visited, int goal, connection *&arr, int size) {
if (group.empty()) return false;
if (group.find(goal) != group.end()) return true;
std::set<int> tempa;
for (std::set<int>::iterator it = group.begin(); it != group.end(); ++it) {
expand(*it, arr, tempa, size);
}
for (std::set<int>::iterator it = visited.begin(); it != visited.end(); ++it) {
tempa.erase(*it);
}
std::set<int> tempb = visited;
tempb.insert(group.begin(), group.end());
return BFSr(tempa, tempb, goal, arr, size);
}
bool BFS(int start, int goal, connection *&arr, int size) {
std::set<int> tempa;
std::set<int> tempb;
tempa.insert(start);
return BFSr(tempa, tempb, goal, arr, size);
}
int main()
{
connection *arr = new connection[m];
connection *con = new connection[m - 1];
// fill arr with connections arr.a < arr.b ....
for (int j = 0; j < (m - 1); j++) {
con[j] = arr[j + 1];
}
// handle the 1st edge separately for performance reasons
if (!BFS(arr[0].a, arr[0].b, con, (m - 1))) {
// connection is important therefore add to list
printf(" %d", 1);
}
// Look if nodes still connected after removing connection
for (int s = 1; s < m; s++) {
con[s - 1] = arr[s - 1];
if (!BFS(arr[s].a, arr[s].b, con, (m-1))) {
// connection is important therefore add to list
printf(" %d", s + 1);
}
}
printf("\n");
delete[] arr;
delete[] con;
return 0;
}
</code></pre>
<p>Do you know any solutions for me to make it faster, or do you know a better algorithm for my problem?</p> | 2015-04-24 19:39:06.973000+00:00 | 2015-04-25 02:00:14.230000+00:00 | 2015-04-24 20:26:36.467000+00:00 | c++|performance|graph|breadth-first-search|undirected-graph | ['http://en.wikipedia.org/wiki/Bridge_%28graph_theory%29', 'http://arxiv.org/pdf/1209.0700.pdf'] | 2 |
46,121,526 | <p>If you need precision <em>and</em> parallelism, use Kahan summation or another error-compensation technique to let you reorder your sum (into SIMD vector element strides with multiple accumulators).</p>
<p>As <a href="https://arxiv.org/pdf/1401.0248.pdf" rel="nofollow noreferrer">Twofold fast summation - Evgeny Latkin</a> points out, if you bottleneck on memory bandwidth, an error-compensated sum isn't much slower than a max-performance sum, since the CPU has lots of computation throughput that goes unused in a simply-parallelized sum that bottlenecks on memory bandwidth</p>
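<p>For readers who haven't seen it, the compensation idea itself is tiny - a language-agnostic sketch (my own illustration; in the SIMD version you keep one compensation term per vector lane):</p>
<pre><code>def kahan_sum(xs):
    total, c = 0.0, 0.0          # c accumulates the rounding error
    for x in xs:
        y = x - c
        t = total + y
        c = (t - total) - y      # the low-order bits of y that were lost
        total = t
    return total
</code></pre>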
<p>See also (google results for <code>kahan summation avx</code>)</p>
<ul>
<li><p><a href="https://github.com/rreusser/summation-algorithms" rel="nofollow noreferrer">https://github.com/rreusser/summation-algorithms</a></p>
</li>
<li><p><a href="https://scicomp.stackexchange.com/questions/10869/which-algorithm-is-more-accurate-for-computing-the-sum-of-a-sorted-array-of-numb">https://scicomp.stackexchange.com/questions/10869/which-algorithm-is-more-accurate-for-computing-the-sum-of-a-sorted-array-of-numb</a></p>
</li>
</ul>
<hr />
<p>Re: your idea: Summing groups of 4 numbers in-order will let you hide the FP-add latency, and bottleneck on scalar add throughput.</p>
<p>Doing horizontal sums within vectors takes a lot of shuffling, so it's a potential bottleneck. You might consider loading <code>a0 a1 a2 a3</code>, then shuffling to get <code>a0+a1 x a2+a3 x</code>, then <code>(a0+a1) + (a2+a3)</code>. You have a Ryzen, right? The last step is just a <code>vextractf128</code> down to 128b. That's still 3 total ADD uops, and 3 shuffle uops, but with fewer instructions than scalar or 128b vectors.</p>
<hr />
<h3>Your idea is very similar to Pairwise Summation</h3>
<p>You're always going to get <em>some</em> rounding error, but adding numbers of similar magnitude minimizes it.</p>
<p><strong>See also</strong> <a href="https://stackoverflow.com/questions/55477701/simd-matmul-program-gives-different-numerical-results">Simd matmul program gives different numerical results</a> for some comments about Pairwise Summation and simple efficient SIMD.</p>
<p>The difference between adding 4 contiguous numbers vs. vertically adding 4 SIMD vectors is probably negligible. SIMD vectors give you small strides (of SIMD vector width) in the array. Unless the array grows extremely quickly, they're still going to have basically similar magnitudes.</p>
<p><strong>You don't need to horizontal sum until the very end to still get most of the benefit</strong>. You can maintain 1 or 2 SIMD vector accumulators while you use more SIMD registers to sum short runs (of maybe 4 or 8 SIMD vectors) before adding into the main accumulators.</p>
<p>In fact having your total split more ways (across the SIMD vector elements) means it doesn't grow as large. So it helps with exactly the problem you're trying to avoid, and <strong>horizontal summing down to a single scalar accumulator actually makes things worse, especially for a strictly sorted array.</strong></p>
<p>With out-of-order execution, you don't need very many tmp accumulators to make this work and hide the FP-add latency of accumulating into the main accumulators. You can do a couple groups of accumulating into a fresh <code>tmp = _mm_load_ps()</code> vector and adding that to the total, and OoO exec will overlap those executions. So you don't need a huge unroll factor for your main loop.</p>
<p>But it shouldn't be too small, you don't want to bottleneck on the add into the main accumulator. You want to bottleneck on FP-add throughput. (Or if you care about Broadwell/Haswell and you don't totally bottleneck on memory bandwidth, mix in some FMA with a <code>1.0</code> multiplier to take advantage of that throughput.)</p>
<p>e.g. Skylake SIMD FP add has 4 cycle latency, 0.5 cycle throughput, so you need to be doing at least 7 adds that are part of a short dep chain for every add into a single accumulator. Preferably more, and/or preferably with 2 long-term accumulators to better absorb bubbles in scheduling from resource conflicts.</p>
<p>See <a href="https://stackoverflow.com/questions/66260651/mm256-fmadd-ps-is-slower-than-mm256-mul-ps-mm256-add-ps">_mm256_fmadd_ps is slower than _mm256_mul_ps + _mm256_add_ps?</a> for more about multiple accumulators.</p> | 2017-09-08 17:11:57.757000+00:00 | 2021-02-20 23:23:34.197000+00:00 | 2021-02-20 23:23:34.197000+00:00 | null | 46,119,811 | <p>Consider a sorted (ascending) array of <code>double</code> numbers. For numerical stability the array should be summed up as if iterating it from the beginning till the end, accumulating the sum in some variable.</p>
<p>How to vectorize this efficiently with AVX2?</p>
<p>I've looked into this method <a href="https://stackoverflow.com/questions/9775538/fastest-way-to-do-horizontal-vector-sum-with-avx-instructions">Fastest way to do horizontal vector sum with AVX instructions</a> , but it seems quite tricky to scale it to an array (some divide&conquer approach may be needed), while keeping the floating-point precision by ensuring that small numbers are summed up before adding them to a larger number.</p>
<p><strong>Clarification 1</strong>: I think it should be ok to e.g. sum the first 4 items, then add them to the sum of the next 4 items, etc. I'm willing to trade some stability for performance. But I would prefer a method that doesn't ruin the stability completely.</p>
<p><strong>Clarification 2</strong>: memory shouldn't be a bottleneck because the array is in L3 cache (but not in L1/L2 cache, because pieces of the array were populated from different threads). I wouldn't like to resort to Kahan summation because I think it's really the number of operations that matters, and Kahan summation would increase it about 4 times.</p> | 2017-09-08 15:24:17.277000+00:00 | 2021-02-20 23:23:34.197000+00:00 | 2017-09-08 17:51:32.457000+00:00 | c++|algorithm|floating-point|vectorization|x86-64 | ['https://arxiv.org/pdf/1401.0248.pdf', 'https://github.com/rreusser/summation-algorithms', 'https://scicomp.stackexchange.com/questions/10869/which-algorithm-is-more-accurate-for-computing-the-sum-of-a-sorted-array-of-numb', 'https://stackoverflow.com/questions/55477701/simd-matmul-program-gives-different-numerical-results', 'https://stackoverflow.com/questions/66260651/mm256-fmadd-ps-is-slower-than-mm256-mul-ps-mm256-add-ps'] | 5 |
5,769,285 | <p>There are at least 4 libraries that I am aware of providing lenses.</p>
<p>The notion of a lens is that it provides something isomorphic to</p>
<pre><code>data Lens a b = Lens (a -> b) (b -> a -> a)
</code></pre>
<p>providing two functions: a getter, and a setter</p>
<pre><code>get (Lens g _) = g
put (Lens _ s) = s
</code></pre>
<p>subject to three laws:</p>
<p>First, that if you put something, you can get it back out</p>
<pre><code>get l (put l b a) = b
</code></pre>
<p>Second that getting and then setting doesn't change the answer</p>
<pre><code>put l (get l a) a = a
</code></pre>
<p>And third, putting twice is the same as putting once, or rather, that the second put wins.</p>
<pre><code>put l b1 (put l b2 a) = put l b1 a
</code></pre>
<p>Note, that the type system isn't sufficient to check these laws for you, so you need to ensure them yourself no matter what lens implementation you use.</p>
<p>Many of these libraries also provide a bunch of extra combinators on top, and usually some form of template haskell machinery to automatically generate lenses for the fields of simple record types.</p>
<p>With that in mind, we can turn to the different implementations:</p>
<p><em><strong>Implementations</strong></em></p>
<p><strong>fclabels</strong></p>
<p><a href="http://hackage.haskell.org/package/fclabels" rel="nofollow noreferrer">fclabels</a> is perhaps the most easily reasoned about of the lens libraries, because its <code>a :-> b</code> can be directly translated to the above type. It provides a <a href="http://www.haskell.org/ghc/docs/6.12.2/html/libraries/base-4.2.0.1/Control-Category.html" rel="nofollow noreferrer">Category</a> instance for <code>(:->)</code> which is useful as it allows you to compose lenses. It also provides a lawless <code>Point</code> type which generalizes the notion of a lens used here, and some plumbing for dealing with isomorphisms.</p>
<p>One hindrance to the adoption of <code>fclabels</code> is that the main package includes the template-haskell plumbing, so the package is not Haskell 98, and it also requires the (fairly non-controversial) <code>TypeOperators</code> extension.</p>
<p><strong>data-accessor</strong></p>
<p>[Edit: <code>data-accessor</code> is no longer using this representation, but has moved to a form similar to that of <code>data-lens</code>. I'm keeping this commentary, though.]</p>
<p><a href="http://hackage.haskell.org/package/data-accessor" rel="nofollow noreferrer">data-accessor</a> is somewhat more popular than <code>fclabels</code>, in part because it <em>is</em> Haskell 98. However, its choice of internal representation makes me throw up in my mouth a little bit.</p>
<p>The type <code>T</code> it uses to represent a lens is internally defined as</p>
<pre><code>newtype T r a = Cons { decons :: a -> r -> (a, r) }
</code></pre>
<p>Consequently, in order to <code>get</code> the value of a lens, you must submit an undefined value for the 'a' argument! This strikes me as an incredibly ugly and ad hoc implementation.</p>
<p>That said, Henning has included the template-haskell plumbing to automatically generate the accessors for you in a separate '<a href="http://hackage.haskell.org/package/data-accessor-template" rel="nofollow noreferrer">data-accessor-template</a>' package.</p>
<p>It has the benefit of a decently large set of packages that already employ it, being Haskell 98, and providing the all-important <code>Category</code> instance, so if you don't pay attention to how the sausage is made, this package is actually a pretty reasonable choice.</p>
<p><strong>lenses</strong></p>
<p>Next, there is the <a href="http://hackage.haskell.org/packages/archive/lenses/0.1.4/doc/html/Data-Lenses.html" rel="nofollow noreferrer">lenses</a> package, which observes that a lens can provide a state monad homomorphism between two state monads, by defining lenses directly <em>as</em> such monad homomorphisms.</p>
<p>If it actually bothered to provide a type for its lenses, they would have a rank-2 type like:</p>
<pre><code>newtype Lens s t = Lens (forall a. State t a -> State s a)
</code></pre>
<p>As a result, I rather don't like this approach, as it needlessly yanks you out of Haskell 98 (if you want a type to provide to your lenses in the abstract) and deprives you of the <code>Category</code> instance for lenses, which would let you compose them with <code>.</code>. The implementation also requires multi-parameter type classes.</p>
<p>Note, all of the other lens libraries mentioned here provide some combinator or can be used to provide this same state focalization effect, so nothing is gained by encoding your lens directly in this fashion.</p>
<p>Furthermore, the side-conditions stated at the start don't really have a nice expression in this form. As with 'fclabels', this does provide a template-haskell method for automatically generating lenses for a record type directly in the main package.</p>
<p>Because of the lack of <code>Category</code> instance, the baroque encoding, and the requirement of template-haskell in the main package, this is my least favorite implementation.</p>
<p><strong>data-lens</strong></p>
<p>[Edit: As of 1.8.0, these have moved from the comonad-transformers package to data-lens]</p>
<p>My <a href="http://hackage.haskell.org/package/data-lens" rel="nofollow noreferrer"><code>data-lens</code></a> package provides lenses in terms of the <a href="http://hackage.haskell.org/packages/archive/comonad-transformers/1.5.2.6/doc/html/Control-Comonad-Trans-Store-Lazy.html" rel="nofollow noreferrer">Store</a> comonad.</p>
<pre><code>newtype Lens a b = Lens (a -> Store b a)
</code></pre>
<p>where</p>
<pre><code>data Store b a = Store (b -> a) b
</code></pre>
<p>Expanded this is equivalent to</p>
<pre><code>newtype Lens a b = Lens (a -> (b, b -> a))
</code></pre>
<p>You can view this as factoring out the common argument from the getter and the setter to return a pair consisting of the result of retrieving the element, and a setter to put a new value back in. This offers the computational benefit that the 'setter' here can recycle some of the work used to get the value out, making for a more efficient 'modify' operation than in the <code>fclabels</code> definition, especially when accessors are chained.</p>
<p>There is also a nice theoretical justification for this representation, because the subset of 'Lens' values that satisfy the 3 laws stated in the beginning of this response are precisely those lenses for which the wrapped function is a 'comonad coalgebra' for the store comonad. This transforms 3 hairy laws for a lens <code>l</code> down to 2 nicely pointfree equivalents:</p>
<pre><code>extract . l = id
duplicate . l = fmap l . l
</code></pre>
<p>This approach was first noted and described in Russell O'Connor's <a href="https://arxiv.org/abs/1103.2841" rel="nofollow noreferrer"><code>Functor</code> is to <code>Lens</code> as <code>Applicative</code> is to <code>Biplate</code>: Introducing Multiplate</a> and was <a href="http://patternsinfp.wordpress.com/2011/01/31/lenses-are-the-coalgebras-for-the-costate-comonad/" rel="nofollow noreferrer">blogged about based on a preprint</a> by Jeremy Gibbons.</p>
<p>It also includes a number of combinators for working with lenses strictly and some stock lenses for containers, such as <code>Data.Map</code>.</p>
<p>So the lenses in <code>data-lens</code> form a <code>Category</code> (unlike the <code>lenses</code> package), are Haskell 98 (unlike <code>fclabels</code>/<code>lenses</code>), are sane (unlike the back end of <code>data-accessor</code>) and provide a slightly more efficient implementation, <a href="http://hackage.haskell.org/package/data-lens-fd" rel="nofollow noreferrer"><code>data-lens-fd</code></a> provides the functionality for working with MonadState for those willing to step outside of Haskell 98, and the template-haskell machinery is now available via <a href="http://hackage.haskell.org/package/data-lens-template" rel="nofollow noreferrer"><code>data-lens-template</code></a>.</p>
<p><em><strong>Update 6/28/2012: Other Lens Implementation Strategies</strong></em></p>
<p><strong>Isomorphism Lenses</strong></p>
<p>There are two other lens encodings worth considering. The first gives a nice theoretical way to view a lens as a way to break a structure into the value of the field, and 'everything else'.</p>
<p>Given a type for isomorphisms</p>
<pre><code>data Iso a b = Iso { hither :: a -> b, yon :: b -> a }
</code></pre>
<p>such that valid members satisfy <code>hither . yon = id</code>, and <code>yon . hither = id</code></p>
<p>We can represent a lens with:</p>
<pre><code>data Lens a b = forall c. Lens (Iso a (b,c))
</code></pre>
<p>These are primarily useful as a way to think about the meaning of lenses, and we can use them as a reasoning tool to explain other lenses.</p>
<p><strong>van Laarhoven Lenses</strong></p>
<p>We can model lenses such that they can be composed with <code>(.)</code> and <code>id</code>, even without a <code>Category</code> instance by using</p>
<pre><code>type Lens a b = forall f. Functor f => (b -> f b) -> a -> f a
</code></pre>
<p>as the type for our lenses.</p>
<p>Then defining a lens is as easy as:</p>
<pre><code>_2 f (a,b) = (,) a <$> f b
</code></pre>
<p>and you can validate for yourself that function composition is lens composition.</p>
<p>I've recently written on how you can further <a href="http://comonad.com/reader/2012/mirrored-lenses/" rel="nofollow noreferrer">generalize van Laarhoven lenses</a> to get lens families that can change the types of fields, just by generalizing this signature to</p>
<pre><code>type LensFamily a b c d = forall f. Functor f => (c -> f d) -> a -> f b
</code></pre>
<p>This does have the unfortunate consequence that the best way to talk about lenses is to use rank 2 polymorphism, but you don't need to use that signature directly when defining lenses.</p>
<p>The <code>Lens</code> I defined above for <code>_2</code> is actually a <code>LensFamily</code>.</p>
<pre><code>_2 :: Functor f => (a -> f b) -> (c,a) -> f (c, b)
</code></pre>
<p>I've written a library that includes lenses, lens families, and other generalizations including getters, setters, folds and traversals. It is available on hackage as the <a href="http://hackage.haskell.org/package/lens" rel="nofollow noreferrer"><code>lens</code></a> package.</p>
<p>Again, a big advantage of this approach is that library maintainers can actually create lenses in this style in your libraries without incurring any lens library dependency whatsoever, by just supplying functions with type <code>Functor f => (b -> f b) -> a -> f a</code>, for their particular types 'a' and 'b'. This greatly lowers the cost of adoption.</p>
<p>Since you don't need to actually use the package to define new lenses, it takes a lot of pressure off my earlier concerns about keeping the library Haskell 98.</p> | 2011-04-24 07:31:21.233000+00:00 | 2017-04-01 20:31:51.870000+00:00 | 2022-06-22 07:14:35.793000+00:00 | null | 5,767,129 | <p>There are at least three popular libraries for accessing and manipulating fields of records. The ones I know of are: data-accessor, fclabels and lenses. </p>
<p>Personally I started with data-accessor and I'm using them now. However recently on haskell-cafe there was an opinion of fclabels being superior.</p>
<p>Therefore I'm interested in comparison of those three (and maybe more) libraries.</p> | 2011-04-23 21:42:26.093000+00:00 | 2017-04-01 20:31:51.870000+00:00 | 2011-04-24 12:13:42.347000+00:00 | data-structures|haskell|record|lenses | ['http://hackage.haskell.org/package/fclabels', 'http://www.haskell.org/ghc/docs/6.12.2/html/libraries/base-4.2.0.1/Control-Category.html', 'http://hackage.haskell.org/package/data-accessor', 'http://hackage.haskell.org/package/data-accessor-template', 'http://hackage.haskell.org/packages/archive/lenses/0.1.4/doc/html/Data-Lenses.html', 'http://hackage.haskell.org/package/data-lens', 'http://hackage.haskell.org/packages/archive/comonad-transformers/1.5.2.6/doc/html/Control-Comonad-Trans-Store-Lazy.html', 'https://arxiv.org/abs/1103.2841', 'http://patternsinfp.wordpress.com/2011/01/31/lenses-are-the-coalgebras-for-the-costate-comonad/', 'http://hackage.haskell.org/package/data-lens-fd', 'http://hackage.haskell.org/package/data-lens-template', 'http://comonad.com/reader/2012/mirrored-lenses/', 'http://hackage.haskell.org/package/lens'] | 13 |
47,579,598 | <p><strong>Actions</strong></p>
<p>Controllable actions are the results of choices that the decision maker makes. In the classic POMDP tiger problem, there is a tiger hidden behind one of two doors. At each time step, the decision maker can choose to listen or to open one of the doors. The actions in this scenario are {listen, open left door, open right door}. The transition function from one state to another depends on both the previous state and the action chosen. </p>
<p>In a hidden Markov model (HMM), there are no actions for the decision maker. In the tiger problem context, this means the participant can only listen without opening doors. In this case, the transition function only depends on the previous state, since there are no actions. </p>
<p>For more details on the tiger problem, see Kaelbling Littman and Cassandra's 1998 <a href="http://people.csail.mit.edu/lpk/papers/aij98-pomdp.pdf" rel="nofollow noreferrer">POMDP paper</a>, Section 5.1. There's also a more introductory walk-through available in this <a href="http://www.pomdp.org/tutorial/" rel="nofollow noreferrer">tutorial</a>. </p>
<p><strong>Adaptability</strong></p>
<p>The basic intuition in your question is correct, but can be refined. POMDPs are a class of models, whereas Q-learning is a solution technique. The basic difference in your question is between model-based and model-free approaches. POMDPs are model-based, although the partial observability allows for additional uncertainty. Reinforcement learning can be applied in a model-free context, with Q-learning. The model-free approach will be more flexible for non-stationary problems. That being said, depending on the complexity of the problem, you could incorporate the non-stationarity into the model itself and treat it as an MDP. </p>
<p>There's a very thorough discussion on these non-stationary modelling trade-offs in the answer to this <a href="https://stats.stackexchange.com/questions/308617/reinforcement-learning-in-non-stationary-environment">question</a>. </p>
<p>Lastly, it is correct that POMDP's can be considered expert systems. Mazumdar et al (2017) have <a href="https://arxiv.org/abs/1707.05714" rel="nofollow noreferrer">suggested</a> treating Markov decision processes (MDPs) as expert systems. </p> | 2017-11-30 17:46:11.227000+00:00 | 2017-11-30 17:46:11.227000+00:00 | null | null | 47,512,110 | <p>I have some questions related to POMDPs.</p>
<ol>
<li><p>What do we mean by <em>controllable actions</em> in a partially observable Markov decision process? Or no controllable actions in hidden Markov states? </p></li>
<li><p>When computing policies through value or policy iteration, could we say that the POMDP is an expert system (because we model the environment)? While, when using <em>Q-learning</em>, it is a more flexible system in terms of intelligence or <em>adaptability to a changing environment</em>?</p></li>
</ol> | 2017-11-27 13:28:08.633000+00:00 | 2019-09-01 00:47:49.853000+00:00 | 2019-09-01 00:47:49.853000+00:00 | artificial-intelligence|probability|reinforcement-learning|expert-system|markov-decision-process | ['http://people.csail.mit.edu/lpk/papers/aij98-pomdp.pdf', 'http://www.pomdp.org/tutorial/', 'https://stats.stackexchange.com/questions/308617/reinforcement-learning-in-non-stationary-environment', 'https://arxiv.org/abs/1707.05714'] | 4 |
51,373,784 | <p>This paper may be the thing you are looking for: </p>
<p><a href="https://arxiv.org/abs/1608.02214" rel="nofollow noreferrer">[1608.02214] Robsut Wrod Reocginiton via semi-Character Recurrent Neural Network</a></p>
<p>A Brief introduction:</p>
<p>The authors of this paper demonstrate a method to recognize jumbled words such as Cmabrigde Uinervtisy (Cambridge University). By training the neural network on the correct first and last characters together with an encoding of the internal characters that discards their position information, the network can learn to recognize and correct such words.</p>
<p>You can easily modify the network structure to fit your own need, i.e. the OCR task you mentioned.</p>
<p><a href="https://i.stack.imgur.com/bCMyK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bCMyK.png" alt=""></a><br>
<sub>(source: <a href="https://screenshotscdn.firefoxusercontent.com/images/24464385-3bef-4555-80ce-669d252fe5fe.png" rel="nofollow noreferrer">firefoxusercontent.com</a>)</sub> </p>
<p><a href="https://i.stack.imgur.com/ydmKA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ydmKA.png" alt=""></a><br>
<sub>(source: <a href="https://screenshotscdn.firefoxusercontent.com/images/313b52fb-f590-4904-9144-f1044160a0df.png" rel="nofollow noreferrer">firefoxusercontent.com</a>)</sub> </p> | 2018-07-17 05:27:38.827000+00:00 | 2019-10-23 17:33:12+00:00 | 2019-10-23 17:33:12+00:00 | null | 34,015,467 | <p>I have recently started exploring Recurrent Neural Networks. So far I have trained character level language model on tensorFlow using Andrej Karpathy's <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="noreferrer">blog</a>. It works great.</p>
<p>I couldn't, however, find any study on using RNNs for string matching or keyword spotting. For one of my projects I require OCR of scanned documents and then parsing of the converted text for key data points. Most string matching techniques fail to account for OCR conversion mistakes, and that leads to significant error.</p>
<p>Is it possible to train the RNN on the variations of converted text I receive and use it for finding keywords?</p>
36,077,065 | <p>On re-reading your question, it appears you are asking about whether this can be done on the GPU. Yes, it can, but with some very severe restrictions. CUDA hardware supports atomic compare-and-swap. There are examples in the literature of using this and other atomic instructions to implement locks and condition variables, as well as lock-free data structures. Some reading to get you started:</p>
<ul>
<li><a href="http://arxiv.org/abs/1110.4623v1" rel="nofollow">http://arxiv.org/abs/1110.4623v1</a></li>
<li><a href="http://www.cse.chalmers.se/~bapic/lic_thesis_bapi.pdf" rel="nofollow">http://www.cse.chalmers.se/~bapic/lic_thesis_bapi.pdf</a></li>
<li><a href="http://arxiv.org/pdf/1302.2757.pdf" rel="nofollow">http://arxiv.org/pdf/1302.2757.pdf</a></li>
<li><a href="https://www.researchgate.net/publication/275603769_Toward_Concurrent_Lock-Free_Queues_on_GPUs" rel="nofollow">https://www.researchgate.net/publication/275603769_Toward_Concurrent_Lock-Free_Queues_on_GPUs</a></li>
</ul>
<p>Bottom line is: you have to roll your own implementation of futures. Also, any waiting thread will have to spin-wait, since there's no analog of host-side yielding.</p>
<p>My original answer, with the understanding you were asking about support for futures on the host side with CUDA:</p>
<p>Yes. Recent versions of CUDA support C++11 and CUDA has supported multiple host threads for some time. So you can wrap a CUDA kernel call with <code>std::async</code>.</p>
<p>One aspect you may want to consider is that CUDA will create a thread-local context for each thread in which CUDA functions are accessed. Depending on the implementation of <code>std::async</code> in your C++ library, you may incur severe overhead if you end up creating a new context for each <code>std::async</code> call.</p>
<p>Finally, CUDA calls are already asynchronous, i.e. you can continue processing things on the host thread while the GPU is busy. There can sometimes be a benefit to pipelining kernel calls. You can also use the CUDA events API to coordinate multiple asynchronous CUDA activities within a single thread. In some sense the CUDA implementation is already doing what you are possibly proposing with <code>std::future</code>. I would recommend first convincing yourself you cannot manage with a single host thread before venturing into multi-threaded territory, which can sometimes bring a host of non-CUDA related problems. Hope that helps.</p> | 2016-03-18 05:49:29.397000+00:00 | 2016-03-18 06:46:13.557000+00:00 | 2016-03-18 06:46:13.557000+00:00 | null | 36,074,572 | <p>I want to create a parallel procession CUDA/C++ application that does many functional operations concurrently. I want to be able to create a thread in CUDA that acts as the hub for assigning tasks and creates futures(if at all possible) that will do time consuming mathematical calculations in parallel. Does the CUDA library support this?</p>
<p>Edit for clarification: The thread I want to act as a hub would be created on the host CPU, and the tasks that it creates and manages would be created on the GPU device. I believe it would be possible for the CPU to checking the values of thousands of futures in sequence and assigning them new tasks as they finish. If this is possible, could the answer please reference or create a specific example of how I would be able to do this.</p> | 2016-03-18 01:27:09.273000+00:00 | 2016-03-18 16:40:45.183000+00:00 | 2016-03-18 16:40:45.183000+00:00 | c++|cuda|future | ['http://arxiv.org/abs/1110.4623v1', 'http://www.cse.chalmers.se/~bapic/lic_thesis_bapi.pdf', 'http://arxiv.org/pdf/1302.2757.pdf', 'https://www.researchgate.net/publication/275603769_Toward_Concurrent_Lock-Free_Queues_on_GPUs'] | 4 |
44,467,354 | <p>The result is not odd at all. The network has never learnt what makes your face special, but just remembered what makes the 500 set different from yours. Once you present a new face, it has no "memory" of it, and therefore interprets it as yours, just because none of the features present in the 500 faces appears in the 501st.</p>
<p>Some ideas how to tackle this:</p>
<ul>
<li>Augment your data with <a href="https://keras.io/preprocessing/image/" rel="nofollow noreferrer">ImageDataGenerator</a>, as proposed by petezurich.</li>
<li>Increase your kernel size. 2*2 is too tiny to catch facial features. Consider 3*3, or even stacking two 3*3 in the first hidden layer.</li>
<li>Consider using Batch Normalization and regularization. Increase Dropout to 0.5.</li>
<li>Consider replacing pooling layers with <a href="https://arxiv.org/abs/1511.07122" rel="nofollow noreferrer">dilated convolutions</a> (available in Keras).</li>
<li>Make sure you normalise your input data.</li>
<li>Lower the number of feature maps (filters) in the first layer. Consider using e.g. 32 3*3 maps instead of 128 tiny elements (these are your main problem if I am to guess). This way you will force the network to generalise instead of learning some tiny nuances (see the sketch after this list).</li>
</ul>
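<p>A minimal sketch of the first-layer changes I suggest above (just an illustration, not a tuned architecture; it reuses the <code>IMAGE_SIZE</code> constant from your code):</p>
<pre><code>from keras.models import Sequential
from keras.layers import Conv2D, BatchNormalization, Activation, MaxPooling2D, Dropout

model = Sequential()
# Fewer, larger filters in the first layer: 32 maps of 3x3 instead of 128 of 2x2
model.add(Conv2D(32, (3, 3), padding='same', input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
</code></pre>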
<p>A nice test for my hypothesis from the last point would be to visualise the activations in hidden layers, especially the first hidden layer. I have a feeling your network activates on some irrelevant features (or rather - noise), rather than "human features" (like eyes, haircut).</p>
<p>[EDIT after more code has been added]</p>
<ul>
<li>Centre your input data around zero.</li>
<li>Lower the batch size. With so few examples you don't want too much of averaging in a batch.</li>
</ul>
<p>I still think that using e.g. 16 or 32 filters in the first hidden layer should be the first thing to check. Look at your face. Can you spot 128 "features"? Unless you have some severe acne, I don't think so.</p> | 2017-06-09 22:11:25.103000+00:00 | 2017-06-10 07:01:12.857000+00:00 | 2017-06-10 07:01:12.857000+00:00 | null | 44,464,219 | <p>I am using a binary classifier on CNN. I have two categories "me" and "others". I have around 250 images of myself and 500 of others ( random faces db ). My current implementation of layers is very simple</p>
<pre><code> self.model.add(Conv2D(128, (2, 2), padding='same',
input_shape=dataset.X_train.shape[1:]))
self.model.add(Activation('relu'))
self.model.add(MaxPooling2D(pool_size=(2, 2)))
self.model.add(Dropout(0.25))
self.model.add(Conv2D(64, (2, 2), padding='same'))
self.model.add(Activation('relu'))
self.model.add(MaxPooling2D(pool_size=(2, 2)))
self.model.add(Dropout(0.25))
self.model.add(Conv2D(32, (1, 1), padding='same'))
self.model.add(Activation('relu'))
self.model.add(MaxPooling2D(pool_size=(2, 2)))
self.model.add(Dropout(0.5))
self.model.add(Dense(512))
self.model.add(Activation('relu'))
self.model.add(Dropout(0.25))
self.model.add(Dense(2)) # for two classes
self.model.add(Activation('softmax'))
</code></pre>
<p>My network reaches accuracy of 93%
<a href="https://i.stack.imgur.com/8RVEV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8RVEV.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/xxdmI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xxdmI.png" alt="enter image description here"></a>
My problem is that when I use this network to predict faces, it always recognises any face as mine. I have cropped the faces and applied a Gabor filter, but nothing works. Any suggestion will be appreciated.</p>
<p>Prediction Results on Random Faces: [KK represents my face]
The probabilities are always over 97%:</p>
<pre><code>KK identified!
1/1 [==============================] - 0s
[[ 0.9741978 0.0258022]]
1/1 [==============================] - 0s
KK identified!
1/1 [==============================] - 0s
[[ 0.9897241 0.01027592]]
1/1 [==============================] - 0s
</code></pre>
<p>Prediction Results on my images: [KK represents my face]
The probabilities are always over 99%:</p>
<pre><code>KK identified!
1/1 [==============================] - 0s
[[ 0.99639165 0.00360837]]
1/1 [==============================] - 0s
KK identified!
1/1 [==============================] - 0s
[[ 0.99527925 0.00472075]]
1/1 [==============================] - 0s
</code></pre>
<p>Training code</p>
<pre><code> def get_data(self, img_rows=IMAGE_SIZE, img_cols=IMAGE_SIZE, img_channels=3, nb_classes=2):
images, labels = fetch_data('./data/')
labels = np.reshape(labels, [-1])
X_train, X_test, y_train, y_test = \
train_test_split(images, labels, test_size=0.3, random_state=random.randint(0, 100))
X_valid, X_test, y_valid, y_test = \
train_test_split(images, labels, test_size=0.3, random_state=random.randint(0, 100))
#train_test_split(images, labels, test_size=0.3, random_state=np.random.seed(15))
if K.image_dim_ordering() == 'th':
X_train = X_train.reshape(X_train.shape[0], 3, img_rows, img_cols)
X_valid = X_valid.reshape(X_valid.shape[0], 3, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 3, img_rows, img_cols)
# input_shape = (3, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 3)
X_valid = X_valid.reshape(X_valid.shape[0], img_rows, img_cols, 3)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 3)
# input_shape = (img_rows, img_cols, 3)
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_valid = np_utils.to_categorical(y_valid, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
X_train = X_train.astype('float32')
X_valid = X_valid.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_valid /= 255
X_test /= 255
self.X_train = X_train
self.X_valid = X_valid
self.X_test = X_test
self.Y_train = Y_train
self.Y_valid = Y_valid
self.Y_test = Y_test
def train_network(self, dataset, batch_size=32, nb_epoch=40, data_augmentation=True):
sgd = SGD(lr=0.003, decay=0.0000001, momentum=0.9, nesterov=True)
# adam = Adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0001)
self.model.compile(loss='binary_crossentropy',
optimizer=sgd,
metrics=['accuracy'])
if not data_augmentation:
processed_data = self.model.fit(dataset.X_train, dataset.Y_train,
batch_size=batch_size,
nb_epoch=nb_epoch,
validation_data=(dataset.X_valid, dataset.Y_valid),
shuffle=True)
else:
datagenerator = ImageDataGenerator(
featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True,
vertical_flip=False)
datagenerator.fit(dataset.X_train)
processed_data = self.model.fit_generator(datagen.flow(dataset.X_train, dataset.Y_train, batch_size=batch_size, shuffle=True),
samples_per_epoch=dataset.X_train.shape[0], nb_epoch=nb_epoch, validation_data=(dataset.X_valid, dataset.Y_valid))
</code></pre>
<p>Thanks</p>
<p>[Update: 11th June]</p>
<p>Layers</p>
<pre><code>def build_model(self, dataset, nb_classes=2):
self.model = Sequential()
self.model.add(Conv2D(32, (3, 3), padding='same', input_shape=dataset.X_train.shape[1:]))
self.model.add(Activation('relu'))
self.model.add(Conv2D(32, (3, 3)))
self.model.add(Activation('relu'))
self.model.add(MaxPooling2D(pool_size=(2, 2)))
self.model.add(Dropout(0.5))
self.model.add(Conv2D(16, (3, 3), padding='same'))
self.model.add(Activation('relu'))
self.model.add(Conv2D(16, (3, 3)))
self.model.add(Activation('relu'))
self.model.add(MaxPooling2D(pool_size=(2, 2)))
self.model.add(Dropout(0.5))
self.model.add(Flatten())
self.model.add(Dense(512))
self.model.add(Activation('relu'))
self.model.add(Dropout(0.5))
self.model.add(Dense(nb_classes))
self.model.add(Activation('softmax'))
self.model.summary()
</code></pre>
<p>Data augmentation</p>
<pre><code> # this will do preprocessing and realtime data augmentation
datagen = ImageDataGenerator(
featurewise_center=True, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=20, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.2, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.2, # randomly shift images vertically (fraction of total height)
# rescale=1. / 255,
# shear_range=0.2,
# zoom_range=0.2,
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
datagen.fit(dataset.X_train)
checkpoint = ModelCheckpoint(self.FILE_PATH, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callback_list = [checkpoint]
# fit the model on the batches generated by datagen.flow()
train_generator = datagen.flow(dataset.X_train, dataset.Y_train, batch_size=batch_size, shuffle=True)
history = self.model.fit_generator(train_generator,
samples_per_epoch=dataset.X_train.shape[0],
nb_epoch=nb_epoch,
validation_data=(dataset.X_valid, dataset.Y_valid),
callbacks=callback_list)
</code></pre>
<p>Dataset</p>
<pre><code>class DataSet(object):
def __init__(self):
self.X_train = None
self.X_valid = None
self.X_test = None
self.Y_train = None
self.Y_valid = None
self.Y_test = None
# support only binary classification for now, thus 2 class limit
def get_data(self, img_rows=IMAGE_SIZE, img_cols=IMAGE_SIZE, img_channels=3, nb_classes=2):
images, labels = fetch_data('./data/')
labels = np.reshape(labels, [-1])
X_train, X_test, y_train, y_test = \
train_test_split(images, labels, test_size=0.2, random_state=random.randint(0, 100))
X_valid, X_test, y_valid, y_test = \
train_test_split(images, labels, test_size=0.2, random_state=random.randint(0, 100))
if K.image_dim_ordering() == 'th':
X_train = X_train.reshape(X_train.shape[0], 3, img_rows, img_cols)
X_valid = X_valid.reshape(X_valid.shape[0], 3, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 3, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 3)
X_valid = X_valid.reshape(X_valid.shape[0], img_rows, img_cols, 3)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 3)
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_valid = np_utils.to_categorical(y_valid, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
X_train = X_train.astype('float32')
X_valid = X_valid.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_valid /= 255
X_test /= 255
self.X_train = X_train
self.X_valid = X_valid
self.X_test = X_test
self.Y_train = Y_train
self.Y_valid = Y_valid
self.Y_test = Y_test
</code></pre> | 2017-06-09 18:06:31.523000+00:00 | 2017-06-11 16:21:39.343000+00:00 | 2017-06-11 16:21:39.343000+00:00 | python|deep-learning|keras|convolution | ['https://keras.io/preprocessing/image/', 'https://arxiv.org/abs/1511.07122'] | 2 |
52,532,726 | <p>The real potential of LSTM models for time series forecasting can be exploited by building a global model using all the time series, instead of a separate univariate model per series, which ignores any cross-series information available across your time series.</p>
<p>We implement the use case you are referring to by introducing a '<strong>Moving Window Approach</strong>' strategy that models a multiple-input, multiple-output mapping and lets you pool time series of different lengths. A more detailed discussion of this strategy is given in <strong>section 3.4</strong> of our paper [1]. Here, you basically produce multiple input and output tuples from the given set of time series and then pool them together for LSTM training; this works even if the time series have different lengths.</p>
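<p>As a rough illustration of that idea (not the exact preprocessing from the paper; the window lengths and series below are made up for the example), windows from all series can be pooled into one training set for a single global LSTM:</p>
<pre><code>import numpy as np

def make_windows(series, input_len=24, output_len=6):
    """Slide a window over one series, yielding (input, output) pairs."""
    pairs = []
    for start in range(len(series) - input_len - output_len + 1):
        x = series[start:start + input_len]
        y = series[start + input_len:start + input_len + output_len]
        pairs.append((x, y))
    return pairs

# Pool windows from all series (they may have different lengths)
all_series = [np.random.rand(100), np.random.rand(250), np.random.rand(80)]
pooled = [pair for s in all_series for pair in make_windows(s)]

X = np.stack([x for x, _ in pooled])[..., np.newaxis]  # (samples, input_len, 1)
y = np.stack([y for _, y in pooled])                   # (samples, output_len)
# X and y can now be fed to one global LSTM model instead of 40k separate models.
</code></pre>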
<p>[1] <a href="https://arxiv.org/pdf/1710.03222.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1710.03222.pdf</a></p> | 2018-09-27 08:34:07.293000+00:00 | 2018-09-28 18:11:03.060000+00:00 | 2018-09-28 18:11:03.060000+00:00 | null | 49,797,291 | <p>I am following this tutorial <a href="https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/" rel="nofollow noreferrer">LSTM</a> and I wonder how to map this to a multi-time series input. I have a dataset of several time-series and I want to predict for each time series the future. I don't know how to scale LSTM to several time-series.</p>
<p>The aim is to avoid making a model for each time series, as I have 40k time series.</p>
<p>Thank you</p> | 2018-04-12 13:01:56.997000+00:00 | 2018-09-28 18:11:03.060000+00:00 | null | python|tensorflow|machine-learning|keras|lstm | ['https://arxiv.org/pdf/1710.03222.pdf'] | 1 |
60,136,841 | <p>There is some work on implementing decision trees or random forests with neural networks; see for example <a href="https://arxiv.org/pdf/1806.06988.pdf" rel="nofollow noreferrer">Yang et al.</a> (2018). For the second part of your question, about whether we can use random forests on CNN data: it is also possible. Consider, for example, using a pruned CNN to extract features from a given image and a random forest to classify them.</p> | 2020-02-09 12:53:49.237000+00:00 | 2020-02-09 12:53:49.237000+00:00 | null | null | 60,129,869 | <p>We can use the random forest algorithm only for tabular data, but a CNN can be used for various other data types. So if we can somehow generate tabular data from a CNN, can we use it in a random forest algorithm?</p> | 2020-02-08 18:27:21.667000+00:00 | 2020-02-09 12:53:49.237000+00:00 | null | machine-learning|deep-learning|conv-neural-network|random-forest | ['https://arxiv.org/pdf/1806.06988.pdf'] | 1 |
48,183,310 | <p>Certainly. There is already research being done on developing deep learning / convolutional neural networks to do exactly this. Four recent references as of January 2018 are given below. </p>
<p>The main challenges with doing it are:</p>
<ol>
<li>Acquiring a large enough dataset (human face images and their respective attractiveness scores) with proper subject approval.</li>
<li>The fact that attractiveness is subjective and varies with ethnic group and culture. Therefore such training data will have a broader range of labels than in more classical recognition tasks such as object detection (for which the label is binary), leading to more uncertainty in the network's predictions. For this reason most research focuses on training networks for a specific group.</li>
</ol>
<p>This research area isn't being developed hugely (at least in academia) at the moment most likely because of ethical considerations with acquiring such sensitive data and dubious uses. I suspect that now companies like OKCupid and Match.com are or will be developing this research privately for the purposes of automatic match making. </p>
<p>Xu et al., A new humanlike facial attractiveness predictor with cascaded fine-tuning deep learning model, arXiv 2015,
<a href="https://arxiv.org/pdf/1511.02465.pdf" rel="nofollow noreferrer">paper</a></p>
<p>Gan et al., Deep self-taught learning for facial beauty prediction, Neurocomputing 2014
<a href="https://www.sciencedirect.com/science/article/pii/S0925231214006468" rel="nofollow noreferrer">paper</a></p>
<p>Wang et al., Attractive or Not?: Beauty Prediction with Attractiveness-Aware Encoders and Robust Late Fusion, ACM international conference on Multimedia 2014
<a href="https://dl.acm.org/citation.cfm?id=2654986" rel="nofollow noreferrer">paper</a></p>
<p>Shen et al., Fooling Neural Networks in Face Attractiveness Evaluation: Adversarial Examples with High Attractiveness Score But Low Subjective Score
Multimedia Big Data (BigMM), 2017 IEEE Third International Conference on
<a href="http://ieeexplore.ieee.org/document/7966718/" rel="nofollow noreferrer">paper</a></p> | 2018-01-10 08:38:54+00:00 | 2018-01-10 09:27:50.833000+00:00 | 2018-01-10 09:27:50.833000+00:00 | null | 48,180,112 | <p>I'm trying to understand if the project I'm thinking about is feasible or not using Neural Networks. I'm aware of apps like MakeApp and FakeApp which use neural networks to manipulate human faces. </p>
<p>My question is - <strong>Can modern (2018) neural networks be trained to identify aspects of human facial attractiveness and give a percentile score?</strong></p>
<p>For example, given an image, I want to know if the neural network thinks this image is in the top 20% facial attractiveness. If possible, how big of a dataset I need to be able to train such network? Is it tens of thousands of human-scored images? </p> | 2018-01-10 04:01:05.707000+00:00 | 2018-01-10 09:27:50.833000+00:00 | null | image-processing|neural-network|computer-vision|conv-neural-network|feasibility | ['https://arxiv.org/pdf/1511.02465.pdf', 'https://www.sciencedirect.com/science/article/pii/S0925231214006468', 'https://dl.acm.org/citation.cfm?id=2654986', 'http://ieeexplore.ieee.org/document/7966718/'] | 4 |
71,300,267 | <p><strong>You should NOT use BERT's output as sentence embeddings for semantic similarity.</strong> BERT is not pretrained for semantic similarity, so its raw output gives poor results, often even worse than simple GloVe embeddings. See below a comment from Jacob Devlin (first author of the BERT paper) and a piece from the Sentence-BERT paper, which discusses sentence embeddings in detail.</p>
<blockquote>
<p><strong>Jacob Devlin's comment:</strong> I'm not sure what these vectors are, since BERT does not generate meaningful sentence vectors. It seems that this is is doing average pooling over the word tokens to get a sentence vector, but we never suggested that this will generate meaningful sentence representations. And even if they are decent representations when fed into a DNN trained for a downstream task, it doesn't mean that they will be meaningful in terms of cosine distance. (Since cosine distance is a linear space where all dimensions are weighted equally). (<a href="https://github.com/google-research/bert/issues/164#issuecomment-441324222" rel="nofollow noreferrer">https://github.com/google-research/bert/issues/164#issuecomment-441324222</a>)</p>
</blockquote>
<blockquote>
<p><strong>From Sentence-BERT paper:</strong> The results show that directly using the output of BERT leads to rather poor performances. Averaging the BERT embeddings achieves an average correlation of only 54.81, and using the CLS token output only achieves an average correlation of 29.19. Both are worse than computing average GloVe embeddings. (<a href="https://arxiv.org/pdf/1908.10084.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1908.10084.pdf</a>)</p>
</blockquote>
<p><strong>You should use instead a model pre-trained specifically for sentence similarity</strong>, such as Sentence-BERT. Sentence-BERT and several other pretrained models for sentence similarity are available in the sentence-transformers library (<a href="https://www.sbert.net/docs/pretrained_models.html" rel="nofollow noreferrer">https://www.sbert.net/docs/pretrained_models.html</a>), which is fully compatible with the amazing HuggingFace transformers library. With these libraries, you can obtain sentence embeddings in just a line of code.</p> | 2022-02-28 19:41:37.090000+00:00 | 2022-02-28 19:41:37.090000+00:00 | null | null | 60,492,839 | <p>I am using the HuggingFace Transformers package to access pretrained models. As my use case needs functionality for both English and Arabic, I am using the <a href="https://github.com/google-research/bert/blob/master/multilingual.md" rel="noreferrer">bert-base-multilingual-cased</a> pretrained model. I need to be able to compare the similarity of sentences using something such as cosine similarity. To use this, I first need to get an embedding vector for each sentence, and can then compute the cosine similarity.</p>
<p>Firstly, what is the best way to extract the semantic embedding from the BERT model? Would taking the last hidden state of the model after being fed the sentence suffice?</p>
<pre><code>import torch
from transformers import BertModel, BertTokenizer
model_class = BertModel
tokenizer_class = BertTokenizer
pretrained_weights = 'bert-base-multilingual-cased'
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights)
sentence = 'this is a test sentence'
input_ids = torch.tensor([tokenizer.encode(sentence, add_special_tokens=True)])
with torch.no_grad():
output_tuple = model(input_ids)
last_hidden_states = output_tuple[0]
print(last_hidden_states.size(), last_hidden_states)
</code></pre>
<p>Secondly, if this is a sufficient way to get embeddings from my sentence, I now have another problem where the embedding vectors have different lengths depending on the length of the original sentence. The shapes output are <code>[1, n, vocab_size]</code>, where <code>n</code> can have any value. </p>
<p>In order to compute two vectors' cosine similarity, they need to be the same length. How can I do this here? Could something as naive as first summing across <code>axis=1</code> still work? What other options do I have? </p> | 2020-03-02 16:20:07.080000+00:00 | 2022-06-29 14:38:24.143000+00:00 | 2020-03-02 16:25:55.783000+00:00 | python|vector|nlp|cosine-similarity|huggingface-transformers | ['https://github.com/google-research/bert/issues/164#issuecomment-441324222', 'https://arxiv.org/pdf/1908.10084.pdf', 'https://www.sbert.net/docs/pretrained_models.html'] | 3 |
60,504,075 | <p>In addition to an already great accepted answer, I want to point you to <a href="https://arxiv.org/abs/1908.10084" rel="noreferrer"><code>sentence-BERT</code></a>, which discusses the similarity aspect and implications of specific metrics (like cosine similarity) in greater detail.
They also have a <a href="https://github.com/UKPLab/sentence-transformers" rel="noreferrer">very convenient implementation</a> online. The main advantage here is that they seemingly gain a lot of processing speed compared to a "naive" sentence embedding comparison, but I am not familiar enough with the implementation itself.</p>
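<p>For illustration, a minimal sketch of how that implementation is typically used (the model name is just one of the pretrained multilingual options from their documentation, and the exact API may differ slightly between versions):</p>
<pre><code>from sentence_transformers import SentenceTransformer, util

# A multilingual model, since the question needs both English and Arabic
model = SentenceTransformer('distiluse-base-multilingual-cased')
embeddings = model.encode(['this is a test sentence', 'this is another test sentence'])
score = util.pytorch_cos_sim(embeddings[0], embeddings[1])  # cosine similarity
print(score)
</code></pre>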
<p>Importantly, there is also generally a more fine-grained distinction in <em>what kind of similarity</em> you want to look at. Specifically for that, there is also a great discussion in one of the <a href="https://www.aclweb.org/anthology/S14-2001.pdf" rel="noreferrer">task papers</a> from SemEval 2014 (SICK dataset), which goes into more detail about this. From your task description, I am assuming that you are already using data from one of the later SemEval tasks, which also extended this to multilingual similarity.</p> | 2020-03-03 09:31:58.820000+00:00 | 2020-03-03 09:31:58.820000+00:00 | null | null | 60,492,839 | <p>I am using the HuggingFace Transformers package to access pretrained models. As my use case needs functionality for both English and Arabic, I am using the <a href="https://github.com/google-research/bert/blob/master/multilingual.md" rel="noreferrer">bert-base-multilingual-cased</a> pretrained model. I need to be able to compare the similarity of sentences using something such as cosine similarity. To use this, I first need to get an embedding vector for each sentence, and can then compute the cosine similarity.</p>
<p>Firstly, what is the best way to extract the semantic embedding from the BERT model? Would taking the last hidden state of the model after being fed the sentence suffice?</p>
<pre><code>import torch
from transformers import BertModel, BertTokenizer
model_class = BertModel
tokenizer_class = BertTokenizer
pretrained_weights = 'bert-base-multilingual-cased'
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights)
sentence = 'this is a test sentence'
input_ids = torch.tensor([tokenizer.encode(sentence, add_special_tokens=True)])
with torch.no_grad():
output_tuple = model(input_ids)
last_hidden_states = output_tuple[0]
print(last_hidden_states.size(), last_hidden_states)
</code></pre>
<p>Secondly, if this is a sufficient way to get embeddings from my sentence, I now have another problem where the embedding vectors have different lengths depending on the length of the original sentence. The shapes output are <code>[1, n, vocab_size]</code>, where <code>n</code> can have any value. </p>
<p>In order to compute two vectors' cosine similarity, they need to be the same length. How can I do this here? Could something as naive as first summing across <code>axis=1</code> still work? What other options do I have? </p> | 2020-03-02 16:20:07.080000+00:00 | 2022-06-29 14:38:24.143000+00:00 | 2020-03-02 16:25:55.783000+00:00 | python|vector|nlp|cosine-similarity|huggingface-transformers | ['https://arxiv.org/abs/1908.10084', 'https://github.com/UKPLab/sentence-transformers', 'https://www.aclweb.org/anthology/S14-2001.pdf'] | 3 |
60,493,083 | <p>You can use the <code>[CLS]</code> token as a representation for the entire sequence. This token is typically prepended to your sentence during the preprocessing step and is typically used for classification tasks (see figure 2 and paragraph 3.2 in the <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="noreferrer">BERT paper</a>).</p>
<p>It is the very first token of the embedding.</p>
<p>Alternatively you can take the average vector of the sequence (like you say over the first(?) axis), which can yield better results according to the <a href="https://huggingface.co/transformers/model_doc/bert.html" rel="noreferrer">huggingface documentation</a> (3rd tip).</p>
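<p>A minimal sketch of that averaging approach, building on the snippet from the question (the attention-mask weighting is a common refinement rather than something prescribed by BERT itself, and it assumes a transformers version with a callable tokenizer):</p>
<pre><code>import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained('bert-base-multilingual-cased')

def embed(sentence):
    inputs = tokenizer(sentence, return_tensors='pt')
    with torch.no_grad():
        last_hidden_states = model(**inputs)[0]        # [1, n, hidden_size]
    mask = inputs['attention_mask'].unsqueeze(-1)      # ignore padding positions
    return (last_hidden_states * mask).sum(dim=1) / mask.sum(dim=1)  # [1, hidden_size]

a = embed('this is a test sentence')
b = embed('this is another test sentence')
print(torch.nn.functional.cosine_similarity(a, b))     # fixed length regardless of n
</code></pre>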
<p>Note that BERT was not designed for sentence similarity using the cosine distance, though in my experience it does yield decent results. </p> | 2020-03-02 16:36:36.230000+00:00 | 2020-03-02 16:36:36.230000+00:00 | null | null | 60,492,839 | <p>I am using the HuggingFace Transformers package to access pretrained models. As my use case needs functionality for both English and Arabic, I am using the <a href="https://github.com/google-research/bert/blob/master/multilingual.md" rel="noreferrer">bert-base-multilingual-cased</a> pretrained model. I need to be able to compare the similarity of sentences using something such as cosine similarity. To use this, I first need to get an embedding vector for each sentence, and can then compute the cosine similarity.</p>
<p>Firstly, what is the best way to extract the semantic embedding from the BERT model? Would taking the last hidden state of the model after being fed the sentence suffice?</p>
<pre><code>import torch
from transformers import BertModel, BertTokenizer
model_class = BertModel
tokenizer_class = BertTokenizer
pretrained_weights = 'bert-base-multilingual-cased'
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights)
sentence = 'this is a test sentence'
input_ids = torch.tensor([tokenizer.encode(sentence, add_special_tokens=True)])
with torch.no_grad():
output_tuple = model(input_ids)
last_hidden_states = output_tuple[0]
print(last_hidden_states.size(), last_hidden_states)
</code></pre>
<p>Secondly, if this is a sufficient way to get embeddings from my sentence, I now have another problem where the embedding vectors have different lengths depending on the length of the original sentence. The shapes output are <code>[1, n, vocab_size]</code>, where <code>n</code> can have any value. </p>
<p>In order to compute two vectors' cosine similarity, they need to be the same length. How can I do this here? Could something as naive as first summing across <code>axis=1</code> still work? What other options do I have? </p> | 2020-03-02 16:20:07.080000+00:00 | 2022-06-29 14:38:24.143000+00:00 | 2020-03-02 16:25:55.783000+00:00 | python|vector|nlp|cosine-similarity|huggingface-transformers | ['https://arxiv.org/pdf/1810.04805.pdf', 'https://huggingface.co/transformers/model_doc/bert.html'] | 2 |
46,655,895 | <p>From the recent Deep Learning book by Goodfellow et al., <a href="http://www.deeplearningbook.org/contents/optimization.html" rel="noreferrer">chapter 8</a>:</p>
<blockquote>
<p>Minibatch sizes are generally driven by the following factors:</p>
<ul>
<li>Larger batches provide a more accurate estimate of the gradient, but
with less than linear returns.</li>
<li>Multicore architectures are usually
underutilized by extremely small batches. This motivates using some
absolute minimum batch size, below which there is no reduction in the
time to process a minibatch.</li>
<li>If all examples in the batch are to be
processed in parallel (as is typically the case), then the amount of
memory scales with the batch size. For many hardware setups this is
the limiting factor in batch size.</li>
<li>Some kinds of hardware achieve
better runtime with specific sizes of arrays. Especially when using
GPUs, it is common for power of 2 batch sizes to offer better runtime.
Typical power of 2 batch sizes range from 32 to 256, with 16 sometimes
being attempted for large models.</li>
<li>Small batches can offer a
regularizing effect (Wilson and Martinez, 2003), perhaps due to the
noise they add to the learning process. Generalization error is often
best for a batch size of 1. Training with such a small batch size
might require a small learning rate to maintain stability because of
the high variance in the estimate of the gradient. The total runtime
can be very high as a result of the need to make more steps, both
because of the reduced learning rate and because it takes more steps
to observe the entire training set.</li>
</ul>
</blockquote>
<p>Which in practice usually means "<em>in powers of 2 and the larger the better, provided that the batch fits into your (GPU) memory</em>".</p>
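<p>If you just want a practical, empirical answer to "the largest batch that fits", one option is to probe power-of-2 sizes and stop at the first out-of-memory error. A rough sketch (assuming a <code>build_model()</code> factory that returns a freshly compiled Keras model, and TensorFlow as the backend):</p>
<pre><code>import tensorflow as tf

def largest_fitting_batch_size(build_model, x_train, y_train, start=32, limit=4096):
    """Try power-of-2 batch sizes until the GPU runs out of memory."""
    best = None
    batch_size = start
    while batch_size <= limit:
        try:
            model = build_model()            # fresh model for every attempt
            model.fit(x_train, y_train, batch_size=batch_size, epochs=1, verbose=0)
            best = batch_size                # this size still fits
            batch_size *= 2
        except tf.errors.ResourceExhaustedError:
            break                            # OOM: keep the last size that worked
        finally:
            tf.keras.backend.clear_session() # release GPU memory between attempts
    return best
</code></pre>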
<p>You might also want to consult several good posts here on Stack Exchange:</p>
<ul>
<li><a href="https://stats.stackexchange.com/questions/164876/tradeoff-batch-size-vs-number-of-iterations-to-train-a-neural-network">Tradeoff batch size vs. number of iterations to train a neural network</a></li>
<li><a href="https://stackoverflow.com/questions/40535679/selection-of-mini-batch-size-for-neural-network-regression">Selection of Mini-batch Size for Neural Network Regression</a></li>
<li><a href="https://stats.stackexchange.com/questions/140811/how-large-should-the-batch-size-be-for-stochastic-gradient-descent">How large should the batch size be for stochastic gradient descent?</a></li>
</ul>
<p>Just keep in mind that the paper by Keskar et al. '<a href="https://arxiv.org/abs/1609.04836" rel="noreferrer">On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima</a>', quoted by several of the posts above, has received <a href="https://arxiv.org/abs/1703.04933" rel="noreferrer">some objections</a> by other respectable researchers of the deep learning community.</p>
<p>Hope this helps...</p>
<p><strong>UPDATE</strong> (Dec 2017):</p>
<p>There is a new paper by Yoshua Bengio & team, <a href="https://arxiv.org/abs/1711.04623" rel="noreferrer">Three Factors Influencing Minima in SGD</a> (Nov 2017); it is worth reading in the sense that it reports new theoretical & experimental results on the interplay between learning rate and batch size.</p>
<p><strong>UPDATE</strong> (Mar 2021):</p>
<p>Of interest here is also another paper from 2018, <a href="https://arxiv.org/abs/1804.07612" rel="noreferrer">Revisiting Small Batch Training for Deep Neural Networks</a> (h/t to Nicolas Gervais), which runs contrary to <em>the larger the better</em> advice; quoting from the abstract:</p>
<blockquote>
<p>The best performance has been consistently obtained for mini-batch sizes between m=2 and m=32, which contrasts with recent work advocating the use of mini-batch sizes in the thousands.</p>
</blockquote> | 2017-10-09 22:27:55.310000+00:00 | 2021-03-21 10:54:24.937000+00:00 | 2021-03-21 10:54:24.937000+00:00 | null | 46,654,424 | <p>Sometimes I run into a problem:</p>
<pre><code>OOM when allocating tensor with shape
</code></pre>
<p>e.g.</p>
<pre><code>OOM when allocating tensor with shape (1024, 100, 160)
</code></pre>
<p>Here 1024 is my batch size, and I don't know what the rest is. If I reduce the batch size or the number of neurons in the model, it runs fine.</p>
<p>Is there a generic way to calculate optimal batch size based on model and GPU memory, so the program doesn't crash?</p>
<p>In short: I want the largest batch size possible in terms of my model, which will fit into my GPU memory and won't crash the program.</p> | 2017-10-09 20:25:09.803000+00:00 | 2022-07-18 21:18:47.933000+00:00 | 2022-07-18 21:18:47.933000+00:00 | machine-learning|neural-network|deep-learning|keras|gradient-descent | ['http://www.deeplearningbook.org/contents/optimization.html', 'https://stats.stackexchange.com/questions/164876/tradeoff-batch-size-vs-number-of-iterations-to-train-a-neural-network', 'https://stackoverflow.com/questions/40535679/selection-of-mini-batch-size-for-neural-network-regression', 'https://stats.stackexchange.com/questions/140811/how-large-should-the-batch-size-be-for-stochastic-gradient-descent', 'https://arxiv.org/abs/1609.04836', 'https://arxiv.org/abs/1703.04933', 'https://arxiv.org/abs/1711.04623', 'https://arxiv.org/abs/1804.07612'] | 8 |
53,115,509 | <p>For your first question, you are talking about image style transfer. In that case, <a href="https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf" rel="nofollow noreferrer">CNNs</a> may help you.</p>
<p>For the second, if I understand correctly, by growing you mean introducing variations in the image patch while keeping it realistic. If that's the goal, you may use <a href="https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf" rel="nofollow noreferrer">GANs</a> for generating images, provided you have a reasonable sized dataset to train with:</p>
<p><a href="https://arxiv.org/pdf/1803.04469.pdf" rel="nofollow noreferrer">Image Synthesis with GANs</a></p>
<p>Intuitively, conditional GANs model the joint distribution of the input dataset (which in your case, are images you want to imitate) and can draw new samples (images) from the learned distribution, thereby allowing you to create more images having similar content.</p>
<p><a href="https://github.com/phillipi/pix2pix" rel="nofollow noreferrer">Pix2Pix</a> is the open-source code of a <a href="https://arxiv.org/abs/1611.07004" rel="nofollow noreferrer">well-known paper</a> that you can play around to generate images. Specifically, let X be your input image and Y be a target image. You can train the network and feed X to observe the output O of the generator. Thereafter, by tweaking the architecture a bit or by changing the skip connections (read the paper) train again and you can generate variety in the output images O.</p>
<p><a href="https://bair.berkeley.edu/blog/2018/03/13/mcgan/" rel="nofollow noreferrer">Font Style Transfer</a> is an interesting experiment with text on images (rather than image on image, as in your case).</p> | 2018-11-02 09:03:24.330000+00:00 | 2018-11-02 10:38:30.363000+00:00 | 2018-11-02 10:38:30.363000+00:00 | null | 53,113,656 | <p><a href="https://i.stack.imgur.com/NwVZx.png" rel="nofollow noreferrer">Image showing required content transfer between source and target images</a></p>
<p>Essentially is there a way to grow an image patch defined by mask and keep it realistic?</p> | 2018-11-02 06:30:24.900000+00:00 | 2018-11-02 10:38:30.363000+00:00 | null | image-processing|dynamic-image-generation | ['https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf', 'https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf', 'https://arxiv.org/pdf/1803.04469.pdf', 'https://github.com/phillipi/pix2pix', 'https://arxiv.org/abs/1611.07004', 'https://bair.berkeley.edu/blog/2018/03/13/mcgan/'] | 6 |
65,718,728 | <p>You cannot easily upsample as this is a multilabel case (what I've missed from the post originally).</p>
<p>What <strong>you can do</strong> is give <code>1</code> way higher weights, something like this:</p>
<pre><code>import torch
class BCEWithLogitsLossWeighted(torch.nn.Module):
def __init__(self, weight, *args, **kwargs):
super().__init__()
# Notice none reduction
self.bce = torch.nn.BCEWithLogitsLoss(*args, **kwargs, reduction="none")
self.weight = weight
def forward(self, logits, labels):
loss = self.bce(logits, labels)
binary_labels = labels.bool()
loss[binary_labels] *= labels[binary_labels] * self.weight
# Or any other reduction
return torch.mean(loss)
loss = BCEWithLogitsLossWeighted(50)
logits = torch.randn(64, 512)
labels = torch.randint(0, 2, size=(64, 512)).float()
print(loss(logits, labels))
</code></pre>
<p>Also you can use <a href="https://arxiv.org/abs/1708.02002" rel="nofollow noreferrer">FocalLoss</a> to focus on positive examples (there should be some implementations available in some libraries).</p>
<p><strong>EDIT:</strong></p>
<p>Focal Loss can be coded something along those lines also (functional form cause that's what I have in repo, but you should be able to work from that):</p>
<pre><code>import typing

import torch

def binary_focal_loss(
outputs: torch.Tensor,
targets: torch.Tensor,
gamma: float,
weight=None,
pos_weight=None,
reduction: typing.Callable[[torch.Tensor], torch.Tensor] = None,
) -> torch.Tensor:
probabilities = (1 - torch.sigmoid(outputs)) ** gamma
loss = probabilities * torch.nn.functional.binary_cross_entropy_with_logits(
outputs,
targets.float(),
weight,
reduction="none",
pos_weight=pos_weight,
)
    # Note: pass an explicit reduction (e.g. torch.mean); the default of None is not callable
    return reduction(loss)
</code></pre> | 2021-01-14 12:11:55.757000+00:00 | 2021-01-14 12:43:35.880000+00:00 | 2021-01-14 12:43:35.880000+00:00 | null | 65,718,296 | <p>I am building multi label classification network.
My GTs are vectors of length <code>512</code> <code>[0,0,0,1,0,1,0,...,0,0,0,1]</code>
Most of the time they are <code>zeroes</code>: each vector has about <code>5 ones</code>, and the rest are zeros.</p>
<p>I am thinking to do:</p>
<p>Use <code>sigmoid</code> for activation for output layer.</p>
<p>Use <code>binary_crossentropy</code> for loss function.</p>
<p>But how can I solve the imbalance issue?
The network can learn to predict <code>always zeros</code> and still have a really low training loss.</p>
<p>How I can make it actually learn to predict ones...</p> | 2021-01-14 11:40:00.057000+00:00 | 2021-01-14 12:43:35.880000+00:00 | null | python|tensorflow|neural-network|pytorch|multilabel-classification | ['https://arxiv.org/abs/1708.02002'] | 1 |
67,571,216 | <p>In fact, there isn't always one best way to fill missing values.
Here are some methods used in Python to fill values in time series: <a href="https://stackoverflow.com/questions/49308530/missing-values-in-time-series-in-python">missing-values-in-time-series-in-python</a></p>
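<p>As a small illustration, here is a minimal pandas sketch of two common fills (the column names and index are made up for illustration, not taken from the question):</p>
<pre><code>import numpy as np
import pandas as pd

# Toy yearly data with gaps at both ends, just to show the mechanics
df = pd.DataFrame(
    {"A": [1.0, 2.0, np.nan, np.nan], "E": [np.nan, 5.0, 6.0, 7.0]},
    index=[2013, 2014, 2015, 2016],
)

filled = df.interpolate(method="linear")  # fill gaps between known values
filled = filled.ffill().bfill()           # then fill leading/trailing gaps
print(filled)
</code></pre>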
<p>Filling missing values a.k.a imputation is a well-studied topic in computer science and statistics.</p>
<p>Previously, we used to impute data with mean values regardless of data type. A big problem that mean imputation (or any other constant imputation) triggers is that it distorts the shape of the time series.</p>
<p>Later, autoregressive (AR) and moving average (MA) models used for modeling time series were also applied to imputation. These methods have a strong theoretical basis <a href="https://online.stat.psu.edu/stat510/" rel="nofollow noreferrer">STAT510</a> and are used to forecast/impute time series.</p>
<p>Matrix Factorization is another important method, such as TRMF, SVD, PCA. A recent benchmark about MF imputation was published in PVLDB.<a href="http://www.vldb.org/pvldb/vol13/p768-khayati.pdf" rel="nofollow noreferrer">Mind the Gap: An Experimental Evaluation of Imputation of Missing Values Techniques in Time Series</a>.</p>
<p>Besides, there are other machine/deep learning methods proposed recently. There is a survey about imputation methods used in time series<a href="https://arxiv.org/abs/2011.11347" rel="nofollow noreferrer">Time Series Data Imputation: A Survey on Deep Learning Approaches</a>, which may help you a lot. However, the methods mentioned in this survey are not accurate enough.</p>
<p>Back to your question, MICE is just a framework where you can use any regression algorithms. It assumes that different columns(A, B, C, and E, F) are correlated.</p>
<p><strong>Forecasting and imputation are the same by nature.</strong> You can think that forecasting is a special case of imputation without succeeding data.</p>
<p>You'd better try more imputation methods to find the best one.</p> | 2021-05-17 14:00:33.303000+00:00 | 2021-05-17 14:07:44.180000+00:00 | 2021-05-17 14:07:44.180000+00:00 | null | 59,990,884 | <p>For the first time, I am trying to work on a case study using python for continuous dataframe, which is the time series data of properties during the period 2006-2016</p>
<p>But I have missing values for the year 2015-16 in columns A,B,C,D and 2006-07 in E and F columns.
I am trying to impute the values and fill the data.</p>
<p><a href="https://i.stack.imgur.com/Ngp6s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ngp6s.png" alt="**DataFrame**"></a></p>
<p>I have tried MICE and interpolation but am not sure if it's even correct or not. Which method should I apply, and how do I apply it in Python?
I have gone through links: </p>
<p><a href="https://www.theanalysisfactor.com/seven-ways-to-make-up-data-common-methods-to-imputing-missing-data/" rel="nofollow noreferrer">https://www.theanalysisfactor.com/seven-ways-to-make-up-data-common-methods-to-imputing-missing-data/</a>
<a href="https://www.researchgate.net/post/What_is_a_reliable_method_of_dealing_with_missing_data_in_time_series_records" rel="nofollow noreferrer">https://www.researchgate.net/post/What_is_a_reliable_method_of_dealing_with_missing_data_in_time_series_records</a></p>
<p>Should I be using forecasting method instead of imputation to fill the data?</p>
<p>Please help.</p> | 2020-01-30 16:56:36.327000+00:00 | 2021-05-17 14:07:44.180000+00:00 | null | python|pandas|dataframe|time-series|missing-data | ['https://stackoverflow.com/questions/49308530/missing-values-in-time-series-in-python', 'https://online.stat.psu.edu/stat510/', 'http://www.vldb.org/pvldb/vol13/p768-khayati.pdf', 'https://arxiv.org/abs/2011.11347'] | 4 |
54,332,916 | <p>The performance of a neural network on one dataset will not generally be the same as its performance on another. Images in one dataset can be more difficult to distinguish than those in another. As a rule of thumb: if your landmark datasets are similar, it's likely that performance will be similar. However, this is not always the case: subtle differences between the datasets can result in significantly different performance.</p>
<p>You can account for the potentially different performance on the two datasets by training another network on the other dataset. This will give you a baseline of what to expect when you try to generalize your network to it.</p>
<p>You can apply your neural network trained for one set of classes to another set of classes. There are two main approaches to this:</p>
<ul>
<li><a href="https://arxiv.org/abs/1808.01974" rel="nofollow noreferrer">Transfer learning</a>. This is where the last layer of your trained network is replaced with a new layer(s) that is trained, by itself, to classify the new images. (Use for many classes. Can use for few classes.)</li>
<li><a href="https://arxiv.org/abs/1711.04450" rel="nofollow noreferrer">All-Transfer learning</a>. Rather than replacing the last layer, add a new layer after it and only train the final layers. (Use for few classes.)</li>
</ul>
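<p>For the first approach, a minimal PyTorch sketch might look like the following (assuming torchvision's ResNet-18 as the backbone; the class count of 20 is just a placeholder):</p>
<pre><code>import torch.nn as nn
from torchvision import models

num_new_classes = 20                         # placeholder for your landmark classes

model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False              # freeze the pre-trained layers

# Replace the last layer with a new, trainable classification head
model.fc = nn.Linear(model.fc.in_features, num_new_classes)
</code></pre>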
<p>Both approaches are much quicker than training a neural network from scratch.</p> | 2019-01-23 17:47:13.310000+00:00 | 2019-01-23 17:47:13.310000+00:00 | null | null | 54,326,023 | <p>I am new to deep learning, and I am doing a research using CNNs. I need to train a CNN model on a dataset of images (landmark images) and test the same model using a different dataset (landmark images too). One of the motivations is to see the ability of the model to generalize. But the problems is: Since the dataset used for train and test is not the same, the classes are not the same! Possibly, the number of classes too, which means that the predictions made on the test dataset are not trust worthy (Since the weights of the output layer have been calculated based on different classes belonging to train dataset). Is there any way to evaluate a model on a different dataset without affecting test accuracy? </p> | 2019-01-23 11:17:32.077000+00:00 | 2021-07-08 01:50:56.767000+00:00 | null | tensorflow|keras|deep-learning|dataset|conv-neural-network | ['https://arxiv.org/abs/1808.01974', 'https://arxiv.org/abs/1711.04450'] | 2 |
21,151,790 | <p>Actually, for giving you a proper answer, I'd be happy to know some details of your task and your data. Face Recognition is a non-trivial problem and there is no general solution for all sorts of image acquisition.</p>
<p>First of all, you should define how many sources of variation (posing, emotions, illumination, occlusions or time-lapse) you have in your sample and testing sets. Then you should choose an appropriate algorithm and, very importantly, preprocessing steps according to the types.</p>
<p>If you don't have any significant variations, then for a small training set it is a good idea to consider one of the <a href="https://ieeexplore.ieee.org/document/5605541" rel="nofollow noreferrer">Discrete Orthogonal Moments</a> as a feature extraction method. They have a very strong ability to extract features without redundancy. Some of them (Hahn, Racah moments) can also work in two modes - local and global feature extraction. The topic is relatively new and there are still few articles about it, although these moments are thought to become a very powerful tool in Image Recognition. They can be computed in near real-time by using recurrence relationships. For more information, have a look <a href="https://ieeexplore.ieee.org/document/941859" rel="nofollow noreferrer">here</a> and <a href="http://www.researchgate.net/publication/5606888_Image_analysis_by_Krawtchouk_moments/file/72e7e52a11c6fd7f2b.pdf" rel="nofollow noreferrer">here</a>.</p>
<p>If the pose of the individuals significantly varies, you may try to perform firstly pose correction by <a href="https://link.springer.com/chapter/10.1007/978-3-540-79547-6_51" rel="nofollow noreferrer">Active Appearance Model</a>.</p>
<p>If there are lots of occlusions (glasses, hats) then using one of the <a href="https://arxiv.org/ftp/arxiv/papers/0907/0907.4984.pdf" rel="nofollow noreferrer">local feature extractors</a> may help.</p>
<p>If there is a significant time lapse between train and probe images, the local features of the faces could change over the age, then it's a good option to try one of the algorithms which use <a href="https://ieeexplore.ieee.org/document/5634496" rel="nofollow noreferrer">graphs for face representation</a> so as to keep the face topology.</p>
<p>I believe that non of the above are implemented in OpenCV, but for some of them you can find <a href="https://www.mathworks.com/matlabcentral/fileexchange/26706-active-shape-model-asm-and-active-appearance-model-aam" rel="nofollow noreferrer">MATLAB implementation</a>.</p>
<p>I'm not native speaker as well, so sorry for the grammar</p> | 2014-01-16 01:30:04.757000+00:00 | 2022-05-06 11:01:39.630000+00:00 | 2022-05-06 11:01:39.630000+00:00 | null | 21,148,837 | <p>Can anyone advise me way to build effective face classifier that may be able to classify many different faces (~1000)?</p>
<p>And i have only 1-5 examples of each face</p>
<p>I know about the OpenCV face classifier, but it works badly for my task (many classes, few samples).
It works alright for single-face classification with a small number of samples. But I think that 1k separate classifiers is not a good idea.</p>
<p>I read a few articles about face recognition, but the methods from these articles require a lot of samples of each class to work.</p>
<p>PS Sorry for my writing mistakes. English in not my native language.</p> | 2014-01-15 21:42:12.330000+00:00 | 2022-05-06 11:01:39.630000+00:00 | 2014-01-16 08:39:52.353000+00:00 | opencv|machine-learning|computer-vision|face-recognition | ['https://ieeexplore.ieee.org/document/5605541', 'https://ieeexplore.ieee.org/document/941859', 'http://www.researchgate.net/publication/5606888_Image_analysis_by_Krawtchouk_moments/file/72e7e52a11c6fd7f2b.pdf', 'https://link.springer.com/chapter/10.1007/978-3-540-79547-6_51', 'https://arxiv.org/ftp/arxiv/papers/0907/0907.4984.pdf', 'https://ieeexplore.ieee.org/document/5634496', 'https://www.mathworks.com/matlabcentral/fileexchange/26706-active-shape-model-asm-and-active-appearance-model-aam'] | 7 |
57,397,771 | <p>The problem is closely related to <a href="https://en.wikipedia.org/wiki/Sphere_packing_in_a_sphere" rel="nofollow noreferrer">packing identical spheres into a unit sphere</a> (maybe the two problems are even equivalent): having a solution of a packing of <code>n</code> spheres with radius <code>r</code> into a unit sphere, all sphere centers are inside a sphere with radius <code>1-r</code> and have a distance of at least <code>2r</code>. So a solution for packing identical spheres into a unit sphere can easily be transformed to a solution to your problem.</p>
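<p>For illustration, the transformation is just a rescaling of the packing's sphere centres; a small sketch (NumPy is used only for brevity):</p>
<pre><code>import numpy as np

def rescale_packing(centers, r, c, R):
    """Map centres of a packing of equal spheres (radius r) inside the unit sphere
    onto points inside a sphere with centre c and radius R.
    The resulting points have pairwise distance >= 2*r*R/(1-r)."""
    centers = np.asarray(centers, dtype=float)
    return np.asarray(c, dtype=float) + centers * (R / (1.0 - r))
</code></pre>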
<p>Proven optimal solutions for packing identical spheres into a sphere only exist up to <code>n=12</code>. So I guess you will also have to live with near-optimal solutions, at least for <code>n>12</code>. The currently best known algorithm in terms of optimality seems to be
<a href="https://arxiv.org/abs/1202.4149" rel="nofollow noreferrer">Serial Symmetrical Relocation Algorithm for the Equal Sphere Packing Problem</a>.</p> | 2019-08-07 15:26:11.620000+00:00 | 2019-08-12 15:29:04.880000+00:00 | 2019-08-12 15:29:04.880000+00:00 | null | 57,360,792 | <p>If I have a sphere with center (x,y,z) and radius r, do an algorithm exist for placing X number of points (x,y,z) inside the sphere in such as way that the minimal distance to each other point is maximized?</p>
<p>E.g. one points would simply be placed in the middle, two points would be placed on the opposite borders, three points would be placed in a "triangle" formation on the border and so on.</p>
<p>As have been pointed out, an equal distance to each other point cannot be found for every number of points. The requirement is therefore to maximize the minimal distance between the points.</p>
<p><strong>Update:</strong></p>
<p>The following code generates points inside a sphere with radius 1.0, but does not maximize the distance.</p>
<pre><code>function getPoint() {
var d, x, y, z;
do {
x = Math.random() * 2.0 - 1.0;
y = Math.random() * 2.0 - 1.0;
z = Math.random() * 2.0 - 1.0;
d = x*x + y*y + z*z;
} while(d > 1.0);
return {x: x, y: y, z: z};
}
</code></pre>
<p>I think I need some kind of iteration afterwards. I have tried to apply a force model using the n-body problem as inspiration, and while fun to watch, it didn't really work that well.</p> | 2019-08-05 14:35:37.050000+00:00 | 2020-07-14 10:32:24.490000+00:00 | 2019-08-07 11:15:19.500000+00:00 | algorithm|3d|geometry | ['https://en.wikipedia.org/wiki/Sphere_packing_in_a_sphere', 'https://arxiv.org/abs/1202.4149'] | 2 |
49,784,891 | <p>An excellent recent study comparing several of the most modern techniques for counting the <strong><em>number of 'set' (1-valued) bits</em></strong> in a range of memory (<em>aka</em> <a href="https://en.wikipedia.org/wiki/Hamming_weight" rel="nofollow noreferrer">Hamming Weight</a>, <em>bitset cardinality</em>, <em>sideways sum</em>, <em>population count</em> or <a href="https://software.intel.com/en-us/node/679649" rel="nofollow noreferrer"><code>popcnt</code></a>, <em>etc.</em>) can be found in Wojciech, Kurz, and Lemire (2017), <a href="https://arxiv.org/pdf/1611.07612.pdf" rel="nofollow noreferrer"><em>Faster population counts using AVX2 instructions</em></a><sup> <a href="https://i.stack.imgur.com/69xAm.png" rel="nofollow noreferrer">1</a></sup></p>
<p>The following is a complete, tested, and fully-working <strong>C#</strong> adaptation of the "Harley-Seal" algorithm from that paper, which the authors found to be the fastest method that uses general-purpose bitwise operations (that is, that doesn't require special hardware).</p>
<p><strong>1. Managed array entry points</strong><br>(optional) Provides access to the block-optimized bit-counting for managed array <code>ulong[]</code>.</p>
<pre><code>/// <summary> Returns the total number of 1-valued bits in the array </summary>
[DebuggerStepThrough]
public static int OnesCount(ulong[] rg) => OnesCount(rg, 0, rg.Length);
/// <summary> Finds the total number of '1' bits in an array or its subset </summary>
/// <param name="rg"> Array of ulong values to scan </param>
/// <param name="index"> Starting index in the array </param>
/// <param name="count"> Number of ulong values to examine, starting at 'i' </param>
public static int OnesCount(ulong[] rg, int index, int count)
{
if ((index | count) < 0 || index > rg.Length - count)
throw new ArgumentException();
fixed (ulong* p = &rg[index])
return OnesCount(p, count);
}
</code></pre>
<p><strong>2. Scalar API</strong><br>Used by the block-optimized counter to aggregate results from the carry-save adder, and also to finish up any remainder for block sizes not divisible by the optimized chunk size of 16 x 8 bytes/ulong = 128 bytes. Suitable for general-purpose use also.</p>
<pre><code>/// <summary> Finds the Hamming Weight or ones-count of a ulong value </summary>
/// <returns> The number of 1-bits that are set in 'x' </returns>
public static int OnesCount(ulong x)
{
x -= (x >> 1) & 0x5555555555555555;
x = ((x >> 2) & 0x3333333333333333) + (x & 0x3333333333333333);
return (int)((((x + (x >> 4)) & 0x0F0F0F0F0F0F0F0F) * 0x0101010101010101) >> 56);
}
</code></pre>
<p><strong>3. <em>"Harley-Seal"</em> block-optimized 1s-bit counter</strong><br>Processes blocks of 128 bytes at a time, i.e., 16 <code>ulong</code> values per block. Uses the carry-save adder (shown below) to gang-add single bits across adjacent <code>ulong</code>s, and aggregates totals upwards as powers of two.</p>
<pre><code>/// <summary> Count the number of 'set' (1-valued) bits in a range of memory. </summary>
/// <param name="p"> Pointer to an array of 64-bit ulong values to scan </param>
/// <param name="c"> Size of the memory block as a count of 64-bit ulongs </param>
/// <returns> The total number of 1-bits </returns>
public static int OnesCount(ulong* p, int c)
{
ulong z, y, x, w;
int c = 0;
for (w = x = y = z = 0UL; cq >= 16; cq -= 16)
c += OnesCount(CSA(ref w,
CSA(ref x,
CSA(ref y,
CSA(ref z, *p++, *p++),
CSA(ref z, *p++, *p++)),
CSA(ref y,
CSA(ref z, *p++, *p++),
CSA(ref z, *p++, *p++))),
CSA(ref x,
CSA(ref y,
CSA(ref z, *p++, *p++),
CSA(ref z, *p++, *p++)),
CSA(ref y,
CSA(ref z, *p++, *p++),
CSA(ref z, *p++, *p++)))));
c <<= 4;
c += (OnesCount(w) << 3) + (OnesCount(x) << 2) + (OnesCount(y) << 1) + OnesCount(z);
while (--cq >= 0)
c += OnesCount(*p++);
return c;
}
</code></pre>
<p><strong>4. Carry-save adder (CSA)</strong></p>
<pre><code>/// <summary> carry-save adder </summary>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
static ulong CSA(ref ulong a, ulong b, ulong c)
{
ulong v = a & b | (a ^ b) & c;
a ^= b ^ c;
return v;
}
</code></pre>
<p><hr>
<strong>Remarks</strong></p>
<p>Because the approach shown here counts the total number of 1-bits by proceeding 128-byte chunks at a time, it only becomes optimal with larger memory block sizes. For example, likely at least some (small) multiple of that sixteen-qword (16-<code>ulong</code>) chunk size. For counting 1-bits in smaller memory ranges, this code will work correctly, but drastically underperform more naïve methods. See the paper for details.</p>
<p>From the paper, this diagram summarizes how the <a href="https://en.wikipedia.org/wiki/Carry-save_adder" rel="nofollow noreferrer">Carry-Save Adder</a> works:</p>
<p><a href="https://i.stack.imgur.com/69xAm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/69xAm.png" alt="Carry-Save Adder in 'Harley-Seal' block-optimized bit count"></a>
<br><br><hr>
<strong>References</strong></p>
<p>[1.] Muła, Wojciech, Nathan Kurz, and Daniel Lemire. "Faster population counts using AVX2 instructions." The Computer Journal 61, no. 1 (2017): 111-120.</p> | 2018-04-11 21:44:28.857000+00:00 | 2018-05-04 00:21:37.013000+00:00 | 2018-05-04 00:21:37.013000+00:00 | null | 7,213,062 | <p>I was asked in an interview the following question.</p>
<pre><code>int countSetBits(void *ptr, int start, int end);
</code></pre>
<p><strong>Synopsis:</strong>
Assume that <code>ptr</code> points to a big chunk of memory. Viewing this memory as contiguous sequence of bits, <code>start</code> and <code>end</code> are bit positions. Assume <code>start</code> and <code>end</code>
have proper values and <code>ptr</code> is pointing to an initialized chunck of memory. </p>
<p><strong>Question:</strong>
Write a C code to count number of bits set from <code>start</code> to <code>end</code> [inclusive] and return the count. </p>
<p>Just to make it more clear </p>
<pre><code> ptr---->+-------------------------------+
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
+-------------------------------+
| 8 | 9 | |15 |
+-------------------------------+
| |
+-------------------------------+
...
...
+-------------------------------+
| | S | |
+-------------------------------+
...
...
+-------------------------------+
| | E | |
+-------------------------------+
...
...
</code></pre>
<p>My solution: </p>
<pre><code>int countSetBits(void *ptr, int start, int end )
{
int count = 0, idx;
char *ch;
for (idx = start; idx <= end; idx++)
{ ch = ptr + (idx/8);
if((128 >> (idx%8)) & (*ch))
{
count++;
}
}
return count;
}
</code></pre>
<p>I gave a very lengthy and somewhat inefficient code during the interview. I worked on it later and came up with above solution. </p>
<p>I am very sure SO community can provide more elegant solution. I am just curious to see their response. </p>
<p>PS: Above code is not compiled. It is more like a pseudo code and may contain errors. </p> | 2011-08-27 07:12:16.337000+00:00 | 2018-05-04 00:21:37.013000+00:00 | null | c++|c|algorithm|bit-manipulation | ['https://en.wikipedia.org/wiki/Hamming_weight', 'https://software.intel.com/en-us/node/679649', 'https://arxiv.org/pdf/1611.07612.pdf', 'https://i.stack.imgur.com/69xAm.png', 'https://en.wikipedia.org/wiki/Carry-save_adder', 'https://i.stack.imgur.com/69xAm.png'] | 6 |
58,771,507 | <p>Natural Language Generation is a broad field. If you have a (not only finite but also) small enough set of possible questions, you could use canned text, which means that you prepare template strings that you enrich with the necessary information from the database, e.g. </p>
<pre><code>"why do you like {}?"format("Paris").
</code></pre>
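<p>A slightly fuller sketch of the same canned-text idea (the field names and templates below are invented for illustration): pick a template per database field and only ask about fields that are still missing.</p>
<pre><code>person = {"name": "Alice", "fav_city": "Paris", "reason_fav_city": None}

templates = {
    "name": "What is your name?",
    "fav_city": "What is your favourite city?",
    "reason_fav_city": "Why do you like {fav_city}?",
}

for field, question in templates.items():
    if person.get(field) is None:          # only ask what we don't know yet
        print(question.format(**person))   # -> "Why do you like Paris?"
</code></pre>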
<p>This is not the most elegant way, but it is definitely a method often applied in NLP systems.</p>
<p>Alternatively, you have to build the full pipeline of content determination, text planning, micro-planning and then surface realisation.
The first means that you determine the content of your question, e.g. "reasons for liking Paris". The middle steps mean building an HPSG-like structure that reveals the constituent structure of your expression, semantic roles, arguments of the verb, adjuncts etc.
Surface realisation can be done using simpleNLG or another tool of your choice/platform.</p>
<p>Both ways are possible for online generation, but the first is definitely less work.
For a good scientific overview: <a href="https://arxiv.org/pdf/1703.09902.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1703.09902.pdf</a></p> | 2019-11-08 17:48:12.763000+00:00 | 2019-11-08 17:48:12.763000+00:00 | null | null | 58,093,177 | <p>I am developing a chatbot that asks the user the information that is not there in the database. </p>
<p>Consider the database has 40 details for every person: Name, Age, Fav food, Fav Restaurant, Fav city, Reason for Fav City, Four the most liked things in the city,etc.</p>
<p>So, the questions can be
"What is our name?"
"Why do you like Paris?"
"Name four places in Paris that you like the most?"</p>
<p>etc.</p>
<p>I want these questions to be generated by the bot on the fly but have no idea how to formulate these questions in English.
Any help or direction (research papers/libraries/codes, etc) would be appreciated.</p> | 2019-09-25 07:23:43.830000+00:00 | 2019-11-08 17:48:12.763000+00:00 | 2019-09-25 08:55:04.380000+00:00 | python|nlp|dynamically-generated|nlp-question-answering|nlg | ['https://arxiv.org/pdf/1703.09902.pdf'] | 1 |
60,499,625 | <p>Well, you did not actually get the real idea of DBSCAN. </p>
<p>This is a copy from wikipedia: </p>
<blockquote>
<p>A point p is a core point if at least minPts points are within
distance ε of it (including p).</p>
<p>A point q is directly reachable from p if point q is within distance ε
from core point p. Points are only said to be directly reachable from
core points.</p>
<p>A point q is reachable from p if there is a path p1, ..., pn with p1 =
p and pn = q, where each pi+1 is directly reachable from pi. Note that
this implies that all points on the path must be core points, with the
possible exception of q.</p>
<p>All points not reachable from any other point are outliers or noise
points.</p>
</blockquote>
<p>So, in simpler words, the idea is that:</p>
<ul>
<li><p>Any sample that has at least min_samples neighbours within distance eps is a core sample.</p></li>
<li><p>Any data sample which is not core, but has at least one core neighbor (with a distance less than eps), is a directly reachable sample and can be added to the cluster.</p></li>
<li><p>Any data sample which is neither directly reachable nor a core, but has at least one directly reachable neighbor (with a distance less than eps), is a reachable sample and will be added to the cluster.</p></li>
<li><p>Any other samples are considered noise, outliers or whatever you want to name them (and those will be labeled -1).</p></li>
</ul>
<p>Depending on the parameters of the clustering (eps and min_samples), you are very likely to have more than two clusters. You see, that is the reason you are seeing other values than 0 and -1 in the result of your clustering.</p>
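<p>As a quick sanity check on the result, you can separate the noise label from the cluster labels (a minimal sketch, reusing the labels already computed above):</p>
<pre><code>import numpy as np

labels = df['anomaly'].values                # the DBSCAN labels computed above
n_noise = int(np.sum(labels == -1))          # samples flagged as outliers/noise
unique = set(labels)
n_clusters = len(unique) - (1 if -1 in unique else 0)
print(f"{n_clusters} clusters, {n_noise} noise points")
</code></pre>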
<p>To answer your second question </p>
<blockquote>
<p>Also is it normal to find the best value of eps using trial and error,</p>
</blockquote>
<p>If you mean doing cross-validation (over a set where you know the cluster labels or can approximate the correct clustering), then yes, I think that is the normal way to do it.</p>
<p>PS: The <a href="https://arxiv.org/pdf/1703.03503.pdf" rel="noreferrer">paper</a> is very good and comprehensive. I highly suggest you have a look. Good luck.</p> | 2020-03-03 03:23:44.753000+00:00 | 2020-03-03 03:42:29.547000+00:00 | 2020-03-03 03:42:29.547000+00:00 | null | 60,499,358 | <p>I have been trying to use DBSCAN in order to detect outliers, from my understanding DBSCAN outputs -1 as outlier and 1 as inliner, but after I ran the code, I'm getting numbers that are not -1 or 1, can someone please explain why? Also is it normal to find the best value of eps using trial and error, because I couldn't figure out a way to find the best possible eps value.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.cluster import DBSCAN
df = pd.read_csv('Final After Simple Filtering.csv',index_col=None,low_memory=True)
# Dropping columns with low feature importance
del df['AmbTemp_DegC']
del df['NacelleOrientation_Deg']
del df['MeasuredYawError']
#applying DBSCAN
DBSCAN = DBSCAN(eps = 1.8, min_samples =10,n_jobs=-1)
df['anomaly'] = DBSCAN.fit_predict(df)
np.unique(df['anomaly'],return_counts=True)
</code></pre>
<pre><code>(array([ -1, 0, 1, ..., 8462, 8463, 8464]),
array([1737565, 3539278, 4455734, ..., 13, 8, 8]))
</code></pre>
<p>Thank you.</p> | 2020-03-03 02:44:28.720000+00:00 | 2020-03-13 01:36:28.593000+00:00 | 2020-03-03 08:00:09.803000+00:00 | python|machine-learning|cluster-analysis | ['https://arxiv.org/pdf/1703.03503.pdf'] | 1 |
49,521,287 | <p>If you are looking for direct support for higher-order tensors, bad luck. David Hall already <a href="https://groups.google.com/d/msg/scala-breeze/tn-iXWdOkYM/SUohQOdzEQAJ" rel="nofollow noreferrer">stated</a> that they would <a href="https://github.com/scalanlp/breeze/issues/505" rel="nofollow noreferrer">require</a> a lot of work to implement properly which makes perfect sense. They are very <a href="https://arxiv.org/pdf/1704.08578.pdf" rel="nofollow noreferrer">math intensive</a>. </p>
<p>With that said, you can still make a 2-by-1 <code>DenseMatrix</code> of <code>DenseVectors</code> and manipulate the values of those vectors. A 2-by-1 matrix is basically a vector, so here's an example with a 2-by-2 matrix and "40 zeroes" vectors:</p>
<pre><code>val matrix = DenseMatrix(
(DenseVector.zeros[Double](40), DenseVector.zeros[Double](40)),
(DenseVector.zeros[Double](40), DenseVector.zeros[Double](40))
)
matrix(0,0)(0) = 1.0
println( matrix(0,0) )
</code></pre>
<p>I must say that this approach is not recommended due to complex nature of computations on such data structure. If you are working with points in 3D space I would rather go for either a matrix or distributed dataset of vectors.</p> | 2018-03-27 19:48:47.107000+00:00 | 2018-03-27 19:48:47.107000+00:00 | null | null | 49,449,071 | <p>I am fairly new to Breeze library. I am trying to convert a 3D Array of shape (2,1,40) to a dense matrix but I am not sure if I am doing it the right way.
My requirement is: </p>
<p>A matrix with 2 rows, 1 column and each of the two rows should have 0.0 values (40 times)</p>
<pre><code>import breeze.linalg._
val matrix = new DenseMatrix[Float](shape(0), shape(1), Array.fill(shape(2))(0.0f))
</code></pre> | 2018-03-23 11:51:45.107000+00:00 | 2018-03-27 21:52:24.237000+00:00 | 2018-03-27 21:52:24.237000+00:00 | scala|scala-breeze | ['https://groups.google.com/d/msg/scala-breeze/tn-iXWdOkYM/SUohQOdzEQAJ', 'https://github.com/scalanlp/breeze/issues/505', 'https://arxiv.org/pdf/1704.08578.pdf'] | 3 |
61,587,236 | <p>BERT is not a machine translation model, BERT is designed to provide a contextual sentence representation that should be useful for various NLP tasks. Although there exist ways how BERT can be incorporated into machine translation (<a href="https://openreview.net/forum?id=Hyl7ygStwB" rel="nofollow noreferrer">https://openreview.net/forum?id=Hyl7ygStwB</a>), it is not an easy problem and there are doubts if it really pays off.</p>
<p>From your question, it seems that you are not really doing machine translation, but automatic summarization. Similarly to machine translation, it can be approached using sequence-to-sequence models, but we do not call it translation in NLP.
For sequence-to-sequence modeling, there are different pre-trained models, such as <a href="https://arxiv.org/abs/1910.13461" rel="nofollow noreferrer">BART</a> or <a href="https://arxiv.org/abs/1905.02450" rel="nofollow noreferrer">MASS</a>. These should be much more useful than BERT.</p>
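<p>As a small, hedged illustration of the summarization route (this assumes the Hugging Face <code>transformers</code> library and a BART checkpoint fine-tuned for summarization; treat title generation as very short summarization):</p>
<pre><code>from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

summary_text = "..."  # the summary you want to turn into a title
title = summarizer(summary_text, max_length=15, min_length=3, do_sample=False)
print(title[0]["summary_text"])
</code></pre>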
<hr />
<p>Update in September 2022: There are multilingual BERT-like models, the most famous are <a href="https://huggingface.co/bert-base-multilingual-cased" rel="nofollow noreferrer">multilingual BERT</a> and <a href="https://huggingface.co/xlm-roberta-base" rel="nofollow noreferrer">XLM-RoBERTa</a>. When fine-tuned carefully, they can be used as a universal encoder for machine translation and enable so-called zero-shot machine translation. The model is trained to translate from several source languages into English, but in the end, it can translate from all languages covered by the multilingual BERT-like models. The method is called <a href="https://arxiv.org/abs/2104.08757v1" rel="nofollow noreferrer">SixT</a>.</p> | 2020-05-04 07:35:51.253000+00:00 | 2022-09-15 08:16:51.663000+00:00 | 2022-09-15 08:16:51.663000+00:00 | null | 61,523,829 | <p>I got a big problem. For my bachelor thesis I have to make a machine tranlation model with BERT.
But I am not getting anywhere right now.
Do you know a documentation or something that can help me here?
I have read some papers in that direction but maybe there is a documentation or tutorial that can help me.</p>
<p>For my bachelor thesis I have to translate from a summary of a text into a title.
I hope someone can help me.</p> | 2020-04-30 12:51:41.163000+00:00 | 2022-09-15 08:16:51.663000+00:00 | null | jupyter-notebook|machine-translation|bert-language-model|sequence-to-sequence | ['https://openreview.net/forum?id=Hyl7ygStwB', 'https://arxiv.org/abs/1910.13461', 'https://arxiv.org/abs/1905.02450', 'https://huggingface.co/bert-base-multilingual-cased', 'https://huggingface.co/xlm-roberta-base', 'https://arxiv.org/abs/2104.08757v1'] | 6 |
54,738,202 | <p>We can get a Theta(n log n)-time algorithm for 1 and a linear time algorithm for 2 and 3 as follows. For 1, we sort and apply 2. For 2, we use an inverse <a href="https://en.wikipedia.org/wiki/Faro_shuffle" rel="nofollow noreferrer">Faro shuffle</a> and a rotation to get the leaves to the end of the array, then "recurse" (tail recursion, so it's actually just a for loop) on the subtree with the leaves removed. For 3, we do the inverse steps of 2 in reverse order.</p>
<p>The C++ code below uses a Theta(n log n) Faro shuffle/inverse shuffle algorithm because it's easier than <a href="https://arxiv.org/abs/0805.1598" rel="nofollow noreferrer">Peiyush Jain's algorithm</a>. Note that Peiyush's algorithm may not be faster on real hardware for any realistic value of n due to its poor cache utilization.</p>
<p>I have tested the code below on literally one input. You are hereby warned.</p>
<pre><code>#include <algorithm>
#include <cassert>
#include <iostream>
#include <iterator>
#include <numeric>
#include <vector>
namespace {
// Transforms [a0 b0 a1 b1 ... an-1 bn-1 an] to [a0 a1 ... an b0 b1 ... bn-1]
// and returns an iterator to b0. The running time is Theta(n log n). If you're
// feeling ambitious, you could try Peiyush Jain's linear-time algorithm.
template <typename RandomAccessIterator>
RandomAccessIterator InvertFaroShuffle(RandomAccessIterator first,
RandomAccessIterator last) {
using Index =
typename std::iterator_traits<RandomAccessIterator>::difference_type;
Index size = last - first;
assert((size & 1) == 1);
if (size == 1) {
return last;
}
RandomAccessIterator middle = first + (((size + 1) >> 2) << 1);
return std::rotate(InvertFaroShuffle(first, middle - 1), middle,
InvertFaroShuffle(middle, last));
}
// Theta(n log n)-time algorithm for #2.
template <typename RandomAccessIterator>
void SortedToLevelOrder(RandomAccessIterator first, RandomAccessIterator last) {
using Index =
typename std::iterator_traits<RandomAccessIterator>::difference_type;
Index size = last - first;
if (size <= 1) {
return;
}
unsigned height = 1;
while ((Index{2} << height) - 1 < size) {
height++;
}
for (unsigned level = height; level > 0; level--) {
Index effective_size = std::min((Index{2} << level) - 1, size);
Index leaf_count =
std::min(Index{1} << level, size - ((Index{1} << level) - 1));
InvertFaroShuffle(first, first + 2 * leaf_count - 1);
std::rotate(first, first + leaf_count, first + effective_size);
}
}
// Theta(n log n)-time algorithm for #1.
template <typename RandomAccessIterator>
void UnsortedToLevelOrder(RandomAccessIterator first,
RandomAccessIterator last) {
std::sort(first, last);
SortedToLevelOrder(first, last);
}
// Transforms [a0 a1 ... an b0 b1 ... bn-1] to [a0 b0 a1 b1 ... an-1 bn-1 an].
// The running time is Theta(n log n). If you're feeling ambitious, you could
// try Peiyush Jain's linear-time algorithm.
template <typename RandomAccessIterator>
void FaroShuffle(RandomAccessIterator first, RandomAccessIterator last) {
using Index =
typename std::iterator_traits<RandomAccessIterator>::difference_type;
Index size = last - first;
assert((size & 1) == 1);
if (size == 1) {
return;
}
Index half = (size + 1) >> 1;
RandomAccessIterator middle = first + half;
Index quarter = half >> 1;
middle = std::rotate(first + quarter, middle, middle + quarter);
FaroShuffle(first, middle - 1);
FaroShuffle(middle, last);
}
// Theta(n log n)-time algorithm for #3.
template <typename RandomAccessIterator>
void LevelOrderToSorted(RandomAccessIterator first, RandomAccessIterator last) {
using Index =
typename std::iterator_traits<RandomAccessIterator>::difference_type;
Index size = last - first;
if (size <= 1) {
return;
}
unsigned height = 1;
while ((Index{2} << height) - 1 < size) {
height++;
}
for (unsigned level = 1; level < height + 1; level++) {
Index effective_size = std::min((Index{2} << level) - 1, size);
Index leaf_count =
std::min(Index{1} << level, size - ((Index{1} << level) - 1));
std::rotate(first, first + (effective_size - leaf_count),
first + effective_size);
FaroShuffle(first, first + 2 * leaf_count - 1);
}
}
void PrintList(const std::vector<int>& list) {
for (int elem : list) {
std::cout << ' ' << elem;
}
std::cout << '\n';
}
} // namespace
int main() {
std::vector<int> list(10);
std::iota(list.begin(), list.end(), 0);
PrintList(list);
SortedToLevelOrder(list.begin(), list.end());
PrintList(list);
LevelOrderToSorted(list.begin(), list.end());
PrintList(list);
}
</code></pre> | 2019-02-17 22:15:23.630000+00:00 | 2019-02-17 22:15:23.630000+00:00 | null | null | 54,736,288 | <p>This question is similar to <a href="https://stackoverflow.com/questions/36419760/sorted-list-to-complete-bst-array-representation">Sorted list to complete BST array representation</a> but perhaps more specifically focused. This question could be used to solve <a href="https://stackoverflow.com/questions/34835192/inserting-node-dynamically-in-complete-binary-search-tree">Inserting node dynamically in Complete Binary Search Tree</a>.</p>
<p>Consider a <a href="http://web.cecs.pdx.edu/~sheard/course/Cs163/Doc/FullvsComplete.html" rel="nofollow noreferrer">complete binary tree</a> represented in memory as a contiguous array <code>a[0..n)</code>, where element <code>a[0]</code> is the root of the tree, and for any node <code>a[i]</code>, it has left child <code>a[2*i+1]</code> and right child <code>a[2*i+2]</code> (if those indices are less than <code>n</code>).</p>
<p>C++ programmers will be familiar with this representation because it's used by <a href="https://en.cppreference.com/w/cpp/algorithm/make_heap" rel="nofollow noreferrer"><code>std::make_heap</code></a>. <code>std::make_heap(a, a+n)</code> takes an unsorted array (which can be viewed as an unsorted complete binary tree) and permutes its elements (which can be viewed as tree rotations) to turn the tree into a complete binary <a href="https://www.cs.cmu.edu/~adamchik/15-121/lectures/Binary%20Heaps/heaps.html" rel="nofollow noreferrer">heap</a> where each node's value is greater than either of its children. We say that the resulting array is in "max-heap order."</p>
<p>On the other hand, if each node's value is greater than its left child but less than its right child, then we say that the complete binary tree is a complete binary <em>search tree</em>. In this case let's say that the resulting array is in "level order."<sup>[1]</sup></p>
<p>Whereas there are many permissible "max-heap orders" for a given set of elements, each set of elements has only a single unique "level order."</p>
<p>The following vectors are in level order:</p>
<pre><code>std::vector<int> v1 = { 3, 1, 4, 0, 2 };
// corresponds to the complete binary search tree
// 3
// 1 4
// 0 2
std::vector<int> v2 = { 6, 3, 8, 1, 5, 7, 9, 0, 2, 4 };
// corresponds to the complete binary search tree
// 6
// 3 8
// 1 5 7 9
// 0 2 4
</code></pre>
<p>What I'm looking for is a family of efficient algorithms for:</p>
<ol>
<li>permuting an unsorted sequence into level order</li>
<li>permuting a sorted sequence into level order</li>
<li>permuting a level-order sequence into sorted order</li>
</ol>
<p>When I say <em>efficient</em>, I mean algorithms that work without deep recursion, without dynamic memory allocation, and without temporary arrays. I already know that the permutation cannot be done particularly quickly; I'd hope for O(n lg n).</p>
<p>Notice that parts 2 and 3 are basically asking to come up with a mapping <code>OldIndex -> NewIndex</code>; once you have such a function, you can do the permutation in-place using <a href="https://stackoverflow.com/a/54735864/1424877">one of these algorithms</a>.</p>
<p>Part 1 is asking for the implementation of <code>nonstd::make_searchtree</code> by analogy to <a href="https://en.cppreference.com/w/cpp/algorithm/make_heap" rel="nofollow noreferrer"><code>std::make_heap</code></a>. Part 3 is asking for the implementation of <code>nonstd::sort_searchtree</code> by analogy to <a href="https://en.cppreference.com/w/cpp/algorithm/sort_heap" rel="nofollow noreferrer"><code>std::sort_heap</code></a>.</p>
<hr>
<p>[1] — I basically made up this term "level order." If you know a more widely recognized academic term for this ordering, please leave a comment!</p> | 2019-02-17 18:23:46.240000+00:00 | 2019-02-17 22:15:23.630000+00:00 | 2019-02-17 20:02:03.140000+00:00 | algorithm|stl|binary-search-tree|computer-science|tree-rotation | ['https://en.wikipedia.org/wiki/Faro_shuffle', 'https://arxiv.org/abs/0805.1598'] | 2 |
41,416,399 | <blockquote>
<p>why do we have multiple layers and multiple nodes per layer in a neural network?</p>
</blockquote>
<p>We need at least one hidden layer with a non-linear activation to be able to learn non-linear functions. Usually, one thinks of each layer as an abstraction level. For computer vision, the input layer contains the image and the output layer contains one node for each class. The first hidden layer detects edges, the second hidden layer might detect circles / rectangles, then there come more complex patterns.</p>
<p>There is a theoretical result which says that an MLP with only one hidden layer can fit every function of interest up to an arbitrarily low error margin if this hidden layer has enough neurons. However, the number of parameters might be MUCH larger than if you add more layers.</p>
<p>Basically, by adding more hidden layers / more neurons per layer you add more parameters to the model. Hence you allow the model to fit more complex functions. However, to my knowledge there is no quantitative understanding of what adding a single further layer / node exactly contributes.</p>
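<p>To make the "more layers / more neurons means more parameters" point concrete, here is a tiny sketch that counts the weights and biases of a fully connected network (the layer sizes are arbitrary examples):</p>
<pre><code>def mlp_param_count(layer_sizes):
    """Number of weights + biases in a fully connected net, e.g. [784, 100, 10]."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(mlp_param_count([784, 100, 10]))        # one hidden layer: 79,510 parameters
print(mlp_param_count([784, 100, 100, 10]))   # add a second hidden layer: 89,610
</code></pre>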
<p>It seems to me that you might want a general introduction into neural networks. I recommend chapter 4.3 and 4.4 of [Tho14a] (my bachelors thesis) as well as [LBH15].</p>
<blockquote>
<p>[Tho14a]
M. Thoma, “On-line recognition of handwritten mathematical symbols,”
Karlsruhe, Germany, Nov. 2014. [Online]. Available: <a href="https://arxiv.org/abs/1511.09030" rel="nofollow noreferrer">https://arxiv.org/abs/1511.09030</a></p>
<p>[LBH15]
Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015. [Online]. Available:
<a href="http://www.nature.com/nature/journal/v521/n7553/abs/nature14539.html" rel="nofollow noreferrer">http://www.nature.com/nature/journal/v521/n7553/abs/nature14539.html</a></p>
</blockquote> | 2017-01-01 15:02:43.793000+00:00 | 2017-01-01 15:12:01.340000+00:00 | 2017-01-01 15:12:01.340000+00:00 | null | 41,410,317 | <p>I just started to learn about neural networks and so far my knowledge of machine learning is simply linear and logistic regression. from my understanding of the latter algorithms, is that given multiples of inputs the job of the learning algorithm is to come up with appropriate weights for each input so that eventually I have a polynomial that either describes the data which is the case of linear regression or separates it as in the case of logistic regression.
if I was to represent the same mechanism in neural network, according to my understanding, it would look something like this,
<a href="https://i.stack.imgur.com/x8EaK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/x8EaK.png" alt="enter image description here"></a>
multiple nodes at the input layer and a single node in the output layer. where I can back propagate the error proportionally to each input. so that also eventually I arrive to a polynomial X1W1 + X2W2+....XnWn that describes the data. to me having multiple nodes per layer, aside from the input layer, seems to make the learning process parallel, so that I can arrive to the result faster. it's almost like running multiple learning algorithms each with different starting points to see which one converges faster. and as for the multiple layers I'm at a lose of what mechanism and advantage does it have on the learning outcome. </p> | 2016-12-31 17:08:11.930000+00:00 | 2017-01-01 15:12:01.340000+00:00 | null | machine-learning|neural-network|linear-regression|logistic-regression | ['https://arxiv.org/abs/1511.09030', 'http://www.nature.com/nature/journal/v521/n7553/abs/nature14539.html'] | 2 |
42,939,848 | <p>Yes, there is a paper about <a href="https://arxiv.org/abs/1606.06266" rel="nofollow noreferrer">Detection and Tracking of Liquids with Fully Convolutional Networks</a>. A quick google search would have found it. And of course, it uses Deep Learning :)</p> | 2017-03-21 23:05:30.410000+00:00 | 2017-03-21 23:05:30.410000+00:00 | null | null | 42,931,850 | <p>This question just came up in my mind with curiosity.
So my question is that are there any solutions or algorithms detecting liquid such as water, coffee or something like this.
What I would like to do with this is just define whether it is liquid or not.</p>
<p>any ideas and comments are appreciated</p>
<p>Thanks in advance!</p> | 2017-03-21 15:44:53.007000+00:00 | 2017-03-21 23:05:30.410000+00:00 | null | algorithm|computer-vision|deep-learning|vision | ['https://arxiv.org/abs/1606.06266'] | 1 |
41,470,175 | <p>You can indeed put a variational autoencoder (VAE) in front in order to generate the initial distribution z (see <a href="https://arxiv.org/abs/1512.09300" rel="nofollow noreferrer">paper</a>). </p>
<p>If you are interested in the topic I can recommend the this <a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info" rel="nofollow noreferrer">course</a> at Kadenze.</p> | 2017-01-04 17:48:22.740000+00:00 | 2017-01-04 17:48:22.740000+00:00 | null | null | 41,469,128 | <p>I'm very interested in GAN those times.</p>
<p>I coded one for MNIST with the following structure:</p>
<ul>
<li>Generator model</li>
<li>Discriminator model</li>
<li>Gen + Dis model</li>
</ul>
<p>The Generator model generates batches of images from a random distribution.
The Discriminator is trained on those and on real images.
Then the Discriminator is frozen in the Gen+Dis model and the Generator is trained. (With the frozen Discriminator saying whether the generator's output is good or not.)</p>
<p>Now, imagine I don't want to feed my generator with a random distribution but with images. (For upscaling, for example, or generating a real image from a drawing.)</p>
<p>Do I need to change something in it?
(Except the conv model, which will be more complex.)
Should I continue to use binary_crossentropy as the loss function?</p>
<p>Thanks you very much!</p> | 2017-01-04 16:48:19.443000+00:00 | 2017-01-04 17:48:22.740000+00:00 | null | tensorflow|neural-network|deep-learning|conv-neural-network | ['https://arxiv.org/abs/1512.09300', 'https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info'] | 2 |
65,635,714 | <p><strong>Here is my theory :</strong></p>
<p>Pre-training is useful when you want to leverage already existing data to help the model train on similar data, for which you have few instances. At least this was the reasoning behind the <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">Unet</a> architecture in medical image segmentation.</p>
<p>Now, to me the key is in the notion of "similar". If your network has been pre-trained on cats and dogs and you want to extrapolate to weld seams, there's a chance your pre-training is not helping or is even getting in the way of the model training properly.</p>
<p><strong>Why ?</strong></p>
<p>When training your CNN you get randomly initialized weights, whereas using a pre-trained network you get pre-trained weights. If the features you are extracting are similar across datasets then you get a head start by having the network already attuned to these features.</p>
<p>For example, cats and dogs share similar spatial features visually (eye position, nose, ears...). So there's a chance that you converge to a local minimum faster during training since you are already starting from a good base that just needs to adapt to the new specifics of your data.</p>
<p><strong>Conclusions:</strong></p>
<p>If the similarity assumption does not hold, it means your model would have to "unlearn" what it already learned to adapt to the new specifics of your dataset, and I guess that would be the reason why training is more difficult and does not give as good a result as a blank-slate CNN. (Especially if you don't have that much data.)</p>
<p><strong>PS :</strong> I'd be curious to see if your pre trained model end up catching up with your CNN if you give it more epochs to train.</p> | 2021-01-08 20:06:55.747000+00:00 | 2021-01-08 20:06:55.747000+00:00 | null | null | 65,627,620 | <p>I have a dataset of laser welding images of size 300*300 which contains two class of bad and good weld seam. I have followed <a href="https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html" rel="nofollow noreferrer">Pytorch fine-tuning tutorial</a> for an inception-v3 classifier.</p>
<p>On the other hand, I also built a custom CNN with 3 conv layers and 3 FC layers. What I observed is that the fine-tuning showed lots of variation in validation accuracy. Basically, I see a different maximum accuracy every time I train my model. Plus, my accuracy with fine-tuning is much less than with my custom CNN!! For example, the accuracy for my synthetic images from a GAN is 86% with inception-v3, while it is 94% with my custom CNN. The real data for both networks shows almost similar behaviour and accuracy, however the accuracy of the custom CNN is about 2% higher.</p>
<p>I trained with different training scales of 200, 500 and 1000 train-set images (half of them for each class, e.g. for 200 images we have 100 good and 100 bad). I also include a resize transform of 224 in my train_loader; in the fine-tuning tutorial, this resize is automatically done to 299 for inception-v3. For each trial, the validation set size and its content are constant.</p>
<p><strong>Do you know what causes this behavior? Is it because my dataset is so different from the pretrained model classes? Am I not supposed to get better results with fine-tuning?</strong></p>
<p>My custom CNN:</p>
<pre><code>device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv3 = nn.Conv2d(16, 24, 5)
self.fc1 = nn.Linear(13824, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 2)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
#x = x.view(-1, 16 * 5 * 5)
x = x.view(x.size(0),-1)
#print(x.shape)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
#x = F.softmax(x, dim=1)
return x
model = Net()
criterion = nn.CrossEntropyLoss()
#optimizer = optim.Adam(model.parameters(), lr=0.001)
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=5e-4)
model.to(device)
</code></pre>
<p>with training loop of:</p>
<pre><code>epochs = 15
steps = 0
running_loss = 0
print_every = 10
train_losses, test_losses = [], []
train_acc, test_acc = [], []
for epoch in range(epochs):
for inputs, labels in trainloader:
steps += 1
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
logps = model.forward(inputs)
loss = criterion(logps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
test_loss = 0
accuracy = 0
model.eval()
with torch.no_grad():
for inputs, labels in testloader:
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
batch_loss = criterion(logps, labels)
test_loss += batch_loss.item()
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
#train_acc.append(running_loss/len(trainloader))
test_acc.append(accuracy/len(testloader))
print(f"Epoch {epoch+1}/{epochs}.. "
f"Train loss: {running_loss/print_every:.3f}.. "
f"Test loss: {test_loss/len(testloader):.3f}.. "
f"Test accuracy: {accuracy/len(testloader):.3f}")
running_loss = 0
model.train()
</code></pre> | 2021-01-08 10:49:56.483000+00:00 | 2021-03-02 16:34:17.663000+00:00 | 2021-01-08 11:24:21.783000+00:00 | python|deep-learning|pytorch|conv-neural-network|transfer-learning | ['https://arxiv.org/abs/1505.04597'] | 1 |
13,078,971 | <p>Had a brief discussion with the author of the book: "Python Text Processing with NLTK 2.0 Cookbook", Mr.Jacob Perkins. He said, "a generalized grammar for sentences is pretty hard. I would instead see if you can find common tag patterns, and use those. But then you're essentially do classification by regexp matching. Parsing is usually used to extract phrases within a sentence, or to produce deep parse trees of a sentence, but you're just trying to identify/extract sentences, which is why I think classification is a much better approach. Consider including tagged words as features when you try this, since the grammar could be significant." taking his suggestions I looked at the causal sentences I had and I found out that these sentences have words like</p>
<pre><code>consequently
as a result
Therefore
as a consequence
For this reason
For all these reasons
Thus
because
since
because of
on account of
due to
for the reason
so, that
</code></pre>
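<p>Given that list, a minimal sketch of the connective-matching idea (plain string/regex matching over the surface text rather than POS tags; the two test sentences are just illustrations):</p>
<pre><code>import re

causal_markers = [
    "because of", "because", "since", "due to", "as a result",
    "as a consequence", "therefore", "consequently", "for this reason",
]
pattern = re.compile("|".join(re.escape(m) for m in causal_markers), re.IGNORECASE)

def is_causal(sentence):
    """Crude pre-filter: flag a sentence if it contains a causal connective."""
    return bool(pattern.search(sentence))

print(is_causal("She had health problems because of poor sanitation in the village."))  # True
print(is_causal("There is a river near the village."))                                  # False
</code></pre>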
<p>These words are indeed connecting cause and effect in a sentence. Using these connectors it is now easy to extract causal sentences. A detailed report can be found on arxiv: <a href="https://arxiv.org/pdf/1507.02447.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1507.02447.pdf</a> </p> | 2012-10-25 23:49:39.593000+00:00 | 2018-09-26 15:18:28.397000+00:00 | 2018-09-26 15:18:28.397000+00:00 | null | 13,068,386 | <p>I am extracting causal sentences from the accident reports on water. I am using NLTK as a tool here. I manually created my regExp grammar by taking 20 causal sentence structures [see examples below]. The constructed grammar is of the type </p>
<pre><code>grammar = r'''Cause: {<DT|IN|JJ>?<NN.*|PRP|EX><VBD><NN.*|PRP|VBD>?<.*>+<VBD|VBN>?<.*>+}'''
</code></pre>
<p>Now the grammar has 100% recall on the test set ( I built my own toy dataset with 50 causal and 50 non causal sentences) but a low precision. I would like to ask about: </p>
<ol>
<li>How to train NLTK to build the regexp grammar automatically for
extracting particular type of sentences.</li>
<li><p>Has any one ever tried to extract causal sentences. Example
causal sentences are: </p>
<ul>
<li><p>There was poor sanitation in the village, as a consequence, she had
health problems.</p></li>
<li><p>The water was impure in her village, For this reason, she suffered
from parasites.</p></li>
<li><p>She had health problems because of poor sanitation in the village.
I would want to extract only the above type of sentences from a
large text.</p></li>
</ul></li>
</ol> | 2012-10-25 12:17:08.800000+00:00 | 2018-09-26 15:18:28.397000+00:00 | 2012-10-25 12:41:23.510000+00:00 | nlp|nltk | ['https://arxiv.org/pdf/1507.02447.pdf'] | 1 |
62,188,622 | <p>This could happen if you have a high learning rate. Maybe, during gradient descent, even when you get close to a good local optimum, the high learning rate soon shoots you out of it again. Try reducing the learning rate and see if this behavior goes away. Also, use Adam if you're not using it already.</p>
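<p>A minimal sketch of that advice (assuming <code>model</code> is your existing Keras model and that the loss/metrics below match your task; the learning rate value is just a starting point):</p>
<pre><code>from tensorflow import keras

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # smaller learning rate
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
</code></pre>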
<p><a href="https://i.stack.imgur.com/BIoUW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BIoUW.png" alt="Image from KDnuggets"></a>
Validation loss is just following the overall trend of the training loss, that's nothing interesting.</p>
<p>A similar strategy is used in cyclic learning, where the learning rate is suddenly increased in order to obtain models from multiple local minima. (In your case, the model is just not going to converge.)</p>
<p><a href="https://arxiv.org/pdf/1506.01186.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.01186.pdf</a></p> | 2020-06-04 06:59:06.220000+00:00 | 2020-06-04 07:04:56.083000+00:00 | 2020-06-04 07:04:56.083000+00:00 | null | 62,187,316 | <p>I've made a neural network and I plotted the loss of training and validation set. And for validation I get like a step function type of loss and for training, I get these weird spikes. Now I know my model is learning nothing because my loss is so high but I still wonder what does these spikes actually mean. I mean why am I getting these spikes. I've been looking to literature but haven't been able to find an explanation. Could it be that during gradient descent my model moves close to some local optimum but then takes a giant step and moves away but then these spikes seem to happen periodically. And I have no idea what's causing a step function for validation.
I've attached an image as well.</p>
<p><a href="https://i.stack.imgur.com/Tj34C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tj34C.png" alt=""></a></p> | 2020-06-04 05:19:20.723000+00:00 | 2020-06-04 07:04:56.083000+00:00 | null | python-3.x|keras|deep-learning | ['https://i.stack.imgur.com/BIoUW.png', 'https://arxiv.org/pdf/1506.01186.pdf'] | 2 |
60,777,779 | <p>This problem can be restated as:</p>
<ul>
<li>Generate a random integer in the interval [0, N).</li>
<li>Output 1 if the integer is 0, or 0 otherwise.</li>
</ul>
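<p>As a quick illustration (a Python sketch, not the optimal construction itself), one can draw the uniform integer from a stream of fair coin flips and then test it against the threshold; <code>flip()</code> stands in for one unbiased bit from your PRNG:</p>
<pre class="lang-py prettyprint-override"><code>import random

def flip():
    # Stand-in for one unbiased bit taken from the 32-bit PRNG's output stream.
    return random.getrandbits(1)

def uniform(n):
    # Fast-Dice-Roller-style construction: uniform integer in [0, n) built from coin flips.
    v, c = 1, 0
    while True:
        v, c = 2 * v, 2 * c + flip()
        if v >= n:
            if c < n:
                return c
            v, c = v - n, c - n

def bernoulli(k, n):
    # Returns 1 with probability k/n.
    return 1 if uniform(n) < k else 0

bits = [bernoulli(1, 100) for _ in range(1000)]  # roughly 1% ones
</code></pre>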
<p>There are various ways to <a href="https://peteroupc.github.io/randomfunc.html#Uniform_Random_Integers" rel="nofollow noreferrer">generate random integers</a> in a range from a random bit stream. Of these, J. Lumbroso showed an optimal way to solve this problem given a random bit stream ("<a href="http://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Optimal Discrete Uniform Generation from Coin Flips, and Applications</a>", 2013). (However, Appendix B of that paper also points out a solution to your direct problem: generating 0 or 1 with a given probability.) Other ways include those mentioned in "<a href="http://www.pcg-random.org/posts/bounded-rands.html" rel="nofollow noreferrer">Efficiently Generating a Random Number in a Range</a>" as well as a brand-new algorithm, the "<a href="https://github.com/probcomp/fast-loaded-dice-roller" rel="nofollow noreferrer">Fast Loaded Dice Roller</a>".</p> | 2020-03-20 16:13:10.093000+00:00 | 2020-03-20 16:59:57.017000+00:00 | 2020-03-20 16:59:57.017000+00:00 | null | 60,777,414 | <p>Suppose you have a regular number generator which is able to produce uniformly distributed random 32 bit numbers. And suppose you look for a way to generate a pseudo-random sequence of bits where ones (i.e., the set bits) appear in the sequence with predefined probability.</p>
<p>A naive way of producing such a sequence would be running the number generator at the per-bit level, but that's terribly inefficient for small probabilities like 0.01: with only 1% of bits in the sequence set, most of the bits will be zero. On average just one bit in a hundred would be set. On the other hand, even with such a low probability there's a chance to encounter a long sub-sequence of consecutive ones that extends beyond 8, 16, 32 or 64 bits.</p>
<p>The question is how to produce such a sequence efficiently using a regular PRNG.</p>
<hr>
<p><strong>Edit</strong></p>
<p>A toy implementation of rational Bernoulli variable sampling in javascript suggested by Peter O.:</p>
<pre><code>// Based on
// https://arxiv.org/abs/1304.1916
// https://arxiv.org/pdf/1304.1916.pdf (page 21, figure 6)
class Xor128 {
constructor(x, y, z, w) {
this.x = x;
this.y = y;
this.z = z;
this.w = w;
}
prev() {
var t = this.w ^ this.z ^ (this.z >>> 19);
t ^= t >>> 8;
t ^= t >>> 16;
this.w = this.z;
this.z = this.y;
this.y = this.x;
t ^= t << 11;
t ^= t << 22;
this.x = t;
return this.w;
}
curr() {
return this.w;
}
next() {
var t = this.x ^ (this.x << 11);
this.x = this.y;
this.y = this.z;
this.z = this.w;
return this.w = this.w ^ (this.w >>> 19) ^ (t ^ (t >>> 8));
}
}
function* flip(xor128) {
while (true) {
var value = xor128.next();
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
yield value & 1; value >>>= 1;
}
}
function* bernoulli(flip, k, n) {
var b;
var v = k
for (const bit of flip) {
v <<= 1;
if (v >= n) {
v -= n;
b = 1;
} else {
b = 0;
}
if (bit === 1) {
yield b;
v = k;
}
}
}
var xor128 = new Xor128(1, 2, 3, 4);
var z = 0, o = 0;
var then = Date.now();
for (const value of bernoulli(flip(xor128), 5, 1000)) {
if (value === 0) {
z++;
} else {
o++;
}
    if (Date.now() - then > 1000) {
        console.log(`${z} ${o}`);
        then = Date.now(); // reset the timer so counts are reported roughly once per second
    }
}
// Pieces of code to test out xor128:
//
// for (let index = 0; index < 100; index++) {
// console.log(xor128.curr())
// xor128.next();
// }
// console.log('-----------------------------------')
// for (let index = 0; index < 100; index++) {
// xor128.prev();
// console.log(xor128.curr())
// }
</code></pre>
<p><strong>Another edit</strong></p>
<p>The code below, implemented in C#, produces 91.2 million bits per second packed into the UInt32 data type (MacBook Pro 2019, Core i9 2.4 GHz). I think in C it'd be possible to get over 100 million bits. It also feels like it is possible to further utilize binary arithmetic to generate all 32 bits of a random number in parallel, with some loop unrolling or maybe SIMD (not sure). Anyway, here's the code: </p>
<pre><code>public class Bernoulli
{
public UInt32 X { get; set; }
public UInt32 Y { get; set; }
public UInt32 Z { get; set; }
public UInt32 W { get; set; }
public Bernoulli()
: this(Guid.NewGuid())
{
}
public Bernoulli(Guid guid)
{
var index = 0;
var bytes = guid.ToByteArray();
X = (UInt32)((bytes[index++] << 24) | (bytes[index++] << 16) | (bytes[index++] << 8) | bytes[index++]);
Y = (UInt32)((bytes[index++] << 24) | (bytes[index++] << 16) | (bytes[index++] << 8) | bytes[index++]);
Z = (UInt32)((bytes[index++] << 24) | (bytes[index++] << 16) | (bytes[index++] << 8) | bytes[index++]);
W = (UInt32)((bytes[index++] << 24) | (bytes[index++] << 16) | (bytes[index++] << 8) | bytes[index++]);
}
public Bernoulli(UInt32 x, UInt32 y, UInt32 z, UInt32 w)
{
X = x;
Y = y;
Z = z;
W = w;
}
UInt64 bits = 0;
UInt32 bitsCount = 0;
public UInt32 Next(UInt32 k, UInt32 n)
{
UInt32 b;
var c = 0;
var v = k;
var r = 0u;
// ------------------------
do
{
while (bitsCount <= 32)
{
b = X ^ (X << 11);
X = Y;
Y = Z;
Z = W;
bits <<= 32;
bits |= ((UInt64)(W = W ^ (W >> 19) ^ (b ^ (b >> 8))));
bitsCount += 32;
}
while (c < 32 && 0 < bitsCount)
{
v <<= 1;
// Two lines of code below is a two step optimization:
// First we optimize the following statement:
//
// if (v >= n)
// {
// v -= n;
// b = 1;
// }
// else
// {
// b = 0;
// }
//
// into the following:
//
// var b = v < n ? 0u : 1u;
// v -= b * n
//
// thus reducing branching, but we would like also to omit
// multiplication, which we can do through:
b = v < n ? 0u : 0xFFFFFFFFu;
v -= b & n;
if ((bits & 1) == 1)
{
r |= b & 1;
r <<= 1;
v = k;
c++;
}
bits >>= 1;
bitsCount--;
}
} while (c < 32);
return r;
}
}
</code></pre> | 2020-03-20 15:49:47.947000+00:00 | 2020-03-21 11:34:10.513000+00:00 | 2020-03-20 22:40:10.517000+00:00 | random|pseudocode | ['https://peteroupc.github.io/randomfunc.html#Uniform_Random_Integers', 'http://arxiv.org/abs/1304.1916', 'http://www.pcg-random.org/posts/bounded-rands.html', 'https://github.com/probcomp/fast-loaded-dice-roller'] | 4 |
41,459,320 | <p>You could try something like this: <a href="https://arxiv.org/pdf/1412.7725.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1412.7725.pdf</a>. But with deep learning and your amount of training data, you can probably get any big-enough model to work well.</p> | 2017-01-04 08:31:01.740000+00:00 | 2017-01-04 08:31:01.740000+00:00 | null | null | 41,457,478 | <p>We are running a huge team that processes child photos for our customers; the team handles over 1M photos per year.</p>
<p>The process includes basic tuning of light, resize, apply some filters to make the skin looks better.</p>
<p>We want to use deep learning to complete the jobs as much as possible. Which means I want to choose one model and train that model using our existing data. And then use the trained model to generate photos by inputing the new unprocessed photos.</p>
<p>Is there existing model that I can make use of, or any papers have covered this scenario?</p>
<p>Any help would be appreciated, thanks!</p> | 2017-01-04 06:25:30.333000+00:00 | 2017-01-05 19:58:02.863000+00:00 | null | deep-learning|image-generation | ['https://arxiv.org/pdf/1412.7725.pdf'] | 1 |
69,755,964 | <p>Your alpha/learning-rate seems to be too big.</p>
<p>Try with a lower learning-rate, like so:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import numpy as np
from tensorflow import keras
model = keras.Sequential([
keras.layers.Dense(units=1, input_shape=[1])
])
# manually set the optimizer, default learning_rate=0.01
opt = keras.optimizers.SGD(learning_rate=0.0001)
model.compile(optimizer=opt, loss="mean_squared_error")
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0], dtype=float)
ys = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)
model.fit(xs, ys, epochs=500)
print(model.predict([25.0]))
</code></pre>
<p>... which will converge.</p>
<p>One of the reasons ADAM works better is probably that it estimates the learning rate adaptively - I think the A in ADAM stands for Adaptive ;)</p>
<p><strong>EDIT: It does!</strong></p>
<p>From <a href="https://arxiv.org/pdf/1412.6980.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1412.6980.pdf</a></p>
<blockquote>
<p>The method computes individual adaptive learning rates for
different parameters from estimates of first and second moments of the gradients; <strong>the name Adam
is derived from adaptive moment estimation</strong></p>
</blockquote>
<pre><code>Epoch 1/500
1/1 [==============================] - 0s 129ms/step - loss: 1.2133
Epoch 2/500
1/1 [==============================] - 0s 990us/step - loss: 1.1442
Epoch 3/500
1/1 [==============================] - 0s 0s/step - loss: 1.0792
Epoch 4/500
1/1 [==============================] - 0s 1ms/step - loss: 1.0178
Epoch 5/500
1/1 [==============================] - 0s 1ms/step - loss: 0.9599
Epoch 6/500
1/1 [==============================] - 0s 1ms/step - loss: 0.9053
Epoch 7/500
1/1 [==============================] - 0s 0s/step - loss: 0.8538
Epoch 8/500
1/1 [==============================] - 0s 1ms/step - loss: 0.8053
Epoch 9/500
1/1 [==============================] - 0s 999us/step - loss: 0.7595
Epoch 10/500
1/1 [==============================] - 0s 1ms/step - loss: 0.7163
...
Epoch 499/500
1/1 [==============================] - 0s 1ms/step - loss: 9.9431e-06
Epoch 500/500
1/1 [==============================] - 0s 999us/step - loss: 9.9420e-06
</code></pre>
<p><strong>EDIT2:</strong></p>
<p>With true/"vanilla" gradient descent, you should see convergence at every step. If you start to diverge it's usually because the alpha/learning-rate/step-size is too big, which means the search "overshoots" in one, several or all dimensions.</p>
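<p>A tiny numerical illustration of that overshooting (plain gradient descent on f(x) = x^2, where any step size above 1.0 diverges):</p>
<pre class="lang-py prettyprint-override"><code>def gradient_descent(lr, steps=10, x=1.0):
    # f(x) = x**2, gradient 2*x, update x <- x - lr * 2 * x = x * (1 - 2*lr)
    for _ in range(steps):
        x = x - lr * 2 * x
    return x

print(gradient_descent(lr=0.1))   # shrinks towards 0 (converges)
print(gradient_descent(lr=1.1))   # |1 - 2*lr| > 1, so x blows up (diverges)
</code></pre>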
<p>Consider a loss function whose partial-derivative/gradient has a very narrow valley in one or several dimensions. A "small step too far" can mean a large error suddenly.</p> | 2021-10-28 14:29:23.853000+00:00 | 2021-11-12 19:01:35.547000+00:00 | 2021-11-12 19:01:35.547000+00:00 | null | 69,755,379 | <p>I'm new to ML, so I'm sorry if this is some stupid question anyone could have figured out. I am using TensorFlow and Keras here.</p>
<p>So here's my code:</p>
<pre><code>import tensorflow as tf
import numpy as np
from tensorflow import keras
model = keras.Sequential([
keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer="sgd", loss="mean_squared_error")
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0], dtype=float)
ys = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)
model.fit(xs, ys, epochs=500)
print(model.predict([25.0]))
</code></pre>
<p>I get this as output (I'm not showing the whole 500 lines, just 20 epochs):</p>
<pre><code>Epoch 1/500
1/1 [==============================] - 0s 210ms/step - loss: 450.9794
Epoch 2/500
1/1 [==============================] - 0s 4ms/step - loss: 1603.0852
Epoch 3/500
1/1 [==============================] - 0s 10ms/step - loss: 5698.4731
Epoch 4/500
1/1 [==============================] - 0s 7ms/step - loss: 20256.3398
Epoch 5/500
1/1 [==============================] - 0s 10ms/step - loss: 72005.1719
Epoch 6/500
1/1 [==============================] - 0s 4ms/step - loss: 255956.5938
Epoch 7/500
1/1 [==============================] - 0s 3ms/step - loss: 909848.5000
Epoch 8/500
1/1 [==============================] - 0s 5ms/step - loss: 3234236.0000
Epoch 9/500
1/1 [==============================] - 0s 3ms/step - loss: 11496730.0000
Epoch 10/500
1/1 [==============================] - 0s 3ms/step - loss: 40867392.0000
Epoch 11/500
1/1 [==============================] - 0s 3ms/step - loss: 145271264.0000
Epoch 12/500
1/1 [==============================] - 0s 3ms/step - loss: 516395584.0000
Epoch 13/500
1/1 [==============================] - 0s 4ms/step - loss: 1835629312.0000
Epoch 14/500
1/1 [==============================] - 0s 3ms/step - loss: 6525110272.0000
Epoch 15/500
1/1 [==============================] - 0s 3ms/step - loss: 23194802176.0000
Epoch 16/500
1/1 [==============================] - 0s 3ms/step - loss: 82450513920.0000
Epoch 17/500
1/1 [==============================] - 0s 3ms/step - loss: 293086593024.0000
Epoch 18/500
1/1 [==============================] - 0s 5ms/step - loss: 1041834835968.0000
Epoch 19/500
1/1 [==============================] - 0s 3ms/step - loss: 3703408164864.0000
Epoch 20/500
1/1 [==============================] - 0s 3ms/step - loss: 13164500484096.0000
</code></pre>
<p>As you can see, it is increasing exponentially. Soon (at the 64th epoch), these numbers become <code>inf</code>. And then, from infinity, it does something and becomes <code>NaN</code> (Not a Number). I thought a model would get better at figuring out patterns over time, so what is going on?</p>
<p>One thing I noticed, if I reduce the length of <code>xs</code> and <code>ys</code> from 20 to 10, the loss decreases and becomes <code>7.9193e-05</code>. After I increase the length of both numpy arrays to <code>18</code> it starts increasing uncontrollably, otherwise it's fine. I gave 20 values because I thought the model will be better if I give more data, which is why I gave 20 values.</p> | 2021-10-28 13:52:03.493000+00:00 | 2021-11-12 19:01:35.547000+00:00 | null | python|tensorflow|keras|artificial-intelligence|loss-function | ['https://arxiv.org/pdf/1412.6980.pdf'] | 1 |
65,077,118 | <p>As far as I am aware, adversarial training (i.e., continuously training / fine-tuning on new adversarial images with correct labels) is the only robust defense to adversarial examples that cannot be completely overcome by some form of adversarial attack, (Please correct me if I'm wrong). There have been many other attempts to defend against adversarial examples, but typically there is a way around them if the attacker has an idea of what the defense is (For instance, see <a href="https://arxiv.org/pdf/1802.00420.pdf" rel="nofollow noreferrer">Obfuscated Gradients Give a False Sense of Security:
Circumventing Defenses to Adversarial Examples</a>).</p>
<p>Note that to truly obtain robustness with adversarial training, you have to generate adversarial examples during training, or continue updating with new adversarial images. As I understand it, this is because once you train on some adversarial examples, your model changes slightly, and while it is made robust to your initial adversarial examples, there still exist other adversarial examples that still target your newly trained / fine-tuned model. Adversarial training gradually changes your model to minimize the availability of effective adversarial perturbations.</p>
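<p>For concreteness, a rough sketch of one FGSM-based adversarial training step in TensorFlow 2 (the model, optimizer and epsilon are placeholders, and this is not tied to any particular paper's exact recipe):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm(model, x, y, eps=0.03):
    # Craft adversarial examples by stepping along the sign of the input gradient.
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

def adversarial_train_step(model, optimizer, x, y):
    # Generate fresh adversarial examples for the *current* model, then train on clean + adversarial data.
    x_adv = fgsm(model, x, y)
    x_all = tf.concat([x, x_adv], axis=0)
    y_all = tf.concat([y, y], axis=0)
    with tf.GradientTape() as tape:
        loss = loss_fn(y_all, model(x_all, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
</code></pre>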
<p>However, doing this can be at odds with accuracy (see <a href="https://arxiv.org/pdf/1805.12152.pdf" rel="nofollow noreferrer">Robustness May Be at Odds with Accuracy</a>). A model that is truly robust to adversarial examples may have a significantly lower accuracy for non-adversarial examples. Additionally, adversarial training may be hard to scale to datasets with larger images.</p> | 2020-11-30 16:12:30.980000+00:00 | 2020-11-30 16:49:28.780000+00:00 | 2020-11-30 16:49:28.780000+00:00 | null | 56,242,702 | <p>Maybe this is more a conceptual problem, but I hope you can give me your opinions. I do understand that adversarial training means introducing some corrupted instances into the training process in order to confuse the model and produce false predictions when testing. However, is this model applicable in the following scenario?:
Let's assume an adversarial patch is created to fool a classifier that detects a stop sign, so a normal object detector will not be able to recognize a real stop sign in the presence of this patch. But what if the model trains on instances both with and without patches? This is not so difficult to do for the object classifier, and then the attack loses all chances to succeed, right?
I do not get why those attacks can be successful if it would take the model only a bit more training to include those adversarial samples.</p> | 2019-05-21 16:29:49.253000+00:00 | 2020-11-30 16:49:28.780000+00:00 | null | computer-vision|generative-adversarial-network|adversarial-machines | ['https://arxiv.org/pdf/1802.00420.pdf', 'https://arxiv.org/pdf/1805.12152.pdf'] | 2
37,528,208 | <p>I'm just about to learn LSTMs in TensorFlow and try to implement an example which (luckily) tries to predict some time-series / number-series generated by a simple math function.</p>
<p>But I'm using a different way to structure the data for training, motivated by <a href="http://arxiv.org/pdf/1502.04681v3.pdf" rel="noreferrer">Unsupervised Learning of Video Representations using LSTMs</a>:</p>
<p><a href="http://i.stack.imgur.com/1Ngkk.png" rel="noreferrer">LSTM Future Predictor Model</a></p>
<p><strong>Option 5:</strong></p>
<pre><code>input data label
1,2,3,4 5,6,7,8
2,3,4,5 6,7,8,9
3,4,5,6 7,8,9,10
...
</code></pre>
<p>Besides this paper, I (tried to) take inspiration from the given TensorFlow RNN examples. My current complete solution looks like this:</p>
<pre><code>import math
import random
import numpy as np
import tensorflow as tf
LSTM_SIZE = 64
LSTM_LAYERS = 2
BATCH_SIZE = 16
NUM_T_STEPS = 4
MAX_STEPS = 1000
LAMBDA_REG = 5e-4
def ground_truth_func(i, j, t):
return i * math.pow(t, 2) + j
def get_batch(batch_size):
seq = np.zeros([batch_size, NUM_T_STEPS, 1], dtype=np.float32)
tgt = np.zeros([batch_size, NUM_T_STEPS], dtype=np.float32)
for b in xrange(batch_size):
i = float(random.randint(-25, 25))
j = float(random.randint(-100, 100))
for t in xrange(NUM_T_STEPS):
value = ground_truth_func(i, j, t)
seq[b, t, 0] = value
for t in xrange(NUM_T_STEPS):
tgt[b, t] = ground_truth_func(i, j, t + NUM_T_STEPS)
return seq, tgt
# Placeholder for the inputs in a given iteration
sequence = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_T_STEPS, 1])
target = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_T_STEPS])
fc1_weight = tf.get_variable('w1', [LSTM_SIZE, 1], initializer=tf.random_normal_initializer(mean=0.0, stddev=1.0))
fc1_bias = tf.get_variable('b1', [1], initializer=tf.constant_initializer(0.1))
# ENCODER
with tf.variable_scope('ENC_LSTM'):
lstm = tf.nn.rnn_cell.LSTMCell(LSTM_SIZE)
multi_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * LSTM_LAYERS)
initial_state = multi_lstm.zero_state(BATCH_SIZE, tf.float32)
state = initial_state
for t_step in xrange(NUM_T_STEPS):
if t_step > 0:
tf.get_variable_scope().reuse_variables()
# state value is updated after processing each batch of sequences
output, state = multi_lstm(sequence[:, t_step, :], state)
learned_representation = state
# DECODER
with tf.variable_scope('DEC_LSTM'):
lstm = tf.nn.rnn_cell.LSTMCell(LSTM_SIZE)
multi_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * LSTM_LAYERS)
state = learned_representation
logits_stacked = None
loss = 0.0
for t_step in xrange(NUM_T_STEPS):
if t_step > 0:
tf.get_variable_scope().reuse_variables()
# state value is updated after processing each batch of sequences
output, state = multi_lstm(sequence[:, t_step, :], state)
# output can be used to make next number prediction
logits = tf.matmul(output, fc1_weight) + fc1_bias
if logits_stacked is None:
logits_stacked = logits
else:
logits_stacked = tf.concat(1, [logits_stacked, logits])
loss += tf.reduce_sum(tf.square(logits - target[:, t_step])) / BATCH_SIZE
reg_loss = loss + LAMBDA_REG * (tf.nn.l2_loss(fc1_weight) + tf.nn.l2_loss(fc1_bias))
train = tf.train.AdamOptimizer().minimize(reg_loss)
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
total_loss = 0.0
for step in xrange(MAX_STEPS):
seq_batch, target_batch = get_batch(BATCH_SIZE)
feed = {sequence: seq_batch, target: target_batch}
_, current_loss = sess.run([train, reg_loss], feed)
if step % 10 == 0:
print("@{}: {}".format(step, current_loss))
total_loss += current_loss
print('Total loss:', total_loss)
print('### SIMPLE EVAL: ###')
seq_batch, target_batch = get_batch(BATCH_SIZE)
feed = {sequence: seq_batch, target: target_batch}
prediction = sess.run([logits_stacked], feed)
for b in xrange(BATCH_SIZE):
print("{} -> {})".format(str(seq_batch[b, :, 0]), target_batch[b, :]))
print(" `-> Prediction: {}".format(prediction[0][b]))
</code></pre>
<p>Sample output of this looks like this:</p>
<pre><code>### SIMPLE EVAL: ###
# [input seq] -> [target prediction]
# `-> Prediction: [model prediction]
[ 33. 53. 113. 213.] -> [ 353. 533. 753. 1013.])
`-> Prediction: [ 19.74548721 28.3149128 33.11489105 35.06603241]
[ -17. -32. -77. -152.] -> [-257. -392. -557. -752.])
`-> Prediction: [-16.38951683 -24.3657589 -29.49801064 -31.58583832]
[ -7. -4. 5. 20.] -> [ 41. 68. 101. 140.])
`-> Prediction: [ 14.14126873 22.74848557 31.29668617 36.73633194]
...
</code></pre>
<p>The model is an <em>LSTM autoencoder</em> whose encoder and decoder have 2 layers each.</p>
<p>Unfortunately, as you can see in the results, this model does not learn the sequence properly. It might be the case that I'm just making a silly mistake somewhere, or that 1000-10000 training steps are just way too few for an LSTM. As I said, I'm also just starting to understand/use LSTMs properly.
But hopefully this can give you some inspiration regarding the implementation.</p> | 2016-05-30 14:32:15.827000+00:00 | 2016-05-30 14:32:15.827000+00:00 | null | null | 35,961,216 | <p>I'm currently trying to build a simple model for predicting time series. The goal would be to train the model with a sequence so that the model is able to predict future values.</p>
<p>I'm using tensorflow and lstm cells to do so. The model is trained with truncated backpropagation through time. My question is how to structure the data for training.</p>
<p>For example let's assume we want to learn the given sequence:</p>
<pre><code>[1,2,3,4,5,6,7,8,9,10,11,...]
</code></pre>
<p>And we unroll the network for <code>num_steps=4</code>.</p>
<p><strong>Option 1</strong></p>
<pre><code>input data label
1,2,3,4 2,3,4,5
5,6,7,8 6,7,8,9
9,10,11,12 10,11,12,13
...
</code></pre>
<p><strong>Option 2</strong></p>
<pre><code>input data label
1,2,3,4 2,3,4,5
2,3,4,5 3,4,5,6
3,4,5,6 4,5,6,7
...
</code></pre>
<p><strong>Option 3</strong></p>
<pre><code>input data label
1,2,3,4 5
2,3,4,5 6
3,4,5,6 7
...
</code></pre>
<p><strong>Option 4</strong></p>
<pre><code>input data label
1,2,3,4 5
5,6,7,8 9
9,10,11,12 13
...
</code></pre>
<p>Any help would be appreciated.</p> | 2016-03-12 17:49:07.590000+00:00 | 2018-04-24 07:20:21.910000+00:00 | null | time-series|tensorflow|prediction|lstm | ['http://arxiv.org/pdf/1502.04681v3.pdf', 'http://i.stack.imgur.com/1Ngkk.png'] | 2 |
65,281,636 | <p>I will mention 3 "levels" at which you could solve this problem, assuming that you will be able to frame your problem statement accordingly. Please consider this answer as something you can use to get direction on how to solve this problem since the question you ask is not that specific and covers a very wide scope (usually against SO guidelines).</p>
<hr />
<p><strong>Traditional approaches</strong> involved using some DR (Dimensionality reduction) approaches such as PCA followed by Clustering such as Kmeans, Gaussian mixtures, Density-based methods, etc.</p>
<p><a href="https://i.stack.imgur.com/nxt6q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nxt6q.png" alt="enter image description here" /></a></p>
<p>The issue with these approaches was that they assumed that <em>the observed data was generated from a lower-dimensional latent space via simple linear transformations</em>. E.g. When using PCA on data, you assume that the data that you see comes from linear combinations of the 2 principal components. This works for a lot of datasets but more complex data is usually a result of non-linear transformations of lower-dimensional latent spaces.</p>
<hr />
<p><strong>More modern approaches</strong> handled this to some extent using DNNs as pre-processing followed by clustering methods. DNNs helped with the non-linearity as well as allowing for better low dimensional representations for data types such as sequences and images. This is usually what the majority of the baseline benchmark models are made on -</p>
<ol>
<li>Train an auto-encoder to regenerate the sequence</li>
<li>Take the bottleneck embedding/latent vector and use a clustering algorithm to cluster in this latent space.</li>
</ol>
<p><a href="https://i.stack.imgur.com/E6asN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E6asN.jpg" alt="enter image description here" /></a></p>
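<p>As a rough illustration of steps 1-2 above for one-hot-encoded sequences (the shapes, layer sizes and number of clusters are made up; <code>X</code> stands for the 500 schedules):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from tensorflow import keras
from sklearn.cluster import KMeans

T, F = 30, 4                                    # sequence length, classes per time step
inputs = keras.Input(shape=(T, F))
z = keras.layers.LSTM(16)(inputs)               # bottleneck embedding
x = keras.layers.RepeatVector(T)(z)
x = keras.layers.LSTM(16, return_sequences=True)(x)
outputs = keras.layers.TimeDistributed(keras.layers.Dense(F, activation="softmax"))(x)

autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, z)
autoencoder.compile(optimizer="adam", loss="categorical_crossentropy")

X = np.eye(F)[np.random.randint(0, F, size=(500, T))]   # placeholder one-hot sequences
autoencoder.fit(X, X, epochs=50, batch_size=32, verbose=0)

clusters = KMeans(n_clusters=5).fit_predict(encoder.predict(X))  # step 2: cluster the embeddings
</code></pre>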
<p>While these approaches work well, there is a flaw in these as well. Since no clustering-driven objective is explicitly incorporated in the learning process, <em>the learned DNNs do not necessarily output low dimensional data that are suitable for clustering</em>.</p>
<hr />
<p><strong>The latest research</strong> involves training DNNs along with a clustering loss so that it ensures that the latent space is clustering friendly. These algorithms give superior results to any of the above approaches. One of the SOTA approaches in this category is <a href="https://arxiv.org/pdf/1610.04794v1.pdf" rel="nofollow noreferrer">DCN (Deep clustering networks)</a>. DCNs combine the reconstruction loss of an autoencoder with a clustering loss. It defines a centroid-based target probability distribution (very similar to Kmeans but with student-t distribution) and minimizes its KL divergence against the model clustering result.</p>
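<p>The soft-assignment / target-distribution construction mentioned above looks roughly like this in NumPy (a simplified sketch of the DEC-style clustering loss, not the full training loop):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def soft_assignments(z, centroids, alpha=1.0):
    # Student's t kernel between embeddings z (n, d) and cluster centroids (k, d).
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    # Sharpen the assignments: emphasize high-confidence points, normalized by cluster frequency.
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

def clustering_loss(q):
    # KL divergence between the target distribution P and the current soft assignments Q.
    p = target_distribution(q)
    return np.sum(p * np.log(p / q))
</code></pre>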
<p><a href="https://i.stack.imgur.com/6U7Wr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6U7Wr.png" alt="enter image description here" /></a></p>
<p>Find more information <a href="https://deepnotes.io/deep-clustering" rel="nofollow noreferrer">here</a> and <a href="https://www.dlology.com/blog/how-to-do-unsupervised-clustering-with-keras/" rel="nofollow noreferrer">here</a>.</p>
<hr />
<p><strong>Specific to your case:</strong> You have a sequence vector with 4 features. You can build an LSTM based autoencoder to create initial embeddings and then use a clustering method to cluster the latent vector. Or if you are interested in DCNs, you can build a similar setup with an autoencoder and then use the clustering loss along with reconstruction loss to further train the encoder to generate clustering-friendly embeddings.</p> | 2020-12-13 23:05:47.787000+00:00 | 2020-12-13 23:23:34.790000+00:00 | 2020-12-13 23:23:34.790000+00:00 | null | 65,280,557 | <p>How you would cluster sequential information? I have about 500 sequences and some have the same characteristics. Is there anything like K-means for categorical sequential (temporal) data, or what would your approach look like?</p>
<p>These sequences are sequences of one-hot-encoded vectors which are representing classes. Consider for example the nurse-rostering problem with four classes: early-shift, day-shift, night-shift, home. The vectors look like this: [0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], this nurse works 2 days with the day-shift and is home the third day. But this "schedule" could depend on the parameters of the hospital, so I would like to cluster similar data. I have about 500 "schedules". Any ideas?</p> | 2020-12-13 20:53:10.707000+00:00 | 2020-12-13 23:23:34.790000+00:00 | 2020-12-13 22:36:15.163000+00:00 | machine-learning|deep-learning|cluster-analysis|sequence|sequence-analysis | ['https://i.stack.imgur.com/nxt6q.png', 'https://i.stack.imgur.com/E6asN.jpg', 'https://arxiv.org/pdf/1610.04794v1.pdf', 'https://i.stack.imgur.com/6U7Wr.png', 'https://deepnotes.io/deep-clustering', 'https://www.dlology.com/blog/how-to-do-unsupervised-clustering-with-keras/'] | 6 |
57,856,271 | <p>I think the popular solution is to encode functors as <a href="https://en.wikipedia.org/wiki/Container_(type_theory)" rel="nofollow noreferrer">"containers"</a>; the intro of this paper is a good starting point: <a href="https://arxiv.org/pdf/1805.08059.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1805.08059.pdf</a>. The idea is much older (the paper means to give a self-contained explanation), and you can chase the references from that paper, but what I found in a cursory search can be hard to follow if you're not familiar with type theory or category theory.</p>
<p>In short, instead of <code>Type -> Type</code>, we use the following type:</p>
<pre><code>Set Implicit Arguments.
Set Contextual Implicit.
Record container : Type := {
shape : Type;
pos : shape -> Type;
}.
</code></pre>
<p>where roughly, if you imagine a "base functor" <code>F</code> of a recursive type <code>Fix F</code>, <code>shape</code> describes the constructors of <code>F</code>, and for each constructor, <code>pos</code> enumerates the "holes" in it. So the base functor of <code>List</code></p>
<pre><code>data ListF a x
= NilF -- no holes
| ConsF a x -- one hole x
</code></pre>
<p>is given by this container:</p>
<pre><code>Inductive list_shape a :=
| NilF : list_shape a
| ConsF : a -> list_shape a.
Definition list_pos a (s : list_shape a) : Type :=
match s with
| NilF => False (* no holes *)
| ConsF _ => True (* one hole x *)
end.
Definition list_container a : container := {|
shape := list_shape a;
pos := fun s => list_pos s;
|}.
</code></pre>
<p>The point is that this container describes a strictly positive functor:</p>
<pre><code>Inductive ext (c : container) (a : Type) : Type := {
this_shape : shape c;
this_rec : pos c this_shape -> a;
}.
Definition listF a : Type -> Type := ext (list_container a).
</code></pre>
<p>so instead of <code>Fix f = f (Fix f)</code>, the fixpoint construction can take a container:</p>
<pre><code>Inductive Fix (c : container) : Type := MkFix : ext c (Fix c) -> Fix c.
</code></pre>
<p>Not all functors can be encoded as containers (the continuation functor being a prime example) but you don't often see them used with <code>Fix</code>.</p>
<p>Full gist: <a href="https://gist.github.com/Lysxia/21dd5fc7b79ced410b129f31ddf25c12" rel="nofollow noreferrer">https://gist.github.com/Lysxia/21dd5fc7b79ced410b129f31ddf25c12</a></p> | 2019-09-09 14:41:44.503000+00:00 | 2019-09-09 18:11:37.037000+00:00 | 2019-09-09 18:11:37.037000+00:00 | null | 57,849,746 | <p>I'd like to see a Coq version of the Bananas, Lenses, etc. They are built up in the excellent series of blog posts at <a href="https://blog.sumtypeofway.com/an-introduction-to-recursion-schemes/" rel="nofollow noreferrer">sumtypeofway Introduction to Recursion schemes</a></p>
<p>However, the blog post is in Haskell, which permits infinite non-terminating recursion, and thus is perfectly content with the <code>Y</code> combinator. Which Coq isn't.</p>
<p>In particular, the definitions depend on the type</p>
<pre><code>newtype Term f = In { out :: f (Term f) }
</code></pre>
<p>which builds infinite types <code>f (f (f (f ...)))</code>. <code>Term f</code> permits very pretty and succinct definitions for catamorphisms, paramorphisms, anamorphisms, etc., using the Term family of types.</p>
<p>Trying to port this to Coq as </p>
<pre><code>Inductive Term f : Type := {out:f (Term f)}.
</code></pre>
<p>gives me the expected </p>
<pre><code>Error: Non strictly positive occurrence of "Term" in "f (Term f) -> Term f".
</code></pre>
<p><em>Q: What would be a good way to formalize the above Haskell Term type in Coq?</em></p>
<p>Above <code>f</code> is of type <code>Type->Type</code>, but perhaps it is too general, and there could be some way to restrict us to inductive types such that each application of <code>f</code> is decreasing?</p>
<p>Perhaps someone has already implemented the recursion schemes from <a href="https://research.utwente.nl/en/publications/functional-programming-with-bananas-lenses-envelopes-and-barbed-w" rel="nofollow noreferrer">Banans, Lenses, Envelopes</a> in Coq?</p> | 2019-09-09 07:45:14+00:00 | 2019-09-09 18:11:37.037000+00:00 | 2019-09-09 08:03:31.383000+00:00 | coq|recursion-schemes | ['https://en.wikipedia.org/wiki/Container_(type_theory)', 'https://arxiv.org/pdf/1805.08059.pdf', 'https://gist.github.com/Lysxia/21dd5fc7b79ced410b129f31ddf25c12'] | 3 |
50,958,287 | <p>There is a technique where you can write your own loss function to focus on ranking metrics like AUC and Precision-Recall rather than classification losses like hinge loss or log-loss.</p>
<p>Refer to section 4 (Maximizing Recall at Fixed Precision) of the paper
Scalable Learning of Non-Decomposable Objectives (<a href="https://arxiv.org/pdf/1608.04802.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1608.04802.pdf</a>) for more details.</p> | 2018-06-20 23:13:01.947000+00:00 | 2018-06-20 23:13:01.947000+00:00 | null | null | 29,994,694 | <p>The performance of a machine learning classifier can be measured by a variety of metrics like precision, recall, and classification accuracy, among other metrics.</p>
<p>Given code like this:</p>
<pre><code>clf = svm.SVC(kernel='rbf')
clf.fit(X_train, y_train)
</code></pre>
<ol>
<li><p>What metric is the fit function trying to optimize?</p></li>
<li><p>How can the model be tuned to improve precision, when precision is much more important than recall?</p></li>
</ol> | 2015-05-01 20:32:44.600000+00:00 | 2019-12-02 23:37:40.190000+00:00 | 2019-12-02 23:37:40.190000+00:00 | machine-learning|scikit-learn|svm | ['https://arxiv.org/pdf/1608.04802.pdf'] | 1 |
32,276,707 | <p>Similar to the guava version, there is a BigIntegerMath class <a href="http://arxiv.org/src/0908.3030v2/anc" rel="nofollow">here</a> by Richard J. Mathar, referred to as org.nevec.rjm, which is the package of the classes. </p>
<p>Their implementation provides two signatures for the binomial method: int,int and BigInteger,BigInteger.</p> | 2015-08-28 17:33:14.537000+00:00 | 2015-08-28 17:33:14.537000+00:00 | null | null | 2,201,113 | <p>Is there a built in method in a java library that can compute 'N choose R' for any N, R?</p> | 2010-02-04 16:03:08.067000+00:00 | 2022-03-25 00:45:37.733000+00:00 | 2010-02-04 16:04:19.937000+00:00 | java|math|combinatorics | ['http://arxiv.org/src/0908.3030v2/anc'] | 1 |
71,047,606 | <p>The <a href="https://arxiv.org/pdf/1607.06450.pdf" rel="nofollow noreferrer">original</a> layer normalisation paper advised against using layer normalisation in CNNs, as receptive fields around the boundary of images will have different values as opposed to the receptive fields in the actual image content. This issue does not arise with RNNs, which is what layer norm was originally tested for. Are you sure you want to be using LayerNorm? If you're looking to compare a different normalisation technique against BatchNorm, consider <a href="https://arxiv.org/pdf/1803.08494.pdf" rel="nofollow noreferrer">GroupNorm</a>. This gets rid of the LayerNorm assumption that <em>all</em> channels in a layer contribute equally to a prediction, which is problematic particularly if the layer is convolutional. Instead, the channels are divided into groups, which still allows a GN layer to learn different statistics across channels.</p>
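<p>In the setup from the question this just means swapping the normalisation layer, e.g. (the number of groups is a hyperparameter and must divide the channel count):</p>
<pre class="lang-py prettyprint-override"><code>import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
    nn.GroupNorm(num_groups=8, num_channels=32),   # no dependence on the spatial size, unlike LayerNorm
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
    nn.GroupNorm(num_groups=8, num_channels=64),
    nn.ReLU(),
)
</code></pre>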
<p>Please refer <a href="https://discuss.pytorch.org/t/swapping-batchnorm-for-layernorm-in-resnet/112226" rel="nofollow noreferrer">here</a> for related discussion.</p> | 2022-02-09 10:08:22.117000+00:00 | 2022-02-09 10:21:48.030000+00:00 | 2022-02-09 10:21:48.030000+00:00 | null | 63,914,843 | <p>I am trying to use LayerNorm inside <a href="https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html#torch.nn.Sequential" rel="nofollow noreferrer">nn.Sequential</a> in torch. This is what I am looking for-</p>
<pre><code>import torch.nn as nn
class LayerNormCnn(nn.Module):
def __init__(self):
super(LayerNormCnn, self).__init__()
self.net = nn.Sequential(
nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
nn.LayerNorm(),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
nn.LayerNorm(),
nn.ReLU(),
)
def forward(self, x):
x = self.net(x)
return x
</code></pre>
<p>Unfortunately, it doesn't work because <a href="https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html#torch.nn.LayerNorm" rel="nofollow noreferrer">LayerNorm</a> requires <code>normalized_shape</code> as input. The code above throws following exception-</p>
<pre><code> nn.LayerNorm(),
TypeError: __init__() missing 1 required positional argument: 'normalized_shape'
</code></pre>
<p>Right now, this is how I have implemented it-</p>
<pre><code>import torch.nn as nn
import torch.nn.functional as F
class LayerNormCnn(nn.Module):
def __init__(self, state_shape):
super(LayerNormCnn, self).__init__()
self.conv1 = nn.Conv2d(state_shape[0], 32, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1)
# compute shape by doing a forward pass
with torch.no_grad():
fake_input = torch.randn(1, *state_shape)
out = self.conv1(fake_input)
bn1_size = out.size()[1:]
out = self.conv2(out)
bn2_size = out.size()[1:]
self.bn1 = nn.LayerNorm(bn1_size)
self.bn2 = nn.LayerNorm(bn2_size)
def forward(self, x):
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
return x
if __name__ == '__main__':
in_shape = (3, 128, 128)
batch_size = 32
model = LayerNormCnn(in_shape)
x = torch.randn((batch_size,) + in_shape)
out = model(x)
print(out.shape)
</code></pre>
<p>Is it possible to use LayerNorm inside nn.Sequential?</p> | 2020-09-16 07:06:44.943000+00:00 | 2022-02-09 10:21:48.030000+00:00 | 2020-09-16 09:06:34.987000+00:00 | python|neural-network|pytorch | ['https://arxiv.org/pdf/1607.06450.pdf', 'https://arxiv.org/pdf/1803.08494.pdf', 'https://discuss.pytorch.org/t/swapping-batchnorm-for-layernorm-in-resnet/112226'] | 3 |
60,178,043 | <p>It's almost certainly possible to do, but depending on how far down you want to go, you will need to write some parts in another language. The authors of <a href="https://arxiv.org/pdf/1901.10664.pdf" rel="nofollow noreferrer">this paper</a> [1] wrote a version of their user-space network driver <a href="https://github.com/ixy-languages/ixy-languages" rel="nofollow noreferrer">in Python</a>, but they used Cython for external memory management.</p>
<p>While using Python for this is probably possible, Python is much slower than many other languages. The authors of the paper I mentioned <a href="https://github.com/ixy-languages/ixy-languages" rel="nofollow noreferrer">implemented their driver in 10 languages and compared them</a>, and Python was the slowest—10 times slower than the next-slowest (but they do note (in [2]) that all of their drivers <em>except</em> the Python driver were optimized for performance).</p>
<p>Generally speaking, if you want to learn how to do systems programming, I recommend doing it in a systems language, like C or Rust. Traditionally, this type of code would most-often be written in C. If you want arguments in favor of using languages besides C, the same authors wrote <a href="https://www.net.in.tum.de/fileadmin/bibtex/publications/papers/the-case-for-writing-network-drivers-in-high-level-languages.pdf" rel="nofollow noreferrer">this paper</a> [2], which discusses why you would want to use higher-level languages (from Rust to Python) for writing a network driver.</p>
<p>In short, Python probably isn't the best language for this if you want it to be more than a toy project, but if you want to do it, the Python code from those papers is probably a good place to start on the lowest-level parts; in fact, the authors hoped to be helpful to others [2]:</p>
<blockquote>
<p>We provide primitives for PCIe driver development in Python that we hope to be helpful to others as this is the first PCIe driver in Python to our knowledge.</p>
</blockquote>
<hr>
<ol>
<li><p>P. Emmerich, M. Pudelko, S. Bauer, S. Huber, T. Zwickl, and G. Carle,
"User Space Network Drivers,"
in ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS 2019), 2019.
<a href="https://arxiv.org/abs/1901.10664" rel="nofollow noreferrer">arXiv:1901.10664</a> [cs.NI]</p></li>
<li><p>P. Emmerich, S. Ellmann, F. Bonk, A. Egger, E. G. Sánchez-Torija, T. Günzel, S. Di Luzio, A. Obada, M. Stadlmeier, S. Volt, and G. Carle,
"The Case for Writing Network Drivers in High-Level Programming Languages,"
in ANCS 2019.
<a href="https://arxiv.org/abs/1909.06344" rel="nofollow noreferrer">arXiv:1909.06344</a> [cs.NI]</p></li>
</ol> | 2020-02-11 22:02:36.490000+00:00 | 2020-04-24 00:36:19.823000+00:00 | 2020-04-24 00:36:19.823000+00:00 | null | 60,177,409 | <p>Looking to create a project which implements a <em>user-space networking stack</em>, so that a user space application has access to the network cards, I've never done this before and I was wondering if it would possible to get this close to the hardware using a language like <em>python</em> and, if not, which language would be best?</p> | 2020-02-11 21:09:23.267000+00:00 | 2020-04-24 00:36:19.823000+00:00 | null | python|networking|kernel|userspace | ['https://arxiv.org/pdf/1901.10664.pdf', 'https://github.com/ixy-languages/ixy-languages', 'https://github.com/ixy-languages/ixy-languages', 'https://www.net.in.tum.de/fileadmin/bibtex/publications/papers/the-case-for-writing-network-drivers-in-high-level-languages.pdf', 'https://arxiv.org/abs/1901.10664', 'https://arxiv.org/abs/1909.06344'] | 6 |
34,031,429 | <p>There are plenty of practical applications that depend on graph coloring. For example, <em>frequency assignment to cellular antennas</em> is a famous problem. In this problem, one can only use a small number of frequencies (colors), and must assign them to the antennas such that clients can always connect to one of the antennas. A problem arises when the coverage areas of several antennas overlap; in this case we want at least one antenna with a unique frequency (color). One cannot use a simple <em>proper (non-monotonic)</em> coloring in this problem. </p>
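<p>As a toy sketch of the plain (non-hypergraph) version of that problem, with antennas as nodes, overlapping coverage as edges, and colors as frequencies (using networkx; the graph here is made up):</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx

# An edge means the two antennas' coverage areas overlap.
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])

# Greedy proper coloring: overlapping antennas get different frequencies.
frequencies = nx.coloring.greedy_color(G, strategy="largest_first")
print(frequencies)                                # e.g. {'C': 0, 'A': 1, 'B': 2, 'D': 1}
print(max(frequencies.values()) + 1, "frequencies used")
</code></pre>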
<p>In the above case, we deal with <em>colouring of a hypergraph</em>. We define the hypergraph according to the coloring problem. There are different variants of colorings, like <em>conflict-free coloring</em> and <em>unique-maximum coloring</em>. Much research has been done for the static situation, where the hypergraph does not change over time. And there is a lot of interesting research going on in the <em>online setting</em>, where hyperedges may change over time (say some antenna stops working, or a new antenna is installed at some location).</p>
<p>You can also refer to the survey paper <strong><a href="http://arxiv.org/abs/1005.3616" rel="nofollow">[1]</a></strong> to read more about interesting coloring problems and their applications.</p> | 2015-12-01 22:28:12.903000+00:00 | 2015-12-01 22:28:12.903000+00:00 | null | null | 19,739,717 | <p>What is the practical application of graph coloring? In other words, why do we need to optimise the number of colors (some important feature of a problem) using such algorithms?</p> | 2013-11-02 07:03:34.423000+00:00 | 2015-12-02 00:11:12.523000+00:00 | 2015-12-02 00:11:12.523000+00:00 | algorithm|graph-algorithm | ['http://arxiv.org/abs/1005.3616'] | 1
63,414,435 | <p>There are actually two strategies for warmup, ref <a href="https://arxiv.org/abs/1706.02677" rel="nofollow noreferrer">here</a>.</p>
<ul>
<li><strong>constant</strong>: Use a lower learning rate than the base learning rate for the initial few steps.</li>
<li><strong>gradual</strong>: In the first few steps, the learning rate is set lower than the base learning rate and is increased gradually to approach it as the step number increases. As @Prune and @Patel suggested.</li>
</ul> | 2020-08-14 14:10:55.300000+00:00 | 2020-08-14 14:10:55.300000+00:00 | null | null | 55,933,867 | <p>In machine learning, especially deep learning, what does it mean to warm-up?</p>
<p>I've heard sometimes that in some models, warming-up is a phase in training. But honestly, I don't know what it is because I'm very new to ML. Until now I've never used or come across it, but I want to know it because I think it might be useful for me.</p>
<p><strong>What is learning rate warm-up and when do we need it?</strong></p> | 2019-05-01 09:01:57.817000+00:00 | 2020-08-15 11:38:32.900000+00:00 | 2020-08-15 11:38:32.900000+00:00 | machine-learning|neural-network|deep-learning|terminology | ['https://arxiv.org/abs/1706.02677'] | 1 |
46,710,660 | <p>Short answer: no they are not redundant.
The R-CNN article and its variants popularized the use of what we used to call a cascade.
Back then it was fairly common to use different detectors, often very similar in structure, for detection because of their complementary power.</p>
<p>If the detections are partly orthogonal, it allows removing false positives along the way.</p>
<p>Furthermore, by definition both parts of R-CNN have different roles: the first one is used to discriminate objects from the background, and the second one to discriminate fine-grained categories of objects from each other (and from the background as well).</p>
<p>But you are right: if there is only 1 class vs. the background, one could use only the RPN part to do detection. Even in that case, though, it would probably improve the result to chain two different classifiers (or not; see e.g. <a href="https://arxiv.org/abs/1607.07032" rel="noreferrer">this article</a>).</p>
<p><strong>PS:</strong> I answered because I wanted to but this question is definitely unsuited for stackoverflow</p> | 2017-10-12 13:17:15.697000+00:00 | 2018-03-29 12:25:27.090000+00:00 | 2018-03-29 12:25:27.090000+00:00 | null | 41,976,254 | <p>As we know, faster-RCNN has two main parts: one is region proposal network(RPN), and another one is fast-RCNN.</p>
<p>My question is, now that region proposal network(RPN) can output class scores and bounding boxes and is trainable, why do we need Fast-RCNN?</p>
<p>Am I thinking it right that the RPN is enough for detection (red circle), and Fast-RCNN is now becoming redundant (blue circle)?</p>
<p><a href="https://i.stack.imgur.com/RUJ2b.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RUJ2b.png" alt="enter image description here"></a></p> | 2017-02-01 09:32:41.077000+00:00 | 2020-05-22 23:00:46.667000+00:00 | 2018-05-15 12:27:07.560000+00:00 | machine-learning|computer-vision|object-detection | ['https://arxiv.org/abs/1607.07032'] | 1 |
68,486,414 | <p>I started with Julia Silge's blog post <a href="https://juliasilge.com/blog/mining-cran-description/" rel="nofollow noreferrer">here</a>:</p>
<pre class="lang-r prettyprint-override"><code>cran <- tools::CRAN_package_db()
desc_with_doi <- grep("doi:", cran$Description, value = TRUE)
</code></pre>
<p>Here are some examples:</p>
<blockquote>
<p>Given a protein multiple sequence alignment, it is daunting task to assess the effects of substitutions along sequence length. 'aaSEA' package is intended to help researchers to rapidly analyse property changes caused by single, multiple and correlated amino acid substitutions in proteins. Methods for identification of co-evolving positions from multiple sequence alignment are as described in : Pelé et al., (2017) <doi:10.4172/2379-1764.1000250>.</p>
</blockquote>
<p>or</p>
<blockquote>
<p>Estimate parameters of accumulated damage (load duration) models based on failure time data under a Bayesian framework, using Approximate Bayesian Computation (ABC). Assess long-term reliability under stochastic load profiles. Yang, Zidek, and Wong (2019) <doi:10.1080/00401706.2018.1512900>.</p>
</blockquote>
<p>Using a similar filter for "https" shows (unsurprisingly) a lot more generic website links than scholarly references, but e.g.:</p>
<blockquote>
<p>Designed for studies where animals tagged with acoustic tags are expected\n to move through receiver arrays. This package combines the advantages of automatic sorting and checking \n of animal movements with the possibility for user intervention on tags that deviate from expected \n behaviour. The three analysis functions (explore(), migration() and residency()) \n allow the users to analyse their data in a systematic way, making it easy to compare results from \n different studies.\n CJS calculations are based on Perry et al. (2012) <https://www.researchgate.net/publication/256443823_Using_mark-recapture_models_to_estimate_survival_from_telemetry_data>.</p>
</blockquote>
<p>ArXiv (there are only 24 packages with such links out of 17962 total at present):</p>
<blockquote>
<p>Provides functions for model fitting and selection of generalised hypergeometric ensembles of random graphs (gHypEG).\n To learn how to use it, check the vignettes for a quick tutorial.\n Please reference its use as Casiraghi, G., Nanumyan, V. (2019) doi:10.5281/zenodo.2555300\n together with those relevant references from the one listed below.\n The package is based on the research developed at the Chair of Systems Design, ETH Zurich.\n Casiraghi, G., Nanumyan, V., Scholtes, I., Schweitzer, F. (2016) <arXiv:1607.02441>.\n Casiraghi, G., Nanumyan, V., Scholtes, I., Schweitzer, F. (2017) <doi:10.1007/978-3-319-67256-4_11>.\n Casiraghi, G., (2017) <arxiv:1702.02048>\n Casiraghi, G., Nanumyan, V. (2018) <arXiv:1810.06495>.\n Brandenberger, L., Casiraghi, G., Nanumyan, V., Schweitzer, F. (2019) <doi:10.1145/3341161.3342926>\n Casiraghi, G. (2019) <doi:10.1007/s41109-019-0241-1>.</p>
</blockquote> | 2021-07-22 14:13:58.587000+00:00 | 2022-03-02 15:42:51.870000+00:00 | 2022-03-02 15:42:51.870000+00:00 | null | 68,485,926 | <p>I just submitted an R package to CRAN. I got this comment back:</p>
<pre><code>If there are references describing the methods in your package, please add these in the description field of your DESCRIPTION file in the form
authors (year) <doi:...>
authors (year) <arXiv:...>
authors (year, ISBN:...)
or if those are not available: <https:...>
with no space after 'doi:', 'arXiv:', 'https:' and angle brackets for auto-linking.
(If you want to add a title as well please put it in quotes: "Title")
</code></pre>
<p>But I thought that the <code>description</code> field is limited to one paragraph, which means you can't include additional text besides the single paragraph in that field. So I was unsure what the exact formatting is for including references in the description field. My guess is below, but this format returns a note stating that the description is malformed.</p>
<pre><code>Description: Text describing the package, blah blah blah.
More text goes here, etc etc etc.
Foo, B., and J. Baz. (1999) <doi:23232/xxxxx.00>
Smith, C. (2021) <https://something.etc/foo>
</code></pre>
<p>Note returned when running <code>R CMD check</code>:</p>
<pre><code>checking DESCRIPTION meta-information ... NOTE
Malformed Description field: should contain one or more complete sentences.
</code></pre>
<p><a href="https://stackoverflow.com/questions/58840531/cran-rejection-based-on-references-describing-the-methods-in-your-package">This question</a> is related but does not have a satisfactory answer so I am asking again.</p> | 2021-07-22 13:44:11.797000+00:00 | 2022-03-02 15:42:51.870000+00:00 | 2021-07-22 14:04:41.070000+00:00 | r|cran | ['https://juliasilge.com/blog/mining-cran-description/'] | 1 |
57,208,186 | <p>After poking around I found a <a href="https://arxiv.org/pdf/1708.02002.pdf" rel="nofollow noreferrer">research paper</a> introducing focal loss, and conveniently, a <a href="https://github.com/umbertogriffo/focal-loss-keras" rel="nofollow noreferrer">github</a> implementation of it for keras.</p>
<p>That, combined with @meowongac's suggestion (I used Google word2vec embeddings), resulted in much better sampling of words with lower frequencies.</p>
<p>I also, separately, used <code>class_weight</code> :</p>
<pre class="lang-py prettyprint-override"><code>model.fit_generator(batch_sequence_data(prediction_sequences,
BATCH_SIZE, SEQUENCE_LENGTH, VOCAB_SIZE),
steps_per_epoch = steps_per_epoch, epochs=30, callbacks=[earlystop],
class_weight = class_weight)
</code></pre>
<p>which I set inversely proportional to the word frequency.
Again, combined with using Google word embeddings it worked, in some sense, even better, bringing up words with lower frequencies.</p>
<p>For example, for a 10 word sequence:</p>
<p><code>['two', 'three', 'marines', 'sort', 'charges', 'pending', 'another', 'fight', 'week', 'interesting']</code></p>
<p>the focal loss approach with gamma = 5 predicted the next word <code>people</code>, the class_weight approach predicted <code>attorney</code></p> | 2019-07-25 18:29:24.797000+00:00 | 2019-07-25 18:29:24.797000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 57,154,583 | <p>I decided to try my hand at building a word prediction model using a recurrent neural network. There are a number of different examples online, including online courses, that make it sound that building such a model is fairly easy. Most of them use LSTM. Also, most, if not all, of them use a very small data set. I decided to try it with a larger data set, the 20 News Groups data set <code>from sklearn.datasets import fetch_20newsgroups</code>. I do some minimal preprocessing: removal of punctuation, stopwords and numbers. </p>
<p>I'm predicting a word based on the 10 preceding words history. I only use the posts that have at least 11 words. For each post I build a training set by taking a sliding window of size 11 and sliding it along the post. For each position the first 10 words are predictors and the 11th word is the target. I put together a simple model: Embedding layer, LSTM layer, and the output Dense layer. Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>def make_prediction_sequences(input_texts, max_nb_words, sequence_length = 10):
# input_texts is a list of strings/texts
# select top vocab_size words based on the word counts
# word_index is the dictionary used to transform the words into the tokens.
tokenizer = Tokenizer(oov_token='UNK',num_words=max_nb_words)
tokenizer.fit_on_texts(input_texts)
sequences = tokenizer.texts_to_sequences(input_texts)
prediction_sequences = []
for sequence in sequences:
if len(sequence) > sequence_length: # at least 1 for prediction
for j in range(0,len(sequence) - sequence_length):
prediction_sequences.append(sequence[j:sequence_length+j+1])
word_index = {e:i-1 for e,i in tokenizer.word_index.items() if i <= max_nb_words} # i-1 because tokenizer is 1 indexed
return (np.array(prediction_sequences) , word_index)
def batch_sequence_data(prediction_sequences, batch_size, sequence_length, vocab_size):
number_batches = int(len(prediction_sequences)/batch_size)
while True:
for i in range(number_batches):
X = prediction_sequences[i*batch_size:(i+1)*batch_size, 0:sequence_length]
Y = to_categorical(prediction_sequences[i*batch_size:(i+1)*batch_size, sequence_length], num_classes=vocab_size)
yield np.array(X),Y
VOCAB_SIZE = 15000
SEQUENCE_LENGTH = 10
BATCH_SIZE = 128
prediction_sequences, word_index = make_prediction_sequences(data, VOCAB_SIZE, sequence_length=SEQUENCE_LENGTH)
## define the model
EMBEDDING_DIM = 64
rnn_size = 32
sequence_input = Input(shape=(SEQUENCE_LENGTH,), dtype='int32', name='rnn_input')
embedding_layer = Embedding(len(word_index), EMBEDDING_DIM, input_length=SEQUENCE_LENGTH)
embedded_sequences = embedding_layer(sequence_input)
x = LSTM(rnn_size, use_bias=True)(embedded_sequences)
preds = Dense(VOCAB_SIZE, activation='softmax')(x)
model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['categorical_accuracy'])
#train the model
steps_per_epoch = len(prediction_sequences)/(BATCH_SIZE * SEQUENCE_LENGTH)
earlystop = EarlyStopping(patience=3, restore_best_weights=True,monitor='loss')
history = model.fit_generator(batch_sequence_data(prediction_sequences, BATCH_SIZE, SEQUENCE_LENGTH, VOCAB_SIZE),
steps_per_epoch = steps_per_epoch, epochs=30, callbacks=[earlystop])
</code></pre>
<p>The training achieves an accuracy of ~0.1. When I apply the model to predict words for the 10 word snippets from the training data, the output is overwhelmingly the most frequent word, 'one'. </p>
<p>I tried a more complex model with 2 LSTM layers, 2 Dense layers. I tried to use a pretrained word embedding using gensim word2vec model. Invariably the accuracy is ~0.1 and most of the time the prediction is 'one'. </p>
<p>When I thought about it, it kind of made sense. Predicting the most frequent class for imbalanced data gives high accuracy 'for free'. This is clearly a local minimum, but one that is hard to escape.
The thing is, the algorithm doesn't minimize the accuracy, it minimizes the loss, which is categoricall_crossentropy, and it is supposed to work just fine for imbalanced data. But, perhaps, it is not always true, and there a different loss that can be used to deal better with imbalanced data?</p> | 2019-07-22 22:40:10.360000+00:00 | 2019-07-25 18:29:24.797000+00:00 | null | python|tensorflow|keras|lstm|loss-function | ['https://arxiv.org/pdf/1708.02002.pdf', 'https://github.com/umbertogriffo/focal-loss-keras'] | 2 |
55,376,091 | <p>It is your lucky day as I have recently uploaded a PyTorch and TF implementation of the paper <a href="https://arxiv.org/abs/1805.03096" rel="nofollow noreferrer">Fast Dense Feature Extraction with CNNs with Pooling Layers</a>.</p>
<p>An approach to compute patch-based local feature descriptors efficiently, in the presence of pooling and striding layers, for whole images at once.</p>
<p>See <a href="https://github.com/erezposner/Fast_Dense_Feature_Extraction" rel="nofollow noreferrer">https://github.com/erezposner/Fast_Dense_Feature_Extraction</a> for details.</p>
<p>It contains simple instructions that will explain how to use the Fast Dense Feature Extraction (FDFE) project.</p>
<p>Good luck</p> | 2019-03-27 11:27:35.597000+00:00 | 2020-01-07 11:33:05.523000+00:00 | 2020-01-07 11:33:05.523000+00:00 | null | 55,126,493 | <p>I am trying to implement this paper in PyTorch <a href="https://www.dfki.de/fileadmin/user_upload/import/9245_FastCNNFeature_BMVC.pdf" rel="nofollow noreferrer">Fast Dense Feature Extractor</a> but I am having trouble converting the Torch implementation example they provide into PyTorch. </p>
<p>My attempt thus far has the issue that when adding an additional dimension to the feature map, the convolutional weights don't match the feature shape. How is this managed in Torch (from their implementation it seems that Torch doesn't care about this, but PyTorch does)? My code: <a href="https://gist.github.com/system123/c4b8ef3824f2230f181f8cfba84f0cfd" rel="nofollow noreferrer">https://gist.github.com/system123/c4b8ef3824f2230f181f8cfba84f0cfd</a></p>
<p>Any other solutions to this problem would be great too. Basically, I have a feature extractor that converts a 128x128 patch into an embedding and I'd like to apply this in a dense manner across a larger image without using a for loop to evaluate the CNN on each location as that has a lot of duplicate computation.</p> | 2019-03-12 16:34:51.113000+00:00 | 2021-08-04 15:55:58.040000+00:00 | 2021-08-04 15:55:58.040000+00:00 | python|pytorch | ['https://arxiv.org/abs/1805.03096', 'https://github.com/erezposner/Fast_Dense_Feature_Extraction'] | 2 |
60,004,080 | <p>Regarding (3) - it depends on the memory order used. If both the store and the RMW operation use <code>std::memory_order_seq_cst</code>, then both operations are ordered in some way - i.e., either the store happens before the RMW, or the other way round. If the store is ordered before the RMW, then it is guaranteed that the RMW operation "sees" the value that was stored. If the store is ordered after the RMW, it would overwrite the value written by the RMW operation.</p>
<p>If you use more relaxed memory orders, the modifications will still be ordered in some way (the modification order of the variable), but you have no guarantees on whether the RMW "sees" the value from the store operation - even if the RMW operation is ordered <em>after</em> the write in the variable's modification order.</p>
<p>In case you want to read yet another article I can refer you to <a href="https://arxiv.org/abs/1803.04432" rel="nofollow noreferrer">Memory Models for C/C++ Programmers</a>.</p> | 2020-01-31 12:37:04.530000+00:00 | 2020-01-31 12:37:04.530000+00:00 | null | null | 59,999,996 | <p>I have listened and read to several articles, talks and stackoverflow questions about <code>std::atomic</code>, and I would like to be sure that I have understood it well. Because I am still a bit confused with cache line writes visibility due to possible delays in MESI (or derived) cache coherency protocols, store buffers, invalidate queues, and so on. </p>
<p>I read x86 has a stronger memory model, and that if a cache invalidation is delayed x86 can revert started operations. But I am now interested only on what I should assume as a C++ programmer, independently of the platform.</p>
<p>[T1: thread1 T2: thread2 V1: shared atomic variable]</p>
<p>I understand that std::atomic guarantees that,</p>
<p>(1) No data races occur on a variable (thanks to exclusive access to the cache line).</p>
<p>(2) Depending which memory_order we use, it guarantees (with barriers) that sequential consistency happens (before a barrier, after a barrier or both).</p>
<p>(3) After an atomic write(V1) on T1, an atomic RMW(V1) on T2 will be coherent (its cache line will have been updated with the written value on T1).</p>
<p>But as <a href="https://fgiesen.wordpress.com/2014/07/07/cache-coherency/" rel="nofollow noreferrer">cache coherency primer</a> mention,</p>
<blockquote>
<p>The implication of all these things is that, by default, loads can fetch stale data (if a corresponding invalidation request was sitting in the invalidation queue)</p>
</blockquote>
<p>So, is the following correct?</p>
<p>(4) <code>std::atomic</code> does NOT guarantee that T2 won't read a 'stale' value on an atomic read(V) after an atomic write(V) on T1.</p>
<p>Questions if (4) is right: if the atomic write on T1 invalidates the cache line no matter the delay, why is T2 waiting for the invalidation to be effective when it does an atomic RMW operation but not on an atomic read?</p>
<p>Questions if (4) is wrong: when can a thread read a 'stale' value and "it's visible" in the execution, then?</p>
<p>I appreciate your answers a lot</p>
<p><strong>Update 1</strong></p>
<p>So it seems I was wrong on (3) then. Imagine the following interleave, for an initial V1=0:</p>
<pre><code>T1: W(1)
T2: R(0) M(++) W(1)
</code></pre>
<p>Even though T2's RMW is guaranteed to happen entirely after W(1) in this case, it can still read a 'stale' value (I was wrong). According to this, atomic doesn't guarantee full cache coherency, only sequential consistency.</p>
<p><strong>Update 2</strong></p>
<p>(5) Now imagine this example (x = y = 0 and are atomic):</p>
<pre><code>T1: x = 1;
T2: y = 1;
T3: if (x==1 && y==0) print("msg");
</code></pre>
<p>according to what we've talked, seeing the "msg" displayed on screen wouldn't give us information beyond that T2 was executed after T1. So either of the following executions might have happened:</p>
<ul>
<li>T1 < T3 < T2</li>
<li>T1 < T2 < T3 (where T3 sees x = 1 but not y = 1 yet)</li>
</ul>
<p>is that right?</p>
<p>(6) If a thread can always read 'stale' values, what would happen if we took the typical "publish" scenario but instead of signaling that some data is ready, we do just the opposite (delete the data)?</p>
<pre><code>T1: delete gameObjectPtr; is_enabled.store(false, std::memory_order_release);
T2: while (is_enabled.load(std::memory_order_acquire)) gameObjectPtr->doSomething();
</code></pre>
<p>where T2 would still be using a deleted ptr until it sees that is_enabled is false.</p>
<p>(7) Also, the fact that threads may read 'stale' values means that a <strong>mutex</strong> cannot be implemented with just one lock-free atomic right? It would require a synch mechanism between threads. Would it require a lockable atomic?</p> | 2020-01-31 08:01:22.053000+00:00 | 2020-01-31 15:50:58.963000+00:00 | 2020-01-31 15:29:24.610000+00:00 | c++|caching|concurrency|c++17|atomic | ['https://arxiv.org/abs/1803.04432'] | 1 |
46,261,576 | <p>I'd like to add that <a href="https://en.wikipedia.org/wiki/Bayesian_optimization" rel="nofollow noreferrer">Bayesian optimization</a> is a perfect example of an <em>adaptive random search</em>, so it looks like it's exactly what you want to apply.</p>
<p>The idea of Bayesian optimization is to model the target function using <a href="https://en.wikipedia.org/wiki/Gaussian_process" rel="nofollow noreferrer">Gaussian Processes</a> (GP), select the best next point according to the current model and update the model after seeing the actual outcome. So, effectively, Bayesian optimization starts like a random search, gradually builds a picture of what the function looks like and shifts its focus to the most promising areas (note that "promising" can be defined differently by different particular methods - PI, EI, UCB, etc). There are further techniques to help it find the right balance between exploration and exploitation, for example the <a href="https://arxiv.org/pdf/1009.5419.pdf" rel="nofollow noreferrer">portfolio strategy</a>. If that's what you mean by <em>adaptive</em>, then Bayesian optimization is your choice.</p>
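<p>To make this concrete, here is a minimal sketch of one such loop (maximization with a GP surrogate and the expected-improvement acquisition). It assumes scikit-learn and scipy are available; the objective <code>f</code>, the bounds and the hyperparameters are placeholders you would swap for your own cross-validated SVM score:</p>

<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                                    # black-box objective (placeholder)
    return -(x - 0.3) ** 2

bounds = (0.0, 1.0)
X = np.random.uniform(*bounds, size=(3, 1))  # a few random points to start
y = np.array([f(v[0]) for v in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(X, y)
    cand = np.random.uniform(*bounds, size=(1000, 1))   # random candidate pool
    mu, sigma = gp.predict(cand, return_std=True)
    imp = mu - y.max()
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)        # expected improvement
    x_next = cand[np.argmax(ei)]                        # exploration/exploitation trade-off
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next[0]))

print(X[np.argmax(y)], y.max())
</code></pre>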
<p>If you'd like to extend your code without external libraries, it's totally possible because Bayesian optimization is not that hard to implement. You can take a look at sample code that I used in <a href="https://github.com/maxim5/hyper-engine" rel="nofollow noreferrer">my research</a>, for example <a href="https://github.com/maxim5/hyper-engine/blob/master/hyperengine/bayesian/utility.py" rel="nofollow noreferrer">here</a> is the bulk of GP-related code.</p> | 2017-09-17 07:08:29.717000+00:00 | 2017-09-17 07:08:29.717000+00:00 | null | null | 37,332,499 | <p>Random search is one possibility for hyperparameter optimization in machine learning. I have applied random search to search for the best hyperparameters of a SVM classifier with a RBF kernel. Additional to the continuous Cost and gamma parameter, I have one discrete parameter and also an equality constraint over some parameters.</p>
<p>Now, I would like to develop random search further, e.g. through adaptive random search. That means for example adaptation of the search direction or of the search range.</p>
<p>Does somebody have an idea how this can be done or could reference to some existing work on this? Other ideas for improving random search are also welcome.</p> | 2016-05-19 19:33:29.813000+00:00 | 2017-09-17 07:08:29.717000+00:00 | null | machine-learning|mathematical-optimization|nonlinear-optimization|hyperparameters | ['https://en.wikipedia.org/wiki/Bayesian_optimization', 'https://en.wikipedia.org/wiki/Gaussian_process', 'https://arxiv.org/pdf/1009.5419.pdf', 'https://github.com/maxim5/hyper-engine', 'https://github.com/maxim5/hyper-engine/blob/master/hyperengine/bayesian/utility.py'] | 5 |
70,425,270 | <p>Heyo, sorry about that! Looks like I'm dumb.</p>
<p>Context: I was trying to incorporate the <a href="https://arxiv.org/abs/2001.02407" rel="nofollow noreferrer">SPACE detection model</a> into <a href="https://arxiv.org/abs/2010.02193" rel="nofollow noreferrer">DreamerV2</a>, and I didn't see the little footnote with Space:</p>
<blockquote>
<p>For some reason we were using BGR images for our Atari dataset and our
pretrained models can only handle that. Please convert the images to
BGR if you are to test your own Atari images with the provided
pretrained models.</p>
</blockquote>
<p>So yeah... if you see something like this, I guess this is what's wrong...</p> | 2021-12-20 16:52:04.290000+00:00 | 2021-12-20 16:52:04.290000+00:00 | null | null | 70,424,771 | <p>I'm trying to coordinate two systems; one that was already pre-trained on the MuJoCo MsPacman-v0 and another that only supports the gym version for training. With both systems working on the rgb image representations, the color palette discrepancy is problematic (Gym Output Left, Expected Right):</p>
<p><a href="https://i.stack.imgur.com/TvsNV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TvsNV.png" alt="enter image description here" /></a></p>
<p>Is there a simple way to fix this (i.e. pixel mapping trick or some environment setting I'm not aware of), or is there something more involved that I have to do? Of note, the actual simulation I'm running uses gym.</p> | 2021-12-20 16:14:14.880000+00:00 | 2021-12-20 16:53:48.450000+00:00 | 2021-12-20 16:53:48.450000+00:00 | python|python-imaging-library|reinforcement-learning|openai-gym | ['https://arxiv.org/abs/2001.02407', 'https://arxiv.org/abs/2010.02193'] | 2 |
28,924,133 | <p>According to <a href="http://arxiv.org/pdf/1503.01192.pdf" rel="nofollow">http://arxiv.org/pdf/1503.01192.pdf</a> it is "well known" that you cannot find the number of inversions more efficiently than O(n log n).</p> | 2015-03-08 07:29:18.303000+00:00 | 2015-03-08 07:29:18.303000+00:00 | null | null | 28,923,848 | <p>I am trying to count inversion in a array (two elements a[i] and a[j] form an inversion if a[i] > a[j] and i < j). I know that is easily possible to resolve these problems using brute force in O(n^2) and by using Divide and Conquer in O(nlgn). </p>
<p>My question is: is it possible to use a form of bucketing technique to achieve an efficiency of O(n) with knowledge about the data? For instance, I already know that the array is a permutation of 1-32, thus the maximum element is 32 (entailing that we can do something with bucketing). </p>
<p>I have been thinking about this and noticed that if we are inserting an element in a bucket, then the sum of all buckets greater than it at the time of insertion is its inversion count. But if we add the number of elements in each bucket every time, then it causes me to lose the O(n) efficiency. Any suggestions of how to keep a count to remove this penalty.</p>
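<p>(A sketch of this bucket idea, added for illustration and not from the original post: keeping one bucket per value and a Fenwick/BIT tree over the buckets makes each "how many inserted elements are greater than the new one" query O(log n), so the total stays O(n log n) rather than reaching O(n).)</p>

<pre class="lang-py prettyprint-override"><code>def count_inversions(perm):
    """Count inversions of a permutation of 1..n with a Fenwick (BIT) tree.
    Each bucket query/update is O(log n), so the total is O(n log n)."""
    n = len(perm)
    tree = [0] * (n + 1)

    def update(i):                 # add one element to bucket i
        while i <= n:
            tree[i] += 1
            i += i & (-i)

    def query(i):                  # how many inserted elements are <= i
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & (-i)
        return s

    inversions = 0
    for seen, value in enumerate(perm):
        inversions += seen - query(value)   # already-inserted elements > value
        update(value)
    return inversions

print(count_inversions([3, 1, 2]))  # 2
</code></pre>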
<p>Please note that the permutation can be of any length, but during execution we know the number of elements in the permutation. Thus the value of "n" is known during execution and the permutation consists of elements from "1" to "n".</p>
<p>Sorting : It is possible to sort this data set in O(n) time complexity, as we can create 32 buckets and we know that each bucket will have exactly one element. Thus the efficiency of bucket sort which is O(n + M) is O(n + 1) = O(n) for this specific example.</p> | 2015-03-08 06:43:59.817000+00:00 | 2015-03-08 19:58:58.133000+00:00 | 2015-03-08 19:58:58.133000+00:00 | algorithm|buckets|bucket-sort | ['http://arxiv.org/pdf/1503.01192.pdf'] | 1 |
66,000,444 | <p>I agree with you that the huge <code>Dense</code> layer (which has millions of parameters) might hinder the performance of the model. Instead of <em>inflating</em> the tabular data with a <code>Dense</code> layer, you could rather choose one of the following two approaches.</p>
<hr />
<p><strong>Option 1:</strong> Tile the <code>x_tab</code> tensor so that it matches your desired shape. This can be achieved with the following steps:</p>
<p>First, there is no need to flatten the <code>ConvLSTM2D</code>'s encoded tensor:</p>
<pre><code>x_input = Input(shape=(3, 128, 128, 3))
x = ConvLSTM2D(32, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x_input)
x = BatchNormalization()(x)
x = ConvLSTM2D(16, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x)
x = BatchNormalization()(x)
x = ConvLSTM2D(8, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x)
x = BatchNormalization()(x) # Shape=(None, None, 128, 128, 8)
# Commented: x = Flatten()(x)
</code></pre>
<p>Second, you can process your tabular data with one or several <code>Dense</code> layers. For example:</p>
<pre><code>dim = 10
x_tab_input = Input(shape=(5))
x_tab = Dense(100, activation="relu")(x_tab_input)
x_tab = Dense(dim, activation="relu")(x_tab)
# x_tab = Flatten()(x_tab) # Note: Flattening a 2D tensor leaves the tensor unchanged
</code></pre>
<p>Third, we wrap the tensorflow operation <a href="https://www.tensorflow.org/api_docs/python/tf/tile" rel="nofollow noreferrer">tf.tile</a> in a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Lambda" rel="nofollow noreferrer">Lambda</a> layer, effectively creating copies of the tensor <code>x_tab</code> so that it matches the desired shape:</p>
<pre><code>def repeat_tabular(x_tab):
h = x_tab[:, None, None, None, :] # Shape=(bs, 1, 1, 1, dim)
h = tf.tile(h, [1, 3, 128, 128, 1]) # Shape=(bs, 3, 128, 128, dim)
return h
x_tab = Lambda(repeat_tabular)(x_tab)
</code></pre>
<p>Finally, we concatenate the <code>x</code> and the tiled <code>x_tab</code> tensors along the last axis (you might also consider concatenating along the first axis, corresponding to the channels' dimension)</p>
<pre><code>concat = Concatenate(axis=-1)([x, x_tab]) # Shape=(3,128,128,8+dim)
output = concat
output = Conv3D(filters=3, kernel_size=(3, 3, 3), activation='relu', padding="same")(output)
# ...
</code></pre>
<p>Note that this solution might be a bit naive in the sense that the model is not encoding the input sequence of images into a low-dimensional representation, limiting the receptive field of the network and potentially resulting in degraded performance.</p>
<hr />
<p><strong>Option 2:</strong> Similar to autoencoders and <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-Net</a>, it might be desirable to encode your sequence of images into a low-dimensional representation in order to discard the unwanted variation (e.g. noise) while preserving the meaningful signal (e.g. required to infer the next 3 images of the sequence). This can be achieved as follows:</p>
<p>First, encode the input sequence of images into a low-dimension 2-dimensional tensor. For example, something along the lines of:</p>
<pre><code>x_input = Input(shape=(None, 128, 128, 3))
x = ConvLSTM2D(32, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x_input)
x = BatchNormalization()(x)
x = ConvLSTM2D(16, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x)
x = BatchNormalization()(x)
x = ConvLSTM2D(8, 3, strides = 1, padding='same', dilation_rate = 2, return_sequences=False)(x)
x = BatchNormalization()(x)
x = Flatten()(x)
x = Dense(64, activation='relu')(x)
</code></pre>
<p>Note that the last <code>ConvLSTM2D</code> is not returning the sequences. You might want to explore different encoders to arrive at this point (e.g. you could also use pooling layers here).</p>
<p>Second, process your tabular data with <code>Dense</code> layers. For example:</p>
<pre><code>dim = 10
x_tab_input = Input(shape=(5))
x_tab = Dense(100, activation="relu")(x_tab_input)
x_tab = Dense(dim, activation="relu")(x_tab)
</code></pre>
<p>Third, concatenate the data from the previous two streams:</p>
<pre><code>concat = Concatenate(axis=-1)([x, x_tab])
</code></pre>
<p>Fourth, use a <code>Dense</code> + <code>Reshape</code> layer to project the concatenated vectors into a sequence of low-resolution images:</p>
<pre><code>h = Dense(3 * 32 * 32 * 3)(concat)
output = Reshape((3, 32, 32, 3))(h)
</code></pre>
<p>The shape of <code>output</code> allows to up-sample the images into a shape of <code>(128, 128, 3)</code>, but it is otherwise arbitrary (e.g. you might also want to experiment here).</p>
<p>Finally, apply one or several <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv3DTranspose" rel="nofollow noreferrer">Conv3DTranspose</a> layers to get to the desired output (e.g. 3 images of shape <code>(128, 128, 3)</code>).</p>
<pre><code>output = tf.keras.layers.Conv3DTranspose(filters=50, kernel_size=(3, 3, 3),
strides=(1, 2, 2), padding='same',
activation='relu')(output)
output = tf.keras.layers.Conv3DTranspose(filters=3, kernel_size=(3, 3, 3),
strides=(1, 2, 2), padding='same',
activation='relu')(output) # Shape=(None, 3, 128, 128, 3)
</code></pre>
<p>The rationale behind <em>transposed</em> convolution layers is discussed <a href="https://machinelearningmastery.com/upsampling-and-transpose-convolution-layers-for-generative-adversarial-networks/" rel="nofollow noreferrer">here</a>. Essentially, the <code>Conv3DTranspose</code> layer goes in the opposite direction of normal convolutions - it allows upsampling your low-resolution images into high-resolution images.</p> | 2021-02-01 21:31:46.543000+00:00 | 2021-02-02 08:21:19.320000+00:00 | 2021-02-02 08:21:19.320000+00:00 | null | 65,963,752 | <p>I have built a model that takes 3 images of a time series along with 5 numerical information as input and produces the next three images of the time series.
I accomplished this by:</p>
<ol>
<li>Build a ConvLSTM2D model for processing the images (pretty similar to the example listed on Keras documentation <a href="https://keras.io/examples/vision/conv_lstm/" rel="noreferrer">here</a>). Input size=(3x128x128x3)</li>
<li>Build a simple model for tabular data with a few Dense layers. Input size=(1,5)</li>
<li>Concatenate these two models</li>
<li>Have a Conv3D model that produces the next 3 images</li>
</ol>
<p>The LSTM models produces output of size 393216 (3x128x128x8). Now I had to set the output of tabular model to 49,152 so that I can have the input size of 442368 (3x128x128x9) in the next layer. So this unnecessary inflation of tabular model's Dense layer makes the otherwise efficient LSTM model perform awfully.</p>
<p>Is there a better way to concatenate the two models? Is there a way I can just have an output of 10 in the tabular model's Dense layer?</p>
<p>The model:</p>
<pre><code>x_input = Input(shape=(None, 128, 128, 3))
x = ConvLSTM2D(32, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x_input)
x = BatchNormalization()(x)
x = ConvLSTM2D(16, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x)
x = BatchNormalization()(x)
x = ConvLSTM2D(8, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x)
x = BatchNormalization()(x)
x = Flatten()(x)
# x = MaxPooling3D()(x)
x_tab_input = Input(shape=(5))
x_tab = Dense(100, activation="relu")(x_tab_input)
x_tab = Dense(49152, activation="relu")(x_tab)
x_tab = Flatten()(x_tab)
concat = Concatenate()([x, x_tab])
output = Reshape((3,128,128,9))(concat)
output = Conv3D(filters=3, kernel_size=(3, 3, 3), activation='relu', padding="same")(output)
model = Model([x_input, x_tab_input], output)
model.compile(loss='mae', optimizer='rmsprop')
</code></pre>
<p>Model Summary:</p>
<pre><code>Model: "functional_3"
______________________________________________________________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
======================================================================================================================================================
input_4 (InputLayer) [(None, None, 128, 128, 3)] 0
______________________________________________________________________________________________________________________________________________________
conv_lst_m2d_9 (ConvLSTM2D) (None, None, 128, 128, 32) 40448 input_4[0][0]
______________________________________________________________________________________________________________________________________________________
batch_normalization_9 (BatchNormalization) (None, None, 128, 128, 32) 128 conv_lst_m2d_9[0][0]
______________________________________________________________________________________________________________________________________________________
conv_lst_m2d_10 (ConvLSTM2D) (None, None, 128, 128, 16) 27712 batch_normalization_9[0][0]
______________________________________________________________________________________________________________________________________________________
batch_normalization_10 (BatchNormalization) (None, None, 128, 128, 16) 64 conv_lst_m2d_10[0][0]
______________________________________________________________________________________________________________________________________________________
input_5 (InputLayer) [(None, 5)] 0
______________________________________________________________________________________________________________________________________________________
conv_lst_m2d_11 (ConvLSTM2D) (None, None, 128, 128, 8) 6944 batch_normalization_10[0][0]
______________________________________________________________________________________________________________________________________________________
dense (Dense) (None, 100) 600 input_5[0][0]
______________________________________________________________________________________________________________________________________________________
batch_normalization_11 (BatchNormalization) (None, None, 128, 128, 8) 32 conv_lst_m2d_11[0][0]
______________________________________________________________________________________________________________________________________________________
dense_1 (Dense) (None, 49152) 4964352 dense[0][0]
______________________________________________________________________________________________________________________________________________________
flatten_3 (Flatten) (None, None) 0 batch_normalization_11[0][0]
______________________________________________________________________________________________________________________________________________________
flatten_4 (Flatten) (None, 49152) 0 dense_1[0][0]
______________________________________________________________________________________________________________________________________________________
concatenate (Concatenate) (None, None) 0 flatten_3[0][0]
flatten_4[0][0]
______________________________________________________________________________________________________________________________________________________
reshape_2 (Reshape) (None, 3, 128, 128, 9) 0 concatenate[0][0]
______________________________________________________________________________________________________________________________________________________
conv3d_2 (Conv3D) (None, 3, 128, 128, 3) 732 reshape_2[0][0]
======================================================================================================================================================
Total params: 5,041,012
Trainable params: 5,040,900
Non-trainable params: 112
______________________________________________________________________________________________________________________________________________________
</code></pre> | 2021-01-30 01:21:02.463000+00:00 | 2021-02-02 08:21:19.320000+00:00 | null | tensorflow|machine-learning|keras|lstm | ['https://www.tensorflow.org/api_docs/python/tf/tile', 'https://www.tensorflow.org/api_docs/python/tf/keras/layers/Lambda', 'https://arxiv.org/abs/1505.04597', 'https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv3DTranspose', 'https://machinelearningmastery.com/upsampling-and-transpose-convolution-layers-for-generative-adversarial-networks/'] | 5 |
52,503,096 | <p>This depends on what you are looking to do in your application.
If you are just looking to process the 3D image in terms of slices, then defining a <a href="https://keras.io/layers/wrappers/" rel="nofollow noreferrer">TimeDistributed</a> VGG16 network (Conv2D instead of Conv3D) would be the way to go. </p>
<p>The model then becomes something like this for each layer you define above:</p>
<pre><code>img_input = Input((100,80,80,3))
x = TimeDistributed(Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1', trainable=False))(img_input)
x = TimeDistributed(Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2', trainable=False))(x)
x = TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool', trainable=False))(x)
...
...
</code></pre>
<p>Note that I have included the option 'trainable=False' over here. This is pretty useful if you only want to train the deeper layers and freeze the lower layers with the well trained weights of VGG.</p>
<p>To load the VGG weights for the model, you can then use the <a href="https://keras.io/models/about-keras-models/" rel="nofollow noreferrer">load_weights</a> function of Keras.</p>
<p><code>model.load_weights(filepath, by_name=True)</code></p>
<p>If you set the layer names which you do not want to train to be the same as what is defined in the <a href="https://keras.io/applications/#vgg16" rel="nofollow noreferrer">VGG16</a>, then you can simply load those layers by name over here. </p>
<p>However, spatio-temporal feature learning is something that can potentially be done much better by using 3D ConvNets.
If this is the basis of your application, then you cannot directly import VGG16 weights into a Conv3D model, because the number of parameters in each layer increases as the filter goes from being a 3*3 to a 3*3*3, for example.</p>
<p>You could still load the weights layer by layer into the model by considering which 3*3 patch of the 3*3*3 kernel would be most suitable for initialization with VGG16 weights. The <a href="https://keras.io/layers/about-keras-layers/" rel="nofollow noreferrer">set_weights()</a> function takes as input a list of numpy arrays (for the kernel weights and the bias respectively). You can extract each layer's weights from VGG16, construct a new numpy array for an equivalent Conv3D weight matrix, and feed it to your Conv3D model.</p>
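<p>As an illustrative sketch (the <code>model3d</code> variable stands for the Conv3D network from your question with matching layer names, which is an assumption on my part), one common choice is to repeat each 3*3 VGG16 kernel along the new depth axis and divide by the depth so activation magnitudes stay comparable:</p>

<pre class="lang-py prettyprint-override"><code>import numpy as np
from keras.applications import VGG16

vgg = VGG16(weights='imagenet', include_top=False)

def inflate_conv_weights(kernel2d, bias, depth=3):
    # kernel2d: (kh, kw, in_ch, out_ch) -> kernel3d: (depth, kh, kw, in_ch, out_ch)
    kernel3d = np.repeat(kernel2d[np.newaxis, ...], depth, axis=0) / depth
    return [kernel3d, bias]

# model3d is assumed to be a Model built from the Conv3D layers in your question
for layer in vgg.layers:
    if 'conv' in layer.name:
        kernel2d, bias = layer.get_weights()
        model3d.get_layer(layer.name).set_weights(
            inflate_conv_weights(kernel2d, bias, depth=3))
</code></pre>

<p>Dividing by the depth is just one reasonable choice (it keeps the response of the inflated filter to a temporally constant input equal to the 2D filter's response); copying the 2D kernel only into the central depth slice is another.</p>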
<p>But I would encourage you to look at existing literature and models for processing 3D images to see if those can give you a better initialization using transfer learning.</p>
<p>For example, <a href="https://arxiv.org/abs/1412.0767" rel="nofollow noreferrer">C3D</a> is one such popular model. <a href="https://www.shapenet.org/" rel="nofollow noreferrer">ShapeNet</a> and <a href="http://cvgl.stanford.edu/projects/pascal3d.html" rel="nofollow noreferrer">Pascal3D</a> are popular 3D datasets.</p>
<p><a href="https://stackoverflow.com/a/42635571/10111931">This discussion</a> on how to process video data might also be useful to give you better insights on how to proceed.</p> | 2018-09-25 16:44:24.720000+00:00 | 2018-09-25 16:44:24.720000+00:00 | null | null | 52,446,965 | <p>I'm trying to implement a 3D convnet followed by LSTM layer for sequence generation using 3D images as input , on Keras with Tensorflow backend.</p>
<p>I would like to start training with weights of an existing pre-trained model in order to avoid common issues with random initialization .</p>
<p>In order to start with a basic example, I took VGG-16 and I implemented a "3D" version of this network (without the FC layers):</p>
<pre><code>img_input = Input((100,80,80,3))
x = Conv3D(64, (3, 3 ,3), activation='relu', padding='same', name='block1_conv1')(img_input)
x = Conv3D(64, (3, 3 ,3), activation='relu', padding='same', name='block1_conv2')(x)
x = MaxPooling3D((1, 2, 2), strides=(1, 2, 2), name='block1_pool')(x)
x = Conv3D(128, (3, 3 ,3), activation='relu', padding='same', name='block2_conv1')(x)
x = Conv3D(128, (3, 3 ,3), activation='relu', padding='same', name='block2_conv2')(x)
x = MaxPooling3D((1, 2 ,2), strides=(1,2, 2), name='block2_pool')(x)
x = Conv3D(256, (3, 3 ,3), activation='relu', padding='same', name='block3_conv1')(x)
x = Conv3D(256, (3, 3 , 3), activation='relu', padding='same', name='block3_conv2')(x)
x = Conv3D(256, (3, 3, 3), activation='relu', padding='same', name='block3_conv3')(x)
x = MaxPooling3D((1, 2 ,2), strides=(1,2, 2), name='block3_pool')(x)
x = Conv3D(512, (3, 3 ,3), activation='relu', padding='same', name='block4_conv1')(x)
x = Conv3D(512, (3, 3 ,3), activation='relu', padding='same', name='block4_conv2')(x)
x = Conv3D(512, (3, 3 ,3), activation='relu', padding='same', name='block4_conv3')(x)
x = MaxPooling3D((1, 2 ,2), strides=(1, 2, 2), name='block4_pool')(x)
x = Conv3D(512, (3, 3 ,3), activation='relu', padding='same', name='block5_conv1')(x)
x = Conv3D(512, (3, 3 ,3), activation='relu', padding='same', name='block5_conv2')(x)
x = Conv3D(512, (3, 3 ,3), activation='relu', padding='same', name='block5_conv3')(x)
x = MaxPooling3D((1, 2 ,2), strides=(1, 2, 2), name='block5_pool')(x)
</code></pre>
<p>So I would like to know how can I load the weights of the pretrained VGG-16 into each one of the 100 slices (my 3D images are composed by 100 80x80 rgb slices) ,</p>
<p>Any advise you can give to me would be useful,</p>
<p>Thanks</p> | 2018-09-21 15:32:48.420000+00:00 | 2018-09-25 16:44:24.720000+00:00 | null | python-3.x|tensorflow|keras|conv-neural-network|transfer-learning | ['https://keras.io/layers/wrappers/', 'https://keras.io/models/about-keras-models/', 'https://keras.io/applications/#vgg16', 'https://keras.io/layers/about-keras-layers/', 'https://arxiv.org/abs/1412.0767', 'https://www.shapenet.org/', 'http://cvgl.stanford.edu/projects/pascal3d.html', 'https://stackoverflow.com/a/42635571/10111931'] | 8 |
37,128,959 | <p>If you have some unlabeled training data, you could add a <em>dustbin class</em> that contains all your unlabeled data. In your example this class would have the interpretation "not one of the colors green, blue or red". This approach is described in detail in <a href="http://arxiv.org/abs/1511.03719" rel="nofollow">http://arxiv.org/abs/1511.03719</a></p> | 2016-05-10 03:47:53.427000+00:00 | 2016-05-10 03:47:53.427000+00:00 | null | null | 37,127,941 | <p>I'm planning to use Python Scikit to do some text classification, and was planning to use using TfidfVectorizer and MultinomialNB.</p>
<p>but I realized that MultinomialNB will always predict my sample into an existing (known) category. </p>
<p>for example, if I have:</p>
<pre><code>category A: trained with sample "this is green"
category B: trained with sample "this is blue"
category C: trained with sample "this is red"
</code></pre>
<p>and I try to predict: <code>"this is yellow"</code></p>
<p>it will give me <code>category A</code> (or any other, because the probablity is the same for all categories in this case).</p>
<p>my question is: is there a classifier that would give me "unknown" (or none, or false, or error) for the test case above?</p>
<p>I would like to know when my test case could not be predicted with the given training set.</p>
<p>I think I could check if <code>my_classifier.predict_proba(X_test)</code> returns an array with all equal or close values (in this example case: <code>[[ 0.33333333  0.33333333  0.33333333]]</code>). </p>
<p>actually, I would have to check if the values are close to their defaults, because the probabilities might not be the same for each category :)</p>
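<p>A sketch of this thresholding idea (added for illustration; the margin is arbitrary and would need tuning, and <code>class_log_prior_</code> is the fitted log-prior attribute of MultinomialNB):</p>

<pre class="lang-py prettyprint-override"><code>import numpy as np

def predict_with_unknown(clf, X, margin=0.1):
    proba = clf.predict_proba(X)
    priors = np.exp(clf.class_log_prior_)        # fitted class priors of MultinomialNB
    best = proba.argmax(axis=1)
    labels = clf.classes_[best].astype(object)
    # fall back to 'unknown' when the winning posterior is not clearly above its prior
    labels[proba.max(axis=1) < priors[best] + margin] = 'unknown'
    return labels
</code></pre>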
<p>so... any better approach or... is there a classifier with some confidence threshold I could use?</p> | 2016-05-10 01:37:05.707000+00:00 | 2019-01-08 05:17:50.117000+00:00 | 2019-01-08 05:17:50.117000+00:00 | scikit-learn|text-classification|naivebayes | ['http://arxiv.org/abs/1511.03719'] | 1 |
67,536,549 | <p>I suspect this comes from two different uses of the word <a href="https://openreview.net/pdf/25b8eee6c373d48b84e5e9c6e10e7cbbbce4ac73.pdf" rel="nofollow noreferrer">'tape'</a> in the context of automatic differentiation.</p>
<p>When people say that <a href="/questions/tagged/pytorch" class="post-tag" title="show questions tagged 'pytorch'" rel="tag">pytorch</a> is not tape-based, they mean it uses Operator Overloading as opposed to [tape-based] Source Transformation for automatic differentiation.</p>
<blockquote>
<p>[<strong>Operator overloading</strong>] relies on a language’s ability to redefine the meaning of functions and operators. All primitives are overloaded so that they additionally perform a tracing operation: The primitive is logged onto a ‘tape’, along with its inputs to ensure that those intermediate variables are kept alive. At the end of the function’s execution, this tape contains a linear trace of all the numerical operations in the program. Derivatives can be calculated by walking this tape in reverse. [...]<br />
OO is the technique used by PyTorch, Autograd, and Chainer [37].</p>
<p>...</p>
<p><strong>Tape-based</strong> Frameworks such as ADIFOR [8] and Tapenade [20] for Fortran and C use a global stack also called a ‘tape’<sup>2</sup> to ensure that intermediate variables are kept alive. The original (primal) function is augmented so that it writes intermediate variables to the tape during the forward pass, and the adjoint program will read intermediate variables from the tape during the backward pass. More recently, tape-based ST was implemented for Python in the ML framework Tangent [38].</p>
<p>...</p>
<p><sup><sup>2</sup> The tape used in ST stores only the intermediate variables, whereas the tape in OO is a program trace that stores the executed primitives as well.</sup></p>
</blockquote>
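<p>To make the operator-overloading flavour of 'tape' concrete, here is a deliberately tiny, hand-rolled sketch (not PyTorch's actual implementation or API): every overloaded primitive appends a backward closure to a shared tape, and <code>backward()</code> simply walks that tape in reverse:</p>

<pre class="lang-py prettyprint-override"><code>class Var:
    def __init__(self, value, tape=None):
        self.value, self.grad = value, 0.0
        self.tape = tape if tape is not None else []

    def _acc(self, g):
        self.grad += g

    def __mul__(self, other):
        out = Var(self.value * other.value, self.tape)
        # record the primitive: how to push out's adjoint back to its inputs
        self.tape.append(lambda: (self._acc(out.grad * other.value),
                                  other._acc(out.grad * self.value)))
        return out

    def __add__(self, other):
        out = Var(self.value + other.value, self.tape)
        self.tape.append(lambda: (self._acc(out.grad), other._acc(out.grad)))
        return out

    def backward(self):
        self.grad = 1.0
        for op in reversed(self.tape):   # walk the recorded trace in reverse
            op()

x = Var(2.0)
y = Var(3.0, x.tape)                     # share one tape
z = x * y + x
z.backward()
print(x.grad, y.grad)                    # 4.0 (= y + 1) and 2.0 (= x)
</code></pre>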
<ul>
<li><a href="https://arxiv.org/abs/1810.11530" rel="nofollow noreferrer">Automatic differentiation in ML: Where we are and where we should be going</a></li>
</ul> | 2021-05-14 15:03:34.553000+00:00 | 2021-10-26 00:55:08.870000+00:00 | 2021-10-26 00:55:08.870000+00:00 | null | 64,856,195 | <p>I understand <code>autograd</code> is used to imply automatic differentiation. But what exactly is <code>tape-based autograd</code> in <code>Pytorch</code> and why there are so many discussions that affirm or deny it.</p>
<p>For example:</p>
<p><a href="https://discuss.pytorch.org/t/is-pytorch-autograd-tape-based/13992" rel="noreferrer">this</a></p>
<blockquote>
<p>In pytorch, there is no traditional sense of tape</p>
</blockquote>
<p>and <a href="https://discuss.pytorch.org/t/get-the-gradient-tape/62886" rel="noreferrer">this</a></p>
<blockquote>
<p>We don’t really build gradient tapes per se. But graphs.</p>
</blockquote>
<p>but not <a href="https://pytorch.org/tutorials/beginner/former_torchies/autograd_tutorial.html" rel="noreferrer">this</a></p>
<blockquote>
<p>Autograd is now a core torch package for automatic differentiation. It
uses a tape based system for automatic differentiation.</p>
</blockquote>
<p>And for further reference, please compare it with <code>GradientTape</code> in <code>Tensorflow</code>.</p> | 2020-11-16 10:18:53.260000+00:00 | 2021-10-26 00:55:08.870000+00:00 | 2021-05-17 12:19:44.257000+00:00 | python|tensorflow|machine-learning|pytorch|tensorflow2.0 | ['https://openreview.net/pdf/25b8eee6c373d48b84e5e9c6e10e7cbbbce4ac73.pdf', '/questions/tagged/pytorch', 'https://arxiv.org/abs/1810.11530'] | 3 |
36,342,908 | <p>1 Unpooling. </p>
<p>In the <a href="http://arxiv.org/pdf/1311.2901v3.pdf" rel="noreferrer">original paper</a> on unpooling, remaining activations are zeroed. </p>
<p>2 Deconvolution.</p>
<p>A deconvolutional layer is just the transposed of its corresponding conv layer. E.g. if conv layer's shape is <code>[height, width, previous_layer_fms, next_layer_fms]</code>, than the deconv layer will have the shape <code>[height, width, next_layer_fms, previous_layer_fms]</code>. The weights of conv and deconv layers are shared! (see <a href="http://people.idsia.ch/~ciresan/data/icann2011.pdf" rel="noreferrer">this paper</a> for instance)</p> | 2016-03-31 20:15:53.463000+00:00 | 2019-01-26 01:57:11.347000+00:00 | 2019-01-26 01:57:11.347000+00:00 | null | 35,049,197 | <p>I have been trying to understand how unpooling and deconvolution works in DeConvNets. </p>
<p>Unpooling</p>
<p>While during the unpooling stage, the activations are restored back to the locations of maximum activation selections, which makes sense, but what about the remaining activations? Do those remaining activations need to be restored as well or interpolated in some way or just filled as zeros in unpooled map.</p>
<p>Deconvolution</p>
<p>After the convolution section (i.e., Convolution layer, Relu, Pooling ), it is common to have more than one feature map output, which would be treated as input channels to successive layers ( Deconv.. ). How could these feature maps be combined together in order to achieve the activation map with same resolution as original input?</p> | 2016-01-27 22:19:24.337000+00:00 | 2019-01-26 01:57:11.347000+00:00 | null | image-processing|machine-learning|neural-network|deep-learning|conv-neural-network | ['http://arxiv.org/pdf/1311.2901v3.pdf', 'http://people.idsia.ch/~ciresan/data/icann2011.pdf'] | 2 |
38,450,609 | <h2>Unpooling</h2>
<p>As etoropov wrote, you can read about unpooling in <a href="http://arxiv.org/pdf/1311.2901v3.pdf" rel="noreferrer">Visualizing and Understanding Convolutional Networks</a> by Zeiler and Fergus:</p>
<blockquote>
<p>Unpooling: In the convnet, the max pooling operation
is non-invertible, however we can obtain an approximate
inverse by recording the locations of the
maxima within each pooling region in a set of switch
variables. In the deconvnet, the unpooling operation
uses these switches to place the reconstructions from
the layer above into appropriate locations, preserving
the structure of the stimulus. See Fig. 1(bottom) for
an illustration of the procedure.</p>
</blockquote>
<h2>Deconvolution</h2>
<p>Deconvolution works like this:</p>
<ul>
<li>You add padding around each pixel</li>
<li>You apply a convolution</li>
</ul>
<p>For example, in the following illustration the original blue image is padded with zeros (white), the gray convolution filter is applied to get the green output.</p>
<p><img src="https://i.stack.imgur.com/f2RiP.gif" alt=""></p>
<p>Source: <a href="https://datascience.stackexchange.com/q/6107/8820">What are deconvolutional layers?</a></p> | 2016-07-19 05:58:55.660000+00:00 | 2016-07-19 05:58:55.660000+00:00 | 2017-04-13 12:50:40.647000+00:00 | null | 35,049,197 | <p>I have been trying to understand how unpooling and deconvolution works in DeConvNets. </p>
<p>Unpooling</p>
<p>While during the unpooling stage, the activations are restored back to the locations of maximum activation selections, which makes sense, but what about the remaining activations? Do those remaining activations need to be restored as well or interpolated in some way or just filled as zeros in unpooled map.</p>
<p>Deconvolution</p>
<p>After the convolution section (i.e., Convolution layer, Relu, Pooling ), it is common to have more than one feature map output, which would be treated as input channels to successive layers ( Deconv.. ). How could these feature maps be combined together in order to achieve the activation map with same resolution as original input?</p> | 2016-01-27 22:19:24.337000+00:00 | 2019-01-26 01:57:11.347000+00:00 | null | image-processing|machine-learning|neural-network|deep-learning|conv-neural-network | ['http://arxiv.org/pdf/1311.2901v3.pdf', 'https://datascience.stackexchange.com/q/6107/8820'] | 2 |
44,757,758 | <p>I am surprised the previous answers haven't mentioned word embedding. Word embedding algorithms can produce word vectors for each word in a given dataset. These algorithms can infer word vectors from the context. For instance, by looking at the context of the following sentences we can say that "clever" and "smart" are somehow related, because the context is almost the same. <br/></p>
<p><code>
He is a clever guy
He is a smart guy
</code></p>
<p>A co-occurrence matrix can be constructed to do this. However, it is too inefficient. A famous technique designed for this purpose is called Word2Vec. It can be studied from the following papers.<br/>
<a href="https://arxiv.org/pdf/1411.2738.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1411.2738.pdf</a> <br/>
<a href="https://arxiv.org/pdf/1402.3722.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1402.3722.pdf</a> <br/></p>
<p>I have been using it for Swedish. It is quite effective in detecting similar words and completely unsupervised. </p>
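<p>A minimal sketch with gensim, one of the packages mentioned below (the corpus here is a toy one, so the similarities it produces are not meaningful; note that the parameter is called <code>vector_size</code> in gensim 4 and <code>size</code> in older versions):</p>

<pre class="lang-py prettyprint-override"><code>from gensim.models import Word2Vec

sentences = [                       # each document is a list of tokens
    ["he", "is", "a", "clever", "guy"],
    ["he", "is", "a", "smart", "guy"],
    ["milk", "and", "cream", "are", "dairy"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=200)

vec = model.wv["clever"]                        # the feature vector for one word
print(model.wv.similarity("clever", "smart"))   # cosine similarity of two words
print(model.wv.most_similar("clever", topn=3))
</code></pre>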
<p>A package could be find in gensim and tensorflow. </p> | 2017-06-26 10:32:20.207000+00:00 | 2017-06-26 10:32:20.207000+00:00 | null | null | 14,810,944 | <p>Usually one wants to get a feature from a text by using the bag of words approach, counting the words and calculate different measures, for example tf-idf values, like this: <a href="https://stackoverflow.com/questions/4207057/how-to-include-words-as-numerical-feature-in-classification">How to include words as numerical feature in classification</a></p>
<p>But my problem is different, I want to extract a feature vector from a single word. I want to know for example that potatoes and french fries are close to each other in the vector space, since they are both made of potatoes. I want to know that milk and cream also are close, hot and warm, stone and hard and so on.</p>
<p>What is this problem called? Can I learn the similarities and features of words by just looking at a large number documents?</p>
<p>I will not make the implementation in English, so I can't use databases.</p> | 2013-02-11 11:03:56.717000+00:00 | 2020-02-14 15:17:55.740000+00:00 | 2017-05-23 12:08:29.637000+00:00 | machine-learning|nlp|feature-extraction | ['https://arxiv.org/pdf/1411.2738.pdf', 'https://arxiv.org/pdf/1402.3722.pdf'] | 2 |
61,274,329 | <p>To use generalized barycentric coordinates to map between these two polygons (one n-sided and the other a square), you can add artificial vertices around the square so you are mapping one $n$-sided polygon to another. For example, <a href="https://i.stack.imgur.com/OFWvv.png" rel="nofollow noreferrer">this image</a> shows an eight-sided polygon and a square augmented with side midpoints to also have eight vertices.</p>
<p>Then on the original polygon, you define some generalized barycentric coordinates:</p>
<p>x = L1(x) v1 + L2(x) v2 + ... + Ln(x) vn</p>
<p>The specific functions L1, L2, ..., Ln are defined by which ever generalized barycentric coordinate you select, e.g., mean value Wachspress, etc. </p>
<p>Corresponding generalized barycentric coordinates are also defined for the square with the extra matching vertices identified, i.e.,</p>
<p>y = M1(y) w1 + M2(y) w2 + ... + Mn(y) wn</p>
<p>Now, given a point x in the polygon, we compute the associated point y in the square (i.e. the (u,v) that we can look up the texture from) using the barycentric coordinates from the polygon but the vertex positions in the square,</p>
<p>y= L1(x) w1 + L2(x) w2 + ... + Ln(x) wn</p>
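<p>For illustration, here is a small Python/NumPy sketch of mean value coordinates and of the mapping above (the polygon, the augmented square and their vertex correspondence are made-up examples; it assumes CCW vertices and a point strictly inside the polygon):</p>

<pre class="lang-py prettyprint-override"><code>import numpy as np

def mean_value_coords(p, verts):
    """Mean value coordinates L_i(p) for a point p inside a polygon (CCW vertices)."""
    verts = np.asarray(verts, dtype=float)
    n = len(verts)
    s = verts - p                              # vectors from p to the vertices
    r = np.linalg.norm(s, axis=1)
    t = np.zeros(n)                            # tan(alpha_i / 2) between s_i and s_{i+1}
    for i in range(n):
        j = (i + 1) % n
        cross = s[i, 0] * s[j, 1] - s[i, 1] * s[j, 0]
        t[i] = (r[i] * r[j] - np.dot(s[i], s[j])) / cross
    w = np.array([(t[i - 1] + t[i]) / r[i] for i in range(n)])
    return w / w.sum()

poly = [(0, 0), (2, 0), (3, 1), (1.5, 2.5), (0, 1.5)]      # 5-sided polygon
square = [(0, 0), (1, 0), (1, 1), (0.5, 1), (0, 1)]        # square + one extra vertex
L = mean_value_coords(np.array([1.2, 0.8]), poly)
uv = (L[:, None] * np.asarray(square, dtype=float)).sum(axis=0)
print(L, uv)                                   # uv is the texture coordinate y
</code></pre>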
<p>A couple notes:</p>
<ul>
<li>Following this approach, you probably don't want to use Wachspress coordinates. They don't behave well when adding these extra vertices in the middle of straight edges. <a href="https://i.stack.imgur.com/1KZrk.png" rel="nofollow noreferrer">This image</a> (from <a href="https://arxiv.org/pdf/1111.5588.pdf" rel="nofollow noreferrer">here</a>) shows how Wachspress coordinates work poorly in this situation.</li>
<li>There are still decisions to be made about where exactly to split the sides of the square to get a matching number of vertices, and how to associate the vertices between the polygon and the square. These decisions will definitely impact how the texture gets stretched and skewed.</li>
</ul> | 2020-04-17 14:44:03.320000+00:00 | 2020-04-17 14:44:03.320000+00:00 | null | null | 20,470,843 | <p>I'm trying to implement proper texturing of convex polygons. I have a polygon with n triangles, for each triangle i'm calculating barycentric coordinates which are uv of each one but in [0..1] of each triangle, not entire polygon. How to interpolate each uv so it stretches (wraps and not repeats as it is now) entire texture ? </p>
<p>Now it looks like this: </p>
<p><img src="https://i.stack.imgur.com/JTe3M.png" alt="enter image description here"></p>
<pre><code>//region.triangulatedVectors = List<Vector2> // triangle points in CCW
//foreach triangle
for (int i = 0;i<region.triangulatedVectors.size();i+=3){
float aX = region.triangulatedVectors.get(i).x;
float aY = region.triangulatedVectors.get(i).y;
float bX = region.triangulatedVectors.get(i+1).x;
float bY = region.triangulatedVectors.get(i+1).y;
float cX = region.triangulatedVectors.get(i+2).x;
float cY = region.triangulatedVectors.get(i+2).y;
Vector2 bary0 = new Vector2();
Vector2 bary1 = new Vector2();
Vector2 bary2 = new Vector2();
Vector2 a = new Vector2(aX, aY);
Vector2 b = new Vector2(bX, bY);
Vector2 c = new Vector2(cX, cY);
GeometryUtils.barycentric(a, a, b, c, bary0);
GeometryUtils.barycentric(b, a, b, c, bary1);
GeometryUtils.barycentric(c, a, b, c, bary2);
//first point
texCoords[k++] = bary0.x;
texCoords[k++] = bary0.y;
texCoords[k++] = bary1.x;
texCoords[k++] = bary1.y;
texCoords[k++] = bary2.x;
texCoords[k++] = bary2.y;
//TODO , interpolate
}
</code></pre>
<p>It seems that there are 3 ways of dealing with 2D. Wachspress, Discrete harmonic and Mean value. </p> | 2013-12-09 12:35:52.960000+00:00 | 2020-04-17 14:44:03.320000+00:00 | 2013-12-10 07:54:53.500000+00:00 | java|libgdx|polygon|mesh|texture-mapping | ['https://i.stack.imgur.com/OFWvv.png', 'https://i.stack.imgur.com/1KZrk.png', 'https://arxiv.org/pdf/1111.5588.pdf'] | 3 |
57,575,872 | <p>If you only want to generate images, you could look into generating a galaxy with some number of spiral arms using cos and <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/sin" rel="nofollow noreferrer">sin</a>, play around with the circle radius: </p>
<p><code>Math.cos(radians) * radius, Math.sin(radians) * radius</code></p>
<p>Get this to work first.
You probably want to draw something somewhat elliptical instead of a full circle.
Place points more often near the center of the galaxy and close to the arms.</p>
<ul>
<li>Step 1: Randomly generate galaxy points</li>
<li>Step 2: Blend colors (<a href="https://stackoverflow.com/questions/28564851/html5-canvas-paint-blending-color-tool">HTML5 canvas paint blending color tool</a>)</li>
<li>Step 3: if you want realtime performance use WebGL ...</li>
</ul>
<p>Bonus: if you want to go full overkill you could even try to use realistic formulas:
<a href="https://arxiv.org/pdf/0908.0892.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/0908.0892.pdf</a></p> | 2019-08-20 14:27:21.063000+00:00 | 2019-08-20 14:27:21.063000+00:00 | null | null | 57,547,460 | <p>I'm working on a project that procedurally generates images of galaxies like this one: </p>
<p><a href="https://i.stack.imgur.com/4h34S.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4h34S.jpg" alt="Galaxy"></a></p>
<p>This sample was "hand drawn" (by waving the cursor around). See this pen:
<a href="https://codepen.io/OpherV/pen/JQBKVq?editors=0110" rel="nofollow noreferrer">https://codepen.io/OpherV/pen/JQBKVq?editors=0110</a></p>
<p>I would like to procedurally generate these types of images, but rather than generate them at one go <strong>I'd like the generation to be performed using a "drawing" process, that is, moving the drawing cursor in a pattern that achieves these visual structures.</strong> </p>
<p><a href="https://i.stack.imgur.com/LSsLb.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LSsLb.gif" alt="enter image description here"></a></p>
<p>The mouse-simulation code that I currently have is lifted directly from Louis Hoebregts' <a href="https://css-tricks.com/simulating-mouse-movement/" rel="nofollow noreferrer">"Simulating Mouse Movement" article on CSS Tricks</a>.</p>
<p>The principle function relies on Simplex noise:</p>
<pre><code> const s = 0.001 * (speed / 100);
const noiseX = (noise.simplex3(1, 0, a * s) + 1) / 2;
const noiseY = (noise.simplex3(11, 0, a * s) + 1) / 2;
random += randomness / 1000;
const randX = noise.simplex3(1, 0, random) * window.innerWidth * 0.1;
const randY = noise.simplex3(3, 0, random) * window.innerHeight * 0.1;
const x = noiseX * innerWidth + randX;
const y = noiseY * innerHeight + randY;
updateMouse(x, y);
</code></pre>
<p>However this type of noise won't create the visuals I'm aiming for. Breaking down the visual structure I have in mind, we have a center-weighted blob and elliptical "arms". To achieve the former, I think more "drawing time" should be performed near the center (which creates the bright blobs inside), with less often "offshoots" performing more elliptic motion to make the latter.</p>
<p>I thought about somehow gradienting the Simplex noise so that it veers more towards the center, but I'm unsure how to go about doing that in 2d space. I'm also not certain how to proceed combining that with something that draws the "arms" of the galaxy.</p>
<p>Can you suggest an algorithm to achieve this?
Thanks </p> | 2019-08-18 18:51:23.827000+00:00 | 2019-08-20 14:30:19.737000+00:00 | 2019-08-20 14:30:19.737000+00:00 | javascript|html5-canvas|shapes|procedural-generation | ['https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/sin', 'https://stackoverflow.com/questions/28564851/html5-canvas-paint-blending-color-tool', 'https://arxiv.org/pdf/0908.0892.pdf'] | 3 |
37,217,389 | <p>It depends on your datasets and NN models, but generally, I would start with Adam. Figure 2 in this paper (<a href="http://arxiv.org/abs/1412.6980" rel="noreferrer">http://arxiv.org/abs/1412.6980</a>) shows Adam works well.</p>
<p><a href="https://i.stack.imgur.com/cWsLk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cWsLk.png" alt="enter image description here"></a></p>
<p>Also, you can see a very nice animation from
<a href="http://www.denizyuret.com/2015/03/alec-radfords-animations-for.html" rel="noreferrer">http://www.denizyuret.com/2015/03/alec-radfords-animations-for.html</a>.</p>
<p><a href="https://i.stack.imgur.com/88sSR.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/88sSR.gif" alt="enter image description here"></a></p> | 2016-05-13 18:53:26.193000+00:00 | 2016-05-13 18:53:26.193000+00:00 | null | null | 37,214,884 | <p>Tensorflow seems to have a large collection of optimizers, is there any high level guideline (or review paper) on which one is best adapted to specific classes of loss functions ?</p> | 2016-05-13 16:20:09.973000+00:00 | 2016-05-17 11:15:58.547000+00:00 | null | tensorflow | ['http://arxiv.org/abs/1412.6980', 'https://i.stack.imgur.com/cWsLk.png', 'http://www.denizyuret.com/2015/03/alec-radfords-animations-for.html', 'https://i.stack.imgur.com/88sSR.gif'] | 4 |
59,446,450 | <h1>Shuffling complexity</h1>
<p>The time complexity of np.shuffle is O(n) as explained <a href="https://stackoverflow.com/a/9371105/5350621">here</a>, so at least in the programs below it should not be a bottleneck, but let's explore different aspects of the question below.</p>
<h1>Problem formalization and complexity</h1>
<p>If I understand correctly, your problem can be formulated as a <a href="https://en.wikipedia.org/wiki/Bipartite_graph" rel="nofollow noreferrer">bipartite graph</a> with N_u user nodes, N_s website nodes and N_v edges between them, reflecting the visits, see panel (A) below.</p>
<p>Then counting the number of users who visited the same pairs of websites (your counterdict dictionary) simply corresponds to the
<a href="https://en.wikipedia.org/wiki/Bipartite_network_projection" rel="nofollow noreferrer">weighted bipartite network projection</a> onto the website nodes, see panel (B) below.</p>
<p>The <a href="https://arxiv.org/pdf/1707.00912.pdf" rel="nofollow noreferrer">complexity</a> of the weighted bipartite network projection for the brute-force approach is O(N_u^2*N_s). Consequently, when iterating over multiple randomizations, the O(N_v) from shuffling should be negligible (unless of course N_v > N_u^2*N_s). There are also approaches for <a href="https://arxiv.org/pdf/1712.08685.pdf" rel="nofollow noreferrer">sampling bipartite network projections</a> in case of very large graphs.</p>
<p>In the small dummy example below, using bipartite network projection is around 150 times faster than your implementation (0.00024 vs 0.03600 seconds) and yields identical results.</p>
<p><a href="https://i.stack.imgur.com/Vgnkm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vgnkm.png" alt="bipartite graph setup"></a></p>
<h1>The code 1</h1>
<pre><code># import modules
import collections
import itertools
import time
import matplotlib.pyplot as plt
import numpy as np
import networkx as nx
import pandas as pd
import pymc3 as pm
# generate fake data for demonstration purposes
np.random.seed(0)
nvisits = 24
nusers = 12
nsites = 6
userz = np.random.choice(['U'+str(user).zfill(3) for user in range(nusers)], nvisits)
sitez = np.random.choice(range(nsites), nvisits)
users = np.unique(userz)
sites = np.unique(sitez)
# copy original implementation from the question
def get_site_pairs(users, sites, userz, sitez):
dct = dict()
dct['user'] = userz
dct['site'] = sitez
name=pd.DataFrame(dct)
groups=name.groupby('user')
pairs = []
for ui in users:
userdata = groups.get_group(ui)
userdata=userdata.drop_duplicates()
site_list=userdata['site'].values
pair=list(itertools.combinations(site_list, 2))
for j in pair:
pairs.append(sorted(j))
site_exp=pd.DataFrame(pairs, columns=['node1', 'node2'], dtype=str)
site_exp['pair'] = site_exp['node1']+'<--->'+site_exp['node2']
counterdict=collections.Counter(site_exp['pair'].values)
counterdict=pd.DataFrame(list(counterdict.items()), columns=['pair','site_obs'])
return counterdict
temp = time.time()
counterdict = get_site_pairs(users, sites, userz, sitez)
print (time.time() - temp)
# 0.03600 seconds
# implement bipartite-graph based algorithm
def get_site_graph(users, sites, userz, sitez):
graph = nx.Graph()
graph.add_nodes_from(users, bipartite=0)
graph.add_nodes_from(sites, bipartite=1)
graph.add_edges_from(zip(userz, sitez))
projection = nx.algorithms.bipartite.projection.weighted_projected_graph(graph, sites)
return graph, projection
temp = time.time()
graph, projection = get_site_graph(users, sites, userz, sitez)
print (time.time() - temp)
# 0.00024 seconds
# verify equality of results
for idr, row in counterdict.iterrows():
u, v = np.array(row['pair'].split('<--->')).astype(np.int)
pro = projection[u][v]
assert row['site_obs'] == pro['weight']
# prepare graph layouts for plotting
layers = nx.bipartite_layout(graph, userz)
circle = nx.circular_layout(projection)
width = np.array(list(nx.get_edge_attributes(projection, 'weight').values()))
width = 0.2 + 0.8 * width / max(width)
degrees = graph.degree()
# plot graphs
fig = plt.figure(figsize=(16, 9))
plt.subplot(131)
plt.title('(A)\nbipartite graph', loc='center')
nx.draw_networkx(graph, layers, width=2)
plt.axis('off')
plt.subplot(132)
plt.title('(B)\none-mode projection (onto sites)', loc='center')
nx.draw_networkx(projection, circle, edge_color=plt.cm.Greys(width), width=2)
plt.axis('off')
plt.subplot(133)
plt.title('(C)\nrandomization setup', loc='center')
nx.draw_networkx(graph, layers, width=2)
plt.text(*(layers['U000']-[0.1, 0]), '$n_u=%s$' % degrees['U000'], ha='right')
plt.text(*(layers[0]+[0.1, 0]), '$n_s=%s$' % degrees[0], ha='left')
plt.text(*(layers[1]+[0.1, 0]), '$n_t=%s$' % degrees[1], ha='left')
plt.text(0.3, -1, '$N_v=%s$' % nvisits)
plt.plot([0.3]*2, [-1, 1], lw=160, color='white')
plt.axis('off')
</code></pre>
<h1>Network randomization and PyMC3 simulation</h1>
<p>When randomizing the user list, as mentioned in the question, we can get a distribution of site-site connections. For networks of moderate size this should be reasonably fast, see argument regarding shuffling complexity above and code example below.</p>
<p>If the network is too large, sampling may be an option and the graph formalization helps to set up the sampling scenario, see panel (C) above. For given n_u and n_s edge randomization corresponds to random draws from a <a href="https://en.wikipedia.org/wiki/Hypergeometric_distribution#Multivariate_hypergeometric_distribution" rel="nofollow noreferrer">multivariate hypergeometric distribution</a>. </p>
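<p>As a side note, recent NumPy versions can draw from a multivariate hypergeometric distribution directly; a minimal sketch (assuming NumPy >= 1.18, with made-up slot counts):</p>
<pre><code># draw one multivariate hypergeometric sample: distribute a user's n_u = 6 edge
# endpoints over three sites that currently offer 12, 8 and 4 visit "slots"
import numpy as np
rng = np.random.default_rng(0)
site_slots = [12, 8, 4]
draw = rng.multivariate_hypergeometric(site_slots, 6)
print(draw, draw.sum())  # e.g. [3 2 1] and 6
</code></pre>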
<p>Unfortunately, PyMC3 does <a href="https://github.com/pymc-devs/pymc3/pull/3504" rel="nofollow noreferrer">not yet</a> support hypergeometric distributions. In case this helps, I added a small example using PyMC3 and sampling from a simple binomial distribution below. The black histograms show the distribution of site-site connections n_{s,t} from full network randomization and bipartite projection.
The gray vertical line indicates that the maximum n_{s,t} <= min(N_u, n_s, n_t).
The red dots are from the binomial approximation which assumes there are nvisits*(nvisits-1)/2 pairs of edges to be distributed and the chance of connecting nodes s and t via user u is p_s * p_u * p_t * p_u, with p_x = n_x / N_x. Here, all edges are assumed to be independent and the result obviously yields an approximation only.</p>
<p><a href="https://i.stack.imgur.com/RjMuQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RjMuQ.png" alt="graph randomization results"></a></p>
<h1>The code 2</h1>
<pre><code># randomize user visits and store frequencies of site-site connections
niters = 1000
matrix = np.zeros((niters, nsites, nsites))
siten = collections.Counter(sitez)
for i in range(niters):
np.random.shuffle(userz)
graph, projection = get_site_graph(users, sites, userz, sitez)
edges = projection.edges(data=True)
for u, v, d in edges:
matrix[i, u, v] = d['weight']
# define PyMC3 function for sampling from binomial distribution
def sample_pymc3(prob, number, bins, draws=1000):
with pm.Model() as model:
nst = pm.Binomial('nst', n=number, p=prob)
trace = pm.sample(draws=draws, step=pm.Metropolis())
nst = trace.get_values('nst')
freqs = [np.mean(nst == val) for val in bins]
return freqs
# define auxiliary variables
# probability to select site s by chance
probs = [np.mean(sitez == s) for s in sites]
# probability to select user u by chance
probu = [np.mean(userz == u) for u in users]
# plot connectivity statistics
nsitez = min(5, nsites)
bins = np.arange(9)
number = nvisits*(nvisits-1)/2
fig, axis = plt.subplots(nrows=nsitez,
ncols=nsitez,
figsize=(16, 9))
for s in sites[:nsitez]:
for t in sites[:nsitez]:
# prepare axis
axia = axis[s, t]
if t <= s:
axia.set_axis_off()
continue
# plot histogram
axia.hist(matrix[:, s, t], bins=bins, histtype='step', density=True,
zorder=-10, align='left', color='black', lw=2)
axia.plot([min(siten[s], siten[t], nusers)+0.5]*2, [0, 0.5], lw=4, color='gray')
# approximate probabilities using PyMC3
prob = np.sum([probs[s] * pru * probs[t] * pru for pru in probu])
        freqs = sample_pymc3(prob, number, bins)
axia.scatter(bins, freqs, color='red')
# set axes
nst = '$n_{s=%s,t=%s}$' % (s, t)
axia.set_xlabel(nst)
if t == s+1:
axia.set_ylabel('frequency')
plt.suptitle('distribution of the number $n_{s,t}$\nof connections between site $s$ and $t$')
plt.tight_layout(rect=[-0.2, -0.2, 1, 0.9])
</code></pre> | 2019-12-22 17:20:46.103000+00:00 | 2019-12-22 17:20:46.103000+00:00 | null | null | 48,313,332 | <p>There are two columns in the dataset, user_id, and site_name respectively. It records every site name that every user browsed.</p>
<pre><code>toy_dict = {'site_name': {0: u'\u4eac\u4e1c\u7f51\u4e0a\u5546\u57ce',
1: u'\u963f\u91cc\u4e91',
2: u'\u6dd8\u5b9d\u7f51',
3: u'\u624b\u673a\u6dd8\u5b9d\u7f51',
4: u'\u6211\u4eec\u7684\u70b9\u5fc3\u7f51',
5: u'\u8c46\u74e3\u7f51',
6: u'\u9ad8\u5fb7\u5730\u56fe',
7: u'\u817e\u8baf\u7f51',
8: u'\u70b9\u5fc3',
9: u'\u767e\u5ea6',
10: u'\u641c\u72d7',
11: u'\u8c37\u6b4c',
12: u'AccuWeather\u6c14\u8c61\u9884\u62a5',
13: u'\u79fb\u52a8\u68a6\u7f51',
14: u'\u817e\u8baf\u7f51',
15: u'\u641c\u72d7\u7f51',
16: u'360\u624b\u673a\u52a9\u624b',
17: u'\u641c\u72d0',
18: u'\u767e\u5ea6'},
'user_id': {0: 37924550,
1: 37924550,
2: 37924550,
3: 37924550,
4: 37924550,
5: 37924550,
6: 37924550,
7: 37924550,
8: 37924551,
9: 37924551,
10: 37924551,
11: 37924551,
12: 37924551,
13: 37924552,
14: 45285152,
15: 45285153,
16: 45285153,
17: 45285153,
18: 45285153}}
</code></pre>
<p>Now I want to reconstruct a random network and meanwhile ensure that a person with n sites in the observed network will also have n sites in the randomized network.</p>
<p>The <code>numpy.random.shuffle</code> in Python is inefficient when the amount of data is massive.</p>
<p>I am using the following Python script currently:</p>
<pre><code>import pandas as pd
import numpy as np
import itertools
from collections import Counter
for i in range (10): # reconstruct random network for 10 times
name='site_exp'+str(i)
name=pd.DataFrame(toy_dict)# read data
np.random.shuffle(name['site_name'].values) # shuffle the data
users=name['user_id'].drop_duplicates()
groups=name.groupby('user_id')
pairs = []
for ui in users[:5]:
userdata = groups.get_group(ui)
userdata=userdata.drop_duplicates()
site_list=userdata['site_name'].values
pair=list(itertools.combinations(site_list,2))
for j in pair:
pairs.append(j)
site_exp=pd.DataFrame(pairs, columns = ['node1', 'node2'], dtype= str)
site_exp['pair']=site_exp['node1']+'<--->'+site_exp['node2']
counterdict=Counter(site_exp['pair'].values)
counterdict=pd.DataFrame(list(counterdict.items()),columns=['pair','site_obs'])
counterdict.to_csv('site_exp'+str(i) + '.csv')
</code></pre>
<p>I am wondering if we can use a Monte Carlo algorithm in Python and reduce computational complexity? </p> | 2018-01-18 03:23:31.173000+00:00 | 2019-12-22 17:20:46.103000+00:00 | 2018-01-20 05:05:56.033000+00:00 | python-3.x|networking|random|montecarlo|pymc | ['https://stackoverflow.com/a/9371105/5350621', 'https://en.wikipedia.org/wiki/Bipartite_graph', 'https://en.wikipedia.org/wiki/Bipartite_network_projection', 'https://arxiv.org/pdf/1707.00912.pdf', 'https://arxiv.org/pdf/1712.08685.pdf', 'https://i.stack.imgur.com/Vgnkm.png', 'https://en.wikipedia.org/wiki/Hypergeometric_distribution#Multivariate_hypergeometric_distribution', 'https://github.com/pymc-devs/pymc3/pull/3504', 'https://i.stack.imgur.com/RjMuQ.png'] | 9 |
63,324,846 | <p>you can use two approaches:</p>
<p>1- rule based (extract common words in each tag and classify documents with them)</p>
<p>2- machine learning</p>
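<p>A minimal sketch of the first, rule-based idea (the keyword lists here are invented and would have to be extracted from your tagged documents):</p>
<pre><code># toy rule-based tagger: assign a tag when any of its keywords occurs in the text
tag_keywords = {
    'Compensation': ['compensation', 'damages', 'payment'],
    'Fundamental Right': ['fundamental right', 'constitution', 'article'],
}
def tag_document(text):
    text = text.lower()
    return [tag for tag, words in tag_keywords.items()
            if any(w in text for w in words)]
print(tag_document('Learned Counsel points out that compensation is due ...'))
# -> ['Compensation']
</code></pre>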
<p>If you have large-scale training data, you can use machine learning to classify documents:</p>
<p>You can use these approaches:</p>
<p><a href="https://arxiv.org/abs/1904.08398" rel="nofollow noreferrer">https://arxiv.org/abs/1904.08398</a></p>
<p><a href="https://medium.com/@armandj.olivares/using-bert-for-classifying-documents-with-long-texts-5c3e7b04573d" rel="nofollow noreferrer">https://medium.com/@armandj.olivares/using-bert-for-classifying-documents-with-long-texts-5c3e7b04573d</a></p> | 2020-08-09 09:47:30.763000+00:00 | 2020-08-09 09:47:30.763000+00:00 | null | null | 63,303,007 | <p>I have set of documents and corresponding set of tags for those documents</p>
<p>ex.</p>
<p>Document-"Learned Counsel appearing for the Appellants however points out that in the..etc etc"</p>
<p>Tags - "Compensation, Fundamental Right"</p>
<p>Now I have multiple documents with their corresponding tags and I another test set of data without any tags what NLP techniques do I use to give these documents tag? Do I use text classification or topic modeling can someone please guide or suggest some ideas.</p> | 2020-08-07 13:38:03.547000+00:00 | 2020-08-09 09:47:30.763000+00:00 | null | python|nlp|stanford-nlp | ['https://arxiv.org/abs/1904.08398', 'https://medium.com/@armandj.olivares/using-bert-for-classifying-documents-with-long-texts-5c3e7b04573d'] | 2 |
55,367,573 | <blockquote>
<p>but still has a blurred effect on it.</p>
</blockquote>
<p>That's because you deleted high frequencies with your Notch filter.</p>
<blockquote>
<p>I am trying to sharpen this image an return it to its original
quality</p>
</blockquote>
<p>It is not possible to return the image to its original Moire free quality. Although you might be able to yield better results if you design your Notch filter more carefully.</p>
<p>Moire already destroyed information and removing those disturbing frequencies with the Notch filter destroyed even more.</p>
<p>You cannot get that information back. The only thing you can do is to apply some sharpening filters, which usually rely on increasing the <a href="https://en.wikipedia.org/wiki/Acutance" rel="nofollow noreferrer">acutance</a>. With such filters the image is not actually sharpened, but edge transitions are manipulated in a way that makes them appear sharper.</p>
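<p>For illustration, a minimal unsharp-mask sketch (in Python/NumPy rather than MATLAB, with arbitrary sigma/amount values):</p>
<pre><code># unsharp masking: add a scaled high-frequency residual back onto the image;
# this boosts acutance (edge contrast) but cannot recover detail destroyed by
# the moire pattern or the notch filter
import numpy as np
from scipy.ndimage import gaussian_filter
def unsharp_mask(img, sigma=2.0, amount=1.5):
    blurred = gaussian_filter(img, sigma)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
img = np.random.rand(64, 64)  # stand-in for the notch-filtered radiograph
sharper = unsharp_mask(img)
</code></pre>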
<p>There are techniques other than Notch filters which in many cases perform better.</p>
<p>Read <a href="https://arxiv.org/ftp/arxiv/papers/1701/1701.09037.pdf" rel="nofollow noreferrer">A New Method for Removing the Moire' Pattern from
Images</a> for example. They use median filters.</p> | 2019-03-26 23:20:44.200000+00:00 | 2019-03-26 23:20:44.200000+00:00 | null | null | 55,361,116 | <p>I am attempting to remove a moire pattern from an image by blurring the image and then returning it to its original quality by enhancing and sharpening the image, however I can only seem to remove the pattern and leave it very blurry, which is not what I am trying to do. </p>
<p>I have tried to apply a filter to the image to blur the image and remove the checkerboard pattern however I cannot seem to return the image to its original quality without the checkerboard. </p>
<pre><code>imageFiles = {'radiograph_01.jpg', 'radiograph_02.jpg'};
medWinSize = [7 11];
notchCenters{1} = [269 80; 261 123; 245 216; 238 258];
sigmas{1} = [45 20 20 45];
notchCenters{2} = [277 442; 209 450];
sigmas{2} = [20 20];
for nImage = 1:length(imageFiles)
% Load image
[pathStr, name, ext] = fileparts(imageFiles{nImage});
img = imread(imageFiles{nImage});
img = im2double(img);
[height, width] = size(img);
figure(1); clf;
subplot(1,3,1);
imshow(img);
% Median filter
imgMed = medfilt2(img, medWinSize(nImage)*[1 1]);
subplot(1,3,2);
imshow(imgMed);
imwrite(imgMed, [name '-med-filtered.jpg']);
% Compute DFT of original image
imgDFT = fftshift(fft2(img));
imgDFTMag = abs(imgDFT);
figure(2); clf;
subplot(1,3,1);
imshow(log(imgDFTMag), [0 10]); colorbar;
% Apply notch filter
[omega_x, omega_y] = meshgrid(1:width, 1:height);
filterDFT = ones(size(omega_x));
for n = 1:size(notchCenters{nImage},1)
rSq = (omega_x - notchCenters{nImage}(n,1)).^2 + ...
(omega_y - notchCenters{nImage}(n,2)).^2;
filterDFT = filterDFT .* (1 - exp(-rSq / sigmas{nImage}(n)^2));
end % n
imgFiltDFT = imgDFT .* filterDFT;
imgFiltDFTMag = abs(imgFiltDFT);
subplot(1,3,2);
imshow(filterDFT, [0 1]); colorbar;
subplot(1,3,3);
imshow(log(imgFiltDFTMag), [0 10]); colorbar;
% Reconstruct image
imgFilt = real(ifft2(ifftshift(imgFiltDFT)));
imgFilt = max(0, min(1, imgFilt));
figure(1);
subplot(1,3,3);
imshow(imgFilt);
imwrite(imgFilt, [name '-notch-filtered.jpg']);
end % nImage
</code></pre>
<p><img src="https://i.stack.imgur.com/c6r5D.png" alt="Original Image"></p>
<p>With this current code, the image is loaded in, the filter is applied and the image is returned without the moire pattern, but still has a blurred effect on it. I am trying to sharpen this image an return it to its original quality after having removed this pattern.</p> | 2019-03-26 15:41:43.270000+00:00 | 2019-03-26 23:20:44.200000+00:00 | 2019-03-26 16:00:58.547000+00:00 | image|matlab|image-processing | ['https://en.wikipedia.org/wiki/Acutance', 'https://arxiv.org/ftp/arxiv/papers/1701/1701.09037.pdf'] | 2 |
17,414,511 | <p>Computing the Levenshtein distances for a sliding window boils down to computing the distances between several pairs of vertices in an acyclic directed <em>planar</em> graph that looks like this one (capital letters denote the pairs).</p>
<pre><code> h a y s t a c k
n A-B-C-D-E-F-*-*
|\|\|\|\|\|\|\|
e *-*-*-*-*-*-*-*
|\|\|\|\|\|\|\|
e *-*-A-B-C-D-E-F
</code></pre>
<p>The horizontal and vertical arcs have cost 1; the diagonal arcs have cost 0 if the corresponding letters match or 1 otherwise.</p>
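<p>For reference, a brute-force baseline in Python that computes the same n-m+1 window distances with the standard dynamic program (O(n m^2) overall, i.e. without the speed-ups discussed below):</p>
<pre><code># plain per-window edit distance; the graph-based algorithms below do better
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]
def window_distances(needle, haystack):
    m = len(needle)
    return [levenshtein(needle, haystack[i:i + m])
            for i in range(len(haystack) - m + 1)]
print(window_distances('nee', 'haystack'))
</code></pre>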
<p>Since all of the paired vertices lie on the infinite face, Klein's or <a href="http://arxiv.org/abs/1202.0314" rel="nofollow">Cabello-Chambers's</a> multiple-source shortest paths algorithm can be used to compute the needed distances in time O(m n log (m n)).</p>
<p>To shave the final log (and practically speaking, it's much worse than for, e.g., Dijkstra's algorithm), you might look in Alexander Tiskin's manuscript <a href="http://arxiv.org/abs/0707.3619" rel="nofollow">Semi-local string comparison: Algorithmic techniques and applications</a>, which treats problems similar to this one if not this one itself. (Probably that should be my primary answer, but I haven't read it and know the multiple-source shortest path techniques a lot better.)</p>
<p><sub><sup>It's also possible that, with some additional logic to handle the unidirectional edges, my <a href="http://www.davideisenstat.com/cv/EisenstatK13.pdf" rel="nofollow">multiple-source shortest path algorithm with Klein</a> could be made to achieve O(m n).</sup></sub></p> | 2013-07-01 22:12:07.053000+00:00 | 2013-07-01 22:12:07.053000+00:00 | null | null | 17,412,543 | <p>[Reposted from <a href="https://cs.stackexchange.com/questions/12986/sliding-window-edit-distance">https://cs.stackexchange.com/questions/12986/sliding-window-edit-distance</a> ]</p>
<p>If you have a long string of length n and a shorter string of length m, what is a suitable recurrence to let you compute all n-m+1 <a href="http://en.wikipedia.org/wiki/Levenshtein_distance" rel="nofollow noreferrer">Levenshtein distances</a> between the shorter string and all substrings of the longer string of length m?</p>
<p>Can it in fact be done in O(nm) time?</p> | 2013-07-01 19:50:59.223000+00:00 | 2013-07-02 07:22:18.157000+00:00 | 2017-04-13 12:48:30.793000+00:00 | algorithm | ['http://arxiv.org/abs/1202.0314', 'http://arxiv.org/abs/0707.3619', 'http://www.davideisenstat.com/cv/EisenstatK13.pdf'] | 3 |
61,608,076 | <p>The problem you are trying to solve is called <a href="https://en.wikipedia.org/wiki/Entity_linking" rel="nofollow noreferrer">Entity Linking</a>. There are many academic papers discussing solutions to this problem, but only a few of them provide an implementation.</p>
<p><a href="https://arxiv.org/abs/1904.09131" rel="nofollow noreferrer">OpenTapioka</a> from Oxford has an <a href="https://github.com/wetneb/opentapioca" rel="nofollow noreferrer">open source implementation</a> and an <a href="https://opentapioca.org" rel="nofollow noreferrer">online demo</a>.</p>
<p><a href="https://arxiv.org/pdf/1804.03580.pdf" rel="nofollow noreferrer">SWAT</a> from the University of Pisa has a <a href="https://sobigdata.d4science.org/web/tagme/swat-api" rel="nofollow noreferrer">publically available API</a>.</p> | 2020-05-05 07:39:27.233000+00:00 | 2020-05-05 07:39:27.233000+00:00 | null | null | 61,600,865 | <p>Given a text, I am looking to find links to all Wikipedia pages related to named entities mentioned in the text. Is there a reliable way to do this? </p>
<p>For example, consider the text,</p>
<blockquote>
<p>Mark Elliot Zuckerberg is an American internet entrepreneur and
philanthropist.</p>
</blockquote>
<p>" Given this, I am looking at output with the following links:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Mark_Zuckerberg" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Mark_Zuckerberg</a></li>
<li><a href="https://en.wikipedia.org/wiki/Americans" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Americans</a></li>
<li><a href="https://en.wikipedia.org/wiki/Internet" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Internet</a></li>
<li><a href="https://en.wikipedia.org/wiki/Entrepreneurship" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Entrepreneurship</a></li>
<li><a href="https://en.wikipedia.org/wiki/Philanthropy" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Philanthropy</a></li>
</ul>
<p>Is this possible at all given the current state of NLP?
Many thanks!</p> | 2020-05-04 20:17:56.063000+00:00 | 2020-05-06 21:59:31.437000+00:00 | null | nlp|mediawiki|stanford-nlp|wikipedia|named-entity-recognition | ['https://en.wikipedia.org/wiki/Entity_linking', 'https://arxiv.org/abs/1904.09131', 'https://github.com/wetneb/opentapioca', 'https://opentapioca.org', 'https://arxiv.org/pdf/1804.03580.pdf', 'https://sobigdata.d4science.org/web/tagme/swat-api'] | 6 |
72,503,614 | <p>I've likely got this wrong, but using <code>mpfr</code> objects provides access to base R <code>intersect</code>, <code>union</code>, <code>setdiff</code>, while a <code>sort(...</code> needs to be wrapped inside a <code>mpfr(sort(...), 'bits')</code>:</p>
<pre><code>library(Rmpfr)
f3 <- mpfr(5:9, 53)
f4 <- mpfr(8:12, 53)
intersect(f3,f4)
2 'mpfr' numbers of precision 53 bits
[1] 8 9
setdiff(f3,f4)
3 'mpfr' numbers of precision 53 bits
[1] 5 6 7
f3 %in% f4
[1] FALSE FALSE FALSE TRUE TRUE
# large integers from vignette
ns <- mpfr(1:24, 120)
fact_ns <- factorial(ns)
fact_ns[20:24]
5 'mpfr' numbers of precision 120 bits
[1] 2432902008176640000 51090942171709440000 1124000727777607680000
[4] 25852016738884976640000 620448401733239439360000
pasc80 <- chooseMpfr.all(n = 80, 77)[40:49]
pasc80
10 'mpfr' numbers of precision 77 bits
[1] 107507208733336176461620 104885081691059684352800 97393290141698278327600
[4] 86068488962431036661600 72375774809317008101800 57900619847453606481440
[7] 44054819449149483192400 31869443856831541032800 21910242651571684460050
[10] 14308729894903957198400
mpfr(sort(union(fact_ns[20:24], pasc80)), 77)
15 'mpfr' numbers of precision 77 bits
[1] 2432902008176640000 51090942171709440000 1124000727777607680000
[4] 14308729894903957198400 21910242651571684460050 25852016738884976640000
[7] 31869443856831541032800 44054819449149483192400 57900619847453606481440
[10] 72375774809317008101800 86068488962431036661600 97393290141698278327600
[13] 104885081691059684352800 107507208733336176461620 6204484017332394393600
</code></pre>
<p>so for these operations <code>sets</code> is not necessary, assuming your workflow is amenable to <code>Rmpfr</code>-based objects.</p>
<p>As the problem is presented in the context of 'precision', one likely wouldn't want a function that promotes or demotes sets to their highest/lowest 'prec', but be intentionally involved in the decision (though, admittedly, I looked for one).</p>
<p>Here, renaming your f3 & f4 below to f7 & f8:</p>
<pre><code>getPrec(f7)[1]
[1] 10
getPrec(f8)[1]
[1] 20
intersect(roundMpfr(f7, 20), f8)
2 'mpfr' numbers of precision 20 bits
[1] 9 6
intersect(f7, roundMpfr(f8, 10))
2 'mpfr' numbers of precision 10 bits
[1] 9 6
</code></pre>
<p>So it appears that 'precision handling' is required as to set operations, though such additional cycles may be avoided if it is plausible that upon mpfr creation, defaults would render the inputs the same precision. Using OEIS as inputs:</p>
<pre><code>library(OEIS.R) # git clone of EnriquePH/OEIS.R --no-build-vignettes
A011784 <- OEIS_bfile('A011784')
max(nchar(A011784$data$A011784))
[1] 221
max(nchar(A078140$data$A078140))
[1] 228
# so we see precision handling here, perhaps
A011784_228 <- mpfr(A011784$data$A011784, 228)
A078140_228 <- mpfr(A078140$data$A078140, 228)
intersect(A011784_228,A078140_228)
2 'mpfr' numbers of precision 228 bits
[1] 1 3
</code></pre>
<p>Ah, so little in common. And it is probably not that your sequences are in OEIS, rather checking for similarity to those from your sequences 'from the wild', and this doesn't reflect your workflow.</p>
<p>As to using lists:</p>
<pre><code>is(A011784_bigz)
[1] "bigz" "oldClass" "Mnumber" "mNumber"
> is(A011784_228)
[1] "mpfr" "list" "Mnumber" "mNumber" "vector"
</code></pre>
<p>So those as.list cycles have already been expended in mpfr creation.</p>
<p>And some related, light reading <a href="https://arxiv.org/abs/2202.02384" rel="nofollow noreferrer">primitive sets</a> from recent news.</p> | 2022-06-04 22:41:34.820000+00:00 | 2022-06-11 23:08:28.150000+00:00 | 2022-06-11 23:08:28.150000+00:00 | null | 72,466,597 | <p>The current release of the package <code>gmp</code> does not support set operations such as <code>intersect</code>, <code>setdiff</code> , etc. I'm doing some work with number sequences (see <a href="http://oeis.org/" rel="nofollow noreferrer">OEIS</a> for examples) and need to handle large collections of large integers. I'm currently stuck with using various loops to generate the desired differences or intersections; while I could probably generate compiled (Rccp, etc) code, I'm hoping to find a way within existing <code>R</code> functions and packages.</p> | 2022-06-01 18:36:44.630000+00:00 | 2022-06-18 06:03:11.697000+00:00 | 2022-06-11 20:51:15.750000+00:00 | r|set|gmp|set-intersection | ['https://arxiv.org/abs/2202.02384'] | 1 |
72,716,587 | <p><a href="https://arxiv.org/pdf/1411.2738.pdf" rel="nofollow noreferrer">This paper</a> has a great explanation of how Word2Vec operates both for CBOW and Skip-Gram.</p>
<p>The words are fed in simultaneously as one-hot encoded vectors. Then the one-hot representations are multiplied by the weight matrix (which doubles up as the word embedding matrix in the Word2Vec architecture). This multiplication boils down to extracting a single row from W - the row corresponding to the word embedding in question.</p>
<p>The hidden layer is then just an aggregation (average in most cases) of each of the word embeddings extracted using the one-hot encoded inputs.</p>
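<p>A small NumPy sketch of those two points (toy sizes, random weights):</p>
<pre><code># one-hot vectors times the input weight matrix W just select rows of W (the word
# embeddings); the CBOW hidden layer is the average of the context embeddings
import numpy as np
rng = np.random.default_rng(0)
vocab_size, embed_dim = 10, 4
W = rng.normal(size=(vocab_size, embed_dim))      # input-to-hidden weights
context_ids = [2, 5, 7]                           # indices of the context words
one_hots = np.eye(vocab_size)[context_ids]        # shape (3, vocab_size)
assert np.allclose(one_hots @ W, W[context_ids])  # multiplication == row lookup
hidden = (one_hots @ W).mean(axis=0)              # CBOW hidden layer
</code></pre>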
<p><a href="https://i.stack.imgur.com/Ubos3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ubos3.png" alt="Word2Vec CBOW Model" /></a></p> | 2022-06-22 13:39:27.173000+00:00 | 2022-06-22 13:39:27.173000+00:00 | null | null | 42,603,417 | <p><a href="https://i.stack.imgur.com/7BXvM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7BXvM.png" alt="enter image description here"></a></p>
<p>For the CBOW model, are the INPUT words fed into the training model simultaneously or one by one?</p>
<p>Thanks</p> | 2017-03-05 01:30:01.097000+00:00 | 2022-06-22 13:39:27.173000+00:00 | null | machine-learning|nlp|word2vec | ['https://arxiv.org/pdf/1411.2738.pdf', 'https://i.stack.imgur.com/Ubos3.png'] | 2 |
41,337,270 | <p>I would assume that you meant a <strong>normal</strong> magic square (where the numbers are restricted to 1,2..n^2)</p>
<p>First of all, it's impossible to construct such a magic square for n=2.</p>
<p>2nd, you would need an whole new algorithm for it, which is much more complicated. The problem (constructing magic square for <strong>any</strong> even number) is solved <a href="https://arxiv.org/ftp/arxiv/papers/1202/1202.0948.pdf" rel="nofollow noreferrer">in this paper</a> and while there isn't any psaudo code there, the implementation from the explenation is quite straightforward (long one though).</p> | 2016-12-26 23:28:54.077000+00:00 | 2016-12-26 23:28:54.077000+00:00 | null | null | 41,336,150 | <p>This code that runs only for odd N. The problem is that there are no ideas how to add support for even values N</p>
<pre><code>#include "stdafx.h"
#include <iostream>
using namespace std;
int main()
{
setlocale(0, "");
int n;
cout << "Enter the size of the magic square - ";
cin >> n;
int **matrix = new int *[n];
for (int i = 0; i < n; ++i)
{
matrix[i] = new int[n];
}
int nsqr = n * n;
int i = 0, j = n / 2;
for (int k = 1; k <= nsqr; ++k)
{
matrix[i][j] = k;
i--;
j++;
if (k % n == 0)
{
i += 2;
--j;
}
else
{
if (j == n)
{
j -= n;
}
else if (i < 0)
{
i += n;
}
}
}
cout << "\n\nMagic square size - " << n << "\n\n";
for (int i = 0; i < n; i++)
{
for (int j = 0; j < n; j++)
{
cout << matrix[i][j] << "\t";
}
cout << endl;
}
for (i = 0; i < n; i++)
delete[] matrix[i];
delete[] matrix;
system("pause >> null");
return 0;
}
</code></pre>
<p>I would be grateful for tips on troubleshooting.</p>
<p>If i'm not mistaken, the problem is in this line: </p>
<pre><code>int i = 0, j = n / 2;
</code></pre>
<p>But i don't know how to change the code to support even values</p> | 2016-12-26 20:48:35.760000+00:00 | 2017-01-05 13:11:43.810000+00:00 | null | c++|algorithm|generator | ['https://arxiv.org/ftp/arxiv/papers/1202/1202.0948.pdf'] | 1 |
32,763,369 | <p>The interpretation of decision tree ensembles is much harder than interpreting individual trees, as you note. Geometrically you can think about a decision tree ensemble as an approximation of a complex, high dimensional surface. The goal is to find variables that contribute to the approximation, and to visualize their effects.</p>
<p>The basic idea for interpreting an ensemble is not to get an 'average' tree, or to obtain plots of any of the individual trees, but to visualize the 'average' effect of the variable. In the literature, this is the 'partial dependence' of the predictor - it's effect holding the other variables constant. How the "partial dependence" is estimated is a little complicated to describe, but it is the model implied predictions obtained by allowing only predictor <em>j</em> to vary, for observation <em>i</em>. The predictions are then averaged over all <em>i</em> observations. See <a href="http://arxiv.org/pdf/0811.1679.pdf%20%22(Friedman%20&%20Popesdue,%202008)" rel="nofollow noreferrer">Friedman & Popescue (2008)</a> for the gory details. </p>
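<p>In code terms, the averaging just described looks roughly like this (a Python-style sketch, not dismo's implementation; <code>predict</code>, <code>X</code> and <code>grid</code> are placeholders):</p>
<pre><code># partial dependence of predictor j: fix column j at each grid value for every
# observation i, predict, and average the predictions over i
import numpy as np
def partial_dependence(predict, X, j, grid):
    pd = []
    for value in grid:
        Xj = X.copy()
        Xj[:, j] = value                 # only predictor j is allowed to vary
        pd.append(predict(Xj).mean())    # average over all observations i
    return np.array(pd)
</code></pre>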
<p>You can then plot estimated dependence (or what I call refer to as the "model implied") effect of the predictor against the actual values of the predictor. This let's you see the model implied effect of the predictor.</p>
<p>The good news is that such plots can be obtained in <code>dismo</code> pretty easily. See <code>gbm.plot</code> for single predictors, and <code>gbm.perspec</code> for perspective plots involving two predictors. The vignette also provides examples. To further help interpret the model, <code>gbm.interactions</code> provides a way to detect possible 2 or 3-way interactions. See <a href="https://stackoverflow.com/questions/29998014/gbminteract-gbm-vs-dismogbm-interactions/32762708#32762708">this question</a> for more details on that.</p> | 2015-09-24 14:00:14.787000+00:00 | 2015-09-24 14:00:14.787000+00:00 | 2017-05-23 12:03:49.933000+00:00 | null | 28,025,662 | <p>(previously posted <a href="https://stackoverflow.com/questions/27880856/how-to-plot-dendrograms-of-gbm-aka-brt-models-r">here</a>, to the wrong sub, with not enough info, which was closed, I edited, the edits seem to have been deleted, & the post consigned to purgatory, so apologies for re-posting, I don't know whether the previous post can/should be resurrected)</p>
<p>In R, I've run some Boosted Regression Trees, aka Generalized Boosting Models, using <code>dismo</code> which uses <code>gbm</code>. Reproducible example to get people to where I am currently:</p>
<pre><code>library(dismo); data(Anguilla_train)
angaus.tc5.lr01 <- gbm.step(data=Anguilla_train, gbm.x = 3:13, gbm.y = 2, family = "bernoulli", tree.complexity = 5, learning.rate = 0.01, bag.fraction = 0.5)
</code></pre>
<p>(From <a href="http://cran.r-project.org/web/packages/dismo/vignettes/brt.pdf" rel="nofollow noreferrer">here</a>). This leaves you with gbm model object "angaus.tc5.lr01".
I'd like to generate dendrograms of the splits (folds?), i.e. plot the trees, as per De'ath 2007 (see pic, left pane). BUT: De'ath's plot is of a single regression tree, not a boosted regression tree which is the average of potentially thousands of trees each run with a different set of data randomly drawn from the dataset.</p>
<p>User <strong>ckluss</strong> kindly suggested rpart, however that needs the model to be generated by <code>rpart</code> so doesn't work for BRTs/GBMs produced by <code>gbm.step</code>. The same is true of <code>prp</code> from <code>rpart.plot</code>.</p>
<p><code>pretty.gbm.tree</code> in <code>gbm</code> extracts a matrix of info for any one tree selected (try <code>pretty.gbm.tree(angaus.tc5.lr01, i.tree=1)</code> for the first) so I'm wondering if this might be a plausible route to success? E.g. by writing some script which creates an averaged tree matrix using all of the available trees, then converting this into a tree-like object, possibly using some of the methods <a href="https://stats.stackexchange.com/questions/41443/how-to-actually-plot-a-sample-tree-from-randomforestgettree">here</a>.</p>
<p>People have asked varyingly similar questions seemingly with no success elsewhere on the net. BRT models are regularly described as being 'black boxes' so maybe the general opinion is that one shouldn't need/be able/bother to probe into them and display their inner processes.</p>
<p>If anyone knows enough about BRTs / <code>gbm</code> and has any ideas, they'd be gratefully received.
Thanks.</p>
<p><img src="https://i.stack.imgur.com/I6orv.png" alt="De'ath tree diagram"></p> | 2015-01-19 13:23:54.697000+00:00 | 2015-09-24 14:00:14.787000+00:00 | null | r|plot|tree|dendrogram|gbm | ['http://arxiv.org/pdf/0811.1679.pdf%20%22(Friedman%20&%20Popesdue,%202008)', 'https://stackoverflow.com/questions/29998014/gbminteract-gbm-vs-dismogbm-interactions/32762708#32762708'] | 2 |
57,421,261 | <p>See this Cross Validated post: <a href="https://stats.stackexchange.com/questions/280459/estimating-gamma-distribution-parameters-using-sample-mean-and-std">https://stats.stackexchange.com/questions/280459/estimating-gamma-distribution-parameters-using-sample-mean-and-std</a></p>
<p>I don't understand what you are trying to do with:</p>
<pre><code>actual_grs = [i for i in f.gamma_random_sample(actual_population_distribution)]
sample_grs = [i for i in f.gamma_random_sample(sample_population_distribution)]
</code></pre>
<p>It doesn't look like you are fitting to a gamma distribution, it looks like you are using the Method of Moment estimator to get the parameters of the gamma distribution and then you are drawing a single random number for each element of your actual(sample)_population_distribution lists given the distribution statistics of the list.</p>
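<p>For contrast, actually fitting a gamma distribution to data would look more like this (a sketch; with only four points the estimates will be very noisy, as noted below):</p>
<pre><code># method-of-moments vs. maximum-likelihood estimates of the gamma parameters
import numpy as np
from scipy import stats
data = np.array([0.2, 0.3, 0.3, 0.2])    # far too few points in practice
mom_shape = data.mean()**2 / data.var()  # what the question's code computes
mom_scale = data.var() / data.mean()
mle_shape, _, mle_scale = stats.gamma.fit(data, floc=0)  # MLE, location fixed at 0
print(mom_shape, mom_scale, mle_shape, mle_scale)
</code></pre>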
<p>The gamma distribution is notoriously hard to fit. I hope your actual data has a longer list -- 4 data points are hardly sufficient for estimating a two parameter distribution. The estimates are kind of garbage until you get hundreds of elements or more, take a look at this document on the MLE estimator for the fisher information of a gamma distribution: <a href="https://www.math.arizona.edu/~jwatkins/O3_mle.pdf" rel="nofollow noreferrer">https://www.math.arizona.edu/~jwatkins/O3_mle.pdf</a> .</p>
<p>I don't know what you are trying to do with the kl divergence either. Your actual population is already normalized to 1 and so is the sample distribution. You can plug in those elements directly into the KL divergence for a discrete score -- what you are doing with your code is a stretching and addition of gamma noise to your original list values with your defined gamma function. You are more likely to have a larger deviation with the KL divergence after the gamma corruption of your original population data.</p>
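<p>For the discrete score mentioned above, a minimal sketch (<code>rel_entr</code> sums to the same value as <code>kl_div</code> when both distributions are properly normalized):</p>
<pre><code># KL divergence computed directly on the two normalized lists, no gamma noise
import numpy as np
from scipy.special import rel_entr
actual_population_distribution = [0.2, 0.3, 0.3, 0.2]
sample_population_distribution = [0.1, 0.4, 0.2, 0.3]
kl = np.sum(rel_entr(actual_population_distribution,
                     sample_population_distribution))
print(kl)  # a small positive number
</code></pre>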
<p>I'm sorry, I just don't see what you are trying to accomplish here. If I were to guess your original intent, I'd say your problem is that you need hundreds of data points to guarantee convergence with any gamma fitting program.</p>
<p>EDIT: I just wanted to add that with regards to the KL divergence. If you intend to score your fit gamma distributions with the KL divergence, it's better to use an analytical solution where the scale and shape parameters of your two gamma distributions are your two inputs. Randomly sampling noisy data points won't be helpful unless you take 100,000 random samples and histogram them into 1,000 bins or so and then normalize your histogram -- I'm just throwing those numbers out, but you are going to want to approximate a continuous distribution as best as you can and it will be hard because the gamma distributions have long tails. This document has the analytical solution for a generalized distribution: <a href="https://arxiv.org/pdf/1401.6853.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1401.6853.pdf</a> . Just set that third parameter to 1 and simplify and then code up a function.</p> | 2019-08-08 22:12:01.653000+00:00 | 2019-08-08 22:34:45.327000+00:00 | 2019-08-08 22:34:45.327000+00:00 | null | 57,351,224 | <p>I have two list. Both include normalized percent:</p>
<ul>
<li>actual_population_distribution = [0.2,0.3,0.3,0.2]</li>
<li>sample_population_distribution = [0.1,0.4,0.2,0.3]</li>
</ul>
<p>I wish to fit these two list in to gamma distribution and then calculate the returned two list in order to get the KL value.</p>
<p>I have already able to get KL.</p>
<p>This is the function I used to calculate gamma:</p>
<pre><code>def gamma_random_sample(data_list):
mean = np.mean(data_list)
var = np.var(data_list)
g_alpha = mean * mean / var
g_beta = mean / var
for i in range(len(data_list)):
yield random.gammavariate(g_alpha, 1/g_beta)
</code></pre>
<p>Fit two lists into gamma distribution:</p>
<pre><code>actual_grs = [i for i in f.gamma_random_sample(actual_population_distribution)]
sample_grs = [i for i in f.gamma_random_sample(sample_population_distribution)]
</code></pre>
<p>This is the code I used to calculate KL:</p>
<pre><code>kl = np.sum(scipy.special.kl_div(actual_grs, sample_grs))
</code></pre>
<p>The code above does not produce any errors.</p>
<p>But I suspect the way I did for gamma is wrong because of <code>np.mean/var</code> to get mean and variance. </p>
<p>Indeed, the number is different to:</p>
<pre><code>mean, var, skew, kurt = gamma.stats(fit_alpha, loc = fit_loc, scale = fit_beta, moments = 'mvsk')
</code></pre>
<p>if I use this way. </p>
<p>By using "<code>mean, var, skew, kurt = gamma.stats(fit_alpha, loc = fit_loc, scale = fit_beta, moments = 'mvsk')</code>", I will get a KL value way larger than 1 so both two ways are invalid for getting a correct KL.</p>
<p>What do I miss?</p> | 2019-08-05 01:07:51.220000+00:00 | 2019-08-08 22:34:45.327000+00:00 | 2019-08-05 06:14:44.453000+00:00 | python|gamma-distribution|gamma|gamma-function | ['https://stats.stackexchange.com/questions/280459/estimating-gamma-distribution-parameters-using-sample-mean-and-std', 'https://www.math.arizona.edu/~jwatkins/O3_mle.pdf', 'https://arxiv.org/pdf/1401.6853.pdf'] | 3 |
64,840,655 | <p>There are two categories for detectors, one stage and two stage. Yolo, SSD, RetinaNet, CenterNet etc. fall in one stage while R-FCN, R-CNN, Faster R-CNN, etc. fall in two stage category.</p>
<p>Direct quote from <strong>[1]</strong> about the advantages of two-stage detectors compared to one-stage ones:</p>
<blockquote>
<p>Compared to one-stage detectors, the two-stage ones have the following
advantages: 1) By sampling a sparse set of region proposals, two-stage
detectors filter out most of the negative proposals; while one-stage
detectors directly face all the regions on the image and have a
problem of class imbalance if no specialized design is introduced. 2)
Since two-stage detectors only process a small number of proposals,
the head of the network (for proposal classification and regression)
can be larger than one-stage detectors, so that richer features will
be extracted. 3) Two-stage detectors have high-quality features of
sampled proposals by use of the RoIAlign [10] operation that extracts
the location consistent feature of each proposal; but different region
proposals can share the same feature in one-stage detectors and the
coarse and spatially implicit representation of proposals may cause
severe feature misalignment. 4) Two-stage detectors regress the object
location twice (once on each stage) and the bounding boxes are better
refined than one-stage methods.</p>
</blockquote>
<p>Quote on accuracy vs. efficiency:</p>
<blockquote>
<p>One-stage detectors are more efficient and elegant in design, but
currently the two-stage detectors have domination in accuracy.</p>
</blockquote>
<p>One stage detectors can be deployed on edge devices such as phones for fast real-time detection. This can save more energy compared to more compute intensive detectors.</p>
<p>In summary, go for two stage detectors if accuracy is more important, otherwise go for one stage for faster detection while maintaining good enough accuracy.</p>
<p>The related-works section of <strong>[1]</strong> contains easy-to-read details, and each of the referenced papers also reviews two-stage vs. one-stage detectors.</p>
<h3>Object detection benchmarks</h3>
<p><a href="https://paperswithcode.com/task/object-detection" rel="nofollow noreferrer">https://paperswithcode.com/task/object-detection</a></p>
<h3>References</h3>
<p>[1] MimicDet, <a href="https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123590528.pdf" rel="nofollow noreferrer">https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123590528.pdf</a></p>
<p>[2] Speed/accuracy trade-offs for modern convolutional object detectors, <a href="https://arxiv.org/pdf/1611.10012.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.10012.pdf</a></p>
<p>[3] RetinaNet, <a href="https://arxiv.org/pdf/1708.02002.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1708.02002.pdf</a></p>
<p>[4] Object detection review, <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9186021" rel="nofollow noreferrer">https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9186021</a></p>
<p>[5] CSPNET, <a href="https://arxiv.org/pdf/1911.11929v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1911.11929v1.pdf</a></p>
<p>[6] CenterNet, <a href="https://arxiv.org/pdf/1904.08189v3.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1904.08189v3.pdf</a></p>
<p>[7] EfficientDet, <a href="https://arxiv.org/pdf/1911.09070.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1911.09070.pdf</a></p>
<p>[8] SpineNet, <a href="https://arxiv.org/pdf/1912.05027.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1912.05027.pdf</a></p>
<h3>Related articles</h3>
<p><a href="https://jonathan-hui.medium.com/object-detection-speed-and-accuracy-comparison-faster-r-cnn-r-fcn-ssd-and-yolo-5425656ae359" rel="nofollow noreferrer">https://jonathan-hui.medium.com/object-detection-speed-and-accuracy-comparison-faster-r-cnn-r-fcn-ssd-and-yolo-5425656ae359</a></p>
<p><a href="https://www.jeremyjordan.me/object-detection-one-stage/" rel="nofollow noreferrer">https://www.jeremyjordan.me/object-detection-one-stage/</a></p> | 2020-11-15 02:26:35.960000+00:00 | 2020-11-15 02:26:35.960000+00:00 | null | null | 64,839,039 | <p>I'm in the research phase of my project and I'm trying to make an object detector using CNN. I know that in general there's 2 "type" of CNN object detector, Region Proposal based (i.e R-CNN and R-FCN ) and Regression/Classification based method (i.e YOLO and SSD). The problem is I'm not so sure which method should I use. I would like to know what are the usual reasoning to choose a Method over the other. there's a few general criteria such as Speed vs Accuracy. But is there any other commonly used reasoning ?</p> | 2020-11-14 21:57:24.643000+00:00 | 2020-11-15 18:02:47.377000+00:00 | 2020-11-15 18:02:47.377000+00:00 | object-detection|conv-neural-network | ['https://paperswithcode.com/task/object-detection', 'https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123590528.pdf', 'https://arxiv.org/pdf/1611.10012.pdf', 'https://arxiv.org/pdf/1708.02002.pdf', 'https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9186021', 'https://arxiv.org/pdf/1911.11929v1.pdf', 'https://arxiv.org/pdf/1904.08189v3.pdf', 'https://arxiv.org/pdf/1911.09070.pdf', 'https://arxiv.org/pdf/1912.05027.pdf', 'https://jonathan-hui.medium.com/object-detection-speed-and-accuracy-comparison-faster-r-cnn-r-fcn-ssd-and-yolo-5425656ae359', 'https://www.jeremyjordan.me/object-detection-one-stage/'] | 11 |
52,658,859 | <p>The largest issue was that you were using mean squared error as your loss function on a classification problem. The cross-entropy loss function is much more suited for this kind of problem. </p>
<p>Here's a visualization of the difference between the cross-entropy loss function and the mean squared error loss function: </p>
<p><a href="https://i.stack.imgur.com/o0lY2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o0lY2.png" alt="MSE vs Cross-Entropy Loss"></a>
Source: <a href="http://www.wolframalpha.com/input/?i=(x%20-%201)%5E2%20and%20-ln(x)%20from%200%20to%201" rel="nofollow noreferrer" title="Wolfram Alpha Link">Wolfram Alpha</a></p>
<p>Notice how the loss increases asymptotically as the model gets further from the correct prediction (in this case 1). This curvature provides a much stronger gradient signal during backpropogation while also satisfying many important theoretical probability distribution distance (divergence) properties. By minimizing the cross-entropy loss you are actually also minimizing the KL divergence between your model's prediction distribution and the training data label distribution. You can read more about the cross-entropy loss function here: <a href="http://colah.github.io/posts/2015-09-Visual-Information/" rel="nofollow noreferrer">http://colah.github.io/posts/2015-09-Visual-Information/</a></p>
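<p>A quick numeric illustration of that difference (standalone values, separate from the model code below):</p>
<pre><code># compare the two losses as the predicted probability of the true class drops
import numpy as np
p = np.array([0.9, 0.5, 0.1, 0.01])
mse = (1.0 - p)**2        # squared error against the target 1
xent = -np.log(p)         # cross-entropy / negative log-likelihood
for pi, m, x in zip(p, mse, xent):
    print('p=%.2f  mse=%.3f  cross-entropy=%.3f' % (pi, m, x))
# cross-entropy blows up as p -> 0 while mse saturates at 1
</code></pre>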
<p>I also tweaked a few other things to make the code better and make the model easier to modify. This should solve all your problems:</p>
<pre><code>import tensorflow as tf
import numpy as np
from tqdm import tqdm
# define a random seed for (somewhat) reproducible results:
seed = 0
np.random.seed(seed)
print('Creating Datasets:')
# much faster dataset creation
x_train = np.random.uniform(low=0, high=255, size=[10000, 3])
# easier label creation
# if the average color is greater than half the color space than use black, otherwise use white
# classes:
# white = 0
# black = 1
y_train = ((np.mean(x_train, axis=1) / 255.0) > 0.5).astype(int)
# now transform dataset to be within range [-1, 1] instead of [0, 255]
# for numeric stability and quicker model training
x_train = (2 * (x_train / 255)) - 1
graph = tf.Graph()
with graph.as_default():
# must do this within graph scope
tf.set_random_seed(seed)
# specify input dims for clarity
x = tf.placeholder(tf.float32, shape=[None, 3])
# y is now integer label [0 or 1]
y = tf.placeholder(tf.int32, shape=[None])
# use relu, usually better than sigmoid
activation_fn = tf.nn.relu
# from https://arxiv.org/abs/1502.01852v1
initializer = tf.initializers.variance_scaling(
scale=2.0,
mode='fan_in',
distribution='truncated_normal')
# better api to reduce clutter
l_1 = tf.layers.dense(
x,
10,
activation=activation_fn,
kernel_initializer=initializer)
l_2 = tf.layers.dense(
l_1,
10,
activation=activation_fn,
kernel_initializer=initializer)
l_3 = tf.layers.dense(
l_2,
5,
activation=activation_fn,
kernel_initializer=initializer)
y_logits = tf.layers.dense(
l_3,
2,
activation=None,
kernel_initializer=initializer)
y_ = tf.nn.softmax(y_logits)
# much better loss function for classification
loss = tf.reduce_mean(
tf.losses.sparse_softmax_cross_entropy(
labels=y,
logits=y_logits))
# much better default optimizer for new problems
# good learning rate, but probably can tune
optimizer = tf.train.AdamOptimizer(
learning_rate=0.01)
# seperate train op for easier calling
train_op = optimizer.minimize(loss)
# tell tensorflow not to allocate all gpu memory at start
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
with tf.Session(config=config) as sess:
sess.run(tf.global_variables_initializer())
print('Training:')
for step in tqdm(range(5000)):
index = np.random.randint(0, len(x_train) - 129)
feed_dict = {x : x_train[index:index+128],
y : y_train[index:index+128]}
# can train and get loss in single run, much more efficient
_, b_loss = sess.run([train_op, loss], feed_dict=feed_dict)
if step % 1000 == 0:
print(b_loss)
while True:
inp1 = int(input('Enter R pixel color: '))
inp2 = int(input('Enter G pixel color: '))
inp3 = int(input('Enter B pixel color: '))
# scale to model train range [-1, 1]
model_input = (2 * (np.array([inp1, inp2, inp3], dtype=float) / 255.0)) - 1
if (model_input >= -1).all() and (model_input <= 1).all():
# y_ is now two probabilities (white_prob, black_prob) but they will sum to 1.
white_prob, black_prob = sess.run(y_, feed_dict={x : [model_input]})[0]
print('White prob: {:.2f} Black prob: {:.2f}'.format(white_prob, black_prob))
else:
print('Values not within [0, 255]!')
</code></pre>
<p>I documented my changes with comments, but let me know if you have any questions! I ran this on my end and it worked perfectly:</p>
<pre><code>Creating Datasets:
2018-10-05 00:50:59.156822: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-10-05 00:50:59.411003: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1405] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:03:00.0
totalMemory: 8.00GiB freeMemory: 6.60GiB
2018-10-05 00:50:59.417736: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1484] Adding visible gpu devices: 0
2018-10-05 00:51:00.109351: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-10-05 00:51:00.113660: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0
2018-10-05 00:51:00.118545: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:984] 0: N
2018-10-05 00:51:00.121605: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6370 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:03:00.0, compute capability: 6.1)
Training:
0%| | 0/5000 [00:00<?, ?it/s]0.6222609
19%|██████████████▋ | 940/5000 [00:01<00:14, 275.57it/s]0.013466636
39%|██████████████████████████████ | 1951/5000 [00:02<00:04, 708.07it/s]0.0067519126
59%|█████████████████████████████████████████████▊ | 2971/5000 [00:04<00:02, 733.24it/s]0.0028143923
79%|████████████████████████████████████████████████████████████▌ | 3935/5000 [00:05<00:01, 726.36it/s]0.0073514087
100%|█████████████████████████████████████████████████████████████████████████████| 5000/5000 [00:07<00:00, 698.32it/s]
Enter R pixel color: 1
Enter G pixel color: 1
Enter B pixel color: 1
White prob: 1.00 Black prob: 0.00
Enter R pixel color: 255
Enter G pixel color: 255
Enter B pixel color: 255
White prob: 0.00 Black prob: 1.00
Enter R pixel color: 128
Enter G pixel color: 128
Enter B pixel color: 128
White prob: 0.08 Black prob: 0.92
Enter R pixel color: 126
Enter G pixel color: 126
Enter B pixel color: 126
White prob: 0.99 Black prob: 0.01
</code></pre> | 2018-10-05 05:28:00.417000+00:00 | 2018-10-05 06:41:23.307000+00:00 | 2018-10-05 06:41:23.307000+00:00 | null | 50,641,866 | <p>I am training a classifier that takes a RGB input (so three 0 to 255 values) and returns whether black or white (0 or 1) font would fit best with that colour. After training, my classifier always returns 0.5 (or there about) and never gets any more accurate than that.</p>
<p>The code is below:</p>
<pre><code>import tensorflow as tf
import numpy as np
from tqdm import tqdm
print('Creating Datasets:')
x_train = []
y_train = []
for i in tqdm(range(10000)):
x_train.append([np.random.uniform(0, 255), np.random.uniform(0, 255), np.random.uniform(0, 255)])
for elem in tqdm(x_train):
if (((elem[0] + elem[1] + elem[2]) / 3) / 255) > 0.5:
y_train.append(0)
else:
y_train.append(1)
x_train = np.array(x_train)
y_train = np.array(y_train)
graph = tf.Graph()
with graph.as_default():
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
w_1 = tf.Variable(tf.random_normal([3, 10], stddev=1.0), tf.float32)
b_1 = tf.Variable(tf.random_normal([10]), tf.float32)
l_1 = tf.sigmoid(tf.matmul(x, w_1) + b_1)
w_2 = tf.Variable(tf.random_normal([10, 10], stddev=1.0), tf.float32)
b_2 = tf.Variable(tf.random_normal([10]), tf.float32)
l_2 = tf.sigmoid(tf.matmul(l_1, w_2) + b_2)
w_3 = tf.Variable(tf.random_normal([10, 5], stddev=1.0), tf.float32)
b_3 = tf.Variable(tf.random_normal([5]), tf.float32)
l_3 = tf.sigmoid(tf.matmul(l_2, w_3) + b_3)
w_4 = tf.Variable(tf.random_normal([5, 1], stddev=1.0), tf.float32)
b_4 = tf.Variable(tf.random_normal([1]), tf.float32)
y_ = tf.sigmoid(tf.matmul(l_3, w_4) + b_4)
loss = tf.reduce_mean(tf.squared_difference(y, y_))
optimizer = tf.train.AdadeltaOptimizer().minimize(loss)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print('Training:')
for step in tqdm(range(5000)):
index = np.random.randint(0, len(x_train) - 129)
feed_dict = {x : x_train[index:index+128], y : y_train[index:index+128]}
sess.run(optimizer, feed_dict=feed_dict)
if step % 1000 == 0:
print(sess.run([loss], feed_dict=feed_dict))
while True:
inp1 = int(input(''))
inp2 = int(input(''))
inp3 = int(input(''))
print(sess.run(y_, feed_dict={x : [[inp1, inp2, inp3]]}))
</code></pre>
<p>As you can see, I start by importing the modules I will be using. Next I generate my input x dataset and desired output y dataset. The x_train dataset consists of 10000 random RGB values, while the y_train dataset consists of 0's and 1's, with a 1 corresponding to an RGB value with a mean lower than 128 and a 0 corresponding to an RGB value with a mean higher than 128 (this ensures bright backgrounds get dark font and vice versa).</p>
<p>My neural net is admittedly overly complex (or so i assume), but as far as I am aware it is a pretty standard feed forward net, with an Adadelta optimiser and the default learning rate.</p>
<p>The training of the net is normal as far as my limited knowledge informs me, but nonetheless the model always spits out 0.5.</p>
<p>The last block of code allows the user to input values and see what they turn into when passed to the neural net.</p>
<p>I have messed around with different activation functions, losses, methods of initialising biases etc. But to no avail. Some times when I tinker with the code, the model always returns 1 or 0 respectively, but this is still just as inaccurate as being indecisive and returning 0.5 over and over. I have not been able to find a suitable solution to my problem online. Any advice or suggestions are welcome. </p>
<p>Edit:</p>
<p>The loss, weights, biases and the output don't change much over the course of training (the weights and biases only change by hundredths and thousandths every 1000 iterations, and the loss fluctuates around 0.3). Also, the output sometimes varies f depending on the input (as you would expect), but other times is constant. One run of the program lead to constant 0.7's as output, while another always returned 0.5 apart from very near to zero, where it returned 0.3 or 0.4 type values. Neither of the aforementioned are the desired output. What should happen is that (255, 255, 255) should map to 0 and (0, 0, 0) should map to 1 and (128, 128, 128) should map to either 1 or 0, as in the middle the font colour doesn't really matter.</p> | 2018-06-01 10:53:48.403000+00:00 | 2018-10-05 06:41:23.307000+00:00 | 2018-10-04 17:48:05.497000+00:00 | python|tensorflow|machine-learning | ['https://i.stack.imgur.com/o0lY2.png', 'http://www.wolframalpha.com/input/?i=(x%20-%201)%5E2%20and%20-ln(x)%20from%200%20to%201', 'http://colah.github.io/posts/2015-09-Visual-Information/'] | 3 |
60,753,584 | <p>Probably it just depends on the argument <code>len</code> of the function. For example, if I run the following code </p>
<pre class="lang-r prettyprint-override"><code># packages
library(FLSSS)
# data
prof <- c(400, 320, 230, 210, 190, 130)
costs <- cbind(
c(6, 3, 2, 2, 2, 1),
c(8, 8, 7, 6, 5, 4)
)
capac <- c(9, 18)
</code></pre>
<p>then I get the solution you are currently obtaining</p>
<pre class="lang-r prettyprint-override"><code>mmKnapsack(
maxCore = 3,
len = 2,
itemsProfits = prof,
itemsCosts = costs,
capacities = capac
)
#> Updated profit = 720
#> $solution
#> [1] 2 1
#>
#> $selectionCosts
#> [1] 9 16
#>
#> $budgets
#> [1] 9 18
#>
#> $selectionProfit
#> [1] 720
#>
#> $unconstrainedMaxProfit
#> [1] 720
</code></pre>
<p>but if I increase the maximum subset size (i.e. please note that I set <code>len = 3</code>), then I get a better solution. </p>
<pre class="lang-r prettyprint-override"><code>mmKnapsack(
maxCore = 3,
len = 3,
itemsProfits = prof,
itemsCosts = costs,
capacities = capac
)
#> Updated profit = 630
#> Updated profit = 660
#> Updated profit = 720
#> Updated profit = 740
#> $solution
#> [1] 6 4 1
#>
#> $selectionCosts
#> [1] 9 18
#>
#> $budgets
#> [1] 9 18
#>
#> $selectionProfit
#> [1] 740
#>
#> $unconstrainedMaxProfit
#> [1] 950
</code></pre>
<p><sup>Created on 2020-03-19 by the <a href="https://reprex.tidyverse.org" rel="nofollow noreferrer">reprex package</a> (v0.3.0)</sup></p>
<p>From the <a href="https://arxiv.org/abs/1612.04484" rel="nofollow noreferrer">package docs</a> you can read that if you set <code>len = 0</code>, then FLSSS function tries to look for the optimal solution for all subset sizes, i.e.</p>
<pre class="lang-r prettyprint-override"><code>mmKnapsack(
maxCore = 3,
len = 0,
itemsProfits = prof,
itemsCosts = costs,
capacities = capac
)
#> Updated profit = 630
#> Updated profit = 740
#> $solution
#> [1] 1 4 6
#>
#> $selectionCosts
#> [1] 9 18
#>
#> $budgets
#> [1] 9 18
#>
#> $selectionProfit
#> [1] 740
#>
#> $unconstrainedMaxProfit
#> [1] NA
</code></pre>
<p><sup>Created on 2020-03-19 by the <a href="https://reprex.tidyverse.org" rel="nofollow noreferrer">reprex package</a> (v0.3.0)</sup></p> | 2020-03-19 08:43:23.857000+00:00 | 2020-03-19 08:43:23.857000+00:00 | null | null | 60,749,594 | <p>I've just tried using the mmKnapsack function to solve a multi-dimensional knapsack problem in R.
I noticed that the solution seemed a bit suspect, so I tried a very simple 2-d problem:<a href="https://i.stack.imgur.com/uyvNB.png" rel="nofollow noreferrer">the 2-d problem</a>. It returned an optimal profit of 720, which I can easily see is not the optimal profit of the 2-d problem(the optimal profit is 740). It returns a solution of items 2 and 1 <a href="https://i.stack.imgur.com/6iNAI.png" rel="nofollow noreferrer">as shown here</a>, but the optimal solution is items 1, 4 and 6.
<a href="https://i.stack.imgur.com/YFJ1l.png" rel="nofollow noreferrer">Here</a> is the code I ran </p> | 2020-03-19 01:09:24.683000+00:00 | 2020-03-19 08:43:23.857000+00:00 | null | r|optimization|knapsack-problem | ['https://reprex.tidyverse.org', 'https://arxiv.org/abs/1612.04484', 'https://reprex.tidyverse.org'] | 3 |
51,682,077 | <p>At the moment, Jython is still considerably slower than CPython. Depending on the program and how well the JIT can optimize it, multithreading might or might not pay off. Jython's primary design goal is compatibility, before performance. It is mainly intended for glue code and there is still a lot of potential for efficiency improvements. See e.g. zippy for a blazingly fast Python implementation in Java; however, it is experimental and lacks Jython's compatibility level. In a way this represents the opposite design goal.</p>

<p>Now adding JyNI to Jython does not exactly make it faster, but so far I have found that performance optimization in JyNI would be premature, and usually the Jython part dominates the runtime anyway. Also, e.g. for NumPy the native numerics workload vastly dominates the glue code cost.</p>
<p>Finally, note that JyNI must emulate a GIL on C side. For details have a look at the paper <a href="https://arxiv.org/abs/1607.00825" rel="nofollow noreferrer">https://arxiv.org/abs/1607.00825</a>. Maybe it will be possible to operate certain extensions without a GIL - it depends on implementation detail, how sensitive an extension is to that. Currently the C-side GIL is mandatory. That's why you might not benefit from Java multithreading when using NumPy. C-extensions have the option to explicitly release the GIL e.g. during computationally intense operations that don't interact with the interpreter. I don't know if NumPy makes use of this.</p>
<blockquote>
<p>JyNI is alpha and buggy</p>
</blockquote>
<p>Please make sure to report bugs at the issue tracker.</p> | 2018-08-04 03:04:52.350000+00:00 | 2018-08-04 03:04:52.350000+00:00 | null | null | 47,090,816 | <p>I'm rephrasing my question because I think many thought it was the question "does python have threads". It does, but CPython also has the GIL, which will never schedule more than one thread at any given time. That makes CPython threads useless for cpu-intensive computations.</p>
<p>I need to use threads; process parallelism won't work for me because of the IPC costs (I have large shared objects).</p>
<p>I'm currently using Jython (no GIL) with JyNI so that I can use numpy. JyNI is alpha, but it does now support numpy. I got this to work. However, JyNI is alpha and buggy, and the whole process is slow.</p>
<p>I've read a bunch of old threads. I wonder whether there has been a viable option since then? I'm forced to use python 2.7.</p>
<p>Thanks.</p> | 2017-11-03 07:48:06.883000+00:00 | 2018-08-04 03:04:52.350000+00:00 | null | python|jython|gil|jyni | ['https://arxiv.org/abs/1607.00825'] | 1 |
44,685,779 | <p>In <a href="https://arxiv.org/abs/1409.0473" rel="nofollow noreferrer">Neural Machine Translation by Jointly Learning to Align and Translate</a> they give a description of the (Bahdanau) attention mechanism; essentially what happens is that you compute scalar "alignment scores" <code>a_1, a_2, ..., a_n</code> that indicate how important each element of your encoded input sequence is at a given moment in time (i.e. which part of the input sentence you should pay attention to right now in the current timestep).</p>
<p>Assuming your (encoded) input sequence that you want to "pay attention"/"attend over" is a sequence of vectors denoted as <code>e_1, e_2, ..., e_n</code>, the context vector at a given timestep is the weighted sum over all of these as determined by your alignment scores:</p>
<p><code>context = c := (a_1*e_1) + (a_2*e_2) + ... + (a_n*e_n)</code></p>
<p>(Remember that the <code>a_k</code>'s are scalars; you can think of this as an "averaged-out" letter/word in your sentence --- so ideally if your model is trained well, the context looks most similar to the <code>e_i</code> you want to pay attention to the most, but bears a little bit of resemblance to <code>e_{i-1}</code>, <code>e_{i+1}</code>, etc. Intuitively, think of a "smeared-out" input element, if that makes any sense...)</p>
<p>Anyway, if <code>attention_layer_size</code> is not <code>None</code>, then it specifies the number of hidden units in a feedforward layer within your decoder that is used to mix this context vector with the output of the decoder's internal RNN cell to get the attention value. If <code>attention_layer_size == None</code>, it just uses the context vector above as the attention value, and no mixing of the internal RNN cell's output is done. (When I say "mixing", I mean that the context vector and the RNN cell's output are concatenated and then projected to the dimensionality that you specify by setting <code>attention_layer_size</code>.)</p>
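<p>To make the shapes concrete, here is a rough NumPy sketch of the computation described above (the sizes, the random "scores", and the single projection matrix are made-up stand-ins for illustration, not the actual <code>AttentionWrapper</code> internals):</p>

<pre class="lang-py prettyprint-override"><code>import numpy as np

n, enc_dim, cell_dim, attention_layer_size = 5, 8, 10, 4   # made-up sizes

e = np.random.randn(n, enc_dim)        # encoded inputs e_1 ... e_n (the "memory" we attend over)
scores = np.random.randn(n)            # alignment scores from some scoring function

a = np.exp(scores) / np.exp(scores).sum()     # softmax -> scalar weights a_1 ... a_n
context = (a[:, None] * e).sum(axis=0)        # c = a_1*e_1 + ... + a_n*e_n

cell_output = np.random.randn(cell_dim)       # output of the decoder's internal RNN cell

# attention_layer_size is None -> the attention value is just `context`
attention_if_none = context

# attention_layer_size = k -> concat([cell_output, context]) is projected to size k
W = np.random.randn(cell_dim + enc_dim, attention_layer_size)   # stand-in for the dense layer
attention_if_k = np.concatenate([cell_output, context]) @ W
</code></pre>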
<p>The relevant part of the implementation is at <a href="https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py#L731" rel="nofollow noreferrer">this line</a> and has a description of how it's computed.</p>
<p>Hope that helps!</p> | 2017-06-21 20:33:03.857000+00:00 | 2017-06-21 20:33:03.857000+00:00 | null | null | 44,685,359 | <p>In <code>tensorflow.contrib.seq2seq</code>'s <code>AttentionWrapper</code>, what does "depth" refer to as stated in the <code>attention_layer_size</code> documentation? When the documentation says to "use the context as attention" if the value is <code>None</code>, what is meant by "the context"?</p> | 2017-06-21 20:06:09.520000+00:00 | 2017-06-21 20:33:03.857000+00:00 | null | tensorflow | ['https://arxiv.org/abs/1409.0473', 'https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py#L731'] | 2 |
71,316,669 | <p>Just wanted to point out that SMOTE generally doesn't improve prediction quality. See <a href="https://arxiv.org/abs/2201.08528" rel="nofollow noreferrer">https://arxiv.org/abs/2201.08528</a></p> | 2022-03-02 02:08:13.827000+00:00 | 2022-03-02 02:08:13.827000+00:00 | null | null | 71,127,641 | <p>I have an imbalanced classification problem and I am using <code>make_pipeline</code> from <code>imblearn</code></p>
<p>So the steps are the following:</p>
<pre><code>kf = StratifiedKFold(n_splits=10, random_state=42, shuffle=True)
params = {
'max_depth': [2,3,5],
# 'max_features':['auto', 'sqrt', 'log2'],
# 'min_samples_leaf': [5,10,20,50,100,200,300],
'n_estimators': [10,25,30,50]
# 'bootstrap': [True, False]
}
from imblearn.pipeline import make_pipeline
imba_pipeline = make_pipeline(SMOTE(random_state = 42), RobustScaler(), RandomForestClassifier(random_state=42))
imba_pipeline
out:Pipeline(steps=[('smote', SMOTE(random_state=42)),
('robustscaler', RobustScaler()),
('randomforestclassifier',
RandomForestClassifier(random_state=42))])
new_params = {'randomforestclassifier__' + key: params[key] for key in params}
grid_imba = GridSearchCV(imba_pipeline, param_grid=new_params, cv=kf, scoring='recall',
return_train_score=True, n_jobs=-1, verbose=2)
grid_imba.fit(X_train, y_train)
</code></pre>
<p>Everything goes OK and I reach the end of my procedure (i.e. I can see the classification report).</p>
<p>However, when I try to see inside the black box with <code>eli5</code>, using <code>eli5.explain_weights(imba_pipeline)</code>,</p>

<p>I get back the following error:</p>
<pre><code>TypeError: All intermediate steps should be transformers and implement fit and transform or be the string 'passthrough' 'SMOTE(random_state=42)' (type <class 'imblearn.over_sampling._smote.SMOTE'>) doesn't
</code></pre>
<p>I know that this is a common problem and I have read the related questions, but I am confused because the problem occurs after the end of my classification procedure.</p>
<p>Any suggestions?</p> | 2022-02-15 13:52:19.817000+00:00 | 2022-03-02 02:08:13.827000+00:00 | 2022-02-15 14:54:10.807000+00:00 | python-3.x|machine-learning|classification|imbalanced-data|oversampling | ['https://arxiv.org/abs/2201.08528'] | 1 |
11,421,598 | <blockquote>
<p>Objects are poor man's closures.</p>
</blockquote>
<p>Consider Java. Java is an object-oriented programming language with no language level support for real lexical closures. As a work-around Java programmers use anonymous inner classes that can close over the variables available in lexical scope (provided they're <code>final</code>). In this sense, objects are poor man's closures.</p>
<blockquote>
<p>Closures are poor man's objects.</p>
</blockquote>
<p>Consider Haskell. Haskell is a functional language with no language level support for real objects. However they can be modeled using closures, as described in <a href="http://arxiv.org/abs/cs/0509027" rel="noreferrer">this</a> excellent paper by Oleg Kiselyov and Ralf Lammel. In this sense, closures are poor man's objects.</p>
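<p>The paper's examples are in Haskell, but the "closures are poor man's objects" direction is easy to sketch in any language with closures; here is a small Python illustration (mine, not from the paper) of an "object" that is nothing but a bundle of closures over shared state:</p>

<pre class="lang-py prettyprint-override"><code># A counter "object" built only from closures.
def make_counter(start=0):
    count = [start]                    # the captured environment plays the role of a field
    def increment():
        count[0] += 1
        return count[0]
    def value():
        return count[0]
    return {"increment": increment, "value": value}   # a bundle of "methods"

c = make_counter()
c["increment"]()
c["increment"]()
print(c["value"]())   # 2 -- behaves like an object with private state
</code></pre>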
<hr>
<p>If you come from an OO background, you'll probably find thinking in terms of objects more natural, and may therefore think of them as a more fundamental concept than closures. If you come from a FP background, you might find thinking in terms of closures more natural, and may therefore think of them as a more fundamental concept than objects. </p>
<p>Moral of the story is that <strong>closures and objects are ideas that are expressible in terms of each other, and none is more fundamental than the other</strong>. That's all there is to the statement under consideration.</p>
<p>In philosophy, this is referred to as <a href="http://en.wikipedia.org/wiki/Model-dependent_realism" rel="noreferrer">model dependent realism</a>.</p> | 2012-07-10 20:41:08.930000+00:00 | 2014-01-03 10:23:36.053000+00:00 | 2014-01-03 10:23:36.053000+00:00 | null | 2,497,801 | <blockquote>
<p>Closures are poor man's objects and vice versa.</p>
</blockquote>
<p>I have seen this statement <a href="http://c2.com/cgi/wiki?ClosuresAndObjectsAreEquivalent" rel="noreferrer">at</a> <a href="http://www.kimbly.com/blog/000063.html" rel="noreferrer">many</a> <a href="http://dotnetslackers.com/CSharp/re-198713_Epiphany_Closures_are_Objects_Objects_are_Closures.aspx" rel="noreferrer">places</a> on the web (<a href="https://stackoverflow.com/questions/501023/closures-and-objects">including SO</a>) but I don't quite understand what it means. Could someone please explain what it exactly means? </p>
<p>If possible, please include examples in your answer.</p> | 2010-03-23 05:38:57.333000+00:00 | 2021-03-16 10:31:25.453000+00:00 | 2017-05-23 12:10:41.520000+00:00 | functional-programming|object|oop|closures | ['http://arxiv.org/abs/cs/0509027', 'http://en.wikipedia.org/wiki/Model-dependent_realism'] | 2 |
46,255,193 | <p>I have applied Bayesian Optimization for hyper-parameter tuning in production, so I've faced similar issues. </p>
<p>Different Bayesian methods have different characteristics in terms of exploration/exploitation trade off, for example probability of improvement (PI) tends to exploit more by selecting the next point close to the known extremum, while <a href="http://www.jmlr.org/papers/volume3/auer02a/auer02a.pdf" rel="nofollow noreferrer">upper confidence bound</a> (UCB) on the contrary prefers exploration. The problem is the first local maximum is often not good enough to exploit, but "exploratory" methods take too much time and luck to use them alone.</p>
<p>The method that demonstrated the best results for me was a <a href="https://arxiv.org/pdf/1009.5419.pdf" rel="nofollow noreferrer">portfolio strategy</a> (also known as a mixed strategy), that essentially makes an ensemble out of other methods. For example, on each step, it can pick UCB with 40% probability, PI with 40% and plain random point with 20% probability. What is important is that all methods share the outcomes, for instance if at some point a random method selects a good candidate, it then changes the GP model for UCB and PI, so from this moment PI is more likely to exploit that point. I have sometimes noticed that even the negative yet unexpected result changed the shape of a GP significantly, which in turn affected UCB and how it explores.</p>
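<p>A minimal sketch of that dispatch logic might look like the following (the three proposal functions are placeholders — in a real setup each one would maximize its acquisition function over the same shared GP posterior, so every observation influences all of them):</p>

<pre class="lang-py prettyprint-override"><code>import random

def ucb_proposal(bounds):    return random.uniform(*bounds)   # placeholder for argmax of UCB
def pi_proposal(bounds):     return random.uniform(*bounds)   # placeholder for argmax of PI
def random_proposal(bounds): return random.uniform(*bounds)   # pure exploration

def propose_next(bounds, p_ucb=0.4, p_pi=0.4):
    r = random.random()
    if r < p_ucb:
        return ucb_proposal(bounds)          # exploration-leaning
    if r < p_ucb + p_pi:
        return pi_proposal(bounds)           # exploitation-leaning
    return random_proposal(bounds)           # remaining 20%: a random point

print(propose_next((0.0, 1.0)))
</code></pre>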
<p>Clearly, a portfolio distribution itself can also change over time. It makes sense to start off by exploring more and then shift to exploitation, but still leaving some chance to explore (ε-greedy in the limit).</p>
<p>As for the range selection, I preferred to make it as large as possible and let Bayesian Optimization decide which values deserve more attempts, at least in the beginning. Note that PI method doesn't care much how big your range is. UCB tends to take more attempts as the range grows. In my experience, often the correlation between certain ranges (e.g. a regularizer is less than 0.01) and the outcome (severe overfitting) became obvious after several runs, and that allowed to narrow the range for all methods. But I don't recommend "premature optimizations" like that right from the start.</p>
<p>In the end, I wrote my own library for Bayesian Optimization. If you're interested in the code, please <a href="https://github.com/maxim5/hyper-engine" rel="nofollow noreferrer">check it out on GitHub</a>.</p> | 2017-09-16 14:56:08.187000+00:00 | 2017-09-20 08:35:08.927000+00:00 | 2017-09-20 08:35:08.927000+00:00 | null | 44,674,279 | <p>What is the best way to use hyperparameter tuning using Bayesian Optimization with some heuristic selections to explore too?</p>
<p>In packages such as <a href="https://github.com/kuz/caffe-with-spearmint" rel="nofollow noreferrer">spearmint</a> or <a href="https://github.com/hyperopt/hyperopt" rel="nofollow noreferrer">hyperopt</a> you can specify a range to explore but I want to also explore some heuristic values that do not necessarily belong to the range. Any suggestions what' the best practice to do this?</p> | 2017-06-21 11:02:43.223000+00:00 | 2017-09-20 08:35:08.927000+00:00 | null | deep-learning|mathematical-optimization|gaussian|hyperparameters | ['http://www.jmlr.org/papers/volume3/auer02a/auer02a.pdf', 'https://arxiv.org/pdf/1009.5419.pdf', 'https://github.com/maxim5/hyper-engine'] | 3 |
71,122,998 | <p>I know that this is an extremely late answer, but this is a question that has come up for me as well, and I wanted to provide some information for anyone who sees this in the future.</p>
<p>I recently found this resource - <a href="https://arxiv.org/pdf/1905.08778.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1905.08778.pdf</a></p>
<p>The table at the bottom lists the latency of basic operations on several graphics cards. There is a small but consistent savings to be found by using uints on all measured hardware. However, what the warning doesn't state is that the greater optimization is to be found by replacing division with multiplication if at all possible.</p>
<p><a href="https://www.slideshare.net/DevCentralAMD/lowlevel-shader-optimization-for-nextgen-and-dx11-by-emil-persson" rel="nofollow noreferrer">https://www.slideshare.net/DevCentralAMD/lowlevel-shader-optimization-for-nextgen-and-dx11-by-emil-persson</a> states that type conversion is a full-rate operation like int/float subtraction, addition, and multiplication, whereas division is very slow.</p>
<p>I've seen it suggested that to improve performance, one should convert to float, divide, then convert back to int, but as shown in the first source, this will at best give you small gains and at worst actually decrease performance.</p>
<p>You are correct that this differs from the performance of the same operations on the CPU, although I'm not entirely certain why.</p>
<p>Looking at <a href="https://www.agner.org/optimize/instruction_tables.pdf" rel="nofollow noreferrer">https://www.agner.org/optimize/instruction_tables.pdf</a> it appears that which operation is faster (MUL vs IMUL) varies from CPU to CPU - in a few at the top of the list IMUL is actually faster, despite a higher instruction count. Other CPUs don't provide a distinction between MUL and IMUL at all.</p>
<p>TL;DR uint division is faster on the GPU, but on the CPU YMMV</p> | 2022-02-15 08:11:37.820000+00:00 | 2022-02-15 08:11:37.820000+00:00 | null | null | 37,179,981 | <p>I keep having warnings from compute shader compilation in that I'm recommended to use uints instead of ints with dividing.</p>
<p>By default from the data type I assume uints are faster; however various tests online seem to point to the contrary; perhaps this contradiction is on the CPU side only and GPU parallelisation has some unknown advantage?
(Or is it just bad advice?)</p> | 2016-05-12 07:23:41.267000+00:00 | 2022-02-15 08:11:37.820000+00:00 | null | hlsl|execution-time|hlsl2glsl | ['https://arxiv.org/pdf/1905.08778.pdf', 'https://www.slideshare.net/DevCentralAMD/lowlevel-shader-optimization-for-nextgen-and-dx11-by-emil-persson', 'https://www.agner.org/optimize/instruction_tables.pdf'] | 3 |
54,078,288 | <p>In your question, I believe there may be a lot of confusion and misconceptions.</p>
<ol>
<li><p>Firstly, deep deterministic policy gradient (DDPG) can <strong>definitely</strong> handle continuous states and actions, and that is largely why it is so famous: it is the first <em>stable</em> architecture to do so. Note that the paper you linked is actually DPG, not DDPG. DDPG and DPG can both handle continuous states and actions, but the latter is much more unstable. That paper was actually published by my "senior" at UofA. Here's the link to DDPG: <a href="https://arxiv.org/pdf/1509.02971.pdf" rel="noreferrer">https://arxiv.org/pdf/1509.02971.pdf</a>.</p></li>
<li><p>Actor-critic RL is not an algorithm; rather, it's a family of RL algorithms where the actor maps states to actions, while the critic "pre-processes" the feedback signal so the actor can learn it more efficiently. DDPG is an example of an actor-critic set-up. In DDPG, a DQN is used as a critic to pre-process feedback signals to the deterministic policy gradient (actor).</p></li>
<li>Q-learning and deep Q-learning are also a family of RL algorithms. Q-learning cannot handle large state spaces given limited computing power; deep Q-learning, however, certainly can. An example is the Deep Q-network.</li>
</ol>
<p>Back to the original question.</p>
<p>I can almost guarantee that you can solve your problem using DDPG. In fact, DDPG is still one of the only algorithms that can be used to control an agent in a continuous state, continuous action space.</p>
<p>The other method that can do so is called trust region policy optimization (TRPO). It was developed by the UC Berkeley team (along with OpenAI?). The fundamental structure of TRPO and DDPG is identical (both are actor-critic); however, the training is different. DDPG uses a target-network approach to guarantee convergence and stability, while TRPO puts a Kullback-Leibler divergence constraint on the update of the networks to ensure each update is not too large (i.e. the policy of the network at t is not too different from that at t - 1). TRPO is extremely difficult to code, so OpenAI published another method called Proximal Policy Optimization (PPO). This method is similar to TRPO but easier to implement.</p>
<p>Long story short, I'd recommend trying DDPG because if your task is simple as you say, DDPG will definitely work.</p> | 2019-01-07 16:34:16.943000+00:00 | 2019-04-18 23:37:41.553000+00:00 | 2019-04-18 23:37:41.553000+00:00 | null | 54,051,499 | <p><strong>Problem</strong></p>
<p>My goal is to apply Reinforcement Learning to predict the next state of an object under a known force in a 3D environment (the approach would be reduced to supervised learning, off-line learning).</p>
<p><strong>Details of my approach</strong></p>
<p>The current state is the vector representing the position of the object in the environment (3 dimensions), and the velocity of the object (3 dimensions). The starting position is randomly initialized in the environment, as well as the starting velocity.</p>
<p>The action is the vector representing the movement from state <em>t</em> to state <em>t+1</em>.</p>
<p>The reward is just the Euclidean distance between the predicted next state, and the real next state (I already have the target position).</p>
<p><strong>What have I done so far?</strong></p>
<p>I have been looking for many methods to do this. <em>Deep Deterministic Policy Gradients</em> works for a continuous action space, but in my case I also have a continuous state space. If you are interested in this approach, here's the original paper written at DeepMind:
<a href="http://proceedings.mlr.press/v32/silver14.pdf" rel="nofollow noreferrer">http://proceedings.mlr.press/v32/silver14.pdf</a></p>
<p>The <em>Actor-Critic</em> approach should work, but it is usually (or always) applied to discrete and low-dimensional state space.</p>
<p><em>Q-Learning</em> and <em>Deep-Q Learning</em> cannot handle high dimensional state space, so my configuration would not work even if discretizing the state space.</p>
<p><em>Inverse Reinforcement Learning</em> (an instance of Imitation learning, with <em>Behavioral Cloning</em> and <em>Direct Policy Learning</em>) approximates a reward function when finding the reward function is more complicated than finding the policy function. Interesting approach, but I haven't seen any implementation, and in my case the reward function is pretty straightforward.
Is there a methodology to deal with my configuration that I haven't explored?</p> | 2019-01-05 11:24:35.003000+00:00 | 2019-04-18 23:37:41.553000+00:00 | 2019-01-05 13:46:46.657000+00:00 | python|machine-learning|artificial-intelligence|reinforcement-learning | ['https://arxiv.org/pdf/1509.02971.pdf'] | 1 |
48,251,701 | <p>How to reduce your calculation to 1 microsecond? Easy.</p>
<p>Your grand total is : sum of the n square roots - sum of the n cubic roots</p>
<p>Math tips : </p>
<pre><code>double sumOfSquareRoots = 2D * Math.pow((n + 0.5D), 1.5D) / 3D - 0.22474487139;
</code></pre>
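<p>As a quick sanity check (my sketch, in Python for brevity), you can compare the closed form above against the plain loop — treat it as an approximation and verify the error is acceptable over your range of <code>n</code>:</p>

<pre class="lang-py prettyprint-override"><code>import math

def sqrt_sum_loop(n):
    return sum(math.sqrt(k) for k in range(1, n + 1))        # exact sum of sqrt(k), k = 1..n

def sqrt_sum_closed(n):
    return 2.0 * (n + 0.5) ** 1.5 / 3.0 - 0.22474487139      # constant taken from the answer above

for n in (10, 1000, 1_000_000):
    print(n, sqrt_sum_loop(n), sqrt_sum_closed(n))
</code></pre>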
<p>See <a href="https://arxiv.org/pdf/1204.0877.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1204.0877.pdf</a> for cubic roots :)</p> | 2018-01-14 16:31:29.727000+00:00 | 2018-01-14 16:31:29.727000+00:00 | null | null | 48,250,694 | <p>My function is <code>total += Math.sqrt(num) - Math.cbrt(num);</code> and I want to apply it to every number from 0 up to a maximum value I determine. So I wrote the code below, which divides the range across threads to calculate faster. How can I speed this calculation up? The code below (<em>8 threads</em>) takes <strong>20 seconds</strong> to finish, while the non-threaded version takes <strong>150 seconds</strong>. I believe that with <code>forkjoinpool</code> I can make it faster, or maybe <code>parallel streams</code>? How would I implement it with them?</p>
<pre><code>public class Main {
private static int targetNum = Integer.MAX_VALUE;
private static int threadCount = 8;
private static double total = 0;
public static void main(String[] args) {
// write your code here
DecimalFormat df2 = new DecimalFormat(".##");
long time = System.currentTimeMillis();
ExecutorService executor = Executors.newFixedThreadPool(threadCount);
try {
ArrayList<Future<Double>> futureList = new ArrayList<>();
for(int a = 0; a < threadCount; a++){
calculatorService ss = new calculatorService(a*(targetNum/threadCount) ,(a+1) *(targetNum/threadCount));
futureList.add(executor.submit(ss));
}
for(int a = 0; a < threadCount; a++){
total+= futureList.get(a).get();
}
System.out.println("Result= "+ df2.format(total) + "\nTime passed= " + ((System.currentTimeMillis() - time)/1000f));
executor.shutdown();
} catch (Exception e) {
e.printStackTrace();
}
}
}
class calculatorService implements Callable<Double>{
private int start,end;
    public calculatorService(int start, int end) {
this.start = start;
this.end = end;
}
@Override
public Double call(){
double total = 0;
for (int a = start; a < end; a++) {
total += Math.sqrt(a) - Math.cbrt(a);
}
return total;
}
}
</code></pre>
<p><strong>Edit 1</strong></p>
<p><code>futureList.get(a).get();</code> I had to do it that way because I don't know the thread (core) count, so I cannot write futureList.get(0).get() + futureList.get(1).get() + .... I know the loop will wait at futureList.get(0).get(), but the other threads will still be doing their work in the meantime. My thread count is not fixed and can change at any moment.</p>
2,717,946 | <p>The real answer here is another question: why do you think you need this? There may be a better way to accomplish what you're trying to do that doesn't depend on intricate details of platform floating-point. Having said that...</p>
<p>It's unfortunate that you can't change the Linux code, since it's really the Linux results that are deficient here. The SUN results are as good as they could possibly be: they're correctly rounded; each multiplication gives the unique (in this case) C double that's closest to the result. In contrast, the first Linux multiplication does <em>not</em> give a correctly rounded result.</p>
<p>Your Linux results come from a 32-bit system on x86 hardware, right? The results you show are consistent with, and likely caused by, the phenomenon of 'double rounding': the result of the first multiplication is first rounded to 64-bit precision (the precision used internally by the Intel x87 FPU), and then re-rounded to the usual 53-bit precision of a double. Most of the time (around 1999 times out of 2000 or so on average) this double round has the same effect as a single round to 53-bit precision would have had, but occasionally it can produce a different result, and that's what you're seeing here.</p>
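<p>If you want to see the phenomenon in isolation, here is a small decimal toy example (my illustration — decimal rather than the binary 64-bit/53-bit case from the question, but the same mechanism): rounding to an intermediate precision first can land on a tie and push the final result the other way.</p>

<pre class="lang-py prettyprint-override"><code>from decimal import Decimal, ROUND_HALF_EVEN

x = Decimal("1.0149999")

single = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)        # round once to 2 places

double = (x.quantize(Decimal("0.001"), rounding=ROUND_HALF_EVEN)      # round to 3 places: 1.015
           .quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))      # then to 2: the tie rounds to 1.02

print(single, double)   # 1.01 1.02
</code></pre>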
<p>As you say, there are ways to fix the Linux results to match the Solaris ones: one of these is to use appropriate compiler flags to force the use of SSE2 instructions for floating-point operations if possible. The recent 4.5 release of gcc also fixes the difference by means of a new <a href="http://gcc.gnu.org/onlinedocs/gcc-4.5.0/gcc/Optimize-Options.html#index-fexcess_002dprecision-809" rel="nofollow noreferrer"><code>-fexcess-precision</code></a> flag, though the fix may impact performance when not using SSE2.</p>
<p>[Edit: after several rereads of the gcc manuals, the gcc-patches mailing list thread at <a href="http://gcc.gnu.org/ml/gcc-patches/2008-11/msg00105.html" rel="nofollow noreferrer">http://gcc.gnu.org/ml/gcc-patches/2008-11/msg00105.html</a>, and the related <a href="http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323" rel="nofollow noreferrer">gcc bug report</a>, it's still not clear to me whether use of <code>-fexcess-precision=standard</code> does in fact eliminate double rounding on x87 systems; I think the answer depends on the value of FLT_EVAL_METHOD. I don't have a 32-bit Linux/x86 machine handy to test this on.]</p>
<p>But I don't know how you'd fix the Solaris results to match the Linux ones, and I'm not sure why you'd want to: you'd be making the Solaris results less accurate instead of making the Linux results more accurate.</p>
<p>[Edit: caf has a good suggestion here. On Solaris, try deliberately using long double for intermediate results, then forcing back to double. If done right, this should reproduce the double rounding effect that you're seeing in Linux.]</p>
<p>See David Monniaux's excellent paper <a href="http://arxiv.org/abs/cs/0701192" rel="nofollow noreferrer">The pitfalls of verifying floating-point computations</a> for a good explanation of double rounding. It's essential reading after the Goldberg article mentioned in an earlier answer.</p> | 2010-04-27 00:27:31.723000+00:00 | 2010-04-27 08:46:32.337000+00:00 | 2010-04-27 08:46:32.337000+00:00 | null | 2,717,371 | <p>I have a project where I have to perform some mathematics calculations with double variables.
The problem is that I get different results on SUN Solaris 9 and Linux.
There are many ways (explained here and on other forums) to make Linux behave like Sun, but not the other way around.
I cannot touch the Linux code, so it is only SUN I can change.
Is there any way to make SUN behave like Linux?</p>

<p>The code I run (compiled with gcc on both systems):</p>
<pre><code>int hash_func(char *long_id)
{
   double product, lnum = 0.0, gold;  /* lnum must start at 0 for the digit accumulation below */
while (*long_id)
lnum = lnum * 10.0 + (*long_id++ - '0');
printf("lnum => %20.20f\n", lnum);
lnum = lnum * 10.0E-8;
printf("lnum => %20.20f\n", lnum);
gold = 0.6125423371582974;
product = lnum * gold;
printf("product => %20.20f\n", product);
...
}
</code></pre>
<p>if the input is 339886769243483</p>
<p>the output in Linux:</p>
<pre><code>lnum => 339886769243**483**.00000000000000000000
lnum => 33988676.9243**4829473495483398**
product => 20819503.600158**59827399253845**
</code></pre>
<p>When on SUN:</p>
<pre><code>lnum => 339886769243483.00000000000000000000
lnum => 33988676.92434830218553543091
product = 20819503.600158**60199928283691**
</code></pre>
<p><em>Note:</em> The result is not always different; most of the time it is the same. Just 10 of the 60000 15-digit numbers have this problem.</p>
<p>Please help!!!</p> | 2010-04-26 22:27:47.583000+00:00 | 2010-04-27 08:46:32.337000+00:00 | 2010-04-26 22:33:14.443000+00:00 | linux|floating-point|solaris | ['http://gcc.gnu.org/onlinedocs/gcc-4.5.0/gcc/Optimize-Options.html#index-fexcess_002dprecision-809', 'http://gcc.gnu.org/ml/gcc-patches/2008-11/msg00105.html', 'http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323', 'http://arxiv.org/abs/cs/0701192'] | 4 |
61,311,820 | <p>A problem you may be encountering is that you don't have enough training data for the model to be able to fit well. In your example, <strong>you only have 21 training instances, each with only 1 feature</strong>. Broadly speaking, with neural network models you need on the order of 10K or more training instances to produce a decent model.</p>
<p>Consider the following code that generates a noisy sine wave and tries to train a densely-connected feed-forward neural network to fit the data. My model has two linear layers, each with 50 hidden units and a ReLU activation function. The experiments are parameterized with the variable <code>num_points</code> which I will increase.</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(7)
def generate_data(num_points=100):
X = np.linspace(0.0 , 2.0 * np.pi, num_points).reshape(-1, 1)
noise = np.random.normal(0, 1, num_points).reshape(-1, 1)
y = 3 * np.sin(X) + noise
return X, y
def run_experiment(X_train, y_train, X_test, batch_size=64):
num_points = X_train.shape[0]
model = keras.Sequential()
model.add(layers.Dense(50, input_shape=(1, ), activation='relu'))
model.add(layers.Dense(50, activation='relu'))
model.add(layers.Dense(1, activation='linear'))
model.compile(loss = "mse", optimizer = "adam", metrics=["mse"] )
history = model.fit(X_train, y_train, epochs=10,
batch_size=batch_size, verbose=0)
yhat = model.predict(X_test, batch_size=batch_size)
plt.figure(figsize=(5, 5))
plt.plot(X_train, y_train, "ro", markersize=2, label='True')
plt.plot(X_train, yhat, "bo", markersize=1, label='Predicted')
plt.ylim(-5, 5)
plt.title('N=%d points' % (num_points))
plt.legend()
plt.grid()
plt.show()
</code></pre>
<p>Here is how I invoke the code:</p>
<pre class="lang-py prettyprint-override"><code>num_points = 100
X, y = generate_data(num_points)
run_experiment(X, y, X)
</code></pre>
<p>Now, if I run the experiment with <code>num_points = 100</code>, the model predictions (in blue) do a terrible job at fitting the true noisy sine wave (in red).</p>
<p><a href="https://i.stack.imgur.com/pIlnM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pIlnM.png" alt="enter image description here"></a></p>
<p>Now, here is <code>num_points = 1000</code>:</p>
<p><a href="https://i.stack.imgur.com/GSUS7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GSUS7.png" alt="enter image description here"></a></p>
<p>Here is <code>num_points = 10000</code>:</p>
<p><a href="https://i.stack.imgur.com/SBgCK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SBgCK.png" alt="enter image description here"></a></p>
<p>And here is <code>num_points = 100000</code>:</p>
<p><a href="https://i.stack.imgur.com/sQDSG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sQDSG.png" alt="enter image description here"></a></p>
<p>As you can see, <em>for my chosen NN architecture</em>, adding more training instances allows the neural network to better (over)fit the data.</p>
<p>If you do have a lot of training instances, then if you want to purposefully overfit your data, you can either increase the neural network capacity or reduce regularization. Specifically, you can control the following knobs:</p>
<ul>
<li>increase the number of layers</li>
<li>increase the number of hidden units</li>
<li>increase the number of features per data instance</li>
<li>reduce regularization (e.g. by removing dropout layers)</li>
<li>use a more complex neural network architecture (e.g. transformer blocks instead of RNN)</li>
</ul>
<p>You may be wondering if neural networks can fit arbitrary data rather than just a noisy sine wave as in my example. Previous research says that, yes, a big enough neural network can fit any data. See:</p>
<ul>
<li>Universal approximation theorem. <a href="https://en.wikipedia.org/wiki/Universal_approximation_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Universal_approximation_theorem</a></li>
<li>Zhang 2016, "Understanding deep learning requires rethinking generalization". <a href="https://arxiv.org/abs/1611.03530" rel="nofollow noreferrer">https://arxiv.org/abs/1611.03530</a></li>
</ul> | 2020-04-19 21:24:17.520000+00:00 | 2020-04-19 23:41:35.230000+00:00 | 2020-04-19 23:41:35.230000+00:00 | null | 61,252,785 | <p>I'm trying to build a simple regression model using keras and tensorflow. In my problem I have data in the form <code>(x, y)</code>, where <code>x</code> and <code>y</code> are simply numbers. I'd like to build a keras model in order to predict <code>y</code> using <code>x</code> as an input.</p>
<p>Since I think images better explains thing, these are my data:</p>
<p><a href="https://i.stack.imgur.com/npCjz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/npCjz.jpg" alt="enter image description here"></a></p>
<p>We may discuss if they are good or not, but in my problem I cannot really cheat them.</p>
<p>My keras model is the following (data are split 30% test <code>(X_test, y_test)</code> and 70% training <code>(X_train, y_train)</code>):</p>
<pre><code>model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(32, input_shape=(1,), activation="relu", name="first_layer"))
model.add(tf.keras.layers.Dense(16, activation="relu", name="second_layer"))
model.add(tf.keras.layers.Dense(1, name="output_layer"))
model.compile(loss = "mean_squared_error", optimizer = "adam", metrics=["mse"] )
history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=0, shuffle=False)
eval_result = model.evaluate(X_test, y_test)
print("\n\nTest loss:", eval_result, "\n")
predict_Y = model.predict(X)
</code></pre>
<p>note: <code>X</code> contains both <code>X_test</code> and <code>X_train</code>.</p>
<p>Plotting the prediction I get (blue squares are the prediction <code>predict_Y</code>)</p>
<p><a href="https://i.stack.imgur.com/KKiRd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KKiRd.png" alt="enter image description here"></a></p>
<p>I'm playing a lot with layers, activation functions and other parameters. My goal is to find the best parameters to train the model, but the actual question here is slightly different: in fact, I am having a hard time forcing the model to overfit the data (as you can see from the above results).</p>
<p>Does anyone have some sort of idea about how to reproduce overfitting?</p>
<p>This is the outcome I would like to get:
<a href="https://i.stack.imgur.com/4IvqX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4IvqX.png" alt="enter image description here"></a></p>
<p>(red dots are under blue squares!)</p>
<p>EDIT:</p>
<p>Here I provide you the data used in the example above: you can copy paste directly to a python interpreter:</p>
<pre><code>X_train = [0.704619794270697, 0.6779457393024553, 0.8207082120250023, 0.8588819357831449, 0.8692320257603844, 0.6878750931810429, 0.9556331888763945, 0.77677964510883, 0.7211381534179618, 0.6438319113259414, 0.6478339581502052, 0.9710222750072649, 0.8952188423349681, 0.6303124926673513, 0.9640316662124185, 0.869691568491902, 0.8320164648420931, 0.8236399177660375, 0.8877334038470911, 0.8084042532069621, 0.8045680821762038]
y_train = [0.7766424210611557, 0.8210846773655833, 0.9996114311913593, 0.8041331063189883, 0.9980525368790883, 0.8164056182686034, 0.8925487603333683, 0.7758207470960685, 0.37345286573743475, 0.9325789202459493, 0.6060269037514895, 0.9319771743389491, 0.9990691225991941, 0.9320002808310418, 0.9992560731072977, 0.9980241561997089, 0.8882905258641204, 0.4678339275898943, 0.9312152374846061, 0.9542371205095945, 0.8885893668675711]
X_test = [0.9749191829308574, 0.8735366740730178, 0.8882783211709133, 0.8022891400991644, 0.8650601322313454, 0.8697902997857514, 1.0, 0.8165876695985228, 0.8923841531760973]
y_test = [0.975653685270635, 0.9096752789481569, 0.6653736469114154, 0.46367666660348744, 0.9991817903431941, 1.0, 0.9111205717076893, 0.5264993912088891, 0.9989199241685126]
X = [0.704619794270697, 0.77677964510883, 0.7211381534179618, 0.6478339581502052, 0.6779457393024553, 0.8588819357831449, 0.8045680821762038, 0.8320164648420931, 0.8650601322313454, 0.8697902997857514, 0.8236399177660375, 0.6878750931810429, 0.8923841531760973, 0.8692320257603844, 0.8877334038470911, 0.8735366740730178, 0.8207082120250023, 0.8022891400991644, 0.6303124926673513, 0.8084042532069621, 0.869691568491902, 0.9710222750072649, 0.9556331888763945, 0.8882783211709133, 0.8165876695985228, 0.6438319113259414, 0.8952188423349681, 0.9749191829308574, 1.0, 0.9640316662124185]
Y = [0.7766424210611557, 0.7758207470960685, 0.37345286573743475, 0.6060269037514895, 0.8210846773655833, 0.8041331063189883, 0.8885893668675711, 0.8882905258641204, 0.9991817903431941, 1.0, 0.4678339275898943, 0.8164056182686034, 0.9989199241685126, 0.9980525368790883, 0.9312152374846061, 0.9096752789481569, 0.9996114311913593, 0.46367666660348744, 0.9320002808310418, 0.9542371205095945, 0.9980241561997089, 0.9319771743389491, 0.8925487603333683, 0.6653736469114154, 0.5264993912088891, 0.9325789202459493, 0.9990691225991941, 0.975653685270635, 0.9111205717076893, 0.9992560731072977]
</code></pre>
<p>Here <code>X</code> contains the list of x values and <code>Y</code> the corresponding y values. (X_test, y_test) and (X_train, y_train) are two (non-overlapping) subsets of (X, Y).</p>
<p>To predict and show the model results I simply use matplotlib (imported as plt):</p>
<pre><code>predict_Y = model.predict(X)
plt.plot(X, Y, "ro", X, predict_Y, "bs")
plt.show()
</code></pre> | 2020-04-16 14:30:19.563000+00:00 | 2020-04-20 01:50:51.677000+00:00 | 2020-04-17 07:11:46.033000+00:00 | machine-learning|keras|neural-network|tf.keras | ['https://i.stack.imgur.com/pIlnM.png', 'https://i.stack.imgur.com/GSUS7.png', 'https://i.stack.imgur.com/SBgCK.png', 'https://i.stack.imgur.com/sQDSG.png', 'https://en.wikipedia.org/wiki/Universal_approximation_theorem', 'https://arxiv.org/abs/1611.03530'] | 6 |
36,144,412 | <p><code>forecast::na.interp</code> is a good approach. From the <a href="https://cran.r-project.org/web/packages/forecast/forecast.pdf" rel="nofollow">documentation</a></p>
<blockquote>
<p>Uses linear interpolation for non-seasonal series and a periodic stl decomposition with seasonal series to replace missing values.</p>
</blockquote>
<pre><code>library(forecast)
fit <- na.interp(myzoo)
fit[10] # 32.5, vs. 31.0 actual and 32.0 from Rob Hyndman's answer
</code></pre>
<p><a href="http://arxiv.org/pdf/1510.03924.pdf" rel="nofollow">This paper</a> evaluates several interpolation methods against real time series, and finds that <code>na.interp</code> is both accurate and efficient:</p>
<blockquote>
<p>From the R implementations tested in this paper, na.interp from the forecast package and na.StructTS from the zoo package showed the best overall results.</p>
<p>The na.interp function is also not that much slower than
na.approx [the fastest method], so the loess decomposition seems not to be very demanding in terms of computing time.</p>
</blockquote>
<p>Also worth noting that Rob Hyndman wrote the <code>forecast</code> package, and included <code>na.interp</code> after providing his answer to this question. It's likely that <code>na.interp</code> is an improvement upon this approach, even though it performed worse in this instance (probably due to specifying the period in <code>StructTS</code>, where <code>na.interp</code> figures it out).</p> | 2016-03-22 00:57:22.833000+00:00 | 2016-03-24 05:42:44.410000+00:00 | 2016-03-24 05:42:44.410000+00:00 | null | 4,964,255 | <p>I have a time series for which I want to intelligently interpolate the missing values. The value at a particular time is influenced by a multi-day trend, as well as its position in the daily cycle. </p>
<p>Here is an example in which the tenth observation is missing from <code>myzoo</code></p>
<pre><code>start <- as.POSIXct("2010-01-01")
freq <- as.difftime(6, units = "hours")
dayvals <- (1:4)*10
timevals <- c(3, 1, 2, 4)
index <- seq(from = start, by = freq, length.out = 16)
obs <- (rep(dayvals, each = 4) + rep(timevals, times = 4))
myzoo <- zoo(obs, index)
myzoo[10] <- NA
</code></pre>
<p>If I had to implement this, I'd use some kind of weighted mean of close times on nearby days, or add a value for the day to a function line fitted to the larger trend, but I hope there already exist some package or functions that apply to this situation?</p>
<p>EDIT: Modified the code slightly to clarify my problem. There are <code>na.*</code> methods that interpolate from nearest neighbors, but in this case they do not recognize that the missing value is at the time that is the lowest value of the day. Maybe the solution is to reshape the data to wide format and then interpolate, but I wouldn't like to completely disregard the contiguous values from the same day. It is worth noting that <code>diff(myzoo, lag = 4)</code> returns a vector of 10's. The solution may lie with some combination of <code>reshape</code>, <code>na.spline</code>, and <code>diff.inv</code>, but I just can't figure it out.</p>
<p>Here are three approaches that don't work:
<img src="https://i.stack.imgur.com/xm05G.jpg" alt="enter image description here"></p>
<p>EDIT2. Image produced using the following code.</p>
<pre><code>myzoo <- zoo(obs, index)
myzoo[10] <- NA # knock out the missing point
plot(myzoo, type="o", pch=16) # plot solid line
points(na.approx(myzoo)[10], col = "red")
points(na.locf(myzoo)[10], col = "blue")
points(na.spline(myzoo)[10], col = "green")
myzoo[10] <- 31 # replace the missing point
lines(myzoo, type = "o", lty=3, pch=16) # dashed line over the gap
legend(x = "topleft",
legend = c("na.spline", "na.locf", "na.approx"),
col=c("green","blue","red"), pch = 1)
</code></pre> | 2011-02-11 00:12:27.270000+00:00 | 2019-11-18 01:17:33.143000+00:00 | 2011-02-11 05:20:21.483000+00:00 | r|interpolation|time-series | ['https://cran.r-project.org/web/packages/forecast/forecast.pdf', 'http://arxiv.org/pdf/1510.03924.pdf'] | 2 |