a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
64,836,669 | <p>The confusion arises from the naming convention; the difference is whether the method is named after the first name or the surname. Basically, it is the initialization method proposed in <a href="https://arxiv.org/abs/1502.01852" rel="noreferrer">this paper</a> co-authored by Kaiming He. The framework implementations differ in the name they use, however:</p>
<p>Tensorflow via the Keras backend uses the name <code>He</code> initialization, whereas Torch uses <code>Kaiming</code> initialization as the method name.</p>
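<p>A minimal sketch, assuming TensorFlow 2.x Keras, of requesting this scheme under its Keras ("He") name:</p>
<pre><code>import tensorflow as tf

# "he_uniform" in Keras is the same scheme PyTorch exposes as kaiming_uniform_;
# only the naming convention differs between the frameworks.
layer = tf.keras.layers.Dense(
    64,
    activation="relu",
    kernel_initializer="he_uniform",
)
</code></pre>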
<p>In fact, the same also applies to Glorot/Xavier initialization. See <a href="http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf" rel="noreferrer">the paper</a> co-authored by Xavier Glorot.</p>
<p>Here, Tensorflow via Keras uses the surname <code>Glorot</code> whereas Torch uses the first name <code>Xavier</code>.</p> | 2020-11-14 17:30:47.707000+00:00 | 2020-12-23 09:24:49.643000+00:00 | 2020-12-23 09:24:49.643000+00:00 | null | 64,835,050 | <p>My model layers are using the <strong>relu activation function</strong>. I am <strong>using he_uniform for the kernel initializer</strong>, but I saw <em><strong>kaiming initialization giving better results than he_uniform</strong></em>. I'm using Keras, and Keras has no kaiming initializer; how can I implement it?</p> | 2020-11-14 14:41:42.900000+00:00 | 2020-12-23 09:24:49.643000+00:00 | null | python|tensorflow|machine-learning|keras|deep-learning | ['https://arxiv.org/abs/1502.01852', 'http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf'] | 2
50,796,769 | <p>Compiling a list of potential resources on it:</p>
<ol>
<li>"Word Embeddings and Their Use In Sentence Classification Tasks" by Amit Mandelbaum and Adi Shalev. <a href="https://arxiv.org/abs/1610.08229" rel="nofollow noreferrer">https://arxiv.org/abs/1610.08229</a></li>
</ol> | 2018-06-11 11:34:08.430000+00:00 | 2018-06-11 11:34:08.430000+00:00 | null | null | 49,126,323 | <p>W.r.t. text classification, a common approach now is to combine (often sum or average) word embeddings and use the resulting vector as features.</p>
<p>Are there any reference documents that compare this approach for text classification against traditional feature engineering approaches? [Comparison based on accuracy] [could be on popular datasets like IMDB, sentiment-140, etc.]</p> | 2018-03-06 08:27:34.620000+00:00 | 2018-06-11 11:34:08.430000+00:00 | 2018-03-07 08:45:16.997000+00:00 | nlp|text-classification|word-embedding | ['https://arxiv.org/abs/1610.08229'] | 1
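<p>A minimal sketch of the "sum or average word embeddings" feature construction described above, assuming pre-trained word vectors are available as a plain Python dict (the function name is only illustrative):</p>
<pre><code>import numpy as np

def average_embedding(tokens, embeddings, dim=100):
    """Average the vectors of the tokens present in the embedding table."""
    vectors = [embeddings[t] for t in tokens if t in embeddings]
    if not vectors:
        return np.zeros(dim)
    return np.mean(vectors, axis=0)
</code></pre>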
47,491,046 | <p>Here's a straightforward way you can do it and <strong>preserve <a href="/questions/tagged/logical-purity" class="post-tag" title="show questions tagged 'logical-purity'" rel="tag">logical-purity</a>!</strong></p>
<pre><code>not_all_equal([E|Es]) :-
some_dif(Es, E).
some_dif([X|Xs], E) :-
( dif(X, E)
; X = E, some_dif(Xs, E)
).
</code></pre>
<p>Here are some sample queries using SWI-Prolog 7.7.2. </p>
<p>First, the most general query:</p>
<pre><code>?- not_all_equal(Es).
dif(_A,_B), Es = [_A,_B|_C]
; dif(_A,_B), Es = [_A,_A,_B|_C]
; dif(_A,_B), Es = [_A,_A,_A,_B|_C]
; dif(_A,_B), Es = [_A,_A,_A,_A,_B|_C]
; dif(_A,_B), Es = [_A,_A,_A,_A,_A,_B|_C]
...
</code></pre>
<p>Next, the query the OP gave in the question:</p>
<pre><code>?- not_all_equal([A,B,C]), A=a, B=b.
A = a, B = b
; false. % <- the toplevel hints at non-determinism
</code></pre>
<p>Last, let's put the subgoal <code>A=a, B=b</code> upfront:</p>
<pre><code>?- A=a, B=b, not_all_equal([A,B,C]).
A = a, B = b
; false. % <- (non-deterministic, like above)
</code></pre>
<p>Good, but ideally the last query should have succeeded deterministically!</p>
<hr>
<h3>Enter <code><a href="http://www.complang.tuwien.ac.at/ulrich/Prolog-inedit/swi/" rel="nofollow noreferrer">library</a>(<a href="http://www.complang.tuwien.ac.at/ulrich/Prolog-inedit/swi/reif.pl" rel="nofollow noreferrer">reif</a>)</code></h3>
<p><a href="https://sicstus.sics.se/sicstus/docs/latest4/html/sicstus.html/Indexing.html" rel="nofollow noreferrer">First argument indexing</a>
takes the principal functor of the first predicate argument (plus a few simple built-in tests) into account to improve the determinism of sufficiently instantiated goals.</p>
<p>This, by itself, does <em>not</em> cover <code>dif/2</code> satisfactorily.</p>
<p>What can we do? Work with
<a href="https://stackoverflow.com/q/13664870/4609915">reified term equality/inequality</a>—effectively <a href="https://arxiv.org/abs/1607.01590" rel="nofollow noreferrer">indexing <code>dif/2</code></a>!</p>
<pre><code>some_dif([X|Xs], E) :- % some_dif([X|Xs], E) :-
if_(dif(X,E), true, % ( dif(X,E), true
(X = E, some_dif(Xs,E)) % ; X = E, some_dif(Xs,E)
). % ).
</code></pre>
<p>Notice the similarities of the new and the old implementation!</p>
<p>Above, the goal <code>X = E</code> is redundant on the left-hand side. Let's remove it! </p>
<pre><code>some_dif([X|Xs], E) :-
if_(dif(X,E), true, some_dif(Xs,E)).
</code></pre>
<p><em>Sweet!</em> But, alas, we're not quite done (yet)!</p>
<pre>
?- not_all_equal(Xs).
<b>DOES NOT TERMINATE</b>
</pre>
<p><em>What's going on?</em></p>
<p>It turns out that the implementation of <code>dif/3</code> prevents us from getting a nice answer sequence for the most general query. To do so—without using additional goals forcing fair enumeration—we need a tweaked implementation of <code>dif/3</code>, which I call <code>diffirst/3</code>:</p>
<pre><code>diffirst(X, Y, T) :-
( X == Y -> T = false
; X \= Y -> T = true
; T = true, dif(X, Y)
; T = false, X = Y
).
</code></pre>
<p>Let's use <code>diffirst/3</code> instead of <code>dif/3</code> in the definition of predicate <code>some_dif/2</code>:</p>
<pre><code>some_dif([X|Xs], E) :-
if_(diffirst(X,E), true, some_dif(Xs,E)).
</code></pre>
<p>So, at long last, here are above queries with the new <code>some_dif/2</code>:</p>
<pre><code>?- not_all_equal(Es). % query #1
dif(_A,_B), Es = [_A,_B|_C]
; dif(_A,_B), Es = [_A,_A,_B|_C]
; dif(_A,_B), Es = [_A,_A,_A,_B|_C]
...
?- not_all_equal([A,B,C]), A=a, B=b. % query #2
A = a, B = b
; false.
?- A=a, B=b, not_all_equal([A,B,C]). % query #3
A = a, B = b.
</code></pre>
<p>Query #1 does not terminate, but has the same nice compact answer sequence. <strong>Good!</strong></p>
<p>Query #2 is still non-deterministic. Okay. To me this is as good as it gets.</p>
<p>Query #3 has become deterministic: <strong>Better now!</strong></p>
<hr>
<p><strong>The bottom line:</strong></p>
<ol>
<li>Use <code>library(reif)</code> to tame excess non-determinism while preserving logical purity!</li>
<li><code>diffirst/3</code> should find its way into <code>library(reif)</code> :)</li>
</ol>
<hr>
<hr>
<hr>
<p><strong>EDIT:</strong> a more general version using a <a href="/questions/tagged/meta-predicate" class="post-tag" title="show questions tagged 'meta-predicate'" rel="tag">meta-predicate</a> (suggested by a comment; thx!)</p>
<p>Let's generalize <code>some_dif/2</code> like so:</p>
<pre><code>:- meta_predicate some(2,?).
some(P_2, [X|Xs]) :-
if_(call(P_2,X), true, some(P_2,Xs)).
</code></pre>
<p><code>some/2</code> can be used with reified predicates other than <code>diffirst/3</code>.</p>
<p>Here is an update to <code>not_all_equal/1</code>, which now uses <code>some/2</code> instead of <code>some_dif/2</code>:</p>
<pre><code>not_all_equal([X|Xs]) :-
some(diffirst(X), Xs).
</code></pre>
<p>Above sample queries still give the same answers, so I won't show these here.</p> | 2017-11-25 22:18:42.880000+00:00 | 2017-11-26 20:55:03.150000+00:00 | 2017-11-26 20:55:03.150000+00:00 | null | 47,473,624 | <p>How would one implement a <code>not_all_equal/1</code> predicate, which succeeds if the given list contains at least 2 different elements and fails otherwise?</p>
<p>Here is my attempt (a not very pure one):</p>
<pre><code>not_all_equal(L) :-
( member(H1, L), member(H2, L), H1 \= H2 -> true
; list_to_set(L, S),
not_all_equal_(S)
).
not_all_equal_([H|T]) :-
( member(H1, T), dif(H, H1)
; not_all_equal_(T)
).
</code></pre>
<p>This however does not always have the best behaviour:</p>
<pre><code>?- not_all_equal([A,B,C]), A = a, B = b.
A = a,
B = b ;
A = a,
B = b,
dif(a, C) ;
A = a,
B = b,
dif(b, C) ;
false.
</code></pre>
<p>In this example, only the first answer should come out, the two other ones are superfluous.</p> | 2017-11-24 12:45:42.873000+00:00 | 2022-08-11 16:48:01.110000+00:00 | 2017-11-25 23:24:32.853000+00:00 | list|prolog|predicate|prolog-dif|logical-purity | ['/questions/tagged/logical-purity', 'http://www.complang.tuwien.ac.at/ulrich/Prolog-inedit/swi/', 'http://www.complang.tuwien.ac.at/ulrich/Prolog-inedit/swi/reif.pl', 'https://sicstus.sics.se/sicstus/docs/latest4/html/sicstus.html/Indexing.html', 'https://stackoverflow.com/q/13664870/4609915', 'https://arxiv.org/abs/1607.01590', '/questions/tagged/meta-predicate'] | 7 |
42,675,620 | <p>We have an entire research paper on this topic: <a href="https://arxiv.org/abs/1611.07612" rel="nofollow noreferrer">Faster Population Counts using AVX2 Instructions</a>. Despite the title, it covers SSE as well. For a relevant software library, see <a href="https://github.com/CountOnes/hamming_weight" rel="nofollow noreferrer">hamming_weight</a>. It includes all sorts of fast functions to do this type of work.</p>
<p>The short answer: you can use a Muła popcount function like so:</p>
<pre><code> __m128i popcount(__m128i v) {
const __m128i lookup = _mm_setr_epi8(
/* 0 */ 0, /* 1 */ 1, /* 2 */ 1, /* 3 */ 2,
/* 4 */ 1, /* 5 */ 2, /* 6 */ 2, /* 7 */ 3,
/* 8 */ 1, /* 9 */ 2, /* a */ 2, /* b */ 3,
/* c */ 2, /* d */ 3, /* e */ 3, /* f */ 4
);
__m128i low_mask = _mm_set1_epi8(0x0f);
__m128i lo = _mm_and_si128(v, low_mask);
__m128i hi = _mm_and_si128(_mm_srli_epi16(v, 4), low_mask);
__m128i popcnt1 = _mm_shuffle_epi8(lookup, lo);
__m128i popcnt2 = _mm_shuffle_epi8(lookup, hi);
return _mm_sad_epu8(_mm_add_epi8(popcnt1, popcnt2), _mm_setzero_si128());
}
</code></pre>
<p>The result from the <code>popcount</code> call is a 128-bit counter made of two 64-bit counters that you must add up. Summing up the two 64-bit counters can be done at the very end to save computational time.</p> | 2017-03-08 15:39:02.493000+00:00 | 2017-03-08 17:48:15.617000+00:00 | 2017-03-08 17:48:15.617000+00:00 | null | 42,463,926 | <p>I am trying to accumulate the <code>POPCOUNT</code>s for the <code>uint64_t</code> integers in an array using SSE instructions.<br>
This is my code:</p>
<pre><code>#include <emmintrin.h>
#include <nmmintrin.h>
#include <chrono>
int main()
{
uint64_t data[4] = {1,1,1,1};
uint64_t data2[4] = {1,0,1,0};
__m128i* ptr = (__m128i*) data;
__m128i* ptr2 = (__m128i*) data2;
int total = 0;
for (int i = 0; i < 2; ++i, ++ptr, ++ptr2)
total += popcount(_mm_and_si128(*ptr, *ptr2)); // This doesn't work
}
</code></pre>
<p>I need the equivalent of the <code>POPCOUNT</code> function which operates on the output of <code>_mm_and_si128</code>, so I can accumulate all the <code>POPCOUNT</code>s into the <code>total</code> variable.</p> | 2017-02-26 02:09:30.283000+00:00 | 2017-03-08 17:48:15.617000+00:00 | 2017-02-26 02:36:49.233000+00:00 | c++|sse|simd | ['https://arxiv.org/abs/1611.07612', 'https://github.com/CountOnes/hamming_weight'] | 2 |
52,881,658 | <p>Thanks for the Question.</p>
<p>There's a variety of information about timeouts in <a href="https://tendermint.com/docs/tendermint-core/running-in-production.html" rel="noreferrer">https://tendermint.com/docs/tendermint-core/running-in-production.html</a></p>
<p>You can also find more detailed technical explanation in the spec: <a href="https://arxiv.org/abs/1807.04938" rel="noreferrer">https://arxiv.org/abs/1807.04938</a></p>
<p>Note that in a successful round, the only timeout that we always wait out in full, no matter what, is <code>timeout_commit</code>.</p>
<p>Here's a brief summary of the timeouts:</p>
<ul>
<li>timeout_propose = how long we wait for a proposal block before prevoting nil</li>
<li>timeout_propose_delta = how much timeout_propose increases with each round</li>
<li>timeout_prevote = how long we wait after receiving +2/3 prevotes for "anything" (ie. not a single block or nil)</li>
<li>timeout_prevote_delta = how much the timeout_prevote increases with each round</li>
<li>timeout_precommit = how long we wait after receiving +2/3 precommits for "anything" (ie. not a single block or nil)</li>
<li>timeout_precommit_delta = how much the timeout_precommit increases with each round</li>
<li>timeout_commit = how long we wait after committing a block, before starting on the new height (this gives us a chance to receive some more precommits, even though we already have +2/3)</li>
</ul> | 2018-10-18 20:01:10.663000+00:00 | 2018-10-18 20:01:10.663000+00:00 | null | null | 52,790,981 | <p>Tendermint seems to lack a description of block creation time...<br>
They create a default config file as follows:</p>
<pre><code>timeout_propose = 3000
timeout_propose_delta = 500
timeout_prevote = 1000
timeout_prevote_delta = 500
timeout_precommit = 1000
timeout_precommit_delta = 500
timeout_commit = 5000
</code></pre>
<p>I read documents and code. </p>
<p>So my guess is: if Tendermint succeeds in creating a block in one round,<br>
timeout_propose + timeout_prevote + timeout_precommit = 5s and wait timeout_commit for 5s...<br>
so block commit occurs in 5s~10s, thus the next block consensus starts after 10s. </p>
<p>And if Tendermint succeeds in creating a block in two rounds,<br>
(timeout_propose + timeout_prevote + timeout_precommit) + (timeout_propose + timeout_propose_delta + timeout_prevote + timeout_prevote_delta + timeout_precommit + timeout_precommit_delta) = 5s + 6.5s = 11.5s and wait timeout_commit for 5s...<br>
so block commit occurs in 11.5s~16.5s, thus the next block consensus starts after 16.5s.
I guess that Tendermint adds the delta timeout for each round.</p>
<p>Is my guess right? If not, what exactly do the timeouts in the configuration file mean?</p> | 2018-10-13 08:18:35.447000+00:00 | 2018-10-18 20:01:10.663000+00:00 | 2018-10-13 08:42:41.537000+00:00 | tendermint | ['https://tendermint.com/docs/tendermint-core/running-in-production.html', 'https://arxiv.org/abs/1807.04938'] | 2 |
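<p>A small sketch, under the "delta is added per round" reading of the answer above (the helper name is only illustrative), of how the propose/prevote/precommit timeouts grow with the round number:</p>
<pre><code>def timeout_for_round(base_ms, delta_ms, round_number):
    """Per-round timeout: the base value plus one delta per round already attempted."""
    return base_ms + round_number * delta_ms

# timeout_propose with the default config, rounds 0 and 1
print(timeout_for_round(3000, 500, 0))  # 3000 ms
print(timeout_for_round(3000, 500, 1))  # 3500 ms
</code></pre>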
42,327,887 | <p>It's called Saliency Detection.<br>
<a href="https://arxiv.org/pdf/1609.02132.pdf" rel="nofollow noreferrer">Ubernet</a> supports saliency (among other features).</p> | 2017-02-19 13:37:47.190000+00:00 | 2017-02-19 13:37:47.190000+00:00 | null | null | 42,327,122 | <p>Actually I am searching for today's state of the art neural network model for detecting "an interesting point" in an image but I lacks the proper keyword for it.</p>
<p>For example, if an image is a portrait, that point might be the person's face. If it is an image of a single flower in a vase, that point would be the flower's petals. It is the point that would catch the viewer's eye first. I want to know: are there any convolutional neural network models that take an input image and return a point like this?</p>
<p>For detecting one answer from an image it would be "classification", for detecting pixels area it would be "semantic segmentation". But for one coordinate from an image, I am not sure.</p>
<p>I thought by now someone would have already invented or even trained a network for tasks like this. Thank you.</p> | 2017-02-19 12:18:25.117000+00:00 | 2017-02-19 13:37:47.190000+00:00 | 2017-02-19 12:32:44.373000+00:00 | image-processing|neural-network | ['https://arxiv.org/pdf/1609.02132.pdf'] | 1 |
61,916,421 | <p>For building your own <code>Faster RCNN Models</code>, you can follow the instructions mentioned in the <a href="https://github.com/tensorflow/models/blob/1af55e018eebce03fb61bba9959a04672536107d/research/object_detection/g3doc/defining_your_own_model.md" rel="nofollow noreferrer">Official Tensorflow Github Repository</a>.</p>
<p>The advantage of following these instructions is that if you face any problem, you can file an issue in <a href="https://github.com/tensorflow/models/issues" rel="nofollow noreferrer">this Repo</a> and you will be assisted by Google Engineers.</p>
<p>Specifying the steps mentioned in the <a href="https://github.com/tensorflow/models/blob/1af55e018eebce03fb61bba9959a04672536107d/research/object_detection/g3doc/instance_segmentation.md" rel="nofollow noreferrer">Github Repo</a> for the benefit of the community (just in case that link breaks).</p>
<p><strong>Defining a new Faster R-CNN or SSD Feature Extractor:</strong></p>
<p>In most cases, you probably will not implement a <code>DetectionModel</code> from scratch --- instead you might create a new feature extractor to be used by one of the SSD or Faster R-CNN meta-architectures. (We think of meta-architectures as classes that define entire families of models using the <code>DetectionModel</code> abstraction).</p>
<p>Note: For the following discussion to make sense, we recommend first becoming familiar with the <a href="https://arxiv.org/abs/1506.01497" rel="nofollow noreferrer">Faster R-CNN</a> paper.</p>
<p>Let’s now imagine that you have invented a brand new network architecture (say, “InceptionV100”) for classification and want to see how InceptionV100 would behave as a feature extractor for detection (say, with Faster R-CNN). A similar procedure would hold for SSD models, but we’ll discuss Faster R-CNN.</p>
<p>To use InceptionV100, we will have to define a new <code>FasterRCNNFeatureExtractor</code> and pass it to our FasterRCNNMetaArch constructor as input. See <code>object_detection/meta_architectures/faster_rcnn_meta_arch.py</code> for definitions of <code>FasterRCNNFeatureExtractor</code> and <code>FasterRCNNMetaArch</code>, respectively. A <code>FasterRCNNFeatureExtractor</code> must define a few functions (a rough skeleton is sketched after this list):</p>
<ul>
<li><code>preprocess</code>: Run any preprocessing of input values that is necessary prior to running the detector on an input image.</li>
<li><code>_extract_proposal_features</code>: Extract first stage Region Proposal Network (RPN) features.</li>
<li><code>_extract_box_classifier_features</code>: Extract second stage Box Classifier features.</li>
<li><code>restore_from_classification_checkpoint_fn</code>: Load a checkpoint into the Tensorflow graph.</li>
</ul>
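<p>A rough, simplified skeleton of such a feature extractor (the argument names here are assumptions; check the linked <code>faster_rcnn_meta_arch.py</code> for the authoritative signatures):</p>
<pre><code>from object_detection.meta_architectures import faster_rcnn_meta_arch

class FasterRCNNInceptionV100FeatureExtractor(
    faster_rcnn_meta_arch.FasterRCNNFeatureExtractor):

    def preprocess(self, resized_inputs):
        # Replicate whatever preprocessing the classification checkpoint used,
        # e.g. channel-mean subtraction.
        return resized_inputs - [[123.68, 116.779, 103.939]]

    def _extract_proposal_features(self, preprocessed_inputs, scope):
        # All but the last block of the hypothetical "InceptionV100" network.
        raise NotImplementedError

    def _extract_box_classifier_features(self, proposal_feature_maps, scope):
        # The final block, applied to the cropped proposal feature maps.
        raise NotImplementedError

    def restore_from_classification_checkpoint_fn(self, *scopes):
        # Map classification-checkpoint variable names to the variables above.
        raise NotImplementedError
</code></pre>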
<p>See the <code>object_detection/models/faster_rcnn_resnet_v1_feature_extractor.py</code> definition as one example. Some remarks:</p>
<ul>
<li>We typically initialize the weights of this feature extractor using those from the <a href="https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models" rel="nofollow noreferrer">Slim Resnet-101 classification checkpoint</a>, and we know that images were preprocessed when training this checkpoint by subtracting a channel mean from each input image. Thus, we implement the preprocess function to replicate the same channel mean subtraction behavior.</li>
<li>The “full” resnet classification network defined in slim is cut into two parts --- all but the last “resnet block” is put into the <code>_extract_proposal_features</code> function and the final block is separately defined in the <code>_extract_box_classifier_features</code> function. In general, some experimentation may be required to decide on an optimal layer at which to “cut” your feature extractor into these two pieces for Faster R-CNN.</li>
</ul>
<p>For more information, please refer to <a href="https://github.com/tensorflow/models/blob/1af55e018eebce03fb61bba9959a04672536107d/research/object_detection/g3doc/defining_your_own_model.md" rel="nofollow noreferrer">this link</a> and this GitHub repo for <a href="https://github.com/tensorflow/models/" rel="nofollow noreferrer">Tensorflow Models</a>.</p>
<p>Hope this helps. Happy Learning!</p> | 2020-05-20 15:03:51.120000+00:00 | 2020-05-20 15:03:51.120000+00:00 | null | null | 58,284,744 | <p>I want to build my own Faster RCNN model, so I downloaded an example from <a href="https://github.com/dBeker/Faster-RCNN-TensorFlow-Python3" rel="nofollow noreferrer">https://github.com/dBeker/Faster-RCNN-TensorFlow-Python3</a> </p>
<p>I get an error when running the code, and I don't know why</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/l/Desktop/Faster-RCNN/train.py", line 216, in <module>
train.train()
File "C:/Users/l/Desktop/Faster-RCNN/train.py", line 148, in train
blobs = self.data_layer.forward()
File "C:\Users\l\Desktop\Faster-RCNN\lib\layer_utils\roi_data_layer.py", line 75, in forward
blobs = self._get_next_minibatch()
File "C:\Users\l\Desktop\Faster-RCNN\lib\layer_utils\roi_data_layer.py", line 71, in _get_next_minibatch
return get_minibatch(minibatch_db, self._num_classes)
File "C:\Users\l\Desktop\Faster-RCNN\lib\utils\minibatch.py", line 30, in get_minibatch
im_blob, im_scales = _get_image_blob(roidb, random_scale_inds)
File "C`enter code here`:\Users\l\Desktop\Faster-RCNN\lib\utils\minibatch.py", line 67, in _get_image_blob
im, im_scale = prep_im_for_blob(im, cfg.FLAGS2["pixel_means"], target_size, cfg.FLAGS.max_size)
File "C:\Users\l\Desktop\Faster-RCNN\lib\utils\blob.py", line 35, in prep_im_for_blob
im = im.astype(np.float32, copy=False)
AttributeError: 'NoneType' object has no attribute 'astype'
</code></pre> | 2019-10-08 10:39:04.920000+00:00 | 2020-05-20 15:03:51.120000+00:00 | 2019-10-08 11:31:36.890000+00:00 | python|windows|tensorflow|deep-learning|faster-rcnn | ['https://github.com/tensorflow/models/blob/1af55e018eebce03fb61bba9959a04672536107d/research/object_detection/g3doc/defining_your_own_model.md', 'https://github.com/tensorflow/models/issues', 'https://github.com/tensorflow/models/blob/1af55e018eebce03fb61bba9959a04672536107d/research/object_detection/g3doc/instance_segmentation.md', 'https://arxiv.org/abs/1506.01497', 'https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models', 'https://github.com/tensorflow/models/blob/1af55e018eebce03fb61bba9959a04672536107d/research/object_detection/g3doc/defining_your_own_model.md', 'https://github.com/tensorflow/models/'] | 7 |
54,456,631 | <p>I found this: <a href="https://github.com/keras-team/keras/pull/3851/commits/232f77003f0357d3de31a0db0b1ac9c532dfb3ce" rel="nofollow noreferrer">Soft Exponential activation function</a></p>
<p>C# conversion:</p>
<pre><code>public double SoftExponential(double x, double alpha = 0.0, double max_value = 0.0)
{
// """Soft Exponential activation function by Godfrey and Gashler
// See: https://arxiv.org/pdf/1602.01321.pdf
// α == 0: f(α, x) = x
// α > 0: f(α, x) = (exp(αx)-1) / α + α
// α< 0: f(α, x) = -ln(1 - α(x + α)) / α
// """
if (alpha == 0)
return x;
else if (alpha > 0)
return alpha + (Math.Exp(alpha * x) - 1.0) / alpha;
else
return -Math.Log(1 - alpha * (x + alpha)) / alpha;
}
</code></pre> | 2019-01-31 08:51:35.437000+00:00 | 2019-01-31 08:51:35.437000+00:00 | null | null | 36,384,249 | <p>I can find a list of activation functions in math but not in code.
So I guess this would be the right place for such a list in code, if there ever should be one,
starting with the translation of the algorithms in these 2 links:
<a href="https://en.wikipedia.org/wiki/Activation_function" rel="noreferrer">https://en.wikipedia.org/wiki/Activation_function</a>
<a href="https://stats.stackexchange.com/questions/115258/comprehensive-list-of-activation-functions-in-neural-networks-with-pros-cons">https://stats.stackexchange.com/questions/115258/comprehensive-list-of-activation-functions-in-neural-networks-with-pros-cons</a></p>
<p>The goal is to have an Activation class (with the functions and their derivatives) with easy accessibility via UI. </p>
<p>EDIT:
my attempt</p>
<pre><code>using UnityEngine;
using System.Collections;
using System;
///<summary>
///Activation Functions from:
///https://en.wikipedia.org/wiki/Activation_function
///https://stats.stackexchange.com/questions/115258/comprehensive-list-of-activation-functions-in-neural-networks-with-pros-cons
///D infront means the Deravitive of the function
///x is the input of one perceptron. a is the alpha value sometimes needed.
///</summary>
[System.Serializable]
public class Activation
{
public ActivationType activationType;
public Activation(ActivationType type)
{
activationType = type;
}
public double AFunction(double x)
{
switch(activationType)
{
case ActivationType.Identity:
return Identity(x);
case ActivationType.BinaryStep:
return BinaryStep(x);
case ActivationType.Logistic:
return Logistic(x);
case ActivationType.Tanh:
return Tanh(x);
case ActivationType.ArcTan:
return ArcTan(x);
case ActivationType.ReLU:
return ReLU(x);
case ActivationType.SoftPlus:
return SoftPlus(x);
case ActivationType.BentIdentity:
return BentIdentity(x);
case ActivationType.Sinusoid:
return Sinusoid(x);
case ActivationType.Sinc:
return Sinc(x);
case ActivationType.Gaussian:
return Gaussian(x);
case ActivationType.Bipolar:
return Bipolar(x);
case ActivationType.BipolarSigmoid:
return BipolarSigmoid(x);
}
return 0;
}
public double ActivationDerivative(double x)
{
switch(activationType)
{
case ActivationType.Logistic:
return DLogistic(x);
case ActivationType.Tanh:
return DTanh(x);
case ActivationType.ArcTan:
return DArcTan(x);
case ActivationType.ReLU:
return DReLU(x);
case ActivationType.SoftPlus:
return DSoftPlus(x);
case ActivationType.BentIdentity:
return DBentIdentity(x);
case ActivationType.Sinusoid:
return DSinusoid(x);
case ActivationType.Sinc:
return DSinc(x);
case ActivationType.Gaussian:
return DGaussian(x);
case ActivationType.BipolarSigmoid:
return DBipolarSigmoid(x);
}
return 0;
}
public double AFunction(double x, double a)
{
switch(activationType)
{
case ActivationType.PReLU:
return PReLU(x,a);
case ActivationType.ELU:
return ELU(x,a);
}
return 0;
}
public double ActivationDerivative(double x, double a)
{
switch(activationType)
{
case ActivationType.PReLU:
return DPReLU(x,a);
case ActivationType.ELU:
return DELU(x,a);
}
return 0;
}
public double Identity(double x)
{
return x;
}
public double BinaryStep(double x)
{
return x < 0 ? 0 : 1;
}
public double Logistic(double x)
{
return 1/(1+Math.Pow(Math.E,-x));
}
public double DLogistic(double x)
{
return Logistic(x)*(1-Logistic(x));
}
public double Tanh(double x)
{
return 2/(1+Math.Pow(Math.E, -(2*x)))-1;
}
public double DTanh(double x)
{
return 1-Math.Pow(Tanh(x),2);
}
public double ArcTan(double x)
{
return Math.Atan(x);
}
public double DArcTan(double x)
{
        return 1/(Math.Pow(x,2)+1);
}
//Rectified Linear Unit
public double ReLU(double x)
{
return Math.Max(0,x);// x < 0 ? 0 : x;
}
public double DReLU(double x)
{
        return x < 0 ? 0 : 1;
}
//Parameteric Rectified Linear Unit
public double PReLU(double x, double a)
{
return x < 0 ? a*x : x;
}
public double DPReLU(double x, double a)
{
return x < 0 ? a : 1;
}
//Exponential Linear Unit
public double ELU(double x, double a)
{
return x < 0 ? a*(Math.Pow(Math.E, x) - 1) : x;
}
public double DELU(double x, double a)
{
return x < 0 ? ELU(x, a)+a: 1;
}
public double SoftPlus(double x)
{
return Math.Log(Math.Exp(x)+1);
}
public double DSoftPlus(double x)
{
return Logistic(x);
}
public double BentIdentity(double x)
{
return (((Math.Sqrt(Math.Pow(x,2)+1))-1)/2)+x;
}
public double DBentIdentity(double x)
{
return (x/(2*Math.Sqrt(Math.Pow(x,2)+1)))+1;
}
// public float SoftExponential(float x)
// {
//
// }
public double Sinusoid(double x)
{
return Math.Sin(x);
}
public double DSinusoid(double x)
{
return Math.Cos(x);
}
public double Sinc(double x)
{
return x == 0 ? 1 : Math.Sin(x)/x;
}
public double DSinc(double x)
{
return x == 0 ? 0 : (Math.Cos(x)/x)-(Math.Sin(x)/Math.Pow(x,2));
}
public double Gaussian(double x)
{
        return Math.Exp(-Math.Pow(x, 2));
}
public double DGaussian(double x)
{
        return -2*x*Math.Exp(-Math.Pow(x, 2));
}
public double Bipolar(double x)
{
return x < 0 ? -1:1;
}
public double BipolarSigmoid(double x)
{
return (1-Math.Exp(-x))/(1+Math.Exp(-x));
}
public double DBipolarSigmoid(double x)
{
return 0.5 * (1 + BipolarSigmoid(x)) * (1 - BipolarSigmoid(x));
}
public double Scaler(double x, double min, double max)
{
return (x - min) / (max - min);
}
}
public enum ActivationType
{
Identity,
BinaryStep,
Logistic,
Tanh,
ArcTan,
ReLU,
PReLU,
ELU,
SoftPlus,
BentIdentity,
Sinusoid,
Sinc,
Gaussian,
Bipolar,
BipolarSigmoid
}
</code></pre>
<p>Not sure if I did the math correctly, so I'm not posting it as an answer.
If anyone is willing to do an error check, I could make it the answer.</p> | 2016-04-03 10:23:06.293000+00:00 | 2019-01-31 08:51:35.437000+00:00 | 2017-04-13 12:44:13.837000+00:00 | c#|neural-network|derivative|activation-function | ['https://github.com/keras-team/keras/pull/3851/commits/232f77003f0357d3de31a0db0b1ac9c532dfb3ce'] | 1
71,484,132 | <p>A slightly larger feature space may help. But your main issue is the architecture of the feature extractor. In order to match people and distinguish them from impostors, features corresponding to small local regions (e.g. shoes, glasses) and global whole-body regions are equally important. This is not captured by the simple feature extractor provided by <a href="https://github.com/nwojke/deep_sort" rel="nofollow noreferrer">https://github.com/nwojke/deep_sort</a>. For more information on this, check: <a href="https://arxiv.org/pdf/1905.00953.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1905.00953.pdf</a>. I recommend trying any of the OSNet models provided here: <a href="https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO" rel="nofollow noreferrer">https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO</a></p>
<p>I can also recommend you to check out my repository: <a href="https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch" rel="nofollow noreferrer">https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch</a>. It seems to provide all you need: multi-camera multi-object tracking and OSNet models</p> | 2022-03-15 14:42:01.273000+00:00 | 2022-03-16 12:18:53.897000+00:00 | 2022-03-16 12:18:53.897000+00:00 | null | 70,494,402 | <p>I am using this repo for DeepSORT - <a href="https://github.com/nwojke/deep_sort" rel="nofollow noreferrer">https://github.com/nwojke/deep_sort</a></p>
<p>I am trying to build a Multi Camera Person Tracking system. I want to save and utilize the features extracted by one camera in footage from other cameras.</p>
<p>The feature extractor, which is trained on the Mars dataset, doesn't seem to help in differentiating between two different people.</p>
<p>I wrote the below snippet to check the cosine distance between images of the same person and of different people.</p>
<pre><code>import glob

import cv2
import torch
from torch import nn

# Extractor class from the deep_sort_pytorch checkout (path assumed from the checkpoint location below)
from deep_sort_pytorch.deep_sort.deep.feature_extractor import Extractor

def _cosine_distance(a, b, data_is_normalized=False):
    cos = nn.CosineSimilarity(dim=1, eps=1e-6)
    return 1. - cos(torch.from_numpy(a), torch.from_numpy(b))

extr = Extractor("./deep_sort_pytorch/deep_sort/deep/checkpoint/ckpt.t7")
image_paths = glob.glob("mars/*.jpg")
features_list = []
for path in image_paths:
    im = cv2.imread(path)
    im_crops = [im]
    features = extr(im_crops)
    features_list.append(features)

# cosine distance of every crop's features to the first crop's features
for f in features_list:
    print(_cosine_distance(f, features_list[0]), "<<")
</code></pre>
<p>As expected, the cosine distance between images of the same person is very low.
But unexpectedly, the cosine distance between crops of two different people is also similarly low.</p>
<p>I thought the feature extractor would help me in differentiating.</p>
<p>Shall I increase the latent space dimensions from 512 to a bigger size?</p>
<p>Or maybe I am mistaking the role of Feature extractor.</p> | 2021-12-27 10:39:44.490000+00:00 | 2022-03-16 12:18:53.897000+00:00 | 2022-02-07 23:37:55.187000+00:00 | pytorch|computer-vision|feature-extraction | ['https://github.com/nwojke/deep_sort', 'https://arxiv.org/pdf/1905.00953.pdf', 'https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO', 'https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch'] | 4 |
48,867,709 | <p><a href="https://arxiv.org/abs/1711.00489" rel="nofollow noreferrer">This paper</a> researches the relation of batch size and learning rate.
Instead of decaying the learning rate, they increase the batch size by the same factor.</p>
<blockquote>
<p>It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times.</p>
</blockquote>
<p>In short, if you use a bigger batch size, you can use a larger learning rate to reduce training time.</p> | 2018-02-19 13:52:11.677000+00:00 | 2018-02-19 13:52:11.677000+00:00 | null | null | 43,762,679 | <p>I am trying to train the model found at
<a href="https://github.com/silicon-valley-data-science/RNN-Tutorial" rel="nofollow noreferrer">https://github.com/silicon-valley-data-science/RNN-Tutorial</a> With a dataset generated through <a href="https://github.com/jupiter126/Create_Speech_Dataset" rel="nofollow noreferrer">https://github.com/jupiter126/Create_Speech_Dataset</a> (around 340000 small wav audio samples with transcripts).<br /><br />
When I train with GPU the training goes relatively fast, however I can't set batch_train_size above 25 without reaching OOM.<br />
When I train with CPU, training is much slower, but I can easily set batch_train_size to 250 (probably up to 700 but didn't try yet).<br /><br /></p>
<p>I am confused about how the small batch size limit on GPU might affect training quality, or if raising the number of epochs might cancel out that effect...<br /> In other words, 25 samples over 10000 epochs or 500 samples over 500 epochs? <br /> <br />
GPU is GTX 1060 with 6Gb ram, CPU is dual XEON 2630l v4 (2*10 hyperthreaded cores at 1.7Ghz) with 128Gb ram.<br /><br /></p> | 2017-05-03 14:19:19.527000+00:00 | 2018-02-19 13:52:11.677000+00:00 | 2017-05-03 14:21:43.937000+00:00 | performance|tensorflow|cpu|ram | ['https://arxiv.org/abs/1711.00489'] | 1 |
510,222 | <p>As pointed out by many other posters, it is possible to base cryptography on NP-hard or NP-complete problems.</p>
<p>However, the common methods for cryptography are going to be based on difficult mathematics (difficult to crack, that is). The truth is that it is easier to serialize numbers as a traditional key than to create a standardized string that solves an NP-hard problem. Therefore, practical crypto is based on mathematical problems that are not yet proven to be NP-hard or NP-complete (so it is conceivable that some of these problems are in P).</p>
<p>In ElGamal or RSA encryption, breaking it requires the cracking the discrete logarithm, so look at this <a href="http://en.wikipedia.org/wiki/Discrete_logarithm" rel="nofollow noreferrer">wikipedia</a> article.</p>
<blockquote>
<p>No efficient algorithm for computing general discrete logarithms logbg is known. The naive algorithm is to raise b to higher and higher powers k until the desired g is found; this is sometimes called trial multiplication. This algorithm requires running time linear in the size of the group G and thus exponential in the number of digits in the size of the group. There exists an efficient quantum algorithm due to Peter Shor however (<a href="http://arxiv.org/abs/quant-ph/9508027" rel="nofollow noreferrer">http://arxiv.org/abs/quant-ph/9508027</a>).</p>
<p>Computing discrete logarithms is apparently difficult. Not only is no efficient algorithm known for the worst case, but the average-case complexity can be shown to be at least as hard as the worst case using random self-reducibility.</p>
<p>At the same time, the inverse problem of discrete exponentiation is not (it can be computed efficiently using exponentiation by squaring, for example). This asymmetry is analogous to the one between integer factorization and integer multiplication. Both asymmetries have been exploited in the construction of cryptographic systems.</p>
</blockquote>
<p>The widespread belief is that these are NP-complete, but maybe can't be proven so. Note that quantum computers may break crypto efficiently!</p> | 2009-02-04 05:53:42.457000+00:00 | 2009-02-04 05:53:42.457000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 311,064 | <p>Should practical quantum computing become a reality, I am wondering if there are any public key cryptographic algorithms that are based on NP-complete problems, rather than integer factorization or discrete logarithms.</p>
<p>Edit:</p>
<p>Please check out the "Quantum computing in computational complexity theory" section of
<a href="http://en.wikipedia.org/wiki/Quantum_computer" rel="noreferrer">the wiki article on quantum computers.</a> It points out that the class of problems quantum computers can answer (BQP) is believed to be strictly easier than NP-complete. </p>
<p>Edit 2:</p>
<p>'Based on NP-complete' is a bad way of expressing what I'm interested in.</p>
<p>What I intended to ask is for a Public Key encryption algorithm with the property that any method for breaking the encryption can also be used to break the underlying NP-complete problem. This means breaking the encryption proves P=NP.</p> | 2008-11-22 08:06:20.480000+00:00 | 2011-11-19 22:27:49.493000+00:00 | 2008-11-22 09:35:48.160000+00:00 | cryptography|complexity-theory|quantum-computing | ['http://en.wikipedia.org/wiki/Discrete_logarithm', 'http://arxiv.org/abs/quant-ph/9508027'] | 2 |
51,936,605 | <p>One solution is to use Multi-dimensional RNN or LSTM as described in <a href="https://arxiv.org/pdf/0705.2011.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/0705.2011.pdf</a>. In this case, your data will be treated as a sequence with 4 dimensions. This github repo provides an implementation of 2D LSTM <a href="https://github.com/philipperemy/tensorflow-multi-dimensional-lstm" rel="nofollow noreferrer">https://github.com/philipperemy/tensorflow-multi-dimensional-lstm</a>. Hope this helps</p> | 2018-08-20 18:28:56.023000+00:00 | 2018-08-20 18:28:56.023000+00:00 | null | null | 44,190,969 | <p>I have data that consists of 4 different time series, e.g.:</p>
<pre><code> [35, 45, 47, 39...]
[47, 60, 57, 55...]
[42, 42, 61, 69...]
[62, 70, 62, 65...]
</code></pre>
<p>Thing is, besides temporal dependency (horizontal one), there also exists vertical dependency (in columns, if we look at this example 'matrix').</p>
<p>Output vectors would be these same time series, only shifted for one step.</p>
<p>Is it possible to create LSTM network for each of time series (so, 4 networks in my case, and also 4 outputs) but also connect them vertically, i.e. create 2D LSTM?</p>
<p>If so, how would one achieve that in Tensorflow?</p>
<p>Is it also possible to make this kind of network deeper (have additional LSTM layers appended to each of these 4 networks)?</p>
<p>I hope I was clear enough with explanation.</p> | 2017-05-25 22:50:29.117000+00:00 | 2018-08-23 05:55:19.230000+00:00 | null | python|tensorflow|lstm | ['https://arxiv.org/pdf/0705.2011.pdf', 'https://github.com/philipperemy/tensorflow-multi-dimensional-lstm'] | 2 |
60,648,541 | <p>As mentioned in Table 1 in the <a href="https://arxiv.org/pdf/1608.06993.pdf" rel="nofollow noreferrer">DenseNet paper</a>, DenseNet-121 uses something called <a href="https://alexisbcook.github.io/2017/global-average-pooling-layers-for-object-localization/" rel="nofollow noreferrer">Global Average Pooling</a>, which is an extreme way of pooling where a tensor of dimensions <code>d x h x w</code> is reduced to <code>d x 1 x 1</code>. </p> | 2020-03-12 06:15:29.940000+00:00 | 2020-03-12 06:20:58.293000+00:00 | 2020-03-12 06:20:58.293000+00:00 | null | 60,646,996 | <p>I'm fairly new to deeplearning, python, and pytorch so please bear with me!</p>
<p>I'm trying to understand Transfer Learning in Pytorch using two different Pretrained Networks: Vgg11 and Densenet121.
I've run data of shape (3 x 224 x 224) through the "features" part of the above networks, and the output shapes are as follows:</p>
<p>Vgg11 features output shape: 512 x 7 x 7</p>
<p>Densenet121 features output shape: 1024 x 7 x7</p>
<p>Now, I'm trying to make my own Classifier to use instead of the Pre-trained one. Upon checking both pre-trained classifiers, I see the Vgg11 classifier has in the first layer:</p>
<blockquote>
<p>(0): Linear(in_features=25088, out_features=4096, bias=True)</p>
</blockquote>
<p>While the Densenet121 has in the first layer:</p>
<blockquote>
<p>(classifier): Linear(in_features=1024, out_features=1000, bias=True))</p>
</blockquote>
<p>The Vgg one makes sense, since if you flatten the output of the "features" part, you get 512 x 7 x 7 = 25,088.</p>
<p>How does the Densenet one have only 1024 dimensions? If you flatten the output of its "features" part, you get 1024 x 7 x 7 = 50,176</p>
<p>Are there steps that I am missing for either of them? Are there ways to check the input and output shapes of each layer and find out exactly what's happening?</p>
<p>Thank you.</p> | 2020-03-12 02:57:24.820000+00:00 | 2020-03-12 06:20:58.293000+00:00 | null | machine-learning|deep-learning|pytorch | ['https://arxiv.org/pdf/1608.06993.pdf', 'https://alexisbcook.github.io/2017/global-average-pooling-layers-for-object-localization/'] | 2 |
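<p>A short sketch, assuming torchvision's DenseNet-121, that makes the pooling step visible:</p>
<pre><code>import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121()
x = torch.randn(1, 3, 224, 224)
features = model.features(x)                      # (1, 1024, 7, 7)
pooled = F.adaptive_avg_pool2d(features, (1, 1))  # (1, 1024, 1, 1): global average pooling
flat = torch.flatten(pooled, 1)                   # (1, 1024), what the classifier layer sees
print(features.shape, flat.shape)
</code></pre>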
61,300,304 | <p>There are multiple options, and you need to experiment to find which one is optimal for your case.</p>
<p>Option 1: You can treat your static features as fixed temporal data. So, you make a temporal dimension for each of your static features and let the LSTM handle the rest.</p>
<p>For example, your transformed data will look like this:</p>
<pre><code> water rate pump speed total produced water depth_wall
2000-01-01 10 4 1120 100
2000-01-02 20 8 1140 100
2000-01-03 10 4 1150 100
2000-01-04 10 3 1160 100
2000-01-05 10 4 1170 100
</code></pre>
<p>Option 2: Designing multi-head networks.</p>
<pre><code>TIME_SERIES_INPUT ------> LSTM -------\
*---> MERGE / Concatenate ---> [more layers]
STATIC_INPUTS --> [FC layer/ conv] ---/
</code></pre>
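<p>A minimal sketch of Option 2 with the Keras functional API (the layer sizes and input shapes here are arbitrary assumptions):</p>
<pre><code>import tensorflow as tf

ts_input = tf.keras.Input(shape=(30, 2), name="time_series")  # e.g. water rate, pump speed
static_input = tf.keras.Input(shape=(4,), name="static")      # e.g. depth, lat, lon, thickness

x = tf.keras.layers.LSTM(32)(ts_input)                          # temporal branch
s = tf.keras.layers.Dense(16, activation="relu")(static_input)  # static branch

merged = tf.keras.layers.Concatenate()([x, s])
output = tf.keras.layers.Dense(1, name="total_produced_water")(merged)

model = tf.keras.Model(inputs=[ts_input, static_input], outputs=output)
model.compile(optimizer="adam", loss="mse")
</code></pre>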
<p>Here is a paper explaining a combining strategy: <a href="https://arxiv.org/pdf/1712.08160.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1712.08160.pdf</a></p>
<p>Here is another paper utilizing option 2: <a href="https://www.researchgate.net/publication/337159046_Classification_of_ECG_signals_by_dot_Residual_LSTM_Network_with_data_augmentation_for_anomaly_detection" rel="nofollow noreferrer">https://www.researchgate.net/publication/337159046_Classification_of_ECG_signals_by_dot_Residual_LSTM_Network_with_data_augmentation_for_anomaly_detection</a></p>
<p>Source code for paper 2: <a href="https://github.com/zabir-nabil/dot-res-lstm" rel="nofollow noreferrer">https://github.com/zabir-nabil/dot-res-lstm</a></p> | 2020-04-19 06:14:31.790000+00:00 | 2020-04-19 06:14:31.790000+00:00 | null | null | 61,300,023 | <p>I tried to find a similar question and its answers but was not successful in doing so. That's why I'm asking a question that might be asked before:</p>
<p>I'm working on a problem that outputs the cumulative water production of several water wells. The features I have are both time series (water rate and pump speed as functions of time) and static (depth of the wells, latitude and longitude of the well, thickness of the water bearing zone, etc.)</p>
<p>My input data can be shown as below for well#1.</p>
<p>dynamic data:</p>
<pre><code> water rate pump speed total produced water
2000-01-01 10 4 1120
2000-01-02 20 8 1140
2000-01-03 10 4 1150
2000-01-04 10 3 1160
2000-01-05 10 4 1170
</code></pre>
<p>static data:</p>
<pre><code>depth of the well_1 = 100
latitude and longitude of the well_1 = x1, y1
thickness of the water bearing zone of well_1 = 3
</code></pre>
<p>My question is how a RNN model (LSTM, GRU, ...) can be built that can take both dynamic and static features?</p> | 2020-04-19 05:38:43.093000+00:00 | 2022-09-15 20:28:49.573000+00:00 | 2020-04-19 06:17:11.400000+00:00 | python|static|time-series|recurrent-neural-network | ['https://arxiv.org/pdf/1712.08160.pdf', 'https://www.researchgate.net/publication/337159046_Classification_of_ECG_signals_by_dot_Residual_LSTM_Network_with_data_augmentation_for_anomaly_detection', 'https://github.com/zabir-nabil/dot-res-lstm'] | 3 |
39,895,916 | <p><strong>Just use 1 hash function!</strong> (and save the <code>1/(f ε^2)</code> smallest values.) </p>
<p>Check out <a href="https://arxiv.org/pdf/1303.5479v2.pdf" rel="nofollow noreferrer">this article</a> for the state of the art practical and theoretical bounds. It has this nice graph (below), explaining why you probably want to use just one 2-independent hash function and save the <code>k</code> smallest values.</p>
<p>When estimating set sizes the paper shows that you can get a relative error of approximately <code>ε = 1/sqrt(f k)</code> where <code>f</code> is the jaccard similarity and <code>k</code> is the number of values kept. So if you want error <code>ε</code>, you need <code>k=1/(fε^2)</code> or if your sets have similarity around <code>1/3</code> and you want a <code>10%</code> relative error, you should keep the <code>300</code> smallest values.</p>
<p><a href="https://i.stack.imgur.com/lHc9c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lHc9c.png" alt="graph"></a></p> | 2016-10-06 12:23:40.620000+00:00 | 2019-01-31 09:27:16.493000+00:00 | 2019-01-31 09:27:16.493000+00:00 | null | 19,701,052 | <p>I am keen to try and implement minhashing to find near duplicate content. <a href="http://blog.cluster-text.com/tag/minhash/">http://blog.cluster-text.com/tag/minhash/</a> has a nice write up, but there the question of just how many hashing algorithms you need to run across the shingles in a document to get reasonable results.</p>
<p>The blog post above mentioned something like 200 hashing algorithms. <a href="http://blogs.msdn.com/b/spt/archive/2008/06/10/set-similarity-and-min-hash.aspx">http://blogs.msdn.com/b/spt/archive/2008/06/10/set-similarity-and-min-hash.aspx</a> lists 100 as a default.</p>
<p>Obviously there is an increase in the accuracy as the number of hashes increases, but how many hash functions is reasonable?</p>
<p>To quote from the blog</p>
<blockquote>
<p>It is tough to get the error bar on our similarity estimate much
smaller than [7%] because of the way error bars on statistically
sampled values scale — to cut the error bar in half we would need four
times as many samples.</p>
</blockquote>
<p>Does this mean that decreasing the number of hashes to something like 12 (200 / 4 / 4) would result in an error rate of 28% (7 * 2 * 2)?</p> | 2013-10-31 07:58:48.847000+00:00 | 2020-06-23 14:59:40.130000+00:00 | null | algorithm|hash | ['https://arxiv.org/pdf/1303.5479v2.pdf', 'https://i.stack.imgur.com/lHc9c.png'] | 2
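<p>A small sketch of the single-hash, "keep the k smallest values" estimator recommended in the answer above, assuming string items (function names are only illustrative):</p>
<pre><code>import hashlib

def bottom_k_signature(items, k):
    """The k smallest 64-bit hash values of the distinct items, under one hash function."""
    hashes = sorted(int(hashlib.blake2b(item.encode(), digest_size=8).hexdigest(), 16)
                    for item in set(items))
    return set(hashes[:k])

def jaccard_estimate(sig_a, sig_b, k):
    """Fraction of the k smallest values of the union that appear in both signatures."""
    bottom_k_of_union = sorted(sig_a | sig_b)[:k]
    return sum(1 for h in bottom_k_of_union if h in sig_a and h in sig_b) / k
</code></pre>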
18,600,806 | <p>I start with a simple but not quite linear-time algorithm. We choose some value between <code>array1[0]+array2[0]</code> and <code>array1[N-1]+array2[N-1]</code>. Then we determine how many pair sums are greater than this value and how many of them are less. This may be done by iterating the arrays with two pointers: pointer to the first array incremented when sum is too large and pointer to the second array decremented when sum is too small. Repeating this procedure for different values and using binary search (or one-sided binary search) we could find Kth largest sum in O(N log R) time, where N is size of the largest array and R is number of possible values between <code>array1[N-1]+array2[N-1]</code> and <code>array1[0]+array2[0]</code>. This algorithm has linear time complexity only when the array elements are integers bounded by small constant.</p>
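<p>One way to implement the counting step (shown here in Python for clarity) is a single monotone two-pointer pass over both sorted arrays:</p>
<pre><code>def count_pairs_greater(a, b, v):
    """Count pairs (x, y), x from a and y from b (both sorted ascending), with x + y > v."""
    count = 0
    j = len(b)
    for x in a:
        # As x grows, more of b qualifies, so j only ever moves left: O(len(a) + len(b)) total.
        while j > 0 and x + b[j - 1] > v:
            j -= 1
        count += len(b) - j
    return count
</code></pre>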
<p>Previous algorithm may be improved if we stop binary search as soon as number of pair sums in binary search range decreases from O(N<sup>2</sup>) to O(N). Then we fill auxiliary array with these pair sums (this may be done with slightly modified two-pointers algorithm). And then we use quickselect algorithm to find Kth largest sum in this auxiliary array. All this does not improve worst-case complexity because we still need O(log R) binary search steps. What if we keep the quickselect part of this algorithm but (to get proper value range) we use something better than binary search?</p>
<p>We could estimate the value range with the following trick: get every second element from each array and try to find the pair sum with rank <code>k/4</code> for these half-arrays (using the same algorithm recursively). Obviously this should give some approximation of the needed value range. And in fact a slightly improved variant of this trick gives a range containing only O(N) elements. This is proven in the following paper: <a href="http://www.cse.yorku.ca/~andy/pubs/X%2BY.pdf">"Selection in X + Y and matrices with sorted rows and columns" by A. Mirzaian and E. Arjomandi</a>. This paper contains a detailed explanation of the algorithm, proof, complexity analysis, and pseudo-code for all parts of the algorithm except <a href="http://en.wikipedia.org/wiki/Quickselect">Quickselect</a>.</p>
<p>This algorithm has complexity O(N). If one of the arrays is shorter than other array (M < N) we could assume that this shorter array is extended to size N with some very small elements so that all calculations in the algorithm use size of the largest array. We don't actually need to extract pairs with these "added" elements and feed them to quickselect, which makes algorithm a little bit faster but does not improve asymptotic complexity.</p>
<p>If k < N we could ignore all the array elements with index greater than k. In this case complexity is equal to O(k). If N < k < N(N-1) we just have better complexity than requested in OP. If k > N(N-1), we'd better solve the opposite problem: k'th smallest sum.</p>
<p>I uploaded simple C++11 implementation to <a href="http://ideone.com/qe1YHA">ideone</a>. Code is not optimized and not thoroughly tested. I tried to make it as close as possible to pseudo-code in linked paper. This implementation uses <code>std::nth_element</code>, which allows linear complexity only on average (not worst-case).</p>
<hr>
<p>A completely different approach to find K'th sum in linear time is based on priority queue (PQ). One variation is to insert largest pair to PQ, then repeatedly remove top of PQ and instead insert up to two pairs (one with decremented index in one array, other with decremented index in other array). And take some measures to prevent inserting duplicate pairs. Other variation is to insert all possible pairs containing largest element of first array, then repeatedly remove top of PQ and instead insert pair with decremented index in first array and same index in second array. In this case there is no need to bother about duplicates.</p>
<p>OP mentions O(K log K) solution where PQ is implemented as max-heap. But in some cases (when array elements are evenly distributed integers with limited range and linear complexity is needed only on average, not worst-case) we could use O(1) time priority queue, for example, as described in this paper: <a href="http://arxiv.org/pdf/physics/0606226">"A Complexity O(1) Priority Queue for Event Driven Molecular Dynamics Simulations" by Gerald Paul</a>. This allows O(K) expected time complexity.</p>
<p>Advantage of this approach is a possibility to provide first K elements in sorted order. Disadvantages are limited choice of array element type, more complex and slower algorithm, worse asymptotic complexity: O(K) > O(N).</p> | 2013-09-03 20:20:20.487000+00:00 | 2013-09-07 21:18:49.053000+00:00 | 2013-09-07 21:18:49.053000+00:00 | null | 18,557,175 | <p>Given two sorted arrays of numbers, we want to find the pair with the kth largest possible sum. (A pair is one element from the first array and one element from the second array). For example, with arrays</p>
<ul>
<li>[2, 3, 5, 8, 13]</li>
<li>[4, 8, 12, 16]</li>
</ul>
<p>The pairs with largest sums are</p>
<ul>
<li>13 + 16 = 29</li>
<li>13 + 12 = 25</li>
<li>8 + 16 = 24</li>
<li>13 + 8 = 21</li>
<li>8 + 12 = 20</li>
</ul>
<p>So the pair with the 4th largest sum is (13, 8). How to find the pair with the kth largest possible sum?</p>
<p>Also, what is the fastest algorithm? The arrays are already sorted and sizes M and N.</p>
<hr>
<p>I am already aware of the <strong>O(Klogk)</strong> solution, using a Max-Heap, given <a href="https://stackoverflow.com/questions/5212037/find-the-kth-largest-sum-in-two-arrays">here</a>.</p>
<p>It is also one of the favorite <em>Google</em> interview questions, and they demand an <strong>O(k) solution</strong>.</p>
<p>I've also read somewhere that there exists an <strong>O(k)</strong> solution, which I am unable to figure out.</p>
<p>Can someone explain the correct solution with pseudocode?</p>
<p>P.S.
Please DON'T post <a href="http://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi?board=riddles_cs;action=display;num=1132204952;start=25#47" rel="nofollow noreferrer">this</a> link as an answer/comment. It DOESN'T contain the answer.</p>
14,843,041 | <p>We have written recent research papers that survey the best schemes for this problem. Please see:</p>
<p>Daniel Lemire and Leonid Boytsov, Decoding billions of integers per second through vectorization,Software: Practice & Experience 45 (1), 2015.
<a href="http://arxiv.org/abs/1209.2137">http://arxiv.org/abs/1209.2137</a></p>
<p>Daniel Lemire, Nathan Kurz, Leonid Boytsov, SIMD Compression and the Intersection of Sorted Integers, Software: Practice and Experience (to appear) <a href="http://arxiv.org/abs/1401.6399">http://arxiv.org/abs/1401.6399</a></p>
<p>They include an extensive experimental evaluation.</p>
<p>You can find a complete implementation of all techniques in C++11 online:
<a href="https://github.com/lemire/FastPFor">https://github.com/lemire/FastPFor</a> and <a href="https://github.com/lemire/SIMDCompressionAndIntersection">https://github.com/lemire/SIMDCompressionAndIntersection</a></p>
<p>There are also C libraries: <a href="https://github.com/lemire/simdcomp">https://github.com/lemire/simdcomp</a> and <a href="https://github.com/lemire/MaskedVByte">https://github.com/lemire/MaskedVByte</a></p>
<p>If you prefer Java, please see <a href="https://github.com/lemire/JavaFastPFOR">https://github.com/lemire/JavaFastPFOR</a></p> | 2013-02-12 22:28:39.507000+00:00 | 2015-05-06 02:20:35.870000+00:00 | 2015-05-06 02:20:35.870000+00:00 | null | 283,299 | <p>I have a large array with a range of integers that are mostly continuous, eg 1-100, 110-160, etc. All integers are positive.
What would be the best algorithm to compress this?<br/><br/>
I tried the deflate algorithm but that gives me only 50% compression.
Note that the algorithm cannot be lossy.</p>
<p>All numbers are unique and progressively increasing.</p>
<p>Also if you can point me to the java implementation of such algorithm that would be great.</p> | 2008-11-12 08:19:07.483000+00:00 | 2022-04-01 19:11:30.287000+00:00 | 2008-11-12 08:29:42.097000+00:00 | algorithm|compression | ['http://arxiv.org/abs/1209.2137', 'http://arxiv.org/abs/1401.6399', 'https://github.com/lemire/FastPFor', 'https://github.com/lemire/SIMDCompressionAndIntersection', 'https://github.com/lemire/simdcomp', 'https://github.com/lemire/MaskedVByte', 'https://github.com/lemire/JavaFastPFOR'] | 7 |
64,472,881 | <p>I happened upon an answer for this while looking at the library authors' paper: <a href="https://arxiv.org/abs/1710.11555" rel="nofollow noreferrer">https://arxiv.org/abs/1710.11555</a>.</p>
<p>In their paper they include an image of their distributed training environment and it notably includes a couple of parameterservers. <a href="https://i.stack.imgur.com/shJGx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/shJGx.png" alt="distributed environment setup" /></a></p>
<p>The particular <code>merge_call</code> thing that tensorflow is complaining about in your question is something that I think has to be done on a separate device from the worker. So basically if you are trying to train the forest in a distributed setting, your runconfig has to look like</p>
<pre><code>strategy = tf.distribute.experimental.ParameterServerStrategy()
config = tf.estimator.RunConfig(
train_distribute=strategy)
</code></pre>
<p>and you have to figure out how to set up TF_CONFIG appropriately so that the worker tensorflow process is aware of the chief and parameter servers. If you use google AI platform, they set that stuff up when you start the job. I haven't tried getting this to work locally or on a non-ai-platform cluster.</p> | 2020-10-21 23:02:55.227000+00:00 | 2020-10-21 23:02:55.227000+00:00 | null | null | 61,030,540 | <p>I have traversed through numerous githubs, however i am unable to find an example where Tensorflow distributed learning was used with Boosted Trees Classifier estimator.
All the tutorials are for Neural nets. </p>
<p>I have slightly adapted the boosted trees code to work with distributed strategy as below: </p>
<pre><code>import numpy as np
import pandas as pd
from IPython.display import clear_output
from matplotlib import pyplot as plt
import tensorflow as tf
tf.random.set_seed(123)
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(feature_name,
vocab))
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
feature_columns.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(tf.feature_column.numeric_column(feature_name,
dtype=tf.float32))
NUM_EXAMPLES = len(y_train)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((dict(X), y))
if shuffle:
dataset = dataset.shuffle(NUM_EXAMPLES)
# For training, cycle thru dataset as many times as need (n_epochs=None).
dataset = dataset.repeat(n_epochs)
# In memory training doesn't use batching.
dataset = dataset.batch(NUM_EXAMPLES)
return dataset
return input_fn
# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
n_batches = 10
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
est = tf.estimator.BoostedTreesClassifier(feature_columns,
n_batches_per_layer=n_batches,config=config)
# The model will stop training once the specified number of trees is built, not
# based on the number of steps.
est.train(train_input_fn, max_steps=100)
# Eval.
result = est.evaluate(eval_input_fn)
clear_output()
print(pd.Series(result))
</code></pre>
<p>However, whenever I run this code, I get the error:</p>
<blockquote>
<p><code>merge_call</code> called while defining a new graph or a tf.function. This can often happen if the function <code>fn</code> passed to <code>strategy.experimental_run()</code> is decorated with <code>@tf.function</code> (or contains a nested <code>@tf.function</code>), and <code>fn</code> contains a synchronization point, such as aggregating gradients. This behavior is not yet supported. Instead, please wrap the entire call <code>strategy.experimental_run(fn)</code> in a <code>@tf.function</code>, and avoid nested <code>tf.function</code>s that may potentially cross a synchronization boundary.</p>
</blockquote>
<p>So, I would be grateful either for a way to debug this error or for an example which uses distributed learning with Boosted Trees.</p>
45,871,024 | <p>The approach depends on what kind of phrases or keywords you want to extract. </p>
<p>If the type of phrase is well-defined, the best way might be to parse the fragments and then extract from the parse tree with a few rules. As long as the fragments are proper English, parsers should process them with about the same quality as full sentences.</p>
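<p>As an illustration of that first approach, a dependency parser such as spaCy (one possible toolkit; NLTK or Stanford CoreNLP, which the question mentions, offer equivalents) already exposes the noun phrases directly - this is just a sketch, not a full solution:</p>

<pre><code>import spacy

nlp = spacy.load("en_core_web_sm")  # small English model that includes the parser

def candidate_phrases(fragment):
    doc = nlp(fragment)
    # noun_chunks are the base noun phrases found by the parser;
    # a few hand-written rules can then merge, split or filter them.
    return [chunk.text for chunk in doc.noun_chunks]

print(candidate_phrases("Art Museums and galleries in China"))
# e.g. ['Art Museums', 'galleries', 'China']
</code></pre>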
<p>More generally, you could also approach this problem as a machine learning problem. If you have enough data, i.e. pairs of fragments and what should be extracted, you can use that to train a model. Common approaches would be</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Sequence_labeling" rel="nofollow noreferrer">sequence labeling</a> - marking for each token if it should be kept or dropped, using <a href="https://en.wikipedia.org/wiki/Conditional_random_field" rel="nofollow noreferrer">CRFs</a> or <a href="https://en.wikipedia.org/wiki/Recursive_neural_network" rel="nofollow noreferrer">RNNs</a></li>
<li><a href="https://medium.com/towards-data-science/sequence-to-sequence-model-introduction-and-concepts-44d9b41cd42d" rel="nofollow noreferrer">sequence-to-sequence models</a> - encoding the input sequence and then generating a new, shorter output, using a combinations of two RNNs</li>
</ul>
<p>In the NLP literature, you could look for sentence compression / summarization. A recent paper that first proposed the second approach listed above is:</p>
<ul>
<li><a href="https://arxiv.org/pdf/1509.00685.pdf" rel="nofollow noreferrer">A Neural Attention Model for Abstractive Sentence Summarization</a> - Rush et al. 2015</li>
</ul> | 2017-08-24 21:17:41.200000+00:00 | 2017-08-24 21:17:41.200000+00:00 | null | null | 45,831,545 | <p>While there are tons of information on how to extract keywords/phrases from documents, I could not find any technique on how to extract key phrases from fragments (not necessarily sentences). Here are some examples:</p>
<ul>
<li>Art Museums and galleries in China -> Museums and galleries</li>
<li>Naval Battles Of The Russo-Japanese War -> Naval Battles, The Russo-Japanese War</li>
</ul>
<p>One could suggest simply using an NLP toolkit to parse the tree and extract the noun phrases. I wonder if there are any better approaches.</p>
68,614,060 | <p>You can do it with the "warcio" library.</p>
<p>Example code:</p>
<pre><code>import requests
from warcio.archiveiterator import ArchiveIterator
from warcio.warcwriter import WARCWriter
def split_records(url):
resp = requests.get(url, stream=True)
for record in ArchiveIterator(resp.raw, arc2warc=True):
if record.rec_type == 'warcinfo':
print(record.raw_stream.read())
elif record.rec_type == 'response':
id = record.rec_headers.get_header('WARC-Record-ID').rsplit(':', 1)[-1].rstrip('>')
print(id)
output = open('%s.warc.gz' % (id), 'wb')
writer = WARCWriter(output, gzip=True)
writer.write_record(record)
output.close()
split_records('https://cdn.ruarxive.org/public/webcollect2020/kgi/komitetgi.ru/komitetgi.ru.warc.gz')
</code></pre>
<p>It will split the WARC file into individual WARC records stored in the same directory as the script file.</p> | 2021-08-01 20:17:46.333000+00:00 | 2021-08-01 20:17:46.333000+00:00 | null | null | 64,475,290 | <p>My goal is to split and sort a WARC file from CommonCrawl into its individual records. Example file:</p>
<pre><code>WARC/1.0
WARC-Type: warcinfo
WARC-Date: 2020-08-04T01:43:40Z
WARC-Record-ID: <urn:uuid:959ea654-33fd-466b-b1bf-f08aa8abe774>
Content-Length: 500
Content-Type: application/warc-fields
WARC-Filename: CC-MAIN-20200804014340-20200804044340-00045.warc.gz
isPartOf: CC-MAIN-2020-34
publisher: Common Crawl
description: Wide crawl of the web for August 2020
operator: Common Crawl Admin ([email protected])
hostname: ip-10-67-67-22.ec2.internal
software: Apache Nutch 1.17 (modified, https://github.com/commoncrawl/nutch/)
robots: checked via crawler-commons 1.2-SNAPSHOT (https://github.com/crawler-commons/crawler-commons)
format: WARC File Format 1.1
conformsTo: http://iipc.github.io/warc-specifications/specifications/warc-format/warc-1.1/
WARC/1.0
WARC-Type: request
WARC-Date: 2020-08-04T03:25:25Z
WARC-Record-ID: <urn:uuid:6c0b749a-4d02-4a77-ab93-9bc4ba094cdc>
Content-Length: 303
Content-Type: application/http; msgtype=request
WARC-Warcinfo-ID: <urn:uuid:959ea654-33fd-466b-b1bf-f08aa8abe774>
WARC-IP-Address: 104.254.66.40
WARC-Target-URI: http://00.auto.sohu.com/d/details?cityCode=450100&planId=1450&trimId=145372
</code></pre>
<p>How can I split the file into its different records at the line: "WARC/1.0"?</p> | 2020-10-22 04:24:26.897000+00:00 | 2021-08-01 20:17:46.333000+00:00 | null | python|split|warc | [] | 0 |
38,681,028 | <ul>
<li><p><strong>All you need is a good init (2016)</strong> : This paper proposes a simple method for weight initialization for deep net learning (<a href="http://arxiv.org/abs/1511.06422" rel="noreferrer">http://arxiv.org/abs/1511.06422</a>)</p></li>
<li><p>Watch this 6-minute video by Andrew Ng (Machine Learning, Coursera -> Week 5 -> Random Initialization), which explains the danger of setting all initial weights to zero in backpropagation (<a href="https://www.coursera.org/learn/machine-learning/lecture/ND5G5/random-initialization" rel="noreferrer">https://www.coursera.org/learn/machine-learning/lecture/ND5G5/random-initialization</a>)</p></li>
</ul>
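<p>A tiny NumPy sketch (my own illustration, not taken from either resource) makes the danger concrete: two hidden units that start with identical weights receive identical gradient updates, so they stay identical forever and the layer effectively collapses to a single unit.</p>

<pre><code>import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])   # one training example with 3 inputs
W = np.ones((2, 3))              # 2 hidden units initialized identically
target = 1.0

for step in range(3):
    h = sigmoid(W @ x)                         # both units compute the same value
    y = h.sum()                                # trivial linear output layer
    grad_h = (y - target) * np.ones(2)         # same error signal for both units
    grad_W = (grad_h * h * (1 - h))[:, None] * x[None, :]
    W -= 0.1 * grad_W                          # identical update for both rows
    print(step, np.abs(W[0] - W[1]).max())     # stays exactly 0.0
</code></pre>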
<p><a href="https://i.stack.imgur.com/U0Ppm.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/U0Ppm.jpg" alt="enter image description here"></a>
If we initialize all weights to the same value (e.g. zero or one), each hidden unit will get exactly the same signal. E.g. if all weights are initialized to 1, each unit gets a signal equal to the sum of the inputs (and outputs sigmoid(sum(inputs))). If all weights are zeros, which is even worse, every hidden unit will get zero signal. No matter what the input was - if all weights are the same, all units in the hidden layer will be the same too. This is why one should initialize weights randomly.</p> | 2016-07-31 05:24:45.400000+00:00 | 2016-07-31 05:24:45.400000+00:00 | null | null | 38,593,287 | <p>Using R and the package <code>neuralnet</code>, I try to model data that looks like this:</p>
<p><a href="https://i.stack.imgur.com/m4Seq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m4Seq.png" alt="training data"></a></p>
<p>These are temperature readings in 10 min intervals over several days (above is a 2 day cutout). Using the code below, I fit a neural network to the data. There are probably simpler ways to model this exact data, but in the future the data might look quite different. Using a single hidden layer with 2 neurons gives me satisfactory results:</p>
<p><a href="https://i.stack.imgur.com/D67DZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D67DZ.png" alt="good neural network performance"></a></p>
<p>This also works <em>most of the time</em> with more layers and neurons. However, with one hidden layer with one neuron and <em>occasionally</em> with two layers (in my case 3 and 2 neurons respectively), I get rather poor results, always in the same shape:</p>
<p><a href="https://i.stack.imgur.com/FHCA7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FHCA7.png" alt="poor neural network performance"></a></p>
<p>The only thing random is the initialization of start weights, so I assume it's related to that. However, I must admit that I have not fully grasped the theory of neural networks yet. What I would like to know is, whether the poor results are due to a local minimum ('neuralnet' uses resilient backpropagation with weight backtracking by default) and I'm simply out of luck, or if I can avoid such a scenario. I am under the impression that there is an optimal number of hidden nodes for fitting e.g. polynomials of degree 2, 5, 10. If not, what's my best course of action? A larger learning rate? Smaller error threshold? Thanks in advance.</p>
<p>I have <em>not</em> tried tuning the rprop parameters yet, so the solution might lie there.</p>
<p>Code: </p>
<pre><code># DATA ----------------------
minute <- seq(0, 6*24 - 1)
temp <- rep.int(17, 6*24)
temp[(6*7):(6*20)] <- 20
n <- 10
dta <- data.frame(Zeit = minute, Status = temp)
dta <- dta[rep(seq_len(nrow(dta)), n), ]
# Scale everything
maxs <- apply(dta, 2, max)
mins <- apply(dta, 2, min)
nnInput <- data.frame(Zeit = dta$Zeit, Status = dta$Status)
nnInput <- as.data.frame(scale(nnInput, center = mins, scale = maxs - mins))
trainingData <- nnInput[seq(1, nrow(nnInput), 2), ]
testData <- nnInput[seq(2, nrow(nnInput), 2), ]
# MODEL ---------------------
model <- as.formula("Status ~ Zeit")
net <- neuralnet::neuralnet(model,
trainingData,
hidden = 2,
threshold = 0.01,
linear.output = TRUE,
lifesign = "full",
stepmax = 100000,
rep = 1)
net.results <- neuralnet::compute(net, testData$Zeit)
results <- net.results$net.result * (maxs["Status"] - mins["Status"]) + mins["Status"]
testData <- as.data.frame(t(t(testData) * (maxs - mins) + mins))
cleanOutput <- data.frame(Actual = testData$Status,
Prediction = results,
diff = abs(results - testData$Status))
summary(cleanOutput)
plot(cleanOutput$Actual[1:144], main = "Zeittabelle", xlab = paste("Min. seit 0:00 *", n), ylab = "Temperatur")
lines(cleanOutput$Prediction[1:144], col = "red", lwd = 3)
</code></pre> | 2016-07-26 14:54:33.567000+00:00 | 2017-07-10 08:47:43.293000+00:00 | 2017-07-10 08:47:43.293000+00:00 | r|machine-learning|neural-network|deep-learning | ['http://arxiv.org/abs/1511.06422', 'https://www.coursera.org/learn/machine-learning/lecture/ND5G5/random-initialization', 'https://i.stack.imgur.com/U0Ppm.jpg'] | 3 |
38,608,875 | <p>Basically - initialization is really important. If you don't initialize it randomly then you might make your network not work at all (e.g. by setting all the weights to <code>0</code>). It is also proven that for <a href="http://machinelearning.wustl.edu/mlpapers/paper_files/AISTATS2010_GlorotB10.pdf">sigmoid</a> and <a href="http://arxiv.org/pdf/1502.01852v1.pdf">relu</a> a certain kind of initialization might help in training your network.</p>
<p>But in your case - I think that the differences are mostly due to the complexity of your problem. A model with a complexity that fits the complexity of your problem performs nicely. The other models may suffer for the following reasons:</p>
<ol>
<li><strong>Too little complexity</strong> - with one node you may simply not be able to learn the proper function.</li>
<li><strong>Too much complexity</strong> - with a two-layer network you might get stuck in a local minimum. Increasing the number of parameters of your network also increases the size of the parameter space. Of course - on the one hand you might get a better model - on the other hand - you may land in a region of the parameter space which results in a poor solution. Trying the same model with different initializations and choosing the best one might overcome this issue.</li>
</ol>
<p><strong>UPDATE:</strong></p>
<ol>
<li><p>With small network sizes - it is quite usual to get stuck in a local minimum. Depending on how much time you have to train your network, you may use the following techniques to overcome that:</p>
<ul>
<li><strong>Dropout / Batch normalization / Batch learning randomization:</strong> when you are able to train your network for a little bit longer - you might use the randomization properties of dropout or batch normalization. Due to these random fluctuations you are able to move out of poor local minima (which are usually believed to be relatively shallow).</li>
<li><strong>Cross-validation / Multiple runs:</strong> when you start your training multiple times - the probability that you will finish in a poor minimum significantly decreases.</li>
</ul></li>
<li><p>About the connection between layer size and polynomial degree - I think that the question is not clearly stated. You must specify more details, e.g. the activation function. I also think that the nature of polynomials and of the functions which can be modelled by classic neural networks differs a lot. In polynomials - a small change in parameter values usually leads to a much bigger difference than in the neural network case. Usually the derivative of a neural network is a bounded function whereas the polynomial derivative is unbounded when the degree is bigger than 2. Due to these facts I think that looking for a dependency between polynomial degree and the size of a hidden layer might not be worth serious consideration.</p></li>
</ol> | 2016-07-27 09:23:22.930000+00:00 | 2016-07-29 11:14:11.753000+00:00 | 2016-07-29 11:14:11.753000+00:00 | null | 38,593,287 | <p>Using R and the package <code>neuralnet</code>, I try to model data that looks like this:</p>
<p><a href="https://i.stack.imgur.com/m4Seq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m4Seq.png" alt="training data"></a></p>
<p>These are temperature readings in 10 min intervals over several days (above is a 2 day cutout). Using the code below, I fit a neural network to the data. There are probably simpler ways to model this exact data, but in the future the data might look quite different. Using a single hidden layer with 2 neurons gives me satisfactory results:</p>
<p><a href="https://i.stack.imgur.com/D67DZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D67DZ.png" alt="good neural network performance"></a></p>
<p>This also works <em>most of the time</em> with more layers and neurons. However, with one hidden layer with one neuron and <em>occasionally</em> with two layers (in my case 3 and 2 neurons respectively), I get rather poor results, always in the same shape:</p>
<p><a href="https://i.stack.imgur.com/FHCA7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FHCA7.png" alt="poor neural network performance"></a></p>
<p>The only thing random is the initialization of start weights, so I assume it's related to that. However, I must admit that I have not fully grasped the theory of neural networks yet. What I would like to know is, whether the poor results are due to a local minimum ('neuralnet' uses resilient backpropagation with weight backtracking by default) and I'm simply out of luck, or if I can avoid such a scenario. I am under the impression that there is an optimal number of hidden nodes for fitting e.g. polynomials of degree 2, 5, 10. If not, what's my best course of action? A larger learning rate? Smaller error threshold? Thanks in advance.</p>
<p>I have <em>not</em> tried tuning the rprop parameters yet, so the solution might lie there.</p>
<p>Code: </p>
<pre><code># DATA ----------------------
minute <- seq(0, 6*24 - 1)
temp <- rep.int(17, 6*24)
temp[(6*7):(6*20)] <- 20
n <- 10
dta <- data.frame(Zeit = minute, Status = temp)
dta <- dta[rep(seq_len(nrow(dta)), n), ]
# Scale everything
maxs <- apply(dta, 2, max)
mins <- apply(dta, 2, min)
nnInput <- data.frame(Zeit = dta$Zeit, Status = dta$Status)
nnInput <- as.data.frame(scale(nnInput, center = mins, scale = maxs - mins))
trainingData <- nnInput[seq(1, nrow(nnInput), 2), ]
testData <- nnInput[seq(2, nrow(nnInput), 2), ]
# MODEL ---------------------
model <- as.formula("Status ~ Zeit")
net <- neuralnet::neuralnet(model,
trainingData,
hidden = 2,
threshold = 0.01,
linear.output = TRUE,
lifesign = "full",
stepmax = 100000,
rep = 1)
net.results <- neuralnet::compute(net, testData$Zeit)
results <- net.results$net.result * (maxs["Status"] - mins["Status"]) + mins["Status"]
testData <- as.data.frame(t(t(testData) * (maxs - mins) + mins))
cleanOutput <- data.frame(Actual = testData$Status,
Prediction = results,
diff = abs(results - testData$Status))
summary(cleanOutput)
plot(cleanOutput$Actual[1:144], main = "Zeittabelle", xlab = paste("Min. seit 0:00 *", n), ylab = "Temperatur")
lines(cleanOutput$Prediction[1:144], col = "red", lwd = 3)
</code></pre> | 2016-07-26 14:54:33.567000+00:00 | 2017-07-10 08:47:43.293000+00:00 | 2017-07-10 08:47:43.293000+00:00 | r|machine-learning|neural-network|deep-learning | ['http://machinelearning.wustl.edu/mlpapers/paper_files/AISTATS2010_GlorotB10.pdf', 'http://arxiv.org/pdf/1502.01852v1.pdf'] | 2 |
61,417,669 | <p>There are several things you can do here.</p>
<ul>
<li>For fixed parameters <code>v_esc</code> and <code>v_0</code>, <code>n_0</code> is a constant, so it doesn't need to be calculated in the <code>pdf</code> method.</li>
<li>If you define only a PDF for a SciPy <code>rv_continuous</code> subclass, then the class's <code>rvs</code>, <code>mean</code>, and so on will be very slow, presumably because the method needs to integrate the PDF every time it needs to generate a random variate or calculate a statistic. If speed is at a premium, you will thus need to add to <code>maxwell_boltzmann_pdf</code> an <code>_rvs</code> method that uses its own sampler. (See also <a href="https://stackoverflow.com/questions/62516272">this question</a>.) One possible method is the <em>rejection sampling</em> method: draw a point uniformly from a bounding box until the point falls under the PDF. It works for any bounded PDF with a finite domain, as long as you know what the domain and bound are (the bound is the maximum value of <code>f</code> in the domain). See <a href="https://stackoverflow.com/questions/55724501">this question</a> for example Python code.</li>
<li>If you know the distribution's CDF, then there are some additional tricks. One of them is the relatively new <a href="https://arxiv.org/abs/2004.02339" rel="nofollow noreferrer"><em>k-vector sampling</em></a> method for sampling a continuous distribution. There are two phases: a setup phase and a sampling phase. The setup phase involves approximating the CDF's inverse via root finding, and the sampling phase uses this approximation to generate random numbers that follow the distribution in a very fast way without having to further evaluate the CDF. For a fixed distribution like this one, if you show me the CDF, I can precalculate the necessary data and the code needed to sample the distribution using that data. Essentially, the only non-trivial part of <em>k-vector sampling</em> is the root-finding step.</li>
<li>More information on sampling from an arbitrary distribution is found on my <a href="https://peteroupc.github.io/randomfunc.html#Random_Numbers_from_an_Arbitrary_Distribution" rel="nofollow noreferrer">sampling methods page</a>.</li>
</ul> | 2020-04-24 21:23:45.540000+00:00 | 2021-06-20 09:38:55.963000+00:00 | 2021-06-20 09:38:55.963000+00:00 | null | 61,407,802 | <p>I would like to generate random numbers using a truncated Maxwell-Boltzmann distribution. I know that scipy has built-in Maxwell random variables, but there is no truncated version of it (I am also aware of a truncated normal distribution, which is irrelevant here). I have tried to write my own random variables using rvs_continuous:</p>
<pre><code>import scipy.stats as st
class maxwell_boltzmann_pdf(st.rv_continuous):
def _pdf(self,x):
n_0 = np.power(np.pi,3/2)*np.square(v_0)*(v_0*erf(v_esc/v_0)-(2/np.sqrt(np.pi))*v_esc*np.exp(-np.square(v_esc/v_0)))
return (1/n_0)*(4*np.pi*np.square(x))*np.exp(-np.square(x/v_0))*np.heaviside(v_esc-x,0)
maxwell_boltzmann_cv = maxwell_boltzmann_pdf(a=0, b=v_esc, name='maxwell_boltzmann_pdf')
</code></pre>
<p>This does exactly what I want, but it is way too slow for my purpose (I am doing Monte Carlo simulations), even if I draw all the random velocities outside of all the loops. I have also thought of using Inverse transform sampling method, but the inverse of the CDF does not have an analytic form and I will need to do a bisection for every number I draw. It would be great if there is a convenient way for me to generate random numbers from a truncated Maxwell-Boltzmann distribution with decent speed.</p> | 2020-04-24 11:57:21.630000+00:00 | 2021-06-20 09:38:55.963000+00:00 | 2020-04-24 15:22:13.297000+00:00 | python|random|scipy|montecarlo | ['https://stackoverflow.com/questions/62516272', 'https://stackoverflow.com/questions/55724501', 'https://arxiv.org/abs/2004.02339', 'https://peteroupc.github.io/randomfunc.html#Random_Numbers_from_an_Arbitrary_Distribution'] | 4 |
46,632,089 | <p>In addition to already said in the answers:</p>
<ul>
<li>You can have several <code>Dropout</code> layers with different probabilities, e.g. after the pooling layers. Early layers often have higher keep probability, since they use fewer filters.</li>
<li><a href="https://machinelearningmastery.com/image-augmentation-deep-learning-keras/" rel="nofollow noreferrer">Image data augmentation</a> is another way towards generalization and in my experience it always improves the result, at least slightly (of course, provided that input transformation is not severe).</li>
<li><a href="https://keras.io/layers/normalization/" rel="nofollow noreferrer">Batch normalization</a> (and its successors, <a href="https://arxiv.org/abs/1602.07868" rel="nofollow noreferrer">weight normalization</a> and <a href="https://arxiv.org/abs/1607.06450" rel="nofollow noreferrer">layer normalization</a>) is a modern regularization method that reduces the required dropout intensity, sometimes completely, i.e. you can get rid of dropout layers. In addition, batchnorm improves activations statistics, which often makes the network learn faster. I used it in addition to dropout and it worked pretty well.</li>
<li>A technique called Scaled Exponential Linear Units (SELU) has been <a href="https://arxiv.org/pdf/1706.02515.pdf" rel="nofollow noreferrer">published very recently</a>, which is said to have implicit self-normalizing properties. It's even <a href="https://keras.io/activations/" rel="nofollow noreferrer">already implemented</a> in keras.</li>
<li>The good old L2 or L1 regularizer is still in use. If nothing else helps, consider adding it too. But I'm pretty sure that batchnorm, SELU and a few dropout layers will be enough.</li>
</ul> | 2017-10-08 14:18:10.630000+00:00 | 2017-10-08 14:18:10.630000+00:00 | null | null | 46,624,153 | <p>I'm building a model to detect keypoints of body parts. To do that I'm using the COCO dataset (<a href="http://cocodataset.org/#download" rel="nofollow noreferrer">http://cocodataset.org/#download</a>). I'm trying to understand why I'm running into overfitting issues (training loss converges, but I reach a ceiling really early for testing loss). In the model, I've tried adding layers of dropout (gradually adding more layers with higher probabilities, but I quickly get to a point when training loss stops decreasing which is just as bad. <strong>My theory is that the model I use isn't complex enough but I'd like to know if that's the likely reason or if it could be something else. The models I've found online are all extremely deep (30+ layers).</strong> </p>
<p><strong>Data</strong></p>
<p>I'm using 10,000 RGB images each of which has a single person in it. They each have different sizes but a max of 640 length and width. As a preprocessing step, I make every image the size 640x640 by filling any extra area (bottom and right of image) with (0,0,0) or black. </p>
<p><strong>Targets</strong></p>
<p>The full dataset has many keypoints but I'm only interested in the right shoulder, right elbow, and right wrist. Each body part has 2 keypoints (X coordinate and Y coordinate) so my target is a list of length 6.</p>
<p><strong>Model</strong></p>
<pre><code>activation_function = 'relu'
batch_size = 16 # ##
epoch_count = 40 # ##
loss_function = 'mean_squared_error'
opt = 'adam'
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=(3, 3), input_shape=inp_shape))
# model.add(Conv2D(filters=16, kernel_size=(3, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=32, kernel_size=(3, 3)))
# model.add(Conv2D(filters=32, kernel_size=(3, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(300, activation=activation_function))
model.add(Dropout(rate=0.1))
model.add(Dense(300, activation=activation_function))
model.add(Dense(num_targets))
model.summary()
model.compile(loss=loss_function, optimizer=opt)
hist = model.fit(x_train, y_train, batch_size=batch_size, epochs=epoch_count,
verbose=verbose_level,
validation_data=(x_valid, y_valid))
</code></pre> | 2017-10-07 19:15:37.813000+00:00 | 2017-10-08 14:18:10.630000+00:00 | 2017-10-07 20:59:17.627000+00:00 | neural-network|keras|conv-neural-network|keypoint | ['https://machinelearningmastery.com/image-augmentation-deep-learning-keras/', 'https://keras.io/layers/normalization/', 'https://arxiv.org/abs/1602.07868', 'https://arxiv.org/abs/1607.06450', 'https://arxiv.org/pdf/1706.02515.pdf', 'https://keras.io/activations/'] | 6 |
68,372,398 | <p>This is pretty straightforward constant propagation, which enables the compiler to use more efficient instructions.</p>
<p>Since the value of <code>sizeof</code> is a constant known at compile time, in the version with the comparison the compiler knows the value of <code>typeSize</code> and therefore can use more efficient instructions. Here’s for example the computation of <code>dynamicArr_Size(header)</code>, which is to say, <code>(header->end - header->begin) / header->typeSize</code>:</p>
<pre class="lang-diff prettyprint-override"><code>--- without sizeof
+++ with sizeof
- mov rax, QWORD PTR [rbx+8]
- xor edx, edx
- sub rax, QWORD PTR [rbx]
- div r8
+ mov rdi, QWORD PTR [rbx+8]
+ sub rdi, QWORD PTR [rbx]
+ shr rdi, 2
</code></pre>
<p>In the version without <code>sizeof</code>, the compiler has to treat the element size as an unknown and use the actual divide instruction (which also requires zeroing the <code>rdx</code> register, as the instruction takes a 128-bit dividend in the <code>rdx:rax</code> register pair). When the size is known, the compiler can substitute a much faster bit shift instead and avoid touching <code>rdx</code>. This is probably the most impactful difference, as the divide instruction tends to be incredibly slow relative to other arithmetic; most compilers, whenever they have a chance (when the divisor is constant), will instead use bit shifts or <a href="https://arxiv.org/abs/2012.12369" rel="nofollow noreferrer">wrap-around multiplication tricks</a> just to avoid using the division instruction. (Matt Godbolt has <a href="https://www.youtube.com/watch?v=bSkpMdDe4g4&t=32m54s" rel="nofollow noreferrer">a whole section about division in his talk about compiler optimisations</a>.) More efficient use of registers also frees up registers to be used in other places and may prevent spilling values into memory (though in this case it doesn’t seem to make much of a difference).</p>
<p>Another example, here’s how <code>sizeof(dynamicArr) + header->typeSize * newCapacity</code> is computed:</p>
<pre class="lang-diff prettyprint-override"><code>--- without sizeof
+++ with sizeof
- mov rdx, rsi
- imul rdx, r8
- add rdx, 32
+ lea rdx, QWORD PTR [rsi*4+32]
</code></pre>
<p>In the version without comparison against <code>sizeof</code>, the compiler has to treat <code>header->typeSize</code> as an unknown and use a generic multiplication instruction. When the size is known, it can instead use a <code>lea</code> instruction with a special addressing mode that allows to compute value in a single instruction. Though generic multiplication is not as slow as division, a bit shift will still beat it. But even if <code>lea</code> were not necessarily faster in itself, the higher code density allows more code to fit into the instruction cache and avoid slowdowns due to cache misses.</p>
<p>Finally, you claimed that</p>
<blockquote>
<p>A lot of people seem to think that the reason is because <code>sizeof item</code> is simply evaluated at compile time. I don't think that this is the case because if I remove the condition and replace all instances of <code>header->typeSize</code> with <code>sizeof item</code> the performance is still worse than if the if condition was there.</p>
</blockquote>
<p>I assume you have only replaced direct uses of the field inside the macro and not the indirect use inside the <code>dynamicArr_Size</code> utility function. When you replace that use as well, <a href="https://godbolt.org/z/TT6endePG" rel="nofollow noreferrer">the generated code is nearly identical, modulo label names and the immediate value in the <code>if</code> condition check</a>. With comparison against <code>sizeof</code>, the compiler simply does that automatically while inlining that function.</p> | 2021-07-14 05:17:24.437000+00:00 | 2021-07-14 20:02:13.390000+00:00 | 2021-07-14 20:02:13.390000+00:00 | null | 68,324,268 | <p>I have a dynamic array and there is a hot loop that spends a lot of time adding a lot of elements to the dynamic array I have implemented.</p>
<p>The dynamic array works by storing a header at the start of the array of values:</p>
<pre class="lang-c prettyprint-override"><code>typedef struct
{
unsigned char* begin; // Pointer to first element (should just point to end of struct, the header is stored prior data)
unsigned char* end; // Pointer to last element
size_t typeSize; // Size of type that the array stores
size_t capacity; // capacity of array (when size ((end - begin) / typeSize) exceeds capacity, realloc)
} dynamicArr;
</code></pre>
<p>For some reason the loop runs faster when I compare the <code>sizeof item</code> to the <code>typeSize</code>, and its by a large margin (30% increase) as to when I don't make the comparison.</p>
<p>Note, the comparison is only there to safeguard against adding an item of a different type and causing misalignment in the array. This should not happen anyway if the dynamic array is used properly to store one type, and thus in practice it should always evaluate to true.</p>
<p>Here's the code for adding an element to the list:</p>
<pre class="lang-c prettyprint-override"><code>if (arr)
{
dynamicArr* header = dynamicArr_Header(arr);
if (header->typeSize && header->typeSize == sizeof item) //If I remove "header->typeSize == sizeof item" performance decreases.
{
size_t arrSize = dynamicArr_Size(header);
if (arrSize == header->capacity)
{
size_t newCapacity = (size_t)(header->capacity * 1.5f);
if (newCapacity == header->capacity) ++newCapacity;
void* tmp = realloc(header, sizeof(dynamicArr) + header->typeSize * newCapacity);
if (tmp)
{
dynamicArr_Construct(header, tmp, newCapacity, arrSize, header->typeSize);
*((void**)&(arr)) = header->begin;
arr[arrSize] = item;
header->end += header->typeSize;
}
else
{
free(header);
arr = NULL;
}
}
else
{
arr[arrSize] = item;
header->end += header->typeSize;
}
}
}
</code></pre>
<p>I don't understand why this was the case. I'm not very good at reading assembly either, from what I can see though they are very different, so if someone could help me out here that would be much appreciated!</p>
<p>(Compiled in MSVC with /O2 and /Tc)</p>
<p><a href="https://godbolt.org/#z:OYLghAFBqd5TKALEBjA9gEwKYFFMCWALugE4A0BIEAZugHZEDKqAhgDbYgCMALOXUYBVAM7YACgA8QAcgAMM8gCse5dq3qgA%2BjtTkxnVEQINq2epgDC6dgFcAtvRAAmcucwAZAvWwA5BwBG2KQuAMzkAA7oIsQm9NZ2ji6R0bEMXj7%2B9kEhzuEG2EZxTESspEQJDk6uBUUMJWVEGX6BwWH6peWVSTWdTd4t2W15AJT66LakqFwyAKR5EaSswPasANQM07NyAILbe3neqHY4a7OhliJEhOgAdEjnuPvzoUcn2GcXV4SM94/Ph3ox1sp3Ol2u7AIAT%2BoSeuxebxBHzBV1I3mAMLhe12RAAnhFsDgaGtUbYjM8AOwAIWecgAnLZ6DFgD5MGtUEgygAqNZBYDec5UtYAemFa3E6G8RGCaxIa0wrFKtIZTIILMJ7M5pB57kFIrFEqlMrl6iua2wnHs5iIypiAC9sFoiLL8dgmAQHXrRWt3Q6NsS8QTbR7Hc62BFWKhiLivWLLKwI1G8f61mUlri1hB6OhnfRWqQUxbsFbGCI1t5U6R0yNKQAReW4%2BisewEVA7KuC2kHUJEgYNpstttVrTWJlEUhkohU1hiCA4K7kElTRc%2BADu8cT0ZG8uwZvO9ZEUz186IAFpHuHI9HPvW1xur3jj7uz48%2BRX95nGczWZruSMT2czhCva2DoMSXIngCPbYDQfaYI2zatu2pAjgwpJGHOz6LoeehrHeCYPrii6oJMpC%2Btgi6Bm6IbbvBA5IcOo7oVOM7YJhC5Lrh%2BGbniNahEKJ7nrC7g3juVxCbgb70IBQpUeRaw8hAJFVuRNbwvCeS9j4/aIUOKEABLYKwOCkBAabblAdG6chXIjFAX5qj%2BHJ/mZVbbqeJIhmBc4IYOyEjAF%2BxXIqrblvQkLaSBTo6X5w7kT59F6Ty5mUjS6l0qQ2BEJM0muaQEkiR5aYSVJ25isVjxySGnbwhStZdi8WkfFZsUoeItgiEgeWLsQxbbrMACslj7Jg6BnNSg3DelBDEnl/VDcqszUmcC3pdsdItQx2prEgRkmaJm16VohnGcEc16pNi30jNma7ad%2BWVa68nzAAbK9O17cEElVX6%2B4fiBYHltK9jzVN2J0utS1Cpda30utUXOmmz2hPWh3IVo8V3SZfHQ6t4OQ9ds1IyGN4fljX0XgRSa4qDV0Q3Dy0w/jcMs3SCN4dg65U9eH4QAjdnkw9sKXtTClrNwtwDTQOMrWDBP0wrN1Zpz96i39KMffdEki1uMnzFS3GERdePy/LABukqYDyRD2BEomZRw7DoKgECC9hXk0Al1luTJmsmd9T0kzyhvUzLTOm6zSs2xEtOwwrkOMybrMRxtvlbahY4ThhbuyrbK4q9zeKLsTDqLoLAcEqpxty8ntd0lyUAWwQVu2a9c39Rr5evtg/L0NX%2BxrIPQ/DyPI9poN04qdVA31h%2BvX2P3ccp13wkWHrnefULuA/dgi/M/HDOz0nB8KxaYjjUKNCZWxgsy2mom%2BEIHgeBddWy3TCdHzXJ9n8ix%2Bm4nMGo9gFD3HgNSeZFp6zw1vPPeKd1or1wCJfWZNN4V2op6fi78l6H2wfvJaX9Fpv3Dm/VcSACCcAgHINS2IdiigBjQfYUo1gADFGQuylDyKU/lUoDyHmjYc7VOoQG4VWRc3BqE1T2HVGQYx2CyAGooJw8hFDoFkJYEkEwpjIjyNwRQRBZAKACuQQW1AxgAGsQADTkGoWQvBFD2BALwOktwAAcoQ5DcGcLwAaFIBqhG4HwXg/BlEKHIGomQigRAgBsQYlRYw4CwBQBgHA%2BBiBkEoNQQQzA2CcB4PwbJogJDSBUcoZI6hNAgB0FoPQtRjCmAgO4boTgvGuLcBYZoWQcg8BeikGI9T4g2CqDwbgA0%2BlpHoJ0/MPT9BFjqPQBoXQhlJFabMwwAzFn9EyNM7gvTgpLMSC00ZHRGhTKGCEXZYwRBaOmLIF4ixlirA2ECXe6luyIlBF8a4JhMRQQ%2BciL5PwiC/I0q8IE7xPjgkwJCaEDxYR/PBUiSFqJ0QgtoVRIkJJxyTl4elBy6o2TOW2lJWM4pJSMGNGNBUSo8WqgJb%2BbauosHekNBSgsJoZzOiLCWG06V2Y71JfJQGVFgwOmijrR8zK4yFwzIDNMrAMxZhzHhfMhZLTWjLBWeVNM6wxS2lIqCTU9VHSYtiow05ZwnmwsuDmXMeI0zEs6f6R4sGCUpva0SIdoxPnEt3XuokID4qclqWyAF9aeQdIDCCz5DUwTgmnE1aEzVEHYkQa1XEC72uIqRcilFA4Ologm9Gpqs4sUtVhTi%2Bc7WERlm61ebIPx1skj3d8QEXSVyDpmZSkCC0NU0nG7SAiDKbzmpmL2rVbL2TpcGly5k1geQYeOraAUaHBWMKgMKEUPjsyHRjEMS6kqVlIDQqGypMrZVILlCq9b51HpKi2%2BgZU72PQ7ZgtK0j6pvMagO5qRbBEdS6mmHqwNY47FGhfcOhNMxzsgxDQBdMh08kFgdP9w77rnSwbB9aStEE71AwA3G395bsxLgC1GqG90OldpvMO/9I5Eynr9FGqCtbuprXg%2BB8GcEK3Zl65MvN%2BbUdY8LGVYsJZS1o0RuuSs%2BMZnVvWRBEqHUoKArJuBdd1pNytrnO2H4HbsCdi7HOi6h39Tbbh/NHxg6Zprepk%2B2HZrR3wxphmhGP5113SWycQmTKUTzra1W0Zi6MYon7CmsI8N2fgfSBuEAtNclbs4F67dRKIJJZhvGICsugI7OA0jg1oH1lgRlqT9n6SIOQUBFj/sX0YN3iV9z8diF0Z/uwc%2BUM1hX2wDfGjep74fkfs/V%2BhDuMEI43XX%2B437NcZ2Nl0eYCIHkQK6JYrbnRvlbQY8SrNIN7Ce3pZqLydmulbG1h47uqyEULYpIt5opswRtAow3YzC2FAhEYwLhjAeG1XfcPXdQiuqiIoOLSR/E6yyNsTIRR5BQmqPUZoyY0xAKhGcPowxsiximIgHIuxDirFyFuLsukri/G8G8S9VxriBoDV6bD8JsgokxPIHEoxFiQChF4G44nzhAk04pBSFHXjIehCUej%2BnkTmfo4SYgRJyA0DoFtldigVAlIK4iErtAxxuB0i0C9ApFDpSkGiRAAIYuAjeDKLiWQejyAYHsDygA8uFK3pScCrE0Hk13BBMpFDNruMX2BJCFFsNKa3igjTyNKTCpYpBcTWBwGL8cBBHHxIEAwHJHAuB8DT8IMQUgxcqG4GoDQ2hdBqChNEyAYx0ARAGdE4UDuahzIGWYCwzTVDuDOd0ovUR%2BlxHbz31IAyu9tCL3U4ofQB9rMKBsvoI%2BLknIOcMsfc%2BBhdNH1cm5WecdQ9F6UiJkg9drHsCIM2G6zba8J3ITMAB1AAkr4XAFk0kkALC8IvaxrCK84G/3R25LBo7xJs7OBjKR72LkCOLWIw5i4RKM6xJS7kC
y6uzy7f7BCZKq6oEhCoBa46564CAG7BDG6m6lLm5Nix5h624K6O7O5i5u4l6e5hKEA%2B7GB%2B7RKlKB7B6h4yA24R5i7R6W7x4zBhJJ4p56JjDZIsCZ75I55EBFL56lIqCuAVKl41Ll4BCV7Y7hK15xD16N7T7zKt5WDLItLtKeBr47LjIDJT694TLz6qDj71CT7GH2HN4T6nLmHnL2FOGHJeHuHbKeHcCb6I5cDOA77Q504H5H4n5n5rAX4uISK34P5P6Zgv5kDI6uCf5q5K7pH/6AGs4mKbxmKQ7gGQE2IREM76BM4s4Y7kCWIgG3Ac50ihAU50i8AUhyD84Ui9KR4i7QH74VHVHS7wDIGEA0A0DoESG5JZ4FLp5yElJhLsBIDRJF6LEiCjE0BURM6kBLHJDbFrEzQbGugxJhF75hIRK1gHFrCH68DH6n7n6X4JEQD36P7bhmxljXG3ExFxFX6JEvF5E1GWKhAuJ%2BLeLa68BU7a6eLeLFGnFw4S7RLwFAGQ6o59FnEDFS5jB%2B5G5xBOJAA" rel="nofollow noreferrer">Link</a> to the assembly and the rest of the relevant code.</p>
<p><strong>Edit 1:</strong>
<s>A lot of people seem to think that the reason is because <code>sizeof item</code> is simply evaluated at compile time. I don't think that this is the case because if I remove the condition and replace all instances of <code>header->typeSize</code> with <code>sizeof item</code> the performance is still worse than if the <code>if</code> condition was there.</s> => I seem to have missed changing the use of <code>header->typeSize</code> in the macro <code>dynamicArr_Size</code> which caused this confusion, refer to the marked answer.</p>
<p>Here is the full code:</p>
<pre class="lang-c prettyprint-override"><code>typedef struct
{
unsigned char* begin; // Pointer to data
unsigned char* end; // Pointer to last element
size_t typeSize; // Size of type
size_t capacity; // Capacity of array (not number of elements in array)
} dynamicArr;
#define dynamicArr_ConstructBase(dest, src, newCapacity) dest = src; dest->capacity = newCapacity; dest->begin = (unsigned char*)dest + sizeof *dest
#define dynamicArr_Construct(dest, src, newCapacity, currSize, typeSize) dynamicArr_ConstructBase(dest, src, newCapacity); dest->end = dest->begin + typeSize * (currSize)
#define dynamicArr_Header(arr) ((dynamicArr*)((unsigned char*)(arr) - sizeof(dynamicArr)))
static inline size_t dynamicArr_Size(dynamicArr* arr)
{
return (arr->end - arr->begin) / arr->typeSize;
}
#define dynamicArr_Create(typename, arr) typename* arr = (typename*)dynamicArr_Create_(sizeof(typename))
static inline unsigned char* dynamicArr_Create_(size_t typeSize)
{
dynamicArr* dynArr;
void* tmp = malloc(sizeof * dynArr + typeSize * 10);
if (!tmp) return NULL;
dynArr = tmp;
dynArr->begin = (unsigned char*)dynArr + sizeof * dynArr;
dynArr->end = dynArr->begin;
dynArr->capacity = 10;
dynArr->typeSize = typeSize;
return dynArr->begin;
}
#define dynamicArr_Free(arr) free(dynamicArr_Header(arr))
#define dynamicArr_Push(arr, item) \
do {\
if (arr) \
{ \
dynamicArr* header = dynamicArr_Header(arr); \
if (header->typeSize && header->typeSize == sizeof item) \
{ \
size_t arrSize = dynamicArr_Size(header); \
if (arrSize == header->capacity) \
{ \
size_t newCapacity = (size_t)(header->capacity * 1.5f); \
if (newCapacity == header->capacity) ++newCapacity; \
void* tmp = realloc(header, sizeof(dynamicArr) + header->typeSize * newCapacity); \
if (tmp) \
{ \
dynamicArr_Construct(header, tmp, newCapacity, arrSize, header->typeSize); \
*((void**)&(arr)) = header->begin; \
arr[arrSize] = item; \
header->end += header->typeSize; \
} \
else \
{ \
free(header); \
arr = NULL; \
} \
} \
else \
{ \
arr[arrSize] = item; \
header->end += header->typeSize; \
} \
} \
} \
} while(0)
</code></pre>
<p>And example use:</p>
<pre class="lang-c prettyprint-override"><code>void Func()
{
dynamicArr_Create(int, intArr);
dynamicArr_Push(intArr, 10);
printf("%i\n", intArr[0]);
dynamicArr_Free(intArr);
}
</code></pre>
<p>As for a simple test for profiling:</p>
<pre class="lang-c prettyprint-override"><code>int main()
{
dynamicArr_Create(int, intArr);
clock_t begin = clock();
for (int i = 0; i < 1000000000; ++i)
{
dynamicArr_Push(intArr, 10);
}
clock_t end = clock();
double time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
printf("%f\n", time_spent);
dynamicArr_Free(intArr);
}
</code></pre>
<p>Compiling in release mode in Visual Studio 2019 on windows using /Tc I get the results:</p>
<ul>
<li>With <code>header->typeSize == sizeof item</code> => 3.65 seconds</li>
<li>Without <code>header->typeSize == sizeof item</code> => 9.374 seconds</li>
<li>Replacing <code>header->typeSize</code> with <code>sizeof item</code> and removing <code>header->typeSize == sizeof item</code> => 9.302 seconds</li>
</ul>
<p>I repeated the test 10 times and it was consistent to the results above.</p> | 2021-07-10 02:38:17.840000+00:00 | 2021-07-14 20:02:13.390000+00:00 | 2021-07-14 19:36:41.633000+00:00 | c|optimization|x86-64|compiler-optimization | ['https://arxiv.org/abs/2012.12369', 'https://www.youtube.com/watch?v=bSkpMdDe4g4&t=32m54s', 'https://godbolt.org/z/TT6endePG'] | 3 |
21,355,344 | <p>Nor will you ever understand how to calculate such a derivative, because technically, it isn't possible. You can only ever interpolate a true discrete derivative over cells back to cells, but it isn't defined there.</p>
<p>Now that does not sound very helpful, I know, and I cannot really explain all that very well in the space given here. But if you want to do yourself a favor, read up on discrete exterior calculus. It may sound like a bit of scary terminology at first, but it will make your life a lot easier, quite fast.</p>
<p>Try google, or start following references here: <a href="http://arxiv.org/abs/1103.3076" rel="nofollow">http://arxiv.org/abs/1103.3076</a></p> | 2014-01-25 19:41:47.890000+00:00 | 2014-01-25 19:41:47.890000+00:00 | null | null | 21,355,096 | <p>I am new to physics of games. I have a problem where i have a 2D or 3D mesh. The computational cells are triangles or tetrahedrons respectively. Certain physical quantities like density and energy are given at cell centers as cell centered averages. I need to compute the gradient of these quantities at the center of all the cells in the mesh.</p>
<p>I understand that in 1D, the derivative of a quantity in a cell (i) can be calculated by dividing the difference of the values of that quantity in the neighboring cells (i+1, i-1) by the distance between them (central difference formula). What I don't understand is how to solve this problem on an arbitrary 2D or 3D mesh.</p>
<p>Can I get a reference to some literature where I can find such numerical methods/algorithms?</p>
<p>Thanks in advance.</p> | 2014-01-25 19:20:23.123000+00:00 | 2014-01-29 11:27:21.833000+00:00 | 2014-01-25 19:32:46.570000+00:00 | c|simulation|computational-geometry|game-physics|mesh | ['http://arxiv.org/abs/1103.3076'] | 1 |
70,607,632 | <p>There is another solution at the model level - using models that support sample weights, such as Gradient Boosted Trees. Of those, CatBoost is usually the best, as its training method leads to less leakage (as described in their <a href="https://arxiv.org/pdf/1706.09516.pdf" rel="nofollow noreferrer">article</a>).</p>
<p>Example code:</p>
<pre><code>from catboost import CatBoostClassifier

y = df['Label']
X = df.drop('Label',axis=1)
label_ratio = (y==1).sum() / (y==0).sum()
model = CatBoostClassifier(scale_pos_weight = label_ratio)
model.fit(X, y)
</code></pre>
<p>And so forth - this works because CatBoost assigns each sample a weight, so you can determine class weights in advance (scale_pos_weight).
This is better than downsampling, and is technically equivalent to oversampling (but requires less memory).</p>
<p>Also, a major part of treating imbalanced data is making sure your metrics are weighted as well, or at least well-defined, as you might want equal performance (or skewed performance) on these metrics.</p>
<p>And if you want a more visual output than sklearn's classification_report, you can use one of the Deepchecks built-in checks (disclosure - I'm one of the maintainers):</p>
<pre><code>from deepchecks.checks import PerformanceReport
from deepchecks import Dataset
PerformanceReport().run(Dataset(train_df, label='Label'), Dataset(test_df, label='Label'), model)
</code></pre> | 2022-01-06 13:11:55.877000+00:00 | 2022-01-06 13:11:55.877000+00:00 | null | null | 68,292,028 | <p>I have a dataset with 1400 obs and 19 columns. The Target variable has values 1 (value that I am most interested in) and 0. The distribution of classes shows imbalance (70:30).</p>
<p>Using the code below I am getting weird values (all 1s). I cannot figure out whether this is due to overfitting/imbalanced data or to feature selection (I used Pearson correlation since all values are numeric/boolean).
I am thinking that the steps followed are wrong.</p>
<pre><code>import numpy as np
import math
import sklearn.metrics as metrics
from sklearn.metrics import f1_score, classification_report
from sklearn.tree import DecisionTreeClassifier
y = df['Label']
X = df.drop('Label',axis=1)
def create_cv(X,y):
if type(X)!=np.ndarray:
X=X.values
y=y.values
test_size=1/5
proportion_of_true=y[y==1].shape[0]/y.shape[0]
num_test_samples=math.ceil(y.shape[0]*test_size)
num_test_true_labels=math.floor(num_test_samples*proportion_of_true)
num_test_false_labels=math.floor(num_test_samples-num_test_true_labels)
y_test=np.concatenate([y[y==0][:num_test_false_labels],y[y==1][:num_test_true_labels]])
y_train=np.concatenate([y[y==0][num_test_false_labels:],y[y==1][num_test_true_labels:]])
X_test=np.concatenate([X[y==0][:num_test_false_labels] ,X[y==1][:num_test_true_labels]],axis=0)
X_train=np.concatenate([X[y==0][num_test_false_labels:],X[y==1][num_test_true_labels:]],axis=0)
return X_train,X_test,y_train,y_test
X_train,X_test,y_train,y_test=create_cv(X,y)
X_train,X_crossv,y_train,y_crossv=create_cv(X_train,y_train)
tree = DecisionTreeClassifier(max_depth = 5)
tree.fit(X_train, y_train)
y_predict_test = tree.predict(X_test)
print(classification_report(y_test, y_predict_test))
f1_score(y_test, y_predict_test)
</code></pre>
<p>Output:</p>
<pre><code> precision recall f1-score support
0 1.00 1.00 1.00 24
1 1.00 1.00 1.00 70
accuracy 1.00 94
macro avg 1.00 1.00 1.00 94
weighted avg 1.00 1.00 1.00 94
</code></pre>
<p>Has anyone experienced similar issues in building a classifier when the data is imbalanced, using CV and/or under-sampling? I am happy to share the whole dataset, in case you might want to replicate the output.
What I would like is a clear answer to follow that shows me the steps and what I am doing wrong.</p>
<p>I know that, to reduce overfitting and work with balanced data, there are some methods such as random sampling (over/under), SMOTE, CV. My idea is</p>
<ul>
<li>Split the data on train/test taking into account imbalance</li>
<li>Perform CV on trains set</li>
<li>Apply undersampling only on a test fold</li>
<li>After the model has been chosen with the help of CV, undersample the train set and train the classifier</li>
<li>Estimate the performance on the untouched test set
(f1-score)</li>
</ul>
<p>as also outlined in this question: <a href="https://stackoverflow.com/questions/67537605/cv-and-under-sampling-on-a-test-fold">CV and under sampling on a test fold</a> .</p>
<p>I think the steps above should make sense, but happy to receive any feedback that you might have on this.</p> | 2021-07-07 19:28:10.083000+00:00 | 2022-01-06 13:11:55.877000+00:00 | 2021-10-01 19:17:32.547000+00:00 | python|machine-learning|scikit-learn|cross-validation|resampling | ['https://arxiv.org/pdf/1706.09516.pdf'] | 1 |
69,228,145 | <p>I have adapted a solution from another piece of code, and it works for this. Here is the code:</p>
<pre><code>import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.measure import label

def unet_weight_map(y, wc=None, w0 = 10, sigma = 20):
"""
Generate weight maps as specified in the U-Net paper
for boolean mask.
"U-Net: Convolutional Networks for Biomedical Image Segmentation"
https://arxiv.org/pdf/1505.04597.pdf
Parameters
----------
mask: Numpy array
2D array of shape (image_height, image_width) representing binary mask
of objects.
wc: dict
Dictionary of weight classes.
w0: int
Border weight parameter.
sigma: int
Border width parameter.
Returns
-------
Numpy array
Training weights. A 2D array of shape (image_height, image_width).
"""
y = y.reshape(y.shape[0], y.shape[1])
labels = label(y)
no_labels = labels == 0
label_ids = sorted(np.unique(labels))
if len(label_ids) > 0:
distances = np.zeros((y.shape[0], y.shape[1], len(label_ids)))
for i, label_id in enumerate(label_ids):
distances[:,:,i] = distance_transform_edt(labels != label_id)
distances = np.sort(distances, axis=2)
d1 = distances[:,:,0]
d2 = distances[:,:,1]
w = w0 * np.exp(-1/2*((d1 + d2) / sigma)**2) * no_labels
else:
w = np.zeros_like(y)
if wc:
class_weights = np.zeros_like(y)
for k, v in wc.items():
class_weights[y == k] = v
w = w + class_weights
return w
wc = {
0: 0, # background
1: 1 # objects
}
w = unet_weight_map(img, wc)
</code></pre>
<p>Input:
<a href="https://i.stack.imgur.com/oSmwr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oSmwr.png" alt="enter image description here" /></a></p>
<p>Output:
<a href="https://i.stack.imgur.com/PRqmY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PRqmY.png" alt="enter image description here" /></a></p>
<p>If someone has a better solution, please!</p> | 2021-09-17 18:38:12.037000+00:00 | 2021-09-17 18:38:12.037000+00:00 | null | null | 69,209,591 | <p>I'm training a U-Net for extracting the area of buildings from satellite images. The results are not bad but I want to sharp the contours of the figures inside the image.
<a href="https://i.stack.imgur.com/akcXj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/akcXj.png" alt="enter image description here" /></a><a href="https://i.stack.imgur.com/dwdoj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dwdoj.png" alt="enter image description here" /></a></p>
<p>In order to improve it, I'm trying to use a weight map of the contours or borders of the figure inside the image.
<a href="https://i.stack.imgur.com/x0sZW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x0sZW.png" alt="enter image description here" /></a>
Therefore, I'm trying to construct a map of weights with high values - e.g. 10 - on the borders and the values decaying from both sides. But I didn't know how to do it yet.</p> | 2021-09-16 13:45:18.630000+00:00 | 2022-07-15 09:24:12.677000+00:00 | 2022-07-15 09:24:12.677000+00:00 | python|image|scikit-image|weighted-graph | ['https://i.stack.imgur.com/oSmwr.png', 'https://i.stack.imgur.com/PRqmY.png'] | 2 |
64,261,606 | <p>Yes, you are right with the concept of <strong>CONVLSTM2D</strong>.<br />
The <strong>CONVLSTM2D</strong> architecture combines the gating of an LSTM with 2D convolutions.</p>
<p>As you have mentioned, CONVLSTM layers do a similar task to an LSTM, but instead of matrix multiplications they use convolution operations, which retain the spatial input dimensions.</p>
<p>A different approach would be to pass the images through a convolution layer, flatten the result into a 1D array, and feed that to the LSTM layers as a set of features over time.</p>
<p><strong>Input of Keras's CONVLSTM layer:</strong> a 5D tensor with shape</p>
<p><code>(samples, time, channels, rows, cols)</code> if it is channels first.<br />
<code>(samples, time, rows, cols, channels)</code> if it is channels last.</p>
<p><strong>Output of a CONVLSTM layer:</strong></p>
<p>If <code>return_sequences = True</code> then it is a 5D tensor with shape</p>
<pre><code>(samples, time, filters, rows, cols)
</code></pre>
<p>If <code>return_sequences = False</code> then it is a 4D tensor with shape:</p>
<pre><code>(samples, filters, rows, cols)
</code></pre>
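<p>A quick Keras sketch (the sizes are arbitrary ones chosen for illustration) shows both cases; note that with channels-last data the filter axis ends up last:</p>

<pre><code>import tensorflow as tf

# 10 time steps of 64x64 single-channel frames, channels last
inputs = tf.keras.Input(shape=(10, 64, 64, 1))

seq = tf.keras.layers.ConvLSTM2D(
    filters=16, kernel_size=(3, 3), padding="same",
    return_sequences=True)(inputs)    # shape (None, 10, 64, 64, 16)

last = tf.keras.layers.ConvLSTM2D(
    filters=16, kernel_size=(3, 3), padding="same",
    return_sequences=False)(inputs)   # shape (None, 64, 64, 16)

print(seq.shape, last.shape)
</code></pre>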
<p>You can refer to <a href="https://arxiv.org/pdf/1506.04214v1.pdf" rel="nofollow noreferrer">this</a> paper from where the implementation of CONVLSTM is done.</p> | 2020-10-08 11:30:51.713000+00:00 | 2020-10-08 11:30:51.713000+00:00 | null | null | 63,107,438 | <p>I would like to understand the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/ConvLSTM2D" rel="nofollow noreferrer">ConvLSTM2D</a> Keras layer a bit better.</p>
<p>Does it execute a 2D convolution on a 2D input (image) and then average/flatten its output and feed that into an LSTM module?
But I guess it is basically an LSTM cell, where the matrix multiplications are replaced with convolution operations. Is that correct?</p>
<p>Thank you</p> | 2020-07-27 01:16:25.143000+00:00 | 2020-10-08 11:30:51.713000+00:00 | 2020-07-27 01:35:47.580000+00:00 | tensorflow|keras|conv-neural-network|lstm | ['https://arxiv.org/pdf/1506.04214v1.pdf'] | 1 |
47,050,832 | <p>I have already solved my problem. After reading some papers and more trial and error, I figured out what my mistakes were.</p>
<p><strong>1) Dataset: I had a large dataset, but I didn't format it properly.</strong></p>
<ul>
<li>I checked the distribution of tweet labels (Neutral, Positive and Negative), realized there was a disparity in the distribution of said tweets and normalized it.</li>
<li>I cleaned it up even more by erasing url hashtags and unnecessary punctuation.</li>
<li>I shuffled prior to vectorization.</li>
</ul>
<p><strong>2) Initialization:</strong></p>
<ul>
<li>I initialized the MultiRNNCell with zeros and I changed my custom final layer to tf.contrib.fully_connected. I also added the initialization of the bias and weight matrix. (By fixing this, I started to see better loss and accuracy plots in Tensorboard)</li>
</ul>
<p><strong>3) Dropout:</strong></p>
<ul>
<li>I read this paper, <a href="https://arxiv.org/pdf/1603.05118.pdf" rel="nofollow noreferrer">Recurrent Dropout without Memory Loss</a>, and I changed my dropouts accordingly (a sketch of the change follows this list); I started seeing improvements in the loss and accuracy.</li>
</ul>
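<p>For reference, the change looks roughly like this in the TF1 contrib API (a sketch only - the keep probabilities are illustrative, not my exact values):</p>

<pre><code>import tensorflow as tf

def lstm_cell(hidden_size, keep_prob):
    cell = tf.contrib.rnn.BasicLSTMCell(hidden_size)
    return tf.contrib.rnn.DropoutWrapper(
        cell,
        output_keep_prob=keep_prob,
        state_keep_prob=keep_prob,      # also drop the recurrent state
        variational_recurrent=True,     # same dropout mask at every time step
        dtype=tf.float32)
</code></pre>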
<p><strong>4) Decaying the learning rate:</strong></p>
<ul>
<li>I added an exponentially decaying learning rate after 10,000 steps to control over-fitting (see the sketch after this list).</li>
</ul>
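<p>In TF1 terms the schedule looks roughly like this (the decay rate below is illustrative; only the 10,000-step interval matches what I described):</p>

<pre><code>import tensorflow as tf

start_learning_rate = 0.001
global_step = tf.Variable(0, trainable=False)

learning_rate = tf.train.exponential_decay(
    start_learning_rate, global_step,
    decay_steps=10000,
    decay_rate=0.96,       # illustrative value
    staircase=True)

# In the training graph, `loss` is the softmax cross-entropy defined earlier;
# passing global_step lets the optimizer increment it so the schedule advances:
# optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss, global_step=global_step)
</code></pre>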
<p><strong><em>Final results:</em></strong></p>
<p>After applying all of these changes, I achieved a test accuracy of 84%, which is acceptable because my data set still sucks.</p>
<p>My final network config was:</p>
<ul>
<li>num_epochs = 20</li>
<li>tweet_size = 20</li>
<li>hidden_size = 400</li>
<li>vec_size = 300</li>
<li>batch_size = 512</li>
<li>number_of_layers= 2</li>
<li>number_of_classes= 3</li>
<li>start_learning_rate = 0.001</li>
</ul> | 2017-11-01 08:27:01.543000+00:00 | 2017-11-02 22:44:22.017000+00:00 | 2017-11-02 22:44:22.017000+00:00 | null | 46,576,332 | <h1>UPDATED:</h1>
<p>i'm building a Neural Network for my final project and i need some help with it.</p>
<p>I'm trying to build a rnn to do sentiment analysis over Spanish text. I have about 200,000 labeled tweets and i vectorized them using a word2vec with a Spanish embedding</p>
<p><strong>Dataset & Vectorization:</strong></p>
<ul>
<li>I erased duplicates and split the dataset into training and testing sets.</li>
<li>Padding, unknown and end of sentence tokens are applied when vectorizing.</li>
<li>I mapped the @mentions to known names in the word2vec model. Example: @iamthebest => "John"</li>
</ul>
<p><strong>My model:</strong></p>
<ul>
<li>My data tensor has shape = (batch_size, 20, 300).</li>
<li>I have 3 classes: neutral, positive and negative, so my target tensor has shape = (batch_size, 3)</li>
<li>I use BasicLstm cells and dynamic rnn to build the net.</li>
<li>I use Adam Optimizer, and softmax_cross entropy for the loss calculation</li>
<li>I use a dropout wrapper to decrease the overfitting.</li>
</ul>
<p><strong>Last run:</strong></p>
<ul>
<li>I have tried different configurations and none of them seems to work.</li>
<li>Last setup: 2 Layers, 512 batch size, 15 epochs and 0.001 of lr.</li>
</ul>
<p><a href="https://i.stack.imgur.com/Y786W.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y786W.jpg" alt="Accuracy" /></a></p>
<p><a href="https://i.stack.imgur.com/5OGrS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5OGrS.jpg" alt="Loss" /></a></p>
<p><strong>Weak points for me:</strong></p>
<p>I'm worried about the final layer and the handling of the final state in the dynamic_rnn.</p>
<p><strong>Code:</strong></p>
<pre><code># set variables
num_epochs = 15
tweet_size = 20
hidden_size = 200
vec_size = 300
batch_size = 512
number_of_layers= 1
number_of_classes= 3
learning_rate = 0.001
TRAIN_DIR="/checkpoints"
tf.reset_default_graph()
# Create a session
session = tf.Session()
# Inputs placeholders
tweets = tf.placeholder(tf.float32, [None, tweet_size, vec_size], "tweets")
labels = tf.placeholder(tf.float32, [None, number_of_classes], "labels")
# Placeholder for dropout
keep_prob = tf.placeholder(tf.float32)
# make the lstm cells, and wrap them in MultiRNNCell for multiple layers
def lstm_cell():
cell = tf.contrib.rnn.BasicLSTMCell(hidden_size)
return tf.contrib.rnn.DropoutWrapper(cell=cell, output_keep_prob=keep_prob)
multi_lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(number_of_layers)], state_is_tuple=True)
# Creates a recurrent neural network
outputs, final_state = tf.nn.dynamic_rnn(multi_lstm_cells, tweets, dtype=tf.float32)
with tf.name_scope("final_layer"):
# weight and bias to shape the final layer
W = tf.get_variable("weight_matrix", [hidden_size, number_of_classes], tf.float32, tf.random_normal_initializer(stddev=1.0 / math.sqrt(hidden_size)))
b = tf.get_variable("bias", [number_of_classes], initializer=tf.constant_initializer(1.0))
sentiments = tf.matmul(final_state[-1][-1], W) + b
prob = tf.nn.softmax(sentiments)
tf.summary.histogram('softmax', prob)
with tf.name_scope("loss"):
# define cross entropy loss function
losses = tf.nn.softmax_cross_entropy_with_logits(logits=sentiments, labels=labels)
loss = tf.reduce_mean(losses)
tf.summary.scalar("loss", loss)
with tf.name_scope("accuracy"):
# round our actual probabilities to compute error
accuracy = tf.to_float(tf.equal(tf.argmax(prob,1), tf.argmax(labels,1)))
accuracy = tf.reduce_mean(tf.cast(accuracy, dtype=tf.float32))
tf.summary.scalar("accuracy", accuracy)
# define our optimizer to minimize the loss
with tf.name_scope("train"):
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
#tensorboard summaries
merged_summary = tf.summary.merge_all()
logdir = "tensorboard/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") + "/"
writer = tf.summary.FileWriter(logdir, session.graph)
# initialize any variables
tf.global_variables_initializer().run(session=session)
# Create a saver for writing training checkpoints.
saver = tf.train.Saver()
# load our data and separate it into tweets and labels
train_tweets = np.load('data_es/train_vec_tweets.npy')
train_labels = np.load('data_es/train_vec_labels.npy')
test_tweets = np.load('data_es/test_vec_tweets.npy')
test_labels = np.load('data_es/test_vec_labels.npy')
**HERE I HAVE THE LOOP FOR TRAINING AND TESTING, I KNOW ITS FINE**
</code></pre> | 2017-10-05 01:00:50.050000+00:00 | 2018-04-28 15:27:20.947000+00:00 | 2020-06-20 09:12:55.060000+00:00 | tensorflow|lstm|sentiment-analysis|recurrent-neural-network|rnn | ['https://arxiv.org/pdf/1603.05118.pdf'] | 1 |
59,068,717 | <p>While lower-order convolution kernels are usually in smaller size comparing to the input image,
the extracted features focus more on local perception. However, higher-order convolutions enable
an expansion of the overall receptive field, which gradually converts local features into global features,
much as a human gazes at an object and then recognizes it.
Here you go my friend
<a href="https://iopscience.iop.org/article/10.1088/1742-6596/1087/6/062032/pdf" rel="nofollow noreferrer">https://iopscience.iop.org/article/10.1088/1742-6596/1087/6/062032/pdf</a></p>
<p>and this one
<a href="https://arxiv.org/pdf/1904.04447.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1904.04447.pdf</a></p>
<p>If we only use the neighbor patterns
extracted by CNN, many useful global feature interactions will be
lost. This is also why CNN models do not perform well for CTR
prediction task. To overcome this limitation, we perform CNN and
MLP, which complement each other, to learn global-local feature
interactions for feature generation</p> | 2019-11-27 11:03:44.403000+00:00 | 2019-11-27 11:03:44.403000+00:00 | null | null | 56,991,978 | <p>As it is known that local features describe the local structure of the image contents while the global features describe the image contents as a whole. Convolution neural network which is under deep learning field makes to extract important feature by itself, I want to understand what type of the extracted feature by CNN is they are local or global feature or both of them? and why? is there anyone can help me in some analysis or references answering my question. Thanks.</p> | 2019-07-11 14:49:47.700000+00:00 | 2019-11-27 11:03:44.403000+00:00 | null | image-processing|conv-neural-network|feature-extraction | ['https://iopscience.iop.org/article/10.1088/1742-6596/1087/6/062032/pdf', 'https://arxiv.org/pdf/1904.04447.pdf'] | 2 |
59,697,684 | <p>A) One thing you could try is get data for clear text images, train a GAN or some similar network by adding artificial noise to your images and passing that as input to train it for denoising, pass the image through that network and then pass it to a text detector/ocr engine (such as pytesseract or google vision ocr)</p>
<p>B) Train an image detector on your possible character set (something like YOLO or FasterRCNN) with added noise, you can do this by again, artificially adding noise to data, but might need some manual annotation.</p>
<p>C) You can try something like <a href="https://arxiv.org/abs/1803.09597" rel="nofollow noreferrer">this</a>, by checking the image for all alphabets/known characters and then combine the results. I personally would prefer this one.</p>
<p>PS. I haven't completely read the paper linked in C, but the images you linked seem to be closer to be solved by the one shot segmentation method rather than training a GAN.</p>
<p>PPS.Based on your comment under your question, make sure creating a captcha solving bot does not violate any legal conditions for using the site (I feel obligated to say this for some reason.)</p> | 2020-01-11 19:04:10.163000+00:00 | 2020-01-11 19:11:14.650000+00:00 | 2020-01-11 19:11:14.650000+00:00 | null | 59,462,922 | <p>So I've been reading up on machine learning using TensorFlow and Keras, I've been trying to setup a dataset using some custom images and trying to learn the script to recognize the text while filtering out the noise, but the issue is that the noise color is the same and the text color which results in filtering out everything.</p>
<p>I'm not asking to get spoonfed, I just simply want pointers to the best way to solve/train the script to solve the text on the images.</p>
<p>What I'm looking for is to get the script to read on screen and calculate the word hidden in the image and print the result in the command line.</p>
<p><strong><em>There is no sample code since everything before was a failure and not actually what I was looking for.</em></strong></p>
<p><a href="https://imgur.com/a/3FZXHsg" rel="nofollow noreferrer">Album link for Imgur</a></p> | 2019-12-24 01:33:46.640000+00:00 | 2020-01-11 19:11:14.650000+00:00 | null | python|machine-learning | ['https://arxiv.org/abs/1803.09597'] | 1 |
53,966,030 | <p>The problem you're describing is a <strong>Nearest Neighbor Search (NNS)</strong>. There are two main methods of solving NNS problems: <strong>exact</strong> and <strong>approximate</strong>.</p>
<p>If you need an exact solution, I would recommend a <strong>metric tree</strong>, such as the <strong>M-tree</strong>, the <strong>MVP-tree</strong>, and the <strong>BK-tree</strong>. These trees take advantage of the triangle inequality to speed up search.</p>
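<p>To make the exact approach concrete, here is a minimal, illustrative BK-tree sketch in pure Python (not a tuned implementation; the Levenshtein function is the plain DP version rather than Hirschberg's algorithm):</p>
<pre><code>def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

class BKTree:
    def __init__(self, distance=levenshtein):
        self.distance = distance
        self.root = None                       # node = [word, {dist: child}]

    def add(self, word):
        if self.root is None:
            self.root = [word, {}]
            return
        node = self.root
        while True:
            d = self.distance(word, node[0])
            if d in node[1]:
                node = node[1][d]
            else:
                node[1][d] = [word, {}]
                return

    def search(self, word, max_dist):
        results, stack = [], [self.root] if self.root else []
        while stack:
            node = stack.pop()
            d = self.distance(word, node[0])
            if d <= max_dist:
                results.append((d, node[0]))
            # the triangle inequality prunes children outside [d - max_dist, d + max_dist]
            for child_d, child in node[1].items():
                if d - max_dist <= child_d <= d + max_dist:
                    stack.append(child)
        return results
</code></pre>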
<p>If you're willing to accept an approximate solution, there are much faster algorithms. The current state of the art for approximate methods is <a href="https://arxiv.org/abs/1603.09320" rel="nofollow noreferrer">Hierarchical Navigable Small World (hnsw)</a>. The <a href="https://github.com/nmslib/nmslib" rel="nofollow noreferrer">Non-Metric Space Library (nmslib)</a> provides an efficient implementation of hnsw as well as several other approximate NNS methods.</p>
<p>(You can compute the Levenshtein distance with <a href="https://en.wikipedia.org/wiki/Hirschberg%27s_algorithm" rel="nofollow noreferrer">Hirschberg's algorithm</a>)</p> | 2018-12-29 01:42:52.283000+00:00 | 2018-12-29 01:42:52.283000+00:00 | null | null | 53,950,048 | <p>Let's say I have a dictionary (word list) of millions upon millions of words. Given a query word, I want to find the word from that huge list that is most similar.</p>
<p>So let's say my query is <code>elepant</code>, then the result would most likely be <code>elephant</code>.</p>
<p>If my word is <code>fentist</code>, the result will probably be <code>dentist</code>.</p>
<p>Of course assuming both <code>elephant</code> and <code>dentist</code> are present in my initial word list.</p>
<p>What kind of index, data structure or algorithm can I use for this so that the query is fast? Hopefully complexity of <code>O(log N)</code>.</p>
<p><strong>What I have:</strong> The most naive thing to do is to create a "distance function" (which computes the "distance" between two words, in terms of how different they are) and then in O(n) compare the query with every word in the list, and return the one with the closest distance. But I wouldn't use this because it's slow.</p> | 2018-12-27 19:40:13.760000+00:00 | 2018-12-29 01:42:52.283000+00:00 | null | string|algorithm|search|data-structures | ['https://arxiv.org/abs/1603.09320', 'https://github.com/nmslib/nmslib', 'https://en.wikipedia.org/wiki/Hirschberg%27s_algorithm'] | 3 |
58,377,503 | <p>I don't know details of <a href="https://arxiv.org/pdf/1511.06434.pdf" rel="nofollow noreferrer">DCGAN thesis</a>, but if I look into it I can find the below guidelines to make stable DCGAN. Why did you use <code>LeakyReLU</code> in Generator instead of <code>ReLU</code>?</p>
<blockquote>
<p><strong>Architecture guidelines for stable Deep Convolutional GANs</strong></p>
<ul>
<li>Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).</li>
<li>Use batchnorm in both the generator and the discriminator.</li>
<li>Remove fully connected hidden layers for deeper architectures.</li>
<li><strong>Use ReLU activation in generator</strong> for all layers except for the output, which uses Tanh.</li>
<li>Use LeakyReLU activation in the discriminator for all layers</li>
</ul>
</blockquote> | 2019-10-14 13:05:28.317000+00:00 | 2019-10-14 13:05:28.317000+00:00 | null | null | 58,376,226 | <p>I have been trying to implement the <a href="https://arxiv.org/abs/1511.06434" rel="nofollow noreferrer">DCGan</a>, the face book's paper, and blocked by below two issues almost for 2 weeks. Any suggestions would be appreciated. Thanks.</p>
<p><strong>Issue 1:</strong></p>
<p>DCGAN paper suggest to use BN(Batch Normalization) both the generator and discriminator. But, I couldn't get better result with BN rather than w/out BN. </p>
<p>I copied the DCGAN model I used, which is exactly the same as in the DCGAN paper. I don't think it is due to overfitting, because (1) it keeps showing the same noise as the initial noise picture and seems never to get trained, and
(2) the loss value is very stable: neither the GAN nor the discriminator really changes. (It stays at about 0.6 ~ 0.7 and never falls or spikes the way it does when both models collapse.) If I check only the loss function, it seems to be training well.</p>
<p><strong>Issue 2:</strong> </p>
<p>When I use float16, the model below always gives me NaN.
I have changed epsilon to both 1e-4 and 1e-3, but it still failed.
And here is one more question.
If I don't use BatchNormalization, it can become NaN; that makes enough sense, I can understand it. But if I use BatchNormalization, it normalizes in every layer. Even if the result becomes a very big or a very small number, it will be batch normalized in every layer, so the result should stay roughly centered and the fade-out shouldn't happen, should it? That is my reasoning, but I don't know where my thinking goes wrong.
please, somebody, help me.</p>
<p>===== Generator =====</p>
<p>Input # (None, 128) <= latent</p>
<p>Dense # (None, 16384)<br>
BatchNormalization<br>
LeakyReLU </p>
<p>Reshape # (None, 4, 4, 1024) </p>
<p>Conv2DTranspose # (None, 4, 4, 512)</p>
<p>BatchNormalization<br>
LeakyReLU</p>
<p>Conv2DTranspose # (None, 8, 8, 256)</p>
<p>BatchNormalization<br>
LeakyReLU</p>
<p>Conv2DTranspose # (None, 16, 16, 128)</p>
<p>BatchNormalization<br>
LeakyReLU </p>
<p>Conv2DTranspose # (None, 32, 32, 64)</p>
<p>BatchNormalization<br>
LeakyReLU </p>
<p>Conv2DTranspose # (None, 64, 64, 32)</p>
<p>BatchNormalization<br>
LeakyReLU</p>
<p>Conv2DTranspose # (None, 128, 128, 16)</p>
<p>BatchNormalization<br>
LeakyReLU</p>
<p>Conv2D # (None, 128, 128, 3) </p>
<p>===== Discriminator =====</p>
<p>Conv2D # (None, 128, 128, 3)
LeakyReLU </p>
<p>Conv2D # (None, 64, 64, 16)
BatchNormalization<br>
Dropout<br>
LeakyReLU</p>
<p>Conv2D # (None, 32, 32, 32)<br>
BatchNormalization<br>
Dropout<br>
LeakyReLU</p>
<p>Conv2D # (None, 16, 16, 64)<br>
BatchNormalization<br>
Dropout<br>
LeakyReLU</p>
<p>Conv2D # (None, 8, 8, 128)<br>
BatchNormalization<br>
Dropout<br>
LeakyReLU</p>
<p>Conv2D # (None, 4, 4, 256)<br>
BatchNormalization<br>
Dropout<br>
LeakyReLU</p>
<p>Conv2D # (None, 2, 2, 512)<br>
BatchNormalization<br>
Dropout<br>
LeakyReLU</p>
<p>Flatten<br>
Dropout<br>
Dense</p>
<p>and the last hyperparameters I have tried are as below and I didn't forget to add the gaussian noise to training pictures.</p>
<pre><code>image_shape => (128, 128, 3)
latent_dim => 128
channels => 3
iterations => 10000
batch_size => 128
epsilon => 0.005
weight_init_stddev => 0.02
beta_1 => 0.5
discriminator_lr => 0.0002
gan_lr => 0.0002
</code></pre> | 2019-10-14 11:53:44.577000+00:00 | 2020-12-08 05:11:48.093000+00:00 | 2019-10-14 13:13:52.243000+00:00 | deep-learning|conv-neural-network|batch-normalization|dcgan | ['https://arxiv.org/pdf/1511.06434.pdf'] | 1 |
47,484,994 | <p>The transform and its inverse can be computed in linear time and consuming linear storage. Check out the following <a href="https://arxiv.org/pdf/1201.3077.pdf" rel="nofollow noreferrer">paper</a>.</p> | 2017-11-25 10:37:36.450000+00:00 | 2017-11-25 10:37:36.450000+00:00 | null | null | 47,483,752 | <p>I have some difficulty finding the complexity of the <strong>Bijective String Sorting Transform</strong>. Does anybody know the complexity of the transform?</p> | 2017-11-25 07:49:35.230000+00:00 | 2017-11-25 10:37:36.450000+00:00 | null | algorithm | ['https://arxiv.org/pdf/1201.3077.pdf'] | 1 |
47,369,040 | <p>It seems a bit crazy but it seems to work : instead of creating a custom loss function that I would pass in model.compile, the network computes the loss (Eq. 1 from <a href="http://arxiv.org/pdf/1708.04729.pdf" rel="nofollow noreferrer">arxiv.org/pdf/1708.04729.pdf</a>) in a function that I call with Lambda :</p>
<pre><code>loss = Lambda(lambda x: similarity(x[0], x[1], x[2]))([X_hat, X, embedding_matrix])
</code></pre>
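<p>The <code>similarity</code> function itself is not shown above; a hypothetical version of such a loss term (one plausible form, written with Keras backend ops and using a softmax over cosine similarities; not necessarily Eq. 1 of the paper) might look like:</p>
<pre><code>from keras import backend as K

def similarity(x_hat, x, embedding_matrix):
    x_hat = K.l2_normalize(x_hat, axis=-1)                 # (batch, steps, dim)
    x = K.l2_normalize(x, axis=-1)
    emb = K.l2_normalize(embedding_matrix, axis=-1)        # (vocab, dim)
    num = K.exp(K.sum(x_hat * x, axis=-1))                 # cos(x_hat, x) per step
    den = K.sum(K.exp(K.dot(x_hat, K.transpose(emb))), axis=-1)  # sum over the vocabulary
    return -K.mean(K.log(num / den))                       # scalar used as the model's second output
</code></pre>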
<p>And the network has two outputs: <code>X_hat</code> and <code>loss</code>, but I weight <code>X_hat</code> to have 0 weight and loss to have all the weight :</p>
<pre><code>model = Model(input_sequence, [X_hat, loss])
model.compile(loss=mean_squared_error,
optimizer=optimizer,
loss_weights=[0., 1.])
</code></pre>
<p>When I train the model:</p>
<pre><code>for i in range(epochs):
for j in range(num_data):
input_embedding = model.layers[1].get_weights()[0][[data[j:j+1]]]
y = [input_embedding, 0] #The embedding of the input
model.fit(data[j:j+1], y, batch_size=1, ...)
</code></pre>
<p>That way, the model is trained to tend <code>loss</code> toward 0, and when I want to use the trained model's prediction I use the first output which is the reconstruction <code>X_hat</code></p> | 2017-11-18 17:35:50.227000+00:00 | 2017-11-18 18:24:10.730000+00:00 | 2017-11-18 18:24:10.730000+00:00 | null | 47,336,721 | <p>I am implementing a custom loss function in keras. The model is an <code>autoencoder</code>. The first layer is an Embedding layer, which embed an input of size <code>(batch_size, sentence_length)</code> into <code>(batch_size, sentence_length, embedding_dimension)</code>. Then the model compresses the embedding into a vector of a certain dimension, and finaly must reconstruct the embedding <code>(batch_size, sentence_lenght, embedding_dimension)</code>. </p>
<p>But the embedding layer is trainable, and the loss must use the weights of the embedding layer (I have to sum over all word embeddings of my vocabulary).</p>
<p>For exemple, if I want to train on the toy exemple : "the cat". The <code>sentence_length is 2</code> and suppose <code>embedding_dimension is 10</code> and the <code>vocabulary size is 50</code>, so the embedding matrix has shape <code>(50,10)</code>. The Embedding layer's output <code>X</code> is of shape <code>(1,2,10)</code>. Then it passes in the model and the output <code>X_hat</code>, is also of shape <code>(1,2,10)</code>. The model must be trained to maximize the probability that the vector <code>X_hat[0]</code> representing 'the' is the most similar to the vector <code>X[0]</code> representing 'the' in the Embedding layer, and same thing for 'cat'. But the loss is such that I have to compute the cosine similarity between <code>X</code> and <code>X_hat</code>, normalized by the sum of cosine similarity of <code>X_hat</code> and every embedding (50, since the vocabulary size is 50) in the embedding matrix, which are the columns of the weights of the embedding layer. </p>
<p>But how can I access the weights of the embedding layer at each iteration of the training process?</p>
<p>Thank you !</p> | 2017-11-16 18:26:23.263000+00:00 | 2017-11-18 18:24:10.730000+00:00 | 2017-11-18 18:21:18.513000+00:00 | keras|embedding|tensor|loss | ['http://arxiv.org/pdf/1708.04729.pdf'] | 1 |
69,495,749 | <p>you can also have a look at section 3.3.2 of the following paper:</p>
<ul>
<li><a href="https://arxiv.org/abs/2008.06692" rel="nofollow noreferrer">https://arxiv.org/abs/2008.06692</a></li>
</ul>
<p>The method presented there allows you to compute <code>m</code> different stable models of a logic program by finding <code>m</code> <code>1</code>-diverse stable models. But note that <code>m</code> has to be given as input, hence the method is not directly applicable to the problem of computing one stable model that contains all stable models of a logic program.</p> | 2021-10-08 12:20:38.437000+00:00 | 2021-10-08 12:20:38.437000+00:00 | null | null | 69,297,349 | <p>my question is, if I can gather all answer sets into one answer. I attach the code below for my program. The results that it returns and a description of what I would like to have.</p>
<pre><code>% Main domain predicates definitions
argument(1..3).
element(1).
#show scope/2.
{scope(A, U) : element(U)}:- argument(A).
</code></pre>
<p>what I get is shown in the picture below</p>
<p><a href="https://i.stack.imgur.com/9Ir4p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Ir4p.png" alt="enter image description here" /></a></p>
<p>But what I would like to get is, some predicate that has a unique id for each answer set. For example:</p>
<p>newScope(1,empty)-newScope(2,2,1)-newScope(3,3,1)-....-newScope(8,1,1)|newScope(8,2,1)|newScope(8,3,1)</p>
<p>thanks in advance to whomever has the patience to answer me.</p> | 2021-09-23 09:23:46.883000+00:00 | 2021-10-08 12:20:38.437000+00:00 | null | answer-set-programming|clingo | ['https://arxiv.org/abs/2008.06692'] | 1 |
42,939,008 | <p>Assume a grey-scale 2D image which can mathematically be described as a matrix. Generalizing the concept of a matrix results in theory about <a href="https://en.wikipedia.org/wiki/Tensor" rel="nofollow noreferrer">tensors</a> (informally you can think of a multidimensional array). I.e. a RGB 2D image is represented as a tensor of size <em>[width, height, 3]</em>. Further a RGB 3D Image is represented as a tensor of size <em>[width, height, depth, 3]</em>. Moreover and like in the case of matrices you can also perform tensor-tensor multiplications.</p>
<p>For instance, consider the typical neural network with 2D images as input. Such a network does essentially nothing other than matrix-matrix multiplications (apart from the elementwise non-linear operations at the nodes). In the same way, a neural network operates on tensors by performing tensor-tensor multiplications.</p>
<p>Now back to your question of feature extraction: indeed, the problem with tensors is their high dimensionality. Hence, much current research concerns the efficient decomposition of tensors while retaining the initial (most meaningful) information. In order to extract features from tensors, a tensor decomposition approach might be a good start for reducing the rank of the tensor. A few papers on tensors in machine learning are listed below (a small illustrative sketch follows them):</p>
<p><a href="http://www.cs.columbia.edu/~djhsu/papers/power-jmlr.pdf" rel="nofollow noreferrer">Tensor Decompositions for Learning Latent Variable Models</a></p>
<p><a href="https://arxiv.org/pdf/1605.05775.pdf" rel="nofollow noreferrer">Supervised Learning With Quantum-Inspired Tensor Networks</a></p>
<p><a href="http://ieeexplore.ieee.org/document/7207289/?reload=true" rel="nofollow noreferrer">Optimal Feature Extraction and Classification of Tensors via Matrix Product State Decomposition</a></p>
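<p>As a very small illustration of the rank-reduction idea (a toy sketch with assumed sizes, not taken from any of the papers above): unfold the 3D volume along one mode and truncate with an SVD.</p>
<pre><code>import numpy as np

volume = np.random.rand(64, 64, 32)      # toy 3D scan: [width, height, depth]
unfolded = volume.reshape(64, -1)        # mode-1 unfolding: 64 x (64*32) matrix
U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
k = 10                                   # keep the 10 strongest components
features = U[:, :k] * s[:k]              # low-rank features along the first mode
</code></pre>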
<p>Hope this helps, even though the math behind is not easy.</p> | 2017-03-21 22:05:26.933000+00:00 | 2017-03-21 22:05:26.933000+00:00 | null | null | 36,636,912 | <p>Assume a workflow for 2D image feature extraction by using SIFT, SURF, or MSER methods followed by bag-of-words/features encoded and subsequently used to train classifiers.</p>
<p>I was wondering if there is an analogous approach for 3D datasets, for example, a 3D volume of MRI data. When dealing with 2D images, each image represents an entity with features to be detected and indexed. However, in a 3D dataset is it possible to extract features from the three-dimensional entity? Does this have to be done slice-by-slice, by decomposing the 3D images to multiple 2D images (slices)? Or is there a way of reducing the 3D dimensionality to 2D while retaining the 3D information?</p>
<p>Any pointers would be greatly appreciated.</p> | 2016-04-15 01:45:20.510000+00:00 | 2017-03-21 22:05:26.933000+00:00 | 2016-04-15 09:28:20.507000+00:00 | artificial-intelligence|classification|feature-detection|feature-extraction|surf | ['https://en.wikipedia.org/wiki/Tensor', 'http://www.cs.columbia.edu/~djhsu/papers/power-jmlr.pdf', 'https://arxiv.org/pdf/1605.05775.pdf', 'http://ieeexplore.ieee.org/document/7207289/?reload=true'] | 4 |
13,896,984 | <p>I'm pretty sure that this is some kind of variant of the <a href="http://en.wikipedia.org/wiki/Exact_cover_problem" rel="nofollow">exact cover problem</a>, which is known to be NP-complete. Your proposed algorithm is a simple greedy solution. The problem with greedy solutions is that they often work well enough to convince you that greed is good and then suddenly leave you high and dry looking for a better solution. (Consider the global economy, for example.) Anyway, Knuth's <a href="http://en.wikipedia.org/wiki/Dancing_Links" rel="nofollow">Dancing</a> <a href="http://arxiv.org/pdf/cs/0011047v1.pdf" rel="nofollow">Links</a> technique is a standard way of solving the problem (exact set cover, not global economy).</p> | 2012-12-15 23:02:08.853000+00:00 | 2012-12-15 23:02:08.853000+00:00 | null | null | 13,890,987 | <p>Here's the problem: I'm given a matrix like</p>
<p>Input:</p>
<pre><code>1 1 1
1 1 1
1 1 1
</code></pre>
<p>At each step, I need to find a "second" matrix of 1's and 0's with no two 1's on the same row or column. Then, I'll subtract the second matrix from the original matrix. I will repeat the process until I get a matrix with all 0's. Furthermore, I need to take the least possible number of steps.</p>
<p>I need to print all the "second" matrices in O(n) time. In the above example I can get to the null matrix in 3 steps by subtracting these three matrices in order:</p>
<p>Expected output:</p>
<pre><code>1 0 0
0 1 0
0 0 1
0 0 1
1 0 0
0 1 0
0 1 0
0 0 1
1 0 0
</code></pre>
<p>I have coded an attempt, in which I am finding the first maximum value and creating the second matrices based on the index of that value. But for the above input I am getting 4 output matrices, which is wrong:</p>
<p>My output:</p>
<pre><code>1 0 0
0 1 0
0 0 1
0 1 0
1 0 0
0 0 0
0 0 1
0 0 0
1 0 0
0 0 0
0 0 1
0 1 0
</code></pre>
<p>My solution works for most of the test cases but fails for the one given above. Can someone give me some pointers on how to proceed, or find an algorithm that guarantees optimality?</p>
<p>Test case that works:</p>
<p>Input:</p>
<pre><code>0 2 1
0 0 0
3 0 0
</code></pre>
<p>Output</p>
<pre><code>0 1 0
0 0 0
1 0 0
0 1 0
0 0 0
1 0 0
0 0 1
0 0 0
1 0 0
</code></pre> | 2012-12-15 09:22:13.753000+00:00 | 2012-12-15 23:02:08.853000+00:00 | 2012-12-15 10:34:34.607000+00:00 | java|algorithm|optimization|graph|matrix | ['http://en.wikipedia.org/wiki/Exact_cover_problem', 'http://en.wikipedia.org/wiki/Dancing_Links', 'http://arxiv.org/pdf/cs/0011047v1.pdf'] | 3 |
71,594,253 | <p>I recommend BartScore. Check the <a href="https://github.com/neulab/BARTScore" rel="nofollow noreferrer">Github</a> page and the <a href="https://arxiv.org/abs/2106.11520#:%7E:text=BARTScore%3A%20Evaluating%20Generated%20Text%20as%20Text%20Generation,-Weizhe%20Yuan%2C%20Graham&text=The%20general%20idea%20is%20that,the%20generated%20text%20is%20better." rel="nofollow noreferrer">article</a>. The authors issued also a meta-evaluation on the <a href="http://explainaboard.nlpedia.ai/leaderboard/task-meval/" rel="nofollow noreferrer">ExplainaBoard platform</a>, "which allows to interactively understand the strengths, weaknesses, and complementarity of each metric". You can find the list of most of the state-of-the-art metrics there.</p> | 2022-03-23 21:28:18.990000+00:00 | 2022-03-23 21:28:18.990000+00:00 | null | null | 9,879,276 | <p>I have written a system that summarizes a long document containing thousands of words. Are there any norms on how such a system should be evaluated in the context of a user survey?</p>
<p>In short, is there a metric for evaluating the time that my tool has saved a human? Currently, I was thinking of using the (Time taken to read the original document/Time taken to read the summary) as a way of determining the time saved, but are there better metrics?</p>
<p>Currently, I am asking the user subjective questions about the accuracy of the summary.</p> | 2012-03-26 20:26:04.360000+00:00 | 2022-04-04 11:42:45.930000+00:00 | null | language-agnostic|nlp|information-retrieval|evaluation | ['https://github.com/neulab/BARTScore', 'https://arxiv.org/abs/2106.11520#:%7E:text=BARTScore%3A%20Evaluating%20Generated%20Text%20as%20Text%20Generation,-Weizhe%20Yuan%2C%20Graham&text=The%20general%20idea%20is%20that,the%20generated%20text%20is%20better.', 'http://explainaboard.nlpedia.ai/leaderboard/task-meval/'] | 3 |
62,983,533 | <p>There is also the very recent <strong>BERTScore</strong> metric (arXiv'19, ICLR'20, already almost 90 citations) that does not suffer from the well-known issues of ROUGE and BLEU.</p>
<p>Abstract from the paper:</p>
<blockquote>
<p>We propose BERTScore, an automatic evaluation metric for text
generation. Analogously to common metrics, BERTScore computes a
similarity score for each token in the candidate sentence with each
token in the reference sentence. However, instead of exact matches, we
compute token similarity using contextual embeddings. We evaluate
using the outputs of 363 machine translation and image captioning
systems. BERTScore correlates better with human judgments and provides
stronger model selection performance than existing metrics. Finally,
we use an adversarial paraphrase detection task to show that BERTScore
is more robust to challenging examples when compared to existing
metrics.</p>
</blockquote>
<ul>
<li><p>Paper: <a href="https://arxiv.org/pdf/1904.09675.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1904.09675.pdf</a></p>
</li>
<li><p>Code: <a href="https://github.com/Tiiiger/bert_score" rel="nofollow noreferrer">https://github.com/Tiiiger/bert_score</a></p>
</li>
<li><p>Full reference:</p>
<p>Zhang, Tianyi, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. "Bertscore: Evaluating text generation with bert." arXiv preprint arXiv:1904.09675 (2019).</p>
</li>
</ul> | 2020-07-19 17:28:58.877000+00:00 | 2020-09-05 10:07:50.790000+00:00 | 2020-09-05 10:07:50.790000+00:00 | null | 9,879,276 | <p>I have written a system that summarizes a long document containing thousands of words. Are there any norms on how such a system should be evaluated in the context of a user survey?</p>
<p>In short, is there a metric for evaluating the time that my tool has saved a human? Currently, I was thinking of using the (Time taken to read the original document/Time taken to read the summary) as a way of determining the time saved, but are there better metrics?</p>
<p>Currently, I am asking the user subjective questions about the accuracy of the summary.</p> | 2012-03-26 20:26:04.360000+00:00 | 2022-04-04 11:42:45.930000+00:00 | null | language-agnostic|nlp|information-retrieval|evaluation | ['https://arxiv.org/pdf/1904.09675.pdf', 'https://github.com/Tiiiger/bert_score'] | 2 |
69,958,712 | <p>In reference to the <a href="https://github.com/google/svcca/blob/1f3fbf19bd31bd9b76e728ef75842aa1d9a4cd2b/tutorials/001_Introduction.ipynb" rel="noreferrer">notebook</a> you provided which is a supporting artefact to and implements ideas from the following two papers<br></p>
<ol>
<li><a href="https://arxiv.org/abs/1706.05806" rel="noreferrer">"SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability"</a>. Neural Information Processing Systems (NeurIPS) 2017</li>
<li><a href="https://arxiv.org/abs/1806.05759" rel="noreferrer">"Insights on Representational Similarity in Deep Neural Networks with Canonical Correlation"</a>. Neural Information Processing Systems (NeurIPS) 2018<br></li>
</ol>
<p>The authors there calculate 50 = min(A_fake neurons, B_fake neurons) components and plot the correlations between the transformed vectors of each component (i.e. 50).</p>
<p>With the help of the below code, using <a href="https://scikit-learn.org/stable/modules/generated/sklearn.cross_decomposition.CCA.html" rel="noreferrer"><code>sklearn CCA</code></a>, I am trying to reproduce their <em>Toy Example</em>. As we'll see the correlation plots match. The sanity check they used in the notebook came very handy - it passed seamlessly with this code as well.</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
from sklearn.cross_decomposition import CCA
# rows contain the number of samples for CCA and the number of rvs goes in columns
X = np.random.randn(2000, 100)
Y = np.random.randn(2000, 50)
# num of components
n_comps = min(X.shape[1], Y.shape[1])
cca = CCA(n_components=n_comps)
cca.fit(X, Y)
X_c, Y_c = cca.transform(X, Y)
# calculate and plot the correlations of all components
corrs = [np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1] for i in range(n_comps)]
plt.plot(corrs)
plt.xlabel('cca_idx')
plt.ylabel('cca_corr')
plt.show()
</code></pre>
<p><strong>Output:</strong></p>
<p><a href="https://i.stack.imgur.com/3D01O.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/3D01O.jpg" alt="enter image description here" /></a></p>
<p>For the sanity check, replace the Y data matrix by a scaled invertible transform of X and rerun the code.<br></p>
<pre><code>Y = np.dot(X, np.random.randn(100, 100))
</code></pre>
<p><strong>Output:</strong></p>
<p><a href="https://i.stack.imgur.com/3jvbn.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/3jvbn.jpg" alt="enter image description here" /></a></p> | 2021-11-13 22:16:29.873000+00:00 | 2021-11-16 10:07:40.100000+00:00 | 2021-11-16 10:07:40.100000+00:00 | null | 69,800,500 | <p>I need to measure similarity between feature vectors using CCA module. I saw sklearn has a good CCA module available: <a href="https://scikit-learn.org/stable/modules/generated/sklearn.cross_decomposition.CCA.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.cross_decomposition.CCA.html</a></p>
<p>In different papers I reviewed, I saw that the way to measure similarity using CCA is to calculate the mean of the correlation coefficients, for example as done in this following notebook example: <a href="https://github.com/google/svcca/blob/1f3fbf19bd31bd9b76e728ef75842aa1d9a4cd2b/tutorials/001_Introduction.ipynb" rel="nofollow noreferrer">https://github.com/google/svcca/blob/1f3fbf19bd31bd9b76e728ef75842aa1d9a4cd2b/tutorials/001_Introduction.ipynb</a></p>
<p>How to calculate the correlation coefficients (as shown in the notebook) using sklearn CCA module?</p>
<pre><code>from sklearn.cross_decomposition import CCA
import numpy as np
U = np.random.random_sample(500).reshape(100,5)
V = np.random.random_sample(500).reshape(100,5)
cca = CCA(n_components=1)
cca.fit(U, V)
cca.coef_.shape # (5,5)
U_c, V_c = cca.transform(U, V)
U_c.shape # (100,1)
V_c.shape # (100,1)
</code></pre>
<p>This is an example of the sklearn CCA module, however I have no idea how to retrieve correlation coefficients from it.</p> | 2021-11-01 17:26:34.577000+00:00 | 2021-11-16 18:53:53.900000+00:00 | 2021-11-16 18:53:53.900000+00:00 | python|scikit-learn | ['https://github.com/google/svcca/blob/1f3fbf19bd31bd9b76e728ef75842aa1d9a4cd2b/tutorials/001_Introduction.ipynb', 'https://arxiv.org/abs/1706.05806', 'https://arxiv.org/abs/1806.05759', 'https://scikit-learn.org/stable/modules/generated/sklearn.cross_decomposition.CCA.html', 'https://i.stack.imgur.com/3D01O.jpg', 'https://i.stack.imgur.com/3jvbn.jpg'] | 6 |
69,386,710 | <p>This may be surprising, but Transformers don't always beat LSTMs. For example, <a href="https://arxiv.org/abs/1904.09408" rel="nofollow noreferrer">Language Models with Transformers</a> states:</p>
<blockquote>
<p>Transformer architectures are suboptimal for language model itself. Neither self-attention nor the positional encoding in the Transformer is able to efficiently incorporate the word-level sequential context crucial to language modeling.</p>
</blockquote>
<p>If you run the Transformer tutorial code itself (on which your code is based), you'll also see LSTM do better there. See <a href="https://stats.stackexchange.com/questions/522116/">this thread on stats.SE</a> for more discussion on this topic (disclaimer: both the question and the answer there are mine)</p> | 2021-09-30 05:39:02.953000+00:00 | 2021-09-30 05:44:48.750000+00:00 | 2021-09-30 05:44:48.750000+00:00 | null | 69,380,237 | <p>I am dealing with a sequence tagging problem and I am using a single Transformer Encoder to obtain logits from each element of the sequence. Having experimented both with Transformer and BiLSTM it looks like in my case BiLSTM is working better, so I was wondering if maybe it is because my Transformer implementation has some problem... Below is my implementation of the Transformer Encoder and related functions for creating padding mask and positional embeddings:</p>
<pre><code>def create_mask(src, lengths):
"""Create a mask hiding future tokens
Parameters:
src (tensor): the source tensor having shape [batch_size, number_of_steps, features_dimensions]
length (list): a list of integers representing the length (i.e. number_of_steps) of each sample in the batch."""
mask = []
max_len = src.shape[1]
for index, i in enumerate(src):
# The mask consists in tensors having false at the step number that doesn't need to be hidden and true otherwise
mask.append([False if (i+1)>lengths[index] else True for i in range(max_len)])
return torch.tensor(mask)
class PositionalEncoding(nn.Module):
def __init__(self, d_model, dropout=0.1, max_len=5000, device = 'cpu'):
super().__init__()
self.dropout = nn.Dropout(p=dropout)
self.device = device
position = torch.arange(max_len).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
pe = torch.zeros(1, max_len, d_model)
pe[0, :, 0::2] = torch.sin(position * div_term)
pe[0, :, 1::2] = torch.cos(position * div_term)
self.register_buffer('pe', pe)
def forward(self, x):
x = x + self.pe[:, :x.size(1), :].to(self.device)
return self.dropout(x)
class Transformer(nn.Module):
"""Class implementing transformer ecnoder, partially based on
https://pytorch.org/tutorials/beginner/transformer_tutorial.html"""
def __init__(self, in_dim, h_dim, n_heads, n_layers, dropout=0.2, drop_out = 0.0, batch_first = True, device = 'cpu', positional_encoding = True):
super(Transformer, self).__init__()
self.model_type = 'Transformer'
self.pos_encoder = PositionalEncoding(in_dim, dropout, device = device)
encoder_layers = nn.TransformerEncoderLayer(in_dim, n_heads, h_dim, dropout)
self.transformer_encoder = nn.TransformerEncoder(encoder_layers, n_layers, norm=nn.LayerNorm(in_dim))
self.in_dim = in_dim
self.drop_out = drop_out
self.positional_encoding = positional_encoding
def forward(self, src, mask = None, line_len=None):
src = src * math.sqrt(self.in_dim)
if self.positional_encoding:
src = self.pos_encoder(src)
if line_len is not None and mask is None:
mask = create_mask(src, line_len)
else:
mask = None
output = self.transformer_encoder(src, src_key_padding_mask = mask)
if self.drop_out:
output = F.dropout(output, p = self.drop_out)
return src, output
</code></pre>
<p>As it can be seen, the above network outputs the hidden states and then I pass them into an additional linear layer and train with a CrossEntropy loss over two classes and Adam optimizer. I have tried multiple combinations of hyperparameters but the BiLSTM still performs better. Can anyone spot anything off in my Transformer or suggest why I experience such a counterintuitive result?</p> | 2021-09-29 16:28:44.087000+00:00 | 2021-09-30 05:45:52.317000+00:00 | 2021-09-30 05:45:52.317000+00:00 | deep-learning|pytorch|lstm|transformer-model|language-model | ['https://arxiv.org/abs/1904.09408', 'https://stats.stackexchange.com/questions/522116/'] | 2 |
69,158,739 | <p>It depends on the type of position encoding the Transformer uses. Models with learned static position embeddings (such as BERT) cannot go beyond the number of learned positions, simply because they cannot embed the next input for the decoder to produce an output.</p>
<p>The original Transformer for machine translation, uses analytically defined position encoding (so-called sinusoidal encoding) which in theory should generalize for arbitrarily long inputs and outputs. However, in practice, it generalizes badly for sequences that are much longer than those in the training data.</p>
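<p>For reference, here is a minimal NumPy sketch of the sinusoidal scheme (it can be evaluated at any position index, which is why there is no hard length limit in theory):</p>
<pre><code>import numpy as np

def sinusoidal_encoding(num_positions, d_model):
    pos = np.arange(num_positions)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    enc = np.zeros((num_positions, d_model))
    enc[:, 0::2] = np.sin(angles[:, 0::2])
    enc[:, 1::2] = np.cos(angles[:, 1::2])
    return enc
</code></pre>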
<p>If you want to read more about position encoding in Transformers, you can checkout <a href="https://arxiv.org/abs/2102.11090" rel="nofollow noreferrer">this survey</a>.</p> | 2021-09-13 07:19:54.793000+00:00 | 2021-09-13 07:19:54.793000+00:00 | null | null | 69,118,249 | <p>There's just one thing that I can't find an answer to :
When putting the output back into the transformer, we compute it similarly to the inputs (with added masks), so is there also a sequence size limit?</p>
<p>Even BERT has an input size limit of 512 tokens, so transformers are limited in how much they can take in.
So is there something to make the output length as big as wanted or is there a fixed max length ?</p>
<p>If I wasn't clear enough, does the network generate words infinitely until the < end > token or is there a token limit for the outputs?</p> | 2021-09-09 12:33:04.787000+00:00 | 2021-09-13 07:19:54.793000+00:00 | null | nlp|artificial-intelligence|transformer-model | ['https://arxiv.org/abs/2102.11090'] | 1 |
63,538,041 | <p>Out of bag estimates are used for estimating the error, I don't think you can switch to CV using that package. It's up to you to decide whether CV is better than this. In their <a href="https://cran.r-project.org/web/packages/tuneRanger/readme/README.html" rel="nofollow noreferrer">readme</a>, they linked to a <a href="https://arxiv.org/pdf/1804.03515.pdf" rel="nofollow noreferrer">publication</a>, and under it section 3.5 they wrote:</p>
<blockquote>
<p>Out-of-bag predictions are used for evaluation, which makes it much
faster than other packages that use evaluation strategies such as
cross-validation</p>
</blockquote>
<p>If you want to use cross-validation or repeated cross-validation, you would have to use <code>caret</code>, for example:</p>
<pre><code>library(caret)
mdl = train(Species ~ .,data=iris,method="ranger",trControl=trainControl(method="repeatedcv",repeats=2),
tuneGrid = expand.grid(mtry=2:3,min.node.size = 1:2,splitrule="gini"))
Random Forest
150 samples
4 predictor
3 classes: 'setosa', 'versicolor', 'virginica'
No pre-processing
Resampling: Cross-Validated (10 fold, repeated 2 times)
Summary of sample sizes: 135, 135, 135, 135, 135, 135, ...
Resampling results across tuning parameters:
mtry min.node.size Accuracy Kappa
2 1 0.96 0.94
2 2 0.96 0.94
3 1 0.96 0.94
3 2 0.96 0.94
Tuning parameter 'splitrule' was held constant at a value of gini
Accuracy was used to select the optimal model using the largest value.
The final values used for the model were mtry = 2, splitrule = gini
and min.node.size = 1.
</code></pre>
<p>The parameters you can tune will be different. I think <code>mlr</code> also allows you to perform <a href="https://mlr.mlr-org.com/articles/tutorial/resample.html" rel="nofollow noreferrer">cross-validation</a> but the same limitations apply.</p> | 2020-08-22 15:40:23.023000+00:00 | 2020-08-22 15:40:23.023000+00:00 | null | null | 63,536,361 | <p>I am using the package "TuneRanger" to tune a RF model. It works good and I obtained good results but I am not sure if it is overfitting my model. I would like to use a Repeated CV for every instance the package is tuning the model but I can't find a way to do it. Also I would like to know if anybody knows how the package validates the results of every try (train-test, cv, repeated cv?) I have been reading the instructions of the package (<a href="https://cran.r-project.org/web/packages/tuneRanger/tuneRanger.pdf" rel="nofollow noreferrer">https://cran.r-project.org/web/packages/tuneRanger/tuneRanger.pdf</a>) but it says nothing about it.</p>
<p>Thank you for your help.</p> | 2020-08-22 12:47:07.980000+00:00 | 2020-08-22 15:40:23.023000+00:00 | 2020-08-22 13:39:03.070000+00:00 | r|random-forest|r-ranger | ['https://cran.r-project.org/web/packages/tuneRanger/readme/README.html', 'https://arxiv.org/pdf/1804.03515.pdf', 'https://mlr.mlr-org.com/articles/tutorial/resample.html'] | 3 |
37,839,687 | <p>As <a href="https://stackoverflow.com/users/3001761/jonrsharpe">jonrsharpe</a> mentioned, that's not really stackoverflow's MO, but in practice, many people do choose to write code to help explain answers (because it's often easier).
So I'm going to assume that this was just miscommunication, and you really intended to ask one of the following two questions:</p>
<ol>
<li>How does one grab the values of the last layer of Alexnet in TensorFlow?</li>
<li>How does feature extraction from the last layer of a deep convolutional network like alexnet work?</li>
</ol>
<p>The answer to the first question is actually very easy. I'll use the <code>cifar10</code> example code in TensorFlow (which is loosely based on AlexNet) as an example. The forward pass of the network is built in the <code>inference</code> function, which returns a variable representing the output of the softmax layer. To actually get predicted image labels, you just argmax the logits, like this: (I've left out some of the setup code, but if you're already running alexnet, you already have that working)</p>
<pre><code>logits = cifar10.inference(images)
predictions = tf.argmax(logits,1)
# Actually run the computation
labels = session.run([predictions])
</code></pre>
<p>So grabbing just the last layer features is literally just as easy as asking for them. The only wrinkle is that, in this case, cifar10 doesn't natively expose them, so you need to modify the cifar10.inference function to return both:</p>
<pre><code># old code in cifar10.inference:
# return softmax_linear
# new code in cifar10.inference:
return softmax_linear, local4
</code></pre>
<p>And then modify all the calls to cifar10.inference, like the one we just showed:</p>
<pre><code>logits,local4 = cifar10.inference(images)
predictions = tf.argmax(logits,1)
# Actually run the computation, this time asking for both answers
labels,last_layer = session.run([predictions, local4])
</code></pre>
<p>And that's it. <code>last_layer</code> contains the last layer for all of the inputs you gave the model.</p>
<p>As for the second question, that's a much deeper question, but I'm guessing that's why you want to work on it. I'd suggest starting by reading up on some of the papers published in this area. I'm not an expert here, but I do like Bolei Zhou's work. For instance, try looking at Figure 2 in <a href="http://arxiv.org/abs/1512.04150" rel="nofollow noreferrer">"Learning Deep Features for Discriminative Localization"</a>. It's a localization paper, but it's using very similar techniques (and several of Bolei's papers use it).</p> | 2016-06-15 15:28:14.080000+00:00 | 2016-06-15 15:28:14.080000+00:00 | 2017-05-23 10:29:15.527000+00:00 | null | 37,837,406 | <p>I try to get reliable features for ImageNet to do further classification on them. To achieve that I would like to use tensorflow with Alexnet, for feature extraction. That means I would like to get the values from the last layer in the CNN. Could someone write a piece of Python code that explains how that works?</p> | 2016-06-15 13:49:26.183000+00:00 | 2016-06-15 15:28:14.080000+00:00 | 2016-06-15 13:52:48.113000+00:00 | python|classification|tensorflow|feature-extraction | ['https://stackoverflow.com/users/3001761/jonrsharpe', 'http://arxiv.org/abs/1512.04150'] | 2 |
49,475,737 | <p>I suggest having a look at</p>
<blockquote>
<p><a href="https://arxiv.org/abs/1503.03832" rel="nofollow noreferrer">FaceNet: A Unified Embedding for Face Recognition and Clustering</a></p>
</blockquote>
<p>My <a href="http://www.shortscience.org/paper?bibtexKey=journals/corr/1503.03832" rel="nofollow noreferrer">shortscience summary</a> (go there if you want to see the Math parts rendered correctly):</p>
<p>FaceNet directly maps face images to $\mathbb{R}^{128}$ where distances directly correspond to a measure of face similarity. They use a triplet loss function. The triplet is (face of person A, other face of person A, face of person which is not A). Later, this is called (anchor, positive, negative).</p>
<p>The loss function is learned and inspired by LMNN. The idea is to minimize the distance between the two images of the same person and maximize the distance to the other persons image.</p>
<h3>LMNN</h3>
<p>Large Margin Nearest Neighbor (LMNN) is learning a pseudo-metric</p>
<p>$$d(x, y) = (x -y) M (x -y)^T$$</p>
<p>where $M$ is a positive-definite matrix. The only difference between a pseudo-metric and a metric is that $d(x, y) = 0 \Leftrightarrow x = y$ does not hold.</p>
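<p>A tiny numerical illustration of that pseudo-metric (toy values, just to make the formula concrete):</p>
<pre><code>import numpy as np

M = np.array([[2.0, 0.0],
              [0.0, 0.5]])              # assumed positive-definite matrix
x = np.array([1.0, 2.0])
y = np.array([0.0, 1.0])
d = (x - y) @ M @ (x - y)               # (x - y) M (x - y)^T
print(d)                                # 2*1 + 0.5*1 = 2.5
</code></pre>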
<h2>Curriculum Learning: Triplet selection</h2>
<p>Show simple examples first, then increase the difficulty. This is done by selecting the triplets.</p>
<p>They use the triplets which are <em>hard</em>. For the positive example, this means the distance between the anchor and the positive example is high. For the negative example this means the distance between the anchor and the negative example is low.</p>
<p>They want to have</p>
<p>$$||f(x_i^a) - f(x_i^p)||_2^2 + \alpha < ||f(x_i^a) - f(x_i^n)||_2^2$$</p>
<p>where $\alpha$ is a margin and $x_i^a$ is the anchor, $x_i^p$ is the positive face example and $x_i^n$ is the negative example. They increase $\alpha$ over time. It is crucial that $f$ maps the images not in the complete $\mathbb{R}^{128}$, but on the unit sphere. Otherwise one could double $\alpha$ by simply making $f' = 2 \cdot f$.</p>
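<p>In code, the resulting hinge-style triplet loss is small (a sketch; embeddings are assumed to be L2-normalised, i.e. points on the unit sphere):</p>
<pre><code>import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    pos_dist = np.sum((anchor - positive) ** 2, axis=-1)
    neg_dist = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(pos_dist - neg_dist + alpha, 0.0).mean()
</code></pre>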
<h3>Tasks</h3>
<ul>
<li><strong>Face verification</strong>: Is this the same person?</li>
<li><strong>Face recognition</strong>: Who is this person?</li>
</ul>
<h3>Datasets</h3>
<ul>
<li>99.63% accuracy on Labeled FAces in the Wild (LFW)</li>
<li>95.12% accuracy on YouTube Faces DB</li>
</ul>
<h3>Network</h3>
<p>Two models are evaluated: The <a href="http://www.shortscience.org/paper?bibtexKey=journals/corr/ZeilerF13" rel="nofollow noreferrer">Zeiler & Fergus model</a> and an architecture based on the <a href="http://www.shortscience.org/paper?bibtexKey=journals/corr/SzegedyLJSRAEVR14" rel="nofollow noreferrer">Inception model</a>.</p>
<h2>See also</h2>
<ul>
<li><a href="http://www.shortscience.org/paper?bibtexKey=conf/cvpr/TaigmanYRW14#martinthoma" rel="nofollow noreferrer">DeepFace</a></li>
</ul>
<h2>See also</h2>
<ul>
<li><a href="https://www.cs.toronto.edu/~ranzato/publications/taigman_cvpr14.pdf" rel="nofollow noreferrer">DeepFace: Closing the Gap to Human-Level Performance in Face Verification</a></li>
</ul> | 2018-03-25 12:18:07.487000+00:00 | 2018-03-25 12:18:07.487000+00:00 | null | null | 26,179,052 | <p>I have run the face detection algorithm inbuilt in opencv to extract faces in each frame of a video(sampled at 1 fps). I have also resized each face image to be of same size and I have cropped some fraction of image to remove background noise and hair. Now the problem is that I have to cluster these images of faces - Each cluster corresponding to a person. I implemented the algorithm described here <a href="http://bitsearch.blogspot.in/2013/02/unsupervised-face-clustering-with-opencv.html" rel="nofollow">http://bitsearch.blogspot.in/2013/02/unsupervised-face-clustering-with-opencv.html</a></p>
<p>Basically the above algorithm, uses LBPH face recognizer of OpenCV iteratively to cluster the images. In the description on that page itself the results are not satisfactory. In my implementation the results are worse. Can anyone suggest a better way to cluster faces? May be using some other feature and some other clustering algorithm. The number of clusters are unknown. </p> | 2014-10-03 12:36:37.353000+00:00 | 2018-03-25 12:18:07.487000+00:00 | null | opencv|computer-vision|cluster-analysis|face-recognition|feature-extraction | ['https://arxiv.org/abs/1503.03832', 'http://www.shortscience.org/paper?bibtexKey=journals/corr/1503.03832', 'http://www.shortscience.org/paper?bibtexKey=journals/corr/ZeilerF13', 'http://www.shortscience.org/paper?bibtexKey=journals/corr/SzegedyLJSRAEVR14', 'http://www.shortscience.org/paper?bibtexKey=conf/cvpr/TaigmanYRW14#martinthoma', 'https://www.cs.toronto.edu/~ranzato/publications/taigman_cvpr14.pdf'] | 6 |
59,172,208 | <p>You may want to have a look at LSTNet, which does exactly that - <a href="https://arxiv.org/abs/1703.07015" rel="nofollow noreferrer">https://arxiv.org/abs/1703.07015</a> and <a href="https://github.com/laiguokun/LSTNet" rel="nofollow noreferrer">https://github.com/laiguokun/LSTNet</a></p> | 2019-12-04 08:57:01.867000+00:00 | 2019-12-04 08:57:01.867000+00:00 | null | null | 59,168,306 | <p>Most commonly CNN is used when there are images as data. However, I have seen that CNN are sometines used for timeseries. Therefore, I tried both LSTM and CNN models seperately for my timeseries classification problem. My two models are as follows.</p>
<p>LSTM:</p>
<pre><code>model = Sequential()
model.add(LSTM(200, input_shape=(25,3)))
model.add(Dense(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre>
<p>CNN:</p>
<pre><code>model = Sequential()
model.add(Conv1D(200, kernel_size=3, input_shape=(25,3)))
model.add(Conv1D(200, kernel_size=2))
model.add(GlobalMaxPooling1D())
model.add(Dense(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre>
<p>I think LSTM and CNN has there unique characteristics and combining these two in my prediction will produce better results. However, I am struggling to find a suitable resource that suits my problem.</p>
<p>Is it possible to do this for my problem? If so how I can do it? Will it produce better results?</p>
<p>I am happy to provide more details if needed.</p>
<h2>EDIT:</h2>
<p>My problem setting is as follows. I have a dataset with about 5000 data points. Each data point has 3 time-series data that are exactly 25 in size. My labeled data is <code>1</code> or <code>0</code> (i.e. binary classification). More specifically my dataset looks as follows.</p>
<pre><code>node, time-series1, time_series2, time_series3, Label
n1, [1.2, 2.5, 3.7, 4.2, ... 5.6, 8.8], [6.2, 5.2, 4.7, 3.2, ... 2.6, 1.8], [1.0, 2.8, 3.9, 4.1, ... 5.2, 8.6] …, 1
n2, [5.2, 4.5, 3.7, 2.2, ... 1.6, 0.8], [8.2, 7.5, 6.7, 5.2, ... 4.6, 1.8], …, [1.2, 2.5, 3.7, 4.2, ... 5.2, 8.5] 0
and so on.
</code></pre>
<p>I input these data to my LSTM and CNN models.</p> | 2019-12-04 02:56:27.953000+00:00 | 2019-12-04 12:46:34.203000+00:00 | 2019-12-04 03:23:08.547000+00:00 | python|machine-learning|keras|deep-learning|time-series | ['https://arxiv.org/abs/1703.07015', 'https://github.com/laiguokun/LSTNet'] | 2 |
8,927,094 | <p>If you take a look at <a href="http://arxiv.org/abs/1005.4117">Random Numbers In Scientific Computing: An Introduction</a> by Katzgraber (which is an excellent, lucid discussion of the ins and outs of using PRNGs for technical computing), for parallel runs they suggest using a hash function of time and PID to generate a seed. From their section 7.1:</p>
<pre><code>long seedgen(void) {
long s, seed, pid;
pid = getpid();
s = time ( &seconds ); /* get CPU seconds since 01/01/1970 */
seed = abs(((s*181)*((pid-83)*359))%104729);
return seed;
}
</code></pre>
<p>of course, in Fortran this would be something like</p>
<pre><code>function seedgen(pid)
use iso_fortran_env
implicit none
integer(kind=int64) :: seedgen
integer, intent(IN) :: pid
integer :: s
call system_clock(s)
seedgen = abs( mod((s*181)*((pid-83)*359), 104729) )
end function seedgen
</code></pre>
<p>It's also sometimes handy to be able to pass in the time, rather than calling it from within <code>seedgen</code>, so that when you are testing you can give it fixed values that then generate a reproducible (== testable) sequence.</p>
<p>Most discussions on random seeds acknowledge that if the program doesn't seed it at run-time, then the seed is generated at compile time. So, the same sequence of numbers is generated every time the program is run, which is not good for random numbers. One way to overcome this is to seed the random number generator with the system clock. </p>
<p>However, when running in parallel with MPI on a multi-core machine, the system clock approach for us generated the same kinds of problems. While the sequences changed from run to run, all processors got the same system clock and thus the same random seed and same sequences. </p>
<p>So consider the following example code:</p>
<pre><code>PROGRAM clock_test
IMPLICIT NONE
INCLUDE "mpif.h"
INTEGER :: ierr, rank, clock, i, n, method
INTEGER, DIMENSION(:), ALLOCATABLE :: seed
REAL(KIND=8) :: random
INTEGER, PARAMETER :: OLD_METHOD = 0, &
NEW_METHOD = 1
CALL MPI_INIT(ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
CALL RANDOM_SEED(SIZE=n)
ALLOCATE(seed(n))
DO method = 0, 1
SELECT CASE (method)
CASE (OLD_METHOD)
CALL SYSTEM_CLOCK(COUNT=clock)
seed = clock + 37 * (/ (i - 1, i = 1, n) /)
CALL RANDOM_SEED(put=seed)
CALL RANDOM_NUMBER(random)
WRITE(*,*) "OLD Rank, dev = ", rank, random
CASE (NEW_METHOD)
OPEN(89,FILE='/dev/urandom',ACCESS='stream',FORM='UNFORMATTED')
READ(89) seed
CLOSE(89)
CALL RANDOM_SEED(put=seed)
CALL RANDOM_NUMBER(random)
WRITE(*,*) "NEW Rank, dev = ", rank, random
END SELECT
CALL MPI_BARRIER(MPI_COMM_WORLD, ierr)
END DO
CALL MPI_FINALIZE(ierr)
END PROGRAM clock_test
</code></pre>
<p>Which when run on my workstation with 2 cores, gives:</p>
<pre><code>OLD Rank, dev = 0 0.330676306089146
OLD Rank, dev = 1 0.330676306089146
NEW Rank, dev = 0 0.531503215980609
NEW Rank, dev = 1 0.747413828750221
</code></pre>
<p>So, we overcame the clock issue by reading the seed from <code>/dev/urandom</code> instead. This way each core gets its own random number. </p>
<p>What other seed approaches are there that will work in a multi-core, MPI system and still be unique on each core, from run to run?</p> | 2012-01-19 02:42:32.543000+00:00 | 2012-01-19 14:44:19.123000+00:00 | null | random|parallel-processing|mpi|multicore|seed | ['http://arxiv.org/abs/1005.4117'] | 1 |
65,513,482 | <h3>The example</h3>
<p>I'm familiar with that example, and I think the <strong>28x28</strong> multiplier is justified because the reconstruction term is computed with <code>tf.reduce_mean(keras.losses.binary_crossentropy(data, reconstruction))</code>, which averages the loss over all the pixels in the image (giving a number that is typically between 0 and 1); multiplying by the number of pixels turns it back into a per-image sum. Here's <a href="https://www.tensorflow.org/guide/keras/custom_layers_and_models" rel="nofollow noreferrer">another take</a> with an external training loop for creating a VAE.</p>
<h3>The problem is posterior collapse</h3>
<p>The above would not be an issue, since it is just multiplication by a constant, if not for the <em>KL divergence</em> term that you point out. The KL loss acts as a regularizer that penalizes encoder distributions over the latent variables that drift away from the Gaussian prior they are sampled against. Naturally, the question arises: how much weight should the reconstruction loss get and how much the penalty? This is an area of research. Consider <em>β-VAE</em>, which purportedly serves to disentangle representations by increasing the importance of the KL loss; on the other hand, increase <strong>β</strong> too much and you get a phenomenon known as posterior collapse. <a href="https://arxiv.org/abs/1910.00698" rel="nofollow noreferrer">Re-balancing Variational Autoencoder Loss for Molecule Sequence Generation</a> limits <strong>β</strong> to 0.1 to avoid the problem. But it may not even be that simple, as explained in <a href="https://arxiv.org/abs/1912.10702" rel="nofollow noreferrer">The Usual Suspects? Reassessing Blame for VAE Posterior Collapse</a>. A thorough solution is proposed in <a href="https://arxiv.org/abs/1903.05789" rel="nofollow noreferrer">Diagnosing and Enhancing VAE Models</a>, while <a href="https://arxiv.org/abs/2002.07514" rel="nofollow noreferrer">Balancing reconstruction error and Kullback-Leibler divergence in Variational Autoencoders</a> suggests that there is a simpler deterministic (and better) way.</p>
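<p>To make that balancing act concrete, here is a minimal sketch (my own illustration, reusing the variable names from the snippet in the question; <code>beta</code> is an extra knob that is not part of the original tutorial):</p>
<pre><code># sketch: beta-weighted VAE loss; beta is a tuning constant, not from the tutorial
reconstruction_loss = tf.reduce_mean(
    keras.losses.binary_crossentropy(data, reconstruction)
)
reconstruction_loss *= 28 * 28          # keep the per-image scaling
kl_loss = -0.5 * tf.reduce_mean(
    1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)
)
beta = 1.0                              # beta = 1 recovers the original loss
total_loss = reconstruction_loss + beta * kl_loss
</code></pre>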
<h3>Experimentation and Extension</h3>
<p>For something simple like Minst, and that example, in particular, try experimenting. Keep the 28x28 term, and arbitrarily multiply kl_loss by a constant <code>B</code> where 0 <= B < 28*28. Follow the kl loss term and the reconstruction loss term during training and compare it to the first reference graphs.</p> | 2020-12-30 20:55:28.913000+00:00 | 2020-12-30 20:55:28.913000+00:00 | null | null | 63,679,934 | <p>I am following this variational autoencoder tutorial: <a href="https://keras.io/examples/generative/vae/" rel="nofollow noreferrer">https://keras.io/examples/generative/vae/</a>.</p>
<p>I know VAE's loss function consists of the reconstruction loss that compares the original image and reconstruction, as well as the KL loss. However, I'm a bit confused about the reconstruction loss and whether it is over the entire image (sum of squared differences) or per pixel (average sum of squared differences). My understanding is that the reconstruction loss should be per pixel (MSE), but the example code I am following multiplies MSE by <code>28 x 28</code>, the MNIST image dimensions. Is that correct? Furthermore, my assumption is this would make the reconstruction loss term significantly larger than the KL loss and I'm not sure we want that.</p>
<p>I tried removing the multiplication by (28x28), but this resulted in extremely poor reconstructions. Essentially all the reconstructions looked the same regardless of the input. Can I use a lambda parameter to capture the tradeoff between KL divergence and reconstruction, or is that incorrect because the loss has a precise derivation (as opposed to just adding a regularization penalty)?</p>
<pre><code>reconstruction_loss = tf.reduce_mean(
keras.losses.binary_crossentropy(data, reconstruction)
)
reconstruction_loss *= 28 * 28
kl_loss = 1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)
kl_loss = tf.reduce_mean(kl_loss)
kl_loss *= -0.5
total_loss = reconstruction_loss + kl_loss
</code></pre> | 2020-09-01 00:52:52.933000+00:00 | 2020-12-30 20:55:28.913000+00:00 | 2020-09-01 01:26:49.237000+00:00 | python|keras|autoencoder|loss-function|mean-square-error | ['https://www.tensorflow.org/guide/keras/custom_layers_and_models', 'https://arxiv.org/abs/1910.00698', 'https://arxiv.org/abs/1912.10702', 'https://arxiv.org/abs/1903.05789', 'https://arxiv.org/abs/2002.07514'] | 5 |
55,444,470 | <p>My guess is that you're misunderstanding how convolutional layers are defined.</p>
<p>My notation for the shape of the convolutional layer is <code>(out_channels, in_channels, k, k)</code> where <code>k</code> is the size of the kernel. The <code>out_channels</code> is the number of filters (i.e. convolutional neurons). Consider the following image:</p>
<p><a href="https://i.stack.imgur.com/8o519.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8o519.png" alt="Convolution illustration"></a></p>
<p>The 3d convolutional kernel weights in the picture slide across different data windows of <code>A_{i-1}</code> (i.e. the input image). Patches of 3D data of that image of shape <code>(in_channels, k, k)</code> are paired with individual 3d convolutional kernels of matching dimensionality. How many such 3d kernels are there? As many as the number of output channels <code>out_channels</code>. The depth dimension that each kernel adopts is the <code>in_channels</code> of <code>A_{i-1}</code>. Therefore, the dimension <code>in_channels</code> of <code>A_{i-1}</code> is contracted away by the depth-wise dot product that builds up the output tensor with <code>out_channels</code> channels. The precise way in which the sliding windows are constructed is defined by the sampling tuple <code>(kernel_size, stride, padding)</code> and results in an output tensor with spatial dimensions determined by the formula that you've correctly applied.</p>
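<p>As a quick sanity check (a minimal sketch, assuming <code>tensorflow.keras</code> is available; with the standalone Keras from the question the shapes come out the same), you can see that the input's channel dimension is contracted away and only the number of filters remains:</p>
<pre><code>from tensorflow.keras.layers import Input, Conv2D

x = Input(shape=(128, 128, 3))                          # 3 input channels
y = Conv2D(24, kernel_size=(8, 8), strides=(2, 2))(x)   # 24 filters
print(y.shape)   # (None, 61, 61, 24); 61 = floor((128 - 8)/2) + 1, and no separate axis for the 3 input channels
</code></pre>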
<p>If you want to understand more, including backpropagation and implementation take a look at <a href="https://arxiv.org/pdf/1811.11987.pdf" rel="nofollow noreferrer">this</a> paper.</p> | 2019-03-31 19:12:08.957000+00:00 | 2019-03-31 19:12:08.957000+00:00 | null | null | 55,444,120 | <p>I do not understand why the channel dimension is not included in the output dimension of a conv2D layer in Keras.</p>
<p>I have the following model</p>
<pre><code>def create_model():
image = Input(shape=(128,128,3))
x = Conv2D(24, kernel_size=(8,8), strides=(2,2), activation='relu', name='conv_1')(image)
x = Conv2D(24, kernel_size=(8,8), strides=(2,2), activation='relu', name='conv_2')(x)
x = Conv2D(24, kernel_size=(8,8), strides=(2,2), activation='relu', name='conv_3')(x)
flatten = Flatten(name='flatten')(x)
output = Dense(1, activation='relu', name='output')(flatten)
model = Model(input=image, output=output)
return model
model = create_model()
model.summary()
</code></pre>
<p>The model summary is given the figure at the end of my question. The input layer takes RGB images with width = 128 and height = 128. The first conv2D layer tells me the output dimension is (None, 61, 61, 24). I have used the kernel size of (8, 8), a stride of (2, 2) no padding. The values 61 = floor( (128 - 8 + 2 * 0)/2 + 1) and 24 (number of kernels/filters) makes sense. <strong>But why isn't the dimension for the different channels included in the dimension?</strong> As far as I can see the parameters for the 24 filters on each of the channels is included in the number of the parameters. <strong>So I would expect the output dimension to be (None, 61, 61, 24, 3) or (None, 61, 61, 24 * 3). Is this just a strange notation in Keras or am I confused about something else?</strong></p>
<p><a href="https://i.stack.imgur.com/Ptyqi.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Ptyqi.png" alt="enter image description here"></a></p> | 2019-03-31 18:32:17.770000+00:00 | 2019-07-30 09:24:44.540000+00:00 | null | python|keras|conv-neural-network | ['https://i.stack.imgur.com/8o519.png', 'https://arxiv.org/pdf/1811.11987.pdf'] | 2 |
38,231,636 | <p>See <a href="https://arxiv.org/pdf/1707.09725.pdf#page=96" rel="nofollow noreferrer">my master's thesis</a> for a very similar list:</p>
<h2>Optimization algorithms for neural networks</h2>
<ul>
<li>Gradient based
<ul>
<li>Flavours of gradient descent (only first order gradient):
<ul>
<li>Stochastic gradient descent: <a href="https://i.stack.imgur.com/SZ72p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SZ72p.png" alt="enter image description here" /></a></li>
<li>Mini-Batch gradient descent: <img src="https://i.imgur.com/JktJp2O.png" alt="" /></li>
<li>Learning Rate Scheduling:
<ul>
<li>Momentum: <img src="https://i.imgur.com/MGWost4.png" alt="" /></li>
<li><a href="https://en.wikipedia.org/wiki/Rprop" rel="nofollow noreferrer">RProp</a> and the mini-batch version RMSProp</li>
<li><a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent#AdaGrad" rel="nofollow noreferrer">AdaGrad</a></li>
<li>Adadelta (<a href="http://arxiv.org/pdf/1212.5701v1.pdf" rel="nofollow noreferrer">paper</a>)</li>
<li>Exponential Decay Learning Rate</li>
<li>Performance Scheduling</li>
<li>Newbob Scheduling</li>
</ul>
</li>
<li><a href="https://en.wikipedia.org/wiki/Quickprop" rel="nofollow noreferrer">Quickprop</a></li>
<li>Nesterov Accelerated Gradient (NAG): <a href="http://cs231n.github.io/neural-networks-3/#sgd" rel="nofollow noreferrer">Explanation</a></li>
</ul>
</li>
<li>Higher order gradients
<ul>
<li><a href="https://en.wikipedia.org/wiki/Newton%27s_method" rel="nofollow noreferrer">Newton's method</a>: <a href="https://stats.stackexchange.com/a/253636/25741">Typically not possible</a></li>
<li>Quasi-Newton method
<ul>
<li>BFGS</li>
<li>L-BFGS</li>
</ul>
</li>
</ul>
</li>
<li>Unsure how it works
<ul>
<li>Adam (Adaptive Moment Estimation)
<ul>
<li>AdaMax</li>
</ul>
</li>
<li>Conjugate gradient</li>
</ul>
</li>
</ul>
</li>
<li>Alternatives
<ul>
<li>Genetic algorithms</li>
<li>Simulated Annealing</li>
<li><a href="https://martin-thoma.com/twiddle/" rel="nofollow noreferrer">Twiddle</a></li>
<li>Markov random fields (graphcut/mincut)</li>
<li>The <strong>Simplex algorithm</strong> is used for linear optimization in an operations research setting, but apparently also for neural networks (<a href="https://visualstudiomagazine.com/articles/2014/10/01/simplex-optimization.aspx" rel="nofollow noreferrer">source</a>)</li>
</ul>
</li>
</ul>
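<p>To make the first-order update rules from the list above concrete, here is a minimal NumPy sketch (my own illustration, not code from any of the linked references):</p>
<pre><code>import numpy as np

def sgd_step(w, grad, lr=0.01):
    # plain (stochastic) gradient descent: step against the gradient
    return w - lr * grad

def momentum_step(w, grad, velocity, lr=0.01, mu=0.9):
    # momentum: keep an exponentially decayed running sum of past gradients
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity
</code></pre>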
<p>You might also want to have a look at my article about <a href="https://martin-thoma.com/optimization-basics" rel="nofollow noreferrer">optimization basics</a> and at Alec Radfords nice gifs: <a href="https://imgur.com/a/Hqolp" rel="nofollow noreferrer">1</a> and <a href="https://imgur.com/s25RsOr" rel="nofollow noreferrer">2</a>, e.g.</p>
<p><img src="https://i.imgur.com/2dKCQHh.gif?1" alt="" /></p>
<p>Other interesting resources are:</p>
<ul>
<li><a href="http://sebastianruder.com/optimizing-gradient-descent/" rel="nofollow noreferrer">An overview of gradient descent optimization algorithms</a></li>
</ul>
<h2>Trade-Offs</h2>
<p>I think all of the posted optimization algorithms have some scenarios where they have advantages. The general trade-offs are:</p>
<ul>
<li>How much of an improvement do you get in one step?</li>
<li>How fast can you calculate one step?</li>
<li>How much data can the algorithm deal with?</li>
<li>Is it guaranteed to find a local minimum?</li>
<li>What requirements does the optimization algorithm have for your function? (e.g. to be once, twice or three times differentiable)</li>
</ul> | 2016-07-06 18:50:18.127000+00:00 | 2021-03-08 09:21:36.193000+00:00 | 2021-03-08 09:21:36.193000+00:00 | null | 23,554,606 | <p>Gradient Descent has a problem of Local Minima. We need run gradient descent exponential times for to find global minima. </p>
<p>Can anybody tell me about any alternatives of gradient descent with their pros and cons.</p>
<p>Thanks.</p> | 2014-05-08 23:57:10.580000+00:00 | 2021-03-08 09:21:36.193000+00:00 | 2016-08-23 09:13:41.883000+00:00 | machine-learning|neural-network|logistic-regression|gradient-descent | ['https://arxiv.org/pdf/1707.09725.pdf#page=96', 'https://i.stack.imgur.com/SZ72p.png', 'https://en.wikipedia.org/wiki/Rprop', 'https://en.wikipedia.org/wiki/Stochastic_gradient_descent#AdaGrad', 'http://arxiv.org/pdf/1212.5701v1.pdf', 'https://en.wikipedia.org/wiki/Quickprop', 'http://cs231n.github.io/neural-networks-3/#sgd', 'https://en.wikipedia.org/wiki/Newton%27s_method', 'https://stats.stackexchange.com/a/253636/25741', 'https://martin-thoma.com/twiddle/', 'https://visualstudiomagazine.com/articles/2014/10/01/simplex-optimization.aspx', 'https://martin-thoma.com/optimization-basics', 'https://imgur.com/a/Hqolp', 'https://imgur.com/s25RsOr', 'http://sebastianruder.com/optimizing-gradient-descent/'] | 15 |
58,697,648 | <p>Well, BERT and ELMo are trained on a huge corpus of data (BERT is trained on 16GB of raw text). This implies that the embeddings produced by these models are generic, which lets you leverage the capabilities of a language model in most tasks.</p>
<p>Since your task is biology related, you can have look at alternatives such as BioBERT (<a href="https://arxiv.org/abs/1901.08746" rel="nofollow noreferrer">https://arxiv.org/abs/1901.08746</a>)</p> | 2019-11-04 16:40:29.953000+00:00 | 2019-11-04 16:40:29.953000+00:00 | null | null | 58,689,911 | <p>How-to issue:
spaCy mentions that ELMo/BERT are very effective in NLP tasks if you have little data, as these two have very good transfer learning properties. </p>
<p>My question: transfer learning relative to what model. If you have a language model for dogs, finding a good language model for kangeroos is easier (my case is biology-related, and has a lot of terminology)?</p> | 2019-11-04 08:53:04.677000+00:00 | 2019-11-04 16:40:29.953000+00:00 | null | spacy|pre-trained-model|elmo | ['https://arxiv.org/abs/1901.08746'] | 1 |
50,669,512 | <p>There are different things going on here.</p>
<h2>Class imbalance</h2>
<p>As already discussed in the comments, your training set consists of 2941 negative samples (faces from other persons than you) and 308 positive samples (images with your face on it). So a classifier which always votes for the negative class gets 2941 of the 3249 samples right, i.e. 90.5%.
Your training accuracy score should be read with this information in mind, because your 89.4% hints that your network has learnt nothing valuable.</p>
<p>What you could try: </p>
<ol>
<li>Oversample the minority class</li>
<li>Undersample the majority class</li>
<li>Use weights in your loss function</li>
</ol>
<p>There exists a vast body of literature and tutorials on this topic, so just look around a bit.</p>
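<p>As one concrete illustration of option 3 above (class weights), here is a sketch only, assuming the Keras API from the question; <code>X_train</code>/<code>y_train</code> are placeholders for your arrays and the class indices depend on how your one-hot labels are encoded:</p>
<pre><code>class_weight = {0: 1.0, 1: 2941 / 308}   # up-weight the rare "my face" class
model.fit(X_train, y_train, epochs=100, class_weight=class_weight)
</code></pre>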
<h2>Model capacity/network design</h2>
<p>You mentioned in the comments that after using balanced training data, the loss still doesn't decrease. This hints that the capacity of your model is not high enough or that your model is not well designed for that task.</p>
<p>There exist many different network structures for image processing tasks, depending on what exactly the task is. For example, a network for doing pixel-wise classification and instance segmentation (like <a href="https://arxiv.org/abs/1703.06870" rel="nofollow noreferrer">Mask RCNN</a>) will look different from a network that is designed to do face recognition (like <a href="https://arxiv.org/abs/1503.03832" rel="nofollow noreferrer">FaceNet</a>). Even if we only focus on one task (in this/your case: face recognition) the network structures you find will look very different.</p>
<p>So what does that mean? Building a neural network for a specific task is not trivial and it is not easy to predict which structures work well for which task. A lot of trial and error is often needed, to find a suitable structure. </p>
<p>The model capacity describes how "powerful" your model is. There is no rigorous definition for this but you can think about it like this: Different tasks can be harder or easier to solve. A certain model can only solve problems which are "easy enough" and on more difficult problems the model will only show poor performance. The model capacity often goes hand in hand with the network design/structure.</p>
<p>How does that help you? Well, look at different network structures that do the same task as you aim to do and understand why certain building blocks are used and what they are good for. Then you can try to rebuild this structures and experiment with it. A good starting point might be <a href="https://hackernoon.com/building-a-facial-recognition-pipeline-with-deep-learning-in-tensorflow-66e7645015b8" rel="nofollow noreferrer">this</a> or <a href="https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78" rel="nofollow noreferrer">this</a> tutorial on face recognition.</p> | 2018-06-03 18:10:43.917000+00:00 | 2018-06-03 18:20:41.963000+00:00 | 2018-06-03 18:20:41.963000+00:00 | null | 50,665,466 | <p>i am fairly new to machine learning and this is my first post on StackOverflow.
I want to train a CNN such that it can distinguish my face from others.
My model just stops improving after the first 3 epochs.</p>
<p>I found some faces in online databases for machine learning, then centered, grayscaled and cropped them. The same thing i did with pictures taken by my webcam from my face.</p>
<p>The data for the NN looks like this:</p>
<pre>
| input                 | output[0] | output[1] |
|---------------------- |---------- |---------- |
| face of me            | 0         | 1         |
| face of someone else  | 1         | 0         |
</pre>
<p>So far so good. </p>
<p>I then tried to train the CNN with this with the following structure:</p>
<pre><code>model= Sequential()
# sort out the input layer later
model.add(Conv2D(32,3,3, activation='relu', input_shape=(100,100,3)))
model.add(MaxPooling2D((2,2)))
model.add(Conv2D(12,3,3, activation='relu'))
model.add(MaxPooling2D((2,2)))
model.add(Flatten())
model.add(Dense(600, activation='relu'))
model.add(Dense(60, activation='relu'))
#model.add(Dropout(p=0.2))
model.add(Dense(2, activation='softmax'))
model.summary()
epochs = 100
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
#%%
model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=['accuracy'])
</code></pre>
<p>I tried different optimizers (sgd, adam, rmsprop) and experimented with other parameters for the CNN, but every time the model just stops reducing the loss after the first few epochs.</p>
<pre><code>Epoch 1/100
2176/2176 [==============================] - 55s 25ms/step - loss: 1.8043 - acc: 0.8869
Epoch 2/100
2176/2176 [==============================] - 56s 26ms/step - loss: 1.7037 - acc: 0.8943
Epoch 3/100
2176/2176 [==============================] - 57s 26ms/step - loss: 1.7037 - acc: 0.8943
Epoch 4/100
</code></pre>
<p>No improvement in Loss after epoch 2 for this example.</p>
<p>Do you have any idea why this might be the case?</p> | 2018-06-03 10:39:19.173000+00:00 | 2018-06-03 18:20:41.963000+00:00 | null | python|tensorflow|keras|face-recognition|convolutional-neural-network | ['https://arxiv.org/abs/1703.06870', 'https://arxiv.org/abs/1503.03832', 'https://hackernoon.com/building-a-facial-recognition-pipeline-with-deep-learning-in-tensorflow-66e7645015b8', 'https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78'] | 4 |
15,591,075 | <p>Make sure your test set is large enough compared to the training set (e.g. 10% of the overall data) and check it regarding diversity. If your test set only covers very specific cases, this could be a reason. Also make sure you always use the same test set. Alternatively you should google the term <em>cross-validation</em>.</p>
<p>Furthermore, observing good training set accuracy while observing bad test set accuracy is a sign for <em>overfitting</em>. Try to apply regularization like a simple L2 weight decay (simply multiply your weight matrices with e.g. 0.999 after each weight update). Depending on your data, <a href="http://arxiv.org/pdf/1207.0580.pdf" rel="nofollow">Dropout</a> or L1 regularization could also help (especially if you have a lot of redundancies in your input data). Also try to choose a smaller network topology (fewer layers and/or fewer neurons per layer).</p>
<p>To speed up training, you could also try alternative learning algorithms like <a href="https://class.coursera.org/neuralnets-2012-001/lecture/preview" rel="nofollow">RPROP+, RPROP- or RMSProp</a> instead of plain backpropagation.</p> | 2013-03-23 19:22:41.767000+00:00 | 2013-03-23 19:22:41.767000+00:00 | null | null | 15,443,102 | <p>I'm implementing a neural network for a supervised classification task in MATLAB.</p>
<p>I have a training set and a test set to evaluate the results.
The problem is that every time I train the network for the same training set I get very different results (sometimes I get a 95% classification accuracy and sometimes like 60%) for the same test set. </p>
<p>Now I know this is because I get different initial weights and I know that I can use 'seed' to set the same initial weights but the question is what does this say about my data and what is the right way to look at this? How do I define the accuracy I'm getting using my designed ANN? Is there a protocol for this (like running the ANN 50 times and get an average accuracy or something)?</p>
<p>Thanks</p> | 2013-03-15 22:30:07.793000+00:00 | 2016-05-17 10:57:33.360000+00:00 | 2016-05-17 10:57:33.360000+00:00 | matlab|neural-network|classification | ['http://arxiv.org/pdf/1207.0580.pdf', 'https://class.coursera.org/neuralnets-2012-001/lecture/preview'] | 2 |
34,501,801 | <p>You may learn an auto-encoder on the negative examples you have (if their number is kind of large) and then generate examples using an inference technique such as <a href="http://arxiv.org/abs/1312.6114" rel="nofollow">variational Bayes</a> or <a href="http://jmlr.csail.mit.edu/papers/volume15/alain14a/alain14a.pdf" rel="nofollow">Markov Chain Monte Carlo</a>. This way you can increase the number of samples for the negative examples and kind of move towards a more balanced data set.</p> | 2015-12-28 22:37:07.190000+00:00 | 2015-12-28 22:48:10.073000+00:00 | 2015-12-28 22:48:10.073000+00:00 | null | 34,500,992 | <p>I am using TensorFlow LinearClassifier and also DNN to classify two - classes dataset.</p>
<p>However, the problem is the dataset contains 96% of Positive output, and 4% of negative output, and my program always return the prediction as Positive. Of course, in this case I will achieved the accuracy of 96%, but it does not make sense at all.</p>
<p>What is the good way to deal with this kind of situation?</p> | 2015-12-28 21:25:29.243000+00:00 | 2017-03-30 04:45:30.003000+00:00 | null | machine-learning|classification|tensorflow | ['http://arxiv.org/abs/1312.6114', 'http://jmlr.csail.mit.edu/papers/volume15/alain14a/alain14a.pdf'] | 2 |
56,630,742 | <p>You cannot directly use the <code>.pb</code> model produced by image classification to perform object detection. You will have to obtain an object detection model, train it and then use it to detect. There are pretrained object detection models at the TensorFlow object detection model <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md" rel="nofollow noreferrer">zoo</a>.</p>
<p><strong>detailed answer below:</strong></p>
<p>Image classification and object detection are two different but very closely related tasks. In fact, Ross Girshick asked a similar question in the famous <a href="https://arxiv.org/abs/1311.2524" rel="nofollow noreferrer">R-CNN</a> paper:</p>
<blockquote>
<p>To what extent do the CNN classification results on ImageNet generalize to object detection results on the PASCAL VOC Challenge?</p>
</blockquote>
<p>This question basically means that an image classification model can be used to help object detection, but there are some more steps needed. So you cannot just directly use a classification network to do an object detection task. (But the error you gave was something different; you can find the correct tensor name and fix the error, but it just does not make sense to directly use a classification network for object detection that way.)</p>
<p>There is a naive solution that combines the two: slide windows of various sizes across the image and run classification on each window; this amounts to object detection.</p>
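<p>A minimal sketch of that sliding-window idea (my own illustration, not an efficient detector; <code>image</code> is assumed to be a NumPy array) would look like this; each crop is then fed to the classification graph:</p>
<pre><code>def sliding_windows(image, win=128, stride=64):
    h, w = image.shape[:2]
    for top in range(0, h - win + 1, stride):
        for left in range(0, w - win + 1, stride):
            # yield the window position and the crop to classify
            yield (top, left), image[top:top + win, left:left + win]
</code></pre>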
<p>Another solution is an integrated one. To give an example, Faster R-CNN is an object detection network which used VGG as the feature extractor (in the original paper). Here VGG is an image classification network, pretrained on an image classification task. </p>
<p><a href="https://i.stack.imgur.com/rTgYF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rTgYF.png" alt="enter image description here"></a></p>
<p><a href="https://courses.cs.washington.edu/courses/csep576/18sp/lectures/8_1_detection.pdf" rel="nofollow noreferrer">image source</a></p> | 2019-06-17 11:51:30.363000+00:00 | 2019-06-17 11:57:56.557000+00:00 | 2019-06-17 11:57:56.557000+00:00 | null | 56,630,132 | <p>I have used <strong>Tensorflow-for-poets</strong> to build an image classification model. However, I now want to use the trained model in an object detection model. Can I just import the .pb files directly or do I have to retrain the model?</p>
<p>I am getting this error when I try it</p>
<blockquote>
<p>KeyError: "The name 'image_tensor:0' refers to a Tensor which does not exist. The operation, 'image_tensor', does not exist in the graph."</p>
</blockquote> | 2019-06-17 11:12:43.087000+00:00 | 2019-06-17 23:53:08.520000+00:00 | 2019-06-17 23:53:08.520000+00:00 | image|tensorflow|machine-learning|deep-learning|object-detection | ['https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md', 'https://arxiv.org/abs/1311.2524', 'https://i.stack.imgur.com/rTgYF.png', 'https://courses.cs.washington.edu/courses/csep576/18sp/lectures/8_1_detection.pdf'] | 4 |
64,546,055 | <p>Okay, in <a href="https://arxiv.org/pdf/1610.03483.pdf" rel="nofollow noreferrer">this paper</a>, it says:</p>
<hr />
<h2>"Prescribed probabilistic models are those that provide an explicit parametric specification of the distribution of an observed random variable <strong>x</strong> [...]. Most models in machine learning and statistics are of this form, whether they be state-of-the-art classifiers for object recognition, complex sequence models for machine translation, or fine-grained spatio-temporal models tracking the spread of disease. Alternatively, we can specify implicit probabilistic models that define a stochastic procedure that directly generates data."</h2>
<p>Maybe to wrap it up: a <em>prescribed</em> probabilistic model writes down an explicit parametric distribution for the observed data, whose parameters are then fit to the task and data at hand, e.g. object classification. An <em>implicit</em> probabilistic model instead only defines a procedure that generates data. GANs are the canonical example: to be fair, they are also fed with data, e.g. MNIST samples, and if properly tuned they produce MNIST-like digits themselves.</p>
<p>Hope that helps a bit!</p>
<p>PS: I'm not sure whether <em>explicit</em> generative networks exists at all. Are you sure, maybe you can send a reference? After all, one always wants to generate data with a GAN ...</p> | 2020-10-26 22:49:37.063000+00:00 | 2020-10-26 22:49:37.063000+00:00 | null | null | 64,538,548 | <p>Adverserial Networks, such as GAN's, are called <strong>"implicit"</strong> networks. What does it mean? And, how do they differ from <strong>"explicit"</strong> generative networks? And what are <strong>"explicit"</strong> generative networks?</p> | 2020-10-26 13:55:10.707000+00:00 | 2020-10-27 05:27:07.707000+00:00 | null | deep-learning|computer-vision|computer-science|generative-adversarial-network | ['https://arxiv.org/pdf/1610.03483.pdf'] | 1 |
63,738,885 | <p>The dropout technique is not applied to every single layer within a neural network; it's commonly applied to the neurons of the last few layers of the network.</p>
<p>The technique works by randomly reducing the number of interconnecting neurons within a neural network. At every training step, each neuron has a chance of being left out, or rather, dropped out of the collated contribution from connected neurons.</p>
<p>There's some debate as to whether the dropout should be placed before or after the activation function. As a rule of thumb, place the dropout after the activation function for all activation functions other than <code>relu</code>.</p>
<p>You can add <code>dropout</code> after every hidden layer, and generally it affects only the previous layer (in your case it will affect <code>x = layers.Dense(1024, activation='relu')(x)</code>). In the original paper that proposed dropout layers, by <a href="https://arxiv.org/pdf/1207.0580.pdf" rel="nofollow noreferrer">Hinton (2012)</a>, dropout (with p=0.5) was used on each of the fully connected (dense) layers before the output; it was not used on the convolutional layers. This became the most commonly used configuration.</p>
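<p>A small sketch (assuming <code>tf.keras</code>) makes it visible that only the activations coming out of the layer right before the dropout are masked, while the frozen InceptionV3 layers are untouched:</p>
<pre><code>import tensorflow as tf

drop = tf.keras.layers.Dropout(0.2)
acts = tf.ones((1, 1024))            # stand-in for the Dense(1024) output
print(drop(acts, training=True))     # roughly 20% of entries zeroed, the rest scaled by 1/0.8
</code></pre>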
<p>I am adding the resources link that might help you:</p>
<p><a href="https://towardsdatascience.com/understanding-and-implementing-dropout-in-tensorflow-and-keras-a8a3a02c1bfa" rel="nofollow noreferrer">https://towardsdatascience.com/understanding-and-implementing-dropout-in-tensorflow-and-keras-a8a3a02c1bfa</a></p>
<p><a href="https://towardsdatascience.com/dropout-on-convolutional-layers-is-weird-5c6ab14f19b2" rel="nofollow noreferrer">https://towardsdatascience.com/dropout-on-convolutional-layers-is-weird-5c6ab14f19b2</a></p>
<p><a href="https://towardsdatascience.com/machine-learning-part-20-dropout-keras-layers-explained-8c9f6dc4c9ab" rel="nofollow noreferrer">https://towardsdatascience.com/machine-learning-part-20-dropout-keras-layers-explained-8c9f6dc4c9ab</a></p> | 2020-09-04 09:56:43.277000+00:00 | 2020-09-04 09:56:43.277000+00:00 | null | null | 63,738,681 | <p>Consider transfer learning in order to use a pretrained model in keras/tensorflow. For each old layer, <code>trained</code> parameter is set to <code>false</code> so that its weights are not updated during training whereas the last layer(s) have been substituted with new layers and these must be trained. Particularly two fully connected hidden layers with <code>512</code> and <code>1024</code> neurons and and relu activation function have been added. After these layers a Dropout layer is used with <code>rate</code> <code>0.2</code>. This means that during each epoch of training <code>20%</code> of the neurons are randomly discarded.</p>
<p>What layers does this dropout layer affect? Does it affect the whole network, including the pretrained layers for which <code>layer.trainable=false</code> has been set, or does it affect only the newly added layers? Or does it affect only the previous layer (i.e., the one with <code>1024</code> neurons)?</p>
<p>In other words, which layer(s) do the neurons that are turned off during each epoch by the dropout belong to?</p>
<pre><code>import os
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.applications.inception_v3 import InceptionV3
local_weights_file = 'weights.h5'
pre_trained_model = InceptionV3(input_shape = (150, 150, 3),
include_top = False,
weights = None)
pre_trained_model.load_weights(local_weights_file)
for layer in pre_trained_model.layers:
layer.trainable = False
# pre_trained_model.summary()
last_layer = pre_trained_model.get_layer('mixed7')
last_output = last_layer.output
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add two fully connected layers with 512 and 1,024 hidden units and ReLU activation
x = layers.Dense(512, activation='relu')(x)
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense (1, activation='sigmoid')(x)
model = Model( pre_trained_model.input, x)
model.compile(optimizer = RMSprop(lr=0.0001),
loss = 'binary_crossentropy',
metrics = ['accuracy'])
</code></pre> | 2020-09-04 09:42:22.477000+00:00 | 2020-09-04 10:17:29.287000+00:00 | null | python|tensorflow|keras|transfer-learning|dropout | ['https://arxiv.org/pdf/1207.0580.pdf', 'https://towardsdatascience.com/understanding-and-implementing-dropout-in-tensorflow-and-keras-a8a3a02c1bfa', 'https://towardsdatascience.com/dropout-on-convolutional-layers-is-weird-5c6ab14f19b2', 'https://towardsdatascience.com/machine-learning-part-20-dropout-keras-layers-explained-8c9f6dc4c9ab'] | 4 |
32,340,573 | <p>Re-enumerating the PCIe bus/tree via <code>echo 1 > /sys/bus/pci/rescan</code> is the correct solution. We are using it the same way as you described it.</p>
<p>We are using <code>echo 1 > $pcidevice/remove</code> to disconnect the driver from the device and to detach the device from the tree. The driver (xillybus) is not unloaded, just disconnected. </p>
<p>A better solution is to rescan only the node where your FPGA is attached to. This reduces the over all impact for the system.</p>
<p>This technique is used in the <a href="http://arxiv.org/pdf/1508.06843.pdf" rel="noreferrer">RC3E</a> FPGA cloud system.</p> | 2015-09-01 20:44:17.940000+00:00 | 2015-09-04 16:49:57.460000+00:00 | 2015-09-04 16:49:57.460000+00:00 | null | 32,334,870 | <p>I have an FPGA (Like most of the people asking this question) that gets configured after my Linux kernel does the initial PCIe bus scan and enumeration. As you can guess, the FPGA implements a PCIe endpoint.</p>
<p>I would Like to have the PCIe core re-enumerate the ENTIRE PCIe bus so that my FPGA will then show up and I can load my driver module. I would also like the ability to SWAP the FPGA load out for a different configuration. By this I mean I would like to be able to:</p>
<ol>
<li>Boot Linux</li>
<li>Configure FPGA</li>
<li>Enumerate PCIe endpoint and load module</li>
<li>Remove PCIe endpoint</li>
<li>Re-configure FPGA</li>
<li>Re-enumerate PCIe endpoint</li>
</ol>
<p>All without rebooting Linux</p>
<p>Here are solutions that have been proposed elsewhere but do not solve the problem.</p>
<p><code>echo 1 > /sys/bus/pci/rescan</code> This seems to work (only sometimes) and it does not work if I want to hotswap the FPGA load after it was first enumerated.</p>
<p>Can the Hotplug/power managment facilities of PCIe be used to make this work? If so is there any good resources for how to use the Hotplug system with PCIe? (LDD does not quite cover it thoroughly enough)</p> | 2015-09-01 14:57:30.640000+00:00 | 2020-10-08 00:22:40.563000+00:00 | 2020-10-05 20:58:05.053000+00:00 | linux-kernel|linux-device-driver|pci-e|hotplugging | ['http://arxiv.org/pdf/1508.06843.pdf'] | 1 |
44,359,824 | <p>I finally went for the <a href="https://arxiv.org/pdf/0803.0476.pdf" rel="nofollow noreferrer">Louvain method</a> since it was the best fit for my problem and also there was a <a href="http://perso.crans.org/aynaud/communities/" rel="nofollow noreferrer">library</a> already implemented that speaks with Networkx.</p> | 2017-06-04 23:29:28.983000+00:00 | 2017-06-04 23:29:28.983000+00:00 | null | null | 44,287,447 | <p>I have this highly dense graph I want to cluster, I was wondering which is the best algorithm for this case scenario. I'd like to generate a considerable amount of subgroups.</p>
<p>I'm using Python's library <a href="https://networkx.github.io/" rel="nofollow noreferrer">Networkx</a> to generate the graph.</p>
<p><a href="https://i.stack.imgur.com/qJqRe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qJqRe.png" alt="my graph"></a></p> | 2017-05-31 14:26:33.040000+00:00 | 2017-06-04 23:29:28.983000+00:00 | null | python|algorithm|graph|cluster-analysis|hierarchical-clustering | ['https://arxiv.org/pdf/0803.0476.pdf', 'http://perso.crans.org/aynaud/communities/'] | 2 |
15,441,666 | <h2>In a nutshell</h2>
<p>You say that your compiler is Visual C++ 2010 Express.
I do not have access to this compiler, but I understand that it generates programs that initially configure the x87 CPU to use 53 bits of precision, in order to emulate IEEE 754 double-precision computations as closely as possible.</p>
<p>Unfortunately, “as closely as possible” is not always close enough. Historical 80-bit floating-point registers can have their significand limited in width for the purpose of emulating double-precision, but they always retain a full range for the exponent. The difference shows in particular when manipulating denormals (like your <code>y</code>).</p>
<h2>What happens</h2>
<p>My explanation would be that in <code>printf("%23.16e\n", 1.6*y);</code>, <code>1.6*y</code> is computed as an 80-bit reduced-significand full-exponent number (it is thus a normal number), then converted to IEEE 754 double-precision (resulting in a denormal), then printed.</p>
<p>On the other hand, in <code>printf("%23.16e\n", x + 1.6*y);</code>, <code>x + 1.6*y</code> is computed with all 80-bit reduced-significand full-exponent numbers (again all intermediate results are normal numbers), then converted to IEEE 754 double-precision, then printed.</p>
<p>This would explain why <code>1.6*y</code> prints the same as <code>2.0*y</code> but has a different effect when added to <code>x</code>. The number that is printed is a double-precision denormal. The number that is added to <code>x</code> is an 80-bit reduced-significand full-exponent normal number (not the same one).</p>
<h2>What happens with other compilers when generating x87 instructions</h2>
<p>Other compilers, like GCC, do not configure the x87 FPU to manipulate 53-bit significands. This can have the same consequences (in this case <code>x + 1.6*y</code> would be computed with all 80-bit full significand full exponent numbers, and then converted to double-precision for printing or storing in memory). In this case, the issue is noticeable even more often (you do not need to involve denormals or infinite numbers to notice differences).</p>
<p>This <a href="http://arxiv.org/abs/cs/0701192">article</a> by David Monniaux contains all the details you may wish for and more.</p>
<h2>Removing the unwanted behavior</h2>
<p>To get rid of the problem (if you consider it to be one), find the flag that tells your compiler to generate SSE2 instructions for floating-point. These implement exactly IEEE 754 semantics for single- and double-precision.</p> | 2013-03-15 20:36:51.943000+00:00 | 2013-03-15 20:49:41.653000+00:00 | 2013-03-15 20:49:41.653000+00:00 | null | 15,441,139 | <p>I'm having trouble understanding the output of this program</p>
<pre><code>int main()
{
double x = 1.8939201459282359e-308;
double y = 4.9406564584124654e-324;
printf("%23.16e\n", 1.6*y);
printf("%23.16e\n", 1.7*y);
printf("%23.16e\n", 1.8*y);
printf("%23.16e\n", 1.9*y);
printf("%23.16e\n", 2.0*y);
printf("%23.16e\n", x + 1.6*y);
printf("%23.16e\n", x + 1.7*y);
printf("%23.16e\n", x + 1.8*y);
printf("%23.16e\n", x + 1.9*y);
printf("%23.16e\n", x + 2.0*y);
}
</code></pre>
<p>The output is</p>
<pre><code>9.8813129168249309e-324
9.8813129168249309e-324
9.8813129168249309e-324
9.8813129168249309e-324
9.8813129168249309e-324
1.8939201459282364e-308
1.8939201459282364e-308
1.8939201459282369e-308
1.8939201459282369e-308
1.8939201459282369e-308
</code></pre>
<p>I'm using IEEE arithmetic. The variable <code>y</code> holds the smallest possible IEEE number. The first five prints show a number which is twice y as I would expect. What is confusing me is that the next five prints show different numbers. If <code>1.6*y</code> is the same as <code>2.0*y</code> then how can <code>x + 1.6*y</code> be different from <code>x + 2.0*y</code>?</p> | 2013-03-15 19:59:53.103000+00:00 | 2013-03-15 20:49:41.653000+00:00 | 2013-03-15 20:05:45.960000+00:00 | c|floating-point|floating-accuracy|ieee-754 | ['http://arxiv.org/abs/cs/0701192'] | 1 |
61,315,835 | <p>You could apply a sort of compression technique as described <a href="https://github.com/ajauhri/bignum_compression" rel="nofollow noreferrer">here</a>. Start from Section 2 of the <a href="https://arxiv.org/pdf/1509.05505.pdf" rel="nofollow noreferrer">paper</a>. </p>
<p>Disclaimer -- I am one of the authors of the paper. Feel free to drop a line here if it seems like a good option for your problem, and you have any queries. </p> | 2020-04-20 05:45:14.497000+00:00 | 2020-04-20 05:45:14.497000+00:00 | null | null | 61,313,780 | <p>I am trying to make a function which converts 2D coordinates <strong>(X, Y)</strong> into one single number (Does not matter if integer or float), but what matters is speed because I need to call that function more than 50 times per frame and 60 frames a second. Which is a lot. Additionally the function can not mirror itself. What I mean by that is I need a different answer when using something like (10, 50) or (-10, -50).</p>
<p>Because with something like <code>x * y</code> and the numbers <strong>(10, 50)</strong> the output is <code>500</code>; now when we change the numbers into the opposite of that, <strong>(-10, -50)</strong>, the output is still <code>500</code>, which is not wanted.</p>
<p>But at the same time anything with exponential growth is too slow. Something like <code>2 ** x * (2 * y + 1)</code>, even though it is not mirroring, has the problem of being too slow with large numbers <strong>(100 000 and more)</strong>, and I would like to push that lag-free zone as far as possible.</p>
<p>I hope I explained it clearly enough.</p> | 2020-04-20 01:21:51.700000+00:00 | 2020-04-20 12:28:35.280000+00:00 | 2020-04-20 12:28:35.280000+00:00 | python|encoding|coordinates | ['https://github.com/ajauhri/bignum_compression', 'https://arxiv.org/pdf/1509.05505.pdf'] | 2 |
41,709,602 | <blockquote>
<p>Is it possible to determine the features of the image from the hidden layers that will lead to "yes"? </p>
</blockquote>
<p>Yes, it is. Have a look at</p>
<blockquote>
<p>Zeiler, M.D. and Fergus, R., 2014, September. <a href="https://arxiv.org/abs/1311.2901" rel="nofollow noreferrer">Visualizing and understanding convolutional networks</a>. In European Conference on Computer Vision (pp. 818-833). Springer International Publishing.</p>
</blockquote>
<h2>Summary</h2>
<p>There are three main ideas:</p>
<ol>
<li><em>Training data argmax method</em>: Pump your data through the network. Record for the neuron which you are interested which caused the highest activation.</li>
<li><em>Occlusion sensitivity analysis</em>: Cover a part of the image. Push the occluded image through the network. How did the score change? If it was about the same, the important features are likely not in that part of the image.</li>
<li><em>Gradient methods</em>: Train a "reconstruction network" which reconstructs the activation. Then set the neuron you are interested in to maximum activation, the rest to no activation. Reconstuct what could cause this behavior.</li>
</ol> | 2017-01-18 00:44:45.030000+00:00 | 2017-01-18 00:44:45.030000+00:00 | null | null | 41,703,594 | <p>Is it possible to determine the features of the image from the hidden layers that will lead to "yes"?
Like suppose I train the CNN with 1000 images, then I would like to know from the intermediate hidden layers about which features actually are leading to the image being tagged with a yes finally.
Is it possible?
And also how many training examples are required to converge for a binary classification using CNN?</p> | 2017-01-17 17:39:25.547000+00:00 | 2017-01-18 00:44:45.030000+00:00 | null | deep-learning|conv-neural-network | ['https://arxiv.org/abs/1311.2901'] | 1 |
69,450,719 | <p>The number of attention heads is irrespective of the number of (encoder) layers.
However, there is an inherent tie between the hidden size of each model (768 for <code>bert-base</code>, and 1024 for <code>bert-large</code>), which is explained in <a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">the original Transformers paper</a>.
Essentially, the choice made by the authors is that the self-attention block size (<code>d_k</code>) equals the hidden dimension (<code>d_hidden</code>), divided by the number of heads (<code>h</code>), or formally</p>
<pre><code>d_k = d_hidden / h
</code></pre>
<p>Since the standard choice seems to be <code>d_k = 64</code>, we can infer the final size from our parameters:</p>
<pre><code>h = d_hidden / d_k = 1024 / 64 = 16
</code></pre>
<p>which is exactly the value you are looking at in <code>bert-large</code>.</p> | 2021-10-05 12:45:22.403000+00:00 | 2021-10-05 12:45:22.403000+00:00 | null | null | 69,436,845 | <p>From the literature I read,</p>
<p>Bert Base has 12 encoder layers and 12 attention heads. Bert Large has 24 encoder layers and 16 attention heads.</p>
<p>Why is Bert large having 16 attentions heads ?</p> | 2021-10-04 13:29:17.070000+00:00 | 2021-10-05 12:45:22.403000+00:00 | null | bert-language-model|transformer-model | ['https://arxiv.org/pdf/1706.03762.pdf'] | 1 |
25,846,384 | <p>I would expect this to happen in accordance with the group mean distribution on alpha. If you think about it, if the group mean shifts around it will influence all alphas to the same degree. You could confirm this by doing a scatter plot of the group mean trace against some of the alphas. Hierarchical models are in general difficult for most samplers because of these complex interdependencies between group mean and variance and the individual RVs. See <a href="http://arxiv.org/abs/1312.0906" rel="nofollow">http://arxiv.org/abs/1312.0906</a> for more information on this.</p>
<p>In your specific case, the trace doesn't look too worrisome to me, especially after iteration 1000. So you could probably just discard those as burn-in and keep in mind that you have some sampling noise but probably got the right posterior overall. In addition, you might want to perform a posterior predictive check to see if the model can reproduce the patterns in your data you are interested in.</p>
<p>Alternatively, you could try to estimate a better hessian using <code>pm.find_hessian()</code>, e.g. <a href="https://github.com/pymc-devs/pymc/blob/3eb2237a8005286fee32776c304409ed9943cfb3/pymc/examples/hierarchical.py#L51" rel="nofollow">https://github.com/pymc-devs/pymc/blob/3eb2237a8005286fee32776c304409ed9943cfb3/pymc/examples/hierarchical.py#L51</a></p>
<p>I also found this paper which looks interesting (haven't read it yet but might be cool to implement in PyMC3): arxiv-web3.library.cornell.edu/pdf/1406.3843v1.pdf</p> | 2014-09-15 10:51:30.090000+00:00 | 2014-09-15 10:51:30.090000+00:00 | null | null | 25,515,818 | <p>I'm trying out PyMC3 with a simple multilevel model. When using both fake and real data the traces of the random effect distributions move with each other (see plot below) and appear to be offsets of the same trace. Is this an expected artifact of NUTS or an indication of a problem with my model?</p>
<p>Here is a traceplot on real data:</p>
<p><img src="https://i.stack.imgur.com/w4yvf.png" alt="traceplot"></p>
<p>Here is an <a href="http://nbviewer.ipython.org/gist/bmabey/07e600887276becbaf4f" rel="nofollow noreferrer">IPtyhon notebook</a> of the model and the functions used to create the fake data. Here is the <a href="https://gist.github.com/bmabey/07e600887276becbaf4f" rel="nofollow noreferrer">corresponding gist</a>.</p> | 2014-08-26 21:56:41.457000+00:00 | 2014-09-15 10:51:30.090000+00:00 | 2014-08-28 02:38:22.490000+00:00 | pymc|pymc3 | ['http://arxiv.org/abs/1312.0906', 'https://github.com/pymc-devs/pymc/blob/3eb2237a8005286fee32776c304409ed9943cfb3/pymc/examples/hierarchical.py#L51'] | 2 |
55,141,474 | <p>Simple explanation: <strong>you can't update the same key several times inside the same block</strong>: if you send several transactions updating the same key and all transactions get processed in the same block, only one of them (I think the first one) will be processed and the other transactions will be rejected. That's why in your case, when you send txns very close in time, only one is processed, and if you add a sleep between calls, both get processed correctly (the sleep must be equal or higher than your block time). There are several ways to handle this situation, one could be the use of queues, and of course design your internal architecture in a way you can minimize this kind of issues.</p>
<p><strong>Update:</strong></p>
<blockquote>
<p>Is it not possible to set the block size to max 1 transactions?</p>
</blockquote>
<p>Can't answer with confidence without further reading/investigation. Not sure about the implications in terms of stability and performance of the network using such a configuration. There's an interesting paper about performance and optimization of HLF written a year ago (may 2018) here <a href="https://arxiv.org/pdf/1805.11390.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1805.11390.pdf</a> which may be of help. Maybe this weekend I can get some time to run my own tests. Let me know if you find something else about this topic because it seems interesting to me, though I smell it's not going to work fine because the network has an inherent latency itself so you can't reach consensus in near to 0 time.</p>
<blockquote>
<p>Is this same with Sawtooth?</p>
</blockquote>
<p>Don't have experience with that platform, but I think the same idea applies: a blockchain is a network that needs time to reach consensus about a fact, so trying to reach that consensus in lesser time than the inherent latency of the network plus the time of executing the consensus algorithms, won't work in any case.</p> | 2019-03-13 12:03:31.427000+00:00 | 2019-03-14 10:27:01.917000+00:00 | 2019-03-14 10:27:01.917000+00:00 | null | 55,138,646 | <p>I am just trying to learn the Hyperledger Fabric and I made a little test:</p>
<pre><code>type Valami struct {
ObjectType string `json:"docType" binding:"required"`
Value string `json:"value" binding:"required"`
ID string `json:"id" binding:"required"`
}
func (t *SimpleChaincode) test(stub shim.ChaincodeStubInterface) pb.Response {
id := "104"
asbytes, err := stub.GetState(id) //get the marble from chaincode state
obj := &Valami{}
if err != nil {
return shim.Error("Failed to get state ")
} else if asbytes == nil {
fmt.Println("not found")
objtype := "test"
obj = &Valami{objtype, "", id}
} else {
fmt.Println("found")
err = json.Unmarshal(asbytes, obj)
if err != nil {
return shim.Error("Can not process to a JSON type!")
}
}
now := time.Now()
value := now.String()
fmt.Println("value: "+value)
obj.Value = value
// update
JSONasBytes, err := json.Marshal(obj)
if err != nil {
return shim.Error("Can not update the " + obj.ID + ". Reason: "+err.Error())
}
// save in state
err = stub.PutState(obj.ID, JSONasBytes)
if err != nil {
return shim.Error("Can not save "+ obj.ID + ". Reason: "+err.Error())
}
return shim.Success([]byte("value: "+obj.Value))
}
</code></pre>
<p>After I commit this twice quickly after each other:</p>
<pre><code>docker exec -e CORE_PEER_LOCALMSPID=Org1MSP -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/[email protected]/msp cli peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mur2 -c '{"Args":["test" ]}'
docker exec -e CORE_PEER_LOCALMSPID=Org1MSP -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/[email protected]/msp cli peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mur2 -c '{"Args":["test" ]}'
</code></pre>
<p>The return: </p>
<pre><code>2019-03-13 09:33:05.297 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 001 Chaincode invoke successful. result: status:200 payload:"value: 2019-03-13 09:33:05.292254505 +0000 UTC m=+391.210396576"
2019-03-13 09:33:05.776 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 001 Chaincode invoke successful. result: status:200 payload:"value: 2019-03-13 09:33:05.770792084 +0000 UTC m=+391.688934322"
</code></pre>
<p>So it looks like everything fine. However when I check the value: </p>
<pre><code>"{\"docType\":\"test\",\"id\":\"104\",\"value\":\"2019-03-13 09:33:05.292254505 +0000 UTC m=+391.210396576\"}"
</code></pre>
<p>So actually the second commit does not come across. If I put a sleep between the two commits, they work. So I guess the first one is not finishing before the second start and some reason the second dropped. I have not expected this, because it could happen any time on a network. Could somebody explain for me what happening in the background and how we can handle this kind of situation?</p> | 2019-03-13 09:42:31.623000+00:00 | 2019-04-25 20:11:23.247000+00:00 | null | hyperledger-fabric | ['https://arxiv.org/pdf/1805.11390.pdf'] | 1 |
42,989,687 | <p>While MFCCs have indeed been used in music information retrieval research (for genre classification etc...), in this case (pitch detection) you may want to use a semi-tone filterbank or constant Q transform as a first information reduction step. These transformations match better with musical pitch.</p>
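<p>A minimal sketch of the constant-Q idea (assuming the <code>librosa</code> library is available; the file name is a placeholder):</p>
<pre><code>import librosa

y, sr = librosa.load("example.wav")
# one bin per semitone, so the frequency axis lines up with musical pitch
cqt = librosa.cqt(y, sr=sr, bins_per_octave=12)
</code></pre>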
<p>But I think it's also worth trying to use the audio samples directly with RNNs, in case you have a huge number of samples. In theory, the RNNs should be able to learn the wave patterns corresponding to particular pitches.</p>
<p>From your description, it's not entirely clear what type of "pitch recognition" you're aiming for: monophonic instruments (constant timbre, and only 1 pitch sounding at a time)? polyphonic (constant timbre, but multiple pitches may be sounding simultaneously)? multiple instruments playing together (multiple timbres, multiple pitches)? or even a full mix with both tonal and percussive sounds? The hardness of these use cases roughly increases in the order I mentioned them, so you may want to start with monophonic pitch recognition first.</p>
<p>To obtain the necessary amount of training examples, you could use a physical model or a multi-sampled virtual instrument to generate the audio samples for particular pitches in a controlled way. This way, you can quickly create your training material instead of recording it and labeling it manually. But I would advise you to at least add some background noise (random noise, or very low-level sounds from different recordings) to the created audio samples, or your data may be too artificial and lead to a model that doesn't work well once you want to use it in practice.</p>
<p>Here is a paper that might give you some ideas on the subject:
An End-to-End Neural Network for Polyphonic Piano Music Transcription
(Siddharth Sigtia, Emmanouil Benetos, and Simon Dixon)
<a href="https://arxiv.org/pdf/1508.01774.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1508.01774.pdf</a></p> | 2017-03-24 01:06:28.253000+00:00 | 2017-03-24 01:06:28.253000+00:00 | null | null | 42,719,196 | <p>I want to create sound or pitch recognition with recurrent deep neural network. And I'm wondering with what input will I get best results.</p>
<p>Should I feed the DNN with amplitudes or with the FFT (Fast Fourier transform) result?</p>
<p>Is there any other format that is known to produce good results and fast learning?</p> | 2017-03-10 13:09:20.123000+00:00 | 2017-03-24 01:06:28.253000+00:00 | 2017-03-10 13:16:18.150000+00:00 | neural-network|recurrent-neural-network | ['https://arxiv.org/pdf/1508.01774.pdf'] | 1 |
40,813,878 | <blockquote>
<p>What should I set my "num_output"?</p>
</blockquote>
<p>Before deciding what you should set <code>num_output</code> to, let's explain what it means. You can view the two sides of the Siamese network, <code>data -> fc7</code> and <code>data_p -> fc7_p</code>, as 2 feature extractors. Each one extracts a feature vector, e.g. <code>fc7</code> and <code>fc7_p</code>, from the images in the corresponding data layer. So <code>num_output</code> defines the dimension of the extracted feature vector.</p>
<p>During training, the <code>ContrastiveLoss</code> layer tries to minimize the distance between the 2 extracted feature vectors when the images they represent are similar (<code>label == 1</code>) and to maximize the distance when they are dissimilar (<code>label == 0</code>). In other words, the smaller the distance between the feature vectors, the more similar the images are.</p>
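<p>Concretely, with <code>d = ||fc7 - fc7_p||</code> (the Euclidean distance), the layer computes roughly <code>label * d^2 + (1 - label) * max(margin - d, 0)^2</code> averaged over the batch (if I recall Caffe's implementation correctly), so similar pairs are pulled together and dissimilar pairs are pushed at least <code>margin</code> apart.</p>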
<p>So what's the optimal dimension of the feature vector to best capture the information indicating similarity? Or, what should you set <code>num_output</code> to? There may not be an exact value; it depends on the encoding quality of the feature extractor (you may view the feature as a code for the image) and on how hard it is to recognize the similarity of the images. Basically, if the network (feature extractor) is deep and it is not too hard to recognize the similarity, you can choose a relatively small <code>num_output</code>, e.g. 200, because a larger network may encode the feature well and make it more discriminative. If not, you can try a larger value, e.g. 500 or 1000, or try a more complicated network.</p>
<p>If you want to try a <code>MultinomialLogisticLoss</code> instead of a <code>ContrastiveLoss</code> layer, you should first fuse the 2 feature vectors <code>fc7</code> and <code>fc7_p</code> into 1 using a layer like <code>CONCAT</code>, and then feed it into a <code>SOFTMAX_LOSS</code> layer, like this:</p>
<pre><code>...#original layers
layers {
name: "concat"
type: CONCAT
bottom: "fc7"
bottom: "fc7_p"
top: "fc_concat" # concatenate fc7 and fc7_p along channel axis
}
layer {
name: "fc_cls"
type: INNER_PRODUCT
bottom: "fc_concat"
top: "fc_cls"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 2 # a binary classification problem in this case
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "accuracy"
type: ACCURACY
bottom: "fc_cls"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: SOFTMAX_LOSS
bottom: "fc_cls"
bottom: "label"
top: "loss"
}
</code></pre>
<hr>
<h2>Update</h2>
<blockquote>
<p>Which is the best method to implement in order to compare similarity and use it for deploy, Constrastive Loss or SoftMax Loss?</p>
</blockquote>
<p>Softmax Loss is simple and easy to deploy. But it can only give you a binary prediction, namely similar or dissimilar. The probability distribution over the 2 classes (similar, dissimilar) it gives is often too hard (highly peaked), e.g. <code>[0.9*, 0.0*]</code> or <code>[0.0*, 0.9*]</code>, which in many cases will not reflect the true degree of input similarity well.</p>
<p>With Contrastive Loss, you get a discriminative feature vector for each image, and you can use the vectors to compute a probability of similarity, as the CVPR 2005 paper <a href="http://cs.nyu.edu/~sumit/research/assets/cvpr05.pdf" rel="noreferrer">Learning a Similarity Metric Discriminatively, with Application to Face Verification</a> did in Section 4.1 (the key point is to compute a multivariate normal density using the feature vectors generated from images belonging to the same subject). You can also use a threshold on the distance to control <a href="https://en.wikipedia.org/wiki/False_positive_rate" rel="noreferrer">the false positive rate and the false negative rate</a> of the model, and sweep it to get a <a href="https://en.wikipedia.org/wiki/Receiver_operating_characteristic" rel="noreferrer">ROC curve</a> to better evaluate the model.</p>
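<p>At deploy time the comparison itself can stay very simple. A rough sketch (plain NumPy; the names are made up, and the feature vectors would come from forwarding the two images through the trained branches):</p>
<pre><code>import numpy as np

def compare(feat_a, feat_b, threshold=1.0):
    """Compare two fc7 feature vectors extracted by the two branches."""
    d = np.linalg.norm(feat_a - feat_b)   # Euclidean distance in feature space
    return d, bool(d < threshold)         # smaller distance => more similar

# Sweeping `threshold` over a validation set gives you the ROC curve mentioned above.
</code></pre>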
<p>By the way, to dig out more CNN architectures for predicting similarity, you can refer to the CVPR 2015 paper <a href="https://arxiv.org/abs/1504.03641" rel="noreferrer">Learning to Compare Image Patches via Convolutional Neural Networks</a>.</p> | 2016-11-26 00:40:51.760000+00:00 | 2016-11-30 00:46:41.547000+00:00 | 2016-11-30 00:46:41.547000+00:00 | null | 40,744,179 | <p>I'm trying to implement a siamese network in caffe in which it is composed of two imagenets that don't share weights. So what I am basically trying to do is give each network an image, and in the end try to find out the distance between them for similarity, below is my prototxt. So my main question is what should I set my "num_output" too? I have only 2 classes for my training, 0 for wither they are not alike, and 1 for if they are similar.</p>
<pre><code>name: "Siamese_ImageNet"
layers {
name: "data"
type: IMAGE_DATA
top: "data"
top: "label"
image_data_param {
source: "train1.txt"
batch_size: 32
new_height: 256
new_width: 256
}
include: { phase: TRAIN }
}
layers {
name: "data"
type: IMAGE_DATA
top: "data"
top: "label"
image_data_param {
source: "test1.txt"
batch_size: 32
new_height: 256
new_width: 256
}
include: { phase: TEST }
}
layers {
name: "data_p"
type: IMAGE_DATA
top: "data_p"
top: "label_p"
image_data_param {
source: "train2.txt"
batch_size: 32
new_height: 256
new_width: 256
}
include: { phase: TRAIN }
}
layers {
name: "data_p"
type: IMAGE_DATA
top: "data_p"
top: "label_p"
image_data_param {
source: "test2.txt"
batch_size: 32
new_height: 256
new_width: 256
}
include: { phase: TEST }
}
layers {
name: "conv1"
type: CONVOLUTION
bottom: "data"
top: "conv1"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
convolution_param {
num_output: 96
kernel_size: 11
stride: 4
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layers {
name: "relu1"
type: RELU
bottom: "conv1"
top: "conv1"
}
layers {
name: "pool1"
type: POOLING
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layers {
name: "norm1"
type: LRN
bottom: "pool1"
top: "norm1"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layers {
name: "conv2"
type: CONVOLUTION
bottom: "norm1"
top: "conv2"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
convolution_param {
num_output: 256
pad: 2
kernel_size: 5
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layers {
name: "relu2"
type: RELU
bottom: "conv2"
top: "conv2"
}
layers {
name: "pool2"
type: POOLING
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layers {
name: "norm2"
type: LRN
bottom: "pool2"
top: "norm2"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layers {
name: "conv3"
type: CONVOLUTION
bottom: "norm2"
top: "conv3"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
convolution_param {
num_output: 384
pad: 1
kernel_size: 3
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layers {
name: "relu3"
type: RELU
bottom: "conv3"
top: "conv3"
}
layers {
name: "conv4"
type: CONVOLUTION
bottom: "conv3"
top: "conv4"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
convolution_param {
num_output: 384
pad: 1
kernel_size: 3
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layers {
name: "relu4"
type: RELU
bottom: "conv4"
top: "conv4"
}
layers {
name: "conv5"
type: CONVOLUTION
bottom: "conv4"
top: "conv5"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layers {
name: "relu5"
type: RELU
bottom: "conv5"
top: "conv5"
}
layers {
name: "pool5"
type: POOLING
bottom: "conv5"
top: "pool5"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layers {
name: "fc6"
type: INNER_PRODUCT
bottom: "pool5"
top: "fc6"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
inner_product_param {
num_output: 4096
weight_filler {
type: "gaussian"
std: 0.005
}
bias_filler {
type: "constant"
value: 1
}
}
}
layers {
name: "relu6"
type: RELU
bottom: "fc6"
top: "fc6"
}
layers {
name: "drop6"
type: DROPOUT
bottom: "fc6"
top: "fc6"
dropout_param {
dropout_ratio: 0.5
}
}
layers {
name: "fc7"
type: INNER_PRODUCT
bottom: "fc6"
top: "fc7"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
inner_product_param {
num_output: 2
weight_filler {
type: "gaussian"
std: 0.005
}
bias_filler {
type: "constant"
value: 1
}
}
}
layers {
name: "relu7"
type: RELU
bottom: "fc7"
top: "fc7"
}
layers {
name: "drop7"
type: DROPOUT
bottom: "fc7"
top: "fc7"
dropout_param {
dropout_ratio: 0.5
}
}
layers {
name: "conv1_p"
type: CONVOLUTION
bottom: "data_p"
top: "conv1_p"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
convolution_param {
num_output: 96
kernel_size: 11
stride: 4
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layers {
name: "relu1_p"
type: RELU
bottom: "conv1_p"
top: "conv1_p"
}
layers {
name: "pool1_p"
type: POOLING
bottom: "conv1_p"
top: "pool1_p"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layers {
name: "norm1_p"
type: LRN
bottom: "pool1_p"
top: "norm1_p"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layers {
name: "conv2_p"
type: CONVOLUTION
bottom: "norm1_p"
top: "conv2_p"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
convolution_param {
num_output: 256
pad: 2
kernel_size: 5
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layers {
name: "relu2_p"
type: RELU
bottom: "conv2_p"
top: "conv2_p"
}
layers {
name: "pool2_p"
type: POOLING
bottom: "conv2_p"
top: "pool2_p"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layers {
name: "norm2_p"
type: LRN
bottom: "pool2_p"
top: "norm2_p"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layers {
name: "conv3_p"
type: CONVOLUTION
bottom: "norm2_p"
top: "conv3_p"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
convolution_param {
num_output: 384
pad: 1
kernel_size: 3
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layers {
name: "relu3_p"
type: RELU
bottom: "conv3_p"
top: "conv3_p"
}
layers {
name: "conv4_p"
type: CONVOLUTION
bottom: "conv3_p"
top: "conv4_p"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
convolution_param {
num_output: 384
pad: 1
kernel_size: 3
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layers {
name: "relu4_p"
type: RELU
bottom: "conv4_p"
top: "conv4_p"
}
layers {
name: "conv5_p"
type: CONVOLUTION
bottom: "conv4_p"
top: "conv5_p"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layers {
name: "relu5_p"
type: RELU
bottom: "conv5_p"
top: "conv5_p"
}
layers {
name: "pool5_p"
type: POOLING
bottom: "conv5_p"
top: "pool5_p"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layers {
name: "fc6_p"
type: INNER_PRODUCT
bottom: "pool5_p"
top: "fc6_p"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
inner_product_param {
num_output: 4096
weight_filler {
type: "gaussian"
std: 0.005
}
bias_filler {
type: "constant"
value: 1
}
}
}
layers {
name: "relu6_p"
type: RELU
bottom: "fc6_p"
top: "fc6_p"
}
layers {
name: "drop6_p"
type: DROPOUT
bottom: "fc6_p"
top: "fc6_p"
dropout_param {
dropout_ratio: 0.5
}
}
layers {
name: "fc7_p"
type: INNER_PRODUCT
bottom: "fc6_p"
top: "fc7_p"
blobs_lr: 1
blobs_lr: 2
weight_decay: 1
weight_decay: 0
inner_product_param {
num_output: 2
weight_filler {
type: "gaussian"
std: 0.005
}
bias_filler {
type: "constant"
value: 1
}
}
}
layers {
name: "relu7_p"
type: RELU
bottom: "fc7_p"
top: "fc7_p"
}
layers {
name: "drop7_p"
type: DROPOUT
bottom: "fc7_p"
top: "fc7_p"
dropout_param {
dropout_ratio: 0.5
}
}
layers {
name: "loss"
type: CONTRASTIVE_LOSS
contrastive_loss_param {
margin: 1.0
}
bottom: "fc7"
bottom: "fc7_p"
bottom: "label"
top: "loss"
}
</code></pre>
<p>My training file structure:
0 is dissimilar, 1 is similar</p>
<pre><code> train1.txt:
/aer/img1_1.jpg 0
/aer/img1_2.jpg 1
/aer/img1_3.jpg 1
train2.txt:
/tpd/img2_1.jpg 0
/tpd/img2_2.jpg 1
/tpd/img2_3.jpg 1
</code></pre> | 2016-11-22 14:14:33.723000+00:00 | 2017-02-17 17:23:20.760000+00:00 | 2016-12-04 16:31:39.073000+00:00 | machine-learning|computer-vision|neural-network|deep-learning|caffe | ['http://cs.nyu.edu/~sumit/research/assets/cvpr05.pdf', 'https://en.wikipedia.org/wiki/False_positive_rate', 'https://en.wikipedia.org/wiki/Receiver_operating_characteristic', 'https://arxiv.org/abs/1504.03641'] | 4 |
15,948,634 | <p>The breakthrough with graph databases is not only performance, it is more about the concept: your routing algorithms deal with <strong>single relational graphs</strong> (that is, graphs where links are all of the same type) whereas with graph databases you have a <strong>multi-relational graph</strong>.</p>
<p>This enables you to compute the shortest path between nodes while following only a specific kind of edge or avoiding another type; a sketch of what that looks like in practice follows.</p>
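<p>As one illustration of the idea (a sketch only, expressed in Cypher through the Neo4j Python driver; the labels, relationship types and connection details are made up):</p>
<pre><code>from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
# Only follow ROAD relationships, so e.g. FERRY or RAIL links are ignored.
query = """
MATCH (a:Junction {id: $src}), (b:Junction {id: $dst}),
      p = shortestPath((a)-[:ROAD*]-(b))
RETURN p
"""
with driver.session() as session:
    for record in session.run(query, src=1, dst=42):
        print(record["p"])
</code></pre>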
<p>For more information you should read about <a href="http://arxiv.org/abs/0806.2274" rel="nofollow">the algebra behind graph db</a> and the concept of pipes.</p>
<p>I strongly recommend <a href="http://www.tinkerpop.com/" rel="nofollow">thinkerpop</a> project to start with graph database.</p> | 2013-04-11 12:21:00.343000+00:00 | 2013-04-11 12:21:00.343000+00:00 | null | null | 6,897,546 | <p>My objective is to write a shortest path algorithm for a road network.</p>
<p>Currently my architecture is something like that: I store all the data in the PostGIS enabled PostgreSQL database. I do one <code>SELECT * FROM ways</code>, which takes less than 3 seconds on a table with 100,000 edges (ways) and after that I will apply a (Java, Ruby or anything-based) shortest path algorithm to the graph that already resides in memory. The second operation can take about 1.5 seconds on a graph with 100,000 edges. </p>
<p>So, it takes:</p>
<ul>
<li>2-3 seconds to load all the ways from the database into memory and create a graph (nodes are stored in one table with ways(edges));</li>
<li>1-1.5 seconds to calculate a shortest path on a graph which is already in memory.</li>
</ul>
<p>This is very similar to what pgRouting does (to my knowledge it uses C Boost to store the graph in memory), except pgRouting takes about 2 seconds in total to compute a shortest path on the same data set (yes, it is fast, but it is a black box for me, so I need my own).</p>
<p>But recently I found about Graph databases and about Neo4j. On their site they claim that "Still being able to do these calculations in sub-second speeds on graphs of millions of roads and waypoints makes it possible in many cases to abandon the normal approach of precomputing indexes with K/V stores and be able to put routing into the critical path with the possibility to adapt to the live conditions and build highly personalized and dynamic spatial services.".</p>
<p>So the question is: Will a graph database be faster with my particular problem?</p>
<p>The problem has the following properties:</p>
<ul>
<li>the database consists of one table (ways);</li>
<li>the only query to the database is to get all the ways into the memory (to build a graph);</li>
<li>I do not need scalability, i.e. it is likely that the graph will not grow.</li>
</ul> | 2011-08-01 11:08:43.827000+00:00 | 2013-04-11 12:21:00.343000+00:00 | null | database|graph|neo4j|shortest-path|graph-databases | ['http://arxiv.org/abs/0806.2274', 'http://www.tinkerpop.com/'] | 2 |
72,581,250 | <ul>
<li><code>nn.Embedding()</code> is usually used to map a sparse one-hot vector to a dense vector (e.g. map 'a' to [0.1,0.2,...]) to make computation practical. I do not understand why you try to embed the captions, which look like the ground truth. If you want to compute a loss against them, try <code>nn.CTCLoss()</code> (a minimal usage sketch appears right after this list).</li>
<li>If you are going to send a string to the LSTM, it is recommended to first embed the characters in the string with <code>nn.Embedding()</code>, which makes them dense and practical to compute with. But if the input to the LSTM is something extracted from a CNN (or other modules), it is already dense, and in my view it is not necessary to project it with <code>fc_in</code>.</li>
<li>I often use <code>nn.LSTM()</code> instead of <code>nn.LSTMCell()</code>, since the latter is more cumbersome to use.</li>
</ul>
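<p>A minimal usage sketch of <code>nn.CTCLoss()</code> (all shapes and sizes below are illustrative, not taken from your model):</p>
<pre><code>import torch
import torch.nn as nn

T, N, C = 24, 4, 37                                 # time steps, batch size, classes (36 chars + blank 0)
log_probs = torch.randn(T, N, C).log_softmax(2)     # what an LSTM + linear head would emit
targets = torch.randint(1, C, (N, 6), dtype=torch.long)        # e.g. captcha strings of length 6
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 6, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)   # no need to embed the captions
</code></pre>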
<p>There are some bugs in your code and I fixed them:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch import nn
class LSTM(nn.Module):
def __init__(self, cnn_dim, hidden_size, vocab_size, num_layers=1):
super(LSTM, self).__init__()
self.cnn_dim = cnn_dim # i think this is the input size
self.hidden_size = hidden_size
self.vocab_size = vocab_size # i think this should be the output size
# Building your LSTM cell
self.lstm_cell = nn.LSTMCell(input_size=self.vocab_size, hidden_size=hidden_size)
'''Connect CNN model to LSTM model'''
# output fully connected layer
# CNN does not necessarily need the FCC layers, in this example it is just extracting the features, that gets set to the LSTM which does the actual processing of the features
self.fc_in = nn.Linear(cnn_dim,
vocab_size) # this takes the input from the CNN takes the features from the cnn #cnn_dim = 512, hidden_size = 128
self.fc_out = nn.Linear(hidden_size,
vocab_size) # this is the looper in the LSTM #I think this is correct?
# embedding layer
self.embed = nn.Embedding(num_embeddings=self.vocab_size, embedding_dim=self.vocab_size)
# activations
self.softmax = nn.Softmax(dim=1)
def forward(self, features, captions):
# features: extracted features from ResNet
# captions: label of images
batch_size = features.size(0)
cnn_dim = features.size(1)
hidden_state = torch.zeros((batch_size, self.hidden_size)).cuda() # Initialize hidden state with zeros
cell_state = torch.zeros((batch_size, self.hidden_size)).cuda() # Initialize cell state with zeros
# outputs = torch.empty((batch_size, captions.size(1), self.vocab_size)).cuda()
outputs = torch.Tensor([]).cuda()
captions_embed = self.embed(captions)
'''Design LSTM model for captcha image recognition'''
# Pass the caption word by word for each time step
# It receives an input(x), makes an output(y), and receives this output as an input again recurrently
'''Defined hidden state, cell state, outputs, embedded captions'''
# can be designed to be word by word or character by character
# for t in range(captions).size(1):
for t in range(captions.size(1)):
# for the first time step the input is the feature vector
if t == 0:
# probably have to get the output from the ResNet layer
# use the LSTM cells in here i presume
x = self.fc_in(features)
# hidden_state, cell_state = self.lstm_cell(x[t], (hidden_state, cell_state))
hidden_state, cell_state = self.lstm_cell(x, (hidden_state, cell_state))
x = self.fc_out(hidden_state)
# outputs.append(hidden_state)
outputs = torch.cat([outputs, hidden_state])
# for the 2nd+ time steps
else:
# hidden_state, cell_state = self.lstm_cell(x[t], (hidden_state, cell_state))
hidden_state, cell_state = self.lstm_cell(x, (hidden_state, cell_state))
x = self.fc_out(hidden_state)
# outputs.append(hidden_state)
outputs = torch.cat([outputs, hidden_state])
# build the output tensor
# outputs = torch.stack(outputs, dim=0)
return outputs
m = LSTM(16, 32, 10)
m = m.cuda()
features = torch.randn((2, 16))
features = features.cuda()
captions = torch.randn((2, 10))
captions = torch.clip(captions, 0, 9)
captions = captions.long()
captions = captions.cuda()
m(features, captions)
</code></pre>
<p>This paper may help you somewhat: <a href="https://arxiv.org/abs/1904.01906" rel="nofollow noreferrer">https://arxiv.org/abs/1904.01906</a></p> | 2022-06-11 02:09:23.550000+00:00 | 2022-06-11 02:09:23.550000+00:00 | null | null | 72,569,340 | <p>I am creating a captcha image recognition system. It first extracts the features of the images with ResNet and then uses LSTM to recognize the words and letter in the image. An fc layer is supposed to connect the two. I have not designed a LSTM model before and am very new to machine learning, so I am pretty confused and overwhelmed by this.</p>
<p>I am confused enough that I am not even totally sure what questions I should ask. But here are a couple things that stand out to me:</p>
<ul>
<li>What is the purpose of embedding the captions if the captcha images are all randomized?</li>
<li>Is the linear fc layer in the first part of the for loop the correct way to connect the CNN feature vectors to the LSTM?</li>
<li>Is this a correct use of the LSTM cell in the LSTM?</li>
</ul>
<p>And, in general, if there are any suggestions of general directions to look into, that would be really appreciated.</p>
<p>So far, I have:</p>
<pre><code>class LSTM(nn.Module):
def __init__(self, cnn_dim, hidden_size, vocab_size, num_layers=1):
super(LSTM, self).__init__()
self.cnn_dim = cnn_dim #i think this is the input size
self.hidden_size = hidden_size
self.vocab_size = vocab_size #i think this should be the output size
# Building your LSTM cell
self.lstm_cell = nn.LSTMCell(input_size=self.vocab_size, hidden_size=hidden_size)
'''Connect CNN model to LSTM model'''
# output fully connected layer
# CNN does not necessarily need the FCC layers, in this example it is just extracting the features, that gets set to the LSTM which does the actual processing of the features
self.fc_in = nn.Linear(cnn_dim, vocab_size) #this takes the input from the CNN takes the features from the cnn #cnn_dim = 512, hidden_size = 128
self.fc_out = nn.Linear(hidden_size, vocab_size) # this is the looper in the LSTM #I think this is correct?
# embedding layer
self.embed = nn.Embedding(num_embeddings=self.vocab_size, embedding_dim=self.vocab_size)
# activations
self.softmax = nn.Softmax(dim=1)
def forward(self, features, captions):
#features: extracted features from ResNet
#captions: label of images
batch_size = features.size(0)
cnn_dim = features.size(1)
hidden_state = torch.zeros((batch_size, self.hidden_size)).cuda() # Initialize hidden state with zeros
cell_state = torch.zeros((batch_size, self.hidden_size)).cuda() # Initialize cell state with zeros
outputs = torch.empty((batch_size, captions.size(1), self.vocab_size)).cuda()
captions_embed = self.embed(captions)
'''Design LSTM model for captcha image recognition'''
# Pass the caption word by word for each time step
# It receives an input(x), makes an output(y), and receives this output as an input again recurrently
'''Defined hidden state, cell state, outputs, embedded captions'''
# can be designed to be word by word or character by character
for t in range(captions).size(1):
# for the first time step the input is the feature vector
if t == 0:
# probably have to get the output from the ResNet layer
# use the LSTM cells in here i presume
x = self.fc_in(features)
hidden_state, cell_state = self.lstm_cell(x[t], (hidden_state, cell_state))
x = self.fc_out(hidden_state)
outputs.append(hidden_state)
# for the 2nd+ time steps
else:
hidden_state, cell_state = self.lstm_cell(x[t], (hidden_state, cell_state))
x = self.fc_out(hidden_state)
outputs.append(hidden_state)
# build the output tensor
outputs = torch.stack(outputs,dim=0)
return outputs
</code></pre> | 2022-06-10 05:10:19.950000+00:00 | 2022-06-11 02:09:23.550000+00:00 | null | python|pytorch|conv-neural-network|lstm|captcha | ['https://arxiv.org/abs/1904.01906'] | 1 |
48,726,975 | <p>Before your edits, I started doing some work on a truly async approach:</p>
<p>Let's get the formalities out of the way:</p>
<pre><code>#include <boost/asio.hpp>
#include <boost/process.hpp>
#include <boost/process/async.hpp>
namespace ba = boost::asio;
namespace bp = boost::process;
#include <iostream>
#define LOG(x) std::clog
</code></pre>
<p>Now let's create a <code>ProcessManager</code> that runs all processes on a single <code>io_service</code> that is shut down in the destructor.</p>
<p>The IO service is used to schedule all the work (like asynchronous IO). I've</p>
<ul>
<li>randomly decided to focus on line-oriented IO operations</li>
<li>decided that there's likely no reason to use more than 1 IO thread, but in case you <em>would</em> ever, there is the <code>strand</code> that correctly synchronizes operations with respect to a child.</li>
</ul>
<pre><code>#include <map>
#include <list>
#include <thread>
class ProcessManager { // ugh naming smell
using error_code = boost::system::error_code;
private:
ba::io_service _service;
boost::optional<ba::io_service::work> _keep{_service};
boost::process::group _group;
std::thread io_thread;
struct patchProcess : std::enable_shared_from_this<patchProcess> {
using ptr = std::shared_ptr<patchProcess>;
static ptr start(std::string command, std::vector<std::string> args, ProcessManager& mgr) {
ptr p(new patchProcess(std::move(command), std::move(args), mgr));
p->output_read_loop();
return p;
}
boost::optional<std::string> getline() {
std::lock_guard<std::mutex> lk(_mx);
std::string s;
if (has_newline(_output.data()) && std::getline(std::istream(&_output), s))
return s;
return boost::none;
}
void write(std::string message) {
std::lock_guard<std::mutex> lk(_mx);
_input_bufs.push_back({false, std::move(message)});
if (_input_bufs.size() == 1)
input_write_loop();
}
void close_stdin() {
std::lock_guard<std::mutex> lk(_mx);
if (_input_bufs.empty()) {
_strand.post([this, self=shared_from_this()] { _stdin.close(); });
} else {
_input_bufs.push_back({true, {}});
}
}
bool is_running() { return _process.running(); }
private:
patchProcess(std::string command, std::vector<std::string> args, ProcessManager& mgr)
: _strand(mgr._service),
_stdout(mgr._service), _stdin(mgr._service),
_process(command, args, mgr._group, bp::std_out > _stdout, bp::std_in < _stdin, mgr._service)
{ }
void output_read_loop() {
ba::async_read_until(_stdout, pending_output, "\n", _strand.wrap([this, self=shared_from_this()](error_code ec, size_t /*transferred*/) {
if (!ec) {
std::lock_guard<std::mutex> lk(_mx);
std::ostream(&_output) << &pending_output;
output_read_loop();
}
}));
}
void input_write_loop() { // assumes _mx locked
if (!_input_bufs.empty()) {
auto& msg = _input_bufs.front();
if (msg.eof) {
_strand.post([this, self=shared_from_this()] { _stdin.close(); });
} else {
ba::async_write(_stdin, ba::buffer(_input_bufs.front().pay_load),
_strand.wrap([this, self=shared_from_this()](error_code ec, size_t /*transferred*/) {
std::lock_guard<std::mutex> lk(_mx);
_input_bufs.pop_front();
if (!ec)
input_write_loop();
}));
}
}
}
ba::io_service::strand _strand; // thread-safe
// strand-local
bp::async_pipe _stdout, _stdin;
bp::child _process;
ba::streambuf pending_output;
// mutex protected
std::mutex mutable _mx;
struct out_message { bool eof; std::string pay_load; };
std::list<out_message> _input_bufs; // iterator stability again!
ba::streambuf _output;
// static helpers
template <typename T>
static bool has_newline(T buffer) {
return std::find(buffers_begin(buffer), buffers_end(buffer), '\n') != buffers_end(buffer);
}
};
using Map = std::map<std::string, patchProcess::ptr>; // iterator stability required!
Map processList;
void eventloop() {
for(;;) try {
if (!_service.run()) break;
} catch(std::exception const& e) {
LOG(error) << "Exception in handler: " << e.what() << "\n";
}
}
public:
ProcessManager() : io_thread([this] { eventloop(); }) { }
~ProcessManager() {
status(__FUNCTION__);
_keep.reset();
io_thread.join();
status(__FUNCTION__);
}
void status(std::string const& caption = "Status") const {
for (auto& p : processList) {
LOG(info) << caption << ": '" << p.first << "' is " << (p.second->is_running()? "still running":"done") << "\n";
}
}
patchProcess::ptr addNew(std::string name, std::string command, std::vector<std::string> args) {
auto pit = processList.find(name);
if (pit != processList.end()) {
if (pit->second->is_running()) {
LOG(error) << "Process already running ('" << name << "')\n";
return {};
}
// TODO make sure process cleaned up etc.
}
LOG(info) << "Creating process for patch " << name << "\n";
return processList[name] = patchProcess::start(std::move(command), std::move(args), *this);
}
};
</code></pre>
<h2>Demos</h2>
<p>The most naive run would be:</p>
<pre><code>int main() {
ProcessManager pm;
}
</code></pre>
<p>Which, predictably returns after doing nothing. Next, we try</p>
<pre><code>int main() {
ProcessManager pm;
pm.addNew("sleeper", "/bin/bash", {"-c", "sleep 3" });
}
</code></pre>
<p>Which predictably waits 3 seconds before exiting. It prints:</p>
<pre><code>Creating process for patch sleeper
~ProcessManager: 'sleeper' is still running
~ProcessManager: 'sleeper' is done
</code></pre>
<blockquote>
<p>But <strong>WAIT!</strong> Didn't you <em>specifically</em> say you didn't want waiting? Well, there is none! You can do whatever you please in the mean time. It's just that <code>ProcessManager</code>'s destructor will - by default - wait for the child to finish.</p>
</blockquote>
<p>Let's do some IO:</p>
<p><strong><kbd><a href="http://coliru.stacked-crooked.com/a/528acb11c8bb5580" rel="nofollow noreferrer">Live On Coliru</a></kbd></strong></p>
<pre><code>int main() {
ProcessManager pm;
auto ls = pm.addNew("listing", "/bin/ls", {"-ltr" });
boost::optional<std::string> l;
while ((l = ls->getline()) || ls->is_running()) {
if (l.is_initialized()) {
std::cout << "ls: " << std::quoted(*l) << std::endl;
l.reset();
}
}
}
</code></pre>
<p>Prints</p>
<pre><code>Creating process for patch listing
ls: "total 172"
ls: "-rw-rw-rw- 1 2001 2000 5645 Feb 11 00:10 main.cpp"
ls: "-rwxr-xr-x 1 2001 2000 162784 Feb 11 00:10 a.out"
~ProcessManager: 'listing' is done
~ProcessManager: 'listing' is done
</code></pre>
<p>To really drive the point home that the processes <em>and</em> their IO are synchronous, we can replace</p>
<pre><code>auto ls = pm.addNew("listing", "/bin/ls", {"-ltr" });
</code></pre>
<p>with something more time-varied:</p>
<pre><code>auto ls = pm.addNew("listing", "/bin/bash", {"-c", "ls -ltr | while read line; do sleep 1; echo \"$line\"; done" });
</code></pre>
<p>Now, to make it really challenging, let's add another child process and send the output of the <code>ls</code> to the other <code>child</code>:</p>
<p><strong><kbd><a href="http://coliru.stacked-crooked.com/a/8d3750dfbdc860e8" rel="nofollow noreferrer">Live On Coliru</a></kbd></strong></p>
<pre><code>int main() {
ProcessManager pm;
auto ls = pm.addNew("listing", "/bin/bash", {"-c", "ls -ltr | while read line; do sleep 1; echo \"$line\"; done" });
auto xxd = pm.addNew("hex encoding", "/usr/bin/xxd", {});
boost::optional<std::string> l, x;
bool closed = false;
while ((l || (l = ls->getline())) || (x || (x = xxd->getline())) || ls->is_running() || xxd->is_running()) {
if (l.is_initialized()) {
xxd->write(std::move(*l) + '\n');
l.reset();
std::cout << "[forwarded from ls to xxd]" << std::endl;
} else {
if (!closed && !ls->is_running()) {
std::cout << "[closing input to xxd]" << std::endl;
xxd->close_stdin();
closed = true;
}
}
if (x.is_initialized()) {
std::cout << std::quoted(*x) << std::endl;
x.reset();
}
}
}
</code></pre>
<p>Now, on Coliru the listing is too small to be interesting, but on my system you get output like:</p>
<pre><code>Creating process for patch listing
Creating process for patch hex encoding
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
"00000000: 746f 7461 6c20 3733 3635 0a2d 7277 2d72 total 7365.-rw-r"
"00000010: 772d 722d 2d20 2031 2073 6568 6520 7365 w-r-- 1 sehe se"
"00000020: 6865 2020 2020 3133 3737 206d 6569 2031 he 1377 mei 1"
"00000030: 3020 2032 3031 3720 636d 616b 655f 696e 0 2017 cmake_in"
"00000040: 7374 616c 6c2e 636d 616b 650a 6c72 7778 stall.cmake.lrwx"
"00000050: 7277 7872 7778 2020 3120 7365 6865 2073 rwxrwx 1 sehe s"
"00000060: 6568 6520 2020 2020 2020 3820 6d65 6920 ehe 8 mei "
"00000070: 3234 2020 3230 3137 206d 6169 6e2e 6370 24 2017 main.cp"
"00000080: 7020 2d3e 2074 6573 742e 6370 700a 2d72 p -> test.cpp.-r"
"00000090: 772d 7277 2d72 2d2d 2020 3120 7365 6865 w-rw-r-- 1 sehe"
"000000a0: 2073 6568 6520 2020 2020 3531 3420 7365 sehe 514 se"
"000000b0: 7020 3133 2030 383a 3336 2063 6f6d 7069 p 13 08:36 compi"
"000000c0: 6c65 5f63 6f6d 6d61 6e64 732e 6a73 6f6e le_commands.json"
"000000d0: 0a2d 7277 2d72 772d 722d 2d20 2031 2073 .-rw-rw-r-- 1 s"
"000000e0: 6568 6520 7365 6865 2020 2020 3135 3834 ehe sehe 1584"
"000000f0: 2073 6570 2032 3020 3232 3a30 3320 576f sep 20 22:03 Wo"
"00000100: 7264 436f 756e 7465 722e 680a 2d72 772d rdCounter.h.-rw-"
"00000110: 7277 2d72 2d2d 2020 3120 7365 6865 2073 rw-r-- 1 sehe s"
"00000120: 6568 6520 2020 2020 3336 3920 7365 7020 ehe 369 sep "
"00000130: 3233 2030 333a 3131 2063 6f6d 6d6f 6e2e 23 03:11 common."
"00000140: 680a 2d72 772d 7277 2d72 2d2d 2020 3120 h.-rw-rw-r-- 1 "
"00000150: 7365 6865 2073 6568 6520 2020 2020 3533 sehe sehe 53"
"00000160: 3920 7365 7020 3233 2030 333a 3131 2073 9 sep 23 03:11 s"
"00000170: 7472 7563 7473 616d 706c 652e 6870 700a tructsample.hpp."
"00000180: 2d72 772d 7277 2d72 2d2d 2020 3120 7365 -rw-rw-r-- 1 se"
"00000190: 6865 2073 6568 6520 2020 2032 3335 3220 he sehe 2352 "
"000001a0: 7365 7020 3238 2032 333a 3230 2061 6461 sep 28 23:20 ada"
"000001b0: 7074 6976 655f 7061 7273 6572 2e68 0a2d ptive_parser.h.-"
"000001c0: 7277 2d72 772d 722d 2d20 2031 2073 6568 rw-rw-r-- 1 seh"
"000001d0: 6520 7365 6865 2020 2020 3538 3738 2073 e sehe 5878 s"
"000001e0: 6570 2032 3820 3233 3a32 3120 6164 6170 ep 28 23:21 adap"
"000001f0: 7469 7665 5f70 6172 7365 722e 6370 700a tive_parser.cpp."
"00000200: 2d72 772d 7277 2d72 2d2d 2020 3120 7365 -rw-rw-r-- 1 se"
"00000210: 6865 2073 6568 6520 2020 2034 3232 3720 he sehe 4227 "
"00000220: 6f6b 7420 2034 2032 333a 3137 2070 686f okt 4 23:17 pho"
"00000230: 656e 695f 7833 2e68 7070 0a2d 7277 2d72 eni_x3.hpp.-rw-r"
"00000240: 772d 722d 2d20 2031 2073 6568 6520 7365 w-r-- 1 sehe se"
"00000250: 6865 2020 2031 3432 3035 2064 6563 2020 he 14205 dec "
"00000260: 3620 3231 3a30 3820 434d 616b 6543 6163 6 21:08 CMakeCac"
"00000270: 6865 2e74 7874 0a2d 7277 2d72 772d 722d he.txt.-rw-rw-r-"
"00000280: 2d20 2031 2073 6568 6520 7365 6865 2020 - 1 sehe sehe "
"00000290: 2020 3630 3738 2064 6563 2031 3420 3032 6078 dec 14 02"
"000002a0: 3a35 3320 636f 6e6e 6563 7469 6f6e 2e68 :53 connection.h"
"000002b0: 7070 0a2d 7277 7872 7778 722d 7820 2031 pp.-rwxrwxr-x 1"
"000002c0: 2073 6568 6520 7365 6865 2020 2020 3136 sehe sehe 16"
"000002d0: 3736 206a 616e 2031 3220 3032 3a34 3420 76 jan 12 02:44 "
"000002e0: 636f 6d70 696c 655f 6266 2e70 790a 2d72 compile_bf.py.-r"
"000002f0: 772d 722d 2d72 2d2d 2020 3120 7365 6865 w-r--r-- 1 sehe"
"00000300: 2073 6568 6520 2020 2038 3738 3020 6a61 sehe 8780 ja"
"00000310: 6e20 3132 2031 373a 3131 2074 6573 742e n 12 17:11 test."
"00000320: 6269 6e0a 2d72 7778 7277 7872 2d78 2020 bin.-rwxrwxr-x "
"00000330: 3120 7365 6865 2073 6568 6520 2020 2020 1 sehe sehe "
"00000340: 3131 3920 6a61 6e20 3235 2031 333a 3537 119 jan 25 13:57"
"00000350: 2074 6573 742e 7079 0a2d 7277 7872 7778 test.py.-rwxrwx"
"00000360: 722d 7820 2031 2073 6568 6520 7365 6865 r-x 1 sehe sehe"
"00000370: 2020 2020 2020 3736 2066 6562 2020 3820 76 feb 8 "
"00000380: 3130 3a33 3920 7465 7374 2e73 680a 2d72 10:39 test.sh.-r"
"00000390: 772d 7277 2d72 2d2d 2020 3120 7365 6865 w-rw-r-- 1 sehe"
"000003a0: 2073 6568 6520 2020 3236 3536 3920 6665 sehe 26569 fe"
"000003b0: 6220 2039 2031 313a 3533 2064 7261 6674 b 9 11:53 draft"
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[forwarded from ls to xxd]
[closing input to xxd]
"000003c0: 2e6d 640a 2d72 772d 7277 2d72 2d2d 2020 .md.-rw-rw-r-- "
"000003d0: 3120 7365 6865 2073 6568 6520 2020 2020 1 sehe sehe "
"000003e0: 3131 3620 6665 6220 2039 2031 313a 3534 116 feb 9 11:54"
"000003f0: 2069 6e70 7574 2e74 7874 0a2d 7277 2d72 input.txt.-rw-r"
"00000400: 772d 722d 2d20 2031 2073 6568 6520 7365 w-r-- 1 sehe se"
"00000410: 6865 2020 2020 2020 3739 2066 6562 2031 he 79 feb 1"
"00000420: 3020 3136 3a32 3420 6172 7869 760a 2d72 0 16:24 arxiv.-r"
"00000430: 772d 7277 2d72 2d2d 2020 3120 7365 6865 w-rw-r-- 1 sehe"
"00000440: 2073 6568 6520 2020 2032 3933 3520 6665 sehe 2935 fe"
"00000450: 6220 3130 2031 363a 3238 2043 4d61 6b65 b 10 16:28 CMake"
"00000460: 4c69 7374 732e 7478 740a 2d72 772d 7277 Lists.txt.-rw-rw"
"00000470: 2d72 2d2d 2020 3120 7365 6865 2073 6568 -r-- 1 sehe seh"
"00000480: 6520 2020 2035 3134 3520 6665 6220 3130 e 5145 feb 10"
"00000490: 2031 363a 3238 204d 616b 6566 696c 650a 16:28 Makefile."
"000004a0: 2d72 772d 7277 2d72 2d2d 2020 3120 7365 -rw-rw-r-- 1 se"
"000004b0: 6865 2073 6568 6520 2020 2033 3937 3620 he sehe 3976 "
"000004c0: 6665 6220 3130 2031 363a 3430 2074 6573 feb 10 16:40 tes"
"000004d0: 7431 2e63 7070 0a2d 7277 2d72 772d 722d t1.cpp.-rw-rw-r-"
"000004e0: 2d20 2031 2073 6568 6520 7365 6865 2020 - 1 sehe sehe "
"000004f0: 2020 3632 3434 2066 6562 2031 3120 3031 6244 feb 11 01"
"00000500: 3a31 3320 7465 7374 2e63 7070 0a2d 7277 :13 test.cpp.-rw"
"00000510: 7872 7778 722d 7820 2031 2073 6568 6520 xrwxr-x 1 sehe "
"00000520: 7365 6865 2037 3139 3336 3838 2066 6562 sehe 7193688 feb"
"00000530: 2031 3120 3031 3a31 3320 736f 7465 7374 11 01:13 sotest"
"00000540: 0a2d 7277 2d72 772d 722d 2d20 2031 2073 .-rw-rw-r-- 1 s"
"00000550: 6568 6520 7365 6865 2020 2020 3535 3132 ehe sehe 5512"
"00000560: 2066 6562 2031 3120 3031 3a31 3620 5365 feb 11 01:16 Se"
"00000570: 7373 696f 6e2e 7669 6d0a 6472 7778 7277 ssion.vim.drwxrw"
"00000580: 7872 2d78 2031 3120 7365 6865 2073 6568 xr-x 11 sehe seh"
"00000590: 6520 2020 2020 2032 3320 6665 6220 3131 e 23 feb 11"
"000005a0: 2030 313a 3137 2043 4d61 6b65 4669 6c65 01:17 CMakeFile"
"000005b0: 730a 2d72 772d 7277 2d72 2d2d 2020 3120 s.-rw-rw-r-- 1 "
"000005c0: 7365 6865 2073 6568 6520 2020 2020 2037 sehe sehe 7"
"000005d0: 3520 6665 6220 3131 2030 313a 3137 206f 5 feb 11 01:17 o"
"000005e0: 7574 7075 742e 7478 740a utput.txt."
~ProcessManager: 'hex encoding' is done
~ProcessManager: 'listing' is done
~ProcessManager: 'hex encoding' is done
~ProcessManager: 'listing' is done
</code></pre> | 2018-02-11 00:18:04.690000+00:00 | 2018-02-11 00:18:04.690000+00:00 | null | null | 48,721,833 | <p>I am developing an application where I need to launch and stop a variety of different executables depending on user input. I would like my "core" program to run as normal whilst these executables run, i.e not wait for their termination which could theoretically be infinte. As well as this I need to be able to receive std_out and send std_in to these executables.</p>
<p>At the moment I have a set up where I have a process manager class:</p>
<pre><code>class ProcessManager {
private:
std::vector<patchProcess> processList;
boost::process::group processGroup;
public:
ProcessManager();
void addNew(std::string name,std::string command, std::string args);
void killAll();
void printAllIn();
};
</code></pre>
<p>Where patch process is:</p>
<pre><code>struct patchProcess {
std::string name;
boost::process::child *process;
std::shared_ptr<boost::process::ipstream> procOutStream;
};
</code></pre>
<p>Where I can launch / add a new process with the function</p>
<pre><code>void bbefxProcessManager::addNew(std::string name, std::string command, std::string args) {
LOG(info) << "Creating process for patch " << name;
patchProcess pp;
pp.name = name;
pp.procOutStream = std::shared_ptr<boost::process::ipstream>(new boost::process::ipstream);
boost::process::child newProc(command,args,processGroup,boost::process::std_out > *pp.procOutStream);
pp.process = &newProc;
processList.push_back(pp);
}
</code></pre>
<p>And my printing attempts:</p>
<pre><code>void bbefxProcessManager::printAllIn() {
std::string line;
for (auto &proc : processList) {
std::getline(*proc.procOutStream, line);
std::cout << line << std::endl;
}
}
</code></pre>
<p>This code sucessfully launches the process, however readAllIn gives me a blank output. I have a feeling that I am doing something horribly wrong with <code>std::shared_ptr<boost::process::ipstream> procOutStream;</code>. My rationale behind this is that I am using <code>push_back</code> into my processList (vector of struct), so it should be copyable. I can get the output of a test exec without using the patchProcess struct and these shared pointers but that makes mangement hard / messy. I can also confirm that if I attempt to read the output in the addNew function with something like:</p>
<pre><code>while(true) {
*pp.procOutStream >> line;
std::cout << line << std::endl;
}
</code></pre>
<p>I get the output of my executable. So does this hint something is going wrong with copy constructors?</p> | 2018-02-10 14:31:35.267000+00:00 | 2018-02-11 00:18:04.690000+00:00 | 2018-02-10 23:46:37.297000+00:00 | c++|boost|shared-ptr|boost-process | ['http://coliru.stacked-crooked.com/a/528acb11c8bb5580', 'http://coliru.stacked-crooked.com/a/8d3750dfbdc860e8'] | 2 |
48,401,181 | <p>Good suggestions, which will probably do for most applications. If you want to get fancy and state-of-the-art, then you can train a model to predict unknown word embeddings. Take a look at this recent EMNLP 2017 paper: <a href="https://arxiv.org/pdf/1707.06961.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1707.06961.pdf</a></p>
<p>TL;DR given a set of known word embeddings, the idea is to train a character-level BiLSTM which attempts to predict the embeddings given solely the characters of the word. Then this net can generalize to predict embeddings for unknown words. Ideally the net captures some morphological information, e.g. the predicted embedding for <code>apples</code> will be close to <code>apple</code>, and the evaluations in the paper seem to support this hypothesis.</p>
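<p>The idea in miniature, as a hedged sketch (my own simplification, not the paper's exact architecture or hyper-parameters):</p>
<pre><code>import torch
import torch.nn as nn

class CharToEmbedding(nn.Module):
    """Predict a word embedding from the word's characters (MIMICK-style idea, simplified)."""
    def __init__(self, n_chars, char_dim=20, hidden=64, emb_dim=300):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.bilstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, emb_dim)

    def forward(self, char_ids):                    # char_ids: (batch, word_length)
        _, (h_n, _) = self.bilstm(self.char_emb(char_ids))
        h = torch.cat([h_n[0], h_n[1]], dim=-1)     # concat forward/backward final states
        return self.proj(h)                         # predicted embedding: (batch, emb_dim)

# Train against the known embeddings (e.g. with nn.MSELoss()), then run it on unknown words.
</code></pre>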
<p>There's a GitHub repository with pretrained models here: <a href="https://github.com/yuvalpinter/mimick" rel="nofollow noreferrer">https://github.com/yuvalpinter/mimick</a></p> | 2018-01-23 11:57:52.200000+00:00 | 2022-08-21 14:52:42.663000+00:00 | 2022-08-21 14:52:42.663000+00:00 | null | 48,395,570 | <p>I am trying to use CoNLL-2003 NER (English) Dataset and I am trying to utilize pretrained embeddings for it. I am using SENNA pretrained embeddings. Now I have around 20k words in my vocabulary and out of this I have embedding available for only 9.5k words.<br>
My current approach is to initialize an array of <code>20k X embedding_size</code> with zeros, fill in the rows for the 9.5k words whose embeddings are known to me, and make all the embeddings learnable.<br></p>
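<p>Roughly, what I have in mind looks like this (a sketch only; <code>word_to_index</code> and <code>senna</code> are my own lookup tables shown with placeholder contents, and the zero rows could just as well be small random values):</p>
<pre><code>import numpy as np
import torch
import torch.nn as nn

# Placeholder lookups for illustration; in reality these come from my data / the SENNA files.
word_to_index = {"the": 0, "cat": 1}
senna = {"the": np.random.randn(50).astype("float32")}

vocab_size, emb_dim = 20000, 50                      # illustrative sizes
weights = np.zeros((vocab_size, emb_dim), dtype="float32")
for word, idx in word_to_index.items():
    if word in senna:                                # one of the 9.5k known vectors
        weights[idx] = senna[word]

# freeze=False keeps every row trainable, including the initially-zero unknown words.
embedding = nn.Embedding.from_pretrained(torch.tensor(weights), freeze=False)
</code></pre>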
<p>My question is what is the best way to do this? Any reference to such research will be very helpful?</p> | 2018-01-23 06:35:42.547000+00:00 | 2022-08-21 14:52:42.663000+00:00 | 2018-01-23 10:48:01.527000+00:00 | machine-learning|nlp|deep-learning|word-embedding | ['https://arxiv.org/pdf/1707.06961.pdf', 'https://github.com/yuvalpinter/mimick'] | 2 |
70,117,350 | <p>Correct me if I'm wrong, but if you're trying to calculate a phone's absolute position in space at any moment in time directly from (past and present) accelerometer data, that is actually <a href="https://arxiv.org/pdf/1704.06053.pdf" rel="nofollow noreferrer">very complex</a>, mainly because the phone's accelerometer's frame of reference in terms of x, y, and z is the phone itself... and phones are not in a fixed orientation, especially when being thrown around, and besides... it will have zero acceleration while in the air, anyway.</p>
<p>It's sort of like being blindfolded and being taken on a space journey in a pod with rockets that fire in different directions randomly, and being expected to know where you are at the end. That would be technically possible if you knew where you were when you started, and you had the ability to track every acceleration vector you felt along the way... and integrate this with gyroscope data as well... converting all this into a single path.</p>
<p>But, luckily, we can still get the height thrown from the accelerometer indirectly, along with some other measurements.</p>
<p>This solution assumes that:</p>
<ul>
<li>The sensors package provides <em>acceleration values</em>, NOT velocity values (even though it claims to provide velocity, strangely), because <a href="https://en.wikipedia.org/wiki/Accelerometer" rel="nofollow noreferrer">accelerometers</a> themselves provide <em>acceleration</em>.</li>
<li>Total acceleration is equal to sqrt(x^2 + y^2 + z^2) regardless of phone orientation.</li>
<li>The accelerometer will read zero (or gravity only) during the throw</li>
<li><a href="https://www.wired.com/2013/08/how-high-can-you-toss-your-phone/" rel="nofollow noreferrer">This article in wired</a> is correct in that Height = (Gravity * Time^2) / 8</li>
</ul>
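<p>For example, if the measured flight time between release and catch is 1.0 s, the formula above gives Height = 9.80665 * 1.0^2 / 8 ≈ 1.23 m; a 2.0 s flight gives ≈ 4.9 m.</p>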
<p>The way my code works is:</p>
<ul>
<li>You (user) hold the "GO" button down.</li>
<li>When you throw the phone up, naturally you let go of the button, which starts the timer, and the phone starts listening to accelerometer events.</li>
<li>We assume that the total acceleration of the phone in the air is zero (or gravity only, depending on chosen accelerometer data type)... so we're not actually trying to calculate distance directly from the accelerometer data:</li>
<li>Instead, we are using the accelerometer ONLY to detect when you have caught the phone... by detecting a sudden change in acceleration using a threshold.</li>
<li>When this threshold is met, the timer is stopped.</li>
<li>Now we have a total time value for the throw from beginning to end and can calculate the height.</li>
</ul>
<p>Side notes:</p>
<ul>
<li>I'm using AccelerometerEvent (includes gravity), not UserAccelerometer event (does not include gravity), because I was getting weird numbers on my test device (non-zero at rest) using UserAccelerometerEvent.</li>
<li>It helps to catch the phone gently ***</li>
<li>My math could be complete off... I haven't had anyone else look at this yet... but at least this answer gets you started on a basic theory that works.</li>
<li>My phone landed in dog poo so I hope you accept this answer.</li>
</ul>
<p>Limitations on Accuracy:</p>
<ul>
<li>The height at which you let go, and catch are naturally going to be inconsistent.</li>
<li>The threshold is experimental.... test different values yourself. I've settled on 10.</li>
<li>There is probably some delay between the GO button depress and the timer beginning.</li>
<li>*** The threshold may not always be detected accurately, or at all if the deceleration ends too quickly because the frequency of accelerometer updates provided by the sensors package is quite low. Maybe there is a way to get updates at a higher frequency with a different package.</li>
<li>There is always the chance that the GO button could be depressed too early (while the phone is still in your hand) and so the acceleration will be non zero at that time, and perhaps enough to trigger the threshold.</li>
<li>Probably other things not yet considered.</li>
</ul>
<p>Code:</p>
<pre><code>import 'package:flutter/material.dart';
import 'package:sensors/sensors.dart';
import 'dart:async';
import 'dart:math';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Phone Throw Height',
theme: ThemeData(
primarySwatch: Colors.blue,
visualDensity: VisualDensity.adaptivePlatformDensity,
),
home: MyHomePage(title: 'Phone Throw Height'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key? key, required this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
List<StreamSubscription<dynamic>> _streamSubscriptions =
<StreamSubscription<dynamic>>[];
DateTime? startTime;
DateTime? endTime;
bool isBeingThrown = false;
final double GRAVITATIONAL_FORCE = 9.80665;
final double DECELERATION_THRESHOLD = 10; // <---- experimental
List<double> accelValuesForAnalysis = <double>[];
@override
void initState() {
super.initState();
_streamSubscriptions
.add(accelerometerEvents.listen((AccelerometerEvent event) {
if (isBeingThrown) {
double x_total = pow(event.x, 2).toDouble();
double y_total = pow(event.y, 2).toDouble();
double z_total = pow(event.z, 2).toDouble();
double totalXYZAcceleration = sqrt(x_total + y_total + z_total);
// only needed because we are not using UserAccelerometerEvent
// (because it was acting weird on my test phone Galaxy S5)
double accelMinusGravity = totalXYZAcceleration - GRAVITATIONAL_FORCE;
accelValuesForAnalysis.add(accelMinusGravity);
if (accelMinusGravity > DECELERATION_THRESHOLD) {
_throwHasEnded();
}
}
}));
}
void _throwHasEnded() {
isBeingThrown = false;
endTime = DateTime.now();
Duration totalTime = DateTime.now().difference(startTime!);
double totalTimeInSeconds = totalTime.inMilliseconds / 1000;
// this is the equation from the wired article
double heightInMeters =
(GRAVITATIONAL_FORCE * pow(totalTimeInSeconds, 2)) / 8;
Widget resetButton = TextButton(
child: Text("LONG PRESS TO RESET"),
onPressed: () {},
onLongPress: () {
startTime = null;
endTime = null;
print(accelValuesForAnalysis.toString());
accelValuesForAnalysis.clear();
Navigator.pop(context);
setState(() {
isBeingThrown = false;
});
},
);
AlertDialog alert = AlertDialog(
title: Text("Throw End Detected"),
content: Text("total throw time in seconds was: " +
totalTimeInSeconds.toString() +
"\n" +
"Total height was: " +
heightInMeters.toString() +
" meters. \n"),
actions: [
resetButton,
],
);
showDialog(
barrierDismissible: false,
context: context,
builder: (BuildContext context) {
return alert;
},
);
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title),
),
body: SizedBox.expand(
child: Container(
color: Colors.green,
//alignment: Alignment.center,
child: SizedBox.expand(
child: (!isBeingThrown)
? TextButton(
child: Text("GO!",
style: TextStyle(
color: Colors.white,
fontWeight: FontWeight.bold,
fontSize: 40)),
onPressed: () {
setState(() {
isBeingThrown = true;
startTime = DateTime.now();
});
},
)
: Center(
child: Text("weeeeeeeeee!",
style: TextStyle(
color: Colors.white,
fontWeight: FontWeight.bold,
fontSize: 40)),
),
),
),
),
);
}
@override
void dispose() {
// cancel the stream from the accelerometer somehow!! ugh!!!
for (StreamSubscription<dynamic> subscription in _streamSubscriptions) {
subscription.cancel();
}
super.dispose();
}
}
</code></pre> | 2021-11-25 21:19:01.523000+00:00 | 2021-11-30 18:54:33.537000+00:00 | 2021-11-30 18:54:33.537000+00:00 | null | 70,026,672 | <p>In Flutter, there is the sensor package <a href="https://pub.dev/packages/sensors" rel="nofollow noreferrer">https://pub.dev/packages/sensors</a> that allow to know the velocity X, Y and Z.</p>
<p>My question is : how could I calculate the distance of a phone thrown in height ?</p>
<p>Example : you throw your telephone, with your hand at 0.5 meter from the ground.
The phone reaching 1 meter from your hand (so 1.5 meter from the ground).</p>
<p>How can I get the 1 meter value ?</p>
<p>Thanks all !</p>
<p>Here is the code I have right now (you need to install sensors package):</p>
<pre><code>import 'dart:async';
import 'package:flutter/material.dart';
import 'package:sensors/sensors.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
visualDensity: VisualDensity.adaptivePlatformDensity,
),
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
List _velocityY = [];
DateTime time;
List<double> distances = [];
List<StreamSubscription<dynamic>> _streamSubscriptions =
<StreamSubscription<dynamic>>[];
@override
void initState() {
super.initState();
_streamSubscriptions
.add(userAccelerometerEvents.listen((UserAccelerometerEvent event)
{
setState(() {
if (event.y.abs() > 0.1) {
if (time != null) {
_velocityY.add(event.y);
}
//print((new DateTime.now().difference(time).inSeconds));
if (_velocityY.length > 0) {
distances.add(_velocityY[_velocityY.length - 1] * (new DateTime.now().difference(time).inMicroseconds) / 1000);
}
time = new DateTime.now();
}
//print('time' + time.toString());
});
}));
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title),
),
body: ListView(
children: [
for(double distance in distances.reversed.toList())
Text(
distance.toStringAsFixed(2),
style: Theme.of(context).textTheme.headline4,
),
],
),// This trailing comma makes auto-formatting nicer for build methods.
);
}
}
</code></pre> | 2021-11-18 20:59:40.820000+00:00 | 2021-11-30 18:54:33.537000+00:00 | 2021-11-24 18:49:29.693000+00:00 | flutter|velocity | ['https://arxiv.org/pdf/1704.06053.pdf', 'https://en.wikipedia.org/wiki/Accelerometer', 'https://www.wired.com/2013/08/how-high-can-you-toss-your-phone/'] | 3 |
17,778,682 | <p>dan wilkerson, simon goldsmith, et al. designed a thorough <a href="http://arxiv.org/abs/1106.3325" rel="noreferrer">global transaction system</a> on top of app engine's local (per entity group) transactions. at a high level, it uses techniques similar to the GUID one you describe. dan dealt with "submarine writes," ie the transactions you describe that report failure but later surface as succeeded, as well as many other theoretical and practical details of the datastore. erick armbrust implemented dan's design in <a href="http://code.google.com/p/tapioca-orm/" rel="noreferrer">tapioca-orm</a>.</p>
<p>i don't necessarily recommend that you implement his design or use tapioca-orm, but you'd definitely be interested in the research.</p>
<p>in response to your questions: plenty of people implement GAE apps that use the datastore without idempotency. it's only important when you need transactions with certain kinds of guarantees like the ones you describe. it's definitely important to understand when you do need them, but you often don't.</p>
<p>the datastore is implemented on top of megastore, which is described in depth <a href="http://research.google.com/pubs/pub36971.html" rel="noreferrer">in this paper</a>. in short, it uses <a href="http://en.wikipedia.org/wiki/Multiversion_concurrency_control" rel="noreferrer">multi-version concurrency control</a> within each entity group and <a href="http://en.wikipedia.org/wiki/Paxos_%28computer_science%29" rel="noreferrer">Paxos</a> for replication across datacenters, both of which can contribute to submarine writes. i don't know if there are public numbers on submarine write frequency in the datastore, but if there are, searches with these terms and on the datastore mailing lists should find them.</p>
<p>amazon's S3 isn't really a comparable system; it's more of a CDN than a distributed database. amazon's SimpleDB is comparable. it originally only provided <a href="http://aws.amazon.com/simpledb/#eventually-consistent" rel="noreferrer">eventual consistency</a>, and eventually added a very limited kind of transactions they call <a href="http://aws.amazon.com/simpledb/#consistent" rel="noreferrer">conditional writes</a>, but it doesn't have true transactions. other NoSQL databases (redis, mongo, couchdb, etc.) have different variations on transactions and consistency.</p>
<p>basically, there's always a tradeoff in distributed databases between scale, transaction breadth, and strength of consistency guarantees. this is best known by eric brewer's <a href="http://en.wikipedia.org/wiki/CAP_theorem" rel="noreferrer">CAP theorem</a>, which says the three axes of the tradeoff are consistency, availability, and partition tolerance.</p> | 2013-07-22 01:27:04.977000+00:00 | 2014-04-01 08:56:07.960000+00:00 | 2014-04-01 08:56:07.960000+00:00 | null | 17,721,895 | <p>The Google App Engine documentation contains this paragraph:</p>
<blockquote>
<p>Note: If your application receives an exception when committing a
transaction, it does not always mean that the transaction failed. You
can receive DatastoreTimeoutException,
ConcurrentModificationException, or DatastoreFailureException
exceptions in cases where transactions have been committed and
eventually will be applied successfully. Whenever possible, make your
Datastore transactions idempotent so that if you repeat a transaction,
the end result will be the same.</p>
</blockquote>
<p>Wait, what? It seems like there's a very important class of transactions that just simply cannot be made idempotent because they depend on current datastore state. For example, a simple counter, as in a like button. The transaction needs to read the current count, increment it, and write out the count again. If the transaction appears to "fail" but doesn't REALLY fail, and there's no way for me to tell that on the client side, then I need to try again, which will result in one click generating two "likes." Surely there is some way to prevent this with GAE?</p>
<p>Edit:</p>
<p>it seems that this is problem inherent in distributed systems, as per non other than Guido van Rossum -- see this link:</p>
<p><a href="https://stackoverflow.com/questions/13740724/app-engine-datastore-transaction-exception">app engine datastore transaction exception</a></p>
<p>So it looks like designing idempotent transactions is pretty much a must if you want a high degree of reliability.</p>
<p>I was wondering if it was possible to implement a global system across a whole app for ensuring idempotency. The key would be to maintain a transaction log in the datastore. The client would generate a GUID, and then include that GUID with the request (the same GUID would be re-sent on retries for the same request). On the server, at the start of each transaction, it would look in the datastore for a record in the Transactions entity group with that ID. If it found it, then this is a repeated transaction, so it would return without doing anything. </p>
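<p>A minimal sketch of this idea (hypothetical kind and property names, shown with the Python NDB client although the same pattern applies in Java):</p>

<pre><code>from google.appengine.ext import ndb

class TxnLog(ndb.Model):
    """Marker entity keyed by the client-generated GUID."""
    pass

class Counter(ndb.Model):
    count = ndb.IntegerProperty(default=0)

@ndb.transactional(xg=True)  # cross-group: touches both TxnLog and Counter
def increment_once(guid, counter_key):
    # If this GUID is already logged, the earlier "failed" attempt actually committed.
    if TxnLog.get_by_id(guid) is not None:
        return
    counter = counter_key.get()
    counter.count += 1
    counter.put()
    TxnLog(id=guid).put()
</code></pre>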
<p>Of course this would require enabling cross-group transactions, or having a separate transaction log as a child of each entity group. Also there would be a performance hit if failed entity key lookups are slow, because almost every transaction would include a failed lookup, because most GUIDs would be new.</p>
<p>In terms of the additional $ cost in terms of additional datastore interactions, this would probably still be less than if I had to make every transaction idempotent, since that would require a lot of checking what's in the datastore in each level.</p> | 2013-07-18 11:13:50.413000+00:00 | 2016-11-30 09:50:01.887000+00:00 | 2017-05-23 12:02:54.123000+00:00 | google-app-engine|transactions|google-cloud-datastore | ['http://arxiv.org/abs/1106.3325', 'http://code.google.com/p/tapioca-orm/', 'http://research.google.com/pubs/pub36971.html', 'http://en.wikipedia.org/wiki/Multiversion_concurrency_control', 'http://en.wikipedia.org/wiki/Paxos_%28computer_science%29', 'http://aws.amazon.com/simpledb/#eventually-consistent', 'http://aws.amazon.com/simpledb/#consistent', 'http://en.wikipedia.org/wiki/CAP_theorem'] | 8 |
47,749,309 | <p>The state-of-the-art is <a href="https://farasa.qcri.org/lemmatization/" rel="nofollow noreferrer">Farasa Lemmatizer</a>.</p>
<p>Farasa Lemmatizer outperforms the MADAMIRA Lemmatizer in accuracy, giving a +7% relative gain over MADAMIRA on the lemmatization task.</p>
<p>You can read more about Farasa Lemmatizer from the following link:
<a href="https://arxiv.org/pdf/1710.06700.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1710.06700.pdf</a></p> | 2017-12-11 09:00:08.270000+00:00 | 2021-02-24 13:06:09.017000+00:00 | 2021-02-24 13:06:09.017000+00:00 | null | 33,073,805 | <p>How can I get lemmas for Arabic words? I tried the ISRI Arabic Stemmer from NLTK but it returns roots of words:</p>
<pre><code>from nltk.stem.isri import ISRIStemmer
st = ISRIStemmer()
print st.stem(u'اعلاميون')
</code></pre>
<p>It returns the root <code>علم</code> and i want the lemma <code>اعلامي</code></p> | 2015-10-12 05:25:29.093000+00:00 | 2021-02-24 13:06:09.017000+00:00 | 2020-06-25 12:57:06.137000+00:00 | python|python-2.7|python-3.x|stanford-nlp|text-processing | ['https://farasa.qcri.org/lemmatization/', 'https://arxiv.org/pdf/1710.06700.pdf'] | 2 |
9,444,483 | <p>Most of the problems I have seen were solved with 1-2 hidden layers. It is proven that MLPs with only one hidden layer are universal function approximators (<a href="http://deeplearning.cs.cmu.edu/pdfs/Kornick_et_al.pdf" rel="nofollow noreferrer">Hornik et. al.</a>). More hidden layers can make the problem easier or harder. You usually have to try different topologies. I heard that you cannot add an arbitrary number of hidden layers if you want to train your MLP with backprop because the gradient will become too small in the first layers (I have no reference for that). But there are some applications where people used up to <a href="http://arxiv.org/abs/1003.0358" rel="nofollow noreferrer">nine layers</a>. Maybe you are interested in a <a href="http://yann.lecun.com/exdb/mnist/" rel="nofollow noreferrer">standard benchmark problem</a> which is solved by different classifiers and MLP topologies.</p> | 2012-02-25 13:40:30.700000+00:00 | 2017-01-03 18:43:39.790000+00:00 | 2017-01-03 18:43:39.790000+00:00 | null | 9,436,209 | <p>What does number of hidden layers in a multilayer perceptron neural network do to the way neural network behaves? Same question for number of nodes in hidden layers?</p>
<p>Let's say I want to use a neural network for hand written character recognition. In this case I put pixel colour intensity values as input nodes, and character classes as output nodes. </p>
<p>How would I choose number of hidden layers and nodes to solve such problem?</p> | 2012-02-24 18:39:18.520000+00:00 | 2019-07-29 11:26:08.423000+00:00 | 2012-02-24 23:08:55.910000+00:00 | artificial-intelligence|machine-learning|neural-network | ['http://deeplearning.cs.cmu.edu/pdfs/Kornick_et_al.pdf', 'http://arxiv.org/abs/1003.0358', 'http://yann.lecun.com/exdb/mnist/'] | 3 |
71,991,539 | <p>I'll give you two answers. The first one is the academic one.</p>
<p>In your data (not sure where it comes from), I distinguish each line from the first letter:
S = Image size
K = Intrinsic camera matrix (3x3)
D = Distortion coefficients
R = Rotation matrix (3x4)
T = Translation vector (3x1)</p>
<p>It seems that we have 4 cameras here (indices 00-03); which ones you actually use is up to you. Anyway, given the data above you can calculate each camera center in space. For the first camera:</p>
<p><strong>C_00</strong> = -<strong>R_00</strong><sup>T</sup> . <strong>T_00</strong></p>
<p>where . stands for the matrix-vector product and <sup>T</sup> for the transpose (which inverts a rotation). Do the same for the other camera, then the baseline is simply the distance between the two camera centers.</p>
<p>Focal length is already there: element (0,0) of intrinsic matrix is <strong>fx</strong> focal length over the x axis. Element (1,1) is <strong>fy</strong> focal length over y axis.
Why two focal lengths??? You will see that they are very similar indeed, but the difference reflects the imperfection of the camera lenses.</p>
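<p>A rough numpy sketch of the above, with the values copied from your calibration file (whether R_01/T_01 map points from camera 0 to camera 1 is an assumption you should check against your dataset's documentation):</p>

<pre><code>import numpy as np

# Intrinsics of camera 0: fx and fy sit at K[0, 0] and K[1, 1]
K_00 = np.array([[9.842439e+02, 0.0, 6.900000e+02],
                 [0.0, 9.808141e+02, 2.331966e+02],
                 [0.0, 0.0, 1.0]])
fx, fy = K_00[0, 0], K_00[1, 1]

# Extrinsics of camera 1 w.r.t. camera 0 (R is 3x3, T is 3x1)
R_01 = np.array([9.993513e-01, 1.860866e-02, -3.083487e-02,
                 -1.887662e-02, 9.997863e-01, -8.421873e-03,
                 3.067156e-02, 8.998467e-03, 9.994890e-01]).reshape(3, 3)
T_01 = np.array([-5.370000e-01, 4.822061e-03, -1.252488e-02])

# Camera centers: C = -R^T . T  (camera 0 sits at the origin)
C_00 = np.zeros(3)
C_01 = -R_01.T @ T_01

baseline = np.linalg.norm(C_01 - C_00)  # ~0.537, in the same units as T
print(fx, fy, baseline)
</code></pre>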
<p>So, how do you get the depth??? Here I go with the less academic answer.
In practice you have to follow this process:</p>
<ol>
<li>Calibrate the two cameras (using metric units)</li>
<li>Take pictures</li>
<li>Undistort images (correct lens distortion using your D_00 coefficients)</li>
<li>Rectify images (what's rectification? You can check <a href="https://en.wikipedia.org/wiki/Image_rectification" rel="nofollow noreferrer">here</a> and <a href="https://arxiv.org/pdf/2203.00123.pdf" rel="nofollow noreferrer">here</a>). After this step the left and right images look like they were taken by the same camera only shifted along x.</li>
<li>Apply the formula to calculate depth. However, since you have fx and fy, but also the shear factor (element (0,1) of the camera matrix) and the principal point shift (elements (0,2) and (1,2) of the camera matrix), the formula is slightly more complex than the one you suggested.
So I suggest using OpenCV's <code>cv2.reprojectImageTo3D</code>, calculating the Q matrix as done <a href="https://github.com/decadenza/SimpleStereo/blob/a139e8bf1ac7a10e532c4204bf249ea0e103a7d1/simplestereo/__init__.py#L503" rel="nofollow noreferrer">here</a> (a short sketch follows this list).</li>
</ol>
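<p>A rough OpenCV sketch of steps 4-5, ignoring the shear and principal-point subtleties mentioned in step 5 (the matcher parameters and file names are placeholders; fx and the baseline are taken from P_rect_00 and T_01 of your calibration file):</p>

<pre><code>import cv2
import numpy as np

left = cv2.imread('left_rectified.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right_rectified.png', cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; numDisparities must be a multiple of 16
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# Direct depth from the rectified pair. For the full treatment use
# cv2.reprojectImageTo3D with the Q matrix returned by cv2.stereoRectify.
fx, baseline = 721.5377, 0.537
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = fx * baseline / disparity[valid]  # same units as the baseline
</code></pre>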
<p>You can find a working example to go from images to depth map and finally to depth (point cloud), <a href="https://github.com/decadenza/SimpleStereo/blob/master/examples/011%20Build3DPointCloud.py" rel="nofollow noreferrer">here</a>.</p>
<p>I hope this gives you a starting point.
Cheers.</p> | 2022-04-24 19:07:52.317000+00:00 | 2022-04-24 19:07:52.317000+00:00 | null | null | 71,973,053 | <p>I have a pair of stereo images and I have computed the disparity image for them. Now I need to convert this disparity map to a depth map. I have found that</p>
<pre><code>depth = (baseline * focal length) / disparity)
</code></pre>
<p>but I don't know how to find the values for baseline, focal length and disparity.
This is the calibration file given.</p>
<pre><code>calib_time: 09-Jan-2012 13:57:47
corner_dist: 9.950000e-02
S_00: 1.392000e+03 5.120000e+02
K_00: 9.842439e+02 0.000000e+00 6.900000e+02 0.000000e+00 9.808141e+02 2.331966e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_00: -3.728755e-01 2.037299e-01 2.219027e-03 1.383707e-03 -7.233722e-02
R_00: 1.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00
T_00: 2.573699e-16 -1.059758e-16 1.614870e-16
S_rect_00: 1.242000e+03 3.750000e+02
R_rect_00: 9.999239e-01 9.837760e-03 -7.445048e-03 -9.869795e-03 9.999421e-01 -4.278459e-03 7.402527e-03 4.351614e-03 9.999631e-01
P_rect_00: 7.215377e+02 0.000000e+00 6.095593e+02 0.000000e+00 0.000000e+00 7.215377e+02 1.728540e+02 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00
S_01: 1.392000e+03 5.120000e+02
K_01: 9.895267e+02 0.000000e+00 7.020000e+02 0.000000e+00 9.878386e+02 2.455590e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_01: -3.644661e-01 1.790019e-01 1.148107e-03 -6.298563e-04 -5.314062e-02
R_01: 9.993513e-01 1.860866e-02 -3.083487e-02 -1.887662e-02 9.997863e-01 -8.421873e-03 3.067156e-02 8.998467e-03 9.994890e-01
T_01: -5.370000e-01 4.822061e-03 -1.252488e-02
S_rect_01: 1.242000e+03 3.750000e+02
R_rect_01: 9.996878e-01 -8.976826e-03 2.331651e-02 8.876121e-03 9.999508e-01 4.418952e-03 -2.335503e-02 -4.210612e-03 9.997184e-01
P_rect_01: 7.215377e+02 0.000000e+00 6.095593e+02 -3.875744e+02 0.000000e+00 7.215377e+02 1.728540e+02 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00
S_02: 1.392000e+03 5.120000e+02
K_02: 9.597910e+02 0.000000e+00 6.960217e+02 0.000000e+00 9.569251e+02 2.241806e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_02: -3.691481e-01 1.968681e-01 1.353473e-03 5.677587e-04 -6.770705e-02
R_02: 9.999758e-01 -5.267463e-03 -4.552439e-03 5.251945e-03 9.999804e-01 -3.413835e-03 4.570332e-03 3.389843e-03 9.999838e-01
T_02: 5.956621e-02 2.900141e-04 2.577209e-03
S_rect_02: 1.242000e+03 3.750000e+02
R_rect_02: 9.998817e-01 1.511453e-02 -2.841595e-03 -1.511724e-02 9.998853e-01 -9.338510e-04 2.827154e-03 9.766976e-04 9.999955e-01
P_rect_02: 7.215377e+02 0.000000e+00 6.095593e+02 4.485728e+01 0.000000e+00 7.215377e+02 1.728540e+02 2.163791e-01 0.000000e+00 0.000000e+00 1.000000e+00 2.745884e-03
S_03: 1.392000e+03 5.120000e+02
K_03: 9.037596e+02 0.000000e+00 6.957519e+02 0.000000e+00 9.019653e+02 2.242509e+02 0.000000e+00 0.000000e+00 1.000000e+00
D_03: -3.639558e-01 1.788651e-01 6.029694e-04 -3.922424e-04 -5.382460e-02
R_03: 9.995599e-01 1.699522e-02 -2.431313e-02 -1.704422e-02 9.998531e-01 -1.809756e-03 2.427880e-02 2.223358e-03 9.997028e-01
T_03: -4.731050e-01 5.551470e-03 -5.250882e-03
S_rect_03: 1.242000e+03 3.750000e+02
R_rect_03: 9.998321e-01 -7.193136e-03 1.685599e-02 7.232804e-03 9.999712e-01 -2.293585e-03 -1.683901e-02 2.415116e-03 9.998553e-01
P_rect_03: 7.215377e+02 0.000000e+00 6.095593e+02 -3.395242e+02 0.000000e+00 7.215377e+02 1.728540e+02 2.199936e+00 0.000000e+00 0.000000e+00 1.000000e+00 2.729905e-03
</code></pre> | 2022-04-22 18:15:01.133000+00:00 | 2022-04-24 19:07:52.317000+00:00 | null | computer-vision|stereo-3d|stereoscopy | ['https://en.wikipedia.org/wiki/Image_rectification', 'https://arxiv.org/pdf/2203.00123.pdf', 'https://github.com/decadenza/SimpleStereo/blob/a139e8bf1ac7a10e532c4204bf249ea0e103a7d1/simplestereo/__init__.py#L503', 'https://github.com/decadenza/SimpleStereo/blob/master/examples/011%20Build3DPointCloud.py'] | 4 |
41,283,663 | <p>A (1,1) convolution layer is not a fully connected layer. If you want to implement a fully connected layer as a convolution layer, the kernel size of that last layer should equal the spatial size of the feature map from the layer before. </p>
<p>(If the feature map of the layer before is 50x50, your last layer should have a 50 x 50 kernel.) A convolution layer with a (1,1) kernel is similar to an MLP layer applied at every spatial position. If you want to understand more about how it works, read the paper <a href="https://arxiv.org/abs/1312.4400" rel="nofollow noreferrer">Network in Network</a>.</p>
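<p>A rough tf.slim sketch of both ideas (the feature-map size, scope names and number of classes are placeholders for illustration):</p>

<pre><code>import tensorflow as tf
slim = tf.contrib.slim

def head_without_fc(net, num_classes=3):
    """`net` is assumed to be a [batch, 50, 50, C] feature map from the conv trunk."""
    # (a) a "fully connected" layer written as a convolution: kernel == feature map size
    fc_as_conv = slim.conv2d(net, 512, [50, 50], padding='VALID', scope='fc_as_conv')

    # (b) the fully-convolutional alternative: 1x1 conv down to num_classes,
    #     then global average pooling to collapse the spatial dimensions
    logits = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
                         normalizer_fn=None, scope='logits_1x1')
    logits = tf.reduce_mean(logits, [1, 2])  # -> [batch, num_classes]
    return fc_as_conv, logits
</code></pre>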
<p>If I understood well, you want to get rid of the fully connected layer, so you have to do two things: </p>
<ul>
<li>ensure the last layer is a (1,1) convolution whose number of channels equals the number of output classes. </li>
<li>use global average pooling to reduce the feature map to 1x1 and then feed the result to the softmax. </li>
</ul> | 2016-12-22 12:55:08.963000+00:00 | 2016-12-22 12:55:08.963000+00:00 | null | null | 41,281,556 | <p>I'm trying to modify tensorflow slim overfeat network to classify small image classes, image size is 60*60 and 3 classes.
I'm using TensorFlow v0.12 on Ubuntu 14.04 with a TITAN X GPU.</p>
<p>My first network is </p>
<pre><code>
import tensorflow as tf
slim = tf.contrib.slim
trunc_normal = lambda stddev: tf.truncated_normal_initializer(0.0, stddev)
def overfeat_arg_scope(weight_decay=0.0005):
with slim.arg_scope(
[slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.relu,
weights_regularizer=slim.l2_regularizer(weight_decay),
biases_initializer=tf.constant_initializer()):
with slim.arg_scope([slim.conv2d], padding='SAME'):
with slim.arg_scope([slim.max_pool2d], padding='VALID') as arg_sc:
return arg_sc
def overfeat(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.5,
spatial_squeeze=False,
reuse=None,
scope='overfeat'):
with tf.variable_scope(scope, 'overfeat', [inputs], reuse=reuse) as sc:
end_points_collection = sc.name + '_end_points'
# Collect outputs for conv2d, fully_connected and max_pool2d
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
outputs_collections=end_points_collection):
net = slim.conv2d(inputs, 64, 3, padding='VALID',
scope='conv11')
net = slim.conv2d(inputs, 128, 3, padding='VALID',
scope='conv12')
net = slim.max_pool2d(net, 2, scope='pool1')
net = slim.conv2d(net, 128, 3, padding='VALID', scope='conv2')
net = slim.max_pool2d(net, 2, scope='pool2')
net = slim.conv2d(net, 256, 3, scope='conv3')
net = slim.conv2d(net, 256, 3, scope='conv4')
net = slim.conv2d(net, 256, 3, scope='conv5')
net = slim.max_pool2d(net, 2, scope='pool5')
with slim.arg_scope([slim.conv2d],
weights_initializer=trunc_normal(0.005),
biases_initializer=tf.constant_initializer(0.1)):
# Use conv2d instead of fully_connected layers.
net = slim.conv2d(net, 512, 3, padding='VALID', scope='fc6')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='dropout6')
net = slim.conv2d(net, 1024, 1, scope='fc7')
with tf.variable_scope('Logits'):
#pylint: disable=no-member
if is_training:
net = slim.avg_pool2d(net, net.get_shape()[1:3], padding='VALID',
scope='AvgPool_1a_8x8')
net = slim.conv2d(
net,
num_classes, 1,
activation_fn=None,
normalizer_fn=None,
biases_initializer=tf.constant_initializer(),
scope='fc9')
net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
scope='Dropout')
# Convert end_points_collection into a end_point dict.
end_points = slim.utils.convert_collection_to_dict(end_points_collection)
if spatial_squeeze:
net = tf.squeeze(net, [1, 2], name='fc8/squeezed')
end_points[sc.name + '/fc8'] = net
return net, end_points
def inference(images, num_classes, keep_probability, phase_train=True, weight_decay=0.0, reuse=None):
batch_norm_params = {
# Decay for the moving averages.
'decay': 0.995,
# epsilon to prevent 0s in variance.
'epsilon': 0.001,
# force in-place updates of mean and variance estimates
'updates_collections': None,
}
with slim.arg_scope(overfeat_arg_scope()):
return overfeat(images, num_classes, is_training=phase_train,
dropout_keep_prob=keep_probability, reuse=reuse)
</code></pre>
<p>I'm using cross entropy loss with the tf.nn.sparse_softmax_cross_entropy_with_logits function.</p>
<p>And the training result is
<a href="https://i.stack.imgur.com/CHpCU.jpg" rel="nofollow noreferrer">Loss And Accuracy with one 1x Conv</a></p>
<p>This result is passable.
I'm trying to add one more 1x1 conv after fc7 because I think a 1x1 conv is the same as a fully connected layer and may improve accuracy.</p>
<pre><code>
...
net = slim.conv2d(net, 1024, 1, scope='fc7')
net = slim.conv2d(net, 1024, 1, scope='fc7_1')
...
</code></pre>
<p>But I got an unreliable result:
<a href="https://i.stack.imgur.com/3cObm.jpg" rel="nofollow noreferrer">Loss And Accuracy with two 1x1 Conv</a></p>
<p>This network does not get optimized; the loss stays at 1. </p>
<p>Why can't I add more 1x1 conv or fc layers?</p>
<p>And how can I improve this network?</p> | 2016-12-22 10:58:08.743000+00:00 | 2016-12-22 12:55:08.963000+00:00 | 2016-12-22 11:03:13.387000+00:00 | image-processing|tensorflow|deep-learning|convolution | ['https://arxiv.org/abs/1312.4400'] | 1 |
62,986,041 | <p>The idea for a learning rate range test as done in lr_find comes from this paper by Leslie Smith: <a href="https://arxiv.org/abs/1803.09820" rel="noreferrer">https://arxiv.org/abs/1803.09820</a> That has a lot of other useful tuning tips; it's worth studying closely.</p>
<p>In lr_find, the learning rate is slowly ramped up (in a log-linear way). You don't want to pick the point at which loss is lowest; you want to pick the point at which it is dropping fastest per step (=net is learning as fast as possible). That does happen somewhere around the middle of the downward slope or 1e-2, so the guy who wrote the notebook has it about right. Anything between 0.5e-2 and 3e-2 has roughly the same slope and would be a reasonable choice; the smaller values would correspond to a bit slower learning (=more epochs needed, also less regularization) but with a bit less risk of reaching a plateau too early.</p>
<p>I'll try to add a bit of intuition about what is happening when loss is the lowest in this test, say learning rate=1e-1. At this point, the gradient descent algorithm is taking large steps in the direction of the gradient, but loss is not decreasing. How can this happen? Well, it would happen if the steps are consistently too large. Think of trying to get into a well (or canyon) in the loss landscape. If your step size is larger than the size of the well, you can consistently step <em>over</em> it every time and end up on the other side.</p>
<p>This picture from a <a href="https://www.jeremyjordan.me/nn-learning-rate/" rel="noreferrer">nice blog post by Jeremy Jordan</a> shows it visually:
<a href="https://i.stack.imgur.com/4ep2K.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4ep2K.png" alt="enter image description here" /></a></p>
<p>In the picture, it shows the gradient descent climbing out of a well by taking too large steps (maybe lr=1+0 in your test). I think this rarely happens exactly like that unless lr is truly excessive; more likely, the well is in a relatively flat landscape, and the gradient descent can step <em>over</em> it, not being able to get into the well in the first place. High-dimensional loss landscapes are hard to visualize, and may be very irregular, but in a sense the lr_find test is looking for the scale of the typical features in the landscape and then picking a learning rate that gives you a step which is similar sized but a bit smaller.</p> | 2020-07-19 21:44:08.103000+00:00 | 2020-07-19 21:44:08.103000+00:00 | null | null | 61,172,627 | <p>I am going over this <a href="https://www.kaggle.com/kageyama/fastai-heroes-recognition-resnet34" rel="noreferrer">Heroes Recognition ResNet34</a> notebook published on Kaggle.</p>
<p>The author uses fastai's <code>learn.lr_find()</code> method to find the optimal learning rate.</p>
<p>Plotting the loss function against the learning rate yields the following figure:</p>
<p><a href="https://i.stack.imgur.com/b0lAd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/b0lAd.png" alt="enter image description here"></a> </p>
<p>It seems that the loss reaches a minimum for 1e-1, yet in the next step the author passes 1e-2 as the max_lr in <code>fit_one_cycle</code> in order to train his model:</p>
<p><code>learn.fit_one_cycle(6,1e-2)</code></p>
<p>Why use 1e-2 over 1e-1 in this example? Wouldn't this only make the training slower? </p> | 2020-04-12 14:01:50.247000+00:00 | 2022-08-08 16:48:34.290000+00:00 | 2020-07-21 21:46:58.813000+00:00 | pytorch|conv-neural-network|kaggle|fast-ai | ['https://arxiv.org/abs/1803.09820', 'https://www.jeremyjordan.me/nn-learning-rate/', 'https://i.stack.imgur.com/4ep2K.png'] | 3 |
29,839,620 | <p>Here is what i find:</p>
<p>A modified implementation of the model described in Clarke and
Lapata, 2008, "Global Inference for Sentence Compression: An Integer
Linear Programming Approach".</p>
<p>Paper: <a href="https://www.jair.org/media/2433/live-2433-3731-jair.pdf" rel="nofollow">https://www.jair.org/media/2433/live-2433-3731-jair.pdf</a></p>
<p>Source: <a href="https://github.com/cnap/sentence-compression" rel="nofollow">https://github.com/cnap/sentence-compression</a> (written in JAVA)</p>
<p><em>Input: At the camp , the rebel troops were welcomed with a banner that read 'Welcome home' .</em></p>
<p><em>Output: At camp , the troops were welcomed.</em></p>
<p><strong>Update:</strong>
Sequence-to-Sequence with Attention Model for Text Summarization.</p>
<p><a href="https://github.com/tensorflow/models/tree/master/textsum" rel="nofollow">https://github.com/tensorflow/models/tree/master/textsum</a></p>
<p><a href="https://arxiv.org/abs/1509.00685" rel="nofollow">https://arxiv.org/abs/1509.00685</a></p> | 2015-04-24 05:45:16.840000+00:00 | 2016-09-09 20:22:18.627000+00:00 | 2016-09-09 20:22:18.627000+00:00 | null | 7,857,648 | <p>Using Machine translation, can I obtain a very compressed version of a sentence,
eg. <strong>I would really like to have a delicious tasty cup of coffee</strong> would be translated to <strong>I want coffee</strong>
Does any of the NLP engines provide such a functionality?</p>
<p>I got a few research papers that does <a href="http://www.mitpressjournals.org/doi/pdfplus/10.1162/coli_a_00002" rel="noreferrer">paraphase generation</a> and <a href="http://www.cs.jhu.edu/~ccb/publications/learning-sentential-paraphrases-from-bilingual-parallel-corpora.pdf" rel="noreferrer">sentence compression</a>. But is there any library which has already implemented this?</p> | 2011-10-22 05:37:20.620000+00:00 | 2018-07-20 00:33:12.667000+00:00 | null | nlp|nltk|stanford-nlp|opennlp | ['https://www.jair.org/media/2433/live-2433-3731-jair.pdf', 'https://github.com/cnap/sentence-compression', 'https://github.com/tensorflow/models/tree/master/textsum', 'https://arxiv.org/abs/1509.00685'] | 4 |
60,235,181 | <p>I have an approach that should be applicable to an arbitrary 3D surface, even when that surface has holes in it or is noisy. It's pretty slow right now, but it seems to work and may give you some ideas for how to do this.</p>
<p>The basic premise is a differential geometric one and is to:</p>
<p>1.) Generate a pointset representing your surface</p>
<p>2.) Generate a k nearest neighbors proximity graph from this pointset (I also normalized distances across dimensions here as I felt it captured the notion of "neighbors" more accurately)</p>
<p>3.) Calculate the tangent spaces associated with each node in this proximity graph by using the point and its neighbors as columns of a matrix that I then perform SVD on. After SVD, the left singular vectors give me a new basis for my tangent space (the first two column vectors are my plane vectors, and the third is normal to the plane)</p>
<p>4.) Use dijkstra's algorithm to move from a starting node to an ending node on this proximity graph, but instead of using euclidean distance as edge weights, use the distance between vectors being parallel transported via tangent spaces.</p>
<p>It's inspired by this paper (minus all the unfolding): <a href="https://arxiv.org/pdf/1806.09039.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1806.09039.pdf</a></p>
<p>Note that I left a few helper functions I was using in that probably aren't relevant to you directly (the plane plotting stuff mostly).</p>
<p>The functions you'll want to look at are get_knn, build_proxy_graph, generate_tangent_spaces, and geodesic_single_path_dijkstra.</p>
<p>The implementation could also probably be improved.</p>
<p>Here's the code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mayavi import mlab
from sklearn.neighbors import NearestNeighbors
from scipy.linalg import svd
import networkx as nx
import heapq
from collections import defaultdict
def surface_squares(x_min, x_max, y_min, y_max, steps):
x = np.linspace(x_min, x_max, steps)
y = np.linspace(y_min, y_max, steps)
xx, yy = np.meshgrid(x, y)
zz = xx**2 + yy**2
return xx, yy, zz
def get_meshgrid_ax(x, y, z):
# fig = plt.figure()
# ax = fig.gca(projection='3d')
# ax.plot_surface(X=x, Y=y, Z=z)
# return ax
fig = mlab.figure()
su = mlab.surf(x.T, y.T, z.T, warp_scale=0.1)
def get_knn(flattened_points, num_neighbors):
# need the +1 because each point is its own nearest neighbor
    knn = NearestNeighbors(n_neighbors=num_neighbors+1)
# normalize flattened points when finding neighbors
neighbor_flattened = (flattened_points - np.min(flattened_points, axis=0)) / (np.max(flattened_points, axis=0) - np.min(flattened_points, axis=0))
knn.fit(neighbor_flattened)
dist, indices = knn.kneighbors(neighbor_flattened)
return dist, indices
def rotmatrix(axis, costheta):
""" Calculate rotation matrix
Arguments:
- `axis` : Rotation axis
- `costheta` : Rotation angle
"""
x, y, z = axis
c = costheta
s = np.sqrt(1-c*c)
C = 1-c
return np.matrix([[x*x*C+c, x*y*C-z*s, x*z*C+y*s],
[y*x*C+z*s, y*y*C+c, y*z*C-x*s],
[z*x*C-y*s, z*y*C+x*s, z*z*C+c]])
def plane(Lx, Ly, Nx, Ny, n, d):
""" Calculate points of a generic plane
Arguments:
- `Lx` : Plane Length first direction
- `Ly` : Plane Length second direction
- `Nx` : Number of points, first direction
- `Ny` : Number of points, second direction
- `n` : Plane orientation, normal vector
- `d` : distance from the origin
"""
x = np.linspace(-Lx/2, Lx/2, Nx)
y = np.linspace(-Ly/2, Ly/2, Ny)
# Create the mesh grid, of a XY plane sitting on the orgin
X, Y = np.meshgrid(x, y)
Z = np.zeros([Nx, Ny])
n0 = np.array([0, 0, 1])
# Rotate plane to the given normal vector
if any(n0 != n):
costheta = np.dot(n0, n)/(np.linalg.norm(n0)*np.linalg.norm(n))
axis = np.cross(n0, n)/np.linalg.norm(np.cross(n0, n))
rotMatrix = rotmatrix(axis, costheta)
XYZ = np.vstack([X.flatten(), Y.flatten(), Z.flatten()])
X, Y, Z = np.array(rotMatrix*XYZ).reshape(3, Nx, Ny)
eps = 0.000000001
dVec = d #abs((n/np.linalg.norm(n)))*d#np.array([abs(n[i])/np.linalg.norm(n)*val if abs(n[i]) > eps else val for i, val in enumerate(d)]) #
X, Y, Z = X+dVec[0], Y+dVec[1], Z+dVec[2]
return X, Y, Z
def build_proxy_graph(proxy_n_dist, proxy_n_indices):
G = nx.Graph()
for distance_list, neighbor_list in zip(proxy_n_dist, proxy_n_indices):
# first element is always point
current_node = neighbor_list[0]
neighbor_list = neighbor_list[1:]
distance_list = distance_list[1:]
for neighbor, dist in zip(neighbor_list, distance_list):
G.add_edge(current_node, neighbor, weight=dist)
return G
def get_plane_points(normal_vec, initial_point, min_range=-10, max_range=10, steps=1000):
steps_for_plane = np.linspace(min_range, max_range, steps)
xx, yy = np.meshgrid(steps_for_plane, steps_for_plane)
d = -initial_point.dot(normal_vec)
eps = 0.000000001
if abs(normal_vec[2]) < eps and abs(normal_vec[1]) > eps:
zz = (-xx*normal_vec[2] - yy*normal_vec[0] - d)/normal_vec[1]
else:
zz = (-xx*normal_vec[0] - yy*normal_vec[1] - d)/normal_vec[2]
return xx, yy, zz
# def plot_tangent_plane_at_point(pointset, flattened_points, node, normal_vec):
# ax = get_meshgrid_ax(x=pointset[:, :, 0], y=pointset[:, :, 1], z=pointset[:, :, 2])
# node_loc = flattened_points[node]
# print("Node loc: {}".format(node_loc))
# xx, yy, zz = plane(10, 10, 500, 500, normal_vec, node_loc)
# # xx, yy, zz = get_plane_points(normal_vec, node_loc)
# print("Normal Vec: {}".format(normal_vec))
# ax.plot_surface(X=xx, Y=yy, Z=zz)
# ax.plot([node_loc[0]], [node_loc[1]], [node_loc[2]], markerfacecolor='k', markeredgecolor='k', marker='o', markersize=10)
# plt.show()
def generate_tangent_spaces(proxy_graph, flattened_points):
    # This depth should guarantee at least 16 neighbors
tangent_spaces = {}
for node in proxy_graph.nodes():
neighbors = list(nx.neighbors(proxy_graph, node))
node_point = flattened_points[node]
zero_mean_mat = np.zeros((len(neighbors)+1, len(node_point)))
for i, neighbor in enumerate(neighbors):
zero_mean_mat[i] = flattened_points[neighbor]
zero_mean_mat[-1] = node_point
zero_mean_mat = zero_mean_mat - np.mean(zero_mean_mat, axis=0)
u, s, v = svd(zero_mean_mat.T)
# smat = np.zeros(u.shape[0], v.shape[0])
# smat[:s.shape[0], :s.shape[0]] = np.diag(s)
tangent_spaces[node] = u
return tangent_spaces
def geodesic_single_path_dijkstra(flattened_points, proximity_graph, tangent_frames, start, end):
# short circuit
if start == end:
return []
# Create min priority queue
minheap = []
pred = {}
dist = defaultdict(lambda: 1.0e+100)
# for i, point in enumerate(flattened_points):
R = {}
t_dist = {}
geo_dist = {}
R[start] = np.eye(3)
t_dist[start] = np.ones((3,))
dist[start] = 0
start_vector = flattened_points[start]
    for neighbor in nx.neighbors(proximity_graph, start):
pred[neighbor] = start
dist[neighbor] = np.linalg.norm(start_vector - flattened_points[neighbor])
heapq.heappush(minheap, (dist[neighbor], neighbor))
while minheap:
r_dist, r_ind = heapq.heappop(minheap)
if r_ind == end:
break
q_ind = pred[r_ind]
u, s, v = svd(tangent_frames[q_ind].T*tangent_frames[r_ind])
R[r_ind] = np.dot(R[q_ind], u * v.T)
t_dist[r_ind] = t_dist[q_ind]+np.dot(R[q_ind], tangent_frames[q_ind].T * (r_dist - dist[q_ind]))
geo_dist[r_ind] = np.linalg.norm(t_dist[r_ind])
        for neighbor in nx.neighbors(proximity_graph, r_ind):
temp_dist = dist[r_ind] + np.linalg.norm(flattened_points[neighbor] - flattened_points[r_ind])
if temp_dist < dist[neighbor]:
dist[neighbor] = temp_dist
pred[neighbor] = r_ind
heapq.heappush(minheap, (dist[neighbor], neighbor))
# found ending index, now loop through preds for path
current_ind = end
node_path = [end]
while current_ind != start:
node_path.append(pred[current_ind])
current_ind = pred[current_ind]
return node_path
def plot_path_on_surface(pointset, flattened_points, path):
# ax = get_meshgrid_ax(x=pointset[:, :, 0], y=pointset[:, :, 1], z=pointset[:, :, 2])
# ax.plot(points_in_path[:, 0], points_in_path[:, 1], points_in_path[:, 2], linewidth=10.0)
# plt.show()
get_meshgrid_ax(x=pointset[:, :, 0], y=pointset[:, :, 1], z=pointset[:, :, 2])
points_in_path = flattened_points[path]
mlab.plot3d(points_in_path[:, 0], points_in_path[:, 1], points_in_path[:, 2] *.1)
mlab.show()
"""
True geodesic of graph.
Build proximity graph
Find tangent space using geodisic neighborhood at each point in graph
Parallel transport vectors between tangent space points
Use this as your distance metric
Dijkstra's Algorithm
"""
if __name__ == "__main__":
x, y, z = surface_squares(-5, 5, -5, 5, 500)
# plot_meshgrid(x, y, z)
pointset = np.stack([x, y, z], axis=2)
proxy_graph_num_neighbors = 16
flattened_points = pointset.reshape(pointset.shape[0]*pointset.shape[1], pointset.shape[2])
flattened_points = flattened_points
proxy_n_dist, proxy_n_indices = get_knn(flattened_points, proxy_graph_num_neighbors)
# Generate a proximity graph using proxy_graph_num_neighbors
# Nodes = number of points, max # of edges = number of points * num_neighbors
proxy_graph = build_proxy_graph(proxy_n_dist, proxy_n_indices)
# Now, using the geodesic_num_neighbors, get geodesic neighborshood for tangent space construction
tangent_spaces = generate_tangent_spaces(proxy_graph, flattened_points)
node_to_use = 2968
# 3rd vector of tangent space is normal to plane
# plot_tangent_plane_at_point(pointset, flattened_points, node_to_use, tangent_spaces[node_to_use][:, 2])
path = geodesic_single_path_dijkstra(flattened_points, proxy_graph, tangent_spaces, 250, 249750)
plot_path_on_surface(pointset, flattened_points, path)
</code></pre>
<p>Note that I installed and set up mayavi to get a decent output image (matplotlib doesn't have real 3d rendering and consequently, its plots suck). I did however leave the matplotlib code in if you want to use it. If you do, just remove the scaling by .1 in the path plotter and uncomment the plotting code. Anyways, here's an example image for z=x^2+y^2. The white line is the geodesic path:</p>
<p><a href="https://i.stack.imgur.com/FaL9p.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FaL9p.jpg" alt="Example output"></a></p>
<p>You could also fairly easily adjust this to return all the pairwise geodesic distances between nodes from dijkstra's algorithm (look in the appendix of the paper to see the minor modifications you'll need to do this). Then you could draw whatever lines you want on your surface.</p> | 2020-02-15 01:04:10.703000+00:00 | 2020-02-15 03:19:07.603000+00:00 | 2020-02-15 03:19:07.603000+00:00 | null | 60,106,732 | <p>I have in mind <a href="https://www.youtube.com/watch?v=NfqrCdAjiks" rel="nofollow noreferrer">this video</a>, or this <a href="http://www.physikdidaktik.uni-karlsruhe.de/software/geodesiclab/a3.html" rel="nofollow noreferrer">simulation</a>, and I would like to reproduce the geodesic lines on some sort of surface in 3D, given by a function f(x,y), from some starting point.</p>
<p>The <a href="https://cs.stanford.edu/people/jbaek/18.821.paper2.pdf" rel="nofollow noreferrer">midpoint method</a> seems computationally and code intense, and so I'd like to ask if there is a way to generate an approximate geodesic curve based on the normal vector to the surface at different points. Each point has a tangent vector space associated with it, and therefore, it seems like knowing the normal vector does not determine a specific direction to move forward the curve.</p>
<p>I have tried working with Geogebra, but I realize that it may be necessary to shift to other software platforms, such as Python (or Poser?), Matlab, or others.</p>
<p>Is this idea possible, and can I get some ideas as to how to implement it?</p>
<hr>
<p>In case it provides some ideas as to how to answer the question, there previously was an answer (now unfortunatley erased) suggesting the midpoint method for a terrain with the functional form z = F(x,y), starting with the straight line between the endpoints, splitting in short segments [I presume the straight line on the XY plane (?)], and lifting [I presume the nodes between segments on the XY plane (?)] on the surface. Next it suggested finding "a midpoint" [I guess a midpoint of the segments joining each consecutive pairs of projected points on the surface (?)], and projecting "it" [I guess each one of these midpoints close, but not quite on the surface(?)] orthogonally on the surface (in the direction of the normal), using the equation Z + t = F(X + t Fx, Y + t Fy) [I guess this is a dot product meant to be zero...</p>
<p><img src="https://i.stack.imgur.com/rkO9C.png" alt="enter image description here"></p>
<p>(?)], where (X,Y,Z) are the coordinates of the midpoint, Fx, Fy the partial derivatives of F, and t the unknown [that is my main issue understanding this... What am I supposed to do with this t once I find it? Add it to each coordinate of (X,Y,Z) as in (X+t, Y+t, Z+t)? And then?]. This is a non-linear equation in t, solved via <a href="https://en.wikipedia.org/wiki/Newton%27s_method" rel="nofollow noreferrer">Newton's iterations</a>.</p>
<hr>
<p>As an update / bookmark, Alvise Vianello has kindly posted a Python computer simulation of geodesic lines inspired on <a href="http://www.physikdidaktik.uni-karlsruhe.de/software/geodesiclab/a3.html" rel="nofollow noreferrer">this</a> page <a href="https://github.com/amv213/Geodesic-Lines-3D" rel="nofollow noreferrer">on GitHub</a>. Thank you very much!</p> | 2020-02-07 03:33:47.790000+00:00 | 2022-07-05 10:40:52.430000+00:00 | 2020-03-03 01:19:22.467000+00:00 | python|matlab|geometry|computational-geometry|geometry-surface | ['https://arxiv.org/pdf/1806.09039.pdf', 'https://i.stack.imgur.com/FaL9p.jpg'] | 2 |
47,626,127 | <p>In practice, if your dataset is large, it is infeasible to mine hard triplets from the whole dataset. Instead, you can mine hard triplets from only a small proportion of your training dataset, which is much more time-saving. After training the network on those generated hard triplets for K iterations, you feed the network the next batch of images from the dataset and generate new hard triplets.</p>
<p>In this way, the computation cost is acceptable and the network gradually improves as training goes on.</p>
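<p>A rough numpy sketch of that mining loop (the model/embedding calls are hypothetical helpers; this only illustrates the periodic re-mining, not a full Caffe recipe):</p>

<pre><code>import numpy as np

def mine_hard_triplets(embeddings, labels, margin=0.2, top_n=1000):
    """labels: 1-D numpy array of class ids. Returns the top_n (anchor, pos, neg)
    index triplets with the largest triplet loss."""
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=2)
    triplets = []
    for a in range(len(labels)):
        pos_idx = np.where(labels == labels[a])[0]
        neg_idx = np.where(labels != labels[a])[0]
        for p in pos_idx:
            if p == a:
                continue
            # hardest negative for this (anchor, positive) pair
            n = neg_idx[np.argmin(dists[a, neg_idx])]
            loss = dists[a, p] - dists[a, n] + margin
            if loss > 0:
                triplets.append((loss, a, p, n))
    triplets.sort(reverse=True)
    return [(a, p, n) for _, a, p, n in triplets[:top_n]]

# Training loop sketch: re-mine every K iterations on a sampled subset.
# for step in range(num_steps):
#     if step % K == 0:
#         subset = sample_subset(dataset)               # hypothetical helper
#         embeddings, labels = embed(model, subset)     # hypothetical helper
#         hard_triplets = mine_hard_triplets(embeddings, labels)
#     train_one_iteration(model, hard_triplets)         # hypothetical helper
</code></pre>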
<p>see the <a href="https://arxiv.org/pdf/1604.01325.pdf" rel="nofollow noreferrer">article here</a> for more reference.(section 5.1)</p> | 2017-12-04 03:34:35.260000+00:00 | 2017-12-04 07:25:36.090000+00:00 | 2017-12-04 07:25:36.090000+00:00 | null | 42,605,479 | <p>I am trying to implement a deep network for triplet loss in Caffe.
When I select three samples for anchor, positive, negative images randomly, it almost produces zero losses. So I tried the following strategy:</p>
<pre><code>If I have 15,000 training images,
1. extract features of 15,000 images with the current weights.
2. calculate the triplet losses with all possible triplet combinations.
3. use the hard samples with n largest losses, and update the network n times.
4. iterate the above steps every k iterations to get new hard samples.
</code></pre>
<p>The step 1 is fast, but I think step 2 is very time-consuming and is really inefficient. So, I wonder whether there are other efficient strategies for hard data sampling.</p>
<p>Thanks.</p> | 2017-03-05 06:59:07.197000+00:00 | 2017-12-04 07:25:36.090000+00:00 | null | neural-network|deep-learning|caffe|conv-neural-network|pycaffe | ['https://arxiv.org/pdf/1604.01325.pdf'] | 1 |
65,159,410 | <blockquote>
<p>Is there any way to decompose a Keras CNN model and/or it's weights to glean how it's arriving at it's decision?</p>
</blockquote>
<p>You can look at the <a href="https://arxiv.org/abs/1610.02391" rel="nofollow noreferrer">Grad-CAM algorithm</a> to see where the neural network was looking at before taking its final decision. Here's an <a href="https://keras.io/examples/vision/grad_cam/" rel="nofollow noreferrer">implementation in Keras</a> using a pre-trained model.</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow import keras
from IPython.display import Image, display
import matplotlib.pyplot as plt
import matplotlib.cm as cm
model_builder = keras.applications.xception.Xception
img_size = (299, 299)
preprocess_input = keras.applications.xception.preprocess_input
decode_predictions = keras.applications.xception.decode_predictions
last_conv_layer_name = "block14_sepconv2_act"
classifier_layer_names = [
"avg_pool",
"predictions",
]
img_path = keras.utils.get_file(
"african_elephant.jpg", "https://i.imgur.com/Bvro0YD.png"
)
display(Image(img_path))
def get_img_array(img_path, size):
img = keras.preprocessing.image.load_img(img_path, target_size=size)
array = keras.preprocessing.image.img_to_array(img)
array = np.expand_dims(array, axis=0)
return array
def make_gradcam_heatmap(
img_array, model, last_conv_layer_name, classifier_layer_names
):
last_conv_layer = model.get_layer(last_conv_layer_name)
last_conv_layer_model = keras.Model(model.inputs, last_conv_layer.output)
classifier_input = keras.Input(shape=last_conv_layer.output.shape[1:])
x = classifier_input
for layer_name in classifier_layer_names:
x = model.get_layer(layer_name)(x)
classifier_model = keras.Model(classifier_input, x)
with tf.GradientTape() as tape:
last_conv_layer_output = last_conv_layer_model(img_array)
tape.watch(last_conv_layer_output)
preds = classifier_model(last_conv_layer_output)
top_pred_index = tf.argmax(preds[0])
top_class_channel = preds[:, top_pred_index]
grads = tape.gradient(top_class_channel, last_conv_layer_output)
pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))
last_conv_layer_output = last_conv_layer_output.numpy()[0]
pooled_grads = pooled_grads.numpy()
for i in range(pooled_grads.shape[-1]):
last_conv_layer_output[:, :, i] *= pooled_grads[i]
heatmap = np.mean(last_conv_layer_output, axis=-1)
heatmap = np.maximum(heatmap, 0) / np.max(heatmap)
return heatmap
img_array = preprocess_input(get_img_array(img_path, size=img_size))
model = model_builder(weights="imagenet")
preds = model.predict(img_array)
print("Predicted:", decode_predictions(preds, top=1)[0])
heatmap = make_gradcam_heatmap(
img_array, model, last_conv_layer_name, classifier_layer_names
)
img = keras.preprocessing.image.load_img(img_path)
img = keras.preprocessing.image.img_to_array(img)
heatmap = np.uint8(255 * heatmap)
jet = cm.get_cmap("jet")
jet_colors = jet(np.arange(256))[:, :3]
jet_heatmap = jet_colors[heatmap]
jet_heatmap = keras.preprocessing.image.array_to_img(jet_heatmap)
jet_heatmap = jet_heatmap.resize((img.shape[1], img.shape[0]))
jet_heatmap = keras.preprocessing.image.img_to_array(jet_heatmap)
superimposed_img = jet_heatmap * 0.4 + img
superimposed_img = keras.preprocessing.image.array_to_img(superimposed_img)
save_path = "elephant_cam.jpg"
superimposed_img.save(save_path)
display(Image(save_path))
</code></pre>
<p><a href="https://i.stack.imgur.com/olpPF.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/olpPF.jpg" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/nbKZl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nbKZl.jpg" alt="enter image description here" /></a></p> | 2020-12-05 16:43:31.287000+00:00 | 2020-12-05 16:43:31.287000+00:00 | null | null | 65,159,265 | <p>I have a CNN that after a lot of work is now performing multiclass (8) classification at 99% accuracy.</p>
<p>While the classification itself has a lot of value, going to a prediction engine would be a game changer.</p>
<p>The catch is that the ability to predict is needed by a human in real life (IRL), not in being processed by a computer.</p>
<p>In this case the CNN is able to classify things faster than the human. It would be significant if the model could provide insights into how it is classifying things</p>
<p>Is there any way to decompose a Keras CNN model and/or it's weights to glean how it's arriving at it's decision? I don't believe so, but would hate not to ask and find out it's possible.</p>
<p>It's not that I'm looking for it to be exact, but if I can find one or two things that heavily influence the prediction/classification that a human being could key on, that could be significant.</p>
<p>Thoughts?</p> | 2020-12-05 16:30:16.507000+00:00 | 2020-12-05 16:43:31.287000+00:00 | null | python|tensorflow|machine-learning|keras | ['https://arxiv.org/abs/1610.02391', 'https://keras.io/examples/vision/grad_cam/', 'https://i.stack.imgur.com/olpPF.jpg', 'https://i.stack.imgur.com/nbKZl.jpg'] | 4 |
14,090,209 | <p>I'm not 100% sure that I fully understood the question, but from what I grasped I would face the situation in this way.</p>
<p>First of all, I refer to Java because all the libraries I know are meant for this language. Secondly, I don't think OWL on its own can satisfy your goal: it can represent rules and axioms, but it does not provide reasoning. That is, you need a reasoner, so you need to build a program that uses one, plus do the additional processing that I will sketch below:</p>
<p>1) You didn't mention it, but I guess you have an underlying ontology w.r.t. you need to prove your consequence relation (what you denote with symbol "->"). If the ontology is not explicit, maybe it can be extracted/composed from the textual expressions you mentioned in the question.</p>
<p>2) you need to use a library for ontology manipulation, I suggest <a href="http://owlapi.sourceforge.net/" rel="nofollow">OWL API</a> from Manchester University, it is very powerful and simple, in the tutorial under section "documentation" you have an overview of the main functionalities, including the use of reasoners (the example shows Hermit, but the principle holds for any other reasoner).</p>
<p>3) At this point you need to check if the ontology is consistent (otherwise anything can be derived, as it often happens with false premises)</p>
<p>4) You add the following axiom to the ontology (you build it directly in Java, no need to serialize back, you can let the reasoner work on the in-memory representation) and check for consistency: A \sqsubseteq B, that is, using the associated interpretation: A^I \subseteq B^I, so it is equivalent to A => B (they have the same truth table).</p>
<p>5) At this point you can add the axiom Match(A,B), where A and B are your class expressions and Match is a Role/Relation that relates all the class expressions for which the second is a consequence of the first.</p>
<p>6) After a number of repetitions of these steps you may want to serialize the result and store it, and this again can be achieved quite simply using OWL API from the in-memory representation.</p>
<p>For some basics about Description Logics (the logic underpinning OWL ontologies) you can refer to <a href="https://www.google.it/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&ved=0CEkQFjAB&url=http://arxiv.org/abs/1201.4089&ei=2B_gUIfkDsb44QShnYGoDQ&usg=AFQjCNGTx4GmGHf9d5gxqkcXZm4RCRsf5Q&sig2=lJpe7hESs50JRBDaV5_7jQ&bvm=bv.1355534169,d.bGE" rel="nofollow">A description logic Primer (2012), Horrocks et al.</a> and to <a href="http://www.aifb.kit.edu/images/1/19/DL-Intro.pdf" rel="nofollow">Foundations of Description Logics (2011), Rudolph</a>.</p>
<p>I'm not a logician or a DL expert, so please verify all the information I provided and feel free to correct me :)</p> | 2012-12-30 11:10:26.233000+00:00 | 2012-12-31 14:32:36.527000+00:00 | 2012-12-31 14:32:36.527000+00:00 | null | 14,034,656 | <p>I have a simple question which I suspect has no simple answer. Essentially, I want to check whether it is true that one OWL expression (#B) follows on logically from another (#A) - in other words I want to ask: is it true that #A -> #B?</p>
<p>The reason for this is that I'm writing a matching algorithm for an application which matches structures in a knowledge based (represented by the #KnowledgeStructure class) to a structure which describes the needs of the current application state (#StateRequirement). Both structures have properties which have string values representing OWL expressions over the state of a third kind of structure (#Model). These are: #KnowledgeStructure.PostCondition which expresses how the knowledge structure being applied to #Model will transform #Model; and #StateRequirement.GoalCondition, which expresses the #Model state that the application aims to achieve. I want to see, therefore, if the #KnowledgeStructure will satisfy the #StateRequirement by checking that the #KnowledgeStructure.PostCondition produces the desired #StateRequiremment.GoalCondition. I could express this abstractly as: (#KnowledgeStructure.Postcondition => #StateRequirement.GoalCondition) => Match(#KnowledgeStructure, #StateRequirement). Less confusingly I could express this as: ((#A -> #B) -> Match(#A, #B)) where both #A and #B are valid OWL expressions.</p>
<p>In the general case I would like to be able to express the following rule: "If it is true that the expression #B follows from #A, then the expression Match(#A, #B) is also true".</p>
<p>Essentially, my question is this: how do I pose or realise such a rule in OWL? How do I test whether one expression follows from another expression? Also, are existing reasoners sufficiently powerful to determine the relation #A -> #B between two expressions if this relation is not explicitly stated?</p> | 2012-12-25 23:18:34.013000+00:00 | 2012-12-31 14:32:36.527000+00:00 | 2012-12-26 00:32:30.657000+00:00 | semantic-web|ontology|owl|swrl | ['http://owlapi.sourceforge.net/', 'https://www.google.it/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&ved=0CEkQFjAB&url=http://arxiv.org/abs/1201.4089&ei=2B_gUIfkDsb44QShnYGoDQ&usg=AFQjCNGTx4GmGHf9d5gxqkcXZm4RCRsf5Q&sig2=lJpe7hESs50JRBDaV5_7jQ&bvm=bv.1355534169,d.bGE', 'http://www.aifb.kit.edu/images/1/19/DL-Intro.pdf'] | 3 |
58,307,182 | <p>If you are looking for a faster malloc &amp; free, there is a new (13 Feb 2019) drop-in replacement for systems that already have a hardware memory management unit (MMU). Most computers do; most micro-controllers don't.</p>
<p>There is a paper and a study showing it reduces Firefox's memory consumption by 16% and Redis's by 39%.</p>
<p>It is called Mesh. <a href="https://arxiv.org/abs/1902.04738" rel="nofollow noreferrer">Mesh dynamic memory allocation</a></p> | 2019-10-09 15:08:41.903000+00:00 | 2019-10-09 15:08:41.903000+00:00 | null | null | 58,299,305 | <p>I'm looking for <code>malloc</code>/<code>free</code>-like APIs in mainstream OSs that allow me to specify an explicit size during both allocation and de-allocation. What I hope to gain by this is that the runtime might spend less memory on bookkeeping when the allocated size is already available in the program.</p>
<p>On e.g. windows I only found <code>free()</code>, <code>_aligned_free()</code>, and <code>_freea()</code>, none of which take a second argument for size.</p> | 2019-10-09 07:44:25.170000+00:00 | 2019-10-09 15:08:41.903000+00:00 | 2019-10-09 11:49:35.737000+00:00 | c++|c|memory-management | ['https://arxiv.org/abs/1902.04738'] | 1 |
332,743 | <p>I think flocking can help create those groups here, because it seems like different entities that would flock with each other should be part of the same group.</p>
<p><a href="http://arxiv.org/abs/math?papernum=0502342" rel="nofollow noreferrer">http://arxiv.org/abs/math?papernum=0502342</a></p>
<p><a href="http://flashorbit.com/?page_id=40" rel="nofollow noreferrer">http://flashorbit.com/?page_id=40</a></p>
<p>"Boids" seems to be in the same neighborhood when calculate flocks.
<a href="http://www.red3d.com/cwr/boids/" rel="nofollow noreferrer">http://www.red3d.com/cwr/boids/</a></p> | 2008-12-02 01:13:24.430000+00:00 | 2008-12-02 01:19:16.633000+00:00 | 2008-12-02 01:19:16.633000+00:00 | null | 332,701 | <p>I'm dealing with a large group of entities that store locations. They are displayed on a map. I'm trying to come up with an efficient way to group near located entities into one entity when viewed from a higher location. So, for example, if you are very high, when looking down, you will see one entity that represents a group of closely located entities in an area. Zooming in close enough would split that entity out into its contained entities.</p>
<p>Is there an efficient algorithm for doing this? I thought about just gridding the view based on height and dropping entities into grid boxes based on location, then rendering a point for each box. My only concern is that if all the entities are in the upper right of that box, the entity rendered to represent them might be centered in the middle of the cell instead of at the location of the group of entities.</p>
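<p>A small Python sketch of what I mean, placing each rendered marker at the centroid of its bucket rather than at the centre of the grid cell (the cell size would be derived from the viewing height):</p>

<pre><code>from collections import defaultdict

def cluster_by_grid(entities, cell_size):
    """entities: iterable of (x, y) positions. Returns one marker per occupied cell."""
    buckets = defaultdict(list)
    for x, y in entities:
        buckets[(int(x // cell_size), int(y // cell_size))].append((x, y))

    markers = []
    for members in buckets.values():
        # centroid of the members, so the marker sits where the group really is
        cx = sum(x for x, _ in members) / len(members)
        cy = sum(y for _, y in members) / len(members)
        markers.append((cx, cy, len(members)))  # position + how many entities it represents
    return markers
</code></pre>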
<p>Any thoughts or ideas?</p> | 2008-12-02 00:43:31.977000+00:00 | 2008-12-02 03:34:24.770000+00:00 | null | algorithm|grouping | ['http://arxiv.org/abs/math?papernum=0502342', 'http://flashorbit.com/?page_id=40', 'http://www.red3d.com/cwr/boids/'] | 3 |
62,903,264 | <p>There is ambiguity in your problem statement that, depending on how it is resolved, will change the algorithm that you would want to employ. I will discuss the ambiguity later.</p>
<p>As others have suggested, this falls into the domain of combinatorial optimization and there are many different OR tools that could be used to solve this.</p>
<p>To start, I would suggest employing a sequence of weighted bipartite matchings with (possibly) solution pruning.</p>
<p>Here is a solution written in python using networkx based on a sequence of two bipartite matchings (the first being a weighted one for students, the second being unweighted.)</p>
<pre><code>#!/usr/bin/python
"""
filename: student_assign.py
purpose: demonstrate that the problem described in
https://stackoverflow.com/questions/62755778/modified-version-of-student-project-allocation-algorithm
can be solved as a sequence of assignment problems solved through a weighted bipartite matching.
"""
import networkx as nx
import numpy as np
# For this demonstration we take data directly from the problem description
#supervisor | Topics of Interest | No. Of Groups
#L1 | P1, P3, P4 | 2
#L2 | P5, P2, P9 | 1
#L3 | P1, P3, P4 | 1
#L4 | P1, P3, P4 | 4
#L5 | SP1, P3, P8 | 3
#L6 | P32, P3, P40 | 3
supervisors = {
'L1' : { 'topics' : ['P1', 'P3', 'P4'], 'num_groups' : 2},
'L2' : { 'topics' : ['P5', 'P2', 'P9'], 'num_groups' : 1},
'L3' : { 'topics' : ['P1', 'P3', 'P4'], 'num_groups' : 1},
'L4' : { 'topics' : ['P1', 'P3', 'P4'], 'num_groups' : 4},
'L5' : { 'topics' : ['SP1', 'P3', 'P8'], 'num_groups' : 3},
'L6' : { 'topics' : ['P32', 'P3', 'P40'], 'num_groups' : 3},
}
all_topics = sorted(list({ t for s in supervisors for t in supervisors[s]['topics'] }))
# assuming there is a typo in the problem specification and 'supervisor' = 'student' below
#supervisor | Pref1 | Pref 2 | Pref 3 | Pref 4 |
#S1 | P4 | P1 | SP1 | P5 |
#S2 | P1 | P9 | SP1 | P5 |
#S3 | P3 | P1 | P2 | P5 |
#S4 | P4 | P1 | P40 | P5 |
#S5 | P4 | P32 | SP1 | P5 |
#S6 | P9 | P1 | SP1 | P5 |
students = {
'S1' : ['P4', 'P1', 'SP1', 'P5'] ,
'S2' : ['P1', 'P9', 'SP1', 'P5'] ,
'S3' : ['P3', 'P1', 'P2', 'P5'] ,
'S4' : ['P4', 'P1', 'P40', 'P5'] ,
'S5' : ['P4', 'P32', 'SP1', 'P5'] ,
'S6' : ['P9', 'P1', 'SP1', 'P5'] ,
}
MAX_GROUP_SIZE = 2
def get_student_assignments_to_topics(all_topics,students,max_group_size=MAX_GROUP_SIZE):
G = nx.DiGraph()
G.add_node('sink',demand=len(students))
for topic in all_topics:
G.add_node(topic)
G.add_edge(topic, 'sink', weight = 0, capacity = max_group_size)
for student in students:
prefs = students[student]
G.add_node(student,demand=-1)
# add increasing weight edges from student to preferences (lowest == best)
for i, topic in enumerate(prefs):
G.add_edge(student, topic, weight = i, capacity = 1)
# solve the weighted matching
flow_dict = nx.min_cost_flow(G)
# decode which student is assigned to which topic
student_assignments = { t : [] for t in all_topics}
for student in students:
adjacency = flow_dict[student]
prefs = students[student]
for pref in prefs:
if adjacency[pref]:
student_assignments[pref].append(student)
break
return student_assignments
def get_topic_assignments_to_supervisors(student_assignments,supervisors):
non_empty_student_assignments = { topic : student_assignments[topic] for topic in student_assignments if len(student_assignments[topic]) > 0}
G = nx.DiGraph()
G.add_node('sink',demand=len(non_empty_student_assignments))
for topic in non_empty_student_assignments:
G.add_node(topic,demand=-1)
for supervisor in supervisors:
supervisor_properties = supervisors[supervisor]
for topic in supervisor_properties['topics']:
if topic in non_empty_student_assignments:
G.add_edge(topic, supervisor, weight = 0, capacity = 1)
G.add_edge(supervisor, 'sink', weight = 0, capacity = supervisor_properties['num_groups'])
# solve the unweighted matching
flow_dict = nx.min_cost_flow(G)
# decode which supervisor is assigned to which topic
topic_assignments = { s : [] for s in supervisors}
for supervisor in supervisors:
supervisor_properties = supervisors[supervisor]
for topic in supervisor_properties['topics']:
if topic in non_empty_student_assignments:
adjacency = flow_dict[topic]
if adjacency[supervisor]:
topic_assignments[supervisor].append(topic)
return topic_assignments
# assign students to topics by preference
student_assignments = get_student_assignments_to_topics(all_topics,students)
# assign all topics with at least one student to a supervisor who fits the criteria
topic_assignments = get_topic_assignments_to_supervisors(student_assignments,supervisors)
print 'These are the assignment of students to topics based on preference:'
print student_assignments
print 'These are the assignment of topics to supervisors based on availability:'
print topic_assignments
</code></pre>
<p>This script outputs:</p>
<pre><code>These are the assignment of students to topics based on preference:
{'P2': [], 'P3': ['S3'], 'P1': ['S2', 'S1'], 'P4': ['S5', 'S4'], 'P5': [], 'SP1': [], 'P8': [], 'P9': ['S6'], 'P32': [], 'P40': []}
These are the assignment of topics to supervisors based on availability:
{'L6': [], 'L4': ['P1', 'P3'], 'L5': [], 'L2': ['P9'], 'L3': ['P4'], 'L1': []}
</code></pre>
<h2>Ambiguity</h2>
<p>There is ambiguity as to how you want to handle important edge cases:</p>
<ul>
<li>what if topics have no interest from students?</li>
<li>what if a topic only has one interested student?</li>
<li>students may have to rank all possible topics to ensure a solution exists?</li>
<li>should supervisors have preference for topics as well (if so whose preference takes precedence?)</li>
<li>should supervisor assignment to topics be load balanced (solutions with all supervisors having similar amount of work being preferred)?</li>
</ul>
<p>Answers to these specific questions that disambiguate are very important and will shape the type of solution you craft (as well as letting you communicate to users of your algorithm what exactly is optimized).</p>
<p>I definitely recommend you spend more time disambiguating your problem.</p>
<h2>Solution existence</h2>
<p>The sequential bipartite matching algorithm presented here will find optimal solutions; however, it may not find <em>a</em> solution even if one exists.</p>
<p>This can happen if the solution of the first matching produces a set of projects for which there is no assignment of supervisors.</p>
<p>One possible way to address this is to systematically search through subsets of possible projects until a solution exists (see pruning below.)</p>
<h2>Pruning solutions</h2>
<p>If some assignments of students to topics are unfavorable, an easy way to prevent that solution from being possible is to set the weight of the student-topic assignment very high (infinity). This gives a structured way to prune unwanted pairings:</p>
<ol>
<li>Solve the weighted bipartite matching</li>
<li>Identify undesirable student-topic pairing</li>
<li>Set the weight to infinity or remove the edge for that student-topic pairing, then re-solve (see the sketch below).</li>
</ol>
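<p>For concreteness, a minimal sketch of step 3 (assuming the graph <code>G</code> built in <code>get_student_assignments_to_topics</code> is still in scope, and that the pairing <code>('S1', 'P4')</code> is the one to forbid; both names are purely illustrative):</p>
<pre><code># forbid an undesirable student-topic pairing and re-solve
if G.has_edge('S1', 'P4'):
    G.remove_edge('S1', 'P4')      # or: G['S1']['P4']['weight'] = 10**9
flow_dict = nx.min_cost_flow(G)    # re-run the matching without that edge
</code></pre>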
<h2>Efficiency</h2>
<p>Here python was used with networkx to optimize prototyping ability <em>not</em> efficiency. If you wish to scale this solution to large problem sizes, I would recommend the lemon MCF library (in particular the cost scaling <a href="https://arxiv.org/pdf/1207.6381.pdf" rel="nofollow noreferrer">MCF algorithm</a>) or Andrew V Goldberg's original cost-scaling MCF algorithm <a href="https://github.com/iveney/cs2" rel="nofollow noreferrer">implementation</a>.</p>
<p>In my experience benchmarking MCF, these are the two most competitive implementations. I don't have experience with Google-OR's implementation of MCF.</p> | 2020-07-14 20:10:07.290000+00:00 | 2020-07-27 17:10:24.873000+00:00 | 2020-07-27 17:10:24.873000+00:00 | null | 62,755,778 | <p>I am working on a project for a non-profit organization where they are trying to help students with special needs to match to different project topics. Each student will have four preferences and a set of supervisors will have their list of preferences on the topics they supervise.</p>
<p>The solution I look for is an algorithm which can find an optimum solution to match students to project topics and supervisors.</p>
<p>I have done extensive reading on SPA, HR and other Greedy Algorithms and even tried a flavour of Genetic algorithm. So far I have nothing but stress.</p>
<p>Here is the flow of the program.</p>
<ol>
<li>We have a pool of topics for supervisors to show their interest. Supervisors can pick topics where they like to supervise and they can also suggest a topic and decide how many project groups they would like to supervise.</li>
</ol>
<p><code>P1, P2, P3, P4, P5 ...... Pn ... SP1, SP2, SP3 .... SPn</code></p>
<p>In the above list, <code>P1 ... Pn</code> are existing topics and <code>SP1...SPn</code> are suggested topics.</p>
<p>Let's say after this round, we have a list of supervisors with the following preference.</p>
<pre><code>supervisor | Topics of Interest | No. Of Groups
L1 | P1, P3, P4 | 2
L2 | P5, P2, P9 | 1
L3 | P1, P3, P4 | 1
L4 | P1, P3, P4 | 4
L5 | SP1, P3, P8 | 3
L6 | P32, P3, P40 | 3
</code></pre>
<p>After the above round, we know that there are only supervisors who can Supervise students on the following topics.</p>
<p><code>P1, P2, P3, P4, P8, P9, P32, P40, SP1</code></p>
<ol start="2">
<li>When we open the topics for the students, they will only be able to pick projects from the above list, with their preference/priority. For example</li>
</ol>
<pre><code>student | Pref1 | Pref 2 | Pref 3 | Pref 4 |
S1 | P4 | P1 | SP1 | P5 |
S2 | P1 | P9 | SP1 | P5 |
S3 | P3 | P1 | P2 | P5 |
S4 | P4 | P1 | P40 | P5 |
S5 | P4 | P32 | SP1 | P5 |
...
Sn | P9 | P1 | SP1 | P5 |
</code></pre>
<p>Now, once the students pick their preferences, we will then decide a number <code>MAX_GROUP_SIZE</code> and we will run our algorithm to group these students into partitions where we will</p>
<p>a. Group students with similar interests into the same group (e.g. we add students who picked P1 as their <code>pref1</code> and fill in the rest with <code>pref2, pref3, pref4</code> when they don't have groups for their first choices).
b. Assign a supervisor to a group where he has shown interest in the project (ideally, every student's first preference or best-matched project).
c. We need to make sure that we don't overload the supervisor: if a supervisor has shown interest in <code>P1, P2, P3</code> and mentioned that he can only supervise <code>2</code> projects, then we should only add him to <code>2</code> projects.</p>
<p>So far, I have been trying the above-explained algorithms and I still don't think I have a justified solution for the students. I prefer a solution which is more biased towards the students since they have special needs. If anyone can point me in the right direction or can provide me with a well-defined algorithm or an implementation I would not only appreciate the effort but I would buy you a coffee as well.</p> | 2020-07-06 12:09:25.053000+00:00 | 2020-11-12 15:01:28.780000+00:00 | 2020-07-15 17:00:21.270000+00:00 | algorithm|data-structures|genetic-algorithm|genetic-programming | ['https://arxiv.org/pdf/1207.6381.pdf', 'https://github.com/iveney/cs2'] | 2 |
59,563,923 | <p>Measurements of audio quality or aesthetics are done both with and without machine learning.
However, most of the work focuses on speech reproduction, much less on general audio.</p>
<p>One can conduct listening tests, where a panel of human assessors listens to audio clips and gives scores,
to establish a Mean Opinion Score (MOS).
There exist several standards for conducting these, such as <a href="https://en.wikipedia.org/wiki/MUSHRA" rel="nofollow noreferrer">MUSHRA</a>.
Such subjective scores form the basis of developing "objective metrics", which are algorithmic ways to estimate qualities of the audio.
Some early examples are <a href="https://en.wikipedia.org/wiki/PESQ" rel="nofollow noreferrer">PESQ</a> for Speech Quality (ITU standard since 2001)
and <a href="https://en.wikipedia.org/wiki/PEAQ" rel="nofollow noreferrer">PEAQ</a> for Audio Quality (ITU standard since 1998).
More advanced ones include <a href="http://www.polqa.info" rel="nofollow noreferrer">POLQA</a> (ITU standard since 2011) and <a href="https://asa.scitation.org/doi/full/10.1121/1.4921674?TRACK=RSS" rel="nofollow noreferrer">ViSQOLAudio</a> (proposed in research).</p>
<p>In recent years, several papers have shown that one can learn such metrics using deep neural networks.
For speech quality, one recent paper (2019) is
<a href="https://www.microsoft.com/en-us/research/publication/intrusive-and-non-intrusive-perceptual-speech-quality-assessment-using-a-convolutional-neural-network/" rel="nofollow noreferrer">Intrusive and Non-Intrusive Perceptual Speech Quality Assessment Using a Convolutional Neural Network</a>.</p>
<p>The only learned evaluation I have found for general Audio Quality or music quality is <a href="https://arxiv.org/abs/1812.08466" rel="nofollow noreferrer">Fréchet Audio Distance</a>.</p> | 2020-01-02 13:08:31.247000+00:00 | 2020-01-09 20:09:58.847000+00:00 | 2020-01-09 20:09:58.847000+00:00 | null | 58,887,600 | <p>Is there any way to measure the <strong>quality</strong> and <strong>appeal/aesthetic</strong> of an audio clip? The quality quantifies how good the sound is, ie., the lower the noise the better the quality is. Whereas the appeal/aesthetic measures how appealing the sound is to the human. There exists some work for image quality and aesthetic assessment like <a href="https://ai.googleblog.com/2017/12/introducing-nima-neural-image-assessment.html" rel="nofollow noreferrer">NIMA</a>, but not for sound/audio. Any method or references will be helpful.</p> | 2019-11-16 04:33:48.717000+00:00 | 2020-01-09 20:09:58.847000+00:00 | null | deep-learning|signal-processing|audio-processing | ['https://en.wikipedia.org/wiki/MUSHRA', 'https://en.wikipedia.org/wiki/PESQ', 'https://en.wikipedia.org/wiki/PEAQ', 'http://www.polqa.info', 'https://asa.scitation.org/doi/full/10.1121/1.4921674?TRACK=RSS', 'https://www.microsoft.com/en-us/research/publication/intrusive-and-non-intrusive-perceptual-speech-quality-assessment-using-a-convolutional-neural-network/', 'https://arxiv.org/abs/1812.08466'] | 7 |
12,822,855 | <p>Using Newton-Raphson is an act of desperation. You are much better off finding the continued fraction that represents your function and calculating that. A CF will converge much faster and will produce the real root(s). Also, because the CF produces a ratio of two integers you have tight control over numeric precision and don't have to worry about accumulation of rounding errors and other similar hair-pulling-out problems.</p>
<p>To find the real roots of any polynomial function refer to "A Continued Fraction Algorithm for Approximating All Real Polynomial Roots" by David Rosen (1978).</p>
<p>------------ ADDENDUM 1 --- 11 OCT-----------------</p>
<p>Ok, you are solving a sextic. You have several options. The simplest is to use a Taylor approximation (say to the 3rd degree) in conjunction with Halley's method. This is much superior to Newton because it has cubic convergence and you can detect imaginary solutions. The disadvantage is that you will have rounding problems which may result in an incorrect answer.</p>
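<p>As an illustration of the Halley option, here is a rough sketch (the coefficients and starting point are placeholders, and no safeguards against division by zero are included):</p>
<pre><code>import numpy as np

def halley_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Halley's method: cubic convergence, needs f, f' and f''."""
    p   = np.poly1d(coeffs)
    dp  = p.deriv(1)
    d2p = p.deriv(2)
    x = x0
    for _ in range(max_iter):
        f, f1, f2 = p(x), dp(x), d2p(x)
        step = 2 * f * f1 / (2 * f1 * f1 - f * f2)
        x -= step
        if abs(step) < tol:
            break
    return x
</code></pre>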
<p>The ideal option is to find the continued fraction that represents the monic root, because this CF will be computable as an integer ratio to any desired precision, thus eliminating the problem of rounding. </p>
<p>One approach to computing this CF is via the Jacobi-Perron algorithm. See the paper Hendy and Jeans: <a href="http://www.ams.org/mcom/1981-36-154/S0025-5718-1981-0606514-X/S0025-5718-1981-0606514-X.pdf" rel="nofollow">http://www.ams.org/mcom/1981-36-154/S0025-5718-1981-0606514-X/S0025-5718-1981-0606514-X.pdf</a>. This paper shows the exact algorithm for computing cubic and quartic roots via CF approximation.</p>
<p>Note that if the sextic is reducible then it can be converted into a quartic and a quadratic: <a href="http://elib.mi.sanu.ac.rs/files/journals/tm/21/tm1124.pdf" rel="nofollow">http://elib.mi.sanu.ac.rs/files/journals/tm/21/tm1124.pdf</a>. The quartic is then solvable by the algorithm in the Hendy paper.</p>
<p>The general solution to generate a CF for a sextic can be done via the Rogers-Ramanujan CF. See the following paper for the method: <a href="http://arxiv.org/pdf/1111.6023v2" rel="nofollow">http://arxiv.org/pdf/1111.6023v2</a>. This will generate the CF for any sextic.</p> | 2012-10-10 15:20:30.030000+00:00 | 2012-10-11 17:57:59.910000+00:00 | 2012-10-11 17:57:59.910000+00:00 | null | 12,822,141 | <p>I am trying to implement a root finding algorithm. I am using the hybrid Newton-Raphson algorithm found in <em>Numerical Recipes</em>, which works pretty nicely. But I have a problem in bracketing the root.</p>
<p>While implementing the root finding algorithm I realised that in several cases my functions have 1 real root and all the others imaginary (several of them, usually 6 or 9). The only root I am interested in is the real one, so that part is not the problem. The thing is that the function approaches the root like a cubic function, just touching the y=0 axis at that point...</p>
<p>The Newton-Raphson method needs brackets of different sign, and all the bracketing methods I found don't work for this specific case.</p>
<p>What can I do? It is pretty important to find that root in my program...</p>
<p>EDIT: more problems: sometimes, due to reaaaaaally small numerical errors, say a variation of <code>1e-6</code> in some value, the "cubic" function does NOT have that real root; it is just imaginary with a negligible imaginary part... (checked with matlab)</p>
<p><strong>EDIT 2:</strong> Much more information about the problem.</p>
<p>Ok, I need a root-finding algorithm.</p>
<p>Info I have:</p>
<ul>
<li>The root I need to find is between [0-1] , if there are more roots outside that part I am not interested in them.</li>
<li>The root is real, there may be imaginary roots, but I don't want them.</li>
<li>Probably all the rest of the roots will be imaginary</li>
<li>The root may be a double root at that point, but I think that actually doesn't matter in numerical analysis problems</li>
<li>I need to use the root finding algorithm several times during the overall calculations, but the function will always be a polynomial</li>
<li>In one of the particular cases of the root finding, my polynomial will be similar to a quadratic function that just touches Y=0 at a single point. Example of a real case:
<img src="https://i.stack.imgur.com/kLu9I.jpg" alt="enter image description here" /></li>
<li>The coefficients may not be 100% precise, and that really slight imprecision may make the function not touch the Y=0 axis.</li>
<li>I cannot solve for this specific case because in other cases it may be that the polynomial is pretty normal and doesn't make any "strange" thing.</li>
<li>The method I am actually using is NewtonRaphson hybrid, where if the derivative is really small it makes a bisection instead of NewRaph (found in <em>numerical recipes</em>).</li>
</ul>
<p>Matlab's answer to the function on the image:
roots:</p>
<pre><code>0.853553390593276 + 0.353553390593278i
0.853553390593276 - 0.353553390593278i
0.146446609406726 + 0.353553390593273i
0.146446609406726 - 0.353553390593273i
0.499999999999996 + 0.000000040142134i
0.499999999999996 - 0.000000040142134i
</code></pre>
<p>The function is a real example I prepared where I know that the answer I want is <code>0.5</code></p>
<p>Note:
I still haven't completely checked some of the answers you people have given me (thank you!); I am just trying to give all the information I already have to complete the question.</p> | 2012-10-10 14:43:31.283000+00:00 | 2020-09-27 03:07:02.880000+00:00 | 2020-06-20 09:12:55.060000+00:00 | c++|algorithm|math | ['http://www.ams.org/mcom/1981-36-154/S0025-5718-1981-0606514-X/S0025-5718-1981-0606514-X.pdf', 'http://elib.mi.sanu.ac.rs/files/journals/tm/21/tm1124.pdf', 'http://arxiv.org/pdf/1111.6023v2'] | 3
71,557,261 | <p>After a bit of research on the source code provided in the link, I was able to figure out how <code>hidden_size</code> is the main hyperparameter of the model. Here it is:</p>
<p><code>hidden_size</code> indeed describes the number of neurons of each Dense layer of the GRN. You can check out the structure of the GRN at <a href="https://arxiv.org/pdf/1912.09363.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1912.09363.pdf</a> (page 6, Figure 2). Note that since the final layer of the GRN is just a normalization layer, the output of the GRN also has dimension <code>hidden_size</code>.</p>
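<p>For intuition, a rough PyTorch-style sketch of a GRN in which every internal <code>Linear</code> layer has <code>hidden_size</code> units (this is only my simplified paraphrase of Figure 2 of the paper, with the optional context input and dropout omitted; it is not the package's actual code):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGRN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.fc1  = nn.Linear(input_size, hidden_size)    # first dense layer
        self.fc2  = nn.Linear(hidden_size, hidden_size)   # second dense layer
        self.gate = nn.Linear(hidden_size, hidden_size)   # GLU-style gate
        self.skip = nn.Linear(input_size, hidden_size)    # residual projection
        self.norm = nn.LayerNorm(hidden_size)
    def forward(self, x):
        h = F.elu(self.fc1(x))
        h = self.fc2(h)
        h = torch.sigmoid(self.gate(h)) * h               # gated output
        return self.norm(h + self.skip(x))                # residual + LayerNorm, size = hidden_size
</code></pre>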
<p>How is this the main hyperparameter of the model? By looking at the structure of the TFT model (on page 6 as well), the GRN unit appears in the Variable Selection process, in the Static Enrichment section and in the Position-wise Feed Forward section, so basically in every step of the learning process. Each one of these GRNs is built in the same way (only the input size varies).</p> | 2022-03-21 11:58:03.113000+00:00 | 2022-03-21 14:01:58.943000+00:00 | 2022-03-21 14:01:58.943000+00:00 | null | 71,555,080 | <p>The Temporal-Fusion-Transformer (TFT) model in the PytorchForecasting package has several parameters (see: <a href="https://pytorch-forecasting.readthedocs.io/en/latest/_modules/pytorch_forecasting/models/temporal_fusion_transformer.html#TemporalFusionTransformer" rel="nofollow noreferrer">https://pytorch-forecasting.readthedocs.io/en/latest/_modules/pytorch_forecasting/models/temporal_fusion_transformer.html#TemporalFusionTransformer</a>).</p>
<p>What does the <code>hidden_size</code> parameter exactly refer to? My best guess is that it refers to the number of neurons contained in the GRN component of the TFT. If so, in which layer are these neurons contained?</p>
<p>I found the documentation not really helpful in this case, since they describe the <code>hidden_size</code> parameter as: "hidden size of network which is its main hyperparameter and can range from 8 to 512"</p>
<p>Side note: part of my ignorance might be due to the fact that I am not fully familiar with the individual components of the TFT model.</p> | 2022-03-21 09:06:16.533000+00:00 | 2022-03-21 14:57:06.693000+00:00 | 2022-03-21 14:57:06.693000+00:00 | deep-learning|time-series|forecasting|pytorch-forecasting | ['https://arxiv.org/pdf/1912.09363.pdf'] | 1 |
47,679,004 | <p>In the meantime I figured out how to use the 2 APIs together. The trick is to pass a 5D-Tensor as input to tf.nn.dynamic_rnn(), where the last dimension is the size of the "vector on the spatial grid" (which comes from the transformation of the input from 2D to 3D, inspired by the paper on which the implementation is based: <a href="https://arxiv.org/pdf/1506.04214.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.04214.pdf</a>). In my case the vector size is 1, I have to expand the dimension anyway though.</p>
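<p>Concretely, the fix looks roughly like this (a hedged sketch using the same shapes as in the question; the only change is the trailing singleton channel dimension):</p>
<pre><code># features: [batch_size, 10, 600, 400]  ->  [batch_size, 10, 600, 400, 1]
features = tf.expand_dims(features, -1)
encoder_cell = tf.contrib.rnn.ConvLSTMCell(conv_ndims=2,
                                           input_shape=[600, 400, 1],
                                           output_channels=5,
                                           kernel_shape=[7, 7],
                                           skip_connection=False)
_, encoder_state = tf.nn.dynamic_rnn(cell=encoder_cell,
                                     inputs=features,
                                     dtype=tf.float32)
</code></pre>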
<p>While fixing this error another issue emerged: In the paper mentioned above in section 3.1 they state the equations for the convLSTM. They use the Hadamard-product for weights connected to the cell outputs C. Printing the weights of my ConvLSTMCell in Tensorflow, it seems like they don't use the weights Wci, Wcf and Wco at all. So, can anybody tell me the exact implementation of the TF ConvLSTMCell?</p>
<p>Btw., is the output of the tensorflow ConvLSTMCell C or H (in the notation of the paper)?</p> | 2017-12-06 16:25:35.900000+00:00 | 2017-12-06 16:25:35.900000+00:00 | null | null | 47,459,225 | <p>I'm trying to build a seq2seq model in tensorflow (1.4) using the tf.contrib.rnn.ConvLSTMCell API together with the tf.nn.dynamic_rnn API, but I got an error with the dimension of the inputs.</p>
<p>My code is:</p>
<pre><code># features is an image sequence with shape [600, 400, 10],
# so features is a tensor with shape [batch_size, 600, 400, 10]
features = tf.transpose(features, [0,3,1,2])
features = tf.reshape(features, [params['batch_size'],10,600,400])
encoder_cell = tf.contrib.rnn.ConvLSTMCell(conv_ndims=2,
input_shape=[600, 400,1],
output_channels=5,
kernel_shape=[7,7],
skip_connection=False)
_, encoder_state = tf.nn.dynamic_rnn(cell=encoder_cell,
inputs=features,
sequence_length=[10]*params['batch_size'],
dtype=tf.float32)
</code></pre>
<p>I get the following error </p>
<pre><code>ValueError: Conv Linear expects all args to be of same Dimension: [[2, 600, 400], [2, 600, 400, 5]]
</code></pre>
<p>Looking at the tf implementation, it seems that the inputs to dynamic_rnn is only 3-dimensional in contrary to the hidden state, which is 4-dimensional. I tried to pass the input as a nested tuple, but it didn't work.</p>
<p>The problem is similar to <a href="https://stackoverflow.com/questions/42513613/tensorflow-dynamic-rnn-regressor-valueerror-dimension-mismatch">TensorFlow dynamic_rnn regressor: ValueError dimension mismatch</a>, it's slightly different though, as they're using a plain LSTMCell (which worked for me). </p>
<p>Can anyone give me a minimal example how to use these 2 APIs together?
Thanks!</p> | 2017-11-23 15:48:50.927000+00:00 | 2017-12-06 16:25:35.900000+00:00 | null | python|tensorflow|neural-network|conv-neural-network|rnn | ['https://arxiv.org/pdf/1506.04214.pdf'] | 1 |
64,964,968 | <p>Resnet18 from <code>torchvision.models</code> is an ImageNet implementation. Because ImageNet samples are much bigger (224x224) than CIFAR10/100 samples (32x32), the first layers are designed to aggressively downsample the input (the 'stem' of the network). This leads to losing much valuable information on small CIFAR10/100 images.</p>
<p>To achieve good accuracy on CIFAR10, the authors use a different network structure, as described in the original paper:
<a href="https://arxiv.org/pdf/1512.03385.pdf" rel="noreferrer">https://arxiv.org/pdf/1512.03385.pdf</a>
and explained in this article:
<a href="https://towardsdatascience.com/resnets-for-cifar-10-e63e900524e0" rel="noreferrer">https://towardsdatascience.com/resnets-for-cifar-10-e63e900524e0</a></p>
<p>You can download a ResNet for CIFAR10 from this repo: <a href="https://github.com/akamaster/pytorch_resnet_cifar10" rel="noreferrer">https://github.com/akamaster/pytorch_resnet_cifar10</a></p> | 2020-11-23 08:37:00.497000+00:00 | 2020-11-27 13:35:45.500000+00:00 | 2020-11-27 13:35:45.500000+00:00 | null | 63,015,883 | <p>I'm training a resnet18 on the CIFAR100 dataset. After about 50 iterations the validation accuracy converged at about 34%, while the training accuracy reached almost 100%.</p>
<p>I suspect it's overfitting, so I applied data augmentation like <code>RandomHorizontalFlip</code> and <code>RandomRotation</code>, which made the validation accuracy converge at about 40%.</p>
<p>I also tried decaying the learning rate <code>[0.1, 0.03, 0.01, 0.003, 0.001]</code>, decaying after every 50 iterations. Decaying the learning rate did not seem to improve performance.</p>
<p>I have heard that ResNet on CIFAR100 may get 70%~80% accuracy. What other tricks could I apply? Or is there anything wrong in my implementation? The same code on CIFAR10 can achieve about 80% accuracy.</p>
<p>My whole training and evaluation code is here below:</p>
<pre><code>import torch
from torch import nn
from torch import optim
from torch.utils.data import DataLoader
from torchvision.models import resnet18
from torchvision.transforms import Compose, ToTensor, RandomHorizontalFlip, RandomRotation, Normalize
from torchvision.datasets import CIFAR10, CIFAR100
import os
from datetime import datetime
import matplotlib.pyplot as plt
def draw_loss_curve(histories, legends, save_dir):
os.makedirs(save_dir, exist_ok=True)
for key in histories[0][0].keys():
if key != "epoch":
plt.figure()
plt.title(key)
for history in histories:
x = [h["epoch"] for h in history]
y = [h[key] for h in history]
# plt.ylim(ymin=0, ymax=3.0)
plt.plot(x, y)
plt.legend(legends)
plt.savefig(os.path.join(save_dir, key + ".png"))
def cal_acc(out, label):
batch_size = label.shape[0]
pred = torch.argmax(out, dim=1)
num_true = torch.nonzero(pred == label).shape[0]
acc = num_true / batch_size
return torch.tensor(acc)
class LrManager(optim.lr_scheduler.LambdaLR):
def __init__(self, optimizer, lrs):
def f(epoch):
rate = 1
for k in sorted(lrs.keys()):
if epoch >= k:
rate = lrs[k]
else:
break
return rate
super(LrManager, self).__init__(optimizer, f)
def main(cifar=100, epochs=250, batches_show=100):
if torch.cuda.is_available():
device = "cuda"
else:
device = "cpu"
print("warning: CUDA is not available, using CPU instead")
dataset_cls = CIFAR10 if cifar == 10 else CIFAR100
dataset_train = dataset_cls(root=f"data/{dataset_cls.__name__}/", download=True, train=True,
transform=Compose([RandomHorizontalFlip(), RandomRotation(15), ToTensor(), Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))]))
dataset_val = dataset_cls(root=f"data/{dataset_cls.__name__}/", download=True, train=False,
transform=Compose([ToTensor(), Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))]))
loader_train = DataLoader(dataset_train, batch_size=128, shuffle=True)
loader_val = DataLoader(dataset_val, batch_size=128, shuffle=True)
model = resnet18(pretrained=False, num_classes=cifar).to(device)
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=1e-5)
lr_scheduler = LrManager(optimizer, {0: 1.0, 50: 0.3, 100: 0.1, 150: 0.03, 200: 0.01})
criterion = nn.CrossEntropyLoss()
history = []
model.train()
for epoch in range(epochs):
print("------------------- TRAINING -------------------")
loss_train = 0.0
running_loss = 0.0
acc_train = 0.0
running_acc = 0.0
for batch, data in enumerate(loader_train, 1):
img, label = data[0].to(device), data[1].to(device)
optimizer.zero_grad()
pred = model(img)
loss = criterion(pred, label)
loss.backward()
optimizer.step()
running_loss += loss.item()
loss_train += loss.item()
acc = cal_acc(pred, label)
running_acc += acc.item()
acc_train += acc.item()
if batch % batches_show == 0:
print(f"epoch: {epoch}, batch: {batch}, loss: {running_loss/batches_show:.4f}, acc: {running_acc/batches_show:.4f}")
running_loss = 0.0
running_acc = 0.0
loss_train = loss_train / batch
acc_train = acc_train / batch
lr_scheduler.step()
print("------------------- EVALUATING -------------------")
with torch.no_grad():
running_acc = 0.0
for batch, data in enumerate(loader_val, 1):
img, label = data[0].to(device), data[1].to(device)
pred = model(img)
acc = cal_acc(pred, label)
running_acc += acc.item()
acc_val = running_acc / batch
print(f"epoch: {epoch}, acc_val: {acc_val:.4f}")
history.append({"epoch": epoch, "loss_train": loss_train, "acc_train": acc_train, "acc_val": acc_val})
draw_loss_curve([history], legends=[f"resnet18-CIFAR{cifar}"], save_dir=f"history/resnet18-CIFAR{cifar}[{datetime.now()}]")
if __name__ == '__main__':
main()
</code></pre> | 2020-07-21 13:43:35.163000+00:00 | 2020-11-27 13:35:45.500000+00:00 | null | python|pytorch | ['https://arxiv.org/pdf/1512.03385.pdf', 'https://towardsdatascience.com/resnets-for-cifar-10-e63e900524e0', 'https://github.com/akamaster/pytorch_resnet_cifar10'] | 3 |
63,878,067 | <p>Through continued research I stumbled across the <code>rosetta</code> package, which contains the function <code>gemm()</code>, which can create this model with the following code:</p>
<pre><code>result <- gemm(data = mydata,
xvar = "Bullying_c",
mvar = "SelfEsteem_c",
yvar = "Dissatisfaction",
xmmod = "YearLevel",
nboot = 5000)
print(result)
</code></pre>
<p>This function also allows for covariates. The <code>gemm()</code> function is clearly explained on the following pages:</p>
<p>Details about the function:<br />
<a href="https://rdrr.io/github/psytext/rosetta/man/gemm.html" rel="nofollow noreferrer">https://rdrr.io/github/psytext/rosetta/man/gemm.html</a></p>
<p>Link to PDF download for a tutorial running these analysis and interpreting results:
<a href="https://psyarxiv.com/mj2ug/download" rel="nofollow noreferrer">https://psyarxiv.com/mj2ug/download</a></p> | 2020-09-14 04:22:05.387000+00:00 | 2020-09-14 04:22:05.387000+00:00 | null | null | 63,869,065 | <p>I have data collected from a survey that i want to build a moderated mediation model with. My four variables are:</p>
<ul>
<li><code>Bullying</code> = Continuous, average response to several Likert Scale Questions from an individual, range 1-5 (IV)</li>
<li><code>Self-Esteem</code> = Continuous, calculated as above (Mediator)</li>
<li><code>Dissatisfaction</code> = Continuous, calculated as above (DV)</li>
<li><code>Year Level</code> = Discrete Ordinal, range from 1-10 (Moderator)</li>
</ul>
<p>I have already calculated and found that <code>Self-Esteem</code> is a simple Mediator of the effect <code>Bullying -> Dissatisfaction</code>. I now want to see if <code>Year level</code> is a Moderator of this Mediation model, however the only guides I can find rely on splitting the Moderator into two dichotomous groups, which I don't want to do. I think I have found that <code>Year Level</code> is not a simple moderator for the <code>Bullying -> Dissatisfaction</code> effect through the code:</p>
<pre><code>Bullying_c <- c(scale(Bullying, center = TRUE, scale = FALSE))
SelfEsteem_c <- c(scale(SelfEsteem, center = TRUE, scale = FALSE))
fitMod <- lm(Dissatisfaction ~ Bullying_c + SelfEsteem_c + Bullying_c*SelfEsteem_c)
fitModB <- Boot(fitMod, R = 1000)
summary (fitModB)
</code></pre>
<p>Which outputs:</p>
<pre><code>Residuals:
Min 1Q Median 3Q Max
-1.5143 -0.6560 -0.2014 0.5426 3.0809
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.853848 0.032137 57.686 < 2e-16 ***
Bullying_c 0.203157 0.027922 7.276 8.5e-13 ***
SelfEsteem_c -0.036391 0.015824 -2.300 0.0217 *
Bullying_c:SelfEsteem_c 0.007795 0.012430 0.627 0.5308
</code></pre>
<p>Which shows that <code>Self-esteem</code> is not an overall moderator, hence I'm looking at Moderated Mediation, not Mediated Moderation.</p>
<p>I've tried looking up the packages: <code>psych</code> , <code>mediation</code>, and <code>lavaan</code>, but I've been unable to find a way to run the analysis with an ordinal Moderator. Most guides want me to choose two values of the moderator, but I want to include all 10 classes.</p> | 2020-09-13 09:14:19.907000+00:00 | 2020-09-14 04:22:05.387000+00:00 | null | r|statistics | ['https://rdrr.io/github/psytext/rosetta/man/gemm.html', 'https://psyarxiv.com/mj2ug/download'] | 2 |
47,872,304 | <p>The checkpoint files contain additional information that is useful during training but is not necessary for inference. For example, the <a href="https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer" rel="nofollow noreferrer"><code>tf.train.AdamOptimizer</code></a>, which implements the <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">Adam optimization algorithm</a>, stores two additional "moment vectors" for each variable being optimized, which means that the training state is approximately <strong>three times larger</strong> than the variables being optimized. These moment vectors are not needed when you perform inference, so the frozen graph can be much smaller.</p> | 2017-12-18 16:12:41.900000+00:00 | 2017-12-18 16:12:41.900000+00:00 | null | null | 47,870,314 | <p>After Saving my Tensorflow model the following files were generated:</p>
<pre><code>checkpoint
input_graph.pb
tmp.ckpt.data-00000-of-00001
tmp.ckpt.index
tmp.ckpt.meta
</code></pre>
<p>I generated output_graph.pb using freeze_graph.py feeding it the inputs:</p>
<pre><code>freeze_graph.py --input_graph=graph.pb --input_checkpoint=tmp.ckpt --output_graph=frozen_graph.pb --output_node_names="Dense2/output_node"
</code></pre>
<p>The size of output_graph.pb is around 615 kb whereas the size of tmp.ckpt.data-00000-of-00001 is around 1.5 MB. This is my tensorflow model:</p>
<pre><code>X = tf.placeholder(tf.float32, [None,training_set.shape[1]],name = "input_node")
Y = tf.placeholder(tf.float32,[None,training_labels.shape[1]], name = 'Y')
with tf.name_scope('Dense1'):
W1 = tf.get_variable( "W1", [40, training_set.shape[1]], dtype=tf.float32, initializer = tf.contrib.layers.xavier_initializer(seed = 0) )
b1 = tf.get_variable( "b1", [40,1], dtype=tf.float32,initializer=tf.zeros_initializer() )
A1 = tf.add(tf.matmul( W1, tf.transpose( X ) ), b1 )
A1 = tf.nn.relu( A1 )
A1 = tf.nn.dropout( A1, 0.8, name="A1" )
with tf.name_scope('Dense2'):
W2 = tf.get_variable( "W2", [2, 40], dtype=tf.float32, initializer = tf.contrib.layers.xavier_initializer(seed = 0) )
b2 = tf.get_variable( "b2",[2,1], dtype=tf.float32, initializer=tf.zeros_initializer() )
A2 = tf.add( tf.matmul( W2, A1 ), b2 )
A2 = tf.transpose( A2, name="output_node" )
print("Initialsing cost")
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = A2, labels = Y))
print("Initialsing optimizer")
global_step = tf.Variable(0, trainable=False)
start_learning_rate = 0.001
learning_rate = tf.train.exponential_decay(start_learning_rate, global_step, 200, 0.1, True )
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
</code></pre>
<p>Any idea on what went wrong?. Also any suggestions which will help me analyze output_graph.pb will be greatly appreciated</p> | 2017-12-18 14:14:25.353000+00:00 | 2017-12-18 16:12:41.900000+00:00 | null | python|tensorflow|neural-network | ['https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer', 'https://arxiv.org/abs/1412.6980'] | 2 |
55,196,928 | <p>There is absolutely no general answer to this question; no principled method to determine these hyperparameters is known. A conventional approach is to look for similar problems and deep learning architectures which have already been shown to work. Then a suitable architecture can be developed by experimentation. However, conventional kernel sizes are 3x3, 5x5 and 7x7.</p>
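<p>As a starting point for such experimentation, here is an illustrative Keras sketch (the input shape, filter counts and number of classes are made up; the point is only that <code>kernel_size</code> is a knob you vary and compare on validation data):</p>
<pre><code>from tensorflow.keras import layers, models

def small_cnn(kernel_size=(3, 3)):
    # try kernel_size=(3, 3), (5, 5) or (7, 7) and compare validation accuracy
    return models.Sequential([
        layers.Conv2D(32, kernel_size, activation='relu', input_shape=(64, 64, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, kernel_size, activation='relu'),
        layers.GlobalAveragePooling2D(),
        layers.Dense(10, activation='softmax'),
    ])
</code></pre>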
<p>Otherwise, there are papers about this (<a href="https://arxiv.org/pdf/1606.02228v2.pdf" rel="nofollow noreferrer">1</a> and <a href="https://github.com/ducha-aiki/caffenet-benchmark" rel="nofollow noreferrer">2</a>); you may want to take a look to see the art of choosing hyperparameters in CNNs.</p> | 2019-03-16 12:44:20.340000+00:00 | 2019-03-16 12:44:20.340000+00:00 | null | null | 55,193,099 | <p>Before being fed to the neural network, there are kernels applied to images for feature extraction. But how do we understand that a particular kernel will help to extract the required feature for the neural network?</p>
33,906,606 | <p>I think you are on the right track. What you need is just a lexicon to determine whether a given word is a known or an unknown word. <a href="http://rdrpostagger.sourceforge.net/" rel="nofollow">RDRPOSTagger</a> provides a piece of code to compute tagging accuracies for known words and unknown words. See the function <code>computeAccuracies(lexicon, goldCorpus, taggedCorpus)</code> in the <code>Eval.py</code> module in the <code>Utility</code> package. </p>
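<p>If you prefer to compute it yourself, a minimal sketch (assuming you have a <code>lexicon</code> set of known words and a list of <code>(word, gold_tag, predicted_tag)</code> triples; the names are illustrative):</p>
<pre><code>def known_unknown_accuracy(triples, lexicon):
    counts = {'known': [0, 0], 'unknown': [0, 0]}   # [correct, total]
    for word, gold, pred in triples:
        key = 'known' if word in lexicon else 'unknown'
        counts[key][0] += int(gold == pred)
        counts[key][1] += 1
    return {k: (c / t if t else 0.0) for k, (c, t) in counts.items()}
</code></pre>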
<p>You might want to look at <a href="http://arxiv.org/abs/1412.4021" rel="nofollow">this paper</a> which presents tagging results (for known words and unknown words) of 3 POS and morphological taggers on 13 languages including Bulgarian, Czech, Dutch, English, French, German, Hindi, Italian, Portuguese, Spanish, Swedish, Thai and Vietnamese.</p> | 2015-11-25 00:42:58.477000+00:00 | 2015-11-25 00:42:58.477000+00:00 | null | null | 27,728,001 | <p>How do I calculate the accuracy of known and unknown words in part of speech tagging? For example for known words, is it dividing the correctly tagged known words by all the known words ? Any other ways ?</p> | 2015-01-01 03:00:18.390000+00:00 | 2015-11-25 00:42:58.477000+00:00 | null | nlp|part-of-speech | ['http://rdrpostagger.sourceforge.net/', 'http://arxiv.org/abs/1412.4021'] | 2 |
14,153,318 | <p>It sounds like a reasonably simple problem where you just generate 1 parameter at a time, possibly based on the output of the previous variables.</p>
<p>My model of a flower will be: It has just a reasonably upright stem, a perfectly round center, some amount of leaves on the stem on alternating sides, petals perfectly distributed around the center.</p>
<p><code>random()</code> is just a random number within some chosen bounds, the bounds may be unique for each variable. <code>random(x1, x2, ..., xn)</code> generates a random number within some bounds dependent on the variables x1, x2, ..., xn (as in stemWidth < stemHeight/2, a reasonable assumption).</p>
<p><strong>The Stem</strong></p>
<pre><code>stemXPosition = width / 2
stemHeight = random()
stemWidth = random(stemHeight)
stemColour = randomColour()
stemWidthVariationMax = random(stemWidth, stemHeight)
stemWidthVariationPerPixel = random(stemWidth, stemHeight)
</code></pre>
<p><code>stemWidthVariationMax</code>/<code>-PerPixel</code> are for generating a stem that isn't perfectly straight (if you want to do something that complicated, a low <code>PerPixel</code> is for smoothness). Generate the stem using these as follows:</p>
<pre><code>pixelRelative[y-position][0] := left x-position at that y-position relative to the stem
pixelRelative[y-position][1] := right x-position at that y-position relative to the stem
pixelRelative[0][0] = randomInRange(-stemWidthVariationMax, stemWidthVariationMax)
for each y > 0:
pixelRelative[y-1][0] = max(min(randomInRange(pixel[y] - stemWidthVariationPerPixel,
pixel[y] + stemWidthVariationPerPixel),
-stemWidthVariationMax),
stemWidthVariationMax)
//pixelRelative[0][1] and pixelRelative[y-1][1] generated same as pixelRelative[y-1][i]
for each y:
pixelAbsolute[y][0] = width / 2 - stemWidth / 2 + pixelRelative[y][0]
pixelAbsolute[y][1] = width / 2 + stemWidth / 2 + pixelRelative[y][1]
</code></pre>
<p>You can also use arcs to simplify things and go more than 1 pixel at a time.</p>
<p><strong>The Top</strong></p>
<pre><code>centerRadius = random(stemHeight)
petalCount = random() // probably >= 3
petalSize = random(centerRadius, petalCount)
</code></pre>
<p>It's not too easy to generate the petals, you need to step from 0 to 2*PI with step-size of <code>2*PI/petalCount</code> and generate arcs around the circle. It requires either a good graphics API or some decent maths.</p>
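<p>A small Python-style sketch of that stepping, just to make the idea concrete (consistent with the pseudocode variables above):</p>
<pre><code>import math

def petal_centres(cx, cy, centerRadius, petalSize, petalCount):
    centres = []
    for i in range(petalCount):
        angle = 2 * math.pi * i / petalCount
        x = cx + (centerRadius + petalSize / 2) * math.cos(angle)
        y = cy + (centerRadius + petalSize / 2) * math.sin(angle)
        centres.append((x, y))   # draw an arc/ellipse of size petalSize at each centre
    return centres
</code></pre>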
<p><a href="http://golancourses.net/2010spring/02/22/flower-generation/" rel="noreferrer">Here's</a> some nicely generated tops of flowers, though seemingly not open-source. Note that they don't have a center at all. (or centerRadius = 0)</p>
<p><strong>The Leaves</strong></p>
<p>You could probably write an entire paper on this, (like <a href="http://arxiv.org/abs/1004.4388" rel="noreferrer">this one</a>) but a simple idea would just be to generate a 1/2 circle and extend lines outward from there to meet at 2*the radius of the circle and to draw parallel lines on the flower.</p>
<p>Once you have a leaf generation algorithm:</p>
<pre><code>leafSize = random(stemHeight) // either all leaves are the same size or generate the size for each randomly
leafStemLength = random(leafSize) // either all leaves have the same stem length or generate for each randomly
leafStemWidth = random(leafStemLength)
leaf[0].YPosition = random(stemHeight)
leaf[0].XSide = randomly either left or right
leaf[0].rotation = random between say 0 and 80 degrees
for each leaf i:
leaf[i].YPosition = random(stemHeight, leaf[i-1]) // only generate new leaves above previous leaves
leaf[i].XSide = opposite of leaf[i].XSide
</code></pre>
<p><strong>Last words</strong></p>
<p>The way to determine the bounds of each <code>random</code> would be either to argue it out, or give it some fixed value, generate everything else randomly a few times, keep increasing / decreasing it until it starts to look weird.</p>
<p>10 x 10 versus 500 x 500 would probably require greatly different algorithms, I wouldn't recommend the above for below 100 x 100, maybe generate a bigger image and simply shrink it using averaging or something.</p>
<p><strong>Code</strong></p>
<p>I started writing some Java code, when I realised it may take a bit longer than I would like to spend on this, so I'll show you what I have so far.</p>
<pre><code> // some other code, including these functions to generate random numbers:
float nextFloat(float rangeStart, float rangeEnd);
int nextInt(int rangeStart, int rangeEnd);
...
// generates a color somewhere between green and brown
Color stemColor = Color.getHSBColor(nextFloat(0.1, 0.2), nextFloat(0.5, 1), nextFloat(0.2, 0.8));
int stemHeight = nextInt(height/2, 3*height/4);
int stemWidth = nextInt(height/20, height/20 + height/5);
Color flowerColor = ??? // I just couldn't use the same method as above to generate bright colors, but I'm sure it's not too difficult
int flowerRadius = nextInt(Math.min(stemHeight, height - stemHeight)/4, 3*Math.min(stemHeight, height - stemHeight)/4);
</code></pre> | 2013-01-04 07:59:00.577000+00:00 | 2013-01-04 07:59:00.577000+00:00 | null | null | 14,116,410 | <p>Can anyone suggest any links, ideas or algorithms to generate flowers randomly like the one as my profile pic? The profile pic flower has only a 10 x 10 grid and the algorithm is not truly random. I would also prefer that the new algorithm use a grid of about 500 x 500 or even better, allow the user to pick the size of the grid.</p>
<p>[Plant[][] is declared as int plant[10][10];]</p>
<pre><code>public void generateSimpleSky(){
for(int w2=0;w2<10;w2++)
for(int w3=0;w3<10;w3++)
plant[w2][w3]=5;
}
public void generateSimpleSoil(){
for(int q=0;q<10;q++)
plant[q][9]=1;
}
public void generateSimpleStem(){
int ry=rand.nextInt(4);
plant[3+ry][8]=4;
xr=3+ry;
for(int u=7;u>1;u--){
int yu=rand.nextInt(3);
plant[xr-1+yu][u]=4;
xr=xr-1+yu;
}
}
public void generateSimpleFlower(){
plant[xr][2]=3;
for(int q2=1;q2<4;q2++)
if((2-q2)!=0)
plant[xr][q2]=2;
for(int q3=xr-1;q3<=xr+1;q3++)
if((xr-q3)!=0)
plant[q3][2]=2;
}
</code></pre> | 2013-01-02 03:00:39.300000+00:00 | 2013-11-15 23:14:00.263000+00:00 | 2013-11-15 23:14:00.263000+00:00 | algorithm|random | ['http://golancourses.net/2010spring/02/22/flower-generation/', 'http://arxiv.org/abs/1004.4388'] | 2 |
44,331,442 | <p><code>TrainingHelper</code> feeds the ground truth at every step. If you want to use decoder outputs, you can use <strong>scheduled sampling</strong> [1]. Scheduled sampling is implemented in <code>ScheduledEmbeddingTrainingHelper</code> and <code>ScheduledOutputTrainingHelper</code>, so you can use one of the two (depending on your particular application) instead of <code>TrainingHelper</code>. See also this thread here:
<a href="https://stackoverflow.com/questions/43795423/scheduled-sampling-in-tensorflow">scheduled sampling in Tensorflow</a>. </p>
<p>[1] <a href="https://arxiv.org/pdf/1506.03099.pdf" rel="noreferrer">https://arxiv.org/pdf/1506.03099.pdf</a></p> | 2017-06-02 14:33:18.110000+00:00 | 2017-06-02 14:33:18.110000+00:00 | null | null | 43,826,784 | <p>I managed to build a <strong>sequence to sequence</strong> model in <strong>tensorflow</strong> using the <strong>tf.contrib.seq2seq</strong> classes in 1.1 version. <br> <br>
For now I use the <strong>TrainingHelper</strong> for training my model.
But does this helper feed <strong>previously decoded</strong> values in the decoder for training or just the ground truth?
If it doesn't how can I feed previously decoded value as input in the decoder instead of ground truth values ?</p> | 2017-05-07 00:51:40.827000+00:00 | 2017-09-19 15:13:36.280000+00:00 | null | tensorflow|recurrent-neural-network|decoder|sequence-to-sequence | ['https://stackoverflow.com/questions/43795423/scheduled-sampling-in-tensorflow', 'https://arxiv.org/pdf/1506.03099.pdf'] | 2 |
45,131,614 | <p>There are three types of <a href="https://simple.wikipedia.org/wiki/Horn_clause" rel="nofollow noreferrer">Horn clauses</a></p>
<p>definite clause: ¬ p ∨ ¬ q ∨ ⋯ ∨ ¬ t ∨ u<br />
fact: u<br />
goal clause: ¬ p ∨ ¬ q ∨ ⋯ ∨ ¬ t</p>
<p>which relate to Prolog.<br />
Example Prolog from <a href="http://www.amzi.com/AdventureInProlog/apreface.php" rel="nofollow noreferrer">Adventures in Prolog</a></p>
<p>A definite clause is a Prolog <a href="http://www.amzi.com/AdventureInProlog/a5rules.php" rel="nofollow noreferrer">rule</a>:</p>
<pre><code>where_food(X,Y) :-
location(X,Y),
edible(X).
</code></pre>
<p>A fact is a Prolog <a href="http://www.amzi.com/AdventureInProlog/a2facts.php" rel="nofollow noreferrer">fact</a>:</p>
<pre><code>room(kitchen).
</code></pre>
<p>A goal clause is a Prolog <a href="http://www.amzi.com/AdventureInProlog/a4comqry.php" rel="nofollow noreferrer">query</a>:</p>
<pre><code>location(X, kitchen), edible(X).
</code></pre>
<p>Another way of looking at the three, common with Prolog, uses <code>head</code>, <code>body</code> and <code>:-</code>.</p>
<p>A rule is <code>head :- body.</code><br />
A fact is <code>head.</code><br />
A query is <code>body.</code></p>
<p>A body is made up of calls to predicates (head), so a body can look like this <code>A,B,C</code>.</p>
<p>When you use a query it is really</p>
<pre><code>goal :- body.
</code></pre>
<p>or</p>
<pre><code>goal <- A,B,C
</code></pre>
<p>or</p>
<pre><code>location(X, kitchen), edible(X) ⊃ goal
</code></pre>
<h2>A Prolog example</h2>
<p>Facts</p>
<pre><code>location(apple, kitchen).
location(crackers, kitchen).
location(flashlight, desk).
edible(apple).
edible(crackers).
</code></pre>
<p>Goal clause</p>
<pre><code>location(X, kitchen), edible(X).
</code></pre>
<p>Answer</p>
<pre><code>X = apple
X = crackers
</code></pre>
<h2>Earlier answer</h2>
<p>Starting with a predicate in Prolog</p>
<pre><code>ancestor(X,Y) :- parent(X, Z) , ancestor(Z,Y).
</code></pre>
<p>where <code>ancestor(X,Y)</code> is known as the head clause and <code>parent(X, Z) , ancestor(Z,Y)</code> is known as the body.</p>
<p>Converting the Prolog to an implication<br />
<code>:-</code> is <code>⊃</code><br />
<code>,</code> is <code>∧</code><br />
and the implication is reversed.</p>
<pre><code>(parent(X,Z) ∧ ancestor(Z,Y)) ⊃ ancestor(X,Y)
</code></pre>
<p>converting the conjunction (∧) of literals into a disjunction (∨) of literals</p>
<pre><code>not parent(X,Z) ∨ not ancestor(Z,Y) ∨ ancestor(X,Y)
</code></pre>
<p>results in <code>not parent(X,Z) ∨ not ancestor(Z,Y)</code> which is the Prolog body or in your question the goal clause.</p>
<p>In other words, the goal clause consists of the statements that need to be satisfied in order to achieve the goal, which is the Prolog head <code>ancestor(X,Y)</code>.</p>
<p>To see an example of using the Prolog ancestor see <a href="https://en.wikibooks.org/wiki/Prolog/Recursive_Rules" rel="nofollow noreferrer">Prolog/Recursive Rules</a>. Note that the rule given in this example is only one of the two that are used to define ancestor with the missing Prolog rule being <code>ancestor(A, B) :- parent(A, B).</code></p>
<h2>References</h2>
<p>The University of Edinburgh Course: Prolog Programming for Automated Reasoning students - Lecture 10 - <a href="http://www.inf.ed.ac.uk/teaching/courses/ar/ARPROLOG/slides/lect10.pdf" rel="nofollow noreferrer">Logic Programming</a></p>
<p><a href="https://en.wikipedia.org/wiki/Horn_clause#Definition" rel="nofollow noreferrer">Goal clause</a> definition from Wikipedia -</p>
<p><code>a Horn clause without a positive literal is sometimes called a goal clause</code><br />
or <code>¬p ∨ ¬q ∨ ... ∨ ¬t</code></p>
<p>SWI-Prolog <a href="http://www.swi-prolog.org/pldoc/man?section=glossary" rel="nofollow noreferrer">Glossary of Terms</a></p>
<p><a href="https://en.wikibooks.org/wiki/Prolog/Recursive_Rules" rel="nofollow noreferrer">Prolog/Recursive Rules</a></p>
<p><a href="https://books.google.com/books?hl=en&lr=&id=TeKpCAAAQBAJ&oi=fnd&pg=PA1&dq=Foundations%20of%20Logic%20Programming&ots=wLoN6OROlT&sig=_tWV2OLVlQu6Z9dGJwmJR18KQ4Y#v=onepage&q=goal%20&f=false" rel="nofollow noreferrer">Foundations of Logic</a> (<a href="http://www.worldcat.org/oclc/864586957" rel="nofollow noreferrer">WorldCat</a>)</p>
<h2>Try the Prolog online</h2>
<p>Using <a href="https://swish.swi-prolog.org/p/XlRdOBoS.pl" rel="nofollow noreferrer">swish</a> (Prolog rules are already entered with this link)<br />
In the lower right by <code>?-</code> where it says <code>your query goes here ...</code> enter <code>ancestor(X, john).</code><br />
Then in the lower right click <code>Run!</code><br />
Above that you should see an ancestor of john<br />
<code>X=david</code><br />
Under that click <code>Next</code> and you should see another ancestor of john<br />
<code>X=jim</code><br />
keep clicking <code>Next</code> to see other ancestors and then finally you should see
<code>false</code> meaning there are no more valid answers.</p>
<h2>An excerpt</h2>
<p>From <a href="https://www.cs.cmu.edu/%7Efp/courses/lp/lectures/lp-all.pdf" rel="nofollow noreferrer">Logic Programming</a> by Frank Pfenning</p>
<blockquote>
<p>To make the transition from inference rules to logic programming we
need to impose a particular strategy. Two fundamental ideas suggest
themselves: we could either search backward from the conjecture,
growing a (potential) proof tree upwards, or we could work forwards
from the axioms applying rules until we arrive at the conjecture. We
call the first one goal-directed and the second one forward-reasoning.</p>
</blockquote>
<h2>How to search</h2>
<p>OP comment</p>
<blockquote>
<p>Could you tell me how you do such searches, 'cause when I run into
some complex problems I usually don't know how to search necessary
resources</p>
</blockquote>
<p>Normally I would have the OP (original poster) ask that as another question, but since it is more of a subjective than objective question it will get shot down at SO (StackOverflow) with down and close votes and I can use examples related to the original question so I will answer it here.</p>
<p>When searching, the path to success is to figure out the current terminology used by the experts in the area and which keywords in that terminology are relevant to what you seek. Sometimes I know the keywords off the top of my head; as with this question, with <code>disjunction of literals</code> and <code>goal</code> I knew they were key terms in logic, reasoning, theorem proving and logic languages. Other times I am guessing or in the dark, as with this <a href="https://stackoverflow.com/q/41575278/1243762">question</a>.</p>
<p>One way that sometimes yields success when trying to learn the current terminology is to search for <a href="https://en.wikipedia.org/wiki/Review_article" rel="nofollow noreferrer">review articles</a> which typically have <code>survey</code> in the title and thus <code>survey</code> is a good keyword. e.g. using <a href="https://www.semanticscholar.org/" rel="nofollow noreferrer">Semantic Scholar</a> with <a href="https://www.semanticscholar.org/search?q=horn%20clause%20survey&sort=relevance&ae=false" rel="nofollow noreferrer">horn clause survey</a> finds on first page <a href="https://pdfs.semanticscholar.org/8440/00c1d84f547449d8f5baf3f62afe2a49965e.pdf" rel="nofollow noreferrer">Constraint Logic Programming: A Survey</a></p>
<p>As an example, when searching for the canonical form of math expressions with <code>math canonical form</code>, little of relevance was found; but after finding that <code>standard form</code> was more commonly used, better results were obtained.</p>
<p>Sometimes it is not words that help you find the answer, and search engines that rely on words will fail you. For <a href="https://stackoverflow.com/q/12881980/1243762">example</a>, a type of search I see every few weeks or so involves finding the pattern/formula/etc. that generates a sequence of numbers when you only know part of the sequence, typically the start of the sequence. This is where a search using <a href="http://oeisf.org/index.html" rel="nofollow noreferrer">OEIS</a> (The On-Line Encyclopedia of Integer Sequences®) comes in handy. Another type of search engine related to math is <a href="https://www.wolframalpha.com/" rel="nofollow noreferrer">WolframAlpha</a>. So be attentive to the fact that there are other kinds of search engines.</p>
<p>Once you have the keywords then, as @WillNess notes, you sometimes query for them as single words <code>goal clause</code>, but sometimes as exact words in quotes <code>"goal clause"</code>. For this question using an exact phrase returned better results.</p>
<p>The next thing to realize is the source of the information often corresponds with quality of information.</p>
<ul>
<li>Sources from university courses, online digital scientific libraries and books are high on my <a href="https://cs.stackexchange.com/a/7143/268">list</a></li>
<li>PDF (Portable Document Format). The reason for PDF is that it is common to write technical professional papers with LaTeX which are then output as PDF. The old format was PS (PostScript) but I rarely see that with newer papers. Thus <code>pdf</code> is a good search term to add.</li>
<li>Then sites from the creators such as <a href="https://www.ubuntu.com/" rel="nofollow noreferrer">Ubuntu</a>, <a href="http://www.swi-prolog.org/" rel="nofollow noreferrer">SWI-Prolog</a></li>
<li>Sites that are obviously good such as <a href="https://www.w3schools.com/" rel="nofollow noreferrer">w3schools</a></li>
<li>Blog entries by the experts; most blogs are not by the experts.</li>
</ul>
<p>Other search engines I use related to computer science technical papers are:</p>
<ul>
<li><a href="http://citeseerx.ist.psu.edu/index" rel="nofollow noreferrer">CiteSeerX</a></li>
<li><a href="https://arxiv.org/" rel="nofollow noreferrer">arxiv.org</a></li>
<li><a href="https://scholar.google.com/" rel="nofollow noreferrer">Google Scholar</a></li>
<li><a href="https://academic.microsoft.com/" rel="nofollow noreferrer">Microsoft Academic</a></li>
<li><a href="http://www.worldcat.org/" rel="nofollow noreferrer">WorldCat</a></li>
<li><a href="https://www.semanticscholar.org/" rel="nofollow noreferrer">Semantic Scholar</a></li>
<li><a href="http://dblp.uni-trier.de/" rel="nofollow noreferrer">dblp</a></li>
</ul>
<p>and of course you can always query for other <a href="https://www.google.com/search?q=acedemic%20search%20engines&ie=&oe=#q=academic%20search%20engines&spf=1500463958476" rel="nofollow noreferrer">academic search engines</a></p>
<p>If you have only one or two documents that appeal to you but still don't have enough detail then start working down the references noted in those documents. This can be challenging as years ago many of the articles were only published in professional journals and are not freely available. However it is common to find those articles freely available if one of the authors is a professor and you can find the document listed under their public pages where they teach. <a href="http://citeseerx.ist.psu.edu/index" rel="nofollow noreferrer">CiteSeerX</a> is really good for finding referenced documents.</p>
<p>If you are using an existing SO answer then check the tag to see if they are a <a href="https://stackoverflow.com/tags/f%23/topusers">top answerer</a> and remember that the accepted answer may not be the best answer and that any answer may not be a correct answer to your question.</p>
<p>Then it is a matter of reading several of the articles to see what information is current and if there is consistency.</p>
<p>Some fields are moving fast and changing rapidly even in the time span of the last 20 years of the Internet. Here is an example where one <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">paper</a> made a significant change. If you are not aware of this paper in the area it relates to, you will probably be confused by research that happened before the paper and research based on the paper. To find such papers note the significant number of citations, at present <a href="https://scholar.google.com/scholar_lookup?title=Attention%20Is%20All%20You%20Need&author=Ashish%20Vaswani&author=Noam%20Shazeer&publication_date=2017/12/06&arxiv_id=1706.03762" rel="nofollow noreferrer">18658</a>.</p>
<p>Don't be hesitant to spend upwards of an hour just searching and then a few more to a full day just reading. Here is a <a href="https://stackoverflow.com/q/37117089/1243762">question</a> I spent over four hours searching and reading and still could not find an answer. After finally running out of things to try I posted the question only to find it can not be done and is not documented. The answer was by someone I know to be an expert in F#.</p>
<p>Lastly don't be afraid to leave bread crumbs, e.g. your own personal notes as I do with <code>Of interest:</code> comments or in this <a href="https://stackoverflow.com/q/42627198/1243762">question</a>. You will often use the same search terms over and over again and if you have enough posted on the Internet will start to run into your own post. After a while you will realize that if you only left bread crumbs there it would have made your life easier.</p>
<p>The rest is just years of experience and diligence.</p>
<p>Also sometimes asking an SO question requesting help with which keywords to use sometimes gets answers and sometimes get shot down.</p> | 2017-07-16 17:42:11.687000+00:00 | 2021-03-21 13:15:32.780000+00:00 | 2021-03-21 13:15:32.780000+00:00 | null | 45,123,756 | <p>I think I know the meaning of <strong>goal clauses</strong> and <strong>horn clauses</strong>, but I'm quite confused about why people name a disjunction of literals of which none is positive a <strong>goal clause</strong>?</p>
<p>What's the goal here?</p> | 2017-07-15 23:53:44.450000+00:00 | 2021-03-21 13:15:32.780000+00:00 | 2017-07-17 00:14:46.637000+00:00 | prolog|logic | ['https://simple.wikipedia.org/wiki/Horn_clause', 'http://www.amzi.com/AdventureInProlog/apreface.php', 'http://www.amzi.com/AdventureInProlog/a5rules.php', 'http://www.amzi.com/AdventureInProlog/a2facts.php', 'http://www.amzi.com/AdventureInProlog/a4comqry.php', 'https://en.wikibooks.org/wiki/Prolog/Recursive_Rules', 'http://www.inf.ed.ac.uk/teaching/courses/ar/ARPROLOG/slides/lect10.pdf', 'https://en.wikipedia.org/wiki/Horn_clause#Definition', 'http://www.swi-prolog.org/pldoc/man?section=glossary', 'https://en.wikibooks.org/wiki/Prolog/Recursive_Rules', 'https://books.google.com/books?hl=en&lr=&id=TeKpCAAAQBAJ&oi=fnd&pg=PA1&dq=Foundations%20of%20Logic%20Programming&ots=wLoN6OROlT&sig=_tWV2OLVlQu6Z9dGJwmJR18KQ4Y#v=onepage&q=goal%20&f=false', 'http://www.worldcat.org/oclc/864586957', 'https://swish.swi-prolog.org/p/XlRdOBoS.pl', 'https://www.cs.cmu.edu/%7Efp/courses/lp/lectures/lp-all.pdf', 'https://stackoverflow.com/q/41575278/1243762', 'https://en.wikipedia.org/wiki/Review_article', 'https://www.semanticscholar.org/', 'https://www.semanticscholar.org/search?q=horn%20clause%20survey&sort=relevance&ae=false', 'https://pdfs.semanticscholar.org/8440/00c1d84f547449d8f5baf3f62afe2a49965e.pdf', 'https://stackoverflow.com/q/12881980/1243762', 'http://oeisf.org/index.html', 'https://www.wolframalpha.com/', 'https://cs.stackexchange.com/a/7143/268', 'http://exponential_exponent_normal_form', 'https://www.ubuntu.com/', 'http://www.swi-prolog.org/', 'https://www.w3schools.com/', 'http://citeseerx.ist.psu.edu/index', 'https://arxiv.org/', 'https://scholar.google.com/', 'https://academic.microsoft.com/', 'http://www.worldcat.org/', 'https://www.semanticscholar.org/', 'http://dblp.uni-trier.de/', 'https://www.google.com/search?q=acedemic%20search%20engines&ie=&oe=#q=academic%20search%20engines&spf=1500463958476', 'http://citeseerx.ist.psu.edu/index', 'https://stackoverflow.com/tags/f%23/topusers', 'https://arxiv.org/abs/1706.03762', 'https://scholar.google.com/scholar_lookup?title=Attention%20Is%20All%20You%20Need&author=Ashish%20Vaswani&author=Noam%20Shazeer&publication_date=2017/12/06&arxiv_id=1706.03762', 'https://stackoverflow.com/q/37117089/1243762', 'https://stackoverflow.com/q/42627198/1243762'] | 41 |