column                 dtype           min    max
a_id                   int64           7.84k  73.8M
a_body                 string length   61     33k
a_creation_date        string length   25     32
a_last_activity_date   string length   25     32
a_last_edit_date       string length   25     32
a_tags                 float64         -      -
q_id                   int64           826    73.8M
q_body                 string length   61     29.9k
q_creation_date        string length   25     32
q_last_activity_date   string length   25     32
q_last_edit_date       string length   25     32
q_tags                 string length   1      103
_arxiv_links           string length   2      6.69k
_n_arxiv_links         int64           0      94
48,146,353
<p>It appears that your network is collapsing, due to a number of potential causes. I would try the following modifications:</p> <ul> <li>Use ReLU activation instead of tanh. ReLU has proven to be a much more robust activation in Conv networks than sigmoid or tanh.</li> <li>Use batch-normalization at the input of your convolutional layers (see paper <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">here</a>).</li> <li>Divide your range into sections and use softmax. If you must have regression, consider a separate regression network for each range and select the correct regression net based on the output of the softmax. Cross-entropy loss has shown more success in learning highly non-linear functions.</li> </ul>
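<p>As an illustration (not part of the original answer), here is a minimal sketch of the first two suggestions in mxnet's Python symbol API — the question uses the R bindings, but the symbols are the same; layer names and shapes are placeholders:</p> <pre><code>import mxnet as mx

data = mx.sym.Variable('data')
# batch-normalize at the input of the conv layer, then use ReLU instead of tanh
conv_1 = mx.sym.Convolution(data=data, kernel=(5, 5), num_filter=20)
bn_1 = mx.sym.BatchNorm(data=conv_1)
relu_1 = mx.sym.Activation(data=bn_1, act_type='relu')
pool_1 = mx.sym.Pooling(data=relu_1, pool_type='max', kernel=(2, 2), stride=(2, 2))
</code></pre>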
2018-01-08 08:07:49.507000+00:00
2018-01-08 08:07:49.507000+00:00
null
null
45,383,926
<p>I am trying to use the mxnet package in R with a CNN to predict a scalar output (in my case wait time) based on an image. </p> <p>However, when I do this, I get the same resultant output (it predicts the same number, which is probably just the average of all of the results). How do I get it to predict the scalar output correctly?</p> <p>Also, my image has already been pre-processed by greyscaling it and converting it into the pixel format below. I am essentially using images to predict wait times, which is why my train_y is the current wait times in seconds, hence why I didn't convert it into a [0,1] range. I would prefer a regression-type output or some kind of scalar output that outputs the predicted wait time based on the image.</p> <p>What other ways would you recommend to tackle this problem? I am not sure if my approach is correct.</p> <p>Here is my reproducible code: </p> <pre><code>set.seed(0) library(caret) df &lt;- data.frame(replicate(784,runif(7538))) df$waittime &lt;- 1000*runif(7538) training_index &lt;- createDataPartition(df$waittime, p = .9, times = 1) training_index &lt;- unlist(training_index) train_set &lt;- df[training_index,] dim(train_set) test_set &lt;- df[-training_index,] dim(test_set) ## Fix train and test datasets train_data &lt;- data.matrix(train_set) train_x &lt;- t(train_data[, -785]) train_y &lt;- train_data[,785] train_array &lt;- train_x dim(train_array) &lt;- c(28, 28, 1, ncol(train_array)) test_data &lt;- data.matrix(test_set) test_x &lt;- t(test_set[,-785]) test_y &lt;- test_set[,785] test_array &lt;- test_x dim(test_array) &lt;- c(28, 28, 1, ncol(test_x)) library(mxnet) ## Model mx_data &lt;- mx.symbol.Variable('data') ## 1st convolutional layer 5x5 kernel and 20 filters. conv_1 &lt;- mx.symbol.Convolution(data = mx_data, kernel = c(5, 5), num_filter = 20) tanh_1 &lt;- mx.symbol.Activation(data = conv_1, act_type = "tanh") pool_1 &lt;- mx.symbol.Pooling(data = tanh_1, pool_type = "max", kernel = c(2, 2), stride = c(2,2 )) ## 2nd convolutional layer 5x5 kernel and 50 filters. conv_2 &lt;- mx.symbol.Convolution(data = pool_1, kernel = c(5,5), num_filter = 50) tanh_2 &lt;- mx.symbol.Activation(data = conv_2, act_type = "tanh") pool_2 &lt;- mx.symbol.Pooling(data = tanh_2, pool_type = "max", kernel = c(2, 2), stride = c(2, 2)) ## 1st fully connected layer flat &lt;- mx.symbol.Flatten(data = pool_2) fcl_1 &lt;- mx.symbol.FullyConnected(data = flat, num_hidden = 500) tanh_3 &lt;- mx.symbol.Activation(data = fcl_1, act_type = "tanh") ## 2nd fully connected layer fcl_2 &lt;- mx.symbol.FullyConnected(data = tanh_3, num_hidden = 1) ## Output #NN_model &lt;- mx.symbol.SoftmaxOutput(data = fcl_2) label &lt;- mx.symbol.Variable("label") #NN_model &lt;- mx.symbol.MakeLoss(mx.symbol.square(mx.symbol.Reshape(fcl_2, shape = 0) - label)) NN_model &lt;- mx.symbol.LinearRegressionOutput(fcl_2) ## Device used. Sadly not the GPU :-( #device &lt;- mx.gpu #Didn't work well, predicted same number continuously regardless of image ## Train on 1200 samples model &lt;- mx.model.FeedForward.create(NN_model, X = train_array, y = train_y, # ctx = device, num.round = 30, array.batch.size = 100, initializer=mx.init.uniform(0.002), learning.rate = 0.00001, momentum = 0.9, wd = 0.00001, eval.metric = mx.metric.rmse, epoch.end.callback = mx.callback.log.train.metric(100)) pred &lt;- predict(model, test_array) #gives the same numeric output </code></pre>
2017-07-29 00:02:25.603000+00:00
2018-01-08 08:07:49.507000+00:00
2017-08-04 00:17:59.643000+00:00
r|image-processing|conv-neural-network|image-recognition|mxnet
['https://arxiv.org/abs/1502.03167']
1
71,813,543
<p><img src="https://jalammar.github.io/images/t/self-attention-matrix-calculation-2.png" alt="" /> <em>Taken from <a href="https://jalammar.github.io/illustrated-transformer/" rel="nofollow noreferrer">https://jalammar.github.io/illustrated-transformer/</a></em></p> <p>Changing the order of words, will permute the order of the rows of <code>V</code>, but will also permute the order of the columns of the correlation matrix <code>Q x transpose(K)</code>. Thus the resulting output will be unchanged and positional information will be lost after the first self attention layer.</p> <p>To solve this you encode the position into the embedding of each word, so the neural net can learn to take two embeddings and know how far they are apart no matter the order they are fed in.</p> <p>From the abstract that claimed <a href="https://arxiv.org/abs/1905.04226" rel="nofollow noreferrer">positional encoding is not necessary</a>:</p> <blockquote> <p>The positional encoding is an essential augmentation for the self-attention mechanism which is invariant to sequence ordering.</p> </blockquote>
2022-04-10 03:25:27.113000+00:00
2022-04-10 03:25:27.113000+00:00
null
null
61,440,281
<p>I am developing a language model like <a href="https://pytorch.org/tutorials/beginner/transformer_tutorial.html" rel="nofollow noreferrer">https://pytorch.org/tutorials/beginner/transformer_tutorial.html</a>.</p> <p>It is not clear to me whether positional encoding is necessary here. As far as I understand, it is necessary for the language translation task because the decoder should be able to position a word from the previous output within the sequence from the encoder. But is it necessary in language modeling without the decoder?</p> <p>Is it possible that the words in the encoder output are shuffled?</p> <p><strong>Edit:</strong> </p> <p>There are no explanations in the original paper, and I didn't find explanations in tutorials (like here <a href="https://kazemnejad.com/blog/transformer_architecture_positional_encoding/" rel="nofollow noreferrer">https://kazemnejad.com/blog/transformer_architecture_positional_encoding/</a>).</p> <p>I don't understand this: </p> <p>"As each word in a sentence simultaneously flows through the Transformer’s encoder/decoder stack, The model itself doesn’t have any sense of position/order for each word."</p> <p>From my point of view, the transformer encoder has info about the order because its input is an ordered sequence (similar to an RNN).</p> <p>I tried to remove positional encoding from the model. It works, but with worse performance.</p> <p>Is it useful to add such positional encoding to an RNN? Could it improve its performance?</p>
2020-04-26 11:54:02.903000+00:00
2022-04-10 03:25:27.113000+00:00
2020-04-27 13:43:26+00:00
transformer-model|language-model
['https://jalammar.github.io/illustrated-transformer/', 'https://arxiv.org/abs/1905.04226']
2
63,948,329
<p>This research group claims positional encoding is not necessary: <a href="https://arxiv.org/abs/1905.04226" rel="noreferrer">https://arxiv.org/abs/1905.04226</a></p>
2020-09-18 02:01:44.613000+00:00
2020-09-18 02:01:44.613000+00:00
null
null
61,440,281
<p>I am developing a language model like <a href="https://pytorch.org/tutorials/beginner/transformer_tutorial.html" rel="nofollow noreferrer">https://pytorch.org/tutorials/beginner/transformer_tutorial.html</a>.</p> <p>It is not clear to me whether positional encoding is necessary here. As far as I understand, it is necessary for the language translation task because the decoder should be able to position a word from the previous output within the sequence from the encoder. But is it necessary in language modeling without the decoder?</p> <p>Is it possible that the words in the encoder output are shuffled?</p> <p><strong>Edit:</strong> </p> <p>There are no explanations in the original paper, and I didn't find explanations in tutorials (like here <a href="https://kazemnejad.com/blog/transformer_architecture_positional_encoding/" rel="nofollow noreferrer">https://kazemnejad.com/blog/transformer_architecture_positional_encoding/</a>).</p> <p>I don't understand this: </p> <p>"As each word in a sentence simultaneously flows through the Transformer’s encoder/decoder stack, The model itself doesn’t have any sense of position/order for each word."</p> <p>From my point of view, the transformer encoder has info about the order because its input is an ordered sequence (similar to an RNN).</p> <p>I tried to remove positional encoding from the model. It works, but with worse performance.</p> <p>Is it useful to add such positional encoding to an RNN? Could it improve its performance?</p>
2020-04-26 11:54:02.903000+00:00
2022-04-10 03:25:27.113000+00:00
2020-04-27 13:43:26+00:00
transformer-model|language-model
['https://arxiv.org/abs/1905.04226']
1
22,330,272
<p>Symbol spotting is more complicated than logo spotting because interest points hardly work on document images such as architectural plans. Many conferences deal with pattern recognition, and each year there are many new algorithms for symbol spotting, so giving you the best method is not possible. You could check the IAPR conferences: ICPR, ICDAR, DAS, GREC (Workshop on Graphics Recognition), etc. These researchers focus on this topic: M Rusiñol, J Lladós, S Tabbone, J-Y Ramel, M Liwicki, etc. They work on several techniques for improving symbol spotting such as: <a href="http://www.cvc.uab.es/~marcal/pdfs/GREC06.pdf" rel="nofollow">vectorial signatures</a>, <a href="http://arxiv.org/pdf/1004.5424" rel="nofollow">graph based signatures</a> and so on (check Google Scholar for more papers).</p> <p>An easy way to start a new approach is to work with simple shapes such as lines, rectangles, and triangles instead of matching everything at one time.</p>
2014-03-11 15:39:16.490000+00:00
2014-03-11 15:39:16.490000+00:00
null
null
22,319,867
<p>I have a large image (5400x3600) that has multiple CCTVs that I need to detect.</p> <p>The detection takes a lot of time (4-7 minutes) with rotation, but it still fails to resolve certain CCTVs.</p> <p>What is the best method to match a template like this?</p> <p>I am using skimage - OpenCV is not an option for me, but I am open to suggestions on that too.</p> <p>For example: in the images below, the template is correctly matched with the second image - but the first image is not matched - I guess due to the noise created by the text &quot;BLDG...&quot;</p> <hr /> <h2>Template:</h2> <p><img src="https://i.stack.imgur.com/p6zav.png" alt="Template" /></p> <hr /> <h2>Source image:</h2> <p><img src="https://i.stack.imgur.com/06XME.png" alt="Source image" /></p> <hr /> <h2>Match result:</h2> <p><img src="https://i.stack.imgur.com/T5rXi.png" alt="Match result" /></p>
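<p>For reference, a minimal sketch of the brute-force rotation search the question describes, using scikit-image (illustrative only; the function name and angle step are made up, and grayscale float images are assumed):</p> <pre><code>import numpy as np
from skimage.feature import match_template
from skimage.transform import rotate

def best_rotated_match(image, template, angles=range(0, 360, 15)):
    """Try the template at several rotations, keep the strongest correlation."""
    best_score, best_angle, best_loc = -1.0, None, None
    for angle in angles:
        rt = rotate(template, angle, resize=True)   # rotated copy of the template
        result = match_template(image, rt)          # normalized cross-correlation map
        ij = np.unravel_index(np.argmax(result), result.shape)
        if result[ij] > best_score:
            best_score, best_angle, best_loc = result[ij], angle, ij
    return best_score, best_angle, best_loc
</code></pre>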
2014-03-11 08:15:05.587000+00:00
2014-03-14 13:44:04.463000+00:00
2020-06-20 09:12:55.060000+00:00
opencv|python-2.7|image-processing|computer-vision
['http://www.cvc.uab.es/~marcal/pdfs/GREC06.pdf', 'http://arxiv.org/pdf/1004.5424']
2
48,418,201
<p>Assuming all these papers are from arXiv, you could instead extract the arXiv id (I'd guess that searching for "arXiv:" in the PDF's text would consistently reveal the id as the first hit). </p> <p>Once you have the arXiv reference number (and have done a <code>pip install arxiv</code>), you can get the title using</p> <pre><code>import arxiv paper_ref = '1501.00730' arxiv.query(id_list=[paper_ref])[0].title </code></pre>
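<p>For the id-extraction step this answer suggests, a hedged sketch (it assumes <code>pdf_text</code> already holds the PDF's extracted text, e.g. from pdfminer, and the regex only covers new-style ids such as <code>1501.00730</code>, not old-style ones like <code>math.GT/0309136</code>):</p> <pre><code>import re

def find_arxiv_id(pdf_text):
    """Return the first new-style arXiv id found in the text, or None."""
    m = re.search(r'arXiv:(\d{4}\.\d{4,5})(v\d+)?', pdf_text)
    return m.group(1) if m else None
</code></pre>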
2018-01-24 08:45:54.893000+00:00
2018-01-24 08:45:54.893000+00:00
null
null
911,672
<p>I want to write a script to rename downloaded papers with their titles automatically. I'm wondering if there are any libraries or tricks I can make use of. The PDFs are all generated by TeX and should have some 'formal' structure.</p>
2009-05-26 16:52:40.053000+00:00
2018-01-24 08:45:54.893000+00:00
2009-05-26 17:03:59.970000+00:00
python|pdf
[]
0
851,745
<p>At the risk of being pedantic, what you describe is not <em>anonymous</em> data, but rather <em>pseudonymous</em> data. That said, have you considered using some sort of keyed hash function such as <a href="http://en.wikipedia.org/wiki/HMAC" rel="nofollow noreferrer">HMAC-SHA1</a> to perform the pseudonym generation? You can reach a fair compromise with a scheme like this:</p> <ul> <li>Separate your analysis and OLTP databases. Minimize the number of people that have access to both.</li> <li>Keep the HMAC key private to the application that copies data to the analysis database, not accessible from either database. Perhaps have the application generate it on installation and obfuscate it using a hardcoded key, so that neither the system administrators nor the software developers will find it trivial to get at without collusion.</li> <li>Do not copy real names and addresses <em>or any equivalent or easily linkable keys such as user number, invoice numbers, etc.</em> from the OLTP database without hashing them.</li> </ul> <p>If you do this, there are two main routes of attack to obtain the real identity from the pseudonym.</p> <ul> <li>Direct attack: Obtain the HMAC key, compute the pseudonym for each known user, and reverse the lookup in the resulting table. (HMAC is irreversible: given only a pseudonym and the key you cannot feasibly obtain the original value.)</li> <li>Information fusion attack: Without knowledge of the key and list of identities, the next best thing is simply to attempt to correlate the pseudonymous data with other data -- perhaps even a stolen copy of the OLTP database.</li> </ul> <p>Pseudonymous data sets are <a href="https://arxiv.org/abs/cs/0610105" rel="nofollow noreferrer">notoriously vulnerable</a> to information fusion attacks -- you have to strip out or "blur" a lot of key correlating information to make the data set resistant to such attacks, but exactly how much you need to strip is a <a href="https://arxiv.org/abs/0801.1715" rel="nofollow noreferrer">topic of current research</a>.</p>
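<p>To make the keyed-hash idea concrete, a minimal sketch in Python (the key value is a placeholder; as described above, in practice the key would be generated at install time and kept away from both databases):</p> <pre><code>import hashlib
import hmac

SECRET_KEY = b'generated-at-install-time-and-kept-out-of-both-databases'

def pseudonym(real_identifier: str) -> str:
    """Deterministic pseudonym: the same user always maps to the same token,
    but without the key the mapping cannot feasibly be reversed."""
    return hmac.new(SECRET_KEY, real_identifier.encode('utf-8'),
                    hashlib.sha1).hexdigest()

# pseudonym('frank.mueller@example.com') yields the same token on every call,
# so all of one user's records stay linked under one fake identity.
</code></pre>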
2009-05-12 08:33:55.840000+00:00
2017-03-16 19:51:31.430000+00:00
2017-03-16 19:51:31.430000+00:00
null
851,481
<p>I am generating log records about user actions. For privacy reasons, these need to be anonymized after N days. However, I also need to run reports against this anonymized data.</p> <p>I want all actions by real user A to be listed under fake user X in the anonymized logs - records of one user must still remain records of one (fake) user in the logs. This obviously means that I need to have some mapping between real and fake users, which I use when anonymizing new records. Of course, this totally defeats the point of anonymization - if there's a mapping, the original user data can be restored.</p> <p>Example:</p> <blockquote> <p>User Frank Müller bought 3 cans of soup.</p> <p>Three days later, User Frank Müller asked for refund for 3 cans of soup.</p> </blockquote> <p>When I anonymize the second log entry, the first one has already been anonymized. I still want both log records to point to the same user. Well, that seems almost impossible to me in practice, so I would like to use some method of splitting up data that hopefully allows me to keep as much integrity as possible in the data. Perhaps using the logs as a data warehouse - split everything into facts and just accept the fact that some dimensions cannot be analyzed?</p> <p>Have you encountered such a scenario before? What are my options here? I obviously need to make some sort of compromise - what has proven effective for you? How to get the most use out of such data?</p>
2009-05-12 06:53:46.440000+00:00
2017-03-16 19:51:31.430000+00:00
2020-06-20 09:12:55.060000+00:00
logging|foreign-keys|privacy
['http://en.wikipedia.org/wiki/HMAC', 'https://arxiv.org/abs/cs/0610105', 'https://arxiv.org/abs/0801.1715']
3
63,230,841
<p>Vowpal Wabbit is based on online learning (SGD-like updates, but there is also <code>--bfgs</code> if you really need batch optimization) and (machine learning) <a href="https://vowpalwabbit.org/features.html" rel="nofollow noreferrer">reductions</a>. See some of the <a href="https://vowpalwabbit.org/tutorials.html" rel="nofollow noreferrer">tutorials</a> or <a href="https://vowpalwabbit.org/research.html" rel="nofollow noreferrer">papers</a> to understand the idea of reductions. Many VW papers are also about Contextual Bandit, which is implemented as a reduction to a cost-sensitive one-against-all (OAA) classification (which is further reduced to regression). See a simple <a href="https://medium.com/value-stream-design/machine-learning-reductions-mother-algorithms-part-i-introduction-67c1a69a4b4e" rel="nofollow noreferrer">intro into reductions</a> or a simple <a href="https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/vowpalwabbit/binary.cc" rel="nofollow noreferrer">example of how binary classification is reduced to regression</a>.</p> <p>As far as I know, Vowpal Wabbit does not support decision trees or ensembles, but see <a href="https://arxiv.org/abs/1502.02651" rel="nofollow noreferrer"><code>--boosting</code></a> and <a href="https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-Line-Arguments#learning-algorithm--reduction-options" rel="nofollow noreferrer"><code>--bootstrap</code></a>. It does not support SVM, but see <code>--loss_function hinge</code> (hinge loss is one of the two key concepts of SVM) and <a href="https://github.com/JohnLangford/vowpal_wabbit/wiki/ksvm.pdf" rel="nofollow noreferrer"><code>--ksvm</code></a>. It does not support NN, but <code>--nn</code> (and <a href="https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-Line-Arguments#neural-network-options" rel="nofollow noreferrer">related options</a>) provides very limited support, simulating a single hidden layer (feed-forward with a tanh activation function), which can be added into the reduction stack.</p>
2020-08-03 13:58:37.120000+00:00
2020-08-04 09:12:27.133000+00:00
2020-08-04 09:12:27.133000+00:00
null
63,182,360
<p>I am trying to read the <a href="https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Algorithm-details" rel="nofollow noreferrer">documentation</a> of Vowpal Wabbit and it doesn't specify how to select specific learning algorithms (not loss functions) like SVM, NN, decision trees, etc. How does one select a specific learning algorithm?</p> <p>Or does it select the algorithm itself depending on the problem type (regression/classification), like an AutoML-type or low-code ML library?</p> <p>There are some blogs showing how to use neural networks with the <code>-nn</code> command, but that isn't part of the documentation - is this because it doesn't focus on specific algorithms, as noted above? If so, what is Vowpal Wabbit in essence?</p>
2020-07-30 22:36:34.657000+00:00
2020-08-12 06:42:51.380000+00:00
2020-08-12 06:42:51.380000+00:00
machine-learning|automl|vowpalwabbit
['https://vowpalwabbit.org/features.html', 'https://vowpalwabbit.org/tutorials.html', 'https://vowpalwabbit.org/research.html', 'https://medium.com/value-stream-design/machine-learning-reductions-mother-algorithms-part-i-introduction-67c1a69a4b4e', 'https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/vowpalwabbit/binary.cc', 'https://arxiv.org/abs/1502.02651', 'https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-Line-Arguments#learning-algorithm--reduction-options', 'https://github.com/JohnLangford/vowpal_wabbit/wiki/ksvm.pdf', 'https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-Line-Arguments#neural-network-options']
9
54,005,585
<p>Please refer to <a href="https://arxiv.org/pdf/1306.2597.pdf" rel="nofollow noreferrer">Introducing LETOR 4.0 Datasets</a>, page 2, where the format is explained:</p> <blockquote> <p>Each row is a query-document pair. The first column is the relevance label of this pair, the second column is the query id, the following columns are features, and the end of the row is a comment about the pair, including the id of the document. The larger the relevance label, the more relevant the query-document pair.</p> </blockquote> <p>I hope it helps.</p>
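<p>As an illustration (not from the original answer), a small parser for that row layout; the trailing comment field is where the extra columns such as <code>inc</code> and <code>prob</code> live:</p> <pre><code>def parse_letor_line(line):
    """Parse one LETOR row, e.g.
    '2 qid:10 1:0.03 2:0.50 ... #docid = GX008-86-444 inc = 1 prob = 0.086'"""
    body, _, comment = line.partition('#')
    tokens = body.split()
    label = int(tokens[0])                        # relevance label
    qid = tokens[1].split(':', 1)[1]              # query id
    features = {int(k): float(v)                  # feature index to value
                for k, v in (t.split(':', 1) for t in tokens[2:])}
    return label, qid, features, comment.strip()
</code></pre>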
2019-01-02 11:32:53.273000+00:00
2019-01-02 14:18:01.123000+00:00
2019-01-02 14:18:01.123000+00:00
null
35,304,773
<p>I just discovered the LETOR dataset used for the learning-to-rank problem. I'm wondering what the last two columns, "inc" and "prob", stand for in the train/test/validation sets? Thanks</p>
2016-02-10 00:23:29.520000+00:00
2019-01-02 14:18:01.123000+00:00
2018-05-27 18:45:20.580000+00:00
dataset|information-retrieval
['https://arxiv.org/pdf/1306.2597.pdf']
1
40,814,908
<p>JuMP has compared favorably with GAMS in <a href="https://arxiv.org/abs/1508.01982" rel="nofollow noreferrer">our own benchmarks</a>; take that as you may. The derivative computations are entirely in Julia (which is fast), no compiled C code.</p>
2016-11-26 04:26:06.073000+00:00
2016-11-26 04:26:06.073000+00:00
null
null
40,802,948
<p>I have some difficulties in understanding how performance in non-linear optimisation is influenced by the specific way the solver engine is interfaced.</p> <p>We have an optimisation model that, in its first version, was written in GAMS. IPOPT (a common FOSS non-linear solver engine) was returning an execution time for each optimisation of 1.4 CPU seconds in IPOPT (w/o function evaluations) and 0.2 CPU seconds in function evaluation.</p> <p>When we converted the model to C++ (for a better accounting of the non-optimisation components of the model) and interfaced IPOPT through its C++ API (using ADOL-C and ColPack for AD) we got execution times of 0.7 secs in IPOPT and 9.4 secs in function evaluation (the improvement in IPOPT is likely due to the fact that, compiling IPOPT from source, we were able to use better linear solvers not available in the GAMS version of IPOPT).</p> <p>So, using C++, admittedly with badly optimised code, gave us results ~50 times slower than GAMS, partially compensated by better solver time.</p> <p>We are now evaluating the feasibility of converting the model to other languages, either Python with Pyomo or Julia with JuMP.</p> <p>But we would like first to understand how the function evaluation made by the solver at each step depends on the specific language implemented.</p> <p>With C++, it's pretty evident that the functions making up the optimisation model are directly executed (evaluated) at each iteration, so the way they are implemented does matter (and in particular, gradient and hessian are recomputed each time, at least in our implementation).</p> <p>How is it with Pyomo and JuMP? Would each iteration be evaluated in Python and Julia, or would Pyomo and JuMP instead first render the model in (I guess) C, compute (not evaluate) the gradient and hessian once and for all, and then have this "C version" evaluated each time? It clearly would make a big difference, especially for Python.</p>
2016-11-25 10:36:17.153000+00:00
2016-11-26 04:26:06.073000+00:00
null
performance|mathematical-optimization|pyomo|julia-jump|ipopt
['https://arxiv.org/abs/1508.01982']
1
51,786,787
<p>I'd like to provide a more precise answer as to <em>how</em> the protocol works internally. I quote part of the abstract of <a href="https://arxiv.org/abs/1808.03156" rel="noreferrer">this paper</a>.</p> <blockquote> <p>In short, each AWDL node announces a sequence of Availability Windows (AWs) indicating its readiness to communicate with other AWDL nodes. An elected master node synchronizes these sequences. Outside the AWs, nodes can tune their Wi-Fi radio to a different channel to communicate with an access point, or could turn it off to save energy.</p> </blockquote> <p>From a user perspective, AWDL allows a device to remain connected to an infrastructure-based Wi-Fi network and communicate with AWDL peers "at the same time" by quickly hopping between the channels of the two networks (AWDL uses the fixed social channels 6, 44, and 149). In contrast to the previous answer, we found that current versions of AWDL work fairly well and channel hopping induces only a small overhead.</p> <p><em>Disclaimer</em>: I'm a co-author of <a href="https://arxiv.org/abs/1808.03156" rel="noreferrer">this paper</a> and we retrieved this information by means of reverse engineering. If you are interested in the details, please read the paper and have a look at the <a href="https://seemoo.de/wireshark-awdl" rel="noreferrer">Wireshark dissector</a> (published soon).</p>
2018-08-10 12:41:31.683000+00:00
2018-08-10 12:41:31.683000+00:00
null
null
19,587,701
<p>I'm trying to find out what AWDL is. On iOS, if you use Apple's peer-to-peer networking over BlueTooth, it seems Apple creates a new Network Interface "awdl0" to implement (I guess) IP-over-BT.</p> <p>But I can't find any docs on this tech, or this interface, how it behaves, things we must / must not do with it, etc. Google comes up blank :(.</p> <p>In particular, I <em>believe</em> it means "established a BT connection, and I'm running an IP bridge over the top, and you can use this to communicate peer-to-peer". Apple's own system libraries have bugs where this bridge isn't setup quickly enough, and if you send data too soon, it appears to get dropped by the OS. So ... if I can query this awdl0, I hope to check "are you ready yet?" and delay P2P messages until the OS is happy.</p> <hr> <h2>UPDATE</h2> <p>More info: I can get pairs of iOS devices to create awdl0 connections to each other - but they never get created to OS X machines, whether BT and Bonjour are on or not, whether the devices are paired or not.</p> <hr> <p>Some background:</p> <p>In iOS5, Apple permanently disabled the Bluetooth parts of Bonjour/Peer-to-peer networking, and published a technote instructing everyone to use DNS-SD if they wanted to keep using Bluetooth as a transport between iOS devices. This is fine, but it means you <em>must</em> use DNS-SD if you want high-performance BT, and you want it reliable.</p> <p>(GameKit <em>sometimes</em> works fine, but we often see terrible performance in real-world scenarios, e.g. crowded public places - which goes away if you use DNS-SD)</p> <p>DNS-SD protocol doesn't include info to tell you what the hardware is using. But it does tell you the Network Interfaces (which is how I know we're running on awdl0)</p> <p>DNS-SD is awesome, and we have high-speed, low latency connections peer-to-peer between iOS devices - all the stuff that GameKit promises but often fails to deliver whenever there's more than a few wifi/BT devices in range.</p>
2013-10-25 10:45:17.810000+00:00
2021-02-02 13:20:36.953000+00:00
2013-10-25 11:32:27.037000+00:00
ios|bluetooth
['https://arxiv.org/abs/1808.03156', 'https://arxiv.org/abs/1808.03156', 'https://seemoo.de/wireshark-awdl']
3
61,789,417
<p>This is only orthogonally relevant, but I just published a preprint of a paper on a new parsing method that I call "pika parsing" (c.f. packrat parsing) that directly handles left recursive grammars without the need for rule rewriting.</p> <p><a href="https://arxiv.org/abs/2005.06444" rel="nofollow noreferrer">https://arxiv.org/abs/2005.06444</a></p>
2020-05-14 04:39:48.120000+00:00
2020-05-14 04:39:48.120000+00:00
null
null
2,999,755
<p>As is explained in <a href="https://stackoverflow.com/questions/2652060/removing-left-recursion">Removing left recursion</a>, there are two ways to remove the left recursion.</p> <ul> <li>Modify the original grammar to remove the left recursion using some procedure</li> <li>Write the grammar from the start so that it has no left recursion</li> </ul> <p>What do people normally use for removing (rather than avoiding) the left recursion with ANTLR? I've used flex/bison for parsers, but I need to use ANTLR. The only thing I'm concerned about in using ANTLR (or LL parsers in general) is left recursion removal.</p> <ul> <li>In a practical sense, how serious an issue is removing left recursion in ANTLR? Is this a showstopper for using ANTLR? Or does nobody in the ANTLR community care about it?</li> <li>I like ANTLR's AST generation. In terms of getting an AST quickly and easily, which of the two left-recursion-removal methods is preferable?</li> </ul> <h2>Added</h2> <p>I did some experiments with the following grammar.</p> <pre> E -> E + T|T T -> T * F|F F -> INT | ( E ) </pre> <p>After left recursion removal, I get the following:</p> <pre> E -> TE' E' -> null | + TE' T -> FT' T' -> null | * FT' </pre> <p>I could come up with the following ANTLR representation. Even though it's relatively simple and straightforward, it seems that writing the grammar without left recursion in the first place is the better way to go.</p> <pre> grammar T; options { language=Python; } start returns [value] : e {$value = $e.value}; e returns [value] : t ep { $value = $t.value if $ep.value != None: $value += $ep.value } ; ep returns [value] : {$value = None} | '+' t r = ep { $value = $t.value if $r.value != None: $value += $r.value } ; t returns [value] : f tp { $value = $f.value if $tp.value != None: $value *= $tp.value } ; tp returns [value] : {$value = None} | '*' f r = tp { $value = $f.value; if $r.value != None: $value *= $r.value } ; f returns [int value] : INT {$value = int($INT.text)} | '(' e ')' {$value = $e.value} ; INT : '0'..'9'+ ; WS: (' '|'\n'|'\r')+ {$channel=HIDDEN;} ; </pre>
2010-06-08 17:34:29.803000+00:00
2020-05-14 04:39:48.120000+00:00
2017-05-23 12:18:06.583000+00:00
antlr|compiler-theory
['https://arxiv.org/abs/2005.06444']
1
41,000,028
<p>You may want to review Greg Young's talk on <a href="https://skillsmatter.com/skillscasts/1980-cqrs-not-just-for-server-systems" rel="nofollow noreferrer">occasionally connected systems</a>.</p> <blockquote> <p>The problem is that the commands are fire-and-forget</p> </blockquote> <p>How do you understand &quot;fire-and-forget&quot;? If the model isn't allowed to reject the commands that are dispatched to it, then the messages you are sending are <em>events</em>, not commands.</p> <p>The usual implementation of the write model in CQRS is that commands are linearized (handled one at a time). Commands are expected to ensure that their pre-conditions are met.</p> <p><code>Compare-And-Set</code>, rather than <code>Set</code>.</p> <p>In its simplest form, the commands specify the initial version of the aggregate they are going to modify, in much the same way that conditional requests in HTTP specify <a href="https://www.rfc-editor.org/rfc/rfc7232#section-3" rel="nofollow noreferrer">preconditions on the resource</a>. In the case of a race condition, with multiple commands trying to change the same part of the model, one command would win, and the loser would be trivially rejected.</p> <p>Better (but more work to implement) would be to implement a sort of second chance for the losing command -- building into the model sufficient understanding that it can determine whether the changes induced by the two commands conflict. If they don't, then you just chain them together.</p> <p>An alternative approach is to allow both writes to happen, and accept that there is a conflict. Think about the way a source control system works: I've committed history 1-2-X, you've committed history 1-2-Y, and now there are two alternatives until somebody reconciles them with a merge.</p> <p>That approach is roughly in alignment with Udi Dahan's central point in his essay <a href="http://udidahan.com/2010/08/31/race-conditions-dont-exist/" rel="nofollow noreferrer">Race Conditions Don't Exist</a>.</p> <blockquote> <p>A microsecond difference in timing shouldn’t make a difference to core business behaviors.</p> </blockquote> <p>If your views are being built from lists of events (as you describe here), then one place to start is with a <a href="https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type" rel="nofollow noreferrer">CRDT</a> list. Martin Kleppmann describes a <a href="https://arxiv.org/abs/1608.03960" rel="nofollow noreferrer">JSON data type</a> <a href="https://news.ycombinator.com/item?id=12303100" rel="nofollow noreferrer">(Hacker News commentary)</a>. That doesn't get you &quot;no conflicts&quot;, but it does get you the property that two users who see the same individual events necessarily see them in the same order (if there is a problem, then everybody is seeing the same problem).</p> <p>But Greg's approach is probably simpler to implement - authors work with a local copy of the shared model and interact with it, but that shared model is an approximation, and commands that cannot be reconciled by the book of record are returned to the author for manual mitigation.</p>
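<p>A minimal sketch of the compare-and-set idea (the event-store API here is hypothetical; it only illustrates how the loser of a race gets rejected):</p> <pre><code>class ConcurrencyError(Exception):
    pass

def handle_command(store, aggregate_id, expected_version, command):
    """Linearized command handling with optimistic concurrency control."""
    aggregate = store.load(aggregate_id)       # hypothetical event-store API
    if aggregate.version != expected_version:
        # another command won the race; this one is trivially rejected
        raise ConcurrencyError(
            'expected version %s, found %s' % (expected_version, aggregate.version))
    new_events = aggregate.execute(command)    # domain logic checks its preconditions
    store.append(aggregate_id, expected_version, new_events)
</code></pre>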
2016-12-06 16:19:17.770000+00:00
2016-12-06 16:19:17.770000+00:00
2021-10-07 12:12:37.660000+00:00
null
40,994,861
<p>I have the following task:</p> <ol> <li>Our current architecture is a Web SPA with CQRS and MVVM. We have commands, queries and SignalR as a message bus. </li> <li>Users can select, move, and resize divs on the same workspace from several web browsers simultaneously. </li> <li>Each div is bound to the appropriate ViewModel. Each ViewModel has its own query to refresh. Each ViewModel is subscribed to business events and refreshes its whole state after them.</li> </ol> <p>Let's imagine that the User does the following steps:</p> <ol> <li>Select div (SelectWidgetCommand sent)</li> <li>Move div to x=10. (ChangePositionCommand sent)</li> <li>Move div to x=100. (ChangePositionCommand sent)</li> </ol> <p>The problem is that the commands are fire-and-forget, and the User may receive the WidgetSelectedEvent during Step 3 while the ChangePositionCommand has not yet been handled. So the User will receive the old x position and the div will move to the old position.</p> <p>What is the best practice to handle this kind of issue?</p> <p>What we are now doing is splitting the DivViewModel into two ViewModels: SelectionViewModel and PositionViewModel. Each ViewModel has its own query to refresh and different events to handle. We are also considering debouncing and a rolling buffer for command handling.</p>
2016-12-06 11:58:37.523000+00:00
2016-12-06 16:19:17.770000+00:00
null
design-patterns|mvvm|architecture|real-time|cqrs
['https://skillsmatter.com/skillscasts/1980-cqrs-not-just-for-server-systems', 'https://www.rfc-editor.org/rfc/rfc7232#section-3', 'http://udidahan.com/2010/08/31/race-conditions-dont-exist/', 'https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type', 'https://arxiv.org/abs/1608.03960', 'https://news.ycombinator.com/item?id=12303100']
6
33,748,246
<p>No, the interpretation is not correct (unless my interpretation of your interpretation is incorrect). <code>x</code> is an input, and it is fixed in advance, so <code>x(t+1)</code> does not depend on the predicted value for timestep <code>t</code>.</p> <p>In that paragraph he discusses a particular case of an RNN, where <code>y(t)</code> is a prediction of <code>x(t + 1)</code>; in other words, the network is trying to predict the next symbol given all the previous symbols.</p> <p>My understanding of the sentence you are referring to is that since <code>y</code> is the result of a softmax, <code>y</code> has a limited range of values it can assume, and therefore <code>x</code> itself has to be limited to the same range of values, hence <code>x</code> has to be a "symbol or bounded integer". Otherwise, if <code>x</code>, for instance, is a double, <code>y</code> cannot predict it, since the output of a softmax is a discrete value.</p> <p>UPDATE: as a matter of fact, Bengio has a great paper: <a href="http://arxiv.org/abs/1506.03099" rel="nofollow">http://arxiv.org/abs/1506.03099</a>, in which he actually suggests that on some iterations we use <code>y(t)</code> instead of <code>x(t+1)</code> as input when predicting <code>y(t+1)</code> during training (which is along the lines of your understanding in your question).</p>
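<p>A hedged sketch of that training trick (scheduled sampling) from the linked paper; the decay constant <code>k</code> and the variable names are illustrative:</p> <pre><code>import math
import random

def next_input(x_truth, y_pred, step, k=1000.0):
    """With probability eps feed the ground truth x(t+1); otherwise feed back
    the model's own prediction y(t). eps decays over training steps using the
    inverse-sigmoid schedule from the paper."""
    eps = k / (k + math.exp(step / k))
    return x_truth if random.random() &lt; eps else y_pred
</code></pre>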
2015-11-17 02:42:21.443000+00:00
2015-11-17 17:49:02.153000+00:00
2015-11-17 17:49:02.153000+00:00
null
33,748,166
<p>This question concerns with the <a href="http://goodfeli.github.io/dlbook/contents/rnn.html#pf2" rel="nofollow" title="Chapter on RNNs">chapter on RNNs</a> in the Deep learning look by Prof Bengio. In section 10.2.2 on page 336 in the last paragraph, the book talks about "...because the outputs are the result of a softmax, it must be that the input sequence is a sequence of symbols...". </p> <p>This seems to suggest that the output is treated as a probability distribution over the possible 'bits' and the next input x(t+1) is sampled using this joint probability distribution over the output bits. Is this interpretation correct? </p>
2015-11-17 02:31:17.720000+00:00
2015-11-17 17:49:02.153000+00:00
null
probability|deep-learning
['http://arxiv.org/abs/1506.03099']
1
55,460,245
<p>If you only want the highest likelihood parameter values, then you want the Maximum A Posteriori (MAP) estimate, which can be obtained using <code>pymc3.find_MAP()</code> (see <a href="https://github.com/pymc-devs/pymc3/blob/master/pymc3/tuning/starting.py" rel="nofollow noreferrer"><code>starting.py</code></a> for method details). If you expect a multimodal posterior, then you will likely need to run this repeatedly with different initializations and select the one that obtains the largest <code>logp</code> value, but that still only increases the chances of finding the global optimum, though it cannot guarantee it.</p> <p>It should be noted that at high parameter dimensions, the MAP estimate is usually not part of the typical set, i.e., it is not representative of typical parameter values that would lead to the observed data. Michael Betancourt discusses this in <a href="https://arxiv.org/abs/1701.02434" rel="nofollow noreferrer">A Conceptual Introduction to Hamiltonian Monte Carlo</a>. The fully Bayesian approach is to use <a href="https://docs.pymc.io/api/inference.html#pymc3.sampling.sample_posterior_predictive" rel="nofollow noreferrer">posterior predictive distributions</a>, which effectively average over all the high-likelihood parameter configurations rather than using a single point estimate for parameters.</p>
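<p>A hedged sketch of the multi-start idea (the start-dict key <code>'x'</code> is hypothetical; it must match one of your model's free variable names, which you can inspect via <code>model.free_RVs</code> - transformed variables get suffixed names):</p> <pre><code>import numpy as np
import pymc3 as pm

results = []
with model:                              # `model` is your pymc3 model
    for seed in range(10):
        rng = np.random.default_rng(seed)
        start = {'x': rng.normal()}      # hypothetical free variable 'x'
        map_est = pm.find_MAP(start=start)
        results.append((model.logp(map_est), map_est))
best_logp, best_map = max(results, key=lambda r: r[0])
</code></pre>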
2019-04-01 17:05:33.410000+00:00
2019-04-01 17:05:33.410000+00:00
null
null
55,365,787
<p>The code is in PyMC3, but this is a general problem. I want to find which matrix (combination of variables) gives me the highest probability. Taking the mean of the trace of each element is meaningless because they depend on each other.</p> <p>Here is a simple case; the code uses a vector rather than a matrix for simplicity. The goal is to find a vector of length 2, where the each value is between 0 and 1, so that the sum is 1.</p> <pre><code>import numpy as np import theano import theano.tensor as tt import pymc3 as mc # define a theano Op for our likelihood function class LogLike_Matrix(tt.Op): itypes = [tt.dvector] # expects a vector of parameter values when called otypes = [tt.dscalar] # outputs a single scalar value (the log likelihood) def __init__(self, loglike): self.likelihood = loglike # the log-p function def perform(self, node, inputs, outputs): # the method that is used when calling the Op theta, = inputs # this will contain my variables # call the log-likelihood function logl = self.likelihood(theta) outputs[0][0] = np.array(logl) # output the log-likelihood def logLikelihood_Matrix(data): """ We want sum(data) = 1 """ p = 1-np.abs(np.sum(data)-1) return np.log(p) logl_matrix = LogLike_Matrix(logLikelihood_Matrix) # use PyMC3 to sampler from log-likelihood with mc.Model(): """ Data will be sampled randomly with uniform distribution because the log-p doesn't work on it """ data_matrix = mc.Uniform('data_matrix', shape=(2), lower=0.0, upper=1.0) # convert m and c to a tensor vector theta = tt.as_tensor_variable(data_matrix) # use a DensityDist (use a lamdba function to "call" the Op) mc.DensityDist('likelihood_matrix', lambda v: logl_matrix(v), observed={'v': theta}) trace_matrix = mc.sample(5000, tune=100, discard_tuned_samples=True) </code></pre>
2019-03-26 20:34:15.780000+00:00
2019-04-01 17:05:33.410000+00:00
null
pymc3|pymc|mcmc
['https://github.com/pymc-devs/pymc3/blob/master/pymc3/tuning/starting.py', 'https://arxiv.org/abs/1701.02434', 'https://docs.pymc.io/api/inference.html#pymc3.sampling.sample_posterior_predictive']
3
62,343,806
<p>You should use a different algorithm. There has been a lot of research on how to speed up feature selection. RFE's computational complexity is prohibitive for a large set of features. You should consider using approaches for high-dimensional data, such as <strong>FBED</strong> (Forward-Backward-Early-Dropping), <strong>OMP</strong> (Orthogonal-Matching-Pursuit), <strong>SES</strong> (Statistically-Equivalent-Signatures), <strong>LASSO</strong>, etc.</p> <p>FBED <a href="https://arxiv.org/abs/1705.10770" rel="nofollow noreferrer">https://arxiv.org/abs/1705.10770</a></p> <p>OMP <a href="https://arxiv.org/abs/2004.00281" rel="nofollow noreferrer">https://arxiv.org/abs/2004.00281</a></p> <p>SES <a href="https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-018-2023-7" rel="nofollow noreferrer">https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-018-2023-7</a></p> <p>LASSO <a href="https://ieeexplore.ieee.org/document/7887916" rel="nofollow noreferrer">https://ieeexplore.ieee.org/document/7887916</a></p>
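<p>A hedged scikit-learn sketch of the LASSO and OMP routes (illustrative only; the synthetic data stands in for your matrix, and since the question uses a classifier you may prefer an L1-penalized <code>LogisticRegression</code> over <code>LassoCV</code>):</p> <pre><code>import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, OrthogonalMatchingPursuit

X, y = make_regression(n_samples=200, n_features=5000,
                       n_informative=20, random_state=0)

# Embedded (LASSO) selection: one fit instead of RFE's thousands of refits.
lasso = LassoCV(cv=3).fit(X, y)
X_selected = SelectFromModel(lasso, prefit=True).transform(X)

# Greedy OMP: pick a fixed budget of features, then rank by coefficient size.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=100).fit(X, y)
ranking = np.argsort(-np.abs(omp.coef_))
</code></pre>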
2020-06-12 11:56:32.280000+00:00
2020-06-12 11:56:32.280000+00:00
null
null
54,816,709
<p>I am trying to obtain a ranking of features from a rather large set of features (~6,100,000) in sklearn. Here's the code I have thus far:</p> <pre><code>train, test = train_test_split(rows, test_size=0.2, random_state=310) train, val = train_test_split(train, test_size=0.25, random_state=310) train_target = [i[-1] for i in train] svc = SVC(verbose=5, random_state=310, kernel='linear') svc.fit([i[1:-1] for i in train], train_target) model=svc rfe = RFE(model, verbose=5, step=1, n_features_to_select=1) rfe.fit([i[1:-1] for i in train], train_target) rank = rfe.ranking_ </code></pre> <p>Each training of the model takes ~10 minutes. for 6,100,000 features that means <em>decades</em> of computation time. Actually 115.9 years. Any better way to do this? I know rfe requires the results of the last elimination, but is there any way I can speed this through parallelizing up or obtain a ranking differently? I can use thousands of nodes (Thanks company I work for!) so any kind of parallelism would be awesome!</p> <p>I do have the list coefficients of the linear SVM's hyperplane. Ordering those is easy enough, but the paper which this is being done for is going to be reviewed by a Stanford data science professor and he has a strong reservation against using non-ranking algorithms for ranking....and non-Stanford alums like me. :P</p> <p>I can take a larger <code>step</code> but that would remove the ability to actually rank all features. rather I would rank groups of 100,000 or 10,000 features which isn't super helpful.</p> <p>EDIT: nSV might be useful so I've included it below:</p> <pre><code>obj = -163.983323, rho = -0.999801 nSV = 182, nBSV = 148 Total nSV = 182 </code></pre>
2019-02-21 21:45:44.717000+00:00
2020-06-12 11:56:32.280000+00:00
2019-02-21 22:31:15.463000+00:00
machine-learning|scikit-learn|svm|rfe
['https://arxiv.org/abs/1705.10770', 'https://arxiv.org/abs/2004.00281', 'https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-018-2023-7', 'https://ieeexplore.ieee.org/document/7887916']
4
61,487,856
<p>You can think of word 'vectors', numerically, as just points. It's not really significant that they all 'start' at the origin (<code>[0.0, 0.0, 0.0, ..., 0.0]</code>). </p> <p>The 'center' of any such vector is just its midpoint, which is also a vector of the same 'directionality' with half the magnitude. <strong>Often but not always</strong>, word-vectors are only compared in terms of raw direction, <strong>not</strong> magnitude, via 'cosine similarity', which is essentially an angle-of-difference calculation that's oblivious to length/magnitude. (So, <code>cosine_similarity(a, b)</code> will be the same as <code>cosine_similarity(a/2, b)</code> or <code>cosine_similarity(a, b*4)</code>, etc.) So this 'center'/half-length instance you've asked about is usually less meaningful with word-vectors than in other vector models. And in general, as long as you're using cosine-similarity as your main method of comparing vectors, moving them closer to the origin-point is irrelevant. So, in that framing, the origin point doesn't really have a distinct meaning.</p> <p>Caveat with regard to magnitudes: the actual raw vectors created by word2vec training do in fact have a variety of magnitudes. Some have observed that these magnitudes sometimes correlate with interesting word differences – for example, highly polysemous words (with many alternate meanings) can often be lower-magnitude than words with one dominant meaning – as the need to "do something useful" in alternate contexts tugs the vector between extremes during training, leaving it more "in the middle". And while word-to-word comparisons usually ignore these magnitudes for the purely angular cosine-similarity, sometimes downstream uses, such as text classification, may do incrementally better by keeping the raw magnitudes. </p> <p>Caveat with regard to the origin point: At least one paper, "<a href="https://arxiv.org/abs/1702.01417v2" rel="nofollow noreferrer">All-but-the-Top: Simple and Effective Postprocessing for Word Representations</a>" by Mu, Bhat, &amp; Viswanath, has observed that often the 'average' of all word-vectors isn't the origin-point, but significantly biased in one direction – which (in my stylized understanding) sort-of leaves the whole space imbalanced, in terms of whether it's using 'all angles' to represent contrasts-in-meaning. (Also, in my experiments, the extent of this imbalance seems a function of how many <code>negative</code> examples are used in negative-sampling.) They found that postprocessing the vectors to recenter them improved performance on some tasks, but I've not seen many other projects adopt this as a standard step. (They also suggest some other postprocessing transformations to essentially 'increase contrast in the most valuable dimensions'.)</p> <p>Regarding your "IIUC", yes, words are given starting vectors - <strong>but</strong> these are random, and then constantly adjusted via backprop nudges, repeatedly after trying every training example in turn, to make those 'input word' vectors ever-so-slightly better as inputs to the neural network that's trying to predict nearby 'target/center/output' words. Both the network's 'internal'/'hidden' weights and the <code>input vectors</code> themselves are adjusted; the input vectors are essentially 'projection weights' – from a one-hot representation of a single vocabulary word 'to' the M different internal hidden-layer nodes. That is, each 'word vector' is essentially a word-specific subset of the neural network's internal weights. </p>
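<p>The magnitude-obliviousness claim is easy to check numerically (a tiny sketch, not part of the original answer):</p> <pre><code>import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
a, b = rng.normal(size=50), rng.normal(size=50)
# rescaling either vector leaves the similarity unchanged
print(np.isclose(cosine_similarity(a, b), cosine_similarity(a / 2, b)))  # True
print(np.isclose(cosine_similarity(a, b), cosine_similarity(a, b * 4)))  # True
</code></pre>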
2020-04-28 18:56:42.357000+00:00
2020-04-28 18:56:42.357000+00:00
null
null
61,481,721
<p>I'm studying NLP and wrapping my head around the steps of passing through a Multi-Layer Perceptron. Since vectors are magnitude and direction in a space, I'm curious what the center of a word vector represents. In a very simple vector, my word might be 21, -5. Does 0,0 represent anything? If not, could it represent something after training a model? </p> <p>If I understand correctly, a word that has never been seen before will be given a numerical identity and a vector of M dimensions. This vector then passes into the first layer, which has as many nodes as there are dimensions, so in this case M nodes. Through backpropagation the weights are changed so that similar words "group" together in vector space. (So that means the word vectors themselves are never modified from their initial random value, right?) Please correct me if I've made wrong assumptions here. I would just appreciate some insight. </p>
2020-04-28 13:44:05.323000+00:00
2020-04-30 10:49:11.810000+00:00
null
tensorflow|nlp|vectorization|word2vec|dl4j
['https://arxiv.org/abs/1702.01417v2']
1
49,226,122
<p>If I understand David Eppstein's<sup>1 </sup> <a href="https://arxiv.org/abs/0908.3916" rel="nofollow noreferrer">explanation (see section 3)</a> then a solution can be found in a maximal independent set in the bipartite intersection graph of axis-aligned diagonals connecting one concave vertex to another. (This would be 2d. I'm not sure about 3d, although perhaps it involves evaluating hyperplanes instead of lines?)</p> <p>In your example, there is only one such diagonal:</p> <pre><code> ________ | | |_x....x_| |____| </code></pre> <p>The two <code>x</code>s represent connected concave vertices. The maximal independent set of edges here contains only one edge, splitting the polygon in two.</p> <p>Here's another with only one axis-parallel edge connecting two concave vertices, <code>x</code> and <code>x</code>. This polygon, though, also has two concave vertices, <code>a</code> and <code>b</code>, that do not have an opposite, axis-parallel match. In that case, it seems to me, each concave vertex without a partner would split the polygon it's on in two (either vertically or horizontally):</p> <pre><code> ____________ | | | |x | . | | . |a |___ . | b| . | | .___| |________|x </code></pre> <p>results in 4 rectangles:</p> <pre><code> ____________ | | | |x | . | | ..|a |___.......... | b| . | | .___| |________|x </code></pre> <p>Here's one with two intersecting axis-parallel diagonals, each connecting two concave vertices, <code>(x,x)</code> and <code>(y,y)</code>:</p> <pre><code> ____________ | | | |x_ | . | | . | |___ . . . .z. .|y y| . | | .____| |________|x </code></pre> <p>In this case, as I understand, the intersection graph again contains only one independent set:</p> <pre><code>(y,z) (z,y) (x,z) (z,x) </code></pre> <p>yielding 4 rectangles as a solution.</p> <p>Since I'm not completely sure how the "intersection graph" in the paper is defined, I would welcome any clarifying comments.</p> <p><sub>1. Graph-Theoretic Solutions to Computational Geometry Problems, David Eppstein (Submitted on 26 Aug 2009)</sub></p>
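<p>For the graph step, a hedged sketch with NetworkX (the diagonal labels are hypothetical; by König's theorem, a maximum independent set in a bipartite graph is the complement of a minimum vertex cover, which NetworkX derives from a maximum matching):</p> <pre><code>import networkx as nx
from networkx.algorithms import bipartite

# Horizontal diagonals on one side, vertical diagonals on the other;
# an edge means two diagonals intersect (third example: x--x crosses y--y).
G = nx.Graph()
horizontal, vertical = {'x-x'}, {'y-y'}
G.add_nodes_from(horizontal, bipartite=0)
G.add_nodes_from(vertical, bipartite=1)
G.add_edge('x-x', 'y-y')

matching = bipartite.maximum_matching(G, top_nodes=horizontal)
cover = bipartite.to_vertex_cover(G, matching, top_nodes=horizontal)
independent_set = set(G) - cover    # the diagonals to cut along first
print(independent_set)              # one of {'x-x'} or {'y-y'}
</code></pre>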
2018-03-11 23:14:54.913000+00:00
2018-03-13 15:18:00.550000+00:00
2018-03-13 15:18:00.550000+00:00
null
49,217,955
<p>Let a 3D grid, just like a checkerboard, with an extra dimension. Now let's say that I have a certain amount of cubes into that grid, each cube occupying 1x1x1 cells. Let's say that each of these cubes is an item.</p> <p>What I would like to do is replace/combine these cubes into larger boxes occupying any number of cells on the X, Y and Z axes, so that the resulting number of boxes is as small as possible while preserving the overall "appearance".</p> <p>It's probably unclear so I'll give a 2D example. Say I have a 2D grid containing several squares occupying 1x1 cells. A letter represents the cells occupied by a given item, each item having a different letter from the other ones. In the first example we have 10 different items, each of them occupying 1x1x1 cells.</p> <pre><code>+---+---+---+---+---+---+ | | | | | | | +---+---+---+---+---+---+ | | A | B | C | D | | +---+---+---+---+---+---+ | | E | F | G | H | | +---+---+---+---+---+---+ | | | K | L | | | +---+---+---+---+---+---+ | | | | | | | +---+---+---+---+---+---+ </code></pre> <p>That's my input data. I could now optimize it, i.e reduce the number of items while still occupying the same cells, by multiple possible ways, one of which could be :</p> <pre><code>+---+---+---+---+---+---+ | | | | | | | +---+---+---+---+---+---+ | | A | B | B | C | | +---+---+---+---+---+---+ | | A | B | B | C | | +---+---+---+---+---+---+ | | | B | B | | | +---+---+---+---+---+---+ | | | | | | | +---+---+---+---+---+---+ </code></pre> <p>Here, instead of 10 items, I only have 3 (i.e A, B and C). However it can be optimized even more :</p> <pre><code>+---+---+---+---+---+---+ | | | | | | | +---+---+---+---+---+---+ | | A | A | A | A | | +---+---+---+---+---+---+ | | A | A | A | A | | +---+---+---+---+---+---+ | | | B | B | | | +---+---+---+---+---+---+ | | | | | | | +---+---+---+---+---+---+ </code></pre> <p>Here I only have two items, A and B. This is as optimized as this can be.</p> <p>What I am looking for is an algorithm capable of finding the best item sizes and arrangement, or at least a reasonably good one, so that I have as few items as possible while occupying the same cells, and in 3D !</p> <p>Is there such an algorithm ? I'm sure there are some domains where that kind of algorithm would be useful, and I need it for a video game. Thanks !!</p>
2018-03-11 08:07:51.290000+00:00
2018-03-13 15:18:00.550000+00:00
null
algorithm|math|optimization|grid|space
['https://arxiv.org/abs/0908.3916']
1
31,248,809
<p>Take a look at <a href="http://arxiv.org/pdf/1410.5401v2.pdf" rel="nofollow">Neural Turing Machines</a> by Alex Graves, Greg Wayne and Ivo Danihelka from Google DeepMind.</p> <p>Abstract:</p> <blockquote> <p>We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that <strong>Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.</strong></p> </blockquote>
2015-07-06 14:43:00.390000+00:00
2015-07-06 14:43:00.390000+00:00
null
null
31,231,284
<p>You might have to read this twice so that my idea becomes clear. Please be patient.<br> I'm looking for existing work on exhaustive search for algorithms for given problems. Exhaustive search is also known as brute-force search, or simply brute force.</p> <p>Other exhaustive search algorithms search for a solution to a given problem. Usually, a solution to such a problem is some data which fulfills some requirements.</p> <p><strong>Exhaustive search example</strong>:<br> You want the solution to a KNAPSACK problem, that is, the objects which can be packed into the bag such that there is no other combination of objects which fits into the bag and sums up to a bigger value than your result combination.<br> You can solve this by going over all possible combinations (exhaustive) and searching for the one which fits into the bag and which is the most valuable of those combinations.</p> <p>What I'm looking for is just a special case of exhaustive search: exhaustive search which searches for an algorithm as a solution. So in the end, <strong>I'm looking for an algorithm which searches for an algorithm which solves some given problem.</strong></p> <p>You might say: Go google it. Well, yes, I've done that. The difficulty I'm facing here is that googling for "algorithm which searches for another algorithm" returns the same results as "another search algorithm". Obviously, this has too many unwanted results, so I'm stuck here.</p> <p>How can I find existing work related to exhaustive search for algorithms?<br> More specifically: Has any software been written for this? Can you point me to any links or algorithm names/better keywords in relation to this topic?</p> <p><strong>Update</strong>:<br> The purpose for which I'm looking for such an algorithm search is solving problems for which no good heuristics are known, e.g. proving algorithms, or trying to find other solution algorithms for problems which might or might not be NP-complete (thus proving that the problem is not NP-complete if a faster algorithm can be found; without any human interaction).</p>
2015-07-05 14:18:04.043000+00:00
2017-06-20 11:18:48.753000+00:00
2017-06-20 11:18:48.753000+00:00
algorithm
['http://arxiv.org/pdf/1410.5401v2.pdf']
1
31,248,274
<p>You seem to be looking for "program synthesis", which can work in some limited instances, provided you can correctly and formally specify what your algorithm is supposed to do (without giving an implementation). Synthesis is an effective approach to build gate-level circuits, but applying synthesis to software is so far still more of a research avenue than practical.</p> <p>Still, here are a couple of references on the subject:</p> <p>Program sketching by Armando Solar-Lezama (some of the most advanced work in the area, in my opinion, and it has a tool).</p> <p>Check out the Microsoft Research page on the topic; they think it's a hot topic: <a href="http://research.microsoft.com/en-us/um/people/sumitg/pubs/synthesis.html" rel="nofollow">http://research.microsoft.com/en-us/um/people/sumitg/pubs/synthesis.html</a></p> <p>Some other similar work I've seen: Model Checking-Based Genetic Programming with an Application to Mutual Exclusion (Katz &amp; Peled @ TACAS '08); they have a more recent version on arXiv: <a href="http://arxiv.org/abs/1402.6785" rel="nofollow">http://arxiv.org/abs/1402.6785</a></p> <p>Essentially, the search space is explored (exhaustively) using a model checker.</p>
2015-07-06 14:19:06.133000+00:00
2015-07-06 15:21:57.487000+00:00
2015-07-06 15:21:57.487000+00:00
null
31,231,284
<p>You might have to read this twice so that my idea becomes clear. Please be patient.<br> I'm looking for existing work on exhaustive search for algorithms for given problems. Exhaustive search is also known as brute force search, or simply brute force.</p> <p>Other exhaustive search algorithms search for a solution for a given problem. Usually, a solution for such a problem is some data which fulfills some requirements.</p> <p><strong>Exhaustive search example</strong>:<br> You want the solution for a KNAPSACK problem. That is, the objects which can be packed into the bag so that there is no other combination of objects which fit into the bag and which would sum up to a bigger value than your result combination.<br> You can solve this by going over all possible combinations (exhaustive) and searching for the one which fits into the bag and which is the most valuable one of those combinations.</p> <p>What I'm looking for is just a special case of exhaustive search: exhaustive search which searches for an algorithm as a solution. So in the end, <strong>I'm looking for an algorithm which searches for an algorithm which solves some given problem.</strong></p> <p>You might say: Go google it. Well, yes I've done that. The difficulty I'm facing here is that googling for "algorithm which searches for another algorithm" returns much the same results as "another search algorithm". Obviously, this has too many unwanted results, so I'm stuck here.</p> <p>How can I find existing work related to exhaustive search for algorithms?<br> More specifically: Has any software been written for this? Can you point me to any links or algorithm names/better keywords in relation to this topic?</p> <p><strong>Update</strong>:<br> The purpose for which I'm looking for such an algorithm search is solving problems for which no good heuristics are known, e.g. proving algorithms or trying to find other solution algorithms for problems which might or might not be NP-complete problems (thus proving that the problem is not NP-complete if a faster algorithm can be found; without any human interaction).</p>
2015-07-05 14:18:04.043000+00:00
2017-06-20 11:18:48.753000+00:00
2017-06-20 11:18:48.753000+00:00
algorithm
['http://research.microsoft.com/en-us/um/people/sumitg/pubs/synthesis.html', 'http://arxiv.org/abs/1402.6785']
2
4,096,756
<p>You may look into compressed bitmaps. A common strategy is to use word-aligned run-length encoding. </p> <p>C++ implementation:</p> <p><a href="https://github.com/lemire/EWAHBoolArray" rel="nofollow">https://github.com/lemire/EWAHBoolArray</a></p> <p>Java implementation:</p> <p><a href="https://github.com/lemire/javaewah" rel="nofollow">https://github.com/lemire/javaewah</a></p> <p>Reference: </p> <p>Daniel Lemire, Owen Kaser, Kamel Aouiche, Sorting improves word-aligned bitmap indexes. Data &amp; Knowledge Engineering 69 (1), pages 3-28, 2010. <a href="http://arxiv.org/abs/0901.3751" rel="nofollow">http://arxiv.org/abs/0901.3751</a></p>
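<p>To illustrate the idea behind word-aligned run-length encoding, here is a deliberately simplified Python sketch (a toy of my own, not the actual EWAH format, which packs the run markers into header words):</p> <pre><code>def rle_words(words):
    """Toy run-length encoder over machine words: collapse runs of
    all-zero words into count markers, keep other words literal."""
    out = []
    zeros = 0
    for w in words:
        if w == 0:
            zeros += 1
        else:
            if zeros:
                out.append(('zeros', zeros))
                zeros = 0
            out.append(('literal', w))
    if zeros:
        out.append(('zeros', zeros))
    return out

print(rle_words([0, 0, 0, 0b1011, 0, 0b1]))
# [('zeros', 3), ('literal', 11), ('zeros', 1), ('literal', 1)]
</code></pre>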
2010-11-04 12:44:49.010000+00:00
2010-11-04 12:44:49.010000+00:00
null
null
4,073,952
<p>I've got a special need and the most important concerns are:</p> <ul> <li>in-memory</li> <li>very low memory footprint</li> <li>speed</li> </ul> <p>Here's my "problem": I need to store, in-memory, a huge number of very sparse bit arrays. Those bitsets are "append only" and are to be used mostly for intersections. By huge, I mean as high as 200 000 bit arrays.</p> <p>The range shall be between [0...16 000 000] for each bitset.</p> <p>I ran some pre-tests with "only" 10 673 bit arrays containing some actual data I've got and got the following results:</p> <pre><code> 1% of the bit arrays ( 106 bit arrays) Hamming weight: at most 1 bit set 5% of the bit arrays ( 534 bit arrays) Hamming weight: at most 4 bits set 10% of the bit arrays ( 1068 bit arrays) Hamming weight: at most 8 bits set 15% of the bit arrays ( 1603 bit arrays) Hamming weight: at most 12 bits set 20% of the bit arrays ( 2137 bit arrays) Hamming weight: at most 17 bits set 25% of the bit arrays ( 2671 bit arrays) Hamming weight: at most 22 bits set 30% of the bit arrays ( 3206 bit arrays) Hamming weight: at most 28 bits set 35% of the bit arrays ( 3740 bit arrays) Hamming weight: at most 35 bits set 40% of the bit arrays ( 4274 bit arrays) Hamming weight: at most 44 bits set 45% of the bit arrays ( 4809 bit arrays) Hamming weight: at most 55 bits set 50% of the bit arrays ( 5343 bit arrays) Hamming weight: at most 67 bits set 55% of the bit arrays ( 5877 bit arrays) Hamming weight: at most 83 bits set 60% of the bit arrays ( 6412 bit arrays) Hamming weight: at most 103 bits set 65% of the bit arrays ( 6946 bit arrays) Hamming weight: at most 128 bits set 70% of the bit arrays ( 7480 bit arrays) Hamming weight: at most 161 bits set 75% of the bit arrays ( 8015 bit arrays) Hamming weight: at most 206 bits set 80% of the bit arrays ( 8549 bit arrays) Hamming weight: at most 275 bits set 85% of the bit arrays ( 9083 bit arrays) Hamming weight: at most 395 bits set 90% of the bit arrays ( 9618 bit arrays) Hamming weight: at most 640 bits set 95% of the bit arrays (10152 bit arrays) Hamming weight: at most 1453 bits set 96% of the bit arrays (10259 bit arrays) Hamming weight: at most 1843 bits set 97% of the bit arrays (10366 bit arrays) Hamming weight: at most 2601 bits set 98% of the bit arrays (10473 bit arrays) Hamming weight: at most 3544 bits set 99% of the bit arrays (10580 bit arrays) Hamming weight: at most 4992 bits set 100% of the bit arrays (10687 bit arrays) Hamming weight: at most 53153 bits set </code></pre> <p>Given the numbers involved, I obviously need to use compressed bit arrays and that is not an issue: it shall stay easy to deal with, since the bit arrays are "append only".</p> <p>The bit array bits that are on are kinda grouped, but not totally. So you'll tend to have several bits on in the same area (but usually not one after another, making RLE kinda not great for bits that are on).</p> <p>My question is what kind of compression to use?</p> <p>Now I don't know if I should put my first approach here or in an answer to my own question.</p>
<p>Basically I imagined a "worst case" scenario using a very dumb encoding:</p> <ul> <li><p>1 bit: if on, the following 5 bits determine how many bits are needed to compute the 'skip'; if off, optimization: the following 5 bits determine how many bits are to be taken literally (that is 'on' or 'off', without skipping) [this would only be switched to when determined to be more efficient than the other representation, so when it kicks in, it shall always be an optimization (size-wise)]</p></li> <li><p>5 bits: how many bits we can skip before the next bit on</p></li> <li><p>x bits: skip</p></li> </ul> <p>Here's an example: a bit array has 3 bits set, the first bit being at 3 098 137, the second at 3 098 141 and the third at 3 098 143.</p> <pre><code> +-- now we won't skip | | +-- 3 because we need 3 bits to store "6" (from 3 098 138 to 3 098 143) | | +--- 3 098 141 is on 22 3 098 137 | 3 | +- 3 098 143 is on 1 10110 1011110100011000011001 0 00011 000101 etc. </code></pre> <p>The first bit on tells we're going to skip bits; the next 5 bits (always 5) tell how many bits we need to encode the skip; 22 bits tell to skip to 3 098 137; one bit off tells now we're not skipping bits; the next 5 bits (always 5) tell how many bits we'll read "as is"; 6 bits: off, off, off, on, off, on, meaning 3 098 141 and 3 098 143 are on; etc.</p> <p>Given the amazing sparsity of these bit arrays, this seems quite size-efficient. </p> <p>So using that encoding, I took my sample data and computed a "worst case" scenario (I haven't written the algo yet, I'd rather have a few inputs from here first): basically I considered that not only the "size optimization" would never kick in and, also, that the 5 bits would always be set to their maximum value (24 bits), which of course cannot happen.</p> <p>I did it just to have a very crude approximation of what the "worst of the worst" case could be.</p> <p>I was very pleasantly surprised:</p> <pre><code>Worst case scenario: 108 913 290 bits needed for the 10 687 very sparse bit arrays 12.9 MB (13 295 KB) </code></pre> <p>The data being actual data and all the data being similar, I know that, if worst comes to worst, I could store my 200 000 bit arrays in about 240 MB, which is fine.</p> <p>I'm pretty sure the actual encoding will come to way less than that, but as I haven't actually written it yet, I can only (very easily) compute the "worst case", which is why I only show that one. </p> <p>Any hints / ideas as to how to make this more size-efficient (remembering these are super-sparse bit arrays, that there shall be hundreds of thousands of them, that they must be in memory, and that they shall be "append only")?</p> <p><strong>About my 'append-only' case</strong></p> <p>Basically I've got one growing <em>"expanse"</em> (the range, but <em>"expanse"</em> is the actual term as I understand it) and a lot of bit arrays that have a few bits set. When the range goes from, say, 0 to 1 000 000, all the bit arrays go from 0 to 1 000 000 too. When the range grows to 1 000 001, then all the bit arrays are growing too, all by one bit. But most of these bit arrays will have a '0' appended at their end, while about 4 to 8 of the bit arrays will have a '1' appended at their end.
However I cannot predict in advance which of the bit arrays will have a 0 or a 1 appended.</p> <p>So I've got a lot of bit arrays that have all the same size, that are all very sparse (&lt; 0.5% of their bits set) and that are all "growing" as the range grows (so they're all always growing at the same rate).</p> <hr> <p><a href="https://stackoverflow.com/a/4074050">Judy arrays</a> are great. But I read about them a few years ago and that stuff was "above my head". Judy arrays are a C-only 20KLOC lib and I'm definitely not re-implementing that. But they're amazing.</p> <p>So I guess I need to add that I'd like all this to stay relatively simple, which is not that far-fetched given the special "append only" property of my very sparse bit arrays.</p>
2010-11-01 23:26:17.620000+00:00
2015-02-13 18:45:42.263000+00:00
2017-05-23 12:02:14.493000+00:00
algorithm|compression|in-memory|bitarray
['https://github.com/lemire/EWAHBoolArray', 'https://github.com/lemire/javaewah', 'http://arxiv.org/abs/0901.3751']
3
59,110,068
<p>Given a set of points, the following two statements about an individual point p are equivalent:</p> <ul> <li>There is a line through p such that every point in the set lies on the same side of the line (or on the line),</li> <li>p is on the boundary of the set's <a href="https://en.wikipedia.org/wiki/Convex_hull" rel="nofollow noreferrer">convex hull</a>.</li> </ul> <p>This is true because if p is in the interior of the convex hull, then any line through p divides the convex hull into two parts. If one side of the line has no points in the set, then the other side is a smaller convex region which contains every point. This would contradict the definition of the convex hull, which is the smallest convex set containing every point.</p> <p>So the set of points which satisfy the property about having a line which doesn't divide the set in two, is precisely the same set of points that are on the boundary of the convex hull. The latter is what a convex hull algorithm returns, so logically, any algorithm which solves your problem for each point in the set <em>is</em> a convex hull algorithm.</p> <p>The only subtle difference I can think of is that standard convex hull algorithms usually also return the boundary points in a particular order, whereas you don't need them in any particular order. But I don't think there is a more efficient convex hull algorithm when the order doesn't matter. The running time is O(n log n) in the worst case, which gives you an average query time per point of at most O(log n).</p> <p>That is asymptotically optimal for this problem if you want to test each point in the set, since computing even the unordered convex hull takes at least O(n log n) in the worst case, according to <a href="https://arxiv.org/abs/1812.01332" rel="nofollow noreferrer">an arXiv paper by Herman Haverkort</a>. There's evidence that this is optimal even for just finding the cardinality of the convex hull (see <a href="https://www.sciencedirect.com/science/article/pii/0166218X82900658" rel="nofollow noreferrer">this paper by David Avis</a>).</p>
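<p>As a small illustration of the classification described above, here is a sketch using SciPy (a choice of mine, any convex hull routine would do; note that in degenerate cases a point lying exactly on a hull edge, without being a vertex, would also admit such a line and would need extra handling):</p> <pre><code>import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1010)
pts = rng.normal(size=(15, 2))     # your (N, 2) point set

hull = ConvexHull(pts)
has_line = np.zeros(len(pts), dtype=bool)
has_line[hull.vertices] = True     # True iff a non-dividing line exists through the point

print(np.nonzero(has_line)[0])     # indices of the boundary points
</code></pre>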
2019-11-29 19:25:44.567000+00:00
2019-11-30 21:00:58.973000+00:00
2019-11-30 21:00:58.973000+00:00
null
59,109,603
<p>Apologies for the vague title: I can't find a proper name for the theory I am looking for (that's why I am asking the question), so I'm going to explain it with an example, and I hope someone can point me in the right direction.</p> <p>Suppose you have a set of points in 2D.<br> The following R code:</p> <pre><code># make a random set of N points in 2D space as a numerical matrix set.seed(1010) d = 2 N = 15 ps &lt;- matrix(rnorm(d*N), , d) # center the points (subtract the mean of each coordinate) pss &lt;- scale(ps,scale=F) # represent the points in a 2D plot, with the origin (the new mean) in red plot(pss) text(pss,label=1:N,pos=4) points(0,0,col=2,pch=16) text(0,0,label=0) abline(v=0) abline(h=0) </code></pre> <p>should make a plot like:<br> <a href="https://i.stack.imgur.com/WnAVG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WnAVG.png" alt="enter image description here"></a></p> <p>Consider point 7. Intuitively one can see that there are several possible lines passing through point 7, which 'leave' all other points on 'one side' of the line (i.e. 'segregate' them in the half-plane defined by the line).</p> <p>Consider instead point 6. There can never be any line passing through point 6 for which one half-plane contains all the points.</p> <p>A point like 9 can also have such a line, although it's not particularly evident from the plot.</p> <p><strong>Question</strong>: is there any way to <em>exclude</em> the existence of such a line for each specific point? Meaning, could one do some operations on the coordinates of the points proving that such a line can <em>NOT</em> exist for a given point (so one could quickly classify the point into one that can and or can't have it)? I'm also thinking of higher dimensional applications, where lines would be planes, etc.</p> <p>All my searches on the topic so far took me to concepts like 'convex hull', and 'boundary', which seem indeed quite closely related to what I'm looking for, but go far beyond my simple requirement of classifying the points, and are reported to be 'output-sensitive', indeed because they provide a lot of information on the hull itself, which I do not need.</p> <p>Any ideas?</p> <p>Thanks!</p>
2019-11-29 18:31:25.307000+00:00
2019-11-30 21:00:58.973000+00:00
null
r|algorithm|computational-geometry
['https://en.wikipedia.org/wiki/Convex_hull', 'https://arxiv.org/abs/1812.01332', 'https://www.sciencedirect.com/science/article/pii/0166218X82900658']
3
62,460,456
<p>To get all links from the <code>https://news.ycombinator.com</code>, you can use CSS selector <code>'a.storylink'</code>.</p> <p>For example:</p> <pre><code>from bs4 import BeautifulSoup import requests URL = "https://news.ycombinator.com" page = requests.get(URL) soup = BeautifulSoup(page.content, 'html.parser') links_with_text = [] for a in soup.select('a.storylink'): # &lt;-- find all &lt;a&gt; with class="storylink" links_with_text.append(a['href']) # &lt;-- note the ['href'] print(*links_with_text, sep='\n') </code></pre> <p>Prints:</p> <pre><code>https://blog.mozilla.org/futurereleases/2020/06/18/introducing-firefox-private-network-vpns-official-product-the-mozilla-vpn/ https://mxb.dev/blog/the-return-of-the-90s-web/ https://github.blog/2020-06-18-introducing-github-super-linter-one-linter-to-rule-them-all/ https://www.sciencemag.org/news/2018/11/why-536-was-worst-year-be-alive https://www.strongtowns.org/journal/2020/6/16/do-the-math-small-projects https://devblogs.nvidia.com/announcing-cuda-on-windows-subsystem-for-linux-2/ https://lwn.net/SubscriberLink/822568/61d29096a4012e06/ https://imil.net/blog/posts/2020/fakecracker-netbsd-as-a-function-based-microvm/ https://jepsen.io/consistency https://tumblr.beesbuzz.biz/post/621010836277837824/advice-to-young-web-developers https://archive.org/search.php?query=subject%3A%22The+Navy+Electricity+and+Electronics+Training+Series%22&amp;sort=publicdate https://googleprojectzero.blogspot.com/2020/06/ff-sandbox-escape-cve-2020-12388.html?m=1 https://apnews.com/1da061ce00eb531291b143ace0eed1c9 https://support.apple.com/library/content/dam/edam/applecare/images/en_US/appleid/android-apple-music-account-payment-none.jpg https://standpointmag.co.uk/issues/may-june-2020/the-healing-power-of-birdsong/ https://steveblank.com/2020/06/18/the-coming-chip-wars-of-the-21st-century/ https://www.videolan.org/security/sb-vlc3011.html https://onesignal.com/careers/2023b71d-2f44-4934-a33c-647855816903 https://www.bbc.com/news/world-europe-53006790 https://github.com/efficient/HOPE https://everytwoyears.org/ https://www.historytoday.com/archive/natural-histories/intelligence-earthworms https://cr.yp.to/2005-590/powerpc-cwg.pdf https://quantum.country/ http://www.crystallography.net/cod/ https://parkinsonsnewstoday.com/2020/06/17/tiny-magnetically-powered-implant-may-be-future-of-deep-brain-stimulation/ https://spark.apache.org/releases/spark-release-3-0-0.html https://arxiv.org/abs/1712.09624 https://www.washingtonpost.com/technology/2020/06/18/data-privacy-law-sherrod-brown/ https://blog.chromium.org/2020/06/improving-chromiums-browser.html </code></pre>
2020-06-18 22:36:22.487000+00:00
2020-06-18 22:36:22.487000+00:00
null
null
62,460,210
<p>Hey guys, so I got as far as being able to add the <code>a</code> element to a list. The problem is I just want the href link to be added to the <code>links_with_text</code> list and not the entire <code>a</code> element. What am I doing wrong?</p> <pre><code>from bs4 import BeautifulSoup from requests import get import requests URL = "https://news.ycombinator.com" page = requests.get(URL) soup = BeautifulSoup(page.content, 'html.parser') results = soup.find(id = 'hnmain') articles = results.find_all(class_="title") links_with_text = [] for article in articles: link = article.find('a', href=True) links_with_text.append(link) print('\n'.join(map(str, links_with_text))) </code></pre> <p>This prints exactly how I want the list to print, but I just want the href from every <code>a</code> element, not the entire element. Thank you </p>
2020-06-18 22:13:35.703000+00:00
2020-06-18 22:38:13.900000+00:00
2020-06-18 22:38:13.900000+00:00
python|list|beautifulsoup
[]
0
71,472,362
<p><a href="https://i.stack.imgur.com/BhVnx.png" rel="nofollow noreferrer">Transformer Encoder-Decoder Architecture</a> The BERT model contains only the encoder block of the transformer architecture. Let's look at individual elements of an encoder block for BERT to visualize the number weight matrices as well as the bias vectors. The given configuration L = 12 means there will be 12 layers of self attention, H = 768 means that the embedding dimension of individual tokens will be of 768 dimensions, A = 12 means there will be 12 attention heads in one layer of self attention. The encoder block performs the following sequence of operations:</p> <ol> <li><p>The input will be the sequence of tokens as a matrix of S * d dimension. Where s is the sequence length and d is the embedding dimension. The resultant input sequence will be the sum of token embeddings, token type embeddings as well as position embedding as a d-dimensional vector for each token. In the BERT model, the first set of parameters is the vocabulary embeddings. BERT uses WordPiece[<a href="https://arxiv.org/abs/1609.08144" rel="nofollow noreferrer">2</a>] embeddings that has 30522 tokens. Each token is of 768 dimensions.</p> </li> <li><p>Embedding layer normalization. One weight matrix and one bias vector.</p> </li> <li><p>Multi-head self attention. There will be h number of heads, and for each head there will be three matrices which will correspond to query matrix, key matrix and the value matrix. The first dimension of these matrices will be the embedding dimension and the second dimension will be the embedding dimension divided by the number of attention heads. Apart from this, there will be one more matrix to transform the concatenated values generated by attention heads to the final token representation.</p> </li> <li><p>Residual connection and layer normalization. One weight matrix and one bias vector.</p> </li> <li><p>Position-wise feedforward network will have one hidden layer, that will correspond to two weight matrices and two bias vectors. In the paper, it is mentioned that the number of units in the hidden layer will be four times the embedding dimension.</p> </li> <li><p>Residual connection and layer normalization. 
One weight matrix and one bias vector.</p> </li> </ol> <p>Let's calculate the actual number of parameters by associating the right dimensions to the weight matrices and bias vectors for the BERT base model.</p> <p><strong>Embedding Matrices:</strong></p> <ul> <li>Word Embedding Matrix size [Vocabulary size, embedding dimension] = [30522, 768] = 23440896</li> <li>Position embedding matrix size, [Maximum sequence length, embedding dimension] = [512, 768] = 393216</li> <li>Token Type Embedding matrix size [2, 768] = 1536</li> <li>Embedding Layer Normalization, weight and Bias [768] + [768] = 1536</li> <li>Total Embedding parameters = <strong> ≈ </strong></li> </ul> <p><strong>Attention Head:</strong></p> <ul> <li><p>Query Weight Matrix size [768, 64] = 49152 and Bias [768] = 768</p> </li> <li><p>Key Weight Matrix size [768, 64] = 49152 and Bias [768] = 768</p> </li> <li><p>Value Weight Matrix size [768, 64] = 49152 and Bias [768] = 768</p> </li> <li><p>Total parameters for one layer attention with 12 heads = 12∗(3 ∗(49152+768)) = 1797120</p> </li> <li><p>Dense weight for projection after concatenation of heads [768, 768] = 589824 and Bias [768] = 768, (589824+768 = 590592)</p> </li> <li><p>Layer Normalization weight and Bias [768], [768] = 1536</p> </li> <li><p>Position wise feedforward network weight matrices and bias [3072, 768] = 2359296, [3072] = 3072 and [768, 3072 ] = 2359296, [768] = 768, (2359296+3072+ 2359296+768 = 4722432)</p> </li> <li><p>Layer Normalization weight and Bias [768], [768] = 1536</p> </li> <li><p>Total parameters for one complete attention layer (1797120 + 590592 + 1536 + 4722432 + 1536 = <strong>7113216 ≈ 7</strong>)</p> </li> <li><p>Total parameters for 12 layers of attention ( ∗ = <strong> ≈ </strong>)</p> </li> </ul> <p><strong>Output layer of BERT Encoder:</strong></p> <ul> <li>Dense Weight Matrix and Bias [768, 768] = 589824, [768] = 768, (589824 + 768 = 590592)</li> </ul> <p><em>Total Parameters in ase = + + = <strong> ≈ </strong></em></p>
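<p>As a quick sanity check of the arithmetic above, a short Python sketch (the variable names are mine; the per-head bias of 768 follows the counting convention used above):</p> <pre><code># BERT base: L=12 layers, H=768, A=12 heads, V=30522 vocab, S=512 positions
H, L, A, V, S = 768, 12, 12, 30522, 512

embeddings = V*H + S*H + 2*H + 2*H        # word + position + token type + LayerNorm
attention = A * 3 * (H*(H//A) + H)        # Q, K, V for all heads (bias counted as above)
attention += (H*H + H) + 2*H              # output projection + LayerNorm
ffn = (H*4*H + 4*H) + (4*H*H + H) + 2*H   # two dense layers + LayerNorm
output = H*H + H                          # final dense layer

total = embeddings + L*(attention + ffn) + output
print(total)  # 109786368, i.e. ~110M
</code></pre>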
2022-03-14 18:11:50.570000+00:00
2022-03-14 18:11:50.570000+00:00
null
null
64,485,777
<p>The paper &quot;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&quot; by Devlin &amp; Co. reports 110M parameters for the base model size (i.e. L=12, H=768, A=12), where L = number of layers, H = hidden size and A = number of self-attention operations. As far as I know, parameters in a neural network are usually the count of &quot;weights and biases&quot; between the layers. So how is this calculated based on the given information? 12*768*768*12?</p>
2020-10-22 15:41:18.500000+00:00
2022-03-14 18:11:50.570000+00:00
null
neural-network|nlp|bert-language-model
['https://i.stack.imgur.com/BhVnx.png', 'https://arxiv.org/abs/1609.08144']
2
46,094,513
<p>I'd look at the <em>divergences</em>, as explained in notes and literature on Hamiltonian Monte Carlo, see, e.g., <a href="http://twiecki.github.io/blog/2017/02/08/bayesian-hierchical-non-centered/" rel="nofollow noreferrer">here</a> and <a href="https://arxiv.org/abs/1312.0906" rel="nofollow noreferrer">here</a>.</p> <pre><code>with hb1_model: np.savetxt('diverging.csv', hb1_trace['diverging']) </code></pre> <p>As a dirty solution, you can try to increase <code>target_accept</code>, perhaps.</p> <p>Good luck!</p>
2017-09-07 10:53:01.607000+00:00
2017-09-07 10:53:01.607000+00:00
null
null
46,079,207
<p>I have a fairly simple test data set I am trying to fit with pymc3.</p> <p>The result generated by traceplot looks something like <a href="https://i.stack.imgur.com/XCDzQ.png" rel="nofollow noreferrer">this.</a> Essentially the traces of all parameters look like there is a standard 'caterpillar' for 100 iterations, followed by a flat line for 750 iterations, followed by the caterpillar again.</p> <p>The initial 100 iterations happen after 25,000 ADVI iterations, and 10,000 tune iterations. If I change these amounts, I randomly will/won't have these periods of unwanted stability.</p> <p>I'm wondering if anyone has any advice about how I can stop this from happening - and what is causing it?</p> <p>Thanks.</p> <p>The full code is below. In brief, I am generating a set of 'phases' (-pi -> pi) with a corresponding set of values y = a(j)*sin(phase) + b(j)*cos(phase). a and b are drawn for each subject j at random, but are related to each other. I then essentially try to fit this same model.</p> <p>Edit: <a href="https://i.stack.imgur.com/mFSHC.png" rel="nofollow noreferrer">Here is a similar example, running for 25,000 iterations. Something goes wrong around iteration 20,000.</a></p> <pre><code>#!/usr/bin/env python3 # -*- coding: utf-8 -*- import matplotlib.pyplot as plt import numpy as np import pymc3 as pm %matplotlib inline np.random.seed(0) n_draw = 2000 n_tune = 10000 n_init = 25000 init_string = 'advi' target_accept = 0.95 ## # Generate some test data # Just generates: # x a vector of phases # y a vector corresponding to some sinusoidal function of x # subject_idx a vector corresponding to which subject x is #9 Subjects N_j = 9 #Each with 276 measurements N_i = 276 sigma_y = 1.0 mean = [0.1, 0.1] cov = [[0.01, 0], [0, 0.01]] # diagonal covariance x_sub = np.zeros((N_j,N_i)) y_sub = np.zeros((N_j,N_i)) y_true_sub = np.zeros((N_j,N_i)) ab_sub = np.zeros((N_j,2)) tuning_sub = np.zeros((N_j,1)) sub_ix_sub = np.zeros((N_j,N_i)) for j in range(0,N_j): aj,bj = np.random.multivariate_normal(mean, cov) #aj = np.abs(aj) #bj = np.abs(bj) xj = np.random.uniform(-1,1,size = (N_i,1))*np.pi xj = np.sort(xj)#for convenience yj_true = aj*np.sin(xj) + bj*np.cos(xj) yj = yj_true + np.random.normal(scale=sigma_y, size=(N_i,1)) x_sub[j,:] = xj.ravel() y_sub[j,:] = yj.ravel() y_true_sub[j,:] = yj_true.ravel() ab_sub[j,:] = [aj,bj] tuning_sub[j,:] = np.sqrt(((aj**2)+(bj**2))) sub_ix_sub[j,:] = [j]*N_i x = np.ravel(x_sub) y = np.ravel(y_sub) subject_idx = np.ravel(sub_ix_sub) subject_idx = np.asarray(subject_idx, dtype=int) ## # Fit model hb1_model = pm.Model() with hb1_model: # Hyperpriors hb1_mu_a = pm.Normal('hb1_mu_a', mu=0., sd=100) hb1_sigma_a = pm.HalfCauchy('hb1_sigma_a', 4) hb1_mu_b = pm.Normal('hb1_mu_b', mu=0., sd=100) hb1_sigma_b = pm.HalfCauchy('hb1_sigma_b', 4) # We fit a mixture of a sine and cosine with these two coefficients # allowed to be different for each subject hb1_aj = pm.Normal('hb1_aj', mu=hb1_mu_a, sd=hb1_sigma_a, shape = N_j) hb1_bj = pm.Normal('hb1_bj', mu=hb1_mu_b, sd=hb1_sigma_b, shape = N_j) # Model error hb1_eps = pm.HalfCauchy('hb1_eps', 5) hb1_linear = hb1_aj[subject_idx]*pm.math.sin(x) + hb1_bj[subject_idx]*pm.math.cos(x) hb1_linear_like = pm.Normal('y', mu = hb1_linear, sd=hb1_eps, observed=y) with hb1_model: hb1_trace = pm.sample(draws=n_draw, tune = n_tune, init = init_string, n_init = n_init, target_accept = target_accept) pm.traceplot(hb1_trace) </code></pre>
2017-09-06 15:33:00.417000+00:00
2017-09-07 10:53:01.607000+00:00
2017-09-06 16:34:47.080000+00:00
pymc|pymc3|mcmc
['http://twiecki.github.io/blog/2017/02/08/bayesian-hierchical-non-centered/', 'https://arxiv.org/abs/1312.0906']
2
38,980,687
<p>I've been wondering about this too. It's not really clear to me what they're doing, but this is what I found.</p> <p>In the <a href="http://arxiv.org/pdf/1606.07792v1.pdf" rel="noreferrer">paper on wide and deep learning</a>, they describe the embedding vectors as being randomly initialized and then adjusted during training to minimize error.</p> <p>Normally when you do embeddings, you take some arbitrary vector representation of the data (such as one-hot vectors) and then multiply it by a matrix that represents the embedding. This matrix can be found by PCA or while training by something like t-SNE or word2vec.</p> <p>The actual code for the embedding_column is <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column.py" rel="noreferrer">here</a>, and it's implemented as a class called _EmbeddingColumn which is a subclass of _FeatureColumn. It stores the embedding matrix inside its sparse_id_column attribute. Then, the method to_dnn_input_layer applies this embedding matrix to produce the embeddings for the next layer.</p> <pre><code> def to_dnn_input_layer(self, input_tensor, weight_collections=None, trainable=True): output, embedding_weights = _create_embedding_lookup( input_tensor=self.sparse_id_column.id_tensor(input_tensor), weight_tensor=self.sparse_id_column.weight_tensor(input_tensor), vocab_size=self.length, dimension=self.dimension, weight_collections=_add_variable_collection(weight_collections), initializer=self.initializer, combiner=self.combiner, trainable=trainable) </code></pre> <p>So as far as I can see, it seems like the embeddings are formed by applying whatever learning rule you're using (gradient descent, etc.) to the embedding matrix.</p>
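<p>To make the "one-hot times matrix" view concrete, here is a tiny NumPy sketch (toy numbers of my own):</p> <pre><code>import numpy as np

vocab_size, dim = 5, 3
embedding_matrix = np.random.randn(vocab_size, dim)  # randomly initialized, adjusted during training

one_hot = np.eye(vocab_size)[2]          # one-hot vector for token id 2
embedded = one_hot @ embedding_matrix    # the matrix product...
assert np.allclose(embedded, embedding_matrix[2])  # ...is just a row lookup
</code></pre>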
2016-08-16 17:06:35.570000+00:00
2016-08-16 17:06:35.570000+00:00
null
null
38,808,643
<p>I am going through the TensorFlow <a href="https://www.tensorflow.org/versions/r0.10/tutorials/wide_and_deep/index.html#tensorflow-wide-deep-learning-tutorial" rel="noreferrer">tutorial</a>. I would like to find a description of the following line:</p> <pre><code>tf.contrib.layers.embedding_column </code></pre> <p>I wonder if it uses word2vec or anything else, or maybe I am thinking in completely the wrong direction. I tried to click around on GitHub, but found nothing. I am guessing looking on GitHub is not going to be easy, since Python might refer to some C++ libraries. Could anybody point me in the right direction?</p>
2016-08-06 20:58:10.603000+00:00
2018-01-21 13:47:51.517000+00:00
2016-08-06 21:15:27.287000+00:00
python|tensorflow|embedding
['http://arxiv.org/pdf/1606.07792v1.pdf', 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column.py']
2
55,190,546
<p>There are a lot of heuristic approaches, but if you want something really state of the art, check <a href="https://arxiv.org/abs/1812.02603" rel="nofollow noreferrer" title="Confirmation Sampling for Exact Nearest Neighbor Search">"Confirmation Sampling for Exact Nearest Neighbor Search"</a>.</p>
2019-03-15 20:54:30.657000+00:00
2019-04-11 07:33:08.360000+00:00
2019-04-11 07:33:08.360000+00:00
null
53,790,243
<p>I'm curious if it's possible to find an exact match using LSH. On the MIT website about LSH they state: </p> <blockquote> <p>Locality-Sensitive Hashing (LSH) is an algorithm for solving the approximate or <strong>exact</strong> Near Neighbor Search in high dimensional spaces</p> </blockquote> <p><a href="https://www.mit.edu/~andoni/LSH/" rel="nofollow noreferrer">https://www.mit.edu/~andoni/LSH/</a></p> <p>I did some searching around the internet and Google Scholar, but it seems like there's no sign of it. Does anyone know if it's possible, and can you point me to a paper about it? Much appreciated. </p>
2018-12-15 06:48:58.057000+00:00
2019-04-11 07:33:08.360000+00:00
2019-04-11 06:26:46.593000+00:00
data-mining|lsh
['https://arxiv.org/abs/1812.02603']
1
57,587,733
<p>When using dropout to estimate the uncertainty (or any other stochastic regularization method), make sure to also check out our recent work on providing a sampling-free approximation of Monte-Carlo dropout.</p> <p><a href="https://arxiv.org/pdf/1908.00598.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1908.00598.pdf</a></p> <p>We essentially follow your idea: treat the activations as random variables and then propagate mean and variance using error propagation to the output layer. Consequently, we obtain <strong>two outputs - the mean and the variance</strong>.</p>
2019-08-21 08:44:55.567000+00:00
2019-08-21 08:44:55.567000+00:00
null
null
56,206,942
<p>I'd like to use a neural network to predict a scalar value which is the sum of a function of the input values and a random value (I'm assuming Gaussian distribution) whose variance also depends on the input values. Now I'd like to have a neural network that has two outputs - the first output should approximate the deterministic part - the function, and the second output should approximate the variance of the random part, depending on the input values. What loss function do I need to train such a network?</p> <p>(It would be nice if there was an example with Python for TensorFlow, but I'm also interested in general answers. I'm also not quite clear how I could write something like that in Python code - none of the examples I found so far show how to address individual outputs from the loss function.)</p>
2019-05-19 10:36:39.297000+00:00
2019-12-16 06:03:17.163000+00:00
null
python|tensorflow|neural-network|deep-learning|loss-function
['https://arxiv.org/pdf/1908.00598.pdf']
1
56,208,907
<p>You can use dropout for that. With a dropout layer you can make several different predictions based on different settings of which nodes are dropped out. Then you can look at the spread of the outcomes and interpret it as a measure of uncertainty.</p> <p>For details, read:</p> <blockquote> <p>Gal, Yarin, and Zoubin Ghahramani. "<a href="https://arxiv.org/pdf/1506.02142.pdf" rel="nofollow noreferrer">Dropout as a bayesian approximation: Representing model uncertainty in deep learning</a>." international conference on machine learning. 2016.</p> </blockquote>
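<p>For illustration, a minimal Keras sketch of this Monte-Carlo dropout idea (the layer sizes, dropout rate and the 100 forward passes are arbitrary choices of mine):</p> <pre><code>import numpy as np
import tensorflow as tf

# toy regression model with dropout
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])

x = np.random.randn(5, 10).astype('float32')

# keep dropout active at prediction time via training=True
preds = np.stack([model(x, training=True).numpy() for _ in range(100)])
mean = preds.mean(axis=0)  # predictive mean per input
std = preds.std(axis=0)    # spread across passes as an uncertainty measure
</code></pre>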
2019-05-19 14:45:13.623000+00:00
2019-05-19 14:45:13.623000+00:00
null
null
56,206,942
<p>I'd like to use a neural network to predict a scalar value which is the sum of a function of the input values and a random value (I'm assuming Gaussian distribution) whose variance also depends on the input values. Now I'd like to have a neural network that has two outputs - the first output should approximate the deterministic part - the function, and the second output should approximate the variance of the random part, depending on the input values. What loss function do I need to train such a network?</p> <p>(It would be nice if there was an example with Python for TensorFlow, but I'm also interested in general answers. I'm also not quite clear how I could write something like that in Python code - none of the examples I found so far show how to address individual outputs from the loss function.)</p>
2019-05-19 10:36:39.297000+00:00
2019-12-16 06:03:17.163000+00:00
null
python|tensorflow|neural-network|deep-learning|loss-function
['https://arxiv.org/pdf/1506.02142.pdf']
1
25,692,478
<p>Though I don't consider myself an expert in CV, I have dabbled enough to point you to the right literature.</p> <p>Look at this paper for a survey of work on human gender recognition from face images: <a href="http://arxiv.org/pdf/1204.1611.pdf" rel="nofollow">http://arxiv.org/pdf/1204.1611.pdf</a></p> <p>Also look at the following papers:<br> <a href="http://www.cse.unr.edu/~bebis/GenderRecognitionIWSSIP12.pdf" rel="nofollow">http://www.cse.unr.edu/~bebis/GenderRecognitionIWSSIP12.pdf</a><br> <a href="http://tdlc.ucsd.edu/research/publications/Nestor_Tarr_Gender_Recognition.pdf" rel="nofollow">http://tdlc.ucsd.edu/research/publications/Nestor_Tarr_Gender_Recognition.pdf</a><br> <a href="http://www.ijarcce.com/upload/2013/june/43-Hadeel%20Alrashed%20-facial%20gender%20recognition%20using%20eyes%20images.pdf" rel="nofollow">http://www.ijarcce.com/upload/2013/june/43-Hadeel%20Alrashed%20-facial%20gender%20recognition%20using%20eyes%20images.pdf</a></p>
2014-09-05 19:11:03.883000+00:00
2014-09-05 19:11:03.883000+00:00
null
null
25,689,140
<p>I'm doing human face gender recognition using JavaCV. The accuracy is not very satisfactory: only about 70%. The training set I'm using is some Asian face images I got from the internet, and I used both geometry-based and appearance-based information for training &amp; classification.</p> <p>The geometry info is produced by choosing some facial feature point ratios, such as face height/face width etc., and trained with a Fisher recognizer, while the appearance info is trained with an LBP face recognizer.</p> <p>My question is: what is the key point to improve the accuracy of gender recognition? Can someone share some experience?</p> <p>Thanks~</p>
2014-09-05 15:21:32.257000+00:00
2014-09-05 19:11:03.883000+00:00
null
opencv|machine-learning|javacv|face-recognition
['http://arxiv.org/pdf/1204.1611.pdf', 'http://www.cse.unr.edu/~bebis/GenderRecognitionIWSSIP12.pdf', 'http://tdlc.ucsd.edu/research/publications/Nestor_Tarr_Gender_Recognition.pdf', 'http://www.ijarcce.com/upload/2013/june/43-Hadeel%20Alrashed%20-facial%20gender%20recognition%20using%20eyes%20images.pdf']
4
64,149,336
<p>I think you have the correct pointers for writing a custom federated computation, as well as converting a Keras model to a <code>tff.learning.Model</code>. So we'll focus on pulling a TFF type signature from an existing <code>tff.learning.Model</code>.</p> <p>Once you have your hands on such a model, you should be able to use <a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/framework/weights_type_from_model" rel="nofollow noreferrer"><code>tff.learning.framework.weights_type_from_model</code></a> to pull out the appropriate TFF type to use for your custom algorithm (a short usage sketch follows at the end of this answer).</p> <p>There is an interesting caveat here: how precisely you <em>use</em> a <code>tff.learning.Model</code> in your custom algorithm is pretty much up to you, and this could affect your desired model weights type. This is unlikely to be the case (likely you will simply be assigning values from incoming tensors to the model variables), so I think we should prefer to avoid going deeper into this caveat.</p> <p>Finally, a few pointers to end-to-end custom algorithm implementations in TFF:</p> <ul> <li>One of the simplest complete examples TFF has is <a href="https://github.com/tensorflow/federated/tree/master/tensorflow_federated/python/examples/simple_fedavg" rel="nofollow noreferrer"><code>simple_fedavg</code></a>, which is totally self-contained and contains instructions for running.</li> <li>The <a href="https://github.com/google-research/federated/tree/master/optimization" rel="nofollow noreferrer">code</a> for a paper on <a href="https://arxiv.org/pdf/2003.00295.pdf" rel="nofollow noreferrer">Adaptive Federated Optimization</a> contains a <a href="https://github.com/google-research/federated/blob/master/optimization/shared/fed_avg_schedule.py" rel="nofollow noreferrer">handwritten implementation</a> of learning rate decay on the clients in TFF.</li> <li>A <a href="https://github.com/google-research/federated/blob/master/adaptive_lr_decay/adaptive_fed_avg.py" rel="nofollow noreferrer">similar implementation</a> of adaptive learning rate decay (think Keras' functions to decay learning rate on plateaus) is right next door to the code for AFO.</li> </ul>
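<p>Here is the promised sketch of the <code>weights_type_from_model</code> call mentioned above (<code>model_fn</code> is a hypothetical zero-argument function of yours returning a <code>tff.learning.Model</code>, e.g. built via <code>tff.learning.from_keras_model</code>):</p> <pre><code>import tensorflow_federated as tff

# model_fn() is assumed to build a fresh tff.learning.Model
model = model_fn()
model_weights_type = tff.learning.framework.weights_type_from_model(model)
print(model_weights_type)  # a tff.Type usable in your custom computations
</code></pre>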
2020-10-01 05:01:21.520000+00:00
2020-10-01 05:01:21.520000+00:00
null
null
64,053,996
<p><a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification" rel="nofollow noreferrer">This</a> tutorial describes how to build a TFF computation from keras model. <a href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2" rel="nofollow noreferrer">This</a> tutorial describes how to build a custom TFF computation from scratch, possibly with a custom federated learning algorithm.</p> <p>What I need is a combination of these: I want to build a custom federated learning algorithm, and I want to use an existing keras model. <strong>Q.</strong> How can it be done?</p> <p>The second tutorial requires <code>MODEL_TYPE</code> which is based on <code>MODEL_SPEC</code>, but I don't know how to get it. I can see some variables in <code>model.trainable_variables</code> (where <code>model = tff.learning.from_keras_model(keras_model, ...</code>), but I doubt it's what I need.</p> <p>Of course, I can implement the model by hand (as in the second tutorial), but I want to avoid it.</p>
2020-09-24 20:32:14.213000+00:00
2020-10-01 05:01:21.520000+00:00
null
python|tensorflow|keras|tensorflow-federated
['https://www.tensorflow.org/federated/api_docs/python/tff/learning/framework/weights_type_from_model', 'https://github.com/tensorflow/federated/tree/master/tensorflow_federated/python/examples/simple_fedavg', 'https://github.com/google-research/federated/tree/master/optimization', 'https://arxiv.org/pdf/2003.00295.pdf', 'https://github.com/google-research/federated/blob/master/optimization/shared/fed_avg_schedule.py', 'https://github.com/google-research/federated/blob/master/adaptive_lr_decay/adaptive_fed_avg.py']
6
43,084,433
<p>At its base level, those inputs are called hyperparameters and are not typically defined by any particular set of rules. That said, we often use rules of thumb (heuristics) to choose a set of hyperparameters and then use hyperparameter optimisation to increase performance or efficiency etc.</p> <p>A great explanation of this is given <a href="https://stats.stackexchange.com/questions/148139/rules-for-selecting-convolutional-neural-network-parameters">here</a>.</p> <p><strong>Edit:</strong> It's a widely studied problem; look into arXiv and Stats.StackExchange for more info, but here is a great paper I used when I was learning: <a href="https://arxiv.org/pdf/1206.5533v2.pdf" rel="nofollow noreferrer">here</a></p>
2017-03-29 04:36:06.080000+00:00
2017-03-31 12:17:32.930000+00:00
2017-04-13 12:44:13.917000+00:00
null
43,084,006
<p>I'm practicing CNNs. I read some papers about training on the MNIST dataset using CNNs. The size of the images is 28x28 and they use a 5-layer architecture: input>conv1-maxpool1>conv2-maxpool2>fully connected>output</p> <pre><code>Convolutional Layer #1 - Computes 32 features using a 5x5 filter with ReLU activation. - Padding is added to preserve width and height. - Input Tensor Shape: [batch_size, 28, 28, 1] - Output Tensor Shape: [batch_size, 28, 28, 32] Pooling Layer #1 - First max pooling layer with a 2x2 filter and stride of 2 - Input Tensor Shape: [batch_size, 28, 28, 32] - Output Tensor Shape: [batch_size, 14, 14, 32] Convolutional Layer #2 - Computes 64 features using a 5x5 filter. - Padding is added to preserve width and height. - Input Tensor Shape: [batch_size, 14, 14, 32] - Output Tensor Shape: [batch_size, 14, 14, 64] Pooling Layer #2 - Second max pooling layer with a 2x2 filter and stride of 2 - Input Tensor Shape: [batch_size, 14, 14, 64] - Output Tensor Shape: [batch_size, 7, 7, 64] Flatten tensor into a batch of vectors - Input Tensor Shape: [batch_size, 7, 7, 64] - Output Tensor Shape: [batch_size, 7 * 7 * 64] Fully Connected Layer - Densely connected layer with 1024 neurons - Input Tensor Shape: [batch_size, 7 * 7 * 64] - Output Tensor Shape: [batch_size, 1024] Output layer - Input Tensor Shape: [batch_size, 1024] - Output Tensor Shape: [batch_size, 10] </code></pre> <p>In conv1, 1 input channel is used to compute 32 features using a 5x5 filter, and in conv2, the 32 inputs from conv1 are used to compute 64 features using the same filter size. What are parameters such as 32, 64 and the 2x2 filter chosen based on? Are they based on the size of the image?</p> <p>If the size of the images is larger than 28x28, such as 128x128, should I increase the number of layers beyond 5? How should the above parameters change with other image sizes?</p> <p>Thanks in advance</p>
2017-03-29 03:55:05.953000+00:00
2020-06-28 16:57:59.700000+00:00
null
python|tensorflow|deep-learning|convolution
['https://stats.stackexchange.com/questions/148139/rules-for-selecting-convolutional-neural-network-parameters', 'https://arxiv.org/pdf/1206.5533v2.pdf']
2
72,679,733
<p><a href="https://arxiv.org/pdf/1908.03473.pdf" rel="nofollow noreferrer">Barder and Burkhardt</a> in 2019 proposed this approach to find MSTs in linear time given the non-MST edges are given in ascending order of their weights.</p>
2022-06-19 19:23:57.353000+00:00
2022-06-19 19:23:57.353000+00:00
null
null
33,531,475
<p>I was wondering if anyone can point to a linear time algorithm to find the MST of a graph when there is a small number of weights (i.e. edges can only have 2 different weights).</p> <p>I could not find anything on Google other than Prim's, Kruskal's and Boruvka's, none of which seem to have any properties that would reduce the run time in this special case. I'm guessing that to make it linear time it would have to be some sort of modification of BFS (which finds the MST when the weights are uniform).</p>
2015-11-04 20:34:23.743000+00:00
2022-06-19 19:23:57.353000+00:00
null
algorithm|graph
['https://arxiv.org/pdf/1908.03473.pdf']
1
66,350,596
<p>The reason why convolutions are more efficient than fully connected layers is <strong>that</strong> they are translation invariant. If you wish to have convolutions which are dependent on location, you would need to add two extra input channels to the convolution, i.e. having N+2 input channels where the x, y coordinates are the values of the two additional channels (as in e.g. <a href="https://arxiv.org/abs/1807.03247" rel="nofollow noreferrer">CoordConv</a>; a minimal sketch follows below).</p> <p>As for alternative solutions, is the gradient meaningful? If not, and it is uniform across all images, it might be better to just manually remove it in the pre-processing stage (similar to orientation correction, cropping etc). If it is not uniform (e.g. differences in lighting, shadows), then including other layers, under the assumption they would learn the invariance to different lightings, is a common hands-off approach.</p>
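<p>A minimal PyTorch sketch of the CoordConv idea (my own toy implementation, normalizing coordinates to [-0.5, 0.5] like the gradients described in the question):</p> <pre><code>import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Append normalized x/y coordinate channels to the input
    before an ordinary convolution."""
    def __init__(self, in_channels, out_channels, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-0.5, 0.5, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-0.5, 0.5, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

layer = CoordConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.randn(2, 3, 32, 32))  # -> shape (2, 16, 32, 32)
</code></pre>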
2021-02-24 12:06:52.503000+00:00
2021-03-25 11:30:05.833000+00:00
2021-03-25 11:30:05.833000+00:00
null
48,331,966
<p>Let's pretend that, in addition to having an image, I also have a gradient from left to right on the X axis of an image, and another gradient from top to bottom on the Y axis. Those two gradients are of the same size as the image, and could both range from -0.5 to 0.5. </p> <p>Now, I'd like to make the convolution kernel (a.k.a. convolution filter, or convolution weights) depend on the <code>(x, y)</code> location in the gradient. So the kernel is a function of the gradient as if the kernel was the output of a nested mini-neural net. This would make the weights of the filter different in every position, but slightly similar to their neighbors. How do I do that within PyTorch or TensorFlow? </p> <p>Sure, I could compute a <a href="https://en.wikipedia.org/wiki/Toeplitz_matrix#Discrete_convolution" rel="nofollow noreferrer">Toeplitz matrix (a.k.a. diagonal-constant matrix)</a> by myself, but the matrix multiplication would take <code>O(n^3)</code> operations if pretending <code>x==y==n</code>, whereas convolutions can normally be implemented in <code>O(n^2)</code>. Or I could maybe iterate on every element myself and do the multiplications in an unvectorized fashion. </p> <p>Any better ideas? I'd like to see creativity here, thinking about how this could be implemented neatly. I believe coding that would be an interesting way to build a network layer capable of doing things similar to a simplified version of a <a href="https://arxiv.org/abs/1506.02025" rel="nofollow noreferrer">Spatial Transformer Network</a>, but whose spatial transformation would be independent of the image. </p>
2018-01-18 23:29:45+00:00
2021-03-25 11:30:05.833000+00:00
2021-03-25 11:29:16.640000+00:00
machine-learning|neural-network|pytorch|conv-neural-network|convolution
['https://arxiv.org/abs/1807.03247']
1
32,185,357
<p>There is a worm named C. elegans and its anatomy is completely known to us. Every cell is mapped out and every neuron is well studied. This worm has an interesting property by birth: it follows or grows towards only those temperature regions in which it was born. <a href="http://arxiv.org/abs/1410.7881" rel="nofollow">Here is a link to the paper.</a> This paper has an implementation of the property with a neuronal model. And there are some students who have built a robot that only follows dark regions in an environment having different shades of light, using this neuronal model. This work could have been done using other methods as well, but this method is more noise resilient, as shown by the paper linked above.</p>
2015-08-24 14:53:38.130000+00:00
2015-08-24 14:53:38.130000+00:00
null
null
873,448
<p>Just wondering, since we've reached 1 teraflop per PC, yet we are still not able to model an insect's brain. Has anyone seen a decent implementation of a self-learning, self-developing neural network?</p>
2009-05-16 22:24:33.190000+00:00
2015-08-24 14:53:38.130000+00:00
2013-08-08 20:27:08.940000+00:00
artificial-intelligence|neural-network|large-scale|biological-neural-network|neuroscience
['http://arxiv.org/abs/1410.7881']
1
43,418,891
<p>Yes, you can make a fully convolutional classifier, one example is <a href="https://arxiv.org/abs/1602.07360" rel="nofollow noreferrer">SqueezeNet</a>.</p> <p>The basic working principle is that at the end of the network you insert a convolutional layer with C output channels, where C is the number of classes. Then you proceed to apply global average pooling, which will produce a 1D vector of C elements (independent of input feature map width/height), and you can apply the softmax function to that vector to produce output class probabilities.</p>
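<p>For illustration, a minimal PyTorch sketch of such a head (the 256 input channels and 10 classes are arbitrary placeholders of mine):</p> <pre><code>import torch
import torch.nn as nn

num_classes = 10

# fully convolutional classifier head: no fully connected layers anywhere
head = nn.Sequential(
    nn.Conv2d(256, num_classes, kernel_size=1),  # C output channels
    nn.AdaptiveAvgPool2d(1),                     # global average pooling
    nn.Flatten(),                                # (batch, C, 1, 1) -> (batch, C)
)

feats = torch.randn(4, 256, 13, 13)  # feature maps of any spatial size
logits = head(feats)                 # shape (4, 10); softmax gives class probabilities
</code></pre>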
2017-04-14 20:37:39.180000+00:00
2017-04-14 20:37:39.180000+00:00
null
null
43,418,497
<p>What I mean to ask is: can I have a neural network classifier with a large number of layers, but without fully connected layers?</p>
2017-04-14 20:04:52.573000+00:00
2017-04-15 11:15:58.737000+00:00
null
neural-network|deep-learning
['https://arxiv.org/abs/1602.07360']
1
28,907,869
<p>It seems that the problem is in the chunker model that you read in the first line. Probably, it uses a wrong tokenizer, which is the source of <code>Bali.</code>, <code>I'</code>, <code>Widodo. I</code>, <code>" Abbott</code>, <code>"</code>. <code>pair</code> and <code>the Bali</code> can be explained by ordinary errors (taggers usually have no more than 80-90% precision). However, the source of such errors can also be explained by a bad model - for example, it may have been trained for another domain.</p> <p>Btw, why do you use LingPipe, and not <a href="http://nlp.stanford.edu/software/CRF-NER.shtml" rel="nofollow">Stanford NER</a>? It shows better results, as a rule; e.g. the first available (i.e. random) <a href="http://arxiv.org/ftp/arxiv/papers/1308/1308.0661.pdf" rel="nofollow">article</a>. Also, <a href="http://www.galalaly.me/index.php/2011/05/tagging-text-with-stanford-pos-tagger-in-java-applications/" rel="nofollow">here</a> is a good step-by-step tutorial for Stanford NER.</p>
2015-03-06 21:21:02.473000+00:00
2015-03-06 21:21:02.473000+00:00
null
null
28,901,640
<p>I'm trying to extract named entities (people, places and organizations) using LingPipe and following <a href="http://alias-i.com/lingpipe/demos/tutorial/ne/read-me.html" rel="nofollow">this tutorial</a>. Here is the <a href="http://pastebin.com/YAUCWBMi" rel="nofollow">full text</a> that I am trying to extract names from, and here is the code (exception handling omitted for brevity):</p> <pre><code>Chunker chunker = readChunker("/path-to-chunker"); // custom method for reading the model String article = "Some long news article spanning multiple lines..."; Chunking chunking = chunker.chunk(article); Set&lt;Chunk&gt; chunkingSet = chunking.chunkSet(); for (Chunk chunk : chunkingSet) { String name = article.substring(chunk.start(), chunk.end()); System.out.println(name); } </code></pre> <p>And this is (part of) the output I get:</p> <pre><code>Tony Abbott Indonesia Joko Widodo Sukumaran Andrew Chan Bali. pair the Bali Nusa Kambangan Indonesian Indonesian I’ Widodo. I ” Abbott Julie Bishop Widodo al-Jazeera Sukumaran Chan Bishop ” </code></pre> <p>As you can see, there are a lot of mismatches/partial matches like <code>Bali.</code>, <code>pair</code>, <code>the Bali</code>, <code>I'</code>, <code>Widodo. I</code>, <code>" Abbott</code>, <code>"</code>. I assume the library's NER is working just fine and the problem is that the above code is somehow misusing the classes/methods from the library. But I just can't seem to find what is wrong with the code.</p> <p>Any ideas?</p>
2015-03-06 15:08:11.283000+00:00
2015-03-06 21:21:02.473000+00:00
null
nlp|named-entity-recognition|lingpipe
['http://nlp.stanford.edu/software/CRF-NER.shtml', 'http://arxiv.org/ftp/arxiv/papers/1308/1308.0661.pdf', 'http://www.galalaly.me/index.php/2011/05/tagging-text-with-stanford-pos-tagger-in-java-applications/']
3
51,498,742
<p>Unfortunately, the problem you want to solve here is NP-complete, so there are no great absolute solutions to this problem, but the following papers provide some nice heuristic algorithms which are not difficult to implement.</p> <p><a href="https://pdfs.semanticscholar.org/bb99/86af2f26f7726fcef1bc684eac8239c9b853.pdf?_ga=1.50320358.1394974689.1485463187" rel="nofollow noreferrer">OPTIMIZING THREE-DIMENSIONAL BIN PACKING THROUGH SIMULATION</a></p> <p><a href="http://www.ic.unicamp.br/~fkm/publication/rotation.pdf" rel="nofollow noreferrer">Three-dimensional packings with rotations</a></p> <p><a href="https://arxiv.org/pdf/1305.1961.pdf" rel="nofollow noreferrer">An Improved Three-Weight Message-Passing Algorithm</a></p>
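<p>Not one of the algorithms from those papers, but as a self-contained starting point, here is the classic first-fit decreasing heuristic for the 1D case in Python (the 3D variants in the papers follow the same greedy spirit):</p> <pre><code>def first_fit_decreasing(items, capacity):
    """Sort items by size (largest first) and place each one
    into the first bin that still has enough room."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item &lt;= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
# [[8, 2], [4, 4, 1, 1]] -> 2 bins
</code></pre>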
2018-07-24 12:33:15.110000+00:00
2018-07-24 12:33:15.110000+00:00
null
null
51,482,610
<p>I'm working on an algorithm to optimize the packing of items in boxes. </p> <p>I can have up to 20 items which I need to pack in as few boxes as possible (6 possible box sizes), while minimizing the wasted volume within the boxes. I thought of implementing a variation of the <strong>3D BPP algorithm</strong> - which solves part of my problem - but cannot find any algorithm <strong>written in Python</strong>. </p> <p>Does anyone have suggestions on the way to go, or Python implementations of 3D BPP which I could use?</p> <p>Thanks!</p>
2018-07-23 15:40:37.473000+00:00
2021-05-27 16:16:32.467000+00:00
null
python|bin-packing
['https://pdfs.semanticscholar.org/bb99/86af2f26f7726fcef1bc684eac8239c9b853.pdf?_ga=1.50320358.1394974689.1485463187', 'http://www.ic.unicamp.br/~fkm/publication/rotation.pdf', 'https://arxiv.org/pdf/1305.1961.pdf']
3
39,334,080
<p>Instead of the Split and Merge algorithm, you could also use superpixels. There are several fast and easy-to-use superpixel algorithms available (some are even implemented in recent OpenCV versions). To name just a few:</p> <ul> <li>Compact Watershed (see here: <a href="https://www.tu-chemnitz.de/etit/proaut/forschung/rsrc/cws_pSLIC_ICPR.pdf" rel="nofollow">https://www.tu-chemnitz.de/etit/proaut/forschung/rsrc/cws_pSLIC_ICPR.pdf</a>)</li> <li>preSLIC and SLIC (see here: <a href="https://www.tu-chemnitz.de/etit/proaut/forschung/rsrc/cws_pSLIC_ICPR.pdf" rel="nofollow">https://www.tu-chemnitz.de/etit/proaut/forschung/rsrc/cws_pSLIC_ICPR.pdf</a> and here: <a href="http://www.kev-smith.com/papers/SLIC_Superpixels.pdf" rel="nofollow">http://www.kev-smith.com/papers/SLIC_Superpixels.pdf</a>)</li> <li>SEEDS (see here: <a href="https://arxiv.org/abs/1309.3848" rel="nofollow">https://arxiv.org/abs/1309.3848</a>)</li> <li>ERGC (see here: <a href="https://sites.google.com/site/pierrebuyssens/code/ergc" rel="nofollow">https://sites.google.com/site/pierrebuyssens/code/ergc</a>)</li> </ul> <p>Given the superpixel segmentation, there is a vast set of features you can compute in order to classify them:</p> <ul> <li>In <a href="http://dhoiem.cs.illinois.edu/publications/popup.pdf" rel="nofollow">Automatic Photo Pop-Up</a> Table 1, Hoiem et al. consider, among others, the following features: mean RGB color, mean HSV color, color histograms, saturation histograms, Textons, differently oriented Gaussian derivative filters, mean x and y location, area, ...</li> <li>In <a href="https://www.ri.cmu.edu/pub_files/pub4/hoiem_derek_2007_3/hoiem_derek_2007_3.pdf" rel="nofollow">Recovering Occlusion Boundaries from a Single Image</a>, Hoiem et al. consider some additional features to the above list in Table 1.</li> <li>In <a href="http://slazebni.cs.illinois.edu/publications/eccv10-jtighe.pdf" rel="nofollow">SuperParsing: Scalable Nonparametric Image Parsing with Superpixels</a>, Tighe et al. additionally consider SIFT histograms, the mask reduced to an 8 x 8 image, bounding box shape, and color thumbnails.</li> <li>In <a href="http://www.vision.cs.ucla.edu/papers/fulkersonVS09.pdf" rel="nofollow">Class Segmentation and Object Localization with Superpixel Neighborhoods</a>, Fulkerson et al. also consider features from neighboring superpixels.</li> </ul> <p>Based on the superpixels, you can still apply a simple merging scheme in order to reduce the number of superpixels. Simple merging by color histograms might already be useful for your tasks. Otherwise you can additionally use edge information in between superpixels for merging.</p>
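<p>As a small sketch of how one might compute superpixels plus simple per-segment features in Python (assuming scikit-image; the parameter values are arbitrary choices of mine):</p> <pre><code>import numpy as np
from skimage.segmentation import slic

# image: an (H, W, 3) RGB array, e.g. from skimage.io.imread(...) / 255.0
segments = slic(image, n_segments=400, compactness=10)

features = []
for label in np.unique(segments):
    mask = segments == label
    mean_rgb = image[mask].mean(axis=0)   # mean color per superpixel
    ys, xs = np.nonzero(mask)
    center = (ys.mean(), xs.mean())       # mean x/y location
    area = mask.sum()                     # segment size
    features.append(np.concatenate([mean_rgb, center, [area]]))
</code></pre>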
2016-09-05 15:54:30.970000+00:00
2016-09-05 15:54:30.970000+00:00
null
null
34,615,202
<p>I am working on a project to segment air images and classify each segment. The images are very large and have huge homogeneous areas, so I decided to use a Split and Merge Algorithm for the segmentation. </p> <p><a href="https://i.stack.imgur.com/XDi23.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XDi23.jpg" alt="enter image description here"></a></p> <p>(On the left the original image and on the right the segmented one, where each segment is represented in its RGB mean value <a href="https://stackoverflow.com/questions/7050164/image-segmentation-split-and-merge-quadtrees/14730467#14730467">Thanks to this answer</a>)</p> <p>For the classification I want to use a SVM Classifier (I used it a lot in two projects before) with a feature vector. For the beginning I just want to use five classes: Water, Vegetation, Built up area, Dune and Anomaly Now I am thinking about what I can put in this feature vector:</p> <ul> <li>The mean RGB Value of the Segment</li> <li>A texture feature (but can I represent the texture of the segment with just one value?)</li> <li>The place in the source image (maybe with a value which represents left, right or middle?)</li> <li>The size of the segment (Water segments should be much larger than built areas)</li> <li>The mean RGB values of the fourth neighborhood of the segment</li> </ul> <p>So has anyone done something like this and can give me some advises what useful stuff I can put in the feature vector? And can someone give me an advise how I can represent the texture in the segment correctly?</p> <p>Thank you for your help.</p>
2016-01-05 15:23:03.647000+00:00
2016-09-05 15:54:30.970000+00:00
2017-05-23 12:08:53.213000+00:00
image-processing|classification|feature-detection|feature-extraction|feature-selection
['https://www.tu-chemnitz.de/etit/proaut/forschung/rsrc/cws_pSLIC_ICPR.pdf', 'https://www.tu-chemnitz.de/etit/proaut/forschung/rsrc/cws_pSLIC_ICPR.pdf', 'http://www.kev-smith.com/papers/SLIC_Superpixels.pdf', 'https://arxiv.org/abs/1309.3848', 'https://sites.google.com/site/pierrebuyssens/code/ergc', 'http://dhoiem.cs.illinois.edu/publications/popup.pdf', 'https://www.ri.cmu.edu/pub_files/pub4/hoiem_derek_2007_3/hoiem_derek_2007_3.pdf', 'http://slazebni.cs.illinois.edu/publications/eccv10-jtighe.pdf', 'http://www.vision.cs.ucla.edu/papers/fulkersonVS09.pdf']
9
66,675,014
<p>As @kkHarshit already mentioned, it is very hard to speed up a Mask R-CNN any further.</p> <p>The fastest instance segmentation model that I found is <a href="https://arxiv.org/abs/2012.12259" rel="nofollow noreferrer">YolactEdge: Real-time Instance Segmentation on the Edge (Jetson AGX Xavier: 30 FPS, RTX 2080 Ti: 170 FPS)</a>.</p> <p>Its performance is worse than that of Mask R-CNN or even Yolact, but it is still very good.</p>
2021-03-17 14:21:56.097000+00:00
2021-03-17 14:21:56.097000+00:00
null
null
59,394,530
<p>I'm running a Mask R-CNN model on an edge device (with an NVIDIA GTX 1080). I am currently using the Detectron2 Mask R-CNN implementation and I archieve an inference speed of around 5 FPS.</p> <p>To speed this up I looked at other inference engines and model implementations. For example ONNX, but I'm not able to gain a faster inference speed.</p> <p>TensorRT looks very promising to me but I did not found a ready "out-of-the-box" implementation for it. </p> <p>Are there any other mature and fast inference engines or other techniques to speed up the inference?</p>
2019-12-18 14:49:22.460000+00:00
2021-03-17 14:21:56.097000+00:00
null
tensorflow|deep-learning|computer-vision|pytorch|onnx
['https://arxiv.org/abs/2012.12259']
1
54,390,966
<blockquote> <p>Could you please tell me which metrics and loss I need and if I need custom ones also?</p> </blockquote> <p>From the <a href="https://arxiv.org/pdf/1411.4389.pdf" rel="nofollow noreferrer">linked paper</a> you can see the work is relatively old: May 2016.</p> <p>Please consider more recent work with more specifics.</p> <p>The paper provides no details on the LSTM setup except the number of units used, so you may try to find models whose metrics are spelled out.</p>
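<p>As a starting point, a minimal Keras sketch of the frame-averaging quoted in the question (my own illustration, not from the paper; shapes and unit counts are arbitrary):</p> <pre><code># Hedged sketch: per-frame softmax predictions averaged over time (LRCN-style).
from tensorflow import keras
from tensorflow.keras import layers

n_frames, n_features, n_classes = 16, 128, 3
inp = keras.Input(shape=(n_frames, n_features))
x = layers.LSTM(256, return_sequences=True)(inp)           # one output per frame
frame_probs = layers.TimeDistributed(
    layers.Dense(n_classes, activation='softmax'))(x)      # per-frame label probabilities
clip_probs = layers.GlobalAveragePooling1D()(frame_probs)  # average across frames
model = keras.Model(inp, clip_probs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
</code></pre> <p>With the averaging folded into the model like this, the plain <code>categorical_crossentropy</code> loss and <code>accuracy</code> metric on the clip-level output are enough for a first attempt; no custom ones are strictly required.</p>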
2019-01-27 17:37:17.293000+00:00
2019-01-27 17:37:17.293000+00:00
null
null
54,389,954
<p>I am trying to reproduce that in Keras:</p> <blockquote> <p>LRCN is trained to predict the video’s activity class at each time step. To produce a single label prediction for an entire video clip, we average the label probabilities—the outputs of the network’s softmax layer— across all frames and choose the most probable label.</p> </blockquote> <p>But I am quite new to LSTMs and am not sure about which metrics and loss function to use to replicate the method applied in the text above. So far I have an LSTM RNN, which returns sequences and its outputs I feed into a time-distributed dense layer of 3 classes.</p> <p>A "frame" corresponds to a timestep of the RNN and <code>return_sequences=True</code> will enable me to return prediction per frame.</p> <p>Could you please tell me which metrics and loss I need and if I need custom ones also?</p>
2019-01-27 15:50:35.880000+00:00
2019-01-27 17:37:17.293000+00:00
2019-01-27 15:56:28.007000+00:00
python|tensorflow|keras
['https://arxiv.org/pdf/1411.4389.pdf']
1
51,615,482
<p>The problem you are describing is often answered with <a href="https://www.youtube.com/watch?v=xManAGjbx2k" rel="nofollow noreferrer">Reward Shaping</a>.</p> <p>Like the frozen lake environment or <a href="https://blog.openai.com/learning-montezumas-revenge-from-a-single-demonstration/" rel="nofollow noreferrer">Montezuma's Revenge</a>, some problems have very sparse rewards. This means that any RL agent must spend a long time exploring the environment to see these rewards, which can be very frustrating for the humans who designed the task. So, as in the frozen lake environment, people often add extra information like you have suggested. This makes the reward function denser and (sometimes) allows for faster learning, if the modified reward function actually follows what the human wants the agent to do.</p> <p>In order for the agent to solve these kinds of problems faster than random search and without human intervention (such as reward shaping or giving the agent a video of an expert playing the game), the agent needs some mechanism to explore the space in an intelligent way [<a href="https://xkcd.com/285/" rel="nofollow noreferrer">citation needed</a>].</p> <p>Some current research areas on this topic are <a href="https://arxiv.org/abs/1703.01732" rel="nofollow noreferrer">Intrinsic Motivation</a>, <a href="https://arxiv.org/pdf/1705.05363.pdf" rel="nofollow noreferrer">Curiosity</a>, and <a href="https://arxiv.org/abs/1609.05140" rel="nofollow noreferrer">Options</a> and <a href="https://arxiv.org/pdf/1703.00956.pdf" rel="nofollow noreferrer">Option discovery</a>.</p> <p>Although promising, these research areas are still in their infancy, and sometimes it's just easier to say:</p> <pre><code>if agent_is_in_a_hole:
    return -10
</code></pre>
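<p>A hedged sketch of what that shaping could look like for FrozenLake, assuming gym's classic 4-tuple <code>step</code> API (pre-0.26):</p> <pre><code># Hedged sketch: reward shaping via a gym wrapper (classic 4-tuple step API assumed).
import gym

class ShapedFrozenLake(gym.Wrapper):
    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if done and reward == 0:      # episode ended without the goal reward: treat as a hole
            reward = -10.0
        elif reward == 0:             # ordinary frozen square
            reward = -1.0
        return obs, reward, done, info

env = ShapedFrozenLake(gym.make('FrozenLake8x8-v0'))
</code></pre>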
2018-07-31 14:13:21.843000+00:00
2019-06-01 21:14:02.693000+00:00
2019-06-01 21:14:02.693000+00:00
null
51,236,984
<p>I'm looking at the <a href="https://gym.openai.com/envs/FrozenLake8x8-v0/" rel="nofollow noreferrer">FrozenLake environments</a> in openai-gym. In both of them, there are no rewards, not even negative rewards, until the agent reaches the goal. Even if the agent falls through the ice, there is no negative reward -- although the episode ends. Without rewards, there is nothing to learn! Each episode starts from scratch with no benefit from previous episodes.</p> <p>This should be a simple breadth-first search. It doesn't need RL. But assuming you use RL, one approach would be a reward of -1 for a step to a frozen square (that isn't the goal) and a reward of -10 for a step into a hole. The -1 would allow the agent to learn not to repeat squares. The -10 would allow the agent to learn to avoid the holes. So I'm tempted to create my own negative rewards on the agent side. This would make it more like the cliffwalker.</p> <p>What am I missing? How would RL solve this (except via random search) with no rewards?</p>
2018-07-09 00:31:09.943000+00:00
2019-06-01 21:14:02.693000+00:00
null
openai-gym
['https://www.youtube.com/watch?v=xManAGjbx2k', 'https://blog.openai.com/learning-montezumas-revenge-from-a-single-demonstration/', 'https://xkcd.com/285/', 'https://arxiv.org/abs/1703.01732', 'https://arxiv.org/pdf/1705.05363.pdf', 'https://arxiv.org/abs/1609.05140', 'https://arxiv.org/pdf/1703.00956.pdf']
7
68,624,094
<p>BPE is one of three algorithms that deal with the unknown-word problem (or with languages with rich morphology that require dealing with structure below the word level) in an automatic way: byte-pair encoding, unigram language modeling, and WordPiece. The BPE tokenization scheme has two parts: a token learner and a token segmenter.</p> <blockquote> <p>The token learner takes a raw training corpus (sometimes roughly pre- separated into words, for example by whitespace) and induces a vocabulary, a set of tokens. The token segmenter takes a raw test sentence and segments it into the tokens in the vocabulary.</p> </blockquote> <p>The algorithm has a parameter k, the number of merges (equivalently, the number of new symbols added to the final vocabulary).</p> <p>Let's train the BPE using this line: <code>Pen Penapple Apple Pen</code>, adapted from <a href="https://en.wikipedia.org/wiki/PPAP_(Pen-Pineapple-Apple-Pen)" rel="nofollow noreferrer">PPAP</a>, and show how the unknown/rare &quot;words&quot; <code>penapplepen</code> and <code>applepen</code> in the test data can be automatically reduced into known subword units.</p> <h1>Learning</h1> <p><a href="https://i.stack.imgur.com/ExQa4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ExQa4.png" alt="enter image description here" /></a></p> <p>First, after some preprocessing (case mapping, regex-based pre-tokenization, and adding the end-of-word symbol _) we obtain the corpus C as the following strings and their frequencies (frequency: string):</p> <pre><code>2: p e n _
1: p e n a p p l e _
1: a p p l e _
</code></pre> <p>The vocabulary V is [_, p, e, n, a, l].</p> <p>Now let's run the first round of the for-loop in the above pseudocode:</p> <pre><code>p, e &lt;- most frequent pair in {(p, e): 3, (e, n): 3, (n, _): 2, (a, p): 2, (p, p): 2, (p, l): 2, (l, e): 2, (e, _): 2, (n, a): 1}
pe &lt;- p + e
[_, p, e, n, a, l, pe] &lt;- [_, p, e, n, a, l] + pe

C becomes this:
2: pe n _
1: pe n a p p l e _
1: a p p l e _
</code></pre> <p>Let's run the second merge as follows:</p> <pre><code>pe, n &lt;- most frequent pair in {(pe, n): 3, (n, _): 2, (a, p): 2, (p, p): 2, (p, l): 2, (l, e): 2, (e, _): 2, (n, a): 1}
pen &lt;- pe + n
[_, p, e, n, a, l, pe, pen] &lt;- [_, p, e, n, a, l, pe] + pen

C becomes this:
2: pen _
1: pen a p p l e _
1: a p p l e _
</code></pre> <p>And here are the next merges if we take k as N &gt;= 9:</p> <pre><code>Merge          Current V
(pen, _)       [_, p, e, n, a, l, pe, pen, pen_]
(a, p)         [_, p, e, n, a, l, pe, pen, pen_, ap]
(ap, p)        [_, p, e, n, a, l, pe, pen, pen_, ap, app]
(app, l)       [_, p, e, n, a, l, pe, pen, pen_, ap, app, appl]
(appl, e)      [_, p, e, n, a, l, pe, pen, pen_, ap, app, appl, apple]
(apple, _)     [_, p, e, n, a, l, pe, pen, pen_, ap, app, appl, apple, apple_]
(pen, apple_)  [_, p, e, n, a, l, pe, pen, pen_, ap, app, appl, apple, apple_, penapple_]
</code></pre> <p>We see that after 9 iterations of merging, there are no adjacent pairs left in C.</p> <h1>Parsing</h1> <blockquote> <p>The token parser just runs on the test data the merges we have learned from the training data, greedily, in the order we learned them. (Thus the frequencies in the test data don’t play a role, just the frequencies in the training data).</p> </blockquote> <p>In this step we test the parser using this sentence: <code>Applepen PenapplePen</code>.
As usual, we do the preprocessing we did in the training step and obtain:</p> <pre><code>a p p l e p e n _
p e n a p p l e p e n _
</code></pre> <p>and follow the merging order:</p> <pre><code>(p, e), (pe, n), (pen, _), (a, p), (ap, p), (app, l), (appl, e), (apple, _), (pen, apple_)
</code></pre> <p>First, (p, e). We merge p and e in the test data and get:</p> <pre><code>a p p l e pe n _
pe n a p p l e pe n _
</code></pre> <p>Second, we apply (pe, n) and get:</p> <pre><code>a p p l e pen _
pen a p p l e pen _
</code></pre> <p>.....<br /> After all 9 rounds of merging we get (if k &lt;= 9 we just apply the first k merges in this step; if k is 2, refer to <a href="https://stackoverflow.com/a/68624271/3552975">this answer</a>):</p> <pre><code>apple pen_
pen apple pen_
</code></pre> <p>And the final tokenized test sentence is [apple, pen_, pen, apple, pen_]; the unknown (unseen in the training data) word <code>penapplepen</code> can also be separated.</p> <p>References:</p> <ol> <li><a href="https://arxiv.org/pdf/1508.07909.pdf" rel="nofollow noreferrer">Neural Machine Translation of Rare Words with Subword Units</a></li> <li><a href="https://web.stanford.edu/%7Ejurafsky/slp3/ed3book_dec302020.pdf" rel="nofollow noreferrer">Speech and Language Processing</a></li> <li><a href="https://genius.com/Pikotaro-ppap-pen-pineapple-apple-pen-lyrics" rel="nofollow noreferrer">PPAP (Pen Pineapple Apple Pen)</a></li> </ol>
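<p>For completeness, here is a minimal sketch of the token learner traced above (my own illustration, close in spirit to the reference code in paper 1; tie-breaking relies on Python's insertion order, which happens to reproduce the trace):</p> <pre><code># Hedged sketch: BPE token learner reproducing the merges traced above.
import collections
import re

def count_pairs(corpus):
    pairs = collections.Counter()
    for word, freq in corpus.items():
        symbols = word.split()
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs

def apply_merge(pair, corpus):
    # merge the pair wherever it appears as two adjacent whitespace-separated symbols
    pattern = re.compile(r'(?&lt;!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in corpus.items()}

corpus = {'p e n _': 2, 'p e n a p p l e _': 1, 'a p p l e _': 1}
for _ in range(9):                      # k = 9 merges
    pairs = count_pairs(corpus)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)    # most frequent adjacent pair
    corpus = apply_merge(best, corpus)
    print(best, corpus)
</code></pre>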
2021-08-02 15:18:45.877000+00:00
2022-08-06 03:33:08.583000+00:00
2022-08-06 03:33:08.583000+00:00
null
50,583,254
<p>Can somebody help to explain the basic concept behind the <strong>bpe model</strong>? Except <a href="https://arxiv.org/abs/1508.07909" rel="noreferrer">this paper</a>, there is no so many explanations about it yet. </p> <p>What I have known so far is that it enables NMT model translation on open-vocabulary by encoding rare and unknown words as sequences of subword units.</p> <p>But I want to get a general idea of how it works without going through the paper.</p>
2018-05-29 11:28:51.363000+00:00
2022-08-06 03:33:08.583000+00:00
2021-08-02 15:23:05.987000+00:00
algorithm|nlp|tokenize
['https://en.wikipedia.org/wiki/PPAP_(Pen-Pineapple-Apple-Pen)', 'https://i.stack.imgur.com/ExQa4.png', 'https://stackoverflow.com/a/68624271/3552975', 'https://arxiv.org/pdf/1508.07909.pdf', 'https://web.stanford.edu/%7Ejurafsky/slp3/ed3book_dec302020.pdf', 'https://genius.com/Pikotaro-ppap-pen-pineapple-apple-pen-lyrics']
6
49,698,033
<p>There is no best way to do this (it depends on the problem), but the most common thing to do is to normalize both the training and the test data so that they have mean 0 and standard deviation 1, with the scaling parameters (mean and standard deviation) computed on the training data alone and then reused for any data you pass to <code>predict</code>.</p> <p>Yes, with <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">Batch Normalization</a> you can automatically normalize the data within the model, provided that you feed batches of a reasonable size into the network. This might produce a similar effect to data augmentation, because the signals the network sees during training will rarely repeat (as the signals for one example now depend on its entire batch). In Keras, this can be implemented by adding a <a href="https://keras.io/layers/normalization/" rel="nofollow noreferrer">BatchNorm</a> layer right after the input layer.</p>
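<p>In code, the first option typically looks like the following sketch (scikit-learn's scaler is one choice; the arrays are stand-ins for your own data):</p> <pre><code># Hedged sketch: fit normalization on training data only, reuse the same parameters at predict time.
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.random.rand(100, 8)           # stand-in for your training features
X_new = np.random.rand(5, 8)               # stand-in for inputs passed to predict()

scaler = StandardScaler().fit(X_train)     # mean/std estimated from training data only
X_train_scaled = scaler.transform(X_train)
X_new_scaled = scaler.transform(X_new)     # identical scaling parameters for new inputs
</code></pre>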
2018-04-06 17:18:20.633000+00:00
2018-04-06 17:25:36.883000+00:00
2018-04-06 17:25:36.883000+00:00
null
49,696,981
<p>I'm creating a deep neural network in Keras to perform an NN regression using tabular data. Best practice is to normalize the inputs and output series. I'd also like to use the <code>predict</code> function to provide estimates of the model's output for various sets of inputs. If the training data was normalized, I assume I'll need to also normalize the <code>predict</code> data set using the same scaling parameters. What's the best way to do this? Is there a way to automatically normalize the data within the model?</p>
2018-04-06 16:11:57.260000+00:00
2018-04-06 19:38:02.870000+00:00
null
python|keras
['https://arxiv.org/abs/1502.03167', 'https://keras.io/layers/normalization/']
2
65,602,424
<p>Well, apparently François Chollet made a few changes very recently (5 days ago), including changes in how the kl_loss and reconstruction_loss are computed; see <a href="https://github.com/keras-team/keras-io/commit/e68c2209f2e96da84babd5017d15911fa4d3c7e4#diff-8522b8e0350af836c7852feb030176985f2267bf6cbdec97a87fbac090a135a8" rel="nofollow noreferrer">here</a>.</p> <p>Having run the previous version (which you can find at the link above), I got a significantly smaller difference between the two sides of the equation than your values show, and it even shrinks as the epochs increase (from epoch 7 onward, the difference is &lt;.2).</p> <p>It seems that VAEs are subject to reconstruction-loss underestimation, which is an ongoing issue; for that, I encourage you to dig a bit into the literature, e.g. <a href="https://arxiv.org/pdf/1910.00698.pdf" rel="nofollow noreferrer">this article</a> (it may not be the best one).</p> <p>Hope that helps! At least it's a step forward.</p>
2021-01-06 19:44:02.260000+00:00
2021-04-28 09:42:16.063000+00:00
2021-04-28 09:42:16.063000+00:00
null
65,601,032
<p>I have implemented a variational autoencoder with the Keras implementation as an example (<a href="https://keras.io/examples/generative/vae/" rel="nofollow noreferrer">https://keras.io/examples/generative/vae/</a>). When plotting the training loss I noticed that these were not the same as displayed in the console. I also saw that the displayed loss in the console in the Keras example was not right considering total_loss = reconstruction_loss + kl_loss.</p> <p>Is the displayed loss in the console not the total_loss?</p> <p>My VAE code:</p> <pre><code>class Sampling(layers.Layer): &quot;&quot;&quot;Uses (z_mean, z_log_var) to sample z, the vector encoding a digit.&quot;&quot;&quot; def call(self, inputs): z_mean, z_log_var = inputs batch = tf.shape(z_mean)[0] dim = tf.shape(z_mean)[1] epsilon = tf.keras.backend.random_normal(shape=(batch, dim)) return z_mean + tf.exp(0.5 * z_log_var) * epsilon latent_dim = 100 encoder_inputs = keras.Input(shape=(64, 64, 3)) #eigentlich 160 x = layers.Conv2D(32, 4, strides=2, padding=&quot;same&quot;)(encoder_inputs) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2D(32, 3, strides=1, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2D(64, 4,strides=2, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2D(64, 3,strides=1, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2D(128, 4,strides=2, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2D(64, 3,strides=1, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2D(32, 3,strides=1, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2D(100, 8,strides=1, padding=&quot;valid&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Flatten()(x) z_mean = layers.Dense(latent_dim, name=&quot;z_mean&quot;)(x) z_log_var = layers.Dense(latent_dim, name=&quot;z_log_var&quot;)(x) z = Sampling()([z_mean, z_log_var]) encoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z], name=&quot;encoder&quot;) encoder.summary() latent_inputs = keras.Input(shape=(latent_dim,)) x = layers.Reshape((1, 1, 100))(latent_inputs) x = layers.Conv2DTranspose(100, 8, strides=1, padding=&quot;valid&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2DTranspose(32, 3, strides=1, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2DTranspose(64, 3, strides=1, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2DTranspose(128, 4, strides=2, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2DTranspose(64, 3, strides=1, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2DTranspose(64, 4, strides=2, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2DTranspose(32, 3, strides=1, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) x = layers.Conv2DTranspose(32, 4, strides=2, padding=&quot;same&quot;)(x) x = layers.LeakyReLU(alpha=0.2)(x) decoder_outputs = layers.Conv2DTranspose(3, 3, activation=&quot;sigmoid&quot;, padding=&quot;same&quot;)(x) decoder = keras.Model(latent_inputs, decoder_outputs, name=&quot;decoder&quot;) decoder.summary() class VAE(keras.Model): def __init__(self, encoder, decoder, encoder_t1, encoder_t2, encoder_t3, encoder_t4, **kwargs): super(VAE, self).__init__(**kwargs) self.encoder = encoder self.decoder = decoder def train_step(self, data): if isinstance(data, tuple): data = data[0] with 
tf.GradientTape() as tape: z_mean, z_log_var, z = encoder(data) reconstruction = decoder(z) reconstruction_loss = tf.reduce_mean( #mean keras.losses.mse(data, reconstruction) #binary_crossentropy ) reconstruction_loss *= 64 * 64 #entspricht bildgröße kl_loss = 1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var) kl_loss = tf.reduce_mean(kl_loss) #mean kl_loss *= -0.5 total_loss = reconstruction_loss + kl_loss grads = tape.gradient(total_loss, self.trainable_weights) self.optimizer.apply_gradients(zip(grads, self.trainable_weights)) return { &quot;loss&quot;: total_loss, &quot;reconstruction_loss&quot;: reconstruction_loss, &quot;kl_loss&quot;: kl_loss, } def call(self, inputs): z_mean, z_log_var, z = encoder(inputs) reconstruction = decoder(z) reconstruction_loss = tf.reduce_mean( keras.losses.mse(inputs, reconstruction) ) reconstruction_loss *= 64 * 64 kl_loss = 1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var) kl_loss = tf.reduce_mean(kl_loss) kl_loss *= -0.5 total_loss = reconstruction_loss + kl_loss self.add_metric(kl_loss, name='kl_loss', aggregation='mean') self.add_metric(total_loss, name='total_loss', aggregation='mean') self.add_metric(reconstruction_loss, name='reconstruction_loss', aggregation='mean') return reconstruction </code></pre> <p>When I plot my loss with the following code:</p> <pre><code>vae_train = vae.fit( train_generator, steps_per_epoch=nb_train_samples, epochs=nb_epoch, validation_data=val_generator, validation_steps=nb_validation_samples, #141 #3963 callbacks=[es_callback] ) loss = vae_train.history['loss'] val_loss = vae_train.history['val_total_loss'] plt.figure() plt.plot(range(len(loss)), loss, 'b', label = 'Training loss') plt.plot(range(len(val_loss)), val_loss, 'm', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() </code></pre> <p>The resulting plot displays the loss differently than the displayed loss in the console. As the displayed loss in the console is not reconstruction_loss + kl_loss but the plotted loss is.</p> <p>For example the displayed loss here is not correct, but it is plotted right: (interestingly the val_total_loss is displayed correctly)</p> <pre><code>Epoch 20/100 1266/1266 [==============================] - 82s 65ms/step - loss: 45.2503 - reconstruction_loss: 49.9395 - kl_loss: 0.5695 - val_loss: 0.0000e+00 - val_kl_loss: 0.5888 - val_total_loss: 48.9094 - val_reconstruction_loss: 48.3206 </code></pre>
2021-01-06 18:02:57.933000+00:00
2021-04-28 09:42:16.063000+00:00
2021-01-06 20:27:49.617000+00:00
tensorflow|keras|neural-network
['https://github.com/keras-team/keras-io/commit/e68c2209f2e96da84babd5017d15911fa4d3c7e4#diff-8522b8e0350af836c7852feb030176985f2267bf6cbdec97a87fbac090a135a8', 'https://arxiv.org/pdf/1910.00698.pdf']
2
65,976,169
<p>First, what is YOLOv3 composed of?</p> <p>YOLOv3 is composed of two parts:</p> <ol> <li>Backbone or Feature Extractor --&gt; Darknet53</li> <li>Head or Detection Blocks --&gt; 53 layers</li> </ol> <p>The head is used for (1) bounding box localization, and (2) identifying the class of the object inside the box.</p> <p>YOLOv4 uses the same &quot;Head&quot; as YOLOv3.</p> <p>To summarize, YOLOv4 has three main parts:</p> <ol> <li>Backbone --&gt; CSPDarknet53</li> <li>Neck (connects the backbone with the head) --&gt; SPP, PAN</li> <li>Head --&gt; YOLOv3's Head</li> </ol> <p>References:</p> <ul> <li>Section 1.A. in <a href="https://ieeexplore.ieee.org/document/9214094" rel="noreferrer">https://ieeexplore.ieee.org/document/9214094</a></li> <li>Page 5 of <a href="http://arxiv.org/abs/2004.10934" rel="noreferrer">http://arxiv.org/abs/2004.10934</a></li> </ul>
2021-01-31 05:53:07.517000+00:00
2021-01-31 05:53:07.517000+00:00
null
null
65,971,973
<p>I was going through <a href="https://arxiv.org/abs/2004.10934?" rel="nofollow noreferrer">yolov4</a> paper where the authors have mentioned Backbone(CSP DARKNET-53), Neck (SPP followed by PANet) &amp; than Head(YOLOv3). Hence is the architecture something like this:</p> <p>CSP Darknet-53--&gt;SPP--&gt;PANet--&gt;YOLOv3(106 layers of YOLOv3).</p> <p>Does this mean YOLOv4 incorporates entire YOLOv3?</p>
2021-01-30 18:58:33.290000+00:00
2021-01-31 05:53:07.517000+00:00
null
machine-learning|neural-network|artificial-intelligence|object-detection|yolo
['https://ieeexplore.ieee.org/document/9214094', 'http://arxiv.org/abs/2004.10934']
2
58,233,213
<p>I agree with dyukha that it depends. LSTMs have shown good performance on different forecasting tasks: <a href="https://eng.uber.com/m4-forecasting-competition/" rel="nofollow noreferrer">this approach</a> with LSTM won the M4 competition, and Amazon uses LSTM for forecasting as well (<a href="https://arxiv.org/abs/1704.04110" rel="nofollow noreferrer">paper</a>, <a href="https://medium.com/@julsimon/predicting-world-temperature-with-time-series-and-deepar-on-amazon-sagemaker-e371cf94ddb5" rel="nofollow noreferrer">blog</a>).</p> <p>Implementing such approaches from scratch is doable but can be a bit tricky if you are not familiar with a neural network framework. However, you can directly use the implementations from <a href="https://github.com/awslabs/gluon-ts" rel="nofollow noreferrer">Gluon-ts</a>, which contains different types of neural network architectures (LSTM, convolution, feedforward, attention, etc.) that you can try to see which one works best with your data.</p>
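<p>A hedged sketch of what that looks like with gluon-ts (module paths follow the early 0.x releases and may have moved since; the series is a toy stand-in):</p> <pre><code># Hedged sketch: DeepAR via gluon-ts (API as of the early 0.x releases).
import numpy as np
from gluonts.dataset.common import ListDataset
from gluonts.model.deepar import DeepAREstimator
from gluonts.trainer import Trainer

my_series = np.sin(np.arange(200) / 10.0)   # stand-in for your observed time series
train_ds = ListDataset([{'start': '2019-01-01', 'target': my_series}], freq='D')

estimator = DeepAREstimator(freq='D', prediction_length=14, trainer=Trainer(epochs=10))
predictor = estimator.train(train_ds)
forecast = next(predictor.predict(train_ds))  # probabilistic forecast for the next 14 steps
</code></pre>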
2019-10-04 09:05:31.980000+00:00
2019-10-04 09:05:31.980000+00:00
null
null
58,232,101
<p>I am thinking of using Neural Networks for forecasting. What kind of Neural Networks are suitable for this occasion? Are RNN suitable? LSTM? Thank you...</p>
2019-10-04 07:52:54.577000+00:00
2019-10-04 09:05:31.980000+00:00
null
machine-learning
['https://eng.uber.com/m4-forecasting-competition/', 'https://arxiv.org/abs/1704.04110', 'https://medium.com/@julsimon/predicting-world-temperature-with-time-series-and-deepar-on-amazon-sagemaker-e371cf94ddb5', 'https://github.com/awslabs/gluon-ts']
4
56,623,009
<p>If I had to do a thing like that, I would concatenate the image features and the numerical features once they have the same form - a feature vector. For that, you can view the convolutional part of the network as a feature extractor whose output after the last pooling layer is a list of features, i.e. it will have a shape like [batch_size, 1, 1, N]. At this point you can easily append/concatenate your regular numerical features before feeding them into a dense layer.</p> <p>A couple of things I would be on the lookout for:</p> <ul> <li>be sure that the numerical and convolutional features come from the same distribution, i.e. BatchNorm is applied to both</li> <li>be sure that they have roughly the same size, i.e. if you have 2048 conv features and only 5 numerical features it might not quite work as is.</li> </ul> <p>You can get more inspiration from <a href="https://arxiv.org/abs/1606.07792" rel="nofollow noreferrer">Wide &amp; Deep Learning for Recommender Systems</a>.</p>
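<p>A hedged Keras sketch of that concatenation (layer sizes are illustrative; the three extra inputs stand for weight, height and age):</p> <pre><code># Hedged sketch: fuse conv image features with numerical features before the dense layers.
from tensorflow import keras
from tensorflow.keras import layers

img_in = keras.Input(shape=(64, 64, 3))
x = layers.Conv2D(32, 3, activation='relu')(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation='relu')(x)
x = layers.GlobalAveragePooling2D()(x)           # conv tower as a feature extractor

num_in = keras.Input(shape=(3,))                 # e.g. weight, height, age
n = layers.BatchNormalization()(num_in)          # keep both branches on a comparable scale

merged = layers.Concatenate()([x, n])
h = layers.Dense(64, activation='relu')(merged)
out = layers.Dense(3, activation='softmax')(h)   # the three risk classes
model = keras.Model([img_in, num_in], out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
</code></pre>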
2019-06-16 22:16:29.457000+00:00
2019-06-16 22:16:29.457000+00:00
null
null
56,622,689
<p>I believe this is my first question here.</p> <p>I am very new to Neural Networks. I just started working on one in Python that is supposed to look at levels of glucose in patients with a risk of diabetes and rank them from 1 to 3 on their risk of developing the disease. With 1 being high risk, and 3 being low risk.</p> <p>Right now, I have ~110 graphs previously ranked by a doctor (42 risk 1, 51 risk 2, 10 of risk 3). I randomly took 25% of each group as the test set, and put the rest as training, then gave it to a Keras for learning. </p> <p>It works just fine. Here's my code:</p> <pre class="lang-py prettyprint-override"><code> print("Convoluting") classifier.add(Convolution2D(32, 3, 3, input_shape = (64, 64, 3), activation = 'relu')) print("Pooling") classifier.add(MaxPooling2D(pool_size = (2,2))) print("Flattening") classifier.add(Flatten()) print("Connecting") classifier.add(Dense(activation = 'relu', units=128)) classifier.add(Dense(activation = 'softmax', units=3)) print("Compiling CNN") classifier.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy']) print("Generating images") from keras.preprocessing.image import ImageDataGenerator train_datagen = ImageDataGenerator() test_datagen = ImageDataGenerator() print("Setting sets") training_set = train_datagen.flow_from_directory( 'dataset/train_set', target_size=(64,64), batch_size=Batches, class_mode='categorical') test_set = test_datagen.flow_from_directory( 'dataset/test_set', target_size=(64,64), batch_size=Batches, class_mode='categorical') print("training nn...") from IPython.display import display from PIL import Image classifier.fit_generator( training_set, steps_per_epoch=StepsPerEpoch, epochs=Epochs, validation_data=test_set, validation_steps=ValidationSteps) </code></pre> <p>However, the accuracy after the training won't go above 0.4. Now, I know I have a relatively small sample for training a neural network, but I currently don't have access to information from more patients. I do, however, have access to demographic data from those patients, like weight, height, and age.</p> <p><strong>Basically, I would like to somehow include the weight, height, and age of each patient along with the graph showing their levels of glucose. So my program knows to take that information into account when making a judgement.</strong></p> <p>I haven't been able to find anything similar when searching online, though it may be due to my little knowledge on the matter. What should I do?</p> <p>Thanks for your time.</p>
2019-06-16 21:15:31.557000+00:00
2019-06-17 07:01:49.493000+00:00
2019-06-16 21:46:47.053000+00:00
python|tensorflow|keras|neural-network|deep-learning
['https://arxiv.org/abs/1606.07792']
1
63,369,618
<p>The <a href="https://arxiv.org/abs/1612.03651" rel="nofollow noreferrer">paper which introduced the FastText team's quantization strategy</a> only evaluated classification models, and used some pruning steps that might only make sense with labeled training documents. (Though I don't see the arguments to <code>-quantize</code> as including the original training docs, so I'm not sure the pruning technique described in the paper is fully implemented.)</p> <p>While some of the compression steps could be applied to the unsupervised dense vectors, I've not yet seen a library offering that functionality, but it could be a neat thing to implement/add.</p> <p>However, it's possible that the kind of classification done by the FastText work is a &quot;sweet spot&quot; for these techniques, and that applied to other word-vectors they'd have a much larger negative impact on downstream uses. So, extension of the technique should be accompanied by some experiments confirming its value.</p>
2020-08-12 04:06:32.320000+00:00
2020-08-12 04:06:32.320000+00:00
null
null
63,359,880
<p>I am trying to quantize the unsupervised model in fasttext using this command.</p> <pre><code>model.quantize(input=train_data, qnorm=True, retrain=True, cutoff=200000) </code></pre> <p>It's throwing an error that it is supported for only supervised models.</p> <p><a href="https://i.stack.imgur.com/VvVad.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VvVad.png" alt="enter image description here" /></a></p> <p>Is there any alternate way to quantize the unsupervised models?</p>
2020-08-11 14:05:33.800000+00:00
2021-12-14 10:34:57.503000+00:00
null
python|compression|fasttext
['https://arxiv.org/abs/1612.03651']
1
41,741,276
<p>One possible approach is to compute the sum using a <em>superaccumulator</em>: this is an algorithm for computing exact sums of floating point numbers. Although these ideas have been around for a while, the term is a relatively new one.</p> <p>In some sense, you can think of it as an extension of Kahan summation, where the sequential sum is stored as an array of values, rather than just a pair. The main challenge then becomes figuring out how to allocate the precision amongst the various values.</p> <p>Some relevant papers and code:</p> <ul> <li><p>Y. K. Zhu and W. B. Hayes. "Algorithm 908: Online Exact Summation of Floating-Point Streams". <em>ACM Transactions on Mathematical Software</em> (ACM TOMS), 37(3):37:1-37:13, September 2010. doi: <a href="http://dx.doi.org/10.1145/1824801.1824815">10.1145/1824801.1824815</a></p> <ul> <li>Unfortunately the paper and code are behind a paywall, but this appears to be <a href="https://github.com/aseldawy/sumn/tree/master/908">the C++ code</a>.</li> </ul></li> <li><p>R. M. Neal, "Fast Exact Summation using Small and Large Superaccumulators". 2015. arXiv: <a href="https://arxiv.org/abs/1505.05571">1505.05571</a></p> <ul> <li><a href="https://arxiv.org/src/1505.05571v1">C code available</a></li> </ul></li> <li><p>M. T. Goodrich, A. Eldawy "Parallel Algorithms for Summing Floating-Point Numbers". 2016. arXiv: <a href="https://arxiv.org/abs/1605.05436">1605.05436</a></p> <ul> <li><a href="https://github.com/aseldawy/sumn">Java code of this and the above</a></li> </ul></li> </ul>
2017-01-19 11:59:15.080000+00:00
2017-01-19 11:59:15.080000+00:00
null
null
41,728,910
<p>Assume You're given two sets of floating point variables implemented according to IEEE754, meant to be treated as exact values calculated according to formulae present in standard. All legal values are possible. The amount of variables in set may be any natural number.</p> <p>What would be a good way to compare exact, in mathematical sense, sums of values represented by said variables. Due to domain's nature, the problem can easily be represented as comparing a single sum to zero. You can disregard the possibility of presence of NaNs or Infinities, as it is irrelevant to core problem. (Those values can be checked for easily and independently, and acted upon in a manner suiting particular application of this problem.)</p> <p>A naive approach would be to simply sum and compare, or sum values of one set and subtract values of another.</p> <pre><code> bool compare(const std::vector&lt;float&gt;&amp; lhs, const std::vector&lt;float&gt;&amp; rhs) { float lSum = 0.0f; for (auto value : lhs) { lSum += value; } float rSum = 0.0f; for (auto value : rhs) { rSum += value; } return lSum &lt; rSum; } </code></pre> <p>Quite obviously there are problems with naive approach, as mentioned in various other questions regarding floating point arithmetic. Most of the problems are related to two difficulties:</p> <ul> <li>result of addition of floating point values differs depending on order</li> <li><p>certain orders of addition of certain sets of values may result in intermediate overflow (intermediate result of calculations goes beyond range supported by available data type)</p> <pre><code>float small = strtof("0x1.0p-126", NULL); float big = strtof("0x1.8p126", NULL); std::cout &lt;&lt; std::hexfloat &lt;&lt; small + big - big &lt;&lt; std::endl; std::cout &lt;&lt; std::hexfloat &lt;&lt; (big-2*small) + (big-small) + big - (big+small) - (big+2*small) &lt;&lt; std::endl; </code></pre> <p>This code will result in <code>0</code> and <code>inf</code>; this illustrates how ordering affects the result. Hopefully, also that the problem of ordering is non-trivial.</p> <pre><code>float prev; float curr = 0.0f; do { prev = curr; curr += strtof("0x1.0p-126", NULL); } while (prev != curr); std::cout &lt;&lt; std::hexfloat &lt;&lt; curr &lt;&lt; std::endl; </code></pre></li> </ul> <p>This code, given sufficient time to actually finish computing, would result in <code>0x1.000000p-102</code>, not, as could be naively expected, <code>0x1.fffffep127</code> (Change of curr initialization to `strtof("0x1.fff000p-103") would be advised to actually observe this.); this illustrates how proportion between intermediate results of addition and particular addends affects the result.</p> <p>A lot has been said about obtaining best precision, eg. <a href="https://stackoverflow.com/questions/6699066/in-which-order-should-floats-be-added-to-get-the-most-precise-result">this question</a>.</p> <p>The problem at hand differs in that we do not want to maximize precision, but we have a well-defined function that needs to be implemented exactly.</p> <p>While for some the idea that it may be useful exercise seems controversial at best, consider the following scenario: comparison between those value sets could be a cornerstone of other operations performed on entire datasets independently in various environments. Synchronized, flawless operation of some systems may depend on this comparison being well defined and deterministically implemented, irregardless of addends order and particular architecture implementing IEEE754 or not. 
</p> <p>This, or just curiosity.</p> <p>In the discussion, <a href="https://en.wikipedia.org/wiki/Kahan_summation_algorithm" rel="nofollow noreferrer">Kahan summation algorithm</a> has been mentioned as relevant. However this algorithm is a reasonable attempt at minimizing error. It neither guarantees correct sign of result, nor is independent of the order of operations (to at least guarantee consistent, if wrong, result, for permutations of sets).</p> <p>One of the most obvious solutions would be to employ/implement fixed point arithmetic using sufficient amount of bits to represent every possible operand value exactly and keep exact intermediate result.</p> <p>Perhaps however this can be done using only floating point arithmetic in a manner that guarantees correct sign of result. If so, the problem of overflow (as illustrated in one of the examples above) needs to be addressed in solution, as this question has particular technical aspect.</p> <p>(What follows is original question.)</p> <p><em>I have two sets of multiple floating point (float or double) values. I want to provide a perfect answer to the question, which set has larger sum. Because of artifacts in floating point arithmetic, in some corner cases the result of naive approach may be wrong, depending on order of operations. Not to mention simple sum can result in overflow. I can't provide any effort on my side, because all I have is vague ideas, all of them complicated and not convincing.</em></p>
2017-01-18 20:32:16.747000+00:00
2017-01-19 18:39:34.600000+00:00
2017-05-23 12:01:29.047000+00:00
c++|c|floating-point|overflow|rounding-error
['http://dx.doi.org/10.1145/1824801.1824815', 'https://github.com/aseldawy/sumn/tree/master/908', 'https://arxiv.org/abs/1505.05571', 'https://arxiv.org/src/1505.05571v1', 'https://arxiv.org/abs/1605.05436', 'https://github.com/aseldawy/sumn']
6
44,641,302
<p>Having only one element will make the normalized activation zero if epsilon is non-zero (the variance is zero, and the mean equals the input).<br> It's better to delete the BN layers from the network and try the SELU activation function (scaled exponential linear units). This is from the paper <a href="https://arxiv.org/pdf/1706.02515.pdf" rel="nofollow noreferrer">'Self-Normalizing Neural Networks'</a> (SNNs).</p> <p>Quote from the paper:</p> <blockquote> <p>While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are “scaled exponential linear units” (SELUs), which induce self-normalizing properties.</p> </blockquote> <p>The SELU is defined as:</p> <pre><code>def selu(x, name="selu"):
    alpha = 1.6732632423543772848170429916717
    scale = 1.0507009873554804934193349852946
    return scale * tf.where(x &gt;= 0.0, x, alpha * tf.nn.elu(x))
</code></pre>
2017-06-19 23:10:40.833000+00:00
2017-06-19 23:10:40.833000+00:00
null
null
44,621,731
<p>Training fully convolutional nerworks (FCNs) for pixelwise semantic segmentation is very memory intensive. So we often use batchsize=1 for traing FCNs. However, when we finetune the pretrained networks with BatchNorm (BN) layers, batchsize=1 doesn't make sense for the BN layers. So, how to handle the BN layers?</p> <p>Some options:</p> <ol> <li><p>delete the BN layers (merge the BN layers with the preceding layers for the pretrained model)</p></li> <li><p>Freeze the parameters and statistics of the BN layers</p></li> <li><p>....</p></li> </ol> <p>which is better and any demo for implementation in pytorch/tf/caffe?</p>
2017-06-19 03:10:38.270000+00:00
2018-03-27 12:57:24.247000+00:00
2017-06-19 03:17:53.337000+00:00
tensorflow|deep-learning|caffe|pytorch
['https://arxiv.org/pdf/1706.02515.pdf']
1
45,573,459
<p><a href="https://arxiv.org/pdf/1502.03167.pdf" rel="nofollow noreferrer">Batch Normalization</a> was introduced to reduce the internal covariate shift of the input feature maps. Because the parameters of each layer change after every optimization step, the input distribution of a layer also changes, and this slows down model convergence. By using Batch Normalization we can normalize the input distribution irrespective of the batch_size (whether batch_size = 1 or larger).</p> <blockquote> <p>BN normalizes the input distribution</p> </blockquote> <p>For a convolutional network, the input to an intermediate layer is a 4D tensor, <code>[batch_size, width, height, num_filters]</code>, and normalization affects all the feature maps.</p> <blockquote> <p>delete the BN layers (merge the BN layers with the preceding layers for the pretrained model)</p> </blockquote> <p>This may further slow down the training step, and convergence may not be achieved.</p> <blockquote> <p>Freeze the parameters and statistics of the BN layers</p> </blockquote> <p>Sometimes the input data distribution for retraining/fine-tuning may vary significantly from the original data used to train the pretrained model used for initialization, due to which your model may end up in a non-optimal solution.</p>
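<p>If you go with the freezing option, a hedged Keras/TF2 sketch (<code>model</code> stands for your already-built network; under TF2 semantics, <code>trainable = False</code> on a BN layer also stops its moving-statistics updates):</p> <pre><code># Hedged sketch: freeze all BatchNorm layers before fine-tuning with batch_size=1.
from tensorflow.keras.layers import BatchNormalization

for layer in model.layers:                  # model: your pretrained Keras network
    if isinstance(layer, BatchNormalization):
        layer.trainable = False             # freezes gamma/beta and the moving statistics
# re-compile afterwards so the change takes effect, e.g.:
# model.compile(optimizer='adam', loss='categorical_crossentropy')
</code></pre>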
2017-08-08 16:23:53.267000+00:00
2017-08-08 16:23:53.267000+00:00
2020-06-20 09:12:55.060000+00:00
null
44,621,731
<p>Training fully convolutional nerworks (FCNs) for pixelwise semantic segmentation is very memory intensive. So we often use batchsize=1 for traing FCNs. However, when we finetune the pretrained networks with BatchNorm (BN) layers, batchsize=1 doesn't make sense for the BN layers. So, how to handle the BN layers?</p> <p>Some options:</p> <ol> <li><p>delete the BN layers (merge the BN layers with the preceding layers for the pretrained model)</p></li> <li><p>Freeze the parameters and statistics of the BN layers</p></li> <li><p>....</p></li> </ol> <p>which is better and any demo for implementation in pytorch/tf/caffe?</p>
2017-06-19 03:10:38.270000+00:00
2018-03-27 12:57:24.247000+00:00
2017-06-19 03:17:53.337000+00:00
tensorflow|deep-learning|caffe|pytorch
['https://arxiv.org/pdf/1502.03167.pdf']
1
44,622,244
<p>I do not have an exact answer, but here are my thoughts:</p> <hr> <blockquote> <p>networks with BatchNorm (BN) layers, batchsize=1 doesn't make sense for the BN layers</p> </blockquote> <p>The main motivation of BN is to fix the distribution (mean/variance) of the input within the batch. In my opinion, this does not make sense with only one element. <a href="https://arxiv.org/pdf/1502.03167.pdf" rel="nofollow noreferrer">Judging from the paper</a> <a href="https://i.stack.imgur.com/G0KXa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G0KXa.png" alt="enter image description here"></a></p> <p>you would need to calculate the mean and the variance of a single element, which is not meaningful.</p> <hr> <p>You can always just remove BN, but are you sure you can't afford at least 16 elements in the batch?</p>
2017-06-19 04:23:54.840000+00:00
2017-06-19 05:12:18.563000+00:00
2017-06-19 05:12:18.563000+00:00
null
44,621,731
<p>Training fully convolutional nerworks (FCNs) for pixelwise semantic segmentation is very memory intensive. So we often use batchsize=1 for traing FCNs. However, when we finetune the pretrained networks with BatchNorm (BN) layers, batchsize=1 doesn't make sense for the BN layers. So, how to handle the BN layers?</p> <p>Some options:</p> <ol> <li><p>delete the BN layers (merge the BN layers with the preceding layers for the pretrained model)</p></li> <li><p>Freeze the parameters and statistics of the BN layers</p></li> <li><p>....</p></li> </ol> <p>which is better and any demo for implementation in pytorch/tf/caffe?</p>
2017-06-19 03:10:38.270000+00:00
2018-03-27 12:57:24.247000+00:00
2017-06-19 03:17:53.337000+00:00
tensorflow|deep-learning|caffe|pytorch
['https://arxiv.org/pdf/1502.03167.pdf', 'https://i.stack.imgur.com/G0KXa.png']
2
71,661,856
<p>You're right - Faster R-CNN already uses RPN.</p> <p>But you're likely misreading the title of the other table. It is &quot;RPN &amp; <strong>Fast</strong> R-CNN&quot;.</p> <p><a href="https://arxiv.org/pdf/1504.08083v2.pdf" rel="nofollow noreferrer">Fast R-CNN</a> is the predecessor of <a href="https://arxiv.org/pdf/1506.01497.pdf" rel="nofollow noreferrer">Faster R-CNN</a>. It takes as input an entire image and <strong>a set of object proposals</strong>. These object proposals have to therefore be pre-computed which, in the original paper, was done using <a href="https://ivi.fnwi.uva.nl/isis/publications/2013/UijlingsIJCV2013/UijlingsIJCV2013.pdf" rel="nofollow noreferrer">Selective Search</a>.</p> <p>Since the object proposal process is not part of the network architecture itself, it could use any other method, including an RPN. This is what you see in the Detectron2 model zoo - the pre-trained Fast R-CNN model uses an independently pre-trained RPN to generate the proposals. See the <a href="https://github.com/facebookresearch/detectron2/blob/main/configs/COCO-Detection/fast_rcnn_R_50_FPN_1x.yaml" rel="nofollow noreferrer">config</a> that specifies separate proposal files as part of the dataset.</p>
2022-03-29 12:16:31.580000+00:00
2022-03-29 12:16:31.580000+00:00
null
null
71,443,140
<p>I am trying to implement a pretrained model from the Detectron2 library for object detection and it seems that Faster R-CNN models outperform the RetinaNet models. However, when accessing the model zoo, I came across Faster R-CNN models and RPN Faster R-CNN models. I scoured the internet but I am struggling to find the difference between these models. Does not Faster R-CNN already use RPN?</p> <p>Model Zoo: <a href="https://github.com/facebookresearch/detectron2/blob/main/MODEL_ZOO.md" rel="nofollow noreferrer">https://github.com/facebookresearch/detectron2/blob/main/MODEL_ZOO.md</a></p> <p><a href="https://i.stack.imgur.com/G52TI.png" rel="nofollow noreferrer">Detectron2 Model Zoo</a></p>
2022-03-11 18:29:13.660000+00:00
2022-03-29 12:16:31.580000+00:00
null
object-detection|faster-rcnn|detectron
['https://arxiv.org/pdf/1504.08083v2.pdf', 'https://arxiv.org/pdf/1506.01497.pdf', 'https://ivi.fnwi.uva.nl/isis/publications/2013/UijlingsIJCV2013/UijlingsIJCV2013.pdf', 'https://github.com/facebookresearch/detectron2/blob/main/configs/COCO-Detection/fast_rcnn_R_50_FPN_1x.yaml']
4
14,394,713
<p>First of all, I'm not an expert in linguistics or the study of language. I think I understand what you're trying to do, but I don't know the best way to do it.</p> <p>If I got it right, you want to determine some centrality measure for your words (that would explain the social network reference), to find those that are the most linked to others, is that it?</p> <p>The problem if you try that is that you will certainly find that the most central words are the most uninteresting ones (the, if, then, some redundant adjectives...), unless you apply a tokenization and lemmatization procedure beforehand. You could then keep only nouns and stemmed verbs, and only then try your approach.</p> <p>Another problem to keep in mind is that words are important both by their presence and by their rarity (see the tf-idf weighting measure, for instance).</p> <p>To conclude, I did the following search on Google:</p> <p>"<em>n gram graph language centrality word</em>"</p> <p>and found this paper that seems interesting for what you're asking (I might give it a look myself!):</p> <p><a href="http://arxiv.org/abs/1109.2128" rel="nofollow">LexRank: Graph-based Lexical Centrality as Salience in Text Summarization</a></p>
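<p>As a toy illustration of the centrality idea after stop-word removal (the tiny corpus and names are made up; networkx assumed available):</p> <pre><code># Hedged sketch: word co-occurrence graph over sentences + PageRank as a centrality score.
import itertools
import networkx as nx

sentences = [['information', 'flows', 'document'],
             ['document', 'graph', 'entity'],
             ['entity', 'information', 'graph']]   # pre-tokenized, stop words removed

G = nx.Graph()
for sent in sentences:
    G.add_edges_from(itertools.combinations(set(sent), 2))

ranking = sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1])
print(ranking)                                     # most central words first
</code></pre>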
2013-01-18 08:04:18.610000+00:00
2013-01-18 08:04:18.610000+00:00
null
null
13,730,777
<p>I have a problem for network. For one document I am extracting some information. I am drawing nice graphs for them. But in a document information flows. I am trying to depict it in graph like the way one reads a text flowing with text and then important most entity first and then the next important one.</p> <p>To understand and grasp this problem what are the kinds of things I have to study or which aspect of network theory or graph theory deals with it.</p> <p>If any one can kindly refer up. Regs, SK. </p>
2012-12-05 19:18:07.177000+00:00
2013-01-18 08:04:18.610000+00:00
null
networking|graph|social-networking|data-mining|information-retrieval
['http://arxiv.org/abs/1109.2128']
1
42,131,774
<p>First, are you sure removing unknown terms is so bad? (Have you tried it?) If there were no examples of those terms in your training data, they may not be that common or important – at least not until you have more training data with many varied examples of their use. And even if they are relevant, until you have many examples up front, you can't know much about their significance. </p> <p>As you note, you could re-train a Word2Vec model including the new examples, and thus get some vague idea of where the unknown words belong. You could conceivably then re-train any downstream classifier using all the new word-vectors, or project the new-word back into the original space. (This could use a method similar to that described in section 4 of the <em><a href="https://arxiv.org/abs/1309.4168" rel="nofollow noreferrer">Exploiting Similarities among Languages for Machine Translation</a></em> paper, except now your two 'languages' are the models before and after the new word(s) are added.) But if you're only working from a few such word-occurrences, or perhaps in a single new text, everything you'll learn about that word is already a function of the surrounding words already available to your classifier, so the gains might be quite small. (That is, it's only the heftier meaning that comes from many diverse examples elsewhere that's likely to add to understanding of a new text, beyond its existing context.)</p> <p>Some variants of word2vec, like Facebook's fastText, also learn vectors for fragments of words. Those are combined to make the full word vectors. Then when new out-of-original-vocabulary words are encountered later, they can synthesize word-vectors from the shared fragments. When the new word is morphologically related to a known word, this strategy can do OK. So you may want to take a look at FastText. (It also has a mode where classification-labels are mixed into word-vector training, which can serve to make the word-vectors better for later classification among the same labels.)</p>
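<p>A small hedged sketch of that subword behavior via gensim (parameter names follow gensim 4.x; the two-sentence corpus is a toy stand-in):</p> <pre><code># Hedged sketch: gensim's FastText synthesizes vectors for unseen words from char n-grams.
from gensim.models import FastText

corpus = [['training', 'a', 'text', 'classifier'],
          ['word', 'vectors', 'for', 'classification']]
model = FastText(corpus, vector_size=50, window=3, min_count=1, epochs=10)

vec = model.wv['classifiers']     # not in the corpus; built from shared subword fragments
print(model.wv.most_similar('classifiers', topn=2))
</code></pre>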
2017-02-09 08:40:33.887000+00:00
2017-02-09 08:40:33.887000+00:00
null
null
42,088,715
<p>I'm using word2vec model to build a classifier on training data set and wonder what are technics to deal with unseen terms (words) in the test data.</p> <p>Removing new terms doesn't seem like a best approach. My current thought is to recalculate word2vec on combined data set (training + test) and replace new terms with nearest word from training data set (or maybe some linear combination of 2-3 nearest). Sounds a bit tricky, but should be doable.</p> <p>Have you come across similar problem? Any idea/suggestion how to deal with unseen terms?</p>
2017-02-07 11:28:34.110000+00:00
2017-03-27 09:21:21.627000+00:00
2017-02-07 11:55:59.927000+00:00
machine-learning|nlp|word2vec
['https://arxiv.org/abs/1309.4168']
1
5,923,912
<p>You can do a very impressive bit of sparse SVD in R using random projection as described in <a href="http://arxiv.org/abs/0909.4061" rel="noreferrer">http://arxiv.org/abs/0909.4061</a></p> <p>Here is some sample code:</p> <pre><code># computes first k singular values of A with corresponding singular vectors incore_stoch_svd = function(A, k) { p = 10 # may need a larger value here n = dim(A)[1] m = dim(A)[2] # random projection of A Y = (A %*% matrix(rnorm((k+p) * m), ncol=k+p)) # the left part of the decomposition works for A (approximately) Q = qr.Q(qr(Y)) # taking that off gives us something small to decompose B = t(Q) %*% A # decomposing B gives us singular values and right vectors for A s = svd(B) U = Q %*% s$u # and then we can put it all together for a complete result return (list(u=U, v=s$v, d=s$d)) } </code></pre>
2011-05-07 20:48:49.550000+00:00
2011-05-07 20:48:49.550000+00:00
null
null
4,951,286
<p>I've got a sparse <code>Matrix</code> in R that's apparently too big for me to run <code>as.matrix()</code> on (though it's not super-huge either). The <code>as.matrix()</code> call in question is inside the <code>svd()</code> function, so I'm wondering if anyone knows a different implementation of SVD that doesn't require first converting to a dense matrix.</p>
2011-02-09 22:26:56.877000+00:00
2014-04-06 06:54:24.620000+00:00
null
r|sparse-matrix|svd
['http://arxiv.org/abs/0909.4061']
1
70,482,924
<p>There is a paper <a href="https://arxiv.org/abs/1910.14659" rel="noreferrer">Masked Language Model Scoring</a> that explores pseudo-perplexity from masked language models and shows that pseudo-perplexity, while not being theoretically well justified, still performs well for comparing &quot;naturalness&quot; of texts.</p> <p>As for the code, your snippet is perfectly correct but for one detail: in recent implementations of Huggingface BERT, <code>masked_lm_labels</code> are renamed to simply <code>labels</code>, to make interfaces of various models more compatible. I have also replaced the hard-coded <code>103</code> with the generic <code>tokenizer.mask_token_id</code>. So the snippet below should work:</p> <pre class="lang-py prettyprint-override"><code>from transformers import AutoModelForMaskedLM, AutoTokenizer import torch import numpy as np model_name = 'cointegrated/rubert-tiny' model = AutoModelForMaskedLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) def score(model, tokenizer, sentence): tensor_input = tokenizer.encode(sentence, return_tensors='pt') repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1) mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2] masked_input = repeat_input.masked_fill(mask == 1, tokenizer.mask_token_id) labels = repeat_input.masked_fill( masked_input != tokenizer.mask_token_id, -100) with torch.inference_mode(): loss = model(masked_input, labels=labels).loss return np.exp(loss.item()) print(score(sentence='London is the capital of Great Britain.', model=model, tokenizer=tokenizer)) # 4.541251105675365 print(score(sentence='London is the capital of South America.', model=model, tokenizer=tokenizer)) # 6.162017238332462 </code></pre> <p>You can try this code in Google Colab by running <a href="https://gist.github.com/avidale/f574c014cd686709636b89208f2259ce" rel="noreferrer">this gist</a>.</p>
2021-12-25 21:46:40.633000+00:00
2021-12-25 21:51:43.030000+00:00
2021-12-25 21:51:43.030000+00:00
null
70,464,428
<p>I have several masked language models (mainly Bert, Roberta, Albert, Electra). I also have a dataset of sentences. How can I get the perplexity of each sentence?</p> <p>From the huggingface documentation <a href="https://huggingface.co/docs/transformers/perplexity" rel="noreferrer">here</a> they mentioned that perplexity &quot;is not well defined for masked language models like BERT&quot;, though I still see people somehow calculate it.</p> <p>For example in this <a href="https://stackoverflow.com/questions/63030692/how-do-i-use-bertformaskedlm-or-bertmodel-to-calculate-perplexity-of-a-sentence">SO</a> question they calculated it using the function</p> <pre><code>def score(model, tokenizer, sentence, mask_token_id=103): tensor_input = tokenizer.encode(sentence, return_tensors='pt') repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1) mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2] masked_input = repeat_input.masked_fill(mask == 1, 103) labels = repeat_input.masked_fill( masked_input != 103, -100) loss,_ = model(masked_input, masked_lm_labels=labels) result = np.exp(loss.item()) return result score(model, tokenizer, '我爱你') # returns 45.63794545581973 </code></pre> <p>However, when I try to use the code I get <code>TypeError: forward() got an unexpected keyword argument 'masked_lm_labels'</code>.</p> <p>I tried it with a couple of my models:</p> <pre><code>from transformers import pipeline, BertForMaskedLM, BertForMaskedLM, AutoTokenizer, RobertaForMaskedLM, AlbertForMaskedLM, ElectraForMaskedLM import torch 1) tokenizer = AutoTokenizer.from_pretrained(&quot;bioformers/bioformer-cased-v1.0&quot;) model = BertForMaskedLM.from_pretrained(&quot;bioformers/bioformer-cased-v1.0&quot;) 2) tokenizer = AutoTokenizer.from_pretrained(&quot;sultan/BioM-ELECTRA-Large-Generator&quot;) model = ElectraForMaskedLM.from_pretrained(&quot;sultan/BioM-ELECTRA-Large-Generator&quot;) </code></pre> <p><a href="https://stackoverflow.com/questions/61470768/how-does-masked-lm-labels-argument-work-in-bertformaskedlm">This</a> SO question also used the <code>masked_lm_labels</code> as an input and it seemed to work somehow.</p>
2021-12-23 15:50:06.030000+00:00
2021-12-25 21:51:43.030000+00:00
null
nlp|pytorch|huggingface-transformers|bert-language-model|transformer-model
['https://arxiv.org/abs/1910.14659', 'https://gist.github.com/avidale/f574c014cd686709636b89208f2259ce']
2
70,073,289
<p>(Comments from Oct. 29 and Nov. 2 moved here and edited.)</p> <p>I should note that such subtle <a href="https://peteroupc.github.io/random.html#Ensuring_Reproducibility" rel="nofollow noreferrer">reproducibility issues</a> with pseudorandom number generators (PRNGs) can occur when floating-point arithmetic is involved. For instance, Intel's instruction set architecture might make use of 80-bit extended precision for internal arithmetic. Extended precision, though, is only one way (among a host of others) that floating-point arithmetic might lead to non-reproducible pseudorandom numbers. Consider that Intel's and Arm's instruction set architectures are different enough to cause reproducibility issues. (If I understand correctly, an Arm instruction set is what Apple's M1 chip uses.)</p> <p>By contrast, integer arithmetic has fewer reproducibility problems.</p> <p>Thus, if bit-for-bit reproducibility matters to you, you should try to find an R language PRNG that uses only integer operations. (Indeed, computers generate pseudorandom floating-point numbers via integers, not the other way around, and most PRNGs produce integers, not floating-point numbers.)</p> <p>For instance, for uniform variates, take the integer output of the Mersenne Twister algorithm without manipulating it. For Gaussian (and exponential) random variates, there is fortunately an <a href="https://arxiv.org/pdf/1303.6257.pdf" rel="nofollow noreferrer">algorithm by Karney</a> that generates arbitrary-precision variates with only integer operations. Also, consider rational arithmetic built on underlying integer operations.</p> <p>REFERENCES:</p> <p>Karney, C.F.F., 2016. Sampling exactly from the normal distribution. ACM Transactions on Mathematical Software (TOMS), 42(1), pp.1-14.</p>
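<p>To see the principle in action, here is a minimal sketch (in Python rather than R, purely for concreteness): consuming the Mersenne Twister's raw integer output involves no floating-point arithmetic, so the same seed yields the same bits on every platform.</p> <pre><code>import random

# Mersenne Twister, seeded deterministically.  getrandbits() returns the
# generator's raw integer output, so no floating-point rounding is involved
# and the result is reproducible bit-for-bit across platforms.
gen = random.Random(12345)
ints = [gen.getrandbits(32) for _ in range(3)]
print(ints)  # the same three integers on any platform
</code></pre>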
2021-11-22 22:46:15.783000+00:00
2021-11-23 05:53:19.593000+00:00
2021-11-23 05:53:19.593000+00:00
null
69,761,837
<p>Running the following code in R 4.1.1 gives different results between platforms.</p> <pre><code>set.seed(1) x &lt;- rnorm(3)[3] print(x, 22) # -0.83562861241004716 # intel windows # -0.8356286124100471557341 # m1 mac print(round(x, 15), 22) # -0.83562861241004704 # intel windows # -0.8356286124100470447118 # m1 mac </code></pre> <p>I know the size of difference is below <code>.Machine$double.eps</code> and the extra digits do not carry meaningful information.</p> <p>I am not happy with the fact that extra digits exist. How can I ensure exactly consistent results? Is there an RNG library that achieves this?</p> <p><strong>EDIT:</strong></p> <p>The bit representation is different.</p> <pre><code>set.seed(1) x &lt;- rnorm(100) x &lt;- sum(x) SoDA::binaryRep(x) .10101110001110000100001111110111000010011001011111011 # intel windows .10101110001110000100001111110111000010011001011111110 # m1 mac </code></pre> <p>Bits are also different in <code>runif()</code>. This suggests that the uniform-to-gaussian conversion is not the only breaking point.</p> <pre><code>set.seed(1) x &lt;- runif(10000000) x &lt;- sum(x) SoDA::binaryRep(x) # kind = &quot;Mersenne-Twister&quot; .10011000100101000110100110111100101000100000101100000 # intel windows .10011000100101000110100110111100101000011111001100000 # m1 mac # kind = &quot;Wichmann-Hill&quot; .10011000100111111110101000100001001001010100000011011 # intel windows .10011000100111111110101000100001001001010100001001010 # m1 mac # kind = &quot;Marsaglia-Multicarry&quot; .10011000100011100110000010000001011100011110100001110 # intel windows .10011000100011100110000010000001011100011110001010000 # m1 mac # kind = &quot;Super-Duper&quot; .10011000100010011010010110100001000101100011101011110 # intel windows .10011000100010011010010110100001000101100100001111101 # m1 mac # kind = &quot;Knuth-TAOCP-2002&quot; .10011000101000110101010111000111010011101001000101100 # intel windows .10011000101000110101010111000111010011101001000101101 # m1 mac # kind = &quot;Knuth-TAOCP&quot; .10011000100110001011010011000001011001001110011111000 # intel windows .10011000100110001011010011000001011001001110011111001 # m1 mac # kind = &quot;L'Ecuyer-CMRG&quot; .10011000100100010110100101101001011000000111010110101 # intel windows .10011000100100010110100101101001011000001000010100001 # m1 mac </code></pre>
2021-10-28 22:56:15.200000+00:00
2021-11-23 05:53:19.593000+00:00
2021-11-01 18:46:19.900000+00:00
r|random|floating-point|cross-platform
['https://peteroupc.github.io/random.html#Ensuring_Reproducibility', 'https://arxiv.org/pdf/1303.6257.pdf']
2
35,420,717
<p>See equation 3 in the description of cuDNN <a href="http://arxiv.org/pdf/1410.0759.pdf" rel="noreferrer">here</a>.</p> <p>Basically, for a single example (<code>n</code>), a single row (<code>p</code>), and a single column (<code>q</code>), the result of the spatial convolution will be a weighted sum of <code>5x5x3</code> values. So each activation will contain information from all 3 colors.</p>
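<p>To make that weighted sum concrete, here is a minimal NumPy sketch (an illustration of the arithmetic, not TensorFlow's actual implementation) of computing one activation of the first layer:</p> <pre><code>import numpy as np

# The 5x5 RGB neighbourhood around position (p, q) of one example n,
# and ONE of the 64 kernels; both span all 3 input channels.
patch  = np.random.rand(5, 5, 3)
kernel = np.random.rand(5, 5, 3)

# A single output activation: an elementwise product summed over all
# 5*5*3 values, so information from all 3 colors is mixed into it.
activation = np.sum(patch * kernel)
print(activation)
</code></pre>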
2016-02-15 23:02:17.193000+00:00
2016-02-15 23:02:17.193000+00:00
null
null
35,420,275
<p>At the moment I have some networks doing classification stuff with greyscaled images. I want to move on to colored (RGB) images.</p> <p>In the CIFAR-10 tutorial of Tensorflow I got confused by the weights for the convolution kernels. The first convolution there looks like this:</p> <pre><code>kernel = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64], stddev=1e-4, wd=0.0) conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME') </code></pre> <p>So it is a <code>5x5</code> convolution with an input of 3 (one for each color channel: the red, green and blue image information) and it is generating 64 feature maps.</p> <p>However, the second convolution layer takes an input of 64 feature maps:</p> <pre><code>kernel = _variable_with_weight_decay('weights', shape=[5, 5, 64, 64], stddev=1e-4, wd=0.0) conv = tf.nn.conv2d(norm1, kernel, [1, 1, 1, 1], padding='SAME') </code></pre> <p>...so, how does this process the color information? Does this mean the different color channels are somehow <em>spread</em> on the 64 feature maps of convolution layer 1?</p> <p>I thought conv layer 1 produces 64 feature maps for each color channel, therefore ending up in 3 * 64 = 196 feature maps...but obviously I were wrong.</p> <p>How is the color information <em>mixed</em> there in conv layer 1?</p>
2016-02-15 22:29:11.667000+00:00
2016-02-15 23:02:17.193000+00:00
null
python|colors|neural-network|tensorflow|conv-neural-network
['http://arxiv.org/pdf/1410.0759.pdf']
1
63,987,184
<p>In the appendix of <a href="https://arxiv.org/pdf/1911.09070.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1911.09070.pdf</a>, there is a section describing the image resolution used for training.</p> <p>Depending on the speed/performance trade-off, you may choose a smaller resolution for detecting your objects, like 640x640, or increase the resolution to 1024x1024 if your objects are small; in either case, preserve the aspect ratio when you resize your images.</p>
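<p>One common way to preserve the aspect ratio is to scale the longer side to the target size and zero-pad the rest (&quot;letterboxing&quot;). A minimal OpenCV sketch (an illustration only, not what the Object Detection API does internally):</p> <pre><code>import cv2

def letterbox(img, size=512):
    """Scale so the longer side equals `size`, then pad to a square,
    so the image content is never distorted."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(img, (int(w * scale), int(h * scale)))
    pad_h = size - resized.shape[0]
    pad_w = size - resized.shape[1]
    # Pad the bottom/right with zeros up to size x size.
    return cv2.copyMakeBorder(resized, 0, pad_h, 0, pad_w,
                              cv2.BORDER_CONSTANT, value=0)
</code></pre>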
2020-09-21 06:27:47.200000+00:00
2020-09-21 06:27:47.200000+00:00
null
null
63,982,231
<p>what is the best image size should I use for training an EfficientDet D0 512x512 for object detection. I have image size varying from 500x500 to 2000x2000 is this okay for training the EfficientDet D0 512x512?</p>
2020-09-20 18:17:14.677000+00:00
2020-09-21 06:27:47.200000+00:00
null
tensorflow|deep-learning|computer-vision|object-detection|object-detection-api
['https://arxiv.org/pdf/1911.09070.pdf']
1
23,459,768
<p>The normalising constant is fairly easy to calculate. See <a href="http://arxiv.org/abs/0706.1062" rel="nofollow"><em>Clauset et al.'s</em></a> power-law paper (in particular Table 2.1). For the continuous case, <em>C = (alpha-1) xmin^(alpha-1)</em>; the discrete case involves calculating the Hurwitz zeta function.</p> <p>You can also examine the R code:</p> <ul> <li><a href="https://github.com/csgillespie/poweRlaw/blob/master/pkg/R/pldis.R" rel="nofollow">Discrete</a></li> <li><a href="https://github.com/csgillespie/poweRlaw/blob/master/pkg/R/plcon.R" rel="nofollow">Continuous</a></li> </ul>
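<p>In the continuous case the fitted values can then be evaluated directly from <em>alpha</em> and <em>xmin</em>. A minimal sketch (in Python, with made-up parameter values for illustration):</p> <pre><code># Continuous power law: p(x) = C * x**(-alpha) for x &gt;= xmin,
# where C = (alpha - 1) * xmin**(alpha - 1) normalises the density.
alpha, xmin = 2.5, 1.0
C = (alpha - 1) * xmin ** (alpha - 1)

def fitted(x):
    return C * x ** (-alpha)

print(C, fitted(2.0))
</code></pre>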
2014-05-04 18:08:38.777000+00:00
2014-05-04 18:08:38.777000+00:00
null
null
23,459,626
<p>With the poweRlaw library, once <code>alpha</code> and <code>xmin</code> have been computed with <code>estimate_xmin</code>, which formula does the script use to compute the fitted values? I mean, assuming that y=C·x^(-alpha), my question is how the script computes the <em>normalization constant</em> from <code>alpha</code> and <code>xmin</code>.</p>
2014-05-04 17:54:43.683000+00:00
2014-05-04 18:08:38.777000+00:00
null
r|power-law
['http://arxiv.org/abs/0706.1062', 'https://github.com/csgillespie/poweRlaw/blob/master/pkg/R/pldis.R', 'https://github.com/csgillespie/poweRlaw/blob/master/pkg/R/plcon.R']
3
56,399,652
<p>For 1) I do not think there is a particular reason for the number of dense nodes used (128x16x16); however, the 16x16 is chosen because there is only 1 strided layer available to upsample 16x16 to 32x32.</p> <p>For 2) the first argument <code>256</code> used to instantiate <a href="https://keras.io/layers/convolutional/" rel="nofollow noreferrer"><code>Conv2D</code></a> defines the number of filters.</p> <p>In regards to your last question <code>Next, run another Conv2DTranspose and then another 3 Conv2d. Why?!</code>, I would recommend trying to increase/decrease the number of layers to get a feel for how the model behaves with those changes (performing better or not); this is part of the "<a href="https://arxiv.org/abs/1206.5533" rel="nofollow noreferrer">hyper-parameter tuning</a>" process when building a neural net. A shape trace is sketched below.</p> <p>Hope the above helps.</p>
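<p>A minimal Keras sketch (using the same layer hyper-parameters as the book's generator) that traces the spatial shapes, showing the single stride-2 <code>Conv2DTranspose</code> doing the 16x16 to 32x32 upsampling:</p> <pre><code>import keras
from keras import layers

inp = keras.Input(shape=(32,))
x = layers.Dense(128 * 16 * 16)(inp)
x = layers.Reshape((16, 16, 128))(x)            # (16, 16, 128)
x = layers.Conv2D(256, 5, padding='same')(x)    # (16, 16, 256): 'same' keeps 16x16
x = layers.Conv2DTranspose(256, 4, strides=2,
                           padding='same')(x)   # (32, 32, 256): stride 2 doubles H and W
print(keras.Model(inp, x).output_shape)         # (None, 32, 32, 256)
</code></pre>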
2019-05-31 17:45:16.777000+00:00
2019-05-31 17:57:31.693000+00:00
2019-05-31 17:57:31.693000+00:00
null
56,392,367
<p>I'm trying to understand how the adversarial generative network works: I found an example in the book by François Chollet (Deep learning with Python) in which there is an example of a GAN he uses CIFAR10 dataset, using the 'frog' class which contains 32x32 RGB images.</p> <p>I can't understand why:</p> <ul> <li>In (1) the input is transformed into 16 × 16 128-channel (why 128-channel?) feature map</li> <li>In (2) when a convolution is performed, with which filter? It is not specified</li> </ul> <p>Next, run another Conv2DTranspose and then another 3 Conv2d. Why?!</p> <p>At the end, I have a 32 × 32 1-channel feature map.</p> <pre><code>from keras import layers import numpy as np latent_dim = 32 height = 32 width = 32 channels = 3 generator_input = keras.Input(shape=(latent_dim,)) (1) x = layers.Dense(128 * 16 * 16)(generator_input) x = layers.LeakyReLU()(x) x = layers.Reshape((16, 16, 128))(x) (2) x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2DTranspose(256, 4, strides=2, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(channels, 7, activation='tanh', padding='same')(x) generator = keras.models.Model(generator_input, x) generator.summary() </code></pre>
2019-05-31 09:29:59.157000+00:00
2019-05-31 21:12:32.410000+00:00
null
machine-learning|deep-learning|artificial-intelligence|generative-adversarial-network
['https://keras.io/layers/convolutional/', 'https://arxiv.org/abs/1206.5533']
2
54,076,232
<h1>Reading and Writing cookies</h1> <p>You can use the <code>cookies</code> parameter, which maps cookie names to fields that are present in messages.</p> <p>For example, <code>.cookies.auth_token = "token"</code> binds the cookie called <code>auth_token</code> to the field <code>token</code> in messages (both in reading and writing).</p> <p>Here's a complete example where the <code>login</code> operation sets the cookie <code>auth_token</code> in the browser.</p> <pre><code>execution { concurrent } inputPort Server { Location: "socket://localhost:8080" Protocol: http // Binds the cookie "auth_token" to the message field "token" { .cookies.auth_token = "token" } RequestResponse: login } main { login( request )( response ) { if ( request.pwd == "secret" ) response &lt;&lt; "OK" { .token = new } else response &lt;&lt; "Invalid pwd" { .token = "" } } } </code></pre> <p>You can try it by browsing <a href="http://localhost:8080/login?pwd=secret" rel="nofollow noreferrer">http://localhost:8080/login?pwd=secret</a>.</p> <hr> <h1>Bonus: cookies with correlation sets</h1> <p>Using cookies like this allows for combining them with workflows, to program <a href="https://doi.org/10.1016/j.scico.2016.05.002" rel="nofollow noreferrer">process-aware web applications</a>. Here is a more elaborate example with the following workflow:</p> <ol> <li>The user logs in;</li> <li>If the login is successful, operation <code>say</code> can be called at will, until operation <code>logout</code> is called;</li> <li>The user logs out.</li> </ol> <p>I'm using correlation sets below to track the session.</p> <pre><code>include "console.iol" execution { concurrent } type LoginRequest:void { .token?:string .pwd:string } type TokenMessage:void { .token:string } type SayRequest:void { .token:string .msg:string } interface ServerIface { RequestResponse: login(LoginRequest)(TokenMessage) throws InvalidPwd, say(SayRequest)(void), logout(TokenMessage)(TokenMessage) } inputPort Server { Location: "socket://localhost:8080" Protocol: http { .cookies.auth_token = "token" } Interfaces: ServerIface } cset { token: SayRequest.token TokenMessage.token } main { login( request )( response ) { if ( request.pwd == "secret" ) response.token = csets.token = new else throw( InvalidPwd ) }; provide [ say( request )() { println@Console( csets.token + " says " + request.msg )() } ] until [ logout()( response ) { response.token = "" } ] } </code></pre> <p>To try it, you can navigate to these links (in order):</p> <ol> <li><a href="http://localhost:8080/login?pwd=secret" rel="nofollow noreferrer">http://localhost:8080/login?pwd=secret</a></li> <li><a href="http://localhost:8080/say?msg=Hello" rel="nofollow noreferrer">http://localhost:8080/say?msg=Hello</a></li> <li><a href="http://localhost:8080/logout" rel="nofollow noreferrer">http://localhost:8080/logout</a></li> </ol> <p>References:</p> <ul> <li>Sessions and correlation sets: <a href="https://jolielang.gitbook.io/docs/basics/sessions" rel="nofollow noreferrer">https://jolielang.gitbook.io/docs/basics/sessions</a></li> <li>provide-until statement: <a href="https://jolielang.gitbook.io/docs/basics/composing_statements#the-provide-until-statement" rel="nofollow noreferrer">https://jolielang.gitbook.io/docs/basics/composing_statements#the-provide-until-statement</a></li> <li>The paper on provide-until and web workflows in Jolie: <a href="https://doi.org/10.1016/j.scico.2016.05.002" rel="nofollow noreferrer">https://doi.org/10.1016/j.scico.2016.05.002</a> (open version: <a 
href="https://arxiv.org/abs/1410.3712" rel="nofollow noreferrer">https://arxiv.org/abs/1410.3712</a>)</li> </ul>
2019-01-07 14:26:54.470000+00:00
2019-01-13 15:03:16.217000+00:00
2019-01-13 15:03:16.217000+00:00
null
53,999,369
<p>I need to set a cookie in the browser from an operation of a Jolie service, but I can't find information on how to do it.</p> <p>I checked the doc at <a href="https://jolielang.gitbook.io/docs/protocols/http" rel="nofollow noreferrer">https://jolielang.gitbook.io/docs/protocols/http</a> and <a href="https://jolielang.gitbook.io/docs/web-applications/rest-apis-publication-and-integration" rel="nofollow noreferrer">https://jolielang.gitbook.io/docs/web-applications/rest-apis-publication-and-integration</a>, but it seems the use case I presented has not yet been covered.</p> <pre><code>inputPort MyIP {
    Protocol: http {
        ...
        ??? -&gt; myCookie;
        ...
    }
    Interfaces: LoginInterface
}

main {
    [login(credentials)(res) {
        ...
        myCookie=???
        ...
    }]
}
</code></pre> <p>I expect to see the cookie in the browser cookie store.</p>
2019-01-01 22:16:32.003000+00:00
2019-01-13 15:03:16.217000+00:00
null
cookies|jolie
['http://localhost:8080/login?pwd=secret', 'https://doi.org/10.1016/j.scico.2016.05.002', 'http://localhost:8080/login?pwd=secret', 'http://localhost:8080/say?msg=Hello', 'http://localhost:8080/logout', 'https://jolielang.gitbook.io/docs/basics/sessions', 'https://jolielang.gitbook.io/docs/basics/composing_statements#the-provide-until-statement', 'https://doi.org/10.1016/j.scico.2016.05.002', 'https://arxiv.org/abs/1410.3712']
9
39,060,363
<p>For flexibility in measuring the semantic similarity of very specific terms like Dehli or Hyderabad, what you want is not something hand-crafted like WordNet, but an <em>automatically-learned</em> similarity measure from a <em>very large database</em>. These are <a href="https://en.wikipedia.org/wiki/Semantic_similarity#Statistical_similarity" rel="nofollow">statistical similarity</a> approaches. Of course, you want to avoid having to train such a model on data yourself...</p> <p>Thus one thing that may be useful is the Google Distance (<a href="https://en.wikipedia.org/wiki/Normalized_Google_distance" rel="nofollow">wikipedia</a>, <a href="http://arxiv.org/pdf/cs/0412098.pdf" rel="nofollow">original paper</a>). It seems fairly simple to implement such a measure in a language like R (<a href="https://ryouready.wordpress.com/2009/01/12/r-normalized-google-distance-ngd-in-r-part-ii/" rel="nofollow">code</a>), and the original paper reports 87% agreement with WordNet.</p>
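<p>For reference, the Normalized Google Distance between terms x and y with page-hit counts f(x), f(y), f(x,y) and index size N is NGD(x,y) = (max(log f(x), log f(y)) - log f(x,y)) / (log N - min(log f(x), log f(y))). A minimal sketch (in Python; the hit counts below are made up purely for illustration):</p> <pre><code>from math import log

def ngd(fx, fy, fxy, n_pages):
    """Normalized Google Distance computed from search-engine hit counts."""
    return ((max(log(fx), log(fy)) - log(fxy))
            / (log(n_pages) - min(log(fx), log(fy))))

# Hypothetical hit counts for 'Delhi', 'Hyderabad', and both together:
print(ngd(fx=25_000_000, fy=12_000_000, fxy=6_000_000, n_pages=25_000_000_000))
</code></pre>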
2016-08-21 02:33:01.070000+00:00
2016-08-21 02:33:01.070000+00:00
null
null
39,048,941
<p>I am new to <code>nltk</code>, and I find the wordnet functionality pretty useful. It gives <code>synsets</code>, <code>hypernyms</code>, <code>similarity</code>, etc. However, it fails to give the similarity between locations like 'Delhi'-'Hyderabad', obviously because these words are not in the wordnet corpus.</p> <p>So, I would like to know if I can somehow update the wordnet corpus OR create wordnet over a different corpus, e.g. a set of pages extracted from wikipedia related to travel? If we can create wordnet over a different corpus, then what would be the format, the steps to do the same, and any limitations?</p> <p>Please can you point me to links that describe the above concerns? I have searched the internet, googled, and read portions of the nltk book, but I don't have a single hint to the above question.</p> <p>Pardon me if the question sounds completely ridiculous.</p>
2016-08-19 23:27:09.083000+00:00
2016-08-21 15:16:48.893000+00:00
null
nltk|wordnet|corpus
['https://en.wikipedia.org/wiki/Semantic_similarity#Statistical_similarity', 'https://en.wikipedia.org/wiki/Normalized_Google_distance', 'http://arxiv.org/pdf/cs/0412098.pdf', 'https://ryouready.wordpress.com/2009/01/12/r-normalized-google-distance-ngd-in-r-part-ii/']
4
60,250,131
<p>This is a slight variation of a special case of <a href="http://dimacs.rutgers.edu/~alantha/papers2/tspath.pdf" rel="nofollow noreferrer">The Traveling Salesman Path Problem</a>. In The Traveling Salesman Path Problem, you're given an undirected graph, a cost function on the edges, and two vertices <code>s</code> and <code>t</code>. The problem is to find a <a href="https://en.wikipedia.org/wiki/Hamiltonian_path" rel="nofollow noreferrer">Hamiltonian Path</a> (that is, a path that visits each vertex exactly once) from <code>s</code> to <code>t</code>.</p> <p>Your problem is the special case in which the input graph is a <a href="https://en.wikipedia.org/wiki/Clique_(graph_theory)" rel="nofollow noreferrer">clique</a> and the destination vertex <code>t</code> is an extra dummy vertex, connected to all other vertices by a 0-cost edge. Clearly, a solution to The Traveling Salesman Path Problem for the graph (with the extra dummy vertex <code>t</code>) induces a solution to your problem, obtained by simply ignoring the final extra step to the destination <code>t</code>.</p> <p>Unfortunately, just like the famous <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem" rel="nofollow noreferrer">Traveling Salesman Problem</a>, The Traveling Salesman Path Problem is not only NP-hard, but also NP-hard to approximate to within any constant factor. However, since your costs represent distances, maybe you could assume that the cost function satisfies <a href="https://en.wikipedia.org/wiki/Triangle_inequality" rel="nofollow noreferrer">The Triangle Inequality</a>?</p> <p>If the cost function satisfies The Triangle Inequality, then there exists a recent <a href="https://arxiv.org/abs/1805.04131" rel="nofollow noreferrer">1.5-approximation algorithm</a>. If this recent algorithm is an overkill, you can implement one of two simpler algorithms that are nicely described in <a href="https://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15859-f11/www/notes/lecture19.pdf" rel="nofollow noreferrer">lecture notes</a> by Professor Ryan O’Donnell from CMU, and settle for either a 2-approximation or a 5/3-approximation.</p>
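<p>The reduction itself is mechanical. A minimal sketch (in Python, purely illustrative) of building the augmented distance matrix with the dummy destination <code>t</code>:</p> <pre><code>import numpy as np

def add_dummy_destination(D):
    """Given the n x n distance matrix D of the clique, append a dummy
    vertex t joined to every vertex by a 0-cost edge.  A Traveling
    Salesman Path from s to t in the augmented graph, minus its final
    hop, is a shortest Hamiltonian path from s in the original graph."""
    n = D.shape[0]
    D2 = np.zeros((n + 1, n + 1))
    D2[:n, :n] = D          # original distances; row/column n is t
    return D2
</code></pre>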
2020-02-16 15:37:49.040000+00:00
2020-02-16 15:37:49.040000+00:00
null
null
60,241,664
<p>I have a complete undirected graph of locations (nodes), where each edge represents the distance between its connected nodes, and I want to find the shortest path starting from a start node without specifying the end node, so basically it can end at any node other than the first one.</p> <p>I looked through the TSP problem and the shortest Hamiltonian path, but I couldn't find the exact answer to my problem.</p> <p>So what is this problem exactly called, or what variant of shortest path problems is it?</p> <p>This is an example of what I am looking for. Let's have a complete weighted graph as follows:<a href="https://i.stack.imgur.com/O0xzI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O0xzI.png" alt="graph of locations"></a></p> <p>Each edge represents the distance between two locations, for example edge AB=5, AC=11, ...</p> <p>My goal is to start from node A and find the shortest path that covers all nodes (shortest possible path), where the end point can be any node other than A. For example, this path ends at E: <a href="https://i.stack.imgur.com/GaEjI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GaEjI.png" alt="shortest path"></a></p>
2020-02-15 18:09:46.207000+00:00
2020-02-16 15:37:49.040000+00:00
2020-02-15 20:50:12.507000+00:00
graph-theory|graph-algorithm
['http://dimacs.rutgers.edu/~alantha/papers2/tspath.pdf', 'https://en.wikipedia.org/wiki/Hamiltonian_path', 'https://en.wikipedia.org/wiki/Clique_(graph_theory)', 'https://en.wikipedia.org/wiki/Travelling_salesman_problem', 'https://en.wikipedia.org/wiki/Triangle_inequality', 'https://arxiv.org/abs/1805.04131', 'https://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15859-f11/www/notes/lecture19.pdf']
7
59,831,276
<p>In general, the best way to &quot;sync&quot; PRNGs between two programs in different languages is to implement the same PRNG in both languages.</p> <p>For your purposes, a linear congruential generator (LCG) is a simple PRNG if you only want &quot;something that looks random (though it doesn't have to be secure)&quot;. This kind of generator is trivial to implement in both C# and Python.</p> <p>One example, among many other possibilities, is the following 32-bit LCG (where <code>x</code> is the seed):</p> <p>C#:</p> <pre><code>// Generate the next x from the current one.
unchecked {
    // NOTE: x is an `int`
    x = ((int)0xadb4a92d * x) + 9999999;
}
</code></pre> <p>Python:</p> <pre><code># Generate the next x from the current one.
x = ((0xadb4a92d * x) + 9999999) &amp; 0xFFFFFFFF
</code></pre> <p>See section 8 of the very recent <a href="https://arxiv.org/pdf/2001.05304.pdf" rel="nofollow noreferrer">paper by Steele and Vigna</a> for other parameter choices as well as a review of the theory involving LCGs.</p> <p>However, LCGs are far from perfect. (For instance, the above LCG produces <code>x</code>'s with weak low bits, so that, e.g., every other <code>x</code> is odd and every other <code>x</code> is even.) And in general, LCGs, especially those with 32-bit seeds or other short seeds, are far from appropriate for many situations, including scientific work or information security. In case you want another choice for a PRNG, I list <a href="https://peteroupc.github.io/hqprng.html" rel="nofollow noreferrer">many of them</a>.</p>
2020-01-20 21:43:27.523000+00:00
2020-12-09 19:17:56.877000+00:00
2020-12-09 19:17:56.877000+00:00
null
59,829,276
<p>I have a game implemented in Unity/C# that generates simple environments using the built-in PRNG (<code>UnityEngine.Random</code>). I am trying to reimplement the environment generation procedure in Python 3. I need the random number generators to be synchronized so that when provided with the same seed, the actual game in Unity and the Python reimplementation generate the exact same environment. What would be the best approach to synchronizing the two platforms?</p> <p>Some solutions I have considered so far:</p> <ul> <li>Attempt to reimplement Python's default random number generator (<code>random</code>) in C#. The source code is available, but fairly long so may difficult to implement in practice.</li> <li>Attempt to reimplement <code>UnityEngine.Random</code> in Python. However, I don't have any source code, and even if I knew the class of PRNG used, there is no guarantee that I will be able to successfully reimplement it exactly the same.</li> <li>Implement the same PRNG in both. However, I don't know what a good option for a PRNG is for my use case. I basically need something that looks random (though it doesn't have to be secure) and doesn't take more than an hour or two to put together. <a href="https://en.wikipedia.org/wiki/List_of_random_number_generators" rel="nofollow noreferrer">Wikipedia</a> has a long list of PRNGs, and I have no idea the difficulty in implementing each of them.</li> <li>Or maybe someone else has done this at some point...? I couldn't find anything online, though.</li> </ul> <p>Any suggestions on the best approach, or a simple PRNG I can implement easily in both C# and Python?</p> <p>Thanks!</p>
2020-01-20 18:48:01.717000+00:00
2020-12-09 19:17:56.877000+00:00
null
c#|python|unity3d|random
['https://arxiv.org/pdf/2001.05304.pdf', 'https://peteroupc.github.io/hqprng.html']
2
39,160,938
<p>The following article explains in detail how to compute quantiles (the inverse CDF) for the inverse Gaussian distribution:</p> <p>Giner, G, and Smyth, GK (2016). statmod: probability calculations for the inverse Gaussian distribution. R Journal. <a href="http://arxiv.org/abs/1603.06687" rel="nofollow">http://arxiv.org/abs/1603.06687</a></p> <p>Code for the R language is contained in the R package statmod available from CRAN. For example:</p> <pre><code>&gt; library(statmod) &gt; qinvgauss(0.01, lower.tail=FALSE) [1] 4.98 </code></pre> <p>computes the 0.01 upper tail quantile of the standard IG distribution.</p>
2016-08-26 07:41:01.633000+00:00
2016-08-26 07:47:30.883000+00:00
2016-08-26 07:47:30.883000+00:00
null
13,143,292
<p>I want to compute the parameters mu and lambda for the <a href="http://en.wikipedia.org/wiki/Inverse_Gaussian_distribution" rel="nofollow">Inverse Gaussian Distribution</a> given the CDF.</p> <p>By 'given the CDF' I mean that I am given the data AND the (estimated) quantiles for the data, i.e.</p> <pre><code>Quantile - Value
0.01 - 10
0.5  - 12
0.7  - 13
</code></pre> <p>Now I want to find out the inverse gaussian distribution for this data so that I can e.g. look up the quantile for value 11 based on my distribution.</p> <p>How can I find out the values mu and lambda?</p> <p>The only solution I can think of is using Gradient descent to find the best mu and lambda using RMSE as an error measure.</p> <p>Isn't there a better solution?</p> <p>Comment: Matlab's MLE-Algorithm is not an option, since it does not use the quantile data.</p>
2012-10-30 16:15:49.363000+00:00
2016-08-26 07:47:30.883000+00:00
2012-12-07 21:58:46.787000+00:00
matlab|statistics|distribution|gaussian|quantile
['http://arxiv.org/abs/1603.06687']
1
51,332,488
<p>(1) You pick the desired dimensionality, as a meta-parameter of the model. Rigorous projects with enough time may try different sizes, to see what works best for their qualitative evaluations.</p> <p>(2) Individual dimensions/elements of each word-vector (floating-point numbers) in vanilla word2vec are not easily interpretable. It's only the arrangement of words as a whole that has usefulness – placing similar words near each other, and making relative directions (eg "towards 'queen' from 'king'") match human intuitions about categories/continuous-properties. And, because the algorithms use explicit randomization, and optimized multi-threaded operation introduces thread-scheduling randomness to the order-of-training-examples, even the exact same data can result in different (but equally good) vector-coordinates from run-to-run.</p> <p>(3) Basic word2vec doesn't have an easy fix, but there are a bunch of hints of polysemy in the vectors, and ongoing research work to disambiguate contrasting senses.</p> <p>For example, generally more-polysemous word-tokens wind up with word-vectors that are some combination of their multiple senses, and (often) of a smaller magnitude than less-polysemous words.</p> <p>This <a href="https://dl.acm.org/citation.cfm?id=2390645" rel="nofollow noreferrer">early paper</a> used multiple representations per word to help discover polysemy. Similar later papers like <a href="https://arxiv.org/abs/1610.07569v1" rel="nofollow noreferrer">this one</a> use clustering-of-contexts to discover polysemous words, then relabel them to give each sense its own vector.</p> <p><a href="https://arxiv.org/abs/1601.03764v2" rel="nofollow noreferrer">This paper</a> manages an impressive job of detecting alternate senses via postprocessing of normal word2vec vectors.</p>
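<p>As a quick way to see the magnitude effect, here is a small gensim sketch (the vector file name and the word pair are placeholders, purely for illustration):</p> <pre><code>import numpy as np
from gensim.models import KeyedVectors

# Load any pretrained word2vec-format vectors (file name is hypothetical):
wv = KeyedVectors.load_word2vec_format('vectors.bin', binary=True)

# Compare the norm of a highly polysemous token with a more specific one;
# the polysemous token often ends up with the smaller norm.
for word in ['bank', 'riverbank']:
    if word in wv.key_to_index:
        print(word, np.linalg.norm(wv[word]))
</code></pre>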
2018-07-13 20:36:38.177000+00:00
2018-07-13 20:36:38.177000+00:00
null
null
51,330,549
<p>I have some questions about Word2Vec:</p> <ol> <li><p>What determines the dimension of the resulting model vectors?</p></li> <li><p>What are the elements of these vectors?</p></li> <li><p>Can I use Word2Vec for solving polysemy problems (state = administrative unit vs state = condition) if I already have texts for every meaning of the words?</p></li> </ol>
2018-07-13 17:57:36.267000+00:00
2018-07-13 20:36:38.177000+00:00
null
nlp|word2vec
['https://dl.acm.org/citation.cfm?id=2390645', 'https://arxiv.org/abs/1610.07569v1', 'https://arxiv.org/abs/1601.03764v2']
3
37,363,777
<p><strong>ε is the approximation parameter</strong>.</p> <p>LSH (like <a href="http://www.cs.ubc.ca/research/flann/" rel="nofollow noreferrer">FLANN</a> &amp; <a href="https://gsamaras.wordpress.com/projects/#geraf" rel="nofollow noreferrer">kd-GeRaF</a>) is designed for high-dimensional data. In that space, <a href="https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm" rel="nofollow noreferrer">k-NN</a> doesn't work well; in fact, it is almost as slow as brute force, because of the <a href="https://math.stackexchange.com/questions/346775/confusion-related-to-curse-of-dimensionality-in-k-nearest-neighbor">curse of dimensionality</a>.</p> <p>For that reason, we focus on solving the <a href="https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximate_nearest_neighbor" rel="nofollow noreferrer"><strong>approximate</strong> k-NN</a>. Check Definition 1 from our <a href="http://arxiv.org/pdf/1603.09596.pdf" rel="nofollow noreferrer">paper</a>, which basically says that it's OK to return an approximate neighbor lying at most (1 + ε) times further away than the exact neighbor.</p> <p>Check the image below:</p> <p><a href="https://i.stack.imgur.com/YTYKH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YTYKH.jpg" alt="enter image description here"></a></p> <p>Here you see what it means to find the exact/approximate NN. In the traditional problem of NNS (Nearest Neighbor Search), we are asked to find the exact NN. In the modern problem, the approximate NNS, we are asked to find some neighbor inside the (1+ε) radius, thus either the exact or an approximate NN would be a valid answer!</p> <p>So, with high probability, LSH will return a NN inside that (1+ε) radius. For ε = 0, we actually solve the exact NN problem.</p>
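<p>In code terms, the guarantee is simply a radius test. A tiny sketch (in Python, illustrative):</p> <pre><code>def is_valid_answer(dist_candidate, dist_exact_nn, eps):
    """A returned neighbor is acceptable iff it lies within the
    (1 + eps) radius around the true nearest-neighbor distance."""
    return dist_candidate &lt;= (1 + eps) * dist_exact_nn

print(is_valid_answer(1.05, 1.0, eps=0.1))  # True: inside the (1+eps) radius
print(is_valid_answer(1.05, 1.0, eps=0.0))  # False: eps = 0 demands the exact NN
</code></pre>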
2016-05-21 13:49:15.353000+00:00
2016-05-21 13:54:48.043000+00:00
2017-04-13 12:19:15.777000+00:00
null
37,358,467
<p>I've read the <a href="http://www.cs.princeton.edu/courses/archive/spring13/cos598C/Gionis.pdf" rel="nofollow">original paper</a> about Locality Sensitive Hashing.</p> <p>The complexity is a function of the parameter ε, but I don't understand what it is.</p> <p>Can you explain its meaning, please?</p>
2016-05-21 03:08:06.853000+00:00
2016-05-21 16:15:23.460000+00:00
2016-05-21 16:15:23.460000+00:00
computational-geometry|nearest-neighbor|approximation|locality-sensitive-hash|approximate-nn-searching
['http://www.cs.ubc.ca/research/flann/', 'https://gsamaras.wordpress.com/projects/#geraf', 'https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm', 'https://math.stackexchange.com/questions/346775/confusion-related-to-curse-of-dimensionality-in-k-nearest-neighbor', 'https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximate_nearest_neighbor', 'http://arxiv.org/pdf/1603.09596.pdf', 'https://i.stack.imgur.com/YTYKH.jpg']
7
71,488,869
<p>It is bidirectional because it uses context from both sides of the current word (instead of, e.g., using just the previous few words, it uses the whole sequence).</p> <p>It depends on how much you want to go into detail, but basically there are the attention and self-attention mechanisms that make this &quot;handle everything in the sequence at once&quot; approach work.</p> <p>In a nutshell, the attention mechanism means that instead of going through the sentence sequentially/word-by-word, the entire sequence is used to decode the currently handled word, with an attention system assigning weights that decide how much say each word in the input gets in how the current word is handled.</p> <p>The Self-Attention mechanism means that even for the encoding of the input sequence itself, the context (the rest of the sentence) is already used. So e.g. if you have a sentence with an &quot;it&quot; that is used as a pronoun, the encoding of that token is going to be strongly context dependent. Self-Attention means that, similarly to attention, there is a weighting function that decides how relevant each other input token is for the encoding of the current input token.</p> <p>A popular way to explain Self-Attention is this: <code>The cat ran over the street, because it got startled.</code> The encoding of <code>it</code> in this sentence depends strongly on <code>The cat</code> and a bit on <code>the street</code>, because the model learnt during pre-training that predicting masked words after/around <code>it</code> in this kind of sentence depends strongly on these nouns.</p> <p>If you didn't yet, you definitely should check out the <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">Attention is all you need</a>-Paper as well as the <a href="https://arxiv.org/abs/1810.04805" rel="nofollow noreferrer">BERT</a>-Paper (at least the abstract); they explain in detail how the mechanisms and the pretraining process work.</p> <p>Another great source to get a better understanding of how it really works is the <a href="https://jalammar.github.io/illustrated-transformer/" rel="nofollow noreferrer">Illustrated Transformer</a>.</p>
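<p>A minimal NumPy sketch of single-head scaled dot-product self-attention (random illustrative weights; no masking or multi-head machinery):</p> <pre><code>import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Each output row mixes information from ALL input tokens,
    weighted by how relevant they are to the current token."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # token-to-token relevance
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ V                             # context-mixed representations

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                 # 4 tokens, embedding dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
</code></pre>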
2022-03-15 20:50:42.553000+00:00
2022-03-15 20:50:42.553000+00:00
null
null
71,487,474
<p>The Bert encoder takes the input and feeds it to the multi-head attention model. But how does it maintain the sequence order, since the current word doesn't take in the sequence of previous words? Besides, why is it bidirectional? Does it maintain forward and backward sequences like an LSTM?</p>
2022-03-15 18:46:02.500000+00:00
2022-03-15 20:50:42.553000+00:00
null
nlp|lstm|bert-language-model|language-model|bilstm
['https://arxiv.org/abs/1706.03762', 'https://arxiv.org/abs/1810.04805', 'https://jalammar.github.io/illustrated-transformer/']
3
22,881,105
<p>It is not that simple for <strong>one-mode networks</strong>, but you're close.</p> <p>The network matrix <code>M</code> itself tells you which node is connected to which other node in one step. <code>M * M</code> tells you which node is connected to which other node in <em>exactly two</em> steps, <code>M ^ 3</code> whether in <em>exactly three</em> steps, and so on. "Exactly" is important, because if nodes are not connected to themselves (zero diagonal elements), then <code>M * M</code> not only obtains two-step connectivity, but also loses one-step connectivity.</p> <p>The products (powers) are therefore more useful if all nodes are connected to themselves:</p> <pre><code>A = (M + eye(size(M)) &gt; 0)
</code></pre> <p>This step also converts a weighted matrix into a pure adjacency matrix. Now</p> <pre><code>(A ^ i &gt; 0)
</code></pre> <p>gives you the information whether two nodes are connected in <code>i</code> steps <em>or less</em>. In a network with <code>n = size(M, 1)</code> nodes, the distance between two nodes can be at most <code>n - 1</code> steps. Therefore</p> <pre><code>C = (A ^ (n - 1) &gt; 0)
</code></pre> <p>gives you the information whether two nodes are connected <em>at all</em>. The whole network is connected if there are no pairs of unconnected nodes, i.e. if</p> <pre><code>connected = (sum(C(:)) == n ^ 2)
</code></pre> <hr> <p>I'm not sure how one would define connectedness in a <strong>two-mode network</strong>. A simple approach would be to just disregard the difference between the two types of nodes, and consider them all to be part of a one-mode network. If <code>M</code> is the original two-mode network matrix with <code>[m, n] = size(M)</code>, then</p> <pre><code>M = [zeros(m, m), M ; M', zeros(n, n)];
</code></pre> <p>converts it into the matrix describing the corresponding one-mode network of size <code>(m+n)</code>x<code>(m+n)</code>.</p> <hr> <p>These matrix powers do not only tell you whether the whole network is connected, they can also be used to find the connected sub-networks (network clusters) if it is not. Since the behavior of powers of a matrix is closely connected to its spectral decomposition (a.k.a. <a href="https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix" rel="nofollow">eigendecomposition</a>), this idea leads to the approach of <a href="https://en.wikipedia.org/wiki/Spectral_clustering" rel="nofollow">spectral clustering</a>. For an application to weighted synchronization networks, see e.g. <a href="http://arxiv.org/abs/0707.2479" rel="nofollow">arXiv:0707.2479</a>.</p>
2014-04-05 12:49:02.120000+00:00
2014-04-05 13:16:46.970000+00:00
2014-04-05 13:16:46.970000+00:00
null
22,874,364
<p>How can I determine from a given m x n matrix describing a two-mode network whether it is connected or not?</p> <p>Of course one can create a data structure for finding it out – but is there any mathematical solution?</p> <p>I found out for n x n matrices (one-mode network) it is easy: The solution would be: <code>M * M</code> and then looking at the diagonal in the resulting matrix. If there is any zero on the diagonal, then it is not connected. Isn't that true?</p> <p>What would be your suggestion for both problems?</p>
2014-04-04 22:57:54.140000+00:00
2014-04-05 13:21:28.963000+00:00
2014-04-05 13:21:28.963000+00:00
matlab|graph|social-networking|octave
['https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix', 'https://en.wikipedia.org/wiki/Spectral_clustering', 'http://arxiv.org/abs/0707.2479']
3
61,135,455
<p>Yes, it is possible to recover most of the signal and estimate the phase with e.g. Griffin-Lim Algorithm (GLA). Its &quot;fast&quot; implementation for Python can be found in <a href="https://librosa.github.io/librosa/generated/librosa.core.griffinlim.html" rel="noreferrer">librosa</a>. Here's how you can use it:</p> <pre><code>import numpy as np
import librosa

y, sr = librosa.load(librosa.util.example_audio_file(), duration=10)
S = np.abs(librosa.stft(y))
y_inv = librosa.griffinlim(S)
</code></pre> <p>And that's what the original and the reconstruction look like:</p> <p><a href="https://i.stack.imgur.com/DdNk7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DdNk7.png" alt="reconstruction"></a></p> <p>The algorithm by default randomly initialises the phases and then iterates forward and inverse STFT operations to estimate the phases.</p> <p>Looking at your code, to reconstruct the signal, you'd just need to do:</p> <pre><code>import numpy as np

X_inv = librosa.griffinlim(np.abs(X))
</code></pre> <p>It's just an example of course. As pointed out by @PaulR, in your case you'd need to load the data from <code>jpeg</code> (which is lossy!) and then apply the inverse transform to <code>amplitude_to_db</code> first.</p> <p>The algorithm, especially the phase estimation, can be further improved thanks to advances in artificial neural networks. <a href="https://arxiv.org/abs/1903.03971" rel="noreferrer">Here</a> is one paper that discusses some enhancements.</p>
2020-04-10 07:01:37.240000+00:00
2020-04-10 12:12:37.260000+00:00
2020-04-10 12:12:37.260000+00:00
null
61,132,574
<p>I converted some audio files to spectrograms and saved them to files using the following code:</p> <pre><code>import os
from matplotlib import pyplot as plt
import librosa
import librosa.display
import IPython.display as ipd

audio_fpath = "./audios/"
spectrograms_path = "./spectrograms/"
audio_clips = os.listdir(audio_fpath)

def generate_spectrogram(x, sr, save_name):
    X = librosa.stft(x)
    Xdb = librosa.amplitude_to_db(abs(X))
    fig = plt.figure(figsize=(20, 20), dpi=1000, frameon=False)
    ax = fig.add_axes([0, 0, 1, 1], frameon=False)
    ax.axis('off')
    librosa.display.specshow(Xdb, sr=sr, cmap='gray', x_axis='time', y_axis='hz')
    plt.savefig(save_name, quality=100, bbox_inches=0, pad_inches=0)
    librosa.cache.clear()

for i in audio_clips:
    audio_fpath = "./audios/"
    spectrograms_path = "./spectrograms/"
    audio_length = librosa.get_duration(filename=audio_fpath + i)
    j = 60
    while j &lt; audio_length:
        x, sr = librosa.load(audio_fpath + i, offset=j-60, duration=60)
        save_name = spectrograms_path + i + str(j) + ".jpg"
        generate_spectrogram(x, sr, save_name)
        j += 60
        if j &gt;= audio_length:
            j = audio_length
            x, sr = librosa.load(audio_fpath + i, offset=j-60, duration=60)
            save_name = spectrograms_path + i + str(j) + ".jpg"
            generate_spectrogram(x, sr, save_name)
</code></pre> <p>I wanted to keep the most detail and quality from the audios, so that I could turn them back to audio without too much loss (they are 80MB each).</p> <p>Is it possible to turn them back to audio files? How can I do it?</p> <p><a href="https://i.stack.imgur.com/hmvBJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hmvBJ.png" alt="Example spectrograms"></a></p> <p>I tried using librosa.feature.inverse.mel_to_audio, but it didn't work, and I don't think it applies.</p> <p>I now have 1300 spectrogram files and want to train a Generative Adversarial Network with them, so that I can generate new audios, but I don't want to do it if I won't be able to listen to the results later.</p>
2020-04-10 01:04:49.303000+00:00
2020-04-10 12:12:37.260000+00:00
2020-04-10 07:09:01.473000+00:00
python|audio|signal-processing|spectrogram|librosa
['https://librosa.github.io/librosa/generated/librosa.core.griffinlim.html', 'https://i.stack.imgur.com/DdNk7.png', 'https://arxiv.org/abs/1903.03971']
3
68,443,618
<p>I was able to figure out the problem. It seems that splitting the javascript into packs and using a <code>javascript_pack_tag</code> on the view where you need the javascript is not the way to go according to <a href="https://stackoverflow.com/questions/63542339/rails-6-webpacker-modal-is-not-a-function-importing-dynamic-packs">this answer</a> and a couple others I found. Instead, I added a hidden element based on the value of the <code>visited_research_topics</code> attribute of <code>current_user</code>, then used jQuery to check for the presence this hidden element before running the Javascript. I still don't like this because now the Javascript that I need on one page is unnecessarily included on all pages, but at least it is working properly.</p> <h2 id="my-working-code-esin">My Working Code</h2> <p>app/views/research_topics/index.html.haml</p> <pre><code>- if !current_user.visited_research_topics .d-none#showInfoModal %h1.mx-auto.text-center.mt-3= &quot;Research Topics&quot; .container-md.mx-auto.p-0 %button.mb-3.fs-5.btn.btn-primary{data: { bs: { toggle: &quot;modal&quot;, target: &quot;#addTopicModal&quot; } } }= &quot;Add Topic&quot; .modal#addTopicModal{ tabindex: &quot;-1&quot;, aria: { labelledby: &quot;topicModalLabel&quot;, hidden: &quot;true&quot; } } .modal-dialog .modal-content .modal-header %h5.modal-title#topicModalLabel= &quot;New Topic&quot; %button.btn-close{ type: &quot;button&quot;, data: { bs: { dismiss: &quot;modal&quot; } }, aria: { label: &quot;Close&quot; } } .modal-body = form_for ResearchTopic.new do |f| .mb-3 = f.label :title, class: &quot;form-label&quot; = f.text_field :title, class: &quot;form-control mb-3&quot; = f.label :search_terms, &quot;Search Terms (add up to 5)&quot;, class: &quot;form-label&quot; - 5.times do |n| = f.text_field :search_terms, value: &quot;&quot;, id: &quot;searchTerm#{n}&quot;, name: &quot;research_topic[search_terms][]&quot;, class: &quot;form-control mb-2&quot; = f.submit &quot;Add&quot;, class: &quot;btn btn-primary&quot; - current_user.research_topics.each.with_index do |topic, topic_number| .container.mb-3.px-0.py-3.border-top.border-dark.border-3 .modal{id: &quot;topic-#{topic_number}-SearchTermModal&quot;, tabindex: &quot;-1&quot;, aria: { labelledby: &quot;searchTermModalLabel&quot;, hidden: &quot;true&quot; } } .modal-dialog .modal-content .modal-header %h5.modal-title#searchTermModalLabel= &quot;New Search Term&quot; %button.btn-close{ data: { bs: { dismiss: &quot;modal&quot; } }, aria: { label: &quot;Close&quot; } } .modal-body = form_for SearchTerm.new do |f| .mb-3 = f.label :term, &quot;New Term For #{topic.title}&quot;, class: &quot;form-label&quot; = f.text_field :term, class: &quot;form-control mb-3&quot; = f.hidden_field :research_topic_id, value: topic.id = f.submit &quot;Add&quot;, class: &quot;btn btn-primary&quot; %h2= topic.title .row .col-auto %h4= &quot;Search Terms:&quot; .col-9 .row.gy-2 - topic.search_terms.each do |term| .col-auto = form_for term, method: :delete do |f| %p.fs-5.text-dark = term.term %button.btn-close(type=&quot;submit&quot; aria-label=&quot;Close&quot;) .col-auto %button.btn.btn-primary{ data: { bs: { toggle: &quot;modal&quot;, target: &quot;#topic-#{topic_number}-SearchTermModal&quot; } } }= &quot;Add Term&quot; - if topic.research_articles.length == 0 %h4.p-3.my-3.bg-secondary= &quot;There were no articles found for your search.&quot; - else .accordion.my-3.bg-light{ id: &quot;topic-#{topic_number}-articlesAccordion&quot; } .accordion-item %h2.accordion-header{ id: 
&quot;topic-#{topic_number}-new&quot; } %button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-newArticlesCollapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-newArticlesCollapse&quot; } } = &quot;New&quot; %span.badge.bg-primary.ms-3= topic.research_articles.where(status: &quot;new&quot;).length - if topic.new_today_count &gt; 0 %span.ms-3.text-success= &quot;#{topic.new_today_count} New Today&quot; .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-newArticlesCollapse&quot; , aria: { labelledby: &quot;topic-#{topic_number}-new&quot; } } .accordion-body .accordion{ id: &quot;topic-#{topic_number}-newAccordion&quot; } - if topic.research_articles.where(status: &quot;new&quot;).length == 0 %p= &quot;It looks like there aren't any articles you haven't seen before. Try adding a new search term for more results.&quot; - topic.research_articles.where(status: &quot;new&quot;).each.with_index do |article, article_number| .accordion-item %h2.accordion-header{ id: &quot;topic-#{topic_number}-newArticle-#{article_number}&quot; } %button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-newArticle-#{article_number}-collapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-newArticle-#{article_number}-collapse&quot; } } .container .row= article.title .row.mt-3.text-muted= article.article_published .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-newArticle-#{article_number}-collapse&quot;, aria: { labelledby: &quot;topic-#{topic_number}-newArticle-#{article_number}&quot; } } .accordion-body = render 'shared/research_article', article: article .accordion-item %h2.accordion-header{ id: &quot;topic-#{topic_number}-saved&quot; } %button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-savedArticlesCollapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-savedArticlesCollapse&quot; } } = &quot;Saved&quot; %span.badge.bg-primary.ms-3= topic.research_articles.where(status: &quot;saved&quot;).length .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-savedArticlesCollapse&quot;, aria: { labelledby: &quot;topic-#{topic_number}-saved&quot; } } .accordion-body .accordion.bg-light{ id: &quot;topic-#{topic_number}-savedAccordion&quot; } - topic.research_articles.where(status: &quot;saved&quot;).each.with_index do |article, article_number| .accordion-item %h2.accordion-header{ id: &quot;topic-#{topic_number}-savedArticle-#{article_number}&quot; } %button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-savedArticle-#{article_number}-collapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-savedArticle-#{article_number}-collapse&quot; } } .container .row= article.title .row.mt-3.text-muted= article.article_published .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-savedArticle-#{article_number}-collapse&quot;, aria: { labelledby: &quot;topic-#{topic_number}-savedArticle-#{article_number}&quot; } } .accordion-body = render 'shared/research_article', article: article .accordion-item %h2.accordion-header{ id: &quot;topic-#{topic_number}-read&quot; } 
%button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-readArticlesCollapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-readArticlesCollapse&quot; } } = &quot;Read&quot; %span.ms-3.badge.bg-primary= topic.research_articles.where(status: &quot;read&quot;).length .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-readArticlesCollapse&quot;, aria: { labelledby: &quot;topic-#{topic_number}-read&quot; } } .accordion-body .accordion{ id: &quot;topic-#{topic_number}-readAccordion&quot; } - topic.research_articles.where(status: &quot;read&quot;).each.with_index do |article, article_number| .accordion-item %h2.accordion-header{ id: &quot;topic-#{topic_number}-readArticle-#{article_number}&quot; } %button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-readArticle-#{article_number}-collapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-readArticle-#{article_number}-collapse&quot; } } .container .row= article.title .row.mt-3.text-muted= article.article_published .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-readArticle-#{article_number}-collapse&quot;, aria: { labelledby: &quot;topic-#{topic_number}-readArticle-#{article_number}&quot; } } .accordion-body = render 'shared/research_article', article: article %button.btn.btn-dark{type: &quot;button&quot;, data: { bs: { toggle: &quot;modal&quot;, target: &quot;#deleteTopic-#{topic_number}&quot; } } }= &quot;Delete Topic&quot; .modal{ id: &quot;deleteTopic-#{topic_number}&quot;, tabindex: &quot;-1&quot; } .modal-dialog .modal-content .modal-body %p= &quot;Are you sure you want to delete the topic &lt;em&gt;#{topic.title}&lt;/em&gt;? 
This cannot be undone.&quot;.html_safe .row .col-2 = form_for topic, method: :delete do |f| = f.submit &quot;Yes&quot;, class: &quot;btn btn-dark&quot; .col-2 %button.btn.btn-dark{ data: { bs: { dismiss: &quot;modal&quot; } } }= &quot;No&quot; .modal#infoModal{ tabindex: &quot;-1&quot;, aria: { labelledby: &quot;infoModalLabel&quot;, hidden: &quot;false&quot; } } .modal-dialog.modal-lg .modal-content .modal-header .modal-title.h4.text-dark#infoModalLabel= &quot;About Research Topics Page&quot; %button.btn-close{ type: &quot;button&quot;, data: { bs: { dismiss: &quot;modal&quot; } }, aria: { label: &quot;Close&quot; } } .modal-body %p.fs-5.mb-3.text-dark= &quot;- On this page you can add as many research topics as you like.&quot; %p.fs-5.mb-3.text-dark= &quot;- Every 24 hours, the arXiv API is queried with the search terms you provide.&quot; %p.fs-5.mb-3.text-dark= &quot;- arXiv contains nearly 2 million scholarly articles in various academic fields.&quot; %p.fs-5.mb-3.text-dark= &quot;- The 10 most recent articles will appear in the 'new' section of each topic every day.&quot; %p.fs-5.mb-3.text-dark= &quot;- The 'new' section is automatically refreshed each time you add or delete a search term.&quot; %p.fs-5.mb-3.text-dark= &quot;- You can save the articles for later, mark them as read or not interested, or take notes about them.&quot; %p.fs-5.mb-3.text-dark= &quot;- If you see older articles appearing in the 'new' section, then you have read or saved all of the newest articles and the algorithm is finding older ones so there is always something you haven't seen before.&quot; %p.fs-5.mb-3.text-dark= &quot;- Click the &lt;i class='bi bi-question-circle mx-2 h4'&gt;&lt;/i&gt; to see this message again.&quot;.html_safe </code></pre> <p>app/javascript/packs/application.js</p> <pre><code>// This file is automatically compiled by Webpack, along with any other files // present in this directory. You're encouraged to place your actual application logic in // a relevant structure within app/javascript and only use these pack files to reference // that code so it'll be compiled. import Rails from &quot;@rails/ujs&quot; import Turbolinks from &quot;turbolinks&quot; import * as ActiveStorage from &quot;@rails/activestorage&quot; import &quot;channels&quot; import '../stylesheets/application' require(&quot;bootstrap&quot;) require('../info_modal/info_modal'); import 'bootstrap-icons/font/bootstrap-icons.css' Rails.start() Turbolinks.start() ActiveStorage.start() </code></pre> <p>app/javascript/info_modal/info_modal.js</p> <pre><code>$(document).on(&quot;turbolinks:load&quot;, function() { if ($('#showInfoModal').length &gt; 0) { new bootstrap.Modal($('#infoModal')).show(); }; }); </code></pre> <p>config/webpack/environment.js</p> <pre><code>const webpack = require('webpack'); const { environment } = require('@rails/webpacker') environment.plugins.append( 'Provide', new webpack.ProvidePlugin({ $: 'jquery', jQuery: 'jquery', Popper: ['popper.js', 'default'], bootstrap: 'bootstrap' }) ) module.exports = environment </code></pre>
2021-07-19 16:01:16.720000+00:00
2021-07-19 16:01:16.720000+00:00
null
null
68,433,604
<h2 id="programs-and-versions-yd4a">Programs and Versions</h2> <p>Hello, I am working on a personal Rails project using Rails 6.1.3.2, Bootstrap 5.0.1, Ruby 2.6.5, Haml 5.2.1, and Webpack 4.46.0.</p> <h2 id="the-goal-5lxw">The goal</h2> <p>I am trying to make a modal show on page load the first time the user visits a particular page, and afterwards they have to click a button to see it again.</p> <h2 id="the-approach-algn">The Approach</h2> <p>I am using a <a href="https://getbootstrap.com/docs/5.0/components/modal/#show" rel="nofollow noreferrer">Javascript function</a> provided by Bootstrap to show the modal, which requires me to make 'bootstrap' available as a variable with Webpack, in addition to requiring it in the application.js file. I have an attribute on the User model that tracks if they have visited the page, and an if statement which loads the <code>javascript_pack_tag</code> if the attribute is false.</p> <h2 id="the-problem-w338">The Problem</h2> <p>The function works, but after the modal is closed, the dropdown link doesn't open and the accordions will open but not close. If I remove the <code>require(&quot;bootstrap&quot;)</code> from application.js, everything works the first time, but when the page is reloaded all Javascript stops working.</p> <h2 id="my-suspicions-and-previous-efforts-zz0f">My Suspicions and Previous Efforts</h2> <p>I suspect that when the <code>javascript_pack_tag</code> is included, Bootstrap is being loaded twice and causing unusual problems. I tried using an import statement in the research_topics.js file, but the same problem occurred. I have tried removing <code>require(&quot;bootstrap&quot;)</code> from application.js, which works on the initial visit to the page, but once the page is reloaded none of the Bootstrap components work anymore. I learned on Rails version 4, so I'm new to Webpack and may be overlooking something simple. Any help would be greatly appreciated.</p> <h2 id="my-code-02j2">My Code</h2> <p>The full project is available <a href="https://github.com/Calvin0125/info_compass" rel="nofollow noreferrer">here</a>. 
Just run bundle install, rails db:migrate and rails db:seed to get it set up.</p> <p>app/views/research_topics/index.html.haml</p> <pre><code>= javascript_pack_tag 'research_topics_index', 'data-turbolinks-track': 'reload' if !current_user.visited_research_topics %h1.mx-auto.text-center.mt-3= &quot;Research Topics&quot; .container-md.mx-auto.p-0 %button.mb-3.fs-5.btn.btn-primary{data: { bs: { toggle: &quot;modal&quot;, target: &quot;#addTopicModal&quot; } } }= &quot;Add Topic&quot; .modal#addTopicModal{ tabindex: &quot;-1&quot;, aria: { labelledby: &quot;topicModalLabel&quot;, hidden: &quot;true&quot; } } .modal-dialog .modal-content .modal-header %h5.modal-title#topicModalLabel= &quot;New Topic&quot; %button.btn-close{ type: &quot;button&quot;, data: { bs: { dismiss: &quot;modal&quot; } }, aria: { label: &quot;Close&quot; } } .modal-body = form_for ResearchTopic.new do |f| .mb-3 = f.label :title, class: &quot;form-label&quot; = f.text_field :title, class: &quot;form-control mb-3&quot; = f.label :search_terms, &quot;Search Terms (add up to 5)&quot;, class: &quot;form-label&quot; - 5.times do |n| = f.text_field :search_terms, value: &quot;&quot;, id: &quot;searchTerm#{n}&quot;, name: &quot;research_topic[search_terms][]&quot;, class: &quot;form-control mb-2&quot; = f.submit &quot;Add&quot;, class: &quot;btn btn-primary&quot; - current_user.research_topics.each.with_index do |topic, topic_number| .container.mb-3.px-0.py-3.border-top.border-dark.border-3 .modal{id: &quot;topic-#{topic_number}-SearchTermModal&quot;, tabindex: &quot;-1&quot;, aria: { labelledby: &quot;searchTermModalLabel&quot;, hidden: &quot;true&quot; } } .modal-dialog .modal-content .modal-header %h5.modal-title#searchTermModalLabel= &quot;New Search Term&quot; %button.btn-close{ data: { bs: { dismiss: &quot;modal&quot; } }, aria: { label: &quot;Close&quot; } } .modal-body = form_for SearchTerm.new do |f| .mb-3 = f.label :term, &quot;New Term For #{topic.title}&quot;, class: &quot;form-label&quot; = f.text_field :term, class: &quot;form-control mb-3&quot; = f.hidden_field :research_topic_id, value: topic.id = f.submit &quot;Add&quot;, class: &quot;btn btn-primary&quot; %h2= topic.title .row .col-auto %h4= &quot;Search Terms:&quot; .col-9 .row.gy-2 - topic.search_terms.each do |term| .col-auto = form_for term, method: :delete do |f| %p.fs-5.text-dark = term.term %button.btn-close(type=&quot;submit&quot; aria-label=&quot;Close&quot;) .col-auto %button.btn.btn-primary{ data: { bs: { toggle: &quot;modal&quot;, target: &quot;#topic-#{topic_number}-SearchTermModal&quot; } } }= &quot;Add Term&quot; - if topic.research_articles.length == 0 %h4.p-3.my-3.bg-secondary= &quot;There were no articles found for your search.&quot; - else .accordion.my-3.bg-light{ id: &quot;topic-#{topic_number}-articlesAccordion&quot; } .accordion-item %h2.accordion-header{ id: &quot;topic-#{topic_number}-new&quot; } %button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-newArticlesCollapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-newArticlesCollapse&quot; } } = &quot;New&quot; %span.badge.bg-primary.ms-3= topic.research_articles.where(status: &quot;new&quot;).length - if topic.new_today_count &gt; 0 %span.ms-3.text-success= &quot;#{topic.new_today_count} New Today&quot; .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-newArticlesCollapse&quot; , aria: { labelledby: &quot;topic-#{topic_number}-new&quot; } } 
.accordion-body .accordion{ id: &quot;topic-#{topic_number}-newAccordion&quot; } - if topic.research_articles.where(status: &quot;new&quot;).length == 0 %p= &quot;It looks like there aren't any articles you haven't seen before. Try adding a new search term for more results.&quot; - topic.research_articles.where(status: &quot;new&quot;).each.with_index do |article, article_number| .accordion-item %h2.accordion-header{ id: &quot;topic-#{topic_number}-newArticle-#{article_number}&quot; } %button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-newArticle-#{article_number}-collapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-newArticle-#{article_number}-collapse&quot; } } .container .row= article.title .row.mt-3.text-muted= article.article_published .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-newArticle-#{article_number}-collapse&quot;, aria: { labelledby: &quot;topic-#{topic_number}-newArticle-#{article_number}&quot; } } .accordion-body = render 'shared/research_article', article: article .accordion-item %h2.accordion-header{ id: &quot;topic-#{topic_number}-saved&quot; } %button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-savedArticlesCollapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-savedArticlesCollapse&quot; } } = &quot;Saved&quot; %span.badge.bg-primary.ms-3= topic.research_articles.where(status: &quot;saved&quot;).length .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-savedArticlesCollapse&quot;, aria: { labelledby: &quot;topic-#{topic_number}-saved&quot; } } .accordion-body .accordion.bg-light{ id: &quot;topic-#{topic_number}-savedAccordion&quot; } - topic.research_articles.where(status: &quot;saved&quot;).each.with_index do |article, article_number| .accordion-item %h2.accordion-header{ id: &quot;topic-#{topic_number}-savedArticle-#{article_number}&quot; } %button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-savedArticle-#{article_number}-collapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-savedArticle-#{article_number}-collapse&quot; } } .container .row= article.title .row.mt-3.text-muted= article.article_published .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-savedArticle-#{article_number}-collapse&quot;, aria: { labelledby: &quot;topic-#{topic_number}-savedArticle-#{article_number}&quot; } } .accordion-body = render 'shared/research_article', article: article .accordion-item %h2.accordion-header{ id: &quot;topic-#{topic_number}-read&quot; } %button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-readArticlesCollapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-readArticlesCollapse&quot; } } = &quot;Read&quot; %span.ms-3.badge.bg-primary= topic.research_articles.where(status: &quot;read&quot;).length .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-readArticlesCollapse&quot;, aria: { labelledby: &quot;topic-#{topic_number}-read&quot; } } .accordion-body .accordion{ id: &quot;topic-#{topic_number}-readAccordion&quot; } - topic.research_articles.where(status: &quot;read&quot;).each.with_index do |article, 
article_number| .accordion-item %h2.accordion-header{ id: &quot;topic-#{topic_number}-readArticle-#{article_number}&quot; } %button.accordion-button.collapsed{ type: &quot;button&quot;, data: { bs: { toggle: &quot;collapse&quot;, target: &quot;#topic-#{topic_number}-readArticle-#{article_number}-collapse&quot; } }, aria: { expanded: &quot;false&quot;, controls: &quot;topic-#{topic_number}-readArticle-#{article_number}-collapse&quot; } } .container .row= article.title .row.mt-3.text-muted= article.article_published .collapse.accordion-collapse{ id: &quot;topic-#{topic_number}-readArticle-#{article_number}-collapse&quot;, aria: { labelledby: &quot;topic-#{topic_number}-readArticle-#{article_number}&quot; } } .accordion-body = render 'shared/research_article', article: article %button.btn.btn-dark{type: &quot;button&quot;, data: { bs: { toggle: &quot;modal&quot;, target: &quot;#deleteTopic-#{topic_number}&quot; } } }= &quot;Delete Topic&quot; .modal{ id: &quot;deleteTopic-#{topic_number}&quot;, tabindex: &quot;-1&quot; } .modal-dialog .modal-content .modal-body %p= &quot;Are you sure you want to delete the topic &lt;em&gt;#{topic.title}&lt;/em&gt;? This cannot be undone.&quot;.html_safe .row .col-2 = form_for topic, method: :delete do |f| = f.submit &quot;Yes&quot;, class: &quot;btn btn-dark&quot; .col-2 %button.btn.btn-dark{ data: { bs: { dismiss: &quot;modal&quot; } } }= &quot;No&quot; .modal#infoModal{ tabindex: &quot;-1&quot;, aria: { labelledby: &quot;infoModalLabel&quot;, hidden: &quot;false&quot; } } .modal-dialog.modal-lg .modal-content .modal-header .modal-title.h4.text-dark#infoModalLabel= &quot;About Research Topics Page&quot; %button.btn-close{ type: &quot;button&quot;, data: { bs: { dismiss: &quot;modal&quot; } }, aria: { label: &quot;Close&quot; } } .modal-body %p.fs-5.mb-3.text-dark= &quot;- On this page you can add as many research topics as you like.&quot; %p.fs-5.mb-3.text-dark= &quot;- Every 24 hours, the arXiv API is queried with the search terms you provide.&quot; %p.fs-5.mb-3.text-dark= &quot;- arXiv contains nearly 2 million scholarly articles in various academic fields.&quot; %p.fs-5.mb-3.text-dark= &quot;- The 10 most recent articles will appear in the 'new' section of each topic every day.&quot; %p.fs-5.mb-3.text-dark= &quot;- The 'new' section is automatically refreshed each time you add or delete a search term.&quot; %p.fs-5.mb-3.text-dark= &quot;- You can save the articles for later, mark them as read or not interested, or take notes about them.&quot; %p.fs-5.mb-3.text-dark= &quot;- If you see older articles appearing in the 'new' section, then you have read or saved all of the newest articles and the algorithm is finding older ones so there is always something you haven't seen before.&quot; %p.fs-5.mb-3.text-dark= &quot;- Click the &lt;i class='bi bi-question-circle mx-2 h4'&gt;&lt;/i&gt; to see this message again.&quot;.html_safe </code></pre> <p>app/javascript/packs/application.js</p> <pre><code>// This file is automatically compiled by Webpack, along with any other files // present in this directory. You're encouraged to place your actual application logic in // a relevant structure within app/javascript and only use these pack files to reference // that code so it'll be compiled. 
import Rails from &quot;@rails/ujs&quot; import Turbolinks from &quot;turbolinks&quot; import * as ActiveStorage from &quot;@rails/activestorage&quot; import &quot;channels&quot; import '../stylesheets/application' require(&quot;bootstrap&quot;) import 'bootstrap-icons/font/bootstrap-icons.css' Rails.start() Turbolinks.start() ActiveStorage.start() </code></pre> <p>app/javascript/packs/research_topics_index.js</p> <pre><code>$(window).on('load', function() { new bootstrap.Modal($('#infoModal')).show(); }); </code></pre> <p>config/webpack/environment.js</p> <pre><code>const webpack = require('webpack'); const { environment } = require('@rails/webpacker') environment.plugins.append( 'Provide', new webpack.ProvidePlugin({ $: 'jquery', jQuery: 'jquery', Popper: ['popper.js', 'default'], bootstrap: 'bootstrap' }) ) module.exports = environment </code></pre>
2021-07-18 22:48:48.960000+00:00
2021-07-19 16:01:16.720000+00:00
null
ruby-on-rails|webpack|bootstrap-5
['https://stackoverflow.com/questions/63542339/rails-6-webpacker-modal-is-not-a-function-importing-dynamic-packs']
1
61,747,525
<p>Just in case someone is interested in how I tackled this problem:</p> <p>For segmentation of grayscale images, I found this architecture: <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-Net</a>. The tensorflow adaptation by Akeret et al. can be found in <a href="https://github.com/jakeret/tf_unet" rel="nofollow noreferrer">Github - UNet</a>.</p> <p>I did not use the Object Detection API in the end, since I was not sure how well the pre-trained models, which were created for RGB images, would perform on 1-channel images.</p>
2020-05-12 08:51:17.037000+00:00
2020-05-12 08:51:17.037000+00:00
null
null
61,687,843
<p>I want to detect features, and possibly segment them, in CT scan images (greyscale images) by means of the Object Detection API from Tensorflow. So I have two questions:</p> <p>- Is there any pre-trained model within the Object Detection API that takes greyscale images as inputs for fine-tuning?</p> <p>- Or, how can I adapt existing models, which are built for RGB images, so that I can train them with greyscale images? Which modifications should be made in the code?</p>
2020-05-08 20:47:04.183000+00:00
2020-05-12 08:51:17.037000+00:00
2020-05-12 08:46:33.507000+00:00
python|tensorflow|computer-vision|object-detection
['https://arxiv.org/abs/1505.04597', 'https://github.com/jakeret/tf_unet']
2
47,236,400
<p>With regular Haskell values, there is no problem keeping older versions of a value around. However, <code>Handle</code>s are references to mutable resources allocated with the operating system, and carry state. After calling a version of <code>hSetBuffering</code>that returned a new <code>Handle</code>, what should happen to <em>earlier</em> versions of the <code>Handle</code> that are still kept around? Should they reflect the change? If the answer is yes, then the new-handle-returning version of <code>hSetBuffering</code> is a bit of a lie.</p> <p>This new-handle-returning version of <code>hSetBuffering</code> could work if the type system somehow disallowed keeping old versions of the <code>Handle</code> after calling the function. It could do that by enforcing a constraint: functions that receive a <code>Handle</code> as parameter can only use that parameter <em>one single time</em>, and functions that "duplicate" handles like <code>dup :: Handle -&gt; (Handle,Handle)</code> are <em>disallowed</em>.</p> <p>There is a (not yet accepted) <a href="https://arxiv.org/abs/1710.09756" rel="noreferrer">proposal</a> to extend Haskell with the ability to enforce such restrictions. In fact, file operations are one of the motivating examples. From section 2.3 of the paper:</p> <pre><code>type File openFile :: FilePath → IOL 1 File readLine :: File ⊸ IOL 1 (File,Unrestricted ByteString) closeFile :: File ⊸ IOL ω () </code></pre> <p>Under this proposal, we can only have a single version of a <code>File</code> around at any given time. <code>closeFile</code> makes the reference to <code>File</code> unavailable so that we can't close an already closed file. Every read operation takes the previous version of the <code>File</code> and returns a new one along with the read data. And <code>hSetBuffering</code> would have a type like:</p> <pre><code>hSetBuffering :: BufferingMode -&gt; File ⊸ IOL 1 File </code></pre>
2017-11-11 09:31:23.533000+00:00
2017-11-11 09:31:23.533000+00:00
null
null
47,235,357
<p>In Haskell, we try to write most of our code in an immutable way: instead of changing variables or passed parameters, we create a new value from the old one with the required changes. </p> <pre><code>main = do withFile "something.txt" ReadMode (\handle -&gt; do hSetBuffering handle $ BlockBuffering (Just 2048) contents &lt;- hGetContents handle putStr contents) </code></pre> <p>Then what is the reason that <code>hSetBuffering</code>, a function that takes a handle and sets its buffering mode, changes the passed <code>handle</code> itself instead of returning a new handle with the required buffering mode?</p>
2017-11-11 07:07:00.040000+00:00
2017-11-11 09:31:23.533000+00:00
null
haskell
['https://arxiv.org/abs/1710.09756']
1
62,752,380
<p>We had the same question when designing our custom NER model. There are lots of solutions available, but I suggest you read this paper for a complete understanding of NER models, the common approaches, and their limitations.</p> <p><em>Title</em>: <em><strong>A Survey on Deep Learning for Named Entity Recognition</strong></em></p> <pre><code>URL: https://arxiv.org/pdf/1812.09449.pdf </code></pre>
2020-07-06 08:47:02.753000+00:00
2020-07-06 08:47:02.753000+00:00
null
null
47,603,200
<p>I'm trying to extract locations from blobs of text (NER/IE) and have tried many solutions, all of which are far too inaccurate: spacy, Stanford, etc.</p> <p>All of them are only about 80-90% accurate on my dataset (spacy was around 70%). Another problem I'm having is that none of them give a probability that means anything for these entities, so I don't know the confidence and can't proceed accordingly.</p> <p>I tried a super naive approach of splitting my blobs into singular words, then extracting the surrounding context as features, and also used a location placename lookup (30/40k location placenames) as a feature as well. Then I used just a classifier (XGBoost) and the results were much better once I trained the classifier on about 3k manually labelled datapoints (100k total, of which only 3k were locations): 95% precision for states/countries and about 85% for cities.</p> <p>This approach sucks obviously, but why is it outperforming everything I have tried? I think the black box approach to NER just isn't working for my data problem. I tried spacy custom training and it really just didn't seem like it was going to work. Not having a confidence in the entity is kind of a killer too, as the probability they give you for it is almost meaningless.</p> <p>Is there some way I can approach this problem a little better to improve my results even more? Shallow NLP for 2/3/4-grams? Another problem I have with my approach is that the output of the classifier isn't a sequential entity, it's literally just classified word blobs which somehow need to be clustered back into one entity, i.e. -&gt; San Francisco, CA is just 'city','city', '0','state' with no concept of them being the same entity.</p> <p>spacy example:</p> <p>example blob:</p> <pre><code>About Us - Employment Opportunities Donate Donate Now The Power of Mushrooms Enhancing Response Where We Work Map Australia Africa Asia Pacific Our Work Agriculture Anti - Trafficking and Gender - based Violence Education Emergency Response Health and Nutrition Rural and Economic Development About Us Who We Are Annual Report Newsletters Employment Opportunities Video Library Contact Us Login My Profile Donate Join Our Email List Employment Opportunities Annual Report Newsletters Policies Video Library Contact Us Employment Opportunities Current Career Opportunity Internships Volunteer Who We Are Our History Employment Opportunities with World Hope International Working in Service to the Poor Are you a professional that wants a sense of satisfaction out of your job that goes beyond words of affirmation or a pat on the back ? You could be a part of a global community serving the poor in the name of Jesus Christ . You could use your talents and resources to make a significant difference to millions . Help World Hope International give a hand up rather than a hand out . Career opportunities . Internship opportunities . Volunteer Why We Work Here World Hope International envisions a world free of poverty . Where young girls aren ’ t sold into sexual slavery . Where every child has enough to eat . Where men and women can earn a fair and honest wage , and their children aren ’ t kept from an education . Where every community in Africa has clean water . As an employee of World Hope International , these are the people you will work for . Regardless of their religious beliefs , gender , race or ethnic background , you will help shine the light of hope into the darkness of poverty , injustice and oppression . 
Find out more by learning about the of World Hope International and reviewing a summary of our work in the most recent history annual report . Equal Opportunity Employer World Hope International is both an equal opportunity employer and a faith - based religious organization . We hire US employees without regard to race , color , ancestry , national origin , citizenship , age , sex , marital status , parental status , membership in any labor organization , political ideology or disability of an otherwise qualified individual . We hire national employees in our countries of operation pursuant to the law of the country where we hire the employees . The status of World Hope International as an equal opportunity employer does not prevent the organization from hiring US staff based on their religious beliefs so that all US staff share the same religious commitment . Pursuant to the United States Civil Rights Act of 1964 , Section 702 ( 42 U . S . C . 2000e 1 ( a ) ) , World Hope International has the right to , and does , hire only candidates whose beliefs align with the Apostle ’ s Creed . Apostle ’ s Creed : I believe in Jesus Christ , Gods only Son , our Lord , who was conceived by the Holy Spirit , born of the Virgin Mary , suffered under Pontius Pilate , was crucified , died , and was buried ; he descended to the dead . On the third day he rose again ; he ascended into heaven , he is seated at the right hand of the Father , and he will come again to judge the living and the dead . I believe in the Holy Spirit , the holy catholic church , the communion of saints , the forgiveness of sins , the resurrection of the body , and the life everlasting . AMEN . Christian Commitment All applicants will be screened for their Christian commitment . This process will include a discussion of : The applicant ’ s spiritual journey and relationship with Jesus Christ as indicated in their statement of faith The applicant ’ s understanding and acceptance of the Apostle ’ s Creed . Statement of Faith A statement of faith describes your faith and how you see it as relevant to your involvement with World Hope International . It must include , at a minimum , a description of your spiritual disciplines ( prayer , Bible study , etc . ) and your current fellowship or place of worship . Applicants can either incorporate their statement of faith into their cover letter content or submit it as a separate document . 519 Mt Petrie Road Mackenzie , Qld 4156 1 - 800 - 967 - 534 ( World Hope ) + 61 7 3624 9977 CHEQUE Donations World Hope International ATTN : Gift Processing 519 Mt Petrie Road Mackenzie , Qld 4156 Spread the Word Stay Informed Join Email List Focused on the Mission In fiscal year 2015 , 88 % of all expenditures went to program services . Find out more . Privacy Policy | Terms of Service World Hope Australia Overseas Aid Fund is registered with the ACNC and all donations over $ 2 are tax deductible . ABN : 64 983 196 241 © 2017 WORLD HOPE INTERNATIONAL . All rights reserved .' </code></pre> <p>and the results:</p> <pre><code>('US', 'GPE') ('US', 'GPE') ('US', 'GPE') ('the', 'GPE') ('United', 'GPE') ('States', 'GPE') ('Jesus', 'GPE') ('Christ', 'GPE') ('Pontius', 'GPE') ('Pilate', 'GPE') ('Faith', 'GPE') ('A', 'GPE') </code></pre>
2017-12-02 00:08:05.790000+00:00
2020-07-06 08:47:02.753000+00:00
2017-12-02 00:52:08.927000+00:00
python|entity|stanford-nlp|spacy|information-extraction
[]
0
47,274,507
<p>There is an astrophysics research paper here (<a href="http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120" rel="nofollow noreferrer">http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120</a>), with a non-paywalled version here (<a href="https://arxiv.org/abs/1707.02212" rel="nofollow noreferrer">https://arxiv.org/abs/1707.02212</a>), that gives a legitimate use for RdRand.</p> <p>It examines the effects of RdRand on a Monte Carlo simulator, as an earlier post advised, but the author didn't find any statistical difference between results that use RdRand and those that do not. From the performance standpoint, it looks like the Mersenne Twister is much faster. I think Sections 2.2.1 and 5 have all of the details.</p>
2017-11-13 22:18:21.630000+00:00
2017-11-13 22:18:21.630000+00:00
null
null
26,771,329
<p>Today I thought: well, even if there is great suspicion of the RDRAND implementation of <a href="http://en.wikipedia.org/wiki/NIST_SP_800-90A" rel="noreferrer">NIST SP 800-90A</a>, it is still a hardware implementation of a pseudo-random number generator (PRNG) that must be good enough for non-sensitive applications. So I thought of using it in my game instead of Mersenne Twister.</p> <p>So, to see if there was any performance gain in using the instruction, I compared the time of the two following codes:</p> <pre><code>// test.cpp #include &lt;cstdio&gt; int main() { unsigned int rnd = 0; for(int i = 0; i &lt; 10000000; ++i) { __builtin_ia32_rdrand32_step(&amp;rnd); } printf(&quot;%x\n&quot;, rnd); } </code></pre> <p>and</p> <pre><code>//test2.cpp #include &lt;cstdio&gt; #include &lt;random&gt; int main() { unsigned int rnd = 0; __builtin_ia32_rdrand32_step(&amp;rnd); std::mt19937 gen(rnd); for(int i = 0; i &lt; 10000000; ++i) { rnd ^= gen(); } printf(&quot;%x\n&quot;, rnd); } </code></pre> <p>and by running the two I get:</p> <pre><code>$ time ./test d230449a real 0m0.361s user 0m0.358s sys 0m0.002s $ time ./test2 bfc4e472 real 0m0.051s user 0m0.050s sys 0m0.002s </code></pre> <p>So, Mersenne Twister is much faster than RDRAND on my CPU. Well, I was disappointed, and ruled it out for my game. But RDRAND is a cryptographically secure PRNG (CSPRNG), so it does much more behind the scenes... it would be fairer to compare it to another CSPRNG. So I took my <a href="http://github.com/lvella/libestream" rel="noreferrer">Rabbit</a> implementation (plain translation of the RFC to C, no fancy tricks for performance), and wrote the following test:</p> <pre><code>// test3.cpp #include &lt;cstdio&gt; extern &quot;C&quot; { #include &quot;rabbit.h&quot; } int main() { rabbit_state s; unsigned long long buf[2]; __builtin_ia32_rdrand64_step(&amp;buf[0]); __builtin_ia32_rdrand64_step(&amp;buf[1]); rabbit_init_key(&amp;s, (uint8_t*)&amp;buf[0]); for(int i = 0; i &lt; 10000000; ++i) { rabbit_extract(&amp;s, (uint8_t*)&amp;buf[0]); } printf(&quot;%llx\n&quot;, buf[0]); } </code></pre> <p>And to my surprise, generating twice as much pseudo-random data as the first two of them, I got a better time than RDRAND:</p> <pre><code>$ time ./test3 8ef9772277b70aba real 0m0.344s user 0m0.341s sys 0m0.002s </code></pre> <p>All three were compiled with optimization enabled.</p> <p>So, we have a widespread paranoia that RDRAND was made to embed NSA backdoors into everybody's software cryptography. Also we have at least one software CSPRNG faster than RDRAND, and the most widely used decent PRNG, Mersenne Twister, is <em>much</em> faster than RDRAND. Finally, we have open-source auditable software entropy pools, like <code>/dev/random</code> and <code>/dev/urandom</code>, that are not hidden behind twofold scrambler layers of AES, like RDRAND.</p> <p>So, the question: should people be using RDRAND? Is there any legitimate use for it? Or should we stop using it altogether?</p>
2014-11-06 03:39:42.703000+00:00
2021-05-12 15:27:03.050000+00:00
2021-05-12 15:27:03.050000+00:00
random|cryptography|stream-cipher|rdrand
['http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120', 'https://arxiv.org/abs/1707.02212']
2
44,823,068
<p>You missed a term in the drift coefficient: note that to the right of <code>dp</code> there are two <code>dt</code> terms. Thus</p> <pre><code>def a_p(t, p, q): return -(Gamma0 - Omega0*eta*q**2)*p - Omega0**2*q </code></pre> <p>which is actually the part that makes the oscillator into an oscillator. With that corrected, the solution looks like</p> <p><a href="https://i.stack.imgur.com/4gCdX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4gCdX.png" alt="One possible solution"></a></p> <p>And no, you did not implement the Milstein method: your code contains no derivatives of <code>b_p</code>, and those derivatives are what distinguish Milstein from Euler-Maruyama. The missing term is <code>+0.5*b'(X)*b(X)*(dW**2-dt)</code>.</p> <hr> <p>There is also a derivative-free version of Milstein's method, a two-stage Runge-Kutta-like method, documented in <a href="https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_method_%28SDE%29" rel="nofollow noreferrer">wikipedia</a> or in the original on <a href="https://arxiv.org/pdf/1210.0933.pdf" rel="nofollow noreferrer">arxiv.org (PDF)</a>.</p> <p>The step there is (vector based; duplicate into <code>X=[p,q]</code>, <code>K1=[k1_p,k1_q]</code> etc. to be close to your conventions)</p> <pre><code>S = np.random.choice([-1, 1]) K1 = a(X)*dt + b(X)*(dW - S*np.sqrt(dt)) Xh = X + K1 K2 = a(Xh)*dt + b(Xh)*(dW + S*np.sqrt(dt)) X = X + 0.5*(K1 + K2) </code></pre>
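<p>For concreteness, here is a minimal sketch of that derivative-free step wired into your loop. It reuses the constants and arrays from your script (<code>Gamma0</code>, <code>tArray</code>, <code>dwArray</code>, etc.); the helper names <code>a</code> and <code>b</code> are illustrative, not from the paper:</p> <pre><code>import numpy as np

def a(X):
    p, q = X
    # drift vector [dp/dt, dq/dt], including the corrected -Omega0**2*q term
    return np.array([-(Gamma0 - Omega0*eta*q**2)*p - Omega0**2*q, p])

def b(X):
    # diffusion vector: the noise only acts on p, not on q
    return np.array([np.sqrt(2*Gamma0*k_b*T_0/m), 0.0])

X = np.array([p0, q0])
for n, t in enumerate(tArray[:-1]):
    dW = dwArray[n]
    S = np.random.choice([-1, 1])
    K1 = a(X)*dt + b(X)*(dW - S*np.sqrt(dt))
    Xh = X + K1
    K2 = a(Xh)*dt + b(Xh)*(dW + S*np.sqrt(dt))
    X = X + 0.5*(K1 + K2)
    p[n+1], q[n+1] = X
</code></pre> <p>Since <code>b</code> is constant here, the scheme degenerates gracefully: the stochastic part reduces to Euler-Maruyama, while the drift gets a Heun-type two-stage treatment.</p>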
2017-06-29 10:48:24.797000+00:00
2017-06-29 12:05:19.277000+00:00
2017-06-29 12:05:19.277000+00:00
null
44,820,604
<p>I have an stochastic differential equation (SDE) that I am trying to solve using Milsteins method but am getting results that disagree with experiment. </p> <p>The SDE is</p> <p><img src="https://latex.codecogs.com/gif.latex?$$&space;%5Cdfrac%7Bd%5E2q(t)%7D%7Bdt%5E2%7D&space;&plus;&space;(%5CGamma_0&space;-&space;%5COmega_0&space;%5Ceta&space;q(t)%5E2)%5Cdfrac%7Bdq(t)%7D%7Bdt%7D&space;&plus;&space;%5COmega_0%5E2&space;q(t)&space;-&space;%5Csqrt%7B%5Cdfrac%7B2%5CGamma_0&space;k_B&space;T_0%7D%7Bm%7D%7D&space;%5Cdfrac%7BdW(t)%7D%7Bdt%7D&space;=&space;0&space;$$" title="$$ \dfrac{d^2q(t)}{dt^2} + (\Gamma_0 - \Omega_0 \eta q(t)^2)\dfrac{dq(t)}{dt} + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0 $$" /></p> <p>which I have broken up into 2 first order equations:</p> <p>eq1: <img src="https://latex.codecogs.com/gif.latex?dq&amp;=p%5C,dt%5C%5C" title="dq&amp;=p\,dt\\" /></p> <p>eq2: <img src="https://latex.codecogs.com/gif.latex?dp&space;=&space;-(%5CGamma_0&space;-&space;%5COmega_0&space;%5Ceta&space;q(t)%5E2)p~dt&space;-&space;%5COmega_0%5E2&space;q(t)~dt&space;&plus;&space;%5Csqrt%7B%5Cdfrac%7B2%5CGamma_0&space;k_B&space;T_0%7D%7Bm%7D%7D&space;dW" title="dp = -(\Gamma_0 - \Omega_0 \eta q(t)^2)p~dt - \Omega_0^2 q(t)~dt + \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} dW" /></p> <p>Then I have used the Ito form:</p> <p><img src="https://latex.codecogs.com/gif.latex?$%7B%5Cmathrm&space;%7Bd%7D%7DX_%7Bt%7D=a(X_%7Bt%7D)%5C,%7B%5Cmathrm&space;%7Bd%7D%7Dt&plus;b(X_%7Bt%7D)%5C,%7B%5Cmathrm&space;%7Bd%7D%7DW_%7Bt%7D$" title="${\mathrm {d}}X_{t}=a(X_{t})\,{\mathrm {d}}t+b(X_{t})\,{\mathrm {d}}W_{t}$" /></p> <p>So that for eq1: </p> <p><img src="https://latex.codecogs.com/gif.latex?%5C%5C&space;$$&space;a(q_t)&space;=&space;p&space;$$&space;%5C%5C&space;$$&space;b(q_t)&space;=&space;0&space;$$&space;%5C%5C" title="\\ $$ a(q_t) = p $$ \\ $$ b(q_t) = 0 $$ \\" /></p> <p>and for eq2:</p> <p><img src="https://latex.codecogs.com/gif.latex?%5C%5C&space;$$a(p_t)&space;=&space;-(%5CGamma_0-%5COmega_0%5Ceta&space;q(t)%5E2)p$$&space;%5C%5C&space;$$b(p_t)&space;=&space;%5Csqrt%7B%5Cdfrac%7B2%5CGamma_0&space;k_B&space;T_0%7Dm%7D$$" title="\\ $$a(p_t) = -(\Gamma_0-\Omega_0\eta q(t)^2)p$$ \\ $$b(p_t) = \sqrt{\dfrac{2\Gamma_0 k_B T_0}m}$$" /></p> <p>My python code used to attempt to solve this is like so:</p> <pre><code># set constants from real data Gamma0 = 4000 # defines enviromental damping Omega0 = 75e3*2*np.pi # defines the angular frequency of the motion eta = 0 # set eta 0 =&gt; no effect from non-linear p*q**2 term T_0 = 300 # temperature of enviroment k_b = scipy.constants.Boltzmann m = 3.1e-19 # mass of oscillator # set a and b functions for these 2 equations def a_p(t, p, q): return -(Gamma0 - Omega0*eta*q**2)*p def b_p(t, p, q): return np.sqrt(2*Gamma0*k_b*T_0/m) def a_q(t, p, q): return p # generate time data dt = 10e-11 tArray = np.arange(0, 200e-6, dt) # initialise q and p arrays and set initial conditions to 0, 0 q0 = 0 p0 = 0 q = np.zeros_like(tArray) p = np.zeros_like(tArray) q[0] = q0 p[0] = p0 # generate normally distributed random numbers dwArray = np.random.normal(0, np.sqrt(dt), len(tArray)) # independent and identically distributed normal random variables with expected value 0 and variance dt # iterate through implementing Milstein's method (technically Euler-Maruyama since b' = 0 for n, t in enumerate(tArray[:-1]): dw = dwArray[n] p[n+1] = p[n] + a_p(t, p[n], q[n])*dt + b_p(t, p[n], q[n])*dw + 0 q[n+1] = q[n] + a_q(t, p[n], q[n])*dt + 0 </code></pre> <p>Where in this case p 
is velocity and q is position.</p> <p>I then get the following plots of q and p:</p> <p><a href="https://i.stack.imgur.com/OYQ5X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OYQ5X.png" alt="p plotted with time"></a></p> <p><a href="https://i.stack.imgur.com/9VOdB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9VOdB.png" alt="q plotted with time"></a></p> <p>I expected the resulting plot of position to look something like the following, which I get from experimental data (from which the constants used in the model are determined):</p> <p><a href="https://i.stack.imgur.com/BDQRY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BDQRY.png" alt="experimental position with time"></a></p> <p>Have I implemented Milstein's method correctly?</p> <p>If I have, what else might be wrong my process of solving the SDE that'd causing this disagreement with the experiment?</p>
2017-06-29 08:58:35.070000+00:00
2017-06-29 12:10:44.367000+00:00
2017-06-29 12:10:44.367000+00:00
python|numerical-methods|differential-equations|stochastic
['https://i.stack.imgur.com/4gCdX.png', 'https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_method_%28SDE%29', 'https://arxiv.org/pdf/1210.0933.pdf']
3
26,283,087
<p>To call R from Java locally, you need JRI. To call R remotely, you can use Rserve. If you want to handle graphics, that is best done using something in R such as the evaluate package. Afaik most language bridges offer no particular functionality for handling graphics; you will need to do that on the R level. </p> <p>Have a look at <a href="http://arxiv.org/pdf/1406.4806.pdf" rel="nofollow">this paper</a> before you get started to be aware of the challenges and limitations of using cross-language bridges for scientific computing. You'll save yourself a lot of trouble down the line.</p>
2014-10-09 16:08:36.930000+00:00
2014-10-09 16:08:36.930000+00:00
null
null
26,281,507
<p>I know there are multiple questions resembling this one but most of them are &gt;2 years old and actually not that related.</p> <p>What I need to do is fully integrate the R environment in an existing java (actually scala) application. I don't want any R-based web solutions like Rook, Rapache and the like, the server logic happens strictly in java land. What I need is a way to send R commands to the interpreter, let it run them and handle the output. More importantly, I need to be able to:</p> <ol> <li><p>Run commands interactively not only ready-made R scripts.</p> </li> <li><p>Produce and handle graphics from the established graphics packages.</p> </li> <li><p>Communicate raw data back and forth between the JVM and R interpreter.</p> <p>I am aware of <a href="http://www.rforge.net/JRI/" rel="nofollow noreferrer">JRI</a>. I would very much appreciate to hear from anyone who has used it. How stable is it? How actively maintained is the project? Any existing code that I can look at? Any other alternatives out there?</p> </li> </ol>
2014-10-09 14:49:52.813000+00:00
2018-10-11 20:55:16.543000+00:00
2020-06-20 09:12:55.060000+00:00
java|r|scala
['http://arxiv.org/pdf/1406.4806.pdf']
1
47,955,631
<p>The <a href="https://arxiv.org/pdf/1410.8504.pdf" rel="nofollow noreferrer">Model Confidence Set</a> package tests multiple models for superior predictive ability. </p> <pre><code>install.packages("MCS") library(MCS) data(Loss) MCS &lt;- MCSprocedure(Loss=Loss[,1:5],alpha=0.2,B=5000,statistic='Tmax',cl=NULL) </code></pre> <p>...and the output:</p> <pre><code>&gt; MCS &lt;- MCSprocedure(Loss=Loss[,1:5],alpha=0.2,B=5000,statistic='Tmax',cl=NULL) ########################################################################################################################### Superior Set Model created : Rank_M v_M MCS_M Rank_R v_R MCS_R Loss sGARCH-norm 4 0.8201805 0.6034 4 1.43408052 0.3576 0.0004042581 sGARCH-std 5 0.9649670 0.5008 5 3.22834167 0.0058 0.0004010655 sGARCH-ged 1 -1.3942903 1.0000 3 0.21893448 0.9940 0.0003986329 sGARCH-snorm 2 -1.3101987 1.0000 2 0.08452883 0.9998 0.0003982803 sGARCH-sstd 3 -0.4739630 1.0000 1 -0.08452883 1.0000 0.0003977886 p-value : [1] 0.5008 ########################################################################################################################### &gt; </code></pre>
2017-12-23 20:06:39.263000+00:00
2017-12-23 20:06:39.263000+00:00
null
null
47,954,225
<p>Does anyone know how to use the SPA test (a statistical method to evaluate models) in R/Matlab or other software? I know that there is an R package called &quot;ttrTests&quot; with a relevant SPA function, but it looks suitable only for comparing portfolio strategies, rather than for comparing general models in terms of some loss function. Can someone point me to another source, or explain how to prepare data suitable for the &quot;ttrTests&quot; package?</p>
2017-12-23 16:46:48.037000+00:00
2017-12-24 03:22:48.230000+00:00
2017-12-24 03:22:48.230000+00:00
r|matlab
['https://arxiv.org/pdf/1410.8504.pdf']
1
46,709,071
<p>I'd like to add here that there is actually a wide range of ReLu-like activation functions that can be used instead of the standard <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)" rel="nofollow noreferrer">ReLu activation</a>:</p> <ul> <li>You've mentioned Leaky ReLu yourself (parametrized by <code>alpha</code>).</li> <li><a href="https://arxiv.org/abs/1502.01852" rel="nofollow noreferrer">Parametric Rectified Linear Unit</a> (PReLU). The formula is the same as Leaky ReLu, but allows the coefficient <code>alpha</code> to be learned. See also <a href="https://datascience.stackexchange.com/questions/18583/what-is-the-difference-between-leakyrelu-and-prelu">this discussion</a>.</li> <li><a href="https://arxiv.org/abs/1511.07289" rel="nofollow noreferrer">Exponential linear unit</a> (ELU), which tries to push the mean activations closer to zero, speeding up learning:</li> </ul> <p><a href="https://i.stack.imgur.com/6W7Dx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6W7Dx.png" alt="elu formula"></a></p> <ul> <li><a href="https://arxiv.org/abs/1706.02515" rel="nofollow noreferrer">Scaled exponential linear unit</a> (SELU) has been published very recently. It's an extension of ELU with a specific choice of parameters, which has an additional normalizing effect and helps the network learn faster.</li> </ul> <p><a href="https://en.wikipedia.org/wiki/Activation_function#Comparison_of_activation_functions" rel="nofollow noreferrer">Here's the list</a> of all activations and their derivatives.</p>
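<p>Since the question is about a plain-numpy network, here is a minimal sketch of leaky ReLU and ELU with their derivatives (the <code>alpha</code> defaults are illustrative, not prescribed by the papers). Note that, unlike the sigmoid helper in the question, these expect the pre-activation input <code>x</code>, not the already-activated output:</p> <pre><code>import numpy as np

def leaky_relu(x, alpha=0.01, deriv=False):
    if deriv:
        # slope is 1 for positive inputs, alpha for negative inputs
        return np.where(x &gt; 0, 1.0, alpha)
    return np.where(x &gt; 0, x, alpha * x)

def elu(x, alpha=1.0, deriv=False):
    if deriv:
        # d/dx of alpha*(exp(x)-1) is alpha*exp(x) for negative inputs
        return np.where(x &gt; 0, 1.0, alpha * np.exp(x))
    return np.where(x &gt; 0, x, alpha * (np.exp(x) - 1.0))
</code></pre>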
2017-10-12 11:59:52.543000+00:00
2017-10-12 11:59:52.543000+00:00
null
null
46,697,840
<pre><code>import numpy as np alpha = 0.0251 # as close to true alpha as possible def nonlinear(x, deriv=False): if(deriv==True): return x*(1-x) return 1/(1+np.e**(-x)) #seed np.random.seed(1) #testing sample test_x = np.array([[251,497,-246], [299,249,50], [194,180,14], [140,148,-8], [210,140,70]]) #Input Array - This input will be taken directly from a Pong game X = np.array([[198,200,-2], [90, 280,-190], [84, 256,-172], [140,240,-100], [114,216,-102], [72, 95,-23], [99, 31, 68], [144, 20, 124], [640, 216,424], [32, 464,-432], [176, 64,112], [754, 506,248], [107, 104,3], [116,101,15]]) #output array - if ball_pos - paddle &gt; 0 move up else move down Y = np.array([[0,0,0,0,0,0,1,1,1,0,1,1,1,1,]]).T syn0 = 2*np.random.random((3,14))-1 syn1 = 2*np.random.random((14,14))-1 for j in range(60000): #forward propagation l0 = X l1 = nonlinear(np.dot(l0, syn0)) l2 = nonlinear(np.dot(l1, syn1)) #how much did we miss l2_error = Y - l2 #multiply how much missed by the slope of sigmoid at the value in l1 l2_delta = l2_error * nonlinear(l2, True) #how much did l1 contribute to l2 error #(according to the weights) l1_error = l2_delta.dot(syn1.T) #in what direction is the target l1? # Sure? l1_delta = l1_error*nonlinear(l1,True) #update weight syn1 += alpha * (l1.T.dot(l2_delta)) syn0 += alpha * (l0.T.dot(l1_delta)) # display error if(j % 10000) == 0: print("ERROR: " + str(np.mean(np.abs(l2_error)))) #Testing Forward propagation l0_test = test_x l1_test = nonlinear(np.dot(l0_test,syn0)) l2_test = nonlinear(np.dot(l1_test,syn1)) #Dress up the array (make it look nice) l2_test_output = [] for x in range(len(l2_test)): l2_test_output.append(l2_test[x][0]) print("Test Output") print(l2_test_output) #Put all the l2 data in a way I could see it: Just the first probabilites l2_output = [] for x in range(len(l2)): l2_output.append(l2[x][0]) print("Output") print(l2_output) </code></pre> <p>This code is supposed to take in a group of three numbers [(value_1),(value_2),(value_1-value_2)] and return either a "0" if the difference between the first and second value is negative or a "1" if the difference is positive. So far it actually works very well.</p> <p>Here is the output: <code>ERROR: 0.497132186092 ERROR: 0.105081486632 ERROR: 0.102115299177 ERROR: 0.100813655802 ERROR: 0.100042420179 ERROR: 0.0995185781466 Test Output [0.0074706006801269686, 0.66687458928464094, 0.66687458928463983, 0.66686236694464551, 0.98341439176739631] Output [0.66687459245609326, 0.00083944690766060215, 0.00083946471285455484, 0.0074706634783305243, 0.0074706634765733968, 0.007480987498372226, 0.99646513183073093, 0.99647100131874755, 0.99646513180692531, 0.00083944572383107523, 0.99646513180692531, 0.98324165810211861, 0.66687439729829612, 0.66687459321626519]</code> ERROR: 0.497132186092</p> <p>As you can see the error given the alpha = 0.0251 (for gradient descent - found this through trial and error) is only about 9.95 %. </p> <p>Since I made this program, I've learned that leaky RelU is a better alternative to the Sigmoid function since it optimizes and learns faster than the Sigmoid. I want to implement the leaky RelU function using numpy in this program but I'm not sure of where to start and more particularly what its derivative is.</p> <p>How can I implement leaky RelU into this neural net?</p>
2017-10-11 21:27:27.673000+00:00
2017-10-12 11:59:52.543000+00:00
null
python|numpy|machine-learning|neural-network|sigmoid
['https://en.wikipedia.org/wiki/Rectifier_(neural_networks)', 'https://arxiv.org/abs/1502.01852', 'https://datascience.stackexchange.com/questions/18583/what-is-the-difference-between-leakyrelu-and-prelu', 'https://arxiv.org/abs/1511.07289', 'https://i.stack.imgur.com/6W7Dx.png', 'https://arxiv.org/abs/1706.02515', 'https://en.wikipedia.org/wiki/Activation_function#Comparison_of_activation_functions']
7
50,780,889
<p>Note that the formula you show does not actually present true &quot;weight decay&quot;, but instead L2-regularization. Many people mix these up, including well-known professors. Let me explain.</p> <p>When using pure SGD (without momentum) as an optimizer, weight decay is the same thing as adding an L2-regularization term to the loss. <strong>When using any other optimizer, including Momentum, this is not true.</strong></p> <p>Weight decay (don't know how to TeX here, so excuse my pseudo-notation):</p> <pre><code>w[t+1] = w[t] - learning_rate * dw - weight_decay * w </code></pre> <p>L2-regularization:</p> <pre><code>loss = actual_loss + lambda * 1/2 sum(||w||_2 for w in network_params) </code></pre> <p>Computing the gradient of the extra term in L2-regularization gives <code>lambda * w</code> and thus inserting it into the SGD update equation</p> <pre><code>dloss_dw = dactual_loss_dw + lambda * w w[t+1] = w[t] - learning_rate * dw </code></pre> <p>gives the same as weight decay, but mixes <code>lambda</code> with the <code>learning_rate</code>. Any other optimizer, even SGD with momentum, gives a different update rule for weight decay than for L2-regularization! See the paper <a href="https://arxiv.org/abs/1711.05101">Fixing weight decay in Adam</a> for more details. (Edit: AFAIK, <a href="http://www.cs.toronto.edu/~hinton/absps/parle.pdf" rel="nofollow noreferrer">this 1987 Hinton paper</a> introduced &quot;weight decay&quot;, literally as &quot;each time the weights are updated, their magnitude is also decremented by 0.4%&quot; at page 10)</p> <p>That being said, there doesn't seem to be support for &quot;proper&quot; weight decay in TensorFlow yet. There are a few issues discussing it, specifically because of the above paper.</p> <p>One possible way to implement it is by writing an op that does the decay step manually after every optimizer step. A different way, which is what I'm currently doing, is using an additional SGD optimizer just for the weight decay, and &quot;attaching&quot; it to your <code>train_op</code>. Both of these are just crude work-arounds, though. My current code:</p> <pre><code># In the network definition: with arg_scope([layers.conv2d, layers.dense], weights_regularizer=layers.l2_regularizer(weight_decay)): # define the network. loss = # compute the actual loss of your problem. train_op = optimizer.minimize(loss, global_step=global_step) if args.weight_decay not in (None, 0): with tf.control_dependencies([train_op]): sgd = tf.train.GradientDescentOptimizer(learning_rate=1.0) train_op = sgd.minimize(tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))) </code></pre> <p>This somewhat makes use of TensorFlow's provided bookkeeping. Note that the <code>arg_scope</code> takes care of appending an L2-regularization term for every layer to the <code>REGULARIZATION_LOSSES</code> graph-key, which I then all sum up and optimize using SGD which, as shown above, corresponds to actual weight-decay.</p> <p>Hope that helps, and if anyone gets a nicer code snippet for this, or TensorFlow implements it better (i.e. in the optimizers), please share.</p> <p><strong>Edit:</strong> see also <a href="https://github.com/tensorflow/tensorflow/pull/17438" rel="nofollow noreferrer">this PR</a> which just got merged into TF.</p>
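<p>Addendum: to make the difference above tangible, here is a tiny self-contained sketch (plain Python, made-up numbers) comparing a weight trained with L2-regularization folded into the momentum buffer against one trained with decoupled weight decay:</p> <pre><code>lr, mu, lam = 0.1, 0.9, 0.01   # learning rate, momentum, decay strength

def run(decoupled, steps=3):
    w, v = 1.0, 0.0
    for _ in range(steps):
        g = 0.5                        # stand-in for the loss gradient
        if decoupled:
            v = mu * v + g             # decay bypasses the momentum buffer
            w = w - lr * v - lr * lam * w
        else:
            v = mu * v + g + lam * w   # decay term enters the buffer...
            w = w - lr * v             # ...and is re-amplified by momentum
    return w

print(run(decoupled=False), run(decoupled=True))
</code></pre> <p>The first step is identical, but from the second step on the trajectories diverge, because momentum keeps re-applying past decay terms in the L2 variant.</p>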
2018-06-10 05:40:49.393000+00:00
2018-06-15 10:23:38.470000+00:00
2018-06-15 10:23:38.470000+00:00
null
34,986,911
<p>I'm trying to use TensorFlow with my deep learning project. </p> <p>When I use Momentum Gradient Descent, how is the weight cost strength set?<br> (The λ in this <a href="http://i.stack.imgur.com/305RE.jpg" rel="noreferrer">formula</a>.) </p>
2016-01-25 07:09:20.767000+00:00
2018-06-15 10:23:38.470000+00:00
2016-01-25 11:16:02.650000+00:00
deep-learning|tensorflow
['/https://arxiv.org/abs/1711.05101', 'http://www.cs.toronto.edu/~hinton/absps/parle.pdf', 'https://github.com/tensorflow/tensorflow/pull/17438']
3
47,426,230
<p>According to the paper <a href="https://arxiv.org/abs/1512.05287" rel="noreferrer">https://arxiv.org/abs/1512.05287</a> that the implementation of <strong>variational_recurrent</strong> dropout is based on, you can think of the parameters as follows:</p> <ul> <li><p><code>input_keep_prob</code> - keep probability for the input connections (they are dropped with probability 1 - input_keep_prob).</p></li> <li><p><code>output_keep_prob</code> - keep probability for the output connections.</p></li> <li><p><code>state_keep_prob</code> - keep probability for the recurrent connections.</p></li> </ul> <p>See the diagram below; a minimal usage sketch is at the end of this answer.</p> <p><a href="https://i.stack.imgur.com/VJQ3I.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VJQ3I.png" alt="enter image description here"></a></p> <p>If you set <code>variational_recurrent</code> to true you will get an <strong>RNN</strong> similar to the model on the right, and otherwise the one on the left.</p> <p>The basic differences between the two models are:</p> <ul> <li><p><strong>Variational RNN</strong> repeats the same dropout mask at each time step for <strong>inputs</strong>, <strong>outputs</strong>, and <strong>recurrent layers</strong> (it drops the same network units at each time step).</p></li> <li><p><strong>Naive RNN</strong> uses different dropout masks at each time step for the <strong>inputs</strong> and <strong>outputs</strong> alone (no dropout is used on the recurrent connections, since using different masks on those connections leads to deteriorated performance).</p></li> </ul> <p>In the above diagram, coloured connections represent the dropped-out connections, with different colours corresponding to different dropout masks. Dashed lines correspond to standard connections with no dropout.</p> <p>Therefore, if you use a <strong>variational RNN</strong> you can set all three probability parameters according to your requirements.</p> <p>Hope this helps.</p>
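<p>As a minimal usage sketch (the layer sizes and keep probabilities are illustrative; to my understanding <code>variational_recurrent=True</code> requires <code>dtype</code>, and also <code>input_size</code> whenever the inputs are dropped out too):</p> <pre><code>import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 50, 64])  # [batch, time, features]
cell = tf.contrib.rnn.LSTMCell(num_units=128)
cell = tf.contrib.rnn.DropoutWrapper(
    cell,
    input_keep_prob=0.8,         # keep 80% of input connections
    output_keep_prob=0.8,        # keep 80% of output connections
    state_keep_prob=0.8,         # keep 80% of recurrent connections
    variational_recurrent=True,  # same dropout mask at every time step
    input_size=tf.TensorShape([64]),  # feature depth of the inputs
    dtype=tf.float32)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
</code></pre>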
2017-11-22 03:15:07.267000+00:00
2017-11-22 03:15:07.267000+00:00
null
null
47,415,036
<p>The tensorflow <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper" rel="nofollow noreferrer">config dropout wrapper</a> has three different dropout probabilities that can be set: <code>input_keep_prob, output_keep_prob, state_keep_prob</code>. </p> <p>I want to use variational dropout for my LSTM units, by setting the <code>variational_recurrent</code> argument to true. However, I don't know which of the three dropout probabilities I have to use for variational dropout to function correctly. </p> <p>Can someone provide help?</p>
2017-11-21 14:05:38.680000+00:00
2017-11-22 03:15:07.267000+00:00
null
tensorflow|lstm|recurrent-neural-network
['https://arxiv.org/abs/1512.05287', 'https://i.stack.imgur.com/VJQ3I.png']
2
44,564,374
<p>Please see the tutorial <a href="https://www.tensorflow.org/tutorials/image_retraining" rel="nofollow noreferrer">How to Retrain Inception's Final Layer for New Categories</a> on TensorFlow.</p> <p>From the Tutorial:</p> <blockquote> <p>Modern object recognition models have millions of parameters and can take weeks to fully train. Transfer learning is a technique that shortcuts a lot of this work by taking a fully-trained model for a set of categories like ImageNet, and retrains from the existing weights for new classes. In this example we'll be retraining the final layer from scratch, while leaving all the others untouched. For more information on the approach you can see <a href="http://arxiv.org/pdf/1310.1531v1.pdf" rel="nofollow noreferrer">this paper on Decaf</a>.</p> </blockquote>
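<p>If you would rather express the same idea in code than run the retraining script, here is a minimal transfer-learning sketch using tf.keras; the class count, pooling choice, and training call are placeholders for your own data, not part of the tutorial:</p> <pre><code>import tensorflow as tf

# Load Inception v3 without its 1000-class head, and freeze it
base = tf.keras.applications.InceptionV3(
    weights='imagenet', include_top=False, pooling='avg')
base.trainable = False

# New final layer for your own categories (e.g. 5 classes)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation='softmax')
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=10)  # your labelled images
</code></pre>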
2017-06-15 09:51:56.850000+00:00
2017-06-15 09:51:56.850000+00:00
null
null
44,561,070
<p>Is there any way to add a new class to the existing 1000 classes of the Inception v3 model in TensorFlow? Or is it possible to fine-tune the model by just adding more images to an existing category?</p>
2017-06-15 07:17:13.993000+00:00
2017-06-15 09:51:56.850000+00:00
2017-06-15 09:50:15.777000+00:00
image-processing|tensorflow
['https://www.tensorflow.org/tutorials/image_retraining', 'http://arxiv.org/pdf/1310.1531v1.pdf']
2
58,473,065
<p>There are two types of neural networks: those that can process variable input sizes, and those that require a fixed input size.</p> <p>A good example of the first kind is the Fully Convolutional Network (FCN). These are widely used for object detection and semantic segmentation. The next code snippet is a minimal example of testing the pre-trained keypointrcnn from PyTorch, an improvement over the previous state of the art, <a href="https://arxiv.org/abs/1703.06870" rel="nofollow noreferrer">Mask R-CNN</a>:</p> <pre><code>import torch import torchvision from PIL import Image model_rcnn = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True) model_rcnn.eval() image1 = Image.open('image122 × 430.jpg') image2 = Image.open('image448 × 465.jpg') image_tensor1 = torchvision.transforms.functional.to_tensor(image1) image_tensor2 = torchvision.transforms.functional.to_tensor(image2) output1 = model_rcnn([image_tensor1]) output2 = model_rcnn([image_tensor2]) print(output1, output2) </code></pre> <p>The second kind of neural network requires a fixed-size input, for example ResNet. <strong>The standard solution is to apply a Resize transform</strong> before feeding images to the network. Minimal example:</p> <pre><code>import torch import torchvision from torchvision import transforms from PIL import Image model_imagnet = torchvision.models.resnet50(pretrained=True) model_imagnet.eval() # don't forget to use the same normalization as in training, # if you are using pre-trained model normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) my_transforms = transforms.Compose([transforms.Resize(224), transforms.ToTensor(), normalize]) image1 = Image.open('image122 × 430.jpg') image2 = Image.open('image448 × 465.jpg') image_tensor1 = my_transforms(image1) image_tensor2 = my_transforms(image2) output1 = model_imagnet(torch.unsqueeze(image_tensor1, 0)) output2 = model_imagnet(torch.unsqueeze(image_tensor2, 0)) </code></pre> <p>For more details about the models and their usage you may refer to the PyTorch <a href="https://pytorch.org/docs/stable/torchvision/models.html" rel="nofollow noreferrer">documentation</a>.</p>
2019-10-20 12:40:41.030000+00:00
2019-10-20 12:40:41.030000+00:00
null
null
58,422,569
<p>I am new in this area so pardon if my question seems stupid. I have created a multiresolution image pyramid using </p> <blockquote> <p>skimage.transform.pyramid_gaussian</p> </blockquote> <p>The images are 2D. Now I want to feed these images to a neural network. The structure of the neural network is not fixed. But I can't do that since the images are not of the same size. Can anyone guide me to any resource regarding if this can be done?</p> <p>Thank you</p>
2019-10-16 22:39:23.657000+00:00
2019-10-20 12:40:41.030000+00:00
null
python|image-processing|neural-network|pytorch|scikit-image
['https://arxiv.org/abs/1703.06870', 'https://pytorch.org/docs/stable/torchvision/models.html']
2
38,968,615
<p>Basically you can define a variable-size input using None as follows </p> <pre><code>self.x = tf.placeholder(tf.float32, [1, None, None, 3]) </code></pre> <p>and then you can feed inputs of different sizes</p> <pre><code>feed_dict={self.x: current_data} etc.. </code></pre> <p>But be careful about your neural net structure. If you flatten your last conv layer as input to the first dense layer, then your network only works at that size, and you need to either stretch or crop the image to make it work.</p> <p>A more flexible approach is to use something like <a href="https://www.quora.com/What-is-global-average-pooling" rel="nofollow noreferrer">Global Average Pooling</a> or <a href="https://arxiv.org/abs/1406.4729" rel="nofollow noreferrer">Spatial Pyramid Pooling</a>, which both fix this problem; see the sketch below. </p>
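<p>For illustration, a minimal sketch of the Global Average Pooling idea (the filter and class counts are made up; the point is that averaging over the spatial axes yields a fixed-length vector regardless of the input height and width):</p> <pre><code>import tensorflow as tf

x = tf.placeholder(tf.float32, [1, None, None, 3])   # variable H and W
w = tf.Variable(tf.truncated_normal([3, 3, 3, 64], stddev=0.1))
conv = tf.nn.relu(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME'))
gap = tf.reduce_mean(conv, axis=[1, 2])              # shape [1, 64] for any H, W
w2 = tf.Variable(tf.truncated_normal([64, 10], stddev=0.1))
logits = tf.matmul(gap, w2)                          # dense layer now always fits
</code></pre>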
2016-08-16 07:08:53.600000+00:00
2017-08-31 04:37:19.607000+00:00
2017-08-31 04:37:19.607000+00:00
null
38,966,533
<p>I want to write a Python class that can load a <code>tensorflow model</code> and run inference. However, I have no idea how to feed in images of variable size. :(</p> <pre><code>class ArtGenerater(): def __init__(self,model_path): self.model_path = model_path # vary shape? self.x = tf.placeholder(tf.float32,shape=(1,512,512,3)) self.gen = model.resnet(self.x) self.out = tf.saturate_cast(self.gen,tf.uint8) self.sess = tf.Session() file = tf.train.latest_checkpoint(self.model_path) saver = tf.train.Saver() saver.restore(self.sess,file) def pic(self,image_path): img = np.asarray(Image.open(image_path)).astype(np.float32) img = np.expand_dims(img,0) output_t = self.sess.run(self.out,feed_dict={self.x:img}) return output_t </code></pre> <p>Now I just use <code>tf.placeholder(tf.float32,shape=(1,512,512,3))</code>, but my images have different sizes (e.g. 1000*900). </p> <p>How can I achieve this? Thank you. </p> <p><strong>EDIT:</strong></p> <p>Thank you everyone. I have solved the problem by using: </p> <pre class="lang-py prettyprint-override"><code>x = tf.placeholder(tf.string) img = tf.image.decode_jpeg(x,channels=3) </code></pre> <p>And this can feed the network (my <code>ConvNet</code> includes many <code>conv2d</code> &amp; <code>conv2d_transpose</code> layers) with different image sizes. :)</p>
2016-08-16 04:09:54.253000+00:00
2019-05-13 09:25:30.853000+00:00
2019-05-13 09:25:30.853000+00:00
python|tensorflow
['https://www.quora.com/What-is-global-average-pooling', 'https://arxiv.org/abs/1406.4729']
2
48,949,677
<p>They are all based on the same paper: <a href="http://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">http://arxiv.org/abs/1502.03167</a>, so they should do the same thing.</p> <p>Some functions have moved around the codebase, but the old versions are kept for backwards compatibility, so you end up with more than one version.</p> <p>I would recommend using the simplest one that lets you do your project (that is tf.nn.batch_normalization). If you need features/parameters that it does not offer, pick the one that works for you.</p> <p>Note: tf.contrib.* is not guaranteed to remain backwards compatible (the api might change in a future version).</p>
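<p>For reference, a minimal sketch of the low-level variant, where you compute the batch statistics yourself (the feature size, epsilon, and variable initializers are illustrative):</p> <pre><code>import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 64])
mean, variance = tf.nn.moments(x, axes=[0])   # per-feature batch statistics
beta = tf.Variable(tf.zeros([64]))            # learnable offset
gamma = tf.Variable(tf.ones([64]))            # learnable scale
y = tf.nn.batch_normalization(x, mean, variance, beta, gamma,
                              variance_epsilon=1e-5)
</code></pre> <p>The higher-level tf.layers.batch_normalization does this bookkeeping for you, including the moving averages needed at inference time.</p>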
2018-02-23 14:07:02.603000+00:00
2018-02-23 14:07:02.603000+00:00
null
null
48,949,318
<p>TensorFlow seems to implement at least 3 versions of batch normalization:</p> <ul> <li><a href="https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization" rel="nofollow noreferrer"><code>tf.nn.batch_normalization</code></a></li> <li><a href="https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization" rel="nofollow noreferrer"><code>tf.layers.batch_normalization</code></a></li> <li><a href="https://www.tensorflow.org/api_docs/python/tf/contrib/layers/batch_norm" rel="nofollow noreferrer"><code>tf.contrib.layers.batch_norm</code></a></li> </ul> <p>These all have different arguments and documentation.</p> <p>What is the difference between these, and which one should I use?</p>
2018-02-23 13:48:22.727000+00:00
2018-02-24 10:22:23.393000+00:00
null
python|tensorflow|batch-normalization
['http://arxiv.org/abs/1502.03167']
1
6,220,809
<p>Another, although maybe much slower, approach is to do <a href="http://arxiv.org/pdf/cs.CV/0312044" rel="nofollow">Clustering by Compression (Arxiv.org, PDF)</a> and maybe <a href="http://www.sciencedirect.com/science/article/pii/S0031320301001820" rel="nofollow">use the JPEG coefficients</a> as the model data to be compared, instead of the uncompressed image data compressed by some other method of compression. Also see <a href="http://scholar.google.com/scholar?q=related%3aHrtgXptrZKIJ%3ascholar.google.com/&amp;hl=fi&amp;as_sdt=0&amp;as_vis=1" rel="nofollow">articles related to</a> the first paper from Google Scholar.</p> <p>Clustering by compression basically means compressing a file X using the (statistical) model from file Y and comparing the resulting size to the size of X compressed with its own model's data.</p> <p>Here is some background about the idea of <a href="http://marknelson.us/1991/02/01/arithmetic-coding-statistical-modeling-data-compression/" rel="nofollow">using different statistical models for compression</a>. JPEG compression <a href="http://en.wikipedia.org/wiki/JPEG#Entropy_coding" rel="nofollow">uses Huffman coding or arithmetic coding</a> to compress the DC coefficient tables.</p> <p>Yet another option, which may be much faster if the smaller images are not just downsampled and/or cropped versions, is to use the SIFT or SURF algorithms, as suggested by Wajih.</p>
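<p>A minimal sketch of the idea, using the normalized compression distance (NCD) from the clustering-by-compression literature, with plain zlib standing in for a real model-sharing compressor (the file names are illustrative):</p> <pre><code>import zlib

def c(data):
    # Compressed size of a byte string.
    return len(zlib.compress(data))

def ncd(x, y):
    # Normalized compression distance: files with shared statistics
    # compress well when concatenated.
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / float(max(cx, cy))

a = open('high_res.jpg', 'rb').read()
b = open('low_res.jpg', 'rb').read()
print(ncd(a, b))  # smaller value = more similar
</code></pre> <p>In practice you would compute this between each image in {H} and every candidate in {L} and pick the candidate with the smallest distance; as noted above, this is likely to be slow compared to feature-based matching.</p>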
2011-06-02 21:51:00.233000+00:00
2011-06-02 21:51:00.233000+00:00
null
null
6,218,956
<p>I have two sets of images, {H} and {L}. {H} consists of 512x512 images. {L} consists of all of the images in {H}, but scaled down to 32x32-128x128 and with compression artifacts from lossy compression.</p> <p>What would be the best way of matching images in {H} to their closest match in {L} using OpenCV?</p>
2011-06-02 18:50:10.207000+00:00
2011-06-02 21:51:00.233000+00:00
null
image-processing|opencv
['http://arxiv.org/pdf/cs.CV/0312044', 'http://www.sciencedirect.com/science/article/pii/S0031320301001820', 'http://scholar.google.com/scholar?q=related%3aHrtgXptrZKIJ%3ascholar.google.com/&hl=fi&as_sdt=0&as_vis=1', 'http://marknelson.us/1991/02/01/arithmetic-coding-statistical-modeling-data-compression/', 'http://en.wikipedia.org/wiki/JPEG#Entropy_coding']
5
70,924,586
<p>Here is an implementation from <a href="https://gist.github.com/wassname/7793e2058c5c9dacb5212c0ac0b18a8a" rel="nofollow noreferrer">GitHub</a> that may help:</p> <pre><code>from keras import backend as K def dice_coef(y_true, y_pred, smooth=1): &quot;&quot;&quot; Dice = (2*|X &amp; Y|)/ (|X|+ |Y|) = 2*sum(|A*B|)/(sum(A^2)+sum(B^2)) ref: https://arxiv.org/pdf/1606.04797v1.pdf &quot;&quot;&quot; intersection = K.sum(K.abs(y_true * y_pred), axis=-1) return (2. * intersection + smooth) / (K.sum(K.square(y_true),-1) + K.sum(K.square(y_pred),-1) + smooth) def dice_coef_loss(y_true, y_pred): return 1-dice_coef(y_true, y_pred) </code></pre>
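<p>A short usage sketch (the <code>model</code> variable is an assumed, already-built Keras segmentation model): the loss plugs straight into <code>compile</code>, and the coefficient itself can be tracked as a metric:</p> <pre><code># dice_coef_loss and dice_coef as defined above
model.compile(optimizer='adam', loss=dice_coef_loss, metrics=[dice_coef])
</code></pre>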
2022-01-31 10:41:09.623000+00:00
2022-01-31 10:41:09.623000+00:00
null
null
57,568,455
<p>I use TensorFlow 1.12 for semantic (image) segmentation based on materials. With a multinomial cross-entropy loss function, this yields okay-ish results, especially considering the sparse amount of training data I'm working with, with an mIoU of 0.44:</p> <p><a href="https://i.stack.imgur.com/7kTAi.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7kTAi.jpg" alt="Segmentations after training with cross entropy loss"></a></p> <p>When I replace this with my dice loss implementation, however, the network predicts far fewer small segmentations, which is contrary to my understanding of its theory. I thought it's supposed to work better with imbalanced datasets and should be better at predicting the smaller classes:</p> <p><a href="https://i.stack.imgur.com/SSQRX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SSQRX.jpg" alt="Segmentations after training with dice loss"></a></p> <p>A table visualizes this better; as you can see, with dice loss a lot more of the smaller classes are never predicted (hence the undefined precision). With cross-entropy, at least some predictions are made for all classes:</p> <p><a href="https://i.stack.imgur.com/O5XOk.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O5XOk.jpg" alt="Table comparing metrics for the different losses"></a></p> <p>I initially thought that this is the network's way of increasing mIoU (since my understanding is that dice loss optimizes dice loss directly). However, mIoU with dice loss is 0.33 compared to cross entropy's 0.44 mIoU, so it has failed in that regard. I'm now wondering whether my implementation is correct:</p> <pre><code>def dice_loss(onehots_true, logits): probabilities = tf.nn.softmax(logits) #weights = 1.0 / ((tf.reduce_sum(onehots_true, axis=0)**2) + 1e-3) #weights = tf.clip_by_value(weights, 1e-17, 1.0 - 1e-7) numerator = tf.reduce_sum(onehots_true * probabilities, axis=0) #numerator = tf.reduce_sum(weights * numerator) denominator = tf.reduce_sum(onehots_true + probabilities, axis=0) #denominator = tf.reduce_sum(weights * denominator) loss = 1.0 - 2.0 * (numerator + 1) / (denominator + 1) return loss </code></pre> <p>Some implementations I found use weights, though I am not sure why, since mIoU isn't weighted either. At any rate, training is prematurely stopped after a few epochs with dreadful test results when I use weights, hence I commented them out.</p> <p>Does anyone see anything wrong with my dice loss implementation? I pretty faithfully followed online examples.</p> <p>In order to speed up the labeling process, I only annotated with parallelogram-shaped polygons, and I copied some annotations from a larger dataset. This resulted in only a couple of ground truth segmentations per image:</p> <p><a href="https://i.stack.imgur.com/9ExFT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9ExFT.jpg" alt="Ground truth annotations"></a></p> <p>(This image actually contains slightly more annotations than average.)</p>
2019-08-20 07:16:22.793000+00:00
2022-01-31 10:41:09.623000+00:00
2019-09-08 22:43:49.013000+00:00
tensorflow|image-segmentation|loss-function
['https://gist.github.com/wassname/7793e2058c5c9dacb5212c0ac0b18a8a']
1
58,583,829
<p>I'm going to add the formula for reference for anyone who answers in the future. The generalized dice loss is given by: <a href="https://i.stack.imgur.com/JEUAe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JEUAe.png" alt="enter image description here"></a></p> <p>picture taken from <a href="https://arxiv.org/abs/1707.03237" rel="nofollow noreferrer">Sudre et al</a>.</p> <p>Classes are indexed by <code>l</code> and pixel locations by <code>n</code>. The probabilities <code>p_ln</code> can be generated using softmax or sigmoid at your network output.</p> <hr> <p>In your implementation, the loss is <strong>summed</strong> across the batch. That would produce a very large loss value, and your network gradients would explode. Instead, you need to use the average. Note that the weights are required to ensure you combat the class imbalance problem.</p> <p>There is no concrete proof that GDL outperforms cross-entropy, save for a very specific example noted in the paper. GDL is attractive because it is directly related to IoU, ergo the loss function and evaluation metrics would improve hand-in-hand. If you still haven't managed to train your network, I'd recommend moving to cross-entropy for good.</p>
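<p>For concreteness, a sketch of the weighted, batch-averaged version in TensorFlow (an illustrative reading of the formula above, not verified training code; inputs are assumed flattened to shape (num_pixels, num_classes)):</p> <pre><code>import tensorflow as tf

def generalized_dice_loss(onehots_true, logits):
    probs = tf.nn.softmax(logits)
    # Per-class weights: inverse squared class volume, as in the formula.
    w = 1.0 / (tf.square(tf.reduce_sum(onehots_true, axis=0)) + 1e-10)
    numerator = tf.reduce_sum(w * tf.reduce_sum(onehots_true * probs, axis=0))
    denominator = tf.reduce_sum(w * tf.reduce_sum(onehots_true + probs, axis=0))
    # One scalar per batch; average (not sum) this across batches.
    return 1.0 - 2.0 * numerator / (denominator + 1e-10)
</code></pre>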
2019-10-27 22:17:58.433000+00:00
2019-10-27 22:17:58.433000+00:00
null
null
57,568,455
<p>I use TensorFlow 1.12 for semantic (image) segmentation based on materials. With a multinomial cross-entropy loss function, this yields okay-ish results, especially considering the sparse amount of training data I'm working with, with an mIoU of 0.44:</p> <p><a href="https://i.stack.imgur.com/7kTAi.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7kTAi.jpg" alt="Segmentations after training with cross entropy loss"></a></p> <p>When I replace this with my dice loss implementation, however, the network predicts far fewer small segmentations, which is contrary to my understanding of its theory. I thought it's supposed to work better with imbalanced datasets and should be better at predicting the smaller classes:</p> <p><a href="https://i.stack.imgur.com/SSQRX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SSQRX.jpg" alt="Segmentations after training with dice loss"></a></p> <p>A table visualizes this better; as you can see, with dice loss a lot more of the smaller classes are never predicted (hence the undefined precision). With cross-entropy, at least some predictions are made for all classes:</p> <p><a href="https://i.stack.imgur.com/O5XOk.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O5XOk.jpg" alt="Table comparing metrics for the different losses"></a></p> <p>I initially thought that this is the network's way of increasing mIoU (since my understanding is that dice loss optimizes dice loss directly). However, mIoU with dice loss is 0.33 compared to cross entropy's 0.44 mIoU, so it has failed in that regard. I'm now wondering whether my implementation is correct:</p> <pre><code>def dice_loss(onehots_true, logits): probabilities = tf.nn.softmax(logits) #weights = 1.0 / ((tf.reduce_sum(onehots_true, axis=0)**2) + 1e-3) #weights = tf.clip_by_value(weights, 1e-17, 1.0 - 1e-7) numerator = tf.reduce_sum(onehots_true * probabilities, axis=0) #numerator = tf.reduce_sum(weights * numerator) denominator = tf.reduce_sum(onehots_true + probabilities, axis=0) #denominator = tf.reduce_sum(weights * denominator) loss = 1.0 - 2.0 * (numerator + 1) / (denominator + 1) return loss </code></pre> <p>Some implementations I found use weights, though I am not sure why, since mIoU isn't weighted either. At any rate, training is prematurely stopped after a few epochs with dreadful test results when I use weights, hence I commented them out.</p> <p>Does anyone see anything wrong with my dice loss implementation? I pretty faithfully followed online examples.</p> <p>In order to speed up the labeling process, I only annotated with parallelogram-shaped polygons, and I copied some annotations from a larger dataset. This resulted in only a couple of ground truth segmentations per image:</p> <p><a href="https://i.stack.imgur.com/9ExFT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9ExFT.jpg" alt="Ground truth annotations"></a></p> <p>(This image actually contains slightly more annotations than average.)</p>
2019-08-20 07:16:22.793000+00:00
2022-01-31 10:41:09.623000+00:00
2019-09-08 22:43:49.013000+00:00
tensorflow|image-segmentation|loss-function
['https://i.stack.imgur.com/JEUAe.png', 'https://arxiv.org/abs/1707.03237']
2
73,805,224
<p>You could maybe use a ResNet architecture at the beginning and see what happens. Basically, a ResNet takes as input an image of shape HxWxC, where H is the height, W the width, and C the number of channels. In your case, you do not have an actual image, but you still encode your environment in 3 channels with HxW=10x10, so your encoding should work.</p> <p>Then you will also have to change the output of the ResNet so that it outputs only 4 values, with each value corresponding to one action.</p> <p>Given that the input space is not that big, maybe you could start with a ResNet-18, which is very small, and see what happens. Given that you are new to ML and RL, there is a very old paper that tries to solve Atari games using deep learning, <a href="https://arxiv.org/pdf/1312.5602v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1312.5602v1.pdf</a>, and the method is not hard to understand. Snake has similar (or even lower) complexity than the Atari games, so this paper may provide more insight.</p>
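<p>A minimal sketch of such a network in Keras (the layer sizes are illustrative assumptions, not tuned values; note Keras' default channels-last layout, so the 3x10x10 state is fed as 10x10x3):</p> <pre><code>from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

# Input: 10x10 grid with 3 channels (body / apple / head planes).
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(10, 10, 3)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
# One output per action: up / down / left / right (e.g. Q-values).
model.add(Dense(4, activation=None))
model.compile(optimizer='adam', loss='mse')
</code></pre>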
2022-09-21 18:10:13.083000+00:00
2022-09-21 18:10:13.083000+00:00
null
null
73,802,653
<p>I'm new to machine learning and reinforcement learning, and I'm attempting to create an AI agent that learns to play Snake. I am having trouble choosing / developing a neural network architecture that can work with the shape of my input / output vectors.</p> <p>My input is a 3x10x10 tensor, basically 3 layers of a 10x10 grid the snake moves on (I only use 0s and 1s throughout the tensor, mark the position of the snake's body parts in the first layer, mark the apple's position on the second layer, and the snake's head position on the 3rd).</p> <p>For my output, I'm looking for a vector of 4 values, corresponding to the 4 possible moves a player has available (change direction to up / down / left / right).</p> <p>I would appreciate any recommendations on how to go about choosing an architecture in this case, as well as any thoughts regarding the way I chose to encode my game state into an input vector for the agent to train.</p>
2022-09-21 14:37:25.113000+00:00
2022-09-23 13:04:17.503000+00:00
2022-09-23 13:04:17.503000+00:00
machine-learning|neural-network|reinforcement-learning
['https://arxiv.org/pdf/1312.5602v1.pdf']
1
53,829,101
<p>Choosing the best method for decision making depends entirely on the assumptions present in your problem.</p> <p>The first thing you should consider is: "Are these two parameters completely independent or not?" If we assume transportation time and communication cost are independent, then there is a simple trade-off between them. <a href="https://arxiv.org/pdf/1610.04513.pdf" rel="nofollow noreferrer">On communication cost vs. load balancing in Content Delivery Networks</a> is a published paper which investigates this kind of trade-off in a CDN.</p> <p>I suggest you read the three basic methods proposed in this paper. These methods are general enough to use in any independent trade-off problem, so I think they would be enough to get the basic idea.</p> <hr> <p><strong>Added Information:</strong></p> <p>In case you have problems accessing the paper:</p> <p>The first step in comparing cost and time is scaling these two variables, so that it is possible to compare them easily. <a href="https://en.wikipedia.org/wiki/Normalization_(statistics)" rel="nofollow noreferrer">Wikipedia</a> has a good article on this part; <a href="https://en.wikipedia.org/wiki/Feature_scaling" rel="nofollow noreferrer">feature scaling</a> would be a good solution for you.</p> <p>One of the simplest methods for decision making in your problem is calculating the following parameter for each possible solution: </p> <pre><code>wi = α*ci + (1-α)*ti </code></pre> <p>where <code>ci</code> denotes the scaled cost of picking the <code>i</code>th solution and <code>ti</code> denotes the scaled time of the <code>i</code>th solution. The solution with the minimum <code>wi</code> is the best answer.</p> <p>In this algorithm, <code>0 &lt; α &lt; 1</code> determines the relative importance of time and cost: if <code>α=1</code> you decide based only on cost, and if <code>α=0</code> time is the only parameter that matters.</p>
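<p>A short sketch of this computation in Python (the candidate values and α are made-up examples):</p> <pre><code>costs = [120.0, 80.0, 150.0]  # communication cost of each candidate
times = [4.0, 9.0, 2.0]       # transportation time of each candidate
alpha = 0.6                   # relative importance of cost

def min_max_scale(values):
    # Feature scaling to [0, 1], so cost and time become comparable.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

c, t = min_max_scale(costs), min_max_scale(times)
w = [alpha * ci + (1 - alpha) * ti for ci, ti in zip(c, t)]
best = w.index(min(w))  # candidate with the best cost/time balance
print(best, w)
</code></pre>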
2018-12-18 08:36:28.863000+00:00
2018-12-18 09:30:28.770000+00:00
2018-12-18 09:30:28.770000+00:00
null
53,828,561
<p>I am developing a simulation model to compare different delivery route options. A critical criterion for selecting the delivery route is to evaluate both the transportation time and the cost, and the option with the best balance between time and cost will be selected (or one chosen according to certain weights assigned to time and cost). The question is that time and cost are different measures, and there needs to be a way to transform the two isolated measures into a single uniform measure. What are the usual methods/algorithms for doing this?</p>
2018-12-18 07:57:31.120000+00:00
2018-12-18 09:30:28.770000+00:00
null
algorithm|performance|units-of-measurement
['https://arxiv.org/pdf/1610.04513.pdf', 'https://en.wikipedia.org/wiki/Normalization_(statistics)', 'https://en.wikipedia.org/wiki/Feature_scaling']
3
47,378,446
<p>Another -- and drastically better -- option is to not use VGG16. If you look at <a href="https://arxiv.org/pdf/1707.07012.pdf" rel="nofollow noreferrer">Figure 5 in this paper</a>, you'll note that VGG16 does very badly in terms of accuracy vs. FLOPs (floating-point operations). If you need speed, MobileNet or a reduced-size ResNet will do much better. Even Inception-v2 will outperform VGG in accuracy at much lower computational cost.</p> <p>This will drastically reduce your training time <em>and</em> memory use.</p>
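<p>As a sketch of the swap (assuming a Keras version that ships the applications module; the weights and input size are illustrative):</p> <pre><code>from keras.applications import MobileNet

# Drop VGG16 in favor of a much cheaper backbone with comparable accuracy.
base = MobileNet(weights='imagenet', include_top=False,
                 input_shape=(224, 224, 3))
</code></pre>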
2017-11-19 15:18:10.483000+00:00
2017-11-19 15:18:10.483000+00:00
null
null
41,160,339
<p>I am fine-tuning a VGG16 network on a 32-CPU machine using tensorflow. I used sparse cross-entropy loss. I have to classify the clothing images into 50 classes. After 2 weeks of training, this is how the loss is going down, which I feel is very slow convergence. My batch size is 50. Is this normal, or what do you think is going wrong here? Accuracy is also really bad. And now it crashed with a bad memory allocation error: <code>terminate called after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc</code></p> <p>My last line in the log file looks like this - </p> <p><code>2016-12-13 08:56:57.162186: step 31525, loss = 232179.64 (1463843.280 sec/batch)</code></p> <p><a href="https://i.stack.imgur.com/I4A52.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I4A52.png" alt="total loss"></a></p> <p>I also tried a Tesla K80 GPU, and after 20 hrs of training this is how the loss looks. All parameters are the same. The worrying part is that using the GPU didn't increase the iteration rate, which means each step takes the same time whether on 32 CPUs with 50 threads or on the Tesla K80. </p> <p>I definitely need some practical advice here.</p> <p><a href="https://i.stack.imgur.com/QKA13.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QKA13.png" alt="total loss GPU"></a> <a href="https://i.stack.imgur.com/pY5MF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pY5MF.png" alt="enter image description here"></a></p>
2016-12-15 09:15:07.467000+00:00
2017-11-19 15:18:10.483000+00:00
2016-12-15 09:27:20.110000+00:00
machine-learning|computer-vision|tensorflow|deep-learning
['https://arxiv.org/pdf/1707.07012.pdf']
1
50,256,072
<p>Well, you have inadvertently hit on an iceberg indeed...</p> <p>As a prelude, let's make clear that the concepts of variance &amp; standard deviation are defined only for <em>scalar</em> variables; for vector variables (like your own 3d output here), the concept of variance is no longer meaningful, and the <em>covariance matrix</em> is used instead (<a href="https://en.wikipedia.org/wiki/Covariance_matrix" rel="noreferrer">Wikipedia</a>, <a href="http://mathworld.wolfram.com/CovarianceMatrix.html" rel="noreferrer">Wolfram</a>).</p> <p>Continuing on the prelude, the shape of your <code>sigma</code> is indeed as expected according to the scikit-learn <a href="http://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.GaussianProcessRegressor.html#sklearn.gaussian_process.GaussianProcessRegressor.predict" rel="noreferrer">docs</a> on the <code>predict</code> method (i.e. there is no <em>coding</em> error in your case):</p> <blockquote> <p><strong>Returns</strong>:</p> <p><strong>y_mean</strong> : array, shape = (n_samples, [n_output_dims])</p> <p>Mean of predictive distribution a query points</p> <p><strong>y_std</strong> : array, shape = (n_samples,), optional</p> <p>Standard deviation of predictive distribution at query points. Only returned when return_std is True.</p> <p><strong>y_cov</strong> : array, shape = (n_samples, n_samples), optional</p> <p>Covariance of joint predictive distribution a query points. Only returned when return_cov is True.</p> </blockquote> <p>Combined with my previous remark about the covariance matrix, the first choice would be to try the <code>predict</code> function with the argument <code>return_cov=True</code> instead (since asking for the <em>variance</em> of a vector variable is meaningless); but again, this will lead to a 16x16 matrix, instead of a 3x3 one (the expected shape of a covariance matrix for 3 output variables)...</p> <p>Having clarified these details, let's proceed to the essence of the issue.</p> <hr /> <p>At the heart of your issue lies something rarely mentioned (or even hinted at) in practice and in relevant tutorials: Gaussian Process regression with multiple outputs is <strong>highly non-trivial</strong> and still a field of active research. Arguably, scikit-learn cannot really handle the case, despite the fact that it will superficially appear to do so, without issuing at least some relevant warning.</p> <p>Let's look for some corroboration of this claim in the <em>recent</em> scientific literature:</p> <p><a href="https://www.sciencedirect.com/science/article/abs/pii/S0169743915000180" rel="noreferrer">Gaussian process regression with multiple response variables</a> (2015) - quoting (emphasis mine):</p> <blockquote> <p><strong>most GPR implementations model only a single response variable</strong>, due to the difficulty in the formulation of covariance function for correlated multiple response variables, which describes not only the correlation between data points, but also the correlation between responses. In the paper we propose a direct formulation of the covariance function for multi-response GPR, based on the idea that [...]</p> <p>Despite the high uptake of GPR for various modelling tasks, there still exists some outstanding issues with the GPR method. Of particular interest in this paper is the need to model multiple response variables. 
<strong>Traditionally, one response variable is treated as a Gaussian process, and multiple responses are modelled independently without considering their correlation.</strong> This pragmatic and straightforward approach was taken in many applications (e.g. [7, 26, 27]), though it is not ideal. A key to modelling multi-response Gaussian processes is the formulation of covariance function that describes not only the correlation between data points, but also the correlation between responses.</p> </blockquote> <p><a href="https://www.sciencedirect.com/science/article/pii/S0950705117306123" rel="noreferrer">Remarks on multi-output Gaussian process regression</a> (2018) - quoting (emphasis in the original):</p> <blockquote> <p>Typical GPs are usually designed for single-output scenarios wherein the output is a scalar. However, the multi-output problems have arisen in various fields, [...]. Suppose that we attempt to approximate T outputs {f(t)}, 1 ≤ t ≤ T, one intuitive idea is to use the single-output GP (SOGP) to approximate them individually using the associated training data D(t) = { X(t), y(t) }, see Fig. 1(a). Considering that the outputs are correlated in some way, modeling them individually may result in the loss of valuable information. Hence, an increasing diversity of engineering applications are embarking on the use of multi-output GP (MOGP), which is conceptually depicted in Fig. 1(b), for surrogate modeling.</p> <p>The study of MOGP has a long history and is known as multivariate Kriging or Co-Kriging in the geostatistic community; [...] The MOGP handles problems with the basic assumption that the outputs are correlated in some way. Hence, a key issue in MOGP is to <em>exploit the output correlations such that the outputs can leverage information from one another</em> in order to provide more accurate predictions in comparison to modeling them individually.</p> <p><a href="https://i.stack.imgur.com/F5H7f.png" rel="noreferrer"><img src="https://i.stack.imgur.com/F5H7f.png" alt="enter image description here" /></a></p> </blockquote> <p><a href="http://www.mcs.anl.gov/publication/physics-based-covariance-models-gaussian-processes-multiple-outputs" rel="noreferrer">Physics-Based Covariance Models for Gaussian Processes with Multiple Outputs</a> (2013) - quoting:</p> <blockquote> <p>Gaussian process analysis of processes with multiple outputs is limited by the fact that far fewer good classes of covariance functions exist compared with the scalar (single-output) case. [...]</p> <p>The difficulty of finding “good” covariance models for multiple outputs can have important practical consequences. An incorrect structure of the covariance matrix can significantly reduce the efficiency of the uncertainty quantification process, as well as the forecast efficiency in kriging inferences [16]. Therefore, we argue, the covariance model may play an even more profound role in co-kriging [7, 17]. This argument applies when the covariance structure is inferred from data, as is typically the case.</p> </blockquote> <hr /> <p>Hence, my understanding, as I said, is that scikit-learn is not really capable of handling such cases, despite the fact that something like that is not mentioned or hinted at in the documentation (it may be interesting to open a relevant issue at the project page).
This seems to be the conclusion in <a href="https://stackoverflow.com/questions/43618633/multi-output-spatial-statistics-with-gaussian-processes?utm_medium=organic&amp;utm_source=google_rich_qa&amp;utm_campaign=google_rich_qa">this relevant SO thread</a>, too, as well as in <a href="https://stats.stackexchange.com/questions/65470/gaussian-processes-how-to-use-gpml-for-multi-dimensional-output">this CrossValidated thread</a> regarding the GPML (Matlab) toolbox.</p> <p>Having said that, and apart from reverting to the choice of simply modeling each output separately (not an invalid choice, as long as you keep in mind that you may be throwing away useful information from the correlation between your 3-D output elements), there is at least one Python toolbox which seems capable of modeling multiple-output GPs, namely the <strong>runlmc</strong> (<a href="https://arxiv.org/abs/1705.10813" rel="noreferrer">paper</a>, <a href="https://github.com/vlad17/runlmc" rel="noreferrer">code</a>, <a href="http://runlmc.readthedocs.io/en/latest/" rel="noreferrer">documentation</a>).</p>
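<p>For completeness, a minimal sketch of the fallback mentioned above: fitting one independent scikit-learn GP per output column (variable names follow the question; this deliberately ignores between-output correlation):</p> <pre><code>from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

means, stds = [], []
for k in range(y_train.shape[1]):  # one scalar GP per output dimension
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF())
    gp.fit(x_train, y_train[:, k])
    mu, sigma = gp.predict(x_test, return_std=True)  # sigma: shape (n_samples,)
    means.append(mu)
    stds.append(sigma)
# stds now holds one per-test-point standard deviation vector per output
</code></pre>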
2018-05-09 14:39:40.240000+00:00
2018-05-09 18:04:16.730000+00:00
2020-06-20 09:12:55.060000+00:00
null
50,185,399
<p>I am using <a href="http://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.GaussianProcessRegressor.html" rel="nofollow noreferrer">scikit-learn</a> for a Gaussian process regression (GPR) operation to predict data. My training data are as follows:</p> <pre class="lang-python prettyprint-override"><code>x_train = np.array([[0,0],[2,2],[3,3]]) #2-D cartesian coordinate points y_train = np.array([[200,250, 155],[321,345,210],[417,445,851]]) #observed output from three different datasources at respective input data points (x_train) </code></pre> <p>The test points (2-D) where the mean and variance/standard deviation need to be predicted are:</p> <pre class="lang-python prettyprint-override"><code>xvalues = np.array([0,1,2,3]) yvalues = np.array([0,1,2,3]) x,y = np.meshgrid(xvalues,yvalues) #Total 16 locations (2-D) positions = np.vstack([x.ravel(), y.ravel()]) x_test = (np.array(positions)).T </code></pre> <p>Now, after running the GPR (<code>GaussianProcessRegressor</code>) fit (here, the product of ConstantKernel and RBF is used as the kernel in <code>GaussianProcessRegressor</code>), the mean and variance/standard deviation can be predicted with the following line of code:</p> <pre class="lang-python prettyprint-override"><code>y_pred_test, sigma = gp.predict(x_test, return_std =True) </code></pre> <p>While printing the predicted mean (<code>y_pred_test</code>) and variance (<code>sigma</code>), I get the following output printed in the console:</p> <p><a href="https://i.stack.imgur.com/86wov.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/86wov.jpg" alt="enter image description here" /></a></p> <p>In the predicted values (mean), the 'nested array' with three objects inside the inner array is printed. It can be presumed that the inner arrays are the predicted mean values of each data source at each 2-D test point location. However, the printed variance contains only a single array with 16 objects (perhaps for the 16 test point locations). I know that the variance provides an indication of the uncertainty of the estimation. Hence, I was expecting a predicted variance for each data source at each test point. Is my expectation wrong? How can I get the predicted variance for each data source at each test point? Is it due to wrong code?</p>
2018-05-05 03:22:51.917000+00:00
2022-05-09 12:14:17.010000+00:00
2021-02-18 10:12:36.163000+00:00
python|machine-learning|scikit-learn|regression|gaussian-process
['https://en.wikipedia.org/wiki/Covariance_matrix', 'http://mathworld.wolfram.com/CovarianceMatrix.html', 'http://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.GaussianProcessRegressor.html#sklearn.gaussian_process.GaussianProcessRegressor.predict', 'https://www.sciencedirect.com/science/article/abs/pii/S0169743915000180', 'https://www.sciencedirect.com/science/article/pii/S0950705117306123', 'https://i.stack.imgur.com/F5H7f.png', 'http://www.mcs.anl.gov/publication/physics-based-covariance-models-gaussian-processes-multiple-outputs', 'https://stackoverflow.com/questions/43618633/multi-output-spatial-statistics-with-gaussian-processes?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa', 'https://stats.stackexchange.com/questions/65470/gaussian-processes-how-to-use-gpml-for-multi-dimensional-output', 'https://arxiv.org/abs/1705.10813', 'https://github.com/vlad17/runlmc', 'http://runlmc.readthedocs.io/en/latest/']
12
45,477,940
<p>I am not quite sure what the heatmap data of your project looks like exactly, but it seems to me that you could use something like <a href="http://www.huppelen.nl/publications/selectiveSearchDraft.pdf" rel="nofollow noreferrer">Selective Search</a>. You can also have a look at this <a href="https://arxiv.org/pdf/1602.08405.pdf" rel="nofollow noreferrer">interesting paper</a>. Maybe you can use this approach on your dataset.</p>
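<p>As a much simpler baseline than selective search (a sketch under the assumption that the heatmap is a (224,224,3) array with values in 0-255 whose intensity marks the subject), you could threshold the heatmap and take the bounding rectangles of the connected components:</p> <pre><code>import cv2
import numpy as np

# heatmap: (224, 224, 3) array; collapse to a single channel
gray = heatmap.mean(axis=2).astype(np.uint8)

# Keep only the strongly activated region (threshold value is a guess to tune).
_, mask = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

# One bounding box per connected blob of activation.
# Note: OpenCV 3.x returns (image, contours, hierarchy) instead.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours]  # (x, y, w, h) tuples
print(boxes)
</code></pre>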
2017-08-03 07:45:14.583000+00:00
2017-08-03 07:45:14.583000+00:00
null
null
45,477,478
<p>I have a group of images and some separate heatmap data which (imperfectly) explains where subject of the image is. The heatmap data is in a numpy array with shape (224,224,3). I would like to generate bounding box data from this heatmap data.</p> <p>The heatmaps are not always perfect, So I guess I'm wondering if anyone can think of an intelligent way to do this.</p> <p>Here are some examples of what happens when I apply the heatmap data to the image:</p> <p><a href="https://github.com/jacobgil/keras-grad-cam/raw/master/examples/persian_cat.jpg?raw=true" rel="nofollow noreferrer"><img src="https://github.com/jacobgil/keras-grad-cam/raw/master/examples/persian_cat.jpg?raw=true" alt="Image of a cat with a heatmap illuminating the subject of the image"></a></p> <p><a href="https://i.stack.imgur.com/JZqJn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JZqJn.jpg" alt="enter image description here"></a></p> <p>I found a solution to this in matlab, but I have no idea how to read this code! I am a python programmer, unfortunately. <a href="https://github.com/metalbubble/CAM/tree/master/bboxgenerator" rel="nofollow noreferrer">https://github.com/metalbubble/CAM/tree/master/bboxgenerator</a></p> <p>Anyone have any ideas about how to approach something like this? </p>
2017-08-03 07:22:37.967000+00:00
2021-12-06 14:45:53.087000+00:00
null
python|matlab|opencv|machine-learning|computer-vision
['http://www.huppelen.nl/publications/selectiveSearchDraft.pdf', 'https://arxiv.org/pdf/1602.08405.pdf']
2
72,995,868
<p>Pass an explicit <code>shape=(input_shape[-1],)</code> argument to <code>add_weight</code>.</p> <p>Here is the corrected version of the function:</p> <pre><code>from tensorflow.keras.layers import Layer from keras import backend as K from tensorflow import keras class AttentionLayer(Layer): def __init__(self, step_dim, W_regularizer=None, b_regularizer=None, W_constraint=None, b_constraint=None, bias=True, **kwargs): &quot;&quot;&quot; Keras Layer that implements an Attention mechanism for temporal data. Supports Masking. Follows the work of Raffel et al. [https://arxiv.org/abs/1512.08756] # Input shape 3D tensor with shape: `(samples, steps, features)`. # Output shape 2D tensor with shape: `(samples, features)`. :param kwargs: Just put it on top of an RNN Layer (GRU/LSTM/SimpleRNN) with return_sequences=True. The dimensions are inferred based on the output shape of the RNN. &quot;&quot;&quot; self.supports_masking = True self.init = keras.initializers.get('glorot_uniform') self.W_regularizer = keras.regularizers.get(W_regularizer) self.b_regularizer = keras.regularizers.get(b_regularizer) self.W_constraint = keras.constraints.get(W_constraint) self.b_constraint = keras.constraints.get(b_constraint) self.bias = bias self.step_dim = step_dim self.features_dim = 0 super(AttentionLayer, self).__init__(**kwargs) def build(self, input_shape): assert len(input_shape) == 3 self.W = self.add_weight(shape=(input_shape[-1],), initializer=self.init, name='{}_W'.format(self.name), regularizer=self.W_regularizer, constraint=self.W_constraint) self.features_dim = input_shape[-1] if self.bias: self.b = self.add_weight(shape=(input_shape[1],), initializer='zero', name='{}_b'.format(self.name), regularizer=self.b_regularizer, constraint=self.b_constraint) else: self.b = None self.built = True def compute_mask(self, input, input_mask=None): # do not pass the mask to the next layers return None def call(self, x, mask=None): # TF backend doesn't support it # eij = K.dot(x, self.W) # features_dim = self.W.shape[0] # step_dim = x._keras_shape[1] features_dim = self.features_dim step_dim = self.step_dim eij = K.reshape(K.dot(K.reshape(x, (-1, features_dim)), K.reshape(self.W, (features_dim, 1))), (-1, step_dim)) if self.bias: eij += self.b eij = K.tanh(eij) a = K.exp(eij) # apply mask after the exp. will be re-normalized next if mask is not None: # Cast the mask to floatX to avoid float64 upcasting in theano a *= K.cast(mask, K.floatx()) # in some cases especially in the early stages of training the sum may be almost zero a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx()) a = K.expand_dims(a) weighted_input = x * a return K.sum(weighted_input, axis=1) def compute_output_shape(self, input_shape): return input_shape[0], self.features_dim def get_config(self): config = {'step_dim': self.step_dim} base_config = super(AttentionLayer, self).get_config() return dict(list(base_config.items()) + list(config.items())) </code></pre>
2022-07-15 14:54:26.067000+00:00
2022-07-15 14:54:26.067000+00:00
null
null
62,976,818
<p>(I think this is because of the version conflict as the authors have used <code>keras.engine.topology.Layer</code>)</p> <p><strong>With tensorflow==2.2.0 and keras==2.4.3</strong>, I am trying to learn the Attention Mechanism and have imported the code from somewhere as:</p> <pre><code>from keras import backend as K from keras.engine.topology import Layer from keras import initializers, regularizers, constraints from keras.layers import Dense, Input, LSTM, Bidirectional, Activation, Conv1D, GRU, TimeDistributed from keras.layers import Dropout, Embedding, GlobalMaxPooling1D, MaxPooling1D, Add, Flatten, SpatialDropout1D from keras.layers import GlobalAveragePooling1D, BatchNormalization, concatenate from keras.layers import Reshape, merge, Concatenate, Lambda, Average from keras.models import Sequential, Model from keras.initializers import Constant from keras.layers.merge import add class Attention(Layer): def __init__(self, step_dim, W_regularizer=None, b_regularizer=None, W_constraint=None, b_constraint=None, bias=True, **kwargs): self.supports_masking = True self.init = initializers.get('glorot_uniform') self.W_regularizer = regularizers.get(W_regularizer) self.b_regularizer = regularizers.get(b_regularizer) self.W_constraint = constraints.get(W_constraint) self.b_constraint = constraints.get(b_constraint) self.bias = bias self.step_dim = step_dim self.features_dim = 0 super(Attention, self).__init__(**kwargs) def build(self, input_shape): assert len(input_shape) == 3 self.W = self.add_weight((input_shape[-1],), initializer=self.init, name='{}_W'.format(self.name), regularizer=self.W_regularizer, constraint=self.W_constraint) self.features_dim = input_shape[-1] if self.bias: self.b = self.add_weight((input_shape[1],), initializer='zero', name='{}_b'.format(self.name), regularizer=self.b_regularizer, constraint=self.b_constraint) else: self.b = None self.built = True def compute_mask(self, input, input_mask=None): return None def call(self, x, mask=None): features_dim = self.features_dim step_dim = self.step_dim eij = K.reshape(K.dot(K.reshape(x, (-1, features_dim)), K.reshape(self.W, (features_dim, 1))), (-1, step_dim)) if self.bias: eij += self.b eij = K.tanh(eij) a = K.exp(eij) if mask is not None: a *= K.cast(mask, K.floatx()) a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx()) a = K.expand_dims(a) weighted_input = x * a return K.sum(weighted_input, axis=1) def compute_output_shape(self, input_shape): return input_shape[0], self.features_dim </code></pre> <p>The Problem is when I try to use,</p> <pre><code>lstm_layer = LSTM(300, dropout=0.25, recurrent_dropout=0.25, return_sequences=True) inp = Input(shape=(maxlen,), dtype='int32') embedding= embedding_layer(inp) x = lstm_layer(embedding) x = Dropout(0.25)(x) merged = Attention(maxlen)(x) merged = Dense(256, activation='relu')(merged) merged = Dropout(0.25)(merged) merged = BatchNormalization()(merged) outp = Dense(len(int_category), activation='softmax')(merged) AttentionLSTM = Model(inputs=inp, outputs=outp) AttentionLSTM.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc']) AttentionLSTM.summary() </code></pre> <p>it throws an error as <strong>TypeError: add_weight() got multiple values for argument 'name'</strong></p> <p>Full traceback of the error is:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-25-1ab1f1ef1ed7&gt; in &lt;module&gt; 5 x = lstm_layer(embedding) 6 x = 
Dropout(0.25)(x) ----&gt; 7 merged = Attention(maxlen)(x) 8 merged = Dense(256, activation='relu')(merged) 9 merged = Dropout(0.25)(merged) /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs) 895 # Build layer if applicable (if the `build` method has been 896 # overridden). --&gt; 897 self._maybe_build(inputs) 898 cast_inputs = self._maybe_cast_inputs(inputs) 899 /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _maybe_build(self, inputs) 2414 # operations. 2415 with tf_utils.maybe_init_scope(self): -&gt; 2416 self.build(input_shapes) # pylint:disable=not-callable 2417 # We must set also ensure that the layer is marked as built, and the build 2418 # shape is stored since user defined build functions may not be calling &lt;ipython-input-20-86a01469b2e5&gt; in build(self, input_shape) 23 name='{}_W'.format(self.name), 24 regularizer=self.W_regularizer, ---&gt; 25 constraint=self.W_constraint) 26 self.features_dim = input_shape[-1] 27 if self.bias: TypeError: add_weight() got multiple values for argument 'name' </code></pre>
2020-07-19 05:58:44.587000+00:00
2022-07-15 14:54:26.067000+00:00
null
python|tensorflow|keras|tensorflow2.0
[]
0
57,605,907
<p>In Reinforcement Learning (RL) there is often a lot of CPU computation required for each sample step (of course dependent on the environment; some environments can use the GPU too). The RL model has a hard time understanding the rewards and which action caused a specific reward, since a good reward could depend on a much earlier action. Therefore we want simple model architectures (shallow, with fewer weights) while doing RL, or else the training time will be way too slow. Hence your system's bottleneck is likely gathering samples rather than training on the data. Also note that not all TensorFlow architectures scale equally well with a GPU. Deep models with high numbers of weights, as in most image cases, scale super well (like CNN and MLP networks on MNIST), while time-dependent RNNs have less speedup potential (see <a href="https://ai.stackexchange.com/questions/7090/can-lstm-nets-be-speed-up-by-gpu">this stackexchange question</a>). So set your expectations accordingly when using a GPU.</p> <p>Through my RL experience, I have figured out some possible speedups I can share, and would love to see more suggestions!</p> <ol> <li>A single sample step can be sped up by creating multiple environments run in parallel, equal to the number of CPU cores (there are packages for parallel processing in Python you can use for this). This can potentially speed up sampling proportionally to the number of CPU cores. </li> <li><p>Between samples you have to do model predictions for the next action. Instead of calling model.predict at each step, you can call a single model.predict with all your parallel states (using a batch_size equal to the number of parallel environments); see the sketch after this list. This will speed up prediction time, as there are more optimization options.</p></li> <li><p>The switch from updating model weights to prediction is surprisingly slow. Hopefully this will be sped up in the future. But while the switch is as slow as it is today, you can speed up training by holding the model constant, doing lots of sampling and prediction (for example a whole episode, or multiple steps within an episode), and then training the model on all the newly gathered data afterwards. In my case this resulted in periodically high GPU utilization.</p></li> <li><p>Since sampling is most likely the bottleneck, you can make a historical repository of states, actions, and rewards. Then at training time you can randomly sample data from this repository and train on it together with the newly gathered data. This is known as "Experience Replay" in RL.</p></li> <li><p>Maybe the most fun, and the highest potential for improvement, comes from using more advanced RL architectures. For example, changing the loss function (check out <a href="https://arxiv.org/pdf/1707.06347.pdf" rel="nofollow noreferrer">PPO</a>), or using and tuning the "generalized advantage estimation" calculated from the rewards. Or change the model, for example by including time dependencies with RNNs or <a href="http://kvfrans.com/variational-autoencoders-explained/" rel="nofollow noreferrer">VAE</a>s, or by combining them all like <a href="https://worldmodels.github.io" rel="nofollow noreferrer">here</a>. </p></li> </ol> <p>Hopefully this helps you speed up the training time, and maybe get more utilization out of your GPU. </p>
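<p>A sketch of the batched-prediction idea from point 2 (assuming a Keras model and a list of per-environment states; the names are illustrative):</p> <pre><code>import numpy as np

# states: one observation per parallel environment, e.g. 8 envs
batch = np.stack(states)               # shape: (n_envs,) + observation_shape

# One predict call for all environments instead of n_envs separate calls.
q_values = model.predict(batch, batch_size=len(states))
actions = np.argmax(q_values, axis=1)  # greedy action per environment
</code></pre>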
2019-08-22 09:15:25.730000+00:00
2019-08-22 09:15:25.730000+00:00
null
null
57,603,707
<p><a href="https://github.com/keon/deep-q-learning/blob/master/dqn.py#L52" rel="nofollow noreferrer">https://github.com/keon/deep-q-learning/blob/master/dqn.py#L52</a></p> <pre><code>def replay(self, batch_size): minibatch = random.sample(self.memory, batch_size) for state, action, reward, next_state, done in minibatch: target = reward if not done: target = (reward + self.gamma * np.amax(self.model.predict(next_state)[0])) target_f = self.model.predict(state) target_f[0][action] = target self.model.fit(state, target_f, epochs=1, verbose=0) </code></pre> <p>This code doesn't seem to benefit from the GPU, since it trains on data once per action.</p> <pre><code>self.model.fit(state, target_f, epochs=1, verbose=0) </code></pre> <p>How can I change this code to train in parallel and benefit from the GPU?</p>
2019-08-22 07:02:00.713000+00:00
2019-08-22 12:01:29.953000+00:00
2019-08-22 12:01:29.953000+00:00
python|tensorflow|gpu|reinforcement-learning
['https://ai.stackexchange.com/questions/7090/can-lstm-nets-be-speed-up-by-gpu', 'https://arxiv.org/pdf/1707.06347.pdf', 'http://kvfrans.com/variational-autoencoders-explained/', 'https://worldmodels.github.io']
4
41,194,018
<p>This is a hard way of tackling a relatively simple problem that is unashamedly 2-D! </p> <p>If the objects you are looking for are as prominent as those in your figure, create the 2-D map for the data and then threshold it at a series of threshold levels: the highest thresholds pick out the brightest objects. Any continuous projection like Aitoff or Hammer will do, and to eliminate the edge problems, use rotations of the projection. Segmented projections, like Healpix, are good for data storage, but not necessarily ideal for data analysis.</p> <p>If the map has poor signal to noise, so that you are looking for objects in the murk of the noise, then some sophistication is required, maybe even some neural net algorithm. However, you might take a look at the Planck data analysis of Sunyaev-Zeldovich galaxy clusters, the earliest of which is perhaps <a href="https://arxiv.org/abs/1101.2024" rel="nofollow noreferrer">https://arxiv.org/abs/1101.2024</a> (Paper VIII). The subsequent papers refine and add to this.</p> <p>(This should have been a comment but I lack the rep.)</p>
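<p>A minimal sketch of the threshold-and-count step (assuming a 2-D numpy array <code>skymap_2d</code> already projected from the Healpix map, e.g. via healpy's <code>cartview</code>; the 3-sigma threshold is illustrative):</p> <pre><code>import numpy as np
from scipy import ndimage

# Keep only pixels well above the background.
threshold = skymap_2d.mean() + 3 * skymap_2d.std()
mask = skymap_2d &gt; threshold

# Label connected components; num_blobs counts the bright objects.
labels, num_blobs = ndimage.label(mask)
print(num_blobs)
</code></pre> <p>Repeating this over a series of thresholds, as suggested above, gives a crude brightness ordering of the detected objects.</p>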
2016-12-16 23:54:00.577000+00:00
2016-12-16 23:54:00.577000+00:00
null
null
40,681,296
<p>I am using Tensorflow and Keras. Is it possible to achieve proper pattern recognition for images on the surface of a sphere? I am using the <a href="https://healpy.readthedocs.io/en/latest/" rel="nofollow noreferrer">Healpy framework</a> to create the skymaps on which the pattern recognition should work. The problem is that these Healpy skymaps are one-dimensional numpy arrays; thus, a compact sub-pattern may end up scattered over this 1d array. This is actually pretty hard to learn for a basic machine learning algorithm (I am thinking about a convolutional deep network).</p> <p>A specific task in this context would be counting blobs on the surface of a sphere (see attached <a href="https://i.stack.imgur.com/oGQLS.png" rel="nofollow noreferrer">image</a>). For this particular task the correct number would be 8. So I created 10000 skymaps (Healpy settings: nside=16 corresponding to npix=3072), each with a random number of blobs between 0 and 9 (thus 10 possibilities). I tried to solve this with the 1d Healpy array and a simple feed-forward network:</p> <pre><code>model = Sequential() model.add(Dense(npix, input_dim=npix, init='uniform', activation='relu')) model.add(Dropout(0.25)) model.add(Dense(10, init='uniform', activation='softmax')) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(skymaps, number_of_correct_sources, batch_size=100, epochs=10, validation_split=1.-train) </code></pre> <p>However, after training with 10,000 skymaps, the test set yielded an accuracy of only 38%. I guess that this will significantly increase when providing the real arrangement of the Healpy cells (as it appears on the sphere) instead of the 1d array only. In this case one may use a convolutional network (Convolution2d) and operate as for usual image recognition. Any ideas how to map the healpy cells properly into a 2d array, or how to use a convolutional network directly on the sphere?</p> <p>Thanks!</p>
2016-11-18 15:56:56.493000+00:00
2016-12-16 23:54:00.577000+00:00
null
tensorflow|keras|image-recognition|pattern-recognition|healpy
['https://arxiv.org/abs/1101.2024']
1
52,215,330
<p>In practice, even in NLP you see that RNNs and CNNs are often competitive. <a href="https://arxiv.org/pdf/1702.01923.pdf" rel="nofollow noreferrer">Here's</a> a 2017 review paper that shows this in more detail. In theory it might be the case that RNNs can handle the full complexity and sequential nature of language better, but in practice the bigger obstacle is usually properly training the network, and RNNs are finicky.</p> <p>Another problem that might have a chance of working would be the balanced parenthesis problem (either with just parentheses in the strings, or parentheses along with other distractor characters). This requires processing the inputs sequentially and tracking some state, and might be easier to learn with an LSTM than an FFN. </p> <p>Update: Some data that looks sequential might not actually have to be treated sequentially. For example, even if you provide a sequence of numbers to add, since addition is commutative an FFN will do just as well as an RNN. This could also be true of many health problems where the dominating information is not of a sequential nature. Suppose every year a patient's smoking habits are measured. From a behavioral standpoint the trajectory is important, but if you're predicting whether the patient will develop lung cancer, the prediction will be dominated by just the number of years the patient smoked (maybe restricted to the last 10 years for the FFN).</p> <p>So you want to make the toy problem more complex and require taking into account the ordering of the data. Maybe some kind of simulated time series, where you want to predict whether there was a spike in the data, but you don't care about absolute values, just about the relative nature of the spike. </p> <p><strong>Update 2</strong></p> <p>I modified your code to show a case where RNNs perform better. The trick was to use more complex conditional logic that is more naturally modeled in LSTMs than in FFNs. The code is below. For 8 columns we see that the FFN trains in 1 minute and reaches a validation loss of 6.3. The LSTM takes 3x longer to train, but its final validation loss is 6x lower at 1.06.</p> <p>As we increase the number of columns the LSTM has a larger and larger advantage, especially if we add more complicated conditions in. For 16 columns the FFN's validation loss is 19 (and you can more clearly see the training curve, as the model isn't able to instantly fit the data). In comparison, the LSTM takes 11 times longer to train but has a validation loss of 0.31, 30 times smaller than the FFN! You can play around with even larger matrices to see how far this trend extends. </p> <pre><code>from keras import models from keras import layers from keras.layers import Dense, LSTM import numpy as np import matplotlib.pyplot as plt import matplotlib import time matplotlib.use('Agg') np.random.seed(20180908) rows = 20500 cols = 10 # Randomly generate Z Z = 100*np.random.uniform(0.05, 1.0, size = (rows, cols)) larger = np.max(Z[:, :cols//2], axis=1).reshape((rows, 1)) larger2 = np.max(Z[:, cols//2:], axis=1).reshape((rows, 1)) smaller = np.min((larger, larger2), axis=0) # Z is now the max of the first half of the array. Z = np.append(Z, larger, axis=1) # Z is now the min of the max of each half of the array. # Z = np.append(Z, smaller, axis=1) # Combine and shuffle. #Z = np.concatenate((Z_sum, Z_avg), axis = 0) np.random.shuffle(Z) ## Training and validation data.
split = 10000 X_train = Z[:split, :-1] X_valid = Z[split:, :-1] Y_train = Z[:split, -1:].reshape(split, 1) Y_valid = Z[split:, -1:].reshape(rows - split, 1) print(X_train.shape) print(Y_train.shape) print(X_valid.shape) print(Y_valid.shape) print("Now setting up the FNN") ## FNN model. tick = time.time() # Define model. network_fnn = models.Sequential() network_fnn.add(layers.Dense(32, activation = 'relu', input_shape = (X_train.shape[1],))) network_fnn.add(Dense(1, activation = None)) # Compile model. network_fnn.compile(optimizer = 'adam', loss = 'mean_squared_error') # Fit model. history_fnn = network_fnn.fit(X_train, Y_train, epochs = 500, batch_size = 128, verbose = False, validation_data = (X_valid, Y_valid)) tock = time.time() print() print(str('%.2f' % ((tock - tick) / 60)) + ' minutes.') print("Now evaluating the FNN") loss_fnn = history_fnn.history['loss'] val_loss_fnn = history_fnn.history['val_loss'] epochs_fnn = range(1, len(loss_fnn) + 1) print("train loss: ", loss_fnn[-1]) print("validation loss: ", val_loss_fnn[-1]) plt.plot(epochs_fnn, loss_fnn, 'black', label = 'Training Loss') plt.plot(epochs_fnn, val_loss_fnn, 'red', label = 'Validation Loss') plt.title('FNN: Training and Validation Loss') plt.legend() plt.show() plt.scatter(Y_train, network_fnn.predict(X_train), alpha = 0.1) plt.xlabel('Actual') plt.ylabel('Predicted') plt.title('training points') plt.show() plt.scatter(Y_valid, network_fnn.predict(X_valid), alpha = 0.1) plt.xlabel('Actual') plt.ylabel('Predicted') plt.title('valid points') plt.show() print("LSTM") ## LSTM model. X_lstm_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1) X_lstm_valid = X_valid.reshape(X_valid.shape[0], X_valid.shape[1], 1) tick = time.time() # Define model. network_lstm = models.Sequential() network_lstm.add(layers.LSTM(32, activation = 'relu', input_shape = (X_lstm_train.shape[1], 1))) network_lstm.add(layers.Dense(1, activation = None)) # Compile model. network_lstm.compile(optimizer = 'adam', loss = 'mean_squared_error') # Fit model. history_lstm = network_lstm.fit(X_lstm_train, Y_train, epochs = 500, batch_size = 128, verbose = False, validation_data = (X_lstm_valid, Y_valid)) tock = time.time() print() print(str('%.2f' % ((tock - tick) / 60)) + ' minutes.') print("now eval") loss_lstm = history_lstm.history['loss'] val_loss_lstm = history_lstm.history['val_loss'] epochs_lstm = range(1, len(loss_lstm) + 1) print("train loss: ", loss_lstm[-1]) print("validation loss: ", val_loss_lstm[-1]) plt.plot(epochs_lstm, loss_lstm, 'black', label = 'Training Loss') plt.plot(epochs_lstm, val_loss_lstm, 'red', label = 'Validation Loss') plt.title('LSTM: Training and Validation Loss') plt.legend() plt.show() plt.scatter(Y_train, network_lstm.predict(X_lstm_train), alpha = 0.1) plt.xlabel('Actual') plt.ylabel('Predicted') plt.title('training') plt.show() plt.scatter(Y_valid, network_lstm.predict(X_lstm_valid), alpha = 0.1) plt.xlabel('Actual') plt.ylabel('Predicted') plt.title("validation") plt.show() </code></pre>
2018-09-07 04:31:06.333000+00:00
2018-09-17 01:35:55.487000+00:00
2018-09-17 01:35:55.487000+00:00
null
52,020,748
<p>I have been developing feedforward neural networks (FNNs) and recurrent neural networks (RNNs) in Keras with structured data of the shape <code>[instances, time, features]</code>, and the performance of FNNs and RNNs has been the same (except that RNNs require more computation time).</p> <p>I have also simulated tabular data (code below) where I expected a RNN to outperform a FNN because the next value in the series is dependent on the previous value in the series; however, both architectures predict correctly. </p> <p>With NLP data, I have seen RNNs outperform FNNs, but not with tabular data. Generally, when would one expect a RNN to outperform a FNN with tabular data? Specifically, could someone post simulation code with tabular data demonstrating a RNN outperforming a FNN? </p> <p>Thank you! If my simulation code is not ideal for my question, please adapt it or share a more ideal one!</p> <pre><code>from keras import models from keras import layers from keras.layers import Dense, LSTM import numpy as np import matplotlib.pyplot as plt </code></pre> <p>Two features were simulated over 10 time steps, where the value of the second feature is dependent on the value of both features in the prior time step.</p> <pre><code>## Simulate data. np.random.seed(20180825) X = np.random.randint(50, 70, size = (11000, 1)) / 100 X = np.concatenate((X, X), axis = 1) for i in range(10): X_next = np.random.randint(50, 70, size = (11000, 1)) / 100 X = np.concatenate((X, X_next, (0.50 * X[:, -1].reshape(len(X), 1)) + (0.50 * X[:, -2].reshape(len(X), 1))), axis = 1) print(X.shape) ## Training and validation data. split = 10000 Y_train = X[:split, -1:].reshape(split, 1) Y_valid = X[split:, -1:].reshape(len(X) - split, 1) X_train = X[:split, :-2] X_valid = X[split:, :-2] print(X_train.shape) print(Y_train.shape) print(X_valid.shape) print(Y_valid.shape) </code></pre> <p>FNN:</p> <pre><code>## FNN model. # Define model. network_fnn = models.Sequential() network_fnn.add(layers.Dense(64, activation = 'relu', input_shape = (X_train.shape[1],))) network_fnn.add(Dense(1, activation = None)) # Compile model. network_fnn.compile(optimizer = 'adam', loss = 'mean_squared_error') # Fit model. history_fnn = network_fnn.fit(X_train, Y_train, epochs = 10, batch_size = 32, verbose = False, validation_data = (X_valid, Y_valid)) plt.scatter(Y_train, network_fnn.predict(X_train), alpha = 0.1) plt.xlabel('Actual') plt.ylabel('Predicted') plt.show() plt.scatter(Y_valid, network_fnn.predict(X_valid), alpha = 0.1) plt.xlabel('Actual') plt.ylabel('Predicted') plt.show() </code></pre> <p>LSTM:</p> <pre><code>## LSTM model. X_lstm_train = X_train.reshape(X_train.shape[0], X_train.shape[1] // 2, 2) X_lstm_valid = X_valid.reshape(X_valid.shape[0], X_valid.shape[1] // 2, 2) # Define model. network_lstm = models.Sequential() network_lstm.add(layers.LSTM(64, activation = 'relu', input_shape = (X_lstm_train.shape[1], 2))) network_lstm.add(layers.Dense(1, activation = None)) # Compile model. network_lstm.compile(optimizer = 'adam', loss = 'mean_squared_error') # Fit model. history_lstm = network_lstm.fit(X_lstm_train, Y_train, epochs = 10, batch_size = 32, verbose = False, validation_data = (X_lstm_valid, Y_valid)) plt.scatter(Y_train, network_lstm.predict(X_lstm_train), alpha = 0.1) plt.xlabel('Actual') plt.ylabel('Predicted') plt.show() plt.scatter(Y_valid, network_lstm.predict(X_lstm_valid), alpha = 0.1) plt.xlabel('Actual') plt.ylabel('Predicted') plt.show() </code></pre>
2018-08-25 19:50:50.630000+00:00
2018-11-22 05:11:47.377000+00:00
2018-11-22 05:11:47.377000+00:00
python|keras|prediction|recurrent-neural-network
['https://arxiv.org/pdf/1702.01923.pdf']
1
44,171,436
<p>I think that this paper on <a href="https://arxiv.org/pdf/1703.06870.pdf" rel="nofollow noreferrer">Mask R-CNN</a> will also help you. If you want to be able to extract cars, you have to train on a lot of training data with different cars, different angles, and multiple cars in one picture.</p> <p>Here is an implementation using Keras: <a href="https://github.com/broadinstitute/keras-rcnn/tree/master/keras_rcnn" rel="nofollow noreferrer">Mask RCNN</a><a href="https://i.stack.imgur.com/WbthR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WbthR.png" alt="enter image description here"></a></p>
2017-05-25 02:41:58.587000+00:00
2017-05-25 02:41:58.587000+00:00
null
null
44,083,393
<p>How should I proceed with extracting the whole car from an image containing a front, side, or rear view, taking into account shadows, other distant cars in the image, and different types of cars (hatchback, sedan, etc.)? </p> <p>The car in the image will be shown from the front, the rear, or either side. That is fixed. Car models can differ, with varying colors and backgrounds.</p> <p>I have researched edge detection algorithms (<a href="https://en.wikipedia.org/wiki/Sobel_operator" rel="nofollow noreferrer">Sobel</a>, <a href="https://en.wikipedia.org/wiki/Canny_edge_detector" rel="nofollow noreferrer">Canny</a>),<br> and read about the <a href="https://en.wikipedia.org/wiki/Scale-invariant_feature_transform" rel="nofollow noreferrer">Scale-Invariant Feature Transform</a> for feature extraction. </p> <p>Am I going in the right direction? </p> <p>Sample images: </p> <p><a href="https://i.stack.imgur.com/kfwff.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kfwff.jpg" alt="enter image description here"></a> </p> <p><a href="https://i.stack.imgur.com/U1802.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U1802.jpg" alt="enter image description here"></a></p> <p>Here are the results using Canny edge detection: </p> <p><a href="https://i.stack.imgur.com/F4iXk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F4iXk.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/k6WYy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k6WYy.png" alt="enter image description here"></a></p> <p>How do I extract the car after the edge detection?</p>
2017-05-20 07:59:38.333000+00:00
2022-02-11 18:31:07.630000+00:00
2022-02-11 18:31:07.630000+00:00
image-processing|feature-extraction|background-subtraction
['https://arxiv.org/pdf/1703.06870.pdf', 'https://github.com/broadinstitute/keras-rcnn/tree/master/keras_rcnn', 'https://i.stack.imgur.com/WbthR.png']
3