Dataset schema (column name: type, observed range):
a_id: int64 (7.84k to 73.8M)
a_body: string (length 61 to 33k)
a_creation_date: string (length 25 to 32)
a_last_activity_date: string (length 25 to 32)
a_last_edit_date: string (length 25 to 32)
a_tags: float64
q_id: int64 (826 to 73.8M)
q_body: string (length 61 to 29.9k)
q_creation_date: string (length 25 to 32)
q_last_activity_date: string (length 25 to 32)
q_last_edit_date: string (length 25 to 32)
q_tags: string (length 1 to 103)
_arxiv_links: string (length 2 to 6.69k)
_n_arxiv_links: int64 (0 to 94)
68,789,124
<p>The Reformer model was proposed in the paper <a href="https://arxiv.org/abs/2001.04451.pdf" rel="nofollow noreferrer">Reformer: The Efficient Transformer</a> by Nikita Kitaev, Łukasz Kaiser and Anselm Levskaya. The paper describes a method for factorizing the gigantic matrix that results from working with very long sequences. This factorization relies on two assumptions:</p> <ol> <li>the parameter <code>config.axial_pos_embds_dim</code> is set to a tuple <code>(d1, d2)</code> whose sum has to be equal to <code>config.hidden_size</code></li> <li><code>config.axial_pos_shape</code> is set to a tuple <code>(n1s, n2s)</code> whose product has to be equal to <code>config.max_position_embeddings</code> (more on these <a href="https://huggingface.co/transformers/model_doc/reformer.html#axial-positional-encodings" rel="nofollow noreferrer">here</a>)</li> </ol> <p>Finally, your question ;)</p> <ul> <li>I'm almost sure your session crashed due to a RAM overflow</li> <li>you can change any config parameter during model instantiation, as in the <a href="https://huggingface.co/transformers/model_doc/reformer.html#reformerconfig" rel="nofollow noreferrer">official documentation</a> and the sketch below</li> </ul>
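<p>A minimal sketch of overriding config parameters at instantiation (the values below are only illustrative assumptions, not part of the original answer, and must still satisfy the two constraints above):</p> <pre><code>from transformers import ReformerConfig, ReformerModelWithLMHead

# Hypothetical override: keep the checkpoint's embedding dims and hidden size,
# only reshape the axial position grid so that the product of axial_pos_shape
# equals the sequence length you pad to (here 16 * 128 = 2048).
config = ReformerConfig.from_pretrained(
    'google/reformer-enwik8',
    axial_pos_shape=(16, 128),
    max_position_embeddings=16 * 128,
)

model = ReformerModelWithLMHead.from_pretrained(
    'google/reformer-enwik8', config=config
)
</code></pre> <p>Note that, as the question below shows, a checkpoint whose stored position-embedding weights do not match the new shapes can still raise size-mismatch errors when loading, so padding to 65536 or consistently resizing the position-embedding weights may still be required.</p>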
2021-08-15 06:11:34.403000+00:00
2021-08-15 06:11:34.403000+00:00
null
null
68,742,863
<p>I'm trying to fine-tune the ReformerModelWithLMHead (google/reformer-enwik8) for NER. I used the padding sequence length same as in the encode method (max_length = max([len(string) for string in list_of_strings])) along with attention_masks. And I got this error:</p> <p><strong>ValueError: If training, make sure that config.axial_pos_shape factors: (128, 512) multiply to sequence length. Got prod((128, 512)) != sequence_length: 2248. You might want to consider padding your sequence length to 65536 or changing config.axial_pos_shape.</strong></p> <ul> <li>When I changed the sequence length to 65536, my colab session crashed by getting all the inputs of 65536 lengths.</li> <li>According to the second option(changing config.axial_pos_shape), I cannot change it.</li> </ul> <p>I would like to know, Is there any chance to change config.axial_pos_shape while fine-tuning the model? Or I'm missing something in encoding the input strings for reformer-enwik8?</p> <p>Thanks!</p> <p><strong>Question Update: I have tried the following methods:</strong></p> <ol> <li>By giving paramteres at the time of model instantiation:</li> </ol> <blockquote> <p>model = transformers.ReformerModelWithLMHead.from_pretrained(&quot;google/reformer-enwik8&quot;, num_labels=9, max_position_embeddings=1024, axial_pos_shape=[16,64], axial_pos_embds_dim=[32,96],hidden_size=128)</p> </blockquote> <p>It gives me the following error:</p> <blockquote> <p>RuntimeError: Error(s) in loading state_dict for ReformerModelWithLMHead: size mismatch for reformer.embeddings.word_embeddings.weight: copying a param with shape torch.Size([258, 1024]) from checkpoint, the shape in current model is torch.Size([258, 128]). size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([128, 1, 256]) from checkpoint, the shape in current model is torch.Size([16, 1, 32]).</p> </blockquote> <p>This is quite a long error.</p> <ol start="2"> <li>Then I tried this code to update the config:</li> </ol> <blockquote> <p>model1 = transformers.ReformerModelWithLMHead.from_pretrained('google/reformer-enwik8', num_labels = 9)</p> </blockquote> <h4>Reshape Axial Position Embeddings layer to match desired max seq length</h4> <pre><code>model1.reformer.embeddings.position_embeddings.weights[1] = torch.nn.Parameter(model1.reformer.embeddings.position_embeddings.weights[1][0][:128]) </code></pre> <h4>Update the config file to match custom max seq length</h4> <pre><code>model1.config.axial_pos_shape = 16,128 model1.config.max_position_embeddings = 16*128 #2048 model1.config.axial_pos_embds_dim= 32,96 model1.config.hidden_size = 128 output_model_path = &quot;model&quot; model1.save_pretrained(output_model_path) </code></pre> <p>By this implementation, I am getting this error:</p> <blockquote> <p>RuntimeError: The expanded size of the tensor (512) must match the existing size (128) at non-singleton dimension 2. Target sizes: [1, 128, 512, 768]. Tensor sizes: [128, 768]</p> </blockquote> <p>Because updated size/shape doesn't match with the original config parameters of pretrained model. 
The original parameters are: axial_pos_shape = 128,512 max_position_embeddings = 128*512 #65536 axial_pos_embds_dim= 256,768 hidden_size = 1024</p> <p>Is it the right way I'm changing the config parameters or do I have to do something else?</p> <p>Is there any example where ReformerModelWithLMHead('google/reformer-enwik8') model fine-tuned.</p> <p>My main code implementation is as follow:</p> <pre><code>class REFORMER(torch.nn.Module): def __init__(self): super(REFORMER, self).__init__() self.l1 = transformers.ReformerModelWithLMHead.from_pretrained(&quot;google/reformer-enwik8&quot;, num_labels=9) def forward(self, input_ids, attention_masks, labels): output_1= self.l1(input_ids, attention_masks, labels = labels) return output_1 model = REFORMER() def train(epoch): model.train() for _, data in enumerate(training_loader,0): ids = data['input_ids'][0] # input_ids from encode method of the model https://huggingface.co/google/reformer-enwik8#:~:text=import%20torch%0A%0A%23%20Encoding-,def%20encode,-(list_of_strings%2C%20pad_token_id%3D0 input_shape = ids.size() targets = data['tags'] print(&quot;tags: &quot;, targets, targets.size()) least_common_mult_chunk_length = 65536 padding_length = least_common_mult_chunk_length - input_shape[-1] % least_common_mult_chunk_length #pad input input_ids, inputs_embeds, attention_mask, position_ids, input_shape = _pad_to_mult_of_chunk_length(self=model.l1, input_ids=ids, inputs_embeds=None, attention_mask=None, position_ids=None, input_shape=input_shape, padding_length=padding_length, padded_seq_length=None, device=None, ) outputs = model(input_ids, attention_mask, labels=targets) # sending inputs to the forward method print(outputs) loss = outputs.loss logits = outputs.logits if _%500==0: print(f'Epoch: {epoch}, Loss: {loss}') for epoch in range(1): train(epoch) </code></pre>
2021-08-11 13:20:44.563000+00:00
2021-08-22 21:36:37.800000+00:00
2021-08-18 15:04:04.067000+00:00
python|nlp|pytorch|huggingface-transformers|named-entity-recognition
['https://arxiv.org/abs/2001.04451.pdf', 'https://huggingface.co/transformers/model_doc/reformer.html#axial-positional-encodings', 'https://huggingface.co/transformers/model_doc/reformer.html#reformerconfig']
3
44,336,485
<p>There is an end-to-end training method combining a neural network with a CRF that does exactly what you want it to do. The paper is this one: <a href="https://arxiv.org/pdf/1611.10229.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.10229.pdf</a>. </p>
2017-06-02 19:53:21.243000+00:00
2017-06-02 19:53:21.243000+00:00
null
null
44,334,164
<p>I am working with image segmentation using saliency maps via gradient ascent. Here is an image of the process: <a href="https://imgur.com/a/h8vBZ" rel="nofollow noreferrer">http://imgur.com/a/h8vBZ</a></p> <p>I have a trained model that can accurately predict my classes. This model is then used to compute a gradient of an input picture with gradient ascent wrt loss. To me, the gradient produced here is a representation of what the model is focusing on in prediction. </p> <p>I run a quantile filter to pick out the gradient values (pixels) that are most related to the class, then produce a binary mask from this. This works well, but I find that the map could be more accurate and tighter around the class within the image. I read about Conditional Random Fields as a mechanism to generate more accurate and smooth segmentation results and am attempting to implement this, but feel as if I don't have a full understanding of the gradient produced here. </p> <p>My question is: what exactly does the gradient represent in this case? My guess is that these values are essentially pixel level predictions/pixel labels. Is this equivalent to unary potentials?</p>
2017-06-02 17:10:06.360000+00:00
2017-08-09 17:04:55.790000+00:00
null
deep-learning|conv-neural-network|gradient-descent|crf
['https://arxiv.org/pdf/1611.10229.pdf']
1
58,645,209
<p>QATM: Quality-Aware Template Matching For Deep Learning might be what you are looking for.</p> <p>Original paper: <a href="https://arxiv.org/abs/1903.07254" rel="nofollow noreferrer">https://arxiv.org/abs/1903.07254</a></p> <p>And the following GitHub repository contains an example with an electrical scheme: <a href="https://github.com/kamata1729/QATM_pytorch" rel="nofollow noreferrer">https://github.com/kamata1729/QATM_pytorch</a></p> <p><a href="https://i.stack.imgur.com/1Ucvz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Ucvz.png" alt="enter image description here"></a></p>
2019-10-31 14:12:46.980000+00:00
2019-10-31 14:12:46.980000+00:00
null
null
58,643,717
<p>I am trying to detect electrical symbols in an electrical scheme. I think two ways could be used here:</p> <ul> <li>the classical way with OpenCV: I tried to recognise the shapes with OpenCV and Python, but some symbols are too complex</li> <li>the deep learning way: I tried Mask-RCNN with a handmade dataset of symbols, but nothing was really successful</li> </ul> <p>Here is a really simple example of what I would like to do: <a href="https://i.stack.imgur.com/kzpmt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kzpmt.png" alt="enter image description here"></a> </p> <p>I think it could be easy to make a dataset of symbols, but all symbols would have the same form and the context of the image would not be represented.</p> <p>How do you think I could handle this problem?</p>
2019-10-31 12:48:43.460000+00:00
2019-10-31 14:12:46.980000+00:00
null
python|opencv|computer-vision|symbols
['https://arxiv.org/abs/1903.07254', 'https://github.com/kamata1729/QATM_pytorch', 'https://i.stack.imgur.com/1Ucvz.png']
3
32,858,918
<p>You may want to use <a href="http://php.net/manual/en/book.curl.php" rel="nofollow">curl</a> and check for a <code>200</code> <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html" rel="nofollow">http status code</a> , i.e.:</p> <pre><code>&lt;?php $url = 'http://arxiv.org/pdf/1207.41021.pdf'; $ch = curl_init($url); curl_setopt($ch, CURLOPT_HEADER, true); // we want headers curl_setopt($ch, CURLOPT_NOBODY, true); // we don't need body curl_setopt($ch, CURLOPT_RETURNTRANSFER,1); curl_setopt($ch, CURLOPT_FOLLOWLOCATION,1); // we follow redirections curl_setopt($ch, CURLOPT_TIMEOUT,10); $output = curl_exec($ch); $httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE); curl_close($ch); if($httpcode == "200"){ echo "file exist"; }else{ echo "doesn't exist"; } </code></pre> <hr> <p>Both pdf files return <code>403 Forbidden</code></p> <blockquote> <p>The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated. If the request method was not HEAD and the server wishes to make public why the request has not been fulfilled, it SHOULD describe the reason for the refusal in the entity. If the server does not wish to make this information available to the client, the status code 404 (Not Found) can be used instead.</p> </blockquote>
2015-09-30 06:05:16.837000+00:00
2015-09-30 06:11:57.563000+00:00
2015-09-30 06:11:57.563000+00:00
null
32,858,687
<p>I am trying to check if pdf file exists in arXiv. There are two example</p> <blockquote> <p><a href="http://arxiv.org/pdf/1207.41021.pdf" rel="nofollow noreferrer">arxiv.org/pdf/1207.4102.pdf</a> </p> <p><a href="http://arxiv.org/pdf/1207.41021.pdf" rel="nofollow noreferrer">arxiv.org/pdf/1207.41021.pdf</a></p> </blockquote> <p>The first is a pdf file and the second is not and returns an <a href="http://arxiv.org/pdf/1207.41021.pdf" rel="nofollow noreferrer">error page</a>.</p> <p>Is there a way to check whether a url is pdf or not. I tried the answers in <a href="https://stackoverflow.com/questions/3646914/how-do-i-check-if-file-exists-in-jquery-or-javascript">How do I check if file exists in jQuery or JavaScript?</a> however none of them work and they return true (i.e. file exists) for both urls. Is there a way to find which url is pdf file in JavaScript/jQuery or even PHP? </p> <p>Can this be solved using <a href="https://stackoverflow.com/questions/tagged/pdf.js">pdf.js</a>?</p>
2015-09-30 05:49:00.937000+00:00
2015-09-30 06:11:57.563000+00:00
2017-05-23 12:03:50.390000+00:00
javascript|php|jquery|pdf
['http://php.net/manual/en/book.curl.php', 'http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html']
2
32,858,912
<blockquote> <p>You can try this code to check whether a file exists on a remote server by URL</p> </blockquote> <pre><code> $filename= 'http://arxiv.org/pdf/1207.4102.pdf'; $file_headers = @get_headers($filename); if($file_headers[0] == 'HTTP/1.0 404 Not Found'){ echo "The file $filename does not exist"; } else if ($file_headers[0] == 'HTTP/1.0 302 Found' &amp;&amp; $file_headers[7] == 'HTTP/1.0 404 Not Found'){ echo "The file $filename does not exist, and I got redirected to a custom 404 page.."; } else { echo "The file $filename exists"; } </code></pre>
2015-09-30 06:04:36.377000+00:00
2015-09-30 06:04:36.377000+00:00
null
null
32,858,687
<p>I am trying to check if pdf file exists in arXiv. There are two example</p> <blockquote> <p><a href="http://arxiv.org/pdf/1207.41021.pdf" rel="nofollow noreferrer">arxiv.org/pdf/1207.4102.pdf</a> </p> <p><a href="http://arxiv.org/pdf/1207.41021.pdf" rel="nofollow noreferrer">arxiv.org/pdf/1207.41021.pdf</a></p> </blockquote> <p>The first is a pdf file and the second is not and returns an <a href="http://arxiv.org/pdf/1207.41021.pdf" rel="nofollow noreferrer">error page</a>.</p> <p>Is there a way to check whether a url is pdf or not. I tried the answers in <a href="https://stackoverflow.com/questions/3646914/how-do-i-check-if-file-exists-in-jquery-or-javascript">How do I check if file exists in jQuery or JavaScript?</a> however none of them work and they return true (i.e. file exists) for both urls. Is there a way to find which url is pdf file in JavaScript/jQuery or even PHP? </p> <p>Can this be solved using <a href="https://stackoverflow.com/questions/tagged/pdf.js">pdf.js</a>?</p>
2015-09-30 05:49:00.937000+00:00
2015-09-30 06:11:57.563000+00:00
2017-05-23 12:03:50.390000+00:00
javascript|php|jquery|pdf
[]
0
32,858,973
<p>It returns the correct result. </p> <pre><code>function getHTTPCode($url) { $ch = curl_init($url); curl_setopt($ch, CURLOPT_HEADER, true); curl_setopt($ch, CURLOPT_NOBODY, true); curl_setopt($ch, CURLOPT_RETURNTRANSFER,1); curl_setopt($ch, CURLOPT_TIMEOUT,10); curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)'); $output = curl_exec($ch); $httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE); curl_close($ch); return $httpcode; } </code></pre> <pre><code>$url = 'http://arxiv.org/pdf/1207.41021.pdf'; if(getHTTPCode($url)==200) { echo 'found'; } else { echo 'not found'; } </code></pre>
2015-09-30 06:09:21.077000+00:00
2015-09-30 06:09:21.077000+00:00
null
null
32,858,687
<p>I am trying to check if pdf file exists in arXiv. There are two example</p> <blockquote> <p><a href="http://arxiv.org/pdf/1207.41021.pdf" rel="nofollow noreferrer">arxiv.org/pdf/1207.4102.pdf</a> </p> <p><a href="http://arxiv.org/pdf/1207.41021.pdf" rel="nofollow noreferrer">arxiv.org/pdf/1207.41021.pdf</a></p> </blockquote> <p>The first is a pdf file and the second is not and returns an <a href="http://arxiv.org/pdf/1207.41021.pdf" rel="nofollow noreferrer">error page</a>.</p> <p>Is there a way to check whether a url is pdf or not. I tried the answers in <a href="https://stackoverflow.com/questions/3646914/how-do-i-check-if-file-exists-in-jquery-or-javascript">How do I check if file exists in jQuery or JavaScript?</a> however none of them work and they return true (i.e. file exists) for both urls. Is there a way to find which url is pdf file in JavaScript/jQuery or even PHP? </p> <p>Can this be solved using <a href="https://stackoverflow.com/questions/tagged/pdf.js">pdf.js</a>?</p>
2015-09-30 05:49:00.937000+00:00
2015-09-30 06:11:57.563000+00:00
2017-05-23 12:03:50.390000+00:00
javascript|php|jquery|pdf
[]
0
51,193,491
<p>Object detection is mostly driven by the successes of detection algorithms in world-famous object detection benchmarks like PASCAL-VOC and MS-COCO, which are object-centric datasets where most objects are vertical (potted plants, humans, horses, etc.), so data augmentation with left-right flips is often sufficient (for all we know, data augmentation with rotated images like upside-down flips could even hurt detection performance).<br> Every year the entire community adopts the base algorithmic structure of the winning solution and builds on it (I am exaggerating a bit to prove a point, but not by much). </p> <p>Interestingly, other less widely known topics like oriented text detection and oriented vehicle detection in aerial imagery both need rotation-invariant features and rotation-equivariant detection pipelines (like in both articles from Cheng you mentioned). </p> <p>If you want to find literature and code in this area you need to dive into these two domains. I can already give you a few pointers, like the <a href="https://arxiv.org/pdf/1711.10398.pdf" rel="noreferrer">DOTA</a> challenge for aerial imagery or the <a href="https://arxiv.org/abs/1506.03184" rel="noreferrer">ICDAR challenges</a> for oriented text detection. </p> <p>As @Marcin Mozejko said, CNNs are by nature translation invariant, not rotation invariant. It is an open problem how to incorporate perfect rotation invariance; the few articles that deal with it have yet to become standards, even though <a href="https://arxiv.org/pdf/1604.06720.pdf" rel="noreferrer">some</a> <a href="https://arxiv.org/abs/1711.07289" rel="noreferrer">of them</a> seem promising. My personal favorite for detection is the modification of Faster R-CNN recently proposed by <a href="https://arxiv.org/abs/1703.01086" rel="noreferrer">Ma</a>. </p> <p>I hope that this direction of research will be investigated more and more once people get fed up with MS-COCO and VOC.</p> <p>What you could try is to take a state-of-the-art detector trained on MS-COCO, like <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="noreferrer">Faster R-CNN with NASNet from the TF detection API</a>, and see how it performs when you rotate the test image; in my opinion it would be far from rotation invariant (a quick sketch of such a check follows below).</p>
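<p>A hedged sketch of that sanity check (<code>detect</code> is a hypothetical wrapper around whatever trained detector you use; it is not defined here):</p> <pre><code>from PIL import Image

image = Image.open('test.jpg')

# Run the same detector on rotated copies of the test image and compare
# how many detections survive each rotation.
for angle in (0, 90, 180, 270):
    rotated = image.rotate(angle, expand=True)
    boxes, scores, classes = detect(rotated)  # hypothetical detector call
    print(angle, len(boxes))
</code></pre>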
2018-07-05 14:10:45.827000+00:00
2018-07-17 14:59:12.837000+00:00
2018-07-17 14:59:12.837000+00:00
null
41,069,903
<p>As known, modern most popular CNN (convolutional neural network): VGG/ResNet (FasterRCNN), SSD, Yolo, Yolo v2, DenseBox, DetectNet - are not rotate invariant: <a href="https://stackoverflow.com/questions/40952163/are-modern-cnn-convolutional-neural-network-as-detectnet-rotate-invariant">Are modern CNN (convolutional neural network) as DetectNet rotate invariant?</a></p> <p>Also known, that there are several neural networks with rotate-invariance object detection:</p> <ol> <li><p>Rotation-Invariant Neoperceptron 2006 (<a href="https://www.researchgate.net/profile/Beat_Fasel/publication/224649475_Rotation-Invariant_Neoperceptron/links/02e7e53859a00a588b000000.pdf" rel="noreferrer">PDF</a>): <a href="https://www.researchgate.net/publication/224649475_Rotation-Invariant_Neoperceptron" rel="noreferrer">https://www.researchgate.net/publication/224649475_Rotation-Invariant_Neoperceptron</a></p></li> <li><p>Learning rotation invariant convolutional filters for texture classification 2016 (<a href="https://arxiv.org/pdf/1604.06720v2" rel="noreferrer">PDF</a>): <a href="https://arxiv.org/abs/1604.06720" rel="noreferrer">https://arxiv.org/abs/1604.06720</a></p></li> <li><p>RIFD-CNN: Rotation-Invariant and Fisher Discriminative Convolutional Neural Networks for Object Detection 2016 (<a href="http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Cheng_RIFD-CNN_Rotation-Invariant_and_CVPR_2016_paper.pdf" rel="noreferrer">PDF</a>): <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Cheng_RIFD-CNN_Rotation-Invariant_and_CVPR_2016_paper.html" rel="noreferrer">http://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Cheng_RIFD-CNN_Rotation-Invariant_and_CVPR_2016_paper.html</a></p></li> <li><p>Encoded Invariance in Convolutional Neural Networks 2014 (<a href="http://theorycenter.cs.uchicago.edu/REU/2014/final-papers/sauder.pdf" rel="noreferrer">PDF</a>)</p></li> <li><p>Rotation-invariant convolutional neural networks for galaxy morphology prediction (<a href="https://arxiv.org/pdf/1503.07077v1" rel="noreferrer">PDF</a>): <a href="https://arxiv.org/abs/1503.07077" rel="noreferrer">https://arxiv.org/abs/1503.07077</a></p></li> <li><p>Learning Rotation-Invariant Convolutional Neural Networks for Object Detection in VHR Optical Remote Sensing Images 2016: <a href="http://ieeexplore.ieee.org/document/7560644/" rel="noreferrer">http://ieeexplore.ieee.org/document/7560644/</a></p></li> </ol> <p>We know, that in such image-detection competitions as: IMAGE-NET, MSCOCO, PASCAL VOC - used networks ensembles (simultaneously some neural networks). Or networks ensembles in single net such as ResNet (<a href="https://arxiv.org/abs/1605.06431" rel="noreferrer">Residual Networks Behave Like Ensembles of Relatively Shallow Networks</a>)</p> <p>But are used rotation invariant network ensembles in winners like as MSRA, and if not, then why? Why in ensemble the additional rotation-invariant network does not add accuracy to detect certain objects such as aircraft objects - which images is done at a different angles of rotation? 
</p> <p>It can be:</p> <ul> <li><p>aircraft objects which are photographed from the ground <a href="https://i.stack.imgur.com/s9C7w.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/s9C7w.jpg" alt="enter image description here"></a></p></li> <li><p>or ground objects which are photographed from the air <a href="https://i.stack.imgur.com/ICqUL.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/ICqUL.jpg" alt="enter image description here"></a></p></li> </ul> <p>Why are rotation-invariant neural networks not used by the winners of the popular object-detection competitions?</p>
2016-12-09 22:31:21.297000+00:00
2021-09-07 18:51:06.440000+00:00
2017-05-23 12:03:05.750000+00:00
machine-learning|computer-vision|neural-network|deep-learning|conv-neural-network
['https://arxiv.org/pdf/1711.10398.pdf', 'https://arxiv.org/abs/1506.03184', 'https://arxiv.org/pdf/1604.06720.pdf', 'https://arxiv.org/abs/1711.07289', 'https://arxiv.org/abs/1703.01086', 'https://github.com/tensorflow/models/tree/master/research/object_detection']
6
20,376,146
<h1>Donald Knuth</h1> <p>sort of settled this debate in 1992 with the following:</p> <p><img src="https://i.stack.imgur.com/KARrP.png" alt="enter image description here"></p> <p>He went into even more detail in his paper <a href="http://arxiv.org/pdf/math/9205211v1.pdf" rel="nofollow noreferrer" title="Two Notes on Notation">Two Notes on Notation</a>.</p> <p>Basically, while we don't have 1 as the limit of <code>f(x)^g(x)</code> for all functions <code>f(x)</code> and <code>g(x)</code> tending to 0, it still makes combinatorics so much simpler to define <code>0^0=1</code>, and then just make special cases in the few places where you need to consider functions such as <code>0^x</code>, which are weird anyway. After all <code>x^0</code> comes up a lot more often.</p> <p>Some of the best discussions I know of this topic (other than the Knuth paper) are:</p> <ul> <li><a href="http://mathforum.org/dr.math/faq/faq.0.to.0.power.html" rel="nofollow noreferrer">http://mathforum.org/dr.math/faq/faq.0.to.0.power.html</a></li> <li><a href="http://www.quora.com/Mathematics/What-is-math-0-0-math?share=1" rel="nofollow noreferrer">http://www.quora.com/Mathematics/What-is-math-0-0-math?share=1</a></li> <li><a href="https://math.stackexchange.com/questions/475337/the-binomial-formula-and-the-value-of-00">https://math.stackexchange.com/questions/475337/the-binomial-formula-and-the-value-of-00</a></li> </ul>
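<p>A small illustration of the combinatorial convenience (this example is mine, not Knuth's): with <code>0^0 = 1</code>, the generic formula for evaluating a polynomial as a sum of <code>c[k] * x^k</code> needs no special case at <code>x = 0</code>. Python, like <code>Math.pow</code> and C++ <code>pow</code>, follows this convention:</p> <pre><code># p(x) = 3 + 2x + x^2, written as the generic sum of c[k] * x**k
coeffs = [3, 2, 1]

def poly(x):
    return sum(c * x**k for k, c in enumerate(coeffs))

print(poly(0))  # 3, which only works because 0**0 evaluates to 1
print(0 ** 0)   # 1 in Python as well
</code></pre>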
2013-12-04 13:03:12.007000+00:00
2013-12-06 18:00:37.793000+00:00
2017-04-13 12:19:15.457000+00:00
null
19,955,968
<p>We all know that 0<sup>0</sup> is indeterminate.</p> <p><strong>But</strong>, <em>javascript</em> says that:</p> <pre><code>Math.pow(0, 0) === 1 // true </code></pre> <p>and <em>C++</em> says the same thing:</p> <pre><code>pow(0, 0) == 1 // true </code></pre> <p>WHY?</p> <p>I know that:</p> <pre><code>&gt;Math.pow(0.001, 0.001) 0.9931160484209338 </code></pre> <p>But why does <code>Math.pow(0, 0)</code> throw no errors? Or maybe a <code>NaN</code> would be better than <code>1</code>.</p>
2013-11-13 14:12:29.137000+00:00
2015-12-07 16:58:18.287000+00:00
2015-12-07 16:58:18.287000+00:00
javascript|c++|language-agnostic|pow
['http://arxiv.org/pdf/math/9205211v1.pdf', 'http://mathforum.org/dr.math/faq/faq.0.to.0.power.html', 'http://www.quora.com/Mathematics/What-is-math-0-0-math?share=1', 'https://math.stackexchange.com/questions/475337/the-binomial-formula-and-the-value-of-00']
4
55,772,610
<p>Adam is famous for working out of the box with its <em>default</em> paremeters, which, in almost all frameworks, include a learning rate of 0.001 (see the default values in <a href="https://keras.io/optimizers/" rel="nofollow noreferrer">Keras</a>, <a href="https://pytorch.org/docs/stable/_modules/torch/optim/adam.html" rel="nofollow noreferrer">PyTorch</a>, and <a href="https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer" rel="nofollow noreferrer">Tensorflow</a>), which is indeed the value suggested in the <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">Adam paper</a>.</p> <p>So, I would suggest changing to </p> <pre><code>optimizer = optim.Adam(model.parameters(), lr=0.001) </code></pre> <p>or simply</p> <pre><code>optimizer = optim.Adam(model.parameters()) </code></pre> <p>in order to leave <code>lr</code> in its default value (although I would say I am surprised, as MNIST is famous nowadays for working practically with whatever you may throw into it).</p>
2019-04-20 10:31:51.483000+00:00
2019-04-20 14:29:36.383000+00:00
2019-04-20 14:29:36.383000+00:00
null
55,770,783
<p>I was going through a basic PyTorch MNIST example <a href="https://github.com/pytorch/examples/blob/master/mnist/main.py" rel="nofollow noreferrer">here</a> and noticed that when I changed the optimizer from SGD to Adam the model did not converge. Specifically, I changed line 106 from</p> <pre><code>optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum) </code></pre> <p>to</p> <pre><code>optimizer = optim.Adam(model.parameters(), lr=args.lr) </code></pre> <p>I thought this would have no effect on the model. With SGD the loss quickly dropped to low values after about a quarter of an epoch. However with Adam, the loss did not drop at all even after 10 epochs. I'm curious to why this is happening; it seems to me these should have nearly identical performance.</p> <p>I ran this on Win10/Py3.6/PyTorch1.01/CUDA9</p> <p>And to save you a tiny bit of code digging, here are the hyperparams:</p> <ul> <li>lr=0.01</li> <li>momentum=0.5</li> <li>batch_size=64</li> </ul>
2019-04-20 06:08:40.633000+00:00
2019-09-27 18:07:10.703000+00:00
2019-09-27 18:07:10.703000+00:00
python|machine-learning|pytorch|adam|sgd
['https://keras.io/optimizers/', 'https://pytorch.org/docs/stable/_modules/torch/optim/adam.html', 'https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer', 'https://arxiv.org/abs/1412.6980']
4
31,890,460
<p>The Modularity of software is a composite metric. Thus, first you need to define the modularity and participating metrics. The paper you mentioned (<a href="http://arxiv.org/ftp/arxiv/papers/1309/1309.5689.pdf" rel="nofollow">http://arxiv.org/ftp/arxiv/papers/1309/1309.5689.pdf</a>) offers one of the ways. An alternative way is to detect modularity design smells and form a modularity metric formula (that uses the number of detected instances of modularity design smells). The modularity design smells are: Insufficient Modularization, Broken Modularization, Cyclically-dependent Modularization, and Hub-like Modularization (refer "Refactoring for software design smells"). </p>
2015-08-08 06:44:58.160000+00:00
2015-08-10 01:25:14.910000+00:00
2015-08-10 01:25:14.910000+00:00
null
27,186,843
<p>Is there a way to calculate modularity metric from the preexisting metrics in SonarQube? I want to calculate modularity so that I can use it for my technical debt calculation.</p>
2014-11-28 10:36:58.220000+00:00
2015-08-10 01:25:14.910000+00:00
null
sonarqube|modularity|technical-debt
['http://arxiv.org/ftp/arxiv/papers/1309/1309.5689.pdf']
1
55,282,639
<p>Autoencoders have at least one hidden fully connected layer which "is usually referred to as code, latent variables, or latent representation" <a href="https://en.wikipedia.org/wiki/Autoencoder" rel="nofollow noreferrer">Wikipedia</a>. Actually, autoencoders do not have to be convolutional networks at all - <a href="https://en.wikipedia.org/wiki/Autoencoder" rel="nofollow noreferrer">Wikipedia</a> only states that they are feed-forward non-recurrent networks.</p> <p>On the other hand, Fully Convolutional Networks do not have any fully connected layers. See <a href="https://en.wikipedia.org/wiki/U-Net" rel="nofollow noreferrer">Wikipedia</a> and <a href="https://arxiv.org/pdf/1606.06650.pdf" rel="nofollow noreferrer">this paper by Cicek et al.</a> for more details (the paper has a nice visualization of the network).</p> <p>So even when both an encoder and a decoder in an autoencoder network are CNNs, there is at least one fully-connected hidden layer in between them. Thus, autoencoder networks are not FCNs.</p>
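<p>A minimal Keras sketch of the contrast described above (layer sizes are illustrative assumptions only, not taken from the cited sources):</p> <pre><code>from tensorflow.keras import layers, models

# Autoencoder: convolutional encoder, a fully connected latent 'code' layer,
# then a decoder back to the input resolution.
autoencoder = models.Sequential([
    layers.Conv2D(8, 3, strides=2, padding='same', activation='relu',
                  input_shape=(32, 32, 1)),
    layers.Flatten(),
    layers.Dense(16, activation='relu', name='code'),   # the latent representation
    layers.Dense(16 * 16 * 8, activation='relu'),
    layers.Reshape((16, 16, 8)),
    layers.Conv2DTranspose(1, 3, strides=2, padding='same', activation='sigmoid'),
])

# Fully convolutional network: no Dense layers anywhere; a 1x1 convolution
# plays the role a fully connected layer would otherwise play.
fcn = models.Sequential([
    layers.Conv2D(8, 3, padding='same', activation='relu', input_shape=(32, 32, 1)),
    layers.Conv2D(8, 3, padding='same', activation='relu'),
    layers.Conv2D(1, 1, activation='sigmoid'),
])
</code></pre>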
2019-03-21 14:24:41.943000+00:00
2019-03-21 14:24:41.943000+00:00
null
null
47,632,208
<p>what is the main difference between <strong>autoencoder networks</strong> and <strong>fully convolutional network</strong>? Please help me understand the difference between architecture of these two networks?</p>
2017-12-04 11:21:01.070000+00:00
2019-03-21 14:24:41.943000+00:00
2017-12-04 14:12:09.953000+00:00
image|autoencoder
['https://en.wikipedia.org/wiki/Autoencoder', 'https://en.wikipedia.org/wiki/Autoencoder', 'https://en.wikipedia.org/wiki/U-Net', 'https://arxiv.org/pdf/1606.06650.pdf']
4
51,865,441
<p>Pablo provided a great explanation. My research is actually in reinforcement learning vs model predictive control, and MPC is a control approach based on trajectory optimization. Reinforcement learning is just a data-driven optimization algorithm and can be used for your above examples. Here is a paper for the <a href="https://arxiv.org/pdf/1611.09940.pdf" rel="noreferrer">traveling salesman problem</a> using RL.</p> <p>The biggest differences are really these:</p> <p><strong>Reinforcement Learning Method</strong></p> <ul> <li>Does not need a model, but a "playground" to try different actions in the environment and learn from it (i.e. a data-driven approach)</li> <li>Does NOT guarantee optimality in complex problems due to the nonlinear mapping of states to actions. In multiple input multiple output problems, RL uses nonlinear function approximators to solve tasks, but there is no guaranteed convergence the moment these are used</li> <li>Great for problems where it is hard or impossible to derive a model.</li> <li>Extremely difficult to train, but cheap online calculation</li> <li>Inherent adaptive nature. If the conditions of an environment change, RL can usually adapt by learning the new environment.</li> <li>Worst of all, decisions made by RL are uninterpretable. Advanced RL algorithms are made up of multiple neural networks; therefore, if our RL car driver drives off a cliff, it is nearly impossible to identify just why it would do such a thing.</li> </ul> <p><strong>Optimization Approaches</strong></p> <ul> <li><p>Performance is dependent on the model. If the model is bad, the optimization will be terrible. </p></li> <li><p>Because performance is based on the model, identifying a "perfect" model is extremely expensive. In the energy industry, such a model for one plant costs millions, especially because the operating conditions change over time.</p></li> <li><p>GUARANTEES optimality. There are many published papers that go into the proofs showing that these approaches guarantee robustness, feasibility, and stability.</p></li> <li><p>Easy to interpret. Controls and decisions from an optimization approach are easy to interpret because you can go into the model and work out why a certain action was performed. In the RL case, this is usually a neural network and completely a black box. Therefore, for safety-sensitive problems, RL is currently RARELY used.</p></li> <li><p>Very expensive online calculation depending on the prediction horizon, because at each time step we have to optimize the trajectory given the current states.</p></li> </ul>
2018-08-15 19:47:01.237000+00:00
2018-08-15 19:47:01.237000+00:00
null
null
51,787,339
<p>I was wondering when one would decide to resort to Reinforcement Learning to problems that have been previously tackled by mathematical optimisation methods - think the Traveling Salesman Problem or Job Scheduling or Taxi Sharing Problems.</p> <p>Since Reinforcement Learning aims at minimising/maximising a certain cost/reward function in a similar way as Operational Research attempts at optimising the result of a certain cost function, I would assume that problems that could be solved by one of the two parties may be tackled by the other. However, is this the case? Are there tradeoffs between the two? I haven't really seen too much research done on RL regarding the problems stated above but I may be mistaken. </p> <p>If anyone has any insights at all, they would be highly appreciated!!</p>
2018-08-10 13:11:02.687000+00:00
2018-08-17 08:24:09.387000+00:00
2018-08-17 08:24:09.387000+00:00
optimization|mathematical-optimization|reinforcement-learning|operations-research
['https://arxiv.org/pdf/1611.09940.pdf']
1
65,379,330
<p>To do this we will introduce a new node status <code>'X'</code> to denote the susceptible isolating node. I've added an edge from <code>'S'</code> to <code>'X'</code> and another from <code>'X'</code> to <code>'S'</code> in <code>'H'</code>. I've added <code>'X'</code> to the <code>return_statuses</code>, and I've plotted it.</p> <p>To simplify your code, I have removed the loop that assigned everything to initially be <code>'S'</code>. Because it uses a <code>default_dict</code> it all starts out as <code>'S'</code> implicitly.</p> <p>You mention the next &quot;time step&quot; and shifting with probability <code>p</code>. This isn't what will happen in this code. The code works in continuous time. So it will need a rate. It doesn't specify exactly how long a node remains isolated, but rather gives an idea of how likely it is to leave at any moment. If this doesn't make sense to you, you should read up a little bit on the exponential distribution.</p> <p>Although I haven't used it here, you may want to be aware that EoN comes with tools that allow you to <a href="https://epidemicsonnetworks.readthedocs.io/en/latest/Examples.html#visualizing-or-animating-disease-spread" rel="nofollow noreferrer">animate the simulation</a>.</p> <pre><code>import networkx as nx import matplotlib.pyplot as plt import EoN from collections import defaultdict a = 0.1 b = 0.01 y = 0.001 d = 0.001 to_isolation_rate = 0.05 from_isolation_rate = 1 # Simple contagions # the below is based on an example of a SEIR disease (there is an exposed state before becoming infectious) # from https://arxiv.org/pdf/2001.02436.pdf Gnp = nx.gnp_random_graph(500, 0.005) H = nx.DiGraph() #For the spontaneous transitions H.add_edge('I', 'R', rate = b) # an infected node can be recovered/removed H.add_edge('I', 'S', rate = y) # an infected node can become susceptible again H.add_edge('R', 'S', rate = d) # a recovered node can become suscepticle again H.add_edge('S', 'X', rate = to_isolation_rate) H.add_edge('X', 'S', rate = from_isolation_rate) J = nx.DiGraph() #for the induced transitions J.add_edge(('I', 'S'),('I', 'I'), rate = a) # a susceptible node can become infected from a neighbour IC = defaultdict(lambda: 'S') IC[0] = 'I' return_statuses = ('S', 'I', 'R', 'X') print('doing Gillespie simulation') t, S, I, R, X = EoN.Gillespie_simple_contagion(Gnp, H, J, IC, return_statuses, tmax = 500) print('done with simulation, now plotting') plt.plot(t, S, label = 'Susceptible') plt.plot(t, I, label = 'Infected') plt.plot(t, R, label = 'Recovered') plt.plot(t, X, label = 'Isolating') plt.xlabel('$t$') plt.ylabel('Number of nodes') plt.legend() plt.show() </code></pre>
2020-12-20 11:21:15.113000+00:00
2020-12-20 11:21:15.113000+00:00
null
null
65,378,617
<p>I want to simulate and visualize a SIRS model on a NetworkX graph (where a recovered/removed node can again become susceptible). Each node also operates as an agent, and can at each time step, choose to isolate with a probability p, and thus be unable to become infected in that timestep.</p> <p>I have simulated a SIRS model with EoN.Gillespie_simple_contagion and I am trying to understand if I can modify these methods to include the agent behavior, or if I need to write a bespoke method.</p> <p>Ive seen its possible to modify some EoN methods: <a href="https://stackoverflow.com/questions/56927079/modified-sir-model">Modified SIR model</a></p> <p>Here is the code I am using:</p> <pre><code>import networkx as nx import matplotlib.pyplot as plt import EoN from collections import defaultdict # parameters required for the SIRS model a = 0.1 b = 0.01 y = 0.001 d = 0.001 # Simple contagions # the below is based on an example of a SEIR disease (there is an exposed state before becoming infectious) # from https://arxiv.org/pdf/2001.02436.pdf Gnp = nx.gnp_random_graph(500, 0.005) H = nx.DiGraph() #For the spontaneous transitions H.add_edge('I', 'R', rate = b) # an infected node can be recovered/removed H.add_edge('I', 'S', rate = y) # an infected node can become susceptible again H.add_edge('R', 'S', rate = d) # a recovered node can become suscepticle again J = nx.DiGraph() #for the induced transitions J.add_edge(('I', 'S'),('I', 'I'), rate = a) # a susceptible node can become infected from a neighbour IC = defaultdict(lambda: 'S') # set all statuses except one to susceptible. only one node shall begin infected for node in range(500): IC[node] = 'S' IC[0] = 'I' return_statuses = ('S', 'I', 'R') print('doing Gillespie simulation') t, S, I, R = EoN.Gillespie_simple_contagion(Gnp, H, J, IC, return_statuses, tmax = 500) print('done with simulation, now plotting') plt.plot(t, S, label = 'Susceptible') plt.plot(t, I, label = 'Infected') plt.plot(t, R, label = 'Recovered') plt.xlabel('$t$') plt.ylabel('Number of nodes') plt.legend() plt.show() </code></pre>
2020-12-20 09:43:37.500000+00:00
2020-12-26 15:13:12.973000+00:00
null
python|networkx|graph-theory|graph-algorithm|eon
['https://epidemicsonnetworks.readthedocs.io/en/latest/Examples.html#visualizing-or-animating-disease-spread']
1
53,697,941
<p>You might also be hitting concept drift; see <a href="https://datascience.stackexchange.com/questions/12761/should-a-model-be-re-trained-if-new-observations-are-available">Should you retrain a model when new observations are available</a>. There's also the concept of catastrophic forgetting, which a bunch of academic papers discuss. Here's one on MNIST: <a href="https://arxiv.org/pdf/1312.6211.pdf" rel="nofollow noreferrer">Empirical investigation of catastrophic forgetting</a>.</p>
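<p>A hedged sketch (my own suggestion, not from the linked papers) of one simple way to soften catastrophic forgetting when continuing training: rehearse a subset of the old data alongside the new data. It reuses the <code>dataset1_x</code>/<code>dataset2_x</code> variables from the question's script below:</p> <pre><code>import numpy as np
from keras.models import load_model

model = load_model('partly_trained.h5')

# Mix a random sample of the old data in with the new data before continuing.
replay_idx = np.random.choice(len(dataset1_x), size=1000, replace=False)
mixed_x = np.concatenate([dataset2_x, dataset1_x[replay_idx]])
mixed_y = np.concatenate([dataset2_y, dataset1_y[replay_idx]])

model.fit(mixed_x, mixed_y, nb_epoch=10, batch_size=200, verbose=2)
</code></pre>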
2018-12-10 00:03:06.720000+00:00
2018-12-10 00:03:06.720000+00:00
null
null
42,666,046
<p>I was wondering if it was possible to save a partly trained Keras model and continue the training after loading the model again.</p> <p>The reason for this is that I will have more training data in the future and I do not want to retrain the whole model again.</p> <p>The functions which I am using are:</p> <pre><code>#Partly train model model.fit(first_training, first_classes, batch_size=32, nb_epoch=20) #Save partly trained model model.save('partly_trained.h5') #Load partly trained model from keras.models import load_model model = load_model('partly_trained.h5') #Continue training model.fit(second_training, second_classes, batch_size=32, nb_epoch=20) </code></pre> <hr /> <p><strong>Edit 1: added fully working example</strong></p> <p>With the first dataset after 10 epochs the loss of the last epoch will be 0.0748 and the accuracy 0.9863.</p> <p>After saving, deleting and reloading the model the loss and accuracy of the model trained on the second dataset will be 0.1711 and 0.9504 respectively.</p> <p>Is this caused by the new training data or by a completely re-trained model?</p> <pre><code>&quot;&quot;&quot; Model by: http://machinelearningmastery.com/ &quot;&quot;&quot; # load (downloaded if needed) the MNIST dataset import numpy from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense from keras.utils import np_utils from keras.models import load_model numpy.random.seed(7) def baseline_model(): model = Sequential() model.add(Dense(num_pixels, input_dim=num_pixels, init='normal', activation='relu')) model.add(Dense(num_classes, init='normal', activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model if __name__ == '__main__': # load data (X_train, y_train), (X_test, y_test) = mnist.load_data() # flatten 28*28 images to a 784 vector for each image num_pixels = X_train.shape[1] * X_train.shape[2] X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32') X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32') # normalize inputs from 0-255 to 0-1 X_train = X_train / 255 X_test = X_test / 255 # one hot encode outputs y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test) num_classes = y_test.shape[1] # build the model model = baseline_model() #Partly train model dataset1_x = X_train[:3000] dataset1_y = y_train[:3000] model.fit(dataset1_x, dataset1_y, nb_epoch=10, batch_size=200, verbose=2) # Final evaluation of the model scores = model.evaluate(X_test, y_test, verbose=0) print(&quot;Baseline Error: %.2f%%&quot; % (100-scores[1]*100)) #Save partly trained model model.save('partly_trained.h5') del model #Reload model model = load_model('partly_trained.h5') #Continue training dataset2_x = X_train[3000:] dataset2_y = y_train[3000:] model.fit(dataset2_x, dataset2_y, nb_epoch=10, batch_size=200, verbose=2) scores = model.evaluate(X_test, y_test, verbose=0) print(&quot;Baseline Error: %.2f%%&quot; % (100-scores[1]*100)) </code></pre> <hr /> <p><strong>Edit 2: tensorflow.keras remarks</strong></p> <p>For tensorflow.keras change the parameter nb_epochs to epochs in the model fit. 
The imports and basemodel function are:</p> <pre><code>import numpy from tensorflow.keras.datasets import mnist from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.utils import to_categorical from tensorflow.keras.models import load_model numpy.random.seed(7) def baseline_model(): model = Sequential() model.add(Dense(num_pixels, input_dim=num_pixels, activation='relu')) model.add(Dense(num_classes, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model </code></pre>
2017-03-08 08:07:57.997000+00:00
2022-06-13 03:47:29.163000+00:00
2021-01-11 14:13:29.320000+00:00
python|tensorflow|neural-network|keras|resuming-training
['https://datascience.stackexchange.com/questions/12761/should-a-model-be-re-trained-if-new-observations-are-available', 'https://arxiv.org/pdf/1312.6211.pdf']
2
45,443,910
<p>In a couple of experiments in <a href="https://arxiv.org/pdf/1707.09725.pdf#page=73" rel="noreferrer">my master's thesis</a> (e.g. page 59), I found that the bias might be important for the first layer(s), but at the fully connected layers at the end in particular it seems not to play a big role.</p> <p>This might be highly dependent on the network architecture / dataset.</p>
2017-08-01 17:09:42.807000+00:00
2017-08-01 17:09:42.807000+00:00
null
null
2,480,650
<p>I'm aware of the gradient descent and the back-propagation algorithm. What I don't get is: when is using a bias important and how do you use it?</p> <p>For example, when mapping the <code>AND</code> function, when I use two inputs and one output, it does not give the correct weights. However, when I use three inputs (one of which is a bias), it gives the correct weights.</p>
2010-03-19 21:18:12.770000+00:00
2021-03-29 01:14:31.160000+00:00
2021-03-28 22:59:16.737000+00:00
machine-learning|neural-network|artificial-intelligence|backpropagation
['https://arxiv.org/pdf/1707.09725.pdf#page=73']
1
60,042,657
<p>If you need a MATLAB implementation, go for the <a href="https://motchallenge.net/" rel="nofollow noreferrer">motchallenge.net</a> (multiple object tracking benchmark) devkit: <a href="https://bitbucket.org/amilan/motchallenge-devkit" rel="nofollow noreferrer">https://bitbucket.org/amilan/motchallenge-devkit</a>. </p> <p>There is also an excellent and easier to use Python implementation by Christoph Heindl: <a href="https://github.com/cheind/py-motmetrics" rel="nofollow noreferrer">https://github.com/cheind/py-motmetrics</a>. In both cases you will get, along with <code>MOTA</code> and <code>MOTP</code>, the more powerful <a href="https://arxiv.org/pdf/1609.01775.pdf" rel="nofollow noreferrer">ID measures</a> as well.</p>
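<p>A short py-motmetrics sketch (the API calls follow the py-motmetrics README as I recall it, so treat this as an assumption; <code>frames</code> holding per-frame ground-truth and hypothesis ids/positions is your own data):</p> <pre><code>import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

for gt_ids, gt_points, hyp_ids, hyp_points in frames:
    # Squared Euclidean distances between ground-truth and hypothesis positions;
    # pairs farther apart than max_d2 are treated as non-matchable.
    dists = mm.distances.norm2squared_matrix(gt_points, hyp_points, max_d2=100.0)
    acc.update(gt_ids, hyp_ids, dists)

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['mota', 'motp', 'num_false_positives',
                                   'num_misses'], name='seq01')
print(mm.io.render_summary(summary))
</code></pre>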
2020-02-03 15:49:55.273000+00:00
2020-02-11 07:34:40.237000+00:00
2020-02-11 07:34:40.237000+00:00
null
44,432,151
<p>I am dealing with multiple people tracking using single camera in Matlab, and need to calculate multiple objects tracking performance metrics which are MOTA, MOTP, FP and FN.</p> <p>Is it possible to calculate them using ''classperf'' function which is found in Matlab? or there is another way?</p> <p>Many thanks</p>
2017-06-08 09:42:10.547000+00:00
2020-02-11 07:34:40.237000+00:00
2017-06-08 10:20:47.573000+00:00
matlab|image-processing|computer-vision|tracking
['https://motchallenge.net/', 'https://bitbucket.org/amilan/motchallenge-devkit', 'https://github.com/cheind/py-motmetrics', 'https://arxiv.org/pdf/1609.01775.pdf']
4
65,069,982
<p>The script you are talking about is the right one for continuing pre-training. The original BERT uses next-sentence prediction as an auxiliary objective. When it is provided a pair of sentences (separated by the <code>[SEP]</code> token), the embedding of the <code>[CLS]</code> token (the very first one) is used as input to a classifier telling whether the sentences are adjacent in a coherent text or not.</p> <p>This is what the empty lines are for: on a document boundary, the sentences cannot be adjacent.</p> <p>However, the contribution of the next-sentence objective is arguable. For instance, <a href="https://arxiv.org/abs/1907.11692" rel="nofollow noreferrer">RoBERTa</a> considers it superfluous, only uses the masked-language-modeling objective, and still gets better representation quality than the original BERT.</p>
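<p>As a small illustration of the expected layout (this snippet is mine, not from the repository): one sentence per line, with an empty line marking a document boundary so that next-sentence pairs are never drawn across documents:</p> <pre><code># Write a toy corpus in the layout the fine-tuning scripts expect.
docs = [
    ['First sentence of document A.', 'Second sentence of document A.'],
    ['First sentence of document B.', 'Second sentence of document B.'],
]

with open('corpus.txt', 'w') as f:
    f.write('\n\n'.join('\n'.join(doc) for doc in docs) + '\n')
</code></pre>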
2020-11-30 08:12:11.953000+00:00
2020-11-30 08:12:11.953000+00:00
null
null
65,024,801
<p>I want to fine-tune BERT on a specific language domain using the following git repo:</p> <p><a href="https://github.com/cedrickchee/pytorch-pretrained-BERT/blob/master/examples/lm_finetuning/README.md" rel="nofollow noreferrer">https://github.com/cedrickchee/pytorch-pretrained-BERT/blob/master/examples/lm_finetuning/README.md</a></p> <p>Regarding the input format, it says:</p> <blockquote> <p>The scripts in this folder expect a single file as input, consisting of untokenized text, with one sentence per line, and one blank line between documents. The reason for the sentence splitting is that part of BERT's training involves a next sentence objective in which the model must predict whether two sequences of text are contiguous text from the same document or not, and to avoid making the task too easy, the split point between the sequences is always at the end of a sentence. The linebreaks in the file are therefore necessary to mark the points where the text can be split.</p> </blockquote> <p>What do they mean with documents in this regard? From my understanding, the <code>.txt</code> file used for fine-tuning just contains a lot of domain specific text with one sentence per line. Just to be sure, is it the correct approach to use this repository if I want to fine tune BERT on a specific language domain?</p> <p>Thank you for your help!</p>
2020-11-26 15:19:24.670000+00:00
2020-11-30 08:12:11.953000+00:00
null
python|nlp|bert-language-model|transformer-model
['https://arxiv.org/abs/1907.11692']
1
46,861,568
<blockquote> <p>Will augmenting data help? Does it improve accuracy?</p> </blockquote> <p>That's hard to say in advance. But almost certainly, when you already have a model which is better than random. And when you choose the right augmentation method.</p> <p>See my masters thesis <a href="https://arxiv.org/pdf/1707.09725.pdf#page=94" rel="nofollow noreferrer">Analysis and Optimization of Convolutional Neural Network Architectures</a>, page 80 for many different augmentation methods.</p> <blockquote> <p>In which case one should choose to augment data and should avoid?</p> </blockquote> <ul> <li>When you don't have enough data -> augment</li> <li>Avoid augmentations where you can't tell the emotion after the augmentation. So in case of character recognition, rotation is a bad idea (e.g. due to <code>6 vs 9</code> or <code>u vs n</code> or <code>\rightarrow vs \nearrow</code>)</li> </ul>
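<p>A hedged Keras sketch (the generator settings are my suggestion, not from the thesis) of augmentations that plausibly preserve the emotion label for face images while avoiding the label-destroying transforms mentioned above:</p> <pre><code>from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    horizontal_flip=True,    # a mirrored face keeps its expression
    width_shift_range=0.1,   # small translations
    height_shift_range=0.1,
    zoom_range=0.1,
    rotation_range=10,       # keep rotations small for faces
)

# model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
#                     steps_per_epoch=len(x_train) // 32, epochs=20)
</code></pre>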
2017-10-21 08:43:46+00:00
2017-10-21 08:43:46+00:00
null
null
46,855,082
<p>I am trying to build cnn model (keras) that can classify image based on users emotions. I am having issues with data. I have really small data for training. Will augmenting data help? Does it improve accuracy? In which case one should choose to augment data and should avoid?</p>
2017-10-20 18:20:38.323000+00:00
2017-10-25 13:04:20.003000+00:00
2017-10-21 11:28:32.157000+00:00
neural-network|deep-learning|classification
['https://arxiv.org/pdf/1707.09725.pdf#page=94']
1
46,834,251
<p>The famous Word2Vec implementation is CBOW + Skip-Gram.</p> <p>Your input for CBOW is your input word vector (each is a vector of length N; N = size of vocabulary). All these input word vectors together form an array of size M x N (M = number of words).</p> <p>Now what is interesting in the graphic below is the projection step, where we force an NN to learn a lower dimensional representation of our input space to predict the output correctly. The desired output is our original input. </p> <p>This lower dimensional representation P consists of abstract features describing words e.g. location, adjective, etc. (in reality these learned features are not really clear). Now these features represent one view on these words. </p> <p>And like with all features, we can see them as high-dimensional vectors. If you want, you can use dimensionality reduction techniques to display them in 2 or 3 dimensional space. </p> <p><a href="https://i.stack.imgur.com/gpH3u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gpH3u.png" alt="enter image description here"></a></p> <p>More details and source of graphic: <a href="https://arxiv.org/pdf/1301.3781.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1301.3781.pdf</a></p>
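<p>A tiny sketch of training and inspecting such vectors (this assumes the gensim library, which is not mentioned in the answer; the toy corpus is far too small to give meaningful neighbours, it only shows the API shape):</p> <pre><code>from gensim.models import Word2Vec

sentences = [
    ['king', 'rules', 'the', 'kingdom'],
    ['queen', 'rules', 'the', 'kingdom'],
    ['dog', 'chases', 'the', 'cat'],
]

# sg=0 selects the CBOW training scheme described above.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)

vec = model.wv['king']      # a 50-dimensional real-valued vector
print(vec.shape)            # (50,)
print(model.wv.most_similar('king', topn=2))
</code></pre>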
2017-10-19 16:06:12.970000+00:00
2017-10-19 16:06:12.970000+00:00
null
null
46,724,680
<p>I am sorry for my naivety, but I don't understand why word embeddings that are the result of NN training process (word2vec) are actually vectors.</p> <p>Embedding is the process of dimension reduction, during the training process NN reduces the 1/0 arrays of words into smaller size arrays, the process does nothing that applies vector arithmetic.</p> <p>So as result we got just arrays and not the vectors. Why should I think of these arrays as vectors?</p> <p>Even though, we got vectors, why does everyone depict them as vectors coming from the origin (0,0)?</p> <p>Again, I am sorry if my question looks stupid.</p>
2017-10-13 07:27:55.157000+00:00
2018-07-13 09:17:17.073000+00:00
2017-10-13 12:05:46.683000+00:00
machine-learning|neural-network|nlp|word2vec|embedding
['https://i.stack.imgur.com/gpH3u.png', 'https://arxiv.org/pdf/1301.3781.pdf']
2
46,765,443
<p><strong>What are embeddings?</strong></p> <blockquote> <p>Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the <strong>vocabulary are mapped to vectors of real numbers</strong>.</p> <p>Conceptually it involves a mathematical embedding from a space with one dimension per word to a continuous vector space with much lower dimension.</p> </blockquote> <p>(Source: <a href="https://en.wikipedia.org/wiki/Word_embedding" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Word_embedding</a>)</p> <p><strong>What is Word2Vec?</strong></p> <blockquote> <p>Word2vec is a group of related models that are used to produce word embeddings. These <strong>models are shallow, two-layer neural networks that are trained to reconstruct linguistic contexts of words</strong>.</p> <p>Word2vec takes as its input a large corpus of text and produces a vector space, typically of several hundred dimensions, with <strong>each unique word in the corpus being assigned a corresponding vector in the space</strong>.</p> <p>Word vectors are positioned in the vector space such that <strong>words that share common contexts in the corpus are located in close proximity to one another in the space</strong>.</p> </blockquote> <p>(Source: <a href="https://en.wikipedia.org/wiki/Word2vec" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Word2vec</a>)</p> <p><strong>What's an array?</strong></p> <blockquote> <p>In computer science, an array data structure, or simply an array, is a data structure consisting of a collection of elements (values or variables), each identified by at least one array index or key.</p> <p>An array is stored so that the position of each element can be computed from its index tuple by a mathematical formula.</p> <p><strong>The simplest type of data structure is a linear array, also called one-dimensional array.</strong></p> </blockquote> <p><strong>What's a vector / vector space?</strong></p> <blockquote> <p>A vector space (also called a linear space) is a collection of objects called vectors, which may be added together and multiplied (&quot;scaled&quot;) by numbers, called scalars.</p> <p>Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field.</p> <p>The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms, listed below.</p> </blockquote> <p>(Source: <a href="https://en.wikipedia.org/wiki/Vector_space" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Vector_space</a>)</p> <p><strong>What's the difference between vectors and arrays?</strong></p> <p>Firstly, the vector in word embeddings is not exactly the programming language data structure (so it's not <a href="https://stackoverflow.com/questions/15079057/arrays-vs-vectors-introductory-similarities-and-differences">Arrays vs Vectors: Introductory Similarities and Differences</a>).</p> <p>Programmatically, a word embedding vector <strong>IS</strong> some sort of an array (data structure) of real numbers (i.e. scalars)</p> <p>Mathematically, any element with one or more dimension populated with real numbers is a <a href="https://en.wikipedia.org/wiki/Tensor" rel="nofollow noreferrer">tensor</a>. 
And a vector is a single dimension of scalars.</p> <hr /> <p>To answer the OP question:</p> <p><strong>Why are word embedding actually vectors?</strong></p> <blockquote> <p>By definition, word embeddings are vectors (see above)</p> </blockquote> <p><strong>Why do we represent words as vectors of real numbers?</strong></p> <blockquote> <p>To learn the differences between words, we have to quantify the difference in some manner.</p> </blockquote> <p>Imagine, if we assign theses &quot;smart&quot; numbers to the words:</p> <pre><code>&gt;&gt;&gt; semnum = semantic_numbers = {'car': 5, 'vehicle': 2, 'apple': 232, 'orange': 300, 'fruit': 211, 'samsung': 1080, 'iphone': 1200} &gt;&gt;&gt; abs(semnum['fruit'] - semnum['apple']) 21 &gt;&gt;&gt; abs(semnum['samsung'] - semnum['apple']) 848 </code></pre> <p>We see that the distance between <code>fruit</code> and <code>apple</code> is close but <code>samsung</code> and <code>apple</code> isn't. In this case, the single numerical &quot;feature&quot; of the word is capable of capturing some information about the word meanings but not fully.</p> <p>Imagine the we have two real number values for each word (i.e. vector):</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; semnum = semantic_numbers = {'car': [5, -20], 'vehicle': [2, -18], 'apple': [232, 1010], 'orange': [300, 250], 'fruit': [211, 250], 'samsung': [1080, 1002], 'iphone': [1200, 1100]} </code></pre> <p>To compute the difference, we could have done:</p> <pre><code>&gt;&gt;&gt; np.array(semnum['apple']) - np.array(semnum['orange']) array([-68, 761]) &gt;&gt;&gt; np.array(semnum['apple']) - np.array(semnum['samsung']) array([-848, 8]) </code></pre> <p>That's not very informative, it returns a vector and we can't get a definitive measure of distance between the words, so we can try some vectorial tricks and compute the distance between the vectors, e.g. <a href="https://stackoverflow.com/questions/1401712/how-can-the-euclidean-distance-be-calculated-with-numpy">euclidean distance</a>:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; orange = np.array(semnum['orange']) &gt;&gt;&gt; apple = np.array(semnum['apple']) &gt;&gt;&gt; samsung = np.array(semnum['samsung']) &gt;&gt;&gt; np.linalg.norm(apple-orange) 763.03604108849277 &gt;&gt;&gt; np.linalg.norm(apple-samsung) 848.03773500947466 &gt;&gt;&gt; np.linalg.norm(orange-samsung) 1083.4685043876448 </code></pre> <p>Now, we can see more &quot;information&quot; that <code>apple</code> can be closer to <code>samsung</code> than <code>orange</code> to <code>samsung</code>. Possibly that's because <code>apple</code> co-occurs in the corpus more frequently with <code>samsung</code> than <code>orange</code>.</p> <p>The big question comes, <strong>&quot;How do we get these real numbers to represent the vector of the words?&quot;</strong>. That's where the Word2Vec / embedding training algorithms (<a href="https://link.springer.com/chapter/10.1007/3-540-33486-6_6" rel="nofollow noreferrer">originally conceived by Bengio 2003</a>) comes in.</p> <hr /> <h1>Taking a detour</h1> <p>Since adding more real numbers to the vector representing the words is more informative then why don't we just add a lot more dimensions (i.e. 
the number of columns in each word vector)?</p> <p>Traditionally, we compute the differences between words by computing word-by-word matrices in the field of <a href="https://en.wikipedia.org/wiki/Distributional_semantics" rel="nofollow noreferrer">distributional semantics/distributed lexical semantics</a>, but the matrices become really sparse, with many zero values, if the words don't co-occur with one another.</p> <p>Thus a lot of effort has been put into <a href="https://en.wikipedia.org/wiki/Dimensionality_reduction" rel="nofollow noreferrer">dimensionality reduction</a> after computing the <a href="https://stackoverflow.com/questions/24073030/what-are-co-occurance-matrixes-and-how-are-they-used-in-nlp">word co-occurrence matrix</a>. IMHO, it's like taking a top-down view of the global relations between words and then compressing the matrix to get a smaller vector to represent each word.</p> <p>So the &quot;deep learning&quot; way of creating word embeddings comes from another school of thought: it starts with a randomly (sometimes not-so-randomly) initialized layer of vectors for each word, and then learns the parameters/weights of these vectors by optimizing them to minimize some loss function based on some defined properties.</p> <p>It sounds a little vague, but if we look concretely at the Word2Vec learning technique, it becomes clearer; see</p> <ul> <li><a href="https://rare-technologies.com/making-sense-of-word2vec/" rel="nofollow noreferrer">https://rare-technologies.com/making-sense-of-word2vec/</a></li> <li><a href="http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/" rel="nofollow noreferrer">http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/</a></li> <li><a href="https://arxiv.org/pdf/1402.3722.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1402.3722.pdf</a> (more mathematical)</li> </ul> <p>Here are more resources to read up on word embeddings: <a href="https://github.com/keon/awesome-nlp#word-vectors" rel="nofollow noreferrer">https://github.com/keon/awesome-nlp#word-vectors</a></p>
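<p>To make the &quot;learning the vectors&quot; step concrete, here is a minimal sketch (an illustration, not part of the original training recipe) using the <code>gensim</code> library; the toy corpus is made up purely for this example, the parameter names assume gensim 4.x (where the dimensionality argument is called <code>vector_size</code>), and the similarities will be noisy on such a tiny corpus:</p> <pre><code>from gensim.models import Word2Vec

# A made-up toy corpus: each sentence is a list of tokens.
corpus = [
    ['i', 'ate', 'an', 'apple', 'and', 'an', 'orange'],
    ['apple', 'and', 'samsung', 'sell', 'phones'],
    ['an', 'orange', 'is', 'a', 'fruit'],
    ['the', 'iphone', 'is', 'made', 'by', 'apple'],
]

# Train a skip-gram model; every word gets a 10-dimensional vector.
model = Word2Vec(corpus, vector_size=10, window=2, min_count=1, sg=1, epochs=50)

print(model.wv['apple'])                       # the learned vector (a numpy array)
print(model.wv.similarity('apple', 'orange'))  # cosine similarity between two words
print(model.wv.most_similar('apple', topn=2))  # nearest neighbours in the vector space
</code></pre>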
2017-10-16 07:55:31.027000+00:00
2017-10-22 15:03:30.527000+00:00
2020-06-20 09:12:55.060000+00:00
null
46,724,680
<p>I am sorry for my naivety, but I don't understand why word embeddings that are the result of an NN training process (word2vec) are actually vectors.</p> <p>Embedding is a process of dimension reduction: during the training process the NN reduces the 1/0 arrays of words into smaller arrays, and nothing in that process applies vector arithmetic.</p> <p>So as a result we get just arrays, not vectors. Why should I think of these arrays as vectors?</p> <p>Even if we do get vectors, why does everyone depict them as vectors coming from the origin (0,0)?</p> <p>Again, I am sorry if my question looks stupid.</p>
2017-10-13 07:27:55.157000+00:00
2018-07-13 09:17:17.073000+00:00
2017-10-13 12:05:46.683000+00:00
machine-learning|neural-network|nlp|word2vec|embedding
['https://en.wikipedia.org/wiki/Word_embedding', 'https://en.wikipedia.org/wiki/Word2vec', 'https://en.wikipedia.org/wiki/Vector_space', 'https://stackoverflow.com/questions/15079057/arrays-vs-vectors-introductory-similarities-and-differences', 'https://en.wikipedia.org/wiki/Tensor', 'https://stackoverflow.com/questions/1401712/how-can-the-euclidean-distance-be-calculated-with-numpy', 'https://link.springer.com/chapter/10.1007/3-540-33486-6_6', 'https://en.wikipedia.org/wiki/Distributional_semantics', 'https://en.wikipedia.org/wiki/Dimensionality_reduction', 'https://stackoverflow.com/questions/24073030/what-are-co-occurance-matrixes-and-how-are-they-used-in-nlp', 'https://rare-technologies.com/making-sense-of-word2vec/', 'http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/', 'https://arxiv.org/pdf/1402.3722.pdf', 'https://github.com/keon/awesome-nlp#word-vectors']
14
44,454,253
<p>I went through this <a href="https://arxiv.org/pdf/1508.04025.pdf" rel="noreferrer">Effective Approaches to Attention-based Neural Machine Translation</a>. In section <em>3.1</em> they mention the difference between the two attentions as follows:</p> <ol> <li><p><strong>Luong attention</strong> uses the top hidden layer states in both the encoder and decoder. <em>But <strong>Bahdanau attention</strong> takes the concatenation of the forward and backward source hidden states (top hidden layer)</em>.</p> </li> <li><p>In <strong>Luong attention</strong> they take the decoder hidden state at time <em><strong>t</strong></em>, then calculate the attention scores, and from those get the context vector, which is concatenated with the hidden state of the decoder before predicting.</p> <p>But in <strong>Bahdanau</strong>, at time <em><strong>t</strong></em> we consider the hidden state of the decoder at <em><strong>t-1</strong></em>. Then we calculate the alignment and context vectors as above, but we concatenate this context with the hidden state of the decoder at <em><strong>t-1</strong></em>. So before the softmax, this concatenated vector goes through a GRU.</p> </li> <li><p>Luong has different types of alignment score functions. <strong>Bahdanau</strong> has only the concat score alignment model.</p> </li> </ol> <p><a href="https://i.stack.imgur.com/tiQkz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tiQkz.png" alt="Alignment methods" /></a></p>
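<p>As a rough illustration of the scoring difference (a sketch, not code from the paper), here is a small NumPy example of a multiplicative (dot) score in the style of Luong and an additive (concat) score in the style of Bahdanau; the weight matrices are random stand-ins for parameters that would normally be learned:</p> <pre><code>import numpy as np

d = 4                          # hidden size
S = 5                          # number of source positions
h_s = np.random.randn(S, d)    # encoder (source) hidden states
h_t = np.random.randn(d)       # current decoder (target) hidden state

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Luong-style multiplicative score: score(h_t, h_s) = h_t . h_s
dot_scores = h_s @ h_t                                         # shape (S,)

# Bahdanau-style additive (concat) score:
# score(h_t, h_s) = v_a^T tanh(W_a [h_t; h_s])
W_a = np.random.randn(d, 2 * d)                                # learned in a real model
v_a = np.random.randn(d)
concat = np.concatenate([np.tile(h_t, (S, 1)), h_s], axis=1)   # shape (S, 2d)
add_scores = np.tanh(concat @ W_a.T) @ v_a                     # shape (S,)

# Either way, the scores are normalized into attention weights and
# used to build a context vector as a weighted sum of the source states.
for scores in (dot_scores, add_scores):
    weights = softmax(scores)
    context = weights @ h_s
    print(weights, context)
</code></pre>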
2017-06-09 09:31:58.247000+00:00
2018-11-14 10:53:55.003000+00:00
2020-06-20 09:12:55.060000+00:00
null
44,238,154
<p>These two attentions are used in <strong>seq2seq</strong> modules. The two different attentions are introduced as multiplicative and additive attentions in <a href="https://www.tensorflow.org/versions/master/api_guides/python/contrib.seq2seq" rel="noreferrer">this</a> TensorFlow documentation. What is the difference?</p>
2017-05-29 08:43:37.273000+00:00
2022-04-11 08:46:58.967000+00:00
2020-10-26 12:21:28.697000+00:00
tensorflow|deep-learning|nlp|attention-model
['https://arxiv.org/pdf/1508.04025.pdf', 'https://i.stack.imgur.com/tiQkz.png']
2
69,470,115
<h2>Median</h2> <p>Two recent percentile approximation algorithms and their Python implementations can be found here:</p> <p><strong>t-Digests</strong></p> <ul> <li><a href="https://arxiv.org/abs/1902.04023" rel="nofollow noreferrer">https://arxiv.org/abs/1902.04023</a></li> <li><a href="https://github.com/CamDavidsonPilon/tdigest" rel="nofollow noreferrer">https://github.com/CamDavidsonPilon/tdigest</a></li> </ul> <p><strong>DDSketch</strong></p> <ul> <li><a href="https://arxiv.org/abs/1908.10693" rel="nofollow noreferrer">https://arxiv.org/abs/1908.10693</a></li> <li><a href="https://github.com/DataDog/sketches-py" rel="nofollow noreferrer">https://github.com/DataDog/sketches-py</a></li> </ul> <p>Both algorithms bucket data. Because t-Digest uses smaller bins near the tails, its accuracy is better at the extremes (and weaker close to the median). DDSketch additionally provides relative error guarantees.</p>
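<p>As a quick sketch of how the <code>tdigest</code> package linked above is typically used (method names taken from that repository's README, so worth double-checking against the release you install):</p> <pre><code>import numpy as np
from tdigest import TDigest

digest = TDigest()

# Stream values in chunks without keeping the whole dataset in memory.
for _ in range(100):
    chunk = np.random.normal(loc=100, scale=15, size=10_000)
    digest.batch_update(chunk)

print(digest.percentile(50))   # approximate median
print(digest.percentile(99))   # approximate 99th percentile
</code></pre>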
2021-10-06 17:23:17.917000+00:00
2021-10-06 17:23:17.917000+00:00
null
null
1,058,813
<p>Is there an algorithm to estimate the median, mode, skewness, and/or kurtosis of set of values, but that does NOT require storing all the values in memory at once?</p> <p>I'd like to calculate the basic statistics:</p> <ul> <li>mean: arithmetic average</li> <li>variance: average of squared deviations from the mean</li> <li>standard deviation: square root of the variance</li> <li>median: value that separates larger half of the numbers from the smaller half</li> <li>mode: most frequent value found in the set</li> <li>skewness: tl; dr</li> <li>kurtosis: tl; dr</li> </ul> <p>The basic formulas for calculating any of these is grade-school arithmetic, and I do know them. There are many stats libraries that implement them, as well.</p> <p>My problem is the large number (billions) of values in the sets I'm handling: Working in Python, I can't just make a list or hash with billions of elements. Even if I wrote this in C, billion-element arrays aren't too practical.</p> <p>The data is not sorted. It's produced randomly, on-the-fly, by other processes. The size of each set is highly variable, and the sizes will not be known in advance.</p> <p>I've already figured out how to handle the mean and variance pretty well, iterating through each value in the set in any order. (Actually, in my case, I take them in the order in which they're generated.) Here's the algorithm I'm using, courtesy <a href="http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#On-line_algorithm" rel="noreferrer">http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#On-line_algorithm</a>:</p> <ul> <li>Initialize three variables: count, sum, and sum_of_squares</li> <li>For each value: <ul> <li>Increment count.</li> <li>Add the value to sum.</li> <li>Add the square of the value to sum_of_squares.</li> </ul></li> <li>Divide sum by count, storing as the variable mean.</li> <li>Divide sum_of_squares by count, storing as the variable mean_of_squares.</li> <li>Square mean, storing as square_of_mean.</li> <li>Subtract square_of_mean from mean_of_squares, storing as variance.</li> <li>Output mean and variance.</li> </ul> <p>This "on-line" algorithm has weaknesses (e.g., accuracy problems as sum_of_squares quickly grows larger than integer range or float precision), but it basically gives me what I need, without having to store every value in each set.</p> <p>But I don't know whether similar techniques exist for estimating the additional statistics (median, mode, skewness, kurtosis). I could live with a biased estimator, or even a method that compromises accuracy to a certain degree, as long as the memory required to process N values is substantially less than O(N).</p> <p>Pointing me to an existing stats library will help, too, if the library has functions to calculate one or more of these operations "on-line".</p>
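<p>(For concreteness, a direct Python rendering of the count/sum/sum_of_squares steps above — the same naive sum-of-squares method, so it carries the same precision caveats; Welford's online algorithm on the linked Wikipedia page avoids them — might look like this sketch:)</p> <pre><code>class RunningStats:
    """Naive one-pass mean/variance, exactly as described in the steps above."""

    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.total_sq = 0.0

    def add(self, x):
        self.count += 1
        self.total += x
        self.total_sq += x * x

    def mean(self):
        return self.total / self.count

    def variance(self):
        m = self.mean()
        return self.total_sq / self.count - m * m


stats = RunningStats()
for value in (2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0):
    stats.add(value)
print(stats.mean(), stats.variance())   # 5.0 4.0
</code></pre>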
2009-06-29 15:02:44.123000+00:00
2021-12-03 04:12:24.310000+00:00
2009-06-29 15:45:43.520000+00:00
algorithm|statistics|iterator|median
['https://arxiv.org/abs/1902.04023', 'https://github.com/CamDavidsonPilon/tdigest', 'https://arxiv.org/abs/1908.10693', 'https://github.com/DataDog/sketches-py']
4
43,136,534
<p>IDS: Intrusion Detection System.</p> <hr> <ol> <li>IDS AODV : Patches for ns-2.35 <a href="http://www.linuxquestions.org/questions/linux-wireless-networking-41/activate-mode-promiscuous-in-aodv-4175539244/#2" rel="nofollow noreferrer">http://www.linuxquestions.org/questions/linux-wireless-networking-41/activate-mode-promiscuous-in-aodv-4175539244/#2</a> ... <strong>Simulation examples</strong> : {blackholeaodv__idsAODV__dokurer.tcl, idsaodv2.tcl, idsAODV_mohawad-blackh.tcl, vanet-idsAODDV-blackhole-aodv.tcl} → <a href="https://drive.google.com/drive/folders/0B7S255p3kFXNSmRYb2lGcDRUdWs" rel="nofollow noreferrer">https://drive.google.com/drive/folders/0B7S255p3kFXNSmRYb2lGcDRUdWs</a></li> </ol> <hr> <ol start="2"> <li>IDS AOMDV : <a href="https://arxiv.org/pdf/1502.04801.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1502.04801.pdf</a></li> </ol> <hr> <ol start="3"> <li>IDS for NetSim : IDS.zip → ids//{DSR/, WLAN/} <a href="https://en.wikipedia.org/wiki/NetSim" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/NetSim</a>. The code : <a href="https://drive.google.com/file/d/0B7S255p3kFXNYkt4TkpBR3FyVjA/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/0B7S255p3kFXNYkt4TkpBR3FyVjA/view?usp=sharing</a></li> </ol>
2017-03-31 08:58:03.770000+00:00
2017-03-31 09:04:04.547000+00:00
2017-03-31 09:04:04.547000+00:00
null
43,111,609
<p>I want to write a Tcl script to implement an Intrusion Detection System in NS2. I searched a lot, but I could not find proper help. I have implemented basic routing protocols in NS2, and I have a bit of knowledge of Tcl. I want to know how to modify the AODV protocol. I request you to help me.</p>
2017-03-30 07:48:24.367000+00:00
2019-05-01 10:38:06.460000+00:00
null
networking|network-programming|tcl|ns2|intrusion-detection
['http://www.linuxquestions.org/questions/linux-wireless-networking-41/activate-mode-promiscuous-in-aodv-4175539244/#2', 'https://drive.google.com/drive/folders/0B7S255p3kFXNSmRYb2lGcDRUdWs', 'https://arxiv.org/pdf/1502.04801.pdf', 'https://en.wikipedia.org/wiki/NetSim', 'https://drive.google.com/file/d/0B7S255p3kFXNYkt4TkpBR3FyVjA/view?usp=sharing']
5
73,819,438
<p>I have downloaded and tested your model. The accuracy was as stated by you, when run against the Kaggle dataset. You were also on the right track with inverting the values of the input for your own image, the one that wasn't working. But you should have taken a look at the training inputs: the values are in the range of 0-255, while you're inverting the values with 1-x, assuming floating points from 0-1. I have drawn a simple &quot;X&quot; and &quot;P&quot; in Paint, saved it as a PNG (should work the same way with JPEG), and the neural network identifies them just fine. For that, I rescale it with OpenCV, grayscale it, then invert it (the white pixels had values of 255, while the training inputs use 0 for the blank pixels).</p> <p>Here is a rough code of what I have done:</p> <pre><code>import numpy as np import keras import cv2 def load_image(path): image = cv2.imread(path) image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) image = 255 - cv2.resize(image, (28,28)) image = image.reshape((1,784)) return image def load_dataset(path): dataset = np.loadtxt(path, delimiter=',') X = dataset[:,0:784] Y = dataset[:,0] return X, Y def benchmark(model, X, Y): test_count = 100 tests = np.random.randint(0, X.shape[0], test_count) correct = 0 p = model.predict(X[tests]) for i, ti in enumerate(tests): if Y[ti] == np.argmax(p[i]): correct += 1 print(f'Accuracy: {correct / test_count * 100}') def recognize(model, image): alph = &quot;abcdefghijklmnopqrstuvwxyz&quot; p = model.predict(image)[0] letter = alph[np.argmax(p)] print(f'Image prediction: {letter}') top3 = dict(sorted( zip(alph, 100 * np.exp(p) / sum(np.exp(p))), key=lambda x: x[1], reverse=True)[:3]) print(f'Top 3: {top3}') img_x = load_image('x.png') img_p = load_image('p.png') X, Y = load_dataset('chardata.csv') model = keras.models.load_model('CharRecognition.h5') benchmark(model, X, Y) recognize(model, img_x) recognize(model, img_p) </code></pre> <p>The predictions are &quot;x&quot; and &quot;p&quot;, respectively. I haven't tried other letters, yet, but the issues identified above seem to be part of the problem with high certainty.</p> <p>Here are the images I have used (as I said, both are hand-drawn, nothing generated):</p> <p><a href="https://i.stack.imgur.com/Aqq6B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Aqq6B.png" alt="Sample input for neural network X" /></a> <a href="https://i.stack.imgur.com/rr81g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rr81g.png" alt="Sample input for neural network P" /></a></p> <p>I have also run it with the image as a JPEG. All you need to do is change the file path for <code>imread</code>. OpenCV detects the format. If you don't want to or can't use OpenCV and still have trouble, I can expand on the answer. It might go beyond the scope of your actual question, though. Relevant documentation: <a href="https://docs.opencv.org/4.x/d6/d00/tutorial_py_root.html" rel="nofollow noreferrer">OpenCV Documentation</a>. Pillow and scikit-image would work very similarly.</p> <p>I noticed that the outputs produce values with high variation - many values are being printed with long scientific notation. It makes it hard to assess the output of the neural network. Therefor, when you're not using a softmax layer, you can also calculate the probabilities separately, as I did in <code>recognize</code> (see <a href="https://en.wikipedia.org/wiki/Softmax_function" rel="nofollow noreferrer">Wikipedia: Softmax</a> for the formula and more explanation). 
I'm mentioning it here because it can be a help troubleshooting such issues in the future and make it easier on other people trying to help you out.</p> <p>For the images above, it produces something like this, which shows that there is a high certainty about the category:</p> <pre><code>Image prediction: x Top 3: {'x': 100.0, 'a': 0.0, 'b': 0.0} Image prediction: p Top 3: {'p': 100.0, 'd': 2.6237523e-09, 'q': 7.537843e-12} </code></pre> <p>Why was the prediction always &quot;a&quot; in your case? Assuming you didn't do any other mistakes, I'd have to guess, but I think it's probably because the letter occupies a large amount of the area in the image, so an inverted image that had most areas filled in would resemble it most closely. Or, the inverted image of an &quot;a&quot; looked to the neural network most closely to the images of &quot;a&quot; it saw during training. It's a guess. But if you give a neural network something it never really saw during training, I think the best anyone can do is guess at the outcome. I would have expected it to be more randomly spread among the categories, probably, so there might be some other issue in your code, possibly with evaluating the prediction.</p> <p>Just out of curiosity, I have used two more images, which don't look like letters at all:</p> <p><a href="https://i.stack.imgur.com/fM3YP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fM3YP.png" alt="Nonsense" /></a> <a href="https://i.stack.imgur.com/KfjwB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KfjwB.jpg" alt="enter image description here" /></a></p> <p>The first image the neural network insists is an &quot;e&quot;:</p> <pre><code>Top 3: {'e': 99.99985, 's': 0.00014580016, 'c': 1.3610912e-06} </code></pre> <p>The second image it believes to be, with high certainty, an &quot;a&quot;:</p> <pre><code>Top 3: {'a': 100.0, 'h': 1.28807605e-08, 'r': 1.0681121e-10} </code></pre> <p>It might be that random images of that sort simply &quot;look like an a&quot; to your neural network. Also, it has been known that neural networks can, at times, be easily fooled and hone in on features that seem very counterintuitive: <a href="https://arxiv.org/abs/1710.08864" rel="nofollow noreferrer">Jiawei Su, Danilo Vasconcellos Vargas, and Sakurai Kouichi, “One Pixel Attack for Fooling Deep Neural Networks,” IEEE Transactions on Evolutionary Computation 23, no. 5 (October 2019): 828–41, https://doi.org/10.1109/TEVC.2019.2890858</a>.</p> <p>I think there is also a lesson to be learned about training neural networks in general: I had the expectation that, in a case of a classification problem as you are solving, which seems to have become almost like a canonical introductory problem in many machine learning courses, an input that does not clearly belong to any of the trained classes, even in a well-trained network, would manifest itself as predictions that are spread out over several classes, signifying the ambiguity of the input. But, as we can see here, an &quot;unknown&quot; input does not need to produce such results at all, apparently. 
Even such a case can produce results that seem to show a high certainty that the input belongs to a certain class, such as the apparent degree of &quot;certainty&quot; the neural network suggests it has that the nonsensical scribble is an &quot;e&quot;.</p> <p>Therefore, another conclusion can perhaps be drawn: if one wants to appropriately deal with inputs that do not belong to any of the trained categories, one must train the neural network for that purpose <em>explicitly</em>. By that I mean that one must add an additional class of <em>non-alphabetic</em> images and train it with nonsensical, miscellaneous images (such as the flower above), or probably even classes very close to letters, such as numbers and non-Latin writing symbols. It might be precisely the closeness of that &quot;miscellaneous category&quot; that could help the neural network get a clearer idea of what constitutes a letter. However, as we can see here, it seems insufficient to train a neural network on a set of target classes and then simply expect it to also give a useful prediction for inputs outside of those classes. Some people might feel that I am way overthinking and complicating the topic at this point, but I think it's important enough of an observation about neural networks that, at least for myself, it is well worth keeping in mind.</p> <h1>Preprocessing Images</h1> <p>From the exchange in the comments, it turns out that there is another aspect to this problem. The images I had drawn happened to work very well. However, when I increase the contrast, they are no longer being recognized. I will first go into how I have done so. Since it is a common function in machine learning, I had the somewhat unconventional idea to apply a scaled sigmoid function, so as to keep the values in the range of 0-255, retain some of the relative shades, but turn up the contrast. More on that here: <a href="https://en.wikipedia.org/wiki/Sigmoid_function" rel="nofollow noreferrer">Wikipedia: Sigmoid</a>. I'm saying &quot;unconventional&quot; because I don't think it's something you usually use for images, but since this function is so ubiquitous in machine learning, specifically as an activation function, I thought it might be fun to repurpose it, even though the performance is probably terrible compared to algorithms that are more common for image processing.</p> <p><em>(Aside: I had done almost the exact same thing for audio processing once, which, when applied to the volume, ended up functioning like a compressor. And that's sort of what we're doing: we're &quot;compressing&quot; the grayscale ranges here, without completely eliminating the transitions. This, I believe, ended up really pinpointing the issue with this neural network, because it's a modification that seems more specific, but proceeds to throw off the neural network almost right away. Adjust the parameters in this &quot;generalized sigmoid&quot; function a bit, if you like, to make it smoother (That means: less steep, to retain more of the transitions. Play around with the Desmos graph and look at the PyPlot previews, too.)
and get a better feel for at what point precisely the neural network sort of gives up and says &quot;I don't recognize this anymore.&quot; People more graphically inclined might also be reminded of the <code>smoothstep</code> function often used to adjust harshness of edges in shaders <a href="https://registry.khronos.org/OpenGL-Refpages/gl4/html/smoothstep.xhtml" rel="nofollow noreferrer">GLSL: smoothstep</a>).</em></p> <p><a href="https://i.stack.imgur.com/7O3Fv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7O3Fv.png" alt="Desmos Sigmoid" /></a> <a href="https://www.desmos.com/calculator/gi9mo87k9a" rel="nofollow noreferrer">Desmos Graph</a></p> <p>Formula (s = 25, b = 50 appear to give good results):</p> <p><a href="https://i.stack.imgur.com/EoKc0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EoKc0.jpg" alt="Formula" /></a></p> <p>Then, I preprocess the images with code like this:</p> <pre><code>import matplotlib.pyplot as plt def preprocess(before): s, b = 25, 50 f = lambda x: np.exp(s*(x/255 - s/b)) / (1 + np.exp(s*(x/255 - s/b))) after = f(before) fig, ax = plt.subplots(1,2) ax[0].imshow(before, cmap='gray') ax[1].imshow(after, cmap='gray') plt.show() return after </code></pre> <p>Call the above in <code>load_image</code>, before reshaping it. It will show you the result, side-by-side, before feeding the image to the neural network. In general, not just in machine learning but also statistics, it appears to be good practice to get an idea of the data, to preview and sanity check it, before further working with it. This might have also given you a hint early on about what was wrong with your input images.</p> <p>Here is an example, using the images from above, of what these look like before and after preprocessing:</p> <p><a href="https://i.stack.imgur.com/7QsFQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7QsFQ.jpg" alt="High Contrast 1" /></a> <a href="https://i.stack.imgur.com/s7WIL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s7WIL.jpg" alt="High Contrast 2" /></a></p> <p>Considering it was such an ad-hoc idea and somewhat unconventional, it seems to work quite well. However, here are the new predictions for these images, after processing:</p> <pre><code>Image prediction: l Top 3: {'l': 11.176592, 'y': 9.341431, 'x': 7.692416} Image prediction: q Top 3: {'q': 11.703363, 'p': 9.119178, 'l': 7.6522427} </code></pre> <p>It doesn't recognize those images at all anymore, which confirms some of the issues you might have been having. Your neural network has &quot;learned&quot; the grey, fuzzy transitions around the letters to be part of the features it considers. I had used this site to draw the images: <a href="https://jspaint.app/" rel="nofollow noreferrer">JSPaint</a>. Maybe it was, in part, luck or intuition that I used the paintbrush and not the pen tool, as I would have probably encountered the same issues you are having, since it produces no transitions from black to white. That seemed natural to me, because it seemed to best fit the &quot;feel&quot; of your training inputs, even if it seemed like a trivial, negligible detail at first. Luck, experience - I don't know. 
But what you therefore want to do is use a tool that leaves &quot;fuzzy borders&quot;, or write yet another preprocessing step that does the reverse of what I have just demonstrated, in order to show the negative case, and add blur to the borders.</p> <h1>Data Augmentation</h1> <p>I thought I would have been long since done with this question, but it really goes to show how involved dealing with neural networks can quickly get. The core of the problem of this question really appears to end up touching on some of the fundamentals of machine learning. I will state plainly what I think this example ended up demonstrating, quite illustratively, maybe more for myself than for most other readers:</p> <blockquote> <p>Your neural network only learns what you teach it.</p> </blockquote> <p>The explanation might simply be, though there are probably important exceptions to this, that you didn't teach your neural network to recognize letters with sharp borders, so it didn't learn how to recognize them. I'm not a great machine learning expert, so probably none of this is news to anyone more experienced. But this reminded me of a technique in machine learning that I think could be applied in this scenario quite well, which is &quot;data augmentation&quot;:</p> <blockquote> <p>Data augmentation in data analysis are techniques used to increase the amount of data by adding slightly modified copies of already existing data or newly created synthetic data from existing data. It acts as a regularizer and helps reduce overfitting when training a machine learning model.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Data_augmentation" rel="nofollow noreferrer">Wikipedia: Data Augmentation</a></p> <p><a href="https://arxiv.org/abs/1712.04621" rel="nofollow noreferrer">Perez, Luis, and Jason Wang. “The Effectiveness of Data Augmentation in Image Classification Using Deep Learning.” arXiv, December 13, 2017. https://doi.org/10.48550/arXiv.1712.04621. </a></p> <p>The good news might be that I have given you everything you need to train your neural network further, without needing any additional data on top of the hundreds of megabytes of training data you are already loading from that CSV file. Use the contrast-enhancing preprocessing function above to create a variation of each of the training images during learning, so that the network learns to also handle such variations.</p> <ul> <li><p>Would another model architecture end up being less picky about such details?</p> </li> <li><p>Would different activation functions have handled these cases more flexibly, perhaps?</p> </li> </ul> <p>I don't know, but those seem like very interesting questions for machine learning in general.</p> <h1>Debugging Neural Networks</h1> <p>This answer has taken on dimensions I really did not intend, so I'm starting to feel the urge to apologize for adding on to it yet again, but this immediately leads one to wonder about a broader issue, one which has probably plagued the machine learning community (or at least someone with as humble experience in it as myself):</p> <p>How do you debug a neural network?</p> <p>So far, this was a bunch of trial and error, some luck, a little bit of intuition, but it feels like shooting in the dark sometimes when a neural network is not working.
This might be far from perfect, but one approach that seems to have been spreading online is to visualize which neurons activate for a given input, in order to get an idea of which areas in an image, or input more generally, influence the final prediction of a neural network most.</p> <p>For that, Keras already provides some functionality, by giving you access to the outputs of each model layer. As a reminder, the architecture of the model in question looks like this:</p> <pre><code>_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 200) 157000 dense_1 (Dense) (None, 150) 30150 dense_2 (Dense) (None, 100) 15100 dense_3 (Dense) (None, 50) 5050 dense_4 (Dense) (None, 26) 1326 ================================================================= Total params: 208,626 Trainable params: 208,626 Non-trainable params: 0 _________________________________________________________________ </code></pre> <p>You can get access to the activations of each layer by creating a new model that combines the outputs of each layer. Those we can plot. Now, it would be a lot easier if those were CNNs, and those might be more appropriate for an image, but that's fine. The author of the question wasn't comfortable with those yet, so let's go with what we have. With CNN layers we would naturally have a 2-dimensional shape to plot, but a dense layer of neurons is one-dimensional. What I like to do in scenarios like that, even though it's less than perfect, is to pad them up to the next larger square.</p> <pre><code>def trace(model, image): outputs = [layer.output for layer in model.layers] trace_model = keras.models.Model(inputs=model.input, outputs=outputs) p = trace_model.predict(image) fig, ax = plt.subplots(1, len(p)) for i, layer in enumerate(p): neurons = layer[0].shape[0] square = int(np.ceil(np.sqrt(neurons))) padding = square**2 - neurons activations = np.append(layer[0], [np.min(layer[0])]*padding).reshape((square,square)) ax[i].imshow(activations) plt.show() </code></pre> <p>As I said, this would be nicer with CNN layers, which is why most sources on the Internet related to this topic use those, so I thought suggesting something for dense layers might be useful.</p> <p>Here are the results, for the same images of the letter &quot;x&quot; and &quot;p&quot; from above:</p> <p><a href="https://i.stack.imgur.com/azmWz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/azmWz.jpg" alt="Activations for x" /></a> <a href="https://i.stack.imgur.com/aXVRP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aXVRP.jpg" alt="Activations for p" /></a></p> <p>We can see an image being plotted per figure, one for each layer of the neural network. This colormap is &quot;viridis&quot;, as far as I know the current default colormap for pyplot, where blue marks the lowest values and yellow the highest. You can see the padding at the end of the image for each layer, except where it happens to be a perfect square already (such as in the case of 100). There might be a better way to clearly delineate those.
In the case of &quot;p&quot;, the second image, one can make out the final classification, from the output of the final layer, as the brightest, most yellow dot is on the third line, fourth column (&quot;p&quot; is the 16th letter of the alphabet, 16 = 2x6+4, as the next higher square for 26 letters was 36, so it ends up in a 6x6 square).</p> <p>It's still somewhat difficult to get a clear answer for what's wrong or what's going on here, but it might be a useful start. Other instances, using CNN's, show a lot more clearly what kind of shapes trigger the neural network, but a variation of this technique could perhaps be adopted to dense layers as well. To make a careful attempt at interpreting these images, it does seem to possibly confirm that the neural network is very specific about the feature it learns about an image, as the singular bright, yellow spot in the first layer of both of these images might suggest. What one would more likely expect, ideally, is probably that the neural network considers more features, with similar weights, across the image, thus paying more attention to the overall shape of the letter. However, I am less sure about this and it's probably non-trivial to properly interpret these &quot;activation plots&quot;.</p>
2022-09-22 18:51:13.793000+00:00
2022-09-23 14:02:27.263000+00:00
2022-09-23 14:02:27.263000+00:00
null
73,817,788
<p><a href="https://www.kaggle.com/datasets/sachinpatel21/az-handwritten-alphabets-in-csv-format" rel="nofollow noreferrer">Link to the dataset in question</a></p> <p>Before I begin, few things that might be relevant:</p> <ul> <li>The input file format is JPEG. I convert them to <code>numpy</code> arrays using <code>matplotlib</code>'s <code>imread</code></li> <li>The RGB images are then reshaped and converted to grayscale images using <code>tensorflow</code>'s <code>image.resize</code> method and <code>image.rgb_to_grayscale</code> method respectively.</li> </ul> <p>This is my model:</p> <pre><code>model = Sequential( [ tf.keras.Input(shape=(784,),), Dense(200, activation= &quot;relu&quot;), Dense(150, activation= &quot;relu&quot;), Dense(100, activation= &quot;relu&quot;), Dense(50, activation= &quot;relu&quot;), Dense(26, activation= &quot;linear&quot;) ] ) </code></pre> <p>The neural network scores a 98.9% accuracy on the dataset. However, when I try to use an image of my own, it always classifies the input as 'A'.</p> <p>I even went to the extent of inverting the colors of the image (black to white and vice versa; the original grayscale image had the alphabet in black and the rest in white).</p> <pre><code>img = plt.imread(&quot;20220922_194823.jpg&quot;) img = tf.image.rgb_to_grayscale(img) plt.imshow(img, cmap=&quot;gray&quot;) </code></pre> <p>Which displays <a href="https://i.stack.imgur.com/ICXSj.png" rel="nofollow noreferrer">this image.</a></p> <p><code>img.shape</code> returns <code>TensorShape([675, 637, 1])</code></p> <pre><code>img = 1 - img img = tf.image.resize(img, [28,28]).numpy() plt.imshow(img, cmap=&quot;gray&quot;) </code></pre> <p><a href="https://i.stack.imgur.com/xddtz.png" rel="nofollow noreferrer">This</a> is the result of <code>img = 1-img</code></p> <p>I suspect that the neural network keeps classifying the input image as 'A' because of some pixels that aren't completely black/white.</p> <p>But why does it do that? How do I avoid this problem in the future?</p> <p><a href="https://www.kaggle.com/code/rahulsundarram/handwrittencharrecognition" rel="nofollow noreferrer">Here's the notebook.</a></p>
2022-09-22 16:12:28.477000+00:00
2022-09-23 14:02:27.263000+00:00
2022-09-23 13:00:19.900000+00:00
python|tensorflow|neural-network
['https://i.stack.imgur.com/Aqq6B.png', 'https://i.stack.imgur.com/rr81g.png', 'https://docs.opencv.org/4.x/d6/d00/tutorial_py_root.html', 'https://en.wikipedia.org/wiki/Softmax_function', 'https://i.stack.imgur.com/fM3YP.png', 'https://i.stack.imgur.com/KfjwB.jpg', 'https://arxiv.org/abs/1710.08864', 'https://en.wikipedia.org/wiki/Sigmoid_function', 'https://registry.khronos.org/OpenGL-Refpages/gl4/html/smoothstep.xhtml', 'https://i.stack.imgur.com/7O3Fv.png', 'https://www.desmos.com/calculator/gi9mo87k9a', 'https://i.stack.imgur.com/EoKc0.jpg', 'https://i.stack.imgur.com/7QsFQ.jpg', 'https://i.stack.imgur.com/s7WIL.jpg', 'https://jspaint.app/', 'https://en.wikipedia.org/wiki/Data_augmentation', 'https://arxiv.org/abs/1712.04621', 'https://i.stack.imgur.com/azmWz.jpg', 'https://i.stack.imgur.com/aXVRP.jpg']
19
62,551,770
<ul> <li>The difference between YoloV4 and YoloV3 is the backbone. YoloV4 has a CSPDarknet53 backbone, whilst YoloV3 has Darknet53. See <a href="https://arxiv.org/pdf/2004.10934.pdf" rel="noreferrer">https://arxiv.org/pdf/2004.10934.pdf</a>.</li> <li>Also, YoloV4 is not officially supported by OpenVINO. However, you can still test and validate YoloV4 on your end with some workarounds. There is one way for now to run YoloV4 through OpenCV, which will build the network using the nGraph API and then pass it to the Inference Engine. See <a href="https://github.com/opencv/opencv/pull/17185" rel="noreferrer">https://github.com/opencv/opencv/pull/17185</a>.</li> <li>The key problem is the Mish activation function - there is no optimized implementation yet, which is why we have to implement it by definition with tanh and exponential functions (a definition-based sketch is shown below). Unfortunately, a one-to-one topology comparison shows significant performance degradation. The performance results are also available in the GitHub link above.</li> </ul>
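<p>For reference, here is a minimal NumPy sketch (an illustration, not the OpenVINO/OpenCV code) of Mish written out by definition, i.e. <code>mish(x) = x * tanh(ln(1 + exp(x)))</code>; it is only meant to show why the fallback needs several elementwise operations instead of a single fused primitive:</p> <pre><code>import numpy as np

def softplus(x):
    # ln(1 + exp(x)), written with logaddexp to stay numerically stable
    return np.logaddexp(0.0, x)

def mish(x):
    # Mish by definition: x * tanh(softplus(x))
    return x * np.tanh(softplus(x))

x = np.linspace(-5, 5, 5)
print(mish(x))
</code></pre>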
2020-06-24 09:23:53.967000+00:00
2020-06-24 09:23:53.967000+00:00
null
null
62,129,609
<p>I am currently working with the YoloV3-tiny. Repository: <a href="https://github.com/AlexeyAB/darknet" rel="nofollow noreferrer">https://github.com/AlexeyAB/darknet</a></p> <p>To import the network into C++ project I use OpenVINO-Toolkit. In more detail I use the following procedure to convert the network:<br /> <a href="https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html" rel="nofollow noreferrer">Converting YOLO* Models to the Intermediate Representation (IR)</a></p> <p>This procedure carries out a conversion and an optimization to proceed with the inference.</p> <p>Now, I would like to try the YoloV4 because it seems to be more effective for the purpose of the project. The problem is that OpenVINO Toolkit does not yet support this version and does not report the .json (file needed for optimization) file relative to version 4 but only up to version 3.</p> <p>What has changed in terms of structure between version 3 and version 4 of the Yolo?<br /> Can I hopefully hope that the conversion of the YoloV3-tiny (or YoloV3) is the same as the YoloV4?<br /> Is the YoloV4 much slower than the YoloV3-tiny using only the CPU for inference?<br /> When will the YoloV4-tiny be available?<br /> Does anyone have information about it?</p>
2020-06-01 09:51:41.530000+00:00
2022-03-18 08:05:13.093000+00:00
2021-12-22 08:20:14.080000+00:00
yolo|openvino
['https://arxiv.org/pdf/2004.10934.pdf', 'https://github.com/opencv/opencv/pull/17185']
2
43,595,015
<p>My NMT model has 2 layers, 512 hidden units. I train with maximum sentence length = 50, batch size = 32, and see similar speed between feed_dict and queue, about 2400-2500 target words per second (I use this metric for speed based on this <a href="https://arxiv.org/abs/1508.04025" rel="noreferrer">paper</a>).</p> <p>I find feed_dict very intuitive and easy to use. Queue is difficult. Using queue, you have to:</p> <p>1/ Convert your data into tfrecords. I actually gotta google a bit to understand how to convert my seq2seq data to tfrecords because the docs is not very helpful. </p> <p>2/ Decode your data from tfrecords. You'll find functions used to generate tfrecords and decode it don't intuitively match. For example, if each of my training examples has 3 sequences (just 3 lists of integers) <code>src_input, trg_input, trg_target</code> and I want to record the length of the <code>src_input</code> too (some of its elements might be PADDINGs, so don't count), here is how to generate tfrecord from each example: </p> <pre><code>def _make_example(src_input, src_seq_length, trg_input, trg_seq_length, trg_target, target_weight): context = tf.train.Features( feature={ 'src_seq_length': int64_feature(src_seq_length) }) feature_lists = tf.train.FeatureLists( feature_list={ 'src_input': int64_featurelist(src_input), 'trg_input': int64_featurelist(trg_input), 'trg_target': int64_featurelist(trg_target) }) return tf.train.SequenceExample(context=context, feature_lists=feature_lists) </code></pre> <p>And here's how to decode it: </p> <pre><code>def _read_and_decode(filename_queue): reader = tf.TFRecordReader(options=self.tfrecord_option) _, serialized_ex = reader.read(filename_queue) context_features = { 'src_seq_length': tf.FixedLenFeature([], dtype=tf.int64) } sequence_features = { 'src_input': tf.FixedLenSequenceFeature([], dtype=tf.int64), 'trg_input': tf.FixedLenSequenceFeature([], dtype=tf.int64), 'trg_target': tf.FixedLenSequenceFeature([], dtype=tf.int64) } context, sequences = tf.parse_single_sequence_example( serialized_ex, context_features=context_features, sequence_features=sequence_features) src_seq_length = tf.cast(context['src_seq_length'], tf.int32) src_input = tf.cast(sequences['src_input'], tf.int32) trg_input = tf.cast(sequences['trg_input'], tf.int32) trg_target = tf.cast(sequences['trg_target'], tf.int32) return src_input, src_seq_length, trg_input, trg_target </code></pre> <p>And to generate each tfrecord feature/featurelist: </p> <pre><code>def int64_feature(value): return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) def int64_featurelist(l): feature = [tf.train.Feature(int64_list=tf.train.Int64List(value=[x])) for x in l] return tf.train.FeatureList(feature=feature) </code></pre> <p><a href="https://i.stack.imgur.com/yDxH8.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/yDxH8.gif" alt="http://gph.is/2cg7iKP"></a></p> <p>3/ Train/dev setup. I believe it's a common practice to periodically train your model for some time, then evaluate on dev set, then repeat. I don't know how to do this with queues. With feed_dict, you just build two graphs with shared parameters under the same session, one for train and one for dev. When you evaluate on dev set, just feed dev data to dev graph, that's it. But for queue, output from queue is part of the graph itself. To run queue, you have to start the queue runner, create a coordinator, use this coordinator to manage the queue. When it's done, the queue is close!!!. 
Currently, I have no idea how best to write my code to fit the train/dev setup with queues, except by opening a new session and building a new graph for dev each time I evaluate. The same issue was raised <a href="https://github.com/tensorflow/tensorflow/issues/7902" rel="noreferrer">here</a>, and you can google for similar questions on Stack Overflow. </p> <p>However, a lot of people say that queues are faster than feed_dict. My guess is that queues are beneficial if you train in a distributed manner. But for me, I often train on 1 GPU only, and so far I'm not impressed with queues at all. Well, just my guess. </p>
2017-04-24 18:11:13.673000+00:00
2017-04-24 18:11:13.673000+00:00
null
null
38,416,824
<p>I've been using feed_dict to directly feed a <code>placeholder</code> while practicing coding on small problems like MNIST. TensorFlow also supports feeding data using <code>queue</code> and <code>queue runner</code>, and it needs some effort to learn. </p> <p>Has anybody done a comparison of these two methods and measured the performance? Is it worth spending time to learn to use queues to feed data? </p> <p>I guess using queues helps not only with performance, but also with cleaner code, whatever that means. Maybe the code for one dataset can be easily reused for another dataset (once I convert the data into TFRecord)? </p> <p>However, <a href="https://indico.io/blog/tensorflow-data-input-part2-extensions/" rel="noreferrer">this post</a> seems to say queues can be slower than the feed_dict method. Is it still true now? Why should I use queues if they're slower and harder to code? </p> <p>Thanks for your inputs. </p>
2016-07-17 00:23:04.373000+00:00
2017-07-17 00:18:17.193000+00:00
null
performance|tensorflow
['https://arxiv.org/abs/1508.04025', 'https://i.stack.imgur.com/yDxH8.gif', 'https://github.com/tensorflow/tensorflow/issues/7902']
3
48,662,985
<p>Check the footnote on page 2 of this: <a href="http://arxiv.org/pdf/1402.3722v1.pdf" rel="nofollow noreferrer">http://arxiv.org/pdf/1402.3722v1.pdf</a></p> <p>It gives quite a clear intuition for the problem.</p> <p>But you can also use only one vector to represent a word. Check this (Stanford CS 224n): <a href="https://youtu.be/ERibwqs9p38?t=2064" rel="nofollow noreferrer">https://youtu.be/ERibwqs9p38?t=2064</a></p> <p>I am not sure how that would be implemented (nor does the video explain it).</p>
2018-02-07 11:44:03.773000+00:00
2018-02-07 11:44:03.773000+00:00
null
null
29,381,505
<p>I am trying to understand why word2vec's skipgram model has 2 representations for each word (the hidden representation which is the word embedding) and the output representation (also called context word embedding) . Is this just for generality where the context can be anything (not just words) or is there a more fundamental reason </p>
2015-04-01 01:44:41.350000+00:00
2021-11-24 03:37:03.290000+00:00
null
word2vec
['http://arxiv.org/pdf/1402.3722v1.pdf', 'https://youtu.be/ERibwqs9p38?t=2064']
2
30,479,316
<p>I recommend you read this article about Word2Vec: <a href="http://arxiv.org/pdf/1402.3722v1.pdf" rel="noreferrer">http://arxiv.org/pdf/1402.3722v1.pdf</a></p> <p>They give an intuition about why there are two representations in a footnote: it is not likely that a word appears in its own context, so you would want to minimize the probability p(w|w). But if you use the same vectors for w as a context word as for w as a center word, you cannot minimize p(w|w) (computed via the dot product) if you are to keep the word embeddings on the unit circle. </p> <p>But it is just an intuition; I don't know if there is any clear justification for this...</p> <p>IMHO, the real reason why you use different representations is that you manipulate entities of a different nature. "dog" as a context is not to be considered the same as "dog" as a center word because they are not. You basically manipulate big matrices of occurrences (word, context), trying to maximize the probability of the pairs that actually happen. Theoretically you could use bigrams as contexts, trying to maximize, for instance, the probability of (word="for", context="to maximize"), and you would assign a vector representation to "to maximize". We don't do this because there would be too many representations to compute, and we would have a really sparse matrix, but I think the idea is there: the fact that we use "1-grams" as context is just a particular case of all the kinds of context we could use.</p> <p>That's how I see it, and if it's wrong please correct me!</p>
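<p>A tiny NumPy illustration of the footnote's point (a sketch, not taken from the paper): if the input and output vectors are tied and kept at unit norm, a word's dot product with itself is always the maximal value 1, so the softmax probability p(w|w) cannot be pushed down; with a separate context matrix, training is free to move that mass elsewhere:</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
V, d = 5, 8                       # vocabulary size, embedding dimension

def unit_rows(m):
    return m / np.linalg.norm(m, axis=1, keepdims=True)

W_in = unit_rows(rng.normal(size=(V, d)))    # center-word vectors on the unit sphere

def p_self(center, context):
    # softmax over all context vectors; probability of word 0 given itself
    scores = context @ center[0]
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[0]

# Tied vectors: word 0's score with itself is exactly 1, the maximum possible,
# so p(w|w) stays relatively large no matter what.
print(p_self(W_in[[0]], W_in))

# Separate context vectors: the context vector of word 0 can point away from
# its input vector (here, the opposite direction), driving p(w|w) down.
W_out = unit_rows(rng.normal(size=(V, d)))
W_out[0] = -W_in[0]
print(p_self(W_in[[0]], W_out))
</code></pre>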
2015-05-27 10:05:37.957000+00:00
2015-05-27 10:05:37.957000+00:00
null
null
29,381,505
<p>I am trying to understand why word2vec's skipgram model has 2 representations for each word (the hidden representation which is the word embedding) and the output representation (also called context word embedding) . Is this just for generality where the context can be anything (not just words) or is there a more fundamental reason </p>
2015-04-01 01:44:41.350000+00:00
2021-11-24 03:37:03.290000+00:00
null
word2vec
['http://arxiv.org/pdf/1402.3722v1.pdf']
1
62,760,069
<p>I believe you are talking about scaling learning horizontally, as in training multiple agents in parallel.</p> <p><a href="https://arxiv.org/abs/1602.01783" rel="nofollow noreferrer">A3C</a> is one algorithm that does this by training multiple agents in parallel and independently of each other. Each agent has its own environment, which allows it to gain different experience than the rest of the agents, ultimately increasing the breadth of your agents' collective experience. Eventually each agent updates a shared network asynchronously, and you use this network to drive your main agent.</p> <p>You mentioned that you wanted to use the same environment for all parallel agents. I can think of this in two ways:</p> <ol> <li><p>If you are talking about a shared environment among agents, then this could possibly speed things up, however you are likely not going to gain much in terms of performance. You are also very likely to face issues in terms of episode completion - if multiple agents are taking steps simultaneously then your transitions will be a mess to say the least. The complexity cost is high and the benefit is negligible.</p> </li> <li><p>If you are talking about cloning the same environment for each agent then you end up both gaining speed and a broader experience, which translates to performance. This is probably the sane thing to do (a rough sketch of this setup follows below).</p> </li> </ol>
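<p>As a rough, hedged sketch of option 2 — each worker holding its own clone of the environment and shipping its experience to a central learner — here is a minimal Python multiprocessing outline. The <code>make_env</code>, <code>select_action</code> and <code>update_policy</code> names are placeholders for whatever environment and learning code you already have, not a real API:</p> <pre><code>import multiprocessing as mp

# make_env / select_action / update_policy are placeholders for your own code.

def worker(worker_id, queue, episodes):
    env = make_env()                      # each worker gets its own environment clone
    for _ in range(episodes):
        state, done, transitions = env.reset(), False, []
        while not done:
            action = select_action(state)            # current (possibly stale) policy
            next_state, reward, done = env.step(action)
            transitions.append((state, action, reward, next_state, done))
            state = next_state
        queue.put((worker_id, transitions))          # ship experience to the learner

if __name__ == "__main__":
    queue = mp.Queue()
    workers = [mp.Process(target=worker, args=(i, queue, 100)) for i in range(8)]
    for w in workers:
        w.start()

    # Central learner: consume experience as it arrives and update the shared policy.
    for _ in range(8 * 100):
        _, transitions = queue.get()
        update_policy(transitions)

    for w in workers:
        w.join()
</code></pre>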
2020-07-06 16:10:02.613000+00:00
2020-07-06 16:10:02.613000+00:00
null
null
62,748,306
<p>I got a question regarding reinforcement learning. let's say we have a robot that is able to adapt to changing environments. Similar to this paper <a href="https://arxiv.org/pdf/2004.10190.pdf" rel="nofollow noreferrer">1</a>. When there is a change in the environment[light dimming], the robot's performance drops and it needs to explore its new environment by collecting data and running the Q-algorithm again to update its policy to be able to &quot;adapt&quot;. The collection of new data and updating of the policy takes about 4/5hrs. I was wondering if I have an army of these robots in the same room, undergoing the same environmental changes, can the data collection be sped up so that the policy can be updated more quickly? so that the policy can be updated in under 1 hour or so, allowing the performance of the robots to increase?</p>
2020-07-06 02:08:01.353000+00:00
2020-07-06 16:10:02.613000+00:00
null
reinforcement-learning|robotics
['https://arxiv.org/abs/1602.01783']
1
49,389,114
<p>A softmax activation won't do the trick, I'm afraid; if you have an infinite number of combinations, or even a finite number of combinations that do not already appear in your data, there is no way to turn this into a multi-class classification problem (or if you do, you'll lose generality).</p> <p>The only way forward I can think of is a recurrent model employing variational encoding. To begin with, you have a lot of annotated data, which is good news; a recurrent network fed with a sequence <strong>X</strong> (10,2,) will definitely be able to predict a sequence <strong>Y</strong> (6,2,). But since you want not just one but rather <em>all probable</em> sequences, this won't suffice. Your implicit assumption here is that there is some probability space hidden behind your sequences, which affects how they play out over time; so to model the sequences properly, you need to model that latent probability space. A Variational Auto-Encoder (<a href="https://arxiv.org/abs/1312.6114" rel="nofollow noreferrer">VAE</a>) does just that; it learns the latent space, so that during inference the output prediction depends on sampling over that latent space. Multiple predictions over the same input can then result in different outputs, meaning that you can finally sample your predictions to empirically approximate the distribution of potential outputs. </p> <p>Unfortunately, VAEs can't really be explained within a single paragraph on Stack Overflow, and even if they could I wouldn't be the most qualified person to attempt it. Try searching the web for LSTM-VAE and arm yourself with patience; you'll probably need to do some studying but it's definitely worth it. It might also be a good idea to look into <a href="http://pyro.ai/" rel="nofollow noreferrer">Pyro</a> or <a href="http://edwardlib.org/" rel="nofollow noreferrer">Edward</a>, which are probabilistic network libraries for Python, better suited to the task at hand than Keras.</p>
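<p>For a flavour of the key ingredient — the stochastic sampling step that makes repeated predictions over the same input differ — here is the usual reparameterization-trick snippet as it appears in typical Keras VAE examples (a sketch only; <code>z_mean</code> and <code>z_log_var</code> are assumed to be the outputs of your encoder's Dense layers):</p> <pre><code>from keras import backend as K
from keras.layers import Lambda

def sampling(args):
    """Draw z = mean + sigma * epsilon, with epsilon ~ N(0, 1)."""
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=K.shape(z_mean))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon

# z becomes a fresh random latent code every time the model is run,
# which is what lets you sample many plausible output sequences.
z = Lambda(sampling)([z_mean, z_log_var])
</code></pre>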
2018-03-20 16:10:49.137000+00:00
2018-03-20 16:16:38.853000+00:00
2018-03-20 16:16:38.853000+00:00
null
49,386,548
<p>I'm trying to predict sequences of 2D coordinates. But I don't want only the most probable future path but all the most probable paths to visualize it in a grid map. For this I have traning data consisting of 40000 sequences. Each sequence consists of 10 2D coordinate pairs as input and 6 2D coordinate pairs as labels. All the coordinates are in a fixed value range. What would be my first step to predict all the probable paths? To get all probable paths I have to apply a softmax in the end, where each cell in the grid is one class right? But how to process the data to reflect this grid like structure? Any ideas?</p>
2018-03-20 14:11:24.100000+00:00
2018-03-20 16:16:38.853000+00:00
null
tensorflow|machine-learning|keras
['https://arxiv.org/abs/1312.6114', 'http://pyro.ai/', 'http://edwardlib.org/']
3
52,036,334
<p>At first, following the suggestion given in <a href="https://arxiv.org/pdf/1411.1784.pdf" rel="noreferrer">Conditional Generative Adversarial Nets</a>, you have to define a second input. Then just concatenate the two input vectors and process the concatenated vector.</p> <pre><code>def generator_model_v2(): input_image = Input((IN_CH, img_cols, img_rows)) input_conditional = Input((n_classes,)) e0 = Flatten()(input_image) e1 = Concatenate()([e0, input_conditional]) e2 = BatchNormalization(mode=0)(e1) e3 = BatchNormalization(mode=0)(e2) e4 = Dense(1024, activation="relu")(e3) e5 = BatchNormalization(mode=0)(e4) e6 = Dense(512, activation="relu")(e5) e7 = BatchNormalization(mode=0)(e6) e8 = Dense(512, activation="relu")(e7) e9 = BatchNormalization(mode=0)(e8) e10 = Dense(IN_CH * img_cols * img_rows, activation="relu")(e9) e11 = Reshape((3, 28, 28))(e10) e12 = BatchNormalization(mode=0)(e11) e13 = Activation('tanh')(e12) model = Model(input=[input_image, input_conditional], output=e13) return model </code></pre> <p>Then, you need to pass the class labels to the network during training as well, with both inputs supplied as a list:</p> <pre><code>classifier.train_on_batch([image_batch, class_batch], label_batch) </code></pre>
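<p>One detail worth spelling out (a sketch, assuming <code>class_batch</code> holds integer class ids as in the training loop above): the conditional input of size <code>n_classes</code> is usually fed with one-hot encoded labels, which you can prepare like this:</p> <pre><code>from keras.utils import to_categorical

# class_batch holds integer class ids, e.g. [3, 7, 1, ...]
class_batch_one_hot = to_categorical(class_batch, num_classes=n_classes)

# A two-input model always takes its inputs as a list, in the order they
# were declared in Model(input=[input_image, input_conditional], ...):
generated_images = generator.predict([image_batch, class_batch_one_hot])
</code></pre>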
2018-08-27 09:29:59.350000+00:00
2018-08-27 09:29:59.350000+00:00
null
null
51,989,867
<p>I want to use condition GANs with the purpose of generated images for one domain (noted as <code>domain A</code>) and by having input images from a second domain (noted as <code>domain B</code>) and the class information as well. Both domains are linked with the same label information (every image of domain A is linked to an image to domain B and a specific label). My generator so far in Keras is the following:</p> <pre><code>def generator_model_v2(): global BATCH_SIZE inputs = Input((IN_CH, img_cols, img_rows)) e1 = BatchNormalization(mode=0)(inputs) e2 = Flatten()(e1) e3 = BatchNormalization(mode=0)(e2) e4 = Dense(1024, activation="relu")(e3) e5 = BatchNormalization(mode=0)(e4) e6 = Dense(512, activation="relu")(e5) e7 = BatchNormalization(mode=0)(e6) e8 = Dense(512, activation="relu")(e7) e9 = BatchNormalization(mode=0)(e8) e10 = Dense(IN_CH * img_cols *img_rows, activation="relu")(e9) e11 = Reshape((3, 28, 28))(e10) e12 = BatchNormalization(mode=0)(e11) e13 = Activation('tanh')(e12) model = Model(input=inputs, output=e13) return model </code></pre> <p>So far my generator takes as input the images from the <code>domain A</code> (and the scope to output images from the <code>domain B</code>). I want somehow to input also the information of the class for the input domain A with the scope to produce images of the same class for the domain B. How can I add the label information after the flattening. So instead of having input size <code>1x1024</code> to have <code>1x1025</code> for example. Can I use a second Input for the class information in the Generator. And if yes how can I call then the generator from the training procedure of the GANs?</p> <p>The training procedure:</p> <pre><code>discriminator_and_classifier_on_generator = generator_containing_discriminator_and_classifier( generator, discriminator, classifier) generator.compile(loss=generator_l1_loss, optimizer=g_optim) discriminator_and_classifier_on_generator.compile( loss=[generator_l1_loss, discriminator_on_generator_loss, "categorical_crossentropy"], optimizer="rmsprop") discriminator.compile(loss=discriminator_loss, optimizer=d_optim) # rmsprop classifier.compile(loss="categorical_crossentropy", optimizer=c_optim) for epoch in range(30): for index in range(int(X_train.shape[0] / BATCH_SIZE)): image_batch = Y_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE] label_batch = LABEL_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE] # replace with your data here generated_images = generator.predict(X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE]) real_pairs = np.concatenate((X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], image_batch),axis=1) fake_pairs = np.concatenate((X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], generated_images), axis=1) X = np.concatenate((real_pairs, fake_pairs)) y = np.concatenate((np.ones((100, 1, 64, 64)), np.zeros((100, 1, 64, 64)))) d_loss = discriminator.train_on_batch(X, y) discriminator.trainable = False c_loss = classifier.train_on_batch(image_batch, label_batch) classifier.trainable = False g_loss = discriminator_and_classifier_on_generator.train_on_batch( X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], [image_batch, np.ones((100, 1, 64, 64)), label_batch]) discriminator.trainable = True classifier.trainable = True </code></pre> <p>The code is implementation of <a href="https://github.com/r0nn13/conditional-dcgan-keras" rel="noreferrer">conditional dcgans</a> (with the addition of a classifier over the discriminator). 
And the network's functions are:</p> <pre><code>def generator_containing_discriminator_and_classifier(generator, discriminator, classifier): inputs = Input((IN_CH, img_cols, img_rows)) x_generator = generator(inputs) merged = merge([inputs, x_generator], mode='concat', concat_axis=1) discriminator.trainable = False x_discriminator = discriminator(merged) classifier.trainable = False x_classifier = classifier(x_generator) model = Model(input=inputs, output=[x_generator, x_discriminator, x_classifier]) return model def generator_containing_discriminator(generator, discriminator): inputs = Input((IN_CH, img_cols, img_rows)) x_generator = generator(inputs) merged = merge([inputs, x_generator], mode='concat',concat_axis=1) discriminator.trainable = False x_discriminator = discriminator(merged) model = Model(input=inputs, output=[x_generator,x_discriminator]) return model </code></pre>
2018-08-23 15:58:01.423000+00:00
2018-08-31 10:00:09.813000+00:00
2018-08-31 10:00:09.813000+00:00
python|machine-learning|keras|artificial-intelligence|generative-adversarial-network
['https://arxiv.org/pdf/1411.1784.pdf']
1
43,746,962
<p>Yes, the two papers do not mention an explicit bound. However, the authors kindly provided me with their simulator and it has a <a href="https://github.com/ben-manes/caffeine/blob/master/simulator/src/main/resources/com/github/benmanes/caffeine/cache/simulator/parser/lirs/lirs.h#L8" rel="nofollow noreferrer">bound parameter</a>. Many implementations have had memory leaks (e.g. <a href="https://dev.clojure.org/jira/browse/CCACHE-32" rel="nofollow noreferrer">Clojure's</a>, <a href="https://issues.jboss.org/browse/ISPN-7171?_sscc=t" rel="nofollow noreferrer">Infinispan</a>).</p> <p>Unfortunately, naive pruning is expensive as it requires a long stack walk. Thomas Mueller's <a href="https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core-spi/src/main/java/org/apache/jackrabbit/oak/cache/CacheLIRS.java" rel="nofollow noreferrer">implementation</a> uses a secondary queue for non-resident entries. This adds a little more cost per entry, but significantly improves runtime performance in my benchmarks.</p> <p>Unfortunately, none of the implementations I came across matched the authors' simulator. This is because some details are easily missed (like warmup) or not mentioned (don't promote on correlated references). After debugging traces against their simulator, <a href="https://github.com/ben-manes/caffeine/blob/master/simulator/src/main/java/com/github/benmanes/caffeine/cache/simulator/policy/irr/LirsPolicy.java" rel="nofollow noreferrer">mine</a> is near perfect and includes Thomas' optimization. It was intended to be accurate and readable in case I adopted it for a cache.</p> <p>I chose <a href="http://arxiv.org/pdf/1512.00727.pdf" rel="nofollow noreferrer">TinyLFU</a> instead, after introducing an admission window to improve its performance in recency-skewed traces. The authors and I are experimenting with an adaptive window based on simple hill climbing to replace the static configuration. This <a href="http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html" rel="nofollow noreferrer">article</a>, <a href="https://docs.google.com/presentation/d/1NlDxyXsUG1qlVHMl4vsUUBQfAJ2c2NsFPNPr2qymIBs" rel="nofollow noreferrer">slides</a>, and <a href="https://github.com/ben-manes/caffeine/wiki/Efficiency" rel="nofollow noreferrer">simulations</a> should provide a brief introduction to the policy. A reader implemented a minimal <a href="https://github.com/mandreyel/w-tiny-lfu" rel="nofollow noreferrer">C++ port</a>.</p>
2017-05-02 20:47:42.557000+00:00
2017-05-02 20:47:42.557000+00:00
null
null
43,741,032
<p>I'm looking into implementing LIRS caching algorithm (as described in <a href="https://en.wikipedia.org/wiki/LIRS_caching_algorithm" rel="nofollow noreferrer">wikipedia</a> and <a href="http://web.cse.ohio-state.edu/hpcs/WWW/HTML/publications/papers/TR-02-6.pdf" rel="nofollow noreferrer">this paper</a>), but the sources are rather difficult to follow, leaving out certain cases from their descriptions. Referring to <a href="https://en.wikipedia.org/wiki/LIRS_caching_algorithm#Selecting_the_replacement_victim" rel="nofollow noreferrer">example (e) on wikipedia</a> where a previously unknown element is referenced, it appears the element is added as resident HIR, without any element being deleted from LIRS. This suggests I could keep referencing unique elements, and grow LIRS forever. Is this the case...? This seems bad, as the references could blow up the using application's memory. Am I missing something?</p> <p>Also, if anyone knows any interesting alternatives to LIRS that are well described, I'd love to know about them - doing some side programming to catch up on my C++, and caching is the topic I've been working on :)</p>
2017-05-02 14:53:02.170000+00:00
2022-09-08 02:07:59.930000+00:00
2017-05-02 15:31:10.970000+00:00
caching
['https://github.com/ben-manes/caffeine/blob/master/simulator/src/main/resources/com/github/benmanes/caffeine/cache/simulator/parser/lirs/lirs.h#L8', 'https://dev.clojure.org/jira/browse/CCACHE-32', 'https://issues.jboss.org/browse/ISPN-7171?_sscc=t', 'https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core-spi/src/main/java/org/apache/jackrabbit/oak/cache/CacheLIRS.java', 'https://github.com/ben-manes/caffeine/blob/master/simulator/src/main/java/com/github/benmanes/caffeine/cache/simulator/policy/irr/LirsPolicy.java', 'http://arxiv.org/pdf/1512.00727.pdf', 'http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html', 'https://docs.google.com/presentation/d/1NlDxyXsUG1qlVHMl4vsUUBQfAJ2c2NsFPNPr2qymIBs', 'https://github.com/ben-manes/caffeine/wiki/Efficiency', 'https://github.com/mandreyel/w-tiny-lfu']
10
60,712,539
<p>A good explainer (and a potential answer to your question on why it might not have worked on the undersampled classes) on SMOTE can be found in <a href="https://datascience.stackexchange.com/questions/27671/how-do-you-apply-smote-on-text-classification">this answer</a>. </p> <p>I think this issue can't be solved easily through off-the-shelf data augmentation strategies. One possibility might be to simply duplicate the example, but this would add no new information to your model. </p> <p>Here are a couple other strategies you could try as well:</p> <ol> <li>An embedding-based augmentation technique (similar theory to SMOTE but works better on text data) that's described in this <a href="https://www.aclweb.org/anthology/D15-1306.pdf" rel="nofollow noreferrer">2015 paper by William Wang and Diyi Yang</a>.</li> <li>A step further on #1 using contextualized word embeddings described here in this <a href="https://arxiv.org/pdf/1705.00440.pdf" rel="nofollow noreferrer">2017 paper by Marzieh Fadaee, Arianna Bisazza, and Christof Monz</a>. </li> <li>Use a synonym replacement library like WordNetAug. </li> </ol>
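<p>As a rough, hedged sketch of option 3 (synonym replacement), here is a minimal example using NLTK's WordNet directly rather than a dedicated augmentation library. The function name and the sample sentence are my own placeholders, and you would still need to generate several augmented copies per rare-class sample:</p> <pre><code>
# Naive synonym-replacement augmentation with NLTK's WordNet.
# Assumes nltk is installed and nltk.download('wordnet') has been run.
import random
from nltk.corpus import wordnet

def augment(sentence, n_replacements=1):
    words = sentence.split()
    # positions whose word has at least one WordNet synset
    candidates = [i for i, w in enumerate(words) if wordnet.synsets(w)]
    for i in random.sample(candidates, min(n_replacements, len(candidates))):
        lemmas = {l.name().replace('_', ' ')
                  for s in wordnet.synsets(words[i]) for l in s.lemmas()}
        lemmas.discard(words[i])
        if lemmas:
            words[i] = random.choice(sorted(lemmas))
    return ' '.join(words)

print(augment('the quick brown fox jumps over the lazy dog', n_replacements=2))
</code></pre>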
2020-03-16 20:04:50.150000+00:00
2020-03-16 20:04:50.150000+00:00
null
null
60,699,511
<p>I am trying to classify 10000 samples of text into 20 classes. 4 of the classes have just 1 sample each. I tried SMOTE to address this imbalance, but I am unable to generate new samples for classes that have only one record, though I could generate samples for classes with more than 1 sample. Any suggestions?</p>
2020-03-16 02:07:04.797000+00:00
2020-03-16 20:04:50.150000+00:00
null
machine-learning|nlp|data-science|text-classification|imbalanced-data
['https://datascience.stackexchange.com/questions/27671/how-do-you-apply-smote-on-text-classification', 'https://www.aclweb.org/anthology/D15-1306.pdf', 'https://arxiv.org/pdf/1705.00440.pdf']
3
69,627,137
<p>Neural machine translation models have a limited vocabulary. The reason is that you get the distribution over the target vocabulary tokens by multiplying the decoder's hidden state by a matrix that has one row for each vocabulary token. The paper that you mention uses a hidden state of 1000 dimensions. If you wanted to cover English reasonably, you would need a vocabulary of at least 200k tokens, which would mean 800MB only for this matrix.</p> <p>The paper that you mention describes a now-outdated solution from 2015 and tries to work out how to make the vocabulary as big as possible. However, increasing the vocabulary capacity did not appear to be the best solution because, with increasing vocabulary size, you add rarer and rarer words into the vocabulary and there is less and less training signal for the embeddings of these words, so the model eventually does not learn to use those words properly.</p> <p>State-of-the-art machine translation uses a segmentation into subwords that was <a href="https://aclanthology.org/P16-1162/" rel="nofollow noreferrer">introduced in 2016</a> with the BPE algorithm. In parallel, Google came up with an alternative solution named WordPiece for their <a href="https://arxiv.org/abs/1609.08144" rel="nofollow noreferrer">first production neural machine translation system</a>. Later, Google came up with an improved segmentation algorithm, <a href="https://aclanthology.org/D18-2012" rel="nofollow noreferrer">SentencePiece in 2018</a>.</p> <p>The main principle of a subword vocabulary is that frequent words remain intact, whereas rarer words get segmented into smaller units. Rare words are often proper names that do not really get translated. For languages with complex morphology, subword segmentation allows the models to learn how to create different forms of the same word.</p>
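<p>To make the subword idea concrete, here is a minimal, hedged sketch using the <code>sentencepiece</code> library (not the exact pipeline of any of the papers above). The corpus file name, vocabulary size and the example segmentation are placeholders; the actual splits depend entirely on your training data:</p> <pre><code>
# Train a small BPE subword model and segment a sentence with it.
import sentencepiece as spm

# 'corpus.txt' is a placeholder for your own training text.
spm.SentencePieceTrainer.train(
    '--input=corpus.txt --model_prefix=bpe_demo --vocab_size=4000 --model_type=bpe')

sp = spm.SentencePieceProcessor()
sp.load('bpe_demo.model')

# Frequent words typically stay intact; rare words are split into subword units,
# e.g. something like ['▁a', '▁girl', '▁in', '▁a', '▁je', 'an', '▁dress']
print(sp.encode_as_pieces('a girl in a jean dress'))
</code></pre>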
2021-10-19 08:05:23.720000+00:00
2021-10-19 08:05:23.720000+00:00
null
null
69,595,863
<p>When decoding / translating a test dataset after training on the base Transformer model (Vaswani et. al.), I sometimes see this token &quot;unk&quot; in the ouput.</p> <p>&quot;unk&quot; here refers to an unknown token, but my question is what is the reasoning behind that? Based on <a href="https://nlp.stanford.edu/pubs/acl15_nmt.pdf" rel="nofollow noreferrer">https://nlp.stanford.edu/pubs/acl15_nmt.pdf</a>, does it mean that the vocab I built for the training set does not contain the word present in the test set?</p> <p>For reference, I built the <code>Vocab</code> using <code>Spacy</code> <code>en_core_web_sm</code> and <code>de_core_news_sm</code> for a German to English translation task.</p> <p>Example output:</p> <pre><code>ground truth = ['a', 'girl', 'in', 'a', 'jean', 'dress', 'is', 'walking', 'along', 'a', 'raised', 'balance', 'beam', '.'] predicted = ['a', 'girl', 'in', 'a', '&lt;unk&gt;', 'costume', 'is', 'jumping', 'on', 'a', 'clothesline', '.', '&lt;eos&gt;'] </code></pre> <p>As you can see, the <em>jean</em> is &quot;unk&quot; here.</p>
2021-10-16 13:06:02.803000+00:00
2021-10-19 08:05:23.720000+00:00
null
python|transformer-model|machine-translation|opennmt
['https://aclanthology.org/P16-1162/', 'https://arxiv.org/abs/1609.08144', 'https://aclanthology.org/D18-2012']
3
59,449,632
<p>Using thushv89's answer, here is the full code for how I implemented <a href="https://arxiv.org/pdf/1903.01182.pdf" rel="nofollow noreferrer">COT</a> on LeNet from the referenced paper. The one trick is I am not actually flipping back and forth between the two objectives, instead there is just a random weight that flips <code>s</code>. </p> <pre><code># using tensorflow 2.0.0 and keras 2.3.1 import tensorflow.keras.backend as kb import tensorflow as tf from tensorflow.keras.layers import Conv2D, Input, Dense,Flatten,AveragePooling2D,GlobalAveragePooling2D from tensorflow.keras.models import Model from keras.datasets import mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() # Normalize data. x_train = x_train.astype('float32') / 255 x_test = x_test.astype('float32') / 255 #exapnd dims to fit chn format x_train = np.expand_dims(x_train,axis=3) x_test = np.expand_dims(x_test,axis=3) # Convert class vectors to binary class matrices. y_train = tf.keras.utils.to_categorical(y_train, 10) y_test = tf.keras.utils.to_categorical(y_test, 10) input_shape = x_train.shape[1:] x_in = Input((input_shape)) act = 'tanh' x = Conv2D(32, (5, 5), activation=act, padding='same',strides = 1)(x_in) x = AveragePooling2D((2, 2),strides = (2,2))(x) x = Conv2D(16, (5, 5), activation=act)(x) x = AveragePooling2D((2, 2),strides = (2,2))(x) conv_out = Flatten()(x) z = Dense(120,activation = act)(conv_out)#120 z = Dense(84,activation = act)(z)#84 last = Dense(10,activation = 'softmax')(z) model = Model(x_in,last) def loss(y_true,y_pred, axis=-1): s = kb.round(tf.random.uniform( (1,), minval=0, maxval=1, dtype=tf.dtypes.float32)) s_ = 1 - s y_pred = y_pred + 1e-8 yg = kb.max(y_pred,axis=1) yc = tf.math.logical_not(kb.cast(y_true, 'bool')) yp_c = tf.boolean_mask(y_pred, yc) ygc_ = 1/(1-yg+1e-8) ygc_ = kb.expand_dims(ygc_,axis=1) Px = yp_c*ygc_ +1e-8 COT = kb.mean(Px*kb.log(Px),axis=1) CE = -kb.mean(y_true*kb.log(y_pred),axis=1) L = s*CE +s_*(1/(10-1))*COT return L model.compile(loss=loss, optimizer='adam', metrics=['accuracy']) model.fit(x_train,y_train,epochs=20,batch_size = 128,validation_data= (x_test,y_test)) pred = model.predict(x_test) pred_label = np.argmax(pred,axis=1) label = np.argmax(y_test,axis=1) cor = (pred_label == label).sum() acc = print('acc:',cor/label.shape[0]) </code></pre>
2019-12-23 02:39:53.557000+00:00
2019-12-31 16:45:06.657000+00:00
2019-12-31 16:45:06.657000+00:00
null
59,445,874
<p>Below is a simple example in numpy of what I would like to do:</p> <pre><code>import numpy as np y_true = np.array([0,0,1]) y_pred = np.array([0.1,0.2,0.7]) yc = (1-y_true).astype('bool') desired = y_pred[yc] &gt;&gt;&gt; desired &gt;&gt;&gt; array([0.1, 0.2]) </code></pre> <p>So the prediction corresponding to the ground truth is 0.7, I want to operate on an array containing all the elements of y_pred, except for the ground truth element.</p> <p>I am unsure of how to make this work within Keras. Here is a working example of the problem in the loss function. Right now 'desired' isn't accomplishing anything, but that is what I need to work with:</p> <pre><code># using tensorflow 2.0.0 and keras 2.3.1 import tensorflow.keras.backend as K import tensorflow as tf from tensorflow.keras.layers import Input,Dense,Flatten from tensorflow.keras.models import Model from keras.datasets import mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() # Normalize data. x_train = x_train.astype('float32') / 255 x_test = x_test.astype('float32') / 255 # Convert class vectors to binary class matrices. y_train = tf.keras.utils.to_categorical(y_train, 10) y_test = tf.keras.utils.to_categorical(y_test, 10) input_shape = x_train.shape[1:] x_in = Input((input_shape)) x = Flatten()(x_in) x = Dense(256,'relu')(x) x = Dense(256,'relu')(x) x = Dense(256,'relu')(x) out = Dense(10,'softmax')(x) def loss(y_true,y_pred): yc = tf.math.logical_not(kb.cast(y_true, 'bool')) desired = tf.boolean_mask(y_pred,yc,axis = 1) #Remove and it runs CE = tf.keras.losses.categorical_crossentropy( y_true, y_pred) L = CE return L model = Model(x_in,out) model.compile('adam',loss = loss,metrics = ['accuracy']) model.fit(x_train,y_train) </code></pre> <p>I end up getting an error</p> <pre><code>ValueError: Shapes (10,) and (None, None) are incompatible </code></pre> <p>Where 10 is the number of categories. The end purpose is to implement this: <a href="https://github.com/henry8527/COT/blob/master/code/COT.py" rel="nofollow noreferrer">ComplementEntropy</a> in Keras, where my issue seems to be lines 26-28.</p>
2019-12-22 16:10:28.953000+00:00
2019-12-31 16:45:06.657000+00:00
2019-12-22 22:57:17.683000+00:00
python|tensorflow|keras|tensorflow2.0|loss-function
['https://arxiv.org/pdf/1903.01182.pdf']
1
66,531,130
<p>This is called &quot;abduction&quot;.</p> <p>For the view from philosophical logic, <em>Stanford Encyclopedia of Philosophy</em> offers this entry: <a href="https://plato.stanford.edu/entries/abduction/" rel="nofollow noreferrer">Abduction</a>.</p> <p>For the view from logic programming, <em>Wikipedia</em> offers this entry: <a href="https://en.wikipedia.org/wiki/Abductive_logic_programming" rel="nofollow noreferrer">Abductive Logic Programming</a>.</p> <p>A paper that uses Prolog and <a href="https://en.wikipedia.org/wiki/Constraint_Handling_Rules" rel="nofollow noreferrer">CHR</a> (Constraint Handling Rules) for Abductive reasoning:</p> <p><strong><em>Henning Christiansen: <a href="https://vision.unipv.it/IA2/aa2006-2007/Abductive%20Reasoning%20in%20Prolog%20and%20CHR.pdf" rel="nofollow noreferrer">Abductive reasoning in Prolog and CHR</a> (PDF): A short introduction for the KIIS course, Autumn 2005.</em></strong></p> <p>Christiansen refers to the book</p> <p><strong>Abduction and Induction: Essays on their Relation and Integration</strong>, edited by Peter A. Flach and Antonis Hadjiantonis (Kluwer Academic Publishers, April 2000), <a href="https://rads.stackoverflow.com/amzn/click/com/0792362500" rel="nofollow noreferrer" rel="nofollow noreferrer">Amazon Link</a>, <a href="https://www.researchgate.net/publication/321596989_Abduction_and_Induction_Essays_on_their_Relation_and_Integration/link/5e419ee3a6fdccd9659a13ae/download" rel="nofollow noreferrer">first chapter at researchgate</a></p> <p>and provides this introductory explainer:</p> <blockquote> <p><strong>Deduction</strong>, reasoning within the knowledge we have already, i.e.,from those facts we know and those rules and regularities of the world that we are familiar with. E.g., reasoning from causes to effects: <em>If you make a fire here, you will burn down the house.</em></p> </blockquote> <p>In Prolog, the language is structured so as to most naturally find the premise &quot;make a fire here&quot; if your goal happens to be &quot;burn the house down&quot;.</p> <blockquote> <p><strong>Induction</strong>, finding general rules from the regularities that we have experienced in the facts that we know; these rules can be used later for prediction: <em>Every time I made a fire in my living room, the house burnt down ... Aha, the next time I make a fire in my living room, the house will burn down too</em>.</p> </blockquote> <blockquote> <p><strong>Abduction</strong>, reasoning from observed results to the basic facts from which they follow, quite often it means from an observed effect to produce a qualified guess for a possible cause: <em>The house burnt down, perhaps my cousin has made a fire in the living room again.</em></p> </blockquote> <p>&quot;Abductive Logic Programming&quot; (ALP) is (used to be?) an active area of research.</p> <p>Here is a Sprinker Link with <a href="https://link.springer.com/search?query=abductive+logic+programming" rel="nofollow noreferrer">search result</a>.</p> <p>ALP is a common problem in commonsense reasoning and planning. Examples that come to mind:</p> <ul> <li><a href="https://arxiv.org/abs/0906.1182" rel="nofollow noreferrer">The CIFF Proof Procedure for Abductive Logic Programming with Constraints: Theory, Implementation and Experiments</a></li> <li>Robert Kowalski, Fariba Sadri et al. have worked on &quot;LPS&quot; (<a href="http://lps.doc.ic.ac.uk/" rel="nofollow noreferrer">Logic Production System</a>), which uses ALP (but not by name?) 
in the context of the <a href="https://en.wikipedia.org/wiki/Event_calculus" rel="nofollow noreferrer">event calculus</a> to decide what actions to take to make facts about the world <code>true</code> (wishing for more details here, I do hope they are editing a book on this).</li> <li>Contrariwise, Raymond Reiter does not use Prolog but <a href="https://en.wikipedia.org/wiki/Answer_set_programming" rel="nofollow noreferrer">Answer Set Programming</a> (which may be more adapted to ALP than the SLDNF approach of Prolog) for (among others) abductive reasoning in the <a href="https://en.wikipedia.org/wiki/Situation_calculus" rel="nofollow noreferrer">Situation Calculus</a>. More on this in the book <a href="https://mitpress.mit.edu/books/knowledge-action" rel="nofollow noreferrer">Knowledge in Action</a> (MIT Press, July 2001).</li> </ul>
2021-03-08 13:54:28.630000+00:00
2021-03-08 13:54:28.630000+00:00
null
null
66,519,451
<p>I'm looking for research, algorithms or even terminology for this area of research that take a Prolog program and a query I <em>want</em> to be true and attempt to find the facts that would need to be asserted to make it true. For example:</p> <pre><code>% Program hasProperty(Object, Property) :- property(Object, hasProperty, Property). property(apple, hasProperty, red). property(car, hasProperty, drivable). </code></pre> <pre><code>% Magic function that determines what Facts would make % query 'hasProperty(lemon, sour)' true % in the program above ?- whatFacts(hasProperty(lemon, sour), Facts). Facts = [property(lemon, sour)] </code></pre> <p>I'm sure research has been done on this, and certainly it seems unsolvable in the general case, but I'm curious what has been done but am having trouble finding the right terminology to find the work.</p> <p>Would love any pointers to actual algorithms or names for the area or problem I'm describing.</p>
2021-03-07 17:37:00.347000+00:00
2021-03-08 13:54:28.630000+00:00
null
prolog|logic|logic-programming
['https://plato.stanford.edu/entries/abduction/', 'https://en.wikipedia.org/wiki/Abductive_logic_programming', 'https://en.wikipedia.org/wiki/Constraint_Handling_Rules', 'https://vision.unipv.it/IA2/aa2006-2007/Abductive%20Reasoning%20in%20Prolog%20and%20CHR.pdf', 'https://rads.stackoverflow.com/amzn/click/com/0792362500', 'https://www.researchgate.net/publication/321596989_Abduction_and_Induction_Essays_on_their_Relation_and_Integration/link/5e419ee3a6fdccd9659a13ae/download', 'https://link.springer.com/search?query=abductive+logic+programming', 'https://arxiv.org/abs/0906.1182', 'http://lps.doc.ic.ac.uk/', 'https://en.wikipedia.org/wiki/Event_calculus', 'https://en.wikipedia.org/wiki/Answer_set_programming', 'https://en.wikipedia.org/wiki/Situation_calculus', 'https://mitpress.mit.edu/books/knowledge-action']
13
3,424,880
<p>The first thing you have to decide is a general policy about which side is considered "authoritative" in case of conflicting changes.</p> <p>I.e.: suppose Record #125 is changed on the server on January 5th at 10pm and the same record is changed on one of the phones (let's call it Client A) on January 5th at 11pm. Last synch was on Jan 3rd. Then the user reconnects on, say, January 8th.</p> <p>Identifying what needs to be changed is "easy" in the sense that both the client and the server know the date of the last synch, so anything <em>created or updated</em> (see below for more on this) since the last synch needs to be reconciled.</p> <p>So, suppose that the only changed record is #125. You either decide that one of the two automatically "wins" and overwrites the other, or you need to support a reconcile phase where a user can decide which version (server or client) is the correct one, overwriting the other.</p> <p>This decision is extremely important and you must weight the "role" of the clients. Especially if there is a potential conflict not only between client and server, but in case different clients can change the same record(s).</p> <p>[Assuming that #125 can be modified by a second client (Client B) there is a chance that Client B, which hasn't synched yet, will provide yet another version of the same record, making the previous conflict resolution moot]</p> <p>Regarding the "<em>created or updated</em>" point above... how can you properly identify a record if it has been originated on one of the clients (assuming this makes sense in your problem domain)? Let's suppose your app manages a list of business contacts. If Client A says you have to add a newly created John Smith, and the server has a John Smith created yesterday by Client D... do you create two records because you cannot be certain that they aren't different persons? Will you ask the user to reconcile this conflict too?</p> <p>Do clients have "ownership" of a subset of data? I.e. if Client B is setup to be the "authority" on data for Area #5 can Client A modify/create records for Area #5 or not? (This would make some conflict resolution easier, but may prove unfeasible for your situation).</p> <p>To sum it up the main problems are:</p> <ul> <li>How to define "identity" considering that detached clients may not have accessed the server before creating a new record.</li> <li>The previous situation, no matter how sophisticated the solution, may result in data duplication, so you must foresee how to periodically solve these and how to inform the clients that what they considered as "Record #675" has actually been merged with/superseded by Record #543</li> <li>Decide if conflicts will be resolved by <em>fiat</em> (e.g. "The server version always trumps the client's if the former has been updated since the last synch") or by manual intervention</li> <li>In case of <em>fiat</em>, especially if you decide that the client takes precedence, you must also take care of how to deal with other, not-yet-synched clients that may have some more changes coming.</li> <li>The previous items don't take in account the granularity of your data (in order to make things simpler to describe). Suffice to say that instead of reasoning at the "Record" level, as in my example, you may find more appropriate to record change at the field level, instead. Or to work on a set of records (e.g. 
Person record + Address record + Contacts record) at a time treating their aggregate as a sort of "Meta Record".</li> </ul> <p>Bibliography:</p> <ul> <li><p>More on this, of course, on <a href="http://en.wikipedia.org/wiki/Data_synchronization" rel="noreferrer">Wikipedia</a>.</p></li> <li><p><a href="https://unterwaditzer.net/2016/sync-algorithm.html" rel="noreferrer">A simple synchronization algorithm</a> by the author of <a href="https://vdirsyncer.readthedocs.org/en/stable/index.html" rel="noreferrer">Vdirsyncer</a></p></li> <li><p><a href="http://www.objc.io/issue-10/data-synchronization.html" rel="noreferrer">OBJC article on data synch</a></p></li> <li><p><a href="http://my.safaribooksonline.com/0130093696" rel="noreferrer">SyncML®: Synchronizing and Managing Your Mobile Data</a> (Book on O'Reilly Safari)</p></li> <li><p><a href="http://hal.inria.fr/docs/00/61/73/41/PDF/RR-7687.pdf" rel="noreferrer">Conflict-free Replicated Data Types</a></p></li> <li><p><a href="http://pagesperso-systeme.lip6.fr/Marc.Shapiro/papers/Optimistic_Replication_Computing_Surveys_2005-03_cameraready.pdf" rel="noreferrer">Optimistic Replication</a> YASUSHI SAITO (HP Laboratories) and MARC SHAPIRO (Microsoft Research Ltd.) - <em>ACM Computing Surveys, Vol. V, No. N, 3 2005.</em></p></li> <li><p>Alexander Traud, Juergen Nagler-Ihlein, Frank Kargl, and Michael Weber. 2008. Cyclic Data Synchronization through Reusing SyncML. In Proceedings of the The Ninth International Conference on Mobile Data Management (MDM '08). IEEE Computer Society, Washington, DC, USA, 165-172. DOI=10.1109/MDM.2008.10 <a href="http://dx.doi.org/10.1109/MDM.2008.10" rel="noreferrer">http://dx.doi.org/10.1109/MDM.2008.10</a> </p></li> <li><p>Lam, F., Lam, N., and Wong, R. 2002. Efficient synchronization for mobile XML data. In Proceedings of the Eleventh international Conference on information and Knowledge Management (McLean, Virginia, USA, November 04 - 09, 2002). CIKM '02. ACM, New York, NY, 153-160. DOI= <a href="http://doi.acm.org/10.1145/584792.584820" rel="noreferrer">http://doi.acm.org/10.1145/584792.584820</a></p></li> <li><p>Cunha, P. R. and Maibaum, T. S. 1981. Resource &equil; abstract data type + synchronization - A methodology for message oriented programming -. In Proceedings of the 5th international Conference on Software Engineering (San Diego, California, United States, March 09 - 12, 1981). International Conference on Software Engineering. IEEE Press, Piscataway, NJ, 263-272.</p></li> </ul> <p>(The last three are from the ACM digital library, no idea if you are a member or if you can get those through other channels).</p> <p>From the <a href="http://www.ddj.com" rel="noreferrer">Dr.Dobbs</a> site:</p> <ul> <li>Creating Apps with SQL Server CE and SQL RDA by Bill Wagner May 19, 2004 (Best practices for designing an application for both the desktop and mobile PC - Windows/.NET)</li> </ul> <p>From arxiv.org:</p> <ul> <li><a href="http://arxiv.org/abs/1608.03960" rel="noreferrer">A Conflict-Free Replicated JSON Datatype</a> - the paper describes a JSON CRDT implementation (Conflict-free replicated datatypes - CRDTs - are a family of data structures that support concurrent modification and that guarantee convergence of such concurrent updates).</li> </ul>
2010-08-06 14:45:49.117000+00:00
2016-11-21 11:23:02.763000+00:00
2016-11-21 11:23:02.763000+00:00
null
3,406,891
<p>I'm looking for some general strategies for synchronizing data on a central server with client applications that are not always online.</p> <p>In my particular case, I have an android phone application with an sqlite database and a PHP web application with a MySQL database. </p> <p>Users will be able to add and edit information on the phone application and on the web application. I need to make sure that changes made one place are reflected everywhere even when the phone is not able to immediately communicate with the server.</p> <p>I am not concerned with how to transfer data from the phone to the server or vice versa. I'm mentioning my particular technologies only because I cannot use, for example, the replication features available to MySQL.</p> <p>I know that the client-server data synchronization problem has been around for a long, long time and would like information - articles, books, advice, etc - about patterns for handling the problem. I'd like to know about general strategies for dealing with synchronization to compare strengths, weaknesses and trade-offs.</p>
2010-08-04 15:06:45.130000+00:00
2019-12-12 08:57:45.127000+00:00
2011-12-19 03:00:52.163000+00:00
sql|database|design-patterns|client-server|data-synchronization
['http://en.wikipedia.org/wiki/Data_synchronization', 'https://unterwaditzer.net/2016/sync-algorithm.html', 'https://vdirsyncer.readthedocs.org/en/stable/index.html', 'http://www.objc.io/issue-10/data-synchronization.html', 'http://my.safaribooksonline.com/0130093696', 'http://hal.inria.fr/docs/00/61/73/41/PDF/RR-7687.pdf', 'http://pagesperso-systeme.lip6.fr/Marc.Shapiro/papers/Optimistic_Replication_Computing_Surveys_2005-03_cameraready.pdf', 'http://dx.doi.org/10.1109/MDM.2008.10', 'http://doi.acm.org/10.1145/584792.584820', 'http://www.ddj.com', 'http://arxiv.org/abs/1608.03960']
11
54,690,254
<p>Provided the network is sufficiently powerful to synthesize complex functions, the shape of the prior should, in theory, be largely irrelevant. In the specific case of the variance of the Gaussian you take as the prior, the network can easily adapt to a different variance by scaling the relevant statistics of the posterior distributions Q(z|X), and suitably rescaling the sampling in the next layer of the network. The resulting network would have precisely the same behaviour (and loss) as the previous one. So, the variance of the prior Gaussian merely fixes the unit of measure for the latent space. The topic is discussed in the excellent tutorial on <a href="https://arxiv.org/pdf/1606.05908.pdf" rel="nofollow noreferrer">Variational Autoencoders</a> by Doersch (Section 2.4.3); you might also be interested to have a look at my <a href="https://mydeeplearningblog.wordpress.com/2018/12/07/primo-articolo-del-blog/" rel="nofollow noreferrer">blog</a>.</p>
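<p>A small numerical sketch of this argument (my own illustration, not taken from the tutorial): the closed-form KL term against a prior N(0, s_p^2) becomes identical to the standard-prior KL once the posterior statistics are rescaled by s_p, which is exactly the "unit of measure" point above.</p> <pre><code>
import numpy as np

def kl_gaussian(mu, sigma, sigma_p=1.0):
    # KL( N(mu, sigma^2) || N(0, sigma_p^2) ), for one latent dimension
    return np.log(sigma_p / sigma) + (sigma**2 + mu**2) / (2 * sigma_p**2) - 0.5

mu, sigma, sigma_p = 0.8, 0.3, 0.5
print(kl_gaussian(mu, sigma, sigma_p))             # prior with variance 0.25
print(kl_gaussian(mu / sigma_p, sigma / sigma_p))  # rescaled posterior, unit prior
# both lines print the same value (about 1.47)
</code></pre>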
2019-02-14 12:19:44.520000+00:00
2019-02-14 12:19:44.520000+00:00
null
null
45,935,836
<p>In a variational autoencoder, the objective function has two terms: one that makes the input and the output x the same (reconstruction), and a regularizer that keeps q(z) and p(z) close via the KL divergence. What I don't understand is why we can assume that p(z) is a standard Gaussian with mean 0 and variance 1.</p> <p>Why not, say, a variance less than 1, so that more information is condensed with narrower Gaussians in the hidden layer?</p> <p>Thank you</p>
2017-08-29 09:46:21.547000+00:00
2019-02-14 12:19:44.520000+00:00
null
autoencoder|gauss
['https://arxiv.org/pdf/1606.05908.pdf', 'https://mydeeplearningblog.wordpress.com/2018/12/07/primo-articolo-del-blog/']
2
61,112,111
<p>The Tensor Contraction Engine (TCE) component of NWChem needs to be configured to output Broombridge files (the <code>quasar</code> component of NWChem, documented at <a href="https://github.com/nwchemgit/nwchem/tree/master/contrib/quasar" rel="nofollow noreferrer"><code>contrib/quasar/README.md</code></a> in the <a href="https://github.com/nwchemgit/nwchem/" rel="nofollow noreferrer">NWChem repository</a>). Using the example at <a href="https://docs.microsoft.com/quantum/libraries/chemistry/samples/end-to-end" rel="nofollow noreferrer">https://docs.microsoft.com/quantum/libraries/chemistry/samples/end-to-end</a>:</p> <pre><code>set tce:print_integrals T set tce:qorb 18 set tce:qela 9 set tce:qelb 9 </code></pre> <p>Adding those instructions to your input deck should enable outputting Broombridge. For more details, see Listing 7 of <a href="https://arxiv.org/pdf/1904.01131v1.pdf" rel="nofollow noreferrer">arXiv:1904.01131</a>.</p>
2020-04-09 00:28:08.987000+00:00
2020-04-09 00:28:08.987000+00:00
null
null
60,654,803
<p>I am using nwchem (with powershell as described in <a href="https://docs.microsoft.com/en-us/quantum/libraries/chemistry/installation" rel="nofollow noreferrer">https://docs.microsoft.com/en-us/quantum/libraries/chemistry/installation</a>) to generate .yamls so that I can do resource estimation using q#.</p> <p>I successfully converted a provided example .nw file into a yaml with this method, and I've included the .nw below</p> <pre><code>start n2_0_75Re_sto3g echo geometry units bohr symmetry c1 n 0 0 -0.7755 n 0 0 0.7755 end basis * library sto-3g end scf thresh 1.0e-10 tol2e 1.0e-10 singlet rhf end tce 2eorb 2emet 13 tilesize 1 ccsd thresh 1.0e-6 nroots 1 end set tce:print_integrals T set tce:qorb 10 set tce:qela 7 set tce:qelb 7 task tce energy mcscf active 10 actelec 14 multiplicity 1 end task mcscf </code></pre> <p>I'd like to better understand how to make my own for a different molecule. Using the code provided in the nwchem tutorial - <a href="https://github.com/nwchemgit/nwchem/wiki/Getting-Started#simple-input-file----scf-geometry-optimization" rel="nofollow noreferrer">https://github.com/nwchemgit/nwchem/wiki/Getting-Started#simple-input-file----scf-geometry-optimization</a> - where they provide the supposed minimal information to run something on nwchem:</p> <pre><code>title "Nitrogen cc-pvdz SCF geometry optimization" geometry n 0 0 0 n 0 0 1.08 end basis n library cc-pvdz end task scf optimize </code></pre> <p>This seems to run through nwchem successfully but errors before generating the yaml:</p> <pre><code>File "/opt/nwchem/contrib/quasar/export_chem_library_yaml.py", line 298, in &lt;module&gt; main() File "/opt/nwchem/contrib/quasar/export_chem_library_yaml.py", line 291, in main emitter_yaml_func() File "/opt/nwchem/contrib/quasar/export_chem_library_yaml.py", line 283, in emitter_yaml_func data = extract_fields() File "/opt/nwchem/contrib/quasar/export_chem_library_yaml.py", line 142, in extract_fields if geometry is None: </code></pre> <p>I want to use <a href="http://www.cheminfo.org/Chemistry/Cheminformatics/FormatConverter/index.html" rel="nofollow noreferrer">http://www.cheminfo.org/Chemistry/Cheminformatics/FormatConverter/index.html</a> to generate the geometry coordinates of a molecule, and have a bare bones .nw file which I can insert the geometry into. After playing around I often run into errors like the one above, which seems specific to the final stage of converting the output into a yaml.</p> <p>Any help would be appreciated!</p>
2020-03-12 13:07:06.080000+00:00
2020-04-09 00:28:08.987000+00:00
null
q#
['https://github.com/nwchemgit/nwchem/tree/master/contrib/quasar', 'https://github.com/nwchemgit/nwchem/', 'https://docs.microsoft.com/quantum/libraries/chemistry/samples/end-to-end', 'https://arxiv.org/pdf/1904.01131v1.pdf']
4
67,859,581
<h1>About CoordConv</h1> <p>Here is the original paper which proposed the CoordConv layer: <a href="https://arxiv.org/pdf/1807.03247.pdf" rel="nofollow noreferrer">CoordConv paper</a>.</p> <p>I will try to convey my instinctive undersanding of this operation.</p> <h1>How AddCoords works</h1> <p>The way the information is added is by stacking (<em><strong>concatenating</strong></em>, to be more accurate) two new 2D tensors to the data. Those two channels are not multiplied together, therefore there is no meshgrid involved in this process.</p> <p>Say we are at a specific layer of the network. The last convolution step produced 4 2D-tensors of shape <code>8x8</code>, each of which is the result of the previous convolution by a filter (thus we had 4 kernels in the previous step). They are in reality stacked in a single tensor of size <code>bs * 8 * 8 * 4</code> where <code>bs</code> is the <em>batch size</em>, but let's ignore the batch size from now.</p> <p>The <code>AddCoords</code> method will create two other 2D tensors:</p> <p><code>xx_channel</code>:</p> <pre><code>[[0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3, 3, 3], [4, 4, 4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6, 6, 6], [7, 7, 7, 7, 7, 7, 7, 7]] </code></pre> <p>and <code>yy_channel</code>:</p> <pre><code>[[0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7]] </code></pre> <p>Those are the results of the matmuls of the <code>tf.range</code> by the <code>tf.ones</code>.</p> <p>They will then be scaled to fit in the range <code>[-1, 1]</code> and casted to tensorflow.float32 type:</p> <p><code>xx_channel</code>:</p> <pre><code>[[-1. , -1. , -1. , -1. , -1. , -1. , -1. , -1. ], [-0.71428571, -0.71428571, -0.71428571, -0.71428571, -0.71428571, -0.71428571, -0.71428571, -0.71428571], [-0.42857143, -0.42857143, -0.42857143, -0.42857143, -0.42857143, -0.42857143, -0.42857143, -0.42857143], [-0.14285714, -0.14285714, -0.14285714, -0.14285714, -0.14285714, -0.14285714, -0.14285714, -0.14285714], [ 0.14285714, 0.14285714, 0.14285714, 0.14285714, 0.14285714, 0.14285714, 0.14285714, 0.14285714], [ 0.42857143, 0.42857143, 0.42857143, 0.42857143, 0.42857143, 0.42857143, 0.42857143, 0.42857143], [ 0.71428571, 0.71428571, 0.71428571, 0.71428571, 0.71428571, 0.71428571, 0.71428571, 0.71428571], [ 1. , 1. , 1. , 1. , 1. , 1. , 1. , 1. 
]] </code></pre> <p><code>yy_channel</code>:</p> <pre><code>[[-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.]] </code></pre> <p>They will then be concatenated to the other 2D-tensors along the last dimension (&quot;-1&quot;), ending up with a 3D-tensor with shape <code>8 * 8 * 6</code>(again, the dimension of the batch size is ignored in my explanation).</p> <p>Those two generated channels are what the authors in the <a href="https://arxiv.org/pdf/1807.03247.pdf" rel="nofollow noreferrer">paper</a> call <strong>coordinate informations</strong>. The method literally adds the coordinates of each 2D position : the y-coord and the x-coord.</p> <p>In our example, let's consider the values of an input tensor at position <code>[4, 5]</code>, meaning the values along the last dimension (size 4), which is accessible like this : <code>input_tensor[4, 5, :]</code>. It may return something like this :</p> <pre><code>input_tensor[4, 5, :] # &gt; [0.75261177, 0.62114716, 0.76845441, 0.44747785] </code></pre> <p>After <code>AddCoords</code>, it becomes:</p> <pre><code>ret[4, 5, :] # &gt; [0.75261177, 0.62114716, 0.76845441, 0.44747785, 0.14285714, 0.42857143] </code></pre> <p>... where <code>0.14285714</code> is the scaled value of <strong>4 <em>ie its y-coord</em></strong> and <code>0.42857143</code> is the scaled value of <strong>5 <em>ie its x-coord</em></strong>. The information about coordinates is now contained inside the resulting tensor, which is returned by the <code>AddCoords</code> method.</p> <h1>The CoordConv</h1> <p>It's a designed layer that applies <code>AddCoords</code> to the input and feeds the resulting tensor to a classic Conv2D layer. As such, it can be added to a neural network, as you would do with a Conv2D layer.</p> <p>That's what the authors did, when experimenting with GANs for example, where they substitued <code>Conv2D</code> with <code>CoordConv</code> (which, again, includes a Conv2D).</p> <p>Let me know if that answers your questions and/or correct any misconceptions.</p> <h1>What does it imply for the neural network ?</h1> <h2>More trainable parameters...</h2> <p>Let's give a bit more context to our previous example. In our previous example, the last layer yielded a tensor with shape <code>8 x 8 x 4</code>. Let's say we want the next convolution layer to yield 16 output filters, from a convolution window of 3 * 3.</p> <blockquote> <p>You can see <a href="https://learnopencv.com/image-classification-using-convolutional-neural-networks-in-keras/" rel="nofollow noreferrer">this link to get what convolution does mathematically , chapter 2.1</a> . You can get a basic understanding of <a href="https://ezyang.github.io/convolution-visualizer/index.html" rel="nofollow noreferrer">what the convolution operation yields thanks to this visualizer</a>. 
Just keep in mind both links show a single kernel and a single channel input matrix.</p> </blockquote> <ul> <li>If we don't add the coordinate tensors, the convolution to come will have 16 kernels with shape <code>3 x 3 x 4</code> each.</li> <li>If we do apply AddCoords, we will feed a tensor with shape <code>8 x 8 x 6</code> instead, and our 16 kernels will each have the shape <code>3 x 3 x 6</code>.</li> </ul> <p>You can think of those kernels as neurons. Each neuron has <code>3 x 3 x 4 == 36</code> weights (Conv2D) or <code>3 x 3 x 6 == 54</code> weights (AddCoords+Conv2D, or CoordConv). Their weights will be updated during the learning process. Knowing this, it should appear evident that the coordinates channels of CoordConv implies new and specific weights to each kernel of the convolution layer. That's how the neural network takes into consideration these coordinates.</p> <h2>... implied in similar training processes</h2> <p>If you haven't been experimenting with Machine Learning, the supervised learning process of a neural network might be quite complex to comprehend, but it's more general and could be resumed (oversimplified) as:</p> <ul> <li>We calculate the error, which is a mathematical way to describe how far the prediction is from the ground truth. Then we update (add) each parameter (or weight) in the network, layer after layer from the output layer to the input one, by a value that represents its implication in this error and the direction it should take to decrease the error. This process is called &quot;backpropagation of the error&quot;.</li> </ul>
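<h1>A minimal sketch of AddCoords</h1> <p>To tie the walkthrough above together, here is a small NumPy sketch of the AddCoords step for a single <code>H x W x C</code> tensor (batch dimension omitted). This is my own illustrative code, not the paper's reference implementation; it only reproduces the channel construction described above:</p> <pre><code>
import numpy as np

def add_coords(x):
    h, w, _ = x.shape
    # row-index channel (constant along each row, as in the walkthrough above)
    xx = np.tile(np.arange(h, dtype=np.float32).reshape(h, 1), (1, w))
    # column-index channel
    yy = np.tile(np.arange(w, dtype=np.float32).reshape(1, w), (h, 1))
    xx = xx / (h - 1) * 2 - 1   # scale to [-1, 1]
    yy = yy / (w - 1) * 2 - 1
    return np.concatenate([x, xx[..., None], yy[..., None]], axis=-1)

x = np.random.rand(8, 8, 4).astype(np.float32)
ret = add_coords(x)
print(ret.shape)        # (8, 8, 6)
print(ret[4, 5, 4:])    # approx. [0.1429, 0.4286], the coordinates of position (4, 5)
</code></pre> <p>A CoordConv layer would then simply feed <code>ret</code> into an ordinary convolution, whose kernels now have two extra input channels (and therefore the extra weights discussed above).</p>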
2021-06-06 13:19:16.210000+00:00
2021-06-09 08:55:48.420000+00:00
2021-06-09 08:55:48.420000+00:00
null
67,857,323
<p>I read a paper written by the Uber lab, <a href="https://medium.com/@Cambridge_Spark/coordconv-layer-deep-learning-e02d728c2311" rel="nofollow noreferrer">https://medium.com/@Cambridge_Spark/coordconv-layer-deep-learning-e02d728c2311</a>. They create a network layer named CoordConv, and in this CoordConv they not only add two meshgrid layers but also combine them with a simple conv net.</p> <ol> <li>The paper says that in this way they add positional info to every pixel. Does that mean that, after the convolution, the pixels still remain in the same place as in the original image?</li> <li>Does this also work for adding two meshgrid layers to feature maps drawn from a neural network?</li> <li>How can a meshgrid help add positional info to the image?</li> <li>Does this simply add two layers, the same size as the original image but containing a [-1,1] meshgrid, to the original input image?</li> </ol> <p>A big THANKS in advance!</p>
2021-06-06 08:49:56.057000+00:00
2021-06-09 08:55:48.420000+00:00
null
python|neural-network|pytorch|conv-neural-network
['https://arxiv.org/pdf/1807.03247.pdf', 'https://arxiv.org/pdf/1807.03247.pdf', 'https://learnopencv.com/image-classification-using-convolutional-neural-networks-in-keras/', 'https://ezyang.github.io/convolution-visualizer/index.html']
4
45,289,069
<p>It's a lot cleaner and more flexible if you implement it as a separate layer. Something like this should work:</p> <pre><code>class LayerNorm(Layer): """ Layer Normalization in the style of https://arxiv.org/abs/1607.06450 """ def __init__(self, scale_initializer='ones', bias_initializer='zeros', **kwargs): super(LayerNorm, self).__init__(**kwargs) self.epsilon = 1e-6 self.scale_initializer = initializers.get(scale_initializer) self.bias_initializer = initializers.get(bias_initializer) def build(self, input_shape): self.scale = self.add_weight(shape=(input_shape[-1],), initializer=self.scale_initializer, trainable=True, name='{}_scale'.format(self.name)) self.bias = self.add_weight(shape=(input_shape[-1],), initializer=self.bias_initializer, trainable=True, name='{}_bias'.format(self.name)) self.built = True def call(self, x, mask=None): mean = K.mean(x, axis=-1, keepdims=True) std = K.std(x, axis=-1, keepdims=True) norm = (x - mean) * (1/(std + self.epsilon)) return norm * self.scale + self.bias def compute_output_shape(self, input_shape): return input_shape </code></pre>
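<p>For completeness, here is a hedged usage sketch (my own, not part of the original snippet): the class above assumes the usual Keras imports (<code>Layer</code>, <code>initializers</code>, and the backend as <code>K</code>), and you would typically apply the activation after the normalization:</p> <pre><code>
# Imports the LayerNorm class above relies on, plus a tiny usage example.
from keras.layers import Layer, Input, Dense, Activation
from keras.models import Model
from keras import initializers
import keras.backend as K

# ... define LayerNorm as above ...

inp = Input((64,))
x = Dense(128)(inp)
x = LayerNorm()(x)           # normalize over the feature axis
x = Activation('relu')(x)    # nonlinearity applied after the normalization
out = Dense(10, activation='softmax')(x)

model = Model(inp, out)
model.compile(optimizer='adam', loss='categorical_crossentropy')
</code></pre>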
2017-07-24 19:59:41.013000+00:00
2017-07-24 19:59:41.013000+00:00
null
null
39,095,252
<p>I am trying to implement the <a href="https://arxiv.org/pdf/1607.06450v1.pdf" rel="nofollow">layer normalization</a> in a fully connected neural network with keras. The issue I have met is that all the loss are <code>NaN</code> and it doesn't learn. Here is my code:</p> <pre><code>class DenseLN(Layer): def __init__(self, output_dim, init='glorot_uniform', activation='linear', weights=None, W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True, input_dim=None, **kwargs): self.init = initializations.get(init) self.activation = activations.get(activation) self.output_dim = output_dim self.input_dim = input_dim self.epsilon = 1e-5 self.W_regularizer = regularizers.get(W_regularizer) self.b_regularizer = regularizers.get(b_regularizer) self.activity_regularizer = regularizers.get(activity_regularizer) self.W_constraint = constraints.get(W_constraint) self.b_constraint = constraints.get(b_constraint) self.bias = bias self.initial_weights = weights self.input_spec = [InputSpec(ndim=2)] if self.input_dim: kwargs['input_shape'] = (self.input_dim,) super(DenseLN, self).__init__(**kwargs) def ln(self, x): # layer normalization function m = K.mean(x, axis=0) std = K.sqrt(K.var(x, axis=0) + self.epsilon) x_normed = (x - m) / (std + self.epsilon) x_normed = self.gamma * x_normed + self.beta return x_normed def build(self, input_shape): assert len(input_shape) == 2 input_dim = input_shape[1] self.input_spec = [InputSpec(dtype=K.floatx(), shape=(None, input_dim))] self.gamma = K.variable(np.ones(self.output_dim) * 0.2, name='{}_gamma'.format(self.name)) self.beta = K.zeros((self.output_dim,), name='{}_beta'.format(self.name)) self.W = self.init((input_dim, self.output_dim), name='{}_W'.format(self.name)) if self.bias: self.b = K.zeros((self.output_dim,), name='{}_b'.format(self.name)) self.trainable_weights = [self.W, self.gamma, self.beta, self.b] else: self.trainable_weights = [self.W, self.gamma, self.beta] self.regularizers = [] if self.W_regularizer: self.W_regularizer.set_param(self.W) self.regularizers.append(self.W_regularizer) if self.bias and self.b_regularizer: self.b_regularizer.set_param(self.b) self.regularizers.append(self.b_regularizer) if self.activity_regularizer: self.activity_regularizer.set_layer(self) self.regularizers.append(self.activity_regularizer) self.constraints = {} if self.W_constraint: self.constraints[self.W] = self.W_constraint if self.bias and self.b_constraint: self.constraints[self.b] = self.b_constraint if self.initial_weights is not None: self.set_weights(self.initial_weights) del self.initial_weights def call(self, x, mask=None): output = K.dot(x, self.W) output = self.ln(output) #print (theano.tensor.shape(output)) if self.bias: output += self.b return self.activation(output) def get_output_shape_for(self, input_shape): assert input_shape and len(input_shape) == 2 return (input_shape[0], self.output_dim) model = Sequential() model.add(Dense(12, activation='sigmoid', input_dim=12)) model.add(DenseLN(98, activation='sigmoid')) model.add(DenseLN(108, activation='sigmoid')) model.add(DenseLN(1)) adadelta = Adadelta(lr=0.1, rho=0.95, epsilon=1e-08) adagrad = Adagrad(lr=0.003, epsilon=1e-08) model.compile(loss='poisson', optimizer=adagrad, metrics=['accuracy']) model.fit(X_train_scale, Y_train, batch_size=3000, callbacks=[history], nb_epoch=300) </code></pre> <p>Do you know what's wrong here and how can I fix it? 
Thanks in advance!</p> <p>EDIT:</p> <p>I have also tried some combinations of the layers and found something weird. If the input and output layers are both normal <code>Dense</code> layers, the accuracy is very low, nearly zero. But if the input layer is <code>DenseLN</code>, i.e., my customized layer, the accuracy is <code>0.6+</code> at first and, after tens of iterations, it drops to zero again. Indeed, I copied most of the code from the <code>Dense</code> layer and the only differences are the <code>ln</code> function and <code>self.ln(output)</code> in the <code>call</code> function. Besides, I have also added <code>gamma</code> and <code>beta</code> to the <code>trainable_weights</code>.</p> <p>Any help is appreciated!</p>
2016-08-23 07:45:46.960000+00:00
2017-07-24 19:59:41.013000+00:00
2016-08-23 09:14:02.437000+00:00
python|neural-network|keras
[]
0
65,685,037
<p><strong>TLDR;</strong> You want to look at <a href="https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers">Deconv networks</a> (convolution transpose), which help regenerate an image using convolution operations. You want to build an encoder-decoder convolution architecture that compresses an image to a latent representation using convolutions and then decodes an image from this compressed representation. For image segmentation, a popular architecture is <code>U-net</code>.</p> <hr /> <p><strong>NOTE:</strong> I can't answer for PyTorch, so I will be sharing the TensorFlow equivalent. Please feel free to ignore the code, but since you are looking for the concept, I can help you with what you need to solve this.</p> <p>You are trying to generate an image as the output of the network.</p> <p>A series of convolution operations helps to <code>Downsample</code> an image. Since you need an output 2D matrix (grayscale image), you want to <code>Upsample</code> as well. Such a network is called a Deconv network.</p> <p>The first series of layers convolves over the input, 'flattening' it into a vector of channels. The next set of layers uses <code>2D Conv Transpose</code> or <code>Deconv</code> operations to change the channels back into a 2D matrix (grayscale image).</p> <p>Refer to this image for reference -</p> <p><a href="https://i.stack.imgur.com/iuqcX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iuqcX.png" alt="enter image description here" /></a></p> <p>Here is sample code that shows how you can take a (128,128,1) image, downsample it, and bring it back to a (128,128,1) image with a deconv net.</p> <blockquote> <p>You can find the <code>conv2dtranspose</code> layer implementation in pytorch <a href="https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html" rel="nofollow noreferrer">here</a>.</p> </blockquote> <pre><code>from tensorflow.keras import layers, Model, utils inp = layers.Input((128,128,1)) ## x = layers.Conv2D(2, (3,3))(inp) ## Convolution part x = layers.Conv2D(4, (3,3))(x) ## x = layers.Conv2D(6, (3,3))(x) ## ########## x = layers.Conv2DTranspose(6, (3,3))(x) x = layers.Conv2DTranspose(4, (3,3))(x) ## ## Deconvolution part out = layers.Conv2DTranspose(1, (3,3))(x) ## model = Model(inp, out) utils.plot_model(model, show_shapes=True, show_layer_names=False) </code></pre> <p><a href="https://i.stack.imgur.com/Yji8X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yji8X.png" alt="enter image description here" /></a></p> <hr /> <p>Also, if you are looking for tried and tested architectures in this domain, check out <code>U-net</code>; <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-Net: Convolutional Networks for Biomedical Image Segmentation</a>. This is an <code>encoder-decoder (conv2d, conv2d-transpose)</code> architecture that uses a concept called <code>skip connections</code> to avoid information loss and generate better image segmentation masks.</p> <p><a href="https://i.stack.imgur.com/U1oGB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U1oGB.png" alt="enter image description here" /></a></p>
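<p>Since you mentioned PyTorch, here is a rough PyTorch analogue of the Keras sketch above (my own translation, so treat it as an assumption rather than reference code): plain convolutions shrink the spatial size, transposed convolutions grow it back to 128x128.</p> <pre><code>
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 2, kernel_size=3),            # 128 to 126
    nn.Conv2d(2, 4, kernel_size=3),            # 126 to 124
    nn.Conv2d(4, 6, kernel_size=3),            # 124 to 122
    nn.ConvTranspose2d(6, 6, kernel_size=3),   # 122 to 124
    nn.ConvTranspose2d(6, 4, kernel_size=3),   # 124 to 126
    nn.ConvTranspose2d(4, 1, kernel_size=3),   # 126 to 128
)

x = torch.randn(1, 1, 128, 128)    # (batch, channels, height, width)
print(model(x).shape)              # torch.Size([1, 1, 128, 128])
</code></pre>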
2021-01-12 13:36:01.230000+00:00
2021-01-12 13:36:01.230000+00:00
null
null
65,684,804
<p>I am pretty new to deep learning, so I have one question:</p> <p>Assume an input grayscale image of shape (128,128,1). The target (output) is also a (128,128,1)-sized image, e.g. for segmentation, depth prediction, etc. Usually, with valid padding, the size of the image shrinks after several convolution layers.</p> <p>What are decent (maybe not the most sophisticated) ways to keep the size or to predict an image of the same size? Is it via same padding? Is it via transpose convolution or upsampling? Should I use an FCN at the end and reshape its output to the image size? I am using PyTorch. I would be glad for any hints, because I didn't find much on the internet.</p> <p>Best</p>
2021-01-12 13:20:49.207000+00:00
2021-01-12 13:36:01.230000+00:00
null
pytorch
['https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers', 'https://i.stack.imgur.com/iuqcX.png', 'https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html', 'https://i.stack.imgur.com/Yji8X.png', 'https://arxiv.org/abs/1505.04597', 'https://i.stack.imgur.com/U1oGB.png']
6
62,086,531
<p>Increasing the number of client epochs can indeed increase the per-round convergence rate; but you're absolutely right that there is a risk of overfitting.</p> <p>In the Federated Averaging algorithm, the number of client epochs determines the amount of "sequential progress" (or learning) each client makes before updating the global model. More epochs will result in more local progress each round; this can manifest as a much faster per-round convergence rate. Plotting this against the number of examples seen on all clients may instead show a more similar convergence rate, however.</p> <p>In the federated optimization setting, there is a new risk of overfitting that may be correlated with how non-<a href="https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables" rel="nofollow noreferrer">IID</a> each client dataset is. If each client dataset has the same distribution as the global data distribution, the same practices used for non-federated optimization can be used. The less similar each client dataset is to the "global" dataset, the more likely there will be "drift" (clients converging to different optimal points) when using a high number of client epochs during later rounds. <em>Training</em> accuracy can still appear high in this setting, as each client is fitting its own local data well during local training. However, <em>test</em> accuracy is less likely to improve, as the averaged global update will likely be very small (the different client-local optimal points cancelling each other out). <a href="https://arxiv.org/abs/1910.06378" rel="nofollow noreferrer">Praneeth et al.</a> have some discussion about this.</p>
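<p>To make the "sequential progress" point concrete, here is a schematic, framework-free sketch of Federated Averaging (my own toy code on a linear model, not the TFF API), showing where the client-epochs knob sits:</p> <pre><code>
import numpy as np

def gradient(w, x, y):
    # squared-error gradient of a linear model w.x; a stand-in for a real model
    return 2 * (w @ x - y) * x

def client_update(w_global, client_data, epochs, lr=0.01):
    w = w_global.copy()
    for _ in range(epochs):              # client epochs: local sequential progress
        for x, y in client_data:
            w -= lr * gradient(w, x, y)
    return w

def federated_averaging(w, clients, rounds, client_epochs):
    for _ in range(rounds):
        updates = [client_update(w, c, client_epochs) for c in clients]
        w = np.mean(updates, axis=0)     # unweighted average of the client models
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = [[(x, x @ true_w) for x in rng.normal(size=(20, 2))] for _ in range(5)]
print(federated_averaging(np.zeros(2), clients, rounds=10, client_epochs=5))
</code></pre> <p>With IID clients like this toy example, raising <code>client_epochs</code> mostly speeds up per-round convergence; with very non-IID clients, the per-client optima diverge and the averaging step can cancel much of the local progress, which is the drift effect described above.</p>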
2020-05-29 12:54:36.033000+00:00
2020-05-29 12:54:36.033000+00:00
null
null
62,063,550
<p>I've been trying to characterize the learning process (accuracy and loss) on the Federated Learning for Image Classification notebook tutorial with TF Federated.</p> <p>I'm seeing major improvements in speed of convergence by modifying the epoch hyperparameter (changing epochs to 5, 10, 20, etc.). But I'm also seeing a major increase in training accuracy. I suspect overfitting is occurring, though when I evaluate on the test set the accuracy is still high.</p> <p>I'm wondering what is going on.</p> <p>My understanding is that the epoch parameter controls the number of forward/back propagation passes on each client per round of training. Is this correct? So, e.g., 10 rounds of training on 10 clients with 10 epochs would be 10 epochs x 10 clients x 10 rounds. I realise a larger range of clients is needed etc., but I was expecting to see poorer accuracy on the test set.</p> <p>What can I do to see what's going on? Could I use the evaluation check with something like learning curves to see if overfitting is occurring?</p> <p><code>test_metrics = evaluation(state.model, federated_test_data)</code> only appears to give a single data point; how can I get the individual test accuracy for each test example validated?</p>
2020-05-28 11:26:53.627000+00:00
2020-05-29 12:54:36.033000+00:00
2020-05-28 15:12:32.783000+00:00
machine-learning|deep-learning|tensorflow-federated
['https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables', 'https://arxiv.org/abs/1910.06378']
2
60,129,102
<p>Finding distance from the time taken would be inaccurate with WiFi because it uses CSMA/CA (Inet/linklayer/csmaca). It is essentially a queue in which the host waits for its turn to send to the AP and for the AP to broadcast the message, so the measured delay includes contention time, not just propagation time. This report has a great write-up on the impact of CSMA/CA: <a href="https://arxiv.org/pdf/1609.04604.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1609.04604.pdf</a></p> <p>You would have to create your own link layer if you want to calculate the distance from propagation speed. </p>
2020-02-08 17:06:24.757000+00:00
2020-02-08 17:06:24.757000+00:00
null
null
60,128,633
<p>Let's say I have two nodes, and one sends a packet across to the other node. How do I calculate the distance between them using the time taken? I'm pretty sure I have to use the distance = speed*time formula, which I am. My code is as follows.</p> <p><strong>Node1.cc</strong></p> <pre><code>socket.sendTo(payload, destAddr, destPort);
auto const result = SEND_TIME_HISTORY.insert(std::make_pair(numSent, simTime().dbl()));
if (not result.second) {
    result.first-&gt;second = simTime().dbl();
}
</code></pre> <p>What that does is, basically, each time it sends a packet it stores the current simTime and sequence number inside a map, so I'm sure that I'm calculating from the correct sequence of packets.</p> <p><strong>Node2.cc</strong></p> <pre><code>map&lt;int, double&gt; SENT_TIME_HISTORY = Node1-&gt;returnTimeHistory(); // basically retrieve the map
for (std::map&lt;int, double&gt;::iterator iter = SENT_TIME_HISTORY.begin(); iter != SENT_TIME_HISTORY.end(); iter++) {
    // iterate through and check if the received sequence number matches the sent sequence number
    if (rcvseq == iter-&gt;first){
        // host_currenttime is simTime().dbl() as soon as the packet is received
        timediff = host_currenttime - iter-&gt;second;
        cout &lt;&lt; "Time received: " &lt;&lt; host_currenttime &lt;&lt; "\tTime sent: " &lt;&lt; iter-&gt;second &lt;&lt; "\tActual time taken: " &lt;&lt; timediff &lt;&lt; endl;
    }
}
cout &lt;&lt; "Calculated distance: " &lt;&lt; timediff*299792458 &lt;&lt; endl;
</code></pre> <p>Basically the output is rubbish. Right now I am taking the time taken for the packet to be sent across, multiplied by the speed of light, to determine the distance. Am I doing something wrong here? </p> <p>If more source code is necessary please do inform me. Thanks in advance!</p>
2020-02-08 16:13:33.237000+00:00
2021-03-30 16:08:05.057000+00:00
null
c++|omnet++|inet
['https://arxiv.org/pdf/1609.04604.pdf']
1
49,422,273
<p>There are many ways to achieve this. One way would be to create the embeddings (vectors) yourself. This would have two advantages: first, you would be able to use bi-, tri-, and beyond (n-) grams as your tokens, and secondly, you are able to define the space that is best suited for your needs --- Wikipedia data is general, but, say, children's stories would be a more niche dataset (and more appropriate / "accurate" if you were solving problems to do with children and/or stories). There are several methods, of course <code>word2vec</code> being the most popular, and several packages to help you (e.g. <code>gensim</code>).</p> <p>However, my guess is you would like something that's already out there. The best word embeddings right now are:</p> <ul> <li><a href="https://github.com/commonsense/conceptnet-numberbatch" rel="nofollow noreferrer">Numberbatch</a> ('classic' best-in-class ensemble);</li> <li><a href="https://fasttext.cc/" rel="nofollow noreferrer">fastText</a>, by Facebook Research (created at the character level --- some words that are out of vocabulary can be "understood" as a result);</li> <li><a href="https://github.com/explosion/sense2vec" rel="nofollow noreferrer">sense2vec</a>, by the same guys behind Spacy (created using parts-of-speech (POS) as additional information, with the objective to disambiguate).</li> </ul> <p>The one we are interested in for a quick resolve of your problem is <code>sense2vec</code>. You should read the <a href="https://arxiv.org/abs/1511.06388" rel="nofollow noreferrer">paper</a>, but essentially these word embeddings were created using Reddit with additional POS information, and (thus) able to discriminate entities (e.g. nouns) that span multiple words. <a href="https://explosion.ai/blog/sense2vec-with-spacy" rel="nofollow noreferrer">This blog post</a> describes <code>sense2vec</code> very well. Here's some code to help you get started (taken from the prior links):</p> <p>Install:</p> <pre><code>git clone https://github.com/explosion/sense2vec pip install -r requirements.txt pip install -e . sputnik --name sense2vec --repository-url http://index.spacy.io install reddit_vectors </code></pre> <p>Example usage:</p> <pre><code>import sense2vec model = sense2vec.load() freq, query_vector = model["onion_rings|NOUN"] freq2, query_vector2 = model["chicken_nuggets|NOUN"] print(model.most_similar(query_vector, n=5)[0]) print(model.data.similarity(query_vector, query_vector2)) </code></pre> <p>Important note, <code>sense2vec</code> <strong>requires</strong> <code>spacy&gt;=0.100,&lt;0.101</code>, meaning <em>it will downgrade your current <code>spacy</code> install</em>, not too much of a problem if you are only loading the <code>en</code> model. Also, here are the POS tags used:</p> <pre><code>ADJ ADP ADV AUX CONJ DET INTJ NOUN NUM PART PRON PROPN PUNCT SCONJ SYM VERB X </code></pre> <p>You could use <code>spacy</code> for POS and dependency tagging, and then <code>sense2vec</code> to determine the similarity of resulting entities. Or, depending on the frequency of your dataset (not too large), you could grab n-grams in descending (n) order, and sequentially check to see if each one is an entity in the <code>sense2vec</code> model.</p> <p>Hope this helps!</p>
2018-03-22 06:50:47.490000+00:00
2018-03-22 06:50:47.490000+00:00
null
null
49,403,913
<p>I'm trying to identify user similarities by comparing the keywords used in their profile (from a website). For example, <code>Alice = pizza, music, movies</code>, <code>Bob = cooking, guitar, movie</code> and <code>Eve = knitting, running, gym</code>. Ideally, <code>Alice</code> and <code>Bob</code> are the most similar. I put down some simple code to calculate the similarity. To account for possible plural/singular version of the keywords I use something like:</p> <pre><code>from nltk.stem import WordNetLemmatizer from nltk.tokenize import word_tokenize wnl = WordNetLemmatizer() w1 = ["movies", "movie"] tokens = [token.lower() for token in word_tokenize(" ".join(w1))] lemmatized_words = [wnl.lemmatize(token) for token in tokens] </code></pre> <p>So that, <code>lemmatized_words = ["movie", "movie"]</code>. Afterwards, I do some pairwise keywords comparison using <a href="https://spacy.io/usage/vectors-similarity" rel="nofollow noreferrer"><code>spacy</code></a>, such as:</p> <pre><code>import spacy nlp = spacy.load('en') t1 = nlp(u"pizza") t2 = nlp(u"food") sim = t1.similarity(t2) </code></pre> <p>Now, the problem starts when I have to deal with compound words such as: <code>artificial intelligence</code>, <code>data science</code>, <code>whole food</code>, etc. By tokenizing, I would simply split those words into 2 (e.g. <code>artificial</code> and <code>intelligence</code>), but this would affect my similarity measure. What is (would be) the best approach to take into account those type of words?</p>
2018-03-21 10:30:49.100000+00:00
2018-05-21 12:42:26.410000+00:00
null
nlp|nltk|tokenize|spacy
['https://github.com/commonsense/conceptnet-numberbatch', 'https://fasttext.cc/', 'https://github.com/explosion/sense2vec', 'https://arxiv.org/abs/1511.06388', 'https://explosion.ai/blog/sense2vec-with-spacy']
5
66,379,153
<p>Regarding your first approach, there are two synthetically prepared datasets available:</p> <ol> <li><a href="https://www.robots.ox.ac.uk/%7Evgg/data/text/" rel="nofollow noreferrer">Text Recognition Data</a> consists of 9M images.</li> <li><a href="https://www.robots.ox.ac.uk/%7Evgg/data/scenetext/" rel="nofollow noreferrer">SynthText in the Wild</a> consists of 8M images.</li> </ol> <p>I have used the above datasets for text recognition on slab images. The images were quite challenging, but I now achieve more than 90% accuracy on them. I implemented the following models to solve this task:</p> <ol> <li><a href="https://github.com/clovaai/CRAFT-pytorch" rel="nofollow noreferrer">CRAFT</a> for text localization.</li> <li><a href="https://github.com/clovaai/deep-text-recognition-benchmark" rel="nofollow noreferrer">Deep Text Recognition</a> for text recognition.</li> </ol> <p>If you are working with <a href="https://i.stack.imgur.com/crUUR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/crUUR.png" alt="enter image description here" /></a> kinds of images only, I highly encourage you to try <strong>Deep Text Recognition</strong>. It is a 4-stage framework. <a href="https://i.stack.imgur.com/PWXo8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PWXo8.png" alt="enter image description here" /></a></p> <ol> <li><p>For Transformation, you can choose <strong>TPS</strong> or <strong>None</strong>. <strong>TPS</strong> has shown higher performance; it implements <a href="https://arxiv.org/abs/1506.02025" rel="nofollow noreferrer">Spatial Transformer Networks</a>.</p> </li> <li><p>For the Feature Extraction stage, the options are <strong>ResNet</strong> or <strong>VGG</strong>.</p> </li> <li><p>For the Sequential stage, <strong>BiLSTM</strong>.</p> </li> <li><p><strong>Attn</strong> or <strong>CTC</strong> for the Prediction stage.</p> </li> </ol> <p>They achieved the best accuracy with the <strong>TPS-ResNet-BiLSTM-Attn</strong> configuration. You can easily fine-tune this network, and I hope it can solve your task. The model was trained with the above-mentioned datasets.</p>
2021-02-26 02:12:33.890000+00:00
2021-02-26 02:12:33.890000+00:00
null
null
65,790,276
<p>I am working on a problem, where I want to automatically read the number on images as follows:</p> <p><a href="https://i.stack.imgur.com/bOjqi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bOjqi.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/aeNUi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aeNUi.png" alt="enter image description here" /></a></p> <p>As can be seen, the images are quite challenging! Not only are these not connected lines in all cases, but also the contrast differs a lot. My first attempt was using pytesseract after some preprocessing. I also created a StackOverflow post <a href="https://stackoverflow.com/questions/65666507/local-contrast-enhancement-for-digit-recognition-with-cv2-pytesseract/65675638?noredirect=1#comment116147683_65675638">here</a>.</p> <p>While this approach works fine on an individual image, it is not universal, as it requires too much manual information for the preprocessing. The best solution I have so far, is to iterate over some hyperparameters such as threshold value, filter size of erosion/dilation, etc. However, this is computationally expensive!</p> <p>Therefore I came to believe, that the solution I am looking for must be deep-learning based. I have two ideas here:</p> <ul> <li>Using a pre-trained network on a similar task</li> <li>Splitting the input images into separate digits and train / finetune a network myself in an MNIST fashion</li> </ul> <p>Regarding the first approach, I have not found something good yet. Does anyone have an idea for that?</p> <p>Regarding the second approach, I would need a method first to automatically generate images of the separate digits. I guess this should also be deep-learning-based. Afterward, I could maybe achieve some good results with some data augmentation.</p> <p>Does anyone have ideas? :)</p>
2021-01-19 11:05:26.417000+00:00
2021-02-26 02:12:33.890000+00:00
2021-02-25 15:52:15.793000+00:00
python|deep-learning|ocr|image-recognition|mnist
['https://www.robots.ox.ac.uk/%7Evgg/data/text/', 'https://www.robots.ox.ac.uk/%7Evgg/data/scenetext/', 'https://github.com/clovaai/CRAFT-pytorch', 'https://github.com/clovaai/deep-text-recognition-benchmark', 'https://i.stack.imgur.com/crUUR.png', 'https://i.stack.imgur.com/PWXo8.png', 'https://arxiv.org/abs/1506.02025']
7
66,324,414
<p>Your task is really challenging. I have several ideas, may be it will help you on the way. First, if you get the images right, you can use <a href="https://www.jaided.ai/easyocr/" rel="nofollow noreferrer">EasyOCR</a>. It uses a sophisticated algorithm for detecting letters in the image called <a href="https://arxiv.org/abs/1904.01941" rel="nofollow noreferrer">CRAFT</a> and then recognizes them using CRNN. It provides very fine grained control over symbol detection and recognition parts. For example, after some manual manipulations on the images (greyscaling, contrast enhancing and sharpening) I got</p> <p><a href="https://i.stack.imgur.com/6rXHr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6rXHr.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/Xysuj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xysuj.png" alt="enter image description here" /></a> and using the following code</p> <pre><code>import easyocr reader = easyocr.Reader(['en']) # need to run only once to load model into memory reader.readtext(path_to_file, allowlist='0123456789') </code></pre> <p>the results are <code>31197432</code> and <code>31197396</code>.</p> <p>Now, for the contrast restoration part, <code>opencv</code> has a tool called <a href="https://en.wikipedia.org/wiki/Adaptive_histogram_equalization#Contrast_Limited_AHE" rel="nofollow noreferrer">CLAHE</a>. If you run following code</p> <pre><code>img = cv2.imread(fileName) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) blurred = cv2.GaussianBlur(gray, (25, 25), 0) grayscaleImage = gray * ((gray / blurred) &gt; 0.01) clahe = cv2.createCLAHE(clipLimit=6.0, tileGridSize=(16,6)) contrasted = clahe.apply(grayscaleImage) </code></pre> <p>on the original images, you will get <a href="https://i.stack.imgur.com/4Z5bv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Z5bv.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/L21Fb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L21Fb.png" alt="enter image description here" /></a> which are visually very similarly to those above. I believe that after some cleaning you can get it to be recognizable without too much fiddling with hyperparameters.</p> <p>And finally, if you want to train your own deep learning OCR, I suggest you use <a href="https://keras-ocr.readthedocs.io/en/latest/index.html#" rel="nofollow noreferrer">keras-ocr</a> . It uses the same algorithms as EasyOCR, but provides an end-to-end training pipeline to build new OCR model. It has all the necessary steps covered: data sets downloading, data generation, augmentation, training and inferencing.</p> <p>Take into account that deep learning solutions are very computationally heavy. Good luck!</p>
2021-02-22 22:33:04.713000+00:00
2021-02-22 22:53:35.253000+00:00
2021-02-22 22:53:35.253000+00:00
null
65,790,276
<p>I am working on a problem, where I want to automatically read the number on images as follows:</p> <p><a href="https://i.stack.imgur.com/bOjqi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bOjqi.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/aeNUi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aeNUi.png" alt="enter image description here" /></a></p> <p>As can be seen, the images are quite challenging! Not only are these not connected lines in all cases, but also the contrast differs a lot. My first attempt was using pytesseract after some preprocessing. I also created a StackOverflow post <a href="https://stackoverflow.com/questions/65666507/local-contrast-enhancement-for-digit-recognition-with-cv2-pytesseract/65675638?noredirect=1#comment116147683_65675638">here</a>.</p> <p>While this approach works fine on an individual image, it is not universal, as it requires too much manual information for the preprocessing. The best solution I have so far, is to iterate over some hyperparameters such as threshold value, filter size of erosion/dilation, etc. However, this is computationally expensive!</p> <p>Therefore I came to believe, that the solution I am looking for must be deep-learning based. I have two ideas here:</p> <ul> <li>Using a pre-trained network on a similar task</li> <li>Splitting the input images into separate digits and train / finetune a network myself in an MNIST fashion</li> </ul> <p>Regarding the first approach, I have not found something good yet. Does anyone have an idea for that?</p> <p>Regarding the second approach, I would need a method first to automatically generate images of the separate digits. I guess this should also be deep-learning-based. Afterward, I could maybe achieve some good results with some data augmentation.</p> <p>Does anyone have ideas? :)</p>
2021-01-19 11:05:26.417000+00:00
2021-02-26 02:12:33.890000+00:00
2021-02-25 15:52:15.793000+00:00
python|deep-learning|ocr|image-recognition|mnist
['https://www.jaided.ai/easyocr/', 'https://arxiv.org/abs/1904.01941', 'https://i.stack.imgur.com/6rXHr.png', 'https://i.stack.imgur.com/Xysuj.png', 'https://en.wikipedia.org/wiki/Adaptive_histogram_equalization#Contrast_Limited_AHE', 'https://i.stack.imgur.com/4Z5bv.png', 'https://i.stack.imgur.com/L21Fb.png', 'https://keras-ocr.readthedocs.io/en/latest/index.html#']
8
57,453,076
<p>Since <a href="https://scikit-learn.org/dev/whats_new.html#id36" rel="nofollow noreferrer">version 0.21.0</a>, the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html#sklearn-preprocessing-polynomialfeatures" rel="nofollow noreferrer">PolynomialFeatures</a> class accepts CSR matrices for degrees 2 and 3. The method laid out <a href="https://arxiv.org/abs/1803.06418" rel="nofollow noreferrer">here</a> is used, and the computation is much, much faster than if the input is a CSC matrix or dense (assuming the data's sparse to any reasonable degree - even slightly).</p>
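<p>A small sketch of the behaviour described above, assuming scikit-learn 0.21 or newer; the matrix shape and density are made-up illustrative values.</p> <pre><code>from scipy import sparse
from sklearn.preprocessing import PolynomialFeatures

# A sparse CSR matrix: 50,000 rows, 100 columns, ~1% non-zeros
X = sparse.random(50_000, 100, density=0.01, format="csr", random_state=0)

poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)   # stays sparse for CSR input with degree 2 or 3

print(type(X_poly), X_poly.shape)
</code></pre>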
2019-08-11 19:44:36.070000+00:00
2020-10-17 05:26:01.463000+00:00
2020-10-17 05:26:01.463000+00:00
null
48,199,391
<p>I am using Scikit-learn to convert my train data to polynomial features and then fit it to a linear model.</p> <pre><code>model = Pipeline([('poly', PolynomialFeatures(degree=3)),
                  ('linear', LinearRegression(fit_intercept=False))])
model.fit(X, y)
</code></pre> <p>But it throws an error: </p> <pre><code>TypeError: A sparse matrix was passed, but dense data is required
</code></pre> <p>I know my data is in <code>sparse matrix</code> format. When I try to convert my data to a <code>dense matrix</code> it shows a <code>memory error</code>, because my data is huge (50k~ rows), so I can't convert it to a dense matrix. </p> <p>I also found a <a href="https://github.com/scikit-learn/scikit-learn/issues/8376" rel="nofollow noreferrer">GitHub issue</a> where this feature is requested, but it is still not implemented. </p> <p>So can someone please tell me how to use the sparse data format in PolynomialFeatures in Scikit-learn without converting it to dense format?</p>
2018-01-11 03:34:53.917000+00:00
2020-10-17 05:26:01.463000+00:00
null
scikit-learn|sparse-matrix|data-science|polynomials|sklearn-pandas
['https://scikit-learn.org/dev/whats_new.html#id36', 'https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html#sklearn-preprocessing-polynomialfeatures', 'https://arxiv.org/abs/1803.06418']
3
38,668,283
<p>I'll throw an answer since I think the current one is incomplete...and I also think the comment of "simple heuristic" is premature. I think that if you cluster on points, you'll get a different result than what your diagram depicts, as the clusters will be near the end-points and you wouldn't get your nice ellipses.</p> <p>So, if your data really does behave similarly to how you display it, I would take a stab at turning each set of 2/3 points into a longer list of points that basically trace out the lines. (You will need to experiment on how dense.)</p> <p>Then run HDBSCAN on the result (see this video: <a href="https://www.youtube.com/watch?v=AgPQ76RIi6A" rel="nofollow noreferrer">https://www.youtube.com/watch?v=AgPQ76RIi6A</a> ) to get your clusters. I believe "pip install hdbscan" installs it.</p> <p>Now, when testing a new sample, first decompose it into many (N) points and fit them with your hdbscan model. I reckon that if you take a majority-voting approach with your N points, you'll get the best overall cluster to which the "line" belongs.</p> <p>So, while I sort of agree with the "simple heuristic" comment, it's not so simple if you want the whole thing automated. And once you watch the video you may be convinced that HDBSCAN, because of its density-based algorithm, will suit this problem (if you decide to create many points from each sample).</p> <p>I'll wrap up by saying that I'm sure there are line-intersection models that have done this before...and that there do exist heuristics and rules that can do the job. Likely, they're computationally more economical too. My answer is just something organic using sklearn as you requested...and I haven't even tested it! It's just how I would proceed if I were in your shoes.</p> <p><strong>edit</strong></p> <p>I poked around and there are a couple of line similarity measures you can possibly try: the Frechet and Hausdorff distance measures.</p> <p>Frechet: <a href="http://arxiv.org/pdf/1307.6628.pdf" rel="nofollow noreferrer">http://arxiv.org/pdf/1307.6628.pdf</a> Hausdorff: <a href="https://stackoverflow.com/questions/13692801/distance-matrix-of-curves-in-python">distance matrix of curves in python</a> for a python example.</p> <p>If you generate all pair-wise similarities and then group them according to similarity and/or into N bins, you can then call those bins your "clusters" (not kmeans clusters though!). For each new line, generate all similarities and see which bin it belongs to. I revise my original comment of possibly being computationally less intensive...you're lucky your lines only have 2 or 3 points!</p>
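<p>A rough, untested sketch of the densify-then-cluster idea above, using the <code>hdbscan</code> package; the toy random-walk lines, the points-per-segment count and <code>min_cluster_size</code> are all illustrative assumptions.</p> <pre><code>import numpy as np
import hdbscan

def densify(line, pts_per_seg=20):
    # Turn a 2-3 point polyline into many points tracing it out
    line = np.asarray(line, dtype=float)
    chunks = [np.linspace(a, b, pts_per_seg) for a, b in zip(line[:-1], line[1:])]
    return np.vstack(chunks)

rng = np.random.default_rng(0)
lines = [np.cumsum(rng.normal(size=(3, 2)), axis=0) for _ in range(200)]  # toy data

all_points = np.vstack([densify(l) for l in lines])

clusterer = hdbscan.HDBSCAN(min_cluster_size=15, prediction_data=True)
clusterer.fit(all_points)

def line_cluster(line):
    # Majority vote: the cluster most of the line's densified points fall into
    labels, _ = hdbscan.approximate_predict(clusterer, densify(line))
    labels = labels[labels != -1]            # drop points labelled as noise
    return int(np.bincount(labels).argmax()) if len(labels) else -1

print(line_cluster(lines[0]))
</code></pre>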
2016-07-29 22:21:07.910000+00:00
2016-07-31 22:09:24.070000+00:00
2017-05-23 12:33:50.497000+00:00
null
38,667,420
<p>I have a series of line data (2-3 connected points). What is the best machine learning algorithm that I can use to classify lines by their location similarity? (image below)</p> <p>Preferably python libraries such as SciKit-Learn.</p> <p><img src="https://i.stack.imgur.com/z5Poe.png" alt="CLICK HERE TO SEE THE IMAGE"></p> <p><strong>Edit:</strong> I have tried DBSCAN, but the problem I faced was that if two lines intersect each other, sometimes DBSCAN considers them as one group even though they go in completely different directions.</p> <p>Here is a solution I found so far:</p> <p><em>GeoPath Clustering Algorithm</em></p> <p>The idea here is to cluster geo paths that travel very similarly to each other into groups.</p> <p>Steps:</p> <p>1- Cluster lines based on slope</p> <p>2- Within each cluster from step 1, find the centroid of each line and, using the k-means algorithm, cluster them into smaller groups</p> <p>3- Within each group from step 2, calculate the length of each line and group lines within a defined length threshold</p> <p>The result will be small groups of lines that have a similar slope, are close to each other, and have a similar travel distance.</p> <p>Here are screen shots of the visualization: yellow lines are all lines and red are clusters of paths that travel together.<a href="https://i.stack.imgur.com/Xm7hr.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Xm7hr.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/r1LNB.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/r1LNB.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/oJhXT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oJhXT.png" alt="enter image description here"></a></p>
2016-07-29 21:00:38.290000+00:00
2016-08-26 19:54:24.823000+00:00
2016-08-26 19:54:24.823000+00:00
python|machine-learning|scikit-learn|line|classification
['https://www.youtube.com/watch?v=AgPQ76RIi6A', 'http://arxiv.org/pdf/1307.6628.pdf', 'https://stackoverflow.com/questions/13692801/distance-matrix-of-curves-in-python']
3
72,279,722
<p>There are a bunch of potential approaches you could try, and see which might offer what you want.</p> <p>First &amp; foremost, some of the Gensim <code>Doc2Vec</code> modes co-train word-vectors into the same coordinate system as the doc-vectors – allowing direct comparisons between words &amp; docs, sometimes even to the level of compositional 'vector-arithmetic' (like in the famous word2vec analogy-solving examples).</p> <p>You can see this potential discussed in the paper <a href="https://arxiv.org/abs/1507.07998" rel="nofollow noreferrer">&quot;Document Embedding with Paragraph Vectors&quot;</a>.</p> <p>The default PV-DM mode (parameter <code>dm=1</code>) automatically co-trains words and docs in the same space. You can also add interleaved word-vector skip-gram training into the other PV-DBOW <code>dm=0</code> mode by adding the optional parameter <code>dbow_words=1</code>.</p> <p>While it is still the case that <code>d2v_model.dv.most_similar(docvec_or_doctag)</code> will only return doc-vector results, and <code>d2v_model.wv.most_similar(wordvec_or_word_token)</code> will only return word-vector results, you can absolutely provide a raw vector of a document to the set of word-vectors, or a word-vector to the set of doc-vectors, to get the nearest-neighbors of the other type.</p> <p>So in one of these modes, with doc-vector, you can use...</p> <pre><code>d2v_model.wv.most_similar(positive=[doc_vector]) </code></pre> <p>...to get a list-of-words that are closest to that doc-vector. Whether they're sufficiently representative will vary based on lots of factors. (If they seem totally random, there may be other problems with your data-sufficiency or process, or you may be using the <code>dm=0, dbow_words=0</code> mode that leaves words random &amp; untrained.)</p> <p>You could use this on the centroid of your clusters – but note, a centroid might hide lots of the variety of a larger grouping, which might include docs <em>not</em> all in a tight 'ball' around the centroid. So you could also use this on <em>all</em> docs in a cluster, to get the top-N closest words to each – and then summarize the cluster as the words most often appearing in those many top-N lists, or most <em>uniquely</em> appearing in those top-N lists (versus the top-N lists of other clusters). That might describe more of the full cluster.</p> <p>Separately, there's a method from Gensim's <code>Word2Vec</code>, <a href="https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec.predict_output_word" rel="nofollow noreferrer"><code>predict_output_word()</code></a>, which vaguely simulates the word2vec training-predictions to give a ranked list of predictions of a word from its surrounding words. The same code <em>could</em> be generalized to predict document-words from a doc-vector – there's an <a href="https://github.com/RaRe-Technologies/gensim/issues/2459" rel="nofollow noreferrer">open pending issue to do so</a>, and it'd be a simple bit of coding, though no-one's tackled it yet. (It'd be a welcome, and pretty easy, first contribution to the Gensim project.)</p> <p>Also: after having established your clusters, you could even put the <code>Doc2Vec</code> model aside, and use more traditional direct counting/frequency methods to pick out the most-salient words in each cluster. For example, turn each cluster into a single synthetic pseudodocument. Rank the words inside by TF-IDF, compared to the other cluster pseudodocs. (Or, get the top TF-IDF terms for every one of the individual original documents; describe each cluster by the most-often-relevant words tallied across all cluster docs.)</p>
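<p>A short, hedged sketch of the per-document tallying idea described above; it assumes you already have a trained gensim <code>Doc2Vec</code> model named <code>d2v_model</code> (trained in a mode where words and docs share a space) and a dict <code>clusters</code> mapping each cluster id to the list of document tags it contains. Both names are placeholders, not from the question.</p> <pre><code>from collections import Counter

def describe_cluster(doc_tags, topn=10, keep=15):
    votes = Counter()
    for tag in doc_tags:
        doc_vec = d2v_model.dv[tag]   # the trained vector for this document
        for word, _sim in d2v_model.wv.most_similar(positive=[doc_vec], topn=topn):
            votes[word] += 1
    # words that appear most often across the per-document top-N lists
    return votes.most_common(keep)

for cluster_id, doc_tags in clusters.items():
    print(cluster_id, describe_cluster(doc_tags))
</code></pre>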
2022-05-17 19:31:34.803000+00:00
2022-05-17 19:31:34.803000+00:00
null
null
72,260,769
<p>I am clustering comments.</p> <p>After preprocessing and a vectorization of a text, I have inferred vectors from my doc2vec model and applied kmeans.</p> <p>After that I want to convert cluster centroid vectors to words to kinda look at the semantic cores of the clusters. Is it possible?</p> <p>Edit: I use python/gensim.</p>
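<p>For the centroid route mentioned in the answer above, a minimal sketch, assuming <code>vectors</code> is the array of inferred doc-vectors fed to k-means and <code>d2v_model</code> is the same gensim Doc2Vec model, trained in a mode that co-trains word vectors; both names are placeholders.</p> <pre><code>from sklearn.cluster import KMeans

km = KMeans(n_clusters=10, n_init=10).fit(vectors)

for i, centroid in enumerate(km.cluster_centers_):
    # nearest words to each cluster centroid, as a rough "semantic core"
    words = d2v_model.wv.most_similar(positive=[centroid], topn=10)
    print(i, [w for w, _ in words])
</code></pre>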
2022-05-16 14:22:47.537000+00:00
2022-05-22 11:22:28.280000+00:00
2022-05-17 09:03:26.507000+00:00
doc2vec
['https://arxiv.org/abs/1507.07998', 'https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec.predict_output_word', 'https://github.com/RaRe-Technologies/gensim/issues/2459']
3
55,280,316
<p>In our case here we have a very skewed dataset with <strong>200+ classes</strong> and <strong>20%</strong> of the classes containing <strong>80% of all data</strong>. </p> <p>In our data, even with this highly skewed distribution, we have a <strong>clear definition</strong> of the texts inside our categories. </p> <p><strong>Example</strong>: Text of the Majority Class: "<em>Hey, I need a <strong>computer</strong> and a <strong>mouse</strong> to open the <strong>internet</strong> and post a <strong>programming</strong> answer on <strong>Stack</strong> <strong>Overflow</strong></em>"</p> <p>Text of the Minority Class: "<em>Hey, could you please give me the following items: <strong>Eggs</strong>, <strong>lettuce</strong>, <strong>onions</strong>, <strong>tomatoes</strong>, <strong>milk</strong> and <strong>wheat</strong>?</em>"</p> <p>Because FastText works with word n-grams and a hierarchical softmax, if you have a <strong>very well defined category</strong> as in my case above, the imbalance is not a problem because of the nature of the algorithm. </p> <p>Reference: <a href="https://arxiv.org/abs/1607.01759" rel="nofollow noreferrer">Bag of Tricks for Efficient Text Classification</a> - Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov</p>
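<p>A minimal sketch of a supervised fastText run with the two ingredients mentioned above (word n-grams and hierarchical softmax); the training file name and hyperparameter values are illustrative assumptions.</p> <pre><code>import fasttext

# train.txt (hypothetical): one example per line, "__label__CLASS some text ..."
model = fasttext.train_supervised(
    input="train.txt",
    wordNgrams=2,   # word n-grams
    loss="hs",      # hierarchical softmax
    epoch=25,
    lr=0.5,
)

print(model.predict("Hey, I need a computer and a mouse"))
</code></pre>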
2019-03-21 12:20:49.603000+00:00
2019-03-21 12:20:49.603000+00:00
null
null
50,781,707
<p>In FastText, I have unbalanced labels. What is the best way to handle it?</p>
2018-06-10 08:02:05.467000+00:00
2019-03-21 12:20:49.603000+00:00
2018-06-10 14:51:47.413000+00:00
nlp|word2vec|fasttext
['https://arxiv.org/abs/1607.01759']
1
34,249,837
<p>If you are searching for an exact optimal answer, that problem is NP-complete; however, I notice that you describe a very small problem set.</p> <p>If your problem is that small, brute-force search is feasible: you can just generate all possible solutions, compute each solution's cost, and choose the best one.</p> <p>If instead with your example you intended to outline just a problem description, and your real-life problem set is actually "large", brute force won't work, as execution time will increase exponentially with the number of items. Here's a rather <a href="http://www.or.deis.unibo.it/kp/Chapter6.pdf" rel="nofollow">old paper on this problem</a>.</p> <p>An interesting note is that <a href="http://dsec.pku.edu.cn/~tieli/notes/num_meth/lect8.pdf" rel="nofollow">you can transform your constraints into numerical</a> "penalties" and use unconstrained optimization techniques, simplifying your problem a bit.</p> <p>Algorithms that guarantee an optimal solution still need heuristics to prune bad/infeasible and "dominated" solutions quickly.</p> <p>According to research I've read, even with pruning heuristics, this approach is practically infeasible for more than a relatively small set; see this <a href="http://arxiv.org/pdf/1007.4063.pdf" rel="nofollow">"paper about a 2-phase" approach</a>.</p> <p>Typical metaheuristic optimization techniques apply to larger sets, in particular <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.93.2222&amp;rep=rep1&amp;type=pdf" rel="nofollow">Simulated Annealing</a>, Tabu search, and swarm methods.</p>
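<p>For the "small enough to brute-force" case, a tiny sketch on the toy instance given in the question further down (4 bags, at most 2 items per bag, capacity 100, minimum counted bag weight 10); the representation is my own and the constants only mirror that toy data.</p> <pre><code>from itertools import combinations, product

# (size, profit, type) triples, mirroring the small data set in the question
items = [(10, 10, 1), (10, 10, 1), (15, 45, 2), (20, 20, 2),
         (20, 20, 3), (24, 24, 3), (24, 25, 4), (50, 50, 4)]
CAPACITY, PER_BAG, MIN_BAG = 100, 2, 10

def bag_choices(bag_type):
    # every way to fill one bag: 0, 1 or 2 items of its type
    pool = [it for it in items if it[2] == bag_type]
    for r in range(PER_BAG + 1):
        yield from combinations(pool, r)

best = (0, None)
for combo in product(*(bag_choices(t) for t in (1, 2, 3, 4))):
    # each bag counts at least MIN_BAG, even when lighter or empty
    weight = sum(max(MIN_BAG, sum(s for s, _, _ in bag)) for bag in combo)
    if weight &lt;= CAPACITY:
        profit = sum(p for bag in combo for _, p, _ in bag)
        best = max(best, (profit, combo), key=lambda x: x[0])

print(best)
</code></pre>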
2015-12-13 09:52:58.003000+00:00
2015-12-13 10:04:56.340000+00:00
2015-12-13 10:04:56.340000+00:00
null
34,245,876
<p>I have a problem which involves gear optimization in a game, but i'll simplify it with this example:</p> <ul> <li>Let's say i have 4 bags.</li> <li>I have a set of different items, of 4 types.</li> <li>Each item has its weight and price.</li> <li>Each bag is for one type of item, 4 bags - 4 types.</li> </ul> <p><br> I want to maximize the price i carry in all bags, but i have the following constraints:</p> <ol> <li>Each bag can hold at most 6 items. It can be empty too.</li> <li>Total weight of all 4 bags must not exceed 700kg.</li> <li>Each bag has a min weight value, even if it's empty it will count as 50kg (a bag with 1 item of 25kg will still count as 50kg, a bag with 1 item of 51kg will count as 51kg).</li> </ol> <p>The range of the number of items it's 200~300, and with 6*4=24 max items that can be choosen, it's impossible to brute force.</p> <p>There are other factors, but those are outside of the combinatorial problem and can be solved by simple programming <hr><br> What kind of problem is this? Is it a subset of linear programming? <br> Do you know what kind of algorithm i can reasearch to solve this? <br><br> <em>I began reading about linear programming but i have a problem understanding some symbols. I have experiencie in programming but not involving math.</em></p> <h2>Update</h2> <p><hr> I looked into it, and now i know that this is a multidimensional or multiple-choice knapsack problem. Having solved the simple knapsack problem, now i only 1 constraint left, the 6 items limit.</p> <p>Anyone knows a good aproach to this?</p> <h2>Update 2</h2> <p><hr> I'm now using GLPK to model this problem and solve it, i'm so close to finish it, but i'm stuck with a simple constraint.</p> <pre><code># Size of knapsack param s; # Total of items param t; # Type of items set Z; # Min bag param m; # Items: index, size, profit, count, type set I, dimen 5; # Indices set J := setof{(i,s,p,o,z) in I} i; # Assignment var a{J}, binary; #maximize profit maximize obj : sum{(i,s,p,o,z) in I} p*a[i]; /*s.t. size : sum{(i,s,p,o,z) in I} s*a[i] &lt;= c;*/ #constraint of total weight, but with the min value for each bag #the min function doesn't work, it says argument for min has invalid type #something about it not being a linear function s.t. size : sum{zz in Z} ( min(m, sum{(i,s,p,o,z) in I: zz=z} (s*a[i]) ) ) &lt;= c; #constraint of number of items in each bag, i put an extra count number #in the set so i could sum it and make it a constraint s.t. count{zz in Z}: sum{(i,s,p,o,z) in I: zz=z} o*a[i] &lt;= t; solve; printf "The bag contains:\n"; printf {(i,s,p,o,z) in I: a[i] == 1} " %i", i; printf "\n"; data; #set of type of items set Z := 1 2 3 4; # Total weight limit param c := 100; # Only 2 items per bag param t := 2; # Min bag value, if the bag weights less than it, then it counts as this value param M := 10; # Items: index, size, profit, count, type set I := 1 10 10 1 1 2 10 10 1 1 3 15 45 1 2 4 20 20 1 2 5 20 20 1 3 6 24 24 1 3 7 24 25 1 4 8 50 50 1 4; end; </code></pre> <p>Note: i used different values here to keep it small.</p> <p>That's my model, it works without the min weight constraint, i just need for it to sum the minimum value of 50kg or the bag total, but the <code>min</code> function doesn't work there. 
I tried this formula </p> <p>(can't post images)</p> <p><a href="https://chart.googleapis.com/chart?cht=tx&amp;chl=%5Cmin%7B%28a,%20b%29%7D%20=%20%5Cfrac%7Ba%20b%20-%20%7Ca-b%7C%7D%7B2%7D" rel="nofollow">min(a,b) = (a+b- abs(a-b))/2</a></p> <p>but i can't use the abs function either.</p> <p>Can somebody point me in the right direction about this.</p>
2015-12-12 22:43:33.550000+00:00
2015-12-15 16:34:23.413000+00:00
2015-12-15 16:34:23.413000+00:00
linear-programming|knapsack-problem|ampl|glpk
['http://www.or.deis.unibo.it/kp/Chapter6.pdf', 'http://dsec.pku.edu.cn/~tieli/notes/num_meth/lect8.pdf', 'http://arxiv.org/pdf/1007.4063.pdf', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.93.2222&rep=rep1&type=pdf']
4
59,637,218
<p>You could use BiEntropy, TriEntropy, or their addition TriBiEntropy to compute the entropy of your pickle files. The algorithms are described on www.arxiv.org, and BiEntropy has been implemented with test harnesses on GitHub. BiEntropy has been tested positively on large raw binary files.</p>
2020-01-07 22:51:55.427000+00:00
2020-01-07 22:51:55.427000+00:00
null
null
59,528,143
<p>I'm working on the <a href="https://www.unb.ca/cic/datasets/vpn.html" rel="nofollow noreferrer">ISCXVPN2016 dataset</a>, it consists of some pcap files (each pcap is captured traffic of a specific app such as skype, youtube, etc.) and I have converted them to pickle files and then write them into a text file using code below:</p> <pre><code>pkl = open("AIMchat2.pcapng.pickle", "rb") with open('file.txt', 'w') as f: for Item in pkl: f.write('%s\n' %Item) </code></pre> <p>file.txt:</p> <blockquote> <p>b'\x80\x03]q\x00(cnumpy.core.multiarray\n' b'_reconstruct\n' b'q\x01cnumpy\n' b'ndarray\n' b'q\x02K\x00\x85q\x03C\x01bq\x04\x87q\x05Rq\x06(K\x01K\x9d\x85q\x07cnumpy\n' b'dtype\n' b'q\x08X\x02\x00\x00\x00u1q\tK\x00K\x01\x87q\n' b'Rq\x0b(K\x03X\x01\x00\x00\x00|q\x0cNNNJ\xff\xff\xff\xffJ\xff\xff\xff\xffK\x00tq\rb\x89C\x9dE\x00\x00\x9dU\xbc@\x00\x80\x06\xd7\xc9\x83\xca\xf0W@\x0c\x18\xa74I\x01\xbb\t].\xc8\xf3*\xc51P\x18\xfa[)j\x00\x00\x17\x03\x02\x00p\x14\x90\xccY|\xa3\x7f\xd1\x12\xe2\xb4.U9)\xf20\xf1{\xbd\x1d\xa3W\x0c\x19\xc2\xf0\x8c\x0b\x8c\x86\x16\x99\xd8:\x19\xb0G\xe7\xb2\xf4\x9d\x82\x8e&amp;a\x04\xf2\xa2\x8e\xce\xa4b\xcc\xfb\xe4\xd0\xde\x89eUU]\x1e\xfeF\x9bv\x88\xf4\xf3\xdc\x8f\xde\xa6Kk1q`\x94]\x13\xd7|\xa3\x16\xce\xcc\x1b\xa7\x10\xc5\xbd\x00\xe8M\x8b\x05v\x95\xa3\x8c\xd0\x83\xc1\xf1\x12\xee\x9f\xefmq\x0etq\x0fbh\x01h\x02K\x00\x85q\x10h\x04\x87q\x11Rq\x12(K\x01K.\x85q\x13h\x0b\x89C.E\x00\x00</p> </blockquote> <p>My question is how I can compute the entropy of each pickle file?</p> <p>(I have updated the question)</p>
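<p>A baseline that is often good enough here is plain Shannon entropy over the file's raw bytes; note this is ordinary Shannon entropy, not the BiEntropy measure suggested in the answer above, and the file name is simply the one from the question.</p> <pre><code>import math
from collections import Counter

def byte_entropy(path):
    # Shannon entropy in bits per byte of the file's raw contents
    with open(path, "rb") as f:
        data = f.read()
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(byte_entropy("AIMchat2.pcapng.pickle"))
</code></pre>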
2019-12-30 09:04:25.093000+00:00
2020-01-07 22:51:55.427000+00:00
2020-01-04 05:14:35.167000+00:00
python|pickle|entropy
[]
0
45,013,065
<p>One way to do this can be found in this paper: <a href="https://arxiv.org/pdf/1511.06233.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1511.06233.pdf</a></p> <p>The paper also compares the results of simply putting a threshold on the final scores with the (OpenMax) technique proposed by the authors.</p>
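<p>For reference, the simple score-threshold baseline that the paper compares against looks roughly like the sketch below; this is the naive baseline, not OpenMax itself, and the class names and threshold are illustrative.</p> <pre><code>import numpy as np

def classify_with_reject(probs, labels=("apple", "pear", "banana"), threshold=0.7):
    # if the best softmax score is below the threshold, report "unknown"
    best = int(np.argmax(probs))
    return labels[best] if probs[best] &gt;= threshold else "unknown"

print(classify_with_reject(np.array([0.92, 0.05, 0.03])))   # apple
print(classify_with_reject(np.array([0.40, 0.35, 0.25])))   # unknown
</code></pre>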
2017-07-10 13:15:08.563000+00:00
2017-07-10 13:15:08.563000+00:00
null
null
37,713,674
<p>I understand that if I train an ML classifying algorithm on sample pictures of apples, pears and bananas, it will be able to classify new pictures into one of those three categories. But if I provide a picture of a car, it will also classify it into one of those three classes because it has nowhere else to go.</p> <p>But is there an ML classifying algorithm that would be able to tell if an item/picture does not really belong to any of the classes it was trained for? I know I could create an "unknown" class and train it on all sorts of pictures that are neither apples, pears nor bananas, but the training set would need to be huge, I assume. That does not sound very practical.</p>
2016-06-08 21:58:01.413000+00:00
2019-06-06 12:52:32.200000+00:00
null
algorithm
['https://arxiv.org/pdf/1511.06233.pdf']
1
64,965,472
<p>There is no reason not to consider Git as a blockchain. Git is focused on a very particular (and important) set of assets: source code. The consensus in this case is manual, and we can consider that a transaction (commit) is accepted when it is merged into the release branch. Actually, considering the number of transactions (commits), Git is by far the most successful blockchain.</p> <p>Extracted from <a href="https://arxiv.org/pdf/1803.00892.pdf" rel="noreferrer">https://arxiv.org/pdf/1803.00892.pdf</a>: &quot;... ...We define “blockchain” and “blockchain network”, and then discuss two very different, well known classes of blockchain networks: cryptocurrencies and Git repositories...&quot;</p> <p>See also the following paper, which explains why Google uses a single monorepo as the single source of truth (basically, as a blockchain): <a href="https://research.google/pubs/pub45424/" rel="noreferrer">https://research.google/pubs/pub45424/</a></p>
2020-11-23 09:12:20.967000+00:00
2020-11-23 20:55:08.507000+00:00
2020-11-23 20:55:08.507000+00:00
null
46,192,377
<p>Git's internal data structure is a tree of data objects, wherein each object only points to its predecessor. Each data block is hashed. Modifying (by bit error or attack) an intermediate block will be noticed when the saved hash and the actual hash deviate.</p> <p>How is this concept different from a blockchain?<br> Git is not listed as an example of blockchains, but at least in summaries, both data structure descriptions look alike: data blocks, single-direction reverse linking, hashes, ...</p> <p>So what is the difference that keeps Git from being called a blockchain?</p>
2017-09-13 08:16:52.980000+00:00
2022-07-12 09:35:50.583000+00:00
2020-02-27 02:15:54.300000+00:00
git|hash|blockchain
['https://arxiv.org/pdf/1803.00892.pdf', 'https://research.google/pubs/pub45424/']
2
57,936,014
<p>scikit-multilearn's ML-KNN implementation is an improved version of scikit-learn's KNeighborsClassifier. It is actually built on top of it. After the k nearest neighbors in the training data are found, it uses the maximum a posteriori principle to label a new instance, which achieves better performance. Also, since it operates on sparse matrices internally using the SciPy sparse matrix library, it is highly memory-efficient. More info <a href="https://arxiv.org/pdf/1702.01460.pdf" rel="nofollow noreferrer">here</a> and <a href="https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/pr07.pdf" rel="nofollow noreferrer">here</a>.</p>
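<p>A side-by-side sketch of the two APIs being compared, on a tiny synthetic multilabel problem; the data, <code>k</code>, and label counts are all illustrative assumptions.</p> <pre><code>import numpy as np
from skmultilearn.adapt import MLkNN
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((100, 20))
y = (rng.random((100, 5)) &gt; 0.7).astype(int)   # 5 binary labels per sample

mlknn = MLkNN(k=10)
mlknn.fit(X, y)
pred_mlknn = mlknn.predict(X)     # sparse indicator matrix

knn = KNeighborsClassifier(n_neighbors=10)
knn.fit(X, y)                     # sklearn's KNN also accepts multilabel y
pred_knn = knn.predict(X)         # dense indicator array
</code></pre>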
2019-09-14 13:34:51.143000+00:00
2019-09-14 13:34:51.143000+00:00
null
null
57,901,145
<p>This might be a stupid question, but I was just wondering what the difference between ML-KNN implemented in scikit.ml and scikit-learn's KNeighborsClassifier is. According to <a href="https://scikit-learn.org/stable/modules/multiclass.html" rel="noreferrer">sklearn's docs</a>, KNeighborsClassifier has support for multilabel classification. ML-KNN, however, is KNN adapted for multilabel classification, built on top of sklearn's architecture, according to its <a href="http://scikit.ml/api/skmultilearn.adapt.mlknn.html" rel="noreferrer">docs</a>. </p> <p>When searching for sample multilabel problems, MLkNN mostly appears, but I do not understand if there's any advantage of using it over sklearn's base implementation if the latter already supports multilabel. Is it only a late adaptation on sklearn's side, or are there more differences in the implementation?</p>
2019-09-12 06:48:47.383000+00:00
2019-09-14 13:34:51.143000+00:00
null
python|machine-learning|scikit-learn|multilabel-classification|scikit-multilearn
['https://arxiv.org/pdf/1702.01460.pdf', 'https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/pr07.pdf']
2
18,467,188
<p>Haskell has been used as a quantum programming language for a while now. </p> <p>The primary point of reference would be the Quipper DSL in Haskell.</p> <ul> <li><a href="http://arxiv.org/pdf/1304.5485v1.pdf">Quipper paper</a></li> <li><a href="http://www.newscientist.com/article/dn23820-new-language-helps-quantum-coders-build-killer-apps.html">New Scientist article on Quipper</a></li> </ul> <p>And more fun stuff - <a href="http://www.kurzweilai.net/quipper-language-makes-quantum-computers-easier-to-program">http://www.kurzweilai.net/quipper-language-makes-quantum-computers-easier-to-program</a></p>
2013-08-27 13:42:14.870000+00:00
2013-08-27 13:42:14.870000+00:00
null
null
18,465,702
<p>I have just read an article talking about quantum physics. One interesting thing is that in a Haskell programmer's view there are some similarities between these two fields.</p> <p>First of all, measurement in the quantum world seems similar to lazy evaluation in Haskell: if you do not measure, you don't know whether the cat is living or dead. If you do not evaluate, you don't know whether the value is defined or <code>undefined</code>.</p> <p>Second, in quantum we have the <a href="http://en.wikipedia.org/wiki/EPR_paradox" rel="noreferrer">EPR paradox</a>, which can be explained by interactions with speed higher than light, or equivalently, a time machine. In Haskell, as we have seen in <a href="http://www.haskell.org/wikiupload/1/14/TMR-Issue6.pdf" rel="noreferrer">Assembly: Circular Programming with Recursive do -Monad.Reader issue 6</a>, we can access a value that came from the future by use of recursive <code>do</code>.</p> <p>Finally, in quantum we have to distinguish the observable world in which entropy never decreases, and the "pure" quantum world in which time is equivalent in both directions. In Haskell we have the <code>IO()</code> world that describes what the program actually does, and the pure functional world that never has side effects, and the values never depend on evaluation order.</p> <p>So I guess the above facts suggest there are some inter-connections between these two fields. Can this have more interesting consequences? For example, although I have talked about the EPR paradox, I don't know how to create a Haskell program to simulate this: a function creates two values, and later evaluation of one of them will affect the other (I think those values must have <code>IO()</code> types but I don't know how to put them together).</p>
2013-08-27 12:39:42.367000+00:00
2018-02-27 08:52:41.630000+00:00
2018-02-27 08:52:41.630000+00:00
haskell|quantum-computing
['http://arxiv.org/pdf/1304.5485v1.pdf', 'http://www.newscientist.com/article/dn23820-new-language-helps-quantum-coders-build-killer-apps.html', 'http://www.kurzweilai.net/quipper-language-makes-quantum-computers-easier-to-program']
3
63,255,601
<p>Most of the <a href="https://en.wikipedia.org/wiki/Webmail" rel="nofollow noreferrer">WebMail</a> service <a href="https://en.wikipedia.org/wiki/Comparison_of_webmail_providers" rel="nofollow noreferrer">providers</a> with free-service support basic/mobile web-browser and ofcourse supports general/full web-browser.<br /> These type of service provider's web-mail-servers can detect user's (client-side) web-browser software, by detecting the <a href="https://en.wikipedia.org/wiki/User_agent" rel="nofollow noreferrer">User-Agent</a> string &amp; can switch &amp; transfer to that mode of specific web-pages.<br /> <br /></p> <p>TB = THUNDERBIRD . TB is an EMAIL CLIENT type of software program/app . TB also uses Mozilla Firefox Web-Browser engine/core for the TB web-browser TAB . Webmail services / websites can be used inside TB's web-browser tab . In this way, email related external access &amp; information remains inside same software program/app, and security / firewall rules can be set bit more easily. <br /></p> <p>Below solution # 1 worked on basic <a href="https://en.wikipedia.org/wiki/Comparison_of_lightweight_web_browsers" rel="nofollow noreferrer">lightweight web-browser</a>, so it partially answers your question's 1st part,<br /> and solution # 2 is the answer for your 2nd &amp; 3rd part of the question.<br /> <br /></p> <p><strong>SOLUTION # 1 :</strong><br /> Web Access Based Solution For Basic Web-Browsers:<br /> In basic web-browser &quot;<a href="https://qutebrowser.org/" rel="nofollow noreferrer">qutebrowser</a>&quot; (with JS support) just goto <a href="https://www.mail.com/" rel="nofollow noreferrer">https://www.mail.com/</a> website.</p> <ul> <li>&quot;Mail<i>.</i>com&quot; web-servers will detect your browser &amp; approximate location &amp; connect your browser into appropriate web-servers related to those, just enable JS for only 7 sites/addresses shown in below, that should be sufficient, to access (view, send, receive) your emails.</li> <li>I have tested &quot;qutebrowser&quot; v1.13.1 on MacOSX Catalina (64bit-only macOS) &amp; it works fine, by the way qutebrowser installer for MacOSX is 144MB as it includes all dependencies, &amp; so it uses half-gigabyte space after decompress.</li> <li>if your basic/lightweight web-browser does not support JS, then this solution # 1 will not work, So wait for someone else to answer with a solution for that problem.<br /> <br /></li> </ul> <p><strong>SOLUTION # 2 :</strong><br /> Website/webmail/Web-Service Access Based Solution For <a href="https://www.thunderbird.net/en-US/thunderbird/all/" rel="nofollow noreferrer">Thunderbird</a> (Email-Client):<br /> this solution/process is the preferred way, as mentioned in above/OP's Question.<br /> Tested + worked on Thunderbird ( v68.12.1 ).</p> <ul> <li><p>Load &quot;<a href="https://addons.thunderbird.net/en-US/thunderbird/addon/browseintab/?src=search" rel="nofollow noreferrer">BrowseInTab</a>&quot; Thunderbird addon : Thunderbird &gt; Tools &gt; Addons &gt; in &quot;Find More Extensions&quot; box, type: BrowseInTab<br /> click on <code>[ + Add To Thunderbird ]</code> button &gt; &quot;Add&quot; &gt; restart Thunderbird.</p> <ul> <li>Also load &quot;<a href="https://addons.thunderbird.net/en-US/thunderbird/addon/open-tab/?src=search" rel="nofollow noreferrer">Open Tab</a>&quot; Thunderbird addon : Thunderbird &gt; Tools &gt; Addons &gt; in &quot;Find More Extensions&quot; box, type: Open Tab<br /> click on <code>[ + Add To Thunderbird ]</code> button &gt; &quot;Add&quot; &gt; restart 
Thunderbird.</li> </ul> </li> <li><p>now send a HTML-formatted email (not plain-text Email) , into any one of the email-address (or email account) that is already setup in your Thunderbird, in that email you must send an URL LINK, this link: <a href="https://www.mail.com/" rel="nofollow noreferrer">https://www.mail.com/</a><br />If you need to connect to a different site, then change above site.</p> </li> <li><p>goto Thunderbird &quot;Preferences&quot;/&quot;Options&quot;/Settings &gt; Privacy &gt; goto &quot;Web Content&quot; section.<br /><a href="https://i.stack.imgur.com/E1m9A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E1m9A.png" alt="Thunderbird - Preferences - Privacy - Web Content - Exceptions" /></a><br /> it should by-default have the option &quot;<b>Accept Cookies From Sites</b>&quot; unselected, for now keep it like that, (if not unseleted, then unselect it), in that row in right side, there is a button <b><code>[ Exceptions ]</code></b>, click on that, then type-in (or copy from here) each of below web-address (URL) into the &quot;Address of Website&quot; textbox, &amp; then press <code>[ Add ]</code>/<code>[ Allow ]</code> button, after all 7-sites are entered, then press <code>[ Save Changes ]</code>:</p> <p><b>Mail<i>.</i>com</b> (Mobile/Basic Version) web-service<b>:</b></p> <ol> <li><code>https://www.mail.com/</code></li> <li><code>https://3c-lxa.mail.com/</code></li> <li><code>https://dl.mail.com/</code></li> <li><code>https://mailderef.mail.com/</code></li> <li><code>https://navigator-lxa.mail.com/</code></li> <li><code>https://epimetheus.navigator-lxa.mail.com/</code></li> <li><code>https://home.navigator-lxa.mail.com/</code></li> <li><code>https://lps.navigator-lxa.mail.com/</code></li> <li><code>https://trackbar.navigator-lxa.mail.com/</code></li> <li><code>https://plus.mail.com/</code></li> <li><code>https://wa.mail.com/</code></li> <li><code>https://js.ui-portal.de/</code></li> <li><code>https://img.ui-portal.de/</code></li> <li><code>https://nct.ui-portal.de/</code></li> <li><code>https://s.uicdn.com/</code></li> <li><code>https://login.mail.com/</code></li> </ol> <ul> <li>Above list is valid for users in (southern) California, USA.</li> <li>NOTE: some of the above web-addresses (or URL(s) or site-addresses) may be DIFFERENT for your location.</li> <li>FF = Firefox . TB = Thunderbird.</li> <li>EXCEPTION / EXCLUSION LIST (BASIC/MOBILE VERSION) : How To Obtain Basic/Mobile Version Service URLs ? To find out, what exact URLs/sites are used by BASIC or MOBILE version web-service (for-example: &quot;Mail<i>.</i>com&quot;), you will have to load &quot;<a href="https://addons.mozilla.org/en-US/firefox/addon/noscript/" rel="nofollow noreferrer">NoScript</a>&quot;, &quot;<a href="https://addons.mozilla.org/en-US/firefox/addon/uaswitcher/" rel="nofollow noreferrer">User-Agent Switcher</a>&quot;, &quot;<a href="https://addons.mozilla.org/en-US/firefox/addon/user-agent-string-switcher/" rel="nofollow noreferrer">User-Agent Switcher and Manager</a>&quot; addons on a regular FF=<a href="https://www.mozilla.org/en-US/firefox/all/" rel="nofollow noreferrer">Firefox</a> web-browser . 
Start TB, send yourself one HTML based email with an URL/LINK in it, either this URL/LINK: &quot;http<b>:</b>//UserAgentString<i>.</i>com/&quot; or this &quot;https<b>:</b>//what-is-my<i>.</i>com/browser/user-agent/&quot; , open that message/email in TB , right-click on url/link , click-on &quot;Open Link in New Tab&quot; , TB will open the URL/LINK in a new browser-tab inside TB . Copy user-agent string code of your TB that will be shown there . Open another browser-tab in FF , and set/change that FF tab's User-Agent string by using the User-Agent switching/changing addon, &amp; set/change default User-Agent string of FF into the User-Agent string code obtained from TB . Then visit the &quot;https<b>:</b>//www<i>.</i>Mail<i>.</i>com/&quot; website in that FF tab , Mail<i>.</i>com website/web-service will provide web-pages to Firefox tab, based on Thunderbird's User-Agent string code that we setup in FF earlier . One by one allow+add URLs which MUST be approved/allowed in NoScript addon, for the Mail<i>.</i>com web-service to work . Now we have a list, this is the EXCEPTION LIST for using basic/mobile web-service.</li> <li>add &quot;Mail<i>.</i>com&quot; web-addresses in NoScript addon except for the number 4 &amp; 5 . When you will &quot;sign-in&quot; into &quot;https<b>:</b>//www<i>.</i>Mail<i>.</i>com/&quot; website, then you will see, immediately after sign-in with correct email-address &amp; correct password, that, Firefox web-browser's URL bar is showing a slightly different website address, MAY BE its not exactly same as number 4 shown as above, write down the part after the word &quot;navigator-&quot; or the &quot;3c-&quot; . So this new part of server-name word is what you have to use after the &quot;navigator-&quot; for the above URL/web-address # 4 in your case, and use that same part also after the &quot;3c-&quot; for the URL # 5 . So now you know &amp; can enter the correct URL # 4 &amp; 5 , so enter those inside the Thunderbird's Cookie EXCEPTION list.</li> </ul> </li> <li><p>goto the received email which has the link <a href="https://www.mail.com/" rel="nofollow noreferrer">https://www.mail.com/</a><br /> in Thunderbird (TB) &gt; right-click on that link &gt; you will see an new option <b><code>&quot;Open Link in New Tab&quot;</code></b>, use that, a new browser Tab will open up in Thunderbird.</p> </li> <li><p>now you can access (view, receive, send) your emails on &quot;Mail<i>.</i>com&quot; site itself directly, from your Email-client program, over port-443 based secured+encrypted (HTTPS + TLS/SSL) connection.</p> </li> <li><p>This Tab in TB should stay open, when you close/open TB next time.</p> </li> <li><p>regularly clear TRACKING-DATA (aka: COOKIES) inside TB.</p> </li> <li><p>Since you're using (basic browser) web browser tab(s) inside Thunderbird, &amp; it will not-only connect with primary webmail website, but will also connect with too many different types of websites, So you MUST also install protection addon : AdBlock (or alternative) addon to stop intrusive/annoying/data-stealing ADs. I prefer to use <a href="https://stackoverflow.com/a/63286125/3553808">uBlock-Origin</a> addon. 
But user may Allow simple or Text based small ADs which do not steal (your data) &amp; has obtained your specific permission.</p> </li> </ul> <p>If you/user want to use &quot;Mail<i>.</i>com&quot; mail services normally, thru default general full version web UI (user-interface), but inside the Thunderbird browser-tab (or inside other minimal or basic web-browser), then, also allow these URLs (along with previous 7-URLs in above), as &quot;Mail<i>.</i>com&quot; uses these for full version UI:</p> <ul> <li><b>Mail<i>.</i>com</b> (Full/default Version) web-service<b>:</b><br />17. <code>https://i0.mail.com/</code><br />18. <code>https://cats.navigator-lxa.mail.com/</code><br />19. <code>https://password.mail.com/</code><br />20. <code>https://wa.ui-portal.de/</code><br />21. <code>https://ogs.ui-portal.de/</code><br />22. <code>https://Account-lxa.Mail.com/</code><br />23. <code>https://MyAccount.Mail.com/</code><br />24. <code>https://mobileMailDeref.Mail.com/</code><br />25. <code>https://api.taboola.com/</code><br />26. <code>https://cats-tam.ui-portal.de/</code><br />27. <code>https://uim.tifbs.net/</code><br />28. <code>https://cdn.taboola.com/</code><br />29. <code>https://js-sec.indexWW.com/</code><br />30. <code>https://AddressBook.Navigator-lxa.Mail.com/</code><br />31. <code>https://ooEditor.Mail.com/</code><br />32. <code>https://ADclient.uimServ.net/</code><br />You may/should AVOID adding below:<br />33. Advertisements from <code>https://c.Amazon-ADsystem.com/</code> , 34. location tracking from <code>https://GeoLocation.OneTrust.com/</code>, usage profiling+tracking,etc from 35. <code>https://www.GoogleTagServices.com/</code> , 36. <code>https://www.GoogleTagManager.com/</code></li> </ul> <p>If you look into above multiple web-services, it can be very easily said, &quot;Mail<i>.</i>com&quot; DO NOT RESPECT USER's PRIVACY-RIGHTS, AND &quot;Mail<i>.</i>com&quot; IS VIOLATING+ABUSING PRIVACY-RIGHTS , they are sharing PRIVATE data with too many ESP (external-service-providers) (aka: TPSP = 3rd-party service providers), vendors, etc , using too many APIs from ESP/TPSP, vendors, etc.</p> <p><b>If your phone sends your voice, fingerprint, face, etc your PRIVATE biometric data outside of your phone into remote server for processing or whatever, then that is huge THEFT &amp; STEALING AND Violation+Abuse of Privacy-Rights , because phone can use builtin+INTERNAL software, tools, etc for processing.</b></p> <p>So similar way, the services that for-example: &quot;Mail<i>.</i>com&quot;, a WebMail service provider needs, those must be used+processed INSIDE the &quot;Mail<i>.</i>com&quot; SERVERS (inside Mail<i>.</i>com's premise &amp; under their control), their ESP/TPSP/vendors,etc can have remote access into their software (inside &quot;Mail<i>.</i>com&quot; server), but not any access into user's PRIVATE DATA/database, etc . Private data must not travel/copied outside of &quot;Mail<i>.</i>com&quot; servers . So &quot;Mail<i>.</i>com&quot; should create different sub-domain for their each ESP/TPSP/vendor,etc.</p> <p>If a person/entity really wishes to NOT violate/abuse human-rights , then there are always (many) ways for that.</p> <hr /> <br /> <p><strong>OAUTH:</strong><br /> various (remote) web-service &amp; other online service providers may/often use <b>OAuth</b> (<a href="https://en.wikipedia.org/wiki/OAuth" rel="nofollow noreferrer">OAuth</a> 2.0, etc) based verification to allow user to sign-in/login into their site/service-site from user's/client's software . 
The OAuth verification process needs to save a token as a cookie inside your web-browser software; this process uses an HTTPS/443 protocol based connection via a web-browser. If your web-browser blocks cookies for safety, to protect against tracking cookies from various human-rights violating websites/web-services, etc, then you/the user have to allow the OAuth-verification-related cookies by adding the specific OAuth-verification-related websites/web-services into your web-browser's Cookie/Script EXCEPTION LIST. After that, the OAuth-related sign-in/login will succeed &amp; an approved token will be saved as a cookie. OAuth verification may use one or a few extra web-sites/URLs from your (remote) service provider, beyond the sites that are used for a general login/sign-in. When this token/cookie is saved &amp; available inside a client software, it can be used to verify the user's client-software (that is connecting with the (remote) service provider) for various other protocol based services, for-example: IMAP/POP3, SMTP mail-server services, IM (instant-messaging) chat network services, etc.</p> <p>Normally, without OAuth, the user has to prove, over the client software's connection to the (remote) web-server, that it is indeed he himself (or she herself) accessing the (remote) web-services, by providing the password (the web-service access main/<b>master password</b>) as proof each time, or by saving this main/master password inside the software. So if this client software is hacked, or a backdoor/bug/vulnerability is found, then a harmful entity may/will also obtain the main/master password and take over your account. This <b>risk</b> can be reduced by saving a token/cookie instead of the main/master password, and using that token/cookie to prove that it is you who is accessing the service from that client software. If you suspect a remote-access event occurred on your computer/device, just clear the saved token/cookie/password &amp; re-verify via OAuth to save a new token/cookie. A harmful entity who obtains the token/cookie can access some of your data, but not all of it, as access to other sensitive data (may) require entering the main/master password.</p>
This is a much better solution than OAuth.<br /> Some service-providers will let you use an (app) access-key in your client-software first, and then also allow you to use OAuth if you need to.</p> <p>TB = Thunderbird .</p> <p>EXCEPTION / EXCLUSION LIST (OAUTH RELATED) : First, please follow the <b>procedure</b> shown in the above &quot;Mail<i>.</i>com&quot; section on how to find-out &amp; add EXCEPTIONs to allow the BASIC/MOBILE VERSION based access service, by using a basic web-browser (or by using the builtin browser-tab inside the TB email-client software).<br /> Then begin the OAuth verification process in your client software and open the OAuth verification URL in a web-browser (or inside TB's builtin browser-tab); near the bottom app border AND in the topside URL bar, you will see which web-sites it is attempting to connect to; either take screen-shot picture(s) whenever the URL/website changes, or write down each URL when it changes.<br /> If only one extra site/website is needed for OAuth, the first attempt will fail because that site is not yet inside the Exception list; so add that URL/website into the web-browser's (or TB's) Cookie/Script EXCEPTION list, then initiate OAuth verification again from your client software/app; this time it will succeed.<br /> If OAuth verification needs multiple sites, then you will have to add the different URLs into the EXCEPTION list one at a time, and initiate the OAuth verification process multiple times from the client software.<br /> When OAuth succeeds, you're done.<br /> Time to share that list with others (please mention whether the 2FA option was enabled in your case or not).<br /> Share only the URL portion, not the portion after the left-side first single <b>/</b> slash: https<b>:</b>//websiteURL<i>.</i>com<b>/</b>...</p> <p>For example, the pictures below show the OAuth verification process while adding a new mail-account inside the Thunderbird email client software.</p> <ul> <li>after pressing the &quot;Done&quot; button while adding/creating a New Mail-Account in Thunderbird=TB , the TB email client software has initiated the OAuth2 verification process in a browser-tab<br /><a href="https://i.stack.imgur.com/DyIKb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DyIKb.png" alt="Thunderbird - After Pressing Done Button For Creating/Adding New Mail Account - OAuth2 Verification Process Began Inside Browser-Tab" /></a></li> <li>after adding a few more Yahoo related URLs into the Exception-list, Yahoo asks the user to sign-in with the Yahoo main/master password, to verify that an authentic user has indeed initiated this process<br /><a href="https://i.stack.imgur.com/4eNDd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4eNDd.png" alt="Thunderbird - Yahoo asking user to Sign-In with main password" /></a></li> <li>Yahoo verifies whether the user is authentic with a 2FA type of verification, showing the 2FA verification options<br /><a href="https://i.stack.imgur.com/Gd3mf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gd3mf.png" alt="Thunderbird - Yahoo verifying user is authentic or not with 2FA type of verification" /></a></li> <li>Yahoo sends a 2FA notification to its Yahoo Mail mobile app on the user's smartphone<a href="https://i.stack.imgur.com/5q136.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5q136.png" alt="Yahoo Mail mobile app on Android - Yahoo sends
notification in mobile app, To obtain permission from user" /></a></li> <li>Yahoo asking user to approve TB client/app for OAuth<br /><a href="https://i.stack.imgur.com/k0TFS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k0TFS.png" alt="Thunderbird - Yahoo Asking User To Approve Thunderbird Client/App" /></a></li> <li>Thunderbird email client app is approved &amp; added into authorized/approved app list, and it can be seen (via Firefox) inside Yahoo Mail web-access site's Recent Activity section<br /><a href="https://i.stack.imgur.com/zmtV7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zmtV7.png" alt="Firefox - Yahoo's Recent Activity section showing Thunderbird as approved/authorized app" /></a></li> <li>Even though in above picture, the URL <code>https://api.login.yahoo.com/</code> is shown, but actually i needed to approve only <code>https://jsapi.login.yahoo.com/</code> in EXCEPTION list.</li> <li>in below goto Yahoo section to see which exact URLs were approved &amp; needed for OAuth2.0<br /> End of OAUTH section.</li> </ul> <hr /> <br /> <p><b>Yahoo</b> (Basic/Mobile Version) web-service<b>:</b><br /> This section contains info on what needs to be allowed in Thunderbird basic-browser tab, to access Yahoo &quot;free&quot; emails over their webmail web-service interface, to do basic functions: view new emails, or send emails. Below # 1 site is the webmail login/access site.</p> <ol> <li><code>https://mail.yahoo.com/</code> <a href="https://mail.yahoo.com/" rel="nofollow noreferrer">Mail.Yahoo.com</a></li> <li><code>https://login.yahoo.com/</code></li> <li><code>https://s.yimg.com/</code></li> <li><code>https://data.mail.yahoo.com/</code></li> </ol> <ul> <li>List is valid for users in (southern) California, USA, so it will be different based on different location. If you have Yahoo app on your phone, Yahoo may send user-sign-in event verification notice in it, once you select &quot;yes&quot; or allow it, basic browser in TB should take you to yahoo Inbox . NoScript on Firefox was used to obtain the list . Above list will be further different if you use their basic-HTML version site. List will be different if you've subscribed/changed your account into a different type of account. List will be different if you've enabled 2FA for your account . Follow above &quot;Mail.com&quot; section to apply it.</li> </ul> <p>Yahoo also has these MOBILE (aka: BASIC-service friendly, aka: BASIC/HTML version) access sites:<br />• <a href="https://login.yahoo.com/?.src=ym&amp;lang=&amp;done=https%3A%2F%2Fmail.yahoo.com%2Fneo%2Fb%2Flaunch" rel="nofollow noreferrer">https://login.yahoo.com/?.src=ym&amp;lang=&amp;done=https%3A%2F%2Fmail.yahoo.com%2Fneo%2Fb%2Flaunch</a> <br />• <a href="https://m.yahoo.com/" rel="nofollow noreferrer">https://m.yahoo.com/</a> <br />• <a href="https://us.m.yahoo.com/p/mail" rel="nofollow noreferrer">https://us.m.yahoo.com/p/mail</a></p> <p>For accessing Yahoo emails via &quot;OAuth2&quot; authentication-method, just add these two URLs as cookie <code>[ Exceptions ]</code> in TB,etc email-clients:<br />• <code>https://login.yahoo.com/</code><br />• <code>https://api.login.yahoo.com/</code></p> <p>For accessing Yahoo emails via their full-version (web mail access) website inside Thunderbird's (or Firefox's) browser-tab , use above four URLs and below URL list . 
These will be slightly different based on your/user's location, etc.<br /><a href="https://i.stack.imgur.com/meSdn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/meSdn.png" alt="Thunderbird - WebSite/URL Exceptions To Allow/Block Cookies" /></a></p> <hr /> <br /> <p><b>Microsoft Outlook/Hotmail/Live</b>,etc (Basic/Mobile Version) web-service<b>:</b><br /> This section contains info on what needs to be allowed in Thunderbird basic-browser tab, to access MS Outlook/Live/Hotmail &quot;free&quot; emails over their webmail web-service interface, to do basic functions: view new emails, or send emails. Below # 1 site is the webmail login/access site.</p> <ol> <li><code>https://outlook.live.com/</code> <a href="https://outlook.live.com/" rel="nofollow noreferrer">Outlook.Live.com</a></li> <li><code>https://login.live.com/</code></li> <li><code>https://logincdn.msauth.net/</code></li> <li><code>https://outlook-1.cdn.office.net/</code></li> </ol> <ul> <li>List is valid for users in (southern) California, USA, so it will be different based on different location. NoScript on Firefox was used to obtain the list . List will be further different if you use their basic-HTML version site. List will be different if you've subscribed/changed your account into a different type of account. List will be different if you've enabled 2FA for your account . Follow above &quot;Mail<i>.</i>com&quot; section to apply it.</li> </ul> <p>Microsoft mail services also has these Mobile (aka: Basic-service friendly, aka: BASIC/HTML version) webmail access sites:<br />• <a href="https://mssl.mail.live.com/m/?bfv=wm" rel="nofollow noreferrer">https://mssl.mail.live.com/m/?bfv=wm</a> <br />• <a href="https://mobile.live.com/hm" rel="nofollow noreferrer">https://mobile.live.com/hm</a> <br />• <a href="https://profile.live.com/contacts?bfv=um" rel="nofollow noreferrer">https://profile.live.com/contacts?bfv=um</a> <br />• <a href="https://mail.live.com/m" rel="nofollow noreferrer">https://mail.live.com/m</a> <br />• <a href="https://wls.live.com" rel="nofollow noreferrer">https://wls.live.com</a> <br />• <a href="https://mobile.msn.com/pocketpc/" rel="nofollow noreferrer">https://mobile.msn.com/pocketpc/</a></p> <p>For accessing emails thru &quot;OAuth2&quot; auth-method , use/add above four URLs &amp; below one URL in TB's Cookie <code>[ Exceptions ]</code> list:<br />5. <code>https://login.microsoftonline.com/</code></p> <p>For accessing emails thru full-version webmail access website, lots of URLs need to be added into Exception list.</p> <p>Push Microsoft to use TLS/SSL based encryption security, instead of StartTLS encryption security, as TLS/SSL is far far more secured+safer than StartTLS.</p> <hr /> <br /> <p><b>GMail</b> (Basic/Mobile Version) web-service<b>:</b><br /> This section contains info on what needs to be allowed in Thunderbird basic-browser tab, to access Gmail (from Google) &quot;free&quot; emails over their webmail web-service interface, to do basic functions: view new emails, or send emails. Below # 1 site is the webmail login/access site.</p> <ol> <li><code>https://mail.google.com/</code> (To access, goto: <a href="https://mail.google.com/" rel="nofollow noreferrer">mail.Google.com</a>)</li> <li><code>https://accounts.google.com/</code></li> <li><code>https://ssl.gstatic.com/</code></li> <li><code>https://www.gstatic.com/</code></li> </ol> <ul> <li>List is valid for users in (southern) California, USA, so it will be different based on different location . 
NoScript on Firefox was used to obtain the list . List will be further different if you use their basic-HTML version site. List will be different if you've subscribed/changed your account into a different type of account. List will be different if you've enabled 2FA for your account . Follow above &quot;Mail.com&quot; section to apply it.</li> </ul> <p>GMail also has these Mobile (aka: Basic-service friendly, aka: BASIC/HTML version) webmail access sites:<br />• <a href="https://mail.google.com/mail/u/0/h/1pq68r75kzvdr/?v%3Dlui" rel="nofollow noreferrer">https://mail.google.com/mail/u/0/h/1pq68r75kzvdr/?v%3Dlui</a> <br />• <a href="https://m.gmail.com/" rel="nofollow noreferrer">https://m.gmail.com/</a> <br />• <a href="https://mail.google.com/mail/x/gdlakb-/gp/" rel="nofollow noreferrer">https://mail.google.com/mail/x/gdlakb-/gp/</a> <br />• <a href="https://mail.google.com/a/%5BYour-Domain%5D/x/1gjikl11t3cl1" rel="nofollow noreferrer">https://mail.google.com/a/[Your-Domain]/x/1gjikl11t3cl1</a> <br />• <a href="https://www.google.com/ig/mobile?output=pda" rel="nofollow noreferrer">https://www.google.com/ig/mobile?output=pda</a></p> <p>For accessing GMail/Google-Mail emails via &quot;OAuth2&quot; authentication-method , add these three URL exceptions in TB,etc email-client's cookie Exception list:<br />• <code>https://accounts.google.com/</code><br />• <code>https://ssl.gstatic.com/</code><br />• <code>https://www.gstatic.com/</code></p> <p>For accessing emails thru full-version webmail access website (inside TB), lots of URLs need to be added into Exception list.</p> <p>For doing Hangouts CHAT securely inside TB via using google's hangouts website/web-service , Copy+paste add+allow below URLs into TB's Cookie-Exception list . Do not use (Thunderbird) TB's Google-Talk (GTalk) based chat account/connection, because that DOES NOT USE SECURE/ENCRYPTION PROTOCOL PROPERLY, So Your MAIN Password Will Be Exposed Or At Risk . Use &quot;Hangouts&quot; web-service inside TB's web-browser TAB, which can connect securely into Google's GTalk/XMPP chat network.<br /> Access/signin web-service site: <a href="https://hangouts.google.com/" rel="nofollow noreferrer">Hangouts.Google.com</a><br /> • <code>https://hangouts.google.com/</code><br />• <code>https://accounts.google.com/</code><br />• <code>https://myaccount.google.com/</code><br />• <code>https://ogs.google.com/</code><br />• <code>https://clients6.google.com/</code><br />• <code>https://clients4.google.com/</code><br />• <code>https://chat-pa.clients6.google.com/</code><br />• <code>https://chat-pa.clients4.google.com/</code><br />• <code>https://people-pa.clients6.google.com/</code><br />• <code>https://people-pa.clients4.google.com/</code><br />• <code>https://signaler-pa.clients6.google.com/</code><br />• <code>https://signaler-pa.clients4.google.com/</code><br />• <code>https://ssl.gstatic.com/</code><br />• <code>https://www.gstatic.com/</code><br />• <code>https://apis.google.com/</code><br />• <code>https://aa.google.com/</code><br />• <code>https://0.client-channel.google.com/</code> (You will have to add multiple of these servers, by changing &quot;0&quot; into other numbers: 1, 2, 3, 4, 5, ... etc, Add upto atleast 30 . Which exact one will be used, depends on which one is free &amp; randomly selected by google to serve your connection)</p>
2020-08-04 21:53:17.287000+00:00
2022-02-26 23:32:27.930000+00:00
2022-02-26 23:32:27.930000+00:00
null
63,253,091
<p>QUESTION(s) : (1) How can users or I have direct-access (aka: view, send, receive, etc capabilities) for  web-emails/<a href="https://en.wikipedia.org/wiki/Comparison_of_webmail_providers" rel="nofollow noreferrer">web-mails</a> (i.e:&quot;Mail.com&quot;) , from  simple/basic/lightweight/mobile  <a href="https://en.wikipedia.org/wiki/Comparison_of_lightweight_web_browsers" rel="nofollow noreferrer">web-browser</a>  thru/over  secure/encrypted  connection  and by using their  <strong>plain</strong>/basic/lite/<strong>lightweight/mobile  HTML</strong>  version based  web-service/WEBSITE/<strong>SITE</strong> ?<br /> and  (2) What Other Alternative Web-Mails Solutions (preferably: free solutions) I/User Can Use To  Send/Receive  Emails ?<br /> and  (3) Which Sites/URLs Need To Be Added In Cookie-Or-Script EXCEPTION List, To Allow Communication With Web Mail Servers Or For OAuth2 Authentication Token/Cookie ?<br /> and  (4) Which Sites/URLs Need To Be Added In Cookie-Or-Script EXCEPTION List, To Allow Saving OAuth2 Authentication Token/Cookie For Email Client Program TB=Thunderbird, SM=SeaMonkey, etc ?<br /> END-OF-QUESTION.</p> <hr /> <p><strong>DETAILS:</strong><br /> ( PLEASE  AVOID / SKIP  READING  BELOW ,<br /> if you have NO time to read more info, or if you have NO-respect that i/someone can have different preferences/choices, etc,<br /> or if you don't want to figure-out 1orMore solutions for my/user's problems,<br /> or avoid/skip when you don't want to helpout )<br /> <br /></p> <p>Abbr<b>:</b><br /> i.e. = in-example.<br /> aka = also-known-as.<br /> Eml = Email/Mail.<br /> Auth = Authentication/Verification.<br /> MSP = Mail Service Provider.<br /> WMSP = WebMail Service Provider.<br /> ESP = EMail Service Provider.<br /> ISP = Internet Service Provider.<br /> <br /></p> <p>Web-Browser (HTTP/HTTPS) Client (example) : Firefox, Safari, Chromium .<br /> Email-Client (example) : Thunderbird, SeaMonkey, Outlook.<br /> <br /></p> <p>Some email-client software program/app also contains web-browser engine/core inside them , in-example: Thunderbird, SeaMonkey, etc . These software has option to open web-browser tab, so webmail service / websites can be used/accessed inside that web-browser TAB, inside the email-client . This is what this stackoverflow question+answer is targeting to use . When email related external-server accesses are done from same software (separated from a web-browser which is used for accessing many other 3rd-party websites), then, often it is easier to setup security / firewall rules to control / filter such data net traffic , and keep email related cookies, components, data traffic, etc separate from web-browser related data traffic . There are many other benefits (in example: using web-browser based PGP/GPG addons to send/receive secure/encrypted or signed emails , session cookies remain out of access of non-email 3rd-party websites, addons, etc).<br /> <br /></p> <p>Why using &quot;Mail.com&quot; ?  
Instead of using all of these ( Mail<i>.</i>com, HushMail, ProtonMail, Tutanota, Zoho-Mail, Mailfence, iCloud, Excite-Mail, etc ) WebMail based mail/email service providers (ESP/MSP/WMSP) NAME AGAIN &amp; AGAIN , here i will use only  &quot;Mail<i>.</i>com&quot;  to refer to all/any of these webmail based ESP/MSP/WMSP.<br /> <br /></p> <p><strong>BASIC  WEBMAIL(s) / WEB-EMAIL(s)  SERVICE  EXAMPLES:</strong><br /> Few EXAMPLEs of simple/plain HTML version based website/webservice to access emails, which is also known as basic <b><a href="https://en.wikipedia.org/wiki/Webmail" rel="nofollow noreferrer">webmail</a></b>/webemail service, etc.<br /> <br /></p> <p><strong>YAHOO</strong> : any user can access &quot;Yahoo&quot; emails over their secured &amp; plain HTML version site, by using below link:<br /> <a href="https://login.yahoo.com/?.src=ym&amp;lang=&amp;done=https%3A%2F%2Fmail.yahoo.com%2Fneo%2Fb%2Flaunch" rel="nofollow noreferrer">https://login.yahoo.com/?.src=ym&amp;lang=&amp;done=https%3A%2F%2Fmail.yahoo.com%2Fneo%2Fb%2Flaunch</a><br /> and to access &quot;Yahoo&quot; emails over standard HTML version site:<br /> <a href="https://login.yahoo.com/?.src=ym&amp;lang=&amp;done=https%3A%2F%2Fmail.yahoo.com%2F" rel="nofollow noreferrer">https://login.yahoo.com/?.src=ym&amp;lang=&amp;done=https%3A%2F%2Fmail.yahoo.com%2F</a></p> <ul> <li>Yahoo emails can also be accessed for free by using free IMAPS+POP3S+SMTPS mail-server services directly from Email-Client programs, more info: <a href="https://en-global.help.yahoo.com/kb/SLN4075.html" rel="nofollow noreferrer">https://en-global.help.yahoo.com/kb/SLN4075.html</a> <br />IMAPS <code>imap.mail.yahoo.com:993</code> or POP3S <code>pop.mail.yahoo.com:995</code>,<br />and SMTPS <code>smtp.mail.yahoo.com:465</code>(TLS/SSL),<br />Note: if a user is selecting Connection-Security: TLS/SSL (encryption), Auth-Method: OAuth2 , for login/accessing emails, then, for OAuth2 to work, Cookie from specific URLs need to be allowed inside email-client . Numbers at-end of mail-server name/address is network port number, for-example: <code>:993</code> is pre-assigned for IMAPS usage . The &quot;S&quot; at-end of &quot;IMAPS&quot; is indicating to &quot;Secure&quot; (which usually means &quot;Encrypted&quot;) . A User can also create/obtain App-Key (aka: Mail-Key, etc) from Yahoo's webmail access website, and use that app-key code as password (instead of using Yahoo email account's main/primary password), in password field of mail-account, inside email-client software . 
When user want to use App-Key based login, then Auth-Method should be &quot;Normal Password&quot; &amp; connection security must be &quot;SSL/TLS&quot; (encryption) in email-client software.<br /> <br /></li> </ul> <p><strong>GMAIL</strong> : any user can access &quot;GMail&quot; (from Google) emails over their secured &amp; plain HTML version site, by using below link:<br /> <a href="https://mail.google.com/mail/u/0/h/1pq68r75kzvdr/?v%3Dlui" rel="nofollow noreferrer">https://mail.google.com/mail/u/0/h/1pq68r75kzvdr/?v%3Dlui</a><br /> and to use Standard version (with all features) back again, this can be used:<br /> <a href="https://mail.google.com/mail/u/0/?nocheckbrowser" rel="nofollow noreferrer">https://mail.google.com/mail/u/0/?nocheckbrowser</a><br /> Reference for &quot;GMail&quot;: <a href="https://support.google.com/mail/answer/15049?hl=en" rel="nofollow noreferrer">https://support.google.com/mail/answer/15049?hl=en</a></p> <ul> <li>GMail also allows free access by using these mail-server services:<br />IMAPS <code>imap.gmail.com:993</code> or POP3S <code>pop.gmail.com:995</code>,<br />and SMTPS <code>smtp.gmail.com:465</code>(TLS/SSL),<br />Note: if a user selected Connection-Security: TLS/SSL (encryption), Auth-Method: &quot;Normal Password&quot;, for login/accessing GMail emails, then, GMail account's main-password need to be specified in email-client , and user also have to select &quot;Allow Less Secure App&quot; option inside GMail settings and also enable IMAP(s)/POP(s) based access . User can also use OAuth2 as Auth-Method for login/accessing emails, from an email-client software . For that, Cookie has to be allowed in email-client, and No-need to select &quot;Allow Less Secure App&quot; option inside GMail settings, but user may have to enable IMAP(s)/POP(s) based access inside GMail settings, and user have to allow/approve email-client software.<br /> <br /></li> </ul> <p><strong>Hotmail/Outlook/Live/MSN/etc</strong> : Microsoft(MS) Outlook/Hotmail/Live/etc free email service(s) can be accessed for free on <code>&quot;Live.com&quot;</code> or <code>&quot;Outlook.Live.com&quot;</code> website(s) . The &quot;Outlook.Live.com&quot; site includes an option (which is available after login via standard-HTML mode) to access site/service over <code>&quot;Light Version&quot;</code> mode , Once/when that is set/enabled then MS webmail service allows to access emails over plain HTML site.</p> <ul> <li><p>And MS also allows free IMAPS+POP3S+SMTPS mail-server access, which can be used from plain email-clients, for accessing emails of free email-account (or free microsoft account). 
To access emails use the info from &quot;MSN&quot; line shown here<b>:</b> <a href="https://support.microsoft.com/en-us/office/pop-and-imap-email-settings-for-outlook-8361e398-8af4-4e97-b147-6c6c4ac95353" rel="nofollow noreferrer">https://support.microsoft.com/en-us/office/pop-and-imap-email-settings-for-outlook-8361e398-8af4-4e97-b147-6c6c4ac95353</a> <br />IMAPS <code>imap-mail.outlook.com:993</code> or POP3S <code>pop-mail.outlook.com:995</code>,<br />and SMTPS <code>smtp-mail.outlook.com:587</code>(startTLS),<br />Note: if user selected Connection-Security: TLS/SSL (encryption), Auth-Method: &quot;Normal Password&quot;, for login/accessing emails , then, user can use main-password to access emails from email-client software and as password goes thru TLS/SSL encrypted connection so its fine &amp; secure (if its using strong encryption).<br /> Tell/Inform+Push Microsoft to SWITCH from <a href="https://en.wikipedia.org/wiki/Opportunistic_TLS" rel="nofollow noreferrer">STARTTLS</a> into TLS/SSL, as TLS/SSL is more secure than STARTTLS . STARTTLS can be abused<sup> <a href="https://www.eff.org/deeplinks/2014/11/starttls-downgrade-attacks" rel="nofollow noreferrer">1</a>, <a href="http://www.telecomasia.net/content/google-yahoo-smtp-email-severs-hit-thailand" rel="nofollow noreferrer">2</a>, <a href="http://www.goldenfrog.com/blog/fcc-must-prevent-isps-blocking-encryption" rel="nofollow noreferrer">3</a>, <a href="https://privacyinternational.org/sites/default/files/2017-10/thailand_2017_0.pdf" rel="nofollow noreferrer">4</a></sup> to violate Privacy-Rights of users: to STEAL-from Or SPY-on users.</p> </li> <li><p>QUESTION: Can &quot;Live.com&quot; (Outlook/Hotmail/Live,etc) free emails be accessed over plain-HTML site by using a specific URL (like something that is similar to Yahoo/Google) without enabling the &quot;LightVersion&quot;-option ?</p> </li> </ul> <p>End-of-EXAMPLES.<br /> <br /></p> <p><strong>WEBMAIL</strong><sup><a href="https://en.wikipedia.org/wiki/Comparison_of_webmail_providers" rel="nofollow noreferrer">1</a></sup><b>:</b><br /> WebMail/WebService access is needed into online webmail based email/mail service providers (ESP/MSP).<br /> &quot;Mail<i>.</i>com&quot; MSP seems to NOT-provide any free IMAPS/POP3S based services to free-accounts holders to get/view their received emails, and neither provides any free SMTPS service(s) to send emails outward from free-accounts . So it appears that, only free options i/user with free-accounts have, are to use their services either thru &quot;Mail<i>.</i>com&quot; website from any web-browser, or access their site thru their own &quot;Mail<i>.</i>com&quot; app . 
And their official app also does not have any option to use <a href="https://en.wikipedia.org/wiki/Pretty_Good_Privacy" rel="nofollow noreferrer">PGP</a>/<a href="https://en.wikipedia.org/wiki/OpenPGP" rel="nofollow noreferrer">OpenPGP</a>/<a href="https://en.wikipedia.org/wiki/GNU_Privacy_Guard" rel="nofollow noreferrer">GPG</a>/<a href="https://en.wikipedia.org/wiki/S/MIME" rel="nofollow noreferrer">SMIME</a> based secured emails.</p> <ul> <li>Another problem is, &quot;Mail<i>.</i>com&quot; Or it's parent-company seems to use too many other micro web-services from too many other sub-domains, etc !!!<br /> &quot;Mail<i>.</i>com&quot; &amp; its sub-domains are not <a href="https://en.wikipedia.org/wiki/Domain_Name_System_Security_Extensions" rel="nofollow noreferrer">DNSSEC</a>+<a href="https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Named_Entities" rel="nofollow noreferrer">DANE</a> signed, so users cannot be 100% sure if they are using authentic site/service.</li> <li>So i (and users) need to know How to easily send+receive+view <a href="https://www.mail.com/" rel="nofollow noreferrer">&quot;Mail<i>.</i>com&quot;</a> emails from simple/BASIC/LIGHTWEIGHT WEB-BROWSER, by using secured/encrypted connection but over plain-HTML or lightweight-HTML version of web-email web-service from &quot;Mail<i>.</i>com&quot;.</li> <li>It will also be okay, if &quot;Mail<i>.</i>com&quot; can be directly accessed (for free-accounts) from email-client programs (i.e: <a href="https://www.thunderbird.net/" rel="nofollow noreferrer">Thunderbird</a>, <a href="https://www.seamonkey-project.org/" rel="nofollow noreferrer">SeaMonkey</a>, etc) by using some addons on the email-client, e.g: <a href="https://addons.thunderbird.net/en-US/thunderbird/addon/browseintab/?src=search" rel="nofollow noreferrer">BrowseInTab</a>, ThunderBrowse, WebApp, WebMail, etc . Do you know of any other/better addons ? ( this wud be my <strong>preferred</strong> way for accessing &quot;Mail<i>.</i>com&quot; )</li> <li>And please also share info with me+users about same for other (major) online Email Service Providers, if you know &amp; if you want to.</li> <li>Please assume i'm using a very simple &amp; basic (or lightweight) web-browser, or pls assume i'm using a very basic email-client program.</li> <li>Similar to &quot;<a href="https://en.wikipedia.org/wiki/Mail.com" rel="nofollow noreferrer">Mail<i>.</i>com</a>&quot;, these following email-service (webmail / web-service based) providers also do not provide free IMAPS/POP3S/SMTPS access to free email-account users, but provide only HTTPS(port-443) protocol based web-service/web-access (webpage based email access) for free , So they are &quot;webmail&quot;-providers . Many users from below email-services also need a solution (to my top-side question), to access emails by using email-service provider's basic/plain HTML version website to use from basic/lightweight web-browser software or to use from basic/lightweight email-client software. 
<ul> <li>Webmail-providers: <a href="https://en.wikipedia.org/wiki/Hushmail" rel="nofollow noreferrer">HushMail</a>, <a href="https://en.wikipedia.org/wiki/ProtonMail" rel="nofollow noreferrer">ProtonMail</a>, <a href="https://en.wikipedia.org/wiki/Tutanota" rel="nofollow noreferrer">Tutanota</a>, <a href="https://en.wikipedia.org/wiki/Zoho_Office_Suite" rel="nofollow noreferrer">Zoho-Mail</a>, <a href="https://en.wikipedia.org/wiki/Mailfence" rel="nofollow noreferrer">Mailfence</a>, <a href="https://en.wikipedia.org/wiki/ICloud" rel="nofollow noreferrer">iCloud</a>, <a href="https://en.wikipedia.org/wiki/Excite" rel="nofollow noreferrer">Excite-Mail</a>, etc.<br /> But these service providers should provide atleast POP3S+SMTPS protocol based access for free, as those 2-protocols are minimum &amp; being used atleast from 1984, and needed for accessing emails from email-client software, and also needed to easily send+receive secure (signed or encrypted or encrypted+signed) emails.<br /> <br /></li> </ul> </li> </ul> <p><strong>WEBMAIL ACCESS  INTO  SELF-HOSTED  MAIL-SERVER:</strong><br /> Another major/big usage &amp; need of having web-access for emails (aka: webmail, aka: web-browser based access) : in my case, its for accessing MY-OWN SELF-HOSTED<sup><a href="https://en.wikipedia.org/wiki/Self-hosting_%28web_services%29" rel="nofollow noreferrer">1</a>, <a href="https://list.community/self-hosted/" rel="nofollow noreferrer">2</a></sup> (small) <a href="https://en.wikipedia.org/wiki/List_of_mail_server_software" rel="nofollow noreferrer">MAIL-SERVER</a> , And similarly many other users &amp; teams &amp; groups, etc also need to have web-access into emails, either for their business or for their own project or simply for their own personal/private usage, by SELF-HOSTING.</p> <ul> <li>Such mail-servers (<a href="https://en.wikipedia.org/wiki/Comparison_of_mail_servers" rel="nofollow noreferrer">comparison</a>) usually use <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software" rel="nofollow noreferrer">open-source &amp; free software</a>, and owner/user often/usually use less-powerful or overloaded SERVER computers, and often/usually many mail-servers do not have a widely accepted public-<a href="https://en.wikipedia.org/wiki/Certificate_authority" rel="nofollow noreferrer">CA</a> (certificate-authority) based SSL/TLS cert/certificate configured for it (and may instead use a simple free self-signed TLS/SSL-cert ) , and some mail-servers also get overloaded because of extra memory-usage &amp; extra computing resources consumed by virus/malware/spamware checker, scanner,etc software. <ul> <li>Recently, free SSL/TLS certs from a CA : LE(<a href="https://en.wikipedia.org/wiki/Let%27s_Encrypt" rel="nofollow noreferrer">Let's-Encrypt</a> <sup><a href="https://github.com/letsencrypt" rel="nofollow noreferrer">1</a>, <a href="https://github.com/certbot/certbot" rel="nofollow noreferrer">2</a></sup>) has been widely used, (and even more recently another new-comer CA : ZS(<a href="https://github.com/zerossl/zerossl" rel="nofollow noreferrer">ZeroSSL</a> <sup><a href="https://zerossl.com/" rel="nofollow noreferrer">1</a></sup>) is becoming popular over its ease of usage) . 
So LE based SSL/TLS cert has began to increase encryption usage in Web+Email servers &amp; so user's (and server owner's) Privacy is increasing.</li> <li>And, if individual or small-business or small-group/team based mail-server operator wants to, then they/he/she can avoid execessive protocols by reducing usage of specific 4-protocols : IMAP4S/993, POP3S/995, Mail-Submission/587, Mail-Submission-Over-TLS/465,<br /> and instead they/he/she can increase usage of 2-protocols : HTTPS/443 protocol based webmail to interact with end-users, &amp; SMTPS/25 protocol to send emails-to (or receive emails-from) remote (mail) servers.</li> <li>Users can easily create Mail-Servers with these free (and open-source) <a href="https://en.wikipedia.org/wiki/Mail_server_packages" rel="nofollow noreferrer">mail-server-bundle</a> (aka: mail-server-suite, aka: mail-server-package, aka: mail-server-stack) : <a href="https://mailinabox.email/" rel="nofollow noreferrer">Mail-in-a-Box</a> , <a href="https://mailcow.email/" rel="nofollow noreferrer">MailCow</a> (for Docker) , <a href="https://modoboa.org/" rel="nofollow noreferrer">Modoboa</a>, <a href="https://github.com/webmin/usermin" rel="nofollow noreferrer">Usermin</a>(webmail), <a href="https://www.iredmail.org/" rel="nofollow noreferrer">iRedMail</a>+iRedAdmin (opensource edition of this combo only has four features), etc.</li> <li>There are also many (open-source) server-admin (aka: hosting server control panel) type of software, which can also create full-featured mail-server (and also many other servers) : <a href="https://github.com/webmin/webmin" rel="nofollow noreferrer">Webmin</a>+<a href="https://github.com/virtualmin/virtualmin-gpl" rel="nofollow noreferrer">Virtualmin</a> , <a href="https://gnupanel.org/" rel="nofollow noreferrer">GNUpanel</a>, <a href="https://sourceforge.net/projects/ispconfig/" rel="nofollow noreferrer">ISPConfig</a>, etc, etc . You may also see a <a href="https://en.wikipedia.org/wiki/Comparison_of_web_hosting_control_panels" rel="nofollow noreferrer">Comparison</a> of <a href="https://en.wikipedia.org/wiki/Web_hosting_control_panel" rel="nofollow noreferrer">server control panel</a> in wikipedia site, or <a href="https://github.com/atErik/Server-Admin-Scripts/wiki" rel="nofollow noreferrer">here</a>. 
<br /><br /></li> </ul> </li> </ul> <p><strong>BASIC WEB-BROWSER:</strong><br /> A <strong>lightweight</strong>/plain/simple HTML <strong>site</strong>/website usually uses very simple basic/plain HTML, may use simple CSS styles, may use very very less JS(JavaScripts) or No JS at all, does not use any Flash/Java or any other objects/medias, etc.<br /></p> <p><strong>BASIC HTML WEB-SERVICE:</strong><br /> A plain-HTML site/website/web-service is usually tuned/optimized to work on a small-scale or light-footprint <strong>web-browsers</strong> that usually supports minimum+safe standard (or latest/best) security (encryption/decryption) protocols, but lightweight browsers usually do not have advanced viewing/interface support/capabilities (that is, they may lack big/wide screen, so lightweight web-browsers need to show less elements to make minimal items meaningful for the User so that User can use it by touch/tap/mouse), and lightweight browsers often/usually running on a device which has very-less computing-resources available (or low-speed or low FLOP/S microprocessor), etc constraints.<br /> More info on lightweight web-browsers:<br />   <a href="https://en.wikipedia.org/wiki/Comparison_of_lightweight_web_browsers" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Comparison_of_lightweight_web_browsers</a><br /> More info on mobile web-browsers:<br />   <a href="https://en.wikipedia.org/wiki/Mobile_browser" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Mobile_browser</a><br /> <br /><br /></p> <p>&quot;Email-Clients&quot; means, a type of program, which allows to receive/send/view emails. More info: <a href="https://en.wikipedia.org/wiki/Comparison_of_email_clients" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Comparison_of_email_clients</a><br /> <br /><br /></p> <p>PORTS FOR EMAIL-SERVICES:<br /> Internet or computer-network connection ports used by email/mail handling systems:<br /> ISP = Internet Service Provider, they also provide Mail Service, so they are also MSP.<br /> MSP = Mail Service Provider. For example: online mail/email service provider, webmail/web-email service provider, etc.<br /> IMAPS/IMAP or POPS/POP service are used to view/get emails (from mail-server into user's (email) client software/app). <a href="https://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol" rel="nofollow noreferrer">SMTP</a> service is used to send emails.<br /> PROTOCOL(aka: Service) : PORT# ;<br /> IMAPS/IMAP4S : 993 (encrypted) ; IMAP/IMAP4 : 143 (not-encrypted, usually not-private) ;<br /> POPS/POP3S : 995 (encrypted) ; POP/POP3 : 110 (not-encrypted, usually not-private) ;<br /> SMTP/SMTPS : 25 (usually used for Email Server To Server communication, can be encrypted or not-encrypted, depends on email-server software capability, and it is usually allowed in business-class ISP connections, and usually not-allowed in residential-class ISP connections, Email-clients used inside business-class connections can use port 25 to send emails) ;<br /> SMTPS/SMTP (Mail-Submission) : 587 (usually for Email-Clients in residential ISP connections, and usually <a href="https://en.wikipedia.org/wiki/Opportunistic_TLS" rel="nofollow noreferrer">STARTTLS</a> encrypted, but it may use non-encrypted protocol) ; If your ISP/MSP uses STARTTLS then tell/<b>push</b> them to switch into TLS/SSL, as TLS/SSL is more secure than STARTTLS . 
STARTTLS can be abused<sup> <a href="https://www.eff.org/deeplinks/2014/11/starttls-downgrade-attacks" rel="nofollow noreferrer">1</a>, <a href="http://www.telecomasia.net/content/google-yahoo-smtp-email-severs-hit-thailand" rel="nofollow noreferrer">2</a>, <a href="http://www.goldenfrog.com/blog/fcc-must-prevent-isps-blocking-encryption" rel="nofollow noreferrer">3</a>, <a href="https://privacyinternational.org/sites/default/files/2017-10/thailand_2017_0.pdf" rel="nofollow noreferrer">4</a></sup> to violate Privacy-Rights of users: to STEAL-from Or SPY-on users ;<br /> SMTPS/SMTP (Message Submission Over TLS protocol) : 465 (usually for Email-Clients in residential-class connections, and usually TLS/SSL encrypted) ;<br /> HTTPS (Secure-HTTP) : 443 (webmail. web-service. SSL/TLS encrypted. For accessing (view, receive, send) emails by using web-browsers) ;<br /> HTTP : 80 (not-encrypted, not-private) (Avoid using it) ;</p> <p>When info/msg is sent/received by using Not-Encrypted protocol(s) or by using unencrypted (aka open) protocol(s), in such case, email/message contents can be immediately viewed+stored+cached by anyone in the middle, so private-info is not-private anymore.<br /> <br /><br /></p> <p>By the way, my question is NOT about an Email's message (or email body or content) viewing (or writing) formats or choices like these: &quot;Plain Text&quot; Email, or, &quot;HTML&quot; Email.<br /> <br /><br /></p> <p><strong>EXTRA  INFO:</strong><br /> ( PLEASE  AVOID / SKIP  READING  BELOW,<br /> if you have NO time to read more info, or if you have NO-respect that i/someone can have different preferences/choices, etc )</p> <p>Encrypted protocols help to protect information/data privacy, when info/data is transiting/going thru Internet, in-between User's (local) device/computer and remote web server (or remote service provider). Encrypted protocols can keep data private+secured for some short amount of time, until the encryption is weakened/cracked/broken after some time by using various reckless schemes/backdoors by violating user's Privacy-Rights, these schemes/backdoors are also discovered+accessed by many other harmful &amp; more-reckless entities/persons.</p> <ul> <li>If regular person or their children have no &quot;cloth&quot;-protection of their body, &amp; only special-group &amp; rich can have &quot;cloth&quot; (or special+rich are also purposefully removing their cloth), then, those special &amp; rich won, and achieved the harm on regular person (e.g: virus infections, sun-burn/cancer, social-chaos from nudity, hospital+pharma industries make more money, only special/rich/corrupt persons are allowed to do unethical &amp; immoral closed-door secret discussions that affects billions of people, etc backward+uncivilized) . &quot;Encryption&quot; is like &quot;Cloth&quot; in internet, &amp; more. We all must have cloth(real-world)+encryption(cyber-world) . 
All internet devices can have varieties of encryption software, no special hardware is needed for encryption, just math based encryption can work fine on all devices, So all must use one of the available encryption from a common set of encryption , we must work-on real innovative+constructive ways (instead of backward ways or thief's ways) to fix &amp; make sure cloth+encryption not-abused by anyone, but definitely Not by going backward by breaking,removing, backdooring,weakening it , such removal<sup><a href="https://en.wikipedia.org/wiki/Stellar_Wind" rel="nofollow noreferrer">steller-wind</a>, <a href="https://en.wikipedia.org/wiki/PRISM_%28surveillance_program%29" rel="nofollow noreferrer">prism</a>, <a href="https://en.wikipedia.org/wiki/ECHELON" rel="nofollow noreferrer">echelon</a>, <a href="https://en.wikipedia.org/wiki/XKeyscore" rel="nofollow noreferrer">xkeyscore</a>, <a href="https://en.wikipedia.org/wiki/Spying_on_United_Nations_leaders_by_United_States_diplomats" rel="nofollow noreferrer">USA-spy-on-UN</a></sup> of real-encryption has endangered security &amp; privacy of data &amp; human life/safety support/depending systems, etc, that is why Privacy-Rights has high priority &amp; placed at number 4th place as <a href="https://en.wikipedia.org/wiki/Fourth_Amendment_to_the_United_States_Constitution" rel="nofollow noreferrer">4th-Amendment</a><sup><a href="https://www.aclu.org/united-states-bill-rights-first-10-amendments-constitution" rel="nofollow noreferrer">ACLU</a>, <a href="https://www.law.cornell.edu/wex/fourth_amendment" rel="nofollow noreferrer">Law.Cornell.Edu</a>, <a href="https://www.britannica.com/topic/Fourth-Amendment" rel="nofollow noreferrer">B</a></sup> in USA-Constitution (1791) . UN/EU also supports Privacy-Rights (<a href="https://www.un.org/en/ga/search/view_doc.asp?symbol=A/RES/217%28III%29" rel="nofollow noreferrer">1948</a> Article-12 section of <a href="https://www.un.org/en/universal-declaration-human-rights/" rel="nofollow noreferrer">UDHR</a>, also 2014 <a href="https://undocs.org/A/RES/69/166" rel="nofollow noreferrer">Res-69/166</a>, etc), all member-states signed/agreed with it.</li> <li>With Guns,Powers(Lawfares/Abusive-Laws/Impunities) mainly in the hand of one major race of Police/LawEnforcement/JusticeDept side, have created massive civil inequalities &amp; massive systematic crimes+corruption, and it empowered harmful racism, etc, etc , So Guns,Powers,Lawfares,etc need to be equal for all side and all must have equal+same+easy access , that is why we have 2nd-Amendment in 2nd highest priority place . One person or only some-people cannot be above the Law . Law must be applied equally on anyone &amp; all, whoever will meet the Law's criteria . If all cannot have same set of Guns,tools,etc, and, if all do-not have same &amp; easy equal-access to those , then one solution is : all must give-up those Guns,tools,etc &amp; also sacrifice access to those , to create equality &amp; justice for all . Disarming people from their self-protection tools is not-good, only bad people/dictator benefits from absence of those tools, bcuz then they know they do not have to fear people when they will commit more crime or abuse more pople or loot more money from people . All People need training/education on these responsibility, (for example: to handle Vehicles/Cars, driving training+test(s) are needed, right ? 
so to handle those tools, training+tests are also needed ) , and LawEnforcement person needs to have ATLEAST 10-TIMES MORE TRAINING+TEST &amp; atleast 10-TIMES MORE HUMANITY INSIDE THEIR BRAIN+HEART , TO REALLY &quot;SERVE-&amp;-PROTECT&quot; PEOPLE INSTEAD OF &quot;STEAL-&amp;-KILL&quot; their life/privacy,etc . All human need regular/frequent TEST for (real-world) eligibility to carry/have/access these tools to response+stop attacks by evil-people who are inside the country . Similarly, All people must also have equal training &amp; easy-access to similar tools to use inside internet(cyber-world) to response+stop attacks &amp; data-theft by Evil-Corporations, evil-entities, evil-thief-agencies, etc that are inside the country.</li> </ul> <p>End of EXTRA-INFO.</p> <p>END OF DETAILS.</p>
2020-08-04 18:33:24.950000+00:00
2022-02-26 23:58:24.223000+00:00
2022-02-26 23:58:24.223000+00:00
web-services|email|browser|client|lightweight-processes
['https://en.wikipedia.org/wiki/Webmail', 'https://en.wikipedia.org/wiki/Comparison_of_webmail_providers', 'https://en.wikipedia.org/wiki/User_agent', 'https://en.wikipedia.org/wiki/Comparison_of_lightweight_web_browsers', 'https://qutebrowser.org/', 'https://www.mail.com/', 'https://www.thunderbird.net/en-US/thunderbird/all/', 'https://addons.thunderbird.net/en-US/thunderbird/addon/browseintab/?src=search', 'https://addons.thunderbird.net/en-US/thunderbird/addon/open-tab/?src=search', 'https://www.mail.com/', 'https://i.stack.imgur.com/E1m9A.png', 'https://addons.mozilla.org/en-US/firefox/addon/noscript/', 'https://addons.mozilla.org/en-US/firefox/addon/uaswitcher/', 'https://addons.mozilla.org/en-US/firefox/addon/user-agent-string-switcher/', 'https://www.mozilla.org/en-US/firefox/all/', 'https://www.mail.com/', 'https://stackoverflow.com/a/63286125/3553808', 'https://en.wikipedia.org/wiki/OAuth', 'https://www.bbc.co.uk/news/technology-39845545', 'http://homakov.blogspot.co.uk/2013/02/hacking-facebook-with-oauth2-and-chrome.html', 'https://arxiv.org/abs/1601.01229', 'https://datatracker.ietf.org/doc/html/draft-ietf-oauth-security-topics-13.html', 'https://i.stack.imgur.com/DyIKb.png', 'https://i.stack.imgur.com/4eNDd.png', 'https://i.stack.imgur.com/Gd3mf.png', 'https://i.stack.imgur.com/5q136.png', 'https://i.stack.imgur.com/k0TFS.png', 'https://i.stack.imgur.com/zmtV7.png', 'https://mail.yahoo.com/', 'https://login.yahoo.com/?.src=ym&lang=&done=https%3A%2F%2Fmail.yahoo.com%2Fneo%2Fb%2Flaunch', 'https://m.yahoo.com/', 'https://us.m.yahoo.com/p/mail', 'https://i.stack.imgur.com/meSdn.png', 'https://outlook.live.com/', 'https://mssl.mail.live.com/m/?bfv=wm', 'https://mobile.live.com/hm', 'https://profile.live.com/contacts?bfv=um', 'https://mail.live.com/m', 'https://wls.live.com', 'https://mobile.msn.com/pocketpc/', 'https://mail.google.com/', 'https://mail.google.com/mail/u/0/h/1pq68r75kzvdr/?v%3Dlui', 'https://m.gmail.com/', 'https://mail.google.com/mail/x/gdlakb-/gp/', 'https://mail.google.com/a/%5BYour-Domain%5D/x/1gjikl11t3cl1', 'https://www.google.com/ig/mobile?output=pda', 'https://hangouts.google.com/']
47
43,393,252
<p>Yes, Keras is thread safe, if you pay a little attention to it. </p> <p>In fact, in reinforcement learning there is an algorithm called <a href="https://arxiv.org/pdf/1602.01783.pdf" rel="noreferrer">Asynchronous Advantage Actor-Critic (A3C)</a> where each agent relies on the same neural network to tell it what it should do in a given state. In other words, each thread calls <code>model.predict</code> concurrently, as in your problem. An example Keras implementation of it is <a href="https://github.com/jaara/AI-blog/blob/master/CartPole-A3C.py" rel="noreferrer">here</a>.</p> <p>You should, however, pay extra attention to this line if you look into the code: <code>model._make_predict_function() # have to initialize before threading</code></p> <p>This is never mentioned in the Keras docs, but it's necessary to make it work concurrently. In short, <code>_make_predict_function</code> compiles the <code>predict</code> function. In a multi-threaded setting, you have to call this function manually to compile <code>predict</code> in advance; otherwise the <code>predict</code> function will not be compiled until you run it the first time, which is problematic when many threads call it at once. You can see a detailed explanation <a href="https://github.com/fchollet/keras/issues/6124" rel="noreferrer">here</a>.</p> <p>I have not run into any other issues with multi-threading in Keras so far.</p>
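<p>A minimal sketch of this pattern — one shared model, <code>_make_predict_function()</code> called once before the threads start — assuming an older standalone Keras where that private method still exists; the tiny model and random inputs are made up for illustration. (With the TensorFlow 1.x backend you may additionally need to capture the default graph and run each thread's <code>predict</code> inside <code>with graph.as_default():</code>.)</p> <pre><code># Sketch: sharing one Keras model across threads (older standalone Keras).
import threading
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(8, activation='relu', input_shape=(4,)),
                    Dense(2, activation='softmax')])
model._make_predict_function()   # compile predict() once, before threading

def worker(thread_id):
    x = np.random.rand(1, 4)     # each thread uses its own input
    y = model.predict(x)         # concurrent predict calls on the shared model
    print(thread_id, y)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
</code></pre>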
2017-04-13 13:10:07.093000+00:00
2017-04-13 13:10:07.093000+00:00
null
null
40,850,089
<p>I'm using Python and Keras (currently using Theano backend, but I have no qualms with switching). I have a neural network that I load and process multiple sources of information with in parallel. Currently, I've been running each one in a separate process and it loads its own copy of the network from the file. This seems like a waste of RAM, so I was thinking it would be more efficient to have a single multi-threaded process with one instance of the network that is used by all threads. However, I'm wondering if Keras is thread safe with either backend. If I run <code>.predict(x)</code> on two different inputs at the same time in different threads, will I run into race conditions or other issues?</p> <p>Thanks</p>
2016-11-28 17:25:10.120000+00:00
2019-12-08 17:04:12.397000+00:00
null
python|multithreading|keras
['https://arxiv.org/pdf/1602.01783.pdf', 'https://github.com/jaara/AI-blog/blob/master/CartPole-A3C.py', 'https://github.com/fchollet/keras/issues/6124']
3
52,480,260
<p><strong>Note:</strong> This is more of a comment, but it was too long to fit in the comments section. </p> <p>Matlab does not necessarily provide the fastest means of generating random numbers. One extreme case is the binomial random variable, which Matlab generates by drawing <code>n</code> Bernoulli variables and summing them. Your example is simply another such case. </p> <p>I suggest you either </p> <ul> <li><p>implement the sampling yourself, so you can tweak it for your needs (a rough sketch of this idea is shown below),</p></li> <li><p>or use the sampler by Chopin — see the paper <a href="https://arxiv.org/abs/1201.6140" rel="nofollow noreferrer">here</a>, which you can get <a href="http://miv.u-strasbg.fr/mazet/rtnorm/rtnormM.zip" rel="nofollow noreferrer">here</a> — which is (to my knowledge) the newest such algorithm.</p></li> </ul> <p>Please note that even though "truncating" sounds like it should make things easier/faster, this is not necessarily the case, especially nowadays when very fast generators for the normal distribution exist. On the other hand, a 1000x slowdown is too large a penalty compared to better methods.</p>
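<p>To illustrate the "implement the sampling yourself" option: a rough sketch of rejection sampling from a truncated distribution, written in Python/NumPy purely to show the idea (the bounds and the stand-in generator are made up; in Matlab you would draw from your fast un-truncated generator, e.g. <code>expectation.random</code>, and keep only the in-range values). Because the question's truncation bounds span essentially all of the fitted distribution's mass, nearly every draw is accepted, so this costs barely more than un-truncated sampling.</p> <pre><code># Sketch: truncated sampling via simple rejection (illustrative values).
import numpy as np

rng = np.random.default_rng(0)
a, b = -3.0, 3.0   # truncation bounds (made up for this sketch)

def sample_untruncated(n):
    # stands in for whatever fast un-truncated generator you already have
    return rng.normal(0.0, 1.0, size=n)

def sample_truncated(n):
    out = []
    while len(out) &lt; n:
        draw = sample_untruncated(n)                   # draw a batch
        out.extend(d for d in draw if a &lt;= d &lt;= b)     # reject out-of-range draws
    return np.array(out[:n])

samples = sample_truncated(10000)
</code></pre>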
2018-09-24 13:11:20+00:00
2018-09-24 13:27:56.163000+00:00
2018-09-24 13:27:56.163000+00:00
null
52,467,943
<p>I would prefer to truncate my Distribution, but at the moment it is simply not possible given the time penalty.</p> <p>Standard Kernel Distribution:</p> <pre><code>expectation=fitdist(BTS,'kernel'); </code></pre> <p>Result:</p> <pre><code>tic;expectation.random(10000,1);toc; Elapsed time is 0.000745 seconds. </code></pre> <p>Truncate Code:</p> <pre><code>Exp{i,j}=truncate(expectation,min(BTS)-1,max(BTS)+1); </code></pre> <p>Result:</p> <pre><code>tic;random(Exp{i,j},1,10000);toc Elapsed time is 0.772295 seconds. </code></pre>
2018-09-23 16:12:13.020000+00:00
2018-09-24 13:27:56.163000+00:00
2018-09-24 08:18:58.537000+00:00
matlab|random|truncate
['https://arxiv.org/abs/1201.6140', 'http://miv.u-strasbg.fr/mazet/rtnorm/rtnormM.zip']
2
53,529,436
<p>This is an attempt to justify the rationale of Spark here, and it should be read as a complement to the nice <em>programming</em> explanation already provided as an answer...</p> <p>To start with, how exactly individual word embeddings should be combined is not in principle a feature of the Word2Vec model itself (which is about, well, <em>individual</em> words), but an issue of concern to "higher order" models, such as <a href="https://github.com/klb3713/sentence2vec" rel="noreferrer">Sentence2Vec</a>, Paragraph2Vec, <a href="https://radimrehurek.com/gensim/models/doc2vec.html" rel="noreferrer">Doc2Vec</a>, <a href="https://wikipedia2vec.github.io/wikipedia2vec/" rel="noreferrer">Wikipedia2Vec</a> etc (you could name a few more, I guess...).</p> <p>Having said that, it turns out indeed that a very first approach in combining word vectors in order to get vector representations of larger pieces of text (phrases, sentences, tweets etc) is indeed to simply average the vector representations of the constituent words, as Spark ML does. </p> <p>Starting from the practitioner community, we have:</p> <p><a href="https://stackoverflow.com/questions/36731784/wordvectors-how-to-concatenate-word-vectors-to-form-sentence-vector">How to concatenate word vectors to form sentence vector</a> (SO answer):</p> <blockquote> <p>There are at least three common ways to combine embedding vectors; (a) summing, (b) summing &amp; averaging or (c) concatenating. [...] See <code>gensim.models.doc2vec.Doc2Vec</code>, <code>dm_concat</code> and <code>dm_mean</code> - it allows you to use any of those three options</p> </blockquote> <p><a href="https://medium.com/@premrajnarkhede/sentence2vec-evaluation-of-popular-theories-part-i-simple-average-of-word-vectors-3399f1183afe" rel="noreferrer">Sentence2Vec : Evaluation of popular theories — Part I (Simple average of word vectors)</a> (blog post):</p> <blockquote> <p>So what’s first thing that comes to your mind when you have word vectors and need to calculate sentence vector.</p> <p>Just average them?</p> <p>Yes that’s what we are going to do here. <a href="https://i.stack.imgur.com/878in.png" rel="noreferrer"><img src="https://i.stack.imgur.com/878in.png" alt="enter image description here"></a></p> </blockquote> <p><a href="https://github.com/stanleyfok/sentence2vec" rel="noreferrer">Sentence2Vec</a> (Github repo):</p> <blockquote> <p>Word2Vec can help to find other words with similar semantic meaning. However, Word2Vec can only take 1 word each time, while a sentence consists of multiple words. To solve this, I write the Sentence2Vec, which is actually a wrapper to Word2Vec. To obtain the vector of a sentence, I simply get the averaged vector sum of each word in the sentence.</p> </blockquote> <p>It certainly seems that, at least for practitioners, this simple averaging of the individual word vectors is far from unexpected. </p> <p>An expected counter-argument here is that blog posts and SO answers are arguably not <em>that</em> credible sources; what about the <em>researchers</em> and the relevant <em>scientific literature</em>? 
Well, it turns out that this simple averaging is far from uncommon here, too:</p> <p>From <a href="https://arxiv.org/abs/1405.4053" rel="noreferrer">Distributed Representations of Sentences and Documents</a> (Le &amp; Mikolov, Google, ICML 2014):</p> <p><a href="https://i.stack.imgur.com/dstJK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/dstJK.png" alt="enter image description here"></a></p> <p>From <a href="http://www.aclweb.org/anthology/S17-2100" rel="noreferrer">NILC-USP at SemEval-2017 Task 4: A Multi-view Ensemble for Twitter Sentiment analysis</a> (SemEval 2017, section 2.1.2):</p> <p><a href="https://i.stack.imgur.com/7MHAH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7MHAH.png" alt="enter image description here"></a></p> <hr> <p>It should be clear by now that the particular design choice in Spark ML is far from arbitrary, or even uncommon; I have blogged about what certainly seem as <em>absurd</em> design choices in Spark ML (see <a href="https://www.nodalpoint.com/spark-classification/" rel="noreferrer">Classification in Spark 2.0: “Input validation failed” and other wondrous tales</a>), but it seems that this is not such a case...</p>
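<p>For completeness, here is a minimal sketch (plain Python/NumPy rather than Spark, with made-up toy vectors) of the averaging scheme discussed above; it produces exactly one fixed-length vector per sentence, which is what Spark ML's <code>Word2Vec</code> transformer returns as well:</p> <pre><code>import numpy as np

# Toy word vectors; the values are invented purely for illustration.
word_vectors = {
    "hi":    np.array([0.1, 0.3, -0.2]),
    "heard": np.array([0.0, 0.5,  0.1]),
    "spark": np.array([0.4, -0.1,  0.2]),
}

def sentence_vector(words, vectors):
    """Average the vectors of the words that have an embedding."""
    known = [vectors[w] for w in words if w in vectors]
    if not known:
        return np.zeros(len(next(iter(vectors.values()))))
    return np.mean(known, axis=0)

print(sentence_vector(["hi", "heard", "spark"], word_vectors))
</code></pre>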
2018-11-28 23:04:34.547000+00:00
2018-11-28 23:10:28.167000+00:00
2018-11-28 23:10:28.167000+00:00
null
53,272,749
<p>Running the <a href="https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/ml/JavaWord2VecExample.java" rel="nofollow noreferrer">Spark's example for Word2Vec</a>, I realized that it takes in an array of string and gives out a vector. My question is, shouldn't it return a matrix instead of a vector? I was expecting one vector per input word. But it returns one vector period!</p> <p>Or maybe it should have accepted string, instead of an array of strings (one word) as input. Then, yeah sure, it could return one vector as output. But accepting an array of strings and returning one single vector does not make sense to me.</p> <p><strong>[UPDATE]</strong></p> <p>Per @Shaido's request, here's the code with my minor change to print the schema for the output:</p> <pre><code>public class JavaWord2VecExample { public static void main(String[] args) { SparkSession spark = SparkSession .builder() .appName("JavaWord2VecExample") .getOrCreate(); // $example on$ // Input data: Each row is a bag of words from a sentence or document. List&lt;Row&gt; data = Arrays.asList( RowFactory.create(Arrays.asList("Hi I heard about Spark".split(" "))), RowFactory.create(Arrays.asList("I wish Java could use case classes".split(" "))), RowFactory.create(Arrays.asList("Logistic regression models are neat".split(" "))) ); StructType schema = new StructType(new StructField[]{ new StructField("text", new ArrayType(DataTypes.StringType, true), false, Metadata.empty()) }); Dataset&lt;Row&gt; documentDF = spark.createDataFrame(data, schema); // Learn a mapping from words to Vectors. Word2Vec word2Vec = new Word2Vec() .setInputCol("text") .setOutputCol("result") .setVectorSize(7) .setMinCount(0); Word2VecModel model = word2Vec.fit(documentDF); Dataset&lt;Row&gt; result = model.transform(documentDF); for (Row row : result.collectAsList()) { List&lt;String&gt; text = row.getList(0); System.out.println("Schema: " + row.schema()); Vector vector = (Vector) row.get(1); System.out.println("Text: " + text + " =&gt; \nVector: " + vector + "\n"); } // $example off$ spark.stop(); } } </code></pre> <p>And it prints:</p> <pre><code>Schema: StructType(StructField(text,ArrayType(StringType,true),false), StructField(result,org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7,true)) Text: [Hi, I, heard, about, Spark] =&gt; Vector: [-0.0033279924420639875,-0.0024428479373455048,0.01406305879354477,0.030621735751628878,0.00792500376701355,0.02839711122214794,-0.02286271695047617] Schema: StructType(StructField(text,ArrayType(StringType,true),false), StructField(result,org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7,true)) Text: [I, wish, Java, could, use, case, classes] =&gt; Vector: [-9.96453288410391E-4,-0.013741840076233658,0.013064394239336252,-0.01155538750546319,-0.010510949650779366,0.004538436819400106,-0.0036846946126648356] Schema: StructType(StructField(text,ArrayType(StringType,true),false), StructField(result,org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7,true)) Text: [Logistic, regression, models, are, neat] =&gt; Vector: [0.012510885251685977,-0.014472834207117558,0.002779599279165268,0.0022389178164303304,0.012743516173213721,-0.02409198731184006,0.017409833287820222] </code></pre> <p>Please correct me if I'm wrong, but the input is an array of strings and the output is a single vector. And I was expecting each word to be mapped into a vector.</p>
2018-11-13 02:08:17.443000+00:00
2020-04-07 02:45:33.050000+00:00
2020-03-09 20:38:39.647000+00:00
java|apache-spark|machine-learning|word2vec|apache-spark-ml
['https://github.com/klb3713/sentence2vec', 'https://radimrehurek.com/gensim/models/doc2vec.html', 'https://wikipedia2vec.github.io/wikipedia2vec/', 'https://stackoverflow.com/questions/36731784/wordvectors-how-to-concatenate-word-vectors-to-form-sentence-vector', 'https://medium.com/@premrajnarkhede/sentence2vec-evaluation-of-popular-theories-part-i-simple-average-of-word-vectors-3399f1183afe', 'https://i.stack.imgur.com/878in.png', 'https://github.com/stanleyfok/sentence2vec', 'https://arxiv.org/abs/1405.4053', 'https://i.stack.imgur.com/dstJK.png', 'http://www.aclweb.org/anthology/S17-2100', 'https://i.stack.imgur.com/7MHAH.png', 'https://www.nodalpoint.com/spark-classification/']
12
55,829,860
<p>Yes, you can build a minimal perfect hash function (MPHF) at runtime. There are multiple algorithms you can use, but most of them are a bit complex to implement so I can't give you working sample code. Many are implemented in the <a href="http://cmph.sourceforge.net/" rel="nofollow noreferrer">cmph project</a>.</p> <p>The simplest one is probably BDZ. On a high level, lookup requires calculating 3 hash functions, and 3 memory accesses. If memory isn't an issue, you only need 2. It supports millions of keys. This algorithm requires a lookup table that is about 1.23 times the number of entries, when using 3 hash functions, and with 2 bits per entry.</p> <p>There are other algorithms, one I invented myself, <a href="https://www.slideshare.net/ThomasMueller12/recsplit-minimal-perfect-hashing" rel="nofollow noreferrer">the RecSplit algorithm</a> (there's even a <a href="https://arxiv.org/abs/1910.06416" rel="nofollow noreferrer">research paper</a> now), and there is a <a href="https://github.com/vigna/sux/blob/master/sux/function/RecSplit.hpp" rel="nofollow noreferrer">C++ implementation</a>, and <a href="https://github.com/thomasmueller/minperf/blob/master/src/main/java/org/minperf/hem/recsplit/FastGenerator.java" rel="nofollow noreferrer">Java</a> right now. Basically, the algorithm finds a way to split the set into subsets (recursively), until the subset size is 1. You need to remember how you split. The simplest solution is in fact using a lookup table for &quot;how you split&quot;, but the table is really small, possibly only 5 integers for 64 keys. The first one is used to divide into 4 subsets of 16, and 4 more to map each subset to a number 0..15.</p> <p>(I added a second answer if you don't strictly need a <em>minimal</em> perfect hash function, just a <em>perfect</em> hash function. Construction is simpler and lookup is a lot faster, but requires a larger array.)</p>
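<p>To make that last point concrete, here is a deliberately naive Python sketch of the &quot;perfect, but not minimal, with a larger array&quot; idea: search for a seed whose hash is collision-free over the key set. This is not BDZ or RecSplit, just an illustration, and the table size and key values are made up; it is only practical because the question deals with small key sets.</p> <pre><code>import random

def build_perfect_hash(keys, max_tries=1000):
    """Naive perfect (not minimal) hash: find a seed so that
    hash((seed, key)) % table_size is collision-free over the keys."""
    n = len(keys)
    table_size = max(16, n * n)   # generous size: trades memory for a quick search
    for _ in range(max_tries):
        seed = random.getrandbits(32)
        slots = {hash((seed, k)) % table_size for k in keys}
        if len(slots) == n:       # no collisions for this seed
            return seed, table_size
    raise RuntimeError("no collision-free seed found; grow table_size")

thread_ids = list(range(1000, 1064))          # 64 hypothetical thread ids
seed, size = build_perfect_hash(thread_ids)

def lookup(key):
    return hash((seed, key)) % size           # index into the data array

assert len({lookup(k) for k in thread_ids}) == len(thread_ids)
</code></pre>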
2019-04-24 12:17:21.280000+00:00
2022-01-02 21:01:19.567000+00:00
2022-01-02 21:01:19.567000+00:00
null
55,824,130
<p>I recently read this article <a href="http://stevehanov.ca/blog/?id=119" rel="nofollow noreferrer">Throw away the keys: Easy, Minimal Perfect Hashing</a> about generating a minimal perfect hash table for a known set of keys.</p> <p>The article seems to assume that you need an intermediate table. Is there any other, simpler way to generate such a function if we assume that the set of keys is small (i.e. &lt; 64)?</p> <p>In my case, I want to map a set of thread IDs to a unique block of data within an array. The threads are started before the hash function is generated and stay constant during the running time of the program. The exact number of threads varies but stays fixed during the runtime of the program:</p> <pre><code>unsigned int *thread_ids; unsigned int thread_count; struct { /* Some thread specific data */ } *ThreadData; int start_threads () { /* Code which starts the threads and allocates the threaddata. */ } int f(unsigned int thread_id) { /* return unique index into threadData */ } int main() { thread_count = 64; /* This number will be small, e.g. &lt; 64 */ start_threads(); ThreadData[f(thread_ids[0])] } </code></pre>
2019-04-24 07:03:02.550000+00:00
2022-01-02 21:01:19.567000+00:00
2019-05-08 10:23:00.290000+00:00
c|algorithm|hash|perfect-hash
['http://cmph.sourceforge.net/', 'https://www.slideshare.net/ThomasMueller12/recsplit-minimal-perfect-hashing', 'https://arxiv.org/abs/1910.06416', 'https://github.com/vigna/sux/blob/master/sux/function/RecSplit.hpp', 'https://github.com/thomasmueller/minperf/blob/master/src/main/java/org/minperf/hem/recsplit/FastGenerator.java']
5
28,783,506
<p>The fastest substring search algorithm is going to depend on the context:</p> <ol> <li>the alphabet size (e.g. DNA vs English)</li> <li>the needle length</li> </ol> <p>The 2010 paper <a href="http://arxiv.org/pdf/1012.2547v1.pdf">"The Exact String Matching Problem: a Comprehensive Experimental Evaluation"</a> gives tables with runtimes for 51 algorithms (with different alphabet sizes and needle lengths), so you can pick the best algorithm for your context.</p> <p>All of those algorithms have C implementations, as well as a test suite, here:</p> <p><a href="http://www.dmi.unict.it/~faro/smart/algorithms.php">http://www.dmi.unict.it/~faro/smart/algorithms.php</a></p>
2015-02-28 15:40:34.017000+00:00
2015-02-28 17:32:01.930000+00:00
2015-02-28 17:32:01.930000+00:00
null
3,183,582
<p>OK, so I don't sound like an idiot I'm going to state the problem/requirements more explicitly:</p> <ul> <li>Needle (pattern) and haystack (text to search) are both C-style null-terminated strings. No length information is provided; if needed, it must be computed.</li> <li>Function should return a pointer to the first match, or <code>NULL</code> if no match is found.</li> <li><strong>Failure cases are not allowed. This means any algorithm with non-constant (or large constant) storage requirements will need to have a fallback case for allocation failure (and performance in the fallback care thereby contributes to worst-case performance).</strong></li> <li>Implementation is to be in C, although a good description of the algorithm (or link to such) without code is fine too.</li> </ul> <p>...as well as what I mean by "fastest":</p> <ul> <li>Deterministic <code>O(n)</code> where <code>n</code> = haystack length. (But it may be possible to use ideas from algorithms which are normally <code>O(nm)</code> (for example rolling hash) if they're combined with a more robust algorithm to give deterministic <code>O(n)</code> results).</li> <li>Never performs (measurably; a couple clocks for <code>if (!needle[1])</code> etc. are okay) worse than the naive brute force algorithm, especially on very short needles which are likely the most common case. (Unconditional heavy preprocessing overhead is bad, as is trying to improve the linear coefficient for pathological needles at the expense of likely needles.)</li> <li>Given an arbitrary needle and haystack, comparable or better performance (no worse than 50% longer search time) versus any other widely-implemented algorithm.</li> <li>Aside from these conditions, I'm leaving the definition of "fastest" open-ended. A good answer should explain why you consider the approach you're suggesting "fastest".</li> </ul> <p>My current implementation runs in roughly between 10% slower and 8 times faster (depending on the input) than glibc's implementation of Two-Way.</p> <p><strong>Update: My current optimal algorithm is as follows:</strong></p> <ul> <li>For needles of length 1, use <code>strchr</code>.</li> <li>For needles of length 2-4, use machine words to compare 2-4 bytes at once as follows: Preload needle in a 16- or 32-bit integer with bitshifts and cycle old byte out/new bytes in from the haystack at each iteration. Every byte of the haystack is read exactly once and incurs a check against 0 (end of string) and one 16- or 32-bit comparison.</li> <li>For needles of length >4, use Two-Way algorithm with a bad shift table (like Boyer-Moore) which is applied only to the last byte of the window. To avoid the overhead of initializing a 1kb table, which would be a net loss for many moderate-length needles, I keep a bit array (32 bytes) marking which entries in the shift table are initialized. Bits that are unset correspond to byte values which never appear in the needle, for which a full-needle-length shift is possible.</li> </ul> <p>The big questions left in my mind are:</p> <ul> <li>Is there a way to make better use of the bad shift table? 
Boyer-Moore makes best use of it by scanning backwards (right-to-left) but Two-Way requires a left-to-right scan.</li> <li>The only two viable candidate algorithms I've found for the general case (no out-of-memory or quadratic performance conditions) are <a href="http://www-igm.univ-mlv.fr/~lecroq/string/node26.html" rel="noreferrer">Two-Way</a> and <a href="http://www-igm.univ-mlv.fr/~lecroq/string/node27.html" rel="noreferrer">String Matching on Ordered Alphabets</a>. But are there easily-detectable cases where different algorithms would be optimal? Certainly many of the <code>O(m)</code> (where <code>m</code> is needle length) in space algorithms could be used for <code>m&lt;100</code> or so. It would also be possible to use algorithms which are worst-case quadratic if there's an easy test for needles which provably require only linear time.</li> </ul> <p>Bonus points for:</p> <ul> <li>Can you improve performance by assuming the needle and haystack are both well-formed UTF-8? (With characters of varying byte lengths, well-formed-ness imposes some string alignment requirements between the needle and haystack and allows automatic 2-4 byte shifts when a mismatching head byte is encountered. But do these constraints buy you much/anything beyond what maximal suffix computations, good suffix shifts, etc. already give you with various algorithms?)</li> </ul> <p><strong>Note:</strong> I'm well aware of most of the algorithms out there, just not how well they perform in practice. Here's a good reference so people don't keep giving me references on algorithms as comments/answers: <a href="http://www-igm.univ-mlv.fr/~lecroq/string/index.html" rel="noreferrer">http://www-igm.univ-mlv.fr/~lecroq/string/index.html</a></p>
2010-07-06 04:58:27.867000+00:00
2021-11-04 13:50:53.010000+00:00
2010-07-07 13:48:20.853000+00:00
c|algorithm|string|substring
['http://arxiv.org/pdf/1012.2547v1.pdf', 'http://www.dmi.unict.it/~faro/smart/algorithms.php']
2
30,035,042
<p>The idea of <a href="http://en.wikipedia.org/wiki/Gradient_boosting">gradient boosting</a> is that an ensemble model is built from black-box weak models. You can surely use VW as the black box, but note that VW does not offer decision trees, which are the most popular choice for the black-box weak models in boosting. Boosting in general decreases bias (and increases variance), so you should make sure that the VW models have low variance (no overfitting). See <a href="http://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff">bias-variance tradeoff</a>.</p> <p>There are some reductions related to boosting and bagging in VW:</p> <ul> <li><code>--autolink N</code> adds a link function with polynomial N, which can be considered a simple way of boosting.</li> <li><code>--log_multi K</code> is an online boosting algorithm for K-class classification. See <a href="http://arxiv.org/pdf/1406.1822">the paper</a>. You can use it even for binary classification (K=2), but not for regression.</li> <li><code>--bootstrap M</code> M-way bootstrap by online importance resampling. Use <code>--bs_type=vote</code> for classification and <code>--bs_type=mean</code> for regression. Note that this is <a href="http://en.wikipedia.org/wiki/Bootstrap_aggregating">bagging</a>, not boosting.</li> <li><code>--boosting N</code> (added on 2015-06-17) online boosting with N weak learners, see <a href="http://arxiv.org/abs/1502.02651">a theoretic paper</a></li> </ul>
2015-05-04 16:27:48.610000+00:00
2015-06-17 13:29:49.910000+00:00
2015-06-17 13:29:49.910000+00:00
null
30,008,991
<p>Is there a way to use gradient boosting on regression using Vowpal Wabbit? I use various techniques that come with Vowpal Wabbit that are helpful. I want to try gradient boosting along with that, but I can't find a way to implement gradient boosting on VW. </p>
2015-05-03 00:28:09.727000+00:00
2015-06-17 13:29:49.910000+00:00
null
machine-learning|vowpalwabbit
['http://en.wikipedia.org/wiki/Gradient_boosting', 'http://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff', 'http://arxiv.org/pdf/1406.1822', 'http://en.wikipedia.org/wiki/Bootstrap_aggregating', 'http://arxiv.org/abs/1502.02651']
5
29,197,474
<p>Question type classification is generally approached like any other text classification problem, thus there is a wide variety of algorithms from simple Naive Bayes to convolutional neural networks that can do this task without additional preprocessing (see for example <a href="http://arxiv.org/pdf/1408.5882v2.pdf" rel="nofollow">this paper</a> for a review of conventional methods for question type classification and <a href="http://arxiv.org/pdf/1408.5882v2.pdf" rel="nofollow">this one</a> for an example application of convnets). Performance, of course, may vary depending on your task specifics.</p>
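<p>To illustrate the &quot;just treat it as text classification&quot; point, here is a minimal scikit-learn sketch with a Naive Bayes classifier. The questions, labels and coarse answer types are invented purely for the example; a real system would need a proper labelled training set (for instance the TREC question classification data).</p> <pre><code>from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy data: map questions to coarse answer types.
questions = ["who wrote hamlet",
             "when did world war two end",
             "where is the eiffel tower",
             "who painted the mona lisa"]
labels = ["PERSON", "DATE", "LOCATION", "PERSON"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(questions, labels)
print(clf.predict(["who invented the telephone"]))  # expect ['PERSON'] on this toy data
</code></pre>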
2015-03-22 17:24:28.783000+00:00
2015-03-22 17:24:28.783000+00:00
null
null
29,191,955
<p>I have searched so many research papers. but I did not find a good procedure to do this. </p> <p>how to identify the question type and answer type detection in natural language processing with out using entity recognition?</p>
2015-03-22 07:20:42.460000+00:00
2015-03-22 17:24:28.783000+00:00
null
machine-learning|nlp
['http://arxiv.org/pdf/1408.5882v2.pdf', 'http://arxiv.org/pdf/1408.5882v2.pdf']
2
61,187,546
<p>mobilenet-ssd - is great for large objects, yet its performance for small objects is pretty poor. It is always better to train with anchors tuned to the objects' aspect ratios and sizes you expect. One more thing to take into account is that the first branch is the one which detects the smallest objects - the resolution of this branch is 1/16 of the input - you should consider adding another branch at the 1/8 feature map - which will help with small objects.</p> <p><strong><em>How to change anchor sizes and aspect ratios:</em></strong> Let us take for example the pipeline.config file which is being used for the training configuration - <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v2_coco.config" rel="noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v2_coco.config</a>. You will find there the following arguments:</p> <pre><code> 90 anchor_generator { 91 ssd_anchor_generator { 92 num_layers: 6 93 min_scale: 0.20000000298 94 max_scale: 0.949999988079 95 aspect_ratios: 1.0 96 aspect_ratios: 2.0 97 aspect_ratios: 0.5 98 aspect_ratios: 3.0 99 aspect_ratios: 0.333299994469 100 } 101 } </code></pre> <ul> <li><em>num_layers</em> - number of branches - starts from a branch at 1/16 of the input...</li> <li><em>min_scale</em> / <em>max_scale</em> - min_scale corresponds to the scale of the anchors in the first branch, max_scale corresponds to the scale of the last branch, while all the branches in between get their scale from linear interpolation: <code>min_scale + (max_scale - min_scale)/(num_layers - 1) * (#branch)</code> (same as defined in <strong>SSD: Single Shot MultiBox Detector</strong> - <a href="https://arxiv.org/pdf/1512.02325.pdf" rel="noreferrer">https://arxiv.org/pdf/1512.02325.pdf</a>)</li> <li>aspect_ratios - the list of aspect ratios defines the anchors - this way you can decide what AR anchors to add. AR=1.0 means a square anchor, while 2.0 means that the anchor is landscape - its width is x2 the height, and 0.5 means portrait where the height is x2 the width... the code can be found in the following paths: <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/anchor_generators/grid_anchor_generator.py" rel="noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/anchor_generators/grid_anchor_generator.py</a> and <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/anchor_generators/multiscale_grid_anchor_generator.py" rel="noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/anchor_generators/multiscale_grid_anchor_generator.py</a></li> <li>One more thing is that in mobilenet-v1-ssd the first branch has only 3 anchors; I'm not sure how many mobilenet-v2-ssd has, but you may want to add more anchors. You will need to change it in the code (in <strong>multiple_grid_anchor_generator.py</strong>, lines 320-321): <code>if layer == 0 and reduce_boxes_in_lowest_layer: layer_box_specs = [(0.1, 1.0), (scale, 2.0), (scale, 0.5)]</code> - as you see, it is hard coded to be three anchors...</li> </ul> <p><strong>How to start the branches earlier</strong></p> <p>This also needs to be changed inside the code. Each predefined model has its own model file - i.e.
ssd_mobilenet_v2: <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/models/ssd_mobilenet_v2_feature_extractor.py" rel="noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/models/ssd_mobilenet_v2_feature_extractor.py</a></p> <p><strong>lines 111:117</strong></p> <pre><code>feature_map_layout = { 'from_layer': ['layer_15/expansion_output', 'layer_19', '', '', '', '' ][:self._num_layers], 'layer_depth': [-1, -1, 512, 256, 256, 128][:self._num_layers], 'use_depthwise': self._use_depthwise, 'use_explicit_padding': self._use_explicit_padding, } </code></pre> <p>You can choose what layers to start from by their name.</p> <p>Now for my 2 cents, I didn't try mobilenet-v2-ssd, I mainly used mobilenet-v1-ssd, but from my experience it is not a good model for small objects. I guess it can be optimized a little bit by editing the anchors, but I am not sure if it will be sufficient for your needs. For a one-stage SSD-like network, consider using <strong>ssd_mobilenet_v1_fpn_coco</strong> - it works on 640x640 input size, and its first branch starts at 1/8 input size. (cons - bigger model, and higher inference time)</p>
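<p>As a quick sanity check of the scale interpolation mentioned above, here is a tiny Python sketch that just evaluates the formula for the default config values shown earlier (nothing TensorFlow-specific, purely illustrative):</p> <pre><code># scale_k = min_scale + (max_scale - min_scale) / (num_layers - 1) * k
num_layers = 6
min_scale, max_scale = 0.2, 0.95

scales = [min_scale + (max_scale - min_scale) / (num_layers - 1) * k
          for k in range(num_layers)]
print([round(s, 3) for s in scales])   # [0.2, 0.35, 0.5, 0.65, 0.8, 0.95]
</code></pre> <p>Lowering min_scale (and/or adding the earlier 1/8 branch) is what shifts these anchors towards smaller objects; after editing the config or the code you need to retrain.</p>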
2020-04-13 12:07:11.887000+00:00
2020-04-13 12:07:11.887000+00:00
null
null
60,537,788
<p>I am trying to detect small objects from ipcam videostreams using ssd mobilenetv2. The model was trained on the high resolution images of these small objects where the objects are very close to the camera.Images were downloaded from internet. I found that changing the anchorbox scales and modifying feature extractor.py are the proposed solutions to overcome this. Can anyone guide me how to do this?</p>
2020-03-05 03:25:17.713000+00:00
2020-04-13 12:07:11.887000+00:00
null
tensorflow|object|detection|mobilenet
['https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v2_coco.config', 'https://arxiv.org/pdf/1512.02325.pdf', 'https://github.com/tensorflow/models/blob/master/research/object_detection/anchor_generators/grid_anchor_generator.py', 'https://github.com/tensorflow/models/blob/master/research/object_detection/anchor_generators/multiscale_grid_anchor_generator.py', 'https://github.com/tensorflow/models/blob/master/research/object_detection/models/ssd_mobilenet_v2_feature_extractor.py']
5
44,526,311
<p>The prediction is usually made through an output softmax layer that gives the probabilities for all words in the vocabulary.</p> <p>However, a recent paper suggests tying the input word vectors with the output word classifiers and training them end-to-end. This significantly reduces the number of parameters. <a href="https://arxiv.org/abs/1611.01462" rel="nofollow noreferrer">https://arxiv.org/abs/1611.01462</a></p> <p>With regards to architectures, at least for training I would prefer the second option since the first one loses information about the second and third words that can also be used for training.</p>
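<p>A minimal PyTorch-style sketch of that weight-tying idea (this is not the paper's exact model; the sizes and names are arbitrary, and it only works when the embedding and hidden dimensions match):</p> <pre><code>import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size=10000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size, bias=False)
        self.out.weight = self.embed.weight      # tie input and output embeddings

    def forward(self, token_ids):
        h, _ = self.rnn(self.embed(token_ids))
        return self.out(h)                       # logits over the vocabulary

model = TinyLM()
logits = model(torch.randint(0, 10000, (2, 7)))  # batch of 2 sequences of 7 tokens
print(logits.shape)                              # torch.Size([2, 7, 10000])
</code></pre> <p>Training would then use a cross-entropy loss against the index of the next word, i.e. the softmax-over-vocabulary setup from the first paragraph, rather than a mean-squared error against the target word2vec vector.</p>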
2017-06-13 15:54:26.940000+00:00
2017-06-13 15:54:26.940000+00:00
null
null
44,524,630
<p>I'm trying to predict word with recurrent neural network. I'm training network by putting independently pre-trained <code>word2vec</code> of words as input.</p> <p>And I wonder if I can use <code>word2vec</code> of target word to calculate error cost. It seems not working and I've never seen such examples or papers. Is it possible to use word2vec as a target value for calculating error cost? If so, what kind of cost function should I use? If not, please explain the reason mathematically.</p> <p>And how should I set input and target? Now I'm using architecture like below :</p> <pre><code>input : word1, word2, word3, target : word4 input : word1, word2, word3, word4, target : word5 </code></pre> <p>Maybe I can use another option like :</p> <pre><code>input : word1, word2 target : word2, word3 input : word1, word2, word3, target : word2, word3, word4 </code></pre> <p>Which one is better? Or is there another option?</p> <p>If there's any reference let me know.</p>
2017-06-13 14:39:12.213000+00:00
2017-06-13 21:48:50.390000+00:00
2017-06-13 21:48:50.390000+00:00
nlp|recurrent-neural-network|word2vec
['https://arxiv.org/abs/1611.01462']
1
52,090,325
<p>You can find Moire Pattern in <a href="https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_transforms/py_fourier_transform/py_fourier_transform.html" rel="nofollow noreferrer">Fourier transformed</a> image.</p> <p>If you want to remove it, apply median filter and inverse Fourier transform.</p> <p>See <a href="https://arxiv.org/ftp/arxiv/papers/1701/1701.09037.pdf" rel="nofollow noreferrer">this</a> paper.</p>
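<p>The question is about iOS/Swift, but as a language-agnostic illustration of the &quot;median-filter the spectrum, then inverse transform&quot; idea, here is a rough NumPy/SciPy sketch. The filter size is a guess, not a value from the paper, and a real implementation (e.g. with OpenCV on iOS) would need tuning:</p> <pre><code>import numpy as np
from scipy.ndimage import median_filter

def suppress_moire(gray, filter_size=5):
    """Median-filter the magnitude spectrum and reconstruct the image."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    magnitude, phase = np.abs(spectrum), np.angle(spectrum)
    smoothed = median_filter(magnitude, size=filter_size)  # suppresses isolated peaks
    cleaned = smoothed * np.exp(1j * phase)
    return np.real(np.fft.ifft2(np.fft.ifftshift(cleaned)))

# hypothetical usage:
# result = suppress_moire(cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE))
</code></pre> <p>Moire fringes show up as isolated bright peaks away from the centre of the spectrum, so looking for such peaks is also a way to merely <em>detect</em> the pattern rather than remove it.</p>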
2018-08-30 06:15:41.257000+00:00
2018-08-30 06:15:41.257000+00:00
null
null
52,089,620
<p>Is there any way to find a Moire Pattern in an image that I can use in my iOS app using Swift and maybe OpenCV?</p> <p>Any help would be appreciated.</p>
2018-08-30 05:13:14.047000+00:00
2019-03-04 05:14:34.837000+00:00
2018-08-30 06:32:02.720000+00:00
ios|swift|image|opencv
['https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_transforms/py_fourier_transform/py_fourier_transform.html', 'https://arxiv.org/ftp/arxiv/papers/1701/1701.09037.pdf']
2
55,358,143
<p>If a (close) approximation of the median is OK for your purposes, you should consider computing a <em>median of medians</em>, which is a divide and conquer strategy that can be executed in parallel. In principle, <em>MoM</em> has <code>O(n)</code> complexity for serial execution, approaching <code>O(1)</code> for parallel execution on massively parallel systems.</p> <p>See <a href="https://en.wikipedia.org/wiki/Median_of_medians" rel="nofollow noreferrer">this Wiki entry</a> for a description and pseudo-code. See also <a href="https://stackoverflow.com/questions/10806303/python-implementation-of-median-of-medians-algorithm">this question on Stack Overflow</a> and discussion of the code, and <a href="https://arxiv.org/abs/1104.2732v1" rel="nofollow noreferrer">this ArXiv paper</a> for a GPU implementation.</p>
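<p>A tiny NumPy sketch of the approximate &quot;median of chunk medians&quot; idea (the chunk count is arbitrary here). Each chunk's median is independent of the others, so in a real implementation they could be computed on separate cores or on a GPU:</p> <pre><code>import numpy as np

def approx_median(row, n_chunks=8):
    """Median of chunk medians: an approximation, not an exact selection."""
    chunks = np.array_split(row, n_chunks)
    return np.median([np.median(c) for c in chunks])

row = np.random.rand(500_000)
print(approx_median(row), np.median(row))   # close, but not identical in general
</code></pre>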
2019-03-26 13:17:22.883000+00:00
2019-03-26 13:17:22.883000+00:00
null
null
55,353,509
<p>I would like to calculate the median line by line in a dataframe of more than 500,000 rows. For the moment I'm using <code>np.median</code> because numpy is optimized to run on a single core. It's still very slow and I'd like to find a way to parallel the calculation</p> <p>Specifically, I have <code>N</code> tables of size <code>13 x 500,000</code> and for each table I want to add the columns Q1, Q3 and median so that for each row the median column contains the median of the row. So I have to calculate <code>N * 500,000</code> median values.</p> <p>I tried with <code>numexpr</code> but it doesn't seem possible.</p> <p><strong>EDIT :</strong> In fact I also need Q1 and Q3 so I can't use the statistics module which doesn't allow to calculate quartiles. Here is how I calculate the median for the moment </p> <pre><code> q = np.transpose(np.percentile(data[row_array], [25,50,75], axis = 1)) data['Q1_' + family] = q[:,0] data['MEDIAN_' + family] = q[:,1] data['Q3_' + family] = q[:,2] </code></pre> <p><strong>EDIT 2</strong> I solved my problem by using the median of median algorithm as proposed below </p>
2019-03-26 09:19:13.500000+00:00
2021-01-20 10:54:47.227000+00:00
2019-03-26 14:17:58.803000+00:00
python|multithreading|numpy|median
['https://en.wikipedia.org/wiki/Median_of_medians', 'https://stackoverflow.com/questions/10806303/python-implementation-of-median-of-medians-algorithm', 'https://arxiv.org/abs/1104.2732v1']
3
59,740,828
<p>The fundamental issue here is that you are mixing SMTLib's sequence logic and quantifiers. And the problem turns out to be too difficult for an SMT solver to handle. This sort of synthesis of functions is indeed possible if you restrict yourself to basic logics. (Bitvectors, Integers, Reals.) But adding sequences to the mix puts it into the undecidable fragment.</p> <p>This doesn't mean z3 cannot synthesize your <code>add</code> function. Perhaps a future version might be able to handle it. But at this point you're at the mercy of heuristics. To see why, note that you're asking the solver to synthesize the following definition:</p> <pre class="lang-hs prettyprint-override"><code> add :: Stack -&gt; Stack add s = v .: s'' where (a, s') = L.uncons s (b, s'') = L.uncons s' v = a + b </code></pre> <p>while this looks rather innocent and simple, it requires capabilities beyond the current abilities of z3. In general, z3 can currently synthesize functions that only make a finite number of choices on concrete elements. But it is unable to do so if the output depends on input for every choice of input. (Think of it as a case-analysis producing engine: It can conjure up a function that maps certain inputs to others, but cannot figure out if something should be incremented or two things must be added. This follows from the work in finite-model finding theory, and is way beyond the scope of this answer! See here for details: <a href="https://arxiv.org/abs/1706.00096" rel="nofollow noreferrer">https://arxiv.org/abs/1706.00096</a>)</p> <p>A better use case for SBV and SMT solving for this sort of problem is to actually tell it what the <code>add</code> function is, and then prove some given program is correctly "compiled" using Hutton's strategy. Note that I'm explicitly saying a "given" program: It would also be very difficult to model and prove this for an arbitrary program, but you can do this rather easily for a given fixed program. If you are interested in proving the correspondence for arbitrary programs, you really should be looking at theorem provers such as Isabelle, Coq, ACL2, etc.; which can deal with induction, a proof technique you will no doubt need for this sort of problem. Note that SMT solvers cannot perform induction in general. (You can use e-matching to simulate some induction like proofs, but it's a kludge at best and in general unmaintainable.)</p> <p>Here's your example, coded to prove the <code>\x -&gt; \y -&gt; x + y</code> program is "correctly" compiled and executed with respect to reference semantics:</p> <pre class="lang-hs prettyprint-override"><code>{-# LANGUAGE ScopedTypeVariables #-} import Data.SBV import qualified Data.SBV.List as L import Data.SBV.List ((.:)) -- AST Definition data Exp = Val SWord8 | Sum Exp Exp -- Our "Meaning" Function eval :: Exp -&gt; SWord8 eval (Val x) = x eval (Sum x y) = eval x + eval y -- Evaluation by "execution" type Stack = SList Word8 run :: Exp -&gt; SWord8 run e = L.head (eval' e L.nil) where eval' :: Exp -&gt; Stack -&gt; Stack eval' (Val n) s = n .: s eval' (Sum x y) s = add (eval' y (eval' x s)) add :: Stack -&gt; Stack add s = v .: s'' where (a, s') = L.uncons s (b, s'') = L.uncons s' v = a + b correct :: IO ThmResult correct = prove $ do x :: SWord8 &lt;- forall "x" y :: SWord8 &lt;- forall "y" let pgm = Sum (Val x) (Val y) spec = eval pgm machine = run pgm return $ spec .== machine </code></pre> <p>When I run this, I get:</p> <pre><code>*Main&gt; correct Q.E.D. </code></pre> <p>And the proof takes almost no time. 
You can easily extend this by adding other operators, binding forms, function calls, the whole works if you like. So long as you stick to a fixed "program" for verification, it should work out just fine.</p> <p>If you make a <em>mistake</em>, let's say define <code>add</code> by subtraction (modify the last line of it to read <code>v = a - b</code>), you get:</p> <pre><code>*Main&gt; correct Falsifiable. Counter-example: x = 32 :: Word8 y = 0 :: Word8 </code></pre> <p>I hope this gives an idea of what the current capabilities of SMT solvers are and how you can put them to use in Haskell via SBV. </p> <p>Program synthesis is an active research area with many custom techniques and tools. An out-of-the-box use of an SMT-solver will not get you there. But if you do build such a custom system in Haskell, you can use SBV to access an underlying SMT solver to solve many constraints you'll have to handle during the process.</p> <p>(<em>Aside:</em> An extended example, similar in spirit but with different goals, is shipped with the SBV package: <a href="https://hackage.haskell.org/package/sbv-8.5/docs/Documentation-SBV-Examples-Strings-SQLInjection.html" rel="nofollow noreferrer">https://hackage.haskell.org/package/sbv-8.5/docs/Documentation-SBV-Examples-Strings-SQLInjection.html</a>. This program shows how to use SBV and SMT solvers to find SQL injection vulnerabilities in an idealized SQL implementation. That might be of some interest here, and would be more aligned with how SMT solvers are typically used in practice.)</p>
2020-01-14 19:51:02.380000+00:00
2020-01-14 22:07:03.773000+00:00
2020-01-14 22:07:03.773000+00:00
null
59,629,721
<p>Graham Hutton, in the 2nd edition of <em>Programming in Haskell</em>, spends the last 2 chapters on the topic of <em>stack machine</em> based implementation of an AST. And he finishes by showing how to <em>derive</em> the correct implementation of that machine from the <em>semantic model</em> of the AST.</p> <p><strong>I'm trying to enlist the help of <code>Data.SBV</code> in that derivation, and failing.</strong></p> <p>And I'm hoping that someone can help me understand whether I'm:</p> <ol> <li>Asking for something that <code>Data.SBV</code> can't do, or</li> <li>Asking <code>Data.SBV</code> for something it <em>can</em> do, but asking incorrectly.</li> </ol> <pre><code>-- test/sbv-stack.lhs - Data.SBV assisted stack machine implementation derivation. {-# LANGUAGE OverloadedLists #-} {-# LANGUAGE ScopedTypeVariables #-} import Data.SBV import qualified Data.SBV.List as L import Data.SBV.List ((.:), (.++)) -- Since they don't collide w/ any existing list functions. -- AST Definition data Exp = Val SWord8 | Sum Exp Exp -- Our "Meaning" Function eval :: Exp -&gt; SWord8 eval (Val x) = x eval (Sum x y) = eval x + eval y type Stack = SList Word8 -- Our "Operational" Definition. -- -- This function attempts to implement the *specification* provided by our -- "meaning" function, above, in a way that is more conducive to -- implementation in our available (and, perhaps, quite primitive) -- computational machinery. -- -- Note that we've (temporarily) assumed that this machinery will consist -- of some form of *stack-based computation engine* (because we're -- following Hutton's example). -- -- Note that we give the *specification* of the function in the first -- (commented out) line of the definition. The derivation of the actual -- correct definition from this specification is detailed in Ch. 17 of -- Hutton's book. eval' :: Exp -&gt; Stack -&gt; Stack -- eval' e s = eval e : s -- our "specification" eval' (Val n) s = push n s -- We're defining this one manually. where push :: SWord8 -&gt; Stack -&gt; Stack push n s = n .: s eval' (Sum x y) s = add (eval' y (eval' x s)) where add :: Stack -&gt; Stack add = uninterpret "add" s -- This is the function we're asking to be derived. -- Now, let's just ask SBV to "solve" our specification of `eval'`: spec :: Goal spec = do x :: SWord8 &lt;- forall "x" y :: SWord8 &lt;- forall "y" -- Our spec., from above, specialized to the `Sum` case: constrain $ eval' (Sum (Val x) (Val y)) L.nil .== eval (Sum (Val x) (Val y)) .: L.nil </code></pre> <p>We get:</p> <pre><code>λ&gt; :l test/sbv-stack.lhs [1 of 1] Compiling Main ( test/sbv-stack.lhs, interpreted ) Ok, one module loaded. Collecting type info for 1 module(s) ... λ&gt; sat spec Unknown. Reason: smt tactic failed to show goal to be sat/unsat (incomplete quantifiers) </code></pre> <p>What happened?!<br> Well, maybe, asking SBV to solve for anything other than a <em>predicate</em> (i.e. - <code>a -&gt; Bool</code>) doesn't work?</p>
2020-01-07 13:49:19.973000+00:00
2020-01-14 22:07:03.773000+00:00
null
haskell|smt|stack-machine|sbv
['https://arxiv.org/abs/1706.00096', 'https://hackage.haskell.org/package/sbv-8.5/docs/Documentation-SBV-Examples-Strings-SQLInjection.html']
2
41,420,505
<p>You asked, "Is there only 1 way to calculate cost when doing this type of analysis?" The answer is no.</p> <p>These analyses are on mathematical models of machines, not real ones. When we say things like "appending to a resizable array is O(1) amortized", we are abstracting away the costs of various procedures needed in the algorithm. The motivation is to be able to compare algorithms even when you and I own different machines.</p> <p>In addition to different physical machines, however, there are also different <em>models</em> of machines. For instance, some models don't allow integers to be multiplied in constant time. Some models allow variables to be real numbers with infinite precision. In some models all computation is free and the only cost tracked is the latency of fetching data from memory.</p> <p>As hardware evolves, computer scientists make arguments for new models to be used in the analysis of algorithms. See, for instance, the work of <a href="http://people.mpi-inf.mpg.de/~tojot/" rel="nofollow noreferrer">Tomasz Jurkiewicz</a>, including <a href="https://arxiv.org/abs/1212.0703" rel="nofollow noreferrer">"The Cost of Address Translation"</a>.</p> <p>It sounds like your model included a concrete cost to malloc. That is neither wrong nor right. It might be a more accurate model on your computer and a less accurate model on the graders.</p>
2017-01-02 01:26:38.270000+00:00
2017-01-02 01:26:38.270000+00:00
null
null
23,412,759
<p>I had points docked on a homework assignment for calculating the wrong total cost in an amortized analysis of a dynamic array. I think the grader probably only looked at the total and not the steps I had taken, and I think I accounted for malloc and their answer key did not. </p> <p>Here is a section of my analysis:</p> <p><img src="https://i.stack.imgur.com/zvTQD.png" alt="amortized analysis"></p> <p>The example we were shown did not account for malloc, but I saw a video that did, and it made a lot of sense, so I put it in there. I realize that although malloc is a relatively costly operation, it would probably be O(1) here, so I could have left it out.</p> <p>But my question is this: Is there only 1 way to calculate cost when doing this type of analysis? Is there an objective right and wrong cost, or is the conclusion drawn what really matters?</p>
2014-05-01 17:14:22.160000+00:00
2017-01-02 01:26:38.270000+00:00
2014-05-02 02:09:57.347000+00:00
c|big-o|dynamic-arrays|amortized-analysis
['http://people.mpi-inf.mpg.de/~tojot/', 'https://arxiv.org/abs/1212.0703']
2
50,216,874
<p>Three approaches in order of usefulness. Approach 1 is strongly recommended.</p> <p><strong>1st Approach - LSTM/GRU</strong></p> <p>Don't use a <em>simple</em> MLP. The type of data you're dealing with is sequential data. Recurrent networks (LSTM/GRU) have been created for this purpose. They are capable of processing variable-length sequences.</p> <p><strong>2nd Approach - Embeddings</strong></p> <p>Find a function that can transform your data into a fixed-length sequence, called an embedding. An example of a network producing time series embeddings is <a href="https://arxiv.org/abs/1706.08838" rel="nofollow noreferrer">TimeNet</a>. However, that essentially brings us back to the first approach.</p> <p><strong>3rd Approach - Padding</strong></p> <p>If you can find a reasonable upper bound for the sequence length, you can pad shorter series to the length of the longest one (pad 0 at the beginning/end of the series, or interpolate/forecast the remaining values), or cut longer series to the length of the shortest one. Obviously you will either introduce noise or lose information, respectively. A small sketch of this is shown below.</p>
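<p>For the third option, here is a minimal NumPy sketch that zero-pads the variable-length (length x 2) signals described in the question to a common length (the target length here is just the observed maximum; the data is random placeholder data):</p> <pre><code>import numpy as np

def pad_signals(signals):
    """Zero-pad a list of (length, 2) arrays to the length of the longest one."""
    max_len = max(s.shape[0] for s in signals)
    out = np.zeros((len(signals), max_len, 2))
    for i, s in enumerate(signals):
        out[i, :s.shape[0], :] = s
    return out

# three hypothetical records of lengths 70, 75 and 80
signals = [np.random.rand(n, 2) for n in (70, 75, 80)]
batch = pad_signals(signals)
print(batch.shape)   # (3, 80, 2)
</code></pre> <p>For a plain scikit-learn MLP you would then still flatten each (80, 2) record into a single 160-dimensional row before calling <code>fit</code>.</p>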
2018-05-07 14:42:15.677000+00:00
2018-05-07 14:42:15.677000+00:00
null
null
50,216,691
<p>I want to run a simple MLP Classifier (scikit-learn) with the following set of data.</p> <p>The data set consists of 100 files containing sound signals. Each file has two columns (two signals) and a number of rows (the length of the signals). The length of the signals varies from file to file, ranging between 70 and 80 values. So the dimensions of a file are 70 x 2 to 80 x 2. Each file represents one complete record. </p> <p><a href="https://i.stack.imgur.com/psr5Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/psr5Y.png" alt="enter image description here"></a></p> <p>The problem I am facing is how to train a simple MLP with variable-length data, with the training and testing sets containing 75 and 25 files respectively.</p> <p>One solution is to concatenate all files into one file, i.e. 7500 x 2, and train the MLP. But the important information in the individual signals is no longer useful in this case. </p>
2018-05-07 14:32:19.243000+00:00
2021-12-09 07:37:55.997000+00:00
null
python|machine-learning|scikit-learn|neural-network
['https://arxiv.org/abs/1706.08838']
1
14,848,358
<p>Some methods I have experience with are </p> <ul> <li><a href="http://hal.inria.fr/inria-00439290" rel="noreferrer">metric learning</a> for comparing faces</li> <li><a href="http://www.robots.ox.ac.uk/~vgg/research/nface/" rel="noreferrer">naming video characters</a>: they use SIFT descriptors computed at specific fiducial points on each face. Their code worked quite well for me in the past.</li> </ul> <p>A dataset and benchmark that is dedicated to this task is <a href="http://vis-www.cs.umass.edu/lfw/" rel="noreferrer">labeled faces in the wild</a>. You can find there references to working methods for comparing faces after detection.</p> <p><strong>UPDATE:</strong><br> I have a description of an experiment on face clustering: unsupervised face identification. The experiment is described in <a href="http://arxiv.org/pdf/1210.7362v2.pdf" rel="noreferrer">Section 4.4 of my thesis</a>.<br> The basic flow is as follows:</p> <ol> <li><p><strong>Metric learning:</strong> how to determine if two faces are of the same person or not.<br> This part is supervised, in the sense that it requires as input face images labeled with the identity of the person who appears in each photo.</p> <p>a. Detect fiducial points (eyes, corner of mouth, nose).<br> You may use <a href="http://www.robots.ox.ac.uk/~vgg/research/nface/" rel="noreferrer">this code</a>, or more recent versions such as <a href="http://www.ics.uci.edu/~xzhu/face/" rel="noreferrer">this one</a>.</p> <p>b. Extract SIFT descriptors at the detected fiducial points. </p> <p>c. Construct a "face descriptor": each face is described using a <strong>single</strong> vector.<br> This vector is a concatenation of the <strong><code>sqrt</code></strong> of all the SIFT descriptors.</p> <p>d. Use the method described <a href="http://hal.inria.fr/inria-00439290" rel="noreferrer">here</a> to learn a Mahalanobis distance between faces of different persons.</p></li> <li><p><strong>Unsupervised face identification:</strong> Once a metric has been learned, you may use new photos of <strong>new</strong> people (these people need not be part of the training set, you may use photos of <strong>unseen-before</strong> people!). </p> <p>a. Repeat stages a-c to construct the same "face descriptor" vector for each input face.</p> <p>b. Compare the descriptor vectors using the learned Mahalanobis distance.</p></li> </ol>
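<p>A rough, hypothetical NumPy sketch of steps 1c-1d / 2b above: build the face descriptor as the concatenated square roots of the SIFT descriptors, then compare two faces with a Mahalanobis distance. The number of fiducial points and the matrix <code>M</code> are placeholders (here just the identity); the real <code>M</code> comes from the metric learning step:</p> <pre><code>import numpy as np

def face_descriptor(sift_descriptors):
    """Concatenate the sqrt of each 128-d SIFT descriptor into one vector (step 1c)."""
    return np.concatenate([np.sqrt(d) for d in sift_descriptors])

def mahalanobis(x, y, M):
    """Learned Mahalanobis distance between two face descriptors (steps 1d / 2b)."""
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

# pretend SIFT was computed at 9 fiducial points for two faces
face_a = face_descriptor(np.random.rand(9, 128))
face_b = face_descriptor(np.random.rand(9, 128))
M = np.eye(face_a.size)    # placeholder; a learned metric would go here
print(mahalanobis(face_a, face_b, M))
</code></pre>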
2013-02-13 07:19:23.103000+00:00
2013-02-17 07:32:07.033000+00:00
2013-02-17 07:32:07.033000+00:00
null
5,597,302
<p>I'm using the OpenCV libraries for image processing in C++ and this is my question: do you think it is possible to do facial recognition (saying the name of a person based on a database of photos) by comparing the frame from a video camera with images in a database using image histogram comparison? (Note that I compare only the facial region of an image, using an example included in the OpenCV libraries.)</p> <p>I'm asking this because I've just tried to write a program like the above, but I have a lot of problems (I often detect the wrong person).</p>
2011-04-08 15:28:04.940000+00:00
2015-03-23 17:29:13.793000+00:00
2015-03-23 17:29:13.793000+00:00
image|opencv|image-processing|video|computer-vision
['http://hal.inria.fr/inria-00439290', 'http://www.robots.ox.ac.uk/~vgg/research/nface/', 'http://vis-www.cs.umass.edu/lfw/', 'http://arxiv.org/pdf/1210.7362v2.pdf', 'http://www.robots.ox.ac.uk/~vgg/research/nface/', 'http://www.ics.uci.edu/~xzhu/face/', 'http://hal.inria.fr/inria-00439290']
7
56,765,796
<p>Multi-agent reinforcement learning is quite hard to master and has yet to prove effective for general cases.</p> <p>The problem is that in the multi-agent setting the environment becomes non-stationary from the perspective of each individual agent. This means that an agent's action cannot be mapped to the state directly, because other agents are performing actions separately, which "confuses" all of the agents. There is an in-depth collection of multi-agent research here: <a href="https://github.com/LantaoYu/MARL-Papers" rel="nofollow noreferrer">https://github.com/LantaoYu/MARL-Papers</a></p> <p>If you would like to pursue the actor-critic method you mentioned, I recommend this for your further research: <a href="https://arxiv.org/pdf/1706.02275.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1706.02275.pdf</a>, which describes <strong>Multi-Agent Actor-Critic</strong> (MADDPG).</p>
2019-06-26 05:28:26.720000+00:00
2019-06-26 05:28:26.720000+00:00
null
null
56,730,118
<p>I am working on a project in which I need to find the best optimised path from one point to another in continuous space in a multi-agent scenario. I am looking for the best algorithm which suits this problem using Reinforcement Learning. I have tried "Multi-agent actor-critic for mixed cooperative-competitive environments" but it does not seem to reach the goals in 10000 episodes. How can I improve this algorithm, or is there any other algorithm that can help me with this? </p>
2019-06-24 05:05:08.127000+00:00
2019-06-26 05:28:26.720000+00:00
null
deep-learning|artificial-intelligence|pytorch|reinforcement-learning|multi-agent
['https://github.com/LantaoYu/MARL-Papers', 'https://arxiv.org/pdf/1706.02275.pdf']
2
52,311,879
<p>Do you know about the paper "<a href="https://arxiv.org/pdf/1611.06455" rel="noreferrer">Time Series Classification from Scratch with Deep Neural Networks: A Strong Baseline</a>"? If not, you should check it out. The authors provide a very comprehensive overview of different models, including a ResNet implementation adjusted for time series classification.</p> <p>Their Keras/Tensorflow implementation of ResNet can be found <a href="https://github.com/cauchyturing/UCR_Time_Series_Classification_Deep_Learning_Baseline/blob/master/ResNet.py" rel="noreferrer">here</a>.</p> <p><strong>Update:</strong> A more recent version of ResNet (and other classifiers) for time series data can be found <a href="https://github.com/hfawaz/dl-4-tsc" rel="noreferrer">here</a>.</p>
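<p>If you just want the general shape of the adjustment, here is a minimal Keras sketch of a single 1-D residual block; it is not the implementation from the repositories above (those use specific kernel sizes and several such blocks), and the input length, channel count and class count are arbitrary:</p> <pre><code>from tensorflow.keras import layers, models

def residual_block_1d(x, filters, kernel_size=3):
    """A small residual block built from Conv1D layers instead of Conv2D."""
    shortcut = layers.Conv1D(filters, 1, padding="same")(x)   # match channel count
    y = layers.Conv1D(filters, kernel_size, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.add([shortcut, y])
    return layers.Activation("relu")(y)

inputs = layers.Input(shape=(128, 1))                # 128 time steps, 1 channel
x = residual_block_1d(inputs, 64)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(5, activation="softmax")(x)   # e.g. 5 classes
model = models.Model(inputs, outputs)
model.summary()
</code></pre>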
2018-09-13 10:34:23.670000+00:00
2019-11-04 16:00:46.640000+00:00
2019-11-04 16:00:46.640000+00:00
null
49,337,897
<p>I am trying to use the <a href="https://en.wikipedia.org/wiki/Residual_neural_network" rel="noreferrer">convolutional residual network neural network architecture</a> (ResNet). So far, I have implemented simple convolutions (conv1D) for time series data classification using Keras.</p> <p>Now, I am trying to build ResNet using Keras but I'm having some difficulties trying to adapt it to time series data. Most of the implementations of ResNet or Nasnet in Keras (such as <a href="https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py" rel="noreferrer">this one</a> or <a href="https://github.com/titu1994/Keras-NASNet/blob/master/nasnet.py" rel="noreferrer">that one</a>) use conv2D for their implementation (which makes sense for images).</p> <p>Could someone help me in implementing this for time series data?</p>
2018-03-17 14:16:22.713000+00:00
2019-11-04 20:53:52.450000+00:00
2019-11-04 20:53:52.450000+00:00
python|machine-learning|keras|resnet
['https://arxiv.org/pdf/1611.06455', 'https://github.com/cauchyturing/UCR_Time_Series_Classification_Deep_Learning_Baseline/blob/master/ResNet.py', 'https://github.com/hfawaz/dl-4-tsc']
3
44,299,776
<p>Similar questions come up every now and then, e.g. <a href="https://stackoverflow.com/q/40848265/4776939">here</a> and <a href="https://stackoverflow.com/q/30905122/4776939">there</a>. None of them talks about Java or Scheme though, so here's a slightly adapted answer for the "Java" part.</p> <h2>Calling Isabelle from Java</h2> <p>Isabelle itself has no "API" that can be called from external tools. The general philosophy is that applications should live inside Isabelle or the <a href="https://www.isa-afp.org/" rel="nofollow noreferrer">Archive of Formal Proofs</a>. Most of the time, this means your applications needs to be implemented in Isabelle/ML.</p> <p>However, if you want to use Isabelle <em>as an external tool</em>, you have to play some tricks. I have bundled up these tricks as a Scala library (<a href="https://github.com/larsrh/libisabelle" rel="nofollow noreferrer">libisabelle</a>). An overview of how this works is given in <a href="https://arxiv.org/abs/1607.01539" rel="nofollow noreferrer">a paper</a>.</p> <p>libisabelle itself is available as a stand-alone library including some basic documentation that should allow you to get started. See <a href="https://github.com/larsrh/libisabelle" rel="nofollow noreferrer">the repository</a> for more details. In essence, it allows you to</p> <ul> <li>manage Isabelle installations from within Scala (download, unpacking)</li> <li>abstract over different Isabelle versions (currently supported: 2016 and 2016-1)</li> <li>lifecycle management of an Isabelle session (building, starting, stopping)</li> <li>treat Isabelle/ML functions as Scala functions</li> <li>goodies like Isabelle term syntax in Scala (<code>term"$n &gt; 0 --&gt; ($b &amp; ${HOLogic.True})"</code>)</li> </ul> <p>There is no built-in routine to set up a goal state and apply some proof steps, but the necessary infrastructure is all there.</p> <p>libisabelle is implemented in Scala, but there is a Java API that you can use, too. I know of one user who successfully uses that one. You can have a look at <a href="https://github.com/larsrh/libisabelle/blob/v0.8.0/modules/examples/src/main/java/edu/tum/cs/isabelle/examples/Hello_PIDE.java" rel="nofollow noreferrer">an example</a> in the repository.</p>
2017-06-01 06:12:23.587000+00:00
2017-06-01 06:12:23.587000+00:00
null
null
44,299,343
<p>Is it possible to call Isabelle from external programs (Java, Scheme/Guile)? I have not managed to find documentation about an API.</p>
2017-06-01 05:44:36.367000+00:00
2017-06-01 06:26:56.810000+00:00
2017-06-01 06:26:56.810000+00:00
java|scheme|isabelle
['https://stackoverflow.com/q/40848265/4776939', 'https://stackoverflow.com/q/30905122/4776939', 'https://www.isa-afp.org/', 'https://github.com/larsrh/libisabelle', 'https://arxiv.org/abs/1607.01539', 'https://github.com/larsrh/libisabelle', 'https://github.com/larsrh/libisabelle/blob/v0.8.0/modules/examples/src/main/java/edu/tum/cs/isabelle/examples/Hello_PIDE.java']
7
48,230,654
<p>I've created a <a href="https://gist.github.com/maxim5/c35ef2238ae708ccb0e55624e9e0252b" rel="nofollow noreferrer">gist</a> with a simple generator that builds on top of your initial idea: it's an LSTM network wired to the pre-trained word2vec embeddings, trained to predict the next word in a sentence. The data is the <a href="https://raw.githubusercontent.com/maxim5/stanford-tensorflow-tutorials/master/data/arxiv_abstracts.txt" rel="nofollow noreferrer">list of abstracts from arXiv website</a>.</p> <p>I'll highlight the most important parts here.</p> <h2>Gensim Word2Vec</h2> <p>Your code is fine, except for the number of iterations to train it. The default <code>iter=5</code> seems rather low. Besides, it's definitely not the bottleneck -- LSTM training takes much longer. <code>iter=100</code> looks better.</p> <pre class="lang-py prettyprint-override"><code>word_model = gensim.models.Word2Vec(sentences, vector_size=100, min_count=1, window=5, iter=100) pretrained_weights = word_model.wv.syn0 vocab_size, emdedding_size = pretrained_weights.shape print('Result embedding shape:', pretrained_weights.shape) print('Checking similar words:') for word in ['model', 'network', 'train', 'learn']: most_similar = ', '.join('%s (%.2f)' % (similar, dist) for similar, dist in word_model.most_similar(word)[:8]) print(' %s -&gt; %s' % (word, most_similar)) def word2idx(word): return word_model.wv.vocab[word].index def idx2word(idx): return word_model.wv.index2word[idx] </code></pre> <p>The result embedding matrix is saved into <code>pretrained_weights</code> array which has a shape <code>(vocab_size, emdedding_size)</code>.</p> <h2>Keras model</h2> <p>Your code is almost correct, except for the loss function. Since the model predicts the next word, it's a classification task, hence the loss should be <code>categorical_crossentropy</code> or <code>sparse_categorical_crossentropy</code>. I've chosen the latter for efficiency reasons: this way it avoids one-hot encoding, which is pretty expensive for a big vocabulary.</p> <pre class="lang-py prettyprint-override"><code>model = Sequential() model.add(Embedding(input_dim=vocab_size, output_dim=emdedding_size, weights=[pretrained_weights])) model.add(LSTM(units=emdedding_size)) model.add(Dense(units=vocab_size)) model.add(Activation('softmax')) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy') </code></pre> <p>Note passing the pre-trained weights to <code>weights</code>.</p> <h2>Data preparation</h2> <p>In order to work with <code>sparse_categorical_crossentropy</code> loss, both sentences and labels must be word indices. Short sentences must be padded with zeros to the common length.</p> <pre class="lang-py prettyprint-override"><code>train_x = np.zeros([len(sentences), max_sentence_len], dtype=np.int32) train_y = np.zeros([len(sentences)], dtype=np.int32) for i, sentence in enumerate(sentences): for t, word in enumerate(sentence[:-1]): train_x[i, t] = word2idx(word) train_y[i] = word2idx(sentence[-1]) </code></pre> <h2>Sample generation</h2> <p>This is pretty straight-forward: the model outputs the vector of probabilities, of which the next word is sampled and appended to the input. Note that the generated text would be better and more diverse if the next word is <em>sampled</em>, rather than <em>picked</em> as <code>argmax</code>. 
The temperature based random sampling I've used is <a href="https://medium.com/machine-learning-at-petiteprogrammer/sampling-strategies-for-recurrent-neural-networks-9aea02a6616f" rel="nofollow noreferrer">described here</a>.</p> <pre class="lang-py prettyprint-override"><code>def sample(preds, temperature=1.0): if temperature &lt;= 0: return np.argmax(preds) preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) def generate_next(text, num_generated=10): word_idxs = [word2idx(word) for word in text.lower().split()] for i in range(num_generated): prediction = model.predict(x=np.array(word_idxs)) idx = sample(prediction[-1], temperature=0.7) word_idxs.append(idx) return ' '.join(idx2word(idx) for idx in word_idxs) </code></pre> <h2>Examples of generated text</h2> <pre class="lang-py prettyprint-override"><code>deep convolutional... -&gt; deep convolutional arithmetic initialization step unbiased effectiveness simple and effective... -&gt; simple and effective family of variables preventing compute automatically a nonconvex... -&gt; a nonconvex technique compared layer converges so independent onehidden markov a... -&gt; a function parameterization necessary both both intuitions with technique valpola utilizes </code></pre> <p>Doesn't make too much sense, but is able to produce sentences that look at least grammatically sound (sometimes).</p> <p>The link to the <a href="https://gist.github.com/maxim5/c35ef2238ae708ccb0e55624e9e0252b" rel="nofollow noreferrer">complete runnable script</a>.</p>
2018-01-12 16:51:22.010000+00:00
2021-07-04 07:48:48.543000+00:00
2021-07-04 07:48:48.543000+00:00
null
42,064,690
<p>LSTM/RNN can be used for text generation. <a href="https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html" rel="nofollow noreferrer">This</a> shows way to use pre-trained GloVe word embeddings for Keras model.</p> <ol> <li>How to use pre-trained Word2Vec word embeddings with Keras LSTM model? <a href="https://codekansas.github.io/gensim" rel="nofollow noreferrer">This</a> post did help.</li> <li>How to predict / generate next <em>word</em> when the model is provided with the sequence of words as its input?</li> </ol> <p>Sample approach tried:</p> <pre><code># Sample code to prepare word2vec word embeddings import gensim documents = ["Human machine interface for lab abc computer applications", "A survey of user opinion of computer system response time", "The EPS user interface management system", "System and human system engineering testing of EPS", "Relation of user perceived response time to error measurement", "The generation of random binary unordered trees", "The intersection graph of paths in trees", "Graph minors IV Widths of trees and well quasi ordering", "Graph minors A survey"] sentences = [[word for word in document.lower().split()] for document in documents] word_model = gensim.models.Word2Vec(sentences, size=200, min_count = 1, window = 5) # Code tried to prepare LSTM model for word generation from keras.layers.recurrent import LSTM from keras.layers.embeddings import Embedding from keras.models import Model, Sequential from keras.layers import Dense, Activation embedding_layer = Embedding(input_dim=word_model.syn0.shape[0], output_dim=word_model.syn0.shape[1], weights=[word_model.syn0]) model = Sequential() model.add(embedding_layer) model.add(LSTM(word_model.syn0.shape[1])) model.add(Dense(word_model.syn0.shape[0])) model.add(Activation('softmax')) model.compile(optimizer='sgd', loss='mse') </code></pre> <p>Sample code / psuedocode to train LSTM and predict will be appreciated. </p>
2017-02-06 09:47:22.213000+00:00
2021-07-04 07:48:48.543000+00:00
2020-01-21 19:19:15.537000+00:00
machine-learning|neural-network|keras|lstm|word2vec
['https://gist.github.com/maxim5/c35ef2238ae708ccb0e55624e9e0252b', 'https://raw.githubusercontent.com/maxim5/stanford-tensorflow-tutorials/master/data/arxiv_abstracts.txt', 'https://medium.com/machine-learning-at-petiteprogrammer/sampling-strategies-for-recurrent-neural-networks-9aea02a6616f', 'https://gist.github.com/maxim5/c35ef2238ae708ccb0e55624e9e0252b']
4
56,364,430
<p>Not a solution to your question, just some general thoughts that may be relevant:</p> <ul> <li>One of the biggest obstacles to applying Reinforcement Learning to "real world" problems is the astoundingly large amount of data/experience required to achieve acceptable results. For example, <a href="https://openai.com/blog/openai-five/" rel="nofollow noreferrer">OpenAI's Dota 2 agents</a> collected the equivalent of 900 years of experience per day. In the <a href="https://www.nature.com/articles/nature14236" rel="nofollow noreferrer">original Deep Q-network paper</a>, hundreds of millions of game frames were required to reach performance close to that of a typical human, depending on the specific game. In <a href="https://arxiv.org/abs/1707.02286" rel="nofollow noreferrer">other benchmarks</a> where the inputs are not raw pixels, such as MuJoCo, the situation isn't a lot better. So, if you don't have a simulator that can generate samples (state, action, next state, reward) cheaply, maybe RL is not a good choice. On the other hand, if you have a ground-truth model, maybe other approaches can easily outperform RL, such as Monte Carlo Tree Search (e.g., <a href="https://papers.nips.cc/paper/5421-deep-learning-for-real-time-atari-game-play-using-offline-monte-carlo-tree-search-planning" rel="nofollow noreferrer">Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning</a>) or even plain random search (<a href="https://arxiv.org/abs/1803.07055" rel="nofollow noreferrer">Simple random search provides a competitive approach to reinforcement learning</a>; a minimal sketch of that idea is given below). All these ideas and much more are discussed in <a href="https://www.alexirpan.com/2018/02/14/rl-hard.html" rel="nofollow noreferrer">this great blog post</a>.</li> <li>The previous point is especially true for deep RL. Approximating value functions or policies with a deep neural network that has millions of parameters usually implies that you'll need a huge quantity of data, or experience.</li> </ul> <p>And regarding your specific question:</p> <ul> <li>In the comments, I've asked a few questions about the specific features of your problem. I was trying to figure out whether you really need RL to solve the problem, since it's not the easiest technique to apply. On the other hand, if you really do need RL, it's not clear whether you should use a deep neural network as the approximator or whether a shallow model (e.g., random trees) would suffice. However, these questions and other potential optimizations require more domain knowledge. Here, it seems you are not able to share the domain of the problem, which could be for any number of reasons, and I perfectly understand that.</li> <li>You have estimated the number of required episodes to solve the problem based on some empirical studies using a smaller 20*10 version of the matrix. Just a note of caution: due to the <a href="https://en.wikipedia.org/wiki/Curse_of_dimensionality" rel="nofollow noreferrer">curse of dimensionality</a>, the complexity of the problem (or the experience needed) could grow exponentially as the state-space dimensionality grows, although maybe that is not your case.</li> </ul> <p>That said, I'm looking forward to seeing an answer that really helps you solve your problem.</p>
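<p>Since simple random search is mentioned above as a potentially competitive alternative, here is a minimal, hedged sketch of that idea (hill-climbing random search over a policy parameter vector). The <code>evaluate()</code> stub is a hypothetical stand-in for the expensive reward computation, not the asker's actual oracle.</p> <pre><code>import numpy as np

def evaluate(theta):
    # hypothetical stand-in for the costly oracle F (2 minutes per call in the question)
    return -np.sum((theta - 1.0) ** 2)

dim = 50                      # size of the policy parameter vector (illustrative)
theta = np.zeros(dim)
best = evaluate(theta)
step = 0.1

for it in range(200):
    candidate = theta + step * np.random.randn(dim)   # random perturbation
    score = evaluate(candidate)
    if score &gt; best:          # keep the perturbation only if it helps
        theta, best = candidate, score

print('best score:', best)
</code></pre> <p>With a two-minute reward evaluation even this loop is expensive, so in practice one would cache evaluations and reuse them wherever possible.</p>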
2019-05-29 15:54:27.880000+00:00
2019-05-29 15:54:27.880000+00:00
null
null
56,192,370
<p>I am working on a problem for which we aim to solve with deep Q learning. However, the problem is that training just takes too long for each episode, roughly 83 hours. We are envisioning to solve the problem within, say, 100 episode.</p> <p>So we are gradually learning a matrix (100 * 10), and within each episode, we need to perform 100*10 iterations of certain operations. Basically we select a candidate from a pool of 1000 candidates, put this candidate in the matrix, and compute a reward function by feeding the whole matrix as the input:</p> <p><a href="https://i.stack.imgur.com/Fklf7.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Fklf7.jpg" alt="enter image description here"></a></p> <p>The central hurdle is that the reward function computation at each step is costly, roughly 2 minutes, and each time we update one entry in the matrix. </p> <p>All the elements in the matrix depend on each other in the long term, so the whole procedure seems not suitable for some "distributed" system, if I understood correctly. </p> <p>Could anyone shed some lights on how we look at the potential optimization opportunities here? Like some extra engineering efforts or so? Any suggestion and comments would be appreciated very much. Thanks.</p> <p>======================= update of some definitions =================</p> <p><strong>0. initial stage:</strong></p> <ul> <li>a 100 * 10 matrix, with every element as empty</li> </ul> <p><strong>1. action space:</strong></p> <ul> <li>each step I will select one element from a candidate pool of 1000 elements. Then insert the element into the matrix one by one.</li> </ul> <p><strong>2. environment:</strong></p> <ul> <li><p>each step I will have an updated matrix to learn.</p></li> <li><p>An oracle function <strong>F</strong> returns a quantitative value range from 5000 ~ 30000, the higher the better (roughly one computation of <strong>F</strong> takes 120 seconds). </p> <p>This function <strong>F</strong> takes the matrix as the input and perform a very costly computation, and it returns a quantitative value to indicate the quality of the synthesized matrix so far.</p> <p>This function is essentially used to measure some performance of system, so it do takes a while to compute a reward value at each step.</p></li> </ul> <p><strong>3. episode:</strong></p> <p>By saying "we are envisioning to solve it within 100 episodes", that's just an empirical estimation. But it shouldn't be less than 100 episode, at least. </p> <p><strong>4. constraints</strong></p> <p>Ideally, like I mentioned, "All the elements in the matrix depend on each other in the long term", and that's why the reward function <strong>F</strong> computes the reward by taking the whole matrix as the input rather than the latest selected element. </p> <p>Indeed by appending more and more elements in the matrix, the reward could increase, or it could decrease as well.</p> <p><strong>5. goal</strong></p> <p>The synthesized matrix should let the oracle function <strong>F</strong> returns a value greater than 25000. Whenever it reaches this goal, I will terminate the learning step.</p>
2019-05-17 19:18:55.500000+00:00
2019-05-29 15:54:27.880000+00:00
2019-05-27 10:07:11.593000+00:00
machine-learning|optimization|deep-learning|reinforcement-learning
['https://openai.com/blog/openai-five/', 'https://www.nature.com/articles/nature14236', 'https://arxiv.org/abs/1707.02286', 'https://papers.nips.cc/paper/5421-deep-learning-for-real-time-atari-game-play-using-offline-monte-carlo-tree-search-planning', 'https://arxiv.org/abs/1803.07055', 'https://www.alexirpan.com/2018/02/14/rl-hard.html', 'https://en.wikipedia.org/wiki/Curse_of_dimensionality']
7
14,473,803
<p>A scientific description can be found in <a href="http://arxiv.org/ftp/arxiv/papers/1201/1201.1422.pdf" rel="nofollow">Minutiae Extraction from Fingerprint Images</a>. Some of the relevant algorithms are implemented in <a href="http://opencv.willowgarage.com/wiki/" rel="nofollow">OpenCV</a>; see the segmentation section.</p> <p>The OpenCV library can be linked to Java using JNI.</p>
2013-01-23 06:35:22.600000+00:00
2013-01-23 06:35:22.600000+00:00
null
null
14,472,646
<p>Is there any way to find Bifurcation point and ridge ending point in a Image (hand, vein), by using a Java code only not Matlab etc.? Can I achieve this by ImageJ Library of Java? </p>
2013-01-23 04:47:25.173000+00:00
2013-01-31 05:52:45.693000+00:00
2013-01-23 04:51:40.873000+00:00
java|image|image-processing|imagej
['http://arxiv.org/ftp/arxiv/papers/1201/1201.1422.pdf', 'http://opencv.willowgarage.com/wiki/']
2
73,018,169
<p>The solution is in the official SciPy documentation (<a href="https://docs.scipy.org/doc/scipy/tutorial/interpolate.html#spline-interpolation-in-1-d-procedural-interpolate-splxxx" rel="nofollow noreferrer">link</a>).</p> <ul> <li>Use the <code>bisplrep</code> function (<em>rep</em> stands for representation) to obtain the interpolation output <code>tck</code> (see the docstring of <code>bisplrep</code>).</li> <li>The output <code>tck</code> is a list <code>[tx, ty, c, kx, ky]</code> of knots, spline coefficients and degrees, and it can be stored in a file.</li> <li>Use <code>bisplev</code> (<em>ev</em> stands for evaluation) to evaluate the spline at new points, as in the sketch below.</li> </ul> <p>For using a neural network for interpolation, see this state-of-the-art <a href="https://arxiv.org/abs/1906.05661" rel="nofollow noreferrer">paper</a>: <em>Training Neural Networks for and by Interpolation</em>.</p>
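<p>Here is a minimal sketch of that workflow, under the assumption that the data lie on a grid like the example in the question; the file name and evaluation point are just placeholders.</p> <pre><code>import pickle
import numpy as np
from scipy import interpolate

# toy grid data standing in for the real (x, y) -&gt; z samples
x = np.linspace(1, 499, 50)
y = np.linspace(2, 199, 40)
X, Y = np.meshgrid(x, y)
Z = np.sin(X + Y)

# bisplrep takes flattened coordinate/value arrays and returns tck,
# a list [tx, ty, c, kx, ky] of knots, coefficients and degrees
tck = interpolate.bisplrep(X.ravel(), Y.ravel(), Z.ravel(), s=0.1)

# the representation is small, so it can be persisted instead of the data set
with open('spline_tck.pkl', 'wb') as f:
    pickle.dump(tck, f)

# ... later, in the real-time code path ...
with open('spline_tck.pkl', 'rb') as f:
    tck_loaded = pickle.load(f)
z_val = interpolate.bisplev(150.0, 80.0, tck_loaded)  # evaluate at one (x, y)
print(z_val)
</code></pre>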
2022-07-18 06:37:58.613000+00:00
2022-07-18 06:37:58.613000+00:00
null
null
72,976,863
<p>I have a huge 3D (x,y,z) data set with (x,y) being the inputs and (z) is the output. Now the dataset is very large, and I need to use that information in real time with minimal delay.</p> <p>Therefore, indexing/look-up table might seem slow. So my thought is to interpolate the dataset and in real time, instead of look-up table, I calcualte the value. So I don't have to store the original dataset but instead I can store the coefficients, which hopefully would be of smaller size than the original data set.</p> <p>I used the <code>scipy.interpolate.RectBivariateSpline</code> to perform interpolation. And I was able to fit the data and also obtain coefficients. But I am not sure how to reconstruct the interpolation function from the coefficients.</p> <p>I want to emphesize that the interpolation function will only be evaluated at input (x,y). Generalization is not of concern here.</p> <pre><code>from scipy import interpolate import numpy as np x = np.arange(1,500) y = np.arange(2,200) X,Y = np.meshgrid(x,y) z = np.sin(X+Y).T a = interpolate.RectBivariateSpline(x,y,z) # print(len(a.get_coeffs())) # coefficients can be obtained by a.get_coeffs() # I want to have the following # f = construct_spline_from_coefficient(a.get_coeffs()) # z = f(x_old, y_old) </code></pre> <p>Another approach I had in mind is use deep neural network. Can anyone shed some light here? Is this an over-kill?</p>
2022-07-14 07:37:37.807000+00:00
2022-07-18 06:37:58.613000+00:00
2022-07-15 10:06:29.057000+00:00
python|numpy|scipy|neural-network|interpolation
['https://docs.scipy.org/doc/scipy/tutorial/interpolate.html#spline-interpolation-in-1-d-procedural-interpolate-splxxx', 'https://arxiv.org/abs/1906.05661']
2
56,389,213
<p>You will have to URL-encode your query parameters:</p> <pre><code>import urllib.parse
import urllib.request as ur
from bs4 import BeautifulSoup

query = urllib.parse.quote("all:quantum complexity of a black hole")
url = 'http://export.arxiv.org/api/query?search_query=' + query
s = ur.urlopen(url)
sl = s.read()
soup = BeautifulSoup(sl, 'html.parser')
print(soup)
</code></pre>
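<p>A hedged alternative sketch: <code>urllib.parse.urlencode</code> can build the whole query string (spaces become <code>+</code>, which the arXiv API appears to accept) and makes it easy to add further parameters such as <code>start</code> and <code>max_results</code>.</p> <pre><code>import urllib.parse
import urllib.request as ur

params = urllib.parse.urlencode({
    'search_query': 'all:quantum complexity of a black hole',
    'start': 0,
    'max_results': 5,
})
url = 'http://export.arxiv.org/api/query?' + params
print(ur.urlopen(url).read()[:500])
</code></pre>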
2019-05-31 05:36:59.660000+00:00
2019-05-31 05:47:44.620000+00:00
2019-05-31 05:47:44.620000+00:00
null
56,389,171
<p>I am using <strong>arxiv API</strong> for scholarly papers search using python. For single term query arxiv API working perfectly well but for multi-term query (Key-phrase), API only took first term. </p> <p>For example : </p> <pre><code> import urllib.request as ur from bs4 import BeautifulSoup url = 'http://export.arxiv.org/api/query?search_query=all:electron' s = ur.urlopen(url) sl = s.read() soup = BeautifulSoup(sl, 'html.parser') papers=[soup.find_all('title')] print(soup) </code></pre> <p>Output(print the soup variable) </p> <p><a href="https://i.stack.imgur.com/iOxau.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iOxau.png" alt="enter image description here"></a></p> <p>Here I used query term <strong>electron</strong>, Arxiv API search also electron term (highlighted).</p> <p>But I used query term say <strong>quantum complexity of a black hole</strong>, arxiv API only took the first word (quantum). </p> <pre><code>import urllib.request as ur from bs4 import BeautifulSoup url = 'http://export.arxiv.org/api/query?search_query=all:quantum complexity of a black hole' #url='http://export.arxiv.org/api/query?search_query=ti:"quantum complexity of a black hole"&amp;sortBy=lastUpdatedDate&amp;sortOrder=ascending' s = ur.urlopen(url) sl = s.read() soup = BeautifulSoup(sl, 'html.parser') print(soup) </code></pre> <p>Output: <a href="https://i.stack.imgur.com/JPVca.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JPVca.png" alt="enter image description here"></a></p> <p>How can I search using whole key-words (quantum complexity of a black hole) so that it will return the scholarly papers that contain those key-words? </p>
2019-05-31 05:32:04.057000+00:00
2019-05-31 05:47:44.620000+00:00
null
python
[]
0
47,678,277
<p>First up, there are several ways to apply batch normalization, which are even mentioned in the <a href="https://arxiv.org/pdf/1502.03167v3.pdf" rel="nofollow noreferrer">original paper</a> specifically for convolutional neural networks. See the discussion in <a href="https://stackoverflow.com/q/38553927/712995">this question</a>, which outlines the difference between a <em>usual</em> and <em>convolutional</em> BN, and also the reason why both approaches make sense.</p> <p>Particularly <a href="https://keras.io/layers/normalization/" rel="nofollow noreferrer"><code>keras.layers.BatchNormalization</code></a> implements the <em>convolutional</em> BN, which means that for an input <code>[m,h,w,c]</code> it computes <code>c</code> means and standard deviations across <code>m*h*w</code> values. The shapes of the running mean, running std dev and gamma and beta variables are just <code>(c,)</code>. The values across spatial dimensions (<em>pixels</em>), as well as across the batch, are <strong>shared</strong>.</p> <p>So a more accurate algorithm would be: for each R, G, and B channel compute the mean/variance across all pixels and all images in this channel and apply the normalization.</p>
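<p>As an illustration of what "<code>c</code> means and standard deviations across <code>m*h*w</code> values" looks like in practice, here is a small NumPy sketch (purely illustrative, not the Keras internals):</p> <pre><code>import numpy as np

m, h, w, c = 8, 32, 32, 3
x = np.random.randn(m, h, w, c).astype('float32')

# one mean/variance per channel, computed over batch and spatial dimensions
mean = x.mean(axis=(0, 1, 2))          # shape (c,)
var = x.var(axis=(0, 1, 2))            # shape (c,)
x_norm = (x - mean) / np.sqrt(var + 1e-3)

# gamma and beta also have shape (c,), shared across all pixels and images
gamma, beta = np.ones(c), np.zeros(c)
y = gamma * x_norm + beta
print(mean.shape, y.shape)             # (3,) (8, 32, 32, 3)
</code></pre>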
2017-12-06 15:47:58.953000+00:00
2017-12-06 15:47:58.953000+00:00
null
null
47,312,922
<p>My question is what is being normalized by BatchNormalization (BN).</p> <p>I am asking, does BN normalize the channels for each pixel separately or for all the pixels together. And does it do it on a per image basis or on all the channels of the entire batch.</p> <p>Specifically, BN is operating on <code>X</code>. Say, <code>X.shape = [m,h,w,c]</code>. So with <code>axis=3</code>, it is operating on the "c" dimension which is the number of channels (for rgb) or the number of feature maps.</p> <p>So lets say the <code>X</code> is an rgb and thus has 3 channels. Does the BN do the following: (this is a simplified version of the BN to discuss the dimensional aspects. I understand that gamma and beta are learned but not concerned with that here.)</p> <p>For each <code>image=X</code> in <code>m</code>:</p> <ol> <li>For each pixel (h,w) take the mean of the associated r, g, &amp; b values.</li> <li>For each pixel (h,w) take the variance of the associated r, g, &amp; b values</li> <li>Do <code>r = (r-mean)/var</code>, <code>g = (g-mean)/var</code>, &amp; <code>b = (b-mean)/var</code>, where r, g, &amp; b are the red, green, &amp; blue channels of <code>X</code> respectively.</li> <li>Then repeat this process for the next image in <code>m</code>,</li> </ol> <p>In keras, the docs for BatchNormalization says:</p> <blockquote> <p>axis: Integer, the axis that should be normalized (typically the features axis).</p> <p>For instance, after a <code>Conv2D</code> layer with <code>data_format="channels_first"</code>, set <code>axis=1</code> in <code>BatchNormalization</code>.</p> </blockquote> <p>But what is it exactly doing along each dimension?</p>
2017-11-15 16:45:29.467000+00:00
2017-12-06 15:47:58.953000+00:00
2017-12-06 15:45:46.793000+00:00
machine-learning|tensorflow|keras|conv-neural-network|batch-normalization
['https://arxiv.org/pdf/1502.03167v3.pdf', 'https://stackoverflow.com/q/38553927/712995', 'https://keras.io/layers/normalization/']
3
38,044,521
<p>There is nothing wrong with your training or sampling - this is the expected behavior for a "pure" LSTM network. To model the variance in your data, don't make the network predict the values at the next time step directly. Rather, your network should give you a probability distribution over the possible values for the next time step, from which you can then sample.</p> <p>Two examples of how you can do this:</p> <ul> <li>Discrete data, e.g. text: stack a softmax layer on top of the LSTM, which gives you the probabilities for each letter, then sample from these probabilities - this is also implemented in Karpathy's well-known <a href="https://github.com/karpathy/char-rnn" rel="nofollow">char-rnn</a>, see the paragraph "Temperature" (a minimal sampling sketch is also given below)</li> <li>Continuous data, e.g. time series: make the network predict the parameters of a mixture distribution (i.e. a weighted mixture of Gaussians), then sample from this - I very much recommend the section on handwriting prediction in <a href="http://arxiv.org/abs/1308.0850" rel="nofollow">Graves 2013</a>, or you can have a look at chapter 5 of <a href="https://github.com/jrieke/lstm-biology/blob/master/Project%20Report.pdf" rel="nofollow">this report</a> I recently wrote for a research project</li> </ul>
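<p>For the discrete case, a minimal sketch of temperature-controlled sampling from a softmax output might look like this (illustrative only; <code>probs</code> stands in for the network's output distribution):</p> <pre><code>import numpy as np

def sample_with_temperature(probs, temperature=1.0):
    # rescale the distribution in log space, then draw one index from it
    probs = np.asarray(probs, dtype='float64')
    logits = np.log(probs + 1e-12) / temperature
    scaled = np.exp(logits) / np.sum(np.exp(logits))
    return np.random.choice(len(scaled), p=scaled)

probs = [0.05, 0.15, 0.60, 0.20]
print(sample_with_temperature(probs, temperature=0.5))  # sharper, closer to argmax
print(sample_with_temperature(probs, temperature=2.0))  # flatter, more diverse
</code></pre>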
2016-06-27 00:49:05.190000+00:00
2016-06-27 00:49:05.190000+00:00
null
null
31,553,060
<p>I have trained a LSTM by a sequence, and try to test if it can synthesize some output sequence, but interestingly and unfortunately, it very quickly, i.e, after 2 time steps, stablizes to a fix output, meaning a sequence of exact same values.</p> <p>Now I have changed the initialization, but the outputs are always the same after 2 steps. What might be wrong in the training or sampling?</p> <p>Sorry that I cannot give more context, because the whole program is a big too large to post here.</p>
2015-07-22 02:53:21.653000+00:00
2016-06-27 00:49:05.190000+00:00
null
neural-network|deep-learning|lstm
['https://github.com/karpathy/char-rnn', 'http://arxiv.org/abs/1308.0850', 'https://github.com/jrieke/lstm-biology/blob/master/Project%20Report.pdf']
3
68,416,537
<p>The easiest way is probably to leverage the truncated normal distribution as provided by Scipy.</p> <p>This gives the following code, with ν (nu) as the variable of the standard Gaussian distribution, and τ (tau) mapping to ν<sub>0</sub> on that distribution. This function returns a Numpy array containing ranCount lognormal variates:</p> <pre><code>import numpy as np from scipy.stats import truncnorm def getMySamplesScipy(ranCount, mu, sigma, tau): nu0 = (math.log(tau) - mu) / sigma # position of tau on unit Gaussian xs = truncnorm.rvs(nu0, np.inf, size=ranCount) # truncated unit normal samples ys = np.exp(mu + sigma * xs) # go back to x space return ys </code></pre> <p>If for some reason this is not suitable, well some of the tricks commonly used for Gaussian variates, such as <a href="https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform" rel="nofollow noreferrer">Box-Muller</a> do not work for a truncated distribution, but we can resort always to a general principle: the <a href="https://en.wikipedia.org/wiki/Inverse_transform_sampling" rel="nofollow noreferrer">Inverse Transform Sampling</a> theorem.</p> <p>So we generate cumulative probabilities for our variates, by transforming uniform variates. And we trust Scipy, using its inverse of the <em>erf</em> error function to go back from our probabilities to the x space values.</p> <p>This gives something like the following Python code (without any attempt at optimization):</p> <pre><code>import math import random import numpy as np import numpy.random as nprd import scipy.special as spfn # using the &quot;Inverse Method&quot;: def getMySamples(ranCount, mu, sigma, tau): nu0 = (math.log(tau) - mu) / sigma # position of tau in standard Gaussian curve headCP = (1/2) * (1 + spfn.erf(nu0/math.sqrt(2))) tailCP = 1.0 - headCP # probability of being in the &quot;tail&quot; uvs = np.random.uniform(0.0, 1.0, ranCount) # uniform variates cps = (headCP + uvs * tailCP) # Cumulative ProbabilitieS nus = (math.sqrt(2)) * spfn.erfinv(2*cps-1) # positions in standard Gaussian xs = np.exp(mu + sigma * nus) # go back to x space return xs </code></pre> <h2 id="alternatives">Alternatives:</h2> <p>We can leverage the significant amount of material related to the <a href="https://en.wikipedia.org/wiki/Truncated_normal_distribution" rel="nofollow noreferrer">Truncated Gaussian distribution</a>.</p> <p>There is a relatively recent (2016) <a href="https://www.iro.umontreal.ca/%7Elecuyer/myftp/papers/vt16truncnormal.pdf" rel="nofollow noreferrer">review paper</a> on the subject by Zdravko Botev and Pierre L'Ecuyer. This paper provides a pointer to publicly available <a href="https://cran.r-project.org/web/packages/TruncatedNormal" rel="nofollow noreferrer">R source code</a>. Some material is seriously old, for example the 1986 book by Luc Devroye: <a href="http://www.eirene.de/Devroye.pdf" rel="nofollow noreferrer">Non-Uniform Random Variate Generation</a>.</p> <p>For example, a possible rejection-based method: if τ (tau) maps to ν<sub>0</sub> on the standard Gaussian curve, the unit Gaussian distribution is like exp(-ν<sup>2</sup>/2). If we write ν = ν<sub>0</sub> + δ, this is proportional to: exp(-δ<sup>2</sup>/2) * exp(-ν<sub>0</sub>*δ).</p> <p>The idea is to approximate the exact distribution beyond ν<sub>0</sub> by an <strong>exponential one</strong>, of parameter ν<sub>0</sub>. Note that the exact distribution is constantly <em>below</em> the approximate one. 
Then we can randomly accept the relatively cheap exponential variates with a probability of exp(-δ<sup>2</sup>/2).</p> <p>We can just pick an equivalent algorithm in the literature. In the Devroye book, chapter IX page 382, there is some pseudo-code:</p> <p>REPEAT generate independent exponential random variates X and Y UNTIL X<sup>2</sup> &lt;= 2*ν<sub>0</sub><sup>2</sup>*Y</p> <p>RETURN R &lt;-- ν<sub>0</sub> + X/ν<sub>0</sub></p> <p>for which a Numpy rendition could be written like this:</p> <pre><code>def getMySamplesXpRj(rawRanCount, mu, sigma, tau): nu0 = (math.log(tau) - mu) / sigma # position of tau in standard Gaussian if (nu0 &lt;= 0): print(&quot;Error: τ (tau) too small in getMySamplesXpRj&quot;) rnu0 = 1.0 / nu0 xs = nprd.exponential(1.0, rawRanCount) # exponential &quot;raw&quot; variates ys = nprd.exponential(1.0, rawRanCount) allSamples = nu0 + (rnu0 * xs) boolArray = (xs*xs - 2*nu0*nu0*ys) &lt;= 0.0 samples = allSamples[boolArray] ys = np.exp(mu + sigma * samples) # go back to x space return ys </code></pre> <p>According to Table 3 in the Botev-L'Ecuyer paper, the rejection rate of this algorithm is nicely low.</p> <p>Besides, if you are willing to allow for some sophistication, there is also some literature about the <a href="https://en.wikipedia.org/wiki/Ziggurat_algorithm" rel="nofollow noreferrer"><em>Ziggurat</em> algorithm</a> as used for truncated Gaussian distributions, for example the 2012 <a href="https://arxiv.org/pdf/1201.6140" rel="nofollow noreferrer">arXiv 1201.6140 paper</a> by Nicolas Chopin at ENSAE-CREST.</p> <p><strong>Side note:</strong> with recent versions of Python, it seems that you can use Greek letters for your variable names directly, σ instead of sigma, τ instead of tau, just as in the statistics books:</p> <pre><code>$ python3 Python 3.9.6 (default, Jun 29 2021, 00:00:00) &gt;&gt;&gt; &gt;&gt;&gt; σ = 2 &gt;&gt;&gt; τ = 7 &gt;&gt;&gt; &gt;&gt;&gt; στ = σ * τ &gt;&gt;&gt; &gt;&gt;&gt; στ + 1 15 &gt;&gt;&gt; </code></pre>
2021-07-17 00:38:12.767000+00:00
2021-07-19 14:00:03.413000+00:00
2021-07-19 14:00:03.413000+00:00
null
68,411,873
<p>I'm building a simulation which requires random draws from the tail of a lognormal distribution. A threshold τ (tau) is chosen, and a resulting conditional distribution is given by: <a href="https://i.stack.imgur.com/54uNl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/54uNl.png" alt="Formula for truncated lognormal distribution" /></a></p> <p>I need to randomly sample from that conditional distribution, where F(x) is lognormal with a chosen µ (mu) and σ (sigma), and τ (tau) is set by the user.</p> <p>My inelegant solution right now is simply to sample from the lognormal, tossing out any values under τ (tau), until I have the sample size I need. But I'm sure this can be improved.</p> <p>Thanks for the help!</p>
2021-07-16 15:48:41.607000+00:00
2021-07-21 13:41:06.663000+00:00
2021-07-21 13:41:06.663000+00:00
python|random|scipy|simulation|normal-distribution
['https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform', 'https://en.wikipedia.org/wiki/Inverse_transform_sampling', 'https://en.wikipedia.org/wiki/Truncated_normal_distribution', 'https://www.iro.umontreal.ca/%7Elecuyer/myftp/papers/vt16truncnormal.pdf', 'https://cran.r-project.org/web/packages/TruncatedNormal', 'http://www.eirene.de/Devroye.pdf', 'https://en.wikipedia.org/wiki/Ziggurat_algorithm', 'https://arxiv.org/pdf/1201.6140']
8
47,918,719
<p>See <a href="http://bamos.github.io/2016/08/09/deep-completion/" rel="nofollow noreferrer">Image Completion with Deep Learning in TensorFlow</a> for a long answer.</p> <p>In short: Suppose you make a CNN which has n filters of the size of its input and valid-padding. Then the output will be of shape n x 1 x 1. Then you can apply softmax to that shape and you have the probabilities in the channels.</p> <p>You might also want to read <a href="https://arxiv.org/pdf/1707.09725.pdf" rel="nofollow noreferrer">2.2.1. Convolutional Layers</a> of my Masters thesis.</p>
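<p>For concreteness, here is a hedged Keras sketch of a DCGAN-style discriminator that reduces an image to a single real/fake probability; the layer sizes are illustrative, not taken from any particular paper.</p> <pre><code>from tensorflow.keras import layers, models

# strided convolutions shrink the feature map until one scalar per image remains
disc = models.Sequential([
    layers.Conv2D(64, 4, strides=2, padding='same', input_shape=(64, 64, 3)),
    layers.LeakyReLU(0.2),
    layers.Conv2D(128, 4, strides=2, padding='same'),
    layers.LeakyReLU(0.2),
    layers.Conv2D(256, 4, strides=2, padding='same'),
    layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1, activation='sigmoid'),  # P(input image is real)
])
disc.compile(optimizer='adam', loss='binary_crossentropy')
disc.summary()
</code></pre>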
2017-12-21 05:52:27.807000+00:00
2017-12-21 05:52:27.807000+00:00
null
null
47,887,328
<p>I studying about DCGAN, and I wonder something about it.</p> <p>In Ian Goodfellow's natural GAN, discriminator Model outputs one scalar value what means the probability. But DCGAN's discriminator has designed with CNN architecture. I know that CNN's output is vector of class probabilities.</p> <p>So how discriminator works on DCGAN? And what output of DCGAN's discriminator is?</p>
2017-12-19 12:42:25.250000+00:00
2018-12-11 00:52:46.080000+00:00
null
deep-learning|dcgan
['http://bamos.github.io/2016/08/09/deep-completion/', 'https://arxiv.org/pdf/1707.09725.pdf']
2
35,527,653
<p>After sorting, invert the second half of the array:<br> now the rest of the problem is to do a <em>perfect shuffle</em> of the array elements - a problem to come up time and again.<br> If you want to apply a permutation in-place and know how to transform indices, you can keep a "scoreboard" of indices handled - but even a single bit per item is O(n) storage. (Find the next index still needing handling and perform the cycle containing it, keeping scores, until all indices are handled.) </p> <p>A pretty nice rendition of an in-place perfect shuffle in linear time and constant space in addition to the array is <a href="https://cs.stackexchange.com/a/400/19966">Aryabhata's</a> over at CS. The method has been <a href="http://arxiv.org/pdf/0805.1598v1.pdf" rel="nofollow noreferrer">placed at arxiv.org</a> by Peiyush Jain.<br> (The complexity of the sort as a first step may dominate the permutation/shuffle step(s).)</p> <hr> <p>There is another interpretation of this task, or the sort step: sort into a folded array.<br> The sort lending itself most readily to this task got to be the double-ended selection sort:<br> In each pass over the data not yet placed, determine the min and max in 3/2n comparisons and swap into their positions, until one value or none at all is left.<br> Or take a standard sort method, and have the indexes mapped. For the hell of it:</p> <pre><code>/** Anything with accessors with int parameter */ interface Indexable&lt;T&gt; { T get(int index); T set(int index, T value); // int size(); // YAGNI? } /** The accessors have this folded in half, * while iterator() is not overridden */ @SuppressWarnings("serial") class FoldedList&lt;T&gt; extends ArrayList&lt;T&gt; implements Indexable&lt;T&gt; { public FoldedList(@SuppressWarnings("unchecked") T...elements) { super(Arrays.asList(elements)); } int map(int index) { final int last = size()-1; index = 2*index; return last &lt;= index ? 
2*last-index : index+1; } @Override public T get(int index) { return super.get(map(index)); } @Override public T set(int index, T element) { return super.set(map(index), element); } } /** Sort an Indexable&lt;T&gt; */ public class Sort { // Hoare/Sedgewick using middle index for pivot private static &lt;T extends Comparable&lt;T&gt;&gt; int split(Indexable&lt;T&gt; ixable, int lo, int hi) { int mid = lo + (hi-lo)/2, left = lo+1, right= hi-1; T pivot = ixable.get(mid), l = null, r = null; ixable.set(mid, ixable.get(lo)); scan: while (true) { while ((l = ixable.get(left)).compareTo(pivot) &lt; 0) if (right &lt; ++left) { left--; break scan; } while (pivot.compareTo(r = ixable.get(right)) &lt; 0) if (--right &lt;= left) { left -= 1; l = ixable.get(left); break scan; } ixable.set(left, r); // place misplaced items ixable.set(right, l); if (--right &lt; ++left) { left = right; l = r; break; } } ixable.set(lo, l); // put last left value into first position ixable.set(left, pivot); // place pivot at split index return left; } private static &lt;T extends Comparable&lt;T&gt;&gt; void sort(Indexable&lt;T&gt; ixable, int lo, int hi) { while (lo+2 &lt; hi) { // more than 2 Ts int split = split(ixable, lo, hi); if (split - lo &lt; hi - split) { sort(ixable, lo, split); // left part smaller lo = split + 1; } else { sort(ixable, split+1, hi); // right part smaller hi = split; } } T l, h; if (lo &lt; --hi // 2 Ts &amp;&amp; (l = ixable.get(lo)).compareTo(h = ixable.get(hi)) &gt; 0) { ixable.set(lo, h); // exchange ixable.set(hi, l); } } public static &lt;T extends Comparable&lt;T&gt;&gt; void main(String[] args) { Indexable&lt;Number&gt; nums = new FoldedList&lt;&gt;( //2,6,1,7,9,3); 7, 3, 9, 3, 0, 6, 1, 2, 8, 6, 5, 4, 7); sort((Indexable&lt;T&gt;) nums); System.out.println(nums); } } </code></pre>
2016-02-20 18:59:14.647000+00:00
2016-02-23 02:09:00.517000+00:00
2017-04-13 12:48:30.803000+00:00
null
35,513,542
<p>A zig-zag method which takes an array as argument and returns a zig-zag array.</p> <p>Example : Input 2,6,1,7,9,3 Output 9,1,7,2,6,3</p> <p>The array returned must have alternative highest numbers and lowest numbers.</p> <p>I can think of this method. //Pseudo code</p> <pre><code>public static int [] zig-zag(int arr[]) { arr.sort(); int returnArr[] = new int[arr.length]; int begindex = 0, endindex = arr.length -1; int idx = 0; while(begindex&lt;arr.length/2-1 &amp;&amp; endindex&gt;=arr.length/2) { returnArr[idx++] = arr[endindex]; returnArr[idx++] = arr[begindex]; begindex++;endindex--; } if(arr.length%2 == 1) reurnArr[idx] = arr[begindex]; return returnArr; } </code></pre> <p>This method has a time complexity of O(nlogn) (because of the sort) and space complexity of O(n). Is there any other way/algorithm so that it can do better than O(nlogn) ? or with O(nlogn) and space complexity being O(1) ?</p> <p>There's one more method with TC O(n^2) and SC O(1). But not interested in TC of O(n^2).</p>
2016-02-19 19:36:30.273000+00:00
2016-02-23 02:09:00.517000+00:00
null
java|arrays|algorithm
['https://cs.stackexchange.com/a/400/19966', 'http://arxiv.org/pdf/0805.1598v1.pdf']
2
44,553,670
<p>There have been some good and useful suggestions already but let me add a few remarks:</p> <ol> <li>The viridis and magma palettes are sequential palettes with multiple hues. Thus, along the scale you increase from very light colors to rather dark colors. Simultaneously the colorfulness is increased and the hue changes from yellow to blue (either via green or via red).</li> <li>Diverging palettes can be created by combining two sequential palettes. Typically, you join them at the light colors and then let them diverge to different dark colors.</li> <li>Usually, one uses single-hue sequential palettes that diverge from a neutral light gray to two different dark colors. One should pay attention though that the different "arms" of the palette are balanced with respect to luminance (light-dark) and chroma (colorfuness).</li> </ol> <p>Therefore, combining magma and viridis does not work well. You could let them diverge from a similar yellowish color but you would diverge to similar blueish colors. Also with the changing hues it would just become more difficult to judge in which arm of the palette you are.</p> <p>As mentioned by others, ColorBrewer.org provides good diverging palettes. Moreland's approach is also useful. Yet another general solution is our <code>diverging_hcl()</code> function in the <code>colorspace</code> package. In the accompanying paper at <a href="https://arxiv.org/abs/1903.06490" rel="noreferrer">https://arxiv.org/abs/1903.06490</a> (forthcoming in JSS) the construction principles are described and also how the general HCL-based strategy can approximate numerous palettes from ColorBrewer.org, CARTO, etc. (Earlier references include our initial work in CSDA at <a href="http://dx.doi.org/10.1016/j.csda.2008.11.033" rel="noreferrer">http://dx.doi.org/10.1016/j.csda.2008.11.033</a> and further recommendations geared towards meteorology, but applicable beyond, in a BAMS paper at <a href="http://dx.doi.org/10.1175/BAMS-D-13-00155.1" rel="noreferrer">http://dx.doi.org/10.1175/BAMS-D-13-00155.1</a>.)</p> <p>The advantage of our solution in HCL space (hue-chroma-luminance) is that you can interpret the coordinates relatively easily. It does take some practice but isn't as opaque as other solutions. Also we provide a GUI <code>hclwizard()</code> (see below) that helps understanding the importance of the different coordinates.</p> <p>Most of the palettes in the question and the other answers can be matched rather closely by <code>diverging_hcl()</code> provided that the two hues (argument <code>h</code>), the maximum chroma (<code>c</code>), and minimal/maximal luminance (<code>l</code>) are chosen appropriately. Furthermore, one may have to tweak the <code>power</code> argument which controls how quickly chroma and luminance are increased, respectively. Typically, chroma is added rather quickly (<code>power[1] &lt; 1</code>) whereas luminance is increased more slowly (<code>power[2] &gt; 1</code>).</p> <p>Moreland's "cool-warm" palette for example uses a blue (<code>h = 250</code>) and red (<code>h = 10</code>) hue but with a relatively small luminance contrast(<code>l = 37</code> vs. 
<code>l = 88</code>):</p> <pre><code>coolwarm_hcl &lt;- colorspace::diverging_hcl(11, h = c(250, 10), c = 100, l = c(37, 88), power = c(0.7, 1.7)) </code></pre> <p>which looks rather similar (see below) to:</p> <pre><code>coolwarm &lt;- Rgnuplot:::GpdivergingColormap(seq(0, 1, length.out = 11), rgb1 = colorspace::sRGB( 0.230, 0.299, 0.754), rgb2 = colorspace::sRGB( 0.706, 0.016, 0.150), outColorspace = "sRGB") coolwarm[coolwarm &gt; 1] &lt;- 1 coolwarm &lt;- rgb(coolwarm[, 1], coolwarm[, 2], coolwarm[, 3]) </code></pre> <p>In contrast, ColorBrewer.org's BrBG palette a much higher luminance contrast (<code>l = 20</code> vs. <code>l = 95</code>):</p> <pre><code>brbg &lt;- rev(RColorBrewer::brewer.pal(11, "BrBG")) brbg_hcl &lt;- colorspace::diverging_hcl(11, h = c(180, 50), c = 80, l = c(20, 95), power = c(0.7, 1.3)) </code></pre> <p>The resulting palettes are compared below with the HCL-based version below the original. You see that these are not identical but rather close. On the right-hand side I've also matched viridis and plasma with HCL-based palettes.</p> <p><a href="https://i.stack.imgur.com/nxSiv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nxSiv.png" alt="palettes"></a></p> <p>Whether you prefer the cool-warm or BrBG palette may depend on your personal taste but also - more importantly - what you want to bring out in your visualization. The low luminance contrast in cool-warm will be more useful if the <em>sign</em> of the deviation matters most. A high luminance contrast will be more useful if you want to bring out the <em>size</em> of the (extreme) deviations. More practical guidance is provided in the papers above.</p> <p>The rest of the replication code for the figure above is:</p> <pre><code>viridis &lt;- viridis::viridis(11) viridis_hcl &lt;- colorspace::sequential_hcl(11, h = c(300, 75), c = c(35, 95), l = c(15, 90), power = c(0.8, 1.2)) plasma &lt;- viridis::plasma(11) plasma_hcl &lt;- colorspace::sequential_hcl(11, h = c(-100, 100), c = c(60, 100), l = c(15, 95), power = c(2, 0.9)) pal &lt;- function(col, border = "transparent") { n &lt;- length(col) plot(0, 0, type="n", xlim = c(0, 1), ylim = c(0, 1), axes = FALSE, xlab = "", ylab = "") rect(0:(n-1)/n, 0, 1:n/n, 1, col = col, border = border) } par(mar = rep(0, 4), mfrow = c(4, 2)) pal(coolwarm) pal(viridis) pal(coolwarm_hcl) pal(viridis_hcl) pal(brbg) pal(plasma) pal(brbg_hcl) pal(plasma_hcl) </code></pre> <p><strong>Update:</strong> These HCL-based approximations of colors from other tools (ColorBrewer.org, viridis, scico, CARTO, ...) are now also available as named palettes in both the <code>colorspace</code> package and the <code>hcl.colors()</code> function from the basic <code>grDevices</code> package (starting from 3.6.0). Thus, you can now also say easily:</p> <pre><code>colorspace::sequential_hcl(11, "viridis") grDevices::hcl.colors(11, "viridis") </code></pre> <p>Finally, you can explore our proposed colors interactively in a shiny app: <a href="http://hclwizard.org:64230/hclwizard/" rel="noreferrer">http://hclwizard.org:64230/hclwizard/</a>. For users of R, you can also start the shiny app locally on your computer (which runs somewhat faster than from our server) or you can run a Tcl/Tk version of it (which is even faster):</p> <pre><code>colorspace::hclwizard(gui = "shiny") colorspace::hclwizard(gui = "tcltk") </code></pre> <p>If you want to understand what the paths of the palettes look like in RGB and HCL coordinates, the <code>colorspace::specplot()</code> is useful. 
See for example <code>colorspace::specplot(coolwarm)</code>.</p>
2017-06-14 19:57:38.353000+00:00
2019-10-24 07:42:06.740000+00:00
2019-10-24 07:42:06.740000+00:00
null
37,482,977
<p>I am interested in having a "good" divergent color pallette. One could obviously use just red, white, and blue:</p> <pre><code>img &lt;- function(obj, nam) { image(1:length(obj), 1, as.matrix(1:length(obj)), col=obj, main = nam, ylab = "", xaxt = "n", yaxt = "n", bty = "n") } rwb &lt;- colorRampPalette(colors = c("red", "white", "blue")) img(rwb(100), "red-white-blue") </code></pre> <p><a href="https://i.stack.imgur.com/eHrwm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/eHrwm.png" alt="enter image description here"></a></p> <p>Since I recently fell in love with the <a href="https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html" rel="noreferrer">viridis color palettes</a>, I was hoping to combine viridis and magma to form such divergent colors (of course, color blind people would only see the absolute value of the color, but that is sometimes o.k.).</p> <p>When I tried combining viridis and magma, I found that they don't "end" (or "start") at the same place, so I get something like this (I'm using R, but this would probably be the same for python users):</p> <pre><code>library(viridis) img(c(rev(viridis(100, begin = 0)), magma(100, begin = 0)), "magma-viridis") </code></pre> <p><a href="https://i.stack.imgur.com/255WC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/255WC.png" alt="enter image description here"></a></p> <p>We can see that when close to zero, viridis is purple, while magma is black. I would like for both of them to start in (more or less) the same spot, so I tried using 0.3 as a starting point:</p> <pre><code>img(c(rev(viridis(100, begin = 0.3)), magma(100, begin = 0.3)), "-viridis-magma(0.3)") </code></pre> <p><a href="https://i.stack.imgur.com/9FJyY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9FJyY.png" alt="enter image description here"></a></p> <p>This is indeed better, but I wonder if there is a better solution. </p> <p>(I am also "tagging" python users, since viridis is originally from <code>matplotlib</code>, so someone using it may know of such a solution)</p> <p>Thanks!</p>
2016-05-27 11:59:38.700000+00:00
2021-04-02 16:34:48.570000+00:00
2019-04-13 23:32:35.697000+00:00
python|r|matplotlib|colors|viridis
['https://arxiv.org/abs/1903.06490', 'http://dx.doi.org/10.1016/j.csda.2008.11.033', 'http://dx.doi.org/10.1175/BAMS-D-13-00155.1', 'https://i.stack.imgur.com/nxSiv.png', 'http://hclwizard.org:64230/hclwizard/']
5
68,964,739
<p>The 5 GB data set provided by armancohan <a href="https://github.com/armancohan/long-summarization" rel="nofollow noreferrer">should do</a>.</p> <p>As he notes:</p> <blockquote> <p>Two datasets of long and structured documents (scientific papers) are provided. The datasets are obtained from ArXiv and PubMed OpenAccess repositories.</p> </blockquote> <p>Or get it straight from <a href="https://www.tensorflow.org/datasets/catalog/scientific_papers" rel="nofollow noreferrer">TensorFlow Datasets</a>, as sketched below.</p>
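<p>A hedged loading sketch, assuming the TensorFlow Datasets catalog name <code>scientific_papers</code> with its <code>arxiv</code> and <code>pubmed</code> configs as listed on the linked page:</p> <pre><code>import tensorflow_datasets as tfds

# the catalog page lists "arxiv" and "pubmed" configurations
ds, info = tfds.load('scientific_papers/arxiv', split='train', with_info=True)
print(info.features)  # expected fields include article, abstract, section_names

for example in ds.take(1):
    print(example['abstract'].numpy()[:200])
</code></pre>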
2021-08-28 13:18:14.753000+00:00
2021-08-28 13:18:14.753000+00:00
null
null
68,938,478
<p>I am trying to find a dataset containing scientific papers from different domains of interest (e.g., neuroscience, mathematics, physics, history, biology, medicine, etc.) in order to develop an NLP project intended to summarize scientific texts while changing domain-specific terms into more common words.</p> <p>Does anybody know where I could find such a dataset?</p>
2021-08-26 12:25:02.980000+00:00
2021-08-28 13:18:14.753000+00:00
null
nlp|dataset
['https://github.com/armancohan/long-summarization', 'https://www.tensorflow.org/datasets/catalog/scientific_papers']
2
67,691,233
<p>To add to @MSalters' comment, and somewhat building on <a href="https://stackoverflow.com/a/65314347/913098">this</a>, it is possible, although not guaranteed, that you could &quot;help&quot; your model learn something better than the identity if you force it to learn not <em>the actual value</em> of the next step, but instead <em>the difference</em> from the current step to the next (a minimal sketch of this is given below).<br /> To take this one step further, you could also keep an exponential moving average and learn the difference from that, somewhat like what was done <a href="https://arxiv.org/pdf/1704.04110.pdf" rel="nofollow noreferrer">here</a>.</p> <p>In short, it makes statistical sense for the model to predict the same value, as that is a low-risk guess. <em>Maybe</em> learning a difference won't converge to zero.</p> <hr /> <p>Other things I noticed:</p> <ol> <li>Dropout - there is no need for any regularization before you are able to over-fit; it just complicates debugging.</li> <li>Just one step into the past - it is likely you are losing a lot of required information, in effect forcing your net to have no idea what to do and thus to guess the same value. If you gave it even a single additional value from the past, it could form a decent approximation of the derivative. That sounds important (only you know).</li> </ol>
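<p>A minimal sketch of the difference-target idea on a toy 1-D series (illustrative only; the window size, the toy data and the model call are placeholders):</p> <pre><code>import numpy as np

series = np.cumsum(np.random.randn(500))  # toy series standing in for the real data

look_back = 8
X, y = [], []
for i in range(len(series) - look_back):
    window = series[i:i + look_back]
    X.append(window)
    y.append(series[i + look_back] - window[-1])  # target is the delta, not the value
X, y = np.array(X), np.array(y)

# after fitting some model on (X, y), a forecast is reconstructed by adding
# the predicted delta back onto the last observed value
last_window = series[-look_back:]
predicted_delta = 0.0  # placeholder for model.predict(last_window[None, :, None])
next_value = last_window[-1] + predicted_delta
print(X.shape, y.shape, next_value)
</code></pre>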
2021-05-25 15:37:01.833000+00:00
2021-05-25 15:42:30.930000+00:00
2021-05-25 15:42:30.930000+00:00
null
67,690,858
<p>The above may sound ideal, but I'm trying to predict a step in front - i.e. with a look_back of 1. My code is as follows:</p> <pre><code>def create_scaled_datasets(data, scaler_transform, train_perc = 0.9): # Set training size train_size = int(len(data)*train_perc) # Reshape for scaler transform data = data.reshape((-1, 1)) # Scale data to range (-1,1) data_scaled = scaler_transform.fit_transform(data) # Reshape again data_scaled = data_scaled.reshape((-1, 1)) # Split into train and test data keeping time order train, test = data_scaled[0:train_size + 1, :], data_scaled[train_size:len(data), :] return train, test # Instantiate scaler transform scaler = MinMaxScaler(feature_range=(0, 1)) model.add(LSTM(5, input_shape=(1, 1), activation='tanh', return_sequences=True)) model.add(Dropout(0.1)) model.add(LSTM(12, input_shape=(1, 1), activation='tanh', return_sequences=True)) model.add(Dropout(0.1)) model.add(LSTM(2, input_shape=(1, 1), activation='tanh', return_sequences=False)) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') # Create train/test data sets train, test = create_scaled_datasets(data, scaler) trainY = [] for i in range(len(train) - 1): trainY = np.append(trainY, train[i + 1]) train = np.reshape(train, (train.shape[0], 1, train.shape[1])) plotting_test = test test = np.reshape(test, (test.shape[0], 1, test.shape[1])) model.fit(train[:-1], trainY, epochs=150, verbose=0) testPredict = model.predict(test) plt.plot(testPredict, 'g') plt.plot(plotting_test, 'r') plt.show() </code></pre> <p>with output plot of:</p> <p><a href="https://i.stack.imgur.com/Wl9jm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wl9jm.png" alt="enter image description here" /></a></p> <p>In essence, what I want to achieve is for the model to predict the next value, and I attempt to do this by training on the actual values as the features, and the labels being the actual values shifted along one (look_back of 1). Then I predict on the test data. As you can see from the plot, the model does a pretty good job, except it doesn't seem to be predicting the future, but instead seems to be predicting the present... I would expect the plot to look similar, except the green line (the predictions) to be shifted one point to the left. I have tried increasing the look_back value, but it seems to always do the same thing, which makes me think I'm training the model wrong, or attempting to predict incorrectly. If I am reading this wrong and the model is indeed doing what I want but I'm interpreting wrong (also very possible) how do I then predict further into the future?</p>
2021-05-25 15:13:31.730000+00:00
2021-05-25 15:42:30.930000+00:00
2021-05-25 15:20:43.603000+00:00
keras|deep-learning|lstm
['https://stackoverflow.com/a/65314347/913098', 'https://arxiv.org/pdf/1704.04110.pdf']
2
69,310,462
<p>In Scilab balanc() is hard-coded and based on LAPACK's dgebal (see the <a href="http://www.netlib.org/lapack/explore-html/dd/d9a/group__double_g_ecomputational_ga411292dd693c20ff9c27650fb7bddf85.html#ga411292dd693c20ff9c27650fb7bddf85" rel="nofollow noreferrer">Fortran source at Netlib</a>). In the algorithm the operations are quite simple (computing inf and 2-norms, swaping columns or rows of a matrix), maybe this could easily translated ?</p> <p>A more readable version of the algorithm can be found on page 3 (Algorithm 2) of the following document: <a href="https://arxiv.org/abs/1401.5766" rel="nofollow noreferrer">https://arxiv.org/abs/1401.5766</a>.</p> <p>Here is a Scilab implementation of Algorithm 3:</p> <pre><code>function [A,X]=bal(Ain) A = Ain; n = size(A,1); X = ones(n,1); β = 2; // multiply or divide by radix preserves precision p = 2; // eventually change to 1-norm converged = 0; while converged == 0 converged = 1; for i=1:n c = norm(A(:,i),p); r = norm(A(i,:),p); s = c^p+r^p; f = 1; while c &lt; r/β c = c*β; r = r/β; f = f*β; end while c &gt;= r*β c = c/β; r = r*β; f = f/β; end if (c^p+r^p) &lt; 0.95*s converged = 0; X(i) = f*X(i); A(:,i) = f*A(:,i); A(i,:) = A(i,:)/f; end end end X = diag(X); endfunction </code></pre> <p>On this example the above implementation gives the same balanced matrix:</p> <pre><code>--&gt; A=rand(5,5,&quot;normal&quot;); A(:,1)=A(:,1)*1024; A(2,:)=A(2,:)/1024 A = 897.30729 -1.6907865 -1.0217046 -0.9181476 -0.1464695 -0.5430253 -0.0011318 -0.0000356 -0.001277 -0.00038 -774.96457 3.1685332 0.1467254 -0.410953 -0.6165827 155.22118 0.1680727 -0.2262445 -0.3402948 1.6098294 1423.0797 -0.3302511 0.5909125 -1.2169245 -0.7546739 --&gt; [Ab,X]=balanc(A) Ab = 897.30729 -0.8453932 -32.694547 -14.690362 -9.3740507 -1.0860507 -0.0011318 -0.0022789 -0.0408643 -0.0486351 -24.217643 0.0495083 0.1467254 -0.2054765 -1.2331655 9.7013239 0.0052523 -0.452489 -0.3402948 6.4393174 22.23562 -0.0025801 0.2954562 -0.3042311 -0.7546739 X = 0.03125 0. 0. 0. 0. 0. 0.015625 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.5 0. 0. 0. 0. 0. 2. --&gt; [Ab,X]=bal(A) Ab = 897.30729 -0.8453932 -32.694547 -14.690362 -9.3740507 -1.0860507 -0.0011318 -0.0022789 -0.0408643 -0.0486351 -24.217643 0.0495083 0.1467254 -0.2054765 -1.2331655 9.7013239 0.0052523 -0.452489 -0.3402948 6.4393174 22.23562 -0.0025801 0.2954562 -0.3042311 -0.7546739 X = 1. 0. 0. 0. 0. 0. 0.5 0. 0. 0. 0. 0. 32. 0. 0. 0. 0. 0. 16. 0. 0. 0. 0. 0. 64. </code></pre>
2021-09-24 06:26:04.960000+00:00
2021-09-24 07:27:55.323000+00:00
2021-09-24 07:27:55.323000+00:00
null
69,306,537
<p>I'm trying to program the z-transform in wxMaxima which doesn't have it programmed but not by definition but by using the Scilab approach. Scilab to calculate the z-transform first converts the transfer function to the state space, after that the system must be discretized and after that converted to z transfer function, I need this because of some algebraic calculations that I need to do to analyze stability of a system in function of the sample period.</p> <p>Right now I'm stranded with the function balanc() which finds a similarity transform such that</p> <pre><code>Ab = X^(-1) . A . X </code></pre> <p>as approximately equal row and column norms.</p> <p>Most of my code in wxMaxima to reach in the near future has been done by translating the Scilab code into wxMaxima, currently I'm writing the tf2ss() function an inside that function the balanc() function is called, the problem is that I couldn't find the code for that function in Scilab installation directory, I've searched info in books and papers but every example starts with the Ab matrix given as an input to the problem, Scilab instead has the option to have as an input only the A matrix and it calculates the Ab and X matrices, so, I need help to make this function exactly as Scilab has it programmed to been able to compare all the steps that I'm doing.</p> <p>Finally, wxMaxima has a function to calculate similarity transforms but it don't have the same output as Scilab what it means to me that they uses different criteria to calculate the similarity transform.</p> <p>Note: I've tried to make the calculations in wxMaxima to have Ab and X matrices as elements with variables but the system of equations remains with too many variables and couldn't be solved.</p> <p>Thanks in advance for the help in doing this.</p>
2021-09-23 20:27:10.083000+00:00
2021-09-24 07:27:55.323000+00:00
2021-09-24 02:33:55.240000+00:00
similarity|scilab|state-space|wxmaxima
['http://www.netlib.org/lapack/explore-html/dd/d9a/group__double_g_ecomputational_ga411292dd693c20ff9c27650fb7bddf85.html#ga411292dd693c20ff9c27650fb7bddf85', 'https://arxiv.org/abs/1401.5766']
2
36,303,806
<p>You would really only need to make a list of 100 or so negative adjectives and 100 or so positive ones.</p> <p>See:<br/> <a href="http://na2english.wikispaces.com/file/view/ADJECTIVES%20TO%20DESCRIBE%20FILMS.pdf/400672720/ADJECTIVES%20TO%20DESCRIBE%20FILMS.pdf" rel="nofollow">http://na2english.wikispaces.com/file/view/ADJECTIVES%20TO%20DESCRIBE%20FILMS.pdf/400672720/ADJECTIVES%20TO%20DESCRIBE%20FILMS.pdf</a><br/><br/> <a href="http://arxiv.org/ftp/arxiv/papers/1011/1011.4623.pdf" rel="nofollow">http://arxiv.org/ftp/arxiv/papers/1011/1011.4623.pdf</a><br/><br/></p> <p>Obviously cite them if you use them, but language is free, so you can use those lists for your work.</p> <p>Probably more important than the size of the database you construct is picking words that target your specific application, for increased efficacy.</p> <p>Are you aiming this project at a specific commercial use, or is it a more generalized research effort?</p>
2016-03-30 08:43:38.550000+00:00
2016-03-30 08:43:38.550000+00:00
null
null
36,303,610
<p>I'm trying to train an LSTM model for the task of sentiment classification on short texts such as products reviews and tweets. </p> <p>I'm looking for a training set that labels positive/negative/neutral, is there such thing (free for research) out there that is really based on human tags and not on starts or emoticons? Iv'e found only small training sets which led me to poor results. Iv'e tried to increase the size of my network and stacked layers but no improvement. </p> <p>Whats the minimum size for such a training set in order to start getting reasonable results (F1 > 0.8).</p>
2016-03-30 08:34:16.653000+00:00
2016-03-30 08:43:38.550000+00:00
null
python|machine-learning|sentiment-analysis|keras|lstm
['http://na2english.wikispaces.com/file/view/ADJECTIVES%20TO%20DESCRIBE%20FILMS.pdf/400672720/ADJECTIVES%20TO%20DESCRIBE%20FILMS.pdf', 'http://arxiv.org/ftp/arxiv/papers/1011/1011.4623.pdf']
2
65,887,995
<h2><a href="https://datascience.stackexchange.com/questions/5706/what-is-the-dying-relu-problem-in-neural-networks">Dying ReLU</a></h2> <ul> <li>I think the main reason for <strong>underfitting</strong> in your case is the <a href="https://datascience.stackexchange.com/questions/5706/what-is-the-dying-relu-problem-in-neural-networks"><strong>Dying ReLU</strong></a> problem. Your network is a simple autoencoder with <a href="https://theaisummer.com/skip-connections/" rel="nofollow noreferrer"><strong>no skip/residual</strong></a> connections, so the <strong>code</strong> in the bottleneck has to encode enough information about the <strong>bias</strong> in the data for the <strong>decoder</strong> to learn.</li> <li>If the <strong>ReLU</strong> activation function is used, information about <strong>negatively biased</strong> data can be <strong>lost</strong> due to the Dying ReLU problem. The solution is to use better activation functions like <strong>LeakyReLU</strong>, <strong>ELU</strong>, <a href="https://arxiv.org/abs/1908.08681" rel="nofollow noreferrer"><strong>MISH</strong></a>, etc. (a small sketch follows below).</li> </ul> <h2>Linear vs Conv.</h2> <p>In your case, you are <strong>overfitting</strong> on a single batch. As <strong>Linear</strong> layers have more <strong>parameters</strong> than <strong>Convolution</strong> layers, they may simply be <strong>memorising</strong> the given small data more easily.</p> <h2>Batch Size</h2> <p>As you are <strong>overfitting</strong> on a single batch, a <strong>small batch</strong> of data is very easy to <strong>memorise</strong>; on the other hand, a <strong>large batch</strong> with a single <strong>update</strong> of the network per batch (during overfitting) pushes the network to learn <strong>generalized</strong>, abstract features. (This works better when there are more batches with a lot of variety in the data.)</p> <p>I tried to reproduce your problem using simple <strong>Gaussian</strong> data. Just using <strong>LeakyReLU</strong> in place of <strong>ReLU</strong> with a proper learning rate solved the problem. The same architecture you gave is used.</p> <p>Hyper parameters:</p> <p>batch_size = 16</p> <p>epochs = 100</p> <p>lr = 1e-3</p> <p>optimizer = Adam</p> <p>loss (after training with <strong>ReLU</strong>) = 0.27265918254852295</p> <p>loss (after training with <strong>LeakyReLU</strong>) = 0.0004763789474964142</p> <h2>With ReLU</h2> <p><a href="https://i.stack.imgur.com/muhl5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/muhl5.png" alt="with relu" /></a></p> <h2>With LeakyReLU</h2> <p><a href="https://i.stack.imgur.com/Ve7ww.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ve7ww.png" alt="with Leaky relu" /></a></p>
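<p>For reference, a hedged PyTorch sketch of the same kind of block as <code>Conv1DBlock</code> from the question, with <code>nn.LeakyReLU</code> swapped in for <code>nn.ReLU</code> (the slope 0.2 is an assumption, not a tuned value):</p> <pre><code>import torch
import torch.nn as nn

class Conv1DBlockLeaky(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self._block = nn.Sequential(
            nn.Conv1d(in_channels, out_channels, kernel_size,
                      stride=1, padding=(kernel_size - 1) // 2),
            nn.LeakyReLU(0.2, inplace=True),   # keeps gradients alive for negative inputs
            nn.MaxPool1d(kernel_size=2, stride=2),
        )

    def forward(self, x):
        return self._block(x)

x = torch.randn(4, 1, 1024)
print(Conv1DBlockLeaky(1, 8, 15)(x).shape)   # torch.Size([4, 8, 512])
</code></pre>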
2021-01-25 15:48:06.840000+00:00
2021-01-25 16:03:55.457000+00:00
2021-01-25 16:03:55.457000+00:00
null
65,882,896
<h2><strong>TL;DR</strong></h2> <p>I am unable to overfit batches with multiple samples using autoencoder.</p> <p>Fully connected decoder seems to handle more samples per batch than conv decoder, but then also fails when number of samples increases. <strong>Why is this happening, and how to debug this?</strong></p> <hr /> <h2>In depth</h2> <p>I am trying to use an auto encoder on 1d data points of size <code>(n, 1, 1024)</code>, where <code>n</code> is the number of samples in the batch.</p> <p>I am trying to overfit to that single batch.</p> <p>Using a convolutional decoder, I am only able to fit a single sample (<code>n=1</code>), and when <code>n&gt;1</code> I am unable to drop the loss (MSE) below 0.2.</p> <p><strong>In blue: expected output (=input), in orange: reconstruction.</strong></p> <p>Single sample, single batch:<br /> <a href="https://i.stack.imgur.com/IywNA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IywNA.png" alt="Conv1sample" /></a></p> <p>Multiple samples, single batch, loss won't go down: <a href="https://i.stack.imgur.com/OscSR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OscSR.png" alt="Conv4samples" /></a></p> <p>Using more than one sample, we can see the net learns the general shape of the input (=output) signal, but greatly misses the bias.</p> <hr /> <p>Using a fully connected decoder does manage to reconstruct batches of multiple samples:</p> <p><a href="https://i.stack.imgur.com/reik6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/reik6.png" alt="Fc4samples" /></a></p> <hr /> <h2>Relevant code:</h2> <pre><code>class Conv1DBlock(nn.Module): def __init__(self, in_channels, out_channels, kernel_size): super().__init__() self._in_channels = in_channels self._out_channels = out_channels self._kernel_size = kernel_size self._block = nn.Sequential( nn.Conv1d( in_channels=self._in_channels, out_channels=self._out_channels, kernel_size=self._kernel_size, stride=1, padding=(self._kernel_size - 1) // 2, ), # nn.BatchNorm1d(num_features=out_channels), nn.ReLU(True), nn.MaxPool1d(kernel_size=2, stride=2), ) def forward(self, x): for layer in self._block: x = layer(x) return x class Upsample1DBlock(nn.Module): def __init__(self, in_channels, out_channels, factor): super().__init__() self._in_channels = in_channels self._out_channels = out_channels self._factor = factor self._block = nn.Sequential( nn.Conv1d( in_channels=self._in_channels, out_channels=self._out_channels, kernel_size=3, stride=1, padding=1 ), # 'same' nn.ReLU(True), nn.Upsample(scale_factor=self._factor, mode='linear', align_corners=True), ) def forward(self, x): x_tag = x for layer in self._block: x_tag = layer(x_tag) # interpolated = F.interpolate(x, scale_factor=0.5, mode='linear') # resnet idea return x_tag </code></pre> <p>encoder:</p> <pre><code>self._encoder = nn.Sequential( # n, 1024 nn.Unflatten(dim=1, unflattened_size=(1, 1024)), # n, 1, 1024 Conv1DBlock(in_channels=1, out_channels=8, kernel_size=15), # n, 8, 512 Conv1DBlock(in_channels=8, out_channels=16, kernel_size=11), # n, 16, 256 Conv1DBlock(in_channels=16, out_channels=32, kernel_size=7), # n, 32, 128 Conv1DBlock(in_channels=32, out_channels=64, kernel_size=5), # n, 64, 64 Conv1DBlock(in_channels=64, out_channels=128, kernel_size=3), # n, 128, 32 nn.Conv1d(in_channels=128, out_channels=128, kernel_size=32, stride=1, padding=0), # FC # n, 128, 1 nn.Flatten(start_dim=1, end_dim=-1), # n, 128 ) </code></pre> <p>conv decoder:</p> <pre><code>self._decoder = nn.Sequential( 
nn.Unflatten(dim=1, unflattened_size=(128, 1)), # 1 Upsample1DBlock(in_channels=128, out_channels=64, factor=4), # 4 Upsample1DBlock(in_channels=64, out_channels=32, factor=4), # 16 Upsample1DBlock(in_channels=32, out_channels=16, factor=4), # 64 Upsample1DBlock(in_channels=16, out_channels=8, factor=4), # 256 Upsample1DBlock(in_channels=8, out_channels=1, factor=4), # 1024 nn.ReLU(True), nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1), nn.ReLU(True), nn.Flatten(start_dim=1, end_dim=-1), nn.Linear(1024, 1024) ) </code></pre> <p>FC decoder:</p> <pre><code>self._decoder = nn.Sequential( nn.Linear(128, 256), nn.ReLU(True), nn.Linear(256, 512), nn.ReLU(True), nn.Linear(512, 1024), nn.ReLU(True), nn.Flatten(start_dim=1, end_dim=-1), nn.Linear(1024, 1024) ) </code></pre> <hr /> <p>Another observation is that when the batch size increases more, to say, 16, the FC decoder also starts to fail.</p> <p>In the image, 4 samples of a 16 sample batch I am trying to overfit</p> <p><a href="https://i.stack.imgur.com/jE1Pj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jE1Pj.png" alt="fc16Samples" /></a></p> <hr /> <p>What could be wrong with the conv decoder?</p> <p>How to debug this or make the conv decoder work?</p>
2021-01-25 10:25:40.133000+00:00
2021-01-25 16:03:55.457000+00:00
2021-01-25 10:47:59.340000+00:00
python|deep-learning|pytorch|conv-neural-network|autoencoder
['https://datascience.stackexchange.com/questions/5706/what-is-the-dying-relu-problem-in-neural-networks', 'https://datascience.stackexchange.com/questions/5706/what-is-the-dying-relu-problem-in-neural-networks', 'https://theaisummer.com/skip-connections/', 'https://arxiv.org/abs/1908.08681', 'https://i.stack.imgur.com/muhl5.png', 'https://i.stack.imgur.com/Ve7ww.png']
6
38,476,945
<p>No. You can set the length of the vector freely.</p> <p>Then, what is the vector?</p> <p>It is a distributed representation of the meaning of the word.</p> <p>I don't understand exactly how it gets trained, but the trained vectors behave roughly like this.</p> <p>If one word has the vector representation</p> <p>[0.2 0.6 0.2]</p> <p>it is closer to [0.2 0.7 0.2] than to [0.7 0.2 0.5].</p> <p>Here is another example.</p> <p>CRY [0.5 0.7 0.2]</p> <p>HAPPY [-0.4 0.3 0.1]</p> <p>SAD [0.4 0.6 0.2]</p> <p>'CRY' is closer to 'SAD' than to 'HAPPY', because the training methods (CBOW, skip-gram, etc.) move the vectors closer together when the meanings (or syntactic positions) of the words are similar.</p> <p>In practice, the accuracy depends on many things: the choice of method matters, and so does having a large amount of good data (corpora).</p> <p>If you want to check the similarity of some words, you first build the word vectors and then compute the cosine similarity between them.</p> <p>The paper (<a href="https://arxiv.org/pdf/1301.3781.pdf" rel="nofollow">https://arxiv.org/pdf/1301.3781.pdf</a>) explains several methods and lists their accuracies.</p> <p>The C code of the word2vec program (<a href="https://code.google.com/archive/p/word2vec/" rel="nofollow">https://code.google.com/archive/p/word2vec/</a>) is also worth reading. It implements CBOW (Continuous Bag-Of-Words) and skip-gram.</p> <p>PS: please comment if you still have questions.</p>
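<p>As a small illustration of the cosine-similarity check described in this answer (the three vectors are the made-up examples from the answer, not real embeddings):</p> <pre><code>import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

cry   = [0.5, 0.7, 0.2]
happy = [-0.4, 0.3, 0.1]
sad   = [0.4, 0.6, 0.2]

print(cosine_similarity(cry, sad))    # close to 1.0, so 'CRY' and 'SAD' are similar
print(cosine_similarity(cry, happy))  # much smaller, so 'CRY' and 'HAPPY' are not
</code></pre>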
2016-07-20 09:08:10.633000+00:00
2016-07-20 09:08:10.633000+00:00
null
null
34,363,250
<p>My question is two-fold, but hopefully not too complicated. And both parts specifically pertain to the Skip-Gram model in Word2Vec:</p> <ul> <li><p>The first part is about structure: as far as I understand it, the Skip-Gram model is based on one neural network with one input weight matrix <strong>W</strong>, one hidden layer of size N, and C output weight matrices <strong>W'</strong> each used to produce one of the C output vectors. Is this correct?</p></li> <li><p>The second part is about the output vectors: as far as I understand it, each output vector is of size V and is a result of a Softmax function. Each output vector <em>node</em> corresponds to the index of a word in the vocabulary, and the value of each node is the probability that the corresponding word occurs at that context location (for a given input word). The target output vectors are not, however, one-hot encoded, even if the training instances are. Is this correct?</p></li> </ul> <p>The way I imagine it is something along the following lines (made-up example):</p> <p>Assuming the vocabulary ['quick', 'fox', 'jumped', 'lazy', 'dog'] and a context of C=1, and assuming that for the input word 'jumped' I see the two output vectors looking like this:</p> <p>[0.2 <strong>0.6</strong> 0.01 0.1 0.09]</p> <p>[0.2 0.2 0.01 0.16 <strong>0.43</strong>]</p> <p>I would interpret this as 'fox' being the most likely word to show up before 'jumped' (p=0.6), and 'dog' being the most likely to show up after it (p=0.43).</p> <p>Do I have this right? Or am I completely off? Any help is appreciated.</p>
2015-12-18 20:12:08.627000+00:00
2016-07-20 13:48:28.567000+00:00
2016-02-15 10:16:37.420000+00:00
vector|machine-learning|nlp|word2vec
['https://arxiv.org/pdf/1301.3781.pdf', 'https://code.google.com/archive/p/word2vec/']
2
38,483,158
<p>This is my first answer on SO, so here it goes.</p> <p>Your understanding of both parts seems to be correct, according to this paper:</p> <p><a href="http://arxiv.org/abs/1411.2738" rel="noreferrer">http://arxiv.org/abs/1411.2738</a></p> <p>The paper explains word2vec in detail while keeping it very simple - it's worth a read for a thorough understanding of the neural net architecture used in word2vec.</p> <ul> <li>The Skip-Gram structure does use a single neural net, with the one-hot encoded target word as input and the one-hot encoded context words as <strong>expected output</strong>. After the neural net is trained on the text corpus, the input weight matrix <strong>W</strong> gives the input-vector representations of the words in the corpus, and the output weight matrix <strong>W'</strong>, which is shared across all <strong>C</strong> outputs (the "output vectors" in the terminology of the question, though I avoid that term here to prevent confusion with the output-vector representations used next), gives the output-vector representations of the words. Usually the output-vector representations are ignored and the input-vector representations <strong>W</strong> are used as the word embeddings. On the dimensionality of the matrices: if we assume a vocabulary size of <strong>V</strong> and a hidden layer of size <strong>N</strong>, then <strong>W</strong> is a <strong>(V,N)</strong> matrix, each row representing the input vector of the indexed word in the vocabulary, and <strong>W'</strong> is an <strong>(N,V)</strong> matrix, each column representing the output vector of the indexed word. In this way we get N-dimensional vectors for the words.</li> <li>As you mentioned, each of the outputs (avoiding the term "output vector") is of size <strong>V</strong> and is the result of a softmax function, with each node giving the probability of that word occurring as a context word for the given target word; as a result the outputs are not one-hot encoded. But the expected outputs are indeed one-hot encoded: in the training phase, the error is computed by subtracting the neural-net output from the one-hot encoded vector of the actual word occurring at that context position, and the weights are then updated using gradient descent.</li> </ul> <p>Referring to the example you mentioned, with <strong>C</strong>=1 and a vocabulary of ['quick', 'fox', 'jumped', 'lazy', 'dog']:</p> <p>If the output from the skip-gram is [0.2 0.6 0.01 0.1 0.09] and the correct context word is 'fox', then the error is calculated as</p> <p>[0 1 0 0 0] - [0.2 0.6 0.01 0.1 0.09] = [-0.2 0.4 -0.01 -0.1 -0.09]</p> <p>and the weight matrices are updated to minimize this error.</p> <p>Hope this helps!</p>
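<p>A small numeric sketch of the error computation described above (not part of the original answer; it just replays the made-up numbers from the example):</p> <pre><code>import numpy as np

vocab = ['quick', 'fox', 'jumped', 'lazy', 'dog']

# softmax output of the network for the input word 'jumped' (made-up numbers)
output = np.array([0.2, 0.6, 0.01, 0.1, 0.09])

# one-hot encoding of the word actually observed at that context position: 'fox'
expected = np.zeros(len(vocab))
expected[vocab.index('fox')] = 1.0

error = expected - output
print(error)   # [-0.2   0.4  -0.01 -0.1  -0.09]
</code></pre>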
2016-07-20 13:48:28.567000+00:00
2016-07-20 13:48:28.567000+00:00
null
null
34,363,250
<p>My question is two-fold, but hopefully not too complicated. And both parts specifically pertain to the Skip-Gram model in Word2Vec:</p> <ul> <li><p>The first part is about structure: as far as I understand it, the Skip-Gram model is based on one neural network with one input weight matrix <strong>W</strong>, one hidden layer of size N, and C output weight matrices <strong>W'</strong> each used to produce one of the C output vectors. Is this correct?</p></li> <li><p>The second part is about the output vectors: as far as I understand it, each output vector is of size V and is a result of a Softmax function. Each output vector <em>node</em> corresponds to the index of a word in the vocabulary, and the value of each node is the probability that the corresponding word occurs at that context location (for a given input word). The target output vectors are not, however, one-hot encoded, even if the training instances are. Is this correct?</p></li> </ul> <p>The way I imagine it is something along the following lines (made-up example):</p> <p>Assuming the vocabulary ['quick', 'fox', 'jumped', 'lazy', 'dog'] and a context of C=1, and assuming that for the input word 'jumped' I see the two output vectors looking like this:</p> <p>[0.2 <strong>0.6</strong> 0.01 0.1 0.09]</p> <p>[0.2 0.2 0.01 0.16 <strong>0.43</strong>]</p> <p>I would interpret this as 'fox' being the most likely word to show up before 'jumped' (p=0.6), and 'dog' being the most likely to show up after it (p=0.43).</p> <p>Do I have this right? Or am I completely off? Any help is appreciated.</p>
2015-12-18 20:12:08.627000+00:00
2016-07-20 13:48:28.567000+00:00
2016-02-15 10:16:37.420000+00:00
vector|machine-learning|nlp|word2vec
['http://arxiv.org/abs/1411.2738']
1
58,027,633
<blockquote> <p>Do GANs use class labels in the training process?</p> </blockquote> <p>The author suspected GANs doesn't require labels. This is correct. The discriminator is trained to classify real and fake images. Since we know which images are real and which are generated by the generator, we do not need labels to train the discriminator. The generator is trained to fool the discriminator, which also doesn't require labels. </p> <p>This is one of the most attractive benefits of GANs [1]. Usually, we refer to methods that do not require labels as <em>unsupervised learning</em>. That said, if we had labels, maybe we could train a GAN that uses the labels to improve performance. This idea underlies the follow-up work by [2] who introduced the <em>conditional</em> GAN. </p> <blockquote> <p>If this is the case, then how do researchers propose to use the discriminator network for classification tasks? </p> </blockquote> <p>There seems to be a misunderstanding here. The purpose of the discriminator is NOT to act as a classifier on real data. The purpose of the discriminator is to "tell the generator how to improve its fakes". This is done by using the discriminator as a loss function, which we can backpropagate gradients through if it is a neural network. After training, we usually discard the discriminator. </p> <blockquote> <p>The generator network would also be difficult to use, seeing as we don't know what setting of the input vector 'Z' will result in the required generated image. </p> </blockquote> <p>It seems the underlying reason for posting the question lies here. The input vector 'Z' is chosen such that it follows some distribution, typically a normal distribution. But then what happens if we take 'Z', a random vector with normally distributed entries, and computes 'G(Z)'? We get a new vector which follows a very complicated distribution that depends on G. The entire idea of GANs is to change G such that this new complicated distribution is close to the distribution of our data. This idea is formalized with f-Divergences in [3]. </p> <p>[1] <a href="https://arxiv.org/abs/1406.2661" rel="nofollow noreferrer">https://arxiv.org/abs/1406.2661</a></p> <p>[2] <a href="https://arxiv.org/abs/1411.1784" rel="nofollow noreferrer">https://arxiv.org/abs/1411.1784</a></p> <p>[3] <a href="https://arxiv.org/abs/1606.00709" rel="nofollow noreferrer">https://arxiv.org/abs/1606.00709</a></p>
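<p>A minimal, hypothetical sketch of the point about 'Z' above (the generator architecture and latent size are arbitrary choices for illustration, not from the answer): we sample z from a normal distribution, and G(z) then follows whatever distribution the generator has learned - no class labels are involved at any point.</p> <pre><code>import torch
import torch.nn as nn

latent_dim = 100

# stand-in generator: any network mapping a latent vector to an image-sized output
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

z = torch.randn(16, latent_dim)   # Z ~ N(0, I); no labels anywhere
fake = G(z)                       # G(Z) follows the (learned) complicated distribution
print(fake.shape)                 # torch.Size([16, 784])
</code></pre>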
2019-09-20 11:36:42.890000+00:00
2019-09-20 11:36:42.890000+00:00
null
null
44,919,338
<p>I am trying to understand how a GAN is trained. I believe I understand the adversarial training process. What I can't seem to find information on is this: do GANs use class labels in the training process? My current understanding says no - because the discriminator is simply trying to discriminate between real and fake images, while the generator is trying to create realistic images (but not images of any specific class).</p> <p>If this is the case, then how do researchers propose to use the discriminator network for classification tasks? The network would only be able to perform two-way classification between real and fake images. The generator network would also be difficult to use, seeing as we don't know what setting of the input vector 'Z' will result in the required generated image. </p>
2017-07-05 07:13:31.263000+00:00
2019-09-20 11:36:42.890000+00:00
null
machine-learning|neural-network|classification|multiclass-classification
['https://arxiv.org/abs/1406.2661', 'https://arxiv.org/abs/1411.1784', 'https://arxiv.org/abs/1606.00709']
3
49,311,647
<p><code>initial_accumulator_value</code> is indeed the \delta, and it should not be initialized to <code>0</code>. A value of <code>0.01</code> is more appropriate, but the default of <code>0.1</code> is fine.</p> <p>Btw, if you are in the business of playing with optimizers, the authors of Adagrad have a new optimizer <a href="https://arxiv.org/abs/1802.09568" rel="nofollow noreferrer">https://arxiv.org/abs/1802.09568</a> that performs considerably better than existing ones. Its TF implementation should be released fairly soon, Q2 2018.</p>
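<p>For reference, a tiny TF 1.x sketch of where the value goes (the toy loss and the learning rate are arbitrary; only <code>initial_accumulator_value</code> is the point here):</p> <pre><code>import tensorflow as tf  # TF 1.x API, which is what the question refers to

w = tf.Variable(5.0)
loss = tf.square(w - 3.0)

# the paper's delta is folded into initial_accumulator_value; 0.01 instead of the 0.1 default
optimizer = tf.train.AdagradOptimizer(learning_rate=0.1,
                                      initial_accumulator_value=0.01)
train_op = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(w))   # moves towards 3.0
</code></pre>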
2018-03-16 00:58:36.167000+00:00
2018-03-20 00:38:38.797000+00:00
2018-03-20 00:38:38.797000+00:00
null
49,261,222
<ol> <li><p>According to the original paper, there should be a parameter named δ, but I can't find such an argument in the TensorFlow <em>AdagradOptimizer</em> constructor. </p></li> <li><p>There is an argument named <em>initial_accumulator_value</em>; the paper suggests setting it to 0, but TensorFlow uses 0.1 as the default. Is it proper for me to set it to 0?</p></li> </ol> <p>Thank you so much for your time!</p> <p>Garrett</p>
2018-03-13 16:27:40.043000+00:00
2018-03-20 00:38:38.797000+00:00
null
tensorflow
['https://arxiv.org/abs/1802.09568']
1
65,344,279
<blockquote> <p>I have tried python-louvain for partitioning but that gives inaccurate results[...]</p> </blockquote> <p>The Louvain method is not perfect, and there are no perfect methods; they always depend on what you are trying to achieve (see the conclusion of <a href="https://hal.archives-ouvertes.fr/hal-01976587/document" rel="nofollow noreferrer">this paper</a>).</p> <blockquote> <p>[...] like it partitioned two users into different groups even when their messaging frequency was pretty high.</p> </blockquote> <p>It seems that this user may belong to more than a single community... Maybe try a partitioning method that allows overlapping communities, such as <a href="https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.community.kclique.k_clique_communities.html#networkx.algorithms.community.kclique.k_clique_communities" rel="nofollow noreferrer">K-Clique</a>. This kind of partitioning method allows nodes to belong to more than a single community.</p> <h3>Algorithms:</h3> <p>Here are some alternative algorithms that I’ve found:</p> <ol> <li><p>There are a number of algorithms already included in the networkX package (<a href="https://networkx.org/documentation/stable/reference/algorithms/community.html" rel="nofollow noreferrer">here</a>). I would suggest <code>girvan_newman</code>, but it takes a lot of computation power...</p> </li> <li><p>The CDLib package also has a number of algorithms for networkX (<a href="https://cdlib.readthedocs.io/en/latest/reference/cd_algorithms/node_clustering.html" rel="nofollow noreferrer">here</a>), including some that allow for overlapping communities. Also, check the <a href="https://cdlib.readthedocs.io/en/latest/reference/cd_algorithms/algs/cdlib.algorithms.leiden.html#cdlib.algorithms.leiden" rel="nofollow noreferrer">leiden algorithm</a>; you may prefer it to louvain, as it's supposed to be better (according to their <a href="https://arxiv.org/pdf/1810.08473.pdf" rel="nofollow noreferrer">paper</a>).</p> </li> <li><p>I would still recommend using <a href="https://python-louvain.readthedocs.io/en/latest/api.html" rel="nofollow noreferrer">python-louvain</a> for crisp communities.</p> </li> </ol> <p>Good luck!</p>
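<p>A toy sketch of the overlapping-communities suggestion (the graph, edge weights and clique size are made up for illustration; <code>k_clique_communities</code> ignores the weights):</p> <pre><code>import networkx as nx
from networkx.algorithms.community import k_clique_communities

# toy messaging graph: nodes are users, edge weights are message counts
G = nx.Graph()
G.add_weighted_edges_from([
    ('alice', 'bob', 30), ('bob', 'carol', 25), ('alice', 'carol', 40),
    ('carol', 'dave', 35), ('dave', 'erin', 20), ('erin', 'carol', 15),
])

# overlapping communities built from 3-cliques; 'carol' ends up in both groups
communities = list(k_clique_communities(G, 3))
print(communities)
# e.g. [frozenset({'alice', 'bob', 'carol'}), frozenset({'carol', 'dave', 'erin'})]
</code></pre>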
2020-12-17 16:08:20.350000+00:00
2020-12-24 00:23:49.180000+00:00
2020-12-24 00:23:49.180000+00:00
null
65,343,701
<p>I have built a graph using networkx representing a social network, with people as nodes and the messaging frequencies as the edge weights. I want to cluster this network into different groups of people. The ones who message each other a lot tend to be in the same group. How do I go about this? Which clustering algorithm should I use? Also, how do I visualize the grouping, like a dendrogram tree?</p> <p>Thanks in advance! :D P.S.: I have tried python-louvain for partitioning, but that gives inaccurate results; for example, it partitioned two users into different groups even though their messaging frequency was pretty high.</p>
2020-12-17 15:33:05.273000+00:00
2021-10-26 19:09:51.607000+00:00
null
python|graph|cluster-analysis|networkx
['https://hal.archives-ouvertes.fr/hal-01976587/document', 'https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.community.kclique.k_clique_communities.html#networkx.algorithms.community.kclique.k_clique_communities', 'https://networkx.org/documentation/stable/reference/algorithms/community.html', 'https://cdlib.readthedocs.io/en/latest/reference/cd_algorithms/node_clustering.html', 'https://cdlib.readthedocs.io/en/latest/reference/cd_algorithms/algs/cdlib.algorithms.leiden.html#cdlib.algorithms.leiden', 'https://arxiv.org/pdf/1810.08473.pdf', 'https://python-louvain.readthedocs.io/en/latest/api.html']
7
64,824,719
<p>This would come under &quot;Action Recognition&quot;. I think it should be able to handle your requirement, and you need not find the key frames.</p> <p>Torchvision has some pre-trained models which you can use directly in PyTorch, or you can fine-tune them with very little data. Look for &quot;Video classification&quot; models in this <a href="https://pytorch.org/docs/stable/torchvision/models.html" rel="nofollow noreferrer">Link</a>.</p> <p>I would suggest going for R(2+1)D (<a href="https://arxiv.org/abs/1711.11248" rel="nofollow noreferrer">paper Link</a>).</p> <p>It is able to identify sports actions, gesture actions and sign language.</p>
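<p>A short sketch of loading the pre-trained R(2+1)D model from torchvision (the clip shape and the fine-tuning line are illustrative assumptions, not from the answer):</p> <pre><code>import torch
import torch.nn as nn
import torchvision

# R(2+1)D-18, pre-trained on Kinetics-400; clips are (batch, channels, frames, height, width)
model = torchvision.models.video.r2plus1d_18(pretrained=True)
model.eval()

clip = torch.randn(1, 3, 16, 112, 112)   # one 16-frame RGB clip at 112x112
with torch.no_grad():
    logits = model(clip)
print(logits.shape)   # torch.Size([1, 400]), one score per Kinetics action class

# for fine-tuning on your own micro-expression classes, replace the classifier head:
model.fc = nn.Linear(model.fc.in_features, 5)   # e.g. 5 expression classes (assumed)
</code></pre>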
2020-11-13 16:44:07.747000+00:00
2020-11-13 16:44:07.747000+00:00
null
null
64,821,728
<p>I am starting a project with the task of recognizing micro expressions in a human face. However, the first task that I formulated is to get the key frames in a 10-second video that have the most relation to the predicted expression. For example, raising your eyebrows may represent surprise, but the raising activity may occur in only, say, 10 frames, and those 10 frames represent the micro expression for surprise. Any guides or research papers you can direct me to would be very helpful. I was planning to use some form of 3D-CNN, but I also welcome more efficient ways to do this, as 3D CNNs are quite computationally expensive.</p>
2020-11-13 13:24:36.270000+00:00
2020-11-13 16:44:07.747000+00:00
null
python|deep-learning|computer-vision|conv-neural-network
['https://pytorch.org/docs/stable/torchvision/models.html', 'https://arxiv.org/abs/1711.11248']
2
72,422,751
<p>Read <em>Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He and Piotr Dollár</em> <a href="https://arxiv.org/abs/1708.02002" rel="nofollow noreferrer"><strong>Focal Loss for Dense Object Detection</strong></a> (ICCV 2017). They discuss at length the shortcomings of CE loss when classes are unbalanced and argue (quite compellingly) that WCE simply does not address this limitation of CE.</p> <p>CE loss never goes to zero: it always has non-zero gradients even if the prediction is perfect. CE strives to increase the <em>margin</em> between the different classes. As a result, when there is an imbalance between classes, CE will put an equal effort into being &quot;more certain&quot; about the dominant class as well as making fewer mistakes on the minority class. Putting weights on the CE will not make a fundamental change to this behavior.<br /> In contrast, what you actually want from a loss function, in this case, is to ignore samples that you already predict correctly, and make an effort to correct wrong predictions. This is usually achieved via hard-negative mining or Focal loss.</p>
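<p>For reference, a minimal sketch of the binary focal loss from that paper (gamma=2 and alpha=0.25 are the paper's defaults; the sample tensors are made up):</p> <pre><code>import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss on raw logits; the (1 - p_t)**gamma factor down-weights easy examples."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

logits  = torch.tensor([3.0, -2.0, 0.1])   # raw scores
targets = torch.tensor([1.0,  0.0, 1.0])   # binary labels
print(focal_loss(logits, targets))          # easy, confident samples contribute very little
</code></pre>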
2022-05-29 10:09:32.907000+00:00
2022-05-29 10:09:32.907000+00:00
null
null
72,416,581
<p>Weighted Cross-Entropy (WCE) helps to handle an imbalanced dataset, and Cityscapes is quite imbalanced, as seen below:</p> <p><a href="https://i.stack.imgur.com/ciUzX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ciUzX.png" alt="enter image description here" /></a></p> <p>If we check the <a href="https://paperswithcode.com/sota/semantic-segmentation-on-cityscapes" rel="nofollow noreferrer">best benchmarks</a> on this dataset, most of the works use bare CE as a loss function. I don't understand whether there are any particular reasons why WCE would lead to a worse result for <strong>semantic segmentation</strong> tasks on the <strong>mIoU</strong> evaluation.</p> <p>I'm especially asking because I'm working with an even more unbalanced dataset (multiple minority classes at a ratio of 1:1000 to the majority classes) and was very surprised when bare CE outperformed WCE on the mIoU metric.</p> <p>So far I have found that WCE can yield many false positives from minority classes, but beyond that, would there be other reasons for it?</p>
2022-05-28 14:43:42.127000+00:00
2022-05-29 16:21:50.397000+00:00
2022-05-29 16:21:50.397000+00:00
image-segmentation|semantic-segmentation
['https://arxiv.org/abs/1708.02002']
1
58,192,883
<p>According to <a href="https://tsfresh.readthedocs.io/en/latest/text/feature_filtering.html" rel="nofollow noreferrer">that page</a> in their documentation, what they do is:</p> <ol> <li>they extract a whole set of features</li> <li>they individually test the different features for significance (in a supervised setting, so the test is something like "is this feature useful to predict that output?") and keep the most significant ones using a procedure called the Benjamini-Yekutieli procedure</li> </ol> <p>The references they provide should be of interest:</p> <p>[1] Christ, M., Kempa-Liehr, A.W. and Feindt, M. (2016). Distributed and parallel time series feature extraction for industrial big data applications. ArXiv e-prints: 1610.07717 URL: <a href="http://adsabs.harvard.edu/abs/2016arXiv161007717C" rel="nofollow noreferrer">http://adsabs.harvard.edu/abs/2016arXiv161007717C</a></p> <p>[2] Benjamini, Y. and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of statistics, 1165–1188</p> <p>where [1] is the paper describing <code>tsfresh</code> and [2] is the reference for the multiple testing procedure (called Benjamini-Yekutieli procedure above).</p>
2019-10-01 22:08:53.083000+00:00
2019-10-01 22:08:53.083000+00:00
null
null
58,192,180
<p>I recently started using the <code>tsfresh</code> library to extract features from time-series data.</p> <p>It's very cool that I can get the bag of features in a few lines of code, but I have doubts about the logic behind the <code>select_features</code> method. I looked into the official documentation and googled it, but I couldn't find which algorithm is used for this. I want to know how it works, so that I can decide what to do in the feature selection phase after data processing in <code>tsfresh</code>.</p>
2019-10-01 20:58:07.507000+00:00
2019-10-01 22:08:53.083000+00:00
null
python|time-series|feature-extraction|feature-selection
['https://tsfresh.readthedocs.io/en/latest/text/feature_filtering.html', 'http://adsabs.harvard.edu/abs/2016arXiv161007717C']
2