Dataset schema (column: dtype, value or length range):
a_id: int64 (7.84k to 73.8M)
a_body: string (lengths 61 to 33k)
a_creation_date: string (lengths 25 to 32)
a_last_activity_date: string (lengths 25 to 32)
a_last_edit_date: string (lengths 25 to 32)
a_tags: float64
q_id: int64 (826 to 73.8M)
q_body: string (lengths 61 to 29.9k)
q_creation_date: string (lengths 25 to 32)
q_last_activity_date: string (lengths 25 to 32)
q_last_edit_date: string (lengths 25 to 32)
q_tags: string (lengths 1 to 103)
_arxiv_links: string (lengths 2 to 6.69k)
_n_arxiv_links: int64 (0 to 94)
66,118,070
<p>For getting the visible text with BeautifulSoup, there is already this answer: <a href="https://stackoverflow.com/questions/1936466/beautifulsoup-grab-visible-webpage-text">BeautifulSoup Grab Visible Webpage Text</a></p> <p>Once you get your visible text, if you want to extract &quot;names&quot; (I'm assuming that by names you mean &quot;nouns&quot;), you can check the nltk package (or TextBlob) in this other answer: <a href="https://stackoverflow.com/questions/33587667/extracting-all-nouns-from-a-text-file-using-nltk">Extracting all Nouns from a text file using nltk</a></p> <p>Once you apply both, you can ingest the outputs into a pandas DataFrame.</p> <p><strong>Note</strong>: Please notice that extracting the visible text from HTML is still an open problem. These two papers highlight the problem far better than I can, and both use machine learning techniques: <a href="https://arxiv.org/abs/1801.02607" rel="nofollow noreferrer">https://arxiv.org/abs/1801.02607</a>, <a href="https://dl.acm.org/doi/abs/10.1145/3366424.3383547" rel="nofollow noreferrer">https://dl.acm.org/doi/abs/10.1145/3366424.3383547</a>, along with their respective GitHub repositories: <a href="https://github.com/dalab/web2text" rel="nofollow noreferrer">https://github.com/dalab/web2text</a>, <a href="https://github.com/mrjleo/boilernet" rel="nofollow noreferrer">https://github.com/mrjleo/boilernet</a></p>
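<p>For illustration, a minimal sketch that combines the two linked answers; the tag filter, example URL and DataFrame layout are my own assumptions, not a fixed recipe:</p>
<pre><code>import requests
import pandas as pd
import nltk
from bs4 import BeautifulSoup

nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

def visible_text(html):
    soup = BeautifulSoup(html, 'html.parser')
    # drop elements that never render as visible text
    for tag in soup(['script', 'style', 'head', 'title', 'meta']):
        tag.decompose()
    return ' '.join(soup.stripped_strings)

def nouns(text):
    tokens = nltk.word_tokenize(text)
    return [word for word, pos in nltk.pos_tag(tokens) if pos.startswith('NN')]

html = requests.get('https://www.nhtsa.gov/winter-driving-safety').text
df = pd.DataFrame({'noun': nouns(visible_text(html))})
print(df['noun'].value_counts().head())  # how often each noun is mentioned
</code></pre>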
2021-02-09 11:24:51.543000+00:00
2021-02-09 11:30:28.710000+00:00
2021-02-09 11:30:28.710000+00:00
null
66,118,029
<p>I have a list of different URLs which I would like to scrape the text from using Python. So far I've managed to build a script that returns URLs based on a Google Search with keywords, however I would now like to scrape the content of these URLs. The problem is that I'm now scraping the ENTIRE website including the layout/style info, while I only would like to scrape the 'visible text'. Ultimately, my goal is to scrape for names of all these urls, and store them in a pandas DataFrame. Perhaps even include how often certain names are mentioned, but that is for later. Below is a rather simple start of my code so far:</p> <pre><code>from urllib.request import Request, urlopen from bs4 import BeautifulSoup import requests from time import sleep from random import randint import spacy import en_core_web_sm import pandas as pd url_list = [&quot;https://www.nhtsa.gov/winter-driving-safety&quot;, &quot;https://www.safetravelusa.com/&quot;, &quot;https://www.theatlantic.com/business/archive/2014/01/how-2-inches-of-snow-created-a-traffic-nightmare-in-atlanta/283434/&quot;, &quot;https://www.wsdot.com/traffic/passes/stevens/&quot;] df = pd.DataFrame(url_list, columns = ['url']) df_Names = [] # load english language model nlp = en_core_web_sm.load() # find Names in text def spacy_entity(df): df1 = nlp(df) df2 = [[w.text,w.label_] for w in df1.ents] return df2 for index, url in df.iterrows(): print(index) print(url) sleep(randint(2,5)) # print(page) req = Request(url[0], headers={&quot;User-Agent&quot;: 'Mozilla/5.0'}) webpage = urlopen(req).read() soup = BeautifulSoup(webpage, 'html5lib').get_text() df_Names.append(spacy_entity(soup)) df[&quot;Names&quot;] = df_Names </code></pre>
2021-02-09 11:21:33.033000+00:00
2021-02-09 11:30:28.710000+00:00
null
python|dataframe|web-scraping|beautifulsoup|spacy
['https://stackoverflow.com/questions/1936466/beautifulsoup-grab-visible-webpage-text', 'https://stackoverflow.com/questions/33587667/extracting-all-nouns-from-a-text-file-using-nltk', 'https://arxiv.org/abs/1801.02607', 'https://dl.acm.org/doi/abs/10.1145/3366424.3383547', 'https://github.com/dalab/web2text', 'https://github.com/mrjleo/boilernet']
6
41,184,687
<p>Neural networks (including CNNs) are models with thousands of parameters which we try to optimize with gradient descent. Those models are able to fit a lot of different functions by having a non-linearity φ at their nodes. Without a non-linear activation function, the network collapses to a linear function overall. This means we need the non-linearity for most interesting problems.</p> <p>Common choices for φ are the logistic function, tanh or ReLU. All of them have their most interesting region around 0. This is where the gradient is either big enough to learn quickly or, in the case of ReLU, where there is any non-linearity at all. Weight initialization schemes like <a href="http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf" rel="nofollow noreferrer">Glorot initialization</a> try to make the network start at a good point for the optimization. Other techniques like <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">Batch Normalization</a> also keep the mean of the nodes' input around 0.</p> <p>So you compute (and subtract) the mean of the image so that the first computing nodes get data which "behaves well". It has a mean of 0, and the intuition is that this helps the optimization process.</p> <p>In theory, a network should be able to "subtract" the mean by itself. So if you train long enough, this should not matter too much. However, depending on the activation function, "long enough" can be important.</p>
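<p>A minimal sketch of the mean subtraction itself, in plain NumPy (the array shapes are placeholders standing in for an image training set):</p>
<pre><code>import numpy as np

# stand-in for a training set of N images, shape (N, H, W, C)
images = np.random.rand(100, 32, 32, 3).astype(np.float32)

mean_image = images.mean(axis=0)   # per-pixel mean, shape (H, W, C)
centered = images - mean_image     # inputs now have (roughly) zero mean
print(centered.mean())             # close to 0
</code></pre>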
2016-12-16 12:42:17.630000+00:00
2016-12-16 12:42:17.630000+00:00
null
null
41,036,859
<p>When I use caffe for image classification, it often computes the image mean. Why is that the case?</p> <p>Someone said that it can improve the accuracy, but I don't understand why this should be the case.</p>
2016-12-08 10:13:29.783000+00:00
2016-12-16 12:42:17.630000+00:00
2016-12-16 12:34:24.937000+00:00
image|optimization|classification|deep-learning|convolution
['http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf', 'https://arxiv.org/abs/1502.03167']
2
3,630,532
<p>Any regular expression can be linked to a DFA - you can minimize the DFA, and since the minimal form is unique, you can decide whether two expressions are equivalent. Dani Cricco pointed out the Hopcroft O(n log n) minimization algorithm. There is another improved algorithm by Hopcroft and Karp which tests the equivalence of two DFAs in almost linear time.</p> <p>For a good survey of the matter and an interesting approach to this, I recommend the paper <a href="http://arxiv.org/pdf/0907.5058" rel="noreferrer">Testing the Equivalence of Regular Languages</a>, from arXiv.</p> <p>Later edit: if you are interested in inclusion rather than equivalence of regular expressions, I have come across a paper that might be of interest: <a href="http://www.ii.uib.no/~dagh/reinclusionBORA.pdf" rel="noreferrer">Inclusion Problem for Regular Expressions</a> - I have only skimmed through it, but it seems to contain a polynomial-time algorithm for the problem.</p>
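<p>A self-contained sketch of the underlying decision procedure: explore the product automaton of two DFAs and look for a reachable state pair where exactly one side accepts. The two example DFAs are made up for illustration.</p>
<pre><code>from collections import deque

def equivalent(dfa1, dfa2, alphabet):
    """Each DFA is (start_state, transitions {(state, symbol): state}, accepting_states)."""
    start1, delta1, accept1 = dfa1
    start2, delta2, accept2 = dfa2
    seen = {(start1, start2)}
    queue = deque(seen)
    while queue:
        s1, s2 = queue.popleft()
        if (s1 in accept1) != (s2 in accept2):
            return False  # some string is accepted by one DFA but not the other
        for a in alphabet:
            nxt = (delta1[(s1, a)], delta2[(s2, a)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# two differently written DFAs for 'even number of a's' over {a, b}
A = ('even', {('even', 'a'): 'odd', ('odd', 'a'): 'even',
              ('even', 'b'): 'even', ('odd', 'b'): 'odd'}, {'even'})
B = (0, {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, {0})
print(equivalent(A, B, 'ab'))  # True
</code></pre>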
2010-09-02 19:31:59.650000+00:00
2010-09-02 19:55:24.367000+00:00
2010-09-02 19:55:24.367000+00:00
null
3,630,203
<p>Let's say we have regular expressions:</p> <ul> <li>Hello W.*rld</li> <li>Hello World</li> <li>.* World</li> <li>.* W.*</li> </ul> <p>I would like to minimize the number of regexes required to match arbitrary input.</p> <p>To do that, I need to find if one regular expression matches any input matched by another expression. Is that possible?</p> <p>Billy3</p>
2010-09-02 18:48:48.780000+00:00
2015-11-30 11:58:22.947000+00:00
2010-09-02 20:04:44.953000+00:00
regex|computer-science|theory
['http://arxiv.org/pdf/0907.5058', 'http://www.ii.uib.no/~dagh/reinclusionBORA.pdf']
2
56,037,990
<p>According to <a href="https://arxiv.org/ftp/arxiv/papers/1803/1803.10195.pdf" rel="nofollow noreferrer">What we talk about when we talk about monads</a>, the question &quot;What is a monad?&quot; is the wrong one to ask:</p> <blockquote> <p>The short answer to the question &quot;What is a monad?&quot; is that it is a monoid in the category of endofunctors or that it is a generic data type equipped with two operations that satisfy certain laws. This is correct, but it does not reveal an important bigger picture. This is because the question is wrong. In this paper, we aim to answer the right question, which is &quot;What do authors really say when they talk about monads?&quot;</p> </blockquote> <p>While that paper does not directly answer what a monad is, it helps in understanding what people with different backgrounds mean when they talk about monads, and why.</p>
2019-05-08 09:54:51.823000+00:00
2021-01-19 21:27:56.380000+00:00
2021-01-19 21:27:56.380000+00:00
null
44,965
<p>Having briefly looked at Haskell recently, what would be a <em>brief, succinct, practical</em> explanation as to what a monad essentially is?</p> <p>I have found most explanations I've come across to be fairly inaccessible and lacking in practical detail.</p>
2008-09-04 23:26:44.767000+00:00
2022-05-16 19:51:35.730000+00:00
2015-08-28 17:05:19.850000+00:00
haskell|functional-programming|monads|terminology
['https://arxiv.org/ftp/arxiv/papers/1803/1803.10195.pdf']
1
57,730,583
<p>It depends on the method you're using. Usually the methods search for an architecture for some number of epochs, and at the end you train the found architecture from scratch. The main architecture search algorithms search for cells, because iterating over whole architectures is computationally far too expensive. Cells are the dynamic part of the architecture, while the rest of it is a static backbone. NAS (Neural Architecture Search) searches for cells and replaces them in the whole architecture. Usually NAS searches for 2 kinds of cells: normal cells, which keep the output feature map the same size as the input, and reduction cells, which reduce its size. Here you can see the most popular methods: <a href="https://arxiv.org/abs/1808.05377" rel="nofollow noreferrer">https://arxiv.org/abs/1808.05377</a>.</p>
2019-08-30 16:53:17.067000+00:00
2019-08-30 16:53:17.067000+00:00
null
null
57,194,339
<p>After training, what should be the output of the neural architecture search? </p> <p>Is that the best one ever during the training or the last architecture produced by the model?</p>
2019-07-25 04:28:31.747000+00:00
2019-08-30 16:53:17.067000+00:00
2019-07-25 06:17:59.777000+00:00
automl
['https://arxiv.org/abs/1808.05377']
1
44,678,700
<p>You are confusing two different papers. The quote you show <strong>does not</strong> come from the paper you mention. The cited paper:</p> <p><a href="https://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.011047" rel="nofollow noreferrer">https://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.011047</a></p> <p>explains exactly your question, i.e. how the most appropriate number of groups is determined, using minimum description length. You can also read a more recent introduction to Bayesian inference of the stochastic block model, which deals with this issue at length:</p> <p><a href="https://arxiv.org/abs/1705.10225" rel="nofollow noreferrer">https://arxiv.org/abs/1705.10225</a></p>
2017-06-21 14:17:31.443000+00:00
2017-07-12 17:16:41.677000+00:00
2017-07-12 17:16:41.677000+00:00
null
44,111,926
<p>The <a href="https://graph-tool.skewed.de/static/doc/inference.html#graph_tool.inference.minimize_nested_blockmodel_dl" rel="nofollow noreferrer">documentation</a> of <code>minimize_blockmodel_dl</code> says </p> <blockquote> <p>See <a href="https://graph-tool.skewed.de/static/doc/inference.html#id57" rel="nofollow noreferrer">peixoto-hierarchical-2014</a> for details on the algorithm.</p> </blockquote> <p>However, the paper explicitly states</p> <blockquote> <p>However, in order to perform model selection, one first needs to find optimal partitions of the network for given values of B, which is the subproblem which we consider in detail in this work. Therefore, in the remainder of this paper we will assume that the value of B is a fixed parameter, unless otherwise stated, but the reader should be aware that this value itself can be determined at a later step via model selection, as described, e.g., in Refs. [19,26].</p> </blockquote> <p>Hence, how exactly do <code>minimize_blockmodel_dl</code> and variants decide <code>B</code>? Ultimately, I'd be interested in plotting the implied likelihoods for different values of <code>B</code>, but would first like to see what the algorithm has built in by default - Bayesian model selection?</p>
2017-05-22 11:38:52.623000+00:00
2017-07-12 17:16:41.677000+00:00
null
python|graph-tool
['https://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.011047', 'https://arxiv.org/abs/1705.10225']
2
62,811,462
<p>This is simply how <code>Mask-RCNN</code> works, and it is a known side effect; there is nothing you can do implementation-wise to make it not appear. <a href="https://arxiv.org/pdf/1912.08193.pdf" rel="nofollow noreferrer">PointRend</a> discusses the problem (proving that it's not just you) and also proposes its own algorithm (an extension to <code>Mask-RCNN</code>) to solve it. In the image below you can see a figure from that paper. On the top left they ran Mask-RCNN on a 28x28 image, in which you can see the staircasing as well. The other images relate to how they solve it.</p> <p>The bad news is, of course, that it's not easy to just bolt the PointRend code onto Mask-RCNN; at least, I don't know of any great implementations. The PointRend <a href="https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend" rel="nofollow noreferrer">github</a> itself also doesn't allow for any retraining on your own data.</p> <p><a href="https://i.stack.imgur.com/N7kEe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N7kEe.png" alt="this" /></a></p> <p>EDIT:</p> <p>For postprocessing, there are a lot of approaches you can take. How about this one:</p> <p><a href="https://i.stack.imgur.com/7E3Cz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7E3Cz.png" alt="enter image description here" /></a></p>
2020-07-09 09:14:54.803000+00:00
2020-07-09 14:44:52.050000+00:00
2020-07-09 14:44:52.050000+00:00
null
62,810,854
<p>I've been using matterport's Mask R-CNN to train on a custom dataset. However, there seem to be some parameters that i failed to correctly define because on practically all of the images, the bottom or top of the object's mask is cut off: <a href="https://i.stack.imgur.com/YiGbq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YiGbq.png" alt="innacurate prediction: mask is cut-off on the bottom" /></a></p> <p>As you can see, the bounding box is fine since it covers the whole blade, but the mask seems to suddenly stop in a horizontal line on the bottom.</p> <p>On another hand, there is a stair-like effect on masks of larger and curvier objects such as this one (in addition to the bottom and top cut-offs): <a href="https://i.stack.imgur.com/riTf8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/riTf8.png" alt="enter image description here" /></a></p> <ul> <li>The original images are downscaled to <code>IMAGE_MIN_DIM = IMAGE_MAX_DIM = 1024</code> using the &quot;square&quot; mode.</li> <li><code>USE_MINI_MASK</code> is set to true with <code>MINI_MASK_SHAPE = (512, 512)</code> (somehow if i set it off, RAM gets filled and training chrashes).</li> <li><code>RPN_ANCHOR_SCALES = (64, 128, 256, 512, 1024)</code> since the objects occupy a large space of the image.</li> </ul> <p>It doesn't feel like the problem comes from the amount of training. These two predictions come from 6 epochs of 7000 steps per epoch (took around 17 hours). And the problem appears from early stage and persists along all the epochs.</p> <p>Any idea on what changes to make ?</p>
2020-07-09 08:42:22.127000+00:00
2020-07-10 08:40:36.890000+00:00
2020-07-10 08:40:36.890000+00:00
python|tensorflow|keras|image-segmentation
['https://arxiv.org/pdf/1912.08193.pdf', 'https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend', 'https://i.stack.imgur.com/N7kEe.png', 'https://i.stack.imgur.com/7E3Cz.png']
4
47,274,455
<p>Intel's RdRand is a high-quality, cryptographically secure, pseudorandom number generator. There is a detailed description of what it is, how to use it, how it has been used, and how fast it is, given in a paper here (<a href="http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120" rel="nofollow noreferrer">http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120</a>) or a non-paywalled version here (<a href="https://arxiv.org/abs/1707.02212" rel="nofollow noreferrer">https://arxiv.org/abs/1707.02212</a>).</p> <p>I think sections 2.2.1 and 5 have what you are looking for.</p>
2017-11-13 22:13:58.027000+00:00
2017-11-13 22:13:58.027000+00:00
null
null
27,653,736
<p>I've searched the net for quite a while and couldn't find a definitive answer. I want to know the quality of random numbers generated by intel's rdrand instructions. How does it compare to <a href="http://www.idquantique.com/component/content/article.html?id=9" rel="nofollow">IDQ's</a> cards for example? Is it truly random or pseudo random?</p> <p>Thanks</p>
2014-12-26 06:43:30.067000+00:00
2017-11-13 22:13:58.027000+00:00
null
random|intel|rdrand
['http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120', 'https://arxiv.org/abs/1707.02212']
2
54,579,544
<p>This <a href="https://blog.rasa.com/rasa-nlu-in-depth-part-1-intent-classification/" rel="nofollow noreferrer">blog post</a> from Rasa clarifies some aspects.</p> <p>With Rasa you will first train a vectorizer that transforms each document into an <code>N</code>-dimensional vector, where <code>N</code> is the size of your vocabulary. This is exactly what scikit-learn's <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html" rel="nofollow noreferrer">CountVectorizer</a> does.</p> <p>Each intent embedding is instead built as a one-hot vector (or a vector with more <code>1</code>s if you have "mixed" intents). Each of these vectors has the same dimensions as a document embedding, so I guess <code>N</code> may actually be (vocabulary size) + (number of intents).</p> <p>At that point Rasa will train a neural network (default: 2 hidden layers) whose loss function is designed to maximise the similarity between document <code>d</code> and intent <code>i</code> if <code>d</code> is labelled as <code>i</code> in the training set (and to minimize <code>d</code>'s similarity with all the other intent embeddings). The similarity is by default calculated as cosine similarity.</p> <p>Each new, unseen document is embedded by the neural network and its similarity is computed for each of the intents. The intent that is most similar to the new document is returned as the predicted label.</p> <hr> <p><em>Old answer:</em></p> <blockquote> <p>It's not an LSTM. They say their approach is inspired by Facebook's <a href="https://arxiv.org/abs/1709.03856" rel="nofollow noreferrer">StarSpace</a>.</p> <p>I didn't find the paper above very enlightening; however, looking at StarSpace's GitHub repo, the <a href="https://github.com/facebookresearch/StarSpace#tagspace-word--tag-embeddings" rel="nofollow noreferrer">text classification use case</a> is said to have the same setting as their previous work <em>TagSpace</em>.</p> <p>The <a href="https://research.fb.com/wp-content/uploads/2014/09/tagspace-semantic-embeddings-from-hashtags.pdf?" rel="nofollow noreferrer">TagSpace paper</a> is clearer and explains how they use a CNN to embed each document in a space such that its distance to the associated class vector is minimized. Words, documents and classes ("tags") are all embedded in the same <code>d</code>-dimensional space and their distance is measured via cosine similarity or inner product.</p> </blockquote>
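<p>A rough sketch of the scoring step described above: embed a new document, then pick the intent whose embedding is most cosine-similar. The embeddings below are random stand-ins, not what Rasa actually learns.</p>
<pre><code>import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

intents = ['greet', 'goodbye', 'book_flight']
intent_embeddings = {name: np.random.randn(20) for name in intents}  # stand-ins
doc_embedding = np.random.randn(20)  # what the network would output for a new document

scores = {name: cosine(doc_embedding, emb) for name, emb in intent_embeddings.items()}
print(max(scores, key=scores.get))  # predicted intent = highest cosine similarity
</code></pre>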
2019-02-07 18:01:29.883000+00:00
2019-06-24 11:42:26.823000+00:00
2019-06-24 11:42:26.823000+00:00
null
54,014,979
<p>What kind of model is RASA NLU using to extract the entities and intents after word embedding?</p>
2019-01-03 00:39:21.540000+00:00
2019-06-24 11:42:26.823000+00:00
null
neural-network|nlp|rasa-nlu|named-entity-extraction
['https://blog.rasa.com/rasa-nlu-in-depth-part-1-intent-classification/', 'https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html', 'https://arxiv.org/abs/1709.03856', 'https://github.com/facebookresearch/StarSpace#tagspace-word--tag-embeddings', 'https://research.fb.com/wp-content/uploads/2014/09/tagspace-semantic-embeddings-from-hashtags.pdf?']
5
61,365,560
<p>Global Average Pooling has the following advantages over the fully connected final layers paradigm:</p> <ol> <li>The removal of a large number of trainable parameters from the model. Fully connected or dense layers have lots of parameters. A 7 x 7 x 64 CNN output being flattened and fed into a 500 node dense layer yields 1.56 million weights which need to be trained. Removing these layers speeds up the training of your model.</li> <li>The elimination of all these trainable parameters also reduces the tendency of over-fitting, which needs to be managed in fully connected layers by the use of dropout.</li> <li>The authors argue in the <a href="https://arxiv.org/pdf/1312.4400.pdf" rel="noreferrer">original paper</a> that removing the fully connected classification layers forces the feature maps to be more closely related to the classification categories – so that each feature map becomes a kind of “category confidence map”.</li> <li>Finally, the authors also argue that, due to the averaging operation over the feature maps, this makes the model more robust to spatial translations in the data. In other words, as long as the requisite feature is included / or activated in the feature map somewhere, it will still be “picked up” by the averaging operation.</li> </ol>
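<p>A quick way to check the parameter-count claim in point 1, using tf.keras as an example (the 7 x 7 x 64 feature map and the 500-unit dense layer are taken from the text above):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

dense_head = tf.keras.Sequential([
    layers.Flatten(input_shape=(7, 7, 64)),
    layers.Dense(500),  # 7*7*64*500 weights + 500 biases = 1,568,500 parameters
])
gap_head = tf.keras.Sequential([
    layers.GlobalAveragePooling2D(input_shape=(7, 7, 64)),  # no trainable parameters
])
print(dense_head.count_params(), gap_head.count_params())  # ~1.57M vs 0
</code></pre>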
2020-04-22 12:48:37.650000+00:00
2020-04-22 12:48:37.650000+00:00
null
null
58,689,997
<p>Lately, I started a project about classification, using a very shallow ResNet. The model has just 10 conv. layers and then connects a global avg pooling layer before the softmax layer.</p> <p>The performance is as good as I expected --- 93% (yeah, it is ok).</p> <p>However, for some reasons, I need to replace the global avg pooling layer.</p> <p>I have tried the following ways:</p> <p>(Given the input shape of this layer [-1, 128, 1, 32], tensorflow form)</p> <ol> <li><p>Global max pooling layer, but got 85% ACC.</p> </li> <li><p>Exponential moving average, but got 12% (it almost didn't work).</p> <pre><code> split_list = tf.split(input, 128, axis=1) avg_pool = split_list[0] beta = 0.5 for i in range(1, 128): avg_pool = beta*split_list[i] + (1-beta)*avg_pool avg_pool = tf.reshape(avg_pool, [-1,32]) </code></pre> </li> <li><p>Split the input into 4 parts, avg_pool each part, and finally concatenate them, but got 75%.</p> <pre><code> split_shape = [32,32,32,32] split_list = tf.split(input, split_shape, axis=1) for i in range(len(split_shape)): split_list[i] = tf.keras.layers.GlobalMaxPooling2D()(split_list[i]) avg_pool = tf.concat(split_list, axis=1) </code></pre> </li> <li><p>Average over the last channel: [-1, 128, 1, 32] --&gt; [-1, 128]. Didn't work.</p> </li> <li><p>Use a conv. layer with 1 kernel. In this way, the output shape is [-1, 128, 1, 1], but it didn't work, 25% or so.</p> </li> </ol> <p>I am pretty confused about why global average pooling works that well. And is there any other way to replace it?</p>
2019-11-04 08:58:49.337000+00:00
2021-10-20 02:04:55.763000+00:00
2021-10-20 02:04:55.763000+00:00
python|tensorflow|deep-learning|conv-neural-network|resnet
['https://arxiv.org/pdf/1312.4400.pdf']
1
63,867,312
<p>That's because the entire aim is to generate the next token based on the tokens we've seen so far. Take a look at the input into the model when we get our predictions. We're not just feeding the source sequence, but also the target sequence <em>up until our current step</em>. The model inside <code>Models.py</code> looks like:</p> <pre><code>class Transformer(nn.Module): def __init__(self, src_vocab, trg_vocab, d_model, N, heads, dropout): super().__init__() self.encoder = Encoder(src_vocab, d_model, N, heads, dropout) self.decoder = Decoder(trg_vocab, d_model, N, heads, dropout) self.out = nn.Linear(d_model, trg_vocab) def forward(self, src, trg, src_mask, trg_mask): e_outputs = self.encoder(src, src_mask) #print(&quot;DECODER&quot;) d_output = self.decoder(trg, e_outputs, src_mask, trg_mask) output = self.out(d_output) return output </code></pre> <p>So you can see that the <code>forward</code> method receives <code>src</code> and <code>trg</code>, which are each fed into the encoder and decoder. This is a bit easier to grasp if you take a look at the model architecture from <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">the original paper</a>:</p> <p><a href="https://i.stack.imgur.com/CYkA7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CYkA7.png" alt="enter image description here" /></a></p> <p>The &quot;Outputs (shifted right)&quot; corresponds to <code>trg[:, :-1]</code> in the code.</p>
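<p>A tiny illustration of the shift described above, with made-up token ids: the decoder sees the target up to step t and is trained to predict step t+1.</p>
<pre><code>import torch

trg = torch.tensor([[1, 11, 12, 13, 2]])  # [sos, tok1, tok2, tok3, eos]
trg_input = trg[:, :-1]                   # [sos, tok1, tok2, tok3]   (decoder input)
ys = trg[:, 1:]                           # [tok1, tok2, tok3, eos]   (training labels)
print(trg_input)
print(ys)
</code></pre>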
2020-09-13 04:55:40.003000+00:00
2020-09-13 04:55:40.003000+00:00
null
null
63,867,124
<p>I'm trying to understand the code of Transformer (<a href="https://github.com/SamLynnEvans/Transformer" rel="nofollow noreferrer">https://github.com/SamLynnEvans/Transformer</a>).</p> <p>If seeing the train_model function in &quot;train&quot; script, I wonder why need to use the different sequence length of trg_input from trg:</p> <pre><code>trg_input = trg[:, :-1] </code></pre> <p>In this case, the sequence length of trg_input is &quot;seq_len(trg) - 1&quot;. It means that trg is like:</p> <pre><code>&lt;sos&gt; tok1 tok2 tokn &lt;eos&gt; </code></pre> <p>and trg_input is like:</p> <pre><code>&lt;sos&gt; tok1 tok2 tokn (no eos token) </code></pre> <p>Please let me know the reason.</p> <p>Thank you.</p> <p>The related code is like below:</p> <pre><code> for i, batch in enumerate(opt.train): src = batch.src.transpose(0, 1).to('cuda') trg = batch.trg.transpose(0, 1).to('cuda') trg_input = trg[:, :-1] src_mask, trg_mask = create_masks(src, trg_input, opt) preds = model(src, trg_input, src_mask, trg_mask) ys = trg[:, 1:].contiguous().view(-1) opt.optimizer.zero_grad() loss = F.cross_entropy(preds.view(-1, preds.size(-1)), ys, ignore_index=opt.trg_pad) loss.backward() opt.optimizer.step() def create_masks(src, trg, opt): src_mask = (src != opt.src_pad).unsqueeze(-2) if trg is not None: trg_mask = (trg != opt.trg_pad).unsqueeze(-2) size = trg.size(1) # get seq_len for matrix np_mask = nopeak_mask(size, opt) if trg.is_cuda: np_mask.cuda() trg_mask = trg_mask &amp; np_mask else: trg_mask = None return src_mask, trg_mask </code></pre>
2020-09-13 04:22:09.163000+00:00
2020-09-13 04:55:40.003000+00:00
null
nlp|pytorch|mask|transformer-model
['https://arxiv.org/abs/1706.03762', 'https://i.stack.imgur.com/CYkA7.png']
2
26,514,985
<p>Here is a quote from the abstract of a paper which I think would answer your question:</p> <blockquote> <p>In images with uniform contrast distribution of background and foreground like document images, global thresholding is more appropriate. In degraded document images, where considerable background noise or variation in contrast and illumination exists, there exists many pixels that cannot be easily classified as foreground or background. In such cases, binarization with local thresholding is more appropriate.</p> </blockquote> <p>Reference: <a href="http://arxiv.org/pdf/1201.5227v1.pdf" rel="nofollow noreferrer">click me</a></p> <p>If something is unclear please ask for clarification :)</p>
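<p>For a concrete illustration of the two families mentioned in the quote, here is a short OpenCV sketch in Python (the question is MATLAB-based, so treat this purely as a sketch of the idea; the file name and block size are placeholders):</p>
<pre><code>import cv2

img = cv2.imread('document.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# global thresholding, with the level picked automatically by Otsu's method
_, global_bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# local/adaptive thresholding: the level is computed per 35x35 neighbourhood
local_bw = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 35, 10)
</code></pre>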
2014-10-22 18:54:41.093000+00:00
2014-10-22 19:08:33.950000+00:00
2020-06-20 09:12:55.060000+00:00
null
26,514,884
<p>I need to segment an object from the background. As a starting point I assume there is a single object in the image, and my task is to separate it from the background and create a binary image (0 for background and 1 for the object). I have read some Stack Overflow questions and research papers about segmentation using thresholds. I have found two approaches, global thresholding and local/adaptive thresholding, and they apply under different conditions, i.e. a global threshold may be suitable for one case but not for another. My question is: given an image, how can we automatically find the most appropriate method? Is it always safe to use the local threshold approach instead of the global one as a precaution?</p>
2014-10-22 18:49:06.930000+00:00
2014-10-22 19:08:33.950000+00:00
null
matlab|image-processing|image-segmentation|adaptive-threshold
['http://arxiv.org/pdf/1201.5227v1.pdf']
1
54,981,463
<p>Attention, as introduced in <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">Attention Is All You Need</a>, IMHO, is quite similar to what our brains use as the attention mechanism. </p> <p>We have something named LGN in our brains responsible for filtering out unnecessary information for the task at hand. For instance, if I start looking for my keys, my brain is going to pay less attention to objects with a color other than silver or gold (hopefully). Now, I'm not aware of the higher level attention mechanisms of the human brain. However, the one thing that's clear is that information is passing through each layer before the attention and represented in the form of neural activities.</p> <p>When you feed your artificial model representation of the world's current state, information is going to be represented as tensors and, similarly, attention allows you to see what you need to see to make the best decision (similar to finding keys).</p>
2019-03-04 10:38:27.767000+00:00
2019-03-04 17:28:40.040000+00:00
2019-03-04 17:28:40.040000+00:00
null
54,964,953
<p>When reading about the attention mechanism, I am confused by the term attention. Is it the same as attention in its usual, human sense?</p>
2019-03-03 02:08:58.950000+00:00
2019-03-04 17:28:40.040000+00:00
null
deep-learning|attention-model
['https://arxiv.org/abs/1706.03762']
1
38,067,223
<p>When you're training, the desired output at each decoder timestep is fed in as the decoder input for that step. When testing, you do not have the desired output, so the best you can do is sample an output. That sample becomes the input for the next timestep.</p> <p>TL;DR: feed the decoder output at each timestep back in as the input for the next timestep.</p> <p><strong>Edit: some TF code</strong></p> <p>The <strong>basic_rnn_seq2seq</strong> function returns <code>rnn_decoder(decoder_inputs, enc_states[-1], cell)</code>.</p> <p>Let's look at <strong>rnn_decoder</strong>: <code>def rnn_decoder(decoder_inputs, initial_state, cell, loop_function=None, scope=None)</code>. From its documentation:</p> <p><strong>loop_function</strong>: if not None, this function will be applied to i-th output in order to generate i+1-th input, and decoder_inputs will be ignored, except for the first element ("GO" symbol). This can be used for decoding, but also for training to emulate <a href="http://arxiv.org/pdf/1506.03099v2.pdf" rel="nofollow">http://arxiv.org/pdf/1506.03099v2.pdf</a>.</p> <p>During decoding, you need to pass such a <strong>loop_function</strong> (one that maps the previous output to the next input) instead of leaving it as None.</p> <p>I recommend looking at the translate.py file in the TensorFlow seq2seq library to see how this is handled.</p>
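<p>A schematic greedy decoding loop for the idea above, framework-agnostic; <code>decoder_step</code> is a stand-in for whatever runs one decoder timestep in your model, not a real TensorFlow function:</p>
<pre><code>def greedy_decode(decoder_step, state, go_token, eos_token, max_len=50):
    # decoder_step(token, state) runs one timestep and returns (logits, new_state)
    token, outputs = go_token, []
    for _ in range(max_len):
        logits, state = decoder_step(token, state)
        token = int(logits.argmax())   # use the prediction as the next input
        if token == eos_token:
            break
        outputs.append(token)
    return outputs
</code></pre>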
2016-06-28 04:45:22.767000+00:00
2016-06-28 08:32:07.830000+00:00
2016-06-28 08:32:07.830000+00:00
null
38,050,333
<p>I've recently started working with tensorflow so I'm still struggling with basics.</p> <p>I wanted to create simple seq2seq prediction. </p> <ul> <li>Input is list of numbers between 0 and 1. </li> <li>Output is first number from list and the rest of the numbers multiplied by first.</li> </ul> <p>I managed to evaluate model performance and optimize weights. The thing I've been struggling is how to make predictions with trained model.</p> <pre><code> model_outputs, states = seq2seq.basic_rnn_seq2seq(encoder_inputs, decoder_inputs, rnn_cell.BasicLSTMCell(data_point_dim, state_is_tuple=True)) </code></pre> <p>In order to generate model_outputs I need both input and output values for the model, which is good for evaluation but in prediction I only have input values. I'm guessing I need to do something with states but I'm unsure how to transform them into sequence of floats.</p> <p>Full code is available here <a href="https://gist.github.com/anonymous/be405097927758acca158666854600a2">https://gist.github.com/anonymous/be405097927758acca158666854600a2</a></p>
2016-06-27 09:28:20.327000+00:00
2017-05-19 12:31:18.817000+00:00
2016-06-27 09:29:11.823000+00:00
python|tensorflow|lstm
['http://arxiv.org/pdf/1506.03099v2.pdf']
1
68,814,235
<p>I have reimplemented an algorithm which does not depend on MCMC but creates <strong>independent and identically distributed (iid) samples</strong> from the truncated multivariate normal distribution. Having iid samples can be very useful! I used to also use <a href="https://emcee.readthedocs.io/en/stable/" rel="nofollow noreferrer">emcee</a> as described in the answer by Warrick, but for convergence the number of samples needed exploded in higher dimensions, making it impractical for my use case.</p> <p>The algorithm was introduced by <a href="https://arxiv.org/pdf/1603.04166.pdf" rel="nofollow noreferrer">Botev (2016)</a> and uses an accept-reject algorithm based on minimax exponential tilting. It was <a href="https://de.mathworks.com/matlabcentral/fileexchange/53792-truncated-multivariate-normal-generator" rel="nofollow noreferrer">originally implemented in MATLAB</a> but reimplementing it for Python increased the performance significantly compared to running it using the MATLAB engine in Python. It also works well and is fast at higher dimensions.</p> <p>The code is available at: <a href="https://github.com/brunzema/truncated-mvn-sampler" rel="nofollow noreferrer">https://github.com/brunzema/truncated-mvn-sampler</a>.</p> <h2>An Example:</h2> <pre><code>d = 10 # dimensions # random mu and cov mu = np.random.rand(d) cov = 0.5 - np.random.rand(d ** 2).reshape((d, d)) cov = np.triu(cov) cov += cov.T - np.diag(cov.diagonal()) cov = np.dot(cov, cov) # constraints lb = np.zeros_like(mu) - 1 ub = np.ones_like(mu) * np.inf # create truncated normal and sample from it n_samples = 100000 tmvn = TruncatedMVN(mu, cov, lb, ub) samples = tmvn.sample(n_samples) </code></pre> <p>Plotting the first dimension results in:</p> <p><a href="https://i.stack.imgur.com/WxWxq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WxWxq.png" alt="First dimension of truncated MVN" /></a></p> <hr /> <h2>Reference:</h2> <p>Botev, Z. I., (2016), The normal law under linear restrictions: simulation and estimation via minimax tilting, Journal of the Royal Statistical Society Series B, 79, issue 1, p. 125-148</p>
2021-08-17 08:40:48.487000+00:00
2022-03-22 11:25:33.827000+00:00
2022-03-22 11:25:33.827000+00:00
null
20,115,917
<p>I'm trying to automate a process that at some point needs to draw samples from a truncated multivariate normal. That is, it's a normal multivariate normal distribution (i.e. Gaussian) but the variables are constrained to a cuboid. My given inputs are the mean and covariance of the full multivariate normal but I need samples in my box. </p> <p>Up to now, I'd just been rejecting samples outside the box and resampling as necessary, but I'm starting to find that my process sometimes gives me (a) large covariances and (b) means that are close to the edges. These two events conspire against the speed of my system.</p> <p>So what I'd like to do is sample the distribution correctly in the first place. Googling led only to <a href="https://groups.google.com/forum/#!topic/scipy-user/QohTys9U06M" rel="noreferrer">this discussion</a> or the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.truncnorm.html#scipy.stats.truncnorm" rel="noreferrer"><code>truncnorm</code> distribution</a> in <code>scipy.stats</code>. The former is inconclusive and the latter seems to be for one variable. Is there any native multivariate truncated normal? And is it going to be any better than rejecting samples, or should I do something smarter?</p> <p>I'm going to start working on my own solution, which would be to rotate the untruncated Gaussian to it's principal axes (with an SVD decomposition or something), use a product of truncated Gaussians to sample the distribution, then rotate that sample back, and reject/resample as necessary. If the truncated sampling is more efficient, I think this should sample the desired distribution faster.</p>
2013-11-21 08:33:42.020000+00:00
2022-03-22 11:25:33.827000+00:00
null
python|scipy
['https://emcee.readthedocs.io/en/stable/', 'https://arxiv.org/pdf/1603.04166.pdf', 'https://de.mathworks.com/matlabcentral/fileexchange/53792-truncated-multivariate-normal-generator', 'https://github.com/brunzema/truncated-mvn-sampler', 'https://i.stack.imgur.com/WxWxq.png']
5
44,985,808
<p>In some use cases, 307 redirects might be abused by an attacker to learn the victim's credentials.</p> <p>Further information can be found in <strong>section 3.1</strong> of <a href="https://arxiv.org/pdf/1601.01229v2.pdf" rel="nofollow noreferrer">A Comprehensive Formal Security Analysis of OAuth 2.0</a>.</p> <p>The authors of the above paper suggest the following:</p> <blockquote> <p><strong>Fix.</strong> Contrary to the current wording in the OAuth standard, the exact method of the redirect is not an implementation detail but essential for the security of OAuth. In the HTTP standard (<a href="https://www.rfc-editor.org/rfc/rfc7231" rel="nofollow noreferrer">RFC 7231</a>), only the 303 redirect is defined unambigiously to drop the body of an HTTP POST request. All other HTTP redirection status codes, including the most commonly used 302, leave the browser the option to preserve the POST request and the form data. In practice, browsers typically rewrite to a GET request, thereby dropping the form data, except for 307 redirects. Therefore, the OAuth standard should require 303 redirects for the steps mentioned above in order to fix this problem.</p> </blockquote>
2017-07-08 11:38:51.707000+00:00
2018-11-29 04:12:30.883000+00:00
2021-10-07 13:28:41.503000+00:00
null
2,068,418
<p>What's the difference between a <code>302 FOUND</code> and a <code>307 TEMPORARY REDIRECT</code> HTTP response?</p> <p><a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.3" rel="noreferrer">The W3 spec</a> seems to indicate that they're both used for temporary redirects, and neither can be cached unless the response specifically allows it.</p>
2010-01-14 23:53:05.413000+00:00
2021-09-08 21:20:10.973000+00:00
2015-11-02 22:45:23.383000+00:00
http|redirect
['https://arxiv.org/pdf/1601.01229v2.pdf', 'https://www.rfc-editor.org/rfc/rfc7231']
2
63,664,036
<p>I am not sure if this question is suitable in its current form for this forum (Stack Exchange might perhaps be a better fit), but as it is quite relevant for DNN-based speech synthesis pipelines, I think it is a good idea to expand on it a bit.</p> <p>We cannot reconstruct the STFT exactly from a Mel spectrogram. The reason is that the Mel spectrogram is a 'compressed' version of the STFT, obtained by taking frequencies from the Mel scale and applying triangular filters (to the STFT) at these frequencies. In general, we lose information going from STFT to Mel. See this excellent article for a detailed explanation:</p> <p><a href="https://haythamfayek.com/2016/04/21/speech-processing-for-machine-learning.html" rel="nofollow noreferrer">https://haythamfayek.com/2016/04/21/speech-processing-for-machine-learning.html</a></p> <p>Now, to get back to your question - I assume that you are working on speech synthesis in the manner of the Tacotron [1] work. In order to apply Griffin-Lim, as you correctly note, we need the linear spectrogram. The way it is done in the paper is to use a neural network to transform Mel to STFT. They call this a postnet, as it serves as a post-processor after the Mels are predicted.</p> <p>To set up this network, convert the ground-truth (target) audio into Mels, and create a recurrent network (CBHG or anything else) to transform them into STFT equivalents. Minimize the loss between these STFT predictions and the actual STFT, which we can create from the target audio.</p> <p>[1] <a href="https://arxiv.org/abs/1703.10135" rel="nofollow noreferrer">https://arxiv.org/abs/1703.10135</a></p>
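<p>A sketch of the forward direction described above (a Mel spectrogram is just the power STFT multiplied by a triangular Mel filterbank, which is why the mapping loses information and cannot be inverted exactly); the example audio and the filterbank size are placeholders:</p>
<pre><code>import librosa
import numpy as np

x, sr = librosa.load(librosa.ex('trumpet'))
S = np.abs(librosa.stft(x, n_fft=2048)) ** 2                  # power STFT, 1025 frequency bins
mel_fb = librosa.filters.mel(sr=sr, n_fft=2048, n_mels=128)   # 128 triangular filters
mel = mel_fb @ S                                              # compressed to 128 bands
print(S.shape, mel.shape)
</code></pre>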
2020-08-31 02:34:16.043000+00:00
2020-08-31 05:38:07.520000+00:00
2020-08-31 05:38:07.520000+00:00
null
63,663,865
<p>I have generated a melspectrogram in librosa using the following code</p> <pre><code>import os from matplotlib import pyplot as plt import librosa import librosa.display import pylab import numpy as np x, sr = librosa.load('audio/example.wav') mel = librosa.feature.melspectrogram(x,sr) P = librosa.power_to_db(mel, ref=np.max) librosa.display.specshow(P) pylab.savefig(&quot;example.png&quot;, bbox_inches=None, pad_inches=0) </code></pre> <p>As I understand, the spectrogram is simply a visual representation of the STFT matrix for an audio signal. I'm trying to reconstruct the STFT matrix used to generate the spectrogram in order to pass it through the griffin lim function. How should I do this?</p> <p>Generating Spectrogram using STFT data</p> <pre><code>def generate_spectrogram(x, sr): X = librosa.stft(x) Xdb = librosa.amplitude_to_db(abs(X)) fig = plt.figure(figsize=(10, 10), dpi=100, frameon=False) ax = fig.add_axes([0, 0, 1, 1], frameon=False) ax.axis('off') librosa.display.specshow(Xdb, sr=sr, cmap='gray', x_axis='time', y_axis='hz') plt.savefig('example.png', quality=100, bbox_inches=0, pad_inches=0) librosa.cache.clear() </code></pre>
2020-08-31 02:06:26.887000+00:00
2020-08-31 05:38:07.520000+00:00
2020-08-31 04:39:01.147000+00:00
python|audio|signal-processing|spectrogram|librosa
['https://haythamfayek.com/2016/04/21/speech-processing-for-machine-learning.html', 'https://arxiv.org/abs/1703.10135']
2
59,855,459
<p>I had a project similar to this one.<br> As @meowongac stated in his comment, <strong>Unet</strong> showed pretty good results and is easy to build.</p> <p>Look at <a href="https://github.com/zhixuhao/unet" rel="nofollow noreferrer">this repo</a> to implement it. Note that there are prettier ways to write it, especially if you want to make the number of filters and the number of conv blocks parameters. If you want to understand why Unet is a good fit for your case, read <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">its paper</a>. Briefly, it uses a contracting path to capture small-scale features, then an expanding path to recover the spatial layout.</p> <p>On my project Unet was working well, but I wanted a better model, and I came across <a href="https://github.com/xmengli999/H-DenseUNet/blob/master/denseunet.py" rel="nofollow noreferrer">the DenseUNET repo</a>, which adds dense layers to outperform Unet, and in my case it did.</p> <p>Edit: I also tried <a href="https://github.com/ykamikawa/tf-keras-SegNet" rel="nofollow noreferrer">segnet</a>, which showed good results but not as good as Unet; it's even easier to implement.</p> <p>Also, you <strong>shouldn't</strong> look at <strong>accuracy</strong>, since you're using a regression paradigm. Accuracy is only useful in classification with probabilities between 0 and 1.</p>
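<p>A minimal sketch of the U-Net idea referenced above, in tf.keras: one downsampling conv block, one upsampling block, and a skip connection concatenating the two. The filter counts are placeholders, not the linked repo's exact configuration.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(576, 720, 3))
d1 = layers.Conv2D(32, 3, padding='same', activation='relu')(inp)   # contracting path
p1 = layers.MaxPooling2D()(d1)
b = layers.Conv2D(64, 3, padding='same', activation='relu')(p1)
u1 = layers.UpSampling2D()(b)                                        # expanding path
u1 = layers.Concatenate()([u1, d1])                                  # skip connection
out = layers.Conv2D(3, 1, activation='sigmoid')(u1)                  # restored RGB image
model = tf.keras.Model(inp, out)
model.compile(optimizer='adam', loss='mean_absolute_error')
</code></pre>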
2020-01-22 08:28:51.227000+00:00
2020-01-22 08:34:15.823000+00:00
2020-01-22 08:34:15.823000+00:00
null
59,851,310
<p>I have 2 arrays of images, one of a degraded picture, the other one of the same picture but in a clean state. Their <code>shape=(576, 720, 3)</code>, since these images are <code>720*576</code> and have <code>3</code> channels (RGB). I am currently trying to train my model with the degraded picture array as input and the clean picture array as output. It is working perfectly fine; however, I do not precisely know what layers to add to my model, nor how to improve the accuracy I currently have.</p> <p>Here's my current model (which is <strong>NOT</strong> good, I basically just randomly put layers here):</p> <pre class="lang-py prettyprint-override"><code>model = models.Sequential() model.add(layers.Conv2D(32, 5, activation='relu', padding='same', input_shape=(576, 720, 3))) model.add(layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2))) model.add(layers.Conv2D(64, 3, activation='relu', padding='same')) model.add(layers.AveragePooling2D(pool_size=(2, 2))) model.add(layers.Conv2D(3, 3, activation='relu', padding='same')) model.add(layers.UpSampling2D((4, 4))) </code></pre> <p><a href="https://i.stack.imgur.com/DYjwU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DYjwU.png" alt="Model Description" /></a></p> <p>And here's how I compile it:</p> <pre class="lang-py prettyprint-override"><code>model.compile(optimizer='rmsprop', loss='mean_absolute_error', metrics=[&quot;accuracy&quot;]) </code></pre> <p>I currently have 5000 pairs of degraded/clean images, and I manage to get to ~75% accuracy and ~20 loss, but can't improve the model further.</p> <p>What I would like is to understand what I am doing here, since I can't find anything on the Internet apart from image classification, which is not what I am doing. I would like to know what layers can be useful, and why. I know Conv2D is useful since I want to find flaws in images and correct them, and I know LSTM is useful if I'm working with a video, but apart from that, I do not know the correct layer sequence to set up, nor the correct layers.</p> <p>Any help would be much appreciated, thank you.</p>
2020-01-22 01:05:27.753000+00:00
2020-01-22 08:34:15.823000+00:00
2020-06-20 09:12:55.060000+00:00
tensorflow|keras|deep-learning|neural-network|conv-neural-network
['https://github.com/zhixuhao/unet', 'https://arxiv.org/abs/1505.04597', 'https://github.com/xmengli999/H-DenseUNet/blob/master/denseunet.py', 'https://github.com/ykamikawa/tf-keras-SegNet']
4
33,882,357
<p>You can start by reading some papers on sarcasm detection in twitter, e.g. <a href="http://www.aclweb.org/anthology/W10-2914" rel="nofollow">Semi-Supervised Recognition of Sarcastic Sentences in Twitter and Amazon</a>, which uses patterns of content words and high frequency words, or closer to your question <a href="http://www.aclweb.org/anthology/D15-1116" rel="nofollow">Sarcastic or Not: Word Embeddings to Predict the Literal or Sarcastic Meaning of Words</a> which uses <code>word2vec</code>. The latter views the sarcasm detection problem as disambiguation problem of literal and sarcastic meanings of the same word. Perhaps you can employ this approach using the recently published <a href="http://arxiv.org/pdf/1511.06388v1.pdf" rel="nofollow">sense2vec - A Fast and Accurate Method for Word Sense Disambiguation In Neural Word Embeddings</a>.</p> <p>Try to use the techniques used in the papers, and when you encounter a <em>specific</em> problem ask a question with a minimal working example. </p>
2015-11-23 22:58:04.143000+00:00
2015-11-23 22:58:04.143000+00:00
null
null
33,856,296
<p>Im planning on using an NN for sarcasm detection on a number of tweets. Im unsure of how to prepare the word embeddings I will train the NN on. If I tokenize the strings and tag emoticons, capitalisation, user tags, hashtags etc, how do i then combine the resulting strings with word embeddings? do i train the word embeddings on the resulting corpus of tweets?</p>
2015-11-22 15:09:30.307000+00:00
2017-06-02 16:27:19.067000+00:00
null
twitter|nlp|neural-network|artificial-intelligence|preprocessor
['http://www.aclweb.org/anthology/W10-2914', 'http://www.aclweb.org/anthology/D15-1116', 'http://arxiv.org/pdf/1511.06388v1.pdf']
3
62,499,408
<p>I think your learning rate (default for Adam: 0.001) is too high, leading to <code>catastrophic forgetting</code>. <strong>Refer to: How to Fine-Tune BERT for Text Classification?</strong> <a href="https://arxiv.org/pdf/1905.05583.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1905.05583.pdf</a></p> <p>Ideally, the learning rate should be of the order of 1e-5. Try changing the code as follows and it should work:</p> <pre><code>from keras_radam import RAdam model.compile( RAdam(lr=2e-5), loss='binary_crossentropy', metrics=['accuracy'], ) </code></pre>
2020-06-21 13:44:12.890000+00:00
2020-06-21 13:44:12.890000+00:00
null
null
60,732,018
<p>I am trying to use BERT for sentiment analysis but I suspect I am doing something wrong. In my code I am fine tuning bert using <code>bert-for-tf2</code> but after 1 epoch I am getting an accuracy of 42% when a simple GRU model was getting around 73% accuracy. What should I be doing different to effectively use BERT. I suspect I am traning the bert layers from the first batch which may be an issue as the dense layer is randomly initialized. Any advice would be appreciated, Thanks!</p> <pre><code>import bert-for-tf2 #gets imported as bert but relabeled for clarity model_name = "uncased_L-12_H-768_A-12" model_dir = bert.fetch_google_bert_model(model_name, ".models") model_ckpt = os.path.join(model_dir, "bert_model.ckpt") bert_params = bert.params_from_pretrained_ckpt(model_dir) l_bert = bert.BertModelLayer.from_params(bert_params, name="bert") max_seq_len = 100 l_input_ids = tensorflow.keras.layers.Input(shape=(max_seq_len,), dtype='int32') bertLayer = l_bert(l_input_ids) flat = Flatten()(bertLayer) output = Dense(1,activation = 'sigmoid')(flat) model = tensorflow.keras.Model(inputs=l_input_ids, outputs=output) model.build(input_shape=(None, max_seq_len)) bert.load_bert_weights(l_bert, model_ckpt) with open('../preprocessing_scripts/new_train_data.txt', 'r') as f: tweets = f.readlines() with open('../preprocessing_scripts/targets.csv', 'r') as f: targets = f.readlines() max_words = 14000 tokenizer = Tokenizer(num_words=max_words) trainX = tweets[:6000] trainY = targets[:6000] testX = tweets[6000:] testY = tweets[6000:] maxlen = 100 tokenizer.fit_on_texts(trainX) tokenized_version = tokenizer.texts_to_sequences(trainX) tokenized_version = pad_sequences(tokenized_version, maxlen=maxlen)trainY = np.array(trainY,dtype = 'int32') model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy']) history = model.fit(x=tokenized_version, y=trainY, batch_size = 32, epochs=1, validation_split = 0.2) </code></pre>
2020-03-18 00:32:20.570000+00:00
2020-06-21 13:44:12.890000+00:00
2020-03-20 19:46:16.457000+00:00
tensorflow|keras|nlp|text-classification|bert-language-model
['https://arxiv.org/pdf/1905.05583.pdf']
1
63,923,643
<p>Ok, after tinkering around and debugging the forward function I came to the following explanation:</p> <h1>Some Information about the architecture</h1> <p>If you take classes from Andrew Ng or others, you learn not to initialize the weights to the same value, for example &quot;0&quot;. This is nevertheless what the authors of the original FCN paper do for the score layers; they say it doesn't change the performance or yield faster convergence (<a href="https://arxiv.org/abs/1411.4038" rel="nofollow noreferrer">FCN-Paper</a>). That zero initialization of the score layers is why the forward output is completely zero.</p> <h1>My Solution</h1> <p>So for testing purposes I initialize the score tensors in the test module to seeded random values, which I can then test against:</p> <pre class="lang-py prettyprint-override"><code>import unittest import torch from SemanticSegmentation.models.fcn8 import FCN8 class TestFCN8(unittest.TestCase): def setUp(self): self.model = FCN8(8, pretrained=True) torch.manual_seed(0) # instead of zero init for score tensors use random init self.model.score_fr[6].weight.data.random_() self.model.score_fr[6].bias.data.random_() self.model.score_pool3.weight.data.random_() self.model.score_pool3.bias.data.random_() self.model.score_pool4.weight.data.random_() self.model.score_pool4.bias.data.random_() self.x = torch.rand((4, 3, 45, 45)) def testForward(self): self.assertEqual( self.model.forward(self.x).shape.numel(), 64800) self.assertEqual( list(self.model.forward(self.x).shape), [4, 8, 45, 45]) self.assertEqual( float(self.model.forward(self.x)[3][4][44][4]), 2277257216.0) if __name__ == &quot;__main__&quot;: unittest.main() </code></pre>
2020-09-16 15:46:45.267000+00:00
2020-09-16 15:46:45.267000+00:00
null
null
63,917,991
<p>I want to unit-test the overridden forward function of my network model in PyTorch. So I loaded my model (pretrained, from the Zoo) in the setUp method, set a seed and created a random batch. In my testForward method I tested the result of forward against shape and numel, but I also want to check a specific value, which appears to be 0. I wasn't sure about that, so I also checked my params in setUp, and they appear not to be 0.</p> <pre class="lang-py prettyprint-override"><code>import unittest import torch from SemanticSegmentation.models.fcn8 import FCN8 class TestFCN8(unittest.TestCase): def setUp(self): self.model = FCN8(8, pretrained=True) torch.manual_seed(0) self.x = torch.rand((4, 3, 45, 45)) for param in self.model.parameters(): print(param.data) def testForward(self): self.assertEqual(self.model.forward(self.x).shape.numel(), 64800) self.assertEqual(str(self.model.forward(self.x).shape), 'torch.Size([4, 8, 45, 45])') print(self.model.named_parameters) if __name__ == &quot;__main__&quot;: unittest.main() </code></pre> <p>So my question is: the shape of the tensor returned by forward is what I expect, but why is this tensor completely zero? I expected at least a few non-zero values.</p> <p>The imported model is based on a VGG16 network with upscoring after conv layers 4, 8 and 16. If needed I could also present the model code.</p>
2020-09-16 10:23:31.487000+00:00
2020-09-16 15:46:45.267000+00:00
2020-09-16 11:17:58.753000+00:00
python|pytorch|vgg-net
['https://arxiv.org/abs/1411.4038']
1
45,500,459
<p>There is a recently released CVPR 2017 paper with the title<a href="https://arxiv.org/abs/1611.10012" rel="nofollow noreferrer"> Speed/accuracy trade-offs for modern convolutional object detectors</a> (Huang et al.) that compares different feature extractors with some neural network architectures which the authors call "meta-architectures". They compare the so build models towards their speed, memory usage and accuracy.</p>
2017-08-04 07:16:30.227000+00:00
2017-08-07 06:28:03.333000+00:00
2017-08-07 06:28:03.333000+00:00
null
45,498,569
<p>In general, a more complicated neural network (say, an object classification CNN with 128 layers) requires more "resources" (time, number of GPUs) to train than a less complicated neural network (for example, an object classification CNN with 32 layers). I found a link with a very nice summary of different types of CNNs and the "resources" required to train them: <a href="https://adeshpande3.github.io/adeshpande3.github.io/The-9-Deep-Learning-Papers-You-Need-To-Know-About.html" rel="nofollow noreferrer">https://adeshpande3.github.io/adeshpande3.github.io/The-9-Deep-Learning-Papers-You-Need-To-Know-About.html</a></p> <p>However, after the training is complete, when we're actually using these neural networks (say, an autonomous car using a trained CNN to help navigate), do more complicated and more accurate neural networks require more "computational resources" (CPU, memory, etc.) to run than less complicated, less accurate neural networks?</p> <p>I'm asking a generic question; the neural networks are not limited to object classification but can also include neural networks in NLP or other areas.</p> <p>If the answer is "it depends", can you provide some examples of more complicated, more accurate neural networks that use more resources to run than less complicated/accurate neural networks?</p>
2017-08-04 05:00:35.643000+00:00
2017-08-07 06:28:03.333000+00:00
null
machine-learning|neural-network|artificial-intelligence
['https://arxiv.org/abs/1611.10012']
1
58,838,359
<p>Generation of music and text differs from generation of images. Text and music generation can be done with sequence models (LSTM, RNN, GRU etc.), while image generation can be done with a <a href="https://arxiv.org/abs/1406.2661" rel="nofollow noreferrer">GAN - Generative Adversarial Network</a>.<br> <br><em>Text generation:</em><br> For text generation, the first step is to create <a href="https://blogs.oracle.com/datascience/introduction-to-embedding-in-natural-language-processing" rel="nofollow noreferrer">embeddings</a> for your sentences, e.g. from pre-trained embedding models (word2vec, GloVe etc.), and then apply these embeddings to your sentences. There are several other embedding techniques that you can explore. <br>The next step is to fit your embedded features to a sequence model. You can refer to <a href="https://machinelearningmastery.com/text-generation-lstm-recurrent-neural-networks-python-keras/" rel="nofollow noreferrer">this</a> as a starting point; a minimal sketch is shown below. <br><br>Music generation:<br> <a href="https://arxiv.org/abs/1709.01620" rel="nofollow noreferrer">Music generation</a> can also be done with sequence models, the difference being that instead of a sequence of words you have a sound waveform, spectrogram, notes, chords etc. <br> <br>Image generation:<br> This one is different from the above two.<br></p> <blockquote> <p>We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. <br><a href="https://arxiv.org/abs/1406.2661" rel="nofollow noreferrer">https://arxiv.org/abs/1406.2661</a></p> </blockquote> <p>Coming to your questions:<br></p> <blockquote> <p>If I'd add more layers and different types of data, could it create for example pictures or music?</p> </blockquote> <p>As stated, music and text generation can be done with a similar type of network architecture (as both follow a sequence), while images need to be treated differently.</p>
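<p>A very small sketch of the text-generation setup described above, in tf.keras: token ids go through an embedding layer, an LSTM, and a softmax over the vocabulary that predicts the next token. The vocabulary size and dimensions are placeholders.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

vocab_size = 5000  # placeholder: number of distinct tokens in your corpus

model = tf.keras.Sequential([
    layers.Embedding(vocab_size, 64),
    layers.LSTM(128),
    layers.Dense(vocab_size, activation='softmax'),  # distribution over the next token
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
# train on (sequence of token ids, next token id) pairs, then sample tokens one by one
</code></pre>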
2019-11-13 13:35:35.827000+00:00
2019-11-13 13:35:35.827000+00:00
null
null
58,047,324
<p>I Am an absolute beginner with Tensorflow. I have searched, but did not found how to do this:</p> <p>If I have a list of strings like this:</p> <pre><code>["sentace1", "...", "sentance5000"] </code></pre> <blockquote> <p>How do I train a neural network to create similar sentences? What is the logic of generating data, text, images? Can someone explain to me using code, through this relatively basic example?</p> </blockquote> <p>Also, If I'd add more layers and different types of data, could it create for example pictures or music?</p> <p><strong>A thousand thanks :)</strong></p>
2019-09-22 08:16:05.970000+00:00
2019-11-13 13:35:35.827000+00:00
null
python|tensorflow
['https://arxiv.org/abs/1406.2661', 'https://blogs.oracle.com/datascience/introduction-to-embedding-in-natural-language-processing', 'https://machinelearningmastery.com/text-generation-lstm-recurrent-neural-networks-python-keras/', 'https://arxiv.org/abs/1709.01620', 'https://arxiv.org/abs/1406.2661']
5
59,880,451
<h1>What is inductive bias?</h1> <p>Pretty much every design choice in machine learning signifies some sort of inductive bias. <a href="https://arxiv.org/pdf/1806.01261.pdf" rel="noreferrer">"Relational inductive biases, deep learning, and graph networks" (Battaglia et. al, 2018)</a> is an amazing read, which I will be referring to throughout this answer.</p> <blockquote> <p>An <strong>inductive bias</strong> allows a learning algorithm to <strong>prioritize one solution (or interpretation) over another</strong>, independent of the observed data. [...] Inductive biases can express assumptions about either the data-generating process or the space of solutions.</p> </blockquote> <h2>Examples in deep learning</h2> <p>Concretely speaking, the very <strong>composition of layers</strong> in deep learning provides a type of relational inductive bias: <em>hierarchical processing</em>. The <strong>type of layer</strong> imposes further relational inductive biases:</p> <p><a href="https://i.stack.imgur.com/QbD58.png" rel="noreferrer"><img src="https://i.stack.imgur.com/QbD58.png" alt="Various relational inductive biases in standard deep learning components (Battaglia et. al, 2018)"></a></p> <p>More generally, non-relational inductive biases used in deep learning include:</p> <ul> <li>activation non-linearities,</li> <li>weight decay,</li> <li>dropout,</li> <li>batch and layer normalization,</li> <li>data augmentation,</li> <li>training curricula,</li> <li>optimization algorithms,</li> <li><em>anything that imposes constraints on the learning trajectory</em>.</li> </ul> <h2>Examples outside of deep learning</h2> <p>In a Bayesian model, inductive biases are typically expressed through the choice and parameterization of the prior distribution. Adding a Tikhonov regularization penalty to your loss function implies assuming that simpler hypotheses are more likely.</p> <h1>Conclusion</h1> <p>The stronger the inductive bias, the better the sample efficiency--this can be understood in terms of the <strong>bias-variance tradeoff</strong>. Many modern deep learning methods follow an “end-to-end” design philosophy which emphasizes minimal <em>a priori</em> representational and computational assumptions, which explains why they tend to be so <strong>data-intensive</strong>. On the other hand, there is a lot of research into baking stronger relational inductive biases into deep learning architectures, e.g. with graph networks.</p> <h2>An aside about the word "inductive"</h2> <p>In philosophy, inductive reasoning refers to <strong>generalization</strong> from specific observations to a conclusion. This is a counterpoint to deductive reasoning, which refers to <strong>specialization</strong> from general ideas to a conclusion.</p>
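<p>As a small, hedged illustration of one of the non-relational biases listed above (weight decay): an L2 penalty prefers solutions with small weights independently of the observed data, i.e. it prioritizes "simpler" hypotheses. The layer sizes below are arbitrary:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers, regularizers

inputs = tf.keras.Input(shape=(10,))
x = layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4))(inputs)  # the bias toward small weights
outputs = layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(loss="mse", optimizer="adam")
</code></pre>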
2020-01-23 14:05:36.290000+00:00
2020-01-23 14:05:36.290000+00:00
null
null
35,655,267
<p>What is inductive bias in machine learning? Why is it necessary?</p>
2016-02-26 15:12:30.287000+00:00
2020-08-15 02:50:18.030000+00:00
2019-03-19 15:46:16.180000+00:00
machine-learning|terminology
['https://arxiv.org/pdf/1806.01261.pdf', 'https://i.stack.imgur.com/QbD58.png']
2
39,370,903
<p>I have actually studied this extensively in my paper. <a href="https://arxiv.org/ftp/arxiv/papers/1505/1505.00558.pdf" rel="nofollow">https://arxiv.org/ftp/arxiv/papers/1505/1505.00558.pdf</a></p> <p>The short answer is, NO. Dual Pivot doesn't perform as well when compared to a high end version of quicksort when swapping large elements. Look at figures 22 and 23.</p>
2016-09-07 13:13:17.633000+00:00
2016-09-07 13:13:17.633000+00:00
null
null
25,314,224
<p><em>Moved this question over to <a href="https://softwareengineering.stackexchange.com/questions/253350/dual-pivot-quicksort-in-face-of-expensive-swaps">Programmers</a>, as it didn't seem theoretical enough for CS.</em><br> <strong>TLDR</strong><br> Has anyone tested dual pivot quicksort performance with expensive-to-swap elements? It seems that in this case, it should massively underperform standard quicksort.</p> <p><br> <strong>Backstory</strong><br> Inspired by recent "question" <a href="https://stackoverflow.com/questions/24650626/how-to-implement-classic-sorting-algorithms-in-modern-c">here on stack overflow</a>, I decided to go and implement non trivial versions of given sorts (<a href="http://en.wikipedia.org/wiki/Introsort" rel="nofollow noreferrer">introsort</a>, <a href="http://en.wikipedia.org/wiki/Quicksort" rel="nofollow noreferrer">quicksort</a> with <a href="http://en.wikipedia.org/wiki/Dutch_national_flag_problem" rel="nofollow noreferrer">3-way partition</a>, median of 3 pivot selection, small block insertion sort etc).</p> <p>During some research I also came upon dual pivot quicksort, <a href="http://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort(int[])" rel="nofollow noreferrer">which is the current implementation of quicksort in Java standard library</a>. Generally it claims that it is always at least as good as standard quicksort, and empirical testing seemed to support it. (Which is the reason it is the current implementation.)</p> <p>However, it seems that no STL implementation uses dual pivot quicksort for the quicksort phase of introsort, which made me wonder why. After more research I found <a href="http://arxiv.org/pdf/1304.0988.pdf" rel="nofollow noreferrer">this paper</a>. It says that while dual pivot quicksort performs on average 5% less comparisons, it performs significantly more swaps. (Approximately 80% more) Obviously, since Java has only primitives and reference types, swapping is always cheap. (Even so, it uses this sort only for primitives, because it is not stable)</p> <p>So I wanted to see whether someone already tested standard quicksort vs dual pivot quicksort when elements are expensive to swap and has the numbers (and possibly source) lying around, or whether I will have to test this myself.</p> <p><strong>This question is specifically about quick sort variants.</strong></p>
2014-08-14 17:38:51.067000+00:00
2016-09-07 13:13:17.633000+00:00
2017-05-23 12:10:25.683000+00:00
c++|algorithm|sorting|quicksort
['https://arxiv.org/ftp/arxiv/papers/1505/1505.00558.pdf']
1
8,842,283
<p><a href="http://metaoptimize.com/qa/questions/6943/what-is-the-hashing-trick#6945" rel="nofollow">Here</a> (sorry I cannot add this as a comment for some reason.) Also, the first page of <a href="http://arxiv.org/pdf/0902.2206" rel="nofollow">Feature Hashing for Large Scale Multitask Learning</a> explains it nicely.</p>
2012-01-12 21:07:05.770000+00:00
2012-01-12 21:28:16.663000+00:00
2012-01-12 21:28:16.663000+00:00
null
8,673,035
<p>I know feature hashing (hashing-trick) is used to reduce the dimensionality and handle sparsity of bit vectors but I don't understand how it really works. Can anyone explain this to me? Is there any Python library available to do feature hashing?</p> <p>Thank you.</p>
2011-12-29 20:29:41.833000+00:00
2017-02-24 05:36:43.130000+00:00
2011-12-30 23:18:13.073000+00:00
python|hash|vector|machine-learning
['http://metaoptimize.com/qa/questions/6943/what-is-the-hashing-trick#6945', 'http://arxiv.org/pdf/0902.2206']
2
59,636,024
<p>The screenshot shows the so-called <em>pairing instability</em>, which is one of the most frequent instability problems in SPH computations.</p> <p>Pairing instability is the consequence of the application of bell-shaped kernel functions with too large smoothing radii. Since polynomial kernel functions of at least third order have an inflection point, particles that get too close to each other experience lower and lower repulsive forces and gradually stick together. This can be overcome by choosing a suitable smoothing radius leading to a rather optimal number of neighbors, which depends on the applied kernel function but usually is around 25 in 2D.</p> <p>You can read about the pairing instability and other issues of SPH simulations <a href="https://arxiv.org/abs/1111.1259" rel="nofollow noreferrer">here</a>. Pairing instability is briefly discussed on page 9.</p>
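<p>To see the mechanism numerically, here is a small sketch (not from the linked paper) of the gradient of the standard cubic spline kernel, with q = r/h and normalization constants omitted since only the shape matters: its magnitude peaks around q = 2/3 and drops toward zero as q approaches 0, so particles that get very close feel almost no repulsion.</p>
<pre><code>import numpy as np

def cubic_spline_dWdq(q):
    # piecewise gradient of the cubic spline kernel, up to a normalization constant
    q = np.asarray(q, dtype=float)
    grad = np.where(q &lt; 1.0, -3.0 * q + 2.25 * q**2, -0.75 * (2.0 - q)**2)
    return np.where(q &lt; 2.0, grad, 0.0)

for q in [0.05, 0.2, 2.0 / 3.0, 1.0, 1.5]:
    force = abs(float(cubic_spline_dWdq(q)))
    print(f"q = {q:.2f}   |dW/dq| ~ {force:.3f}")
</code></pre>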
2020-01-07 21:01:52.267000+00:00
2020-01-07 21:01:52.267000+00:00
null
null
58,589,236
<p>I implemented a rather simple SPH simulation using a cubic-spline-kernel and a simple non-iterative pressure solver as described <a href="https://cg.informatik.uni-freiburg.de/publications/2014_EG_SPH_STAR.pdf" rel="nofollow noreferrer">in this PDF</a> in equation 9. I followed algorithm 1 of that paper (including gravity).</p> <p>The resulting particle behaviour is certainly fluid-like (with quite some compressibility, as is expected from such a simple pressure solver). However, as you can see in <a href="https://i.stack.imgur.com/K5WpD.png" rel="nofollow noreferrer">this screenshot</a>, the particles are not evenly spread when in equilibrium, but instead arrange into small clusters of about 3 particles.</p> <p>Is this normal behaviour? It appears strange to me, so I wanted to make sure this is either correct, or that someone would have an idea of what could be wrong here.</p>
2019-10-28 10:25:57.773000+00:00
2020-01-07 21:01:52.267000+00:00
null
simulation|fluid|fluid-dynamics
['https://arxiv.org/abs/1111.1259']
1
60,066,557
<p>I believe I found the answer. The attention model used in the <code>date-conversion-attention</code> example uses the dot-product alignment score, and it is described in <code>Effective Approaches to Attention-based Neural Machine Translation</code>. Link: <a href="https://arxiv.org/pdf/1508.04025.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1508.04025.pdf</a></p>
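<p>For reference, a small numpy sketch (not the tfjs code itself) of that dot-product ("Luong") attention: the score is the dot product between the decoder state and each encoder state, followed by a softmax over the encoder steps and a weighted sum as the context vector; the dimensions below are made up.</p>
<pre><code>import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

encoder_states = np.random.randn(12, 64)   # 12 input time steps, 64-dim states
decoder_state  = np.random.randn(64)       # current decoder state h_t

scores  = encoder_states @ decoder_state   # dot-product alignment scores, shape (12,)
weights = softmax(scores)                  # attention weights over the input steps
context = weights @ encoder_states         # context vector, shape (64,)
print(weights.shape, context.shape)
</code></pre>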
2020-02-04 22:36:56.980000+00:00
2020-02-05 11:58:52.597000+00:00
2020-02-05 11:58:52.597000+00:00
null
60,059,272
<p>I've been looking at tfjs examples and trying to learn about seq2seq models. During the process, I've stumbled upon the <a href="https://github.com/tensorflow/tfjs-examples/tree/master/date-conversion-attention" rel="nofollow noreferrer">date-conversion-attention</a> example.</p> <p>It's a great example, but what kind of attention mechanism is being used in the example? There is no info in the <code>Readme</code> file. Can somebody point me to the paper that describes the attention that's being used here?</p> <p>Link to attention part: <a href="https://github.com/tensorflow/tfjs-examples/blob/908ee32750ba750a14d15caeb53115e2d3dda2b3/date-conversion-attention/model.js#L102-L119" rel="nofollow noreferrer">https://github.com/tensorflow/tfjs-examples/blob/908ee32750ba750a14d15caeb53115e2d3dda2b3/date-conversion-attention/model.js#L102-L119</a></p>
2020-02-04 14:14:04.957000+00:00
2020-02-05 11:58:52.597000+00:00
2020-02-04 14:18:04.300000+00:00
tensorflow|tensorflow.js
['https://arxiv.org/pdf/1508.04025.pdf']
1
1,210,150
<p>You can also try searching the 'Computer Science' section of arXiv: <a href="http://arxiv.org" rel="nofollow noreferrer">http://arxiv.org</a> for "search engine" and the various terms that others have suggested.</p> <p>It contains many academic papers, all freely available... hopefully some of them will be relevant to your research. (Of course the caveat of validating any paper's content applies.)</p>
2009-07-31 00:46:25.923000+00:00
2009-07-31 00:46:25.923000+00:00
null
null
1,153,770
<p>I know that Google’s search algorithm is mainly based on pagerank. However, it also does analysis and uses the structure of the document <code>H1</code>, <code>H2</code>, <code>title</code> and other HTML tags to enhance the search results.</p> <p><strong>What is the name of this technique "using the document structure to enhance the search results"?</strong></p> <p>And are there any academic papers to help me study this area?</p> <p>The fact that Google is taking the HTML structure into account is well covered in SEO articles however I could not find it in the academic papers.</p>
2009-07-20 14:03:47.623000+00:00
2014-06-20 08:55:18.300000+00:00
2012-02-09 02:18:34.457000+00:00
html|seo|search-engine
['http://arxiv.org']
1
62,350,334
<p>This is an excellent question, and highlights some of the intricacies of the federated setting.</p> <p>In short, unfortunately, there is no single answer here except: it depends. Let's take a few examples.</p> <p>In the paper <a href="https://arxiv.org/abs/1909.12488" rel="nofollow noreferrer">Improving Federated Learning Personalization via Model Agnostic Meta Learning</a>, it is argued that for a personalization application, evaluation should be weighted on the per-client level, independent of how much data each client holds. This argument is intuitively reasonable: supposing we are using federated personalization in a mobile application, we may wish to optimize for the average <em>future user's</em> experience, which is better modeled by the per-client weighted average than the per-example weighted average. This is to say, we do not wish to make our application <em>work</em> better for those that <em>use</em> it more; rather, we wish to make our application work better on average across users. Further, that referenced paper employs a 4-way split; clients are first partitioned into train and test clients, then the data on each client is partitioned into data to use for the personalization task and data on which to evaluate the personalized model. </p> <p>This may be fundamentally different from the concerns present in a different problem domain. For example, in the cross-silo FL setting, one might imagine that samples are coming from identical distributions, yet for some reason one silo holds more data than the others. One could imagine here a medical environment (making the rather unrealistic assumption that there are no latent factors here), where we assume that e.g. medical images are being sampled from the same distribution, but a larger provider simply has more of them. In this setting I think it is reasonable that we would evaluate the model we train on a <em>per-example</em> basis, as the user-client mapping breaks down, and the users for which we wish to deploy our model map better to "example" than "client" here (client mapping of course to the silo in this setting).</p> <p>I think other problem settings would call for other evaluation strategies, including things like median accuracy across clients or minimum accuracy across clients.</p> <p>Like in all data-science or ML applications, we should think hard in FL about exactly what we are trying to optimize for, and tailor our evaluation to this metric. I think the main difference in FL is that <em>this issue is more clear</em> on the front-end, which in my view is a feature of the framework.</p> <p>In TensorFlow Federated, the various methods of computing/aggregating metrics across clients can be adjusted by changing the <a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model#federated_output_computation" rel="nofollow noreferrer"><code>federated_output_computation</code></a> attribute on your <code>tff.learning.Model</code>, then passing this model (or rather, a model-building function) to <a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_evaluation" rel="nofollow noreferrer"><code>build_federated_evaluation</code></a>.</p>
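<p>A plain-Python illustration of the two weighting choices discussed above (this is not TFF API, and the client accuracies and example counts are made up): with a uniform per-client average the small client counts fully, while an example-weighted average is dominated by the large clients.</p>
<pre><code>import numpy as np

client_accuracy = np.array([0.90, 0.80, 0.30])   # one entry per client
client_examples = np.array([1000, 1000, 10])     # how much data each client holds

per_client = client_accuracy.mean()                                 # uniform over clients
per_example = np.average(client_accuracy, weights=client_examples)  # uniform over examples

print(f"per-client weighted accuracy : {per_client:.3f}")   # ~0.667
print(f"per-example weighted accuracy: {per_example:.3f}")  # ~0.847
</code></pre>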
2020-06-12 18:27:36.803000+00:00
2020-06-12 18:27:36.803000+00:00
null
null
62,323,656
<p>Validating with typical AI/ML models is predicated on all the data being available locally: splitting the data into e.g. an 80/20 % split, 80% of the data for training and 20% for test/evaluation. This scenario isn't applicable to the FL paradigm. </p> <p>Using the evaluation function with TFF, should you validate at the individual <strong>client</strong> level or at a <strong>global</strong> level? i.e. </p> <p>Next word prediction example scenario: From the perspective of the solution developer, you may wish to evaluate model accuracy over a <strong>larger</strong> number of users, but from the perspective of a <strong>single</strong> user, you want your next word prediction model to perform well for your personal needs.</p> <p>Example: </p> <pre><code>Eval Loop. NUM_ROUNDS = 10 for round_num in range(1, NUM_ROUNDS+1): ... federated_test_data = random_clients(emnist_test.client_ids,10) test_metrics = evaluation(state.model, federated_test_data) print('Validation round {:2d}, metrics={}'.format(round_num, test_metrics)) ... </code></pre> <p>Where you have a previously defined function random_clients to randomly sample from the domain of available clients? </p> <p>Do you evaluate on a single client or on multiple clients?</p>
2020-06-11 11:46:14.583000+00:00
2020-06-12 18:27:36.803000+00:00
2020-06-12 08:29:05.220000+00:00
machine-learning|tensorflow-federated
['https://arxiv.org/abs/1909.12488', 'https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model#federated_output_computation', 'https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_evaluation']
3
67,637,460
<p>TFF does support different clients having different model architectures.</p> <p>However, the <a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification" rel="nofollow noreferrer">Federated Learning for Image Classification tutorial</a> uses <a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process" rel="nofollow noreferrer"><code>tff.learning.build_federated_averaging_process</code></a> which implements the Federated Averaging (<a href="https://arxiv.org/abs/1602.05629" rel="nofollow noreferrer">McMahan et. al 2017</a>) algorithm, defined as each client receiving the same architecture. This is accomplished in TFF by &quot;mapping&quot; (in the functional programming sense) the model to each client dataset to produce a new model, and then aggregating the result.</p> <p>To achieve different clients having different architectures, a different federated learning algorithm would need to be implemented. There are couple (non-exhaustive) ways this could be expressed:</p> <ol> <li><p>Implement an alternative to <a href="https://github.com/tensorflow/federated/blob/610843c724740e1b041837cc93501b609fb05d8f/tensorflow_federated/python/learning/federated_averaging.py#L43" rel="nofollow noreferrer"><code>ClientFedAvg</code></a>. This method applies a fixed model to the clients dataset. An alternate implementation could potentially create a different architecture per client.</p> </li> <li><p>Create a replacement for <code>tff.learning.build_federated_averaging_process</code> that uses a different function signature, splitting out groups of clients that would receive different architectures. For example, currently FedAvg looks like:</p> <pre><code>(&lt;state@SERVER, data@CLIENTS&gt; → &lt;state@SERVER, metrics@SERVER&gt; </code></pre> <p>this could be replaced with a method with signature:</p> <pre><code>(&lt;state@SERVER, data1@CLIENTS, data2@CLIENTS, ...&gt; → &lt;state@SERVER, metrics@SERVER&gt; </code></pre> <p>This would allow the function to internally <code>tff.federated_map()</code> different model architectures to different client datasets. This would likely only be useful in FL simulations or experimentation and research.</p> </li> </ol> <p><strong>However</strong>, in federated learning there will be difficult questions around how to aggregate the models back on the server into a single global model. This probably needs to be designed out first.</p>
2021-05-21 13:07:34.760000+00:00
2021-05-21 13:07:34.760000+00:00
null
null
67,634,399
<p>I am practicing with this <a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification" rel="nofollow noreferrer">tutorial</a>. I would like each client to train a different architecture and a different model. Is this possible?</p>
2021-05-21 09:40:09.840000+00:00
2021-05-21 13:07:34.760000+00:00
null
tensorflow-federated|federated-learning
['https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification', 'https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process', 'https://arxiv.org/abs/1602.05629', 'https://github.com/tensorflow/federated/blob/610843c724740e1b041837cc93501b609fb05d8f/tensorflow_federated/python/learning/federated_averaging.py#L43']
4
60,277,406
<p>Ideally this is only possible through solutions like SEQ2SQL (<a href="https://arxiv.org/pdf/1709.00103.pdf" rel="nofollow noreferrer">link here for reference</a>).</p> <p>But I implemented it in a workaround fashion:</p> <ol> <li>I got the json using <code>tracker.latest_message</code>.</li> <li>Then I processed the json to make my own structured json like:</li> </ol> <blockquote> <p>[{'column_name':'a', 'operator': '=', 'value':'100'}, {'column_name':'b', 'operator': '&gt;', 'value':'100'}]</p> </blockquote> <ol start="3"> <li>The above structure was used to form the WHERE clause of the query.</li> <li>In the same way I made a custom json for the SELECT part as well:</li> </ol> <blockquote> <p>[{sum:column1},{count:column2}]</p> </blockquote> <ol start="5"> <li>Then I looped through the json I had created and built the queries (a rough sketch of this step is shown below).</li> </ol> <p>Note: This json structure will not be able to cover all possible scenarios, but it worked decently for me.</p>
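<p>A rough sketch of step 5, under the assumption that the column names are trusted (they should also be whitelisted in a real bot) and with <code>my_table</code> as a placeholder table name; the values are passed as DB-API parameters rather than pasted into the string:</p>
<pre><code>where_spec = [
    {"column_name": "a", "operator": "=", "value": "100"},
    {"column_name": "b", "operator": "&gt;", "value": "100"},
]

def build_where(spec):
    allowed_ops = {"=", "&gt;", "&lt;", "&gt;=", "&lt;=", "!="}   # whitelist the operators
    clauses, params = [], []
    for cond in spec:
        if cond["operator"] not in allowed_ops:
            raise ValueError("unsupported operator: %s" % cond["operator"])
        clauses.append("%s %s %%s" % (cond["column_name"], cond["operator"]))
        params.append(cond["value"])
    return " AND ".join(clauses), params

where_sql, params = build_where(where_spec)
# "my_table" is a placeholder table name for this illustration
query = "SELECT SUM(column1), COUNT(column2) FROM my_table WHERE " + where_sql
print(query)    # SELECT SUM(column1), COUNT(column2) FROM my_table WHERE a = %s AND b &gt; %s
print(params)   # ['100', '100']
</code></pre>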
2020-02-18 09:17:33.760000+00:00
2022-03-21 17:29:28.560000+00:00
2022-03-21 17:29:28.560000+00:00
null
60,275,821
<p>I am trying to build a chatbot in Rasa/Dialogflow, the problem i ham facing is to convert English to SQL query so that what user write in English can be converted into SQL fetch data from MYSQL database and display result to use.</p> <p>Can someone suggest me how to do it?</p>
2020-02-18 07:30:17.993000+00:00
2022-03-21 17:29:28.560000+00:00
2020-02-18 14:27:50.913000+00:00
dialogflow-es|rasa-nlu|rasa-core
['https://arxiv.org/pdf/1709.00103.pdf']
1
44,886,721
<p>My first thought would be for you to use a library such as <a href="https://cmusatyalab.github.io/openface/" rel="nofollow noreferrer">Openface</a>, which is already trained with lots of faces and has a great face representation (with the same 128 dimensions you need).</p> <p>However, you mentioned that you want to train it yourself. I'd recommend you start by taking a look at Siamese Neural Networks. Siamese Neural Networks receive a pair of images (genuine pair - e.g. images from the same person; impostor pair - e.g. images from different persons) and try to learn a similarity/dissimilarity metric (also called Metric Learning). They are very useful for learning face embeddings since your goal seems to be related to that. They basically learn a way to map the input images to a representation that "benefits comparison". Other implementations (such as OpenFace) are trained with Triplet Embeddings, where instead of a pair of images you receive a triplet (two similar and one dissimilar).</p> <p>Here are some references to start with Siamese Networks:</p> <ul> <li>Signature recognition (a little old but good for understanding them): <a href="https://papers.nips.cc/paper/769-signature-verification-using-a-siamese-time-delay-neural-network.pdf" rel="nofollow noreferrer">https://papers.nips.cc/paper/769-signature-verification-using-a-siamese-time-delay-neural-network.pdf</a></li> <li>Siamese networks for face embeddings: <a href="http://yann.lecun.com/exdb/publis/pdf/chopra-05.pdf" rel="nofollow noreferrer">http://yann.lecun.com/exdb/publis/pdf/chopra-05.pdf</a></li> <li>Triplet Embeddings paper: <a href="https://arxiv.org/pdf/1503.03832.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1503.03832.pdf</a></li> </ul> <p>Just keep in mind that training these architectures is quite difficult, since selecting the best pairs is a very important and challenging part of the problem. One paper that mentions some of the challenges for creating image pairs but is not related to faces is this <a href="https://www.cs.cornell.edu/~kb/publications/SIG15ProductNet.pdf" rel="nofollow noreferrer">one</a>.</p> <p>Hope that helps!</p>
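<p>To make the pair-based training objective concrete, here is a short numpy sketch of the contrastive loss commonly used with Siamese networks (my own illustration, not code from the papers above): genuine pairs (label 1) are pulled together and impostor pairs (label 0) are pushed at least <code>margin</code> apart.</p>
<pre><code>import numpy as np

def contrastive_loss(emb_a, emb_b, label, margin=1.0):
    d = np.linalg.norm(emb_a - emb_b)                       # distance between the two embeddings
    return label * d**2 + (1 - label) * max(0.0, margin - d) ** 2

a, b = np.random.randn(128), np.random.randn(128)           # stand-ins for 128-dim face embeddings
print("genuine pair loss :", contrastive_loss(a, b, label=1))
print("impostor pair loss:", contrastive_loss(a, b, label=0))
</code></pre>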
2017-07-03 13:23:37.277000+00:00
2017-07-03 13:23:37.277000+00:00
null
null
44,870,576
<p>I want to train a neural network to extract a number of (128) face features from an image. </p> <p>The features are numbers that measure things like the distance between middles of the eyes, or the distance between middle of the left eyes and middle point of mouth.</p> <p>I need this to find the dissimilarity between two faces: given a database with users, by analyzing a photo I'll be able to tell if it's a photo of Jhon.</p> <p>I began my study using <a href="https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78http://" rel="nofollow noreferrer">this</a> link, which states: <em>Researchers have discovered that the most accurate approach is to let the computer figure out the measurements to collect itself.</em></p> <p>Ok, so the output of the network is an array of 128 numbers, I'll use some formula to adjust the weights so the output numbers are as accurate as possible.</p> <p><strong>What should I use as input?</strong> Will my input nodes be three photos, like in <a href="https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78http://" rel="nofollow noreferrer">this</a> article, and I'll extract the features based on the comparisons between the photos?</p>
2017-07-02 11:59:56.843000+00:00
2017-07-03 13:23:37.277000+00:00
null
machine-learning|face-recognition
['https://cmusatyalab.github.io/openface/', 'https://papers.nips.cc/paper/769-signature-verification-using-a-siamese-time-delay-neural-network.pdf', 'http://yann.lecun.com/exdb/publis/pdf/chopra-05.pdf', 'https://arxiv.org/pdf/1503.03832.pdf', 'https://www.cs.cornell.edu/~kb/publications/SIG15ProductNet.pdf']
5
64,160,575
<p>In the general case, the CUDA documentation does not give you enough information to calculate the number of clock cycles that a particular instruction requires. This would be related to the pipeline depth for the instruction (i.e. for the functional unit servicing that instruction) and this is not documented. The throughput table is largely useless for this exercise.</p> <p>This is one reason why you will find various microbenchmarking papers for CUDA. <a href="https://arxiv.org/pdf/1804.06826.pdf" rel="nofollow noreferrer">Here is one such example</a>.</p> <p>It has to be measured empirically (and carefully), for each architecture of interest, and for each <a href="https://docs.nvidia.com/cuda/cuda-binary-utilities/index.html#instruction-set-ref" rel="nofollow noreferrer">SASS instruction</a> of interest; it is not documented.</p>
2020-10-01 17:39:33.903000+00:00
2020-10-01 17:49:45.463000+00:00
2020-10-01 17:49:45.463000+00:00
null
64,160,061
<p>I am a beginner in CUDA. Now I am calculating the number of clock cycles per instruction (e.g. addition). In <a href="https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#arithmetic-instructions" rel="nofollow noreferrer">https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#arithmetic-instructions</a>, it only gives the instruction throughput for different arithmetic operations. For example, the throughput in 7.x is 64 for 32-bit floating-point add. So, can I take 64/32=2 as the number of clock cycles per instruction? If not, how can I calculate it?</p>
2020-10-01 17:00:27.817000+00:00
2020-10-01 17:49:45.463000+00:00
null
cuda
['https://arxiv.org/pdf/1804.06826.pdf', 'https://docs.nvidia.com/cuda/cuda-binary-utilities/index.html#instruction-set-ref']
2
55,580,738
<p>You can start with <a href="http://spacy.io" rel="nofollow noreferrer">spacy</a>'s implementation of <a href="https://github.com/explosion/sense2vec" rel="nofollow noreferrer">sense2vec</a>. It is based on the original sense2vec <a href="https://arxiv.org/abs/1511.06388" rel="nofollow noreferrer">paper</a>. From the abstract:</p> <blockquote> <p>This paper presents a novel approach which addresses these concerns by modeling multiple embeddings for each word based on supervised disambiguation, which provides a fast and accurate way for a consuming NLP model to select a sense-disambiguated embedding. We demonstrate that these embeddings can disambiguate both contrastive senses such as nominal and verbal senses as well as nuanced senses such as sarcasm.</p> </blockquote>
2019-04-08 19:47:21.707000+00:00
2019-04-08 19:47:21.707000+00:00
null
null
55,576,334
<p>I tried to solve the word-polysemy problem (fixing WordNet synsets for polysemous words in the text) via word2vec-like neural networks (<a href="https://stackoverflow.com/questions/51330549/using-word2vec-for-polysemy-solving-problems">Using Word2Vec for polysemy solving problems</a>), but it gives too poor results. What are other state-of-the-art algorithms for resolving word polysemy/homonymy? Can you give me some articles?</p>
2019-04-08 15:00:05.167000+00:00
2019-04-09 18:34:43.497000+00:00
null
nlp|wordnet
['http://spacy.io', 'https://github.com/explosion/sense2vec', 'https://arxiv.org/abs/1511.06388']
3
71,871,933
<p>Well I haven't seen the answer I was looking for so I did some research myself. <a href="https://i.stack.imgur.com/qCTxz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qCTxz.png" alt="enter image description here" /></a></p> <p>In <a href="https://stats.stackexchange.com/questions/153531/what-is-batch-size-in-neural-network">this</a> article it is said:</p> <ul> <li>Stochastic means 1 sample, minibatch means a batch of a few samples, and batch means the full train dataset = this I found <a href="https://stats.stackexchange.com/questions/117919/what-are-the-differences-between-epoch-batch-and-minibatch">here</a></li> <li>PROS of a smaller batch: faster training, less RAM needed</li> <li>CONS: the smaller the batch, the less accurate the estimate of the gradient will be</li> </ul> <p>In <a href="https://arxiv.org/pdf/1803.09820.pdf" rel="nofollow noreferrer">this</a> paper, they tried batch sizes of 256, 512 and 1024, and the performance of all models was within the standard deviation of each other. This means that the batch size didn't have any significant influence on performance.</p> <p>Final word:</p> <ul> <li>If you have a problem with RAM = decrease batch size</li> <li>If you need to calculate faster = decrease batch size</li> <li>If the performance decreased after a smaller batch = increase batch size</li> </ul> <p>If you find this post useful, please up-vote &amp; comment. I took the time to share it with you. Thanks</p>
2022-04-14 12:50:01.807000+00:00
2022-04-14 12:50:01.807000+00:00
null
null
35,050,753
<p>My training set has 970 samples and validation set has 243 samples.</p> <p>How big should batch size and number of epochs be when fitting a model to optimize the val_acc? Is there any sort of rule of thumb to use based on data input size?</p>
2016-01-28 00:21:39.103000+00:00
2022-05-29 03:56:02.223000+00:00
2022-05-29 03:56:02.223000+00:00
python|machine-learning|deep-learning
['https://i.stack.imgur.com/qCTxz.png', 'https://stats.stackexchange.com/questions/153531/what-is-batch-size-in-neural-network', 'https://stats.stackexchange.com/questions/117919/what-are-the-differences-between-epoch-batch-and-minibatch', 'https://arxiv.org/pdf/1803.09820.pdf']
4
67,623,535
<p><code>Recurrent Dropout</code> is a regularization method for recurrent neural networks. <code>Dropout</code> is applied to the updates to the LSTM memory cells, i.e. it drops out the input/update gate in the LSTM. For more information you can refer to the paper <a href="https://arxiv.org/abs/1603.05118v2" rel="nofollow noreferrer">here</a>.</p>
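<p>For a concrete picture, in <code>tf.keras</code> the two kinds are exposed as separate arguments: <code>dropout</code> masks the layer inputs, while <code>recurrent_dropout</code> masks the recurrent update path described above. This is only an illustration with tf.keras, not the <code>tf.contrib</code> cell from the question, and the shapes are arbitrary:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(10, 8))   # (timesteps, features)
x = layers.LSTM(64, dropout=0.2, recurrent_dropout=0.2)(inputs)
outputs = layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(loss="mse", optimizer="adam")
</code></pre>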
2021-05-20 15:32:22.020000+00:00
2021-05-20 15:32:22.020000+00:00
null
null
67,445,952
<p>Recently, I tried to use "<em>tf.contrib.rnn.LayerNormBasicLSTMCell</em>", but I don't know the meaning of the argument "dropout_keep_prob".</p> <p>Then I looked at the documentation given by Google. Their explanation is "unit Tensor or float between 0 and 1 representing the <em><strong>recurrent dropout</strong></em> probability value. If float and 1.0, no dropout will be applied."</p> <p>But I don't know the difference between "<strong>recurrent dropout</strong>" and "<strong>dropout</strong>".</p>
2021-05-08 09:19:31.850000+00:00
2021-05-20 15:32:22.020000+00:00
2021-05-08 10:05:10.197000+00:00
tensorflow|lstm
['https://arxiv.org/abs/1603.05118v2']
1
67,716,853
<p>Aside from @benjaminplanche 's answer, I'd use it to manually change values of parameters.</p> <p>For instance, I have the following model:</p> <pre><code>model = nn.Sequential(nn.Linear(10, 1)) </code></pre> <p>and for some reason, I'd like to manually update the values of its parameters. Then, I can do:</p> <pre><code>for param in model.parameters(): param.data = 10 * param.data # multiply the parameter values by 10. </code></pre> <p>Note that if we remove <code>.data</code> behind <code>param</code>, the parameter values won't be updated.</p> <p>Such use can be found in <a href="https://arxiv.org/abs/2006.07733" rel="nofollow noreferrer">BYOL (Bootstrap your own latent)</a> and <a href="https://github.com/lucidrains/byol-pytorch/blob/2aa84ee18fafecaf35637da4657f92619e83876d/byol_pytorch/byol_pytorch.py#L61" rel="nofollow noreferrer">this Github webpage for BYOL pytorch implementation</a>.</p>
2021-05-27 06:35:02.173000+00:00
2021-05-27 06:35:02.173000+00:00
null
null
51,743,214
<p>I'm new to pytorch. I have read a lot of pytorch code which heavily uses the tensor's <code>.data</code> member. But I searched for <code>.data</code> in the official documentation and on Google, and found little. I guess <code>.data</code> contains the data in the tensor, but I don't know when we need it and when we don't. </p>
2018-08-08 09:31:28.567000+00:00
2022-01-11 10:40:52.673000+00:00
2018-08-08 12:10:19.857000+00:00
python|version|pytorch|tensor
['https://arxiv.org/abs/2006.07733', 'https://github.com/lucidrains/byol-pytorch/blob/2aa84ee18fafecaf35637da4657f92619e83876d/byol_pytorch/byol_pytorch.py#L61']
2
14,538,829
<p>The following paper presents algorithms for union, intersection and difference on ordered sets that beat O(Z) if the intersection is larger than the difference (Z > n/2):</p> <p><a href="http://arxiv.org/abs/1301.3388" rel="nofollow">Confluently Persistent Sets and Maps</a></p>
2013-01-26 16:18:02.350000+00:00
2013-01-26 16:18:02.350000+00:00
null
null
4,261,619
<p>I haven't been able to find any satisfactory coverage of this topic all in one place, so I was wondering: What are the fastest set intersect, union, and disjoin algorithms?<br> Are there any interesting ones with limited domains?<br> Can anyone beat O(Z) where Z is the actual size of intersection?</p> <p>If your approach relies on sorted sets, please note that, but don't consider it a disqualifying factor. It seems to me that there must be a veritable storehouse of subtle optimizations to be shared, and I don't want to miss any of them.</p> <p>A few algorithms I know rely on bitwise operations beyond the vanilla, so you may assume the presence of SSE4 and access to intrinsics like popcount. Please note this assumption.</p> <p>Of interest: <a href="http://fawx.com/2009/10/26/an-ode-to-set-intersection-part-1/" rel="noreferrer">An Implementation of B-Y Intersect</a></p> <p><strong>Update</strong><br> We've got some really good partial answers, but I'm still hoping for some more complete attacks on the problem. I'm particularly interested in seeing a more fully articulated use of bloom filters in attacking the problem.</p> <p><strong>Update</strong><br> I've done some preliminary work on combining bloom filters with a cuckoo hash table. It's looking almost obnoxiously promising, because they have very similar demands. I've gone ahead and accepted an answer, but I'm not really satisfied at the moment.</p>
2010-11-23 22:27:21.273000+00:00
2013-01-26 16:18:02.350000+00:00
2010-11-30 00:32:31.360000+00:00
algorithm|language-agnostic|set-intersection
['http://arxiv.org/abs/1301.3388']
1
52,439,041
<p><strong>Max Message Count</strong>: The maximum number of transactions/messages to permit in a block.</p> <p><strong>Absolute Max Bytes</strong>: The (strict) maximum number of bytes allowed for the serialized transactions/messages in a block. </p> <p><strong>Preferred Max Bytes</strong>: The preferred maximum number of bytes allowed for the serialized transactions/messages in a batch. A transaction/message larger than the preferred max bytes will result in a batch larger than the preferred max bytes.</p> <p>The criterion that is encountered first will be taken into consideration when the orderer cuts the block.</p> <p>If you have a constantly flowing high number of transactions, then pack as many transactions as possible in a block to get max throughput. Otherwise tweak the <strong>BatchTimeout</strong> and <strong>MaxMessageCount</strong> to optimize your transaction throughput.</p> <p>If you want to dig deeper into this aspect, refer to this research paper: <a href="https://arxiv.org/pdf/1805.11390.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1805.11390.pdf</a></p>
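<p>A rough sketch of how the three limits interact when a block is cut (this is a simplified illustration, not the actual orderer code, and the sizes are in bytes):</p>
<pre><code>def should_cut_block(pending_sizes, new_msg_size, max_message_count,
                     preferred_max_bytes, absolute_max_bytes):
    if new_msg_size &gt; absolute_max_bytes:
        raise ValueError("message rejected: larger than AbsoluteMaxBytes")
    if new_msg_size &gt; preferred_max_bytes:
        return True   # oversized message ends up in its own, larger-than-preferred block
    if sum(pending_sizes) + new_msg_size &gt; preferred_max_bytes:
        return True   # adding it would push the batch past PreferredMaxBytes
    if len(pending_sizes) + 1 &gt;= max_message_count:
        return True   # MaxMessageCount reached
    return False      # otherwise keep queueing (until BatchTimeout fires)

print(should_cut_block([300, 400], 200, 10, 1000, 10_000_000))  # False, still under all limits
print(should_cut_block([300, 400], 400, 10, 1000, 10_000_000))  # True, PreferredMaxBytes reached
</code></pre>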
2018-09-21 07:50:30.160000+00:00
2018-09-21 07:50:30.160000+00:00
null
null
52,436,847
<p>What is the relationship between MaxMessageCount, AbsoluteMaxBytes, and PreferredMaxBytes?</p> <p>A block in fabric consists of a MaxMessageCount number of transaction or PreferredMaxBytes?</p> <p>What should be the value of these to get maximum throughput?</p>
2018-09-21 04:56:42.420000+00:00
2018-09-21 07:50:30.160000+00:00
2018-09-21 05:10:40.743000+00:00
hyperledger-fabric|blockchain
['https://arxiv.org/pdf/1805.11390.pdf']
1
64,960,852
<p>There is no built-in feature of Gensim that would allow this extra constraint/regularization to be applied during training.</p> <p>You should probably try to explain your 'really complicated' reason for this idiosyncratic request. There might be a better way to achieve the real end-goal, rather than shoehorning vectors that are typically bushy-and-balanced around the origin into a non-negative representation.</p> <p>Notably, a paper called '<a href="https://arxiv.org/abs/1702.01417" rel="nofollow noreferrer">All-but-the-Top: Simple and Effective Postprocessing for Word Representations</a>' has suggested word-vectors can be improved by postprocessing to ensure they are <em>more</em> balanced around the origin, rather than less (as seems a reliable side-effect of typical negative-sampling configurations).</p> <p>If you're still interested in experimenting in the opposite direction – transforming usual word2vec word-vectors into a representation where all dimensions are positive – I can think of a number of trivial, superficial ways to achieve that. I have no idea whether they'd actually preserve, or ruin, beneficial properties in the vectors – but you could try them, and see. For example (rough sketches of the first few are shown below):</p> <ul> <li>You could try simply setting all negative dimensions to 0.0 - truncation. (Loses lots of info but might give a quick indication if a dirt-simple experiment gives you any of the benefits you seek.)</li> <li>You could find the largest negative dimension that appears anywhere in any of the vectors, then add its absolute value to all dimensions. Voila! No vector dimension is now lower than 0.0. (You could also try this in a per-dimension manner - only correct dimension #0 with the lowest dimension #0 value. Or, try other re-scalings of each dimension such that the previously-highly-negative values are 0.0, and the previous-highly-positive values stay where they are or only shift a little.)</li> <li>You could try turning every dimension in the original word-vectors into two dimensions in a transformed set: one that's the original positive value, or 0.0 if it was negative, and a 2nd dimension that's the absolute value of the original negative value, or 0.0 if it was positive. (Or similarly: one dimension that's the absolute-value of the original value, and one dimension that's 0.0 or 1.0 depending on whether original value was negative or positive.)</li> </ul> <p>There are probably other more-sophisticated factorization/decompositions for re-representing the full set of word-vectors in a transformed array with only non-negative individual values, but I don't know them offhand, other than to think it might be worth searching for them.</p> <p>And, whether any of these transformations work for your next steps, who knows? But it might be worth trying. (And if any of these offer surprisingly good results, it'd be great to hear in a followup comment!)</p>
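<p>Rough numpy sketches of the first few ideas above, given a matrix <code>vecs</code> of word-vectors with shape (vocab_size, dims); the random matrix is just a stand-in for something like <code>model.wv.vectors</code>:</p>
<pre><code>import numpy as np

vecs = np.random.randn(1000, 100)                # stand-in for the trained word-vectors

# 1) truncation: clamp negatives to 0.0
truncated = np.maximum(vecs, 0.0)

# 2) shift: add the absolute value of the most negative entry to everything
shifted = vecs + abs(vecs.min())

# 2b) per-dimension variant: correct each dimension by its own minimum
shifted_per_dim = vecs - vecs.min(axis=0)

# 3) split each dimension into a "positive part" and a "negative part"
split = np.concatenate([np.maximum(vecs, 0.0), np.maximum(-vecs, 0.0)], axis=1)

print(truncated.min(), shifted.min(), shifted_per_dim.min(), split.min())  # all non-negative
</code></pre>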
2020-11-22 23:45:27.100000+00:00
2020-11-22 23:45:27.100000+00:00
null
null
64,955,331
<p>Is there any way in gensim that I can force the learned vectors in word2vec to be all positive (all the elements of the vector positive)? I am working on a different task that needs these vectors to be positive (the reason is really complicated, so don't ask why).</p> <p>So what is the easiest way for me to force gensim to learn positive vectors?</p>
2020-11-22 14:28:03.473000+00:00
2020-11-22 23:45:27.100000+00:00
null
gensim|word2vec
['https://arxiv.org/abs/1702.01417']
1
54,066,401
<p><a href="http://jshun.github.io/ligra/" rel="nofollow noreferrer">Ligra</a> is either the state of the art or close to it for single-machine multicore implementations. It should be able to handle your graph no problem.</p> <p><a href="https://arxiv.org/pdf/1807.10727.pdf" rel="nofollow noreferrer">Connected Components at Scale via Local Contractions</a>, by my colleagues Jakub Łącki, Vahab Mirrokni, and Michał Włodarczyk, is the state of the art (at least, that I know about) for MapReduce algorithms. We've used it on graphs a thousand times bigger than yours.</p>
2019-01-06 22:10:02.730000+00:00
2019-01-14 15:17:25.297000+00:00
2019-01-14 15:17:25.297000+00:00
null
54,057,160
<p>I have to find the best among the already known algorithms for parallel computation of the connected components of a graph.</p> <p>Here is a brief outline of my data and computer architecture:</p> <ul> <li>I have access to a computational cluster with several thousands of processors (memory is not shared, but I expect that there should be enough memory in a single node to address my needs for the whole data).</li> <li>my graph has a rather small ratio of the number of edges to the number of vertices (about 5)</li> <li>I expect most of the connected components to be very small (2-3 vertices)</li> <li>there will be, however, very big components with millions of vertices (constituting even up to 10% of the total number of vertices).</li> </ul> <p>I have read about parallel algorithms for computing connected components of graphs. As I have noticed, some of them are based on the classical BFS approach for the serial case. To be honest, I got a bit lost in the number of these algorithms. Could anyone give me some advice on which algorithm would be the best for my purposes?</p>
2019-01-05 23:09:35.433000+00:00
2019-01-14 15:17:25.297000+00:00
null
multithreading|algorithm|graph|parallel-processing|multiprocessing
['http://jshun.github.io/ligra/', 'https://arxiv.org/pdf/1807.10727.pdf']
2
72,484,269
<p>In the original paper (Section 2), there seems to be a ReLU activation function in the upsampling path?</p> <p>&quot;Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU.&quot;</p> <p><a href="https://arxiv.org/pdf/1505.04597.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1505.04597.pdf</a></p>
2022-06-03 02:58:02.750000+00:00
2022-06-03 02:58:02.750000+00:00
null
null
54,313,572
<p>In the <a href="https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/" rel="nofollow noreferrer">U-net</a>, there are activation functions in all the layers, but there seems to be no activation function in the upsampling layer (which is done using transposed convolution). Why does this offer more efficiency than having an activation function? </p> <p><a href="https://i.stack.imgur.com/vR2UR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vR2UR.png" alt="enter image description here"></a></p> <p>From my understanding, activation functions offer non-linearity. So, this question really is: what benefit is there to maintaining linearity in transposed convolutions while maintaining <em>non</em>-linearity in regular convolutions? Wouldn't it just always be best to have an activation function in these layers? </p> <p>My only other intuition is that perhaps they're trying to keep the upsampling as closely related as possible to regular morphological interpolation methods.</p>
2019-01-22 17:30:57.717000+00:00
2022-06-03 02:58:02.750000+00:00
null
machine-learning|neural-network|artificial-intelligence|conv-neural-network
['https://arxiv.org/pdf/1505.04597.pdf']
1
54,444,912
<p>You can just produce a class-agnostic bounding box (i.e., just locate where an object is without requiring its class) using regression. Instead of producing the box coordinates directly (box location, width, height), calculating the offset relative to a default box size will provide better performance. The <a href="https://arxiv.org/abs/1512.02325" rel="nofollow noreferrer">SSD</a> model actually predicts a common bounding box and a per-class confidence score for each object. You can follow their approach.</p>
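<p>A small numpy sketch of the usual offset encoding relative to a default/anchor box (an illustration in the spirit of SSD-style parameterizations, not code from the paper); boxes are (cx, cy, w, h) and the numbers are made up:</p>
<pre><code>import numpy as np

def encode(gt, anchor):
    gcx, gcy, gw, gh = gt
    acx, acy, aw, ah = anchor
    return np.array([(gcx - acx) / aw,     # x offset, scaled by anchor width
                     (gcy - acy) / ah,     # y offset, scaled by anchor height
                     np.log(gw / aw),      # log width ratio
                     np.log(gh / ah)])     # log height ratio

anchor = (0.5, 0.5, 0.2, 0.2)
ground_truth = (0.55, 0.48, 0.25, 0.18)
print(encode(ground_truth, anchor))        # the 4 values the regressor learns to predict
</code></pre>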
2019-01-30 16:15:38.523000+00:00
2019-01-30 16:15:38.523000+00:00
null
null
54,442,213
<p>I am working on a problem of object localization. Since the dataset is different from ImageNet or COCO, I only need to find whether there is an object in the image or not, and not the class of that object. How should I proceed?</p>
2019-01-30 13:55:03.540000+00:00
2020-06-16 16:36:55.720000+00:00
null
deep-learning|computer-vision|conv-neural-network
['https://arxiv.org/abs/1512.02325']
1
27,625,104
<blockquote> <p>Adding some debug output made it work. Removing the debug code but compiling for debug mode made it work. No possibility of memory corruption.</p> </blockquote> <p>You seem to be having the sort of trouble described at length by David Monniaux in the <a href="http://arxiv.org/abs/cs/0701192" rel="nofollow">article</a> “The pitfalls of verifying floating-point computations”:</p> <blockquote> <p>The above examples indicate that common debugging practises that apparently should not change the computational semantics may actually alter the result of computations. Adding a logging statement in the middle of a computation may alter the scheduling of registers, […]</p> </blockquote> <p>Note that most of his reproaches are for C compilers and that the situation—for C compilers—has improved since his article was published: although the article was written after C99, some of the liberties taken by C compilers are clearly disallowed by the C99 standard or are not clearly allowed or are allowed only within a clearly defined framework (look for occurrences of <code>FLT_EVAL_METHOD</code> and <code>FP_CONTRACT</code> in the C99 standard for details).</p> <p>One way in which the situation has improved for C compilers is that GCC now implements clear, deterministic, standard-compliant semantics for floating-point even when extra precision is present. The relevant option is <code>-fexcess-precision=standard</code>, set by <code>-std=c99</code>.</p> <p>WRT the question - "How to discard unwanted extra precision from floating point computations?" - the simple answer is "don't". Discarding the precision at one point isn't sufficient as extra precision can be lost or retained at any point in the computation, so apparent reproducibility is a fluke.</p> <p>Although G++ uses the same back-end as GCC, and although the C++ standard defers to the C standard for the definition of <code>math.h</code> (which provides <code>FLT_EVAL_METHOD</code>), G++ unfortunately <a href="https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55544" rel="nofollow">does not support</a> <code>-fexcess-precision=standard</code>.</p> <p>One solution is for you to move all the floating-point computations to C files that you would compile with <code>-fexcess-precision=standard</code> and link to the rest of the application. Another solution is to use <code>-msse2 -mfpmath=sse</code> instead to cause the compiler to emit SSE2 instructions that do not have the “excess precision” problem. The latter two options can be implied by <code>-m64</code>, since SSE2 predates the x86-64 instruction set and the “AMD64 ABI” already uses SSE2 registers for passing floating-point arguments.</p>
2014-12-23 17:20:58.867000+00:00
2014-12-24 03:50:12.403000+00:00
2014-12-24 03:50:12.403000+00:00
null
27,612,293
<p>First, I'll just say that I know floating point calculations are approximate - subject to rounding errors - so normally you can't test for exact equality and expect it to work. <strike>However, floating point calculations are still deterministic - run the same code with the same inputs and you should get the same result.</strike> [<em>see edit at end</em>] I've just had a surprise where that didn't work out.</p> <p>I'm writing a utility to extract some information from Photoshop PSD files. Paths containing cubic Bezier curves are part of that, and I needed to compute axis-aligned bounding boxes for Bezier curves. For the theory, I found <a href="http://pomax.github.io/bezierinfo/" rel="nofollow">A Primer on Bezier Curves</a> via another SO question.</p> <p>Quick summary of method...</p> <ol> <li>Determine the derivative of the cubic bezier - a quadratic bezier.</li> <li>Use the quadratic formula to solve the quadratic for each dimension, giving the parameter values for the stopping points (including maxima and minima) of the curve.</li> <li>Evaluate the cubic bezier positions at those parameters (and the start and end points), expanding the bounding box.</li> <li>Because I want the actual on-the-boundary points as well as the bounding box, make a second pass, computing positions and rejecting those points that aren't on the final bounding box.</li> </ol> <p>The fourth step was the problem. Expanding the bounding box shouldn't change floating-point values - it just selects largest/smallest values and stores them. I recompute the same points using the same curve control points and the same parameter, but comparing for exact equality with the bounding box failed where it should pass.</p> <p>Adding some debug output made it work. Removing the debug code but compiling for debug mode made it work. No possibility of memory corruption.</p> <p>I figured that the stored value (in the form of the bounding box) was somehow lower precision than the newly recomputed position, and that seems to be correct. When I add debug code or compile in debug mode, some inlining doesn't happen, less optimization happens, and as a result the newly computed values get the same precision-loss that the stored bounding box boundary has.</p> <p>In the end, I resolved the problem by using a <code>volatile</code> variable. The newly recomputed value gets written into the <code>volatile</code> variable, then read back out, thus forcing the compiler to compare a precision-reduced stored version with another precision-reduced stored version. That seems to work.</p> <p>However, I really have no idea whether that <em>should</em> work - it was just an idea and I know how sensitive things like that can be to compiler-writers interpretation of technicalities in the standard. I don't know precisely what relevant guarantees the standard makes, and I don't know if there's a conventional solution to this issue. I'm not even sure what inspired me to try <code>volatile</code> which IIRC I've not used before since I was working in pure C on embedded systems around 1996-ish.</p> <p>Of course I <em>could</em> compute the set of position vectors once and store them ready for both the bounding rectangle and the filtering, but I'm curious about this specific issue. As you may guess, I don't do much floating point work, so I'm not that familiar with the issues.</p> <p>So - is this use of <code>volatile</code> correct? Is it, as I suspect, unnecessarily inefficient? 
Is there an idiomatic way to force extra precision (beyond the limits of the type) to be discarded?</p> <p>Also, is there a reason why comparing for exact equality doesn't force the precision of the type first? Is comparing for exact equality with the extra precision (presumably unreliably) preserved ever a useful thing to do?</p> <p>[<strong>EDIT</strong> - following Hans Passants first comment, I've given some more thought to what I said about inlining above. Clearly, two different calls to the same code can be optimized differently due to different inlining decisions. That's not just at one level - it can happen at any depth of inlining functions into a piece of code. That means the precision reduction could happen pretty much anywhere, which really means that even when the same <strong>source code</strong> is used with the same inputs it can give different results.</p> <p>The FPU is deterministic, so presumably any particular target code is deterministic, but two different calls to the same function may not use the same target code for that function. Now I'm feeling a bit of an idiot, not seeing the implications of what I already figured out - oh well. If there's no better answer in the next few days, I'll add one myself.]</p>
2014-12-22 23:59:37.170000+00:00
2014-12-24 03:50:12.403000+00:00
2014-12-23 00:53:52.793000+00:00
c++|floating-point
['http://arxiv.org/abs/cs/0701192', 'https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55544']
2
46,510,786
<h2>Deconvolution explained</h2> <p>First of all, deconvolution <em>is</em> a convolutional layer, only used for a different purpose, namely <em>upsampling</em> (why it's useful is explained in <a href="http://arxiv.org/abs/1411.4038" rel="nofollow noreferrer">this paper</a>). </p> <p>For example, here a <code>2x2</code> input image (bottom image in blue) is upsampled to <code>4x4</code> (top image in green):</p> <p><a href="https://i.stack.imgur.com/YyCu2.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YyCu2.gif" alt="deconvolution"></a></p> <p>To make it a valid convolution, the input is first padded to make it <code>6x6</code>, after which a <code>3x3</code> filter is applied without striding. Just like in an ordinary convolutional layer, you can choose <a href="https://github.com/vdumoulin/conv_arithmetic" rel="nofollow noreferrer">different padding/striding strategies</a> to produce the image size you want.</p> <h2>Backward pass</h2> <p>Now it should be clear that the backward pass for deconvolution is a special case of the backward pass for a convolutional layer, with particular stride and padding. I think you've done it already, but here's a naive (and not very efficient) implementation for any stride and padding:</p> <pre><code># input: x, w, b, stride, pad, d_out # output: dx, dw, db &lt;- gradients with respect to x, w, and b N, C, H, W = x.shape F, C, HH, WW = w.shape N, C, H_out, W_out = d_out.shape x_pad = np.pad(x, pad_width=((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant', constant_values=0) db = np.sum(d_out, axis=(0, 2, 3)) dw = np.zeros_like(w) dx = np.zeros_like(x_pad) for n in xrange(N): for f in xrange(F): filter_w = w[f, :, :, :] for out_i in xrange(H_out): for out_j in xrange(W_out): i = out_i * stride j = out_j * stride dw[f, :, :, :] += d_out[n, f, out_i, out_j] * x_pad[n, :, i:i+HH, j:j+WW] dx[n, :, i:i+HH, j:j+WW] += filter_w * d_out[n, f, out_i, out_j] dx = dx[:, :, pad:pad+H, pad:pad+W] </code></pre> <p>The same can be done more efficiently using <code>im2col</code> and <code>col2im</code>, but it's just an implementation detail. Another funny fact: the backward pass for a convolution operation (for both the data and the weights) is again a convolution, but with spatially-flipped filters.</p> <p>Here's how it's applied (plain simple SGD):</p> <pre><code># backward_msg is the message from the next layer, usually ReLu # conv_cache holds (x, w, b, conv_params), i.e. the info from the forward pass backward_msg, dW, db = conv_backward(backward_msg, conv_cache) w = w - learning_rate * dW b = b - learning_rate * db </code></pre> <p>As you can see, it's pretty straightforward, you just need to understand that you're applying the same old convolution.</p>
2017-10-01 08:57:56.320000+00:00
2017-10-01 08:57:56.320000+00:00
null
null
41,699,513
<p>I'm trying to develop a deconvolutional layer (or a transposed convolutional layer to be precise). </p> <p>In the forward pass, I do a full convolution (convolution with zero padding) In the backward pass, I do a valid convolution (convolution without padding) to pass the errors to the previous layer</p> <p>The gradients of the biases are easy to compute, simply a matter of averaging over the superfluous dimensions. </p> <p>The problem is I don't know how to update the weights of the convolutional filters. What are the gradients ? I'm sure it is a convolution operation but I don't see how. I tried a valid convolution of the inputs with the errors but to no avail. </p>
2017-01-17 14:20:24.900000+00:00
2017-10-01 08:57:56.320000+00:00
2017-02-21 10:24:45.673000+00:00
machine-learning|deep-learning|convolution|deconvolution
['http://arxiv.org/abs/1411.4038', 'https://i.stack.imgur.com/YyCu2.gif', 'https://github.com/vdumoulin/conv_arithmetic']
3
56,732,740
<h2>Why such design decision was made?</h2> <p>It is indeed a very interesting question. Let's see how this is described in Keras documentation:</p> <blockquote> <p>In the original version of Adadelta you don't have to set an initial learning rate. In this version, initial learning rate and decay factor can be set, as in most other Keras optimizers.</p> </blockquote> <p>So the documentations itself admits that this method doesn't need a learning rate. I believe this design decision was made because of some other templates, dependencies, or codes in the project.</p> <p>More specifically, the philosophy of keras is <strong>you can combine any building blocks you want</strong> (i.e. a unified API). If you remove the parameter <code>lr</code> from this, I believe you won't be able to use some of the <strong>callbacks</strong>.</p> <hr> <h2>Comparison</h2> <p>Now, let's compare the <a href="https://github.com/keras-team/keras/blob/master/keras/optimizers.py#L350" rel="noreferrer">Adadelta implementation</a> of Keras to the <a href="https://arxiv.org/pdf/1212.5701.pdf" rel="noreferrer">original paper</a>:</p> <ul> <li><p><a href="https://github.com/keras-team/keras/blob/master/keras/optimizers.py#L406" rel="noreferrer">Line 406:</a> here the gradients are accumulated into a moving average (<code>a</code> is the moving average, <code>rho</code> is decay rate as in the paper, <code>g</code> is computed gradients for parameter <code>p</code>):</p> <pre class="lang-python prettyprint-override"><code>new_a = self.rho * a + (1. - self.rho) * K.square(g) self.updates.append(K.update(a, new_a)) </code></pre> <p>This perfectly corresponds to the following line in the algorithm:</p> <p><a href="https://i.stack.imgur.com/Qyft7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qyft7.png" alt="enter image description here"></a></p></li> <li><p><a href="https://github.com/keras-team/keras/blob/master/keras/optimizers.py#L410" rel="noreferrer">Line 410:</a> delta calculation (here, <code>d_a</code> is delta accumulator, also in the form of moving average):</p> <pre class="lang-python prettyprint-override"><code>update = g * K.sqrt(d_a + self.epsilon) / K.sqrt(new_a + self.epsilon) </code></pre> <p>This perfectly corresponds to</p> <p><a href="https://i.stack.imgur.com/5Y8bU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5Y8bU.png" alt="enter image description here"></a></p></li> <li><p><a href="https://github.com/keras-team/keras/blob/master/keras/optimizers.py#L411" rel="noreferrer">Line 411:</a> now <strong>here is the tricky part.</strong> The code looks as follows:</p> <pre class="lang-python prettyprint-override"><code>new_p = p - lr * update </code></pre> <p>Which doesn't follows the original algorithm in the paper:</p> <p><a href="https://i.stack.imgur.com/Rkpo8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Rkpo8.png" alt="enter image description here"></a></p> <p>Furthermore, such learning rate admits changes through the learning rate decay parameter. However, the default value of <code>lr</code> in Keras is <code>1.0</code>, and <code>decay</code> is <code>0.0</code> so by default it shouldn't affect the outcome.</p></li> </ul>
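<p>To make the last point concrete, here is a minimal NumPy sketch of a single Adadelta update as written in the paper, with the extra <code>lr</code> factor placed where Keras applies it (the variable names are mine, not Keras internals):</p> <pre><code>import numpy as np

def adadelta_step(p, g, acc_grad, acc_update, rho=0.95, eps=1e-7, lr=1.0):
    # E[g^2]_t = rho * E[g^2]_{t-1} + (1 - rho) * g_t^2
    acc_grad = rho * acc_grad + (1.0 - rho) * g ** 2
    # delta_x_t = RMS[delta_x]_{t-1} / RMS[g]_t * g_t
    update = g * np.sqrt(acc_update + eps) / np.sqrt(acc_grad + eps)
    # E[delta_x^2]_t = rho * E[delta_x^2]_{t-1} + (1 - rho) * delta_x_t^2
    acc_update = rho * acc_update + (1.0 - rho) * update ** 2
    # paper: p = p - update      Keras: p = p - lr * update   (lr defaults to 1.0)
    p = p - lr * update
    return p, acc_grad, acc_update
</code></pre> <p>With <code>lr=1.0</code> and <code>decay=0.0</code> this reduces to the published algorithm, which is why the defaults are harmless; setting <code>lr</code> to anything else simply rescales every step.</p>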
2019-06-24 08:36:30.630000+00:00
2019-06-24 09:06:28.417000+00:00
2019-06-24 09:06:28.417000+00:00
null
56,730,888
<p>In Keras, there is an Adadelta optimiser for SGD as follows:</p> <pre class="lang-py prettyprint-override"><code>optimizer = optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=None, decay=0.0) </code></pre> <p>Here is the doc.: <a href="https://keras.io/optimizers/#adadelta" rel="nofollow noreferrer">https://keras.io/optimizers/#adadelta</a> But as we know, Adadelta does not use any learning rate. So what is lr for?</p>
2019-06-24 06:26:13.070000+00:00
2019-07-03 11:32:39.073000+00:00
2019-07-03 11:32:39.073000+00:00
python|keras|deep-learning
['https://github.com/keras-team/keras/blob/master/keras/optimizers.py#L350', 'https://arxiv.org/pdf/1212.5701.pdf', 'https://github.com/keras-team/keras/blob/master/keras/optimizers.py#L406', 'https://i.stack.imgur.com/Qyft7.png', 'https://github.com/keras-team/keras/blob/master/keras/optimizers.py#L410', 'https://i.stack.imgur.com/5Y8bU.png', 'https://github.com/keras-team/keras/blob/master/keras/optimizers.py#L411', 'https://i.stack.imgur.com/Rkpo8.png']
8
48,614,857
<p>I think learning (generalizing) XOR and memorizing XOR are different things. </p> <p>A two-layer perceptron can memorize XOR, as you have seen; that is, there exists a combination of weights where the loss is at its minimum and equal to 0 (the absolute minimum). </p> <p>If the weights are randomly initialized, you might end up in a situation where you have actually learned XOR and not only memorized it. </p> <p>Note that multi-layer perceptrons are non-convex functions, so there can be multiple minima (even multiple global minima). When the data is missing one input, there are multiple minima (all equal in value), and some of them are minima where the missing point would be correctly classified. Hence, an MLP can learn XOR (though finding that weight combination might be hard with a missing point). </p> <p>It is quite often argued that neural networks are universal function approximators and can even fit nonsense labels. In that light, you might want to look at this work: <a href="https://arxiv.org/abs/1611.03530" rel="nofollow noreferrer">https://arxiv.org/abs/1611.03530</a></p>
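<p>If you want to see this empirically, you can rerun a two-layer network like yours on only three of the four points with many different random seeds and count how often the held-out point comes out right (this is a self-contained sketch of that experiment, not your exact script):</p> <pre><code>import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0]])   # train on three points only
Y = np.array([[0, 1, 1]]).T
x_test, y_test = np.array([1, 1]), 0     # the held-out point

hits, n_runs = 0, 50
for seed in range(n_runs):
    rng = np.random.RandomState(seed)
    W1 = rng.uniform(-1, 1, (2, 5))
    W2 = rng.uniform(-1, 1, (5, 1))
    for _ in range(5000):
        h = sigmoid(X.dot(W1))
        out = sigmoid(h.dot(W2))
        d_out = (Y - out) * out * (1 - out)
        d_h = d_out.dot(W2.T) * h * (1 - h)
        W2 += h.T.dot(d_out)
        W1 += X.T.dot(d_h)
    pred = sigmoid(sigmoid(x_test.dot(W1)).dot(W2))[0]
    hits += int((pred &gt; 0.5) == bool(y_test))

print(hits, 'of', n_runs, 'random initializations classify the held-out point correctly')
</code></pre> <p>Depending on the seed, the network can land in a minimum that classifies [1, 1] either way, which is the multiple-equal-minima picture described above.</p>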
2018-02-05 01:46:36.663000+00:00
2018-02-05 02:03:01.250000+00:00
2018-02-05 02:03:01.250000+00:00
null
48,614,723
<p>The XOR problem is known to be solved by the multi-layer perceptron given all 4 boolean inputs and outputs, it trains and memorizes the weights needed to reproduce the I/O. E.g.</p> <pre><code>import numpy as np np.random.seed(0) def sigmoid(x): # Returns values that sums to one. return 1 / (1 + np.exp(-x)) def sigmoid_derivative(sx): # See https://math.stackexchange.com/a/1225116 return sx * (1 - sx) # Cost functions. def cost(predicted, truth): return truth - predicted xor_input = np.array([[0,0], [0,1], [1,0], [1,1]]) xor_output = np.array([[0,1,1,0]]).T X = xor_input Y = xor_output # Define the shape of the weight vector. num_data, input_dim = X.shape # Lets set the dimensions for the intermediate layer. hidden_dim = 5 # Initialize weights between the input layers and the hidden layer. W1 = np.random.random((input_dim, hidden_dim)) # Define the shape of the output vector. output_dim = len(Y.T) # Initialize weights between the hidden layers and the output layer. W2 = np.random.random((hidden_dim, output_dim)) num_epochs = 10000 learning_rate = 1.0 for epoch_n in range(num_epochs): layer0 = X # Forward propagation. # Inside the perceptron, Step 2. layer1 = sigmoid(np.dot(layer0, W1)) layer2 = sigmoid(np.dot(layer1, W2)) # Back propagation (Y -&gt; layer2) # How much did we miss in the predictions? layer2_error = cost(layer2, Y) # In what direction is the target value? # Were we really close? If so, don't change too much. layer2_delta = layer2_error * sigmoid_derivative(layer2) # Back propagation (layer2 -&gt; layer1) # How much did each layer1 value contribute to the layer2 error (according to the weights)? layer1_error = np.dot(layer2_delta, W2.T) layer1_delta = layer1_error * sigmoid_derivative(layer1) # update weights W2 += learning_rate * np.dot(layer1.T, layer2_delta) W1 += learning_rate * np.dot(layer0.T, layer1_delta) </code></pre> <p>We see that we've fully trained the network to memorize the outputs for XOR:</p> <pre><code># On the training data [int(prediction &gt; 0.5) for prediction in layer2] </code></pre> <p>[out]:</p> <pre><code>[0, 1, 1, 0] </code></pre> <p>If we re-feed the same inputs, we get the same output:</p> <pre><code>for x, y in zip(X, Y): layer1_prediction = sigmoid(np.dot(W1.T, x)) # Feed the unseen input into trained W. prediction = layer2_prediction = sigmoid(np.dot(W2.T, layer1_prediction)) # Feed the unseen input into trained W. print(int(prediction &gt; 0.5), y) </code></pre> <p>[out]:</p> <pre><code>0 [0] 1 [1] 1 [1] 0 [0] </code></pre> <p>But if we retrain the parameters (W1 and W2) without one of the data points, i.e.</p> <pre><code>xor_input = np.array([[0,0], [0,1], [1,0], [1,1]]) xor_output = np.array([[0,1,1,0]]).T </code></pre> <h1>Let's drop the last row of data and use that as unseen test.</h1> <pre><code>X = xor_input[:-1] Y = xor_output[:-1] </code></pre> <p>And with the rest of the same code, regardless of how I change the hyperparameters, it's un-able to learn the XOR function and reproduce the I/O. </p> <pre><code>for x, y in zip(xor_input, xor_output): layer1_prediction = sigmoid(np.dot(W1.T, x)) # Feed the unseen input into trained W. prediction = layer2_prediction = sigmoid(np.dot(W2.T, layer1_prediction)) # Feed the unseen input into trained W. 
print(int(prediction &gt; 0.5), y) </code></pre> <p>[out]:</p> <pre><code>0 [0] 1 [1] 1 [1] 1 [0] </code></pre> <h1>Even if we shuffle the in-/output:</h1> <pre><code># Shuffle the order of the inputs _temp = list(zip(X, Y)) random.shuffle(_temp) xor_input_shuff, xor_output_shuff = map(np.array, zip(*_temp)) </code></pre> <p>We can't train the XOR function fully:'</p> <pre><code>for x, y in zip(xor_input, xor_output): layer1_prediction = sigmoid(np.dot(W1.T, x)) # Feed the unseen input into trained W. prediction = layer2_prediction = sigmoid(np.dot(W2.T, layer1_prediction)) # Feed the unseen input into trained W. print(x, int(prediction &gt; 0.5), y) </code></pre> <p>[out]:</p> <pre><code>[0 0] 1 [0] [0 1] 1 [1] [1 0] 1 [1] [1 1] 0 [0] </code></pre> <p>So when the literature states that the multi-layered perceptron (Aka the basic deep learning) solves XOR, <strong>does it mean that it can fully learn and memorize the weights given the fully set of in-/outputs but cannot generalize the XOR problem given that one of data point is missing?</strong></p> <p>Here's the link of the Kaggle dataset that answerers can test the network for themselves: <a href="https://www.kaggle.com/alvations/xor-with-mlp/" rel="nofollow noreferrer">https://www.kaggle.com/alvations/xor-with-mlp/</a></p>
2018-02-05 01:23:55.887000+00:00
2018-02-05 02:34:13.393000+00:00
2018-02-05 02:34:13.393000+00:00
numpy|neural-network|deep-learning|xor|perceptron
['https://arxiv.org/abs/1611.03530']
1
44,400,083
<p>Kullback-Leibler divergence is a measure of similarity between two probability distributions. The KL divergence implemented in Keras assumes two discrete probability distributions (hence the sum).</p> <p>The exact format of your KL loss function depends on the underlying probability distributions. A common usecase is that the neural network models the parameters of a probability distribution P (eg a Gaussian), and the KL divergence is then used in the loss function to determine the similarity between the modelled distribution and some other, known distribution (potentially Gaussian as well). E.g. a network outputs two vectors mu and sigma^2. Mu forms the mean of a Gaussian distribution P while sigma^2 is the diagonal of the covariance matrix Sigma. A possible loss function is then the KL divergence between the Gaussian P described by mu and Sigma, and a unit Gaussian N(0, I). The exact format of the KL divergence in that case can be derived analytically, yielding a <em>custom</em> keras loss function that is not at all equal to the KL divergence implemented in Keras.</p> <p>In the original paper that introduces Variational Auto-Encoders, the loss function is summed over the samples in the minibatch and then multiplied by a factor (N/M), where N is the size of the entire dataset, and M is the size of the minibatch. See equations 8 and 10 in <a href="https://arxiv.org/abs/1312.6114" rel="nofollow noreferrer">https://arxiv.org/abs/1312.6114</a>.</p>
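<p>For reference, a minimal version of that analytically derived (VAE-style) loss term as a Keras backend expression could look like the following; <code>mu</code> and <code>log_var</code> are assumed to be the two encoder outputs, and the names are mine rather than anything Keras provides:</p> <pre><code>from keras import backend as K

def kl_vs_unit_gaussian(mu, log_var):
    # closed form of KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over
    # the latent dimensions, giving one value per sample in the batch
    return -0.5 * K.sum(1.0 + log_var - K.square(mu) - K.exp(log_var), axis=-1)
</code></pre> <p>Note how different this is from the generic <code>kullback_leibler_divergence</code> shown in the question: there is no <code>y_true</code> at all, because the reference distribution N(0, I) is fixed, and the aggregation over samples (with the N/M factor) is what gives the minibatch estimate from equations 8 and 10 referenced above.</p>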
2017-06-06 21:25:30.390000+00:00
2017-06-06 21:25:30.390000+00:00
null
null
44,376,691
<p>I'm a little confused about how the KL divergence is applied, specifically in Keras, but I think the question is general to deep learning applications. In keras, the KL loss function is defined like this:</p> <pre><code>def kullback_leibler_divergence(y_true, y_pred): y_true = K.clip(y_true, K.epsilon(), 1) y_pred = K.clip(y_pred, K.epsilon(), 1) return K.sum(y_true * K.log(y_true / y_pred), axis=-1) </code></pre> <p>In my model, <code>y_true</code> and <code>y_pred</code> are matrices; each row of <code>y_true</code> a one-hot encoding for one training example, and each row of <code>y_pred</code> the output of the model (a probability distribution) for that example. </p> <p>I can run this KL divergence calculation on any given pair of rows from <code>y_true</code> and <code>y_pred</code> and get the expected result. The mean of these KL divergence results over the rows matches the loss reported by Keras in the training history. But that aggregation - running KL divergence on each row and taking the mean - doesn't happen within loss function. In contrast, I understand MAE or MSE to aggregate across the examples: </p> <pre><code>def mean_squared_error(y_true, y_pred): return K.mean(K.square(y_pred - y_true), axis=-1) </code></pre> <p>For the KL divergence, it's not totally obvious to me that taking the mean across the examples is the right thing to do. I guess the idea is that the examples are random samples from the true distribution, so they should appear in proportion to their probability. But that seems to make a pretty strong assumption about how the training data was collected. I haven't really seen this aspect (aggregating across samples from a dataset) addressed in the online treatments of the KL divergence; I just see a lot of redefinition of the basic formula.</p> <p>So my questions are: </p> <ol> <li><p>Is this interpretation of what Keras is doing to come up with the KL divergence loss (i.e. averaging over the KL divergence of the rows) correct?</p></li> <li><p>Why is this the right thing to do?</p></li> <li><p>From an implementation perspective, why doesn't the definition of the loss function in Keras do the aggregation over the rows the way MAE or MSE does?</p></li> </ol>
2017-06-05 19:55:17.260000+00:00
2019-10-29 18:00:44.487000+00:00
2019-10-29 18:00:44.487000+00:00
tensorflow|machine-learning|keras|deep-learning
['https://arxiv.org/abs/1312.6114']
1
58,362,455
<p>Firstly, you read the whole large file at once, so it needs to fit into your RAM. If you do not have a really good computer, this might be the first performance bottleneck. Read each line separately, or use a buffered IO approach. What CPU do you have? If you have enough cores, you can get a lot of extra performance by parallelizing the program with an async Pool from the multiprocessing library, because then you really use the full power of all cores (choose the number of processes according to the number of available threads; with this method, I reduced a model on 2500 data sets from ~5 minutes to ~17 seconds on 12 threads). You would have to implement the processes so that each returns a dict, and merge them after the processes have finished.</p> <p>Otherwise, there are machine learning approaches for text summarization (sequence-to-sequence RNNs). With a TensorFlow implementation, you can use a dedicated GPU on your local machine (even a decent 10xx or a 2060 from Nvidia will help) to speed up your model. </p> <p><a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow noreferrer">https://docs.python.org/2/library/multiprocessing.html</a> <a href="https://arxiv.org/abs/1602.06023" rel="nofollow noreferrer">https://arxiv.org/abs/1602.06023</a></p> <p>Hope this helps.</p>
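<p>Edit: here is a rough sketch of the Pool idea applied to the sentence-scoring loop, assuming Python 3; <code>sentences</code> and <code>freq</code> stand in for the <code>sent_tokenize</code> output and the word frequencies from your script, and the toy values below are only placeholders:</p> <pre><code>from functools import partial
from multiprocessing import Pool

def score_sentence(freq, indexed_sentence):
    i, sent = indexed_sentence
    # in the real script this would tokenize with nltk.word_tokenize(sent.lower())
    return i, sum(freq.get(w, 0) for w in sent.lower().split())

if __name__ == '__main__':
    # stand-ins for the sent_tokenize() output and the FreqDist from your script
    sentences = ['billing was wrong again', 'the agent was helpful', 'billing billing billing']
    freq = {'billing': 4, 'agent': 1, 'helpful': 1}

    with Pool(processes=4) as pool:
        scores = pool.map(partial(score_sentence, freq), enumerate(sentences))

    ranking = dict(scores)
    print(ranking)   # {0: 4, 1: 2, 2: 12}
</code></pre> <p>Each worker returns (index, score) pairs and the merge back into a single dict happens in the parent process, which is the pattern described above.</p>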
2019-10-13 09:55:31.417000+00:00
2019-10-13 09:55:31.417000+00:00
null
null
58,361,290
<p>I am creating a text summarizer and using a basic model to work with using Bag of words approach.<br> the code i am performing is using the nltk library. the file read is a large file with over 2500000 words. below is the loop i am working on with but this takes over 2 hours to run and complete. is there a way to optimize this code </p> <pre><code>f= open('Complaints.csv', 'r') raw = f.read() len(raw) tokens = nltk.word_tokenize(raw) len(tokens) freq = nltk.FreqDist(text) top_words = [] # blank dictionary top_words = freq.most_common(100) print(top_words) sentences = sent_tokenize(raw) print(raw) ranking = defaultdict(int) for i, sent in enumerate(raw): for word in word_tokenize(sent.lower()): if word in freq: ranking[i]+=freq[word] top_sentences = nlargest(10, ranking, ranking.get) print(top_sentences) </code></pre> <p>This is only one one file and the actual deployment has more than 10-15 files of similar size. How we can improve this.<br> Please note these are the text from a chat bot and are actual sentences hence there was no requirement to remove whitespaces, stemming and other text pre processing methods</p>
2019-10-13 07:00:39.540000+00:00
2019-10-13 09:55:31.417000+00:00
null
python|nlp|nltk
['https://docs.python.org/2/library/multiprocessing.html', 'https://arxiv.org/abs/1602.06023']
2
36,141,976
<p>Not true. I studied the problem starting in June 1975, and in March 1977 I discovered 8 general Rules of Inference for synthesizing computer programs. The algorithm continues to evolve. See <a href="http://arxiv.org/abs/1501.01363" rel="nofollow">http://arxiv.org/abs/1501.01363</a>. Nobody will publish it because computer science academic publishing is an old boys' club that only accepts papers from college professors. Look at any academic journal and you'll see that every paper is by a professor - or a researcher at a rich company or university, e.g. SRI.</p>
2016-03-21 21:29:50.310000+00:00
2016-03-21 21:29:50.310000+00:00
null
null
30,493,312
<p>I think my title is quite succinct. Is there an AI, Machine, or Automated Theorem Prover (ATP) that builds source code from input? A very simple idea of what I'm getting at is this "Hey AI/Machine/ATP, please build a 'hello world' source code."</p> <ol> <li>If not source code, what about creating output for LLVM IR, Java Bytecode, or MSIL?</li> </ol>
2015-05-27 21:17:49.053000+00:00
2016-03-21 21:29:50.310000+00:00
null
artificial-intelligence
['http://arxiv.org/abs/1501.01363']
1
65,620,999
<p>The other two answers are good. Another option is to use more recent packages that are purpose-built for high-dimensional / high-volume data sets. They run their code in lower-level languages (C++ and/or Java) and in certain cases use parallelization.</p> <p>I'd recommend taking a look at these three:</p> <ol> <li>ranger (C++ implementation)</li> <li>randomForestSRC (C++ implementation)</li> <li>h2o (Java implementation - needs Java version 8 or higher)</li> </ol> <p>Also, here is some additional reading to help you decide which package to choose: <a href="https://arxiv.org/pdf/1508.04409.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1508.04409.pdf</a></p> <p><strong>Page 8 shows benchmarks of ranger against randomForest as the data size grows - ranger is far faster because its runtime grows roughly linearly, rather than non-linearly as randomForest's does, with rising tree/sample/split/feature counts.</strong></p> <p>Good luck!</p>
2021-01-07 22:44:37.750000+00:00
2021-01-07 22:44:37.750000+00:00
null
null
37,190,135
<p>I'm trying to train several random forests (for regression) to have them compete and see which feature selection and which parameters give the best model.</p> <p>However the trainings seem to take an insane amount of time, and I'm wondering if I'm doing something wrong.</p> <p>The dataset I'm using for training (called <code>train</code> below) has 217k lines, and 58 columns (of which only 21 serve as predictors in the random forest. They're all <code>numeric</code> or <code>integer</code>, with the exception of a boolean one, which is of class <code>character</code>. The <code>y</code> output is <code>numeric</code>). </p> <p>I ran the following code four times, giving the values <code>4</code>, <code>100</code>, <code>500</code>, <code>2000</code> to <code>nb_trees</code> :</p> <pre><code>library("randomForest") nb_trees &lt;- #this changes with each test, see above ptm &lt;- proc.time() fit &lt;- randomForest(y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 + x11 + x12 + x13 + x14 + x15 + x16 + x17 + x18 + x19 + x20 + x21, data = train, ntree = nb_trees, do.trace=TRUE) proc.time() - ptm </code></pre> <p>Here is how long each of them took to train :</p> <pre><code>nb_trees | time 4 4mn 100 1h 41mn 500 8h 40mn 2000 34h 26mn </code></pre> <p>As my company's server has 12 cores and 125Go of RAM, I figured I could try to parallelize the training, following <a href="https://stackoverflow.com/a/7831848/4348534">this answer</a> (however, I used the <code>doParallel</code> package because it seemed to be running forever with <code>doSNOW</code>, I don't know why. And I can't find where I saw that <code>doParallel</code> would work too, sorry).</p> <pre><code>library("randomForest") library("foreach") library("doParallel") nb_trees &lt;- #this changes with each test, see table below nb_cores &lt;- #this changes with each test, see table below cl &lt;- makeCluster(nb_cores) registerDoParallel(cl) ptm &lt;- proc.time() fit &lt;- foreach(ntree = rep(nb_trees, nb_cores), .combine = combine, .packages = "randomForest") %dopar% { randomForest(y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 + x11 + x12 + x13 + x14 + x15 + x16 + x17 + x18 + x19 + x20 + x21, data = train, ntree = ntree, do.trace=TRUE)} proc.time() - ptm stopCluster(cl) </code></pre> <p>When I run it, it takes a shorter time than non-parallelized code :</p> <pre><code>nb_trees | nb_cores | total number of trees | time 1 4 4 2mn13s 10 10 100 52mn 9 12 108 (closest to 100 with 12 cores) 59mn 42 12 504 (closest to 500 with 12 cores) I won't be running this one 167 12 2004 (closest to 2000 with 12 cores) I'll run it next week-end </code></pre> <p>However, I think it's still taking a lot of time, isn't it ? I'm aware it takes time to combine the trees into the final forest, so I didn't expect it to be 12 times faster with 12 cores, but it's only ~2 times faster... </p> <ul> <li>Is this normal ? </li> <li>If it isn't, is there anything I can do with my data and/or my code to radically decrease the running time ? 
</li> <li>If not, should I tell the guy in charge of the server that it should be much faster ?</li> </ul> <p>Thanks for your answers.</p> <p>Notes :</p> <ul> <li>I'm the only one using this server</li> <li>for my next tests, I'll get rid of the columns that are not used in the random forest</li> <li>I realized quite late that I could improve the running time by calling <code>randomForest(predictors,decision)</code> instead of <code>randomForest(decision~.,data=input)</code>, and I'll be doing it from now on, but I think my questions above still holds.</li> </ul>
2016-05-12 14:36:27.750000+00:00
2021-01-07 22:44:37.750000+00:00
2017-05-23 12:23:56.677000+00:00
r|parallel-processing|random-forest|doparallel|parallel-foreach
['https://arxiv.org/pdf/1508.04409.pdf']
1
36,913,584
<blockquote> <p>Locality-sensitive hashing (LSH) reduces the dimensionality of high-dimensional data. LSH hashes input items so that similar items map to the same “buckets” with high probability:</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Locality-sensitive_hashing" rel="nofollow">https://en.wikipedia.org/wiki/Locality-sensitive_hashing</a></p> <p>Also see: <a href="https://en.wikipedia.org/wiki/Perceptual_hashing" rel="nofollow">https://en.wikipedia.org/wiki/Perceptual_hashing</a></p> <p>Here is a nice example of perceptual hashing on DNA sequences: </p> <p><a href="http://arxiv.org/pdf/1412.5517.pdf" rel="nofollow">http://arxiv.org/pdf/1412.5517.pdf</a></p>
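<p>If you just want to see the idea in a few lines, here is a toy character-trigram SimHash in Python (an illustration of the principle only, not a library you should rely on):</p> <pre><code>import hashlib

def simhash(text, bits=64):
    # every character trigram votes +1/-1 on each bit position
    counts = [0] * bits
    for i in range(len(text) - 2):
        h = int.from_bytes(hashlib.md5(text[i:i + 3].encode()).digest()[:8], 'big')
        for b in range(bits):
            counts[b] += 1 if (h &gt;&gt; b) &amp; 1 else -1
    return sum(1 &lt;&lt; b for b in range(bits) if counts[b] &gt; 0)

def hamming(a, b):
    return bin(a ^ b).count('1')

a = simhash('The quick brown fox jumps over the lazy dog')
b = simhash('The quick brown fox jumps over the lazy dog!')
c = simhash('An entirely unrelated piece of text')
print(hamming(a, b), hamming(a, c))   # the first distance should be much smaller
</code></pre> <p>Similar inputs share most of their trigrams, so most bit votes agree and the hashes differ in only a few positions, while unrelated inputs disagree on roughly half the bits.</p>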
2016-04-28 11:40:51.997000+00:00
2016-04-28 11:40:51.997000+00:00
null
null
1,687,047
<p>Is there a hash function where small changes in the input result in small changes in the output? For example, something like:</p> <pre><code>hash("Foo") =&gt; 9e107d9d372bb6826bd81d3542a419d6 hash("Foo!") =&gt; 9e107d9d372bb6826bd81d3542a419d7 &lt;- note small difference </code></pre>
2009-11-06 11:35:29.257000+00:00
2016-04-28 11:40:51.997000+00:00
2012-09-05 15:55:15.097000+00:00
algorithm|hash|hashcode|simhash
['https://en.wikipedia.org/wiki/Locality-sensitive_hashing', 'https://en.wikipedia.org/wiki/Perceptual_hashing', 'http://arxiv.org/pdf/1412.5517.pdf']
3
61,603,110
<p>Before I get to answers, I would like to point out that when you have a program that uses a large set of numbers you should always use <code>numpy.array</code> from <a href="https://numpy.org/" rel="noreferrer">numpy library</a> to store that kind of data. I don't know what version of Python, <a href="https://scikit-learn.org" rel="noreferrer">scikit-learn</a>, and <a href="https://www.scipy.org/" rel="noreferrer">SciPy</a> are you using, but I am using Python 3.7.3, scikit-learn 0.21.3, and SciPy 1.3.0. When I ran your code to compare build-times, I got <code>AttributeError: 'list' object has no attribute 'size'</code>. This error is saying that list <code>listOfRandom2DPoints</code> has no attribute <code>size</code>. The problem is that <code>sklearn.neighbors.KDTree</code> expects <code>numpy.array</code> which has attribute <code>size</code>. Class <code>scipy.spatial.KDTree</code> works with Python lists but as you can see in the <a href="https://github.com/scipy/scipy/blob/v1.4.1/scipy/spatial/kdtree.py#L241-L942" rel="noreferrer">source code of <code>__init__</code> method of class <code>scipy.spatial.KDTree</code></a>, first line is <code>self.data = np.asarray(data)</code>, which means that data will be converted to <code>numpy.array</code>.</p> <p>Because of this, I cahanged your lines:</p> <pre><code>from random import randint listOfRandom2DPoints = [ [randint(0,dim),randint(0,dim)] for x in range(length)] </code></pre> <p>to:</p> <pre><code>import numpy as np ListOfRandom2DPoints = np.random.randint(0, dim, size=(length, 2)) </code></pre> <p>(This change doesn't affect speed comparisons because change is made in setup code.)</p> <p>Now answers on your questions:</p> <ol> <li><p>Like you said scikit-learn seems to beet SciPy for the build time. 
The reason why this happens isn't that scikit-learn has a faster algorithm, but <code>sklearn.neighbors.KDTree</code> is implemented in <a href="https://cython.org/" rel="noreferrer">Cython</a> (<a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/neighbors/_kd_tree.pyx" rel="noreferrer">link to source code</a>), and <code>scipy.spatial.KDTree</code> is written in pure Python code (<a href="https://github.com/scipy/scipy/blob/v1.4.1/scipy/spatial/kdtree.py#L185-L942" rel="noreferrer">link to source code</a>).</p> <p>(If you don't know what is Cython, an oversimplified explanation would be that Cython makes possible writing C code in Python and main reason for doing that is that C is much faster than Python)</p> <p>SciPy library also has implementation in Cython <code>scipy.spatial.cKDTree</code> (<a href="https://github.com/scipy/scipy/blob/v1.4.1/scipy/spatial/ckdtree.pyx#L401-L1525" rel="noreferrer">link to source code</a>), it works the same as <code>scipy.spatial.KDTree</code> and if you compare build times of <code>sklearn.neighbors.KDTree</code> and <code>scipy.spatial.cKDTree</code>:</p> <pre><code>timeit.timeit('scipy.spatial.cKDTree(npListOfRandom2DPoints, leafsize=20)', setup=setup, number=nTimes) timeit.timeit('sklearn.neighbors.KDTree(npListOfRandom2DPoints, leaf_size=20)', setup=setup, number=nTimes) </code></pre> <p>Build times are very similar, and when I ran the code, <code>scipy.spatial.cKDTree</code> was a little bit (around 20%) faster.</p> <p>With query times situation is very similar, <code>scipy.spatial.KDTree</code> (pure Python implementation) is about ten times slower than <code>sklearn.neighbors.KDTree</code> (Cython implementation) and <code>scipy.spatial.cKDTree</code> (Cython implementation) is aproximatly as fast as <code>sklearn.neighbors.KDTree</code>. I have tested query times up to N = 10000000, and got the same result as you. Query times stay the same regardless of N (meaning query time for <code>scipy.spatial.KDTree</code> is same for N = 1000 and N = 1000000, and the same thing for query times for<code>sklearn.neighbors.KDTree</code> and <code>scipy.spatial.cKDTree</code>). That is because query (search) time complexity is O(logN) and even for N = 1000000, logN is very small so the difference is too small to measure.</p></li> <li><p>Build algorithm of <code>sklearn.neighbors.KDTree</code> (<code>__init__</code> method of class) has time complexity of O(KNlogN) (<a href="https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbor-algorithms" rel="noreferrer">about scikit-learn Nearest Neighbor Algorithms</a>) so in your case it would be O(2NlogN) which is practically O(NlogN). Based on very similar build times of <code>sklearn.neighbors.KDTree</code> and <code>scipy.spatial.cKDTree</code> I assume that the build algorithm of <code>scipy.spatial.cKDTree</code> also has time complexity of O(NlogN). I am no expert on nearest neighbor search algorithms, but based on some online search, I would say that for low-dimensional nearest neighbor search algorithms this as fast as it can be. If you go to <a href="https://en.wikipedia.org/wiki/Nearest_neighbor_search" rel="noreferrer">nearest neighbor search Wikipedia page</a> you will see that there are <a href="https://en.wikipedia.org/wiki/Nearest_neighbor_search#Exact_methods" rel="noreferrer">exact methods</a> and <a href="https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximation_methods" rel="noreferrer">approximation methods</a>. 
<a href="https://en.wikipedia.org/wiki/K-d_tree" rel="noreferrer">k-d tree</a> is exact method, it is subtype of <a href="https://en.wikipedia.org/wiki/Nearest_neighbor_search#Space_partitioning" rel="noreferrer">space partitioning methods</a>. Of all space partitioning methods (only fast exact methods for nearest neighbor search based on Wikipedia page), k-d tree is the best method in the case of low-dimensional Euclidean space for nearest neighbor search in static context (there isn't a lot of insertions and deletions). Also if you look at approximation methods under <a href="https://en.wikipedia.org/wiki/Nearest_neighbor_search#Greedy_search_in_proximity_neighborhood_graphs" rel="noreferrer">greedy search in proximity neighborhood graphs</a> you will see "Proximity graph methods are considered the current state-of-the-art for the approximate nearest neighbors search." When you look at the research article that is cited for this method (<a href="https://arxiv.org/abs/1603.09320" rel="noreferrer">Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs</a>) you can see that this method has time complexity of O(NlogN). This means that for low-dimensional spaces k-d tree (exact method) is as fast as approximation methods. For now, we have compared build (construction) time complexity of structures that are used for nearest neighbor searches. All these algorithms have search (query) time complexity of O(logN). So best that we can get is build complexity of O(NlogN) and query complexity of O(logN) which is what we have in k-d tree method. So based on my research I would say that k-d tree is the best structure for low-dimensional nearest neighbor searches. </p> <p>(I think if there was a better (faster) method to do nearest neighbor search, than scikit-learn and SciPy would have implemented that method. Also from a theoretical standpoint, knowing that fastest sorting algorithms have time complexity of O(NlogN), it would be quite surprising to have nearest neighbor search build algorithm with time complexity less than O(NlogN).)</p></li> <li><p>Like I said you are comparing <code>sklearn.neighbors.KDTree</code> with Cython implementation and <code>scipy.spatial.KDTree</code> with pure Python implementation. In theory <code>sklearn.neighbors.KDTree</code> should be faster than <code>scipy.spatial.KDTree</code>, I compared these up to 1000000 and they seem to get closer at large N. For N = 100, <code>scipy.spatial.KDTree</code> is about 10 times slower than <code>sklearn.neighbors.KDTree</code> and for N = 1000000, <code>scipy.spatial.KDTree</code> is about twice as slow as <code>sklearn.neighbors.KDTree</code>. I am not sure why is this happening, but I suspect that for big N, memory becomes a bigger problem than the number of operations.</p> <p>I checked re-build time also up to 1000000 and it does increase linearly and that is because the duration of function <code>pickle.loads</code> is linearly proportional to the size of the loading object.</p></li> <li><p>For me, pickling of <code>sklearn.neighbors.KDTree</code>, <code>scipy.spatial.KDTree</code>, and <code>scipy.spatial.cKDTree</code> works so I can't reproduce your error. I am guessing that the problem is that you have an older version of SciPy so updating SciPy to the newest version should fix this problem.</p> <p>(If you need more help on this problem you should add some more info to your question. 
What are your Python and SciPy versions, exact code to reproduce this error, and full error message?)</p></li> </ol>
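<p>For completeness, this is the kind of pickle round-trip that works for me with the scikit-learn tree (a minimal sketch; adjust the sizes to your data and write the blob to a file opened in binary mode if you want it on disk):</p> <pre><code>import pickle
import numpy as np
from sklearn.neighbors import KDTree

points = np.random.randint(0, 1000, size=(100000, 2))
tree = KDTree(points, leaf_size=20)

blob = pickle.dumps(tree)        # serialize
restored = pickle.loads(blob)    # reload

dist, ind = restored.query(points[:1], k=10)
print(ind)
</code></pre> <p>If the equivalent round-trip with the SciPy trees fails on your machine, that again points back to point 4 above: updating SciPy is the first thing to try.</p>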
2020-05-04 23:07:46.620000+00:00
2020-05-06 02:08:08.517000+00:00
2020-05-06 02:08:08.517000+00:00
null
30,447,355
<p>I have a large set of 2-dimensional points and want to be able to rapidly query the set for the k-Nearest-Neighbours of any point in the 2-d space. Since it's low-dimensional, a KD-Tree seems like a good way to go about it. My initial data set will only be updated very rarely, so the time to query a point should be more important to me than the build-time. However, each time I run the program I will need to re-load the object, so I also need a structure that can be saved and reloaded swiftly.</p> <p>The two choices readily available are the KDTree structures in SciPy and in SciKit-learn. Below I profile the two of these for build-speed and query-speed across a large range of list lengths. I also pickle the SciKit-learn structure and show the time to re-load the object from the pickle. These are compared in a graph, and the code used to generate the timings is included below.</p> <p>As I show in the graph, loading from a pickle is faster than building it from scratch by half an order of magnitude for large N, showing that the KDTree is suitable for my use case (ie. frequent re-loads but infrequent re-builds).</p> <p><img src="https://i.stack.imgur.com/YFDJj.png" alt="Comparing build-, reload- and query-time of two KD-Tree structures"></p> <p>Code to compare build-times:</p> <pre><code># Profiling the building time for the two KD-tree structures and re-loading from a pickle import math, timeit, pickle, sklearn.neighbors the_lengths = [100, 1000, 10000, 100000, 1000000] theSciPyBuildTime = [] theSklBuildTime = [] theRebuildTime = [] for length in the_lengths: dim = 5*int(math.sqrt(length)) nTimes = 50 from random import randint listOfRandom2DPoints = [ [randint(0,dim),randint(0,dim)] for x in range(length)] setup = """import scipy.spatial import sklearn.neighbors length = """ + str(length) + """ dim = """ + str(dim) + """ from random import randint listOfRandom2DPoints = [ [randint(0,dim),randint(0,dim)] for x in range(length)]""" theSciPyBuildTime.append( timeit.timeit('scipy.spatial.KDTree(listOfRandom2DPoints, leafsize=20)', setup=setup, number=nTimes)/nTimes ) theSklBuildTime.append( timeit.timeit('sklearn.neighbors.KDTree(listOfRandom2DPoints, leaf_size=20)', setup=setup, number=nTimes)/nTimes ) theTreeSkl = sklearn.neighbors.KDTree(listOfRandom2DPoints, leaf_size=20, metric='euclidean') f = open('temp.pkl','w') temp = pickle.dumps(theTreeSkl) theRebuildTime.append( timeit.timeit('pickle.loads(temp)', 'from __main__ import pickle,temp', number=nTimes)/nTimes ) </code></pre> <p>Code to compare query-times:</p> <pre><code># Profiling the query time for the two KD-tree structures import scipy.spatial, sklearn.neighbors the_lengths = [100, 1000, 10000, 100000, 1000000, 10000000] theSciPyQueryTime = [] theSklQueryTime = [] for length in the_lengths: dim = 5*int(math.sqrt(length)) nTimes = 50 listOfRandom2DPoints = [ [randint(0,dim),randint(0,dim)] for x in range(length)] setup = """from __main__ import sciPiTree,sklTree from random import randint length = """ + str(length) + """ randPoint = [randint(0,""" + str(dim) + """),randint(0,""" + str(dim) + """)]""" sciPiTree = scipy.spatial.KDTree(listOfRandom2DPoints, leafsize=20) sklTree = sklearn.neighbors.KDTree(listOfRandom2DPoints, leaf_size=20) theSciPyQueryTime.append( timeit.timeit('sciPiTree.query(randPoint,10)', setup=setup, number=nTimes)/nTimes ) theSklQueryTime.append( timeit.timeit('sklTree.query(randPoint,10)', setup=setup, number=nTimes)/nTimes ) </code></pre> <p>&nbsp;</p> <p>Questions:</p> <ol> <li><p><strong>The 
Result</strong>: Although they're getting closer for very large N, SciKit-learn seems to beat SciPy for both build time and query time. Have other people found this?</p></li> <li><p><strong>The Maths</strong>: Are there any better structures available for this? I'm only working in a 2D space (although the data will be quite dense so brute force is out), is there a better structure for low-dimensional kNN searches?</p></li> <li><p><strong>The Speed</strong>: It looks like the build-time for the two approaches is getting closer at large N but my computer gave up on me - can anyone verify this for me for larger N?! Thanks!! Does re-build time continue to increase roughly linearly as well?</p></li> <li><p><strong>Practicalities</strong>: The SciPy KDTree won't pickle. As reported in <a href="https://stackoverflow.com/questions/5773216/saving-kdtree-object-in-python">this post</a>, I'm given the following error "PicklingError: Can't pickle : it's not found as scipy.spatial.kdtree.innernode" - I think this is due to it being a nested structure. According to an answer reported in <a href="https://stackoverflow.com/questions/1947904/how-can-i-pickle-a-nested-class-in-python">this post</a>, nested structures can be pickled with dill. However, dill gives me the same error - why is this?</p></li> </ol>
2015-05-25 23:40:05.690000+00:00
2020-05-06 02:08:08.517000+00:00
2017-05-23 12:09:52.140000+00:00
python|scipy|scikit-learn|nearest-neighbor|kdtree
['https://numpy.org/', 'https://scikit-learn.org', 'https://www.scipy.org/', 'https://github.com/scipy/scipy/blob/v1.4.1/scipy/spatial/kdtree.py#L241-L942', 'https://cython.org/', 'https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/neighbors/_kd_tree.pyx', 'https://github.com/scipy/scipy/blob/v1.4.1/scipy/spatial/kdtree.py#L185-L942', 'https://github.com/scipy/scipy/blob/v1.4.1/scipy/spatial/ckdtree.pyx#L401-L1525', 'https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbor-algorithms', 'https://en.wikipedia.org/wiki/Nearest_neighbor_search', 'https://en.wikipedia.org/wiki/Nearest_neighbor_search#Exact_methods', 'https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximation_methods', 'https://en.wikipedia.org/wiki/K-d_tree', 'https://en.wikipedia.org/wiki/Nearest_neighbor_search#Space_partitioning', 'https://en.wikipedia.org/wiki/Nearest_neighbor_search#Greedy_search_in_proximity_neighborhood_graphs', 'https://arxiv.org/abs/1603.09320']
16
73,056,304
<p>I put in the effort to dig through <a href="https://crypto.stanford.edu/%7Edabo/papers/RSA-survey.pdf" rel="nofollow noreferrer">Boneh's paper</a>. The &quot;algorithm&quot; for deriving <code>(p, q)</code> from <code>(n, d)</code> is buried at the end of §1.1, coded in maths jargon, and left as an exercise for the reader to render out of his (rather terse) <em>proof</em> that it's <em>efficient</em> to do so.</p> <blockquote> <p>Let 〈<i>N</i>, <i>e</i>〉 be an RSA public key. Given the private key <i>d</i>, one can efficiently factor the modulus <i>N</i> = <i>pq</i>.</p> <p><strong>Proof.</strong> Compute <i>k</i> = <i>de</i> − 1. By definition of <i>d</i> and <i>e</i> we know that <i>k</i> is a multiple of <i>φ</i>(<i>N</i>). Since <i>φ</i>(<i>N</i>) is even, <i>k</i> = 2<sup><i>t</i></sup><i>r</i> with <i>r</i> odd and <i>t</i> ≥ 1. We have <i>g</i><sup><i>k</i></sup> = 1 for every <i>g</i> ∈ ℤ<sub><i>N</i></sub><sup>×</sup>, and therefore <em>g</em><sup><em>k</em>/2</sup> is a square root of unity modulo <i>N</i>. By the Chinese Remainder Theorem, 1 has four square roots modulo <i>N</i> = <i>pq</i>. Two of these square roots are ±1. The other two are ±<i>x</i> where <i>x</i> satisfies <i>x</i> = 1 mod <i>p</i> and <i>x</i> = −1 mod <i>q</i>. Using either one of these last two square roots, the factorization of <i>N</i> is revealed by computing gcd(<i>x</i> − 1, <i>N</i>). A straightforward argument shows that if <i>g</i> is chosen at random from ℤ<sub><i>N</i></sub><sup>×</sup> then with probability at least 1/2 (over the choice of <i>g</i>) one of the elements in the sequence <i>g</i><sup><i>k</i>/2</sup>, <i>g</i><sup><i>k</i>/4</sup>, …, <i>g</i><sup><i>k</i>/2<sup><i>t</i></sup></sup> mod <i>N</i> is a square root of unity that reveals the factorization of <i>N</i>. All elements in the sequence can be efficiently computed in time <i>O</i>(<i>n</i><sup>3</sup>) where <i>n</i> = log<sub>2</sub>(<i>N</i>).</p> </blockquote> <p>Obviously, this is pretty close to meaningless for anyone who doesn't know what <a href="https://math.stackexchange.com/q/4496759/188128#comments-4496759"><code>$Z_N^\ast$</code></a> is, and has a pretty nonlinear structure that takes a good deal of time to twist into a linear algorithm.</p> <p>So here is the worked solution:</p> <pre class="lang-py prettyprint-override"><code>from random import randrange from math import gcd def ned_to_pqe(secret_key): &quot;&quot;&quot; https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf#:~:text=Given%20d%2C,reveals%20the%20factorization%20of%20N%2E &quot;&quot;&quot; n, e, d = secret_key k = d * e - 1 t = bit_scan1(k) trivial_sqrt1 = {1, n - 1} while True: g = randrange(2, n - 1) for j in range(1, t + 1): x = pow(g, k &gt;&gt; j, n) if pow(x, 2, n) == 1: if x in trivial_sqrt1: continue p = gcd(x - 1, n) q = n // p if q &gt; p: p, q = q, p return p, q, e def pqe_to_ned(secret_key): p, q, e = secret_key n = p * q l = (p - 1) * (q - 1) d = pow(e, -1, l) return n, e, d def bit_scan1(i): &quot;&quot;&quot; https://gmpy2.readthedocs.io/en/latest/mpz.html#mpz.bit_scan1 &quot;&quot;&quot; # https://stackoverflow.com/a/63552117/1874170 return (i &amp; -i).bit_length() - 1 def test(): secret_key = ( # https://en.wikipedia.org/wiki/RSA_numbers#RSA-100 # Should take upwards of an hour to factor on a consumer desktop ca. 
2022 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139, 65537, 1435319569480661473883310243084583371347212233430112391255270984679722445287591616684593449660400673 ) if secret_key != pqe_to_ned(ned_to_pqe(secret_key)): raise ValueError if __name__ == '__main__': test() print(&quot;Self-test OK&quot;) </code></pre> <p>Live demo (JS):</p> <p><div class="snippet" data-lang="js" data-hide="true" data-console="true" data-babel="false"> <div class="snippet-code snippet-currently-hidden"> <pre class="snippet-code-js lang-js prettyprint-override"><code>function ned_to_pqe({n, e, d}) { // https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf#:~:text=Given%20d%2C,reveals%20the%20factorization%20of%20N%2E let k = d * e - 1n; let t = scan1(k); let trivial_sqrt1 = new Set([1n, n - 1n]); while (true) { let g = insecure_randrange(2n, n - 1n); for ( let j = t ; j &gt; 0 ; --j ) { let x = bn_powMod(g, k &gt;&gt; j, n); if (bn_powMod(x, 2n, n) === 1n) { if (trivial_sqrt1.has(x)) continue; let p = gcd(x - 1n, n), q = n/p; if (q &gt; p) [p, q] = [q, p]; return {p, q, e}; } } } } function pqe_to_ned({p, q, e}) { let n = p * q; let l = (p - 1n) * (q - 1n); let d = bn_modInv(e, l); return {n, e, d}; } function bn_powMod(x, e, m) { // h/t https://umaranis.com/2018/07/12/calculate-modular-exponentiation-powermod-in-javascript-ap-n/ if (m === 1n) return 0n; let y = 1n; x = x % m; while (e &gt; 0n) { if (e % 2n === 1n) //odd number y = (y * x) % m; e = e &gt;&gt; 1n; //divide by 2 x = (x * x) % m; } return y; } function bn_modInv(x, m) { // TOY IMPLEMENTATION // DO NOT USE IN GENERAL-PURPOSE CODE // h/t https://rosettacode.org/wiki/Modular_inverse#C let m0 = m, t, q; let x0 = 0n, y = 1n; if (m === 1n) return 1n; while (x &gt; 1n) { q = x / m; t = m; m = x % m; x = t; t = x0; x0 = y - q * x0; y = t; } if (y &lt; 0n) y += m0; return y; } function gcd(a, b) { // h/t https://stackoverflow.com/a/17445304/1874170 while (b) { [a, b] = [b, a % b]; } return a; } function scan1(i) { // https://gmplib.org/manual/Integer-Logic-and-Bit-Fiddling#mpz_scan1 let k = 0n; if ( i !== 0n ) { while( (i &amp; 1n) === 0n ) { i &gt;&gt;= 1n; k += 1n; } } return k; } function insecure_randrange(a, b) { // h/t https://arxiv.org/abs/1304.1916 let numerator = 0n; let denominator = 1n; let n = (b - a); while (true) { numerator &lt;&lt;= 1n; denominator &lt;&lt;= 1n; numerator |= BigInt(Math.random()&gt;1/2); if (denominator &gt;= n) { if (numerator &lt; n) return a + numerator; numerator -= n; denominator -= n; } } }</code></pre> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;form action="javascript:" onsubmit="(({target:form,submitter:{value:action}})=&gt;{eval(action)(form)})(event)"&gt; &lt;p&gt; &lt;label for="p"&gt;p=&lt;/label&gt;&lt;input name="p" value="37975227936943673922808872755445627854565536638199" /&gt;&lt;br /&gt; &lt;label for="q"&gt;q=&lt;/label&gt;&lt;input name="q" value="40094690950920881030683735292761468389214899724061" /&gt;&lt;br /&gt; &lt;label for="n"&gt;n=&lt;/label&gt;&lt;input name="n" /&gt;&lt;br /&gt; &lt;label for="e"&gt;e=&lt;/label&gt;&lt;input name="e" placeholder="65537" /&gt;&lt;br /&gt; &lt;label for="d"&gt;d=&lt;/label&gt;&lt;input name="d" /&gt;&lt;br /&gt; &lt;/p&gt; &lt;p&gt; &lt;button type="submit" value="pqe2nd"&gt;Get (n,d) from (p,q,e)&lt;/button&gt;&lt;br /&gt; &lt;button type="submit" value="delpq"&gt;Forget (p,q)&lt;/button&gt;&lt;br /&gt; &lt;button type="submit" value="ned2pq"&gt;Get (p,q) from (n,e,d)&lt;/button&gt; 
&lt;/form&gt; &lt;script&gt; function pqe2nd({elements}) { if (!elements['e'].value) elements['e'].value = elements['e'].placeholder; let p = BigInt(elements['p'].value||undefined); let q = BigInt(elements['q'].value||undefined); let e = BigInt(elements['e'].value||undefined); let {n, d} = pqe_to_ned({p,q,e}); elements['n'].value = n.toString(); elements['d'].value = d.toString(); } function ned2pq({elements}) { if (!elements['e'].value) elements['e'].value = elements['e'].placeholder; let n = BigInt(elements['n'].value||undefined); let e = BigInt(elements['e'].value||undefined); let d = BigInt(elements['d'].value||undefined); let {p, q} = ned_to_pqe({n,e,d}); elements['p'].value = p.toString(); elements['q'].value = q.toString(); } function delpq({elements}) { elements['p'].value = null; elements['q'].value = null; } &lt;/script&gt;</code></pre> </div> </div> </p> <hr /> <p>To answer the question as-stated in the title: factoring <code>N</code> entails <em>finding</em> <code>N</code>. But <a href="https://crypto.stackexchange.com/a/81620/8287">you cannot, in the general case, derive <code>N</code> from <code>(e, d)</code></a>. Therefore, you cannot, in the general case, derive the factors of <code>N</code> from <code>(e, d)</code>; QED.</p> <blockquote> <p>finding <em>n</em> from (<em>e</em>, <em>d</em>) is computationally feasible with fair probability, or even certainty, for a <strong>small</strong> but observable fraction of RSA keys of practical interest</p> </blockquote> <p>If you want to try to do so anyway, you'll need to be able to factorize <code>e * d - 1</code> (if I understand the above-linked answer correctly):</p> <pre class="lang-py prettyprint-override"><code>from itertools import permutations def ed_to_pq(e, d): # NOT ALWAYS POSSIBLE -- the number e*d-1 must be small enough to factorize # h/t https://crypto.stackexchange.com/a/81620/8287 factors = factorize(e * d - 1) factors.sort() # Unimplemented optimization: # if two factors are larger than (p * q).bit_length()//4 # and the greater of (p, q) is not many times bigger than the lesser, # then you can safely assume that the large factors belong to (p-1) and (q-1) # and thereby reduce the number of iterations in the following loops # Unimplemented optimization: # permutations are overkill for this partitioning scheme; # a clever mathematician could come up with something more efficient # Unimplemented optimization: # prune permutations based on &quot;sanity&quot; factor of logarithm knapsacking l = len(factors) for arrangement in permutations(factors): for l_pm1 in range(1, l - 1): for l_qm1 in range(1, l_pm1): pm1 = prod(arrangement[:l_pm1]) qm1 = prod(arrangement[l_pm1:l_pm1+l_qm1]) try: if pow(e, -1, pm1 * qm1) == d: return (pm1 + 1, qm1 + 1) except Exception: pass from functools import reduce from operator import mul def prod(l): return reduce(mul, l) </code></pre>
2022-07-20 18:05:22.330000+00:00
2022-09-16 16:47:57.827000+00:00
2022-09-16 16:47:57.827000+00:00
null
5,747,013
<p>I have a RSA private key with modulus <code>m</code>, public exponent <code>e</code> and private exponent <code>d</code>, but the program I am using needs the modulus's prime factors <code>p</code> and <code>q</code>.</p> <p>Is it possible to use <code>e</code> and <code>d</code> to get <code>p</code> and <code>q</code>?</p>
2011-04-21 16:28:54.437000+00:00
2022-09-16 16:47:57.827000+00:00
2011-04-22 08:55:37.160000+00:00
encryption|rsa
['https://crypto.stanford.edu/%7Edabo/papers/RSA-survey.pdf', 'https://math.stackexchange.com/q/4496759/188128#comments-4496759', 'https://crypto.stackexchange.com/a/81620/8287']
3
49,309,905
<p>Your understanding is quite correct. However, unfortunately, there is inconsistency between the Tensorflow terminology and the literature. In order to understand, you need to dig through the Tensorflow implementation code. </p> <p>A <strong>cell</strong> in the Tensorflow universe is called an LSTM layer in Colah's universe (i.e an unrolled version). That is why you always define a single cell, and not a layer in your Tensorflow architecture. For example,</p> <pre><code>cell=rnn.BasicLSTMCell(num_units=5,state_is_tuple=True) </code></pre> <p>Check the code here.</p> <p><a href="https://github.com/tensorflow/tensorflow/blob/ef96faaf02be54b7eb5945244c881126a4d38761/tensorflow/python/ops/rnn_cell.py#L90" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/ef96faaf02be54b7eb5945244c881126a4d38761/tensorflow/python/ops/rnn_cell.py#L90</a></p> <blockquote> <p>The definition of cell in this package differs from the definition used in the literature. In the literature, cell refers to an object with a single scalar output. The definition in this package refers to a horizontal array of such units.</p> </blockquote> <p>Therefore, in order to understand num_units in Tensorflow, its best to imagine an unrolled LSTM as below.</p> <p><a href="https://i.stack.imgur.com/JGVM8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JGVM8.png" alt="enter image description here"></a></p> <p>In an unrolled version, you have an input <strong>X_t</strong> which is a tensor. When you specify an input of the shape </p> <blockquote> <p>[batch_size,time_steps,n_input]</p> </blockquote> <p>to Tensorflow, it knows how many times to unroll it from your <strong>time_steps</strong> parameter.</p> <p>So if you have <strong>X_t</strong> as a 1D array in TensorFlow, then in the Colahs unrolled version each LSTM cell <strong>x_t</strong> becomes a scalar value (Please observe the capital case X (vector/array) and small case x(scalar) - Also in Colah's figures)</p> <p>If you have <strong>X_t</strong> as a 2D array in the Tensorflow, then in the Colahs unrolled version each LSTM cell <strong>x_t</strong> becomes a 1D array/vector (as in your case here) and so on.</p> <p>Now here comes the most important question.</p> <p><strong>How would Tensorflow know what is the output/hidden dimension ** Z_t/H_t</strong> ? </p> <p>(Please note the difference between H_t and Z_t - I usually prefer to keep them separate as H_t goes back to input (the loop) and <strong>Z_t</strong> is the output - Not shown in figure)</p> <p>Would it be same dimension as <strong>X_t</strong> ? </p> <p><strong>No</strong>.It can be of any different shape. You need to specify it to the Tensorflow. And that is <strong>num_units - The Output Size</strong></p> <p>Check here in the code:</p> <p><a href="https://github.com/tensorflow/tensorflow/blob/ef96faaf02be54b7eb5945244c881126a4d38761/tensorflow/python/ops/rnn_cell.py#L298-L300" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/ef96faaf02be54b7eb5945244c881126a4d38761/tensorflow/python/ops/rnn_cell.py#L298-L300</a></p> <pre><code> @property def output_size(self): return self._num_units </code></pre> <p>Tensorflow uses the implementation of LSTM cell as defined in Colahs universe from the following paper:</p> <p><a href="https://arxiv.org/pdf/1409.2329.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1409.2329.pdf</a></p>
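<p>A quick way to convince yourself of this is to check the static shapes (this sketch assumes the TensorFlow 1.x API that your question uses; the sizes are arbitrary):</p> <pre><code>import tensorflow as tf
from tensorflow.contrib import rnn   # TF 1.x

batch_size, time_steps, n_input, num_units = 4, 6, 5, 7

x = tf.placeholder(tf.float32, [batch_size, time_steps, n_input])
cell = rnn.BasicLSTMCell(num_units=num_units, state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

print(outputs.shape)    # (4, 6, 7)  - one num_units-sized vector per time step
print(state.h.shape)    # (4, 7)     - the hidden state H_t
print(state.c.shape)    # (4, 7)     - the cell state
</code></pre> <p>Whatever <code>n_input</code> is, the last dimension of the output is always <code>num_units</code>, because that is the output size the cell was told to produce.</p>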
2018-03-15 21:57:25.030000+00:00
2018-03-15 22:11:17.797000+00:00
2018-03-15 22:11:17.797000+00:00
null
49,225,326
<p>I tried very hard to search everywhere, but I couldn't find what <code>num_units</code> in TensorFlow actually is. I tried to relate my question to <a href="https://stackoverflow.com/questions/37901047/what-is-num-units-in-tensorflow-basiclstmcell">this question</a>, but I couldn't get clear explanation there.</p> <hr> <p>In TensorFlow, when creating an LSTM-based RNN, we use the following command</p> <pre><code>cell = rnn.BasicLSTMCell(num_units=5, state_is_tuple=True) </code></pre> <p>As <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="noreferrer">Colah's blog</a> says, this is a basic LSTM cell:</p> <p><a href="https://i.stack.imgur.com/EL47v.png" rel="noreferrer"><img src="https://i.stack.imgur.com/EL47v.png" alt="enter image description here"></a></p> <p>Now, suppose my data is:</p> <pre><code>idx2char = ['h', 'i', 'e', 'l', 'o'] # Teach hello: hihell -&gt; ihello x_data = [[0, 1, 0, 2, 3, 3]] # hihell x_one_hot = [[[1, 0, 0, 0, 0], # h 0 [0, 1, 0, 0, 0], # i 1 [1, 0, 0, 0, 0], # h 0 [0, 0, 1, 0, 0], # e 2 [0, 0, 0, 1, 0], # l 3 [0, 0, 0, 1, 0]]] # l 3 y_data = [[1, 0, 2, 3, 3, 4]] # ihello </code></pre> <p>My input is:</p> <pre><code>x_one_hot = [[[1, 0, 0, 0, 0], # h 0 [0, 1, 0, 0, 0], # i 1 [1, 0, 0, 0, 0], # h 0 [0, 0, 1, 0, 0], # e 2 [0, 0, 0, 1, 0], # l 3 [0, 0, 0, 1, 0]]] # l 3 </code></pre> <p>which is of shape <code>[6,5]</code>. </p> <p>In <a href="https://jasdeep06.github.io/posts/Understanding-LSTM-in-Tensorflow-MNIST/" rel="noreferrer">this blog</a>, we have the following picture</p> <p><a href="https://i.stack.imgur.com/wgpPY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/wgpPY.png" alt="enter image description here"></a></p> <p>As far as I know, the <code>BasicLSTMCell</code> will unroll for <code>t</code> time steps, where <code>t</code> is my number of rows (please, correct me if I am wrong!). For example, in the following figure, the LSTM is unrolled for <code>t = 28</code> time steps.</p> <p><a href="https://i.stack.imgur.com/fGWr7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/fGWr7.png" alt="enter image description here"></a></p> <p>In the Colah's blog, it's written</p> <blockquote> <p>each line carries an entire vector</p> </blockquote> <p>So, let's see how my <code>[6,5]</code> input matrix will go through this LSTM-based RNN.</p> <p><a href="https://i.stack.imgur.com/Qje2S.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qje2S.png" alt="enter image description here"></a></p> <p><em>If my above diagram is correct, then what exactly is <code>num_units</code> (which we defined in LSTM cell)</em>? Is it a parameter of an LSTM cell?</p> <p>If <code>num_unit</code> is a parameter of a single LSTM cell, then it should be something like:</p> <p><a href="https://i.stack.imgur.com/zK7Jb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zK7Jb.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/7mJ2z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7mJ2z.png" alt="enter image description here"></a></p> <p><em>If above diagram is correct, then where are those 5 <code>num_units</code> in the following schematic representation of the LSTM cell (according to Colah's blog)?</em></p> <p><a href="https://i.stack.imgur.com/sHJoV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/sHJoV.png" alt="enter image description here"></a></p> <hr> <p>If you can give your answer with a figure, that would be really helpful! 
You can edit or create new whiteboard diagram <a href="https://awwapp.com/b/schvorrmy?dis=%5B%5B%22edit-board-name%22%5D%2C%5B%22no-init-modal%22%5D%5D" rel="noreferrer">here</a>.</p>
2018-03-11 21:32:59.753000+00:00
2018-03-15 22:36:57.590000+00:00
2018-03-15 22:36:57.590000+00:00
python|tensorflow|deep-learning|lstm|rnn
['https://github.com/tensorflow/tensorflow/blob/ef96faaf02be54b7eb5945244c881126a4d38761/tensorflow/python/ops/rnn_cell.py#L90', 'https://i.stack.imgur.com/JGVM8.png', 'https://github.com/tensorflow/tensorflow/blob/ef96faaf02be54b7eb5945244c881126a4d38761/tensorflow/python/ops/rnn_cell.py#L298-L300', 'https://arxiv.org/pdf/1409.2329.pdf']
4
36,066,107
<p>You can adapt the Sequence-to-Sequence model for NER tagging. Your training text is the source vocabulary/sequences to the encoder:</p> <pre><code>Yesterday afternoon , Mike Smith drove to New York . </code></pre> <p>Your BIO / BILOU NER tags are your target vocabulary/sequences to the decoder for NER tagging:</p> <pre><code>O O O B_PER I_PER O O B_LOC I_LOC O </code></pre> <p>Or instead use POS tags to the decoder for POS tagging:</p> <pre><code>NN NN , NNP NNP VBD TO NNP NNP . </code></pre> <p>[IMHO using a deep learning approach usually eliminates the need for POS tagging as an intermediate step, unless you specifically need those features as an output for something.]</p> <p>You would probably want to switch off the word embeddings for the decoder.</p> <p>This well-known paper applies sequence-to-sequence models to syntactic parsing, which has some similarities to the POS and/or NER tasks: <a href="http://arxiv.org/abs/1412.7449">Grammar as a Foreign Language</a></p>
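<p>For illustration only, here is a minimal Keras sketch of that framing (not the TensorFlow seq2seq tutorial code; the vocabulary sizes, layer sizes and variable names are assumptions): word ids go into the encoder, BIO/BILOU tag ids come out of the decoder.</p> <pre><code>from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

src_vocab, tgt_vocab, latent = 10000, 10, 128   # e.g. ~10 BIO/BILOU tags

# encoder reads the word sequence ("Yesterday afternoon , Mike Smith drove to New York .")
enc_in = Input(shape=(None,))
enc_emb = Embedding(src_vocab, latent)(enc_in)
_, h, c = LSTM(latent, return_state=True)(enc_emb)

# decoder emits the tag sequence ("O O O B_PER I_PER O O B_LOC I_LOC O"),
# conditioned on the encoder state; tags are fed one-hot, i.e. no word embeddings here
dec_in = Input(shape=(None, tgt_vocab))
dec_out = LSTM(latent, return_sequences=True)(dec_in, initial_state=[h, c])
tag_probs = Dense(tgt_vocab, activation='softmax')(dec_out)

model = Model([enc_in, dec_in], tag_probs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()
</code></pre>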
2016-03-17 16:16:05.410000+00:00
2016-03-18 18:22:23.557000+00:00
2016-03-18 18:22:23.557000+00:00
null
36,057,715
<p>I was wondering if there is any possibility to use Named-Entity-Recognition with a self trained model in tensorflow.</p> <p>There is a word2vec implementation, but I could not find the 'classic' POS or NER tagger.</p> <p>Thanks for your help!</p>
2016-03-17 10:24:33.243000+00:00
2016-03-18 18:22:23.557000+00:00
null
nlp|tensorflow
['http://arxiv.org/abs/1412.7449']
1
50,018,894
<h1>In Short</h1> <p>Your project is a little overly ambitious. </p> <p>Also, try to ask more specific questions on Stackoverflow. Focus on a finding out what is wrong and explain what help you would require. That'll help people to help you better. </p> <h1>In Long</h1> <p>Let's try and break down your requirements:</p> <blockquote> <p>I am trying to make a program in python that will take notes on a passage that I input. </p> </blockquote> <p>Not sure what that really means...</p> <blockquote> <p>It will sort out the first and last sentence of the paragraph ...</p> </blockquote> <p>The original code in the original post (OP) doesn't have any checks on the dates/numbers. </p> <p>First, you need to define <strong>what is a sentence?</strong></p> <ul> <li>What counts as sentence boundary?</li> <li>How are you going to detect sentences from a paragraph.</li> </ul> <p>Perhaps, <code>nltk.sent_tokenize</code> would help:</p> <pre><code>from nltk import sent_tokenize text = """Gwaha-ju (과하주; 過夏酒; literally "summer-passing wine") is a traditional Korean fortified rice wine. The refined rice wine cheongju (also called yakju) is fortified by adding the distilled spirit soju to produce gwaha-ju. Gwaha-baekju was first mentioned in Sanga Yorok, a mid-15th century cookbook, but the rice wine was made without fortification. The earliest recorded recipe for fortified gangha-ju appears in Eumsik dimibang, a 1670 cookbook. Other Joseon books that mention the fortified rice wine include Jubangmun, Chisaeng yoram, Yeokjubangmun, Eumsikbo, Sallim gyeongje, Jeungbo sallim gyeongje, Gyuhap chongseo, and Imwon gyeongjeji.""" sent_tokenize(text) </code></pre> <blockquote> <p>... and the sentences with dates and numbers. </p> </blockquote> <p>Hmmm.. that's how about checking for digits in the string of each sentence:</p> <pre><code>from nltk import sent_tokenize text = """Gwaha-ju (과하주; 過夏酒; literally "summer-passing wine") is a traditional Korean fortified rice wine. The refined rice wine cheongju (also called yakju) is fortified by adding the distilled spirit soju to produce gwaha-ju. Gwaha-baekju was first mentioned in Sanga Yorok, a mid-15th century cookbook, but the rice wine was made without fortification. The earliest recorded recipe for fortified gangha-ju appears in Eumsik dimibang, a 1670 cookbook. Other Joseon books that mention the fortified rice wine include Jubangmun, Chisaeng yoram, Yeokjubangmun, Eumsikbo, Sallim gyeongje, Jeungbo sallim gyeongje, Gyuhap chongseo, and Imwon gyeongjeji.""" for sent in sent_tokenize(text): if any(ch for ch in sent if ch.isdigit()): print(sent) </code></pre> <blockquote> <p>It would then replace some words ... </p> </blockquote> <p>Then you have to define <strong>what is a word?</strong></p> <ul> <li>How do you define word boundary? </li> <li>It won't be the same for different languages</li> </ul> <p>Maybe with <code>nltk.word_tokenize</code>, e.g.</p> <pre><code>from nltk import sent_tokenize, word_tokenize text = """Gwaha-ju (과하주; 過夏酒; literally "summer-passing wine") is a traditional Korean fortified rice wine. The refined rice wine cheongju (also called yakju) is fortified by adding the distilled spirit soju to produce gwaha-ju. Gwaha-baekju was first mentioned in Sanga Yorok, a mid-15th century cookbook, but the rice wine was made without fortification. The earliest recorded recipe for fortified gangha-ju appears in Eumsik dimibang, a 1670 cookbook. 
Other Joseon books that mention the fortified rice wine include Jubangmun, Chisaeng yoram, Yeokjubangmun, Eumsikbo, Sallim gyeongje, Jeungbo sallim gyeongje, Gyuhap chongseo, and Imwon gyeongjeji.""" for sent in sent_tokenize(text): if any(ch for ch in sent if ch.isdigit()): print(word_tokenize(sent)) </code></pre> <blockquote> <p>It would then replace some words with synonyms, </p> </blockquote> <p>Not sure which word you would like to replace with synonyms and which synonyms you're going to choose from. But do note that WordNet is not a exactly a good thesaurus.</p> <p>Each word comes with different meanings and only meanings are linked in WordNet not words, see <a href="https://stackoverflow.com/a/19383914/610569">https://stackoverflow.com/a/19383914/610569</a></p> <p>E.g. given the word "wine":</p> <pre><code>from nltk.corpus import wordnet as wn for synset in wn.synsets('wine'): # each meaning for the word, aka. synset print(synset) print('Words with same meaning:', synset.lemma_names(), '\n') </code></pre> <p><strong>How do you know which synset/meaning to use?</strong></p> <p>That's is an open question. It's also known as the <a href="https://en.wikipedia.org/wiki/Word-sense_disambiguation" rel="nofollow noreferrer">Word Sense Disambiguation (WSD) task</a>. </p> <p>If you just flatten and use the lemma names of all synset, the "synonyms" or replacement you want to make won't make sense. E.g. </p> <pre><code>from itertools import chain from nltk.corpus import wordnet as wn from nltk import sent_tokenize, word_tokenize text = """Gwaha-ju (과하주; 過夏酒; literally "summer-passing wine") is a traditional Korean fortified rice wine. The refined rice wine cheongju (also called yakju) is fortified by adding the distilled spirit soju to produce gwaha-ju. Gwaha-baekju was first mentioned in Sanga Yorok, a mid-15th century cookbook, but the rice wine was made without fortification. The earliest recorded recipe for fortified gangha-ju appears in Eumsik dimibang, a 1670 cookbook. Other Joseon books that mention the fortified rice wine include Jubangmun, Chisaeng yoram, Yeokjubangmun, Eumsikbo, Sallim gyeongje, Jeungbo sallim gyeongje, Gyuhap chongseo, and Imwon gyeongjeji.""" for sent in sent_tokenize(text): if any(ch for ch in sent if ch.isdigit()): for word in word_tokenize(sent): lemma_names = set(chain(*[synset.lemma_names() for synset in wn.synsets(word)])) # If you just flatten and use the lemma names of all synset, # the "synonyms" or replacement you want to make won't make sense. print(word, '\t', lemma_names) </code></pre> <blockquote> <p>... and get rid of useless adjectives. </p> </blockquote> <p>Hmmm, that'll require yet another piece of NLP process call POS tagging and it's not perfect.</p> <p>Perhaps you can try <code>nltk.pos_tag</code> but don't expect too much of it (in terms of accuracy), e.g.</p> <pre><code>from itertools import chain from nltk.corpus import wordnet as wn from nltk import sent_tokenize, word_tokenize, pos_tag text = """Gwaha-ju (과하주; 過夏酒; literally "summer-passing wine") is a traditional Korean fortified rice wine. The refined rice wine cheongju (also called yakju) is fortified by adding the distilled spirit soju to produce gwaha-ju. Gwaha-baekju was first mentioned in Sanga Yorok, a mid-15th century cookbook, but the rice wine was made without fortification. The earliest recorded recipe for fortified gangha-ju appears in Eumsik dimibang, a 1670 cookbook. 
Other Joseon books that mention the fortified rice wine include Jubangmun, Chisaeng yoram, Yeokjubangmun, Eumsikbo, Sallim gyeongje, Jeungbo sallim gyeongje, Gyuhap chongseo, and Imwon gyeongjeji.""" for sent in sent_tokenize(text): if any(ch for ch in sent if ch.isdigit()): for word, tag in pos_tag(word_tokenize(sent)): if not tag.startswith('JJ'): # JJ* refers to adjective. print(word) print('-----') </code></pre> <blockquote> <p>I am know the generic stuff with python, but I am new to nltk and WordNet. I've started a prototype program that will replace words in a sentence with all the random synonyms, </p> </blockquote> <p>Keep it up! Don't be discouraged and I think starting with the goal of building an application may not be the right place to start with NLP, try instead:</p> <ul> <li><a href="https://web.stanford.edu/~jurafsky/slp3/" rel="nofollow noreferrer">https://web.stanford.edu/~jurafsky/slp3/</a></li> <li><a href="http://www.nltk.org/book/" rel="nofollow noreferrer">http://www.nltk.org/book/</a></li> <li><a href="https://arxiv.org/abs/1510.00726" rel="nofollow noreferrer">https://arxiv.org/abs/1510.00726</a></li> </ul> <blockquote> <p>however I keep getting an error that says there is something wrong with WordNet. I think I installed it right, but I might be wrong.</p> </blockquote> <p>Yes, there's nothing wrong with the installation. </p> <p>Perhaps going through the WordNet API in NLTK would help you to understand how and what WordNet can do: <a href="http://www.nltk.org/howto/wordnet.html" rel="nofollow noreferrer">http://www.nltk.org/howto/wordnet.html</a> </p> <p>Also, improving basic Python and understanding why the <code>AttributeError</code> is occurring would help a lot =)</p>
2018-04-25 09:39:41.997000+00:00
2018-04-25 09:47:56.343000+00:00
2018-04-25 09:47:56.343000+00:00
null
50,009,296
<p>I am trying to make a program in python that will take notes on a passage that I input. It will sort out the first and last sentence of the paragraph and the sentences with dates and numbers. It would then replace some words with synonyms, and get rid of useless adjectives. I am know the generic stuff with python, but I am new to nltk and WordNet. I've started a prototype program that will replace words in a sentence with all the random synonyms, however I keep getting an error that says there is something wrong with WordNet. I think I installed it right, but I might be wrong. Here is my code:</p> <pre><code>import random import sys from nltk.corpus import wordnet print('Enter your passage') Passage = sys.stdin.readline() PassageList = Passage.split(' ') wordCounter = 0 syns = [] def maxInt(list): i = 0 for x in list: i += 1 return i for x in PassageList: syns = wordnet.synsets(PassageList[wordCounter]) synLength = maxInt(syns) PassageList[wordCounter] == syns[0] print(PassageList[wordCounter]) wordCounter += 1 </code></pre> <p>Here is the error I keep getting:</p> <pre><code>Traceback (most recent call last): File "C:\Users\shoob\Documents\Programs\Python\Programs\NoteTake.py", line 22, in &lt;module&gt; PassageList[wordCounter] == syns[0] File "C:\Users\shoob\AppData\Local\Programs\Python\Python36-32\lib\site-packages\nltk\corpus\reader\wordnet.py", line 198, in __eq__ return self._name == other._name AttributeError: 'str' object has no attribute '_name' </code></pre> <p>If you can help in anyway it would help me out a lot. :-D</p>
2018-04-24 19:16:08.947000+00:00
2018-04-25 10:14:20.067000+00:00
2018-04-25 09:48:05.120000+00:00
python|nlp|nltk|wordnet
['https://stackoverflow.com/a/19383914/610569', 'https://en.wikipedia.org/wiki/Word-sense_disambiguation', 'https://web.stanford.edu/~jurafsky/slp3/', 'http://www.nltk.org/book/', 'https://arxiv.org/abs/1510.00726', 'http://www.nltk.org/howto/wordnet.html']
6
63,230,097
<p>In <a href="https://arxiv.org/abs/1602.05629" rel="nofollow noreferrer">McMahan et al., 2017</a>, the clients communicate the <em>model weights</em> after local training to the server, which are then averaged and re-broadcast for the next round. No server optimizer needed, the averaging step updates the global/server model.</p> <p><a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process" rel="nofollow noreferrer"><code>tff.learning.build_federated_averaging_process</code></a> takes a slight different approach: the <em>delta</em> of the model weights the client received and the model weights after local training is sent back to the server. This <em>delta</em> can be though of as a pseudo-gradient, allowing the server to apply it to the global model using standard optimization techniques. <a href="https://arxiv.org/abs/2003.00295" rel="nofollow noreferrer">Reddi et al., 2020</a> delves into this formulation and how adaptive optimizers (Adagrad, Adam, Yogi) on the server can greatly improve convergence rates. Using SGD without momentum as the server optimizer, with a learning rate of <code>1.0</code>, exactly recovers the method described in McMahan et al., 2017.</p>
2020-08-03 13:11:24.447000+00:00
2020-08-03 13:11:24.447000+00:00
null
null
63,229,611
<p>I'm carrying out a federated learning process and use the function tff.learning.build_federated_averaging_process to create an iterative process of federated learning. As mentioned in the TFF tutorial, this function has two arguments called client_optimizer_fn and server_optimizer_fn, which in my opinion, represent the optimizer for client and server, respectively. But in the FedAvg paper, it seems that only clients carry out the optimization while the server only do the averaging operation, so what exactly is the server_optimizer_fn doing and what does its learning rate mean?</p>
2020-08-03 12:39:40.100000+00:00
2020-08-04 08:35:54.703000+00:00
null
tensorflow-federated
['https://arxiv.org/abs/1602.05629', 'https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process', 'https://arxiv.org/abs/2003.00295']
3
49,269,095
<p>First off, while the methods might have been developed by me, Cosma Shalizi, and Mark Newman, our implementation is in Matlab and R. The python implementation I think you're using could be from Jeff Alstott or Javier del Molino Matamala or maybe Joel Ornstein (<a href="http://tuvalu.santafe.edu/~aaronc/powerlaws/" rel="noreferrer">all of these are available off my website</a>).</p> <p>Now, about the results. A likelihood ratio test (LRT) does not allow you to conclude that you do or do not have a power-law distribution. It's only a model comparison tool, meaning it evaluates whether the power law is a less terrible fit to your data than some alternative. (I phrase it that way because an LRT is not a goodness of fit method.) Hence, even if the power-law distribution is <em>favored</em> over all the alternatives, it doesn't mean your data <em>are</em> power-law distributed. It only means that the power-law model is a <em>less terrible</em> statistical model of the data than the alternatives are.</p> <p>To evaluate whether the power-law distribution itself is a statistically plausible model, you should compute the <em>p</em>-value for the fitted power-law model, using the semi-parametric bootstrap <a href="https://arxiv.org/abs/0706.1062" rel="noreferrer">we describe in our paper</a>. If <em>p>0.1</em>, <em>and</em> the power-law model is favored over the alternatives by the LRT, then you can conclude relatively strong support for your data following a power-law distribution.</p> <p>Back to your specific results: each of your LRT comparisons produces a pair <em>(r,p)</em>, where <em>r</em> is the normalized log likelihood ratio and <em>p</em> is the statistical significance of that ratio. The thing that is being tested for the <em>p</em>-value here is whether the <em>sign</em> of <em>r</em> is meaningful. If <em>p&lt;0.05</em> for a LRT, then a positive sign indicates the power-law model is favored. Looking at your results, I see that the exponential and lognormal_positive alternatives are worse fits to the data than the power-law model. However, the lognormal, stretched_exponential, and truncated_power_law are not, meaning these alternatives are just as terrible fits to the data as your power-law model.</p> <p>Without the <em>p</em>-value from the hypothesis test for the power-law model itself, the LRT results are not fully interpretable. But even a partial interpretation is not consistent with a strong degree of evidence for a power-law pattern, since two non-power-law models are just as good (bad) as the power law for these data. The fact that the exponential model is genuinely worse than the power law is not surprising considering how right-skewed your data are, so nothing to write home about there.</p>
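<p>For concreteness, here is a rough sketch of that semi-parametric bootstrap on top of the <code>powerlaw</code> package. Treat the attribute/method names as assumptions to check against your version of the package (in particular that <code>fit.power_law.D</code> is the KS distance of the fit and that <code>fit.power_law.generate_random(n)</code> samples from the fitted tail model); the structure of the procedure is the point, not the exact API.</p> <pre><code>import numpy as np
import powerlaw

def bootstrap_pvalue(data, n_sims=1000, discrete=True):
    data = np.asarray(data)
    fit = powerlaw.Fit(data, discrete=discrete)
    d_obs = fit.power_law.D                       # KS distance of the empirical fit (assumption)
    body = data[data &lt; fit.xmin]                  # empirical part below xmin
    p_tail = (data &gt;= fit.xmin).mean()
    worse = 0
    for _ in range(n_sims):
        # semi-parametric resampling: body from the data, tail from the fitted model
        n_tail = np.random.binomial(len(data), p_tail) if len(body) else len(data)
        synth = np.concatenate([
            np.random.choice(body, size=len(data) - n_tail, replace=True),
            fit.power_law.generate_random(n_tail)])   # assumption, see above
        synth_fit = powerlaw.Fit(synth, discrete=discrete)
        worse += synth_fit.power_law.D &gt;= d_obs
    return worse / n_sims   # p &gt; 0.1: the power-law model is a plausible fit
</code></pre>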
2018-03-14 03:01:14.183000+00:00
2018-03-14 03:01:14.183000+00:00
null
null
49,266,070
<p>I'm using Jeff Alstott's Python powerlaw package to try fitting my data to a Power Law. Jeff's package is based on the paper by Clauset et al which discusses the Powerlaw.</p> <p>First, some details on my data:</p> <ol> <li>It is discrete (word count data);</li> <li>It is heavily skewed to the left (high skewness)</li> <li>It is <strong><em>Leptokurtic</em></strong> (excess kurtosis is greater than 10)</li> </ol> <p><strong>What I have done so far</strong></p> <p>df_data is my Dataframe, where word_count is a Series containing word count data for around 1000 word tokens.</p> <p>First I've generated a <em>fit</em> object:</p> <pre><code>fit = powerlaw.Fit(data=df_data.word_count, discrete=True) </code></pre> <p>Next, I compare the powerlaw distribution for my data against other distributions - namely, <em>lognormal</em>, <em>exponential</em>, <em>lognormal_positive</em>, <em>stretched_exponential</em> and <em>truncated_powerlaw</em>, with the fit.distribution_compare(distribution_one, distribution_two) method.</p> <p>As a result of the distribution_compare method, I've obtained the following (r,p) tuples for each of the comparisons:</p> <ul> <li>fit.distribution_compare('power_law', 'lognormal') = (0.35617607052907196, 0.5346696007)</li> <li>fit.distribution_compare('power_law', 'exponential') = (397.3832646921206, 5.3999952097178692e-06)</li> <li>fit.distribution_compare('power_law', 'lognormal_positive') = (27.82736434863289, 4.2257378698322223e-07)</li> <li>fit.distribution_compare('power_law', 'stretched_exponential') = (1.37624682020371, 0.2974292837452046)</li> <li>fit.distribution_compare('power_law', 'truncated_power_law') =(-0.0038373682383605, 0.83159372694621)</li> </ul> <p>From the powerlaw documentation:</p> <blockquote> <p>R : float</p> <p>The loglikelihood ratio of the two sets of likelihoods. If positive, the first set of likelihoods is more likely (and so the probability distribution that produced them is a better fit to the data). If negative, the reverse is true.</p> <p>p : float</p> <p>The significance of the sign of R. If below a critical value (typically .05) the sign of R is taken to be significant. If above the critical value the sign of R is taken to be due to statistical fluctuations.</p> </blockquote> <p>From the comparison results between powerlaw, exponential and lognormal distributions, I feel inclined to say that I have a powerlaw distribution.</p> <p>Would this be a correct interpretation/assumption about the test results? Or perhaps I'm missing something?</p>
2018-03-13 21:22:57.233000+00:00
2018-03-14 21:12:11.410000+00:00
2018-03-14 21:12:11.410000+00:00
python|power-law
['http://tuvalu.santafe.edu/~aaronc/powerlaws/', 'https://arxiv.org/abs/0706.1062']
2
55,006,560
<p>In the research paper <a href="https://arxiv.org/pdf/1704.04861.pdf" rel="nofollow noreferrer">MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications</a>, the test accuracy using the 'MobileNet_1.0_224' architecture on the Stanford Dogs dataset is 83.3%, which seems in line with your results.</p> <p>When you visually examine the Stanford Dogs Dataset you will find that a lot of the breeds look similar, which makes it hard to reach a higher accuracy, even with state-of-the-art image classifiers. You might improve your results by grouping similar-looking breeds into larger categories.</p> <p>Alternatively, you might tweak the training settings of the retrain.py script in the Tensorflow for Poets tutorial, but the gains will likely be marginal.</p>
2019-03-05 15:44:58.513000+00:00
2019-03-05 15:44:58.513000+00:00
null
null
55,005,386
<p>I need a tensorflow model which recognizes a dog's breed. I downloaded the Stanford Dogs Dataset - 20,580 images in 120 categories (=breeds). I followed the procedure described in <em>TensorFlow For Poets</em> to retrain mobilenet_1.0_224. I used <em>--how_many_training_steps=4000</em> and defaults for everything else. I got this tensorboard graph: <a href="https://i.stack.imgur.com/ZZmE6.png" rel="nofollow noreferrer">Training and validation accuracy</a></p> <p>The validation accuracy is only about 80%. </p> <p>What can I do to improve it? </p>
2019-03-05 14:43:54.617000+00:00
2019-03-05 15:44:58.513000+00:00
null
tensorflow
['https://arxiv.org/pdf/1704.04861.pdf']
1
70,410,273
<p>It is possible to create a custom loss by subclassing <code>tf.keras.losses.Loss</code>. The loss created in this way can be passed directly to <code>model.compile</code>. Let me demonstrate this on an example of the focal loss (arXiv:1708.02002).</p> <pre><code>class focal_loss(tf.keras.losses.Loss):
    # function to initialize loss parameters
    def __init__(self, gamma):
        super().__init__()
        self.gamma = gamma

    # function to evaluate the loss
    # must accept exactly 3 parameters: true labels, predicted labels, and, possibly, sample weights
    # must return the loss value
    def __call__(self, y_true, y_pred, sample_weight=None):
        entropy = tf.keras.losses.binary_crossentropy( y_true, y_pred )
        focal_weight = tf.reduce_sum( y_true*tf.math.pow((1-y_pred),self.gamma), axis=-1 )
        loss = tf.math.multiply(entropy,focal_weight)

        # use sample weights, if provided
        if sample_weight is not None:
            sample_weight = tf.squeeze(sample_weight)
            loss = tf.math.multiply(loss,sample_weight)

        loss = tf.math.reduce_sum( loss )
        return loss
</code></pre> <p>Further, you can pass it directly to <code>model.compile</code>:</p> <pre><code>f_loss = focal_loss(2.)
model.compile(loss=f_loss, optimizer='adam')
</code></pre> <p>If rewriting your loss in this way does not work, then it is clearly an implementation error (in the way you compute the loss). A more careful study will be needed.</p>
2021-12-19 09:36:53.463000+00:00
2021-12-19 09:36:53.463000+00:00
null
null
70,400,794
<p>When I'm testing my tensorflow keras custom loss(using additional input data to calculate loss), which is as follow:</p> <pre><code>@tf.function def build_walker_loss(labeled_output_t, unlabeled_output_t, label): similarity = tf.matmul(labeled_output_t, unlabeled_output_t, transpose_b=True) transition_prob_to_unlabeled = tf.nn.softmax(similarity, name=&quot;transition_prob_to_unlabeled&quot;) transition_prob_to_labeled = tf.nn.softmax(tf.transpose(similarity), name=&quot;transition_prob_to_labeled&quot;) roundtrip_prob = tf.matmul(transition_prob_to_unlabeled, transition_prob_to_labeled, name=&quot;roundtrip_prob&quot;) label = tf.reshape(label, [-1, 1]) target_distribution = tf.cast(tf.equal(label, tf.transpose(label)),dtype=tf.float32) num_class = tf.compat.v1.reduce_sum(target_distribution, axis=1, keep_dims=True) target_distribution = target_distribution / num_class loss = tf.keras.losses.categorical_crossentropy(from_logits=False, y_true = target_distribution, y_pred = tf.math.log(1e-8 + roundtrip_prob), ) print(loss) return loss X = np.random.uniform(0,1, (1000,10)) y = np.random.uniform(0,1, 1000) W = np.random.uniform(1,2, 1000) inp = Input((10,)) true = Input((10,)) sample_weight = Input((10,)) x = Dense(32, activation='relu')(inp) out = Dense(10)(x) print(true) print(out) m = Model([inp,true, sample_weight], out) m.add_loss( build_walker_loss( true, out, sample_weight ) ) m.compile(loss=None, optimizer='adam') </code></pre> <p>I got a error massage:</p> <pre><code> _SymbolicException Traceback (most recent call last) &lt;ipython-input-13-a0b380ce314d&gt; in &lt;module&gt; 37 print(out) 38 m = Model([inp,true, sample_weight], out) ---&gt; 39 m.add_loss( build_walker_loss( true, out, sample_weight ) ) 40 m.compile(loss=None, optimizer='adam') 41 # history = m.fit([X, y, W], y=None, epochs=10) E:\Anaconda3\envs\lrc\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds) 578 xla_context.Exit() 579 else: --&gt; 580 result = self._call(*args, **kwds) 581 582 if tracing_count == self._get_tracing_count(): E:\Anaconda3\envs\lrc\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds) 648 *args, **kwds) 649 # If we did not create any variables the trace we have is good enough. --&gt; 650 return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds) # pylint: disable=protected-access 651 652 def fn_with_cond(*inner_args, **inner_kwds): E:\Anaconda3\envs\lrc\lib\site-packages\tensorflow\python\eager\function.py in _filtered_call(self, args, kwargs) 1663 if isinstance(t, (ops.Tensor, 1664 resource_variable_ops.BaseResourceVariable))), -&gt; 1665 self.captured_inputs) 1666 1667 def _call_flat(self, args, captured_inputs, cancellation_manager=None): E:\Anaconda3\envs\lrc\lib\site-packages\tensorflow\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager) 1744 # No tape is watching; skip to running the function. 
1745 return self._build_call_outputs(self._inference_function.call( -&gt; 1746 ctx, args, cancellation_manager=cancellation_manager)) 1747 forward_backward = self._select_forward_and_backward_functions( 1748 args, E:\Anaconda3\envs\lrc\lib\site-packages\tensorflow\python\eager\function.py in call(self, ctx, args, cancellation_manager) 596 inputs=args, 597 attrs=attrs, --&gt; 598 ctx=ctx) 599 else: 600 outputs = execute.execute_with_cancellation( E:\Anaconda3\envs\lrc\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 72 raise core._SymbolicException( 73 &quot;Inputs to eager execution function cannot be Keras symbolic &quot; ---&gt; 74 &quot;tensors, but found {}&quot;.format(keras_symbolic_tensors)) 75 raise e 76 # pylint: enable=protected-access _SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [&lt;tf.Tensor 'input_14:0' shape=(None, 10) dtype=float32&gt;, &lt;tf.Tensor 'dense_9/Identity:0' shape=(None, 10) dtype=float32&gt;, &lt;tf.Tensor 'input_15:0' shape=(None, 10) dtype=float32&gt;] </code></pre> <p>I follow the answer in <a href="https://stackoverflow.com/questions/62746381/custom-loss-problem-inputs-to-eager-execution-function-cannot-be-keras-symbolic">Custom loss problem: inputs to eager execution function cannot be keras symbolic tensors but found</a>, but without considering the correctness of the input data, when I change the mse loss to my own loss function, I still got this error.</p> <p>I don’t know which step made my function error. What can I do to add this loss function to my model?</p>
2021-12-18 03:34:13.793000+00:00
2021-12-19 09:36:53.463000+00:00
null
python|tensorflow|keras|deep-learning|eager-execution
[]
0
21,330,212
<p>Did you check this page? <a href="http://arxiv.org/help/trackback/" rel="nofollow">http://arxiv.org/help/trackback/</a></p> <p>arXiv does support trackbacks but only on pages with a url of the form <code>http://arxiv.org/abs/{paper_id}</code></p>
2014-01-24 10:26:37.960000+00:00
2014-01-24 10:26:37.960000+00:00
null
null
7,944,602
<p>I would like to submit a trackback to <a href="http://arxiv.org" rel="nofollow">arXiv</a> using the only php script available I found, <a href="http://sourceforge.net/projects/phptrackback/" rel="nofollow">PHP Trackback</a>. However it seems like I am not able to proceed since I get a "HTTP 403 Forbidden" error. It further states:</p> <blockquote> <p>Sadly, your client does not supply a proper User-Agent, and is consequently excluded.</p> </blockquote> <p>So, how can I include a User-Agent? As a guess I tried</p> <pre><code>fputs($tb_sock, "User-Agent: " . $_SERVER['HTTP_USER_AGENT'] . "\r\n"); </code></pre> <p>inside the corresponding function in the above mentioned script. Hence my question:</p> <p><strong>Is there a way to supply a User-Agent sending a trackback?</strong></p> <p>Please note that I do not have any blogging software on webspace. Thanks in advance!</p>
2011-10-30 11:21:11.040000+00:00
2015-06-09 20:50:15.247000+00:00
2012-10-05 18:08:09.130000+00:00
php|web|trackback
['http://arxiv.org/help/trackback/']
1
66,933,248
<p>There are at least 2 figures of merit commonly used when discussing GPU memory: latency and bandwidth. From a latency perspective, this number is not published by NVIDIA (that I know of) and the usual practice is to discover it with careful <a href="https://arxiv.org/pdf/1804.06826.pdf" rel="noreferrer">microbenchmarking</a>.</p> <p>From a bandwidth perspective, AFAIK this number is also not published by NVIDIA (for L2 cache), but it should be fairly easy to discover it with a fairly simple test case of a copy kernel. We can estimate the bandwidth of global memory simply by ensuring that our copy kernel uses a copy footprint that is much larger than the published L2 cache size (6MB for V100), whereas we can estimate the bandwidth of L2 by keeping our copy footprint smaller than that.</p> <p>Such a code (IMO) is fairly trivial to write:</p> <pre><code>$ cat t44.cu template &lt;typename T&gt; __global__ void k(volatile T * __restrict__ d1, volatile T * __restrict__ d2, const int loops, const int ds){ for (int i = 0; i &lt; loops; i++) for (int j = threadIdx.x+blockDim.x*blockIdx.x; j &lt; ds; j += gridDim.x*blockDim.x) if (i&amp;1) d1[j] = d2[j]; else d2[j] = d1[j]; } const int dsize = 1048576*128; const int iter = 64; int main(){ int *d; cudaMalloc(&amp;d, dsize); // case 1: 32MB copy, should exceed L2 cache on V100 int csize = 1048576*8; k&lt;&lt;&lt;80*2, 1024&gt;&gt;&gt;(d, d+csize, iter, csize); // case 2: 2MB copy, should fit in L2 cache on V100 csize = 1048576/2; k&lt;&lt;&lt;80*2, 1024&gt;&gt;&gt;(d, d+csize, iter, csize); cudaDeviceSynchronize(); } $ nvcc -o t44 t44.cu $ nvprof ./t44 ==53310== NVPROF is profiling process 53310, command: ./t44 ==53310== Profiling application: ./t44 ==53310== Profiling result: Type Time(%) Time Calls Avg Min Max Name GPU activities: 100.00% 6.9032ms 2 3.4516ms 123.39us 6.7798ms void k&lt;int&gt;(int volatile *, int volatile *, int, int) API calls: 89.47% 263.86ms 1 263.86ms 263.86ms 263.86ms cudaMalloc 4.45% 13.111ms 8 1.6388ms 942.75us 2.2322ms cuDeviceTotalMem 3.37% 9.9523ms 808 12.317us 186ns 725.86us cuDeviceGetAttribute 2.34% 6.9006ms 1 6.9006ms 6.9006ms 6.9006ms cudaDeviceSynchronize 0.33% 985.49us 8 123.19us 85.864us 180.73us cuDeviceGetName 0.01% 42.668us 8 5.3330us 1.8710us 22.553us cuDeviceGetPCIBusId 0.01% 34.281us 2 17.140us 6.2880us 27.993us cudaLaunchKernel 0.00% 8.0290us 16 501ns 256ns 1.7980us cuDeviceGet 0.00% 3.4000us 8 425ns 217ns 876ns cuDeviceGetUuid 0.00% 3.3970us 3 1.1320us 652ns 2.0020us cuDeviceGetCount $ </code></pre> <p>Based on the profiler output, we can estimate global memory bandwidth as:</p> <pre><code>2*64*32MB/6.78ms = 604GB/s </code></pre> <p>we can estimate L2 bandwidth as:</p> <pre><code>2*64*2MB/123us = 2.08TB/s </code></pre> <p>Both of these are rough measurements (I'm not doing careful benchmarking here), but <code>bandwidthTest</code> on this V100 GPU reports a device memory bandwidth of ~700GB/s, so I believe the 600GB/s number is &quot;in the ballpark&quot;. If we use that to judge that the L2 cache measurement is in the ballpark, then we might guess that the L2 cache may be ~3-4x faster than global memory in some circumstances.</p>
2021-04-03 16:07:58.280000+00:00
2021-04-03 16:07:58.280000+00:00
null
null
66,921,433
<p>Modern GPU architectures have both L1 cache and L2 cache. It is well-known that L1 cache is much faster than global memory. However, the speed of L2 cache is less clear in the CUDA documentation. I looked up the CUDA documentation, but can only find that the latency of global memory operation is about 300-500 cycles while L1 cache operation takes only about 30 cycles. Can anyone give the speed of L2 cache? Such information may be very useful, since the programming will not focus on optimizing the use of L2 cache if it is not very fast compared with global memory. If the speed is different for different architectures, I just want to focus on the latest architecture, such as NVIDIA Titan RTX 3090 (Compute Capability 8.6) or NVIDIA Telsa V100 (Compute Capability 7.0).</p> <p>Thank you!</p>
2021-04-02 15:15:14.290000+00:00
2021-04-03 16:07:58.280000+00:00
null
cuda|gpu|nvidia
['https://arxiv.org/pdf/1804.06826.pdf']
1
49,353,040
<p>I advise you to have a look at the <a href="https://github.com/marcotcr/lime" rel="nofollow noreferrer">LIME framework</a>:</p> <blockquote> <p>Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin: <a href="https://arxiv.org/abs/1602.04938" rel="nofollow noreferrer">"Why Should I Trust You?": Explaining the Predictions of Any Classifier</a>. 2016.</p> </blockquote> <p>It also has <a href="https://www.youtube.com/watch?v=hUnRCxnydCc" rel="nofollow noreferrer">a youtube video explanation</a>.</p> <p>The very gist / simplified extremely:</p> <ol> <li>Make the normal prediction</li> <li>Remove some words. Make more predictions.</li> <li>Look at the change in prediction. If the prediction changes more, the word was more important.</li> </ol>
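<p>A minimal sketch of what that looks like with the <code>lime</code> package (here <code>classifier_fn</code> is a placeholder for your own function that maps a list of raw texts to an array of class probabilities, e.g. a thin wrapper around your CNN's predict):</p> <pre><code>from lime.lime_text import LimeTextExplainer

class_names = ['class_0', 'class_1']              # your 2 (or 5) classes
explainer = LimeTextExplainer(class_names=class_names)

exp = explainer.explain_instance('the text you want to explain',
                                 classifier_fn,   # list of texts -&gt; array of probabilities
                                 num_features=6)
print(exp.as_list())   # [(word, weight), ...]: the words with the largest impact
</code></pre>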
2018-03-18 21:34:03.877000+00:00
2018-03-18 21:34:03.877000+00:00
null
null
49,274,295
<p>I am new to tensorflow, I am using CNN model descrined by <a href="http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/" rel="nofollow noreferrer">http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/</a></p> <p>I am getting around 60% accuracy for 5 class and 80% accuracy for 2 class classification</p> <p>Now I want to visualize which word impact most to a particular classification, can anybody tell me how to do it</p>
2018-03-14 09:42:41.263000+00:00
2018-03-18 21:34:03.877000+00:00
null
python|tensorflow|text|visualization|conv-neural-network
['https://github.com/marcotcr/lime', 'https://arxiv.org/abs/1602.04938', 'https://www.youtube.com/watch?v=hUnRCxnydCc']
3
62,575,360
<p>There are different methods for summarizing a text, i.e. extractive &amp; abstractive.</p> <blockquote> <p><strong>Extractive summarization</strong> means identifying important sections of the text and generating them verbatim producing a subset of the sentences from the original text; while <strong>abstractive summarization</strong> reproduces important material in a new way after interpretation and examination of the text using advanced natural language techniques to generate a new shorter text that conveys the most critical information from the original one.</p> </blockquote> <p>For a transformer-based approach you just need an additional attention layer which you can add to an encoder-decoder model, or you can use pre-trained transformers (and maybe fine-tune them) like BERT, GPT, T5, etc.</p> <p>You can have a look at: <a href="https://huggingface.co/transformers/" rel="nofollow noreferrer">https://huggingface.co/transformers/</a></p> <p>For Abstractive Summarization T5 works pretty well. Here's a nice and simple example: <a href="https://github.com/faiztariq/FzLabs/blob/master/abstractive-text-summarization-t5.ipynb" rel="nofollow noreferrer">https://github.com/faiztariq/FzLabs/blob/master/abstractive-text-summarization-t5.ipynb</a></p> <p>For Extractive Summarization you may take a look at: <a href="https://pypi.org/project/bert-extractive-summarizer/" rel="nofollow noreferrer">https://pypi.org/project/bert-extractive-summarizer/</a></p> <p>There's a paper (<strong>Attention Is All You Need</strong>) that explains transformers pretty well; you may also take a look at it: <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">https://arxiv.org/abs/1706.03762</a></p>
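<p>For example, a minimal abstractive sketch with Hugging Face <code>transformers</code> and T5 (the model size and generation settings here are just illustrative defaults, not recommendations):</p> <pre><code>from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')

text = 'your long article text here ...'
input_ids = tokenizer.encode('summarize: ' + text, return_tensors='pt',
                             max_length=512, truncation=True)
summary_ids = model.generate(input_ids, num_beams=4, max_length=60,
                             early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
</code></pre>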
2020-06-25 12:28:28.267000+00:00
2020-06-25 14:34:10.903000+00:00
2020-06-25 14:34:10.903000+00:00
null
57,589,625
<p>I'm trying to build a text summarization model using seq2seq architecture in Keras. I've followed this tutorial <a href="https://keras.io/examples/lstm_seq2seq/" rel="nofollow noreferrer">https://keras.io/examples/lstm_seq2seq/</a> and implemented it with Embeddings layer, which works fine. But now I want to use BERT. Can pretrained BERT embeddings be used in such a task, usually I see text classifiation, but not the encoder-decoder architecture used with BERT.</p> <p>I access BERT model from TF Hub, and have a Layer class implemented from this tutorial <a href="https://github.com/strongio/keras-bert/blob/master/keras-bert.ipynb" rel="nofollow noreferrer">https://github.com/strongio/keras-bert/blob/master/keras-bert.ipynb</a>, I also tokenize accordingly with BERT tokenizer, below is my model</p> <pre class="lang-py prettyprint-override"><code>enc_in_id = Input(shape=(None, ), name="Encoder-Input-Ids") enc_in_mask = Input(shape=(None, ), name="Encoder-Input-Masks") enc_in_segment = Input(shape=(None, ), name="Encoder-Input-Segment-Ids") bert_encoder_inputs = [enc_in_id, enc_in_mask, enc_in_segment] encoder_embeddings = BertLayer(name='Encoder-Bert-Layer')(bert_encoder_inputs) encoder_embeddings = BatchNormalization(name='Encoder-Batch-Normalization')(encoder_embeddings) encoder_lstm = LSTM(latent_size, return_state=True, name='Encoder-LSTM') encoder_out, e_state_h, e_state_c = encoder_lstm(encoder_embeddings) encoder_states = [e_state_h, e_state_c] dec_in_id = Input(shape=(None,), name="Decoder-Input-Ids") dec_in_mask = Input(shape=(None,), name="Decoder-Input-Masks") dec_in_segment = Input(shape=(None,), name="Decoder-Input-Segment-Ids") bert_decoder_inputs = [dec_in_id, dec_in_mask, dec_in_segment] decoder_embeddings_layer = BertLayer(name='Decoder-Bert-Layer') decoder_embeddings = decoder_embeddings_layer(bert_decoder_inputs) decoder_batchnorm_layer = BatchNormalization(name='Decoder-Batch-Normalization-1') decoder_batchnorm = decoder_batchnorm_layer(decoder_embeddings) decoder_lstm = LSTM(latent_size, return_state=True, return_sequences=True, name='Decoder-LSTM') decoder_out, _, _ = decoder_lstm(decoder_batchnorm, initial_state=encoder_states) dense_batchnorm_layer = BatchNormalization(name='Decoder-Batch-Normalization-2') decoder_out_batchnorm = dense_batchnorm_layer(decoder_out) decoder_dense_id = Dense(vocabulary_size, activation='softmax', name='Dense-Id') dec_outputs_id = decoder_dense_id(decoder_out_batchnorm) </code></pre> <p>The model builds and after a couple of epochs accuracy rises to 1, and loss drops below 0.5, but the predictions are awful. Since I'm working on a dev set comprised of 5 samples, with max 30 WordPiece tokens and predicting on the same data, I only get the first or maybe two tokens right, then it just repeats the last seen token, or [PAD] token.</p>
2019-08-21 10:31:12.010000+00:00
2020-06-26 05:36:14.100000+00:00
null
tensorflow|keras|deep-learning|word-embedding|seq2seq
['https://huggingface.co/transformers/', 'https://github.com/faiztariq/FzLabs/blob/master/abstractive-text-summarization-t5.ipynb', 'https://pypi.org/project/bert-extractive-summarizer/', 'https://arxiv.org/abs/1706.03762']
4
62,920,514
<p>Let me talk about random integer generating algorithms that are &quot;optimal&quot; in terms of the number of random bits it uses on average. In the rest of this post, we will assume we have a &quot;true&quot; random generator that can produce unbiased and independent random bits.</p> <p>In 1976, D. E. Knuth and A. C. Yao showed that any algorithm that produces random integers with a given probability, using only random bits, can be represented as a binary tree, where random bits indicate which way to traverse the tree and each leaf (endpoint) corresponds to an outcome. (Knuth and Yao, &quot;The complexity of nonuniform random number generation&quot;, in <em>Algorithms and Complexity</em>, 1976.) Knuth and Yao showed that any <em>optimal</em> binary tree algorithm for generating integers in <code>[0, n)</code> uniformly, will need <strong>at least <code>log2(n)</code> and at most <code>log2(n) + 2</code> bits on average</strong>. (Thus, even an <em>optimal</em> algorithm has a chance of &quot;wasting&quot; bits.) See below for examples of optimal algorithms.</p> <p>However, any <em>optimal</em> integer generator that is also <em>unbiased</em> will, in general, run forever in the worst case, as also shown by Knuth and Yao. Going back to the binary tree, each one of the n outcomes labels leaves in the binary tree so that each integer in [0, n) can occur with probability 1/n. But if 1/n has a non-terminating binary expansion (which will be the case if n is not a power of 2), this binary tree will necessarily either—</p> <ul> <li>Have an &quot;infinite&quot; depth, or</li> <li>include &quot;rejection&quot; leaves at the end of the tree,</li> </ul> <p>And in either case, the algorithm will run forever in the worst case, even if it uses very few random bits on average. (On the other hand, when n is a power of 2, the optimal binary tree will have no rejection nodes and require exactly n bits before returning an outcome, so that no bits will be &quot;wasted&quot;.) The Fast Dice Roller is an example of an algorithm that uses &quot;rejection&quot; events to ensure it's unbiased; see the comment in the code below.</p> <p>Thus, in general, <strong>a random integer generator can be <em>either</em> unbiased <em>or</em> constant-time (or even neither), but not both.</strong> And the binary tree concept shows that there is no way in general to &quot;fix&quot; the worst case of an indefinite running time without introducing bias. For instance, modulo reductions (e.g., <code>rand() % n</code>) are equivalent to a binary tree in which rejection leaves are replaced with labeled outcomes — but since there are more possible outcomes than rejection leaves, only some of the outcomes can take the place of the rejection leaves, introducing bias. The same kind of binary tree — and the same kind of bias — results if you stop rejecting after a set number of iterations. (However, this bias may be negligible depending on the application. There are also security aspects to random integer generation, which are too complicated to discuss in this answer.)</p> <h3>Fast Dice Roller Implementation</h3> <p>There are many examples of <em>optimal</em> algorithms in the sense given earlier. One of them is the <a href="https://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Fast Dice Roller</a> by J. 
Lumbroso (2013) (implemented below), and perhaps other examples are the algorithm given in one of the other answers here and the algorithm given in the <a href="http://mathforum.org/library/drmath/view/65653.html" rel="nofollow noreferrer">Math Forum</a> in 2004. On the other hand, all the algorithms <a href="https://www.pcg-random.org/posts/bounded-rands.html" rel="nofollow noreferrer">surveyed by M. O'Neill</a> are not optimal, since they rely on generating blocks of random bits at a time. See also my note on <a href="https://peteroupc.github.io/randomfunc.html#RNDINT_Random_Integers_in_0_N" rel="nofollow noreferrer">integer generating algorithms</a>.</p> <p>The following is a JavaScript implementation of the Fast Dice Roller. Note that it uses rejection events and a loop to ensure it's unbiased. <code>nextBit()</code> is a method that produces an independent unbiased random bit (e.g., <code>Math.random()&lt;0.5 ? 1 : 0</code>, which isn't necessarily efficient in terms of random bits ultimately relied on in JavaScript).</p> <pre class="lang-js prettyprint-override"><code>function randomInt(minInclusive, maxExclusive) { var maxInclusive = (maxExclusive - minInclusive) - 1 var x = 1 var y = 0 while(true) { x = x * 2 var randomBit = nextBit() y = y * 2 + randomBit if(x &gt; maxInclusive) { if (y &lt;= maxInclusive) { return y + minInclusive } // Rejection x = x - maxInclusive - 1 y = y - maxInclusive - 1 } } } </code></pre> <p>The following version returns a BigInt, an arbitrary-precision integer supported in recent versions of JavaScript:</p> <pre class="lang-js prettyprint-override"><code>function randomInt(minInclusive, maxExclusive) { minInclusive=BigInt(minInclusive) maxExclusive=BigInt(maxExclusive) var maxInclusive = (maxExclusive - minInclusive) - BigInt(1) var x = BigInt(1) var y = BigInt(0) while(true) { x = x * BigInt(2) var randomBit = BigInt(Math.random()&lt;0.5 ? 1 : 0) y = y * BigInt(2) + randomBit if(x &gt; maxInclusive) { if (y &lt;= maxInclusive) { return y + minInclusive } // Rejection x = x - maxInclusive - BigInt(1) y = y - maxInclusive - BigInt(1) } } } </code></pre> <h3>Reducing Bit Waste</h3> <p>Recall that &quot;optimal&quot; integer generators, such as the Fast Dice Roller above, use on average at least <code>log2(n)</code> bits (the lower bound), or come within 2 bits of this lower bound on average. There are various techniques that can be used to bring an algorithm (even a less than optimal one) closer to this theoretical lower bound, including batching and randomness extraction. These are discussed in:</p> <ul> <li>The Fast Dice Roller paper itself, see section 3.1 (batching).</li> <li>The paper &quot;<a href="https://arxiv.org/abs/1502.02539" rel="nofollow noreferrer">Random variate generation using only finitely many unbiased, independently and identically distributed random bits</a>&quot; by Devroye and Gravel, section 2.3 (randomness extraction).</li> <li>The Math Forum page given above (recycling).</li> </ul>
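<p>The same Fast Dice Roller, as a small Python sketch for readers who prefer it, consuming bits one at a time from any iterator of 0/1 values (plug in your own bit stream; the example bit source at the bottom is just an assumption for testing):</p> <pre><code>def random_int(bits, min_inclusive, max_exclusive):
    # bits: any iterator/generator yielding unbiased 0/1 values
    max_inclusive = max_exclusive - min_inclusive - 1
    x, y = 1, 0
    while True:
        x = x * 2
        y = y * 2 + next(bits)
        if x &gt; max_inclusive:
            if y &lt;= max_inclusive:
                return y + min_inclusive
            # rejection: fold the leftover range back in and keep drawing bits
            x = x - max_inclusive - 1
            y = y - max_inclusive - 1

# example bit source (replace with your own stream of random bits):
# import random
# bits = iter(lambda: random.getrandbits(1), 2)   # endless iterator of bits
# print(random_int(bits, 0, 6))                   # uniform on [0, 6)
</code></pre>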
2020-07-15 17:25:39.740000+00:00
2022-01-23 11:09:26.250000+00:00
2022-01-23 11:09:26.250000+00:00
null
6,046,918
<p>I have a stream of (uniform) random bits from which I'd like to generate random integers uniformly in the range [0,n] without wasting bits. (I'm considering bits wasted which are in excess of floor(log_2(n))+1, on the assumption that it's always possible to use no more than that.) E.g., if n = 5, then the algorithm I'm looking for should use no more than three bits. How can this be done?</p>
2011-05-18 15:05:02.857000+00:00
2022-01-23 11:09:26.250000+00:00
2020-07-15 17:19:28.557000+00:00
random
['https://arxiv.org/abs/1304.1916', 'http://mathforum.org/library/drmath/view/65653.html', 'https://www.pcg-random.org/posts/bounded-rands.html', 'https://peteroupc.github.io/randomfunc.html#RNDINT_Random_Integers_in_0_N', 'https://arxiv.org/abs/1502.02539']
5
45,016,130
<p>To read the FSNS dataset you can use <a href="https://github.com/tensorflow/models/blob/master/attention_ocr/python/datasets/fsns.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/attention_ocr/python/datasets/fsns.py</a> directly or as a reference.</p> <p>The feature keys are incorrect in the code snippet you provided - missing the 'image/' prefix. It should be 'image/encoded' instead of just 'encoded', 'image/width' instead of just 'width', and so on. Refer to Table 4 in the <a href="https://arxiv.org/abs/1702.03970" rel="nofollow noreferrer">paper</a>.</p>
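<p>As a rough illustration (check Table 4 / <code>fsns.py</code> for the full, authoritative key list; only commonly used keys are shown, and note that the images are stored encoded rather than as raw bytes), the parsing part of your snippet would become something like:</p> <pre><code>features = tf.parse_single_example(
    serialized_example,
    features={
        'image/encoded': tf.FixedLenFeature([], tf.string),
        'image/format': tf.FixedLenFeature([], tf.string),
        'image/width': tf.FixedLenFeature([], tf.int64),
        'image/text': tf.FixedLenFeature([], tf.string),
    })

# 'image/encoded' holds an encoded (PNG) string, so decode it instead of tf.decode_raw:
image = tf.image.decode_png(features['image/encoded'], channels=3)
</code></pre>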
2017-07-10 15:33:34.567000+00:00
2017-07-10 15:33:34.567000+00:00
null
null
45,003,662
<p>I just want to read image and text in your tfrecords file: <code>fsns/train/train-00511-of-00512Hi</code> in <a href="https://github.com/tensorflow/models/tree/a4944a57ad2811e1f6a7a87589a9fc8a776e8d3c/street" rel="nofollow noreferrer">FSNS datasets</a>. But when I do the work follow the guide in Tfrecords Guide: <a href="http://warmspringwinds.github.io/tensorflow/tf-slim/2016/12/21/tfrecords-guide/" rel="nofollow noreferrer">link</a>, it shows error message following:</p> <pre><code>InvalidArgumentError (see above for traceback): Name: &lt;unknown&gt;, Feature: encoded (data type: string) is required but could not be found. [[Node: ParseSingleExample/ParseExample/ParseExample = ParseExample[Ndense=4, Nsparse=0, Tdense=[DT_STRING, DT_INT64, DT_STRING, DT_INT64], dense_shapes=[[], [], [], []], sparse_types=[], _device="/job:localhost/replica:0/task:0/cpu:0"](ParseSingleExample/ExpandDims, ParseSingleExample/ParseExample/ParseExample/names, ParseSingleExample/ParseExample/ParseExample/dense_keys_0, ParseSingleExample/ParseExample/ParseExample/dense_keys_1, ParseSingleExample/ParseExample/ParseExample/dense_keys_2, ParseSingleExample/ParseExample/ParseExample/dense_keys_3, ParseSingleExample/ParseExample/Const, ParseSingleExample/ParseExample/Const_1, ParseSingleExample/ParseExample/Const_2, ParseSingleExample/ParseExample/Const_3)]] </code></pre> <p>It seems that the key name is wrong? My code is attached, could author or any other check my code and help me to fix the bug? </p> <pre><code>import tensorflow as tf import skimage.io as io IMAGE_HEIGHT = 384 IMAGE_WIDTH = 384 tfrecords_filename = '/home/wangjianbo_i/google_model/MyCode/models/attention_ocr/python/datasets/data/fsns/train/train-00511-of-00512' def read_and_decode(filename_queue): reader = tf.TFRecordReader() _, serialized_example = reader.read(filename_queue) features = tf.parse_single_example( serialized_example, # Defaults are not specified since both keys are required. features={ 'height': tf.FixedLenFeature([], tf.int64), 'width': tf.FixedLenFeature([], tf.int64), 'encoded': tf.FixedLenFeature([], tf.string), 'text':tf.FixedLenFeature([], tf.string) }) image = tf.decode_raw(features['encoded'], tf.uint8) text = tf.decode_raw(features['text'], tf.uint8) height = tf.cast(features['height'], tf.int32) width = tf.cast(features['width'], tf.int32) image_shape = tf.stack([height, width, 3]) image = tf.reshape(image, image_shape) image_size_const = tf.constant((IMAGE_HEIGHT, IMAGE_WIDTH, 3), dtype=tf.int32) resized_image = tf.image.resize_image_with_crop_or_pad(image=image, target_height=IMAGE_HEIGHT, target_width=IMAGE_WIDTH) images = tf.train.shuffle_batch( [resized_image], batch_size=2, capacity=30, num_threads=2, min_after_dequeue=10) return images,text filename_queue = tf.train.string_input_producer( [tfrecords_filename], num_epochs=10) image,text = read_and_decode(filename_queue) init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer()) with tf.Session() as sess: sess.run(init_op) coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) # Let's read off 3 batches just for example for i in xrange(3): img,text= sess.run([image,text]) print img,text print(img[0, :, :, :].shape) print('current batch') io.imshow(img[0, :, :, :]) io.show() io.imshow(img[1, :, :, :]) io.show() coord.request_stop() coord.join(threads) </code></pre>
2017-07-10 04:11:53.920000+00:00
2017-10-24 08:49:13.907000+00:00
2017-07-10 08:01:41.597000+00:00
python|git|ocr
['https://github.com/tensorflow/models/blob/master/attention_ocr/python/datasets/fsns.py', 'https://arxiv.org/abs/1702.03970']
2
57,881,702
<p>The easiest way to do it is through the <code>resolution</code> package, available at this link: <a href="https://github.com/analyxcompany/resolution" rel="nofollow noreferrer">https://github.com/analyxcompany/resolution</a></p> <p>It is based on this paper: <a href="http://arxiv.org/pdf/0812.1770.pdf" rel="nofollow noreferrer">http://arxiv.org/pdf/0812.1770.pdf</a></p> <p>It pretty much has 2 functions, <code>cluster_resolution()</code> and <code>cluster_resolution_RandomOrderFULL()</code>. In both you can state the resolution <code>t</code> and how many repetitions you want, <code>rep</code>. And you can just use the igraph object in the function.</p> <pre><code>cluster_resolution_RandomOrderFULL(g,t=0.5)
cluster_resolution_RandomOrderFULL(g,rep=20)
</code></pre> <p>NOTE/EDIT: it will not accept signed networks though! I'm trying to either contact the owner of the code or customize it myself to make it suitable for signed networks.</p> <p>EDIT2: I was able to translate the function community_louvain.m from the <a href="https://sites.google.com/site/bctnet/Home/functions" rel="nofollow noreferrer">Brain Connectivity Toolbox</a> for Matlab to R.</p> <p>Here is the <a href="https://github.com/coelhocao/Brain_Network_analysis/blob/master/signed_louvain.R" rel="nofollow noreferrer">github link</a> for <code>signed_louvain()</code>.</p> <p>You can pretty much just call, for example, <code>signed_louvain(g, gamma = 1, mod = 'modularity')</code>; it works with igraph or matrix objects as input. If it has negative values, you have to choose <code>mod = 'neg_sym'</code> or <code>'neg_asym'</code>. </p>
2019-09-11 03:26:25.677000+00:00
2020-03-12 23:19:18.690000+00:00
2020-03-12 23:19:18.690000+00:00
null
43,100,556
<p>is there a way to set the resolution parameter when using the function <em>cluster_louvain</em> to detect communities in <em>igraph</em> for R? It makes a lot of difference for the result, as this parameter is related to the hierarchical dissimilarity between nodes. Thank you.</p>
2017-03-29 17:43:34.223000+00:00
2020-03-12 23:19:18.690000+00:00
null
r|social-networking|igraph|sna
['https://github.com/analyxcompany/resolution', 'http://arxiv.org/pdf/0812.1770.pdf', 'https://sites.google.com/site/bctnet/Home/functions', 'https://github.com/coelhocao/Brain_Network_analysis/blob/master/signed_louvain.R']
4
33,681,799
<p>Perhaps a variant of the Bloom filter called <a href="http://arxiv.org/ftp/cs/papers/0306/0306046.pdf" rel="nofollow">Compact Approximator</a>: like a bloom filter but generalized so the entries are values from a lattice. That lattice is here just floats between 0 and 1 (it has more structure than just being a lattice but it satisfies the requirements) or however you're storing those numbers.</p> <p>An update replaces the relevant entries by the max between it and the value being remembered, a query computes the minimum of all its relevant entries (examples below). The results can only overestimate the true value. By reversing the ordering (swapping min and max and initializing to 1 instead of 0) you can get an underestimation, together giving an interval that contains the true value.</p> <hr> <p>So for example, using the first approximated (overestimations), putting in a number looks like this:</p> <pre><code>index1 = hash1(key) data[index1] = max(data[index1], value); index2 = hash2(key) data[index2] = max(data[index2], value); ... etc </code></pre> <p>And getting the overestimation looks like:</p> <pre><code>result = 1 index1 = hash1(key) result = min(data[index1], result); index2 = hash2(key) result = min(data[index2], result); ... etc </code></pre>
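<p>A self-contained Python sketch of the overestimating variant described above (the hash choice and table size are arbitrary illustrative picks, not recommendations):</p> <pre><code>import hashlib

class CompactApproximator:
    def __init__(self, size=1 &lt;&lt; 16, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.data = [0.0] * size            # lattice bottom element: 0.0

    def _indexes(self, key):
        # k different hashes of the key, as in a Bloom filter
        for i in range(self.num_hashes):
            digest = hashlib.blake2b(('%d:%s' % (i, key)).encode()).digest()
            yield int.from_bytes(digest[:8], 'big') % self.size

    def update(self, key, value):
        # keep the max at every relevant entry
        for idx in self._indexes(key):
            self.data[idx] = max(self.data[idx], value)

    def query(self, key):
        # minimum over relevant entries: can only overestimate the true value
        return min(self.data[idx] for idx in self._indexes(key))
</code></pre> <p>Running the underestimating twin alongside it (swap <code>max</code>/<code>min</code> and initialize the table to 1.0) gives the bracketing interval described above.</p>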
2015-11-12 21:32:15.820000+00:00
2015-11-13 00:28:43.937000+00:00
2015-11-13 00:28:43.937000+00:00
null
33,681,289
<p>Consider we have an algorithm that receives a hypothetically long stream of keys. It then generates a value between 0 and 1 for each key, as we process it, for posterior retrieval. The input set is large enough that we can't afford to store one value for each key. The value-generating rule is independent across keys.</p> <p>Now, assume that we can tolerate error in the posterior lookup, but we want to still <strong>minimize</strong> the difference in <strong>retrieved</strong> and <strong>original</strong> values (i.e. asymptotically over many random retrievals). </p> <p>For example, if the original value for a given key was 0.008, retrieving 0.06 is much better than retrieving 0.6.</p> <p>What data structures or algorithms can we use to address this problem? </p> <p>Bloom filters are the closest data structure that I can think of. One could quantize the output range, use a bloom filter for each bucket, and somehow combine their output at retrieval time to estimate the most likely value. Before I proceed with this path and reinvent the wheel, are there any known data structures, algorithms, theoretical or practical approaches to address this problem?</p> <p>I am ideally looking for a solution that can <strong>parameterize</strong> the tradeoff between space and error rates.</p>
2015-11-12 21:01:42.580000+00:00
2015-11-13 00:28:43.937000+00:00
2015-11-12 21:27:31.507000+00:00
java|algorithm|data-structures|probability|bloom-filter
['http://arxiv.org/ftp/cs/papers/0306/0306046.pdf']
1
56,897,162
<p>I am not entirely sure how <em>uniform sampling of the eigenvalues</em> would work, but I think you are looking for <a href="https://www.caam.rice.edu/software/ARPACK/" rel="nofollow noreferrer">ARPACK</a>. ARPACK would use matrix-vector products to find your eigenvalues, so I am not entirely sure if the Real/Im decomposition is required in this case (hard to say without knowing a lot about <code>U</code>).</p> <p>Also, you might want to look at the <a href="http://www.ecs.umass.edu/~polizzi/feast/" rel="nofollow noreferrer">FEAST</a> algorithm, which would benefit a lot from the <a href="https://arxiv.org/abs/1203.4031" rel="nofollow noreferrer">given search contour</a>.</p> <p>I am not aware of existing Julia bindings for those libraries, but I don't think it is a problem since Julia can call C functions.</p> <p>Here, I gave some brief ideas, and <a href="https://scicomp.stackexchange.com/">Computational Science</a> might be a better place to find the right crowd. However, a lot more details about <code>U</code>, its sparsity, its size, and what "uniform sampling of eigenvalues in the interval" means would be required.</p>
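<p>For what it's worth, here is a rough Python/SciPy sketch of the ARPACK idea (SciPy's <code>eigsh</code> wraps ARPACK). It assumes <code>U_Re</code> is available as a sparse, real-symmetric matrix and simply sweeps shift-invert targets across [-1, 1], which only approximates "uniform sampling" of the spectrum:</p> <pre><code>import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def sample_spectrum(A, n_targets=10, k_per_target=5):
    """Collect eigenvalues of the sparse symmetric matrix A near a grid
    of shift-invert targets spread uniformly over [-1, 1]."""
    found = []
    for sigma in np.linspace(-1.0, 1.0, n_targets):
        # shift-invert mode returns the eigenvalues closest to sigma
        vals = eigsh(A, k=k_per_target, sigma=sigma,
                     which='LM', return_eigenvectors=False)
        found.extend(vals)
    return np.unique(np.round(found, 10))

# toy stand-in for U_Re: a random sparse symmetric matrix
M = sp.random(2000, 2000, density=1e-3, format='csc')
A = (M + M.T) * 0.5
print(sample_spectrum(A))
</code></pre>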
2019-07-05 05:47:21.220000+00:00
2019-07-05 05:47:21.220000+00:00
null
null
56,895,396
<p>I work with Julia, but I think the question is more general. Suppose that one wants to find the spectrum of a very large (sparse) unitary matrix <code>U</code> numerically. As is reported in many entries, diagonalizing by brute force using <code>eigs</code> ends without eigenvalue convergence.</p> <p>The trick would be then to work with simpler expressions, i.e. with </p> <pre><code>U_Re = real(U + U')*0.5 U_Im = real((U - U')*-0.5im) </code></pre> <p>My question is, is there a way to obtain a uniform sampling in finding the eigenvalues? That is, I would like to obtain, say <code>10e3</code> eigenvalues for <code>U_Re</code> and <code>U_Im</code> in the interval <code>[-1,1]</code>.</p>
2019-07-05 00:53:13.890000+00:00
2019-07-05 05:47:21.220000+00:00
null
julia|sparse-matrix|numerical-methods
['https://www.caam.rice.edu/software/ARPACK/', 'http://www.ecs.umass.edu/~polizzi/feast/', 'https://arxiv.org/abs/1203.4031', 'https://scicomp.stackexchange.com/']
4
43,956,736
<p>Note: something that I think will get you on the right track:</p> <p>The negative log likelihood is also known as the multiclass cross-entropy (Pattern Recognition and Machine Learning).</p> <p>EDIT: misread the question. I thought this was talking about Deep Deterministic Policy Gradients.</p> <p>It would depend on your domain, but with a softmax, you are getting a probability across all output nodes. To me that doesn't really make sense in most domains when you think about DDPG. For example, if you are controlling the extension of robotic arms and legs, it wouldn't make sense to have limb extension measured as [.25, .25, .25, .25] if you wanted to have all limbs extended. In this case, .25 could mean fully extended, but what happens if the vector of outputs is [.75,.25,0,0]? So in this way, you could have a separate sigmoid function from 0 to 1 for all action nodes, and then you could represent it as [1,1,1,1] for all arms being extended. I hope that makes sense.</p> <p>Since the actor network is what determines the actions in DDPG, we could then represent our network like this for our robot (rough Keras example):</p> <pre><code>state = Input(shape=[your_state_shape]) hidden_layer = Dense(30, activation='relu')(state) all_limbs = Dense(4, activation='sigmoid')(hidden_layer) model = Model(input=state, output=all_limbs) </code></pre> <p>Then, your critic network will have to account for the action dimensions.</p> <pre><code>state = Input(shape=[your_state_shape]) action = Input(shape=[4]) state_hidden = Dense(30, activation='relu')(state) state_hidden_2 = Dense(30, activation='linear')(state_hidden) action_hidden = Dense(30, activation='linear')(action) combined = merge([state_hidden_2, action_hidden], mode='sum') squasher = Dense(30, activation='relu')(combined) output = Dense(4, activation='linear')(squasher) #number of actions </code></pre> <p>Then you can use your target functions from there. Note: I don't know if this is working code, as I haven't tested it, but hopefully you get the idea.</p> <p>Source: <a href="https://arxiv.org/pdf/1509.02971.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1509.02971.pdf</a> An awesome blog on this with TORCS (not created by me): <a href="https://yanpanlau.github.io/2016/10/11/Torcs-Keras.html" rel="nofollow noreferrer">https://yanpanlau.github.io/2016/10/11/Torcs-Keras.html</a></p> <p>In the above blog, they also show how to use different output functions, such as one tanh and two sigmoid functions for actions.</p>
2017-05-13 18:42:22.977000+00:00
2017-05-15 15:17:11.540000+00:00
2017-05-15 15:17:11.540000+00:00
null
43,881,897
<p>I am trying to program a reinforcement learning algorithm using policy gradients, as inspired by <a href="http://karpathy.github.io/2016/05/31/rl/" rel="nofollow noreferrer">Karpathy's blog article</a>. Karpathy's example has only two actions, UP or DOWN, so a single output neuron is sufficient (high activation=UP, low activation=DOWN). I want to extend this to multiple actions, so I believe I need a softmax activation function on the output layer. However, I am not certain about what the gradient for the output layer should be.</p> <p>If I were using a cross-entropy loss function with the softmax activation in a supervised learning context, the gradient for neuron <code>i</code> is simply:</p> <pre><code>g[i] = a[i] - target[i] </code></pre> <p>where <code>target[i] = 1</code> for the desired action and <code>0</code> for all others.</p> <p>To use this for reinforcement learning I would multiply <code>g[i]</code> by the discounted reward before back-propagating.</p> <p>However, it seems that reinforcement learning uses negative log-likelihood as the loss instead of cross-entropy. <strong>How does that change the gradient?</strong></p>
2017-05-10 00:43:44.937000+00:00
2019-02-19 11:25:21.467000+00:00
2019-02-19 11:25:21.467000+00:00
neural-network|reinforcement-learning
['https://arxiv.org/pdf/1509.02971.pdf', 'https://yanpanlau.github.io/2016/10/11/Torcs-Keras.html']
2
52,437,914
<blockquote> <p>There are a lot of datasets for this purpose; <a href="https://arxiv.org/pdf/1502.01710.pdf" rel="nofollow noreferrer">this paper</a> gives detailed information about the major ones, which I have linked below:</p> </blockquote> <ul> <li><a href="https://textminingonline.com/tag/dbpedia-ontology-classification-dataset" rel="nofollow noreferrer">DBpediaOntologyClassification</a></li> <li><a href="https://www.kaggle.com/bittlingmayer/amazonreviews" rel="nofollow noreferrer">AmazonReviewSentimentAnalysis</a></li> <li><a href="https://www.kaggle.com/c/stat-441841-data-challenge-ii/data" rel="nofollow noreferrer">Yahoo! AnswersTopicClassification</a></li> <li><a href="http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html" rel="nofollow noreferrer">AG News Categorization in English</a></li> </ul>
2018-09-21 06:34:18.257000+00:00
2018-09-21 06:34:18.257000+00:00
null
null
52,434,867
<p>I have found a package for text classification in PHP in which the method for the classifier accepts the sentence and the category like this:</p> <pre><code>$classifier-&gt;learn('that was a clean election', 'not sports'); $classifier-&gt;learn('that was a nice game','sports'); $classifier-&gt;guess('the game was bad'); // returns sports </code></pre> <p>What dataset is best for this approach? I also have dynamic categories, which means I can add additional categories. My problem is that I have to give examples for every category added, which means I need more data for each new category.</p>
2018-09-21 00:05:14.407000+00:00
2018-09-21 06:37:49.443000+00:00
2018-09-21 06:37:49.443000+00:00
php|machine-learning|dataset|text-classification
['https://arxiv.org/pdf/1502.01710.pdf', 'https://textminingonline.com/tag/dbpedia-ontology-classification-dataset', 'https://www.kaggle.com/bittlingmayer/amazonreviews', 'https://www.kaggle.com/c/stat-441841-data-challenge-ii/data', 'http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html']
5
32,762,708
<p>To summarize, the difference between the two approaches boils down to how the "partial dependence function" of the two predictors is estimated. </p> <p>The <code>dismo</code> package is based on code originally given in <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2656.2008.01390.x/full" rel="noreferrer">Elith et al., 2008</a> and you can find the original source in the supplementary material. The paper very briefly describes the procedure. Basically the model predictions are obtained over a grid of two predictors, <em>setting all other predictors at their means</em>. The model predictions are then regressed onto the grid. The mean squared errors of this model are then multiplied by 1000. This statistic indicates departures of the model predictions from a linear combination of the predictors, indicating a possible interaction. </p> <p>From the <code>dismo</code> package, we can also obtain the relevant source code for <code>gbm.interactions</code>. The interaction test boils down to the following commands (copied directly from source):</p> <pre><code>interaction.test.model &lt;- lm(prediction ~ as.factor(pred.frame[,1]) + as.factor(pred.frame[,2])) interaction.flag &lt;- round(mean(resid(interaction.test.model)^2) * 1000,2) </code></pre> <p><code>pred.frame</code> contains a grid of the two predictors in question, and <code>prediction</code> is the prediction from the original <code>gbm</code> fitted model where all but the two predictors under consideration are set at their means. </p> <p>This is different from Friedman's H statistic <a href="http://arxiv.org/pdf/0811.1679.pdf" rel="noreferrer">(Friedman &amp; Popescu, 2008)</a>, which is estimated via formula (44) for any pair of predictors. This is essentially the departure from additivity for any two predictors <em>averaging over</em> the values of the other variables, NOT setting the other variables at their means. It is expressed as a percent of the total variance of the partial dependence function of the two variables (or model implied predictions), so it will always be between 0 and 1. </p>
2015-09-24 13:31:14.470000+00:00
2015-09-24 13:31:14.470000+00:00
null
null
29,998,014
<p><strong>Background</strong></p> <p>The reference manual for the <code>gbm package</code> states the <code>interact.gbm</code> function computes Friedman's H-statistic to assess the strength of variable interactions. the H-statistic is on the scale of [0-1].</p> <p>The reference manual for the <code>dismo package</code> does not reference any literature for how the <code>gbm.interactions</code> function detects and models interactions. Instead it gives a list of general procedures used to detect and model interactions. The <code>dismo</code> vignette "Boosted Regression Trees for ecological modeling" states that the <code>dismo</code> package extends functions in the <code>gbm</code> package. </p> <p><strong>Question</strong></p> <p>How does <code>dismo::gbm.interactions</code> actually detect and model interactions?</p> <p><strong>Why</strong></p> <p>I am asking this question because <code>gbm.interactions</code> in the <code>dismo package</code> yields results >1, which the <code>gbm package</code> reference manual says is not possible. </p> <p>I checked the tar.gz for each of the packages to see if the source code was similar. It is different enough that I cannot determine if these two packages are using the same method to detect and model interactions.</p>
2015-04-05 00:58:27.743000+00:00
2015-09-24 13:31:14.470000+00:00
null
r|machine-learning|interaction|gbm
['http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2656.2008.01390.x/full', 'http://arxiv.org/pdf/0811.1679.pdf']
2
72,484,466
<p>There are two ways to compute the perplexity score over a long text: splitting it into non-overlapping segments, or using a sliding (strided) window. <a href="https://arxiv.org/pdf/2012.15832.pdf" rel="nofollow noreferrer">This paper</a> describes the details.</p>
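<p>As a rough illustration, a sliding-window evaluation with <code>transformers</code> and PyTorch could look like the sketch below; the model name, window size and stride are placeholders, and this follows the general recipe rather than any single official script:</p> <pre><code>import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def sliding_window_perplexity(text, max_length=1024, stride=512):
    encodings = tokenizer(text, return_tensors="pt")
    seq_len = encodings.input_ids.size(1)
    nlls, prev_end = [], 0
    for begin in range(0, seq_len, stride):
        end = min(begin + max_length, seq_len)
        target_len = end - prev_end          # tokens actually scored here
        input_ids = encodings.input_ids[:, begin:end]
        target_ids = input_ids.clone()
        target_ids[:, :-target_len] = -100   # mask the overlapping context
        with torch.no_grad():
            loss = model(input_ids, labels=target_ids).loss
        nlls.append(loss * target_len)
        prev_end = end
        if end == seq_len:
            break
    return torch.exp(torch.stack(nlls).sum() / seq_len)

print(sliding_window_perplexity("some long evaluation text ..."))
</code></pre> <p>The non-overlapping variant is the special case where the stride equals the window length; the sliding window gives each token more context and therefore a lower (more favourable) perplexity.</p>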
2022-06-03 03:41:41+00:00
2022-06-03 03:41:41+00:00
null
null
71,466,639
<p>I am pretraining a <code>GPT2LMHeadModel</code> using <code>Trainer</code> as follows:</p> <pre class="lang-py prettyprint-override"><code>training_args = TrainingArguments( output_dir=str(project_root / 'models/bn-gpt2/'), overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=1, per_device_eval_batch_size=1, gradient_accumulation_steps=4, fp16=True, optim=&quot;adafactor&quot;, eval_steps=400, save_steps=800, warmup_steps=500, evaluation_strategy=&quot;steps&quot;, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=tokenized_dataset['train'], eval_dataset=tokenized_dataset['test'], ) trainer.train() </code></pre> <p>I want to measure the performance of my pre-trained model using perplexity or accuracy metrics during and after training. I have found some ways to measure these for individual sentences, but I cannot find a way to do this for the complete model. My goal is to create a next word prediction model for my native language using GPT2 training from scratch.</p>
2022-03-14 10:58:15.067000+00:00
2022-06-03 03:41:41+00:00
null
python|pytorch|huggingface-transformers
['https://arxiv.org/pdf/2012.15832.pdf']
1
7,936,220
<p>There is a well-known greedy approximation algorithm for set cover that is also easy to implement in whichever language you choose. The algorithm itself is described here:</p> <p><a href="http://en.wikipedia.org/wiki/Set_cover_problem#Greedy_algorithm" rel="noreferrer">http://en.wikipedia.org/wiki/Set_cover_problem#Greedy_algorithm</a></p> <p>It is so simple that the easiest thing is just to write it from scratch.</p> <p>Notably, it is also the best polynomial-time approximation algorithm known for set cover. That means that to get better worst-case performance (a more compact result set) you would need to accept non-polynomial running times (= slow algorithms for large sets).</p> <p>Unfortunately the Wikipedia entry doesn't actually cover weighted set cover, which is the case here. The extension is simple, and is described e.g. here:</p> <p><a href="http://pages.cs.wisc.edu/~shuchi/courses/880-S07/scribe-notes/lecture03.pdf" rel="noreferrer">http://pages.cs.wisc.edu/~shuchi/courses/880-S07/scribe-notes/lecture03.pdf</a></p> <p>Some more useful notes:</p> <p><a href="http://www.cs.ucr.edu/~neal/non_arxiv/Young08SetCover.pdf" rel="noreferrer">http://www.cs.ucr.edu/~neal/non_arxiv/Young08SetCover.pdf</a> <a href="http://www.cs.uiuc.edu/class/sp08/cs473/Lectures/lec20.pdf" rel="noreferrer">http://www.cs.uiuc.edu/class/sp08/cs473/Lectures/lec20.pdf</a></p>
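<p>For reference, a straightforward Python sketch of the greedy weighted variant (repeatedly pick the set with the lowest cost per newly covered element) might look like this; the data layout is just an assumption based on the question:</p> <pre><code>def greedy_weighted_set_cover(universe, sets_with_costs):
    """sets_with_costs: list of (set, cost) pairs.
    Returns indices of chosen sets forming an approximate minimum-cost cover."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # score each unused set by cost per newly covered element
        candidates = [(cost / len(s.intersection(uncovered)), i)
                      for i, (s, cost) in enumerate(sets_with_costs)
                      if i not in chosen and s.intersection(uncovered)]
        if not candidates:
            raise ValueError("universe cannot be covered")
        _, best = min(candidates)
        chosen.append(best)
        uncovered.difference_update(sets_with_costs[best][0])
    return chosen

# data from the question, as (set, cost) pairs
sets_with_costs = [({1, 2}, 1), ({1}, 1), ({1, 2, 3}, 2), ({1}, 2),
                   ({3, 4}, 2), ({4}, 3), ({1, 2}, 3), ({3, 4}, 4),
                   ({1, 2, 3, 4}, 4)]
print(greedy_weighted_set_cover({1, 2, 3, 4}, sets_with_costs))
</code></pre> <p>Each iteration is linear in the number of sets, so for 30000 sets of size 5-40 this should run comfortably fast without any LP machinery.</p>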
2011-10-29 00:41:28.800000+00:00
2011-10-29 23:24:04.253000+00:00
2011-10-29 23:24:04.253000+00:00
null
7,936,037
<p>This question follows from a related question of mine posted <a href="https://stackoverflow.com/questions/7927787/finding-an-optimal-solution-that-minimizes-a-constraint">here</a>. @mhum suggested that my problem falls into the <em>covering problem</em> domain. I tried encoding my question into a minimum set cover problem and currently I have a dataset in this form:</p> <pre><code>Set Cost (1,2) 1 (1) 1 (1,2,3) 2 (1) 2 (3,4) 2 (4) 3 (1,2) 3 (3,4) 4 (1,2,3,4) 4 </code></pre> <p>The objective is to find a good set cover that covers all numbers and one that attempts to minimize the total cost. My dataset is big with at least 30000 sets (of size varying from 5-40 elements) like this. Are there any good greedy implementations to solve this or am I on my own to implement this? I am not an expert in LP but any LP-solvers (from numpy/scipy) that can solve this are also acceptable.</p>
2011-10-29 00:01:14.680000+00:00
2020-09-14 21:47:28.740000+00:00
2017-05-23 10:30:04.313000+00:00
python|algorithm|numpy|scipy|linear-programming
['http://en.wikipedia.org/wiki/Set_cover_problem#Greedy_algorithm', 'http://pages.cs.wisc.edu/~shuchi/courses/880-S07/scribe-notes/lecture03.pdf', 'http://www.cs.ucr.edu/~neal/non_arxiv/Young08SetCover.pdf']
3
57,746,909
<p>The easiest way is to record all matches during the iteration through the matrix using the naive implementation.</p> <p>I needed <code>allLCS()</code> for tests, if other algorithms provide a valid solution, which must be one of all possible LCS.</p> <p>The code is on <a href="https://github.com/wollmers/LCS" rel="nofollow noreferrer">github</a>.</p> <p>It's in Perl, but easy to understand. Iterate through the matrix and add the matches in the cells. At end the bottom right cell contains the length of the LCS. That's the naive algorithm. Now record at each match the coordinates as an array [i,j] in a hash with the match count as key. </p> <pre class="lang-perl prettyprint-override"><code># get all LCS of two arrays # records the matches by rank sub allLCS { my ($self,$X,$Y) = @_; my $m = scalar @$X; my $n = scalar @$Y; my $ranks = {}; # e.g. '4' =&gt; [[3,6],[4,5]] my $c = []; my ($i,$j); for (0..$m) {$c-&gt;[$_][0]=0;} for (0..$n) {$c-&gt;[0][$_]=0;} for ($i=1;$i&lt;=$m;$i++) { for ($j=1;$j&lt;=$n;$j++) { if ($X-&gt;[$i-1] eq $Y-&gt;[$j-1]) { $c-&gt;[$i][$j] = $c-&gt;[$i-1][$j-1]+1; push @{$ranks-&gt;{$c-&gt;[$i][$j]}},[$i-1,$j-1]; } else { $c-&gt;[$i][$j] = ($c-&gt;[$i][$j-1] &gt; $c-&gt;[$i-1][$j]) ? $c-&gt;[$i][$j-1] : $c-&gt;[$i-1][$j]; } } } my $max = scalar keys %$ranks; return $self-&gt;_all_lcs($ranks,1,$max); } </code></pre> <p>At the end this collection of recorded matches is connected by the method <code>_all_lcs()</code>:</p> <pre class="lang-perl prettyprint-override"><code>sub _all_lcs { my ($self,$ranks,$rank,$max) = @_; my $R = [[]]; while ($rank &lt;= $max) { my @temp; for my $path (@$R) { for my $hunk (@{$ranks-&gt;{$rank}}) { if (scalar @{$path} == 0) { push @temp,[$hunk]; } elsif (($path-&gt;[-1][0] &lt; $hunk-&gt;[0]) &amp;&amp; ($path-&gt;[-1][1] &lt; $hunk-&gt;[1])) { push @temp,[@$path,$hunk]; } } } @$R = @temp; $rank++; } return $R; } </code></pre> <p>The code is inspired by the paper</p> <p>Ronald I. Greenberg. <a href="https://arxiv.org/pdf/cs/0211001.pdf" rel="nofollow noreferrer">Fast and Simple Computation of All Longest Common Subsequences</a></p>
2019-09-01 14:39:24.703000+00:00
2019-09-01 14:39:24.703000+00:00
null
null
56,518,854
<p>I implement the Longest Common Subsequence problem in C#. I need to detect <strong>ALL the common maximal subsequences</strong> between two strings.</p> <p>To do this, I create a table using <a href="https://en.wikipedia.org/wiki/Hirschberg%27s_algorithm" rel="nofollow noreferrer">Needleman-Wunsch algorithm</a> to store the LCS sequence for each step of the calculation. </p> <p>Is there any chance to determine, <strong><em>how many</em></strong> maximal subsequences were found (using a table)?</p> <p>Depending on this I want to choose a method how to collect each subsequence. The point is, for <em>one subsequence</em> recursion is not required, so it will give a <em>better performance</em>. And its crucial for my task.</p> <p>Here is a code snippet, where the basic functions from the project are implemented:</p> <pre><code> private static int[][] GetMatrixLCS(string x, string y) { var lenX = x.Length; var lenY = y.Length; matrixLCS = new int[lenX + 1][]; for (var i = 0; i &lt; matrixLCS.Length; i++) { matrixLCS[i] = new int[lenY + 1]; } for (int i = 0; i &lt;= lenX; i++) { for (int j = 0; j &lt;= lenY; j++) { if (i == 0 || j == 0) matrixLCS[i][j] = 0; else if (x[i - 1] == y[j - 1]) matrixLCS[i][j] = matrixLCS[i - 1][j - 1] + 1; else matrixLCS[i][j] = Math.Max(matrixLCS[i - 1][j], matrixLCS[i][j - 1]); } } return matrixLCS; } static HashSet&lt;string&gt; FindAllLcs(string X, string Y, int lenX, int lenY) { var set = new HashSet&lt;string&gt;(); if (lenX == 0 || lenY == 0) return emptySet; if (X[lenX - 1] == Y[lenY - 1]) { var tempResult = FindAllLcs(X, Y, lenX - 1, lenY - 1); foreach (var temp in tempResult) set.Add(temp + X[lenX - 1]); return set; } if (matrixLCS[lenX - 1][lenY] &gt;= matrixLCS[lenX][lenY - 1]) set = FindAllLcs(X, Y, lenX - 1, lenY); if (matrixLCS[lenX][lenY - 1] &gt;= matrixLCS[lenX - 1][lenY]) set.UnionWith(FindAllLcs(X, Y, lenX, lenY - 1)); return set; } </code></pre> <p>And the example with two types of inputs and expected outputs:</p> <pre><code> public void SingleOutput() { var sequence = LCS.FindLCS("ABC", "AB"); Assert.AreEqual(1, sequence.Length); Assert.AreEqual("AB", sequence[0]); } public void MultipleOutput() { var sequence = LCS.FindLCS("BCAB", "ABC"); Assert.AreEqual(2, sequence.Length); Assert.AreEqual("AB", sequence [0]); Assert.AreEqual("BC", sequence [1]); } </code></pre> <p>Any help would be strongly appreciated.</p>
2019-06-09 22:33:35.920000+00:00
2019-09-01 14:39:24.703000+00:00
2019-06-10 07:57:59.580000+00:00
c#|algorithm|lcs|needleman-wunsch
['https://github.com/wollmers/LCS', 'https://arxiv.org/pdf/cs/0211001.pdf']
2
55,967,143
<p>Object detection consists of classification and regression: not only do we have to correctly classify an object in the image, we also need to correctly locate it. </p> <p>Although some object detection frameworks (YOLO, SSD) do look like regression models, the loss function is not as simple as an L2 loss. In fact, the loss function consists of two parts, a <code>crossentropy</code> loss for classification and a <code>regression</code> loss for localization, and an L2 loss is usually used for the <code>regression</code> part here.</p> <p>Here are the loss functions of some common object detection models.</p> <p><a href="https://arxiv.org/pdf/1512.02325.pdf" rel="nofollow noreferrer">SSD model.</a> <a href="https://i.stack.imgur.com/oKZGF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oKZGF.png" alt="enter image description here"></a></p> <p><a href="https://arxiv.org/pdf/1506.02640v2.pdf" rel="nofollow noreferrer">YOLO model</a></p> <p><a href="https://i.stack.imgur.com/OE1og.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OE1og.png" alt="enter image description here"></a></p>
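<p>As a toy illustration (not the exact SSD/YOLO formulation), the combined objective can be sketched in Keras/TensorFlow as a weighted sum of a classification term and a localization term:</p> <pre><code>import tensorflow as tf

cls_loss_fn = tf.keras.losses.CategoricalCrossentropy()
loc_loss_fn = tf.keras.losses.MeanSquaredError()   # L2-style localization term

def detection_loss(y_true_cls, y_pred_cls, y_true_box, y_pred_box, alpha=1.0):
    """Total loss = classification loss + alpha * localization loss."""
    return (cls_loss_fn(y_true_cls, y_pred_cls)
            + alpha * loc_loss_fn(y_true_box, y_pred_box))
</code></pre> <p>Real detectors additionally match predicted boxes to ground-truth anchors and apply the localization term only to the matched (positive) boxes, but the two-part structure of the loss is the same.</p>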
2019-05-03 09:40:42.220000+00:00
2019-05-03 14:50:42.497000+00:00
2019-05-03 14:50:42.497000+00:00
null
55,963,192
<p>I'm studying about tensorflow (exactly Object Detection using CNN)</p> <p>I have already studied about Classification, but Object-Detection is Regression problem, so I am confused loss function and total network implementation.</p> <p>In classification problem, I should use-</p> <p><strong>tf.nn.softmax_cross_entropy_with_logits(logits=result, labels=Y)</strong></p> <p>(result is my CNN output tensor)</p> <p>but in regression problem, like sementic-segmentation and object detection, I found that I have to use l2-loss function.</p> <p><strong>tf.nn.l2_loss(t=result)</strong></p> <p>I don't know how can I use this function because I cannot use tf.argmax function.</p> <p><strong>[Source Code 1] Classification, used softmax and tf.argmax</strong></p> <pre class="lang-py prettyprint-override"><code>cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=result, labels=Y)) print("* Cross Entropy SIZE : " + str(cross_entropy)) Result_argmax = tf.argmax(tf.nn.softmax(result), 1) Label_argmax = tf.argmax(Y, 1) print("* Result Argmax : ", Result_argmax) print("* Label Argmax : ", Label_argmax) ay = tf.argmax(tf.nn.softmax(result), 1) ly = tf.argmax(tf.nn.softmax(Y), 1) correct_prediction = tf.equal(Result_argmax, Label_argmax) print("* tf.argmax : " + str(Result_argmax)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) train_step = tf.train.AdamOptimizer(0.0001 * batchsize).minimize(cross_entropy) </code></pre> <p>this is so easy and I totally understood.</p> <p><strong>[Source Code 2] Regression, used l2_loss function</strong></p> <pre class="lang-py prettyprint-override"><code>l2_loss = tf.reduce_mean(tf.nn.l2_loss(t=result)) print("** L2 Loss SIZE : " + str(l2_loss)) train_step = tf.train.AdamOptimizer(0.0001 * batchsize).minimize(l2_loss) ???????? </code></pre> <p>Is that correct? I cannot understand how to do box location learning.</p> <p>Also, There is my learning monitor which is captured.</p> <p><a href="https://i.stack.imgur.com/FxRJz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FxRJz.png" alt="enter image description here"></a></p> <p>Really, Really I can't understand. <strong>Please HELP ME!</strong></p> <p>(last, here is my session image captured.) <a href="https://i.stack.imgur.com/Trd7S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Trd7S.png" alt="enter image description here"></a></p>
2019-05-03 04:12:56.133000+00:00
2019-05-03 14:50:42.497000+00:00
2019-05-03 05:10:11.333000+00:00
python|tensorflow|neural-network|object-detection|loss-function
['https://arxiv.org/pdf/1512.02325.pdf', 'https://i.stack.imgur.com/oKZGF.png', 'https://arxiv.org/pdf/1506.02640v2.pdf', 'https://i.stack.imgur.com/OE1og.png']
4
61,233,772
<p>I suggest you use a type of <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">UNet</a>. This kind of architecture has downsampling layers followed by upsampling layers to get back to the original spatial dimensions. Since your input and output images have different dimensions, you can crop or resize at the end of the decoder so the output matches the target shape.</p> <p>Please refer to <a href="https://github.com/iver56/image-regression" rel="nofollow noreferrer">this</a> Image Regression article.</p>
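<p>A very rough Keras sketch of the encoder-decoder idea (without the UNet skip connections, for brevity; the layer sizes and shapes are placeholders to adapt to your data):</p> <pre><code>from tensorflow.keras import layers, models

def build_encoder_decoder(input_shape=(64, 256, 1), output_channels=1):
    inp = layers.Input(shape=input_shape)
    # encoder: downsample
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    # decoder: upsample back towards the input resolution
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(output_channels, 1, activation="linear")(x)
    # if the target image is smaller than the input, crop here,
    # e.g. with layers.Cropping2D(...), so the output matches its shape
    return models.Model(inp, out)

model = build_encoder_decoder()
model.compile(optimizer="adam", loss="mse")
model.summary()
</code></pre> <p>The key difference from your current model is that the output stays a 2D feature map all the way through, instead of being flattened into one huge Dense layer.</p>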
2020-04-15 16:28:43.283000+00:00
2020-04-24 06:56:30.597000+00:00
2020-04-24 06:56:30.597000+00:00
null
57,132,063
<p>I am working on a deep learning problem which requires me to have a deep - learning model that has as input an image and as output another image. Now, the input and output images are of different dimensions and for this reason I cannot use an autoencoder. I have already tried to build a very simple Convolutional Neural Network that has a final output dense layer that has as "units" argument the width and height of the output image multiplied together. However, this network that I am attaching below did not have success. My questions are: </p> <ul> <li>are CNNs the right type of deep learning network to approach this problem in the way I did it?</li> <li>if not, what are the other type of deep learning networks I can experiment to tackle this problem?</li> </ul> <p>Thanks in advance!</p> <p>Here is the summary of the CNN model that I have already tried:</p> <hr> <h1>Layer (type) Output Shape Param #</h1> <p>conv2d_1 (Conv2D) (None, 26, 877, 32) 544 </p> <hr> <p>activation_1 (Activation) (None, 26, 877, 32) 0 </p> <hr> <p>max_pooling2d_1 (MaxPooling2 (None, 13, 438, 32) 0 </p> <hr> <p>conv2d_2 (Conv2D) (None, 12, 437, 16) 2064 </p> <hr> <p>activation_2 (Activation) (None, 12, 437, 16) 0 </p> <hr> <p>max_pooling2d_2 (MaxPooling2 (None, 6, 218, 16) 0 </p> <hr> <p>conv2d_3 (Conv2D) (None, 5, 217, 8) 520 </p> <hr> <p>activation_3 (Activation) (None, 5, 217, 8) 0 </p> <hr> <p>max_pooling2d_3 (MaxPooling2 (None, 2, 108, 8) 0 </p> <hr> <p>activation_4 (Activation) (None, 2, 108, 8) 0 </p> <hr> <p>flatten_1 (Flatten) (None, 1728) 0 </p> <hr> <p>dropout_1 (Dropout) (None, 1728) 0 </p> <hr> <p>dense_1 (Dense) (None, 19316) 33397364 </p> <p>================================================================= Total params: 33,400,492 Trainable params: 33,400,492 Non-trainable params: 0</p> <hr> <pre><code>def generator(data_arr, batch_size = 10): num = len(data_arr) if num % batch_size != 0 : num = int(num/batch_size) # Loop forever so the generator never terminates while True: for offset in range(0, num, batch_size): batch_samples = (data_arr[offset:offset+batch_size]) samples = [] labels = [] for batch_sample in batch_samples: samples.append(batch_sample[0]) labels.append((np.array(batch_sample[1].flatten)).transpose()) X_ = np.array(samples) Y_ = np.array(labels) X_ = X_[:, :, :, newaxis] yield (X_, Y_) # compile and train the model using the generator function train_generator = generator(training_data, batch_size = 10) validation_generator = generator(val_data, batch_size = 10) run_opts = tf.RunOptions(report_tensor_allocations_upon_oom = True) model = Sequential() model.add(Conv2D(32, (4, 4), strides=(2, 2), input_shape = (55, 1756, 1))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size = (2, 2))) model.add(Conv2D(16, (2, 2))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size = (2, 2))) model.add(Conv2D(8, (2, 2))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size = (2, 2))) model.add(Activation('softmax')) model.add(Flatten()) model.add(Dropout(0.3)) model.add(Dense(19316)) model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'], options = run_opts) model.summary() batch_size = 20 nb_epoch = 6 model.fit_generator(train_generator, steps_per_epoch = len(training_data) , epochs = nb_epoch, validation_data = validation_generator, validation_steps = len(val_data)) </code></pre>
2019-07-21 09:56:37.477000+00:00
2020-04-24 06:56:30.597000+00:00
null
python|tensorflow|keras|deep-learning|conv-neural-network
['https://arxiv.org/abs/1505.04597', 'https://github.com/iver56/image-regression']
2
65,299,278
<p>A small list with some examples:</p> <ul> <li>Some algorithms in computational number theory have their complexities expressed in <a href="https://en.wikipedia.org/wiki/L-notation" rel="nofollow noreferrer">L-notation</a>, and some of them feature irrational exponents: <a href="https://en.wikipedia.org/wiki/Lenstra_elliptic-curve_factorization#Algorithm" rel="nofollow noreferrer">Lenstra's elliptic-curve factorization</a>, <a href="https://en.wikipedia.org/wiki/Dixon%27s_factorization_method#Optimizations" rel="nofollow noreferrer">Dixon's factorization</a>, <a href="https://en.wikipedia.org/wiki/Index_calculus_algorithm#Complexity" rel="nofollow noreferrer">Index calculus algorithm</a>.</li> <li><a href="https://en.wikipedia.org/wiki/Stooge_sort" rel="nofollow noreferrer">Stooge Sort</a> has in its complexity the exponent <code>log(3)/log(3/2)</code> which is an irrational number.</li> <li><a href="https://en.wikipedia.org/wiki/Toom%E2%80%93Cook_multiplication" rel="nofollow noreferrer">Toom-Cook multiplication</a>.</li> <li>Allegedly, <a href="https://cstheory.stackexchange.com/a/32633">all SAT solvers feature</a> an irrational exponent in their complexity</li> <li><a href="https://cstheory.stackexchange.com/a/15002">The following list</a></li> </ul> <p>But more generally, it should be possible to mine (and maybe even monitor) <a href="https://en.wikipedia.org/wiki/List_of_algorithms" rel="nofollow noreferrer">wikipedia's list of algorithms</a> , parse and extract the mathematical expression associated with time complexity, and possibly feed the list of expressions into a <a href="https://en.wikipedia.org/wiki/Computer_algebra_system" rel="nofollow noreferrer">CAS</a> (for example <a href="https://www.sympy.org/" rel="nofollow noreferrer">SymPy</a> using a convertor like <a href="https://github.com/augustt198/latex2sympy" rel="nofollow noreferrer">latex2sympy</a>) capable of figuring out if there are any irrational exponents involved (wikidata could be a better option if it had complete structured data coverage, example: <a href="https://www.wikidata.org/wiki/Q486598" rel="nofollow noreferrer">wikidata's quicksort page</a>). It would also be possible to extend this data collection past Wikipedia, into arXiv which apparently <a href="https://arxiv.org/help/view" rel="nofollow noreferrer">offers latex sources</a> for some of the articles, and then employ a latex parser and an expression parser to find these types of complexities.</p>
2020-12-15 02:41:04.657000+00:00
2021-01-10 05:44:15.647000+00:00
2021-01-10 05:44:15.647000+00:00
null
65,298,653
<p>Hi, I was wondering if there are any algorithms whose running-time complexity contains an irrational exponent. Strassen's algorithm for matrix multiplication is the kind of thing I am looking for, but are there more?</p>
2020-12-15 01:04:38.483000+00:00
2021-01-10 05:44:15.647000+00:00
null
time-complexity|logarithm|exponential
['https://en.wikipedia.org/wiki/L-notation', 'https://en.wikipedia.org/wiki/Lenstra_elliptic-curve_factorization#Algorithm', 'https://en.wikipedia.org/wiki/Dixon%27s_factorization_method#Optimizations', 'https://en.wikipedia.org/wiki/Index_calculus_algorithm#Complexity', 'https://en.wikipedia.org/wiki/Stooge_sort', 'https://en.wikipedia.org/wiki/Toom%E2%80%93Cook_multiplication', 'https://cstheory.stackexchange.com/a/32633', 'https://cstheory.stackexchange.com/a/15002', 'https://en.wikipedia.org/wiki/List_of_algorithms', 'https://en.wikipedia.org/wiki/Computer_algebra_system', 'https://www.sympy.org/', 'https://github.com/augustt198/latex2sympy', 'https://www.wikidata.org/wiki/Q486598', 'https://arxiv.org/help/view']
14
51,648,054
<p>Adam keeps per parameter <a href="https://arxiv.org/pdf/1412.6980.pdf" rel="nofollow noreferrer">statistics for its update</a>, see Alg. 1. In TensorFlow these are <a href="https://github.com/tensorflow/tensorflow/blob/6da31773d5d6aecb5672da6024bd03fc803ee599/tensorflow/contrib/optimizer_v2/optimizer_v2.py#L926" rel="nofollow noreferrer">generated here</a> and <a href="https://github.com/tensorflow/tensorflow/blob/6da31773d5d6aecb5672da6024bd03fc803ee599/tensorflow/contrib/optimizer_v2/adam.py#L115-L118" rel="nofollow noreferrer">there</a>.</p> <p>For inference, you just need to rely on <code>.../kernel:0</code>. </p>
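<p>As a quick illustration, you could separate the actual weights from Adam's slot variables by filtering on the variable names (this assumes the TF1-style graph from the question; the suffixes below are the names the Adam optimizer gives its slots):</p> <pre><code>import tensorflow as tf

gvars = tf.global_variables()

# model parameters: what you want for inference / evaluation
model_vars = [v for v in gvars
              if '/Adam' not in v.name
              and 'beta1_power' not in v.name
              and 'beta2_power' not in v.name]

# Adam's per-parameter first/second moment estimates (training only)
adam_slots = [v for v in gvars if '/Adam' in v.name]

for v in model_vars:
    print(v.name, v.shape)
</code></pre>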
2018-08-02 07:37:36.300000+00:00
2018-08-02 07:37:36.300000+00:00
null
null
51,646,123
<p>I'm working on Tensorflow recently.</p> <p>I have a trained model, and need to check the variables in it. So, I've restored the graph from the meta file, and obtained the variables by: </p> <pre><code>gvars = tf.global_variables() </code></pre> <p>I'm interested in the kernels of each convolution layers, and they got names like <code>'.../kernel:0'</code>. However, I found a similar tensor named <code>'.../kernel/Adam:0'</code> but having totally different values!! What I only understand is that the <code>.../Adam:0'</code> things are related to the training (optimization) processes, but, not sure...</p> <p>So.. what is the difference between two, and which one is actually used in evaluating, testing, deploying, etc?</p>
2018-08-02 05:29:39.277000+00:00
2018-08-02 07:37:36.300000+00:00
2018-08-02 05:33:51.213000+00:00
python|variables|tensorflow|tensor
['https://arxiv.org/pdf/1412.6980.pdf', 'https://github.com/tensorflow/tensorflow/blob/6da31773d5d6aecb5672da6024bd03fc803ee599/tensorflow/contrib/optimizer_v2/optimizer_v2.py#L926', 'https://github.com/tensorflow/tensorflow/blob/6da31773d5d6aecb5672da6024bd03fc803ee599/tensorflow/contrib/optimizer_v2/adam.py#L115-L118']
3
30,930,330
<p>There is indeed work on dynamic toposort.</p> <p>Pearce &amp; Kelly <a href="http://homepages.ecs.vuw.ac.nz/~djp/dts.html" rel="nofollow">http://homepages.ecs.vuw.ac.nz/~djp/dts.html</a> have an algorithm that they claim is both simple and practically efficient. They also provide a C++ implementation. By way of introduction, they discuss even simpler variants. </p> <p>Here is some of the later work on this problem, in chronological order. Later methods tend to be more complicated than earlier ones. Even if not, the later papers may assume that you have already read the earlier ones. You apparently started with the last paper on this list, which might have been jumping into the deep end.</p> <ul> <li><a href="http://www.cs.princeton.edu/~sssix/papers/dto-icalp08.pdf" rel="nofollow">http://www.cs.princeton.edu/~sssix/papers/dto-icalp08.pdf</a></li> <li><a href="http://dl.acm.org/citation.cfm?ID=1496890" rel="nofollow">http://dl.acm.org/citation.cfm?ID=1496890</a></li> <li><a href="http://dl.acm.org/citation.cfm?ID=2071382" rel="nofollow">http://dl.acm.org/citation.cfm?ID=2071382</a></li> <li><a href="http://arxiv.org/pdf/1310.8381.pdf" rel="nofollow">http://arxiv.org/pdf/1310.8381.pdf</a> (which you mention in your question)</li> </ul>
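<p>To give a flavour of the Pearce &amp; Kelly approach, here is a simplified Python sketch (my own reading of the algorithm, without the optimisations in the paper): keep an index per node, and when an inserted edge violates the current order, reorder only the small "affected region" between its endpoints.</p> <pre><code>class DynamicTopoOrder:
    def __init__(self, nodes):
        self.succ = {n: set() for n in nodes}            # outgoing edges
        self.pred = {n: set() for n in nodes}            # incoming edges
        self.ord = {n: i for i, n in enumerate(nodes)}   # topological index

    def add_edge(self, u, v):
        """Insert edge u -&gt; v, repairing the topological order if needed."""
        if self.ord[u] &lt; self.ord[v]:
            self.succ[u].add(v); self.pred[v].add(u)
            return                                       # order still valid
        lb, ub = self.ord[v], self.ord[u]                # affected region
        fwd = self._reach(v, self.succ, lambda n: self.ord[n] &lt;= ub)
        if u in fwd:
            raise ValueError("edge would create a cycle")
        back = self._reach(u, self.pred, lambda n: self.ord[n] &gt;= lb)
        self._reorder(back, fwd)
        self.succ[u].add(v); self.pred[v].add(u)

    def _reach(self, start, adj, in_region):
        seen, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(m for m in adj[n] if in_region(m))
        return seen

    def _reorder(self, back, fwd):
        # everything reaching u must now precede everything reachable
        # from v; reuse the indices these nodes already occupy
        affected = sorted(back | fwd, key=self.ord.get)
        slots = [self.ord[n] for n in affected]
        new_seq = sorted(back, key=self.ord.get) + sorted(fwd, key=self.ord.get)
        for n, i in zip(new_seq, slots):
            self.ord[n] = i
</code></pre> <p>Reading the current order back out is just sorting the nodes by their index, and edge deletions are easy because removing an edge can never invalidate a topological order.</p>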
2015-06-19 05:09:29.130000+00:00
2015-06-19 05:12:30.493000+00:00
2015-06-19 05:12:30.493000+00:00
null
24,748,744
<p>I'm currently implementing a dynamic DAG graph in C++—it will be displayed through an UI to the user and insertion/removal of nodes/edges will be common operations.</p> <p>The size of the graphs might potentially range from the really small scale to the large one—I'm aiming to support millions of nodes.</p> <p>As such, I'm looking for an optimal data structure that won't take up too much space in memory but also for a way to have fast insertions/removals with a fast multi-threaded iteration over the topologically sorted nodes (so multiple nodes can be executed in parallel).</p> <p>I haven't done any profiling to see if a naive approach of recomputing a topological sort of the full graph each time a modification is being done would cut it, but for the sake of learning, I thought I'd rather find a “smarter” way.</p> <p>I've got no idea how to approach the multi-threaded iteration of the graph but for a start I've stumbled upon some papers related to the iterative/dynamic topological sorting step, and the problem is that they're a bit too smart for me to understand. It gets way into the theoretical/mathematical side and lacks concrete implementation examples that could help me to understand what's going on.</p> <p>Here's an example of a such paper: <a href="http://arxiv.org/pdf/1310.8381.pdf" rel="nofollow">A Labeling Approach to Incremental Cycle Detection</a>.</p> <p>Since there's a lack of papers such as “Iterative/Dynamic Topological Sorting for Dummies”, does anyone have any hint on subject?</p>
2014-07-15 01:55:19.953000+00:00
2015-06-19 05:12:30.493000+00:00
null
c++|sorting|graph|directed-acyclic-graphs|topological-sort
['http://homepages.ecs.vuw.ac.nz/~djp/dts.html', 'http://www.cs.princeton.edu/~sssix/papers/dto-icalp08.pdf', 'http://dl.acm.org/citation.cfm?ID=1496890', 'http://dl.acm.org/citation.cfm?ID=2071382', 'http://arxiv.org/pdf/1310.8381.pdf']
5
58,762,559
<p>Although there is no best activation function as such, I find <a href="https://arxiv.org/abs/1710.05941" rel="nofollow noreferrer"><code>Swish</code></a> to work particularly well for time-series problems. AFAIK Keras doesn't provide <code>Swish</code> built in, but you can define it yourself:</p> <pre><code>from keras.utils.generic_utils import get_custom_objects from keras import backend as K from keras.layers import Activation def custom_activation(x, beta = 1): return (K.sigmoid(beta * x) * x) get_custom_objects().update({'custom_activation': Activation(custom_activation)}) </code></pre> <p>Then use it in the model:</p> <pre><code>model.add(Activation(custom_activation,name = "Swish")) </code></pre>
2019-11-08 07:54:09.607000+00:00
2020-04-11 08:46:04.377000+00:00
2020-04-11 08:46:04.377000+00:00
null
58,761,233
<p>I am using the Sequential model from Keras, with the DENSE layer type. I wrote a function that recursively calculates predictions, but the predictions are way off. I am wondering what is the best activation function to use for my data. Currently I am using hard_sigmoid function. The output data values range from 5 to 25. The input data has the shape (6,1) and the output data is a single value. When I plot the predictions they never decrease. Thank you for the help!!</p> <pre><code># create and fit Multilayer Perceptron model model = Sequential(); model.add(Dense(20, input_dim=look_back, activation='hard_sigmoid')) model.add(Dense(16, activation='hard_sigmoid')) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(trainX, trainY, epochs=200, batch_size=2, verbose=0) #function to predict using predicted values numOfPredictions = 96; for i in range(numOfPredictions): temp = [[origAndPredictions[i,0],origAndPredictions[i,1],origAndPredictions[i,2],origAndPredictions[i,3],origAndPredictions[i,4],origAndPredictions[i,5]]] temp = numpy.array(temp) temp1 = model.predict(temp) predictions = numpy.append(predictions, temp1, axis=0) temp2 = [] temp2 = [[origAndPredictions[i,1],origAndPredictions[i,2],origAndPredictions[i,3],origAndPredictions[i,4],origAndPredictions[i,5],predictions[i,0]]] temp2 = numpy.array(temp2) origAndPredictions = numpy.vstack((origAndPredictions, temp2)) </code></pre> <p><a href="https://i.stack.imgur.com/ZQvm9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZQvm9.png" alt="enter image description here" /></a></p> <p>update: I used this code to implement the swish.</p> <pre><code>from keras.backend import sigmoid def swish1(x, beta = 1): return (x * sigmoid(beta * x)) def swish2(x, beta = 1): return (x * sigmoid(beta * x)) from keras.utils.generic_utils import get_custom_objects from keras.layers import Activation get_custom_objects().update({'swish': Activation(swish)}) model.add(Activation(custom_activation,name = &quot;swish1&quot;)) </code></pre> <p><a href="https://i.stack.imgur.com/qE3vb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qE3vb.png" alt="New plot of predictions using swish." /></a></p> <p>update: Using this code:</p> <pre><code>from keras.backend import sigmoid from keras import backend as K def swish1(x): return (K.sigmoid(x) * x) def swish2(x): return (K.sigmoid(x) * x) </code></pre> <p><a href="https://i.stack.imgur.com/bt1nz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bt1nz.png" alt="enter image description here" /></a></p> <p>Thanks for all the help!!</p>
2019-11-08 06:02:52.880000+00:00
2020-04-11 08:46:04.377000+00:00
2020-06-20 09:12:55.060000+00:00
python|keras|neural-network|sequential|activation-function
['https://arxiv.org/abs/1710.05941']
1
42,661,991
<p>To solving this problem,we can define a linear() function.</p> <pre><code>def linear(input_, output_size, scope=None): ''' Linear map: output[k] = sum_i(Matrix[k, i] * args[i] ) + Bias[k] Args: args: a tensor or a list of 2D, batch x n, Tensors. output_size: int, second dimension of W[i]. scope: VariableScope for the created subgraph; defaults to "Linear". Returns: A 2D Tensor with shape [batch x output_size] equal to sum_i(args[i] * W[i]), where W[i]s are newly created matrices. Raises: ValueError: if some of the arguments has unspecified or wrong shape. ''' shape = input_.get_shape().as_list() if len(shape) != 2: raise ValueError("Linear is expecting 2D arguments: %s" % str(shape)) if not shape[1]: raise ValueError("Linear expects shape[1] of arguments: %s" % str(shape)) input_size = shape[1] # Now the computation. with tf.variable_scope(scope or "SimpleLinear"): matrix = tf.get_variable("Matrix", [output_size, input_size], dtype=input_.dtype) bias_term = tf.get_variable("Bias", [output_size], dtype=input_.dtype) return tf.matmul(input_, tf.transpose(matrix)) + bias_term def highway(input_, size, num_layers=1, bias=-2.0, f=tf.nn.relu, scope='Highway'): """Highway Network (cf. http://arxiv.org/abs/1505.00387). t = sigmoid(Wy + b) z = t * g(Wy + b) + (1 - t) * y where g is nonlinearity, t is transform gate, and (1 - t) is carry gate. """ with tf.variable_scope(scope): for idx in range(num_layers): g = f(linear(input_, size, scope='highway_lin_%d' % idx)) t = tf.sigmoid(linear(input_, size, scope='highway_gate_%d' % idx) + bias) output = t * g + (1. - t) * input_ input_ = output return output </code></pre> <p><a href="https://github.com/mkroutikov/tf-lstm-char-cnn/blob/7e899e6992cbf9a96e6d791e5d364eaaeec339a2/model.py" rel="nofollow noreferrer">https://github.com/mkroutikov/tf-lstm-char-cnn/blob/7e899e6992cbf9a96e6d791e5d364eaaeec339a2/model.py</a></p>
2017-03-08 02:40:19.530000+00:00
2017-03-08 02:40:19.530000+00:00
null
null
42,437,115
<p>I am trying to get the SequenceGAN (<a href="https://github.com/LantaoYu/SeqGAN" rel="noreferrer">https://github.com/LantaoYu/SeqGAN</a>) from <a href="https://arxiv.org/pdf/1609.05473.pdf" rel="noreferrer">https://arxiv.org/pdf/1609.05473.pdf</a> to run.<br> After fixing the obvious errors, like replacing <code>pack</code> with <code>stack</code>, it still doesn't run, since the highway-network part requires the <code>tf.nn.rnn_cell._linear</code> function:</p> <pre><code># highway layer that borrowed from https://github.com/carpedm20/lstm-char-cnn-tensorflow def highway(input_, size, layer_size=1, bias=-2, f=tf.nn.relu): """Highway Network (cf. http://arxiv.org/abs/1505.00387). t = sigmoid(Wy + b) z = t * g(Wy + b) + (1 - t) * y where g is nonlinearity, t is transform gate, and (1 - t) is carry gate. """ output = input_ for idx in range(layer_size): output = f(tf.nn.rnn_cell._linear(output, size, 0, scope='output_lin_%d' % idx)) #tf.contrib.layers.linear instad doesn't work either. transform_gate = tf.sigmoid(tf.nn.rnn_cell._linear(input_, size, 0, scope='transform_lin_%d' % idx) + bias) carry_gate = 1. - transform_gate output = transform_gate * output + carry_gate * input_ return output </code></pre> <p>the <code>tf.nn.rnn_cell._linear</code> function doesn't appear to be there anymore in Tensorflow 1.0 or 0.12, and I have no clue what to replace it with. I can't find any new implementations of this, or any information on tensorflow's github or (unfortunately very sparse) documentation.</p> <p>Does anybody know the new pendant of the function? Thanks a lot in advance!</p>
2017-02-24 11:08:11.443000+00:00
2019-03-24 05:06:06.397000+00:00
2017-02-24 11:36:49.480000+00:00
python|tensorflow|neural-network
['https://github.com/mkroutikov/tf-lstm-char-cnn/blob/7e899e6992cbf9a96e6d791e5d364eaaeec339a2/model.py']
1
62,903,258
<p>Modulo reduction is a commonly seen way to make a random integer generator avoid the worst case of running forever.</p> <p>When the range of possible integers is unknown, however, there is no way in general to &quot;fix&quot; this worst case of running forever without introducing bias. It's not just modulo reduction (<code>rand() % n</code>, discussed in the accepted answer) that will introduce bias this way, but also the &quot;multiply-and-shift&quot; reduction of Daniel Lemire, or if you stop rejecting an outcome after a set number of iterations. (To be clear, this doesn't mean there is no way to fix the bias issues present in pseudorandom generators. For example, even though modulo and other reductions are biased in general, they will have no issues with bias if the range of possible integers is a power of 2 <em>and</em> if the random generator produces unbiased random bits or blocks of them.)</p> <p>The rest of this answer will show the relationship between running time and bias in random generators. From here on, we will assume we have a &quot;true&quot; random generator that can produce unbiased and independent random bits.*</p> <p>In 1976, D. E. Knuth and A. C. Yao showed that any algorithm that produces random integers with a given probability, using only random bits, can be represented as a binary tree, where random bits indicate which way to traverse the tree and each leaf (endpoint) corresponds to an outcome. In this case, we're dealing with algorithms that generate random integers in [0, n), where each integer is chosen with probability 1/n. The algorithm is <em>unbiased</em> if the same number of leaves appear in the tree for all outcomes. But if 1/n has a non-terminating binary expansion (which will be the case if n is not a power of 2), the algorithm will be unbiased only if—</p> <ul> <li>the binary tree has an &quot;infinite&quot; depth, or</li> <li>the binary tree includes &quot;rejection&quot; leaves at the end,</li> </ul> <p>and in either case, the algorithm won't run in constant time and will run forever in the worst case. (On the other hand, when <code>n</code> is a power of 2, the optimal binary tree will have a finite depth and no rejection nodes.)</p> <p>The binary tree concept also shows that any way to &quot;fix&quot; this worst-case time complexity will lead to bias in general. (Again, this doesn't mean there is no way to fix the bias issues present in pseudorandom generators.) For instance, modulo reductions are equivalent to a binary tree in which rejection leaves are replaced with labeled outcomes — but since there are more possible outcomes than rejection leaves, only some of the outcomes can take the place of the rejection leaves, introducing bias. The same kind of binary tree — and the same kind of bias — results if you stop rejecting after a set number of iterations. (However, this bias may be negligible depending on the application. There are also security aspects to random integer generation, which are too complicated to discuss in this answer.)</p> <p>To illustrate, the following JavaScript code implements a random integer algorithm called the <a href="https://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Fast Dice Roller</a> by J. Lumbroso (2013). 
Note that it includes a rejection event and a loop which are necessary to make the algorithm unbiased in the general case.</p> <pre><code>function randomInt(minInclusive, maxExclusive) { var maxInclusive = (maxExclusive - minInclusive) - 1 var x = 1 var y = 0 while(true) { x = x * 2 var randomBit = (Math.random() &lt; 0.5 ? 0 : 1) y = y * 2 + randomBit if(x &gt; maxInclusive) { if (y &lt;= maxInclusive) { return y + minInclusive } // Rejection x = x - maxInclusive - 1 y = y - maxInclusive - 1 } } } </code></pre> <h3>Note</h3> <p>* This answer won't involve the <code>rand()</code> function in C because it <a href="https://stackoverflow.com/questions/52869166/why-is-the-use-of-rand-considered-bad/52881465#52881465">has many issues</a>. Perhaps the most serious here is the fact that the C standard does not explicitly specify a particular distribution for the numbers returned by <code>rand()</code>, not even a uniform distribution.</p>
2020-07-14 20:09:56.660000+00:00
2021-01-07 00:36:21.833000+00:00
2021-01-07 00:36:21.833000+00:00
null
10,984,974
<p>I have seen this question asked a lot but never seen a true concrete answer to it. So I am going to post one here which will hopefully help people understand why exactly there is "modulo bias" when using a random number generator, like <code>rand()</code> in C++.</p>
2012-06-11 17:44:03.120000+00:00
2022-06-27 10:19:59.173000+00:00
2016-04-02 10:50:41.987000+00:00
c++|random|language-agnostic|modulo
['https://arxiv.org/abs/1304.1916', 'https://stackoverflow.com/questions/52869166/why-is-the-use-of-rand-considered-bad/52881465#52881465']
2
63,543,036
<p>Yup, this is easy to express in TFF, and it will execute just fine in the default execution stacks.</p> <p>As you've noticed, the TFF repository generally has examples of <em>cross-device Federated Learning</em> (<a href="https://arxiv.org/abs/1912.04977" rel="noreferrer">Kairouz et al., 2019</a>). Generally we talk about the state having <code>tff.SERVER</code> placement, and the function signature for one &quot;round&quot; of federated learning has the structure (for details about TFF's type shorthand, see the <a href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_1#federated_data" rel="noreferrer">Federated data</a> section of the tutorials):</p> <pre><code>(&lt;State@SERVER, {Dataset}@CLIENTS&gt; -&gt; State@Server) </code></pre> <p>We can represent stateful clients by simply extending the signature:</p> <pre><code>(&lt;State@SERVER, {State}@Clients, {Dataset}@CLIENTS&gt; -&gt; &lt;State@Server, {State}@Clients&gt;) </code></pre> <p>Implementing a version of Federated Averaging (<a href="https://arxiv.org/abs/1602.05629" rel="noreferrer">McMahan et al., 2016</a>) that includes a client state object might look something like:</p> <pre class="lang-py prettyprint-override"><code>@tff.tf_computation( model_type, client_state_type, # additional state parameter client_data_type) def client_training_fn(model, state, dataset): model_update, new_state = # do some local training return model_update, new_state # return a tuple including updated state @tff.federated_computation( tff.FederatedType(server_state_type, tff.SERVER), tff.FederatedType(client_state_type, tff.CLIENTS), # new parameter for state tff.FederatedType(client_data_type, tff.CLIENTS)) def run_fed_avg(server_state, client_states, client_datasets): client_initial_models = tff.federated_broadcast(server_state.model) client_updates, new_client_states = tff.federated_map(client_training_fn, # Pass the client states as an argument. (client_initial_models, client_states, client_datasets)) average_update = tff.federated_mean(client_updates) new_server_state = tff.federated_map(server_update_fn, (server_state, average_update)) # Make sure to return the client states so they can be used in later rounds. return new_server_state, new_client_states </code></pre> <p>The invocation of <code>run_fed_avg</code> would require passing a Python <code>list</code> of tensors/structures for each client participating in a round, and the result of the method invocation will be the server state and a list of client states.</p>
2020-08-23 03:16:18.033000+00:00
2020-08-23 03:16:18.033000+00:00
null
null
63,498,907
<p>The code in the TFF tutorials and in the research projects I see generally only keep track of server states. I’d like there to be internal client states (for instance, additional client internal neural networks which are completely decentralized and don’t update in a federated manner) that would influence the federated client computations.</p> <p>However, in the client computations I have seen, they are only functions of the server states and the data. Is it possible to accomplish the above?</p>
2020-08-20 05:39:16.473000+00:00
2021-08-25 14:44:41.200000+00:00
null
tensorflow-federated
['https://arxiv.org/abs/1912.04977', 'https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_1#federated_data', 'https://arxiv.org/abs/1602.05629']
3
37,578,716
<p>Just as @Dmitri Budnikov mentioned, cache information is not publicly available.</p> <p>Some researchers are working on this problem, and <a href="http://arxiv.org/pdf/1509.02308.pdf" rel="nofollow">this paper</a> gives us some insights into the memory hierarchy of the GPU architecture.</p> <p>Their findings on the L2 cache can be summarized as follows:</p> <ol> <li>The replacement policy of the L2 cache is not LRU;</li> <li>The L2 cache line size is 32 bytes;</li> <li>The data mapping is sophisticated and not defined by conventional address bits;</li> <li>A hardware-level pre-fetching mechanism from the DRAM to the L2 data cache is found on the Fermi, Kepler and Maxwell architectures.</li> </ol> <p>The benchmarks they developed can be found <a href="http://www.comp.hkbu.edu.hk/~chxw/gpu_benchmark.html#l2_cache" rel="nofollow">here</a>.</p>
2016-06-01 21:02:02.500000+00:00
2016-06-01 21:02:02.500000+00:00
null
null
33,432,821
<p>I need detailed information about the L2 caches of NVIDIA Kepler GPUs. I know the size (e.g. 512KB on a GT740M GPU) and the block size (32B) of the cache. I tried to capture the associativity, the replacement policy, and, more importantly, the mapping function (from global address to cache line) with a sample kernel, profiling the read hit ratio with the nvprof profiler. I realized that the mapping is not a simple modulo operation. Is there any trick to find out which cache line a given global address is mapped to? Can anyone help me?</p>
2015-10-30 09:39:07.447000+00:00
2016-06-01 21:02:02.500000+00:00
null
caching|gpu
['http://arxiv.org/pdf/1509.02308.pdf', 'http://www.comp.hkbu.edu.hk/~chxw/gpu_benchmark.html#l2_cache']
2
50,828,479
<p>From the paper <a href="https://arxiv.org/abs/1607.04606" rel="noreferrer">Enriching Word Vectors with Subword Information</a>:</p> <blockquote> <p>Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character n-grams. A vector representation is associated to each character n-gram; words being represented as the sum of these representations.</p> </blockquote> <p>So out-of-vocab words are represented as <strong>the sum of character ngram vectors</strong>. While the intent is to handle out-of-vocab words (unks) like "blargfizzle", it also handles phrases like your input.</p> <p>If you look at <a href="https://github.com/RaRe-Technologies/gensim/blob/e50a0574e70b4c8808b16b50f140f21c4bb6352e/gensim/models/keyedvectors.py#L1790" rel="noreferrer">the implementation of the vectors</a> in Gensim you can see this is indeed what it's doing (along with normalization and hashing etc) - I added some comments starting with XXX:</p> <pre><code>def word_vec(self, word, use_norm=False): """ Accept a single word as input. Returns the word's representations in vector space, as a 1D numpy array. If `use_norm` is True, returns the normalized word vector. """ if word in self.vocab: # XXX in-vocab terms return with a simple lookup return super(FastTextKeyedVectors, self).word_vec(word, use_norm) else: # from gensim.models.fasttext import compute_ngrams # XXX Initialize the vector for the unk word_vec = np.zeros(self.vectors_ngrams.shape[1], dtype=np.float32) ngrams = _compute_ngrams(word, self.min_n, self.max_n) if use_norm: ngram_weights = self.vectors_ngrams_norm else: ngram_weights = self.vectors_ngrams ngrams_found = 0 for ngram in ngrams: ngram_hash = _ft_hash(ngram) % self.bucket if ngram_hash in self.hash2index: # XXX add the vector for the ngram to the unk vector word_vec += ngram_weights[self.hash2index[ngram_hash]] ngrams_found += 1 if word_vec.any(): return word_vec / max(1, ngrams_found) else: # No ngrams of the word are present in self.ngrams raise KeyError('all ngrams for word %s absent from model' % word) </code></pre> <p>Note that this doesn't mean it can provide vectors for any arbitrary string - it still needs to have data for at least some of the ngrams in an unk, so a string like <code>xwkxwkzrw</code> or <code>天爾遠波</code> will probably fail to return anything if your vectors are trained on English.</p>
2018-06-13 02:56:58.373000+00:00
2018-06-13 02:56:58.373000+00:00
null
null
50,828,314
<p>I am using gensim to load pre-trained fasttext model. I downloaded the English wikipedia trained model from fasttext <a href="https://github.com/facebookresearch/fastText/blob/master/docs/crawl-vectors.md" rel="noreferrer">website</a>. </p> <p>here is the code I wrote to load the pre-trained model: </p> <pre><code>from gensim.models import FastText as ft model=ft.load_fasttext_format("wiki.en.bin") </code></pre> <p>I try to check if the following phrase exists in the vocal(which rare chance it would as these are pre-trained model). </p> <pre><code>print("internal executive" in model.wv.vocab) print("internal executive" in model.wv) False True </code></pre> <p>So the phrase "internal executive" is not present in the vocabulary but we still have the word vector corresponding to that. </p> <pre><code>model.wv["internal executive"] Out[46]: array([ 0.0210917 , -0.15233646, -0.1173932 , -0.06210957, -0.07288644, -0.06304111, 0.07833624, -0.17026938, -0.21922196, 0.01146349, -0.13639058, 0.17283678, -0.09251394, -0.17875175, 0.01339212, -0.26683623, 0.05487974, -0.11843193, -0.01982722, 0.37037706, -0.24370994, 0.14269598, -0.16363597, 0.00328478, -0.16560239, -0.1450972 , -0.24787527, -0.01318423, 0.03277111, 0.16175713, -0.19367714, 0.16955379, 0.1972683 , 0.09044111, 0.01731548, -0.0034324 , -0.04834719, 0.14321515, 0.01422525, -0.08803893, -0.29411593, -0.1033244 , 0.06278021, 0.16452256, 0.0650492 , 0.1506474 , -0.14194389, 0.10778475, 0.16008648, -0.07853138, 0.2183501 , -0.25451994, -0.0345991 , -0.28843886, 0.19964759, -0.10923116, 0.26665714, -0.02544454, 0.30637854, 0.04568949, -0.04798719, -0.05769338, 0.25762403, -0.05158515, -0.04426906, -0.19901046, 0.00894193, -0.17269588, -0.24747233, -0.19061406, 0.14322804, -0.10804397, 0.4002605 , 0.01409482, -0.04675362, 0.10039093, 0.07260711, -0.0938239 , -0.20434211, 0.05741301, 0.07592541, -0.02921724, 0.21137556, -0.23188967, -0.23164661, -0.4569614 , 0.07434579, 0.10841205, -0.06514647, 0.01220404, 0.02679767, 0.11840229, 0.2247431 , -0.1946325 , -0.0990666 , -0.02524677, 0.0801085 , 0.02437297, 0.00674876, 0.02088535, 0.21464555, -0.16240154, 0.20670174, -0.21640894, 0.03900698, 0.21772243, 0.01954809, 0.04541844, 0.18990673, 0.11806394, -0.21336791, -0.10871669, -0.02197789, -0.13249406, -0.20440844, 0.1967368 , 0.09804545, 0.1440366 , -0.08401451, -0.03715726, 0.27826542, -0.25195453, -0.16737154, 0.3561183 , -0.15756823, 0.06724873, -0.295487 , 0.28395334, -0.04908851, 0.09448399, 0.10877471, -0.05020981, -0.24595442, -0.02822314, 0.17862654, 0.06452435, -0.15105674, -0.31911567, 0.08166212, 0.2634299 , 0.17043628, 0.10063848, 0.0687021 , -0.12210461, 0.10803893, 0.13644943, 0.10755012, -0.09816817, 0.11873955, -0.03881042, 0.18548298, -0.04769253, -0.01511982, -0.08552645, -0.05218676, 0.05387992, 0.0497043 , 0.06922272, -0.0089245 , 0.24790663, 0.27209425, -0.04925154, -0.08621719, 0.15918174, 0.25831223, 0.01654229, -0.03617229, -0.13490392, 0.08033483, 0.34922174, -0.01744722, -0.16894792, -0.10506647, 0.21708378, -0.22582002, 0.15625793, -0.10860757, -0.06058934, -0.25798836, -0.20142137, -0.06613475, -0.08779443, -0.10732629, 0.05967236, -0.02455976, 0.2229451 , -0.19476262, -0.2720119 , 0.03687386, -0.01220259, 0.07704347, -0.1674307 , 0.2400516 , 0.07338555, -0.2000631 , 0.13897157, -0.04637206, -0.00874449, -0.32827383, -0.03435039, 0.41587186, 0.04643605, 0.03352945, -0.13700874, 0.16430037, -0.13630766, -0.18546128, -0.04692861, 0.37308362, -0.30846512, 0.5535561 , -0.11573419, 0.2332801 , -0.07236694, -0.01018955, 
0.05936847, 0.25877884, -0.2959846 , -0.13610311, 0.10905041, -0.18220575, 0.06902339, -0.10624941, 0.33002165, -0.12087796, 0.06742091, 0.20762768, -0.34141317, 0.0884434 , 0.11247049, 0.14748637, 0.13261876, -0.07357208, -0.11968047, -0.22124515, 0.12290633, 0.16602683, 0.01055585, 0.04445777, -0.11142147, 0.00004863, 0.22543314, -0.14342701, -0.23209116, -0.00003538, 0.19272381, -0.13767233, 0.04850799, -0.281997 , 0.10343244, 0.16510887, 0.08671653, -0.24125539, 0.01201926, 0.0995285 , 0.09807415, -0.06764816, -0.0206733 , 0.04697794, 0.02000999, 0.05817033, 0.10478792, 0.0974884 , -0.01756372, -0.2466861 , 0.02877498, 0.02499748, -0.00370895, -0.04728201, 0.00107118, -0.21848503, 0.2033032 , -0.00076264, 0.03828803, -0.2929495 , -0.18218371, 0.00628893, 0.20586628, 0.2410889 , 0.02364616, -0.05220835, -0.07040054, -0.03744286, -0.06718048, 0.19264086, -0.06490505, 0.27364203, 0.05527219, -0.27494466, 0.22256687, 0.10330909, -0.3076979 , 0.04852265, 0.07411488, 0.23980476, 0.1590279 , -0.26712465, 0.07580928, 0.05644221, -0.18824042], </code></pre> <p>Now my confusion is that Fastext creates vectors for character ngrams of a word too. So for a word "internal" it will create vectors for all its character ngrams including the full word and then the final word vector for the word is the sum of its character ngrams. </p> <p>However, how it is still able to give me vector of a word or even the whole sentence? Isn't fastext vector is for a word and its ngram? So what are these vector I am seeing for the phrase when its clearly two words?</p>
2018-06-13 02:33:19.897000+00:00
2019-03-29 22:36:40.557000+00:00
2018-06-14 06:40:43.037000+00:00
python|nlp|gensim|fasttext
['https://arxiv.org/abs/1607.04606', 'https://github.com/RaRe-Technologies/gensim/blob/e50a0574e70b4c8808b16b50f140f21c4bb6352e/gensim/models/keyedvectors.py#L1790']
2
66,390,026
<p>It's completely possible.</p> <h3>Hardware</h3> <p>The following main hardware specs need to be considered when you deploy your model on edge devices such as the Raspberry Pi or Banana Pi:</p> <ul> <li><strong>Memory</strong></li> <li><strong>Processing Speed</strong></li> </ul> <p><strong>Memory</strong> - Random Access Memory (RAM). More RAM allows you to deploy bigger models on your edge device; for processing speed, the CPU is the most important component. RAM by Raspberry Pi version:</p> <ol> <li>The Raspberry Pi 2 has <strong>1 GiB</strong> of RAM.</li> <li>The Raspberry Pi 3 has <strong>1 GiB</strong> of RAM in the <strong>B</strong> and <strong>B+</strong> models, and <strong>512 MiB</strong> of RAM in the A+ model. The Raspberry Pi Zero and Zero W have <strong>512 MiB</strong> of RAM.</li> <li>The Raspberry Pi 4 is available with <strong>2, 4 or 8 GiB of RAM</strong>. A 1 GiB model was originally available at launch in June 2019 but was discontinued in March 2020, and the <strong>8 GiB</strong> model was introduced in May 2020.</li> </ol> <h3>Model Optimization</h3> <p>Once you own a particular Raspberry Pi version you can't change its capabilities, but you can optimize your model by changing your neural network. Consider using efficient networks such as <a href="https://arxiv.org/abs/1905.11946" rel="nofollow noreferrer">EfficientNet</a>, <a href="https://arxiv.org/abs/1704.04861" rel="nofollow noreferrer">MobileNet</a>, <a href="https://arxiv.org/abs/1602.07360" rel="nofollow noreferrer">SqueezeNet</a>, or <a href="https://arxiv.org/abs/1911.11907" rel="nofollow noreferrer">GhostNet</a>. For object detection I have used a <strong>Raspberry Pi 2 B</strong> model with tiny YOLO at quite a low FPS (frames per second).</p> <p>I hope you can now decide, based on your task, which Raspberry Pi device is suitable for you.</p>
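<p>To make the optimization step concrete, here is a minimal sketch (my own addition, assuming TensorFlow 2.x with its built-in TensorFlow Lite converter) of shrinking a Keras MobileNetV2 into a <code>.tflite</code> file that a Raspberry Pi can run with the TFLite interpreter; the output file name is arbitrary:</p> <pre><code>import tensorflow as tf

# start from an efficient architecture such as MobileNetV2
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# convert to TensorFlow Lite with the default optimizations (e.g. weight quantization)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# this flat buffer is what you copy to the Raspberry Pi
with open("mobilenet_v2.tflite", "wb") as f:
    f.write(tflite_model)
</code></pre>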
2021-02-26 17:06:33.847000+00:00
2021-02-26 17:06:33.847000+00:00
null
null
66,389,566
<p>Firstly, I'm going to train a CNN model on my computer (an image classification program), then I'm going to save it to be used on a Raspberry Pi.</p> <p>After that, I'm going to give the Raspberry Pi some images and I want it to predict the images using the trained model.</p> <p>Finally, according to the result (the prediction), I want it to take an action.</p> <p>So, is it possible to do that? If yes, what specifications should I keep in mind when I buy the Raspberry Pi?</p>
2021-02-26 16:35:15.817000+00:00
2021-02-26 22:18:52.357000+00:00
2021-02-26 22:18:52.357000+00:00
machine-learning|neural-network|raspberry-pi|conv-neural-network
['https://arxiv.org/abs/1905.11946', 'https://arxiv.org/abs/1704.04861', 'https://arxiv.org/abs/1602.07360', 'https://arxiv.org/abs/1911.11907']
4
73,832,622
<p>Refer to this <a href="https://arxiv.org/abs/1911.01685" rel="nofollow noreferrer">paper</a> and this <a href="https://arxiv.org/abs/1707.03237" rel="nofollow noreferrer">paper</a>.</p> <p>Soft Dice = 1 - Dice</p> <p>They call it a <strong>differentiable surrogate</strong>.</p>
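<p>A minimal NumPy sketch of the two quantities (variable names and the smoothing constant are my own choices, not taken from the papers): the hard Dice coefficient is computed on thresholded binary masks, while the soft Dice loss keeps the raw predicted probabilities, so it remains differentiable and can be minimized directly.</p> <pre><code>import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    # hard Dice: both inputs are binary {0, 1} masks
    intersection = np.sum(pred_mask * true_mask)
    return (2.0 * intersection + eps) / (np.sum(pred_mask) + np.sum(true_mask) + eps)

def soft_dice_loss(pred_probs, true_mask, eps=1e-7):
    # soft Dice: predictions stay as probabilities in [0, 1], no thresholding
    intersection = np.sum(pred_probs * true_mask)
    soft_dice = (2.0 * intersection + eps) / (np.sum(pred_probs) + np.sum(true_mask) + eps)
    return 1.0 - soft_dice
</code></pre>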
2022-09-23 20:19:49.683000+00:00
2022-09-23 20:19:49.683000+00:00
null
null
72,495,724
<p>What is the difference between <strong>Dice coefficient</strong> and <strong>Soft Dice coefficient</strong>?</p> <p>Background: semantic segmentation</p>
2022-06-03 22:30:34.963000+00:00
2022-09-23 20:19:49.683000+00:00
null
machine-learning|deep-learning|metrics|precision-recall|semantic-segmentation
['https://arxiv.org/abs/1911.01685', 'https://arxiv.org/abs/1707.03237']
2
50,999,285
<p><a href="https://arxiv.org/abs/1804.06868" rel="nofollow noreferrer">This paper</a> tackles this problem and was recently nominated as best paper in a top conference.</p> <p>They are gradually releasing and documenting code here: <a href="https://github.com/clic-lab/atis" rel="nofollow noreferrer">https://github.com/clic-lab/atis</a></p> <p><strong>EDIT</strong></p> <p>You could also use the OpenNMT library (<a href="https://github.com/OpenNMT/OpenNMT-py" rel="nofollow noreferrer">https://github.com/OpenNMT/OpenNMT-py</a>), to train a model to map natural language to SQL queries, if you have the training data.</p>
2018-06-23 08:10:21.310000+00:00
2018-06-27 03:45:39.427000+00:00
2018-06-27 03:45:39.427000+00:00
null
50,972,472
<p>I need to create a system that converts natural language to SQL queries. I know this has been done before, so I am trying to find an SDK, API, or company that has already done it instead of reinventing the wheel by trying to write it from scratch. </p> <p>Most of the posts I find related to this topic are at least a couple of years old. Kueri.me seems to be a great solution but their downloads page isn't working and I can't find their SDK anywhere else online (their latest blog posts are also from 2016).</p> <p>Any advice? What is currently the best solution for NLP to SQL? </p>
2018-06-21 15:36:14.607000+00:00
2018-11-25 13:56:55.570000+00:00
null
sql|nlp
['https://arxiv.org/abs/1804.06868', 'https://github.com/clic-lab/atis', 'https://github.com/OpenNMT/OpenNMT-py']
3
10,154,994
<p>There are some papers on incrementally maintaining the topological order of nodes in a graph, with variations on the algorithm you describe. </p> <p>If the graph has <code>n</code> nodes and <code>m</code> edges, you spend time <code>O(m + n)</code> every time you insert an edge. The papers ask how much time it will take to insert <code>k</code> edges. Trivially, <code>O(k * (n + m))</code>. But in fact you can show much better upper bounds - something like <code>O(k * sqrt(m + n))</code> for large enough <code>k</code>.</p> <p>Some links are below; there are more:</p> <p><a href="http://igitur-archive.library.uu.nl/math/2007-0725-201647/2005-011.pdf" rel="nofollow noreferrer">http://igitur-archive.library.uu.nl/math/2007-0725-201647/2005-011.pdf</a></p> <p><a href="http://arxiv.org/abs/0802.1059" rel="nofollow noreferrer">http://arxiv.org/abs/0802.1059</a></p> <p><a href="http://www.siam.org/proceedings/soda/2009/SODA09_120_benderm.pdf" rel="nofollow noreferrer">http://www.siam.org/proceedings/soda/2009/SODA09_120_benderm.pdf</a></p>
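<p>For reference, a toy Python sketch of the level-based insertion described in the question (this is the trivial <code>O(m + n)</code>-per-edge baseline, not the algorithms from the papers above; class and method names are my own):</p> <pre><code>from collections import defaultdict

class LevelDAG:
    """Toy sketch of level-numbering insertion with cycle detection and undo."""

    def __init__(self):
        self.succ = defaultdict(set)          # successors of each node
        self.level = defaultdict(lambda: 1)   # new nodes start at level 1

    def add_edge(self, m, n):
        touched = []                          # (node, old_level) pairs for undo
        if self._raise(n, self.level[m] + 1, m, touched):
            self.succ[m].add(n)               # levels are consistent again
        else:
            for node, old in touched:         # cycle detected: roll levels back
                self.level[node] = old
            raise ValueError("edge would create a cycle")

    def _raise(self, node, needed, start, touched):
        if node == start:
            return False                      # reached the new edge's tail: cycle
        if self.level[node] &gt;= needed:
            return True                       # already high enough, stop here
        touched.append((node, self.level[node]))
        self.level[node] = needed
        return all(self._raise(s, needed + 1, start, touched)
                   for s in self.succ[node])
</code></pre>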
2012-04-14 15:55:40.980000+00:00
2018-07-29 23:59:53.780000+00:00
2018-07-29 23:59:53.780000+00:00
null
10,152,476
<p>I have been interested in Directed Acyclic Graphs (DAGs) for a long time, and after reading about topological sort on Wikipedia, I did not find any special mention of an approach involving <strong>layer numbering</strong> (although layers are extensively mentioned for drawing). With this approach the graph is not technically topologically sorted, but knowing that every node carries the correct layer (level) number, we can always tell whether a particular node is "bigger" than another topologically. On the other hand, as long as we don't have an ordered list, we cannot enumerate the nodes topologically (although this can be done with a final conventional sort that compares the levels of the nodes). </p> <p>This approach allows arbitrary connections to be made while keeping the level information correct. The steps can be:</p> <ul> <li>For any newly added node (without any connection) the level applied is 1. </li> <li>If a connection between two nodes is requested (from m to n) and n.level > m.level, then they are simply connected; no level fixing for other nodes is required. </li> <li>If for the requested connection (from m to n) n.level&lt;=m.level, then we change n.level to (m.level + 1) and recursively check any dependencies of n for a similar level increase (or no increase if the levels at a recursive step are already compatible). </li> <li>If we keep the list of recursively entered nodes, we can detect an attempt to create a cycle and implement some kind of undo operation (returning the levels of all affected nodes to their previous values).</li> </ul> <p>For any set of known nodes and connections between them, we just add all nodes with level=1 and then try to apply all known connections between them (ignoring and undoing cycles). </p> <p>The final level information not only allows comparing nodes topologically, but also contains other useful information. For example: </p> <ul> <li>Every node with level = 1 has no incoming connections (every path starts from one of them), so any DAG enumeration can be started from them. </li> <li>The longest path (in number of edges) cannot be longer than the (largest level + 1). </li> </ul> <p>I suppose that for some artificial data (n nodes, every Node(n) connected to Node(n + 1)) this algorithm can be very slow. But for the real-world data I tried it with (Wikipedia categories - 800,000 nodes - 2,000,000 connections) the time is decent (5-10 minutes) and the number of levels and cycle attempts is low (369 levels, 1000 cycle attempts).</p> <p>So is this method new, or is it well known but just not widely presented in Wikipedia and other resources? Since it's not a sort (technically), should it be called a data restructuring?</p>
2012-04-14 09:22:36.803000+00:00
2018-07-29 23:59:53.780000+00:00
null
algorithm|directed-acyclic-graphs|topological-sort
['http://igitur-archive.library.uu.nl/math/2007-0725-201647/2005-011.pdf', 'http://arxiv.org/abs/0802.1059', 'http://www.siam.org/proceedings/soda/2009/SODA09_120_benderm.pdf']
3
38,263,032
<p>There are NO guarantees about the quality of the pseudo random number generator in standard Fortran. If you care about a particular quality of implementation for cryptography or for science that is sensitive to random numbers (Monte Carlo), you should use a library over which you have control.</p> <p>You can study the manual of your compiler to find out what it says about the random number generator, but every compiler can implement a completely different algorithm to generate random numbers.</p> <p>Numerical Recipes is actually not well received by some people in the numerical mathematics community: <a href="http://www.uwyo.edu/buerkle/misc/wnotnr.html" rel="nofollow noreferrer">http://www.uwyo.edu/buerkle/misc/wnotnr.html</a></p> <p>This site is not for software recommendations, but this article (link given by roygvib in a comment): <a href="https://arxiv.org/abs/1005.4117" rel="nofollow noreferrer">https://arxiv.org/abs/1005.4117</a> is a good review with examples of bad and good algorithms, methods for testing them, ways to generate arbitrary number distributions, and example calls of two libraries in C (one of them can be called from Fortran as well).</p> <p>Personally I use this parallel PRNG: <a href="https://bitbucket.org/LadaF/elmm/src/master/src/rng_par_zig.f90" rel="nofollow noreferrer">https://bitbucket.org/LadaF/elmm/src/master/src/rng_par_zig.f90</a>, but I haven't tested its quality; I just need speed. But this is not a software recommendation site.</p>
2016-07-08 09:16:37.103000+00:00
2020-11-02 19:41:01.867000+00:00
2020-11-02 19:41:01.867000+00:00
null
38,262,846
<p>I have written a short Monte Carlo integration algorithm to calculate an integral in Fortran 90. I once compared the result of solving the integral with respect to some parameter using the intrinsic random number generator against the ran1 random number generator presented in Numerical Recipes for Fortran 90, Volume 2. </p> <p>Running the same algorithm twice - once calling the intrinsic random_seed() and then always calling random_number(), and once calling the ran1() method provided in the Numerical Recipes book - I obtain in principle the same shape, but the intrinsic result is a continuous curve in contrast to the ran1 result. In both cases I call the function with random parameters 10,000 times for a parameter value q, add the results, and then go on to the next q value and call the function 10,000 times again, etc.</p> <p>A comparative image of the result can be found here: <a href="https://i.stack.imgur.com/kb5ft.png" rel="noreferrer"><img src="https://i.stack.imgur.com/kb5ft.png" alt="http://i.imgur.com/gZVFdOP.png"></a></p> <p>If I increase the number of calls both curves converge. But I was wondering: why does the intrinsic random number generator generate this smoothness? Is it still generally advised to use it, or are there other, more advisable RNGs? I suppose the continuous result is a consequence of the "lesser" randomness of the intrinsic number generator.</p> <p>(I left out the source code as I don't think there is a lot to gain from it. If somebody cares I can hand it in later.)</p>
2016-07-08 09:06:58.110000+00:00
2020-11-02 19:41:01.867000+00:00
2016-07-08 09:17:48.647000+00:00
random|fortran|integration|montecarlo
['http://www.uwyo.edu/buerkle/misc/wnotnr.html', 'https://arxiv.org/abs/1005.4117', 'https://bitbucket.org/LadaF/elmm/src/master/src/rng_par_zig.f90']
3
16,594,343
<p>I would think you will need to generate probabilities for which text character the input is. If the probability of the most likely text character is below some threshold, classify the stroke as a drawing.</p> <p>This is a possibly useful paper: <a href="http://arxiv.org/pdf/1304.0421v1.pdf" rel="nofollow">http://arxiv.org/pdf/1304.0421v1.pdf</a> (if only for its reference list). Also, the first hit on this Google Scholar search looks relevant: <a href="http://scholar.google.com/scholar?q=classification+stroke+input+text+or+drawing" rel="nofollow">http://scholar.google.com/scholar?q=classification+stroke+input+text+or+drawing</a></p>
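<p>A minimal sketch of that thresholding idea (my own illustration, assuming a scikit-learn-style character classifier that exposes <code>predict_proba</code> and <code>classes_</code>; the 0.6 threshold is arbitrary and would need tuning):</p> <pre><code>import numpy as np

def classify_stroke(stroke_features, char_classifier, threshold=0.6):
    probs = char_classifier.predict_proba([stroke_features])[0]
    best = int(np.argmax(probs))
    if probs[best] &lt; threshold:
        return "drawing"                      # no character is convincing enough
    return char_classifier.classes_[best]     # otherwise: the most likely character
</code></pre>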
2013-05-16 17:59:17.250000+00:00
2013-05-16 17:59:17.250000+00:00
null
null
16,593,225
<p>I'm interested in taking user stroke input (i.e. drawing with an iPad) and classifying it as either text or a drawing (or, I suppose, just non-text), in whatever capacity is reasonably feasible. I'm not expecting a pre-built library for this, I'm just having a hard time finding any papers or algorithmic resources about this.</p> <p>I don't need to detect what the text is that they're drawing, just whether it's likely text or not.</p>
2013-05-16 16:53:14.183000+00:00
2013-05-16 19:19:18.007000+00:00
null
ios|algorithm|machine-learning|classification
['http://arxiv.org/pdf/1304.0421v1.pdf', 'http://scholar.google.com/scholar?q=classification+stroke+input+text+or+drawing']
2
72,294,318
<p>If you've already been able to perform sense-disambiguation outside word2vec, then you can change the word-tokens to reflect your external judgement. For example, change some appearances of the token <code>'jaguar'</code> to <code>'jaguar*car'</code> and others to <code>'jaguar*animal'</code>. Proceeding with normal word2vec training will then get your two different tokens two different word-vectors.</p> <p>If you're hoping for the training to discover these itself, as ~Erwan mentioned in a comment, that seems like an open research question, without a standard or off-the-shelf solution that a beginner could drop-in.</p> <p>I'd once seen a paper (around the time of the original word2vec papers, but can't find the link now) that tried to do this in a word2vec-compatible way by 1st proceeding with traditional polysemy-oblivious training. Then, for every appearance of a word X, model its surrounding context via some combination of the word-vectors of neighbors within a certain number of positions. (That in itself is very similar to the preparation of a context-vector in the CBOW mode of word2vec.) Perform some clustering on that collection-of-all-contexts to come up with some idea of alternate senses – each associated with one cluster. Then, in a followup pass on the original corpus, replace word-tokens with those that also reflect their nearby-context cluster. (EG: <code>'jaguar'</code> might be replaced with <code>'jaguar*1'</code>, <code>'jaguar*2'</code>, etc based on which discrete cluster its context suggested.) Then, repeat (or continue) word2vec training to get sense-specific word-vectors. Of course, the devil would be in the details of how contexts are defined, how clusters are deduced, and tough edge-cases (where potentially the text's author is themselves deploying the multiple senses).</p> <p>Some other interesting efforts to model or deduce polysemy in word2vec models:</p> <ul> <li><a href="http://www.offconvex.org/2016/07/10/embeddingspolysemy/" rel="nofollow noreferrer">&quot;Linear Algebraic Structure of Word Meanings&quot;</a></li> <li><a href="https://arxiv.org/abs/1707.01793" rel="nofollow noreferrer">&quot;A Simple Approach to Learn Polysemous Word Embeddings&quot;</a></li> </ul> <p>But per above, I've not seen these sorts of techniques widely implemented/adopted in a form that's easy to drop-in to another project.</p>
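<p>A minimal sketch of the first option (externally disambiguated tokens), assuming gensim 4.x and a toy corpus - the rewritten <code>jaguar*...</code> tokens stand in for whatever sense-tagging you run beforehand:</p> <pre><code>from gensim.models import Word2Vec

# corpus in which an external disambiguator already rewrote the ambiguous tokens
sentences = [
    ["jaguar*car", "is", "a", "great", "suv", "sports", "car"],
    ["among", "cats", "only", "tigers", "are", "bigger", "than", "jaguar*animal"],
]

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=50)

# the two senses now get two distinct vectors
print(model.wv["jaguar*car"][:5])
print(model.wv["jaguar*animal"][:5])
</code></pre>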
2022-05-18 18:31:30.067000+00:00
2022-05-18 18:31:30.067000+00:00
null
null
72,276,868
<p>I would like to create word embeddings that take <strong>context</strong> into account, so the vector of the word <em>Jaguar [animal]</em> would be different from the word <em>Jaguar [car brand]</em>.</p> <p>As you know, word2vec only gives one representation for a given word, and I would like to take already pretrained embeddings and enrich them with context. So far I've tried a simple way with taking an average vector of the word and category word, for example <a href="https://i.stack.imgur.com/NtMaT.png" rel="nofollow noreferrer">like this</a>.</p> <p>Now I would like to try to create and train a neural network that would take entire sentences, e.g.</p> <ol> <li><em>Jaguar F-PACE is a great SUV sports car.</em></li> <li><em>Among cats, only tigers and lions are bigger than jaguars.</em></li> </ol> <p>And then it would undertake the task of text classification (I have a dataset with several categories like animals, cars, etc.), but the result would be new representations for the word jaguar, but in different contexts, so two different embeddings.</p> <p>Does anyone have any idea how I could create such a network? I don't hide that I'm a beginner and have no idea how to go about it.</p>
2022-05-17 15:33:03.540000+00:00
2022-05-18 18:31:30.067000+00:00
null
python|deep-learning|nlp|word2vec|text-classification
['http://www.offconvex.org/2016/07/10/embeddingspolysemy/', 'https://arxiv.org/abs/1707.01793']
2
59,407,187
<p>This type of repetition is called <strong>"text degeneration"</strong>.</p> <p>There is a great paper from 2019 which analyses this phenomenon: <strong><a href="https://arxiv.org/abs/1904.09751" rel="noreferrer">The Curious Case of Neural Text Degeneration</a></strong> by <strong>Ari Holtzman</strong> et al. from the Allen Institute for Artificial Intelligence.</p> <p>The repetition may come from the type of text search (text sampling) on the decoder side. Many people implement this just by taking the most probable next word proposed by the model (argmax on the softmax of the last layer) or by so-called beam search. In fact, beam search is the industry standard today.</p> <p>This is an example of beam search from the article:</p> <p><strong>Continuation (BeamSearch, b=10):</strong></p> <p><em>"The unicorns were able to communicate with each other, they said unicorns. a statement that the unicorns. Professor of the Department of Los Angeles, the most important place the world to be recognition <strong>of the world to be a of the world to be a of the world to be a of the world to be a of the world to be a of the world to be a of the world to be a of the world to be a of the…</em></strong></p> <p>As you can see there is a great amount of repetition.</p> <p>According to the paper, this curious case may be explained by the fact that each repeated sequence of words has a higher probability than the sequence without the next repetition: <a href="https://i.stack.imgur.com/I0iUG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/I0iUG.png" alt="enter image description here"></a></p> <p>The article proposes some workarounds based on how the decoder samples words. This definitely requires more study, but it is the best explanation we have today.</p> <p>The other possibility is that your model still needs more training. In many cases I have faced similar behaviour when I had a big training set and the model still couldn't generalise well over the whole diversity of the data. To test this hypothesis, try to train on a smaller dataset and see if it generalises (produces meaningful results).</p> <p>But even if your model generalises well enough, that doesn't mean you won't ever face the repetition pattern. Unless you change the sampling pattern of the decoder, it is a common scenario.</p>
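<p>One of the workarounds the paper proposes is nucleus (top-p) sampling. A minimal NumPy sketch of that decoding step, operating on a single next-token probability distribution (the value p=0.9 and the function name are my own choices):</p> <pre><code>import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample a token id from the smallest set of tokens whose cumulative
    probability exceeds p, instead of taking argmax or beam search."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]            # token ids, most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, p)) + 1
    nucleus = order[:cutoff]                   # the candidate set ("nucleus")
    renormalized = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=renormalized))
</code></pre>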
2019-12-19 09:55:53.127000+00:00
2019-12-19 09:55:53.127000+00:00
null
null
46,924,452
<p>So, I've been working on a project for a while, we have <em>very</em> little data, I know it would become much better if we were able to put together a much much larger dataset. That aside, my issue at the moment is when I have a sentence input, my outputs look like this right now:</p> <blockquote> <p>contactid contactid contactid contactid</p> </blockquote> <p>A single word is focused on and repeated over and over again. What can I do to overcome this hurdle?</p> <p>Things I've tried:</p> <ol> <li>Double checked I was appending start/stop tokens and make sure the tokens were properly placed in the top of their vocab files, I am sharing vocab.</li> <li>I found something saying it could be due to poor word embeddings. To that end I checked with tensorboard and sure enough PCA showed a very dense cluster of points. Seeing that I grabbed Facebook's public pre trained word vectors and loaded them in as the embedding. Trained again and this time tensorboard PCA showed a much better picture.</li> <li>Switched my training scheduler from basic to SampledScheduling to occasionally replace a training output with the ground truth.</li> <li>Switched my decoder to use the beam search decoder I figured this may give more robust responses if the word choices were close together in the intermediary feature space.</li> </ol> <p>For certain my perplexity is steadily decreasing.</p> <p>Here is my dataset preperation code:</p> <pre><code>class ModelInputs(object): """Factory to construct various input hooks and functions depending on mode """ def __init__( self, vocab_files, batch_size, share_vocab=True, src_eos_id=1, tgt_eos_id=2 ): self.batch_size = batch_size self.vocab_files = vocab_files self.share_vocab = share_vocab self.src_eos_id = src_eos_id self.tgt_eos_id = tgt_eos_id def get_inputs(self, file_path, num_infer=None, mode=tf.estimator.ModeKeys.TRAIN): self.mode = mode if self.mode == tf.estimator.ModeKeys.TRAIN: return self._training_input_hook(file_path) if self.mode == tf.estimator.ModeKeys.EVAL: return self._validation_input_hook(file_path) if self.mode == tf.estimator.ModeKeys.PREDICT: if num_infer is None: raise ValueError('If performing inference must supply number of predictions to be made.') return self._infer_input_hook(file_path, num_infer) def _prepare_data(self, dataset, out=False): prep_set = dataset.map(lambda string: tf.string_split([string]).values) prep_set = prep_set.map(lambda words: (words, tf.size(words))) if out == True: return prep_set.map(lambda words, size: (self.vocab_tables[1].lookup(words), size)) return prep_set.map(lambda words, size: (self.vocab_tables[0].lookup(words), size)) def _batch_data(self, dataset, src_eos_id, tgt_eos_id): batched_set = dataset.padded_batch( self.batch_size, padded_shapes=((tf.TensorShape([None]), tf.TensorShape([])), (tf.TensorShape([None]), tf.TensorShape([]))), padding_values=((src_eos_id, 0), (tgt_eos_id, 0)) ) return batched_set def _batch_infer_data(self, dataset, src_eos_id): batched_set = dataset.padded_batch( self.batch_size, padded_shapes=(tf.TensorShape([None]), tf.TensorShape([])), padding_values=(src_eos_id, 0) ) return batched_set def _create_vocab_tables(self, vocab_files, share_vocab=False): if vocab_files[1] is None and share_vocab == False: raise ValueError('If share_vocab is set to false must provide target vocab. 
(src_vocab_file, \ target_vocab_file)') src_vocab_table = lookup_ops.index_table_from_file( vocab_files[0], default_value=UNK_ID ) if share_vocab: tgt_vocab_table = src_vocab_table else: tgt_vocab_table = lookup_ops.index_table_from_file( vocab_files[1], default_value=UNK_ID ) return src_vocab_table, tgt_vocab_table def _prepare_iterator_hook(self, hook, scope_name, iterator, file_path, name_placeholder): if self.mode == tf.estimator.ModeKeys.TRAIN or self.mode == tf.estimator.ModeKeys.EVAL: feed_dict = { name_placeholder[0]: file_path[0], name_placeholder[1]: file_path[1] } else: feed_dict = {name_placeholder: file_path} with tf.name_scope(scope_name): hook.iterator_initializer_func = \ lambda sess: sess.run( iterator.initializer, feed_dict=feed_dict, ) def _set_up_train_or_eval(self, scope_name, file_path): hook = IteratorInitializerHook() def input_fn(): with tf.name_scope(scope_name): with tf.name_scope('sentence_markers'): src_eos_id = tf.constant(self.src_eos_id, dtype=tf.int64) tgt_eos_id = tf.constant(self.tgt_eos_id, dtype=tf.int64) self.vocab_tables = self._create_vocab_tables(self.vocab_files, self.share_vocab) in_file = tf.placeholder(tf.string, shape=()) in_dataset = self._prepare_data(tf.contrib.data.TextLineDataset(in_file).repeat(None)) out_file = tf.placeholder(tf.string, shape=()) out_dataset = self._prepare_data(tf.contrib.data.TextLineDataset(out_file).repeat(None)) dataset = tf.contrib.data.Dataset.zip((in_dataset, out_dataset)) dataset = self._batch_data(dataset, src_eos_id, tgt_eos_id) iterator = dataset.make_initializable_iterator() next_example, next_label = iterator.get_next() self._prepare_iterator_hook(hook, scope_name, iterator, file_path, (in_file, out_file)) return next_example, next_label return (input_fn, hook) def _training_input_hook(self, file_path): input_fn, hook = self._set_up_train_or_eval('train_inputs', file_path) return (input_fn, hook) def _validation_input_hook(self, file_path): input_fn, hook = self._set_up_train_or_eval('eval_inputs', file_path) return (input_fn, hook) def _infer_input_hook(self, file_path, num_infer): hook = IteratorInitializerHook() def input_fn(): with tf.name_scope('infer_inputs'): with tf.name_scope('sentence_markers'): src_eos_id = tf.constant(self.src_eos_id, dtype=tf.int64) self.vocab_tables = self._create_vocab_tables(self.vocab_files, self.share_vocab) infer_file = tf.placeholder(tf.string, shape=()) dataset = tf.contrib.data.TextLineDataset(infer_file) dataset = self._prepare_data(dataset) dataset = self._batch_infer_data(dataset, src_eos_id) iterator = dataset.make_initializable_iterator() next_example, seq_len = iterator.get_next() self._prepare_iterator_hook(hook, 'infer_inputs', iterator, file_path, infer_file) return ((next_example, seq_len), None) return (input_fn, hook) </code></pre> <p>And here is my model:</p> <pre><code>class Seq2Seq(): def __init__( self, batch_size, inputs, outputs, inp_vocab_size, tgt_vocab_size, embed_dim, mode, time_major=False, enc_embedding=None, dec_embedding=None, average_across_batch=True, average_across_timesteps=True, vocab_path=None, embedding_path='./data_files/wiki.simple.vec' ): embed_np = self._get_embedding(embedding_path) if not enc_embedding: self.enc_embedding = tf.contrib.layers.embed_sequence( inputs, inp_vocab_size, embed_dim, trainable=True, scope='embed', initializer=tf.constant_initializer(value=embed_np, dtype=tf.float32) ) else: self.enc_embedding = enc_embedding if mode == tf.estimator.ModeKeys.TRAIN or mode == tf.estimator.ModeKeys.EVAL: if not dec_embedding: 
embed_outputs = tf.contrib.layers.embed_sequence( outputs, tgt_vocab_size, embed_dim, trainable=True, scope='embed', reuse=True ) with tf.variable_scope('embed', reuse=True): dec_embedding = tf.get_variable('embeddings') self.embed_outputs = embed_outputs self.dec_embedding = dec_embedding else: self.dec_embedding = dec_embedding else: with tf.variable_scope('embed', reuse=True): self.dec_embedding = tf.get_variable('embeddings') if mode == tf.estimator.ModeKeys.PREDICT and vocab_path is None: raise ValueError('If mode is predict, must supply vocab_path') self.vocab_path = vocab_path self.inp_vocab_size = inp_vocab_size self.tgt_vocab_size = tgt_vocab_size self.average_across_batch = average_across_batch self.average_across_timesteps = average_across_timesteps self.time_major = time_major self.batch_size = batch_size self.mode = mode def _get_embedding(self, embedding_path): model = KeyedVectors.load_word2vec_format(embedding_path) vocab = model.vocab vocab_len = len(vocab) return np.array([model.word_vec(k) for k in vocab.keys()]) def _get_lstm(self, num_units): return tf.nn.rnn_cell.BasicLSTMCell(num_units) def encode(self, num_units, num_layers, seq_len, cell_fw=None, cell_bw=None): if cell_fw and cell_bw: fw_cell = cell_fw bw_cell = cell_bw else: fw_cell = self._get_lstm(num_units) bw_cell = self._get_lstm(num_units) encoder_outputs, bi_encoder_state = tf.nn.bidirectional_dynamic_rnn( fw_cell, bw_cell, self.enc_embedding, sequence_length=seq_len, time_major=self.time_major, dtype=tf.float32 ) c_state = tf.concat([bi_encoder_state[0].c, bi_encoder_state[1].c], axis=1) h_state = tf.concat([bi_encoder_state[0].h, bi_encoder_state[1].h], axis=1) encoder_state = tf.contrib.rnn.LSTMStateTuple(c=c_state, h=h_state) return tf.concat(encoder_outputs, -1), encoder_state def _train_decoder(self, decoder_cell, out_seq_len, encoder_state, helper): if not helper: helper = tf.contrib.seq2seq.ScheduledEmbeddingTrainingHelper( self.embed_outputs, out_seq_len, self.dec_embedding, 0.3, ) # helper = tf.contrib.seq2seq.TrainingHelper( # self.dec_embedding, # out_seq_len, # ) projection_layer = layers_core.Dense(self.tgt_vocab_size, use_bias=False) decoder = tf.contrib.seq2seq.BasicDecoder( decoder_cell, helper, encoder_state, output_layer=projection_layer ) return decoder def _predict_decoder(self, cell, encoder_state, beam_width, length_penalty_weight): tiled_encoder_state = tf.contrib.seq2seq.tile_batch( encoder_state, multiplier=beam_width ) with tf.name_scope('sentence_markers'): sos_id = tf.constant(1, dtype=tf.int32) eos_id = tf.constant(2, dtype=tf.int32) start_tokens = tf.fill([self.batch_size], sos_id) end_token = eos_id projection_layer = layers_core.Dense(self.tgt_vocab_size, use_bias=False) emb = tf.squeeze(self.dec_embedding) decoder = tf.contrib.seq2seq.BeamSearchDecoder( cell=cell, embedding=self.dec_embedding, start_tokens=start_tokens, end_token=end_token, initial_state=tiled_encoder_state, beam_width=beam_width, output_layer=projection_layer, length_penalty_weight=length_penalty_weight ) return decoder def decode( self, num_units, out_seq_len, encoder_state, cell=None, helper=None, beam_width=None, length_penalty_weight=None ): with tf.name_scope('Decode'): if cell: decoder_cell = cell else: decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(2*num_units) if self.mode != estimator.ModeKeys.PREDICT: decoder = self._train_decoder(decoder_cell, out_seq_len, encoder_state, helper) else: decoder = self._predict_decoder(decoder_cell, encoder_state, beam_width, length_penalty_weight) outputs = 
tf.contrib.seq2seq.dynamic_decode( decoder, maximum_iterations=20, swap_memory=True, ) outputs = outputs[0] if self.mode != estimator.ModeKeys.PREDICT: return outputs.rnn_output, outputs.sample_id else: return outputs.beam_search_decoder_output, outputs.predicted_ids def prepare_predict(self, sample_id): rev_table = lookup_ops.index_to_string_table_from_file( self.vocab_path, default_value=UNK) predictions = rev_table.lookup(tf.to_int64(sample_id)) return tf.estimator.EstimatorSpec( predictions=predictions, mode=tf.estimator.ModeKeys.PREDICT ) def prepare_train_eval( self, t_out, out_seq_len, labels, lr, train_op=None, loss=None ): if not loss: weights = tf.sequence_mask( out_seq_len, dtype=t_out.dtype ) loss = tf.contrib.seq2seq.sequence_loss( t_out, labels, weights, average_across_batch=self.average_across_batch, ) if not train_op: train_op = tf.contrib.layers.optimize_loss( loss, tf.train.get_global_step(), optimizer='SGD', learning_rate=lr, summaries=['loss', 'learning_rate'] ) return tf.estimator.EstimatorSpec( mode=self.mode, loss=loss, train_op=train_op, ) </code></pre>
2017-10-25 05:13:03.113000+00:00
2019-12-19 09:55:53.127000+00:00
null
machine-learning|tensorflow|nlp|translation
['https://arxiv.org/abs/1904.09751', 'https://i.stack.imgur.com/I0iUG.png']
2
12,423,235
<p><a href="http://www.iai.uni-bonn.de/~jv/free-slides.pdf">http://www.iai.uni-bonn.de/~jv/free-slides.pdf</a></p> <p><a href="http://daniel.yokomizo.org/2011/12/understanding-higher-order-code-for.html">http://daniel.yokomizo.org/2011/12/understanding-higher-order-code-for.html</a></p> <p><a href="http://arxiv.org/pdf/1107.1203.pdf">http://arxiv.org/pdf/1107.1203.pdf</a></p> <p>(Also in typeclassopedia Section 3.3)</p> <p><a href="http://hackage.haskell.org/package/free-theorems-seq">http://hackage.haskell.org/package/free-theorems-seq</a></p> <p><a href="http://hackage.haskell.org/package/free-theorems-counterexamples">http://hackage.haskell.org/package/free-theorems-counterexamples</a></p>
2012-09-14 11:04:00.263000+00:00
2012-09-14 17:55:17.823000+00:00
2012-09-14 17:55:17.823000+00:00
null
12,421,085
<p>I stumbled upon the nice idea of <a href="http://www-ps.iai.uni-bonn.de/cgi-bin/free-theorems-webui.cgi?help" rel="noreferrer">free theorems</a> in functional languages. However, the only resource I was able to find is Wadler's article "<a href="http://doi.acm.org/10.1145/99370.99404" rel="noreferrer">Theorems for Free</a>". It's quite good, but it is definitely not a tutorial and it is hard for me to get through (I understood about half of it and it required me to spend quite a lot of time). Can you recommend another article or tutorial which is oriented towards a software developer familiar with functional programming rather than a hard-core functional language researcher?</p> <p>Thanks.</p>
2012-09-14 08:53:08.343000+00:00
2012-09-14 17:55:17.823000+00:00
2012-09-14 11:28:17.990000+00:00
haskell|functional-programming
['http://www.iai.uni-bonn.de/~jv/free-slides.pdf', 'http://daniel.yokomizo.org/2011/12/understanding-higher-order-code-for.html', 'http://arxiv.org/pdf/1107.1203.pdf', 'http://hackage.haskell.org/package/free-theorems-seq', 'http://hackage.haskell.org/package/free-theorems-counterexamples']
5
59,679,784
<p>In the case of time series data, some architectures perform quite well: </p> <ul> <li><strong>Recurrent Neural Network</strong> (with LSTM, GRU or BERT, for example), designed to train on sequences of data </li> </ul> <p>This could be an example: <a href="https://arxiv.org/pdf/1812.04818.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1812.04818.pdf</a> </p> <p>How this works inside: <a href="https://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21" rel="nofollow noreferrer">link</a> </p> <p>Example implementation in Keras: <a href="https://stackabuse.com/time-series-analysis-with-lstm-using-pythons-keras-library/" rel="nofollow noreferrer">link</a>; you should then find/design your own architecture</p> <ul> <li><strong>TCN</strong> (Temporal Convolutional Network), which uses causal and dilated convolutions in order to capture time series data</li> </ul> <p>Example: <a href="https://arxiv.org/pdf/1905.03806.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1905.03806.pdf</a></p> <p>How this works: <a href="https://medium.com/@raushan2807/temporal-convolutional-networks-bfea16e6d7d2" rel="nofollow noreferrer">link</a></p> <p>Implementation in Keras: <a href="https://github.com/philipperemy/keras-tcn" rel="nofollow noreferrer">link</a></p> <p>I would personally go for those types of architectures, which are well suited for time series data.</p>
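<p>A minimal Keras sketch of the first (LSTM) option for the 5-class EEG setup from the question - the shapes and hyper-parameters are illustrative assumptions (epochs padded to 40 time steps of 1 value each):</p> <pre><code>import tensorflow as tf

model = tf.keras.Sequential([
    # each sample: up to 40 seconds, 1 EEG value per second, zero-padded
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(40, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(5, activation="softmax"),   # classes A..E
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: array of shape (num_epochs, 40, 1); y_train: integer labels 0..4
# model.fit(x_train, y_train, epochs=30, batch_size=32, validation_split=0.1)
</code></pre>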
2020-01-10 10:23:46.760000+00:00
2020-01-10 10:23:46.760000+00:00
null
null
59,675,738
<p>I have a set of time series data belonging to 5 different classes. [EEG data (1 data point per second). The data have been divided into 30-40 second epochs and each epoch is classified into one of the classes A, B, C, D, E.] So basically I have around 13,500 labelled samples.</p> <pre><code>[10,5,48,75,1,...,22,45,8] = A [26,47,8,77,4,...,56,88,96] = B like wise </code></pre> <p>What I did was feed these data directly into a neural network and train the model. But the accuracy was very low, around 40%. What I want to know is: rather than just using a neural network, what is the best model for training on time series data? </p>
2020-01-10 05:02:04.937000+00:00
2020-01-10 10:23:46.760000+00:00
null
tensorflow|machine-learning|keras
['https://arxiv.org/pdf/1812.04818.pdf', 'https://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21', 'https://stackabuse.com/time-series-analysis-with-lstm-using-pythons-keras-library/', 'https://arxiv.org/pdf/1905.03806.pdf', 'https://medium.com/@raushan2807/temporal-convolutional-networks-bfea16e6d7d2', 'https://github.com/philipperemy/keras-tcn']
6
36,564,610
<p>It depends on what you want to do. If you want to use Thrust you need to change the declaration to</p> <pre><code>boost::numeric::odeint::runge_kutta_dopri5&lt; state_type , double , state_type , double , thrust_algebra , thrust_operations &gt;; </code></pre> <p>The <code>thrust_algebra</code> and the <code>thrust_operations</code> ensure that all computations are redirected to appropriate <code>thrust::for_each</code> calls where zipped iterators are used. If you want to use some high-level linear algebra library which runs on the GPU (like VexCL or ViennaCL) you can use your above declaration and only change the <code>state_type</code> to the correct type, for example <code>vexcl::vector&lt; double &gt;</code>. The <code>vector_space_algebra</code> assumes that your <code>state_type</code> can handle operations like <code>y = a1*x1 + a2*x2</code>, which is the case for VexCL and ViennaCL due to the use of expression templates. You can also have a look <a href="http://arxiv.org/abs/1212.6326" rel="nofollow">here</a>.</p>
2016-04-12 05:51:13.570000+00:00
2016-04-12 05:51:13.570000+00:00
null
null
36,564,285
<p>The type signature for class stepper I am using is summarized here:</p> <p><a href="http://www.boost.org/doc/libs/1_56_0/libs/numeric/odeint/doc/html/boost/numeric/odeint/runge_kutta_dopri5.html" rel="nofollow noreferrer">http://www.boost.org/doc/libs/1_56_0/libs/numeric/odeint/doc/html/boost/numeric/odeint/runge_kutta_dopri5.html</a> </p> <p>It can be instantiated as follows:</p> <pre><code> boost::numeric::odeint::runge_kutta_dopri5&lt; state_type_ &gt; stepper; </code></pre> <p>So far so good. It works.</p> <p>I plan to port my program to cuda (using thrust) and later to openmp. I changed the declaration to following:</p> <pre><code>boost::numeric::odeint::runge_kutta_dopri5&lt; state_type_ , double , state_type_ , double , boost::numeric::odeint::vector_space_algebra &gt; stepper; </code></pre> <p>I followed solution to <a href="https://stackoverflow.com/questions/18808931/odeint-simple-1d-ode-example-does-not-compile/18809739#18809739">this problem</a> but this does not compile. </p> <pre><code>In file included from /usr/include/boost/numeric/odeint/stepper/euler.hpp:26: /usr/include/boost/numeric/odeint/algebra/default_operations.hpp:87:27: error: invalid operands to binary expression ('double' and 'const std::vector&lt;double, std::allocator&lt;double&gt; &gt;') t1 = m_alpha1 * t2 + m_alpha2 * t3; ~~~~~~~~ ^ ~~ </code></pre> <p>I am wondering what is the most portable way to declare the stepper so that minimum changes are required later when porting to cuda. </p>
2016-04-12 05:27:30.217000+00:00
2016-04-12 05:53:21.213000+00:00
2017-05-23 12:07:40.503000+00:00
c++|boost|cuda|odeint
['http://arxiv.org/abs/1212.6326']
1
45,003,233
<p>The <code>maxlen</code> parameter is the length of your text samples in words. </p> <p>In the Keras code example you have these settings: </p> <pre><code># set parameters: max_features = 5000 maxlen = 400 ... embedding_dims = 50 </code></pre> <p>This means you have a vocabulary of 5000 words, each of these words is embedded into a feature vector with 50 dimensions, and each of your text samples can be up to 400 words long. </p> <p>Indirectly this is also related to padding: when you have text samples that are shorter than 400 words, you have to pad them to a length of 400. </p> <p>For 1D-ConvNets for text classification see also this paper and this blog post:</p> <p><a href="https://arxiv.org/abs/1408.5882" rel="nofollow noreferrer">https://arxiv.org/abs/1408.5882</a> </p> <p><a href="http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/" rel="nofollow noreferrer">http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/</a></p>
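<p>Concretely, the padding step looks like this (a small sketch with made-up word-index sequences; the referenced IMDB example does the same thing with <code>sequence.pad_sequences</code>):</p> <pre><code>from keras.preprocessing import sequence

maxlen = 400
# two "reviews" encoded as word indices, one shorter and one longer than maxlen
x = [[12, 7, 431, 9], list(range(1, 500))]

x_padded = sequence.pad_sequences(x, maxlen=maxlen)
print(x_padded.shape)   # (2, 400): short samples are zero-padded, long ones truncated
</code></pre>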
2017-07-10 03:03:47.673000+00:00
2017-07-10 03:09:02.520000+00:00
2017-07-10 03:09:02.520000+00:00
null
45,002,378
<p>I'm looking at Keras' example for convolutional neural networks. (See <a href="https://github.com/fchollet/keras/blob/master/examples/imdb_cnn.py" rel="nofollow noreferrer">https://github.com/fchollet/keras/blob/master/examples/imdb_cnn.py</a> for example.) However, I cannot figure out what they mean by the "maxlen" parameter. Would it have something to do with padding? It isn't the maximum number of features; they have a max_features parameter for that. </p>
2017-07-10 00:43:21.290000+00:00
2017-07-10 03:09:02.520000+00:00
null
python|neural-network|keras
['https://arxiv.org/abs/1408.5882', 'http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/']
2