a_id: int64 (7.84k to 73.8M)
a_body: string (lengths 61 to 33k)
a_creation_date: string (lengths 25 to 32)
a_last_activity_date: string (lengths 25 to 32)
a_last_edit_date: string (lengths 25 to 32)
a_tags: float64
q_id: int64 (826 to 73.8M)
q_body: string (lengths 61 to 29.9k)
q_creation_date: string (lengths 25 to 32)
q_last_activity_date: string (lengths 25 to 32)
q_last_edit_date: string (lengths 25 to 32)
q_tags: string (lengths 1 to 103)
_arxiv_links: string (lengths 2 to 6.69k)
_n_arxiv_links: int64 (0 to 94)
42,150,383
<p>Assuming <a href="https://en.wikipedia.org/wiki/Exponential_time_hypothesis" rel="nofollow noreferrer">Exponential Time Hypothesis</a> (which is stricter than P is not equal to NP, but is still widely believed to be true), it is not possible to do it in time O(N^{2 - eps}) for any positive constant eps, see <a href="http://dl.acm.org/citation.cfm?id=2880032" rel="nofollow noreferrer">"Quadratic Conditional Lower Bounds for String Problems and Dynamic Time Warping"</a> by Karl Bringmann and Marvin Kunnemann (pre-print on arXiv is available).</p> <p>Roughly speaking, it means that the general case of this problem cannot be solved in time better than something like O(N^2/log N), so if you want faster algorithms you have to consider additional constraints (some properties of the strings) or look for approximate solution.</p>
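<p>One example of the "additional constraints" mentioned above: if one of the two sequences contains no repeated elements, LCS reduces to longest increasing subsequence and can be computed in O(N log N) with patience sorting. A minimal sketch of that special case (standard library only; the function name is mine):</p> <pre class="lang-py prettyprint-override"><code>from bisect import bisect_left

def lcs_distinct(a, b):
    # Assumes every element of b is distinct; then LCS(a, b) equals the longest
    # increasing subsequence of a's elements mapped to their positions in b.
    pos = {x: i for i, x in enumerate(b)}
    seq = [pos[x] for x in a if x in pos]
    tails = []  # tails[k] = smallest possible tail of an increasing run of length k+1
    for p in seq:
        i = bisect_left(tails, p)
        if i == len(tails):
            tails.append(p)
        else:
            tails[i] = p
    return len(tails)

# b has distinct characters, so this returns the true LCS length (3, i.e. "abc")
print(lcs_distinct("xaybzc", "abc"))
</code></pre>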
2017-02-10 02:14:08.413000+00:00
2017-02-10 02:14:08.413000+00:00
null
null
30,768,610
<p>Is there any way of finding the longest common subsequence of two sequences in O(NlogN) time? </p> <p>I read somewhere that there is a way to achieve this using binary search. </p> <p>I know the dp approach that takes O(N<sup>2</sup>) time. </p>
2015-06-10 22:46:19.180000+00:00
2021-12-12 08:28:58.283000+00:00
2017-11-19 17:49:43.447000+00:00
algorithm|dynamic-programming|lcs
['https://en.wikipedia.org/wiki/Exponential_time_hypothesis', 'http://dl.acm.org/citation.cfm?id=2880032']
2
51,552,035
<p>By progressive autoencoder I assume you are referring to something like <a href="https://arxiv.org/abs/1807.03026" rel="nofollow noreferrer">Pioneer Networks: Progressively Growing Generative Autoencoder</a>, which references <a href="http://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of/karras2018iclr-paper.pdf" rel="nofollow noreferrer">Progressive Growing of GANs for Improved Quality, Stability, and Variation</a>.</p> <p>First of all, don't use <code>nn.Sequential</code>. It is great for modeling simple and direct network structures, which is definitely not the case here. You should use plain <code>nn.Conv2d</code> and <code>F.ReLU</code> modules instead of building an <code>nn.Sequential</code> object.</p> <p>Second, this isn't really about implementation but theory. You cannot magically convert a convolution layer from accepting 1 channel to accepting 8 channels. There are a number of ways to <em>expand</em> your convolution filter, like appending random weights, but I think that is not what you wanted.</p> <p>From the second paper (it is a GAN, but the idea is the same), it does not expand any filters. Instead, the filters maintain their shape during the entire training process. Meaning that you would have a</p> <pre><code>Conv2D(8, 16, 3, 1, 1) </code></pre> <p>from the very beginning (assuming you only have those two layers). An obvious problem pops out -- your grayscale image is a 1-channel input, but your convolution requires an 8-channel input in the first stage of training. The second paper uses an extra 1x1 convolution layer to map RGB &lt;-> feature maps. In your case, that would be</p> <pre><code>Conv2D(1, 8, 1) </code></pre> <p>which maps the 1-channel input to an 8-channel output. This can be thrown away after you are done with the first stage.</p> <p>There are other techniques, like gradually fading in new layers using a weight term, as stated in the paper. I suggest you read them, especially the second one.</p>
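<p>A minimal PyTorch sketch of the idea above. This only illustrates the 1x1 adapter trick, not the papers' full recipe; the layer sizes, the <code>from_gray</code> name and the <code>stage</code> flag are my own placeholders.</p> <pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # The "real" conv layers exist from the start and never change shape.
        self.conv1 = nn.Conv2d(8, 16, 3, 1, 1)
        self.conv2 = nn.Conv2d(16, 32, 3, 1, 1)
        # Throw-away 1x1 adapter: 1-channel grayscale to 8 feature maps
        # (the papers swap such adapters per stage; kept fixed here for brevity).
        self.from_gray = nn.Conv2d(1, 8, 1)

    def forward(self, x, stage=1):
        x = F.relu(self.from_gray(x))
        x = F.relu(self.conv1(x))
        if stage &gt;= 2:  # deeper path only after growing
            x = F.relu(self.conv2(x))
        return x

enc = ProgressiveEncoder()
print(enc(torch.randn(8, 1, 4, 4), stage=1).shape)  # torch.Size([8, 16, 4, 4])
print(enc(torch.randn(8, 1, 8, 8), stage=2).shape)  # torch.Size([8, 32, 8, 8])
</code></pre>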
2018-07-27 06:32:31.167000+00:00
2018-07-27 06:32:31.167000+00:00
null
null
51,549,878
<p>I am trying to make a progressive autoencoder and I have thought of a couple of ways of growing my network during training. However, I am always stuck on this one part where I don't know if changing the input (encoder) and output (decoder) channels would affect my network. See the example below.</p> <pre><code>X = torch.randn( 8, 1, 4, 4,) # A batch of 8 grayscale images of 4x4 pixels in size Encoder = nn.Sequential( Conv2D( 1, 16, 3, 1, 1 ), nn.ReLU() ) # starting setup 16 3x3 kernels </code></pre> <p>If I print the above weights from the network I would get a size of [ 1, 16, 3, 3 ], 16 kernels each of size 3x3. If I want to grow the network I would need to save those weights because hopefully they are already well trained on those 4x4 image inputs.</p> <pre><code>X = torch.randn( 8, 1, 8, 8) # increase the image size from 4x4 to 8x8 ... new_model = nn.Sequential() then do... # copy the previous layer and its weights from the original encoder # BTW My issue starts here. # Add/grow the new_model with new layers concat with the old layer, also modify the input channels so they can link correctly # Final result would be something like below. new_model = nn.Sequential( Conv2D( **1**, 8, 3, 1, 1 ), nn.ReLU(), Conv2D( **8**, 16, 3, 1, 1 ), nn.ReLU() ) Encoder = new_model # Repeat process </code></pre> <p>Everything looks good; however, because I changed the input channels, the size of the weights changed as well, and this is the issue that I have been stuck on for a while now. You can simply check this by running,</p> <pre><code>foo_1 = nn.Conv2d(1, 1, 3, 1, 1) # You can think of this as the starting Conv2D from the starting encoder foo_2 = nn.Conv2d(3, 1, 3, 1, 1) # You can think of this as the modified starting Conv2D with an outer layer outputting 3 channels connecting to it print(foo_1.weight.size()) # torch.Size([1, 1, 3, 3]) print(foo_2.weight.size()) # torch.Size([1, 3, 3, 3]) </code></pre> <p>Initially, I thought foo_1 and foo_2 would both have the same weight size, as in both would only use one 3x3 kernel, but that doesn't seem to be the case. I hope you can see my dilemma now: after x epochs I need to grow another convolution, and I have to mess with the input channels to make the new layers chain properly, but if I change the input channels the shape of the weights is different and I don't know how pasting the old state would work.</p> <p>I have been looking at pro GAN implementations in PyTorch and IMO they are not easy to read. How can I build more intuition on how to properly progressively grow your network?</p>
2018-07-27 02:16:37.437000+00:00
2018-07-27 06:56:10.473000+00:00
2018-07-27 06:56:10.473000+00:00
python|machine-learning|conv-neural-network|pytorch|autoencoder
['https://arxiv.org/abs/1807.03026', 'http://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of/karras2018iclr-paper.pdf']
2
58,773,864
<p>I tend to trust deployed code more than paper write-ups, especially in a case like word2vec, where the original authors' <a href="https://github.com/tmikolov/word2vec/blob/20c129af10659f7c50e86e3be406df663beff438/word2vec.c#L407" rel="nofollow noreferrer"><code>word2vec.c</code> code</a> released by the paper's authors has been widely used &amp; served as the template for other implementations. If we look at its subsampling mechanism...</p> <pre class="lang-c prettyprint-override"><code> if (sample &gt; 0) { real ran = (sqrt(vocab[word].cn / (sample * train_words)) + 1) * (sample * train_words) / vocab[word].cn; next_random = next_random * (unsigned long long)25214903917 + 11; if (ran &lt; (next_random &amp; 0xFFFF) / (real)65536) continue; } </code></pre> <p>...we see that those words with tiny counts (<code>.cn</code>) that could give negative values in the original formula instead here give values greater-than <code>1.0</code>, and thus can never be less than the <code>long</code>-random-masked-and-scaled to never be more than <code>1.0</code> (<code>(next_random &amp; 0xFFFF) / (real)65536</code>). So, it seems the authors' intent was for all negative-values of the original formula to mean "never discard". </p> <p>As per the <em>keras</em> <code>make_sampling_table()</code> comment &amp; implementation, they're <strong>not</strong> consulting the actual word-frequencies at all. Instead, they're assuming a Zipf-like distribution based on word-rank order to synthesize a simulated word-frequency. </p> <p>If their assumptions were to hold – the related words are from a natural-language corpus with a Zipf-like frequency-distribution – then I'd expect their sampling probabilities to be close to down-sampling probabilities that would have been calculated from true frequency information. And that's probably "close enough" for most purposes.</p> <p>I'm not sure why they chose this approximation. Perhaps other aspects of their usual processes have not maintained true frequencies through to this step, and they're expecting to always be working with natural-language texts, where the assumed frequencies will be generally true. </p> <p>(As luck would have it, and because people often want to impute frequencies to public sets of word-vectors which have dropped the true counts but are still sorted from most- to least-frequent, just a few days ago I wrote <a href="https://stackoverflow.com/a/58737377/130288">an answer about simulating a fake-but-plausible distribution using Zipf's law</a> – similar to what this keras code is doing.)</p> <p>But, if you're working with data that <strong>doesn't</strong> match their assumptions (as with your synthetic or described datasets), their sampling-probabilities will be quite different than what you would calculate yourself, with any form of the original formula that uses true word frequencies. </p> <p>In particular, imagine a distribution with one token a million times, then a hundred tokens all appearing just 10 times each. Those hundred tokens' order in the "rank" list is arbitrary – truly, they're all tied in frequency. But the simulation-based approach, by fitting a Zipfian distribution on that ordering, will in fact be sampling each of them very differently. The one 10-occurrence word lucky enough to be in the 2nd rank position will be far more downsampled, as if it were far more frequent. And the 1st-rank "tall head" value, by having its true frequency *under-*approximated, will be less down-sampled than otherwise. 
Neither of those effects seem beneficial, or in the spirit of the frequent-word-downsampling option - which should only "thin out" very-frequent words, and in all cases leave words of the same frequency as each other in the original corpus roughly equivalently present to each other in the down-sampled corpus. </p> <p>So for your case, I would go with the original formula (probability-of-discarding-that-requires-special-handling-of-negative-values), or the <code>word2vec.c</code> practical/inverted implementation (probability-of-keeping-that-saturates-at-1.0), rather than the keras-style approximation. </p> <p>(As a totally-separate note that nonetheless may be relevant for your dataset/purposes, if you're using negative-sampling: there's another parameter controlling the relative sampling of negative examples, often fixed at <code>0.75</code> in early implementations, that <a href="https://arxiv.org/abs/1804.04212" rel="nofollow noreferrer">one paper has suggested can usefully vary for non-natural-language token distributions &amp; recommendation-related end-uses</a>. This parameter is named <code>ns_exponent</code> in the Python <code>gensim</code> implementation, but simply <a href="https://github.com/tmikolov/word2vec/blob/20c129af10659f7c50e86e3be406df663beff438/word2vec.c#L55" rel="nofollow noreferrer">a fixed <code>power</code> value internal to a sampling-table pre-calculation in the original <code>word2vec.c</code> code</a>.)</p>
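<p>To make that recommendation concrete, here is a small NumPy paraphrase (my own, not code from word2vec.c or gensim) of the keep-probability the word2vec.c snippet above computes from true frequencies, next to the paper's discard formula:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

counts = np.array([100000] + [10] * 100)  # one "tall head" token plus 100 tied rare tokens
t = 1e-4                                  # the 'sample' threshold
f = counts / counts.sum()                 # true relative frequencies

discard_paper = 1 - np.sqrt(t / f)                                  # paper's formula; negative for rare words
keep_word2vec_c = np.minimum(1.0, (np.sqrt(f / t) + 1) * (t / f))   # word2vec.c keep-probability, saturating at 1

print(discard_paper[:3])    # head word ~0.99 discard, rare words go negative
print(keep_word2vec_c[:3])  # head word kept ~1% of the time, rare words kept with probability 1.0
</code></pre>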
2019-11-08 21:13:43.700000+00:00
2019-11-09 19:10:36.713000+00:00
2019-11-09 19:10:36.713000+00:00
null
58,772,768
<p>I am implementing the <a href="https://arxiv.org/pdf/1310.4546.pdf" rel="nofollow noreferrer">Skipgram</a> model, both in Pytorch and Tensorflow2. I am having doubts about the implementation of subsampling of frequent words. Verbatim from the paper, the probability of subsampling word <code>wi</code> is computed as</p> <p><a href="https://i.stack.imgur.com/Eq2u8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Eq2u8.png" alt="enter image description here"></a></p> <p>where <code>t</code> is a custom threshold (usually, a small value such as <em>0.0001</em>) and <code>f</code> is the frequency of the word in the document. Although the authors implemented it in a different, but almost equivalent way, let's stick with this definition.</p> <p>When computing the <code>P(wi)</code>, we can end up with negative values. For example, assume we have 100 words, and one of them appears extremely more often than others (as it is the case for my dataset).</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import seaborn as sns np.random.seed(12345) # generate counts in [1, 20] counts = np.random.randint(low=1, high=20, size=99) # add an extremely bigger count counts = np.insert(counts, 0, 100000) # compute frequencies f = counts/counts.sum() # define threshold as in paper t = 0.0001 # compute probabilities as in paper probs = 1 - np.sqrt(t/f) sns.distplot(probs); </code></pre> <p><strong>Q: What is the correct way to implement subsampling using this "probability"?</strong></p> <p>As an additional info, I have seen that in <a href="https://keras.io/preprocessing/sequence/" rel="nofollow noreferrer">keras</a> the function <code>keras.preprocessing.sequence.make_sampling_table</code> takes a different approach: </p> <pre class="lang-py prettyprint-override"><code>def make_sampling_table(size, sampling_factor=1e-5): """Generates a word rank-based probabilistic sampling table. Used for generating the `sampling_table` argument for `skipgrams`. `sampling_table[i]` is the probability of sampling the i-th most common word in a dataset (more common words should be sampled less frequently, for balance). The sampling probabilities are generated according to the sampling distribution used in word2vec: ``` p(word) = (min(1, sqrt(word_frequency / sampling_factor) / (word_frequency / sampling_factor))) ``` We assume that the word frequencies follow Zipf's law (s=1) to derive a numerical approximation of frequency(rank): `frequency(rank) ~ 1/(rank * (log(rank) + gamma) + 1/2 - 1/(12*rank))` where `gamma` is the Euler-Mascheroni constant. # Arguments size: Int, number of possible words to sample. sampling_factor: The sampling factor in the word2vec formula. # Returns A 1D Numpy array of length `size` where the ith entry is the probability that a word of rank i should be sampled. """ gamma = 0.577 rank = np.arange(size) rank[0] = 1 inv_fq = rank * (np.log(rank) + gamma) + 0.5 - 1. / (12. * rank) f = sampling_factor * inv_fq return np.minimum(1., f / np.sqrt(f)) </code></pre>
2019-11-08 19:35:17.350000+00:00
2019-11-09 19:10:36.713000+00:00
null
keras|word2vec|tf.keras|subsampling
['https://github.com/tmikolov/word2vec/blob/20c129af10659f7c50e86e3be406df663beff438/word2vec.c#L407', 'https://stackoverflow.com/a/58737377/130288', 'https://arxiv.org/abs/1804.04212', 'https://github.com/tmikolov/word2vec/blob/20c129af10659f7c50e86e3be406df663beff438/word2vec.c#L55']
4
58,272,383
<p>Excellent explanation by @majid ghafouri, but I just want to add more details to make sure you get this, why we are using it, and which advantages we can gain by using it:</p> <p>Stochastic Gradient Descent performs updates according to the following iterative process. This type of learning, which performs updates a single example at a time, is called <strong>online learning.</strong></p> <p>The algorithm for it would look like this:</p> <pre><code>procedure Online for several epochs of training do for each training example in the data do Calculate gradients of the loss Update the parameters according to this gradient end for end for end procedure </code></pre> <p>In contrast, we can also think of a batch learning algorithm, which treats the entire data set as a single unit, calculates the gradients for this unit, then only performs an update after making a full pass through the data. These two update strategies have trade-offs.</p> <p>• Online training algorithms usually find a relatively good solution more quickly, as they don’t need to make a full pass through the data before performing an update.</p> <p>• However, at the end of training, batch learning algorithms can be more stable, as they are not overly influenced by the most recently seen training examples.</p> <p>The algorithm for batch learning would look like this:</p> <pre><code>procedure Batch for several epochs of training do for each training example in the data do Calculate and accumulate gradients of the loss end for Update the parameters according to the accumulated gradient end for end procedure </code></pre> <p>• Batch training algorithms are also more prone to falling into local optima; the randomness in online training algorithms often allows them to bounce out of local optima and find a better global solution.</p> <p><strong>Minibatching</strong> is a happy medium between these two strategies. Basically, minibatched training is similar to online training, but instead of processing a single training example at a time, we calculate the gradient for n training examples at a time. In the extreme case of n = 1, this is equivalent to standard online training, and in the other extreme where n equals the size of the data, this is equivalent to fully batched training.</p> <p>As we increase the number of training examples, each parameter update becomes more informative and stable, but the amount of time to perform one update increases, so it is common to choose an n that allows for a good balance between the two. One other major advantage of minibatching is that by using a few tricks, it is actually possible to make the simultaneous processing of n training examples significantly faster than processing n different examples separately. Specifically, by taking multiple training examples and grouping similar operations together to be processed simultaneously, we can realize large gains in computational efficiency due to the fact that modern hardware (particularly GPUs, but also CPUs) has very efficient vector processing instructions that can be exploited with appropriately structured inputs.</p> <p>The explanation is taken from this excellent <a href="https://arxiv.org/pdf/1703.01619.pdf" rel="nofollow noreferrer">paper</a>; you can read further if you have time.</p>
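<p>A minimal PyTorch sketch of the three regimes; the only thing that changes is the batch size handed to the DataLoader. The linear model and random data are placeholders.</p> <pre class="lang-py prettyprint-override"><code>import torch
from torch.utils.data import DataLoader, TensorDataset

X, y = torch.randn(1000, 20), torch.randn(1000, 1)
model = torch.nn.Linear(20, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# batch_size=1 is online learning, batch_size=len(X) is full-batch, anything in between is a minibatch
for batch_size in (1, 32, len(X)):
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    for xb, yb in loader:                                   # one epoch
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(xb), yb)  # loss over just this (mini)batch
        loss.backward()
        opt.step()                                          # one parameter update per (mini)batch
</code></pre>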
2019-10-07 15:10:58.550000+00:00
2019-10-07 15:10:58.550000+00:00
null
null
58,269,460
<p>I'm taking the fast-ai course, and in &quot;Lesson 2 - SGD&quot; it says:</p> <blockquote> <p>Mini-batch: a random bunch of points that you use to update your weights</p> </blockquote> <p>And it also says that gradient descent uses mini-batches.</p> <p>What is a mini-batch? What's the difference between a mini-batch and a regular batch?</p>
2019-10-07 12:21:39.420000+00:00
2022-05-29 09:32:00.620000+00:00
2022-05-29 09:32:00.620000+00:00
deep-learning|pytorch|computer-science|cross-validation|fast-ai
['https://arxiv.org/pdf/1703.01619.pdf']
1
55,048,491
<p>One way to get weight decay in TensorFlow is by adding L2-regularization to the loss. This is equivalent to weight decay for standard SGD (but not for adaptive gradient optimizers) according to <a href="https://arxiv.org/pdf/1711.05101.pdf" rel="nofollow noreferrer">Decoupled Weight Decay Regularization</a> paper by Loshchilov &amp; Hutter.</p>
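<p>A minimal TF1-style sketch of that recipe (adding an L2 penalty to the loss before handing it to the plain SGD optimizer); the toy linear model and the decay value are placeholders, not part of the paper:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf  # TF 1.x style, matching tf.train.GradientDescentOptimizer

x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.get_variable("w", [10, 1])
b = tf.get_variable("b", [1], initializer=tf.zeros_initializer())

data_loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
weight_decay = 1e-4  # placeholder value
l2_penalty = weight_decay * tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()])

# For plain SGD this matches weight decay; for Adam/RMSProp it does not (see the paper).
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(data_loss + l2_penalty)
</code></pre>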
2019-03-07 16:24:06.680000+00:00
2019-03-07 16:24:06.680000+00:00
null
null
55,046,234
<p>In Keras and PyTorch, the SGD optimizer has a weight decay parameter. I found that <strong>tf.train.GradientDescentOptimizer</strong> does not have a weight decay parameter. What is the TensorFlow equivalent of SGD with weight decay?</p> <p>Pytorch Optim - <a href="https://pytorch.org/docs/stable/optim.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/optim.html</a></p> <p>Keras Optimizer - <a href="https://keras.io/optimizers/" rel="nofollow noreferrer">https://keras.io/optimizers/</a></p>
2019-03-07 14:31:47.607000+00:00
2020-10-20 11:53:18.237000+00:00
2019-03-07 15:53:58.787000+00:00
python|tensorflow|optimization|deep-learning
['https://arxiv.org/pdf/1711.05101.pdf']
1
45,805,873
<blockquote> <p>The result type of the application f a has to be an applicative, why? Would a functor not be enough?</p> </blockquote> <p>This is a <em>fantastic</em> question. The original <a href="http://strictlypositive.org/Idiom.pdf" rel="noreferrer">McBride &amp; Paterson</a> paper goes in the other direction: it notices that a lot of computation are applicative in nature (can be rewritten with <code>pure</code> and <code>&lt;*&gt;</code>). <em>Then</em> it notices that certain containers, like <code>[]</code>, allow for a function of this type:</p> <pre><code> idist :: Applicative f =&gt; [f a] -&gt; f [a] idist = ... </code></pre> <p>This we now call <code>sequence</code> in the <code>Traversable</code> class. All well and good, but it helps to probe the <em>strength</em> of our assumptions when we write abstractions. What if we tried to build a traversable library without <code>Applicative</code>, using only <code>Functor</code>? What exactly would go wrong?</p> <h2>Products!</h2> <p>For this it helps to read through the <a href="https://arxiv.org/pdf/1202.2919.pdf" rel="noreferrer">Jaskelioff &amp; Rypacek</a> paper that tries to pin down structures in category theory that correspond to applicative functors and traversable containers. The most interesting property of traversable containers is that they are <em>closed</em> under finite <em>sums</em> and <em>products</em>. This is great for Haskell programming, where a vast number of datatypes can be defined with sums and products:</p> <pre><code>data WeirdSum a = ByList [a] | ByMaybe (Maybe a) instance Traversable WeirdSum where traverse a2fb (ByList as) = ByList &lt;$&gt; traverse a2fb as traverse a2fb (ByMaybe maybeA) = ByMaybe &lt;$&gt; traverse a2fb maybeA </code></pre> <p>Ah, more evidence that we do not need all the power of Applicative! We are only using <code>fmap</code> here. Now finite products:</p> <pre><code>data WeirdProduct a = WeirdProduct [a] (Maybe a) instance Traversable WeirdProduct where traverse a2fb (WeirdProduct as aMaybe) = WeirdProduct &lt;$&gt; traverse a2fb as &lt;*&gt; traverse a2fb aMaybe </code></pre> <p>Here it is impossible to write a definition with <em>just</em> functors: <code>fmap</code> is great for sums but gives us no way to "glue" together two different functorial values. It is only with <code>&lt;*&gt;</code> that we are able to "close" traversable containers over finite products.</p> <p>This is all well and good but lacks precision. We are sort of cherry-picking evidence here that <code>Functor</code> might be bad, but could we argue from first principles that <code>Applicative</code> is exactly what we need, no more and no less?</p> <h2>Category theory!</h2> <p>This problem is tackled in the second half of the Jaskelioff &amp; Rypacek paper. In category-theoretic terms, a functor <code>T</code> is traversable iff it allows for a family of natural transformations</p> <pre><code>{ sequence | sequence : TFX -&gt; FTX, any applicative F } </code></pre> <p>where each natural transformation is "natural in F" and respects the "<a href="https://bartoszmilewski.com/2017/02/06/applicative-functors/" rel="noreferrer">monoidal structure of applicative functor composition</a>." It is that last phrase, that last little piece of jargon, where it is important to have <code>Applicative</code> rather than <code>Functor</code>. 
With <code>Applicative f</code>, we are able to glue together values of type <code>f a</code> and <code>f b</code>, where we either act on them (a la <code>foo &lt;$&gt; fa &lt;*&gt; fb</code> where <code>foo :: a -&gt; b -&gt; c</code> and <code>fa, fb :: f a, f b</code>) or just shove them into a tuple <code>f (a, b)</code>. This gives rise to the aforementioned "monoidal structure"; we need this to then prove that traversable functors are closed over finite products, just like we showed above. Without applicatives we couldn't even begin talking about how functors and products interact! If <strong>Hask</strong> is our category of Haskell types, then an applicative is just a way to name <strong>Hask</strong>-to-<strong>Hask</strong> endofunctors that "behave well" around <code>(-&gt;)</code> types and product types. </p> <p>Hopefully this two-pronged answer, one in practical programming and one in categorical foo-foo, gives a little intuition as to why you want applicative functors when talking about traversability. I think often traversables are introduced with an element of magic around them, but they are very much motivated by practical concerns with solid theoretical foundations. Other language ecosystems may have easier-to-use iteration patterns and libraries, but I for one love the simplicity and elegance of <code>traverse</code> and <code>sequence</code>.</p>
2017-08-21 21:35:49.613000+00:00
2017-08-22 16:26:32.730000+00:00
2017-08-22 16:26:32.730000+00:00
null
45,798,242
<p>Could someone please explain to me, what is the purpose of the typeclass <code>Traversable</code>? </p> <p>The typeclass definition is:</p> <pre><code>class (Functor t, Foldable t) =&gt; Traversable (t :: * -&gt; *) where </code></pre> <p>So <code>Traversable</code> is a <code>Functor t</code> and <code>Foldable t</code>. </p> <p>The <code>traverse</code> function is a member of <code>Traversable</code> and has the following signature: </p> <pre><code>traverse :: Applicative f =&gt; (a -&gt; f b) -&gt; t a -&gt; f (t b) </code></pre> <p>Why does the result have to be wrapped into an applicative? What is the sense of it? </p> <p>I have the following example:</p> <pre><code>module ExercisesTraversable where import Test.QuickCheck (Arbitrary, arbitrary) import Test.QuickCheck.Checkers (quickBatch, eq, (=-=), EqProp) import Test.QuickCheck.Classes (traversable) type TI = [] newtype IdentityT a = IdentityT a deriving (Eq, Ord, Show) instance Functor IdentityT where fmap f (IdentityT a) = IdentityT (f a) instance Foldable IdentityT where foldMap f (IdentityT a) = f a instance Traversable IdentityT where traverse f (IdentityT a) = IdentityT &lt;$&gt; f a instance Arbitrary a =&gt; Arbitrary (IdentityT a) where arbitrary = do a &lt;- arbitrary return (IdentityT a) instance Eq a =&gt; EqProp (IdentityT a) where (=-=) = eq main = do let trigger = undefined :: TI (Int, Int, [Int]) quickBatch (traversable trigger) </code></pre> <p>Let's take a look at the <code>traverse</code> implementation:</p> <pre><code>traverse f (IdentityT a) = IdentityT &lt;$&gt; f a </code></pre> <p>The result type of the application <code>f a</code> has to be an applicative, why? Would a functor not be enough?</p>
2017-08-21 13:21:34.847000+00:00
2017-08-22 16:26:32.730000+00:00
2017-08-21 16:32:02.967000+00:00
haskell
['http://strictlypositive.org/Idiom.pdf', 'https://arxiv.org/pdf/1202.2919.pdf', 'https://bartoszmilewski.com/2017/02/06/applicative-functors/']
3
49,243,112
<p>This is an open problem; many approaches exist to creating meaningful sentence vectors.</p> <ul> <li>BoW models, as Fabrizio_P explained</li> <li>Element-wise vector operations (<a href="http://www.aclweb.org/anthology/P/P08/P08-1028.pdf" rel="nofollow noreferrer">http://www.aclweb.org/anthology/P/P08/P08-1028.pdf</a>) <ul> <li>Addition (i.e. simply add all the word vector together, possibly normalizing afterwards)</li> <li>Multiplication (i.e. multiply all vectors together, element-wise, resulting in a logically grounded embedding)</li> </ul></li> <li>Arbitrary task-specific recurrent functions (<a href="http://www.aclweb.org/anthology/D12-1110" rel="nofollow noreferrer">http://www.aclweb.org/anthology/D12-1110</a>)</li> <li>More sophisticated general-purpose approaches (<a href="https://arxiv.org/abs/1508.02354" rel="nofollow noreferrer">https://arxiv.org/abs/1508.02354</a>, <a href="https://arxiv.org/abs/1506.06726" rel="nofollow noreferrer">https://arxiv.org/abs/1506.06726</a>)</li> </ul> <p>Element-wise operations, such as vector addition, suffice for most simple tasks, but obviously exhibit a high amount of information loss as sentences grow larger or the task at hand gets more demanding. Recurrent neural networks are quite good at creating task specific sentence embeddings, but obviously these require training data and some familiarity with machine learning. General purpose sentence embeddings are the most interesting ones from a research perspective, but probably not what you're looking for.</p>
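<p>For reference, the simplest option above (element-wise addition/averaging, then normalizing) is only a few lines. Here <code>word_vectors</code> stands for whatever keyed word-vector lookup you trained (e.g. a gensim model's <code>.wv</code>); that name and the 300-dimension default are my assumptions.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

def sentence_vector(tokens, word_vectors, dim=300):
    # word_vectors: any mapping from token to vector (placeholder for your trained embeddings)
    # Average the vectors of the tokens we have embeddings for; zero vector as a fallback.
    vecs = [word_vectors[w] for w in tokens if w in word_vectors]
    if not vecs:
        return np.zeros(dim)
    v = np.mean(vecs, axis=0)
    norm = np.linalg.norm(v)
    return v / norm if norm else v

# Example usage (model.wv is an assumption about your setup):
# sent_vec = sentence_vector("this movie was great".split(), model.wv)
</code></pre>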
2018-03-12 19:29:40.447000+00:00
2018-03-12 19:29:40.447000+00:00
null
null
49,221,701
<p>I have a sentence and I use word2vec to embed each word as a vector. For example, consider a sentence of 5 words, so I get 5 different vectors (one for each word) for the sentence. Which is the best method to combine the complete sentence into a single vector that I will pass to the ANN?</p>
2018-03-11 15:31:51.733000+00:00
2018-03-12 22:50:33.300000+00:00
null
neural-network|artificial-intelligence|recurrent-neural-network|sentiment-analysis
['http://www.aclweb.org/anthology/P/P08/P08-1028.pdf', 'http://www.aclweb.org/anthology/D12-1110', 'https://arxiv.org/abs/1508.02354', 'https://arxiv.org/abs/1506.06726']
4
62,544,040
<p>As far as I can see, this code doesn't provide multiple samples, but you can adapt it with some adjustments.</p> <p>This line already uses multinomial but returns only 1 sample:</p> <pre class="lang-py prettyprint-override"><code>next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1) </code></pre> <p>change it to:</p> <pre class="lang-py prettyprint-override"><code>next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=num_samples) </code></pre> <p>Now you also need to change the result construction. This line concatenates the next_token with the sentence. You now get <code>num_samples</code> next_tokens and you need to unsqueeze all of them:</p> <pre><code>generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1) </code></pre> <p>change it to:</p> <pre class="lang-py prettyprint-override"><code>generated = torch.cat((generated, next_token.unsqueeze(1)), dim=1) </code></pre> <p>The whole function should look like this now:</p> <pre class="lang-py prettyprint-override"><code>def sample_sequence( model, length, context, num_samples=1, temperature=1, top_k=0, top_p=0.9, repetition_penalty=1.0, device=&quot;cpu&quot;, ): context = torch.tensor(context, dtype=torch.long, device=device) context = context.unsqueeze(0).repeat(num_samples, 1) generated = context with torch.no_grad(): for _ in trange(length): inputs = {&quot;input_ids&quot;: generated} outputs = model( **inputs ) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet/CTRL (cached hidden-states) next_token_logits = outputs[0][0, -1, :] / (temperature if temperature &gt; 0 else 1.0) # repetition penalty from CTRL (https://arxiv.org/abs/1909.05858) for _ in set(generated.view(-1).tolist()): next_token_logits[_] /= repetition_penalty filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p) if temperature == 0: # greedy sampling: next_token = torch.argmax(filtered_logits).unsqueeze(0) else: next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=num_samples) generated = torch.cat((generated, next_token.unsqueeze(1)), dim=1) return generated </code></pre> <p>Last but not least, you have to change your tokenizer.decode call to tokenizer.batch_decode as the return value now contains multiple samples:</p> <pre class="lang-py prettyprint-override"><code>tokenizer.batch_decode(output.tolist(), clean_up_tokenization_spaces=True, skip_special_tokens=True) </code></pre> <p>Something you have to think about yourself is what you want to do when there is no valid <code>next_token</code>. Currently you will receive an error message like:</p> <blockquote> <p>RuntimeError: invalid multinomial distribution (with replacement=False, not enough non-negative category to sample)</p> </blockquote> <p><strong>Another thing you have to think about is whether their code is even correct.</strong> During the few tests I have conducted, it felt like the quality of the created sentences decreased with an increasing number of <code>num_samples</code> (i.e. maybe the quality is better when you use a simple loop to call sample_sequence multiple times?). I haven't worked with GPT-2 much and can't help you here.</p>
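<p>A hedged usage sketch with the Hugging Face GPT-2 classes, assuming the rest of the linked predictor.py (e.g. <code>top_k_top_p_filtering</code>, <code>trange</code> and the adjusted <code>sample_sequence</code> above) is in scope; the prompt text is arbitrary:</p> <pre class="lang-py prettyprint-override"><code>import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = tokenizer.encode("Machine learning is")
output = sample_sequence(model, length=30, context=context,
                         num_samples=5, top_p=0.9, device="cpu")

# batch_decode because the tensor now holds num_samples generated sequences
for text in tokenizer.batch_decode(output.tolist(),
                                   clean_up_tokenization_spaces=True,
                                   skip_special_tokens=True):
    print(text)
</code></pre>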
2020-06-23 21:28:44.113000+00:00
2020-06-24 08:07:50.973000+00:00
2020-06-24 08:07:50.973000+00:00
null
62,472,438
<p>I'm going off of <a href="https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/text-generator/predictor.py" rel="nofollow noreferrer">https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/text-generator/predictor.py</a></p> <p>But if I pass <code>num_samples=5</code>, I get:</p> <pre><code> generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1) RuntimeError: Sizes of tensors must match except in dimension 1. Got 5 and 1 in dimension 0 </code></pre> <p>the code is:</p> <pre><code>def sample_sequence( model, length, context, num_samples=1, temperature=1, top_k=0, top_p=0.9, repetition_penalty=1.0, device="cpu", ): context = torch.tensor(context, dtype=torch.long, device=device) context = context.unsqueeze(0).repeat(num_samples, 1) print('context_size', context.shape) generated = context print('context', context) with torch.no_grad(): for _ in trange(length): inputs = {"input_ids": generated} outputs = model( **inputs ) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet/CTRL (cached hidden-states) next_token_logits = outputs[0][0, -1, :] / (temperature if temperature &gt; 0 else 1.0) # reptition penalty from CTRL (https://arxiv.org/abs/1909.05858) for _ in set(generated.view(-1).tolist()): next_token_logits[_] /= repetition_penalty filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p) if temperature == 0: # greedy sampling: next_token = torch.argmax(filtered_logits).unsqueeze(0) else: next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1) generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1) return generated </code></pre>
2020-06-19 14:28:32.710000+00:00
2020-06-24 08:07:50.973000+00:00
null
python|pytorch|huggingface-transformers
[]
0
39,410,372
<p>I am currently working with large files and an MS SQL database. Storing the whole file as a byte array is time consuming and will blow up your database.</p> <p>Consider using a SqlFileStream (<a href="https://msdn.microsoft.com/de-de/library/gg471497.aspx" rel="nofollow">https://msdn.microsoft.com/de-de/library/gg471497.aspx</a>). You have to do some configuration, but it will enable you to store files of any size in no time.</p> <p>This will give you a way to stream the file directly from the database instead of reading the whole file on your ASP server and then sending it to the client.</p> <p>Microsoft technical paper, "To BLOB or Not To BLOB": <a href="https://arxiv.org/ftp/cs/papers/0701/0701168.pdf" rel="nofollow">https://arxiv.org/ftp/cs/papers/0701/0701168.pdf</a></p>
2016-09-09 11:17:58.340000+00:00
2016-09-09 11:22:24.950000+00:00
2016-09-09 11:22:24.950000+00:00
null
39,410,188
<p>I have a SQL database where I am storing user uploaded files.</p> <p>I have created a <code>linkbutton</code> to download the file. On click of that link button I am calling the below code. Unfortunately it is not working, and it is not even throwing any error.</p> <p>I have stored <strong>filename</strong>, <strong>contentType</strong> and <strong>bytes</strong> as 3 columns in my SQL table.</p> <pre><code>Response.Clear(); Response.Buffer = true; Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = contentType; Response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName); Response.BinaryWrite(bytes); Response.Flush(); Response.End(); </code></pre> <p>What could be the issue? Or is there any other way to achieve this?</p> <p><strong>UPDATE:</strong></p> <p>With the same code and the changes based on the answer marked as correct, it worked. I am providing the working code here for future reference for other users:</p> <pre><code> protected void Page_Load(object sender, EventArgs e) { DataTable dt = Session["fileAttachment"] as DataTable; string fileName = Convert.ToString(dt.Rows[0]["filename"]); string contentType = Convert.ToString(dt.Rows[0]["contentType"]); byte[] bytes = Convert.FromBase64String(Convert.ToString(dt.Rows[0]["bytearr"])); Response.Clear(); Response.ClearHeaders(); Response.Buffer = true; Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = contentType; Response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName); Response.BinaryWrite(bytes); // //Response.BufferOutput = true; //Response.OutputStream.Write(bytes, 0, bytes.Length); Response.Flush(); Response.End(); } </code></pre> <p>I added the above code in a new page and redirected to this page after creating the session.</p>
2016-09-09 11:07:48.033000+00:00
2016-09-12 05:05:29.653000+00:00
2016-09-12 05:05:29.653000+00:00
c#|sql
['https://msdn.microsoft.com/de-de/library/gg471497.aspx', 'https://arxiv.org/ftp/cs/papers/0701/0701168.pdf']
2
38,868,032
<p>This question is a very good example of a <em>bad</em> question for a forum like Stackoverflow. I am writing an answer because I feel you might use some advice, which, again, is very subjective. I wouldn't be surprised if the question gets closed as "opinion based". But first, an opinion on the exercises and the solutions:</p> <h2>Second element of list</h2> <p>Definitely, <code>second(X, [_,X|_]).</code> is to be preferred. It just looks more familiar. But you should be using the standard library anyway: <code>nth1(2, List, Element)</code>.</p> <h2>Mirroring a binary tree</h2> <p>The tree representation that the textbook suggests is a bit... unorthodox? A binary tree is almost invariably represented as a nested term, using two functors, for example:</p> <ol> <li><code>t/3</code> which is a non-empty tree, with <code>t(Value_at_node, Left_subtree, Right_subtree)</code></li> <li><code>nil/0</code> which is an empty tree</li> </ol> <p>Here are some binary trees:</p> <ul> <li>The empty tree: <code>nil</code></li> <li>A binary search tree holding {1,2,3}: <code>t(2, t(1, nil, nil), t(3, nil, nil))</code></li> <li>A degenerate left-leaning binary tree holding the list [1,2,3] (if you traversed it pre-order): <code>t(1, t(2, t(3, nil, nil), nil), nil)</code></li> </ul> <p>So, to "mirror" a tree, you would write:</p> <pre><code>mirror(nil, nil). mirror(t(X, L, R), t(X, MR, ML)) :- mirror(L, ML), mirror(R, MR). </code></pre> <blockquote> <p>The empty tree, mirrored, is the empty tree. A non-empty tree, mirrored, has its left and right sub-trees swapped, and mirrored.</p> </blockquote> <p>That's all. No need for swapping, really, or anything else. It is also efficient: for any argument, only one of the two clauses will be evaluated because the first arguments are different functors, <code>nil/0</code> and <code>t/3</code> (Look-up "first argument indexing" for more information on this). If you would have instead written:</p> <pre><code>mirror_x(T, MT) :- ( T = nil -&gt; MT = nil ; T = t(X, L, R), MT = t(X, MR, ML), mirror_x(L, ML), mirror_x(R, MR) ). </code></pre> <p>Than not only is this less readable (well...) but probably less efficient, too.</p> <h2>On readability and efficiency</h2> <p>Code is read by people and evaluated by machines. If you want to write readable code, you still might want to address it to other programmers and not to the machines that are going to evaluate it. Prolog implementations have gotten better and better at being efficient at evaluating code that is also more readable to people who have read and written a lot of Prolog code (do you recognize the feedback loop?). You might want to take a look at <a href="http://arxiv.org/pdf/0911.2899.pdf" rel="nofollow">Coding Guidelines for Prolog</a> if you are really interested in readability.</p> <p>A first step towards getting used to Prolog is trying to solve the <a href="http://www.ic.unicamp.br/~meidanis/courses/mc336/2009s2/prolog/problemas/" rel="nofollow">99 Prolog Problems</a> (there are other sites with the same content). Follow the suggestion to avoid using built-ins. Then, look at the solutions and study them. Then, study the documentation of a Prolog implementation to see how much of these problems have been solved with built-in predicates or standard libraries. Then, study the implementations. 
You might find some real gems there: one of my favorite examples is the library definition of <a href="http://eu.swi-prolog.org/pldoc/doc/home/swipl/lib/swipl/library/lists.pl?show=src#nth0/3" rel="nofollow"><code>nth0/3</code></a>. Just look at this beauty ;-).</p> <p>There is also a whole book written on the subject of good Prolog code: "The Craft of Prolog" by Richard O'Keefe. The efficiency measurements are quite outdated though. Basically, if you want to know how efficient your code is, you end up with a matrix with at least three dimensions:</p> <ul> <li>Prolog implementation (SWI-Prolog, SICSTUS, YAP, Gnu-Prolog...)</li> <li>Data structure and algorithm used</li> <li>Facilities provided by the implementation</li> </ul> <p>You will end up having some holes in the matrix. Example: what is the best way to read line-based input, do something with each line, and output it? Read line by line, do the thing, output? Read all at once, do everything in memory, output at once? Use a DCG? In SWI-Prolog, since version 7, you can do:</p> <pre><code>read_string(In_stream, _, Input), split_string(Input, "\n", "", Lines), maplist(do_x, Lines, Xs), atomics_to_string(Xs, "\n", Output), format(Out_stream, "~s\n", Output) </code></pre> <p>This is concise and very efficient. Caveats:</p> <ul> <li>The available memory might be a bottleneck</li> <li>Strings are not standard Prolog, so you are stuck with implementations that have them</li> </ul> <p>This is a very basic example, but it demonstrates at least the following difficulties in answering your question:</p> <ul> <li>Differences between implementations</li> <li>Opinions on what is readable or idiomatic Prolog</li> <li>Opinions on the importance of standards</li> </ul> <p>The example above doesn't even go into details about your problem, as for example what you do with each line. Is it just text? Do you need to parse the lines? Why are you not using a stream of Prolog terms instead? And so on.</p> <h2>On efficiency measurements</h2> <p>Don't use the number of steps in the tracer, or even the reported number of inferences. You really need to measure time, with a realistic input. Sorting with <code>sort/2</code>, for example, always counts as exactly one inference, no matter the length of the list being sorted. On the other hand, <code>sort/2</code> in any Prolog is about as efficient as a sort on your machine would ever get, so is that an issue? You can't know until you have measured the performance.</p> <p>And of course, as long as you make an informed choice of an algorithm and a data structure, you can at the very least know the complexity of your solution. Doing an efficiency measurement is interesting only if you notice a <strong>discrepancy between what you expect and what you measure</strong>: obviously, there is a mistake. Either your complexity analysis is wrong, or your implementation is wrong, or even the Prolog implementation you are using is doing something unexpected.</p> <p>On top of this, there is the inherent problem of high-level libraries. With some of the more complex approaches, you might not be able to easily judge what the complexity of a given solution might be (constraint logic programming, as in CHR and CLPFD, is a prime example). Most real problems that fit the approach nicely will be much easier to write, and more efficient than you could ever do without considerable effort and very specific code.
But get fancy enough, and your CHR program might not even want to compile any more.</p> <h2>Unification in the head of the predicate</h2> <p>This is not opinion-based any more. Just do the unifications in the head if you can. It is <em>more readable to a Prolog programmer</em>, and it is more efficient.</p> <h2>PS</h2> <p>"Learn Prolog Now!" is a good starting point, but nothing more. Just work your way through it and move on.</p>
2016-08-10 08:29:29.127000+00:00
2016-08-10 14:21:58.870000+00:00
2016-08-10 14:21:58.870000+00:00
null
38,856,894
<p>I want to ask pros and cons of different Prolog representations in arguments of predicates.</p> <p>For example in <a href="http://www.learnprolognow.org/lpnpage.php?pagetype=html&amp;pageid=lpn-htmlse16" rel="nofollow">Exercise 4.3</a>: <em>Write a predicate second(X,List) which checks whether X is the second element of List.</em> The solution can be:</p> <pre><code>second(X,List):- [_,X|_]=List. </code></pre> <p>Or,</p> <pre><code>second(X,[_,X|_]). </code></pre> <p>The both predicates would behave similarly. The first one would be more readable than the second, at least to me. But the second one uses more stacks during the execution (I checked this with <em>trace</em>).</p> <p>A more complicated example is <a href="http://www.learnprolognow.org/lpnpage.php?pagetype=html&amp;pageid=lpn-htmlse11" rel="nofollow">Exercise 3.5</a>: <em>Binary trees are trees where all internal nodes have exactly two children. The smallest binary trees consist of only one leaf node. We will represent leaf nodes as leaf(Label) . For instance, leaf(3) and leaf(7) are leaf nodes, and therefore small binary trees. Given two binary trees B1 and B2 we can combine them into one binary tree using the functor tree/2 as follows: tree(B1,B2) . So, from the leaves leaf(1) and leaf(2) we can build the binary tree tree(leaf(1),leaf(2)) . And from the binary trees tree(leaf(1),leaf(2)) and leaf(4) we can build the binary tree tree(tree(leaf(1), leaf(2)),leaf(4)). Now, define a predicate swap/2 , which produces the mirror image of the binary tree that is its first argument.</em> The solution would be:</p> <p>A2.1:</p> <pre><code>swap(T1,T2):- T1=tree(leaf(L1),leaf(L2)), T2=tree(leaf(L2),leaf(L1)). swap(T1,T2):- T1=tree(tree(B1,B2),leaf(L3)), T2=tree(leaf(L3),T3), swap(tree(B1,B2),T3). swap(T1,T2):- T1=tree(leaf(L1),tree(B2,B3)), T2=tree(T3,leaf(L1)), swap(tree(B2,B3),T3). swap(T1,T2):- T1=tree(tree(B1,B2),tree(B3,B4)), T2=tree(T4,T3), swap(tree(B1,B2),T3),swap(tree(B3,B4),T4). </code></pre> <p>Alternatively,</p> <p>A2.2:</p> <pre><code>swap(tree(leaf(L1),leaf(L2)), tree(leaf(L2),leaf(L1))). swap(tree(tree(B1,B2),leaf(L3)), tree(leaf(L3),T3)):- swap(tree(B1,B2),T3). swap(tree(leaf(L1),tree(B2,B3)), tree(T3,leaf(L1))):- swap(tree(B2,B3),T3). swap(tree(tree(B1,B2),tree(B3,B4)), tree(T4,T3)):- swap(tree(B1,B2),T3),swap(tree(B3,B4),T4). </code></pre> <p>The number of steps of the second solution was much less than the first one (again, I checked with <em>trace</em>). But regarding the readability, the first one would be easier to understand, I think.</p> <p>Probably the readability depends on the level of one's Prolog skill. I am a learner level of Prolog, and am used to programming with C++, Python, etc. So I wonder if skillful Prolog programmers agree with the above readability.</p> <p>Also, I wonder if the number of steps can be a good measurement of the computational efficiency.</p> <p>Could you give me your opinions or guidelines to design predicate arguments?</p> <hr> <p>EDITED.</p> <p>According to the advice from @coder, I made a third version that consists of a single rule:</p> <p>A2.3:</p> <pre><code>swap(T1,T2):- ( T1=tree(leaf(L1),leaf(L2)), T2=tree(leaf(L2),leaf(L1)) ); ( T1=tree(tree(B1,B2),leaf(L3)), T2=tree(leaf(L3),T3), swap(tree(B1,B2),T3) ); ( T1=tree(leaf(L1),tree(B2,B3)), T2=tree(T3,leaf(L1)), swap(tree(B2,B3),T3) ); ( T1=tree(tree(B1,B2),tree(B3,B4)), T2=tree(T4,T3), swap(tree(B1,B2),T3),swap(tree(B3,B4),T4) ). 
</code></pre> <p>I compared the number of steps in <em>trace</em> of each solution:</p> <ul> <li>A2.1: 36 steps</li> <li>A2.2: 8 steps</li> <li>A2.3: 32 steps</li> </ul> <p>A2.3 (readable single-rule version) seems to be better than A2.1 (readable four-rule version), but A2.2 (non-readable four-rule version) still outperforms both.</p> <p>I'm not sure if the number of steps in <em>trace</em> reflects the actual computational efficiency. There are fewer steps in A2.2, but it incurs more computational cost in pattern matching of the arguments. So, I compared the execution time for 40000 queries (each query is a complicated one, <em>swap(tree(tree(tree(tree(leaf(3),leaf(4)),leaf(5)),tree(tree(tree(tree(leaf(3),leaf(4)),leaf(5)),leaf(4)),leaf(5))),tree(tree(leaf(3),tree(tree(leaf(3),leaf(4)),leaf(5))),tree(tree(tree(tree(leaf(3),leaf(4)),leaf(5)),leaf(4)),leaf(5)))), _).</em> ). The results were almost the same (0.954 sec, 0.944 sec, 0.960 sec respectively). This shows that the three representations A2.1, A2.2, A2.3 have close computational efficiency. Do you agree with this result? (Probably this is case-specific; I need to vary the experimental setup.)</p>
2016-08-09 17:19:30.547000+00:00
2016-08-10 14:21:58.870000+00:00
2016-08-10 09:41:47.020000+00:00
prolog
['http://arxiv.org/pdf/0911.2899.pdf', 'http://www.ic.unicamp.br/~meidanis/courses/mc336/2009s2/prolog/problemas/', 'http://eu.swi-prolog.org/pldoc/doc/home/swipl/lib/swipl/library/lists.pl?show=src#nth0/3']
3
24,664,694
<p>For the best bet, read the papers by Bergstra et. al. <a href="http://arxiv.org/pdf/1209.5111.pdf" rel="nofollow">1</a> <a href="http://jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf" rel="nofollow">2</a> and <a href="http://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf" rel="nofollow">3</a>. I am not 100% clear on what the bandit_algo is, except that one of the papers mentions it as an alternative method to Gaussian Process and Tree of Parzen Estimators - maybe you can use it in the same way as those two?</p> <p>My guess is that if it not documented, it may not be finished yet. You can try raising an issue on Github - the devs are fairly responsive from what I have seen.</p> <p>EDIT: Looking at <a href="http://www.cs.ubc.ca/~hutter/nips2011workshop/papers_and_posters/Bergstra-abstract.pdf" rel="nofollow">this paper</a>, these bandit algorithms may be the base class that the others inherit from.</p>
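<p>Whatever the bandit classes turn out to be, the documented way to drive a search is <code>fmin</code> with an <code>algo</code> such as <code>tpe.suggest</code> or <code>rand.suggest</code>; a toy sketch (the quadratic objective is made up):</p> <pre class="lang-py prettyprint-override"><code>from hyperopt import fmin, tpe, rand, hp

def objective(x):
    return (x - 2) ** 2  # toy loss to minimize

space = hp.uniform("x", -10, 10)

best_tpe = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100)   # model-based search
best_rand = fmin(fn=objective, space=space, algo=rand.suggest, max_evals=100) # pure random search
print(best_tpe, best_rand)
</code></pre>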
2014-07-09 22:27:31.293000+00:00
2014-07-10 08:20:35.437000+00:00
2014-07-10 08:20:35.437000+00:00
null
24,654,937
<p>What kind of settings does Hyperopt provide to adjust the balance between exploration and exploitation? There's something like "bandit" and "bandit_algo" in the code but no explanation.</p> <p>Could someone provide a code sample?</p> <p>Thanks a lot for any help!</p>
2014-07-09 13:24:09.777000+00:00
2018-12-28 10:26:25.493000+00:00
2018-12-28 10:26:25.493000+00:00
machine-learning|optimization|scikit-learn|deep-learning|hyperopt
['http://arxiv.org/pdf/1209.5111.pdf', 'http://jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf', 'http://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf', 'http://www.cs.ubc.ca/~hutter/nips2011workshop/papers_and_posters/Bergstra-abstract.pdf']
4
48,826,965
<p>@Sravan K Reddy's answer is good enough to be a solution, but it is essential to know what a histogram is.</p> <p>A histogram is a frequency distribution of a dataset and gives statistical information about the data. The most commonly used histogram types are equi-width and equi-depth, the latter also called equi-height or height-balanced.</p> <p>In database tools, the equi-depth histogram is preferred; see, for example, <a href="https://docs.oracle.com/database/121/TGSQL/tgsql_histo.htm#TGSQL379" rel="nofollow noreferrer">Oracle's documentation</a>.</p> <p>@Sravan K Reddy intends to create an equi-width histogram of patent citations. However, in order to create a histogram, the data must be sorted. That is vital for histogram construction.</p> <p>If you want to create a histogram of your big data, read <a href="https://arxiv.org/pdf/1606.05633.pdf" rel="nofollow noreferrer">this paper</a> and check <a href="https://github.com/tolgabuyuktanir/Equi-DepthHistogramConstroctionDemonstrations" rel="nofollow noreferrer">Apache Pig Scripts</a>.</p>
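<p>To make the two types concrete, a small NumPy sketch on the citation counts from Part 1 (equi-width buckets span equal value ranges; equi-depth buckets hold roughly equal numbers of sorted values); this only illustrates the concepts, not the Pig implementation:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

counts = np.array([5, 4, 7, 6, 7, 3, 6])  # citations per patent, from Part 1

# Equi-width: fixed-size value ranges, varying number of items per bucket.
hist, edges = np.histogram(counts, bins=3)
print(hist, edges)

# Equi-depth: varying ranges, roughly the same number of (sorted) items per bucket.
depth_edges = np.quantile(np.sort(counts), [0, 1/3, 2/3, 1])
print(depth_edges)
</code></pre>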
2018-02-16 12:37:21.640000+00:00
2018-02-16 12:37:21.640000+00:00
null
null
29,279,222
<p>This is a two part problem:</p> <p>PART 1:</p> <p>I am using the cloudera pig editor to transform my data. The data set is derived from the US Patents Citations data set. The first column is the "Cited" patent. The remaining data is the patents that cite the first patent. </p> <p>3858241 3634889,3557384,3398406,1324234,956203</p> <p>3858242 3707004,3668705,3319261,1515701</p> <p>3858243 3684611,3681785,3574238,3221341,3156927,3146465,2949611</p> <p>3858244 2912700,2838924,2635670,2211676,17445,14040</p> <p>3858245 3755824,3699969,3621837,3608095,3553737,3176316,2072303</p> <p>3858246 3601877,3503079,3451067</p> <p>3858247 3755824,3694819,3621837,2807431,1600859</p> <p>I need to create PIG code that will count the number of citation that the first patent has. So, I need the output to be:</p> <p>3858241 5</p> <p>3858242 4</p> <p>3858243 7</p> <p>3858244 6</p> <p>3858245 7</p> <p>3858246 3</p> <p>3858247 6</p> <p>PART 2: I need to create a histogram of the output from problem 1 using a PIG script.</p> <p>Any help would be greatly appreciated.</p> <p>Thanks</p>
2015-03-26 13:06:25.250000+00:00
2018-02-16 12:37:21.640000+00:00
2015-03-26 14:07:28.223000+00:00
apache-pig|histogram|frequency
['https://docs.oracle.com/database/121/TGSQL/tgsql_histo.htm#TGSQL379', 'https://arxiv.org/pdf/1606.05633.pdf', 'https://github.com/tolgabuyuktanir/Equi-DepthHistogramConstroctionDemonstrations']
3
40,314,502
<p>I created a repo that converts Inception v3 batch-normalized weights to the de-normalized weights needed by MPSCNNConvolution.</p> <p><a href="https://github.com/kakugawa/MetalCNNWeights" rel="noreferrer">https://github.com/kakugawa/MetalCNNWeights</a></p> <p>In the paper 'Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift' by Sergey Ioffe and Christian Szegedy (<a href="https://arxiv.org/pdf/1502.03167v3.pdf" rel="noreferrer">https://arxiv.org/pdf/1502.03167v3.pdf</a>), we can use Algorithm 2, Output, Step 11 to derive:</p> <pre><code>Weight = \gamma / \sqrt{Var[x] + 0.001} * Weight_{BN} Bias = \beta - (\gamma / \sqrt{Var[x] + 0.001}) * E[x] </code></pre>
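<p>In plain NumPy the de-normalization for one convolution layer then looks roughly like this; the array names mirror the exported TensorFlow tensors and are otherwise my own placeholders:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

def fold_batch_norm(weights_bn, gamma, beta, moving_mean, moving_variance, eps=0.001):
    # w = gamma / sqrt(var + eps) * w_bn ;  b = beta - gamma / sqrt(var + eps) * mean
    scale = gamma / np.sqrt(moving_variance + eps)
    # weights_bn has shape [kh, kw, in_channels, out_channels]; scale is per output channel
    weights = weights_bn * scale.reshape(1, 1, 1, -1)
    bias = beta - scale * moving_mean
    return weights, bias
</code></pre>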
2016-10-28 23:25:27.187000+00:00
2016-10-28 23:25:27.187000+00:00
null
null
39,735,198
<p>The README provides the following comment:</p> <pre><code>/* The weights for this particular network were batch normalized but for inference we may use : w = gamma / √(s + 0.001), b = ß - ( A * m ) s: variance m: mean gamma : gamma ß: beta w: weights of a feature channel b: bias of a feature channel for every feature channel separately to get the corresponding weights and bias */ </code></pre> <p>I have been able to export all of the trained parameters from a retrained Inception model to binary, using TensorFlow. For example, for the first convolution node, these are the available binary files:</p> <blockquote> <p>conv0/BatchNorm/beta conv0/BatchNorm/beta/ExponentialMovingAverage conv0/BatchNorm/beta/RMSProp conv0/BatchNorm/beta/RMSProp_1 conv0/BatchNorm/moving_mean conv0/BatchNorm/moving_mean/ExponentialMovingAverage conv0/BatchNorm/moving_variance conv0/BatchNorm/moving_variance/ExponentialMovingAverage conv0/weights conv0/weights/ExponentialMovingAverage conv0/weights/Regularizer/L2Loss/value/avg conv0/weights/RMSProp conv0/weights/RMSProp_1</p> </blockquote> <p>Are these files transformed or recomputed to get the corresponding <strong>conv.dat</strong> file somehow, or is there a function in TensorFlow to export each node with batch normalization?</p> <p>Any additional direction would be extremely helpful, as there are few resources to connect the dots here.</p> <p>Thank you.</p>
2016-09-27 22:18:31.713000+00:00
2016-10-28 23:25:27.187000+00:00
2016-10-04 22:48:05.320000+00:00
neural-network|ios10|conv-neural-network|metal
['https://github.com/kakugawa/MetalCNNWeights', 'https://arxiv.org/pdf/1502.03167v3.pdf']
2
57,508,776
<p>First, we use the <code>train_unsupervised</code> API to create a <strong>Word-Representation Model</strong>. There are two techniques that we can use, <a href="https://arxiv.org/pdf/1310.4546.pdf" rel="noreferrer">skipgram</a> and <a href="https://arxiv.org/pdf/1301.3781.pdf" rel="noreferrer">cbow</a>. On the other hand, we use the <code>train_supervised</code> API to create <strong>Text Classification Model</strong>. You are asking about the <code>train_supervised</code> API, so I will stick to it.</p> <p>The way that text classification work in fasttext, is to first represent the word using skipgram by default. Then, use these word-vectors learned from the skipgram model to classify your input text. The two parameters that you asked about (<code>ws</code> and <code>wordNgrams</code>) are related to the skipgram/cbow model.</p> <p>The following image contains a simplified illustration of how we are using our input text to train skipgram model. Here, we defined the <code>ws</code> parameter as 2 and <code>wordNgrams</code> as 1.</p> <p><a href="https://i.stack.imgur.com/fYhXF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/fYhXF.png" alt="enter image description here"></a></p> <p>As we can see, we have only one text in our training data which is <code>The quick brown fox jumps over the lazy dog</code>. We defined the context window to be two, which means that we will create a window whose center is <code>center word</code> and the next/previous two words within the window are <code>target words</code>. Then, we move this window a word at a time. The bigger the window size is, the more training samples you have for your model, the more overfitted the model becomes given a small sample of data. </p> <p>That's for our first argument <code>ws</code>. According to the second argument <code>wordNgrams</code>, if we set <code>wordNgrams</code> to 2, it will consider two-word pairs like the following image. (The <code>ws</code> in the following image is one for simplicity)</p> <p><a href="https://i.stack.imgur.com/UYRvW.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UYRvW.png" alt="enter image description here"></a></p> <h2>Ref</h2> <ul> <li><p>Check this <a href="https://github.com/facebookresearch/fastText/blob/master/python/fasttext_module/fasttext/FastText.py" rel="noreferrer">link</a> which contains the source code for the <code>train_supervised</code> method.</p></li> <li><p>There is a major difference between skipgram and cbow that can be summarized in the following image:</p></li> </ul> <p><a href="https://i.stack.imgur.com/XyR8J.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XyR8J.png" alt="skipgram vs. cbow"></a></p>
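<p>For completeness, both parameters are just keyword arguments of <code>train_supervised</code>; the file path, epoch and lr values below are placeholders (the training file must contain <code>__label__</code>-prefixed labels):</p> <pre class="lang-py prettyprint-override"><code>import fasttext

# ws: context window used by the underlying word-vector training
# wordNgrams: length of word n-grams added as extra features (2 = unigrams + bigrams)
# "train.txt" is a placeholder path: one "__label__X some text ..." line per example
model = fasttext.train_supervised(input="train.txt", ws=2, wordNgrams=2, epoch=10, lr=0.5)

print(model.predict("the quick brown fox jumps over the lazy dog"))
</code></pre>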
2019-08-15 11:10:50.027000+00:00
2019-08-15 11:10:50.027000+00:00
null
null
57,507,056
<p>In the description of the fasttext library for python <a href="https://github.com/facebookresearch/fastText/tree/master/python" rel="nofollow noreferrer">https://github.com/facebookresearch/fastText/tree/master/python</a> there are different arguments for training a supervised model, among them:</p> <ul> <li><code>ws</code>: size of the context window</li> <li><code>wordNgrams</code>: max length of word ngram.</li> </ul> <p>If I understand correctly, both of them take the surrounding words of a word into account, but what is the clear difference between them?</p>
2019-08-15 08:42:50.927000+00:00
2019-08-15 11:10:50.027000+00:00
2019-08-15 09:03:34.303000+00:00
python|nlp|fasttext
['https://arxiv.org/pdf/1310.4546.pdf', 'https://arxiv.org/pdf/1301.3781.pdf', 'https://i.stack.imgur.com/fYhXF.png', 'https://i.stack.imgur.com/UYRvW.png', 'https://github.com/facebookresearch/fastText/blob/master/python/fasttext_module/fasttext/FastText.py', 'https://i.stack.imgur.com/XyR8J.png']
6
50,253,028
<p>Binarization techniques are effective algorithms that constrain both the parameters and the activations of a network to have binary values. Obviously the precision loss may degrade the final performance a bit, but the binary representation greatly reduces the resource requirements of the network.</p> <p>For instance, you can have a look at these works:</p> <ul> <li><a href="http://papers.nips.cc/paper/6573-binarized-neural-networks.pdf" rel="nofollow noreferrer">Binarized Neural Networks</a></li> <li><a href="https://arxiv.org/pdf/1602.02830" rel="nofollow noreferrer">Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or −1</a></li> <li><a href="https://arxiv.org/pdf/1603.05279.pdf" rel="nofollow noreferrer">XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks</a></li> </ul> <p>all of which released their code.</p>
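<p>To make the idea a bit more concrete, here is a tiny PyTorch sketch of the core trick used in this family of methods (deterministic binarization with a straight-through estimator). It is only an illustration of the mechanism, not a faithful reimplementation of any of the three papers:</p> <pre><code>import torch

# full-precision "latent" weights are kept for the update step
w_real = torch.randn(4, 4, requires_grad=True)

# forward pass uses binary weights (+1/-1)
w_bin = torch.sign(w_real)

# straight-through estimator: behave like w_bin in the forward pass,
# but let gradients flow to w_real as if binarization were the identity
w_ste = w_real + (w_bin - w_real).detach()

x = torch.randn(8, 4)
out = x.matmul(w_ste.t())
out.sum().backward()
print(w_real.grad.shape)  # gradients arrive at the full-precision weights
</code></pre>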
2018-05-09 12:08:20.480000+00:00
2018-05-09 12:08:20.480000+00:00
2020-06-20 09:12:55.060000+00:00
null
50,252,340
<p>Context: I am going to start training a CNN to classify a data set. This CNN will have to be deployed for a real world application, so a forward propagation through this CNN has to be fast. Most of the CNN architectures I have read about cannot run without a GPU and need a lot of costly resources to be deployed.</p> <p>Question: Now I know one particular technique that's quite useful for reducing the size of a CNN architecture: downsize the image using cubic interpolation (cubic interpolation helps improve certain image features like edges). This helps reduce the number of convolution layers as well as the filter size, thus reducing the overall parameters in a CNN by quite a lot. I wanted to know if there are other techniques which can make a CNN smaller so that it can be realistically deployed.</p>
2018-05-09 11:32:43.300000+00:00
2018-05-11 15:05:02.233000+00:00
2018-05-11 15:05:02.233000+00:00
tensorflow|neural-network|keras|deep-learning|convolutional-neural-network
['http://papers.nips.cc/paper/6573-binarized-neural-networks.pdf', 'https://arxiv.org/pdf/1602.02830', 'https://arxiv.org/pdf/1603.05279.pdf']
3
62,412,510
<blockquote> <p>I am interested in knowing how they are calculated to see how I can interpret the results obtained.</p> </blockquote> <p>Unfortunately knowing the first does not lead to knowing the second. </p> <p>Your question is concerned with contextual bandits, but it is important to note that interpreting model parameters is an issue that also occurs in supervised learning. Machine learning has made progress recently (i.e., my lifetime) largely by focusing on the quality of predictions rather than the meaningfulness of model parameters. In a <a href="https://towardsdatascience.com/predicting-vs-explaining-69b516f90796" rel="nofollow noreferrer">blog post</a>, Phoebe Wong outlines the issue while being entertaining.</p> <p>The bottom line is that our models are not causal, so you simply cannot conclude "the weight of feature X for arm A is large, therefore if I were to intervene in the system and increase this feature value I will get more reward for playing arm A".</p> <p>We are currently working on tools for model inspection that leverage techniques such as <a href="https://scikit-learn.org/stable/modules/permutation_importance.html" rel="nofollow noreferrer">permutation importance</a> that will help you answer questions like "if I were to stop using a particular feature, how would the frequency of playing each arm change for the trained policy". We're hoping that is helpful information.</p> <p>Having said all that, let me try to answer your original question ...</p> <blockquote> <p>In vowpawabbit there is an option --audit that prints the weights of the features.</p> <p>If we have a vw contextual bandit model with four arms, how is this feature weight created?</p> </blockquote> <p>The format is documented <a href="https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Audit" rel="nofollow noreferrer">here</a>. Assuming you are using <code>--cb</code> (not <code>--cb_adf</code>) then there are a fixed number of arms and so the <code>offset</code> field will increment over the arms. So for an example like </p> <pre><code>1:2:0.4 |foo bar </code></pre> <p>with <code>--cb 4</code> you'll get an audit output with <code>namespace</code> of <code>foo</code>, <code>feature</code> of <code>bar</code>, and <code>offset</code> of 0, 1, 2, and 3.</p> <p>Interpreting the output when using <code>--cb_adf</code> is possible but difficult to explain succinctly.</p> <blockquote> <p>From what I understand vowpawabbit tries to fit one linear model to each arm.</p> </blockquote> <p><strong>Shorter answer</strong>: With <code>--cb_type dm</code>, essentially VW independently tries to predict the average reward for each arm using only examples where the policy played that arm. So the weight you get from audit at a particular offset N is analogous to what you would get from a supervised learning model trained to predict reward on a subset of the historical data consisting solely of times the historical policy played arm N. With other <code>--cb_type</code> settings the interpretation is more complicated.</p> <p><strong>Longer answer</strong>: "Linear model" refers to the representation being used. VW can incorporate nonlinearities into the model but let's ignore that for now. "Fit" is where some important details are. VW takes the partial feedback information of a CB problem (partial feedback = "for this example you don't know the reward of the arms not pulled") and reduces it to a full feedback supervised learning problem (full feedback = "for this example you do know the reward of all arms"). 
The <code>--cb_type</code> argument selects the reduction strategy. There are several papers on the topic; a good place to start is <a href="https://arxiv.org/abs/1103.4601" rel="nofollow noreferrer">Dudik et al.</a>, and then look for papers that cite it. In terms of code, ultimately things are grounded <a href="https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/vowpalwabbit/cb_algs.cc" rel="nofollow noreferrer">here</a>, but the code is written more for performance than intelligibility.</p>
2020-06-16 15:49:24.340000+00:00
2020-06-16 15:49:24.340000+00:00
null
null
62,325,466
<p>In vowpawabbit there is an option <code>--audit</code> that prints the weights of the features.</p> <p>If we have a <code>vw</code> contextual bandit model with four arms, how is this feature weight created? </p> <p>From what I understand vowpawabbit tries to fit one linear model to each arm.</p> <p>So if weights were calculated using an average across all the arms, then they would correlate with getting a reward generally, instead of which features make the model pick one variant over another. </p> <p>I am interested in knowing how they are calculated to see how I can interpret the results obtained. I tried searching <a href="https://github.com/VowpalWabbit/vowpal_wabbit" rel="nofollow noreferrer">its Github repository</a> but could not find anything meaningful.</p>
2020-06-11 13:23:25.867000+00:00
2020-06-16 15:49:24.340000+00:00
2020-06-11 13:31:28.627000+00:00
vowpalwabbit
['https://towardsdatascience.com/predicting-vs-explaining-69b516f90796', 'https://scikit-learn.org/stable/modules/permutation_importance.html', 'https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Audit', 'https://arxiv.org/abs/1103.4601', 'https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/vowpalwabbit/cb_algs.cc']
5
63,730,388
<p>I found the solution.</p> <p>Element-wise multiplication between the input and the mask before feeding it to the Conv2d layer is enough (masking the input is much easier than masking the kernel itself!):</p> <pre><code>mask = torch.tensor([[[1, 1, 1, 0]], [[1, 0, 1, 1]], [[1, 1, 0, 1]], [[0, 1, 1, 1]]], dtype=torch.float, requires_grad=False).reshape(1, 1, 4, 4) &gt;&gt;layer(torch.mul(x, mask)) tensor([[[[5., 15.], [30., 40.]]]], grad_fn=&lt;MkldnnConvolutionBackward&gt;) </code></pre> <p><strong>P.S.</strong> Thanks to @Shai, I got the idea from the partial convolution presented in this <a href="https://arxiv.org/abs/1804.07723" rel="nofollow noreferrer">paper</a>. However, it does some extra manipulation on the output: it defines a mask ratio and, I believe, weights the final output based on it.</p>
2020-09-03 19:24:29.263000+00:00
2020-09-03 19:50:24.200000+00:00
2020-09-03 19:50:24.200000+00:00
null
63,710,417
<p>Let's say:</p> <pre><code>x = torch.arange(16, dtype=torch.float).reshape(1, 1, 4, 4) </code></pre> <p>and a 2d convolution layer is:</p> <pre><code>layer = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=2, stride=2) layer.weight.data[:] = 1. layer.bias.data[:] = 0. </code></pre> <p>Normally, passing <em>x</em> to <em>layer</em> will give:</p> <pre><code>&gt;&gt;layer(x) tensor([[[[10., 18.], [42., 50.]]]], grad_fn=&lt;MkldnnConvolutionBackward&gt;) </code></pre> <p>Given 4 mask filters, how can the kernel be masked at each step? For example, the following picture shows the 4 filters (white: True, black: False): <a href="https://i.stack.imgur.com/KsWw2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KsWw2.png" alt="stride=2 and kernel_size=2 for given x" /></a></p> <p>The output should be:</p> <pre><code>tensor([[[[5., 15.], [30., 40.]]]], grad_fn=&lt;MkldnnConvolutionBackward&gt;) </code></pre> <p><strong>P.S:</strong> all masks are obtained from missing pixels in the 2d input array, so the 4 masks above are actually matrices with the same shape as the input.</p>
2020-09-02 17:09:36.063000+00:00
2020-09-03 19:50:24.200000+00:00
2020-09-03 18:51:50.677000+00:00
python|pytorch
['https://arxiv.org/abs/1804.07723']
1
63,717,565
<p>I think you are looking for <a href="https://github.com/NVIDIA/partialconv" rel="nofollow noreferrer">partial convolution</a> from Nvidia research.</p> <p>A more detailed description is given in their ECCV 2018 paper <a href="https://arxiv.org/abs/1804.07723" rel="nofollow noreferrer"><em>Image Inpainting for Irregular Holes Using Partial Convolutions</em></a></p>
2020-09-03 05:45:48.227000+00:00
2020-09-03 05:45:48.227000+00:00
null
null
63,710,417
<p>Let's say:</p> <pre><code>x = torch.arange(16, dtype=torch.float).reshape(1, 1, 4, 4) </code></pre> <p>and a 2d convolution layer is:</p> <pre><code>layer = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=2, stride=2) layer.weight.data[:] = 1. layer.bias.data[:] = 0. </code></pre> <p>Normally, passing <em>x</em> to <em>layer</em> will give:</p> <pre><code>&gt;&gt;layer(x) tensor([[[[10., 18.], [42., 50.]]]], grad_fn=&lt;MkldnnConvolutionBackward&gt;) </code></pre> <p>Given 4 mask filters, how can the kernel be masked at each step? For example, the following picture shows the 4 filters (white: True, black: False): <a href="https://i.stack.imgur.com/KsWw2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KsWw2.png" alt="stride=2 and kernel_size=2 for given x" /></a></p> <p>The output should be:</p> <pre><code>tensor([[[[5., 15.], [30., 40.]]]], grad_fn=&lt;MkldnnConvolutionBackward&gt;) </code></pre> <p><strong>P.S:</strong> all masks are obtained from missing pixels in the 2d input array, so the 4 masks above are actually matrices with the same shape as the input.</p>
2020-09-02 17:09:36.063000+00:00
2020-09-03 19:50:24.200000+00:00
2020-09-03 18:51:50.677000+00:00
python|pytorch
['https://github.com/NVIDIA/partialconv', 'https://arxiv.org/abs/1804.07723']
2
49,881,058
<p>I would not recommend using selenium for this task. If you have a list of urls, simply use <a href="https://docs.python.org/3.0/library/urllib.request.html#urllib.request.urlretrieve" rel="nofollow noreferrer"><code>urllib.request.urlretrieve</code></a>:</p> <pre><code>In [5]: from urllib import request In [6]: request.urlretrieve('https://arxiv.org/pdf/1409.8470.pdf', r'C:\users\chris\test.pdf') Out[6]: ('C:\\users\\chris\\test.pdf', &lt;http.client.HTTPMessage at 0x59628d0&gt;) </code></pre> <p>Just pass each url as the first argument, and the destination as the final argument.</p>
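<p>Applied to the loop in your question, that would look roughly like this (the URL pattern is taken from your snippet, so adjust it to the real endpoint):</p> <pre><code>from urllib import request

cusip = ['abc123', 'def456', 'ghi789']
for a in cusip:
    url = "http://mylink=" + str(a) + ".pdf"   # placeholder pattern from the question
    request.urlretrieve(url, a + '.pdf')       # downloads straight to disk, no browser needed
</code></pre>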
2018-04-17 14:41:08.347000+00:00
2018-04-17 14:41:08.347000+00:00
null
null
49,880,846
<p>I am a bit new to using selenium and Python.Below is the code that I am trying to run to download multiple files.</p> <pre><code>from selenium import webdriver driver = webdriver.Chrome(executable_path=r'C:\chromedriver_win32\chromedriver.exe') cusip=['abc123','def456','ghi789'] for a in cusip: page=driver.get("http://mylink=" + str(a) + ".pdf") with open(a + '.pdf', 'wb') as f: for chunk in page.iter_content(chunk_size=1024): if chunk: f.write(chunk) </code></pre> <p>Error that I receive is as below:</p> <pre><code>Traceback (most recent call last): File "C:/Users/shashi.singh/PycharmProjects/HiSSS/Selenium.py", line 13, in &lt;module&gt; for chunk in page.iter_content(chunk_size=1024): AttributeError: 'NoneType' object has no attribute 'iter_content' </code></pre>
2018-04-17 14:31:38.210000+00:00
2018-04-18 13:13:38.257000+00:00
2018-04-17 14:46:38.393000+00:00
python|selenium|pdf|webdriver|chrome-web-driver
['https://docs.python.org/3.0/library/urllib.request.html#urllib.request.urlretrieve']
1
37,311,744
<p>Here is the <a href="https://github.com/paulknysh/blackbox" rel="nofollow">solution</a> to your problem.</p> <p>The method behind it is described in this <a href="http://arxiv.org/pdf/1605.00998.pdf" rel="nofollow">paper</a>.</p>
2016-05-18 23:39:42.133000+00:00
2016-05-18 23:39:42.133000+00:00
null
null
17,690,157
<p>I'm developing machine learning algorithms which classify images based on training data.</p> <p>During the image preprocessing stages, there are several parameters which I can modify that affect the data I feed my algorithms (for example, I can change the Hessian Threshold when extracting SURF features). So the flow thus far looks like:</p> <p>[param1, param2, param3...] => [black box] => accuracy %</p> <p>My problem is: <strong>with so many parameters at my disposal, how can I systematically pick values which give me optimized results/accuracy?</strong> A naive approach is to run i nested for-loops (assuming i parameters) and just iterate through all parameter combinations, but if it takes 5 minute to calculate an accuracy from my "black box" system this would take a long, long time.</p> <p>This being said, are there any algorithms or techniques which can search for optimal parameters in a black box system? I was thinking of taking a course in Discrete Optimization but I'm not sure if that would be the best use of my time.</p> <p>Thank you for your time and help!</p> <p><strong>Edit</strong> (to answer comments): I have 5-8 parameters. Each parameter has its own range. One parameter can be 0-1000 (integer), while another can be 0 to 1 (real number). Nothing is stopping me from multithreading the black box evaluation.</p> <p>Also, there are some parts of the black box that have some randomness to them. For example, one stage is using k-means clustering. Each black box evaluation, the cluster centers may change. I run k-means several times to (hopefully) avoid local optima. In addition, I evaluate the black box multiple times and find the median accuracy in order to further mitigate randomness and outliers.</p>
2013-07-17 02:37:46.273000+00:00
2016-05-19 00:32:18.067000+00:00
2013-07-17 12:40:22.210000+00:00
optimization|parameters|machine-learning
['https://github.com/paulknysh/blackbox', 'http://arxiv.org/pdf/1605.00998.pdf']
2
69,492,205
<blockquote> <p>it is not clear why you would want to avoid <code>FuncAnimation</code>.</p> </blockquote> <p>For very simple tests, where you want to check a situation deep inside a loop, it is not easy to set up an <code>animation</code> function.</p> <p>For instance, I wanted to visualize what happens with this strange sort algorithm: <a href="https://arxiv.org/pdf/2110.01111.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2110.01111.pdf</a>. In my opinion, the simplest way to do it is:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def sort(table): n = len(table) for i in range (n): for j in range (n): if table[i] &lt; table[j]: tmp = table[i] table[i] = table[j] table[j] = tmp plt.plot(table, 'ro') plt.title(f&quot;i {i} j {j}&quot;) plt.pause(0.001) plt.clf() # clear figure return table n = 50 table = np.random.randint(1,101,n) sort(table) </code></pre>
2021-10-08 07:39:02.497000+00:00
2021-10-08 07:39:02.497000+00:00
null
null
42,035,779
<p>Is there a way to animate a graph in matplotlib without resorting to the built in animation functions? I find them extremely awkward to use and feel it would be much simpler to just plot a point, wipe the graph, then plot the next point. </p> <p>I envision something such as:</p> <pre><code>def f(): # do stuff here return x, y, t </code></pre> <p>where each <code>t</code> would be a different frame. </p> <p>I mean, I've tried stuff like using <code>plt.clf()</code>, <code>plt.close()</code> etc. but nothing seems to work. </p>
2017-02-04 02:18:11.353000+00:00
2022-02-18 10:10:31.567000+00:00
2017-02-04 02:45:06.347000+00:00
python|animation|matplotlib
['https://arxiv.org/pdf/2110.01111.pdf']
1
50,170,277
<p>The power of GPUs over CPUs is to run many operations at the same time. However, achieving this high level of parallelization is not always easy. Frameworks like TensorFlow or PyTorch do their best to optimise everything for GPUs and parallelisation, but this is not possible for every case.</p> <p>Computations in LSTMs and RNNs in general can be only parallelized to a very limited degree. The problem lies in their sequential structure: LSTMs and RNNs process only one input at a time, and they need to process everything in chronological order <em>(to compute n+1 you always need to compute n before)</em> - otherwise it wouldn't make sense.</p> <p>So the natural way of processing data in RNNs is completely the opposite of parallelization; using mini-batching does help a lot, but it does not solve the fundamental problem of LSTMs.</p> <p>If you want a high amount of parallelization you need to use architectures like the <em>"Transformer"</em> proposed in the paper <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">"Attention is all you need"</a> by Google.</p> <p><strong>Summary</strong></p> <p>The degree of parallelization, and hence the GPU acceleration, of your model depends to a large extent on the architecture of the model itself. With some architectures, like RNNs, parallelization is only possible to a limited degree.</p> <p><strong>Edit:</strong></p> <p><em>For example, if I add more layers or add more hidden nodes, should I expect GPU usage to go up?</em></p> <p>When increasing the number of units within a layer you should expect GPU usage to go up, since matrix operations like passing an input to a hidden layer can be parallelized well.</p> <p>Adding layers is different: there you have the same problem that causes RNNs to be slow on the GPU. To compute the next layer you already need the result of the previous layer, so you need to compute one layer after another; it is not possible to compute them all at the same time.</p> <p>This is the theory - in practice you might see some minor differences in GPU usage, depending on the actual implementation of the framework.</p>
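<p>If you want to see this effect for yourself, a rough timing sketch like the following (assuming a CUDA device is available) shows that a 64x larger batch is far from 64x slower: the batch dimension is where the GPU can actually parallelize, while the time dimension stays sequential.</p> <pre><code>import time
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=256, hidden_size=256).cuda()
seq = 100  # number of time steps, processed one after another

for batch in (1, 64):
    x = torch.randn(seq, batch, 256).cuda()
    torch.cuda.synchronize()
    start = time.time()
    with torch.no_grad():
        for _ in range(20):
            lstm(x)
    torch.cuda.synchronize()
    print(batch, round(time.time() - start, 3))
</code></pre>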
2018-05-04 08:14:54.353000+00:00
2018-05-04 11:15:11.083000+00:00
2018-05-04 11:15:11.083000+00:00
null
50,164,417
<p>I am using an AWS <a href="https://aws.amazon.com/ec2/instance-types/p3/" rel="nofollow noreferrer">p3.2xlarge</a> instance with the <a href="https://aws.amazon.com/marketplace/pp/B077GCH38C" rel="nofollow noreferrer">Deep Learning AMI</a> (DLAMI). This instance has a single <a href="https://www.nvidia.com/en-us/data-center/tesla-v100/" rel="nofollow noreferrer">Tesla V100</a> (640 Tensor Cores and 5,120 CUDA Cores). When I run the <a href="https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html" rel="nofollow noreferrer">PyTorch Seq2Seq</a> Jupyter Notebook, I noticed that only 25% of the GPU is used. I monitor the GPU usage with the following command <code>watch -n 1 nvidia-smi</code>. </p> <p>My question is, what determines GPU usage? Or, why is the GPU usage not 100%? The reason behind this question is related not only to inefficiency that may be a result of code but also cost ($3.06/hour). I am wondering if there is anything more that I can do to maximize the GPU usage.</p> <p>Of course, this is a deep learning model that is being learned, and the training code sends one sample at a time through the network for learning. I am thinking that mini-batch learning may not be appropriate (e.g. sending a couple of samples through before backpropagating). I am also wondering if the network architecture (the number of layers, their parameters, their input tensor dimensions, etc.) constrains how GPU is being used. For example, if I add more layers or add more hidden nodes, should I expect GPU usage to go up?</p>
2018-05-03 21:58:42.113000+00:00
2018-05-04 11:15:11.083000+00:00
2018-05-04 04:46:39.373000+00:00
amazon-ec2|neural-network|nvidia|pytorch|tensor
['https://arxiv.org/abs/1706.03762']
1
42,131,682
<p>My university colleague did some work with IEEE 802.15.4 in OMNeT++ with the INET Framework: "A New IEEE 802.15.4 Simulation Model for OMNeT++ / INET" <a href="https://arxiv.org/pdf/1409.1177.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1409.1177.pdf</a></p> <p>The problem is that we're still porting it to the newer INET version and OMNeT++, but sooner or later it will work. It should be possible to use it with OMNeT++ 4.6 and INET 2.6; there is a working implementation for all the stuff you mentioned.</p>
2017-02-09 08:35:17.207000+00:00
2019-02-14 12:09:49.980000+00:00
2019-02-14 12:09:49.980000+00:00
null
42,110,502
<p>I am trying to implement what was available in the MiXiM framework using INET 3.4.0, i.e. the protocol 802.15.4a (UWB).</p> <p>INET already offers the <em>NIC</em> module (radio+MAC) but not the rest, i.e. the network and application layers.</p> <p>I would like to create a 'dummy' simulation where two 802154a hosts send and receive messages using the <em>INET NIC</em> module called <em>Ieee802154UWBIRNic</em>.</p> <p>I tried to follow the implementation used by MiXiM but I got lost with all the Base modules, etc.</p> <p>Can someone help me? I also implemented a simple network layer and then tried to use <em>PingApp</em>, but it is not working; it says something like</p> <pre><code>'Host Address Unknown' </code></pre>
2017-02-08 10:24:46.763000+00:00
2019-02-14 12:09:49.980000+00:00
null
c++|simulation|omnet++|nic|inet
['https://arxiv.org/pdf/1409.1177.pdf']
1
63,214,099
<p>There are going to be two main approaches:</p> <ul> <li>the one you have started, which is a list of emotional words, and counting how often they appear</li> <li>showing examples of what you consider emotional sentences and what are unemotional sentences to a machine learning model, and let it work it out.</li> </ul> <p>The first way will get better as you give it more words, but you will eventually hit a limit. (Simply due to the ambiguity and flexibility of human language, e.g. while &quot;you&quot; is more emotive than &quot;it&quot;, there are going to be a lot of unemotional sentences that use &quot;you&quot;.)</p> <blockquote> <p>any suggestions on how I can extract emotional words from wordnet?</p> </blockquote> <p>Take a look at sentiwordnet, which adds a measure of positivity, negativity or neutrality to each wordnet entry. For &quot;emotional&quot; you could extract just those that have either pos or neg score over e.g. 0.5. (Watch out for the non-commercial-only licence.)</p> <p>The second approach will probably work better <em>if</em> you can feed it enough training data, but &quot;enough&quot; can sometimes be too much. Other downsides are the models often need much more compute power and memory (a serious issue if you need to be offline, or working on a mobile device), and that they are a blackbox.</p> <p>I think the 2020 approach would be to start with a pre-trained BERT model (the bigger the better, see <a href="https://arxiv.org/abs/2005.14165" rel="nofollow noreferrer">the recent GPT-3 paper</a>), and then fine-tune it with a sample of your 100K sentences that you've manually annotated. Evaluate it on another sample, and annotate more training data for the ones it got wrong. Keep doing this until you get the desired level of accuracy.</p> <p>(Spacy has support for both approaches, by the way. What I called fine-tuning above is also called transfer learning. See <a href="https://spacy.io/usage/training#transfer-learning" rel="nofollow noreferrer">https://spacy.io/usage/training#transfer-learning</a> Also googling for &quot;spacy sentiment analysis&quot; will find quite a few tutorials.)</p>
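<p>As a starting point for the word-list route, a rough sketch of pulling candidate words out of SentiWordNet via NLTK could look like the following (the 0.5 threshold is arbitrary, and you will still want to prune the result by hand):</p> <pre><code># one-off downloads: nltk.download('wordnet'); nltk.download('sentiwordnet')
from nltk.corpus import sentiwordnet as swn

emotional_words = set()
for senti_synset in swn.all_senti_synsets():
    # keep lemmas whose sense is strongly positive or strongly negative
    if senti_synset.pos_score() &gt; 0.5 or senti_synset.neg_score() &gt; 0.5:
        for lemma in senti_synset.synset.lemma_names():
            emotional_words.add(lemma.replace('_', ' ').lower())

print(len(emotional_words))
print(sorted(emotional_words)[:20])
</code></pre>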
2020-08-02 08:47:48.160000+00:00
2020-08-02 11:27:47.867000+00:00
2020-08-02 11:27:47.867000+00:00
null
63,204,418
<p>I have a series of 100,000+ sentences and I want to rank how emotional they are.</p> <p>I am quite new to the NLP world, but this is how I managed to get started (adapted from <a href="https://spacy.io/usage/spacy-101" rel="nofollow noreferrer">spacy 101</a>):</p> <pre><code>import spacy from spacy.matcher import Matcher matcher = Matcher(nlp.vocab) def set_sentiment(matcher, doc, i, matches): doc.sentiment += 0.1 myemotionalwordlist = ['you','superb','great','free'] sentence0 = 'You are a superb great free person' sentence1 = 'You are a great person' sentence2 = 'Rocks are made o minerals' sentences = [sentence0,sentence1,sentence2] pattern2 = [[{&quot;ORTH&quot;: emotionalword, &quot;OP&quot;: &quot;+&quot;}] for emotionalword in myemotionalwordlist] matcher.add(&quot;Emotional&quot;, set_sentiment, *pattern2) # Match one or more emotional word for sentence in sentences: doc = nlp(sentence) matches = matcher(doc) for match_id, start, end in matches: string_id = nlp.vocab.strings[match_id] span = doc[start:end] print(&quot;Sentiment&quot;, doc.sentiment) </code></pre> <p>myemotionalwordlist is a list of about 200 words that I've built manually.</p> <p>My questions are:</p> <p>(1-a) Counting the number of emotional words does not seem like the best approach. Does anyone have any suggestions for a better way of doing this?</p> <p>(1-b) In case this approach is good enough, any suggestions on how I can extract emotional words from wordnet?</p> <p>(2) What's the best way of scaling this up? I am thinking about adding all sentences to a pandas data frame and then applying the match function to each one of them.</p> <p>Thanks in advance!</p>
2020-08-01 10:55:54.343000+00:00
2020-08-02 11:27:47.867000+00:00
null
python|nlp|spacy|sentiment-analysis|wordnet
['https://arxiv.org/abs/2005.14165', 'https://spacy.io/usage/training#transfer-learning']
2
53,380,488
<p>Well:</p> <pre><code>stringdist::stringdist("rjson", "jsonlite") ## [1] 5 </code></pre> <p>That's a modest difference to begin with.</p> <p>However, your assertion seems to be amiss:</p> <pre><code>library(magrittr) rjson::fromJSON('{"name": "Sanjay", "unit_price": 130848, "amount": 11, "up_to_data_sales": 45725}') %&gt;% str() ## List of 4 ## $ name : chr "Sanjay" ## $ unit_price : num 130848 ## $ amount : num 11 ## $ up_to_data_sales: num 45725 jsonlite::fromJSON('{"name": "Sanjay", "unit_price": 130848, "amount": 11, "up_to_data_sales": 45725}') %&gt;% str() ## List of 4 ## $ name : chr "Sanjay" ## $ unit_price : int 130848 ## $ amount : int 11 ## $ up_to_data_sales: int 45725 </code></pre> <p>Apart from <code>jsonlite</code> using a more diminutive data type for the numbers, they both parse the JSON fine.</p> <p>So there's an issue with your file that you failed to disclose in the question.</p> <p>A further incorrect assertion </p> <pre><code>-rw-rw-r-- 1 bob staff 2690 Jul 30 2007 rjson_0.1.0.tar.gz -rw-rw-r-- 1 bob staff 400196 Dec 3 2013 jsonlite_0.9.0.tar.gz </code></pre> <p>not to mention:</p> <pre><code>-rw-rw-r-- 1 bob staff 873843 Oct 4 2010 RJSONIO_0.3-1.tar.gz </code></pre> <p><code>rjson</code> came first. (dir listings came from the CRAN mirror sitting next to me).</p> <p>You can <em>actually read</em> about the rationale and impetus behind <code>jsonlite</code> here: <a href="https://arxiv.org/abs/1403.2805" rel="nofollow noreferrer">https://arxiv.org/abs/1403.2805</a> (which I got off the CRAN page for <code>jsonlite</code>.</p>
2018-11-19 18:16:59.440000+00:00
2018-11-19 18:50:13.320000+00:00
2018-11-19 18:50:13.320000+00:00
null
53,379,940
<p>The rjson::fromJSON() reads a file incorrectly while jsonlite::fromJSON() reads it fine. Here's a sample example.</p> <p>file test.json contents:</p> <pre><code>{"name": "Sanjay", "unit_price": 130848, "amount": 11, "up_to_data_sales": 45725} </code></pre> <p>the <code>jsonlite</code> <code>fromJSON</code> outputs:</p> <pre><code>jsonlite::fromJSON("test.json") $name [1] "Sanjay" $unit_price [1] 130848 $amount [1] 11 $up_to_data_sales [1] 45725 </code></pre> <p>But the same throws an error in <code>rjson</code> package.</p> <pre><code>rjson::fromJSON("test.json") Error in rjson::fromJSON("test.json") : parseTrue: expected to see 'true' - likely an unquoted string starting with 't'. </code></pre> <ol> <li>Why is this error coming?</li> <li>What is the reason <code>rjson</code> package was launched when <code>jsonlite</code> existed?</li> </ol>
2018-11-19 17:36:12.263000+00:00
2021-08-11 06:11:43.337000+00:00
null
r|json
['https://arxiv.org/abs/1403.2805']
1
67,535,700
<p>According to the doc of <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/LayerNormalization" rel="nofollow noreferrer"><code>tf.keras.layers.LayerNormalization</code></a> in <code>TF 2.4.1</code>, <a href="https://github.com/tensorflow/tensorflow/blob/v2.5.0/tensorflow/python/keras/layers/normalization.py#L1066-L1073" rel="nofollow noreferrer">source</a>:</p> <blockquote> <p>Note that other implementations of layer normalization may choose to define <code>gamma</code> and <code>beta</code> over a separate set of axes from the axes being normalized across. For example, Group Normalization (<a href="https://arxiv.org/abs/1803.08494" rel="nofollow noreferrer">Wu et al. 2018</a>) with a group size of 1 corresponds to a Layer Normalization that normalizes across height, width, and channel and has <code>gamma</code> and <code>beta</code> span only the channel dimension. <strong>So, this Layer Normalization implementation will not match a Group Normalization layer with group size set to 1.</strong></p> </blockquote>
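<p>A small numpy sketch (epsilon of 1e-3 assumed, matching the Keras default) reproduces both outputs from the question and makes the difference in normalization axes explicit:</p> <pre><code>import numpy as np

x = np.array([[[1, 2], [3, 40]], [[1, -1], [2, 200]]], dtype=np.float32)

# LayerNormalization (default axis=-1): statistics per last-axis vector
mu = x.mean(axis=-1, keepdims=True)
var = x.var(axis=-1, keepdims=True)
print((x - mu) / np.sqrt(var + 1e-3))   # matches the LayerNormalization output

# GroupNormalization with groups=1: statistics over all non-batch axes
mu = x.mean(axis=(1, 2), keepdims=True)
var = x.var(axis=(1, 2), keepdims=True)
print((x - mu) / np.sqrt(var + 1e-3))   # matches the GroupNormalization output
</code></pre>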
2021-05-14 14:04:01.620000+00:00
2021-05-15 16:50:33.743000+00:00
2021-05-15 16:50:33.743000+00:00
null
67,533,628
<p>From the <strong>group normalization</strong> documentation in <code>tensorflow</code> addons, it states that the group norm layer should become layer normalization if the number of groups is set to one.</p> <p>However when I try this by calling the layers one a test tensor the results differ. It appears that the group norm layer computes the mean and variance across the time as well as the channel axis, whereas the layer norm computes it for each channel's vector independently.</p> <p>Is this a bug or am I missing something? The current behavior of layer norm is actually desirable for what I am doing.</p> <p>Here is the documentation for <a href="https://www.tensorflow.org/addons/api_docs/python/tfa/layers/GroupNormalization" rel="nofollow noreferrer">GroupNormalization</a>:</p> <pre><code>In [5]: x = tf.constant([[[1, 2], [3, 40]], [[1 , -1], [2, 200]]], dtype = tf.float32) In [6]: tf.keras.layers.LayerNormalization()(x) Out[6]: &lt;tf.Tensor: shape=(2, 2, 2), dtype=float32, numpy= array([[[-0.99800587, 0.99800587], [-0.99999857, 0.99999857]], [[ 0.9995002 , -0.9995002 ], [-1. , 1. ]]], dtype=float32)&gt; In [7]: tfa.layers.GroupNormalization(groups = 1)(x) Out[7]: &lt;tf.Tensor: shape=(2, 2, 2), dtype=float32, numpy= array([[[-0.6375344 , -0.57681686], [-0.5160993 , 1.7304504 ]], [[-0.5734435 , -0.5966129 ], [-0.5618587 , 1.7319152 ]]], dtype=float32)&gt; </code></pre>
2021-05-14 11:32:19.117000+00:00
2021-05-15 16:50:33.743000+00:00
2021-05-15 16:50:09.453000+00:00
python|tensorflow|machine-learning|deep-learning|keras-layer
['https://www.tensorflow.org/api_docs/python/tf/keras/layers/LayerNormalization', 'https://github.com/tensorflow/tensorflow/blob/v2.5.0/tensorflow/python/keras/layers/normalization.py#L1066-L1073', 'https://arxiv.org/abs/1803.08494']
3
56,766,719
<p>Unfortunately, tf_hub modules are <a href="https://github.com/tensorflow/hub/issues/124" rel="nofollow noreferrer">not yet supported in eager mode</a> except in tf 2 (which is still in beta and I think needs slightly different hub modules anyway). </p> <p>Therefore you'll need to run this in a session.</p> <p>Something like:</p> <pre class="lang-py prettyprint-override"><code>embed = hub.Module("https://public.ukp.informatik.tu-darmstadt.de/arxiv2018-xling-sentence-embeddings/tf-hub/monolingual/1") X = embed(["This is a test."]) with tf.Session() as session: session.run([tf.global_variables_initializer(), tf.tables_initializer()]) numpy_arr = session.run(X) </code></pre>
2019-06-26 06:46:01.327000+00:00
2019-06-26 06:46:01.327000+00:00
null
null
56,765,498
<p>I am having problems getting a numpy array out of a tensorflow tensor. I use a tensorflow hub module, but I don't want to use tensorflow in downstream tasks; I need a numpy array instead.</p> <p>I know that I have to call the 'eval()' method on the tensor from within a tensorflow session. But unfortunately I cannot get it to work... :( It tells me that the "tables are not initialized". I tried to add 'sess.run(tf.tables_initializer())' but then I get the error: 'NotFoundError: Resource localhost/module_1/embeddings_morph_specialized/class tensorflow::Var does not exist'. I am not sure what to try next. I have also tried 'sess.run()' but have also been unsuccessful.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import tensorflow as tf import tensorflow_hub as hub embed = hub.Module("https://public.ukp.informatik.tu-darmstadt.de/arxiv2018-xling-sentence-embeddings/tf-hub/monolingual/1") X = embed(["This is a test."]) # I tried: #with tf.Session() as sess: # sess.run(tf.tables_initializer()) # X.eval() </code></pre> <p>'X' is the tensor which I would like to convert to a numpy array.</p> <p>Any help is appreciated. :) Thank you.</p>
2019-06-26 05:03:12.390000+00:00
2019-06-26 06:46:01.327000+00:00
null
python|tensorflow
['https://github.com/tensorflow/hub/issues/124']
1
51,079,764
<p>I'm not quite sure what you're asking, but if you want to find nodes that overlap among different communities in a network then there are several algorithms for this. Here is an article that will get you started: <a href="https://arxiv.org/pdf/1110.5813.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1110.5813.pdf</a>. The CFinder algorithm in particular has received a lot of attention. Listed <a href="https://stackoverflow.com/questions/20063927/overlapping-community-detection-with-igraph-or-other-libaries">here</a>, you'll find an implementation of the algorithm that you may find useful. Here is code provided by <a href="https://stackoverflow.com/users/156771/tam%C3%A1s">Tamas</a> for CFinder (note that it is Python 2 code):</p> <pre><code># CFINDER IMPLEMENTATION #!/usr/bin/env python from itertools import combinations import igraph import optparse parser = optparse.OptionParser(usage="%prog [options] infile") parser.add_option("-k", metavar="K", default=3, type=int, help="use a clique size of K") options, args = parser.parse_args() if not args: parser.error("Required input file as first argument") k = options.k g = igraph.load(args[0], format="ncol", directed=False) cls = map(set, g.maximal_cliques(min=k)) edgelist = [] for i, j in combinations(range(len(cls)), 2): if len(cls[i].intersection(cls[j])) &gt;= k-1: edgelist.append((i, j)) cg = igraph.Graph(edgelist, directed=False) clusters = cg.clusters() for cluster in clusters: members = set() for i in cluster: members.update(cls[i]) print "\t".join(g.vs[members]["name"]) </code></pre> <p>Alternatively, I believe NetworkX has a similar implementation, which can be found <a href="https://networkx.github.io/documentation/networkx-1.9.1/reference/generated/networkx.algorithms.community.kclique.k_clique_communities.html" rel="nofollow noreferrer">here</a>.</p>
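<p>If you would rather stay in the NetworkX/python-louvain world you are already using, the k-clique (CFinder-style) routine mentioned above can be called directly; a node that appears in more than one of the returned sets belongs to overlapping communities. A minimal sketch, using a built-in graph as a stand-in for your own <code>G_fb</code>:</p> <pre><code>import networkx as nx
from networkx.algorithms.community import k_clique_communities

G = nx.karate_club_graph()          # replace with your own G_fb
communities = [set(c) for c in k_clique_communities(G, 3)]

# map each node to the indices of the communities it belongs to
membership = {}
for idx, community in enumerate(communities):
    for node in community:
        membership.setdefault(node, []).append(idx)

overlapping = {node: comms for node, comms in membership.items() if len(comms) &gt; 1}
print(overlapping)
</code></pre>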
2018-06-28 09:49:18.527000+00:00
2018-06-28 09:49:18.527000+00:00
null
null
50,892,989
<pre><code>dendo = community.generate_dendrogram(G_fb) for level in range(len(dendo) - 1) : print("partition at level", level, "is", partition_at_level(dendo, level)) </code></pre> <p>I ran the code above on my own data and found that the level of my data is only 1. It seems that the level reflects the size of the communities, which in my case are small. </p> <p>But what should I do next to find overlapping communities, meaning a node can be included in more than one community in the output of the detection algorithm (for instance best_partition, which is what I used for community detection)? </p> <p>In other words, is there any correlation between the level and overlapping? You can find the community graph below.<a href="https://i.stack.imgur.com/m95Fx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m95Fx.png" alt="enter image description here"></a></p>
2018-06-17 01:33:20.537000+00:00
2018-06-28 09:49:18.527000+00:00
null
python|networkx
['https://arxiv.org/pdf/1110.5813.pdf', 'https://stackoverflow.com/questions/20063927/overlapping-community-detection-with-igraph-or-other-libaries', 'https://stackoverflow.com/users/156771/tam%C3%A1s', 'https://networkx.github.io/documentation/networkx-1.9.1/reference/generated/networkx.algorithms.community.kclique.k_clique_communities.html']
4
64,973,085
<p>The residual block from the <a href="https://arxiv.org/abs/1512.03385" rel="noreferrer">ResNet</a> architecture is the following:</p> <p><a href="https://i.stack.imgur.com/r0BS6.png" rel="noreferrer"><img src="https://i.stack.imgur.com/r0BS6.png" alt="Residual Block" /></a></p> <p>You need to use the <a href="https://www.tensorflow.org/guide/keras/functional" rel="noreferrer">Keras functional API</a> because Sequential models are too limited. Its implementation in Keras is:</p> <pre><code>from tensorflow.keras import layers def resblock(x, kernelsize, filters): fx = layers.Conv2D(filters, kernelsize, activation='relu', padding='same')(x) fx = layers.BatchNormalization()(fx) fx = layers.Conv2D(filters, kernelsize, padding='same')(fx) out = layers.Add()([x,fx]) out = layers.ReLU()(out) out = layers.BatchNormalization()(out) return out </code></pre> <p>The <code>BatchNormalization()</code> layer is not essential, but can be a solid option to increase accuracy. <code>x</code> also needs to have the same number of channels as the <code>filters</code> parameter, otherwise the <code>Add()</code> will not work.</p>
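<p>A hypothetical usage, just to show how the block slots into a functional model (the input is given 64 channels so that the skip connection and <code>filters=64</code> match up):</p> <pre><code>from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(32, 32, 64))
x = resblock(inputs, kernelsize=3, filters=64)
x = resblock(x, kernelsize=3, filters=64)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation='softmax')(x)

model = Model(inputs, outputs)
model.summary()
</code></pre>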
2020-11-23 17:15:32.653000+00:00
2020-11-23 17:15:32.653000+00:00
null
null
64,792,460
<p>I have a basic CNN model's code built with <strong>tensorflow.keras</strong> library:</p> <pre><code>model = Sequential() # First Layer model.add(Conv2D(64, (3,3), input_shape = (IMG_SIZE,IMG_SIZE,1))) model.add(Activation(&quot;relu&quot;)) model.add(MaxPooling2D(pool_size = (3,3))) # Second Layer model.add(Conv2D(64, (3,3))) model.add(Activation(&quot;relu&quot;)) model.add(MaxPooling2D(pool_size = (3,3))) # Third Layer model.add(Conv2D(64, (3,3))) model.add(Activation(&quot;relu&quot;)) model.add(MaxPooling2D(pool_size = (3,3))) # Fourth Layer model.add(Conv2D(64, (3,3))) model.add(Activation(&quot;relu&quot;)) model.add(MaxPooling2D(pool_size = (3,3))) # Fifth Layer model.add(Conv2D(64, (3,3))) model.add(Activation(&quot;relu&quot;)) model.add(MaxPooling2D(pool_size = (3,3))) model.add(Flatten()) # Sixth Layer model.add(Dense(64)) model.add(Activation(&quot;relu&quot;)) # Seventh Layer model.add(Dense(1)) model.add(Activation('sigmoid')) </code></pre> <p>Now, I want to make a connection between the <strong>second</strong> and the <strong>fourth</strong> layer to achieve a <strong>residual block</strong> using <strong>tensorflow.keras</strong> library.</p> <p>So, How should I modify the code to achieve such a <strong>residual block</strong>?</p>
2020-11-11 18:53:09.217000+00:00
2020-11-23 17:15:32.653000+00:00
2020-11-11 19:08:35.423000+00:00
python|tensorflow|keras|deep-learning
['https://arxiv.org/abs/1512.03385', 'https://i.stack.imgur.com/r0BS6.png', 'https://www.tensorflow.org/guide/keras/functional']
3
50,928,635
<p>Since this was written, there has been a lot of work in the realm of cGANs (conditional generative adversarial networks); please refer to: <a href="https://arxiv.org/pdf/1611.07004.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.07004.pdf</a></p>
2018-06-19 12:36:19.467000+00:00
2018-06-19 12:36:19.467000+00:00
null
null
31,609,809
<p>I have pairs of images (input-output) but I don't know the transformation for going from A (input) to B (output). I want to record image A and get image B. Physically I can change the setup to get A or B, but I want to do it in software.</p> <p>If I understood correctly, a trained Artificial Neural Network is able to do that: given an input, it can produce the corresponding output. Is that right? Is there any software/ANN where, just by training it on a number of input-output pairs, it will be able to provide the correct output when the input is a new (but similar to the others) image?</p> <p>Thanks</p>
2015-07-24 11:48:18.017000+00:00
2018-06-19 12:36:19.467000+00:00
null
image|neural-network|transform
['https://arxiv.org/pdf/1611.07004.pdf']
1
50,151,171
<p>I understand this is an old post, but since this post comes up often in latex-python-parsing searches (as evident by <a href="https://stackoverflow.com/questions/49779853/extract-only-body-text-from-arxiv-articles-formatted-as-tex">Extract only body text from arXiv articles formatted as .tex</a>), leaving this here for folks down the line: Here's a LaTeX parser in Python that supports search over and modification of the parse tree, <a href="https://github.com/alvinwan/texsoup" rel="noreferrer">https://github.com/alvinwan/texsoup</a>. Taken from the README, here is sample text and how you can interact with it via TexSoup.</p> <pre><code>from TexSoup import TexSoup soup = TexSoup(""" \begin{document} \section{Hello \textit{world}.} \subsection{Watermelon} (n.) A sacred fruit. Also known as: \begin{itemize} \item red lemon \item life \end{itemize} Here is the prevalence of each synonym. \begin{tabular}{c c} red lemon &amp; uncommon \\ life &amp; common \end{tabular} \end{document} """) </code></pre> <p>Here's how to navigate the parse tree.</p> <pre><code>&gt;&gt;&gt; soup.section # grabs the first `section` \section{Hello \textit{world}.} &gt;&gt;&gt; soup.section.name 'section' &gt;&gt;&gt; soup.section.string 'Hello \\textit{world}.' &gt;&gt;&gt; soup.section.parent.name 'document' &gt;&gt;&gt; soup.tabular \begin{tabular}{c c} red lemon &amp; uncommon \\ life &amp; common \end{tabular} &gt;&gt;&gt; soup.tabular.args[0] 'c c' &gt;&gt;&gt; soup.item \item red lemon &gt;&gt;&gt; list(soup.find_all('item')) [\item red lemon, \item life] </code></pre> <p>Disclaimer: I wrote this lib, but it was for similar reasons. Regarding the post by Little Bobby Tales (regarding <code>def</code>), TexSoup doesn't handle definitions.</p>
2018-05-03 09:11:14.987000+00:00
2018-05-03 09:11:14.987000+00:00
null
null
4,792,065
<p>I have a couple of code projects in C++/Python in which LaTeX-format descriptions and labels are used to generate PDF documentation or graphs made using LaTeX+pstricks. However, we also have some plain text outputs, such as an HTML version of the documentation (I already have code to write minimal markup for that) and a non-TeX-enabled plot renderer.</p> <p>For these I would like to eliminate the TeX markup that is necessary for e.g. representing physical units. This includes non-breaking (thin) spaces, \text, \mathrm etc. It would also be nice to parse down things like \frac{#1}{#2} into #1/#2 for the plain text output (and use MathJax for the HTML). Due to the system that we've got at the moment, I need to be able to do this from Python, i.e. <em>ideally</em> I'm looking for a Python package, but a non-Python executable which I can call from Python and catch the output string would also be fine.</p> <p>I'm aware of the <a href="https://tex.stackexchange.com/questions/6431/options-for-converting-latex-to-plain-text">similar question on the TeX StackExchange site</a>, but there weren't any really programmatic solutions to that: I've looked at detex, plasTeX and pytex, which they all seem a bit dead and don't really do what I need: programmatic conversion of a TeX string to a representative plain text string.</p> <p>I could try writing a basic TeX parser using e.g. pyparsing, but a) that might be pitfall-laden and help would be appreciated and b) surely someone has tried that before, or knows of a way to hook into TeX itself to get a better result?</p> <p><strong>Update:</strong> Thanks for all the answers... it does indeed seem to be a bit of an awkward request! I can make do with less than general parsing of LaTeX, but the reason for considering a parser rather than a load of regexes in a loop is that I want to be able to handle nested macros and multi-arg macros nicely, and get the brace matching to work properly. Then I can e.g. reduce txt-irrelevant macros like \text and \mathrm first, and handle txt-relevant ones like \frac last... maybe even with appropriate parentheses! Well, I can dream... for now regexes are not doing such a terrible job.</p>
2011-01-25 09:58:38.020000+00:00
2021-01-30 16:26:10.647000+00:00
2017-04-13 12:34:29.243000+00:00
python|parsing|text|latex
['https://stackoverflow.com/questions/49779853/extract-only-body-text-from-arxiv-articles-formatted-as-tex', 'https://github.com/alvinwan/texsoup']
2
64,974,788
<p>Thanks for providing the code and a link to a Colab notebook! +1! Also, your code is well-written and easy to read. Unless I missed something, I think there are two problems with your code:</p> <ol> <li>The data normalization</li> <li>The implementation of the VAE loss.</li> </ol> <p><strong>About 1.</strong>, your <code>CIFAR10DataModule</code> class normalizes the RGB channels of the CIFAR10 images using <code>mean = 0.5</code> and <code>std = 0.5</code>. Since the pixel values are initially in [0,1] range, the normalized images have pixel values in the [-1,1] range. However, your <code>Decoder</code> class applies a <code>nn.Sigmoid()</code> activation to the reconstructed images. Therefore, your reconstructed images have pixel values in the [0,1] range. I suggest to remove this mean-std normalization so that both the &quot;true&quot; images and the reconstructed images have their pixel values in the [0,1] range.</p> <p><strong>About 2.</strong>: since you're dealing with RGB images the MSE loss makes sense. The idea behind the MSE loss is the &quot;Gaussian decoder&quot;. This decoder assumes the pixel values of a &quot;true image&quot; is generated by independent Gaussian distributions whose mean is the pixel values of the reconstructed image (i.e. the output of the decoder) and with a given variance. Your implementation of the reconstruction loss (namely <code>r_loss = F.mse_loss(predictions, targets)</code>) is equivalent to a fixed variance. Using ideas from <a href="https://arxiv.org/abs/2006.13202" rel="nofollow noreferrer">this paper</a>, we can do better and obtain an analytic expression for the &quot;optimal value&quot; of this variance parameter. Finally, the reconstruction loss should be summed over all pixels (<code>reduction = 'sum'</code>). 
To understand why, have a look at analytic expression of the reconstruction loss (see, for instance, <a href="http://adamlineberry.ai/vae-series/vae-code-experiments" rel="nofollow noreferrer">this blog post</a> which considers the BCE loss).</p> <p>Here is what the refactored <code>LitVAE</code> class looks like:</p> <pre class="lang-py prettyprint-override"><code>class LitVAE(pl.LightningModule): def __init__(self, learning_rate: float = 0.0005, **kwargs) -&gt; None: &quot;&quot;&quot; Parameters ---------- - `learning_rate: float`: learning rate for the optimizer - `**kwargs`: arguments to pass to the variational autoencoder constructor &quot;&quot;&quot; super(LitVAE, self).__init__() self.learning_rate = learning_rate self.vae = VariationalAutoEncoder(**kwargs) def forward(self, x) -&gt; _tensor_size_3_t: return self.vae(x) def training_step(self, batch, batch_idx): r_loss, kl_loss, sigma_opt = self.shared_step(batch) loss = r_loss + kl_loss self.log(&quot;train_loss_step&quot;, loss) return {&quot;loss&quot;: loss, 'log':{&quot;r_loss&quot;: r_loss / len(batch[0]), &quot;kl_loss&quot;: kl_loss / len(batch[0]), 'sigma_opt': sigma_opt}} def training_epoch_end(self, outputs) -&gt; None: # add computation graph if(self.current_epoch == 0): sample_input = torch.randn((1, 3, 32, 32)) sample_model = LitVAE(**MODEL_PARAMS) self.logger.experiment.add_graph(sample_model, sample_input) epoch_loss = self.average_metric(outputs, &quot;loss&quot;) self.logger.experiment.add_scalar(&quot;train_loss_epoch&quot;, epoch_loss, self.current_epoch) def validation_step(self, batch, batch_idx): r_loss, kl_loss, _ = self.shared_step(batch) loss = r_loss + kl_loss self.log(&quot;valid_loss_step&quot;, loss) return {&quot;loss&quot;: loss} def validation_epoch_end(self, outputs) -&gt; None: epoch_loss = self.average_metric(outputs, &quot;loss&quot;) self.logger.experiment.add_scalar(&quot;valid_loss_epoch&quot;, epoch_loss, self.current_epoch) def test_step(self, batch, batch_idx): r_loss, kl_loss, _ = self.shared_step(batch) loss = r_loss + kl_loss self.log(&quot;test_loss_step&quot;, loss) return {&quot;loss&quot;: loss} def test_epoch_end(self, outputs) -&gt; None: epoch_loss = self.average_metric(outputs, &quot;loss&quot;) self.logger.experiment.add_scalar(&quot;test_loss_epoch&quot;, epoch_loss, self.current_epoch) def configure_optimizers(self): return optim.Adam(self.parameters(), lr=self.learning_rate) def shared_step(self, batch) -&gt; torch.TensorType: # images are both samples and targets thus original # labels from the dataset are not required true_images, _ = batch # perform a forward pass through the VAE # mean and log_variance are used to calculate the KL Divergence loss # decoder_output represents the generated images mean, log_variance, generated_images = self(true_images) r_loss, kl_loss, sigma_opt = self.calculate_loss(mean, log_variance, generated_images, true_images) return r_loss, kl_loss, sigma_opt def calculate_loss(self, mean, log_variance, predictions, targets): mse = F.mse_loss(predictions, targets, reduction='mean') log_sigma_opt = 0.5 * mse.log() r_loss = 0.5 * torch.pow((targets - predictions) / log_sigma_opt.exp(), 2) + log_sigma_opt r_loss = r_loss.sum() kl_loss = self._compute_kl_loss(mean, log_variance) return r_loss, kl_loss, log_sigma_opt.exp() def _compute_kl_loss(self, mean, log_variance): return -0.5 * torch.sum(1 + log_variance - mean.pow(2) - log_variance.exp()) def average_metric(self, metrics, metric_name): avg_metric = torch.stack([x[metric_name] for x in 
metrics]).mean() return avg_metric </code></pre> <p>After 10 epochs, that's what the reconstructed images look like:</p> <p><a href="https://i.stack.imgur.com/c0K3W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c0K3W.png" alt="enter image description here" /></a></p>
2020-11-23 19:06:35.470000+00:00
2020-11-23 19:06:35.470000+00:00
null
null
64,909,658
<p>I have trained a VAE on CIFAR10 data-set. However, when I try to generate images from the VAE all I get is a bunch of gray noise back. The implementation of this VAE follows the implementation from the book <a href="https://www.oreilly.com/library/view/generative-deep-learning/9781492041931/" rel="nofollow noreferrer">Generative Deep Learning</a>, but instead of TensorFlow the code uses PyTorch.</p> <p>The notebook containing the training as well as the generation can be found <a href="https://colab.research.google.com/drive/1t0EmOo5UjmeRScFtQzpcDEB8u2vYW7Yk?usp=sharing" rel="nofollow noreferrer">here</a>, while the actual implementation of the VAE can be found <a href="https://github.com/Kinyugo/generative-deep-learning/blob/main/variational_autoencoder/variational_autoencoder.py" rel="nofollow noreferrer">here</a>.</p> <p>I have tried:</p> <ol> <li>Disabling dropouts.</li> <li>Increasing the dimension of the latent space.</li> </ol> <p>None of the methods show any improvement at all.</p> <p>I have verified that:</p> <ol> <li>The input size matches the output size</li> <li>Back-propagation runs successfully as the loss decreases during training.</li> </ol>
2020-11-19 10:21:03.340000+00:00
2020-11-23 19:06:35.470000+00:00
null
python|tensorflow|deep-learning|pytorch|autoencoder
['https://arxiv.org/abs/2006.13202', 'http://adamlineberry.ai/vae-series/vae-code-experiments', 'https://i.stack.imgur.com/c0K3W.png']
3
58,634,964
<p>Methods like these are not easy to implement, but there is a trick. Define the loss like this:</p> <pre><code>import keras.backend as K def regression_nll_loss(sigma_sq, epsilon = 1e-6): def nll_loss(y_true, y_pred): return 0.5 * K.mean(K.log(sigma_sq + epsilon) + K.square(y_true - y_pred) / (sigma_sq + epsilon)) return nll_loss </code></pre> <p>Then define a model with two outputs, one for the mean and another for the variance:</p> <pre><code>from keras.models import Model from keras.layers import Dense, Input inp = Input(shape=(1,)) x = Dense(32, activation="relu")(inp) x = Dense(32, activation="relu")(x) mean = Dense(1, activation="linear")(x) var = Dense(1, activation="softplus")(x) train_model = Model(inp, mean) pred_model = Model(inp, [mean, var]) train_model.compile(loss=regression_nll_loss(var), optimizer="adam") train_model.fit(x, y, ...) mean, var = pred_model.predict(some_input) </code></pre> <p>The trick is to explicitly pass the tensor for the variance to the loss, so it only needs two inputs, and supervision is only performed for the mean. Then you need to define two models that share weights, one for training, and another for testing/inference. This latter model returns both the mean and variance.</p> <p>Remember to use a softplus activation for the variance to keep it positive. I have implemented this loss for use with <a href="https://arxiv.org/abs/1612.01474" rel="nofollow noreferrer">Deep Ensembles</a>, you can find an example <a href="https://github.com/mvaldenegro/keras-uncertainty/blob/master/examples/regression_deep-ensemble.py" rel="nofollow noreferrer">here</a>.</p>
2019-10-30 23:26:41.950000+00:00
2019-10-30 23:26:41.950000+00:00
null
null
58,476,704
<p>I want to train a neural network which also returns prediction intervals, so that I can have some idea of my confidence in a prediction. There seems to be four main methods of achieving this, which are summarized in the paper "Comprehensive Review of Neural Network-Based Prediction Intervals and New Advances": <a href="https://ieeexplore.ieee.org/document/5966350" rel="nofollow noreferrer">https://ieeexplore.ieee.org/document/5966350</a></p> <p>I am interested in the mean-variance estimation (MVE) method because it seems to be the simplest to understand. However I am struggling to get my head around exactly how this would be implemented in Keras. </p> <p>I would guess the loss function would be defined by:</p> <pre><code>def mve_cost(y_true, y_pred, var_pred): loss = 0.5*tf.reduce_sum(tf.log(var_pred) + tf.divide((tf.square(y_true - y_pred)),(tf.square(var_pred))) ) return loss </code></pre> <p>But can a loss function in Keras take three inputs? I have never seen this before. Also, the target for the variance-NN is not known beforehand and takes into account the predictions made by the mean-NN. I suppose this will need some of the more flexible capabilities of the Keras Functional API but I'm confused about how it would be put together. </p> <ul> <li>How do you define the loss function properly for the MVE method?</li> <li>How can the tricky relationship between the two NNs be implemented in the Keras functional API?</li> <li>Does anyone know of an implementation of this method already online?</li> <li>Is there another method of generating prediction intervals for NNs that is more easily understood/implemented in Keras?</li> </ul>
2019-10-20 19:48:54.870000+00:00
2019-10-30 23:26:41.950000+00:00
null
python|tensorflow|machine-learning|keras|neural-network
['https://arxiv.org/abs/1612.01474', 'https://github.com/mvaldenegro/keras-uncertainty/blob/master/examples/regression_deep-ensemble.py']
2
40,927,483
<blockquote> <p>And secondly, why would I use [grouping]?</p> </blockquote> <p>This was originally presented as an optimization in the paper which sparked the current cycle of neural network popularity:</p> <blockquote> <p>Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. "<a href="https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf" rel="noreferrer">Imagenet classification with deep convolutional neural networks</a>." In Advances in neural information processing systems, pp. 1097-1105. 2012.</p> </blockquote> <p>Figure 2 shows how grouping was used for that work. The authors of caffe originally added this ability so they could replicate the AlexNet architecture. However, grouping continues to show itself as beneficial in other scenarios. </p> <p>For example, both Facebook and Google have released papers which essentially show that grouping can dramatically reduce resource use while helping to preserve accuracy. The Facebook paper can be seen here: (<a href="https://arxiv.org/abs/1611.05431" rel="noreferrer">ResNeXt</a>) and the Google paper can be found here: (<a href="https://arxiv.org/abs/1704.04861" rel="noreferrer">MobileNets</a>).</p>
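<p>To make the mechanics concrete, here is a small, hypothetical sketch (in PyTorch rather than Caffe, purely to illustrate the semantics; the layer sizes are made up). It shows that grouping keeps the output shape the same while shrinking the weight tensor, because each filter only sees its own slice of the input channels.</p> <pre><code>import torch
import torch.nn as nn

x = torch.randn(1, 40, 28, 28)          # a batch of one, 40 input channels

conv_full    = nn.Conv2d(40, 80, kernel_size=3, padding=1, groups=1)
conv_grouped = nn.Conv2d(40, 80, kernel_size=3, padding=1, groups=20)

print(conv_full(x).shape, conv_grouped(x).shape)   # both (1, 80, 28, 28)

# With groups=20 each filter only sees 40/20 = 2 input channels,
# so the weight tensor shrinks from 80x40x3x3 to 80x2x3x3.
print(conv_full.weight.shape)     # torch.Size([80, 40, 3, 3])
print(conv_grouped.weight.shape)  # torch.Size([80, 2, 3, 3])
</code></pre>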
2016-12-02 08:10:32.243000+00:00
2017-05-09 08:04:55.870000+00:00
2017-05-09 08:04:55.870000+00:00
null
40,872,914
<p>I have read the documentation about the <strong>group</strong> param:</p> <blockquote> <p>group (g) [default 1]: If g > 1, we restrict the connectivity of each filter to a subset of the input. Specifically, the input and output channels are separated into g groups, and the ith output group channels will be only connected to the ith input group channels.</p> </blockquote> <p>But first of all I do not understand exactly what they mean. And secondly, why would I use it? Could anyone explain it a bit better?</p> <p>As far as I have understood it, it means the following:</p> <p>If I set g greater than 1, my input and output channels are separated into groups. But how exactly is that done? If I set it to 20 and my input is 40, will I have two groups of 20? And if the output is 50, will I have one group of 20 and one group of 30?</p>
2016-11-29 18:15:29.077000+00:00
2018-11-06 01:33:36.420000+00:00
null
deep-learning|caffe|conv-neural-network
['https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf', 'https://arxiv.org/abs/1611.05431', 'https://arxiv.org/abs/1704.04861']
3
42,542,733
<p>The problem with all these solutions for <code>zip</code> is that they only fold over one list or the other, which can be a problem if both of them are "good producers", in the parlance of list fusion. What you actually need is a solution that folds over both lists. Fortunately, there is a paper about exactly that, called <a href="https://arxiv.org/pdf/1309.5135.pdf" rel="noreferrer">"Coroutining Folds with Hyperfunctions"</a>.</p> <p>You need an auxiliary type, a hyperfunction, which is basically a function that takes another hyperfunction as its argument.</p> <pre><code>newtype H a b = H { invoke :: H b a -&gt; b } </code></pre> <p>The hyperfunctions used here basically act like a "stack" of ordinary functions.</p> <pre><code>push :: (a -&gt; b) -&gt; H a b -&gt; H a b push f q = H $ \k -&gt; f $ invoke k q </code></pre> <p>You also need a way to put two hyperfunctions together, end to end.</p> <pre><code>(.#.) :: H b c -&gt; H a b -&gt; H a c f .#. g = H $ \k -&gt; invoke f $ g .#. k </code></pre> <p>This is related to <code>push</code> by the law:</p> <pre><code>(push f x) .#. (push g y) = push (f . g) (x .#. y) </code></pre> <p>This turns out to be an associative operator, and this is the identity:</p> <pre><code>self :: H a a self = H $ \k -&gt; invoke k self </code></pre> <p>You also need something that disregards everything else on the "stack" and returns a specific value:</p> <pre><code>base :: b -&gt; H a b base b = H $ const b </code></pre> <p>And finally, you need a way to get a value out of a hyperfunction:</p> <pre><code>run :: H a a -&gt; a run q = invoke q self </code></pre> <p><code>run</code> strings all of the <code>push</code>ed functions together, end to end, until it hits a <code>base</code> or loops infinitely.</p> <p>So now you can fold both lists into hyperfunctions, using functions that pass information from one to the other, and assemble the final value.</p> <pre><code>zip xs ys = run $ foldr (\x h -&gt; push (first x) h) (base []) xs .#. foldr (\y h -&gt; push (second y) h) (base Nothing) ys where first _ Nothing = [] first x (Just (y, xys)) = (x, y):xys second y xys = Just (y, xys) </code></pre> <p>The reason why folding over both lists matters is because of something GHC does called <em>list fusion</em>, which is talked about in <a href="http://hackage.haskell.org/package/base-4.9.0.0/docs/src/GHC.Base.html#build" rel="noreferrer">the GHC.Base module</a>, but probably should be much more well-known. Being a good list producer and using <code>build</code> with <code>foldr</code> can prevent lots of useless production and immediate consumption of list elements, and can expose further optimizations.</p>
2017-03-01 21:47:52.047000+00:00
2017-03-01 21:47:52.047000+00:00
null
null
235,148
<p>I'm currently on chapter 4 of Real World Haskell, and I'm trying to wrap my head around <a href="http://book.realworldhaskell.org/read/functional-programming.html#x_E9" rel="noreferrer">implementing foldl in terms of foldr</a>.</p> <p>(Here's their code:)</p> <pre><code>myFoldl :: (a -&gt; b -&gt; a) -&gt; a -&gt; [b] -&gt; a myFoldl f z xs = foldr step id xs z where step x g a = g (f a x) </code></pre> <p>I thought I'd try to implement <code>zip</code> using the same technique, but I don't seem to be making any progress. Is it even possible?</p>
2008-10-24 20:27:52.120000+00:00
2017-11-30 04:38:36.947000+00:00
2015-08-14 00:55:36.427000+00:00
haskell|functional-programming|fold|combinators
['https://arxiv.org/pdf/1309.5135.pdf', 'http://hackage.haskell.org/package/base-4.9.0.0/docs/src/GHC.Base.html#build']
2
51,364,384
<p><strong>Summary</strong></p> <p>Here is an in progress partial implementation:</p> <p><a href="https://github.com/carpedm20/pixel-rnn-tensorflow" rel="nofollow noreferrer">https://github.com/carpedm20/pixel-rnn-tensorflow</a></p> <p>Here is a description of Row LSTM and BiDiagonal LSTM at google deepmind:</p> <p><a href="https://towardsdatascience.com/summary-of-pixelrnn-by-google-deepmind-7-min-read-938d9871d6d9" rel="nofollow noreferrer">https://towardsdatascience.com/summary-of-pixelrnn-by-google-deepmind-7-min-read-938d9871d6d9</a></p> <hr> <p><strong>Row LSTM</strong></p> <p>From the linked deepmind blog:</p> <p>The hidden state of a pixel, red in the image below, is based on the "memory" of the triangular three pixels before it. Because they are in a "row", we can compute in parallel, speeding up computation. We sacrifice some context information (using more history or memory) for the ability to do this parallel computation and speed up training.</p> <p><a href="https://i.stack.imgur.com/MSlpr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MSlpr.png" alt="enter image description here"></a></p> <p>The actual implementation relies on several other optimizations and is quite involved. From the <a href="https://arxiv.org/pdf/1601.06759.pdf" rel="nofollow noreferrer">original paper</a>:</p> <blockquote> <p>The computation proceeds as follows. An LSTM layer has an input-to-state component and a recurrent state-to-state component that together determine the four gates inside the LSTM core. To enhance parallelization in the Row LSTM the input-to-state component is first computed for the entire two-dimensional input map; for this a k × 1 convolution is used to follow the row-wise orientation of the LSTM itself. The convolution is masked to include only the valid context (see Section 3.4) and produces a tensor of size 4h × n × n, representing the four gate vectors for each position in the input map, where h is the number of output feature maps. To compute one step of the state-to-state component of the LSTM layer, one is given the previous hidden and cell states hi−1 and ci−1, each of size h × n × 1. The new hidden and cell states hi , ci are obtained as follows: </p> </blockquote> <p><a href="https://i.stack.imgur.com/VpsNo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VpsNo.png" alt="enter image description here"></a></p> <blockquote> <p>where xi of size h × n × 1 is row i of the input map, and ~ represents the convolution operation and the elementwise multiplication. The weights Kss and Kis are the kernel weights for the state-to-state and the input-to-state components, where the latter is precomputed as described above. In the case of the output, forget and input gates oi , fi and ii , the activation σ is the logistic sigmoid function, whereas for the content gate gi , σ is the tanh function. Each step computes at once the new state for an entire row of the input map</p> </blockquote> <p><strong>Diagonal BLSTM</strong></p> <p>Diagonal BLSTM's were developed to leverage the speedup of parallelization without sacrificing as much context information. A node in a DBLSTM looks to its left and above it; since those nodes have also looked to the left and above, the conditional probability of a given node depends in some sense on all of its ancestors. Otherwise, the architectures are very similar. 
From the deepmind blog:</p> <p><a href="https://i.stack.imgur.com/TANB1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TANB1.png" alt="enter image description here"></a></p>
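<p>To make the equations above a bit more concrete, here is a heavily simplified, hypothetical PyTorch sketch of one Row LSTM pass. It assumes a kernel size of 3, omits the causal masking and the exact kernel-orientation details from the paper, and all shapes and tensors are illustrative rather than trained weights.</p> <pre><code>import torch
import torch.nn.functional as F

h, n = 8, 16                                   # feature maps, image side length
x = torch.randn(1, h, n, n)                    # input feature map

K_is = torch.randn(4 * h, h, 1, 3)             # input-to-state conv (along each row)
K_ss = torch.randn(4 * h, h, 3)                # state-to-state 1D conv (along the row)

# The input-to-state component can be precomputed for the whole map in one shot.
i2s = F.conv2d(x, K_is, padding=(0, 1))        # (1, 4h, n, n)

h_prev = torch.zeros(1, h, n)                  # hidden state of the previous row
c_prev = torch.zeros(1, h, n)

for i in range(n):                             # process the image row by row
    s2s = F.conv1d(h_prev, K_ss, padding=1)    # (1, 4h, n)
    o, f, j, g = (s2s + i2s[:, :, i, :]).chunk(4, dim=1)
    c = torch.sigmoid(f) * c_prev + torch.sigmoid(j) * torch.tanh(g)
    h_row = torch.sigmoid(o) * torch.tanh(c)
    h_prev, c_prev = h_row, c

print(h_row.shape)                             # (1, h, n): hidden state of the last row
</code></pre> <p>The key point the sketch tries to show is that each row is processed in one shot (the convolutions run over the whole row in parallel), while the recurrence only runs over the n rows.</p>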
2018-07-16 14:41:50.963000+00:00
2018-07-16 14:59:33.433000+00:00
2018-07-16 14:59:33.433000+00:00
null
51,364,273
<p>I am trying to implement Pixel RNN in pytorch, but I cannot seem to find any documentation on this. The main parts of Pixel RNN are Row LSTM and BiDiagonal LSTM, so I am looking for some code of these algorithms to better understand what they are doing. Specifically, I am confused as to these algorithms calculate one row and diagonal at once, respectively. Any help would be much appreciated.</p>
2018-07-16 14:35:57.903000+00:00
2018-07-16 14:59:33.433000+00:00
null
python|machine-learning|lstm|pytorch|rnn
['https://github.com/carpedm20/pixel-rnn-tensorflow', 'https://towardsdatascience.com/summary-of-pixelrnn-by-google-deepmind-7-min-read-938d9871d6d9', 'https://i.stack.imgur.com/MSlpr.png', 'https://arxiv.org/pdf/1601.06759.pdf', 'https://i.stack.imgur.com/VpsNo.png', 'https://i.stack.imgur.com/TANB1.png']
6
30,317,009
<p>This is a known research problem in Computer Graphics to find the Boolean operations on polygonal meshes. You can take a look at some related papers at:</p> <p><a href="http://arxiv.org/pdf/1308.4434.pdf" rel="nofollow">http://arxiv.org/pdf/1308.4434.pdf</a></p> <p><a href="http://www.tandfonline.com/doi/abs/10.3722/cadaps.2010.405-415?journalCode=tcad20" rel="nofollow">http://www.tandfonline.com/doi/abs/10.3722/cadaps.2010.405-415?journalCode=tcad20</a>.</p> <p>(You can find older works by taking a look at the cited papers)</p> <p>In general, polygonal meshes are not very effective at Boolean operations. Boolean operations can be easily addressed in implicit modeling in which an object is represented by a function. Later, the object can be converted to a mesh by marching cubes (for example).</p>
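<p>As a concrete illustration of the implicit-modeling route mentioned above, here is a small, hypothetical Python sketch: two solids are represented as signed distance functions, the Boolean operations become simple min/max combinations, and the zero level set of the result could then be meshed, for example with scikit-image's <code>measure.marching_cubes</code>. The shapes and grid resolution are made up.</p> <pre><code>import numpy as np

# Signed distance functions: negative inside the solid, positive outside.
def sphere(p, center, r):
    return np.linalg.norm(p - center, axis=-1) - r

def union(d1, d2):        return np.minimum(d1, d2)
def intersection(d1, d2): return np.maximum(d1, d2)
def difference(d1, d2):   return np.maximum(d1, -d2)

# Sample the fields on a regular grid.
xs = np.linspace(-2, 2, 64)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)

a = sphere(grid, np.array([-0.4, 0.0, 0.0]), 1.0)
b = sphere(grid, np.array([ 0.4, 0.0, 0.0]), 1.0)

d = difference(a, b)   # solid A with solid B cut away

# A boundary mesh of the result could then be extracted with, e.g.,
# skimage.measure.marching_cubes(d, level=0.0).
print(d.shape)
</code></pre>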
2015-05-19 04:52:26.930000+00:00
2015-05-19 04:52:26.930000+00:00
null
null
30,305,149
<p>Suppose I have 2 polyhedrons, partially overlapping in space. Each is defined by a list of connected polygons, which in turn are defined by lists of line segments (which are defined by 2 points). Is there a simple algorithm for creating the polyhedron which is the union of the boundaries of these polyhedrons, but with all the interior pieces removed?</p> <p>After this, I'll likewise be implementing a subtract and an intersection method.</p> <p>I'm contributing to this open source library. Source code: <a href="https://bitbucket.org/Clearspan/geometry-class-library/src/34a2ab5031879d051abb855a828368e397b4f5b6/GeometryClassLibrary/Solids/Polyhedron.cs?at=master" rel="nofollow">https://bitbucket.org/Clearspan/geometry-class-library/src/34a2ab5031879d051abb855a828368e397b4f5b6/GeometryClassLibrary/Solids/Polyhedron.cs?at=master</a></p>
2015-05-18 13:55:24.277000+00:00
2015-05-19 20:57:44.337000+00:00
null
c#|math|visual-studio-2013|polyhedra|geometry-class-library
['http://arxiv.org/pdf/1308.4434.pdf', 'http://www.tandfonline.com/doi/abs/10.3722/cadaps.2010.405-415?journalCode=tcad20']
2
47,626,036
<p>There are a lot of reasons why your network is performing poorly. From what I understand, your triplet generation method is fine. Here are some tips that may help improve your performance.</p> <h3>The model</h3> <p>In deep metric learning, people usually use models pre-trained on the ImageNet classification task, as these models are pretty expressive and can generate good representations for images. You can fine-tune your model on the basis of these pre-trained models, e.g., VGG16, GoogleNet, ResNet.</p> <h3>How to fine-tune</h3> <p>Even if you have a good pre-trained model, it is often difficult to directly optimize the triplet loss using these models on your own dataset. Since these pre-trained models are trained on ImageNet, if your dataset is vastly different from ImageNet, you can first fine-tune the model using a classification task on your dataset. Once your model performs reasonably well on the classification task on your custom dataset, you can use the classification model as the base network (maybe with a little tweaking) for the triplet network. It will often lead to much better performance.</p> <h3>Hyper parameters</h3> <p>Hyper parameters such as learning rate, momentum, weight_decay etc. are also extremely important for good performance (learning rate is maybe the most important factor). Since you are fine-tuning and not training the network from scratch, you should use a small learning rate, for example, <code>lr=0.001</code> or <code>lr=0.0001</code>. For momentum, 0.9 is a good choice. For weight_decay, people usually use 0.0005 or 0.00005.</p> <p>If you add some fully connected layers, then for these layers, the learning rate may be higher than for the other layers (0.01 for example). </p> <h3>Which layer to fine-tune</h3> <p>As your network has several layers, you need to decide which layer to fine-tune. Researchers have found that the lower layers in a network just produce generic features such as lines or edges. Typically, people will freeze the updating of the lower layers and only update the weights of the upper layers, which tend to produce task-oriented features. You should try to optimize starting from different lower layers and see which setting performs best.</p> <h3>Reference</h3> <ol> <li><a href="https://arxiv.org/pdf/1504.08083.pdf" rel="nofollow noreferrer">Fast rcnn</a> (Section 4.5, which layers to fine-tune)</li> <li><a href="https://arxiv.org/pdf/1604.01325.pdf" rel="nofollow noreferrer">Deep image retrieval</a> (section 5.2, Influence of fine-tuning the representation)</li> </ol>
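<p>To illustrate the freezing advice above, here is a small, hypothetical Keras sketch (layer choices, input size, and the embedding dimension are made up, not taken from the question's code): load a pre-trained backbone, freeze its lower layers, and train only the upper layers plus a new embedding head with a small learning rate.</p> <pre><code>from keras.applications import VGG16
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import Adam

# Pre-trained backbone without its classification head (input size is arbitrary here).
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the lower, generic layers; leave only the last convolutional block trainable.
for layer in base.layers[:-4]:
    layer.trainable = False

# New embedding head on top of the backbone.
x = GlobalAveragePooling2D()(base.output)
embedding = Dense(128, name="embedding")(x)

model = Model(base.input, embedding)

# Small learning rate because we are fine-tuning, not training from scratch.
optimizer = Adam(lr=1e-4)

model.summary()
# This embedding network would then be plugged into the triplet setup
# (anchor / positive / negative branches sharing these weights).
</code></pre>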
2017-12-04 03:19:44.897000+00:00
2017-12-04 03:19:44.897000+00:00
null
null
45,839,488
<p>I'm trying to train a convolutional neural network with triplet loss (more about triplet loss <a href="https://stackoverflow.com/questions/38260113/implementing-contrastive-loss-and-triplet-loss-in-tensorflow">here</a>) in order to generate face embeddings (128 values that accurately describe a face).</p> <p>In order to select only semi-hard triplets (<strong>distance(anchor, positive) &lt; distance(anchor, negative)</strong>), I first feed all values in a mini-batch and calculate the distances:</p> <pre><code>distance1, distance2 = sess.run([d_pos, d_neg], feed_dict={x_anchor:input1, x_positive:input2, x_negative:input3}) </code></pre> <p>Then I select the indices of the inputs with distances that respect the formula above:</p> <pre><code>valids_batch = compute_valids(distance1, distance2, batch_size) </code></pre> <p>The function <em>compute_valids</em>:</p> <pre><code>def compute_valids(distance1, distance2, batch_size): valids = list(); for q in range(0, len(distance1)): if(distance1[q] &lt; distance2[q]): valids.append(q) return valids; </code></pre> <p>Then <strong>I learn only from the training examples with indices returned by this filter function</strong>:</p> <pre><code>input1_valid = [input1[q] for q in valids_batch] input2_valid = [input2[q] for q in valids_batch] input3_valid = [input3[q] for q in valids_batch] _, loss_value, summary = sess.run([optimizer, cost, summary_op], feed_dict={x_anchor:input1_valid, x_positive:input2_valid, x_negative:input3_valid}) </code></pre> <p>Where optimizer is defined as:</p> <pre><code>model1 = siamese_convnet(x_anchor) model2 = siamese_convnet(x_positive) model3 = siamese_convnet(x_negative) d_pos = tf.reduce_sum(tf.square(model1 - model2), 1) d_neg = tf.reduce_sum(tf.square(model1 - model3), 1) cost = triplet_loss(d_pos, d_neg) optimizer = tf.train.AdamOptimizer(learning_rate = 1e-4).minimize( cost ) </code></pre> <p>But something is wrong because accuracy is very low (50%).</p> <p>What am I doing wrong?</p>
2017-08-23 12:16:23.970000+00:00
2018-03-21 11:41:16.483000+00:00
null
tensorflow|neural-network|conv-neural-network
['https://arxiv.org/pdf/1504.08083.pdf', 'https://arxiv.org/pdf/1604.01325.pdf']
2
13,483,252
<p>PCA could be a right choice here (but not the only one). However, you should be aware of the fact that PCA does not reduce the number of your input data features automatically. I recommend reading this tutorial: <a href="http://arxiv.org/pdf/1404.1100v1.pdf" rel="nofollow">http://arxiv.org/pdf/1404.1100v1.pdf</a> - it is the one I used to understand PCA and it's really good for beginners.</p> <p>Getting back to your question: an image is a vector in a 324-dimensional space. In this space the first base vector is the one having a white pixel in the top left corner, the next one has the next pixel white and all the others black - and so on. It is probably not the best base vector set to represent this image data. PCA computes new base vectors (the COEFF matrix - the new vectors expressed as values in the old vector space) and new image vector values (the SCORE matrix). At that point you have not lost ANY data at all (no feature number reduction). But you could stop using some of the new base vectors, because they are probably connected with noise, not the data itself. It is all described in detail in the tutorial.</p> <pre><code>images = rand(10,324); [COEFF, SCORE] = princomp(images); reconstructed_images = SCORE / COEFF + repmat(mean(images,1), 10, 1); images - reconstructed_images %as you see there are almost only zeros - the non-zero values are effects of small numerical errors %it's possible because you are only switching between the sets of base vectors used to represent the data for i=100:324 SCORE(:,i) = zeros(10,1); end %we remove the features 100 to 324, leaving only the first 99 %obviously, you could take only the non-zero part of the matrix and use it %somewhere else, like for your neural network reconstructed_images_with_reduced_features = SCORE / COEFF + repmat(mean(images,1), 10, 1); images - reconstructed_images_with_reduced_features %there are fewer features, but reconstruction is still pretty good </code></pre>
2012-11-20 22:23:55+00:00
2014-09-16 13:43:25.183000+00:00
2014-09-16 13:43:25.183000+00:00
null
13,430,628
<p>I have 10 images(18x18). I save these images inside an array named <code>images[324][10]</code> where the number 324 represents the amount of pixels for an image and the number 10 the total amount of images that I have.</p> <p>I would like to use these images for a neuron network however 324 is a big number to give as an input and thus I would like to decrease this number but retain as much information as possible.</p> <p>I heard that you can do this with the <code>princomp</code> function which implements PCA.</p> <p>The problem is that I haven't found any example on how to use this function, and especially for my case.</p> <p>If I run</p> <pre><code>[COEFF, SCORE, latent] = princomp(images); </code></pre> <p>it runs fine but how can I then get the array <code>newimages[number_of_desired_features][10]</code>?</p>
2012-11-17 12:37:18.587000+00:00
2014-09-16 13:43:25.183000+00:00
2012-11-17 12:50:10.123000+00:00
matlab|image-processing|pca
['http://arxiv.org/pdf/1404.1100v1.pdf']
1
70,773,862
<blockquote> <p>which one follows the best practices of Prolog?</p> </blockquote> <p>The first one.</p> <p>Using =/2 is not common. I am not saying to avoid it but most production code tends to refactor them out.</p> <p>For more info see: <a href="https://arxiv.org/pdf/0911.2899.pdf" rel="nofollow noreferrer">Coding Guidelines for Prolog</a></p> <hr /> <p>If you are using SWI-Prolog with <a href="https://www.swi-prolog.org/pldoc/man?section=ssu" rel="nofollow noreferrer">Single Side Unification</a>, then using =/2 is not only common but in some cases necessary.</p> <p>See: <a href="https://swi-prolog.discourse.group/t/example-of-refactoring-code-to-ssu/4713" rel="nofollow noreferrer">Example of refactoring code to SSU</a></p> <hr /> <p>If you are using <a href="https://www.swi-prolog.org/pldoc/man?predicate=listing/1" rel="nofollow noreferrer">listing/1</a> to look at DCG code then you will see <a href="https://www.swi-prolog.org/pldoc/doc_for?object=(%3D)/2" rel="nofollow noreferrer">=/2</a> used quite often. This is because the compiler is playing it safe and not refactoring the code to avoid introducing a bug.</p>
2022-01-19 16:03:27.253000+00:00
2022-01-21 08:23:44.737000+00:00
2022-01-21 08:23:44.737000+00:00
null
70,772,232
<p>I have defined the predicate duplicate_elements/2 whose definition is as follows:</p> <p>duplicate_elements(L1, L2): L1 = [a,b], L2 = [a,a,b,b]</p> <p>I would like to know, between the two alternatives 1 and 2, which one follows the best practices of Prolog:</p> <pre><code>duplicate_elements_1([],[]). duplicate_elements_1([P | R],[P,P | T]) :- duplicate_elements_1(R , T). </code></pre> <pre><code>duplicate_elements_2([],[]). duplicate_elements_2([P | R],[H1,H2 | T]) :- H1 = P, H2 = P, duplicate_elements_2(R , T). </code></pre> <p>Thanks</p>
2022-01-19 14:18:41.647000+00:00
2022-01-21 08:23:44.737000+00:00
2022-01-19 15:40:06.613000+00:00
prolog
['https://arxiv.org/pdf/0911.2899.pdf', 'https://www.swi-prolog.org/pldoc/man?section=ssu', 'https://swi-prolog.discourse.group/t/example-of-refactoring-code-to-ssu/4713', 'https://www.swi-prolog.org/pldoc/man?predicate=listing/1', 'https://www.swi-prolog.org/pldoc/doc_for?object=(%3D)/2']
5
23,974,994
<p>Seminal work in this field was done in <a href="https://courses.cs.washington.edu/courses/cse455/10au/notes/forsyth.pdf" rel="nofollow">Fleck, Margaret M., David A. Forsyth, and Chris Bregler. "Finding naked people." Computer Vision—ECCV'96. Springer Berlin Heidelberg, 1996. 593-602.</a>. The approach detects skin-colored regions and then determines whether or not the regions match predefined human shapes. More on their skin detection algorithm here: <a href="http://www.cs.hmc.edu/~fleck/naked-skin.html" rel="nofollow">http://www.cs.hmc.edu/~fleck/naked-skin.html</a> .</p> <p>More recent papers with summaries of current methods are available: </p> <ol> <li><a href="http://iseclab.org/people/cplatzer/papers/sfcs05-platzer.pdf" rel="nofollow">http://iseclab.org/people/cplatzer/papers/sfcs05-platzer.pdf</a> </li> <li><a href="http://arxiv.org/abs/1402.5792" rel="nofollow">http://arxiv.org/abs/1402.5792</a></li> </ol> <p>You may also take a look at: <a href="https://stackoverflow.com/questions/713247/what-is-the-best-way-to-programatically-detect-porn-images">What is the best way to programatically detect porn images?</a></p> <p><strong>update 2016:</strong> use a convnet. They are far better at building high resolution filters. I wrote about it in more detail here </p> <ul> <li><p><a href="http://blog.clarifai.com/what-convolutional-neural-networks-see-at-when-they-see-nudity/" rel="nofollow">http://blog.clarifai.com/what-convolutional-neural-networks-see-at-when-they-see-nudity/</a></p></li> <li><p><a href="https://www.slideshare.net/mobile/RyanCompton1/what-convnets-look-at-when-they-look-at-nudity" rel="nofollow">https://www.slideshare.net/mobile/RyanCompton1/what-convnets-look-at-when-they-look-at-nudity</a></p></li> </ul>
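<p>As a rough illustration of the skin-detection step those papers build on, here is a small OpenCV sketch. The HSV thresholds are ad-hoc values for illustration only, not the calibrated skin filter from Fleck and Forsyth's paper, and real systems combine such a mask with shape/texture analysis (or, nowadays, convnets).</p> <pre><code>import cv2
import numpy as np

# A dummy BGR image stands in for a real photo (use cv2.imread("photo.jpg") in practice).
img = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Very rough skin-tone range in HSV -- illustrative only.
lower = np.array([0, 40, 60], dtype=np.uint8)
upper = np.array([25, 180, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)          # 255 where the pixel looks skin-colored

skin_fraction = mask.mean() / 255.0
print("fraction of skin-colored pixels:", skin_fraction)

# A naive rule: flag the image for further (shape-based) analysis
# if a large share of pixels are skin-colored.
flag_for_review = skin_fraction &gt; 0.3
</code></pre>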
2014-05-31 22:23:23.443000+00:00
2016-11-23 14:20:51.507000+00:00
2016-11-23 14:20:51.507000+00:00
null
23,898,358
<p>I am curious which technologies and algorithms the large companies use. I only found that Microsoft uses the PhotoDNA technology, but that is responsible only for how photos are compared. <strong>I am also interested in how they automatically detect pornographic images.</strong> </p> <p>For example, do they use any of these methods: Skin Detection, ROI Detection, Bag-of-Visual-Words?</p>
2014-05-27 20:30:24.583000+00:00
2016-11-23 14:20:51.507000+00:00
null
machine-learning|computer-vision
['https://courses.cs.washington.edu/courses/cse455/10au/notes/forsyth.pdf', 'http://www.cs.hmc.edu/~fleck/naked-skin.html', 'http://iseclab.org/people/cplatzer/papers/sfcs05-platzer.pdf', 'http://arxiv.org/abs/1402.5792', 'https://stackoverflow.com/questions/713247/what-is-the-best-way-to-programatically-detect-porn-images', 'http://blog.clarifai.com/what-convolutional-neural-networks-see-at-when-they-see-nudity/', 'https://www.slideshare.net/mobile/RyanCompton1/what-convnets-look-at-when-they-look-at-nudity']
7
44,893,937
<p>This would be problematic, regardless of what framework you did this with. Specifically, we have from the <a href="https://arxiv.org/pdf/1412.6980.pdf" rel="nofollow noreferrer">ADAM paper</a> the relevant lines:</p> <pre><code>g_t = d Cost / d weights v_t = beta2 * v_{t-1} + (1 - beta2) g_t^2 </code></pre> <p>Now, if you were to include v_t into Cost, this would be an implicit equation:</p> <pre><code>g_t = d Cross Entropy / d weights + d (v_t*some_function) / d weights v_t = beta2 * v_{t-1} + (1 - beta2) g_t^2 </code></pre> <p>Notice how v_t appears in both equations. We can expand it as such for greater clarity</p> <pre><code>v_t = beta2 * v_{t-1} + (1 - beta2) [d Cross Entropy / d weights + d (v_t*some_function) / d weights]^2 </code></pre> <p>You could attempt to solve this exactly, but in doing so you would have to use some form of implicit solver, which will be very computationally costly. One way would be <a href="https://en.wikipedia.org/wiki/Fixed-point_iteration" rel="nofollow noreferrer">fixed point iteration</a>.</p>
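<p>For intuition, here is a tiny standalone sketch of fixed point iteration on a scalar toy problem (not the actual Adam/Keras computation, where v_t and the gradient are tensors and the function is far more expensive): we look for a v satisfying v = f(v) by repeatedly applying f until the value stops changing.</p> <pre><code>import numpy as np

# Toy implicit equation: v = 0.9 * v_prev + 0.1 * g(v)**2,
# where the "gradient" g itself depends on v (as it would if v_t entered the cost).
v_prev = 0.5

def g(v):
    return 1.0 + 0.3 * v      # made-up dependence of the gradient on v

def f(v):
    return 0.9 * v_prev + 0.1 * g(v) ** 2

v = 0.0                        # initial guess
for _ in range(50):
    v_new = f(v)
    if abs(v_new - v) &lt; 1e-10: # stop once the update no longer changes v
        break
    v = v_new

print("fixed point:", v, "residual:", abs(v - f(v)))
</code></pre>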
2017-07-03 21:14:44.650000+00:00
2017-07-03 21:14:44.650000+00:00
null
null
44,893,395
<p>I'm currently trying to take Adam's second order moment term, v_t, and use that as an additional term in my cost function. How can I implement something like this:</p> <pre><code>Cost = Cross Entropy + v_t*some_function(weights) </code></pre> <p>Can this be accomplished within python? Or do I have to write my own C++ code to accomplish this? Also is this easily accomplished in a framework like Keras? Here's the code for the cost function that I'm trying to add into keras:</p> <pre><code>def my_loss(y_pred, y_true, current_weights, v_t): normal_loss=K.categorial_cross_entropy(y_pred,y_true) additional_term=K.dot(K.square(current_weights - K.some_function(current_weights)), v_t) return normal_loss + additional_term </code></pre>
2017-07-03 20:22:11.193000+00:00
2017-07-07 00:31:18.670000+00:00
2017-07-07 00:31:18.670000+00:00
python|tensorflow|deep-learning|keras|mathematical-optimization
['https://arxiv.org/pdf/1412.6980.pdf', 'https://en.wikipedia.org/wiki/Fixed-point_iteration']
2
44,641,581
<p>Your input is unusual as it is a binary code. I don't know whether the model will work well.</p> <p>First of all, you need to add start and end marks to your input and output, which indicate the boundaries. Then design the module for each time step, including how to use the hidden state. You could try simple GRU/LSTM networks such as the following.</p> <p><a href="https://i.stack.imgur.com/WPPvb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WPPvb.png" alt="enter image description here"></a></p> <p>In more detail, you could use the following <strong>Encoder</strong></p> <p><a href="https://i.stack.imgur.com/0ZQMk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0ZQMk.png" alt="enter image description here"></a></p> <p>and <strong>Decoder</strong></p> <p><a href="https://i.stack.imgur.com/as2bM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/as2bM.png" alt="enter image description here"></a></p> <p>In addition, you could take a look at the <strong>attention mechanism</strong> in the paper <a href="https://arxiv.org/pdf/1409.0473.pdf" rel="nofollow noreferrer">Neural Machine Translation by Jointly Learning to Align and Translate</a>. Its structure is as follows.</p> <p><a href="https://i.stack.imgur.com/xaf2o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xaf2o.png" alt="enter image description here"></a></p> <p>In more detail:</p> <p><a href="https://i.stack.imgur.com/zz0Q6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zz0Q6.png" alt="enter image description here"></a></p> <p>Though you are using Keras, I think it will be helpful to read PyTorch code as it is straightforward and easy to understand; see the <a href="http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html" rel="nofollow noreferrer">PyTorch seq2seq tutorial</a>.</p>
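<p>Since the question uses Keras, here is a minimal, hypothetical encoder-decoder sketch in that spirit (the latent dimension, token count, and names are made up; it omits attention and the start/end-mark preprocessing discussed above).</p> <pre><code>from keras.models import Model
from keras.layers import Input, LSTM, Dense

latent_dim = 128
input_dim = 1          # one 0/1 value per encoder time step (2048 steps)
num_tokens = 30        # assumed output alphabet size incl. start/end marks

# Encoder: read the binary sequence and keep only its final states.
encoder_inputs = Input(shape=(None, input_dim))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: predict the character sequence, conditioned on the encoder states.
decoder_inputs = Input(shape=(None, num_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = Dense(num_tokens, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
model.summary()
# Training would use teacher forcing: decoder_inputs is the target sequence
# shifted by one time step, as in the standard Keras seq2seq example.
</code></pre>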
2017-06-19 23:48:04.960000+00:00
2017-06-22 01:44:09.220000+00:00
2017-06-22 01:44:09.220000+00:00
null
44,640,760
<p>I'm looking to build a sequence-to-sequence model that takes in a 2048-long vector of 1s and 0s (ex. [1,0,1,0,0,1,0,0,0,1,...,1] ) as my input and translating it to my known output of (a variable length) 1-20 long characters (ex. GBNMIRN, ILCEQZG, or FPSRABBRF).</p> <p>My goal is to create a model that can take in a new 2048-long vector of 1s and 0s and predict what the output sequence will look like.</p> <p>I've looked at some github repositories like <a href="https://github.com/llSourcell/seq2seq_model_live/blob/master/2-seq2seq-advanced.ipynb%20" rel="nofollow noreferrer">this</a> and <a href="https://github.com/hans/ipython-notebooks/blob/master/tf/TF%20tutorial.ipynb" rel="nofollow noreferrer">this</a>.</p> <p>but I'm not sure how to implement it with my problem. Are there any projects that have done something similar to this/how could I implement this with the seq2seq models or LSTMs currently out there? (python implementations)</p> <p>I am using the keras library in python. </p>
2017-06-19 22:11:14.923000+00:00
2017-06-22 01:44:09.220000+00:00
2017-06-21 20:49:14.063000+00:00
machine-learning|neural-network|deep-learning|lstm|recurrent-neural-network
['https://i.stack.imgur.com/WPPvb.png', 'https://i.stack.imgur.com/0ZQMk.png', 'https://i.stack.imgur.com/as2bM.png', 'https://arxiv.org/pdf/1409.0473.pdf', 'https://i.stack.imgur.com/xaf2o.png', 'https://i.stack.imgur.com/zz0Q6.png', 'http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html']
7
68,197,307
<blockquote> <p>However, refer [sic] to this answer, <a href="https://stackoverflow.com/a/29146718/7721525">https://stackoverflow.com/a/29146718/7721525</a>, some implementation of introselect runs <code>2lg(n)</code> partition_pivot</p> </blockquote> <p>As mentioned in the comments there, that implementation is <em>named</em> introselect, but it does not actually implement the <a href="https://en.wikipedia.org/wiki/Introselect" rel="nofollow noreferrer">introselect algorithm</a>.</p> <blockquote> <p>How to achieve a worst time <code>O(n)</code> introselect?</p> </blockquote> <p>You perform quickselect using either a constant number of partitions, or require that the partition is highly imbalanced at most a constant amount of times, and if not done within these restrictions you switch to the <a href="https://en.wikipedia.org/wiki/Median_of_medians" rel="nofollow noreferrer">median of medians</a> algorithm.</p> <p>Note that there's also the recent <a href="https://arxiv.org/abs/1606.00484" rel="nofollow noreferrer">QuickSelectAdaptive algorithm</a> by A. Alexandrescu which goes into detail how to implement deterministic linear-time selection.</p>
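<p>For illustration, here is a small, hypothetical Python sketch of the switching idea: quickselect with random pivots gets a fixed budget of badly unbalanced partitions, and once the budget is exhausted it falls back to a deterministic median-of-medians selection. It is written for clarity, not performance, and is not the QuickSelectAdaptive algorithm from the paper.</p> <pre><code>import random

def mom_select(a, k):
    # Deterministic O(n) selection (median of medians), used as the fallback.
    if len(a) &lt;= 10:
        return sorted(a)[k]
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2] for i in range(0, len(a), 5)]
    pivot = mom_select(medians, len(medians) // 2)
    lows   = [x for x in a if x &lt; pivot]
    highs  = [x for x in a if x &gt; pivot]
    pivots = [x for x in a if x == pivot]
    if k &lt; len(lows):
        return mom_select(lows, k)
    if k &lt; len(lows) + len(pivots):
        return pivot
    return mom_select(highs, k - len(lows) - len(pivots))

def introselect(a, k, budget=4):
    # Random-pivot quickselect; switch to median of medians after `budget`
    # highly imbalanced partitions (here: the larger side keeps more than 3/4).
    while True:
        if len(a) &lt;= 10:
            return sorted(a)[k]
        if budget == 0:
            return mom_select(a, k)
        pivot = random.choice(a)
        lows   = [x for x in a if x &lt; pivot]
        highs  = [x for x in a if x &gt; pivot]
        pivots = [x for x in a if x == pivot]
        if max(len(lows), len(highs)) &gt; 3 * len(a) // 4:
            budget -= 1
        if k &lt; len(lows):
            a = lows
        elif k &lt; len(lows) + len(pivots):
            return pivot
        else:
            a, k = highs, k - len(lows) - len(pivots)

data = [random.randint(0, 1000) for _ in range(200)]
assert introselect(data, 42) == sorted(data)[42]
</code></pre>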
2021-06-30 15:04:13.740000+00:00
2021-06-30 15:04:13.740000+00:00
null
null
68,196,730
<p>The wikipedia <a href="https://en.wikipedia.org/wiki/Introselect" rel="nofollow noreferrer">introselect</a> article says the worst-case time complexity is <code>O(n)</code>.</p> <p>However, according to this answer, <a href="https://stackoverflow.com/a/29146718/7721525">https://stackoverflow.com/a/29146718/7721525</a>, some implementations of introselect run <code>2lg(n)</code> iterations of <code>partition_pivot</code>, which gives a worst-case time complexity of <code>O(nlg(n))</code>. Whether you then fall back to the median-of-medians algorithm or heapsort, the time complexity is already <code>O(nlg(n))</code>.</p> <p>How can I achieve a worst-case <code>O(n)</code> introselect? If we use a constant <code>k</code> instead of <code>2lg(n)</code>, can we say the time is <code>O(n)</code>? Can you give me some practical choices of k (it must be a constant)?</p>
2021-06-30 14:29:15.740000+00:00
2021-06-30 17:18:42.743000+00:00
2021-06-30 17:18:42.743000+00:00
algorithm|selection-sort
['https://stackoverflow.com/a/29146718/7721525', 'https://en.wikipedia.org/wiki/Introselect', 'https://en.wikipedia.org/wiki/Median_of_medians', 'https://arxiv.org/abs/1606.00484']
4
67,956,393
<p>Using convolutions as FCs can be done (for example) with filters of spatial size (1,1) and with depth of the same size as the FC input size.</p> <p>The resulting feature map would be of the same size as the input feature map, but each pixel would be the output of an &quot;FC&quot; layer whose weights are the weights of the shared 1x1 conv filter.</p> <p>This kind of thing is used mainly for semantic segmentation, meaning classification per pixel. <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-net</a> is a good example if memory serves.</p> <hr /> <p>Also see <a href="https://stats.stackexchange.com/questions/194142/what-does-1x1-convolution-mean-in-a-neural-network">this</a>.<br /> Also note that 1x1 convolutions have <a href="https://machinelearningmastery.com/introduction-to-1x1-convolutions-to-reduce-the-complexity-of-convolutional-neural-networks/" rel="nofollow noreferrer">other uses</a> as well.<br /> <a href="https://paperswithcode.com/task/semantic-segmentation" rel="nofollow noreferrer">paperswithcode</a> lists semantic segmentation models; some of the nets there probably use this trick.</p>
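<p>A quick way to convince yourself of the equivalence is to compare a fully connected layer with a 1x1 convolution that shares its weights. This is a small PyTorch sketch with arbitrary sizes, not taken from the question's code.</p> <pre><code>import torch
import torch.nn as nn

in_ch, out_ch = 64, 10
fc   = nn.Linear(in_ch, out_ch)
conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)

# Give the 1x1 convolution exactly the FC weights.
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(out_ch, in_ch, 1, 1))
    conv.bias.copy_(fc.bias)

x = torch.randn(2, in_ch, 5, 7)                 # a feature map (N, C, H, W)

out_conv = conv(x)                              # (2, 10, 5, 7): one "FC output" per pixel
out_fc = fc(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)  # apply the FC at every spatial position

print(torch.allclose(out_conv, out_fc, atol=1e-6))      # True
</code></pre>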
2021-06-13 08:43:12.303000+00:00
2021-06-13 08:43:12.303000+00:00
null
null
67,525,656
<p>I was reading a decent paper <a href="https://paperswithcode.com/paper/from-open-set-to-closed-set-counting-objects" rel="nofollow noreferrer">S-DCNet</a> and I fell upon a section (page3,table1,classifier) where a convolution layer has been used on the feature map in order to produce a binary classification output as part of an internal process. Since I am a noob and when someone talks to me about classification I automatically make a synapse relating to FCs combined with softmax, I started wondering ... Is this a possible thing to do? Can indeed a convolutional layer be used to classify a binary outcome? The whole concept triggered my imagination so much that I insist on getting answers...</p> <p><strong>Honestly, how does this actually work? What is the difference between using a convolution filter instead of a fully connected layer for classification purposes?</strong></p> <p>Edit (Uncertain answer on how does it work): <em>I asked a colleague and he told me that using a filter of the same shape as the length-width shape of the feature map at the current stage, may lead to a learnable binary output (considering that you also reduce the #channels of the feature map to a single channel). But I still don't understand the motivations behind such a technique ..</em></p>
2021-05-13 20:15:04.313000+00:00
2021-06-13 08:50:32.627000+00:00
2021-06-13 08:07:54.943000+00:00
python|tensorflow|machine-learning|pytorch|conv-neural-network
['https://arxiv.org/abs/1505.04597', 'https://stats.stackexchange.com/questions/194142/what-does-1x1-convolution-mean-in-a-neural-network', 'https://machinelearningmastery.com/introduction-to-1x1-convolutions-to-reduce-the-complexity-of-convolutional-neural-networks/', 'https://paperswithcode.com/task/semantic-segmentation']
4
48,080,033
<p>One simple way to go from word-vectors, to a single vector for a range-of-text, is to average the vectors together. And, that often works well-enough for some tasks. </p> <p>However, that's not how the <code>Doc2Vec</code> class in <code>gensim</code> does it. That class implements the <a href="https://arxiv.org/abs/1405.4053" rel="noreferrer">'Paragraph Vectors' technique</a>, where separate document-vectors are trained in a manner analogous to word-vectors. </p> <p>The doc-vectors participate in training a bit like a floating synthetic word, involved in every sliding window/target-word-prediction. They're <em>not</em> composed-up or concatenated-from preexisting word-vectors, though in some modes they may be simultaneously trained alongside word-vectors. (However, the fast and often top-performing PV-DBOW mode, enabled in gensim with the parameter <code>dm=0</code>, doesn't train or use input-word-vectors at all. It just trains doc-vectors that are good for predicting the words in each text-example.)</p> <p>Since you've mentioned multiple libraries (both Spark MLib and gensim), but you've not shown your code, it's not certain exactly what <em>your</em> existing process is doing.</p>
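<p>For comparison, here is a tiny gensim sketch (assuming the gensim 4.x API; corpus and parameters are made up for illustration) showing both options: simple averaging of word vectors, and training actual paragraph vectors with the PV-DBOW mode (<code>dm=0</code>).</p> <pre><code>import numpy as np
from gensim.models import Word2Vec
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

texts = [["the", "cat", "sat", "on", "the", "mat"],
         ["dogs", "and", "cats", "are", "friendly", "pets"],
         ["stack", "overflow", "questions", "about", "word", "vectors"]]

# Option 1: average the word vectors of each document.
w2v = Word2Vec(texts, vector_size=50, window=5, min_count=1, epochs=50)
doc_avg = [np.mean([w2v.wv[w] for w in doc], axis=0) for doc in texts]

# Option 2: train real paragraph vectors (PV-DBOW).
tagged = [TaggedDocument(words=doc, tags=[i]) for i, doc in enumerate(texts)]
d2v = Doc2Vec(tagged, vector_size=50, dm=0, min_count=1, epochs=100)
doc_pv = [d2v.dv[i] for i in range(len(texts))]

print(doc_avg[0].shape, doc_pv[0].shape)   # both 50-dimensional
</code></pre> <p>Note that Spark MLlib's Word2Vec averaging behaves like option 1, while the gensim Doc2Vec class implements option 2.</p>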
2018-01-03 15:06:43.807000+00:00
2018-01-03 15:06:43.807000+00:00
null
null
48,064,378
<p>I have a pyspark dataframe with a corpus of ~300k unique rows each with a "doc" that contains a few sentences of text in each.</p> <p>After processing, I have a 200 dimension vectorized representation of each row/doc. My NLP Process: </p> <ol> <li>Remove Punctuation with regex udf </li> <li>Word Stemming with nltk snowball udf)</li> <li>Pyspark Tokenizer</li> <li>Word2Vec (ml.feature.Word2Vec, vectorSize=200, windowSize=5)</li> </ol> <p>I understand how this implementation uses the skipgram model to create embeddings for each word based on the full corpus used. My question is: <strong>How does this implementation go from a vector for each word in the corpus to a vector for each document/row?</strong></p> <p>Is it the same processes as in the gensim doc2vec implementation where it simply concatenates the word vectors in each doc together?: <a href="https://stackoverflow.com/questions/40413866/how-does-gensim-calculate-doc2vec-paragraph-vectors">How does gensim calculate doc2vec paragraph vectors</a>. If so, how does it cut the vector down to the specified size of 200 (Does it use just the first 200 words? Average?)? </p> <p>I was unable to find the information from the sourcecode: <a href="https://spark.apache.org/docs/2.2.0/api/python/_modules/pyspark/ml/feature.html#Word2Vec" rel="noreferrer">https://spark.apache.org/docs/2.2.0/api/python/_modules/pyspark/ml/feature.html#Word2Vec</a> </p> <p>Any help or reference material to look at is super appreciated!</p>
2018-01-02 16:20:45.540000+00:00
2019-03-28 02:48:39.150000+00:00
null
apache-spark|nlp|pyspark|word2vec|doc2vec
['https://arxiv.org/abs/1405.4053']
1
6,736,291
<p><a href="http://arxiv.org/abs/0804.1448" rel="nofollow">Fast k Nearest Neighbor Search using GPU</a></p> <p>I haven't tested, used it, nothing. I just googled and posted the first link I found.</p>
2011-07-18 16:41:42.293000+00:00
2011-07-18 16:41:42.293000+00:00
null
null
6,736,072
<p>Given a collection of thousands of points in 3D, I need to get the list of neighbours for each particle that fall inside some cutoff value (in terms of euclidean distance), and if possible, sorted from nearest fo farthest. </p> <p>Which is the fastest GPU algorithm for this purpose in the CUDA or OpenCL languages? </p>
2011-07-18 16:26:03.583000+00:00
2011-07-18 21:59:50.500000+00:00
2011-07-18 16:31:09.063000+00:00
cuda|opencl|gpu|nearest-neighbor
['http://arxiv.org/abs/0804.1448']
1
21,860,847
<p>I'm sorry to post a link-only answer, but if you don't mind reading research papers, the definitive reference on string matching algorithms seems to me to be <a href="http://www-igm.univ-mlv.fr/~lecroq/string/" rel="nofollow">http://www-igm.univ-mlv.fr/~lecroq/string/</a> and the following <a href="http://arxiv.org/abs/1012.2547" rel="nofollow">research paper</a> by Simone Faro and Thierry Lecroq, where they compared the relative performance of no fewer than 85 different string matching algorithms. I'm pretty sure there is one fitting your needs among them. </p>
2014-02-18 17:18:32.373000+00:00
2014-02-18 17:18:32.373000+00:00
null
null
21,845,819
<p>I have about 2 million strings and I need to search for each of them over 1 TB of text data. Searching for each of them one by one is not the best solution, so I was thinking about a better way: creating a data structure such as a trie for all of the strings. In other words, a trie in which each node is a word. I wanted to ask, is there any good algorithm, data structure or library (in C++) for this purpose?</p> <hr> <p>Let me describe the problem in more detail.</p> <p>For instance, I have these strings: s1- "I love you" s2- "How are you" s3- "What's up dude"</p> <p>And I have many text data like: t1- "Hi, my name is Omid and I love computers. How are you guys?" t2- "Your every wish will be done, they tell me..." t3 t4 . . . t10000</p> <p>Then I want to go over each text and search for each of the strings in it. In the end, for this example I would just say: t1 contains s1 and nothing else. I am looking for an efficient way to search for the strings, not foolishly searching for each of them each time.</p>
2014-02-18 06:21:31.047000+00:00
2014-03-17 13:56:36.337000+00:00
2014-03-17 13:56:36.337000+00:00
c++|string|search|trie|large-text
['http://www-igm.univ-mlv.fr/~lecroq/string/', 'http://arxiv.org/abs/1012.2547']
2
61,591,166
<p>Everyone leans towards a binary classification approach. This may be a solution, but it removes the fundamental design objective, which may be to solve it with a one-class classifier. Depending on what you want to achieve with a one-class classifier, it can be an ill-conditioned problem. In my experience, your last point often applies.</p> <p>As mentioned in <a href="https://arxiv.org/pdf/1801.05365.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1801.05365.pdf</a>:</p> <blockquote> <p>In the classical multiple-class classification, features are learned with the objective of maximizing inter-class distances between classes and minimizing intra-class variances within classes [2]. However, in the absence of multiple classes such a discriminative approach is not possible.</p> </blockquote> <p>It yields a trivial solution. The reason why is explained a bit later:</p> <blockquote> <p>The reason why this approach ends up yielding a trivial solution is due to the absence of a regularizing term in the loss function that takes into account the discriminative ability of the network. For example, since all class labels are identical, a zero loss can be obtained by making all weights equal to zero. It is true that this is a valid solution in the closed world where only normal chair objects exist. But such a network has zero discriminative ability when abnormal chair objects appear.</p> </blockquote> <p>Note that the description here is made with regard to attempting to use one-class classifiers to solve for different classes. One other useful objective of one-class classifiers is to detect anomalies in e.g. factory operation signals. This is what I am currently working on. In such cases, knowledge regarding the various damage states is very hard to obtain. It would be ridiculous to break a machine just to see how it operates when broken so that a decent multinomial classifier can be made. One solution to the problem is described in the following: <a href="https://arxiv.org/abs/1912.12502" rel="nofollow noreferrer">https://arxiv.org/abs/1912.12502</a>. Note that in this paper, because of the stochastic similarity of the classes, the discriminative capacity across classes is achieved as well.</p> <p>I found that by following the guidelines described and, especially, removing the last activation function, I got my one-class classifier working and the accuracy did not give 0 values. Note that in your case you may also want to remove the binary cross-entropy, since that requires binary inputs to make sense (use RMSE instead).</p> <p>This method should also work for your case. In that case the network would be capable of determining which photos are numerically further away from the training photo class. In my experience, however, it is likely still a hard problem to solve due to the variance contained in the pictures, e.g. different backgrounds, angles, etc. To that end, the problem I am solving is much easier, as there is much more similarity between operating conditions of the same condition stage. To put that into an analogy, in my case the training class is more like the same picture with different noise levels and only slight movements of objects.</p>
2020-05-04 11:28:54.047000+00:00
2020-08-20 08:39:12.830000+00:00
2020-08-20 08:39:12.830000+00:00
null
57,309,958
<h2>Intro and questions:</h2> <p>I'm trying to make a one-class classification convolutional neural network. By one-class I mean I have one image dataset containing about 200 images of Nicolas Cage. By one class classification I mean look at an image and predict 1 if Nicolas Cage is contained in this image and predict 0 Nicolas Cage is not contained in the image.</p> <p>I’m a definitely a machine learning/deep learning beginner so I was hoping someone with some more knowledge and experience could help guide me in the right direction. Here are my issues and questions right now. My network is performing terribly. I’ve tried making a few predictions with images of Nicolas Cage and it predicts 0 every single time.</p> <ul> <li>Should I collect more data for this to work? I’m performing data augmentations with a small dataset of 207 images. I was hoping the data augmentations would help the network generalize but I think I was wrong</li> <li>Should I try tweaking the amount of epochs, step per epoch, val steps, or the optimization algorithm I’m using for gradient descent? I’m using Adam but I was thinking maybe I should try stochastic gradient descent with different learning rates?</li> <li>Should I add more convolution or dense layers to help my network better generalize and learn?</li> <li>Should I just stop trying to do one class classification and go to normal binary classification because using a neural network with one class classification is not very feasible? I saw this post here <a href="https://stackoverflow.com/questions/51874969/one-class-classification-with-keras">one class classification with keras</a> and it seems like the OP ended up using an isolation forest. So I guess I could try using some convolutional layers and feed into an isolation forest or an SVM? I could not find a lot of info or tutorials about people using isolation forests with one-class image classification.</li> </ul> <hr /> <h2>Dataset:</h2> <p>Here is a screenshot of what my dataset looks like that I’ve collected use a package called google-images-download. It contains about 200 images of Nicolas Cage. I did two searches to download 500 images. After manually cleaning the images I was down to 200 quality pictures of Nic Cage. 
<a href="https://i.stack.imgur.com/c3KKo.png" rel="noreferrer">Dataset</a></p> <hr /> <h2>The imports and model:</h2> <pre><code>from keras.models import Sequential from keras.layers import Conv2D from keras.layers import MaxPooling2D from keras.layers import Flatten from keras.layers import Dense from keras.layers import Dropout from keras.layers import Activation classifier = Sequential() classifier.add(Conv2D(32, (3, 3), input_shape = (200, 200, 3), activation = 'relu')) classifier.add(MaxPooling2D(pool_size = (2, 2))) classifier.add(Conv2D(32, (3, 3), activation = 'relu')) classifier.add(MaxPooling2D(pool_size=(2, 2))) classifier.add(Conv2D(64, (3, 3), activation = 'relu')) classifier.add(MaxPooling2D(pool_size=(2, 2))) classifier.add(Flatten()) classifier.add(Dense(units = 64, activation = 'relu')) classifier.add(Dropout(0.5)) # output layer classifier.add(Dense(1)) classifier.add(Activation('sigmoid')) </code></pre> <hr /> <h2>Compiling and image augmentation</h2> <pre><code>classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy']) from keras.preprocessing.image import ImageDataGenerator train_datagen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True) test_datagen = ImageDataGenerator(rescale = 1./255) training_set = train_datagen.flow_from_directory('/Users/ginja/Desktop/Code/Nic_Cage/Small_Dataset/train/', target_size = (200, 200), batch_size = 32, class_mode = &quot;binary&quot;) test_set = test_datagen.flow_from_directory('/Users/ginja/Desktop/Code/Nic_Cage/Small_Dataset/test/', target_size = (200, 200), batch_size = 32, class_mode = &quot;binary&quot;) </code></pre> <hr /> <h3>Fitting the model</h3> <pre><code>history = classifier.fit_generator(training_set, steps_per_epoch = 1000, epochs = 25, validation_data = test_set, validation_steps = 500) Epoch 1/25 1000/1000 [==============================] - 1395s 1s/step - loss: 0.0012 - acc: 0.9994 - val_loss: 1.0000e-07 - val_acc: 1.0000 Epoch 2/25 1000/1000 [==============================] - 1350s 1s/step - loss: 1.0000e-07 - acc: 1.0000 - val_loss: 1.0000e-07 - val_acc: 1.0000 Epoch 3/25 1000/1000 [==============================] - 1398s 1s/step - loss: 1.0000e-07 - acc: 1.0000 - val_loss: 1.0000e-07 - val_acc: 1.0000 Epoch 4/25 1000/1000 [==============================] - 1342s 1s/step - loss: 1.0000e-07 - acc: 1.0000 - val_loss: 1.0000e-07 - val_acc: 1.0000 Epoch 5/25 1000/1000 [==============================] - 1327s 1s/step - loss: 1.0000e-07 - acc: 1.0000 - val_loss: 1.0000e-07 - val_acc: 1.0000 Epoch 6/25 1000/1000 [==============================] - 1329s 1s/step - loss: 1.0000e-07 - acc: 1.0000 - val_loss: 1.0000e-07 - val_acc: 1.0000 . . . 
</code></pre> <p>The model looks like it converges to a loss value of 1.0000e-07 as this doesn't change for the rest of the epochs</p> <hr /> <h2>Training and Test accuracy plotted</h2> <p><a href="https://i.stack.imgur.com/d5smF.png" rel="noreferrer">Training and Test accuracy</a></p> <h2>Training and Test loss plotted</h2> <p><a href="https://i.stack.imgur.com/LyotR.png" rel="noreferrer">Training and Test loss</a></p> <hr /> <h2>Making the prediction</h2> <pre><code>from keras.preprocessing import image import numpy as np test_image = image.load_img('/Users/ginja/Desktop/Code/Nic_Cage/nic_cage_predict_1.png', target_size = (200, 200)) #test_image.show() test_image = image.img_to_array(test_image) test_image = np.expand_dims(test_image, axis = 0) result = classifier.predict(test_image) training_set.class_indices if result[0][0] == 1: prediction = 'This is Nicolas Cage' else: prediction = 'This is not Nicolas Cage' print(prediction) </code></pre> <p>We get 'This is not Nicolas Cage' every single time for the prediction. I appreciate anyone that takes the time to even read through this and I appreciate any help on any part of this.</p>
2019-08-01 13:19:05.350000+00:00
2020-08-20 08:39:12.830000+00:00
2020-06-20 09:12:55.060000+00:00
python|keras|deep-learning|classification|conv-neural-network
['https://arxiv.org/pdf/1801.05365.pdf', 'https://arxiv.org/abs/1912.12502']
2
60,587,106
<p>Try out models based on BERT, e.g., </p> <blockquote> <p>MoverScore: <a href="https://pypi.org/project/moverscore/" rel="nofollow noreferrer">https://pypi.org/project/moverscore/</a></p> </blockquote> <p>which is very good for capturing the semantic similarity of two sentences. Paper reference: <a href="https://arxiv.org/abs/1909.02622" rel="nofollow noreferrer">https://arxiv.org/abs/1909.02622</a></p> <p>Also you may want to look for tasks such as "STS" (semantic textual similarity).</p>
2020-03-08 11:41:44.483000+00:00
2020-03-08 11:41:44.483000+00:00
null
null
23,117,979
<p>I am looking for a way to measure the semantic distance between two sentences. Suppose we have the following sentences:</p> <pre><code>(S1) The beautiful cherry blossoms in Japan. (S2) The beautiful Japan. </code></pre> <p>S2 is created from S1 by eliminating the words "cherry", "blossoms" and "in". I want to define a function that gives a high distance between S1 and S2. The reason for this is that they do have significantly different meaning, since beautiful modifies cherry blossoms and not Japan.</p>
2014-04-16 19:02:04.617000+00:00
2020-04-07 15:52:01.967000+00:00
2014-04-16 19:07:02.577000+00:00
nlp|semantics|linguistics|semantic-analysis
['https://pypi.org/project/moverscore/', 'https://arxiv.org/abs/1909.02622']
2
148,069
<p>This is a good application of a <strong>min-queue</strong> - a queue (First-In, First-Out = FIFO) which can simultaneously keep track of the minimum element it contains, with amortized constant-time updates. Of course, a max-queue is basically the same thing.</p> <p>Once you have this data structure in place, you can consider CurrentMax (of the past 1000 elements) minus CurrentMin, store that as the BestSoFar, and then push a new value and pop the old value, and check again. In this way, keep updating BestSoFar until the final value is the solution to your question. Each single step takes amortized constant time, so the whole thing is linear, and the implementation I know of has a good scalar constant (it's fast).</p> <p>I don't know of any documentation on min-queue's - this is a data structure I came up with in collaboration with a coworker. You can implement it by internally tracking a binary tree of the least elements within each contiguous sub-sequence of your data. It simplifies the problem that you'll only pop data from one end of the structure.</p> <p>If you're interested in more details, I can try to provide them. I was thinking of writing this data structure up as a paper for arxiv. Also note that Tarjan and others previously arrived at a more powerful min-deque structure that would work here, but the implementation is much more complex. You can <a href="http://www.google.com/search?q=mindeque" rel="noreferrer">google for "mindeque"</a> to read about Tarjan et al.'s work.</p>
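<p>For reference, here is a short Python sketch of the standard way to get the same amortized O(1) updates with monotonic deques (the textbook sliding-window min/max trick, not the tree-based structure described above): it computes the largest (max - min) over every window of 1000 consecutive samples in linear time.</p> <pre><code>from collections import deque
import random

def max_window_range(data, k=1000):
    # maxq holds indices of decreasing values, minq indices of increasing values,
    # so the window max/min are always at the front of each deque.
    maxq, minq = deque(), deque()
    best = float("-inf")
    for i, x in enumerate(data):
        while maxq and data[maxq[-1]] &lt;= x:
            maxq.pop()
        maxq.append(i)
        while minq and data[minq[-1]] &gt;= x:
            minq.pop()
        minq.append(i)
        if maxq[0] &lt;= i - k:          # drop indices that fell out of the window
            maxq.popleft()
        if minq[0] &lt;= i - k:
            minq.popleft()
        if i &gt;= k - 1:                # window [i-k+1, i] is complete
            best = max(best, data[maxq[0]] - data[minq[0]])
    return best

data = [random.random() for _ in range(100000)]
print(max_window_range(data, 1000))
</code></pre> <p>Each index is pushed and popped at most once per deque, which is where the amortized constant-time update comes from.</p>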
2008-09-29 09:21:26.937000+00:00
2008-09-29 09:21:26.937000+00:00
null
null
148,003
<p>I have an array of a few million numbers.</p> <pre><code>double* const data = new double[3600000]; </code></pre> <p>I need to iterate through the array and find the range (the largest value in the array minus the smallest value). However, there is a catch. I only want to find the range where the smallest and largest values are within 1,000 samples of each other.</p> <p>So I need to find the maximum of: range(data + 0, data + 1000), range(data + 1, data + 1001), range(data + 2, data + 1002), ...., range(data + 3599000, data + 3600000).</p> <p>I hope that makes sense. Basically I could do it like above, but I'm looking for a more efficient algorithm if one exists. I think the above algorithm is O(n), but I feel that it's possible to optimize. An idea I'm playing with is to keep track of the most recent maximum and minimum and how far back they are, then only backtrack when necessary.</p> <p>I'll be coding this in C++, but a nice algorithm in pseudo code would be just fine. Also, if this number I'm trying to find has a name, I'd love to know what it is.</p> <p>Thanks.</p>
2008-09-29 08:55:56.423000+00:00
2020-02-16 13:33:54.850000+00:00
null
c++|algorithm|statistics
['http://www.google.com/search?q=mindeque']
1
63,514,374
<p>If your action space is continuous, entropy can be negative, because differential entropy can be <a href="https://en.wikipedia.org/wiki/Differential_entropy" rel="nofollow noreferrer">negative</a>.</p> <p>Ideally, you want the entropy to be decreasing slowly and smoothly over the course of training, as the agent trades exploration in favor of exploitation.</p> <p>With regards to the vf_* metrics, it's helpful to know what they mean.</p> <p>In policy gradient methods, it can be helpful to reduce the variance of rollout estimates by using a value function--parameterized by a neural network--to estimate rewards that are farther in the future (check the <a href="https://arxiv.org/pdf/1707.06347.pdf" rel="nofollow noreferrer">PPO paper</a> for some math on page 5).</p> <p><strong>vf_explained_var</strong> is the explained variation of those future rewards through the use of the value function. You want this to be higher if possible, and it tops out at 1; however, if there is randomness in your environment it's unlikely for this to actually hit 1. <strong>vf_loss</strong> is the error that your value function is incurring; ideally this would decrease to 0, though this isn't always possible (due to randomness). <strong>kl</strong> is the difference between your old strategy and your new strategy at each time step: you want this to smoothly decrease as you train to indicate convergence.</p>
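<p>For concreteness, vf_explained_var is typically computed as shown in the small NumPy illustration below (made-up numbers, following the usual 1 - Var(returns - predictions) / Var(returns) definition); 1 means the value function predicts the returns perfectly, 0 means it does no better than a constant, and negative values mean it is worse than a constant.</p> <pre><code>import numpy as np

def explained_variance(y_pred, y_true):
    var_y = np.var(y_true)
    if var_y == 0:
        return np.nan                # undefined when the returns are constant
    return 1.0 - np.var(y_true - y_pred) / var_y

returns = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # discounted returns from rollouts
good_vf = returns + np.random.normal(0, 0.1, 5)     # value function that tracks them well
bad_vf  = np.random.normal(0, 1.0, 5)               # value function that ignores them

print(explained_variance(good_vf, returns))  # close to 1
print(explained_variance(bad_vf, returns))   # much lower, possibly negative
</code></pre>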
2020-08-20 23:16:56.807000+00:00
2020-08-20 23:16:56.807000+00:00
null
null
60,667,933
<p>I am beginner in Deep RL and would like to train my own gym environment in RLLIB with the PPO algorithm. However, I am having some difficulties seeing if my hyperparameter settings are being successful. Apart from the obvious episode_reward_mean metric which should rise we have many other plots.</p> <p>I am especially interested in how entropy should evolve during a successful training. In my case it looks like this:</p> <p><a href="https://i.stack.imgur.com/mOgou.jpg" rel="nofollow noreferrer">entropy.jpg</a></p> <p>It is usually dropping below 0 and then converging. I understand that entropy as part of the loss function is enforcing exploration and can therefore speedup learning. But why is it getting negative? Shouldn't it be always greater or equal to 0?</p> <p>What are other characteristics of a successful training (vf_explained_var, vf_loss, kl,...)?</p>
2020-03-13 09:30:49.893000+00:00
2020-08-20 23:16:56.807000+00:00
null
tensorflow|reinforcement-learning|rllib
['https://en.wikipedia.org/wiki/Differential_entropy', 'https://arxiv.org/pdf/1707.06347.pdf']
2
68,995,080
<p>Although YOLOv4 can detect people in an image/video stream, I think it might be too general in your case. When a person leaves the frame and comes back, ideally the model should remember seeing that person before.</p> <p>One way to tackle this is to train on images of the people you want to detect.</p> <p>E.g., in a system like yours, you could take multiple images of the people you want to track from different angles and label them using their unique identifiers. Afterwards you could train the model using this data (for your downstream task). This will ideally give more specific results for detecting and tracking people with their unique identifiers, as opposed to the general people detection you get when using YOLOv4 as is.</p> <p>That said, I understand that taking lots of images of people may not be practical in certain scenarios. In that case you may want to look at techniques that produce accurate results with minimal data, such as domain adaptation (<a href="https://arxiv.org/abs/1812.11806" rel="nofollow noreferrer">https://arxiv.org/abs/1812.11806</a>). However, in an application for tracking and detecting people, I'm assuming you want minimal misclassifications; hence you could say it's always a tradeoff.</p> <p>You can find out more about dealing with lack of data in this article: (<a href="https://www.kdnuggets.com/2019/06/5-ways-lack-data-machine-learning.html" rel="nofollow noreferrer">https://www.kdnuggets.com/2019/06/5-ways-lack-data-machine-learning.html</a>)</p> <p>However, I think this is a better place to start for a re-identification model: (<a href="https://github.com/KaiyangZhou/deep-person-reid" rel="nofollow noreferrer">https://github.com/KaiyangZhou/deep-person-reid</a>) It has ample documentation to get you started.</p>
2021-08-31 08:11:23.930000+00:00
2021-08-31 08:16:50.410000+00:00
2021-08-31 08:16:50.410000+00:00
null
68,992,986
<p>I am currently working on a project where I want to build a model which can detect and track people with a unique ID. The main issue is when a person leaves the frame and comes back after some time. Currently, I am working with yolov4 and Deepsort to detect and track. But it is failing in this situation.</p> <p>Please suggest some approach where we can do detection, reidentification and tracking of people or cars or any other object.</p> <p>Thank you :)</p>
2021-08-31 04:44:08.637000+00:00
2021-08-31 08:16:50.410000+00:00
null
deep-learning|object-detection|tracking|yolo
['https://arxiv.org/abs/1812.11806', 'https://www.kdnuggets.com/2019/06/5-ways-lack-data-machine-learning.html', 'https://github.com/KaiyangZhou/deep-person-reid']
3
49,952,793
<p>First, when you say <strong>transform</strong>, what do you mean exactly? Is it something like a transform matrix, which can be represented as a set of parameters? Or is it actually very complicated and can't be easily expressed by a few parameters?</p> <p>For the former case, e.g. a perspective transform matrix, you can directly regress these parameters. In other words, you don't have to use an image as your network target. Instead, you can train a CNN to predict all matrix elements.</p> <p>For the latter case, the short answer is YES. You should read further material on specialized networks like <a href="https://arxiv.org/pdf/1506.02025.pdf" rel="nofollow noreferrer"><strong>spatial transformer networks</strong></a>. Note, I am not saying spatial transformer networks will solve your problem, but they point in the right direction, that is:</p> <ol> <li>learn all transform-related parameters in a network</li> <li>transform your input image in a differentiable way (this might be a custom layer)</li> <li>compute the reconstruction loss (e.g. MSE) between the transformed image and your target.</li> </ol>
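<p>For the simplest version of the image-as-label setup (without the differentiable transform layer), here is a minimal Keras sketch with made-up shapes and random data; the only requirement is that the model's output shape matches the target images:</p> <pre><code>import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy image-to-image model (shapes are made up): 64x64x1 in, 64x64x1 out.
inputs = keras.Input(shape=(64, 64, 1))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

# The "label" is simply the expected output image; MSE acts as the
# reconstruction loss between predicted and target images.
model.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(32, 64, 64, 1)   # input images
y_train = np.random.rand(32, 64, 64, 1)   # target (transformed) images
model.fit(x_train, y_train, epochs=2, batch_size=8)
</code></pre>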
2018-04-21 05:50:40.850000+00:00
2018-04-21 05:50:40.850000+00:00
null
null
49,952,231
<p>I am new to the keras and machine learning. My research problem could definitely benefit from using convolution neural networks (cnn). I am trying to build a cnn for certain image transformations specific to my research problem. So far most of the cnn examples i have come across are some form of classification. For the classification examples i understand the basic operation for a cnn, given an input image the network gives out a number. This number is compared with the label ( associated with the input image) and then the error from that is back-propagated to the network to adjust the weights for the next iteration. For my transformation problem, the output of the network is an image and the "label" which is the expected output is also an image. This is where i am stuck. How to use an image as label, and what modifications i need to do in model.fit () to use image as a label. </p> <p>Thank you and any guidance in this matter would be very much appreciated. Best, snsvsn</p>
2018-04-21 04:01:46.067000+00:00
2018-04-21 05:50:40.850000+00:00
null
keras|convolutional-neural-network
['https://arxiv.org/pdf/1506.02025.pdf']
1
60,315,929
<p>The short answer is: <strong>Yes, probably.</strong></p> <p>To explain this in a bit more detail, we have to look at the <a href="https://arxiv.org/pdf/1908.08345.pdf" rel="nofollow noreferrer">paper</a> behind the implementation: In Table 1, you can clearly see that most of their generated headlines are <em>much</em> shorter than what you are trying to initialize. While that alone might not be an indicator that you couldn't generate anything longer, we can go even deeper and look at the meaning of the <code>[unusedX]</code> tokens, as described by BERT dev <a href="https://github.com/google-research/bert/issues/9#issuecomment-434796704" rel="nofollow noreferrer">Jacob Devlin</a>:</p> <blockquote> <p>Since <em>[the</em> <code>[unusedX]</code> <em>tokens]</em> were not used they are effectively randomly initialized.</p> </blockquote> <p>Further, the summarization paper describes </p> <blockquote> <p>Position embeddings in the original BERT model have a maximum length of 512; we overcome this limitation by adding more position embeddings that are initialized randomly and fine-tuned with other parameters in the encoder. </p> </blockquote> <p>This is a strong indicator that past a certain length, they are likely falling back to the default initialization, which is unfortunately random. The question is whether you can still salvage the previous pre-training, and simply fine-tune to your objective, or whether it is better to just start from scratch.</p>
2020-02-20 08:45:28.873000+00:00
2020-02-20 08:45:28.873000+00:00
null
null
60,157,959
<p>I use Ai-powered summarization from <a href="https://github.com/huggingface/transformers/tree/master/examples/summarization" rel="nofollow noreferrer">https://github.com/huggingface/transformers/tree/master/examples/summarization</a> - state of the art results.</p> <p><strong>Should i train it myself to get summary output longer than used in original huggingface github training script?</strong> :</p> <pre><code>python run_summarization.py \ --documents_dir $DATA_PATH \ --summaries_output_dir $SUMMARIES_PATH \ # optional --no_cuda false \ --batch_size 4 \ --min_length 50 \ --max_length 200 \ --beam_size 5 \ --alpha 0.95 \ --block_trigram true \ --compute_rouge true </code></pre> <p>When i do inference with </p> <pre><code>--min_length 500 \ --max_length 600 \ </code></pre> <p>I got a good output for 200 tokens, but the rest of the text is </p> <pre><code>. . . [unused7] [unused7] [unused7] [unused8] [unused4] [unused7] [unused7] [unused4] [unused7] [unused8]. [unused4] [unused7] . [unused4] [unused8] [unused4] [unused8]. [unused4] [unused4] [unused8] [unused4] . . [unused4] [unused6] [unused4] [unused7] [unused6] [unused4] [unused8] [unused5] [unused4] [unused7] [unused4] [unused4] [unused7]. [unused4] [unused6]. [unused4] [unused4] [unused4] [unused8] [unused4] [unused7] [unused4] [unused8] [unused6] [unused4] [unused4] [unused4]. [unused4]. [unused5] [unused4] [unused8] [unused7] [unused4] [unused7] [unused9] [unused4] [unused7] [unused4] [unused7] [unused5] [unused4] [unused5] [unused4] [unused6] [unused4]. . . [unused5]. [unused4] [unused4] [unused4] [unused6] [unused5] [unused4] [unused4] [unused6] [unused4] [unused6] [unused4] [unused4] [unused5] [unused4]. [unused5] [unused4] . [unused4] [unused4] [unused8] [unused8] [unused4] [unused7] [unused4] [unused8] [unused4] [unused7] [unused4] [unused8] [unused4] [unused8] [unused4] [unused6] </code></pre>
2020-02-10 20:31:24.440000+00:00
2020-02-20 08:45:28.873000+00:00
null
pytorch|huggingface-transformers|pytorch-ignite
['https://arxiv.org/pdf/1908.08345.pdf', 'https://github.com/google-research/bert/issues/9#issuecomment-434796704']
2
57,294,996
<p>Adversarial examples are commonly crafted by introducing a perturbation that changes the classifier's prediction while the underlying class of the input did not change. Setting a maximum value for the norm of that introduced perturbation is difficult, whether the norm is measured with L2 or Linfinity. Typically, this norm is constrained to be smaller than a certain constant epsilon, because we assume that if epsilon is small enough, it is unlikely that the underlying class of the input changed once it has been perturbed. However, we now know that it is difficult to set a value of epsilon that holds across all inputs. You can find more details here: <a href="https://arxiv.org/abs/1903.10484" rel="nofollow noreferrer">https://arxiv.org/abs/1903.10484</a></p>
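<p>As an illustration of what "constraining the L2 norm of the perturbation to some epsilon" means operationally, here is a small NumPy sketch of a generic projection step (not the exact procedure used by the C&amp;W or DeepFool attacks, which treat the norm as part of the objective rather than as a hard bound):</p> <pre><code>import numpy as np

def project_l2(delta, eps):
    """Project a perturbation onto the L2 ball of radius eps:
    left unchanged if its norm is already small enough,
    otherwise rescaled so that its L2 norm equals eps."""
    norm = np.linalg.norm(delta.reshape(-1))
    if norm &lt;= eps:
        return delta
    return delta * (eps / norm)

delta = np.random.randn(3, 32, 32)       # candidate perturbation
delta = project_l2(delta, eps=0.5)
print(np.linalg.norm(delta))             # at most 0.5
</code></pre>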
2019-07-31 16:09:18.937000+00:00
2019-07-31 16:09:18.937000+00:00
null
null
57,277,717
<p>How do we set constrain on L2 distance for cw L2 attack and deepfool attack? In attack publications authors mention smaller Lp distance to claim a stronger attack, but how to limit L2 distance to a fixed value is confusing. For L-ifinity it can be a max min crop but L2 distance is the average L2 distance on pixel value if I am not wrong, and how do we set that to be a fixed value?</p>
2019-07-30 18:12:07.130000+00:00
2019-08-12 11:58:50.407000+00:00
null
neural-network|cleverhans
['https://arxiv.org/abs/1903.10484']
1
56,441,883
<p>I guess the best way to understand the Generator training procedure is to walk through the whole training loop.</p> <p>For each epoch:</p> <ol> <li><p>Update Discriminator:</p> <ul> <li><p>forward-pass a mini-batch of real images through the Discriminator;</p></li> <li><p>compute the Discriminator loss and calculate gradients for the backward pass;</p></li> <li><p>generate a mini-batch of fake images via the Generator;</p></li> <li><p>forward-pass the generated fake mini-batch through the Discriminator;</p></li> <li><p>compute the Discriminator loss and derive gradients for the backward pass;</p></li> <li><p>add (real mini-batch gradients, fake mini-batch gradients);</p></li> <li><p>update the Discriminator (use Adam or SGD).</p></li> </ul></li> <li><p>Update Generator:</p> <ul> <li><p>flip the targets: fake images get labeled as real for the Generator. Note: this step ensures using cross-entropy minimization for the Generator. It helps overcome the Generator's vanishing-gradient problem that arises if we keep the original minimax formulation of the GAN game.</p></li> <li><p>forward-pass the fake images mini-batch through the updated Discriminator;</p></li> <li><p>compute the Generator loss based on the updated Discriminator output, e.g.:</p></li> </ul> <p>loss_function(probability that the fake image is real, as estimated by the Discriminator; target = 1).<br> Note: here 1 represents the Generator's label for fake images as real.</p> <ul> <li>update the Generator (use Adam or SGD)</li> </ul></li> </ol> <p>I hope this helps. As you can see from the training procedure, GAN players are somewhat "cooperative, in the sense that the discriminator estimates the ratio of data to model distribution densities and then freely shares this information with the generator. From this point of view, the discriminator is more like a teacher instructing the generator in how to improve than an adversary" (cited from <a href="https://arxiv.org/pdf/1701.00160.pdf" rel="nofollow noreferrer">I.Goodfellow tutorial</a>). </p>
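<p>To see how a single scalar per image is enough to train the Generator, here is a minimal PyTorch sketch of just the Generator update from step 2 (toy layer sizes, not a complete training script); backpropagation simply continues through the Discriminator into the Generator's weights:</p> <pre><code>import torch
import torch.nn as nn

# Toy networks: noise (16) to a flattened 10x10 image (100), and image to one scalar.
G = nn.Sequential(nn.Linear(16, 100), nn.ReLU(), nn.Linear(100, 100), nn.Sigmoid())
D = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCELoss()

noise = torch.randn(64, 16)        # mini-batch of noise vectors
fake_images = G(noise)             # 64 fake images, flattened to 100 values each
d_out = D(fake_images)             # one scalar per fake image

# Flipped targets: the Generator wants D to say "real" (1) for its fakes.
g_loss = bce(d_out, torch.ones_like(d_out))

opt_G.zero_grad()
g_loss.backward()                  # gradients flow through D back into G's weights
opt_G.step()                       # only G's parameters are updated in this step
</code></pre>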
2019-06-04 10:16:56.547000+00:00
2019-06-04 10:16:56.547000+00:00
null
null
56,092,361
<p>After reading GAN tutorials and code samples i still don't understand how generator is trained. Let's say we have simple case: - generator input is noise and output is grayscale image 10x10 - discriminator input is image 10x10 and output is single value from 0 to 1 (fake or true)</p> <p>Training discriminator is easy - take its output for real and expect 1 for it. Take output for fake and expect 0. We're working with real output size here - single value.</p> <p>But training generator is different - we take fake output (1 value) and make expected output for that as one. But it sounds more like training of descriminator again. Output of generator is image 10x10 how can we train it with only 1 single value? How back propagation might work in this case?</p>
2019-05-11 16:44:33.540000+00:00
2019-06-04 10:16:56.547000+00:00
null
tensorflow|keras|neural-network|deep-learning|generative-adversarial-network
['https://arxiv.org/pdf/1701.00160.pdf']
1
40,849,894
<p>Almost all deep learning frameworks (MXNet included) will run much faster with a CUDA-capable GPU from NVIDIA. GPUs will often speed up the kinds of vector math needed for deep learning by 100x. Apple stopped building machines with NVIDIA GPUs several years ago (2012 IIRC). If you have one of those, make sure you have <a href="http://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x/" rel="nofollow noreferrer">CUDA working on your Mac</a>. I'm not aware of any way right now to get MXNet to make use of the AMD or Intel GPUs that ship with Apple machines. Also know that even with the fastest GPUs, deep learning jobs will often take hours, days, or even weeks to complete. So patience is definitely part of the game, regardless of what hardware you're using.</p> <p>That said, GPUs aren't the only way to run deep learning systems. Particularly for making predictions (inference) with pre-trained models, CPUs are often just fine. So this can be useful for a task like <a href="http://mxnet.io/tutorials/python/predict_imagenet.html" rel="nofollow noreferrer">semantic image processing</a>. Or when training, using smaller datasets and smaller models can make them run faster. Also, to make sure you're getting the most out of your CPU, check that you have installed a good BLAS library like <a href="https://software.intel.com/en-us/articles/performance-tools-for-software-developers-how-can-i-download-the-intel-ipp-and-intel-mkl-for-mac-os-x" rel="nofollow noreferrer">Intel's MKL</a>.</p> <p>But to get any useful work out of a Raspberry Pi is going to take some careful optimization, even for inference. This is an area of active scientific research. See for example <a href="https://arxiv.org/abs/1510.00149" rel="nofollow noreferrer">this paper</a>. Or look at adding a <a href="https://www.engadget.com/2016/04/28/movidius-fathom-neural-compute-stick/" rel="nofollow noreferrer">USB hardware accelerator</a>.</p>
2016-11-28 17:14:01.277000+00:00
2016-11-28 17:14:01.277000+00:00
null
null
40,798,792
<p>I am using my MacBookPro. I am trying to run the mxnet python demo code and the execution time is extremely slow. It takes a lot time to execute the code. Is this normal? Also i want to run mxnet on Raspberry Pi 3.</p>
2016-11-25 06:13:52.197000+00:00
2016-11-28 17:14:01.277000+00:00
null
macos|python-2.7|mxnet
['http://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x/', 'http://mxnet.io/tutorials/python/predict_imagenet.html', 'https://software.intel.com/en-us/articles/performance-tools-for-software-developers-how-can-i-download-the-intel-ipp-and-intel-mkl-for-mac-os-x', 'https://arxiv.org/abs/1510.00149', 'https://www.engadget.com/2016/04/28/movidius-fathom-neural-compute-stick/']
5
43,443,711
<p><code>in_channels</code> refers to the depth of the inputs to the convolutional layer. For example, if you feed the layer with raw RGB images, then the depth will be 3, corresponding to the Red, Green, and Blue channels. This means that the kernels actually are 3D rather than 2D. The <code>out_channels</code> refers to the depth of the output. The following picture from <a href="http://cs231n.github.io/convolutional-networks/" rel="nofollow noreferrer">here</a> illustrates an example with input depth of 3 and output depth of 5:</p> <p><a href="https://i.stack.imgur.com/3bHh8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3bHh8.jpg" alt="enter image description here"></a></p> <p>Choosing how to <code>properly define</code> them is something done based on experiments. That is a network design issue. You may read about some of the famous architectures like <a href="http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf" rel="nofollow noreferrer">AlexNet</a> and <a href="https://arxiv.org/pdf/1409.1556/" rel="nofollow noreferrer">VGG-16</a> to see how network architectures are designed in practice.</p>
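<p>A short TensorFlow sketch (TensorFlow 2 style, toy shapes) showing how <code>in_channels</code> must match between the input and the filter, and how <code>out_channels</code> becomes the depth of the output:</p> <pre><code>import tensorflow as tf

# One RGB image, 32x32: [batch, in_height, in_width, in_channels]
images = tf.random.normal([1, 32, 32, 3])

# Five 3x3 kernels spanning all 3 input channels:
# [filter_height, filter_width, in_channels, out_channels]
kernels = tf.random.normal([3, 3, 3, 5])

out = tf.nn.conv2d(images, kernels, strides=[1, 1, 1, 1], padding="SAME")
print(out.shape)  # (1, 32, 32, 5): the output depth equals out_channels
</code></pre>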
2017-04-17 00:49:29.393000+00:00
2017-04-17 00:49:29.393000+00:00
null
null
43,443,205
<p>In reading through the Tensorflow tutorial and API documentation, I do not understand how they defined the shape of the convolution input and filter arguments. The method is: <code>tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)</code>, where the input is shape: <code>[batch, in_height, in_width, in_channels]</code> and the filter is shape: <code>[filter_height, filter_width, in_channels, out_channels]</code>. If anyone could shed light on how to properly define the "in_channel" and "out_channel" sizes, that would be very helpful. </p>
2017-04-16 23:22:18.913000+00:00
2017-04-17 00:49:29.393000+00:00
null
python-3.x|tensorflow
['http://cs231n.github.io/convolutional-networks/', 'https://i.stack.imgur.com/3bHh8.jpg', 'http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf', 'https://arxiv.org/pdf/1409.1556/']
4
59,588,178
<p>Try <a href="https://cv-tricks.com/object-tracking/quick-guide-mdnet-goturn-rolo/" rel="nofollow noreferrer">object tracking</a>. It improves stability across frames and also makes more efficient use of computational resources.</p> <p>If you are into research, <a href="https://arxiv.org/pdf/1611.06467.pdf" rel="nofollow noreferrer">this paper</a> might give you a better idea.</p>
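<p>Separately from the tracking approaches linked above, a very cheap way to damp frame-to-frame jitter is to low-pass filter the box coordinates; here is a small Python sketch of an exponential moving average (my own illustration, not taken from the linked guide or paper):</p> <pre><code>def smooth_box(prev_box, new_box, alpha=0.6):
    """Exponential moving average over box coordinates (x1, y1, x2, y2).
    alpha near 1 trusts the newest detection more; smaller alpha smooths harder."""
    if prev_box is None:
        return new_box
    return tuple(alpha * n + (1 - alpha) * p for p, n in zip(prev_box, new_box))

smoothed = None
for detection in [(10, 10, 50, 50), (12, 9, 53, 51), (11, 11, 51, 49)]:
    smoothed = smooth_box(smoothed, detection)
    print(smoothed)
</code></pre>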
2020-01-04 05:28:56.203000+00:00
2020-01-04 05:28:56.203000+00:00
null
null
59,568,448
<p>Is there any way to get less jitter on bounding boxes? I sort of understand <a href="http://cs230.stanford.edu/projects_winter_2019/reports/15812427.pdf" rel="nofollow noreferrer">why they happen</a>.</p> <p>And I am not the only one seeing this. See the video <a href="https://www.youtube.com/watch?v=vGiHciI-NC0" rel="nofollow noreferrer">here</a>.</p> <p>But I don't see any patches or fixes for this behavior. It also seems to happen within SSDs. From that paper, it seems like the solution is to pass information from one frame to the next... but I haven't been able to find any implementations of this yet.</p>
2020-01-02 18:44:07.897000+00:00
2020-01-04 05:28:56.203000+00:00
2020-01-03 17:14:24.960000+00:00
lstm|bounding-box|yolo|jitter
['https://cv-tricks.com/object-tracking/quick-guide-mdnet-goturn-rolo/', 'https://arxiv.org/pdf/1611.06467.pdf']
2
61,136,018
<p>This question is extremely broad, so I'm trying to give an answer that focuses on the main problem at hand. If you feel the need to have other questions answered, please open another question focusing on <em>one question at a time</em>, see the [help/on-topic] rules for Stack Overflow.</p> <p>Essentially, as you've correctly identified, BPE is central to any tokenization in modern deep networks. I highly recommend reading the <a href="https://www.aclweb.org/anthology/P16-1162/" rel="noreferrer">original BPE paper by Sennrich et al.</a>, in which they also highlight a bit more of the history of BPEs. <br/> In any case, the tokenizers for any of the huggingface models are pretrained, meaning that they are usually generated from the training set of the algorithm beforehand. Common implementations such as <a href="https://github.com/google/sentencepiece" rel="noreferrer">SentencePiece</a> also give a bit better understanding of it, but essentially the task is framed as a constrained optimization problem, where you specify a maximum number of <code>k</code> allowed vocabulary words (the constraint), and the algorithm then tries to keep as many words intact as possible without exceeding <code>k</code>.</p> <p>If <code>k</code> vocabulary slots are not enough to cover all whole words, smaller units are used to approximate the vocabulary, which results in the splits observed in the example you gave. RoBERTa uses a variant called "<em>byte-level BPE</em>"; the best explanation is probably given in <a href="https://arxiv.org/pdf/1909.03341.pdf" rel="noreferrer">this study by Wang et al.</a>. The main benefit is that it results in a smaller vocabulary while maintaining the quality of splits, from what I understand.</p> <p>The second part of your question is easier to explain; while BERT highlights the <em>merging</em> of two subsequent tokens (with <code>##</code>), RoBERTa's tokenizer instead highlights the <em>start of a new token</em> with a specific unicode character (in this case, <code>\u0120</code>, the G with a dot). The best reason I could find for this was <a href="https://github.com/openai/gpt-2/issues/80" rel="noreferrer">this thread</a>, which argues that it basically avoids the use of whitespaces in training.</p>
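<p>A tiny sketch, reusing the tokenizer from the question, of what the Ġ marker means in practice; conceptually each Ġ marks a token that was preceded by a space in the original text:</p> <pre><code>from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
tokens = tok.tokenize("A Titan RTX has 24GB of VRAM")
print(tokens)
# ['A', 'ĠTitan', 'ĠRTX', 'Ġhas', 'Ġ24', 'GB', 'Ġof', 'ĠVR', 'AM']

# Ġ (U+0120) stands in for the leading space, so mapping it back to ' '
# after joining the tokens recovers the original text:
print("".join(tokens).replace("Ġ", " "))
# 'A Titan RTX has 24GB of VRAM'

# The tokenizer also exposes a helper that performs this conversion:
print(tok.convert_tokens_to_string(tokens))
</code></pre>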
2020-04-10 07:45:44.390000+00:00
2020-04-10 07:45:44.390000+00:00
null
null
61,134,275
<pre class="lang-py prettyprint-override"><code>from transformers import AutoModel, AutoTokenizer tokenizer1 = AutoTokenizer.from_pretrained("roberta-base") tokenizer2 = AutoTokenizer.from_pretrained("bert-base-cased") sequence = "A Titan RTX has 24GB of VRAM" print(tokenizer1.tokenize(sequence)) print(tokenizer2.tokenize(sequence)) </code></pre> <p>Output:</p> <p>['A', 'ĠTitan', 'ĠRTX', 'Ġhas', 'Ġ24', 'GB', 'Ġof', 'ĠVR', 'AM']</p> <p>['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']</p> <p>Bert model uses WordPiece tokenizer. Any word that does not occur in the WordPiece vocabulary is broken down into sub-words greedily. For example, 'RTX' is broken into 'R', '##T' and '##X' where ## indicates it is a subtoken. </p> <p>Roberta uses BPE tokenizer but I'm unable to understand </p> <p>a) how BPE tokenizer works? </p> <p>b) what does G represents in each of tokens?</p>
2020-04-10 04:58:26.833000+00:00
2022-03-21 19:44:02.080000+00:00
2020-04-10 07:19:09.977000+00:00
nlp|pytorch|huggingface-transformers|bert-language-model
['https://www.aclweb.org/anthology/P16-1162/', 'https://github.com/google/sentencepiece', 'https://arxiv.org/pdf/1909.03341.pdf', 'https://github.com/openai/gpt-2/issues/80']
4
34,094,585
<p>I'm not 100% sure I understand your question, but the basinhopping parameter <code>callback</code> sounds like what you're looking for.</p> <p>On a side note, what you're trying to do sounds a bit similar in concept to this paper <a href="http://arxiv.org/abs/1406.3896" rel="nofollow">Freeze-Thaw Bayesian Optimization</a></p>
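<p>A small sketch of how that hook can be used to watch the cost value after every basin-hopping iteration (the cost function and starting point here are made up; the <code>callback(x, f, accept)</code> signature is the one documented for <code>basinhopping</code>):</p> <pre><code>import numpy as np
from scipy.optimize import basinhopping

def cost(x):
    # Made-up multivariate cost with several local minima.
    return np.sum((x - 1.0) ** 2) + np.sum(np.sin(5 * x))

trace = []

def monitor(x, f, accept):
    # Called after every basin-hopping iteration with the trial minimum found.
    trace.append(f)
    print("iteration %d: f = %.4f, accepted = %s" % (len(trace), f, accept))

res = basinhopping(cost, x0=np.zeros(3), niter=20, T=1.0,
                   minimizer_kwargs={"method": "cobyla"},
                   callback=monitor)
print("best value:", res.fun)
</code></pre>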
2015-12-04 18:00:58.527000+00:00
2015-12-04 18:00:58.527000+00:00
null
null
34,056,788
<p>I'm using scipy's <a href="http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.optimize.basinhopping.html#scipy.optimize.basinhopping" rel="nofollow">Basin Hopping algorithm</a> to optimize a multivariate cost function. Temperature is one of the parameters that greatly affects convergence time of the basin hopping algorithm. I would like to be able to determine how quickly <code>basinhopping()</code> is converging by fitting the cost function value curve up to the current iteration and determining if it's a faster convergence than the previous temperature setting.</p> <p>Here's what the basin hopping call looks like:</p> <pre><code>res = basinhopping(cost, guess, niter=1, T=t, minimizer_kwargs={"method": "cobyla"}) </code></pre> <p>Is there some way to get "live" updates on the current value of the cost function so that I can do an adaptive optimization?</p>
2015-12-03 02:24:23.850000+00:00
2016-01-05 11:46:11.733000+00:00
2015-12-03 02:51:22.167000+00:00
python|optimization|scipy
['http://arxiv.org/abs/1406.3896']
1
44,450,137
<p>Yes.</p> <p>Decades ago, <a href="https://en.wikipedia.org/wiki/Lisp_machine" rel="nofollow noreferrer">Lisp machines</a> performed simultaneous validation checks (e.g. type checks and bounds checks) as the program ran with the assumption the program and state were valid, jumping &quot;back in time&quot; if the check failed - unfortunately this ability to get &quot;free&quot; runtime validation was lost when conventional (i.e. x86) machines became dominant.</p> <blockquote> <p><a href="https://en.wikipedia.org/wiki/Lisp_machine" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Lisp_machine</a></p> <p>Lisp Machines ran the tests in parallel with the more conventional single instruction additions. If the simultaneous tests failed, then the result was discarded and recomputed; this meant in many cases a speed increase by several factors. <strong>This simultaneous checking approach was used as well in testing the bounds of arrays when referenced</strong>, and other memory management necessities (not merely garbage collection or arrays).</p> </blockquote> <p>Fortunately we're finally learning from the past and slowly, and by piecemeal, reintroducing those innovations - <a href="https://en.wikipedia.org/wiki/Intel_MPX" rel="nofollow noreferrer">Intel's &quot;MPX&quot; (Memory Protection eXtensions)</a> for x86 were introduced in Skylake-generation processors for hardware bounds-checking - though it isn't perfect.</p> <p>(x86 is a regression in other ways too: IBM's mainframes had true hardware-accelerated system virtualization in the 1980s - we didn't get it on x86 until 2005 with Intel's &quot;VT-x&quot; and AMD's &quot;AMD-V&quot; extensions).</p> <h3 id="x86-bound-qv0y">x86 <code>BOUND</code></h3> <p>Technically, x86 <em>does</em> have hardware bounds-checking: the <a href="http://x86.renejeschke.de/html/file_module_x86_id_18.html" rel="nofollow noreferrer">the <code>BOUND</code> instruction</a> was introduced in 1982 in the <a href="http://zsmith.co/intel_b.html#bound" rel="nofollow noreferrer">Intel 80188</a> (as well as the Intel 286 and above, but not the Intel 8086, 8088 or 80186 processors).</p> <p>While the <code>BOUND</code> instruction does provide hardware bounds-checking, I understand it indirectly caused performance issues because it breaks the hardware branch predictor (<a href="https://www.reddit.com/r/rust/comments/2qlbx1/is_intel_mpx_relevant_for_rust/" rel="nofollow noreferrer">according to a Reddit thread</a>, but I'm unsure why), but also because it requires the bounds to be specified in a tuple <em>in memory</em> - that's terrible for performance - I understand at runtime it's no faster than manually having the instructions to do an &quot;if <code>index</code> not in range <code>[x,y]</code> then signal the <code>BR</code> exception to the program or OS&quot; (so you might imagine the <code>BOUND</code> instruction was added for the convenience of people who coded assembly by-hand, which was quite common in the 1980s).</p> <p>The <code>BOUND</code> instruction is still present in today's processors, but it was not included in AMD64 (x64) - likely for the performance reasons I explained above, and also because likely very few people were using it (and compilers could trivially replace it with a manual bounds check, that might have <em>better</em> performance anyway, as that could use registers).</p> <p>Another disadvantage to storing the array bounds in memory is that code elsewhere (that wasn't subject to <code>BOUNDS</code> checking) could overwrite the 
previously written bounds for another pointer and circumvent the check that way - this is mostly a problem with code that <em>intentionally</em> tries to disable safety features (i.e. malware), but if the bounds were stored in the stack - and given how easy it is to corrupt the stack, it has even less utility.</p> <h3 id="intel-mpx-qizo">Intel MPX</h3> <p>Intel MPX was introduced in Skylake architecture in 2015 and should be present in all Skylake and subsequent processor models in the mainstream Intel Core family (including Xeon, and non-SoC versions of Celeron and Pentium). Intel also implemented MPX in the Goldmont architecture (Atom, and SoC versions of Celeron and Pentium) from 2016 onwards.</p> <p>MPX is superior to <code>BOUND</code> in that it provides dedicated registers to store the bounds range so the bounds-check should be <em>almost</em> zero-cost compared to <code>BOUND</code> which required a memory access. On the Intel 486 the <code>BOUND</code> instruction <a href="http://zsmith.co/intel_b.html#bound" rel="nofollow noreferrer">takes 7 cycles</a> (compare to <code>CMP</code> which takes <a href="http://zsmith.co/intel_c.html#bound" rel="nofollow noreferrer">only 2 cycles</a> even if the operand was a memory address). In Skylake the MPX equivalent (<code>BNDMK</code>, <code>BNDCL</code> and <code>BNDCU</code>) are all 1-cycle instructions and <code>BNDMK</code> can be amortized as it only needs to be called once for each new pointer).</p> <p>I cannot find any information on wherever or not AMD has implemented their own version of MPX yet (as of June 2017).</p> <h3 id="critical-thoughts-on-mpx-r2q2">Critical thoughts on MPX</h3> <p>Unfortunately the current state of MPX is not all that rosy - <a href="https://arxiv.org/abs/1702.00719" rel="nofollow noreferrer">a recent paper by Oleksenko, Kuvaiskii, et al. in February 2017 &quot;Intel MPX Explained&quot;</a> (<a href="https://arxiv.org/pdf/1702.00719.pdf" rel="nofollow noreferrer">PDF link</a>: caution: not yet peer-reviewed) is a tad critical:</p> <blockquote> <p>Our main conclusion is that Intel MPX is a promising technique that is not yet practical for widespread adoption. Intel MPX’s performance overheads are still high (~50% on average), and the supporting infrastructure has bugs which may cause compilation or runtime errors. Moreover, we showcase the design limitations of Intel MPX: it cannot detect temporal errors, may have false positives and false negatives in multithreaded code, and its restrictions on memory layout require substantial code changes for some programs.</p> </blockquote> <p>Also note that compared to the Lisp Machines of yore, Intel MPX is still executed inline - whereas in Lisp Machines (<em>if my understanding is correct</em>) bounds checks happened concurrently in hardware with a retroactive jump backwards if the check failed; thus, so-long as a running program's pointers do not point to out-of-bounds locations then there would be an absolutely zero runtime performance cost, so if you have this C code:</p> <pre><code>char arr[10]; arr[9] = 'a'; arr[8] = 'b'; </code></pre> <p>Then under MPX then this would be executed:</p> <pre><code>Time Instruction Notes 1 BNDMK arr, arr+9 Set bounds 0 to 9. 2 BNDCL arr Check `arr` meets lower-bound. 3 BNDCU arr Check `arr` meets upper-bound. 4 MOV 'a' arr+9 Assign 'a' to arr+9. 5 MOV 'a' arr+8 Assign 'a' to arr+8. 
</code></pre> <p>But on a Lisp machine (if it were magically possible to compile C to Lisp...), then the program-reader-hardware in the computer has the ability to execute additional &quot;side&quot; instructions concurrently with the &quot;actual&quot; instructions, allowing the &quot;side&quot; instructions to instruct the computer to disregard the results from the &quot;actual&quot; instructions in the event of an error:</p> <pre><code>Time Actual instruction Side instruction 1 MOV 'A' arr+9 ENSURE arr+9 BETWEEN arr, arr+9 2 MOV 'A' arr+8 ENSURE arr+8 BETWEEN arr, arr+9 </code></pre> <p>I understand the instructions-per-cycle for the &quot;side&quot; instructions are not the same as the &quot;Actual&quot; instructions - so the side-check for the instruction at <code>Time=1</code> might only complete after the &quot;Actual&quot; instructions have already progressed on to <code>Time=3</code> - but if the check failed then it would pass the instruction pointer of the failed instruction to the exception handler that would direct the program to disregard the results of the instructions executed after <code>Time=1</code>. I don't know how they could achieve that without massive amounts of memory or some mandatory execution pauses, possibly memory-fencing too - that's outside the scope of my answer, but it is at least theoretically possible.</p> <p>(Note in this contrived example I'm using <code>constexpr</code> index values that a compiler can prove will never be out-of-bounds so would omit the MPX checks entirely - so pretend they're user-supplied variables instead :) ).</p> <p>I'm not an expert in x86 (or have any experience in microprocessor design, spare a CS500-level course I took at UW and didn't do the homework for...) but I don't believe concurrent execution of bounds-checks nor &quot;time travel&quot; is possible with x86's current design, despite the extant implementation of out-of-order execution - I might be wrong, however. I speculate that if all pointer-types were promoted to 3-tuples ( <code>struct BoundedPointer&lt;T&gt; { T* ptr, T* min, T* max }</code> - which technically already happens with MPX and other software-based bounds-checks as every guarded pointer has its bounds defined when <code>BNDMK</code> is called) then the protection could be provided for free by the MMU - but now pointers will consume 24 bytes of memory, each, instead of the current 8 bytes - or compare to the measly 4 bytes under 32-bit x86 - RAM is plentiful, but still a finite resource that shouldn't be wasted.</p> <h3 id="mpx-in-gcc-f4ke">MPX in GCC</h3> <p>GCC supported for MPX from version 5.0 to 9.1 ( <a href="https://gcc.gnu.org/wiki/Intel%20MPX%20support%20in%20the%20GCC%20compiler" rel="nofollow noreferrer">https://gcc.gnu.org/wiki/Intel%20MPX%20support%20in%20the%20GCC%20compiler</a> ) when it was removed due to its <a href="https://gcc.gnu.org/legacy-ml/gcc-patches/2017-05/msg01829.html" rel="nofollow noreferrer">maintenance burden</a>.</p> <h3 id="mpx-in-visual-studio-visual-c-qjai">MPX in Visual Studio / Visual C++</h3> <p>Visual Studio 2015 Update 1 (2015.1) added &quot;experimental&quot; support for MPX with the <code>/d2MPX</code> switch ( <a href="https://blogs.msdn.microsoft.com/vcblog/2016/01/20/visual-studio-2015-update-1-new-experimental-feature-mpx/" rel="nofollow noreferrer">https://blogs.msdn.microsoft.com/vcblog/2016/01/20/visual-studio-2015-update-1-new-experimental-feature-mpx/</a> ). 
Support is still present in Visual Studio 2017 but Microsoft has not announced if it's considered a mainstream (i.e. non-experimental) feature yet.</p> <h3 id="mpx-in-clang-llvm-5vyt">MPX in Clang / LLVM</h3> <p>Clang has partially supported manual use of MPX in the past, but that support was fully removed in version 10.0</p> <p>As of July 2021, LLVM still seems capable of outputting MPX instructions, but I can't see any evidence of an MPX &quot;pass&quot;.</p> <h3 id="mpx-in-intel-cc-compiler-0bqr">MPX in Intel C/C++ Compiler</h3> <p>The Intel C/C++ Compiler has supported MPX since version 15.0.</p>
2017-06-09 05:47:34.323000+00:00
2021-07-13 01:25:33.350000+00:00
2021-07-13 01:25:33.350000+00:00
null
40,752,436
<p>It doesn't seem like it would be difficult to associate ranges with segments of memory. Then have an assembly instruction which treats 2 integers as "location" &amp; "offset" (another for "data" if setting), and returns the data and error code. This would mean no longer having to make a choice between speed and security/safety when working with arrays.</p> <p>Another example might be a function which verifies that instructions originating in a particular memory range cannot physically access memory outside that range. If all hardware connected to the motherboard had this capability (and were made to be compatible with each other), it would be trivial to make perfect virtual machines that run at nearly the same speed as the physical machine.</p> <p>Dustin Soodak</p>
2016-11-22 22:01:11.470000+00:00
2021-07-13 01:25:33.350000+00:00
null
memory|virtual-machine
['https://en.wikipedia.org/wiki/Lisp_machine', 'https://en.wikipedia.org/wiki/Lisp_machine', 'https://en.wikipedia.org/wiki/Intel_MPX', 'http://x86.renejeschke.de/html/file_module_x86_id_18.html', 'http://zsmith.co/intel_b.html#bound', 'https://www.reddit.com/r/rust/comments/2qlbx1/is_intel_mpx_relevant_for_rust/', 'http://zsmith.co/intel_b.html#bound', 'http://zsmith.co/intel_c.html#bound', 'https://arxiv.org/abs/1702.00719', 'https://arxiv.org/pdf/1702.00719.pdf', 'https://gcc.gnu.org/wiki/Intel%20MPX%20support%20in%20the%20GCC%20compiler', 'https://gcc.gnu.org/legacy-ml/gcc-patches/2017-05/msg01829.html', 'https://blogs.msdn.microsoft.com/vcblog/2016/01/20/visual-studio-2015-update-1-new-experimental-feature-mpx/']
13
5,420,784
<p>Per the Bloom Filter mentions, Deletable Bloom Filters (DlBF) are in some ways better than basic counting variants. See <a href="http://arxiv.org/abs/1005.0352" rel="nofollow">http://arxiv.org/abs/1005.0352</a></p>
2011-03-24 14:33:37.883000+00:00
2011-03-24 14:33:37.883000+00:00
null
null
500,607
<p>There are some data structures around that are really useful but are unknown to most programmers. Which ones are they?</p> <p>Everybody knows about linked lists, binary trees, and hashes, but what about <a href="http://en.wikipedia.org/wiki/Skip_list" rel="nofollow noreferrer">Skip lists</a> and <a href="http://en.wikipedia.org/wiki/Bloom_filter" rel="nofollow noreferrer">Bloom filters</a> for example. I would like to know more data structures that are not so common, but are worth knowing because they rely on great ideas and enrich a programmer's tool box.</p> <p>PS: I am also interested in techniques like <a href="http://en.wikipedia.org/wiki/Dancing_Links" rel="nofollow noreferrer">Dancing links</a> which make clever use of properties of a common data structure. </p> <p><strong>EDIT</strong>: Please try to <em>include links</em> to pages describing the data structures in more detail. Also, try to add a couple of words on <em>why</em> a data structure is cool (as <a href="https://stackoverflow.com/questions/500607/what-are-the-lesser-known-but-cool-data-structures">Jonas Kölker</a> already pointed out). Also, try to provide <strong><em>one data-structure per answer</em></strong>. This will allow the better data structures to float to the top based on their votes alone.</p>
2009-02-01 11:12:25.403000+00:00
2012-02-13 00:15:00.837000+00:00
2017-05-23 11:47:31.583000+00:00
language-agnostic|data-structures|computer-science
['http://arxiv.org/abs/1005.0352']
1
69,549,368
<p><a href="https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.GATConv" rel="nofollow noreferrer">That documentation page</a> includes a link to an <a href="https://arxiv.org/pdf/1710.10903.pdf" rel="nofollow noreferrer">arxiv paper</a> that includes the following (bottom of page three)...</p> <blockquote> <p><a href="https://i.stack.imgur.com/hpg0n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hpg0n.png" alt="enter image description here" /></a> where <a href="https://i.stack.imgur.com/nWLtt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nWLtt.png" alt="enter image description here" /></a> represents transposition and || is the concatenation operation.</p> </blockquote> <p>So yes, || is the concatenation operator.</p>
2021-10-13 03:23:51.367000+00:00
2021-10-13 03:23:51.367000+00:00
null
null
69,549,292
<p>For example the &quot;||&quot; (\Vert) in <a href="https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.GATConv" rel="nofollow noreferrer">https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.GATConv</a></p>
2021-10-13 03:10:42+00:00
2022-01-03 16:11:39.943000+00:00
2022-01-03 16:11:39.943000+00:00
graph|pytorch|pytorch-geometric
['https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.GATConv', 'https://arxiv.org/pdf/1710.10903.pdf', 'https://i.stack.imgur.com/hpg0n.png', 'https://i.stack.imgur.com/nWLtt.png']
4
61,111,880
<p>You're correct that the <a href="https://github.com/microsoft/Quantum/tree/master/samples/chemistry/GetGateCount" rel="nofollow noreferrer"><code>GetGateCount</code> sample</a> was used to generate the plots in that figure. The <a href="https://github.com/microsoft/Quantum/blob/master/samples/chemistry/GetGateCount/Operation.qs" rel="nofollow noreferrer"><code>Operations.qs</code> source file</a> in that sample shows the details of what each different plot represents; in particular, the Trotter, Qubitization and Optimized Qubitization results are for a single step of each algorithm. As you note, the number of steps required depends on the relevant chemical accuracy.</p> <p>Though that's not a level of detail that we went into for that post, the follow-up <a href="https://arxiv.org/abs/1904.01131v1" rel="nofollow noreferrer">arXiv paper</a> goes into a lot more detail about that kind of comparison. All the source code used to generate that paper has been attached at <a href="https://arxiv.org/src/1904.01131v1/anc" rel="nofollow noreferrer">https://arxiv.org/src/1904.01131v1/anc</a>, along with a Dockerfile that can be used to reproduce the exact software environment.</p> <p>With respect to your other question, the resources estimator's count reports the number of <code>T</code> operation calls required, excluding those used to decompose <code>R</code> calls. Thus, the optimized qubitization method trades off some <code>T</code> calls for a dramatic saving in the number of rotations that must be decomposed.</p>
2020-04-08 23:58:17.673000+00:00
2020-04-08 23:58:17.673000+00:00
null
null
60,638,620
<p>The article I was referring to in the title is: <a href="https://cloudblogs.microsoft.com/quantum/2018/12/04/simulating-nature-with-the-new-microsoft-quantum-development-kit-chemistry-library/" rel="nofollow noreferrer">https://cloudblogs.microsoft.com/quantum/2018/12/04/simulating-nature-with-the-new-microsoft-quantum-development-kit-chemistry-library/</a> which provides resource estimates for certain metrics for different molecules with Trotterisation, Qubitization, and Optimized Qubitization.</p> <p>I would like to reproduce some of this data, and I have been using the example program "GetGateCount" to do so. Trotterisation seems to have a problem with T-costs, see my question here for more: <a href="https://stackoverflow.com/questions/60619381/resource-estimation-for-trotterisation-always-outputs-a-0-t-gate-cost">Resource estimation for Trotterisation always outputs a 0 T-gate cost?</a></p> <p>My question here is what information do I need to reproduce the same values as provided in the article? I am using robust phase estimation (is this what was used for the article?), in which the run time is very dependent on the bits of precision. To approximately get the same T-cost as in the article for qubitization, I found 3 bits of precision worked best, (but this leads to a large uncertainty on the phase and energy? as described in: <a href="https://docs.microsoft.com/en-us/qsharp/api/qsharp/microsoft.quantum.characterization.robustphaseestimation" rel="nofollow noreferrer">https://docs.microsoft.com/en-us/qsharp/api/qsharp/microsoft.quantum.characterization.robustphaseestimation</a>). For comparison, I calculated that chemical accuracy for molecular hydrogen would require approx 13 bits of precision with robust phase estimation.</p> <p>Another related question is regarding the difference between Qubitization and Optimized Qubitization. The article explains that Optimized should have a minimized T-cost, and the graph provided shows this to be the case across the molecules investigated. Contrary to this, I find, using the same example program "GetGateCount", that the Qubitization oracle is consistently "resource estimated" to have a lower T-cost as compared to the Optimized Qubitization oracle. This does not seem to be dependent on the additional parameter for Optimized, "targetError", as it causes only a minimal change to the T-cost when varied across a wide range. </p> <p>An example of my results for energy estimation on LiH (not just a single application of the oracle) is included below:</p> <pre><code>Gate Count results on ..\IntegralData\yaml\LiHdata\integrals_lih_sto-3g_0.800.nw.out.yaml by Trotter with 12 spin-orbitals. It took 4335 ms. Gate count statistics: # T:0, # Rotations:90756, # CNOT:559872, Gate Count results on ..\IntegralData\yaml\LiHdata\integrals_lih_sto-3g_0.800.nw.out.yaml by Qubitization with 12 spin-orbitals. It took 14012 ms. Gate count statistics: # T:191376, # Rotations:315252, # CNOT:1241136, Gate Count results on ..\IntegralData\yaml\LiHdata\integrals_lih_sto-3g_0.800.nw.out.yaml by Optimized Qubitization with 12 spin-orbitals. It took 44345 ms. Gate count statistics: # T:711360, # Rotations:1620, # CNOT:2118168 </code></pre> <p>See Trotter with 0 cost, and the higher T cost of Optimized as compared to non-optimized. </p> <p>Any help would be appreciated greatly!</p>
2020-03-11 14:41:33.737000+00:00
2020-04-08 23:58:17.673000+00:00
null
q#
['https://github.com/microsoft/Quantum/tree/master/samples/chemistry/GetGateCount', 'https://github.com/microsoft/Quantum/blob/master/samples/chemistry/GetGateCount/Operation.qs', 'https://arxiv.org/abs/1904.01131v1', 'https://arxiv.org/src/1904.01131v1/anc']
4
65,527,994
<p>The data structure could be named depending on how you implement it.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Producer</th> <th>Consumer</th> <th>Reference</th> <th>Name</th> </tr> </thead> <tbody> <tr> <td>Multi</td> <td>Single</td> <td><a href="https://arxiv.org/abs/2010.14189" rel="nofollow noreferrer">Jiffy: A Fast, Memory Efficient, Wait-Free Multi-Producers Single-Consumer Queue</a></td> <td>Queue</td> </tr> <tr> <td>Multi</td> <td>Multi</td> <td><a href="https://dl.acm.org/doi/10.1145/2695664.2695924" rel="nofollow noreferrer">A scalable multi-producer multi-consumer wait-free ring buffer</a></td> <td>Ring buffer</td> </tr> <tr> <td>Single</td> <td>Multi</td> <td><a href="https://www.computer.org/csdl/pds/api/csdl/proceedings/download-article/12OmNxH9XgT/pdf" rel="nofollow noreferrer">FFQ: A Fast Single-Producer/Multiple-ConsumerConcurrent FIFO Queue*</a></td> <td>Concurrent FIFO Queue</td> </tr> </tbody> </table> </div> <p>* Wait-free interface for producers, Lock-free for consumers</p>
2021-01-01 07:33:22.703000+00:00
2021-01-02 07:19:07.207000+00:00
2021-01-02 07:19:07.207000+00:00
null
65,437,528
<p>I am considering concurrent multi-producer multi-consumer data structure that has two methods: <code>success = try_put(elem)</code> and <code>success = try_get(&amp;elem)</code>. I assume that this data structure has a fixed amount of preallocated memory and in case it is full or empty, <code>success</code> boolean flag contains <code>false</code> meaning that the corresponding operation can't be made.</p> <p>The data structure doesn't enforce any ordering guarantees, so it doesn't matter is it a stack, queue, or something else. Does this data structure has some name in the literature?</p> <p>Is it possible to make the wait-free implementation of this data structure? Does the presence of constant time atomic operations is required, if yes how they should be used?</p>
2020-12-24 11:12:27.950000+00:00
2021-01-02 07:33:22.990000+00:00
2021-01-02 07:33:22.990000+00:00
data-structures|concurrency|lock-free|wait-free
['https://arxiv.org/abs/2010.14189', 'https://dl.acm.org/doi/10.1145/2695664.2695924', 'https://www.computer.org/csdl/pds/api/csdl/proceedings/download-article/12OmNxH9XgT/pdf']
3
53,566,987
<p>While nothing prevents you from clipping them as well, there is no reason to do so. A nice paper with reasons is <a href="https://arxiv.org/pdf/1211.5063.pdf" rel="nofollow noreferrer">here</a>; I'll try to give you an overview.</p> <p>The problem we're trying to solve by gradient clipping is that of <strong>exploding gradients:</strong> Let's assume that your RNN layer is computed like this:</p> <pre><code> h_t = sigmoid(U * x + W * h_tm1 + b) </code></pre> <p>So forgetting about the nonlinearity for a while, you could say that a current state <code>h_t</code> depends on some earlier state <code>h_{t-T}</code> as <code>h_t = W^T * h_{t-T} + input</code> (here <code>W^T</code> means <code>W</code> multiplied by itself <code>T</code> times, not the transpose). So if the matrix <code>W</code> inflates the hidden state, the influence of that old hidden state is growing exponentially with time. And the same happens as you backpropagate the gradient, resulting in gradients that will most likely get you to some useless point in the parameter space.</p> <p>On the other hand, the output layer is applied just once during both forward and backward pass, so while it may complicate the learning, it will only be by a 'constant' factor, independent of the unrolling in time.</p> <p>To get a bit more technical: The crucial quantity which determines whether you get exploding gradients is the largest eigenvalue of <code>W</code>. If it is larger than one (or smaller than -1, then it's real fun :-)), then you get exploding gradients. Conversely, if it's smaller than one, you'll suffer from <strong>vanishing gradients</strong>, making it difficult to learn long-term dependencies. You can find a nice discussion of these phenomena <a href="https://arxiv.org/pdf/1607.03474.pdf" rel="nofollow noreferrer">here</a>, with pointers to classical literature.</p> <p>If we take the sigmoid back into the picture, it becomes more difficult to get exploding gradients, as the gradients get dampened by at least a factor of 4 when being backpropagated through it. But still, have an eigenvalue larger than 4 and you'll have adventures :-) It's rather important to initialize carefully; the second paper gives some hints. With <code>tanh</code>, there is little dampening around zero and ReLU just propagates the gradient through, so these are rather prone to gradient explosions and thus sensitive to initialization and gradient clipping.</p> <p>Overall, <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="nofollow noreferrer">LSTMs</a> have better learning properties than vanilla RNNs, esp. with regard to the vanishing gradients. Though from my experience, gradient clipping is usually necessary with them as well.</p> <p>EDIT: When to clip? Right before the update of the weights, i.e. you do the backprop unaltered. The thing is that gradient clipping is kind of a dirty hack. You still want your gradient as precise as possible, so you'd better not distort it in the middle of the backprop. Just that if you see the gradient become very large, you say <em>Nah, this smells. I better make a tiny step.</em> and clipping is an easy way to do it (it may be that only some elements of the gradient are exploded while the others are still well behaved and informative). With most of the toolkits, <strong>you don't have the choice anyway</strong>, because the backpropagation happens atomically.</p>
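<p>To make the "clip right before the update" point concrete, here is a minimal PyTorch sketch (PyTorch is just one example toolkit; the pattern is the same elsewhere). Whether the output layer's weights are included in the clipped set is the judgement call discussed above; clipping the global norm over all parameters is a common default:</p> <pre><code>import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)                      # the output layer
params = list(rnn.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=0.01)

x = torch.randn(4, 50, 8)                    # 4 sequences of 50 time steps
y = torch.randn(4, 1)

out, _ = rnn(x)
loss = nn.functional.mse_loss(head(out[:, -1]), y)

optimizer.zero_grad()
loss.backward()                              # backprop unaltered

# Clip right before the update; the global norm here covers all parameters,
# recurrent weights and output layer alike.
torch.nn.utils.clip_grad_norm_(params, max_norm=5.0)
optimizer.step()
</code></pre>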
2018-12-01 01:25:49.887000+00:00
2018-12-02 18:58:51.890000+00:00
2018-12-02 18:58:51.890000+00:00
null
53,550,264
<p>I am currently training an LSTM RNN for time-series forecasting. I understand that it is common practice to clip the gradients of the RNN when it crosses a certain threshold. However, I am not completely clear on whether or not this includes the output layer. </p> <p>If we call the hidden layer of an RNN h, then the output is sigmoid(connected_weights*h + bias). I know that the gradients for the weights for determining the hidden layer are clipped, but does the same go for the output layer? </p> <p>In other words, are the gradients for the connected_weights also clipped in gradient clipping?</p>
2018-11-30 02:12:06.603000+00:00
2018-12-02 18:58:51.890000+00:00
2018-12-01 00:06:22.127000+00:00
machine-learning|neural-network|lstm|recurrent-neural-network|rnn
['https://arxiv.org/pdf/1211.5063.pdf', 'https://arxiv.org/pdf/1607.03474.pdf', 'http://colah.github.io/posts/2015-08-Understanding-LSTMs/']
3
61,405,978
<p>Usually you will have some constraints that will help you narrow down your options. For example, if you need to develop for a mobile device, your best bets are <a href="https://arxiv.org/abs/1704.04861" rel="nofollow noreferrer">MobileNets</a> or <a href="https://arxiv.org/abs/1707.01083" rel="nofollow noreferrer">ShuffleNet</a> architectures. You can check each candidate model's performance on the <a href="http://www.image-net.org/challenges/LSVRC/" rel="nofollow noreferrer">ILSVRC</a> validation set and have an idea of their relative performance. From there you can select the model that performed best within your constraints (training resources available, time/memory limitations during inference). If you use transfer learning, you will need to decide how many layers to freeze. Although that will depend on your training data (more data means you can fine-tune more layers without risk of overfitting), there is still some trial and error involved. Then, based on the amount of time and resources you have, you can investigate other models.</p>
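<p>As a concrete illustration of the "how many layers to freeze" decision, here is a short Keras sketch using MobileNetV2 as the pretrained base (the head, class count and cut-off point are made-up examples):</p> <pre><code>import tensorflow as tf

base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")

# Freeze everything except the last 20 layers; the cut-off is a tunable guess
# and is exactly where the trial and error comes in.
for layer in base.layers[:-20]:
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),   # new task-specific head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
</code></pre>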
2020-04-24 10:11:12.367000+00:00
2020-04-24 10:11:12.367000+00:00
null
null
61,379,006
<p>So I am in the requirement gathering phase of implementing CNN project for a use case. I am eager to know how picking up a suitable model for the particular use case is done in the AI/ML industry. I saw there are a lot of CNN architectures but how to choose which architecture would suit my requirement? </p> <p>Is it done only in a trial by error basis or are there any specific method for picking the right architecture? If it is a trial by error process, isnt it cumbersome? If we need to use only trial by error method, how long is it going to help us?</p>
2020-04-23 03:46:33.077000+00:00
2020-04-24 10:11:12.367000+00:00
null
machine-learning|deep-learning|neural-network|computer-vision|conv-neural-network
['https://arxiv.org/abs/1704.04861', 'https://arxiv.org/abs/1707.01083', 'http://www.image-net.org/challenges/LSVRC/']
3
66,619,030
<h3>Addressing your problem</h3> <p>With the expression:</p> <pre><code>x2 = std::rand() % (x1 - 1) + (x1 - 3); </code></pre> <p>Using your sample input, <code>rand % (x1 - 1)</code> will be <code>rand % 7</code>, this will render you values from <code>0</code> to <code>6</code>, adding <code>5</code> to that you'll have values from <code>5</code> to <code>11</code>.</p> <p>If you are adamant in using <code>rand</code> you can use something like this:</p> <pre><code>x2 = rand() / (RAND_MAX / ((x1 - 1) - (x1 - 3) + 1) + 1) + x1 - 3; </code></pre> <p><a href="https://godbolt.org/z/WPnh1v" rel="nofollow noreferrer">Live sample</a></p> <p>The interval is corrected, and this is a less biased method of obtaining the random value than using the modulo, though this problem is never completely solvable when using <code>rand</code>, <strong>it's a known drawback</strong>, there are others, you can find several threads in the website which provide solid reasoning as to why you shouldn't use it, like, for instance:</p> <p><a href="https://stackoverflow.com/q/52869166/6865932">Why is the use of rand() considered bad?</a></p> <p><a href="https://stackoverflow.com/q/10984974/6865932">Why do people say there is modulo bias when using a random number generator?</a></p> <p>You also have this more technical document kindly provided by <a href="https://stackoverflow.com/users/780717/njuffa">njuffa</a>:</p> <p><a href="https://arxiv.org/pdf/1805.10941.pdf" rel="nofollow noreferrer">Fast Random Integer Generation in an Interval</a></p> <p>Among many others.</p> <hr /> <h3>Recommend method</h3> <p>C++ <a href="https://en.cppreference.com/w/cpp/header/random" rel="nofollow noreferrer"><code>&lt;random&gt;</code></a> header provides better random number generation facilities, for example using <a href="https://en.cppreference.com/w/cpp/numeric/random/random_device" rel="nofollow noreferrer"><code>std::random_device</code></a> and the <a href="https://en.cppreference.com/w/cpp/numeric/random/mersenne_twister_engine" rel="nofollow noreferrer">Mersenne twister engine</a>:</p> <pre><code>#include &lt;iostream&gt; #include &lt;random&gt; int main() { int x1; int x2; std::random_device rand_d; // random number from hardware std::mt19937 generator(rand_d()); // seed while (true) { std::cin &gt;&gt; x1; std::uniform_int_distribution&lt;&gt; uniform_value(x1 - 3, x1 - 1); // setup the range x2 = uniform_value(generator); // generante the value std::cout &lt;&lt; &quot;result : &quot; &lt;&lt; x2 &lt;&lt; &quot;\n&quot;; } } </code></pre> <p><a href="https://godbolt.org/z/xfKfGd" rel="nofollow noreferrer">Live sample</a></p>
2021-03-13 22:07:04.850000+00:00
2021-03-14 17:00:27.753000+00:00
2021-03-14 17:00:27.753000+00:00
null
66,618,936
<p>I want to generate random numbers that depend on input from the user.</p> <p>For example, if the user inputs <code>8</code>, I want to generate a number from <code>5</code> to <code>7</code>, i.e. <code>(8 - 3)</code> to <code>(8 - 1)</code>.</p> <p>My code:</p> <pre><code>#include &lt;iostream&gt; #include &lt;time.h&gt; int main() { int x1; int x2; std::srand(time(0)); while (true) { std::cout &lt;&lt; &quot;input : &quot;; std::cin &gt;&gt; x1; x2 = std::rand() % (x1 - 1) + (x1 - 3); std::cout &lt;&lt; &quot;result : &quot;; std::cout &lt;&lt; x2 &lt;&lt; std::endl; } return 0; } </code></pre> <p>But the output is:</p> <pre><code>input : 8 result : 7 input : 8 result : 10 input : 8 result : 8 </code></pre> <p>And I want <code>result</code> to be from <code>5</code> to <code>7</code>.</p>
2021-03-13 21:57:17.863000+00:00
2021-03-14 17:00:27.753000+00:00
2021-03-14 00:07:43.783000+00:00
c++|random
['https://godbolt.org/z/WPnh1v', 'https://stackoverflow.com/q/52869166/6865932', 'https://stackoverflow.com/q/10984974/6865932', 'https://stackoverflow.com/users/780717/njuffa', 'https://arxiv.org/pdf/1805.10941.pdf', 'https://en.cppreference.com/w/cpp/header/random', 'https://en.cppreference.com/w/cpp/numeric/random/random_device', 'https://en.cppreference.com/w/cpp/numeric/random/mersenne_twister_engine', 'https://godbolt.org/z/xfKfGd']
9
45,042,397
<p>The reason why this error occurred is that the model always expects a <strong>batch</strong> of examples - not a <strong>single</strong> example. This diverges from the common understanding of models as mathematical functions of their inputs. The reasons why the model expects batches are:</p> <ol> <li>Models are computationally designed to work faster on batches in order to speed up training.</li> <li>There are algorithms which take into account the batch nature of the input (e.g. <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer"><strong>Batch Normalization</strong></a> or <strong>GAN</strong> training tricks).</li> </ol> <p>So the four dimensions come from a first dimension, which is the <strong>sample / batch</strong> dimension, and then the next 3 dimensions are the image dims.</p>
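<p>As a minimal sketch of the usual fix, add the missing batch axis before calling <code>predict</code>; the array shape and names mirror the question below, and the final comment only indicates where this would plug in:</p>
<pre><code>import numpy as np

# Stand-in for the (299, 299, 3) array that image.img_to_array produces in the question.
elephant = np.zeros((299, 299, 3), dtype=np.float32)

batch = np.expand_dims(elephant, axis=0)   # shape becomes (1, 299, 299, 3)
print(elephant.shape, batch.shape)

# model.predict(preprocess_input(batch)) would then receive the 4-D input it expects.
</code></pre>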
2017-07-11 18:46:55.673000+00:00
2017-07-11 18:46:55.673000+00:00
null
null
45,041,850
<p>I'm trying to create an example using the Keras built in the latest version of TensorFlow from Google. This example should be able to classify a classic image of an elephant. The code looks like this:</p> <pre><code># Import a few libraries for use later from PIL import Image as IMG from tensorflow.contrib.keras.python.keras.preprocessing import image from tensorflow.contrib.keras.python.keras.applications.inception_v3 import InceptionV3 from tensorflow.contrib.keras.python.keras.applications.inception_v3 import preprocess_input, decode_predictions # Get a copy of the Inception model print('Loading Inception V3...\n') model = InceptionV3(weights='imagenet', include_top=True) print ('Inception V3 loaded\n') # Read the elephant JPG elephant_img = IMG.open('elephant.jpg') # Convert the elephant to an array elephant = image.img_to_array(elephant_img) elephant = preprocess_input(elephant) elephant_preds = model.predict(elephant) print ('Predictions: ', decode_predictions(elephant_preds)) </code></pre> <p>Unfortunately I'm getting an error when trying to evaluate the model with model.predict:</p> <pre><code>ValueError: Error when checking : expected input_1 to have 4 dimensions, but got array with shape (299, 299, 3) </code></pre> <p>This code is taken from and based on <a href="https://github.com/vml-ffleschner/coremltools-keras-inception-test" rel="nofollow noreferrer">the excellent example coremltools-keras-inception</a> and will be expanded more when it is figured out.</p>
2017-07-11 18:14:49.617000+00:00
2017-07-12 15:15:01.217000+00:00
2017-07-11 20:13:27.983000+00:00
machine-learning|tensorflow|neural-network|keras|coreml
['https://arxiv.org/abs/1502.03167']
1
59,375,520
<h1>The short answer</h1> <p>Many of the answers here rely on the widely-used mathematical definition [1]:</p> <blockquote> <ul> <li>Discriminative models directly learn the conditional predictive distribution <code>p(y|x)</code>.</li> <li>Generative models learn the joint distribution <code>p(x,y)</code> (or rather, <code>p(x|y)</code> and <code>p(y)</code>). <ul> <li>Predictive distribution <code>p(y|x)</code> can be obtained with Bayes' rule.</li> </ul></li> </ul> </blockquote> <p>Although very useful, this <strong>narrow definition</strong> assumes the supervised setting, and is less handy when examining unsupervised or semi-supervised methods. It also <strong>doesn't apply to many contemporary approaches for deep generative modeling</strong>. For example, now we have implicit generative models, e.g. Generative Adversarial Networks (GANs), which are sampling-based and don't even explicitly model the probability density <code>p(x)</code> (instead learning a divergence measure via the discriminator network). But we call them "generative models” since they are used to generate (high-dimensional [10]) samples.</p> <p>A <strong>broader and more fundamental definition</strong> [2] seems equally fitting for this general question:</p> <blockquote> <ul> <li>Discriminative models learn the boundary between classes. <ul> <li>So they can <em>discriminate</em> between different kinds of data instances.</li> </ul></li> <li>Generative models learn the distribution of data. <ul> <li>So they can <em>generate</em> new data instances.</li> </ul></li> </ul> </blockquote> <p><a href="https://i.stack.imgur.com/mipQS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mipQS.png" alt="From http://primo.ai/index.php?title=Discriminative_vs._Generative"></a> <em><a href="http://primo.ai/index.php?title=Discriminative_vs._Generative" rel="noreferrer">Image source</a></em></p> <hr> <h1>A closer look</h1> <p>Even so, this question implies somewhat of a false dichotomy [3]. The generative-discriminative "dichotomy" is in fact a <strong>spectrum</strong> which you can even smoothly interpolate between [4]. </p> <p>As a consequence, this distinction gets arbitrary and confusing, especially when many popular models do not neatly fall into one or the other [5,6], or are in fact hybrid models (combinations of classically "discriminative" and "generative" models).</p> <p>Nevertheless it's still a highly useful and common distinction to make. 
We can list some clear-cut examples of generative and discriminative models, both canonical and recent:</p> <ul> <li>Generative: Naive Bayes, latent Dirichlet allocation (LDA), Generative Adversarial Networks (GAN), Variational Autoencoders (VAE), normalizing flows.</li> <li>Discriminative: Support vector machine (SVM), logistic regression, most deep neural networks.</li> </ul> <p>There is also a lot of interesting work deeply examining the generative-discriminative divide [7] and spectrum [4,8], and even transforming discriminative models into generative models [9].</p> <p>In the end, definitions are constantly evolving, especially in this rapidly growing field :) It's best to take them with a pinch of salt, and maybe even redefine them for yourself and others.</p> <hr> <h1>Sources</h1> <ol> <li>Possibly originating from "Machine Learning - Discriminative and Generative" (Tony Jebara, 2004).</li> <li><a href="https://developers.google.com/machine-learning/gan/generative" rel="noreferrer">Crash Course in Machine Learning by Google</a></li> <li><a href="http://www.trivialorwrong.com/2016/05/22/the-generative-discriminative-fallacy.html" rel="noreferrer">The Generative-Discriminative Fallacy</a></li> <li><a href="https://arxiv.org/abs/1705.09011" rel="noreferrer">"Principled Hybrids of Generative and Discriminative Models" (Lasserre et al., 2006)</a></li> <li><a href="https://stats.stackexchange.com/questions/408421/is-the-only-difference-between-conditional-generative-models-and-discriminative">@shimao's question</a></li> <li><a href="https://stats.stackexchange.com/a/191320/188037">Binu Jasim's answer</a></li> <li>Comparing logistic regression and naive Bayes: <ul> <li><a href="https://cs.cmu.edu/~tom/mlbook/NBayesLogReg.pdf" rel="noreferrer">cs.cmu.edu/~tom/mlbook/NBayesLogReg.pdf</a></li> <li><a href="https://papers.nips.cc/paper/2020-on-discriminative-vs-generative-classifiers-a-comparison-of-logistic-regression-and-naive-bayes" rel="noreferrer">"On Discriminative vs. Generative classifiers"</a></li> <li><a href="https://link.springer.com/article/10.1007/s11063-008-9088-7" rel="noreferrer">Comment on "On Discriminative vs. Generative classifiers"</a></li> </ul></li> <li><a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/04/DengJaitly2015-ch1-2.pdf" rel="noreferrer">https://www.microsoft.com/en-us/research/wp-content/uploads/2016/04/DengJaitly2015-ch1-2.pdf</a></li> <li><a href="https://arxiv.org/abs/1912.03263" rel="noreferrer">"Your classifier is secretly an energy-based model" (Grathwohl et al., 2019)</a></li> <li><a href="https://deepgenerativemodels.github.io/notes/introduction/" rel="noreferrer">Stanford CS236 notes</a>: Technically, a probabilistic discriminative model is also a generative model of the labels conditioned on the data. However, the term generative models is typically reserved for high dimensional data.</li> </ol>
2019-12-17 13:40:39.577000+00:00
2020-05-29 12:44:00.817000+00:00
2020-05-29 12:44:00.817000+00:00
null
879,432
<p>What is the difference between a <strong>generative</strong> and a <strong>discriminative</strong> algorithm?</p>
2009-05-18 19:44:45.220000+00:00
2021-01-30 14:08:13.183000+00:00
2020-12-05 01:21:59.313000+00:00
algorithm|machine-learning|generative
['https://i.stack.imgur.com/mipQS.png', 'http://primo.ai/index.php?title=Discriminative_vs._Generative', 'https://developers.google.com/machine-learning/gan/generative', 'http://www.trivialorwrong.com/2016/05/22/the-generative-discriminative-fallacy.html', 'https://arxiv.org/abs/1705.09011', 'https://stats.stackexchange.com/questions/408421/is-the-only-difference-between-conditional-generative-models-and-discriminative', 'https://stats.stackexchange.com/a/191320/188037', 'https://cs.cmu.edu/~tom/mlbook/NBayesLogReg.pdf', 'https://papers.nips.cc/paper/2020-on-discriminative-vs-generative-classifiers-a-comparison-of-logistic-regression-and-naive-bayes', 'https://link.springer.com/article/10.1007/s11063-008-9088-7', 'https://www.microsoft.com/en-us/research/wp-content/uploads/2016/04/DengJaitly2015-ch1-2.pdf', 'https://arxiv.org/abs/1912.03263', 'https://deepgenerativemodels.github.io/notes/introduction/']
13
65,050,272
<p><code>fastai</code> is using a lot of tricks under the hood. Here's a quick rundown of what they're doing and you're not.</p> <p><strong>Those are in the order that I think matters most</strong>, especially the first two should improve your scores.</p> <h1>TLDR</h1> <p>Use some scheduler (<a href="https://pytorch.org/docs/1.6.0/optim.html#torch.optim.lr_scheduler.CyclicLR" rel="noreferrer">torch.optim.lr_scheduler.CyclicLR</a> preferably) and <code>AdamW</code> instead of <code>SGD</code>.</p> <h1>Longer version</h1> <h2><a href="https://fastai1.fast.ai/basic_train.html#fit_one_cycle" rel="noreferrer"><code>fit_one_cycle</code></a></h2> <p>The <a href="https://fastai1.fast.ai/callbacks.one_cycle.html" rel="noreferrer">1 cycle policy by Leslie Smith</a> is used in <code>fastai</code>. In PyTorch one can create a similar routine using <a href="https://pytorch.org/docs/1.6.0/optim.html#torch.optim.lr_scheduler.CyclicLR" rel="noreferrer">torch.optim.lr_scheduler.CyclicLR</a> but that would require some manual setup.</p> <p>Basically it starts with a lower learning rate, gradually increases up to <code>5e-3</code> in your case and comes back to a lower learning rate again (making a cycle). You can adjust how the <code>lr</code> should rise and fall (in <code>fastai</code> it does so using cosine annealing IIRC).</p> <p>Your learning rate is too high at the beginning, some scheduler should help, test it out first of all.</p> <h2>Optimizer</h2> <p>In the provided code snippet you use <code>torch.optim.SGD</code> (as <code>optim_fn</code> is <code>None</code> and the default is set) which is harder to set up correctly (usually).</p> <p>On the other hand, if you manage to set it up manually correctly, you might generalize better.</p> <p>Also <code>fastai</code> <strong>does not use <code>Adam</code> by default!</strong> It uses <a href="https://pytorch.org/docs/1.6.0/optim.html#torch.optim.AdamW" rel="noreferrer"><code>AdamW</code></a> if <code>true_wd</code> is set (I think, it will be the default in your case anyway, see <a href="https://github.com/fastai/fastai1/blob/master/fastai/basic_train.py#L148" rel="noreferrer">source code</a>). <code>AdamW</code> decouples weight decay from the adaptive learning rate, which should improve convergence (read <a href="https://towardsdatascience.com/why-adamw-matters-736223f31b5d" rel="noreferrer">here</a> or the <a href="https://arxiv.org/abs/1711.05101" rel="noreferrer">original paper</a>).</p> <h2>Number of epochs</h2> <p>Set the same number of epochs if you want to compare both approaches, currently it's apples to oranges.</p> <h2>Gradient clipping</h2> <p>You do not clip gradients (it is commented out); it might help or not depending on the task. Would not focus on that one for now tbh.</p> <h2>Other tricks</h2> <p>Read about <a href="https://fastai1.fast.ai/basic_train.html#Learner" rel="noreferrer">Learner</a> and <a href="https://fastai1.fast.ai/basic_train.html#fit_one_cycle" rel="noreferrer">fit_one_cycle</a> and try to set up something similar in PyTorch (rough guidelines described above).</p> <p>Also you might use some form of data augmentation to improve the scores even further, but that's out of the question's scope I suppose.</p>
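<p>To make that concrete, a rough PyTorch sketch (not from the original answer) combining <code>AdamW</code> with <code>OneCycleLR</code>, the built-in scheduler closest to <code>fit_one_cycle</code>; the tiny model and random tensors are placeholders for the real WideResNet22 and data loaders:</p>
<pre><code>import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-ins -- replace with your WideResNet22 and real data loaders.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
train_loader = DataLoader(TensorDataset(torch.randn(256, 3, 32, 32),
                                        torch.randint(0, 10, (256,))), batch_size=32)
num_epochs = 5

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=5e-3, epochs=num_epochs, steps_per_epoch=len(train_loader))

for epoch in range(num_epochs):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(xb), yb)
        loss.backward()
        nn.utils.clip_grad_value_(model.parameters(), 0.1)  # mirrors learner.clip = 0.1
        optimizer.step()
        scheduler.step()  # OneCycleLR is stepped every batch, not every epoch
</code></pre>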
2020-11-28 13:49:26.910000+00:00
2020-11-28 13:49:26.910000+00:00
null
null
65,049,435
<p>I have two training python scripts. One using Pytorch's API for classification training and another one is using Fast-ai. Using Fast-ai has much better results.</p> <p>Training outcomes are as follows.</p> <pre><code>Fastai epoch train_loss valid_loss accuracy time 0 0.205338 2.318084 0.466482 23:02 1 0.182328 0.041315 0.993334 22:51 2 0.112462 0.064061 0.988932 22:47 3 0.052034 0.044727 0.986920 22:45 4 0.178388 0.081247 0.980883 22:45 5 0.009298 0.011817 0.996730 22:44 6 0.004008 0.003211 0.999748 22:43 Using Pytorch Epoch [1/10], train_loss : 31.0000 , val_loss : 1.6594, accuracy: 0.3568 Epoch [2/10], train_loss : 7.0000 , val_loss : 1.7065, accuracy: 0.3723 Epoch [3/10], train_loss : 4.0000 , val_loss : 1.6878, accuracy: 0.3889 Epoch [4/10], train_loss : 3.0000 , val_loss : 1.7054, accuracy: 0.4066 Epoch [5/10], train_loss : 2.0000 , val_loss : 1.7154, accuracy: 0.4106 Epoch [6/10], train_loss : 2.0000 , val_loss : 1.7232, accuracy: 0.4144 Epoch [7/10], train_loss : 2.0000 , val_loss : 1.7125, accuracy: 0.4295 Epoch [8/10], train_loss : 1.0000 , val_loss : 1.7372, accuracy: 0.4343 Epoch [9/10], train_loss : 1.0000 , val_loss : 1.6871, accuracy: 0.4441 Epoch [10/10], train_loss : 1.0000 , val_loss : 1.7384, accuracy: 0.4552 </code></pre> <p>Using Pytorch is not converging. I used the same network (Wideresnet22) and both are trained from scratch without pretrained model.</p> <p>The network is <a href="https://www.dropbox.com/s/xuv0uyauw85ahzu/wideresnet22.py?dl=0" rel="nofollow noreferrer">here</a>.</p> <p>Training using Pytorch is <a href="https://www.dropbox.com/s/wp2fsfrrt5f6u82/training_using_pytorch.py?dl=0" rel="nofollow noreferrer">here</a>.</p> <p>Using Fastai is as follows.</p> <pre><code>from fastai.basic_data import DataBunch from fastai.train import Learner from fastai.metrics import accuracy #DataBunch takes data and internall create data loader data = DataBunch.create(train_ds, valid_ds, bs=batch_size, path='./data') #Learner uses Adam as default for learning learner = Learner(data, model, loss_func=F.cross_entropy, metrics=[accuracy]) #Gradient is clipped learner.clip = 0.1 #learner finds its learning rate learner.lr_find() learner.recorder.plot() #Weight decay helps to lower down weight. Learn in https://towardsdatascience.com/ learner.fit_one_cycle(5, 5e-3, wd=1e-4) </code></pre> <p>What could be wrong in my training algorithm using Pytorch?</p>
2020-11-28 12:12:57.197000+00:00
2020-11-28 13:49:26.910000+00:00
null
python|pytorch|fast-ai
['https://pytorch.org/docs/1.6.0/optim.html#torch.optim.lr_scheduler.CyclicLR', 'https://fastai1.fast.ai/basic_train.html#fit_one_cycle', 'https://fastai1.fast.ai/callbacks.one_cycle.html', 'https://pytorch.org/docs/1.6.0/optim.html#torch.optim.lr_scheduler.CyclicLR', 'https://pytorch.org/docs/1.6.0/optim.html#torch.optim.AdamW', 'https://github.com/fastai/fastai1/blob/master/fastai/basic_train.py#L148', 'https://towardsdatascience.com/why-adamw-matters-736223f31b5d', 'https://arxiv.org/abs/1711.05101', 'https://fastai1.fast.ai/basic_train.html#Learner', 'https://fastai1.fast.ai/basic_train.html#fit_one_cycle']
10
72,429,921
<p>PostgreSQL detects serialization violations using a heuristic. Reading data causes predicate locks (<code>SIReadLock</code>) to be taken, and it checks for <em>dangerous structures</em>, which necessarily occur in every serialization violation. That means that you can get false positive serialization errors, but never false negatives.</p> <p>This is all described <a href="https://www.postgresql.org/docs/current/transaction-iso.html#XACT-SERIALIZABLE" rel="nofollow noreferrer">in the documentation</a> and in the <a href="https://arxiv.org/pdf/1208.4179.pdf" rel="nofollow noreferrer">scientific paper referenced there</a>, and we can hope that Amazon didn't hack up PostgreSQL too badly in that area.</p>
2022-05-30 06:21:59.247000+00:00
2022-05-30 06:32:38.590000+00:00
2022-05-30 06:32:38.590000+00:00
null
72,429,816
<p>Does anyone know how SQL databases detect serializable isolation violations (SIV's)? It seems like simply brute forcing every permutation of transaction executions to find a match for the concurrent execution results to verify serializability wouldn't scale.</p> <p>According to this paper from a third party researcher: <a href="https://amazonredshiftresearchproject.org/white_papers/downloads/multi_version_concurrency_control_and_serialization_isolation_failure.pdf" rel="nofollow noreferrer">https://amazonredshiftresearchproject.org/white_papers/downloads/multi_version_concurrency_control_and_serialization_isolation_failure.pdf</a></p> <p>SIV's occur when two transactions are occurring at the same time and the more recent one commits some deleted rows that the less recent transaction later tries to delete as well. This is a situation that MVCC is unable to deal with and thus has to abort with SIV.</p> <p>This makes sense for detecting SIV's involving queries that delete rows in MVCC, but I don't understand how SIV's are detected when only select and insert queries are used. For example, this example in AWS docs: <a href="https://aws.amazon.com/premiumsupport/knowledge-center/redshift-serializable-isolation/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/redshift-serializable-isolation/</a></p> <p>Does anyone have any idea?</p>
2022-05-30 06:11:51.107000+00:00
2022-05-30 15:27:53.867000+00:00
2022-05-30 06:21:33.613000+00:00
sql|amazon-redshift|isolation-level
['https://www.postgresql.org/docs/current/transaction-iso.html#XACT-SERIALIZABLE', 'https://arxiv.org/pdf/1208.4179.pdf']
2
46,904,460
<p>It would result in [4, 6], and you can find out more in <a href="https://arxiv.org/pdf/1512.03385.pdf" rel="nofollow noreferrer">this paper</a> </p>
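<p>A tiny sketch to make the distinction concrete (PyTorch is used here purely for illustration):</p>
<pre><code>import torch

x = torch.tensor([1.0, 2.0])    # identity / skip connection
fx = torch.tensor([3.0, 4.0])   # output of the residual branch
print(x + fx)                   # tensor([4., 6.])  element-wise addition, as in ResNet
print(torch.cat([x, fx]))       # tensor([1., 2., 3., 4.])  what concatenation would give
</code></pre>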
2017-10-24 07:19:37.013000+00:00
2018-06-20 17:27:58.983000+00:00
2018-06-20 17:27:58.983000+00:00
null
46,902,386
<p>With the residual block in residual neural networks, is the addition at the end of the block true element addition or is it concatenation?</p> <p>For example, would <code>addition([1, 2], [3, 4])</code> produce <code>[1, 2, 3, 4]</code> or <code>[4, 6]</code> ?</p>
2017-10-24 04:45:48.290000+00:00
2020-10-08 11:30:19.103000+00:00
2018-06-20 17:29:58.433000+00:00
neural-network|resnet|deep-residual-networks
['https://arxiv.org/pdf/1512.03385.pdf']
1
33,903,919
<p>Okay so <a href="https://stackoverflow.com/users/107090/lhf">lhf</a> found a good solution by suggesting the <a href="http://www.gnu.org/software/src-highlite/" rel="nofollow noreferrer">GNU source-hightlight</a> package. I basically took out each snippet of lua code from the latex file, put it into an appropriately named <code>[snippet].lua</code> file and ran the following on it to generate a <code>[snippet]-lua.tex</code> :</p> <p><code>source-highlight -s lua -f latex -i [snippet].lua -o [snippet]-lua.tex</code></p> <p>And then I included each such file into the main latex file using :</p> <p><code>\input{[snippet]-lua}</code></p> <p>The result really isn't as nice as that of the minted package, but I am tired of trying to convince the arXiv admin to support minted...</p>
2015-11-24 21:13:04.157000+00:00
2015-11-24 21:13:04.157000+00:00
2017-05-23 11:52:55.303000+00:00
null
33,899,642
<p>I have a latex file which needed to include snippets of Lua code (for display, not execution), so I used the <a href="https://github.com/gpoore/minted">minted</a> package. It requires latex to be run with the <code>latex -shell-escape</code> flag. </p> <p>I am trying to upload a PDF submission to <a href="http://arxiv.org">arXiv</a>. The site requires these to be submitted as <code>.tex</code>, <code>.sty</code> and <code>.bbl</code>, which they will automatically compile to PDF from latex. When I tried to submit to arXiv, I learned that there was no way for them to activate the <code>-shell-escape</code> flag. </p> <p>So I was wondering if any of you knew a way to highlight Lua code in latex without the <code>-shell-escape</code> flag. I tried the listings package, but I can't get it to work for Lua on my Ubuntu computer.</p>
2015-11-24 17:03:50.693000+00:00
2015-11-25 01:43:49.647000+00:00
null
pdf|lua|latex|syntax-highlighting
['https://stackoverflow.com/users/107090/lhf', 'http://www.gnu.org/software/src-highlite/']
2
36,778,209
<p>"Worstsort" has a complexity of <a href="https://i.stack.imgur.com/iccKN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iccKN.png" alt="\Omega((n!^{(f(n))})^2)"></a> where <a href="https://i.stack.imgur.com/aROy2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aROy2.png" alt="n!^{(m)} = (...((n!)!)!...)! ="></a> factorial of n iterated m times. The paper by Miguel A. Lerma can be found <a href="http://arxiv.org/pdf/1406.1077v1.pdf" rel="nofollow noreferrer">here</a>.</p>
2016-04-21 18:56:43.267000+00:00
2016-04-21 18:56:43.267000+00:00
null
null
7,907,299
<p>For an array of integers, what is the least efficient way of sorting the array. The function should make progress in each step (eg no infinity loop). What is the runtime of that algorithm? </p>
2011-10-26 18:39:21.170000+00:00
2016-04-21 18:56:43.267000+00:00
2011-10-26 18:43:08.153000+00:00
java|algorithm|sorting
['https://i.stack.imgur.com/iccKN.png', 'https://i.stack.imgur.com/aROy2.png', 'http://arxiv.org/pdf/1406.1077v1.pdf']
3
63,487,981
<p>You may read and merge them into a single Pandas Dataframe.</p> <p>With the wavenumbers in the columns and transmittances in 9 rows, <a href="https://stackoverflow.com/questions/23282130/principal-components-analysis-using-pandas-dataframe">apply PCA</a> for dimensionality reduction and visualizations (<a href="https://www.researchgate.net/publication/8521188_Principal_Component_Analysis_Applied_to_Fourier_Transform_Infrared_Spectroscopy_for_the_Design_of_Calibration_Sets_for_Glycerol_Prediction_Models_in_Wine_and_for_the_Detection_and_Classification_of_Ou" rel="nofollow noreferrer">example</a>).</p> <p>Alternatively, you may first extract features from the 9 spectra (max transmittance, min transmittance, centroid wavelength, etc.) and then apply the PCA in this feature space (<a href="https://arxiv.org/pdf/1401.8212.pdf" rel="nofollow noreferrer">example</a>).</p>
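<p>A rough Python sketch of that pipeline (the folder name and the exact CSV column headers are assumptions based on the question; if the wavenumber grids differ between files, interpolate them onto a common grid before stacking):</p>
<pre><code>import glob
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

spectra, names = [], []
for path in sorted(glob.glob('spectra/*.csv')):        # folder name is an assumption
    df = pd.read_csv(path)                              # assumed columns: Wavenumber, Transmittance
    spectra.append(df.set_index('Wavenumber')['Transmittance'])
    names.append(path)

# One DataFrame: 9 rows (molecules) x N columns (wavenumbers).
X = pd.concat(spectra, axis=1, keys=names).T

pca = PCA(n_components=2)
scores = pca.fit_transform(X.fillna(0).values)          # fillna is a crude fallback; prefer interpolation
plt.scatter(scores[:, 0], scores[:, 1])
for name, (pc1, pc2) in zip(names, scores):
    plt.annotate(name, (pc1, pc2))
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()
</code></pre>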
2020-08-19 13:22:39.760000+00:00
2020-08-19 13:22:39.760000+00:00
null
null
63,429,469
<p>I have spectral (FTIR) data for several molecules in .csv form, and I want to be able to visualize and classify these molecules using Principal Components Analysis (PCA) in Python.</p> <p>There are 9 relevant .csv files (one for each molecule). In each .csv file, there are two columns: Wavenumber (inverse centimeters) and Transmittance (%). How can I take the data for all 9 molecules and then do visualization and classification using PCA? Again, in Python? Any links to tutorials or code sources that are able to do this for multiple signal data would be very helpful.</p> <p>Thanks!</p>
2020-08-15 18:22:36.970000+00:00
2020-08-19 13:22:39.760000+00:00
2020-08-15 19:09:53.433000+00:00
python|signals|signal-processing|pca
['https://stackoverflow.com/questions/23282130/principal-components-analysis-using-pandas-dataframe', 'https://www.researchgate.net/publication/8521188_Principal_Component_Analysis_Applied_to_Fourier_Transform_Infrared_Spectroscopy_for_the_Design_of_Calibration_Sets_for_Glycerol_Prediction_Models_in_Wine_and_for_the_Detection_and_Classification_of_Ou', 'https://arxiv.org/pdf/1401.8212.pdf']
3
67,356,414
<p>If you mean the Leaky ReLU, I can say that, in fact, the Parametric ReLU (PReLU) is the activation function that generalizes the traditional rectified unit as well as the leaky ReLU. And yes, PReLU improves model fitting with no significant extra computational cost and little overfitting risk.</p> <p>For more details, you can check out this link <a href="https://arxiv.org/abs/1502.01852" rel="nofollow noreferrer">Delving Deep into Rectifiers</a></p>
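<p>Mixing activations per layer is perfectly legal in Keras; a minimal sketch (the layer sizes and input shape are arbitrary placeholders, not taken from the question):</p>
<pre><code>import tensorflow as tf

# Toy binary classifier using a different activation in each hidden layer.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(32),
    tf.keras.layers.LeakyReLU(),   # leaky ReLU on the second hidden layer
    tf.keras.layers.Dense(16),
    tf.keras.layers.PReLU(),       # learnable (parametric) negative slope instead
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
</code></pre>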
2021-05-02 12:28:39.877000+00:00
2021-05-02 12:28:39.877000+00:00
null
null
67,356,013
<p>I have a binary classification problem for my neural network.</p> <p>I already got good results using the ReLU activation function in my hidden layer and the sigmoid function in the output layer. Now I'm trying to get even better results. I added a second hidden layer with the ReLU activation function, and the results got even better. I tried to use the leaky ReLU function for the second hidden layer instead of the ReLU function and got even better results, but I'm not sure if this is even allowed.</p> <p>So I have something like that: Hidden layer 1: ReLU activation function Hidden layer 2: leaky ReLU activation function Hidden layer 3: sigmoid activation function</p> <p>I can't find many resources on it, and those I found always use the same activation function on all hidden layers.</p>
2021-05-02 11:48:06.570000+00:00
2021-05-02 12:28:39.877000+00:00
null
tensorflow|neural-network|multiclass-classification|activation-function|relu
['https://arxiv.org/abs/1502.01852']
1
69,896,653
<p>You can try using Mask R-CNN. Refer to these links for your understanding:</p> <ul> <li><a href="https://github.com/matterport/Mask_RCNN" rel="nofollow noreferrer">https://github.com/matterport/Mask_RCNN</a></li> <li><a href="https://arxiv.org/abs/1703.06870" rel="nofollow noreferrer">https://arxiv.org/abs/1703.06870</a></li> </ul> <p>Basically, you need to make an annotation (polygonal) for your dataset with tools like VIA image annotation tool (<a href="https://www.robots.ox.ac.uk/%7Evgg/software/via/" rel="nofollow noreferrer">https://www.robots.ox.ac.uk/~vgg/software/via/</a>) or MakeSense (<a href="https://www.makesense.ai/" rel="nofollow noreferrer">https://www.makesense.ai/</a>). These are the open source tools that I can recommend. After training, the network can predict the bounding box as well as the boundary of the detected objects.</p>
2021-11-09 10:38:57.060000+00:00
2021-11-09 10:38:57.060000+00:00
null
null
69,175,548
<p>I know nothing on the subject of deep learning.</p> <p>I am looking for references to <strong>build a deep learning algorithm</strong> to detect ROI in given images. My goal is to <strong>compare deep learning algorithms with usual image processing algorithms I have already made</strong>. The input images look like this :</p> <p><a href="https://i.stack.imgur.com/YYDHY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YYDHY.jpg" alt="enter image description here" /></a></p> <p>The output of the algorithm should look like this :</p> <p><a href="https://i.stack.imgur.com/krdit.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/krdit.jpg" alt="enter image description here" /></a></p> <p>Q1: Do you have any <strong>references</strong> that if I read them would let me build such a <strong>deep learning algorithm</strong> from start to finish ?</p> <p>Q2: Otherwise, do such algorithms already exist and are freely available ? (<strong>Note:</strong> Such algorithms should produce precise ROI detection not broad rectangles encircling the bright regions).</p>
2021-09-14 09:47:19.750000+00:00
2022-09-08 18:11:16.563000+00:00
null
image-processing|deep-learning|roi
['https://github.com/matterport/Mask_RCNN', 'https://arxiv.org/abs/1703.06870', 'https://www.robots.ox.ac.uk/%7Evgg/software/via/', 'https://www.makesense.ai/']
4
45,878,676
<p>You can also try <a href="https://arxiv.org/abs/1707.06168" rel="nofollow noreferrer">channel pruning</a> your network. It's an algorithm that effectively prunes channels in each layer, which could speed up the network 2-5x. The GitHub address is: <a href="https://github.com/yihui-he/channel-pruning" rel="nofollow noreferrer">https://github.com/yihui-he/channel-pruning</a></p>
2017-08-25 09:43:52.343000+00:00
2017-08-25 09:43:52.343000+00:00
null
null
30,822,009
<p>I am using python to use caffe classifier. I got image from my camera and peform predict image from training set. It work well but the problem is speed very slow. I thinks just 4 frames/second. Could you suggest to me some way to improve computational time in my code? The problem can be explained as following. I have to reload an network model <code>age_net.caffemodel</code> that its size about 80MB by following code</p> <pre><code>age_net_pretrained='./age_net.caffemodel' age_net_model_file='./deploy_age.prototxt' age_net = caffe.Classifier(age_net_model_file, age_net_pretrained, mean=mean, channel_swap=(2,1,0), raw_scale=255, image_dims=(256, 256)) </code></pre> <p>And for each input image (<code>caffe_input</code>), I call the predict function</p> <pre><code>prediction = age_net.predict([caffe_input]) </code></pre> <p>I think that due to size of network is very large. Then predict function takes long time to predict image. I think the slow time is from it.<br> This is my full reference code. It changed by me. </p> <pre><code>from conv_net import * import matplotlib.pyplot as plt import numpy as np import cv2 import glob import os caffe_root = './caffe' import sys sys.path.insert(0, caffe_root + 'python') import caffe DATA_PATH = './face/' cnn_params = './params/gender_5x5_5_5x5_10.param' face_params = './params/haarcascade_frontalface_alt.xml' def format_frame(frame): img = frame.astype(np.float32)/255. img = img[...,::-1] return img if __name__ == '__main__': files = glob.glob(os.path.join(DATA_PATH, '*.*')) # This is the configuration of the full convolutional part of the CNN # `d` is a list of dicts, where each dict represents a convolution-maxpooling # layer. # Eg c1 - first layer, convolution window size # p1 - first layer pooling window size # f_in1 - first layer no. of input feature arrays # f_out1 - first layer no. of output feature arrays d = [{'c1':(5,5), 'p1':(2,2), 'f_in1':1, 'f_out1':5}, {'c2':(5,5), 'p2':(2,2), 'f_in2':5, 'f_out2':10}] # This is the configuration of the mlp part of the CNN # first tuple has the fan_in and fan_out of the input layer # of the mlp and so on. nnet = [(800,256),(256,2)] c = ConvNet(d,nnet, (45,45)) c.load_params(cnn_params) face_cascade = cv2.CascadeClassifier(face_params) cap = cv2.VideoCapture(0) cv2.namedWindow("Image", cv2.WINDOW_NORMAL) plt.rcParams['figure.figsize'] = (10, 10) plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' mean_filename='./mean.binaryproto' proto_data = open(mean_filename, "rb").read() a = caffe.io.caffe_pb2.BlobProto.FromString(proto_data) mean = caffe.io.blobproto_to_array(a)[0] age_net_pretrained='./age_net.caffemodel' age_net_model_file='./deploy_age.prototxt' age_net = caffe.Classifier(age_net_model_file, age_net_pretrained, mean=mean, channel_swap=(2,1,0), raw_scale=255, image_dims=(256, 256)) age_list=['(0, 2)','(4, 6)','(8, 12)','(15, 20)','(25, 32)','(38, 43)','(48, 53)','(60, 100)'] while(True): val, image = cap.read() if image is None: break image = cv2.resize(image, (320,240)) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) faces = face_cascade.detectMultiScale(gray, 1.3, 5, minSize=(30,30)) for f in faces: x,y,w,h = f cv2.rectangle(image, (x,y), (x+w,y+h), (0,255,255)) face_image_rgb = image[y:y+h, x:x+w] caffe_input = cv2.resize(face_image_rgb, (256, 256)).astype(np.float32) prediction = age_net.predict([caffe_input]) print 'predicted age:', age_list[prediction[0].argmax()] cv2.imshow('Image', image) ch = 0xFF &amp; cv2.waitKey(1) if ch == 27: break #break </code></pre>
2015-06-13 18:16:28.797000+00:00
2017-08-25 09:43:52.343000+00:00
2016-06-02 05:41:18.107000+00:00
python|computer-vision|neural-network|deep-learning|caffe
['https://arxiv.org/abs/1707.06168', 'https://github.com/yihui-he/channel-pruning']
2
4,988,447
<p>Is this the type of thing you are looking for?</p> <p><img src="https://i.stack.imgur.com/9Gbvc.png" alt="Sort"></p> <p>These types of rules can be generalised (e.g. add commutation rules for noncommuting objects, make it handle nonnumeric indices, etc...) and packaged up into an <code>NCMSort</code> routine. You can also optimize it by doing the sorting in a single pass by defining a unique <code>NCMOrder</code> function, e.g.</p> <pre><code>NCMSort[expr_] := expr /. a_NonCommutativeMultiply :&gt; a[[NCMOrder[a]]] </code></pre> <hr> <p>An aside: I used such a process in generating the results of <a href="http://arxiv.org/abs/1009.3298" rel="nofollow noreferrer">arXiv:1009.3298</a> -- the notebook will be distributed with the (soon to be released) longer paper.</p>
2011-02-14 02:51:38+00:00
2011-02-14 03:02:24.360000+00:00
2011-02-14 03:02:24.360000+00:00
null
4,988,323
<p>Using Subscript[variable, integer] in Mathematica 7.0+, I have expressions of the following form: </p> <pre><code>a_-4 ** b_1 ** a_-4 ** b_-4 ** a_1 ** c_-4 ** c_1 ** c_5 </code></pre> <p>I would like to simplify this expression. </p> <p>Rules:<br> * Variables with the same subscript do not commute,<br> * variables with different subscripts do commute. </p> <p>I need a way to simplify the expression and combine like terms (if possible); the output should be something like:</p> <pre><code>(a_-4)^2 ** b_-4 ** c_-4 ** b_1 ** a_1 ** c_1 ** c_5 </code></pre> <p>The most important thing I need is to order the terms in the expression by subscripts while preserving the rules about what commutes and what does not. The second thing I would like to do is to combine like terms once the order is correct. I need to at least order expressions like the one above in the following way: </p> <pre><code>a_-4 ** a_-4 ** b_-4 ** c_-4 ** b_1 ** a_1 ** c_1 ** c_5, </code></pre> <p>that is, commute variables with different subscripts while preserving the non-commutative nature of variables with the same subscripts. </p> <p>All ideas are welcome, thanks.</p>
2011-02-14 02:22:55.063000+00:00
2017-04-21 19:30:37.327000+00:00
2011-02-16 00:10:37.840000+00:00
wolfram-mathematica
['http://arxiv.org/abs/1009.3298']
1
5,205,490
<p>Yes (Self-Interpreting Finite State Machine)</p> <p>There is a paper here describing a very short one...</p> <p><a href="http://arxiv.org/abs/cs/0311032" rel="nofollow">http://arxiv.org/abs/cs/0311032</a></p> <p>but I'm not sure if it is available for free anywhere.</p> <p>Here is a self-interpreter for brainfuck:</p> <p><a href="http://www.iwriteiam.nl/Ha_bf_inter.html" rel="nofollow">http://www.iwriteiam.nl/Ha_bf_inter.html</a> </p>
2011-03-05 17:32:09.180000+00:00
2011-03-05 17:32:09.180000+00:00
null
null
5,205,469
<p>I'm sorry for this newbie question, but I need a quick answer to tell a friend if that's possible.</p>
2011-03-05 17:28:19.450000+00:00
2011-03-13 14:31:10.353000+00:00
null
turing-complete|fsm|pushdown-automaton|self-interpreter
['http://arxiv.org/abs/cs/0311032', 'http://www.iwriteiam.nl/Ha_bf_inter.html']
2
46,173,595
<p>Neural networks use gradient descent to train: In your high-dimensional parameter space, you always adjust the parameters in the direction of the steepest negative gradient to find a minimum. For that, your loss function has to be differentiable. The rounding function, however, is not <a href="http://mathworld.wolfram.com/NearestIntegerFunction.html" rel="nofollow noreferrer">(image source)</a>:</p> <p><a href="https://i.stack.imgur.com/9FVYk.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9FVYk.gif" alt="Src.: http://mathworld.wolfram.com/NearestIntegerFunction.html" /></a></p> <p>As you can see, the gradient is undefined exactly between two integers, and zero everywhere else. Thus, even if you would define the gradient at the discontinuities manually, your backpropagated gradient would always be zero due to the chain rule.</p> <p>I do not know the exact purpose of your network. However, it might be worth trying to convert your network from a regression problem (where you predict a continuous number) into a classification problem, where you predict a class score for each possible integer instead of rounding.</p> <p><strong>Update:</strong></p> <p>If you do masking or segmentation, the real-valued output will give you sort of a 'probability' (at least when using softmax in the last layer) that your pixel or voxel belongs to the region you want to mask. If you round the result, you lose important detail for training your network. A pixel with a score of 0.4 will be given the same score as one with 0.1. Thus, a small weight change will not change the loss of your network and gradient descent will not work. The <a href="https://arxiv.org/pdf/1606.04797.pdf" rel="nofollow noreferrer">original paper</a> introducing dice loss for segmentation also does not use rounding. If you want to map each pixel to foreground/background for visualization purposes, you should do it after computing the loss.</p> <p>However, you always have the possibility to define your own 'gradient', since gradient descent is not the only way to optimize. There are derivative-free optimization techniques. But be careful.</p> <p>Without trying if it works in practice, this would be my approach, when you really don't want to go without the round function (no guarantee that this will yield sensible results in any way): Using distribution theory, you could define a derivative of the round function, as a sum of the derivatives of many <a href="https://en.wikipedia.org/wiki/Heaviside_step_function#Antiderivative_and_derivative" rel="nofollow noreferrer">heaviside functions</a>, leaving you with a <a href="https://en.wikipedia.org/wiki/Dirac_comb" rel="nofollow noreferrer">dirac comb</a>. If you now replace the delta distributions with normal distributions with a small standard deviation, you get the effect that the gradient in between integers will lead them in the direction of the nearest integer (with the exception of exactly between, where the derivative of the normal distribution is 0).</p> <p><strong>Disclaimer:</strong> I've never seen something like this in use anywhere, and the best solution would be to just abandon the round function, but if you feel like experimenting a bit, you could try this. If anyone has any arguments why this is just plainly false, please tell me!</p>
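<p>In line with the advice above (compute the loss on the raw outputs and only round afterwards for visualization), a minimal "soft" dice loss in the same Keras-backend style as the question; the axis layout is copied from the question and this is a sketch, not the exact code from the paper:</p>
<pre><code>from keras import backend as K

def soft_dice_loss(y_true, y_pred, smooth=1e-5):
    # Dice computed on the raw sigmoid outputs -- no K.round(), so the
    # loss stays differentiable and gradients can flow.
    axes = [1, 2, 3]
    intersection = K.sum(y_true * y_pred, axis=axes)
    denom = K.sum(y_true, axis=axes) + K.sum(y_pred, axis=axes)
    return 1.0 - K.mean((2.0 * intersection + smooth) / (denom + smooth))
</code></pre>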
2017-09-12 10:06:36.863000+00:00
2017-09-13 06:44:37.077000+00:00
2020-06-20 09:12:55.060000+00:00
null
46,167,901
<p>I am using sigmoid activation in the second-last layer and then resizing using <code>tf.images.resize_images()</code> in the last layer. </p> <p>The target tensor has a maximum value of 1.0. In the dice error cost function.</p> <pre><code>def dice(y_true, y_pred): return 1.0-dice_coef(y_true, y_pred, 1e-5, 0.5) def dice_coef(y_true, y_pred, smooth, thresh, axis = [1,2,3]): y_pred = K.round(y_pred) inse = K.sum(K.dot(y_true, K.transpose(y_pred)), axis=axis) l = K.sum(y_pred, axis=axis) r = K.sum(y_true, axis=axis) hard_dice = (2. * inse + smooth) / (l + r + smooth) hard_dice = K.mean(hard_dice) return hard_dice </code></pre> <p>When I run the code I get the error below. However, the error goes away when I remove <code>K.round(y_pred)</code>. Any idea on how to solve this problem?</p> <pre><code>loss,acc,err = Final_Model.train_on_batch(Train_image,Label) File "C:\local\Anaconda3-4.1.1-Windows-x86_64\envs\tensorflow-cpu\lib\site-packages\keras\engine\training.py", line 1761, in train_on_batch self._make_train_function() File "C:\local\Anaconda3-4.1.1-Windows-x86_64\envs\tensorflow-cpu\lib\site-packages\keras\engine\training.py", line 960, in _make_train_function loss=self.total_loss) File "C:\local\Anaconda3-4.1.1-Windows-x86_64\envs\tensorflow-cpu\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper return func(*args, **kwargs) File "C:\local\Anaconda3-4.1.1-Windows-x86_64\envs\tensorflow-cpu\lib\site-packages\keras\optimizers.py", line 358, in get_updates new_a = self.rho * a + (1. - self.rho) * K.square(g) File "C:\local\Anaconda3-4.1.1-Windows-x86_64\envs\tensorflow-cpu\lib\site-packages\keras\backend\tensorflow_backend.py", line 1358, in square return tf.square(x) File "C:\local\Anaconda3-4.1.1-Windows-x86_64\envs\tensorflow-cpu\lib\site-packages\tensorflow\python\ops\math_ops.py", line 447, in square return gen_math_ops.square(x, name=name) File "C:\local\Anaconda3-4.1.1-Windows-x86_64\envs\tensorflow-cpu\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 2591, in square result = _op_def_lib.apply_op("Square", x=x, name=name) File "C:\local\Anaconda3-4.1.1-Windows-x86_64\envs\tensorflow-cpu\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 508, in apply_op (input_name, err)) ValueError: Tried to convert 'x' to a tensor and failed. Error: None values not supported` </code></pre>
2017-09-12 04:25:32.340000+00:00
2017-09-13 06:44:37.077000+00:00
2017-09-12 07:15:02.770000+00:00
python|machine-learning|tensorflow|keras
['http://mathworld.wolfram.com/NearestIntegerFunction.html', 'https://i.stack.imgur.com/9FVYk.gif', 'https://arxiv.org/pdf/1606.04797.pdf', 'https://en.wikipedia.org/wiki/Heaviside_step_function#Antiderivative_and_derivative', 'https://en.wikipedia.org/wiki/Dirac_comb']
5
9,985,761
<p>You need to compute the principal eigenvector of a 100 billion by 100 billion matrix. Unless it's extremely sparse, you cannot fit that inside your machine. So, you need a way to compute the leading eigenvector of a matrix when you can only look at a small part of your matrix at a time.</p> <p>Iterative methods to compute eigenvectors only require that you store a few vectors at each iteration (they'll each have 100 billion elements). Those may fit on your machine (with 4 byte floats you'll need around 375GB per vector). Once you have a candidate vector of rankings you can (very slowly) apply your giant matrix to it by reading the matrix in chunks (since you can look at 32 billion rows at a time you'll need just over 3 chunks). Repeat this process and you'll have the basics of the power method which is what gets used in pagerank. cf <a href="http://www.ams.org/samplings/feature-column/fcarc-pagerank" rel="nofollow noreferrer">http://www.ams.org/samplings/feature-column/fcarc-pagerank</a> and <a href="http://en.wikipedia.org/wiki/Power_iteration" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Power_iteration</a></p> <p>Of course the limiting factor here is how many times you need to examine the matrix. It turns out that by storing more than one candidate vector and using some randomized algorithms you can get good accuracy with fewer reads of your data. This is a current research topic in the applied math world. You can find more information here <a href="http://arxiv.org/abs/0909.4061" rel="nofollow noreferrer">http://arxiv.org/abs/0909.4061</a> , here <a href="http://arxiv.org/abs/0909.4061" rel="nofollow noreferrer">http://arxiv.org/abs/0909.4061</a> , and here <a href="http://arxiv.org/abs/0809.2274" rel="nofollow noreferrer">http://arxiv.org/abs/0809.2274</a> . There's code available here: <a href="http://code.google.com/p/redsvd/" rel="nofollow noreferrer">http://code.google.com/p/redsvd/</a> but you can't just use that off-the-shelf for the data sizes you're talking about.</p> <p>Another way you may go about this is to look into "incremental svd" which may suit your problem better but is a bit more complicated. Consider this note: <a href="http://www.cs.usask.ca/~spiteri/CSDA-06T0909e.pdf" rel="nofollow noreferrer">http://www.cs.usask.ca/~spiteri/CSDA-06T0909e.pdf</a> and this forum: <a href="https://mathoverflow.net/questions/32158/distributed-incremental-svd">https://mathoverflow.net/questions/32158/distributed-incremental-svd</a></p>
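<p>A toy NumPy sketch of the damped power iteration on a 4-page graph, purely for illustration; at the scales discussed above the matrix-vector product would of course be streamed chunk by chunk from disk rather than held in memory:</p>
<pre><code>import numpy as np

def pagerank_power_iteration(A, damping=0.85, iters=50):
    # A is a column-stochastic link matrix; column j spreads page j's rank
    # over the pages it links to. r is repeatedly multiplied by A with a
    # uniform teleport term, which is the power method with damping.
    n = A.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = damping * A.dot(r) + (1.0 - damping) / n
    return r

# Toy 4-page web (each column sums to 1).
A = np.array([[0.0, 0.0, 1.0, 0.5],
              [1/3, 0.0, 0.0, 0.0],
              [1/3, 0.5, 0.0, 0.5],
              [1/3, 0.5, 0.0, 0.0]])
print(pagerank_power_iteration(A))
</code></pre>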
2012-04-03 00:50:05.113000+00:00
2012-04-03 00:59:00.023000+00:00
2017-04-13 12:57:55.007000+00:00
null
9,985,551
<p>Sorry if this is dumb but I was just thinking I should give it a shot. Say I have a graph that's huge (for example, 100 billion nodes). Neo4J supports 32 billion and others support more or less the same, so say I cannot have the entire dataset in a database at the same time. Can I run pagerank on it if it's a directed graph (no loops) and each set of nodes connects to the next set of nodes (so no new links will be created backwards, only new links are created to new sets of data)?</p> <p>Is there a way I can somehow take the previous pagerank scores and apply them to new datasets (I only care about the pagerank for the most recent set of data but need the previous set's pagerank to derive the last set's data)?</p> <p>Does that make sense? If so, is it possible to do?</p>
2012-04-03 00:19:27.663000+00:00
2012-04-03 00:59:00.023000+00:00
null
graph|graph-theory|neo4j|pagerank
['http://www.ams.org/samplings/feature-column/fcarc-pagerank', 'http://en.wikipedia.org/wiki/Power_iteration', 'http://arxiv.org/abs/0909.4061', 'http://arxiv.org/abs/0909.4061', 'http://arxiv.org/abs/0809.2274', 'http://code.google.com/p/redsvd/', 'http://www.cs.usask.ca/~spiteri/CSDA-06T0909e.pdf', 'https://mathoverflow.net/questions/32158/distributed-incremental-svd']
8
58,044,089
<p>For CNN-RNN, some promising things to try:</p> <ul> <li><strong>Conv1D layers</strong>: <code>activation='relu'</code>, <code>kernel_initializer='he_normal'</code></li> <li><strong>LSTM layer</strong>: <code>activation='tanh'</code>, and <code>recurrent_dropout=.1, .2, .3</code></li> <li><strong>Optimizer</strong>: <code>Nadam</code>, <code>lr=2e-4</code> (Nadam may significantly outperform all other optimizers for RNNs)</li> <li><strong>batch_size</strong>: lower it. Unless you have 200+ batches in total, set <code>batch_size=32</code>; lower batch size better exploits the Stochastic mechanism of the optimizer and can improve generalization</li> <li><strong>Dropout</strong>: right after second <code>Conv1D</code>, with a rate <code>.1, .2</code> - or, after first <code>Conv1D</code>, with a rate <code>.25, .3</code>, but <em>only</em> if you use SqueezeExcite (see below), else <code>MaxPooling</code> won't work as well</li> <li><a href="https://arxiv.org/abs/1709.01507" rel="nofollow noreferrer">SqueezeExcite</a>: shown to enhance all CNN performance across a large variety of tasks; Keras implementation you can use below</li> <li><strong>BatchNormalization</strong>: while your model isn't large, it's still deep, and may benefit from one BN layer right after second <code>Conv1D</code></li> <li><strong>L2 weight decay</strong>: on <em>first</em> <code>Conv1D</code>, to prevent it from memorizing the input; try <code>1e-5, 1e-4</code>, e.g. <code>kernel_regularizer=l2(1e-4) # from keras.regularizers import l2</code></li> <li><strong>Preprocessing</strong>: make sure all data is normalized (or standardized if time-series), and batches are shuffled each epoch</li> </ul> <pre><code>def SqueezeExcite(_input): filters = _input._keras_shape[-1] se = GlobalAveragePooling1D()(_input) se = Reshape((1, filters))(se) se = Dense(filters//16,activation='relu', kernel_initializer='he_normal', use_bias=False)(se) se = Dense(filters, activation='sigmoid', kernel_initializer='he_normal', use_bias=False)(se) return multiply([_input, se]) </code></pre> <pre><code># Example usage x = Conv1D(filters=64, kernel_size=4, activation='relu', kernel_initializer='he_normal')(x) x = SqueezeExcite(x) # place after EACH Conv1D </code></pre>
2019-09-21 21:05:51.713000+00:00
2019-09-21 21:16:13.047000+00:00
2019-09-21 21:16:13.047000+00:00
null
58,043,002
<p>I'm currently working on online signature verification. The dataset has a variable shape of (x, 7) where x is the number of points a person used to sign their signature. I have the following model:</p> <pre><code> model = Sequential() #CNN model.add(Conv1D(filters=64, kernel_size=3, activation='sigmoid', input_shape=(None, 7))) model.add(MaxPooling1D(pool_size=3)) model.add(Conv1D(filters=64, kernel_size=2, activation='sigmoid')) #RNN model.add(Masking(mask_value=0.0)) model.add(LSTM(8)) model.add(Dense(2, activation='softmax')) opt = Adam(lr=0.0001) model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) model.summary() print(model.fit(x_train, y_train, epochs=100, verbose=2, batch_size=50)) score, accuracy = model.evaluate(x_test,y_test, verbose=2) print(score, accuracy) </code></pre> <p>I know it may not be the best model but this is the first time I'm building a neural network. I have to use a CNN and RNN as it is required for my honours project. At the moment, I achieve 0.5142 as the highest training accuracy and 0.54 testing accuracy. I have tried increasing the number of epochs, changing the activation function, add more layers, moving the layers around, changing the learning rate and changing the optimizer. </p> <p>Please share some advice on changing my model or dataset. Any help is much appreciated. </p>
2019-09-21 18:25:34.187000+00:00
2019-09-21 21:16:13.047000+00:00
null
python|keras|conv-neural-network|recurrent-neural-network
['https://arxiv.org/abs/1709.01507']
1
50,690,642
<p>In order for the KMV sketch to work, you need the k minimum values. If one of the branches of the union didn't have k values to begin with, you can still take the union and truncate to k. It's only if you truncate to k' that you have to truncate the combined sketch to k'.</p> <p>In fact, you can use even more of the samples to improve accuracy. See <a href="https://arxiv.org/abs/0903.0625" rel="nofollow noreferrer">https://arxiv.org/abs/0903.0625</a> *, which shows that it suffices to discard only down to the min discarded sample (which may be nothing at all), resulting in slightly better accuracy.</p> <p>* Leveraging Discarded Samples for Tighter Estimation of Multiple-Set Aggregates. Edith Cohen, Haim Kaplan.</p>
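<p>A small Python sketch of the idea, purely for illustration (the hash-to-[0, 1) mapping and the (k-1)/K_k estimator are the usual textbook choices, not code from the paper; when the two sketches were built with different sizes, pass the smaller size as <code>k</code> here):</p>
<pre><code>import hashlib

def kmv_sketch(items, k):
    # Map each item to a pseudo-uniform value in [0, 1) and keep the k smallest.
    hashes = {int(hashlib.sha1(str(x).encode()).hexdigest(), 16) / 16**40 for x in items}
    return sorted(hashes)[:k]

def kmv_union(sketch_a, sketch_b, k):
    # Union: pool both sketches' values and keep the k smallest distinct ones.
    return sorted(set(sketch_a) | set(sketch_b))[:k]

def kmv_estimate(sketch):
    # Standard KMV estimator: (k - 1) divided by the k-th smallest hash value.
    # Assumes the sketch is actually full (len(sketch) == k).
    k = len(sketch)
    return (k - 1) / sketch[-1]

a = kmv_sketch(range(0, 50_000), k=256)
b = kmv_sketch(range(25_000, 75_000), k=256)
print(kmv_estimate(kmv_union(a, b, k=256)))   # roughly 75,000 distinct values
</code></pre>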
2018-06-05 00:59:01.327000+00:00
2018-06-05 00:59:01.327000+00:00
null
null
50,690,541
<p>While researching the K-minimum values (KMV) method I've found the following paragraph in a blog post on the KMV method:</p> <blockquote> <p>Note that if the two KMV objects are of different size, due to K being different sizes, or because either one isn't completely filled with K minimum values, you should use the smaller value of K as your union set K size.</p> </blockquote> <p>and also</p> <blockquote> <p>To perform union, you merely take 2 sketches and combine their values and keep the k smallest ones (if the 2 sketches are of different sizes, k and k', then you keep the min(k,k') values in order to keep the lowest resolution).</p> </blockquote> <p>Then it seems that if I am trying to use a large K (for better accuracy, e.g. 2048) then if I look at multiple KMV objects (e.g. tables in databases reporting unique users of the internet portal) and even one of them has fewer distinct values than K (i.e. K'), then I would have to use that smaller value of K' in the final union. Instead of a large K, I may end up with a very small K'. May I simply ignore the fact that K' &lt; K and use K minimum values each time I combine the minimum value data sets? Or, a better question would be: what is wrong with simply using K in all cases, and why do we need to use the smaller value?</p>
2018-06-05 00:41:48.847000+00:00
2021-01-12 18:42:00.463000+00:00
null
algorithm|distributed-system|distinct-values
['https://arxiv.org/abs/0903.0625']
1
54,137,722
<p>In 1993 paper <a href="https://doi.org/10.1016/0743-1066(93)90040-N" rel="nofollow noreferrer">Equivalence of datalog queries is undecidable</a> Oded Shmueli showed it is in general impossible:</p> <pre><code>It is shown that determining containment or equivalence of Datalog queries is recursively unsolvable. </code></pre> <p>The reason for this result is that rules may be 'recursive' (aka 'non-tight'):</p> <pre><code>ancestor(X, Z) :- ancestor(X, Y), parent(Y, Z) </code></pre> <p>and recursion makes the problem undecidable.</p> <p>It may, however, be possible for fragments of datalog, see <a href="https://arxiv.org/pdf/1406.7801.pdf" rel="nofollow noreferrer">Query Containment for Highly Expressive Datalog Fragments</a></p>
2019-01-10 22:20:05.783000+00:00
2019-01-10 22:20:05.783000+00:00
null
null
49,775,376
<p>given two Datalog programs <code>P1, P2</code> I would like to check if <code>P1</code> is contained in <code>P2</code>, i.e. on every Database <code>D</code>, the ouput of <code>P1</code> is contained in <code>P2</code>.</p> <p>For example, <code>P1</code>:</p> <pre><code>A(X,Y) :- a(X,Y) </code></pre> <p>and program <code>P2</code>:</p> <pre><code>A'(X,Y) :- a(X,Y), b(X,Y) </code></pre> <p>it is clear that for every possible Database <code>A' contained in A</code>, because <code>b(X,Y)</code> only filters results from <code>a(X,Y)</code> thus the containment.</p> <p>Is there a standard Datalog implementation that returns true/false on this kind of query?</p>
2018-04-11 12:37:12.800000+00:00
2019-01-10 22:20:05.783000+00:00
null
database|logic-programming|datalog
['https://doi.org/10.1016/0743-1066(93)90040-N', 'https://arxiv.org/pdf/1406.7801.pdf']
2
32,391,780
<blockquote> <p>x = ½ √( 2 + u² - v² + 2u√2 ) - ½ √( 2 + u² - v² - 2u√2 )<br> y = ½ √( 2 - u² + v² + 2v√2 ) - ½ √( 2 - u² + v² - 2v√2 )</p> </blockquote> <p>Note on notation: I'm using x = xSquare , y = ySquare, u = xCircle and v = yCircle;</p> <p>i.e. <strong>(u,v)</strong> are circular disc coordinates and <strong>(x,y)</strong> are square coordinates.</p> <p><a href="https://i.stack.imgur.com/VSFoo.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VSFoo.png" alt="grid mapping"></a></p> <p>For a C++ implementation of the equations, go to<br> <a href="http://squircular.blogspot.com/2015/09/mapping-circle-to-square.html" rel="noreferrer">http://squircular.blogspot.com/2015/09/mapping-circle-to-square.html</a> </p> <p>See <a href="http://squircular.blogspot.com" rel="noreferrer">http://squircular.blogspot.com</a> for more example images.<br> Also, see <a href="http://arxiv.org/abs/1509.06344" rel="noreferrer">http://arxiv.org/abs/1509.06344</a> for the proof/derivation</p> <p>This mapping is the inverse of </p> <blockquote> <p>u = x √( 1 - ½ y² )<br> v = y √( 1 - ½ x² )</p> </blockquote> <hr> <p>P.S. The mapping is not unique. There are other mappings out there. The picture below illustrates the non-uniqueness of the mapping.</p> <p><a href="https://i.stack.imgur.com/rDL3l.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/rDL3l.jpg" alt="Boston Celtics squared"></a></p>
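<p>The formulas above translate directly into a few lines of Python; this sketch also does a round-trip check against the forward map quoted from the linked question, and clamps against tiny negative values caused by floating-point error:</p>
<pre><code>import math

def disc_to_square(u, v):
    # Inverse map quoted above: circular disc coordinates (u, v) to square (x, y).
    r2 = 2.0 * math.sqrt(2.0)
    t = u * u - v * v
    x = 0.5 * math.sqrt(max(2.0 + t + r2 * u, 0.0)) - 0.5 * math.sqrt(max(2.0 + t - r2 * u, 0.0))
    y = 0.5 * math.sqrt(max(2.0 - t + r2 * v, 0.0)) - 0.5 * math.sqrt(max(2.0 - t - r2 * v, 0.0))
    return x, y

def square_to_disc(x, y):
    # Forward map from the linked question, used here only for a round-trip check.
    return x * math.sqrt(1.0 - 0.5 * y * y), y * math.sqrt(1.0 - 0.5 * x * x)

u, v = square_to_disc(0.3, -0.7)
print(disc_to_square(u, v))   # approximately (0.3, -0.7)
</code></pre>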
2015-09-04 06:49:35.693000+00:00
2016-02-28 05:53:53.417000+00:00
2016-02-28 05:53:53.417000+00:00
null
13,211,595
<p>I'm currently working on a game in LBP2 that has modify the way a controller gives input. This question: <a href="https://stackoverflow.com/questions/1621831/how-can-i-convert-coordinates-on-a-square-to-coordinates-on-a-circle">How can I convert coordinates on a square to coordinates on a circle?</a> Has helped me quite a lot with what I am doing, but I do have one problem. I need the inverse function of the one they give. They go from square -> circle, and I've tried searching all over for how to map a circle to a square.</p> <p>The function given in the previous question is:</p> <blockquote> <p>xCircle = xSquare * sqrt(1 - 0.5*ySquare^2)</p> <p>yCircle = ySquare * sqrt(1 - 0.5*xSquare^2)</p> <p>From <a href="http://mathproofs.blogspot.com/2005/07/mapping-square-to-circle.html" rel="nofollow noreferrer">Mapping a Square to a Circle</a></p> </blockquote> <p>My question is given xCircle and yCircle... how do I find xSquare and ySquare?</p> <p>I've tried all of the algebra I know, filled up two pages of notes, tried to get wolfram alpha to get the inverse functions, but this problem is beyond my abilities.</p> <p>Thank you for taking a look.</p>
2012-11-03 17:21:43.693000+00:00
2016-02-28 05:53:53.417000+00:00
2017-05-23 11:33:13.660000+00:00
math|coordinate-systems
['https://i.stack.imgur.com/VSFoo.png', 'http://squircular.blogspot.com/2015/09/mapping-circle-to-square.html', 'http://squircular.blogspot.com', 'http://arxiv.org/abs/1509.06344', 'https://i.stack.imgur.com/rDL3l.jpg']
5
54,698,421
<p>I solved the problem by switching to WGAN-GP (<a href="https://arxiv.org/abs/1704.00028" rel="nofollow noreferrer">https://arxiv.org/abs/1704.00028</a>).<br> It turns out to be more stable during training.</p>
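<p>For anyone landing here, below is a minimal PyTorch sketch of the gradient-penalty term from that paper. The critic interface and the 4-D image shapes are assumptions; the key idea is penalising the critic's gradient norm on random interpolates between real and fake samples.</p> <pre><code>import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """WGAN-GP penalty: mean of (||grad critic(x_hat)||_2 - 1)^2."""
    batch_size = real.size(0)
    # Random interpolation factor per sample, broadcast over C, H, W.
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=x_hat,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()

# Typical critic loss (lambda = 10 as in the paper):
# loss_d = critic(fake).mean() - critic(real).mean() + 10.0 * gradient_penalty(critic, real, fake)
</code></pre>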
2019-02-14 20:16:54.127000+00:00
2019-02-14 20:16:54.127000+00:00
null
null
54,647,599
<p>I'm currently training a DCGAN for 1x32x32 (channel, height, width) images. Quite soon in training G(z) becomes reasonably realistic, apart from visible 'checkerboard' artifacts, which should go away after more training. However, after a long training session D(G(z)) goes to 0.5000 (and no longer changes) while D(x) stays between 0.8 and 0.9. Whenever D(G(z)) goes to 0.5 it also starts outputting fully black &amp; white images. Hence, the generator no longer produces anything that looks close to what's in the training dataset. G(z) just becomes a black or white square.</p> <p>The network used is from the original DCGAN paper, adapted for 1x32x32 images, with ReLU already replaced by leaky ReLU.</p>
2019-02-12 10:12:25.827000+00:00
2019-02-14 20:16:54.127000+00:00
null
tensorflow|keras|pytorch
['https://arxiv.org/abs/1704.00028']
1
63,012,602
<p>Your question is a bit vague. However, you can fine-tune a pre-trained model such as BERT for a downstream task (NER, RE, QA). To answer your question, you may find the following interesting: SciBERT <a href="https://arxiv.org/abs/1903.10676" rel="nofollow noreferrer">https://arxiv.org/abs/1903.10676</a> and BERTje <a href="https://arxiv.org/abs/1912.09582" rel="nofollow noreferrer">https://arxiv.org/abs/1912.09582</a></p>
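<p>As an illustration only (assuming the Hugging Face transformers library and the public allenai/scibert_scivocab_uncased checkpoint), a sketch of loading SciBERT for fine-tuning on a downstream classification task; swap in another domain checkpoint to adapt the same recipe:</p> <pre><code>from transformers import AutoTokenizer, AutoModelForSequenceClassification

# SciBERT checkpoint from the paper linked above.
name = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

batch = tokenizer(["We measured the binding affinity of the compound."],
                  return_tensors="pt", padding=True, truncation=True)
outputs = model(**batch)
print(outputs.logits.shape)  # (1, 2): ready to fine-tune with a Trainer or a plain PyTorch loop
</code></pre>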
2020-07-21 10:33:30.430000+00:00
2020-07-21 10:33:30.430000+00:00
null
null
62,888,060
<p>Are there any other pre-trained models available for different domains, apart from BioBERT and FinBERT?</p>
2020-07-14 04:23:23.113000+00:00
2020-07-21 10:33:30.430000+00:00
null
pre-trained-model
['https://arxiv.org/abs/1903.10676', 'https://arxiv.org/abs/1912.09582']
2
38,892,083
<p>Have a look at this <a href="https://github.com/kjw0612/awesome-deep-vision#object-detection" rel="nofollow">link</a>. It contains a list of networks for different computer vision tasks.</p> <p><a href="https://arxiv.org/abs/1605.06409" rel="nofollow">R-FCN: Object Detection via Region-based Fully Convolutional Networks</a> is a new take on object detection that uses an FCN (the architecture used for semantic segmentation) as the detector.</p>
2016-08-11 09:13:13.010000+00:00
2016-08-11 09:13:13.010000+00:00
null
null
38,888,398
<p><a href="https://stackoverflow.com/questions/33947823/what-is-semantic-segmentation-compared-to-segmentation-and-scene-labeling?noredirect=1&amp;lq=1">This thread</a> discusses the comparison of different computer vision concepts. <a href="https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf" rel="nofollow noreferrer">Fully Convolutional Networks for Semantic Segmentation </a> is a very popular deep learning approach for semantic segmentation. What are the popular or state-of-the-art deep learning approaches for object detection? It seems to me that these two problems share quite some similarities. Are there any framework or methodology that study leveraging the result of solving one problem, i.e., semantic segmentation to solve object detection.</p>
2016-08-11 06:04:42.460000+00:00
2019-03-08 17:07:52.820000+00:00
2017-05-23 12:08:09.100000+00:00
image-processing|computer-vision|deep-learning|image-segmentation|object-detection
['https://github.com/kjw0612/awesome-deep-vision#object-detection', 'https://arxiv.org/abs/1605.06409']
2
26,236,090
<p>It is surely a bit late, but here is an answer to your problem...</p> <p>If you just want to improve your model, you can put the new points where the variance of your kriging predictor is the highest.</p> <p>If you want to optimize the field you are interpolating (i.e. find its minimum or maximum), then you can use the expected improvement, a criterion that tells you where to sample the next batch of points. See, for example: <a href="https://lirias.kuleuven.be/bitstream/123456789/310611/2/JGO_2012.pdf" rel="nofollow">https://lirias.kuleuven.be/bitstream/123456789/310611/2/JGO_2012.pdf</a></p> <p>Another approach could be to sample the new batch of points more independently of the kriging interpolator you have computed. You can, for example, choose the new points so that the sample containing the previous sample points plus the new ones optimizes a space-filling criterion such as the maximin-distance criterion or a discrepancy criterion. For details about these types of criteria, see, for example, <a href="http://arxiv.org/abs/1307.6835" rel="nofollow">http://arxiv.org/abs/1307.6835</a> and the references inside.</p>
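<p>A rough sketch of the first suggestion (add points where the kriging variance is highest), using scikit-learn's Gaussian-process regressor as a stand-in for a kriging model; the toy field, candidate grid and kernel settings are all made up for illustration:</p> <pre><code>import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy field: a handful of existing measurements on the unit square.
X_obs = rng.uniform(0, 1, size=(25, 2))
y_obs = np.sin(3 * X_obs[:, 0]) + np.cos(2 * X_obs[:, 1])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
gp.fit(X_obs, y_obs)

# Candidate locations for the new campaign: keep the ones with the largest
# predictive standard deviation, i.e. the highest kriging variance.
candidates = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                                  np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)
_, std = gp.predict(candidates, return_std=True)
n_new = 10
new_points = candidates[np.argsort(std)[-n_new:]]
print(new_points)
</code></pre>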
2014-10-07 12:31:59.487000+00:00
2014-10-20 10:21:43.680000+00:00
2014-10-20 10:21:43.680000+00:00
null
20,092,264
<p>This question would probably fit Cross Validated better than Stack Overflow, but my questions on kriging never find an answer there, while they do here, so please do not move the question.</p> <p>In a project we sampled the DVB-T field and I made a kriging interpolation. A new measurement campaign is coming up: given the old measurements, is there a way to know what the best sampling design is and how many measurements should be taken?</p> <p>I checked Cressie, which sent me to a ton of other articles, and I searched a lot on Google, but it seems I cannot find the right reference.</p> <p>I do not want an iterative method; that is the main constraint.</p> <p>Any type of reference is welcome.</p>
2013-11-20 09:40:24.957000+00:00
2020-06-20 03:07:35.377000+00:00
null
r|statistics|geospatial|sampling|kriging
['https://lirias.kuleuven.be/bitstream/123456789/310611/2/JGO_2012.pdf', 'http://arxiv.org/abs/1307.6835']
2
65,621,328
<p>Be careful that one-hot encoding using model.matrix may create really sparse datasets.</p> <p>Another option is to use recent packages that are purpose-built for high-dimensional / high-volume data sets. They run their code in lower-level languages (C++ and/or Java) and in certain cases use parallelization for faster processing.</p> <p>I'd recommend taking a look at these three:</p> <ul> <li>ranger (C++ implementation)</li> <li>randomForestSRC (C++ implementation)</li> <li>h2o (Java implementation - needs Java version 8 or higher)</li> </ul> <p>Also, some additional reading to help you decide which package to choose: <a href="https://arxiv.org/pdf/1508.04409.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1508.04409.pdf</a></p> <p>Page 8 shows benchmarks of ranger against randomForest for growing data sizes - ranger is much faster because its runtime grows roughly linearly, rather than non-linearly as for randomForest, with rising tree/sample/split/feature sizes.</p> <p>Beyond that, using variable importance to reduce the feature set to the most informative descriptors would improve model performance/accuracy.</p> <p>Good luck!</p>
2021-01-07 23:20:37.770000+00:00
2021-01-07 23:20:37.770000+00:00
null
null
19,801,908
<p>I'm trying to run random forest with a set of data that is 1664 (columns) x 208 (rows). My dependent variable is retention time (seconds), and my independent variables are 1664 different descriptors of the different chemical compounds. All of the descriptors give a value and are not categorical variables. I'm trying to perform a random forest for regression. However, when I try to run it using the following code:</p> <pre><code>urine.rf &lt;- randomForest(RT..seconds.~., data=urine, importance=TRUE) </code></pre> <p>I get the message:</p> <p>Error in randomForest.default(m, y, ...) : Can not handle categorical predictors with more than 32 categories.</p> <p>I read online that you can get around this by constructing some form of model.matrix. I'm completely new to R and haven't got the faintest clue about how to do this. I've tried formatting my cells in my csv file so that all the cells were numbers, but for some reason they still come across as categorical predictors. How do I get around this?</p> <pre><code>RT (seconds) 1_MW    2_AMW  3_Sv   4_Se   5_Sp   6_Ss
46.58353  155.18  7.76  11.8   20.61  12.21  32.67  0.59
46.79514  145.29  5.01  14.76  28.37  16.11  21     0.51
48.18893  169.21  7.36  13.4   23.49  13.97  34.17  0.58
49.94328  169.21  7.36  13.4   23.49  13.97  34.17  0.58
50.81087  187.33  5.51  17.87  33.59  19.32  30.17  0.53
51.3834   104.2   4.96  10.39  20.67  11.41  16.5   0.49
51.51799  125.17  8.94  7.41   14.81  8.3    27.92  0.53
52.67208  117.13  7.81  8.2    15.73  8.45   29.33  0.55
52.79043  137.18  7.22  11.59  19.12  12.13  26.33  0.61
52.79046  161.23  6.2   13.71  26.27  14.7   33.5   0.53
</code></pre> <p>This is what the database I'm using looks like. I want RT to be my Y variable, and everything else (1_MW and all the others after) to be my independent variables (there are 1664).</p> <p>Thanks</p>
2013-11-06 00:37:41.197000+00:00
2021-01-07 23:20:37.770000+00:00
2013-11-06 00:52:58.993000+00:00
r|random-forest
['https://arxiv.org/pdf/1508.04409.pdf']
1
45,878,941
<p>Have a look at <a href="https://arxiv.org/abs/1707.06168" rel="nofollow noreferrer">Channel Pruning</a>, <em>ICCV 2017</em>, an elegant algorithm that effectively prune channels each layer. In its <a href="https://github.com/yihui-he/channel-pruning" rel="nofollow noreferrer">github code</a>, there's a pruning process for removing unnecessary neurons, which helps reducing model size and accelerating prediction.</p>
2017-08-25 09:59:26.870000+00:00
2017-08-25 09:59:26.870000+00:00
null
null
40,396,092
<p>I have trained a Faster R-CNN model to detect human faces in an image using Caffe. My current model size is 530 MB. I wanted to reduce the size of my model, so I came across <a href="https://arxiv.org/abs/1510.00149" rel="nofollow noreferrer">Deep Compression</a> by Song Han.</p> <p>I've set the less significant weights to 0 in my model using pycaffe, but the model size is not reduced. How do I remove those insignificant connections from the trained Caffe model so that the size of the model is reduced?</p>
2016-11-03 07:37:31.697000+00:00
2017-08-25 09:59:26.870000+00:00
2016-11-03 12:43:56.917000+00:00
neural-network|deep-learning|caffe|pycaffe
['https://arxiv.org/abs/1707.06168', 'https://github.com/yihui-he/channel-pruning']
2
34,393,913
<p>I am the author of a 2048 controller that scores better than any other program mentioned in this thread. An efficient implementation of the controller is available on <a href="https://github.com/aszczepanski/2048" rel="noreferrer">github</a>. In <a href="https://github.com/wjaskowski/mastering-2048" rel="noreferrer">a separate repo</a> there is also the code used for training the controller's state evaluation function. The training method is described in the <a href="http://arxiv.org/abs/1604.05085" rel="noreferrer">paper</a>.</p> <p>The controller uses expectimax search with a state evaluation function learned from scratch (without human 2048 expertise) by a variant of <strong>temporal difference learning</strong> (a reinforcement learning technique). The state-value function uses an <strong>n-tuple network</strong>, which is basically a weighted linear function of patterns observed on the board. It involved more than <strong>1 billion weights</strong>, in total.</p> <h2>Performance</h2> <p>At 1 moves/s: <strong>609104</strong> (100 games average)</p> <p>At 10 moves/s: <strong>589355</strong> (300 games average)</p> <p>At 3-ply (ca. 1500 moves/s): <strong>511759</strong> (1000 games average)</p> <p>The tile statistics for 10 moves/s are as follows:</p> <pre><code>2048: 100% 4096: 100% 8192: 100% 16384: 97% 32768: 64% 32768,16384,8192,4096: 10% </code></pre> <p>(The last line means having the given tiles at the same time on the board).</p> <p>For 3-ply:</p> <pre><code>2048: 100% 4096: 100% 8192: 100% 16384: 96% 32768: 54% 32768,16384,8192,4096: 8% </code></pre> <p>However, I have never observed it obtaining the 65536 tile.</p>
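<p>For readers curious how an n-tuple network evaluates a position, here is a stripped-down Python sketch with tiny hand-picked tuples and random weights; the real controller uses many more tuples and learns its weights with temporal-difference updates, as described in the paper.</p> <pre><code>import numpy as np

# A 4x4 board of tile exponents (0 = empty, 1 = 2, 2 = 4, ..., 15 = 32768).
N_VALUES = 16

# Each n-tuple is a fixed set of board cells; here two rows and two columns.
TUPLES = [
    [(0, 0), (0, 1), (0, 2), (0, 3)],
    [(1, 0), (1, 1), (1, 2), (1, 3)],
    [(0, 0), (1, 0), (2, 0), (3, 0)],
    [(0, 1), (1, 1), (2, 1), (3, 1)],
]

# One lookup table per tuple: 16^4 weights each (random here; learned by TD in practice).
rng = np.random.default_rng(0)
WEIGHTS = [rng.normal(0, 1, N_VALUES ** len(t)) for t in TUPLES]

def evaluate(board):
    """State value = sum over tuples of the weight indexed by the tuple's tile pattern."""
    total = 0.0
    for cells, table in zip(TUPLES, WEIGHTS):
        index = 0
        for (r, c) in cells:
            index = index * N_VALUES + board[r][c]
        total += table[index]
    return total

board = [[1, 2, 3, 0],
         [0, 1, 2, 3],
         [0, 0, 1, 2],
         [0, 0, 0, 1]]
print(evaluate(board))
</code></pre>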
2015-12-21 10:49:45.780000+00:00
2016-12-23 21:45:40.227000+00:00
2016-12-23 21:45:40.227000+00:00
null
22,342,854
<p>I have recently stumbled upon the game <a href="http://gabrielecirulli.github.io/2048/" rel="noreferrer">2048</a>. You merge similar tiles by moving them in any of the four directions to make "bigger" tiles. After each move, a new tile appears at a random empty position with a value of either <code>2</code> or <code>4</code>. The game terminates when all the boxes are filled and there are no moves that can merge tiles, or you create a tile with a value of <code>2048</code>.</p> <p>Now, I need to follow a well-defined strategy to reach the goal. So, I thought of writing a program for it.</p> <p>My current algorithm:</p> <pre><code>while (!game_over) {
    for each possible move:
        count_no_of_merges_for_2-tiles and 4-tiles
    choose the move with a large number of merges
}
</code></pre> <p>What I am doing is: at any point, I try to merge the tiles with values <code>2</code> and <code>4</code>, that is, I try to have as few <code>2</code> and <code>4</code> tiles as possible. If I try it this way, all other tiles get merged automatically and the strategy seems good.</p> <p>But when I actually use this algorithm, I only get around 4000 points before the game terminates. The maximum score AFAIK is slightly more than 20,000 points, which is way larger than my current score. Is there a better algorithm than the above?</p>
2014-03-12 05:37:21.207000+00:00
2021-08-15 01:56:15.760000+00:00
2017-02-22 03:52:20.673000+00:00
algorithm|logic|artificial-intelligence|2048
['https://github.com/aszczepanski/2048', 'https://github.com/wjaskowski/mastering-2048', 'http://arxiv.org/abs/1604.05085']
3
42,518,174
<p>Counting the number of objects in a scene can be done with ordinary neural networks using a sliding-window approach: <a href="https://arxiv.org/pdf/1312.6229.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1312.6229.pdf</a>.</p> <p>Here a regressor and a classification network are used. The classification network is trained with one-hot encoding as you did. The regressor is used to find object boundaries. In the paper this is done by penalizing the regressor network with the output of the classification network. Then the object boundaries produced by the regressor network are used to predict objects in the scene.</p> <p>Edit: the response above solves a different problem.</p> <p>What I would do is take an approach based on Generative Adversarial Networks (<a href="http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf" rel="nofollow noreferrer">http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf</a>). I would add a vector of random variables to the input to introduce stochastic elements into the network. The generator generates a new room assignment based on the input vector and the stochastic input, while the discriminator discriminates between good and bad assignments based on the output and the input without the stochastic component. This should converge to a generator for your one-hot encoding where you can control the output using the random variable.</p>
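<p>To make the suggestion concrete, a PyTorch sketch of a generator that concatenates the one-hot room function with a noise vector and outputs furniture counts. All sizes are assumptions, and the discriminator and adversarial training loop are omitted; the point is that the same room type with different noise gives different outputs.</p> <pre><code>import torch
import torch.nn as nn

N_ROOM_TYPES = 5   # e.g. bedroom, living room, ... (assumed)
N_CATEGORIES = 12  # furniture categories to count (assumed)
NOISE_DIM = 16

class FurnitureGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ROOM_TYPES + NOISE_DIM, 64),
            nn.ReLU(),
            nn.Linear(64, N_CATEGORIES),
            nn.Softplus(),  # non-negative "counts"; round at sampling time
        )

    def forward(self, room_onehot, noise):
        return self.net(torch.cat([room_onehot, noise], dim=1))

gen = FurnitureGenerator()
room = torch.zeros(1, N_ROOM_TYPES)
room[0, 1] = 1.0                  # say, "living room"
for _ in range(3):                # same room type, different noise, different outputs
    z = torch.randn(1, NOISE_DIM)
    print(gen(room, z).round())
</code></pre>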
2017-02-28 20:32:02.620000+00:00
2017-03-01 09:06:17.997000+00:00
2017-03-01 09:06:17.997000+00:00
null
42,504,222
<p>I am working on some problems in room design. I have a lot of room design samples and would like to produce new designs by studying these samples. The very first problem is to decide what kinds of furniture, and how many pieces of each, should appear in a room.</p> <p>For a specific design sample, I know its room function, e.g. bedroom or living room. I can also count the number of pieces of furniture of different categories in this room, say one sofa, one tea table and two chairs.</p> <p>I built a neural network whose input is the one-hot encoding of the room's function and whose output is a vector representing the number of pieces of furniture of each category in that room. Therefore, this network can be trained with supervised learning. However, the problem with neural networks is that for a fixed input they can give only a fixed output; that is, for rooms of identical function, the network will always give the same set of furniture counts. Is there any way to introduce stochasticity into a neural network?</p> <p>I have come across the following question <a href="https://www.quora.com/What-is-a-stochastic-neural-network-and-how-does-it-differ-from-a-deterministic-one" rel="nofollow noreferrer">https://www.quora.com/What-is-a-stochastic-neural-network-and-how-does-it-differ-from-a-deterministic-one</a> and the paper <a href="http://www.cs.toronto.edu/~tang/papers/sfnn.pdf" rel="nofollow noreferrer">http://www.cs.toronto.edu/~tang/papers/sfnn.pdf</a> suggested by an answer, but the stochastic neural network mentioned in that paper looks like a probabilistic graphical model to me, unlike most neural networks, which can be easily implemented with deep learning libraries like Torch or TensorFlow.</p>
2017-02-28 08:58:45.393000+00:00
2020-10-30 07:47:44.593000+00:00
2017-03-01 00:29:22.560000+00:00
neural-network|deep-learning
['https://arxiv.org/pdf/1312.6229.pdf', 'http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf']
2
61,722,156
<p>I found a few recent works on the problem of <strong>"image-text matching"</strong>, which is slightly different from my problem statement, but I can adapt the code for my project.</p> <ol> <li>Transformer Reasoning Network for Image-Text Matching and Retrieval: <a href="https://arxiv.org/pdf/2004.09144v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2004.09144v1.pdf</a></li> <li>Stacked Cross Attention for Image-Text Matching: <a href="https://kuanghuei.github.io/SCANProject/" rel="nofollow noreferrer">https://kuanghuei.github.io/SCANProject/</a></li> </ol>
2020-05-11 04:11:29.007000+00:00
2020-05-11 04:11:29.007000+00:00
null
null
61,721,084
<p>I am working on a problem statement where I have to match (text, image) pairs. Given a furniture description and a furniture image, I have to say whether they match or not. This is a binary classification problem, but I have to combine both text and image data. One possible solution I am trying is as follows: <a href="https://i.stack.imgur.com/E1EIu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E1EIu.jpg" alt="architecture diagram"></a> In the above diagram, I am combining the features from the pre-trained text and image models and training the linear layer end to end.</p> <p>Is there any other way to handle this type of problem? Any leads are most welcome. Thanks a lot in advance for your help.</p>
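<p>A sketch of the late-fusion setup described above: concatenate precomputed text and image embeddings and train a small classification head on top. The feature sizes are assumptions (typical BERT- and ResNet-style dimensions), the encoders themselves are left out, and the dummy batch is for illustration only.</p> <pre><code>import torch
import torch.nn as nn

TEXT_DIM, IMAGE_DIM = 768, 2048  # assumed text / image feature sizes

class MatchHead(nn.Module):
    """Fusion head: concatenate the two embeddings, predict match / no-match."""
    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(TEXT_DIM + IMAGE_DIM, 256),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(256, 1),  # single logit; pair with BCEWithLogitsLoss
        )

    def forward(self, text_emb, image_emb):
        return self.classifier(torch.cat([text_emb, image_emb], dim=1)).squeeze(1)

model = MatchHead()
loss_fn = nn.BCEWithLogitsLoss()

# Dummy batch of precomputed embeddings and labels (1 = description matches image).
text_emb = torch.randn(8, TEXT_DIM)
image_emb = torch.randn(8, IMAGE_DIM)
labels = torch.randint(0, 2, (8,)).float()
loss = loss_fn(model(text_emb, image_emb), labels)
loss.backward()
print(loss.item())
</code></pre>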
2020-05-11 01:49:38.853000+00:00
2020-05-11 04:11:29.007000+00:00
null
machine-learning|image-processing|deep-learning|nlp|conv-neural-network
['https://arxiv.org/pdf/2004.09144v1.pdf', 'https://kuanghuei.github.io/SCANProject/']
2
11,231,474
<p>Have you considered that the "fast walking" and "fast walking + double tapping" signals might be too similar to differentiate using only accelerometer data? It may simply not be possible to achieve accuracy above a certain amount.</p> <p>Otherwise, neural networks are probably a good choice for your data, and it still may be possible to get better performance out of them.</p> <p>This very useful paper (http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) recommends that you whiten your dataset so that it has a mean of zero and unit covariance.</p> <p>Also, since your problem is a classification problem, you should make sure that you are training your network using a cross-entropy criterion (http://arxiv.org/pdf/1103.0398v1.pdf) rather than RMSE. (I have no idea whether Neuroph supports cross-entropy or not.)</p> <p>Another relatively simple thing you could try, as other posters suggested, is transforming your data. Using an FFT or DCT to transform your data to the frequency domain is relatively standard for time-series classification.</p> <p>You could also try training networks on different-sized windows and averaging the results.</p> <p>If you want to try some more difficult NN architectures, you could look at the time-delay neural network (just google this for the paper), which takes multiple windows into account in its structure. It should be relatively straightforward to use one of the Torch libraries (http://www.torch.ch/) to implement this, but it might be hard to export the network to an Android environment.</p> <p>Finally, another method of getting better classification performance on time-series data is to consider the relationships between adjacent labels. Conditional Neural Fields (http://code.google.com/p/cnf/ - note: I have never used this code) do this by integrating neural networks into conditional random fields, and, depending on the patterns of behavior in your actual data, may do a better job.</p>
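<p>A sketch of the two preprocessing steps above (whitening each feature and adding FFT-magnitude features), using NumPy; the window length matches the 25-sample windows from the question, but the dummy data and axis handling are assumptions.</p> <pre><code>import numpy as np

def whiten(windows):
    """Zero mean and unit variance per feature, estimated on the training set."""
    mean = windows.mean(axis=0)
    std = windows.std(axis=0) + 1e-8
    return (windows - mean) / std, mean, std

def fft_features(windows):
    """Magnitude spectrum of each window, a standard time-series transform."""
    return np.abs(np.fft.rfft(windows, axis=1))

# Dummy data: 1000 windows of 25 acceleration deltas (20 ms apart), as in the question.
rng = np.random.default_rng(0)
windows = rng.normal(size=(1000, 25))

white, mean, std = whiten(windows)
features = np.hstack([white, fft_features(white)])
print(features.shape)  # (1000, 38): 25 time-domain values plus 13 frequency bins
</code></pre>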
2012-06-27 17:03:25.327000+00:00
2012-06-27 17:03:25.327000+00:00
null
null
11,166,647
<p>I'm building an application for Android devices that needs to recognize, from accelerometer data, the difference between walking noise and double-tapping the device. I'm trying to solve this problem using neural networks.</p> <p>At the start it went pretty well, teaching it to distinguish the taps from noise such as standing up / sitting down and walking around at a slower pace. But when it came to normal walking it never seemed to learn, even though I fed it a large proportion of noise data.</p> <p><strong>My question</strong>: Are there any serious flaws in my approach? Is the problem based on lack of data?</p> <h3>The network</h3> <p>I've chosen a 25-input, 1-output multi-layer perceptron, which I am training with backpropagation. The input is the change in acceleration every 20 ms and the output ranges from -1 (no tap) to 1 (tap). I've tried pretty much every constellation of hidden units there is, but had most luck with 3 - 10.</p> <p>I'm using Neuroph's easyNeurons for the training and exporting to Java.</p> <h3>The data</h3> <p>My total training data is about 50 double-tap samples and about 3k noise samples. But I've also tried to train it with proportional amounts of noise and double taps.</p> <p>The data looks like this (ranges from +10 to -10):</p> <p>Sitting double taps: <img src="https://i.stack.imgur.com/a7PDT.png" alt="Sitting double taps, fairly easy to determine." /></p> <hr /> <p>Fast walking: <img src="https://i.stack.imgur.com/ljRwp.png" alt="Fast walking and double tapping, not that easy" /></p> <p>So to reiterate my questions: Are there any serious flaws in my approach here? Do I need more data for it to recognize the difference between walking and double tapping? Any other tips?</p> <p><strong>Update</strong></p> <p>Ok, so after much adjusting we've boiled the essential problem down to being able to recognize double taps while taking a brisk walk. Sitting and regular (in-house) walking we can solve pretty well.</p> <p>Brisk walk <img src="https://i.stack.imgur.com/TwLJ1.png" alt="Brisk walk" /></p> <p>So this is some test data of me first walking, then stopping and standing still, then walking and doing 5 double taps while I'm walking.</p> <p>If anyone is interested in the raw data, I linked the latest (brisk walk) data <a href="http://www.2shared.com/file/6I2niK0b/walkx3_5dt.html?" rel="nofollow noreferrer">here</a></p>
2012-06-23 03:42:42.453000+00:00
2013-02-20 13:16:28.377000+00:00
2020-06-20 09:12:55.060000+00:00
android|neural-network
[]
0
56,330,630
<p>There are at least four things that can count as polymorphism in current Haskell:</p> <ul> <li>Parametric polymorphism. (Also <a href="https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/glasgow_exts.html#kind-polymorphism" rel="noreferrer">kind polymorphism</a>, polymorphism in the kinds instead of the types. Which I guess is parametric polymorphism one level above, so I'm not counting it as a separate entry.)</li> <li>Ad-hoc polymorphism, the one enabled by typeclasses. Introduced in the <a href="https://www.cse.iitk.ac.in/users/karkare/courses/2010/cs653/Papers/ad-hoc-polymorphism.pdf" rel="noreferrer">How to make ad-hoc polymorphism less ad hoc</a> paper.</li> <li><a href="https://www.andres-loeh.de/Koke2005.pdf" rel="noreferrer">Structural polymorphism</a>. This is the one enabled by <a href="http://hackage.haskell.org/package/base-4.12.0.0/docs/GHC-Generics.html" rel="noreferrer">generics</a>. A function can work over multiple data types that have different number of fields and constructors. For example, a generic equality function for records.</li> <li><a href="https://www.youtube.com/watch?v=lSJwXZ7vWBw" rel="noreferrer">Levity polymorphism</a>. Polymorphism over calling conventions / runtime representations of types. Described in the <a href="https://cs.brynmawr.edu/~rae/papers/2017/levity/levity-extended.pdf" rel="noreferrer">Levity Polymorphism</a> paper.</li> </ul> <p>There are two more types of polymorphism that <em>might</em> be introduced in future versions of Haskell:</p> <ul> <li><p>Matchability polymorphism. Would allow higher-order type families to work with both type constructors and type families as arguments. Described in the paper <a href="https://www.microsoft.com/en-us/research/uploads/prod/2019/03/ho-haskell.pdf" rel="noreferrer">Higher-order Type-level Programming in Haskell</a>.</p></li> <li><p>Multiplicity polymorphism. Would allow higher-order functions to work with both normal functions and linear functions as arguments. Described in the paper <a href="https://arxiv.org/pdf/1710.09756.pdf" rel="noreferrer">Linear Haskell Practical Linearity in a Higher-Order Polymorphic Language</a>.</p></li> </ul> <p>One might ask, why this whole panoply of polymorphisms? There seems to exist an overall design principle in Haskell that, whenever some challenge could be solved with either subtyping <em>or</em> polymorphism, <em>polymorphism should be preferred</em>.</p> <p>For example, from the levity polymorphism paper:</p> <blockquote> <p>We can now present the main idea of the paper: replace sub-kinding with kind polymorphism.</p> </blockquote> <p>From the paper introducing matchability polymorphism:</p> <blockquote> <p>At first you might think that we need subtyping, but instead we turn to polymorphism</p> </blockquote> <p>From the linear Haskell paper:</p> <blockquote> <p>The lack of subtyping is a deliberate choice in our design</p> </blockquote> <p>Simon Peyton Jones himself makes the point at <a href="https://www.youtube.com/watch?v=t0mhvd3-60Y&amp;feature=youtu.be&amp;t=2820" rel="noreferrer">47:00</a> in <a href="https://www.youtube.com/watch?v=t0mhvd3-60Y" rel="noreferrer">this talk</a>.</p> <blockquote> <p>Whenever you want to use subtyping, use polymorphism instead.</p> </blockquote>
2019-05-27 18:04:23.160000+00:00
2019-05-28 06:19:22.820000+00:00
2019-05-28 06:19:22.820000+00:00
null
56,326,016
<p>Reading the Wikipedia definition of <a href="https://en.wikipedia.org/wiki/Polymorphism_(computer_science)" rel="noreferrer">polymorphism</a>, I came up with a question:</p> <p>Which polymorphism types are supported in Haskell and which are not?</p> <p>It looks like Wikipedia does not contain a description of some polymorphism types, like <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/levity-1.pdf" rel="noreferrer">Levity Polymorphism</a>, which is new to me and supported in Haskell.</p> <p>I would like an extended list of <a href="https://wiki.haskell.org/Polymorphism" rel="noreferrer">Haskell polymorphism</a> kinds, with examples to explore in depth.</p> <p>It looks like the main two are:</p> <ul> <li>Parametric polymorphism</li> <li>Ad-hoc polymorphism</li> </ul>
2019-05-27 12:23:32.283000+00:00
2019-05-28 06:19:22.820000+00:00
2019-05-27 12:27:57.680000+00:00
haskell|types|polymorphism
['https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/glasgow_exts.html#kind-polymorphism', 'https://www.cse.iitk.ac.in/users/karkare/courses/2010/cs653/Papers/ad-hoc-polymorphism.pdf', 'https://www.andres-loeh.de/Koke2005.pdf', 'http://hackage.haskell.org/package/base-4.12.0.0/docs/GHC-Generics.html', 'https://www.youtube.com/watch?v=lSJwXZ7vWBw', 'https://cs.brynmawr.edu/~rae/papers/2017/levity/levity-extended.pdf', 'https://www.microsoft.com/en-us/research/uploads/prod/2019/03/ho-haskell.pdf', 'https://arxiv.org/pdf/1710.09756.pdf', 'https://www.youtube.com/watch?v=t0mhvd3-60Y&feature=youtu.be&t=2820', 'https://www.youtube.com/watch?v=t0mhvd3-60Y']
10