a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links
---|---|---|---|---|---|---|---|---|---|---|---|---|---
55,703,243 | <h2>tl;dr</h2>
<p>The optimization criterion is the same, the difference is how the model gets the word vector.</p>
<h2>Using formulas</h2>
<p>Fasttext optimizes the same criterion as the standard skipgram model (using the formula from the <a href="https://arxiv.org/pdf/1607.04606.pdf" rel="nofollow noreferrer">FastText paper</a>):</p>
<p><a href="https://i.stack.imgur.com/wOWs3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wOWs3.png" alt="enter image description here" /></a></p>
<p>with all the approximation tricks that make the optimization computationally efficient. In the end, they get this:</p>
<p><a href="https://i.stack.imgur.com/yvirs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yvirs.png" alt="enter image description here" /></a></p>
<p>The sum runs over all context words <em>w<sub>c</sub></em>, and the denominator is approximated using some negative samples <em>n</em>. The crucial difference is in the function <em>s</em>: in the original skip-gram model, it is a dot product of the two word embeddings.</p>
<p>However, in the FastText case, the function <em>s</em> is redefined:</p>
<p><a href="https://i.stack.imgur.com/kI2uB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kI2uB.png" alt="enter image description here" /></a></p>
<p>Word <em>w<sub>t</sub></em> is represented as a sum of all n-grams <em>z<sub>g</sub></em> the word consists of, plus a vector for the word itself. You basically want to make not only the word, but also all its substrings, probable in the given context window.</p> | 2019-04-16 07:57:25.340000+00:00 | 2019-04-16 07:57:25.340000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 55,700,359 | <p>Given a sentence 'hello world', the vocabulary is </p>
<p>{hello, world} + {<hel, hell, ello, llo>, <wor, worl, orld, rld>}, </p>
<p>for convenience, I just list all the 4-grams.</p>
<p>In my comprehension, the word2vec skipgram will maximize</p>
<p><img src="https://latex.codecogs.com/gif.latex?%5Cdpi%7B200%7D&space;P(hello%5Cvert&space;world)&space;+&space;P(world%5Cvert&space;hello)" title="P(hello\vert world) + P(world\vert hello)" /></p>
<p>What will fasttext skipgram do?</p> | 2019-04-16 03:58:53.047000+00:00 | 2019-04-16 07:57:25.340000+00:00 | null | nlp|fasttext | ['https://arxiv.org/pdf/1607.04606.pdf', 'https://i.stack.imgur.com/wOWs3.png', 'https://i.stack.imgur.com/yvirs.png', 'https://i.stack.imgur.com/kI2uB.png'] | 4 |
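<p>To make the difference between the two scoring functions concrete, here is a hedged NumPy sketch of the answer above (the n-gram range, the lookup table <code>z</code> and the context vector are simplifying assumptions; real FastText additionally hashes the n-grams into buckets):</p>
<pre><code>import numpy as np

def ngrams(word, n_min=3, n_max=6):
    # FastText-style character n-grams of the word wrapped in '<' and '>'
    w = "<" + word + ">"
    return [w[i:i+n] for n in range(n_min, n_max+1) for i in range(len(w)-n+1)]

def score_skipgram(v_target, v_context):
    # original skip-gram: s(w_t, w_c) is a dot product of two word vectors
    return np.dot(v_target, v_context)

def score_fasttext(word, z, v_context):
    # FastText: the target word vector is the sum of its n-gram vectors
    # plus a vector for the word itself (stored in z under the word)
    parts = ngrams(word) + [word]
    w_t = sum(z[g] for g in parts)   # assumes z maps every part to a vector
    return np.dot(w_t, v_context)
</code></pre>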
20,159,074 | <p>I am the developer of <a href="https://github.com/ddemidov/vexcl" rel="noreferrer">VexCL</a>, but I really like what <a href="https://stackoverflow.com/users/226814/kyle-lutz">Kyle Lutz</a>, the author of <a href="https://github.com/boostorg/compute" rel="noreferrer">Boost.Compute</a>, had to say on the same subject on <a href="http://lists.boost.org/Archives/boost/2013/03/201547.php" rel="noreferrer">Boost mailing list</a>. In short, from the user standpoint <a href="http://thrust.github.io/" rel="noreferrer">Thrust</a>, Boost.Compute, AMD's <a href="https://github.com/HSA-Libraries/Bolt" rel="noreferrer">Bolt</a> and probably Microsoft's <a href="http://msdn.microsoft.com/en-us/library/vstudio/hh265136.aspx" rel="noreferrer">C++ AMP</a> all implement an STL-like API, while VexCL is an expression template based library that is closer to <a href="http://eigen.tuxfamily.org/" rel="noreferrer">Eigen</a> in nature. I believe the main difference between the STL-like libraries is their portability: </p>
<ol>
<li>Thrust only supports NVIDIA GPUs, but may also work on CPUs through its OpenMP and TBB backends.</li>
<li>Bolt uses AMD extensions to OpenCL which are only available on AMD GPUs. It also provides Microsoft C++ AMP and Intel TBB backends.</li>
<li>The only compiler that supports Microsoft C++ AMP is Microsoft Visual C++ (although the work on <a href="http://hsafoundation.com/bringing-camp-beyond-windows-via-clang-llvm/" rel="noreferrer">Bringing C++AMP Beyond Windows</a> is being done).</li>
<li>Boost.Compute seems to be the most portable solution of those, as it is based on standard OpenCL.</li>
</ol>
<p>Again, all of those libraries are trying to implement an STL-like interface, so they have very broad applicability. VexCL was developed with scientific computing in mind. If Boost.Compute was developed a bit earlier, I could probably base VexCL on top of it :). Another library for scientific computing worth looking at is <a href="http://viennacl.sourceforge.net/" rel="noreferrer">ViennaCL</a>, a free open-source linear algebra library for computations on many-core architectures (GPUs, MIC) and multi-core CPUs. Have a look at [1] for the comparison of VexCL, ViennaCL, CMTL4 and Thrust for that field.</p>
<p>Regarding the quoted inability of Thrust developers to add an OpenCL backend: Thrust, VexCL and Boost.Compute (I am not familiar with the internals of other libraries) all use metaprogramming techniques to do what they do. But since CUDA supports C++ templates, the job of Thrust developers is probably a bit easier: they have to write metaprograms that generate CUDA programs with help of C++ compiler. VexCL and Boost.Compute authors write metaprograms that generate programs that generate OpenCL source code. Have a look at the <a href="https://speakerdeck.com/ddemidov/vexcl-implementation-university-of-texas-2013" rel="noreferrer">slides</a> where I tried to explain how VexCL is implemented. So I agree that current Thrust's design prohibits them adding an OpenCL backend.</p>
<p>[1] Denis Demidov, Karsten Ahnert, Karl Rupp, Peter Gottschling, <a href="http://epubs.siam.org/doi/abs/10.1137/120903683" rel="noreferrer">Programming CUDA and OpenCL: A Case Study Using Modern C++ Libraries</a>, SIAM J. Sci. Comput., 35(5), C453–C472. (an <a href="http://arxiv.org/abs/1212.6326" rel="noreferrer">arXiv version</a> is also available).</p>
<p>Update: @gnzlbg commented that there is no support for C++ functors and lambdas in OpenCL-based libraries. And indeed, OpenCL is based on C99 and is compiled from sources stored in strings at runtime, so there is no easy way to fully interact with C++ classes. But to be fair, OpenCL-based libraries do support user-defined functions and even lambdas to some extent.</p>
<ul>
<li>Boost.Compute provides its own <a href="http://boostorg.github.io/compute/boost_compute/advanced_topics.html#boost_compute.advanced_topics.lambda_expressions" rel="noreferrer">implementation of simple lambdas</a> (based on Boost.Proto), and allows to interact with user-defined structs through <a href="http://boostorg.github.io/compute/BOOST_COMPUTE_ADAPT_STRUCT.html" rel="noreferrer">BOOST_COMPUTE_ADAPT_STRUCT</a> and <a href="http://boostorg.github.io/compute/BOOST_COMPUTE_CLOSURE.html" rel="noreferrer">BOOST_COMPUTE_CLOSURE</a> macros.</li>
<li>VexCL provides linear-algebra-like DSL (also based on Boost.Proto), and also <a href="https://github.com/ddemidov/vexcl#converting-generic-c-algorithms-to-opencl" rel="noreferrer">supports conversion of generic C++ algorithms and functors</a> (and even Boost.Phoenix lambdas) to OpenCL functions (with restrictions).</li>
<li>I believe AMD's Bolt does support user-defined functors through its <a href="http://developer.amd.com/community/blog/2012/05/21/opencl-1-2-and-c-static-kernel-language-now-available/" rel="noreferrer">C++ for OpenCL</a> extension magic.</li>
</ul>
<p>Having said that, CUDA-based libraries (and maybe C++ AMP) have the obvious advantage of an actual compile-time compiler (can you even say that?), so the integration with user code can be much tighter.</p> | 2013-11-23 05:49:11.373000+00:00 | 2015-05-28 03:06:17.320000+00:00 | 2017-05-23 12:34:38.020000+00:00 | null | 20,154,179 | <p>With just a cursory understanding of these libraries, they look to be very similar. I know that VexCL and Boost.Compute use OpenCL as a backend (although the v1.0 release of VexCL also supports CUDA as a backend) and Thrust uses CUDA. Aside from the different backends, what's the difference between these?</p>
<p>Specifically, what problem space do they address and why would I want to use one over the other.</p>
<p>Also, on the Thrust FAQ it is stated that</p>
<blockquote>
<p>The primary barrier to OpenCL support is the lack of an OpenCL compiler and runtime with support for C++ templates</p>
</blockquote>
<p>If this is the case, how is it possible that VexCL and Boost.Compute even exist.</p> | 2013-11-22 20:47:17.713000+00:00 | 2016-09-16 16:26:49.857000+00:00 | 2016-09-16 16:26:49.857000+00:00 | c++|thrust|gpu|boost-compute|vexcl | ['https://github.com/ddemidov/vexcl', 'https://stackoverflow.com/users/226814/kyle-lutz', 'https://github.com/boostorg/compute', 'http://lists.boost.org/Archives/boost/2013/03/201547.php', 'http://thrust.github.io/', 'https://github.com/HSA-Libraries/Bolt', 'http://msdn.microsoft.com/en-us/library/vstudio/hh265136.aspx', 'http://eigen.tuxfamily.org/', 'http://hsafoundation.com/bringing-camp-beyond-windows-via-clang-llvm/', 'http://viennacl.sourceforge.net/', 'https://speakerdeck.com/ddemidov/vexcl-implementation-university-of-texas-2013', 'http://epubs.siam.org/doi/abs/10.1137/120903683', 'http://arxiv.org/abs/1212.6326', 'http://boostorg.github.io/compute/boost_compute/advanced_topics.html#boost_compute.advanced_topics.lambda_expressions', 'http://boostorg.github.io/compute/BOOST_COMPUTE_ADAPT_STRUCT.html', 'http://boostorg.github.io/compute/BOOST_COMPUTE_CLOSURE.html', 'https://github.com/ddemidov/vexcl#converting-generic-c-algorithms-to-opencl', 'http://developer.amd.com/community/blog/2012/05/21/opencl-1-2-and-c-static-kernel-language-now-available/'] | 18 |
5,247,268 | <p>Christopher D. Manning, Prabhakar Raghavan & Hinrich Schütze have a <a href="http://nlp.stanford.edu/IR-book/html/htmledition/irbook.html" rel="nofollow noreferrer">free information retrieval book</a>. Try <a href="http://nlp.stanford.edu/IR-book/html/htmledition/text-classification-and-naive-bayes-1.html" rel="nofollow noreferrer">chapter 13 - Text classification & Naive Bayes</a>. </p>
<p>See also the companion site for Manning and Schütze's <a href="https://rads.stackoverflow.com/amzn/click/com/0262133601" rel="nofollow noreferrer" rel="nofollow noreferrer">nlp book</a>, specifically <a href="http://www-nlp.stanford.edu/fsnlp/class/" rel="nofollow noreferrer">links for the text categorization chapter</a>.</p>
<p>Fabrizio Sebastiani wrote <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.38.7770&rep=rep1&type=pdf" rel="nofollow noreferrer">a useful tutorial about text categorization(PDF)</a> and <a href="http://arxiv.org/pdf/cs/0110053" rel="nofollow noreferrer">review paper of machine learning for text categorization (PDF)</a>.</p> | 2011-03-09 14:33:19.307000+00:00 | 2011-03-09 14:33:19.307000+00:00 | null | null | 5,245,437 | <p>I am interested in doing a project on document classification and have been looking for books that could be useful for the theoretical parts in text mining related to this or examples of articles describing the process of going from training data with documents classified (with subcategories) to a system which predicts the class of a document. There seem to be some (rather expensive!) titles available but these are conference proceedings with articles on smaller very specific topics. Can someone suggest books from the data mining literature that provides a good theoretical basis for a project on text mining, specifically document classification or articles with an overview of this process ?</p> | 2011-03-09 11:59:27.960000+00:00 | 2011-03-10 09:48:44.817000+00:00 | null | data-mining|text-mining|document-classification | ['http://nlp.stanford.edu/IR-book/html/htmledition/irbook.html', 'http://nlp.stanford.edu/IR-book/html/htmledition/text-classification-and-naive-bayes-1.html', 'https://rads.stackoverflow.com/amzn/click/com/0262133601', 'http://www-nlp.stanford.edu/fsnlp/class/', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.38.7770&rep=rep1&type=pdf', 'http://arxiv.org/pdf/cs/0110053'] | 6 |
57,911,745 | <p>PyTorch at the moment doesn't have <a href="https://pytorch.org/docs/stable/torchvision/io.html" rel="nofollow noreferrer">support to detect and track objects</a> in a video. </p>
<p>You would need to create your own logic for that.</p>
<p>The support is limited to reading the video and audio from a file, reading frames and timestamps, and writing the video; read more <a href="https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/#torchvision-04-with-support-for-video" rel="nofollow noreferrer">here</a>.</p>
<p>What you will basically need to do is create your own object tracking: go frame by frame, keep each detected object together with its bounding-box position, and based on that decide whether a detection is the same object as before or a new one.</p>
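<p>A minimal sketch of that idea (purely illustrative: the <code>detect</code> function, the <code>[x1, y1, x2, y2]</code> box format and the 0.5 IoU threshold are assumptions, and a real tracker would be considerably more robust):</p>
<pre><code>def iou(a, b):
    # intersection-over-union of two boxes given as [x1, y1, x2, y2]
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def count_objects(frames, detect, iou_thresh=0.5):
    # detect(frame) is assumed to return a list of boxes for that frame
    tracks, total = [], 0          # last known box of every object seen so far
    for frame in frames:
        matched = [False] * len(tracks)
        for box in detect(frame):
            # greedily match the detection against unmatched existing tracks
            best, best_iou = None, iou_thresh
            for i, t in enumerate(tracks):
                if not matched[i] and iou(box, t) > best_iou:
                    best, best_iou = i, iou(box, t)
            if best is None:               # unseen object: count it
                tracks.append(box)
                matched.append(True)
                total += 1
            else:                          # same object: update its position
                tracks[best] = box
                matched[best] = True
    return total
</code></pre>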
<p>If you have a drone flying and inspecting people you may check <a href="https://deepmind.com/research/open-source/kinetics" rel="nofollow noreferrer">Kinetics</a> to detect human actions:</p>
<ul>
<li>ResNet 3D 18</li>
<li>ResNet MC 18</li>
<li>ResNet (2+1)D</li>
</ul>
<p>All based on <a href="https://arxiv.org/abs/1711.11248" rel="nofollow noreferrer">Kinetics-400</a>
But the newer one is <a href="https://arxiv.org/abs/1907.06987" rel="nofollow noreferrer">Kinetics-700</a>.</p> | 2019-09-12 17:22:43.830000+00:00 | 2019-09-12 22:55:45.370000+00:00 | 2019-09-12 22:55:45.370000+00:00 | null | 57,890,885 | <p>I am looking for some advice on how to apply a pytorch CNN to a video as opposed to an image. </p>
<p>Picture a drone flying over an area and using video to capture some objects below. I have a CNN trained on images of objects, and want to count the objects in the video. </p>
<p>Currently my strategy has been to convert the video to frames as PNGs and running the CNN on those PNGs. this seems inefficient, and I am struggling with how to count the objects without duplicating (frame 1 and frame 1+n will overlap). </p>
<p>It would be appreciated if someone had some advice, or a suggested tutorial/code set that did this. Thanks in advance. </p> | 2019-09-11 14:01:36.937000+00:00 | 2021-03-11 12:13:17.173000+00:00 | null | pytorch|image-recognition | ['https://pytorch.org/docs/stable/torchvision/io.html', 'https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/#torchvision-04-with-support-for-video', 'https://deepmind.com/research/open-source/kinetics', 'https://arxiv.org/abs/1711.11248', 'https://arxiv.org/abs/1907.06987'] | 5 |
65,209,263 | <h1>It can be Done!</h1>
<p>You can initialize an array in <strong>O(1) worst-time</strong> and O(1) extra space, and it can be improved to using only <strong>1 extra bit</strong> of memory.</p>
<p>Both can be found in this <strong><a href="https://arxiv.org/abs/1709.08900" rel="nofollow noreferrer">Paper</a></strong>, explained simply in this <strong><a href="https://link.medium.com/Q8YbkDJX2bb" rel="nofollow noreferrer">Article</a></strong>, and implemented in the <strong><a href="https://github.com/tomhea/farray" rel="nofollow noreferrer">Farray</a></strong> project. <br>
Full disclosure - the last two were written by me.</p>
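<p>For intuition, here is a minimal sketch of the classic folklore version of the idea — O(1) fill with O(n) extra <em>words</em> rather than the paper's single extra bit (and in Python the allocation itself is of course still O(n); the point is the constant-time <code>fill</code> afterwards):</p>
<pre><code>class InitializableArray:
    def __init__(self, n, default=0):
        self.default = default
        self.data = [None] * n   # payload values (may hold stale garbage)
        self.when = [0] * n      # when[i]: slot in `stack` that claims index i
        self.stack = [0] * n     # indices genuinely written since last fill
        self.top = 0             # number of genuinely written cells

    def _written(self, i):
        # i holds a real value iff the back-pointer round trip is consistent
        return self.when[i] < self.top and self.stack[self.when[i]] == i

    def fill(self, value):
        # O(1) "initialize the whole array"
        self.default = value
        self.top = 0

    def read(self, i):
        return self.data[i] if self._written(i) else self.default

    def write(self, i, value):
        if not self._written(i):
            self.when[i] = self.top
            self.stack[self.top] = i
            self.top += 1
        self.data[i] = value
</code></pre>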
<p>It is the <strong>state-of-the-art</strong> initializable array, and will <em>probably stay that way</em>. The above paper proves that without the extra bit, the fill (init) operation must take Θ(n) time, even for amortized/randomized algorithms.</p> | 2020-12-09 01:14:32.643000+00:00 | 2020-12-11 00:23:46.110000+00:00 | 2020-12-11 00:23:46.110000+00:00 | null | 303,519 | <p>someone has an idea how can I do that ?</p>
<p>thanks</p> | 2008-11-19 21:55:56.357000+00:00 | 2020-12-11 00:23:46.110000+00:00 | null | performance | ['https://arxiv.org/abs/1709.08900', 'https://link.medium.com/Q8YbkDJX2bb', 'https://github.com/tomhea/farray'] | 3 |
<p>I tried to port the standard binary heap into a functional setting. There is an article describing the idea: <a href="http://arxiv.org/pdf/1312.4666v1.pdf" rel="nofollow">A Functional Approach to Standard Binary Heaps</a>. All the source code listings in the article are in Scala, but it can be ported quite easily into any other functional language.</p>
<p><em>Edit:</em> By efficient I mean it should still be in O(n*log n), but it doesn't have to beat a C program. Also, I'd like to use purely functional programming. What else would be the point of doing it in Haskell?</p> | 2009-05-31 19:52:23.240000+00:00 | 2014-01-15 17:14:16.640000+00:00 | 2010-01-31 07:19:03.610000+00:00 | haskell|functional-programming|binary-heap|heapsort|purely-functional | ['http://arxiv.org/pdf/1312.4666v1.pdf'] | 1 |
65,461,026 | <p>The choice of how to split the datasets is really up to the evaluator and what they are trying to accomplish. The preprocessed datasets in TFF (from <code>tff.simulation.datasets</code>) are usually only split into two, but they can be rejoined and split again in whatever way is desired.</p>
<p>One thing to consider: there are (at least) two dimensions that may be interesting to split on for federated learning.</p>
<ol>
<li><em>examples</em>: Splitting a single client's dataset into train, test, and validation. This could possibly be seen as most analogous to the centralized training regime. Most TFF datasets use this.</li>
<li><em>users</em>: Splitting users into train, test, and heldout <em>users</em> might be particularly interesting in the federated regime. This might be able to answer how well a global model generalizes to <em>unseen</em> users, but might be heavily affected by the non-iid ness of the individual datasets and splits. This is used in a few TFF provided datasets.</li>
</ol>
<p>Furthermore, both of these could be time based (if there is a notion of time), for example splitting each clients dataset into "previous day" (train) and "next day" (test). Or, as is often the case in practice with cross-device FL, splitting by time of day (users available for training at night maybe different than mid-day), <a href="https://arxiv.org/abs/1904.10120" rel="nofollow noreferrer">Eichner 2019</a> performed some experiments using this setup.</p>
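<p>A small sketch of both kinds of split (hedged: it assumes the <code>tff.simulation.datasets</code> EMNIST <code>ClientData</code> API, 100 held-out users, and an 80/20 per-client example split — all arbitrary choices):</p>
<pre><code>import tensorflow_federated as tff

emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()

# 1) split by *users*: hold out some clients entirely to test generalization
#    to unseen users
client_ids = list(emnist_train.client_ids)
train_ids, heldout_ids = client_ids[:-100], client_ids[-100:]

# 2) split by *examples*: within each training client, keep the last 20% of
#    that client's examples for local evaluation
def split_client(client_data, client_id, frac=0.8):
    ds = client_data.create_tf_dataset_for_client(client_id)
    n = sum(1 for _ in ds)       # client datasets are small, counting is fine
    n_train = int(frac * n)
    return ds.take(n_train), ds.skip(n_train)

local_train_ds, local_eval_ds = split_client(emnist_train, train_ids[0])
</code></pre>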
<p><strong>Note</strong>: the <a href="https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/stackoverflow/load_data" rel="nofollow noreferrer"><code>tff.simulation.datasets.stackoverflow.load_data</code></a> does have three splits named <code>train</code>, <code>held_out</code> and <code>test</code>. Please read the documentation carefully as it utilizes both types of splitting mentioned above.</p> | 2020-12-26 21:53:28.667000+00:00 | 2020-12-26 21:53:28.667000+00:00 | null | null | 65,458,032 | <p>Why in the federated learning task, we don't split our dataset to train, test and validation, we make only train and test .</p> | 2020-12-26 15:43:13.573000+00:00 | 2020-12-26 21:53:28.667000+00:00 | null | tensorflow-federated | ['https://arxiv.org/abs/1904.10120', 'https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/stackoverflow/load_data'] | 2 |
17,234,422 | <p>I derived update formulas and algorithms for entropy and Gini index and made the note <a href="http://arxiv.org/abs/1403.6348" rel="nofollow noreferrer">available on arXiv</a>. (The working version of the note is available <a href="http://blazsovdat.com/publications/incremental.pdf" rel="nofollow noreferrer">here</a>.) Also see <a href="https://mathoverflow.net/questions/133977/incremental-entropy-computation/134376#134376">this mathoverflow</a> answer.</p>
<p>For the sake of convenience I am including simple Python code, demonstrating the derived formulas:
</p>
<pre><code>from math import log
from random import randint

# maps x to -x*log2(x) for x>0, and to 0 otherwise
h = lambda p: -p*log(p, 2) if p > 0 else 0

# update entropy H of a sample with total count S when a new count x comes in
def update(H, S, x):
    new_S = S + x
    return 1.0*H*S/new_S + h(1.0*x/new_S) + h(1.0*S/new_S)

# entropy of the union of two samples with entropies H1, H2 and totals S1, S2
def merge(H1, S1, H2, S2):
    S = S1 + S2
    return 1.0*H1*S1/S + h(1.0*S1/S) + 1.0*H2*S2/S + h(1.0*S2/S)

# compute entropy(L) using only the `update' function
def test(L):
    S = 0.0  # sum of the sample elements
    H = 0.0  # sample entropy
    for x in L:
        H = update(H, S, x)
        S = S + x
    return H

# compute entropy using the classic equation
def entropy(L):
    n = 1.0*sum(L)
    return sum([h(x/n) for x in L])

# entry point: all three values printed below should agree
if __name__ == "__main__":
    L = [randint(1, 100) for k in range(100)]
    M = [randint(100, 1000) for k in range(100)]
    L_ent, L_sum = entropy(L), sum(L)
    M_ent, M_sum = entropy(M), sum(M)
    T = L + M
    print("Full        =", entropy(T))
    print("Incremental =", test(T))
    print("Merged      =", merge(L_ent, L_sum, M_ent, M_sum))
</code></pre>
<hr> | 2013-06-21 11:44:10.620000+00:00 | 2014-04-06 21:07:06.790000+00:00 | 2017-04-13 12:57:55.007000+00:00 | null | 17,104,673 | <p>Let <code>std::vector<int> counts</code> be a vector of positive integers and let <code>N:=counts[0]+...+counts[counts.length()-1]</code> be the sum of the vector components. Setting <code>pi:=counts[i]/N</code>, I compute the entropy using the classic formula <code>H=-(p0*log2(p0)+...+pn*log2(pn))</code>.</p>
<p>The <code>counts</code> vector is changing --- counts are incremented --- and every 200 changes I recompute the entropy. After a quick google and stackoverflow search I couldn't find any method for incremental entropy computation. So the question: Is there an incremental method, <a href="http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Online_algorithm" rel="nofollow noreferrer">like the ones for variance</a>, for entropy computation?</p>
<p>EDIT: Motivation for this question was usage of such formulas for incremental information gain estimation in <a href="http://homes.cs.washington.edu/~pedrod/papers/kdd00.pdf" rel="nofollow noreferrer">VFDT</a>-like learners. </p>
<p><strong>Resolved:</strong> See <a href="https://mathoverflow.net/questions/133977/incremental-entropy-computation/134376#134376">this mathoverflow post</a>.</p> | 2013-06-14 08:52:43.503000+00:00 | 2014-04-06 21:07:06.790000+00:00 | 2017-04-13 12:57:55.007000+00:00 | c++|algorithm|decision-tree|entropy | ['http://arxiv.org/abs/1403.6348', 'http://blazsovdat.com/publications/incremental.pdf', 'https://mathoverflow.net/questions/133977/incremental-entropy-computation/134376#134376'] | 3 |
61,850,359 | <p>According to <a href="https://pytorch.org/docs/stable/nn.init.html#torch.nn.init.kaiming_normal_" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p>Choosing 'fan_in' preserves the magnitude of the variance of the
weights in the forward pass. Choosing 'fan_out' preserves the
magnitudes in the backwards pass.</p>
</blockquote>
<p>and according to <a href="https://arxiv.org/pdf/1502.01852.pdf" rel="noreferrer">Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015)</a>: </p>
<blockquote>
<p>We note that it is sufficient to use either Eqn.(14) or
Eqn.(10)</p>
</blockquote>
<p>where Eqn.(10) and Eqn.(14) are <code>fan_in</code> and <code>fan_out</code>, respectively. Furthermore:</p>
<blockquote>
<p>This means that if the initialization properly scales the backward
signal, then this is also the case for the forward signal; and vice
versa. For all models in this paper, both forms can make them converge</p>
</blockquote>
<p><strong>so all in all it doesn't matter much</strong> but it's more about what you are after. I assume that if you suspect your backward pass might be more "chaotic" (greater variance) it is worth changing the mode to <code>fan_out</code>. This might happen when the loss oscillates a lot (e.g. very easy examples followed by very hard ones).</p>
<p>Correct choice of <code>nonlinearity</code> is more important, where <code>nonlinearity</code> is the activation you are using <strong>after</strong> the layer you are currently initializing. Current defaults set it to <code>leaky_relu</code> with <code>a=0</code>, which is effectively the same as <code>relu</code>. If you are using <code>leaky_relu</code> you should change <code>a</code> to its slope.</p> | 2020-05-17 10:31:14.750000+00:00 | 2020-05-17 10:31:14.750000+00:00 | null | null | 61,848,635 | <p>I have read several codes that do layer initialization using <code>nn.init.kaiming_normal_()</code> of PyTorch. Some codes use the <code>fan in</code> mode which is the default. Of the many examples, one can be found <a href="https://github.com/Xiaoming-Yu/RAGAN/blob/master/models/model.py#L50" rel="noreferrer">here</a> and shown below.</p>
<pre><code>init.kaiming_normal(m.weight.data, a=0, mode='fan_in')
</code></pre>
<p>However, sometimes I see people using the <code>fan out</code> mode as seen <a href="https://github.com/liminn/ICNet-pytorch/blob/master/models/base_models/resnetv1b.py#L126" rel="noreferrer">here</a> and shown below.</p>
<pre><code>if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
</code></pre>
<p>Can someone give me some guidelines or tips to help me decide which mode to select? Further I am working on image super resolutions and denoising tasks using PyTorch and which mode will be more beneficial.</p> | 2020-05-17 07:56:07.100000+00:00 | 2020-05-17 10:31:14.750000+00:00 | null | initialization|pytorch | ['https://pytorch.org/docs/stable/nn.init.html#torch.nn.init.kaiming_normal_', 'https://arxiv.org/pdf/1502.01852.pdf'] | 2 |
32,361,191 | <p>The number of layers, or depth, of a neural network is one of its <em>hyperparameters</em>.</p>
<p>This means that it is a quantity that can not be learned from the data, but you should choose it <strong>before</strong> trying to fit your dataset. According to <a href="http://arxiv.org/abs/1206.5533" rel="nofollow">Bengio</a>, </p>
<blockquote>
<p>We define a hyper-parameter for a learning algorithm A as a variable to be set prior to the actual application of A to the data, one that is not directly selected by the learning algorithm itself.</p>
</blockquote>
<p>There are three main approaches to find out the optimal value for an hyperparameter. The first two are well explained in the paper I linked.</p>
<ul>
<li>Manual search. Using well-known black magic, the researcher choose the optimal value through try-and-error.</li>
<li>Automatic search. The researcher relies on an automated routine in order to speed up the search.</li>
<li><a href="https://github.com/HIPS/Spearmint" rel="nofollow">Bayesian optimization</a>.</li>
</ul>
<p>More specifically, adding more layers to a deep neural network is likely to improve the performance (reduce generalization error), up to a certain number when it overfits the training data.</p>
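<p>Concretely, the manual search can be a simple loop over depths — a hedged <code>tf.keras</code> sketch (the dataset, depth range, filter counts and tiny epoch budget are arbitrary assumptions):</p>
<pre><code>import tensorflow as tf

(x_train, y_train), (x_val, y_val) = tf.keras.datasets.cifar10.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0

def build_model(n_conv_layers):
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(32, 32, 3))])
    for _ in range(n_conv_layers):
        model.add(tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"))
    model.add(tf.keras.layers.GlobalAveragePooling2D())
    model.add(tf.keras.layers.Dropout(0.5))   # strong regularization
    model.add(tf.keras.layers.Dense(10, activation="softmax"))
    return model

# add one layer at a time and watch the validation curve for overfitting
for depth in range(2, 7):
    model = build_model(depth)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    hist = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                     epochs=5, batch_size=128, verbose=0)
    print(depth, "conv layers -> best val acc:", max(hist.history["val_accuracy"]))
</code></pre>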
<p>So, in practice, you should train your ConvNet with, say, 4 layers, try adding one hidden layer and train again, until you see some overfitting. Of course, some strong regularization techniques (such as <strong>dropout</strong>) are required.</p> | 2015-09-02 19:05:37.943000+00:00 | 2015-09-03 12:47:39.213000+00:00 | 2015-09-03 12:47:39.213000+00:00 | null | 29,545,897 | <p>I was looking for an automatic way to decide how many layers I should apply to my network, depending on the data and computer configuration. I searched the web, but I could not find anything. Maybe my keywords or search terms are wrong.</p>
<p>Do you have any idea?</p> | 2015-04-09 18:10:58.767000+00:00 | 2015-09-03 12:47:39.213000+00:00 | null | machine-learning|neural-network|convolution|deep-learning|conv-neural-network | ['http://arxiv.org/abs/1206.5533', 'https://github.com/HIPS/Spearmint'] | 2 |
<p>The easiest solution is to just threshold the result at some value (I used 0.5 for convenience). However, you can use dice loss as in <a href="https://arxiv.org/abs/1608.04117" rel="nofollow noreferrer">https://arxiv.org/abs/1608.04117</a> (keras implementation can be found here: <a href="https://github.com/raghakot/ultrasound-nerve-segmentation" rel="nofollow noreferrer">https://github.com/raghakot/ultrasound-nerve-segmentation</a>), which tends to produce binary outputs. Whether the final result is better than thresholding the binary cross-entropy output depends on your dataset, though.</p> | 2018-04-11 13:54:58.800000+00:00 | 2018-04-11 13:54:58.800000+00:00 | null | null | 49,776,733 | <p>I am working on a binary segmentation problem for which I have to segment the nuclei from the cells. I am using binary cross entropy as loss function with a U-Net CNN model. The resulting images have some blurry effects. The more epochs I run the experiment for, the more blurriness occurs. What leads to such a blurry result, and what change should I make to my model to get rid of it?</p>
<p>I have attached one sample resultant image produced after 4 epochs.
<a href="https://i.stack.imgur.com/FmscI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FmscI.png" alt="enter image description here"></a></p> | 2018-04-11 13:41:53.217000+00:00 | 2018-04-11 13:54:58.800000+00:00 | null | image-processing|machine-learning|deep-learning|image-segmentation | ['https://arxiv.org/abs/1608.04117', 'https://github.com/raghakot/ultrasound-nerve-segmentation'] | 2 |
52,216,750 | <p>There are four possible solutions:</p>
<ul>
<li><p>Learn 4 models and use SavedModel to save them. Then, create a 5th model that restores the 4 saved models. This model has no trainable weights. Instead, it simply computes the context and applies the appropriate weight to each of the 4 saved models and returns the value. It is this 5th model that you will deploy.</p></li>
<li><p>Learn a single model. Make the context a categorical input to your model, i.e. follow the approach in <a href="https://arxiv.org/abs/1606.07792" rel="nofollow noreferrer">https://arxiv.org/abs/1606.07792</a></p></li>
<li><p>Use a separate AppEngine service that computes the context and invokes the underlying 4 services, weighs them and returns the result.</p></li>
<li><p>Use an AppEngine service written in Python that loads up all four saved models and invokes the 4 models and weights them and returns the result.</p></li>
</ul>
<p>option 1 involves more coding, and is quite tricky to get right.</p>
<p>option 2 would be my choice, although it changes the model formulation from what you desire. If you go this route, here's a sample code on MovieLens that you can adapt: <a href="https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/movielens" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/movielens</a></p>
<p>option 3 introduces more latency because of the additional network overhead</p>
<p>option 4 reduces network latency from #3, but you lose the parallelism. You will have to experiment between options 3 and 4 on which provides better performance overall</p> | 2018-09-07 06:44:29.470000+00:00 | 2018-09-08 16:58:33.860000+00:00 | 2018-09-08 16:58:33.860000+00:00 | null | 52,200,348 | <p>I am trying to build Context Aware Recommender System with Cloud ML Engine, which uses <code>context prefiltering</code> method (<a href="https://www.slideshare.net/irecsys/tutorial-context-in-recommender-systems" rel="nofollow noreferrer">as described in slide 55, solution a</a>) and I am using this <a href="https://cloud.google.com/solutions/machine-learning/recommendation-system-tensorflow-overview" rel="nofollow noreferrer">Google Cloud tutorial (part 2)</a> to build a demo. I have split the dataset to Weekday and Weekend contexts and Noon and Afternoon contexts by timestamp for purposes of this demo.</p>
<p>In practice I will learn four models, so that I can context filter by Weekday-unknown, Weekend-unknown, unknown-Noon, unknown-Afternoon, Weekday-Afternoon, Weekday-Noon... and so on. The idea is to use prediction from all the relevant models by user and then weight the resulting recommendation based on what is known about the context (unknown meaning, that all context models are used and weighted result is given).</p>
<p>I would need something, that responds fast and it seems like I will unfortunately need some kind of middle-ware if I don't want do the weighting in the front-end.</p>
<p>I know, that AppEngine has prediction mode, where it keeps the models in RAM, which guarantees fast responses, as you don't have to bootstrap the prediction models; then resolving the context would be fast.</p>
<p>However, would there be more simple solution, which would also guarantee similar performance in Google Cloud?</p>
<p>The reason I am using Cloud ML Engine is that when I do context aware recommender system this way, the amount of hyperparameter tuning grows hugely; I don't want to do it manually, but instead use the Cloud ML Engine Bayesian Hypertuner to do the job, so that I only need to tune the range of parameters one to three times per each context model (with an automated script); this saves much of Data Scientist development time, whenever the dataset is reiterated.</p> | 2018-09-06 09:01:10+00:00 | 2018-09-08 16:58:33.860000+00:00 | 2018-09-06 09:14:00.887000+00:00 | google-app-engine|tensorflow|google-cloud-platform|recommendation-engine|google-cloud-ml | ['https://arxiv.org/abs/1606.07792', 'https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/movielens'] | 2 |
55,287,548 | <p>In order to make operations on sets fast, use <a href="http://arxiv.org/pdf/1503.01547.pdf" rel="nofollow noreferrer">binary decision diagrams</a>.</p>
<p>Depending on what operations on sets of sets you need, you can choose different variations of BDDs. In the most general case, use the ID of each set on the terminal nodes and do not unify the terminal nodes. </p>
<p>There are thousands of articles where you can learn how to implement them. Compared to lists and other trivial data structures, there are many different ways to implement BDDs, and they take some more mental effort to use, but after you understand them you will love this data structure.</p>
<p>Understanding them is a sizeable intellectual effort, but they run really fast when you implement lists of sets, sets of sets (power sets), or whatever you wish.</p>
<pre><code>sets <- list(a=1:3, b=2:3, c=4:6, d=4:6, e=7)
</code></pre>
<p>I want to identify all sets that are proper subsets of another set in the list, such that my desired result would look like this...</p>
<pre><code>c(F,T,F,F,F)
</code></pre>
<p>Because my actual sets are quite large, I don't want to need to calculate power sets of each set. Does anyone have a thought for an efficient way to do this? </p>
<p>This is what I've done so far, and it works, but this can't be the most elegant way of doing it.</p>
<pre><code> truthtable <- bind_rows(lapply(X=sets, FUN=function(x, allsets){
unlist(lapply(X=allsets, FUN=function(x,testset){
return(all(x %in% testset) & !setequal(x, testset))
}, testset=x))
}, allsets=sets))
apply(truthtable, 1, function(x){(all(!x))})
</code></pre> | 2019-03-21 18:37:13.683000+00:00 | 2019-03-21 20:16:59.180000+00:00 | 2019-03-21 18:45:49.647000+00:00 | r|set | ['http://arxiv.org/pdf/1503.01547.pdf'] | 1 |
63,613,964 | <p>For anyone checking this out in 2020, it seems like the security concern only affects Android APIs lower than 17 (Android 4.2). So, if your <code>minSdkVersion</code> is <code>17</code> or higher, then you should be safe.</p>
<p>Here are references:</p>
<ul>
<li><p><a href="https://labs.f-secure.com/archive/webview-addjavascriptinterface-remote-code-execution/" rel="nofollow noreferrer">https://labs.f-secure.com/archive/webview-addjavascriptinterface-remote-code-execution/</a></p>
<blockquote>
<p>If the linked SDK has been built for an API lower than 17, the vulnerability exists – even if the application using the SDK has been built for API 17 or above.</p>
</blockquote>
</li>
<li><p><a href="https://7asecurity.com/blog/2019/09/hacking-mandated-apps-part-5-rce-in-webview-mstg-platform-7/" rel="nofollow noreferrer">https://7asecurity.com/blog/2019/09/hacking-mandated-apps-part-5-rce-in-webview-mstg-platform-7/</a></p>
<blockquote>
<p>Android versions from Android 2.4 to Android 4.1 are affected by a vulnerability that allows remote code execution when JavaScript is injected in the WebView.</p>
</blockquote>
</li>
<li><p><a href="https://arxiv.org/pdf/1912.12982.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1912.12982.pdf</a> (Page 7)</p>
<blockquote>
<p>Google later fixed this weakness on Android 4.2 and
above. However, if an app sets the targetSdkVersion lower than 17 and also
calls this API, the system will still render the vulnerable API behavior even when
running on Android 4.2+. Such vulnerable app examples are available at <a href="https://sites.google.com/site/androidrce/" rel="nofollow noreferrer">https://sites.google.com/site/androidrce/</a>.</p>
</blockquote>
</li>
</ul> | 2020-08-27 10:23:03.633000+00:00 | 2020-08-27 10:23:03.633000+00:00 | null | null | 6,415,882 | <p>From the documentation:
<a href="http://developer.android.com/reference/android/webkit/WebView.html#addJavascriptInterface%28java.lang.Object,%20java.lang.String%29" rel="noreferrer">http://developer.android.com/reference/android/webkit/WebView.html#addJavascriptInterface%28java.lang.Object,%20java.lang.String%29</a></p>
<p>"Using addJavascriptInterface() allows JavaScript to control your application. This can be a very useful feature or a dangerous security issue. When the HTML in the WebView is untrustworthy (for example, part or all of the HTML is provided by some person or process), then an attacker could inject HTML that will execute your code <strong>and possibly any code of the attacker's choosing.</strong>
<strong>Do not use addJavascriptInterface() unless all of the HTML in this WebView was written by you.</strong>
The Java object that is bound runs in another thread and not in the thread that it was constructed in.</p>
<p><strike>Suppose I have an interface that only shows a custom dialog box, or starts a download to sd card. Would this be unsafe to use for any url? How could an attack page use the interface to run any code of the attacker's choosing?</strike></p>
<p><strong>Update:</strong>
According to the <a href="http://developer.android.com/reference/android/webkit/WebView.html#addJavascriptInterface%28java.lang.Object,%20java.lang.String%29" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p>This method can be used to allow JavaScript to control the host
application. This is a powerful feature, but also presents a security
risk for applications targeted to API level JELLY_BEAN or below,
because JavaScript could use reflection to access an injected object's
public fields. Use of this method in a WebView containing untrusted
content could allow an attacker to manipulate the host application in
unintended ways, executing Java code with the permissions of the host
application. Use extreme care when using this method in a WebView
which could contain untrusted content.</p>
</blockquote>
<p><strike>Is there an example of how this could happen? It this just saying that <code>DOWNLOADINTERFACE.dangerousfunction</code> could be called if that's a public method on that class?</strike></p>
<p><strong><h1>Update:</h1></strong></p>
<p>I tested based on the example of the exploit below, sites <em>can</em> get access to the system through interfaces in Android 4.4, 4.1, and 3.2.</p>
<p>However, I was <em>not</em> seeing this bug on Android 2.2, or 2.3, the hack only causes a force-close. What is the best way to prevent this hack, other than not using JSInterface? Can I include bogus functions like this, to prevent unauthorized calling of functions?</p>
<pre><code>public Object getClass() {
//throw error, return self, or something?
}
</code></pre>
<p>Or rewrite everything using ajax and intercepting calls? Would that result in better/worse performance?</p>
<p><strong>Update:</strong></p>
<p>I succeeded in removing the JS interface, and replaced the functionality by defining window.open(specialurl) commands for all the window.(interface) functions, and overriding those in the shouldOverrideUrlLoading. Strangely enough, window.open() must be used in some cases, or the webview breaks display (like javascript is stopping?), and in other cases location.replace should be used or it will just show a "interface://specialdata" could not be found message.</p>
<p>(I set settings.setJavaScriptCanOpenWindowsAutomatically(true) so window.open works from JS all the time.)</p>
<p>Anyone know the best way to rewrite an app with this behavior?</p> | 2011-06-20 18:43:59.290000+00:00 | 2020-08-27 10:23:03.633000+00:00 | 2014-01-05 09:31:31.920000+00:00 | javascript|android|interface|webview|android-webview | ['https://labs.f-secure.com/archive/webview-addjavascriptinterface-remote-code-execution/', 'https://7asecurity.com/blog/2019/09/hacking-mandated-apps-part-5-rce-in-webview-mstg-platform-7/', 'https://arxiv.org/pdf/1912.12982.pdf', 'https://sites.google.com/site/androidrce/'] | 4 |
67,008,933 | <p>It is fairly well-known in the literature that most CNNs do not output <em>well-calibrated probabilities</em>: in other words the 'probability' output of the softmax often does not correlate well to the true likelihood of being correct. See, for example, <a href="https://arxiv.org/pdf/1706.04599.pdf" rel="nofollow noreferrer"><em>On Calibration of Modern Neural Networks</em></a> by Guo et al for some context on the issue.</p>
<p>Recently interest has grown in <em>Bayesian deep learning</em> techniques which can model the <em>epistemic uncertainty</em>, i.e. the uncertainty in the model itself, and there are lots of interesting papers you can read, for example Kendall and Gal's <a href="https://arxiv.org/pdf/1703.04977.pdf" rel="nofollow noreferrer"><em>What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?</em></a> (there are also some slides from Kendall which are a bit lighter to read <a href="https://alexgkendall.com/media/presentations/oxford_seminar.pdf" rel="nofollow noreferrer">here</a>).</p>
<p>You could try the temperature scaling technique in Guo et al, which seems to improve the predictions somewhat.</p> | 2021-04-08 17:21:40.193000+00:00 | 2021-04-08 17:21:40.193000+00:00 | null | null | 67,008,680 | <p>I took a pretrained model and trained it on a custom dataset with 3 classes and softmax activation as the output layer. I got 91% accuracy on the test set, but here's the problem. I want my CNN to be able to say "I don't know what's in the image". That will help with 2 issues:</p>
<ul>
<li>Missclassification</li>
<li>Images without any class. I search for snakes on image and want to be able to say that there are no snakes on the image</li>
</ul>
<p>I got strange results with softmax:</p>
<pre><code>[[0.05 0.89 0.05]]
[[0.05 0.89 0.05]]
</code></pre>
<p>First image was second class, so model got it right, but second image has NO known class, but model is pretty certain there is. How can I get something closer to real probabilities?
And the model isn't bad at classifying; it just always gets something like 0.85-0.89.</p>
<p>My first thought was to add another class with no snakes on image. But that's pretty dirty solution.
Can Detection/segmentation help here?</p> | 2021-04-08 17:02:36.063000+00:00 | 2021-04-08 17:21:40.193000+00:00 | null | python|tensorflow|keras | ['https://arxiv.org/pdf/1706.04599.pdf', 'https://arxiv.org/pdf/1703.04977.pdf', 'https://alexgkendall.com/media/presentations/oxford_seminar.pdf'] | 3 |
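<p>A hedged sketch of the temperature-scaling idea mentioned in the answer above, fitted on held-out validation logits with plain NumPy/SciPy (the bounds and array names are assumptions):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    # negative log-likelihood of the temperature-scaled softmax
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(val_logits, val_labels):
    # a single scalar T > 0, tuned on a held-out validation set
    res = minimize_scalar(nll, bounds=(0.05, 20.0),
                          args=(val_logits, val_labels), method="bounded")
    return res.x

# at prediction time, use softmax(logits / T) instead of softmax(logits)
</code></pre>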
33,262,054 | <p>Spearmint learns these kinds of transformations automatically from the data. If you take a look here: <a href="https://github.com/HIPS/Spearmint/tree/master/spearmint/transformations" rel="nofollow">https://github.com/HIPS/Spearmint/tree/master/spearmint/transformations</a>
you can see the implementation of the beta warping that is applied (detailed in this paper: <a href="http://arxiv.org/abs/1402.0929" rel="nofollow">http://arxiv.org/abs/1402.0929</a>). Spearmint doesn't have a way to specify these a-priori, but you could have Spearmint operate on e.g. the log of the parameters (by giving the log of the parameter ranges and exponentiating on your end). </p> | 2015-10-21 14:28:37.880000+00:00 | 2015-10-21 14:28:37.880000+00:00 | null | null | 33,250,449 | <p>I'm trying to use <a href="https://github.com/HIPS/Spearmint" rel="nofollow">Spearmint</a>, the Bayesian optimization library, to tune hyperparameters for a machine learning classifier. My question is how does one express parameter search spaces that does not follow a uniform distribution? </p>
<p>From the <a href="https://github.com/HIPS/Spearmint/blob/master/examples/constrained/config.json" rel="nofollow">project's github page</a>, here is an example of how to set two uniformly distributed parameter search spaces:</p>
<pre><code>"variables": {
"X": {
"type": "FLOAT",
"size": 1,
"min": -5,
"max": 10
},
"Y": {
"type": "FLOAT",
"size": 1,
"min": 0,
"max": 15
}
}
</code></pre>
<p>How would we define a search space like the one below in Spearmint?</p>
<pre><code>SVC_PARAMS = [
{
"bounds": {
"max": 10.0,
"min": 0.01,
},
"name": "C",
"type": "double",
"transformation": "log",
},
{
"bounds": {
"max": 1.0,
"min": 0.0001,
},
"name": "gamma",
"type": "double",
"transformation": "log",
},
{
"type": "categorical",
"name": "kernel",
"categorical_values": [
{"name": "rbf"},
{"name": "poly"},
{"name": "sigmoid"},
],
},
]
</code></pre>
<p>Is there a place to look up all of the stochastic expressions (ie <code>uniform</code>, <code>normal</code>, <code>log</code> etc) currently being supported by Spearmint?</p> | 2015-10-21 03:31:26.840000+00:00 | 2015-10-21 14:28:37.880000+00:00 | null | machine-learning|hyperparameters | ['https://github.com/HIPS/Spearmint/tree/master/spearmint/transformations', 'http://arxiv.org/abs/1402.0929'] | 2 |
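<p>Tying the answer above to this config: a hedged sketch of the suggested log-space workaround, assuming Spearmint's <code>main(job_id, params)</code> entry point and a config that declares <code>log_C</code> and <code>log_gamma</code> as FLOAT variables whose min/max are the logs of the original bounds (<code>train_and_validate</code> is a hypothetical objective):</p>
<pre><code>import numpy as np

def train_and_validate(C, gamma):
    # hypothetical: train the SVC and return a validation loss to minimize
    raise NotImplementedError

# Spearmint calls main(job_id, params); parameter values arrive as numpy arrays
def main(job_id, params):
    C = float(np.exp(params['log_C'][0]))        # undo the log transform here
    gamma = float(np.exp(params['log_gamma'][0]))
    return train_and_validate(C=C, gamma=gamma)
</code></pre>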
46,010,546 | <p>You are looking for some kind of generative model. RNNs are commonly used, and there's a great blog post <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="nofollow noreferrer">here</a> demonstrating character-by-character text generation.</p>
<p>The same principle can be applied to any ordered sequence. You talk about an image as being a sequence of pixels, but images have an intrinsic 2D structure (3 if you include color) that would be lost if you took the exact same approach as text generation. A couple of ideas:</p>
<ol>
<li>Use tensorflow's <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/GridLSTMCell" rel="nofollow noreferrer">GridLSTMCell</a>s</li>
<li>Treat a column of pixels as a single element of the sequence and predict column-by-column (or row by row)</li>
<li>Combine idea 2 with some 1D convolutions along the column/row</li>
<li>Use features from a section of an image as the seed to a <a href="https://arxiv.org/abs/1406.2661" rel="nofollow noreferrer">generative adversarial network</a>. See <a href="https://github.com/jackd/gan_tf" rel="nofollow noreferrer">this repository</a> for basic implementations.</li>
</ol> | 2017-09-02 04:44:42.050000+00:00 | 2017-09-02 04:44:42.050000+00:00 | null | null | 46,010,136 | <p>Just as a project, I wanted to see if it would be possible to predict the next pixel in an image given all previous pixels.<br>
For example: let's say I have an image with <em>x</em> pixels. Given the first <em>y</em> pixels, I want to be able to somewhat accurately predict the <em>y+1th</em> pixel. How should I go about solving this problem?</p> | 2017-09-02 03:17:24.263000+00:00 | 2017-09-02 04:44:42.050000+00:00 | null | python|tensorflow|nlp|rnn | ['http://karpathy.github.io/2015/05/21/rnn-effectiveness/', 'https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/GridLSTMCell', 'https://arxiv.org/abs/1406.2661', 'https://github.com/jackd/gan_tf'] | 4
<p>Proposal 1: My attempt is below. Since it just uses <code>tf.image.extract_image_patches</code> and <code>tf.extract_volume_patches</code>, the implementation supports only 2D and 3D images.</p>
<p>Proposal 2: one could just format the data as a preprocessing step (via <code>tf.data.Dataset.map</code>); however, this also takes a lot of time, and I am not sure why yet (example: <a href="https://gist.github.com/pangyuteng/ca5cb07fe383ebe59b521c832f2e2918" rel="nofollow noreferrer">https://gist.github.com/pangyuteng/ca5cb07fe383ebe59b521c832f2e2918</a>).</p>
<p>Proposal 3: use convolutional blocks to parallelize processing, see "Hypercolumns for Object Segmentation and Fine-grained Localization" <a href="https://arxiv.org/abs/1411.5752" rel="nofollow noreferrer">https://arxiv.org/abs/1411.5752</a> .</p>
<p>--</p>
<p>Proposal 1 code:</p>
<pre><code>import time
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import tensorflow as tf
from tensorflow.contrib.data.python.ops import sliding
from skimage import img_as_float, data
from scipy.signal import medfilt
dtype = 2
if dtype==2:
    imgs = img_as_float(data.camera())
elif dtype==3:
    imgs = np.random.rand(28,28,28)
### SCIPY median ###
stime = time.time()
scipysmoothed = medfilt(imgs,(9,9))
etime = time.time()
print('scipy smoothed: {:1.4f} seconds'.format(etime-stime))
### TF median ###
method = 'Tensorflow'
imgs = np.expand_dims(imgs,axis=-1)
imgs = np.expand_dims(imgs,axis=0)
print('imgs.shape:{}'.format(imgs.shape))
imgs = tf.cast(imgs,tf.float32)
stime = time.time()
if len(imgs.shape) == 4:
kernel=(1,9,9,1)
stride=(1,1,1,1)
rates=(1,1,1,1)
padding='SAME'
patches=tf.image.extract_image_patches(
imgs,kernel,stride,rates,padding,
)
_,x,y,n = patches.shape
_,sx,sy,_ = kernel
window_func = lambda x: tf.contrib.distributions.percentile(x, 50.0)
patches = tf.reshape(patches,[x*y,sx,sy])
smoothed = tf.map_fn(lambda x: window_func(patches[x,:,:]), tf.range(x*y), dtype=tf.float32)
smoothed = tf.reshape(smoothed,[x,y])
elif len(imgs.shape) == 5:
kernel=(1,12,12,12,1)
stride=(1,1,1,1,1)
padding='SAME'
patches=tf.extract_volume_patches(
imgs,kernel,stride,padding,
)
_,x,y,z,n = patches.shape
_,sx,sy,sz,_ = kernel
window_func = lambda x: tf.contrib.distributions.percentile(x, 50.0)
patches = tf.reshape(patches,[x*y*z,sx,sy,sz])
smoothed = tf.map_fn(lambda x: window_func(patches[x,:,:]), tf.range(x*y*z), dtype=tf.float32)
smoothed = tf.reshape(smoothed,[x,y,z])
else:
raise NotImplemented()
with tf.Session() as sess:
output = sess.run(smoothed)
etime = time.time()
print('tf smoothed: {:1.4f} seconds'.format(etime-stime))
print(output.shape)
plt.figure(figsize=(20,20))
plt.subplot(131)
imgs = img_as_float(data.camera())
plt.imshow(imgs.squeeze(),cmap='gray',interpolation='none')
plt.title('original')
plt.subplot(132)
plt.imshow(output.squeeze(),cmap='gray',interpolation='none')
plt.title('actual smoothed\nwith {}'.format(method))
plt.subplot(133)
plt.imshow(scipysmoothed,cmap='gray',interpolation='none')
_=plt.title('expected smoothed')
</code></pre> | 2018-11-23 20:07:45.107000+00:00 | 2019-01-22 15:07:01.743000+00:00 | 2019-01-22 15:07:01.743000+00:00 | null | 53,451,728 | <p>I am looking for a gpu-accelerated n-dimensional sliding window operation implementation in Python using Tensorflow. You can post your implementation in Torch, Caffe or Theano, but I'll choose the Tensorflow implementation as the accepted answer. Please post working code snippet that performs a 2d median filter operation (hopefully, with no code change or minimal code change, can be applied to n-dimensional images)</p>
<p>With my limited knowledge on Tensorflow, I believe the 2 potential modules to start with are <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/data/sliding_window_batch" rel="nofollow noreferrer"><code>sliding_window_batch</code></a> or <a href="https://www.tensorflow.org/api_docs/python/tf/image/extract_image_patches" rel="nofollow noreferrer"><code>extract_image_patches</code></a> and then with some <code>map</code>,<code>apply</code>,<code>reshape</code> magic?</p>
<p>My failed attempt is posted below, for entertainment. Please note I have posted a similar <a href="https://stackoverflow.com/questions/35691947/3d-sliding-window-operation-in-theano">question</a> 2 years ago, asking for a Theano implementation, nowadays, most people are using tf/keras or torch.</p>
<pre><code>import time
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import tensorflow as tf
from tensorflow.contrib.data.python.ops import sliding
from skimage import img_as_float, data
from scipy.signal import medfilt
imgs = img_as_float(data.camera())
### SCIPY median ###
stime = time.time()
scipysmoothed = medfilt(imgs,(9,9))
etime = time.time()
print('scipy smoothed: {:1.4f} seconds'.format(etime-stime))
### Failed attempt of TF median ###
method = 'Tensorflow'
stime = time.time()
window_func = lambda x: tf.contrib.distributions.percentile(x, 50.0)
# create TensorFlow Dataset object
data = tf.data.Dataset.from_tensor_slices(imgs)
# sliding window - only 1d is allowed?
window = 3
stride = 1
data = data.apply(sliding.sliding_window_batch(window, stride)).map(lambda x: window_func(x))
# create TensorFlow Iterator object
iterator = tf.data.Iterator.from_structure(data.output_types)
next_element = iterator.get_next()
# create initialization ops
init_op = iterator.make_initializer(data)
c=0
smoothed = np.zeros(imgs.shape)
with tf.Session() as sess:
# initialize the iterator on the data
sess.run(init_op)
while True:
try:
elem = sess.run(next_element)
smoothed[c,:]=elem
# obviously WRONG.
c+=1
except tf.errors.OutOfRangeError:
#print("End of dataset.")
break
#print(c)
etime = time.time()
print('tf smoothed: {:1.4f} seconds'.format(etime-stime))
plt.figure(figsize=(20,20))
plt.subplot(131)
plt.imshow(imgs,cmap='gray',interpolation='none')
plt.title('original')
plt.subplot(132)
plt.imshow(smoothed,cmap='gray',interpolation='none')
plt.title('actual smoothed\nwith {}'.format(method))
plt.subplot(133)
plt.imshow(scipysmoothed,cmap='gray',interpolation='none')
_=plt.title('expected smoothed')
</code></pre>
<p>.</p>
<pre><code>scipy smoothed: 1.1899 seconds
tf smoothed: 0.7485 seconds
</code></pre> | 2018-11-23 18:54:56.760000+00:00 | 2019-01-22 15:07:01.743000+00:00 | null | python|tensorflow|image-processing|computer-vision | ['https://gist.github.com/pangyuteng/ca5cb07fe383ebe59b521c832f2e2918', 'https://arxiv.org/abs/1411.5752'] | 2 |
<p>In YOLO, if you are only using convolutional layers, the size of the output grid changes.</p>
<p>For example if you have size of:</p>
<ol>
<li><p>320x320, output size is 10x10</p></li>
<li><p>608x608, output size is 19x19</p></li>
</ol>
<p>You then calculate the loss on these w.r.t. the ground-truth grid, which is similarly adjusted.</p>
<p>Thus you can backpropagate the loss without adding any more parameters.</p>
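<p>For intuition, a tiny sketch of how the output grid follows the input resolution in a fully convolutional network with total stride 32 (matching the 320 -> 10 and 608 -> 19 examples above):</p>
<pre><code># no new parameters are needed: only the spatial size of the output changes
for size in range(320, 640, 32):     # the multiples of 32 that YOLOv2 samples
    grid = size // 32                # total downsampling factor of the network
    print(f"{size}x{size} input -> {grid}x{grid} output grid")
</code></pre>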
<p>Refer to the YOLOv1 paper for the loss function:</p>
<p><a href="https://i.stack.imgur.com/K6zaA.png" rel="nofollow noreferrer">Loss Function from the paper</a></p>
<p>Thus, in theory, you only need to adjust this function, which depends on the grid size and not on any <strong>model parameters</strong>, and you should be good to go.</p>
<p>Paper Link: <a href="https://arxiv.org/pdf/1506.02640.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.02640.pdf</a></p>
<p>The author mentions the same in his video explanation.</p>
<p><strong>Time: 14:53</strong></p>
<p><a href="https://www.youtube.com/watch?v=GBu2jofRJtk&list=PLV1NMsIkAYhBs-k7Zw6fPQ8bBFufhU6BK" rel="nofollow noreferrer">Video Link</a></p> | 2018-11-02 06:02:21.640000+00:00 | 2018-11-02 06:08:26.697000+00:00 | 2018-11-02 06:08:26.697000+00:00 | null | 50,005,852 | <p>I am wondering how the multi-scale training in <a href="https://arxiv.org/pdf/1612.08242.pdf" rel="nofollow noreferrer">YOLOv2</a> works.</p>
<p>In the paper, it is stated that:</p>
<blockquote>
<p>The original YOLO uses an input resolution of 448 × 448. ith the addition of anchor boxes we changed the resolution to 416×416. However, <strong>since our model only uses convolutional and pooling layers it can be resized on the fly</strong>. We want YOLOv2 to be robust to running on images of different sizes so we train this into the model. Instead of fixing the input image size we change the network every few iterations. Every 10 batches our network randomly chooses a new image dimension size. "Since our model downsamples by a factor of 32, we pull from the following multiples of 32: {320, 352, ..., 608}. Thus the smallest option is 320 × 320 and the largest is 608 × 608. We resize the network to that dimension and continue training. "</p>
</blockquote>
<p>I don't get how a network <strong>with only convolutional and pooling layers</strong> allows inputs of different resolutions. From my experience of building neural networks, if you change the resolution of the input to a different scale, the number of parameters of this network will change, that is, the structure of this network will change.</p>
<p>So, how does YOLOv2 change this <em>on the fly</em>? </p>
<p>I read the configuration file for yolov2, but all I got was a <code>random=1</code> statement... </p> | 2018-04-24 15:44:19.367000+00:00 | 2018-12-20 02:13:04.953000+00:00 | null | computer-vision|object-detection|convolutional-neural-network|yolo | ['https://i.stack.imgur.com/K6zaA.png', 'https://arxiv.org/pdf/1506.02640.pdf', 'https://www.youtube.com/watch?v=GBu2jofRJtk&list=PLV1NMsIkAYhBs-k7Zw6fPQ8bBFufhU6BK'] | 3 |
65,249,696 | <p>Scaling pseudospectral optimal control problems is tricky. If you can get a copy of John Betts' <a href="https://epubs.siam.org/doi/book/10.1137/1.9780898718577?mobileUi=0" rel="nofollow noreferrer">Practical Methods for Optimal Control and Estimation Using Nonlinear Programming</a>, I highly recommend it. Betts suggest using the same scaling for both the state design variable values and the defects. This is often a good rule of thumb, but as with most approaches to scaling, isn't universal. The collocation "defects" which dictate whether the dynamics are physically correct are just the difference between the slope of the approximating polynomial and the computed equations of motion.</p>
<p>In situations where state values are large but tiny rates of change are significant, different scaling is warranted in my experience. Examples of states where this can be true are aircraft range or spacecraft orbital elements. Just recently we had a situation where a low-thrust orbit transfer of a spacecraft wasn't matching physics. The semi-latus rectum, for instance, is typically measured in km, so it is on the scale of thousands when in Earth orbit. In the units being used, a "significant" difference in the defect was less than 1E-6 (the threshold for feasibility being used). In this case, the problem was solved by bumping the defect_scaler up a few orders of magnitude (equivalent to bumping the defect_ref down a few orders of magnitude).</p>
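<p>For reference, those knobs are set per state. The snippet below is a hypothetical sketch only — the state name and magnitudes are invented and the exact signature can differ between Dymos versions — but it shows where <code>ref</code> and <code>defect_ref</code>/<code>defect_scaler</code> live:</p>
<pre class="lang-py prettyprint-override"><code># Hypothetical illustration only -- names and magnitudes are made up.
# Scale the state value with ref, and scale the collocation defect separately
# with defect_ref (or equivalently defect_scaler ~ 1 / defect_ref).
phase.set_state_options('p',                # e.g. semi-latus rectum, order 1e4 km
                        ref=1.0e4,          # state-value scaling
                        defect_ref=1.0e-2)  # much tighter scaling on the defect
</code></pre>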
<p>I'd also recommend <a href="https://arxiv.org/pdf/1810.11073.pdf" rel="nofollow noreferrer">this paper from Ross, Gong, Karpenko, and Proulx</a>. It lays out some good rules of thumb and has an approachable example in the brachistochrone. It references costates a lot. Dymos doesn't provide automatic costate estimation yet, but they are closely related to the lagrange multipliers of the problem, which are printed in the pyoptsparse output if you use SNOPT.</p>
<p>The github repo you pointed out was the work of an intern and was based around this <a href="https://elib.dlr.de/93327/1/Performance_analysis_of_linear_and_nonlinear_techniques.pdf" rel="nofollow noreferrer">scaling method developed by Sagliano</a>. We found it to work well in a many situations, but it's also not a panacea.</p>
<p>Ultimately we want some automatic scaling options in Dymos and/or OpenMDAO, but we're not sure when they might find their way into the framework. Our past work has typically tied scaling approaches more tightly to the equations of motion, and Dymos is designed to be more general in that the user can supply whatever EOM they choose.</p> | 2020-12-11 10:36:36.577000+00:00 | 2020-12-11 10:36:36.577000+00:00 | null | null | 65,243,017 | <p>I was hoping to get some information on how to set my defect refs in Dymos a smart way. I found the following notes on scaling here <a href="https://github.com/hweyandtnasa/scaling-tutorial" rel="nofollow noreferrer">https://github.com/hweyandtnasa/scaling-tutorial</a> but it lists defect scaling in Dymos as a TODO still. Should I just set them equal to the ref value for the state they pertain to?</p> | 2020-12-10 22:08:51.570000+00:00 | 2020-12-11 13:31:55.947000+00:00 | null | openmdao | ['https://epubs.siam.org/doi/book/10.1137/1.9780898718577?mobileUi=0', 'https://arxiv.org/pdf/1810.11073.pdf', 'https://elib.dlr.de/93327/1/Performance_analysis_of_linear_and_nonlinear_techniques.pdf'] | 3 |
45,443,841 | <p>I've collected a couple of augmentation techniques in <a href="https://arxiv.org/pdf/1707.09725.pdf#page=94" rel="noreferrer">my master's thesis, page 80</a>. It includes:</p>
<ul>
<li>Zoom</li>
<li>Crop</li>
<li>Flip (horizontal / vertical)</li>
<li>Rotation</li>
<li>Scaling</li>
<li>Shearing</li>
<li>Channel shifts (RGB, HSV)</li>
<li>Contrast</li>
<li>Noise</li>
<li>Vignetting</li>
</ul> | 2017-08-01 17:06:13.980000+00:00 | 2017-08-01 17:06:13.980000+00:00 | null | null | 36,144,993 | <p>Currently i am training small logo datasets similar to <a href="http://www.multimedia-computing.de/flickrlogos/" rel="noreferrer">Flickrlogos-32</a> with deep CNNs. For training larger networks i need more dataset, thus using augmentation. The best i'm doing right now is using affine transformations(featurewise normalization, featurewise center, rotation, width height shift, horizontal vertical flip). But for bigger networks i need more augmentation. I tried searching on kaggle's national data science bowl's <a href="https://www.kaggle.com/c/datasciencebowl/forums/t/11360/data-augmentation" rel="noreferrer">forum</a> but couldn't get much help. There's code for some methods given <a href="https://github.com/benanne/kaggle-galaxies/blob/master/realtime_augmentation.py" rel="noreferrer">here</a> but i'm not sure what could be useful. What are some other(or better) image data augmentation techniques that could be applied to this type of(or in any general image) dataset other than affine transformations? </p> | 2016-03-22 02:06:53.167000+00:00 | 2020-05-06 08:33:03.647000+00:00 | null | image-processing|machine-learning|computer-vision|neural-network|deep-learning | ['https://arxiv.org/pdf/1707.09725.pdf#page=94'] | 1 |
36,186,665 | <p>A good recap can be found <a href="http://lamda.nju.edu.cn/weixs/project/CNNTricks/CNNTricks.html" rel="noreferrer">here</a>, in section 1 on data augmentation: namely <strong>flips</strong>, <strong>random crops</strong>, <strong>color jittering</strong>, and also <strong>lighting noise</strong>:</p>
<blockquote>
<p><a href="http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf" rel="noreferrer">Krizhevsky et al.</a> proposed fancy PCA when training the famous Alex-Net in 2012. Fancy PCA alters the intensities of the RGB channels in training images.</p>
</blockquote>
<p>Alternatively you can also have a look at the Kaggle Galaxy Zoo challenge: the winners wrote a <a href="http://benanne.github.io/2014/04/05/galaxy-zoo.html" rel="noreferrer">very detailed blog post</a>. It covers the same kind of techniques:</p>
<ul>
<li>rotation,</li>
<li>translation,</li>
<li>zoom,</li>
<li>flips,</li>
<li>color perturbation.</li>
</ul>
<p>As stated they also do it "in realtime, i.e. during training".</p>
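<p>As a rough illustration of what such on-the-fly augmentation looks like, here is a small NumPy sketch of my own (not the winners' code) that randomly flips, rotates by multiples of 90°, and perturbs the colors each time an image is drawn:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

rng = np.random.default_rng()

def augment(img):
    """img: H x W x 3 float array in [0, 1]; returns a randomly perturbed copy."""
    out = img.copy()
    if rng.random() < 0.5:                       # horizontal flip
        out = out[:, ::-1, :]
    out = np.rot90(out, k=rng.integers(4))       # random 90-degree rotation
    jitter = 1.0 + 0.1 * rng.standard_normal(3)  # per-channel color perturbation
    return np.clip(out * jitter, 0.0, 1.0)

batch = rng.random((8, 64, 64, 3))               # fake batch just for the demo
augmented = np.stack([augment(im) for im in batch])
</code></pre>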
<p>For example here is a practical <a href="http://torch.ch/" rel="noreferrer">Torch</a> <a href="https://github.com/facebook/fb.resnet.torch/blob/e8fb31378fd8dc188836cf1a7c62b609eb4fd50a/datasets/transforms.lua" rel="noreferrer">implementation</a> by Facebook (for <a href="http://arxiv.org/abs/1512.03385" rel="noreferrer">ResNet</a> training).</p> | 2016-03-23 19:06:08.323000+00:00 | 2016-03-23 19:06:08.323000+00:00 | null | null | 36,144,993 | <p>Currently i am training small logo datasets similar to <a href="http://www.multimedia-computing.de/flickrlogos/" rel="noreferrer">Flickrlogos-32</a> with deep CNNs. For training larger networks i need more dataset, thus using augmentation. The best i'm doing right now is using affine transformations(featurewise normalization, featurewise center, rotation, width height shift, horizontal vertical flip). But for bigger networks i need more augmentation. I tried searching on kaggle's national data science bowl's <a href="https://www.kaggle.com/c/datasciencebowl/forums/t/11360/data-augmentation" rel="noreferrer">forum</a> but couldn't get much help. There's code for some methods given <a href="https://github.com/benanne/kaggle-galaxies/blob/master/realtime_augmentation.py" rel="noreferrer">here</a> but i'm not sure what could be useful. What are some other(or better) image data augmentation techniques that could be applied to this type of(or in any general image) dataset other than affine transformations? </p> | 2016-03-22 02:06:53.167000+00:00 | 2020-05-06 08:33:03.647000+00:00 | null | image-processing|machine-learning|computer-vision|neural-network|deep-learning | ['http://lamda.nju.edu.cn/weixs/project/CNNTricks/CNNTricks.html', 'http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf', 'http://benanne.github.io/2014/04/05/galaxy-zoo.html', 'http://torch.ch/', 'https://github.com/facebook/fb.resnet.torch/blob/e8fb31378fd8dc188836cf1a7c62b609eb4fd50a/datasets/transforms.lua', 'http://arxiv.org/abs/1512.03385'] | 6 |
62,919,761 | <p>Let me talk about random integer generating algorithms that are "optimal" in terms of the number of random bits it uses on average. In the rest of this post, we will assume we have a "true" random generator that can produce unbiased and independent random bits.</p>
<p>In 1976, D. E. Knuth and A. C. Yao showed that any algorithm that produces random integers with a given probability, using only random bits, can be represented as a binary tree, where random bits indicate which way to traverse the tree and each leaf (endpoint) corresponds to an outcome. They also gave lower bounds on the number of bits a given algorithm will need on average for this task. In this case, an <em>optimal</em> algorithm to generate integers in <code>[0, n)</code> uniformly, will need at most <code>log2(n) + 2</code> bits on average. There are many examples of <em>optimal</em> algorithms in this sense. One of them is the <a href="https://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Fast Dice Roller</a> by J. Lumbroso (2013) (implemented below), and another is perhaps the algorithm given in the <a href="http://mathforum.org/library/drmath/view/65653.html" rel="nofollow noreferrer">Math Forum</a> in 2004. On the other hand, all the algorithms <a href="https://www.pcg-random.org/posts/bounded-rands.html" rel="nofollow noreferrer">surveyed by M. O'Neill</a> are not optimal, since they rely on generating blocks of random bits at a time.</p>
<p>However, any <em>optimal</em> integer generator that is also <em>unbiased</em> will, in general, run forever in the worst case, as also shown by Knuth and Yao. Going back to the binary tree, each one of the n outcomes labels leaves in the binary tree so that each integer in [0, n) can occur with probability 1/n. But if 1/n has a non-terminating binary expansion (which will be the case if n is not a power of 2), this binary tree will necessarily either—</p>
<ul>
<li>Have an "infinite" depth, or</li>
<li>include "rejection" leaves at the end of the tree,</li>
</ul>
<p>and in either case, the algorithm will run forever in the worst case, even if it uses very few random bits on average. (On the other hand, when n is a power of 2, the optimal binary tree will have a finite depth and no rejection nodes.) The Fast Dice Roller is an example of an algorithm that uses "rejection" events to ensure it's unbiased; see the comment in the code below.</p>
<p>Thus, in general, <strong>a random integer generator can be <em>either</em> unbiased <em>or</em> constant-time (or even neither), but not both.</strong> In particular, there is no way in general to "fix" the worst case of an indefinite running time without introducing bias. For instance, modulo reductions (e.g., <code>randInt() % n</code>) are equivalent to a binary tree in which rejection leaves are replaced with labeled outcomes — but since there are more possible outcomes than rejection leaves, only some of the outcomes can take the place of the rejection leaves, introducing bias. The same kind of binary tree — and the same kind of bias — results if you stop rejecting after a set number of iterations. (However, this bias may be negligible depending on the application. There are also security aspects to random integer generation, which are too complicated to discuss in this answer.)</p>
<p>Note that we assumed we had a random bit generator. However, there is a wrinkle here in the Java implementation: it has no way to generate individual random bits. (And in fact, most pseudorandom number generators produce blocks of bits at a time, not individual bits.) Thus, as mentioned in the other answer at the time of this writing, for optimal use of random bits you will need to save the result of <code>nextInt()</code> and read out its bits one at a time. When all the bits have been read, generate another <code>nextInt()</code> and repeat.</p>
<h3>Fast Dice Roller Implementation</h3>
<p>The following is a JavaScript implementation of the Fast Dice Roller. Note that it uses rejection events and a loop to ensure it's unbiased. <code>nextBit()</code> is a method that produces independent unbiased random bits (e.g., <code>Math.random()<0.5 ? 1 : 0</code>, which isn't necessarily efficient in terms of random bits ultimately relied on, either in Java or in JavaScript).</p>
<pre class="lang-js prettyprint-override"><code>function randomInt(minInclusive, maxExclusive) {
var maxInclusive = (maxExclusive - minInclusive) - 1
var x = 1
var y = 0
while(true) {
x = x * 2
var randomBit = Math.random()<0.5 ? 1 : 0
y = y * 2 + randomBit
if(x > maxInclusive) {
if (y <= maxInclusive) { return y + minInclusive }
// Rejection
x = x - maxInclusive - 1
y = y - maxInclusive - 1
}
}
}
</code></pre>
<p>The following version returns a BigInt, an arbitrary-precision integer supported in recent versions of JavaScript:</p>
<pre class="lang-js prettyprint-override"><code>function randomInt(minInclusive, maxExclusive) {
minInclusive=BigInt(minInclusive)
maxExclusive=BigInt(maxExclusive)
var maxInclusive = (maxExclusive - minInclusive) - BigInt(1)
var x = BigInt(1)
var y = BigInt(0)
while(true) {
x = x * BigInt(2)
var randomBit = BigInt(Math.random()<0.5 ? 1 : 0)
y = y * BigInt(2) + randomBit
if(x > maxInclusive) {
if (y <= maxInclusive) { return y + minInclusive }
// Rejection
x = x - maxInclusive - BigInt(1)
y = y - maxInclusive - BigInt(1)
}
}
}
</code></pre> | 2020-07-15 16:44:13.537000+00:00 | 2022-01-12 13:45:45.540000+00:00 | 2022-01-12 13:45:45.540000+00:00 | null | 25,111,085 | <p>I have an application where I need to measure how many bits of randomness an algorithm consumes. I have instrumented a subclass of <code>Random</code> to do this by overriding <code>Random.next(int)</code> to increment a counter before calling its parent's method.</p>
<p>I have run into some issues with the implementation of the <code>nextInt(int)</code> method, because it will always draw 32 bits even when the range is a power of two. For other ranges there are even more problems: the method is not uniform---it only retries <em>once</em> for the case where the original value drawn is greater than the greatest multiple of the range lower than <code>Integer.MAX_VALUE</code>--- and it still uses more bits of randomness than are needed.</p>
<p>How can I implement a better version of <code>nextInt(int)</code> that uses only the bare minimum number of random bits needed to determine a value within the range, while being perfectly uniform? It does not need guaranteed termination (not possible anyway), just termination with probability 1.</p>
<h2>Edit:</h2>
<p>Here is what I have so far:</p>
<pre><code>int nextInt(int max){
int n = Integer.numberOfTrailingZeros(max);
    return next(n) + (nextOddInteger(max >> n) << n); // parentheses needed: '+' binds tighter than '<<'
}
</code></pre>
<p>This might not exactly be correct, but essentially this factors out all <code>n</code> twos from <code>num</code>, generates a random number with <code>n</code> bits, and prepends <code>nextOddInteger(num)</code> to the resulting bits. <code>nextOddInteger</code> would generate a random integer up to a number whose prime factorization contains no twos. How can I implement this part in a very randomness-efficient way?</p> | 2014-08-04 02:37:23.780000+00:00 | 2022-01-12 13:45:45.540000+00:00 | 2014-08-04 08:52:05.743000+00:00 | java|random | ['https://arxiv.org/abs/1304.1916', 'http://mathforum.org/library/drmath/view/65653.html', 'https://www.pcg-random.org/posts/bounded-rands.html'] | 3 |
8,028,121 | <p>If you want to find all possible ways of representing a number N from a given set of numbers then you should follow a dynamic programming solution as already proposed. </p>
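<p>A small Python sketch of that DP — here counting the ordered sums, as in the question's example (it prints 5 for x=4 with {1,2}); enumerating the sums instead of counting them is the same recursion carried out on lists. Note these are ordered sums (compositions); the partition function mentioned below counts unordered ones.</p>
<pre class="lang-py prettyprint-override"><code>def count_ways(x, nums):
    # ways[t] = number of ordered sums of elements of nums that add up to t
    ways = [0] * (x + 1)
    ways[0] = 1
    for t in range(1, x + 1):
        ways[t] = sum(ways[t - a] for a in nums if a <= t)
    return ways[x]

print(count_ways(4, [1, 2]))   # 5, matching the example in the question
</code></pre>
<p>Of course a table that is linear in x only illustrates the counting; for the 10^18 limit in the question you would exploit the fact that this is a linear recurrence of order at most 15 and use matrix exponentiation instead.</p>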
<p>But if you just want to know the number of ways, then you are dealing with the <a href="http://arxiv.org/pdf/math/0105218v2" rel="nofollow">restricted partition function problem</a>. </p>
<blockquote>
<p>The restricted partition function p(n, dm) ≡ p(n, {d1, d2, . . . , dm}) is a number of partitions of n into positive integers {d1, d2, . . . , dm}, each not greater than n.</p>
</blockquote>
<p>You should also check the wikipedia article on <a href="http://en.wikipedia.org/wiki/Partition_function_%28number_theory%29#Partition_function" rel="nofollow">partition function without restrictions</a> where no restrictions apply.</p>
<p>PS. If negative numbers are also allowed then there are probably (countably) infinite ways to represent your sum. </p>
<pre><code>1+1+1+1-1+1
1+1+1+1-1+1-1+1
etc...
</code></pre>
<p>PS2. This is more a math question than a programming one</p> | 2011-11-06 15:38:12.937000+00:00 | 2011-11-06 15:38:12.937000+00:00 | null | null | 8,026,751 | <p>I want to know in how many ways can we represent a number x as a sum of numbers from a given set of numbers {a1.a2,a3,...}. Each number can be taken more than once.</p>
<p>For example, if x=4 and a1=1,a2=2, then the ways of representing x=4 are:</p>
<pre><code>1+1+1+1
1+1+2
1+2+1
2+1+1
2+2
</code></pre>
<p>Thus the number of ways =5.</p>
<p>I want to know if there exists a formula or some other fast method to do so. I can't brute force through it. I want to write code for it.</p>
<p>Note: x can be as large as 10^18. The number of terms a1,a2,a3,… can be up to 15, and each of a1,a2,a3,… can also be only up to 15.</p> | 2011-11-06 11:41:07.143000+00:00 | 2013-03-20 14:26:04.537000+00:00 | 2013-03-20 14:26:04.537000+00:00 | algorithm|combinatorics | ['http://arxiv.org/pdf/math/0105218v2', 'http://en.wikipedia.org/wiki/Partition_function_%28number_theory%29#Partition_function'] | 2 |
21,777,176 | <p>There is a package named poweRlaw in R by Colin Gillespie. This package is well documented and contains a lot of examples for using each function. Very straightforward.</p>
<p><a href="http://cran.r-project.org/web/packages/poweRlaw/" rel="nofollow noreferrer">http://cran.r-project.org/web/packages/poweRlaw/</a></p>
<p>For example, as the documentation describes, the following R code reads data from the file <strong>full_path_of_file_name</strong>, estimates xmin and alpha, and computes the goodness-of-fit p-value proposed by <a href="https://arxiv.org/pdf/0706.1062.pdf" rel="nofollow noreferrer">Clauset et al. (2009)</a>:</p>
<pre><code>library("poweRlaw")
words = read.table(<full_path_of_file_name>)
m_plwords = displ$new(words$V1) # discrete power law fitting
est_plwords = estimate_xmin(m_plwords) # get xmin and alpha
# here we have the goodness-of-fit test p-value
# as proposed by Clauset et al. (2009)
bs_p = bootstrap_p(m_plwords)
</code></pre> | 2014-02-14 10:44:30.913000+00:00 | 2017-08-08 12:47:20.563000+00:00 | 2017-08-08 12:47:20.563000+00:00 | null | 21,541,240 | <p>I have a network for which I fit into a power-law using igraph software:</p>
<pre><code>plf = power.law.fit(degree_dist, implementation = "plfit")
</code></pre>
<p>The plf variable now holds the following variables:</p>
<pre><code>$continuous
[1] TRUE
$alpha
[1] 1.63975
$xmin
[1] 0.03
$logLik
[1] 4.037563
$KS.stat
[1] 0.1721117
$KS.p
[1] 0.9984284
</code></pre>
<p>The igraph manual explains these variables:</p>
<pre><code>xmin = the lower bound for fitting the power-law
alpha = the exponent of the fitted power-law distribution
logLik = the log-likelihood of the fitted parameters
KS.stat = the test statistic of a Kolmogorov-Smirnov test that compares the fitted distribution with the input vector. Smaller scores denote better fit
KS.p = the p-value of the Kolmogorov-Smirnov test. Small p-values (less than 0.05) indicate that the test rejected the hypothesis that the original data could have been drawn from the fitted power-law distribution
</code></pre>
<p>I would like to do a "goodness of fit" test on this power law fit. But I am not sure how to do this, and although I found this question already asked in online forums, it usually remains unanswered.</p>
<p>I think one way to do this would be to do a chisq.test(x,y). One input parameter (say x) would be the degree_dist variable (the observed degree distribution of the network). The other input parameter (say y) would be the fitted power law equation, which is supposed to be of form P(k) = mk^a.</p>
<p>I am not sure whether this is a sound approach, and if so, I need advice on how to construct the fitted power law equation.</p>
<p>In case it helps, the degree_dist of my network was:</p>
<pre><code> 0.00 0.73 0.11 0.05 0.02 0.02 0.03 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.01
</code></pre>
<p>(These are the frequencies with which degrees 0-21 occurred in the network. For example, 73% of nodes had degree 1 and 1% of nodes had degree 21.)</p>
<p><strong>EDIT</strong></p>
<p>I am unsure whether it was a mistake above to use degree_dist to calculate plf. In case it is, I also ran the same function using the degrees from the 100 nodes in my network:</p>
<pre><code>plf = power.law.fit(pure_deg, implementation = "plfit")
</code></pre>
<p>where, pure_deg is:</p>
<pre><code> 21 7 5 6 17 3 6 6 2 5 4 3 7 4 3 2 2 2 2 3 2 3 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
</code></pre>
<p>This leads to output of:</p>
<pre><code>$continuous
[1] FALSE
$alpha
[1] 2.362445
$xmin
[1] 1
$logLik
[1] -114.6303
$KS.stat
[1] 0.02293443
$KS.p
[1] 1
</code></pre> | 2014-02-04 01:20:09.453000+00:00 | 2017-08-08 12:47:20.563000+00:00 | 2017-04-25 21:30:39.517000+00:00 | r|graph|igraph|chi-squared|goodness-of-fit | ['http://cran.r-project.org/web/packages/poweRlaw/', 'https://arxiv.org/pdf/0706.1062.pdf'] | 2 |
46,188,898 | <p>Despite continued criticism and debate starting with <a href="http://homepages.inf.ed.ac.uk/miles/papers/eacl06.pdf" rel="nofollow noreferrer">this 2006 article</a>, the <strong>BLEU</strong> (<strong>B</strong>i<strong>L</strong>ingual <strong>E</strong>valuation <strong>U</strong>nderstudy) score is still the most commonly used metric for machine translation. According to the <a href="https://en.wikipedia.org/wiki/BLEU" rel="nofollow noreferrer">Wikipedia page</a>, </p>
<blockquote>
<p>BLEU is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: "the closer a machine translation is to a professional human translation, the better it is" – this is the central idea behind BLEU. BLEU was one of the first metrics to achieve a high correlation with human judgements of quality, and remains one of the most popular automated and inexpensive metrics.</p>
</blockquote>
<p>More specifically, if you want to look at Japanese-English translation, there was a <a href="https://cs224d.stanford.edu/reports/GreensteinEric.pdf" rel="nofollow noreferrer">class project from Stanford CS 224d</a> that translates simple Japanese sentences like 「彼女は敵だった」 into English with neural network techniques, and uses BLEU as the evaluation metric. </p>
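<p>If you just want to compute the metric, NLTK ships an implementation. A minimal sketch with my own toy sentences (keep in mind that single-sentence BLEU is noisy compared to corpus-level BLEU):</p>
<pre class="lang-py prettyprint-override"><code>from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "she was the enemy".split()   # human reference translation, tokenized
candidate = "she was an enemy".split()    # system output, tokenized

score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(score)
</code></pre>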
<p>If you want more readings on machine translation, I suggest a good start with one of the most influential one recently, namely <a href="https://arxiv.org/pdf/1409.0473" rel="nofollow noreferrer">Neural machine translation by jointly learning to align and translate</a> by Yoshua Bengio et al. You can also look at the <a href="https://scholar.google.com/scholar?cites=8900239586727494087" rel="nofollow noreferrer">top papers that cited the BLEU critics</a> to get a sense of other commonly used metrics.</p> | 2017-09-13 04:32:35.403000+00:00 | 2017-09-13 04:42:43.433000+00:00 | 2017-09-13 04:42:43.433000+00:00 | null | 46,187,542 | <p>Could you guys recommend some evaluation or metrics about machine learning for translation: for example Japanese to English et al. If possible, could you tell me some papers about metrics. I am a new one to translation. Thanks! </p> | 2017-09-13 01:36:05.773000+00:00 | 2017-09-13 04:42:43.433000+00:00 | null | machine-learning|translation | ['http://homepages.inf.ed.ac.uk/miles/papers/eacl06.pdf', 'https://en.wikipedia.org/wiki/BLEU', 'https://cs224d.stanford.edu/reports/GreensteinEric.pdf', 'https://arxiv.org/pdf/1409.0473', 'https://scholar.google.com/scholar?cites=8900239586727494087'] | 5 |
5,823,371 | <p>Community detection algorithms are sometimes part of a library (such as <a href="http://jung.sourceforge.net/">JUNG</a> for java) or a tool (see <a href="http://gephi.org/">Gephi</a>). When authors publish a new method, they do sometimes make their code available. For example, the <a href="http://sites.google.com/site/findcommunities/">Louvain</a> and <a href="http://www.mapequation.org/mapdemo/index.html">Infomap</a> methods.</p>
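<p>If Python is an option, the <code>python-igraph</code> package bundles several of these methods out of the box. A minimal sketch (assuming a recent python-igraph; method names may differ in very old versions):</p>
<pre class="lang-py prettyprint-override"><code>import igraph as ig

g = ig.Graph.Famous("Zachary")        # classic karate-club test graph

louvain = g.community_multilevel()    # Louvain (modularity-based)
infomap = g.community_infomap()       # Infomap (map equation)

print(louvain)                        # community membership of each vertex
print("modularity:", louvain.modularity)
</code></pre>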
<p>Side note: Girvan-Newman algorithm is sometimes still used, but it has mostly been replaced by faster and more accurate methods. For a good overview of the topic, I recommend <a href="http://arxiv.org/abs/0908.1062">Community detection algorithms: a comparative analysis</a> or the longer <a href="http://arxiv.org/abs/0906.0612">Community detection in graphs</a> (103 pages).</p> | 2011-04-28 18:35:49.423000+00:00 | 2011-04-28 18:50:16.540000+00:00 | 2011-04-28 18:50:16.540000+00:00 | null | 5,822,265 | <p>I am looking for implementations of community detection algorithms, such as the Girvan-Newman algorithm (2002). I have visited the websites of several researchers in this field (Newman, Santo, etc.) but was unable to find any code. I imagine someone out there published implementations of these algorithms (maybe even a toolkit?), but I can't seem to find it.</p> | 2011-04-28 16:58:00.190000+00:00 | 2018-02-13 16:45:50.193000+00:00 | 2017-10-18 02:51:25.460000+00:00 | graph|graph-theory | ['http://jung.sourceforge.net/', 'http://gephi.org/', 'http://sites.google.com/site/findcommunities/', 'http://www.mapequation.org/mapdemo/index.html', 'http://arxiv.org/abs/0908.1062', 'http://arxiv.org/abs/0906.0612'] | 6 |
39,131,454 | <h2>AlgoKit</h2>
<p>I wrote an open source library called <a href="https://github.com/pgolebiowski/algo-kit" rel="noreferrer">AlgoKit</a>, available via <a href="https://www.nuget.org/packages/AlgoKit" rel="noreferrer">NuGet</a>. It contains:</p>
<ul>
<li><strong>Implicit d-ary heaps</strong> (ArrayHeap),</li>
<li><strong>Binomial heaps</strong>,</li>
<li><strong>Pairing heaps</strong>.</li>
</ul>
<p>The code has been extensively tested. I definitely recommend giving it a try.</p>
<h2>Example</h2>
<pre><code>var comparer = Comparer<int>.Default;
var heap = new PairingHeap<int, string>(comparer);
heap.Add(3, "your");
heap.Add(5, "of");
heap.Add(7, "disturbing.");
heap.Add(2, "find");
heap.Add(1, "I");
heap.Add(6, "faith");
heap.Add(4, "lack");
while (!heap.IsEmpty)
Console.WriteLine(heap.Pop().Value);
</code></pre>
<h2>Why those three heaps?</h2>
<p>The optimal choice of implementation is strongly input-dependent — as Larkin, Sen, and Tarjan show in <em>A back-to-basics empirical study of priority queues</em>, <a href="https://arxiv.org/abs/1403.0252v1" rel="noreferrer">arXiv:1403.0252v1 [cs.DS]</a>. They tested implicit d-ary heaps, pairing heaps, Fibonacci heaps, binomial heaps, explicit d-ary heaps, rank-pairing heaps, quake heaps, violation heaps, rank-relaxed weak heaps, and strict Fibonacci heaps.</p>
<p>AlgoKit features three types of heaps that appeared to be most efficient among those tested.</p>
<h2>Hint on choice</h2>
<p>For a relatively small number of elements, you would likely be interested in using implicit heaps, especially quaternary heaps (implicit 4-ary). In case of operating on larger heap sizes, amortized structures like binomial heaps and pairing heaps should perform better.</p> | 2016-08-24 19:20:33.623000+00:00 | 2016-08-24 19:20:33.623000+00:00 | null | null | 102,398 | <p>I am looking for a .NET implementation of a priority queue or heap data structure</p>
<blockquote>
<p>Priority queues are data structures that provide more flexibility than simple sorting, because they allow new elements to enter a system at arbitrary intervals. It is much more cost-effective to insert a new job into a priority queue than to re-sort everything on each such arrival.</p>
<p>The basic priority queue supports three primary operations:</p>
<ul>
<li>Insert(Q,x). Given an item x with key k, insert it into the priority queue Q.</li>
<li>Find-Minimum(Q). Return a pointer to the item
whose key value is smaller than any other key in the priority queue
Q.</li>
<li>Delete-Minimum(Q). Remove the item from the priority queue Q whose key is minimum</li>
</ul>
</blockquote>
<p>Unless I am looking in the wrong place, there isn't one in the framework. Is anyone aware of a good one, or should I roll my own?</p> | 2008-09-19 14:43:54.277000+00:00 | 2021-10-19 15:56:04.563000+00:00 | 2015-11-04 15:46:50.153000+00:00 | c#|.net|data-structures|heap|priority-queue | ['https://github.com/pgolebiowski/algo-kit', 'https://www.nuget.org/packages/AlgoKit', 'https://arxiv.org/abs/1403.0252v1'] | 3 |
56,032,637 | <p>I think <em>Spatial Pyramid Pooling (SPP)</em> might be helpful. Check out this <a href="https://arxiv.org/abs/1406.4729" rel="nofollow noreferrer">paper</a>; a small sketch of the pooling idea follows the quoted properties below.</p>
<blockquote>
<p>We note that SPP has several remarkable properties for deep CNNs: </p>
<p>1) SPP is able to generate a fixed-length output regardless of the input size, while the sliding window pooling used in the previous deep networks cannot;</p>
<p>2) SPP uses multi-level spatial bins, while the sliding window pooling uses only a single window size. Multi-level pooling has been shown to be robust to object deformations;</p>
<p>3) SPP can pool features extracted at variable scales thanks to the flexibility of input scales. Through experiments we show that all these factors elevate the recognition accuracy of deep networks.</p>
</blockquote>
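<p>To illustrate property 1), here is a small NumPy sketch of my own (a simplification, not the paper's code): whatever the spatial size of the feature map, max-pooling over a fixed set of bin grids yields a vector of fixed length.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """feature_map: (H, W, C); returns a vector of length C * sum(n*n for n in levels)."""
    h, w, c = feature_map.shape
    pooled = []
    for n in levels:                     # assumes h >= n and w >= n
        hs = np.linspace(0, h, n + 1, dtype=int)
        ws = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[hs[i]:hs[i + 1], ws[j]:ws[j + 1], :]
                pooled.append(cell.max(axis=(0, 1)))   # max-pool each bin
    return np.concatenate(pooled)

print(spatial_pyramid_pool(np.random.rand(24, 17, 8)).shape)   # (168,)
print(spatial_pyramid_pool(np.random.rand(50, 61, 8)).shape)   # (168,) -- same length
</code></pre>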
<p><br>
<code>yhenon</code> has implemented <a href="https://github.com/yhenon/keras-spp" rel="nofollow noreferrer">SPP for Keras on Github</a>.</p> | 2019-05-08 02:13:46.033000+00:00 | 2019-05-08 02:13:46.033000+00:00 | null | null | 45,236,071 | <p>I am trying to model a fully convolutional neural network using the Keras library, Tensorflow backend. </p>
<p>The issue I face is that of feeding differently sized images in batches to the <code>model.fit()</code> function. The training set consists of images of different sizes varying from 768x501 to 1024x760. </p>
<p>Not more than 5 images have the same dimensions, so grouping them into batches seems to be of no help. </p>
<p>Numpy allows storing the data in a single variable in list form. But the keras <code>model.fit()</code> function throws an error on receiving a list type training array. </p>
<p>I do not wish to resize and lose the data as I already have a very small dataset. </p>
<p>How do I go about training this network?</p> | 2017-07-21 11:05:24.243000+00:00 | 2019-05-08 18:17:32.770000+00:00 | 2019-05-08 18:17:32.770000+00:00 | python|keras|deep-learning|computer-vision|conv-neural-network | ['https://arxiv.org/abs/1406.4729', 'https://github.com/yhenon/keras-spp'] | 2 |
43,901,208 | <p>"I was wondering what is the state-of-the art deep learning model to replace Hidden Markov Models (HMM)"</p>
<p>At the moment RNN (Recurrent Neural Network) and LSTM (Long Short Term Memory) based DNNs are state of the art. They are the best for a lot of sequencing problems starting from Named Entity Recognition (<a href="https://www.quora.com/What-is-the-current-state-of-the-art-in-Named-Entity-Recognition-NER/answer/Rahul-Vadaga" rel="nofollow noreferrer">https://www.quora.com/What-is-the-current-state-of-the-art-in-Named-Entity-Recognition-NER/answer/Rahul-Vadaga</a>), Parsing (<a href="https://arxiv.org/pdf/1701.00874.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1701.00874.pdf</a>) to Machine Translation (<a href="https://arxiv.org/pdf/1609.08144.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1609.08144.pdf</a>).
These DNNs are also called sequence models (e.g. seq2seq where input as well as output is a sequence like Machine Translation) </p>
<p>"unsupervised pretraining"</p>
<p>The pre-training is not that popular any more (for supervised ML problems) since you can achieve the same results using random restarts with parallelization as you have more (and cheaper) CPUs now.</p>
<p>[Added the following later]</p>
<p>A recent paper (Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks
by Nils Reimers and Iryna Gurevych) does a good comparison of various seq2seq architectures on common NLP tasks: <a href="https://arxiv.org/pdf/1707.06799.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1707.06799.pdf</a></p>
<p>Definitely worth a read.</p> | 2017-05-10 19:29:00.297000+00:00 | 2018-03-19 17:56:11.607000+00:00 | 2018-03-19 17:56:11.607000+00:00 | null | 43,880,116 | <p>I want to use deep learning techniques to perform better inference tasks than Hidden Markov Models (which is a shallow model)? I was wondering what is the state-of-the art deep learning model to replace Hidden Markov Models (HMM)? The set-up is semi-supervised. The training data X(t),Y(t) is a time series, with significant temporal correlations. Also, there is a huge amount of unlabelled data, i.e., simply X(t) and no Y(t). After reading many papers, I narrowed down on the following model -> Conditionally Restricted Boltzmann Machines (Ilya Sustkever MS thesis) and use Deep Belief Networks for unsupervised pretraining (or use variational autoencoders for pretraining). I am very new to the field, and was wondering if these techniques are outdated. </p> | 2017-05-09 21:25:33.943000+00:00 | 2018-03-19 17:56:11.607000+00:00 | 2017-05-09 21:33:22.410000+00:00 | machine-learning|artificial-intelligence|deep-learning|hidden-markov-models|unsupervised-learning | ['https://www.quora.com/What-is-the-current-state-of-the-art-in-Named-Entity-Recognition-NER/answer/Rahul-Vadaga', 'https://arxiv.org/pdf/1701.00874.pdf', 'https://arxiv.org/pdf/1609.08144.pdf', 'https://arxiv.org/pdf/1707.06799.pdf'] | 4 |
39,025,765 | <p>I had occasion to study the computation of GCD sums because the problem cropped up in a HackerEarth tutorial named <a href="https://www.hackerearth.com/practice/data-structures-1/advanced-trees/fenwick-binary-indexed-trees-1/tutorial/" rel="nofollow noreferrer">GCD Sum</a>. Googling turned up <a href="https://cs.uwaterloo.ca/journals/JIS/VOL13/Toth/toth10.pdf" rel="nofollow noreferrer">some</a> <a href="https://cs.uwaterloo.ca/journals/JIS/VOL4/BROUGHAN/gcdsum.pdf" rel="nofollow noreferrer">academic papers</a> with useful formulas, which I'm reporting here since they aren't mentioned in the <a href="https://math.stackexchange.com/a/135362/315905">MathOverflow article</a> linked by deviantfan.</p>
<p>For coprime m and n (i.e. gcd(m, n) == 1) the function is multiplicative:</p>
<pre><code>gcd_sum[m * n] = gcd_sum[m] * gcd_sum[n]
</code></pre>
<p>Powers e of primes p:</p>
<pre><code>gcd_sum[p^e] = (e + 1) * p^e - e * p^(e - 1)
</code></pre>
<p>If only a single sum is to be computed then these formulas could be applied to the result of factoring the number in question, which would still be way faster than repeated <code>gcd()</code> calls or going through the <a href="https://stackoverflow.com/posts/33568949/edit">rigmarole proposed by Толя</a>.</p>
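<p>In Python that single-value route looks roughly like this (my own sketch with plain trial-division factoring, fine for the 10^6-ish range discussed here):</p>
<pre class="lang-py prettyprint-override"><code>def gcd_sum(n):
    """Pillai's function: sum of gcd(k, n) for k = 1..n, via the prime-power formula."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            e, pe = 0, 1
            while m % p == 0:            # pull out the full power p^e
                m //= p
                e += 1
                pe *= p
            result *= (e + 1) * pe - e * (pe // p)   # (e+1)*p^e - e*p^(e-1)
        p += 1
    if m > 1:                            # leftover prime factor (exponent 1)
        result *= 2 * m - 1
    return result

print(gcd_sum(4), gcd_sum(6), gcd_sum(500000))   # 8, 15, ...
</code></pre>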
<p>However, the formulas could just as easily be used to compute whole tables of the function efficiently. Basically, all you have to do is plug them into the algorithm for <a href="https://stackoverflow.com/questions/34260399/linear-time-eulers-totient-function-calculation">linear time Euler totient calculation</a> and you're done - this computes <em>all</em> GCD sums up to a million much faster than you can compute the single GCD sum for the number 10^6 by way of calls to a <code>gcd()</code> function. Basically, the algorithm efficiently enumerates the least factor decompositions of the numbers up to n in a way that makes it easy to compute any multiplicative function - Euler totient (a.k.a. phi), the sigmas or, in fact, GCD sums.</p>
<p>Here's a bit of hashish code that computes a table of GCD sums for smallish limits - ‘small’ in the sense that sqrt(N) * N does not overflow a 32-bit signed integer. IOW, it works for a limit of 10^6 (plenty enough for the HackerEarth task with its limit of 5 * 10^5) but a limit of 10^7 would require sticking <code>(long)</code> casts in a couple of strategic places. However, such hardening of the function for operation at higher ranges is left as the proverbial exercise for the reader... ;-)</p>
<pre class="lang-cs prettyprint-override"><code>static int[] precompute_Pillai (int limit)
{
var small_primes = new List<ushort>();
var result = new int[1 + limit];
result[1] = 1;
int n = 2, small_prime_limit = (int)Math.Sqrt(limit);
for (int half = limit / 2; n <= half; ++n)
{
int f_n = result[n];
if (f_n == 0)
{
f_n = result[n] = 2 * n - 1;
if (n <= small_prime_limit)
{
small_primes.Add((ushort)n);
}
}
foreach (int prime in small_primes)
{
int nth_multiple = n * prime, e = 1, p = 1; // 1e6 * 1e3 < INT_MAX
if (nth_multiple > limit)
break;
if (n % prime == 0)
{
if (n == prime)
{
f_n = 1;
e = 2;
p = prime;
}
else break;
}
for (int q; ; ++e, p = q)
{
result[nth_multiple] = f_n * ((e + 1) * (q = p * prime) - e * p);
if ((nth_multiple *= prime) > limit)
break;
}
}
}
for ( ; n <= limit; ++n)
if (result[n] == 0)
result[n] = 2 * n - 1;
return result;
}
</code></pre>
<p>As promised, this computes all GCD sums up to 500,000 in 12.4 ms, whereas computing the single sum for 500,000 via <code>gcd()</code> calls takes 48.1 ms on the same machine. The code has been verified against an <a href="http://oeis.org/A018804/b018804.txt" rel="nofollow noreferrer">OEIS list of the Pillai function</a> (A018804) up to 2000, and up to 500,000 against a gcd-based function - an undertaking that took a full 4 hours.</p>
<p>There's a whole range of optimisations that could be applied to make the code significantly faster, like replacing the modulo division with a multiplication (with the inverse) and a comparison, or to shave some more milliseconds by way of stepping the ‘prime cleaner-upper’ loop modulo 6. However, I wanted to show the algorithm in its basic, unoptimised form because (a) it is plenty fast as it is, and (b) it could be useful for other multiplicative functions, not just GCD sums.</p>
<p>P.S.: modulo testing via multiplication with the inverse is described in section 9 of the Granlund/Montgomery paper <a href="https://gmplib.org/~tege/divcnst-pldi94.pdf" rel="nofollow noreferrer">Division by Invariant Integers using Multiplication</a> but it is hard to find info on efficient computation of inverses modulo powers of 2. Most sources use the Extended Euclid's algorithm or similar overkill. So here comes a function that computes multiplicative inverses modulo 2^32:</p>
<pre><code>static uint ModularInverse (uint n)
{
uint x = 2 - n;
x *= 2 - x * n;
x *= 2 - x * n;
x *= 2 - x * n;
x *= 2 - x * n;
return x;
}
</code></pre>
<p>That's effectively five iterations of <a href="http://arxiv.org/pdf/1209.6626" rel="nofollow noreferrer">Newton-Raphson</a>, in case anyone cares. ;-)</p> | 2016-08-18 19:06:05.613000+00:00 | 2016-08-18 19:06:05.613000+00:00 | 2017-05-23 11:54:12.667000+00:00 | null | 33,568,231 | <p>There are n numbers from 1 to n. I need to find the
∑gcd(i,n) where i=1 to i=n
for n of the range 10^7. I used euclid's algorithm for gcd but it gave TLE. Is there any efficient method for finding the above sum?</p>
<pre><code>#include<bits/stdc++.h>
using namespace std;
typedef long long int ll;
int gcd(int a, int b)
{
return b == 0 ? a : gcd(b, a % b);
}
int main()
{
ll n,sum=0;
scanf("%lld",&n);
for(int i=1;i<=n;i++)
{
sum+=gcd(i,n);
}
printf("%lld\n",sum);
return 0;
}
</code></pre> | 2015-11-06 13:44:01.490000+00:00 | 2018-04-08 17:45:02.013000+00:00 | 2015-11-06 14:11:53.570000+00:00 | c++|arrays|algorithm|greatest-common-divisor | ['https://www.hackerearth.com/practice/data-structures-1/advanced-trees/fenwick-binary-indexed-trees-1/tutorial/', 'https://cs.uwaterloo.ca/journals/JIS/VOL13/Toth/toth10.pdf', 'https://cs.uwaterloo.ca/journals/JIS/VOL4/BROUGHAN/gcdsum.pdf', 'https://math.stackexchange.com/a/135362/315905', 'https://stackoverflow.com/posts/33568949/edit', 'https://stackoverflow.com/questions/34260399/linear-time-eulers-totient-function-calculation', 'http://oeis.org/A018804/b018804.txt', 'https://gmplib.org/~tege/divcnst-pldi94.pdf', 'http://arxiv.org/pdf/1209.6626'] | 9 |
67,529,445 | <p>The optimal number of layers and the optimal resolution depend on the dataset.
The smaller the objects, the higher the resolution required.
The larger the objects, the more layers are required. There is an article on choosing the optimal number of layers, filters and resolution for the MS COCO dataset: https://arxiv.org/pdf/1911.09070.pdf</p>
<p>It depends on what accuracy and speed do you want. To reduce rewritten_bbox % just increase resolution and/or move some masks from [yolo] layers with low resolution, the [yolo] layers with higher resolution, and train. Also iou_thresh=1 may reduce rewritten_bbox %</p> | 2021-05-14 05:28:30.460000+00:00 | 2021-05-14 05:28:30.460000+00:00 | null | null | 62,753,681 | <p>I am training my custom dataset on Yolo network and during training, I am getting info of rewritten box (as shown in the figure).</p>
<p>for example: total_bbox = 29159, rewritten_bbox = 0.006859 %</p>
<p>what does that mean? Is my training proceeding right?</p>
<p><a href="https://i.stack.imgur.com/MJm1Y.png" rel="nofollow noreferrer">enter image description here</a></p> | 2020-07-06 10:03:40.810000+00:00 | 2022-09-13 20:37:17.317000+00:00 | 2022-09-13 20:37:17.317000+00:00 | deep-learning|yolo | [] | 0 |
21,554,129 | <p>Neural networks can approximate any function. The only real consideration is the dimensionality of the search space, which constrains the amount of data you need to get a good approximation.</p>
<p>For a supervised network (you use autoencoders, so I think you use some variant of backpropagation), it's difficult for me to imagine how you intend to do the training using single positions, because you need similar positions in your training set. Maybe your approach is different, but I'm convinced that the second strategy (using features) is more promising. I think using raw positions requires a huge amount of training data to get good results.</p>
<p>For features, take a look <a href="http://chessprogramming.wikispaces.com/Evaluation" rel="nofollow">here</a>, and at the classical work of <a href="http://vision.unipv.it/IA1/ProgrammingaComputerforPlayingChess.pdf" rel="nofollow">Shannon</a>.</p>
<p>I also took useful information from the source code of <a href="http://www.craftychess.com/" rel="nofollow">Crafty</a>.</p>
<p>But you have to extract this information from the FEN string.</p>
<p>Autoencoders are a way to reduce the dimensionality of the data (good because it improves performance). Principal Component Analysis seems to work better, as reported <a href="http://arxiv.org/pdf/0709.2506.pdf" rel="nofollow">here</a>.</p>
<p>I hope this can help you.</p> | 2014-02-04 13:40:12.157000+00:00 | 2014-02-04 15:32:22.703000+00:00 | 2014-02-04 15:32:22.703000+00:00 | null | 21,530,330 | <p>I am working on a project where I take a chess board position (FEN string converted to binary) & it's evaluation score and feed it to a neural network. My aim is to make the neural network differentiate between good and bad positions.</p>
<p><strong>How I encode the position :</strong> There are 12 unique pieces in chess i.e pawn, rook, knight, bishop, queen and king for <em>white</em> as well as <em>black</em>. I encode each piece using 4 bits with 0000 denoting an empty square. So the 64 squares are encoded into 256 bits and I use 6 more bits to denote game state like whose turn it is to move, king-castle status, etc. </p>
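<p>(For illustration, a small Python sketch of this 4-bits-per-square encoding — the piece-to-code mapping below is arbitrary, just to show the 256-bit layout; the 6 game-state bits would be appended separately:)</p>
<pre class="lang-py prettyprint-override"><code># Hypothetical sketch of the encoding described above.
PIECE_CODES = {p: i + 1 for i, p in enumerate("PNBRQKpnbrqk")}  # 1..12, 0 = empty

def encode_board(fen):
    board_part = fen.split()[0]                    # piece-placement field of the FEN
    bits = []
    for ch in board_part:
        if ch == '/':
            continue
        if ch.isdigit():
            bits.extend([0, 0, 0, 0] * int(ch))    # empty squares -> 0000
        else:
            code = PIECE_CODES[ch]
            bits.extend(int(b) for b in format(code, '04b'))  # 4 bits per piece
    return bits                                    # 256 bits for the 64 squares

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
print(len(encode_board(start)))                    # 256
</code></pre>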
<p><strong>Problem :</strong> Since the input space for chess positions is neither smooth nor uni-modal (one small change in the board position can result in a huge change in the evaluation score), the neural network doesn't learn well. Now, the next logical thing is to somehow extract useful features (like material difference, center control, etc.) and feed them to the network.</p>
<p>I do not want to hand pick the features as I want the network to learn everything by itself. Therefore I am thinking of extracting features automatically using <em>autoencoders</em>. Is there any better way to accomplish this?</p>
<p><strong>Summary :</strong> What is the best way to automatically extract features from a chess board position so that it can be fed into a neural network?</p>
<p><strong>UPDATE :</strong> To generate training data, I have modified Stockfish to dump it's evaluation process into a log file. So every new move(position) it considers is written to a file as an FEN string along with it's eval score</p> | 2014-02-03 14:48:57.777000+00:00 | 2014-02-04 15:32:22.703000+00:00 | 2014-02-04 14:45:34.673000+00:00 | chess|feature-extraction|neural-network|deep-learning | ['http://chessprogramming.wikispaces.com/Evaluation', 'http://vision.unipv.it/IA1/ProgrammingaComputerforPlayingChess.pdf', 'http://www.craftychess.com/', 'http://arxiv.org/pdf/0709.2506.pdf'] | 4 |
44,213,494 | <p>Implementation looks the same as <code>GRUCell</code> class <a href="https://github.com/tensorflow/tensorflow/blob/0b723590631432584d0761c03285eabb55116c6d/tensorflow/python/ops/rnn_cell_impl.py#L263" rel="nofollow noreferrer">doc</a> also points the same paper (specifically for gated) with <a href="http://arxiv.org/pdf/1406.1078v3.pdf" rel="nofollow noreferrer">link</a> given in Colah's article. Parameter <code>num_units</code> is the number of cells (assuming that is the hidden layer) corresponds to <code>output_size</code> due property <a href="https://github.com/tensorflow/tensorflow/blob/0b723590631432584d0761c03285eabb55116c6d/tensorflow/python/ops/rnn_cell_impl.py#L282" rel="nofollow noreferrer">definition</a>. </p> | 2017-05-27 05:59:46.770000+00:00 | 2017-06-03 18:08:10.550000+00:00 | 2017-06-03 18:08:10.550000+00:00 | null | 44,155,995 | <p>I'm new to RNN, and I'm trying to figure out the specifics of LSTM cells and they're relation to TensorFlow: <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="noreferrer">Colah GitHub</a>
<a href="https://i.stack.imgur.com/bdKbh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bdKbh.png" alt="enter image description here"></a>
Does the GitHub website's example use the same LSTM cell as TensorFlow? The only thing I got on the TensorFlow site was that basic LSTM cells use the following architecture: <a href="https://arxiv.org/pdf/1409.2329.pdf" rel="noreferrer">Paper</a>. If it's the same architecture then I can hand-compute the numbers for an LSTM cell and see if they match.</p>
<p>Also when we set a basic LSTM cell in tensorflow, it takes in a <code>num_units</code> according to: <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/rnn_cells_for_use_with_tensorflow_s_core_rnn_methods#BasicRNNCell" rel="noreferrer">TensorFlow documentation</a></p>
<pre><code>tf.nn.rnn_cell.GRUCell.__init__(num_units, input_size=None, activation=tanh)
</code></pre>
<p>Is this the number of hidden states (h_t) and cell states (C_t)?</p>
<p>According to the GitHub website, there isn't any mention the number of cell state and hidden states. I'm assuming they have to be the same number?</p> | 2017-05-24 10:40:45.693000+00:00 | 2017-06-03 18:08:10.550000+00:00 | 2017-05-24 16:11:16.240000+00:00 | python|tensorflow|recurrent-neural-network | ['https://github.com/tensorflow/tensorflow/blob/0b723590631432584d0761c03285eabb55116c6d/tensorflow/python/ops/rnn_cell_impl.py#L263', 'http://arxiv.org/pdf/1406.1078v3.pdf', 'https://github.com/tensorflow/tensorflow/blob/0b723590631432584d0761c03285eabb55116c6d/tensorflow/python/ops/rnn_cell_impl.py#L282'] | 3 |
55,673,372 | <p>The commit you linked is no longer relevant to current Hyperledger Fabric versions. This functionality was added before v1.0, which restructured the entire framework architecture.</p>
<p>As of v1.0+, transactions are first simulated by endorsers, which create a signed set of state changes resulting from the chaincode. If enough endorsers sign a transaction (according to an endorsement policy), the client can then send the transaction to the ordering service for inclusion in the ledger. A transaction that results in an error in the chaincode would never get to this point, because it would fail to gather the necessary endorsements due to the error. The client must modify the transaction or request a modification of the chaincode for it to work.</p>
<p>Check out the Hyperledger Fabric architecture <a href="https://arxiv.org/abs/1801.10228" rel="nofollow noreferrer">paper</a> for a more detailed explanation, including a sequence diagram.</p> | 2019-04-14 08:28:21.123000+00:00 | 2019-04-14 08:28:21.123000+00:00 | null | null | 55,668,221 | <p>I'm looking for a transaction rollback. This is necessary if a chaincode transaction modifies the state, but then fails with an error before it is able to return.</p>
<p>I saw this was done in a pull request, but I cannot understand how it works.</p>
<p><a href="https://github.com/hyperledger-archives/fabric/pull/254" rel="nofollow noreferrer">Added support for rolling back a tx if chaincode execution fails</a> </p>
<p>Can someone give me an example of how it works?</p>
<p><strong>EDIT</strong></p>
<p>What I'm looking for is the database concept of a transaction (unit of work), but in Hyperledger Fabric.</p>
<p>Let's suppose that we are going to register a product for a list of clients, if there is a problem with the registration of the product in some customer then the operation is eliminated and the registration is not made to any client</p> | 2019-04-13 17:42:30.127000+00:00 | 2019-04-19 02:48:13.473000+00:00 | 2019-04-19 02:48:13.473000+00:00 | hyperledger-fabric|hyperledger|blockchain | ['https://arxiv.org/abs/1801.10228'] | 1 |
26,244,744 | <p>Both weighting (cost-sensitive) and thresholding are valid forms of cost-sensitive learning. In the briefest terms, you can think of the two as follows:</p>
<h1>Weighting</h1>
<p>Essentially one is asserting that the ‘cost’ of misclassifying the rare class is worse than misclassifying the common class. This is <strong>applied at the algorithmic level</strong> in such algorithms as SVM, ANN, and Random Forest. The main limitation here is whether the algorithm can deal with weights. Furthermore, many applications of this are trying to address the idea of making a more serious misclassification (e.g. classifying someone who has pancreatic cancer as not having cancer). In such circumstances, you <strong>know</strong> why you want to make sure you classify specific classes even in imbalanced settings. Ideally you want to optimize the cost parameters as you would any other model parameter.</p>
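<p>In scikit-learn (which the question uses) this is usually exposed as a <code>class_weight</code> argument. A minimal sketch on toy data standing in for the real problem — the weights below are made up to mirror the 8:1 imbalance:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy imbalanced data: roughly 8 negatives per positive.
X, y = make_classification(n_samples=2000, weights=[0.89], random_state=0)

# Penalize mistakes on the rare class ~8x more; class_weight='balanced'
# would derive the weights from the class frequencies instead.
clf = RandomForestClassifier(n_estimators=200, class_weight={0: 1, 1: 8}, random_state=0)
clf.fit(X, y)
</code></pre>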
<h1>Thresholding</h1>
<p>If the algorithm returns <strong>probabilities</strong> (or some other score), thresholding can be <strong>applied after a model has been built</strong>. Essentially you change the classification threshold from 50-50 to an appropriate trade-off level. This typically can be optimized by generated a curve of the evaluation metric (e.g. F-measure). The limitation here is that you are making absolute trade-offs. Any modification in the cutoff will in turn decrease the accuracy of predicting the other class. If you have exceedingly high probabilities for the majority of your common classes (e.g. most above 0.85) you are more likely to have success with this method. It is also algorithm independent (provided the algorithm returns probabilities).</p>
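<p>A sketch of that threshold search with scikit-learn, again on made-up data (the real model and validation split would replace the toy ones here):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.89], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
probs = clf.predict_proba(X_val)[:, 1]        # predicted probability of the rare class

thresholds = np.linspace(0.05, 0.95, 19)      # sweep cutoffs instead of fixed 0.5
f1s = [f1_score(y_val, (probs >= t).astype(int)) for t in thresholds]
print("best threshold:", thresholds[int(np.argmax(f1s))], "F1:", max(f1s))
</code></pre>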
<h1>Sampling</h1>
<p>Sampling is another common option applied to imbalanced datasets to bring some balance to the class distributions. There are essentially two fundamental approaches.</p>
<p><strong><em>Under-sampling</em></strong></p>
<p>Extract a smaller set of the majority instances and keep the minority. This will result in a smaller dataset where the distribution between classes is closer; however, you have discarded data that may have been valuable. This could also be beneficial if you have a very large amount of data.</p>
<p><strong><em>Over-sampling</em></strong></p>
<p>Increase the number of minority instances by replicating them. This will result in a larger dataset which retains all the original data but may introduce bias. As you increase the size, however, you may begin to impact computational performance as well.</p>
<p><strong><em>Advanced Methods</em></strong></p>
<p>There are additional methods that are more ‘sophisticated’ to help address potential bias. These include methods such as <a href="http://arxiv.org/pdf/1106.1813.pdf" rel="noreferrer">SMOTE</a>, <a href="http://www3.nd.edu/~nchawla/papers/ECML03.pdf" rel="noreferrer">SMOTEBoost</a> and <a href="http://cse.seu.edu.cn/people/xyliu/publication/tsmcb09.pdf" rel="noreferrer">EasyEnsemble</a> as referenced in this <a href="https://stats.stackexchange.com/questions/97926/suggestions-for-cost-sensitive-learning-in-a-highly-imbalanced-setting">prior question</a> regarding imbalanced datasets and CSL.</p>
<h1>Model Building</h1>
<p>One further note regarding building models with imbalanced data is that you should keep in mind your model metric. For example, metrics such as F-measures don’t take into account the true negative rate. Therefore, it is often recommended that in imbalanced settings to use metrics such as <a href="http://en.wikipedia.org/wiki/Cohen%27s_kappa" rel="noreferrer">Cohen’s kappa metric</a>.</p> | 2014-10-07 20:31:22.887000+00:00 | 2014-10-08 12:14:15.640000+00:00 | 2017-04-13 12:44:17.437000+00:00 | null | 26,221,312 | <p>Here's a brief description of my problem:</p>
<ol>
<li>I am working on a <em>supervised learning</em> task to train a <em>binary</em> classifier. </li>
<li>I have a dataset with a large class <em>imbalance</em> distribution: 8 negative instances for every positive one. </li>
<li>I use the <em>f-measure</em>, i.e. the harmonic mean between specificity and sensitivity, to assess the performance of a classifier.</li>
</ol>
<p>I plot the ROC graphs of several classifiers and all present a great AUC, meaning that the classification is good. However, when I test the classifier and compute the f-measure I get a really low value. I know that this issue is caused by the class skewness of the dataset and, by now, I discover two options to deal with it:</p>
<ol>
<li>Adopting a <strong>cost-sensitive</strong> approach by assigning weights to the dataset's instances (see this <a href="https://stackoverflow.com/questions/20082674/unbalanced-classification-using-randomforestclassifier-in-sklearn">post</a>)</li>
<li><strong>Thresholding</strong> the predicted probabilities returned by the classifiers, to reduce the number of false positives and false negatives.</li>
</ol>
<p>I went for the first option and that solved my issue (f-measure is satisfactory). BUT, now, my question is: which of these methods is preferable? And what are the differences?</p>
<p><em>P.S: I am using Python with the scikit-learn library.</em></p> | 2014-10-06 17:14:25.310000+00:00 | 2022-01-06 11:02:51.693000+00:00 | 2017-05-23 10:30:11.927000+00:00 | python|r|machine-learning|classification | ['http://arxiv.org/pdf/1106.1813.pdf', 'http://www3.nd.edu/~nchawla/papers/ECML03.pdf', 'http://cse.seu.edu.cn/people/xyliu/publication/tsmcb09.pdf', 'https://stats.stackexchange.com/questions/97926/suggestions-for-cost-sensitive-learning-in-a-highly-imbalanced-setting', 'http://en.wikipedia.org/wiki/Cohen%27s_kappa'] | 5 |
41,450,882 | <blockquote>
<p>AdamOptimizer performs a learning rate decay internally (from a fixed given value) or not ?</p>
</blockquote>
<p>Yes, Adam does perform a learning rate decay.</p>
<p>You should have a look at how Adam works: </p>
<blockquote>
<p>D. Kingma and J. Ba, “Adam: A method for stochastic optimization,”
arXiv preprint arXiv:1412.6980, Dec. 2014. [Online]. Available:
<a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">https://arxiv.org/abs/1412.6980</a></p>
</blockquote>
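<p>As for the second part of the question: even so, you can still feed Adam an externally decayed rate. A minimal sketch against the TF 1.x API (<code>loss</code> is a placeholder for your own training objective):</p>
<pre><code>import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    0.001, global_step, decay_steps=10000, decay_rate=0.96, staircase=True)
train_op = tf.train.AdamOptimizer(learning_rate).minimize(
    loss, global_step=global_step)  # global_step is incremented on each update
</code></pre>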
<p>To sum it up: Adam is RMSProp with momentum and bias correction. A very nice explanation is here: <a href="http://sebastianruder.com/optimizing-gradient-descent/index.html#adam" rel="nofollow noreferrer">http://sebastianruder.com/optimizing-gradient-descent/index.html#adam</a></p> | 2017-01-03 19:24:58.097000+00:00 | 2017-01-03 19:24:58.097000+00:00 | null | null | 41,442,687 | <p>I'm developing a convolutional neural network for image recognition based on three custom classes.
I built an AlexNet-based model to train.
I'd like to know two things:</p>
<ol>
<li><em>AdamOptimizer</em> performs a learning rate decay internally (from a fixed given value) or not ?</li>
<li>In case of not, can I use <code>tf.train.exponential_decay</code> to perform decay ?</li>
</ol>
<p>Small examples are appreciated.
Thanks</p> | 2017-01-03 11:39:40.383000+00:00 | 2017-01-03 19:24:58.097000+00:00 | null | tensorflow|deep-learning | ['https://arxiv.org/abs/1412.6980', 'http://sebastianruder.com/optimizing-gradient-descent/index.html#adam'] | 2 |
62,161,288 | <p>Alpha compares the total test variance to the sum of the item variances (multiplied by k/(k - 1), where k is the number of items). Total test variance is just the sum of the item variances and covariances. As you can see from your correlation matrix, you have some negatively correlated items. This will reduce the total variance to less than the sum of the item variances. In your case, the total test variance is 6.197 and the sum of the item variances is 8.015 with k = 5, and thus</p>
<pre><code>alpha = (6.198 - 8.015)/6.198 * 5/4 = -.366
lowerCor(data)
rMEQQ01 rMEQQ02 rMEQQ03 rMEQQ04 rMEQQ05
rMEQQ01 1.00
rMEQQ02 -0.26 1.00
rMEQQ03 0.29 -0.02 1.00
rMEQQ04 -0.15 0.09 -0.02 1.00
rMEQQ05 -0.45 0.30 -0.32 0.20 1.00
</code></pre>
<p>For a better understanding of alpha and the omega statistics, I recommend you read our paper in press at Psychological Assessment: Revelle and Condon, From alpha to Omega, a tutorial (2020) preprint at <a href="https://psyarxiv.com/2y3w9/" rel="nofollow noreferrer">https://psyarxiv.com/2y3w9/</a>. </p> | 2020-06-02 20:57:25.677000+00:00 | 2020-06-02 20:57:25.677000+00:00 | null | null | 62,003,633 | <p>I have data from a five-item measure completed by about 260 participants and estimated internal consistency with the psych library in R. I got a value of -0.36 (I triple-checked for reverse-coded questions). My data doesn't cover a broad range of the possible scores -- due to the nature of the group I am sampling from -- so this is a plausible number. But the scale I'm using is well validated, admittedly with low typical values of alpha ranging from 0.5--0.7. </p>
<p>When looking for alternative methods to assess my data I used psych::omega. This returns an alpha value of 0.57 alongwith the omega statistic I was looking for. This alpha value is much more in keeping with other literature which uses the scale in question. I have also looked at other validated measures from my study and the alpha values from psych::alpha and psych::omega are equal. It is only on this particular measure which is known to have low internal consistency where the difference appears.</p>
<p>My question: why are the alpha values different between psych::alpha and psych::omega?</p>
<p>The code below includes the actual data and will spit a warning about negatively correlated items. I've checked and rechecked my score-coding and it's correct so I am confident we can ignore this warning.</p>
<pre><code>library("psych")
data <- structure(list(rMEQQ01 = c(2, 3, 3, 3, 3, 3, 3, 1, 5, 3, 3, 3,
3, 3, 3, 3, 3, 5, 3, 2, 5, 2, 3, 3, 3, 3, 3, 2, 3, 2, 2, 1, 3,
5, 3, 4, 3, 4, 5, 4, 3, 3, 2, 5, 4, 2, 3, 3, 5, 3, 3, 5, 5, 4,
2, 2, 4, 1, 4, 1, 2, 1, 5, 2, 4, 3, 3, 1, 3, 3, 4, 2, 4, 2, 2,
3, 5, 4, 3, 3, 2, 5, 3, 5, 1, 5, 3, 5, 1, 3, 3, 3, 3, 3, 5, 3,
5, 5, 2, 5, 2, 5, 3, 5, 3, 3, 5, 4, 3, 3, 4, 3, 3, 4, 3, 2, 3,
4, 3, 2, 3, 5, 5, 3, 4, 4, 3, 5, 3, 2, 3, 2, 5, 3, 3, 5, 3, 3,
3, 2, 5, 5, 4, 4, 5, 4, 1, 3, 3, 3, 4, 3, 3, 3, 3, 3, 2, 4, 3,
3, 3, 4, 4, 3, 3, 3, 4, 3, 1, 2, 3, 3, 5, 4, 3, 4, 2, 2, 3, 4,
2, 2, 1, 4, 3, 5, 3, 3, 3, 3, 3, 3, 3, 5, 3, 3, 4, 5, 2, 3, 5,
4, 3, 4, 4, 4, 3, 3, 3, 5, 2, 3, 3, 5, 5, 4, 2, 3, 3, 3, 3, 3,
3, 3, 2, 3, 1, 3, 2, 1, 2, 5, 3, 5, 1, 3, 3, 2, 3, 4, 5, 3, 3,
3, 4, 3, 1, 3, 3, 3, 3, 5, 1, 3, 4, 5, 3, 4, 3, 3), rMEQQ02 = c(2,
2, 2, 1, 2, 1, 2, 3, 2, 2, 3, 2, 1, 1, 2, 1, 2, 1, 2, 3, 4, 2,
1, 1, 3, 2, 3, 2, 2, 2, 2, 2, 1, 3, 2, 3, 2, 2, 2, 2, 3, 3, 2,
1, 1, 3, 2, 2, 1, 3, 3, 2, 4, 1, 3, 3, 2, 2, 2, 3, 2, 2, 2, 2,
3, 3, 2, 2, 2, 2, 3, 2, 2, 2, 2, 1, 2, 1, 3, 3, 3, 1, 1, 2, 2,
1, 3, 2, 3, 2, 2, 2, 2, 3, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2,
1, 2, 1, 3, 1, 2, 2, 2, 3, 3, 2, 2, 3, 1, 2, 2, 2, 3, 1, 1, 2,
1, 1, 2, 3, 1, 1, 3, 3, 2, 2, 1, 1, 1, 1, 2, 2, 2, 1, 2, 3, 1,
3, 2, 3, 1, 3, 3, 3, 3, 3, 1, 2, 2, 1, 1, 2, 1, 2, 2, 3, 3, 2,
3, 1, 2, 3, 1, 3, 2, 3, 2, 2, 2, 2, 3, 2, 3, 3, 1, 2, 1, 2, 2,
2, 3, 2, 2, 2, 2, 4, 1, 2, 2, 1, 2, 2, 1, 3, 2, 2, 1, 3, 1, 3,
2, 3, 1, 2, 1, 3, 1, 2, 2, 3, 3, 2, 3, 2, 2, 1, 2, 2, 3, 2, 1,
1, 1, 2, 3, 2, 2, 2, 1, 1, 1, 1, 2, 3, 1, 1, 1, 3, 3, 3, 1, 3,
1, 2, 2, 2, 1, 1, 1), rMEQQ03 = c(1, 3, 1, 3, 3, 2, 2, 5, 4,
3, 3, 3, 4, 3, 3, 3, 3, 5, 3, 2, 5, 1, 3, 3, 3, 3, 2, 3, 3, 1,
2, 2, 3, 1, 3, 4, 3, 1, 5, 3, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3,
1, 4, 3, 1, 3, 3, 1, 3, 3, 1, 2, 3, 3, 4, 3, 2, 3, 2, 2, 2, 3,
3, 3, 3, 3, 5, 3, 2, 3, 3, 1, 1, 1, 1, 3, 3, 3, 2, 3, 1, 4, 3,
3, 5, 3, 5, 1, 3, 5, 1, 1, 2, 1, 2, 3, 3, 3, 3, 3, 5, 3, 3, 3,
2, 2, 3, 5, 1, 1, 1, 3, 2, 3, 3, 3, 3, 3, 1, 3, 3, 4, 3, 3, 2,
5, 1, 3, 5, 1, 1, 3, 4, 2, 4, 3, 3, 1, 1, 3, 5, 1, 3, 3, 3, 3,
1, 1, 3, 1, 1, 3, 1, 1, 1, 2, 3, 3, 1, 3, 2, 3, 5, 4, 4, 4, 1,
2, 3, 3, 3, 2, 1, 3, 2, 3, 2, 2, 1, 3, 3, 1, 3, 5, 4, 3, 3, 1,
3, 3, 1, 2, 3, 1, 4, 5, 1, 2, 3, 5, 2, 5, 3, 5, 5, 4, 2, 3, 3,
3, 2, 4, 3, 3, 2, 2, 1, 2, 2, 2, 2, 5, 5, 3, 3, 2, 3, 3, 2, 3,
4, 3, 5, 3, 3, 3, 5, 5, 3, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 5),
rMEQQ04 = c(3, 2, 3, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 3,
2, 5, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 2, 4, 3, 3, 3,
3, 3, 1, 3, 2, 3, 3, 3, 3, 3, 3, 4, 3, 3, 5, 3, 3, 3, 3,
5, 3, 3, 3, 5, 5, 4, 1, 3, 3, 3, 2, 2, 3, 3, 3, 3, 3, 3,
3, 4, 3, 3, 2, 3, 3, 3, 2, 2, 3, 3, 3, 1, 3, 2, 1, 3, 2,
2, 3, 3, 2, 3, 3, 1, 3, 1, 4, 3, 3, 1, 3, 3, 3, 5, 3, 3,
3, 3, 2, 3, 3, 3, 2, 3, 4, 2, 3, 3, 3, 2, 2, 3, 3, 3, 3,
3, 3, 3, 4, 3, 3, 5, 1, 1, 2, 3, 3, 2, 2, 3, 2, 3, 2, 3,
3, 3, 2, 3, 2, 3, 3, 2, 3, 4, 5, 3, 3, 2, 3, 3, 3, 3, 3,
3, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 3, 3, 3, 3,
3, 3, 1, 3, 1, 2, 2, 1, 2, 3, 3, 3, 5, 4, 1, 1, 1, 3, 5,
4, 3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 2,
3, 3, 4, 3, 4, 3, 3, 1, 2, 3, 3, 4, 3, 4, 3, 3, 3, 3, 5,
3, 3, 3, 3, 2, 3, 3, 3, 1, 4, 3, 3, 3, 3, 3, 3, 2), rMEQQ05 = c(4,
0, 4, 2, 6, 0, 2, 0, 2, 2, 0, 4, 2, 4, 4, 0, 0, 4, 2, 2,
0, 4, 4, 0, 4, 2, 6, 4, 2, 2, 4, 6, 2, 0, 2, 0, 0, 2, 0,
2, 2, 6, 0, 0, 0, 0, 0, 4, 0, 4, 4, 0, 4, 0, 4, 4, 0, 6,
2, 6, 4, 6, 2, 6, 2, 2, 2, 4, 6, 4, 2, 6, 2, 6, 6, 2, 0,
0, 4, 2, 6, 6, 4, 0, 4, 0, 6, 4, 0, 2, 2, 2, 2, 4, 0, 2,
0, 0, 4, 0, 4, 2, 0, 0, 2, 2, 0, 0, 4, 6, 2, 4, 0, 2, 4,
4, 0, 2, 6, 2, 4, 0, 0, 4, 0, 4, 2, 4, 2, 0, 4, 0, 0, 4,
2, 0, 2, 2, 0, 0, 0, 4, 0, 4, 0, 4, 2, 6, 6, 0, 0, 4, 4,
6, 2, 2, 4, 0, 0, 0, 2, 0, 2, 2, 2, 4, 2, 2, 6, 4, 2, 4,
0, 0, 0, 0, 2, 0, 2, 0, 4, 4, 4, 4, 4, 2, 6, 4, 0, 4, 0,
2, 0, 0, 0, 6, 2, 0, 6, 4, 2, 2, 0, 2, 0, 2, 6, 2, 2, 0,
6, 2, 2, 0, 0, 0, 4, 0, 4, 2, 6, 2, 4, 2, 4, 2, 4, 6, 6,
6, 4, 0, 0, 0, 4, 6, 2, 6, 6, 2, 2, 0, 0, 2, 2, 6, 2, 2,
6, 6, 0, 0, 4, 2, 4, 4, 4, 0, 2, 0)), row.names = c(NA, -260L
), class = "data.frame")
alpha <- psych::alpha(data) # alpha$total$raw_alpha = -0.3665156
omega <- psych::omega(data) # omega$alpha = 0.5710576
</code></pre> | 2020-05-25 13:39:57.700000+00:00 | 2020-06-05 17:27:27.520000+00:00 | 2020-05-25 13:58:26.537000+00:00 | r|psych | ['https://psyarxiv.com/2y3w9/'] | 1 |
56,043,853 | <p>That's an ill-posed problem (you cannot measure depth with a single RGB camera) and a topic of recent research. I found this <a href="https://arxiv.org/pdf/1901.09402.pdf" rel="nofollow noreferrer">survey paper</a>. Most often a depth image is learned from an RGB image using convolutional neural networks. </p>
<p>However, if you use a lot of prior information about your scene (all objects are circular within the image, and the partially visible circles correspond to the ones in the background), then you might be able to do something with heuristic methods like thresholding, edge detection or Hough transforms, but it won't be easy.</p> | 2019-05-08 15:11:13.650000+00:00 | 2019-05-10 06:59:28.360000+00:00 | 2019-05-10 06:59:28.360000+00:00 | null | 56,032,540 | <p>I am trying to obtain the relative depth of pixels of an image. For example, the image in <a href="https://www.awn.com/news/nvidia-unveils-quadro-rtx-worlds-first-ray-tracing-gpu" rel="nofollow noreferrer">https://www.awn.com/news/nvidia-unveils-quadro-rtx-worlds-first-ray-tracing-gpu</a>. I don't need the precise distance of each pixel, which I believe would be impossible, but I would like to get something like "the green ball is further away than the other balls". Is it possible using OpenCV in Python? The code I wrote can identify each ball, but not their relative distance or depth, so it is pretty much useless for my purposes.</p> | 2019-05-08 01:59:02.677000+00:00 | 2019-05-10 06:59:28.360000+00:00 | null | python|opencv|image-processing | ['https://arxiv.org/pdf/1901.09402.pdf'] | 1
67,050,652 | <p>I found a cool method called predefined evenly-distributed class centroids:</p>
<p><a href="https://github.com/anlongstory/CSAE/blob/master/PEDCC.py" rel="nofollow noreferrer">https://github.com/anlongstory/CSAE/blob/master/PEDCC.py</a></p>
<p>An elaboration of the method is in the paper <a href="https://arxiv.org/abs/1902.00220" rel="nofollow noreferrer">A Classification Supervised Auto-Encoder Based on Predefined Evenly-Distributed Class Centroids</a>.</p> | 2021-04-11 22:12:24.540000+00:00 | 2021-04-11 22:12:24.540000+00:00 | null | null | 67,016,698 | <p>I'm trying to write code to generate N points as far as possible from each other on a D-dimensional hypersphere. The method I have so far is to take the number of points and hope it's less than D or 2*D, which it usually will be. Then I create N/2 vectors that are 0 at every index except at index n (for n between 1 and N/2), duplicate them, multiply the copies by -1 and append them, but I think that'll only generate points equally spaced apart on a portion of a sphere. Here's my code:</p>
<pre><code>import numpy as np
start = np.eye(D)[:N // 2]  # first N/2 standard basis vectors (D and N are assumed to be defined, with N <= 2*D)
points = np.concatenate((start, -1 * start), axis=0)  # append the antipodal points, giving N points in total
</code></pre> | 2021-04-09 07:19:24.683000+00:00 | 2021-04-11 22:12:24.540000+00:00 | 2021-04-10 08:03:36.323000+00:00 | python|numpy|geometry|n-dimensional | ['https://github.com/anlongstory/CSAE/blob/master/PEDCC.py', 'https://arxiv.org/abs/1902.00220'] | 2 |
25,641,787 | <p>More flexible control over the search process is provided by Search Combinators, which are described in the publication below. </p>
<p>Schrijvers, Tom, et al. "Search combinators." Principles and Practice of Constraint Programming–CP 2011. Springer Berlin Heidelberg, 2011. 774-788.
Online at <a href="http://arxiv.org/pdf/1203.1095.pdf" rel="nofollow">http://arxiv.org/pdf/1203.1095.pdf</a> </p>
<p>An implementation exists for Gecode; see the bottom of <a href="http://www.gecode.org/flatzinc.html" rel="nofollow">Gecode's FlatZinc page</a>.</p> | 2014-09-03 10:17:13.153000+00:00 | 2014-09-03 10:17:13.153000+00:00 | null | null | 22,830,637 | <p>A constraint model may have restrictions or provide hints to the constraint solver to solve the problem more efficiently by defining the order in which variables are solved for. Is there a mechanism to specify the order in which variables need to be solved in MiniZinc or FlatZinc?</p> | 2014-04-03 07:26:45.290000+00:00 | 2014-09-03 10:17:13.153000+00:00 | null | constraint-programming|minizinc | ['http://arxiv.org/pdf/1203.1095.pdf', 'http://www.gecode.org/flatzinc.html'] | 2
41,351,608 | <p>This question was asked five years ago, so the answers provided above are outdated: deep learning has changed the face of computer vision over the past 3-4 years. A deep-learning-based solution would involve training a Convolutional Neural Network, which learns features and performs classification in an end-to-end framework. However, since multiple characters may be present in the same image, the standard softmax cross-entropy loss used in image classification may not be appropriate. Hence, independent per-class logistic (sigmoid) losses should be used instead. A decision threshold for each class can then be chosen based on accuracy over a held-out validation set. Even for cartoons, it is better to use a pre-trained model initialized on ImageNet instead of training from scratch (<a href="https://arxiv.org/pdf/1611.05118v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.05118v1.pdf</a>; although the final task in this paper is different, they still process cartoons). If you have abundant data, pre-training may not be that important. This task can be performed using standard libraries like <a href="http://caffe.berkeleyvision.org/" rel="nofollow noreferrer">caffe</a>/<a href="http://torch.ch/" rel="nofollow noreferrer">torch</a> etc.</p> | 2016-12-27 20:11:10.660000+00:00 | 2016-12-27 20:11:10.660000+00:00 | null | null | 8,108,550 | <p>As a self-development exercise, I want to develop a simple classification algorithm that, given a particular cell of a Dilbert cartoon, is able to identify which characters are present in the cartoon (Dilbert, PHB, Ratbert etc.). </p>
<p>I assume the best way to do this is to (1) apply some algorithm to the image, which converts it into a set of features, and (2) use a training set and one of many possible machine learning algorithms to correlate the presence/absence of certain features with a particular character being present in the cell.</p>
<p>So my questions are - (a) is this the correct approach, (b) since there's a number of classification algorithms and ML algorithms to test, what is a good methodology for finding the right one, and (c) which algorithms would you start with, given that we're essentially conducting a classification exercise on a cartoon.</p> | 2011-11-13 00:08:15.477000+00:00 | 2017-01-15 21:28:17.947000+00:00 | 2011-11-15 04:10:10.047000+00:00 | python|machine-learning|computer-vision|classification|feature-detection | ['https://arxiv.org/pdf/1611.05118v1.pdf', 'http://caffe.berkeleyvision.org/', 'http://torch.ch/'] | 3 |
2,230,377 | <p>This debate has been going on in almost every SQL Server community for ages. There are good arguments for both sides, and there is definitely no one-size-fits-all answer. It really depends on your individual situation and on many factors, such as the number of users, average file size, update frequency, read/write ratio, disk subsystem, and so on.</p>
<p>But since you mention SQL Express, probably the most important factor is the maximum database size limit, and this is a very good reason to go for the filesystem approach. Anyway, this research paper might still be interesting for you: <a href="http://arxiv.org/ftp/cs/papers/0701/0701168.pdf" rel="nofollow noreferrer">To BLOB or Not To BLOB:
Large Object Storage in a Database or a Filesystem?</a></p>
<p>This paper used to be on the Microsoft Research site here: <a href="http://research.microsoft.com/apps/pubs/default.aspx?id=64525" rel="nofollow noreferrer">http://research.microsoft.com/apps/pubs/default.aspx?id=64525</a>, but that link doesn't work for me. SQL Server has come a long way since then. Quassnoi already mentioned FILESTREAM, for example.</p> | 2010-02-09 15:55:20.837000+00:00 | 2010-02-09 15:55:20.837000+00:00 | null | null | 2,230,032 | <p>I am planning the development of a photo gallery application for a client. I am developing the app in ASP.NET 3.5 and would like to develop it so that I can re-use the application across multiple platforms using various front-ends. Basically, I am wondering about the advantages and disadvantages of storing images in the database as binary files as opposed to simply storing the files in an application folder.</p>
<p>Any advice would be greatly appreciated! </p>
<p>Thanks,
Tristan</p> | 2010-02-09 15:09:27.923000+00:00 | 2012-04-08 18:56:09.830000+00:00 | 2010-02-09 16:49:43.103000+00:00 | sql-server|database-design|image|asp.net-3.5 | ['http://arxiv.org/ftp/cs/papers/0701/0701168.pdf', 'http://research.microsoft.com/apps/pubs/default.aspx?id=64525'] | 2 |
63,052,156 | <p>What you describe is actually one of the most important research areas of Deep RL: the exploration problem.</p>
<p>The PPO algorithm (like many other "standard" RL algos) tries to maximise a return, which is a (usually discounted) sum of rewards provided by your environment:</p>
<p><a href="https://i.stack.imgur.com/FBG9L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FBG9L.png" alt="enter image description here" /></a></p>
<p>In your case, you have a <strong>deceptive gradient</strong> problem: the gradient of your return points directly at your objective point (because your reward is the distance to your objective), which discourages your agent from exploring other areas.</p>
<p>Here is an illustration of the deceptive gradient problem from <a href="https://arxiv.org/pdf/2006.08505.pdf" rel="nofollow noreferrer">this paper</a>; the reward is computed like yours and, as you can see, the gradient of the return function points directly to the objective (the little square in this example). If your agent starts in the bottom-right part of the maze, it is very likely to get stuck in a local optimum.</p>
<p><a href="https://i.stack.imgur.com/Tbxdg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tbxdg.png" alt="enter image description here" /></a></p>
<p>There are many ways to deal with the exploration problem in RL. In PPO, for example, you can add some noise to your actions; other approaches like <a href="https://arxiv.org/pdf/1812.05905.pdf" rel="nofollow noreferrer">SAC</a> try to maximize both the reward and the entropy of your policy over the action space. But in the end you have no guarantee that adding exploration noise in your <strong>action space</strong> will result in efficient exploration of your <strong>state space</strong> (which is actually what you want to explore, namely the (x,y) positions of your env).</p>
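<p>For reference, the action-space entropy bonus mentioned above usually enters the loss as a single extra term. A toy PyTorch-style sketch with dummy tensors (this encourages action noise, which, as discussed, does not guarantee good state-space coverage):</p>
<pre><code>import torch
from torch.distributions import Normal

mean = torch.zeros(4, requires_grad=True)   # stand-in for the policy network output
dist = Normal(mean, torch.ones(4))
actions = dist.sample()
log_probs = dist.log_prob(actions).sum()
advantages = torch.tensor(1.0)              # placeholder advantage estimate
ent_coef = 0.01
policy_loss = -(advantages * log_probs)
loss = policy_loss - ent_coef * dist.entropy().sum()  # entropy bonus pushes toward more exploration
loss.backward()
</code></pre>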
<p>I recommend you to read the <strong>Quality Diversity</strong> (QD) literature, which is a very promising field aiming to solve the exploration problem in RL.</p>
<p>Here are two great resources:</p>
<ul>
<li><a href="https://quality-diversity.github.io/" rel="nofollow noreferrer">A website gathering all informations about QD</a></li>
<li><a href="https://www.youtube.com/watch?v=g6HiuEnbwJE" rel="nofollow noreferrer">A talk from ICLM 2019</a></li>
</ul>
<p>Finally, I want to add that the problem is not your reward function: you should not try to engineer a complex reward function such that your agent behaves the way you want. The goal is to have an agent that is able to solve your environment despite pitfalls like the deceptive gradient problem.</p> | 2020-07-23 10:29:36.420000+00:00 | 2020-07-23 10:29:36.420000+00:00 | null | null | 63,047,930 | <p>I am working on driving industrial robots with neural nets and so far it is working well. I am using the PPO algorithm from OpenAI Baselines and so far I can drive easily from point to point by using the following reward strategy:</p>
<p>I calculate the normalized distance between the target and the position. Then I calculate the distance reward with:</p>
<pre><code>rd = 1-(d/dmax)^a
</code></pre>
<p>For each time step, I give the agent a penalty calculated by:</p>
<pre><code>yt = 1-(t/tmax)*b
</code></pre>
<p>a and b are hyperparameters to tune.</p>
<p>As I said, this works really well if I want to drive from point to point. <strong>But what if I want to drive around something?</strong> For my work, I need to avoid collisions and therefore the agent needs to drive around objects. If the object is not directly in the way of the shortest path, it works OK: the robot can adapt and drive around it. But it gets more and more difficult, up to impossible, to drive around objects which are directly in the way.</p>
<p>See this image :
<a href="https://i.stack.imgur.com/5liV2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5liV2.png" alt="enter image description here" /></a></p>
<p>I already read a <a href="https://arxiv.org/abs/1905.09492?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A%20arxiv%2FQSXk%20%28ExcitingAds%21%20cs%20updates%20on%20arXiv.org%29" rel="nofollow noreferrer">paper</a> which combines PPO with NES to create some Gaussian noise for the parameters of the neural network but I can't implement it by myself.</p>
<p>Does anyone have some experience with adding more exploration to the PPO algorithm? Or does anyone have some general ideas on how I can improve my rewarding strategy?</p> | 2020-07-23 06:09:46.703000+00:00 | 2020-07-23 11:29:13.083000+00:00 | 2020-07-23 11:29:13.083000+00:00 | neural-network|reinforcement-learning|openai | ['https://i.stack.imgur.com/FBG9L.png', 'https://arxiv.org/pdf/2006.08505.pdf', 'https://i.stack.imgur.com/Tbxdg.png', 'https://arxiv.org/pdf/1812.05905.pdf', 'https://quality-diversity.github.io/', 'https://www.youtube.com/watch?v=g6HiuEnbwJE'] | 6 |
64,774,034 | <p>Hidden variables z are used in VAEs as the extracted features for dimensionality reduction. Here is an example dimensionality reduction from four features in the original space (<code>[x1,x2,x3,x4]</code>) to two features in the reduced space (<code>[z1,z2]</code>) (<a href="https://arxiv.org/pdf/2002.10464.pdf" rel="nofollow noreferrer">source</a>):</p>
<p><a href="https://i.stack.imgur.com/AWNEc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AWNEc.png" alt="enter image description here" /></a></p>
<p>Once you have trained the model, you can pass a sample to the encoder and it extracts the features. You may find a Keras implementation example on MNIST data <a href="https://keras.io/examples/generative/vae/" rel="nofollow noreferrer">here</a> (see the <code>plot_label_clusters</code> function):
<a href="https://i.stack.imgur.com/MZg4l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MZg4l.png" alt="enter image description here" /></a></p> | 2020-11-10 17:44:18.370000+00:00 | 2020-11-10 17:44:18.370000+00:00 | null | null | 64,770,318 | <p>I want to use my VAE trained on an image dataset as a feature extractor for another task, so that I could for example replace a ResNet for feature extraction with my VAE.
Which Layers do I use for this?</p>
<p>With "standard" autoencoders you just take the encoding network, but since the latent layer of the VAE consist of mean and distribution I do not know which layers I should use for feature extraction.</p>
<p>Does somebody know how to use a VAE as a feature extractor and what to consider with using different components?</p> | 2020-11-10 13:57:52.443000+00:00 | 2020-11-12 08:46:34.140000+00:00 | 2020-11-12 08:46:34.140000+00:00 | tensorflow|machine-learning|keras|feature-extraction|autoencoder | ['https://arxiv.org/pdf/2002.10464.pdf', 'https://i.stack.imgur.com/AWNEc.png', 'https://keras.io/examples/generative/vae/', 'https://i.stack.imgur.com/MZg4l.png'] | 4 |
18,128,818 | <p>Regarding new-ish and interesting algorithms, this is by no means exhaustive or state of the art, but these are the first places I would look:</p>
<p><em>Specific Algorithm</em>: <a href="http://scholar.google.se/scholar?cluster=3057194774950103339&hl=en&as_sdt=0,5">DiDiC (Distributed Diffusive Clustering)</a> - I used it once in my thesis (<a href="http://arxiv.org/abs/1301.5121">Partitioning Graph Databases</a>)</p>
<ul>
<li>You iterate over all nodes, then for each node retrieve all neighbors, in order to spread some of "some unit" to all your neighbors</li>
<li>Easy to implement. </li>
<li>Can be made deterministic</li>
<li>Iterative - as it's based on iterations (like Super Steps in Pregel) you can stop it at any time. The longer you leave it the better the result, in theory (though in some cases, on certain graph shapes it can be unstable)</li>
<li>When we implemented this we ran it for 100 iterations on a machine with ~30GB RAM, for up to ~4 million nodes - it took no more than two days to complete.</li>
</ul>
<p><em>Specific Algorithm</em>: <a href="http://scholar.google.se/scholar?cluster=16072457052181015857&hl=en&as_sdt=0,5">EvoCut "Finding sparse cuts locally using evolving sets"</a> - local probabilistic algorithm from Microsoft - <a href="http://scholar.google.se/scholar?q=related%3aMa34jojWDN8J%3ascholar.google.com/&hl=en&as_sdt=0,5">related to these papers</a></p>
<ul>
<li>Difficult to implement</li>
<li>Local algorithm - BFS-like access patterns (random walks)</li>
<li>It's been a while since I read that paper, but I remember it was built on clean abstractions:
<ul>
<li>EvoNibble (pluggable - decides how much of the neighborhood to add to the current cluster)</li>
<li>EvoCut (calls EvoNibble multiple times to find the local cluster)</li>
<li>EvoPartition (calls EvoCut repeatedly to partition entire graph)</li>
</ul></li>
<li>Not deterministic</li>
</ul>
<p><em>General Algorithm Family</em>: <a href="http://scholar.google.se/scholar?as_ylo=2009&q=hierarchical%20graph%20clustering&hl=en&as_sdt=0,5">Hierarchical Graph Clustering</a></p>
<p>From a high level:</p>
<ul>
<li>Coarsen the graph by collapsing nodes into aggregate nodes
<ul>
<li>coarsening strategy is selectable</li>
</ul></li>
<li>Find clusters in the coarsened/smaller graph
<ul>
<li>clustering strategy is selectable</li>
</ul></li>
<li>Incrementally decoarsen the graph, refining at the clustering at each step
<ul>
<li>refining strategy is selectable</li>
</ul></li>
</ul>
<p>Notes:</p>
<ul>
<li>If the graph changes slowly (or results don't need to be right up to date) it may be possible to coarsen once (or infrequently) then work with the coarsened graph - to save computation</li>
<li>I don't know of a specific algorithm to recommend</li>
</ul>
<p><strong>General limitations</strong> - the things few clustering algorithms do:</p>
<ul>
<li>Node types not acknowledged - i.e., all nodes treated equally</li>
<li>Relationship types not acknowledged - i.e., all relationships treated equally</li>
<li>Relationship direction not acknowledged - i.e., relationships treated as undirected</li>
</ul> | 2013-08-08 14:35:14.693000+00:00 | 2013-08-08 15:12:27.380000+00:00 | 2013-08-08 15:12:27.380000+00:00 | null | 18,117,757 | <p>I know there are some famous graph partitioning tools like METIS, which is implemented by the Karypis Lab (<a href="http://glaros.dtc.umn.edu/gkhome/metis/metis/overview" rel="noreferrer">http://glaros.dtc.umn.edu/gkhome/metis/metis/overview</a>)</p>
<p>but I wanna know is there any method to partition graph stored in Neo4j?
or I have to dump the Neo4j's data and transform the node and edge format manually to fit the METIS input format?</p> | 2013-08-08 04:09:12.737000+00:00 | 2013-08-08 15:12:27.380000+00:00 | 2013-08-08 15:03:42.323000+00:00 | graph|neo4j|partitioning|metis | ['http://scholar.google.se/scholar?cluster=3057194774950103339&hl=en&as_sdt=0,5', 'http://arxiv.org/abs/1301.5121', 'http://scholar.google.se/scholar?cluster=16072457052181015857&hl=en&as_sdt=0,5', 'http://scholar.google.se/scholar?q=related%3aMa34jojWDN8J%3ascholar.google.com/&hl=en&as_sdt=0,5', 'http://scholar.google.se/scholar?as_ylo=2009&q=hierarchical%20graph%20clustering&hl=en&as_sdt=0,5'] | 5 |
50,038,677 | <p>I've figured out a way to implement <a href="https://github.com/holyseven/PSPNet-TF-Reproduce/blob/master/model/utils_mg.py#L177" rel="nofollow noreferrer">sync batch norm</a> in pure tensorflow and pure python. </p>
<p><a href="https://github.com/holyseven/PSPNet-TF-Reproduce" rel="nofollow noreferrer">The code</a> makes it possible to train <a href="https://arxiv.org/abs/1612.01105" rel="nofollow noreferrer">PSPNet</a> on Cityscapes and get comparable performance.</p> | 2018-04-26 08:48:20.480000+00:00 | 2018-04-26 08:48:20.480000+00:00 | null | null | 43,056,966 | <p>I'd like to know the possible ways to implement batch normalization layers with synchronizing batch statistics when training with multi-GPU. </p>
<p><strong>Caffe</strong> Maybe there are some variants of caffe that could do it, like this <a href="https://github.com/yjxiong/caffe" rel="noreferrer">link</a>. But for the BN layer, my understanding is that it still synchronizes only the outputs of layers, not the means and vars.</p>
<p><strong>Torch</strong> I've seen some comments <a href="https://github.com/tensorflow/tensorflow/issues/7439" rel="noreferrer">here</a> and <a href="https://github.com/torch/nn/issues/1071" rel="noreferrer">here</a>, which show the running_mean and running_var can be synchronized but I think batch mean and batch var can not or are difficult to synchronize.</p>
<p><strong>Tensorflow</strong> Normally, it is the same as caffe and torch. The implementation of BN refers to <a href="https://github.com/ppwwyyxx/tensorpack/blob/master/tensorpack/models/batch_norm.py" rel="noreferrer">this</a>. I know tensorflow can distribute an operation to any device specified by <code>tf.device()</code>. But the computation of means and vars is in the middle of the BN layer, so if I gather the means and vars on the cpu, my code will be like this:</p>
<pre class="lang-py prettyprint-override"><code>cpu_gather = []
label_batches = []
for i in range(num_gpu):
with tf.device('/gpu:%d' % i):
with tf.variable_scope('block1', reuse=i > 0):
image_batch, label_batch = cifar_input.build_input('cifar10', train_data_path, batch_size, 'train')
label_batches.append(label_batch)
x = _conv('weights', image_batch, 3, 3, 16, _stride_arr(1))
            cpu_gather.append(x)
with tf.device('/cpu:0'):
    print cpu_gather[0].get_shape()
    x1 = tf.concat(cpu_gather, 0)
# print x1.get_shape()
mean, variance = tf.nn.moments(x1, [0, 1, 2], name='moments')
for i in range(num_gpu):
with tf.device('/gpu:%d' % i):
with tf.variable_scope('block2', reuse=i > 0):
shape = cpu_gather[i].get_shape().as_list()
assert len(shape) in [2, 4]
n_out = shape[-1]
beta, gamma, moving_mean, moving_var = get_bn_variables(n_out, True, True)
x = tf.nn.batch_normalization(
cpu_gather[i], mean, variance, beta, gamma, 0.00001)
x = _relu(x)
</code></pre>
<p>That is just for one BN layer. To gather the statistics on the cpu, I have to break up the code. If I have more than 100 BN layers, that will be cumbersome. </p>
<p>I am not an expert in those libraries, so maybe there are some misunderstandings; feel free to point out my errors. </p>
<p>I do not care much about training speed. I am doing image segmentation which consumes much GPU memory and BN needs a reasonable batch size (e.g. larger than 16) for stable statistics. So using multi-GPU is inevitable. In my opinion, tensorflow might be the best choice but I can't resolve the breaking code problem. Solution with other libraries will be welcome too.</p> | 2017-03-27 21:42:48.410000+00:00 | 2020-09-01 17:06:27.977000+00:00 | 2017-06-09 14:25:58.467000+00:00 | tensorflow|caffe|torch|multi-gpu|batch-normalization | ['https://github.com/holyseven/PSPNet-TF-Reproduce/blob/master/model/utils_mg.py#L177', 'https://github.com/holyseven/PSPNet-TF-Reproduce', 'https://arxiv.org/abs/1612.01105'] | 3 |
26,127,012 | <p><a href="https://arxiv.org/pdf/0912.4540.pdf" rel="nofollow noreferrer">The Fibonacci sphere algorithm</a> is great for this. It is fast and gives results that at a glance will easily fool the human eye. <a href="http://www.openprocessing.org/sketch/41142" rel="nofollow noreferrer">You can see an example done with processing</a> which will show the result over time as points are added. <a href="https://www.vertexshaderart.com/art/79HqSrQH4meL63aAo/revision/9c9YN5LwBQKLDa4Aa" rel="nofollow noreferrer">Here's another great interactive example</a> made by @gman. And here's a simple implementation in python.</p>
<pre><code>import math
def fibonacci_sphere(samples=1000):
points = []
phi = math.pi * (3. - math.sqrt(5.)) # golden angle in radians
for i in range(samples):
y = 1 - (i / float(samples - 1)) * 2 # y goes from 1 to -1
radius = math.sqrt(1 - y * y) # radius at y
theta = phi * i # golden angle increment
x = math.cos(theta) * radius
z = math.sin(theta) * radius
points.append((x, y, z))
return points
</code></pre>
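<p>Usage is straightforward; every returned point is an (x, y, z) tuple of unit length:</p>
<pre><code>pts = fibonacci_sphere(samples=20)
x, y, z = pts[0]
print(round(x * x + y * y + z * z, 6))  # 1.0 -- the points lie on the unit sphere
</code></pre>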
<p>1000 samples gives you this:</p>
<p><img src="https://i.stack.imgur.com/NsCif.png" alt="enter image description here" /></p> | 2014-09-30 17:51:24.487000+00:00 | 2022-09-06 22:32:21.340000+00:00 | 2022-09-06 22:32:21.340000+00:00 | null | 9,600,801 | <p>I need an algorithm that can give me positions around a sphere for N points (less than 20, probably) that vaguely spreads them out. There's no need for "perfection", but I just need it so none of them are bunched together.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/5408276/python-uniform-spherical-distribution">This question</a> provided good code, but I couldn't find a way to make this uniform, as this seemed 100% randomized.</li>
<li><a href="http://www.xsi-blog.com/archives/115" rel="noreferrer">This blog post</a> recommended had two ways allowing input of number of points on the sphere, but the <a href="http://sitemason.vanderbilt.edu/page/hmbADS" rel="noreferrer">Saff and Kuijlaars</a> algorithm is exactly in psuedocode I could transcribe, and the <a href="http://cgafaq.info/wiki/Evenly_distributed_points_on_sphere#Spirals" rel="noreferrer">code example</a> I found contained "node[k]", which I couldn't see explained and ruined that possibility. The second blog example was the Golden Section Spiral, which gave me strange, bunched up results, with no clear way to define a constant radius.</li>
<li><a href="http://mathworld.wolfram.com/SpherePointPicking.html" rel="noreferrer">This algorithm</a> from <a href="https://stackoverflow.com/questions/1841014/uniform-random-monte-carlo-distribution-on-unit-sphere">this question</a> seems like it could possibly work, but I can't piece together what's on that page into psuedocode or anything.</li>
</ul>
<p>A few other question threads I came across spoke of randomized uniform distribution, which adds a level of complexity I'm not concerned about. I apologize that this is such a silly question, but I wanted to show that I've truly looked hard and still come up short.</p>
<p>So, what I'm looking for is simple pseudocode to evenly distribute N points around a unit sphere, that either returns in spherical or Cartesian coordinates. Even better if it can even distribute with a bit of randomization (think planets around a star, decently spread out, but with room for leeway).</p> | 2012-03-07 11:39:06.463000+00:00 | 2022-09-06 22:32:21.340000+00:00 | 2018-02-08 18:11:23.490000+00:00 | python|algorithm|math|geometry|uniform | ['https://arxiv.org/pdf/0912.4540.pdf', 'http://www.openprocessing.org/sketch/41142', 'https://www.vertexshaderart.com/art/79HqSrQH4meL63aAo/revision/9c9YN5LwBQKLDa4Aa'] | 3 |
58,164,824 | <p>I finally solved the problem and wrote a paper describing the solution in detail: <a href="https://arxiv.org/abs/1909.10765" rel="nofollow noreferrer">https://arxiv.org/abs/1909.10765</a></p>
<p>Briefly, divide and multiply each addend by the first term in the sum to obtain</p>
<p><em>p(a, b, t, λ, μ) = ω(a, b, t, λ, μ) 2F1(-a, -b; -(a + b - 1); -z(t, λ, μ))</em></p>
<p>where <em>ω(a, b, t, λ, μ)</em> is the first term in the series and <em>2F1</em> is the Gaussian hypergeometric function.
Hypergeometric function <em>2F1(-a, -b; -(a + b - k); -z)</em> (<em>a</em> and <em>b</em> positive integers, <em>k <= 1</em>, <em>z</em> real number) can be computed with the following three-term recurrence relation (TTRR):</p>
<p><em>u(a, b, k) y(b + 1) + v(a, b, k, z) y(b) + w(b, k, z) y(b - 1) = 0</em></p>
<p>where</p>
<p><em>u(a, b, k) = (a + b + 1 − k) (a + b − k)</em></p>
<p><em>v(a, b, k, z) = − (a + b − k) (a + b + 1 − k + (a − b) z)</em></p>
<p><em>w(b, k, z) = − b (b − k) z</em></p>
<p>If <em>b > a</em> swap the two variables (that is <em>a' = max(a, b)</em> and <em>b' = min(a, b)</em>).</p>
<p>Compute the recurrence in a forward manner starting with values <em>y(0) = 2F1(-a, 0; -(a - k); -z) = 1</em> and <em>y(1) = 2F1(-a, -1; -(a + 1 - k); -z) = 1 + (a z) / (a + 1 - k)</em>.</p>
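<p>In code, the forward recurrence is only a few lines. A sketch (written in Python here purely for illustration, rather than the Julia used in the package below; <em>a</em> and <em>b</em> are the non-negative integer parameters and <em>z</em> a real number):</p>
<pre><code>def hyp2f1_ttrr(a, b, k, z):
    # 2F1(-a, -b; -(a + b - k); -z) via the forward three-term recurrence above
    if b > a:
        a, b = b, a                           # ensure a >= b, as described
    y_prev = 1.0                              # y(0)
    y_curr = 1.0 + a * z / (a + 1 - k)        # y(1)
    if b == 0:
        return y_prev
    for j in range(1, b):                     # builds y(2), ..., y(b)
        u = (a + j + 1 - k) * (a + j - k)
        v = -(a + j - k) * (a + j + 1 - k + (a - j) * z)
        w = -j * (j - k) * z
        y_prev, y_curr = y_curr, -(v * y_curr + w * y_prev) / u
    return y_curr
</code></pre>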
<p>I implemented the previous algorithm in the <em>Julia</em> package <a href="https://github.com/albertopessia/SimpleBirthDeathProcess.jl" rel="nofollow noreferrer">SimpleBirthDeathProcess</a>.</p> | 2019-09-30 09:16:00.953000+00:00 | 2019-09-30 09:16:00.953000+00:00 | null | null | 51,748,069 | <p>I want to numerically evaluate the transition probability of a linear Birth and Death process</p>
<p><img src="https://i.stack.imgur.com/WA7Rh.png" height="60"></p>
<p>where <sub><img src="https://i.stack.imgur.com/QklZw.png" height="32"></sub> is the binomial coefficient and</p>
<p><img src="https://i.stack.imgur.com/FG177.png" height="90"></p>
<p>I am able to evaluate it with an acceptable numerical error (using logarithms and the Kahan-Neumaier summation algorithm) for most parameter combinations.</p>
<p>Problems arise when addends alternate in sign and numerical error dominates the sum (condition number tends to infinity in this case). This happens when</p>
<p><img src="https://i.stack.imgur.com/yiJpC.png" height="32"></p>
<p>For example, I have problems evaluating <code>p(1000, 2158, 72.78045, 0.02, 0.01)</code>. It should be 0 but I get the very big value <code>log(p) ≈ 99.05811</code>, which is impossible for a probability.</p>
<p>I tried refactoring the sum in many different ways and using various "precise" summation algorithms such as <a href="https://dl.acm.org/citation.cfm?id=1824815" rel="nofollow noreferrer">Zhu-Hayes</a>. I always get approximately the same wrong value, making me think that the problem is not the way I sum the numbers but the inner representation of each addend.</p>
<p>Because of binomial coefficients, values easily overflow. I tried with a linear transformation in order to keep each (absolute) element in the sum between the lowest normal number and 1. It didn't help and I think it is because of many algebraic operations of similar magnitudes.</p>
<p>I am now at a dead end and don't know how to proceed. I could use arbitrary precision arithmetic libraries but the computational cost is too high for my Markov Chain Monte Carlo application.</p>
<p>Is there a proper way or tricks to evaluate such sums when we cannot store partial sums at a good-enough precision in a IEEE-754 double?</p>
<p>Here is a basic working example where I only rescale the values by the maximum and sum with Kahan summation algorithm. Obviously, most values end up being subnormals with a Float64.</p>
<pre class="lang-julia prettyprint-override"><code># this is the logarithm of the absolute value of element h
@inline function log_addend(a, b, h, lα, lβ, lγ)
log(a) + lgamma(a + b - h) - lgamma(h + 1) - lgamma(a - h + 1) -
lgamma(b - h + 1) + (a - h) * lα + (b - h) * lβ + h * lγ
end
# this is the logarithm of the ratio between (absolute) elements i and j
@inline function log_ratio(a, b, i, j, q)
lgamma(j + 1) + lgamma(a - j + 1) + lgamma(b - j + 1) + lgamma(a + b - i) -
lgamma(i + 1) - lgamma(a - i + 1) - lgamma(b - i + 1) - lgamma(a + b - j) +
(j - i) * q
end
# function designed to handle the case of an alternating series with λ > μ > 0
function log_trans_prob(a, b, t, λ, μ)
n = a + b
k = min(a, b)
ω = μ / λ
η = exp((μ - λ) * t)
if b > zero(b)
lβ = log1p(-η) - log1p(-ω * η)
lα = log(μ) + lβ - log(λ)
lγ = log(ω - η) - log1p(-ω * η)
q = lα + lβ - lγ
# find the index of the maximum addend in the sum
# use a numerically stable method for solving quadratic equations
x = exp(q)
y = 2 * x / (1 + x) - n
z = ((b - x) * a - x * b) / (1 + x)
sup = if y < zero(y)
ceil(typeof(a), 2 * z / (-y + sqrt(y^2 - 4 * z)))
else
ceil(typeof(a), (-y - sqrt(y^2 - 4 * z)) / 2)
end
# Kahan summation algorithm
val = zero(t)
tot = zero(t)
err = zero(t)
res = zero(t)
for h in 0:k
# the problem happens here when we call the `exp` function
# My guess is that log_ratio is either very big or very small and its
# `exp` cannot be properly represented by Float64
val = (-one(t))^h * exp(log_ratio(a, b, h, sup, q))
tot = res + val
# Neumaier modification
err += (abs(res) >= abs(val)) ? ((res - tot) + val) : ((val - tot) + res)
res = tot
end
res += err
if res < zero(res)
# sum cannot be negative (they are probabilities), it might be because of
# rounding errors
res = zero(res)
end
log_addend(a, b, sup, lα, lβ, lγ) + log(res)
else
a * (log(μ) + log1p(-η) - log(λ) - log1p(-ω * η))
end
end
# ≈ 99.05810564477483 => impossible
log_trans_prob(1000, 2158, 72.78045, 0.02, 0.01)
# increasing precision helps but it is too slow for applications
log_trans_prob(BigInt(1000), BigInt(2158), BigFloat(72.78045), BigFloat(0.02),
BigFloat(0.01))
</code></pre> | 2018-08-08 13:33:31.757000+00:00 | 2019-09-30 09:16:00.953000+00:00 | 2018-08-08 19:28:19.153000+00:00 | floating-point|julia|precision|numerical-analysis|numerical-stability | ['https://arxiv.org/abs/1909.10765', 'https://github.com/albertopessia/SimpleBirthDeathProcess.jl'] | 2 |
72,521,920 | <p>The sources of the article are available on arxiv, so you could simply check the .tex file which essentially boils down to</p>
<pre><code>\documentclass{revtex4-2}
\usepackage{newpxtext}
\begin{document}
In the era of Noisy Intermediate Scale Quantum Computers
\end{document}
</code></pre> | 2022-06-06 18:19:50.827000+00:00 | 2022-06-06 18:19:50.827000+00:00 | null | null | 72,521,732 | <p>I was not able to find the font-family used in this paper: <a href="https://arxiv.org/pdf/2204.00015.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2204.00015.pdf</a>.</p>
<p>Is it a default font or an external one?</p> | 2022-06-06 17:58:58.800000+00:00 | 2022-06-06 18:19:50.827000+00:00 | null | fonts|latex | [] | 0 |
40,820,385 | <p>Yes, and it should be shuffled at each iteration; see, e.g., this quote from {1}:</p>
<blockquote>
<p>As for any stochastic gradient descent method (including
the mini-batch case), it is important for efficiency of the estimator that each example or minibatch
be sampled approximately independently. Because
random access to memory (or even worse, to
disk) is expensive, a good approximation, called incremental
gradient (Bertsekas, 2010), is to visit the
examples (or mini-batches) in a fixed order corresponding
to their order in memory or disk (repeating
the examples in the same order on a second epoch, if
we are not in the pure online case where each example
is visited only once). In this context, it is safer if
the examples or mini-batches are first put in a random
order (to make sure this is the case, it could
be useful to first shuffle the examples). <strong>Faster convergence
has been observed if the order in which the
mini-batches are visited is changed for each epoch</strong>,
which can be reasonably efficient if the training set
holds in computer memory.</p>
</blockquote>
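<p>In practice that just means re-permuting the training set at the start of every epoch, for example with NumPy arrays <code>X</code> and <code>y</code> (assumed to fit in memory):</p>
<pre><code>import numpy as np

for epoch in range(10):                      # one new permutation per epoch
    order = np.random.permutation(len(X))
    X_epoch, y_epoch = X[order], y[order]
    # ... iterate over mini-batches of X_epoch / y_epoch here ...
</code></pre>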
<p>{1} Bengio, Yoshua. "<a href="http://arxiv.org/abs/1206.5533" rel="noreferrer">Practical recommendations for gradient-based training of deep architectures.</a>" Neural Networks: Tricks of the Trade. Springer Berlin Heidelberg, 2012. 437-478.</p> | 2016-11-26 16:20:28.873000+00:00 | 2016-11-26 16:20:28.873000+00:00 | null | null | 40,816,721 | <p>I want to train a neural network using backpropagation, and I have a data set like this:</p>
<p><a href="https://i.stack.imgur.com/N5SBr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N5SBr.png" alt="enter image description here"></a></p>
<p>Should I shuffle the input data?</p> | 2016-11-26 09:08:44.200000+00:00 | 2016-11-26 16:20:57.587000+00:00 | 2016-11-26 16:20:57.587000+00:00 | neural-network|shuffle | ['http://arxiv.org/abs/1206.5533'] | 1 |
48,856,527 | <p>As of QISKit v0.4.9, the <a href="https://www.qiskit.org/documentation/_autodoc/qiskit.extensions.standard.html#qiskit.extensions.standard.u3" rel="nofollow noreferrer"><code>u3()</code></a> function parametrizes an arbitrary single-qubit unitary gate <em>U(θ, φ, λ)</em> (for details, <a href="https://arxiv.org/pdf/1707.03429.pdf" rel="nofollow noreferrer">see</a> formula (2)). Obviously, you can use the <code>u3()</code> function to set a qubit to any value.</p>
<p>For example, this is how you can implement the <em>X</em>-gate and apply it to some qubit <code>qr[0]</code> via the <em>U3</em>-gate:</p>
<pre class="lang-python prettyprint-override"><code>u3(theta=math.pi, phi=0, lam=0, q=qr[0])
</code></pre> | 2018-02-18 21:02:06.450000+00:00 | 2018-02-18 21:02:06.450000+00:00 | null | null | 48,850,056 | <p>I am trying to implement the Quantum HHL algorithm on QISKit package of IBM on Python. I have tried searching the documentation for a method to initialize a qubit to a certain value and to create a new unitary gate with specified values. </p>
<p>In the documentation, I found <a href="https://github.com/QISKit/qiskit-sdk-py/blob/master/qiskit/_gate.py" rel="nofollow noreferrer">this</a>, which is the class of a Quantum Gate. I tried to make a new instance of this class but I couldn't because not much documentation has been done about the arguments to be passed while initializing the instance of the class. </p> | 2018-02-18 09:12:31.200000+00:00 | 2018-03-21 13:42:48.677000+00:00 | 2018-03-21 13:42:48.677000+00:00 | python-3.x|linear-algebra|quantum-computing|qiskit | ['https://www.qiskit.org/documentation/_autodoc/qiskit.extensions.standard.html#qiskit.extensions.standard.u3', 'https://arxiv.org/pdf/1707.03429.pdf'] | 2 |
58,966,228 | <p>Depends on the ensembling method; it's an active area of research that I suggest you look into, but I'll provide some examples below:</p>
<ul>
<li><strong>Dropout</strong>: trains parts of the model at any given iteration, thus effectively training a multi-NN ensemble</li>
<li><a href="https://www.wikiwand.com/en/Ensemble_averaging_(machine_learning)" rel="nofollow noreferrer"><strong>Weights averaging</strong></a>: train X models on X different splits of data to learn different features, average the early-stopped weights (requires advanted treatment)</li>
<li><a href="https://arxiv.org/abs/1907.08610" rel="nofollow noreferrer"><strong>Lookahead optimizer</strong></a>: automates the above by performing the averaging during training</li>
<li><strong>Parallel weak learners</strong>: run X models, but each model taking 1/X the time to process - which can be achieved by e.g. inserting a <code>strides=X</code> convolutional layer at input; best starting bet is at X=2, so you'll average two models' predictions at output, each prediction made in parallel (which can run <em>faster</em> than the original single model); see the sketch after this list</li>
</ul>
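<p>A sketch of that averaging step with <code>tf.keras</code> (the model and data names are placeholders for your own trained networks and preprocessed batch); wrapping two models behind one averaging layer means a single <code>predict()</code> call serves the whole ensemble:</p>
<pre><code>import tensorflow as tf

# model_a and model_b are assumed to be already-trained tf.keras models
# that accept the same input shape and output class probabilities
inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = tf.keras.layers.Average()([model_a(inputs), model_b(inputs)])
ensemble = tf.keras.Model(inputs, outputs)

probs = ensemble.predict(images)  # images: a batch of preprocessed inputs
</code></pre>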
<p>If you have a multi-core CPU, however, multi-model ensembling shouldn't pose much of a problem; as per the last bullet, you can run inference concurrently, so inference time shouldn't increase much.</p>
<hr>
<p><strong>More on parallelism</strong>: if a single model is large enough, CPU parallelism will no longer help - you'll also need to ensure multiple models can fit in memory (RAM). The alternative then is again a form of downsampling to cut computation.</p> | 2019-11-21 02:14:25.553000+00:00 | 2019-11-21 02:48:00.620000+00:00 | 2019-11-21 02:48:00.620000+00:00 | null | 58,966,086 | <p>I want to improve my ResNet model by creating an ensemble of X copies of this model, taking the X best ones I have trained. From what I've seen, a technique like bagging will take X times longer to classify an image, which is really not an option in my case. </p>
<p>Is there a way to create an ensemble without increasing the required classification time? Note that I don't care about increasing the training time, because it only needs to be done once, compared to the classification, which could be performed a very large number of times. </p> | 2019-11-21 01:53:58.763000+00:00 | 2019-11-21 02:48:00.620000+00:00 | null | python|deep-learning|ensemble-learning | ['https://www.wikiwand.com/en/Ensemble_averaging_(machine_learning)', 'https://arxiv.org/abs/1907.08610'] | 2
40,539,045 | <p>From <a href="https://stats.stackexchange.com/q/232032/12359">least amount of bits needed for single neuron</a>:</p>
<p>The following papers have studied this question (descending chronological order):</p>
<ul>
<li>Accelerating Deep Convolutional Networks using low-precision and sparsity. Ganesh Venkatesh, Eriko Nurvitadhi, Debbie Marr. 2016-10-02. <a href="https://arxiv.org/abs/1610.00324" rel="nofollow noreferrer">https://arxiv.org/abs/1610.00324</a></li>
<li>Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or −1
Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio
arxiv: <a href="http://arxiv.org/abs/1602.02830" rel="nofollow noreferrer">http://arxiv.org/abs/1602.02830</a></li>
<li>Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan
Deep Learning with Limited Numerical Precision <a href="https://arxiv.org/abs/1502.02551" rel="nofollow noreferrer">https://arxiv.org/abs/1502.02551</a></li>
<li>Courbariaux, Matthieu, Jean-Pierre David, and Yoshua Bengio. "Training deep neural networks with low precision multiplications." arXiv preprint arXiv:1412.7024 (2014). <a href="https://arxiv.org/abs/1412.7024" rel="nofollow noreferrer">https://arxiv.org/abs/1412.7024</a></li>
<li>Vanhoucke, Vincent, Andrew Senior, and Mark Z. Mao. "Improving the speed of neural networks on CPUs." (2011). <a href="https://scholar.google.com/scholar?cluster=14667574137314459294&hl=en&as_sdt=0,22" rel="nofollow noreferrer">https://scholar.google.com/scholar?cluster=14667574137314459294&hl=en&as_sdt=0,22</a> ; <a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37631.pdf" rel="nofollow noreferrer">https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37631.pdf</a></li>
</ul>
<p>Example from <em>Deep Learning with Limited Numerical Precision</em>:</p>
<p><a href="https://i.stack.imgur.com/jv6Kr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jv6Kr.png" alt="enter image description here"></a></p> | 2016-11-10 23:51:52.160000+00:00 | 2016-11-10 23:51:52.160000+00:00 | 2017-04-13 12:44:13.837000+00:00 | null | 40,537,503 | <p>Neural networks for image recognition can be really big.
There can be thousands of input/hidden neurons and millions of connections, which
can take up a lot of computer resources.</p>
<p>While <a href="https://en.wikipedia.org/wiki/Single-precision_floating-point_format" rel="nofollow noreferrer">float</a> being commonly 32bit and <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="nofollow noreferrer">double</a> 64bit in c++, they don't have much performance difference in speed yet using floats can save up some memory.</p>
<p>For a neural network that uses <a href="https://en.wikipedia.org/wiki/Logistic_function" rel="nofollow noreferrer">sigmoid</a> as its activation function,
which of the variables in the network could be floats instead of doubles,
to save memory without making the network unable to perform?</p>
<p>Inputs and outputs for the training/test data can definitely be floats,
because they do not require double precision: colors in an image can
only be in the range 0-255, and when normalized to a 0.0-1.0 scale, the unit value would be
1 / 255 = 0.0039~ </p>
<p><strong>1. what about hidden neurons output precision,
would it be safe to make them float too?</strong></p>
<p>A hidden neuron's output gets its value from the sum of the previous layer's outputs, each multiplied by its connection weight to the neuron currently being calculated; the sum is then passed into the activation function (currently sigmoid) to get the new output. The sum variable itself could be a double, since it can become a really large number when the network is big.</p>
<p><a href="https://i.stack.imgur.com/gzrsx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gzrsx.png" alt="enter image description here"></a></p>
<p><strong>2. what about connection weights, could they be floats?</strong></p>
<p>While inputs and neuron outputs are in the range 0-1.0 because of the sigmoid,
weights are allowed to be bigger than that.</p>
<hr>
<p><a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent" rel="nofollow noreferrer">Stochastic gradient descent</a> <a href="https://en.wikipedia.org/wiki/Backpropagation" rel="nofollow noreferrer">backpropagation</a> suffers on <a href="https://en.wikipedia.org/wiki/Vanishing_gradient_problem" rel="nofollow noreferrer">vanishing gradient problem</a> because of the activation function's derivative, I decided not to put this out as an a question of what precision should gradient variable be, feeling that float will simply not be precise enough, specially when network is deep.</p> | 2016-11-10 21:41:06.763000+00:00 | 2016-11-10 23:51:52.160000+00:00 | 2016-11-10 23:22:05.800000+00:00 | c++|machine-learning|neural-network|precision | ['https://stats.stackexchange.com/q/232032/12359', 'https://arxiv.org/abs/1610.00324', 'http://arxiv.org/abs/1602.02830', 'https://arxiv.org/abs/1502.02551', 'https://arxiv.org/abs/1412.7024', 'https://scholar.google.com/scholar?cluster=14667574137314459294&hl=en&as_sdt=0,22', 'https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37631.pdf', 'https://i.stack.imgur.com/jv6Kr.png'] | 8 |
40,538,415 | <blockquote>
<ol>
<li>what about hidden neurons output precision, would it be safe to make them float too?</li>
</ol>
</blockquote>
<p>Using <code>float32</code> everywhere is usually the safe first choice for most neural network applications. GPUs currently support only <code>float32</code>, so many practitioners stick to <code>float32</code> everywhere. For many applications, even <a href="https://developer.nvidia.com/cudnn-whatsnew" rel="nofollow noreferrer">16-bit floating point values</a> could be sufficient. Some extreme examples show that high-accuracy networks can be trained with as little as 2 bits per weight (<a href="https://arxiv.org/abs/1610.00324" rel="nofollow noreferrer">https://arxiv.org/abs/1610.00324</a>).</p>
<p>The complexity of the deep networks is usually limited not by the computation time, but by the amount of RAM on a single GPU and throughput of the memory bus. Even if you're working on CPU, using a smaller data type still helps to use the cache more efficiently. You're rarely limited by the machine datatype precision.</p>
<blockquote>
<p>since colors in image can only be in range of 0-255, </p>
</blockquote>
<p>You're doing it wrong. You force the network to learn the scale of your input data, when it is already known (unless you're using a custom weight initialization procedure). The better results are usually achieved when the input data is normalized to the range (-1, 1) or (0, 1) and the weights are initialized to have the average output of the layer at the same scale. This is a popular initialization technique: <a href="http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization" rel="nofollow noreferrer">http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization</a></p>
<p>If inputs are in the range [0, 255], then with an average input being ~ 100, and weights being ~ 1, the activation potential (the argument of the activation function) is going to be ~ 100×N, where N is the number of layer inputs, likely far away in the "flat" part of the sigmoid. So either you initialize your weights to be ~ 1/(100×N), or you scale your data and use any popular initialization method. Otherwise the network will have to spend a lot of training time just to bring the weights to this scale.</p>
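<p>Concretely, the scaling and initialization step might look like this (a NumPy illustration; <code>raw_pixels</code> stands in for your uint8 image data and the layer sizes are made up):</p>
<pre><code>import numpy as np

x = raw_pixels.astype(np.float32) / 255.0   # inputs now in (0, 1)
fan_in, fan_out = 784, 128                  # e.g. a 784 -> 128 dense layer
limit = np.sqrt(6.0 / (fan_in + fan_out))   # Glorot/Xavier uniform bound
W = np.random.uniform(-limit, limit, size=(fan_in, fan_out)).astype(np.float32)
</code></pre>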
<blockquote>
<p>Stochastic gradient descent backpropagation suffers on vanishing gradient problem because of the activation function's derivative, I decided not to put this out as an a question of what precision should gradient variable be, feeling that float will simply not be precise enough, specially when network is deep.</p>
</blockquote>
<p>It's much less a matter of machine arithmetic precision than of the scale of the outputs of each layer. In practice (a minimal sketch follows the list below):</p>
<ul>
<li>preprocess input data (normalize to (-1, 1) range)</li>
<li>if you have more than 2 layers, then don't use sigmoids, use rectified linear units instead</li>
<li>initialize weights carefully</li>
<li>use batch normalization</li>
</ul>
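<p>A minimal PyTorch sketch of that checklist (my own example; the layer sizes and constants are arbitrary, not something prescribed above):</p>
<pre><code>import torch
import torch.nn as nn

# inputs assumed to be raw pixel values in [0, 255]
x = torch.randint(0, 256, (64, 784)).float()
x = x / 255.0                      # preprocess: normalize the input data

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.BatchNorm1d(256),           # batch normalization
    nn.ReLU(),                     # rectified linear units instead of sigmoids
    nn.Linear(256, 10),
)

# initialize weights carefully (He/Kaiming init suits ReLU layers)
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, nonlinearity='relu')
        nn.init.zeros_(m.bias)

out = model(x)                     # float32 everywhere by default
</code></pre>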
<p><a href="https://www.youtube.com/watch?v=mzkOF4tULj8" rel="nofollow noreferrer">This video</a> should be helpful to learn these concepts if you're not familiar with them.</p> | 2016-11-10 22:52:47.210000+00:00 | 2016-11-10 23:07:42.210000+00:00 | 2016-11-10 23:07:42.210000+00:00 | null | 40,537,503 | <p>Neural networks for image recognition can be really big.
There can be thousands of input/hidden neurons and millions of connections, which
can take up a lot of computer resources.</p>
<p>While <a href="https://en.wikipedia.org/wiki/Single-precision_floating-point_format" rel="nofollow noreferrer">float</a> is commonly 32-bit and <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="nofollow noreferrer">double</a> 64-bit in C++, there is not much difference between them in speed, yet using floats can save some memory.</p>
<p>For a neural network that uses <a href="https://en.wikipedia.org/wiki/Logistic_function" rel="nofollow noreferrer">sigmoid</a> as its activation function,
if we could choose whether each variable in the network is float or double,
which ones could be float, to save memory without making the network unable to perform?</p>
<p>Inputs and outputs for training/test data can definitely be floats,
because they do not require double precision: colors in an image can
only be in the range 0-255, and when normalized to the 0.0-1.0 scale, the unit value would be
1 / 255 = 0.0039~.</p>
<p><strong>1. What about the hidden neurons' output precision:
would it be safe to make them float too?</strong></p>
<p>A hidden neuron's output gets its value from the sum of the previous layer's outputs, each multiplied by its connection weight to the neuron currently being calculated; that sum is then passed into the activation function (currently sigmoid) to get the new output. The sum variable itself could be double, since it can become a really large number when the network is big.</p>
<p><a href="https://i.stack.imgur.com/gzrsx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gzrsx.png" alt="enter image description here"></a></p>
<p><strong>2. what about connection weights, could they be floats?</strong></p>
<p>While inputs and neuron outputs are in the range 0-1.0 because of the sigmoid,
weights are allowed to be bigger than that.</p>
<hr>
<p><a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent" rel="nofollow noreferrer">Stochastic gradient descent</a> <a href="https://en.wikipedia.org/wiki/Backpropagation" rel="nofollow noreferrer">backpropagation</a> suffers on <a href="https://en.wikipedia.org/wiki/Vanishing_gradient_problem" rel="nofollow noreferrer">vanishing gradient problem</a> because of the activation function's derivative, I decided not to put this out as an a question of what precision should gradient variable be, feeling that float will simply not be precise enough, specially when network is deep.</p> | 2016-11-10 21:41:06.763000+00:00 | 2016-11-10 23:51:52.160000+00:00 | 2016-11-10 23:22:05.800000+00:00 | c++|machine-learning|neural-network|precision | ['https://developer.nvidia.com/cudnn-whatsnew', 'https://arxiv.org/abs/1610.00324', 'http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization', 'https://www.youtube.com/watch?v=mzkOF4tULj8'] | 4 |
55,604,150 | <p>(1) the author could be referencing methods of representing graphs in matrix form, <a href="https://en.wikipedia.org/wiki/Adjacency_matrix" rel="nofollow noreferrer">like this example</a></p>
<p>(2) not sure about "search trees", though if you can represent hash tables as a graph, then there are some methods to optimize them, <a href="https://arxiv.org/pdf/1711.08267.pdf" rel="nofollow noreferrer">like this example</a>.</p> | 2019-04-10 03:04:07.590000+00:00 | 2019-04-10 03:04:07.590000+00:00 | null | null | 7,431,535 | <p>I am reading an article on Hash tables. Here is the text snippet.</p>
<blockquote>
<p>A hash table is useful for any graph theory problem where the nodes
have real names instead of numbers. Here, as the input is read,
vertices are assigned integers from 1 onwards by order of appearance.
Again, the input is likely to have large groups of alphabetized
entries. For example, the vertices could be computers. Then if one
particular installation lists its computers as ibm1, ibm2, ibm3, . .
. , there could be a dramatic effect on efficiency if a search tree is
used.</p>
</blockquote>
<p>My questions on the above text:</p>
<ol>
<li><p>What does the author mean by "as the input is read, vertices are assigned integers from 1 onwards"? Don't we have to calculate a hash key for each input read? (A small sketch of my understanding follows this list.)</p></li>
<li><p>What does the author mean by "there could be a dramatic effect on efficiency if a search tree is used"?</p></li>
<li><p>How are hash tables helpful in graph theory problems when compared to search trees?</p></li>
</ol>
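<p>Here is a small sketch of my current understanding of point 1, i.e. assigning integers by order of appearance with a hash table (a toy example I wrote, not taken from the article):</p>
<pre><code># a dict plays the role of the hash table from vertex name to integer id
ids = {}
edges = [("ibm1", "ibm2"), ("ibm2", "ibm3"), ("ibm1", "ibm3")]

for u, v in edges:
    for name in (u, v):
        if name not in ids:
            ids[name] = len(ids) + 1   # vertices numbered from 1 by order of appearance

print(ids)   # {'ibm1': 1, 'ibm2': 2, 'ibm3': 3}
</code></pre>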
<p>Thanks!</p> | 2011-09-15 13:19:00.400000+00:00 | 2019-04-10 03:04:07.590000+00:00 | 2011-09-15 13:29:20.657000+00:00 | algorithm|graph-theory|discrete-mathematics | ['https://en.wikipedia.org/wiki/Adjacency_matrix', 'https://arxiv.org/pdf/1711.08267.pdf'] | 2 |
65,612,711 | <h4>TLDR; <code>256x256x32</code> refers to the layer's output shape rather than the layer itself.</h4>
<hr />
<p>There are many articles and posts out there explaining how convolution layers work. I'll try to answer your question without going into too many details, just focusing on <em>shapes</em>.</p>
<p>Assuming you are working with 2D convolution layers, your input and output will both be three-dimensional. That is, without considering the batch which would correspond to a 4th axis... Therefore, the shape of a convolution layer input will be <code>(c, h, w)</code> (or <code>(h, w, c)</code> depending on the framework) where <code>c</code> is the number of channels, <code>h</code> is the height of the input and <code>w</code> the width. You can see it as a <em><code>c</code>-channel</em> <code>h</code>x<code>w</code> image.
The most intuitive example of such input is the input of the first convolution layer of your convolutional neural network: most likely an image of size <code>h</code>x<code>w</code> with <code>c</code> channels for example <code>c=1</code> for greyscale or <code>c=3</code> for <em>RGB</em>...</p>
<p>What's important is that for all pixels of that input, the values on each channel give additional information about that pixel. Having three channels will give each pixel ('pixel' as in position in the 2D input space) richer content than having a single one, since each pixel will be encoded with three values (three channels) <em>vs.</em> a single one (one channel). This kind of intuition about what channels represent can be extrapolated to a higher number of channels. As we said, an input can have <code>c</code> channels.</p>
<p>Now going back to convolution layers, here is a <a href="https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/" rel="nofollow noreferrer">good visualization</a>. Imagine having a <em>5x5</em> 1-channel input. And a convolution layer consisting of a single <em>3x3</em> filter (i.e. <code>kernel_size=3</code>)</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: center;">input</th>
<th style="text-align: center;">filter</th>
<th style="text-align: center;">convolution</th>
<th style="text-align: center;">output</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">shape</td>
<td style="text-align: center;"><code>(1, 5, 5)</code></td>
<td style="text-align: center;"><code>(3, 3)</code></td>
<td style="text-align: center;"><a href="https://i.stack.imgur.com/b50vM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b50vM.png" alt="" /></a></td>
<td style="text-align: center;"><code>(3,3)</code></td>
</tr>
<tr>
<td style="text-align: left;">representation</td>
<td style="text-align: center;"><a href="https://i.stack.imgur.com/JxpG0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JxpG0.png" alt="" /></a></td>
<td style="text-align: center;"><a href="https://i.stack.imgur.com/4wuHM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4wuHM.png" alt="" /></a></td>
<td style="text-align: center;"><a href="https://i.stack.imgur.com/xpsJJ.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xpsJJ.gif" alt="" /></a></td>
<td style="text-align: center;"><a href="https://i.stack.imgur.com/cG15K.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cG15K.gif" alt="" /></a></td>
</tr>
</tbody>
</table>
</div>
<p>Now keep in mind the dimension of the output will depend on the <a href="https://deepai.org/machine-learning-glossary-and-terms/stride" rel="nofollow noreferrer"><em>stride</em></a> and <a href="https://deepai.org/machine-learning-glossary-and-terms/padding" rel="nofollow noreferrer"><em>padding</em></a> of the convolution layer. Here the shape of the output happens to be the same as the shape of the filter, but it does not necessarily have to be! Take an input shape of <code>(1, 6, 6)</code>: with the same convolution settings, you would end up with a shape of <code>(4, 4)</code> (which is different from the filter shape <code>(3, 3)</code>).</p>
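<p>For reference, the usual output-size rule behind these numbers can be written as a tiny helper (a sketch of mine, assuming square inputs and kernels):</p>
<pre><code>def conv_output_size(size, kernel_size, stride=1, padding=0):
    # standard formula for one spatial dimension of a 2D convolution
    return (size + 2 * padding - kernel_size) // stride + 1

print(conv_output_size(5, 3))   # 3, so a 5x5 input gives a 3x3 map
print(conv_output_size(6, 3))   # 4, so a 6x6 input gives a 4x4 map
</code></pre>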
<p>Also, something to note is that if the input had more than one channel: shape <code>(c, h, w)</code>, the filter would have to have the same number of channels. Each channel of the input would convolve with the corresponding channel of the filter and the results would be summed into a single 2D feature map. So you would have an <em>intermediate output</em> of <code>(c, 3, 3)</code>, which after summing over the channels, would leave us with <code>(1, 3, 3)=(3, 3)</code>. As a result, <strong>considering a convolution with a single filter</strong>, however many input channels there are, the output will always have a single channel.</p>
<p>From there what you can do is assemble multiple filters on the same layer. This means you define your layer as having <code>k</code> <em>3x3</em> filters. So a layer consists of <code>k</code> filters. For the computation of the output, the idea is simple: one filter gives a <code>(3, 3)</code> feature map, so <code>k</code> filters will give <em>k</em> <code>(3, 3)</code> feature maps. These maps are then stacked into what will be the channel dimension. Ultimately, you're left with an output shape of... <code>(k, 3, 3)</code>.</p>
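<p>You can verify this shape logic directly in PyTorch (my own snippet; the sizes are arbitrary):</p>
<pre><code>import torch
import torch.nn as nn

x = torch.randn(1, 3, 5, 5)            # a batch of one 3-channel 5x5 input
layer = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)   # k = 8 filters

print(layer(x).shape)                  # torch.Size([1, 8, 3, 3]): k output channels
print(layer.weight.shape)              # torch.Size([8, 3, 3, 3]): (k, c, k_h, k_w)
</code></pre>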
<p>Let <code>k_h</code> and <code>k_w</code>, be the kernel height and kernel width respectively. And <code>h'</code>, <code>w'</code> the height and width of one outputted feature map:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: center;">input</th>
<th style="text-align: center;">layer</th>
<th style="text-align: center;">output</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">shape</td>
<td style="text-align: center;"><code>(c, h, w)</code></td>
<td style="text-align: center;"><code>(k, c, k_h, k_w)</code></td>
<td style="text-align: center;"><code>(k, h', w')</code></td>
</tr>
<tr>
<td style="text-align: left;">description</td>
<td style="text-align: center;"><code>c</code>-channel <code>h</code>x<code>w</code> feature map</td>
<td style="text-align: center;"><code>k</code> filters of shape <code>(c, k_h, k_w)</code></td>
<td style="text-align: center;"><code>k</code>-channel <code>h'</code>x<code>w'</code> feature map</td>
</tr>
</tbody>
</table>
</div><hr />
<p>Back to your question:</p>
<blockquote>
<p>Layers have 3 dimensions like 256x256x32. What is this third number? I assume the first two numbers are the number of nodes but I don't know what the depth is.</p>
</blockquote>
<p>Convolution layers have four dimensions, but one of them is imposed by your input channel count. You can choose the size of your convolution kernel, and the number of filters. This number <s>will determine</s> <strong>is</strong> the number of channels of the output.</p>
<p><em>256x256</em> seems extremely high and most likely corresponds to the output shape of the feature map. On the other hand, <em>32</em> would be the number of channels of the output, which... as I tried to explain, is the number of filters in that layer. Generally speaking, the dimensions represented in visual diagrams of convolutional networks correspond to the intermediate output shapes, not the layer shapes.</p>
<p>As an example, take the <a href="https://arxiv.org/abs/1409.1556" rel="nofollow noreferrer">VGG</a> neural network:</p>
<p><a href="https://i.stack.imgur.com/4M3ZP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4M3ZP.png" alt="!" /></a>
<sub><em><strong>Very Deep Convolutional Networks for Large-Scale Image Recognition</strong></em></sub></p>
<p>Input shape for VGG is <code>(3, 224, 224)</code>, knowing that the result of the first convolution has shape <code>(64, 224, 224)</code> you can determine there is a total of <em>64</em> filters in that layer.</p>
<p><em>As it turns out the kernel size in VGG is <em>3x3</em>. So, here is a question for you: knowing there is a single bias parameter per filter, how many total parameters are in VGG's first convolution layer?</em></p> | 2021-01-07 12:51:17.337000+00:00 | 2021-09-07 20:14:14.200000+00:00 | 2021-09-07 20:14:14.200000+00:00 | null | 65,554,032 | <p>I've been reading about convolutional nets and I've programmed a few models myself. When I see visual diagrams of other models it shows each layer being smaller and deeper than the last ones. Layers have three dimensions like <code>256x256x32</code>. What is this third number? I assume the first two numbers are the number of nodes but I don't know what the depth is.</p> | 2021-01-03 19:29:31.510000+00:00 | 2021-09-07 20:14:14.200000+00:00 | 2021-09-07 20:12:16.643000+00:00 | machine-learning|deep-learning|pytorch|conv-neural-network | ['https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/', 'https://i.stack.imgur.com/b50vM.png', 'https://i.stack.imgur.com/JxpG0.png', 'https://i.stack.imgur.com/4wuHM.png', 'https://i.stack.imgur.com/xpsJJ.gif', 'https://i.stack.imgur.com/cG15K.gif', 'https://deepai.org/machine-learning-glossary-and-terms/stride', 'https://deepai.org/machine-learning-glossary-and-terms/padding', 'https://arxiv.org/abs/1409.1556', 'https://i.stack.imgur.com/4M3ZP.png'] | 10 |
26,950,244 | <p>How do you know that the degree distribution of your PPI network approximates a power-law? It could be any other fat-tailed distribution as well. Also, the <code>$xmin</code> value of the resulting power law fit indicates that the best fit is achieved by a lower cutoff at degree=178 and whatever happens at degrees below 178 is not approximated by the exponent that the method fitted.</p>
<p>If you want to create a random network that has <em>exactly</em> the same degree distribution as your graph, you can try using <code>degree.sequence.game</code> to generate one from scratch (make sure you use <code>method="vl"</code> or <code>method="simple.no.multiple"</code> if you want to avoid multiple edges between the same pair of nodes), or use <code>rewire.edges</code> to rewire the edges of your graph.</p>
<p>Re power laws, I recommend reading <a href="http://arxiv.org/abs/0706.1062.pdf" rel="nofollow">this paper</a> about power-law-like distributions in empirical data.</p> | 2014-11-15 20:13:19.120000+00:00 | 2014-11-15 20:13:19.120000+00:00 | null | null | 26,926,403 | <p>I have an undirected graph (a Protein-Protein Interaction network, PPi) for which I know the degree distribution approximates a power-law distribution. I want to create 1,000 random graphs replicating the number of nodes, edges and "similar" power-law outdegree distribution. </p>
<p>My real graph <code>g.lcc</code> has:</p>
<pre><code>> g.lcc
#IGRAPH UN-- 12551 166189 --
#+ attr: name (v/c), V3 (e/n)
</code></pre>
<p>What I did so far was:</p>
<pre><code>#Calculate the alpha for my distribution
alpha <- power.law.fit(degree(g.lcc, mode="out"))
#$continuous
#[1] FALSE
#$alpha
#[1] 4.529602
#$xmin
#[1] 178
#$logLik
#[1] -1123.405
#$KS.stat
#[1] 0.0446421
#$KS.p
#[1] 0.7825008
</code></pre>
<p>Then I run <code>static.power.law.game</code>, using the alpha generated with <code>power.law.fit</code> as <code>exp.out</code>:</p>
<pre><code>random.g <- static.power.law.game(12551, 166189, 4.53, exponent.in=-1, finite.size.correction=T)
</code></pre>
<p>However when I do that the two distributions are not even similar...</p>
<p>Any help??</p>
<p>P.S attached two images with real.ppi and random.g</p>
<p><img src="https://i.stack.imgur.com/ruoVx.png" alt="Random graph"> <img src="https://i.stack.imgur.com/vSHOD.png" alt="Real PPI"></p> | 2014-11-14 09:08:26.803000+00:00 | 2014-11-15 20:13:19.120000+00:00 | null | r|igraph | ['http://arxiv.org/abs/0706.1062.pdf'] | 1 |
24,716,876 | <p>Not sure this is what you are asking, but...</p>
<p>Communities are a form of clustering for networks. The basic idea is that nodes (vertices) in a given community are "more connected" to other nodes in that community, than they are to nodes in other communities. In your simple example, node 46 is connected to nodes 269 and 1854, but these three nodes are not connected to any other nodes, so they form a community. Similarly, nodes 11 and 911 are connected to each other, but not to any other nodes, so they form a community. The definition of "more connected" depends on the algorithm used to identify the communities (to do the clustering).</p>
<p><strong>EDIT</strong> Response to OP's comment.</p>
<p>From the documentation:</p>
<blockquote>
<p>This function tries to find densely connected subgraphs, also called
communities in a graph via random walks. The idea is that short random
walks tend to stay in the same community.</p>
</blockquote>
<p>Here is an example:</p>
<pre><code>library(igraph)
# create a sample graph
g <- graph.full(5)
for (i in 0:3) {
g <- g %du% graph.full(5)
g <- add.edges(g,c(5*i+1,5*(i+1)+1))
}
wc <- walktrap.community(g)
colors <- rainbow(max(membership(wc)))
set.seed(1) # for reproducible layout
plot(g,vertex.color=colors[membership(wc)],
layout=layout.fruchterman.reingold)
</code></pre>
<p><img src="https://i.stack.imgur.com/rCLQM.png" alt=""></p>
<p>In this example, each subgroup (community) is highly interconnected, and while the clusters are connected to each other, they are less so. So a random walk that starts in nodes 1-5 is more likely to circulate among those nodes than it is to get to any of the other nodes. Hence nodes 1-5 form a community.</p>
<p>The algorithm is described in detail <a href="http://arxiv.org/abs/physics/0512106" rel="nofollow noreferrer">here</a>.</p> | 2014-07-12 20:08:02.927000+00:00 | 2014-07-14 17:16:51.563000+00:00 | 2014-07-14 17:16:51.563000+00:00 | null | 24,716,746 | <p>Okay, so I have a file like this.</p>
<pre><code> 5 1211 11
18 25 11
12 281 11
522 569 11
46 269 11
46 1854 11
544 2324 11
544 1955 11
10 795 11
246 982 11
37 1500 11
2 1154 11
11 911 11
200 281 11
512 663 11
197 663 11
181 202 11
1 124 11
14 636 11
14 1616 11
578 1743 11
</code></pre>
<p>The first two columns represent the nodes (people) and the third column represents a particular pattern that they follow (the same one in this case) while they send messages over a time period. The nodes actually represent people who work in the same office. Now, I plotted a graph for them and <img src="https://i.stack.imgur.com/hu5mx.png" alt="This is the graph of the nodes following a particular pattern."></p>
<p>Now, I used the community command with the walktrap.community algorithm in R. I got the following graph: <img src="https://i.stack.imgur.com/KhWQz.png" alt="after applying communities"></p>
<p>I really wish to know what these groupings mean. I know they have been grouped by taking modularity into account. But what do these groupings actually represent?
I read about this on a lot of research papers but didnt find anything relevant. </p> | 2014-07-12 19:52:56.930000+00:00 | 2014-07-14 17:16:51.563000+00:00 | 2014-07-14 05:38:21.970000+00:00 | r|algorithm|graph|igraph | ['http://arxiv.org/abs/physics/0512106'] | 1 |
56,443,910 | <p>See Section 3.2 of <a href="https://arxiv.org/pdf/1802.05365.pdf" rel="nofollow noreferrer">the original paper</a>. </p>
<blockquote>
<p>ELMo is a task specific combination of the intermediate layer representations in the biLM. For each token, a L-layer biLM computes a set of 2L+ 1representations</p>
</blockquote>
<p>Previously in Section 3.1, it is said that:</p>
<blockquote>
<p>Recent state-of-the-art neural language models compute a context-independent token representation (via token embeddings or a CNN over characters) then pass it through L layers of forward LSTMs. At each position k, each LSTM layer outputs a context-dependent representation. The top layer LSTM output is used to predict the next token with a Softmax layer.</p>
</blockquote>
<p>To answer your question, the representations are these L LSTM-based context-dependent representations.</p> | 2019-06-04 12:28:45.580000+00:00 | 2019-06-04 12:28:45.580000+00:00 | null | null | 54,947,258 | <p>I am trying my hand at ELMo by simply using it as part of a larger PyTorch model. A basic example is given <a href="https://github.com/allenai/allennlp/blob/master/tutorials/how_to/elmo.md#using-elmo-as-a-pytorch-module-to-train-a-new-model" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>This is a torch.nn.Module subclass that computes any number of ELMo
representations and introduces trainable scalar weights for each. For
example, this code snippet computes two layers of representations (as
in the SNLI and SQuAD models from our paper):</p>
</blockquote>
<pre><code>from allennlp.modules.elmo import Elmo, batch_to_ids
options_file = "https://s3-us-west-2.amazonaws.com/allennlp/models/elmo/2x4096_512_2048cnn_2xhighway/elmo_2x4096_512_2048cnn_2xhighway_options.json"
weight_file = "https://s3-us-west-2.amazonaws.com/allennlp/models/elmo/2x4096_512_2048cnn_2xhighway/elmo_2x4096_512_2048cnn_2xhighway_weights.hdf5"
# Compute two different representation for each token.
# Each representation is a linear weighted combination for the
# 3 layers in ELMo (i.e., charcnn, the outputs of the two BiLSTM))
elmo = Elmo(options_file, weight_file, 2, dropout=0)
# use batch_to_ids to convert sentences to character ids
sentences = [['First', 'sentence', '.'], ['Another', '.']]
character_ids = batch_to_ids(sentences)
embeddings = elmo(character_ids)
# embeddings['elmo_representations'] is length two list of tensors.
# Each element contains one layer of ELMo representations with shape
# (2, 3, 1024).
# 2 - the batch size
# 3 - the sequence length of the batch
# 1024 - the length of each ELMo vector
</code></pre>
<p>My question concerns the 'representations'. Can you compare them to normal word2vec output layers? You can choose how <em>many</em> representations ELMo will give back (increasing an n-th dimension), but what is the difference between these generated representations, and what is their typical use? </p>
<p>To give you an idea, for the above code, <code>embeddings['elmo_representations']</code> returns a list of two items (the two representation layers) but they are identical.</p>
<p>In short, how can one define the 'representations' in ELMo? </p> | 2019-03-01 15:04:37.960000+00:00 | 2019-06-04 12:28:45.580000+00:00 | 2019-06-01 16:03:42.393000+00:00 | python|nlp|pytorch|allennlp|elmo | ['https://arxiv.org/pdf/1802.05365.pdf'] | 1 |
65,427,776 | <p><strong>Is the implementation and understanding of the dense synthesizer correct?</strong></p>
<p>Not exactly, <code>linear1 = nn.Linear(d,d)</code> according to the paper and not <code>(d,l)</code>.
Of course this does not work if <code>X.shape = (l,d)</code> according to matrix multiplication rules.</p>
<p>This is because :</p>
<p><a href="https://i.stack.imgur.com/jNuyj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jNuyj.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/HSMW7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HSMW7.png" alt="enter image description here" /></a></p>
<p>So <code>F</code> is applied to each <code>Xi</code> in X for i in <code>[1,l]</code></p>
<p>The resulting matrix <code>B</code> is then passed to the softmax function and multiplied by <code>G(x)</code>.
So you'd have to modify your code to sequentially process the input then use the returned matrix to compute <code>Y</code>.</p>
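<p>A possible corrected sketch along those lines (this is my reading of the paper, not a definitive implementation; treat the exact projection shapes as an assumption to check against the paper):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseSynthesizer(nn.Module):
    def __init__(self, l, d):
        super().__init__()
        self.linear1 = nn.Linear(d, d)   # F's first projection: d -> d
        self.linear2 = nn.Linear(d, l)   # second projection maps each position to l scores

    def forward(self, x, v):
        # x: (l, d); nn.Linear acts on the last dimension, so F is applied to each X_i separately
        b = self.linear2(F.relu(self.linear1(x)))      # (l, l)
        return torch.matmul(F.softmax(b, dim=-1), v)   # (l, l) x (l, d) -> (l, d)
</code></pre>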
<p><strong>how is that different from a multi-layered perceptron that takes in two different inputs and makes uses of it at different point in the forward propagation?</strong></p>
<p>To understand, we need to put things into context: the idea of introducing an attention mechanism was first described in the context of an encoder-decoder architecture: <a href="https://arxiv.org/pdf/1409.0473.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1409.0473.pdf</a></p>
<p>The core idea is to allow the model to have control over how the context vector from the encoder is retrieved using a neural network instead of relying solely on the last encoded state :</p>
<p><a href="https://i.stack.imgur.com/LQYTm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LQYTm.png" alt="enter image description here" /></a></p>
<p>see this <a href="https://stats.stackexchange.com/questions/421935/what-exactly-are-keys-queries-and-values-in-attention-mechanisms">post</a> for more detail.</p>
<p>The Transformer introduced the idea of using "Multi-Head Attention" (see the graph below) to reduce the computational burden and focus solely on the attention mechanism itself (see the same <a href="https://stats.stackexchange.com/questions/421935/what-exactly-are-keys-queries-and-values-in-attention-mechanisms">post</a>).</p>
<p><a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1706.03762.pdf</a></p>
<p><a href="https://i.stack.imgur.com/tgulR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tgulR.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/uxrkF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uxrkF.png" alt="enter image description here" /></a></p>
<p><em><strong>So where does the Dense Synthesizer fit into all of that?</strong></em></p>
<p>It simply replaces the Dot product (as illustrated in the first pictures in your post) by <code>F(.)</code>. If you replace what's inside the <code>softmax</code> by <code>F</code> you get the equation for <code>Y</code></p>
<p><a href="https://i.stack.imgur.com/YSpTL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YSpTL.png" alt="enter image description here" /></a></p>
<p><strong>Conclusion</strong></p>
<p>This is an MLP, but applied stepwise (position by position) to the input in the context of sequence processing.</p>
<p>Thank you</p> | 2020-12-23 16:39:16.913000+00:00 | 2020-12-23 16:44:49.160000+00:00 | 2020-12-23 16:44:49.160000+00:00 | null | 61,630,765 | <p>I’m trying to understand the Synthesizer paper (<a href="https://arxiv.org/pdf/2005.00743.pdf" rel="noreferrer">https://arxiv.org/pdf/2005.00743.pdf</a> 1) and there’s a description of the dense synthesizer mechanism that should replace the traditional attention model as described in the Transformer architecture.</p>
<p><a href="https://i.stack.imgur.com/DF09Y.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DF09Y.png" alt="enter image description here"></a></p>
<p>The <strong>Dense Synthesizer</strong> is described as such:</p>
<p><a href="https://i.stack.imgur.com/EK7I3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/EK7I3.png" alt="enter image description here"></a></p>
<p>So I tried to implement the layer and it looks like this but I’m not sure whether I’m getting it right:</p>
<pre><code>class DenseSynthesizer(nn.Module):
def __init__(self, l, d):
super(DenseSynthesizer, self).__init__()
self.linear1 = nn.Linear(d, l)
self.linear2 = nn.Linear(l, l)
def forward(self, x, v):
# Equation (1) and (2)
# Shape: l x l
b = self.linear2(F.relu(self.linear1(x)))
# Equation (3)
# [l x l] x [l x d] -> [l x d]
return torch.matmul(F.softmax(b), v)
</code></pre>
<p>Usage:</p>
<pre><code>l, d = 4, 5
x, v = torch.rand(l, d), torch.rand(l, d)
synthesis = DenseSynthesizer(l, d)
synthesis(x, v)
</code></pre>
<p>Example:</p>
<p>x and v are tensors:</p>
<pre><code>x = tensor([[0.0844, 0.2683, 0.4299, 0.1827, 0.1188],
[0.2793, 0.0389, 0.3834, 0.9897, 0.4197],
[0.1420, 0.8051, 0.1601, 0.3299, 0.3340],
[0.8908, 0.1066, 0.1140, 0.7145, 0.3619]])
v = tensor([[0.3806, 0.1775, 0.5457, 0.6746, 0.4505],
[0.6309, 0.2790, 0.7215, 0.4283, 0.5853],
[0.7548, 0.6887, 0.0426, 0.1057, 0.7895],
[0.1881, 0.5334, 0.6834, 0.4845, 0.1960]])
</code></pre>
<p>And passing through a forward pass through the dense synthesis, it returns:</p>
<pre><code>>>> synthesis = DenseSynthesizer(l, d)
>>> synthesis(x, v)
tensor([[0.5371, 0.4528, 0.4560, 0.3735, 0.5492],
[0.5426, 0.4434, 0.4625, 0.3770, 0.5536],
[0.5362, 0.4477, 0.4658, 0.3769, 0.5468],
[0.5430, 0.4461, 0.4559, 0.3755, 0.5551]], grad_fn=<MmBackward>)
</code></pre>
<p><strong>Is the implementation and understanding of the dense synthesizer correct?</strong></p>
<p>Theoretically, <strong>how is that different from a multi-layered perceptron that takes in two different inputs and makes uses of it at different point in the forward propagation?</strong></p> | 2020-05-06 08:33:01.737000+00:00 | 2020-12-23 16:44:49.160000+00:00 | null | python|deep-learning|neural-network|pytorch|transformer-model | ['https://i.stack.imgur.com/jNuyj.png', 'https://i.stack.imgur.com/HSMW7.png', 'https://arxiv.org/pdf/1409.0473.pdf', 'https://i.stack.imgur.com/LQYTm.png', 'https://stats.stackexchange.com/questions/421935/what-exactly-are-keys-queries-and-values-in-attention-mechanisms', 'https://stats.stackexchange.com/questions/421935/what-exactly-are-keys-queries-and-values-in-attention-mechanisms', 'https://arxiv.org/pdf/1706.03762.pdf', 'https://i.stack.imgur.com/tgulR.png', 'https://i.stack.imgur.com/uxrkF.png', 'https://i.stack.imgur.com/YSpTL.png'] | 10 |
45,412,505 | <p>Are you sure you enforce a Lipschitz constraint as done in the WGAN paper?</p>
<p>It is done in their paper by having a strong limit on the weights of the discriminator.</p>
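<p>A rough TensorFlow 1.x sketch of that weight clipping (the 0.01 clip value is the paper's choice; the variables here are stand-ins for the critic variables in your code):</p>
<pre><code>import tensorflow as tf

# stand-ins for the critic's trainable variables; in your code this would be d_vars
d_vars = [tf.Variable(tf.random_normal([5, 5])), tf.Variable(tf.zeros([5]))]

# op that clamps every critic weight into [-0.01, 0.01], as in the original WGAN
clip_d_vars = [v.assign(tf.clip_by_value(v, -0.01, 0.01)) for v in d_vars]

# in the training loop, run the critic update first, then the clipping op:
# sess.run(disc_train_op, feed_dict=...)
# sess.run(clip_d_vars)
</code></pre>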
<p><a href="https://arxiv.org/pdf/1701.07875.pdf" rel="nofollow noreferrer">Original WGAN paper</a></p> | 2017-07-31 09:47:53.737000+00:00 | 2017-07-31 09:47:53.737000+00:00 | null | null | 45,410,300 | <p>Here is my code :</p>
<pre><code>DEPTH = 64
OUTPUT_SIZE = 28
batch_size = 16
def Discriminator(name,inputs):
with tf.variable_scope(name):
output = tf.reshape(inputs, [-1, 28, 28, 1])
output1 = conv2d('d_conv_1', output, ksize=5, out_dim=DEPTH)
output2 = lrelu('d_lrelu_1', output1)
output3 = conv2d('d_conv_2', output2, ksize=5, out_dim=2*DEPTH)
output4 = lrelu('d_lrelu_2', output3)
output5 = conv2d('d_conv_3', output4, ksize=5, out_dim=4*DEPTH)
output6 = lrelu('d_lrelu_3', output5)
# output7 = conv2d('d_conv_4', output6, ksize=5, out_dim=8*DEPTH)
# output8 = lrelu('d_lrelu_4', output7)
chanel = output6.get_shape().as_list()
output9 = tf.reshape(output6, [batch_size, chanel[1]*chanel[2]*chanel[3]])
output0 = fully_connected('d_fc', output9, 1)
return output
</code></pre>
<p>The generator code is:</p>
<pre><code>def generator(name):
with tf.variable_scope(name):
noise = tf.random_normal([batch_size, 100])#.astype('float32')
# noise = tf.constant(np.random.normal(size=(128, 128)).astype('float32'))
noise = tf.reshape(noise, [batch_size, 100], 'noise')
output = fully_connected('g_fc_1', noise, 2*2*8*DEPTH)
output = tf.reshape(output, [batch_size, 2, 2, 8*DEPTH], 'g_conv')
output = deconv2d('g_deconv_1', output, ksize=5, outshape=[batch_size, 4, 4, 4*DEPTH])
output = tf.nn.relu(output)
output = tf.reshape(output, [batch_size, 4, 4, 4*DEPTH])
output = deconv2d('g_deconv_2', output, ksize=5, outshape=[batch_size, 7, 7, 2* DEPTH])
output = tf.nn.relu(output)
output = deconv2d('g_deconv_3', output, ksize=5, outshape=[batch_size, 14, 14, DEPTH])
output = tf.nn.relu(output)
output = deconv2d('g_deconv_4', output, ksize=5, outshape=[batch_size, OUTPUT_SIZE, OUTPUT_SIZE, 1])
# output = tf.nn.relu(output)
output = tf.nn.sigmoid(output)
return tf.reshape(output,[-1,784])
# the train code is as follows:
real_data = tf.placeholder(tf.float32, shape=[batch_size,784])
with tf.variable_scope(tf.get_variable_scope()):
fake_data = generator('gen')
disc_real = Discriminator('dis_r',real_data)
disc_fake = Discriminator('dis_f',fake_data)
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if 'd_' in var.name]
g_vars = [var for var in t_vars if 'g_' in var.name]
'''计算损失'''
gen_cost = tf.reduce_mean(disc_fake)
disc_cost = -tf.reduce_mean(disc_fake) + tf.reduce_mean(disc_real)
alpha = tf.random_uniform(
shape=[batch_size, 1],minval=0.,maxval=1.)
differences = fake_data - real_data
interpolates = real_data + (alpha * differences)
gradients = tf.gradients(Discriminator('dis',interpolates), [interpolates])[0]
slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1]))
gradient_penalty = tf.reduce_mean((slopes - 1.) ** 2)
disc_cost += LAMBDA * gradient_penalty
gen_train_op = tf.train.AdamOptimizer(
learning_rate=1e-4,beta1=0.5,beta2=0.9).minimize(gen_cost,var_list=g_vars)
disc_train_op = tf.train.AdamOptimizer(
learning_rate=1e-4,beta1=0.5,beta2=0.9).minimize(disc_cost,var_list=d_vars)
</code></pre>
<p>And the error log is:</p>
<p><img src="https://i.stack.imgur.com/pEhWY.png" alt="error_log"></p>
<p>Obviously ,the code is not work ,it diverge quickly, this problem confused me a long time, I really would like to know the origin of this problem.</p> | 2017-07-31 07:58:10.507000+00:00 | 2018-04-26 14:48:00.930000+00:00 | 2017-07-31 09:21:26.843000+00:00 | tensorflow | ['https://arxiv.org/pdf/1701.07875.pdf'] | 1 |
70,368,657 | <p>Vijaya, from what I know, there is only one public library that does order preserving hierarchical clustering (<a href="https://bitbucket.org/Bakkelund/ophac/wiki/Home" rel="nofollow noreferrer">ophac</a>), but that will only return a trivial hierarchy if your data is totally ordered (which is the case with the sections of a book).</p>
<p>There is a theory that may offer a theoretical reply to your answer, but no industry-strength algorithms currently exist: <a href="https://arxiv.org/abs/2109.04266" rel="nofollow noreferrer">https://arxiv.org/abs/2109.04266</a>. I have an implementation of this theory that can deal with up to 20 elements, so if this is interesting, give me a hint, and I will share the code.</p> | 2021-12-15 18:30:21.260000+00:00 | 2021-12-15 18:30:21.260000+00:00 | null | null | 70,069,193 | <p>Is there any Hierarchical Agglomerative Clustering implementation (in Python) available that preserves the order of data points? For example, I want the output something like this.</p>
<pre><code>(((seg1, seg2), (seg3, seg4)), seg5)
</code></pre>
<p>but not like this</p>
<pre><code>(((seg1, seg5), (seg2, seg3)), seg4)
</code></pre>
<p>E.g., <strong>Actual output</strong> with existing implementation
<a href="https://i.stack.imgur.com/gvVpO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gvVpO.png" alt="Actual Output with existing implementation" /></a></p>
<p><strong>Expected output</strong> (any implementation?)
<a href="https://i.stack.imgur.com/dE2so.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dE2so.png" alt="enter image description here" /></a></p> | 2021-11-22 16:32:20.857000+00:00 | 2022-09-20 08:35:57.610000+00:00 | null | python|hierarchical-clustering | ['https://bitbucket.org/Bakkelund/ophac/wiki/Home', 'https://arxiv.org/abs/2109.04266'] | 2 |
37,113,415 | <p>The idea (that seems to originate from a <a href="http://www.backalleycoder.com/2013/03/18/cross-browser-event-based-element-resize-detection/" rel="nofollow noreferrer">blog post on backalleycoder.com</a>) is that you can use</p>
<ul>
<li><code>onresize</code> on the element itself in IE <=10.</li>
<li><code>scroll</code> events on a specially crafted <code><div></code> appended as a child of the element in most other browsers</li>
<li>or <code>onresize</code> on <code><object style="position: absolute; top: 0; left: 0; height: 100%; width: 100%;"></code> appended as a child of the element.</li>
</ul>
<p>There's a number of libraries that implement some or all of these approaches:</p>
<ul>
<li><a href="https://github.com/sdecima/javascript-detect-element-resize" rel="nofollow noreferrer">sdecima/javascript-detect-element-resize</a> - uses <code>onresize</code> in IE + <code>scroll</code> in other browsers. Small (5KB unminified), but not actively maintained ATM.</li>
<li><a href="https://github.com/wnr/element-resize-detector" rel="nofollow noreferrer">wnr/element-resize-detector</a>: uses the same idea, but is more actively maintained, also available as an NPM package. It has the nicest readme, explaining the <a href="https://github.com/wnr/element-resize-detector#caveats" rel="nofollow noreferrer">caveats</a> and explaining that the resize-on-<code><object></code> technique is only truly needed in certain older browsers. The author even wrote a paper about this (<a href="http://arxiv.org/pdf/1511.01223v1.pdf" rel="nofollow noreferrer">Modular Responsive Web Design using Element Queries</a>). The library is quite larger (16KB minified!) (probably because it's written in a <em>less ad-hoc</em> way -- for example it has a "batchProcessorMaker" to allow running a number of functions at a later stage.)</li>
<li><a href="http://marcj.github.io/css-element-queries/" rel="nofollow noreferrer">http://marcj.github.io/css-element-queries/</a> provides a <code>ResizeSensor</code>
class that appears to use the same <code>scroll</code> technique (from another popular SO question: <a href="https://stackoverflow.com/questions/6492683/how-to-detect-divs-dimension-changed">How to detect DIV's dimension changed?</a>). The rest of the library has a different purpose, though.</li>
</ul>
<p>Note that older questions here on SO talk about:</p>
<ul>
<li><a href="http://meetselva.github.io/" rel="nofollow noreferrer"><code>attrchange</code></a> in jQuery, DOMAttrModified, etc. - that won't help you with detecting height changes that do not involve changing the <code>height</code> attribute</li>
<li>DOMSubtreeModified, MutationObserver - can be used to detect changes in <em>content</em> (which is one of the reasons that can lead to height changes -- the other being layout changes, such as a change to the width)</li>
<li>older solutions, including some jQuery plugins that use polling to accomplish this -- that's not a recommended solution, because the page will waste the user's battery even when nothing is resized.</li>
</ul>
<p>[edit] <a href="https://github.com/WICG/ResizeObserver/blob/master/explainer.md" rel="nofollow noreferrer">ResizeObserver</a> will allow detecting this without hacks in newer browsers.</p>
<p>I am using <code>getBoundingClientRect()</code> to get the height currently, but I don't get notified of any further changes.</p>
<p>I also tried <code>DOMAttrModified</code> and <code>MutationObserver</code> to detect changes in the data, but both of them are not universally supported in all browsers.</p>
<p>What's the best way to get notifications about changes to an element's height?</p> | 2016-05-09 10:13:10.843000+00:00 | 2016-05-13 01:03:46.527000+00:00 | 2016-05-10 02:09:09.537000+00:00 | javascript | ['http://www.backalleycoder.com/2013/03/18/cross-browser-event-based-element-resize-detection/', 'https://github.com/sdecima/javascript-detect-element-resize', 'https://github.com/wnr/element-resize-detector', 'https://github.com/wnr/element-resize-detector#caveats', 'http://arxiv.org/pdf/1511.01223v1.pdf', 'http://marcj.github.io/css-element-queries/', 'https://stackoverflow.com/questions/6492683/how-to-detect-divs-dimension-changed', 'http://meetselva.github.io/', 'https://github.com/WICG/ResizeObserver/blob/master/explainer.md'] | 9 |
54,454,207 | <p><em>I will start by referencing the paper <a href="https://nicholas.carlini.com/papers/2017_sp_nnrobustattacks.pdf" rel="nofollow noreferrer">Towards Evaluating the Robustness
of Neural Networks</a> by Carlini, page 2, last paragraph: <strong>the adversary has complete access to a neural network, including the architecture and all parameters, and can use this in a white-box manner.</strong> This is a conservative and realistic assumption: prior work has shown it is possible to train a substitute model given black-box access to a target model, and by attacking the substitute model, we can then transfer these attacks to the target model.</em></p>
<p><strong><em>This makes the following two definitions hold:</em></strong></p>
<p><strong>White-Box:</strong> Attackers have full knowledge of the ML algorithm and ML model (i.e., parameters and hyperparameters), architecture, etc. The figure below shows an example of how white-box attacks work:</p>
<p><a href="https://i.stack.imgur.com/yr0mO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yr0mO.jpg" alt="GAN white-box model architecture"></a></p>
<p><strong>Black-Box:</strong> Attackers know almost nothing about the ML system
(perhaps the number of features or the ML algorithm). The figure below shows the steps as an example:</p>
<p><a href="https://i.stack.imgur.com/bgdjk.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bgdjk.jpg" alt="GAN black-box model architecture"></a></p>
<p>Section 3.4, DEMONSTRATION OF BLACK BOX ADVERSARIAL ATTACK IN THE PHYSICAL WORLD, from the paper <a href="https://arxiv.org/pdf/1607.02533.pdf" rel="nofollow noreferrer">ADVERSARIAL EXAMPLES IN THE PHYSICAL WORLD</a> by Kurakin, 2017, states the following.
<strong>Paragraph 1, page 9, describes the white-box meaning:</strong> <strong>The experiments described above study physical adversarial examples under the assumption that adversary has full access to the model (i.e. the adversary knows the architecture, model weights, etc . . . ).</strong> </p>
<p><strong>It then follows with an explanation of the black-box meaning:</strong> <strong>However, the black box scenario, in which the attacker does not have access to the model, is a more realistic model of many security threats.</strong> </p>
<blockquote>
<p><strong>Conclusion:</strong> In order to define/label/classify the algorithms as white-box or black-box, you just change the settings of the attack, i.e. how much access to the model it assumes. </p>
</blockquote>
<p><strong>Note:</strong> I don't classify each algorithm, because some of the algorithms support only white-box settings or only black-box settings in the <a href="https://github.com/tensorflow/cleverhans" rel="nofollow noreferrer">cleverhans</a> library, but this is a good start for you (if you do research, then you need to check every single paper listed in the documentation to understand the GAN so you can generate adversarial examples on your own).</p>
<p>Resources used and interesting papers:</p>
<ol>
<li>BasicIterativeMethod: <a href="https://arxiv.org/pdf/1607.02533.pdf" rel="nofollow noreferrer">BasicIterativeMethod</a></li>
<li>CarliniWagnerL2: <a href="https://arxiv.org/pdf/1608.04644.pdf" rel="nofollow noreferrer">CarliniWagnerL2</a></li>
<li>FastGradientMethod: <a href="https://arxiv.org/pdf/1412.6572.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1412.6572.pdf</a> <a href="https://arxiv.org/pdf/1611.01236.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.01236.pdf</a> <a href="https://arxiv.org/pdf/1611.01236.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.01236.pdf</a></li>
<li>SaliencyMapMethod: <a href="https://arxiv.org/pdf/1511.07528.pdf" rel="nofollow noreferrer">SaliencyMapMethod</a></li>
<li>VirtualAdversarialMethod: <a href="https://arxiv.org/pdf/1507.00677.pdf" rel="nofollow noreferrer">VirtualAdversarialMethod</a></li>
<li>Fgsm Fast Gradient Sign Method: <a href="https://arxiv.org/pdf/1611.01236.pdf" rel="nofollow noreferrer">Fgsm Fast Gradient Sign Method</a></li>
<li>Jsma Jacobian-based saliency map approach:<a href="https://arxiv.org/pdf/1511.07528.pdf" rel="nofollow noreferrer">JSMA in white-box setting</a></li>
<li>Vatm virtual adversarial training: <a href="https://arxiv.org/pdf/1507.00677.pdf" rel="nofollow noreferrer">Vatm virtual adversarial training</a> </li>
<li><a href="https://ece.umd.edu/~danadach/Security_Fall_17/aml.pdf" rel="nofollow noreferrer">Adversarial Machine Learning
—
An Introduction
With slides from:
Binghui
Wang</a></li>
<li><a href="https://github.com/tensorflow/cleverhans/blob/master/cleverhans_tutorials/mnist_blackbox.py" rel="nofollow noreferrer">mnist_blackbox</a></li>
<li><a href="https://github.com/tensorflow/cleverhans/blob/master/cleverhans_tutorials/mnist_tutorial_cw.py" rel="nofollow noreferrer">mnist_tutorial_cw</a></li>
</ol> | 2019-01-31 05:57:19.727000+00:00 | 2019-01-31 07:37:27.210000+00:00 | 2019-01-31 07:37:27.210000+00:00 | null | 54,432,756 | <p>I use <a href="https://github.com/tensorflow/cleverhans" rel="nofollow noreferrer">https://github.com/tensorflow/cleverhans</a> to generate adversarial images, but the categories of attack algoritm is not provided.</p>
<p>All the attack algorithm codes are listed here: <a href="https://github.com/tensorflow/cleverhans/tree/master/cleverhans/attacks" rel="nofollow noreferrer">https://github.com/tensorflow/cleverhans/tree/master/cleverhans/attacks</a></p>
<p>I don't know which of these attack algorithms are grey-box attacks and which are white-box or black-box attack algorithms.</p>
<p>This is because I need the category of each algorithm to research attack defense algorithms.
The github page doesn't provide any information about this. How should I know?</p> | 2019-01-30 02:57:21.793000+00:00 | 2019-01-31 07:37:27.210000+00:00 | 2019-01-30 14:04:34.383000+00:00 | python|tensorflow|google-colaboratory|cleverhans | ['https://nicholas.carlini.com/papers/2017_sp_nnrobustattacks.pdf', 'https://i.stack.imgur.com/yr0mO.jpg', 'https://i.stack.imgur.com/bgdjk.jpg', 'https://arxiv.org/pdf/1607.02533.pdf', 'https://github.com/tensorflow/cleverhans', 'https://arxiv.org/pdf/1607.02533.pdf', 'https://arxiv.org/pdf/1608.04644.pdf', 'https://arxiv.org/pdf/1412.6572.pdf', 'https://arxiv.org/pdf/1611.01236.pdf', 'https://arxiv.org/pdf/1611.01236.pdf', 'https://arxiv.org/pdf/1511.07528.pdf', 'https://arxiv.org/pdf/1507.00677.pdf', 'https://arxiv.org/pdf/1611.01236.pdf', 'https://arxiv.org/pdf/1511.07528.pdf', 'https://arxiv.org/pdf/1507.00677.pdf', 'https://ece.umd.edu/~danadach/Security_Fall_17/aml.pdf', 'https://github.com/tensorflow/cleverhans/blob/master/cleverhans_tutorials/mnist_blackbox.py', 'https://github.com/tensorflow/cleverhans/blob/master/cleverhans_tutorials/mnist_tutorial_cw.py'] | 18 |
72,958,077 | <p>Assuming you mean the 'Paragraph Vectors' algorithm, which is often called <code>Doc2Vec</code>, any textual dataset is a potential test/demo dataset.</p>
<p>The original papers by the creators of <code>Doc2Vec</code> showed results from applying it to:</p>
<ul>
<li>movie reviews</li>
<li>search engine summary snippets</li>
<li>Wikipedia articles</li>
<li>scientific articles from Arxiv</li>
</ul>
<p>People have also used it on…</p>
<ul>
<li>titles of articles/books</li>
<li>abstracts of larger articles</li>
<li>full news articles or scientific papers</li>
<li>tweets</li>
<li>blogposts or social media posts</li>
<li>resumes</li>
</ul>
<p>When learning, it's best to pick very simple, common datasets when you're 1st starting, and then larger datasets that you somewhat understand or are related to your areas of interest – if you don't already have a sufficient project-related dataset.</p>
<p>Note that the algorithm, like others in the [something]2vec family of algorithms, works best with lots of varied training data – many tens of thousands of unique words each with many contrasting usage examples, over many tens of thousands (or many more) of documents.</p>
<p>If you crank the <code>vector_size</code> way down, & the training-epochs way up, you can eke some hints of its real performance out of smaller datasets of a few hundred contrasting documents. For example, in the Python Gensim library's <code>Doc2Vec</code> <a href="https://radimrehurek.com/gensim/auto_examples/tutorials/run_doc2vec_lee.html" rel="nofollow noreferrer">intro-tutorial</a> & test-cases, a tiny set of 300 news-summaries (from about 20 years ago called the 'Lee Corpus') are used, and each text is only a few hundreds words long.</p>
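<p>A compressed sketch of that kind of small-data setup (my own toy corpus; the parameter names match current Gensim releases, and the values echo the tutorial settings discussed just below):</p>
<pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument

raw_docs = ["the cat sat on the mat", "dogs chase cats", "stock prices fell today"]
corpus = [TaggedDocument(words=doc.split(), tags=[i]) for i, doc in enumerate(raw_docs)]

model = Doc2Vec(corpus, vector_size=50, epochs=40, min_count=1)
print(model.dv[0])   # learned vector for the document tagged 0 (docvecs in older Gensim)
</code></pre>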
<p>But the <code>vector_size</code> is reduced to 50 – much smaller than the hundreds-of-dimensions typical with larger training data, and perhaps still too many dimensions for such a small amount of data. And, the training <code>epochs</code> is increased to 40, much larger than the default of 5 or typical <code>Doc2Vec</code> choices in published papers of 10-20 epochs. And even with those changes, with such little data & textual variety, the effect of moving similar documents to similar vector coordinates will be appear weaker to human review, & be less consistent between runs, than a better dataset will usually show (albeit using many more minutes/hours of training time).</p> | 2022-07-12 20:35:02.687000+00:00 | 2022-07-12 20:35:02.687000+00:00 | null | null | 72,952,741 | <p>I have a question is there already any free dataset available to test doc2vec and if in case I wanted to create my own dataset what could be an appropriate way to do it.</p> | 2022-07-12 12:58:45.023000+00:00 | 2022-07-12 20:35:02.687000+00:00 | null | nlp|doc2vec | ['https://radimrehurek.com/gensim/auto_examples/tutorials/run_doc2vec_lee.html'] | 1 |
62,485,426 | <p>The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage.</p>
<p>I think the reason most people use the dilated convolution is that it allows one to have a larger receptive field with the same computation and memory costs while also preserving resolution.</p>
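<p>For instance, in PyTorch (a quick sketch of mine), a 3x3 kernel with <code>dilation=2</code> covers the same 5x5 area as a dense 5x5 kernel while holding far fewer weights:</p>
<pre><code>import torch.nn as nn

dense5 = nn.Conv2d(16, 16, kernel_size=5, padding=2)                # 5x5 receptive field, 25 weights per channel pair
dilated3 = nn.Conv2d(16, 16, kernel_size=3, padding=2, dilation=2)  # same 5x5 receptive field, only 9 weights

print(sum(p.numel() for p in dense5.parameters()))    # 6416
print(sum(p.numel() for p in dilated3.parameters()))  # 2320
</code></pre>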
<p>Also, dilated convolutions have generally improved performance.
See in this paper <a href="https://arxiv.org/pdf/1511.07122.pdf" rel="nofollow noreferrer">Multi-Scale Context Aggregation by Dilated Convolutions</a> we have better semantic segmentation results.</p> | 2020-06-20 11:50:48.510000+00:00 | 2020-06-20 11:50:48.510000+00:00 | null | null | 62,483,075 | <p>If the purpose of dilated convolution is to extend receptive fields (extract image features from distant regions) and kernel 5x5 with mirror padding is also able to get the feature from distant regions. Why do people
more often use the dilated convolution over kernel 5x5?</p>
<p>Thank you.</p> | 2020-06-20 07:50:12.757000+00:00 | 2020-06-20 11:50:48.510000+00:00 | 2020-06-20 11:36:04.663000+00:00 | machine-learning|convolution|conv-neural-network | ['https://arxiv.org/pdf/1511.07122.pdf'] | 1 |
48,563,744 | <p>Are you really using <code>"some paragraph"</code> as an input? If so, I find it odd that your script isn't throwing a <code>ZeroDivisionError</code>. The gensim summarize is based on <a href="https://arxiv.org/pdf/1602.03606.pdf" rel="nofollow noreferrer">TextRank</a>. As per the <a href="https://radimrehurek.com/gensim/summarization/summariser.html" rel="nofollow noreferrer">docs</a>:</p>
<p>"The input should be a string, and must be longer than INPUT_MIN_LENGTH sentences for the summary to make sense. The text will be split into sentences using the split_sentences method in the summarization.texcleaner module. Note that newlines divide sentences."</p>
<p>With this in mind, have a look at <a href="https://github.com/summanlp/gensim/blob/develop/gensim/summarization/summarizer.py#L17" rel="nofollow noreferrer">this</a>.</p> | 2018-02-01 13:18:46.220000+00:00 | 2018-02-01 13:18:46.220000+00:00 | null | null | 48,562,396 | <p>I am new in NLP. I am trying to extract the summary of the paragraphs using Gensim in python. </p>
<p>I am facing a problem with a short paragraph: it gives me the warning shown below and doesn't give me a summary of the short paragraph.</p>
<p>Here is my code in Python:</p>
<pre><code> import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from gensim.summarization import summarize
text = "short paragraph"
print ('Summary:')
print (summarize(text))
</code></pre>
<p>It is giving me warning as follows:</p>
<pre><code>2018-02-01 17:31:47,247 : WARNING : Input text is expected to have at least 10 sentences.
2018-02-01 17:31:47,253 : INFO : adding document #0 to Dictionary(0 unique tokens: [])
2018-02-01 17:31:47,258 : INFO : built Dictionary(52 unique tokens: ['clearli', 'adult', 'chang', 'member', 'visit']...) from 4 documents (total 70 corpus positions)
2018-02-01 17:31:47,262 : WARNING : Input corpus is expected to have at least 10 documents.
2018-02-01 17:31:47,285 : WARNING : Couldn't get relevant sentences.
</code></pre>
<p>The output is (printing only the Summary label, not the actual summary of the short paragraph):</p>
<pre><code>Summary:
</code></pre>
<p>Am I missing something? Is there any other library for the same.</p> | 2018-02-01 12:06:32.253000+00:00 | 2018-02-01 13:18:46.220000+00:00 | null | python|python-3.x|gensim | ['https://arxiv.org/pdf/1602.03606.pdf', 'https://radimrehurek.com/gensim/summarization/summariser.html', 'https://github.com/summanlp/gensim/blob/develop/gensim/summarization/summarizer.py#L17'] | 3 |
32,486,637 | <p><em>Sequence to Sequence Learning using Neural Networks</em> is a way to use neural networks to translate sequences. The general goal is: you have a source sequence (say a sentence in English) and a target sequence (its translation in French), and the task is to generate the target sequence by looking at the source sequence.</p>
<p>The challenge for traditional feed-forward neural networks is the varying source and target lengths. <a href="http://arxiv.org/abs/1409.3215" rel="nofollow">In this paper</a>, they use a Recurrent Neural Network (RNN) to encode the source sequence, i.e., the RNN reads the individual elements of the source sequence one by one. Once it finishes, the encoder has a fair idea of what the source sequence is.</p>
<p>You use the last state of the encoder and provide this additional information to the decoder (which is basically a language model) to generate the target sequence element by element.</p>
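<p>A bare-bones sketch of that encoder/decoder wiring in PyTorch (my own illustration; the vocabulary sizes and dimensions are made up, and a real system would add attention, teacher forcing, beam search, etc.):</p>
<pre><code>import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))            # read the source sequence
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)   # decoder conditioned on the final encoder state
        return self.out(dec_out)                                  # scores over the target vocabulary

model = Seq2Seq(src_vocab=1000, tgt_vocab=1000)
scores = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 1000, (2, 9)))
print(scores.shape)   # torch.Size([2, 9, 1000])
</code></pre>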
<p>In your case, you can use it to generate responses. Say you have chat messages between two users. Now chat message of user 1 would be source sequence and the corresponding reply from user 2 would be target sequence. The training is done as said in the paper. The model after training would be trying to mimick user 2 response.</p> | 2015-09-09 18:18:35.840000+00:00 | 2015-09-09 18:18:35.840000+00:00 | null | null | 31,824,766 | <p>Chat bot can be created with Sequence to Sequence Learning with Neural Networks, I have training chat-data but how to use it?</p> | 2015-08-05 06:19:16.243000+00:00 | 2015-09-09 18:18:35.840000+00:00 | null | machine-learning|neural-network | ['http://arxiv.org/abs/1409.3215'] | 1 |
31,341,215 | <p>While this is certainly possible for Decision Trees and AN6U5 did a great job describing how, Random Forests use bundles of little trees that were trained using random subsets of the data and random subsets of the features. Thus each tree is optimal only in that limited setting of features and data. Since there are typically 100s or even 1000s of them, figuring out the context by examining the randomized data is going to be a thankless task. I don't think anyone does it.</p>
<p>However there are importance ranking for the features generated for Random Forests and pretty much all implementations will output them if requested. They turn out to be extremely useful. </p>
<p>Two of the most important ones are MDI (Mean Decrease Impurity) and MDA (Mean Decrease Accuracy). They are described in some detail in chapter 6 of this excellent work: <a href="http://arxiv.org/pdf/1407.7502v3.pdf" rel="nofollow">http://arxiv.org/pdf/1407.7502v3.pdf</a> </p> | 2015-07-10 12:48:48.097000+00:00 | 2015-07-10 12:48:48.097000+00:00 | null | null | 31,320,286 | <p>I'd like to extract useful rules from Decision Trees/Random Forest in order to develop a more applicable way to handle the rules and predictions. So I need an application which makes the rules more understandable.</p>
<p>Any suggestions (e.g. visualizations, validation methods etc) for my purpose?</p> | 2015-07-09 14:20:25.787000+00:00 | 2015-07-13 09:31:48.077000+00:00 | 2015-07-13 09:31:48.077000+00:00 | r|machine-learning|scikit-learn|random-forest|decision-tree | ['http://arxiv.org/pdf/1407.7502v3.pdf'] | 1 |
58,752,096 | <p>Yes, the loss should converge, because the loss value measures the difference between the expected Q value and the current Q value. Only when the loss converges does the current Q value approach the optimal Q value. If it diverges, it means your approximation is getting less and less accurate. </p>
<p>Maybe you can try adjusting the update frequency of the target network or checking the gradient of each update (add gradient clipping). The addition of a target network increases the stability of Q-learning. </p>
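<p>A minimal sketch of the target-network update in PyTorch (the tiny network, the period <code>C</code> and the elided loop body are placeholders, not your actual training code):</p>
<pre class="lang-py prettyprint-override"><code>import torch.nn as nn

policy_net = nn.Linear(4, 2)                 # stand-in for the online Q-network
target_net = nn.Linear(4, 2)                 # same architecture, used for the targets
target_net.load_state_dict(policy_net.state_dict())

C = 100                                      # target update frequency (tune this)
for step in range(10000):
    # ... sample a batch, compute the TD loss against target_net, optimizer step ...
    if step % C == 0:
        # clone the online network into the target network every C updates
        target_net.load_state_dict(policy_net.state_dict())
</code></pre>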
<p>DeepMind's 2015 Nature paper states:</p>
<blockquote>
<p>The second modification to online Q-learning aimed at further improving the stability of our method with neural networks is to use a separate network for generating the target yj in the Q-learning update. More precisely, every C updates we clone the network Q to obtain a target network Q' and use Q' for generating the Q-learning targets y<sub>j</sub> for the following C updates to Q.
This modification makes the algorithm more stable compared to standard online Q-learning, where an update that increases Q(s<sub>t</sub>,a<sub>t</sub>) often also increases Q(s<sub>t+1</sub>, a) for all a and hence also increases the target y<sub>j</sub>, possibly leading to oscillations or divergence of the policy. Generating the targets using the older set of parameters adds a delay between the time an update to Q is made and the time the update affects the targets y<sub>j</sub>, making divergence or oscillations much more unlikely. </p>
</blockquote>
<p><a href="https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf" rel="noreferrer">Human-level control through deep reinforcement
learning, Mnih et al., 2015</a></p>
<p>I ran an experiment for another person who asked a similar question in the CartPole environment, and a target update frequency of 100 solves the problem (achieving the maximum of 200 steps).</p>
<p>When C (update frequency) = 2, the plot of the average loss is:
<a href="https://i.stack.imgur.com/87cNB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/87cNB.png" alt="C=2"></a></p>
<p>C = 10</p>
<p><a href="https://i.stack.imgur.com/fSn8s.png" rel="noreferrer"><img src="https://i.stack.imgur.com/fSn8s.png" alt="C=10"></a></p>
<p>C = 100</p>
<p><a href="https://i.stack.imgur.com/gsWjT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gsWjT.png" alt="enter image description here"></a></p>
<p>C = 1000</p>
<p><a href="https://i.stack.imgur.com/38hY9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/38hY9.png" alt="enter image description here"></a></p>
<p>C = 10000</p>
<p><a href="https://i.stack.imgur.com/yscNN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/yscNN.png" alt="enter image description here"></a></p>
<p>If the divergence of the loss value is caused by exploding gradients, you can clip the gradient. In DeepMind's 2015 DQN, the authors clipped the gradient by limiting each value to [-1, 1]. In the other case, the authors of <a href="https://arxiv.org/pdf/1511.05952.pdf" rel="noreferrer">Prioritized Experience Replay</a> clip the gradient by limiting its norm to 10. Here are the examples:</p>
<p>DQN gradient clipping: </p>
<pre class="lang-py prettyprint-override"><code> optimizer.zero_grad()
loss.backward()
for param in model.parameters():
param.grad.data.clamp_(-1, 1)
optimizer.step()
</code></pre>
<p>PER gradient clipping:</p>
<pre class="lang-py prettyprint-override"><code> optimizer.zero_grad()
loss.backward()
if self.grad_norm_clipping:
torch.nn.utils.clip_grad.clip_grad_norm_(self.model.parameters(), 10)
optimizer.step()
</code></pre> | 2019-11-07 15:34:43.137000+00:00 | 2019-11-07 15:34:43.137000+00:00 | null | null | 47,036,246 | <p>I'm using the DQN algorithm to train an agent in my environment, that looks like this:</p>
<ul>
<li>Agent is controlling a car by picking discrete actions (left, right, up, down)</li>
<li>The goal is to drive at a desired speed without crashing into other cars</li>
<li>The state contains the velocities and positions of the agent's car and the surrounding cars</li>
<li>Rewards: -100 for crashing into other cars, positive reward according to the absolute difference to the desired speed (+50 if driving at desired speed)</li>
</ul>
<p>I have already adjusted some hyperparameters (network architecture, exploration, learning rate), which gave me some decent results, but still not as good as they should/could be. The rewards per episode are increasing during training. The Q-values are converging, too (see figure <a href="https://i.stack.imgur.com/vU1yu.png" rel="noreferrer">1</a>). However, for all the different hyperparameter settings the Q-loss is not converging (see figure <a href="https://i.stack.imgur.com/cG4jt.png" rel="noreferrer">2</a>). I assume that the lacking convergence of the Q-loss might be the limiting factor for better results.</p>
<p><a href="https://i.stack.imgur.com/vU1yu.png" rel="noreferrer">Q-value of one discrete action durnig training</a></p>
<p><a href="https://i.stack.imgur.com/cG4jt.png" rel="noreferrer">Q-loss during training</a></p>
<p>I'm using a target network which is updated every 20k timesteps. The Q-loss is calculated as MSE.</p>
<p>Do you have ideas why the Q-loss is not converging?
Does the Q-Loss have to converge for DQN algorithm? I'm wondering, why Q-loss is not discussed in most of the papers.</p> | 2017-10-31 13:07:21.777000+00:00 | 2019-11-07 15:34:43.137000+00:00 | null | tensorflow|deep-learning|reinforcement-learning|q-learning | ['https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf', 'https://i.stack.imgur.com/87cNB.png', 'https://i.stack.imgur.com/fSn8s.png', 'https://i.stack.imgur.com/gsWjT.png', 'https://i.stack.imgur.com/38hY9.png', 'https://i.stack.imgur.com/yscNN.png', 'https://arxiv.org/pdf/1511.05952.pdf'] | 7 |
53,816,568 | <p>What you are looking for is a cryptographic <a href="https://en.wikipedia.org/wiki/Accumulator_(cryptography)" rel="nofollow noreferrer">accumulator</a>. Currently, they are very popular with <a href="https://www.youtube.com/results?search_query=rsa%20accumulator" rel="nofollow noreferrer">digital coins (see these talks on YouTube)</a>.</p>
<p>From Wikipedia:</p>
<blockquote>
<p>A cryptographic accumulator is a one-way membership function. It answers a query as to whether a potential candidate is a member of a set without revealing the individual members of the set. </p>
</blockquote>
<p>For example, this <a href="https://arxiv.org/pdf/0905.1307v1.pdf" rel="nofollow noreferrer">paper</a>:</p>
<blockquote>
<p>We show how to use the RSA one-way accumulator to realize an efficient and dynamic authenticated dictionary, where untrusted directories provide cryptographically verifiable answers
to membership queries on a set maintained by a trusted source</p>
</blockquote>
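<p>To make the idea concrete, here is a toy RSA-accumulator sketch in Python (the tiny modulus and the assumption that set elements are already mapped to primes are simplifications for illustration only, not a secure construction):</p>
<pre class="lang-py prettyprint-override"><code># Toy RSA accumulator -- for illustration only, the parameters are not secure.
N = 3233                     # toy RSA modulus (normally a large product of two primes)
g = 2                        # public base
elements = [3, 5, 7, 11]     # assume elements are already hashed to primes

acc = g                      # accumulate the whole set into one value
for p in elements:
    acc = pow(acc, p, N)

wit = g                      # membership witness for 7: accumulate everything but 7
for p in elements:
    if p != 7:
        wit = pow(wit, p, N)

# Verify membership: raising the witness to the element must give the accumulator.
print(pow(wit, 7, N) == acc)     # True
</code></pre>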
<p>With a straightforward accumulator-based scheme, the following operations</p>
<ul>
<li>Query: When asking for a proof of membership.</li>
<li>Verification: check the validity of the answer.</li>
<li>Updates: Insertion and deletions</li>
</ul>
<p>are available.</p> | 2018-12-17 13:48:18.733000+00:00 | 2018-12-17 13:48:18.733000+00:00 | null | null | 53,815,524 | <p>is there a hashing algorithm that satisfies the following?</p>
<p><code>let "hash_funct" be a hashing function that takes two args, and returns a hash value. so all the following will be true</code></p>
<p><code>Hash1 = hash_funct(arg1, arg2) <=> hash_funct(Hash1, arg1) = hash_funct(Hash1, arg2) = Hash1;</code></p>
<p>Can anyone point me to this algorithm? Or, if it doesn't exist, can anyone collaborate with me to invent it?</p>
<p>more explanation:</p>
<p>imagine a set <code>S={A,B,C,D}</code>, and the Hashing function above.</p>
<p>if we can make: <code>Hash1 = hash_funct(A,B,C,D)</code>, then we can check if an element <code>X</code> is in the set by checking the hash result of <code>hash_funct(Hash1,X) == Hash1 ? belongs to the set : doesn't belong</code></p>
<p>with this property we make checking the exisitance of an element in a set O(1) instead of O(NlogN)</p> | 2018-12-17 12:44:09.623000+00:00 | 2018-12-17 13:48:18.733000+00:00 | 2018-12-17 13:08:11.533000+00:00 | c++|algorithm|math|hash|cryptography | ['https://en.wikipedia.org/wiki/Accumulator_(cryptography)', 'https://www.youtube.com/results?search_query=rsa%20accumulator', 'https://arxiv.org/pdf/0905.1307v1.pdf'] | 3 |
63,021,383 | <p>It should be pooling first, normalization second.</p>
<p>The original code link in the question no longer works, but I'm assuming the normalization being referred to is batch normalization. Though, the main idea will probably apply to other normalization as well. As noted by the batch normalization authors in <a href="https://arxiv.org/pdf/1502.03167.pdf" rel="noreferrer">the paper introducing batch normalization</a>, one of the main purposes is "normalizing layer inputs". The simplified version of the idea being: if the inputs to each layer have a nice, reliable distribution of values, the network can train more easily. Putting the normalization second allows for this to happen.</p>
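<p>A quick NumPy check of the two orderings on a toy activation vector (the numbers are arbitrary and mirror the worked example below):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

acts = np.array([0., 99., 99., 100.])

def minmax(v):                       # simple 0-1 normalization
    return (v - v.min()) / (v.max() - v.min())

def maxpool2(v):                     # max pooling with kernel and stride 2
    return v.reshape(-1, 2).max(axis=1)

print(maxpool2(minmax(acts)))        # normalize first: [0.99, 1.0]
print(minmax(maxpool2(acts)))        # pool first:      [0.0, 1.0]
</code></pre>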
<p>As a concrete example, we can consider the activations <code>[0, 99, 99, 100]</code>. To keep things simple, a 0-1 normalization will be used. A max pooling with kernel 2 will be used. If the values are first normalized, we get <code>[0, 0.99, 0.99, 1]</code>. Then pooling gives <code>[0.99, 1]</code>. This does not provide the nice distribution of inputs to the next layer. If we instead pool first, we get <code>[99, 100]</code>. Then normalizing gives <code>[0, 1]</code>. Which means we can then control the distribution of the inputs to the next layer to be what we want them to be to best promote training.</p> | 2020-07-21 19:04:03.623000+00:00 | 2020-07-23 15:08:54.953000+00:00 | 2020-07-23 15:08:54.953000+00:00 | null | 42,015,156 | <p>I'm looking at <a href="https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10.py" rel="noreferrer">TensorFlow implementation of ORC on CIFAR-10</a>, and I noticed that after the first convnet layer, they do pooling, then normalization, but after the second layer, they do normalization, then pooling.</p>
<p>I'm just wondering what would be the rationale behind this, and any tips on when/why we should choose to do norm before pool would be greatly appreciated. Thanks!</p> | 2017-02-03 01:10:54.307000+00:00 | 2020-07-23 15:08:54.953000+00:00 | null | tensorflow|conv-neural-network | ['https://arxiv.org/pdf/1502.03167.pdf'] | 1 |
31,708,507 | <p>In the link to the IPython notebook Dmitry provided, it says that it does <strong>gradient</strong> <strong>ascent</strong>, <strong>maximizing</strong> the L2 norm. I believe this is what Google means by enhancing the feature, from an algorithmic perspective. </p>
<p>If you think about it, that makes sense: minimizing the L2 norm would prevent over-fitting, i.e. make the curve look smoother. If you do the opposite, you are making the feature more obvious.</p>
<p>Here is a great link to understand <a href="http://www.onmyphd.com/?p=gradient.descent" rel="noreferrer">gradient ascent</a>, though it talks about gradient descent mainly.</p>
<p>I don't know much about implementation details in caffe, since I use theano mostly. Hope it helps!</p>
<p><strong>Update</strong></p>
<p>So I read the detailed articles [1], [2], [3], [4] today and found out that <a href="http://arxiv.org/pdf/1312.6034v2.pdf" rel="noreferrer">[3]</a> actually talks about the algorithm in detail:</p>
<blockquote>
<p>A locally-optimal <em>I</em> can be found by the back-propagation
method. The procedure is related to the ConvNet training procedure, where the back-propagation is
used to optimise the layer weights. The difference is that in our case the optimisation is performed
with respect to the input image, while the weights are fixed to those found during the training stage.
We initialised the optimisation with the zero image (in our case, the ConvNet was trained on the
zero-centred image data), and then added the training set mean image to the result.</p>
</blockquote>
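<p>A minimal modern sketch of that procedure in PyTorch (the pretrained model, the class index and the learning rate are my own choices for illustration, not taken from the paper):</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torchvision import models

model = models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)              # the weights stay fixed after training

image = torch.zeros(1, 3, 224, 224, requires_grad=True)   # start from a zero image
optimizer = torch.optim.SGD([image], lr=1.0)
target_class = 130                       # arbitrary class index

for step in range(100):
    optimizer.zero_grad()
    score = model(image)[0, target_class]
    (-score).backward()                  # maximizing the class score = gradient ascent
    optimizer.step()
</code></pre>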
<p>Therefore, after training the network on classification, you train it again w.r.t to the input image, using gradient ascent in order to get higher score for the class.</p> | 2015-07-29 18:34:46.807000+00:00 | 2015-07-29 20:51:14.303000+00:00 | 2015-07-29 20:51:14.303000+00:00 | null | 30,994,563 | <p>I am interested in a recent blog post by Google that describes the use of <code>nn</code> to make art. </p>
<p>I am particularly interested in one technique: </p>
<blockquote>
<p>'In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.' </p>
</blockquote>
<p>The post is <a href="http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html?m=1" rel="nofollow">http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html?m=1</a>. </p>
<p><strong>My question</strong>: the post describes this as a 'simple' case--is there an open-source implementation of a nn that could be used for this purpose in a relatively plug-and-play process?
For just the technique described, does the network need to be trained? </p>
<p>No doubt for other techniques mentioned in the paper one needs a network already trained on a large number of images, but for the one I've described is there already some kind of open-source network layer visualization package?</p> | 2015-06-23 05:32:04.760000+00:00 | 2015-07-31 01:14:48.720000+00:00 | 2015-07-31 01:14:48.720000+00:00 | artificial-intelligence|neural-network|deep-learning|caffe|deep-dream | ['http://www.onmyphd.com/?p=gradient.descent', 'http://arxiv.org/pdf/1312.6034v2.pdf'] | 2 |
9,791,028 | <p>Lets look at your example. It is very simple, so we will imagine it being more complex. However, it seems you take for granted that side effects are essential. Let me question that a bit:</p>
<p>In your example you have made a very interesting discovery: The names of all seasons are of same length. What an earth-shattering insight! But wait, is it really true?
The most straight-forward way to verify this, is:</p>
<pre>?- <b>season(S), atom_length(S,L).</b>
S = spring, L = 6
; S = summer, L = 6
; S = autumn, L = 6
; S = winter, L = 6.
</pre>
<p>No need for <code>findall/3</code>, no need for <code>write/1</code>.</p>
<p>For a larger number of answers, visual inspection is not practical. Imagine 400 seasons. But we can verify this with:</p>
<pre>?- <b>season(S), atom_length(S,L), dif(L,6).</b>
false.
</pre>
<p>So we now know for sure that there is no season of a different length.</p>
<p>That is my very first answer to your question:</p>
<blockquote>
<p>As long as you can, use the toplevel shell and not your own side effecting procedures! Stretch things a little bit further to avoid side-effects altogether. This is the best way to avoid failure driven loops right from the beginning.</p>
</blockquote>
<p>There are more reasons why sticking to the toplevel shell is a good idea:</p>
<ul>
<li><p>If your programs can be easily queried on the toplevel, it will be trivial to add test cases for them.</p>
</li>
<li><p>The toplevel shell is used by many other users and therefore is very well tested. Your own writing is often flawed and untested. Think of constraints. Think of writing floats. Will you use <code>write/1</code> for floats too? What is the right way to write floats such that they can be read back accurately? There <em>is</em> a way to do this in <a href="/questions/tagged/iso-prolog" class="post-tag" title="show questions tagged 'iso-prolog'" rel="tag">iso-prolog</a>. Here is the answer:</p>
</li>
</ul>
<blockquote class="spoiler">
<p> In ISO, <code>writeq/1,2</code>, <code>write_canonical/1,2</code>, <code>write_term/2,3</code> with option <code>quoted(true)</code> guarantee that floats can be read back accurately. That is, they are the same w.r.t. <code>(==)/2</code></p>
</blockquote>
<ul>
<li><p>The toplevel shell shows you valid Prolog text. In fact, the answer itself is a query! It can be pasted back into the toplevel - only to get back the very same answer. In this manner you will learn the more exotic but unavoidable details of Prolog, like quoting, escaping and bracketing. It is practically impossible to learn the syntax otherwise, since Prolog parsers are often extremely permissive.</p>
</li>
<li><p>Your program will be most probably more accessible to declarative reasoning.</p>
</li>
</ul>
<p>Very likely, your two procedures <code>methodone</code> and <code>methodtwo</code> are incorrect: You forgot a newline after writing <code>Done</code>. So <code>methodone, methodone</code> contains a garbled line. How to test that easily?</p>
<p>But lets look a little bit further into your program. What is so typical for failure driven loops is that they start innocently as something doing "only" side effects but sooner or later they tend to attract more semantic parts as well. In your case, <code>atom_length/2</code> is hidden down in the failure driven loop completely inaccessible to testing or reasoning.</p>
<h1>Efficiency considerations</h1>
<p>Prolog systems often implement failure by deallocating a stack. Therefore, failure driven loops will not require a garbage collector. That's why people believe that failure driven loops are efficient. However, this is not necessarily the case. For a goal like <code>findall(A, season(A), As)</code> every answer for <code>A</code> is copied into some space. This is a trivial operation for something like atoms but imagine a bigger term. Say:</p>
<pre>
blam([]).
blam([L|L]) :- blam(L).
bigterm(L) :- length(L,64), blam(L).
</pre>
<p>In many systems, <code>findall/3</code> or <code>assertz/1</code> for this big term will freeze the system.</p>
<p>Also, systems like SWI, YAP, SICStus do have quite sophisticated garbage collectors. And using fewer failure driven loops will help to improve those systems even further, since this creates a demand for <a href="http://arxiv.org/abs/1106.1311" rel="nofollow noreferrer">more sophisticated techniques</a>.</p> | 2012-03-20 16:36:55.870000+00:00 | 2022-08-20 17:00:53.143000+00:00 | 2022-08-20 17:00:53.143000+00:00 | null | 9,744,641 | <p>I come up against this all the time, and I'm never sure which way to attack it. Below are two methods for processing some season facts.</p>
<p>What I'm trying to work out is whether to use method 1 or 2, and what are the pros and cons of each, especially large amounts of facts.</p>
<p><code>methodone</code> seems wasteful since the facts are available, why bother building a list of them (especially a large list). This must have memory implications too if the list is large enough ? And it doesn't take advantage of Prolog's natural backtracking feature. </p>
<p><code>methodtwo</code> takes advantage of backtracking to do the recursion for me, and I would guess would be much more memory efficient, but is it good programming practice generally to do this? It's arguably uglier to follow, and might there be any other side effects?</p>
<p>One problem I can see is that each time <code>fail</code> is called, we lose the ability to pass anything back to the calling predicate, eg. if it was <code>methodtwo(SeasonResults)</code>, since we continually fail the predicate on purpose. So <code>methodtwo</code> would need to assert facts to store state.</p>
<p>Presumably(?) method 2 would be faster as it has no (large) list processing to do? </p>
<p>I could imagine that if I had a list, then <code>methodone</code> would be the way to go.. or is that always true? Might it make sense in any conditions to assert the list to facts using <code>methodone</code> then process them using method two? Complete madness?</p>
<p>But then again, I read that asserting facts is a very 'expensive' business, so list handling might be the way to go, even for large lists?</p>
<p>Any thoughts? Or is it sometimes better to use one and not the other, depending on (what) situation? eg. for memory optimisation, use method 2, including asserting facts and, for speed use method 1?</p>
<pre><code>season(spring).
season(summer).
season(autumn).
season(winter).
% Season handling
showseason(Season) :-
atom_length(Season, LenSeason),
write('Season Length is '), write(LenSeason), nl.
% -------------------------------------------------------------
% Method 1 - Findall facts/iterate through the list and process each
%--------------------------------------------------------------
% Iterate manually through a season list
lenseason([]).
lenseason([Season|MoreSeasons]) :-
showseason(Season),
lenseason(MoreSeasons).
% Findall to build a list then iterate until all done
methodone :-
findall(Season, season(Season), AllSeasons),
lenseason(AllSeasons),
write('Done').
% -------------------------------------------------------------
% Method 2 - Use fail to force recursion
%--------------------------------------------------------------
methodtwo :-
% Get one season and show it
season(Season),
showseason(Season),
% Force prolog to backtrack to find another season
fail.
% No more seasons, we have finished
methodtwo :-
write('Done').
</code></pre> | 2012-03-16 21:17:36.967000+00:00 | 2022-08-20 17:00:53.143000+00:00 | 2014-03-10 03:33:15.953000+00:00 | prolog|prolog-dif|prolog-toplevel | ['/questions/tagged/iso-prolog', 'http://arxiv.org/abs/1106.1311'] | 2 |
62,515,047 | <p>Signature verification is a very difficult task, lots of research efforts have been made but still they are not much accurate in comparing signature pairs</p>
<p><code>SIFT/SURF</code> algorithms wouldn't be helpful here because model needs to learn a more complex set of features in order to compare signatures</p>
<p>There are some <code>Deep learning</code> based <code>Offline signature verification</code> models that you can see</p>
<ul>
<li>Dey et al. 2017 [<a href="https://arxiv.org/abs/1707.02131" rel="nofollow noreferrer">paper</a>] [<a href="https://github.com/sounakdey/SigNet" rel="nofollow noreferrer">code</a>]</li>
<li>Hafemann et al. 2017 [<a href="https://arxiv.org/abs/1705.05787" rel="nofollow noreferrer">paper</a>] [<a href="https://github.com/luizgh/sigver" rel="nofollow noreferrer">code</a>]</li>
</ul> | 2020-06-22 12:59:38.723000+00:00 | 2020-06-22 12:59:38.723000+00:00 | null | null | 58,681,389 | <p>I'm working on a project about offline signature verification and I've tried SIFT/SURF algorithms (OpenCV) for comparisson of 2 signature images.</p>
<p>What I've noticed is that when I pass in 2 same pictures I get ~1000 keypoints but when I pass 2 pics of different signatures of same person I get just ~70-80. And when one of the passed pics is a signature of a different person but which has alike style I get ~50-60 keypoints. Some of the points also weren't matching each other at all like they were from 2 different locations.</p>
<p>It's clear to me that these algorithms aren't good for my task but I don't quite understand why. </p>
<p>Could anyone exaplin the reason to me from the maths/algo point of view?</p> | 2019-11-03 14:59:01.343000+00:00 | 2020-06-22 12:59:38.723000+00:00 | null | image-processing|feature-extraction|sift|surf|handwriting-recognition | ['https://arxiv.org/abs/1707.02131', 'https://github.com/sounakdey/SigNet', 'https://arxiv.org/abs/1705.05787', 'https://github.com/luizgh/sigver'] | 4 |
58,527,525 | <p>Well, <code>MobileNets</code> and all other ImageNet-based models down-sample the image 5 times (224 to 7) and then apply <code>GlobalAveragePooling2D</code> followed by the output layers. </p>
<p>I think using 32*32 images on these models directly won't give you a good result, as the tensor shape would be 1*1 even before the <code>GlobalAveragePooling2D</code>.</p>
<p>Maybe you should try resize the image to like <a href="https://github.com/keras-team/keras-applications/blob/master/keras_applications/mobilenet_v2.py#L249" rel="nofollow noreferrer">96*96</a> or remove the first <a href="https://github.com/keras-team/keras-applications/blob/master/keras_applications/mobilenet_v2.py#L314" rel="nofollow noreferrer"><code>stride=2</code></a>. Take the <a href="https://arxiv.org/pdf/1707.07012.pdf" rel="nofollow noreferrer">NASNet paper</a> as reference, they use 4 poolings in both Cifar and ImageNet versions while only ImageNet version has <code>stride=2</code> in the first Convolution layer.</p> | 2019-10-23 16:42:38.763000+00:00 | 2019-10-23 16:50:01.287000+00:00 | 2019-10-23 16:50:01.287000+00:00 | null | 58,525,367 | <p>I want to train MobileNetV2 from scratch on CIFAR-100 and I get the following results where it just stops learning after some while. </p>
<p><a href="https://i.stack.imgur.com/aNgEJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aNgEJ.png" alt="enter image description here"></a></p>
<p>Here is my code. I would like to see at least 60-70% validation accuracy and I wonder whether I have to pre-train it on imagenet or whether it is because CIFAR100 is just 32x32x3?
Due to some restrictions, I am using Keras 2.2.4 with tensorflow 1.12.0. </p>
<pre><code>from keras.applications.mobilenet_v2 import MobileNetV2
[..]
(x_train, y_train), (x_test, y_test) = cifar100.load_data()
x_train = x_train / 255
x_test = x_test / 255
y_train = np_utils.to_categorical(y_train, 100)
y_test = np_utils.to_categorical(y_test, 100)
input_tensor = Input(shape=(32,32,3))
x = MobileNetV2(include_top=False,
weights=None,
classes=100)(input_tensor)
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
x = Dense(512, activation='relu')(x)
preds = Dense(100, activation='softmax')(x)
model = Model(inputs=[input_tensor], outputs=[preds])
optimizer = Adam(lr=1e-3)
model.compile(loss="categorical_crossentropy",
optimizer=optimizer,
metrics=['accuracy'])
epochs = 300
batch_size = 64
callbacks = [ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1), cooldown=0, patience=10, min_lr=1e-6)]
generator = ImageDataGenerator(rotation_range=15,
width_shift_range=5. / 32,
height_shift_range=5. / 32,
horizontal_flip=True)
generator.fit(x_train)
model.fit_generator(generator.flow(x_train, y_train),
validation_data=(x_test, y_test),
steps_per_epoch=(len(x_train) // batch_size),
epochs=epochs, verbose=1,
callbacks=callbacks)
</code></pre> | 2019-10-23 14:39:29.280000+00:00 | 2019-10-23 16:50:01.287000+00:00 | 2019-10-23 14:48:16.267000+00:00 | keras|mobilenet | ['https://github.com/keras-team/keras-applications/blob/master/keras_applications/mobilenet_v2.py#L249', 'https://github.com/keras-team/keras-applications/blob/master/keras_applications/mobilenet_v2.py#L314', 'https://arxiv.org/pdf/1707.07012.pdf'] | 3 |
45,441,313 | <p>The simplest parameter-free way to do black box optimisation is random search, and it will explore high dimensional spaces faster than a grid search. There are papers on this but tl;dr with random search you get different values on every dimension each time, while with grid search you don't.</p>
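<p>A minimal random-search sketch (the search space and the scoring function are placeholders for your own):</p>
<pre class="lang-py prettyprint-override"><code>import random

def sample_config():
    # made-up search space, just for illustration
    return {
        "lr": 10 ** random.uniform(-5, -1),
        "hidden": random.choice([64, 128, 256, 512]),
        "dropout": random.uniform(0.0, 0.5),
    }

def train_and_score(config):
    # stand-in for your real training / validation routine
    return random.random()

best = max((sample_config() for _ in range(50)), key=train_and_score)
print(best)
</code></pre>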
<p><a href="https://dash.harvard.edu/bitstream/handle/1/27769882/BayesOptLoop.pdf?sequence=1" rel="noreferrer">Bayesian optimisation</a> has good theoretical guarantees (despite the approximations), and implementations like Spearmint can wrap any script you have; there are hyperparameters but users don't see them in practice. Hyperband got a lot of attention by showing faster convergence than Naive Bayesian optimisation. It was able to do this by running different networks for different numbers of iterations, and Bayesian optimisation doesn't support that naively. While it is possible to do better with a Bayesian optimisation algorithm that can take this into account, such as <a href="https://arxiv.org/abs/1605.07079" rel="noreferrer">FABOLAS</a>, in practice hyperband is so simple you're probably better using it and watching it to tune the search space at intervals.</p> | 2017-08-01 14:59:58.650000+00:00 | 2017-08-01 14:59:58.650000+00:00 | null | null | 44,260,217 | <p>What is the best way to perform hyperparameter optimization for a Pytorch model? Implement e.g. Random Search myself? Use Skicit Learn? Or is there anything else I am not aware of?</p> | 2017-05-30 10:51:35.267000+00:00 | 2020-09-06 18:35:23.843000+00:00 | 2020-09-06 18:35:23.843000+00:00 | python|machine-learning|deep-learning|pytorch|hyperparameters | ['https://dash.harvard.edu/bitstream/handle/1/27769882/BayesOptLoop.pdf?sequence=1', 'https://arxiv.org/abs/1605.07079'] | 2 |
49,460,566 | <p>You can use the tool <a href="https://github.com/antalsz/hs-to-coq/" rel="nofollow noreferrer"><code>hs-to-coq</code></a> to convert Haskell mostly automatic into Coq, and then use the full power of the Coq prove assistant to verify your Haskell code. See the papers <a href="https://arxiv.org/abs/1803.06960" rel="nofollow noreferrer">https://arxiv.org/abs/1803.06960</a> and <a href="https://arxiv.org/abs/1711.09286" rel="nofollow noreferrer">https://arxiv.org/abs/1711.09286</a> for some more information.</p> | 2018-03-24 02:18:58.657000+00:00 | 2018-03-24 02:18:58.657000+00:00 | null | null | 4,077,970 | <p>Continuing on from ideas in: <a href="https://stackoverflow.com/questions/4065001/are-there-any-provable-real-world-languages-scala">Are there any provable real-world languages?</a></p>
<p>I don't know about you, but I'm <strong>sick of writing code that I can't guarantee.</strong></p>
<p>After asking the above question and getting a phenomenal response (Thanks all!) I have decided to narrow my search for a provable, pragmatic, approach to <a href="http://www.haskell.org/" rel="noreferrer">Haskell</a>. I chose Haskell because it is actually useful (there are <a href="http://snapframework.com/" rel="noreferrer">many</a> <a href="http://happstack.com/" rel="noreferrer">web</a> <a href="http://www.turbinado.org/" rel="noreferrer">frameworks</a> written for it, this seems a good benchmark) <strong>AND</strong> I think it is strict enough, <a href="http://en.wikipedia.org/wiki/Functional_programming" rel="noreferrer">functionally</a>, that it might be provable, or at least allow the testing of invariants.</p>
<p><strong>Here's what I want</strong> (and have been unable to find)</p>
<p>I want a framework that can look at a Haskell function, add, written in psudocode:</p>
<pre><code>add(a, b):
return a + b
</code></pre>
<p>- and check if certain invarients hold over every execution state. I'd prefer for some formal proof, however I would settle for something like a model-checker.<br>
In this example, the invarient would be that given values <em>a</em> and <em>b</em>, the return value is always the sum <em>a+b</em>.</p>
<p>This is a simple example, but I don't think it is an impossibility for a framework like this to exist. There certainly would be an upper limit on the complexity of a function that could be tested (10 string inputs to a function would certainly take a long time!) but this would encourage more careful design of functions, and is no different than using other formal methods. Imagine using Z or B, when you define variables/sets, you make damn sure that you give the variables the smallest possible ranges. If your INT is never going to be above 100, make sure you initialise it as such! Techniques like these, and proper problem decomposition should - I think - allow for satisfactory checking of a pure-functional language like Haskell.</p>
<p>I am not - yet - very experienced with formal methods or Haskell. Let me know if my idea is a sound one, or maybe you think that haskell is not suited? If you suggest a different language, please make sure it passes the "has-a-web-framework" test, and do read the original <a href="https://stackoverflow.com/questions/4065001/are-there-any-provable-real-world-languages-scala">question</a> :-)</p> | 2010-11-02 13:12:40.037000+00:00 | 2018-10-04 05:45:45.810000+00:00 | 2017-05-23 12:32:26.040000+00:00 | testing|haskell|functional-programming|formal-methods|formal-verification | ['https://github.com/antalsz/hs-to-coq/', 'https://arxiv.org/abs/1803.06960', 'https://arxiv.org/abs/1711.09286'] | 3 |
34,695,524 | <p>Here's a better way to do what you want in R:</p>
<pre><code>categ <- c(CO = "cat:stat.CO", #I'm naming these elements so
ME = "cat:stat.ME", # that the corresponding elements
TH = "cat:stat.TH", # in the list are named as well.
ML = "cat:stat.ML") # Could also just set 'names(doi_list)' to 'categ'.
doi_list <-
lapply(categ, function(ctg)
(doi <- arxiv_search(ctg)$doi)[nchar(doi) > 0])
</code></pre>
<p>I sort of threw you in the deep end on the last line with in-line assignment of <code>doi</code>; a more step-by-step approach would be:</p>
<pre><code>lapply(categ, function(ctg){
arxiv.df <- arxiv_search(ctg)
doi <- arxiv.df$doi
doi[nchar(doi) > 0]})
</code></pre> | 2016-01-09 15:47:33.947000+00:00 | 2016-01-09 15:47:33.947000+00:00 | null | null | 34,694,919 | <p>I managed to fill my <code>doi_list</code> but it does not work if I encapsulate the code into a <code>function</code>. <a href="http://www.r-bloggers.com/call-by-reference-in-r/" rel="nofollow">From a tutorial</a> I've seen I assume that this should be possible but <code>doi_list</code> is empty after <code>get_doi_from_category()</code> finishes.</p>
<pre><code>library(aRxiv)
get_doi_from_category <- function(category, doi_list) {
arxiv_rec <- arxiv_search(category)
arxiv_doi_list <- arxiv_rec[13]
by(arxiv_doi_list, 1:nrow(arxiv_doi_list),
function(row) {
if(nchar(row) > 0) {
doi_list <<- c(doi_list, row)
}
})
}
doi_list <- list()
get_doi_from_category('cat:stat.ML', doi_list)
for(doi in doi_list)
{
print(doi)
}
get_doi_from_category('cat:stat.CO', doi_list)
get_doi_from_category('cat:stat.ME', doi_list)
get_doi_from_category('cat:stat.TH', doi_list)
</code></pre>
<p>PS: First day with <strong>R</strong>.</p> | 2016-01-09 14:51:08.003000+00:00 | 2016-01-09 15:47:33.947000+00:00 | 2016-01-09 15:44:33.170000+00:00 | r | [] | 0 |
29,832,338 | <p>If you have rectified images, finding disparity is a matter of calculating costs between pixels in left and right images on the same horizontal line.</p>
<p>You can take a few selected points in the images (for example the ones that have high gradient or feature points coming from SIFT), set those as roots/seeds of your regions and calculate cost for a range of disparities using SAD/SSD or whatever cost function you prefer.</p>
<p>Then take the best disparity for a root and assign it to a neighbor. If the cost for that is lower than a predefined threshold, add it to the region otherwise go to next neighbor. When you cannot add any more points the region growing is finished.</p>
<p>This is a detailed example of the process: <a href="http://arxiv.org/pdf/0812.1340.pdf" rel="nofollow">http://arxiv.org/pdf/0812.1340.pdf</a></p> | 2015-04-23 19:11:15.603000+00:00 | 2015-04-23 19:11:15.603000+00:00 | null | null | 29,664,911 | <p>Dear friends I am currently working on a disparity algorithm that visits only a small fraction of disparity space in order to find a semi-dense disparity map. It works by growing from a small set of correspondence seeds. But before that I am implementing the standard region growing algorithm in matlab to understand how it works.
The first step of the baseline growing algorithm says that:</p>
<p><strong>Require</strong>: Rectified images Il, Ir, initial correspondence
seeds S, image similarity threshold. Compute similarity simil(s) for every seed s belonging to S.</p>
<blockquote>
<p>Now i cannot understand this step. First of all how do i calculate initial seed points from two rectified images. Should i use SIFT algorithm in matlab or is there any better way to do it.???Can anybody also give me some idea about how does a region growing based disparity calculating algorithm works and whether it is better than SAD or SSD.</p>
</blockquote> | 2015-04-16 03:38:14.837000+00:00 | 2015-04-27 17:24:07.257000+00:00 | 2015-04-27 17:24:07.257000+00:00 | matlab|image-processing|computer-vision|matlab-cvst|disparity-mapping | ['http://arxiv.org/pdf/0812.1340.pdf'] | 1 |
43,596,879 | <p>Since your data is sequential and there is dependency between numbers, it should be possible to train a recurrent neural network. The recurrent weights take care of the relationship between numbers. </p>
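<p>A minimal per-position sequence-labelling sketch in PyTorch (the sizes, the LSTM choice and the random data are assumptions for illustration):</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn

class SeqLabeller(nn.Module):
    # labels every number in a sequence as good/bad; the sizes are assumptions
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, seq_len, 1) scaled numbers
        h, _ = self.rnn(x)
        return self.out(h).squeeze(-1)    # one good/bad logit per position

model = SeqLabeller()
logits = model(torch.randn(8, 300, 1))    # e.g. 8 sequences of length 300
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (8, 300)).float())
</code></pre>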
<p>As a general rule of thumb, the more uncorrelated input sequences you have, the better it is. This survey article can help you get started with RNN: <a href="https://arxiv.org/abs/1801.01078" rel="nofollow noreferrer">https://arxiv.org/abs/1801.01078</a></p> | 2017-04-24 20:09:11.497000+00:00 | 2018-01-15 23:51:45.063000+00:00 | 2018-01-15 23:51:45.063000+00:00 | null | 43,591,374 | <p>My problem is as follows. As inputs I have sequences of whole numbers, around 200-500 per sequence. Each number in a sequence is marked as good or bad. The first number in each sequence is always good, but whether or not subsequent numbers are still considered good is determined by which numbers came before it. There's a mathematical function which governs how the numbers affect those that come after it but the specifics of this function are unknown. All we know for sure is it starts off accepting every number and then gradually starts rejecting numbers until finally every number is considered bad. Out of every sequence only around 50 numbers will ever be accepted before this happens.</p>
<p>It is possible that the validity of a number is not only determined by which numbers came before it, but also by whether these numbers were themselves considered good or bad.</p>
<p>For example: (<strong>good</strong> numbers in bold)</p>
<ul>
<li><p><strong>4 17 8 47 52</strong> 18 <strong>13</strong> <strong>88</strong> 92 55 8 <strong>66</strong> 76 85 36 ...</p></li>
<li><p><strong>92 13 28</strong> 12 <strong>36 73 82</strong> 14 18 <strong>10</strong> 11 <strong>21</strong> 33 98 1 ...</p></li>
</ul>
<p>Attempting to determine the logic behind the system through guesswork seems like an impossible task. So my question is, can a neural network be trained to predict if a number will be good or bad? If so, approximately how many sequences would be required to train it? (assuming sequences of 200-500 numbers that are 32 bit integers)</p> | 2017-04-24 14:52:24.233000+00:00 | 2018-01-15 23:51:45.063000+00:00 | null | neural-network|deep-learning|pattern-recognition | ['https://arxiv.org/abs/1801.01078'] | 1 |
33,874,783 | <p>Due to the highly context-sensitive nature of typed lambda calculus I would be surprised if there were an efficient algorithm, or rather a "natural" efficient algorithm.</p>
<p><a href="http://arxiv.org/abs/1210.2610" rel="nofollow">This paper</a> has nice formulae for counting <em>untyped</em> lambda terms and from that they derive a fairly simple function for enumerating untyped terms. They also provide a counting function for normal forms which should be easy to adapt to a generating function too. Unfortunately, they only make a typed generator function by filtering that which is absurdly expensive (one of the results of the paper is just how absurd it is).</p>
<p>As for a more efficient way to generate typed terms, my advice would be to generate <em>typing derivations</em> instead of terms and then type checking them.</p> | 2015-11-23 15:33:32.793000+00:00 | 2015-11-23 15:33:32.793000+00:00 | null | null | 28,685,232 | <p>Is there any efficient algorithm that maps between well-typed, closed terms of the simply typed lambda calculus and natural numbers? For example, using bruijn indexes (and probably on incorrect order):</p>
<pre><code>0 → (λ 0)
1 → (λ (λ (0 1)))
2 → (λ (λ (1 0)))
3 → (λ 0 (λ 0))
4 → (λ (λ 0) 0)
5 → (λ (λ 1) 0)
6 → ... so on
</code></pre>
<p>Related questions: is there an algorithm that maps between natural numbers and <strong>normalized</strong> terms of the simply typed lambda calculus? Also, the same questions applied to the untyped lambda calculus.</p> | 2015-02-23 23:00:03.010000+00:00 | 2015-11-23 15:33:32.793000+00:00 | null | algorithm|search|functional-programming|enumeration|lambda-calculus | ['http://arxiv.org/abs/1210.2610'] | 1 |
29,634,569 | <p>The Binary Lambda Calculus defines a binary encoding for any closed term in the untyped lambda calculus, and also suggests a bijection between natural numbers and binary strings, but the former is not surjective.
Still, the paper <a href="http://arxiv.org/abs/1401.0379" rel="nofollow">http://arxiv.org/abs/1401.0379</a>
"Counting Terms in the Binary Lambda Calculus"
might yield efficient ranking/unranking mappings.</p> | 2015-04-14 18:13:50.717000+00:00 | 2015-04-14 18:13:50.717000+00:00 | null | null | 28,685,232 | <p>Is there any efficient algorithm that maps between well-typed, closed terms of the simply typed lambda calculus and natural numbers? For example, using bruijn indexes (and probably on incorrect order):</p>
<pre><code>0 → (λ 0)
1 → (λ (λ (0 1)))
2 → (λ (λ (1 0)))
3 → (λ 0 (λ 0))
4 → (λ (λ 0) 0)
5 → (λ (λ 1) 0)
6 → ... so on
</code></pre>
<p>Related questions: is there an algorithm that maps between natural numbers and <strong>normalized</strong> terms of the simply typed lambda calculus? Also, the same questions applied to the untyped lambda calculus.</p> | 2015-02-23 23:00:03.010000+00:00 | 2015-11-23 15:33:32.793000+00:00 | null | algorithm|search|functional-programming|enumeration|lambda-calculus | ['http://arxiv.org/abs/1401.0379'] | 1 |
65,180,425 | <p>Not sure if you still need it but recently a paper mentioned how to use document embeddings to cluster documents and extract words from each cluster to represent a topic. Here's the link:
<a href="https://arxiv.org/pdf/2008.09470.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2008.09470.pdf</a>, <a href="https://github.com/ddangelov/Top2Vec" rel="nofollow noreferrer">https://github.com/ddangelov/Top2Vec</a></p>
<p>Inspired by the above paper, another algorithm for topic modelling using BERT to generate sentence embeddings is mentioned here:
<a href="https://towardsdatascience.com/topic-modeling-with-bert-779f7db187e6" rel="nofollow noreferrer">https://towardsdatascience.com/topic-modeling-with-bert-779f7db187e6</a>, <a href="https://github.com/MaartenGr/BERTopic" rel="nofollow noreferrer">https://github.com/MaartenGr/BERTopic</a></p>
<p>The above two libraries provide an end-to-end solution to extract topics from a corpus. But if you're interested only in generating sentence embeddings, look at Gensim's doc2vec (<a href="https://radimrehurek.com/gensim/models/doc2vec.html" rel="nofollow noreferrer">https://radimrehurek.com/gensim/models/doc2vec.html</a>) or at sentence-transformers (<a href="https://github.com/UKPLab/sentence-transformers" rel="nofollow noreferrer">https://github.com/UKPLab/sentence-transformers</a>) as mentioned in the other answers. If you go with sentence-transformers, it is suggested that you train a model on you're domain specific corpus to get good results.</p> | 2020-12-07 10:56:46.473000+00:00 | 2020-12-07 10:56:46.473000+00:00 | null | null | 55,619,176 | <p>For ElMo, FastText and Word2Vec, I'm averaging the word embeddings within a sentence and using HDBSCAN/KMeans clustering to group similar sentences.</p>
<p>A good example of the implementation can be seen in this short article: <a href="http://ai.intelligentonlinetools.com/ml/text-clustering-word-embedding-machine-learning/" rel="noreferrer">http://ai.intelligentonlinetools.com/ml/text-clustering-word-embedding-machine-learning/</a></p>
<p>I would like to do the same thing using BERT (using the BERT python package from hugging face), however I am rather unfamiliar with how to extract the raw word/sentence vectors in order to input them into a clustering algorithm. I know that BERT can output sentence representations - so how would I actually extract the raw vectors from a sentence?</p>
<p>Any information would be helpful.</p> | 2019-04-10 18:31:20.630000+00:00 | 2021-09-13 13:49:19.953000+00:00 | 2020-01-28 20:52:15.723000+00:00 | python|nlp|artificial-intelligence|word-embedding|bert-language-model | ['https://arxiv.org/pdf/2008.09470.pdf', 'https://github.com/ddangelov/Top2Vec', 'https://towardsdatascience.com/topic-modeling-with-bert-779f7db187e6', 'https://github.com/MaartenGr/BERTopic', 'https://radimrehurek.com/gensim/models/doc2vec.html', 'https://github.com/UKPLab/sentence-transformers'] | 6 |
49,263,722 | <h2>Transforming a map into a feature vector:</h2>
<p>In your case, you could turn the 7x7x128 map into an array with 6727 dimensions. In case you have an intermediate map, whose flattened array has a high-dimentionality (e.g., 100k-d), you may want to reduce its dimensions (through PCA, for example) because it probably contains a lot of redundancy in its features.</p>
<h2>Combining the output of your layers:</h2>
<p>As for combining the features, there are many ways of doing it. An option that you've mentioned is to create a single vector concatenating the desired layers' outputs and train a classifier on top of that. This is known as <em>early fusion</em> [1], where you combine your features for a single representation before training. </p>
<p>Another possibility is to train a separate classifier for each feature (the output of each intermediate layer, in your case) and then, for a testing image, you combine the outputs/scores of those separate classifiers. This is known as <em>late fusion</em> [1].</p>
<h2>Extras:</h2>
<p>An exploration you could perform is to investigate which layers you should select (either for early- or late-fusion) prior to training your SVM. This [2] is an interesting paper, where the authors explore something similar (analyzing the performance when using the output from the last few layers as features separately). As far as I remember, their investigation is within the context of transfer learning (using a model pre-trained in a similar problem to tackle/solve another task).</p>
<p>[1] <a href="https://www.researchgate.net/profile/Marcel_Worring/publication/221571875_Early_versus_late_fusion_in_semantic_video_analysis/links/0912f508932602eb9a000000.pdf" rel="nofollow noreferrer"><em>"Early versus Late Fusion in Semantic Video Analysis"</em></a></p>
<p>[2] <a href="https://arxiv.org/pdf/1403.6382.pdf" rel="nofollow noreferrer"><em>"CNN Features off-the-shelf: an Astounding Baseline for Recognition"</em></a></p> | 2018-03-13 18:47:17.830000+00:00 | 2018-03-13 18:47:17.830000+00:00 | null | null | 49,227,909 | <p>I want to test the performance of each convolutional layer of my Convolutional Neural Network(CNN) architecture using SVM. I am using MatConvNet Matlab toolbox.</p>
<p>Layers are like that:
Conv1 Relu1 Pool1 (3x3, 32 features) -> Conv2 Relu2 Pool2 (3x3, 64 features) -> Conv3 Relu3 Pool3 (3x3, 128 features) ->Conv4 Relu4 (1x1, 256 features) -> Conv5 (1x1, 4 features)-> Softwamxloss</p>
<p>After training, I removed the loss layer</p>
<pre><code>net.layers=net.layers(1 : end - 1);
</code></pre>
<p>I have the network looks like that
<a href="https://i.stack.imgur.com/ij9ik.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ij9ik.jpg" alt="enter image description here"></a></p>
<p>I can extract the features like that:</p>
<pre><code>feats = vl_simplenn(net, im) ;
Feautre_L1(fea,:) = squeeze(feats(end).x);
</code></pre>
<p>similarly, I remove 2 more layers and extract 256 features from Conv4.
But when I moved to Conv3 the output feature is 7x7x128.
I want to know that how can I used these features<br>
i) Making a single vector
ii) Averaging the values in depth?</p> | 2018-03-12 03:48:57.843000+00:00 | 2018-03-13 18:47:17.830000+00:00 | null | matlab|machine-learning|computer-vision|deep-learning|conv-neural-network | ['https://www.researchgate.net/profile/Marcel_Worring/publication/221571875_Early_versus_late_fusion_in_semantic_video_analysis/links/0912f508932602eb9a000000.pdf', 'https://arxiv.org/pdf/1403.6382.pdf'] | 2 |
61,433,303 | <p>Your custom layers will still use the GPU and you can confirm that as explained in this <a href="https://stackoverflow.com/questions/60542475/confirm-that-tf2-is-using-my-gpu-when-training/60544417#60544417">answer</a>. </p>
<p>You are right though that the custom layers won't use cuDNN. Why does it matter? To quote after NVidia:</p>
<blockquote>
<p>cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers</p>
</blockquote>
<p>In other words, using these optimised primitives will enhance performance of the training. Number of examples with detailed explanation is provided in the <a href="https://arxiv.org/pdf/1410.0759.pdf" rel="nofollow noreferrer">cuDNN: Efficient Primitives for Deep Learning</a> paper. Take for instance <em>spatial convolutions</em>. Non-optimised implementation would use "naive" approach, while cuDNN uses all sorts of tricks to reduce number of operations and batch them appropriately. GPU is still fast when compared to classical CPU, cuDNN just makes it faster. For more recent, independent benchmarks, check out e.g. <a href="https://ieeexplore.ieee.org/document/8721631" rel="nofollow noreferrer">this article</a>.</p>
<p>Still, if Tensorflow runs in the GPU mode, complete computational graph will be executed on the GPU (to my knowledge there's even no simple way you could take out portion of the graph, i.e. intermediate layer, and put on CPU).</p> | 2020-04-25 22:16:06.530000+00:00 | 2020-04-26 09:27:12.283000+00:00 | 2020-04-26 09:27:12.283000+00:00 | null | 61,433,180 | <p>Will completely custom-made layers in TensorFlow automatically be run on GPUs? I noticed that in this document (<a href="https://www.tensorflow.org/guide/keras/rnn#rnn_layers_and_rnn_cells" rel="nofollow noreferrer">https://www.tensorflow.org/guide/keras/rnn#rnn_layers_and_rnn_cells</a>) it seems that the RNN wrappers won't be using CudNN? That means it wouldn't run on the GPU right?</p> | 2020-04-25 22:04:41.647000+00:00 | 2020-04-26 09:27:12.283000+00:00 | null | python|tensorflow|keras | ['https://stackoverflow.com/questions/60542475/confirm-that-tf2-is-using-my-gpu-when-training/60544417#60544417', 'https://arxiv.org/pdf/1410.0759.pdf', 'https://ieeexplore.ieee.org/document/8721631'] | 3 |
65,342,127 | <p>You can use Xpath expressions to address specific elements. On your snippet that would be <code>/ListRecords/record</code>. However I think it misses the document element node with the namespace declaration for the Open Archives Initiative Protocol for Metadata Harvesting. It should be something like:</p>
<pre><code><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
<ListRecords>
<record>
<header>
<identifier>oai:arXiv.org:hep-th/9901001</identifier>
</header>
</record>
</ListRecords>
</OAI-PMH>
</code></pre>
<p>To address the namespace with Xpath you need to register a prefix for it. Then put the feed urls in an array and iterate them:</p>
<pre><code>$mergeDocument = new DOMDocument();
$mergeDocument->loadXML(
'<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/"><ListRecords/></OAI-PMH>'
);
$mergeTarget = $mergeDocument->documentElement->firstChild;
foreach ($feedUrls as $feedUrl) {
$document = new DOMDocument();
$document->load($feedUrl);
$xpath = new DOMXpath($document);
$xpath->registerNamespace('oai', 'http://www.openarchives.org/OAI/2.0/');
foreach ($xpath->evaluate('/oai:OAI-PMH/oai:ListRecords/oai:record') as $record) {
$mergeTarget->appendChild($mergeDocument->importNode($record, TRUE));
}
}
$mergeDocument->formatOutput = TRUE;
echo $mergeDocument->saveXML();
</code></pre> | 2020-12-17 13:56:29.743000+00:00 | 2020-12-17 13:56:29.743000+00:00 | null | null | 64,444,912 | <p>Hey I'm working on an import for a list of elements. The code works for now, but it is not futureproof if there are more items added. The XML uses an unique key and pagination (every 100 items a new key).</p>
<p>Below is my PHP code for the function I've build.</p>
<pre><code><?php
$feedUrl = '[url of the feed]';
$doc1 = new DOMDocument();
$doc1->load($feedUrl);
$doc1_token = $doc1->getElementsByTagName('resumptionToken')[0]->nodeValue;
$doc2 = new DOMDocument();
$doc2->load($feedUrl . '&resumptionToken=' . $doc1_token);
$doc2_token = $doc2->getElementsByTagName('resumptionToken')[0]->nodeValue;
$doc3 = new DOMDocument();
$doc3->load($feedUrl . '&resumptionToken=' . $doc2_token);
$doc3_token = $doc3->getElementsByTagName('resumptionToken')[0]->nodeValue;
$doc4 = new DOMDocument();
$doc4->load($feedUrl . '&resumptionToken=' . $doc3_token);
$doc4_token = $doc4->getElementsByTagName('resumptionToken')[0]->nodeValue;
$doc5 = new DOMDocument();
$doc5->load($feedUrl . '&resumptionToken=' . $doc4_token);
$doc5_token = $doc5->getElementsByTagName('resumptionToken')[0]->nodeValue;
// get 'ListRecordes' element of document 1
$list_records = $doc1->getElementsByTagName('ListRecords')->item(0); //edited res - items
// iterate over 'item' elements of document 2
$items2 = $doc2->getElementsByTagName('record');
for ($i = 0; $i < $items2->length; $i ++) {
$item2 = $items2->item($i);
// import/copy item from document 2 to document 1
$item1 = $doc1->importNode($item2, true);
// append imported item to document 1 'res' element
$list_records->appendChild($item1);
}
// iterate over 'item' elements of document 3
$items3 = $doc3->getElementsByTagName('record');
for ($i = 0; $i < $items3->length; $i ++) {
$item3 = $items3->item($i);
// import/copy item from document 3 to document 1
$item1 = $doc1->importNode($item3, true);
// append imported item to document 1 'res' element
$list_records->appendChild($item1);
}
// iterate over 'item' elements of document 4
$items4 = $doc4->getElementsByTagName('record');
for ($i = 0; $i < $items4->length; $i ++) {
$item4 = $items4->item($i);
// import/copy item from document 4 to document 1
$item1 = $doc1->importNode($item4, true);
// append imported item to document 1 'res' element
$list_records->appendChild($item1);
}
// iterate over 'item' elements of document 5
$items5 = $doc5->getElementsByTagName('record');
for ($i = 0; $i < $items5->length; $i ++) {
$item5 = $$items5->item($i);
// import/copy item from document 5 to document 1
$item1 = $doc1->importNode($item5, true);
// append imported item to document 1 'res' element
$list_records->appendChild($item1);
}
$doc1->save('merged.xml'); //edited -added saving into xml file
</code></pre>
<p>I think the code is not perfect, because if the we add more records than 600, the latest one's are not imported in the merged xml.</p>
<p>Besides this there is also an other issue. We have nested "" nodes. We need to merge the "" direct childs only.</p>
<pre><code><ListRecords>
<record>
<header>
...
</header>
<metadata>
<record xmlns="http://www.openarchives.org/OAI/2.0/" priref="100000002">
...
</record>
</metadata>
</record>
</ListRecords>
</code></pre> | 2020-10-20 12:13:02.917000+00:00 | 2020-12-17 13:56:29.743000+00:00 | 2020-10-20 13:10:18.213000+00:00 | php|xml|dom | [] | 0 |
63,657,402 | <p>The <code>Upsampling2D</code> is not the inverse of <code>MaxPooling</code>.</p>
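<p>A quick check of this in TensorFlow/Keras (the 4x4 input is arbitrary):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf

x = tf.constant(np.arange(16, dtype="float32").reshape(1, 4, 4, 1))
pooled = tf.keras.layers.MaxPooling2D(2)(x)          # 4x4 to 2x2, keeps only the maxima
restored = tf.keras.layers.UpSampling2D(2)(pooled)   # 2x2 to 4x4, just repeats values

print(np.array_equal(x.numpy(), restored.numpy()))   # False: the original values are lost
</code></pre>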
<p>In fact, the max-pooling operation is non-invertible, meaning that there is no "opposite" mathematical operation which can lead us back to the state before applying the max-pooling.</p>
<p>Zeiler and Fergus state in their paper <a href="https://arxiv.org/pdf/1311.2901.pdf" rel="nofollow noreferrer">"Visualizing and Understanding Convolutional Networks"</a>:</p>
<blockquote>
<p>In the convnet, the max pooling operation is non-invertible, however
we can obtain an approximate inverse by recording the locations of the
maxima within each pooling region in a set of switch variables.</p>
</blockquote>
<p>Indeed, an <code>Upsampling2D</code> layer with a kernel of 2x2 and a stride of 2 will double the size of the tensor, but that does not mean that <code>Upsampling2D</code> is the inverse of the <code>MaxPooling</code>.</p>
<p>For more information please read the article: <a href="https://arxiv.org/pdf/1311.2901.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1311.2901.pdf</a> and the TensorFlow documentation for it: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling2D" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling2D</a></p> | 2020-08-30 12:25:39.237000+00:00 | 2021-09-30 12:06:45.240000+00:00 | 2021-09-30 12:06:45.240000+00:00 | null | 63,656,884 | <p>in deep learning U-net down sampling Formula is (w-k+2*p)/S +1 but here i have to calculate the up sampling formula suppose current layer size is 8x8x32 to next layer size 16X16x32 what will be the formula for this up-sampling calculation.</p> | 2020-08-30 11:21:38.963000+00:00 | 2021-09-30 12:06:45.240000+00:00 | 2020-08-30 15:51:58.560000+00:00 | deep-learning|conv-neural-network | ['https://arxiv.org/pdf/1311.2901.pdf', 'https://arxiv.org/pdf/1311.2901.pdf', 'https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling2D'] | 3 |
49,732,762 | <p>Your question addresses several topics around using pretrained neural network models.</p>
<p><strong>Theoretical methods</strong></p>
<ol>
<li><p>In general, you can always neutralize categories by removing the corresponding neurons from the softmax layer and computing a new softmax only over the relevant rows of the weight matrix (a small sketch follows this list).<br />
This method will surely work (maybe that is what you meant by <em>filtering</em>), but it will not reduce the network's computation time by much, since most of the flops (multiplications and additions) remain.</p>
</li>
<li><p>Similar to decision trees, pruning is possible but may reduce performance. I will explain what pruning means below; note that the accuracy over your categories may be preserved, since you are not just trimming, you are also predicting fewer categories.</p>
</li>
<li><p>Transfer the learning to your problem. See Stanford's computer vision course <a href="http://cs231n.github.io/transfer-learning/" rel="nofollow noreferrer">here</a>. What I have most often seen work well is keeping the convolution layers as-is and preparing a medium-sized dataset of the objects you'd like to detect.</p>
</li>
</ol>
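<p>As a tiny sketch of point 1 (an illustration only; the logits are made up), recomputing a softmax over just the classes you keep:</p>
<pre><code>import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0, 1.5, 0.3])  # made-up scores for 5 classes
keep = [0, 3]                                  # indices of the classes you care about
print(softmax(logits))        # distribution over all 5 classes
print(softmax(logits[keep]))  # distribution over the kept classes only
</code></pre>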
<p>I will add more theoretical methods if you request, but the above are the most common and accurate I know.</p>
<p><strong>Practical methods</strong></p>
<ol>
<li><p>Make sure you are <em><a href="https://www.tensorflow.org/serving/" rel="nofollow noreferrer">serving</a></em> your TensorFlow model, and not just running inference from a plain Python script. This can significantly improve performance.</p>
</li>
<li><p>You can export the parameters of the network and load them in a faster framework such as <a href="https://github.com/Microsoft/CNTK" rel="nofollow noreferrer">CNTK</a> or <a href="http://caffe.berkeleyvision.org/" rel="nofollow noreferrer">Caffe</a>. These frameworks are written in C++/C# and can run inference much faster. <em>Make sure you load the weights correctly; some frameworks use a different ordering of tensor dimensions when saving/loading (little/big-endian-like issues)</em>.</p>
</li>
<li><p>If your application performs inference on several images, you can distribute the computation across several GPUs. This can also be done in TensorFlow; see <a href="https://www.tensorflow.org/programmers_guide/using_gpu" rel="nofollow noreferrer">Using GPUs</a>.</p>
</li>
</ol>
<p><strong>Pruning a neural network</strong></p>
<p>Maybe this is the most interesting method of adapting big networks for simple tasks. You can see a beginner's guide <a href="https://jacobgil.github.io/deeplearning/pruning-deep-learning" rel="nofollow noreferrer">here</a>.</p>
<p>Pruning means removing parameters from your network, specifically whole nodes/neurons in a decision tree/neural network (respectively). To do that in object detection, the simplest way is as follows (a minimal sketch is given right after the list):</p>
<ol>
<li>Randomly prune neurons from the fully connected layers.</li>
<li>Train one more epoch (or more) with low learning rate, only on objects you'd like to detect.</li>
<li>(optional) Perform the above several times for validation and choose best network.</li>
</ol>
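<p>A minimal sketch of steps 1-2 on a toy Keras model (an illustration only; the layer sizes and the 10% pruning fraction are arbitrary):</p>
<pre><code>import numpy as np
import tensorflow as tf

# Toy stand-in for the fully connected head of a detector.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(256,), name='fc1'),
    tf.keras.layers.Dense(10, activation='softmax', name='out'),
])

# Step 1: zero out a random 10% of the units in the fully connected layer.
layer = model.get_layer('fc1')
kernel, bias = layer.get_weights()
pruned = np.random.choice(kernel.shape[1], size=kernel.shape[1] // 10, replace=False)
kernel[:, pruned] = 0.0
bias[pruned] = 0.0
layer.set_weights([kernel, bias])
# Step 2 would be a short fine-tuning run with a low learning rate on the objects you keep.
</code></pre>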
<p>The above procedure is the most basic one, but you can find plenty of papers that suggest algorithms to do so. For example
<a href="https://arxiv.org/abs/1712.01721" rel="nofollow noreferrer">Automated Pruning for Deep Neural Network Compression</a> and <a href="http://ieeexplore.ieee.org/document/572092/" rel="nofollow noreferrer">An iterative pruning algorithm for feedforward neural networks</a>.</p> | 2018-04-09 12:15:09.893000+00:00 | 2021-11-11 15:08:19.110000+00:00 | 2021-11-11 15:08:19.110000+00:00 | null | 49,732,130 | <p>I am currently using Tensorflow Object Detection API for my human detection app.
I tried filtering in the API itself, which worked, but I am still not satisfied with it because it's slow. So I'm wondering if I could remove the other categories in the model itself to make it faster as well. </p>
<p>If it is not possible, can you please give me other suggestions to make the API faster since I will be using two cameras. Thanks in advance and also pardon my english :)</p> | 2018-04-09 11:42:56.733000+00:00 | 2021-11-11 15:08:19.110000+00:00 | null | python|opencv|tensorflow | ['http://cs231n.github.io/transfer-learning/', 'https://www.tensorflow.org/serving/', 'https://github.com/Microsoft/CNTK', 'http://caffe.berkeleyvision.org/', 'https://www.tensorflow.org/programmers_guide/using_gpu', 'https://jacobgil.github.io/deeplearning/pruning-deep-learning', 'https://arxiv.org/abs/1712.01721', 'http://ieeexplore.ieee.org/document/572092/'] | 8 |
46,411,701 | <p>Offhand I don't see a slick way to make this truly efficient, but it's easy to make it a whole lot faster. If you view your examples as matrices, you're summing them a row at a time. This requires, for each <code>i</code>, finding all the divisors of <code>i</code> and summing their cubes. In all, this requires a number of operations proportional to <code>x**2</code>.</p>
<p>You can easily cut that to a number of operations proportional to <code>x</code>, by summing the matrix by <em>columns</em> instead. Given an integer <code>j</code>, how many integers in <code>1..x</code> are divisible <em>by</em> <code>j</code>? That's easy: there are <code>x//j</code> multiples of <code>j</code> in the range, so divisor <code>j</code> contributes <code>j**3 * (x // j)</code> to the grand total.</p>
<pre><code>def better(x):
return sum(j**3 * (x // j) for j in range(1, x+1))
</code></pre>
<p>That runs much faster, but still takes time proportional to <code>x</code>.</p>
<p>There are lower-level tricks you can play to speed that in turn by constant factors, but they still take <code>O(x)</code> time overall. For example, note that <code>x // j == 1</code> for all <code>j</code> such that <code>x // 2 < j <= x</code>. So about half the terms in the sum can be skipped, replaced by closed-form expressions for a sum of consecutive cubes:</p>
<pre><code>def sum3(x):
"""Return sum(i**3 for i in range(1, x+1))"""
return (x * (x+1) // 2)**2
def better2(x):
result = sum(j**3 * (x // j) for j in range(1, x//2 + 1))
result += sum3(x) - sum3(x//2)
return result
</code></pre>
<p><code>better2()</code> is about twice as fast as <code>better()</code>, but to get faster than <code>O(x)</code> would require deeper insight.</p>
<h2>Quicker</h2>
<p>Thinking about this in spare moments, I still don't have a truly clever idea. But the last idea I gave can be carried to a logical conclusion: don't just group together divisors with only one multiple in range, but also those with two multiples in range, and three, and four, and ... That leads to <code>better3()</code> below, which does a number of operations roughly proportional to the square root of <code>x</code>:</p>
<pre><code>def better3(x):
result = 0
for i in range(1, x+1):
q1 = x // i
# value i has q1 multiples in range
result += i**3 * q1
# which values have i multiples?
q2 = x // (i+1) + 1
assert x // q1 == i == x // q2
if i < q2:
result += i * (sum3(q1) - sum3(q2 - 1))
if i+1 >= q2: # this becomes true when i reaches roughly sqrt(x)
break
return result
</code></pre>
<p>Of course <code>O(sqrt(x))</code> is an enormous improvement over the original <code>O(x**2)</code>, but for very large arguments it's still impractical. For example <code>better3(10**6)</code> appears to complete instantly, but <code>better3(10**12)</code> takes a few seconds, and <code>better3(10**16)</code> is time for a coffee break ;-)</p>
<p>Note: I'm using Python 3. If you're using Python 2, use <code>xrange()</code> instead of <code>range()</code>.</p>
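<p>A quick consistency check against the naive double loop from the question (my addition; small <code>x</code> only, since <code>brute()</code> is quadratic):</p>
<pre><code>def brute(x):
    return sum(j**3 for i in range(1, x+1) for j in range(1, i+1) if i % j == 0)

assert all(brute(x) == better(x) == better2(x) == better3(x)
           for x in range(1, 300))
</code></pre>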
<h2>One more</h2>
<p><code>better4()</code> has the same <code>O(sqrt(x))</code> time behavior as <code>better3()</code>, but does the summations in a different order that allows for simpler code and fewer calls to <code>sum3()</code>. For "large" arguments, it's about 50% faster than <code>better3()</code> on my box.</p>
<pre><code>def better4(x):
result = 0
for i in range(1, x+1):
d = x // i
if d >= i:
# d is the largest divisor that appears `i` times, and
# all divisors less than `d` also appear at least that
            # often. Account for one occurrence of each.
result += sum3(d)
else:
i -= 1
lastd = x // i
# We already accounted for i occurrences of all divisors
# < lastd, and all occurrences of divisors >= lastd.
# Account for the rest.
result += sum(j**3 * (x // j - i)
for j in range(1, lastd))
break
return result
</code></pre>
<p>It may be possible to do better by extending the algorithm in <a href="https://arxiv.org/abs/1206.3369" rel="nofollow noreferrer">"A Successive Approximation Algorithm for Computing the Divisor Summatory Function"</a>. That takes <code>O(cube_root(x))</code> time for the possibly simpler problem of summing the number of divisors. But it's much more involved, and I don't care enough about this problem to pursue it myself ;-)</p>
<h2>Subtlety</h2>
<p>There's a subtlety in the math that's easy to miss, so I'll spell it out, but only as it pertains to <code>better4()</code>.</p>
<p>After <code>d = x // i</code>, the comment claims that <code>d</code> is the largest divisor that appears <code>i</code> times. But is that true? The actual number of times <code>d</code> appears is <code>x // d</code>, which we did <em>not</em> compute. How do we know that <code>x // d</code> in fact equals <code>i</code>?</p>
<p>That's the purpose of the <code>if d >= i:</code> guarding that comment. After <code>d = x // i</code> we know that</p>
<pre><code>x == d*i + r
</code></pre>
<p>for some integer <code>r</code> satisfying <code>0 <= r < i</code>. That's essentially what floor division <em>means</em>. But since <code>d >= i</code> is also known (that's what the <code>if</code> test ensures), it must also be the case that <code>0 <= r < d</code>. And that's how we know <code>x // d</code> is <code>i</code>.</p>
<p>This can break down when <code>d >= i</code> is <em>not</em> true, which is why a different method needs to be used then. For example, if <code>x == 500</code> and <code>i == 51</code>, <code>d</code> (<code>x // i</code>) is 9, but it's certainly not the case that 9 is the largest divisor that appears 51 times. In fact, 9 appears <code>500 // 9 == 55</code> times. While for positive real numbers</p>
<pre><code>d == x/i
</code></pre>
<p>if and only if</p>
<pre><code>i == x/d
</code></pre>
<p>that's <em>not</em> always so for floor division. But, as above, the first does imply the second if we also know that <code>d >= i</code>.</p>
<h2>Just for Fun</h2>
<p><code>better5()</code> rewrites <code>better4()</code> for about another 10% speed gain. The real pedagogical point is to show that it's easy to compute all the loop limits in advance. Part of the point of the odd code structure above is that it magically returns 0 for a 0 input without needing to test for that. <code>better5()</code> gives up on that:</p>
<pre><code>def isqrt(n):
"Return floor(sqrt(n)) for int n > 0."
g = 1 << ((n.bit_length() + 1) >> 1)
d = n // g
while d < g:
g = (d + g) >> 1
d = n // g
return g
def better5(x):
assert x > 0
u = isqrt(x)
v = x // u
return (sum(map(sum3, (x // d for d in range(1, u+1)))) +
sum(x // i * i**3 for i in range(1, v)) -
u * sum3(v-1))
</code></pre> | 2017-09-25 18:17:21.740000+00:00 | 2017-09-29 04:29:34.947000+00:00 | 2017-09-29 04:29:34.947000+00:00 | null | 46,409,868 | <pre><code>Examples,
1.Input=4
Output=111
Explanation,
1 = 1³(divisors of 1)
2 = 1³ + 2³(divisors of 2)
3 = 1³ + 3³(divisors of 3)
4 = 1³ + 2³ + 4³(divisors of 4)
------------------------
sum = 111(output)
1.Input=5
Output=237
Explanation,
1 = 1³(divisors of 1)
2 = 1³ + 2³(divisors of 2)
3 = 1³ + 3³(divisors of 3)
4 = 1³ + 2³ + 4³(divisors of 4)
5 = 1³ + 5³(divisors of 5)
-----------------------------
sum = 237 (output)
x=int(raw_input().strip())
tot=0
for i in range(1,x+1):
for j in range(1,i+1):
if(i%j==0):
tot+=j**3
print tot
</code></pre>
<p>Using this code I can find the answer for small numbers less than one million.
But I want to find the answer for very large numbers. Is there any algorithm
for how to solve it easily for large numbers?</p> | 2017-09-25 16:23:41.287000+00:00 | 2021-04-14 01:51:00.063000+00:00 | 2017-09-25 16:24:31.447000+00:00 | python|python-2.7 | ['https://arxiv.org/abs/1206.3369'] | 1 |
39,536,778 | <p>You are touching on several extremely interesting aspects of Prolog, each well worth <em>several</em> separate questions on its own. I will provide a high-level answer to your actual questions, and hope that you post follow-up questions on the points that are most interesting to you.</p>
<p>First, I will trim down the fragment to its essence:</p>
<pre>
essence(N) :-
foldl(essence_(N), [2|Back], Back, _).
essence_(N, X0, Back, Rest) :-
( X0 #< N ->
X1 #= X0 + 1,
Back = [X1|Rest]
; Back = []
).
</pre>
<p>Note that this prevents the creation of extremely large integers, so that we can really study the memory behaviour of this pattern.</p>
<p>To your first question: <strong>Yes</strong>, this runs in <strong>O(1)</strong> space (assuming constant space for arising integers).</p>
<p><em>Why?</em> Because although you continuously create lists in <code>Back = [X1|Rest]</code>, these lists can all be readily <strong>garbage collected</strong> because you are not referencing them anywhere.</p>
<p>To test memory aspects of your program, consider for example the following query, and limit the global stack of your Prolog system so that you can quickly detect growing memory by running out of (global) stack:</p>
<pre>
?- length(_, E),
N #= 2^E,
portray_clause(N),
essence(N),
false.
</pre>
<p>This yields:</p>
<pre>
1.
2.
...
8388608.
16777216.
etc.
</pre>
<p>It would be <strong>completely</strong> different if you <em>referenced</em> the list somewhere. For example:</p>
<pre>
essence(N) :-
foldl(essence_(N), [2|Back], Back, _),
<b>Back = []</b>.
</pre>
<p>With this very small change, the above query yields:</p>
<pre>
?- length(_, E),
N #= 2^E,
portray_clause(N),
essence(N),
false.
1.
2.
...
1048576.
<b>ERROR: Out of global stack</b>
</pre>
<p>Thus, whether a term is referenced somewhere can significantly influence the memory requirements of your program. This sounds quite frightening, but really is hardly an issue in practice: You either need the term, in which case you need to represent it in memory anyway, or you don't need the term, in which case it is simply no longer referenced in your program and becomes amenable to garbage collection. In fact, the amazing thing is rather that GC works so well in Prolog also for quite complex programs that not much needs to be said about it in many situations. </p>
<hr>
<p>On to your second question: Clearly, using <code>(->)/2</code> is almost always highly problematic in that it limits you to a particular direction of use, destroying the generality we expect from logical relations.</p>
<p>There are several solutions for this. If your CLP(FD) system supports <code>zcompare/3</code> or a similar feature, you can write <code>essence_/3</code> as follows:</p>
<pre>
essence_(N, X0, Back, Rest) :-
zcompare(C, X0, N),
closing(C, X0, Back, Rest).
closing(<, X0, [X1|Rest], Rest) :- X1 #= X0 + 1.
closing(=, _, [], _).
</pre>
<p>Another very nice meta-predicate called <strong><code>if_/3</code></strong> was recently introduced in <a href="https://arxiv.org/abs/1607.01590" rel="nofollow"><strong>Indexing dif/2</strong></a> by Ulrich Neumerkel and Stefan Kral. I leave implementing this with <code>if_/3</code> as a very worthwhile and instructive exercise. Discussing this is <em>well worth its own question</em>!</p>
<hr>
<p>On to the third question: How do states with DCGs relate to this? DCG notation is definitely useful if you want to pass around a global state to several predicates, where only a few of them need to access or modify the state, and most of them simply pass the state through. This is completely analogous to <strong>monads</strong> in Haskell.</p>
<p>The "normal" Prolog solution would be to extend each predicate with 2 arguments to describe the relation between the state before the call of the predicate, and the state after it. DCG notation lets you avoid this hassle.</p>
<p>Importantly, using DCG notation, you can copy imperative algorithms almost verbatim to Prolog, without the hassle of introducing many auxiliary arguments, even if you need global states. As an example for this, consider a fragment of Tarjan's <a href="https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm" rel="nofollow"><strong>strongly connected components</strong></a> algorithm in imperative terms:</p>
<pre>
function strongconnect(v)
// Set the depth index for v to the smallest unused index
v.index := index
v.lowlink := index
index := index + 1
S.push(v)
</pre>
<p>This clearly makes use of a global <strong>stack</strong> and <strong>index</strong>, which ordinarily would become new arguments that you need to pass around in <em>all</em> your predicates. Not so with DCG notation! For the moment, assume that the global entities are simply easily accessible, and so you can code the whole fragment in Prolog as:</p>
<pre>
scc_(V) -->
vindex_is_index(V),
vlowlink_is_index(V),
index_plus_one,
s_push(V),
</pre>
<p><em>This is a very good candidate for its own question</em>, so consider this a teaser.</p>
<hr>
<p>At last, I have a general remark: In my view, we are only <em>at the beginning</em> of finding a series of very powerful and general meta-predicates, and the solution space is still largely <em>unexplored</em>. <code>call/N</code>, <code>maplist/[3,4]</code>, <code>foldl/4</code> and other meta-predicates are definitely a good start. <code>if_/3</code> has the potential to combine good performance with the generality we expect from Prolog predicates.</p> | 2016-09-16 17:07:23.313000+00:00 | 2016-09-16 17:07:23.313000+00:00 | null | null | 39,531,238 | <p>This is a question provoked by an already deleted answer to <a href="https://stackoverflow.com/questions/39462366/how-can-i-determine-that-all-given-coordinates-of-matrix-are-connected">this question</a>. The issue could be summarized as follows:</p>
<blockquote>
<p>Is it possible to fold over a list, with the tail of the list generated while folding?</p>
</blockquote>
<p>Here is what I mean. Say I want to calculate the factorial (this is a silly example but it is just for demonstration), and decide to do it like this:</p>
<pre><code>fac_a(N, F) :-
must_be(nonneg, N),
( N =< 1
-> F = 1
; numlist(2, N, [H|T]),
foldl(multiplication, T, H, F)
).
multiplication(X, Y, Z) :-
Z is Y * X.
</code></pre>
<p>Here, I need to generate the list that I give to <code>foldl</code>. However, I could do the same in constant memory (without generating the list and without using <code>foldl</code>):</p>
<pre><code>fac_b(N, F) :-
must_be(nonneg, N),
( N =< 1
-> F = 1
; fac_b_1(2, N, 2, F)
).
fac_b_1(X, N, Acc, F) :-
( X < N
-> succ(X, X1),
Acc1 is X1 * Acc,
fac_b_1(X1, N, Acc1, F)
; Acc = F
).
</code></pre>
<p>The point here is that unlike the solution that uses <code>foldl</code>, this uses constant memory: no need for generating a list with all values!</p>
<p>Calculating a factorial is not the best example, but it is easier to follow for the stupidity that comes next.</p>
<p>Let's say that I am really afraid of loops (and recursion), and insist on calculating the factorial using a fold. I still would need a list, though. So here is what I might try:</p>
<pre><code>fac_c(N, F) :-
must_be(nonneg, N),
( N =< 1
-> F = 1
; foldl(fac_foldl(N), [2|Back], 2-Back, F-[])
).
fac_foldl(N, X, Acc-Back, F-Rest) :-
( X < N
-> succ(X, X1),
F is Acc * X1,
Back = [X1|Rest]
; Acc = F,
Back = []
).
</code></pre>
<p>To my surprise, this works as intended. I can "seed" the fold with an initial value at the head of a partial list, and keep on adding the next element as I consume the current head. The definition of <code>fac_foldl/4</code> is almost identical to the definition of <code>fac_b_1/4</code> above: the only difference is that the state is maintained differently. My assumption here is that this should use constant memory: <strong>is that assumption wrong?</strong></p>
<p>I know this is silly, but it could however be useful for folding over a list that cannot be known when the fold starts. In the original question we had to find a connected region, given a list of x-y coordinates. It is not enough to fold over the list of x-y coordinates once (you can however <a href="https://en.wikipedia.org/wiki/Connected-component_labeling#Two-pass" rel="nofollow noreferrer">do it in two passes</a>; note that there is at least <a href="http://www.sciencedirect.com/science/article/pii/S1077314202000309" rel="nofollow noreferrer">one better way to do it</a>, referenced in the same Wikipedia article, but this also uses multiple passes; altogether, the multiple-pass algorithms assume constant-time access to neighboring pixels!).</p>
<p>My <a href="https://stackoverflow.com/a/39463442/1812457">own solution to the original "regions" question</a> looks something like this:</p>
<pre><code>set_region_rest([A|As], Region, Rest) :-
sort([A|As], [B|Bs]),
open_set_closed_rest([B], Bs, Region0, Rest),
sort(Region0, Region).
open_set_closed_rest([], Rest, [], Rest).
open_set_closed_rest([X-Y|As], Set, [X-Y|Closed0], Rest) :-
X0 is X-1, X1 is X + 1,
Y0 is Y-1, Y1 is Y + 1,
ord_intersection([X0-Y,X-Y0,X-Y1,X1-Y], Set, New, Set0),
append(New, As, Open),
open_set_closed_rest(Open, Set0, Closed0, Rest).
</code></pre>
<p>Using the same "technique" as above, we can twist this into a fold:</p>
<pre><code>set_region_rest_foldl([A|As], Region, Rest) :-
sort([A|As], [B|Bs]),
foldl(region_foldl, [B|Back],
closed_rest(Region0, Bs)-Back,
closed_rest([], Rest)-[]),
!,
sort(Region0, Region).
region_foldl(X-Y,
closed_rest([X-Y|Closed0], Set)-Back,
closed_rest(Closed0, Set0)-Back0) :-
X0 is X-1, X1 is X + 1,
Y0 is Y-1, Y1 is Y + 1,
ord_intersection([X0-Y,X-Y0,X-Y1,X1-Y], Set, New, Set0),
append(New, Back0, Back).
</code></pre>
<p>This also "works". The fold leaves behind a choice point, because I haven't articulated the end condition as in <code>fac_foldl/4</code> above, so I need a cut right after it (ugly).</p>
<h2>The Questions</h2>
<ul>
<li>Is there a clean way of closing the list and removing the cut? In the factorial example, we know when to stop because we have additional information; however, in the second example, how do we notice that the back of the list should be the empty list?</li>
<li>Is there a hidden problem I am missing?</li>
<li>This looks like its somehow similar to the Implicit State with DCGs, but I have to admit I never quite got how that works; are these connected?</li>
</ul> | 2016-09-16 12:12:24.780000+00:00 | 2016-09-19 07:24:07.800000+00:00 | 2017-05-23 12:32:08.140000+00:00 | prolog|swi-prolog|fold|meta-predicate | ['https://arxiv.org/abs/1607.01590', 'https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm'] | 2 |
61,843,617 | <p><strong>Premise</strong></p>
<p>I'll focus my answer on the image processing part, as I believe implementation details such as traversing a file system are not the core of your problem. Also, all that follows is just my humble opinion; I am sure there are better ways to retrieve your images that I am not aware of. Anyway, I agree with what your prof said and I'll follow the same line of thought, so I'll share some ideas on possible similarity indexes you might use.</p>
<p><strong>Answer</strong></p>
<ul>
<li><em>MSE and SSIM</em> - This is a possible solution, as suggested by your prof. As I assume the low quality images also have a different resolution than the good ones, remember to downsample the good ones (and not upsample the bad ones). A short sketch is given right after this list.</li>
<li><em>Image subtraction (1-norm distance)</em> - Subtract two images -> if they are equal you'll get a black image. If they are slightly different, the non-black pixels (or the sum of the pixel intensity) can be used as a similarity index. This is actually the 1-norm distance.</li>
<li><em>Histogram distance</em> - You can refer to this paper: <a href="https://www.cse.huji.ac.il/~werman/Papers/ECCV2010.pdf" rel="nofollow noreferrer">https://www.cse.huji.ac.il/~werman/Papers/ECCV2010.pdf</a>. Comparing two images' histograms might be potentially robust for your task. Check out this question too: <a href="https://stackoverflow.com/questions/6499491/comparing-two-histograms">Comparing two histograms</a></li>
<li><em>Embedding learning</em> - As I see you included tensorflow, keras or pytorch as tags, let's consider deep learning. This paper came to my
mind: <a href="https://arxiv.org/pdf/1503.03832.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1503.03832.pdf</a> The idea is to learn a
mapping from the image space to a Euclidean space - i.e. compute an
embedding of the image. In the embedding hyperspace, images are
points. This paper learns an embedding function by minimizing the
triplet loss. The triplet loss is meant to maximize the distance
between images of different classes and minimize the distance between
images of the same class. You could train the same model on a Dataset
like ImageNet. You could augment the dataset by lowering the
quality of the images, in order to make the model "invariant" to
differences in image quality (e.g. down-sampling followed by
up-sampling, image compression, adding noise, etc.). Once you can
compute embeddings, you could compute the Euclidean distance (as a
substitute for the MSE). This <strong>might</strong> work better than using MSE/SSIM as similarity indexes. Repo of FaceNet: <a href="https://github.com/timesler/facenet-pytorch" rel="nofollow noreferrer">https://github.com/timesler/facenet-pytorch</a>. Another general-purpose approach (not related to faces) which might help you: <a href="https://github.com/zegami/image-similarity-clustering" rel="nofollow noreferrer">https://github.com/zegami/image-similarity-clustering</a>. </li>
<li><em>Siamese networks for predicting similarity score</em> - I am referring to this paper on face verification: <a href="http://bmvc2018.org/contents/papers/0410.pdf" rel="nofollow noreferrer">http://bmvc2018.org/contents/papers/0410.pdf</a>. The siamese network takes two images as input and outputs a value in the [0, 1]. We can interpret the output as the probability that the two images belong to the same class. You can train a model of this kind to predict 1 for image pairs of the following kind: (good quality image, artificially degraded image). To degrade the image, again, you can combine e.g. down-sampling followed by
up-sampling, image compression, adding noise, etc. Let the model predict 0 for image pairs of different classes (e.g. different images). The output of the network can be used as a similarity index. </li>
</ul>
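<p>Here is a small sketch of the first idea with scikit-image (an illustration only; the astronaut test image and noise level are arbitrary stand-ins, and older scikit-image versions use <code>multichannel=True</code> instead of <code>channel_axis</code>):</p>
<pre><code>import numpy as np
from skimage import data, img_as_float
from skimage.metrics import mean_squared_error, structural_similarity
from skimage.transform import resize

good = img_as_float(data.astronaut())                  # stand-in for a high-quality photo
rng = np.random.default_rng(0)
bad = np.clip(resize(good, (128, 128, 3), anti_aliasing=True)
              + rng.normal(0, 0.05, (128, 128, 3)), 0, 1)  # simulated low-quality copy

# Compare at the low-quality resolution: downsample the good image first.
good_small = resize(good, bad.shape, anti_aliasing=True)
print(mean_squared_error(good_small, bad))             # lower means more similar
print(structural_similarity(good_small, bad, channel_axis=-1, data_range=1.0))  # higher means more similar
</code></pre>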
<p><strong>Remark 1</strong></p>
<p>These different approaches can also be combined. They all provide you with similarity indexes, so you can very easily average the outcomes.</p>
<p><strong>Remark 2</strong></p>
<p>If you only need to do it once, the effort you need to put in implementing and training deep models might be not justified. I would not suggest it. Still, you can consider it if you can't find any other solution and that Mac is REALLY FULL of images and a manual search is not possible. </p> | 2020-05-16 21:10:48.187000+00:00 | 2020-05-17 07:17:05.247000+00:00 | 2020-05-17 07:17:05.247000+00:00 | null | 61,553,923 | <p>I have a bunch of poor quality photos that I extracted from a pdf. Somebody I know has the good quality photo's somewhere on her computer(Mac), but it's my understanding that it will be difficult to find them.</p>
<p>I would like to</p>
<ul>
<li>loop through each poor quality photo</li>
<li>perform a reverse image search using each poor quality photo as the query image and using this persons computer as the database to search for the higher quality images</li>
<li>and create a copy of each high quality image in one destination folder.</li>
</ul>
<p>Example pseudocode</p>
<pre><code>for each image in poorQualityImages:
search ./macComputer for a higherQualityImage of image
copy higherQualityImage to ./higherQualityImages
</code></pre>
<p>I need to perform this action once.
I am looking for a <strong>tool, github repo or library</strong> which can perform this functionality more so than a deep understanding of content based image retrieval.</p>
<hr />
<p>There's <a href="https://www.reddit.com/r/DataHoarder/comments/asr5iz/reverse_image_search_for_local_files/" rel="nofollow noreferrer">a post on reddit</a> where someone was trying to do something similar</p>
<p><a href="https://github.com/knjcode/imgdupes" rel="nofollow noreferrer">imgdupes</a> is a program which seems like it almost achieves this, but I do not want to delete the duplicates, I want to copy the highest quality duplicate to a destination folder</p>
<hr />
<p><strong>Update</strong></p>
<p>Emailed my previous image processing prof and he sent me this</p>
<blockquote>
<p>Off the top of my head, nothing out of the box.</p>
<p>No guaranteed solution here, but you can narrow the search space.
You’d need a little program that outputs the MSE or SSIM similarity
index between two images, and then write another program or shell
script that scans the hard drive and computes the MSE between each
image on the hard drive and each query image, then check the images
with the top X percent similarity score.</p>
<p>Something like that. Still not maybe guaranteed to find everything
you want. And if the low quality images are of different pixel
dimensions than the high quality images, you’d have to do some image
scaling to get the similarity index. If the poor quality images have
different aspect ratios, that’s even worse.</p>
<p>So I think it’s not hard but not trivial either. The degree of
difficulty is partly dependent on the nature of the corruption in the
low quality images.</p>
</blockquote>
<hr />
<p><strong>UPDATE</strong></p>
<p><a href="https://github.com/samgermain/Copy_Duplicate_Images" rel="nofollow noreferrer">Github project I wrote which achieves what I want</a></p> | 2020-05-02 03:01:21.120000+00:00 | 2022-05-10 09:07:44.033000+00:00 | 2020-12-17 16:42:51.063000+00:00 | tensorflow|image-processing|keras|pytorch|cbir | ['https://www.cse.huji.ac.il/~werman/Papers/ECCV2010.pdf', 'https://stackoverflow.com/questions/6499491/comparing-two-histograms', 'https://arxiv.org/pdf/1503.03832.pdf', 'https://github.com/timesler/facenet-pytorch', 'https://github.com/zegami/image-similarity-clustering', 'http://bmvc2018.org/contents/papers/0410.pdf'] | 6 |
63,165,545 | <p>Let me talk about random integer generating algorithms that are "optimal" in terms of the number of random bits it uses on average. In the rest of this post, we will assume we have a "true" random generator that can produce unbiased and independent random bits. (Here, a random "byte" will be a block of 8 random bits.)</p>
<p>In 1976, D. E. Knuth and A. C. Yao showed that any algorithm that produces random integers with a given probability, using only random bits, can be represented as a binary tree, where random bits indicate which way to traverse the tree and each leaf (endpoint) corresponds to an outcome. Knuth and Yao showed that any <em>optimal</em> binary tree algorithm for generating integers in <code>[0, n)</code> uniformly, will need <strong>at least <code>log2(n)</code> and at most <code>log2(n) + 2</code> bits on average</strong>. (Thus, even an <em>optimal</em> algorithm has a chance of "wasting" bits.) See below for examples of optimal algorithms.</p>
<p>However, any <em>optimal</em> integer generator that is also <em>unbiased</em> will, in general, run forever in the worst case, as also shown by Knuth and Yao. Going back to the binary tree, each one of the n outcomes labels leaves in the binary tree so that each integer in [0, n) can occur with probability 1/n. But if 1/n has a non-terminating binary expansion (which will be the case if n is not a power of 2), this binary tree will necessarily either—</p>
<ul>
<li>Have an "infinite" depth, or</li>
<li>include "rejection" leaves at the end of the tree,</li>
</ul>
<p>And in either case, the algorithm will run forever in the worst case, even if it uses very few random bits on average. (On the other hand, when n is a power of 2, the optimal binary tree will have no rejection nodes and require exactly log2(n) bits before returning an outcome, so that no bits will be "wasted".)</p>
<p>Thus, in general, <strong>a random integer generator can be <em>either</em> unbiased <em>or</em> constant-time (or even neither), but not both.</strong> And the binary tree concept shows that there is no way in general to "fix" the worst case of an indefinite running time without introducing bias. For instance, modulo reductions (e.g., <code>rand() % n</code>) are equivalent to a binary tree in which rejection leaves are replaced with labeled outcomes — but since there are more possible outcomes than rejection leaves, only some of the outcomes can take the place of the rejection leaves, introducing bias. A similar kind of binary tree — and a similar kind of bias — results if you stop rejecting after a set number of iterations. (However, this bias may be negligible depending on the application. There are also security aspects to random integer generation, which are too complicated to discuss in this answer.)</p>
<h3>Fast Dice Roller Implementation</h3>
<p>There are many examples of <em>optimal</em> algorithms in the sense given earlier. One of them is the <a href="https://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Fast Dice Roller</a> by J. Lumbroso (2013) (implemented below), and perhaps other examples are the algorithm given as an <a href="https://stackoverflow.com/a/10481147/815724">answer to a similar <em>Stack Overflow</em> question</a> and the algorithm given in the <a href="http://mathforum.org/library/drmath/view/65653.html" rel="nofollow noreferrer">Math Forum</a> in 2004. On the other hand, all the algorithms <a href="https://www.pcg-random.org/posts/bounded-rands.html" rel="nofollow noreferrer">surveyed by M. O'Neill</a> are not optimal, since they rely on generating blocks of random bits at a time. See also my note on <a href="https://peteroupc.github.io/randomfunc.html#RNDINT_Random_Integers_in_0_N" rel="nofollow noreferrer">integer generating algorithms</a>.</p>
<p>The following is a JavaScript implementation of the Fast Dice Roller. Note that it uses rejection events and a loop to ensure it's unbiased. <code>nextBit()</code> is a method that produces an independent unbiased random bit (e.g., <code>Math.random()<0.5 ? 1 : 0</code>, which isn't necessarily efficient in terms of random bits ultimately relied on in JavaScript).</p>
<pre><code>function randomInt(minInclusive, maxExclusive) {
var maxInclusive = (maxExclusive - minInclusive) - 1
var x = 1
var y = 0
while(true) {
x = x * 2
var randomBit = nextBit()
y = y * 2 + randomBit
if(x > maxInclusive) {
if (y <= maxInclusive) { return y + minInclusive }
// Rejection
x = x - maxInclusive - 1
y = y - maxInclusive - 1
}
}
}
</code></pre>
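<p>For reference, a line-for-line Python port of the JavaScript above (my sketch; <code>random.getrandbits(1)</code> stands in for <code>nextBit()</code> and is not a cryptographic source):</p>
<pre><code>import random

def random_int(min_inclusive, max_exclusive, next_bit=lambda: random.getrandbits(1)):
    max_incl = (max_exclusive - min_inclusive) - 1
    x, y = 1, 0
    while True:
        x = x * 2
        y = y * 2 + next_bit()
        if x > max_incl:
            if y <= max_incl:
                return y + min_inclusive
            # Rejection: discard and continue with the leftover randomness.
            x = x - max_incl - 1
            y = y - max_incl - 1
</code></pre>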
<h3>Reducing Bit Waste</h3>
<p>Recall that "optimal" integer generators, such as the Fast Dice Roller above, use on average at least <code>log2(n)</code> bits (the lower bound), or come within 2 bits of this lower bound on average. There are various techniques that can be used to bring an algorithm (even a less than optimal one) closer to this theoretical lower bound, including batching and randomness extraction. These are discussed in:</p>
<ul>
<li>The Fast Dice Roller paper itself, see section 3.1 (batching).</li>
<li>The paper "<a href="https://arxiv.org/abs/1502.02539" rel="nofollow noreferrer">Random variate generation using only finitely many unbiased, independently and identically distributed random bits</a>" by Devroye and Gravel, section 2.3 (randomness extraction).</li>
<li>The Math Forum page given above (recycling).</li>
</ul>
<p>The following are examples of "batching": to generate four random digits from 0 through 9, simply generate a random integer in [0, 9999], and break the resulting number into digits. Generating eight random digits instead would involve the interval [0, 99999999].</p> | 2020-07-30 03:03:35.433000+00:00 | 2021-06-02 23:43:45.397000+00:00 | 2021-06-02 23:43:45.397000+00:00 | null | 6,299,197 | <p>I'm using the RNG crypto provider to generate numbers in a range the truly naive way:</p>
<pre><code>byte[] bytes = new byte[4];
int result = 0;
while(result < min || result > max)
{
RNG.GetBytes(bytes);
result = BitConverter.ToInt32(bytes);
}
</code></pre>
<p>This is great when the range is wide enough such that there is a decent chance of getting a result, but earlier today I hit a scenario where the range is sufficiently small (within 10,000 numbers) that it can take an age.</p>
<p>So I've been trying to think of a better way that will achieve a decent distribution but will be faster. But now I'm getting into deeper maths and statistics that I simply didn't do at school, or at least if I did I have forgotten it all!</p>
<p>My idea is:</p>
<ul>
<li>get the highest set bit positions of the min and max, e.g. for 4 it would be 3 and for 17 it would be 5</li>
<li>select a number of bytes from the prng that could contain at least the high bits, e.g.1 in this case for 8 bits</li>
<li>see if any of the upper bits in the allowed range (3-5) are set</li>
<li>if yes, turn that into a number up to and including the high bit</li>
<li>if that number is between min and max, return.</li>
<li>if any of the previous tests fail, start again.</li>
</ul>
<p>Like I say, this could be exceedingly naive, but I am sure it will return a match in a narrow range faster than the current implementation. I'm not in front of a computer at the moment so can't test, will be doing that tomorrow morning UK time.</p>
<p>But of course speed isn't my only concern, otherwise I would just use <code>Random</code>.</p>
<p>The biggest concern I have with the above approach is that I am always throwing away up to 7 bits that were generated by the prng, which seems bad. I thought of ways to factor them in (e.g. a simple addition) but they seem terribly unscientific hacks!</p>
<p>I know about the mod trick, where you only have to generate one sequence, but I also know about its weaknesses.</p>
<p>Is this a dead end? Ultimately if the best solution is going to be to stick with the current implementation I will, I just feel that there must be a better way!</p> | 2011-06-09 20:58:41.670000+00:00 | 2021-12-26 17:30:14.753000+00:00 | 2020-07-30 06:00:15.700000+00:00 | c#|security|random | ['https://arxiv.org/abs/1304.1916', 'https://stackoverflow.com/a/10481147/815724', 'http://mathforum.org/library/drmath/view/65653.html', 'https://www.pcg-random.org/posts/bounded-rands.html', 'https://peteroupc.github.io/randomfunc.html#RNDINT_Random_Integers_in_0_N', 'https://arxiv.org/abs/1502.02539'] | 6 |
45,302,758 | <p>To generate N positive numbers that sum to a positive number M at random, where each possible combination is equally likely:</p>
<ul>
<li><p>Generate N exponentially-distributed random variates. One way to generate such a number can be written as—</p>
<pre><code> number = -ln(1.0 - RNDU())
</code></pre>
<p>where <code>ln(x)</code> is the natural logarithm of <code>x</code> and <code>RNDU()</code> is a method that returns a uniform random variate greater than 0 and less than 1. Note that generating the N variates with a uniform distribution is not ideal because a biased distribution of random variate combinations will result. However, the implementation given above has several problems, such as <a href="https://arxiv.org/abs/1704.07949" rel="nofollow noreferrer">being ill-conditioned at large values</a> because of the distribution's right-sided tail, especially when the implementation involves floating-point arithmetic. Another implementation is given in <a href="https://stackoverflow.com/questions/2106503/pseudorandom-number-generator-exponential-distribution/65395376#65395376">another answer</a>.</p>
</li>
<li><p>Divide the numbers generated this way by their sum.</p>
</li>
<li><p>Multiply each number by M.</p>
</li>
</ul>
<p>The result is N numbers whose sum is approximately equal to M (I say "approximately" because of rounding error). See also the Wikipedia article <a href="https://en.wikipedia.org/wiki/Dirichlet_distribution" rel="nofollow noreferrer">Dirichlet distribution</a>.</p>
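<p>A compact numpy version of the three steps above (a sketch, not the only way to do it):</p>
<pre><code>import numpy as np

def random_sum_to(n, m):
    e = np.random.exponential(size=n)  # N exponential variates
    return e / e.sum() * m             # normalize, then scale so the parts sum to m

parts = random_sum_to(5, 1.0)
print(parts, parts.sum())              # five positive numbers summing to (roughly) 1
</code></pre>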
<p>This problem is also equivalent to the problem of <a href="https://stackoverflow.com/questions/3010837/sample-uniformly-at-random-from-an-n-dimensional-unit-simplex">generating random variates uniformly from an N-dimensional unit simplex</a>.</p>
<hr />
<p>However, for better accuracy (compared to the alternative of using floating-point numbers, which often occurs in practice), you should consider generating <a href="https://peteroupc.github.io/randomfunc.html#Random_Integers_with_a_Given_Positive_Sum" rel="nofollow noreferrer"><code>n</code> random <em>integers</em> that sum to an <em>integer</em> <code>m * x</code></a>, and treating those integers as the numerators to <code>n</code> rational numbers with denominator <code>x</code> (and will thus sum to <code>m</code> assuming <code>m</code> is an integer). You can choose <code>x</code> to be a large number such as 2<sup>32</sup> or 2<sup>64</sup> or some other number with the desired precision. If <code>x</code> is 1 and <code>m</code> is an integer, this solves the problem of generating random <em>integers</em> that sum to <code>m</code>.</p>
<p>The following pseudocode shows two methods for generating <code>n</code> uniform random integers with a given positive sum, in random order. (The algorithm for this was presented in Smith and Tromble, "Sampling Uniformly from the Unit Simplex", 2004.) In the pseudocode below—</p>
<ul>
<li>the method <code>PositiveIntegersWithSum</code> returns <code>n</code> integers <strong>greater than 0</strong> that sum to <code>m</code>, in random order,</li>
<li>the method <code>IntegersWithSum</code> returns <code>n</code> integers <strong>0 or greater</strong> that sum to <code>m</code>, in random order, and</li>
<li><code>Sort(list)</code> sorts the items in <code>list</code> in ascending order (note that sort algorithms are outside the scope of this answer).</li>
</ul>
<p> </p>
<pre><code>METHOD PositiveIntegersWithSum(n, m)
if n <= 0 or m <=0: return error
ls = [0]
ret = NewList()
while size(ls) < n
c = RNDINTEXCRANGE(1, m)
found = false
for j in 1...size(ls)
if ls[j] == c
found = true
break
end
end
if found == false: AddItem(ls, c)
end
Sort(ls)
AddItem(ls, m)
for i in 1...size(ls): AddItem(ret,
ls[i] - ls[i - 1])
return ret
END METHOD
METHOD IntegersWithSum(n, m)
if n <= 0 or m <=0: return error
ret = PositiveIntegersWithSum(n, m + n)
for i in 0...size(ret): ret[i] = ret[i] - 1
return ret
END METHOD
</code></pre>
<p>Here, <code>RNDINTEXCRANGE(a, b)</code> returns a uniform random integer in the interval [a, b).</p> | 2017-07-25 12:05:05.627000+00:00 | 2022-06-12 01:46:30.247000+00:00 | 2022-06-12 01:46:30.247000+00:00 | null | 2,640,053 | <p>I want to get N random numbers whose sum is a value.</p>
<p>For example, let's suppose I want 5 random numbers that sum to 1.</p>
<p>Then, a valid possibility is:</p>
<pre><code>0.2 0.2 0.2 0.2 0.2
</code></pre>
<p>Another possibility is:</p>
<pre><code>0.8 0.1 0.03 0.03 0.04
</code></pre>
<p>And so on. I need this for the creation of a matrix of belongings for Fuzzy C-means.</p> | 2010-04-14 18:37:38.730000+00:00 | 2022-06-12 01:46:30.247000+00:00 | 2019-04-18 04:54:26.590000+00:00 | random|language-agnostic|sum | ['https://arxiv.org/abs/1704.07949', 'https://stackoverflow.com/questions/2106503/pseudorandom-number-generator-exponential-distribution/65395376#65395376', 'https://en.wikipedia.org/wiki/Dirichlet_distribution', 'https://stackoverflow.com/questions/3010837/sample-uniformly-at-random-from-an-n-dimensional-unit-simplex', 'https://peteroupc.github.io/randomfunc.html#Random_Integers_with_a_Given_Positive_Sum'] | 5 |
57,225,765 | <p>If you need the inception distance, then you can use a less generic function called <code>tf.contrib.gan.eval.frechet_inception_distance</code> which doesn't ask for a <code>classifier_fn</code> argument:</p>
<pre><code>fid = tf.contrib.gan.eval.frechet_inception_distance(real_images, fake_images)
</code></pre>
<p>However, when I tried to use this function with <code>v1.14</code> in eager execution mode, I got errors of various kinds, so eventually I decided to go with a custom solution. It will probably be helpful for you as well.</p>
<p>I encountered the following implementation by <a href="https://machinelearningmastery.com" rel="nofollow noreferrer">Jason Brownlee</a> that seems to match the description from the original paper:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import scipy.linalg
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.compat.v1 import ConfigProto
from skimage.transform import resize
tf.enable_eager_execution()
config = ConfigProto()
config.gpu_options.allow_growth = True
tf.keras.backend.set_session(tf.Session(config=config))
def scale_images(images, new_shape):
return np.asarray([resize(image, new_shape, 0) for image in images])
def calculate_fid(model, images1, images2):
f1, f2 = [model.predict(im) for im in (images1, images2)]
mean1, sigma1 = f1.mean(axis=0), np.cov(f1, rowvar=False)
mean2, sigma2 = f2.mean(axis=0), np.cov(f2, rowvar=False)
sum_sq_diff = np.sum((mean1 - mean2)**2)
cov_mean = scipy.linalg.sqrtm(sigma1.dot(sigma2))
if np.iscomplexobj(cov_mean):
cov_mean = cov_mean.real
fid = sum_sq_diff + np.trace(sigma1 + sigma2 - 2.0*cov_mean)
return fid
if __name__ == '__main__':
input_shape = (299, 299, 3)
inception = InceptionV3(include_top=False, pooling='avg', input_shape=input_shape)
(dataset, _), _ = keras.datasets.cifar10.load_data()
dataset = dataset[:100]
dataset = scale_images(dataset, input_shape)
noise = preprocess_input(np.clip(255*np.random.uniform(size=dataset.shape), 0, 255))
noise = scale_images(noise, input_shape)
print('FID:', calculate_fid(inception, dataset, noise))
</code></pre>
<p>So we're performing the following steps:</p>
<ol>
<li><p>re-scale images to the shape expected by <code>InceptionV3</code>;</p></li>
<li><p>transform the images using <code>inception_v3.preprocess_input</code>;</p></li>
<li><p>pass both tensors through <code>InceptionV3</code> network (without top layer);</p></li>
<li><p>use the formula <a href="https://arxiv.org/pdf/1706.08500.pdf" rel="nofollow noreferrer">from the original paper</a> with the computed features as input parameters.</p></li>
</ol>
<hr>
<blockquote>
<p>Here is an excerpt from the mentioned paper.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/9gUDE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9gUDE.png" alt="enter image description here"></a></p> | 2019-07-26 19:05:39.090000+00:00 | 2019-07-26 19:05:39.090000+00:00 | null | null | 55,421,153 | <p>How to choose the value of classifier_fn in tensorflow, I couldn't find any example about it:</p>
<pre><code> tf.contrib.gan.eval.frechet_classifier_distance(
real_images,
generated_images,
classifier_fn,
num_batches=1
</code></pre>
<p>)</p> | 2019-03-29 15:53:57.437000+00:00 | 2019-07-26 19:05:39.090000+00:00 | null | python|tensorflow | ['https://machinelearningmastery.com', 'https://arxiv.org/pdf/1706.08500.pdf', 'https://i.stack.imgur.com/9gUDE.png'] | 3 |
50,884,034 | <p>Unfortunately, batch size is a hyperparameter you will have to tune by cross-validation. In practice, start with a large value (e.g. 1024) and then halve it until you see performance improve.</p>
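<p>As a rough sketch of that halving procedure (my illustration; the toy data and LSTM sizes just mirror the shapes in the question, and the epoch count is kept tiny):</p>
<pre><code>import numpy as np
import tensorflow as tf

# Toy stand-ins; replace with your real X, y.
X = np.random.rand(2000, 240, 1).astype('float32')
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, 2000), 2)

def build_model():
    m = tf.keras.Sequential([
        tf.keras.layers.LSTM(25, input_shape=(240, 1)),
        tf.keras.layers.Dense(2, activation='softmax'),
    ])
    m.compile(loss='binary_crossentropy', optimizer='rmsprop')
    return m

best = None
for batch_size in [1024, 512, 256, 128]:
    hist = build_model().fit(X, y, epochs=3, batch_size=batch_size,
                             validation_split=0.2, verbose=0)
    val_loss = min(hist.history['val_loss'])
    print(batch_size, val_loss)
    if best is not None and val_loss >= best:
        break  # halving further stopped helping
    best = val_loss
</code></pre>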
<p>There are also a few papers that show that the optimal learning rate and batch size are roughly inversely correlated: <a href="https://arxiv.org/abs/1711.00489" rel="nofollow noreferrer">https://arxiv.org/abs/1711.00489</a></p> | 2018-06-16 00:14:37.153000+00:00 | 2018-06-16 00:14:37.153000+00:00 | null | null | 50,883,917 | <p>In an LSTM network I'm passing as feature an array of the form </p>
<pre><code>X.shape
(350000, 240, 1)
</code></pre>
<p>With a binary categorical target of the form</p>
<pre><code>y.shape
(350000, 2)
</code></pre>
<p>How can I estimate the optimal batch size to minimize learning time without losing accuracy?</p>
<p>Here's the setup:</p>
<pre><code>model = Sequential()
model.add(LSTM(25, input_shape=(240, 1)))
model.add(Dropout(0.1))
model.add(Dense(2, activation='softmax'))
model.compile(loss="binary_crossentropy", optimizer="rmsprop")
model.fit(X_s, y_s, epochs=1000, batch_size=512, verbose=1, shuffle=False, callbacks=[EarlyStopping(patience=10)])
</code></pre> | 2018-06-15 23:53:42.540000+00:00 | 2018-06-16 00:14:37.153000+00:00 | null | python|tensorflow|machine-learning|keras | ['https://arxiv.org/abs/1711.00489'] | 1 |
42,322,361 | <p>Syntaxnet can be used for named entity recognition, e.g. see: <a href="https://stackoverflow.com/a/39833949/395857">Named Entity Recognition with Syntaxnet</a></p>
<p>word2vec alone isn't very effective for named entity recognition. I don't think seq2seq is commonly used for that task either.</p>
<p>As drpng mentions, you may want to look at <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/crf" rel="noreferrer">tensorflow/tree/master/tensorflow/contrib/crf</a>. Adding an LSTM before the CRF layer would help a bit, which gives <a href="https://arxiv.org/pdf/1606.03475.pdf" rel="noreferrer">something like</a>:</p>
<p><a href="https://i.stack.imgur.com/RIV1j.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RIV1j.png" alt="enter image description here"></a></p>
<p>LSTM+CRF code in TensorFlow: <a href="https://github.com/Franck-Dernoncourt/NeuroNER" rel="noreferrer">https://github.com/Franck-Dernoncourt/NeuroNER</a></p> | 2017-02-19 00:43:51.273000+00:00 | 2017-03-14 04:59:34.763000+00:00 | 2017-05-23 12:10:11.523000+00:00 | null | 42,318,945 | <p>I'm trying to work out what's the best model to adapt for an open named entity recognition problem (biology/chemistry, so no dictionary of entities exists but they have to be identified by context).</p>
<p>Currently my best guess is to adapt Syntaxnet so that instead of tagging words as N, V, ADJ etc, it learns to tag as BEGINNING, INSIDE, OUT (IOB notation).</p>
<p>However I am not sure which of these approaches is the best?</p>
<ul>
<li>Syntaxnet</li>
<li>word2vec</li>
<li>seq2seq (I think this is not the right one as I need it to learn on two aligned sequences, whereas seq2seq is designed for sequences of differing lengths as in translation)</li>
</ul>
<p>Would be grateful for a pointer to the right method! thanks!</p> | 2017-02-18 18:19:31.367000+00:00 | 2017-03-14 04:59:34.763000+00:00 | null | tensorflow|word2vec|named-entity-recognition|syntaxnet | ['https://stackoverflow.com/a/39833949/395857', 'https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/crf', 'https://arxiv.org/pdf/1606.03475.pdf', 'https://i.stack.imgur.com/RIV1j.png', 'https://github.com/Franck-Dernoncourt/NeuroNER'] | 5 |
33,220,638 | <p>May I suggest you read my paper <a href="http://arxiv.org/ftp/arxiv/papers/1411/1411.2901.pdf" rel="nofollow">http://arxiv.org/ftp/arxiv/papers/1411/1411.2901.pdf</a>
There you will find a common splitting mechanism to determine satisfiability which apparently is similar to your procedure. The procedure is polynomial (actually linear) in each step but the problem is that the formula length blows up in each step. (As pointed out in one of the answers to your question). In the paper I have addressed the question whether and under which circumstances this blow up is prevented systematically.</p> | 2015-10-19 17:40:37.137000+00:00 | 2015-10-19 17:40:37.137000+00:00 | null | null | 32,375,900 | <p>I have an interesting algorithm for 3SAT in mind that I wanted to implement but was not able to code for the same so unable to see if it really works.
The algorithm is defined in a Microsoft Word file here:
<a href="https://www.dropbox.com/s/7j1mvt7jubf2vb1/3SAT%20algorithm.docx?dl=0" rel="nofollow">DropBox Link for 3SAT algorithm</a>
I do not know if this algorithm really works and if it does what is its complexity. I would really like to know about its complexity though. Please help me regarding this as if it is in polynomial time then I would have proved P=NP!</p> | 2015-09-03 12:39:04.407000+00:00 | 2015-10-19 17:40:37.137000+00:00 | null | algorithm|time-complexity|sat | ['http://arxiv.org/ftp/arxiv/papers/1411/1411.2901.pdf'] | 1 |
11,248,148 | <p>Heterogeneous databases are a tough area and there's a lot of research going on. You can't expect an out-of-the-box solution; it depends heavily on the databases, schemas, data, and security concerns involved. To get you going, read this paper: <a href="http://arxiv.org/pdf/0912.0579v1.pdf" rel="nofollow">A Multidatabase System as 4-Tiered Client-Server Distributed Heterogeneous Database System</a></p>
<p>If you are free in choosing the scenario, then make your life as easy as you can:</p>
<ul>
<li>use the same schema on all databases</li>
<li>use plain JDBC access for each database (you will learn more this way and you don't have to deal with ORM framework bloat)</li>
<li>just use one single, simple table at the beginning</li>
<li>build up the required components for a distributed scenario (check the linked paper and search the internet for details)</li>
<li>put everything together</li>
<li>enjoy</li>
</ul> | 2012-06-28 15:41:01.733000+00:00 | 2012-06-28 15:41:01.733000+00:00 | null | null | 11,246,548 | <p>I am a student, major in database management. On my 5th semester, we are required to create a system using heterogeneous database. We must use at least 4 different database. My choice would be MySQL, MS SQL Server, Oracle and PostgreSQL since these are among the most popular and matured database. </p>
<p>The problem is that so far, no group has ever managed to connect to 4 different databases. I have heard that Java with Hibernate/Spring can connect to different databases, but I am trying to connect the databases at the database level, not at the application level, using something like Oracle's database link. As far as I know, MySQL does not have this feature.</p>
<p>If in the industry, what are the common ways to deal with heterogeneous database? Or is there any standard library for me to do this? I hope to get some guide on how should I deal with heterogeneous database using the industry standard</p> | 2012-06-28 14:11:54.563000+00:00 | 2016-11-03 14:35:56.320000+00:00 | 2012-09-26 00:14:17.497000+00:00 | mysql|sql-server|oracle|postgresql | ['http://arxiv.org/pdf/0912.0579v1.pdf'] | 1 |