column                dtype    values / string lengths
a_id                  int64    7.84k - 73.8M
a_body                string   61 - 33k
a_creation_date       string   25 - 32
a_last_activity_date  string   25 - 32
a_last_edit_date      string   25 - 32
a_tags                float64
q_id                  int64    826 - 73.8M
q_body                string   61 - 29.9k
q_creation_date       string   25 - 32
q_last_activity_date  string   25 - 32
q_last_edit_date      string   25 - 32
q_tags                string   1 - 103
_arxiv_links          string   2 - 6.69k
_n_arxiv_links        int64    0 - 94
49,876,051
<p>This problem has been tackled with both Generative Adversarial Networks (GANs) and RNNs with a decent degree of success.</p> <p>One fairly successful approach using GANs, introduced by Facebook AI Research <a href="https://arxiv.org/pdf/1511.05440.pdf" rel="noreferrer">here</a>, uses a multi-scale generator. Basically, you downsample the image to various scales and then predict the next frame at each particular lower resolution. Then, using the upsampled predicted frame and the original frame at the next higher resolution, you predict the next frame for that higher scale, and the process continues. Multi-scale aggregation helps prevent blurriness and retains details. You can find the code <a href="https://github.com/coupriec/VideoPredictionICLR2016" rel="noreferrer">here</a>.</p> <p>A more recent approach uses a combination of autoencoders and GANs. Basically, a variational encoder recurrently compresses the entire video stream into a latent space, and you have different networks to predict flow and the next frame from the latent-space representation. It then fuses the next frame with information from the predicted flow. You can read the paper <a href="http://openaccess.thecvf.com/content_ICCV_2017/papers/Liang_Dual_Motion_GAN_ICCV_2017_paper.pdf" rel="noreferrer">here</a> for details.</p> <p>Another approach, from Cornell, does this without GANs. It's rather complex, but in simple terms they use stacked LSTMs and propagate error signals to further LSTM layers whose job is then to predict the error for the next frame. Here's the <a href="https://arxiv.org/pdf/1605.08104.pdf" rel="noreferrer">paper</a>.</p> <p>There are other approaches too, but these seem to be widely cited and have code available online.</p>
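<p>For illustration only (this is not the method from any of the papers above): a minimal Keras sketch of next-frame prediction with ConvLSTM layers, assuming the frames are downsampled, normalized to [0, 1] and stacked as (time, height, width, 1). All sizes are placeholders.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from tensorflow.keras import layers, models

T, H, W = 10, 64, 64   # window length and (downsampled) frame size

# Map a short window of past frames to the next frame.
model = models.Sequential([
    layers.ConvLSTM2D(32, 3, padding='same', return_sequences=True,
                      input_shape=(T, H, W, 1)),
    layers.ConvLSTM2D(32, 3, padding='same', return_sequences=False),
    layers.Conv2D(1, 3, padding='same', activation='sigmoid'),
])
model.compile(optimizer='adam', loss='mse')

# dummy data: predict frame t+1 from frames t-9..t
x = np.random.rand(4, T, H, W, 1).astype('float32')
y = np.random.rand(4, H, W, 1).astype('float32')
model.fit(x, y, epochs=1, verbose=0)
</code></pre>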
2018-04-17 10:45:29.997000+00:00
2018-04-17 10:45:29.997000+00:00
null
null
47,576,102
<p>I'm having fun with Keras lately and I would like to know how one would approach this problem.</p> <p>I have a sequence of 100 images. They are daily images of a radar map, for 100 consecutive days. I would like to predict the image for the next day.</p> <p>These images can be interpreted as matrices of n x m dimensions (not square).</p> <p>Can this be adapted to an LSTM NN? How would you approach this problem?</p> <p>Thanks for sharing ideas!</p>
2017-11-30 14:44:13.880000+00:00
2020-04-09 09:58:15.250000+00:00
null
image|deep-learning|sequence|lstm|grayscale
['https://arxiv.org/pdf/1511.05440.pdf', 'https://github.com/coupriec/VideoPredictionICLR2016', 'http://openaccess.thecvf.com/content_ICCV_2017/papers/Liang_Dual_Motion_GAN_ICCV_2017_paper.pdf', 'https://arxiv.org/pdf/1605.08104.pdf']
4
63,635,800
<p>Lock-freedom provides what is called <em>progress guarantee</em>. You are right that in your example thread <code>A</code> has to perform a retry (i.e., loop again), but <em>only if some other thread changed the value</em>, which implies that that thread was able to make progress.</p> <p>In contrast, a thread (let's call it <code>X</code>) that holds a spin-lock prevents all other threads from making progress until the lock is released. So if thread <code>X</code> is preempted, execution of all threads waiting for the lock is effectively stalled until <code>X</code> can resume execution and finally release the lock. If <code>X</code> were to be stalled indefinitely, then all other threads would also be blocked indefinitely.</p> <p>Such a situation is not possible with lock-free algorithms, since it is guaranteed that at any time at least one thread can make progress.</p> <hr /> <p>Which should be used depends on the situation. Lock-free algorithms are inherently difficult to design, especially for more complex data structures like trees. And even if you have a lock-free algorithm, it is almost always slower than a serial one, so a serial version protected by a lock might perform better. Then again, if the data structure is heavily contended, a lock-free version will scale better than one protected by a lock. However, if your workload is mostly read-only, a read-write lock will also provide good scalability. Unfortunately, there is no general rule here...</p> <hr /> <p>If you want to learn more about lock-freedom (and more) I recommend the book <a href="https://rads.stackoverflow.com/amzn/click/com/0123973376" rel="nofollow noreferrer">The Art of Multiprocessor Programming</a>.<br /> If you prefer free alternatives I recommend <a href="https://arxiv.org/abs/1701.00854" rel="noreferrer">Is Parallel Programming Hard, And, If So, What Can You Do About It?</a> by Paul McKenney or <a href="https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-579.pdf" rel="noreferrer">Practical lock-freedom</a> by Keir Fraser.</p>
2020-08-28 14:31:09.207000+00:00
2020-08-28 14:31:09.207000+00:00
null
null
63,634,889
<p>I am wondering what the advantages of lock-free programming over spin locks are. I think that when we do lock-free programming using a CAS mechanism in a thread (call it A), if another thread changes the value used in the CAS, thread A still needs to loop again. And I think that is just like using a spin lock!</p> <p>I am so confused about this. Although I know that CAS and spin locks are suitable when lock contention is not fierce, can someone explain in which scenarios lock-free should be used and in which a spin lock should be used?</p>
2020-08-28 13:37:11.507000+00:00
2020-08-28 23:56:12.507000+00:00
2020-08-28 23:56:12.507000+00:00
multithreading|lock-free|spinlock|lockless
['https://rads.stackoverflow.com/amzn/click/com/0123973376', 'https://arxiv.org/abs/1701.00854', 'https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-579.pdf']
3
60,731,666
<p>The currently accepted answer by Fred Foo, as well as Hassan's answer, are numerically unstable (Hassan's answer is better). An example of an input on which Hassan's answer fails will be provided later. My implementation is as follows:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.special import logsumexp def logmatmulexp(log_A: np.ndarray, log_B: np.ndarray) -&gt; np.ndarray: """Given matrix log_A of shape ϴ×R and matrix log_B of shape R×I, calculates (log_A.exp() @ log_B.exp()).log() in a numerically stable way. Has O(ϴRI) time complexity and space complexity.""" ϴ, R = log_A.shape I = log_B.shape[1] assert log_B.shape == (R, I) log_A_expanded = np.broadcast_to(np.expand_dims(log_A, 2), (ϴ, R, I)) log_B_expanded = np.broadcast_to(np.expand_dims(log_B, 0), (ϴ, R, I)) log_pairwise_products = log_A_expanded + log_B_expanded # shape: (ϴ, R, I) return logsumexp(log_pairwise_products, axis=1) </code></pre> <p>Just like Hassan's answer and Fred Foo's answer, my answer has time complexity O(ϴRI). Their answers have space complexity O(ϴR+RI) (I am not actually sure about this), while mine unfortunately has space complexity O(ϴRI) - theirs is lower because numpy can multiply a ϴ×R matrix by an R×I matrix without allocating an additional array of size ϴ×R×I. Having O(ϴRI) space complexity is not an inherent property of my method - I think if you write it out using loops, you can avoid this space complexity, but unfortunately I don't think you can do that using stock numpy functions.</p> <p>I have checked how long my code actually takes to run; it's about 20 times slower than regular matrix multiplication.</p> <p>Here's how you can know that my answer is numerically stable:</p> <ol> <li>Clearly, all lines other than the return line are numerically stable.</li> <li>The <code>logsumexp</code> function is known to be numerically stable.</li> <li>Therefore, my <code>logmatmulexp</code> function is numerically stable.</li> </ol> <p>My implementation has another nice property. If instead of using numpy you write the same code in pytorch or using another library with automatic differentiation, you will get a numerically stable backward pass automatically. Here's how we can know the backward pass will be numerically stable:</p> <ol> <li>All functions in my code are differentiable everywhere (unlike <code>np.max</code>)</li> <li>Clearly, backpropagating through all lines except the return line is numerically stable, because absolutely nothing weird is happening there.</li> <li>Usually the developers of pytorch know what they're doing. So it's enough to trust them that they implemented the backward pass of logsumexp in a numerically stable way.</li> <li>Actually the gradient of logsumexp is the softmax function (for reference google "softmax is gradient of logsumexp" or see <a href="https://arxiv.org/abs/1704.00805" rel="nofollow noreferrer">https://arxiv.org/abs/1704.00805</a> proposition 1). It's known that softmax can be calculated in a numerically stable way. So the pytorch devs probably just use softmax there (I haven't actually checked).</li> </ol> <p>Below is the same code in pytorch (in case you need backpropagation). Due to how pytorch backpropagation works, during the forward pass it will save the <code>log_pairwise_products</code> tensor for the backward pass. This tensor is large, and you probably don't want it to be saved - you can just recalculate it once again during the backward pass. 
In such case I suggest you use checkpointing - it's really easy - see the second function below.</p> <pre class="lang-py prettyprint-override"><code>import torch from torch.utils.checkpoint import checkpoint def logmatmulexp(log_A: torch.Tensor, log_B: torch.Tensor) -&gt; torch.Tensor: """Given matrix log_A of shape ϴ×R and matrix log_B of shape R×I, calculates (log_A.exp() @ log_B.exp()).log() and its backward in a numerically stable way.""" ϴ, R = log_A.shape I = log_B.shape[1] assert log_B.shape == (R, I) log_A_expanded = log_A.unsqueeze(2).expand((ϴ, R, I)) log_B_expanded = log_B.unsqueeze(0).expand((ϴ, R, I)) log_pairwise_products = log_A_expanded + log_B_expanded # shape: (ϴ, R, I) return torch.logsumexp(log_pairwise_products, dim=1) def logmatmulexp_lowmem(log_A: torch.Tensor, log_B: torch.Tensor) -&gt; torch.Tensor: """Same as logmatmulexp, but doesn't save a (ϴ, R, I)-shaped tensor for backward pass. Given matrix log_A of shape ϴ×R and matrix log_B of shape R×I, calculates (log_A.exp() @ log_B.exp()).log() and its backward in a numerically stable way.""" return checkpoint(logmatmulexp, log_A, log_B) </code></pre> <hr> <p>Here's an input on which Hassan's implementation fails but my implementation gives the correct output:</p> <pre class="lang-py prettyprint-override"><code>def logmatmulexp_hassan(A, B): max_A = np.max(A,1,keepdims=True) max_B = np.max(B,0,keepdims=True) C = np.dot(np.exp(A - max_A), np.exp(B - max_B)) np.log(C, out=C) C += max_A + max_B return C log_A = np.array([[-500., 900.]], dtype=np.float64) log_B = np.array([[900.], [-500.]], dtype=np.float64) print(logmatmulexp_hassan(log_A, log_B)) # prints -inf, while the correct answer is approximately 400.69. </code></pre>
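<p>As a quick usage check (not part of the answer above; it just restates the NumPy version compactly, compares it with the naive computation on a well-scaled input, and re-runs the extreme example from the end of the answer):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.special import logsumexp

def logmatmulexp(log_A, log_B):
    # same broadcasting trick as the NumPy version above, without type hints
    log_pairwise_products = np.expand_dims(log_A, 2) + np.expand_dims(log_B, 0)
    return logsumexp(log_pairwise_products, axis=1)

rng = np.random.default_rng(0)
log_A = np.log(rng.random((3, 4)))
log_B = np.log(rng.random((4, 2)))

# for well-scaled inputs the naive computation is fine, so we can compare against it
naive = np.log(np.exp(log_A) @ np.exp(log_B))
print(np.allclose(logmatmulexp(log_A, log_B), naive))  # True

# the extreme input on which the max-shifting approach fails
print(logmatmulexp(np.array([[-500., 900.]]), np.array([[900.], [-500.]])))  # ~400.69
</code></pre>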
2020-03-17 23:37:22.537000+00:00
2020-03-19 16:02:31.827000+00:00
2020-03-19 16:02:31.827000+00:00
null
23,630,277
<p>I need to take the matrix product of two NumPy matrices (or other 2d arrays) containing log probabilities. The naive way <code>np.log(np.dot(np.exp(a), np.exp(b)))</code> is not preferred for obvious reasons.</p> <p>Using</p> <pre><code>from scipy.misc import logsumexp res = np.zeros((a.shape[0], b.shape[1])) for n in range(b.shape[1]): # broadcast b[:,n] over rows of a, sum columns res[:, n] = logsumexp(a + b[:, n].T, axis=1) </code></pre> <p>works but runs about 100 times slower than <code>np.log(np.dot(np.exp(a), np.exp(b)))</code></p> <p>Using</p> <pre><code>logsumexp((tile(a, (b.shape[1],1)) + repeat(b.T, a.shape[0], axis=0)).reshape(b.shape[1],a.shape[0],a.shape[1]), 2).T </code></pre> <p>or other combinations of tile and reshape also work but run even slower than the loop above due to the prohibitively large amounts of memory required for realistically sized input matrices.</p> <p>I am currently considering writing a NumPy extension in C to compute this, but of course I'd rather avoid that. Is there an established way to do this, or does anybody know of a less memory intensive way of performing this computation?</p> <p><strong>EDIT:</strong> Thanks to larsmans for this solution (see below for derivation):</p> <pre><code>def logdot(a, b): max_a, max_b = np.max(a), np.max(b) exp_a, exp_b = a - max_a, b - max_b np.exp(exp_a, out=exp_a) np.exp(exp_b, out=exp_b) c = np.dot(exp_a, exp_b) np.log(c, out=c) c += max_a + max_b return c </code></pre> <p>A quick comparison of this method to the method posted above (<code>logdot_old</code>) using iPython's magic <code>%timeit</code> function yields the following:</p> <pre><code>In [1] a = np.log(np.random.rand(1000,2000)) In [2] b = np.log(np.random.rand(2000,1500)) In [3] x = logdot(a, b) In [4] y = logdot_old(a, b) # this takes a while In [5] np.any(np.abs(x-y) &gt; 1e-14) Out [5] False In [6] %timeit logdot_old(a, b) 1 loops, best of 3: 1min 18s per loop In [6] %timeit logdot(a, b) 1 loops, best of 3: 264 ms per loop </code></pre> <p>Obviously larsmans' method obliterates mine!</p>
2014-05-13 11:40:22.380000+00:00
2020-03-19 16:02:31.827000+00:00
2014-06-25 21:07:27.373000+00:00
python|numpy|matrix|matrix-multiplication|logarithm
['https://arxiv.org/abs/1704.00805']
1
45,154,417
<p>Your memory and processing requirements will be proportional to the pixel size of your image. Whether this is too large for you to process efficiently will depend on your hardware constraints and the time you have available.</p> <p>With regards to resizing the images, there is no single answer; you have to consider how best to preserve the information that'll be required for your algorithm to learn from your data while removing information that won't be useful. Reducing the size of your input images won't necessarily be a negative for accuracy. Consider two cases:</p> <p><strong>Handwritten digits</strong></p> <p>Here the images could be reduced considerably in size and maintain all the structural information necessary to be correctly identified. Have a look at the <a href="http://yann.lecun.com/exdb/mnist/" rel="nofollow noreferrer">MNIST data set</a>: these images are distributed at 28 x 28 resolution and are identifiable to <a href="https://arxiv.org/abs/1202.2745" rel="nofollow noreferrer">99.7%+ accuracy</a>.</p> <p><strong>Identifying Tree Species</strong></p> <p>Imagine a set of images of trees where individual leaves could help identify species. Here you might find that reducing the image size reduces small-scale detail on leaf shape in a way that's detrimental to the model, but you might find that you get a similar result with a tight crop (which preserves individual leaves) rather than an image resize. If this is the case, you may find that creating multiple crops from the same image gives you an augmented data set for training that considerably improves results (something to consider, if possible, given that your training set is very small).</p> <p>Deep learning models are achieving results around human level in many image classification tasks: if you struggle to identify your own images then it's less likely you'll be able to train an algorithm to do so. This is often a useful starting point when considering the level of scaling that might be appropriate.</p>
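<p>To make the crop-versus-resize trade-off concrete, here is a small illustrative sketch with Pillow (the image, crop size and stride are hypothetical placeholders, not anything from the question):</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image

# stand-in for one 2880x1800 photo; in practice you would Image.open(...) a file
img = Image.new('RGB', (2880, 1800))

# Option 1: plain resize - cheap, but fine detail (e.g. leaf shape) is lost
small = img.resize((288, 180))

# Option 2: several tight crops from the same photo - keeps fine detail and
# gives you extra training samples (a simple form of data augmentation)
w, h = img.size
crops = [
    img.crop((x, y, x + 512, y + 512))   # (left, upper, right, lower)
    for x in range(0, w - 512, 512)
    for y in range(0, h - 512, 512)
]
print(small.size, len(crops))
</code></pre>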
2017-07-17 22:07:34.963000+00:00
2017-07-17 22:07:34.963000+00:00
null
null
45,149,023
<p>How can the size of an image affect training the model for this task?</p> <p>My current training set holds images that are <code>2880 X 1800</code>, but I am worried this may be too large to train. In total my sample size will be about 200-500 images.</p> <p>Would this just mean that I need more resources (GPU, RAM, distribution) when training my model?</p> <p>If this is too large, how should I go about resizing? -- I want to mimic real-world photo resolutions as best as possible for better accuracy.</p> <p><strong>Edit:</strong></p> <p>I would also be using the <code>TFRecord</code> format for the image files.</p>
2017-07-17 16:15:47.290000+00:00
2017-07-17 22:07:34.963000+00:00
2017-07-17 16:46:38.793000+00:00
machine-learning|tensorflow|classification|image-recognition
['http://yann.lecun.com/exdb/mnist/', 'https://arxiv.org/abs/1202.2745']
2
45,149,909
<p>Higher-resolution images will result in higher training time and increased memory consumption (mainly GPU memory).</p> <p>Depending on your concrete task, you might want to reduce the image size in order to fit a reasonable batch size of, say, 32 or 64 on the GPU for stable learning.</p> <p>Your accuracy is probably affected more by the size of your training set. So instead of going for image size, you might want to go for 500-1000 sample images. Recent publications like <a href="https://arxiv.org/abs/1512.02325" rel="nofollow noreferrer">SSD - Single Shot MultiBox Detector</a> achieve high accuracy values like an mAP of 72% on the <a href="http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html" rel="nofollow noreferrer">PascalVOC</a> dataset - while using "only" 300x300 image resolution.</p> <p>Resizing and augmentation: SSD, for instance, just scales every input image down to 300x300, independent of the aspect ratio - this does not seem to hurt. You could also augment your data by mirroring, translating, etc. (but I assume there are built-in methods in TensorFlow for that).</p>
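<p>A minimal sketch of the resize-and-mirror idea using the modern TensorFlow 2 image ops (a newer API than the TensorFlow version this question was asked about; shapes are placeholders):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

def preprocess(image):
    # scale down to 300x300 regardless of aspect ratio
    image = tf.image.resize(image, (300, 300))
    # cheap augmentation: random horizontal mirroring
    return tf.image.random_flip_left_right(image)

dummy = tf.random.uniform((1800, 2880, 3))   # one hypothetical full-size photo
print(preprocess(dummy).shape)               # (300, 300, 3)
</code></pre>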
2017-07-17 17:06:00.337000+00:00
2017-07-17 17:06:00.337000+00:00
null
null
45,149,023
<p>How can the size of an image affect training the model for this task?</p> <p>My current training set holds images that are <code>2880 X 1800</code>, but I am worried this may be too large to train. In total my sample size will be about 200-500 images.</p> <p>Would this just mean that I need more resources (GPU, RAM, distribution) when training my model?</p> <p>If this is too large, how should I go about resizing? -- I want to mimic real-world photo resolutions as best as possible for better accuracy.</p> <p><strong>Edit:</strong></p> <p>I would also be using the <code>TFRecord</code> format for the image files.</p>
2017-07-17 16:15:47.290000+00:00
2017-07-17 22:07:34.963000+00:00
2017-07-17 16:46:38.793000+00:00
machine-learning|tensorflow|classification|image-recognition
['https://arxiv.org/abs/1512.02325', 'http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html']
2
43,302,113
<p>The following type:</p> <pre><code>newtype H a b = Fn {invoke :: H b a -&gt; b} </code></pre> <p>while not exactly the same as yours, is similar in spirit and has been shown by Launchbury, Krstic, and Sauerwein to have interesting uses regarding coroutining: <a href="https://arxiv.org/pdf/1309.5135.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1309.5135.pdf</a></p>
2017-04-09 01:34:57.907000+00:00
2017-04-09 01:34:57.907000+00:00
null
null
31,168,729
<p>Lately I've been experimenting with the general question, <em>what will GHC allow me to do?</em> I was surprised to find, that it considers the following program as valid</p> <pre><code>module BrokenRecursiveType where data FooType = Foo FooType main = print "it compiles!" </code></pre> <p>At first I thought, <em>how is this useful?</em> Then I remembered that Haskell is lazy, so I could, perhaps, define a function like the following to use it</p> <pre><code>allTheFoos = Foo allTheFoos </code></pre> <p>Then I thought, <em>so how is this useful?</em></p> <p>Are there any valuable use cases (thought up or actually experienced) for types of similar form to <code>FooType</code>?</p>
2015-07-01 18:14:21.323000+00:00
2017-04-09 01:34:57.907000+00:00
2015-07-01 18:21:16.490000+00:00
haskell|ghc
['https://arxiv.org/pdf/1309.5135.pdf']
1
58,558,700
<p>You can go to CloudWatch Logs to diagnose your model failure. Real-time inference traffic scaling can be addressed by working on 3 independent dimensions:</p> <ol> <li><p><strong>hardware</strong>: choosing larger machines or more machines. For example, you can <a href="https://aws.amazon.com/blogs/machine-learning/load-test-and-optimize-an-amazon-sagemaker-endpoint-using-automatic-scaling/" rel="nofollow noreferrer">load test your model endpoint</a> with bigger and bigger machines and see when the hardware size gives you acceptable latency. The Autoscaling feature of SageMaker helps you address this automatically. If deploying a deep neural net, you can also consider using appropriate accelerators, eg GPU (EC2 P3, EC2 G4) or <a href="https://aws.amazon.com/blogs/machine-learning/serving-deep-learning-at-curalate-with-apache-mxnet-aws-lambda-and-amazon-elastic-inference/" rel="nofollow noreferrer">Amazon Elastic Inference Accelerator</a> to make each prediction much faster. </p></li> <li><p><strong>software</strong>: you have 2 levers to tune here:</p> <ul> <li>choosing a <strong>serving stack</strong> that is lean and fast. Different servers will handle load at different levels of performance. One common trick is to batch the load - for example, instead of hitting your server 100 times, can you hit it only once with a batch of 100 records? If clients cannot batch their requests, can you use micro-asynchrony so that you do the batching yourself after they issue requests? You can usually configure such micro-batching in advanced deep learning servers such as <a href="https://github.com/tensorflow/serving/tree/master/tensorflow_serving/batching" rel="nofollow noreferrer">TF Serving</a> or <a href="https://github.com/awslabs/mxnet-model-server/blob/master/docs/management_api.md#register-a-model" rel="nofollow noreferrer">MXNet Model Server</a> (both can be used in SageMaker), but otherwise you can also do it yourself by having a queue (SQS) in front of your server (a minimal do-it-yourself micro-batching sketch is shown below).</li> <li><strong>model compilation</strong> - optimizing the model graph and its runtime. This is a very smart concept that leverages the fact that when you know where you're going to deploy (eg NVIDIA, Intel, ARM, etc), you have an insider edge and you can refine your model artifact and create a bespoke runtime application that is tailor-made for this specific target platform. This can reduce memory consumption and latency by double-digit percentages, and is an active area of ML research. In the SageMaker ecosystem, such a compilation task can be performed with SageMaker Neo, but the open source ecosystem is developing fast, notably with treelite (<a href="https://www.sysml.cc/doc/2018/196.pdf" rel="nofollow noreferrer">paper</a>, <a href="https://treelite.readthedocs.io/en/latest/" rel="nofollow noreferrer">doc</a>) for decision tree compilation and TVM (<a href="https://arxiv.org/abs/1802.04799" rel="nofollow noreferrer">paper</a>, doc) for arbitrary neural net compilation. Both are dependencies of Neo by the way.</li> </ul></li> <li><p><strong>science</strong>: some models are slower or heavier than others. 
If speed and concurrency are your priorities over accuracy, and if you already exploited all possible tricks at levels (1) and (2) above, consider using fast-throughput models, eg linear models &amp; logistic regression for structured data, MobileNet or SqueezeNet instead of large Resnets for classification (<a href="https://gluon-cv.mxnet.io/model_zoo/classification.html" rel="nofollow noreferrer">nice benchmark here</a>), Yolov3 instead of FasterRCNN for detection (<a href="https://gluon-cv.mxnet.io/model_zoo/detection.html" rel="nofollow noreferrer">nice benchmark here</a>), etc. But be aware that unlike levels (1) and (2), changing the model science will alter accuracy.</p></li> </ol> <p>As mentioned above, those 3 areas of improvement really are about real-time inference; if you can afford to pre-compute all possible model inputs, then the ultimate low-latency high-throughput solution is to pre-compute offline a variety of input-prediction pairs of interest and serve them on demand from a fast database or local read-only store.</p>
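<p>To illustrate the do-it-yourself micro-batching idea mentioned above (a generic sketch, not SageMaker-specific; <code>predict_batch</code> is a hypothetical stand-in for one batched call to your model or endpoint):</p>
<pre class="lang-py prettyprint-override"><code>import queue
import threading

requests = queue.Queue()

def predict_batch(records):
    # stand-in for a single batched call to the model / endpoint
    return [len(r) for r in records]

def batching_worker(max_batch=32, max_wait=0.05):
    while True:
        rec, fut = requests.get()              # block for the first record
        batch, futures = [rec], [fut]
        for _ in range(max_batch - 1):         # then top the batch up briefly
            try:
                rec, fut = requests.get(timeout=max_wait)
            except queue.Empty:
                break
            batch.append(rec)
            futures.append(fut)
        for fut, out in zip(futures, predict_batch(batch)):
            fut.put(out)                       # hand each caller its result

threading.Thread(target=batching_worker, daemon=True).start()

def predict_one(record):
    fut = queue.Queue(maxsize=1)
    requests.put((record, fut))
    return fut.get()

print(predict_one('some payload'))
</code></pre>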
2019-10-25 12:39:20.007000+00:00
2019-10-25 12:39:20.007000+00:00
null
null
58,558,155
<p>I have created an endpoint using Sagemaker, and designed my system so that it is called about 100 times <strong>simultaneously</strong>. This seemed to cause <em>'Model error'</em> and take too much time. Do I need to create an endpoint for each event, and make one call per endpoint, instead?</p>
2019-10-25 12:01:42.170000+00:00
2019-10-25 16:26:57.223000+00:00
2019-10-25 16:26:57.223000+00:00
endpoint|amazon-sagemaker
['https://aws.amazon.com/blogs/machine-learning/load-test-and-optimize-an-amazon-sagemaker-endpoint-using-automatic-scaling/', 'https://aws.amazon.com/blogs/machine-learning/serving-deep-learning-at-curalate-with-apache-mxnet-aws-lambda-and-amazon-elastic-inference/', 'https://github.com/tensorflow/serving/tree/master/tensorflow_serving/batching', 'https://github.com/awslabs/mxnet-model-server/blob/master/docs/management_api.md#register-a-model', 'https://www.sysml.cc/doc/2018/196.pdf', 'https://treelite.readthedocs.io/en/latest/', 'https://arxiv.org/abs/1802.04799', 'https://gluon-cv.mxnet.io/model_zoo/classification.html', 'https://gluon-cv.mxnet.io/model_zoo/detection.html']
9
72,723,391
<p>It looks like you came from one of my previous answers from 2016: <a href="https://stackoverflow.com/a/40113075/4891738">mgcv: How to set number and / or locations of knots for splines</a>, borrowing my code snippet:</p> <pre><code>my_knots &lt;- myfit$smooth[[1]]$xp plot(x, y, col= &quot;grey&quot;, main = &quot;my knots&quot;); lines(x, myfit$linear.predictors, col = 2, lwd = 2) abline(v = my_knots, lty = 2) </code></pre> <p>In that answer, my demonstration was made with a cubic regression spline (<code>bs = 'cr'</code>), where knot placement is simple to do. But for B-splines, things are more complicated. So take this answer as a complement to that answer.</p> <hr /> <p>For B-spline classes in <strong>mgcv</strong>, i.e., <code>'bs'</code> being &quot;bs&quot; (<code>?b.spline</code>), &quot;ps&quot; (<code>?p.spline</code>) or &quot;ad&quot; (<code>?adaptive.smooth</code>), the argument <code>k</code> is the number of B-splines, not the number of knots. And the first take-home message is: <strong>the number of B-splines is NOT equal to the number of knots.</strong></p> <p>Knot placement for B-splines is dirty work. Usually you only specify <code>k</code> and <strong>mgcv</strong> will place knots automatically for you (see for example, <a href="https://stackoverflow.com/q/37379609/4891738">Extract knots, basis, coefficients and predictions for P-splines in adaptive smooth</a>). If you want to control knot placement yourself, the number of knots you provide must be compatible with <code>k</code>. This can cause great confusion if you don't understand B-splines well.</p> <p>I highly recommend reading the Appendix (pages 33-34) of one of my works: <a href="https://arxiv.org/pdf/2201.06808.pdf" rel="nofollow noreferrer">General P-splines for Non-uniform B-splines</a>, to learn some fundamentals of B-splines. Make sure you understand what the <strong>domain</strong>, <strong>interior knots</strong> and <strong>auxiliary boundary knots</strong> are. 
In the following, I will just show you how to use these knowledge to write the correct code.</p> <hr /> <p>Here is how to place knots for your example.</p> <pre><code>## degree of spline deg &lt;- 3 ## domain a &lt;- min(x) #[1] 5.86 b &lt;- max(x) #[1] 24.16 ## interior knots (must be between a and b) xs &lt;- c(6, 20) #[1] 6 20 ## domain knots xd &lt;- c(a, xs, b) #[1] 5.86 6.00 20.00 24.16 ## clamped auxiliary boundary knots left.aux &lt;- rep(a, deg) #[1] 5.86 5.86 5.86 right.aux &lt;- rep(b, deg) #[1] 24.16 24.16 24.16 ## complete B-spline knots my.knots &lt;- c(left.aux, xd, right.aux) #[1] 5.86 5.86 5.86 5.86 6.00 20.00 24.16 24.16 24.16 24.16 </code></pre> <p>Here is how to specify <code>k</code> for your example.</p> <pre><code>my.k &lt;- length(xs) + deg + 1 #[1] 6 </code></pre> <p>Now we can work with <strong>mgcv</strong>.</p> <pre><code>myfit &lt;- gam(y ~ s(x, bs = 'bs', k = my.k), knots = list(x = my.knots)) #Family: gaussian #Link function: identity # #Formula: #y ~ s(x, bs = &quot;bs&quot;, k = my.k, m = deg) # #Estimated degrees of freedom: #3.81 total = 4.81 </code></pre> <p>The knots you passed in are stored here (which agree with <code>my.knots</code>):</p> <pre><code>## For B-spline classes, knots are stored in $knots instead of $xp myfit$smooth[[1]]$knots #[1] 5.86 5.86 5.86 5.86 6.00 20.00 24.16 24.16 24.16 24.16 </code></pre> <hr /> <p>Accompanying <a href="https://arxiv.org/pdf/2201.06808.pdf" rel="nofollow noreferrer">General P-splines for Non-uniform B-splines</a> are R packages <strong>gps</strong> and <strong>gps.mgcv</strong>. The latter package introduces a new &quot;gps&quot; class to <strong>mgcv</strong>, where <code>bs = 'ps'</code> and <code>bs = 'bs'</code> are special cases of <code>bs = 'gps'</code>. The new &quot;gps&quot; class makes knot placement easier, because it automatically places auxiliary boundary knots for you and you only need to provide interior knots.</p> <pre><code>## The package stays on GitHub for the moment ## but will be on CRAN in the future. ## You may need to first install package 'devtools' from CRAN. devtools::install_github(&quot;ZheyuanLi/gps&quot;) devtools::install_github(&quot;ZheyuanLi/gps.mgcv&quot;) </code></pre> <pre><code>library(gps.mgcv) ## as same as using 'bs = 'bs'' myfit &lt;- gam(y ~ s(x, bs = 'gps', k = my.k, xt = list(derivative = TRUE)), knots = list(x = xs)) ## provide interior knots ## the novel general P-spline gpsfit &lt;- gam(y ~ s(x, bs = 'gps', k = my.k), knots = list(x = xs)) ## provide interior knots </code></pre> <p>Construction information (domain, interior knots, etc) are stored in <code>myfit$smooth[[1]]$xt</code> and <code>gpsfit$smooth[[1]]$xt</code>.</p>
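<p>The same knot bookkeeping can be checked outside <strong>mgcv</strong> as well, for example in Python with <code>scipy.interpolate</code> (purely illustrative and not part of the R workflow above; the numbers mirror the example):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.interpolate import BSpline

deg = 3
a, b = 5.86, 24.16            # domain endpoints (min and max of x)
interior = [6.0, 20.0]        # interior knots

# clamped boundary knots: each endpoint repeated deg + 1 times in total
t = np.r_[[a] * (deg + 1), interior, [b] * (deg + 1)]
n_basis = len(t) - deg - 1    # number of B-splines supported by this knot vector

print(len(t), n_basis)        # 10 knots, but only 6 B-splines (k = 6 in mgcv terms)

# sanity check: this knot vector accepts exactly n_basis coefficients, and with
# all-ones coefficients the spline is 1 on the domain (partition of unity)
spl = BSpline(t, np.ones(n_basis), deg)
print(spl(12.0))
</code></pre>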
2022-06-23 00:50:27.177000+00:00
2022-06-23 00:50:27.177000+00:00
null
null
72,718,391
<p>I'm trying to fit a smooth spline to what looks like data with two peaks. First, I fit a smooth spline to my data to identify the potential position of the knots.</p> <pre><code>library(npreg) library(splines) library(mgcv) x &lt;- c(20.70, 20.44, 20.58, 21.02, 19.90, 6.20, 8.20, 6.92, 5.86, 6.44, 6.34, 8.48, 8.46, 9.00, 9.06, 9.00, 9.06, 17.98, 18.42, 19.18, 22.88, 24.16,20.20, 23.50) y &lt;- c(19.884208, 12.772114, 12.932944, 5.016790, 11.405843, 3.310724, 3.950049, 3.641571, 4.073783, 4.616096, 3.425635, 7.773548, 7.498084, 9.474213, 6.162779, 11.041210, 12.618555, 6.287967, 4.286919, 3.242361, 7.571644, 3.379709, 5.274434, 8.8258) data = data.frame(x,y) fit_spline &lt;- smooth.spline(x,y) plot(x,y) lines(fit_spline,lwd=2,col=&quot;purple&quot;) </code></pre> <p><a href="https://i.stack.imgur.com/DMkPJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DMkPJ.png" alt="enter image description here" /></a></p> <p>Then, I wanted to run a regression using the gam function where I can specify the position of the knots. I get an error that says <code>Error in smooth.construct.bs.smooth.spec(object, dk$data, dk$knots) : there should be 7 supplied knots</code>. I am not sure where it's drawing this number from. If I supply 7 knots, it says it needs 13 knots...and the same error gets repeated. I am unclear on how to proceed.</p> <pre><code>myfit &lt;- gam(y ~ s(x, bs = 'bs', k = 3), knots = list(x = c(1,5,20))) my_knots &lt;- myfit$smooth[[1]]$xp plot(x, y, col= &quot;grey&quot;, main = &quot;my knots&quot;); lines(x, myfit$linear.predictors, col = 2, lwd = 2) abline(v = my_knots, lty = 2) </code></pre>
2022-06-22 15:41:55.893000+00:00
2022-06-23 00:56:30.700000+00:00
2022-06-23 00:56:30.700000+00:00
r|regression|spline|gam|mgcv
['https://stackoverflow.com/a/40113075/4891738', 'https://stackoverflow.com/q/37379609/4891738', 'https://arxiv.org/pdf/2201.06808.pdf', 'https://arxiv.org/pdf/2201.06808.pdf']
4
43,241,756
<p>1) There is a variant of minwise hashing called one permutation hashing (see <a href="http://papers.nips.cc/paper/4778-one-permutation-hashing.pdf" rel="nofollow noreferrer">http://papers.nips.cc/paper/4778-one-permutation-hashing.pdf</a>) that uses only a single hash function. Estimation can be somewhat inaccurate for small sets where the number of elements is small compared to the number of bins. However, in this case it is possible to "densify" the hash signature of a set using the technique described in <a href="https://arxiv.org/pdf/1406.4784.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1406.4784.pdf</a>.</p> <p>2) Bitsets are actually a special case of HyperLogLog sketches as discussed in <a href="https://arxiv.org/pdf/1702.01284.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1702.01284.pdf</a>. This paper also describes a maximum likelihood method that allows more accurate estimation of union and intersection sizes of two HyperLogLog sketches, which can be used to finally get an estimate for the Jaccard similarity.</p>
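<p>For reference, here is what plain MinHash estimation looks like in a few lines of Python (purely illustrative; Python's built-in <code>hash</code> salted with a seed stands in for a proper family of hash functions, and the sets are toy data):</p>
<pre class="lang-py prettyprint-override"><code>import random

def minhash_signature(items, seeds):
    # one minimum hash value per seed; the seed salts the hash of each element
    return [min(hash((seed, it)) for it in items) for seed in seeds]

def estimate_jaccard(sig_a, sig_b):
    matches = sum(x == y for x, y in zip(sig_a, sig_b))
    return matches / len(sig_a)

random.seed(0)
seeds = [random.getrandbits(32) for _ in range(100)]   # ~10% error needs ~100 hashes

A = set(range(0, 1000))
B = set(range(500, 1500))
sig_a = minhash_signature(A, seeds)
sig_b = minhash_signature(B, seeds)
print(estimate_jaccard(sig_a, sig_b))   # true Jaccard is 500/1500, about 0.33
</code></pre>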
2017-04-05 21:15:15.297000+00:00
2017-04-05 21:15:15.297000+00:00
null
null
43,191,003
<h2>What I need</h2> <p>I'm looking for a pretty fast and accurate way to find the Jaccard similarity among multiple huge sets of data. I could have up to 10,000-20,000 Jaccard similarity calculations. Because I need to calculate all Jaccard similarities right after dumping those datasets, I can't compute them in the background with slow algorithms during the quiet winter nights.</p> <p>I see two possible solutions:</p> <h2>Solution #1</h2> <p>Use the MinHash algorithm. The problem with this solution is that it's very slow. To get 10% error you need to use 100 hash functions. The only workaround I see here is to hash everything with a single "expensive" hash function, and then use 100 "cheap" hash functions over the hash result. But I don't have enough math background to choose them myself.</p> <h2>Question #1</h2> <p><em>How do I choose a speed-efficient set of hash functions for MinHash to get a maximum error of 10%?</em></p> <h2>Solution #2</h2> <p>Use HyperLogLog or BitSet to calculate Jaccard similarity. <br> The problem with this approach is that <a href="https://github.com/Volodymyr128/HyperLogLog-vs-MinHash-vs-BitSet-tests/blob/master/src/test/java/algos/JaccardSimilarityHLLvsBitSetTest.java" rel="nofollow noreferrer">for some reason I get errors that are too big in some cases</a>. Also, the problem with BitSet (even though it's a sparse data structure) is that it takes too much RAM on larger datasets.</p> <p>My algorithm:</p> <ol> <li>Choose a probabilistic cardinality estimation algorithm (HyperLogLog or BitSet)</li> <li>Calculate the probable cardinality of <code>set1</code></li> <li>Calculate the probable cardinality of <code>set2</code></li> <li>Calculate the probable cardinality of <code>set1 union set2</code>. Both HyperLogLog and BitSet support a merge operation.</li> <li>Similarity between <code>set2</code> and <code>set1</code> = <code>(cardinality(set1) + cardinality(set2) - cardinality(set1 union set2)) / cardinality(set2)</code></li> </ol> <h2>Question #2</h2> <p><em>Why do I get the same deviation of the Jaccard similarity estimate on both BitSet and HyperLogLog? <a href="https://github.com/Volodymyr128/HyperLogLog-vs-MinHash-vs-BitSet-tests/blob/master/src/test/java/algos/CardinalityHLLvsBitSetTest.java" rel="nofollow noreferrer">BitSet proves better cardinality precision than HLL</a>. I thought that since BitSet takes much more space it should have much better accuracy; am I wrong?</em></p> <h2>Question #3</h2> <p><em>Is it impossible to achieve a Jaccard similarity deviation of less than 5% with BitSet and HyperLogLog? What am I doing wrong?</em></p> <h2>P.S.</h2> <p><a href="https://github.com/Volodymyr128/HyperLogLog-vs-MinHash-vs-BitSet-tests/blob/master/src/test/java/algos/JaccardSimilarityHLLvsBitSetTest.java" rel="nofollow noreferrer">Hope these test results are helpful for you!</a></p>
2017-04-03 17:40:12.613000+00:00
2017-04-05 21:15:15.297000+00:00
null
java|algorithm|hash|bitset|cardinality
['http://papers.nips.cc/paper/4778-one-permutation-hashing.pdf', 'https://arxiv.org/pdf/1406.4784.pdf', 'https://arxiv.org/pdf/1702.01284.pdf']
3
61,549,702
<p>It's hard to tell without looking into the dataset and experimenting. But hopefully, the following research materials will guide you in the right direction.</p> <ul> <li><p>Machine learning-based approach: <a href="https://www.researchgate.net/publication/266672947_Estimating_smile_intensity_A_better_way" rel="nofollow noreferrer">https://www.researchgate.net/publication/266672947_Estimating_smile_intensity_A_better_way</a></p></li> <li><p>Deep learning (CNN): <a href="https://arxiv.org/pdf/1602.00172.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1602.00172.pdf</a></p></li> <li><p>A list of awesome papers for smile and smile intensity detection: <a href="https://github.com/EvelynFan/AWESOME-FER/blob/master/README.md" rel="nofollow noreferrer">https://github.com/EvelynFan/AWESOME-FER/blob/master/README.md</a></p></li> <li><p>SmileNet project: <a href="https://sites.google.com/view/sensingfeeling/" rel="nofollow noreferrer">https://sites.google.com/view/sensingfeeling/</a></p></li> </ul> <p>Now, I'm assuming you don't have any label for actual smile intensity.</p> <p>In such a scenario, the existing smile detection methods can be used directly: you'll use the last activation output (sigmoid) as a confidence score for smiling. If the confidence is higher, the intensity should be higher.</p> <p>Now, you can use the facial landmark points as separate features (pass them through an LSTM block) and concatenate them to the CNN features at an early stage or later to improve the performance of your model.</p> <p>If you have labels for smile intensity, you can just solve it as a regression problem: the CNN will have one output and will try to regress the smile intensity (the normalized smile intensity with a sigmoid in this case).</p>
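<p>A minimal PyTorch sketch of the regression variant described above (purely illustrative; the input size, architecture and labels are placeholders, and in practice you would feed mouth crops and/or landmark features rather than random tensors):</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn

# assumes 64x64 grayscale crops of the mouth region and an intensity label in [0, 1]
class SmileIntensityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # sigmoid output so the predicted intensity stays in [0, 1]
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))

model = SmileIntensityNet()
x = torch.randn(8, 1, 64, 64)            # dummy batch of mouth crops
target = torch.rand(8, 1)                # hypothetical intensity labels
loss = nn.MSELoss()(model(x), target)    # regression on the intensity
loss.backward()
</code></pre>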
2020-05-01 19:48:23.960000+00:00
2020-05-01 19:48:23.960000+00:00
null
null
61,549,401
<p>I have a dataset made up of images of faces, with the corresponding landmarks that make up the mouth. These landmarks are sets of 2D points (x, y pixel positions). Each image-landmark data pair is tagged as either a smile or neutral.</p> <p>What I would like to do is train a deep learning model to return a smile intensity for a new image-landmark data pair.</p> <p>What should I be searching for to help me with the next step? Is it a CNN that I need? In my limited understanding, the usual training input is just an image, whereas I would also be passing the landmark sets to train with. Or would an SVM approach be more accurate?</p> <p>I am looking for maximum accuracy, as much as possible.</p> <p>What is the approach that I need called?</p> <p>I am happy to use PyTorch, Dlib or any framework; I am just a little stuck on the search terms to help me move forward.</p> <p>Thank you.</p>
2020-05-01 19:26:05.957000+00:00
2020-05-01 19:48:23.960000+00:00
2020-05-01 19:46:11.773000+00:00
tensorflow|machine-learning|deep-learning|pytorch|dlib
['https://www.researchgate.net/publication/266672947_Estimating_smile_intensity_A_better_way', 'https://arxiv.org/pdf/1602.00172.pdf', 'https://github.com/EvelynFan/AWESOME-FER/blob/master/README.md', 'https://sites.google.com/view/sensingfeeling/']
4
46,620,505
<p>That is a great post about 3D batchnorm; it often goes unnoticed that batchnorm can be applied to any tensor of rank greater than 1. Your code is correct, but I couldn't help but add a few important notes on this:</p> <ul> <li><p>A "standard" 2D batchnorm (accepts a 4D tensor) can be significantly faster in tensorflow than 3D or higher, because it supports the <code>fused_batch_norm</code> implementation, which applies <a href="https://www.tensorflow.org/performance/performance_guide#common_fused_ops" rel="nofollow noreferrer">one kernel operation</a>:</p> <blockquote> <p>Fused batch norm combines the multiple operations needed to do batch normalization into a single kernel. Batch norm is an expensive process that for some models makes up a large percentage of the operation time. Using fused batch norm can result in a 12%-30% speedup.</p> </blockquote> <p>There is <a href="https://github.com/tensorflow/tensorflow/issues/5694" rel="nofollow noreferrer">an issue on GitHub</a> to support 3D filters as well, but there hasn't been any recent activity and at this point the issue is closed unresolved.</p></li> <li><p>Although the original paper prescribes using batchnorm before ReLU activation (and that's what you did in the code above), there is evidence that it's probably better to use batchnorm <em>after</em> the activation (both orderings are sketched below). Here's a comment on <a href="https://github.com/fchollet/keras/issues/1802#issuecomment-187966878" rel="nofollow noreferrer">Keras GitHub</a> by Francois Chollet:</p> <blockquote> <p>... I can guarantee that recent code written by Christian [Szegedy] applies relu before BN. It is still occasionally a topic of debate, though.</p> </blockquote></li> <li><p>For anyone interested in applying the idea of normalization in practice, there have been recent research developments of this idea, namely <a href="https://arxiv.org/abs/1602.07868" rel="nofollow noreferrer">weight normalization</a> and <a href="https://arxiv.org/abs/1607.06450" rel="nofollow noreferrer">layer normalization</a>, which fix certain disadvantages of the original batchnorm; for example they work better for LSTM and recurrent networks.</p></li> </ul>
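<p>For reference, here is how the two orderings look with the modern <code>tf.keras</code> layers (a later API than the <code>tf.contrib</code> one used in the question; purely illustrative, with placeholder filter counts):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(9, 120, 160, 3))   # a sequence of 9 RGB frames

# batchnorm before the activation, as in the original paper
x = layers.Conv3D(96, 3, padding='same', use_bias=False)(inputs)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)

# batchnorm after the activation, the variant discussed above
y = layers.Conv3D(96, 3, padding='same', use_bias=False)(inputs)
y = layers.ReLU()(y)
y = layers.BatchNormalization()(y)

model = tf.keras.Model(inputs, [x, y])
model.summary()
</code></pre>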
2017-10-07 13:07:41.883000+00:00
2017-10-07 13:07:41.883000+00:00
null
null
41,830,723
<p>I'm implementing a model relying on 3D convolutions (for a task that is similar to action recognition) and I want to use batch normalization (see <a href="https://arxiv.org/abs/1502.03167" rel="noreferrer">[Ioffe &amp; Szegedy 2015]</a>). I could not find any tutorial focusing on 3D convs, hence I'm making a short one here which I'd like to review with you.</p> <p>The code below refers to TensorFlow r0.12 and it explicitly instances variables - I mean I'm not using tf.contrib.learn except for the tf.contrib.layers.batch_norm() function. I'm doing this both to better understand how things work under the hood and to have more implementation freedom (e.g., variable summaries).</p> <p>I will get to the 3D convolution case smoothly by first writing the example for a fully-connected layer, then for a 2D convolution and finally for the 3D case. While going through the code, it would be great if you could check if everything is done correctly - the code runs, but I'm not 100% sure about the way I apply batch normalization. I end this post with a more detailed question.</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf # This flag is used to allow/prevent batch normalization params updates # depending on whether the model is being trained or used for prediction. training = tf.placeholder_with_default(True, shape=()) </code></pre> <h2>Fully-connected (FC) case</h2> <pre class="lang-py prettyprint-override"><code># Input. INPUT_SIZE = 512 u = tf.placeholder(tf.float32, shape=(None, INPUT_SIZE)) # FC params: weights only, no bias as per [Ioffe &amp; Szegedy 2015]. FC_OUTPUT_LAYER_SIZE = 1024 w = tf.Variable(tf.truncated_normal( [INPUT_SIZE, FC_OUTPUT_LAYER_SIZE], dtype=tf.float32, stddev=1e-1)) # Layer output with no activation function (yet). fc = tf.matmul(u, w) # Batch normalization. fc_bn = tf.contrib.layers.batch_norm( fc, center=True, scale=True, is_training=training, scope='fc-batch_norm') # Activation function. fc_bn_relu = tf.nn.relu(fc_bn) print(fc_bn_relu) # Tensor("Relu:0", shape=(?, 1024), dtype=float32) </code></pre> <h2>2D convolutional (CNN) layer case</h2> <pre class="lang-py prettyprint-override"><code># Input: 640x480 RGB images (whitened input, hence tf.float32). INPUT_HEIGHT = 480 INPUT_WIDTH = 640 INPUT_CHANNELS = 3 u = tf.placeholder(tf.float32, shape=(None, INPUT_HEIGHT, INPUT_WIDTH, INPUT_CHANNELS)) # CNN params: wights only, no bias as per [Ioffe &amp; Szegedy 2015]. CNN_FILTER_HEIGHT = 3 # Space dimension. CNN_FILTER_WIDTH = 3 # Space dimension. CNN_FILTERS = 128 w = tf.Variable(tf.truncated_normal( [CNN_FILTER_HEIGHT, CNN_FILTER_WIDTH, INPUT_CHANNELS, CNN_FILTERS], dtype=tf.float32, stddev=1e-1)) # Layer output with no activation function (yet). CNN_LAYER_STRIDE_VERTICAL = 1 CNN_LAYER_STRIDE_HORIZONTAL = 1 CNN_LAYER_PADDING = 'SAME' cnn = tf.nn.conv2d( input=u, filter=w, strides=[1, CNN_LAYER_STRIDE_VERTICAL, CNN_LAYER_STRIDE_HORIZONTAL, 1], padding=CNN_LAYER_PADDING) # Batch normalization. cnn_bn = tf.contrib.layers.batch_norm( cnn, data_format='NHWC', # Matching the "cnn" tensor which has shape (?, 480, 640, 128). center=True, scale=True, is_training=training, scope='cnn-batch_norm') # Activation function. cnn_bn_relu = tf.nn.relu(cnn_bn) print(cnn_bn_relu) # Tensor("Relu_1:0", shape=(?, 480, 640, 128), dtype=float32) </code></pre> <h2>3D convolutional (CNN3D) layer case</h2> <pre class="lang-py prettyprint-override"><code># Input: sequence of 9 160x120 RGB images (whitened input, hence tf.float32). 
INPUT_SEQ_LENGTH = 9 INPUT_HEIGHT = 120 INPUT_WIDTH = 160 INPUT_CHANNELS = 3 u = tf.placeholder(tf.float32, shape=(None, INPUT_SEQ_LENGTH, INPUT_HEIGHT, INPUT_WIDTH, INPUT_CHANNELS)) # CNN params: wights only, no bias as per [Ioffe &amp; Szegedy 2015]. CNN3D_FILTER_LENGHT = 3 # Time dimension. CNN3D_FILTER_HEIGHT = 3 # Space dimension. CNN3D_FILTER_WIDTH = 3 # Space dimension. CNN3D_FILTERS = 96 w = tf.Variable(tf.truncated_normal( [CNN3D_FILTER_LENGHT, CNN3D_FILTER_HEIGHT, CNN3D_FILTER_WIDTH, INPUT_CHANNELS, CNN3D_FILTERS], dtype=tf.float32, stddev=1e-1)) # Layer output with no activation function (yet). CNN3D_LAYER_STRIDE_TEMPORAL = 1 CNN3D_LAYER_STRIDE_VERTICAL = 1 CNN3D_LAYER_STRIDE_HORIZONTAL = 1 CNN3D_LAYER_PADDING = 'SAME' cnn3d = tf.nn.conv3d( input=u, filter=w, strides=[1, CNN3D_LAYER_STRIDE_TEMPORAL, CNN3D_LAYER_STRIDE_VERTICAL, CNN3D_LAYER_STRIDE_HORIZONTAL, 1], padding=CNN3D_LAYER_PADDING) # Batch normalization. cnn3d_bn = tf.contrib.layers.batch_norm( cnn3d, data_format='NHWC', # Matching the "cnn" tensor which has shape (?, 9, 120, 160, 96). center=True, scale=True, is_training=training, scope='cnn3d-batch_norm') # Activation function. cnn3d_bn_relu = tf.nn.relu(cnn3d_bn) print(cnn3d_bn_relu) # Tensor("Relu_2:0", shape=(?, 9, 120, 160, 96), dtype=float32) </code></pre> <p>What I would like to make sure is whether the code above exactly implements batch normalization as described in <a href="https://arxiv.org/abs/1502.03167" rel="noreferrer">[Ioffe &amp; Szegedy 2015]</a> at the end of Sec. 3.2:</p> <blockquote> <p>For convolutional layers, we additionally want the normalization to obey the convolutional property – so that different elements of the same feature map, at different locations, are normalized in the same way. To achieve this, we jointly normalize all the activations in a minibatch, over all locations. [...] Alg. 2 is modified similarly, so that during inference the BN transform applies the same linear transformation to each activation in a given feature map.</p> </blockquote> <p><strong>UPDATE</strong> I guess the code above is also correct for the 3D conv case. In fact, when I define my model if I print all the trainable variables, I also see the expected numbers of beta and gamma variables. For instance:</p> <pre class="lang-py prettyprint-override"><code>Tensor("conv3a/conv3d_weights/read:0", shape=(3, 3, 3, 128, 256), dtype=float32) Tensor("BatchNorm_2/beta/read:0", shape=(256,), dtype=float32) Tensor("BatchNorm_2/gamma/read:0", shape=(256,), dtype=float32) </code></pre> <p>This looks ok to me since due to BN, one pair of beta and gamma are learned for each feature map (256 in total).</p> <hr> <p>[Ioffe &amp; Szegedy 2015]: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift</p>
2017-01-24 14:28:44.437000+00:00
2017-10-30 12:10:16.443000+00:00
2017-10-30 12:10:16.443000+00:00
python|machine-learning|tensorflow|deep-learning|batch-normalization
['https://www.tensorflow.org/performance/performance_guide#common_fused_ops', 'https://github.com/tensorflow/tensorflow/issues/5694', 'https://github.com/fchollet/keras/issues/1802#issuecomment-187966878', 'https://arxiv.org/abs/1602.07868', 'https://arxiv.org/abs/1607.06450']
5
50,132,390
<p>No, it is not uncommon to have asymmetric architectures, e.g. [1, 2, 3, etc.].</p> <ol> <li><p>Tang, Shuai, et al. "Exploring Asymmetric Encoder-Decoder Structure for Context-based Sentence Representation Learning." arXiv preprint arXiv:1710.10380 (2017). <a href="https://arxiv.org/pdf/1710.10380.pdf" rel="nofollow noreferrer">pdf</a></p></li> <li><p>Nalisnick, Eric, and Padhraic Smyth. "Stick-breaking variational autoencoders." International Conference on Learning Representations (ICLR). 2017. <a href="https://par.nsf.gov/servlets/purl/10039928" rel="nofollow noreferrer">pdf</a></p></li> <li><p>Nash, Charlie, and Chris KI Williams. "The shape variational autoencoder: A deep generative model of part‐segmented 3D objects." Computer Graphics Forum. Vol. 36. No. 5. 2017. <a href="http://homepages.inf.ed.ac.uk/ckiw/postscript/sgp2017.pdf" rel="nofollow noreferrer">pdf</a></p></li> </ol>
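<p>As a toy illustration of such an asymmetric layout (a plain, non-variational autoencoder in Keras rather than a full VAE, using the layer sizes from the question; activations and loss are placeholders):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras import layers, Model

inp = tf.keras.Input(shape=(54,))
h = inp
for units in (10, 5):                      # encoder: 54-10-5-3
    h = layers.Dense(units, activation='relu')(h)
z = layers.Dense(3, name='latent')(h)

d = z
for units in (5, 10, 25, 35, 45):          # decoder: 3-5-10-25-35-45-54
    d = layers.Dense(units, activation='relu')(d)
out = layers.Dense(54, activation='linear')(d)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.summary()
</code></pre>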
2018-05-02 10:27:38.720000+00:00
2018-05-02 10:27:38.720000+00:00
null
null
50,131,402
<p>Must the encoder have the same number of layers as the decoder in a Variational Autoencoder? I got a slightly better result with the encoder (Dense): 54-10-5-3 and decoder (Dense): 3-5-10-25-35-45-54.</p>
2018-05-02 09:36:28.807000+00:00
2018-05-03 22:23:34.450000+00:00
null
keras|autoencoder
['https://arxiv.org/pdf/1710.10380.pdf', 'https://par.nsf.gov/servlets/purl/10039928', 'http://homepages.inf.ed.ac.uk/ckiw/postscript/sgp2017.pdf']
3
53,929,352
<p>As a short and <strong>practical</strong> answer: here the learning rate is decreased if the model is more complex; the variable <code>model_size</code> is approximately the number of neurons per layer: </p> <pre><code>def rate(self, step = None): "Implement `lrate` above" if step is None: step = self._step return self.factor * \ (self.model_size ** (-0.5) * min(step ** (-0.5), step * self.warmup ** (-1.5))) </code></pre> <p>Source: <a href="https://github.com/harvardnlp/annotated-transformer/blob/master/The%20Annotated%20Transformer.ipynb" rel="nofollow noreferrer">The Annotated Transformer</a></p> <p>Also see: <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">Adam: A Method for Stochastic Optimization</a></p>
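<p>A standalone version of that schedule, just to show how it behaves over training steps (warmup = 4000 follows the Annotated Transformer example; the other defaults are placeholders):</p>
<pre class="lang-py prettyprint-override"><code>def rate(step, model_size=512, factor=1.0, warmup=4000):
    # the rate shrinks as model_size grows; the warmup term ramps it up early on
    return factor * (model_size ** (-0.5) * min(step ** (-0.5), step * warmup ** (-1.5)))

for step in (100, 1000, 4000, 20000, 100000):
    print(step, rate(step))
</code></pre>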
2018-12-26 08:21:08.773000+00:00
2018-12-26 08:21:08.773000+00:00
null
null
34,476,447
<p>I have a convolutional neural network of which I modified the architecture. I do not have time to retrain and perform a cross-validation (grid search over optimal parameters). I want to <em>intuitively adjust</em> the learning rate.</p> <p>Should I <em>increase</em> or <em>decrease</em> the learning rate of my RMS (SGD-based) optimiser if: </p> <ol> <li>I add <em>more</em> neurons to the fully connected layers?</li> <li>on a convolutional neural network, I remove a sub-sampling (average or max pooling) layer before the full connections, and I increase the amount of fully connected units between that feature map and the softmax outputs (so that there are <em>more</em> weights connected to the fully connected neurons on top)?</li> </ol>
2015-12-27 00:10:30.873000+00:00
2018-12-26 08:21:08.773000+00:00
2018-08-24 13:19:55.397000+00:00
machine-learning|neural-network|deep-learning|conv-neural-network
['https://github.com/harvardnlp/annotated-transformer/blob/master/The%20Annotated%20Transformer.ipynb', 'https://arxiv.org/abs/1412.6980']
2
51,988,094
<p>We all agree that the learning rate can be seen as a way to control overfitting, just like dropout or batch size. But I'm writing this answer because I think the following in Amir's answer and comments is misleading :</p> <ul> <li><blockquote> <p>adding more layers/neurons increases the chance of over-fitting. Therefore it would be better if you decrease the learning rate over time.</p> </blockquote></li> <li><blockquote> <p>Since adding more layers/nodes to the model makes it prone to over-fitting [...] taking small steps towards the local minima is recommended</p> </blockquote></li> </ul> <p>It's actually the <strong>OPPOSITE</strong>! A smaller learning rate will <em>increase</em> the risk of overfitting!</p> <p>Citing from <a href="https://arxiv.org/pdf/1708.07120.pdf" rel="noreferrer">Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates (Smith &amp; Topin 2018)</a> (a very interesting read btw):</p> <blockquote> <p>There are many forms of regularization, such as <strong>large learning rates</strong>, small batch sizes, weight decay, and dropout. Practitioners must balance the various forms of regularization for each dataset and architecture in order to obtain good performance. <strong>Reducing other forms of regularization and regularizing with very large learning rates makes training significantly more efficient.</strong></p> </blockquote> <p>So, as Guillaume Chevalier said in his first comment, if you add regularization, decreasing the learning rate might be a good idea if you want to keep the overall amount of regularization constant. But if your goal is to increase the overall amount of regularization, or if you reduced other means of regularization (e.g., decreased dropout, increased batch size), then the learning rate should be <em>increased</em>.</p> <p>Related (and also very interesting): <a href="https://arxiv.org/pdf/1711.00489.pdf" rel="noreferrer">Don't decay the learning rate, increase the batch size (Smith et al. ICLR'18)</a>.</p>
2018-08-23 14:24:34.733000+00:00
2018-08-23 14:24:34.733000+00:00
null
null
34,476,447
<p>I have a convolutional neural network of which I modified the architecture. I do not have time to retrain and perform a cross-validation (grid search over optimal parameters). I want to <em>intuitively adjust</em> the learning rate.</p> <p>Should I <em>increase</em> or <em>decrease</em> the learning rate of my RMS (SGD-based) optimiser if: </p> <ol> <li>I add <em>more</em> neurons to the fully connected layers?</li> <li>on a convolutional neural network, I remove a sub-sampling (average or max pooling) layer before the full connections, and I increase the amount of fully connected units between that feature map and the softmax outputs (so that there are <em>more</em> weights connected to the fully connected neurons on top)?</li> </ol>
2015-12-27 00:10:30.873000+00:00
2018-12-26 08:21:08.773000+00:00
2018-08-24 13:19:55.397000+00:00
machine-learning|neural-network|deep-learning|conv-neural-network
['https://arxiv.org/pdf/1708.07120.pdf', 'https://arxiv.org/pdf/1711.00489.pdf']
2
46,209,704
<p>There is a hot field called "object detection" that tries to do what you want. In general, you can detect anything (digits, people, cars, etc.) in images and even videos.</p> <p>The state-of-the-art techniques roughly fall into two categories:</p> <ol> <li><a href="https://arxiv.org/abs/1506.01497" rel="nofollow noreferrer">Faster-RCNN</a>, which first proposes a lot of candidate windows for objects of interest and then detects what is actually inside these windows. </li> <li><a href="https://arxiv.org/abs/1512.02325" rel="nofollow noreferrer">SSD</a>, which scans the image only once and detects objects; it is faster but not as reliable as Faster-RCNN. </li> </ol> <p>A well-known real-time object detection method is YOLO (You Only Look Once), which falls into the SSD category and has a very impressive real-time demo <a href="https://pjreddie.com/darknet/yolo/" rel="nofollow noreferrer">here</a> to give you a sense of object detection. Search these methods' names and you will find a lot of example code that satisfies your needs. </p> <p>If you are only looking for digit detection, also check out work surrounding Stanford's <a href="http://ufldl.stanford.edu/housenumbers/" rel="nofollow noreferrer">House Number Dataset</a>. However, note that these works are generally from five or more years ago and do not necessarily beat general methods like Faster-RCNN and SSD.</p>
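<p>As a small illustration of how easy it is to try a generic pretrained detector these days (this uses torchvision, which is not mentioned above; the COCO-pretrained model would still need fine-tuning on digit boxes, e.g. from the SVHN dataset):</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torchvision

# Faster R-CNN pretrained on COCO (torchvision 0.13+; older releases use pretrained=True)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights='DEFAULT')
model.eval()

image = torch.rand(3, 480, 640)             # dummy RGB image tensor with values in [0, 1]
with torch.no_grad():
    detections = model([image])[0]          # dict with 'boxes', 'labels', 'scores'
print(detections['boxes'].shape, detections['scores'][:5])
</code></pre>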
2017-09-14 02:47:16.273000+00:00
2017-09-14 02:47:16.273000+00:00
null
null
46,208,481
<p>After you've trained a model on the MNIST set, how can I now classify an image as having two digits? More generally, how do I train a model to detect any number of digits on an image?</p>
2017-09-13 23:58:36.017000+00:00
2017-09-14 02:47:16.273000+00:00
null
machine-learning|computer-vision|mnist
['https://arxiv.org/abs/1506.01497', 'https://arxiv.org/abs/1512.02325', 'https://pjreddie.com/darknet/yolo/', 'http://ufldl.stanford.edu/housenumbers/']
4
48,705,375
<p>Personally, I think the reason the agent's performance collapsed may be over-estimation of the action values. Double DQN was proposed to address exactly this issue; see the paper <a href="https://arxiv.org/abs/1509.06461v1" rel="nofollow noreferrer">Deep Reinforcement Learning with Double Q-Learning</a>.</p>
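<p>For reference, the core change in Double DQN is only in how the bootstrap target is built: the online network picks the next action and the target network evaluates it. A minimal illustrative sketch (assuming <code>q_online</code> and <code>q_target</code> are two copies of your Q-network; this is not the asker's A3C code):</p> <pre class="lang-py prettyprint-override"><code>import torch

def double_dqn_targets(q_online, q_target, rewards, next_states, dones, gamma=0.99):
    """Compute Double DQN targets for a batch of transitions."""
    with torch.no_grad():
        # 1) action selection with the online network
        best_actions = q_online(next_states).argmax(dim=1, keepdim=True)
        # 2) action evaluation with the (slowly updated) target network
        next_q = q_target(next_states).gather(1, best_actions).squeeze(1)
        # 3) standard TD target, cut off at terminal states
        return rewards + gamma * next_q * (1.0 - dones)
</code></pre>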
2018-02-09 12:02:12.750000+00:00
2018-02-09 12:02:12.750000+00:00
null
null
45,542,902
<p>I am trying to train an agent on <a href="https://github.com/mwydmuch/ViZDoom" rel="nofollow noreferrer">ViZDoom</a> platform on the deadly_corridor scenario with A3C algorithm and TensorFlow on TITAN X GPU server, however, the performance collapsed after training about 2+ days. As you can see in the following picture.</p> <p><a href="https://i.stack.imgur.com/O9l3A.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O9l3A.jpg" alt="enter image description here"></a></p> <p>There are 6 demons in the corridor and the agent should kill at least 5 demons to get to the destination and get the vest.</p> <p>Here is the code of the newtwork</p> <pre class="lang-py prettyprint-override"><code>with tf.variable_scope(scope): self.inputs = tf.placeholder(shape=[None, *shape, 1], dtype=tf.float32) self.conv_1 = slim.conv2d(activation_fn=tf.nn.relu, inputs=self.inputs, num_outputs=32, kernel_size=[8, 8], stride=4, padding='SAME') self.conv_2 = slim.conv2d(activation_fn=tf.nn.relu, inputs=self.conv_1, num_outputs=64, kernel_size=[4, 4], stride=2, padding='SAME') self.conv_3 = slim.conv2d(activation_fn=tf.nn.relu, inputs=self.conv_2, num_outputs=64, kernel_size=[3, 3], stride=1, padding='SAME') self.fc = slim.fully_connected(slim.flatten(self.conv_3), 512, activation_fn=tf.nn.elu) # LSTM lstm_cell = tf.contrib.rnn.BasicLSTMCell(cfg.RNN_DIM, state_is_tuple=True) c_init = np.zeros((1, lstm_cell.state_size.c), np.float32) h_init = np.zeros((1, lstm_cell.state_size.h), np.float32) self.state_init = [c_init, h_init] c_in = tf.placeholder(tf.float32, [1, lstm_cell.state_size.c]) h_in = tf.placeholder(tf.float32, [1, lstm_cell.state_size.h]) self.state_in = (c_in, h_in) rnn_in = tf.expand_dims(self.fc, [0]) step_size = tf.shape(self.inputs)[:1] state_in = tf.contrib.rnn.LSTMStateTuple(c_in, h_in) lstm_outputs, lstm_state = tf.nn.dynamic_rnn(lstm_cell, rnn_in, initial_state=state_in, sequence_length=step_size, time_major=False) lstm_c, lstm_h = lstm_state self.state_out = (lstm_c[:1, :], lstm_h[:1, :]) rnn_out = tf.reshape(lstm_outputs, [-1, 256]) # Output layers for policy and value estimations self.policy = slim.fully_connected(rnn_out, cfg.ACTION_DIM, activation_fn=tf.nn.softmax, biases_initializer=None) self.value = slim.fully_connected(rnn_out, 1, activation_fn=None, biases_initializer=None) if scope != 'global' and not play: self.actions = tf.placeholder(shape=[None], dtype=tf.int32) self.actions_onehot = tf.one_hot(self.actions, cfg.ACTION_DIM, dtype=tf.float32) self.target_v = tf.placeholder(shape=[None], dtype=tf.float32) self.advantages = tf.placeholder(shape=[None], dtype=tf.float32) self.responsible_outputs = tf.reduce_sum(self.policy * self.actions_onehot, axis=1) # Loss functions self.policy_loss = -tf.reduce_sum(self.advantages * tf.log(self.responsible_outputs+1e-10)) self.value_loss = tf.reduce_sum(tf.square(self.target_v - tf.reshape(self.value, [-1]))) self.entropy = -tf.reduce_sum(self.policy * tf.log(self.policy+1e-10)) # Get gradients from local network using local losses local_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope) value_var, policy_var = local_vars[:-2] + [local_vars[-1]], local_vars[:-2] + [local_vars[-2]] self.var_norms = tf.global_norm(local_vars) self.value_gradients = tf.gradients(self.value_loss, value_var) value_grads, self.grad_norms_value = tf.clip_by_global_norm(self.value_gradients, 40.0) self.policy_gradients = tf.gradients(self.policy_loss, policy_var) policy_grads, self.grad_norms_policy = 
tf.clip_by_global_norm(self.policy_gradients, 40.0) # Apply local gradients to global network global_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'global') global_vars_value, global_vars_policy = \ global_vars[:-2] + [global_vars[-1]], global_vars[:-2] + [global_vars[-2]] self.apply_grads_value = optimizer.apply_gradients(zip(value_grads, global_vars_value)) self.apply_grads_policy = optimizer.apply_gradients(zip(policy_grads, global_vars_policy)) </code></pre> <p>And the optimizer is </p> <pre><code>optimizer = tf.train.RMSPropOptimizer(learning_rate=1e-5) </code></pre> <p>And here are some summaries of the gradients and norms</p> <p><a href="https://i.stack.imgur.com/kM1QY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kM1QY.jpg" alt="enter image description here"></a></p> <p>Help some one can help me to tackle this problem.</p>
2017-08-07 08:50:59.720000+00:00
2018-02-09 12:02:12.750000+00:00
null
deep-learning|reinforcement-learning
['https://arxiv.org/abs/1509.06461v1']
1
64,244,323
<p>The <code>requires</code> clause (here, something like <code>\valid(foo)</code>) means exactly that: from the point of view of the callee, it is something it can assume, since it is up to the caller (or, in the particular case of the main entry point, the outside world) to guarantee that the execution of the function will start in a state that respects the pre-condition.</p> <p>However, in your particular case, there's a catch: for technical reasons, Eva creates an initial context first and then reduces it according to the pre-condition. Hence, you'll get a warning that the <code>requires</code> is unknown.</p> <p>Generally speaking, the usual way to let Eva start in a specific context is to write a small wrapper function, potentially using the built-ins mentioned in section 9.2.1 of the <a href="http://frama-c.com/download/eva-manual-21.1-Scandium.pdf" rel="nofollow noreferrer">Eva manual</a>. There are also a few options (described in section 6.3 of the manual) that control the way the initial state is computed. If you don't need very precise information about the initial state, they might be sufficient (e.g. to just ensure that <code>foo</code> and any other pointer is valid, use <code>-eva-context-valid-pointers</code>).</p> <p>Finally, there have been experiments on generating a wrapper function from an ACSL <code>requires</code> clause (see <a href="https://arxiv.org/abs/1709.04497" rel="nofollow noreferrer">this paper</a>), but as far as I know the corresponding plug-in is not freely available.</p>
2020-10-07 12:46:41.230000+00:00
2020-10-07 12:46:41.230000+00:00
null
null
64,242,478
<p>Take the following C code example.</p> <pre class="lang-c prettyprint-override"><code>struct foo_t { int bar; }; int my_entry_point(const struct foo_t *foo) { return foo-&gt;bar; } </code></pre> <p>In our case, <code>my_entry_point</code> will be called from assembly, and <code>*foo</code> here must be assumed to always be correct.</p> <p>Running with the command line...</p> <pre><code>frama-c -eva -report -report-classify -report-unclassified-warning ERROR -c11 -main my_entry_point /tmp/test.c </code></pre> <p>... results in ...</p> <pre><code>[report] Monitoring events [kernel] Parsing /tmp/override.c (with preprocessing) [eva] Analyzing a complete application starting at my_entry_point [eva] Computing initial state [eva] Initial state computed [eva:initial-state] Values of globals at initialization [eva:alarm] /tmp/override.c:6: Warning: out of bounds read. assert \valid_read(&amp;foo-&gt;bar); [eva] done for function my_entry_point [eva] ====== VALUES COMPUTED ====== [eva:final-states] Values at end of function my_entry_point: __retres ∈ [--..--] [eva:summary] ====== ANALYSIS SUMMARY ====== ---------------------------------------------------------------------------- 1 function analyzed (out of 1): 100% coverage. In this function, 2 statements reached (out of 2): 100% coverage. ---------------------------------------------------------------------------- No errors or warnings raised during the analysis. ---------------------------------------------------------------------------- 1 alarm generated by the analysis: 1 invalid memory access ---------------------------------------------------------------------------- No logical properties have been reached by the analysis. ---------------------------------------------------------------------------- [report] Classification [ERROR:eva.unclassified.warning] Unclassified Warning (Plugin 'eva') [REVIEW:unclassified.unknown] my_entry_point_assert_Eva_mem_access [report] Reviews : 1 [report] Errors : 1 [report] Unclassified: 2 [report] User Error: Classified errors found [kernel] Plug-in report aborted: invalid user input. </code></pre> <p>Of course, we can always add a base-case <code>NULL</code> check, which satisfies the checker (this is probably how we'll solve this for now, anyway).</p> <pre class="lang-c prettyprint-override"><code>if (!foo) return 0; </code></pre> <p>But I'm more curious (for learning purposes) about how this might be done with e.g. ACSL annotations telling the checker &quot;hey, we understand this is pointer could, in theory, be invalid - however, please assume that, since it's the entry point, it is indeed valid&quot;.</p> <p>Is this something that ACSL supports, or can the behavior be altered via command line arguments to <code>frama-c</code>? I can see why the standards committee might be hesitant on adding such a mechanism to ACSL since it could be abused, but seeing as how I'm just learning ACSL I was curious to know what the common approach might be here.</p>
2020-10-07 10:49:11.730000+00:00
2020-10-07 13:53:25.660000+00:00
null
c|frama-c|acsl
['http://frama-c.com/download/eva-manual-21.1-Scandium.pdf', 'https://arxiv.org/abs/1709.04497']
2
2,277,408
<p>I can't find any good implementations, but since no one else can either I'm guessing you'll be writing your own, in which case I have a few handy references for you.</p> <p>A paper that no one seems to have mentioned is the original proposition for Min-Max-Heaps:</p> <p><a href="http://www.cs.otago.ac.nz/staffpriv/mike/Papers/MinMaxHeaps/MinMaxHeaps.pdf" rel="nofollow noreferrer">http://www.cs.otago.ac.nz/staffpriv/mike/Papers/MinMaxHeaps/MinMaxHeaps.pdf</a></p> <p>I've implemented a min-max heap from this paper twice (not in C) and found it fairly trivial.</p> <p>An improvement, which I haven't ever implemented, is a Min-Max-Fine-Heap. I can't find any good papers or references on a plain old fine heap, but I did find one on the min-max-fine-heap, which apparently performs better:</p> <p><a href="http://arxiv.org/ftp/cs/papers/0007/0007043.pdf" rel="nofollow noreferrer">http://arxiv.org/ftp/cs/papers/0007/0007043.pdf</a></p>
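<p>For what it's worth, the core idea of the Atkinson et al. paper is an ordinary array-based binary heap in which even depths are "min levels" and odd depths are "max levels", so the minimum sits at the root and the maximum at one of the root's children. A tiny sketch of that bookkeeping (illustrative only, not a full implementation, and written in Python rather than C++ for brevity):</p> <pre><code>def on_min_level(i):
    # depth of node i in a 0-based array heap; even depth means a min level
    return ((i + 1).bit_length() - 1) % 2 == 0

def find_min(heap):
    return heap[0]

def find_max(heap):
    # the maximum lives among the root's children, which are on a max level
    return max(heap[1:3]) if len(heap) &gt; 1 else heap[0]
</code></pre>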
2010-02-17 00:14:14.730000+00:00
2010-02-17 00:14:14.730000+00:00
null
null
2,252,793
<p>I'm looking for algorithms like ones in the stl (<code>push_heap</code>, <code>pop_heap</code>, <code>make_heap</code>) except with the ability to pop both the minimum and maximum value efficiently. AKA double ended priority queue. As described <a href="http://www.diku.dk/forskning/performance-engineering/Jesper/heaplab/heapsurvey_html/node11.html" rel="noreferrer">here</a>. </p> <p>Any clean implementation of a double ended priority queue would also be of interest as an alternative, however this question is mainly about a MinMax Heap implementation.</p> <p>My google-fu has not been fruitful, but surely, it must exist?</p>
2010-02-12 15:24:08.023000+00:00
2020-01-25 04:13:56.803000+00:00
2010-02-14 17:53:35.110000+00:00
c++|algorithm|data-structures|heap
['http://www.cs.otago.ac.nz/staffpriv/mike/Papers/MinMaxHeaps/MinMaxHeaps.pdf', 'http://arxiv.org/ftp/cs/papers/0007/0007043.pdf']
2
63,600,286
<p>Refer to the <a href="https://arxiv.org/abs/1512.00567" rel="nofollow noreferrer">InceptionV3 paper</a>.</p> <p>You can see that the mixed layers are made of four parallel branches that share a single input, and the output is obtained by concatenating all parallel outputs into one. Note that to concatenate all the outputs, all parallel feature maps must have identical spatial dimensions (the number of feature maps can differ), and this is achieved with appropriate strides, padding and pooling.</p> <p><a href="https://i.stack.imgur.com/H4pFo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H4pFo.png" alt="Inception layer" /></a></p>
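<p>As a rough illustration, here is a simplified Inception-style block written with the Keras functional API (the branch widths are only indicative and do not exactly match any particular <code>mixed</code> layer):</p> <pre class="lang-py prettyprint-override"><code>from tensorflow.keras import layers, Input, Model

inp = Input(shape=(35, 35, 192))                     # example input feature map

# four parallel branches, all preserving the 35x35 spatial size
b1 = layers.Conv2D(64, 1, padding="same", activation="relu")(inp)

b2 = layers.Conv2D(48, 1, padding="same", activation="relu")(inp)
b2 = layers.Conv2D(64, 5, padding="same", activation="relu")(b2)

b3 = layers.Conv2D(64, 1, padding="same", activation="relu")(inp)
b3 = layers.Conv2D(96, 3, padding="same", activation="relu")(b3)
b3 = layers.Conv2D(96, 3, padding="same", activation="relu")(b3)

b4 = layers.AveragePooling2D(3, strides=1, padding="same")(inp)
b4 = layers.Conv2D(32, 1, padding="same", activation="relu")(b4)

# the "mixed" output: channel-wise concatenation of all branches
out = layers.Concatenate()([b1, b2, b3, b4])         # 64 + 64 + 96 + 32 = 256 channels

Model(inp, out).summary()
</code></pre>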
2020-08-26 14:53:25.030000+00:00
2020-08-26 14:53:25.030000+00:00
null
null
63,600,026
<p>I am currently trying to understand the architecture of Inseption v3 as implemented in <code>tf.keras.applications.InceptionV3</code>.</p> <p>I am looking at the list of names in the model's layers:</p> <pre class="lang-py prettyprint-override"><code>print([layer.name for layer in model.layers]) #Outputs: ['input_1', 'conv2d', 'batch_normalization', 'activation', 'conv2d_1', 'batch_normalization_1', 'activation_1', 'conv2d_2', ... ] </code></pre> <p>I understand how batch normalization, pooling and conv layers transform inputs, but deeper we have layers named <code>mixed1, mixed2, ...</code> and so on. I am trying to understand how they (mixed layers) are transforming their inputs.</p> <p>So far, I couldn't find any information about them. How does a mixed layer work? What does it do?</p>
2020-08-26 14:37:30.973000+00:00
2021-01-05 08:00:51.693000+00:00
null
python|tensorflow|keras|computer-vision
['https://arxiv.org/abs/1512.00567', 'https://i.stack.imgur.com/H4pFo.png']
2
63,180,623
<p>Since NumPy 1.17, the reason is largely backward compatibility. See also <a href="https://stackoverflow.com/questions/40914862/why-is-random-sample-faster-than-numpys-random-choice/62951059#62951059">this question</a> and <a href="https://stackoverflow.com/questions/62309424/does-numpy-random-seed-always-give-the-same-random-number-every-time/62310046#62310046">this question</a>.</p> <p>As of NumPy 1.17, <code>numpy.random.*</code> functions, including <code>numpy.random.choice</code>, are legacy functions and &quot;SHALL remain the same as they currently are&quot;, according to <a href="https://github.com/numpy/numpy/blob/master/doc/neps/nep-0019-rng-policy.rst" rel="nofollow noreferrer">NumPy's new RNG policy</a>, which also introduced a <a href="https://numpy.org/doc/stable/reference/random/index.html" rel="nofollow noreferrer">new random generation system for NumPy</a>. The reasons for making them legacy functions include the recommendation to avoid global state. Even so, however, NumPy did not deprecate any <code>numpy.random.*</code> functions in version 1.17, although a future version of NumPy might.</p> <p>Recall that in your examples, <code>numpy.random.choice</code> takes an array of <code>float</code>s as weights. An array of integer weights would lead to more exact random number generation. And although any <code>float</code> could be converted to a rational number (leading to rational-valued weights and thus integer weights), the legacy NumPy version appears not to do this. These and other implementation decisions in <code>numpy.random.choice</code> can't be changed without breaking backward compatibility.</p> <p>By the way, arithmetic coding is not the only algorithm that seeks to avoid wasting bits. Perhaps the canonical algorithm for sampling for a discrete distribution is the Knuth and Yao algorithm (1976), which exactly chooses a random integer based on the binary expansion of the probabilities involved, and treats the problem as a random walk on a binary tree. (This algorithm uses, on average, up to 2 bits away from the theoretical lower bound.) Any other integer generating algorithm can be ultimately described in the same way, namely as a random walk on a binary tree. For example, the <a href="https://arxiv.org/abs/2003.03830v2" rel="nofollow noreferrer">Fast Loaded Dice Roller</a> is a recent algorithm that has a guaranteed bound on the average number of bits it uses (in this case, no more than 6 bits away from the theoretical lower bound). The Han and Hoshi algorithm (from 1997) is another of this kind, but uses cumulative probabilities. See also my section, &quot;<a href="https://peteroupc.github.io/randomfunc.html#Weighted_Choice_With_Replacement" rel="nofollow noreferrer">Weighted Choice With Replacement</a>&quot;.</p>
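<p>Note that if you are not tied to the legacy functions, the post-1.17 <code>Generator</code> API offers the same interface without the backward-compatibility constraints, so it is the natural place to look for current and future performance improvements. A small sketch:</p> <pre><code>import numpy as np

rng = np.random.default_rng()        # PCG64-based Generator, not the legacy RandomState

# weighted sampling with the new API (same semantics as np.random.choice)
samples = rng.choice(2, size=100_000, p=[0.01, 0.99])

# for a 2-outcome case, a direct Bernoulli draw is usually faster still
samples_alt = (rng.random(100_000) &gt;= 0.01).astype(np.int64)   # 1 with probability 0.99
</code></pre>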
2020-07-30 20:08:46.040000+00:00
2021-01-19 02:40:19.517000+00:00
2021-01-19 02:40:19.517000+00:00
null
63,180,186
<p>If I evaluate something like:</p> <p><code>numpy.random.choice(2, size=100000, p=[0.01, 0.99])</code></p> <p>using one uniformly-distributed random <code>float</code>, say <code>r</code>, and deciding if <code>r &lt; 0.01</code> will presumably waste many of the random bits (entropy) generated. I've heard (second-hand) that generating psuedo-random numbers is computationally expensive, so I assumed that <code>numpy</code> would not be doing that, and rather would use a scheme like <a href="https://en.wikipedia.org/wiki/Arithmetic_coding" rel="nofollow noreferrer">arithmetic coding</a> in this case.</p> <p>However, at first <a href="https://github.com/numpy/numpy/blob/master/numpy/random/mtrand.pyx#L949" rel="nofollow noreferrer">glance</a> it appears that <code>choice</code> does indeed generate a <code>float</code> for every sample it is asked for. Further, a quick <code>timeit</code> experiment shows that generating <code>n</code> uniform floats is actually quicker than <code>n</code> samples from <code>p=[0.01, 0.99]</code>.</p> <pre><code>&gt;&gt;&gt; timeit.timeit(lambda : numpy.random.choice(2, size=100000, p=[0.01, 0.99]), number=1000) 1.74494537999999 &gt;&gt;&gt; timeit.timeit(lambda : numpy.random.random(size=100000), number=1000) 0.8165735180009506 </code></pre> <p>Does <code>choice</code> really generate a <code>float</code> for each sample, as it would appear? Would it not significantly improve performance to use some compression algorithm in some cases (particularly if <code>size</code> is large and <code>p</code> is distributed unevenly)? If not, why not?</p>
2020-07-30 19:37:20.773000+00:00
2021-01-19 02:40:19.517000+00:00
2020-07-30 19:53:02.683000+00:00
python|python-3.x|performance|numpy|random
['https://stackoverflow.com/questions/40914862/why-is-random-sample-faster-than-numpys-random-choice/62951059#62951059', 'https://stackoverflow.com/questions/62309424/does-numpy-random-seed-always-give-the-same-random-number-every-time/62310046#62310046', 'https://github.com/numpy/numpy/blob/master/doc/neps/nep-0019-rng-policy.rst', 'https://numpy.org/doc/stable/reference/random/index.html', 'https://arxiv.org/abs/2003.03830v2', 'https://peteroupc.github.io/randomfunc.html#Weighted_Choice_With_Replacement']
6
71,621,020
<p>Q# machine learning library implements one specific approach, circuit-centric quantum classifiers. You can find the documentation for this approach at <a href="https://docs.microsoft.com/en-us/azure/quantum/user-guide/libraries/machine-learning/intro" rel="nofollow noreferrer">https://docs.microsoft.com/en-us/azure/quantum/user-guide/libraries/machine-learning/intro</a> and the subsequent pages in that section. The paper it's based on is <a href="https://arxiv.org/abs/1804.00633" rel="nofollow noreferrer">'Circuit-centric quantum classifiers', Maria Schuld, Alex Bocharov, Krysta Svore and Nathan Wiebe</a>.</p>
2022-03-25 17:34:54.390000+00:00
2022-03-25 17:34:54.390000+00:00
null
null
71,596,588
<p>Recently, I have been learning the Q# language for machine learning. The half-moons sample runs correctly. Now I want to understand the details of the code, but there is very little explanation available. There are many methods I can't understand, and they are not introduced in detail; for example, the documentation only gives the name and parameters of a method, with no further information.<img src="https://i.stack.imgur.com/uaSzW.png" alt="enter image description here" /> So is there a detailed machine learning document for beginners? Thank you very much.</p> <p>How can I get such a detailed document?</p>
2022-03-24 03:05:04.330000+00:00
2022-03-25 17:34:54.390000+00:00
null
machine-learning|quantum-computing|q#
['https://docs.microsoft.com/en-us/azure/quantum/user-guide/libraries/machine-learning/intro', 'https://arxiv.org/abs/1804.00633']
2
43,711,097
<p>After doing my own research, the following methods have proven to work very well in practice: </p> <ul> <li><a href="http://dm.snu.ac.kr/static/docs/TR/SNUDM-TR-2015-03.pdf" rel="nofollow noreferrer">Variational Inference for On-line Anomaly Detection in High-Dimensional Time Series</a></li> <li><a href="https://arxiv.org/pdf/1612.06676.pdf" rel="nofollow noreferrer">Multivariate Industrial Time Series with Cyber-Attack Simulation: Fault Detection Using an LSTM-based Predictive Data Model</a></li> </ul>
2017-04-30 20:40:57.533000+00:00
2017-04-30 20:40:57.533000+00:00
null
null
43,565,003
<p>I am working on anomaly detection problem and I need your help and expertise. I have a sensor that records episodic time series data. For example, once in a while, the sensor activates for 10 seconds and records values at millisecond interval. My task is to identify whether the recorded pattern is not normal. In other words, I need to detect anomalies in that pattern compared to other recorded patterns. </p> <p>What would be the state-of-the-art approaches to that?</p>
2017-04-22 21:48:12.267000+00:00
2017-04-30 20:40:57.533000+00:00
null
machine-learning|time-series|deep-learning|anomaly-detection
['http://dm.snu.ac.kr/static/docs/TR/SNUDM-TR-2015-03.pdf', 'https://arxiv.org/pdf/1612.06676.pdf']
2
62,806,991
<h2>If you have coins with a <em>known</em> probability of heads</h2> <p>Assume you have a function <code>unfairCoin(p)</code>, which is a function that produces heads with a <em>known</em> probability <code>p</code> and tails otherwise. For example, it could be implemented like this:</p> <pre><code>function unfairCoin(p) { return Math.random() &lt; p ? True : false; } </code></pre> <p>Here is an algorithm that solves your problem given <code>unfairCoin</code>, assuming all the probabilities involved sum to 1:</p> <ol> <li>Set <code>cumu</code> to 1.</li> <li>For each item starting with the first: <ol> <li>Get the probability associated with the chosen item (call it <code>p</code>) and accept the item with probability <code>p / cumu</code> (e.g., via <code>unfairCoin(p / cumu)</code>). If the item is accepted, return that item.</li> <li>If the item was not accepted, subtract <code>p</code> from <code>cumu</code>.</li> </ol> </li> </ol> <p>This algorithm's expected time complexity depends on the order of the probabilities. In general, the algorithm's time complexity is linear, but if the probabilities are sorted in descending order, the expected time complexity is constant.</p> <p>EDIT (Jul. 30): As I've just found out, this exact algorithm was already described by Keith Schwarz in <a href="https://www.keithschwarz.com/darts-dice-coins/" rel="nofollow noreferrer">Darts, Dice, and Coins</a>, in &quot;Simulating a Loaded Die with a Biased Coin&quot;. That page also contains a proof of its correctness.</p> <hr /> <p>An alternative solution uses rejection sampling, but requires generating a random integer using fair coin tosses:</p> <ol> <li>Generate a uniform random integer index in the interval [0, n), where <code>n</code> is the number of items. This can be done, for example, using the <a href="https://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Fast Dice Roller</a> by J. Lumbroso, which uses only fair coin tosses (<code>unfairCoin(0.5)</code>); see the code below. Choose the item at the given index (starting at 0).</li> <li>Get the probability associated with the chosen item (call it <code>p</code>) and accept it with probability <code>p</code> (e.g., via <code>unfairCoin(p)</code>). If the item is accepted, return that item; otherwise, go to step 1.</li> </ol> <p>This algorithm's expected time complexity depends on the difference between the lowest and highest probability.</p> <p>Given the weights for each item, there are many other ways to make a weighted choice besides the algorithms given earlier; see my <a href="https://peteroupc.github.io/randomfunc.html#Weighted_Choice_With_Replacement" rel="nofollow noreferrer">note on weighted choice algorithms</a>.</p> <h3>Fast Dice Roller Implementation</h3> <p>The following is JavaScript code that implements the Fast Dice Roller. Note that it uses a rejection event and a loop to ensure it's unbiased.</p> <pre class="lang-js prettyprint-override"><code>function randomInt(minInclusive, maxExclusive) { var maxInclusive = (maxExclusive - minInclusive) - 1 var x = 1 var y = 0 while(true) { x = x * 2 var randomBit = Math.random()&lt;0.5 ? 
1 : 0 y = y * 2 + randomBit if(x &gt; maxInclusive) { if (y &lt;= maxInclusive) { return y + minInclusive } // Rejection x = x - maxInclusive - 1 y = y - maxInclusive - 1 } } } </code></pre> <p>The following version returns a BigInt, an arbitrary-precision integer supported in recent versions of JavaScript:</p> <pre class="lang-js prettyprint-override"><code>function randomInt(minInclusive, maxExclusive) { minInclusive=BigInt(minInclusive) maxExclusive=BigInt(maxExclusive) var maxInclusive = (maxExclusive - minInclusive) - BigInt(1) var x = BigInt(1) var y = BigInt(0) while(true) { x = x * BigInt(2) var randomBit = BigInt(Math.random()&lt;0.5 ? 1 : 0) y = y * BigInt(2) + randomBit if(x &gt; maxInclusive) { if (y &lt;= maxInclusive) { return y + minInclusive } // Rejection x = x - maxInclusive - BigInt(1) y = y - maxInclusive - BigInt(1) } } } </code></pre> <h2>If you have coins with an <em>unknown</em> probability of heads</h2> <p>If on the other hand, you have a function <code>COIN</code> that outputs heads with an <em>unknown</em> probability and tails otherwise, then there are two problems to solve to get to the solution:</p> <ol> <li>How to turn a biased coin into a fair coin.</li> <li>How to turn a fair coin into a loaded die.</li> </ol> <p>In other words, the task is to turn a biased coin into a loaded die.</p> <p>Let's see how these two problems can be solved.</p> <h3>From biased to fair coins</h3> <p>Assume you have a function <code>COIN()</code> that outputs heads with an unknown probability and tails otherwise. (If the coin is <em>known</em> to have probability 0.5 of producing heads then you already have a fair coin and can skip this step.)</p> <p>Here we can use von Neumann's algorithm from 1951 of turning a biased coin into a fair coin. It works like this:</p> <ol> <li>Flip <code>COIN()</code> twice.</li> <li>If both results are heads or both are tails, go to step 1.</li> <li>If the first result is heads and the other is tails, take heads as the final result.</li> <li>If the first result is tails and the other is heads, take tails as the final result.</li> </ol> <p>Now we have a fair coin <code>FAIRCOIN()</code>.</p> <p>(Note that there are other ways of producing fair coins this way, collectively called <em>randomness extractors</em>, but the von Neumann method is perhaps the simplest.)</p> <h3>From fair coins to loaded dice</h3> <p>Now, the method to turn fair coins into loaded dice is much more complex. It suffices to say that there are many ways to solve this problem, and the newest of them is called the <a href="https://arxiv.org/abs/2003.03830" rel="nofollow noreferrer"><em>Fast Loaded Dice Roller</em></a>, which produces a loaded die using just fair coins (in fact, it uses on average up to 6 fair coin tosses more than the optimal amount to produce each loaded die roll). The algorithm is not exactly trivial to implement, but see my <a href="https://github.com/peteroupc/peteroupc.github.io/blob/master/randomgen.py" rel="nofollow noreferrer">Python implementation</a> and the <a href="https://github.com/probcomp/fast-loaded-dice-roller" rel="nofollow noreferrer">implementation</a> by the <em>Fast Loaded Dice Roller</em>'s authors.</p> <p>Note that to use the Fast Loaded Dice Roller, you need to express each probability as a non-negative integer weight (such as 25, 40, 35 in your example).</p>
2020-07-09 03:34:38.270000+00:00
2022-04-06 18:45:01.840000+00:00
2022-04-06 18:45:01.840000+00:00
null
62,806,441
<p>I have a method that mimics an unfair coin. You can pass in a percentage, and it tells you whether or not you succeeded by returning a boolean. So if you call it with .25, it'll return <code>true</code> 25% of the time.</p> <p>I'm trying to figure out if I can use this function to create a weighted randomness function that works like this: <code>There is a 25% chance it returns x, a 40% chance it returns y, and a 35% chance it returns z.</code> This is just an example. I would want the function to work for an unlimited amount of letters, but the percentages added together should equal 1.</p> <p>The trick is, I want to be able to think about it the way I just described above. In other words:</p> <pre><code>result = function ({.25, x}, {.4, y}, {.35, z}) </code></pre> <p><code>result </code> should be x 25% of the time, and so on. Can I implement this function with my unfairCoin?</p> <hr /> <p>Here's how I worded it in a comment below.it might clarify what I'm asking for:</p> <p>Correct my logic if I'm making a mistake here, but let's say XY and Z all had .3333... Couldn't I use my unfair coin to pass in .3333... If that returns true, that means you get X as a result. If it returns false, call my unfair again with .5 if that returns true, return Y, otherwise return Z. If that is correct, I don't know how to get this working if the numbers AREN'T .3333 and if there's more than three</p>
2020-07-09 02:26:54.377000+00:00
2022-04-06 18:45:01.840000+00:00
2020-07-09 04:32:12.060000+00:00
javascript|random
['https://www.keithschwarz.com/darts-dice-coins/', 'https://arxiv.org/abs/1304.1916', 'https://peteroupc.github.io/randomfunc.html#Weighted_Choice_With_Replacement', 'https://arxiv.org/abs/2003.03830', 'https://github.com/peteroupc/peteroupc.github.io/blob/master/randomgen.py', 'https://github.com/probcomp/fast-loaded-dice-roller']
6
54,979,815
<h1>TL;DR</h1> <p>Try this out: <a href="https://github.com/huggingface/pytorch-pretrained-BERT" rel="noreferrer">https://github.com/huggingface/pytorch-pretrained-BERT</a></p> <p>First you have to set it up properly with </p> <pre><code>pip install -U pytorch-pretrained-bert </code></pre> <p>Then you can use the "masked language model" from the BERT algorithm, e.g. </p> <pre><code>import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM

# OPTIONAL: if you want to have more information on what's happening, activate the logger as follows
import logging
logging.basicConfig(level=logging.INFO)

# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

text = '[CLS] I want to [MASK] the car because it is cheap . [SEP]'
tokenized_text = tokenizer.tokenize(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)

# Index of the [MASK] token whose value we want to predict
masked_index = tokenized_text.index('[MASK]')

# Create the segments tensors.
segments_ids = [0] * len(tokenized_text)

# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])

# Load pre-trained model (weights)
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

# Predict all tokens
with torch.no_grad():
    predictions = model(tokens_tensor, segments_tensors)

predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]

print(predicted_token)
</code></pre> <p>[out]:</p> <pre><code>buy </code></pre> <h1>In Long</h1> <p>To truly understand why you need the <code>[CLS]</code>, <code>[MASK]</code> and segment tensors, please do read the paper carefully, <a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">https://arxiv.org/abs/1810.04805</a> </p> <p>And if you're lazy, you can read this nice blogpost from Lilian Weng, <a href="https://lilianweng.github.io/lil-log/2019/01/31/generalized-language-models.html" rel="noreferrer">https://lilianweng.github.io/lil-log/2019/01/31/generalized-language-models.html</a></p> <p>Other than BERT, there are a lot of other models that can perform the task of filling in the blank. Do look at the other models in the <code>pytorch-pretrained-BERT</code> repository, but more importantly dive deeper into the task of "Language Modeling", i.e. the task of predicting the next word given a history. </p>
2019-03-04 09:02:07.807000+00:00
2019-03-04 09:02:07.807000+00:00
null
null
54,978,443
<p>I have the sentence below:</p> <pre><code>I want to ____ the car because it is cheap. </code></pre> <p>I want to predict the missing word, using an NLP model. Which NLP model should I use? Thanks.</p>
2019-03-04 07:27:12.190000+00:00
2019-03-04 09:02:07.807000+00:00
null
machine-learning|neural-network|nlp|predict
['https://github.com/huggingface/pytorch-pretrained-BERT', 'https://arxiv.org/abs/1810.04805', 'https://lilianweng.github.io/lil-log/2019/01/31/generalized-language-models.html']
3
65,877,267
<p>Have you considered using a GAN (generative adversarial network)? It takes a bit more effort than just using a predefined function, but essentially it does exactly what you are hoping to do. Here's the original paper: <a href="https://arxiv.org/abs/1406.2661" rel="nofollow noreferrer">https://arxiv.org/abs/1406.2661</a></p> <p>There are many PyTorch/TensorFlow implementations that you can download and fit to your purposes, for example this one: <a href="https://github.com/eriklindernoren/PyTorch-GAN" rel="nofollow noreferrer">https://github.com/eriklindernoren/PyTorch-GAN</a></p> <p>Here is also a blog post I found quite helpful as an introduction to GANs: <a href="https://medium.com/ai-society/gans-from-scratch-1-a-deep-introduction-with-code-in-pytorch-and-tensorflow-cb03cdcdba0f" rel="nofollow noreferrer">https://medium.com/ai-society/gans-from-scratch-1-a-deep-introduction-with-code-in-pytorch-and-tensorflow-cb03cdcdba0f</a></p> <p>Maybe a GAN is overkill for this problem and there are simpler methods for scaling up the sample size, in which case I'd be interested to learn about them.</p>
2021-01-24 23:51:34.237000+00:00
2021-01-25 00:00:33.090000+00:00
2021-01-25 00:00:33.090000+00:00
null
65,877,211
<p>I have an input dataframe df_input with 10 variables and 100 rows. These data are not normally distributed. I would like to generate an output dataframe with 10 variables and 10,000 rows, such that the covariance matrix and mean of the new dataframe are the same as those of the original one. The output variables should not be normally distributed, but rather have a distribution similar to the input variables. That is: Cov(df_output) = Cov(df_input) and mean(df_output) = mean(df_input). Is there a Python function that does it?</p> <p>Note: np.random.multivariate_normal(mean_input, Cov_input, 10000) does almost this, but the output variables are normally distributed, whereas I need them to have the same (or similar) distribution as the input.</p>
2021-01-24 23:42:18.713000+00:00
2021-01-27 22:28:02.980000+00:00
2021-01-24 23:45:44.870000+00:00
python
['https://arxiv.org/abs/1406.2661', 'https://github.com/eriklindernoren/PyTorch-GAN', 'https://medium.com/ai-society/gans-from-scratch-1-a-deep-introduction-with-code-in-pytorch-and-tensorflow-cb03cdcdba0f']
3
48,436,410
<blockquote> <p>What is the Unicode range of all Japanese characters?</p> </blockquote> <p>Have a look at <a href="https://arxiv.org/pdf/1801.07779.pdf#page=5" rel="nofollow noreferrer">The WiLI benchmark dataset for written language identification</a>, especially Table II. The numbers in brackets are the fraction of each language you capture with the given Unicode code point range (in decimal):</p> <ul> <li>12352 - 12543: Japanese (48.73%), English (0.00%)</li> <li>19000 - 44000: Japanese (32.78%), English (0.00%)</li> <li>20 - 128: English (99.74%), Japanese (11.58%)</li> </ul> <p>You can see that 20 - 128 captures English really well and that all 3 blocks are important for Japanese, but large parts are still missing.</p> <p>Those numbers were created with <a href="https://github.com/MartinThoma/lidtk" rel="nofollow noreferrer"><code>lidtk</code></a> and <a href="https://zenodo.org/record/841984" rel="nofollow noreferrer"><code>WiLI-2018</code></a>.</p>
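<p>As a rough illustration of how these ranges can be used in practice, here is a heuristic Python sketch (it only uses the decimal ranges from the list above, which, as noted, do not cover all Japanese characters):</p> <pre><code>def looks_japanese(ch):
    """Heuristic check based on the decimal code point ranges listed above."""
    cp = ord(ch)
    return 12352 &lt;= cp &lt;= 12543 or 19000 &lt;= cp &lt;= 44000

def split_chars(text):
    """Split a string into (japanese, english, other) character lists."""
    ja, en, other = [], [], []
    for ch in text:
        if looks_japanese(ch):
            ja.append(ch)
        elif 20 &lt;= ord(ch) &lt;= 128:
            en.append(ch)
        else:
            other.append(ch)
    return ja, en, other

print(split_chars("Hello 世界 こんにちは"))
</code></pre>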
2018-01-25 05:48:35.430000+00:00
2018-01-25 05:48:35.430000+00:00
null
null
19,899,554
<p>I am trying to separate English and Japanese characters. I need to find the Unicode range of all Japanese characters. What is the Unicode range of all Japanese characters?</p>
2013-11-11 05:36:54.217000+00:00
2019-02-04 07:43:33.603000+00:00
null
unicode
['https://arxiv.org/pdf/1801.07779.pdf#page=5', 'https://github.com/MartinThoma/lidtk', 'https://zenodo.org/record/841984']
3
51,698,888
<p>Are you thinking of using asynchronous methods for GAN training, similar to how asynchronous updates are used in A3C? </p> <p>I guess the motivation for asynchronous methods in RL is quite different from what you want to solve with async methods in GANs.</p> <p>RL can be unstable (without async methods) because of the non-stationary nature of the data (i.e. high correlation between consecutive updates), and solving this using async methods kind of makes sense.</p> <p>GANs are unstable because of the optimization approach (e.g. mini-max) they take in solving the objective function. More recent GAN variants (e.g. progressive GANs) are significant improvements over the original GAN. Personally, I believe "mode collapse" to be a more pressing matter than stability. </p> <p>So I'm not sure asynchronous methods are the answer you're looking for to solve stability issues in GANs. Maybe better optimization methods (e.g. penalizing instabilities in the optimization procedure) might be a better way of going about this?</p> <p>Reference: <a href="https://arxiv.org/pdf/1705.07215.pdf" rel="nofollow noreferrer">On Convergence and Stability of GANs</a></p>
2018-08-05 23:06:35.660000+00:00
2018-08-05 23:06:35.660000+00:00
null
null
51,691,134
<p>GANs sometimes get really unstable with high-dimensional data. Can we train a GAN in an asynchronous manner? The idea is that we have one master generator and discriminator, but we actually update them asynchronously with gradients from a number of slave generators and discriminators.</p>
2018-08-05 03:18:58.540000+00:00
2018-08-05 23:06:35.660000+00:00
2018-08-05 22:49:45.937000+00:00
generative-adversarial-network
['https://arxiv.org/pdf/1705.07215.pdf']
1
65,020,806
<p>There are several techniques which can be applied using feature engineering. Two possible examples are:</p> <ol> <li><p>You may reduce the dimensionality by summarizing each time-dependent feature, e.g. by taking its mean, median, max, min, etc. (see the sketch after this list).</p> </li> <li><p>Assuming the dimensionality of the <code>time_dep_feature</code> does not change: you could &quot;unfold&quot; the feature by just declaring a &quot;normal&quot; feature for every entry in the dictionary.</p> </li> </ol> <p>Moreover, there are also specific algorithms which may help you reduce the dimensionality of a specific feature while keeping as much information as possible; you may refer to <a href="https://arxiv.org/pdf/1403.2877.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1403.2877.pdf</a></p> <p>In the end, you have to test which representation lets your model predict best.</p>
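<p>A minimal pandas sketch of the first option (the dictionary below is just the hypothetical <code>time_dep_feature</code> from the question, and the summary column names are made up):</p> <pre><code>import pandas as pd

# one timestamped series per sample (the dict format from the question)
time_dep_feature = {'20200103 08:20:04': 5,
                    '20200103 16:54:10': 2,
                    '20200215 14:31:16': 7}

series = pd.Series(time_dep_feature)
series.index = pd.to_datetime(series.index, format='%Y%m%d %H:%M:%S')

# collapse the whole series into a handful of scalar features
summary = {
    'td_mean':   series.mean(),
    'td_min':    series.min(),
    'td_max':    series.max(),
    'td_count':  series.size,
    'td_span_s': (series.index.max() - series.index.min()).total_seconds(),
}
print(summary)
</code></pre>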
2020-11-26 11:05:29.483000+00:00
2020-11-26 11:05:29.483000+00:00
null
null
65,017,992
<p>Most textbook examples of machine learning applications use a 2D <a href="https://en.wikipedia.org/wiki/Design_matrix" rel="nofollow noreferrer">design matrix</a> to store the training data. E.g., the <a href="https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html" rel="nofollow noreferrer">iris dataset</a> is the assembly of four, single-valued, numerical features. But what if one of the features is a time series, i.e., a timestamped series of numerical features? One could store each of those feature values in a dictionary where the keys are the timestamps,</p> <pre><code>time_dep_feature = {'20200103 08:20:04': 5, '20200103 16:54:10': 2, '20200215 14:31:16': 7, ···} </code></pre> <p>The problem then is that the rest of the design matrix is 2D, whereas <code>time_dep_feature</code> rises up in the third dimension. The dictionary solution above is easily readable by Python, but is nonetheless cumbersome, especially if one wants to use the power of scalable solutions such as <code>tf.data.Dataset</code>. (The latter does allow for N-dimensional design matrices/tensors, but it isn't clear how it keeps track of the timestamp indices within the <code>time_dep_feature</code> column.)</p> <p>What is the state of the art for nesting structured data of this sort? Clearly there must exist something more elaborate than storing Python-readable strings as the dictionary example above.</p> <p>PS: TensorFlow's <a href="https://www.tensorflow.org/guide/ragged_tensor" rel="nofollow noreferrer"><code>tf.RaggedTensor</code></a> seems to be the closest thing to a solution, but the problem is that I don't quite know how to store the time stamps since it does not ingest dictionaries.</p>
2020-11-26 07:56:41.223000+00:00
2020-11-26 11:38:28.817000+00:00
2020-11-26 11:38:28.817000+00:00
python|tensorflow|machine-learning|dataset|tensorflow-datasets
['https://arxiv.org/pdf/1403.2877.pdf']
1
65,639,193
<p>You can find the answer in the research papers about those algorithms: when a new algorithm is proposed, experiments are usually needed to show evidence that it has an advantage over other algorithms.</p> <p>The most commonly used evaluation method in papers about RL algorithms is the <strong>average return</strong> (note: not reward; the return is the accumulated reward, like the score in a game) over timesteps. There are many ways you can average the return, e.g. averaging over different hyperparameters, or, as in the <a href="https://arxiv.org/abs/1801.01290" rel="nofollow noreferrer">Soft Actor-Critic paper</a>'s comparative evaluation, averaging over different random seeds (used to initialize the model):</p> <blockquote> <p>Figure 1 shows the total average return of evaluation rollouts during training for DDPG, PPO, and TD3. We train five different instances of each algorithm with different random seeds, with each performing one evaluation rollout every 1000 environment steps. The solid curves correspond to the mean and the shaded region to the minimum and maximum returns over the five trials.</p> </blockquote> <p><a href="https://i.stack.imgur.com/xbocp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xbocp.png" alt="enter image description here" /></a></p> <p>And we usually want to compare the performance of many algorithms not only on one task but on a diverse set of tasks (i.e. a benchmark), because algorithms may have some form of inductive bias that makes them better at some kinds of tasks but worse at others; see, e.g., the comparison to PPO in the <a href="https://arxiv.org/abs/2009.04416" rel="nofollow noreferrer">Phasic Policy Gradient paper</a>'s experiments:</p> <blockquote> <p>We report results on the environments in Procgen Benchmark (Cobbe et al., 2019). This benchmark was designed to be highly diverse, and we expect improvements on this benchmark to transfer well to many other RL environments.</p> </blockquote> <p><a href="https://i.stack.imgur.com/lwKFF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lwKFF.png" alt="enter image description here" /></a></p>
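<p>If you log the per-episode returns yourself (e.g. from the Colab notebook you linked), you can produce the same kind of plot with a few lines of NumPy/Matplotlib. This is only a sketch; it assumes you have collected, for each variant, an array of returns of shape <code>(n_seeds, n_episodes)</code>, and the placeholder data below just stands in for your logged results:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

def plot_variant(returns, label):
    """returns: array of shape (n_seeds, n_episodes) with the sum of rewards per episode."""
    episodes = np.arange(returns.shape[1])
    plt.plot(episodes, returns.mean(axis=0), label=label)
    # shaded band between the worst and best seed, as in the SAC paper's figure
    plt.fill_between(episodes, returns.min(axis=0), returns.max(axis=0), alpha=0.2)

# placeholder data: 5 seeds x 200 episodes per variant (replace with your logged returns)
rng = np.random.default_rng(0)
returns_dqn = rng.normal(size=(5, 200)).cumsum(axis=1)
returns_double_dqn = rng.normal(loc=0.1, size=(5, 200)).cumsum(axis=1)

plot_variant(returns_dqn, "DQN")
plot_variant(returns_double_dqn, "Double DQN")
plt.xlabel("Episode")
plt.ylabel("Average return")
plt.legend()
plt.show()
</code></pre>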
2021-01-09 03:53:41.530000+00:00
2021-01-09 03:53:41.530000+00:00
null
null
65,636,703
<p>I have been trying to implement the Reinforcement learning algorithm on Python using different variants like <code>Q-learning</code>, <code>Deep Q-Network</code>, <code>Double DQN</code> and <code>Dueling Double DQN</code>. Consider a cart-pole example and to evaluate the performance of each of these variants, I can think of plotting <code>sum of rewards</code> to <code>number of episodes</code> <a href="https://i.stack.imgur.com/kpV8I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kpV8I.png" alt="evaluating a Reinforcement learning model" /></a> (attaching a picture of the plot) and the actual graphical output where how well the pole is stable while the cart is moving.</p> <p>But these two evaluations are not really of interest in terms to explain the better variants quantitatively. I am new to the Reinforcement learning and trying to understand if any other ways to compare different variants of RL models on the same problem.</p> <p>I am referring to the colab link <a href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/18_reinforcement_learning.ipynb#scrollTo=MR0z7tfo3k9C" rel="nofollow noreferrer">https://colab.research.google.com/github/ageron/handson-ml2/blob/master/18_reinforcement_learning.ipynb#scrollTo=MR0z7tfo3k9C</a> for the code on all the variants of cart pole example.</p>
2021-01-08 21:33:30.920000+00:00
2021-01-09 03:53:41.530000+00:00
null
python|reinforcement-learning|openai-gym|dqn
['https://arxiv.org/abs/1801.01290', 'https://i.stack.imgur.com/xbocp.png', 'https://arxiv.org/abs/2009.04416', 'https://i.stack.imgur.com/lwKFF.png']
4
43,770,897
<p>Another possibility is to use a distributed version of TensorFlow, which automatically handles the data distribution and execution on multiple nodes by using MPI in the backend.</p> <p>We have recently developed one such version at MaTEx: <a href="https://github.com/matex-org/matex" rel="nofollow noreferrer">https://github.com/matex-org/matex</a>, and a paper describing it: <a href="https://arxiv.org/abs/1704.04560" rel="nofollow noreferrer">https://arxiv.org/abs/1704.04560</a></p> <p>It does synchronous training and provides several parallel dataset reader formats.</p> <p>We will be happy to help if you need more information!</p>
2017-05-03 22:16:42.827000+00:00
2017-05-04 01:37:41.593000+00:00
2017-05-04 01:37:41.593000+00:00
null
39,559,183
<p><strong>Short version:</strong> can't we store variables in one of the workers and not use parameter servers?</p> <p><strong>Long version:</strong> I want to implement synchronous distributed learning of neural network in tensorflow. I want each worker to have a full copy of the model during training.</p> <p>I've read <a href="https://www.tensorflow.org/versions/r0.10/how_tos/distributed/index.html%20distributed%20tensorflow%20tutorial" rel="noreferrer">distributed tensorflow tutorial</a> and <a href="https://github.com/tensorflow/models/tree/master/inception" rel="noreferrer">code of distributed training imagenet</a> and didn't get why do we need parameter servers.</p> <p>I see that they are used for storing values of variables and replica_device_setter takes care that variables are evenly distributed between parameter servers (probably it does something more, I wasn't able to fully understand the code).</p> <p>The question is: why don't we use one of the workers to store variables? Will I achieve that if I use </p> <pre><code>with tf.device('/job:worker/task:0/cpu:0'): </code></pre> <p>instead of</p> <pre><code>with tf.device(tf.train.replica_device_setter(cluster=cluster_spec)): </code></pre> <p>for Variaibles? If that works is there downside comparing to solution with parameter servers?</p>
2016-09-18 15:11:41.610000+00:00
2017-05-04 01:37:41.593000+00:00
null
tensorflow|distributed
['https://github.com/matex-org/matex', 'https://arxiv.org/abs/1704.04560']
2
64,306,743
<p>The <a href="https://twitter.com/ylecun/status/989610208497360896" rel="nofollow noreferrer">2018 opinion retweeted by Yann LeCun</a> is the paper <a href="https://arxiv.org/abs/1804.07612" rel="nofollow noreferrer">Revisiting Small Batch Training For Deep Neural Networks, Dominic Masters and Carlo Luschi</a> suggesting a good generic maximum batch size is:</p> <blockquote> <p>32</p> </blockquote> <p>With some interplay with choice of learning rate.</p> <p>The earlier 2016 paper <a href="https://arxiv.org/abs/1609.04836" rel="nofollow noreferrer">On Large-batch Training For Deep Learning: Generalization Gap And Sharp Minima</a> gives some reason for not using big batches, which I paraphrase badly, as big batches are likely to get stuck in local (“sharp”) minima, small batches not.</p>
2020-10-11 16:59:50.453000+00:00
2020-10-11 16:59:50.453000+00:00
null
null
35,158,365
<p>I am trying to tune the hyper parameter i.e <strong>batch size</strong> in CNN.I have a computer of corei7,RAM 12GB and i am training a CNN network with CIFAR-10 dataset which can be found in this <a href="http://torch.ch/blog/2015/07/30/cifar.html" rel="nofollow noreferrer">blog</a>.<br><br><em>Now At first what i have read and learnt about batch size in machine learning:</em></p> <blockquote> <p>let's first suppose that we're doing online learning, i.e. that we're using a mini­batch size of 1. The obvious worry about online learning is that using mini­batches which contain just a single training example will cause significant errors in our estimate of the gradient. In fact, though, the errors turn out to not be such a problem. The reason is that the individual gradient estimates don't need to be super­accurate. All we need is an estimate accurate enough that our cost function tends to keep decreasing. It's as though you are trying to get to the North Magnetic Pole, but have a wonky compass that's 10­-20 degrees off each time you look at it. Provided you stop to check the compass frequently, and the compass gets the direction right on average, you'll end up at the North Magnetic Pole just fine.<br><br></p> <p>Based on this argument, it sounds as though we should use online learning. In fact, the situation turns out to be more complicated than that.As we know we can use matrix techniques to compute the gradient update for all examples in a mini­batch simultaneously, rather than looping over them. Depending on the details of our hardware and linear algebra library this can make it quite a bit faster to compute the gradient estimate for a mini­batch of (for example) size 100 , rather than computing the mini­batch gradient estimate by looping over the 100 training examples separately. It might take (say) only 50 times as long, rather than 100 times as long.Now, at first it seems as though this doesn't help us that much.<br><br></p> <p>With our mini­batch of size 100 the learning rule for the weights looks like:<a href="https://i.stack.imgur.com/QDhgB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QDhgB.png" alt="enter image description here"></a><br></p> <p>where the sum is over training examples in the mini­batch. This is versus<a href="https://i.stack.imgur.com/X4CkK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X4CkK.png" alt="enter image description here"></a><br> for online learning. Even if it only takes 50 times as long to do the mini­batch update, it still seems likely to be better to do online learning, because we'd be updating so much more frequently. Suppose, however, that in the mini­batch case we increase the learning rate by a factor 100, so the update rule becomes<br><a href="https://i.stack.imgur.com/KMnF1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KMnF1.png" alt="enter image description here"></a><br> That's a lot like doing separate instances of online learning with a learning rate of <code>η</code>. But it only takes 50 times as long as doing a single instance of online learning. Still, it seems distinctly possible that using the larger mini­batch would speed things up.</p> </blockquote> <p><br><br></p> <p>Now i tried with <code>MNIST digit dataset</code> and ran a sample program and set the batch size <code>1</code> at first.I noted down the training time needed for the full dataset.Then i increased the batch size and i noticed that it became faster. 
<br> But in case of training with this <a href="https://github.com/szagoruyko/cifar.torch/blob/master/models/vgg_bn_drop.lua" rel="nofollow noreferrer">code</a> and <a href="https://github.com/szagoruyko/cifar.torch" rel="nofollow noreferrer">github link</a> changing the batch size doesn't decrease the training time.It remained same if i use 30 or 128 or 64.They are saying that they got <code>92%</code> accuracy.After two or three epoch they have got above <code>40%</code> accuracy.But when i ran the code in my computer without changing anything other than the batch size i got worse result after 10 epoch like only 28% and test accuracy stuck there in the next epochs.Then i thought since they have used batch size of 128 i need to use that.Then i used the same but it became more worse only give 11% after 10 epoch and stuck in there.<strong>Why is that??</strong></p>
2016-02-02 16:12:17.060000+00:00
2020-10-11 16:59:50.453000+00:00
2018-06-16 09:32:45.593000+00:00
machine-learning|neural-network|conv-neural-network|torch|gradient-descent
['https://twitter.com/ylecun/status/989610208497360896', 'https://arxiv.org/abs/1804.07612', 'https://arxiv.org/abs/1609.04836']
3
46,590,360
<p>I'd like to add to what's been already said here that larger batch size <em>is not always</em> good for generalization. I've seen these cases myself, when an increase in batch size hurt validation accuracy, particularly for CNN working with CIFAR-10 dataset.</p> <p>From <a href="https://arxiv.org/abs/1609.04836" rel="nofollow noreferrer">"On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima"</a>:</p> <blockquote> <p>The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say 32–512 data points, is sampled to compute an approximation to the gradient. <strong>It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize</strong>. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions—and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap.</p> </blockquote> <p>Bottom-line: you should tune the batch size, just like <a href="https://stackoverflow.com/questions/41860817/hyperparameter-optimization-for-deep-learning-structures-using-bayesian-optimiza/46318446">any other hyperparameter</a>, to find an optimal value.</p>
2017-10-05 16:16:19.790000+00:00
2017-10-05 16:16:19.790000+00:00
null
null
35,158,365
<p>I am trying to tune a hyperparameter, the <strong>batch size</strong>, in a CNN. I have a computer with a Core i7 and 12 GB of RAM, and I am training a CNN on the CIFAR-10 dataset, which can be found in this <a href="http://torch.ch/blog/2015/07/30/cifar.html" rel="nofollow noreferrer">blog</a>.<br><br><em>Here is what I have read and learnt about batch size in machine learning:</em></p> <blockquote> <p>let's first suppose that we're doing online learning, i.e. that we're using a mini-batch size of 1. The obvious worry about online learning is that using mini-batches which contain just a single training example will cause significant errors in our estimate of the gradient. In fact, though, the errors turn out to not be such a problem. The reason is that the individual gradient estimates don't need to be super-accurate. All we need is an estimate accurate enough that our cost function tends to keep decreasing. It's as though you are trying to get to the North Magnetic Pole, but have a wonky compass that's 10-20 degrees off each time you look at it. Provided you stop to check the compass frequently, and the compass gets the direction right on average, you'll end up at the North Magnetic Pole just fine.<br><br></p> <p>Based on this argument, it sounds as though we should use online learning. In fact, the situation turns out to be more complicated than that. As we know we can use matrix techniques to compute the gradient update for all examples in a mini-batch simultaneously, rather than looping over them. Depending on the details of our hardware and linear algebra library this can make it quite a bit faster to compute the gradient estimate for a mini-batch of (for example) size 100, rather than computing the mini-batch gradient estimate by looping over the 100 training examples separately. It might take (say) only 50 times as long, rather than 100 times as long. Now, at first it seems as though this doesn't help us that much.<br><br></p> <p>With our mini-batch of size 100 the learning rule for the weights looks like:<a href="https://i.stack.imgur.com/QDhgB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QDhgB.png" alt="enter image description here"></a><br></p> <p>where the sum is over training examples in the mini-batch. This is versus<a href="https://i.stack.imgur.com/X4CkK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X4CkK.png" alt="enter image description here"></a><br> for online learning. Even if it only takes 50 times as long to do the mini-batch update, it still seems likely to be better to do online learning, because we'd be updating so much more frequently. Suppose, however, that in the mini-batch case we increase the learning rate by a factor 100, so the update rule becomes<br><a href="https://i.stack.imgur.com/KMnF1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KMnF1.png" alt="enter image description here"></a><br> That's a lot like doing separate instances of online learning with a learning rate of <code>η</code>. But it only takes 50 times as long as doing a single instance of online learning. Still, it seems distinctly possible that using the larger mini-batch would speed things up.</p> </blockquote> <p><br><br></p> <p>I then tried this out on the <code>MNIST digit dataset</code>: I ran a sample program with a batch size of <code>1</code> and noted the training time needed for the full dataset. When I increased the batch size, training became faster. 
<br> But when training with this <a href="https://github.com/szagoruyko/cifar.torch/blob/master/models/vgg_bn_drop.lua" rel="nofollow noreferrer">code</a> and <a href="https://github.com/szagoruyko/cifar.torch" rel="nofollow noreferrer">github link</a>, changing the batch size does not decrease the training time. It stays the same whether I use 30, 64 or 128. The authors say they reached <code>92%</code> accuracy, and after two or three epochs they were already above <code>40%</code>. But when I ran the code on my computer, changing nothing except the batch size, I got worse results: after 10 epochs only 28%, and the test accuracy stayed stuck there in the following epochs. I then thought that, since they used a batch size of 128, I should use that too. With 128 it got even worse, reaching only 11% after 10 epochs and getting stuck there. <strong>Why is that?</strong></p>
2016-02-02 16:12:17.060000+00:00
2020-10-11 16:59:50.453000+00:00
2018-06-16 09:32:45.593000+00:00
machine-learning|neural-network|conv-neural-network|torch|gradient-descent
['https://arxiv.org/abs/1609.04836', 'https://stackoverflow.com/questions/41860817/hyperparameter-optimization-for-deep-learning-structures-using-bayesian-optimiza/46318446']
2
43,541,018
<p>With the exception of <code>java.lang.String</code> being treated as a special case<sup>1</sup>, Java does not allow you to <em>define</em> the behaviour of <code>+</code> for arbitrary types, or indeed any other operator, as you can in some languages such as C++ or Scala. In other words, Java does not support <em>operator overloading</em>.</p> <p>Your best bet is to build functions like <code>add</code> &amp;c. Appeal to precedent here: see how the Java guys have done it with <code>BigInteger</code>, for example. Sadly there is no way of defining the <em>precedence</em> of your functions, so you have to use very many parentheses to tell the compiler how you want an expression to be evaluated. It's for this reason that I don't use Java for any serious mathematical applications as the implementation of even a simple equation quickly becomes an unreadable mess<sup>2</sup>.</p> <hr> <p><sup>1</sup> Which in some ways does more harm than good: e.g. consider <code>1 + 2 + "Hello" + 3 + 4</code>. This compile time constant expression is a string type with the value <code>3Hello34</code>.</p> <p><sup>2</sup> Note that C++ was used to model the gravitational lensing effects of the wormhole in the movie "Interstellar". I challenge anyone to do that in a language that does not support operator overloading! See <a href="https://arxiv.org/pdf/1502.03808v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1502.03808v1.pdf</a></p>
2017-04-21 11:01:00.367000+00:00
2017-04-21 11:19:20.467000+00:00
2017-04-21 11:19:20.467000+00:00
null
43,540,923
<p>I'm new to Java and I couldn't find an answer to it anywhere because i don't even know how to search for it.</p> <p>I want to define how 2 objects can be added together, so you get a new one like for example you can add String "a" and String "b" to get "ab".</p> <p>I know this can be done in python by doing <code>self.__add__(self, other)</code>. How can you do this in Java?</p>
2017-04-21 10:56:30.927000+00:00
2017-04-21 11:19:20.467000+00:00
2017-04-21 11:18:43.357000+00:00
java
['https://arxiv.org/pdf/1502.03808v1.pdf']
1
71,814,109
<p>This pre-print describes a differentiable loss function for the Matthews Correlation Coefficient. They derive an analytical form for its gradient, based on differentiable versions of TP, FP, etc. Then they use it to train their Convolutional Neural Network.</p> <p><a href="https://arxiv.org/abs/2010.13454" rel="nofollow noreferrer">https://arxiv.org/abs/2010.13454</a></p> <p>I think these formulas are exactly what you need. They provide code in PyTorch, too.</p>
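<p>For a concrete starting point, here is my own minimal sketch of the underlying idea in Keras rather than the paper's exact formulation: keep the raw probabilities instead of applying <code>K.round</code>, so every term in the MCC stays differentiable.</p> <pre><code>import keras.backend as K

def soft_matthews_correlation_loss(y_true, y_pred):
    # Keep the raw probabilities instead of rounding them, so every op has a gradient.
    y_pred_pos = K.clip(y_pred, 0, 1)
    y_pred_neg = 1 - y_pred_pos
    y_pos = K.clip(y_true, 0, 1)
    y_neg = 1 - y_pos

    # "Soft" confusion-matrix entries: sums of probabilities rather than hard counts.
    tp = K.sum(y_pos * y_pred_pos)
    tn = K.sum(y_neg * y_pred_neg)
    fp = K.sum(y_neg * y_pred_pos)
    fn = K.sum(y_pos * y_pred_neg)

    numerator = tp * tn - fp * fn
    denominator = K.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 1.0 - numerator / (denominator + K.epsilon())
</code></pre>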
2022-04-10 05:51:06.703000+00:00
2022-04-10 05:51:06.703000+00:00
null
null
54,077,612
<p>I try to write a custom loss function for keras with tf backend. I get the following error</p> <blockquote> <p>ValueError: An operation has <code>None</code> for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.</p> </blockquote> <pre><code>def matthews_correlation(y_true, y_pred): y_pred_pos = K.round(K.clip(y_pred, 0, 1)) y_pred_neg = 1 - y_pred_pos y_pos = K.round(K.clip(y_true, 0, 1)) y_neg = 1 - y_pos tp = K.sum(y_pos * y_pred_pos) tn = K.sum(y_neg * y_pred_neg) fp = K.sum(y_neg * y_pred_pos) fn = K.sum(y_pos * y_pred_neg) numerator = (tp * tn - fp * fn) denominator = K.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) return 1.0 - numerator / (denominator + K.epsilon()) </code></pre> <p>If I use this function as a metric and not as the loss function it works. How can I use this function as a loss?</p> <p>After removing K.round I get following error:</p> <blockquote> <p>InvalidArgumentError: Can not squeeze dim[0], expected a dimension of 1, got 8 [[{{node loss_9/dense_10_loss/Squeeze}} = Squeeze[T=DT_FLOAT, squeeze_dims=[-1], _device="/job:localhost/replica:0/task:0/device:GPU:0"] (_arg_dense_10_sample_weights_0_2/_2445)]] [[{{node loss_9/add_12/_2467}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_6418_loss_9/add_12", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]</p> </blockquote>
2019-01-07 15:53:29.340000+00:00
2022-04-10 05:51:06.703000+00:00
2019-01-07 16:07:46.837000+00:00
python|tensorflow|keras
['https://arxiv.org/abs/2010.13454']
1
54,867,228
<p>@Alexis has already answered the question about the error message, but I want to clarify something about loss functions that are derived from metrics:</p> <p>In general metrics cannot be used directly as loss functions, but smoothed versions of metrics, such as the dice measure (= F1 score) <a href="https://arxiv.org/abs/1707.03237" rel="nofollow noreferrer">(CH Sudre et al., 2017)</a>, can be applied as loss functions. One use case is image segmentation.</p> <p>(Please excuse that I have to add this comment as an answer; I do not have enough reputation to add a comment.)</p>
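<p>For completeness, a minimal sketch of such a smoothed ("soft") dice loss in the same Keras-backend style as the loss in the question; this is the standard soft-dice formulation rather than the exact loss from the linked paper, and the <code>smooth</code> constant is only there to avoid division by zero:</p> <pre><code>import keras.backend as K

def soft_dice_loss(y_true, y_pred, smooth=1.0):
    # Work with the raw probabilities (no K.round), so the loss stays differentiable.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    dice = (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return 1.0 - dice

# model.compile(loss=soft_dice_loss, optimizer="adam")
</code></pre>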
2019-02-25 13:23:38.087000+00:00
2019-02-25 13:23:38.087000+00:00
2020-06-20 09:12:55.060000+00:00
null
54,077,612
<p>I try to write a custom loss function for keras with tf backend. I get the following error</p> <blockquote> <p>ValueError: An operation has <code>None</code> for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.</p> </blockquote> <pre><code>def matthews_correlation(y_true, y_pred): y_pred_pos = K.round(K.clip(y_pred, 0, 1)) y_pred_neg = 1 - y_pred_pos y_pos = K.round(K.clip(y_true, 0, 1)) y_neg = 1 - y_pos tp = K.sum(y_pos * y_pred_pos) tn = K.sum(y_neg * y_pred_neg) fp = K.sum(y_neg * y_pred_pos) fn = K.sum(y_pos * y_pred_neg) numerator = (tp * tn - fp * fn) denominator = K.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) return 1.0 - numerator / (denominator + K.epsilon()) </code></pre> <p>If I use this function as a metric and not as the loss function it works. How can I use this function as a loss?</p> <p>After removing K.round I get following error:</p> <blockquote> <p>InvalidArgumentError: Can not squeeze dim[0], expected a dimension of 1, got 8 [[{{node loss_9/dense_10_loss/Squeeze}} = Squeeze[T=DT_FLOAT, squeeze_dims=[-1], _device="/job:localhost/replica:0/task:0/device:GPU:0"] (_arg_dense_10_sample_weights_0_2/_2445)]] [[{{node loss_9/add_12/_2467}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_6418_loss_9/add_12", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]</p> </blockquote>
2019-01-07 15:53:29.340000+00:00
2022-04-10 05:51:06.703000+00:00
2019-01-07 16:07:46.837000+00:00
python|tensorflow|keras
['https://arxiv.org/abs/1707.03237']
1
13,899,323
<p><a href="http://en.wikipedia.org/wiki/Matrix_multiplication" rel="nofollow noreferrer">Matrix multiplication</a> has a theoretical lower bound of Ω(<em>n</em><sup>2</sup>), since all <em>n</em><sup>2</sup> entries need to be processed. The best known algorithm to date (according to the above-linked Wikipedia article) has complexity O(<em>n</em><sup>2.3727</sup>). The naive algorithm has complexity <em>n</em><sup>3</sup>.</p> <p>According to the Wikipedia article on <a href="http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations" rel="nofollow noreferrer">Computational complexity of mathematical operations</a>, back-substitution of a triangular matrix to obtain <em>n</em> solutions is O(<em>n</em><sup>2</sup>). There are probably many other examples around on the web.</p> <p>EDIT: A 2014 <a href="https://arxiv.org/pdf/1407.4972.pdf" rel="nofollow noreferrer">paper by Michele Borassi, <em>et al.</em></a>, discusses a number of decision problems (output size O(1)) that can be solved in O(<em>n</em><sup>2</sup>) time but not in O(<em>n</em><sup>2-<em>ε</em></sup>) for any <em>ε</em> > 0. (Of course, as always, these results depends on P ≠ NP, or, more precisely, that the <a href="https://en.wikipedia.org/wiki/Exponential_time_hypothesis" rel="nofollow noreferrer">Strong Exponential Time Hypothesis</a> is true.)</p> <p>Their paper leads off with a modified <a href="https://en.wikipedia.org/wiki/Boolean_satisfiability_problem" rel="nofollow noreferrer"><em>k</em>-SAT problem</a>:</p> <p><em>Input:</em></p> <ul> <li>two sets of variables <em>X</em> = {<em>x</em><sub><em>i</em></sub>}, <em>Y</em> = {<em>y</em><sub><em>j</em></sub>} of the same size;</li> <li>a set <em>C</em> of clauses over these variables, such that each clause has at most size <em>k</em>; and</li> <li>the two power sets &weierp;(<em>X</em>), &weierp;(<em>Y</em>) of <em>X</em> and <em>Y</em> (used to change the input size).</li> </ul> <p><em>Output:</em> <code>true</code> if there is an evaluation of all variables that satisfies all clauses, <code>False</code> otherwise.</p> <p>Note that the unmodified <em>k</em>-SAT problem (where the input does not include the third bullet above) is NP-complete, so normally one thinks of it as an exponential-time problem. However, here the input size is itself exponential in the number of variables. They show that, with this modification, the problem can always be solved in quadratic time (simply try all possible evaluations). More to the point for this answer, they also show that this is the minimum time complexity for any algorithm that solves the problem.</p> <p>One might object that this modified <em>k</em>-SAT problem is hardy natural. However, they then use this to show that a number of other problems, which do seem natural, also cannot be solved in less that O(<em>n</em><sup>2</sup>) time. The simplest one to state is the subset graph problem:</p> <p><em>Input:</em> a set <em>X</em> and a collection <em>C</em> of subsets of <em>X</em>.</p> <p><em>Output:</em> the graph <em>G</em> = (<em>C</em>, <em>E</em>), where, for each <em>C</em>, <em>C</em>′ ∈ <em>C</em>, (<em>C</em>, <em>C</em>′) ∈ <em>E</em> if and only if <em>C</em> ⊆ <em>C</em>′.</p>
2012-12-16 07:07:01.217000+00:00
2016-06-26 20:18:17.517000+00:00
2016-06-26 20:18:17.517000+00:00
null
13,899,274
<p>I know of Bubble sort, Insertion sort, etc., but there are more efficient algorithms for sorting. By the <a href="http://en.wikipedia.org/wiki/Time_hierarchy_theorem" rel="noreferrer">Time Hierarchy Theorem</a>, there are problems that can be solved in O(n^2) but not in O(n^r) for any real r &lt; 2. The constructions used for its proof are not very natural. What is a good example of a problem whose most efficient solution requires quadratic time?</p> <p>I am looking for something that preferably has the following qualities:</p> <ul> <li>It is simple and easy to understand</li> <li>Something that is used frequently</li> <li>It can be proved that O(n^2) is the best run-time for a correct solution</li> </ul> <p>Small caveat - The output should not be large. (If you want the sum of every pair of integers from a given list, it obviously requires quadratic time to output them.) You can assume that it should be a <a href="http://en.wikipedia.org/wiki/Decision_problem" rel="noreferrer">decision problem</a>, i.e. one with a yes-no answer. Also let us assume the time complexity O(n^2) is a function of input size, i.e. n is the number of bits required to represent the input.</p>
2012-12-16 06:54:44.950000+00:00
2016-06-26 20:18:17.517000+00:00
2012-12-16 11:24:56.310000+00:00
algorithm|time-complexity
['http://en.wikipedia.org/wiki/Matrix_multiplication', 'http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations', 'https://arxiv.org/pdf/1407.4972.pdf', 'https://en.wikipedia.org/wiki/Exponential_time_hypothesis', 'https://en.wikipedia.org/wiki/Boolean_satisfiability_problem']
5
23,869,372
<blockquote> <p>How and where do you get the categories - pre-defined labels?</p> </blockquote> <p>There are many benchmark text datasets with taxonomy and ontology information. <a href="http://en.wikipedia.org/wiki/WordNet" rel="nofollow">Wordnet</a> is one such popular benchmark dataset used in text analysis research. <a href="http://acl.ldc.upenn.edu/P/P94/P94-1019.pdf" rel="nofollow">This</a> is the first paper that focused on using taxonomy to arrive at a semantic similarity for text analysis on Wordnet. <a href="http://arxiv.org/pdf/1105.5444.pdf?origin=publication_detail" rel="nofollow">This</a> is a more recent good paper dealing with a similar objective.</p> <blockquote> <p>Is it possible to plug-in an ontology, taxonomy for that and go as granular as needed?</p> </blockquote> <p>Yes. There is a research subfield that deals with arriving at a semantic similarity based on the taxonomy and ontology that exist among concepts (in this case, concepts in text documents). This <a href="http://dspace.library.drexel.edu/bitstream/1860/2754/1/2006175421.pdf" rel="nofollow">paper</a> provides an overview and comparative study of techniques that bring ontology and taxonomy into measuring similarities among documents. //go as granular as needed// - Yes, you can do so by arriving at a new similarity measure that controls the granularity. Many research works pertain to this. This <a href="http://www.researchgate.net/publication/224041454_Ontology-based_semantic_similarity_A_new_feature-based_approach/file/e0b495149d58680c80.pdf" rel="nofollow">paper</a> is a recent example.</p> <blockquote> <p>Do we use n-grams in this case for approximate matching?</p> </blockquote> <p>Yes, it is possible, but the aforementioned papers use less granular approaches that model concepts from documents. Most of them use tf-idf and not n-grams of terms.</p>
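<p>As a purely illustrative sketch of that tf-idf route (toy documents, scikit-learn assumed):</p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat",                  # toy documents
        "a dog chased the cat in the garden"]

tfidf = TfidfVectorizer().fit_transform(docs)      # each document becomes a tf-idf vector
print(cosine_similarity(tfidf[0], tfidf[1]))       # pairwise document similarity
</code></pre>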
2014-05-26 11:45:42.473000+00:00
2014-05-26 11:45:42.473000+00:00
null
null
23,712,806
<p>I am familiar with data mining techniques but not so much with text mining or Web mining.</p> <p>Here is a simple task: classify articles into a set of categories. Let us assume, I extracted text content of the article and processed it.</p> <p>How and where do you get the categories - pre-defined labels? Is it possible to plug-in an ontology, taxonomy for that and go as granular as needed? Classification task will be a multi-label classification.</p> <p>Do we use n-grams in this case for approximate matching?</p> <p>Currently I have themes and named entities extracted from the text. Can I use Vowpal Wabbit for that?</p>
2014-05-17 15:03:37.197000+00:00
2014-05-26 11:45:42.473000+00:00
null
ontology|n-gram|document-classification|vowpalwabbit
['http://en.wikipedia.org/wiki/WordNet', 'http://acl.ldc.upenn.edu/P/P94/P94-1019.pdf', 'http://arxiv.org/pdf/1105.5444.pdf?origin=publication_detail', 'http://dspace.library.drexel.edu/bitstream/1860/2754/1/2006175421.pdf', 'http://www.researchgate.net/publication/224041454_Ontology-based_semantic_similarity_A_new_feature-based_approach/file/e0b495149d58680c80.pdf']
5
30,632,910
<p>You're looking at very old slides.</p> <ul> <li>a full description of the linear mixed model capabilities of the package, including details of the internal representation, is <a href="http://arxiv.org/abs/1406.5823" rel="nofollow">on arxiv</a>, in press in <em>J. Stat. Software</em> (a reference to this information is also in the <a href="http://cran.r-project.org/web/packages/lme4/citation.html" rel="nofollow">citation information for the package on CRAN</a>). (We're still working on the paper that describes the GLMM capabilities.)</li> <li>the <code>getME()</code> function is the current recommended method for accessing model information.</li> <li><code>lme4</code> is still under fairly active development <a href="https://github.com/lme4/lme4/" rel="nofollow">on github</a>.</li> <li>Doug Bates is indeed more interested in <a href="https://github.com/dmbates/MixedModels.jl" rel="nofollow">building mixed model frameworks in Julia now</a>, but he still participates in <code>lme4</code> maintenance to some extent.</li> </ul> <p>The notation/internal representation has changed somewhat, but reconstructing the variance-covariance matrix from internal information can be done as follows (the internal <code>Lambdat</code> is equivalent to <code>t(T %*% S)</code> in the old notation).</p> <pre><code>library("lme4") fm1 &lt;- lmer(Yield ~ 1 + (1 | Batch), Dyestuff) crossprod(getME(fm1,"Lambdat")) </code></pre>
2015-06-04 00:11:24.810000+00:00
2015-06-04 00:16:58.117000+00:00
2015-06-04 00:16:58.117000+00:00
null
30,632,754
<p>I'm trying to understand the linear algebra operations behind the <code>lmer</code> function in <code>R</code>, and I found what seemed to be a great resource on-line on a <a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=5&amp;cad=rja&amp;uact=8&amp;ved=0CEkQFjAE&amp;url=http%3A%2F%2Fwww.stat.wisc.edu%2F~bates%2FPotsdamGLMM%2FLMMD.pdf&amp;ei=_oxvVZ_POI71yATSi4I4&amp;usg=AFQjCNFJyW3K10nLDRBmcRwPudsrrSnElg&amp;bvm=bv.94911696,d.aWw" rel="nofollow">lecture</a> given by the creator of the <code>lme4</code> package, Douglas Bates.</p> <p>The example deals with the dataset <em>Dyestuff</em> and calls for a mixed-effects model as follows:</p> <pre><code>fm1 &lt;- lmer(Yield ~ 1 + (1 | Batch), Dyestuff) </code></pre> <p>The following slides include lines of code to extract the underlying matrices for the random effects, such as:</p> <pre><code>efm1 &lt;- expand(fm1) efm1$S 6 x 6 diagonal matrix of class "ddiMatrix" # [,1] [,2] [,3] [,4] [,5] [,6] [1,] 0.84823 . . . . . [2,] . 0.84823 . . . . [3,] . . 0.84823 . . . [4,] . . . 0.84823 . . [5,] . . . . 0.84823 . [6,] . . . . . 0.84823 </code></pre> <p>and,</p> <pre><code>efm1$T6 x 6 sparse Matrix of class "dtCMatrix" [1,] 1 . . . . . [2,] . 1 . . . . [3,] . . 1 . . . [4,] . . . 1 . . [5,] . . . . 1 . [6,] . . . . . 1 </code></pre> <p>or,</p> <pre><code>(fm1S &lt;- tcrossprod(efm1$T %*% efm1$S)) 6 x 6 sparse Matrix of class "dsCMatrix" [1,] 0.71949 . . . . . [2,] . 0.71949 . . . . [3,] . . 0.71949 . . . [4,] . . . 0.71949 . . [5,] . . . . 0.71949 . [6,] . . . . . 0.71949 </code></pre> <p>Yet, when I try to run the same line codes on R, I get the following error messages:</p> <pre><code>efm1 &lt;- expand(fm1) Error in (function (classes, fdef, mtable) : unable to find an inherited method for function ‘expand’ for signature ‘"lmerMod"’ </code></pre> <p>and not surprisingly,</p> <pre><code>efm1$S Error: object 'efm1' not found </code></pre> <p>Doing a <code>?expand</code> identifies this function as still existing, and seemingly meant to produce matrix decompositions, such as LU or RQ.</p> <p>Doing a search on-line, I found out that Douglas is using now <em>Julia</em> (can the next statistical language have a less implausible name? No, not "Pied Pier"! Sorry, I digress...).</p> <p>What am I doing wrong? Is the <code>lme4</code> now orphan and in decay? Is there a typo in the slides?</p>
2015-06-03 23:51:55.860000+00:00
2015-06-04 00:16:58.117000+00:00
null
r|lme4
['http://arxiv.org/abs/1406.5823', 'http://cran.r-project.org/web/packages/lme4/citation.html', 'https://github.com/lme4/lme4/', 'https://github.com/dmbates/MixedModels.jl']
4
61,877,967
<p>Good insight -- it turns out this works really well to solve the left recursion problem, but you also have to parse bottom-up, not just right to left. I published a preprint about this.</p> <p>Pika parsing: parsing in reverse solves the left recursion and error recovery problems</p> <p><a href="https://arxiv.org/abs/2005.06444" rel="nofollow noreferrer">https://arxiv.org/abs/2005.06444</a></p>
2020-05-18 19:57:17.910000+00:00
2020-05-18 19:57:17.910000+00:00
null
null
34,374,871
<p>I'm working on a project for my OOP class. Part of the task is developing a parser for a very simple grammar. As far as I understood, by far the simplest parser to implement by hand is recursive-descent parser.</p> <p>However all operators for the language that I'm parsing are left-associative by nature. As far as I know best way to deal with left recursion enforced by left associativity is to use LR parser instead.</p> <p>My idea is to parse tokens right-to-left instead, which I believe should enable me to rewrite left associative rules to right associative ones instead.</p> <p>Will this work, and if not, why not?</p>
2015-12-19 20:07:10.627000+00:00
2020-05-18 19:57:17.910000+00:00
null
parsing|recursion
['https://arxiv.org/abs/2005.06444']
1
59,981,008
<blockquote> <p>But when I am predicting it for house.jpg or laptop.jpg or images other than these 3-classes then also it predicting among these 3-classes.</p> </blockquote> <p>This is the normal behaviour, because the last layer of the network</p> <pre><code>model.add(Dense(3, activation="softmax")) </code></pre> <p>returns <em>probabilities</em> for every class in your problem.</p> <p>So, if you use a <code>laptop.jpg</code> image, it may return three small probabilities, and the biggest one is given to you as the output.</p> <p>Since you're not using <code>laptop</code> images in your <code>training</code> set, the <code>neural network</code> has no idea about them.</p> <p>One approach could be setting a threshold probability, let's say <code>50%</code>: if none of those <code>3</code> probabilities exceeds this threshold, then print <code>Unknown</code>.</p> <p>In other words, if you are using a <em>softmax</em> distribution for your classification, you could determine what your baseline <em>max</em> <code>probability</code> is for correctly classified samples, and then infer that a new sample doesn't belong to any of your known classes if its <em>max probability</em> is below some kind of <code>threshold</code>.</p> <p>This idea comes from a <em>research</em> paper which explains this situation: <a href="https://arxiv.org/abs/1610.02136" rel="nofollow noreferrer">A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks</a></p>
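<p>For illustration, a minimal sketch of that thresholding step, written against the prediction script from the question (<code>new_model</code>, <code>image</code> and <code>lb</code> are the objects already loaded there; the 0.5 cut-off is an assumption you would tune on validation data):</p> <pre><code>import numpy as np

THRESHOLD = 0.5  # assumed cut-off; tune it on held-out data

preds = new_model.predict(image.reshape(1, 32, 32, 3))[0]  # softmax over the 3 known classes
best = int(np.argmax(preds))

if preds[best] &lt; THRESHOLD:
    label = "Unknown"            # no known class is confident enough
else:
    label = lb.classes_[best]    # lb is the LabelBinarizer loaded in the question's script

print(label, preds[best])
</code></pre>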
2020-01-30 07:28:51.600000+00:00
2020-01-30 07:43:40.520000+00:00
2020-01-30 07:43:40.520000+00:00
null
59,980,932
<pre><code># set the matplotlib backend so figures can be saved in the background import matplotlib matplotlib.use("Agg") # import the necessary packages from keras.layers.core import Dropout, Activation from sklearn.preprocessing import LabelBinarizer from sklearn.model_selection import train_test_split from keras.models import Sequential from keras.optimizers import SGD import matplotlib.pyplot as plt from keras.callbacks import EarlyStopping from keras.callbacks import ModelCheckpoint from keras.layers import Dense, Conv2D, Flatten from keras.layers.convolutional import MaxPooling2D (trainX, testX, trainY, testY) = train_test_split(data,labels, test_size=0.25, random_state=42) lb = LabelBinarizer() trainY = lb.fit_transform(trainY) testY = lb.transform(testY) #create model model = Sequential() #add model layers model.add(Conv2D(32, kernel_size=3, activation="relu", input_shape=(32,32,3))) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Conv2D(32, kernel_size=3, activation="relu")) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Conv2D(64, kernel_size=3, activation="relu")) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Flatten()) model.add(Dense(64)) model.add(Activation("relu")) model.add(Dropout(0.5)) model.add(Dense(3, activation="softmax")) # initialize our initial learning rate and # of epochs to train for INIT_LR = 0.001 EPOCHS = 500 opt = SGD(lr=INIT_LR, clipvalue=0.5) model.compile(loss="categorical_crossentropy", optimizer=opt,metrics=["accuracy"]) es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=200) mc = ModelCheckpoint('best_model_500Epoch.h5', monitor='val_accuracy', mode='max', verbose=1, save_best_only=True) H = model.fit(trainX, trainY, validation_data=(testX, testY),epochs=EPOCHS, batch_size=32,callbacks=[es, mc]) </code></pre> <p>Using following script for prediction.</p> <pre><code>from keras.models import load_model import pickle import cv2 import os import matplotlib.pyplot as plt from keras import backend as k new_model = load_model('model_name.h5') lb = pickle.loads(open("Label_Binarizer", "rb").read()) dirName = "Other_than_class" listOfFile = os.listdir(dirName) # Iterate over all the entries for entry in listOfFile: # Create full path fullPath = os.path.join(dirName, entry) # If entry is a directory then get the list of files in this directory image = cv2.imread(fullPath) output = image.copy() image = cv2.resize(image, (32, 32)) # scale the pixel values to [0, 1] image = image.astype("float") / 255.0 # check to see if we should flatten the image and add a batch # dimension image = image.flatten() image = image.reshape((1, image.shape[0])) # preds = new_model.predict(image) preds = new_model.predict(image.reshape(1, 32, 32, 3)) print(preds[0]) k.clear_session() # find the class label index with the largest corresponding probability i = preds.argmax(axis=1)[0] label = lb.classes_[i] plt.grid(False) plt.imshow(output) plt.xlabel("Actual: " + str(entry)) plt.title("Prediction: " + str(preds[0][i] * 100)+" "+str(label)) plt.show() </code></pre> <p>I have developed model using above architecture for 3-classes cat,dog and flower. it giving good result when i am predicting any unseen image of these classes. but when I am predicting it for house.jpg or laptop.jpg or images other than these 3-classes then also it predicting among these 3-classes which is so disgusting. what's i am doing wrong?</p> <p>The accuracy of predicting of house.jpg or laptop.jpg is also above 85%. what to do so that it must not predict images out of the classes.</p>
2020-01-30 07:23:32.803000+00:00
2020-01-31 06:01:54.967000+00:00
2020-01-31 06:01:54.967000+00:00
python|tensorflow|machine-learning|deep-learning|neural-network
['https://arxiv.org/abs/1610.02136']
1
34,645,936
<p>I highly recommend the following paper by Gavish and Donoho: <a href="http://arxiv.org/abs/1305.5870" rel="noreferrer"><em>The Optimal Hard Threshold for Singular Values is 4/sqrt(3)</em></a>. </p> <p>I posted a longer summary of this on <a href="https://stats.stackexchange.com/questions/44060/choosing-number-of-principal-components-to-retain/189645#189645">CrossValidated <em>(stats.stackexchange.com)</em></a>. Briefly, they obtain an optimal procedure in the limit of very large matrices. The procedure is very simple, does not require any hand-tuned parameters, and seems to work very well in practice.</p> <p>They have a nice code supplement here: <a href="https://purl.stanford.edu/vg705qn9070" rel="noreferrer">https://purl.stanford.edu/vg705qn9070</a></p>
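<p>As a rough, minimal sketch of the rule (the 2.858 coefficient is the square-matrix, unknown-noise case from the paper; rectangular matrices use a different coefficient, see their code supplement):</p> <pre><code>import numpy as np

def rank_by_optimal_hard_threshold(X):
    # Keep singular values above ~2.858 * median (square matrix, unknown noise level).
    s = np.linalg.svd(X, compute_uv=False)
    tau = 2.858 * np.median(s)
    return int(np.sum(s &gt; tau))

# Toy check: a rank-one signal buried in noise should come out as roughly rank 1.
X = 5.0 * np.outer(np.ones(200), np.ones(200)) + np.random.randn(200, 200)
print(rank_by_optimal_hard_threshold(X))
</code></pre>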
2016-01-07 01:32:32.867000+00:00
2016-01-07 01:32:32.867000+00:00
2017-04-13 12:44:13.837000+00:00
null
12,067,446
<p>I know that principal component analysis does a SVD on a matrix and then generates an eigen value matrix. To select the principal components we have to take only the first few eigen values. Now, how do we decide on the number of eigen values that we should take from the eigen value matrix?</p>
2012-08-22 06:31:16.407000+00:00
2021-06-15 09:20:24.170000+00:00
null
machine-learning|data-mining|svd
['http://arxiv.org/abs/1305.5870', 'https://stats.stackexchange.com/questions/44060/choosing-number-of-principal-components-to-retain/189645#189645', 'https://purl.stanford.edu/vg705qn9070']
3
71,535,982
<p>In <a href="https://arxiv.org/ftp/arxiv/papers/2111/2111.14839.pdf" rel="nofollow noreferrer">this paper</a>, the author's use PCA to combine categorical features of high cardinality. If I understood correctly, they first calculate conditional probabilities for each target class. Then they choose a threshold hyperparameter and create a new binary variable for each conditional class probability for each categorical feature to be combined. PCA is performed to combine the new binary variables with the number of components retained specified as a hyperparameter.</p>
2022-03-19 06:49:41.560000+00:00
2022-03-19 06:49:41.560000+00:00
null
null
40,795,141
<p>In my understanding, PCA can be performed only on continuous features. But while trying to understand the difference between one-hot encoding and label encoding, I came across a post at the following link:</p> <p><a href="https://datascience.stackexchange.com/questions/9443/when-to-use-one-hot-encoding-vs-labelencoder-vs-dictvectorizor">When to use One Hot Encoding vs LabelEncoder vs DictVectorizor?</a></p> <p>It states that one-hot encoding followed by PCA is a very good method, which basically means PCA is applied to categorical features. This has me confused, so please advise.</p>
2016-11-24 22:11:36.117000+00:00
2022-03-19 06:49:41.560000+00:00
2017-04-13 12:50:40.647000+00:00
python|machine-learning|scikit-learn|data-mining
['https://arxiv.org/ftp/arxiv/papers/2111/2111.14839.pdf']
1
54,209,748
<p>The following publication shows great and meaningful results when computing PCA on categorical variables treated as simplex vertices:</p> <blockquote> <p>Niitsuma H., Okada T. (2005) Covariance and PCA for Categorical Variables. In: Ho T.B., Cheung D., Liu H. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2005. Lecture Notes in Computer Science, vol 3518. Springer, Berlin, Heidelberg</p> <p><a href="https://doi.org/10.1007/11430919_61" rel="nofollow noreferrer">https://doi.org/10.1007/11430919_61</a></p> </blockquote> <p>It is available via <a href="https://arxiv.org/abs/0711.4452" rel="nofollow noreferrer">https://arxiv.org/abs/0711.4452</a> (including as a PDF).</p>
2019-01-16 02:50:20.257000+00:00
2020-03-14 12:40:33.173000+00:00
2020-03-14 12:40:33.173000+00:00
null
40,795,141
<p>In my understanding, PCA can be performed only on continuous features. But while trying to understand the difference between one-hot encoding and label encoding, I came across a post at the following link:</p> <p><a href="https://datascience.stackexchange.com/questions/9443/when-to-use-one-hot-encoding-vs-labelencoder-vs-dictvectorizor">When to use One Hot Encoding vs LabelEncoder vs DictVectorizor?</a></p> <p>It states that one-hot encoding followed by PCA is a very good method, which basically means PCA is applied to categorical features. This has me confused, so please advise.</p>
2016-11-24 22:11:36.117000+00:00
2022-03-19 06:49:41.560000+00:00
2017-04-13 12:50:40.647000+00:00
python|machine-learning|scikit-learn|data-mining
['https://doi.org/10.1007/11430919_61', 'https://arxiv.org/abs/0711.4452']
2
47,754,984
<p>This is the subject of a number of recent academic papers, as the field expands rapidly.</p> <p>These for example are based on <a href="https://arxiv.org/pdf/1711.03936.pdf" rel="nofollow noreferrer">Consensus in the Age of Blockchains</a>:</p> <blockquote> <ul> <li>Committee Formation - How the members of the committee are chosen, for example via proof-of-work, proof-of-stake, trusted hardware, etc.</li> <li>Consistency - The likelihood that the system will reach consensus on a proposed value; it can be either strong or weak</li> <li>Incentive Model</li> <li>Safety (Transaction, Censorship Resistance, DoS Resistance)</li> <li>Adversary models considered</li> <li>Performance (Throughput, Scalability, Latency)</li> <li>Exp. Setup</li> <li>Code availability</li> </ul> </blockquote> <p>See also <a href="https://arxiv.org/abs/1707.01873" rel="nofollow noreferrer">Blockchain Consensus Protocols in the Wild</a>.</p>
2017-12-11 14:22:55.980000+00:00
2022-09-16 08:01:50.397000+00:00
2022-09-16 08:01:50.397000+00:00
null
41,690,983
<p>Developers are constantly using different blockchain network protocols such as Hyperledger, Multichain, Ethereum, Corda, and others. The community would appreciate it if developers &amp; blockchain enthusiasts could share some key differences between the various types of blockchains mentioned above.</p> <p>Thanks!</p>
2017-01-17 07:11:59.493000+00:00
2022-09-16 08:01:50.397000+00:00
2018-02-19 20:05:35.287000+00:00
blockchain|hyperledger|ethereum|corda
['https://arxiv.org/pdf/1711.03936.pdf', 'https://arxiv.org/abs/1707.01873']
2
68,695,576
<p>Here is a paper about a very similar AI: <a href="https://arxiv.org/pdf/1703.06891.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1703.06891.pdf</a></p> <p>They use <strong>two</strong> neural networks: the first one extracts audio features such as drums and decides when to place blocks <strong>(timing of blocks)</strong>, and the other one decides how the blocks are arranged <strong>(direction of blocks)</strong>.</p>
2021-08-07 19:33:30.510000+00:00
2021-08-07 19:33:30.510000+00:00
null
null
68,695,217
<p>I want to create an AI that converts songs into beatsaber levels (it's a VR game). A beatsaber level can be stored as an array of &quot;blocks&quot; that look like this:</p> <pre><code>{
  &quot;time&quot;: 1.25,
  &quot;direction&quot;: &quot;up&quot;,
  &quot;hand&quot;: &quot;right&quot;,
  &quot;pos_x&quot;: 0,
  &quot;pos_y&quot;: 1
}
</code></pre> <p>So the AI should convert songs into a level that has a lot of blocks like the one above. So far I have only built AIs that solve classification problems, but this problem has a generative nature. Does anyone know how to approach this kind of problem, maybe with links that lead me in the right direction?</p> <p>I have enough training data of already existing levels and songs, so that is no problem. I just need to know what the architecture of an AI like this looks like and how it should work.</p> <p>Also I would like to build this with tensorflow if it's possible, but other technologies are also fine.</p> <p>Here is a link that should give you a better understanding of the game. The above JSON example represents one block in the level, so the AI should generate an array of these blocks and therefore generate a whole level. <a href="https://www.youtube.com/watch?v=7JVXv2ySToU" rel="nofollow noreferrer">https://www.youtube.com/watch?v=7JVXv2ySToU</a></p>
2021-08-07 18:40:43.263000+00:00
2021-08-07 19:33:30.510000+00:00
null
python|tensorflow|keras|deep-learning|neural-network
['https://arxiv.org/pdf/1703.06891.pdf']
1
65,070,843
<ol> <li>To save a png file with Pillow, you have to specify the format as a second parameter:</li> </ol> <pre><code>img2.save('test.png', 'PNG') </code></pre> <p>You could also save files with matplotlib:</p> <pre><code>import matplotlib.pyplot as plt plt.imsave(&quot;test.png&quot;, np.array(img2)) </code></pre> <ol start="2"> <li><p>I am not super familiar with the C&amp;W attack, but after looking at their paper <a href="https://arxiv.org/pdf/1608.04644.pdf" rel="nofollow noreferrer">Towards Evaluating the Robustness of Neural Networks</a> and the code for the attack in <a href="https://github.com/carlini/nn_robust_attacks/blob/master/l2_attack.py" rel="nofollow noreferrer">l2_attack.py</a>, it appears that the perturbation is bounded by the <code>boxmin</code> and <code>boxmax</code> parameters, and that the attack itself gradually changes the images and then selects the ones that are the most effective. It won't tell you how many pixels are changed because that is usually a hyperparameter that you decide for yourself.</p> </li> <li><p>To change the image attacked, change the <code>inputs</code> and <code>targets</code> in the attack initialization function:</p> </li> </ol> <pre><code>adv = attack.attack(inputs, targets) </code></pre> <p>You will also want to load your desired data instead of MNIST beforehand. There are good tutorials online for this. Just search &quot;Tensorflow&quot; and &quot;dataloading&quot;.</p> <ol start="4"> <li>To make the <code>show</code> function compatible with CIFAR, change stuff like <code>(28, 28)</code> to <code>(32,32,3)</code>, as CIFAR images are 32x32 color images. You'll also need to change the length accordingly. Alternatively, it might be easier to write your own <code>show</code> function based on whatever data you're using; a rough sketch of such a helper is shown below.</li> </ol>
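<p>For points 1 and 4, here is a rough sketch of a save helper adapted to CIFAR-sized images. It assumes the attack code keeps pixel values roughly in [-0.5, 0.5], which is what the <code>(img.flatten()+.5)</code> scaling in the question's <code>show</code> function suggests; adjust the rescaling if your preprocessing differs.</p> <pre><code>import numpy as np
from PIL import Image

def save_cifar_png(img, path):
    # CIFAR images are 32x32x3; shift from roughly [-0.5, 0.5] back to [0, 255] before saving.
    arr = (np.asarray(img).reshape(32, 32, 3) + 0.5) * 255.0
    arr = np.clip(arr, 0, 255).astype("uint8")
    Image.fromarray(arr).save(path, "PNG")

# e.g. save_cifar_png(adv[i], "adv_%d.png" % i)
</code></pre>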
2020-11-30 09:20:18.363000+00:00
2020-11-30 09:20:18.363000+00:00
null
null
61,759,536
<p>Once again I need your help. I recently dipped into the realm of Machine Learning and read quite a few papers that got me curious :) Now I wanted to recreate/execute the C&amp;W L2 attack. I cloned the whole repo of Nicholas Carlini <a href="https://github.com/carlini/nn_robust_attacks" rel="nofollow noreferrer">https://github.com/carlini/nn_robust_attacks</a> and started training a network with the train_models.py - only MNIST, to speed things up a bit.</p> <p>Next, I executed the 'test_attack.py'. I modified the output a bit so it made a bit more sense for me (like, showing the predicted class of the adversarial example), but now I am struggling a bit. Instead of, or additionally to, having the adversarial example be shown in the console, I want to save it to a .png/.jpg file. I messed around quite a bit, but only got as far as getting a 28x28 black .png file.</p> <p>My "modified" file looks like this right now:</p> <pre><code>import tensorflow as tf import numpy as np import time from PIL import Image from setup_cifar import CIFAR, CIFARModel from setup_mnist import MNIST, MNISTModel from setup_inception import ImageNet, InceptionModel from l2_attack import CarliniL2 def show(img): """ Show MNSIT digits in the console. """ remap = " .*#"+"#"*100 print(type(img)) img2 = img.reshape((28,28)).astype('uint8')*255 img2 = Image.fromarray(img2) img2.save('test.png') img = (img.flatten()+.5)*3 if len(img) != 784: return print("START") for i in range(28): print("".join([remap[int(round(x))] for x in img[i*28:i*28+28]])) def generate_data(data, samples, targeted=True, start=0, inception=False): """ Generate the input data to the attack algorithm. data: the images to attack samples: number of samples to use targeted: if true, construct targeted attacks, otherwise untargeted attacks start: offset into data to use inception: if targeted and inception, randomly sample 100 targets intead of 1000 """ inputs = [] targets = [] for i in range(samples): if targeted: if inception: seq = random.sample(range(1,1001), 10) else: #seq = range(2) seq = range(data.test_labels.shape[1]) print(seq) for j in seq: if (j == np.argmax(data.test_labels[start+i])) and (inception == False): continue inputs.append(data.test_data[start+i]) targets.append(np.eye(data.test_labels.shape[1])[j]) else: inputs.append(data.test_data[start+i]) targets.append(data.test_labels[start+i]) inputs = np.array(inputs) targets = np.array(targets) return inputs, targets if __name__ == "__main__": with tf.Session() as sess: data, model = MNIST(), MNISTModel("models/mnist", sess) attack = CarliniL2(sess, model, batch_size=9, max_iterations=1000, confidence=1) inputs, targets = generate_data(data, samples=1, targeted=True, start=0, inception=False) timestart = time.time() adv = attack.attack(inputs, targets) timeend = time.time() print("Took",timeend-timestart,"seconds to run",len(inputs),"samples.") for i in range(len(adv)): print(len(adv)) print("Valid:") show(inputs[i]) print("Adversarial:") show(adv[i]) pred = model.model.predict(inputs[i:i+1]) print("Classification (orig):", pred) print("Prediction class original:", np.argmax(pred)) advpred = model.model.predict(adv[i:i+1]) print("Classification:", model.model.predict(adv[i:i+1])) print("Adversarial example classification: ", np.argmax(advpred)) print("Total distortion:", np.sum((adv[i]-inputs[i])**2)**.5) </code></pre> <p>My questions would be:</p> <ol> <li>Is there a way to get the images saved as .png files?</li> <li>What exactly is the total distortion? 
It does not seem to be a % number. I thought it would tell me how many pixels had to be changed, but I guess I am totally wrong here.</li> <li>By default, it is always the image of a "7" that gets attacked. I have not figured out so far how to choose which number to create adversarial images for. Also, it creates, by default, an adversarial example for every class (one image of a 7 that gets classified as 0, another one for 1, 2, 3, etc.). Is there a way I can specify the target class exactly? To only get one adversarial example, say a 7 that gets classified as a 9? I suspect that is something super simple I just don't see...</li> <li>Since I would love to try this with CIFAR10 too (it just takes ages to train on my super old laptop, so that's going to be an overnight job): will there be a way to save the CIFAR adversarial examples to .img/.png too? Because as far as I can tell, the "show" function only covers the MNIST set?</li> </ol> <p>Sorry if those are pretty basic questions, but I am super new to ML and not as experienced in Python as I would like to be! I googled quite a lot, but haven't seen anyone who has implemented the attack with the original source code and did what I want to do.</p> <p>Thank you very much in advance, I know it's a lot to ask for!</p>
2020-05-12 18:40:33.377000+00:00
2020-11-30 09:20:18.363000+00:00
null
python|tensorflow|machine-learning|conv-neural-network|image-recognition
['https://arxiv.org/pdf/1608.04644.pdf', 'https://github.com/carlini/nn_robust_attacks/blob/master/l2_attack.py']
2
43,458,667
<p>I think you must be using <code>MSECriterion()</code>? That is the standard <em>l2</em> (mean squared error) loss. When the CNN tries to predict results, there are multiple modes through which the result can be correct, and what the <em>l2</em> loss does is converge to an average of all these modes, because that average is the least-penalized result it can reach.</p> <blockquote> <p>The MSE-based solution appears overly smooth due to the pixel-wise average of possible solutions in the pixel space</p> </blockquote> <p>To pick the optimum mode instead, you can look into an <code>adversarial loss</code> <a href="https://www.google.co.in/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0ahUKEwiE56G2oqzTAhVLrY8KHeB8DuIQFggiMAA&amp;url=https%3A%2F%2Farxiv.org%2Fabs%2F1406.2661&amp;usg=AFQjCNEYi3th_8xRH0BIjxxmc-AM4lKdzA&amp;sig2=XzUiulxFYpqgyRHFKQ3N1A" rel="nofollow noreferrer">LINK</a>. This loss picks the optimum mode based on what it thinks is realistic in terms of the data it has seen.</p> <p>For further clarification, look at figure 3 in this paper: <a href="https://arxiv.org/pdf/1609.04802.pdf" rel="nofollow noreferrer">SRGAN</a></p>
2017-04-17 19:58:09.677000+00:00
2017-04-17 19:58:09.677000+00:00
null
null
43,336,472
<p>I am training a CNN for predicting joints on hands. The problem is that my net always converges to the mean value of the training set, and I can only get identical results for different test images. Do you know how to prevent this?</p>
2017-04-11 03:54:33.013000+00:00
2018-09-27 10:22:43.360000+00:00
2018-09-27 10:22:43.360000+00:00
neural-network|deep-learning|caffe|torch
['https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwiE56G2oqzTAhVLrY8KHeB8DuIQFggiMAA&url=https%3A%2F%2Farxiv.org%2Fabs%2F1406.2661&usg=AFQjCNEYi3th_8xRH0BIjxxmc-AM4lKdzA&sig2=XzUiulxFYpqgyRHFKQ3N1A', 'https://arxiv.org/pdf/1609.04802.pdf']
2
13,747,269
<p>MapReduce will work just fine, and you could probably do most of the input-output shuffling with Pig.</p> <p>See</p> <p><a href="http://arxiv.org/abs/1207.4371" rel="nofollow">http://arxiv.org/abs/1207.4371</a></p> <p>for some algorithms.</p> <p>Of course, to make sure you get a running start, you don't actually need to use MapReduce for this task; just split the input yourself, write the simplest fast program to calculate ngrams of a single input file, and aggregate the ngram frequencies later.</p>
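<p>A minimal sketch of that per-file counting step (my own illustration; the tokenisation here is a crude whitespace split, so swap in whatever sentence/word regex you already have):</p> <pre><code>from collections import Counter

def ngram_counts(path, n=3):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = line.lower().split()   # crude word splitting; replace with your regex
            counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts

# One process per input split, then merge the partial counts:
# total = sum((ngram_counts(p) for p in split_paths), Counter())
</code></pre>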
2012-12-06 15:46:18.273000+00:00
2012-12-06 15:46:18.273000+00:00
null
null
13,747,120
<p>I'd like to generate ngram frequencies for a large dataset. Wikipedia, or more specifically, Freebase's WEX is suitable for my purposes.</p> <p>What's the best and most cost efficient way to do it in the next day or so?</p> <p>My thoughts are:</p> <ul> <li>PostgreSQL using regex to split sentences and words. I already have the WEX dump in PostgreSQL, and I already have regex to do the splitting (major accuracy isn't required here)</li> <li>MapReduce with Hadoop</li> <li>MapReduce with Amazon's Elastic MapReduce, which I know next to nothing about</li> </ul> <p>My experience with Hadoop consists of calculating Pi on three EC2 instances very very inefficiently. I'm good with Java, and I understand the concept of Map + Reduce. PostgreSQL I fear will take a long, long time, as it's not easily parallelisable.</p> <p>Any other ways to do it? What's my best bet for getting it done in the next couple days?</p>
2012-12-06 15:38:41.113000+00:00
2012-12-06 18:30:32.717000+00:00
null
postgresql|hadoop|mapreduce|bigdata|elastic-map-reduce
['http://arxiv.org/abs/1207.4371']
1
50,396,580
<h2>The noise shape</h2> <p>In order to understand <code>SpatialDropout1D</code>, you should get used to the notion of the <strong>noise shape</strong>. In plain vanilla dropout, each element is kept or dropped independently. For example, if the tensor is <code>[2, 2, 2]</code>, each of 8 elements can be zeroed out depending on random coin flip (with certain "heads" probability); in total, there will be 8 independent coin flips and any number of values may become zero, from <code>0</code> to <code>8</code>.</p> <p>Sometimes there is a need to do more than that. For example, one may need to drop the <em>whole slice</em> along <code>0</code> axis. The <code>noise_shape</code> in this case is <code>[1, 2, 2]</code> and the dropout involves only 4 independent random coin flips. The first component will either be kept together or be dropped together. The number of zeroed elements can be <code>0</code>, <code>2</code>, <code>4</code>, <code>6</code> or <code>8</code>. It cannot be <code>1</code> or <code>5</code>.</p> <p>Another way to view this is to imagine that input tensor is in fact <code>[2, 2]</code>, but each value is double-precision (or multi-precision). Instead of dropping the bytes in the middle, the layer drops the full multi-byte value.</p> <h2>Why is it useful?</h2> <p>The example above is just for illustration and isn't common in real applications. More realistic example is this: <code>shape(x) = [k, l, m, n]</code> and <code>noise_shape = [k, 1, 1, n]</code>. In this case, each batch and channel component will be kept independently, but each row and column will be kept or not kept together. In other words, the <em>whole</em> <code>[l, m]</code> <em>feature map</em> will be either kept or dropped.</p> <p>You may want to do this to account for adjacent pixels correlation, especially in the early convolutional layers. Effectively, you want to prevent co-adaptation of pixels with its neighbors across the feature maps, and make them learn as if no other feature maps exist. This is exactly what <code>SpatialDropout2D</code> is doing: it promotes independence between feature maps.</p> <p>The <code>SpatialDropout1D</code> is very similar: given <code>shape(x) = [k, l, m]</code> it uses <code>noise_shape = [k, 1, m]</code> and drops entire 1-D feature maps.</p> <p>Reference: <a href="https://arxiv.org/abs/1411.4280" rel="noreferrer">Efficient Object Localization Using Convolutional Networks</a> by Jonathan Tompson at al.</p>
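<p>A small runnable illustration of this equivalence (assuming TensorFlow 2.x Keras; the shapes are toy values): <code>SpatialDropout1D</code> behaves like plain <code>Dropout</code> with a noise shape of <code>[batch, 1, channels]</code>, so a dropped channel is zeroed across all timesteps.</p> <pre><code>import tensorflow as tf
from tensorflow import keras

x = tf.reshape(tf.range(2 * 4 * 3, dtype=tf.float32), (2, 4, 3))  # (batch, timesteps, channels)

# SpatialDropout1D: whole channels are kept or dropped together across all timesteps.
spatial = keras.layers.SpatialDropout1D(0.5)
print(spatial(x, training=True))

# Same behaviour with plain Dropout and an explicit noise_shape of (batch, 1, channels).
plain = keras.layers.Dropout(0.5, noise_shape=(2, 1, 3))
print(plain(x, training=True))
</code></pre>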
2018-05-17 16:41:13.440000+00:00
2018-05-18 13:40:46.987000+00:00
2018-05-18 13:40:46.987000+00:00
null
50,393,666
<p>Occasionally I see some models are using <code>SpatialDropout1D</code> instead of <code>Dropout</code>. For example, in the Part of speech tagging neural network, they use:</p> <pre class="lang-python prettyprint-override"><code>model = Sequential() model.add(Embedding(s_vocabsize, EMBED_SIZE, input_length=MAX_SEQLEN)) model.add(SpatialDropout1D(0.2)) ##This model.add(GRU(HIDDEN_SIZE, dropout=0.2, recurrent_dropout=0.2)) model.add(RepeatVector(MAX_SEQLEN)) model.add(GRU(HIDDEN_SIZE, return_sequences=True)) model.add(TimeDistributed(Dense(t_vocabsize))) model.add(Activation("softmax")) </code></pre> <p>According to Keras' documentation, it says:</p> <blockquote> <p>This version performs the same function as Dropout, however it drops entire 1D feature maps instead of individual elements.</p> </blockquote> <p>However, I am unable to understand the meaning of <strong>entrie 1D feature</strong>. More specifically, I am unable to visualize <code>SpatialDropout1D</code> in the same model explained in <a href="https://www.quora.com/How-does-the-dropout-method-work-in-deep-learning-And-why-is-it-claimed-to-be-an-effective-trick-to-improve-your-network" rel="noreferrer">quora</a>. Can someone explain this concept by using the same model as in quora?</p> <p>Also, under what situation we will use <code>SpatialDropout1D</code> instead of <code>Dropout</code>?</p>
2018-05-17 14:11:37.677000+00:00
2021-05-28 12:10:59.023000+00:00
2018-05-17 20:31:34.433000+00:00
machine-learning|keras|deep-learning|conv-neural-network|dropout
['https://arxiv.org/abs/1411.4280']
1
64,397,892
<h1>The Mile-High Overview</h1> <p>Intuitively, you can think of a count-min sketch as a space-efficient data structure for approximating how many times you've seen a given item in a data stream. From a client-side perspective, the count-min sketch supports two operations:</p> <ul> <li><strong>increment(x)</strong>, which says &quot;I've seen x another time,&quot; and</li> <li><strong>estimate(x)</strong>, which says &quot;please give me an estimate of how many times I've seen x.&quot;</li> </ul> <p>If you just look at this interface, you might be thinking &quot;isn't this something I could do with a hash table or a BST?&quot; And you'd be right - you could just make a BST whose keys are the elements and whose values are the number of times each item is seen, or do the same with a hash table. If you have enough memory to keep track of all the keys you're encountering, this is not an unreasonable. But imagine that you're working on, say, an Amazon server and you're tracking views of each product. Suddenly, even writing down the names of all the pages that are visited is going to take a <em>ton</em> of memory. Or imagine you're Google and you want to find the most-commonly-visited pages each day. It would be prohibitively expensive to get enough RAM simply to write down all the search queries, and paging out to disk would be too slow.</p> <p>What makes the count-min sketch shine is that <em>the amount of memory needed by a count-min sketch can be tweaked by the user.</em> If you give it more memory, it will produce better estimates of the true frequencies of the elements it's seen. If you give it less memory, it will produce lower-quality estimates, but with quantifiable guarantees about how likely it is for those estimates to be close to the true value. This means that if, say, you only have 1MB to dedicate to this purpose, you could get rough estimates of the frequencies, whereas if you have 128MB you could get dramatically better estimates.</p> <p>The estimates given back by a count-min sketch will never underestimate the true frequencies of the elements. For example, if a count-min sketch says that an item has appeared 50 times, then the element might have appeared 50 times, or 49 times, or 48 times, etc., but it't not possible for it to have appeared 100 times. This makes the count-min sketch useful for finding high-frequency items: low-frequency items might have their frequencies overestimated a little bit, but high-frequency items will always appear popular.</p> <h1>The Internal Details</h1> <p>The count-min sketch is a fairly straightforward data structure to implement. The basic idea is the following. Imagine we have an array of counters, and we want to use that array to keep track of the frequencies of the items we're seeing. We'll use a hash function to assign each item to some counter (compute its hash code and mod it by the table size). Whenever we <strong>increment</strong> that item, we'll just go to the appropriate counter, then increment it. To provide an <strong>estimate</strong>, we'll just go to that counter and return the value stored there.</p> <p>One way of thinking about how the count-min sketch works is to imagine each counter as a &quot;bucket&quot; holding all items of some type. 
We then estimate how frequent something is by seeing how many items are in its bucket, regardless of what those items are, as shown here:</p> <p><img src="https://i.stack.imgur.com/vCR96.png" alt="A collection of items distributed into buckets" /></p> <p>As you can see, we get reasonably good estimates for frequent items, while infrequent items might have their frequencies grossly overestimated. This also gives a nice intuition as to why the count-min sketch never underestimates the frequencies of the elements. You'll always at least count the item itself when looking in its bucket.</p> <p>If we make some reasonable assumptions about the hash functions we're using, we can formally prove that, on average, the estimate given back for an item's frequency is at most its actual frequency, plus the total number of items divided by the total number of counters. And that makes intuitive sense. If you have lots and lots of counters, at some point each item gets its own counter and the estimates will be perfectly accurate. On the other extreme, if you have almost no counters, then you'd expect all the items to be crammed into a small number of buckets, and the totals will start to be way off.</p> <p>While this approach guarantees that <em>on expectation</em> the estimates returned will be good, that doesn't mean that you have a <em>high probability</em> of getting a good estimate. One way to think about this is to think about what it means to have a good estimate &quot;on expectation.&quot; Intuitively, that would mean that if you were to build a bunch of these data structures, then the average of the estimates would probably be pretty good. However, any one individual estimate is not necessarily going to be all that good.</p> <p>The count-min sketch takes this idea to heart and, instead of just having one array of counters, maintains several independent arrays of counters, each of which has a different hash function dropping items into buckets. This gives a degree of redundancy and means that it's highly unlikely that you'll get &quot;unlucky&quot; with items colliding in bad ways.</p> <p>To get its overall estimate, the count-min sketch does something a bit more clever than just averaging the estimates together. Remember that the count-min sketch can never underestimate the actual frequencies of the items being stored, and can only overestimate them. This means that if you have a collection of different estimates coming back, the larger an estimate is, the worse it is. Why? Because the larger the estimate, the more items other than the one we care about it's counting. (Think back to buckets - if a bucket has a bunch of other items in it besides the one we care about, we don't want to use the size of that bucket as an estimate).</p> <p>This is where the &quot;min&quot; part of &quot;count-min sketch&quot; comes in. When returning an estimate, the count-min sketch returns the minimum estimate among all the estimates generated. This is the estimate with the least noise in it. Intuitively, this is very likely to give you a good estimate - the only way it fails is if every single estimate was bad, which is fairly unlikely.</p> <p>This means that the overall data structure, and the logic to manipulate it, is fairly straightforward:</p> <p><img src="https://i.stack.imgur.com/vhJmP.png" alt="Image of a 2D grid representing multiple rows of counters, plus pseudocode summarizing how the structure works." /></p> <h1>Learning More</h1> <p>There's more to explore about the count-min sketch. 
For example, how do you formally analyze a count-min sketch to determine how many counters you need per row or how many independent structures you'll need? What kinds of hash functions can you use? To learn more about that, check out <a href="http://web.stanford.edu/class/archive/cs/cs166/cs166.1206/lectures/10/Slides10.pdf" rel="noreferrer">these lecture slides</a>, which go into some detail on the topic.</p> <p>What happens if you want to support both increments and decrements? How do you use the count-min sketch to find the most frequent elements in a data stream? <a href="http://dimacs.rutgers.edu/%7Egraham/pubs/papers/cm-full.pdf" rel="noreferrer">The original paper on the topic</a> is a good resource here.</p> <p>Can you get the same results as a count-min sketch without randomness? <a href="https://arxiv.org/pdf/cs/0609032.pdf" rel="noreferrer">Yes, using some clever number theory.</a></p>
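<p>To make the two operations concrete, here is a small, illustrative Python version of the structure described above. This is only a sketch: the width and depth are arbitrary example values, and seeding Python's built-in <code>hash</code> per row is a stand-in for properly chosen independent hash functions, not what you'd use in production.</p>
<pre><code>import random

class CountMinSketch:
    def __init__(self, width=2000, depth=5, seed=42):
        # depth rows of width counters; each row gets its own hash seed
        self.width = width
        self.depth = depth
        rng = random.Random(seed)
        self.seeds = [rng.randrange(2**32) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, x):
        # stand-in for a pairwise-independent hash function for this row
        return hash((self.seeds[row], x)) % self.width

    def increment(self, x):
        # "I've seen x another time": bump one counter in every row
        for row in range(self.depth):
            self.table[row][self._index(row, x)] += 1

    def estimate(self, x):
        # never underestimates: return the least-noisy (smallest) counter
        return min(self.table[row][self._index(row, x)]
                   for row in range(self.depth))
</code></pre>
<p>After calling <code>increment(&quot;apple&quot;)</code> three times, <code>estimate(&quot;apple&quot;)</code> returns at least 3, and exactly 3 unless other items happened to collide with &quot;apple&quot; in every single row.</p>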
2020-10-17 00:14:59.730000+00:00
2020-10-17 00:20:21.200000+00:00
2020-10-17 00:20:21.200000+00:00
null
64,397,891
<p>What is a <a href="https://en.wikipedia.org/wiki/Count%E2%80%93min_sketch" rel="nofollow noreferrer">count-min sketch</a>? In what situations would it be useful?</p>
2020-10-17 00:14:59.730000+00:00
2020-10-17 00:20:21.200000+00:00
null
data-structures|stream|count-min-sketch
['http://web.stanford.edu/class/archive/cs/cs166/cs166.1206/lectures/10/Slides10.pdf', 'http://dimacs.rutgers.edu/%7Egraham/pubs/papers/cm-full.pdf', 'https://arxiv.org/pdf/cs/0609032.pdf']
3
55,952,771
<p>The large difference between AUC and AUCPR is most likely caused, as you suggest, by the class imbalance. You can either try to set <code>balance_classes = True</code> or set weights to a column that would weight the minority class more, e.g. taking the inverse of the class frequency. If you have a really small number of observations for the minority class, you can try to synthesise more using e.g. <a href="https://arxiv.org/pdf/1106.1813.pdf" rel="nofollow noreferrer">SMOTE</a>.</p>
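<p>A rough sketch of what that could look like with the H2O Python API (the file name, target column and parameter values below are illustrative placeholders - check them against your H2O version):</p>
<pre><code>import h2o
from h2o.automl import H2OAutoML

h2o.init()
train = h2o.import_file("train.csv")   # placeholder file
y = "response"                         # placeholder target column
x = [c for c in train.columns if c != y]
train[y] = train[y].asfactor()

# Let H2O rebalance the classes during training and rank models by AUCPR
aml = H2OAutoML(max_models=20, balance_classes=True,
                sort_metric="AUCPR", seed=1)
aml.train(x=x, y=y, training_frame=train)
print(aml.leaderboard)

# Alternatively, add a per-row weight column (e.g. inverse class frequency)
# to the training frame and pass it instead:
#   aml.train(x=x, y=y, training_frame=train, weights_column="weight")
</code></pre>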
2019-05-02 12:44:52.253000+00:00
2019-05-02 12:44:52.253000+00:00
null
null
55,941,654
<p>I have a use case with a very imbalanced data set. I undersampled the training dataset and tried running AutoML in H2O, but it gave me great AUC results (over 0.99) and very bad AUCPR results (0.09). Is this related to the imbalance issue? I ran with the weights_column option (<a href="http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/algo-params/weights_column.html" rel="nofollow noreferrer">http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/algo-params/weights_column.html</a>) but it didn't help. Should I use the balance_classes option instead (when I run both options it fails with an "h2oFrame is empty" message)? The train and test sets are split on a date-time range, and the test dataset has the proper ratio between majority and minority classes. </p>
2019-05-01 19:32:05.257000+00:00
2019-05-02 12:44:52.253000+00:00
null
h2o
['https://arxiv.org/pdf/1106.1813.pdf']
1
61,217,038
<p>SageMaker endpoint creation time depends on model size, machine type and serving stack complexity. Yet given the upload bandwidth from S3 to EC2 and the usual ML model artifact size (KB to single-digit GB), <strong>in practice I found that it rarely takes more than 10 minutes to instantiate a SageMaker endpoint</strong>.</p> <p>The hours-long task you are waiting to complete is likely the model training, which could take anywhere from a fraction of a second up to several days (Google's Neural Machine Translation model, <a href="https://arxiv.org/pdf/1609.08144.pdf" rel="nofollow noreferrer">GNMT</a> from Wu et al, reports a 6-day training). The <a href="https://docs.aws.amazon.com/general/latest/gr/sagemaker.html" rel="nofollow noreferrer">SageMaker documentation</a> mentions that the max training time is 5 days, though. You can look at CloudWatch logs to confirm the status of your training task.</p>
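<p>If it helps, here is a rough boto3 sketch for checking where the time is actually going (the job and endpoint names are placeholders):</p>
<pre><code>import boto3

sm = boto3.client("sagemaker")

# Is the training job still running?
job = sm.describe_training_job(TrainingJobName="my-training-job")
print(job["TrainingJobStatus"], job.get("SecondaryStatus"))

# Once the model exists, endpoint creation itself is usually a matter of minutes;
# the status goes from Creating to InService (or Failed)
ep = sm.describe_endpoint(EndpointName="my-endpoint")
print(ep["EndpointStatus"])
</code></pre>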
2020-04-14 20:52:25.913000+00:00
2020-04-14 20:52:25.913000+00:00
null
null
61,165,665
<p>I'm new to Sagemaker but have been waiting a few hours for a Sagemaker training job to complete so that I can create the endpoint... The Sagemaker console shows a Create endpoint button, but when I press it, it doesn't work. The end point configuration still has a spinning icon for "Training job" </p> <p>How long does it typically take for a Sagemaker endpoint to spin up? </p>
2020-04-12 01:12:39.430000+00:00
2020-04-14 20:52:25.913000+00:00
null
amazon-sagemaker
['https://arxiv.org/pdf/1609.08144.pdf', 'https://docs.aws.amazon.com/general/latest/gr/sagemaker.html']
2
50,628,710
<p>Yes, both of the models can be converted to the tflite format. For a step-by-step procedure please go through this link: <a href="https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#0" rel="noreferrer">Convert to tflite</a>.</p> <p>The major difference between InceptionV3 and MobileNet is that MobileNet uses depthwise separable convolutions while InceptionV3 uses standard convolutions. This results in a smaller number of parameters in MobileNet compared to InceptionV3. However, it also results in a slight decrease in performance.</p> <p>In a standard convolution the filter operates on the <strong>M</strong> channels of the input image all together and outputs <strong>N</strong> feature maps, i.e. the multiplication between the input and the filter spans all the input channels at once. To make it clear, take the filter as a cube of size <strong>D<sub>k</sub> x D<sub>k</sub> x M</strong>; in a standard convolution each element of the cube is multiplied with the corresponding element of the input feature maps, the products are summed into one output value, and <strong>N</strong> such filters produce the <strong>N</strong> output feature maps. </p> <p>In a depthwise separable convolution, however, each of the <strong>M</strong> single-channel filters operates on its own input channel, and once the <strong>M</strong> filter outputs are obtained, <strong>N</strong> pointwise filters of size <strong>1 x 1 x M</strong> operate on them to give the <strong>N</strong> output feature maps. This can be understood from the figure below from the <a href="https://arxiv.org/pdf/1704.04861.pdf" rel="noreferrer">MobileNet paper</a>.</p> <p><a href="https://i.stack.imgur.com/snkUUm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/snkUUm.png" alt="Depthwise separable convolution"></a></p> <p>To make it clearer, please go through this <a href="https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d" rel="noreferrer">article on types of convolutions in deep learning</a>. It has a concrete example of how this reduces the parameter count, which I am simply pasting here.</p> <p><a href="https://i.stack.imgur.com/lJN2X.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lJN2X.png" alt="enter image description here"></a> <a href="https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d" rel="noreferrer">4</a></p>
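<p>If you want to see the parameter reduction yourself, here is a quick, illustrative Keras comparison of a standard convolution and a depthwise separable one with the same input and output shapes (the layer sizes are arbitrary examples):</p>
<pre><code>import tensorflow as tf

inputs = tf.keras.Input(shape=(224, 224, 32))   # M = 32 input channels

# Standard convolution: Dk*Dk*M*N weights (+ N biases)
std = tf.keras.Model(inputs, tf.keras.layers.Conv2D(64, 3, padding="same")(inputs))

# Depthwise separable convolution: Dk*Dk*M + M*N weights (+ N biases)
sep = tf.keras.Model(inputs, tf.keras.layers.SeparableConv2D(64, 3, padding="same")(inputs))

print(std.count_params())   # 18,496 for this configuration
print(sep.count_params())   # 2,400 for this configuration
</code></pre>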
2018-05-31 16:19:25.513000+00:00
2018-06-02 10:07:22.023000+00:00
2018-06-02 10:07:22.023000+00:00
null
50,624,496
<p>Recently I have been working with TensorFlow Inception V3 and MobileNet to deploy them for use in Android. While converting a retrained Inception V3 model to "tflite" there were some issues, as the "tflite" model was empty. But when I tried with a retrained MobileNet model, it was successfully converted into "tflite". So basically I have two questions:</p> <ol> <li>Is it possible to convert a retrained Inception V3 model to "tflite"?</li> <li>What is the difference between Inception V3 and MobileNet?</li> </ol> <p>PS. I have gone through the official documentation link, which only hinted at MobileNet only being </p> <p><a href="https://www.tensorflow.org/tutorials/image_retraining#other_model_architectures" rel="noreferrer">https://www.tensorflow.org/tutorials/image_retraining#other_model_architectures</a></p>
2018-05-31 12:36:19.980000+00:00
2019-04-16 08:30:06.350000+00:00
null
tensorflow|machine-learning
['https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#0', 'https://arxiv.org/pdf/1704.04861.pdf', 'https://i.stack.imgur.com/snkUUm.png', 'https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d', 'https://i.stack.imgur.com/lJN2X.png', 'https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d']
6
21,794,786
<p>To make the intended meaning more readable, you could add an extra predicate documenting the parameters as readable name/value pairs:</p> <pre><code>entry_ancestor_of(ancestor=P, descendent=C) :- ancestor(P,C). ?- entry_ancestor_of(ancestor=richard, descendent=C). C = adam . </code></pre> <p>Above, the suffix *ancestor_of* suggests param 1 is ancestor of param 2, so naming the predicate carefully can make it clearer.</p> <p>Usually (by convention), input parameters are the earlier parameters, and output parameters are later parameters, but where the predicate 'works both ways', i.e. either could be input or output, this rule can't hold. This is the case for your predicate:</p> <pre><code>?- entry_ancestor_of(ancestor=X, descendent=adam). X = richard . </code></pre> <p>Either parameter could be input or output, so there is no need to codify/explain them as such, although you might want to comment that it works both ways. </p> <p>I would usually comment these 'flexible' predicates by putting an example of both of the above usages in a comment next to the predicate.</p> <p>For entrypoint labelling, just do one or more of the following:</p> <ul> <li>explicitly name the predicate as an entrypoint, as above</li> <li>document, using comments in the code, which predicates are the entrypoints</li> <li>arrange the entrypoints in the same physical section with a comment block saying that the predicates below are entrypoints.</li> </ul> <p><strong>Edit: Extra things re: coding guidelines / other answers.</strong></p> <ol> <li>In <a href="http://arxiv.org/abs/0911.2899" rel="nofollow">Coding guidelines for Prolog</a>, section 3.8, it says 'For example, mother_of(A, B) is ambiguous;', so I gave bad advice on that; perhaps acapelli's suggestion would be more useful there.</li> </ol> <p>In that document, also have a look at:</p> <ul> <li>3.5 Choose sensible names for auxiliary predicates</li> <li>3.8 Choose predicate names to help show the argument order</li> <li>3.13 Decide whether predicate names should carry the types on which they operate</li> <li>4.1 Begin every predicate (except perhaps auxiliary predicates) with an introductory comment in a well-defined format</li> </ul> <p>The '?' system for identifying parameter types that Will Ness mentioned is on page 21.</p>
2014-02-15 07:38:33.490000+00:00
2014-02-15 20:17:07.687000+00:00
2014-02-15 20:17:07.687000+00:00
null
21,793,572
<pre><code>parent(mel, joan). parent(jane, betty). parent(jane, tom). parent(richard, adam). parent(richard, rosa). parent(joan, fran). </code></pre> <p>For example someone asks me to find all ancestors of a parent. I give him the code:</p> <pre><code>ancestor(P,C) :- parent(P, C). ancestor(P,C) :- ancestor(P,P1), parent(P1, C). </code></pre> <p>But my friend still doesn't know how to use the predicate. Does he call it like</p> <p><code>ancestor(richard, C)</code> or <code>ancestor(C, richard)</code> ?</p> <p>Is there a way to annotate that <code>P</code> is the parameter while <code>C</code> is the return value? And in a complex case, there will be predicates with different names, how should my user know which predicate is the final predicate he wants to use?</p>
2014-02-15 04:54:04.933000+00:00
2014-02-17 17:05:41.507000+00:00
2014-02-17 17:05:41.507000+00:00
prolog
['http://arxiv.org/abs/0911.2899']
1
11,313,659
<p>There is another option. And that is, you write your own GPU-based PNG decoder. You could use OpenCL to perform this operation fairly efficiently (and perform your composition using OpenGL which can share resources with OpenCL). It is also possible to interleave transfer and decoding for maximum throughput. If this is a route you can/want to pursue I can provide more information.</p> <p>Here are some resources related to GPU-based DEFLATE (and INFLATE).</p> <ol> <li><a href="http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;ved=0CE4QFjAA&amp;url=http://arxiv.org/pdf/1107.1525&amp;ei=qhnzT7PEFIr10gGcq9XoCQ&amp;usg=AFQjCNH_BlNNNTk8KTMw7TMfANof64ApBg" rel="nofollow">Accelerating Lossless compression with GPUs</a></li> <li><a href="http://code.google.com/p/gpu-block-compression/" rel="nofollow">gpu-block-compression</a> using CUDA on Google code.</li> <li><a href="http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=5&amp;ved=0CGQQFjAE&amp;url=http://www.ece.neu.edu/groups/nucar/GPGPU4/files/oneil.pdf&amp;ei=1BrzT6ukLsrh0QH2_OSLCg&amp;usg=AFQjCNHTVkX4QukaurUugSDpkFVPRopCWw" rel="nofollow">Floating point data-compression at 75 Gb/s on a GPU</a> - note that this doesn't use INFLATE/DEFLATE but a novel parallel compression/decompression scheme that is more GPU-friendly.</li> </ol> <p>Hope this helps!</p>
2012-07-03 14:57:41.750000+00:00
2012-07-03 16:19:00.993000+00:00
2012-07-03 16:19:00.993000+00:00
null
11,313,251
<p>Our web server needs to process many compositions of large images together before sending the results to web clients. This process is performance critical because the server can receive several thousands of requests per hour.</p> <p>Right now our solution loads PNG files (around 1MB each) from the HD and sends them to the video card so the composition is done on the GPU. We first tried loading our images using the PNG decoder exposed by the XNA API. We saw the performance was not too good.</p> <p>To understand if the problem was loading from the HD or the decoding of the PNG, we modified that by loading the file in a memory stream, and then sending that memory stream to the .NET PNG decoder. The difference of performance using XNA or using System.Windows.Media.Imaging.PngBitmapDecoder class is not significant. We roughly get the same levels of performance.</p> <p>Our benchmarks show the following performance results:</p> <ul> <li>Load images from disk: 37.76ms 1% </li> <li>Decode PNGs: 2816.97ms 77% </li> <li>Load images on Video Hardware: 196.67ms 5% </li> <li>Composition: 87.80ms 2% </li> <li>Get composition result from Video Hardware: 166.21ms 5% </li> <li>Encode to PNG: 318.13ms 9% </li> <li>Store to disk: 3.96ms 0% </li> <li>Clean up: 53.00ms 1% </li> </ul> <p>Total: 3680.50ms 100% </p> <p>From these results we see that the slowest parts are when decoding the PNG.</p> <p>So we are wondering if there wouldn't be a PNG decoder we could use that would allow us to reduce the PNG decoding time. We also considered keeping the images uncompressed on the hard disk, but then each image would be 10MB in size instead of 1MB and since there are several tens of thousands of these images stored on the hard disk, it is not possible to store them all without compression.</p> <p>EDIT: More useful information:</p> <ul> <li>The benchmark simulates loading 20 PNG images and compositing them together. This will roughly correspond to the kind of requests we will get in the production environment. </li> <li>Each image used in the composition is 1600x1600 in size.</li> <li>The solution will involve as many as 10 load balanced servers like the one we are discussing here. So extra software development effort could be worth the savings on the hardware costs.</li> <li>Caching the decoded source images is something we are considering, but each composition will most likely be done with completely different source images, so cache misses will be high and performance gain, low.</li> <li>The benchmarks were done with a crappy video card, so we can expect the PNG decoding to be even more of a performance bottleneck using a decent video card.</li> </ul>
2012-07-03 14:35:54.503000+00:00
2013-02-28 22:50:01.467000+00:00
2012-07-03 15:56:00.677000+00:00
c#|performance|png|decode|decoding
['http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CE4QFjAA&url=http://arxiv.org/pdf/1107.1525&ei=qhnzT7PEFIr10gGcq9XoCQ&usg=AFQjCNH_BlNNNTk8KTMw7TMfANof64ApBg', 'http://code.google.com/p/gpu-block-compression/', 'http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CGQQFjAE&url=http://www.ece.neu.edu/groups/nucar/GPGPU4/files/oneil.pdf&ei=1BrzT6ukLsrh0QH2_OSLCg&usg=AFQjCNHTVkX4QukaurUugSDpkFVPRopCWw']
3
50,460,927
<p>I don't really have an answer for you, but I think these pointers should help you find an answer:</p> <ol> <li><p>You claim to have memory problems. Are you sure your affinity matrix is sparse? It seems like only the diagonal degree matrix is sparse in your code. Usually when running spectral clustering on pixels/voxels one designs the affinity matrix to be very sparse (8-connected or 26-connected).</p></li> <li><p>You describe your clusters as "they are small". Spectral clustering has <a href="http://wpre.weizmann.ac.il/math/Nadler/sites/math.Nadler/files/publications/fundamental_limitations_of_spectral_clustering.pdf" rel="nofollow noreferrer">known issues</a> with clusters at very different scales. Are you sure you are getting satisfactory results?</p></li> <li><p>How do you compute the affinity (similarity) between neighboring voxels? Can you measure <em>dissimilarity</em> as well? That is, can you say for some voxels that they should <em>not</em> belong to the same cluster? If so, have you considered using <a href="https://stackoverflow.com/a/19510366/1714410">correlation clustering</a>? This method is more robust to different cluster scales and can <em>automatically</em> detect the number of clusters.</p></li> <li><p>Have you considered using multiscale/<a href="http://arxiv.org/abs/1204.4867" rel="nofollow noreferrer">multigrid</a> methods to coarsen your data instead of brutally slicing it into "slabs"? </p></li> <li><p>Have you looked at <a href="https://arxiv.org/abs/1801.01587" rel="nofollow noreferrer">spectralNet</a>? If I am not mistaken, this method should enable you to "learn" the spectral clustering on part of the points and then use the net to "extrapolate" the clustering to new points.</p></li> </ol> <hr> <p><strong>Update:</strong><br> In light of <a href="https://stackoverflow.com/questions/50446375/how-to-combine-split-runs-of-spectral-clustering-for-a-huge-affinity-matrix/50460927?noredirect=1#comment87999026_50460927">Leo's comment</a>, I would say that when it comes to spectral clustering of very large data, brutally slicing the data into "slabs" and then trying to "stitch" the results together might not be the best course of action (not that I think it is not possible). A better way to approach the problem is by significantly sparsifying the affinity matrix: compute pair-wise affinities for each point only to its neighbors, resulting in an affinity matrix that is mostly sparse. This way one can process all the points at once without the need to "slice" and "stitch". 
</p> <p>As for the difference between spectral clustering and correlation clustering:<br> <em>Why is spectral clustering able to cluster <strong>all</strong> points even when the input affinity matrix is so sparse?</em> How can it tell that point <code>a</code> and a far away point <code>c</code> should belong in the same cluster even when no affinity was computed between them?<br> The simple answer is <a href="https://en.wikipedia.org/wiki/Transitive_relation" rel="nofollow noreferrer">transitivity</a> of affinities: if <code>a</code> is similar to <code>b</code> and <code>b</code> is similar to <code>c</code> then <code>a</code> and <code>c</code> should be clustered together.<br> <em>Where's the catch?</em> In spectral clustering all entries in the affinity matrix are non-negative, which means that unless there is absolutely no path connecting <code>a</code> and <code>c</code> (slim chance) there is some "transitive affinity" suggesting <code>a</code> and <code>c</code> should belong to the same cluster. Therefore, if you look at the math of spectral clustering you'll notice that the "trivial solution", i.e., placing all points in the same cluster, provides a global optimum to the problem. One must artificially force the solution to have <code>k</code> clusters to avoid the trivial solution.<br> <em>What can be done?</em> If you only consider positive affinities the value 0 is ambiguous: it can mean "I didn't bother to compute the affinities between these points", but it can also mean "I think these two points should <em>not</em> be in the same cluster". To overcome this ambiguity we can introduce <strong>negative affinities</strong>: this way <code>A(i, j) &gt; 0</code> means point <code>i</code> and point <code>j</code> should be in the same cluster with certainty <code>A(i, j)</code>, while <code>A(i, j) &lt; 0</code> means <code>i</code> and <code>j</code> should <strong>not</strong> be in the same cluster (with certainty <code>|A(i, j)|</code>). Introducing negative affinities breaks the "transitivity chains" that may link far away points, so it is no longer trivial to place all points in the same cluster.<br> <em>How to take advantage of negative affinities?</em> When your affinity matrix has both positive (attraction) and negative (repulsion) values, you can cluster the points using correlation clustering which basically tries to maximize the affinities/attraction between points within each cluster and simultaneously maximize the repulsion between points in different clusters. A nice property of correlation clustering is that it "automatically" discovers the underlying number of clusters, see sec. 2 of <a href="http://www.wisdom.weizmann.ac.il/~bagon/pub/LargeScaleCorrClust_2011.pdf" rel="nofollow noreferrer">this paper</a>. </p>
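<p>To give a small illustration of the "sparse affinity" idea in Python/scikit-learn (the feature matrix and all parameter values below are placeholders - in your case each row would be the per-voxel real/imaginary time series you already build):</p>
<pre><code>import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

# X: one row per voxel, e.g. the concatenated real/imaginary time series
X = np.random.rand(10000, 30)            # placeholder data

# Compute affinities only between each voxel and its k nearest neighbours,
# which keeps the affinity matrix sparse
knn = kneighbors_graph(X, n_neighbors=10, mode="distance", include_self=False)
sigma = np.median(knn.data)
knn.data = np.exp(-knn.data**2 / (2 * sigma**2))
affinity = 0.5 * (knn + knn.T)           # symmetrise

labels = SpectralClustering(n_clusters=5, affinity="precomputed",
                            assign_labels="kmeans").fit_predict(affinity)
</code></pre>
<p>The same idea (pair-wise affinities only for spatial/feature neighbours) is what makes it feasible to process all the voxels at once instead of slicing them into slabs.</p>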
2018-05-22 06:13:04.940000+00:00
2018-05-24 06:34:27.723000+00:00
2018-05-24 06:34:27.723000+00:00
null
50,446,375
<p><strong><em>Leading up to the question</em></strong></p> <p>I have a 2D complex valued image with a short series of values. I want to cluster similar pixels / segment the image. There is one more or less static image with a superimposed image that has some blobs in it that have a changing value (mostly the angle of the complex number) over the short series. They are also slightly discernable in the norm of the image.</p> <p>My first attempt was k-means, but that really clustered according to the means (there is a distinction in mean values, especially compared to surrounding pixels, but the temporal and angular information is greater). My second attempt was ICA and then look at the k components with the largest magnitude, and that did successfully identify certain regions in my image as being different, but did not identify the group of pixels I was interested in (visually it is not hard to recognize them, but they are small).</p> <p><strong><em>Current situation</em></strong></p> <p>So because my first two tries did not work out, I looked around with google and it seemed <a href="http://ai.stanford.edu/~ang/papers/nips01-spectral.pdf" rel="nofollow noreferrer">spectral clustering</a> might be appropriate. But I have some serious issues when using the method, mostly to do with limited available memory. I then thought, since I have so many pixels, I can just apply spectral clustering to seperate slabs of the data.</p> <p>Someone <a href="https://www.researchgate.net/post/How_to_perform_spectral_clustering_for_large_size_imagesDoes_parallel_computing_can_solve_the_issue_of_scalability" rel="nofollow noreferrer">here</a> suggests clustering slabs first and then combine them, he then says 'at the end you will have the problem of recombining them and this problem can be solved easily'. The bits designated as 'easy' in explanations are hardly ever easy of course. He links to <a href="http://file:///C:/Users/Lennart/Downloads/SpectralClusteringforaLargeDataSetbyReducingtheSimilarityMatrixSize%20(1).pdf" rel="nofollow noreferrer">this</a> paper, but that method does not process all the data in slabs. It rather excludes vectors that are not close to a principal component.</p> <p><strong><em>Question</em></strong></p> <p>My question has 2 parts:</p> <p><strong>1</strong>. How do I combine the results for the seperate segments? The eigenvectors are different and the cluster numbers are different. The result looks like it worked in the seperate slabs.</p> <p><strong>2</strong>. No distance / affinity between pixels in seperate slabs is taken into account. Can I make 'slabs between slabs'? For those slabs L and A are not symmetric, no clue how to perform the method then. Perhaps I can somehow compare / merge all eigenvectors at the end?</p> <p>(3. Is there a similar or better method that does not need so much memory. 
Computation time is also borderline acceptable, easily exploding)</p> <p><strong><em>Matlab code example</em></strong></p> <pre><code>%% generate data % get some outer region without data tempdisk = strel('disk',922/2); tempdisk = double(repmat((1+sqrt(-1)).*tempdisk.Neighborhood,[1 1 15])); % make some noise tempnoise = (rand(921,921,15)+sqrt(-1).*rand(921,921,15))./10; % 'background signal' tempim1 = double(imresize(mean(imread('cameraman.tif'),3),[921,921])); tempim1 = repmat(tempim1./max(tempim1(:)),[1 1 15]); % 'target signal' tempim2 = double(rgb2hsv(imread('fabric.png'))); tempim2 = imresize(tempim2(:,:,2),[921,921]); tempim2 = repmat(tempim2./max(tempim2(:)),[1 1 15]); sin1 = repmat(permute(sin(2.*pi.*(0:14)./15),[1 3 2]),[921 921 1]); % combine into data complexdata = (sin1.*(1.0.*tempim1+0.5.*tempim2.^2).*exp(-sqrt(-1).*2.*pi.*sin1.*(tempim2.^2)).*tempdisk+tempnoise)./1.5; %this is what the mean data looks like meannorm = mean(abs(complexdata),3); meanangle = mean(angle(complexdata),3); figure; subplot(1,2,1); imshow(meannorm,[]); title('mean norm'); subplot(1,2,2); imshow(meanangle,[]); title('mean angle') </code></pre> <p>This is what the generated data looks like:</p> <p><a href="https://i.stack.imgur.com/CFFXg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CFFXg.jpg" alt="mean data"></a></p> <p>The bright blobs in the right image are what Im after. They have the strongest variation over time as well (and are correlated in time).</p> <p>Then to set up the clustering:</p> <pre><code>%% perform spectral clustering in seperate slabs % method from http://ai.stanford.edu/~ang/papers/nips01-spectral.pdf %get all pixel vectors in a single matrix complexrows = reshape(permute(complexdata, [3,1,2]), [15, 921*921]); %k means and eigs dont accept complex, so convert to real here? complexrowsTranspose = [real(complexrows);imag(complexrows)]'; %lets say 10000 by 10000 matrices are still ok npix = 10000; nslabpix = floor(length(complexrowsTranspose)/npix); nrestpix = rem(length(complexrowsTranspose), npix); </code></pre> <p>Perform spectral clustering in slabs that fit into memory:</p> <pre><code>% spectral clustering keig = 50;%how many eigenvectors needed? 
more is better affinity_sigma = 1;% i dont understand how to calculate this from the paper tic % start with last slab (dynamically preallocate) for islabpix = (nslabpix+1):-1:1; %print progress islabpix/nslabpix toc if islabpix&gt;nslabpix pixrange = (1:nrestpix) + ((islabpix-1)*npix);; else pixrange = (1:npix) + ((islabpix-1)*npix); end %calculate affinity between all voxels in slab Aff = exp( -squareform(pdist(complexrowsTranspose(pixrange,:))).^2/(2*affinity_sigma^2) ); % affinity matrix %calculate degree matrix for normalization Dsq = sparse(size(Aff,1),size(Aff,2)); %degree matrix for idiag=1:size(Aff,1) Dsq(idiag,idiag) = sum(Aff(idiag,:))^(1/2); end %normalize affinity matrix Lap = Dsq * Aff * Dsq; %normalize affinity matrix %calculate eigenvectors of affinity matrix [eigVectors(pixrange,1:keig), eigValues] = eigs(Lap, keig); %eigenvectors of normalized aff mat normEigVectors(pixrange,1:keig) = eigVectors(pixrange,1:keig)./repmat(sqrt(sum(abs(eigVectors(pixrange,1:keig)).^2,2)), [1 keig]); %normalize rows of eigen vectors, normr only works on real numbers % perform k means clustering on weights for eigenvectors [idx,C,sumd,D] = kmeans([real(normEigVectors(pixrange,1:keig)),imag(normEigVectors(pixrange,1:keig))], 5); %k means on normalized eigenvecotrs idxval(pixrange) = idx; end %reshape into image idxim = reshape(idxval, [921, 921]); figure; imshow(idxim,[]) toc </code></pre> <p>The resulting clustering:</p> <p><a href="https://i.stack.imgur.com/hLJ4m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLJ4m.png" alt="k means clusters on spectral slabs"></a></p> <p>The result looks like the method is working to some degree within each slab; the goal was to cluster all blobs with slightly higher norm and stronger angle variation (high saturation blobs from <code>tempim2</code>), which seem recognizable in the result. Now its mostly the seperate slabs that are the issue and that there are no cross-slab clusters. This took my computer about 15 minutes. I reduced the number of eigenvalues and the image size for this example so it runs in an acceptable amount of time. I think that illustrates part of my problem.</p>
2018-05-21 10:01:34.633000+00:00
2018-05-25 08:50:58.820000+00:00
2018-05-25 08:50:58.820000+00:00
machine-learning|computer-vision|cluster-analysis|linear-algebra|image-segmentation
['http://wpre.weizmann.ac.il/math/Nadler/sites/math.Nadler/files/publications/fundamental_limitations_of_spectral_clustering.pdf', 'https://stackoverflow.com/a/19510366/1714410', 'http://arxiv.org/abs/1204.4867', 'https://arxiv.org/abs/1801.01587', 'https://stackoverflow.com/questions/50446375/how-to-combine-split-runs-of-spectral-clustering-for-a-huge-affinity-matrix/50460927?noredirect=1#comment87999026_50460927', 'https://en.wikipedia.org/wiki/Transitive_relation', 'http://www.wisdom.weizmann.ac.il/~bagon/pub/LargeScaleCorrClust_2011.pdf']
7
39,834,015
<p>In old-fashioned Prolog code, the following pattern arises rather frequently:</p> <pre> predicate([], ...). predicate([L|Ls], ...) :- <b>condition(L)</b>, then(Ls, ...). predicate([L|Ls], ...) :- <b>\+ condition(L)</b>, else(Ls, ...). </pre> <p>I am using lists here as an example where this occurs (see for example <code>include/3</code>, <code>exclude/3</code> etc.), although the pattern of course also occurs elsewhere.</p> <p>The tragic is the following:</p> <ul> <li>For an <strong>instantiated</strong> list, pattern matching can distinguish the first clause from the remaining&nbsp;two, but it cannot distinguish the second one from the last&nbsp;one because they <em>both</em> have <code>'.'(_, _)</code> as the primary functor and arity of their first argument.</li> <li>The conditions in which the last two clauses apply are obviously <strong>mutually exclusive</strong>.</li> <li>Thus, when everything is known, we want to obtain an efficient, <strong>deterministic</strong> predicate that <em>does not leave choice points</em>, and ideally does not even <em>create</em> choice points.</li> <li><em>However</em>, as long as not everything can be safely determined, we want to benefit from <strong>backtracking</strong> to see <strong>all solutions</strong>, so we cannot afford to commit to either of the clauses.</li> </ul> <p>In summary, the existing constructs and language features all fall short in some way to express a pattern that often occurs in practice. Therefore, for decades, it seemed necessary to&nbsp;<em>compromise</em>. And you can make a pretty good guess in which direction the "compromises" usually go in the Prolog community: Almost invariably, <em>correctness is sacrificed for efficiency</em> in case of doubt. After all, who cares about correct results as long as your programs are fast, right? Therefore, until the invention of <code>if_/3</code>, this was frequently <em>wrongly</em> written as:</p> <pre> predicate([], ...). predicate([L|Ls], ...) :- ( <b>condition(L)</b> -> then(Ls, ...). ; else(Ls, ...). ) </pre> <p>The <strong>mistake</strong> in this is of course that when the elements are <em>not</em> sufficiently instantiated, then this may <em>incorrectly</em> commit to one branch even though <em>both</em> alternatives are logically possible. For this reason, using if-then-else is almost always declaratively wrong, and stands massively in the way of declarative debugging approaches due to its violation of the most elementary properties we expect from pure Prolog programs.</p> <hr> <p>Using <code>if_/3</code>, you can write this as:</p> <pre> predicate([], ...). predicate([L|Ls], ...) :- if_(<b>condition(L)</b>, then(Ls, ...), else(Ls, ...)). </pre> <p>and <strong>retain all desirable aspects</strong>. This is:</p> <ul> <li><strong>deterministic</strong> if everything can be <em>safely</em> decided</li> <li><strong>efficient</strong> in that it does not even create choice points</li> <li><strong>complete</strong> in that you never <em>incorrectly</em> commit to one particular branch.</li> </ul> <p>The <strong>price</strong> of this is rather affordable: As Boris mentioned in the comments, you need to implement a <strong>reification</strong>. 
I have now some experience with this and found it rather easy with some practice.</p> <p><strong>Good news everyone</strong>: In many cases, <code>condition</code> is of the form <code>(=)/2</code>, or <code>(#=)/2</code>, and the first even ships with <a href="http://www.complang.tuwien.ac.at/ulrich/Prolog-inedit/sicstus/reif.pl" rel="noreferrer"><strong><code>library(reif)</code></strong></a> <strong>for&nbsp;free</strong>.</p> <p>For more information, see <a href="https://arxiv.org/abs/1607.01590" rel="noreferrer"><strong>Indexing dif/2</strong></a> by Ulrich Neumerkel and Stefan&nbsp;Kral!</p>
2016-10-03 14:27:33.197000+00:00
2016-10-15 15:22:24.430000+00:00
2016-10-15 15:22:24.430000+00:00
null
39,833,370
<p>The predicate <strong><a href="https://stackoverflow.com/questions/27358456/prolog-union-for-a-u-b-u-c/27358600#27358600"><code>if_/3</code></a></strong> seems to be <a href="https://stackoverflow.com/search?q=%5Bprolog%5D+if_">fairly popular</a> among the few main contributors in the Prolog part of Stack Overflow.</p> <p>This predicate is implemented as such, courtesy of @false:</p> <pre><code>if_(If_1, Then_0, Else_0) :- call(If_1, T), ( T == true -&gt; call(Then_0) ; T == false -&gt; call(Else_0) ; nonvar(T) -&gt; throw(error(type_error(boolean,T),_)) ; /* var(T) */ throw(error(instantiation_error,_)) ). </code></pre> <p>However, I have been unable to find a <strong>clear, simple, and concise</strong> explanation of what this predicate does, and what use it has compared to e.g. the classical if-then-else construct of Prolog <code>if -&gt; then ; else</code>. </p> <p>Most links I have found directly use this predicate and provide little explanation as to why it gets used, that a non-expert in Prolog could understand easily.</p>
2016-10-03 13:54:45.450000+00:00
2017-11-09 00:56:55.663000+00:00
2017-05-23 11:46:59.367000+00:00
if-statement|prolog|logical-purity
['http://www.complang.tuwien.ac.at/ulrich/Prolog-inedit/sicstus/reif.pl', 'https://arxiv.org/abs/1607.01590']
2
71,821,233
<p>As of April 2022, you can! Not with ggadjustedcurves, but there is a new package called adjustedCurves that offers a variety of calculations:</p> <p><a href="https://github.com/RobinDenz1/adjustedCurves" rel="nofollow noreferrer">https://github.com/RobinDenz1/adjustedCurves</a> <a href="https://arxiv.org/abs/2203.10002" rel="nofollow noreferrer">https://arxiv.org/abs/2203.10002</a></p> <p>It worked well in my testing.</p>
2022-04-10 23:12:25.320000+00:00
2022-04-10 23:12:25.320000+00:00
null
null
55,404,550
<p>How can I compute an index of variability (SE or CI) for the <code>ggadjustedcurves</code> function from survminer? I am using the conditional method. Does anyone have any input or resources?</p>
2019-03-28 18:26:52.910000+00:00
2022-04-10 23:12:25.320000+00:00
null
r|cox-regression|survival
['https://github.com/RobinDenz1/adjustedCurves', 'https://arxiv.org/abs/2203.10002']
2
53,891,758
<p>Not unless you do some "smoothing" like in <a href="https://arxiv.org/abs/1603.08983" rel="nofollow noreferrer">Adaptive Computation Time</a>. FYI, ACT can be tricky to use and train. I have witnessed several projects trying it out. It did not provide great benefit compared to a tuned number of steps in their setting. One important thing about ACT (and likely other similar approaches) is that it averages RNN states at different time steps, which essentially means that it assumes that the network learns a "linear representation".</p>
2018-12-21 23:14:25.877000+00:00
2018-12-21 23:14:25.877000+00:00
null
null
47,807,071
<p>Recently I have been trying to implement a deep reinforcement learning project which requires a variable number of timesteps. I want to train a network to output a parameter T, and use T as the length (number of timesteps) of a policy gradient method or DQN method. I wonder if that's implementable? I mean, when we do backpropagation, can we backpropagate through the number of timesteps T?</p>
2017-12-14 06:23:25.653000+00:00
2018-12-21 23:14:25.877000+00:00
null
tensorflow
['https://arxiv.org/abs/1603.08983']
1
53,805,377
<p>Regardless of the memory order setting, you are requiring an atomic operation in both loops. It turns out that, with x86 processors, which are inherently strongly ordered in most situations, this results in using the same asm codes for each fetch_add: <code>lock xadd</code>. This atomic operation on x86 processors is always sequentially consistent, so there are no optimization opportunities here when specifying relaxed memory order.</p> <p>Using relaxed memory order allows further optimizations of surrounding operations, but your code doesn't provide any further optimization opportunities, so the emitted code is the same. Note that the results may have been different with a weakly-ordered processor (e.g., ARM) or with more data manipulation within the loop (which could offer more reordering opportunities).</p> <p>From <a href="https://en.cppreference.com/w/cpp/atomic/memory_order" rel="noreferrer">cppreference</a> (my italics):</p> <blockquote> <p>std::memory_order specifies how regular, non-atomic memory accesses are to be ordered <em>around</em> an atomic operation. </p> </blockquote> <p>The paper <a href="https://arxiv.org/pdf/1803.04432.pdf" rel="noreferrer">Memory Models for C/C++ Programmers</a> provides <em>much</em> greater detail on this.</p> <p>As a side note, repeatedly running atomic benchmarks or running them on different x86 processors (even by the same manufacturer) may result in dramatically different results, as the threads might not be distributed across all the cores equally, and cache latencies are affected by whether it is a local core, another core on the same chip, or on another chip. It's also affected by how the particular processor handles potential consistency conflicts. Furthermore, level 1, 2 and 3 caches behave differently, as does ram, so total size of the data set also has significant effects. See <a href="https://spcl.inf.ethz.ch/Publications/.pdf/atomic-bench.pdf" rel="noreferrer">Evaluating the Cost of Atomic Operations on Modern Architectures</a>.</p>
2018-12-16 18:46:38.640000+00:00
2018-12-16 21:43:46.750000+00:00
2018-12-16 21:43:46.750000+00:00
null
53,805,142
<p>I've created a simple test to check how <code>std::memory_order_relaxed</code> is faster than <code>std::memory_order_seq_cst</code> value for <code>atomic&lt;int&gt;</code> increment. However the performance was the same for both cases.<br> My compiler: gcc version 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04)<br> Build arguments: g++ -m64 -O3 main.cpp -std=c++17 -lpthread<br> CPU: Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz, 4 core, 2 thread per core<br> Test code:</p> <pre><code>#include &lt;vector&gt; #include &lt;iostream&gt; #include &lt;thread&gt; #include &lt;atomic&gt; #include &lt;chrono&gt; #include &lt;functional&gt; std::atomic&lt;int&gt; cnt = {0}; void run_test_order_relaxed() { std::vector&lt;std::thread&gt; v; for (int n = 0; n &lt; 4; ++n) { v.emplace_back([]() { for (int n = 0; n &lt; 30000000; ++n) { cnt.fetch_add(1, std::memory_order_relaxed); } }); } std::cout &lt;&lt; "rel: " &lt;&lt; cnt.load(std::memory_order_relaxed); for (auto&amp; t : v) t.join(); } void run_test_order_cst() { std::vector&lt;std::thread&gt; v; for (int n = 0; n &lt; 4; ++n) { v.emplace_back([]() { for (int n = 0; n &lt; 30000000; ++n) { cnt.fetch_add(1, std::memory_order_seq_cst); } }); } std::cout &lt;&lt; "cst: " &lt;&lt; cnt.load(std::memory_order_seq_cst); for (auto&amp; t : v) t.join(); } void measure_duration(const std::function&lt;void()&gt;&amp; func) { using namespace std::chrono; high_resolution_clock::time_point t1 = high_resolution_clock::now(); func(); high_resolution_clock::time_point t2 = high_resolution_clock::now(); auto duration = duration_cast&lt;milliseconds&gt;( t2 - t1 ).count(); std::cout &lt;&lt; " duration: " &lt;&lt; duration &lt;&lt; "ms" &lt;&lt; std::endl; } int main() { measure_duration(&amp;run_test_order_relaxed); measure_duration(&amp;run_test_order_cst); return 0; } </code></pre> <p>Why does <code>std::memory_order_relaxed</code> and <code>std::memory_order_seq_cst</code> always produce almost the same results?<br> Result:<br> rel: 2411 duration: 4440ms<br> cst: 120000164 duration: 4443ms</p>
2018-12-16 18:15:14.280000+00:00
2018-12-16 21:43:46.750000+00:00
2018-12-16 18:28:46.050000+00:00
c++|c++11
['https://en.cppreference.com/w/cpp/atomic/memory_order', 'https://arxiv.org/pdf/1803.04432.pdf', 'https://spcl.inf.ethz.ch/Publications/.pdf/atomic-bench.pdf']
3
40,816,499
<h2>Solutions:</h2> <h3>Make sure to preprocess your input.</h3> <p>If your input pixels are in range [0, 255], it's better to rescale them into [0.0, 1.0]. This suffices in most situations.</p> <p>A more advanced way would be using <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">batch normalization</a>.</p> <h3>Make sure you initialize weight matrices in a normalized fashion.</h3> <p>By normalized, I mean each of the 784-dimensional column vectors of the weight matrix should have a fixed L2-norm. For a simple setup, you could just normalize them to 1.</p> <p>Weight matrix initialization is a research topic; for example, using <a href="http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf" rel="nofollow noreferrer">Glorot initialization</a> tends to show better results for deep networks.</p>
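<p>A hedged NumPy sketch of both points for the 784 x 10 weight matrix in the question (the variable names are just illustrative):</p>
<pre><code>import numpy as np

# X: your (n_samples, 784) pixel matrix with values in [0, 255]
# 1) Preprocess: bring pixels into [0.0, 1.0]
X = X.astype(np.float64) / 255.0

# 2) Glorot/Xavier-style initialization for a 784-by-10 layer:
#    small zero-mean weights scaled by the fan-in and fan-out
fan_in, fan_out = 784, 10
limit = np.sqrt(6.0 / (fan_in + fan_out))          # uniform variant
syn0 = np.random.uniform(-limit, limit, size=(fan_in, fan_out))

# Or, following the simpler suggestion above, normalize each
# 784-dimensional column of the weight matrix to unit L2 norm
syn0 = np.random.randn(fan_in, fan_out)
syn0 /= np.linalg.norm(syn0, axis=0, keepdims=True)
</code></pre>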
2016-11-26 08:36:06.090000+00:00
2016-11-26 08:36:06.090000+00:00
null
null
40,816,269
<p>According to <a href="https://iamtrask.github.io/2015/07/12/basic-python-network/" rel="nofollow noreferrer">this tutorial</a>, with <code>Python</code> and <code>Numpy</code>, I want to train <code>MNIST</code> dataset to a neural network that can recognize handwritten digits. I understand the logic but i have a problem right now. </p> <p>In this tutorial test case was <code>AND</code> logical gate and because of small amount of data, it works fine. But I'm using <code>MNIST</code> database and each image has <code>28*28</code> dimension,And when I convert each of them to a vector, I have <code>N*784</code> matrix. And if I have a <code>784*1</code> matrix as <code>weight matrix</code>, When I multiply it with the input matrix, The resulting numbers is going to be very small or very large numbers(Negative or Positive) and because i use <code>Sigmoid</code> activation function, all of my data divided in to two section, 1 and 0 at first learning cycle, But i need small numbers that converging slowly.</p> <p>for example i get these numbers after multiplication: <code>-569.87541502</code> , <code>218.62477264</code> and the first one is 0 in <code>Sigmoid</code> activation function and the second is 1, And there is no room for training and converging. All of this because of large amount of data that summation of them resulting to this large or very small numbers.</p> <p>I use this trick for generating very small weights than the original tutorial but i get the same results(I was thinking because these are small numbers, summation of them can't be very large but i get the same results):</p> <pre><code>syn0 = (2*np.random.random((784,10))-1)*(0.00000005-0.00000001) + 0.00000001 </code></pre> <p>I don't know how i can overcome to this.</p>
2016-11-26 08:02:49.797000+00:00
2016-11-26 08:36:06.090000+00:00
null
python|matrix|scipy|neural-network|mnist
['https://arxiv.org/abs/1502.03167', 'http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf']
2
21,419,079
<p>The problem can be expressed as the solution of a cubic equation which gives 1, 2, or 3 real roots. For the derivation and closed form solution see Appendix B of <a href="http://arxiv.org/abs/1102.1215" rel="nofollow">Geodesics on an ellipsoid of revolution</a>. The boundary between 1 and 3 solutions is an astroid.</p>
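<p>For orientation, here is the standard textbook setup of the condition (not necessarily the exact reduction used in the paper's Appendix B): writing the ellipse as <em>(a cos θ, b sin θ)</em>, the feet of the normals from the external point <em>(p, q)</em> are the parameter values θ satisfying</p>
<pre><code>% condition that the normal at parameter theta passes through (p, q)
(a^2 - b^2)\cos\theta\,\sin\theta \;-\; a\,p\,\sin\theta \;+\; b\,q\,\cos\theta \;=\; 0
</code></pre>
<p>With the substitution <em>t = tan(θ/2)</em> this clears to a degree-4 polynomial in <em>t</em>, which is consistent with the up-to-four normals asked about; the cubic formulation and its closed-form solution mentioned above are worked out in the paper's Appendix B.</p>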
2014-01-28 23:24:41.133000+00:00
2014-01-28 23:24:41.133000+00:00
null
null
20,360,355
<p><strong>Given a point <em>p</em> exterior to an axially aligned, origin centered ellipse <em>E</em>, find the (upto) four unique normals to <em>E</em> passing through <em>p</em>.</strong></p> <p>This is not a Mathematica question. Direct computation is too slow; I am willing to sacrifice precision and accuracy for speed.</p> <p>I have searched the web, but all I found involved overly complex calculations which if implemented directly appear to lack the performance I need. Is there a more "programmatical" way to do this, like using matrices or scaling the ellipse into a circle?</p>
2013-12-03 19:52:53.227000+00:00
2014-01-28 23:24:41.133000+00:00
2013-12-04 18:46:15.447000+00:00
math|geometry|computational-geometry|ellipse
['http://arxiv.org/abs/1102.1215']
1
61,716,647
<p>There are many types of tables in document images, with many variations in layout. No matter how many rules you write, there will always be a table for which your rules fail. These types of problems are generally solved using ML (Machine Learning) based solutions. You can find many pre-implemented solutions on GitHub for the problem of detecting tables in images using ML or DL (Deep Learning).</p> <p>Here is my code along with the deep learning models; the model can detect various types of tables as well as the structure cells from the tables: <a href="https://github.com/DevashishPrasad/CascadeTabNet" rel="noreferrer">https://github.com/DevashishPrasad/CascadeTabNet</a></p> <p>The approach achieves state-of-the-art results on various public datasets right now (10th May 2020) as far as accuracy is concerned.</p> <p>More details: <a href="https://arxiv.org/abs/2004.12629" rel="noreferrer">https://arxiv.org/abs/2004.12629</a></p>
2020-05-10 18:12:34.697000+00:00
2020-10-11 18:14:58.163000+00:00
2020-10-11 18:14:58.163000+00:00
null
50,829,874
<p>I have different type of invoice files, I want to find table in each invoice file. In this table position is not constant. So I go for image processing. First I tried to convert my invoice into image, then I found contour based on table borders, Finally I can catch table position. For the task I used below code.</p> <pre><code>with Image(page) as page_image: page_image.alpha_channel = False #eliminates transperancy img_buffer=np.asarray(bytearray(page_image.make_blob()), dtype=np.uint8) img = cv2.imdecode(img_buffer, cv2.IMREAD_UNCHANGED) ret, thresh = cv2.threshold(img, 127, 255, 0) im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) margin=[] for contour in contours: # get rectangle bounding contour [x, y, w, h] = cv2.boundingRect(contour) # Don't plot small false positives that aren't text if (w &gt;thresh1 and h&gt; thresh2): margin.append([x, y, x + w, y + h]) #data cleanup on margin to extract required position values. </code></pre> <p>In this code <code>thresh1</code>, <code>thresh2</code> i'll update based on the file.</p> <p>So using this code I can successfully read positions of tables in images, using this position i'll work on my invoice pdf file. For example </p> <p>Sample 1:</p> <p><a href="https://i.stack.imgur.com/NJXjD.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NJXjD.png" alt="enter image description here"></a></p> <p>Sample 2:</p> <p><a href="https://i.stack.imgur.com/Ntbce.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Ntbce.png" alt="enter image description here"></a></p> <p>Sample 3: <a href="https://i.stack.imgur.com/SqpiK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/SqpiK.png" alt="enter image description here"></a></p> <p>Output:</p> <p>Sample 1:</p> <p><a href="https://i.stack.imgur.com/MIaGH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/MIaGH.png" alt="enter image description here"></a></p> <p>Sample 2:</p> <p><a href="https://i.stack.imgur.com/CXv8x.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CXv8x.png" alt="enter image description here"></a></p> <p>Sample 3:</p> <p><a href="https://i.stack.imgur.com/d76gQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/d76gQ.png" alt="enter image description here"></a></p> <p>But, now I have a new format which doesn't have any borders but it's a table. How to solve this? Because my entire operation depends only on borders of the tables. But now I don't have a table borders. How can I achieve this? I don't have any idea to move out from this problem. My question is, Is there any way to find position based on table structure?. </p> <p>For example My problem input looks like below:</p> <p><a href="https://i.stack.imgur.com/Fw3Qa.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Fw3Qa.jpg" alt="enter image description here"></a></p> <p>I would like to find its position like below: <a href="https://i.stack.imgur.com/5or8D.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5or8D.png" alt="enter image description here"></a></p> <p>How can I solve this? It is really appreciable to give me an idea to solve the problem.</p> <p>Thanks in advance.</p>
2018-06-13 05:51:06.367000+00:00
2021-05-18 09:45:27.787000+00:00
2019-06-14 09:00:35.033000+00:00
python|image|opencv|image-processing
['https://github.com/DevashishPrasad/CascadeTabNet', 'https://arxiv.org/abs/2004.12629']
2
51,990,237
<p>I can't give you a number, but I can give you a method to find it yourself. The technique is plotting a graph called a "<strong>learning curve</strong>", where the x-axis is the number of training samples and the y-axis is the score. You start at 1 training sample and increase to 600. You plot two curves: the training error and the test error. You can then see how much influence more data, without any other change, will have on the result.</p> <p>More details and the following image are in <a href="https://arxiv.org/pdf/1707.09725.pdf" rel="nofollow noreferrer">my master's thesis, section 2.5.4</a>:</p> <p><a href="https://i.stack.imgur.com/Jxj6T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jxj6T.png" alt="enter image description here"></a></p> <p>In this example you can see that, up to about 20 training samples, each new example improves the test score a lot (the green curve goes down a lot). But after that, just throwing more data at the problem will not help a lot.</p> <p>The curve will look different in your case, but the principle should be the same.</p> <h2>Other analysis</h2> <p>Look at chapters 2.5 and 2.6 of my master's thesis. I especially recommend having a look at the confusion matrix and <a href="https://datascience.stackexchange.com/a/25008/8820">confusion matrix ordering</a>. This will give you an idea of which classes are confused. Maybe the classes are just inherently difficult to distinguish? Maybe one can add more features? Maybe there are labeling errors? Have a look at chapter 2.5 for more of those "maybe's".</p>
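<p>A framework-agnostic sketch of how such a curve could be produced for the fine-tuning setup in the question (both helper functions are hypothetical placeholders you would write yourself, and <code>evaluate()</code> is assumed to return (loss, accuracy) as Keras models do):</p>
<pre><code>import matplotlib.pyplot as plt

train_sizes = [50, 100, 200, 300, 400, 500, 600]   # images per class
train_scores, val_scores = [], []

for n in train_sizes:
    # hypothetical helpers: subsample n images per class, then fine-tune ResNet-50
    X_sub, y_sub = subsample_per_class(X_train, y_train, n)
    model = build_and_finetune_resnet50(X_sub, y_sub)
    train_scores.append(model.evaluate(X_sub, y_sub)[1])
    val_scores.append(model.evaluate(X_val, y_val)[1])

plt.plot(train_sizes, train_scores, label="train accuracy")
plt.plot(train_sizes, val_scores, label="validation accuracy")
plt.xlabel("training images per class")
plt.ylabel("accuracy")
plt.legend()
plt.show()
</code></pre>
<p>If the validation curve is still clearly rising at 600 images per class, more data should help; if it has flattened, the overfitting is better attacked with other changes (augmentation, regularization, and so on).</p>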
2018-08-23 16:19:52.947000+00:00
2018-08-23 16:43:57.913000+00:00
2018-08-23 16:43:57.913000+00:00
null
51,990,135
<p>For decent generalization, how many images per class are needed for fine-tuning the ResNet-50 model for ASL hand sign classification (24 classes)? I have around 600 images per class and the model is overfitting very badly.</p>
2018-08-23 16:13:32.290000+00:00
2018-08-23 16:43:57.913000+00:00
null
machine-learning|deep-learning|computer-vision
['https://arxiv.org/pdf/1707.09725.pdf', 'https://i.stack.imgur.com/Jxj6T.png', 'https://datascience.stackexchange.com/a/25008/8820']
3
48,893,335
<p>I wrote a very brief prototype of a simple locality-sensitive hashing algorithm in Python. However, there are a few caveats, and you may want to optimize some pieces as well. I'll mention them as we see them.</p> <p>Assume all your strings are stored in <code>strings</code>.</p> <pre><code>import random from collections import Counter MAX_LENGTH = 500 SAMPLING_LENGTH = 10 def bit_sampling(string, indices): return ''.join([string[i] if i&lt;len(string) else ' ' for i in indices]) indices = random.sample(range(MAX_LENGTH),SAMPLING_LENGTH) hashes = [bit_sampling(string, indices) for string in strings] counter = Counter(hashes) most_common, count = counter.most_common()[0] while count &gt; 1: dup_indices = [i for i, x in enumerate(hashes) if x == most_common] # You can use dup_indices to check the edit distance for original groups here. counter.pop(most_common) most_common, count = counter.most_common()[0] </code></pre> <p>First of all, this is a slight variant of bit sampling that works best for the general Hamming distance. Ideally, if all your strings are of the same length, this can give a theoretical probability bound for the Hamming distance. When the Hamming distance between two strings is small, it is very unlikely that they will have different hashes. This can be controlled by the parameter <code>SAMPLING_LENGTH</code>. A larger <code>SAMPLING_LENGTH</code> will make it more likely to hash similar strings to different hashes, but it will also reduce the probability of hashing dissimilar strings to the same hash. For Hamming distance, you can calculate this trade-off easily.</p> <p>Running this snippet multiple times can increase your confidence that there are no similar strings, since each time you will sample different positions.</p> <p>To accommodate your purpose of comparing strings of different lengths, one possible approach is to left-pad shorter strings with spaces and make copies of them.</p> <p>Though all of the operations in this snippet are linear (O(n)), it may still consume significant memory and running time, and it might be possible to shave off a constant factor.</p> <p>You might also want to consider using a more complicated locality-sensitive hashing algorithm such as those surveyed here: <a href="https://arxiv.org/pdf/1408.2927.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1408.2927.pdf</a></p>
2018-02-20 20:03:02.670000+00:00
2018-02-20 20:43:14.640000+00:00
2018-02-20 20:43:14.640000+00:00
null
48,819,439
<p>I have a database of <code>350,000</code> strings with an average length of about <code>500</code>. The strings are not made up of words, they are an essentially random assortment of characters.</p> <p>I need to make sure no two of the strings are too similar, where similarity is defined as <em>edit distance divided by avg length of string</em>. The division is because smaller edit distances are more acceptable for smaller strings. <em>It is fine if a different metric is used for performance reasons, but edit distance is the preferred baseline metric.</em></p> <p>Naively, we calculate <a href="https://en.wikipedia.org/wiki/Edit_distance" rel="noreferrer">edit distance</a> with runtime <code>O(a*b)</code>, where <code>a,b</code> are the length of the two strings. We do this for all <code>n^2</code> pairs, which gives an overall runtime of <code>O(n^2*a*b)</code>, clearly too large with <code>n=350,000, a,b=500</code>.</p> <p>The database is in the form of a Python list read from a csv file. I'd like to process it in a Pythonic way, if possible.</p> <p><strong>How can this be sped up? I'm not sure how long the naive algorithm will take to finish (on the order of weeks) but it ideally should take less than a day to run.</strong></p>
2018-02-16 02:55:22.007000+00:00
2018-02-20 20:43:14.640000+00:00
2018-02-20 17:58:35.560000+00:00
python|python-3.x|similarity|edit-distance
['https://arxiv.org/pdf/1408.2927.pdf']
1
66,117,248
<p>Pegasus is a <a href="https://arxiv.org/abs/1409.3215" rel="noreferrer"><code>seq2seq</code></a> model, so you can't directly convert a <code>seq2seq</code> (encoder-decoder) model using this method. The <a href="https://colab.research.google.com/github/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb#scrollTo=foYlXrSksR_v" rel="noreferrer"><code>guide</code></a> is for BERT, which is an encoder-only model. Any encoder-only or decoder-only transformer model can be converted using this method.</p> <p>To convert a <code>seq2seq</code> model (encoder-decoder) you have to split it and convert the parts separately: the encoder to ONNX and the decoder to ONNX. You can follow <a href="https://github.com/Ki6an/fastT5/blob/8dda859086af631a10ad210a5f1afdec64d49616/fastT5/onnx_exporter.py#L45" rel="noreferrer">this guide</a> (it was done for T5, which is also a <code>seq2seq</code> model).</p> <p><strong>Why are you getting this error?</strong></p> <p>While converting <a href="https://pytorch.org/docs/stable/onnx.html?highlight=onnx%20export#functions" rel="noreferrer">PyTorch to ONNX</a> with</p> <pre><code>_ = torch.onnx._export( model, dummy_input, ... ) </code></pre> <p>you need to provide a dummy variable to both the encoder and the decoder <a href="https://github.com/huggingface/transformers/issues/8923#issuecomment-738811521" rel="noreferrer">separately</a>. By default, when converting using this method, the dummy variable is provided only to the encoder. Since this method of conversion doesn't handle the decoder of this seq2seq model, it won't give a dummy variable to the decoder, and you get the above error: <code>ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds</code></p>
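<p>To illustrate the splitting idea, here is a rough, untested sketch for exporting just the encoder half (the decoder needs a similar wrapper that also receives the encoder hidden states, and handling <code>past_key_values</code> is where most of the real work is - see the linked fastT5 code for the full treatment):</p>
<pre><code>import torch
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

name = "google/pegasus-newsroom"
tokenizer = PegasusTokenizer.from_pretrained(name)
model = PegasusForConditionalGeneration.from_pretrained(name).eval()

class EncoderWrapper(torch.nn.Module):
    """Return a plain tensor so ONNX export does not see a ModelOutput dict."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
    def forward(self, input_ids, attention_mask):
        return self.encoder(input_ids=input_ids,
                            attention_mask=attention_mask,
                            return_dict=False)[0]

encoder = EncoderWrapper(model.get_encoder())
dummy = tokenizer("A short example article.", return_tensors="pt")

torch.onnx.export(
    encoder,
    (dummy["input_ids"], dummy["attention_mask"]),
    "pegasus_encoder.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={"input_ids": {0: "batch", 1: "sequence"},
                  "attention_mask": {0: "batch", 1: "sequence"},
                  "last_hidden_state": {0: "batch", 1: "sequence"}},
    opset_version=11,
)
</code></pre>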
2021-02-09 10:36:06.090000+00:00
2021-03-18 10:14:33.957000+00:00
2021-03-18 10:14:33.957000+00:00
null
66,109,084
<p>I am trying to convert the Pegasus newsroom in HuggingFace's transformers model to the ONNX format. I followed <a href="https://colab.research.google.com/github/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb#scrollTo=foYlXrSksR_v" rel="nofollow noreferrer">this</a> guide published by Huggingface. After installing the prereqs, I ran this code:</p> <pre><code>!rm -rf onnx/ from pathlib import Path from transformers.convert_graph_to_onnx import convert convert(framework=&quot;pt&quot;, model=&quot;google/pegasus-newsroom&quot;, output=Path(&quot;onnx/google/pegasus-newsroom.onnx&quot;), opset=11) </code></pre> <p>and got these errors:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-9-3b37ed1ceda5&gt; in &lt;module&gt;() 3 from transformers.convert_graph_to_onnx import convert 4 ----&gt; 5 convert(framework=&quot;pt&quot;, model=&quot;google/pegasus-newsroom&quot;, output=Path(&quot;onnx/google/pegasus-newsroom.onnx&quot;), opset=11) 6 7 6 frames /usr/local/lib/python3.6/dist-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, head_mask, encoder_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 938 input_shape = inputs_embeds.size()[:-1] 939 else: --&gt; 940 raise ValueError(&quot;You have to specify either decoder_input_ids or decoder_inputs_embeds&quot;) 941 942 # past_key_values_length ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds </code></pre> <p>I have never seen this error before. Any ideas?</p>
2021-02-08 20:44:13.607000+00:00
2021-03-18 10:14:33.957000+00:00
2021-02-14 18:53:25.807000+00:00
python|tensorflow|pytorch|huggingface-transformers|onnx
['https://arxiv.org/abs/1409.3215', 'https://colab.research.google.com/github/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb#scrollTo=foYlXrSksR_v', 'https://github.com/Ki6an/fastT5/blob/8dda859086af631a10ad210a5f1afdec64d49616/fastT5/onnx_exporter.py#L45', 'https://pytorch.org/docs/stable/onnx.html?highlight=onnx%20export#functions', 'https://github.com/huggingface/transformers/issues/8923#issuecomment-738811521']
5
48,861,084
<p>Looks like you need the <strong><a href="http://jinja.pocoo.org/docs/2.10/templates/#safe" rel="nofollow noreferrer">safe</a></strong> filter: <code>{{ paper| safe }}</code></p> <p>EX:</p> <pre><code>papersFound = ['&lt;a href="http://arxiv.org/abs/math/9801077v2" target = "_blank"&gt;Symmetric spectra&lt;/a&gt;',
               '&lt;a href="http://arxiv.org/abs/math/9706228v1" target = "_blank"&gt;Topological transformation groups&lt;/a&gt;']

s = """{% for paper in papersFound %}
&lt;li class="list-group-item" style="color:black"&gt;{{ paper| safe }}&lt;/li&gt;
{% endfor %}"""
</code></pre>
2018-02-19 07:08:27.593000+00:00
2018-02-19 07:08:27.593000+00:00
null
null
48,860,520
<p>I have a list of html tags, called papersFound, e.g.</p> <pre><code>papersFound = ['&lt;a href="http://arxiv.org/abs/math/9801077v2" target = "_blank"&gt;Symmetric spectra&lt;/a&gt;', '&lt;a href="http://arxiv.org/abs/math/9706228v1" target = "_blank"&gt;Topological transformation groups&lt;/a&gt;'] </code></pre> <p>I want to loop through this list, and display hyperlinks to the papers, not the tags themselves (obviously). What I have now is displaying the tags:</p> <pre><code>{% for paper in papersFound %} &lt;li class="list-group-item" style="color:black"&gt;{{ paper }}&lt;/li&gt; {% endfor %} </code></pre> <p>Any help is greatly appreciated. Thank you.</p> <p>EDIT: At <a href="http://jinja.pocoo.org" rel="nofollow noreferrer">http://jinja.pocoo.org</a>, I see the example</p> <pre><code>&lt;ul&gt; {% for user in users %} &lt;li&gt;&lt;a href="{{ user.url }}"&gt;{{ user.username }}&lt;/a&gt;&lt;/li&gt; {% endfor %} &lt;/ul&gt; </code></pre> <p>I want to implement this. But what kind of object is users (which has properties url and username)?</p> <p>EDIT 2: Currently, my page is displaying the html tags, as you can see <a href="https://i.stack.imgur.com/tSdpF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tSdpF.png" alt="enter image description here"></a></p>
2018-02-19 06:21:37.683000+00:00
2018-02-19 07:08:27.593000+00:00
2018-02-19 06:35:43.510000+00:00
javascript|python|jquery|html
['http://jinja.pocoo.org/docs/2.10/templates/#safe']
1
48,860,814
<p>You can render the tags into an empty <code>div</code> to hold your data, loop over the anchors to read each <code>href</code> value, and then empty the div again.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>var papersFound = ['&lt;a href="http://arxiv.org/abs/math/9801077v2" target = "_blank"&gt;Symmetric spectra&lt;/a&gt;', '&lt;a href="http://arxiv.org/abs/math/9706228v1" target = "_blank"&gt;Topological transformation groups&lt;/a&gt;']
var arrayLength = papersFound.length;
var html = '';
for (i = 0; i &lt; arrayLength; i++) {
  html += papersFound[i];
}
$('.results').html(html);

$('.results').find('a').each(function(index, element) {
  console.log($(this).attr('href'));
});

$('.results').html('');</code></pre> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"&gt;&lt;/script&gt;
&lt;div class='results'&gt;
&lt;/div&gt;</code></pre> </div> </div> </p>
2018-02-19 06:46:52.767000+00:00
2018-02-19 06:46:52.767000+00:00
null
null
48,860,520
<p>I have a list of html tags, called papersFound, e.g.</p> <pre><code>papersFound = ['&lt;a href="http://arxiv.org/abs/math/9801077v2" target = "_blank"&gt;Symmetric spectra&lt;/a&gt;', '&lt;a href="http://arxiv.org/abs/math/9706228v1" target = "_blank"&gt;Topological transformation groups&lt;/a&gt;'] </code></pre> <p>I want to loop through this list, and display hyperlinks to the papers, not the tags themselves (obviously). What I have now is displaying the tags:</p> <pre><code>{% for paper in papersFound %} &lt;li class="list-group-item" style="color:black"&gt;{{ paper }}&lt;/li&gt; {% endfor %} </code></pre> <p>Any help is greatly appreciated. Thank you.</p> <p>EDIT: At <a href="http://jinja.pocoo.org" rel="nofollow noreferrer">http://jinja.pocoo.org</a>, I see the example</p> <pre><code>&lt;ul&gt; {% for user in users %} &lt;li&gt;&lt;a href="{{ user.url }}"&gt;{{ user.username }}&lt;/a&gt;&lt;/li&gt; {% endfor %} &lt;/ul&gt; </code></pre> <p>I want to implement this. But what kind of object is users (which has properties url and username)?</p> <p>EDIT 2: Currently, my page is displaying the html tags, as you can see <a href="https://i.stack.imgur.com/tSdpF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tSdpF.png" alt="enter image description here"></a></p>
2018-02-19 06:21:37.683000+00:00
2018-02-19 07:08:27.593000+00:00
2018-02-19 06:35:43.510000+00:00
javascript|python|jquery|html
[]
0
15,805,635
<p>SPOILER AHEAD</p> <p>I was wrong, stating in one of the comments that "Now, there is a general theorem about spanning trees in a graph, but it does not seem to give a computationally feasible way to compute the number sought". The "general theorem", being the Matrix-Tree theorem, attributed to Kirchhoff, and referred to in one of the answers here, gives the result not only as the product of the nonzero eigenvalues of the graph Laplacian divided by the order of the graph, but also as the absolute value of any cofactor of the Laplacian, which in this case is the absolute value of the determinant of a 49999x49999 matrix. But, although the matrix is very sparse, it still looked out of reach to me.</p> <p>However, the reference</p> <p><a href="http://arxiv.org/pdf/0712.0681.pdf" rel="nofollow">http://arxiv.org/pdf/0712.0681.pdf</a></p> <p>("Determinants of block tridiagonal matrices", by Luca Guido Molinari) permitted me to reduce the problem to the evaluation of the determinant of an integer 100x100 dense matrix having very large integers as its entries.</p> <p>Further, the reference</p> <p><a href="http://www.ams.org/journals/mcom/1968-22-103/S0025-5718-1968-0226829-0/S0025-5718-1968-0226829-0.pdf" rel="nofollow">http://www.ams.org/journals/mcom/1968-22-103/S0025-5718-1968-0226829-0/S0025-5718-1968-0226829-0.pdf</a></p> <p>by Erwin H. Bareiss (usually one just speaks of the "Bareiss algorithm", but the recursion which I used, referred to as formula (8) in the reference, seems to be due to Charles Dodgson, a.k.a. Lewis Carroll :) ), then permitted me to evaluate this last determinant and thus to obtain the answer to the original problem.</p>
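<p>For anyone curious what the fraction-free recursion looks like in practice, here is a minimal Python sketch of the Bareiss/Dodgson determinant (exact integer arithmetic throughout, with only basic pivoting; the small test matrix is just an illustration, not related to the problem's matrix):</p> <pre><code>def bareiss_det(matrix):
    # fraction-free Gaussian elimination: every division below is exact,
    # so the whole computation stays in (arbitrary-precision) integers
    m = [row[:] for row in matrix]
    n = len(m)
    sign, prev_pivot = 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:                      # find a nonzero pivot
            for i in range(k + 1, n):
                if m[i][k] != 0:
                    m[k], m[i] = m[i], m[k]
                    sign = -sign
                    break
            else:
                return 0                      # singular matrix
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev_pivot
        prev_pivot = m[k][k]
    return sign * m[n - 1][n - 1]

print(bareiss_det([[2, 3, 1], [4, 7, 7], [6, 18, 22]]))   # -52
</code></pre>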
2013-04-04 08:02:04.600000+00:00
2013-06-07 09:44:20.900000+00:00
2013-06-07 09:44:20.900000+00:00
null
15,063,411
<p>I was trying to solve the following problem:</p> <blockquote> <p>An m×n maze is an m×n rectangular grid with walls placed between grid cells such that there is exactly one path from the top-left square to any other square. The following are examples of a 9×12 maze and a 15×20 maze:</p> <p><img src="https://i.stack.imgur.com/638dq.gif" alt="enter image description here"></p> <p>Let C(m,n) be the number of distinct m×n mazes. Mazes which can be formed by rotation and reflection from another maze are considered distinct.</p> <p>It can be verified that C(1,1) = 1, C(2,2) = 4, C(3,4) = 2415, and C(9,12) = 2.5720e46 (in scientific notation rounded to 5 significant digits).<br> Find C(100,500)</p> </blockquote> <p>Now, there is an explicit formula which gives the right result, and it is perfectly computable. However, as I understand, the solutions to Project Euler problems should be more like clever algorithms and not explicit formula computations. Trying to formulate the solution as a recursion, I could only arrive at a linear system with number of variables growing exponentially with the size of the maze (more precisely, if one tries to write a recursion for the number of mxn mazes with m held fixed, one arrives at a linear system such that the number of its variables grows exponentially with m: one of the variables is the number of mxn mazes with the property given in the declaration of problem 380, while the other variables are numbers of mxn mazes with more than one connected component which touch the boundary of the maze in some specific "configuration" - and the number of such "configurations" seems to grow exponentially with m. So, while this approach is feasible with m=2,3,4 etc, it does not seem to work with m=100).</p> <p>I thought also to reduce the problem to subproblems which can be solved more easily, then reusing the subproblems' solutions when constructing a solution to larger subproblems (the dynamic programming approach), but here I stumbled upon the fact that subproblems seem to involve mazes of irregular shapes, and again, the number of such mazes is exponential in m,n. </p> <p>If someone knows of a feasible approach (m=100, n=500) other than using explicit formulas or some ad hoc theorems, and can hint where to look, for me it would be quite interesting.</p>
2013-02-25 09:06:57.277000+00:00
2015-01-22 17:34:04.200000+00:00
2015-01-22 17:34:04.200000+00:00
algorithm|maze
['http://arxiv.org/pdf/0712.0681.pdf', 'http://www.ams.org/journals/mcom/1968-22-103/S0025-5718-1968-0226829-0/S0025-5718-1968-0226829-0.pdf']
2
21,774,856
<p>I am not 100% sure about what you are really asking, because what you call a "match" is vague. But since you said you already matched your SURF points and mentioned pattern recognition and the use of a template, I am assuming that, ultimately, you want to localize the template in your image and you are asking about a localization score to decide whether you found the template in the image or not.</p> <p>This is a challenging problem and I am not aware that a good and always-appropriate solution has been found yet. </p> <p>However, given your approach, what you could do is analyze the density of matched points in your image: consider local or global maxima as possible locations for your template (global if you know your template appears only once in the image, local if it can appear multiple times) and use a threshold on the density to decide whether or not the template appears. A sketch of the algorithm could be something like this:</p> <ol> <li>Allocate a floating point density map of the size of your image</li> <li>Compute the density map, by increasing by a fixed amount the density map in the neighborhood of each matched point (for instance, for each matched point, add a fixed value <em>epsilon</em> in the rectangle you are displaying in your question)</li> <li>Find the global or local maxima of the density map (the global maximum can be found using the OpenCV function MinMaxLoc, and local maxima can be found using morpho maths, e.g. <a href="https://stackoverflow.com/questions/1856197/how-can-i-find-local-maxima-in-an-image-in-matlab">How can I find local maxima in an image in MATLAB?</a>)</li> <li>For each maximum obtained, compare the corresponding density value to a threshold <em>tau</em>, to decide whether your template is there or not</li> </ol> <p>(A rough sketch of these steps is given at the end of this answer.)</p> <p>If you are into research articles, you can check the following ones for improvement of this basic algorithm:</p> <ul> <li><p><a href="http://arxiv.org/pdf/0910.1273" rel="nofollow noreferrer">"ADABOOST WITH KEYPOINT PRESENCE FEATURES FOR REAL-TIME VEHICLE VISUAL DETECTION", by T.Bdiri, F.Moutarde, N.Bourdis and B.Steux, 2009.</a></p></li> <li><p><a href="ftp://ftp.vision.ee.ethz.ch/publications/bookchapters/eth_biwi_00422.pdf" rel="nofollow noreferrer">"Interleaving Object Categorization and Segmentation", by B.Leibe and B.Schiele, 2006.</a></p></li> </ul> <p><strong>EDIT</strong>: another way to address your problem is to try and remove accidentally-matched points in order to keep only those truly corresponding to your template image. This can be done by enforcing a constraint of consistency between close matched points. The following research article presents an approach like this: <a href="http://perso.telecom-paristech.fr/~sahbi/publication-193.pdf" rel="nofollow noreferrer">"Context-dependent logo matching and retrieval", by H.Sahbi, L.Ballan, G.Serra, A.Del Bimbo, 2010</a> (however, this may require some background knowledge...).</p> <p>Hope this helps.</p>
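<p>Here is a rough numpy sketch of steps 1-4 above for the single-occurrence case (the rectangle half-sizes, <em>epsilon</em> and <em>tau</em> values are placeholders you would have to tune, and the matched points are assumed to be (x, y) pixel coordinates):</p> <pre><code>import numpy as np

def match_density(matched_points, image_shape, half_w=20, half_h=20, epsilon=1.0):
    # steps 1-2: accumulate a fixed weight in a rectangle around every matched point
    density = np.zeros(image_shape[:2], dtype=np.float32)
    h, w = density.shape
    for (x, y) in matched_points:
        x0, x1 = max(0, int(x) - half_w), min(w, int(x) + half_w + 1)
        y0, y1 = max(0, int(y) - half_h), min(h, int(y) + half_h + 1)
        density[y0:y1, x0:x1] += epsilon
    return density

def template_found(density, tau=5.0):
    # steps 3-4 (global maximum only): compare the peak density to a threshold
    y, x = np.unravel_index(np.argmax(density), density.shape)
    return density[y, x] &gt;= tau, (x, y)
</code></pre>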
2014-02-14 08:59:03.977000+00:00
2014-02-17 07:23:45.653000+00:00
2017-05-23 11:57:35.637000+00:00
null
21,622,655
<p>I’m currently working on pattern recognition using SURF in OpenCV. What do I have so far: I’ve written a program in C# where I can select a source-image and a template which I want to find. After that I transfer both pictures into a C++-dll where I’ve implemented a program using the OpenCV-SURFdetector, which returns all the keypoints and matches back to my C#-program where I try to draw a rectangle around my matches.</p> <p><img src="https://i.stack.imgur.com/RUhbQ.jpg" alt="This pictures shows a source-image and a template with it&#39;s keypoints and matches. Also I&#39;ve tried to calculate a rectangle around my matches"></p> <p>Now my question: Is there a common measure of accuracy in pattern recognition? Like for example number of matches in proportion to the number of keypoints in the template? Or maybe the size-difference between my match-rectangle and the original size of the template-image? What are common parameters that are used to say if a match is a “real” and “good” match?</p> <p><strong>Edit:</strong> To make my question clearer. I have a bunch of matchpoints, that are already thresholded by minHessian and distance value. After that I draw something like a rectangle around my matchpoints as you can see in my picture. This is my MATCH. How can I tell now how good this match is? I'm already calculating angle, size and color differences between my now found match and my template. But I think that is much too vague.</p>
2014-02-07 08:22:33.340000+00:00
2014-08-23 21:39:27.400000+00:00
2014-02-11 06:46:25.913000+00:00
c++|opencv|computer-vision|surf|pattern-recognition
['https://stackoverflow.com/questions/1856197/how-can-i-find-local-maxima-in-an-image-in-matlab', 'http://arxiv.org/pdf/0910.1273', 'ftp://ftp.vision.ee.ethz.ch/publications/bookchapters/eth_biwi_00422.pdf', 'http://perso.telecom-paristech.fr/~sahbi/publication-193.pdf']
4
63,635,644
<p>You can test whether doing stemming, lemmatization and stopword removal helps. It doesn't always. I usually do it if I am going to graph the results, as the stopwords clutter them up.</p> <p><strong>A case for not removing stopwords:</strong> keeping stopwords preserves context about the user's intent, which matters when you use a contextual model like BERT. In such models, all stopwords are kept to provide enough context information, including negation words (not, nor, never) which are usually considered stopwords.</p> <p>According to <a href="https://arxiv.org/pdf/1904.07531.pdf" rel="noreferrer">https://arxiv.org/pdf/1904.07531.pdf</a>:</p> <p>&quot;Surprisingly, the stopwords received as much attention as non-stop words, but removing them has no effect in MRR performances.&quot;</p>
2020-08-28 14:20:10.420000+00:00
2021-03-02 15:20:25.560000+00:00
2021-03-02 15:20:25.560000+00:00
null
63,633,534
<p>Are stopword removal, stemming and lemmatization necessary for text classification when using spaCy, BERT or other advanced NLP models to get the vector embedding of the text?</p> <p>text=&quot;The food served in the wedding was very delicious&quot;</p> <p>1. Since spaCy and BERT were trained on huge raw datasets, are there any benefits of applying stopword removal, stemming and lemmatization to this text before generating the embedding with BERT/spaCy for a text classification task?</p> <p>2. I can understand that stopword removal, stemming and lemmatization will be good when we use CountVectorizer or a TF-IDF vectorizer to get embeddings of sentences.</p>
2020-08-28 12:10:19.310000+00:00
2022-06-27 13:56:34.530000+00:00
null
nlp|spacy|text-classification|bert-language-model
['https://arxiv.org/pdf/1904.07531.pdf']
1
57,410,744
<p>First, I don't think anyone can give you a definitive answer; we can only give you different options, and you will need to run performance measurements for your particular use case yourself to find a solution that's optimal for your specific requirements.</p> <p>Some suggestions:</p> <ul> <li>The Box-Muller transform is certainly a decent way to generate gaussian-distributed values, but requires <code>sin</code>, <code>cos</code>, logarithm, and square root computations. You can remove the need for sine and cosine by switching to the <a href="https://en.wikipedia.org/wiki/Marsaglia_polar_method" rel="nofollow noreferrer">Marsaglia polar method</a> (a small sketch of the method is given at the end of this answer). This does come at the cost of requiring you to generate a larger quantity of uniformly distributed values, because of the rejection step. Depending on how your RNG performs on your GPU, this may still work to your advantage, however.</li> <li>Be careful with linear congruential RNGs (LCG), they exhibit patterns that sometimes interact badly with transformation algorithms such as Box-Muller. The one you linked is an MWC generator, which is a technique related to linear congruence, so it might have similar issues. I would probably try exploring other generators. I haven't had a chance to try it myself yet, but there exists a <a href="https://arxiv.org/abs/1005.4973v3" rel="nofollow noreferrer">Mersenne Twister variant for GPUs</a> which I would imagine works well for many applications. One advantage of Mersenne Twister is that it mostly uses bitwise manipulation instructions, which tend to be very fast on GPUs, unlike integer multiplication and division.</li> </ul> <p>There are definitely plenty of libraries out there, but I'll point out that for best performance you'll probably want to keep the random number generating code running in the same OpenCL work-item as the code that uses the samples. Writing out to a memory buffer will put a strain on memory bandwidth, although if your subsequent processing code is heavily ALU/FPU bound this might not matter.</p> <p>As with any random number generation, testing is key - at the very least, plot a histogram of the samples your code generates, overlay it with the theoretical distribution function you're trying to obtain, and visually inspect it to make sure it looks reasonable.</p>
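<p>To illustrate the Marsaglia polar method mentioned above, here is a tiny plain-Python sketch (just to show the math – in OpenCL you would inline the equivalent in your kernel and feed it from your uniform generator of choice):</p> <pre><code>import math
import random

def marsaglia_polar(uniform=random.random):
    # draw a point uniformly in the unit disc by rejection, then scale it;
    # yields two independent standard-normal samples, with no sin/cos calls
    while True:
        u = 2.0 * uniform() - 1.0
        v = 2.0 * uniform() - 1.0
        s = u * u + v * v
        if 0.0 &lt; s &lt; 1.0:
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor

samples = [x for _ in range(5000) for x in marsaglia_polar()]
print(sum(samples) / len(samples))   # should be close to 0
</code></pre>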
2019-08-08 10:43:32.210000+00:00
2019-08-08 10:43:32.210000+00:00
null
null
57,409,026
<p>Is there some other way to get Gaussian distributed random numbers, or are there any libraries for this today?</p> <p>I have seen the question about <a href="https://stackoverflow.com/questions/21703439/gaussian-distributed-random-numbers-in-opencl">Gaussian distributed random numbers in OpenCL</a></p> <p>I want to generate many Gaussian distributed random numbers in OpenCL, like in the question above.</p> <p>It could be done in two steps:</p> <p><a href="http://cas.ee.ic.ac.uk/people/dt10/research/rngs-gpu-mwc64x.html#source_code" rel="nofollow noreferrer">http://cas.ee.ic.ac.uk/people/dt10/research/rngs-gpu-mwc64x.html#source_code</a> can generate many uniform random numbers instead of Gaussian distributed random numbers.</p> <p>Then I can transform the uniform random variates into normally distributed ones using <a href="https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform</a>.</p> <p>This may be time expensive, so is there some other way to get Gaussian distributed random numbers (ideally using a kernel to generate an array of random numbers, like the CUDA function curandGenerateNormal(....)), or are there any libraries for this today?</p>
2019-08-08 09:10:52.690000+00:00
2019-08-08 10:43:32.210000+00:00
2019-08-08 10:22:44.833000+00:00
gpu|opencl|gpgpu
['https://en.wikipedia.org/wiki/Marsaglia_polar_method', 'https://arxiv.org/abs/1005.4973v3']
2
64,293,693
<p>You can do something called scheduled sampling. A major problem with RNNs, and with generative models in general, is that during training time they are not trained on their own predictions. Rather, they are trained on the gold labels. During inference time, there are no gold labels available and the model is fed its own generations. This is something the model has not done before, and if the model makes a mistake at the beginning of the generation phase, it will be more likely to make further mistakes.</p> <p>The idea is to allow the model to be trained on its own outputs gradually, using a decay parameter which increases the probability of a predicted token being fed into the RNN rather than the gold label.</p> <p>You can read about it in this paper: <a href="https://arxiv.org/abs/1506.03099" rel="nofollow noreferrer">Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks</a></p>
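<p>A minimal sketch of the core mechanism (the decay schedules follow the ones described in the paper, up to constants I chose arbitrarily; how you wire this into your decoder loop is up to you): at every decoding step during training you feed the gold token with probability epsilon and the model's own previous prediction otherwise, and epsilon decays as training progresses.</p> <pre><code>import math
import random

def teacher_forcing_prob(step, schedule="inverse_sigmoid", k=1000.0):
    # probability of feeding the gold token at a given training step
    if schedule == "linear":
        return max(0.1, 1.0 - step / k)        # linear decay with a floor
    if schedule == "exponential":              # requires 0 &lt; k &lt; 1
        return k ** step
    return k / (k + math.exp(step / k))        # inverse sigmoid, k &gt;= 1

def next_decoder_input(gold_token, predicted_token, step):
    # the sampling decision made at every decoder time step during training
    if random.random() &lt; teacher_forcing_prob(step):
        return gold_token            # teacher forcing
    return predicted_token           # feed back the model's own output
</code></pre>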
2020-10-10 12:58:33.863000+00:00
2020-10-10 12:58:33.863000+00:00
null
null
64,281,894
<p>I want to improve an MNIST handwritten-digit model with an RNN/LSTM model. Has anyone tried modifications on RNNs? If so, what are the different ways I can improve the model? Please suggest.</p>
2020-10-09 14:20:48.280000+00:00
2020-10-10 12:58:33.863000+00:00
null
deep-learning|data-science|recurrent-neural-network
['https://arxiv.org/abs/1506.03099']
1
69,626,838
<p>There are some benchmarks for Stream Processing in general - but they are not always as broadly applicable or as accessible as the ones you can find for RDBMSs.</p> <p>I will try to list some benchmarking works here that helped me:</p> <ul> <li><p>A recent benchmarking framework that is implemented for Storm &amp; Flink is the <a href="https://github.com/yahoo/streaming-benchmarks" rel="nofollow noreferrer">Yahoo Streaming Benchmark</a>. It has a fixed internal architecture using Kafka &amp; Redis and a predefined query/topology. Anyways, it is a good starting point.</p> </li> <li><p><a href="https://arxiv.org/abs/1802.08496" rel="nofollow noreferrer">Karimov et al.</a> have a nice paper regarding benchmarking of these systems. It is worth a read since it really helps to understand possible metrics. Unfortunately, I cannot find any implementation or further information on the workload (data and queries) that they use - so it is more helpful for understanding, I would say.</p> </li> <li><p><a href="https://www.researchgate.net/publication/339731660_Evaluation_of_Stream_Processing_Frameworks" rel="nofollow noreferrer">van Dongen et al.</a> are doing a more in-depth analysis of several stream processing systems and offer their source code on GitHub. Unfortunately, there is no implementation for Storm. But anyways, there are some interesting ideas &amp; contributions on how to build such a framework.</p> </li> </ul> <p>As you see, Stream Processing has a high diversity in the way you can set up and benchmark your systems...</p>
2021-10-19 07:42:51.950000+00:00
2021-10-19 07:42:51.950000+00:00
null
null
69,608,693
<p>Are there any real benchmarks between Apache Flink and Apache Storm for real-time processing, based on a performance comparison?</p> <p>Also, if I want to make this performance comparison and implement it by myself, is there any stream API (like the Twitter API) that offers higher throughput than Twitter and which is open source?</p> <p>Thank you!</p>
2021-10-17 21:19:42.353000+00:00
2021-10-19 07:42:51.950000+00:00
null
apache-flink|benchmarking|apache-storm|flink-streaming
['https://github.com/yahoo/streaming-benchmarks', 'https://arxiv.org/abs/1802.08496', 'https://www.researchgate.net/publication/339731660_Evaluation_of_Stream_Processing_Frameworks']
3
63,962,537
<p>A common way to generate polynomial minimax approximations is to use the Remez exchange algorithm published by the Russian mathematician Evgeny Remez in 1934. This is a numerical procedure that often involves ill-conditioned systems of equations. As a consequence, it is usually implemented with the help of an arbitrary-precision library. For example, in the implementation of the Remez algorithm that I use I configured the library for 1024-bit precision.</p> <p>For reasonably well-behaved functions, various variants of the Remez algorithm can find approximations very close to the mathematical minimax polynomial. The problem, as noted in the question, is what happens when one moves the generated coefficients of the polynomial to a finite-precision floating-point computation. One often finds the minimax property of the approximation impaired, sometimes significantly so. There are two sources of error at play. First, the generated coefficients cannot be represented accurately in the finite-precision floating-point format. Second, the evaluation of the polynomial uses finite-precision operations instead of mathematical operations with infinite precision.</p> <p>The first problem is the easier one to address. As one can see from some quick experiments, simply rounding the coefficients to the finite-precision format doesn't accomplish the desired near-minimax result. By using a finite-precision format, we basically transform from an N-dimensional continuous space to an N-dimensional discrete lattice, and to do this properly, we need to find the closest lattice points. This is a solvable but hard problem, which is usually made easier through the use of heuristics. Relevant literature:</p> <p>N. Brisebarre, J.-M. Muller, and A. Tisserand, &quot;Computing machine-efficient polynomial approximations&quot;. <em>ACM Transactions on Mathematical Software</em>, Vol. 32, No. 2, June 2006, pp. 236-256. (<a href="http://perso.ens-lyon.fr/jean-michel.muller/TruncToms.pdf" rel="nofollow noreferrer">online</a>)</p> <p>Nicolas Brisebarre and Sylvain Chevillard, &quot;Efficient polynomial L<sup>∞</sup>-approximations&quot;, In <em>18th IEEE Symposium on Computer Arithmetic</em>, June 2007, pp. 169-176 (<a href="https://hal.inria.fr/inria-00119513/document" rel="nofollow noreferrer">online</a>)</p> <p>Florent de Dinechin and Christoph Lauter, &quot;Optimizing polynomials for floating-point implementation&quot;, ArXiv preprint 2008 (<a href="https://arxiv.org/pdf/0803.0439" rel="nofollow noreferrer">online</a>)</p> <p>The <a href="http://sollya.gforge.inria.fr/" rel="nofollow noreferrer">Sollya tool</a> uses these techniques from the literature for its <code>fpminimax</code> <a href="http://sollya.gforge.inria.fr/sollya-7.0/help.php?name=fpminimax" rel="nofollow noreferrer">command</a>. It is worth checking out in addition to Maple's and Mathematica's facilities for generating minimax polynomial approximations, as it often results in superior approximations in my experience.</p> <p>The second problem, how to account for evaluation with finite-precision floating-point computation and how to adjust the coefficients of a polynomial approximation accordingly, is still subject to research.
Some initial results:</p> <p>Tor Myklebust, &quot;Computing accurate Horner form approximations to special functions in finite precision arithmetic&quot;, ArXiv manuscript 2015 (<a href="https://arxiv.org/pdf/1508.03211" rel="nofollow noreferrer">online</a>)</p> <p>Denis Arzelier, Florent Bréhard, Mioara Joldes, &quot;Exchange algorithm for evaluation and approximation error-optimized polynomials&quot;, In <em>26th IEEE Symposium on Computer Arithmetic</em>, June 2019, pp. 30-37 (<a href="https://hal.archives-ouvertes.fr/hal-02006606/document" rel="nofollow noreferrer">online</a>)</p> <p>Note that the first publication came about due to a <a href="https://stackoverflow.com/questions/26692859/best-machine-optimized-polynomial-minimax-approximation-to-arctangent-on-1-1">question</a> I asked on Stack Overflow.</p> <p>For my own use I am using a heuristic search for finding approximations optimized to account for both representational error in the coefficients and evaluation error in the polynomial evaluation. It can be loosely described as a form of simulated annealing. I have also checked into the use of genetic programming, but the preliminary results did not look promising, so I stopped pursuing this approach.</p>
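<p>For readers who want to see the first problem with their own eyes, a small numpy experiment suffices (the Chebyshev fit is only a convenient stand-in for a true minimax approximation, and the function and degree are arbitrary choices): naively rounding the double-precision coefficients to float32 typically increases the observed maximum error, even though the evaluation below is still carried out in double precision.</p> <pre><code>import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

f = np.exp                                    # arbitrary smooth example function
x = np.linspace(-1.0, 1.0, 20001)

fit64 = Chebyshev.fit(x, f(x), deg=7)         # stand-in for a minimax approximation
fit32 = Chebyshev(fit64.coef.astype(np.float32), domain=fit64.domain)

err64 = np.max(np.abs(fit64(x) - f(x)))
err32 = np.max(np.abs(fit32(x) - f(x)))       # same polynomial, coefficients rounded
print(err64, err32)
</code></pre>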
2020-09-18 20:37:48.740000+00:00
2020-09-18 20:37:48.740000+00:00
null
null
63,957,622
<p>I would like to transfer a double precision approximation of a given function into a single precision C implementation (the target device provides a single precision ALU only).</p> <p>It's not too complicated to generate a high precision (e.g. max error 0.1e-12) approximation using double precision. I have used Maple's minimax function, but I have also found some implementations using double precision, for <a href="http://www.ganssle.com/approx.htm" rel="nofollow noreferrer">example</a>.</p> <p>But as soon as it comes to transferring this approximation to a single precision method, I face a loss of accuracy when I simply convert the coefficients to float. I'm aiming at an approximation (single precision) which is precise to about +/-5 ulp. Simply converting the coefficients to float doesn't seem to do the job. I have already learned to split up constants like pi/2 into a rounded part and an error part, and I think there is some kind of trick to transfer the coefficients (the core computation of the approximations are usually polynomials; I want to focus on them in this question) that I don't know yet.</p> <p>I am grateful for every hint or paper concerning this <strong>transfer</strong>, aimed at an implementation. I have already studied some papers about <a href="https://randomascii.wordpress.com/2012/03/08/float-precisionfrom-zero-to-100-digits-2/" rel="nofollow noreferrer">float precision</a>, but for the last two weeks have not made much progress.</p> <p>Thanks in advance!</p>
2020-09-18 14:26:08.457000+00:00
2020-09-18 20:37:48.740000+00:00
2020-09-18 15:19:51.730000+00:00
c|floating-point|precision|floating-accuracy
['http://perso.ens-lyon.fr/jean-michel.muller/TruncToms.pdf', 'https://hal.inria.fr/inria-00119513/document', 'https://arxiv.org/pdf/0803.0439', 'http://sollya.gforge.inria.fr/', 'http://sollya.gforge.inria.fr/sollya-7.0/help.php?name=fpminimax', 'https://arxiv.org/pdf/1508.03211', 'https://hal.archives-ouvertes.fr/hal-02006606/document', 'https://stackoverflow.com/questions/26692859/best-machine-optimized-polynomial-minimax-approximation-to-arctangent-on-1-1']
8
69,968,615
<p>The only <code>word2vec.c</code> command-line parameters which differentially affect words by their frequency are <code>min_count</code>, which discards words below a certain threshold, and <code>sample</code>, which randomly discards <em>some</em> occurrences of highly-frequent words.</p> <p>You can do that discarding because using all occurrences of highly-frequent words is overkill: it barely improves their vectors over fewer training samples, it takes extra training time, &amp; it essentially dilutes the influence of rarer words on the model's internal shared weights – while for many applications, rare words are as important as (or more important than!) frequent words.</p> <p>So one definite way to make training spend more time/effort, relatively, on lower-frequency words is to use a <em>more-aggressive</em> <code>sample</code> value, which means a <em>smaller</em> number, and more of the most-frequent words being randomly skipped.</p> <p>The default is <code>1e-04</code>; especially as your corpus grows, you could try a 10x smaller value like <code>1e-05</code>, a 100x smaller value like <code>1e-06</code>, or try even lower. As with other parameter tweaks, you should have some repeatable evaluation of the final vector quality, for your project purposes, that can be used to guide such adjustments.</p> <p>A more aggressive <code>sample</code> can sometimes deliver a double-whammy of both faster training – by dropping the redundant high-frequency words – and better final results – by both giving more weight to rarer words, and effectively <em>shrinking</em> the context windows wherever frequent words are dropped. (The words are elided before the context-windows are considered – so retained words that were just outside the window move into it.)</p> <p>I've seen a very aggressive <code>sample</code> value of <code>1e-06</code> (or even smaller) discard a majority of the pre-downsampling corpus, in typical natural-language distributions. The saved training time might also then make it thinkable to consider otherwise impractically-larger values for other parameters which tend to increase training time (like <code>epochs</code>, <code>size</code>, <code>negative</code>, <code>window</code>).</p> <p>There's another parameter, controlling the rates of negative-example sampling, that I believe is called <em>alpha</em> in the original word2vec paper, and frozen at <code>0.75</code> in the original Google <code>word2vec.c</code> tool. However, <a href="https://arxiv.org/abs/1804.04212" rel="nofollow noreferrer">some research</a> has suggested other values of this parameter may be useful in some applications – perhaps especially recommendation systems, and systems where the word-tokens don't have usual natural-language Zipfian distributions.</p> <p>So, you may also want to try tinkering with that parameter. (Other implementations of word2vec, like Python Gensim's version, <a href="https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec" rel="nofollow noreferrer">offer this as a parameter <code>ns_exponent</code></a>.)</p> <p>(Tinkering with the other parameters might help in your project aims, vis-a-vis the quality of less-frequent words' vectors, but not in an obvious way - you'd have to find such interactions by experiments in your domain.)</p>
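<p>To get a feel for how aggressive a given <code>sample</code> value is, you can plug word frequencies into the keep-probability used by <code>word2vec.c</code>'s downsampling – reproduced here from memory, so treat the exact formula as a sketch:</p> <pre><code>from math import sqrt

def keep_probability(word_frequency, sample=1e-4):
    # word_frequency = word count / total corpus tokens;
    # mirrors the downsampling rule in word2vec.c, capped at 1.0 for rare words
    if word_frequency &lt;= 0.0:
        return 1.0
    p = (sqrt(word_frequency / sample) + 1.0) * sample / word_frequency
    return min(1.0, p)

for freq in (1e-2, 1e-3, 1e-4, 1e-5):
    print(freq, keep_probability(freq, sample=1e-4), keep_probability(freq, sample=1e-6))
</code></pre> <p>With <code>sample=1e-4</code>, a word making up 1% of the corpus keeps only about 11% of its occurrences, while with <code>sample=1e-6</code> it keeps about 1% – which is why aggressive values shrink the effective corpus so much.</p>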
2021-11-15 01:33:20.160000+00:00
2021-11-15 13:24:28.283000+00:00
2021-11-15 13:24:28.283000+00:00
null
69,925,900
<p>Given the classical implementation of <code>word2vec</code> by ‪Tomas Mikolov‬, what set of parameters (<code>window</code>, <code>sample</code>, <code>negative</code>, maybe <code>cbow</code>)</p> <pre><code>./word2vec -train corpus.txt \ -output vec.txt \ -min-count 5 -size 150 \ -window 5 -sample 1e-5 -negative 10 -threads 16 </code></pre> <p>optimize for computing better embeddings for low-frequency words (say with frequency 5 to 25)?</p>
2021-11-11 09:24:11.480000+00:00
2021-11-15 13:24:28.283000+00:00
null
word2vec
['https://arxiv.org/abs/1804.04212', 'https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec']
2
69,209,236
<p>GELU is a smoother version of the RELU.</p> <p>ReLU vs GELU:</p> <p><a href="https://i.stack.imgur.com/17HiP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/17HiP.png" alt="enter image description here" /></a></p> <p>I think the reason is stated in <a href="https://arxiv.org/pdf/1606.08415.pdf#page=6" rel="nofollow noreferrer">the paper</a>:</p> <p><a href="https://i.stack.imgur.com/LVPZN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LVPZN.png" alt="enter image description here" /></a></p>
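<p>The smoothing is easy to check numerically – a small sketch using the exact definition (x·Φ(x), via the error function) next to the tanh approximation that many BERT implementations use:</p> <pre><code>import math
import numpy as np

def gelu_exact(x):
    # x * Phi(x), with Phi the standard normal CDF
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # tanh approximation from the GELU paper
    return 0.5 * x * (1.0 + np.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def relu(x):
    return max(0.0, x)

for x in (-3.0, -1.0, -0.1, 0.0, 0.1, 1.0, 3.0):
    print(x, relu(x), round(gelu_exact(x), 4), round(float(gelu_tanh(x)), 4))
</code></pre> <p>Unlike ReLU, small negative inputs are not zeroed out abruptly but scaled down smoothly, which is the behaviour visible in the plot above.</p>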
2021-09-16 13:21:10.427000+00:00
2021-09-16 13:36:50.363000+00:00
2021-09-16 13:36:50.363000+00:00
null
57,532,679
<p>I have seen that the activation function <strong>Gaussian Error Linear Units (GELUs)</strong> is used in the popular NLP model <strong><em>BERT</em></strong>. Is there any solid reason?</p>
2019-08-17 01:34:21.060000+00:00
2022-08-23 21:09:06.037000+00:00
null
deep-learning|nlp
['https://i.stack.imgur.com/17HiP.png', 'https://arxiv.org/pdf/1606.08415.pdf#page=6', 'https://i.stack.imgur.com/LVPZN.png']
3
50,336,357
<p>I think khaemuaset's answer is correct.</p> <p>To reinforce: As I understand from the paper (I'm reading <a href="https://arxiv.org/pdf/1611.02344.pdf" rel="nofollow noreferrer">A Convolutional Encoder Model for Machine Translation</a>) and the corresponding Facebook AI Research PyTorch source code, the position embedding is a typical embedding table, but for seq position one-hot vectors instead of vocab one-hot vectors. I verified this with the source code <a href="https://github.com/pytorch/fairseq/blob/master/fairseq/modules/learned_positional_embedding.py" rel="nofollow noreferrer">here</a>. Notice the inheritance of <code>nn.Embedding</code> and the call to its <code>forward</code> method at line 32.</p> <p>The class I linked to is used in the FConvEncoder <a href="https://github.com/pytorch/fairseq/blob/master/fairseq/models/fconv.py#L107" rel="nofollow noreferrer">here</a>.</p>
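<p>In other words, it is just a second embedding table indexed by position instead of by word id, summed with the token embedding. A tiny PyTorch sketch (the sizes are made up):</p> <pre><code>import torch
import torch.nn as nn

vocab_size, max_positions, dim = 10000, 1024, 512      # made-up sizes

tok_emb = nn.Embedding(vocab_size, dim)
pos_emb = nn.Embedding(max_positions, dim)             # learned position table

tokens = torch.randint(0, vocab_size, (2, 7))          # batch of 2 sequences, length 7
positions = torch.arange(tokens.size(1)).unsqueeze(0).expand_as(tokens)

x = tok_emb(tokens) + pos_emb(positions)               # what the conv encoder consumes
print(x.shape)                                         # torch.Size([2, 7, 512])
</code></pre>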
2018-05-14 17:57:13.970000+00:00
2018-05-14 17:57:13.970000+00:00
null
null
44,614,603
<p>I don't understand the position embedding in the paper Convolutional Sequence to Sequence Learning. Can anyone help me? </p>
2017-06-18 11:43:10.030000+00:00
2020-02-23 11:45:06.813000+00:00
2020-02-23 11:45:06.813000+00:00
deep-learning
['https://arxiv.org/pdf/1611.02344.pdf', 'https://github.com/pytorch/fairseq/blob/master/fairseq/modules/learned_positional_embedding.py', 'https://github.com/pytorch/fairseq/blob/master/fairseq/models/fconv.py#L107']
3
47,278,567
<p>They are basically the fullest learned model you can get from the network, before it's been squashed down to apply to only the number of classes we are interested in. Check out how some researchers use them to train a shallow neural net based on what a deep network has learned: <a href="https://arxiv.org/pdf/1312.6184.pdf" rel="noreferrer">https://arxiv.org/pdf/1312.6184.pdf</a></p> <p>It's kind of like how when learning a subject in detail, you will learn a great many minor points, but then when teaching a student, you will try to compress it to the simplest case. If the student now tried to teach, it'd be quite difficult, but would be able to describe it just well enough to use the language.</p>
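<p>Put differently, the logits are the raw, unbounded per-class scores, and the softmax only rescales them into probabilities – a quick numpy illustration:</p> <pre><code>import numpy as np

logits = np.array([2.0, 1.0, 0.1])       # raw scores from the last layer

def softmax(z):
    e = np.exp(z - z.max())               # shift by the max for numerical stability
    return e / e.sum()

print(softmax(logits))                    # roughly [0.659, 0.242, 0.099]
</code></pre>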
2017-11-14 05:51:41.580000+00:00
2017-11-14 05:51:41.580000+00:00
null
null
41,455,101
<p>In the following TensorFlow function, we must feed the activation of artificial neurons in the final layer. That I understand. But I don't understand why it is called logits? Isn't that a mathematical function? </p> <pre><code>loss_function = tf.nn.softmax_cross_entropy_with_logits( logits = last_layer, labels = target_output ) </code></pre>
2017-01-04 02:02:31.570000+00:00
2022-06-01 20:45:03.897000+00:00
2019-04-30 01:52:57.920000+00:00
tensorflow|machine-learning|neural-network|deep-learning|cross-entropy
['https://arxiv.org/pdf/1312.6184.pdf']
1
44,470,613
<p>Good question. The lack of interpretability of NN models is one pain the ML/NN community has been struggling with.</p> <p>One recent approach that has been receiving attention is the <a href="https://arxiv.org/abs/1602.04938" rel="noreferrer">LIME paper</a> (Ribeiro et al, KDD'16). Here's a relevant excerpt from the abstract:</p> <ul> <li><em>"In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction"</em>.</li> </ul> <p>There's also a GitHub <a href="https://github.com/marcotcr/lime" rel="noreferrer">repository</a> (Python, yay!).</p> <p>(If you do try LIME, please share your experience in the question comments..)</p>
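<p>A rough usage sketch for the pipeline in your question (argument names follow the lime API as far as I recall it, so double-check against the repository; the feature names are just placeholders):</p> <pre><code>import numpy as np
from lime.lime_tabular import LimeTabularExplainer

# X, y and the fitted pipeline `model` come from the script in the question
explainer = LimeTabularExplainer(
    X,
    feature_names=[str(i) for i in range(X.shape[1])],
    class_names=[str(c) for c in np.unique(y)],
    mode="classification",
)

exp = explainer.explain_instance(X[0], model.predict_proba, num_features=10)
print(exp.as_list())        # list of (feature condition, weight) pairs for this sample
</code></pre>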
2017-06-10 07:06:28.230000+00:00
2017-06-11 04:18:33.673000+00:00
2017-06-11 04:18:33.673000+00:00
null
44,460,937
<p>I would like to know if there is any way to visualize or find the most important/contributing features after fitting a MLP classifier in Sklearn.</p> <p>Simple example:</p> <pre><code>import pandas as pd import numpy as np from sklearn.preprocessing import StandardScaler from sklearn.model_selection import LeaveOneOut from sklearn.neural_network import MLPClassifier from sklearn.model_selection import GridSearchCV from sklearn.pipeline import make_pipeline data= pd.read_csv('All.csv', header=None) X, y = data.iloc[0:, 0:249].values, data.iloc[0:,249].values sc = StandardScaler() mlc = MLPClassifier(activation = 'relu', random_state=1,nesterovs_momentum=True) loo = LeaveOneOut() pipe = make_pipeline(sc, mlc) parameters = {"mlpclassifier__hidden_layer_sizes":[(168,),(126,),(498,),(166,)],"mlpclassifier__solver" : ('sgd','adam'), "mlpclassifier__alpha": [0.001,0.0001],"mlpclassifier__learning_rate_init":[0.005,0.001] } clf = GridSearchCV(pipe, parameters,n_jobs= -1,cv = loo) clf.fit(X, y) model = clf.best_estimator_ print("the best model and parameters are the following: {} ".format(model)) </code></pre>
2017-06-09 14:57:00.693000+00:00
2019-03-20 08:37:03.930000+00:00
2018-07-02 08:06:29.340000+00:00
python-2.7|machine-learning|scikit-learn|classification|multi-layer
['https://arxiv.org/abs/1602.04938', 'https://github.com/marcotcr/lime']
2
48,322,346
<p><code>Batch Normalization</code> normalizes each output over a complete batch using the following (from the <a href="https://arxiv.org/pdf/1502.03167" rel="noreferrer">original paper</a>). </p> <p><a href="https://i.stack.imgur.com/JeE6J.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JeE6J.png" alt="BatchNorm Formula"></a></p> <p>Take, for example, the following outputs (size 3) for a batch size of 2</p> <pre><code>[2, 4, 6]
[4, 6, 8]
</code></pre> <p>Now the mean for each output over the batch will be</p> <pre><code>[3, 5, 7]
</code></pre> <p>Now, look at the numerator in the above formula. It subtracts the mean from each element of the output. But if the batch size is 1, the mean is exactly the same as the output, so the numerator evaluates to 0.</p> <p>As a side note, the variance in the denominator is also 0 (up to the small ε term), but it seems that <code>tensorflow</code> outputs <code>0</code> in a <code>0/0</code> situation.</p>
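<p>You can reproduce this in a few lines of numpy (gamma=1, beta=0, and a tiny constant standing in for the ε term so the division is defined):</p> <pre><code>import numpy as np

def batch_norm(batch):
    # normalize each output unit across the batch dimension (training-time statistics)
    mean = batch.mean(axis=0)
    var = batch.var(axis=0)
    return (batch - mean) / np.sqrt(var + 1e-12)

print(batch_norm(np.array([[2., 4., 6.], [4., 6., 8.]])))   # +/-1 everywhere
print(batch_norm(np.array([[2., 4., 6.]])))                 # all zeros for batch size 1
</code></pre>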
2018-01-18 13:18:47.030000+00:00
2018-01-18 13:18:47.030000+00:00
null
null
48,320,854
<p>I have a question about the understanding of the BatchNorm (BN later on).</p> <p>I have a convnet working nicely, I was writing tests to check for shape and outputs range. And I noticed that when I set the batch_size = 1, my model outputs zeros (logits and activations).</p> <p>I prototyped the simplest convnet with BN:</p> <p><strong>Input => Conv + ReLU => BN => Conv + ReLU => BN => Conv Layer + Tanh</strong></p> <p>The model is initialized with <em>xavier initialization</em>. I guess that BN <strong>during training</strong> do some calculations that require Batch_size > 1.</p> <p>I have found an issue in PyTorch that seems to talk about this: <a href="https://github.com/pytorch/pytorch/issues/1381" rel="noreferrer">https://github.com/pytorch/pytorch/issues/1381</a></p> <p>Could anyone explain this ? It's still a little blurry for me.</p> <hr> <p><strong>Example Run:</strong></p> <p><strong>Important:</strong> Tensorlayer Library is required for this script to run: <em>pip install tensorlayer</em></p> <pre><code>import tensorflow as tf import tensorlayer as tl import numpy as np def conv_net(inputs, is_training): xavier_initilizer = tf.contrib.layers.xavier_initializer(uniform=True) normal_initializer = tf.random_normal_initializer(mean=1., stddev=0.02) # Input Layers network = tl.layers.InputLayer(inputs, name='input') fx = [64, 128, 256, 256, 256] for i, n_out_channel in enumerate(fx): with tf.variable_scope('h' + str(i + 1)): network = tl.layers.Conv2d( network, n_filter = n_out_channel, filter_size = (5, 5), strides = (2, 2), padding = 'VALID', act = tf.identity, W_init = xavier_initilizer, name = 'conv2d' ) network = tl.layers.BatchNormLayer( network, act = tf.identity, is_train = is_training, gamma_init = normal_initializer, name = 'batch_norm' ) network = tl.layers.PReluLayer( layer = network, a_init = tf.constant_initializer(0.2), name ='activation' ) ############# OUTPUT LAYER ############### with tf.variable_scope('h' + str(len(fx) + 1)): ''' network = tl.layers.FlattenLayer(network, name='flatten') network = tl.layers.DenseLayer( network, n_units = 100, act = tf.identity, W_init = xavier_initilizer, name = 'dense' ) ''' output_filter_size = tuple([int(i) for i in network.outputs.get_shape()[1:3]]) network = tl.layers.Conv2d( network, n_filter = 100, filter_size = output_filter_size, strides = (1, 1), padding = 'VALID', act = tf.identity, W_init = xavier_initilizer, name = 'conv2d' ) network = tl.layers.BatchNormLayer( network, act = tf.identity, is_train = is_training, gamma_init = normal_initializer, name = 'batch_norm' ) net_logits = network.outputs network.outputs = tf.nn.tanh( x = network.outputs, name = 'activation' ) net_output = network.outputs return network, net_output, net_logits if __name__ == '__main__': tf.logging.set_verbosity(tf.logging.DEBUG) ################################################# # MODEL DEFINITION # ################################################# PLH_SHAPE = [None, 256, 256, 3] input_plh = tf.placeholder(tf.float32, PLH_SHAPE, name='input_placeholder') convnet, net_out, net_logits = conv_net(input_plh, is_training=True) with tf.Session() as sess: tl.layers.initialize_global_variables(sess) convnet.print_params(details=True) ################################################# # LAUNCH A RUN # ################################################# for BATCH_SIZE in [1, 2]: INPUT_SHAPE = [BATCH_SIZE, 256, 256, 3] batch_data = np.random.random(size=INPUT_SHAPE) output, logits = sess.run( [net_out, net_logits], feed_dict={ input_plh: batch_data } ) if 
tf.logging.get_verbosity() == tf.logging.DEBUG: print("\n\n###########################") print("\nBATCH SIZE = %d\n" % BATCH_SIZE) tf.logging.debug("output =&gt; Shape: %s - Mean: %e - Std: %f - Min: %f - Max: %f" % ( output.shape, output.mean(), output.std(), output.min(), output.max() )) tf.logging.debug("logits =&gt; Shape: %s - Mean: %e - Std: %f - Min: %f - Max: %f" % ( logits.shape, logits.mean(), logits.std(), logits.min(), logits.max() )) if tf.logging.get_verbosity() == tf.logging.DEBUG: print("###########################") </code></pre> <p><strong>Gives the following output:</strong></p> <pre><code>########################### BATCH SIZE = 1 DEBUG:tensorflow:output =&gt; Shape: (1, 1, 1, 100) - Mean: 0.000000e+00 - Std: 0.000000 - Min: 0.000000 - Max: 0.000000 DEBUG:tensorflow:logits =&gt; Shape: (1, 1, 1, 100) - Mean: 0.000000e+00 - Std: 0.000000 - Min: 0.000000 - Max: 0.000000 ########################### ########################### BATCH SIZE = 2 DEBUG:tensorflow:output =&gt; Shape: (2, 1, 1, 100) - Mean: -1.430511e-08 - Std: 0.760749 - Min: -0.779634 - Max: 0.779634 DEBUG:tensorflow:logits =&gt; Shape: (2, 1, 1, 100) - Mean: -4.768372e-08 - Std: 0.998715 - Min: -1.044437 - Max: 1.044437 ########################### </code></pre>
2018-01-18 12:00:34.387000+00:00
2018-01-18 13:54:33.500000+00:00
2018-01-18 13:48:38.977000+00:00
tensorflow|machine-learning|deep-learning|conv-neural-network|batch-normalization
['https://arxiv.org/pdf/1502.03167', 'https://i.stack.imgur.com/JeE6J.png']
2
48,322,475
<p>You should probably read an explanation about Batch Normalization, such as <a href="https://gab41.lab41.org/batch-normalization-what-the-hey-d480039a9e3b" rel="noreferrer">this one</a>. You can also take a look at <a href="https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization" rel="noreferrer">tensorflow's related doc</a>. </p> <p>Basically, there are 2 ways you can do batch_norm, and both have problems dealing with a batch size of 1:</p> <ul> <li><p>using a moving mean and variance per pixel, so they are tensors of the same shape as each sample in your batch. This is the one used in @layog's answer, and (I think) in <a href="https://arxiv.org/pdf/1502.03167.pdf" rel="noreferrer">the original paper</a>, and the most used.</p></li> <li><p>using a moving mean and variance over the entire image / feature space, so they are just vectors (rank 1) of shape <code>(n_channels,)</code>.</p></li> </ul> <p>In both cases, you'll have:</p> <pre><code>output = gamma * (input - mean) / sigma + beta
</code></pre> <p>Beta is often set to 0 and gamma to 1, since you have linear functions right after BN.</p> <p><strong>During training</strong>, <code>mean</code> and <code>variance</code> are computed <em>across the current batch</em>, which causes problems when it is of size 1:</p> <ul> <li>in the 1st case, you'll get <code>mean=input</code>, so <code>output=0</code></li> <li>in the 2nd case, <code>mean</code> will be the average value over all pixels, so it's better; but if your width and height are also 1, then you get <code>mean=input</code> again, so you get <code>output=0</code>.</li> </ul> <p>I think most people (and the original method) use the 1st way, which is why you'll get 0 (although the TF doc seems to suggest that the 2nd method is usual too). The argument in the link you're providing seems to be considering the 2nd method.</p> <p>In any case (whichever you're using), with BN you'll only get good results if you use a bigger batch size (say, at least 10).</p>
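<p>The difference between the two conventions is easy to check numerically (numpy sketch, gamma=1, beta=0): with a single sample, per-pixel statistics always collapse to 0, while per-channel statistics only collapse once the spatial size is 1x1, as in the final layer of the network in the question.</p> <pre><code>import numpy as np

x = np.random.rand(1, 8, 8, 3)      # batch of one NHWC feature map, 8x8 spatial size

# 1st convention: statistics per pixel/unit, over the batch axis only
per_pixel = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + 1e-5)

# 2nd convention: statistics per channel, over batch and spatial axes
per_channel = (x - x.mean(axis=(0, 1, 2))) / np.sqrt(x.var(axis=(0, 1, 2)) + 1e-5)

print(np.abs(per_pixel).max())      # ~0: every value is its own mean
print(np.abs(per_channel).max())    # clearly non-zero while the spatial size is above 1x1
</code></pre>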
2018-01-18 13:25:46.553000+00:00
2018-01-18 13:54:33.500000+00:00
2018-01-18 13:54:33.500000+00:00
null
48,320,854
<p>I have a question about the understanding of the BatchNorm (BN later on).</p> <p>I have a convnet working nicely, I was writing tests to check for shape and outputs range. And I noticed that when I set the batch_size = 1, my model outputs zeros (logits and activations).</p> <p>I prototyped the simplest convnet with BN:</p> <p><strong>Input => Conv + ReLU => BN => Conv + ReLU => BN => Conv Layer + Tanh</strong></p> <p>The model is initialized with <em>xavier initialization</em>. I guess that BN <strong>during training</strong> do some calculations that require Batch_size > 1.</p> <p>I have found an issue in PyTorch that seems to talk about this: <a href="https://github.com/pytorch/pytorch/issues/1381" rel="noreferrer">https://github.com/pytorch/pytorch/issues/1381</a></p> <p>Could anyone explain this ? It's still a little blurry for me.</p> <hr> <p><strong>Example Run:</strong></p> <p><strong>Important:</strong> Tensorlayer Library is required for this script to run: <em>pip install tensorlayer</em></p> <pre><code>import tensorflow as tf import tensorlayer as tl import numpy as np def conv_net(inputs, is_training): xavier_initilizer = tf.contrib.layers.xavier_initializer(uniform=True) normal_initializer = tf.random_normal_initializer(mean=1., stddev=0.02) # Input Layers network = tl.layers.InputLayer(inputs, name='input') fx = [64, 128, 256, 256, 256] for i, n_out_channel in enumerate(fx): with tf.variable_scope('h' + str(i + 1)): network = tl.layers.Conv2d( network, n_filter = n_out_channel, filter_size = (5, 5), strides = (2, 2), padding = 'VALID', act = tf.identity, W_init = xavier_initilizer, name = 'conv2d' ) network = tl.layers.BatchNormLayer( network, act = tf.identity, is_train = is_training, gamma_init = normal_initializer, name = 'batch_norm' ) network = tl.layers.PReluLayer( layer = network, a_init = tf.constant_initializer(0.2), name ='activation' ) ############# OUTPUT LAYER ############### with tf.variable_scope('h' + str(len(fx) + 1)): ''' network = tl.layers.FlattenLayer(network, name='flatten') network = tl.layers.DenseLayer( network, n_units = 100, act = tf.identity, W_init = xavier_initilizer, name = 'dense' ) ''' output_filter_size = tuple([int(i) for i in network.outputs.get_shape()[1:3]]) network = tl.layers.Conv2d( network, n_filter = 100, filter_size = output_filter_size, strides = (1, 1), padding = 'VALID', act = tf.identity, W_init = xavier_initilizer, name = 'conv2d' ) network = tl.layers.BatchNormLayer( network, act = tf.identity, is_train = is_training, gamma_init = normal_initializer, name = 'batch_norm' ) net_logits = network.outputs network.outputs = tf.nn.tanh( x = network.outputs, name = 'activation' ) net_output = network.outputs return network, net_output, net_logits if __name__ == '__main__': tf.logging.set_verbosity(tf.logging.DEBUG) ################################################# # MODEL DEFINITION # ################################################# PLH_SHAPE = [None, 256, 256, 3] input_plh = tf.placeholder(tf.float32, PLH_SHAPE, name='input_placeholder') convnet, net_out, net_logits = conv_net(input_plh, is_training=True) with tf.Session() as sess: tl.layers.initialize_global_variables(sess) convnet.print_params(details=True) ################################################# # LAUNCH A RUN # ################################################# for BATCH_SIZE in [1, 2]: INPUT_SHAPE = [BATCH_SIZE, 256, 256, 3] batch_data = np.random.random(size=INPUT_SHAPE) output, logits = sess.run( [net_out, net_logits], feed_dict={ input_plh: batch_data } ) if 
tf.logging.get_verbosity() == tf.logging.DEBUG: print("\n\n###########################") print("\nBATCH SIZE = %d\n" % BATCH_SIZE) tf.logging.debug("output =&gt; Shape: %s - Mean: %e - Std: %f - Min: %f - Max: %f" % ( output.shape, output.mean(), output.std(), output.min(), output.max() )) tf.logging.debug("logits =&gt; Shape: %s - Mean: %e - Std: %f - Min: %f - Max: %f" % ( logits.shape, logits.mean(), logits.std(), logits.min(), logits.max() )) if tf.logging.get_verbosity() == tf.logging.DEBUG: print("###########################") </code></pre> <p><strong>Gives the following output:</strong></p> <pre><code>########################### BATCH SIZE = 1 DEBUG:tensorflow:output =&gt; Shape: (1, 1, 1, 100) - Mean: 0.000000e+00 - Std: 0.000000 - Min: 0.000000 - Max: 0.000000 DEBUG:tensorflow:logits =&gt; Shape: (1, 1, 1, 100) - Mean: 0.000000e+00 - Std: 0.000000 - Min: 0.000000 - Max: 0.000000 ########################### ########################### BATCH SIZE = 2 DEBUG:tensorflow:output =&gt; Shape: (2, 1, 1, 100) - Mean: -1.430511e-08 - Std: 0.760749 - Min: -0.779634 - Max: 0.779634 DEBUG:tensorflow:logits =&gt; Shape: (2, 1, 1, 100) - Mean: -4.768372e-08 - Std: 0.998715 - Min: -1.044437 - Max: 1.044437 ########################### </code></pre>
2018-01-18 12:00:34.387000+00:00
2018-01-18 13:54:33.500000+00:00
2018-01-18 13:48:38.977000+00:00
tensorflow|machine-learning|deep-learning|conv-neural-network|batch-normalization
['https://gab41.lab41.org/batch-normalization-what-the-hey-d480039a9e3b', 'https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization', 'https://arxiv.org/pdf/1502.03167.pdf']
3
40,397,120
<p>They are the same type of network: convolutional neural networks. The problem with such an overview is that as soon as you post something it is already outdated. Most of the networks you describe are already considered old, even though they are only a few years old.</p> <p>Nevertheless, you can take a look at the networks supplied by Caffe (<a href="https://github.com/BVLC/caffe/tree/master/models" rel="nofollow noreferrer">https://github.com/BVLC/caffe/tree/master/models</a>). </p> <p>In my personal view, the most important concepts in deep learning are recurrent networks (<a href="https://keras.io/layers/recurrent/" rel="nofollow noreferrer">https://keras.io/layers/recurrent/</a>), residual connections, and inception blocks (see <a href="https://arxiv.org/abs/1602.07261" rel="nofollow noreferrer">https://arxiv.org/abs/1602.07261</a>). The rest are largely theoretical concepts, which would not fit in a Stack Overflow answer.</p>
2016-11-03 08:47:42.303000+00:00
2016-11-03 08:47:42.303000+00:00
null
null
40,378,314
<p>I am fairly new to deep learning and get quite overwhelmed by the many different nets and their fields of application. Thus, I want to know if there is some kind of overview of which different networks exist, what their key features are and what purpose they serve.</p> <p>For example, I know about LeNet, ConvNet and AlexNet - somehow they are the same but they still differ?</p>
2016-11-02 11:01:31.887000+00:00
2017-05-24 16:40:08.933000+00:00
null
deep-learning
['https://github.com/BVLC/caffe/tree/master/models', 'https://keras.io/layers/recurrent/', 'https://arxiv.org/abs/1602.07261']
3
55,685,730
<p>None that I know of.</p> <p>Here are two examples of PyTorch implementation:</p> <ul> <li><p><a href="https://github.com/OpenNMT/OpenNMT-py/blob/e8622eb5c6117269bb3accd8eb6f66282b5e67d9/onmt/utils/loss.py#L186" rel="noreferrer"><code>LabelSmoothingLoss</code> module</a> in OpenNMT framework for machine translation</p></li> <li><p><a href="https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/master/train.py#L38" rel="noreferrer"><code>attention-is-all-you-need-pytorch</code></a>, re-implementation of Google's <a href="https://arxiv.org/abs/1706.03762" rel="noreferrer">Attention is all you need paper</a></p></li> </ul>
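<p>If you just want something small to experiment with before pulling in one of those frameworks, a minimal PyTorch sketch along the same lines could look like this. The class name, the smoothing default and the 196-class example (Stanford Cars) are my own assumptions, not code from either repository.</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelSmoothingLoss(nn.Module):
    """Cross-entropy with label smoothing (sketch).

    The target class gets probability 1 - smoothing; the remaining mass is
    spread uniformly over the other classes.
    """
    def __init__(self, num_classes, smoothing=0.1):
        super().__init__()
        self.num_classes = num_classes
        self.smoothing = smoothing

    def forward(self, logits, target):
        # logits: (batch, num_classes), target: (batch,) of class indices
        log_probs = F.log_softmax(logits, dim=-1)
        with torch.no_grad():
            true_dist = torch.full_like(log_probs, self.smoothing / (self.num_classes - 1))
            true_dist.scatter_(1, target.unsqueeze(1), 1.0 - self.smoothing)
        return torch.mean(torch.sum(-true_dist * log_probs, dim=-1))

criterion = LabelSmoothingLoss(num_classes=196, smoothing=0.1)
loss = criterion(torch.randn(8, 196), torch.randint(0, 196, (8,)))
</code></pre>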
2019-04-15 09:00:49.827000+00:00
2019-04-15 09:00:49.827000+00:00
null
null
55,681,502
<p>I'm building a <code>ResNet-18</code> classification model for the <strong>Stanford Cars</strong> dataset using transfer learning. I would like to implement <a href="https://arxiv.org/pdf/1701.06548.pdf" rel="noreferrer">label smoothing</a> to penalize overconfident predictions and improve generalization.</p> <p><code>TensorFlow</code> has a simple keyword argument in <a href="https://www.tensorflow.org/api_docs/python/tf/losses/softmax_cross_entropy" rel="noreferrer"><code>CrossEntropyLoss</code></a>. Has anyone built a similar function for <code>PyTorch</code> that I could plug-and-play with?</p>
2019-04-15 01:14:30.217000+00:00
2022-08-04 14:58:19.140000+00:00
2021-04-02 21:29:59.457000+00:00
python|machine-learning|pytorch|transfer-learning
['https://github.com/OpenNMT/OpenNMT-py/blob/e8622eb5c6117269bb3accd8eb6f66282b5e67d9/onmt/utils/loss.py#L186', 'https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/master/train.py#L38', 'https://arxiv.org/abs/1706.03762']
3
43,870,203
<p>There is no need anymore for an external module for uploading streams to s3. Now there is a new method in aws-sdk called <code>s3.upload</code>, which can upload an arbitrarily sized buffer, blob, or stream. You can check the documentation <a href="http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property" rel="nofollow noreferrer">here</a></p> <p>The code I used:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>const aws = require('aws-sdk'); const s3 = new aws.S3({ credentials:{ accessKeyId: "ACCESS_KEY", secretAccessKey: "SECRET_ACCESS_KEY" } }); const fs = require('fs'); const got = require('got'); //fs stream test s3.upload({ Bucket: "BUCKET_NAME", Key: "FILE_NAME", ContentType: 'text/plain', Body: fs.createReadStream('SOME_FILE') }) .on("httpUploadProgress", progress =&gt; console.log(progress)) .send((err,resp) =&gt; { if(err) return console.error(err); console.log(resp); }) //http stream test s3.upload({ Bucket: "BUCKET_NAME", Key: "FILE_NAME", ContentType: 'application/pdf', Body: got.stream('https://arxiv.org/pdf/1701.00003.pdf') }) .on("httpUploadProgress", progress =&gt; console.log(progress)) .send((err,resp) =&gt; { if(err) return console.error(err); console.log(resp); })</code></pre> </div> </div> </p> <p>To prove my point even further I tried the code with the pdf you posted in your question and here is the <a href="https://s3-eu-west-1.amazonaws.com/my-test-bucket-qqaaqq/testing-upload-pdf" rel="nofollow noreferrer">link</a> for my test bucket showing that the pdf works as expected.</p>
2017-05-09 12:45:30.250000+00:00
2017-05-09 12:45:30.250000+00:00
null
null
43,775,485
<p>I am using Node.js to upload files to an AWS server and found that the uploaded file size is not correct; I am getting 2.1KB.</p> <p>Here is my code:</p> <pre><code>var uploadFile = function (fileReadStream, awsHeader, cb) { //set options for the streaming module var options = { concurrentParts: 2, waitTime: 20000, retries: 2, maxPartSize: 10 * 1024 * 1024 }; //call stream function to upload the file to s3 var uploader = new streamingS3(fileReadStream, config.aws.accessKey, config.aws.secretKey, awsHeader, options); //start uploading uploader.begin();// important if callback not provided. // handle these functions uploader.on('data', function (bytesRead) { //console.log(bytesRead, ' bytes read.'); }); uploader.on('part', function (number) { //console.log('Part ', number, ' uploaded.'); }); // All parts uploaded, but upload not yet acknowledged. uploader.on('uploaded', function (stats) { //console.log('Upload stats: ', stats); }); uploader.on('finished', function (response, stats) { console.log(response); cb(null, response); }); uploader.on('error', function (err) { console.log('Upload error: ', err); cb(err); }); }; </code></pre> <p>Although the file name shows up in my AWS bucket, when I try to open the file it says it failed.</p> <p>I am trying to upload this file from the URL: <a href="https://arxiv.org/pdf/1701.00003.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1701.00003.pdf</a></p>
2017-05-04 06:33:42.510000+00:00
2020-11-24 14:18:39.577000+00:00
2018-06-21 01:14:44.697000+00:00
javascript|node.js|amazon-web-services|amazon-s3|streaming
['http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property', 'https://s3-eu-west-1.amazonaws.com/my-test-bucket-qqaaqq/testing-upload-pdf']
2
49,981,186
<p>The problem is that the Tesseract engine was not trained to read this kind of text topology.</p> <p>You can:</p> <ul> <li>train your own model; in particular, you'll need to provide images with variations of topology (position of characters). You can actually use the same image and shuffle the positions of the characters.</li> <li>reorganize the image into clusters of text and use Tesseract; in particular, I would take the cents part and move it to the right of the comma, in which case you can use Tesseract out of the box. A few relevant criteria would be the <strong>height of the clusters</strong> (to differentiate cents and integers) and the <strong>position of the clusters</strong> (read from left to right).</li> </ul> <p>In general, computer vision algorithms (including CNNs) give you tools to obtain a higher-level representation of an image (features or descriptors), but they fail to create a logic or an algorithm to process intermediate results in a certain way.</p> <p>In your case that would be:</p> <ul> <li>"if the height of those letters is smaller, it's cents",</li> <li>"if the height and vertical position are the same, it's part of the same number, either on the left of the comma or on the right of the comma".</li> </ul> <p>The thing is that it's difficult to reach that through training, and at the same time it's extremely simple to write this for a human as an algorithm. Sorry for not giving you an actual implementation, but my text <strong>is</strong> the pseudo code.</p> <p><a href="https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract2" rel="noreferrer">TrainingTesseract2</a></p> <p><a href="https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract-4.00" rel="noreferrer">TrainingTesseract4</a></p> <p><a href="https://arxiv.org/abs/1604.03628" rel="noreferrer">Joint Unsupervised Learning of Deep Representations and Image Clusters</a></p>
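<p>To make that pseudo code slightly more concrete, here is a small sketch of the height/position logic in plain Python. It assumes you have already extracted per-character bounding boxes for the digits of one price (for instance from Tesseract's character boxes) as (char, x, height) tuples; the 0.75 threshold is an arbitrary assumption you would have to tune.</p>
<pre><code># Sketch of the "smaller glyphs are the cents" heuristic described above.
def reconstruct_price(boxes, ratio_threshold=0.75):
    # boxes: list of (char, x, height) for the digit glyphs of one price tag
    boxes = sorted(boxes, key=lambda b: b[1])      # read from left to right
    max_h = max(h for _, _, h in boxes)            # tallest glyphs = integer part
    integer_part, cents_part = [], []
    for char, _, h in boxes:
        # "if the height of those letters is smaller, it's cents"
        if h &lt; ratio_threshold * max_h:
            cents_part.append(char)
        else:
            integer_part.append(char)
    return "".join(integer_part) + "," + "".join(cents_part)

# Hypothetical boxes for the tag above: '1' is tall, '8' and '9' are smaller.
print(reconstruct_price([("1", 10, 40), ("8", 30, 28), ("9", 45, 28)]))   # 1,89
</code></pre>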
2018-04-23 12:32:57.560000+00:00
2018-04-23 12:32:57.560000+00:00
null
null
49,535,840
<p>I am trying to detect the text on these price labels, which is always clearly preprocessed. Although it can easily read the text written above it, it fails to detect the price values. I am using the Python bindings <a href="https://github.com/madmaze/pytesseract/" rel="noreferrer">pytesseract</a>, although it also fails when reading from the CLI commands. Most of the time it tries to recognize the price part as just one or two characters.</p> <p><strong>Sample 1:</strong></p> <p><img src="https://i.stack.imgur.com/dKC6k.png"></p> <pre><code>tesseract D:\tesseract\tesseract_test_images\test.png output </code></pre> <p>And the output for the sample image is this.</p> <blockquote> <p>je Beutel</p> <p>13</p> </blockquote> <p>However, if I crop and stretch the price so that the digits look separated and have the same font size, the output is just fine.</p> <p><strong>Processed image (cropped and shrunk price):</strong></p> <p><img src="https://i.stack.imgur.com/UNeT2.png"></p> <blockquote> <p>je Beutel</p> <p>1,89</p> </blockquote> <p>How do I get Tesseract OCR to work as intended, given that I will be going over a lot of similar images? <strong>Edit:</strong> Added more price tags:<br> <img src="https://i.stack.imgur.com/8bPOH.jpg" alt="sample2"><img src="https://i.stack.imgur.com/K6krq.jpg" alt="sample3"><img src="https://i.stack.imgur.com/HTYux.jpg" alt="sample4"><a href="https://i.stack.imgur.com/rUt6x.jpg" rel="noreferrer">sample5</a> <a href="https://i.stack.imgur.com/Ar8cE.jpg" rel="noreferrer">sample6</a> <a href="https://i.stack.imgur.com/oCD2V.jpg" rel="noreferrer">sample7</a> </p>
2018-03-28 13:25:31.867000+00:00
2018-04-23 12:32:57.560000+00:00
2018-04-20 04:53:19.337000+00:00
python|opencv|ocr|tesseract
['https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract2', 'https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract-4.00', 'https://arxiv.org/abs/1604.03628']
3
19,510,366
<p>Very nice work on boundary detection. I used to work on similar segmentation problems.</p> <h3>Theory:</h3> <p>Once you have obtained your edge map, where <code>e(i,j)</code> indicates the &quot;edge-iness&quot; degree of pixel <code>i,j</code>, you would like a segmentation of the image that respects the edge map as much as possible.<br /> In order to formulate this &quot;respect the edge map&quot; in a more formal fashion I suggest you look at the <a href="http://www.wisdom.weizmann.ac.il/%7Ebagon/pub/LargeScaleCorrClust_2011.pdf" rel="noreferrer" title="Bagon and Galun, large scale correlation clustering, arXiv 2011"><strong>Correlation clustering (CC)</strong></a> functional:<br /> The CC functional assesses the quality of a segmentation based on pair-wise relations between neighboring pixels: whether they should be in the same cluster (no edge between them) or in different clusters (there is an edge between them).<br /> Take a look at the example in section 7.1 of the <a href="http://www.wisdom.weizmann.ac.il/%7Ebagon/pub/LargeScaleCorrClust_2011.pdf" rel="noreferrer" title="Bagon and Galun, large scale correlation clustering, arXiv 2011">aforementioned</a> paper.<br /> CC is used for similar segmentation problems in medical (neuronal) imaging as well, see e.g., <a href="http://www.andres.sc/publications/andres2012globally.pdf" rel="noreferrer" title="Andres et al, Globally Optimal Closed-Surface Segmentation for Connectomics, ECCV 2012">here</a>.</p> <hr /> <h3>Practice</h3> <p>Once you convince yourself that CC is indeed an appropriate formulation for your problem, there is still the question of how exactly to convert your binary edge map into an affinity matrix that CC can process. Bear in mind that CC needs as an input a (usually sparse) adjacency matrix with positive entries for pairs of pixels assumed to belong to the same segment, and negative entries for pairs of pixels assumed to belong to different segments.</p> <p>Here's my suggestion:</p> <ol> <li><p>The edges in your edge map look quite thick and are not well localized. I suggest non-max suppression, or morphological thinning, as a pre-processing stage.</p> </li> <li><p>Once you have better localized edges, you ignore the &quot;edge&quot; pixels and only work with the &quot;non-edge&quot; pixels; let's call them &quot;active&quot;.<br /> Two active pixels that are next to each other: there is no &quot;edge&quot; pixel between them - they should be together. So the adjacency matrix for immediate neighbors should have positive entries.<br /> Consider three pixels on a line, where the two endpoints are &quot;active&quot; pixels: if the middle one is an edge then the two active pixels should not belong to the same cluster - the corresponding entries in the adjacency matrix should be negative. If the middle pixel is also active then the corresponding entries in the adjacency matrix should be positive.</p> </li> <li><p>Considering all possible neighboring pairs and triplets (inducing a 24-connected grid graph) allows you to construct an affinity matrix with positive and negative entries suitable for CC (a small sketch of this construction follows below).</p> </li> <li><p>Given such a matrix you should search for the segmentation with the best CC score (optimization stage). I have Matlab code for this <a href="http://www.wisdom.weizmann.ac.il/%7Ebagon/matlab.html#LSCC" rel="noreferrer" title="Bagon, large scale correlation clustering - matlab code">here</a>. You can also use the excellent <a href="http://hci.iwr.uni-heidelberg.de/opengm2" rel="noreferrer" title="OpenGM - Discrete graphical models in C++">openGM</a> package.</p> </li> <li><p>The optimization will result in a partition of the active pixels only; you can map it back to the input image domain, leaving the edge pixels unassigned to any segment.</p> </li> </ol>
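<p>As a rough illustration of steps 1-3 (and not the Matlab or openGM code linked above), a sparse affinity matrix could be assembled from a thinned binary edge map along the following lines. The +1/-1 weights and the restriction to horizontal/vertical pairs are simplifying assumptions; the optimization stage itself is left to the linked CC solvers.</p>
<pre><code>import numpy as np
from scipy.sparse import lil_matrix

def edge_map_to_affinities(edge):
    """Sketch: build a sparse affinity matrix from a binary edge map.

    edge[i, j] == True marks an "edge" pixel; all other pixels are "active".
    Adjacent active pixels get a positive entry (same segment), active pixels
    separated by a single edge pixel get a negative entry (different segments).
    """
    h, w = edge.shape
    idx = lambda i, j: i * w + j
    A = lil_matrix((h * w, h * w))
    for i in range(h):
        for j in range(w):
            if edge[i, j]:
                continue                            # only "active" pixels get affinities
            for di, dj in ((0, 1), (1, 0)):         # immediate neighbors
                i1, j1 = i + di, j + dj
                if i1 &lt; h and j1 &lt; w and not edge[i1, j1]:
                    A[idx(i, j), idx(i1, j1)] = 1.0
            for di, dj in ((0, 2), (2, 0)):         # triplets along a line
                i2, j2 = i + di, j + dj
                im, jm = i + di // 2, j + dj // 2
                if i2 &lt; h and j2 &lt; w and not edge[i2, j2]:
                    # sign depends on whether the middle pixel is an edge
                    A[idx(i, j), idx(i2, j2)] = -1.0 if edge[im, jm] else 1.0
    return A.tocsr()

# Tiny usage example: a vertical edge splitting a 3x4 image into two segments.
edge = np.zeros((3, 4), dtype=bool)
edge[:, 2] = True
print(edge_map_to_affinities(edge).toarray())
</code></pre>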
2013-10-22 06:18:53.420000+00:00
2017-07-11 13:06:25.560000+00:00
2020-06-20 09:12:55.060000+00:00
null
18,972,932
<p>I have trained a classifier in Python for classifying pixels in an image of cells as edge or non edge. I've used it successfully on a few image datasets but am running into problems with this particular dataset, which seems pretty ambiguous even to the human eye. I don't know of any existing automated technique that can segment it accurately. </p> <p>After prediction I obtain the following image: </p> <p><img src="https://i.stack.imgur.com/9JMva.jpg" alt="Prediction image"></p> <p>I am relatively new to image processing and am unsure with how to proceed with actually obtaining the final segmentations of the cells. I have briefly tried a few different techniques - namely Hough circular transform, level sets, skeletonization, contour finding - but none have really done the trick. Am I just not tuning the parameters correctly or is there a better technique out there? </p> <p>Here are the correct outlines, by the way, for reference. </p> <p><img src="https://i.stack.imgur.com/ijjnj.png" alt="Correct outlines"></p> <p>And the original image:</p> <p><img src="https://i.stack.imgur.com/urhGH.png" alt="enter image description here"></p> <p>And the continuous probability map: </p> <p><img src="https://i.imgur.com/reoh0pb.png" alt="Continuous probability map"></p>
2013-09-24 04:24:08.473000+00:00
2020-05-13 07:32:32.760000+00:00
2020-05-13 07:32:32.760000+00:00
python|image-processing|computer-vision|image-segmentation|edge-detection
['http://www.wisdom.weizmann.ac.il/%7Ebagon/pub/LargeScaleCorrClust_2011.pdf', 'http://www.wisdom.weizmann.ac.il/%7Ebagon/pub/LargeScaleCorrClust_2011.pdf', 'http://www.andres.sc/publications/andres2012globally.pdf', 'http://www.wisdom.weizmann.ac.il/%7Ebagon/matlab.html#LSCC', 'http://hci.iwr.uni-heidelberg.de/opengm2']
5
64,250,855
<p>In short, it is because <code>(+.)</code> and <code>(+)</code> as well as <code>1</code> and <code>1.</code> denote different values despite the fact that they look alike. Those values have different types, different representations, and different semantics. The feature of a language that makes <code>(+)</code> and <code>1</code> work the same in different contexts is called ad-hoc polymorphism and it is not present in OCaml. OCaml is not trying to infer which value you're going to use from the textual representation of your program and its type, instead, it infers what is the type of your program based on the values that you're using and how you're using them. It is an important distinction of ML wrt some other languages that move the other way around, i.e., they have the user-specified type and then infer the correct values that match that type and check that the values are used correctly. In ML the input is the program and the output is the type of that program, which is (a) used to prove that the program is well-typed and will not have any type errors in runtime and, (b), is used to generate native and performant code with all inferred types erased. It is also important to understand that type inference in OCaml is not a convenience utility that lets you omit the types when they could be inferred (like in C++'s <code>auto</code> or local type inference in Scala). Instead, it is a principal step in the compilation process and in the semantics of the language, as OCaml must be able to infer the type for any program. The types that we occasionally write in OCaml programs are used only as constraints and are never taken as inputs. Moreover, type annotations never change the behavior of the program. Well, unless GADTs come into play, but this is a completely different story.</p> <p>For the deeper insight, we should recall that the type inference algorithm underneath OCaml is <em>syntax-driven</em> and <em>declarative</em>. The syntax-driven means that the program syntax fully defines the type of that program. Moreover, the algorithm ensures that if the type exists (i.e., the program is well-typed), then this inferred type is unique and principal. I.e., either there are no other types that represent this program or all other types are instances of the inferred type. The declarative means that the rules of how types are assigned to the programs are described with <a href="https://en.wikipedia.org/wiki/Type_rule" rel="noreferrer">declarative type rules</a>. This algorithm gives the formal translation of programs to types, which enables/ensures <a href="https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence" rel="noreferrer">Curry-Howard</a> correspondence, a deep connection between computer programs and mathematical proofs. That enables us to speak about program correctness and the other way around, about truth correctness, i.e., to prove with programs that our theorems are correct. This brings us back to the history of OCaml/ML, which was originally a Meta Language (ML) for <a href="https://en.wikipedia.org/wiki/Logic_for_Computable_Functions" rel="noreferrer">LCF</a> (Logic for Computable Functions) theorem prover.</p> <p>Given that we agree that we don't want to lose that important property of the language, the question still remains open, why couldn't we implement ad-hoc polymorphism that is syntax-driven and declarable. Well, in fact, we can, for example, Haskell has one. 
And there is some work on adding <a href="https://arxiv.org/pdf/1512.01895.pdf" rel="noreferrer">modular implicits</a> to OCaml. But it all comes with the tradeoffs and it would be, at the end, a completely different language. So far, in OCaml, each value has a type or a type scheme and there is no type in the OCaml runtime system that will represent a <code>+</code> operator that works for ints and floats.</p> <p>With that said, it would be dishonest to OCaml not to say that you can define your own <code>(+)</code> operator that will have type <code>number -&gt; number -&gt; number</code>, where the <code>number</code> is an existential <code>type number = Num : 'a typeid * 'a -&gt; number</code> with a separate table of operations for each <code>typeid</code> stored in some hidden hashtable. But this would be dynamic typing (see how the <code>typeid</code> is packed in each value) and it is completely different from the static typing, which gives you guarantees about your program before it runs (in our example this <code>(+)</code> function could fail in runtime and our typing rules are not declarative but are intrinsic to the implementation of the <code>(+)</code> operator).</p>
2020-10-07 19:16:25.260000+00:00
2020-10-07 19:31:02.020000+00:00
2020-10-07 19:31:02.020000+00:00
null
64,244,351
<p>OCaml has distinct syntax for BOTH:</p> <ul> <li>float operations vs. int operations. Float operations end in dot: <code>+.</code>.</li> <li>float literals vs. int literals. Floats literals end in dot: <code>3.</code>.</li> </ul> <pre class="lang-ml prettyprint-override"><code># 3 + 3;; - : int = 6 # 3. +. 3.;; - : float = 6. # 3. + 3;; Error: This expression has type float but an expression was expected of type int </code></pre> <p>I can see using one of these mechanisms to disambiguate, but why are two needed <em>always</em>? For example, I could see a case where <code>.</code> is <em>sometimes</em> needed at the end of a literal, but not why it is needed for <code>3 +. 3</code> where OCaml could figure out that we want float arithmetic because we're saying we want float arithmetic.</p> <p>I'm looking for specific technical justification based on interaction with other language features–not opinion or arguments from ergonomics.</p>
2020-10-07 12:48:28.493000+00:00
2020-10-07 19:31:02.020000+00:00
2020-10-07 15:57:18.340000+00:00
compiler-errors|ocaml|static-typing
['https://en.wikipedia.org/wiki/Type_rule', 'https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence', 'https://en.wikipedia.org/wiki/Logic_for_Computable_Functions', 'https://arxiv.org/pdf/1512.01895.pdf']
4
14,519,455
<p>This paper seems to have quite a discussion: <a href="http://arxiv.org/abs/1109.2323" rel="noreferrer">An inventory of three-dimensional Hilbert space-filling curves</a>.</p> <p>Quoting from the abstract:</p> <blockquote> <p>Hilbert's two-dimensional space-filling curve is appreciated for its good locality properties for many applications. However, it is not clear what is the best way to generalize this curve to filling higher-dimensional spaces. We argue that the properties that make Hilbert's curve unique in two dimensions, are shared by 10694807 structurally different space-filling curves in three dimensions.</p> </blockquote>
2013-01-25 10:03:21.610000+00:00
2013-01-25 10:18:15.463000+00:00
2013-01-25 10:18:15.463000+00:00
null
14,519,267
<p>I'd like to map points in an RGB color cube to a one-dimensional list in Python, in a way that makes the list of colors look nice and continuous.</p> <p>I believe using a 3D Hilbert space-filling curve would be a good way to do this, but I've searched and haven't found very helpful resources for this problem. Wikipedia in particular only provides example code for generating 2D curves.</p>
2013-01-25 09:53:26.677000+00:00
2018-04-23 17:31:26.873000+00:00
null
python|algorithm|3d|hilbert-curve
['http://arxiv.org/abs/1109.2323']
1
64,775,915
<p>I once trained a siamese network where I realised that if I used higher learning rates the training loss went down smoothly (as expected, since that is what the neural network is optimizing), but I saw huge ups and downs in the validation loss.</p> <p>This had never happened before when I was using a lower learning rate (on the order of 1e-05). I believe that the training loss can be misleading, since recent papers have shown that large neural networks (i.e., networks with more capacity) can learn random training data flawlessly while performing far worse on validation. I have attached the paper for your reference below, which clearly explains this phenomenon related to overfitting. So one can't judge the model's overall performance by observing the training data alone.</p> <p>Though the other parameters mentioned above also matter, I guess one should start by tweaking the learning rate in such a case before tweaking the model itself.</p> <p>Link for the paper: <a href="https://arxiv.org/pdf/1611.03530" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.03530</a></p> <p>Please correct me if I am wrong...</p>
2020-11-10 19:58:07.270000+00:00
2020-11-10 19:58:07.270000+00:00
null
null
55,894,132
<p>I am currently working on a small binary classification project using the new keras API in tensorflow. The problem is a simplified version of the Higgs Boson challenge posted on Kaggle.com a few years back. The dataset shape is 2000x14, where the first 13 elements of each row form the input vector, and the 14th element is the corresponding label. Here is a sample of said dataset:</p> <pre><code>86.043,52.881,61.231,95.475,0.273,77.169,-0.015,1.856,32.636,202.068, 2.432,-0.419,0.0,0 138.149,69.197,58.607,129.848,0.941,120.276,3.811,1.886,71.435,384.916,2.447,1.408,0.0,1 137.457,3.018,74.670,81.705,5.954,775.772,-8.854,2.625,1.942,157.231,1.193,0.873,0.824,1 </code></pre> <p>I am relatively new to machine learning and tensorflow, but I am familiar with the higher level concepts such as loss functions, optimizers and activation functions. I have tried building various models inspired by examples of binary classification problems found online, but I am having difficulties with training the model. During training, the loss somethimes increases within the same epoch, leading to unstable learning. The accuracy hits a plateau around 70%. I have tried changing the learning rate and other hyperparameters but to no avail. In comparison, I have hardcoded a fully-connected feed forward neural net that reaches around 80-85% accuracy on the same problem.</p> <p>Here is my current model:</p> <pre><code>import tensorflow as tf from tensorflow.python.keras.layers.core import Dense import numpy as np import pandas as pd def normalize(array): return array/np.linalg.norm(array, ord=2, axis=1, keepdims=True) x_train = pd.read_csv('data/labeled.csv', sep='\s+').iloc[:1800, :-1].values y_train = pd.read_csv('data/labeled.csv', sep='\s+').iloc[:1800, -1:].values x_test = pd.read_csv('data/labeled.csv', sep='\s+').iloc[1800:, :-1].values y_test = pd.read_csv('data/labeled.csv', sep='\s+').iloc[1800:, -1:].values x_train = normalize(x_train) x_test = normalize(x_test) model = tf.keras.Sequential() model.add(Dense(9, input_dim=13, activation=tf.nn.sigmoid) model.add(Dense(6, activation=tf.nn.sigmoid)) model.add(Dense(1, activation=tf.nn.sigmoid)) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=50) model.evaluate(x_test, y_test) </code></pre> <p>As mentionned, some of the epochs start with a higher accuracy than they finish with, leading to unstable learning. </p> <pre><code> 32/1800 [..............................] - ETA: 0s - loss: 0.6830 - acc: 0.5938 1152/1800 [==================&gt;...........] - ETA: 0s - loss: 0.6175 - acc: 0.6727 1800/1800 [==============================] - 0s 52us/step - loss: 0.6098 - acc: 0.6861 Epoch 54/250 32/1800 [..............................] - ETA: 0s - loss: 0.5195 - acc: 0.8125 1376/1800 [=====================&gt;........] - ETA: 0s - loss: 0.6224 - acc: 0.6672 1800/1800 [==============================] - 0s 43us/step - loss: 0.6091 - acc: 0.6850 Epoch 55/250 </code></pre> <p>What could be the cause of these oscillations in learning in such a simple model? Thanks</p> <hr> <p>EDIT:</p> <p>I have followed some suggestions from the comments and have modified the model accordingly. 
It now looks more like this:</p> <pre><code>model = tf.keras.Sequential() model.add(Dense(250, input_dim=13, activation=tf.nn.relu)) model.add(Dropout(0.4)) model.add(Dense(200, activation=tf.nn.relu)) model.add(Dropout(0.4)) model.add(Dense(100, activation=tf.nn.relu)) model.add(Dropout(0.3)) model.add(Dense(50, activation=tf.nn.relu)) model.add(Dense(1, activation=tf.nn.sigmoid)) model.compile(optimizer='adadelta', loss='binary_crossentropy', metrics=['accuracy']) </code></pre>
2019-04-28 20:02:19.510000+00:00
2020-11-10 19:58:07.270000+00:00
2019-04-29 03:23:06.563000+00:00
python|tensorflow|machine-learning|classification
['https://arxiv.org/pdf/1611.03530']
1
55,895,411
<h1>Oscillations</h1> <p>Those are most definitely connected to the size of your network; each batch coming through changes your neural network considerably as it does not have enough neurons to represent the relationships. </p> <p>It works fine for one batch, updates the weights for another and changes previously learned connections effectively "unlearning". That's why the loss is also jumpy as the network tries to accommodate to the task you have given it.</p> <p>Sigmoid activation and it's saturation may be causing you troubles as well (as the gradient is squashed into small region and most gradient updates are zero). Quick fix - use <code>ReLU</code> activation as described below.</p> <p>Additionally, neural network <strong>does not</strong> care about accuracy, only about minimizing the loss value (which it tries to do most of the time). Say it predicts probabilities: <code>[0.55, 0.55, 0.55, 0.55, 0.45]</code> for classes <code>[1, 1, 1, 1, 0]</code> so it's accuracy is <code>100%</code> but it's pretty uncertain. Now, let's say the next update pushes the network into probabilities predictions: <code>[0.8, 0.8, 0.8, 0.8, 0.55]</code>. In such case, loss would drop, <strong>but so would accuracy</strong>, from <code>100%</code> to <code>80%</code>.</p> <p><strong>BTW.</strong> You may want to check the scores for logistic regression and see how it performs on this task (so a single layer with output only).</p> <h1>Some things to consider</h1> <h2>1. Size of your neural network</h2> <p>It's always good to start with simple model and grow it bigger if needed (wouldn't advise the other way around). You may want to check on a really small subsample of data (say two/three batches, 160 elements or so) whether your model can learn the relationship between input and output.</p> <p>In your case I doubt the model will be able to learn those relationships with the size of layers you are providing. Try increasing the size, especially in the earlier layers (maybe <code>50</code>/<code>100</code> for starters) and see how it behaves.</p> <h2>2. Activation function</h2> <p>Sigmoid easily saturates (small region where changes occur, most of the values are almost 0 or 1). It is rarely used nowadays as activation before bottleneck (final layer). Most common nowadays is <a href="https://www.tensorflow.org/api_docs/python/tf/keras/activations/relu" rel="noreferrer"><code>ReLU</code></a> which is not prone to saturation (at least when the input is positive) or it's variations. This might help as well.</p> <h2>3. Learning rate</h2> <p>For each dataset and each neural network model optimal choice of learning rate is different. Defaults usually work so-so, but when the learning rate is too small it might get stuck in the local minima (and it's generalization will be worse), while the value being too big will make your network unstable (loss will highly oscillate).</p> <p>You may want to read up on <a href="https://www.datacamp.com/community/tutorials/cyclical-learning-neural-nets" rel="noreferrer">Cyclical Learning Rate</a> (or in the original <a href="https://arxiv.org/abs/1506.01186" rel="noreferrer">research paper by Leslie N. Smith</a>. In there you can find info on how to choose a good learning rate heuristically and setup some simple learning rate schedulers. Those techniques were used by <a href="https://www.fast.ai/" rel="noreferrer">fast.ai</a> teams in CIFAR10 competitions with really good results. 
On their site <a href="https://docs.fast.ai/callbacks.one_cycle.html" rel="noreferrer">or in documentation of their library</a> you can find <code>One Cycle Policy</code> and learning rate finder (based on the work of aforementioned researcher). This should get you started in this realm I think.</p> <h2>4. Normalization</h2> <p>Not sure, but this normalization looks pretty non-standard to me (never seen it done like that). Good normalization is the basis for neural network convergence (unless the data is already pretty close to normal distribution). Usually one subtracts the mean and divides by standard deviation for each feature. You can check some schemes in <a href="https://scikit-learn.org/stable/modules/preprocessing.html" rel="noreferrer"><code>scikit-learn</code> library</a> for example.</p> <h2>5. Depth</h2> <p>This shouldn't be an issue but if your input is complicated you should consider adding more layers to your neural network (right now it's almost definitely too thin). This would allow it to learn more abstract features and transform the input space more.</p> <h1>Overfitting</h1> <p>When the network overfits to the data you may employ some regularization techniques (hard to tell what might help, you should test it on your own), some of those include:</p> <ul> <li>Higher learning rate with batch normalization smoothing out learning space.</li> <li>Smaller number of neurons (relationships learned by the network would intuitively have to be more data distribution representative).</li> <li>Smaller batch size have regularization effect as well.</li> <li>Dropout, though it's hard to pin-point good dropout rate. Would resort to it as the last one. Furthermore it is known to collide with batch normalization techniques (though there are techniques to combine them, see <a href="https://arxiv.org/abs/1801.05134" rel="noreferrer">here</a> or <a href="https://stackoverflow.com/questions/39691902/ordering-of-batch-normalization-and-dropout">here</a>, you may find more over the web).</li> <li>L1/L2 regularization with the second being much more widely applied (unless you have specific knowledge indicating L1 might perform better)</li> <li>Data augmentation - I would try this one first, mostly because of curiosity. As your features are continuous you may want to add some random noise on batch-to-batch basis generated from gaussian distribution. Noise would have to be small, standard deviation around <code>1e-2</code> or <code>1e-3</code>, you would have to test those values experimentally.</li> <li>Early stopping - after <code>N</code> epochs without improvement on the validation set you end your training. Pretty common technique, should be used almost every time. Remember to save the best model on validation set and set <code>patience</code> (<code>N</code> mentioned above) to some moderately sized value (do not set patience to 1 epoch or so, neural network may easily improve after 5 or so).</li> </ul> <p>Plus there are tons of other techniques you may find. Check what makes intuitive sense and which one you like the most and test how it performs.</p>
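<p>For the normalization and early-stopping points specifically, a minimal sketch could look like the snippet below. It reuses the <code>x_train</code>/<code>x_test</code>/<code>model</code> objects from the question; the patience value and the validation split are arbitrary assumptions, not tuned settings.</p>
<pre><code># Per-feature standardization plus early stopping, as suggested above.
import tensorflow as tf
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()                       # subtract mean, divide by std, per feature
x_train_std = scaler.fit_transform(x_train)     # fit the scaler on training data only
x_test_std = scaler.transform(x_test)

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=10,                                # N epochs without improvement
    restore_best_weights=True)                  # keep the best model seen on validation

model.fit(x_train_std, y_train,
          validation_split=0.1,
          epochs=250,
          callbacks=[early_stop])
model.evaluate(x_test_std, y_test)
</code></pre>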
2019-04-28 23:27:01.827000+00:00
2019-04-29 08:05:17.477000+00:00
2019-04-29 08:05:17.477000+00:00
null
55,894,132
<p>I am currently working on a small binary classification project using the new keras API in tensorflow. The problem is a simplified version of the Higgs Boson challenge posted on Kaggle.com a few years back. The dataset shape is 2000x14, where the first 13 elements of each row form the input vector, and the 14th element is the corresponding label. Here is a sample of said dataset:</p> <pre><code>86.043,52.881,61.231,95.475,0.273,77.169,-0.015,1.856,32.636,202.068, 2.432,-0.419,0.0,0 138.149,69.197,58.607,129.848,0.941,120.276,3.811,1.886,71.435,384.916,2.447,1.408,0.0,1 137.457,3.018,74.670,81.705,5.954,775.772,-8.854,2.625,1.942,157.231,1.193,0.873,0.824,1 </code></pre> <p>I am relatively new to machine learning and tensorflow, but I am familiar with the higher level concepts such as loss functions, optimizers and activation functions. I have tried building various models inspired by examples of binary classification problems found online, but I am having difficulties with training the model. During training, the loss somethimes increases within the same epoch, leading to unstable learning. The accuracy hits a plateau around 70%. I have tried changing the learning rate and other hyperparameters but to no avail. In comparison, I have hardcoded a fully-connected feed forward neural net that reaches around 80-85% accuracy on the same problem.</p> <p>Here is my current model:</p> <pre><code>import tensorflow as tf from tensorflow.python.keras.layers.core import Dense import numpy as np import pandas as pd def normalize(array): return array/np.linalg.norm(array, ord=2, axis=1, keepdims=True) x_train = pd.read_csv('data/labeled.csv', sep='\s+').iloc[:1800, :-1].values y_train = pd.read_csv('data/labeled.csv', sep='\s+').iloc[:1800, -1:].values x_test = pd.read_csv('data/labeled.csv', sep='\s+').iloc[1800:, :-1].values y_test = pd.read_csv('data/labeled.csv', sep='\s+').iloc[1800:, -1:].values x_train = normalize(x_train) x_test = normalize(x_test) model = tf.keras.Sequential() model.add(Dense(9, input_dim=13, activation=tf.nn.sigmoid) model.add(Dense(6, activation=tf.nn.sigmoid)) model.add(Dense(1, activation=tf.nn.sigmoid)) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=50) model.evaluate(x_test, y_test) </code></pre> <p>As mentionned, some of the epochs start with a higher accuracy than they finish with, leading to unstable learning. </p> <pre><code> 32/1800 [..............................] - ETA: 0s - loss: 0.6830 - acc: 0.5938 1152/1800 [==================&gt;...........] - ETA: 0s - loss: 0.6175 - acc: 0.6727 1800/1800 [==============================] - 0s 52us/step - loss: 0.6098 - acc: 0.6861 Epoch 54/250 32/1800 [..............................] - ETA: 0s - loss: 0.5195 - acc: 0.8125 1376/1800 [=====================&gt;........] - ETA: 0s - loss: 0.6224 - acc: 0.6672 1800/1800 [==============================] - 0s 43us/step - loss: 0.6091 - acc: 0.6850 Epoch 55/250 </code></pre> <p>What could be the cause of these oscillations in learning in such a simple model? Thanks</p> <hr> <p>EDIT:</p> <p>I have followed some suggestions from the comments and have modified the model accordingly. 
It now looks more like this:</p> <pre><code>model = tf.keras.Sequential() model.add(Dense(250, input_dim=13, activation=tf.nn.relu)) model.add(Dropout(0.4)) model.add(Dense(200, activation=tf.nn.relu)) model.add(Dropout(0.4)) model.add(Dense(100, activation=tf.nn.relu)) model.add(Dropout(0.3)) model.add(Dense(50, activation=tf.nn.relu)) model.add(Dense(1, activation=tf.nn.sigmoid)) model.compile(optimizer='adadelta', loss='binary_crossentropy', metrics=['accuracy']) </code></pre>
2019-04-28 20:02:19.510000+00:00
2020-11-10 19:58:07.270000+00:00
2019-04-29 03:23:06.563000+00:00
python|tensorflow|machine-learning|classification
['https://www.tensorflow.org/api_docs/python/tf/keras/activations/relu', 'https://www.datacamp.com/community/tutorials/cyclical-learning-neural-nets', 'https://arxiv.org/abs/1506.01186', 'https://www.fast.ai/', 'https://docs.fast.ai/callbacks.one_cycle.html', 'https://scikit-learn.org/stable/modules/preprocessing.html', 'https://arxiv.org/abs/1801.05134', 'https://stackoverflow.com/questions/39691902/ordering-of-batch-normalization-and-dropout']
8
20,450,946
<p>If you've got 6GB RAM you've got a 64bit machine, so the easiest solution is probably to just up your RAM.</p> <p>Otherwise, crosspost of this: <a href="https://scicomp.stackexchange.com/questions/1681/what-is-the-fastest-way-to-calculate-the-largest-eigenvalue-of-a-general-matrix/7487#7487">https://scicomp.stackexchange.com/questions/1681/what-is-the-fastest-way-to-calculate-the-largest-eigenvalue-of-a-general-matrix/7487#7487</a></p> <p>There has been some good research on this recently. The new approaches use "randomized algorithms" which only require a few reads of your matrix to get good accuracy on the largest eigenvalues. This is in contrast to power iterations which require several matrix-vector multiplications to reach high accuracy.</p> <p>You can read more about the new research here:</p> <p><a href="http://math.berkeley.edu/~strain/273.F10/martinsson.tygert.rokhlin.randomized.decomposition.pdf" rel="nofollow noreferrer">http://math.berkeley.edu/~strain/273.F10/martinsson.tygert.rokhlin.randomized.decomposition.pdf</a></p> <p><a href="http://arxiv.org/abs/0909.4061" rel="nofollow noreferrer">http://arxiv.org/abs/0909.4061</a></p> <p>This code will do it for you:</p> <p><a href="http://cims.nyu.edu/~tygert/software.html" rel="nofollow noreferrer">http://cims.nyu.edu/~tygert/software.html</a></p> <p><a href="https://bitbucket.org/rcompton/pca_hgdp/raw/be45a1d9a7077b60219f7017af0130c7f43d7b52/pca.m" rel="nofollow noreferrer">https://bitbucket.org/rcompton/pca_hgdp/raw/be45a1d9a7077b60219f7017af0130c7f43d7b52/pca.m</a></p> <p><a href="http://code.google.com/p/redsvd/" rel="nofollow noreferrer">http://code.google.com/p/redsvd/</a></p> <p><a href="https://cwiki.apache.org/MAHOUT/stochastic-singular-value-decomposition.html" rel="nofollow noreferrer">https://cwiki.apache.org/MAHOUT/stochastic-singular-value-decomposition.html</a></p> <p>If your language of choice isn't in there you can roll your own randomized SVD pretty easily; it only requires a matrix vector multiplication followed by a call to an off-the-shelf SVD.</p>
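<p>Since the question mentions scikit-learn, here is a short sketch of that idea using the built-in randomized SVD, which works directly on the sparse CountVectorizer output without densifying it. The number of components is an arbitrary assumption; <code>X</code> stands for the sparse term-document matrix from the question.</p>
<pre><code># Randomized/truncated SVD on a large sparse matrix (a.k.a. LSA in text settings).
from sklearn.decomposition import TruncatedSVD
from sklearn.utils.extmath import randomized_svd

svd = TruncatedSVD(n_components=100, algorithm='randomized', random_state=0)
X_reduced = svd.fit_transform(X)                 # X: sparse (40845 x 218904) matrix
print(X_reduced.shape, svd.explained_variance_ratio_.sum())

# Lower level: just the leading singular triplets via randomized projections.
U, S, Vt = randomized_svd(X, n_components=10, random_state=0)
</code></pre>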
2013-12-08 07:29:05.100000+00:00
2013-12-08 07:29:05.100000+00:00
2017-04-13 12:53:54.717000+00:00
null
20,450,051
<p>I've got a document classification problem with only 2 classes, and my training dataset matrix, after the CountVectorizer, has size (40,845 x 218,904) (unigrams). When considering trigrams, it can reach up to (40,845 x 3,931,789). Is there a way to perform PCA on such a dataset without getting memory or sparse-dataset errors? I'm using Python scikit-learn on a 6GB machine.</p>
2013-12-08 04:55:16.407000+00:00
2013-12-08 07:29:05.100000+00:00
null
python-2.7|machine-learning|scikit-learn
['https://scicomp.stackexchange.com/questions/1681/what-is-the-fastest-way-to-calculate-the-largest-eigenvalue-of-a-general-matrix/7487#7487', 'http://math.berkeley.edu/~strain/273.F10/martinsson.tygert.rokhlin.randomized.decomposition.pdf', 'http://arxiv.org/abs/0909.4061', 'http://cims.nyu.edu/~tygert/software.html', 'https://bitbucket.org/rcompton/pca_hgdp/raw/be45a1d9a7077b60219f7017af0130c7f43d7b52/pca.m', 'http://code.google.com/p/redsvd/', 'https://cwiki.apache.org/MAHOUT/stochastic-singular-value-decomposition.html']
7
31,792,540
<p>This exact problem is discussed on Markus Wittmann's Blog, <a href="https://blogs.fau.de/wittmann/2013/02/mpi-node-local-rank-determination/" rel="nofollow">MPI Node-Local Rank determination</a>.</p> <p>There, three strategies are suggested:</p> <ol> <li><em>A naive, portable solution employs MPI_Get_processor_name or gethostname to create an unique identifier for the node and performs an MPI_Alltoall on it.</em> [...]</li> <li>[Method 2] <em>relies on MPI_Comm_split, which provides an easy way to split a communicator into subgroups (sub-communicators).</em> [...]</li> <li><em>Shared memory can be utilized, if available.</em> [...]</li> </ol> <p>For some working code (presumably LGPL licensed?), Wittmann links to <a href="http://git.rrze.uni-erlangen.de/gitweb/?p=apsm.git;a=blob;f=MpiNodeRank.cpp;hb=HEAD" rel="nofollow">MpiNodeRank.cpp</a> from the <a href="http://arxiv.org/abs/1302.4280" rel="nofollow">APSM library</a>.</p>
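<p>In case a quick, portable version of the first (hostname-based) strategy is enough, a minimal sketch with mpi4py could look like this; the tie-breaking by global rank order is my own choice and the snippet is untested.</p>
<pre><code># Determine the node number and the node-local rank from processor names
# (strategy 1 above: gather every rank's hostname and derive indices from it).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
name = MPI.Get_processor_name()

all_names = comm.allgather(name)            # one hostname per global rank
unique_nodes = sorted(set(all_names))       # identical ordering on every rank

node_number = unique_nodes.index(name)                                      # 0..3 in the example
local_rank = [r for r, n in enumerate(all_names) if n == name].index(rank)  # 0..1 in the example

print(rank, node_number, local_rank)
</code></pre>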
2015-08-03 16:40:58.537000+00:00
2015-08-03 16:40:58.537000+00:00
null
null
9,022,496
<p>Say I run a parallel program using MPI. The execution command</p> <pre><code>mpirun -n 8 -npernode 2 &lt;prg&gt; </code></pre> <p>launches 8 processes in total, that is, 2 processes per node and 4 nodes in total (OpenMPI 1.5). Here a node comprises 1 CPU (dual core) and the network interconnect between nodes is InfiniBand.</p> <p>Now, the rank number (or process number) can be determined with</p> <pre><code>int myrank; MPI_Comm_rank(MPI_COMM_WORLD, &amp;myrank); </code></pre> <p>This returns a number between 0 and 7.</p> <p>But how can I determine the node number (in this case a number between 0 and 3) and the process number within a node (a number between 0 and 1)?</p>
2012-01-26 17:30:08.990000+00:00
2019-03-19 22:40:50.757000+00:00
null
mpi|parallel-processing|openmpi
['https://blogs.fau.de/wittmann/2013/02/mpi-node-local-rank-determination/', 'http://git.rrze.uni-erlangen.de/gitweb/?p=apsm.git;a=blob;f=MpiNodeRank.cpp;hb=HEAD', 'http://arxiv.org/abs/1302.4280']
3
63,325,086
<p>I had the same question for a while and was wondering why bother with <code>a -&gt; m b</code> when mapping <code>a -&gt; b</code> to <code>m a -&gt; m b</code> looks more natural. This is similar to asking &quot;why do we need a monad given the functor&quot;.</p> <p>The short answer that I give to myself is that <code>a -&gt; m b</code> accounts for side effects or other complexities that you would not capture with a function <code>a -&gt; b</code>.</p> <p>Even better wording from <a href="https://arxiv.org/ftp/arxiv/papers/1803/1803.10195.pdf" rel="nofollow noreferrer">here</a> (highly recommended):</p> <blockquote> <p>monadic value <em>M</em> a can itself be seen as a computation. Monadic functions represent computations that are, in some way, non-standard, i.e. not naturally supported by the programming language. This can mean side effects in a pure functional language or asynchronous execution in an impure functional language. An ordinary function type cannot encode such computations and they are, instead, encoded using a datatype that has the monadic structure.</p> </blockquote> <p>I'd put the emphasis on <em>ordinary function type cannot encode such computations</em>, where ordinary is <code>a -&gt; b</code>.</p>
2020-08-09 10:16:01.270000+00:00
2020-08-09 10:16:01.270000+00:00
null
null
21,221,705
<p>This is the signature of the well-known >>= operator in Haskell</p> <pre><code>&gt;&gt;= :: Monad m =&gt; m a -&gt; (a -&gt; m b) -&gt; m b </code></pre> <p>The question is why the type of the function is</p> <pre><code>(a -&gt; m b) </code></pre> <p>instead of</p> <pre><code>(a -&gt; b) </code></pre> <p>I would say the latter one is more practical because it allows straightforward integration of existing "pure" functions in the monad being defined.</p> <p>On the contrary, it seems not difficult to write a general "adapter"</p> <pre><code>adapt :: (Monad m) =&gt; (a -&gt; b) -&gt; (a -&gt; m b) </code></pre> <p>but anyway I regard it as more probable that you already have <code>(a -&gt; b)</code> instead of <code>(a -&gt; m b)</code>.</p> <p><strong>Note.</strong> I explain what I mean by "practical" and "probable". If you haven't defined any monad in a program yet, then the functions you have are "pure" <code>(a -&gt; b)</code> and you will have 0 functions of the type <code>(a -&gt; m b)</code>, just because you have not yet defined <code>m</code>. If you then decide to define a monad <code>m</code>, the need arises to define new <code>a -&gt; m b</code> functions.</p>
2014-01-19 19:43:39.247000+00:00
2020-08-09 10:16:01.270000+00:00
2014-01-19 21:10:01.863000+00:00
haskell|monads
['https://arxiv.org/ftp/arxiv/papers/1803/1803.10195.pdf']
1
60,364,083
<p>In Drake PR <a href="https://github.com/RobotLocomotion/drake/pull/12053" rel="nofollow noreferrer">#12503</a>, the <code>ImplicitStribeck</code> code was refactored to reflect the notation in the <a href="https://arxiv.org/abs/1909.05700" rel="nofollow noreferrer">TAMSI arXiv paper</a>, and in <a href="https://github.com/RobotLocomotion/drake/pull/12361" rel="nofollow noreferrer">#12361</a> it was changed to provide a more helpful exception with troubleshooting tips:</p> <p><a href="https://github.com/RobotLocomotion/drake/blob/v0.15.0/multibody/plant/multibody_plant.cc#L1866-L1878" rel="nofollow noreferrer">https://github.com/RobotLocomotion/drake/blob/v0.15.0/multibody/plant/multibody_plant.cc#L1866-L1878</a></p> <p>Can you try out a later version (e.g. 0.15.0), and then try out the troubleshooting instructions there? (You've already tried changing step size in the simulator, but you may want to check on the stiffness of your overall system, etc.)</p>
2020-02-23 15:59:45.813000+00:00
2020-02-23 15:59:45.813000+00:00
null
null
60,349,658
<p>I have a Kuka arm and some objects set up in my simulation (very similar to the manipulation station example), and I have been running into the core dump error below whenever there is contact between the robot and the objects.</p> <blockquote> <p>"abort: Failure at multibody/plant/multibody_plant.cc:1640 in CalcImplicitStribeckResults(): condition 'info == ImplicitStribeckSolverResult::kSuccess' failed. Aborted (core dumped)"</p> </blockquote> <p>Decreasing the integration step size for the simulator did not help, so I ended up tracing back the error and commenting out the condition that causes it (<code>"DRAKE_DEMAND(info == ImplicitStribeckSolverResult::kSuccess);"</code>), which seems to core dump a lot less often.</p> <p>However, I am guessing that condition is there for a reason, so would commenting the line out cause any other issues in the simulation? What is the proper way to fix the core dump problem?</p>
2020-02-22 06:34:02.780000+00:00
2020-02-23 15:59:45.813000+00:00
2020-02-22 21:37:34.107000+00:00
drake
['https://github.com/RobotLocomotion/drake/pull/12053', 'https://arxiv.org/abs/1909.05700', 'https://github.com/RobotLocomotion/drake/pull/12361', 'https://github.com/RobotLocomotion/drake/blob/v0.15.0/multibody/plant/multibody_plant.cc#L1866-L1878']
4
49,587,577
<p>As soon as you find the answer, you'll be able to write a nice paper about it since that's kind of an open research question right now. :)</p> <p>To my best knowledge, your evaluation has to combine syntactic and semantic plausibility of the output, context-coherence, personality consistency and discourse dynamic progression. There's no consensus on how to optimally measure these, but there's plenty of current papers on the topic.</p> <p>Related introductory read by Liu et al: <a href="https://arxiv.org/abs/1603.08023" rel="nofollow noreferrer">https://arxiv.org/abs/1603.08023</a></p>
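<p>If you still want a quick automatic number while reading up on this, one widely used (and, as the paper above argues, rather unreliable for dialogue) word-overlap metric is BLEU. A minimal sketch with NLTK, using made-up sentences, would be:</p>
<pre><code># Sentence-level BLEU between a generated sentence and the dataset reference.
# The linked paper shows such overlap metrics correlate poorly with human
# judgement for dialogue, so treat this only as a rough signal.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "there is a cheap italian restaurant in the north".split()
generated = "a cheap italian restaurant is in the north".split()

score = sentence_bleu([reference], generated,
                      smoothing_function=SmoothingFunction().method1)
print(score)
</code></pre>
<p>For the training signal itself (the gradient-descent part of the question), the usual choice is token-level cross-entropy against the reference sentence, with metrics like the above reserved for evaluation.</p>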
2018-03-31 12:10:49.633000+00:00
2018-03-31 12:10:49.633000+00:00
null
null
49,587,447
<p>I was building a natural language generator using LSTM networks, but now I am stuck on how to evaluate my output. Suppose I have an input training dataset that consists of a dialogue act representation and the correct output for that particular dialogue act. Now suppose I generate an output sentence y from my LSTM network; how do I evaluate that sentence in comparison to the one in the dataset? I mean, is there any way to compare the outputs so that I can use gradient descent to train my weights?</p>
2018-03-31 11:53:34.263000+00:00
2018-03-31 12:10:49.633000+00:00
null
machine-learning|lstm
['https://arxiv.org/abs/1603.08023']
1
58,061,404
<p>You could try using Focal Loss.</p> <p>See: <a href="https://arxiv.org/pdf/1708.02002.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1708.02002.pdf</a></p> <p>In the Tensorflow Object Detection model file this would appear as follows:</p> <pre><code>loss { localization_loss { weighted_smooth_l1 { } } classification_loss { weighted_sigmoid_focal { gamma: 2.0 alpha: 0.25 } } classification_weight: 1.0 localization_weight: 1.0 } </code></pre>
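<p>For reference, the loss from that paper is FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t). A standalone sketch of the binary case (not the Object Detection API's internal implementation) could look like this:</p>
<pre><code>import numpy as np

def binary_focal_loss(y_true, p_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Sketch of the focal loss (binary case).

    Down-weights easy, well-classified examples so that rare or hard examples
    (e.g. the ~50 custom images) contribute more to the gradient.
    """
    p_pred = np.clip(p_pred, eps, 1.0 - eps)
    p_t = np.where(y_true == 1, p_pred, 1.0 - p_pred)        # probability of the true class
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)      # class-balancing term
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))

print(binary_focal_loss(np.array([1, 0, 1]), np.array([0.9, 0.1, 0.3])))
</code></pre>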
2019-09-23 11:24:23.380000+00:00
2019-09-23 11:24:23.380000+00:00
null
null
51,904,077
<p>Is there a good way to fine-tune a model for object detection (in particular, I am trying to use the TensorFlow Object Detection API) on a dataset with highly skewed data? I am trying to take some categories of COCO and combine them with my own custom data, but there are only about 50 images of my data.</p> <p>I have tried simply combining the COCO data and my own data, but it just predicts the COCO categories every time.</p>
2018-08-17 23:51:49.440000+00:00
2019-09-23 11:24:23.380000+00:00
null
tensorflow|machine-learning|object-detection-api
['https://arxiv.org/pdf/1708.02002.pdf']
1
44,738,170
<p>I've normally seen this when analyzing malware. The authors do this to prevent static analysis tools like <code>strings</code> from working. Additionally, such authors might load functions by using <code>dlopen</code> and <code>dlsym</code> to get the functions that they need.</p> <p>For example, in the code snippet below:</p> <pre><code>printf("Hello World"); </code></pre> <p>I would see the string "Hello World" in the output of <code>strings</code>, and by looking at the import section of the ELF file I'd see that the program is making use of <code>printf</code>. So without running the program it is possible to get a sense of what it is doing.</p> <p>Now let's assume that the author wrote a function <code>char* decrypt(int)</code>. This function takes an index into a string table (in which each string is encrypted) and returns the decrypted string. The above one line of code would now <strong>notionally</strong> look like</p> <pre><code>void* pfile = dlopen(decrypt(3)); void* pfunct = dlsym(pfile, decrypt(15)); pfunct(decrypt(5)); </code></pre> <p>Again, remember that the above is closer to pseudo-code than actually compilable code. Now in this case, using static analysis tools we would not see the strings or the function names (in the import section).</p> <p>Additionally, if we were attempting to reverse engineer the code we would need to take time to decrypt the strings and work through the logic to determine what functions are being called. It's not that this can't be done, but it will slow down the analyst, which means that it will take longer until a mitigation for the malware is created.</p> <p>And now to your question:</p> <blockquote> <p>Are there easy ways to hide text that is compiled into an ELF? Be that with easy compiler/linking options. I imagine a decoder could be inserted at main() but how could the text section be easily encoded?</p> </blockquote> <p>There is no compiler/linker option that does this. The author would need to choose to do this, write the appropriate functions (i.e., decrypt, as above) and write a utility to produce the encrypted forms of the strings. Additionally, as others have suggested, once this is done the entire application can be encrypted/compressed (think of a self-extracting zip file), thus the only thing you see initially with static analysis tools would be the stub to decrypt or decompress the file.</p> <p>See <a href="https://www.ioactive.com/pdfs/ZeusSpyEyeBankingTrojanAnalysis.pdf" rel="nofollow noreferrer">https://www.ioactive.com/pdfs/ZeusSpyEyeBankingTrojanAnalysis.pdf</a> for an example of this (granted, this is Windows based, but the techniques for encryption and dynamic loading of functions are the same; look at the section on API calls).</p>
<p>If interested, you can also see <a href="https://www.researchgate.net/publication/224180021_On_the_analysis_of_the_Zeus_botnet_crimeware_toolkit" rel="nofollow noreferrer">https://www.researchgate.net/publication/224180021_On_the_analysis_of_the_Zeus_botnet_crimeware_toolkit</a> and <a href="https://arxiv.org/pdf/1406.5569.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1406.5569.pdf</a></p>
2017-06-24 16:31:52.393000+00:00
2017-06-24 16:31:52.393000+00:00
null
44,737,893
<p>I've seen some binary files where the developer, it seems, was a bit paranoid and obfuscated all the text in the binary. I hadn't seen anything like it before and didn't find any obvious options to compile an ELF with hidden text. Even standard OS API strings were hidden, which was strange given they are usually visible.</p> <p>These programs wouldn't exactly have any text that isn't exposed when they run, except unknown text. But hiding the whole lot just raises red flags and makes the binary look suspicious.</p> <p>Are there easy ways to hide text that is compiled into an ELF? Be that with easy compiler/linking options. I imagine a decoder could be inserted at main() but how could the text section be easily encoded?</p> <p>I can imagine a custom way to do it would be to have an implicit decoder in the code with a key, and then use that key to encode the text of the ELF so that it is easily encoded.</p>
2017-06-24 15:16:48.017000+00:00
2017-06-25 09:40:09.330000+00:00
null
c|encryption|elf
['https://www.ioactive.com/pdfs/ZeusSpyEyeBankingTrojanAnalysis.pdf', 'https://www.researchgate.net/publication/224180021_On_the_analysis_of_the_Zeus_botnet_crimeware_toolkit', 'https://arxiv.org/pdf/1406.5569.pdf']
3
56,409,552
<p>It's a standard textbook matter. See <a href="http://webusers.fis.uniroma3.it/franceschini/notebooks/Event%20Generator.html#CDF-is-not-known" rel="nofollow noreferrer">here</a> for some code, or <a href="http://arxiv.org/abs/arXiv:hep-ph/0006269" rel="nofollow noreferrer">here</a> at Section 3.2 for some reference mathematical background (actually very quick and simple to read).</p>
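<p>As a concrete instance of the textbook recipe, here is a short sketch of inverse-transform sampling for the case where the inverse CDF is known, using the exponential distribution as the example; swap in the inverse CDF of whatever distribution you need.</p>
<pre><code>import math
import random

def sample_exponential(lam):
    """Inverse-transform sampling: if U ~ Uniform(0,1) and F is the target CDF,
    then F^-1(U) follows the target distribution.
    For Exp(lam): F(x) = 1 - exp(-lam*x), so F^-1(u) = -ln(1 - u) / lam."""
    u = random.random()
    return -math.log(1.0 - u) / lam

print([sample_exponential(2.0) for _ in range(5)])
</code></pre>
<p>When the inverse CDF is not available in closed form, the linked references cover the usual alternatives (numerical inversion and acceptance-rejection sampling).</p>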
2019-06-01 18:52:44.873000+00:00
2019-06-01 18:52:44.873000+00:00
null
null
3,510,475
<p>I want to generate random numbers according to some distributions. How can I do this?</p>
2010-08-18 09:05:48.580000+00:00
2019-06-01 18:52:44.873000+00:00
2013-06-19 04:42:29.340000+00:00
random|distribution
['http://webusers.fis.uniroma3.it/franceschini/notebooks/Event%20Generator.html#CDF-is-not-known', 'http://arxiv.org/abs/arXiv:hep-ph/0006269']
2
36,921,965
<p>I think the recipe <a href="https://code.activestate.com/recipes/218332-generator-for-integer-partitions/?c=16490#c9" rel="nofollow noreferrer">here</a> may qualify as being elegant. It's lean (20 lines long), fast and based upon Kelleher and O'Sullivan's work which is referenced therein:</p> <pre><code>def aP(n): """Generate partitions of n as ordered lists in ascending lexicographical order. This highly efficient routine is based on the delightful work of Kelleher and O'Sullivan. Examples ======== &gt;&gt;&gt; for i in aP(6): i ... [1, 1, 1, 1, 1, 1] [1, 1, 1, 1, 2] [1, 1, 1, 3] [1, 1, 2, 2] [1, 1, 4] [1, 2, 3] [1, 5] [2, 2, 2] [2, 4] [3, 3] [6] &gt;&gt;&gt; for i in aP(0): i ... [] References ========== .. [1] Generating Integer Partitions, [online], Available: http://jeromekelleher.net/generating-integer-partitions.html .. [2] Jerome Kelleher and Barry O'Sullivan, "Generating All Partitions: A Comparison Of Two Encodings", [online], Available: http://arxiv.org/pdf/0909.2331v2.pdf """ # The list `a`'s leading elements contain the partition in which # y is the biggest element and x is either the same as y or the # 2nd largest element; v and w are adjacent element indices # to which x and y are being assigned, respectively. a = [1]*n y = -1 v = n while v &gt; 0: v -= 1 x = a[v] + 1 while y &gt;= 2 * x: a[v] = x y -= x v += 1 w = v + 1 while x &lt;= y: a[v] = x a[w] = y yield a[:w + 1] x += 1 y -= 1 a[v] = x + y y = a[v] - 1 yield a[:w] </code></pre>
2016-04-28 17:47:11.527000+00:00
2019-11-21 23:17:17.843000+00:00
2019-11-21 23:17:17.843000+00:00
null
10,035,752
<p>I tried to write code to solve the standard Integer Partition problem (<a href="http://en.wikipedia.org/wiki/Partition_%28number_theory%29" rel="noreferrer">Wikipedia</a>). The code I wrote was a mess. I need an elegant solution to solve the problem, because I want to improve my coding style. This is not a homework question.</p>
2012-04-05 20:45:38.327000+00:00
2022-06-01 10:32:16.030000+00:00
2015-04-02 11:51:50.580000+00:00
python|algorithm
['https://code.activestate.com/recipes/218332-generator-for-integer-partitions/?c=16490#c9']
1