a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
59,705,175 | <p>No, you cannot, because the two configurations do not represent the same functions. This pattern was introduced in the <a href="https://arxiv.org/abs/1409.1556" rel="nofollow noreferrer">VGG network paper</a>, and it is used to increase the representational power of the network. Two layers with 3x3 filters are roughly equivalent (through composition) to one layer with a 5x5 filter; they are not equivalent to adding up the number of filters.</p>
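<p>As a rough illustration (added here, not part of the original answer; the 32x32x64 input shape is an arbitrary assumption), you can compare parameter counts in Keras:</p>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D

two_3x3 = Sequential([Conv2D(64, (3, 3), activation='relu', input_shape=(32, 32, 64)),
                      Conv2D(64, (3, 3), activation='relu')])
one_5x5 = Sequential([Conv2D(64, (5, 5), activation='relu', input_shape=(32, 32, 64))])
one_wide = Sequential([Conv2D(128, (3, 3), activation='relu', input_shape=(32, 32, 64))])

print(two_3x3.count_params())   # 2 * (3*3*64*64 + 64) = 73,856
print(one_5x5.count_params())   # 5*5*64*64 + 64 = 102,464
print(one_wide.count_params())  # 3*3*64*128 + 128 = 73,856 -- same count, different function
</code></pre>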
<p>In particular, if you had a convolutional layer with 128 filters, this is not the same as having two convolutional layers with 64 filters each, especially considering that there is a ReLU activation in between them, which makes the behavior more non-linear.</p> | 2020-01-12 15:13:12.770000+00:00 | 2020-01-12 15:13:12.770000+00:00 | null | null | 59,704,782 | <pre><code># imports assumed (not shown in the original question)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout

model = Sequential()
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', input_shape=(X_train.shape[1:])))
model.add(Conv2D(64,kernel_size= (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2, 2)))
model.add(Dropout(0.5))
</code></pre>
<p>In the above code, can I use Conv2D(128) once instead of Conv2D(64) twice?</p> | 2020-01-12 14:26:07.393000+00:00 | 2020-01-12 15:13:12.770000+00:00 | null | opencv|tensorflow|keras | ['https://arxiv.org/abs/1409.1556'] | 1
36,802,753 | <ol>
<li>Be sure that each thread has its <strong>own private PRNG</strong> (using one PRNG shared by all threads would need locking and would be terribly slow due to waiting and cache contention); a minimal C++11 sketch of this is shown after the list.</li>
<li>Katzgraber has proposed a seeding of PRNGs based on process/thread numbers in Section 7.1 of <a href="http://arxiv.org/pdf/1005.4117v1.pdf" rel="nofollow">Random Numbers in Scientific Computing:
An Introduction</a>.</li>
</ol>
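<p>A minimal C++11 sketch of point 1 (added for illustration, not part of the original answer; each thread's engine is seeded from <code>std::random_device</code> plus the thread index, reusing the Monte Carlo "arrows" idea from the question):</p>
<pre><code>#include <atomic>
#include <iostream>
#include <random>
#include <thread>
#include <vector>

std::atomic<long> hits{0};

void worker(unsigned tid, int arrows) {
    std::mt19937 rng(std::random_device{}() + tid);        // private per-thread PRNG
    std::uniform_real_distribution<double> dist(-1.0, 1.0);
    long local = 0;
    for (int i = 0; i < arrows; ++i) {
        double x = dist(rng), y = dist(rng);                // one arrow: two numbers in [-1, 1]
        if (x * x + y * y <= 1.0) ++local;                  // arrow landed inside the unit circle
    }
    hits += local;
}

int main() {
    const int arrows = 100000;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < 4; ++t) pool.emplace_back(worker, t, arrows);
    for (auto& th : pool) th.join();
    std::cout << "pi ~= " << 4.0 * hits / (4.0 * arrows) << "\n";
}
</code></pre>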
<p>Katzgraber's seed generator looks like:</p>
<pre><code>long seedgen(void) {
    long s, seed, pid;   // pid from 0 to number of processes/threads - 1
    pid = ...;           /* get process/thread ID */
    s = time(NULL);      /* seconds since 01/01/1970 */
    seed = abs(((s * 181) * ((pid - 83) * 359)) % 104729);
    return seed;
}
</code></pre> | 2016-04-22 20:25:41.503000+00:00 | 2016-04-22 20:25:41.503000+00:00 | null | null | 36,802,457 | <p>I asked a question here yesterday about threads and getting input to create the number of threads the user specifies. I was able to figure it out with you guys' help. I am working on the same program, except this time I need help with getting each thread in my program to independently seed the random number generator. My apologies if this sounds simple. I'm still new to threads.</p>
<p>Basically, my program asks the user how many threads they want to create and how many arrows they want to throw. Each arrow will generate 2 numbers, each from -1 to 1. This is my program so far. It's working code, so you can just run it if you need to:</p>
<pre><code>#include <iostream>
#include <string>
#include <ctime>
#include <cstdlib>
#include <thread>
using namespace std;
void exec(int n, int randNumbers)
{
int seed = 0;
srand(seed);
int random_number = rand() % 1 + -1;
cout << "Thread " << n << endl;
cout << "\n";
while (randNumbers != 0)
{
srand(seed);
cout << random_number << endl;
seed++;
cout << "Seed: " << seed << endl;
cout << "\n";
cout << random_number << endl;
seed++;
cout << "Seed: " << seed << endl;
cout << "\n";
randNumbers--;
}
}
int main()
{
int numThreads = 0; // Threads
int maxRandom; // Arrows
cout << "This is a Monte Carlo simulation." << endl;
cout << "Please enter the number of threads to run." << endl;
cout << "Threads: ";
cin >> numThreads;
// create an array of threads
thread* myThreads = new thread[numThreads];
if ((numThreads > 20) || (numThreads < 1))
{
cout << "Sorry. Something went wrong." << endl;
return 0;
}
system("CLS");
cout << "\n";
cout << "Enter the number of arrows you would like to throw: " << endl;
cout << "Arrows: ";
cin >> maxRandom; // Arrows
system("CLS");
for (int i = 0; i < numThreads; i++)
{
// run random number generator for thread at [i]
myThreads[i] = thread(exec, i, maxRandom);
}
for (int i = 0; i < numThreads; i++)
{
myThreads[i].join();
}
cout << "Done!" << endl;
}
</code></pre>
<p>All threads are returning -1 regardless of if <code>int seed</code> goes up by 1. I've looked all over but I still can't seem to figure out why my threads aren't independently seeding the random number generator. Anyone know what is going on? I'm still new to threads. Any bit of help would be greatly appreciated. Thank you very much.</p> | 2016-04-22 20:05:54.483000+00:00 | 2016-04-22 20:25:41.503000+00:00 | null | c++|multithreading|random | ['http://arxiv.org/pdf/1005.4117v1.pdf'] | 1 |
53,896,553 | <p>If you look into the reference paper <a href="https://arxiv.org/pdf/1606.05328.pdf" rel="nofollow noreferrer">Conditional Image Generation with
PixelCNN Decoders</a>, from which the Gated Activation (<strong>G.A.</strong>) is borrowed, you can see that it uses the following formula:</p>
<p><a href="https://i.stack.imgur.com/Na2lM.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Na2lM.gif" alt="G.A formula"></a></p>
<p>Although <code>stride=2</code> reduces the dimension to half of the input size, a <strong>G.A.</strong> layer with a proper <code>W</code> dimension produces an output of the same dimension as its input, which means no dimension mismatch will occur.</p> | 2018-12-22 14:37:39.493000+00:00 | 2018-12-22 14:37:39.493000+00:00 | null | null | 53,893,987 | <p>I am studying <strong>SwishNet</strong>, a Fast CNN for Speech, Music and Noise Classification and Segmentation.</p>
<p>In that <a href="https://arxiv.org/abs/1812.00149" rel="nofollow noreferrer">paper</a>, they used <strong>Strided Convolution</strong> & <strong>Residual Net</strong>. After passing through a <code>stride=2</code> conv layer, the output length will be half of the input length.</p>
<p>My question is: how can the output be merged with the input (residual connection) when their array dimensions are mismatched?</p>
<p><strong>G.A.</strong> is just a gated activation function, so it does not affect the output dimension!</p>
<p><a href="https://i.stack.imgur.com/17wZI.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/17wZI.jpg" alt="enter image description here"></a></p> | 2018-12-22 07:58:40.130000+00:00 | 2018-12-22 14:37:39.493000+00:00 | 2018-12-22 13:08:13.803000+00:00 | deep-learning | ['https://arxiv.org/pdf/1606.05328.pdf', 'https://i.stack.imgur.com/Na2lM.gif'] | 2 |
58,659,209 | <p>You can have a look at this paper: <a href="https://arxiv.org/pdf/1904.06991v3.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1904.06991v3.pdf</a>. It clearly describes the problems with FID scores. However, FID scores are also dependent on the number of samples: if you use fewer samples to estimate the score, the outcomes are not consistent. </p> | 2019-11-01 12:24:51.540000+00:00 | 2019-11-01 12:24:51.540000+00:00 | null | null | 57,053,952 | <p>I have two GANs and I want to compare their results using FID (Fréchet Inception Distance).
I have trained the networks with the same dataset of frog images, and by looking at the results (the generated images) one network yields better results, but its FID score is higher.
I computed the FID score between the original dataset and the generated images of each network. </p>
<p>I have read that lower FID values mean better image quality and diversity,
which is not consistent with the results I have seen.</p>
<p>Is there an explanation for that?</p> | 2019-07-16 09:18:01.520000+00:00 | 2019-11-01 12:24:51.540000+00:00 | null | pytorch|generative-adversarial-network | ['https://arxiv.org/pdf/1904.06991v3.pdf'] | 1 |
33,992,627 | <p>There are a couple of ways you can work this out. The first, which is what you appear to be doing, assumes the earth is spherical. Relative bearings are calculated using Haversine formulation for great circle navigation. Given starting and ending points, this formulation finds the great circle passing through the two points. From this an initial bearing can be calculated. This great circle route is the shortest route between the two points, but suffers from the problem the bearing, in general, will not be constant along the route. Also, except under some very specific cases, the reverse bearing does not behave as you seem to expect and if you want to determine it in general, you will have to perform another calculation reversing the starting and ending points.</p>
<p>Another method you could use is the Rhumb line formulation. In this case, the bearing between the starting point and ending point is constant and would allow you to use the relation you have for the reverse course if you would like. Since this will in general differ from the great circle distance, following Rhumb lines will not result in the shortest path between the two points, but it does simplify the navigation by holding the course constant.</p>
<p>Both of these approaches are described in detail at <a href="http://www.movable-type.co.uk/scripts/latlong.html" rel="nofollow">Calculate distance, bearing and more between Latitude/Longitude points</a></p>
<p>Another formulation for great circle navigation which uses a more accurate representation of the earth's shape, an oblate spheroid, which is a special type of ellipsoid, is attributed to <a href="http://www.movable-type.co.uk/scripts/latlong-vincenty.html" rel="nofollow">Vincenty</a> with additional enhancements provided by <a href="http://arxiv.org/pdf/1109.4448.pdf" rel="nofollow">Karney</a>. In these cases, the formulation is quite a bit more complicated and is probably overkill for most applications, and performance is quite a bit worse than the Haversine formulations above. But these formulations provide much better accuracy if you need it.</p>
<p><strong>Update:</strong></p>
<p>Based on the comment below, the main issue is one of figuring out how far to turn. This will simply be the angle between the normals of the plane containing the great circles for the current heading and the desired heading. To get the normal for the plane on the current heading, you need your current location <code>L</code> and a point some distance away on the current heading, <code>C</code>. The normal is just <code>V = L×C</code>. To compute the normal for the plane containing the great circle along the desired heading, you only need to know a point along the desired route, which you already have in the form of your destination point, which we call <code>D</code>. You can then find the normal by <code>U = L×D</code>. The angle between them is given by <code>θ = acos((U∙V)/(|U||V|))</code>.</p>
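<p>A rough C sketch of that angle computation (added here, not part of the original answer; it assumes <code>L</code>, <code>C</code> and <code>D</code> have already been converted to ECEF vectors as described below):</p>
<pre><code>#include <math.h>

static void cross(const double a[3], const double b[3], double out[3])
{
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}

static double dot(const double a[3], const double b[3])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

/* Angle (degrees) between the current-heading and desired-heading great-circle planes. */
double turnAngleDeg(const double L[3], const double C[3], const double D[3])
{
    double V[3], U[3], c;
    cross(L, C, V);                 /* normal of the plane along the current heading */
    cross(L, D, U);                 /* normal of the plane toward the destination    */
    c = dot(U, V) / (sqrt(dot(U, U)) * sqrt(dot(V, V)));
    if (c > 1.0) c = 1.0;           /* clamp rounding noise before acos              */
    if (c < -1.0) c = -1.0;
    return acos(c) * 180.0 / M_PI;
}
</code></pre>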
<p>In order to find <code>L</code>, <code>C</code> and <code>D</code> you must convert the <a href="http://mathforum.org/library/drmath/view/51832.html" rel="nofollow">Latitude, Longitude, Altitude (LLA) coordinates into Earth Centered, Earth Fixed (ECEF) coordinates</a>.</p> | 2015-11-30 06:26:46.503000+00:00 | 2015-12-02 16:22:28.257000+00:00 | 2015-12-02 16:22:28.257000+00:00 | null | 33,989,893 | <pre><code>double computeHeading(double latitude1, double longitude1, double latitude2, double longitude2)
{
double degToRad = PI / 180.0;
double phi1 = latitude1*degToRad;
double phi2 = latitude2*degToRad;
double lam1 = longitude1*degToRad;
double lam2 = longitude2*degToRad;
double x,y;
x = cos(phi2) * sin(lam2-lam1);
printf("X is %lf\n", x);
y = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(lam2-lam1);
printf("Y is %lf\n", y);
return atan2(x,y)*180/PI;
}
</code></pre>
<p>I am using the above function to determine the true bearing from North between two geographic coordinates.</p>
<p>I'm currently developing a small navigation widget which uses GPS data from Android sensors. The widget has an arrow facing towards a point away from the device's current location. The arrow's direction changes with the device's current location and azimuth to always face the distant point.</p>
<p>Here is a scenario:</p>
<p>I'm at a location, facing north, and another location has a bearing of 300 degrees(somewhat northwest of me). If I face towards south, without moving, my relative bearing to the distant location should be 120 degrees.</p>
<p>How can I find the relative bearing with accounting for the facing direction (azimuth)?</p> | 2015-11-30 00:38:45.390000+00:00 | 2019-07-16 21:20:37.937000+00:00 | 2019-07-16 21:20:37.937000+00:00 | android|c|math|navigation|bearing | ['http://www.movable-type.co.uk/scripts/latlong.html', 'http://www.movable-type.co.uk/scripts/latlong-vincenty.html', 'http://arxiv.org/pdf/1109.4448.pdf', 'http://mathforum.org/library/drmath/view/51832.html'] | 4 |
47,774,965 | <p>The number of filters, e.g. 32, 64, 128, 1024, is a design decision: the model designer (in this case, you) decides how many filters to use. Typically powers of 2 are used: 2^5 = 32, 2^6 = 64, etc. A different number of filters would obviously have an impact on the number of operations required for computation, in addition to impacting the number of parameters that the model will have to learn. </p>
<p>e.g. 10 filters with dimension 5x5x1 (height x width x channels) would have (5 x 5 x 1 + 1) x 10 = 260 parameters to train. Note that the + 1 is for the bias term.</p>
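<p>A quick way to check such counts (added here, not part of the original answer; the 28x28x1 input shape is an arbitrary assumption):</p>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D

m = Sequential([Conv2D(10, (5, 5), input_shape=(28, 28, 1))])
print(m.count_params())  # (5*5*1 + 1) * 10 = 260
</code></pre>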
<p>I suggest reading the link below, in particular the section on ConvNet architectures.
<a href="http://cs231n.github.io/convolutional-networks/#architectures" rel="nofollow noreferrer">http://cs231n.github.io/convolutional-networks/#architectures</a></p>
<p>For convolutional arithmetic I found "A guide to convolution arithmetic for deep learning" resourceful: <a href="https://arxiv.org/abs/1603.07285" rel="nofollow noreferrer">https://arxiv.org/abs/1603.07285</a></p> | 2017-12-12 14:26:30.767000+00:00 | 2017-12-19 03:32:41.890000+00:00 | 2017-12-19 03:32:41.890000+00:00 | null | 47,774,350 | <pre class="lang-py prettyprint-override"><code>weights = {
# 5x5 conv, 1 input, 32 outputs
'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
# 5x5 conv, 32 inputs, 64 outputs
'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
# fully connected, 7*7*64 inputs, 1024 outputs
'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
# 1024 inputs, 10 outputs (class prediction)
'out': tf.Variable(tf.random_normal([1024, num_classes]))
}
</code></pre>
<p>My question is:
how do I calculate the number of output features/channels, in this case 32 in the first layer, 64 in the second and 1024 in the third? And what effect will it have if I use a larger or smaller number than 32, 64, 1024 in my CNN?</p> | 2017-12-12 13:54:24.923000+00:00 | 2017-12-19 03:32:41.890000+00:00 | 2017-12-12 15:36:06.877000+00:00 | machine-learning|tensorflow|deep-learning|conv-neural-network|convolution | ['http://cs231n.github.io/convolutional-networks/#architectures', 'https://arxiv.org/abs/1603.07285'] | 2
22,685,398 | <p>This looks like an instance of Robust Ellipse Fitting. Check this paper: Outlier Elimination for
Robust Ellipse and Ellipsoid Fitting <a href="http://arxiv.org/pdf/0910.4610.pdf" rel="nofollow">http://arxiv.org/pdf/0910.4610.pdf</a>.</p>
<p>A first rough and easy solution is provided by the ellipse of inertia (2D version of the ellipsoid of inertia <a href="http://en.wikipedia.org/wiki/Moment_of_inertia#Inertia_ellipsoid" rel="nofollow">http://en.wikipedia.org/wiki/Moment_of_inertia#Inertia_ellipsoid</a>). Its center is just the centroid, and its axes are given by the eigenvectors/eigenvalues of the 2x2 matrix of inertia.</p> | 2014-03-27 10:52:33.300000+00:00 | 2014-03-27 10:52:33.300000+00:00 | null | null | 22,619,196 | <p>I have plots of points which look like this.<img src="https://i.stack.imgur.com/UTAgq.jpg" alt="x-y plot1"> <img src="https://i.stack.imgur.com/TejgH.jpg" alt="x-y plot2">
<img src="https://i.stack.imgur.com/vPoxs.jpg" alt="enter image description here"></p>
<p>The tracks which these points form can be a circle or an ellipse. Clearly the center of the circular tracks in the two images above are different.</p>
<p><strong>How can I find the center point of these tracks (circular/elliptical)?</strong> I want to find the (x,y) coordinates which is the center, not necessary that it has to be a point that's in the plotted data set. i.e., I don't want a medoid.</p>
<p>EDIT: Also, <strong>is there anyway that I can find an equation for circle/ellipse that envelopes a majority of these points?</strong> In the elliptical track, I've added an ellipse that envelopes the points on the track. The values were calculated by trial and error. The center was also calculated by eye balling the plot. How can I do this programmatically?</p> | 2014-03-24 19:47:00.297000+00:00 | 2014-03-27 10:52:33.300000+00:00 | 2014-03-25 01:08:48.443000+00:00 | center|computational-geometry|point|centroid | ['http://arxiv.org/pdf/0910.4610.pdf', 'http://en.wikipedia.org/wiki/Moment_of_inertia#Inertia_ellipsoid'] | 2 |
37,755,443 | <p>Following <a href="https://arxiv.org/pdf/1502.01852.pdf" rel="noreferrer">He <em>et al.</em></a> (as suggested in <a href="https://stackoverflow.com/users/2658050/lejlot">lejlot</a>'s comment), initializing the weights of the <em>l</em>-th layer to a zero-mean Gaussian distribution with standard deviation </p>
<p><img src="https://latex.codecogs.com/gif.latex?%5Csqrt%7B%5Cfrac%7B2%7D%7Bn_l%7D%7D"/></p>
<p>where <em>n<sub>l</sub></em> is the flattened length of the input vector, or</p>
<pre><code>stddev=np.sqrt(2 / np.prod(input_tensor.get_shape().as_list()[1:]))
</code></pre>
<p>results in weights that generally do not diverge.</p> | 2016-06-10 18:59:15.373000+00:00 | 2016-06-10 18:59:15.373000+00:00 | 2017-05-23 12:17:08.437000+00:00 | null | 37,448,557 | <p>I can't get TensorFlow RELU activations (neither <code>tf.nn.relu</code> nor <code>tf.nn.relu6</code>) working without NaN values for activations and weights killing my training runs.</p>
<p>I believe I'm following all the right general advice. For example I initialize my weights with </p>
<pre><code>weights = tf.Variable(tf.truncated_normal(w_dims, stddev=0.1))
biases = tf.Variable(tf.constant(0.1 if neuron_fn in [tf.nn.relu, tf.nn.relu6] else 0.0, shape=b_dims))
</code></pre>
<p>and use a slow training rate, e.g.,</p>
<pre><code>tf.train.MomentumOptimizer(0.02, momentum=0.5).minimize(cross_entropy_loss)
</code></pre>
<p>But any network of appreciable depth results in <code>NaN</code> for cost and at least some weights (at least in the summary histograms for them). In fact, the cost is often <code>NaN</code> right from the start (before training).</p>
<p>I seem to have these issues even when I use L2 (about 0.001) regularization, and dropout (about 50%).</p>
<p>Is there some parameter or setting that I should adjust to avoid these issues? I'm at a loss as to where to even begin looking, so any suggestions would be appreciated!</p> | 2016-05-25 22:16:08.997000+00:00 | 2016-06-10 18:59:15.373000+00:00 | null | machine-learning|tensorflow|nan | ['https://arxiv.org/pdf/1502.01852.pdf', 'https://stackoverflow.com/users/2658050/lejlot'] | 2 |
46,938,094 | <h2>Best use the one that best <a href="https://arxiv.org/pdf/1704.00726.pdf" rel="nofollow noreferrer">"Fair Divides a Cake with Burnt Parts ... "</a></h2>
<p>As per your domain expertise, the art of performance motivated scheduling is similarly constrained.</p>
<p>Given a few facts above,<br>
the approach to a problem solution is also heavily scale-dependent.</p>
<p>Given a Machine Learning has been added,<br>
the problem is not just a common and trivial <code>[SEQ]-[PAR]-[SEQ]</code> pipeline to process 10 different arrays and sum the partial results.</p>
<h2>The bigger the ML-DataSET gets,<br>the larger will be the "Burnt Parts of the Cake"</h2>
<p>Being inside a realm of <strong><code>numpy</code></strong> / <strong><code>python</code></strong> and their tools for Machine Learning, your driving forces will not be a will to "parallelise", <strong>but rather the Costs of trying to do so.</strong> Machine Learning pipelines for real-world problems may easily take large units to small tens of hours to get trained, using all, yes ALL localhost available CPU-cores, just for one <strong><code>train</code></strong>-part of the DataSET. The efficiency, professionally built into the <strong><code>numpy</code></strong> / <strong><code>numba</code></strong> / <strong><code>scikit</code></strong> tools is not easy to get much increased further on, so rather do not expect any low hanging fruit to be anywhere near these grounds. For more <a href="https://stackoverflow.com/a/46931460">technical <strong>details on this</strong>, you might like to <strong>read this post</strong></a>.</p>
<p><strong>Memory matters the most.</strong></p>
<p>For small DataSETs, the <code>[PSPACE]</code>-costs in terms of memory footprint will remain easier, but that will also mean that it will be harder to pay all the <code>[PTIME]</code>-costs that will have to be paid per each parallel-process, yes, the non-productive overheads fee one always has to pay for just entering the show:
<code>{ instantiation + setup + control + finalisation + dismantling }</code>, which in this very case ( of small DataSETs ) will be hardly justified by any increased processing-performance, potentially coming out from an attempt to operate parallel-processing on too small-scale problems.</p>
<p><strong>Available Processing Resources matter almost the same</strong> </p>
<p>In case, where a process is ordered to be operated on 10 process instances concurrently, the governing factor is the free-capacity of all resources-classes, that are relevant for such a process smooth execution -- a free ( and best uninterrupted ) CPU-core ( HT-off ), having non-blocked access to sufficient amount of ( free / private >> working set ) RAM memory ( no memory paging, no virtual-memory swaps ) plus minimum ( or fine-tuned ) disk-IO.</p>
<p>In cases, where such resources' capacities are not allowing all the named 10 instances of the Machine Learning process to smoothly execute indeed in <a href="https://stackoverflow.com/revisions/18374629/3"><strong>parallel</strong>, another, "just"-<strong><code>[CONCURRENT]</code></strong></a>, scheduling takes place and <a href="https://stackoverflow.com/a/46124635"><strong>one may straight forget any and all the Amdahl's Law theoretical promises</strong> ( even from the original, "classical", overhead-naive formulation )</a>.</p>
<p><strong>While it is obvious</strong> for anyone, that if a bus, full of tourists, arrives at a five-star Marriott Hotel, equipped with just a pair of lifts from a reception floor, it is simply principally impossible for all such new guests to reach their rooms at the same time -- <strong>the same problem is not so crystal-clear</strong> in almost exactly the same setup in true-<strong><code>[PARALLEL]</code></strong>-process scheduling and many users start in the very same way - asking "<em>Why my parallel code is slower than serial run?</em>" or "<em>Why is my parallel-code speedup not scaling?</em>".</p>
<p>This said, you will have to maximise your hardware resources-pools and <strong>very carefully balance the parallel-processing</strong> ( setup + termination )-<strong>overhead costs</strong>, because without having at least a chance to pay less than you will potentially gain ( justify the overheads ), going parallel AT ANY COSTS, is simply a devastating policy, whoever has advised you to try to go into this a-priori lost game ( costs/benefits-wise ) direction.</p> | 2017-10-25 16:59:07.530000+00:00 | 2017-12-08 18:11:21.137000+00:00 | 2017-12-08 18:11:21.137000+00:00 | null | 46,931,444 | <p>I have a function that accepts an array and returns a number. </p>
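<p>For completeness, a minimal sketch of the common starting point, a process pool mapped over the 10 arrays, is shown below (added here, not part of the original answer; the <code>evaluate</code> function and the dummy data are placeholders, and all the overhead caveats above still apply):</p>
<pre><code>from multiprocessing import Pool

def evaluate(arr):
    # placeholder for the train/test accuracy function described in the question
    return sum(arr) / len(arr)

if __name__ == "__main__":
    datasets = [[i, i + 1.0, i + 2.0] for i in range(10)]   # 10 input arrays (dummy data)
    with Pool(processes=4) as pool:                         # 4 cores -> 4 worker processes
        results = pool.map(evaluate, datasets)
    print(sum(results))                                     # sum of the 10 results
</code></pre>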
<p>I want to run this function on 10 different input arrays, and then return the sum of all results.<br>
What is the best way to tell Python to run these 10 calculations in parallel on a computer with 4 cores?</p>
<p>In the documentation there are <a href="https://docs.python.org/3/library/concurrency.html" rel="nofollow noreferrer">many different ways for concurrent execution</a>, and also <a href="https://wiki.python.org/moin/ParallelProcessing" rel="nofollow noreferrer">many different packages for parallel processing</a>. With so many different methods, which of these should I use for the task above?</p>
<p><strong>EDIT:</strong><br>
the function is a Machine-Learning function:<br>
it partitions the input array to two parts - a "<code>train</code>" and a "<code>test</code>" part of the DataSET. It trains a certain classifier on the "<code>train</code>" part, and returns a prediction accuracy of such a trained classifier on the "<code>test</code>" part.</p> | 2017-10-25 11:37:43.173000+00:00 | 2017-12-08 18:11:21.137000+00:00 | 2017-10-25 17:03:28.363000+00:00 | python-3.x|parallel-processing | ['https://arxiv.org/pdf/1704.00726.pdf', 'https://stackoverflow.com/a/46931460', 'https://stackoverflow.com/revisions/18374629/3', 'https://stackoverflow.com/a/46124635'] | 4 |
56,485,461 | <p>I've never worked with point-cloud data/LIDAR before, but as nobody has answered yet, I'll give it my best shot. I'm not sure about inpainting approaches per se, though I imagine they might not work very well (except for maybe a variational method, which I presume would be quite slow). But if your goal is to project the 3D LIDAR readings (when accompanied by ring ids and laser intensity readings) into a dense 2D matrix (for use in a CNN), the following reference might prove useful. Additionally, in this paper they reference a previous work (<a href="https://github.com/robofit/but_velodyne_lib/blob/master/doc/ICRA16_submission.pdf" rel="noreferrer">Collar Line Segments for Fast Odometry Estimation from Velodyne Point Clouds</a>) which covers the technique of polar binning in more detail, <a href="https://github.com/robofit/but_velodyne_lib/blob/master/src/PolarGridOfClouds.cpp" rel="noreferrer">and has C++ code available</a>. Check out the papers, but I'll try and summarize the technique here:</p>
<h1>Encoding Sparse 3D Data with Polar Binning</h1>
<p><a href="https://arxiv.org/pdf/1709.02128.pdf" rel="noreferrer">CNN for Very Fast Ground Segmentation in Velodyne LiDAR Data</a>
- Describes its preprocessing technique in section III.A (<em>Encoding Sparse 3D Data Into a Dense 2D Matrix</em>).
<a href="https://i.stack.imgur.com/0s1Pw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/0s1Pw.png" alt="enter image description here"></a></p>
<ul>
<li>1) Let P represent your original point cloud, and M the multi-channel dense matrix you are hoping to output. The size of M depends on the number of laser beams used in scanning and the horizontal angular resolution of the scanner.</li>
<li>2) Aggregate the point cloud data into polar bins b(r, c), where r represents the ring id and c = floor((R * atan(x/z) + 180)/360).</li>
<li>3) Use the following mapping to map the bin b(r, c) to the corresponding value in the matrix M, m(r, c), where p^i is the laser intensity reading:</li>
</ul>
<p><a href="https://i.stack.imgur.com/lUCHQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lUCHQ.png" alt="enter image description here"></a></p>
<ul>
<li>4) In the case of empty bins, linearly interpolate the value of m(r,c) from its neighborhood.</li>
</ul>
<h1>Improving performance of sparse mapping</h1>
<p>Finally, looking at the following paper, they introduce some techniques for using the sparse Velodyne readings in a CNN. Maybe see if any of these improve your performance?</p>
<p><a href="https://arxiv.org/pdf/1608.07916.pdf" rel="noreferrer">Vehicle Detection from 3D Lidar Using Fully Convolutional Network</a>
- Describes its preprocessing technique in section III.A (<em>Data Preparation</em>). </p>
<p><strong>Encoding the range data as a 2-channel image</strong> (a rough numpy sketch follows the list below)</p>
<ul>
<li>1) Initialize a 2-channel matrix I; Fill with zeros</li>
<li>2) Given coordinates (x, y, z), let theta = atan2(y, x) and let phi = arcsin(z/sqrt(x^2 + y^2 + z^2))</li>
<li>3) Let delta_theta, delta_phi equal the average horizontal and vertical resolution between consecutive beam emitters, respectively.</li>
<li>4) Let r = floor(theta/delta_theta); Let c = floor(phi/delta_phi)</li>
<li>5) Let d = sqrt(x^2 + y^2)</li>
<li>6) Let I(r, c) = (d, z); if two points projected into the same position (rare), keep the one nearer to the observer</li>
</ul>
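<p>A rough numpy sketch of those steps (added here, not from the cited paper; the function name and the default angular resolutions are assumptions):</p>
<pre><code>import numpy as np

def to_range_image(points, delta_theta=np.radians(0.2), delta_phi=np.radians(0.4)):
    """points: (N, 3) array of (x, y, z); returns a 2-channel (distance, height) image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(y, x)
    phi = np.arcsin(z / np.sqrt(x**2 + y**2 + z**2))
    r = np.floor(theta / delta_theta).astype(int)
    c = np.floor(phi / delta_phi).astype(int)
    r -= r.min()                        # shift indices so they start at 0
    c -= c.min()
    img = np.zeros((r.max() + 1, c.max() + 1, 2))
    d = np.sqrt(x**2 + y**2)            # distance in the ground plane
    order = np.argsort(-d)              # write far points first, so nearer points overwrite
    img[r[order], c[order], 0] = d[order]
    img[r[order], c[order], 1] = z[order]
    return img
</code></pre>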
<p><strong>Unequal (Up/Down)sampling</strong></p>
<ul>
<li>In the first convolutional layer, the authors downsample by 4 horizontally and 2 vertically; This is because for Velodyne point maps, points are denser in the horizontal layer. They upsample by this same factor in their final deconvolutional layers (which simultaneously predict a vehicle's 'objectness' and its bounding box).</li>
</ul>
<p>All techniques are implemented with respect to the KITTI dataset/Velodyne LIDAR, so I imagine they could work (perhaps with some modification) for your particular use-case. </p> | 2019-06-06 21:43:57.943000+00:00 | 2019-06-06 21:43:57.943000+00:00 | null | null | 56,464,193 | <p>I'm working on a classification problem (object classification for an autonomous vehicle). I use a dataset from KITTI which provides LiDAR and camera data, and I want to use both kinds of data to perform the task.</p>
<p>3D LIDAR data is projected onto the coordinate system of the RGB image resulting in a sparse LiDAR image:</p>
<p><img src="https://web.archive.org/web/20190722023447/https://image.noelshack.com/fichiers/2019/23/3/1559749785-lidar-depth-car.png" alt="sparse" /></p>
<p><img src="https://web.archive.org/web/20190722023452/https://image.noelshack.com/fichiers/2019/23/3/1559750098-car.png" alt="original_image" /></p>
<p>Each pixel is encoded using depth (distance to the point: sqrt(X² + Y²), scaled between 0 and 255).</p>
<p>In order to obtain better results for my CNN, I need a dense LiDAR image. Does anyone know how to do this using Python?</p>
<p>I would like to obtain something like this</p>
<p><img src="https://web.archive.org/web/20190722023503/https://image.noelshack.com/fichiers/2019/23/3/1559750299-dense.png" alt="goal" /></p> | 2019-06-05 16:04:14.813000+00:00 | 2021-02-21 21:39:30.100000+00:00 | 2021-02-21 21:39:30.100000+00:00 | python|image-processing|point-clouds|depth|lidar | ['https://github.com/robofit/but_velodyne_lib/blob/master/doc/ICRA16_submission.pdf', 'https://github.com/robofit/but_velodyne_lib/blob/master/src/PolarGridOfClouds.cpp', 'https://arxiv.org/pdf/1709.02128.pdf', 'https://i.stack.imgur.com/0s1Pw.png', 'https://i.stack.imgur.com/lUCHQ.png', 'https://arxiv.org/pdf/1608.07916.pdf'] | 6 |
55,880,810 | <p>If (1) the two interleaved sequences can form one monotonic sequence when un-interleaved, <em>and</em> either (a) the array starts with the lowest number and is of odd length, or (b) the array starts with the lowest number of the second sequence (the one that will be the right side when un-interleaved) and is of even length, we may be able to reverse the algorithm described in <a href="https://arxiv.org/abs/0805.1598" rel="nofollow noreferrer"><em>A Simple In-Place Algorithm for In-Shuffle</em> (Peiyush Jain, 2008)</a>.
<p>We would have to perform the "cycle leader" sequences first, followed by the cycle shifts.</p>
<p>Example 1</p>
<pre><code>[1, 7, 2, 8, 3, 9, 4, 10, 5]
1 2 3 4 5 6 7 8 9
1 6 2 7 3 8 4 9 5
|1| unaffected
|1 3 |
m = 4; 2m = 3^2 - 1
cycles start on 3^0, 3^1
(4 swaps with 7 and the other
numbers form a longer cycle.)
</code></pre>
<p>Example 2 (simple):</p>
<pre><code>[1, 5, 2, 7, 3]
1 2 3 4 5
1 4 2 5 3
|1| unaffected
| |
m = 1
cycle in 2m => 2, 5
cycle in 2m => 3, 7
cycle shift by m between 5 and 3
=> 2, 3, 5, 7
</code></pre>
<p>I wouldn't expect anyone to come up with this in an interview without being allowed to research, though :)</p> | 2019-04-27 13:02:14.600000+00:00 | 2019-04-27 15:16:57.250000+00:00 | 2019-04-27 15:16:57.250000+00:00 | null | 55,878,675 | <p>This was asked to me today in an interview, and I was kicked out after staring at the question for 5 min.</p>
<blockquote>
<p>Given an array <em>A</em> such that the subsequence of all the odd positions ([<em>A</em><sub>1</sub>, <em>A</em><sub>3</sub>, <em>A</em><sub>5</sub>, …]) and the subsequence of all the even positions ([<em>A</em><sub>2</sub>, <em>A</em><sub>4</sub>, <em>A</em><sub>6</sub>, …]) are each in sorted order — e.g. [1, 7, 2, 8, 3, 9, 4, 10, 5] or [3, 8, 4, 11, 5] or [5, 2, 7, 4] — sort <em>A</em> in O(<em>n</em>) time and O(1) space (including stack space and output array space).</p>
</blockquote>
<p>I have racked my brain and picked my friend's brain over it for the last two hours. Google did not yield any answers. I do not want to color any opinions, but I feel like this might not be possible to solve within the given complexity bounds.</p>
<p>How can we solve this? All inputs are appreciated.</p> | 2019-04-27 08:27:50.907000+00:00 | 2019-04-27 15:16:57.250000+00:00 | 2019-04-27 13:26:12.773000+00:00 | java|algorithm|sorting|data-structures | ['https://arxiv.org/abs/0805.1598'] | 1 |
66,751,523 | <p><strong>Edit:</strong>
<br>
OK. I figured out it doesn't work in the input space. So the old explanation is probably wrong but I'll keep it anyway.</p>
<p>Here is my new thoughts:</p>
<p>In my senior project, I'm using the algorithm called <a href="https://arxiv.org/pdf/1912.02781.pdf" rel="nofollow noreferrer"><strong>AugMix</strong></a>. In this algorithm they calculated the Shannon-Jensen Divergence between two augmented images, which is the symmetrical form of the KL Divergence.</p>
<p>They used the model output as the probability distribution of the dataset. The idea is to fit a model to a dataset, then interpret the output of the model as the probability density function.</p>
<p>For example, you fitted a dataset without overfitting. Then (assuming this is a classification problem) you feed your logits (the output of the last layer) to the softmax function for each class (sometimes the softmax function is added as a layer to the end of the network, careful). The output of your softmax function (or layer) can be interpreted as P(Y|X_{1}) where X_{1} is the input sample and Y is the ground-truth class. Then you make a prediction for another sample X_{2}, P(Y|X_{2}), where X_{1} and X_{2} come from different datasets (say dataset_1 and dataset_2) and the model is not trained with any of those datasets.</p>
<p>Then the KL divergence between dataset_1 and dataset_2 can be calculated by KL(dataset_1 || dataset_2) = P(Y|X_{1}) * log(P(Y|X_{1}) / P(Y|X_{2}))</p>
<p>Make sure that X_{1} and X_{2} belong to the same class.</p>
<p>I'm not sure if this is the correct way. <strong>Alternatively</strong>, you can train two different models (model_1 and model_2) using different datasets (dataset_1 and dataset_2) and then calculate the KL divergence on the predictions of those two models using the samples of another dataset called dataset_3. In other words:</p>
<p>KL(dataset_1 || dataset_2) = sum x in dataset_3 model_1(x) * log(model_1(x) / model_2(x))</p>
<p>where model_1(x) is the softmax output of model_1, which is trained using dataset_1 without overfitting, for the correct label.</p>
<p>The latter sounds more reasonable to me, but I'm not sure about either of them. I could not find a proper answer on my own.</p>
<hr />
<p>The things I'm going to explain are adapted from Jason Brownlee's blog at <a href="http://machinelearningmastery.com" rel="nofollow noreferrer">machinelearningmastery.com</a>: <a href="https://machinelearningmastery.com/divergence-between-probability-distributions/" rel="nofollow noreferrer">KL Divergence</a></p>
<p>As far as I understood, you first have to convert your datasets into probability distributions so that you can calculate the probability of each of the samples from the union (or intersection?) of both datasets.</p>
<p>KL(P || Q) = sum x in X P(x) * log(P(x) / Q(x))</p>
<p>However, most of the time the intersection of the datasets is empty. For example, if you want to measure the divergence between CIFAR10 and ImageNet, there are no samples in common. The only way you can calculate this metric is to sample from the same dataset to create two different datasets. That way you can have samples that are present in both datasets, and calculate the KL divergence.</p>
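<p>A minimal sketch of the discrete formula above for two 1-D samples (added here, not part of the original answer; the histogram binning and the smoothing constant are arbitrary choices):</p>
<pre><code>import numpy as np
from scipy.stats import entropy

def kl_from_samples(a, b, bins=100):
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    p, q = p + 1e-10, q + 1e-10      # avoid log(0) and division by zero
    return entropy(p, q)             # = sum p * log(p / q) after normalisation

a = np.random.normal(0.0, 1.0, 40000)
b = np.random.normal(0.5, 1.0, 40000)
print(kl_from_samples(a, b))
</code></pre>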
<p>Lastly, maybe you want to check the <a href="https://arxiv.org/pdf/1712.01026.pdf" rel="nofollow noreferrer">Wasserstein Divergence</a> that is used in GANs in order to compare the source distribution and the target distribution.</p> | 2021-03-22 18:04:14.473000+00:00 | 2021-05-25 21:52:23.163000+00:00 | 2021-05-25 21:52:23.163000+00:00 | null | 45,086,712 | <p>I have two datasets that contain 40000 samples. I want to calculate the Kullback-Leibler divergence between these two datasets in python. Is there any efficient way of doing this in python? </p> | 2017-07-13 16:47:58.210000+00:00 | 2021-05-25 21:52:23.163000+00:00 | 2018-09-24 14:54:28.543000+00:00 | python|statistics|distance|entropy | ['https://arxiv.org/pdf/1912.02781.pdf', 'http://machinelearningmastery.com', 'https://machinelearningmastery.com/divergence-between-probability-distributions/', 'https://arxiv.org/pdf/1712.01026.pdf'] | 4 |
53,315,043 | <p>Ok, so judging from your plot: it's the nature of the data, the price doesn't really change that often. </p>
<p>Try subsampling your original data a bit (perhaps by a factor of 5, just look at the data), so that you generally see a price movement with every time-tick. This should make any modeling much <strong>MUCH</strong> easier.</p>
<p>For the subsampling: I suggest you do simple regular downsampling in the time domain. So if you have price data with a second resolution (i.e. one price tick every second), then simply take every fifth datapoint. Then proceed as you usually do, specifically, compute the log-increase in the price from this subsampled data. Remember that whatever you do, it must be reproducible at test time.</p>
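<p>A tiny numpy sketch of that downsampling (added here, not part of the original answer; the synthetic price series is just dummy data):</p>
<pre><code>import numpy as np

prices = 100 * np.exp(np.cumsum(0.001 * np.random.randn(1000)))  # dummy 1-second prices
sub = prices[::5]                                                 # keep every fifth tick
log_returns = np.log(sub[1:] / sub[:-1])                          # log(P_t / P_{t-1})
</code></pre>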
<p>If that is not an option for you for whatever reasons, have a look at something that can handle multiple time scales, e.g. <a href="https://arxiv.org/abs/1609.03499" rel="nofollow noreferrer">WaveNet</a> or <a href="https://arxiv.org/abs/1402.3511" rel="nofollow noreferrer">Clockwork RNN</a>.</p> | 2018-11-15 08:18:57.897000+00:00 | 2018-11-16 09:04:10.357000+00:00 | 2018-11-16 09:04:10.357000+00:00 | null | 53,303,152 | <pre><code>ipdb> np.count_nonzero(test==0) / len(ytrue) * 100
76.44815766923736
</code></pre>
<p>I have a datafile containing <code>24000</code> prices which I use for a time series forecasting problem. Instead of trying to predict the price, I tried to predict the log-return, i.e. <code>log(P_t/P_{t-1})</code>. I have applied the log-return over the prices as well as all the features. The predictions are not bad, but the trend tends to predict zero. As you can see above, <code>~76%</code> of the data are zeros. </p>
<p>Now the idea is probably to "look for a zero-inflated estimator: first predict whether it's gonna be a zero; if not, predict the value". </p>
<p>In details, what is the perfect way to deal with excessive number of zeros? How zero-inflated estimator can help me with that? Be aware originally I am not probabilist.</p>
<p><strong>P.S.</strong> I am working trying to predict the log-return where the units are "seconds" for High-Frequency Trading study. Be aware that it is a regression problem (not a classification problem).</p>
<p><strong>Update</strong></p>
<p><a href="https://i.stack.imgur.com/vIo34.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vIo34.png" alt="enter image description here"></a></p>
<p>That picture is probably the best prediction I have on the log-return, i.e <code>log(P_t/P_{t-1})</code>. Although it is not bad, the remaining predictions tend to predict zero. As you can see in the above question, there is too many zeros. I have probably the same problem inside the features as I take the log-return on the features as well, i.e. if <code>F</code> is a particular feature, then I apply <code>log(F_t/F_{t-1})</code>.</p>
<p>Here is a one day data, <a href="https://drive.google.com/file/d/1Mk5peBhHlluludUXynAQCW8NAQllbNpA/view?usp=sharing" rel="nofollow noreferrer">log_return_with_features.pkl</a>, with shape <code>(23369, 30, 161)</code>. Sorry, but I cannot tell what are the features. As I apply log(F_t/F_{t-1}) on all the features and on the target (i.e. the price), then be aware I added 1e-8 to all the features before applying the log-return operation to avoid division by 0.</p> | 2018-11-14 15:02:29.643000+00:00 | 2018-11-16 12:36:37.573000+00:00 | 2018-11-16 12:36:37.573000+00:00 | python|machine-learning|time-series|zero | ['https://arxiv.org/abs/1609.03499', 'https://arxiv.org/abs/1402.3511'] | 2 |
52,386,445 | <p>For classifying images with a custom classifier, please see the following link: </p>
<p><a href="https://arxiv.org/abs/1712.03541" rel="nofollow noreferrer">https://arxiv.org/abs/1712.03541</a></p>
<p>In this paper, feature extraction is done by a CNN and classification is based on an SVM. </p> | 2018-09-18 12:19:00.267000+00:00 | 2018-09-18 12:19:00.267000+00:00 | null | null | 52,379,054 | <p>I am looking for resources that can guide me in using XGBoost as my final classifier after feature extraction by a CNN, and in how to save the features generated by the CNN and use them for classification with other techniques like random forest or XGBoost.</p> | 2018-09-18 04:41:24.777000+00:00 | 2018-09-18 12:19:00.267000+00:00 | null | python|machine-learning|transfer-learning | ['https://arxiv.org/abs/1712.03541'] | 1
47,307,894 | <p>According to the docs in the <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/mpi" rel="nofollow noreferrer">Tensorflow git repo</a>, TF actually utilizes the <a href="https://github.com/grpc/grpc" rel="nofollow noreferrer">gRPC</a> library by default, which is based on the HTTP/2 protocol rather than raw TCP/IP, and <a href="https://arxiv.org/pdf/1603.02339.pdf" rel="nofollow noreferrer">this paper</a> should give you some insight. Hope this information is useful.</p> | 2017-11-15 12:50:08.443000+00:00 | 2017-11-15 12:57:15.790000+00:00 | 2017-11-15 12:57:15.790000+00:00 | null | 46,282,671 | <p>I come from a sort of HPC background and I am just starting to learn about machine learning in general and TensorFlow in particular. I was initially surprised to find out that distributed TensorFlow is designed to communicate with TCP/IP by default though it makes sense in hindsight given what Google is and the kind of hardware it uses most commonly.</p>
<p>I am interested in experimenting with TensorFlow in a parallel way with MPI on a cluster. From my perspective, this should be advantageous because latency should be much lower due to MPI's use of Remote Direct Memory Access (RDMA) across machines without shared memory.</p>
<p>So my question is, why doesn't this approach seem to be more common given the increasing popularity of TensorFlow and machine learning ? Isn't latency a bottleneck ? Is there some typical problem that is solved, that makes this sort of solution impractical? Are there likely to be any meaningful differences between calling TensorFlow functions in a parallel way vs implementing MPI calls inside of the TensorFlow library ?</p>
<p>Thanks</p> | 2017-09-18 15:08:18.110000+00:00 | 2017-11-15 12:57:15.790000+00:00 | 2017-09-18 15:16:22.253000+00:00 | python|tensorflow|mpi|mpi4py | ['https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/mpi', 'https://github.com/grpc/grpc', 'https://arxiv.org/pdf/1603.02339.pdf'] | 3 |
46,283,283 | <p>It seems tensorflow already supports MPI, as stated at <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/mpi" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/mpi</a>
MPI support for tensorflow was also discussed at <a href="https://arxiv.org/abs/1603.02339" rel="nofollow noreferrer">https://arxiv.org/abs/1603.02339</a></p>
<p>Generally speaking, keep in mind MPI is best at sending/receiving messages, but not so great at sending notifications and acting upon events.
Last but not least, MPI support for multi-threaded applications (e.g. <code>MPI_THREAD_MULTIPLE</code>) has not always been production-ready among MPI implementations.
These were two general statements and I honestly do not know if they are relevant for tensorflow.</p> | 2017-09-18 15:40:13.410000+00:00 | 2017-09-18 15:40:13.410000+00:00 | null | null | 46,282,671 | <p>I come from a sort of HPC background and I am just starting to learn about machine learning in general and TensorFlow in particular. I was initially surprised to find out that distributed TensorFlow is designed to communicate with TCP/IP by default though it makes sense in hindsight given what Google is and the kind of hardware it uses most commonly.</p>
<p>I am interested in experimenting with TensorFlow in a parallel way with MPI on a cluster. From my perspective, this should be advantageous because latency should be much lower due to MPI's use of Remote Direct Memory Access (RDMA) across machines without shared memory.</p>
<p>So my question is, why doesn't this approach seem to be more common given the increasing popularity of TensorFlow and machine learning ? Isn't latency a bottleneck ? Is there some typical problem that is solved, that makes this sort of solution impractical? Are there likely to be any meaningful differences between calling TensorFlow functions in a parallel way vs implementing MPI calls inside of the TensorFlow library ?</p>
<p>Thanks</p> | 2017-09-18 15:08:18.110000+00:00 | 2017-11-15 12:57:15.790000+00:00 | 2017-09-18 15:16:22.253000+00:00 | python|tensorflow|mpi|mpi4py | ['https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/mpi', 'https://arxiv.org/abs/1603.02339'] | 2 |
18,707,181 | <p><code>TruncatedSVD</code> is more feature-rich. It has the <a href="http://arxiv.org/abs/1309.0238">scikit-learn API</a>, so you can put it in a <code>sklearn.Pipeline</code> object and call <code>transform</code> on a new matrix instead of having to figure out the matrix multiplications yourself. It offers two algorithms: either a fast randomized SVD solver (the default), or <code>scipy.sparse.svds</code>.</p>
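<p>A small usage sketch (added here, not part of the original answer; the matrix sizes and densities are arbitrary):</p>
<pre><code>from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

X = sparse_random(100, 50, density=0.1, random_state=0)      # sparse training matrix
svd = TruncatedSVD(n_components=5, random_state=0)
X_reduced = svd.fit_transform(X)                              # fit + project
X_new = sparse_random(10, 50, density=0.1, random_state=1)
X_new_reduced = svd.transform(X_new)                          # reuse on new data
</code></pre>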
<p>(Full disclosure: I wrote <code>TruncatedSVD</code>.)</p> | 2013-09-09 21:17:53.283000+00:00 | 2013-09-09 21:17:53.283000+00:00 | null | null | 18,706,863 | <p>I see that the documentation for both <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html" rel="noreferrer">sklearn.decomposition.TruncatedSVD</a> and <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.svds.html" rel="noreferrer">scipy.sparse.linalg.svds</a> mention that they both perform <code>SVD</code> for sparse matrices. What is the difference between them?</p>
<p>Thanks.</p> | 2013-09-09 20:54:43.397000+00:00 | 2019-09-23 14:53:16.277000+00:00 | 2013-09-09 21:13:58.140000+00:00 | scipy|scikit-learn|svd | ['http://arxiv.org/abs/1309.0238'] | 1 |
71,734,920 | <p>I just found this open source project focused on dataflow analysis for Python. Check it out!
<a href="https://github.com/SMAT-Lab/Scalpel/" rel="nofollow noreferrer">https://github.com/SMAT-Lab/Scalpel/</a></p>
<p>It's made in Python too; haven't used it, but looks very interesting!</p>
<p>This is the pre-print of their paper:
<a href="https://arxiv.org/pdf/2202.11840.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2202.11840.pdf</a></p> | 2022-04-04 09:30:31.503000+00:00 | 2022-04-04 09:35:44.610000+00:00 | 2022-04-04 09:35:44.610000+00:00 | null | 61,827,351 | <p>I've been struggling for quite some time to find a static dataflow graph generator for Python.</p>
<p>This is my ideal:
Given a small python script <code>example.py</code>, (written in Python3), return some representation of the data flow graph. </p>
<p>I was able to achieve this result using IBM's pyflowgraph, <a href="https://github.com/IBM/pyflowgraph" rel="nofollow noreferrer">https://github.com/IBM/pyflowgraph</a> which outputs data in <code>graph.ml</code> format, unfortunately this package only performs dynamic analysis. </p>
<p>I'm wondering if anyone knows of a DFG tool that could do this type of static dataflow analysis for Python?</p> | 2020-05-15 19:56:58.443000+00:00 | 2022-04-04 09:35:44.610000+00:00 | null | python|programming-languages|static-analysis|dataflow|graphml | ['https://github.com/SMAT-Lab/Scalpel/', 'https://arxiv.org/pdf/2202.11840.pdf'] | 2 |
49,336,096 | <p>As described in <a href="https://networkx.github.io/documentation/networkx-1.9/reference/generated/networkx.algorithms.cycles.simple_cycles.html#r230" rel="nofollow noreferrer">NetworkX documentation</a>, <code>simple_cycles</code> uses Johnson's algorithm to find elementary cycles. The complexity of the algorithm is <code>O((V+E).(1+C))</code> where </p>
<ul>
<li><code>V</code> is the number of vertices;</li>
<li><code>E</code> is the number of edges;</li>
<li><code>C</code> the number of cycles.</li>
</ul>
<p>In your case <code>V+E ~= 150,000</code>, so assuming the python process is not overloaded, we could expect the running time to be <code>150,000.K.(1+C)</code> for some constant <code>K</code>.</p>
<p>To try to find an estimate of <code>K</code>, you can run the algorithm on smaller graphs, using powers of 10 (<code>V+E = 10, 100, 1000 ...</code>) to ensure the running time of <code>simple_cycles</code> remains proportional to <code>(V+E)(1+C)</code>, get a rough value of <code>K</code> and estimate the running time for your graph based on the number of cycles you expect to find. More precisely, if we denote by R(V+E,C) the actual running time for each of the experimental smaller graphs, and <code>C0, C1, ...Cn</code> their respective number of cycles, then we would expect to have</p>
<pre><code>R(100,C1)  / R(10,C0)  ~= 10.[(1+C1) / (1+C0)]
R(1000,C2) / R(100,C1) ~= 10.[(1+C2) / (1+C1)]
...
</code></pre>
<p>If the <code>simple_cycles</code> running time does not exhibit the complexity of Johnson's algorithm, then there is a non-algorithmic factor which is slowing down/preventing the computation - this would then need to be investigated.</p>
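<p>A rough sketch of such a timing experiment (added here, not part of the original answer; it assumes <code>G</code> is the full graph already unpickled as in the question):</p>
<pre><code>import time
import networkx as nx

for n in (1000, 2000, 3000):
    H = G.copy()
    H.remove_nodes_from(list(G.nodes())[n:])     # keep only the first n nodes
    t0 = time.time()
    cycles = len(list(nx.simple_cycles(H)))
    print(n, H.number_of_edges(), cycles, time.time() - t0)
</code></pre>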
<p><strong>Follow-up</strong>
These are the results of some investigation with the graph you provided. I tried to compute the number of cycles with the NetworkX library for smaller subgraphs and reproduce below some interesting results: the number of nodes and edges for each subgraph, along with the number of cycles computed.</p>
<pre><code>#Nodes | #Edges | #Cycles (computed)
----------------------------------------
 1,000 |    186 |        17
 2,000 |    675 |        37
 3,000 |  1,460 |        72
 4,000 |  2,538 |     2,147
 4,250 |  2,881 | 2,351,883
</code></pre>
<p>I stopped at <code>#Nodes = 4000</code> for which I could not get any result within minutes.</p>
<p>Let's calculate, for each of these values, the value </p>
<pre><code>log10(C)/E with C = #Cycles and E = #Edges.

E = #Edges | C = #Cycles (computed) | log10(C)/E |
----------------------------------------------------
       186 |                     17 |     0.0067 |
       675 |                     37 |     0.0023 |
     1,460 |                     72 |     0.0013 |
     2,538 |                  2,147 |     0.0013 |
     2,881 |              2,351,883 |     0.0022 |
</code></pre>
<p>As we can see, at least for subgraphs of <code>G</code> with less than <code>~2,500</code> edges, the number of cycles follows roughly the following exponential law</p>
<pre><code>log10(C) = 0.0013.E => C = 1.003^E
</code></pre>
<p>The empirical 1.003 comes from the topology of your graph (as a side note, the <a href="https://arxiv.org/abs/1702.02662" rel="nofollow noreferrer">maximum theoretical number of cycles given the number of edges</a> is estimated to be <code>1.443^E</code>.).</p>
<p>Note that we don't know if this constant remains the same as the graph gets bigger - this would be an interesting thing to check, but using a different method than this brute-force one (extrapolating this law, we would already have millions of cycles by 5,000 edges).</p>
<p>In the case (and only in this case) that the constant does not change as the graph gets bigger up to the 150,000 edges of <code>G</code>, the approximate number of cycles would be... <code>~10^359</code></p>
<p>=> It seems you are actually hitting an algorithmic complexity wall. With this in mind, I don't know which alternative you wish to choose to move forward - maybe there exist non-exponential approximation algorithms?</p>
<p><strong>Note</strong><br>
To experiment with subgraphs of <code>G</code> I used the following commands - specifying a target number of nodes, for instance for 3,000 nodes:</p>
<pre><code>H = G.copy()
H.remove_nodes_from(list(G.nodes())[3000:])   # keep only the first 3,000 nodes
len(list(nx.simple_cycles(H)))
</code></pre> | 2018-03-17 12:36:39.243000+00:00 | 2018-03-17 17:26:29.230000+00:00 | 2018-03-17 17:26:29.230000+00:00 | null | 49,332,606 | <p>Hi I am dealing with a bit <strong>big</strong> (sorry for ambiguous term) graph, which has <strong>29,981 number of nodes</strong> with <strong>150,000 directed-edges</strong> in it. </p>
<p>I am dealing with it with module <strong>networkx</strong> which is fairly widely used nowadays among graph theorists. </p>
<p>I had executed following script early this morning in Jupyter but can't estimate when to finish:</p>
<pre><code>import netowkx as nx
import pickle
# to read the graph
with open ('/home/zachary/H{}'.format("29981"), 'rb') as fp:
H = pickle.load(fp)
print(list(nx.simple_cycles(H)))
</code></pre>
<p>How can I roughly estimate when this script will finish?</p>
<p>I know a bit about <code>big O</code> and <code>small o</code> notation, but this kind of theoretical knowledge has not yet matured in my mind enough to use it practically to calculate and estimate the computation time.</p> | 2018-03-17 04:48:20.833000+00:00 | 2018-03-17 17:26:29.230000+00:00 | null | python|time-complexity|networkx|scalability | ['https://networkx.github.io/documentation/networkx-1.9/reference/generated/networkx.algorithms.cycles.simple_cycles.html#r230', 'https://arxiv.org/abs/1702.02662'] | 2
54,873,670 | <p>You can use weight maps (as proposed in the <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-Net paper</a>). In those weight maps, you can give some regions more or less weight. Here is some pseudocode:</p>
<pre><code>loss = compute_categorical_crossentropy()
weighted_loss = loss * weight_map # using element-wise multiplication
</code></pre> | 2019-02-25 19:51:16.503000+00:00 | 2019-02-25 19:51:16.503000+00:00 | null | null | 43,968,028 | <p>I have built a Keras model for image segmentation (U-Net). However in my samples some misclassifications (areas) are not that important, while other are crucial, so I want to assign higher weight in loss function to them. To complicate things further, I would like some misclassifications (class 1 instead of 2) to have very high penalty while inverse (class 2 instead of 1) shouldn't be penalized that much.</p>
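<p>A hedged Keras sketch of that idea (added here, not part of the original answer; this simpler variant weights each pixel by the weight of its ground-truth class, while the asymmetric class-pair weighting from the question would need a full weight matrix):</p>
<pre><code>import keras.backend as K

def make_weighted_cce(class_weights):            # e.g. class_weights = [1.0, 5.0, 0.5]
    w = K.constant(class_weights)
    def loss(y_true, y_pred):
        # y_true, y_pred: (batch, H, W, n_classes)
        weight_map = K.sum(y_true * w, axis=-1)              # per-pixel weight, (batch, H, W)
        cce = K.categorical_crossentropy(y_true, y_pred)     # per-pixel loss, (batch, H, W)
        return K.mean(cce * weight_map)
    return loss
</code></pre>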
<p>The way I see it, I need to use a sum (across all of the pixels) of weighted categorical crossentropy, but the best I could find is <a href="https://github.com/fchollet/keras/issues/2115" rel="nofollow noreferrer">this</a>:</p>
<pre><code>def w_categorical_crossentropy(y_true, y_pred, weights):
nb_cl = len(weights)
final_mask = K.zeros_like(y_pred[:, 0])
y_pred_max = K.max(y_pred, axis=1)
y_pred_max = K.reshape(y_pred_max, (K.shape(y_pred)[0], 1))
y_pred_max_mat = K.cast(K.equal(y_pred, y_pred_max), K.floatx())
for c_p, c_t in product(range(nb_cl), range(nb_cl)):
final_mask += (weights[c_t, c_p] * y_pred_max_mat[:, c_p] * y_true[:, c_t])
return K.categorical_crossentropy(y_pred, y_true) * final_mask
</code></pre>
<p>However, this code only works with a single prediction, and my knowledge of Keras' inner workings is lacking (and the math side of it is not much better).</p>
<p>I would appreciate some pointers.</p>
<p>EDIT: my question is similar to <a href="https://stackoverflow.com/questions/43033436/how-to-do-point-wise-categorical-crossentropy-loss-in-keras">How to do point-wise categorical crossentropy loss in Keras?</a>, except that I would like to use <strong><em>weighted</em></strong> categorical crossentropy.</p> | 2017-05-14 19:31:14.537000+00:00 | 2019-02-25 19:51:16.503000+00:00 | 2017-05-23 12:18:17.873000+00:00 | keras|loss|cross-entropy | ['https://arxiv.org/abs/1505.04597'] | 1 |
59,853,364 | <p>The softmax is a problematic way to estimate the confidence of the model's prediction.</p>
<p>There are a few recent papers about this topic. </p>
<p>You can look for "calibration" of neural networks in order to find relevant papers.</p>
<p>This is one example you can start with - <a href="https://arxiv.org/pdf/1706.04599.pdf" rel="noreferrer">https://arxiv.org/pdf/1706.04599.pdf</a></p> | 2020-01-22 05:50:57.780000+00:00 | 2020-01-22 05:50:57.780000+00:00 | null | null | 59,851,961 | <p>I am using a deep neural network model (implemented in <code>keras</code>) to make predictions. Something like this:</p>
<pre><code>def make_model():
model = Sequential()
model.add(Conv2D(20,(5,5), activation = "relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(20, activation = "relu"))
model.add(Lambda(lambda x: tf.expand_dims(x, axis=1)))
model.add(SimpleRNN(50, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss = "binary_crossentropy", optimizer = adagrad, metrics = ["accuracy"])
return model
model = make_model()
model.fit(x_train, y_train, validation_data = (x_validation,y_validation), epochs = 25, batch_size = 25, verbose = 1)
## Prediction:
prediction = model.predict_classes(x)
probabilities = model.predict_proba(x) # I assume these are the probabilities of the class being predicted
</code></pre>
<p>My problem is a binary classification problem. I wish to calculate the confidence score of each of these <code>prediction</code>s, i.e. I wish to know whether my model is 99% certain it is "0" or only 58% certain it is "0".</p>
<p>I have found some views on how to do it, but can't implement them. The approach I wish to follow says: "With classifiers, when you output you can interpret values as the probability of belonging to each specific class. You can use their distribution as a rough measure of how confident you are that an observation belongs to that class."</p>
<p>How should I predict with something like the above model so that I get its confidence for each prediction? I would appreciate some practical examples (preferably in Keras).</p> | 2020-01-22 02:52:32.150000+00:00 | 2021-06-20 07:46:54.687000+00:00 | 2020-01-22 05:21:39.250000+00:00 | tensorflow|machine-learning|keras|confidence-interval|uncertainty | ['https://arxiv.org/pdf/1706.04599.pdf'] | 1
62,916,976 | <p>Copying and pasting from a Slack discussion we just had on this in another place:</p>
<ul>
<li><p>This is somewhat wrong / it depends / maybe too narrow a view.</p>
</li>
<li><p>For starters, R itself even uses a little OpenMP (on platforms where it can).</p>
</li>
<li><p>Next up, you can pick BLAS that do all your matrix math in parallel.</p>
</li>
<li><p>Next up is client code that can be multi-threaded and often is; package <a href="http://cloud.r-project.org/package=data.table" rel="nofollow noreferrer">data.table</a> is a <em>great</em> and famous example.</p>
</li>
<li><p>And maybe (at least to me) also importantly: if I set <code>options("Ncpu"=6)</code> on my six core desktop, I get <code>install.packages()</code> to install six packages in parallel.</p>
</li>
<li><p>Same for <code>make -j ...</code></p>
</li>
</ul>
<p>I say a little more on R and parallel computing (at different levels) in this <a href="https://arxiv.org/abs/1912.11144" rel="nofollow noreferrer">arXiv preprint</a> now out in <a href="https://dx.doi.org/10.1002/wics.1515" rel="nofollow noreferrer">this (paywalled) WIREs article</a>.</p>
<p>Now, lastly, you say macOS. That has a slew of other difficulties with OpenMP for which you should peruse the r-sig-mac list and maybe other repo (again, <a href="http://cloud.r-project.org/package=data.table" rel="nofollow noreferrer">data.table</a> covers that). I don't use macOS so I cannot say much more---other than that I see a lot of people having issues.</p>
<p>Lastly, of course, and not to take away from it: yes, R's inner interpreter is single-threaded and will remain so. But that does not mean we should rush out and buy single-core computers. You will get <em>some</em> benefits from more cores, but exactly <em>how much</em> depends critically on the workloads.</p>
<pre><code> if (detectCores()-1 > 1) {
cl <- makeCluster(detectCores()-1)
registerDoParallel(cl)
tdm <- DocumentTermMatrix(corpus, control = list(dictionary = Terms(tdm), removePunctuation = TRUE, stopwords = TRUE, stemming = TRUE, removeNumbers = TRUE))
stopCluster(cl)
}
</code></pre>
<p>But the <em><strong>vast</strong></em> majority (probably 99.5%) of R code I write is <em>not</em> wrapped in additional code that explicitly spreads the work across more than one core.</p>
<p>Is it fair to assume this code is running across 1 single core? Or would answering this require delving into each library used, and its functions (e.g. <code>tidyverse</code>, <code>data.table</code> etc)?</p>
<p>Note: aside from some timed experiments, I do not know a lot about how R and the hardware interact, so if my understanding is flawed (e.g. wrong assumptions), please point it out.</p>
<h3>Background</h3>
<p>The reason this is of great interest is to help decide between fewer cores at a higher clock speed vs more cores at a lower clock speed, à la the latest MacBook. It would be unfortunate to pay more for a 'better' processor, only to have most day-to-day R tasks run slower due to the slower clock speed (presuming they're running only on one core).</p>
<p><a href="https://i.stack.imgur.com/Hokrum.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hokrum.png" alt="enter image description here" /></a></p> | 2020-07-14 17:19:29.790000+00:00 | 2020-07-27 16:13:20.847000+00:00 | 2020-07-27 16:13:20.847000+00:00 | r|foreach|hpc|doparallel | ['http://cloud.r-project.org/package=data.table', 'https://arxiv.org/abs/1912.11144', 'https://dx.doi.org/10.1002/wics.1515', 'http://cloud.r-project.org/package=data.table'] | 4 |
40,318,970 | <p>I suppose that by LDA you mean Latent Dirichlet Allocation, if you mean Linear Discriminant Analysis you can simply use <a href="http://scikit-learn.org/stable/modules/generated/sklearn.discriminant_analysis.LinearDiscriminantAnalysis.html#sklearn.discriminant_analysis.LinearDiscriminantAnalysis" rel="nofollow">sklearn.discriminant_analysis.LinearDiscriminantAnalysis</a> as a classifier and compare accuracy or precision and recall or whatever.</p>
<p>You should use Latent Dirichlet Allocation as a transformer before feeding the topics representation to SVM. For instance the following code does just that (you could of course use Pipelines and cross validation but that is just an example).</p>
<pre><code>import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC
# get the dataset
faces = fetch_olivetti_faces()
X = (faces.data*255).astype(int)
y = faces.target
# create a test set and a training set
idx = np.arange(len(X))
np.random.shuffle(idx)
train = idx[:2*len(X)/3]
test = idx[2*len(X)/3:]
# create the models
lda = LatentDirichletAllocation(n_topics=10)
svm = LinearSVC(C=10)
# evaluate everything
lda.fit(X[train])
T = lda.transform(X)
print svm.fit(T[train], y[train]).score(T[test], y[test])
</code></pre>
<p>LDA is not particularly <a href="https://arxiv.org/abs/1003.0783" rel="nofollow">well suited for classification</a>, and thus quite a few variations have been developed. We developed one such supervised variation for classification which we presented at ACM Multimedia this year. You can read our paper <a href="http://mug.ee.auth.gr/wp-content/uploads/fsLDA.pdf" rel="nofollow">Fast Supervised LDA</a> and get code and documentation from <a href="http://ldaplusplus.com/" rel="nofollow">http://ldaplusplus.com/</a>. Finally you can see an <a href="http://ldaplusplus.com/topic-inference-visualization/" rel="nofollow">example of using Olivetti Faces with LDA++</a>.</p> | 2016-10-29 12:13:45.500000+00:00 | 2016-10-29 12:13:45.500000+00:00 | null | null | 40,304,498 | <p>I want to benchmark the performance of face recognition using SVM and LDA. Could you please give me an idea of how to implement it?</p> | 2016-10-28 11:52:30.037000+00:00 | 2016-10-29 12:13:45.500000+00:00 | null | python-2.7 | ['http://scikit-learn.org/stable/modules/generated/sklearn.discriminant_analysis.LinearDiscriminantAnalysis.html#sklearn.discriminant_analysis.LinearDiscriminantAnalysis', 'https://arxiv.org/abs/1003.0783', 'http://mug.ee.auth.gr/wp-content/uploads/fsLDA.pdf', 'http://ldaplusplus.com/', 'http://ldaplusplus.com/topic-inference-visualization/'] | 5
38,132,833 | <p><strong>The algorithm is just slow.</strong></p>
<p>Sympy explains <a href="http://docs.sympy.org/0.7.2/modules/matrices/matrices.html#sympy.matrices.matrices.MatrixBase.berkowitz" rel="nofollow noreferrer">the Berkowitz method in its documentation</a>, and references <a href="http://www.sciencedirect.com/science/article/pii/0020019084900188" rel="nofollow noreferrer">"On computing the determinant in small parallel time using a small number of processors"</a> ; for its implementation, <a href="http://docs.sympy.org/0.7.2/_modules/sympy/matrices/matrices.html#MatrixBase.berkowitz" rel="nofollow noreferrer">look at the open-source sympy code</a>.</p>
<p>The complexity of Berkowitz is pretty hard to understand, and it looks like if you don't want to brute force the proof of its correctness <a href="http://arxiv.org/pdf/math/0201315.pdf" rel="nofollow noreferrer">then you need to get involved in some pretty hairy combinatorics</a>.</p>
<p><em>The algorithm is fast for highly parallelized architectures</em>; it's mainly motivated by the fact that Gaussian Elimination doesn't parallelize well. Formally, it's in the <a href="https://en.wikipedia.org/wiki/NC_%28complexity%29#The_NC_hierarchy" rel="nofollow noreferrer">class <code>NC^2</code></a>. I might guess that your tests weren't being run on such an architecture. Some researchers into the algorithm <a href="https://cstheory.stackexchange.com/questions/12448/smallest-known-formula-for-the-determinant">seem to be contributors on CS.SE</a>, if you have more questions on that topic.</p>
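<p>If you want to see the cost on your own machine, here is a small, hedged timing sketch (the method names are those accepted by current sympy; older releases spell them differently):</p>
<pre><code>import time
from sympy import symbols, Matrix

xs = symbols('x:16')     # 16 distinct symbols, purely for illustration
M = Matrix(4, 4, xs)     # a fully symbolic 4x4 matrix
for method in ('berkowitz', 'bareiss'):
    t0 = time.time()
    M.det(method=method)
    print(method, time.time() - t0)
</code></pre>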
<p><strong>The Polynomial Call is Slow</strong></p>
<p><a href="http://docs.sympy.org/0.6.7/modules/polynomials.html#sympy.polys.Poly" rel="nofollow noreferrer">From the docs</a>, there are multiple ways of constructing a polynomial, dependent on what type of collection is passed into the constructor (list <code>[1]</code>, tuple <code>[2]</code>, or dictionary <code>[3]</code>); they result in different validation and have very different performance. I would point you to this note in that documentation (emphasis mine, capitalization their's):</p>
<blockquote>
<p>For interactive usage choose <code>[1]</code> as it’s safe and relatively fast.</p>
<p>CAUTION: Use <code>[2]</code> or <code>[3]</code> internally for time critical algorithms, when
you know that coefficients and monomials will be valid sympy
expressions. Use them with caution! If the coefficients are integers
instead of sympy integers (e.g. 1 instead of S(1)) <strong>the polynomial will
be created but you may run into problems if you try to print the
polynomial</strong>. If the monomials are not given as tuples of integers you
will have problems.</p>
</blockquote>
<hr />
<p>Sympy also reserves the right to lazily evaluate expressions until their output is needed. This is a significant part of the benefit of symbolic calculations - mathematical simplification can result in precision gains and performance gains, but it also may mean that the actual evaluation of complex expressions may be delayed until unexpected times.</p> | 2016-06-30 20:38:37.860000+00:00 | 2016-07-01 14:51:14.723000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 38,126,043 | <p>At least this is how it appears. The following code behaved correctly for 3x3 and 6x6 matrices.</p>
<pre><code> deter = mat.det(method='berkowitz')
#self.resultFileHandler.writeLogStr(str(deter))
a = sy_polys.Poly(deter, k)
</code></pre>
<p>For 3x3 it takes ~0.8s to execute this code, for 6x6 it takes ~288s (with only 650ms for the det function, the rest for the Poly). For 10x10, either the complexity has ramped up at a colossal rate or some other reason is preventing it from returning from the Poly call (I waited a week). No exceptions are thrown.</p>
<p>The elements of the determinants consist of large symbolic polynomials.</p>
<p>I was on 0.7.1 and just upgraded to 1.0 (problem in both versions).</p>
<p>I added the logging to try to get the determinant written to a file, but it sticks again in the str(deter) function call. If I break, my debugger can't display deter (probably too large for the debugger). </p>
<p>Here is a stack:</p>
<pre><code>MainThread - pid_135368_id_42197520
_print [printer.py:262]
_print_Add [str.py:56]
_print [printer.py:257]
parenthesize [str.py:29]
_print_Mul [str.py:290]
_print [printer.py:257]
_print_Add [str.py:56]
_print [printer.py:257]
parenthesize [str.py:29]
_print_Mul [str.py:290]
_print [printer.py:257]
_print_Add [str.py:56]
_print [printer.py:257]
parenthesize [str.py:29]
_print_Mul [str.py:290]
_print [printer.py:257]
_print_Add [str.py:56]
_print [printer.py:257]
parenthesize [str.py:29]
_print_Mul [str.py:290]
_print [printer.py:257]
_print_Add [str.py:56]
_print [printer.py:257]
parenthesize [str.py:29]
_print_Mul [str.py:290]
_print [printer.py:257]
_print_Add [str.py:56]
_print [printer.py:257]
parenthesize [str.py:29]
_print_Mul [str.py:290]
_print [printer.py:257]
_print_Add [str.py:56]
_print [printer.py:257]
parenthesize [str.py:29]
_print_Mul [str.py:290]
_print [printer.py:257]
_print_Add [str.py:56]
_print [printer.py:257]
parenthesize [str.py:29]
_print_Mul [str.py:290]
_print [printer.py:257]
_print_Add [str.py:56]
_print [printer.py:257]
parenthesize [str.py:29]
_print_Mul [str.py:290]
_print [printer.py:257]
_print_Add [str.py:56]
_print [printer.py:257]
parenthesize [str.py:29]
_print_Mul [str.py:290]
_print [printer.py:257]
_print_Add [str.py:56]
_print [printer.py:257]
doprint [printer.py:233]
sstr [str.py:748]
__str__ [basic.py:396]
_getRoots_sympy_Poly_nroots [__init__.py:91]
getRoots [__init__.py:68]
findPolyRoots [__init__.py:697]
_getNroots [polefinder.py:97]
_doForN [polefinder.py:60]
_incN [polefinder.py:52]
__init__ [polefinder.py:39]
_doPoleFind [polefinderwrap.py:33]
_polesForPos [polefinderwrap.py:47]
<module> [polefinderwrap.py:60]
run [pydevd.py:937]
<module> [pydevd.py:1530]
</code></pre>
<p>OK, I've got an exception from the str function. Seems likely that the polynomial has become too large.</p>
<pre><code>Traceback (most recent call last):
File "E:\Peter's Documents\PhD\Code\Git\ProtoQScat\multichannel\qscat\ratsmat.\polefinder.py", line 60, in _doForN
roots = self._getNroots(N)
File "E:\Peter's Documents\PhD\Code\Git\ProtoQScat\multichannel\qscat\ratsmat.\polefinder.py", line 97, in _getNroots
roots = ratSmat.findPolyRoots(False)
File "E:\Peter's Documents\PhD\Code\Git\ProtoQScat\multichannel\qscat\numerical/..\ratsmat\__init__.py", line 697, in findPolyRoots
roots = self.polyRootSolve.getRoots(mat, k)
File "E:\Peter's Documents\PhD\Code\Git\ProtoQScat\multichannel\qscat\numerical/..\ratsmat\__init__.py", line 68, in getRoots
ret = self._getRoots_sympy_Poly_nroots(mat, k)
File "E:\Peter's Documents\PhD\Code\Git\ProtoQScat\multichannel\qscat\numerical/..\ratsmat\__init__.py", line 91, in _getRoots_sympy_Poly_nroots
self.resultFileHandler.writeLogStr(str(deter))
File "C:\Python27\lib\site-packages\sympy\core\basic.py", line 396, in __str__
return sstr(self, order=None)
File "C:\Python27\lib\site-packages\sympy\printing\str.py", line 748, in sstr
s = p.doprint(expr)
File "C:\Python27\lib\site-packages\sympy\printing\printer.py", line 233, in doprint
return self._str(self._print(expr))
File "C:\Python27\lib\site-packages\sympy\printing\printer.py", line 257, in _print
return getattr(self, printmethod)(expr, *args, **kwargs)
File "C:\Python27\lib\site-packages\sympy\printing\str.py", line 56, in _print_Add
t = self._print(term)
File "C:\Python27\lib\site-packages\sympy\printing\printer.py", line 257, in _print
return getattr(self, printmethod)(expr, *args, **kwargs)
File "C:\Python27\lib\site-packages\sympy\printing\str.py", line 290, in _print_Mul
a_str = [self.parenthesize(x, prec) for x in a]
File "C:\Python27\lib\site-packages\sympy\printing\str.py", line 29, in parenthesize
return "(%s)" % self._print(item)
File "C:\Python27\lib\site-packages\sympy\printing\printer.py", line 257, in _print
return getattr(self, printmethod)(expr, *args, **kwargs)
File "C:\Python27\lib\site-packages\sympy\printing\str.py", line 56, in _print_Add
t = self._print(term)
File "C:\Python27\lib\site-packages\sympy\printing\printer.py", line 257, in _print
return getattr(self, printmethod)(expr, *args, **kwargs)
File "C:\Python27\lib\site-packages\sympy\printing\str.py", line 290, in _print_Mul
a_str = [self.parenthesize(x, prec) for x in a]
File "C:\Python27\lib\site-packages\sympy\printing\str.py", line 29, in parenthesize
return "(%s)" % self._print(item)
File "C:\Python27\lib\site-packages\sympy\printing\printer.py", line 257, in _print
return getattr(self, printmethod)(expr, *args, **kwargs)
File "C:\Python27\lib\site-packages\sympy\printing\str.py", line 56, in _print_Add
t = self._print(term)
File "C:\Python27\lib\site-packages\sympy\printing\printer.py", line 257, in _print
return getattr(self, printmethod)(expr, *args, **kwargs)
File "C:\Python27\lib\site-packages\sympy\printing\str.py", line 290, in _print_Mul
a_str = [self.parenthesize(x, prec) for x in a]
File "C:\Python27\lib\site-packages\sympy\printing\str.py", line 29, in parenthesize
return "(%s)" % self._print(item)
File "C:\Python27\lib\site-packages\sympy\printing\printer.py", line 257, in _print
return getattr(self, printmethod)(expr, *args, **kwargs)
File "C:\Python27\lib\site-packages\sympy\printing\str.py", line 69, in _print_Add
return sign + ' '.join(l)
MemoryError
</code></pre>
<p>EDIT:
Following on from the answer below, here is a profile plot against the determinant size (channel). Ignore N (on the y axis); it is another parameter of the calculation (it governs the size of the polys in the elements).
<a href="https://i.stack.imgur.com/2lyBO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2lyBO.png" alt="enter image description here"></a></p> | 2016-06-30 14:24:45.723000+00:00 | 2016-07-12 14:40:50.063000+00:00 | 2016-07-01 19:33:31.463000+00:00 | python|sympy | ['http://docs.sympy.org/0.7.2/modules/matrices/matrices.html#sympy.matrices.matrices.MatrixBase.berkowitz', 'http://www.sciencedirect.com/science/article/pii/0020019084900188', 'http://docs.sympy.org/0.7.2/_modules/sympy/matrices/matrices.html#MatrixBase.berkowitz', 'http://arxiv.org/pdf/math/0201315.pdf', 'https://en.wikipedia.org/wiki/NC_%28complexity%29#The_NC_hierarchy', 'https://cstheory.stackexchange.com/questions/12448/smallest-known-formula-for-the-determinant', 'http://docs.sympy.org/0.6.7/modules/polynomials.html#sympy.polys.Poly'] | 7 |
71,091,875 | <p>I think you are looking for some kind of action mask implementation. In several games/environments, some actions are invalid in a particular state (that is not your case, but it could be a first approach). You can check this <a href="https://arxiv.org/abs/2006.14171" rel="nofollow noreferrer">paper</a> and the <a href="https://github.com/vwxyzjn/invalid-action-masking" rel="nofollow noreferrer">github</a></p> | 2022-02-12 12:31:31.827000+00:00 | 2022-02-12 12:31:31.827000+00:00 | null | null | 71,088,010 | <p>The idea is to initially calibrate the neural network with some prior knowledge before releasing the algorithm to evolve on its own.
To make the question simpler, imagine that an agent can take 10 actions (discrete space). Instead of training the PPO algorithm to figure out by itself which actions are best for each state, I would like to perform training by assuming that certain actions were performed in certain states.
I'm using Stable Baselines with Gym.</p>
<p>I thought about creating an action wrapper like this:</p>
<pre><code>class RandomActionWrapper(gym.ActionWrapper):
def __init__(self, env):
super(RandomActionWrapper, self).__init__(env)
def action(self, action):
a = self.env.action_space.sample()
return a
</code></pre>
<p>PS: this wrapper is just a proof of concept, choosing random actions all the time, but the model just doesn't learn that way (I simulated many iterations in ridiculously simple-to-learn custom environments, something like: "action 2 always results in reward=1 while other actions result in reward=0").
Apparently the updates on the network are being made considering the actions that the model chose (the model always predicts actions by itself) while the rewards are being calculated based on the actions defined in my wrapper. This mismatch makes learning impossible.</p> | 2022-02-12 00:34:49.397000+00:00 | 2022-02-12 18:09:43.640000+00:00 | null | deep-learning|reinforcement-learning|openai-gym|stable-baselines | ['https://arxiv.org/abs/2006.14171', 'https://github.com/vwxyzjn/invalid-action-masking'] | 2 |
39,890,190 | <p>It is a very common mistake to forget that the activations, gradients and optimizer moment tracking variables also take VRAM, not just the parameters, increasing memory usage quite a bit. The backprop calculations themselves make it so the training phase takes almost double the VRAM of forward / inference use of the neural net, and the Adam optimizer triples the space usage.</p>
<p>So, in the beginning when the network is created, only the parameters are allocated. However, when the training starts, the model activations, backprop computations and the optimizer's tracking variables get allocated, increasing memory use by a large factor.</p>
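<p>As a back-of-the-envelope sketch (my assumptions: float32 parameters and an Adam-style optimizer; the parameter count is the one <code>model.summary()</code> reports in the question below):</p>
<pre><code>params = 206538153                  # total parameters from model.summary()
bytes_per_param = 4                 # float32
weights = params * bytes_per_param  # ~826 MB just for the weights
grads = weights                     # one gradient per parameter during training
optimizer_state = 2 * weights       # e.g. Adam keeps two moment estimates per parameter
print((weights + grads + optimizer_state) / 2**20, "MiB before counting activations")
</code></pre>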
<p>To allow the training of larger models, people:</p>
<ul>
<li>use <strong>model parallelism</strong> to spread the weights and computations over different accelerators</li>
<li>use <a href="https://medium.com/tensorflow/fitting-larger-networks-into-memory-583e3c758ff9" rel="nofollow noreferrer"><strong>gradient checkpointing</strong></a>, which allows a tradeoff between more computation vs lower memory use during back-propagation.</li>
<li>Potentially use a <strong>memory efficient optimizer</strong> that aims to reduce the number of tracking variables, such as <a href="https://arxiv.org/abs/1804.04235" rel="nofollow noreferrer">Adafactor</a>, for which you will find implementations for all popular deep learning frameworks.</li>
</ul>
<p>Tools to train very large models:</p>
<ul>
<li>Mesh-Tensorflow <a href="https://arxiv.org/abs/1811.02084" rel="nofollow noreferrer">https://arxiv.org/abs/1811.02084</a>
<a href="https://github.com/tensorflow/mesh" rel="nofollow noreferrer">https://github.com/tensorflow/mesh</a></li>
<li>Microsoft DeepSpeed:
<a href="https://github.com/microsoft/DeepSpeed" rel="nofollow noreferrer">https://github.com/microsoft/DeepSpeed</a> <a href="https://www.deepspeed.ai/" rel="nofollow noreferrer">https://www.deepspeed.ai/</a></li>
<li>Facebook FairScale: <a href="https://github.com/facebookresearch/fairscale" rel="nofollow noreferrer">https://github.com/facebookresearch/fairscale</a></li>
<li>Megatron-LM: <a href="https://arxiv.org/abs/1909.08053" rel="nofollow noreferrer">https://arxiv.org/abs/1909.08053</a>
<a href="https://github.com/NVIDIA/Megatron-LM" rel="nofollow noreferrer">https://github.com/NVIDIA/Megatron-LM</a></li>
<li>Article on integration in HuggingFace Transformers: <a href="https://huggingface.co/blog/zero-deepspeed-fairscale" rel="nofollow noreferrer">https://huggingface.co/blog/zero-deepspeed-fairscale</a></li>
</ul> | 2016-10-06 07:36:41.783000+00:00 | 2021-01-27 19:12:45.003000+00:00 | 2021-01-27 19:12:45.003000+00:00 | null | 39,890,147 | <p>I've been messing with Keras, and like it so far. There's one big issue I have been having, when working with fairly deep networks: When calling model.train_on_batch, or model.fit etc., Keras allocates significantly more GPU memory than what the model itself should need. This is not caused by trying to train on some really large images, it's the network model itself that seems to require a lot of GPU memory. I have created this toy example to show what I mean. Here's essentially what's going on:</p>
<p>I first create a fairly deep network, and use model.summary() to get the total number of parameters needed for the network (in this case 206538153, which corresponds to about 826 MB). I then use nvidia-smi to see how much GPU memory Keras has allocated, and I can see that it makes perfect sense (849 MB).</p>
<p>I then compile the network, and can confirm that this does not increase GPU memory usage. And as we can see in this case, I have almost 1 GB of VRAM available at this point.</p>
<p>Then I try to feed a simple 16x16 image and a 1x1 ground truth to the network, and then everything blows up, because Keras starts allocating lots of memory again, for no reason that is obvious to me. Something about training the network seems to require a lot more memory than just having the model, which doesn't make sense to me. I have trained significantly deeper networks on this GPU in other frameworks, so that makes me think that I'm using Keras wrong (or there's something wrong in my setup, or in Keras, but of course that's hard to know for sure).</p>
<p>Here's the code:</p>
<pre class="lang-python prettyprint-override"><code>from scipy import misc
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, Convolution2D, MaxPooling2D, Reshape, Flatten, ZeroPadding2D, Dropout
import os
model = Sequential()
model.add(Convolution2D(256, 3, 3, border_mode='same', input_shape=(16,16,1)))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Convolution2D(512, 3, 3, border_mode='same'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Convolution2D(256, 3, 3, border_mode='same'))
model.add(Convolution2D(32, 3, 3, border_mode='same'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(4))
model.add(Dense(1))
model.summary()
os.system("nvidia-smi")
raw_input("Press Enter to continue...")
model.compile(optimizer='sgd',
loss='mse',
metrics=['accuracy'])
os.system("nvidia-smi")
raw_input("Compiled model. Press Enter to continue...")
n_batches = 1
batch_size = 1
for ibatch in range(n_batches):
x = np.random.rand(batch_size, 16,16,1)
y = np.random.rand(batch_size, 1)
os.system("nvidia-smi")
raw_input("About to train one iteration. Press Enter to continue...")
model.train_on_batch(x, y)
print("Trained one iteration")
</code></pre>
<p>Which gives the following output for me:</p>
<pre class="lang-python prettyprint-override"><code>Using Theano backend.
Using gpu device 0: GeForce GTX 960 (CNMeM is disabled, cuDNN 5103)
/usr/local/lib/python2.7/dist-packages/theano/sandbox/cuda/__init__.py:600: UserWarning: Your cuDNN version is more recent than the one Theano officially supports. If you see any problems, try updating Theano or downgrading cuDNN to version 5.
warnings.warn(warn)
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
convolution2d_1 (Convolution2D) (None, 16, 16, 256) 2560 convolution2d_input_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 8, 8, 256) 0 convolution2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D) (None, 8, 8, 512) 1180160 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D) (None, 4, 4, 512) 0 convolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D) (None, 4, 4, 1024) 4719616 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_3[0][0]
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_4[0][0]
____________________________________________________________________________________________________
convolution2d_6 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_5[0][0]
____________________________________________________________________________________________________
convolution2d_7 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_6[0][0]
____________________________________________________________________________________________________
convolution2d_8 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_7[0][0]
____________________________________________________________________________________________________
convolution2d_9 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_8[0][0]
____________________________________________________________________________________________________
convolution2d_10 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_9[0][0]
____________________________________________________________________________________________________
convolution2d_11 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_10[0][0]
____________________________________________________________________________________________________
convolution2d_12 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_11[0][0]
____________________________________________________________________________________________________
convolution2d_13 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_12[0][0]
____________________________________________________________________________________________________
convolution2d_14 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_13[0][0]
____________________________________________________________________________________________________
convolution2d_15 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_14[0][0]
____________________________________________________________________________________________________
convolution2d_16 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_15[0][0]
____________________________________________________________________________________________________
convolution2d_17 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_16[0][0]
____________________________________________________________________________________________________
convolution2d_18 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_17[0][0]
____________________________________________________________________________________________________
convolution2d_19 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_18[0][0]
____________________________________________________________________________________________________
convolution2d_20 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_19[0][0]
____________________________________________________________________________________________________
convolution2d_21 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_20[0][0]
____________________________________________________________________________________________________
convolution2d_22 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_21[0][0]
____________________________________________________________________________________________________
convolution2d_23 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_22[0][0]
____________________________________________________________________________________________________
convolution2d_24 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_23[0][0]
____________________________________________________________________________________________________
maxpooling2d_3 (MaxPooling2D) (None, 2, 2, 1024) 0 convolution2d_24[0][0]
____________________________________________________________________________________________________
convolution2d_25 (Convolution2D) (None, 2, 2, 256) 2359552 maxpooling2d_3[0][0]
____________________________________________________________________________________________________
convolution2d_26 (Convolution2D) (None, 2, 2, 32) 73760 convolution2d_25[0][0]
____________________________________________________________________________________________________
maxpooling2d_4 (MaxPooling2D) (None, 1, 1, 32) 0 convolution2d_26[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten) (None, 32) 0 maxpooling2d_4[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 4) 132 flatten_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 1) 5 dense_1[0][0]
====================================================================================================
Total params: 206538153
____________________________________________________________________________________________________
None
Thu Oct 6 09:05:42 2016
+------------------------------------------------------+
| NVIDIA-SMI 352.63 Driver Version: 352.63 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 960 Off | 0000:01:00.0 On | N/A |
| 30% 37C P2 28W / 120W | 1082MiB / 2044MiB | 9% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1796 G /usr/bin/X 155MiB |
| 0 2597 G compiz 65MiB |
| 0 5966 C python 849MiB |
+-----------------------------------------------------------------------------+
Press Enter to continue...
Thu Oct 6 09:05:44 2016
+------------------------------------------------------+
| NVIDIA-SMI 352.63 Driver Version: 352.63 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 960 Off | 0000:01:00.0 On | N/A |
| 30% 38C P2 28W / 120W | 1082MiB / 2044MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1796 G /usr/bin/X 155MiB |
| 0 2597 G compiz 65MiB |
| 0 5966 C python 849MiB |
+-----------------------------------------------------------------------------+
Compiled model. Press Enter to continue...
Thu Oct 6 09:05:44 2016
+------------------------------------------------------+
| NVIDIA-SMI 352.63 Driver Version: 352.63 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 960 Off | 0000:01:00.0 On | N/A |
| 30% 38C P2 28W / 120W | 1082MiB / 2044MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1796 G /usr/bin/X 155MiB |
| 0 2597 G compiz 65MiB |
| 0 5966 C python 849MiB |
+-----------------------------------------------------------------------------+
About to train one iteration. Press Enter to continue...
Error allocating 37748736 bytes of device memory (out of memory). Driver report 34205696 bytes free and 2144010240 bytes total
Traceback (most recent call last):
File "memtest.py", line 65, in <module>
model.train_on_batch(x, y)
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 712, in train_on_batch
class_weight=class_weight)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1221, in train_on_batch
outputs = self.train_function(ins)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 717, in __call__
return self.function(*inputs)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in __call__
storage_map=getattr(self.fn, 'storage_map', None))
File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in __call__
outputs = self.fn()
MemoryError: Error allocating 37748736 bytes of device memory (out of memory).
Apply node that caused the error: GpuContiguous(GpuDimShuffle{3,2,0,1}.0)
Toposort index: 338
Inputs types: [CudaNdarrayType(float32, 4D)]
Inputs shapes: [(1024, 1024, 3, 3)]
Inputs strides: [(1, 1024, 3145728, 1048576)]
Inputs values: ['not shown']
Outputs clients: [[GpuDnnConv{algo='small', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='half', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0}), GpuDnnConvGradI{algo='none', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='half', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0})]]
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
</code></pre>
<p>A few things to note: </p>
<ul>
<li>I have tried both Theano and TensorFlow backends. Both have the same problems, and run out of memory at the same line. In TensorFlow, it seems that Keras preallocates a lot of memory (about 1.5 GB) so nvidia-smi doesn't help us track what's going on there, but I get the same out-of-memory exceptions. Again, this points towards an error in (my usage of) Keras (although it's hard to be certain about such things, it could be something with my setup).</li>
<li>I tried using CNMEM in Theano, which behaves like TensorFlow: It preallocates a large amount of memory (about 1.5 GB) yet crashes in the same place.</li>
<li>There are some warnings about the CudNN-version. I tried running the Theano backend with CUDA but not CudNN and I got the same errors, so that is not the source of the problem.</li>
<li>If you want to test this on your own GPU, you might want to make the network deeper/shallower depending on how much GPU memory you have to test this.</li>
<li>My configuration is as follows: Ubuntu 14.04, GeForce GTX 960, CUDA 7.5.18, CudNN 5.1.3, Python 2.7, Keras 1.1.0 (installed via pip)</li>
<li>I've tried changing the compilation of the model to use different optimizers and losses, but that doesn't seem to change anything.</li>
<li>I've tried changing the train_on_batch function to use fit instead, but it has the same problem.</li>
<li>I saw one similar question here on StackOverflow - <a href="https://stackoverflow.com/questions/35757151/why-does-this-keras-model-require-over-6gb-of-memory">Why does this Keras model require over 6GB of memory?</a> - but as far as I can tell, I don't have those issues in my configuration. I've never had multiple versions of CUDA installed, and I've double checked my PATH, LD_LIBRARY_PATH and CUDA_ROOT variables more times than I can count.</li>
<li>Julius suggested that the activation parameters themselves take up GPU memory. If this is true, can somebody explain it a bit more clearly? I have tried changing the activation function of my convolution layers to functions that are clearly hard-coded with no learnable parameters as far as I can tell, and that doesn't change anything. Also, it seems unlikely that these parameters would take up almost as much memory as the rest of the network itself.</li>
<li>After thorough testing, the largest network I can train is about 453 MB of parameters, out of my ~2 GB of GPU RAM. Is this normal? </li>
<li>After testing Keras on some smaller CNNs that do fit in my GPU, I can see that there are very sudden spikes in GPU RAM usage. If I run a network with about 100 MB of parameters, 99% of the time during training it'll be using less than 200 MB of GPU RAM. But every once in a while, memory usage spikes to about 1.3 GB. It seems safe to assume that it's these spikes that are causing my problems. I've never seen these spikes in other frameworks, but they might be there for a good reason? <strong>If anybody knows what causes them, and if there's a way to avoid them, please chime in!</strong></li>
</ul> | 2016-10-06 07:34:16.947000+00:00 | 2021-01-27 19:12:45.003000+00:00 | 2017-10-24 20:00:51.680000+00:00 | memory|tensorflow|keras|theano | ['https://medium.com/tensorflow/fitting-larger-networks-into-memory-583e3c758ff9', 'https://arxiv.org/abs/1804.04235', 'https://arxiv.org/abs/1811.02084', 'https://github.com/tensorflow/mesh', 'https://github.com/microsoft/DeepSpeed', 'https://www.deepspeed.ai/', 'https://github.com/facebookresearch/fairscale', 'https://arxiv.org/abs/1909.08053', 'https://github.com/NVIDIA/Megatron-LM', 'https://huggingface.co/blog/zero-deepspeed-fairscale'] | 10 |
59,268,949 | <p>RFC5829 describes a <a href="https://www.rfc-editor.org/rfc/rfc5829#section-3.1" rel="nofollow noreferrer">version history</a> but does not suggest a return format.</p>
<p>First we assume you have a URL pointing to each version of the resource as in:</p>
<pre><code>/path/to/resource/ - returns latest
/path/to/resource/v1 - returns version v1
/path/to/resource/v2 - returns version v2
</code></pre>
<p>So what you actually want is to return a <strong>collection of links</strong>.
The best representation for this is <a href="https://stackoverflow.com/questions/59270469/what-is-the-best-json-schema-to-use-for-a-collection-of-http-links">another question</a></p>
<p><a href="https://www.rfc-editor.org/rfc/rfc7089" rel="nofollow noreferrer">RFC7089 - HTTP Framework for Time-Based Access to Resource States - Memento</a> describes a similar thing called a <em>time-map</em> and suggests using <a href="https://www.rfc-editor.org/rfc/rfc6690" rel="nofollow noreferrer">application/link-format (RFC6690)</a>.
An example of this is given on <a href="https://www.rfc-editor.org/rfc/rfc7089#page-36" rel="nofollow noreferrer">page 36</a>:</p>
<pre><code> HTTP/1.1 200 OK
Date: Thu, 21 Jan 2010 00:06:50 GMT
Server: Apache
Content-Length: 4883
Content-Type: application/link-format
Connection: close
<http://a.example.org>;rel="original",
<http://arxiv.example.net/timemap/http://a.example.org>
; rel="self";type="application/link-format"
; from="Tue, 20 Jun 2000 18:02:59 GMT"
; until="Wed, 09 Apr 2008 20:30:51 GMT",
<http://arxiv.example.net/timegate/http://a.example.org>
; rel="timegate",
<http://arxiv.example.net/web/20000620180259/http://a.example.org>
; rel="first memento";datetime="Tue, 20 Jun 2000 18:02:59 GMT"
; license="http://creativecommons.org/publicdomain/zero/1.0/",
<http://arxiv.example.net/web/20091027204954/http://a.example.org>
; rel="last memento";datetime="Tue, 27 Oct 2009 20:49:54 GMT"
; license="http://creativecommons.org/publicdomain/zero/1.0/",
<http://arxiv.example.net/web/20000621011731/http://a.example.org>
; rel="memento";datetime="Wed, 21 Jun 2000 01:17:31 GMT"
; license="http://creativecommons.org/publicdomain/zero/1.0/",
<http://arxiv.example.net/web/20000621044156/http://a.example.org>
; rel="memento";datetime="Wed, 21 Jun 2000 04:41:56 GMT"
; license="http://creativecommons.org/publicdomain/zero/1.0/",
...
</code></pre>
<p>link-format is just the same format for links used in http for the link header itself but with whitespace removed (contrary to the example above!).</p>
<p>link-format may be more alien than JSON to API clients, so you might consider returning some kind of JSON document via content-type negotiation. There is no standard format for representing a collection of links in JSON (or rather there are several competing standards with different strengths and weaknesses). A reasonable overview is <a href="https://www.mnot.net/blog/2011/11/25/linking_in_json" rel="nofollow noreferrer">Mark Nottingham's blog</a>. Though this is old at the time of writing, the formats he mentions are still the formats you will come across if you search for suggestions.
A good suggestion might be <em>HAL+JSON</em>. See <a href="https://en.wikipedia.org/wiki/Hypertext_Application_Language" rel="nofollow noreferrer">wikipedia</a> and the <a href="https://datatracker.ietf.org/doc/html/draft-kelly-json-hal-08" rel="nofollow noreferrer">draft RFC</a> (note the draft is expired but the link is not).
Using this, a version history would look something like:</p>
<pre><code>{
"_links": {
"self": { "href": "/versions" },
"first": { "href": "/foobar/version1", "datetime": "2019-01-01T12:00:00Z" },
"memento": { "href": "/foobar/version2", "datetime": "2019-01-02T12:00:00Z" }
"latest": { "href": "/foobar/version3", "datetime": "2019-01-03T12:00:00Z" }
}
}
</code></pre>
<p>I've added the datetime attribute here (possibly incorrectly) to represent a version time and used <a href="https://www.rfc-editor.org/rfc/rfc3339" rel="nofollow noreferrer">RFC3339</a> format (which is my personal preference).</p>
<p>For both formats suggested here, you may want to consider carefully what relation names to use - see <a href="https://stackoverflow.com/q/59176515/1569204">https://stackoverflow.com/q/59176515/1569204</a></p> | 2019-12-10 14:01:47.107000+00:00 | 2019-12-10 15:23:24.697000+00:00 | 2021-10-07 13:21:02.303000+00:00 | null | 59,268,948 | <p>I am designing a ReSTful web service which provides a versioned resource.
What is an appropriate return format (content-type) for returning a <a href="https://www.rfc-editor.org/rfc/rfc5829#section-3.1" rel="nofollow noreferrer">version history</a>?</p> | 2019-12-10 14:01:47.107000+00:00 | 2019-12-10 15:23:24.697000+00:00 | 2021-10-07 13:58:44.390000+00:00 | rest|http|version-control|rfc | ['https://www.rfc-editor.org/rfc/rfc5829#section-3.1', 'https://stackoverflow.com/questions/59270469/what-is-the-best-json-schema-to-use-for-a-collection-of-http-links', 'https://www.rfc-editor.org/rfc/rfc7089', 'https://www.rfc-editor.org/rfc/rfc6690', 'https://www.rfc-editor.org/rfc/rfc7089#page-36', 'https://www.mnot.net/blog/2011/11/25/linking_in_json', 'https://en.wikipedia.org/wiki/Hypertext_Application_Language', 'https://datatracker.ietf.org/doc/html/draft-kelly-json-hal-08', 'https://www.rfc-editor.org/rfc/rfc3339', 'https://stackoverflow.com/q/59176515/1569204'] | 10 |
59,644,239 | <blockquote>
<p>I want to be able to train a model that can classify text accurately. Surprisingly, I'm getting a high accuracy (>90%) for a relatively simple model like RandomForestClassifier just by using this summing up method. Any insights?</p>
</blockquote>
<p>If you look up papers on aggregating word embeddings you'll find out that this in fact occurs sometimes, especially if the texts are shorter.</p>
<blockquote>
<p>What other way can I train my model? If I take every word and feed that into my model, how do I know how many words I should take? How do I input these words? In the form of a 2D array, where each word vector is a column?</p>
</blockquote>
<p>Have you tried keyword extraction? It can alleviate some of the problems with averaging</p>
<blockquote>
<p>In doing so, have I destroyed any meaning the words might have
possessed?</p>
</blockquote>
<p>As you remarked, you throw out information on word order. But that's not even the worst part: most of the time, for longer documents, if you embed everything the mean will get dominated by common words ("how", "like", "do", etc.). BTW, see my answer to <a href="https://stats.stackexchange.com/questions/312206/can-i-apply-word2vec-to-find-document-similarity/320657#320657">this question</a>.</p>
<p>Other than that, one trick I've seen is to average word vectors, but subtract first principal component of PCA on word embedding matrix. For details you can see for example <a href="https://github.com/PrincetonML/SIF" rel="nofollow noreferrer">this repo</a> which also links to the paper (BTW <a href="https://arxiv.org/pdf/1909.13494.pdf" rel="nofollow noreferrer">this paper</a> suggests you can ignore "Smooth Inverse Frequency" stuff since principal component reduction does the useful part).</p> | 2020-01-08 10:49:42.913000+00:00 | 2020-01-08 10:49:42.913000+00:00 | null | null | 59,643,661 | <p>For example, I have a paragraph which I want to classify in a binary manner. But because the inputs have to have a fixed length, I need to ensure that every paragraph is represented by a uniform quantity. </p>
<p>One thing I've done is taken every word in the paragraph, vectorized it using GloVe word2vec and then summed up all of the vectors to create a "paragraph" vector, which I've then fed in as an input for my model. In doing so, have I destroyed any meaning the words might have possessed? Considering these two sentences would have the same vector:
"My dog bit Dave" & "Dave bit my dog", how do I get around this? Am I approaching this wrong?</p>
<p>What other way can I train my model? If I take every word and feed that into my model, how do I know how many words I should take? How do I input these words? In the form of a 2D array, where each word vector is a column?</p>
<p>I want to be able to train a model that can classify text accurately.
Surprisingly, I'm getting a high accuracy (>90%) for a relatively simple model like RandomForestClassifier just by using this summing up method. Any insights?</p>
<p>Edit: One suggestion I have received is to instead featurize my data as a 2D array where each word is a column, on which a CNN could work. Another suggestion I received was to use transfer learning through the huggingface transformer to get a vector for the whole paragraph. Which one is more feasible?</p> | 2020-01-08 10:14:54.337000+00:00 | 2020-01-08 10:49:42.913000+00:00 | null | machine-learning|deep-learning|neural-network|nlp | ['https://stats.stackexchange.com/questions/312206/can-i-apply-word2vec-to-find-document-similarity/320657#320657', 'https://github.com/PrincetonML/SIF', 'https://arxiv.org/pdf/1909.13494.pdf'] | 3 |
37,736,149 | <p>The C99 and C11 standards define precisely what happens when the host platform can only conveniently compute to a higher precision than that of <code>float</code> and <code>double</code>. The earlier C89 (or “ANSI”) C standard did not. In C99 or C11, the compiler defines <code>FLT_EVAL_METHOD</code> to 1 or 2, which tells the programmer that floating-point constants and operations are going to be interpreted to a higher precision than that of their types.</p>
<p>This was implemented in GCC in the patch discussed in <a href="https://gcc.gnu.org/ml/gcc-patches/2008-11/msg00105.html" rel="nofollow">this message</a>.
The option <code>-fexcess-precision=standard</code> provided by the patch is enabled by default in C99 and C11, but not enabled in “ANSI” (C89) mode.</p>
<p>It does not make too much sense to try to interpret what the compiler does in C89 mode: it's a bit fuzzy, with the value of floating-point variables changing without assignments to them, or changing between optimization levels, as described in <a href="http://arxiv.org/abs/cs/0701192" rel="nofollow">this report</a>. In C99 mode, with <code>FLT_EVAL_METHOD</code> defined by the compiler to <code>2</code>, the difference <code>2.335 - 2.334</code> is computed by the compiler as a 80-bit floating-point number, the difference between the 80-bit FP representation of 2335/1000 and the 80-bit FP representation of 2334/1000. This number happens to be different from the 80-bit representation of 1/1000. This is why the second version of your test program behaves as it does without <code>-ansi</code>. In the first version of your test program, the assignments to <code>double</code> variables cause the numbers to be rounded to double-precison (64-bit) floating-point values. They are equal after both having been rounded thus.</p> | 2016-06-09 21:04:05.633000+00:00 | 2016-06-10 08:39:05.700000+00:00 | 2016-06-10 08:39:05.700000+00:00 | null | 37,731,624 | <p>Consider the two programs. First one prints "Unequal" on gcc 5.3.0 (target: i686-pc-cygwin). When -ansi option is used "Equal" is printed.</p>
<pre><code>#include <stdio.h>

int main () {
double d = 2.335 - 2.334;
double q = 0.001;
if (d == q) {
printf ("Equal\n");
} else {
printf ("Unequal\n");
}
return 0;
}
</code></pre>
<p>Second one prints "Unequal" with or without -ansi option.</p>
<pre><code>#include <stdio.h>

int main () {
if (2.335 - 2.334 == 0.001) {
printf ("Equal\n");
} else {
printf ("Unequal\n");
}
return 0;
}
</code></pre>
<p>What is the source of disparity?
Of course it is common knowledge that real numbers should not be tested for equality. I understand the implications of the IEEE754 standard on the (im-)precision of calculations involving floating point. However, to the best of my understanding, those two programs should be semantically equivalent and give the same results. </p>
<p>Is there some implicit conversion going on in the first one in C89 mode that was removed in C99?</p> | 2016-06-09 16:33:46.290000+00:00 | 2016-06-10 08:39:05.700000+00:00 | null | gcc|floating-point|ansi | ['https://gcc.gnu.org/ml/gcc-patches/2008-11/msg00105.html', 'http://arxiv.org/abs/cs/0701192'] | 2 |
51,502,084 | <p>If you want to use your own model and not fine-tune a pretrained model like VGG or Inception (for example), you should read this paper:</p>
<p><a href="https://arxiv.org/pdf/1611.07725.pdf" rel="nofollow noreferrer">iCaRL an incremnetal network (paper)</a></p>
<p>Of course you have to change your algorithm and your code. I found this GitHub repo; apparently they implement it already: <a href="https://github.com/srebuffi/iCaRL/tree/master/iCaRL-Tensorflow" rel="nofollow noreferrer">GitHub repo for iCaRL in TensorFlow</a></p>
<p>But you have to use TensorFlow. Look at it to learn how to use it with your model (if possible; I only found the paper and this repo today, so I haven't looked at it closely yet).</p>
<p>What you are asking is still a research topic, so there are no broad or common techniques yet.
As I said in my comment, search for the keyword "incremental learning"; there are other papers on this subject. (Look at the Related Work section of the iCaRL paper; all the main techniques and papers for this subject are well summarized there!)</p>
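<p>To make the contrast concrete, here is a hedged Keras sketch (the function and names are my own, not taken from the papers above) of the naive baseline these methods improve on: keep the trained base, replace the softmax head with a wider one, and retrain on old + new data. This is only an illustration, not iCaRL.</p>
<pre><code># Hypothetical sketch, NOT iCaRL: grow the softmax head to cover new classes.
from keras.models import Model
from keras.layers import Dense

def expand_classifier(old_model, n_new_classes):
    # reuse everything up to the last hidden layer of the old model
    penultimate = old_model.layers[-2].output
    n_old_classes = old_model.output_shape[-1]
    new_output = Dense(n_old_classes + n_new_classes, activation='softmax')(penultimate)
    # note: older Keras versions use input=/output= instead of inputs=/outputs=
    new_model = Model(inputs=old_model.input, outputs=new_output)
    new_model.compile(optimizer='adam', loss='categorical_crossentropy',
                      metrics=['accuracy'])
    return new_model  # retrain on old + new data, otherwise it forgets the old classes
</code></pre>
<p>This naive approach suffers badly from catastrophic forgetting unless you keep at least some of the old data around, which is exactly the problem iCaRL and the other incremental-learning papers try to address.</p>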
<p>Also, please note that adding objects that are very different from your previous dataset (as in your example of flowers + beer or window) should decrease your accuracy a lot.
You will have to train longer to get better accuracy (and it is possible that your accuracy never gets back to what it was before).</p> | 2018-07-24 15:10:47.163000+00:00 | 2018-07-24 15:23:25.347000+00:00 | 2018-07-24 15:23:25.347000+00:00 | null | 51,499,231 | <p>I created a simple CNN to differentiate between 5 different categories of flowers. I want to expand the CNN to recognize more objects. For example, I want the CNN to recognize the image of a glass of beer, window, tree etc.
Below is the code that I made to classify the flowers; it works pretty well. But how do I expand it and make it recognise more and more objects? I don't want to use any pre-trained models. I want it to learn to classify more objects. Please help.</p>
<pre><code>from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense
classifier=Sequential()
#1st Convolution Layer
classifier.add(Convolution2D(32, 3, 3, input_shape=(64,64,3),activation="relu"))
#Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Adding a second convolutional layer
classifier.add(Convolution2D(32, 3, 3, activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Flattening
classifier.add(Flatten())
classifier.add(Dense(output_dim = 128, activation = 'relu'))
classifier.add(Dense(output_dim = 64, activation = 'relu'))
classifier.add(Dense(output_dim = 5, activation = 'softmax'))
classifier.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
print(classifier.summary())
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set= train_datagen.flow_from_directory('flowers/train_set',
target_size=(64,64),
batch_size=32,
class_mode='categorical')
test_set= test_datagen.flow_from_directory('flowers/test_set',
target_size=(64,64),
batch_size=32,
class_mode='categorical')
classifier.fit_generator(training_set,
samples_per_epoch = 3000,
nb_epoch = 25,
validation_data = test_set,
nb_val_samples=1000)
</code></pre> | 2018-07-24 12:57:23.493000+00:00 | 2018-07-24 15:23:25.347000+00:00 | 2018-07-24 14:55:29.583000+00:00 | python|machine-learning|neural-network|keras | ['https://arxiv.org/pdf/1611.07725.pdf', 'https://github.com/srebuffi/iCaRL/tree/master/iCaRL-Tensorflow'] | 2 |
41,737,403 | <p>I would not expect changing the bias values to help with the training. The first thing I would try is lowering the learning rate. You can do this manually by resuming training from the weights that have reached a plateau and using a solver with a lower base_lr. Or you can change your solver.prototxt to use a different update policy: either set the policy to step, or use an update rule such as Adam. See:</p>
<p><a href="http://caffe.berkeleyvision.org/tutorial/solver.html" rel="nofollow noreferrer">http://caffe.berkeleyvision.org/tutorial/solver.html</a></p>
<p>As <a href="https://stackoverflow.com/questions/41735132/fcn32-model-is-not-converging-and-loss-is-fluctuating-after-some-points-why#comment70665511_41735132">@Shai recommends</a>, adding <code>"BatchNorm"</code> layers should help. Batch Normalization is similar to "whitening"/normalizing the input data, but is applied to the intermediate layers. The paper on Batch normalization is on <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">arxiv</a>.</p>
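<p>For intuition, here is a rough numpy sketch (my own, not from the paper) of what a BatchNorm layer computes at training time; <code>gamma</code> and <code>beta</code> are learned parameters, and the running statistics used at test time are omitted:</p>
<pre><code>import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    # x has shape (batch, features); normalize each feature over the mini-batch
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta   # learned scale and shift
</code></pre>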
<p>You should also reserve some data for validation. Just looking at the training loss can be misleading.</p> | 2017-01-19 08:48:35.443000+00:00 | 2017-01-19 14:50:41.133000+00:00 | 2017-05-23 12:24:14.737000+00:00 | null | 41,735,132 | <p>I am trying to train fcn32. I am training <a href="https://github.com/shelhamer/fcn.berkeleyvision.org/tree/master/voc-fcn32s" rel="nofollow noreferrer">voc-fcn32s</a> model for my own data that has imbalanced class number. This is the learning curve for 18,000 iterations:
<a href="https://i.stack.imgur.com/BISxi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BISxi.png" alt="enter image description here"></a></p>
<p>As you can see, the training loss decreases at some points and then fluctuates. I read some online recommendations suggesting to reduce the learning rate or to change the bias filler value in the convolution layers. So what I did is change the <a href="https://github.com/shelhamer/fcn.berkeleyvision.org/blob/master/voc-fcn32s/train.prototxt" rel="nofollow noreferrer">train_val.prototxt</a> as follows for these two layers:</p>
<pre><code> ....
layer {
name: "score_fr"
type: "Convolution"
bottom: "fc7"
top: "score_fr"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 5 # the number of classes
pad: 0
kernel_size: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.5 #+
}
}
}
layer {
name: "upscore"
type: "Deconvolution"
bottom: "score_fr"
top: "upscore"
param {
lr_mult: 0
}
convolution_param {
num_output: 5 # the number of classes
bias_term: true #false
kernel_size: 64
stride: 32
group: 5 #2
weight_filler: {
type: "bilinear"
value:0.5 #+
}
}
}
....
</code></pre>
<p>And this is the trend of the model:
<a href="https://i.stack.imgur.com/6BXzp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6BXzp.png" alt="enter image description here"></a></p>
<p>It seems that not much has changed in the behavior of the model.</p>
<p>1) Am I adding these values to <code>weight_filler</code> the right way?</p>
<p>2) Should I change the learning policy in the solver from <code>fixed</code> to <code>step</code>, reducing the learning rate by a factor of 10 each time? Will that help tackle this issue?</p>
<p>I am worried that I am doing the wrong things and my model cannot converge. Does anyone have any suggestions about this? What important things should I consider while training the model? What kind of changes can I make to the <code>solver</code> and <code>train_val</code> so that the model converges?</p>
<p>I really appreciate your help.</p>
<p><strong>More details after adding BatchNorm layer</strong></p>
<p>Thanks @Shai and @Jonathan for suggesting to add <code>BatchNorm</code> layers.
I added <code>Batch Normalization</code> layers before the <code>ReLU</code> layers; here is one sample layer:</p>
<pre><code>layer {
name: "conv1_1"
type: "Convolution"
bottom: "data"
top: "conv1_1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 100
kernel_size: 3
stride: 1
}
}
layer {
name: "bn1_1"
type: "BatchNorm"
bottom: "conv1_1"
top: "bn1_1"
batch_norm_param {
use_global_stats: false
}
param {
lr_mult: 0
}
include {
phase: TRAIN
}
}
layer {
name: "bn1_1"
type: "BatchNorm"
bottom: "conv1_1"
top: "bn1_1"
batch_norm_param {
use_global_stats: true
}
param {
lr_mult: 0
}
include {
phase: TEST
}
}
layer {
name: "scale1_1"
type: "Scale"
bottom: "bn1_1"
top: "bn1_1"
scale_param {
bias_term: true
}
}
layer {
name: "relu1_1"
type: "ReLU"
bottom: "bn1_1"
top: "bn1_1"
}
layer {
name: "conv1_2"
type: "Convolution"
bottom: "bn1_1"
top: "conv1_2"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
stride: 1
}
}
</code></pre>
<p>As far as I understood from the docs, I can only add one parameter to Batch Normalization instead of three, since I have single-channel images. Is my understanding correct? As follows:</p>
<pre><code>param {
lr_mult: 0
}
</code></pre>
<p>Should I add more parameters to the scale layer, as the documentation mentions? What is the meaning of these parameters in the <code>Scale</code> layer? For example:</p>
<pre><code>layer { bottom: 'layerx-bn' top: 'layerx-bn' name: 'layerx-bn-scale' type: 'Scale',
scale_param {
bias_term: true
axis: 1 # scale separately for each channel
num_axes: 1 # ... but not spatially (default)
filler { type: 'constant' value: 1 } # initialize scaling to 1
bias_filler { type: 'constant' value: 0.001 } # initialize bias
}}
</code></pre>
<p>And this is <a href="https://i.stack.imgur.com/sigzi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sigzi.png" alt="the drawing"></a> of the network. I am not sure how much of it is wrong or right. Have I added the layers correctly?
The other question is about <a href="https://stackoverflow.com/questions/40510706/how-to-interpret-caffe-log-with-debug-info">debug_info</a>. What is the meaning of these lines of the log file after activating <code>debug_info</code>? What do <code>diff</code> and <code>data</code> mean? And why are the values 0? Is my net working correctly?</p>
<pre><code> I0123 23:17:49.498327 15230 solver.cpp:228] Iteration 50, loss = 105465
I0123 23:17:49.498337 15230 solver.cpp:244] Train net output #0: accuracy = 0.643982
I0123 23:17:49.498349 15230 solver.cpp:244] Train net output #1: loss = 105446 (* 1 = 105446 loss)
I0123 23:17:49.498359 15230 sgd_solver.cpp:106] Iteration 50, lr = 1e-11
I0123 23:19:12.680325 15230 net.cpp:608] [Forward] Layer data, top blob data data: 34.8386
I0123 23:19:12.680615 15230 net.cpp:608] [Forward] Layer data_data_0_split, top blob data_data_0_split_0 data: 34.8386
I0123 23:19:12.680670 15230 net.cpp:608] [Forward] Layer data_data_0_split, top blob data_data_0_split_1 data: 34.8386
I0123 23:19:12.680778 15230 net.cpp:608] [Forward] Layer label, top blob label data: 0
I0123 23:19:12.680829 15230 net.cpp:608] [Forward] Layer label_label_0_split, top blob label_label_0_split_0 data: 0
I0123 23:19:12.680896 15230 net.cpp:608] [Forward] Layer label_label_0_split, top blob label_label_0_split_1 data: 0
I0123 23:19:12.688591 15230 net.cpp:608] [Forward] Layer conv1_1, top blob conv1_1 data: 0
I0123 23:19:12.688695 15230 net.cpp:620] [Forward] Layer conv1_1, param blob 0 data: 0
I0123 23:19:12.688742 15230 net.cpp:620] [Forward] Layer conv1_1, param blob 1 data: 0
I0123 23:19:12.721791 15230 net.cpp:608] [Forward] Layer bn1_1, top blob bn1_1 data: 0
I0123 23:19:12.721853 15230 net.cpp:620] [Forward] Layer bn1_1, param blob 0 data: 0
I0123 23:19:12.721890 15230 net.cpp:620] [Forward] Layer bn1_1, param blob 1 data: 0
I0123 23:19:12.721901 15230 net.cpp:620] [Forward] Layer bn1_1, param blob 2 data: 96.1127
I0123 23:19:12.996196 15230 net.cpp:620] [Forward] Layer scale4_1, param blob 0 data: 1
I0123 23:19:12.996237 15230 net.cpp:620] [Forward] Layer scale4_1, param blob 1 data: 0
I0123 23:19:12.996939 15230 net.cpp:608] [Forward] Layer relu4_1, top blob bn4_1 data: 0
I0123 23:19:13.012020 15230 net.cpp:608] [Forward] Layer conv4_2, top blob conv4_2 data: 0
I0123 23:19:13.012403 15230 net.cpp:620] [Forward] Layer conv4_2, param blob 0 data: 0
I0123 23:19:13.012446 15230 net.cpp:620] [Forward] Layer conv4_2, param blob 1 data: 0
I0123 23:19:13.015959 15230 net.cpp:608] [Forward] Layer bn4_2, top blob bn4_2 data: 0
I0123 23:19:13.016005 15230 net.cpp:620] [Forward] Layer bn4_2, param blob 0 data: 0
I0123 23:19:13.016046 15230 net.cpp:620] [Forward] Layer bn4_2, param blob 1 data: 0
I0123 23:19:13.016054 15230 net.cpp:620] [Forward] Layer bn4_2, param blob 2 data: 96.1127
I0123 23:19:13.017211 15230 net.cpp:608] [Forward] Layer scale4_2, top blob bn4_2 data: 0
I0123 23:19:13.017251 15230 net.cpp:620] [Forward] Layer scale4_2, param blob 0 data: 1
I0123 23:19:13.017292 15230 net.cpp:620] [Forward] Layer scale4_2, param blob 1 data: 0
I0123 23:19:13.017980 15230 net.cpp:608] [Forward] Layer relu4_2, top blob bn4_2 data: 0
I0123 23:19:13.032080 15230 net.cpp:608] [Forward] Layer conv4_3, top blob conv4_3 data: 0
I0123 23:19:13.032452 15230 net.cpp:620] [Forward] Layer conv4_3, param blob 0 data: 0
I0123 23:19:13.032493 15230 net.cpp:620] [Forward] Layer conv4_3, param blob 1 data: 0
I0123 23:19:13.036018 15230 net.cpp:608] [Forward] Layer bn4_3, top blob bn4_3 data: 0
I0123 23:19:13.036064 15230 net.cpp:620] [Forward] Layer bn4_3, param blob 0 data: 0
I0123 23:19:13.036105 15230 net.cpp:620] [Forward] Layer bn4_3, param blob 1 data: 0
I0123 23:19:13.036114 15230 net.cpp:620] [Forward] Layer bn4_3, param blob 2 data: 96.1127
I0123 23:19:13.038148 15230 net.cpp:608] [Forward] Layer scale4_3, top blob bn4_3 data: 0
I0123 23:19:13.038189 15230 net.cpp:620] [Forward] Layer scale4_3, param blob 0 data: 1
I0123 23:19:13.038230 15230 net.cpp:620] [Forward] Layer scale4_3, param blob 1 data: 0
I0123 23:19:13.038969 15230 net.cpp:608] [Forward] Layer relu4_3, top blob bn4_3 data: 0
I0123 23:19:13.039417 15230 net.cpp:608] [Forward] Layer pool4, top blob pool4 data: 0
I0123 23:19:13.043354 15230 net.cpp:608] [Forward] Layer conv5_1, top blob conv5_1 data: 0
I0123 23:19:13.128515 15230 net.cpp:608] [Forward] Layer score_fr, top blob score_fr data: 0.000975524
I0123 23:19:13.128569 15230 net.cpp:620] [Forward] Layer score_fr, param blob 0 data: 0.0135222
I0123 23:19:13.128607 15230 net.cpp:620] [Forward] Layer score_fr, param blob 1 data: 0.000975524
I0123 23:19:13.129696 15230 net.cpp:608] [Forward] Layer upscore, top blob upscore data: 0.000790174
I0123 23:19:13.129734 15230 net.cpp:620] [Forward] Layer upscore, param blob 0 data: 0.25
I0123 23:19:13.130656 15230 net.cpp:608] [Forward] Layer score, top blob score data: 0.000955503
I0123 23:19:13.130709 15230 net.cpp:608] [Forward] Layer score_score_0_split, top blob score_score_0_split_0 data: 0.000955503
I0123 23:19:13.130754 15230 net.cpp:608] [Forward] Layer score_score_0_split, top blob score_score_0_split_1 data: 0.000955503
I0123 23:19:13.146767 15230 net.cpp:608] [Forward] Layer accuracy, top blob accuracy data: 1
I0123 23:19:13.148967 15230 net.cpp:608] [Forward] Layer loss, top blob loss data: 105320
I0123 23:19:13.149173 15230 net.cpp:636] [Backward] Layer loss, bottom blob score_score_0_split_1 diff: 0.319809
I0123 23:19:13.149323 15230 net.cpp:636] [Backward] Layer score_score_0_split, bottom blob score diff: 0.319809
I0123 23:19:13.150310 15230 net.cpp:636] [Backward] Layer score, bottom blob upscore diff: 0.204677
I0123 23:19:13.152452 15230 net.cpp:636] [Backward] Layer upscore, bottom blob score_fr diff: 253.442
I0123 23:19:13.153218 15230 net.cpp:636] [Backward] Layer score_fr, bottom blob bn7 diff: 9.20469
I0123 23:19:13.153254 15230 net.cpp:647] [Backward] Layer score_fr, param blob 0 diff: 0
I0123 23:19:13.153291 15230 net.cpp:647] [Backward] Layer score_fr, param blob 1 diff: 20528.8
I0123 23:19:13.153420 15230 net.cpp:636] [Backward] Layer drop7, bottom blob bn7 diff: 9.21666
I0123 23:19:13.153554 15230 net.cpp:636] [Backward] Layer relu7, bottom blob bn7 diff: 0
I0123 23:19:13.153856 15230 net.cpp:636] [Backward] Layer scale7, bottom blob bn7 diff: 0
E0123 23:19:14.382714 15230 net.cpp:736] [Backward] All net params (data, diff): L1 norm = (19254.6, 102644); L2 norm = (391.485, 57379.6)
</code></pre>
<p>I would really appreciate it if anyone who knows could share ideas/links/resources here. Thanks again.</p>
47,662,534 | <p>I had some trouble in the past too. </p>
<p>Now the solution I use is this <a href="https://github.com/sprig/org-capture-extension" rel="nofollow noreferrer">firefox add-on</a> -> You can install it directly from Firefox. That's really easy and in my case, it worked out of the box (Debian distro, Firefox 52.5.0). I do not know if it is available for Firefox Quantum though. </p>
<p><a href="http://orgmode.org/worg/org-contrib/org-protocol.html" rel="nofollow noreferrer">org-protocol</a> is configured as usual. In my case:</p>
<pre><code>cat > "${HOME}/.local/share/applications/org-protocol.desktop" << EOF
[Desktop Entry]
Name=org-protocol
Exec=emacsclient %u
Type=Application
Terminal=false
Categories=System;
MimeType=x-scheme-handler/org-protocol;
EOF
</code></pre>
<p>then</p>
<pre><code>update-desktop-database ~/.local/share/applications/
</code></pre>
<p>In your Emacs <code>init.el</code> file:</p>
<pre><code>(server-start)
(require 'org-protocol)
(setq org-capture-templates `(
("p" "Protocol" entry (file+headline ,(concat org-directory "notes.org") "Inbox")
"* %^{Title}\nSource: %u, %c\n #+BEGIN_QUOTE\n%i\n#+END_QUOTE\n\n\n%?")
("L" "Protocol Link" entry (file+headline ,(concat org-directory "notes.org") "Inbox")
"* %? [[%:link][%:description]] \nCaptured On: %U")
))
</code></pre>
<p>That's all. Under my config it works, I hope this is the same for yours.</p>
<hr>
<p><strong>Extra:</strong> some time ago I suggested a small template improvement to handle arXiv.org titles; the details are here: <a href="https://github.com/sprig/org-capture-extension/issues/37" rel="nofollow noreferrer">https://github.com/sprig/org-capture-extension/issues/37</a></p>
<pre><code>Greedy org-protocol handler. Killing client.
No server buffers remain to edit.
</code></pre>
<p>So I am using emacs 25.3 on Ubuntu Linux x64 16.04, and the newly released Firefox Quantum. </p>
<p>I tried to do this in two ways. </p>
<ol>
<li><p>The traditional approach of setting up <code>org-protocol</code> through creating a desktop entry and then setting up bookmarklets in Firefox. </p></li>
<li><p>Using the <code>org-capture</code> add-on in Firefox. </p></li>
</ol>
<p>Both are giving me the same error. </p>
<p>For approach 1, I followed the <a href="http://orgmode.org/worg/org-contrib/org-protocol.html#sec-6-1" rel="nofollow noreferrer">documentation</a> as well as a very helpful stackexchange post:</p>
<p><a href="https://stackoverflow.com/questions/7464951/how-to-make-org-protocol-work">How to make org-protocol work?</a> </p>
<p>There was also this useful blog <a href="http://www.mediaonfire.com/blog/2017_07_21_org_protocol_firefox.html" rel="nofollow noreferrer">post</a>.</p>
<p>Here is what I did:</p>
<p>A. I created a desktop entry:</p>
<pre><code>[Desktop Entry]
Name=org-protocol
Exec=emacsclient -n %u
Type=Application
Terminal=false
Categories=System;
MimeType=x-scheme-handler/org-protocol;
</code></pre>
<p>B. added config to <code>.spacemacs</code> file.</p>
<p>I added the lines:</p>
<pre><code>(server-start)
(require 'org-protocol)
</code></pre>
<p>Then I setup up the capture template in my <code>.spacemacs</code> file:</p>
<pre><code>(setq org-capture-templates
(quote
("p" "org-protocol" entry (file+headline "~/Dropbox/config/org/refile/refile.org")
"* %^{Title}\nSource: %u, %c\n #+BEGIN_QUOTE\n%:initial\n#+END_QUOTE\n\n\n%?")
("l" "org-protocol link" entry (file "~/Dropbox/config/org/refile/refile.org")
"* %? [[%:link][%:description]] \nCaptured On: %U")
... Additional templates.
))
</code></pre>
<p>C. Then I created the bookmarklets with the locations:</p>
<pre><code>javascript:location.href='org-protocol://store-link://l/'+encodeURIComponent(location.href)
javascript:location.href='org-protocol://capture://l/'+encodeURIComponent(location.href)+'/'+encodeURIComponent(document.title)+'/'+encodeURIComponent(window.getSelection())
</code></pre>
<p>After all of this, I still get the error message referenced above. </p>
<p>Second, I tried the <code>org-capture</code> add-on in Firefox. But it did not work either. </p>
<p>Not sure what the cause of the error is. Any help is appreciated.</p>
<p>From other reading, people get this <code>Greedy</code> error when there are problems in the <code>org-capture</code> template, but I did not find any errors in mine.</p>
63,684,029 | <p>I doubt it is still running anywhere. The statistical system is a complicated pipeline that is expensive to run and difficult to maintain.</p>
<p>You can try contacting someone from Google Research who works on MT (just have a look at papers on arXiv; the authors have contact emails there) and ask whether they can run it for you.</p>
<p>Alternatively, you can build your own <a href="https://github.com/moses-smt/mosesdecoder" rel="nofollow noreferrer">Moses system</a>; it is an open-source implementation of statistical MT, so the results should be similar to what Google Translate produced (judging from the <a href="http://www.statmt.org/wmt15/" rel="nofollow noreferrer">WMT competition</a> results before 2016).</p> | 2020-09-01 08:29:49.070000+00:00 | 2020-09-01 08:29:49.070000+00:00 | null | null | 63,592,329 | <p>I need the old version of Google Translate (the statistical model, the version before 2016) for my research.
I was wondering if there is any way to access the old version?</p>
<p>Thanks</p> | 2020-08-26 07:12:43.017000+00:00 | 2022-01-15 14:29:35.253000+00:00 | null | google-translate|machine-translation | ['https://github.com/moses-smt/mosesdecoder', 'http://www.statmt.org/wmt15/'] | 2 |
8,203,670 | <p>A non-algorithmic way of finding the position <em>t</em> of a fraction in the Farey sequence of order <em>n</em>>1 is shown in Remark 7.10(ii)(a) of the <a href="http://arxiv.org/abs/math/0411026" rel="nofollow">paper</a>, under <em>m:=n</em>-1, where mu-bar stands for the number-theoretic Mobius function on positive integers taking values from the set {-1,0,1}.</p> | 2011-11-20 18:29:17.893000+00:00 | 2011-11-21 08:53:52.240000+00:00 | 2011-11-21 08:53:52.240000+00:00 | null | 8,194,894 | <p>For finding the position of a fraction in the <a href="http://en.wikipedia.org/wiki/Farey_sequence" rel="nofollow">Farey sequence</a>, I tried to implement the algorithm given here <a href="http://www.math.harvard.edu/~corina/publications/farey.pdf" rel="nofollow">http://www.math.harvard.edu/~corina/publications/farey.pdf</a> under "<strong><em>initial algorithm</em></strong>", but I can't understand where I'm going wrong; I am not getting the correct answers. Could someone please point out my mistake?
E.g. for order n = 7 and the fractions 1/7 and 1/6 I get the same answer.
Here's what I've tried for a given degree (n) and a fraction a/b:</p>
<pre><code>sum=0;
int A[100000];
A[1]=a;
for(i=2;i<=n;i++)
A[i]=i*a-a;
for(i=2;i<=n;i++)
{
for(j=i+i;j<=n;j+=i)
A[j]-=A[i];
}
for(i=1;i<=n;i++)
sum+=A[i];
ans = sum/b;
</code></pre>
<p>Thanks.</p> | 2011-11-19 15:06:37.540000+00:00 | 2013-05-11 16:08:48.460000+00:00 | 2011-11-19 16:03:56.493000+00:00 | algorithm|math | ['http://arxiv.org/abs/math/0411026'] | 1 |
17,773,769 | <p>So it sounds like your problem is that you have small gaps between the rectangles preventing them from being collected together into a single piece. If you have access to the source code for the sweep and prune method, you can add a buffer to the "overlap" test, but I think it would be more optimal to consider using an R-Tree. This will index the rectangular spaces without messing with limits on gaps etc.</p>
<p><a href="http://en.wikipedia.org/wiki/R-tree" rel="nofollow noreferrer">R-Tree Wiki</a></p>
<p>Here is a relevant paper by Sellis et. al. describing the R+ tree:</p>
<p><a href="http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=50ECCC47148D9121A4B39EC1220D9FB2?doi=10.1.1.45.3272&rep=rep1&type=pdf" rel="nofollow noreferrer">http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=50ECCC47148D9121A4B39EC1220D9FB2?doi=10.1.1.45.3272&rep=rep1&type=pdf</a></p>
<p>here is a C# implementation of an R-Tree</p>
<p><a href="http://sourceforge.net/projects/cspatialindexrt/" rel="nofollow noreferrer">http://sourceforge.net/projects/cspatialindexrt/</a></p>
<p>[Edit - After Comment 1]</p>
<p>So let me see if I can capture the current problem.</p>
<ul>
<li>Rectangles are joined in passes of horizontal/vertical adjacency tests.</li>
<li>Rectangles are only joined if the adjacent boundary for both is equal.</li>
<li>The intermediate result of any join must also form a valid rectangle.</li>
<li>The result is non-optimal because the sequence of joining.</li>
</ul>
<p>I think you're actually looking for the minimum dissection into rectangles of a rectilinear polygon. The first step would be to join ALL the touching rectangles together, regardless of whether they form a rectangle or not. I think you are getting caught up in problems with the intermediate stages of each step in the process also needing to be complete rectangle deconstructions, leading to a sub-optimal result. If you merge them together into a single rectilinear polygon, you can use graph theory mechanisms. </p>
<p><img src="https://i.stack.imgur.com/Ndi1I.png" alt="Minimum Dissection into Rectangles of a Rectilinear Polygon"></p>
<p>You can check out <a href="http://arxiv.org/pdf/0908.3916v1.pdf" rel="nofollow noreferrer">Graph-Theoretic Solutions
to Computational Geometry Problems</a> by <a href="http://www.ics.uci.edu/~eppstein/" rel="nofollow noreferrer">David Eppstein</a></p>
<p>Or investigate <a href="https://stackoverflow.com/questions/5919298/algorithm-for-finding-the-fewest-rectangles-to-cover-a-set-of-rectangles">Algorithm for finding the fewest rectangles to cover a set of rectangles without overlapping</a> by <a href="https://stackoverflow.com/users/68063/gareth-rees">Gareth Rees</a></p> | 2013-07-21 15:12:08.027000+00:00 | 2013-07-21 20:17:21.280000+00:00 | 2017-05-23 12:04:07.960000+00:00 | null | 17,667,830 | <p>I want to compress many non-overlapping rectangles into larger rectangles When they are adjacent.</p>
<p>Pseudo-code for my current algorithm:</p>
<pre><code>do
compress horizontally using sweep and prune
compress horizontal output vertically using sweep and prune
while (this output is small than previous output)
</code></pre>
<p>Here's a <a href="http://en.wikipedia.org/wiki/Sweep_and_prune" rel="nofollow">link to sweep and prune</a>.</p>
<p>This is working well, but I want to know if there are approaches which result in fewer rectangles output. I figure there's more sophisticated than what I'm doing now.</p> | 2013-07-16 04:04:45.937000+00:00 | 2013-07-22 13:30:59.893000+00:00 | 2013-07-22 13:30:59.893000+00:00 | c#|java|compression|collision-detection | ['http://en.wikipedia.org/wiki/R-tree', 'http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=50ECCC47148D9121A4B39EC1220D9FB2?doi=10.1.1.45.3272&rep=rep1&type=pdf', 'http://sourceforge.net/projects/cspatialindexrt/', 'http://arxiv.org/pdf/0908.3916v1.pdf', 'http://www.ics.uci.edu/~eppstein/', 'https://stackoverflow.com/questions/5919298/algorithm-for-finding-the-fewest-rectangles-to-cover-a-set-of-rectangles', 'https://stackoverflow.com/users/68063/gareth-rees'] | 7 |
60,137,345 | <p>There are indeed universal construction schemes for lock-free or even wait-free algorithms. For example:</p>
<ul>
<li><a href="https://www.researchgate.net/publication/221643573_A_Methodology_for_Implementing_Highly_Concurrent_Data_Structures" rel="nofollow noreferrer">A Methodology for Implementing Highly Concurrent Data Structures</a></li>
<li><a href="https://www.researchgate.net/publication/221257088_A_highly-efficient_wait-free_universal_construction" rel="nofollow noreferrer">A highly-efficient wait-free universal construction</a></li>
<li><a href="https://arxiv.org/abs/1911.01676" rel="nofollow noreferrer">A Wait-Free Universal Construct for Large Objects</a></li>
</ul>
<p>However, these are usually more interesting from a theoretical perspective rather than a practical one. In practice, specialized lock-free algorithms usually perform better than those derived from these universal constructions.</p>
<p>If you are interesting in the area of lock-free programming I would recommend to start with the book <a href="https://rads.stackoverflow.com/amzn/click/com/0123973376" rel="nofollow noreferrer" rel="nofollow noreferrer">The Art of Multiprocessor Programming</a>.</p> | 2020-02-09 13:55:39.787000+00:00 | 2020-02-09 13:55:39.787000+00:00 | null | null | 60,108,465 | <p>The implementation of lock-free data structures is sometimes not easy to implement. The following approach may seem generic and easy but I think here we have some issues:</p>
<pre><code>private AtomicBoolean lock = new AtomicBoolean(false);

public void func(...) {
    // spin (busy-wait) until we flip the flag from false to true
    while (!lock.compareAndSet(false, true));

    // Some code goes here...
    ...
    ...
    ...

    lock.set(false);
}
</code></pre>
<p>I think the code above is not really "lock-free", since it keeps waiting threads spinning in the while loop (busy-waiting).</p>
<p>Therefore, the code is suitable only when the "right" lock-free synchronization is impossible.</p>
<p>My question is: is it possible to implement a generic lock-free mechanism with a different approach, so that threads are neither blocked (as with synchronized) nor busy-waiting, the code runs in parallel, and we improve performance?</p>
<p>My aim is to keep the threads running. I know that if the code is long it is better to use the synchronized mechanism instead of a lock-free implementation, so let's assume we are talking about short code.</p>
<p>For example, below is an example of a LinkedList which I think is a good approach, but it is not generic for common data structures.
If we used an AtomicBoolean here as I show above, it would not be really "lock-free".</p>
<pre><code>public class LinkedList<T> {
private AtomicReference<Link<T>> head = new AtomicReference(null);
public void add(T data) {
Link<T> localHead;
Link<T> newHead = new Link<>(null, data);
do {
localHead = head.get();
newHead.next = localHead;
} while (!head.compareAndSet(localHead, newHead));
}
}
</code></pre> | 2020-02-07 06:51:27.590000+00:00 | 2020-02-09 13:55:39.787000+00:00 | 2020-02-07 07:04:42.293000+00:00 | java|multithreading|synchronization|locking|compare-and-swap | ['https://www.researchgate.net/publication/221643573_A_Methodology_for_Implementing_Highly_Concurrent_Data_Structures', 'https://www.researchgate.net/publication/221257088_A_highly-efficient_wait-free_universal_construction', 'https://arxiv.org/abs/1911.01676', 'https://rads.stackoverflow.com/amzn/click/com/0123973376'] | 4 |
61,977,005 | <p>Here is a starting point using <a href="https://metacpan.org/pod/XML::Twig" rel="nofollow noreferrer"><code>XML::Twig</code></a> to parse the downloaded XML file:</p>
<pre><code>use feature qw(say);
use strict;
use warnings;
use LWP::UserAgent;
use Text::BibTeX;
use Text::BibTeX::Entry;
use XML::Twig;
use DateTime::Format::Strptime;
{
my $arxivid = "hep-ph/9609357";
my $url = "http://export.arxiv.org/api/query?search_query=" . $arxivid . "&start=0&max_results=1";
my $browser = LWP::UserAgent->new();
my $response = $browser->get($url);
my $xml = $response->content;
my $twig = XML::Twig->new->parse( $xml );
my $title = $twig->get_xpath ( '//entry/title',0 )->text;
my @authors;
for my $node ( $twig->findnodes( '//entry/author/name' )) {
push @authors, $node->text;
}
my $doi = $twig->get_xpath ( '//entry/link[@title="doi"]',0 )->att('href');
my $published = $twig->get_xpath ( '//entry/published',0 )->text;
my ( $year, $month) = parse_published( $published) ;
my $entry = Text::BibTeX::Entry->new();
$entry->set_metatype(BTE_REGULAR);
$entry->set_type('article');
$entry->set_key('article1');
$entry->set( 'title', $title );
$entry->set( 'author', join ' and ', @authors );
$entry->set( 'year', $year );
$entry->set( 'month', $month );
$entry->set( 'doi', $doi );
$entry->print(\*STDOUT);
}
sub parse_published {
my ( $published) = @_;
my $parser = DateTime::Format::Strptime->new(
pattern => '%FT%T%Z',
time_zone => 'UTC',
on_error => 'croak',
);
my $dt = $parser->parse_datetime($published);
return ( $dt->year, $dt->month_name);
}
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>@article{article1,
title = {Mixing-induced CP Asymmetries in Inclusive $B$ Decays},
author = {Martin Beneke and Gerhard Buchalla and Isard Dunietz},
year = {1996},
month = {September},
doi = {http://dx.doi.org/10.1016/S0370-2693(96)01648-6},
}
</code></pre> | 2020-05-23 18:46:03.533000+00:00 | 2020-05-24 04:13:42.830000+00:00 | 2020-05-24 04:13:42.830000+00:00 | null | 61,975,686 | <p>How can I write a robust Perl script that will generate a BibTeX entry for an arXiv ID?</p>
<p>My guess is that I should use the <a href="https://arxiv.org/help/api/" rel="nofollow noreferrer">arXiv API</a> and parse its response with <a href="https://metacpan.org/pod/XML::Atom" rel="nofollow noreferrer">XML::Atom</a>. It should give me the needed pieces of information to build a BibTeX entry.</p>
<p>Here is how I would start:</p>
<pre><code>use LWP::UserAgent;
use Text::BibTeX::Entry;
use XML::Atom;
my $arxivid = "hep-ph/9609357";
my $url = "http://export.arxiv.org/api/query?search_query=" . $arxivid . "&start=0&max_results=1";
my $browser = LWP::UserAgent->new();
my $response = $browser->get($url);
my $entry = Text::BibTeX::Entry->new();
</code></pre>
<p>Answers not using the arXiv API or XML::Atom are welcome too.</p> | 2020-05-23 17:06:08.957000+00:00 | 2020-05-24 04:13:42.830000+00:00 | null | perl|atom-feed|bibtex | ['https://metacpan.org/pod/XML::Twig'] | 1 |
49,290,659 | <p>If you are interested in using a non-parametric correlation, perhaps you can look at Kendall's tau. As a U-statistic, it is asymptotically normal. (If I am not mistaken, Spearman is asymptotically normal too, so the procedure I am about to describe is valid for Spearman as well.) Hence, you can use the normal log-likelihood to perform your shrinkage.</p>
<p>More precisely, let tau.hat be your vectorized (estimated) correlation matrix and</p>
<pre><code>L(tau | tau.hat, Sigma) = t(tau - tau.hat) Sigma^{-1} (tau - tau.hat)
</code></pre>
<p>where cov(tau.hat) = Sigma. L is the loss function that will allow you to know when you applied too much shrinkage (realizing that L should be roughly chi squared distributed).</p>
<p>This is not very helpful since you don't know Sigma, but there are ways of estimating it (for 50 stocks it might still be okay, but that explodes quickly). This is the main reason why I suggest using Kendall's tau over Spearman's rho: you can have an idea of its variance (Sigma). (See this paper for an estimator of Sigma: <a href="https://arxiv.org/abs/1706.05940" rel="nofollow noreferrer">https://arxiv.org/abs/1706.05940</a>)</p>
<p>From there, you can use a standard technique to shrink Sigma.hat (this is necessary), e.g. simply shrink it towards its diagonal version Sigma0.hat with</p>
<pre><code>Sigma.tilde = w Sigma0.hat + (1-w) Sigma.hat
</code></pre>
<p>with, say, w = .5. At least make sure it is positive definite...</p>
<p>Then (finally!) get a shrunk version of tau.hat by letting q go towards 1 in <code>tau.tilde = (1-q) tau.hat + q mean(tau.hat)</code> until <code>Prob[chi_d^2 > L(tau.tilde | tau.hat, Sigma.tilde)] = .05</code>, where the degrees of freedom for your chi squared are (?!?!?) <code>d = length(tau.hat) - q</code>.</p>
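<p>In case it helps, here is a hedged numpy/scipy transcription of that stopping rule (the question is about R, but the logic translates directly; as noted below, the degrees of freedom are questionable, so treat this as a starting point rather than a recipe):</p>
<pre><code>import numpy as np
from scipy import stats

def shrink_tau(tau_hat, Sigma_hat, w=0.5, alpha=0.05, step=0.01):
    Sigma0 = np.diag(np.diag(Sigma_hat))
    Sigma_tilde = w * Sigma0 + (1 - w) * Sigma_hat   # shrink the covariance of tau.hat first
    Sigma_inv = np.linalg.inv(Sigma_tilde)
    target = tau_hat.mean()
    d = len(tau_hat)                                  # approximate degrees of freedom
    q = 0.0
    tau_tilde = tau_hat.copy()
    while q <= 1.0:
        candidate = (1 - q) * tau_hat + q * target
        diff = candidate - tau_hat
        L = diff @ Sigma_inv @ diff                   # quadratic-form loss
        if stats.chi2.sf(L, d) < alpha:               # shrunk too far: stop
            break
        tau_tilde = candidate
        q += step
    return tau_tilde
</code></pre>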
<p>I am not sure the degrees of freedom for the chi squared are right. Actually, I am pretty sure they are not. Note also that .05 was chosen somewhat arbitrarily. Bottom line is that you might want to take a look at the paper referred to above, as they (we, I must say) indeed shrink the Kendall correlation matrix of log-returns from 100 stocks. The way it is done allows us to know more about the degrees of freedom in question (and does not require you to provide a structure a priori as it learns a block structure from the data).</p> | 2018-03-15 02:31:43.480000+00:00 | 2018-03-15 12:54:29.520000+00:00 | 2018-03-15 12:54:29.520000+00:00 | null | 43,191,320 | <p>I'm trying to estimate the correlation matrix of 50 stock returns in R.
I'm aware that I should shrink the correlation matrix, but I'm also interested in using Spearman's rank correlation since it doesn't require the distribution to be normal.</p>
<p>For the Spearman correlation I usually specify method = "spearman" in the cov() function, which doesn't allow shrinkage techniques.
For shrinkage I've been looking at the tawny package and the corpcor package, but they don't allow any specification of the method (pearson/spearman etc.).</p>
<p>Any suggestions on how I could handle this issue?</p> | 2017-04-03 17:58:50.350000+00:00 | 2018-03-15 12:54:29.520000+00:00 | null | r|correlation|covariance|portfolio|shrink | ['https://arxiv.org/abs/1706.05940'] | 1 |
39,362,834 | <p>The current package <em>du jour</em> for getting text out of PDFs is <a href="https://ropensci.org/blog/2016/03/01/pdftools-and-jeroen" rel="noreferrer"><code>pdftools</code></a> (successor to Rpoppler, noted above), works great on Linux, Windows and OSX:</p>
<pre><code>install.packages("pdftools")
library(pdftools)
download.file("http://arxiv.org/pdf/1403.2805.pdf", "1403.2805.pdf", mode = "wb")
txt <- pdf_text("1403.2805.pdf")
# first page text
cat(txt[1])
# second page text
cat(txt[2])
</code></pre> | 2016-09-07 06:42:49.057000+00:00 | 2016-12-16 00:17:14.220000+00:00 | 2016-12-16 00:17:14.220000+00:00 | null | 9,185,831 | <p>Is that even possible!?!</p>
<p>I have a bunch of legacy reports that I need to import into a database. However, they're all in pdf format. Are there any <code>R</code> packages that can read pdf? Or should I leave that to a command line tool?</p>
<p>The reports were made in excel and then pdfed, so they have regular structure, but many blank "cells".</p> | 2012-02-07 23:46:47.867000+00:00 | 2016-12-16 00:17:14.220000+00:00 | 2013-08-06 12:11:46.873000+00:00 | linux|r|pdf|scrape|pdf-scraping | ['https://ropensci.org/blog/2016/03/01/pdftools-and-jeroen'] | 1 |
59,144,361 | <p>I just finished reading the paper for this method which can be found <a href="https://arxiv.org/pdf/1802.05957.pdf" rel="noreferrer">on arxiv</a>. If you have the appropriate mathematical background I would recommend reading it. See appendix A for the power algorithm which describes what u and v are.</p>
<p>That said I'll try to summarize here.</p>
<p>First, you should know that the spectral norm of a matrix is the maximum singular value. The authors propose finding the spectral norm of weight matrix <code>W</code>, then dividing <code>W</code> by its spectral norm to make it close to <code>1</code> (justification for this decision is in the paper).</p>
<p>While we could just use <code>torch.svd</code> to find a precise estimate of the singular values, they instead use a fast (but imprecise) method called "power iteration". Long story short, the <code>weight_u</code> and <code>weight_v</code> are rough approximations of the left and right singular vectors corresponding to the largest singular value of W. They are useful because the associated singular value, i.e. the spectral norm, of W is equal to <code>u.transpose(1,0) @ W @ v</code> if <code>u</code> and <code>v</code> are the actual left/right singular vectors of <code>W</code>.</p>
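<p>For reference, here is a rough sketch of one power-iteration step (my own paraphrase of the update <code>spectral_norm</code> performs on each forward pass; <code>W</code> is the 2-D weight matrix):</p>
<pre><code>import torch
import torch.nn.functional as F

def power_iteration_step(W, u, eps=1e-12):
    v = F.normalize(W.t() @ u, dim=0, eps=eps)   # estimate of the first right singular vector
    u = F.normalize(W @ v, dim=0, eps=eps)       # estimate of the first left singular vector
    sigma = u @ W @ v                            # estimate of the spectral norm
    return u, v, sigma
</code></pre>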
<ul>
<li><code>y.weight_orig</code> contains the original values in the layer.</li>
<li><code>y.weight_u</code> is the approximation of the first left singular vector of <code>y.weight_orig</code>.</li>
<li><code>y.weight_v</code> is the approximation of the first right singular vector of <code>y.weight_orig</code>.</li>
<li><code>y.weight</code> is the updated weight matrix which is <code>y.weight_orig</code> divided by its approximate spectral norm. </li>
</ul>
<p>We can verify these claims by showing that the actual left and right singular vectors are nearly parallel to <code>y.weight_u</code> and <code>y.weight_v</code></p>
<pre><code>import torch
import torch.nn as nn
# pytorch default is 1
n_power_iterations = 1
y = nn.Linear(3,3)
y = nn.utils.spectral_norm(y, n_power_iterations=n_power_iterations)
# spectral normalization is performed during forward pre hook for technical reasons, we
# need to send something through the layer to ensure normalization is applied
# NOTE: After this is performed, x.weight is changed in place!
_ = y(torch.randn(1,3))
# test svd vs. spectral_norm u/v estimates
u,s,v = torch.svd(y.weight_orig)
cos_err_u = 1.0 - torch.abs(torch.dot(y.weight_u, u[:, 0])).item()
cos_err_v = 1.0 - torch.abs(torch.dot(y.weight_v, v[:, 0])).item()
print('u-estimate cosine error:', cos_err_u)
print('v-estimate cosine error:', cos_err_v)
# singular values
actual_orig_sn = s[0].item()
approx_orig_sn = (y.weight_u @ y.weight_orig @ y.weight_v).item()
print('Actual original spectral norm:', actual_orig_sn)
print('Approximate original spectral norm:', approx_orig_sn)
# updated weights singular values
u,s_new,v = torch.svd(y.weight.data, compute_uv=False)
actual_sn = s_new[0].item()
print('Actual updated spectral norm:', actual_sn)
print('Desired updated spectral norm: 1.0')
</code></pre>
<p>which results in</p>
<pre><code>u-estimate cosine error: 0.00764310359954834
v-estimate cosine error: 0.034041762351989746
Actual original spectral norm: 0.8086231350898743
Approximate original spectral norm: 0.7871124148368835
Actual updated spectral norm: 1.0273288488388062
Desired updated spectral norm: 1.0
</code></pre>
<p>Increasing the <code>n_power_iterations</code> parameter will increase the accuracy of the estimate at the cost of computation time.</p> | 2019-12-02 17:59:35.543000+00:00 | 2019-12-02 23:29:51.047000+00:00 | 2019-12-02 23:29:51.047000+00:00 | null | 59,123,577 | <p>when I do,</p>
<pre><code>import torch, torch.nn as nn
x = nn.Linear(3, 3)
y = torch.nn.utils.spectral_norm(x)
</code></pre>
<p>then it gives four different weight matrices,</p>
<p><code>y.weight_u</code></p>
<pre><code>tensor([ 0.6534, -0.1644, 0.7390])
</code></pre>
<p><code>y.weight_orig</code></p>
<pre><code>Parameter containing:
tensor([[ 0.2538, 0.3196, 0.3380],
[ 0.4946, 0.0519, 0.1022],
[-0.5549, -0.0401, 0.1654]], requires_grad=True)
</code></pre>
<p><code>y.weight_v</code></p>
<pre><code>tensor([-0.3650, 0.2870, 0.8857])
</code></pre>
<p><code>y.weight</code></p>
<pre><code>tensor([[ 0.5556, 0.6997, 0.7399],
[ 1.0827, 0.1137, 0.2237],
[-1.2149, -0.0878, 0.3622]], grad_fn=<DivBackward0>)
</code></pre>
<p>how are these four matrices calculated?</p> | 2019-12-01 07:46:00.197000+00:00 | 2019-12-02 23:29:51.047000+00:00 | null | python|pytorch | ['https://arxiv.org/pdf/1802.05957.pdf'] | 1 |
47,274,414 | <p>1) No, the numbers from RdRand are not truly random, since they come from a cryptographically-secure pseudorandom number generator. However, RdRand, RdSeed, and the Intel Secure Key technology are probably the closest to truly random you will find.</p>
<p>2) Yes, the feature is available in all Intel processors that appear in laptops, desktops, and servers starting with the Ivy Bridge processors you mention. These days, the features are also implemented in AMD chips.</p>
<p>3 and 4) The Intel software development guide is the place to look for these answers. There is an interesting discussion of how Intel Secure Key is applied to an astrophysical problem here (<a href="http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120" rel="nofollow noreferrer">http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120</a>) and non-paywalled version here (<a href="https://arxiv.org/abs/1707.02212" rel="nofollow noreferrer">https://arxiv.org/abs/1707.02212</a>). This paper describes how the technology works, how to implement it, and describes its performance (Sections 2.2.1 and 5). Had to read it for a class.</p> | 2017-11-13 22:10:02.990000+00:00 | 2017-11-13 22:10:02.990000+00:00 | null | null | 17,616,960 | <p>I have seen that Intel seems to have included a new assembly function to get real random numbers obtained from hardware. The name of the instruction is <code>RdRand</code>, but only a small amount of details seem accessible on it on Internet: <a href="http://en.wikipedia.org/wiki/RdRand">http://en.wikipedia.org/wiki/RdRand</a></p>
<p>My questions concerning this new instruction and its use in C++11 are the following:</p>
<ol>
<li><p>Are the random numbers generated with <code>RdRand</code> really random? (each bit generated from uncorrelated white noise or quantum processes? )</p></li>
<li><p>Is it a special feature of Ivy Bridge processors and will Intel continue to implement this function in the next generation of cpu?</p></li>
<li><p>How to use it through C++11? Maybe with <code>std::random_device</code> but do compilers already call <code>RdRand</code> if the instruction is available?</p></li>
<li><p>How to check whether <code>RdRand</code> is really called when I compile a program?</p></li>
</ol> | 2013-07-12 14:12:24.510000+00:00 | 2017-11-13 22:10:02.990000+00:00 | 2013-10-06 21:23:36.543000+00:00 | c++|assembly|c++11|random|rdrand | ['http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120', 'https://arxiv.org/abs/1707.02212'] | 2 |
46,318,446 | <blockquote>
<p>Although I am still not fully understanding the optimization
algorithm, I feed like it will help me greatly.</p>
</blockquote>
<p>First up, let me briefly explain this part.
Bayesian Optimization methods aim to deal with exploration-exploitation trade off in the <a href="https://en.wikipedia.org/wiki/Multi-armed_bandit" rel="noreferrer">multi-armed bandit problem</a>. In this problem, there is an <em>unknown</em> function, which we can evaluate in any point, but each evaluation costs (direct penalty or opportunity cost), and the goal is to find its maximum using as few trials as possible. Basically, the trade off is this: you know the function in a finite set of points (of which some are good and some are bad), so you can try an area around the current local maximum, hoping to improve it (exploitation), or you can try a completely new area of space, that can potentially be much better or much worse (exploration), or somewhere in between.</p>
<p>Bayesian Optimization methods (e.g. PI, EI, UCB), build a model of the target function using a <a href="https://en.wikipedia.org/wiki/Gaussian_process" rel="noreferrer">Gaussian Process</a> (GP) and at each step choose the most "promising" point based on their GP model (note that "promising" can be defined differently by different particular methods).</p>
<p>Here's an example:</p>
<p><a href="https://i.stack.imgur.com/ksQFy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ksQFy.png" alt="sin(x)*x"></a></p>
<p>The true function is <code>f(x) = x * sin(x)</code> (black curve) on <code>[-10, 10]</code> interval. Red dots represent each trial, red curve is the GP <em>mean</em>, blue curve is the mean plus or minus one <em>standard deviation</em>.
As you can see, the GP model doesn't match the true function everywhere, but the optimizer fairly quickly identified the "hot" area around <code>-8</code> and started to exploit it.</p>
<blockquote>
<p>How do I set up the Bayesian Optimization with regards to a deep
network?</p>
</blockquote>
<p>In this case, the space is defined by (possibly transformed) hyperparameters, usually a multidimensional unit hypercube. </p>
<p>For example, suppose you have three hyperparameters: a learning rate <code>α in [0.001, 0.01]</code>, the regularizer <code>λ in [0.1, 1]</code> (both continuous) and the hidden layer size <code>N in [50..100]</code> (integer). The space for optimization is a 3-dimensional cube <code>[0, 1]*[0, 1]*[0, 1]</code>. Each point <code>(p0, p1, p2)</code> in this cube corresponds to a trinity <code>(α, λ, N)</code> by the following transformation:</p>
<pre><code>p0 -> α = 10**(p0-3)
p1 -> λ = 10**(p1-1)
p2 -> N = int(p2*50 + 50)
</code></pre>
<blockquote>
<p>What is the function I am trying to optimize? Is it the cost of the
validation set after N epochs?</p>
</blockquote>
<p>Correct, the target function is neural network validation accuracy. Clearly, each evaluation is expensive, because it requires at least several epochs for training.</p>
<p>Also note that the target function is <em>stochastic</em>, i.e. two evaluations on the same point may slightly differ, but it's not a blocker for Bayesian Optimization, though it obviously increases the uncertainty.</p>
<blockquote>
<p>Is spearmint a good starting point for this task? Any other
suggestions for this task?</p>
</blockquote>
<p><a href="https://github.com/kuz/caffe-with-spearmint" rel="noreferrer">spearmint</a> is a good library, you can definitely work with that. I can also recommend <a href="http://hyperopt.github.io/hyperopt/" rel="noreferrer">hyperopt</a>.</p>
<p>In my own research, I ended up writing my own tiny library, basically for two reasons: I wanted to code exact Bayesian method to use (in particular, I found a <a href="https://arxiv.org/pdf/1009.5419.pdf" rel="noreferrer">portfolio strategy</a> of UCB and PI converged faster than anything else, in my case); plus there is another technique that can save up to 50% of training time called <a href="http://aad.informatik.uni-freiburg.de/papers/15-IJCAI-Extrapolation_of_Learning_Curves.pdf" rel="noreferrer">learning curve prediction</a> (the idea is to skip full learning cycle when the optimizer is confident the model doesn't learn as fast as in other areas). I'm not aware of any library that implements this, so I coded it myself, and in the end it paid off. If you're interested, the code is <a href="https://github.com/maxim5/hyper-engine" rel="noreferrer">on GitHub</a>.</p> | 2017-09-20 09:36:43.050000+00:00 | 2017-09-20 09:36:43.050000+00:00 | null | null | 41,860,817 | <p>I have constructed a CLDNN (Convolutional, LSTM, Deep Neural Network) structure for raw signal classification task.</p>
<p>Each training epoch runs for about 90 seconds and the hyperparameters seems to be very difficult to optimize.</p>
<p>I have been research various ways to optimize the hyperparameters (e.g. random or grid search) and found out about Bayesian Optimization.</p>
<p>Although I am still not fully understanding the optimization algorithm, I feed like it will help me greatly.</p>
<p>I would like to ask few questions regarding the optimization task.</p>
<ol>
<li>How do I set up the Bayesian Optimization with regards to a deep network?(What is the cost function we are trying to optimize?)</li>
<li>What is the function I am trying to optimize? Is it the cost of the validation set after N epochs?</li>
<li>Is spearmint a good starting point for this task? Any other suggestions for this task?</li>
</ol>
<p>I would greatly appreciate any insights into this problem.</p> | 2017-01-25 20:13:07.383000+00:00 | 2018-01-13 16:17:51.410000+00:00 | 2017-10-17 09:22:52.150000+00:00 | optimization|machine-learning|tensorflow|deep-learning|bayesian | ['https://en.wikipedia.org/wiki/Multi-armed_bandit', 'https://en.wikipedia.org/wiki/Gaussian_process', 'https://i.stack.imgur.com/ksQFy.png', 'https://github.com/kuz/caffe-with-spearmint', 'http://hyperopt.github.io/hyperopt/', 'https://arxiv.org/pdf/1009.5419.pdf', 'http://aad.informatik.uni-freiburg.de/papers/15-IJCAI-Extrapolation_of_Learning_Curves.pdf', 'https://github.com/maxim5/hyper-engine'] | 8 |
56,227,500 | <p>Yes, you can do it.
<a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-net</a> is one possible solution for your task. Or maybe you may try to use generative models like <a href="https://github.com/openai/pixel-cnn" rel="nofollow noreferrer">Pixel CNN</a>. </p> | 2019-05-20 20:06:52.690000+00:00 | 2019-05-20 20:06:52.690000+00:00 | null | null | 56,209,935 | <p>Can a target variable in a supervised machine learning task be an array for a specific object?
I'm planning to build a machine learning model for a petroleum engineering task - which is to build a permeability map having an "input". In fact I should be able to get a map (which is 2d array) as output. Is it possible to have such a map in target variable to train a model?</p>
<p>Input:</p>
<p><img src="https://pp.userapi.com/c845524/v845524080/20821b/ckzzcaBNIDk.jpg" alt="input"> </p>
<p>Output:</p>
<p><img src="https://pp.userapi.com/c845524/v845524080/208222/0BQw5BqdBmw.jpg" alt="output"></p> | 2019-05-19 16:41:50.430000+00:00 | 2019-05-20 20:06:52.690000+00:00 | null | python|machine-learning|scikit-learn|target | ['https://arxiv.org/abs/1505.04597', 'https://github.com/openai/pixel-cnn'] | 2 |
20,980,343 | <p>If the result won't all fit into core memory, you can put it in a memory-mapped array so that the overflow will be written to your hard disk:</p>
<pre><code>shape = (QT.shape[2],)*2
result = np.memmap('result.dat', dtype=QT.dtype, mode='w+', shape=shape)
np.dot(QT.T, QT, out=result)
</code></pre>
<p>You may also want to take a look at <a href="http://arxiv.org/pdf/1007.5510.pdf" rel="nofollow">this algorithm</a> for performing out-of-core SVD on very large arrays.</p> | 2014-01-07 19:43:13.083000+00:00 | 2014-01-07 19:48:42.863000+00:00 | 2014-01-07 19:48:42.863000+00:00 | null | 16,660,928 | <p><strong>Given a matrix QT:</strong></p>
<pre><code>% ipython
Python 2.7.3
In [3]: QT.dtype
Out[3]: dtype('float64')
In [4]: QT.__class__
Out[4]: numpy.ndarray
In [5]: QT.flags
Out[5]:
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
</code></pre>
<p><strong>I need the results of:</strong></p>
<pre><code>QT.T * QT
</code></pre>
<p><strong>Problem:</strong>
Whenever I try to compute this matrix multiplication, the memory overflows and the code stops running. This happens because of the matrix copy NumPy makes behind the scenes.</p>
<p><strong>Tried solutions:</strong></p>
<p>First:</p>
<pre><code>Q = numpy.array(QT.T, order='C')
numpy.dot(Q, QT)
</code></pre>
<p>Second:</p>
<pre><code>QT = numpy.array(QT, order='F')
Q = numpy.array(QT.T, order='F')
numpy.dot(Q, QT)
</code></pre>
<p>Third:</p>
<pre><code>QT = numpy.matrix(QT)
QT = QT.copy('F')
Q = numpy.matrix(QT.T)
Q = Q.copy('F')
Q.dot(QT)
</code></pre>
<p>However, none of them solves the problem.</p>
<p><strong>Question</strong></p>
<p>How can I compute QT.T * QT without making the memory explode?</p>
<p><strong>References</strong></p>
<p><a href="http://numpy-discussion.10968.n7.nabble.com/inplace-matrix-multiplication-td21817.html" rel="nofollow noreferrer">http://numpy-discussion.10968.n7.nabble.com/inplace-matrix-multiplication-td21817.html</a></p>
<p><a href="https://stackoverflow.com/questions/9478791/is-there-an-enhanced-numpy-scipy-dot-method">Is there an "enhanced" numpy/scipy dot method?</a></p>
<p><a href="https://stackoverflow.com/questions/11856293/numpy-dot-product-very-slow-using-ints">Numpy dot product very slow using ints</a></p>
<p><a href="http://www.scipy.org/PerformanceTips" rel="nofollow noreferrer">http://www.scipy.org/PerformanceTips</a></p> | 2013-05-21 01:48:08.347000+00:00 | 2014-01-07 19:48:42.863000+00:00 | 2017-05-23 11:57:23.170000+00:00 | python|matrix|numpy|scipy|dot-product | ['http://arxiv.org/pdf/1007.5510.pdf'] | 1 |
72,837,134 | <blockquote>
<p>I guess cuckoo filters must have matured quite a bit over the years in terms of adoption.</p>
</blockquote>
<p>Cuckoo filters are relatively simple, so no 'maturity process' was required.</p>
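<p>To illustrate that simplicity, here is a hedged, minimal pure-Python sketch of the textbook data structure (bucketized partial-key cuckoo hashing; all names and sizes are illustrative, and the Redis implementation is far more compact and optimized):</p>
<pre><code>import random

class CuckooFilter:
    def __init__(self, n_buckets=1024, bucket_size=4, max_kicks=500):
        self.n = n_buckets              # should be a power of two for the XOR trick
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks
        self.buckets = [[] for _ in range(n_buckets)]

    def _fingerprint(self, item):
        return (hash(item) & 0xFF) or 1          # small fingerprint; 0 kept as "empty"

    def _indexes(self, item, fp):
        i1 = hash(item) % self.n
        i2 = i1 ^ (hash(fp) % self.n)            # partial-key cuckoo hashing
        return i1, i2

    def insert(self, item):
        fp = self._fingerprint(item)
        i1, i2 = self._indexes(item, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))              # both buckets full: start evicting
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = i ^ (hash(fp) % self.n)          # the evicted fingerprint's other bucket
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False                             # filter is considered full

    def contains(self, item):
        fp = self._fingerprint(item)
        i1, i2 = self._indexes(item, fp)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

    def delete(self, item):
        fp = self._fingerprint(item)
        i1, i2 = self._indexes(item, fp)
        for i in (i1, i2):
            if fp in self.buckets[i]:
                self.buckets[i].remove(fp)
                return True
        return False
</code></pre>
<p>Deletion works because each insert stores its own copy of the fingerprint, which is the feature a plain Bloom filter cannot offer.</p>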
<p>That being said, since the cuckoo filter's <a href="https://www.cs.cmu.edu/%7Edga/papers/cuckoo-conext2014.pdf" rel="nofollow noreferrer">introduction</a> in 2014, many improvements have been suggested (and continue to be suggested), including:</p>
<ul>
<li><a href="https://ieeexplore.ieee.org/document/9201031" rel="nofollow noreferrer">Configurable bucket</a></li>
<li><a href="https://ieeexplore.ieee.org/document/9213014" rel="nofollow noreferrer">Additive and subtractive cuckoo filter (ASCF)</a></li>
<li><a href="https://ieeexplore.ieee.org/document/8885169" rel="nofollow noreferrer">Reducing Relocations in Cuckoo Filter</a></li>
<li><a href="https://ieeexplore.ieee.org/document/9522301" rel="nofollow noreferrer">Tagged Cuckoo Filters</a></li>
<li><a href="https://ieeexplore.ieee.org/document/9289870" rel="nofollow noreferrer">Optimized Cuckoo Filter (OCF)</a></li>
<li><a href="https://ieeexplore.ieee.org/document/9494536" rel="nofollow noreferrer">Index-Independent Cuckoo filter (I2CF)</a></li>
<li><a href="https://ieeexplore.ieee.org/document/8885169" rel="nofollow noreferrer">Leveraging the power of two choices to select the better candidate bucket during insertion</a></li>
</ul>
<p>and even</p>
<ul>
<li><a href="https://ieeexplore.ieee.org/document/8770062" rel="nofollow noreferrer">CFBF: Reducing the Insertion Time of Cuckoo Filters With an Integrated Bloom Filter</a></li>
</ul>
<p>Whether each of these methods guarantees better results (insert performance, query performance, memory consumption, etc.) for each use case requires a comparative analysis (I'm not aware of such unbiased research).</p>
<p>As for adoption:</p>
<ul>
<li>There are <a href="https://github.com/search?q=cuckoo%20filter" rel="nofollow noreferrer">many GitHub repositories</a> implementing cuckoo filter in various languages</li>
<li>There is a strong academic interest in both theoretical improvements (see above) and applications of cuckoo filters.</li>
</ul>
<blockquote>
<p>So keeping that in mind, which is a better choice among the two in terms of performance as far as Redis implementations are concerned? Is cuckoo filter an obvious choice over bloom given the extra features (like deletion and insertion count)? Are there any trade-offs?</p>
</blockquote>
<p><a href="https://stackoverflow.com/questions/867099/bloom-filter-or-cuckoo-hashing">The question you referred to</a> already has excellent answers concerning the performance and the tradeoffs between these two algorithms. It also discusses why performance is not just a single metric (insert performance vs. query performance; average time vs. worst time, etc.). Since Redis implements the data structure described in the original cuckoo filter paper (albeit in a highly optimized way), all issues discussed apply to the Redis implementation as well.</p>
<hr />
<p>Note that in addition to Bloom and cuckoo filters several additional approximate membership query data structures were suggested, including <a href="https://arxiv.org/abs/1912.08258" rel="nofollow noreferrer">XOR filters</a>, <a href="https://arxiv.org/abs/2103.02515" rel="nofollow noreferrer">Ribbon filters</a>, and <a href="https://arxiv.org/abs/2201.01174" rel="nofollow noreferrer">Binary fuse filters</a>.</p>
<p>Which one is most suitable for each use case requires a non-trivial analysis.</p> | 2022-07-02 07:05:55.097000+00:00 | 2022-07-05 17:14:11.927000+00:00 | 2022-07-05 17:14:11.927000+00:00 | null | 72,833,352 | <p>Previous stackoverflow question regarding bloom and cuckoo filter comparison is 13 years old (<a href="https://stackoverflow.com/questions/867099/bloom-filter-or-cuckoo-hashing">Here</a>) and predates redis-modules by a decade. And I guess cuckoo filters must have matured quite a bit over the years in terms of adoption.</p>
<p>So keeping that in mind, which is a better choice among the two in terms of performance as far as redis implementations are concerned? Is cuckoo filter an obvious choice over bloom given the extra features (like deletion and insertion count)? Are there any trade-offs?</p>
<p>I want to implement these filters for "existing username" invalidation. Are there any better techniques?</p> | 2022-07-01 18:30:42.830000+00:00 | 2022-07-05 17:14:11.927000+00:00 | null | algorithm|filter|hash|redis|bloom-filter | ['https://www.cs.cmu.edu/%7Edga/papers/cuckoo-conext2014.pdf', 'https://ieeexplore.ieee.org/document/9201031', 'https://ieeexplore.ieee.org/document/9213014', 'https://ieeexplore.ieee.org/document/8885169', 'https://ieeexplore.ieee.org/document/9522301', 'https://ieeexplore.ieee.org/document/9289870', 'https://ieeexplore.ieee.org/document/9494536', 'https://ieeexplore.ieee.org/document/8885169', 'https://ieeexplore.ieee.org/document/8770062', 'https://github.com/search?q=cuckoo%20filter', 'https://stackoverflow.com/questions/867099/bloom-filter-or-cuckoo-hashing', 'https://arxiv.org/abs/1912.08258', 'https://arxiv.org/abs/2103.02515', 'https://arxiv.org/abs/2201.01174'] | 14 |
58,553,481 | <p>Please refer to the below paper on Tensorflow Distributions and see if it helps answer your question. </p>
<p><a href="https://arxiv.org/pdf/1711.10604.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1711.10604.pdf</a></p>
<p>If not, please elaborate your issue. </p>
<p>Mention any code snippet of what you have tried, or any limitations you faced while implementing your model.</p> | 2019-10-25 06:43:42.283000+00:00 | 2019-10-25 06:43:42.283000+00:00 | null | null | 57,696,071 | <p>Tensorflow has been used to compute images, but I want to use Tensorflow to compute Biological Models. However, the biological model requires big division and this causes numerical instability. I want TensorFlow to support more numerical stability. Are there any hacks to allow Tensorflow to be more numerically stable? I will follow up with more code in the near future, but if there are any options please tell me. </p> | 2019-08-28 15:47:56.250000+00:00 | 2019-10-25 06:43:42.283000+00:00 | null | tensorflow | ['https://arxiv.org/pdf/1711.10604.pdf'] | 1
44,798,131 | <p><strong>TL;DR</strong>: use <code>tf.clip_by_global_norm</code> for gradient clipping.</p>
<h3>clip_by_value</h3>
<p><code>tf.clip_by_value</code> clips each value inside one tensor, regardless of the other values in the tensor. For instance,</p>
<pre><code>tf.clip_by_value([-1, 2, 10], 0, 3) -> [0, 2, 3] # Only the values below 0 or above 3 are changed
</code></pre>
<p>Consequently, it can change the direction of the tensor, so it should be used if the values in the tensor are decorrelated one from another (which is not the case for gradient clipping), or to avoid zero / infinite values in a tensor that could lead to Nan / infinite values elsewhere (by clipping with a minimum of epsilon=1e-8 and a very big max value for instance).</p>
<h3>clip_by_norm</h3>
<p><code>tf.clip_by_norm</code> rescales one tensor if necessary, so that its L2 norm does not exceed a certain threshold. It's useful typically to avoid exploding gradient on one tensor, because you keep the gradient direction. For instance:</p>
<pre><code>tf.clip_by_norm([-2, 3, 6], 5) -> [-2, 3, 6]*5/7 # The original L2 norm is 7, which is >5, so the final one is 5
tf.clip_by_norm([-2, 3, 6], 9) -> [-2, 3, 6] # The original L2 norm is 7, which is <9, so it is left unchanged
</code></pre>
<p>However, <code>clip_by_norm</code> works on only one gradient, so if you use it on all your gradient tensors, you'll unbalance them (some will be rescaled, others not, and not all with the same scale).</p>
<p>Note that the first two work on only one tensor, while the last one is used on a list of tensors.</p>
<h3>clip_by_global_norm</h3>
<p><code>tf.clip_by_global_norm</code> rescales a list of tensors so that the total norm of the vector of all their norms does not exceed a threshold. The goal is the same as <code>clip_by_norm</code> (avoid exploding gradient, keep the gradient directions), but it works on all the gradients at once rather than on each one separately (that is, all of them are rescaled by the same factor if necessary, or none of them are rescaled). This is better, because the balance between the different gradients is maintained.</p>
<p>For instance: </p>
<pre><code>tf.clip_by_global_norm([tf.constant([-2, 3, 6]),tf.constant([-4, 6, 12])] , 14.5)
</code></pre>
<p>will rescale both tensors by a factor <code>14.5/sqrt(49 + 196)</code>, because the first tensor has a L2 norm of 7, the second one 14, and <code>sqrt(7^2+ 14^2)>14.5</code></p>
<p>This (<code>tf.clip_by_global_norm</code>) is the one that you should use for gradient clipping. See <a href="https://arxiv.org/pdf/1211.5063.pdf" rel="noreferrer">this</a> for instance for more information.</p>
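<p>For example, a minimal sketch of the usual pattern (assuming the TF1-style optimizer API discussed in this answer; the toy loss and the threshold of 5.0 are placeholders you will need to adapt):</p>
<pre><code>import tensorflow as tf

x = tf.Variable([1.0, 2.0])
loss = tf.reduce_sum(tf.square(x))  # toy loss, stands in for your model's loss
optimizer = tf.train.GradientDescentOptimizer(0.1)
grads_and_vars = optimizer.compute_gradients(loss)
grads, variables = zip(*grads_and_vars)
clipped_grads, global_norm = tf.clip_by_global_norm(grads, 5.0)  # 5.0 is an arbitrary max norm
train_op = optimizer.apply_gradients(zip(clipped_grads, variables))
</code></pre>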
<h3>Choosing the value</h3>
<p>Choosing the max value is the hardest part. You should use the biggest value such that you don't have exploding gradient (whose effects can be <code>Nan</code>s or <code>infinite</code> values appearing in your tensors, constant loss /accuracy after a few training steps). The value should be bigger for <code>tf.clip_by_global_norm</code> than for the others, since the global L2 norm will be mechanically bigger than the other ones due to the number of tensors implied.</p> | 2017-06-28 09:04:22.130000+00:00 | 2018-06-26 08:56:00.367000+00:00 | 2018-06-26 08:56:00.367000+00:00 | null | 44,796,793 | <p>Want to understand the difference in roles of <code>tf.clip_by_value</code> and <code>tf.clip_by_global_norm</code> during the implementation of Gradient Clipping in TensorFlow. Which one is preferred and how to decide the max value to clip on?</p> | 2017-06-28 08:00:03.233000+00:00 | 2020-02-22 10:05:02.217000+00:00 | 2020-02-22 10:05:02.217000+00:00 | python|tensorflow|deep-learning | ['https://arxiv.org/pdf/1211.5063.pdf'] | 1 |
37,143,901 | <p>RMSprop does not subtract the gradient from the parameters but uses a more complicated formula involving a combination of: </p>
<ul>
<li>a momentum, if the corresponding parameter is not 0</li>
<li>a gradient step, rescaled non-uniformly (per coordinate) by the square root of a running average of the squared gradient. </li>
</ul>
<p>For more information you can refer to <a href="http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf" rel="nofollow">these slides</a> or <a href="http://arxiv.org/pdf/1502.04390v1.pdf" rel="nofollow">this recent paper</a>.</p>
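<p>Concretely, the per-parameter update looks roughly like this (a NumPy sketch following that formula; the hyper-parameter defaults below are my assumptions, not TensorFlow's exact defaults):</p>
<pre><code>import numpy as np

def rmsprop_momentum_step(w, g, ms, mom, lr=0.001, decay=0.9, momentum=0.9, eps=1e-10):
    ms = decay * ms + (1.0 - decay) * g * g            # running average of the squared gradient
    mom = momentum * mom + lr * g / np.sqrt(ms + eps)  # momentum accumulates the rescaled step
    w = w - mom
    return w, ms, mom
</code></pre>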
<p>The delta is first computed in memory by tensorflow in the slot variable 'momentum' and then the variable is updated (see <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/training_ops.cc#L134" rel="nofollow">the C++ operator</a>).<br>
Thus, you should be able to access it and construct a delta node with <code>delta_w1 = self.rmsprop.get_slot(self.w1, 'momentum')</code>. (I have not tried it yet.)</p> | 2016-05-10 16:13:48.323000+00:00 | 2016-05-10 16:13:48.323000+00:00 | null | null | 37,123,745 | <p><strong>Question:</strong> What is the most efficient way to get the delta of my weights in the most efficient way in a TensorFlow network?</p>
<p><strong>Background</strong>: I've got the operators hooked up as follows (thanks to this <a href="https://stackoverflow.com/questions/34687761/efficiently-grab-gradients-from-tensorflow">SO question</a>):</p>
<pre>
self.cost = `the rest of the network`
self.rmsprop = tf.train.RMSPropOptimizer(lr,rms_decay,0.0,rms_eps)
self.comp_grads = self.rmsprop.compute_gradients(self.cost)
self.grad_placeholder = [(tf.placeholder("float", shape=grad[1].get_shape(), name="grad_placeholder"), grad[1]) for grad in self.comp_grads]
self.apply_grads = self.rmsprop.apply_gradients(self.grad_placeholder)
</pre>
<p>Now, to feed in information, I run the following:</p>
<pre>
feed_dict = `training variables`
grad_vals = self.sess.run([grad[0] for grad in self.comp_grads], feed_dict=feed_dict)
feed_dict2 = `feed_dict plus gradient values added to self.grad_placeholder`
self.sess.run(self.apply_grads, feed_dict=feed_dict2)
</pre>
<p>The command of <code>run(self.apply_grads)</code> will update the network weights, but when I compute the differences in the starting and ending weights (<code>run(self.w1)</code>), those numbers are different than what is stored in <code>grad_vals[0]</code>. I figure this is because the RMSPropOptimizer does more to the raw gradients, but I'm not sure what, or where to find out what it does.</p>
<p>So back to the question: How do I get the delta on my weights in the most efficient way? Am I stuck running <code>self.w1.eval(sess)</code> multiple times to get the weights and calc the difference? Is there something that I'm missing with the <code>tf.RMSPropOptimizer</code> function.</p>
<p>Thanks!</p> | 2016-05-09 19:18:15.393000+00:00 | 2016-05-10 16:13:48.323000+00:00 | 2017-05-23 12:02:03.663000+00:00 | tensorflow | ['http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf', 'http://arxiv.org/pdf/1502.04390v1.pdf', 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/training_ops.cc#L134'] | 3 |
36,814,289 | <p>Speed!</p>
<p>According to the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html" rel="noreferrer">docs</a>, <code>sklearn.decomposition.TruncatedSVD</code> can use a randomized algorithm due to <a href="http://arxiv.org/pdf/0909.4061v2.pdf" rel="noreferrer">Halko, Martinson, and Tropp (2009).</a> This paper claims that their algorithm is considerably faster. </p>
<p>For a dense matrix, it runs in O(m*n*log(k)) time, whereas the classical algorithm takes O(m*n*k) time, where m and n are the dimensions of the matrix from which you want the kth largest components. The randomized algorithm is also easier to efficiently parallelize and makes fewer passes over the data. </p>
<p>Table 7.1 of the paper (on page 45) shows the performance of a few algorithms as a function of matrix size and # of components, and the randomized algorithm is often an order of magnitude faster.</p>
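<p>If you want to compare the two on your own data, something along these lines (a sketch with a small random sparse matrix standing in for the term-document matrix) lets you time both and inspect <code>explained_variance_ratio_</code>:</p>
<pre><code>from sklearn.decomposition import TruncatedSVD
from scipy.sparse import random as sparse_random

X = sparse_random(3000, 4000, density=0.01, random_state=0)  # stand-in for the real matrix

svd_rand = TruncatedSVD(n_components=300, algorithm='randomized', n_iter=10, random_state=0)
svd_rand.fit(X)
print(svd_rand.explained_variance_ratio_.sum())

svd_arpack = TruncatedSVD(n_components=300, algorithm='arpack', random_state=0)
svd_arpack.fit(X)
print(svd_arpack.explained_variance_ratio_.sum())
</code></pre>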
<p>The accuracy of the output is also claimed to be pretty good (Figure 7.5), though there are some modifications and constants that might affect it and I haven't gone through the sklearn code to see what they did/did not do.</p> | 2016-04-23 17:45:15.473000+00:00 | 2016-04-23 17:45:15.473000+00:00 | null | null | 36,812,129 | <p>I used TruncatedSVD on a 30000 by 40000 term-document matrix to reduce the dimensionality to 3000 dimensions:
when using 'randomized', variance ratio is about 0.5 (n_iter=10)
when using 'arpack', variance ratio is about 0.9</p>
<p>The variance ratio of the 'randomized' algorithm is lower than that of 'arpack'.</p>
<p>So why does scikit-learn's TruncatedSVD use the 'randomized' algorithm as the default?</p> | 2016-04-23 14:34:26.003000+00:00 | 2016-04-23 17:45:15.473000+00:00 | null | scikit-learn|svd|dimension-reduction | ['http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html', 'http://arxiv.org/pdf/0909.4061v2.pdf'] | 2
39,287,682 | <p>Yes, this is done in the Neural GPU TensorFlow model by Łukasz Kaiser and Ilya Sutskever. </p>
<p>It uses GRUs rather than LSTMs, but those are very similar cell types. The model is also a little different from the typical RNN implementations. The nuance is that the model does not accept new inputs in time past the first time step: the inputs are fed to the initial cell state so that this "mental image" state evolves through timesteps. </p>
<p>The paper is <a href="http://arxiv.org/abs/1511.08228" rel="nofollow">here</a>.
The neural GPU model implementation in TensorFlow can be found <a href="https://github.com/tensorflow/models/tree/master/neural_gpu" rel="nofollow">here</a>. </p> | 2016-09-02 08:25:12.037000+00:00 | 2016-09-02 08:25:12.037000+00:00 | null | null | 39,251,916 | <p>I am currently studying tensorflow. I just made some simple codes like CNN, RNN ans LSTM and so on. And now I want to implement convolutional lstm. I read <a href="http://arxiv.org/abs/1506.04214" rel="nofollow">this paper</a> and tried to implement it as an exercise. However, there were, as far as I searched, no codes available in the internet. If someone knows where the available source code is, please let me know. </p> | 2016-08-31 14:08:04.427000+00:00 | 2016-12-17 02:04:22.020000+00:00 | 2016-09-01 13:56:51.077000+00:00 | tensorflow|deep-learning|conv-neural-network|lstm | ['http://arxiv.org/abs/1511.08228', 'https://github.com/tensorflow/models/tree/master/neural_gpu'] | 2 |
42,035,711 | <p>Yes, this is possible, but not with standard CNN architectures, some changes are needed:</p>
<ul>
<li>One approach is CNNs with binary weights, so evaluating the CNN can just be done with bit operations. There are many publications about this, like <a href="https://arxiv.org/abs/1602.02830" rel="noreferrer">this</a>, <a href="https://arxiv.org/abs/1603.05279" rel="noreferrer">this</a> or <a href="https://arxiv.org/abs/1511.00363" rel="noreferrer">this</a>. I have seen an implementation of YOLO with binary weights running in real-time on an iPhone, so it is definitely possible.</li>
<li>A second approach is to reduce the number of parameters of the neural network. For example, if you train a network with 5000 weights and get detection performance that is close to what you want, then this network might run in real-time. But this is harder.</li>
<li>Third approach is just to optimize the neural network architecture to minimize parameters, and combine it with a very optimized implementation. There are algorithms to speedup convolution operations, such as <a href="https://arxiv.org/abs/1611.06473" rel="noreferrer">L-CNN</a>, or the ones implemented by cuDNN.</li>
</ul>
<p>A very good related resource is the set of presentations and papers from the <a href="http://allenai.org/plato/emdnn/index.html" rel="noreferrer">The 1st International Workshop on Efficient Methods for Deep Neural Networks</a>.</p> | 2017-02-04 02:03:54.400000+00:00 | 2017-02-04 02:12:33.950000+00:00 | 2017-02-04 02:12:33.950000+00:00 | null | 42,035,467 | <p>I want to create a face detection mobile app and I want to do it with regular Deep Learning (a Convolutional Network). I will train it with my computer and use the trained data in the mobile app.</p>
<p>My question is: can I get very fast computation on a regular phone like an iPhone? I need it to be very fast, able to detect a face in the video in under 1 second. Is it possible on a mobile device, or does this kind of task need more powerful hardware?</p>
<p>I know the training phase must be done on a powerful computer, but I mean the production phase on a mobile device. </p>
<p>for example, if I put my phone in a street, It can detect all peoples face with the same deep network in training phase?</p> | 2017-02-04 01:17:28.190000+00:00 | 2017-02-04 08:49:04.310000+00:00 | 2017-02-04 08:49:04.310000+00:00 | mobile|machine-learning|neural-network|computer-vision|deep-learning | ['https://arxiv.org/abs/1602.02830', 'https://arxiv.org/abs/1603.05279', 'https://arxiv.org/abs/1511.00363', 'https://arxiv.org/abs/1611.06473', 'http://allenai.org/plato/emdnn/index.html'] | 5 |
68,622,734 | <p>You can add L1 and L2 regularization:</p>
<pre><code>kernel_regularizer=keras.regularizers.l2(0.01)
</code></pre>
<p>you can add dropout :</p>
<pre><code>dropout = tf.keras.layers.Dropout(0.2)
</code></pre>
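<p>Combined in a small Keras model it might look like this (a sketch only; the 0.01 and 0.2 rates are assumptions you would need to tune):</p>
<pre><code>from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(6, activation='relu',
                       kernel_regularizer=keras.regularizers.l2(0.01)),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(1, activation='sigmoid'),
])
</code></pre>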
<p>Here, you can also try out different variants of ReLU, such as ELU or LeakyReLU.</p>
<p>You can also try Monte Carlo (MC) Dropout;
visit <a href="https://arxiv.org/abs/1506.02142" rel="nofollow noreferrer">https://arxiv.org/abs/1506.02142</a></p>
<p>For weight initialization, also try:</p>
<pre><code>kernel_initializer="he_uniform"
# or
kernel_initializer="he_normal"
</code></pre> | 2021-08-02 13:48:43.247000+00:00 | 2021-08-02 13:48:43.247000+00:00 | null | null | 68,602,832 | <p>I've been working on building a simple ANN, as I'm a novice, and while I no longer receive any errors, after the first epoch, the accuracy jumps to 1.0, which suggests over fitting. I made sure that the dependent variable (y) column wasn't in both the X and y arrays.</p>
<p>The one hot encoder class I got from another post on stackoverflow, and feel that perhaps I haven't implemented it correctly, and that's causing issue.</p>
<p>I was thinking there may be an issue with this line calling the class, but am not 100% sure.</p>
<p>Any clarification on how to solve the over fitting, or to improve the model would be appreciated.</p>
<p>And lastly, for context: the df (found <a href="https://www.kaggle.com/psvishnu/bank-direct-marketing" rel="nofollow noreferrer">https://www.kaggle.com/psvishnu/bank-direct-marketing</a>) contains data on 7 independent variables that describe a banks customer profile - ei, salary, number of credit cards, debt, etc</p>
<p>And the y = if the client subscribed a term deposit? (binary: "yes","no")</p>
<pre><code>X = MultiColumnLabelEncoder(columns = ['housing', 'loan', 'default']).fit_transform(df_ann)
</code></pre>
<p>Full code snippet</p>
<pre><code>import numpy as np
import pandas as pd
import tensorflow as tf
#Importing Dataset
df = pd.read_csv('bank-full.csv', sep=';')
df_ann = df[['age', 'job', 'marital', 'education', 'default', 'balance', 'housing', 'loan', 'y']]
X = df_ann.iloc[:, :7].values
y = df_ann.iloc[:, -1].values
#Encoding Categorical Data
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
le = LabelEncoder()
y[:] = le.fit_transform(y[:])
class MultiColumnLabelEncoder:
def __init__(self,columns = None):
self.columns = columns # array of column names to encode
def fit(self,X,y=None):
return self # not relevant here
'''
Transforms columns of X specified in self.columns using
LabelEncoder(). If no columns specified, transforms all
columns in X.
'''
def transform(self,X):
output = X.copy()
if self.columns is not None:
for col in self.columns:
output[col] = LabelEncoder().fit_transform(output[col])
else:
for colname,col in output.iteritems():
output[colname] = LabelEncoder().fit_transform(col)
return output
def fit_transform(self,X,y=None):
return self.fit(X,y).transform(X)
X = MultiColumnLabelEncoder(columns = ['housing','loan', 'default']).fit_transform(df_ann)
#OneHotEncoder (3d+)
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(handle_unknown='ignore'), ['job', 'marital', 'education'])], remainder='passthrough')
X = np.array(ct.fit_transform(X))
X = np.asarray(X).astype('float32')
y= np.asarray(y).astype('float32')
#Split to training/test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
#Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Building the ANN
#Initializing
ann = tf.keras.models.Sequential()
#Adding Input Layer
ann.add(tf.keras.layers.Dense(units=6, activation='relu'))
#Second Hidden Layer
ann.add(tf.keras.layers.Dense(units=6, activation='relu'))
#Output Layer
ann.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
#Training ANN
ann.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
ann.fit(X_train, y_train, batch_size = 32, epochs = 200)
#Predicting Test Results
y_pred = ann.predict(X_test)
y_pred = (y_pred > 0.5)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
#Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
</code></pre> | 2021-07-31 14:05:57.130000+00:00 | 2021-08-02 13:48:43.247000+00:00 | null | python|tensorflow|scikit-learn | ['https://arxiv.org/abs/1506.02142'] | 1 |
34,328,229 | <p>There is no direct equivalent of multiclass SVM in <code>e1071</code>. Besides, all approaches that use SVM for multiclass classification rely on techniques like 'one vs. rest' or encoding, amongst others. Here is a reference detailing the most common approaches...
<a href="http://arxiv.org/ftp/arxiv/papers/0802/0802.2411.pdf" rel="nofollow">http://arxiv.org/ftp/arxiv/papers/0802/0802.2411.pdf</a></p>
<p>If you want to use <code>e1071</code> for multiclass SVM, your best option is to create 26 SVM models, one for each class, and use the probability scores to predict. This approach should be good enough for handwritten pattern recognition.</p> | 2015-12-17 06:44:33.460000+00:00 | 2015-12-17 06:44:33.460000+00:00 | null | null | 34,328,105 | <p>I am working on a handwritten pattern recognition project (alphabets) using Support Vector Machines. I have <strong>26 classes</strong> in total but I am not able to classify using SVM in <strong>R</strong>. I can classify the images only if it is a binary class problem. How can I use SVM for <strong>multiclass</strong> classification in R?</p>
<p>I am using "e1071" package. </p>
<p>Thanks in advance.</p> | 2015-12-17 06:36:48.217000+00:00 | 2021-06-12 18:22:22.450000+00:00 | 2015-12-17 07:20:28.480000+00:00 | r|svm | ['http://arxiv.org/ftp/arxiv/papers/0802/0802.2411.pdf'] | 1 |
31,491,855 | <p>I'm currently investigating workarounds for this heat problem as well. Glass overheats quickly when capturing previews and doing heavy calculations or image processing on it.</p>
<p>This paper is very informative:</p>
<p><a href="http://arxiv.org/pdf/1404.1320v1.pdf" rel="nofollow">http://arxiv.org/pdf/1404.1320v1.pdf</a></p>
<p>Title: "Draining our Glass: An Energy and Heat
Characterization of Google Glass"</p> | 2015-07-18 13:51:38.860000+00:00 | 2015-07-18 13:51:38.860000+00:00 | null | null | 27,005,588 | <p>I am running a simple application that receives
and displays the values of Bluetooth Low Energy
advertisement packets in real time.</p>
<p>The Glass heats up in about 5 minutes and touch
commands stop working. The Glass is not super
hot, but warmer than feels comfortable.</p>
<p>Commenting out the Bluetooth stuff reduces the
heating considerably.</p>
<p>How can I make this application workable on the
Glass?</p> | 2014-11-18 22:44:18.220000+00:00 | 2015-07-18 13:51:38.860000+00:00 | null | android|bluetooth|google-glass | ['http://arxiv.org/pdf/1404.1320v1.pdf'] | 1 |
53,904,035 | <p>You asked two questions, one somewhat open-ended (the first one) and other one that has a definitive answer, so I will start by the second one:</p>
<blockquote>
<p>Is there a transformation already available that takes as an input a
matrix like M and produces a matrix like C? Preferably, a python
package?</p>
</blockquote>
<p>The answer is yes, there is one package named <a href="https://docs.scipy.org/doc/scipy/reference/spatial.distance.html" rel="nofollow noreferrer">scipy.spatial.distance</a> that contains a function that takes a matrix like <code>M</code> and produces a matrix like <code>C</code>. The following example is to show the function:</p>
<pre><code>import numpy as np
from scipy.spatial.distance import pdist, squareform
# initial data
M = [[18, 34, 54, 65],
[18, 12, 54, 65],
[21, 43, 55, 78]]
# convert to numpy array
arr = np.array(M)
result = squareform(pdist(M, metric='euclidean'))
print(result)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>[[ 0. 22. 16.1245155 ]
[22. 0. 33.76388603]
[16.1245155 33.76388603 0. ]]
</code></pre>
<p>As seen from the example above, <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html#scipy.spatial.distance.pdist" rel="nofollow noreferrer">pdist</a> takes the <code>M</code> matrix and generates an <code>C</code> matrix. Note that the output of <code>pdist</code> is a <em>condensed distance matrix</em>, so you need to convert it to square form using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.squareform.html#scipy.spatial.distance.squareform" rel="nofollow noreferrer">squareform</a>. Now onto the second issue:</p>
<blockquote>
<p>What is a good choice for comparing the ordering of two lists? That
is, what is a good choice for function f?</p>
</blockquote>
<p>Given that order does matter in your particular case I suggest you look at rank correlation coefficients such as: <a href="https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient" rel="nofollow noreferrer">Kendall</a> or <a href="https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient" rel="nofollow noreferrer">Spearman</a>, both are provided in the <a href="https://docs.scipy.org/doc/scipy/reference/stats.html" rel="nofollow noreferrer">scipy.stats</a> package, along with a whole bunch of other coefficients. Usage example:</p>
<pre><code>import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import kendalltau, spearmanr
# distance function
kendall = lambda x, y : kendalltau(x, y)[0]
spearman = lambda x, y : spearmanr(x, y)[0]
# initial data
M = [[18, 34, 54, 65],
[18, 12, 54, 65],
[21, 43, 55, 78]]
# convert to numpy array
arr = np.array(M)
# compute kendall C and convert to square form
kendall_result = 1 - squareform(pdist(arr, kendall)) # subtract 1 because you want a similarity
print(kendall_result)
print()
# compute spearman C and convert to square form
spearman_result = 1 - squareform(pdist(arr, spearman)) # subtract 1 because you want a similarity
print(spearman_result)
print()
</code></pre>
<p><strong>Output</strong></p>
<pre><code>[[1. 0.33333333 0. ]
[0.33333333 1. 0.33333333]
[0. 0.33333333 1. ]]
[[1. 0.2 0. ]
[0.2 1. 0.2]
[0. 0.2 1. ]]
</code></pre>
<p>If those do not fit your needs you can take a look at the <a href="https://en.wikipedia.org/wiki/Hamming_distance" rel="nofollow noreferrer">Hamming distance</a>, for example:</p>
<pre><code>import numpy as np
from scipy.spatial.distance import pdist, squareform
# initial data
M = [[18, 34, 54, 65],
[18, 12, 54, 65],
[21, 43, 55, 78]]
# convert to numpy array
arr = np.array(M)
# compute match_rank C and convert to square form
result = 1 - squareform(pdist(arr, 'hamming'))
print(result)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>[[1. 0.75 0. ]
[0.75 1. 0. ]
[0. 0. 1. ]]
</code></pre>
<p>In the end the choice of the similarity function will depend on your final application, so you will need to try out different functions and see the ones that fit your needs. Both <code>scipy.spatial.distance</code> and <code>scipy.stats</code> provide a plethora of distance and coefficient functions you can try out.</p>
<p><strong>Further</strong></p>
<ol>
<li>The following <a href="https://arxiv.org/pdf/1107.2691.pdf" rel="nofollow noreferrer">paper</a> contains a section on list similarity</li>
</ol> | 2018-12-23 13:31:22.190000+00:00 | 2018-12-23 13:46:03.323000+00:00 | 2018-12-23 13:46:03.323000+00:00 | null | 53,833,387 | <p>Basically, <a href="https://www.youtube.com/watch?v=sI7VpFNiy_I&t=20m" rel="nofollow noreferrer">I want to reimplement this video</a>. </p>
<p>Given a corpus of documents, I want to find the terms that are most similar to each other. </p>
<p>I was able to generate a cooccurrence matrix using <a href="https://stackoverflow.com/a/37822989">this SO thread</a> and use the video to generate an association matrix. Next I, would like to generate a second order cooccurrence matrix.</p>
<p>Problem statement: Consider a matrix where the rows of the matrix correspond to a term and the entries in the rows correspond to the top k terms similar to that term. Say, k = 4, and we have n terms in our dictionary, then the matrix <code>M</code> has <code>n</code> rows and <code>4</code> columns.</p>
<p>HAVE: </p>
<pre><code>M = [[18,34,54,65], # Term IDs similar to Term t_0
[18,12,54,65], # Term IDs similar to Term t_1
...
[21,43,55,78]] # Term IDs similar to Term t_n.
</code></pre>
<p>So, M contains for each term ID, the most similar term IDs. Now, I would like to check how many of those similar terms match. In the example of <code>M</code> above, it seems that term <code>t_0</code> and term <code>t_1</code> are quite similar, because three out of four terms match, where as terms <code>t_0</code> and <code>t_n</code>are not similar, because no terms match. Let's write <code>M</code> as a series of lists.</p>
<pre><code>M = [list_0, # Term IDs similar to Term t_0
list_1, # Term IDs similar to Term t_1
...
list_n] # Term IDs similar to Term t_n.
</code></pre>
<p>WANT:</p>
<pre><code>C = [[f(list_0, list_0), f(list_0, list_1), ..., f(list_0, list_n)],
[f(list_1, list_0), f(list_1, list_1), ..., f(list_1, list_n)],
...
[f(list_n, list_0), f(list_n, list_1), ..., f(list_n, list_n)]]
</code></pre>
<p>I'd like to find the matrix <code>C</code>, that has as its elements, a function <code>f</code> applied to the lists of <code>M</code>. <code>f(a,b)</code> measures the degree of similarity between two lists <code>a</code> and <code>b</code>. Going, with the example above, the degree of similarity between <code>t_0</code> and <code>t_1</code> should be high, whereas the degree of similarity of <code>t_0</code> and <code>t_n</code> should be low. </p>
<p>My questions: </p>
<ol>
<li>What is a good choice for comparing the ordering of two lists? That is, what is a good choice for function <code>f</code>? </li>
<li>Is there a transformation already available that takes as an input a matrix like <code>M</code> and produces a matrix like <code>C</code>? Preferably a python package?</li>
</ol>
<p>Thank you, r0f1</p> | 2018-12-18 12:49:31.057000+00:00 | 2018-12-28 17:59:27.513000+00:00 | 2018-12-28 17:59:27.513000+00:00 | python|matrix|nlp | ['https://docs.scipy.org/doc/scipy/reference/spatial.distance.html', 'https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html#scipy.spatial.distance.pdist', 'https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.squareform.html#scipy.spatial.distance.squareform', 'https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient', 'https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient', 'https://docs.scipy.org/doc/scipy/reference/stats.html', 'https://en.wikipedia.org/wiki/Hamming_distance', 'https://arxiv.org/pdf/1107.2691.pdf'] | 8 |
49,474,581 | <p>I'm not sure what exactly you want. For example, would some pixels belong to two segments? If that is the case, then I'm relatively sure you have to do something on your own. Otherwise, the following might work:</p>
<h2>Opening and Closing</h2>
<p><a href="https://en.wikipedia.org/wiki/Opening_(morphology)" rel="nofollow noreferrer">Opening</a> and <a href="https://en.wikipedia.org/wiki/Closing_(morphology)" rel="nofollow noreferrer">closing</a> are two morphological operations which will smooth borders</p>
<h2>Clustering</h2>
<p>There are many clustering algorithms. They are what you want for non-semantic segmentation (for semantic segmentation, you might want to read my <a href="https://arxiv.org/pdf/1602.06541.pdf" rel="nofollow noreferrer">literature survey</a>). One example is</p>
<blockquote>
<p>P. F. Felzenszwalb, <a href="http://cs.brown.edu/~pff/segment/" rel="nofollow noreferrer">“Graph based image segmentation.”</a></p>
</blockquote>
<p>I would simply give those algorithms a try and see if one directly works.</p>
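<p>For instance, Felzenszwalb's method has a ready-made implementation in scikit-image (a sketch; the parameter values are just a starting point):</p>
<pre><code>from skimage import data, segmentation

img = data.astronaut()                            # sample image bundled with scikit-image
labels = segmentation.felzenszwalb(img, scale=100, sigma=0.8, min_size=50)
print(labels.max() + 1, "segments")
</code></pre>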
<p>Other clustering algorithms:</p>
<ul>
<li>K-means</li>
<li>DB-SCAN</li>
<li>CLARANS</li>
<li><a href="https://stat.ethz.ch/R-manual/R-devel/library/cluster/html/agnes.html" rel="nofollow noreferrer">AGNES</a></li>
<li><a href="https://onlinelibrary.wiley.com/doi/book/10.1002/9780470316801" rel="nofollow noreferrer">DIANA</a></li>
</ul> | 2018-03-25 09:59:27.823000+00:00 | 2018-03-25 10:12:31.273000+00:00 | 2018-03-25 10:12:31.273000+00:00 | null | 37,761,792 | <p>I have an input image as follows and wish to segment the parts into regions. I also want the segmented parts to not been just the pixels which contribute to the solid color but also the edge anti-aliasing between the edge of the region and the next region.</p>
<p>Does there exist any filter or method to segment the image in this way? The important part is that the end result segmented part must contain the edge anti-aliasing between it and the next regions. A correct solution is shown in yellow.</p>
<p>In these two images I zoomed the pixels to be large so the edge anti-aliasing between region edges can be seen clearly.</p>
<p><a href="https://i.stack.imgur.com/Lmakp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lmakp.jpg" alt="enter image description here"></a></p>
<p>An example output that I want for the yellow region is shown.</p>
<p><a href="https://i.stack.imgur.com/VcMvv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcMvv.jpg" alt="enter image description here"></a></p>
<p>For a definition of "edge anti-aliasing" see <a href="https://markpospesel.wordpress.com/2012/03/30/efficient-edge-antialiasing/" rel="nofollow noreferrer">https://markpospesel.wordpress.com/2012/03/30/efficient-edge-antialiasing/</a></p> | 2016-06-11 08:48:26.757000+00:00 | 2018-03-25 10:12:31.273000+00:00 | 2016-06-12 07:02:12.453000+00:00 | image-processing|computer-vision | ['https://en.wikipedia.org/wiki/Opening_(morphology)', 'https://en.wikipedia.org/wiki/Closing_(morphology)', 'https://arxiv.org/pdf/1602.06541.pdf', 'http://cs.brown.edu/~pff/segment/', 'https://stat.ethz.ch/R-manual/R-devel/library/cluster/html/agnes.html', 'https://onlinelibrary.wiley.com/doi/book/10.1002/9780470316801'] | 6 |
63,616,238 | <p>Finetuning is more like adopting the pre-trained model to the downstream task. However, recent <a href="https://arxiv.org/pdf/1909.04925.pdf" rel="nofollow noreferrer">state-of-the-art</a> proves that finetuning doesn't help much with QA tasks. See also the following <a href="https://stackoverflow.com/questions/63218778/fine-tuning-distilbertforsequenceclassification-is-not-learning-why-is-loss-no/63593996#63593996">post</a>.</p> | 2020-08-27 12:41:45.013000+00:00 | 2020-08-27 12:41:45.013000+00:00 | null | null | 60,418,179 | <p>I'm trying to create my model for question answering based on BERT und can't understand what is the meaning of fine tuning. Do I understand it right, that it is like adaption for specific domain? And if I want to use it with Wikipedia corpora, I just need to integrate unchanged pre-trained model in my network?</p> | 2020-02-26 16:16:03.640000+00:00 | 2020-08-27 12:41:45.013000+00:00 | 2020-08-03 12:13:45.183000+00:00 | nlp|bert-language-model | ['https://arxiv.org/pdf/1909.04925.pdf', 'https://stackoverflow.com/questions/63218778/fine-tuning-distilbertforsequenceclassification-is-not-learning-why-is-loss-no/63593996#63593996'] | 2 |
53,249,810 | <p>The Fourier transform is most suited if your samples are each a time series. If they are, you may extract frequency-domain features for each sample from <code>transformed</code>. Here is a listing of common features in the time and frequency domains that you can consider (<a href="https://arxiv.org/pdf/1401.8212.pdf" rel="nofollow noreferrer">reference</a>):</p>
<p><a href="https://i.stack.imgur.com/Gl8ao.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gl8ao.png" alt="enter image description here"></a></p> | 2018-11-11 14:43:46.707000+00:00 | 2018-11-11 14:43:46.707000+00:00 | null | null | 52,766,848 | <p>My dataset has 2000 attributes and 200 samples. I need to reduce the dimensionality of it. To do this, I am trying to use Fourier transformation as a dimensional reduction. Fourier transformation returns the discrete Fourier transform when I feed data as an input. But I do not know how to use it for dimensional reduction. </p>
<pre><code>from scipy.fftpack import fft
import pandas as pd
price = pd.read_csv(priceFile(), sep=",")
transformed = fft(price)
</code></pre>
<p>Can you please help me? </p> | 2018-10-11 18:34:47.753000+00:00 | 2021-03-23 13:33:04.500000+00:00 | null | python|fft|dimensionality-reduction | ['https://arxiv.org/pdf/1401.8212.pdf', 'https://i.stack.imgur.com/Gl8ao.png'] | 2 |
56,452,756 | <p>In short, there is a common formula for output dims calculation:</p>
<p><a href="https://i.stack.imgur.com/vD1u3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vD1u3.png" alt="formula"></a></p>
<p>You can find explanation in <a href="https://medium.com/mlreview/a-guide-to-receptive-field-arithmetic-for-convolutional-neural-networks-e0f514068807" rel="noreferrer">A guide to receptive field arithmetic for Convolutional Neural Networks</a>.</p>
<p>In addition, I'd like to recommend amazing article <a href="https://arxiv.org/abs/1603.07285" rel="noreferrer">A guide to convolution arithmetic for deep learning</a>.</p>
<p>And this repo <a href="https://github.com/vdumoulin/conv_arithmetic" rel="noreferrer">conv_arithmetic</a> with convolution animations.</p> | 2019-06-05 00:05:42.173000+00:00 | 2019-06-05 00:05:42.173000+00:00 | null | null | 56,450,969 | <p>I'm new to convolutional neural networks and wanted to know how to calculate or figure out the output sizes between layers of a model given a configuration file for pytorch similar to those following instructions in <a href="https://blog.paperspace.com/how-to-implement-a-yolo-v3-object-detector-from-scratch-in-pytorch-part-2/" rel="nofollow noreferrer">this link</a>. </p>
<p>Most of the stuff I've already looked at hasn't been very clear and concise. How am I supposed to calculate the sizes through each layer?
Below is a snippet of a configuration file that would be parsed.</p>
<pre class="lang-sh prettyprint-override"><code># (3, 640, 640)
[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
# (16, 320, 320)
</code></pre> | 2019-06-04 20:33:12.760000+00:00 | 2020-10-28 19:00:04.537000+00:00 | null | pytorch|object-detection | ['https://i.stack.imgur.com/vD1u3.png', 'https://medium.com/mlreview/a-guide-to-receptive-field-arithmetic-for-convolutional-neural-networks-e0f514068807', 'https://arxiv.org/abs/1603.07285', 'https://github.com/vdumoulin/conv_arithmetic'] | 4 |
69,915,369 | <p>@thkala started off with some literature citations. Let me extend that.</p>
<h2>Implementations</h2>
<ul>
<li><a href="https://github.com/tdunning/t-digest" rel="nofollow noreferrer">T-digest</a> from the 2019 Dunning paper has reference implementation in Java, and ports on that page to Python, Go, Javascript, C++, Scala, C, Clojure, C#, Kotlin, and <a href="https://github.com/facebook/folly/blob/main/folly/stats/TDigest.cpp" rel="nofollow noreferrer">C++ port by facebook</a>, and a further <a href="https://crates.io/crates/tdigest/0.2.2" rel="nofollow noreferrer">rust port of that C++ port</a></li>
<li><a href="https://spark.apache.org/" rel="nofollow noreferrer">Spark</a> implements "GK01" from the 2001 Greenwald/Khanna paper for approximate quantiles</li>
<li><a href="https://beam.apache.org/releases/javadoc/2.1.0/org/apache/beam/sdk/transforms/ApproximateQuantiles.html" rel="nofollow noreferrer">Beam: org.apache.beam.sdk.transforms.ApproximateQuantiles</a> has approximate quantiles</li>
<li>Java: <a href="https://guava.dev/releases/23.0/api/docs/com/google/common/math/Quantiles.html" rel="nofollow noreferrer">Guava:com.google.common.math.Quantiles</a> implements exact quantiles, thus taking more memory</li>
<li>Rust: <a href="https://docs.rs/quantiles/0.7.1/quantiles/" rel="nofollow noreferrer">quantiles crate</a> has implementations for the 2001 GK algorithm "GK01", and the 2005 CKMS algorithm. (caution: I found the CKMS implementation slow - <a href="https://github.com/postmates/quantiles/issues/32" rel="nofollow noreferrer">issue</a>)</li>
<li>C++: <a href="https://www.boost.org/doc/libs/1_46_1/libs/math/doc/sf_and_dist/html/math_toolkit/dist/stat_tut/weg/normal_example/normal_misc.html" rel="nofollow noreferrer">boost quantiles</a> has some code, but I didn't understand it.</li>
<li>I did some profiling of the options in Rust [<a href="https://github.com/postmates/quantiles/issues/32" rel="nofollow noreferrer">link</a>] for up to 100M items, and found GK01 the best, T-digest the second, and "keep 1% top values in priority queue" the third.</li>
</ul>
<h2>Literature</h2>
<ul>
<li><p>2001: <a href="http://infolab.stanford.edu/%7Edatar/courses/cs361a/papers/quantiles.pdf" rel="nofollow noreferrer">Space-efficient online computation of quantile summaries</a> (by Greenwald, Khanna). Implemented in Rust: <a href="https://docs.rs/quantiles/0.7.1/quantiles/greenwald_khanna/index.html" rel="nofollow noreferrer">quantiles::greenwald_khanna</a>.</p>
</li>
<li><p>2004: <a href="https://arxiv.org/abs/cs/0408039" rel="nofollow noreferrer">Medians and beyond: new aggregation techniques for sensor networks</a> (by Shrivastava, Buragohain, Agrawal, Suri). Introduces "q-digests", used for fixed-universe data.</p>
</li>
<li><p>2005: <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.853&rep=rep1&type=pdf" rel="nofollow noreferrer">Effective computation of biased quantiles over data streams</a> (by Cormode, Korn, Muthukrishnan, Srivastava)... Implemented in Rust: <a href="https://docs.rs/quantiles/0.7.1/quantiles/ckms/index.html" rel="nofollow noreferrer">quantiles::ckms</a> which notes that the IEEE presentation is correct but the self-published one has flaws. With carefully crafted data, space can grow linearly with input size. "Biased" means it focuses on P90/P95/P99 rather than all the percentiles).</p>
</li>
<li><p>2006: <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.133.793&rep=rep1&type=pdf" rel="nofollow noreferrer">Space-and time-efficient deterministic algorithms for biased quantiles over data streams</a> (by Cormode, Korn, Muthukrishnan, Srivastava)... improved space bound over 2005 paper</p>
</li>
<li><p>2007: <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.74.8534&rep=rep1&type=pdf" rel="nofollow noreferrer">A fast algorithm for approximate quantiles in high speed data streams</a> (by Zhang, Wang). Claims 60-300x speedup over GK. The 2020 literature review below says this has state-of-the-art space upper bound.</p>
</li>
<li><p>2019 <a href="https://arxiv.org/abs/1902.04023" rel="nofollow noreferrer">Computing extremely accurate quantiles using t-digests</a> (by Dunning, Ertl). Introduces t-digests, O(log n) space, O(1) updates, O(1) final calculation. It's neat feature is you can build partial digests (e.g. one per day) and merge them into months, then merge months into years. This is what the big query engines use.</p>
</li>
<li><p>2020 <a href="https://arxiv.org/pdf/2004.08255.pdf" rel="nofollow noreferrer">A survey of approximate quantile computation on large-scale data (technical report)</a> (by Chen, Zhang).</p>
</li>
<li><p>2021 <a href="https://www.sciencedirect.com/science/article/pii/S2665963820300403" rel="nofollow noreferrer">The t-digest: Efficient estimates of distributions</a> - an approachable wrap-up paper about t-digests.</p>
</li>
</ul>
<h2>Cheap hack for P99 of <10M values: just store top 1% in a priority queue!</h2>
<p>This'll sound stupid, but when I want to calculate the P99 of 10M float64s, I just create a priority queue with 100k float32s (takes 400kB). This takes only 4x as much space as "GK01" and is much faster. For 5M or fewer items, it takes less space than GK01!!</p>
<pre><code>struct TopValues {
values: std::collections::BinaryHeap<std::cmp::Reverse<ordered_float::NotNan<f32>>>,
}
impl TopValues {
fn new(count: usize) -> Self {
let capacity = std::cmp::max(count / 100, 1);
let values = std::collections::BinaryHeap::with_capacity(capacity);
TopValues { values }
}
fn render(&mut self) -> String {
let p99 = self.values.peek().unwrap().0;
let max = self.values.drain().min().unwrap().0;
format!("TopValues, p99={:.4}, max={:.4}", p99, max)
}
fn insert(&mut self, value: f64) {
let value = value as f32;
let value = std::cmp::Reverse(unsafe { ordered_float::NotNan::new_unchecked(value) });
if self.values.len() < self.values.capacity() {
self.values.push(value);
} else if self.values.peek().unwrap().0 < value.0 {
self.values.pop();
self.values.push(value);
} else {
}
}
}
</code></pre> | 2021-11-10 14:49:21.160000+00:00 | 2022-07-08 18:03:11.717000+00:00 | 2022-07-08 18:03:11.717000+00:00 | null | 1,248,815 | <p>I am looking for an algorithm that determines percentiles for live data capture.</p>
<p>For example, consider the development of a server application.</p>
<p>The server might have response times as follows:
17 ms
33 ms
52 ms
60 ms
55 ms
etc.</p>
<p>It is useful to report the 90th percentile response time, 80th percentile response time, etc.</p>
<p>The naive algorithm is to insert each response time into a list. When statistics are requested, sort the list and get the values at the proper positions.</p>
<p>Memory usage scales linearly with the number of requests.</p>
<p>Is there an algorithm that yields "approximate" percentile statistics given limited memory usage? For example, let's say I want to solve this problem in a way that I process millions of requests but only want to use say one kilobyte of memory for percentile tracking (discarding the tracking for old requests is not an option since the percentiles are supposed to be for all requests).</p>
<p>Also require that there is no a priori knowledge of the distribution. For example, I do not want to specify any ranges of buckets ahead of time.</p> | 2009-08-08 12:56:21.043000+00:00 | 2022-07-08 18:03:11.717000+00:00 | 2009-08-08 14:22:42.060000+00:00 | algorithm|response-time|percentile|resampling | ['https://github.com/tdunning/t-digest', 'https://github.com/facebook/folly/blob/main/folly/stats/TDigest.cpp', 'https://crates.io/crates/tdigest/0.2.2', 'https://spark.apache.org/', 'https://beam.apache.org/releases/javadoc/2.1.0/org/apache/beam/sdk/transforms/ApproximateQuantiles.html', 'https://guava.dev/releases/23.0/api/docs/com/google/common/math/Quantiles.html', 'https://docs.rs/quantiles/0.7.1/quantiles/', 'https://github.com/postmates/quantiles/issues/32', 'https://www.boost.org/doc/libs/1_46_1/libs/math/doc/sf_and_dist/html/math_toolkit/dist/stat_tut/weg/normal_example/normal_misc.html', 'https://github.com/postmates/quantiles/issues/32', 'http://infolab.stanford.edu/%7Edatar/courses/cs361a/papers/quantiles.pdf', 'https://docs.rs/quantiles/0.7.1/quantiles/greenwald_khanna/index.html', 'https://arxiv.org/abs/cs/0408039', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.853&rep=rep1&type=pdf', 'https://docs.rs/quantiles/0.7.1/quantiles/ckms/index.html', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.133.793&rep=rep1&type=pdf', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.74.8534&rep=rep1&type=pdf', 'https://arxiv.org/abs/1902.04023', 'https://arxiv.org/pdf/2004.08255.pdf', 'https://www.sciencedirect.com/science/article/pii/S2665963820300403'] | 20 |
57,606,424 | <p>Mean Average Rank is the mean of the per-graph average ranks; see point 3 (Results) in <a href="https://academic.oup.com/bioinformatics/article/33/7/1031/2571354" rel="nofollow noreferrer">https://academic.oup.com/bioinformatics/article/33/7/1031/2571354</a>.
IMHO your understanding is correct. You can also go through <a href="https://arxiv.org/pdf/1811.04441.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1811.04441.pdf</a> for the Mean Reciprocal Rank (MRR).
To be sure you can mail the first author on the email id he has provided.</p> | 2019-08-22 09:41:04.797000+00:00 | 2019-08-22 09:50:28.490000+00:00 | 2019-08-22 09:50:28.490000+00:00 | null | 57,467,563 | <p>I'm trying to figure out how mean average rank <a href="https://openreview.net/forum?id=HyePrhR5KX" rel="nofollow noreferrer">(MAR)</a> in graph modelling is calculated. Could someone look through my example and tell me if I'm correct?</p>
<p>Say we have two graphs that look like this:</p>
<pre><code>1-2, 3-4
1, 2, 3-4
</code></pre>
<p>In graph 1, and there are two edges (between nodes 1 and 2 and between nodes 3 and 4). </p>
<p>In graph 2, there is only one edge (between nodes 3 and 4).</p>
<p><strong>For graph 1:</strong></p>
<p>To compute the average rank for graph 1 (<strong>for edge 1-2</strong>):</p>
<p>1) We compute the probability that node 1 would connect with nodes 2, 3 and 4, according to our model. Suppose this gives us <code>[0.09, 0.90, 0.01]</code>.</p>
<p>2) The rank of node 2 (the ground truth connection) would be 2 here, because it is the connection with the second highest probability.</p>
<p>Now <strong>for edge 3-4</strong> in graph 1:</p>
<p>1) We compute the probability that node 3 would connect with nodes 1, 2 and 4, according to our model. Suppose this gives us <code>[0.21, 0.04, 0.75]</code>.</p>
<p>3) The rank of node 4 (the ground truth) is 1.</p>
<p><strong>So the average rank for the first graph is (2+1)/2 = 1.5</strong></p>
<p><strong>For graph 2:</strong></p>
<p>1) We replace node 4 with nodes 1 and 2.</p>
<p>2) We compute the probability that node 3 connects with nodes 1, 2, or 4. Say that gives us <code>[0.05, 0.80, 0.15]</code>.</p>
<p>3) The ground truth is node 4, which had probability 0.15, which has rank 2 (is the second highest probability).</p>
<p><strong>So the average rank for the second graph is 2/1 = 2.</strong></p>
<p><strong>The mean average rank (MAR) would be: (1.5 + 2)/2 = 1.75.</strong></p>
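<p>In code, I believe what I am computing is simply (with the ranks from my example hard-coded):</p>
<pre><code>import numpy as np

def mean_average_rank(ranks_per_graph):
    # average the ground-truth ranks within each graph, then average over graphs
    return np.mean([np.mean(r) for r in ranks_per_graph])

print(mean_average_rank([[2, 1], [2]]))  # (1.5 + 2) / 2 = 1.75
</code></pre>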
<p>Is this correct?</p> | 2019-08-12 20:00:10.377000+00:00 | 2019-08-22 09:50:28.490000+00:00 | 2019-08-21 14:24:44.180000+00:00 | machine-learning | ['https://academic.oup.com/bioinformatics/article/33/7/1031/2571354', 'https://arxiv.org/pdf/1811.04441.pdf'] | 2 |
51,719,550 | <p>There are quite a lot of possible options for you to tune here; what you are currently facing is generally referred to as "Hyperparameter Optimization", which is an entire research field by itself.</p>
<p>Basically, you want to tune your parameters in such a way that you get the best possible result. First of all, I would recommend simply training for a longer period of time (more epochs).
I have never trained on CIFAR-10 myself, but it could be that convergence is reached much later (although I doubt it). Also, I would recommend implementing something along the lines of early stopping.</p>
<p>Instead of always using the latest model, instead use (i.e. checkpoint) the one with the highest validation accuracy, in the hopes that this is the one that generalizes best to unseen data. Even though it might not have perfect scores on your training data, it will most of the times serve you better in practice.<br/>
After completing your whole run (or without significant improvement in terms of validation loss), you can then cancel your training procedure early.</p>
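<p>A bare-bones sketch of that checkpoint-the-best / early-stopping loop (the <code>run_epoch</code> and <code>validation_accuracy</code> helpers are placeholders for your own training and evaluation code, and the patience of 5 is an assumption):</p>
<pre><code>import numpy as np

def run_epoch(epoch):
    pass  # placeholder: one epoch of training on your model

def validation_accuracy(epoch):
    return np.random.rand()  # placeholder: evaluate current weights on the validation set

best_acc, patience, bad_epochs = 0.0, 5, 0
for epoch in range(50):
    run_epoch(epoch)
    acc = validation_accuracy(epoch)
    if acc > best_acc:
        best_acc, bad_epochs = acc, 0
        # checkpoint here, e.g. saver.save(sess, 'best.ckpt') in TF1
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # early stopping
</code></pre>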
<p>Furthermore, I am not sure how much you have played around with the internals of the residual blocks, but maybe you can try to adjust some parameters here, like the convolutional size, convolutions per residual block, number of features, etc. etc.</p>
<p>There is really a lot to look out for, but I am assuming you are mostly sticking to the <a href="https://arxiv.org/abs/1512.03385" rel="nofollow noreferrer">ResNet paper</a> in terms of your architecture.
Sadly, there is also a good deal of luck involved for that, since a good random initialization might yield far better results with otherwise similar properties (see <a href="https://arxiv.org/abs/1803.03635" rel="nofollow noreferrer">this paper</a> for more details), so simply choosing a different random seed might help out...</p>
<p>Last, but not least, I would recommend looking into different optimizers: generally, default SGD "only works so well" (although it is still a great tool and might in some cases work best). Adding more advanced techniques, the simplest being for example <a href="https://towardsdatascience.com/stochastic-gradient-descent-with-momentum-a84097641a5d" rel="nofollow noreferrer"><em>momentum</em></a>, may result in better convergence.</p>
<p>Most of the deep learning tools out there offer you a wide variety of optimizers. A decent go-to choice would be the Adam optimizer, and there are of course plenty of others out there.</p> | 2018-08-07 05:42:20.173000+00:00 | 2018-08-07 05:42:20.173000+00:00 | null | null | 51,718,667 | <p>I built a 56 layer residual network to train on the CIFAR-10 dataset for image classification. Though it's a state-of-the-art network architecture, I get a model test accuracy of 79% after 10 epochs of training.</p>
<p>The training dataset size is 49000, and the validation dataset size is 1000. I trained the model for 20 epochs with a minibatch size of 128. The learning rate is 1e-3. I used Xavier initialization and RMSProp for gradient descent.</p>
<p>Refer here for my implementation.
<a href="https://github.com/Jiancong/cs231n_2017/blob/master/assignment2/TensorFlow.ipynb" rel="nofollow noreferrer">https://github.com/Jiancong/cs231n_2017/blob/master/assignment2/TensorFlow.ipynb</a></p>
<p>The result is following
<a href="https://i.stack.imgur.com/Ggu7C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ggu7C.png" alt="enter image description here"></a></p>
<p>I tried decreasing the learning rate to 1e-4, but the test accuracy degraded too. I also tried increasing the training epochs to 15. The accuracy increased as follows.</p>
<p><a href="https://i.stack.imgur.com/4flxl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4flxl.png" alt="enter image description here"></a></p>
<p>I increased the training epochs to 50, and the test accuracy saturated at 85%. </p>
<p>Increasing the training epochs does not seem to work any more. Is there anything else worth trying?</p> | 2018-08-07 04:06:15.257000+00:00 | 2018-08-08 13:27:15.887000+00:00 | 2018-08-08 13:27:15.887000+00:00 | python|tensorflow|deep-learning|resnet | ['https://arxiv.org/abs/1512.03385', 'https://arxiv.org/abs/1803.03635', 'https://towardsdatascience.com/stochastic-gradient-descent-with-momentum-a84097641a5d', 'http://,%20or%20variations%20of%20AdaGrad,%20that%20use%20%22some%20fancy%20math%22%20(specifically,%20derivative%20information,%20or'] | 4
<p>I think <a href="https://github.com/artvandelay/Deep_Inside_Convolutional_Networks/blob/master/visualize.py" rel="nofollow">this code</a> is a good starting point to reproduce the images the Google team published. The procedure looks clear:</p>
<ol>
<li>Start with a pure noise image and a class (say "cat")</li>
<li>Perform a forward pass and backpropagate the error wrt the imposed class label</li>
<li>Update the initial image with the gradient computed at the data layer</li>
</ol>
<p>There are some tricks involved, that can be found in the <a href="http://arxiv.org/abs/1312.6034" rel="nofollow">original paper</a>.</p>
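<p>As a rough illustration of steps 1-3 (written with TensorFlow 2 here purely for brevity - the original work used Caffe - and using an untrained Keras VGG16 as a stand-in classifier; these names are assumptions, not the authors' code):</p>
<pre><code>import tensorflow as tf

model = tf.keras.applications.VGG16(weights=None)       # stand-in; use your own trained classifier
img = tf.Variable(tf.random.normal([1, 224, 224, 3]))   # step 1: pure noise image
class_idx = 281                                          # the class we want to "dream" of

for step in range(200):
    with tf.GradientTape() as tape:
        score = model(img, training=False)[0, class_idx]   # step 2: forward pass
    grad = tape.gradient(score, img)                        # step 2: backprop w.r.t. the input image
    img.assign_add(0.5 * grad / (tf.norm(grad) + 1e-8))     # step 3: gradient ascent on the image
</code></pre>
<p>The "natural image" prior mentioned below would be added on top of this plain gradient ascent, e.g. as smoothness penalties or jitter between steps.</p>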
<p>It seems that the main difference is that Google folks tried to get a more "realistic" image:</p>
<blockquote>
<p>By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated. </p>
</blockquote> | 2015-10-05 17:45:12.980000+00:00 | 2015-10-05 17:45:12.980000+00:00 | null | null | 32,856,606 | <p>In the famous Google Inceptionism article,
<a href="http://googleresearch.blogspot.jp/2015/06/inceptionism-going-deeper-into-neural.html" rel="noreferrer">http://googleresearch.blogspot.jp/2015/06/inceptionism-going-deeper-into-neural.html</a>
they show images obtained for each class, such as banana or ant. I want to do the same for other datasets.</p>
<p>The article does describe how it was obtained, but I feel that the explanation is insufficient.</p>
<p>There's a related code
<a href="https://github.com/google/deepdream/blob/master/dream.ipynb" rel="noreferrer">https://github.com/google/deepdream/blob/master/dream.ipynb</a></p>
<p>but what it does is produce a random dreamy image, rather than specifying a class and learning what it looks like to the network, as shown in the article above.</p>
<p>Could anyone give a more concrete overview, or code/tutorial, on how to generate images for a specific class? (preferably assuming the Caffe framework)</p>
11,703,957 | <p>You should remove cities from the route after the recursive call returns. You do this:</p>
<pre><code> Route newRoute = r;
newRoute.addToRoute(citiesNotInRoute.get(i));
BruteForceFindBestRoute(newRoute);
</code></pre>
<p>but never a <code>newRoute.removeFromRoute</code> or similar.</p>
<p>Note that you are writing Java, and in Java an assignment of an object copies <em>the reference</em>, not the object. This means that <code>r</code> and <code>newRoute</code> are <em>the same object</em>; <code>newRoute</code> is not an independent copy of <code>r</code> as you might expect, so any modification to <code>newRoute</code> will change <code>r</code> as well. You might want to explicitly copy your route there. The Java term for this is <a href="http://docs.oracle.com/javase/7/docs/api/java/lang/Cloneable.html" rel="noreferrer">clone</a>. Make sure your clone is deep enough, i.e. you clone all relevant data structures, instead of sharing them between the original and its clone.</p>
<p><em>Note:</em> There are many places where you might make your code more efficient, but as brute force is far from efficient in any case, and you're only talking about few cities, perhaps you don't have to care. If you want to investigate alternatives, though, consider maintaining a single linked list of all unvisited cities. Each call would loop over that list, remove an element, recurse and put the element back. No need to build this list from scratch in each invocation. The idea of <a href="http://arxiv.org/abs/cs/0011047" rel="noreferrer">dancing links</a> could be neatly applied to this task, as an alternative to premade linked list implementations.</p>
<p><strong>EDIT:</strong></p>
<p>The following variation of your code works for me:</p>
<pre><code>import java.util.*;
class SO11703827 {
private static ArrayList<Integer> bestRoute;
public static void bruteForceFindBestRoute
(ArrayList<Integer> r,
ArrayList<Integer> citiesNotInRoute)
{
if(!citiesNotInRoute.isEmpty())
{
for(int i = 0; i<citiesNotInRoute.size(); i++)
{
Integer justRemoved =
(Integer) citiesNotInRoute.remove(0);
ArrayList<Integer> newRoute =
(ArrayList<Integer>) r.clone();
newRoute.add(justRemoved);
bruteForceFindBestRoute(newRoute, citiesNotInRoute);
citiesNotInRoute.add(justRemoved);
}
}
else //if(citiesNotInRoute.isEmpty())
{
if(isBestRoute(r))
bestRoute = r;
}
}
private static boolean isBestRoute(ArrayList<Integer> r) {
System.out.println(r.toString());
return false;
}
public static void main(String[] args) {
ArrayList<Integer> lst = new ArrayList<Integer>();
for (int i = 0; i < 6; ++i)
lst.add(i);
ArrayList<Integer> route = new ArrayList<Integer>();
bruteForceFindBestRoute(route, lst);
}
}
</code></pre> | 2012-07-28 19:22:15.337000+00:00 | 2012-07-28 23:59:42.807000+00:00 | 2012-07-28 23:59:42.807000+00:00 | null | 11,703,827 | <p>I'm working on a project for a math class at school, and I chose to do mine on the Traveling Salesman Problem, something I've always wanted to investigate more.
However, I'm having problems with my brute force solving algorithm.</p>
<p><strong>Please go to the update at the bottom to view the most recent version of the code.</strong></p>
<hr>
<hr>
<p><strong>SKIP THIS PARAGRAPH IF YOU KNOW WHAT THE TRAVELING SALESMAN PROBLEM IS:</strong>
To summarize as much as possible, the TSP goes like this: You are a salesman who wants to visit each city in a region (a city is essentially a point on a map). There are 'n' cities in the bounded x and y region, and each city is connected to every other city (assume by a straight road). You need to find the shortest possible route among the cities that allows you to visit each city. One of the algorithms I want to use (and I will need to test other algorithms) is Brute Force, which checks every possible route and returns the shortest route. The reason this is not always used is because it requires us to check (n-1)! possible routes, and that number gets huge as 'n' increases- in fact, with just 50 cities, that would be 608281864034267560872252163321295376887552831379210240000000000 routes to check.</p>
<p><em>ASSUME FOR ALL EXAMPLES TALKED ABOUT IN THIS POST THAT WE ARE GOING TO BE USING AN ARBITRARY REGION WITH 4 CITIES</em> (even though the algorithm can handle n cities. also we don't care about distances- we want to hit every possible route in brute force).</p>
<p>Here is a simple picture demoing what I'm talking about (4 cities is what I'm starting with to check if the process is working properly)</p>
<p><a href="http://www.emeraldinsight.com/content_images/fig/0050300707002.png" rel="nofollow">picture of map with cities</a></p>
<p>Here is the Brute Force algorithm (assume all other called methods work correctly, because they do):</p>
<p>(check below for a bit more explanation)</p>
<p>[code]</p>
<pre><code>public void BruteForceFindBestRoute(Route r) //Must start r having 1 unflagged city to begin with
{
if(!r.allFlagged() && r.route.size() != m.cities.size())
{
/*STEP 1 Begin with last unflagged city*/
City pivot = r.lastCityAdded();
/*STEP 2: Flag city*/
pivot.visited = true;
/*STEP 3: Find cities "NOT IN ROUTE"*/
ArrayList<City> citiesNotInRoute = new ArrayList<City>();
for(int i = 0; i<m.cities.size(); i++)
{
if(!r.isCityInRoute(m.cities.get(i).name))
{
citiesNotInRoute.add(m.cities.get(i));
}
}
/*STEP 4: Recursively call BruteForceFindBestRoute() using these cities added to the end of our original route*/
for(int i = 0; i<citiesNotInRoute.size(); i++)
{
Route newRoute = r;
newRoute.addToRoute(citiesNotInRoute.get(i));
BruteForceFindBestRoute(newRoute);
}
}
/*STEP 5: If the route is full but the last city isn't flagged, then flag it call BruteForceFindBestRoute() again, with the last city flagged*/
else if(!r.allFlagged() && r.route.size() == m.cities.size())
{
if(r.allFlaggedButLast())
{
Route x = r;
x.flagLastCity();
BruteForceFindBestRoute(x);
}
}
/*STEP 6: If all cities are flagged, the route is full. Check to see if it's the best route.*/
else if(r.allFlagged())
{
if(IsBestRoute(r))
bestRoute = r;
}
else
System.err.println("Error: somehow all cities got flagged, but the route isn't full");
}
</code></pre>
<p>Here is my logic:
(Note: a city object has a "flag" boolean variable called "visited") </p>
<p><em>(if all routes are not flagged, and if the route doesn't contain each possible city)</em></p>
<ul>
<li>begin with route with 1 unflagged city. </li>
<li>flag the "last unflagged" city (this city is "pivot") </li>
<li>Find each city that is "NOT IN ROUTE R", and add it to a new route. </li>
<li>recursively call the BruteForce method on each of these routes.</li>
</ul>
<p><em>(if all routes are not flagged, but the route contains each city)</em></p>
<ul>
<li>flag the last city</li>
</ul>
<p><em>(else... this means the route has each city flagged and contains each possible city)</em></p>
<ul>
<li>see if this is the shortest route- if it is, store it in global variable</li>
</ul>
<p>This image will help me explain the problem...
So the program correctly goes down the left side. However, after it gets to the bottom, one would expect the recursion to jump back up to step4, which it does. However, instead of R having city A flagged and city B unflagged and then recursively calling itself on the "new route" containing Aflag and B, R now has all 4 cities included, and all 4 are flagged. It fails because it adds city D again to "newRoute", recursively calls itself again, and in another method we get an array out of bounds error because there aren't 5 cities in my region, but there incorrectly are 5 cities in route r (A,B,C,D,D). </p>
<p><a href="http://i264.photobucket.com/albums/ii191/om3n07/46049b90.jpg" rel="nofollow">Helpful picture of recursive tree structure</a></p>
<p>The problem has something to do with calling recursion in the loop, or route 'r' being referenced within a recursive call.</p>
<p>If you have any idea what I need to do, I would SERIOUSLY appreciate some help.</p>
<p>Thanks to anyone who will help me out. I will send the whole project to anyone who is willing to help as well. </p>
<hr>
<p><strong>UPDATE</strong></p>
<p>Alright, so I have attempted to shorten and simplify my original method, and this is what I have:</p>
<pre><code>public void BruteForceFindBestRoute(Route r, ArrayList<City> citiesNotInRoute)
{
if(!citiesNotInRoute.isEmpty())
{
for(int i = 0; i<citiesNotInRoute.size(); i++)
{
City justRemoved = (City) citiesNotInRoute.remove(0).clone();
Route newRoute = (Route) r.clone();
newRoute.addToRoute(justRemoved);
BruteForceFindBestRoute(newRoute, citiesNotInRoute);
citiesNotInRoute.add(justRemoved);
}
}
else //if(citiesNotInRoute.isEmpty())
{
if(IsBestRoute(r))
bestRoute = r;
}
}
</code></pre>
<p>The problem is that the variable i inside the for loop seems to lose its meaning when we break out of the recursion, and the loop is not continued. Ideas?</p> | 2012-07-28 19:04:50.177000+00:00 | 2018-11-09 14:55:48.940000+00:00 | 2018-11-09 14:55:48.940000+00:00 | java|algorithm|recursion|traveling-salesman | ['http://docs.oracle.com/javase/7/docs/api/java/lang/Cloneable.html', 'http://arxiv.org/abs/cs/0011047'] | 2
<p>As noted by P-Gn, the problem with such coefficients is that they are not differentiable.
However, it is possible to define similar measures that are differentiable. IoU (intersection over union), as proposed by Prune, is a good measure. For deep learning tasks the closely related Dice coefficient is more popular:</p>
<pre><code>2 * len(A intersect B)/(len(A)+ len(B))
</code></pre>
<p>which ranges between 0 if no overlap and 1 for identical sets.
For binary vectors this can be formulated as</p>
<pre><code>2 * abs(a.b)/(a**2 + b**2)
</code></pre>
<p>where the vectors are a one-hot encoded representation of the set. </p>
<p>Now, if the last layer of your neural network has a softmax activation (as when you use cross entropy), you can interpret the output as the probability of each particular element belonging to your predicted set. The previous formula is then still a good measure of the intersection between your sets, yet stays differentiable. The so-called Dice loss (1 - Dice coefficient) was first introduced in this <a href="https://arxiv.org/abs/1606.04797" rel="nofollow noreferrer">paper</a>, where you can read more about it.</p> | 2018-07-10 15:47:55.703000+00:00 | 2018-07-10 15:47:55.703000+00:00 | null | null | 51,255,745 | <p>I wonder if there is a loss function that can measure the overlap of two collections/sets (order doesn't matter).
E.g. the ground truth is the set [a, b, c] and my model predicts the set [b, e, f]; the overlap is [b]. My goal is to maximize the overlap of my prediction.
Is there a loss function that measures the size of the overlap, so that I can minimize its negative and thereby maximize the overlap?
(I know one solution may follow REINFORCE-style learning that treats the overlap as a reward for each data sample and uses the reward to weight the loss, but is there another solution?)
Thank you.</p> | 2018-07-10 00:37:31.820000+00:00 | 2018-07-10 15:47:55.703000+00:00 | null | tensorflow|machine-learning|pytorch|loss-function|loss | ['https://arxiv.org/abs/1606.04797'] | 1 |
<p>There is a relationship between network size and input resolution, though it is a bit indirect. Increases in network size (i.e. the number of channels and the depth of the network) and in input resolution are both computationally taxing. A recent paper proposes <a href="https://arxiv.org/abs/1905.11946" rel="nofollow noreferrer">EfficientNet</a>, which parameterizes channel number, depth, and input resolution by a single parameter.</p>
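<p>A back-of-the-envelope sketch of how the multiply-add count of a single convolution layer scales (an illustration, not taken from the paper):</p>
<pre><code>def conv_madds(h, w, c_in, c_out, k=3):
    # multiply-adds of one k x k convolution producing an h x w output feature map
    return h * w * c_in * c_out * k * k

print(conv_madds(32, 32, 64, 64))     # baseline
print(conv_madds(64, 64, 64, 64))     # double the resolution -> 4x the cost
print(conv_madds(32, 32, 128, 128))   # double the channels   -> 4x the cost
</code></pre>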
<p>In essence, if you only have so much compute power to spend, you must choose how to distribute it over width (channel count), depth, and resolution. Required compute power is linearly proportional to depth and to each of the input side lengths, and quadratic in the channel number.</p> | 2020-02-25 22:59:27.997000+00:00 | 2020-02-25 22:59:27.997000+00:00 | null | null | 60,204,703 | <p>I am working with different spatial resolutions of images and thought of implementing a CNN architecture for each spatial resolution, since resizing images affects the object details. Is there any particular relationship between the size of the network and the spatial resolution of the image that can be explained quantitatively?</p> | 2020-02-13 09:41:42.777000+00:00 | 2020-02-26 05:35:55.367000+00:00 | 2020-02-25 22:52:51.317000+00:00 | machine-learning|deep-learning|computer-vision|conv-neural-network|resolution | ['https://arxiv.org/abs/1905.11946'] | 1
44,907,357 | <p>Several methods were suggested.</p>
<p>One simple yet effective method was suggested in <a href="https://arxiv.org/abs/0803.0476" rel="nofollow noreferrer">Fast unfolding of communities in large networks</a> (Blondel et al., 2008). It supports weighted networks. Quoting from the abstract:</p>
<blockquote>
<p>We propose a simple method to extract the community structure of large
networks. Our method is a heuristic method that is based on modularity
optimization. It is shown to outperform all other known community
detection method in terms of computation time. Moreover, the quality
of the communities detected is very good, as measured by the so-called
modularity.</p>
</blockquote>
<p>Quoting from the paper:</p>
<blockquote>
<p>We now introduce our algorithm that finds high modularity partitions
of large networks in short time and that unfolds a complete
hierarchical community structure for the network, thereby giving
access to different resolutions of community detection.</p>
</blockquote>
<p>So it is supposed to work well for complete graphs, but you should check it yourself.</p>
<p>A C++ implementation is available <a href="https://sites.google.com/site/findcommunities/" rel="nofollow noreferrer">here</a> (now maintained <a href="https://sourceforge.net/projects/louvain/" rel="nofollow noreferrer">here</a>).</p>
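<p>For reference, a small Python sketch (assuming a recent <code>networkx</code> release that ships a Louvain implementation; the separate <code>python-louvain</code> package is another option):</p>
<pre><code>import networkx as nx
from networkx.algorithms.community import louvain_communities

# toy complete weighted graph
G = nx.complete_graph(6)
for u, v in G.edges():
    G[u][v]['weight'] = abs(u - v)   # made-up "trade volumes"

communities = louvain_communities(G, weight='weight', seed=42)
print(communities)
</code></pre>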
<p>Your other idea - using a weight threshold - may prove to be a good pre-processing step, especially for algorithms which won't partition complete graphs. I believe it is best to set it to some percentile (e.g. the median) of the weights.</p> | 2017-07-04 13:29:25.217000+00:00 | 2017-07-04 13:34:50.487000+00:00 | 2017-07-04 13:34:50.487000+00:00 | null | 44,899,773 | <p>I have a complete network graph where every vertex is connected to every other vertex, and the edges differ only in their weights. An example would be a trade network, where every country is connected to every other country in some way and the connections differ only in their trading volumes.</p>
<p>Now the question is how I could perform community detection in that kind of network. The usual suspects (algorithms) only perform well on either unweighted or incomplete networks. The main problem is that the geodesic distance is the same everywhere.</p>
<p>Two option came into my mind:</p>
<ol>
<li>Cut the network into smaller pieces by cutting them at a certain "weight-threshold-level"</li>
<li>Or use a hierarchical cluster algorithm to turn the whole network into a blockmodel. But I think the problem "no variance in geodesic terms" will remain.</li>
</ol> | 2017-07-04 07:37:11.327000+00:00 | 2017-07-04 13:34:50.487000+00:00 | null | algorithm|network-programming|cluster-analysis | ['https://arxiv.org/abs/0803.0476', 'https://sites.google.com/site/findcommunities/', 'https://sourceforge.net/projects/louvain/'] | 3 |
<p>If you want to classify the image then you will have to use fully connected layers, which require a fixed input dimension; this constraint can be avoided by using <strong>spatial pyramid pooling</strong> (SPP).</p>
<p>With <a href="https://arxiv.org/pdf/1406.4729.pdf" rel="nofollow noreferrer">spatial pyramid pooling</a> the input dimension doesn't have to be fixed; it can vary.
Adding a new SPP layer on top of the last convolutional layer, just before the fully connected layers and the softmax layer, solves the problem.</p>
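<p>A closely related (and simpler) trick is global pooling, which is essentially the single-bin special case of SPP; here is a rough Keras sketch of a network that accepts variable-sized inputs (an illustration, not a full SPP implementation):</p>
<pre><code>from tensorflow.keras import layers, models

inp = layers.Input(shape=(None, None, 3))            # height and width left unspecified
x = layers.Conv2D(32, 3, activation='relu')(inp)
x = layers.Conv2D(64, 3, activation='relu')(x)
x = layers.GlobalMaxPooling2D()(x)                   # fixed-length vector regardless of input size
out = layers.Dense(3, activation='softmax')(x)
model = models.Model(inp, out)
</code></pre>
<p>A full SPP layer would instead pool over several grid sizes (e.g. 1x1, 2x2, 4x4) and concatenate the results before the dense layers.</p>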
<p>Implementation discussions: <a href="https://github.com/tensorflow/tensorflow/issues/6011" rel="nofollow noreferrer">github</a>, <a href="https://stackoverflow.com/questions/40913794/how-to-implement-the-fixed-length-spatial-pyramid-pooling-layer" rel="nofollow noreferrer">stackoverflow</a></p> | 2018-06-06 02:27:58.530000+00:00 | 2018-06-06 02:34:47.867000+00:00 | 2018-06-06 02:34:47.867000+00:00 | null | 50,710,556 | <p>I am using VGG16 to train a neural network to identify 3 classes, but I don't have a fixed image size (all I know is that an image is m x n with m, n < 300). So I set the <code>input_shape</code> of the input layer to <code>(None, None, 3)</code>. The question is how I can go down to one dimension from the 3 dimensions (row, col, channel).</p> | 2018-06-06 00:08:09.837000+00:00 | 2018-06-06 02:34:47.867000+00:00 | null | tensorflow|keras|deep-learning | ['https://arxiv.org/pdf/1406.4729.pdf', 'https://github.com/tensorflow/tensorflow/issues/6011', 'https://stackoverflow.com/questions/40913794/how-to-implement-the-fixed-length-spatial-pyramid-pooling-layer'] | 3
59,869,307 | <p>What you are describing is <em>not</em> <a href="https://arxiv.org/pdf/1509.06461.pdf" rel="nofollow noreferrer">Double DQN</a>. The periodically updated target network is a core feature of the original DQN algorithm (and all of its derivatives). <a href="https://www.nature.com/articles/nature14236.pdf" rel="nofollow noreferrer">DeepMind's classic paper</a> explains why it is crucial to have two networks:</p>
<blockquote>
<p>The second modification to online Q-learning aimed at further improving the
stability of our method with neural networks is to use a separate network for generating the targets <code>y_j</code> in the Q-learning update. More precisely, every <code>C</code> updates we
clone the network <code>Q</code> to obtain a target network <code>Q^</code> and use <code>Q^</code> for generating the
Q-learning targets <code>y_j</code> for the following <code>C</code> updates to <code>Q</code>. This modification makes the algorithm more stable compared to standard online Q-learning, where an update
that increases <code>Q(s_t, a_t)</code> often also increases <code>Q(s_{t+1}, a)</code> for all <code>a</code> and hence also increases the target <code>y_j</code>, possibly leading to oscillations or divergence of the policy. <strong>Generating the targets using an older set of parameters adds a delay between the time an update to <code>Q</code> is made and the time the update affects the targets <code>y_j</code>, making divergence or oscillations much more unlikely.</strong></p>
</blockquote> | 2020-01-22 22:44:34.513000+00:00 | 2020-01-22 22:44:34.513000+00:00 | null | null | 59,848,545 | <p>Why use 2 networks, train once every episode and update the target network every <strong>N</strong> episodes, when we could use 1 network and train it ONCE every <strong>N</strong> episodes? There is literally no difference!</p> | 2020-01-21 20:14:00.087000+00:00 | 2020-01-22 22:44:34.513000+00:00 | null | reinforcement-learning|dqn | ['https://arxiv.org/pdf/1509.06461.pdf', 'https://www.nature.com/articles/nature14236.pdf'] | 2
40,187,764 | <p>The disappointing but accurate answer is that ImageNet training only produces label(s) from an input image, not a bounding box. You would need to train a network to identify ROI. There are a few interesting papers in <a href="https://stackoverflow.com/a/11390618/17328">this SO answer</a> that might help, the key terms being "ROI" and "Saliency Detection".</p>
<p>If you're desperate to reuse that pre-trained network you could try taking random sub-crops of the image and picking the smallest one that still has the correct label. I've never tried this so it might be a poor proxy.</p>
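<p>A rough sketch of that sub-crop idea (the <code>classify</code> stub below merely stands in for the retrained Inception graph and is a placeholder, not real code from the tutorial):</p>
<pre><code>import numpy as np

def classify(crop):
    # placeholder: should return the classifier's probability that the object is present
    return float(np.random.rand())

def smallest_positive_crop(img, threshold=0.9, tries=50):
    h, w = img.shape[:2]
    best = None
    for _ in range(tries):
        y0, x0 = np.random.randint(0, h // 2), np.random.randint(0, w // 2)
        y1, x1 = np.random.randint(h // 2, h), np.random.randint(w // 2, w)
        crop = img[y0:y1, x0:x1]
        if classify(crop) > threshold and (best is None or crop.size < best[0].size):
            best = (crop, (x0, y0, x1, y1))
    return best  # smallest positively classified crop and its (x0, y0, x1, y1) box, or None
</code></pre>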
<p>Edit: It looks like <a href="https://arxiv.org/abs/1312.6034" rel="nofollow noreferrer">this paper</a> has used an image classification net to
compute a saliency map. I'd follow their ideas.</p> | 2016-10-22 02:33:25.313000+00:00 | 2016-10-22 16:14:24.797000+00:00 | 2017-05-23 11:53:59.690000+00:00 | null | 40,187,685 | <p>So I recently followed this tutorial to train my own image classifier.</p>
<p><a href="https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/?utm_campaign=chrome_series_machinelearning_063016&utm_source=gdev&utm_medium=yt-desc#0" rel="nofollow">https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/?utm_campaign=chrome_series_machinelearning_063016&utm_source=gdev&utm_medium=yt-desc#0</a></p>
<p>For those who don't know, it allows retraining the last layer of the Google Inception model in order to make the prediction graph work on our own custom categories.</p>
<p>Once I was done training, I deployed the model on iOS using this tutorial:</p>
<p><a href="https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/" rel="nofollow">https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/</a></p>
<p>And the model works great in the wild on natural images; I'm achieving up to 98% accuracy. It was trained on only 2 classes. Let's say it just gives a yes/no answer to whether a calculator is present in an image or not: if the calculator is present it says yes, if not, it says no.</p>
<p>My question is whether it's possible to draw a bounding box around the calculator using our output graph, or even a heatmap of the detection, because I need to crop the image further based on the detection.</p> | 2016-10-22 02:17:53.410000+00:00 | 2016-10-22 23:30:10.480000+00:00 | null | machine-learning|computer-vision|tensorflow|deep-learning|object-recognition | ['https://stackoverflow.com/a/11390618/17328', 'https://arxiv.org/abs/1312.6034'] | 2
36,483,862 | <p>This kind of learning curve is perfectly normal in neural network training (or even in <a href="https://en.wikipedia.org/wiki/Learning_curve" rel="nofollow">real life learning</a>). That said, while the general shape of the curve is typical, we can improve on its steepness. In that respect, I suggest that you implement <a href="https://www.youtube.com/watch?v=8yg2mRJx-z4" rel="nofollow">momentum</a> into your training algorithm. If that does not seem to be enough, your next step would be to implement some adaptive learning rate algorithm such as <a href="http://sebastianruder.com/optimizing-gradient-descent/" rel="nofollow">adadelta, adagrad or rmsprop</a>. Finally, a last thing you may want to try is <a href="http://arxiv.org/abs/1502.03167" rel="nofollow">batch normalization</a>.</p> | 2016-04-07 18:00:35.583000+00:00 | 2016-04-07 18:00:35.583000+00:00 | null | null | 36,481,971 | <p>I am using tanh as an activation function.
Let's take one problem for example.</p>
<pre><code>XOR Problem:
1 1 0
0 1 1
1 0 1
0 0 0
</code></pre>
<p>When I train my neural network 500 epochs,
results look like this:</p>
<pre><code>1 1 0.001015
0 1 0.955920
1 0 0.956590
0 0 0.001293
</code></pre>
<p>After another 500 epoch:</p>
<pre><code>1 1 0.000428
0 1 0.971866
1 0 0.971468
0 0 0.000525
</code></pre>
<p>Another 500 epoch:</p>
<pre><code>1 1 0.000193
0 1 0.980982
1 0 0.981241
0 0 0.000227
</code></pre>
<p>It seems that the learning is slowing down a lot.
My neural network is taking forever to get precise enough for my custom problems.</p>
<p>Is there any way to speed up the learning after it starts getting slow like that?</p>
<p>Thanks</p> | 2016-04-07 16:23:54.207000+00:00 | 2017-11-12 17:38:48.290000+00:00 | null | machine-learning|neural-network|artificial-intelligence|backpropagation|gradient-descent | ['https://en.wikipedia.org/wiki/Learning_curve', 'https://www.youtube.com/watch?v=8yg2mRJx-z4', 'http://sebastianruder.com/optimizing-gradient-descent/', 'http://arxiv.org/abs/1502.03167'] | 4 |
61,737,986 | <p>If you look inside the "black box" of models such as Yolo and Mask-RCNN, you will realize that they already contain "multiple small networks", to a certain extent, regarding object detection. </p>
<p>Actually, Mask-RCNN is roughly a Faster-RCNN with an additional branch for segmentation. However, regarding detection, there is "somewhere" a classification layer that gives a score for each object class (and a regression layer to estimate the box). All the object classes are estimated from a common representation (all the rest of the network) and only the last layer is specialized to each class. The point is nevertheless that there are advantages to computing the common representation jointly for all object classes, in particular because a positive sample for class <code>i</code> is usually also a negative sample for class <code>j</code>.</p>
<p>The idea is quite different for YOLO (v1), but "somewhere" at the end of the network there is a stack of neuron layers. There is a layer for each object class, and it computes the probability of presence of the corresponding object in a region of the image. Once again, the layers are computed from a "common representation", so in that sense they are quite independent "classifiers". But once again, these "classifiers" benefit from the representation that is computed jointly for all object classes.</p>
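<p>To make the "many small classifiers on top of one shared representation" idea concrete, here is a toy Keras sketch (this is not YOLO or Mask R-CNN code, just an illustration of the structure):</p>
<pre><code>from tensorflow.keras import layers, models

inp = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(32, 3, activation='relu')(inp)      # shared "common representation"
x = layers.GlobalAveragePooling2D()(x)

# one tiny "classifier" (a single sigmoid unit) per object class, all sharing x
heads = [layers.Dense(1, activation='sigmoid', name=f'class_{i}')(x) for i in range(3)]

model = models.Model(inp, heads)
</code></pre>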
<p>To be honest, these explanations are quite approximate, in order to try to be clear. If you really want to understand, the best is to read <a href="https://pjreddie.com/publications/" rel="nofollow noreferrer">the publication(s) of Yolo</a> or <a href="https://arxiv.org/abs/1703.06870" rel="nofollow noreferrer">that of Mask R-CNN</a>. However, they are quite technical and require a good understanding of deep learning basics. There are also some good tutorials on the web.</p>
<p>This being said, you can modify the architecture of Yolo and Mask R-CNN to put more complex "small neural networks" in place of the existing layers. It may improve performance since you will have more neurons, but it will also be more complex to train. As said in the comments by @jakub, you can also train multiple specific networks and add a layer to choose between them all, but it would be a "new" architecture and I doubt that you will obtain a better compromise between performance and computational efficiency than Yolo or Mask R-CNN.</p> | 2020-05-11 19:44:22.237000+00:00 | 2020-05-11 19:44:22.237000+00:00 | null | null | 61,732,510 | <p>So I have been working on object detection for a long time, and I have seen that models like YOLO and Mask-RCNN use a single deep model to classify objects. Is it possible to instead use multiple small networks to identify each object separately to increase accuracy, and what would be the effect on speed? I'm a little bit confused.</p> | 2020-05-11 14:45:16.557000+00:00 | 2021-06-19 17:26:23.223000+00:00 | 2020-05-11 14:47:38.807000+00:00 | machine-learning|deep-learning|computer-vision|computation | ['https://pjreddie.com/publications/', 'https://arxiv.org/abs/1703.06870'] | 2
42,646,014 | <p>As I couldn't make it work, I have tried to implement the normalizing flow described in <a href="https://arxiv.org/pdf/1606.04934.pdf" rel="nofollow noreferrer">this</a> paper: <em>Improved Variational Inference
with Inverse Autoregressive Flow</em> </p>
<p>However I still meet the same problem of diverging loss (towards -infinity), which makes no sense. There must be a problem with my implementation.</p>
<p>Here are the important parts:</p>
<pre><code># the encoder
h = encoder_block(x) # a convnet taking proteins as input (matrices of size 400x22), I don't describe it since it isn't very important
z_log_var = Dense(latent_dim)(h)
z_mean = Dense(latent_dim)(h)
h_ = Dense(latent_dim)(h)
encoder = Model(x, [z_mean,z_log_var, h_])
# the latent variables (only one transformation to keep it simple)
latent_input = Input(shape=(latent_dim, 2), batch_shape=(batch_size, latent_dim, 2))
hl = Convolution1D(1, filter_length, activation="relu", border_mode="same")(latent_input)
hl = Reshape((latent_dim,))(hl)
mean_1 = Dense(latent_dim)(hl)
std_1 = Dense(latent_dim)(hl)
latent_model = Model(latent_input, [mean_1, std_1])
# the decoder
decoder_input = Input((latent_dim,), batch_shape=(batch_size, latent_dim))
decoder=decoder_block() # a convnet that I don't describe
x_decoded_mean = decoder(decoder_input)
generator = Model(decoder_input, x_decoded_mean)
# the VAE
z_mean, z_log_var, other = encoder(vae_input)
eps = Lambda(sample_eps, name='sample_eps')([z_mean, z_log_var, other])
z0 = Lambda(sample_z0, name='sample_z0')([z_mean, z_log_var, eps])
l = Lambda(sample_l, name='sample_l')([eps, z_log_var])
mean, std = latent_model(merge([Reshape((latent_dim,1))(z0), Reshape((latent_dim,1))(other)], mode="concat", concat_axis=-1))
z = Lambda(transform_z0)([z0, mean, std])
l = Lambda(transform_l)([l, std])
x_decoded_mean = generator(z)
vae = Model(vae_input, x_decoded_mean)
# and here is the loss
def vae_loss(x, x_decoded_mean):
xent_loss = K.mean(objectives.categorical_crossentropy(x, x_decoded_mean), -1)
ln_q0z0 = K.sum(log_normal2(z0, z_mean, z_log_var), -1)
ln_pz1 = K.sum(log_stdnormal(z), -1)
result = K.mean(l + ln_pz1 + xent_loss - ln_q0z0)
return result
</code></pre>
<p>Here are the utils functions I use above in the <code>Lambda</code> layers:</p>
<pre><code>def sample_eps(args):
# sample epsilon according to N(O,I)
epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.,
std=epsilon_std)
return epsilon
def sample_z0(args):
z_mean, z_log_var, epsilon = args
# generate z0 according to N(z_mean, z_log_var)
z0 = z_mean + K.exp(z_log_var / 2) * epsilon
return z0
def sample_l(args):
epsilon, z_log_var = args
l = -0.5*K.sum(z_log_var + epsilon**2 + K.log(2*math.pi), -1)
return l
def transform_z0(args):
z0, mean, std = args
z = z0
sig_std = K.sigmoid(std)
z *= sig_std
z += (1-sig_std)*mean
return z
def transform_l(args):
l, std = args
sig_std = K.sigmoid(std)
l -= K.sum(K.log(sig_std+1e-8), -1)
return l
</code></pre> | 2017-03-07 10:38:15.750000+00:00 | 2017-03-07 10:38:15.750000+00:00 | null | null | 42,620,065 | <p>I've been trying to implement a simple version of normalizing flows with Keras, as explained in this paper: <a href="https://arxiv.org/pdf/1505.05770.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1505.05770.pdf</a></p>
<p>My problem is that the loss is always -infinity, and I can't figure out what I did wrong. Can anybody help me?</p>
<p>Here is the procedure: </p>
<ol>
<li><p>the encoder generates vectors of size <code>latent_dim = 100</code>. These are <code>z_mean, z_log_var, u, b, w</code>.</p></li>
<li><p>From <code>z_mean</code> and <code>z_log_var</code>, using the reparametrization trick I can sample <code>z_0</code> ~ <code>N(z_mean, z_log_var)</code>.</p></li>
<li><p>Then I can compute <code>log(abs(1+u.T.dot(psi(z_0))))</code></p></li>
<li><p>Then I can compute <code>z_1</code></p></li>
</ol>
<p>Here is the code for those four steps:</p>
<pre><code>def sampling(args):
z_mean, z_log_var = args
# sample epsilon according to N(O,I)
epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.,
std=epsilon_std)
# generate z0 according to N(z_mean, z_log_var)
z0 = z_mean + K.exp(z_log_var / 2) * epsilon
print('z0', z0)
return z0
def logdet_loss(args):
z0, w, u, b = args
b2 = K.squeeze(b, 1)
beta = K.sum(tf.multiply(w, z0), 1) # <w|z0>
linear_trans = beta + b2 # <w|z0> + b
# change u2 so that the transformation z0->z1 is invertible
alpha = K.sum(tf.multiply(w, u), 1) #
diag1 = tf.diag(K.softplus(alpha) - 1 - alpha)
u2 = u + K.dot(diag1, w) / K.sum(K.square(w)+1e-7)
gamma = K.sum(tf.multiply(w,u2), 1)
logdet = K.log(K.abs(1 + (1 - K.square(K.tanh(linear_trans)))*gamma) + 1e-6)
return logdet
def transform_z0(args):
z0, w, u, b = args
b2 = K.squeeze(b, 1)
beta = K.sum(tf.multiply(w, z0), 1)
# change u2 so that the transformation z0->z1 is invertible
alpha = K.sum(tf.multiply(w, u), 1)
diag1 = tf.diag(K.softplus(alpha) - 1 - alpha)
u2 = u + K.dot(diag1, w) / K.sum(K.square(w)+1e-7)
diag2 = tf.diag(K.tanh(beta + b2))
# generate z1
z1 = z0 + K.dot(diag2,u2)
return z1
</code></pre>
<p>Then here is the loss (where <code>logdet</code> is defined above)</p>
<pre><code>def vae_loss(x, x_decoded_mean):
xent_loss = K.mean(objectives.categorical_crossentropy(x, x_decoded_mean), -1)
ln_q0z0 = K.sum(log_normal2(z0, z_mean, z_log_var, eps=1e-6), -1)
ln_pz1 = K.sum(log_stdnormal(z1), -1)
result = K.mean(logdet + ln_pz1 + xent_loss - ln_q0z0)
return result
</code></pre> | 2017-03-06 07:42:13.950000+00:00 | 2017-03-20 14:53:45.677000+00:00 | 2017-03-08 14:21:06.867000+00:00 | python|deep-learning|keras|autoencoder | ['https://arxiv.org/pdf/1606.04934.pdf'] | 1 |
<p>I know that when it comes to memory orderings, people usually try to argue if and how operations can be reordered, but in my opinion this is the wrong approach! The C++ standard does not state how instructions can be reordered, but instead defines the <em>happens-before relation</em>, which itself is based on the sequenced-before, synchronizes-with and inter-thread-happens-before relations.</p>
<p>A store-release <em>synchronizes-with</em> an acquire-load that reads the stored value, thereby establishing a happens-before relation. Due to the transitivity of the happens-before relation, operations that are "sequenced-before" the store-release also "happen-before" the acquire-load. Any arguments about the correctness of an implementation using atomics should always rely on the happens-before relation. <strong><em>If and how instructions can be reordered is merely a result of applying the rules for the happens-before relation.</em></strong></p>
<p>For a more detailed explanation of the C++ memory model you can take a look at <a href="https://arxiv.org/abs/1803.04432" rel="nofollow noreferrer">Memory Models for C/C++ Programmers</a>.</p> | 2020-02-03 09:57:17.170000+00:00 | 2020-02-03 09:57:17.170000+00:00 | null | null | 59,651,328 | <p>This is a follow up question to <a href="https://stackoverflow.com/questions/59626494/understanding-memory-order-acquire-and-memory-order-release-in-c11/59629166#59629166">this one</a>.</p>
<p>I want to figure exactly the meaning of instruction ordering, and how it is affected by the <code>std::memory_order_acquire</code>, <code>std::memory_order_release</code> etc...</p>
<p>In the question I linked there's some detail already provided, but I felt like the provided answer isn't really about the order (which was more what was I looking for) but rather motivating a bit why this is necessary etc.</p>
<p>I'll quote the same example which I'll use as reference</p>
<pre><code>#include <thread>
#include <atomic>
#include <cassert>
#include <string>
std::atomic<std::string*> ptr;
int data;
void producer()
{
std::string* p = new std::string("Hello");
data = 42;
ptr.store(p, std::memory_order_release);
}
void consumer()
{
std::string* p2;
while (!(p2 = ptr.load(std::memory_order_acquire)))
;
assert(*p2 == "Hello"); // never fires
assert(data == 42); // never fires
}
int main()
{
std::thread t1(producer);
std::thread t2(consumer);
t1.join(); t2.join();
}
</code></pre>
<p>In a nutshell I want to figure what exactly happens with the instruction order at both line</p>
<pre><code>ptr.store(p, std::memory_order_release);
</code></pre>
<p>and</p>
<pre><code>while (!(p2 = ptr.load(std::memory_order_acquire)))
</code></pre>
<p>Focusing on the first according to the documentation</p>
<blockquote>
<p>... no reads or writes in the current thread can be reordered after this store ...</p>
</blockquote>
<p>I've been watching a few talks to understand this ordering issue, and I now understand why it is important. What I cannot quite figure out yet is how the compiler translates the ordering specification. I also think the example given by the documentation isn't particularly useful, because after the store operation in the thread running <code>producer</code> there's no other instruction, hence nothing would be reordered anyway. However, it is also possible that I'm misunderstanding: do they mean that the equivalent assembly of</p>
<pre><code>std::string* p = new std::string("Hello");
data = 42;
ptr.store(p, std::memory_order_release);
</code></pre>
<p>will be such that the first two lines, once translated, will never be moved after the atomic store?
Likewise, in the thread running consumer, is it possible that none of the asserts (or the equivalent assembly) will ever be moved before the atomic load? And suppose I had a third instruction after the store: what would happen to instructions that already come after the atomic load?</p>
<p>I've also tried to compile such code and save the intermediate assembly with the <code>-S</code> flag, but it's quite large and I can't really make sense of it.</p>
<p>Again, to clarify, this question is about how the ordering works, not about why these mechanisms are useful or necessary.</p> | 2020-01-08 17:53:30.610000+00:00 | 2020-02-03 09:57:17.170000+00:00 | 2020-01-12 10:29:54.520000+00:00 | c++|c++11|atomic|memory-model|stdatomic | ['https://arxiv.org/abs/1803.04432'] | 1
49,755,345 | <p>It's probably not possible to determine the actual font size, at least not without knowing the exact font and its specifications. <a href="https://graphicdesign.stackexchange.com/questions/4035/what-does-the-size-of-the-font-translate-to-exactly">see here for an explanation of why</a></p>
<p>If you just want to compare font size between documents, it might be sufficient to use average line height as the comparison, which might be easier to do. If you don't care about the actual values and only need to know relative size between documents, the following might work. You would have to consider, or avoid, the potential effect of different document sizes and/or DPI.</p>
<pre><code>library(tesseract)
library(dplyr)
library(tidyr)
df <- ocr_data("http://arxiv.org/pdf/1403.2805.pdf")
df %>%
separate(bbox, c('x1', 'y1', 'x2', 'y2'), convert = T) %>%
mutate(line_height = y2 - y1) %>%
summarise(avg_line_height = mean(line_height))
# # A tibble: 1 x 1
# avg_line_height
# <dbl>
# 1 58.7
</code></pre>
<p>example for average letter height and width....</p>
<pre><code>df %>%
separate(bbox, c('x1', 'y1', 'x2', 'y2'), convert = T) %>%
mutate(word_height = y2 - y1) %>%
mutate(word_width = x2 - x1) %>%
mutate(num_letters = nchar(word)) %>%
mutate(avg_letter_width = word_width / num_letters) %>%
summarise(avg_letter_height = mean(word_height),
avg_letter_width = mean(avg_letter_width))
# # A tibble: 1 x 2
# avg_letter_height avg_letter_width
# <dbl> <dbl>
# 1 58.7 37.3
</code></pre>
<p>and if you want to do it per page, you can use <code>pdftools</code> to render each page of a multi-page PDF individually and the run <code>ocr_data</code> on each one, then combine...</p>
<pre><code>library(pdftools)
library(tesseract)
library(dplyr)
library(tidyr)
download.file(url = "http://arxiv.org/pdf/1403.2805.pdf",
destfile = pdf_path <- tempfile(fileext = ".pdf"))
page_pngs <-
lapply(seq_len(pdf_info(pdf_path)$pages), function(page_num) {
pdf_convert(pdf_path, pages = page_num, dpi = 300)
})
df <-
bind_rows(
lapply(seq_len(length(page_pngs)), function(page_num) {
ocr_data(page_pngs[[page_num]]) %>%
separate(bbox, c('x1', 'y1', 'x2', 'y2'), convert = T) %>%
mutate(word_height = y2 - y1) %>%
mutate(word_width = x2 - x1) %>%
mutate(num_letters = nchar(word)) %>%
mutate(avg_letter_width = word_width / num_letters) %>%
mutate(page = page_num) %>%
select(page, letter_height = word_height, letter_width = avg_letter_width)
})
)
df %>%
group_by(page) %>%
summarise(avg_letter_height = mean(letter_height),
avg_letter_width = mean(letter_width)) %>%
mutate(avg_letter_area = avg_letter_height * avg_letter_width)
# # A tibble: 29 x 4
# page avg_letter_height avg_letter_width avg_letter_area
# <int> <dbl> <dbl> <dbl>
# 1 1 29.4 17.9 525.
# 2 2 29.3 18.9 554.
# 3 3 30.0 19.1 574.
# 4 4 30.2 18.7 565.
# 5 5 29.8 19.0 566.
# 6 6 28.2 17.7 498.
# 7 7 28.9 18.3 529.
# 8 8 29.8 18.6 554.
# 9 9 29.1 18.6 541.
# 10 10 28.3 18.3 519.
# # ... with 19 more rows
</code></pre> | 2018-04-10 13:47:30.563000+00:00 | 2018-04-12 10:12:25.797000+00:00 | 2018-04-12 10:12:25.797000+00:00 | null | 49,321,238 | <p>I've been trying to reproduce a similar dataset (not exactly the same, I stress) explained in this <a href="https://pdfs.semanticscholar.org/7955/9bc8cff7f81699a63e8c43548753041cd920.pdf" rel="nofollow noreferrer">paper</a> for a similar purpose. But I'm having trouble in coming up with an idea for getting font size while coding in R. Other solutions seem to be available in other coding languages. </p>
<p>For instance, one could very easily extract information regarding the number of characters on a page, or transform each page into an image and obtain data regarding the number of pixels and such - which will be part of my metadata anyway. Such as in the example below:</p>
<pre><code>library(pdftools)
library(png)
download.file("http://arxiv.org/pdf/1403.2805.pdf", "1403.2805.pdf", mode = "wb")
txt <- pdf_text("1403.2805.pdf")
num_char_page = unlist(lapply(txt,nchar))
height = 1:length(txt)
width =1:length(txt)
for (i in 1:length(txt)) {
bitmap <- pdf_render_page("1403.2805.pdf", page = i)
png::writePNG(bitmap, paste0("page",i,".png"))
photo=readPNG(paste0("page",i,".png"))
height[i] = dim(photo)[1]
width[i] = dim(photo)[2]
}
layout_df = data.frame(page=1:length(txt), num_char_page=num_char_page, height=height, width=width)
</code></pre>
<p>So this is fairly straightforward, although the loop could (maybe) be made faster with some lapply version of it. But I have no idea how to obtain the font size. How would I do it? Especially if we assume a scanned version of the documents, such as in the aforementioned paper.</p>
<hr>
<p><strong>Observation</strong>: I will probably ask this in a separate question, but I would appreciate it if someone could point to some ideas regarding margin sizes and line spacing in the comments.</p>
<p><strong>Second Observation</strong>: I think (in this particular case) the PDF that I've used as an example could have meta-data which could enable font-size extraction. But I am trying to obtain font size from scanned (and maybe OCR'd) PDFs. One could transform the pages of the PDF (in the example) into images and then transform them again into non-OCR'd PDFs, which might be somewhat similar to the scanned PDF situation. </p> | 2018-03-16 12:52:03.230000+00:00 | 2018-05-24 07:53:10.800000+00:00 | 2018-05-24 07:53:10.800000+00:00 | r|pdf|text-mining | ['https://graphicdesign.stackexchange.com/questions/4035/what-does-the-size-of-the-font-translate-to-exactly'] | 1 |
<p><a href="https://arxiv.org/abs/1801.06146" rel="nofollow noreferrer">This paper</a> from <a href="https://www.fast.ai/" rel="nofollow noreferrer">fast.ai</a> claims that they have successfully used transfer learning for a text classification task. You should have a look.</p> | 2019-02-02 02:33:17.337000+00:00 | 2019-02-02 02:33:17.337000+00:00 | null | null | 54,487,718 | <p>I want to build an app where I can enter any Twitter keyword, the backend will crawl related tweets and return a sentiment analysis as percentages of negative, neutral and positive tweets. For example, if I enter the keyword 'pepsi', the app will output something like this: tweets related to Pepsi contain 10% negative sentiment, 10% neutral sentiment and 80% positive sentiment.</p>
<p>So the problem is how to train a machine learning algorithm that I can use in the backend to do such sentiment analysis on various kinds of topics. The main idea involved here is transfer learning, where we train one model on a large amount of labeled data and use it as a baseline when training on other data. Transfer learning has limits in NLP, mostly because knowledge learned on one task is not broad enough to carry over to other downstream tasks. For example, I pretrained a good neural network to do sentiment analysis on airlines with a prediction accuracy of over 70%. However, when I use the same model to do sentiment analysis on Pepsi, I get only around 30% prediction accuracy.</p>
<p>I did some research and noticed that Google's universal sentence embedding is quite popular. However, I realized this is a new way of converting input text into a feature vector, not a universal algorithm. I wonder if anyone can point me in the direction I should go. Thanks a lot in advance!</p> | 2019-02-01 22:01:55.573000+00:00 | 2019-02-02 02:33:17.337000+00:00 | null | twitter|deep-learning|sentiment-analysis|transfer-learning | ['https://arxiv.org/abs/1801.06146', 'https://www.fast.ai/'] | 2
<p>This behavior is expected. You are using the Adam optimizer, which means the parameter updates are not based solely on the gradient at the current step but on running averages of past gradients and past squared gradients, so a parameter can keep moving even when its current gradient is zero. You can read more about it in the original paper: <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer"><em>Adam: A Method for Stochastic Optimization</em></a>.</p>
<p>In contrast, if you use a simpler algorithm such as SGD (without momentum) you will get the same parameter values between the third and last logs.</p>
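<p>One common way to avoid this entirely (a sketch reusing <code>net</code> from the question below, not the only possible approach) is to rebuild the optimizer so that it only ever sees the parameters that should be trained, which also discards any Adam state kept for the frozen layer:</p>
<pre><code>import torch.optim as optim

# keep only parameters that are allowed to move; frozen fc2 is simply never seen by Adam
trainable = [p for p in net.parameters() if p.requires_grad]
optimizer = optim.Adam(trainable, lr=0.1)
</code></pre>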
<p>Alternatively, you could unset the gradients of <code>net.fc2</code> altogether instead of having them as zero tensors (because of the <code>zero_grad</code>). You can do this by adding in:</p>
<pre><code>net.fc2.weight.grad = None
net.fc2.bias.grad = None
</code></pre>
<p>This will prevent the optimizer from updating the parameters, which is what you want in your case.</p> | 2021-07-14 12:47:24.720000+00:00 | 2021-07-14 12:47:24.720000+00:00 | null | null | 68,377,722 | <p>Following is a toy example explaining what I actually want to do.</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch import nn
from torch.autograd import Variable
import torch.nn.functional as F
import torch.optim as optim
# toy feed-forward net
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(10, 5)
self.fc2 = nn.Linear(5, 5)
self.fc3 = nn.Linear(5, 1)
def forward(self, x):
x = self.fc1(x)
x = self.fc2(x)
x = self.fc3(x)
return x
# define random data
random_input = Variable(torch.randn(10,))
random_target = Variable(torch.randn(1,))
# define net
net = Net()
# print the initial fc2 weight
print('fc2 weight before train :')
print(net.fc2.weight)
# we want to freeze the fc2 layer: only train fc1 and fc3
net.fc2.weight.requires_grad = False
net.fc2.bias.requires_grad = False
criterion = nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr=0.1)
for i in range(10):
net.zero_grad()
output = net(random_input)
loss = criterion(output, random_target)
loss.backward()
optimizer.step()
# print the trained fc2 weight
# note that the weight is same as the one before training: only fc1 & fc3 changed
print('fc2 weight (frozen) after retrain:')
print(net.fc2.weight)
# let's unfreeze the fc2 layer this time for extra tuning
net.fc2.weight.requires_grad = True
net.fc2.bias.requires_grad = True
# re-retrain
for i in range(10):
net.zero_grad()
output = net(random_input)
loss = criterion(output, random_target)
loss.backward()
optimizer.step()
# print the re-retrained fc2 weight
# note that this time the fc2 weight also changed
print('fc2 weight (unfrozen) after re-retrain:')
print(net.fc2.weight)
# let's freeze the fc2 layer again
net.fc2.weight.requires_grad = False
net.fc2.bias.requires_grad = False
# re-retrain
for i in range(10):
net.zero_grad()
output = net(random_input)
loss = criterion(output, random_target)
loss.backward()
optimizer.step()
# print the re-retrained fc2 weight
# note that this time the fc2 weight also changed, BUT why?
print('fc2 weight (freeze again) after re-retrain:')
print(net.fc2.weight)
</code></pre>
<p>The output of the above code is as follows:</p>
<pre><code>fc2 weight before train :
Parameter containing:
tensor([[-0.0335, -0.1526, 0.1972, 0.3360, 0.2845],
[ 0.2449, 0.3305, -0.0060, -0.0302, -0.0060],
[-0.3496, 0.2047, 0.2549, 0.1363, 0.3202],
[ 0.0900, 0.1425, -0.1090, 0.2983, 0.3481],
[ 0.2390, -0.1817, 0.0885, 0.0562, 0.1787]], requires_grad=True)
fc2 weight (frozen) after retrain:
Parameter containing:
tensor([[-0.0335, -0.1526, 0.1972, 0.3360, 0.2845],
[ 0.2449, 0.3305, -0.0060, -0.0302, -0.0060],
[-0.3496, 0.2047, 0.2549, 0.1363, 0.3202],
[ 0.0900, 0.1425, -0.1090, 0.2983, 0.3481],
[ 0.2390, -0.1817, 0.0885, 0.0562, 0.1787]])
fc2 weight (unfrozen) after re-retrain:
Parameter containing:
tensor([[-0.1092, -0.1510, 0.2565, 0.3626, 0.0869],
[ 0.3208, 0.4498, -0.0719, -0.0643, 0.5945],
[-0.1369, 0.1388, 0.0623, -0.0110, 0.5612],
[-0.0655, 0.1785, 0.0269, 0.3923, 0.1261],
[ 0.1571, -0.1942, 0.1538, 0.0924, -0.0485]], requires_grad=True)
fc2 weight (freeze again) after re-retrain:
Parameter containing:
tensor([[-0.1465, -0.2145, 0.2829, 0.3678, -0.0570],
[ 0.4450, 0.1603, -0.1964, -0.1753, 0.5486],
[-0.0317, 0.1698, -0.0310, -0.0769, 0.7227],
[-0.1470, 0.1200, 0.0956, 0.4346, -0.0379],
[ 0.1705, -0.1075, 0.1387, 0.0685, -0.0168]])
</code></pre>
<p>fc2's weight is frozen in the first 10 iterations because we froze it, and it changes in the second 10 iterations because we unfroze it. But we freeze it again in the third 10 iterations, so why does it change again? What's the proper way to freeze, unfreeze and then freeze some params again?</p> | 2021-07-14 12:02:51.857000+00:00 | 2021-07-14 12:47:24.720000+00:00 | null | python|deep-learning|pytorch | ['https://arxiv.org/abs/1412.6980'] | 1
51,012,825 | <p>I know this post is somewhat old, but in 2016, a variant of Q-learning applied to continuous action spaces was proposed, as an alternative to actor-critic methods. It is called normalized advantage functions (NAF). Here's the paper: <a href="https://arxiv.org/abs/1603.00748" rel="nofollow noreferrer">Continuous Deep Q-Learning with Model-based Acceleration</a></p> | 2018-06-24 18:32:20.967000+00:00 | 2018-06-24 18:32:20.967000+00:00 | null | null | 7,098,625 | <p>I'm trying to get an agent to learn the mouse movements necessary to best perform some task in a reinforcement learning setting (i.e. the reward signal is the only feedback for learning).</p>
<p>I'm hoping to use the Q-learning technique, but while I've found <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.2539&rep=rep1&type=pdf">a way to extend this method to continuous state spaces</a>, I can't seem to figure out how to accommodate a problem with a continuous action space.</p>
<p>I could just force all mouse movement to be of a certain magnitude and in only a certain number of different directions, but any reasonable way of making the actions discrete would yield a huge action space. Since standard Q-learning requires the agent to evaluate <em>all</em> possible actions, such an approximation doesn't solve the problem in any practical sense.</p> | 2011-08-17 19:54:44.280000+00:00 | 2019-07-09 05:39:43.207000+00:00 | 2019-02-19 08:56:47.717000+00:00 | algorithm|machine-learning|reinforcement-learning|q-learning | ['https://arxiv.org/abs/1603.00748'] | 1 |
<p>Another paper to make the list, from the value-based school, is <a href="https://arxiv.org/abs/1609.07152" rel="nofollow noreferrer">Input Convex Neural Networks</a>. The idea is to require Q(s,a) to be convex in actions (not necessarily in states). Then, solving the argmax Q inference is reduced to finding the global optimum using the convexity, much faster than an exhaustive sweep and easier to implement than other value-based approaches. This likely comes at the expense of reduced representational power compared to usual feedforward or convolutional neural networks.</p> | 2019-07-09 05:39:43.207000+00:00 | 2019-07-09 05:39:43.207000+00:00 | null | null | 7,098,625 | <p>I'm trying to get an agent to learn the mouse movements necessary to best perform some task in a reinforcement learning setting (i.e. the reward signal is the only feedback for learning).</p>
<p>I'm hoping to use the Q-learning technique, but while I've found <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.2539&rep=rep1&type=pdf">a way to extend this method to continuous state spaces</a>, I can't seem to figure out how to accommodate a problem with a continuous action space.</p>
<p>I could just force all mouse movement to be of a certain magnitude and in only a certain number of different directions, but any reasonable way of making the actions discrete would yield a huge action space. Since standard Q-learning requires the agent to evaluate <em>all</em> possible actions, such an approximation doesn't solve the problem in any practical sense.</p> | 2011-08-17 19:54:44.280000+00:00 | 2019-07-09 05:39:43.207000+00:00 | 2019-02-19 08:56:47.717000+00:00 | algorithm|machine-learning|reinforcement-learning|q-learning | ['https://arxiv.org/abs/1609.07152'] | 1 |
<p>Fast forward to this year: folks from DeepMind have proposed a deep reinforcement learning actor-critic method for dealing with <strong>both</strong> continuous state and action spaces. It is based on a technique called deterministic policy gradient. See the paper <a href="http://arxiv.org/abs/1509.02971" rel="noreferrer">Continuous control with deep reinforcement learning</a> and some <a href="https://www.reddit.com/r/MachineLearning/comments/3ytnqc/deep_q_learning_with_continuous_actions_question/" rel="noreferrer">implementations</a>.</p> | 2016-08-05 04:11:58.593000+00:00 | 2016-08-05 04:11:58.593000+00:00 | null | null | 7,098,625 | <p>I'm trying to get an agent to learn the mouse movements necessary to best perform some task in a reinforcement learning setting (i.e. the reward signal is the only feedback for learning).</p>
<p>I'm hoping to use the Q-learning technique, but while I've found <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.2539&rep=rep1&type=pdf">a way to extend this method to continuous state spaces</a>, I can't seem to figure out how to accommodate a problem with a continuous action space.</p>
<p>I could just force all mouse movement to be of a certain magnitude and in only a certain number of different directions, but any reasonable way of making the actions discrete would yield a huge action space. Since standard Q-learning requires the agent to evaluate <em>all</em> possible actions, such an approximation doesn't solve the problem in any practical sense.</p> | 2011-08-17 19:54:44.280000+00:00 | 2019-07-09 05:39:43.207000+00:00 | 2019-02-19 08:56:47.717000+00:00 | algorithm|machine-learning|reinforcement-learning|q-learning | ['http://arxiv.org/abs/1509.02971', 'https://www.reddit.com/r/MachineLearning/comments/3ytnqc/deep_q_learning_with_continuous_actions_question/'] | 2 |
68,308,111 | <p>Apply the <a href="https://github.com/ahupp/python-magic" rel="nofollow noreferrer"><code>python-magic</code> library</a>.</p>
<blockquote>
<p><code>python-magic</code> is a Python interface to the <code>libmagic</code> file type
identification library. <code>libmagic</code> identifies file types by checking
their headers according to a predefined list of file types. This
functionality is exposed to the command line by the Unix command
<code>file</code>.</p>
</blockquote>
<p><em>Commented</em> script (works on Windows 10, Python 3.8.6):</p>
<pre><code># stage #1: read raw data from a url
from urllib.request import urlopen
import gzip
url = "http://export.arxiv.org/e-print/supr-con/9608001"
with urlopen(url) as response:
rawdata = response.read()
# stage #2: detect raw data type by its signature
print("file signature", rawdata[0:2])
import magic
print( magic.from_buffer(rawdata[0:1024]))
# stage #3: decompress raw data and write to a file
data = gzip.decompress(rawdata)
filename = 'test.tex'
file_ = open(filename, 'wb')
file_.write(data)
file_.close()
# stage #4: detect encoding of the data ( == encoding of the written file)
import chardet
print( chardet.detect(data))
</code></pre>
<p><strong>Result</strong>: <code>.\SO\68307124.py</code></p>
<pre><code>file signature b'\x1f\x8b'
gzip compressed data, was "9608001.tex", last modified: Thu Aug 8 04:57:44 1996, max compression, from Unix
{'encoding': 'ascii', 'confidence': 1.0, 'language': ''}
</code></pre> | 2021-07-08 20:14:13.410000+00:00 | 2021-07-08 20:14:13.410000+00:00 | null | null | 68,307,124 | <p>I am trying to download a file and write it to disk, but somehow I am lost in encoding decoding land.</p>
<pre><code>from urllib.request import urlopen
url = "http://export.arxiv.org/e-print/supr-con/9608001"
with urlopen(url) as response:
data = response.read()
filename = 'test.txt'
file_ = open(filename, 'wb')
file_.write(data)
file_.close()
</code></pre>
<p>Here data is a byte string. If I check the file I find a bunch of strange characters. I tried</p>
<pre><code>import chardet
the_encoding = chardet.detect(data)['encoding']
</code></pre>
<p>but this results in None. So I don't really know how the data I downloaded is encoded?</p>
<p>If I just type "http://export.arxiv.org/e-print/supr-con/9608001" into the browser, it downloads a file that I can view with a text editor and it's a perfectly fine .tex file.</p> | 2021-07-08 18:40:27.010000+00:00 | 2021-07-08 20:14:13.410000+00:00 | 2021-07-08 18:53:55.690000+00:00 | python|decode|encode | ['https://github.com/ahupp/python-magic'] | 1 |
12,019,288 | <p>This is an open source code for calculating Feynman integrals in MATLAB: <a href="http://arxiv.org/pdf/1205.6872v1.pdf" rel="nofollow">http://arxiv.org/pdf/1205.6872v1.pdf</a> which can be run on any ordinary CPU and much faster on a GPU.</p>
<p>Since it only uses extremely efficient built-in MATLAB functions which are compiled to machine code, it's not expected to be significantly slower than FORTRAN or C (keeping in mind that the computational cost of calculating Feynman integrals scales exponentially with respect to the number of time steps, meaning that FORTRAN, C, and MATLAB will all be slow in many cases, and the differences between them will be much smaller than the difference between taking 12 time steps and 13 time steps). </p>
<p>If you run this MATLAB code on a GPU it will in fact be faster than the FORTRAN or C implementation (only a CUDA FORTRAN or CUDA C code will be able to compare).</p>
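<p>For readers who find the Fortran listing in the question hard to follow, here is a rough, untested Python/NumPy transliteration of its Metropolis loop (the array size of 100 and the 250000 sweeps are taken from that listing); porting it line by line to MATLAB should then be straightforward:</p>
<pre><code>import numpy as np

def energy(path):
    # same discretised action as the Fortran energy() function
    return np.sum((path[1:] - path[:-1]) ** 2 + path[:-1] ** 2)

path = np.zeros(100)
prob = np.zeros(100)
old_e = energy(path)
for _ in range(250000):
    element = np.random.randint(100)            # pick a random time slice
    change = (np.random.rand() - 0.5) * 2.0
    path[element] += change
    new_e = energy(path)
    # Metropolis step: undo the move if it is rejected
    if new_e > old_e and np.exp(-(new_e - old_e)) < np.random.rand():
        path[element] -= change
    for x in path:                              # accumulate the position histogram
        prob[int(np.clip(x * 10 + 50, 0, 99))] += 1   # clip added to stay in bounds
    old_e = new_e                               # mirrors the original, which also updates oldE after a rejected move
</code></pre>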
<p>If you have more questions about this code you can email the author at [email protected]</p> | 2012-08-18 14:17:04.913000+00:00 | 2012-08-18 14:17:04.913000+00:00 | null | null | 6,237,083 | <p>I know that doing Feynman path Integral on Matlab is time consuming compare to Fortran or C.</p>
<p>However, does someone have a Matlab code for the harmonic oscillator via path integral?
I didn't manage to find any on the web (and even on Matlab forum).</p>
<p>Below is a Fortran code which I don't know how to translate to Matlab (I am a novice).
Thanks, Joni</p>
<pre><code>! qmc.f90 : Feynman path integral for ground state wave function
Program qmc
Implicit none
Integer :: i,j , max , element , prop ( 100 )
Real *8 :: change , ranDom , energy , newE , oldE , out , path ( 100 )
max = 250000
open ( 9 , FILE = 'qmc.dat' , Status = 'Unknown' )
! initial path and probability
Do j = 1 , 100
path (j) = 0.0
prop (j) = 0
End Do
! find energy of initial path
oldE = energy(path , 100)
! pick random element , change by random
Do i = 1 , max
element = ranDom ( )*100 + 1
change = ((ranDom() - 0.5)*2)
path (element) = path(element) + change
newE = energy ( path , 100) ! find new energy
! Metropolis algorithm
If ((newE > oldE) .AND. (exp( - newE + oldE ) < ranDom ())) then
path (element) = path (element) - change
EndIf
! add up probabilities
Do j = 1 , 100
element = path(j)*10 + 50
prop (element) = prop(element) + 1
End Do
oldE = newE
End Do
! write output data to file
Do j = 1 , 100
out = prop(j)
write (9 , *) j - 50 , out/max
End Do
close (9)
Stop 'data saved in qmc.dat'
End Program qmc
! Function calculates energy of the system
Function energy ( array , max )
Implicit none
Integer :: i , max
Real*8 :: energy , array (max)
energy = 0
Do i = 1 , (max - 1)
energy = energy + (array(i+ 1) - array(i))**2 + array(i)**2
End Do
Return
End
</code></pre> | 2011-06-04 13:46:04.917000+00:00 | 2012-08-18 14:17:04.913000+00:00 | 2011-06-04 20:32:29.933000+00:00 | matlab|path|integral | ['http://arxiv.org/pdf/1205.6872v1.pdf'] | 1 |
43,680,803 | <p>A Seq2Seq model is by definition not suitable for a task like this. As the name implies, it converts a sequence of inputs (the words in a sentence) to a sequence of labels (the parts of speech of the words). In your case, you are looking for a single label per sample, not a sequence of them. </p>
<p>Fortunately, you have all you need for this already, as you only need the outputs or states of the encoder (the RNN). </p>
<p>The simplest way to create a classifier using this is to use the final state of the RNN. Add a fully connected layer on top of this with shape [n_hidden, n_classes]. On this you can train a softmax layer and loss which predicts the final category. </p>
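<p>A minimal sketch of that idea, reusing the <code>states</code>, <code>n_hidden</code> and <code>n_classes</code> variables from the question's code (<code>y_true</code> is a hypothetical placeholder for the one-hot labels):</p>
<pre><code># classifier head on top of the bidirectional encoder, no decoder needed
final_state = tf.concat([states[0].h, states[1].h], axis=1)  # [batch_size, 2*n_hidden]
logits = tf.layers.dense(final_state, n_classes)
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits))
train_op = tf.train.AdamOptimizer().minimize(loss)
</code></pre>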
<p>In principle, this does not include an attention mechanism. However, if you want to include one, it can be done by weighing each of the outputs of the RNN by a learned vector and then taking the sum. However, this is not guaranteed to improve the results. For further reference, <a href="https://arxiv.org/pdf/1606.02601.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1606.02601.pdf</a> implements this type of attention mechanism if I'm not mistaken. </p> | 2017-04-28 12:56:21.483000+00:00 | 2017-04-28 12:56:21.483000+00:00 | null | null | 43,656,938 | <p>I am trying to build a bidirectional RNN with attention mechanism for sequence classification. I am having some issues understanding the helper function. I have seen that the one used for training needs the decoder inputs, but as I want a single label from the whole sequence, I don't know exactly what input should I give here. This is the structure that I have built so far:</p>
<pre><code># Encoder LSTM cells
lstm_fw_cell = rnn.BasicLSTMCell(n_hidden)
lstm_bw_cell = rnn.BasicLSTMCell(n_hidden)
# Bidirectional RNN
outputs, states = tf.nn.bidirectional_dynamic_rnn(lstm_fw_cell,
lstm_bw_cell, inputs=x,
sequence_length=seq_len, dtype=tf.float32)
# Concatenate forward and backward outputs
encoder_outputs = tf.concat(outputs,2)
# Decoder LSTM cell
decoder_cell = rnn.BasicLSTMCell(n_hidden)
# Attention mechanism
attention_mechanism = tf.contrib.seq2seq.LuongAttention(n_hidden, encoder_outputs)
attn_cell = tf.contrib.seq2seq.AttentionWrapper(decoder_cell,
        attention_mechanism, attention_size=n_hidden,
        name="attention_init")
# Initial attention
attn_zero = attn_cell.zero_state(batch_size=tf.shape(x)[0], dtype=tf.float32)
init_state = attn_zero.clone(cell_state=states[0])
# Helper function
helper = tf.contrib.seq2seq.TrainingHelper(inputs = ???)
# Decoding
my_decoder = tf.contrib.seq2seq.BasicDecoder(cell=attn_cell,
helper=helper,
initial_state=init_state)
decoder_outputs, decoder_states = tf.contrib.seq2seq.dynamic_decode(my_decoder)
</code></pre>
<p>My input is a sequence [batch_size,sequence_length,n_features] and my output is a single vector with N possible classes [batch_size,n_classes].</p>
<p>Do you know what am I missing here or if it is possible to use seq2seq for sequence classification?</p> | 2017-04-27 11:49:41.813000+00:00 | 2017-04-28 12:56:21.483000+00:00 | null | tensorflow|classification|sequence|recurrent-neural-network|attention-model | ['https://arxiv.org/pdf/1606.02601.pdf'] | 1 |
35,129,590 | <p>Feel free to choose any method to calculate mincuts computationally from graphs. I list below a simple example, relevant research, models, storage methods, lemmas, theorems -- and something about visualisation and computing on which this thread is focused on. Next step after the simple example is the parametrisation of the graphical model and then computation. </p>
<p><strong>Simple example with Python and <a href="https://stackoverflow.com/questions/533905/get-the-cartesian-product-of-a-series-of-lists-in-python">Cartesian product</a></strong></p>
<p>Assume a 3x2x2 graph with 3 parallel branches: the 1st branch with 3 elements, the 2nd branch with 2 elements and the last branch with 2 elements. The minimum cuts are <em>{{1,4,6},{1,4,7},{1,5,6},{1,5,7},{2,4,6},{2,4,7},{2,5,6},{2,5,7},{3,4,6},{3,4,7},{3,5,6},{3,5,7}}</em>. </p>
<p><a href="https://i.stack.imgur.com/dCqMa.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dCqMa.jpg" alt="enter image description here"></a></p>
<pre><code>import itertools
somelists = [
[1, 2, 3],
[4, 5],
[6, 7]
]
print(list(itertools.product(*somelists)))
</code></pre>
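<p>If the graph is modelled as a flow network, a library call may be the quickest computational route; here is a small, hypothetical <code>networkx</code> sketch (edges and capacities are made up, and an s-t <em>vertex</em> cut would additionally need the usual node-splitting transformation):</p>
<pre><code>import networkx as nx

G = nx.DiGraph()
G.add_edge('source', 'a', capacity=1.0)
G.add_edge('a', 'sink', capacity=1.0)
G.add_edge('source', 'b', capacity=1.0)
G.add_edge('b', 'sink', capacity=1.0)

cut_value, (reachable, non_reachable) = nx.minimum_cut(G, 'source', 'sink')
print(cut_value, reachable, non_reachable)
</code></pre>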
<p><strong>Computing</strong></p>
<ul>
<li><p>General question <a href="https://stackoverflow.com/questions/31734234/how-to-analyse-a-sparse-adjacency-matrix">How to analyse a sparse adjacency matrix?</a> but relevant because graphs often are sparse.</p></li>
<li><p>Graph Complement useful in finding mincuts: a trial with Mathematica below but notice that <a href="https://mathematica.stackexchange.com/a/105403/2000">some bugs</a> in Mathematica found in at least 10.1 related to mincut and VertexConnectivity commands. </p></li>
</ul>
<p><strong>Mathematics</strong></p>
<ul>
<li><p><em>[Graphical parametrization problem in computational algebra for cuts]</em> Cut ideals such as <a href="http://arxiv.org/abs/1209.6372" rel="nofollow noreferrer">Betti numbers of cut ideals of trees</a></p></li>
<li><p><em>[Hard graphical parametrization problem of simple graphs]</em> <a href="https://math.stackexchange.com/questions/1657294/smallest-generating-set-of-simple-graphs-for-fixed-number-of-vertices-and-fixed">Smallest generating set of simple graphs for fixed number of vertices and fixed number of vertex cuts?</a></p></li>
<li><p>The mincuts have crossing criteria, <a href="https://www.cs.elte.hu/egres/qp/egresqp-09-03.pdf" rel="nofollow noreferrer">cactus storage method</a> and the current research outlined in the <a href="https://www.informatik.uni-augsburg.de/thi/personen/kammer/Graph_Connectivity.pdf" rel="nofollow noreferrer">Graph Connectivity paper (2004)</a>.</p></li>
<li><p>Many relevant terms for minimum cuts such as vertex separators, bifurcators, vertex cuts, edge cuts, graph boundaries -- mathematical terms outlined <a href="https://math.stackexchange.com/questions/1600059/set-of-the-vertex-sets-to-make-connected-graph-into-disjoint-sets-of-vertices">here</a>. </p></li>
</ul>
<p><strong>Visualising series-parallel graphs with sink and source</strong></p>
<ul>
<li><p><a href="https://tex.stackexchange.com/questions/290467/tikz-package-to-draw-series-parallel-graphs-with-many-vars-in-series-parallel">Tikz package to draw series parallel graphs with many vars in series parallel</a></p></li>
<li><p>Quality material contain <em>"Graphical models"</em> by Lauritzen Steffen, <em>"Graphical models for R"</em> and Algebraic geometric material by Sturmfels.</p></li>
</ul> | 2016-02-01 11:38:33.970000+00:00 | 2016-02-15 23:21:09.750000+00:00 | 2017-05-23 11:46:46.347000+00:00 | null | 35,116,470 | <p>Assume an undirected system, graph where you want to find out elements in their permutations that makes the system disjoint such that each set is minimum. This graph has two special nodes: sink and source that cannot be in the elements.</p>
<p><em>How can you computationally find the minimum cuts given some graph G=(V,E)?</em></p> | 2016-01-31 16:51:27.780000+00:00 | 2016-02-15 23:22:05.077000+00:00 | 2016-02-15 23:22:05.077000+00:00 | graph|min|discrete-mathematics|symbolic-math|symbolic-computation | ['https://stackoverflow.com/questions/533905/get-the-cartesian-product-of-a-series-of-lists-in-python', 'https://i.stack.imgur.com/dCqMa.jpg', 'https://stackoverflow.com/questions/31734234/how-to-analyse-a-sparse-adjacency-matrix', 'https://mathematica.stackexchange.com/a/105403/2000', 'http://arxiv.org/abs/1209.6372', 'https://math.stackexchange.com/questions/1657294/smallest-generating-set-of-simple-graphs-for-fixed-number-of-vertices-and-fixed', 'https://www.cs.elte.hu/egres/qp/egresqp-09-03.pdf', 'https://www.informatik.uni-augsburg.de/thi/personen/kammer/Graph_Connectivity.pdf', 'https://math.stackexchange.com/questions/1600059/set-of-the-vertex-sets-to-make-connected-graph-into-disjoint-sets-of-vertices', 'https://tex.stackexchange.com/questions/290467/tikz-package-to-draw-series-parallel-graphs-with-many-vars-in-series-parallel'] | 10 |
32,027,013 | <p>This is the response of the website, which doesn't allow empty user agents:</p>
<pre><code>HTTP/1.1 403 Forbidden
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head><title>403 Forbidden</title></head>
<body>
<h1>Access Denied</h1>
<p>Sadly, your client does not supply a proper User-Agent,
and is consequently excluded.</p>
<p>We have an inordinate number of problems with automated scripts
which do not supply a User-Agent, and violate the automated access
guidelines posted at arxiv.org
-- hence we now exclude them all.</p>
<p>(In rare cases, we have found that accesses through proxy servers
strip the User-Agent information. If this is the case, you need to contact
the administrator of your proxy server to get it fixed.)</p>
<p>If you believe this determination to be in error, see
<b>http://arxiv.org/denied.html</b> for additional information.</p>
</body>
</html>
</code></pre>
<p>If you use for example the user agent "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko" in your request, it will work:</p>
<pre><code>$options = array(
'http'=>array(
'method'=>"GET",
'header'=>"User-Agent: Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko\r\n"
)
);
$context = stream_context_create($options);
$str = file_get_contents($url, false, $context);
</code></pre> | 2015-08-15 16:53:15.243000+00:00 | 2015-08-15 16:53:15.243000+00:00 | null | null | 32,026,804 | <p>I am trying to extract the title and abstract from arXiv pages, for example <a href="http://arxiv.org/abs/1207.0102" rel="nofollow">http://arxiv.org/abs/1207.0102</a>, my code currently looks like</p>
<pre><code>function get_title($url){
$str = file_get_contents($url);
if(strlen($str)>0){
$str = trim(preg_replace('/\s+/', ' ', $str)); // supports line breaks inside <title>
preg_match("/\<title\>(.*)\<\/title\>/i",$str,$title); // ignore case
return $title[1];
}
}
echo get_title("http://arxiv.org/abs/1207.0102");
</code></pre>
<p>When I run this code, this error comes up</p>
<blockquote>
<p>Warning: file_get_contents(<a href="http://arxiv.org/abs/1207.0102" rel="nofollow">http://arxiv.org/abs/1207.0102</a>): failed to
open stream: HTTP request failed! HTTP/1.1 403 Forbidden in
C:\wamp\www\mysite\Index.php</p>
</blockquote>
<p>This problem doesn't happen when I try different urls for example <a href="http://www.washingtontimes.com/" rel="nofollow">http://www.washingtontimes.com/</a>.</p>
<p>Does anyone know why this happens?</p>
<p>Also, is it possible to extract the abstract from this webpage?</p> | 2015-08-15 16:29:41.930000+00:00 | 2015-08-15 16:53:15.243000+00:00 | null | php|string|url|meta | [] | 0 |
69,255,185 | <p>It would be better to use <a href="https://www.cs.princeton.edu/courses/archive/spring20/cos598C/lectures/lec3-contextualized-word-embeddings.pdf" rel="nofollow noreferrer">contextual word embeddings</a>(vector representations) for words.</p>
<p>Here is an approach to sentence similarities by pairwise word similarities: <a href="https://github.com/Tiiiger/bert_score" rel="nofollow noreferrer">BERTScore</a>.</p>
<p><a href="https://i.stack.imgur.com/gKEhB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gKEhB.png" alt="enter image description here" /></a></p>
<p>You can check the math <a href="https://arxiv.org/pdf/1904.09675.pdf#page=4" rel="nofollow noreferrer">here</a>.</p> | 2021-09-20 13:22:07.893000+00:00 | 2021-10-10 06:06:39.790000+00:00 | 2021-10-10 06:06:39.790000+00:00 | null | 28,163,289 | <p>Assuming that I have a word similarity score for each pair of words in two sentences, what is a decent approach to determining the overall sentence similarity from those scores?</p>
<p>The word scores are calculated using cosine similarity from vectors representing each word. </p>
<p>Now that I have individual word scores, is it too naive to sum the individual word scores and divide by the total word count of both sentences to get a score for the two sentences?</p>
<p>I've read about further constructing vectors to represent the sentences, using the word scores, and then again using cosine similarity to compare the sentences. But I'm not familiar with how to construct sentence vectors from the existing word scores. Nor am I aware of what the tradeoffs are compared with the naive approach described above, which at the very least, I can easily comprehend. :).</p>
<p>Any insights are greatly appreciated.</p>
<p>Thanks. </p> | 2015-01-27 04:31:53.013000+00:00 | 2021-10-10 06:06:39.790000+00:00 | 2015-01-29 22:14:02.210000+00:00 | wordnet|cosine-similarity|word2vec|sentence-similarity | ['https://www.cs.princeton.edu/courses/archive/spring20/cos598C/lectures/lec3-contextualized-word-embeddings.pdf', 'https://github.com/Tiiiger/bert_score', 'https://i.stack.imgur.com/gKEhB.png', 'https://arxiv.org/pdf/1904.09675.pdf#page=4'] | 4 |
66,221,212 | <p>With a dilated convolution layer you simply skip the explicit downsampling/upsampling computation:</p>
<p>For example, a dilated convolution with</p>
<ul>
<li>a filter kernel k×k = 3×3, dilation rate r = 2, stride s = 1 and no padding</li>
</ul>
<p><strong>is comparable to</strong></p>
<ul>
<li>2x downsampling followed by 3x3 convolution followed by 2x upsampling</li>
</ul>
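<p>In code the difference is a single argument; a hedged Keras sketch of the two variants being compared (layer sizes are arbitrary):</p>
<pre><code>from tensorflow.keras import layers

# one dilated convolution, full resolution kept throughout
dilated = layers.Conv2D(64, 3, dilation_rate=2, padding='same')

# the rough "equivalent" pipeline it replaces
pooled = layers.MaxPooling2D(2)
conv = layers.Conv2D(64, 3, padding='same')
upsampled = layers.UpSampling2D(2)
</code></pre>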
<p>For further reference look at the amazing paper from Vincent Dumoulin, Francesco Visin:
<a href="https://arxiv.org/abs/1603.07285" rel="nofollow noreferrer">A guide to convolution arithmetic for deep learning</a></p>
<p>Also on the GitHub page of this paper there is an animation of how dilated convolution works:
<a href="https://github.com/vdumoulin/conv_arithmetic" rel="nofollow noreferrer">https://github.com/vdumoulin/conv_arithmetic</a></p> | 2021-02-16 08:49:50.550000+00:00 | 2021-02-16 08:49:50.550000+00:00 | null | null | 66,221,032 | <p>I have been studying UNet inspired architecture ENet and I think I follow the basic concepts. The ground-rock of efficiency of ENet is dilated convolution (apart other things). I understand the preserving spatial resolution, how it is computed and so on, however I can't understand why it is computationally and memory-wise less expensive than e.g. max-pooling.</p>
<p>ENet: <a href="https://arxiv.org/pdf/1606.02147.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1606.02147.pdf</a></p> | 2021-02-16 08:36:57.040000+00:00 | 2022-08-04 09:26:20.193000+00:00 | 2022-08-04 09:26:20.193000+00:00 | machine-learning|conv-neural-network|convolution|semantic-segmentation|unet-neural-network | ['https://arxiv.org/abs/1603.07285', 'https://github.com/vdumoulin/conv_arithmetic'] | 2 |
47,981,960 | <p>To follow up on @lemm-ras's <a href="https://stackoverflow.com/a/38907274/957657">answer</a>, <a href="https://arxiv.org/abs/1511.03771" rel="nofollow noreferrer">this paper</a> by Talathi and Vartak shows that the initial value of the recurrent weight matrix will strongly influence the performance of a recurrent neural network with reLU activation. Specifically, the authors demonstrate that a network of recurrent units with ReLU activation will perform best when the recurrent weight matrix is initialized so that it is positive definite, with the largest eigenvalue equal to one and all other eignenvalues less than one. Their explanation is that the way the network weights evolve over time is dependent on the initial condition of the network (shown in Figure 2 of Talathi and Vartak), and can lead to a few different cases:</p>
<p>Case 1: If all eigenvalues are one, then the network starts at a stable condition and does not evolve over time (Figure 2a)</p>
<p>Case 2: If all eigenvalues are less than one the network is attracted toward the origin and the network will always evolve toward a recurrent weight matrix of zero (Figure 2b). </p>
<p>Case 3: If any eigenvalues are greater than one, the network does not have a stable attractor, and will "blow up" (Figure 2d). </p>
<p>Case 4: If the recurrent weight matrix has one eigenvalue of one and the rest are less than one, then there can be a stable manifold that the network will evolve toward, and it can reach a stable, non-zero solution (Figure 2c). I don't know if this is guaranteed to be true for all problems, so I believe it'd be best to think of this as a necessary but not proven to be sufficient condition.</p>
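<p>One way to construct such an initial matrix with NumPy (my own sketch, not necessarily the exact recipe from the paper) is to build a positive semi-definite matrix and normalise it by its largest eigenvalue:</p>
<pre><code>import numpy as np

n_hidden = 10                                  # matches the question's 10 hidden nodes
M = np.random.randn(n_hidden, n_hidden)
A = M @ M.T / n_hidden                         # positive (semi-)definite
W_rec = A / np.max(np.linalg.eigvalsh(A))      # largest eigenvalue 1, all others below 1
</code></pre>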
<p>From the initial question, it sounds like @rksh's problem is the second case, and the network is attracted toward zero. Try initializing the weight matrix as Talathi and Vartak suggest and see if that addresses the problem.</p> | 2017-12-26 17:45:36.010000+00:00 | 2017-12-26 17:45:36.010000+00:00 | null | null | 38,901,684 | <p>I am new to machine learning. I've read that the ReLU function is better than a sigmoid function for a recurrent neural network because of the vanishing gradient problem.</p>
<p>I'm trying to implement a very basic recurrent neural network with 3 input nodes, 10 hidden nodes and 3 output nodes.</p>
<p>There is the ReLU function at both input and hidden nodes and the softmax function for the output nodes.</p>
<p>However when I'm using the ReLU function after few epochs (less than 10) either the error gets to 0 or the error gets to infinity depending on whether the weight changes are added or subtracted from the original weights.</p>
<pre><code>weight = weight + gradient_descent #weights hit infinity
weight = weight - gradient_descent #weights become 0
</code></pre>
<p>And also because it hits infinity it gives the following error,</p>
<pre><code>RuntimeWarning: invalid value encountered in maximum
return np.maximum(x, 0)
</code></pre>
<p>However when I implement the sigmoid function the error nicely comes down. But because this is a simple example that is fine but if I use it on a bigger problem I am afraid I will hit with the vanishing gradient problem.</p>
<p>Is this caused by the small number of hidden nodes, how can I solve this issue? If you need the code sample please comment, not posting the code because it's too long.</p>
<p>Thank you.</p> | 2016-08-11 16:25:51.853000+00:00 | 2021-09-17 11:05:18.723000+00:00 | null | python|machine-learning|neural-network|artificial-intelligence | ['https://stackoverflow.com/a/38907274/957657', 'https://arxiv.org/abs/1511.03771'] | 2 |
63,129,877 | <p><code>TRAIN_ROI_PER_IMAGE</code> - means how many Region of Interest or <strong>ROI</strong> proposals will be fed to the mask head or the classifier.</p>
<p><img src="https://lilianweng.github.io/lil-log/assets/images/faster-RCNN.png" alt="" />
img_src : <a href="https://arxiv.org/pdf/1506.01497.pdf" rel="nofollow noreferrer">Ren et al., 2016</a></p>
<p>Concretely, This setting is like the <code>batch size</code> for the second stage of the model.</p> | 2020-07-28 08:21:26.363000+00:00 | 2020-07-28 08:21:26.363000+00:00 | null | null | 62,716,745 | <p>I have been trying to train a breast cancer segmentation model with mask rcnn. I have been able to understand almost all the hyperparameter but this one variable <code>TRAIN_ROI_PER_IMAGE</code> I just can't seem to wrap my head around it and there's little to no documentation available for it.
If anyone could please explain it to me, it would be super helpful for my research.</p> | 2020-07-03 13:45:33.900000+00:00 | 2020-07-28 08:21:26.363000+00:00 | null | tensorflow|keras|deep-learning | ['https://arxiv.org/pdf/1506.01497.pdf'] | 1 |
7,137,997 | <p>It's not lying. You should set the WebClient's encoding before calling DownloadString.</p>
<pre><code>using(WebClient webClient = new WebClient())
{
webClient.Encoding = Encoding.UTF8;
string s = webClient.DownloadString("http://export.arxiv.org/api/query?search_query=au:Freidel_L*&start=0&max_results=20");
}
</code></pre>
<p>As for why your alternative isn't working, it's because the usage is incorrect. It should be:</p>
<pre><code>System.Text.Encoding.UTF8.GetString()
</code></pre> | 2011-08-21 11:31:51.480000+00:00 | 2015-05-05 07:33:14.217000+00:00 | 2015-05-05 07:33:14.217000+00:00 | null | 7,137,165 | <p>The following code:</p>
<pre><code>var text = (new WebClient()).DownloadString("http://export.arxiv.org/api/query?search_query=au:Freidel_L*&start=0&max_results=20"));
</code></pre>
<p>results in a variable <code>text</code> that contains, among many other things, the string</p>
<blockquote>
<p>"$κ$-Minkowski space, scalar field, and the issue of Lorentz invariance"</p>
</blockquote>
<p>However, when I visit that URL in Firefox, I get</p>
<blockquote>
<p>$κ$-Minkowski space, scalar field, and the issue of Lorentz invariance</p>
</blockquote>
<p>which is actually correct. I also tried</p>
<pre><code>var data = (new WebClient()).DownloadData("http://export.arxiv.org/api/query?search_query=au:Freidel_L*&start=0&max_results=20");
var text = System.Text.UTF8Encoding.Default.GetString(data);
</code></pre>
<p>but this gave the same problem.</p>
<p>I'm not sure where the fault lies here. Is the feed lying about being UTF8-encoded, and the browser is smart enough to figure that out, but not <code>WebClient</code>? Is the feed properly UTF8-encoded, but <code>WebClient</code> is failing in some other way? What can I do to mitigate this?</p> | 2011-08-21 08:10:55.087000+00:00 | 2015-05-05 07:33:14.217000+00:00 | null | .net|unicode|utf-8|webclient | [] | 0 |
69,546,880 | <p>A Question Answering bot is basically a DL model that creates an answer by <em>extracting part of the context</em> (in your case what is called <code>text</code>). This means that the goal of the QAbot is to identify the <strong>start</strong> and the <strong>end</strong> of the answer.</p>
<hr />
<p>Basic functioning of a QAbot:</p>
<p>First of all, every word of the question and context is tokenized. This means it is (possibly divided into characters/subwords and then) converted into a number. It really depends on the type of tokenizer (which means it depends on the model you are using, since you will be using the same tokenizer - it's what the third line of your code is doing). I suggest <a href="https://huggingface.co/course/chapter2/4?fw=pt" rel="nofollow noreferrer">this very useful guide</a>.</p>
<p>Then, the tokenized <code>question + text</code> are passed into the model, which performs its internal operations. Remember when I said at the beginning that the model will identify the <code>start</code> and the <code>end</code> of the answer? Well, it does so by calculating for every token of the <code>question + text</code> the probability that that particular token is the start of the answer. These probabilities are the softmaxed version of the <code>start_logits</code>. After that, the same operations are performed for the end token.</p>
<p>So, this is what <code>start_scores</code> and <code>end_scores</code> are: the pre-softmax scores that every token is the start or the end of the answer, respectively.</p>
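<p>To make that concrete, the predicted answer span can be decoded from those tensors like this (a sketch that reuses <code>inputs</code>, <code>start_scores</code>, <code>end_scores</code> and <code>tokenizer</code> from your snippet):</p>
<pre><code>answer_start = torch.argmax(start_scores, dim=1).item()
answer_end = torch.argmax(end_scores, dim=1).item()
answer_tokens = inputs['input_ids'][0][answer_start:answer_end + 1]
print(tokenizer.decode(answer_tokens))
</code></pre>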
<hr />
<p>So, what are <code>start_positions</code> and <code>end_positions</code>?</p>
<p>As stated <a href="https://huggingface.co/transformers/model_doc/roberta.html#robertaforquestionanswering" rel="nofollow noreferrer">here</a>, they are:</p>
<blockquote>
<p><code>start_positions</code> (<code>torch</code>.<code>LongTensor</code> of shape (<code>batch_size</code>,), optional) –
Labels for position (index) of the start of the labelled span for
computing the token classification loss. Positions are clamped to the
length of the sequence (<code>sequence_length</code>). Position outside of the
sequence are not taken into account for computing the loss.</p>
<p><code>end_positions</code> (<code>torch</code>.<code>LongTensor</code> of shape (<code>batch_size</code>,), optional) –
Labels for position (index) of the end of the labelled span for
computing the token classification loss. Positions are clamped to the
length of the sequence (<code>sequence_length</code>). Position outside of the
sequence are not taken into account for computing the loss.</p>
</blockquote>
<hr />
<p>Moreover, the model you are using (<code>roberta-base</code>, see <a href="https://huggingface.co/roberta-base" rel="nofollow noreferrer">the model on the HuggingFace repository</a> and the <a href="https://arxiv.org/abs/1907.11692" rel="nofollow noreferrer">RoBERTa official paper</a>) has <strong>NOT</strong> been fine-tuned for QuestionAnswering. It is "just" a model trained by using MaskedLanguageModeling, which means that the model has a general understanding of the English language, but it is not suitable for question answering. You can use it of course, but it would probably give non-optimal results.</p>
<p>I suggest you use the same model, in the version specifically fine-tuned on QuestionAnswering: <code>deepset/roberta-base-squad2</code>, see <a href="https://huggingface.co/deepset/roberta-base-squad2?context=Jim%20Henson%20was%20a%20nice%20puppet&question=Who%20was%20Jim%20Henson%3F" rel="nofollow noreferrer">it on HuggingFace</a>.</p>
<p>In practical terms, you have to replace the lines where you load the model and the tokenizer with:</p>
<pre><code>tokenizer = RobertaTokenizer.from_pretrained('deepset/roberta-base-squad2')
model = RobertaForQuestionAnswering.from_pretrained('deepset/roberta-base-squad2')
</code></pre>
<p>This will give much more accurate results.</p>
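<p>Alternatively, the <code>pipeline</code> API wraps the same model together with all the pre- and post-processing:</p>
<pre><code>from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
print(qa(question="Who was Jim Henson?", context="Jim Henson was a nice puppet"))
</code></pre>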
<p>Bonus read: <a href="https://www.analyticsvidhya.com/blog/2020/07/transfer-learning-for-nlp-fine-tuning-bert-for-text-classification/" rel="nofollow noreferrer">what fine-tuning is and how it works</a></p> | 2021-10-12 20:52:15.083000+00:00 | 2021-10-13 09:08:58.860000+00:00 | 2021-10-13 09:08:58.860000+00:00 | null | 69,544,570 | <p>I'm trying to fine-tune "RobertaForQuestionAnswering" on my custom dataset and I'm confused about the input params it takes. Here's the sample code.</p>
<pre><code>>>> from transformers import RobertaTokenizer, RobertaForQuestionAnswering
>>> import torch
>>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>>> model = RobertaForQuestionAnswering.from_pretrained('roberta-base')
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors='pt')
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([3])
>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> loss = outputs.loss
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits
</code></pre>
<p>I'm not able to understand variables <strong>start_positions</strong> & <strong>end_positions</strong> which are being given in the model as input and variables <strong>start_scores</strong> & <strong>end_scores</strong> that are being generated.</p> | 2021-10-12 17:18:29.850000+00:00 | 2021-10-13 09:08:58.860000+00:00 | 2021-10-13 07:41:24.243000+00:00 | nlp|huggingface-transformers|bert-language-model|nlp-question-answering|roberta-language-model | ['https://huggingface.co/course/chapter2/4?fw=pt', 'https://huggingface.co/transformers/model_doc/roberta.html#robertaforquestionanswering', 'https://huggingface.co/roberta-base', 'https://arxiv.org/abs/1907.11692', 'https://huggingface.co/deepset/roberta-base-squad2?context=Jim%20Henson%20was%20a%20nice%20puppet&question=Who%20was%20Jim%20Henson%3F', 'https://www.analyticsvidhya.com/blog/2020/07/transfer-learning-for-nlp-fine-tuning-bert-for-text-classification/'] | 6 |
41,383,119 | <p>From the original <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">batch normalization paper</a> by Ioffe & Szegedy: "we make sure that the transformation inserted in the network can represent the identity transform." Without the Scale layer after the BatchNorm layer, that would not be the case because the Caffe BatchNorm layer has no learnable parameters.</p>
<p>I learned this from the <a href="https://github.com/KaimingHe/deep-residual-networks" rel="nofollow noreferrer">Deep Residual Networks git repo</a>; see item 6 under disclaimers and known issues there.</p> | 2016-12-29 16:00:44.750000+00:00 | 2016-12-29 16:00:44.750000+00:00 | null | null | 41,351,390 | <p>I am using caffe , in detail pycaffe, to create my neuronal network. I noticed that I have to use BatchNormLayer to get a positive result. I am using the Kappa-Score as a result matrix.
I have now seen several different locations for the BatchNorm layers in my network. But I also came across the ScaleLayer, which is not in the Layer Catalogue but is often mentioned together with the BatchNorm layer.</p>
<p>Do you always need to put a ScaleLayer after a BatchNorm - Layer and what does it do?</p> | 2016-12-27 19:54:53.310000+00:00 | 2016-12-29 16:00:44.750000+00:00 | 2016-12-27 21:22:23.993000+00:00 | neural-network|deep-learning|caffe|pycaffe | ['https://arxiv.org/abs/1502.03167', 'https://github.com/KaimingHe/deep-residual-networks'] | 2 |
12,230,134 | <p>Quick googling shows a paper:</p>
<p><a href="http://projects.wizardlike.ca/attachments/18/FICS2010.pdf" rel="nofollow">Safe Recursion Revisited: Categorical Semantics and Type Systems for Lower Complexity</a></p>
<p>I also remember works by Japaridze on polynomial time arithmetics, see <a href="http://arxiv.org/abs/0902.2969" rel="nofollow">http://arxiv.org/abs/0902.2969</a></p>
<p>I think you can start from there and walk by references.</p> | 2012-09-01 18:07:59.973000+00:00 | 2012-09-01 18:07:59.973000+00:00 | null | null | 11,725,899 | <p>Category theory and abstract algebra deal with the way functions can be combined with other functions. Complexity theory deals with how hard a function is to compute. It's weird to me that I haven't seen anyone combine these fields of study, since they seem like such natural pairs. Has anyone done this before?</p>
<hr>
<p>As a motivating example, let's take a look at monoids. It's well known that if an operation is a monoid, then we can parallelize the operation.</p>
<p>For example in Haskell, we can trivially define that addition is a monoid over the integers like this:</p>
<pre><code>instance Monoid Int where
mempty = 0
mappend = (+)
</code></pre>
<p>Now if we want to compute the sum of 0 to 999, we could do it sequentially like:</p>
<pre><code>foldl1' (+) [0..999]
</code></pre>
<p>or we could do it in parallel</p>
<pre><code>mconcat [0..999] -- for simplicity of the code, I'm ignoring that this doesn't *actually* run in parallel
</code></pre>
<p>But parallelizing this monoid makes sense only because mappend runs in constant time. What if this weren't the case? Lists, for example, are monoids where mappend does not run in constant time (or space!). I'm guessing this is why there is no default parallel mconcat function in Haskell. The best implementation depends on the complexity of the monoid.</p>
<hr>
<p>It seems like there should be a convenient way to describe the differences between these two monoids. We should then be able to annotate our code with these differences and have programs automatically choose the best algorithms to use depending on a monoid's complexity.</p> | 2012-07-30 16:44:42.470000+00:00 | 2020-01-08 13:05:22.690000+00:00 | 2012-09-04 21:50:48.433000+00:00 | haskell|complexity-theory|category-theory|abstract-algebra | ['http://projects.wizardlike.ca/attachments/18/FICS2010.pdf', 'http://arxiv.org/abs/0902.2969'] | 2 |
46,204,997 | <p>This is not a direct answer to your question because product roadmap is not really something we can comment on. However, if you are worried about dying ReLU problem in H2O, why don't you use <code>ExpRectifier</code>, which stands for <strong>exponential linear unit (RLU)</strong>, which does not suffer dying ReLU problem. As a matter of fact, <a href="https://arxiv.org/pdf/1511.07289v5.pdf" rel="nofollow noreferrer">this paper</a> proves that ELU outperforms all ReLU variants. The only drawback is it is more computational heavy as it involves exponent in calculation. </p> | 2017-09-13 18:59:17.877000+00:00 | 2017-09-13 18:59:17.877000+00:00 | null | null | 46,167,613 | <p>Are there any plans to implement a leaky ReLU in the Deep Learning module of H2O? I am a beginner to neural nets, but in the limited amount of model building and parameter tuning, I have found the ReLUs to generalize better, and was wondering if even better performance might be obtained by using leaky ReLUs to avoid the dying ReLU problem.</p> | 2017-09-12 03:50:40.673000+00:00 | 2017-09-13 18:59:17.877000+00:00 | 2017-09-12 16:43:38.317000+00:00 | deep-learning|h2o|activation-function | ['https://arxiv.org/pdf/1511.07289v5.pdf'] | 1 |
73,042,652 | <p>If you only have collected data, but no way to interact with the environment then you are in what is called <strong>Offline RL</strong> scenario, which is an active area of research. It has its own pros and cons. The most naive approach can be to use behavioural cloning (so you treat dataset as a normal supervised learning problem and replicate the actions) - the problem is that this assumes data is already coming from good executions. The other way around is to run an RL algorithm with <strong>off policy corrections</strong> since the data is not now coming from your actual policy and thus a policy gradient would be biased etc. Overall - Offline RL is your keyword.</p>
<p>For further reading: <a href="https://arxiv.org/abs/2203.01387" rel="nofollow noreferrer">https://arxiv.org/abs/2203.01387</a></p> | 2022-07-19 19:42:19.270000+00:00 | 2022-07-19 19:42:19.270000+00:00 | null | null | 73,041,043 | <p>I am new to Reinforcement learning and I did several examples using the GYM environment. However, I knew and observed that Reinforcement should be trained on the real environment not on collected data like supervised learning, My question here, is this always true? I mean I have a specific dataset which is something like a recommendation system and I want the agent to be trained on it before I publish the agent in the real environment..
Is this possible?</p> | 2022-07-19 17:13:19.400000+00:00 | 2022-08-08 07:56:43.867000+00:00 | null | deep-learning|reinforcement-learning|openai-gym | ['https://arxiv.org/abs/2203.01387'] | 1 |
58,843,912 | <p>You are right. There is no direct association between memory (experience replay) and the performance of the model in episode reward. The Q-value in DQN is used to predict each action's expected reward in each step. The performance measure of how good your model was is the difference between the real reward and the expected reward (TD-error). </p>
<p>Using -1 for non-goal steps is a trick to help RL models choose the actions that can finish the episode quicker. This works because the Q-value is an action-value. At each step, the model predicts rewards for every possible move and the policy (usually greedy or epsilon-greedy) chooses the action with the most significant value. You can imagine that going back at one moment will result in 200 steps to finish the episode but going forward takes only 100 steps. The Q-value will be -200 (without discount) and -100 respectively. You might wonder how the model knows the value of each action; that comes from the repeated episodes and successive trial-and-error. The model was trained to minimise the difference between real reward and expected reward, aka TD-error. </p>
<p>In a randomly sampled experience replay, all experiences are sampled and deleted uniformly. However, in prioritized experience replay, you can reuse those experiences with a high estimated error. Usually, the priorities are proportional to the TD error, i.e. the difference between the real (target) reward and the current model's predicted Q-value. A larger priority means a more surprising experience, and it helps accelerate the training.</p>
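<p>A small sketch of proportional prioritization (treat the alpha exponent and the eps constant as typical values rather than prescriptions):</p>
<pre><code>import numpy as np

def sample_batch(td_errors, batch_size, alpha=0.6, eps=1e-6):
    # priority grows with the magnitude of the TD error
    priorities = (np.abs(td_errors) + eps) ** alpha
    probs = priorities / priorities.sum()
    return np.random.choice(len(td_errors), size=batch_size, p=probs)
</code></pre>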
<p>You can check the idea in <a href="https://arxiv.org/pdf/1511.05952.pdf" rel="nofollow noreferrer">Priority Experience Replay, Schaul et al., 2016</a> </p> | 2019-11-13 19:04:34.400000+00:00 | 2019-11-13 19:20:12.997000+00:00 | 2019-11-13 19:20:12.997000+00:00 | null | 54,371,272 | <p>Given that the OpenAI Gym environment <a href="https://github.com/openai/gym/blob/master/gym/envs/classic_control/mountain_car.py" rel="noreferrer">MountainCar-v0</a> ALWAYS returns -1.0 as a reward (even when goal is achieved), I don't understand how DQN with experience-replay converges, yet I know it does, because I have <a href="https://gist.github.com/keithmgould/f99296260b4a739ac6b523ad71e1ec90" rel="noreferrer">working code</a> that proves it. By working, I mean that when I train the agent, the agent quickly (within 300-500 episodes) learns how to solve the mountaincar problem. Below is an example from my trained agent.
<a href="https://i.stack.imgur.com/odrPU.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/odrPU.gif" alt="enter image description here"></a></p>
<p>It is my understanding that ultimately there needs to be a "sparse reward" that is found. Yet as far as I can see from the openAI Gym <a href="https://github.com/openai/gym/blob/master/gym/envs/classic_control/mountain_car.py" rel="noreferrer">code</a>, there is never any reward other than -1. It feels more like a "no reward" environment.</p>
<p>What almost answers my question, but in fact does not: when the task is completed quickly, the <strong>return</strong> (sum of rewards) of the episode is larger. So if the car never finds the flag, the return is -1000. If the car finds the flag quickly the return might be -200. The reason this does not answer my question is because with DQN and experience replay, those returns (-1000, -200) are never present in the experience replay memory. All the memory has are tuples of the form (state, action, reward, next_state), and of course remember that tuples are pulled from memory at random, not episode-by-episode.</p>
<p>Another element of this particular OpenAI Gym environment is that the Done state is returned on either of two occasions: hitting the flag (yay) or timing out after some number of steps (boo). However, the agent treats both the same, accepting the reward of -1. Thus as far as the tuples in memory are concerned, both events look identical from a reward standpoint.</p>
<p>So, I don't see anything in the memory that indicates that the episode was performed well.</p>
<p>And thus, I have no idea why this DQN code is working for MountainCar.</p> | 2019-01-25 18:53:23.047000+00:00 | 2020-06-21 22:31:46.680000+00:00 | 2019-01-26 15:14:20.887000+00:00 | machine-learning|keras|reinforcement-learning|openai-gym|q-learning | ['https://arxiv.org/pdf/1511.05952.pdf'] | 1 |
21,985,882 | <p>Depending on the nature of neurons, the output can be anything. The most popular neurons are linear, <a href="http://mathworld.wolfram.com/SigmoidFunction.html" rel="nofollow">sigmoidal curve</a> (range [0, 1]) and <a href="http://mathworld.wolfram.com/HyperbolicTangent.html" rel="nofollow">Hyperbolic Tangent</a> (range [-1, 1]). The first one can output any value. The latter two can approximate a step function (i.e. binary behavior), but it is up to the end user (you) to define the cut-off value for that translation.</p>
<p>You didn't say which neurons you use, but you should definitely read more on how neural networks are implemented and how they work. You may start with <a href="http://www.youtube.com/watch?v=DG5-UyRBQD4" rel="nofollow">this video</a> and then read <a href="http://arxiv.org/ftp/cs/papers/0308/0308031.pdf" rel="nofollow">Artificial Neural Networks for Beginners </a> by C Gershenson.</p>
<p><strong>UPDATE</strong> You say that you use tanh-sigmoid neurons and wonder how come you don't get values either very close to -1 or to 1. </p>
<p>The output of tanh neuron is hyperbolic tangent of the sum of all its inputs. Every value between -1 and 1 is possible. What determines the "steepness" of the output (in other words: the proportion of interim values) is the output values of the preceding neurons and their weights. These depend on the output of their preceding neurons and their weights etc etc etc. It is up to the learning algorithm to find the set of weights that minimizes a predefined scoring function, given a certain input. In a typical setup, a scoring function is a function that compares neural network output to a set of desired results and returns a single number that indicates how different the actual and the desired outputs are.</p>
<p>Before using NN you have to do some homework. At the minimum you have to decide what your goal is, how you interpret NN output and how you measure NN performance and how you update the weights. </p> | 2014-02-24 11:03:54.083000+00:00 | 2014-02-24 14:20:30.420000+00:00 | 2014-02-24 14:20:30.420000+00:00 | null | 21,985,502 | <p>I have an input dataset (matrix 25x1575) which is normalized to values between 0 and 1.
I also have a binary formatted output matrix (9x1575) like 0 0 0 0 0 0 0 0 1, 1 0 0 1 1 1 0 0 1 ... </p>
<p>I imported both files in matlab nntool and it automatically created a network with 25 input and 9 output nodes as I wanted.</p>
<p>After I trained this network using feed-forward backProp, I tested the model in its training data and each output nodes returns a decimal value like (-0.1978 0.45913 0.12748 0.25072 0.45199 0.59368 0.38359 0.31435 1.0604). </p>
<p>Why it doesn't return discrete values like 1 0 0 1 1 1 0 0 1?
Is there any thing that I must set in nntool to get such values?</p> | 2014-02-24 10:45:49.630000+00:00 | 2014-02-24 14:20:30.420000+00:00 | null | matlab|matrix|neural-network|classification|nntool | ['http://mathworld.wolfram.com/SigmoidFunction.html', 'http://mathworld.wolfram.com/HyperbolicTangent.html', 'http://www.youtube.com/watch?v=DG5-UyRBQD4', 'http://arxiv.org/ftp/cs/papers/0308/0308031.pdf'] | 4 |
66,519,480 | <p>There are several points to be checked. As you get the same output for different inputs, I suspect that some layer zeros out all its inputs. So check the outputs of the PositionalEncoding and also the Encoder block of the Transformer, to make sure they are not constant. But before that, make sure your inputs differ (try to inject noise, for example).</p>
<p>Additionally, from what I see in the pictures, your input and output are speech signals and were sampled at 22.05kHz (I guess), so they should have ~10k features, but you claim that you have only 128. This is another place to check. Now, the number 499 represents some time slice. Make sure your slices are in a reasonable range (20-50 msec, usually 30). If that is the case, then 30ms by 500 is 15 seconds, which is much more than you have in your example. And finally you are masking off a third of a second of speech in your input, which is too much I believe.</p>
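<p>A quick, hypothetical way to check the first point, reusing the <code>model</code> and a batch <code>x</code> from your code:</p>
<pre><code>with torch.no_grad():
    pe_src = model.pos_encoder(x.permute(1, 0, 2))
    enc_out = model.transformer.encoder(pe_src)
    # a std close to zero means that block is collapsing everything to a constant
    print(pe_src.std().item(), enc_out.std().item())
</code></pre>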
<p>I think it would be useful to examine <a href="https://arxiv.org/abs/1904.05862" rel="nofollow noreferrer">Wav2vec</a> and <a href="https://arxiv.org/abs/2006.11477" rel="nofollow noreferrer">Wav2vec 2.0</a> papers, which tackle the problem of self supervised training in speech recognition domain using Transformer Encoder with great success.</p> | 2021-03-07 17:39:44.643000+00:00 | 2021-03-07 17:49:59.507000+00:00 | 2021-03-07 17:49:59.507000+00:00 | null | 64,876,788 | <p>I'm trying to go <code>seq2seq</code> with a Transformer model. My input and output are the same shape (<code>torch.Size([499, 128])</code> where 499 is the sequence length and 128 is the number of features.</p>
<p>My input looks like:
<a href="https://i.stack.imgur.com/90IRq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/90IRq.png" alt="enter image description here" /></a></p>
<p>My output looks like:
<a href="https://i.stack.imgur.com/C0qAY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/C0qAY.png" alt="enter image description here" /></a></p>
<p>My training loop is:</p>
<pre><code> for batch in tqdm(dataset):
optimizer.zero_grad()
x, y = batch
x = x.to(DEVICE)
y = y.to(DEVICE)
pred = model(x, torch.zeros(x.size()).to(DEVICE))
loss = loss_fn(pred, y)
loss.backward()
optimizer.step()
</code></pre>
<p>My model is:</p>
<pre><code>import math
from typing import final
import torch
import torch.nn as nn
class Reconstructor(nn.Module):
def __init__(self, input_dim, output_dim, dim_embedding, num_layers=4, nhead=8, dim_feedforward=2048, dropout=0.5):
super(Reconstructor, self).__init__()
self.model_type = 'Transformer'
self.src_mask = None
self.pos_encoder = PositionalEncoding(d_model=dim_embedding, dropout=dropout)
self.transformer = nn.Transformer(d_model=dim_embedding, nhead=nhead, dim_feedforward=dim_feedforward, num_encoder_layers=num_layers, num_decoder_layers=num_layers)
self.decoder = nn.Linear(dim_embedding, output_dim)
self.decoder_act_fn = nn.PReLU()
self.init_weights()
def init_weights(self):
initrange = 0.1
nn.init.zeros_(self.decoder.weight)
nn.init.uniform_(self.decoder.weight, -initrange, initrange)
def forward(self, src, tgt):
pe_src = self.pos_encoder(src.permute(1, 0, 2)) # (seq, batch, features)
transformer_output = self.transformer_encoder(pe_src)
decoder_output = self.decoder(transformer_output.permute(1, 0, 2)).squeeze(2)
decoder_output = self.decoder_act_fn(decoder_output)
return decoder_output
</code></pre>
<p>My output has a shape of <code>torch.Size([32, 499, 128])</code> where <code>32</code> is batch, <code>499</code> is my sequence length and <code>128</code> is the number of features. But the output has the same values:</p>
<pre><code>tensor([[[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017],
[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017],
[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017],
...,
[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017],
[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017],
[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017]]],
grad_fn=<PreluBackward>)
</code></pre>
<p>What am I doing wrong? Thank you so much for any help.</p> | 2020-11-17 14:03:01.820000+00:00 | 2021-03-07 17:49:59.507000+00:00 | null | python|machine-learning|pytorch|transformer-model|sequence-to-sequence | ['https://arxiv.org/abs/1904.05862', 'https://arxiv.org/abs/2006.11477'] | 2 |
56,605,829 | <p>The type of GAN you used doesn't have any way of controlling which number it generates. For that you would need to train a <a href="https://arxiv.org/abs/1411.1784" rel="nofollow noreferrer">Conditional GAN</a>. </p>
<p>The only means you have to <em>control</em> what images the GAN is generating is through the <strong>noise vector</strong> you input to the generator. What you could try is to change the the values of this vector until you get the digit you want. </p>
<p>The easiest way to do so would be through a random seed</p>
<pre class="lang-py prettyprint-override"><code>np.random.seed(13) # changing this number will result in different digits being created
noise = np.random.normal(0, 1, size=[examples, random_dim])
generated_images = generator.predict(noise)
generated_images = generated_images.reshape(examples, 28, 28)
</code></pre> | 2019-06-14 22:32:13.033000+00:00 | 2019-06-14 22:32:13.033000+00:00 | null | null | 56,604,219 | <p>I'm interested in GAN so I followed this tutorial <a href="https://www.datacamp.com/community/tutorials/generative-adversarial-networks?fbclid=IwAR1Mq1ePB74ttO6WSnNX4zRrYuUHDBJ39Gwaea5pwjO9x4K-gVPJEdQMLVA" rel="nofollow noreferrer">link</a> about GANs for MNIST with Keras. The result was that it generated a 4x4 image with random digit like <a href="https://i.stack.imgur.com/5yGm8.png" rel="nofollow noreferrer">this</a>. I want to ask how to generate 4x4 image with digit that I want and not random digits like <a href="https://i.stack.imgur.com/Qrtnc.png" rel="nofollow noreferrer">this</a>.</p>
<pre><code>def plot_generated_images(epoch, generator, examples=16, dim=(4, 4), figsize=(4, 4)):
noise = np.random.normal(0, 1, size=[examples, random_dim])
generated_images = generator.predict(noise)
generated_images = generated_images.reshape(examples, 28, 28)
plt.figure(figsize=figsize)
for i in range(generated_images.shape[0]):
plt.subplot(dim[0], dim[1], i+1)
plt.imshow(generated_images[i], interpolation='nearest', cmap='gray_r')
plt.axis('off')
plt.tight_layout()
plt.savefig('gan_generated_image_epoch_%d.png' % epoch)
</code></pre> | 2019-06-14 19:37:22.320000+00:00 | 2020-09-20 09:05:54.953000+00:00 | 2019-06-15 00:26:32.300000+00:00 | python|keras|generative-adversarial-network | ['https://arxiv.org/abs/1411.1784'] | 1 |
49,767,312 | <p>Let's see the sample code in <a href="https://www.tensorflow.org/api_docs/python/tf/nn/" rel="noreferrer">TensorFlow API(r1.7)</a></p>
<p>For <code>depthwise_conv2d</code>,</p>
<pre><code>output[b, i, j, k * channel_multiplier + q] =
sum_{di, dj} input[b, strides[1] * i + rate[0] * di,
strides[2] * j + rate[1] * dj, k] *
filter[di, dj, k, q]
</code></pre>
<p>filter is <code>[filter_height, filter_width, in_channels, channel_multiplier]</code></p>
<p>For <code>conv2d</code>,</p>
<pre><code>output[b, i, j, k] =
sum_{di, dj, q} input[b, strides[1] * i + di,
strides[2] * j + dj, q] *
filter[di, dj, q, k]
</code></pre>
<p>filter is <code>[filter_height, filter_width, in_channels, out_channels]</code></p>
<p>Focusing on <code>k</code> and <code>q</code>, we can see the difference shown above.</p>
<p>The default format is <code>NHWC</code>, where <code>b</code> is batch size, <code>(i, j)</code> is a coordinate in feature map.</p>
<p>(Note that <code>k</code> and <code>q</code> refer to different things in this two functions.)</p>
<ol>
<li>For <code>depthwise_conv2d</code>, <code>k</code> refers to an input channel and <code>q</code>, <code>0 <= q < channel_multiplier</code>, refers to an output channel. Each input channel <code>k</code> is expanded to <code>k*channel_multiplier</code> with different filters <code>[filter_height, filter_width, channel_multiplier]</code>. It does not conduct cross-channel operation, in some literature, it is referred as <code>channel-wise spatial convolution</code>. Above process can be concluded as applying kernels of each filter separately to each channel and concatenating the outputs.</li>
<li>For <code>conv2d</code>, <code>k</code> refers to an output channel and <code>q</code> refers to an input channel. It sums up among all input channels, meaning that each output channel <code>k</code> is associated with all <code>q</code> input channels by a <code>[filter_height, filter_width, in_channels]</code> filter.</li>
</ol>
<p>For example, </p>
<pre><code>input_size: (_, 14, 14, 32)
filter of conv2d: (3, 3, 32, 64)
params of conv2d filter: 3x3x32x64
filter of depthwise_conv2d: (3, 3, 32, 64)
params of depthwise_conv2d filter: 3x3x32x64
</code></pre>
<p>suppose stride = 1 with padding, then</p>
<pre><code>output of conv2d: (_, 14, 14, 64)
output of depthwise_conv2d: (_, 14, 14, 32*64)
</code></pre>
<p>Some more insights:</p>
<ul>
<li>Standard convolution operation can be split into 2 steps: depthwise convolution and reduction (sum).</li>
<li>Depthwise Convolution is equivalent to setting the number of group to input channel in Group Convolution.</li>
<li>Usually, <code>depthwise_conv2d</code> is followed by <code>pointwise_conv2d</code>(a 1x1 convolution for reduction purpose), making a <code>separable_conv2d</code>. Check <a href="https://arxiv.org/pdf/1610.02357.pdf" rel="noreferrer">Xception</a>, <a href="https://arxiv.org/pdf/1704.04861.pdf" rel="noreferrer">MobileNet</a> for more details.</li>
</ul> | 2018-04-11 05:40:39.410000+00:00 | 2018-04-12 08:25:26.150000+00:00 | 2018-04-12 08:25:26.150000+00:00 | null | 44,226,932 | <p>What is the difference between <code>tf.nn_conv2d</code> and <code>tf.nn.depthwise_conv2d</code> in Tensorflow?</p> | 2017-05-28 11:43:06.073000+00:00 | 2018-04-12 08:25:26.150000+00:00 | null | python|tensorflow|deep-learning|conv-neural-network | ['https://www.tensorflow.org/api_docs/python/tf/nn/', 'https://arxiv.org/pdf/1610.02357.pdf', 'https://arxiv.org/pdf/1704.04861.pdf'] | 3 |