a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
39,576,476 | <p>In short, you would have to train the Tesseract engine to recognize the handwriting. Take a look at this link:</p>
<p><a href="https://stackoverflow.com/questions/12310287/tesseract-handwriting-with-dictionary-training">Tesseract handwriting with dictionary training</a></p>
<p>This is what the linked post says:</p>
<blockquote>
<p>It's possible to train tesseract to recognize handwriting. Here are
the instructions:
<a href="https://tesseract-ocr.github.io/tessdoc/Training-Tesseract" rel="noreferrer">https://tesseract-ocr.github.io/tessdoc/Training-Tesseract</a></p>
<p>But don't expect very good results. Academics have typically gotten
accuracy results topping out about 90%. Here are a couple references
for words and numbers. So if your use case can deal with at least 1/10
errors, this might work for you.</p>
</blockquote>
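<p>For context, a minimal way to drive Tesseract from Python is sketched below. This is only an illustration (it assumes the <code>pytesseract</code> wrapper and Pillow are installed, a hypothetical pre-cropped image of one form box, and a suitable traineddata model); <code>--psm 10</code> tells Tesseract to treat the image as a single character, which matches the "one capital letter per box" setup:</p>
<pre><code>import pytesseract
from PIL import Image

# Crop each box out of the form first, then OCR one character at a time.
box_image = Image.open("box_0.png")      # hypothetical file name
char = pytesseract.image_to_string(
    box_image,
    lang="eng",                          # or the name of a custom-trained model
    config="--psm 10",                   # PSM 10 = treat the image as a single character
)
print(char.strip())
</code></pre>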
<p>Also here is a good academic article written on this subject:</p>
<p><a href="https://arxiv.org/ftp/arxiv/papers/1003/1003.5893.pdf" rel="noreferrer">Recognition of Handwritten Textual Annotations using Tesseract
Open Source OCR Engine for information Just In Time (iJIT)</a></p> | 2016-09-19 15:08:23.587000+00:00 | 2020-05-02 04:51:02.690000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 39,556,443 | <p>I was just wondering how accurate can tesseract be for handwriting recognition if used with capital letters all in their own little boxes in a form.</p>
<p>I know you can train it to recognise your own handwriting somewhat but the problem in my case is I need to use it across multiple handwritings. Can anyone point me in the right direction? </p>
<p>Thanks a lot. </p> | 2016-09-18 10:05:42+00:00 | 2020-06-27 09:20:46.487000+00:00 | null | android|ocr|tesseract|handwriting | ['https://stackoverflow.com/questions/12310287/tesseract-handwriting-with-dictionary-training', 'https://tesseract-ocr.github.io/tessdoc/Training-Tesseract', 'https://arxiv.org/ftp/arxiv/papers/1003/1003.5893.pdf'] | 3 |
55,209,510 | <p>One idea when you have missing data (in your case, zeros) is to try to use the known data to fill the missing values. In other words, given a <em>partial</em> vector of features for an individual, we want to infer the remaining values.
A trivial way to do this is to simply use the mean value for the missing column (of course, then the inferred value does not depend on the known values for that person or the values known for people like them!). You could also, for example, cluster users (using only known values that both individuals share) and compute mean values for missing columns just within each cluster.</p>
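<p>As a rough illustration of the mean-imputation idea (a sketch only, assuming NumPy and treating the zeros as "missing", which is the convention used in the question below):</p>
<pre><code>import numpy as np

R = np.array([[2., 0., 5., 0., 0.],
              [0., 1., 0., 0., 0.],
              [0., 5., 5., 5., 0.]])   # rows = users, cols = movies, 0 = "no review"

mask = (R == 0)                        # treat zeros as missing entries
counts = (~mask).sum(axis=0)           # number of observed ratings per movie
col_means = np.divide(R.sum(axis=0), counts,
                      out=np.zeros(R.shape[1]), where=counts > 0)
R_filled = np.where(mask, col_means, R)   # missing entries replaced by column means
</code></pre>
<p>The cluster-based variant would simply compute these means within each user cluster instead of over the whole column.</p>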
<p>A very relevant literature to look into is the use of matrix completion for recommender systems (which in fact looks like what you are basically trying to do) and <a href="https://en.wikipedia.org/wiki/Collaborative_filtering" rel="nofollow noreferrer">collaborative filtering</a>. Imputation has been used but is rather expensive for large-scale datasets. Check out <a href="https://datajobs.com/data-science-repo/Recommender-Systems-[Netflix].pdf" rel="nofollow noreferrer">Koren et al, <em>Matrix factorization techniques for recommender systems</em></a> for some of the techniques used.</p>
<p>Another outlook is to use semi-supervised probabilistic representation learning methods. Basically you learn a generative model of the data, such that you can <em>partially</em> specify a representation and automatically infer the remaining values. One caveat is this may be expensive, as you need to define a stochastic node per feature in this case. Consider, e.g., <a href="https://arxiv.org/abs/1706.00400" rel="nofollow noreferrer">Siddarth et al, <em>Learning Disentangled Representations with Semi-Supervised Deep Generative Models</em></a></p> | 2019-03-17 16:51:07.767000+00:00 | 2019-03-17 16:51:07.767000+00:00 | null | null | 55,027,074 | <p>i have a dataser in this way: </p>
<pre><code>User Movie
0 1 2 3 4
0 2 0 5 0 0
1 0 1 0 0 0
2 0 5 5 5 0
</code></pre>
<p>Values from 1 to 5 are user reviews of the movies; otherwise the entry is zero (no review).</p>
<p>I don't have any full columns; the data are all sparse (at least one zero in every column).</p>
<p>I have seen that this introduces more noise into the data, because I have many values that I really don't need.
What are the methods to remove this noise? I remember that instead of zero I can use a mean value and then simplify in some way, but I'm not sure.</p>
<p>Any suggestion?</p> | 2019-03-06 15:49:14.100000+00:00 | 2019-03-17 16:51:07.767000+00:00 | null | python|matrix|dataset|sparse-matrix|dimensionality-reduction | ['https://en.wikipedia.org/wiki/Collaborative_filtering', 'https://datajobs.com/data-science-repo/Recommender-Systems-[Netflix].pdf', 'https://arxiv.org/abs/1706.00400'] | 3 |
49,798,586 | <p>There are multiple ways in which the problem can be solved. Two approaches are listed below.</p>
<blockquote>
<ol>
<li>Using traditional image processing techniques: intensity-based thresholding, edge detection and so on (see the sketch after this list). </li>
<li>Use machine learning / deep learning: please find below references for ML/deep learning implementations.</li>
</ol>
<p><a href="https://arxiv.org/ftp/arxiv/papers/1710/1710.06836.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/arxiv/papers/1710/1710.06836.pdf</a>
<a href="https://cse.iitk.ac.in/users/cs365/2015/_submissions/vinsam/report.pdf" rel="nofollow noreferrer">https://cse.iitk.ac.in/users/cs365/2015/_submissions/vinsam/report.pdf</a></p>
</blockquote>
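<p>As a small illustration of the first (traditional) route only, and assuming OpenCV and a hypothetical input frame, intensity thresholding and edge detection might look like this:</p>
<pre><code>import cv2

img = cv2.imread("hand_sign.png")                 # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)          # smooth before thresholding

# Otsu picks the threshold automatically from the intensity histogram
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

edges = cv2.Canny(blur, 100, 200)                 # edge map as an alternative feature
</code></pre>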
<p>In my opinion/practice, deep learning methods generalize better than the traditional approaches, given a lot of training data and compute.</p> | 2018-04-12 13:59:45.773000+00:00 | 2018-04-12 13:59:45.773000+00:00 | null | null | 49,728,805 | <p>I am trying to build a system to recognize the sign language alphabet. I don't have experience working in computer vision because this is my first time. I don't know which filter I should use (sharpening, smoothing, sharpening then smoothing, smoothing then sharpening, or even something else). Not just the filter choice but also other choices like:<br>
1- Image Thresholding methods<br>
2- edge detection techniques<br>
..etc</p> | 2018-04-09 08:41:25.993000+00:00 | 2018-04-12 13:59:45.773000+00:00 | 2018-04-09 11:29:45.173000+00:00 | image-processing|computer-vision | ['https://arxiv.org/ftp/arxiv/papers/1710/1710.06836.pdf', 'https://cse.iitk.ac.in/users/cs365/2015/_submissions/vinsam/report.pdf'] | 2 |
61,412,303 | <p>Build <a href="https://en.wikipedia.org/wiki/Convex_hull" rel="nofollow noreferrer">convex hull</a> of all points.</p>
<p>Then find the largest-area quadrilateral with vertices belonging to the hull.
If hull count N is small, you can just check all diagonals. </p>
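<p>To make that concrete, here is a hedged Python sketch of the simple brute-force variant (not the diagonal trick or the linked paper's algorithm): build the hull with SciPy, then try every quadruple of hull vertices. It assumes the hull has at least 4 vertices and uses the points from the question:</p>
<pre><code>from itertools import combinations
import numpy as np
from scipy.spatial import ConvexHull

def shoelace_area(pts):
    """Area of a polygon whose vertices are given in order (shoelace formula)."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

B = np.array([[1., 2.], [0., 0.], [1., 3.], [-5., 6.], [-6., 3.],
              [1., 5.], [-1., 2.], [1., -3.], [4., 2.]])

hull = ConvexHull(B)
hull_pts = B[hull.vertices]            # hull vertices in counter-clockwise order

# Brute force over the (usually few) hull vertices; combinations() keeps the
# counter-clockwise order, so the shoelace formula applies directly.
best = max(combinations(range(len(hull_pts)), 4),
           key=lambda idx: shoelace_area(hull_pts[list(idx)]))
quad = hull_pts[list(best)]            # the four points of the largest quadrangle
</code></pre>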
<p>Otherwise consider using more advanced algorithms like this: <a href="https://arxiv.org/pdf/1708.00681.pdf" rel="nofollow noreferrer">Maximum-Area Quadrilateral in a Convex Polygon, Revisited</a></p> | 2020-04-24 15:49:25.417000+00:00 | 2020-04-24 15:49:25.417000+00:00 | null | null | 61,405,587 | <p>I am using Python 3.6 and I have come across a problem. I am given n points by x and y coordinates (n is from 4 to 200) and I need to find the 4 points from those n which form the biggest general quadrangle (any convex shape formed by 4 points).</p>
<p>I can think of a solution with four nested for loops, calculating the area of the quadrangle given by the points in the loops, but it is extremely slow. Do you know of anything faster?</p>
<p>The points are given like this:</p>
<pre><code>B = np.array([[ 1., 2.], [ 0., 0.], [ 1., 3.], [ -5., 6.], [ -6., 3.], [ 1., 5.], [ -1., 2.], [ 1., -3.], [ 4., 2.]])
</code></pre>
<p>The next level is when I get N points given by x, y and z coordinates (N is between 8 and 500) and I should find the biggest (in volume) hexahedron (the shape defined by 8 points) - I have no idea how to solve that one.</p>
<p>There is no need for right angles, just shapes defined by 4 (8) points. Any suggestions?</p>
<hr>
<p>Background:
I have quite complex 3D models of building which I need to simplify to one specific program for computations. The details on the building are not needed. All information about the buildings is in file.obj exported from Blender.</p> | 2020-04-24 09:50:20.927000+00:00 | 2020-04-25 22:06:03.453000+00:00 | null | python|python-3.x|geometry|shapes|modeling | ['https://en.wikipedia.org/wiki/Convex_hull', 'https://arxiv.org/pdf/1708.00681.pdf'] | 2 |
19,387,081 | <p>Here's some simple but reasonably efficient Python code that does the job.</p>
<pre><code>import math

def T(n):
    "Return sum_{i=1}^n d(i), where d(i) is the number of divisors of i."
    f = int(math.floor(math.sqrt(n)))
    return 2 * sum(n // x for x in range(1, f+1)) - f**2

def count_divisors(a, b):
    "Return sum_{i=a}^b d(i), where d(i) is the number of divisors of i."
    return T(b) - T(a-1)
</code></pre>
<p>Explanation: it's enough to be able to compute the sum from <code>1</code> to <code>b</code>, then we can do two separate computations and subtract to get the sum from <code>a</code> to <code>b</code>. Finding the sum of the divisor function from <code>1</code> to <code>b</code> amounts to computing <a href="http://oeis.org/A006218" rel="nofollow">sequence A006218</a> from the online encyclopaedia of integer sequences. That sequence is equivalent to the sum of <code>floor(n / d)</code> as <code>d</code> ranges over all integers from <code>1</code> to <code>n</code>.</p>
<p>And now <em>that</em> sequence can be thought of as the number of integer-valued points under the hyperbola <code>xy=n</code>. We can use the symmetry of the hyperbola around the line <code>x = y</code>, and count the integer points with <code>x <= sqrt(n)</code> and those with <code>y <= sqrt(n)</code>. That ends up double counting the points with both <code>x</code> and <code>y</code> less than <code>sqrt(n)</code>, so we subtract the square of <code>floor(sqrt(n))</code> to compensate. All this is explained (briefly) in the introduction to <a href="http://arxiv.org/abs/1206.3369" rel="nofollow">this paper</a>.</p>
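<p>A quick sanity check of the functions above: the divisor counts of 1..10 are 1, 2, 2, 3, 2, 4, 2, 4, 3, 4, which sum to 27.</p>
<pre><code>>>> T(10)                  # 2*(10//1 + 10//2 + 10//3) - 3**2 = 2*18 - 9
27
>>> count_divisors(1, 10)
27
>>> count_divisors(4, 6)   # d(4) + d(5) + d(6) = 3 + 2 + 4
9
</code></pre>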
<p>Remarks:</p>
<ul>
<li><p>the algorithm has running time <code>O(sqrt(b))</code>, and constant space requirements. Improvements in running time are possible at the expense of space; see the paper referred to above.</p></li>
<li><p>for really large <code>n</code>, you'll want a proper integer square root rather than using <code>floor(math.sqrt(n))</code>, to avoid problems with floating-point inaccuracies. That's not a problem with the sort of <code>n</code> that you're looking at. With typical IEEE 754 floating-point and a correctly rounded square root operation, you're not going to run into trouble until <code>n</code> exceeds <code>2**52</code>.</p></li>
<li><p>if <code>a</code> and <code>b</code> are <em>really</em> close, there may be more efficient solutions.</p></li>
</ul> | 2013-10-15 17:12:53.903000+00:00 | 2013-10-15 18:12:32.020000+00:00 | 2013-10-15 18:12:32.020000+00:00 | null | 19,381,617 | <p>Let there be a function <strong>g(x)=number of divisor of x</strong>. Given two integers a and b we need to find-></p>
<p><strong>g(a)+g(a+1)....+g(b).</strong></p>
<p>I thought this step-></p>
<pre><code>total = 0
for x in range(a, b + 1):
    total += number_of_divisors(x)   # one O(sqrt(x)) pass per x
</code></pre>
<p>but it's given that <strong>1<=a<=b<=2^31-1</strong></p>
<p>So iterating over all x between a and b can cost me a lot of time, e.g. if a=1 and b=2^31-1.</p>
<p>Is there a better way to do?</p> | 2013-10-15 12:47:00.973000+00:00 | 2013-10-16 10:29:47.757000+00:00 | 2013-10-15 16:47:57.960000+00:00 | algorithm|numbers | ['http://oeis.org/A006218', 'http://arxiv.org/abs/1206.3369'] | 2 |
34,339,658 | <p>As explained by <a href="https://stackoverflow.com/a/7181993/1364752">this answer</a> to a question about sorting small collections, you can actually make your swap code more performant by changing its definition to the following one:</p>
<pre><code>#define SWAP(x, y) { \
int dx = data[x]; \
data[x] = dx < data[y] ? dx : data[y]; \
data[y] ^= dx ^ data[x]; \
}
</code></pre>
<p>According to the research paper <a href="http://arxiv.org/abs/1505.01962" rel="nofollow noreferrer"><em>Applying Sorting Networks to Synthesize Optimized Sorting Libraries</em></a>, this version of <code>SWAP</code> is branchless and compiles down to a mere 5 instructions on GCC or Clang with a decent optimization level. The article also hints at the fact that the low number of instructions may actually make the code benefit from instruction-level parallelism.</p>
<p>If <code>xor</code> does not work for the types to be sort, you can use an alternative version of <code>SWAP</code> that uses two conditionals instead of one, which should be almost as fast as the <code>xor</code> version. Actually, I use this trick in a sorting library of mine and sorting a small fixed-size collection of integers with sorting networks went from Β« not really better than insertion sort Β» to Β« several times faster than insertion sort Β» when I introduced the trick. Sorting a collection of 8 integers is ~5 times faster with sorting networks than with an insertion sort on my computer.</p> | 2015-12-17 16:30:45.213000+00:00 | 2015-12-17 16:30:45.213000+00:00 | 2017-05-23 11:52:18.833000+00:00 | null | 31,372,925 | <p>I was working on network sort (for arrays smaller than 8) and noticed that all the algorithms focus on its ability to allow parallel operations. Here is one such set for an array of size 5.</p>
<pre><code> #define SWAP(x,y) if (data[y] < data[x]) { int tmp = data[x]; data[x] = data[y]; data[y] = tmp; }
//Parallelizable
SWAP(1, 2);
SWAP(4, 5);
//Parallelizable
SWAP(0, 2);
SWAP(3, 5);
//Parallelizable
SWAP(0, 1);
SWAP(3, 4);
SWAP(2, 5);
//Parallelizable
SWAP(0, 3);
SWAP(1, 4);
//Parallelizable
SWAP(2, 4);
SWAP(1, 3);
//Parallelizable
SWAP(2, 3);
</code></pre>
<p>I was working with <code>long int</code> arrays (So each element is 8 bytes in size). So is there any easy way to parallelize these operations in C ? Is there any hardware specific commands I can use to achieve this (SIMD, ASM(x86) etc.) </p> | 2015-07-12 21:46:06.887000+00:00 | 2015-12-17 16:30:45.213000+00:00 | 2015-12-01 17:18:42.810000+00:00 | c|algorithm|sorting|parallel-processing|sorting-network | ['https://stackoverflow.com/a/7181993/1364752', 'http://arxiv.org/abs/1505.01962'] | 2 |
8,947,501 | <p>Q-learning is a <a href="http://en.wikipedia.org/wiki/Temporal_difference_learning" rel="nofollow">Temporal difference learning</a> algorithm. For every possible state (board), it learns the value of the available actions (moves). However, it is not suitable for use with <a href="http://ai-depot.com/articles/minimax-explained/" rel="nofollow">Minimax</a>, because the Minimax algorithm needs an evaluation function that returns the value of a position, not the value of an action at that position.</p>
<p>However, temporal difference methods can be used to learn such an evaluation function. Most notably, Gerald Tesauro used the TD(Ξ») ("TD lambda") algorithm to create <a href="http://en.wikipedia.org/wiki/TD-Gammon" rel="nofollow">TD-Gammon</a>, a human-competitive Backgammon playing program. He wrote an article describing the approach, which you can find <a href="http://www.research.ibm.com/massive/tdl.html" rel="nofollow">here</a>.</p>
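<p>For intuition only, here is a minimal, hedged Python sketch of the temporal-difference idea behind such an evaluation function. It is plain TD(0) on a tabular value function for whoever is to move, not the TD(λ)/TDLeaf(λ) algorithms referred to here, and the state representation is left abstract:</p>
<pre><code># Toy TD(0) update for a position-evaluation table: after each move, nudge the
# value of the previous position toward the value of the position that followed.
alpha = 0.1                      # learning rate
V = {}                           # state -> estimated value (default 0.0)

def td_update(prev_state, next_state, reward=0.0):
    v_prev = V.get(prev_state, 0.0)
    v_next = V.get(next_state, 0.0)
    V[prev_state] = v_prev + alpha * (reward + v_next - v_prev)
</code></pre>
<p>The learned V can then serve as the leaf evaluation inside a Minimax search, which is exactly the gap that TDLeaf(λ) addresses more carefully.</p>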
<p>TD(Ξ») was later extended to TDLeaf(Ξ»), specifically to better deal with Minimax searches. TDLeaf(Ξ») has been used, for example, in the chess program KnightCap. You can read about TDLeaf in <a href="http://arxiv.org/pdf/cs.LG/9901001" rel="nofollow">this paper</a>.</p> | 2012-01-20 20:30:46.530000+00:00 | 2012-01-20 20:45:51.583000+00:00 | 2012-01-20 20:45:51.583000+00:00 | null | 8,804,768 | <p>How to use MinMax trees with Q-Learning?</p>
<p>I want to implement a Q-Learning connect four agent and heard that adding MinMax trees into it helps.</p> | 2012-01-10 14:23:32.727000+00:00 | 2020-01-11 11:09:50.980000+00:00 | 2012-01-19 02:15:43.493000+00:00 | artificial-intelligence|reinforcement-learning|game-ai | ['http://en.wikipedia.org/wiki/Temporal_difference_learning', 'http://ai-depot.com/articles/minimax-explained/', 'http://en.wikipedia.org/wiki/TD-Gammon', 'http://www.research.ibm.com/massive/tdl.html', 'http://arxiv.org/pdf/cs.LG/9901001'] | 5 |
55,937,793 | <p>For lasso+FE, you can first demean both sides of your regression by following the logic given e.g. <a href="https://en.wikipedia.org/wiki/Fixed_effects_model#Classical_Representation" rel="nofollow noreferrer">here</a>, and then run lasso via glmnet. </p>
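<p>The thread is about R and glmnet, but just to make the "demean, then run the lasso" idea concrete, here is a hedged sketch of the within-transformation using Python/pandas with hypothetical column names; it assumes a continuous outcome for simplicity (the logistic case needs a different treatment), and in R you would do the equivalent grouping before calling <code>glmnet</code>:</p>
<pre><code>import pandas as pd
from sklearn.linear_model import Lasso

df = pd.read_csv("panel.csv")                 # hypothetical panel data set
features = ["x1", "x2", "x3"]                 # hypothetical regressors

# Within-transformation: subtract each individual's time average, which absorbs
# the individual fixed effects, then fit an ordinary lasso on the demeaned data.
cols = features + ["y"]
demeaned = df[cols] - df.groupby(df["ID"])[cols].transform("mean")

model = Lasso(alpha=0.1).fit(demeaned[features], demeaned["y"])
</code></pre>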
<p>Lasso+random effects is a bit <a href="https://arxiv.org/abs/1109.4003" rel="nofollow noreferrer">more complicated beast</a> mathematically and it is not supported out of the box with glmnet. There exists a package for doing a mixed-model lasso <a href="https://cran.r-project.org/web/packages/glmmLasso/index.html" rel="nofollow noreferrer">here</a>, but I haven't tried it.</p> | 2019-05-01 14:24:08.747000+00:00 | 2019-05-01 14:24:08.747000+00:00 | null | null | 55,936,791 | <p>I have many continuous independent variables and a dependent dummy variable in my data set about individuals in given years. I want to perform feature selection using Logistic Random Effects Lasso/Logistic Fixed Effects Lasso. However, the default settings of <code>glmnet</code> for my estimation procedure is that I am using cross-sectional data while I want <code>R</code> to see my data as panel data, and it thus models a Logistic Lasso while I want a Logistic Random Effects Lasso/Logistic Fixed Effects Lasso model.</p>
<p>Therefore, in the example code below, I want to let <code>R</code> know that I am using a panel data set and that <code>ID</code> are my individuals/cross-sectional units and <code>year</code> are the years I have observations for each <code>ID</code>. In the code below, all individuals are pooled and I even get coefficients for <code>ID</code> (and <code>year</code>) in this Logistic Lasso estimation. How can I estimate a Logistic Random Effects Lasso/Logistic Fixed Effects Lasso model in <code>R</code>?</p>
<pre><code>df=cbind(c(1,546,2,56,6,73,4234,436,647,567,87,2,5,76,5,456,6756,6,132,78,32),c(2,3546,26,568,76,873,234,36,67,57,887,29,50,736,51,56,676,62,32,782,322),10:30)
year=rep(1:3, times=7)
ID=rep(1:7, each=3)
x=as.matrix(cbind(ID,year,df))
y1=as.data.frame(rep(c(0,1), each = 18))[1:21,]
y=as.matrix(y1)
fit=glmnet(x,y,alpha=1,family="binomial")
lambdamin=min(fit$lambda)
predict.glmnet(fit,s=lambdamin,newx=x,type="coefficients")
</code></pre>
<pre><code> 1
(Intercept) -8.309211e+01
ID 1.281220e+01
year .
-2.339904e-04
.
.
</code></pre> | 2019-05-01 13:10:13.630000+00:00 | 2019-05-01 14:24:08.747000+00:00 | null | r|panel|lasso-regression | ['https://en.wikipedia.org/wiki/Fixed_effects_model#Classical_Representation', 'https://arxiv.org/abs/1109.4003', 'https://cran.r-project.org/web/packages/glmmLasso/index.html'] | 3 |
61,188,128 | <p>It all depends on the type of planning you're willing to use. If it's 2.1, then you can use numeric variables to do what you want (which is what you found in that example). Figure 1 from the paper [<a href="https://arxiv.org/pdf/1106.4561.pdf" rel="nofollow noreferrer">here</a>] shows it as well.</p>
<p>If it's just classical planning that you're hoping to use, then you need to be a little bit smarter about the encoding. Predicates like <code>(capacity ?vehicle ?num)</code> would need to be created where <code>?num</code> is an object of type <code>number</code> and you create a finite number of them. This can work if your capacities are small enough.</p>
<p>As always, a working example would be helpful to see where it is you're stuck.</p>
<hr>
<p><em>Edit: after confirming PDDL2.1</em></p>
<p>The issues that I can see with the PDDL you posted:</p>
<ul>
<li>Missing a space in <code>at?vehicle</code></li>
<li><p>Your capacity check in the precondition should include the cargo size. E.g.,</p>
<p><code>(< (+ (loadedCargo ?vehicle) (cargosize ?cargo)) (capacity ?vehicle))</code></p></li>
<li>Bad variable name in <code>(at ?c ?vehicle)</code> (should be <code>?cargo</code>)</li>
<li>Your increase should use prefix notation and include the cargo size: <code>(increase (loadedCargo ?vehicle) (cargosize ?cargo))</code></li>
<li>You need to remove the cargo from the current location as an effect: <code>(not (at ?cargo ?location))</code></li>
</ul>
<p>I think that's everything I see wrong with the example, but I haven't tested it.</p> | 2020-04-13 12:41:55.923000+00:00 | 2020-04-14 14:07:44.700000+00:00 | 2020-04-14 14:07:44.700000+00:00 | null | 61,187,292 | <p>I am having to create PDDL, in which vehicles transport cargo across a map. A vehicle has a capacity with regard to how much cargo it can carry. Before a vehicle loads cargo onto it, it needs to know whether there is enough capacity for the vehicle to carry that cargo. How do I assign capacity to a vehicle object?</p>
<p>I have seen examples such as:</p>
<pre><code>< (passengers ?lift) (capacity ?lift)
</code></pre>
<p>So clearly, in this scenario, 'lift' has a capacity attribute and a passengers attribute. Could someone provide an example of how this object declaration looks?</p>
<p>Apologies for the poor question, I am new to PDDL and trying to wrap my head around it still.</p>
<p>Here is my load function for loading cargo onto a vehicle:</p>
<pre><code> (:action load
:parameters (?vehicle ?cargo ?location)
:precondition (and (at?vehicle ?location) (at ?cargo ?location) (< (loadedCargo ?vehicle) (capacity ?vehicle)))
:effect (and (at ?c ?vehicle) (increase(loadedCargo ?vehicle) + 1))
</code></pre>
<p>Another problem is that there are different types of vehicles, and we need a way of determining which kind of vehicle we are loading onto because different vehicles have different capacities.</p>
<p>I am pretty sure that I am using PDDL 2.1</p> | 2020-04-13 11:51:46.560000+00:00 | 2020-04-14 14:07:44.700000+00:00 | 2020-04-13 12:51:26.990000+00:00 | artificial-intelligence|planning|pddl | ['https://arxiv.org/pdf/1106.4561.pdf'] | 1 |
57,261,418 | <p>One architecture suited to your needs is <a href="https://en.wikipedia.org/wiki/WaveNet" rel="nofollow noreferrer">WaveNet</a>.</p>
<p>The WaveNet architecture is constructed to deal with very long sequences (your sequences are reasonably long) and has been shown to outperform LSTM based RNNs on several tasks in <a href="https://arxiv.org/pdf/1609.03499.pdf" rel="nofollow noreferrer">the original paper</a>.</p>
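<p>To make the idea concrete, here is a hedged Keras sketch of a stack of dilated causal 1-D convolutions for sequence classification. It is only WaveNet-flavoured (no gated activations or skip connections), and it assumes inputs of shape (timesteps, 1) with two classes:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Conv1D, GlobalAveragePooling1D, Dense

model = Sequential()
model.add(Conv1D(32, kernel_size=2, dilation_rate=1, padding="causal",
                 activation="relu", input_shape=(None, 1)))
for rate in (2, 4, 8, 16):          # exponentially growing receptive field
    model.add(Conv1D(32, kernel_size=2, dilation_rate=rate,
                     padding="causal", activation="relu"))
model.add(GlobalAveragePooling1D())
model.add(Dense(1, activation="sigmoid"))
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
</code></pre>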
<p>I am not sure what you mean by</p>
<blockquote>
<p>converting the above representations into an image and use CNNs to train</p>
</blockquote>
<p>so I would suggest sticking to recurrent models or WaveNet for sequence classification.</p> | 2019-07-29 21:06:55.153000+00:00 | 2019-07-29 21:06:55.153000+00:00 | null | null | 57,259,227 | <p>I have time series data(one instance of 30 seconds) as shown in the figure, I would like to know the what kind of classifying algorithms I could use.
<a href="https://i.stack.imgur.com/3Kja8.png" rel="nofollow noreferrer">This is how the data looks in time and frequency domain </a></p>
<p>In the image we have 2 classes (one represented in blue and the other in orange). On the left section of the image the data is represented in the time domain, and on the right is its equivalent Fourier transform.
I am thinking of using an LSTM to train on the data in both domains, and also of converting the above representations into an image and using CNNs to train.
Any Suggestion such as a better algorithm or a better representation of data would help.</p> | 2019-07-29 18:04:10.260000+00:00 | 2019-07-29 21:06:55.153000+00:00 | null | neural-network|artificial-intelligence|conv-neural-network|recurrent-neural-network | ['https://en.wikipedia.org/wiki/WaveNet', 'https://arxiv.org/pdf/1609.03499.pdf'] | 2 |
52,483,718 | <p>Apparently you cannot do the following line:</p>
<pre><code>dog_img = dog_img[:, :, 0:3] # Opera has added alpha channel
</code></pre>
<p>So I loaded the image using a utility in Keras called <code>load_img</code>, which doesn't add the alpha channel.</p>
<p>The complete code:</p>
<pre><code>import imageio
from matplotlib import pyplot as plt
from skimage.transform import resize
import numpy as np
from keras import activations
from keras.applications import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input, decode_predictions
# Build the VGG16 network with ImageNet weights
model = VGG16(weights='imagenet', include_top=True)
dog_img = image.img_to_array(image.load_img(r"F:\tmp\Opera Snapshot_2018-09-24_133452_arxiv.org.png", target_size=(224, 224)))
x = np.expand_dims(dog_img, axis=0)
x = preprocess_input(x)
pred = model.predict(x)
print(decode_predictions(pred))
[[('n02108089', 'boxer', 0.29122102), ('n02108422', 'bull_mastiff', 0.199128), ('n02129604', 'tiger', 0.10050287), ('n02123159', 'tiger_cat', 0.09733449), ('n02109047', 'Great_Dane', 0.056869864)]]
</code></pre> | 2018-09-24 16:32:13+00:00 | 2018-09-24 16:32:13+00:00 | null | null | 52,483,437 | <p>I'm trying to reproduce some results from the <a href="https://arxiv.org/pdf/1610.02391.pdf" rel="nofollow noreferrer">paper, describing Grad-CAM method</a>, using Keras with Tensorflow-GPU backend, and obtain totally incorrect labels.</p>
<p>I've captured the screenshot of figure 1(a) from that paper and trying to make the pretrained VGG16 from Keras Applications to classify it.</p>
<p>Here is my image:</p>
<p><a href="https://i.stack.imgur.com/utqcY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/utqcY.png" alt="snapshot"></a></p>
<p>Here is my code (cell from the Jupyter notebook). Part of code was copied from the <a href="https://keras.io/applications/#vgg16" rel="nofollow noreferrer">Keras manuals</a></p>
<pre><code>import imageio
from matplotlib import pyplot as plt
from skimage.transform import resize
from keras import activations
from keras.applications import VGG16
from keras.applications.vgg16 import preprocess_input, decode_predictions
# Build the VGG16 network with ImageNet weights
model = VGG16(weights='imagenet', include_top=True)
%matplotlib inline
dog_img = imageio.imread(r"F:\tmp\Opera Snapshot_2018-09-24_133452_arxiv.org.png")
dog_img = dog_img[:, :, 0:3] # Opera has added alpha channel
dog_img = resize(dog_img, (224, 224, 3))
x = np.expand_dims(dog_img, axis=0)
x = preprocess_input(x, mode='tf')
pred = model.predict(x)
decode_predictions(pred)
</code></pre>
<p>Output: </p>
<pre><code>[[('n03788365', 'mosquito_net', 0.017053505),
('n03291819', 'envelope', 0.015034639),
('n15075141', 'toilet_tissue', 0.012603286),
('n01737021', 'water_snake', 0.010620943),
('n04209239', 'shower_curtain', 0.009625845)]]
</code></pre>
<p>However, when I submit the same image to the online service, run by the paper authors, <a href="http://gradcam.cloudcv.org/classification" rel="nofollow noreferrer">http://gradcam.cloudcv.org/classification</a>, I see correct label "Boxer"</p>
<p>Here is the output from something that they call "Terminal":</p>
<pre><code>Completed the Classification Task
"Time taken for inference in torch: 9.0"
"Total time taken: 9.12565684319"
{"classify_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_243.png", "execution_time": 9.0, "label": 243.0, "classify_gb_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_gcam_243.png", "classify_gcam_raw": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_raw_243.png", "input_image": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/Opera Snapshot_2018-09-24_133452_arxiv.org.png", "pred_label": 243.0, "classify_gb": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_243.png"}
Completed the Classification Task
"Time taken for inference in torch: 9.0"
"Total time taken: 9.05940508842"
{"classify_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_243.png", "execution_time": 9.0, "label": 243.0, "classify_gb_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_gcam_243.png", "classify_gcam_raw": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_raw_243.png", "input_image": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/Opera Snapshot_2018-09-24_133452_arxiv.org.png", "pred_label": 243.0, "classify_gb": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_243.png"}
Job published successfully
Publishing job to Classification Queue
Starting classification job on VGG_ILSVRC_16_layers.caffemodel
Job published successfully
Publishing job to Classification Queue
Starting classification job on VGG_ILSVRC_16_layers.caffemodel
</code></pre>
<p>I use Anaconda Python 64-bit, on Windows 7. </p>
<p>Versions of relevant software on my PC:</p>
<pre><code>keras 2.2.2 0
keras-applications 1.0.4 py36_1
keras-base 2.2.2 py36_0
keras-preprocessing 1.0.2 py36_1
tensorflow 1.10.0 eigen_py36h849fbd8_0
tensorflow-base 1.10.0 eigen_py36h45df0d8_0
</code></pre>
<p>What am I doing wrong? How can I get boxer label?</p> | 2018-09-24 16:15:25.503000+00:00 | 2018-09-24 16:38:20.737000+00:00 | 2018-09-24 16:29:50.420000+00:00 | python|tensorflow|keras | [] | 0 |
61,498,320 | <p><code>memory_order_acquire</code> only makes sense for operations that <em>read</em> a value, and <code>memory_order_release</code> only makes sense for operations that <em>write</em> a value. Since a read-modify-write operation reads and writes, it is possible to combine these memory orders, but it is not always necessary.</p>
<p>The <code>m_event.m_state.compare_exchange_weak</code> uses <code>memory_order_release</code> to write the new value, because it tries to replace a value that has previously been read using memory_order_acquire:</p>
<pre><code> // load initial value using memory_order_acquire
void* oldValue = m_event.m_state.load(std::memory_order_acquire);
do {
...
} while (!m_event.m_state.compare_exchange_weak(oldValue, this,
std::memory_order_release,
// in case of failure, load new value using memory_order_acquire
std::memory_order_acquire));
</code></pre>
<p>IMHO in this case it is not even necessary to use memory_order_acquire at all, since oldValue is never de-referenced, but only stored as the next pointer; i.e., it would be perfectly fine to replace these two memory_order_acquire with memory_order_relaxed.</p>
<p>In <code>async_manual_reset_event::set()</code> the situation is different:</p>
<pre><code> void* oldValue = m_state.exchange(this, std::memory_order_acq_rel);
if (oldValue != this)
{
auto* waiters = static_cast<awaiter*>(oldValue);
while (waiters != nullptr)
{
// we are de-referencing the pointer read from m_state!
auto* next = waiters->m_next;
waiters->m_awaitingCoroutine.resume();
waiters = next;
}
</code></pre>
<p>Since we are de-referencing the pointer we read from <code>m_state</code>, we have to ensure that these reads <em>happen after</em> the writes to these waiter objects. This is ensured via the synchronize-with relation on <code>m_state</code>. The writer is added via the previously discussed compare_exchange using <code>memory_order_release</code>. The acquire-part of the exchange synchronizes with the release-compare_exchange (and in fact all prior release-compare_exchange that are part of the release sequence), thus providing the necessary happens-before relation.</p>
<p>To be honest, I am not sure why this exchange would need the release part. I think the author might have wanted to be on "the safe side", since several other operations are also stronger than necessary (I already mentioned that <code>await_suspend</code> does not need memory_order_acquire, and the same goes for <code>is_set</code> and <code>reset</code>).</p>
<p>For your lock implementation it is very simple - when you want to acquire the lock (<code>try_lock_shared</code>/<code>try_lock</code>) use <code>memory_order_acquire</code> for the compare-exchange operation only. Releasing the lock has to use <code>memory_order_release</code>.</p>
<p>The argument is also quite simple: you have to ensure that when you have acquired the lock, any changes previously made to the data protected by the lock is visible to the current owner, that is, you have to ensure that these changes <em>happened before</em> the operations you are about to perform <em>after acquiring the lock</em>. This is achieved by establishing a synchronize-with relation between the <code>try_lock</code> (acquire-CAS) and the previous <code>unlock</code> (release-store).</p>
<p>When trying to argue about the correctness of an implementation based on the semantics of the C++ memory model I usually do this as follows:</p>
<ol>
<li>identify the necessary happens-before relations (like for your lock)</li>
<li>make sure that these happens-before relations are established correctly on all code paths</li>
</ol>
<p>And I always annotate the atomic operations to document how these relations are established (i.e., which other operations are involved). For example:</p>
<pre><code> // (1) - this acquire-load synchronizes-with the release-CAS (11)
auto n = head.load(std::memory_order_acquire);
// (8) - this acquire-load synchronizes-with the release-CAS (11)
h.acquire(head, std::memory_order_acquire);
// (11) - this release-CAS synchronizes-with the acquire-load (1, 8)
if (head.compare_exchange_weak(expected, next, std::memory_order_release, std::memory_order_relaxed))
</code></pre>
<p>(see <a href="https://github.com/mpoeter/xenium/blob/master/xenium/michael_scott_queue.hpp" rel="nofollow noreferrer">https://github.com/mpoeter/xenium/blob/master/xenium/michael_scott_queue.hpp</a> for the full code)</p>
<p>For more details about the C++ memory model I can recommend this paper which I have co-authored: <a href="https://arxiv.org/abs/1803.04432" rel="nofollow noreferrer">Memory Models for C/C++ Programmers</a></p> | 2020-04-29 09:19:05.367000+00:00 | 2020-04-29 09:19:05.367000+00:00 | null | null | 61,493,121 | <p>I refer to the code in Lewiss Baker's coroutine tutorial.</p>
<p><a href="https://lewissbaker.github.io/2017/11/17/understanding-operator-co-await" rel="nofollow noreferrer">https://lewissbaker.github.io/2017/11/17/understanding-operator-co-await</a></p>
<pre><code>bool async_manual_reset_event::awaiter::await_suspend(
std::experimental::coroutine_handle<> awaitingCoroutine) noexcept
{
// Special m_state value that indicates the event is in the 'set' state.
const void* const setState = &m_event;
// Remember the handle of the awaiting coroutine.
m_awaitingCoroutine = awaitingCoroutine;
// Try to atomically push this awaiter onto the front of the list.
void* oldValue = m_event.m_state.load(std::memory_order_acquire);
do
{
// Resume immediately if already in 'set' state.
if (oldValue == setState) return false;
// Update linked list to point at current head.
m_next = static_cast<awaiter*>(oldValue);
// Finally, try to swap the old list head, inserting this awaiter
// as the new list head.
} while (!m_event.m_state.compare_exchange_weak(
oldValue,
this,
std::memory_order_release,
std::memory_order_acquire));
// Successfully enqueued. Remain suspended.
return true;
}
</code></pre>
<p>where m_state is just a <code>std::atomic<void *></code>.</p>
<pre><code>bool async_manual_reset_event::is_set() const noexcept
{
return m_state.load(std::memory_order_acquire) == this;
}
void async_manual_reset_event::reset() noexcept
{
void* oldValue = this;
m_state.compare_exchange_strong(oldValue, nullptr, std::memory_order_acquire);
}
void async_manual_reset_event::set() noexcept
{
// Needs to be 'release' so that subsequent 'co_await' has
// visibility of our prior writes.
// Needs to be 'acquire' so that we have visibility of prior
// writes by awaiting coroutines.
void* oldValue = m_state.exchange(this, std::memory_order_acq_rel);
if (oldValue != this)
{
// Wasn't already in 'set' state.
// Treat old value as head of a linked-list of waiters
// which we have now acquired and need to resume.
auto* waiters = static_cast<awaiter*>(oldValue);
while (waiters != nullptr)
{
// Read m_next before resuming the coroutine as resuming
// the coroutine will likely destroy the awaiter object.
auto* next = waiters->m_next;
waiters->m_awaitingCoroutine.resume();
waiters = next;
}
}
}
</code></pre>
<p>Note in <code>m_state.exchange</code> of the <code>set()</code> method, the comment above shows clearly why the call to exchange requires both acquire and release.</p>
<p>I wonder why in the <code>m_state.compare_exchange_weak</code> of the <code>await_suspend()</code> method, the third parameter is a std::memory_order_release but not a memory_order_acq_rel (the acquire is removed).</p>
<p>The author (Lewis) did explain that we need release in the compare_exchange_weak because we need to let later set() see the writes in compare_exchange_weak. But why don't we require other compare_exchange_weak in other threads to see the writes in the current compare_exchange_weak?</p>
<p>Is it because of release sequence? I.e., in a release chain (write release at first, and all the middle operations are "read acquire then write release" operations, and the final operation is read acquire), then you don't need to tell them to acquire in the middle?</p>
<p>In the following code, I tried to implement a shared lock,</p>
<pre><code> struct lock {
uint64_t exclusive : 1;
uint64_t id : 48;
uint64_t shared_count : 15;
};
std::atomic<lock> lock_ { {0, 0, 0} };
bool try_lock_shared() noexcept {
lock currentlock = lock_.load(std::memory_order_acquire);
if (currentlock.exclusive == 1) {
return false;
}
lock newlock;
do {
newlock = currentlock;
newlock.shared_count++;
}
while(!lock_.compare_exchange_weak(currentlock, newlock, std::memory_order_acq_rel) && currentlock.exclusive == 0);
return currentlock.exclusive == 0;
}
bool try_lock() noexcept {
uint64_t id = utils::get_thread_id();
lock currentlock = lock_.load(std::memory_order_acquire);
if (currentlock.exclusive == 1) {
assert(currentlock.id != id);
return false;
}
bool result = false;
lock newlock { 1, id, 0 };
do {
newlock.shared_count = currentlock.shared_count;
}
while(!(result = lock_.compare_exchange_weak(currentlock, newlock, std::memory_order_acq_rel)) && currentlock.exclusive == 0);
return result;
}
</code></pre>
<p>I used <code>lock_.compare_exchange_weak(currentlock, newlock, std::memory_order_acq_rel)</code> everywhere; can I safely replace them with <code>compare_exchange_weak(currentlock, newlock, std::memory_order_release, std::memory_order_acquire)</code>?</p>
<p>I could also see examples where <code>memory_order_release</code> is removed from <code>compare_exchange_strong</code> (see the <code>compare_exchange_strong</code> in the <code>reset()</code> function of Lewis's code), where you only need std::memory_order_acquire for compare_exchange_strong (but not release). I didn't really see memory_order_release removed from weak, nor memory_order_acquire removed from strong.</p>
<p>This made me wonder whether there's a deeper rule that I haven't understood.</p>
<p>Thanks.</p> | 2020-04-29 02:16:48.987000+00:00 | 2020-04-29 09:19:05.367000+00:00 | 2020-04-29 02:32:20.953000+00:00 | c++|c++11|concurrency | ['https://github.com/mpoeter/xenium/blob/master/xenium/michael_scott_queue.hpp', 'https://arxiv.org/abs/1803.04432'] | 2 |
3,063,647 | <p>It is possible, but it is very complicated! A simpler O(nlogn) time, O(1) space solution might be easier to code and better in terms of cache behaviour.</p>
<p>We will solve a problem different from yours, but your problem is trivial to solve once we solve that problem.</p>
<p>Consider the array to be</p>
<pre><code>b1, a1, b2, a2, ..., bn, an
</code></pre>
<p>and you have to convert this to </p>
<pre><code>a1, a2, ..., an, b1, b2, ..., bn
</code></pre>
<p>Working with indices 1 to 2n,</p>
<p>we see that this is given by</p>
<pre><code>i -> (n+1)*i (mod 2n+1).
</code></pre>
<hr>
<p><strong>An O(nlogn) time O(1) space solution</strong></p>
<p>We can use divide and conquer as follows.</p>
<p>First for some m close to n/2 convert</p>
<p><code>b1, a1, ..., bn , an</code> </p>
<p>to</p>
<pre><code>a1,a2,...am, b1,b2, ..bm, a(m+1), ..., an, b(m+1), ... , bn
</code></pre>
<p>by recursively applying to first 2m elements, and then the remaining.</p>
<p>Now all we need to do is cyclically shift the middle section by m spots (this can be done in O(n) time and O(1) space)</p>
<p>to give</p>
<pre><code>a1, a2, .., am , a(m+1), ..., an, b1, b2, ..., bm, b(m+1), ..., bn.
</code></pre>
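<p>A hedged Python sketch of this divide-and-conquer step is below. It is written for readability rather than as the final in-place routine: the rotation uses the classic three-reversal trick so it needs no extra array, but the recursion itself is what costs the O(log n) stack space discussed next.</p>
<pre><code>def _reverse(arr, i, j):                 # reverse arr[i:j] in place
    j -= 1
    while i < j:
        arr[i], arr[j] = arr[j], arr[i]
        i += 1
        j -= 1

def _rotate_left(arr, i, j, k):          # rotate arr[i:j] left by k, O(1) extra space
    _reverse(arr, i, i + k)
    _reverse(arr, i + k, j)
    _reverse(arr, i, j)

def unshuffle(arr, lo=0, hi=None):
    """Turn arr[lo:hi] = b1,a1,b2,a2,...,bn,an into a1..an,b1..bn."""
    if hi is None:
        hi = len(arr)
    n = (hi - lo) // 2
    if n < 2:
        if n == 1:
            arr[lo], arr[lo + 1] = arr[lo + 1], arr[lo]
        return
    m = n // 2
    unshuffle(arr, lo, lo + 2 * m)            # -> a1..am, b1..bm
    unshuffle(arr, lo + 2 * m, hi)            # -> a(m+1)..an, b(m+1)..bn
    _rotate_left(arr, lo + m, lo + m + n, m)  # move a(m+1)..an before b1..bm
</code></pre>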
<p>Of course, as IVlad pointed out, this needs O(logn) stack space. We can get around that by doing the following:</p>
<p>We have:</p>
<pre><code>b1 a1, b2 a2, .. bm am, b(m+1) a(m+1), ..., bn an
</code></pre>
<p>Now swap pairs in the latter part of the array to give</p>
<pre><code>b1 a1, b2 a2, .. bm am, a(m+1) b(m+1), ..., an bn
</code></pre>
<p>Now cyclic shift the elements at odd position: <code>b1, b2, .., bm, a(m+1), a(m+2) ..., a(n).</code></p>
<p>This gives something like</p>
<pre><code>a(m+1) a1, a(m+2) a2, ..., a(2m) am, a(2m+1) b(m+1),...,an b(n-m), b1 b(n-m+1),...,bm bn
</code></pre>
<p>Now again swap the latter part of the array to give</p>
<pre><code>a(m+1) a1, a(m+2) a2, ..., a(2m) am, b(m+1) a(2m+1),...,b(n-m) an,b(n-m+1) b1,..., bn bm
</code></pre>
<p>Now recursively solve the first part and second part to give</p>
<pre><code>[a1 a2 ... am][a(m+1) ... a(2m)] [a(2m+1) ...an b1 b2 .. bm][b(m+1) ... bn]
</code></pre>
<p>This works whether 2m >= n or not.</p>
<p>So, this is an O(nlogn) time and O(1) space algorithm.</p>
<hr>
<p><strong>An O(n) time O(1) space solution.</strong></p>
<p>The ideas used are similar to the ideas used in the following paper:
<a href="http://arxiv.org/abs/0805.1598" rel="nofollow noreferrer">A simple in-place algorithm for Inshuffle</a>.</p>
<p>You would need to read that paper to understand the below. I suggest you also read: <a href="https://stackoverflow.com/questions/2352542/how-to-master-in-place-array-modification-algorithms">How to master in-place array modification algorithms?</a></p>
<p>This is basically the inverse permutation of what is solved in the paper above.</p>
<p>It is enough to solve this when 2n+1 is a power of 3 = (3^m say), as we can use divide and conquer after that (like the O(nlogn) solution).</p>
<p>Now 2n+1 and n+1 are relatively prime, so working modulo 3^m, we see that n+1 <em>must</em> be some power of 2. (See that paper again to see why: basically any number modulo 3^m which is relatively prime to 3^m is a power of 2, again modulo 3^m.)</p>
<p>Say n+1 = 2^k (we don't know k yet and note this is modulo 3^m).</p>
<p>One way to find k: compute powers of n+1 modulo 3^m until the result becomes 1. This gives us k (and takes O(n) time at most).</p>
<p>Now we can see that the cycles of the permutation (see above paper/stackoverflow link for what that is) start at</p>
<p>2^a*3^b</p>
<p>where 0 <= a < k, and 0 <= b < m.</p>
<p>So you start with each possible pair (a,b) and follow the cycles of the permutation, and this gives an O(n) time, in-place algorithm, as you touch each element no more than a constant number of times!</p>
<p>This was a bit brief(!) and if you need more info, please let me know.</p> | 2010-06-17 16:38:50.317000+00:00 | 2010-06-18 02:56:26.727000+00:00 | 2017-05-23 12:19:39.187000+00:00 | null | 3,062,974 | <p>Suppose there is an array, we want to find everything in the odd index (index starting with 0), and move it to the end.
Everything in the even index move it to the beginning.
The relative order of all odd-index items and all even-index items is preserved.</p>
<p>i.e. if the array is </p>
<pre><code>a1 b1 a2 b2 ... an bn
</code></pre>
<p>after the operation it becomes</p>
<pre><code>a1 a2 a3 ... an b1 b2 ... bn
</code></pre>
<p>Can this be done in-place and in O(n) time?</p> | 2010-06-17 15:16:27.670000+00:00 | 2010-06-22 15:56:31.017000+00:00 | 2010-06-17 17:16:47.917000+00:00 | algorithm | ['http://arxiv.org/abs/0805.1598', 'https://stackoverflow.com/questions/2352542/how-to-master-in-place-array-modification-algorithms'] | 2 |
41,970,980 | <p>Not a dedicated way :(</p>
<p>There's currently no easy (dedicated) way of doing this with Keras.</p>
<p>A discussion is ongoing at <a href="https://groups.google.com/forum/#!topic/keras-users/oEecCWayJrM" rel="noreferrer">https://groups.google.com/forum/#!topic/keras-users/oEecCWayJrM</a>.</p>
<p>You may also be interested in this paper: <a href="https://arxiv.org/pdf/1608.04493v1.pdf" rel="noreferrer">https://arxiv.org/pdf/1608.04493v1.pdf</a>.</p> | 2017-02-01 02:31:51.707000+00:00 | 2017-04-05 15:00:20.283000+00:00 | 2017-04-05 15:00:20.283000+00:00 | null | 41,958,566 | <p>I'm trying to design a neural network using Keras with priority on prediction performance, and I cannot get sufficiently high accuracy by further reducing the number of layers and nodes per layer. I have noticed that very large portion of my weights are effectively zero (>95%). Is there a way to prune dense layers in hope of reducing prediction time?</p> | 2017-01-31 13:14:12.917000+00:00 | 2021-03-15 04:31:06.663000+00:00 | null | python-3.x|neural-network|keras|pruning | ['https://groups.google.com/forum/#!topic/keras-users/oEecCWayJrM', 'https://arxiv.org/pdf/1608.04493v1.pdf'] | 2 |
34,183,499 | <p>Have a look at <a href="https://arxiv.org/pdf/1707.09725.pdf#page=11" rel="nofollow noreferrer">my masters thesis, chapter 3</a>.</p>
<p>In general, there are no strict rules to follow when it comes to the net's architecture; it is largely a matter of accumulated experience. Exceptions are the input layer (nr. of features = nr. of neurons) and the output layer (in classification: nr. of classes = nr. of neurons).</p>
<p>However, there seem to be several trends / rules of thumb:</p>
<ul>
<li>For fully connected layers, use not "too few" neurons, but not more than about 3 times the last layer</li>
<li>If you have CNNs, dropout is REALLY important. Then you can have many layers / neurons and hope that dropout prevents overfitting</li>
<li><strong>Automatic topology creation</strong>: I haven't seen any of them in use often.
<ul>
<li><strong>Growing approaches</strong>: There are strategies like Cascade Correlation / Meiosis networks to start with a small network and make it bigger.</li>
<li><strong>Pruning approaches</strong>: There are strategies like Optimal Brain Damage / Optimal Brain Surgeon to start from a big network and make it smaller.</li>
<li><strong>Genetic approaches</strong>: NEAT (NeuroEvolution of Augmented Topologies)</li>
</ul></li>
<li>Bottleneck-layers are used when you want to use huge amounts of unlabeled data in an unsupervised fashion with (denoising) auto-encoders. I have seen that a couple of times.</li>
</ul>
<p>You might be interested in reading the <a href="http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf" rel="nofollow noreferrer">AlexNet</a> and the <a href="http://arxiv.org/abs/1409.4842" rel="nofollow noreferrer">GoogLeNet</a> papers</p> | 2015-12-09 16:07:28.793000+00:00 | 2017-08-01 06:21:27.940000+00:00 | 2017-08-01 06:21:27.940000+00:00 | null | 34,178,334 | <p>I was doing a literature review of deep learning, recently. Hinton in his papers <a href="http://www.cs.toronto.edu/~hinton/absps/ncfast.pdf" rel="nofollow">http://www.cs.toronto.edu/~hinton/absps/ncfast.pdf</a> <a href="http://www.cs.toronto.edu/~hinton/science.pdf" rel="nofollow">http://www.cs.toronto.edu/~hinton/science.pdf</a> uses a 784*500*500*2000*10 sized network for demonstrating RBM based pretraining + finetuning using BP on MNIST dataset
Is there any specific reason we choose same number of hidden units(500) in subsequent hidden layers and increased number(2000) in the last layer? In general how to choose hidden layers/units for RBM depending on dataset (from practical experience other than Hinton's RBM manual).</p>
<p>This was a brain teasing question to me for a long time. I would be grateful for the answer.</p> | 2015-12-09 12:01:02.460000+00:00 | 2017-08-01 06:21:27.940000+00:00 | null | machine-learning|deep-learning | ['https://arxiv.org/pdf/1707.09725.pdf#page=11', 'http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf', 'http://arxiv.org/abs/1409.4842'] | 3 |
62,739,284 | <p>If your ground truth contains the exact location of what you are trying to classify, use the ground truth to crop your images in an informed way. I.e. adjust the ground truth, if you are removing what you are trying to classify.</p>
<p>If you don't know the location of what you are classifying, you could</p>
<ol>
<li>attempt to train a classifier on your un-augmented dataset,</li>
<li>find out what regions of the images your classifier reacts to,</li>
<li>make note of these location</li>
<li>crop your images in an informed way</li>
<li>train a new classifier</li>
</ol>
<p>But how do you "find out, what regions your classifier reacts to"?
Multiple ways are described in <a href="https://arxiv.org/pdf/1311.2901" rel="nofollow noreferrer">Visualizing and Understanding Convolutional Networks</a> by Zeiler and Fergus:</p>
<p><a href="https://i.stack.imgur.com/fh4qQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fh4qQ.png" alt="Image by Zeiler et al. Visualization by occlusion." /></a>
Imagine your classifier classifies <em>breast cancer</em> or <em>no breast cancer</em>. Now simply take an image that contains positive information for breast cancer and occlude part of the image with some blank color (see gray square in image above, image by Zeiler et al.) and predict <em>cancer</em> or <em>not</em>. Now move the occluded square around. In the end you'll get rough predictions scores for all <em>parts</em> of your original image (see (d) in the image above), because when you covered up the important part that is responsible for a positive prediction, you (should) get a negative cancer prediction.</p>
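<p>A hedged sketch of that occlusion procedure in Python is shown below. It assumes a Keras-style binary classifier whose <code>predict</code> returns one score per image and inputs scaled to [0, 1]; everything else (patch size, stride, fill value) is an arbitrary choice for illustration:</p>
<pre><code>import numpy as np

def occlusion_map(model, image, patch=20, stride=10, fill=0.5):
    """Slide a grey square over the image and record the model's score each time.
    A large score drop marks a region the classifier relies on."""
    h, w, _ = image.shape
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = fill   # grey occluding square
            heat[i, j] = model.predict(occluded[np.newaxis])[0, 0]
    return heat
</code></pre>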
<p>If you have someone who can actually recognize cancer in an image, this is also a good way to check for and guard against confounding factors.</p>
<p>BTW: You might want to crop on-the-fly and randomize how you crop even more to generate way more samples.</p>
<p>If the 150x150 is already the <a href="https://en.wikipedia.org/wiki/Region_of_interest" rel="nofollow noreferrer">region of interest (ROI)</a> you could try the following data augmentations:</p>
<ul>
<li>use a <em>larger</em> patch, e.g. 170x170 that always contains your 150x150 patch</li>
<li>use a <em>larger</em> patch, e.g. 200x200, and scale it down to 150x150</li>
<li>add some gaussian noise to the image</li>
<li>rotate the image slightly (by random amounts)</li>
<li>change image contrast slightly</li>
<li>artificially emulate whatever other (image-)effects you see in the original dataset</li>
</ul> | 2020-07-05 10:00:45.513000+00:00 | 2020-07-06 06:24:52.077000+00:00 | 2020-07-06 06:24:52.077000+00:00 | null | 62,738,868 | <p>I have image patches from <strong>DDSM Breast Mammography</strong> that are <code>150x150</code> in size. I would like to augment my dataset by randomly cropping these images 2x times to <code>120x120</code> size. So, If my dataset contains <code>6500</code> images, augmenting it with random crop should get me to <code>13000</code> images. Thing is, I do NOT want to lose potential information in the image and possibly change ground truth label.</p>
<p>What would be best way to do this? Should I crop them randomly from <code>150x150</code> to <code>120x120</code> and hope for the best or maybe pad them first and then perform the cropping? What is the standard way to approach this problem?</p> | 2020-07-05 09:15:01.780000+00:00 | 2020-07-06 06:24:52.077000+00:00 | null | opencv|machine-learning|deep-learning|computer-vision|conv-neural-network | ['https://arxiv.org/pdf/1311.2901', 'https://i.stack.imgur.com/fh4qQ.png', 'https://en.wikipedia.org/wiki/Region_of_interest'] | 3 |
26,920,207 | <p>@Rafal discussed the case of trees. But what if you do not have trees? Here are my two cents:</p>
<p><strong>Mathematica approach</strong> </p>
<p>Mathematica has a built-in predicate to check whether two graphs are isomorphic. You can try it for 30 days if you do not have it. </p>
<p><strong>Check nauty</strong> </p>
<p><a href="http://cs.anu.edu.au/~bdm/nauty/" rel="nofollow">nauty</a> is a solver where you can download it and test isomorphic.</p>
<p><strong>Detect true negatives in advance</strong></p>
<p>You can detect true negatives in advance by simply computing and comparing some cheap numbers/sequences. This includes comparing the vertex and edge counts and the degree sequences. A pair of graphs passing this does not necessarily mean they are isomorphic, but it will reduce your search space (maybe drastically!).</p>
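<p>A sketch of that "cheap invariants first" filter in Python/networkx is below (the question asks about Java, but the idea carries over directly; atom elements and bond orders could be matched via the node/edge attribute arguments of the isomorphism check):</p>
<pre><code>import networkx as nx

def cheap_invariants(g):
    """Invariants that must match for two graphs to have any chance of being
    isomorphic: vertex/edge counts and the sorted degree sequence."""
    return (g.number_of_nodes(),
            g.number_of_edges(),
            tuple(sorted(d for _, d in g.degree())))

def probably_same_molecule(g1, g2):
    if cheap_invariants(g1) != cheap_invariants(g2):
        return False              # true negative detected cheaply
    # Only now pay for the expensive check.
    return nx.is_isomorphic(g1, g2)
</code></pre>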
<p>Most importantly, there is a <a href="http://arxiv.org/abs/1404.0818" rel="nofollow">recent</a> advancement of the problem stating that isomorphic tests are polynomial for graphs of bounded treewidth. Even if your graphs seems general, they may exhibit this property (or you can simply assume it in general). </p> | 2014-11-13 23:11:23.497000+00:00 | 2014-11-13 23:11:23.497000+00:00 | null | null | 26,880,816 | <p>I've seen <a href="https://stackoverflow.com/questions/3550114/how-to-store-molecules-in-memory">this question about the representation of molecules in memory</a>, and it makes sense to me (tl;dr represent it as a graph with atoms as nodes and bonds as edges). But now my question is this: how do we check and see if two molecules are equal? This could be generalized as <strong>how can we check equality of (acyclic) graphs?</strong> For now we'll ignore <a href="http://en.wikipedia.org/wiki/Stereoisomerism" rel="nofollow noreferrer">stereoisomers</a> and cyclical structures, such as the carbon ring in the example given in the first link.</p>
<p>Here's a more detailed description of my problem: For my <code>Molecule</code> class (as of now), I intend to have an array of <code>Atom</code>s and an array of <code>Bond</code>s. Each <code>Bond</code> will point to the two <code>Atom</code>s at either end, and will have a weight (i.e., the number of chemical bonds in that edge). In other words, this will most closely resemble an edge list graph. My first guess is to iterate over the <code>Atom</code>s in one molecule and try to find corresponding <code>Atom</code>s in the other molecule based on the <code>Bond</code>s that contain that <code>Atom</code>, but this is a rather naive approach, and the complexity seems pretty large (best guess is close to <em>O(n!)</em>. Yikes.). </p>
<p>Regardless of complexity, this approach seems like it would work in most cases, however it seems to break down for some molecules. Take these for example (notice the different location of the OH group):</p>
<pre><code> H H H OH H
| | | | |
H - C - C - C - C - C - H (2-Pentanol)
| | | | |
H H H H H
H H OH H H
| | | | |
H - C - C - C - C - C - H (3-Pentanol)
| | | | |
H H H H H
</code></pre>
<p>If we examine these molecules, for each atom in one molecule there is a unique same-element atom in the other molecule that has the same number and types of bonds, but these two molecules are clearly not the same, nor are they stereoisomers (which I'm not considering now). Instead they are <a href="http://en.wikipedia.org/wiki/Structural_isomer" rel="nofollow noreferrer">structural isomers</a>. Is there a way that we can check this relative structure as well? Would this be easier with an adjacency list instead of an edge list? Are there any graph equality algorithms out there that I should look into (ideally in Java)? I've looked a bit into <a href="http://en.wikipedia.org/wiki/Graph_canonization" rel="nofollow noreferrer">graph canonization</a>, but this seems like it could be NP-hard.</p>
<p><strong>Edit:</strong> Looking at the <a href="http://en.wikipedia.org/wiki/Graph_isomorphism" rel="nofollow noreferrer">Graph Isomorphism Problem Wikipedia Article</a>, it seems as if graphs with bounded degree have polynomial time solutions to this problem. Furthermore, planar graphs also have polynomial solutions (i.e., the edges only intersect at their endpoints). It seems to me that molecules satisfy both of these conditions, so what is this polynomial-time solution to this problem, or where can I find it? My Google searches are letting me down this time.</p> | 2014-11-12 06:40:21.483000+00:00 | 2014-11-13 23:11:23.497000+00:00 | 2017-05-23 12:07:44.357000+00:00 | algorithm|graph|equality | ['http://cs.anu.edu.au/~bdm/nauty/', 'http://arxiv.org/abs/1404.0818'] | 2 |
66,119,130 | <p>Based on the resolution of your sample, I can say that it's a crop from a much larger image; in its given context, the ball is therefore a fairly small object.</p>
<p>YOLO architectures are notorious for poor performance on small objects due to the reduced spatial resolution of their feature maps.</p>
<p>For detecting and then tracking small objects I recommend using an <a href="https://arxiv.org/pdf/1512.02325.pdf" rel="nofollow noreferrer">SSD</a> architecture, which produces its results from feature maps extracted at multiple depth levels; or try out a newer architecture with comparable inference-time performance such as <a href="https://arxiv.org/pdf/1911.09070.pdf" rel="nofollow noreferrer">EfficientDet</a>. Implementations of both can be found across a variety of frameworks.</p> | 2021-02-09 12:36:10.787000+00:00 | 2021-02-09 12:36:10.787000+00:00 | null | null | 66,028,198 | <p>I am new to the field of CV and am trying to build object detection with YOLO and object tracking with DeepSort.
I have some problems with the identification of objects in the video. Here is an example:
The sports ball is identified in the video, but when it is too close to the person the detector cannot identify it.</p>
<p>In this picture the Ball is identified:</p>
<p><a href="https://i.stack.imgur.com/8x1XR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8x1XR.png" alt="enter image description here" /></a></p>
<p>Here the ball is not identified:</p>
<p><a href="https://i.stack.imgur.com/BGiE5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BGiE5.png" alt="enter image description here" /></a></p>
<p>How can I improve the detection? I am using pre-trained yolov3(which is trained on the coco dataset) and DeepSort.</p> | 2021-02-03 13:07:35.043000+00:00 | 2021-09-30 13:38:29.440000+00:00 | null | opencv|object-detection|yolo | ['https://arxiv.org/pdf/1512.02325.pdf', 'https://arxiv.org/pdf/1911.09070.pdf'] | 2 |
13,163,159 | <p>As others have said, NLTK is probably the go-to tool for doing NLP in Python.</p>
<p>As for technique, you're looking for something like a similarity metric between pairs of words. For every word in the text, compute this for the content-bearing words in the title, and keep the top-N. Have a look at <a href="http://arxiv.org/abs/1203.1858" rel="nofollow">this paper</a> for a survey of approaches, and see what NLTK gives you in terms of functionality. There is masses of research on this stuff, though, and you'll probably be happy with something fairly simple (depending on exactly what your application is). <a href="http://en.wikipedia.org/wiki/Pointwise_mutual_information" rel="nofollow">Point-wise mutual information</a> is usually a good starting point.</p> | 2012-10-31 17:06:05.177000+00:00 | 2012-10-31 17:06:05.177000+00:00 | null | null | 13,162,409 | <p>Not sure how to phrase this question properly, but this is what I intend to achieve using the hypothetical scenario outlined below - </p>
<p>A user's email to me has just the SUBJECT and BODY, the subject being the topic of the email, and the body being a description of the topic in just one paragraph of max 1000 words. Now I would like to analyse this paragraph (in the BODY) using some computer language (Python, maybe), and then come up with a list of the most important words from the paragraph with respect to the topic mentioned in the SUBJECT field.</p>
<p>For example, if the topic of email is say iPhone, and the body is something like "the iPhone redefines user-interface design with super resolution and graphics. it is fully touch enabled and allows users to swipe the screen"</p>
<p>So the result I am looking for is a sort of list with the key terms from the paragraph as related to iPhone. Example - (user-interface, design, resolution, graphics, touch, swipe, screen). </p>
<p>So basically I am looking at picking the most relevant words from the paragraph. I am not sure what I can use or how to use it to achieve this result. Searching on Google, I read a little about Natural Language Processing, Python, classification, etc. I just need a general approach on how to go about this - what technology/language to use, which areas I have to read up on, etc.</p>
<p>Thanks!</p>
<blockquote>
<p>EDIT:::</p>
</blockquote>
<p>I have been reading up in the meantime. To be precise, I am looking at HOW TO do this, using WHAT TOOL:</p>
<p>Generate related tags from a body of text using NLP which are based on synonyms, morphological similarity, spelling errors and contextual analysis.</p> | 2012-10-31 16:21:06.173000+00:00 | 2012-10-31 17:06:05.177000+00:00 | 2012-10-31 16:49:39.583000+00:00 | python|nlp|classification|tagging|folksonomy | ['http://arxiv.org/abs/1203.1858', 'http://en.wikipedia.org/wiki/Pointwise_mutual_information'] | 2 |
47,274,192 | <p>Although this thread is pretty old, I came across a paper about Intel Secure Key that describes its random number generation, security, and performance aspects. The full paper is here (<a href="http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120" rel="nofollow noreferrer">http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120</a>), but the non-paywalled version is here (<a href="https://arxiv.org/abs/1707.02212" rel="nofollow noreferrer">https://arxiv.org/abs/1707.02212</a>).</p>
<p>In short, the best technology we have for random number generation is Intel Secure Key, which uses the RdRand and RdSeed instruction sets. It is a cryptographically-secure pseudorandom number generator that uses an on-chip entropy source to randomly seed the number generator. It's fully compliant with up-to-date security specs such as NIST SP800-90Ar1/B/C, FIPS-140-2, and ANSI X9.82.</p> | 2017-11-13 21:54:05.640000+00:00 | 2017-11-13 22:26:01.133000+00:00 | 2017-11-13 22:26:01.133000+00:00 | null | 3,549,721 | <p>With Intel's recent purchase of a well-known security company, I'm starting to think about what software w/could be more secure on a chip level. Examples I've come up with are: </p>
<ul>
<li>Random number generation</li>
<li>Encryption</li>
<li>Memory protection</li>
</ul>
<p>But is hardware level security any more secure than software based security? ( I would assume garbage in garbage out no matter what level you operate at) What are the design considerations for embedded security? What are the limitations? Finally, do you have any good resources for learning more about the topic?</p> | 2010-08-23 16:44:19.207000+00:00 | 2017-11-13 22:26:01.133000+00:00 | null | security|embedded|theory|microchip | ['http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120', 'https://arxiv.org/abs/1707.02212'] | 2 |
49,506,658 | <p>Text like this is less common. In papers there are two common ways. First, tables have been widely used to convey the structure of a NN purely with text. An example of this would be the VGG networks (<a href="https://arxiv.org/pdf/1409.1556.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1409.1556.pdf</a>). </p>
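<p>If you want such a table generated for you, most frameworks can print one straight from a model definition; for example, in Keras (a minimal sketch, assuming a TensorFlow 2.x install):</p>
<pre><code># Minimal sketch: print a CNN architecture as a text table with Keras.
from tensorflow.keras import layers, models
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
model.summary()  # prints layer names, output shapes and parameter counts
</code></pre>
<p>The printed summary covers most of the details (shapes, parameter counts) that a plain "INPUT -> CONV -> RELU -> FC" string leaves out.</p>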
<p>The other common way, which does not purely use text, is to show a graph, as is done by the ResNet paper (<a href="https://arxiv.org/pdf/1512.03385.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1512.03385.pdf</a>) or the Inception networks. </p> | 2018-03-27 07:22:25.947000+00:00 | 2018-03-27 07:22:25.947000+00:00 | null | null | 49,505,632 | <p>What is the best practice to represent a CNN network by text?</p>
<p>For example, the following text is a common way of representing a CNN network:</p>
<pre><code>INPUT -> CONV -> RELU -> FC
</code></pre>
<p>But the input size, filter (kernel) size, number of strides, padding and many other parameters of each layer are not specified.</p>
<p>Is there any best practice for representing a CNN network (for example AlexNet) by text?</p> | 2018-03-27 06:20:37.210000+00:00 | 2018-03-27 07:22:25.947000+00:00 | null | deep-learning|convolution | ['https://arxiv.org/pdf/1409.1556.pdf', 'https://arxiv.org/pdf/1512.03385.pdf'] | 2
23,941,878 | <p>When I first heard of it, I liked the article <a href="http://arxiv.org/html/0901.4016" rel="nofollow">A Proposal for Proquints: Identifiers that are Readable, Spellable, and Pronounceable</a>. It encodes data as a sequence of consonants and vowels. It's tied to the English language, though. (Because in German, <code>f</code> and <code>v</code> sound the same, so they should not both be used.) But I like the general idea.</p> | 2014-05-29 19:57:39.147000+00:00 | 2014-05-29 19:57:39.147000+00:00 | null | null | 9,639,236 | <p>Let's say you have a system in which a fairly long key value can be accurately communicated to a user on-screen, via email or via paper; but the user needs to be able to communicate the key back to you accurately by reading it over the phone, or by reading it and typing it back into some other interface. </p>
<p><strong>What is a "good" way to encode the key to make reading / hearing / typing it easy & accurate?</strong></p>
<p>This could be an invoice number, a document ID, a transaction ID or some other abstract value. Let's say for the sake of this discussion the underlying key value is a big number, say 40 digits in base 10.</p>
<p><strong>Some thoughts:</strong></p>
<p>Shorter keys are generally better</p>
<ul>
<li>a 40-digit base 10 value may not fit in the space given, and is easy to get lost in the middle of</li>
<li>the same value could be represented in base 16 in 33-34 digits</li>
<li>the same value could be represented in base 36 in 26 digits</li>
<li>the same value could be represented in base 64 in 22-23 digits</li>
</ul>
<p>Characters that can't be visually confused with each other are better</p>
<ul>
<li>e.g. an encoding that includes both O (oh) and 0 (zero), or S (ess) and 5 (five), could be bad</li>
<li>This issue depends on the font / face used to display the key, which you may be able to control in some cases (like printing on paper) but can't control in others (like web pages and email).</li>
<li>Also depends on whether you can control the exclusive use of upper and / or lower case -- e.g. capital D (dee) may look like O (oh) but lower case d (dee) would not; while lower case l (ell) looks like a 1 (one) while capital L (ell) would not. (With exceptions for especially exotic fonts / faces).</li>
</ul>
<p>Characters that can't be verbally / aurally confused with each other are better</p>
<ul>
<li>a (ay) 8 (eight)</li>
<li>B (bee) C (cee) D (dee) E (ee) g (gee) p (pee) t (tee) v (vee) z (zee) 3 (three)</li>
<li>This issue depends on the audio quality of the end-to-end channel -- bigger challenge if the expected user base could have a speech impediment, or may have to speak through a gas mask, or the communication channel could include CB radios or choppy VOIP phone systems.</li>
</ul>
<p>Adding a check digit or two would detect errors but not help resolve errors.</p>
<p>An alpha - bravo - charlie - delta type dialog can help with hearing errors, but not reading errors.</p>
<p><strong>Possible choices of encoding:</strong></p>
<ul>
<li>Base 64 -- compact, but too many hard-to-verbalize characters (underscore, dash etc.)</li>
<li>Base 34 -- 0-9 and A-Z but with O (oh) and I (aye) left out as the easiest to confuse with digits</li>
<li>Base 32 -- same as base 34 but leave out the 0 (zero) and 1 (one) as well (see the sketch below)</li>
</ul>
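<p>For concreteness, here is roughly what I mean by the base 32 option (a quick Python sketch just to illustrate; the alphabet is the digits and letters minus 0, 1, O and I):</p>
<pre><code># Rough sketch of a "base 32" encoding that avoids 0/1/O/I.
ALPHABET = "23456789ABCDEFGHJKLMNPQRSTUVWXYZ"  # 32 symbols
def encode(n):
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, r = divmod(n, 32)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))
def decode(s):
    n = 0
    for ch in s.upper():
        n = n * 32 + ALPHABET.index(ch)
    return n
key = 10**40 - 1  # an example 40-digit value
print(encode(key))
print(decode(encode(key)) == key)  # True
</code></pre>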
<p>Is there a generally recognized encoding that is a reasonable solution for this scenario?</p> | 2012-03-09 18:36:02.237000+00:00 | 2014-05-29 19:57:39.147000+00:00 | null | encoding|character-encoding|error-correction|human-readable | ['http://arxiv.org/html/0901.4016'] | 1 |
62,853,704 | <p>The answer is that it is impossible in general to generate a random unbiased integer in [0, <code>n</code>) in constant time. One notable exception is when the source of random numbers produces unbiased random bits and <code>n</code> is a power of 2.</p>
<p>For instance, assume we have a "true" random generator and can produce unbiased random bits. Then, unless <code>n</code> is a power of 2, there are only two possible ways to proceed:</p>
<ul>
<li>It can use modulo reduction (or Lemire's multiply-then-shift reduction). This will run in constant time, but introduce a bias (some numbers are slightly more likely to be generated than others).</li>
<li>It can use rejection sampling. This will introduce no bias, but can run forever in the worst case (even though it has an expected constant time complexity). Many kinds of algorithms fit in this category, including modulo reduction followed by a rejection step (which is necessary if <code>n</code> is not a power of 2), as well as the <a href="https://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Fast Dice Roller</a> (which uses random bits).</li>
</ul>
<p>(See my note on <a href="https://peteroupc.github.io/randomfunc.html#RNDINT_Random_Integers_in_0_N" rel="nofollow noreferrer">integer generating algorithms</a> for a survey of both kinds of algorithms. For a Fast Dice Roller implementation, see <a href="https://stackoverflow.com/a/62920514/815724">another answer of mine</a>.)</p>
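<p>For illustration, here is a minimal Python sketch of the Fast Dice Roller idea: it is unbiased, consumes one random bit per iteration, and its running time is bounded only in expectation (it terminates with probability 1 but has no fixed worst case). The <code>random_bit()</code> helper stands in for whatever unbiased bit source you have:</p>
<pre><code># Sketch of the Fast Dice Roller (Lumbroso, arXiv:1304.1916):
# returns an unbiased integer in [0, n) using only fair random bits.
import secrets
def random_bit():
    return secrets.randbits(1)  # stand-in for an unbiased bit source
def fast_dice_roller(n):
    v, c = 1, 0
    while True:
        v, c = v * 2, c * 2 + random_bit()
        if v >= n:
            if c < n:
                return c            # accept
            v, c = v - n, c - n     # reject, but recycle the leftover randomness
print([fast_dice_roller(6) for _ in range(10)])  # e.g. ten fair die rolls 0..5
</code></pre>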
<p>In this sense, Knuth and Yao showed in 1976 that any algorithm that produces random integers with a given probability, using only random bits, can be represented as a binary tree, where random bits indicate which way to traverse the tree and each leaf (endpoint) corresponds to an outcome. (Knuth and Yao, "The complexity of nonuniform random number generation", in Algorithms and Complexity, 1976.) In this case, each integer in [0, n) can occur with probability 1/n. And if 1/n has a non-terminating binary expansion (which will be the case if <code>n</code> is not a power of 2), this binary tree will necessarily either:</p>
<ul>
<li>have an "infinite" depth, or</li>
<li>include "rejection" leaves at the end of the tree,</li>
</ul>
<p>And in either case, the algorithm won't run in constant time.</p>
<p>Modulo or similar reductions are equivalent to a binary tree in which rejection leaves are replaced with labeled outcomes, but since there are more possible outcomes than rejection leaves, only some of the outcomes can take the place of the rejection leaves, introducing bias. The same kind of binary tree (and the same kind of bias) results if you stop rejecting after a set number of iterations. (See also chapter 15 of <em>Non-Uniform Random Variate Generation</em> by L. Devroye, 1986.)</p>
<p><strong>Therefore:</strong> In general, an integer generator can be <em>either</em> unbiased <em>or</em> constant-time, but not both.</p>
<p>If you can't tolerate the worst case of running forever, then the only thing you can do is set a fixed maximum number of rejections or use a reduction, both of which can introduce bias. However, this bias might be negligible depending on your application (e.g., if the chance the algorithm "fails" is negligible compared to the chance it "succeeds", for the application's purposes). There are also security aspects to random integer generation, which are too complicated to discuss in this answer.</p> | 2020-07-11 19:36:09.553000+00:00 | 2022-02-15 00:11:15.540000+00:00 | 2022-02-15 00:11:15.540000+00:00 | null | 33,288,102 | <p>In my embedded project, I have a biginteger class that handles arbitrary length integers. I would like to be able to generate a random bigint between 0 and an arbitrary number. Assume I have a quality source of random bytes.</p>
<p>All the implementations I have seen essentially do the same thing: </p>
<ol>
<li>Generate a big number with the correct number of bytes,</li>
<li>If it is greater than max, generate again.</li>
</ol>
<p>The problem I see with this implementation is that it could take an awfully long time. Imagine that <code>max = 2^2049-1</code> =(<code>01 FF .. FF</code>). This algorithm will generate 257 bytes, then check that the most significant byte is <code><=1</code>. So there is a 254/256 chance it has to generate a whole new 257 byte number. In the (admittedly unlikely) worst case, this loop could go on for minutes or years.</p>
<p>My question is:<br>
In the case where the generated number is too large, is there a way to keep most of the bytes I have already generated?<br>
Is it valid to just regenerate the most significant byte, or does that introduce bias? What about shifting the result right one digit? </p>
<p>Is there any way to make the time deterministic, while still avoiding bias?</p>
<p>--</p>
<p>Another edge case: <code>max = 2^2048 + 1</code> = (<code>01 00 .. 01</code>) In this case the most significant byte can be non-zero if the remaining bytes are 0s followed by a <code>00</code> or <code>01</code>. So most of the time, if the MSB is non-zero, than it will be invalid, and just regenerating that byte will never make it valid. But just force setting it to zero seems wrong also.</p> | 2015-10-22 18:26:02.457000+00:00 | 2022-02-15 00:11:15.540000+00:00 | 2016-01-05 08:29:49.567000+00:00 | random|language-agnostic|real-time|bigint | ['https://arxiv.org/abs/1304.1916', 'https://peteroupc.github.io/randomfunc.html#RNDINT_Random_Integers_in_0_N', 'https://stackoverflow.com/a/62920514/815724'] | 3 |
15,606,137 | <p>I think the best known algorithm for unweighted undirected graphs takes Õ(n^ω), where n = |V| and ω < 2.376 is the exponent of fast matrix multiplication. And O((|V|+|E|) * lg |V|) would give us Õ(n^2), which is better than the best known algorithm. Look at the introduction section of <a href="http://arxiv.org/abs/1011.6181" rel="nofollow">http://arxiv.org/abs/1011.6181</a> for a brief survey and references.</p> | 2013-03-25 01:05:43.610000+00:00 | 2013-03-25 18:59:41.903000+00:00 | 2013-03-25 18:59:41.903000+00:00 | null | 15,604,421 | <p>If you have a simple undirected graph G(V, E), how can you find the diameter of the graph in O((|V|+|E|) * lg |V|) running time?</p> | 2013-03-24 21:46:36.173000+00:00 | 2013-03-25 18:59:41.903000+00:00 | 2013-03-24 21:59:22.437000+00:00 | performance|algorithm|graph|big-o | ['http://arxiv.org/abs/1011.6181'] | 1
48,946,480 | <h2>Output Calibration</h2>
<p>One thing that I think is important to realise at first is that the outputs of a neural network may be poorly <em>calibrated</em>. What I mean by that is, the outputs it gives to different instances may result in a good ranking (images with label L tend to have higher scores for that label than images without label L), but these scores cannot always reliably be interpreted as probabilities (it may give very high scores, like <code>0.9</code>, to instances without the label, and just give even higher scores, like <code>0.99</code>, to instances with the label). I suppose whether or not this may happen depends, among other things, on your chosen loss function.</p>
<p>For more info on this, see for example: <a href="https://arxiv.org/abs/1706.04599" rel="noreferrer">https://arxiv.org/abs/1706.04599</a></p>
<hr>
<h2>Going through all classes 1 by 1</h2>
<p><strong>Class 0:</strong> AUC (area under curve) = 0.99. That's a very good score. Column 0 in your confusion matrix also looks fine, so nothing wrong here.</p>
<p><strong>Class 1:</strong> AUC = 0.44. That's quite terrible: lower than 0.5, which, if I'm not mistaken, pretty much means you're better off deliberately doing the <em>opposite</em> of what your network predicts for this label. </p>
<p>Looking at column 1 in your confusion matrix, it has pretty much the same scores everywhere. To me, this indicates that the network did not manage to learn a lot about this class, and pretty much just "guesses" according to the percentage of images that contained this label in training set (55.6%). Since this percentage dropped down to 50% in validation set, this strategy indeed means that it'll do slightly worse than random. Row 1 still has the highest number of all rows in this column though, so it appears to have learned at least a tiny little bit, but not much.</p>
<p><strong>Class 2:</strong> AUC = 0.96. That's very good. </p>
<p>Your interpretation for this class was that it's always predicted as not being present, based on the light shading of the entire column. I don't think that interpretation is correct though. See how it has a score >0 on the diagonal, and just 0s everywhere else in the column. It may have a relatively low score in that row, but it's easily separable from the other rows in the same column. You'll probably just have to set your threshold for choosing whether or not that label is present relatively low. I suspect this is due to the calibration thing mentioned above. </p>
<p>This is also why the AUC is in fact very good; it is possible to select a threshold such that most instances with scores above the threshold correctly have the label, and most instances below it correctly do not. That threshold may not be 0.5 though, which is the threshold you may expect if you assume good calibration. Plotting the ROC curve for this specific label may help you decide exactly where the threshold should be.</p>
<p><strong>Class 3:</strong> AUC = 0.9, quite good.</p>
<p>You interpreted it as always being detected as present, and the confusion matrix does indeed have a lot of high numbers in the column, but the AUC is good and the cell on the diagonal does have a sufficiently high value that it may be easily separable from the others. I suspect this is a similar case to Class 2 (just flipped around, high predictions everywhere and therefore a high threshold required for correct decisions).</p>
<p>If you want to be able to tell for sure whether a well-selected threshold can indeed correctly split most "positives" (instances with class 3) from most "negatives" (instances without class 3), you'll want to sort all instances according to predicted score for label 3, then go through the entire list and between every pair of consecutive entries compute the accuracy over validation set that you would get if you decided to place your threshold right there, and select the best threshold.</p>
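<p>A rough numpy sketch of that threshold search (the <code>scores</code> and <code>y_true</code> arrays here are hypothetical stand-ins for your validation-set predictions and ground truth for one label; it is quadratic and unoptimized, but fine at this data size):</p>
<pre><code>import numpy as np
def best_threshold(scores, y_true):
    """Pick the threshold that maximizes accuracy for one label."""
    s = np.sort(scores)
    best_acc, best_thr = 0.0, 0.5
    for i in range(len(s) - 1):
        thr = (s[i] + s[i + 1]) / 2.0          # midpoint between consecutive scores
        acc = np.mean((scores >= thr).astype(int) == y_true)
        if acc > best_acc:
            best_acc, best_thr = acc, thr
    return best_thr, best_acc
</code></pre>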
<p><strong>Class 4:</strong> same as class 0.</p>
<p><strong>Class 5:</strong> AUC = 0.01, obviously terrible. Also agree with your interpretation of confusion matrix. It's difficult to tell for sure why it's performing so poorly here. Maybe it is a difficult kind of object to recognize? There's probably also some overfitting going on (0 False Positives in training data judging from the column in your second matrix, though there are also other classes where this happens). </p>
<p>It probably also doesn't help that the proportion of label 5 images has increased going from training to validation data. This means that it was less important for the network to perform well on this label during training than it is during validation.</p>
<p><strong>Class 6:</strong> AUC = 0.52, only slightly better than random.</p>
<p>Judging by column 6 in the first matrix, this actually could have been a similar case to class 2. If we also take AUC into account though, it looks like it doesn't learn to rank instances very well either. Similar to class 5, just not as bad. Also, again, the training and validation distributions are quite different.</p>
<p><strong>Class 7:</strong> AUC = 0.65, rather average. Obviously not as good as class 2 for example, but also not as bad as you may interpret just from the matrix.</p>
<p><strong>Class 8:</strong> AUC = 0.97, very good, similar to class 3.</p>
<p><strong>Class 9:</strong> AUC = 0.82, not as good, but still good. The column in matrix has so many dark cells, and the numbers are so close, that the AUC is surprisingly good in my opinion. It was present in almost every image in training data, so it's no surprise that it gets predicted as being present often. Maybe some of those very dark cells are based only on a low absolute number of images? This would be interesting to figure out.</p>
<p><strong>Class 10:</strong> AUC = 0.09, terrible. A 0 on the diagonal is quite concerning (is your data labelled correctly?). It seems to get confused for classes 3 and 9 very often according to row 10 of the first matrix (do cotton and primary_incision_knives look a lot like secondary_incision_knives?). Maybe also some overfitting to training data.</p>
<p><strong>Class 11:</strong> AUC = 0.5, no better than random. Poor performance (and apparently excessively high scores in the matrix) is likely because this label was present in the majority of training images, but only a minority of validation images.</p>
<hr>
<h2>What else to plot / measure?</h2>
<p>To gain more insight into your data, I'd start out by plotting heatmaps of how often every class co-occurs (one for training and one for validation data). Cell (i, j) would be colored according to the ratio of images that contain both labels i and j. This would be a symmetric plot, with the diagonal cells colored according to those first lists of numbers in your question. Compare the two heatmaps, see where they are very different, and see if that can help to explain your model's performance.</p>
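<p>A minimal sketch of such a heatmap, assuming you can build a binary indicator matrix <code>labels</code> of shape (num_images, num_labels) for each dataset (the names here are placeholders):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def plot_cooccurrence(labels, title):
    labels = labels.astype(float)
    cooc = (labels.T @ labels) / labels.shape[0]  # ratio of images containing both i and j
    plt.imshow(cooc, vmin=0.0, vmax=1.0)
    plt.colorbar()
    plt.title(title)
    plt.xlabel("label j")
    plt.ylabel("label i")
    plt.show()
</code></pre>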
<p>Additionally, it may be useful to know (for both datasets) how many different labels each image has on average, and, for every individual label, how many other labels it shares an image with on average. For example, I suspect images with label 10 have relatively few other labels in the training data. This may dissuade the network from predicting label 10 if it recognises other things, and cause poor performance if label 10 does suddenly share images with other objects more regularly in the validation data. Since pseudocode may more easily get the point across than words, it could be interesting to print something like the following:</p>
<pre><code># Do all of the following once for training data, AND once for validation data
tot_num_labels = 0
for image in images:
    tot_num_labels += len(image.get_all_labels())
avg_labels_per_image = tot_num_labels / float(num_images)
print("Avg. num labels per image = ", avg_labels_per_image)
for label in range(num_labels):
    tot_shared_labels = 0
    for image in images_with_label(label):
        tot_shared_labels += (len(image.get_all_labels()) - 1)
    avg_shared_labels = tot_shared_labels / float(len(images_with_label(label)))
    print("On average, images with label ", label, " also have ", avg_shared_labels, " other labels.")
</code></pre>
<p>For just a single dataset this doesn't provide much useful information, but if you do it for training and validation sets you can tell that their distributions are quite different if the numbers are very different.</p>
<p>Finally, I am a bit concerned by how some columns in your first matrix have <em>exactly</em> the same mean prediction appearing over many different rows. I am not quite sure what could cause this, but that may be useful to investigate.</p>
<hr>
<h2>How to improve?</h2>
<p>If you didn't already, I'd recommend looking into <em>data augmentation</em> for your training data. Since you're working with images, you could try adding rotated versions of existing images to your data.</p>
<p>For your multi-label case specifically, where the goal is to detect different types of objects, it may also be interesting to try simply concatenating a bunch of different images (e.g. two or four images) together. You could then scale them down to the original image size, and as labels assign the union of the original sets of labels. You'd get funny discontinuities along the edges where you merge images, I don't know if that'd be harmful. Maybe it wouldn't for your case of multi-object detection, worth a try in my opinion.</p> | 2018-02-23 11:08:46.417000+00:00 | 2018-02-26 13:16:38+00:00 | 2018-02-26 13:16:38+00:00 | null | 48,872,738 | <p>I have a multi-label classification problem with 12 classes. I'm using <code>slim</code> of <code>Tensorflow</code> to train the model using the models pretrained on <code>ImageNet</code>. Here are the percentages of presence of each class in the training & validation </p>
<pre><code> Training Validation
class0 44.4 25
class1 55.6 50
class2 50 25
class3 55.6 50
class4 44.4 50
class5 50 75
class6 50 75
class7 55.6 50
class8 88.9 50
class9 88.9 50
class10 50 25
class11 72.2 25
</code></pre>
<p>The problem is that the model did not converge and the area under the <code>ROC</code> curve (<code>Az</code>) on the validation set was poor, something like:</p>
<pre><code> Az
class0 0.99
class1 0.44
class2 0.96
class3 0.9
class4 0.99
class5 0.01
class6 0.52
class7 0.65
class8 0.97
class9 0.82
class10 0.09
class11 0.5
Average 0.65
</code></pre>
<p>I had no clue why it works well for some classes and not for the others. I decided to dig into the details to see what the neural network is learning. I know that a confusion matrix is only applicable to binary or multi-class classification. Thus, to be able to draw it, I had to convert the problem into pairs of multi-class classifications. Even though the model was trained using <code>sigmoid</code> to provide a prediction for each class, for every single cell in the confusion matrix below I'm showing the average of the probabilities (obtained by applying the <code>sigmoid</code> function to the predictions of tensorflow) of the images where the class in the row of the matrix is present and the class in the column is not present. This was applied on the validation set images. This way I thought I could get more details about what the model is learning. I just circled the diagonal elements for display purposes.</p>
<p><a href="https://i.stack.imgur.com/WDNzF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WDNzF.png" alt="enter image description here"></a></p>
<p>My interpretation is:</p>
<ol>
<li>Classes 0 & 4 are detected present when they are present and not present where they are not. This means these classes are well detected.</li>
<li>Classes 2, 6 & 7 are always detected as not present. This is not what I'm looking for.</li>
<li>Classes 3, 8 & 9 are always detected as present. This is not what I'm looking for. This can be applied to the class 11.</li>
<li>Class 5 is detected present when it is not present and detected as not present when it is present. It is inversely detected.</li>
<li>Classes 3 & 10: I don't think we can extract too much information for these 2 classes.</li>
</ol>
<p>My problem is the interpretation.. I'm not sure where the problem is and I'm not sure if there is a bias in the dataset that produce such results. I'm also wondering if there are some metrics that can help in multi-label classification problems? Can u please share with me your interpretation for such confusion matrix? and what/where to look next? some suggestions for other metrics would be great.</p>
<p>Thanks.</p>
<p><strong>EDIT:</strong></p>
<p>I converted the problem to multi-class classification, so for each pair of classes (e.g. 0,1) I compute the probability p(class 0, class 1), denoted as <code>p(0,1)</code>:
I take the predictions for tool 1 on the images where tool 0 is present and tool 1 is not present, convert them to probabilities by applying the sigmoid function, and then show the mean of those probabilities. For <code>p(1, 0)</code>, I do the same but now for tool 0, using the images where tool 1 is present and tool 0 is not present. For <code>p(0, 0)</code>, I use all the images where tool 0 is present. Considering <code>p(0,4)</code> in the image above, N/A means there are no images where tool 0 is present and tool 4 is not present. </p>
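<p>In code, each cell is computed roughly like this (numpy sketch, where <code>probs</code> holds the sigmoid outputs and <code>y</code> the binary ground-truth labels of the validation set):</p>
<pre><code>import numpy as np
def pairwise_mean_predictions(probs, y):
    """m[i, j] = mean prediction for class j on images with class i present and j absent."""
    num_labels = y.shape[1]
    m = np.full((num_labels, num_labels), np.nan)   # NaN marks the N/A cells
    for i in range(num_labels):
        for j in range(num_labels):
            if i == j:
                mask = y[:, i] == 1                  # p(i, i): class i present
            else:
                mask = (y[:, i] == 1) & (y[:, j] == 0)
            if mask.any():
                m[i, j] = probs[mask, j].mean()
    return m
</code></pre>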
<p>Here are the number of images for the 2 subsets:</p>
<ol>
<li>169320 images for training </li>
<li>37440 images for validation</li>
</ol>
<p>Here is the confusion matrix computed on the training set (computed the same way as on the validation set described previously) but this time the color code is the number of images used to compute each probability:
<a href="https://i.stack.imgur.com/hxH2E.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hxH2E.png" alt="enter image description here"></a></p>
<p><strong>EDITED:</strong>
For data augmentation, I apply a random translation, rotation and scaling to each input image to the network. Moreover, here is some information about the tools:</p>
<pre><code>class 0 shape is completely different than the other objects.
class 1 resembles strongly to class 4.
class 2 shape resembles to class 1 & 4 but it's always accompanied by an object different than the others objects in the scene. As a whole, it is different than the other objects.
class 3 shape is completely different than the other objects.
class 4 resembles strongly to class 1
class 5 have common shape with classes 6 & 7 (we can say that they are all from the same category of objects)
class 6 resembles strongly to class 7
class 7 resembles strongly to class 6
class 8 shape is completely different than the other objects.
class 9 resembles strongly to class 10
class 10 resembles strongly to class 9
class 11 shape is completely different than the other objects.
</code></pre>
<p><strong>EDITED:</strong>
Here is the output of the code proposed below for the training set:</p>
<pre><code>Avg. num labels per image = 6.892700212615167
On average, images with label 0 also have 6.365296803652968 other labels.
On average, images with label 1 also have 6.601033718926901 other labels.
On average, images with label 2 also have 6.758548914659531 other labels.
On average, images with label 3 also have 6.131520940484937 other labels.
On average, images with label 4 also have 6.219187208527648 other labels.
On average, images with label 5 also have 6.536933407946279 other labels.
On average, images with label 6 also have 6.533908387864367 other labels.
On average, images with label 7 also have 6.485973817793214 other labels.
On average, images with label 8 also have 6.1241642788920725 other labels.
On average, images with label 9 also have 5.94092288040875 other labels.
On average, images with label 10 also have 6.983303518187239 other labels.
On average, images with label 11 also have 6.1974066621953945 other labels.
</code></pre>
<p>For the validation set:</p>
<pre><code>Avg. num labels per image = 6.001282051282051
On average, images with label 0 also have 6.0 other labels.
On average, images with label 1 also have 3.987080103359173 other labels.
On average, images with label 2 also have 6.0 other labels.
On average, images with label 3 also have 5.507731958762887 other labels.
On average, images with label 4 also have 5.506459948320414 other labels.
On average, images with label 5 also have 5.00169779286927 other labels.
On average, images with label 6 also have 5.6729452054794525 other labels.
On average, images with label 7 also have 6.0 other labels.
On average, images with label 8 also have 6.0 other labels.
On average, images with label 9 also have 5.506459948320414 other labels.
On average, images with label 10 also have 3.0 other labels.
On average, images with label 11 also have 4.666095890410959 other labels.
</code></pre>
<p><strong>Comments:</strong>
I think it is not only related to the difference between the distributions, because if the model were able to generalize class 10 well (meaning the object was recognized properly during the training process, like class 0), the accuracy on the validation set would be good enough. I mean that the problem lies in the training set per se and in how it was built, more than in the difference between the two distributions. It could be: the frequency of presence of the class, objects that resemble each other strongly (as in the case of class 10, which strongly resembles class 9), bias inside the dataset, or thin objects (representing maybe 1 or 2% of the pixels in the input image, like class 2). I'm not saying that the problem is one of these, but I just wanted to point out that I think it's more than the difference between the two distributions.</p> | 2018-02-19 19:13:30.867000+00:00 | 2018-02-26 17:28:14.130000+00:00 | 2018-02-26 17:28:14.130000+00:00 | python|tensorflow|machine-learning|deep-learning|confusion-matrix | ['https://arxiv.org/abs/1706.04599'] | 1
54,079,462 | <blockquote>
<p>The differences might somehow reflect the discussion whether learning
rate decay is even needed when applying <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">Adam</a>.</p>
</blockquote>
<ol>
<li>Adam updates each parameter with an individual learning rate. This means that every parameter in the network has a specific learning rate associated with it.</li>
<li>The individual learning rates for the parameters are computed using the initial learning rate as an upper limit. This means that every single learning rate can vary from 0 (no update) to the initial learning rate.</li>
<li>The learning rates adapt themselves during training steps, but if you want to be sure that every update step does not exceed an upper limit, you can then lower your initial (global) learning rate using exponential decay. </li>
</ol>
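<p>For reference, the <code>decay</code> argument in older Keras versions rescales that global upper limit over time, roughly like this (a sketch of the idea; the exact behaviour can differ between versions):</p>
<pre><code># Rough sketch of Keras-style time-based decay of the global learning rate.
# Adam's per-parameter adaptation then happens on top of this upper limit.
def decayed_lr(initial_lr, decay, iteration):
    return initial_lr * (1.0 / (1.0 + decay * iteration))
print(decayed_lr(0.001, 1e-4, 0))      # 0.001 at the first update
print(decayed_lr(0.001, 1e-4, 10000))  # halved after 10,000 updates
</code></pre>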
<p>So these reasons show why there is a discussion about whether learning rate decay with Adam is necessary after all.</p> | 2019-01-07 17:59:36.723000+00:00 | 2019-01-07 17:59:36.723000+00:00 | null | null | 53,088,856 | <p>Why does the Keras implementation of the Adam optimizer have the decay argument while Tensorflow's doesn't? And what is the idea behind this argument?</p> | 2018-10-31 17:16:34.947000+00:00 | 2020-04-20 18:43:27.370000+00:00 | 2020-04-20 18:43:27.370000+00:00 | tensorflow|neural-network|keras|deep-learning | ['https://arxiv.org/abs/1412.6980'] | 1
56,357,630 | <p>20k training instances is extremely small training data for neural machine translation. If you want to train anything on this data, you should use as few parameters as possible and strong regularization (dropout, L2, something like <a href="https://arxiv.org/abs/1808.07512" rel="nofollow noreferrer">SwitchOut</a>).</p>
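<p>For example, regularization can be pushed into the recurrent layers themselves (Keras sketch; the values are placeholders that need tuning):</p>
<pre><code># Sketch: stronger regularization for a small-data seq2seq model in Keras.
from keras.layers import LSTM, Bidirectional
from keras.regularizers import l2
encoder = Bidirectional(LSTM(128,
                             dropout=0.3,             # dropout on the inputs
                             recurrent_dropout=0.3,   # dropout on recurrent connections
                             kernel_regularizer=l2(1e-4)))
</code></pre>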
<p>If I understand your code correctly, you are using the vanilla encoder-decoder architecture of <a href="https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf" rel="nofollow noreferrer">Sutskever et al.</a> Although it is a simple model, it has relatively weak modeling capabilities. On such a small dataset, an attention model might be more suitable.</p> | 2019-05-29 09:45:15.500000+00:00 | 2019-05-29 09:45:15.500000+00:00 | null | null | 56,205,153 | <p>I've been training a machine translation model (from English to Vietnamese) with an RNN/LSTM on 25,000 example pairs (training set -> 20,000, test set -> 5,000). The model I used is below, but val_acc always plateaus at ~0.37 and does not increase, although I have tried some other models and trained for about 100 epochs:</p>
<pre><code>from keras.models import Sequential
from keras.layers import (Embedding, Bidirectional, LSTM, Dropout,
                          RepeatVector, TimeDistributed, Dense)
# (wrapped in a function and given imports so the snippet runs as posted)
def define_model(src_vocab, tar_vocab, src_timesteps, tar_timesteps, n_units):
    model = Sequential()
    model.add(Embedding(src_vocab, n_units, input_length=src_timesteps, mask_zero=True))
    model.add(Bidirectional(LSTM(n_units)))
    model.add(Dropout(0.2))
    model.add(RepeatVector(tar_timesteps))
    model.add(Bidirectional(LSTM(n_units, return_sequences=True)))
    model.add(Dropout(0.2))
    model.add(TimeDistributed(Dense(512, activation='relu')))
    model.add(Dropout(0.2))
    model.add(TimeDistributed(Dense(tar_vocab, activation='softmax')))
    return model
</code></pre>
<p>I want to prevent the model from overfitting; I hope you can help me solve the problem.</p> | 2019-05-19 05:58:06.130000+00:00 | 2019-05-29 09:45:15.500000+00:00 | 2019-05-19 06:06:14.467000+00:00 | lstm|recurrent-neural-network|machine-translation|encoder-decoder | ['https://arxiv.org/abs/1808.07512', 'https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf'] | 2
28,435,961 | <p>For part (1) of your question, there are a couple of relatively recent techniques that come to mind.</p>
<p>The first one is a type of feedforward layer called "maxout" which computes a piecewise linear output function of its inputs.</p>
<p>Consider a traditional neural network unit with <code>d</code> inputs and a linear transfer function. We can describe the output of this unit as a function of its input <code>z</code> (a vector with <code>d</code> elements) as <code>g(z) = w z</code>, where <code>w</code> is a vector with <code>d</code> weight values.</p>
<p>In a maxout unit, the output of the unit is described as</p>
<pre><code>g(z) = max_k w_k z
</code></pre>
<p>where <code>w_k</code> is a vector with <code>d</code> weight values, and there are <code>k</code> such weight vectors <code>[w_1 ... w_k]</code> <em>per unit</em>. Each of the weight vectors in the maxout unit computes some linear function of the input, and the <code>max</code> combines all of these linear functions into a single, convex, piecewise linear function. The individual weight vectors can be learned by the network, so that in effect each linear transform learns to model a specific part of the input (<code>z</code>) space.</p>
<p>You can read more about maxout networks at <a href="http://arxiv.org/abs/1302.4389" rel="nofollow">http://arxiv.org/abs/1302.4389</a>.</p>
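<p>A tiny numpy sketch of a single maxout unit, just to make the equation above concrete (k weight vectors, one piecewise-linear output; bias terms omitted as in the description):</p>
<pre><code># Maxout unit: g(z) = max_k (w_k . z), with k weight vectors per unit.
import numpy as np
d, k = 5, 3                      # input size, number of linear pieces
rng = np.random.default_rng(0)
W = rng.normal(size=(k, d))      # k weight vectors, each of length d
z = rng.normal(size=d)           # input vector
g = np.max(W @ z)                # output of one maxout unit
print(g)
</code></pre>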
<p>The second technique that has recently been developed is the "parametric relu" unit. In this type of unit, all neurons in a network layer compute an output <code>g(z) = max(0, w z) + a min(w z, 0)</code>, as compared to the more traditional rectified linear unit, which computes <code>g(z) = max(0, w z)</code>. The parameter <code>a</code> is shared across all neurons in a layer in the network and is learned along with the weight vector <code>w</code>.</p>
<p>The prelu technique is described by <a href="http://arxiv.org/abs/1502.01852" rel="nofollow">http://arxiv.org/abs/1502.01852</a>.</p>
<p>Maxout units have been shown to work well for a number of image classification tasks, particularly when combined with dropout to prevent overtraining. It's unclear whether the parametric relu units are extremely useful in modeling images, but the prelu paper gets really great results on what has for a while been considered the benchmark task in image classification.</p> | 2015-02-10 15:51:55.467000+00:00 | 2015-02-10 15:51:55.467000+00:00 | null | null | 28,429,971 | <p>The neural network applications I've seen always learn the weights of their inputs and use fixed "hidden layers".</p>
<p>But I'm wondering about the following techniques:</p>
<p>1) fixed inputs, but the hidden layers are no longer fixed, in the sense that the functions of the input they compute can be tweaked (learned)</p>
<p>2) fixed inputs, but the hidden layers are no longer fixed, in the sense that although they have clusters which compute fixed functions (multiplication, addition, etc... just like ALUs in a CPU or GPU) of their inputs, the weights of the connections between them and between them and the input can be learned (this should in some ways be equivalent to 1) )</p>
<p>These could be used to model systems for which we know the inputs and the output but not how the input is turned into the output (figuring out what is inside a "black box"). Do such techniques exist and if so, what are they called?</p> | 2015-02-10 10:59:39.643000+00:00 | 2015-02-10 15:51:55.467000+00:00 | null | machine-learning|neural-network|black-box | ['http://arxiv.org/abs/1302.4389', 'http://arxiv.org/abs/1502.01852'] | 2 |
62,029,335 | <p>I also found the Tensorflow documentation on weight pruning to be quite <em>sparse</em>, so I spent some quality time with the debugger to figure out how everything works.
<br><br></p>
<h1>How Pruning Schedules Work</h1>
<p>At the most basic level, the Pruning Schedule is simply a function that takes the step as an input and produces a sparsity percentage. That sparsity value is then used to generate a mask, which is used to prune out weights with an absolute value less than the <em>k - 1</em> cutoff value given by the absolute-value weight distribution and the sparsity percentage.</p>
<h1>PolynomialDecay</h1>
<p>Class definition: <a href="https://github.com/tensorflow/model-optimization/blob/c2642e5de64bb7709310bd7775de84b4765b359a/tensorflow_model_optimization/python/core/sparsity/keras/pruning_schedule.py#L183" rel="noreferrer">Github Link</a><br>
The comments included with the class definition above helped me understand how the PolynomialDecay scheduler works.</p>
<blockquote>
<p>Pruning rate grows rapidly in the beginning from initial_sparsity, but then
plateaus slowly to the target sparsity. </p>
<p>The function applied is</p>
<p>current_sparsity = final_sparsity + (initial_sparsity - final_sparsity)
* (1 - (step - begin_step)/(end_step - begin_step)) ^ exponent</p>
</blockquote>
<p>By the above equation, when <code>step == begin_step</code> then <code>current_sparsity = initial_sparsity</code>. Thus, the weights will be pruned to the <code>initial_sparsity</code> on the step specified by the <code>begin_step</code> parameter.
<br><br>
I would agree with your assessment, in that you would usually want to start pruning at a lower sparsity than 50%, but I do not have any published research I can cite to back up that claim. You may be able to find more information in the <a href="https://arxiv.org/abs/1710.01878" rel="noreferrer">paper</a> cited with the PolynomialDecay class definition, although I have not had a chance to read it myself.</p>
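<p>To make the schedule concrete, you can evaluate that formula directly (plain-Python sketch using the tutorial's 50% -> 90% settings, with the exponent set to 3 for illustration):</p>
<pre><code># Evaluate the PolynomialDecay formula quoted above.
def poly_sparsity(step, begin_step, end_step, initial=0.5, final=0.9, exponent=3):
    t = min(max((step - begin_step) / float(end_step - begin_step), 0.0), 1.0)
    return final + (initial - final) * (1 - t) ** exponent
print(poly_sparsity(2000, 2000, 10000))   # 0.5 -> pruned straight to initial_sparsity
print(poly_sparsity(6000, 2000, 10000))   # ~0.85 halfway through
print(poly_sparsity(10000, 2000, 10000))  # 0.9 -> final_sparsity
</code></pre>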
<h1>ConstantSparsity</h1>
<p>Class definition: <a href="https://github.com/tensorflow/model-optimization/blob/c2642e5de64bb7709310bd7775de84b4765b359a/tensorflow_model_optimization/python/core/sparsity/keras/pruning_schedule.py#L137" rel="noreferrer">Github Link</a><br>
The purpose of this scheduler appears to be pretty limited. With every valid prune step, the <code>target_sparsity</code> is returned. As such, multiple pruning steps are very much redundant. The use case for this scheduler appears to be for a one time prune during training. The ability to prune with this scheduler multiple times is to align it with its parent abstract class and other pruning schedulers.</p>
<h1>Creating Your Own Pruning Scheduler</h1>
<p>If the two above schedulers do not float your boat, the abstract class <a href="https://github.com/tensorflow/model-optimization/blob/c2642e5de64bb7709310bd7775de84b4765b359a/tensorflow_model_optimization/python/core/sparsity/keras/pruning_schedule.py#L23" rel="noreferrer">PruningSchedule</a> exposes an endpoint which makes it very easy to create your own pruning scheduler, as convoluted as it may be. Below is an example I created myself.<br><br>
Disclaimer: this scheduler is a creation of a 19 year-old college student's imagination and has no basis in any published literature. </p>
<pre><code>PruningSchedule = tfmot.sparsity.keras.PruningSchedule
class ExponentialPruning(PruningSchedule):
    def __init__(self, rate=0.01, begin_step=0, frequency=100, max_sparsity=0.9):
        self.rate = rate
        self.begin_step = begin_step
        self.frequency = frequency
        self.max_sparsity = max_sparsity
        # Validation functions provided by the parent class
        # The -1 parameter is for the end_step
        # as this pruning schedule does not have one
        # The last true value is a boolean flag which says it is okay
        # to have no end_step
        self._validate_step(self.begin_step, -1, self.frequency, True)
        self._validate_sparsity(self.max_sparsity, 'Max Sparsity')
    def __call__(self, step):
        # Sparsity calculation endpoint
        # step is a integer tensor
        # The sparsity returned by __call__ must be a tensor
        # of dtype=tf.float32, so tf.math is required.
        # In the logic below, you can assume that a valid
        # pruning step is passed.
        p = tf.math.divide(
            tf.cast(step - self.begin_step, tf.float32),
            tf.constant(self.frequency, dtype=tf.float32)
        )
        sparsity = tf.math.subtract(
            tf.constant(1, dtype=tf.float32),
            tf.math.pow(
                tf.constant(1 - self.rate, dtype=tf.float32),
                p
            )
        )
        sparsity = tf.cond(
            tf.math.greater(sparsity, tf.constant(self.max_sparsity, dtype=tf.float32)),
            lambda: tf.constant(self.max_sparsity, dtype=tf.float32),
            lambda: sparsity
        )
        # This function returns a tuple of length 2
        # The first value determines if pruning should occur on this step
        # I recommend using the parent class function below for this purpose
        # The negative one value denotes no end_step
        # The second value is the sparsity to prune to
        return (self._should_prune_in_step(step, self.begin_step, -1, self.frequency),
                sparsity)
    def get_config(self):
        # A function required by the parent class
        # return the class_name and the input parameters as
        # done below
        return {
            'class_name': self.__class__.__name__,
            'config': {
                'rate': self.rate,
                'begin_step': self.begin_step,
                'frequency': self.frequency,
                'max_sparsity': self.max_sparsity
            }
        }
</code></pre>
<h1>Using a Pruning Scheduler</h1>
<p>If you only would like certain layers to be pruned, rather than all prunable layers, you can call the <code>prune_low_magnitude</code> function on a layer which you are adding to your model.</p>
<pre><code>prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
model = keras.models.Sequential()
...
model.add(prune_low_magnitude(keras.layers.Dense(8, activation='relu', kernel_regularizer=keras.regularizers.l1(0.0001)),
ExponentialPruning(rate=1/8)))
</code></pre>
<p>Also make sure to pass a <code>UpdatePruningStep</code> instance to the training callbacks:</p>
<pre><code>m.fit(train_input, train_labels, epochs=epochs, validation_data=[test_input, test_labels],
callbacks=[UpdatePruningStep()])
</code></pre> | 2020-05-26 18:54:20.057000+00:00 | 2020-05-26 18:54:20.057000+00:00 | null | null | 60,005,900 | <p>I was trying the tutorial <a href="https://www.tensorflow.org/model_optimization/guide/pruning/pruning_with_keras#train_a_pruned_mnist" rel="noreferrer">TensorFlow 2.0 Magnitude-based weight pruning with Keras</a>
and came across the parameter <em>initial_sparsity</em></p>
<pre><code>import tensorflow_model_optimization as tfmot
from tensorflow_model_optimization.sparsity import keras as sparsity
import numpy as np
epochs = 12
num_train_samples = x_train.shape[0]
end_step = np.ceil(1.0 * num_train_samples / batch_size).astype(np.int32) * epochs
print('End step: ' + str(end_step))
pruning_params = {
'pruning_schedule': sparsity.PolynomialDecay(initial_sparsity=0.50,
final_sparsity=0.90,
begin_step=2000,
end_step=end_step,
frequency=100)
}
</code></pre>
<p>The tutorial says:</p>
<blockquote>
<p>The parameter used here means:</p>
<p><strong>Sparsity</strong> PolynomialDecay is used across the whole training process. We start at the sparsity level 50% and gradually train the
model to reach 90% sparsity. X% sparsity means that X% of the weight
tensor is going to be pruned away.</p>
</blockquote>
<p>My question is, shouldn't you start with <strong>initial_sparsity</strong> of 0% and then prune 90% of the weights off?</p>
<p>What does starting with an <strong>initial_sparsity</strong> of 50% mean? Does this mean that 50% of the weights are pruned to begin with, and pruning then continues until 90% sparsity is reached?</p>
<p>Also, for <strong>tfmot.sparsity.keras.ConstantSparsity</strong>, the API is as follows:</p>
<pre><code>pruning_params_unpruned = {
'pruning_schedule': sparsity.ConstantSparsity(
target_sparsity=0.0, begin_step=0,
end_step = 0, frequency=100
)
}
</code></pre>
<blockquote>
<p>Initializes a Pruning schedule with constant sparsity.</p>
<p>Sparsity is applied in the interval [begin_step, end_step] every
frequency steps. At each applicable step, the sparsity(%) is constant.</p>
</blockquote>
<p>Does this mean that if a neural network model is already at a sparsity level of 50% and the <em>target_sparsity = 0.5</em>, the pruning schedule will do one of the following:</p>
<ol>
<li>No pruning, since the model is already at a pruned level of 50%</li>
<li>It further prunes 50% of the weights of the already (50% pruned) model</li>
</ol>
<p>You can read about it in <a href="https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/sparsity/keras/PolynomialDecay" rel="noreferrer">PolynomialDecay</a> and in <a href="https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/sparsity/keras/ConstantSparsity" rel="noreferrer">ConstantSparsity</a></p>
<p>Thanks</p> | 2020-01-31 14:32:52.070000+00:00 | 2020-05-26 18:54:20.057000+00:00 | 2020-01-31 17:45:42.777000+00:00 | python|tensorflow|neural-network | ['https://github.com/tensorflow/model-optimization/blob/c2642e5de64bb7709310bd7775de84b4765b359a/tensorflow_model_optimization/python/core/sparsity/keras/pruning_schedule.py#L183', 'https://arxiv.org/abs/1710.01878', 'https://github.com/tensorflow/model-optimization/blob/c2642e5de64bb7709310bd7775de84b4765b359a/tensorflow_model_optimization/python/core/sparsity/keras/pruning_schedule.py#L137', 'https://github.com/tensorflow/model-optimization/blob/c2642e5de64bb7709310bd7775de84b4765b359a/tensorflow_model_optimization/python/core/sparsity/keras/pruning_schedule.py#L23'] | 4 |
48,243,703 | <p>Actually - there is no good answer to your question. Most of the architectures are usually carefully designed and finetuned during many experiments. I could share with you some of the rules of thumb one should apply when designing one's own architecture:</p>
<ol>
<li><p><strong>Avoid a <em>dimension collapse</em> in the first layer.</strong> Let's assume that your input filter has a <code>(n, n)</code> spatial shape for an <code>RGB</code> image. In this case, it is good practice to set the number of filters to be greater than <code>n * n * 3</code>, as this is the dimensionality of the input to a single filter. If you set a smaller number - you could suffer from the fact that many useful pieces of information about the image are lost due to an initialization which dropped informative dimensions. Of course - this is not a general rule - e.g. for texture recognition, where image complexity is lower - a small number of filters might actually help.</p></li>
<li><p><strong>Think more about volume than filter number</strong> - when setting the number of filters it's important to think about the volume change instead of the change in filter numbers between consecutive layers. E.g. in <code>VGG</code> - even though the number of filters doubles after a pooling layer - the actual feature map volume is decreased by a factor of 2, because pooling decreases the feature map by a factor of <code>4</code>. Usually, decreasing the size of the volume by a factor of more than 3 should be considered bad practice. Most modern architectures use a volume drop factor in the range between 1 and 2. Still - this is not a general rule - e.g. in case of a narrow hierarchy - a greater volume drop might actually help.</p></li>
<li><p><strong>Avoid <em>bottlenecking</em></strong>. As one may read in this milestone <a href="https://arxiv.org/abs/1512.00567" rel="noreferrer">paper</a>, bottlenecking might seriously harm your training process. It occurs when the volume drop is too severe. Of course - this still might be achieved - but then you should use intelligent downsampling, used e.g. in <code>Inception v>2</code>.</p></li>
<li><p><strong>Check 1x1 convolutions</strong> - it's believed that filter activations are highly correlated. One may take advantage of this by using <strong>1x1</strong> convolutions - namely convolution with a filter size of 1. This makes it possible e.g. to drop volume with them instead of <code>pooling</code> or intelligent downsampling (see an example <a href="https://arxiv.org/pdf/1612.08242.pdf" rel="noreferrer">here</a>). You could e.g. build twice as many filters and then cut 25% of them by using 1x1 convs as a consecutive layer (see the short sketch after this list).</p></li>
</ol>
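<p>For instance, the 1x1 trick from the last point looks like this in Keras (a short sketch):</p>
<pre><code># Sketch: use a 1x1 convolution to cut channels (64 -> 48, i.e. by 25%).
from tensorflow.keras import layers, models
model = models.Sequential([
    layers.Conv2D(64, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)),
    layers.Conv2D(48, (1, 1), activation='relu'),  # channel reduction, no spatial change
])
model.summary()
</code></pre>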
<p>As you may see, there is no easy way to choose the number of filters. Besides the hints above, I'd like to share with you one of my favorite sanity checks on the number of filters. It takes 2 easy steps:</p>
<ol>
<li>Try to overfit on 500 random images with regularization.</li>
<li>Try to overfit on the whole dataset without any regularization.</li>
</ol>
<p>Usually - if the number of filters is too low (in general) - these two tests will show you that. If - during your training process - with regularization - your network severely overfits - this is a clear indicator that your network has way too many filters.</p>
<p>Cheers.</p> | 2018-01-13 19:50:05.247000+00:00 | 2019-01-31 21:41:28.970000+00:00 | 2019-01-31 21:41:28.970000+00:00 | null | 48,243,360 | <p>I'm just beginning my ML journey and have done a few tutorials. One thing that's not clear (to me) is how the 'filter' parameter is determined for Keras Conv2D.</p>
<p>Most sources I've read simply set the parameter to 32 without explanation. Is this just a rule of thumb or do the dimensions of the input images play a part? For example, the images in CIFAR-10 are 32x32.</p>
<p>Specifically:</p>
<pre class="lang-python prettyprint-override"><code>model = Sequential()
filters = 32
model.add(Conv2D(filters, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(filters, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
</code></pre>
<p>The next layer has a filter parameter of filter*2 or 64. Again, how is this calculated?</p>
<p>Tx.</p>
<p>Joe</p> | 2018-01-13 19:07:04.710000+00:00 | 2020-04-20 12:24:40.973000+00:00 | 2018-04-24 11:36:41.150000+00:00 | machine-learning|neural-network|keras|conv-neural-network|convolution | ['https://arxiv.org/abs/1512.00567', 'https://arxiv.org/pdf/1612.08242.pdf'] | 2 |
61,921,120 | <p>Hmmm ... <a href="https://en.wikipedia.org/wiki/Betweenness_centrality" rel="nofollow noreferrer">Betweenness Centrality</a>. Complexity according to <a href="https://arxiv.org/abs/1311.2147" rel="nofollow noreferrer">Betweenness Centrality -- Incremental and Faster</a> is <em>O(mn + n^2 log n) time, where n = |V| and m = |E|</em> for the common algorithm, but can be improved with incremental edge updates. Interesting. </p>
<p>But your question is not about Betweenness Centrality but about the Cartesian Product?</p>
<p>To get the cartesian product of cities, you have to create a goal that chains two <code>member/2</code> calls, and then collect all solutions thereof with <code>bagof/3</code>: </p>
<pre><code>cities_1([berlin,tokyo,kigali,moscow]).
cities_2([beijing,new_delhi,cairo,lisboa]).
cartesian_product(CityPairs) :-
cities_1(L1),
cities_2(L2),
bagof([C1,C2],(member(C1,L1),member(C2,L2)),CityPairs).
</code></pre> | 2020-05-20 19:14:50.117000+00:00 | 2020-05-20 19:14:50.117000+00:00 | null | null | 61,919,491 | <p>First of all this is not homework, just trying to learn Prolog on my own :)</p>
<p>I've been reading about graph theory and I thought it would be cool to implement betweenness centrality in my pet project. I have a list of cities and want to determine which one is most common across all possible shortest paths, but I'm not sure how to get all two-city combinations.</p>
<p>I already have a rule that gets the shortest path between two cities.</p> | 2020-05-20 17:42:31.603000+00:00 | 2020-05-20 19:14:50.117000+00:00 | null | prolog | ['https://en.wikipedia.org/wiki/Betweenness_centrality', 'https://arxiv.org/abs/1311.2147'] | 2 |
54,102,266 | <p>Ad Q1: If the similarity matrix contains the cosine similarities of the word embeddings (which it more or less does, see Equation 4 in <a href="http://www.aclweb.org/anthology/S17-2051" rel="nofollow noreferrer">SimBow at SemEval-2017 Task 3</a>) and if the word embeddings are L2-normalized, then the SCM (Soft Cosine Measure) is equivalent to averaging the word embeddings (i.e. your baseline). For a proof, see Lemma 3.3 in <a href="https://arxiv.org/pdf/1808.09407.pdf" rel="nofollow noreferrer">the Implementation Notes for the SCM</a>. My Gensim implementation of the SCM (<a href="https://github.com/RaRe-Technologies/gensim/pull/1827" rel="nofollow noreferrer">1</a>, <a href="https://github.com/RaRe-Technologies/gensim/pull/2016" rel="nofollow noreferrer">2</a>) additionally sparsifies the similarity matrix to keep the memory footprint small and to regularize the embeddings, so you will get slightly different results compared to vanilla SCM. If embedding averaging gives you similar results to simple BOW cosine similarity, I would question the quality of the embeddings.</p>
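<p>For reference, the embedding-averaging baseline that equivalence refers to is just this (a minimal numpy sketch; <code>emb</code> is a hypothetical dict mapping a word to its L2-normalized vector, and <code>doc1_tokens</code>/<code>doc2_tokens</code> stand for your tokenized documents):</p>
<pre><code>import numpy as np

def doc_vector(tokens, emb):
    # average the (L2-normalized) word vectors of all in-vocabulary tokens
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# doc1_tokens, doc2_tokens and emb are placeholders for your own data
similarity = cosine(doc_vector(doc1_tokens, emb), doc_vector(doc2_tokens, emb))
</code></pre>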
<p>Ad Q2: Training a Doc2Vec model on the entire dataset for one epoch is equivalent to training a Doc2Vec model on smaller segments of the entire dataset, one epoch for each segment. Just be aware that Doc2Vec uses document ids as a part of the training process, so you must ensure that the ids are still unique after the segmentation (i.e. the first document of the first segment must have a different id than the first document of the second segment).</p> | 2019-01-09 01:52:52.930000+00:00 | 2019-01-09 18:30:41.150000+00:00 | 2019-01-09 18:30:41.150000+00:00 | null | 53,467,414 | <p>I am trying to calculate similarity between two documents which are comprised of more than thousands sentences.</p>
<p>Baseline would be calculating cosine similarity using BOW.</p>
<p>However, I want to capture more of semantic difference between documents.</p>
<p>Hence, I built word embedding and calculated documents similarity by generating document vectors by simply averaging all the word vectors in each of documents and measure cosine similarity between these documents vectors. </p>
<p>However, since the size of each input document is rather big, the results I get from using the method above are very similar to simple BOW cosine similarity.</p>
<p>I have two questions, </p>
<p>Q1. I found gensim module offers soft cosine similarity. But I am having hard time understanding the difference from the methods I used above, and I think it may not be the mechanism to calculate similarity between million pairs of documents.</p>
<p>Q2. I found Doc2Vec by gensim would be more appropriate for my purpose. But I recognized that training Doc2Vec requires more RAM than I have (32GB) (the size of my entire documents is about 100GB). Would there be any way that I train the model with small part(like 20GB of them) of entire corpus, and use this model to calculate pairwise similarities of entire corpus?
If yes, then what would be the desirable train set size, and is there any tutorial that I can follow?</p> | 2018-11-25 12:25:40.470000+00:00 | 2019-01-09 18:30:41.150000+00:00 | null | python|similarity|gensim|word2vec|doc2vec | ['http://www.aclweb.org/anthology/S17-2051', 'https://arxiv.org/pdf/1808.09407.pdf', 'https://github.com/RaRe-Technologies/gensim/pull/1827', 'https://github.com/RaRe-Technologies/gensim/pull/2016'] | 4 |
31,747,134 | <p>Earlier this year I ran into an interesting article about a method to locate smartphones based only on the power consumption of the device, called <a href="http://arxiv.org/abs/1502.03182" rel="nofollow">PowerSpy</a>.</p>
<p>While I have never implemented/tested the method myself, the authors claim that:</p>
<blockquote>
<p>On Android reading the phoneβs aggregate power meter is done by repeatedly reading the following two files: <br/>
<code>/sys/class/power_supply/battery/voltage_now</code> <br/> <code>/sys/class/power_supply/battery/current_now</code>.</p>
</blockquote>
<p>They also claim to have tested the method on Nexus 4, Nexus 5 and HTC. While this is not that specific about which exact versions of Android the devices were equipped with, Nexus 4 is supposed to come with API level 17 and Nexus 5 with API level 19, but both phones seem to be subject to OS upgrades. </p>
<p>However, this seems to be a radically different method when compared with the ones mentioned by the other posters, so probably worth mentioning. Unfortunately, it seems to be a low-level approach, so a lot of work might be required to get what you need, since power, as in Physics, does not really mean battery consumption. But maybe you need it this way.</p>
<p>Last, the article mentioned that access to the two files requires no special permissions (which kind of motivated the study they did in the first place).</p>
<p><strong>EDIT</strong></p>
<p>Second thought, if you plot <code>P=U*I</code> (getting U and I as mentioned above) and integrate over time (that would be a sum in this discrete case) you might just get a very good approximation of battery consumption. Maybe you also factor in a Kalman filter (to attenuate the noise on U and I, keeping in mind that your algorithms are not the only consumer) and you might just get what you want.</p> | 2015-07-31 13:17:13.887000+00:00 | 2015-07-31 13:26:48.830000+00:00 | 2015-07-31 13:26:48.830000+00:00 | null | 30,735,069 | <p>I'm trying to get battery stats in my application for some benchmarking. The wonderful BatteryManager.BATTERY_PROPERTY_CHARGE_COUNTER lets me query the device's micro-amps which is great. However, this was only introduced in API 21 so the number of devices this can reach is quite limited. Is there a compatible way to do something similar in lower versions of the APIs (down to 4.0)? I know BatteryManager.EXTRA_LEVEL will give me the percentage but that's really not fine-grained.</p> | 2015-06-09 14:31:51.097000+00:00 | 2015-07-31 13:26:48.830000+00:00 | null | android|battery|usage-statistics | ['http://arxiv.org/abs/1502.03182'] | 1 |
41,176,694 | <p>Let's say you want to do digit recognition (MNIST) and you have defined your architecture of the network (CNNs). Now, you can start feeding the images from the training data one by one to the network, get the prediction (till this step it's called doing <em>inference</em>), compute the loss, compute the gradient, and then update the parameters of your network (i.e. <em>weights</em> and <em>biases</em>) and then proceed with the next image ... This way of training the model is sometimes called <em>online learning</em>.</p>
<p>But, you want the training to be faster, the gradients to be less noisy, and also take advantage of the power of GPUs which are efficient at doing array operations (<em>nD-arrays</em> to be specific). So, what you instead do is feed in <strong>say 100 images at a time</strong> (the choice of this size is up to you (i.e. it's a <em>hyperparameter</em>) and depends on your problem too). For instance, take a look at the below picture, (Author: Martin Gorner)</p>
<p><a href="https://i.stack.imgur.com/8FzdQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8FzdQ.png" alt="Batch size of 100"></a></p>
<p>Here, since you're feeding in 100 images (<code>28x28</code>) at a time (instead of 1 as in the online training case), the <strong>batch size is 100</strong>. Oftentimes this is called the <em>mini-batch size</em> or simply a <code>mini-batch</code>.</p>
<hr>
<p>Also the below picture: (Author: Martin Gorner)</p>
<p><a href="https://i.stack.imgur.com/vncAa.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vncAa.png" alt="batch size again"></a></p>
<p>Now, the matrix multiplication will all just work out perfectly fine and you will also be taking advantage of the highly optimized array operations and hence achieve faster <em>training</em> time.</p>
<p>If you observe the above picture, it doesn't matter that much whether you give 100 or 256 or 2048 or 10000 (<em>batch size</em>) images as long as it fits in the memory of your (GPU) hardware. You'll simply get that many predictions.</p>
<p>But, please keep in mind that this <em>batch size</em> influences the training time, the error that you achieve, the gradient shifts etc. There is no general rule of thumb as to which batch size works out best. Just try a few sizes and pick the one which works best for you. But try not to use large batch sizes since it will overfit the data. People commonly use mini-batch sizes of <code>32, 64, 128, 256, 512, 1024, 2048</code>.</p>
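<p>To connect this to code: a minimal Keras/TensorFlow sketch where the only thing that distinguishes "online learning" from mini-batch training is the <code>batch_size</code> argument:</p>
<pre><code>import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28 * 28) / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(200, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# 100 images (one mini-batch) are pushed through the network per weight update
model.fit(x_train, y_train, batch_size=100, epochs=1)
</code></pre>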
<hr>
<p><strong>Bonus</strong>: To get a good grasp of how crazy you can go with this batch size, please give this paper a read: <a href="https://arxiv.org/pdf/1404.5997.pdf" rel="noreferrer">weird trick for parallelizing CNNs</a></p> | 2016-12-16 03:10:48.570000+00:00 | 2019-01-08 18:43:46.047000+00:00 | 2019-01-08 18:43:46.047000+00:00 | null | 41,175,401 | <p>The introductory documentation, which I am reading (<a href="https://www.tensorflow.org/get_started/" rel="noreferrer">TOC here</a>) uses the term "batch" (<a href="https://www.tensorflow.org/tutorials/keras/classification" rel="noreferrer">for instance here</a>) without having defined it.</p> | 2016-12-16 00:11:35.430000+00:00 | 2020-09-03 01:08:41.723000+00:00 | 2020-09-03 01:08:41.723000+00:00 | tensorflow|machine-learning|neural-network|deep-learning|tensor | ['https://i.stack.imgur.com/8FzdQ.png', 'https://i.stack.imgur.com/vncAa.png', 'https://arxiv.org/pdf/1404.5997.pdf'] | 3 |
47,959,567 | <p>First of all, remember that dropout is a technique to <em>fight overfitting</em> and improve neural network generalization. So the good starting point is to focus on training performance, and deal with overfitting once you clearly see it. E.g., in some machine learning areas, such as reinforcement learning, it is possible that the main issue with learning is lack of timely reward and the state space is so big that there's no problem with generalization.</p>
<p>Here's a very approximate picture how overfitting looks like in practice:</p>
<p><a href="https://i.stack.imgur.com/xvqJI.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/xvqJI.jpg" alt="overfitting-chart"></a></p>
<p>By the way, dropout isn't the only technique, the latest convolutional neural networks tend to prefer batch and weight normalization to dropout.</p>
<p>Anyway, suppose overfitting is really a problem and you want to apply dropout specifically. Although it's common to suggest <code>dropout=0.5</code> as a default, this advice follows the recommendations from the <a href="https://arxiv.org/pdf/1207.0580.pdf" rel="noreferrer">original Dropout paper</a> by Hinton et al., which at that time was focused on fully-connected or dense layers. Also, the advice implicitly assumes that the researcher does hyper-parameter tuning to find the best dropout value.</p>
<p>For convolutional layers, I think you're right: <code>dropout=0.5</code> seems too severe and the research agrees with it. See, for example, the <a href="http://mipal.snu.ac.kr/images/1/16/Dropout_ACCV2016.pdf" rel="noreferrer">"Analysis on the Dropout Effect in Convolutional Neural Networks"</a> paper by Park and Kwak: they find that much lower levels, <code>dropout=0.1</code> and <code>dropout=0.2</code>, work better. In my own research, I do Bayesian optimization for hyper-parameter tuning (see <a href="https://stackoverflow.com/q/41860817/712995">this question</a>) and it often selects a gradual increase of the drop probability from the first convolutional layer down the network. This makes sense because the number of filters also increases, and so does the chance of co-adaptation. As a result, the architecture often looks like this (a minimal Keras sketch follows the list):</p>
<ul>
<li>CONV-1: <code>filter=3x3</code>, <code>size=32</code>, dropout between <code>0.0-0.1</code></li>
<li>CONV-2: <code>filter=3x3</code>, <code>size=64</code>, dropout between <code>0.1-0.25</code></li>
<li>... </li>
</ul>
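<p>A rough Keras sketch of that kind of schedule (the exact rates are just example values within the ranges above, not a recipe):</p>
<pre><code>from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    Dropout(0.1),                 # CONV-1 block: low drop probability
    MaxPooling2D(),
    Conv2D(64, (3, 3), activation='relu'),
    Dropout(0.2),                 # CONV-2 block: slightly higher
    MaxPooling2D(),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),                 # dense part: the classic 0.5 default
    Dense(10, activation='softmax'),
])
</code></pre>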
<p>This does perform well for classification tasks, however, it's surely not a universal architecture and you should definitely cross-validate and optimize hyper-parameters for your problem. You can do that via simple random search or Bayesian optimization. If you choose Bayesian optimization, there're good libraries for that, such as <a href="https://github.com/maxim5/hyper-engine" rel="noreferrer">this one</a>.</p> | 2017-12-24 09:44:40.787000+00:00 | 2017-12-24 09:44:40.787000+00:00 | null | null | 47,892,505 | <p>I am currently building a convolution neural network to play the game 2048. It has convolution layers and then 6 hidden layers. All of the guidance online mentions a dropout rate of ~50%. I am about to start training but am concerned that 50% dropout on each of the 6 layers is a bit overkill and will lead to under-fitting. </p>
<p>I would greatly appreciate some guidance on this. What do you guys recommend as a starting point on dropout? I would also love to understand why you recommend what you do.</p> | 2017-12-19 17:44:12.467000+00:00 | 2017-12-24 10:40:43.510000+00:00 | 2017-12-24 10:40:43.510000+00:00 | machine-learning|neural-network|conv-neural-network|convolution|recurrent-neural-network | ['https://i.stack.imgur.com/xvqJI.jpg', 'https://arxiv.org/pdf/1207.0580.pdf', 'http://mipal.snu.ac.kr/images/1/16/Dropout_ACCV2016.pdf', 'https://stackoverflow.com/q/41860817/712995', 'https://github.com/maxim5/hyper-engine'] | 5 |
14,785,706 | <p>Either consult the <a href="http://www.cgal.org" rel="nofollow">CGAL</a> library or this person's project <a href="http://cgi.di.uoa.gr/~vfisikop/compgeom/Layered_Range_trees/" rel="nofollow">code</a> and <a href="http://arxiv.org/pdf/1103.4521v1.pdf" rel="nofollow">report</a>.</p>
<p>Also, the <a href="http://www.codechef.com/FEB13/problems/TRIQUERY" rel="nofollow">contest</a> is still ongoing, please refrain from asking for help here until the contest is over.</p> | 2013-02-09 06:49:41.953000+00:00 | 2013-02-09 06:49:41.953000+00:00 | null | null | 14,775,352 | <p>I want to implement 2D RANGE TREES for searching given points inside a triangle effectively in O( logn^2 ). </p>
<p>To make things easier, I want to count the number of given points which lie in a right triangle with two sides aligned parallel to the x-y axes and both of those sides equal.
So, the co-ordinates of the vertices of ABC would be A(a,b), B(a+d,b), C(a,b+d); it is a right triangle and AB, AC are parallel to the X, Y axes respectively. </p>
<p>I know i can do this effectively using 2D range trees .(k-d trees O(sqrt(n)) is slow and searching for each point individually is too slow)</p>
<p>Can anyone show me how to implement/explain the algorithm 2D range tree to test which points lie inside above type of triangle?</p> | 2013-02-08 15:00:55.527000+00:00 | 2013-02-09 06:49:41.953000+00:00 | null | tree|implementation | ['http://www.cgal.org', 'http://cgi.di.uoa.gr/~vfisikop/compgeom/Layered_Range_trees/', 'http://arxiv.org/pdf/1103.4521v1.pdf', 'http://www.codechef.com/FEB13/problems/TRIQUERY'] | 4 |
53,134,070 | <p>Purpose of <code>AdaptiveAvgPool2d</code> is to make the convnet work on input of any arbitrary size (and produce an output of fixed size). In your case, since input size is fixed to 400x400, you probably do not need it.</p>
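<p>A quick way to see what it does (a minimal PyTorch sketch): the output spatial size stays fixed no matter what spatial size goes in.</p>
<pre><code>import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d((1, 1))
print(pool(torch.randn(1, 512, 12, 12)).shape)  # torch.Size([1, 512, 1, 1])
print(pool(torch.randn(1, 512, 25, 25)).shape)  # torch.Size([1, 512, 1, 1])
</code></pre>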
<p>I think this paper might give you a better idea of this method - <a href="https://arxiv.org/pdf/1406.4729v3.pdf" rel="noreferrer">https://arxiv.org/pdf/1406.4729v3.pdf</a></p> | 2018-11-03 18:00:32.747000+00:00 | 2018-11-03 18:00:32.747000+00:00 | null | null | 53,114,882 | <p>I'm currently trying to modify the VGG16 network architecture so that it's able to accept 400x400 px images. </p>
<p>Based on literature that I've read, the way to do it would be to covert the fully connected (FC) layers into convolutional (CONV) layers. This would essentially " allow the network to efficiently βslideβ across a larger input image and make multiple evaluations of different parts of the image, incorporating all available contextual information." Afterwards, an Average Pooling layer is used to "average the multiple feature vectors into a single feature vector that summarizes the input image". </p>
<p>I've done this <a href="https://stackoverflow.com/a/52981440/4777141">using this function</a>, and have come up with the following network architecture: </p>
<pre><code>----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 400, 400] 1,792
ReLU-2 [-1, 64, 400, 400] 0
Conv2d-3 [-1, 64, 400, 400] 36,928
ReLU-4 [-1, 64, 400, 400] 0
MaxPool2d-5 [-1, 64, 200, 200] 0
Conv2d-6 [-1, 128, 200, 200] 73,856
ReLU-7 [-1, 128, 200, 200] 0
Conv2d-8 [-1, 128, 200, 200] 147,584
ReLU-9 [-1, 128, 200, 200] 0
MaxPool2d-10 [-1, 128, 100, 100] 0
Conv2d-11 [-1, 256, 100, 100] 295,168
ReLU-12 [-1, 256, 100, 100] 0
Conv2d-13 [-1, 256, 100, 100] 590,080
ReLU-14 [-1, 256, 100, 100] 0
Conv2d-15 [-1, 256, 100, 100] 590,080
ReLU-16 [-1, 256, 100, 100] 0
MaxPool2d-17 [-1, 256, 50, 50] 0
Conv2d-18 [-1, 512, 50, 50] 1,180,160
ReLU-19 [-1, 512, 50, 50] 0
Conv2d-20 [-1, 512, 50, 50] 2,359,808
ReLU-21 [-1, 512, 50, 50] 0
Conv2d-22 [-1, 512, 50, 50] 2,359,808
ReLU-23 [-1, 512, 50, 50] 0
MaxPool2d-24 [-1, 512, 25, 25] 0
Conv2d-25 [-1, 512, 25, 25] 2,359,808
ReLU-26 [-1, 512, 25, 25] 0
Conv2d-27 [-1, 512, 25, 25] 2,359,808
ReLU-28 [-1, 512, 25, 25] 0
Conv2d-29 [-1, 512, 25, 25] 2,359,808
ReLU-30 [-1, 512, 25, 25] 0
MaxPool2d-31 [-1, 512, 12, 12] 0
Conv2d-32 [-1, 4096, 1, 1] 301,993,984
ReLU-33 [-1, 4096, 1, 1] 0
Dropout-34 [-1, 4096, 1, 1] 0
Conv2d-35 [-1, 4096, 1, 1] 16,781,312
ReLU-36 [-1, 4096, 1, 1] 0
Dropout-37 [-1, 4096, 1, 1] 0
Conv2d-38 [-1, 3, 1, 1] 12,291
AdaptiveAvgPool2d-39 [-1, 3, 1, 1] 0
Softmax-40 [-1, 3, 1, 1] 0
================================================================
Total params: 333,502,275
Trainable params: 318,787,587
Non-trainable params: 14,714,688
----------------------------------------------------------------
Input size (MB): 1.83
Forward/backward pass size (MB): 696.55
Params size (MB): 1272.21
Estimated Total Size (MB): 1970.59
----------------------------------------------------------------
</code></pre>
<p>My question is simple: Is the use of the average pooling layer at the end necessary? It seems like by the last convolutional layer, we get a 1x1 image with 3 channels. Doing an average pooling on that would seem to not have any effect. </p>
<p>If there is anything amiss in my logic/ architecture, kindly feel free to point it out.
Thanks!</p> | 2018-11-02 08:15:10.490000+00:00 | 2020-12-03 16:19:02.570000+00:00 | null | computer-vision|conv-neural-network|pytorch|vgg-net | ['https://arxiv.org/pdf/1406.4729v3.pdf'] | 1 |
59,100,256 | <p>Indeed, unfortunately catastrophic interference (or forgetting) is applicable to your case.
But there is a branch of Deep Learning that focuses on that problem called <a href="https://arxiv.org/abs/1905.08119" rel="nofollow noreferrer">Continual Learning</a>.</p> | 2019-11-29 07:11:44.387000+00:00 | 2019-11-29 07:11:44.387000+00:00 | null | null | 59,100,176 | <p>I have a neural network which has been trained over some dataset. Say the dataset had 10k data points initially and another 100 data points are now added. Is there a way for my neural network to learn this entire (updated) dataset without training from scratch? Further, is catastrophic interference applicable here? I know catastrophic interference is applicable when the NN tries to learn "new information", but I wasn't sure if "updated (due to insertions) information" counts as "new information".</p> | 2019-11-29 07:04:01.103000+00:00 | 2020-11-02 17:29:33.763000+00:00 | 2020-11-02 17:29:33.763000+00:00 | machine-learning|deep-learning|neural-network|online-machine-learning | ['https://arxiv.org/abs/1905.08119'] | 1 |
63,416,828 | <p>Almost, there are a few differences however. According to the paper <a href="https://arxiv.org/abs/1512.03385" rel="nofollow noreferrer">https://arxiv.org/abs/1512.03385</a>, resnet50's conv1 layer (as you would find in <code>torchvision.models.resnet50()</code>) has</p>
<p><code>conv1 = {Conv2d} Conv2d(3, 64, kernel_size=(7,7), stride=(2,2), padding=(3,3), bias=False)</code>.</p>
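<p>A quick check makes the mismatch obvious (a minimal sketch, following the question's use of <code>pretrained=True</code>):</p>
<pre><code>import torch.nn as nn
from torchvision import models

resConv = models.resnet50(pretrained=True)
print(resConv.conv1)
# Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)

print(nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False))
# Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
</code></pre>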
<p>So the differences are<br/>a) the kernel_size is 7x7 not 3x3<br/>b) padding of 3x3 not 1x1 and<br/>c) the weights from resnet50.conv1 from the pretrained model will be conditioned on the training and not initialized as random normal as the will be for <code>nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False)</code></p> | 2020-08-14 16:41:35.457000+00:00 | 2020-08-14 16:47:17.657000+00:00 | 2020-08-14 16:47:17.657000+00:00 | null | 63,407,135 | <p>I am using one layer to extract features from image. The old layer is</p>
<pre><code>self.conv = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False)
</code></pre>
<p>New layer is</p>
<pre><code>resConv = models.resnet50(pretrained=True)
resConv.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False)
self.conv = resConv.conv1
</code></pre>
<p>Any performance difference? or both layers are same.</p> | 2020-08-14 05:35:13.270000+00:00 | 2020-08-15 17:42:19.910000+00:00 | null | neural-network|torch|conv-neural-network|resnet | ['https://arxiv.org/abs/1512.03385'] | 1 |
54,136,342 | <p>Have a look at the paper <a href="https://arxiv.org/pdf/1805.10685.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1805.10685.pdf</a> that gives you an overall idea.
check this link for more references <a href="https://github.com/Hironsan/awesome-embedding-models" rel="nofollow noreferrer">https://github.com/Hironsan/awesome-embedding-models</a></p> | 2019-01-10 20:20:25.530000+00:00 | 2019-01-10 20:20:25.530000+00:00 | null | null | 52,847,994 | <p>How can we use ANN to find some similar documents? I know its a silly question, but I am new to this NLP field.
I have made a model using kNN and bag-of-words approach to solve my problem. Using that I can get n number of documents (along with their closeness) that are somewhat similar to the input, but now I want to implement the same using ANN and I am not getting any idea.</p>
<p>Thanks in advance for any help or suggestions.</p> | 2018-10-17 05:46:31.530000+00:00 | 2019-01-10 20:20:25.530000+00:00 | 2018-10-17 10:58:00.553000+00:00 | python|machine-learning|nlp|artificial-intelligence|word-embedding | ['https://arxiv.org/pdf/1805.10685.pdf', 'https://github.com/Hironsan/awesome-embedding-models'] | 2 |
67,935,650 | <p>There are many ways to prevent overfitting, according to the <strong>papers</strong> below:</p>
<ul>
<li>Dropout layers (Disabling randomly neurons). <a href="https://www.cs.toronto.edu/%7Ehinton/absps/JMLRdropout.pdf" rel="nofollow noreferrer">https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf</a></li>
<li>Input Noise (e.g. Random Gaussian Noise on the imges). <a href="https://arxiv.org/pdf/2010.07532.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2010.07532.pdf</a></li>
<li>Random Data Augmentations (e.g. Rotating, Shifting, Scaling, etc.).
<a href="https://arxiv.org/pdf/1906.11052.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1906.11052.pdf</a></li>
<li>Adjusting Number of Layers & Units.
<a href="https://clgiles.ist.psu.edu/papers/UMD-CS-TR-3617.what.size.neural.net.to.use.pdf" rel="nofollow noreferrer">https://clgiles.ist.psu.edu/papers/UMD-CS-TR-3617.what.size.neural.net.to.use.pdf</a></li>
<li>Regularization Functions (e.g. L1, L2, etc)
<a href="https://www.researchgate.net/publication/329150256_A_Comparison_of_Regularization_Techniques_in_Deep_Neural_Networks" rel="nofollow noreferrer">https://www.researchgate.net/publication/329150256_A_Comparison_of_Regularization_Techniques_in_Deep_Neural_Networks</a></li>
<li>Early Stopping: <strong>If you notice that for N successive epochs your model's training loss keeps decreasing but the model performs poorly on the validation data set, then it is a good sign to stop the training.</strong> (See the sketch after this list.)</li>
<li>Shuffling the training data or K-Fold cross validation is also a common way of dealing with overfitting.</li>
</ul>
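<p>For illustration, a minimal Keras sketch combining a few of the ideas above (dropout, L2 regularization and early stopping); the exact values are just examples, and the input shape / class count simply mirror the question:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(100, 100, 1)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.3),   # randomly disable 30% of the units during training
    layers.Dense(64, activation='relu',
                 kernel_regularizer=tf.keras.regularizers.l2(1e-3)),
    layers.Dense(30, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# stop as soon as validation loss has not improved for 5 epochs
early_stop = callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=100, callbacks=[early_stop])
</code></pre>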
<p>I found this great repository, which contains examples of how to implement data augmentations:
<a href="https://github.com/kochlisGit/random-data-augmentations" rel="nofollow noreferrer">https://github.com/kochlisGit/random-data-augmentations</a></p>
<p>Also, this repository here seems to have examples of CNNs that implement most of the above methods:
<a href="https://github.com/kochlisGit/Tensorflow-State-of-the-Art-Neural-Networks" rel="nofollow noreferrer">https://github.com/kochlisGit/Tensorflow-State-of-the-Art-Neural-Networks</a></p> | 2021-06-11 10:46:56.863000+00:00 | 2021-06-11 13:46:42.813000+00:00 | 2021-06-11 13:46:42.813000+00:00 | null | 67,677,875 | <p>My training and loss curves look like below and yes, similar graphs have received comments like "Classic overfitting" and I get it.</p>
<p><a href="https://i.stack.imgur.com/6u8ey.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6u8ey.png" alt="training and Lossurves" /></a></p>
<p>My model looks like below,</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, LSTM, Dense

input_shape_0 = keras.Input(shape=(3,100, 100, 1), name="img3")
model = tf.keras.layers.TimeDistributed(Conv2D(8, 3, activation="relu"))(input_shape_0)
model = tf.keras.layers.TimeDistributed(Dropout(0.3))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Conv2D(16, 3, activation="relu"))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Conv2D(32, 3, activation="relu"))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Dropout(0.3))(model)
model = tf.keras.layers.TimeDistributed(Flatten())(model)
model = tf.keras.layers.TimeDistributed(Dropout(0.4))(model)
model = LSTM(16, kernel_regularizer=tf.keras.regularizers.l2(0.007))(model)
# model = Dense(100, activation="relu")(model)
# model = Dense(200, activation="relu",kernel_regularizer=tf.keras.regularizers.l2(0.001))(model)
model = Dense(60, activation="relu")(model)
# model = Flatten()(model)
model = Dropout(0.15)(model)
out = Dense(30, activation='softmax')(model)
model = keras.Model(inputs=input_shape_0, outputs = out, name="mergedModel")
def get_lr_metric(optimizer):
def lr(y_true, y_pred):
return optimizer.lr
return lr
opt = tf.keras.optimizers.RMSprop()
lr_metric = get_lr_metric(opt)
# merged.compile(loss='sparse_categorical_crossentropy',
#                optimizer='adam', metrics=['accuracy'])
model.compile(loss='sparse_categorical_crossentropy',
optimizer=opt, metrics=['accuracy',lr_metric])
model.summary()
</code></pre>
<p>In the above model building code, please consider the commented lines as some of the approaches I have tried so far.</p>
<p>I have followed the suggestions given as answers and comments to this kind of question and none seems to be working for me. Maybe I am missing something really important?</p>
<p>Things that I have tried:</p>
<ol>
<li>Dropouts at different places and different amounts.</li>
<li>Played with inclusion and expulsion of dense layers and their number of units.</li>
<li>The number of units in the LSTM layer was tried with different values (starting from as low as 1; at 16 I get the best performance so far).</li>
<li>Came across weight regularization techniques and tried to implement them as shown in the code above, placing them at different layers (I need to know the proper technique for deciding where to apply regularization instead of simple trial and error - that is what I did and it seems wrong).</li>
<li>Implemented a learning rate scheduler with which I reduce the learning rate after a certain number of epochs as training progresses.</li>
<li>Tried two LSTM layers with the first one having return_sequences = true.</li>
</ol>
<p>After all these, I still cannot overcome the overfitting problem.
My data set is properly shuffled and divided in a train/val ratio of 80/20.</p>
<p>Data augmentation is one more thing that I found commonly suggested and have yet to try, but first I want to see whether I am making some mistake so far that I can correct, so I can avoid diving into data augmentation steps for now. My data set has the below sizes:</p>
<pre><code>Training images: 6780
Validation images: 1484
</code></pre>
<p>The numbers shown are samples and each sample will have 3 images. So basically, I input 3 images at once as one sample to my time-distributed <code>CNN</code>, which is then followed by other layers as shown in the model description. Following that, my training images are 6780 * 3 and my validation images are 1484 * 3. Each image is 100 * 100 and has 1 channel.</p>
<p>I am using <code>RMS prop</code> as the optimizer which performed better than <code>adam</code> as per my testing</p>
<p><strong>UPDATE</strong></p>
<p>I tried some different architectures and some regularizations and dropouts at different places, and I am now able to achieve a val_acc of 59%. Below is the new model.</p>
<pre><code># kernel_regularizer=tf.keras.regularizers.l2(0.004)
# kernel_constraint=max_norm(3)
model = tf.keras.layers.TimeDistributed(Conv2D(32, 3, activation="relu"))(input_shape_0)
model = tf.keras.layers.TimeDistributed(Dropout(0.3))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Conv2D(64, 3, activation="relu"))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Conv2D(128, 3, activation="relu"))(model)
model = tf.keras.layers.TimeDistributed(MaxPooling2D(2))(model)
model = tf.keras.layers.TimeDistributed(Dropout(0.3))(model)
model = tf.keras.layers.TimeDistributed(GlobalAveragePooling2D())(model)
model = LSTM(128, return_sequences=True,kernel_regularizer=tf.keras.regularizers.l2(0.040))(model)
model = Dropout(0.60)(model)
model = LSTM(128, return_sequences=False)(model)
model = Dropout(0.50)(model)
out = Dense(30, activation='softmax')(model)
</code></pre> | 2021-05-24 19:25:16.947000+00:00 | 2021-06-11 13:46:42.813000+00:00 | 2021-06-10 11:16:06.650000+00:00 | tensorflow|deep-learning|conv-neural-network|lstm|overfitting-underfitting | ['https://www.cs.toronto.edu/%7Ehinton/absps/JMLRdropout.pdf', 'https://arxiv.org/pdf/2010.07532.pdf', 'https://arxiv.org/pdf/1906.11052.pdf', 'https://clgiles.ist.psu.edu/papers/UMD-CS-TR-3617.what.size.neural.net.to.use.pdf', 'https://www.researchgate.net/publication/329150256_A_Comparison_of_Regularization_Techniques_in_Deep_Neural_Networks', 'https://github.com/kochlisGit/random-data-augmentations', 'https://github.com/kochlisGit/Tensorflow-State-of-the-Art-Neural-Networks'] | 7 |
60,066,129 | <p>BERT is not trained to determine if one sentence follows another. That is just ONE of the <a href="https://openreview.net/pdf?id=rJ4km2R5t7" rel="noreferrer">GLUE tasks</a> and there are a myriad more. ALL of the GLUE tasks (and superglue) are getting knocked out of the park by ALBERT.</p>
<p>BERT (and Albert for that matter) is the absolute state of the art in Natural Language Understanding. Doc2Vec doesn't come close. BERT is not a bag-of-words method. It's a bi-directional attention based encoder built on the Transformer which is the incarnation of the Google Brain paper <a href="https://arxiv.org/abs/1706.03762" rel="noreferrer">Attention is All you Need</a>. Also see this <a href="http://jalammar.github.io/illustrated-transformer/" rel="noreferrer">Visual breakdown</a> of the Transformer model.</p>
<p>This is a fundamentally new way of looking at natural language which doesn't use RNN's or LSTMs or tf-idf or any of that stuff. We aren't turning words or docs into vectors anymore. <a href="https://nlp.stanford.edu/projects/glove/" rel="noreferrer">GloVes: Global Vectors for Word Representations</a> with LSTMs are old. Doc2Vec is old.</p>
<p>BERT is reeeeeallly powerful - like, pass the Turing test easily powerful. Take a look at the following:</p>
<p>See <a href="https://w4ngatang.github.io/static/papers/superglue.pdf" rel="noreferrer">superGLUE</a> which just came out. Scroll to the bottom at look at how insane those tasks are. THAT is where NLP is at.</p>
<p>Okay so now that we have dispensed with the idea that tf-idf is state of the art - you want to take documents and look at their similarity? I would use ALBERT on Databricks in two layers:</p>
<ol>
<li><p>Perform either Extractive or Abstractive summarization: <a href="https://pypi.org/project/bert-extractive-summarizer/" rel="noreferrer">https://pypi.org/project/bert-extractive-summarizer/</a> (NOTICE HOW BIG THOSE DOCUMENTS OF TEXT ARE) - and reduce your document down to a summary (see the sketch after this list).</p></li>
<li><p>In a separate step, take each summary and do the STS-B task from Page 3 <a href="https://openreview.net/pdf?id=rJ4km2R5t7" rel="noreferrer">GLUE</a></p></li>
</ol>
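<p>For step 1, the package linked above exposes a fairly simple interface; a rough sketch from memory (please double-check the exact call signature and argument names against the project page, and <code>long_document_text</code> is just a placeholder for your raw document string):</p>
<pre><code>from summarizer import Summarizer

model = Summarizer()                              # loads a pretrained BERT under the hood
summary = model(long_document_text, ratio=0.2)    # keep roughly 20% of the sentences
</code></pre>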
<p>Now, we are talking about absolutely bleeding edge technology here (Albert came out in just the last few months). You will need to be extremely proficient to get through this but it CAN be done, and I believe in you!!</p> | 2020-02-04 21:57:48.107000+00:00 | 2020-02-04 21:57:48.107000+00:00 | null | null | 57,882,417 | <p>Is it possible to use Google BERT for calculating similarity between two textual documents? As I understand BERT's input is supposed to be a limited size sentences. Some works use BERT for similarity calculation for sentences like:</p>
<p><a href="https://github.com/AndriyMulyar/semantic-text-similarity" rel="noreferrer">https://github.com/AndriyMulyar/semantic-text-similarity</a></p>
<p><a href="https://github.com/beekbin/bert-cosine-sim" rel="noreferrer">https://github.com/beekbin/bert-cosine-sim</a></p>
<p>Is there an implementation of BERT done to use it for large documents instead of sentences as inputs ( Documents with thousands of words)?</p> | 2019-09-11 05:03:18.903000+00:00 | 2020-12-05 16:35:09.900000+00:00 | 2019-09-11 18:07:37.467000+00:00 | python|text|scikit-learn|nlp|word-embedding | ['https://openreview.net/pdf?id=rJ4km2R5t7', 'https://arxiv.org/abs/1706.03762', 'http://jalammar.github.io/illustrated-transformer/', 'https://nlp.stanford.edu/projects/glove/', 'https://w4ngatang.github.io/static/papers/superglue.pdf', 'https://pypi.org/project/bert-extractive-summarizer/', 'https://openreview.net/pdf?id=rJ4km2R5t7'] | 7 |
41,233,873 | <p>Yes there are ways to make silhouette metric scalable. No its not published all the way as I describe here. Its not that complex so you can understand it too and maybe write it. Just let me know please so I can use it if you write it first.</p>
<p>Looks like we both need to write a high performance silhouette scorer. Input any cluster column vector, giving this scorer capability to work with every clustering implementation. Use mapreduce if possible for easy distributed version as well as shared memory. It looks possible. Page 4 shows the math:
<a href="http://cran.us.r-project.org/web/packages/clValid/vignettes/clValid.pdf" rel="noreferrer">http://cran.us.r-project.org/web/packages/clValid/vignettes/clValid.pdf</a> An LSH would help algorithmically since it avoids the exact distance computations that dominate its math. A good LSH implementation would then be essential but I have not found one. Sklearnβs LSHForest is the right idea but not implemented well enough. A simplified silhouette or approximate would be interesting too. The LSH inclusion would result in approximate results. Use LSH capability to find only the nearest point and centroid, which avoids the all-pairs computations. Page 28 of this article has several good suggestions: <a href="https://arxiv.org/pdf/1605.01802.pdf" rel="noreferrer">https://arxiv.org/pdf/1605.01802.pdf</a> It seems to say:
Use simplified silhouette not plain silhouette, as follows: Change computation from distance from point to point, to distance from point to cluster centroid. Itβs a reduction from all-pairs of points within the cluster and the closest neighbor cluster, which is O(n^2), down to a linear length O(N) computation. Here is my understanding and translation:</p>
<pre><code>Start with:
File of cluster tuples: (clusterID, cluster centroid point)
File of example point tuples: (example point, clusterID). Notice the clusterID is the clusterID column vector described above.
Processing:
For each cluster tuple, run a map(): hashmap[clusterID] = cluster centroid point
For each example point tuple, run:
map(): (dist = distance between point and its cluster centroid point, example point (copy), clusterID(copy))
map(): find closest cluster centroid point to the example point
map(): emit SSI = (distance - minClusterDistance)/minClusterDistance
reduce(): By clusterID emit (clusterID, clusterβs sum of SSI / #points)
</code></pre>
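<p>For what it's worth, here is a tiny non-distributed numpy sketch of the centroid-based (simplified) silhouette per point, using the textbook form s = (b - a) / max(a, b), where a is the distance to the point's own centroid and b the distance to the nearest other centroid (the outline above uses a slightly different normalization); the per-point loop is exactly what you would hand to the mappers:</p>
<pre><code>import numpy as np

def simplified_silhouette(points, labels, centroids):
    # points: (n, d), labels: (n,) cluster ids, centroids: (k, d)
    scores = np.empty(len(points))
    for i, (p, k) in enumerate(zip(points, labels)):
        d = np.linalg.norm(centroids - p, axis=1)   # distance to every centroid
        a = d[k]                                    # own centroid
        b = np.min(np.delete(d, k))                 # nearest other centroid
        scores[i] = (b - a) / max(a, b)
    return scores.mean()
</code></pre>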
<p>I may end up being the implementor. It's crazy nobody has written a fast one like this before. People have done it already in my expectation, but they are keeping them to themselves for competitive purposes (corporate profit, Kaggle placings, etc).</p>
<p>The above is formatted as code but is not code. It is English outline or pseudocode. stackoverflow forced me to format this section as code to accept it.</p> | 2016-12-20 02:11:30.850000+00:00 | 2016-12-20 02:11:30.850000+00:00 | null | null | 31,863,148 | <p>I want to use silhouette to determine optimal value for k when using KMeans clustering in Spark.
Is there any optimal way parallelize this? i.e. make it scalable</p> | 2015-08-06 18:24:23.770000+00:00 | 2019-06-12 10:49:36.563000+00:00 | null | machine-learning|apache-spark|cluster-analysis|distributed-computing|k-means | ['http://cran.us.r-project.org/web/packages/clValid/vignettes/clValid.pdf', 'https://arxiv.org/pdf/1605.01802.pdf'] | 2 |
51,107,225 | <p>You are on the right track. However, I believe you are confusing rewards with action probabilities. In case of draw, it learns that the reward itself is zero at the end of the episode. However, in case of loss, the loss function is discounted reward (which should be -1) times the action probabilities. So it will get you more towards actions which end in win and away from loss with actions ending in draw falling in the middle. Intuitively, it is very similar to supervised deep learning only with an additional weighting parameter (reward) attached to it. </p>
<p>Additionally, I believe this paper from Google DeepMind would be useful for you: <a href="https://arxiv.org/abs/1712.01815" rel="nofollow noreferrer">https://arxiv.org/abs/1712.01815</a>.
They actually talk about solving the chess problem using RL.</p> | 2018-06-29 18:05:01.787000+00:00 | 2018-06-29 18:05:01.787000+00:00 | null | null | 51,092,769 | <p>I'm currently learning about Policy Gradient Descent in the context of Reinforcement Learning. TL;DR, my question is: <strong>"What are the constraints on the reward function (in theory and practice) and what would be a good reward function for the case below?"</strong></p>
<p>Details:
I want to implement a Neural Net which should learn to play a simple board game using Policy Gradient Descent. I'll omit the details of the NN as they don't matter. The loss function for Policy Gradient Descent, as I understand it is negative log likelihood: <code>loss = - avg(r * log(p))</code></p>
<p>My question now is how to define the reward <code>r</code>? Since the game can have 3 different outcomes: win, loss, or draw - it seems rewarding 1 for a win, 0 for a draw, -1 for a loss (and some discounted value of those for action leading to those outcomes) would be a natural choice.</p>
<p>However, mathematically I have doubts:</p>
<p><strong>Win Reward: 1</strong> - This seems to make sense. This should push probabilities towards 1 for moves involved in wins with diminishing gradient the closer the probability gets to 1.</p>
<p><strong>Draw Reward: 0</strong> - This does not seem to make sense. This would just cancel out any probabilities in the equation and no learning should be possible (as the gradient should always be 0).</p>
<p><strong>Loss Reward: -1</strong> - This should kind of work. It should push probabilities towards 0 for moves involved in losses. However, I'm concerned about the asymmetry of the gradient compared to the win case. The closer to 0 the probability gets, the steeper the gradient gets. I'm concerned that this would create an extremely strong bias towards a policy that avoids losses - to the degree where the win signal doesn't matter much at all.</p> | 2018-06-29 00:29:35.060000+00:00 | 2018-06-29 18:05:01.787000+00:00 | null | reinforcement-learning|policy-gradient-descent | ['https://arxiv.org/abs/1712.01815'] | 1 |
20,321,070 | <p>There is an alternative OpenCV based implementation of many recent shadow detection algorithms, providing much higher quality of shadow detection:</p>
<p><a href="http://arma.sourceforge.net/shadows/" rel="nofollow">http://arma.sourceforge.net/shadows/</a></p>
<p>There is also an associated <a href="http://arxiv.org/pdf/1304.1233.pdf" rel="nofollow">journal article</a>, describing all the implemented algorithms and their various trade-offs (eg. quality vs speed).</p> | 2013-12-02 04:49:00.417000+00:00 | 2013-12-02 04:49:00.417000+00:00 | null | null | 13,799,225 | <p>I have been testing two different implementation of Mixture of Gaussians (MOG) for background subtraction. One is using opncv2.1.0, cvCreateGaussianBGModel + cvUpdateBGStatModel and another is using opencv 2.4.3, BackgroundSubtractorMOG2 class.</p>
<pre><code>Now, 2.4.3 provide a parameter called bShadowDetect, to identify the shadow area by gray color. But my experience with this implementation is, it does not provide the accuracy of shadow detection. It varies according to the parameter fTau. The other issue with this implementation is performance hit. For 640 X 480 resolution video, it is generating below 5 fps, By switching to release mode of project I get improvement upto 7 to 8 FPS.
The another implementation of MOG is using 2.1.0. I have configured GaussianBG state Model 's paramenters and then I am calling cvUpdateBGStatModel each time I receive a new frame.
For performance improvement, I have converted my frames to gray frames before I send it for state update. My best performance till now is using opencv 2.1.0 and which is around 30 FPS for 640 X 480 resolution frames. So, currently I am preferring opencv 2.1.0 version's MOG for background subtraction. But Here I come to face the issue of shadow removal. Here, I want to detect only moving object. that is without shadow, and draw a rectangle to highlight.
Any help in this context will be grateful.
</code></pre>
<p>Thanks in Advance. </p> | 2012-12-10 10:37:13.370000+00:00 | 2013-12-02 04:49:00.417000+00:00 | 2012-12-10 10:42:42.530000+00:00 | opencv | ['http://arma.sourceforge.net/shadows/', 'http://arxiv.org/pdf/1304.1233.pdf'] | 2 |
55,992,788 | <p>I came up with the following algorithm which is not recursive as requested by the OP but it is worth mentioning given its <em>invincible efficiency</em>.</p>
<p>As said in <a href="https://stackoverflow.com/users/4200/ed-guiness">Ed Guiness'</a> <a href="https://stackoverflow.com/a/4314310/1137388">post</a>, strings of <code>N</code> pairs of correctly matched parentheses is a representation of a Dyck word. In another useful representation, parentheses <code>(</code> and <code>)</code> are replaced with <code>1</code> and <code>0</code> respectively. Hence, <code>()()()</code> becomes <code>101010</code>. The latter can also be seen as the binary representation of (decimal) number <code>42</code>. In summary, some integer numbers can represent strings of correctly matched pairs of parentheses. Using this representation the following is an efficient algorithm to generate Dyck works.</p>
<p>Let <code>integer</code> be any C/C++ (or, possibly, any member of the <a href="https://en.wikipedia.org/wiki/List_of_C-family_programming_languages" rel="nofollow noreferrer">C-family programming languages</a>) unsigned integer type up to 64-bits long. Given a Dyck word, the following code returns the next Dyck word of the same size, provided it exists.</p>
<pre><code>integer next_dyck_word(integer w) {
integer const a = w & -w;
integer const b = w + a;
integer c = w ^ b;
c = (c / a >> 2) + 1;
c = ((c * c - 1) & 0xaaaaaaaaaaaaaaaa) | b;
return c;
}
</code></pre>
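<p>For readers who prefer to experiment interactively, here is a direct Python translation of the function above plus a small driver that enumerates all words of a given size (Python purely for illustration; the original is C/C++ and relies on 64-bit unsigned arithmetic):</p>
<pre><code>def next_dyck_word(w):
    a = w & -w
    b = w + a
    c = w ^ b
    c = (c // a >> 2) + 1
    return ((c * c - 1) & 0xAAAAAAAAAAAAAAAA) | b

def all_dyck_words(n):
    w = int('10' * n, 2)                 # smallest word, e.g. 101010 for n == 3
    last = int('1' * n + '0' * n, 2)     # largest word, e.g. 111000 for n == 3
    words = [w]
    while w != last:
        w = next_dyck_word(w)
        words.append(w)
    return words

print([format(w, 'b').replace('1', '(').replace('0', ')') for w in all_dyck_words(3)])
# ['()()()', '()(())', '(())()', '(()())', '((()))']
</code></pre>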
<p>For instance, if <code>w == 42</code> (<code>101010</code> in binary, i.e., <code>()()()</code>) the function returns <code>44</code> (<code>101100</code>, <code>()(())</code>). One can iterate until getting <code>56</code> (<code>111000</code>, <code>((()))</code>) which is the maximum Dyck word for <code>N == 3</code>.</p>
<p>Above, I've mentioned <em>invincible efficiency</em> because, as far as generation of a single Dyck word is concerned, this algorithm is O(1), loopless and branchless. However, the implementation still has room for improvement. Indeed, the relatively expensive division <code>c / a</code> in the body of the function can be eliminated if we can use some assembly instructions that are not available in strict Standard compliant C/C++.</p>
<p>You might say. "<em>Objection! I do not want to be constraint to <code>N <= 64</code></em>". Well, my answer to that is that if you want to generate all Dyck works, then in practice you are already bound to a much lower size than <code>64</code>. Indeed, <a href="https://en.wikipedia.org/wiki/Catalan_number" rel="nofollow noreferrer">the number of Dyck works of size <code>N</code></a> grows factorially with <code>N</code> and for <code>N == 64</code> the time to generate them all is likely to be greater than the age of the universe. (I confess I did not calculate this time but this is a quite common anecdotal feature of problems of this nature.)</p>
<p>I've written a <a href="https://arxiv.org/abs/1602.06426" rel="nofollow noreferrer">detailed document on the algorithm</a>.</p> | 2019-05-05 14:06:12.357000+00:00 | 2019-05-05 14:06:12.357000+00:00 | null | null | 4,313,921 | <p>How can we generate all possibilities on braces ?</p>
<p>N value has given to us and we have to generate all possibilities. </p>
<p><strong>Examples:</strong></p>
<p>1) if N == 1, then only one possibility () .</p>
<p>2) if N==2, then possibilities are (()), ()()</p>
<p>3) if N==3, then possibilities are ((())), (())(),()()(), ()(()) ... </p>
<p>Note: left and right braces should match. I mean )( is INVALID for the N==1 </p>
<p>Can we solve this problem by using recurrence approach ? </p> | 2010-11-30 12:48:07.847000+00:00 | 2019-05-05 14:06:12.357000+00:00 | 2017-02-01 13:30:19.283000+00:00 | algorithm|recursion|data-structures|catalan | ['https://stackoverflow.com/users/4200/ed-guiness', 'https://stackoverflow.com/a/4314310/1137388', 'https://en.wikipedia.org/wiki/List_of_C-family_programming_languages', 'https://en.wikipedia.org/wiki/Catalan_number', 'https://arxiv.org/abs/1602.06426'] | 5 |
48,110,086 | <p>A good overview is described in "<em>Speed/accuracy trade-offs for modern convolutional object detectors</em>" (<a href="https://arxiv.org/abs/1611.10012" rel="nofollow noreferrer">https://arxiv.org/abs/1611.10012</a>). </p>
<p>In order to save time, you may consider using Google Object Detection API <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">https://github.com/tensorflow/models/tree/master/research/object_detection</a>, they have an tutorial on how to train on your own dataset.</p>
<p>It is hard to say which object detection framework is the best. However, I saw people usually stick to Faster R-CNN (for accuracies) and SSD or YOLOv2 (for speed).</p> | 2018-01-05 08:48:37.800000+00:00 | 2018-01-05 08:48:37.800000+00:00 | null | null | 48,108,547 | <p>Need to start an object detection project. can anyone suggest the better framework which has better accuracy and speed. I have read about imagenet, resnet, mobilenet, yolo, tensorflow and dlib features. Can anyone give a comparison of them and suggest a better option.</p> | 2018-01-05 06:48:27.690000+00:00 | 2018-01-05 08:48:37.800000+00:00 | 2018-01-05 07:25:20.573000+00:00 | tensorflow|computer-vision|object-detection|resnet|imagenet | ['https://arxiv.org/abs/1611.10012', 'https://github.com/tensorflow/models/tree/master/research/object_detection'] | 2 |
70,199,786 | <p>Those are metrics printed out at every iteration of the training loop. The most important ones are the loss values, but below are basic descriptions of them all (<code>eta</code> and <code>iter</code> are self-explanatory I think).</p>
<p><code>total_loss</code>: This is a weighted sum of the following individual losses calculated during the iteration. By default, the weights are all one.</p>
<ol>
<li><p><code>loss_cls</code>: Classification loss in the ROI head. Measures the loss for box classification, i.e., how good the model is at labelling a predicted box with the correct class.</p>
</li>
<li><p><code>loss_box_reg</code>: Localisation loss in the ROI head. Measures the loss for box localisation (predicted location vs true location).</p>
</li>
<li><p><code>loss_rpn_cls</code>: Classification loss in the Region Proposal Network. Measures the "objectness" loss, i.e., how good the RPN is at labelling the anchor boxes as foreground or background.</p>
</li>
<li><p><code>loss_rpn_loc</code>: Localisation loss in the Region Proposal Network. Measures the loss for localisation of the predicted regions in the RPN.</p>
</li>
<li><p><code>loss_mask</code>: Mask loss in the Mask head. Measures how "correct" the predicted binary masks are.</p>
<p>For more details on the losses (1) and (2), take a look at the <a href="https://arxiv.org/abs/1504.08083" rel="noreferrer">Fast R-CNN paper</a> and the <a href="https://github.com/facebookresearch/detectron2/blob/cc2d218a572c2bfea4fd998082a9e753f25dee15/detectron2/modeling/roi_heads/fast_rcnn.py#L301" rel="noreferrer">code</a>.</p>
<p>For more details on the losses (3) and (4), take a look at the <a href="https://arxiv.org/abs/1506.01497" rel="noreferrer">Faster R-CNN paper</a> and the <a href="https://github.com/facebookresearch/detectron2/blob/5e2a1ecccd228227c5a605c0a98d58e1b2db3640/detectron2/modeling/proposal_generator/rpn_outputs.py#L302" rel="noreferrer">code</a>.</p>
<p>For more details on the loss (5), take a look at the <a href="https://arxiv.org/abs/1703.06870" rel="noreferrer">Mask R-CNN paper</a> and the <a href="https://github.com/facebookresearch/detectron2/blob/cc2d218a572c2bfea4fd998082a9e753f25dee15/detectron2/modeling/roi_heads/mask_head.py#L23" rel="noreferrer">code</a>.</p>
</li>
</ol>
<p><code>time</code>: Time taken by the iteration.</p>
<p><code>data_time</code>: Time taken by the dataloader in that iteration.</p>
<p><code>lr</code>: The learning rate in that iteration.</p>
<p><code>max_mem</code>: Maximum GPU memory occupied by tensors in bytes.</p> | 2021-12-02 12:49:39.310000+00:00 | 2021-12-02 12:49:39.310000+00:00 | null | null | 70,169,219 | <p>I want to train a custom dataset on using faster_rcnn or mask_rcnn with the Pytorch and Detectron2 .Everything works well but I wanted to know I want to know what are the results I have.</p>
<pre><code>[11/29 20:16:31 d2.utils.events]: eta: 0:24:04 iter: 19 total_loss: 9.6 loss_cls: 1.5 loss_box_reg: 0.001034 loss_mask: 0.6936 loss_rpn_cls: 6.773 loss_rpn_loc: 0.5983 time: 1.4664 data_time: 0.0702 lr: 4.9953e-06 max_mem: 2447M
</code></pre>
<p>I have this as result and I want to know what all of this means</p> | 2021-11-30 12:15:15.573000+00:00 | 2021-12-02 12:49:39.310000+00:00 | 2021-11-30 14:14:20.877000+00:00 | python|machine-learning|pytorch|conv-neural-network|detectron | ['https://arxiv.org/abs/1504.08083', 'https://github.com/facebookresearch/detectron2/blob/cc2d218a572c2bfea4fd998082a9e753f25dee15/detectron2/modeling/roi_heads/fast_rcnn.py#L301', 'https://arxiv.org/abs/1506.01497', 'https://github.com/facebookresearch/detectron2/blob/5e2a1ecccd228227c5a605c0a98d58e1b2db3640/detectron2/modeling/proposal_generator/rpn_outputs.py#L302', 'https://arxiv.org/abs/1703.06870', 'https://github.com/facebookresearch/detectron2/blob/cc2d218a572c2bfea4fd998082a9e753f25dee15/detectron2/modeling/roi_heads/mask_head.py#L23'] | 6 |
38,353,456 | <p>The word2vec model uses a network architecture to represent the input word(s) and most likely associated output word(s).</p>
<p>Assuming there is one hidden layer (as in the example linked in the question), the two matrices introduced represent the weights and biases that allow the network to compute its internal representation of the function mapping the input vector (e.g. "cat" in the linked example) to the output vector (e.g. "climbed").</p>
<p>The weights of the network are a sub-symbolic representation of the mapping between the input and the output - any single weight doesn't necessarily represent anything meaningful on its own. It's the connection weights between all units (i.e. the interactions of all the weights) in the network that gives rise to the network's representation of the function mapping. This is why neural networks are often referred to as "black box" models - it can be very difficult to interpret why they make particular decisions and how they learn. As such, it's very difficult to say what the vector [0.3,0.01,0.04] represents exactly.</p>
<p>Network weights are traditionally initialised to random values for two main reasons:</p>
<ol>
<li>It prevents a bias being introduced to the model before training begins</li>
<li>It allows the network to start from different points in the search space after initialisation (helping reduce the impact of local minima)</li>
</ol>
<p>A network's ability to learn can be very sensitive to the way its weights are initialised. There are more advanced ways of initialising weights today e.g. <a href="http://arxiv.org/pdf/1206.5533v2.pdf" rel="nofollow">this paper (see section: Weights initialization scaling coefficient)</a>.</p>
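<p>To make the shapes concrete, a tiny numpy sketch of such a randomly initialised embedding matrix (the 4x3 size simply mirrors the example in the question):</p>
<pre><code>import numpy as np

vocab = ['book', 'paper', 'notebook', 'novel']      # |V| = 4
dim = 3                                             # chosen hidden/embedding size

rng = np.random.default_rng(0)
W = rng.uniform(-0.5, 0.5, size=(len(vocab), dim))  # random starting point, no meaning yet
print(W[0])   # the initial (arbitrary) vector for 'book'; training is what shapes it
</code></pre>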
<p>The way in which weights are initialised and the dimension of the hidden layer are often referred to as hyper-parameters and are typically chosen according to heuristics and prior knowledge of the problem space.</p> | 2016-07-13 13:42:45.780000+00:00 | 2016-07-13 13:42:45.780000+00:00 | null | null | 38,325,438 | <p>I am using word2vec model for training a neural network and building a neural embedding for finding the similar words on the vector space. But my question is about dimensions in the word and context embeddings (matrices), which we initialise them by random numbers(vectors) at the beginning of the training, like this <a href="https://iksinc.wordpress.com/2015/04/13/words-as-vectors/" rel="nofollow">https://iksinc.wordpress.com/2015/04/13/words-as-vectors/</a></p>
<p>Lets say we want to display {book,paper,notebook,novel} words on a graph, first of all we should build a matrix with this dimensions 4x2 or 4x3 or 4x4 etc, I know the first dimension of the matrix its the size of our vocabulary |v|. But the second dimension of the matrix (number of vector's dimensions), for example this is a vector for word βbook" [0.3,0.01,0.04], what are these numbers? do they have any meaning? for example the 0.3 number related to the relation between word βbook" and βpaperβ in the vocabulary, the 0.01 is the relation between book and notebook, etc.
Just like TF-IDF, or Co-Occurence matrices that each dimension (column) Y has a meaning - its a word or document related to the word in row X.</p> | 2016-07-12 09:51:21.917000+00:00 | 2022-04-08 07:50:24.503000+00:00 | 2022-04-08 07:50:24.503000+00:00 | machine-learning|neural-network|nlp|word2vec|word-embedding | ['http://arxiv.org/pdf/1206.5533v2.pdf'] | 1 |
42,275,697 | <p><a href="https://github.com/FredericGodin/DynamicCNN" rel="nofollow noreferrer">DynamicCNN - for Theano/Lasagne</a> by <a href="http://www.fredericgodin.com/" rel="nofollow noreferrer">Frédéric Godin</a> is an approach which might work better for sentence modeling. It is based on a paper named <a href="https://arxiv.org/abs/1404.2188" rel="nofollow noreferrer">"A Convolutional Neural Network for Modelling Sentences"</a> by <a href="https://www.nal.ai/" rel="nofollow noreferrer">Nal Kalchbrenner</a>, Edward Grefenstette, and Phil Blunsom from 2014.</p>
<p>Quoting the abstract of the mentioned paper:</p>
<blockquote>
<p>The network uses Dynamic k-Max Pooling, a global pooling operation
over linear sequences. The network handles input sentences of varying
length and induces a feature graph over the sentence that is capable
of explicitly capturing short and long-range relations. The network
does not rely on a parse tree and is easily applicable to any
language. We test the DCNN in four experiments: small scale binary and
multi-class sentiment prediction, six-way question classification and
Twitter sentiment prediction by distant supervision. The network
achieves excellent performance in the first three tasks and a greater
than 25% error reduction in the last task with respect to the
strongest baseline.</p>
</blockquote>
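<p>To make the "Dynamic k-Max Pooling" idea from the abstract concrete, here is a small, hedged numpy sketch of plain k-max pooling (the dynamic variant additionally chooses <code>k</code> as a function of the sentence length and layer, which is omitted here):</p>
<pre><code>import numpy as np

def k_max_pooling(feature_map, k):
    # feature_map: (num_filters, sequence_length); sequence_length may differ per sentence
    idx = np.argsort(feature_map, axis=1)[:, -k:]        # positions of the k largest values
    idx.sort(axis=1)                                     # keep their original left-to-right order
    return np.take_along_axis(feature_map, idx, axis=1)  # fixed-size (num_filters, k) output

print(k_max_pooling(np.random.randn(3, 11), k=4).shape)  # (3, 4) for any sentence length >= 4
</code></pre>
<p>This is how a variable-length input ends up as a fixed-size representation that the later layers can consume.</p>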
<p>I hadn't used it myself, but it solved another similar <a href="https://stackoverflow.com/questions/38628014/variable-size-convolutional-neural-network-input-and-fixed-output">sentences modeling problem</a> on SO.</p> | 2017-02-16 13:44:29.230000+00:00 | 2017-02-16 13:44:29.230000+00:00 | 2017-05-23 11:47:17.553000+00:00 | null | 36,262,860 | <p>CNN seems to be implemented mostly for fixed size input. Now I want to use CNN to train some sentences of different size, what are some common methods?</p> | 2016-03-28 12:58:01.423000+00:00 | 2017-02-16 13:44:29.230000+00:00 | null | machine-learning|nlp|deep-learning | ['https://github.com/FredericGodin/DynamicCNN', 'http://www.fredericgodin.com/', 'https://arxiv.org/abs/1404.2188', 'https://www.nal.ai/', 'https://stackoverflow.com/questions/38628014/variable-size-convolutional-neural-network-input-and-fixed-output'] | 5 |
36,665,994 | <p>The following suggestion is mostly related to CNNs for computer vision taks (in particular for recognition), but might also be helpful in your domain: I would have a look at <a href="http://arxiv.org/abs/1406.4729" rel="noreferrer">"Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition" by He et al.</a> proposing a Spatial Pyramid Pooling layer.</p>
<p>The general idea: The convolutional layers of a CNN (and related layers such as pooling, local response normalization etc.) are able to process variable sized input. Therefore, the problem of variable sized input propagates down to the first fully connected/inner product layer which requires a vector of fixed size. He et al. propose to add the Spatial Pyramid Pooling Layer just before the first fully-connected layer (details in the paper). The layer itself works by hierarchically partitioning the feature maps of the last convolutional layer (or the subsequent pooling or response normalization layer) into a fixed number of bins. Within these bins, responses are pooled as usually, creating a fixed-sized output (where the size depends on the hierarchy and number of bins). please see the paper for illustration.</p>
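<p>A minimal numpy sketch of the binning idea (max-pooling one feature map over a small pyramid of grids; the real layer does this for every feature map and concatenates the results):</p>
<pre><code>import numpy as np

def spp_pool(feature_map, levels=(1, 2, 4)):
    # feature_map: one (H, W) map from the last conv layer; H and W may vary per image
    H, W = feature_map.shape
    pooled = []
    for n_bins in levels:                              # 1x1, 2x2 and 4x4 grids of bins
        hs = np.linspace(0, H, n_bins + 1).astype(int)
        ws = np.linspace(0, W, n_bins + 1).astype(int)
        for i in range(n_bins):
            for j in range(n_bins):
                pooled.append(feature_map[hs[i]:hs[i+1], ws[j]:ws[j+1]].max())
    return np.array(pooled)                            # always 1 + 4 + 16 = 21 values per map

print(spp_pool(np.random.rand(13, 17)).shape)          # (21,) regardless of the input size
</code></pre>
<p>The pyramid levels used here are just an example; the paper discusses several configurations.</p>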
<p>The layer has been implemented based on Caffe and is available on GitHub: <a href="https://github.com/ShaoqingRen/SPP_net" rel="noreferrer">ShaoqingRen/SPP_net</a>.</p> | 2016-04-16 15:23:01.190000+00:00 | 2016-04-16 15:23:01.190000+00:00 | null | null | 36,262,860 | <p>CNN seems to be implemented mostly for fixed size input. Now I want to use CNN to train some sentences of different size, what are some common methods?</p> | 2016-03-28 12:58:01.423000+00:00 | 2017-02-16 13:44:29.230000+00:00 | null | machine-learning|nlp|deep-learning | ['http://arxiv.org/abs/1406.4729', 'https://github.com/ShaoqingRen/SPP_net'] | 2 |
50,051,683 | <p>I believe <a href="https://arxiv.org/" rel="nofollow noreferrer">https://arxiv.org/</a> is the best source for scientific papers.<br> There are no particular datasets, but you can write a simple crawler and download articles for particular topics.</p> | 2018-04-26 20:54:05.460000+00:00 | 2018-04-26 20:54:05.460000+00:00 | null | null | 50,049,436 | <p>i'm new in data science and trying to make application that classify the scientific papers to (AI,Machine Learning, NLP,...)
I spent a lot of time trying to find a dataset of scientific papers but could not find one.
any help to find the data set? </p> | 2018-04-26 18:05:50.273000+00:00 | 2018-04-26 20:54:05.460000+00:00 | null | nlp | ['https://arxiv.org/'] | 1 |
53,023,449 | <p>I think <code>dReal</code> (<a href="http://dreal.github.io/" rel="nofollow noreferrer">http://dreal.github.io/</a>) is the only solver that provides support for ODEs, though I'm not an expert on this.</p>
<p>Also, see this paper for further details: <a href="https://arxiv.org/pdf/1310.8278.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1310.8278.pdf</a></p> | 2018-10-27 15:28:06.750000+00:00 | 2018-10-27 15:34:35.260000+00:00 | 2018-10-27 15:34:35.260000+00:00 | null | 53,012,608 | <p>if so, how does it work? I tried to find information on z3 about differential equations but I didn't find anything.</p> | 2018-10-26 16:09:46.793000+00:00 | 2022-03-17 09:22:41.267000+00:00 | 2022-03-17 09:22:41.267000+00:00 | ode|smt | ['http://dreal.github.io/', 'https://arxiv.org/pdf/1310.8278.pdf'] | 2 |
53,179,120 | <p>Found the explanation of the AURA algorithm in this document: <a href="https://arxiv.org/pdf/1805.03490.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1805.03490.pdf</a></p>
<p>Pages 24 -> 26</p> | 2018-11-06 19:58:18.540000+00:00 | 2018-11-06 19:58:18.540000+00:00 | null | null | 53,177,404 | <p>I implemented a Parity Proof of Authority private chain, running 2 authority nodes and 3 users nodes.</p>
<p>I managed to credit ether to the user nodes, and with one of the user nodes I made a transaction... and it worked!</p>
<p>The thing is, I don't understand how the authority mined this transaction, since they have to validate it - but how can they judge whether the transaction should be mined or not?</p>
<p>I hope that I have been clear enough :)</p> | 2018-11-06 17:57:22.453000+00:00 | 2018-11-06 19:58:18.540000+00:00 | null | ethereum|mining|parity | ['https://arxiv.org/pdf/1805.03490.pdf'] | 1 |
42,596,148 | <p>The problem is that you are training from scratch.</p>
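<p>A minimal pycaffe sketch of the fix discussed below - fine-tuning from ImageNet-pretrained weights instead of random initialisation - might look like this (the file names are placeholders):</p>
<pre><code>import caffe

caffe.set_mode_gpu()
solver = caffe.SGDSolver("solver.prototxt")
# start from ImageNet-pretrained weights instead of random initialisation
solver.net.copy_from("vgg16_imagenet.caffemodel")   # placeholder path to a pretrained model
solver.solve()
</code></pre>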
<p>Reading the <a href="https://arxiv.org/abs/1605.06211#" rel="nofollow noreferrer">FCN paper</a> will tell you that they always use networks that are pretrained on ImageNet, it will <strong>NOT</strong> work if you train it from scratch, it has to be finetuned from a pretrained network. The optimization problem if you train from random weights just doesn't converge.</p> | 2017-03-04 12:41:23.893000+00:00 | 2017-03-04 12:41:23.893000+00:00 | null | null | 42,571,445 | <p>Now it is quite a long time (almost two months) that I was working on FCN32 for semantic segmentation of single channel images. I played around with different learning rates and even adding <code>BatchNormalization</code> layer. However, I was not successful to even see any output. I did not have any choice except to instantly ask for help here. I really do not know what I am doing wrong. </p>
<p>I am sending one image to the network as a batch. This is the train-loss curve with <code>LR=1e-9</code> and <code>lr_policy="fixed"</code>:
<a href="https://i.stack.imgur.com/cPiqD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cPiqD.png" alt="enter image description here"></a></p>
<p>I increased the learning rate to <code>1e-4</code> (the following figure). It seems that the loss is falling, however, the learning curve is not behaving normally.
<a href="https://i.stack.imgur.com/Pf15Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pf15Z.png" alt="enter image description here"></a></p>
<p>I reduced the layers of the original FCN as follows:
(1) Conv64 → ReLU → Conv64 → ReLU → MaxPool</p>
<p>(2) Conv128 → ReLU → Conv128 → ReLU → MaxPool</p>
<p>(3) Conv256 → ReLU → Conv256 → ReLU → MaxPool </p>
<p>(4) Conv4096 → ReLU → Dropout0.5</p>
<p>(5) Conv4096 → ReLU → Dropout0.5</p>
<p>(6) Conv2 </p>
<p>(7) Deconv32x → Crop</p>
<p>(8) SoftmaxWithLoss</p>
<pre><code>layer {
name: "data"
type: "Data"
top: "data"
include {
phase: TRAIN
}
transform_param {
mean_file: "/jjj/FCN32_mean.binaryproto"
}
data_param {
source: "/jjj/train_lmdb/"
batch_size: 1
backend: LMDB
}
}
layer {
name: "label"
type: "Data"
top: "label"
include {
phase: TRAIN
}
data_param {
source: "/jjj/train_label_lmdb/"
batch_size: 1
backend: LMDB
}
}
layer {
name: "data"
type: "Data"
top: "data"
include {
phase: TEST
}
transform_param {
mean_file: "/jjj/FCN32_mean.binaryproto"
}
data_param {
source: "/jjj/val_lmdb/"
batch_size: 1
backend: LMDB
}
}
layer {
name: "label"
type: "Data"
top: "label"
include {
phase: TEST
}
data_param {
source: "/jjj/val_label_lmdb/"
batch_size: 1
backend: LMDB
}
}
layer {
name: "conv1_1"
type: "Convolution"
bottom: "data"
top: "conv1_1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 100
kernel_size: 3
stride: 1
}
}
layer {
name: "relu1_1"
type: "ReLU"
bottom: "conv1_1"
top: "conv1_1"
}
layer {
name: "conv1_2"
type: "Convolution"
bottom: "conv1_1"
top: "conv1_2"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
stride: 1
}
}
layer {
name: "relu1_2"
type: "ReLU"
bottom: "conv1_2"
top: "conv1_2"
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1_2"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2_1"
type: "Convolution"
bottom: "pool1"
top: "conv2_1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
stride: 1
}
}
layer {
name: "relu2_1"
type: "ReLU"
bottom: "conv2_1"
top: "conv2_1"
}
layer {
name: "conv2_2"
type: "Convolution"
bottom: "conv2_1"
top: "conv2_2"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
stride: 1
}
}
layer {
name: "relu2_2"
type: "ReLU"
bottom: "conv2_2"
top: "conv2_2"
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2_2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv3_1"
type: "Convolution"
bottom: "pool2"
top: "conv3_1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
stride: 1
}
}
layer {
name: "relu3_1"
type: "ReLU"
bottom: "conv3_1"
top: "conv3_1"
}
layer {
name: "conv3_2"
type: "Convolution"
bottom: "conv3_1"
top: "conv3_2"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
stride: 1
}
}
layer {
name: "relu3_2"
type: "ReLU"
bottom: "conv3_2"
top: "conv3_2"
}
layer {
name: "pool3"
type: "Pooling"
bottom: "conv3_2"
top: "pool3"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "fc6"
type: "Convolution"
bottom: "pool3"
top: "fc6"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 4096
pad: 0
kernel_size: 7
stride: 1
}
}
layer {
name: "relu6"
type: "ReLU"
bottom: "fc6"
top: "fc6"
}
layer {
name: "drop6"
type: "Dropout"
bottom: "fc6"
top: "fc6"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "fc7"
type: "Convolution"
bottom: "fc6"
top: "fc7"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 4096
pad: 0
kernel_size: 1
stride: 1
}
}
layer {
name: "relu7"
type: "ReLU"
bottom: "fc7"
top: "fc7"
}
layer {
name: "drop7"
type: "Dropout"
bottom: "fc7"
top: "fc7"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "score_fr"
type: "Convolution"
bottom: "fc7"
top: "score_fr"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 5 #21
pad: 0
kernel_size: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "upscore"
type: "Deconvolution"
bottom: "score_fr"
top: "upscore"
param {
lr_mult: 0
}
convolution_param {
num_output: 5 #21
bias_term: false
kernel_size: 64
stride: 32
group: 5 #2
weight_filler: {
type: "bilinear"
}
}
}
layer {
name: "score"
type: "Crop"
bottom: "upscore"
bottom: "data"
top: "score"
crop_param {
axis: 2
offset: 19
}
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "score"
bottom: "label"
top: "accuracy"
include {
phase: TRAIN
}
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "score"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "score"
bottom: "label"
top: "loss"
loss_param {
ignore_label: 255
normalize: true
}
}
</code></pre>
<p>and this is the solver definition:</p>
<pre><code>net: "train_val.prototxt"
#test_net: "val.prototxt"
test_iter: 736
# make test net, but don't invoke it from the solver itself
test_interval: 2000 #1000000
display: 50
average_loss: 50
lr_policy: "step" #"fixed"
stepsize: 2000 #+
gamma: 0.1 #+
# lr for unnormalized softmax
base_lr: 0.0001
# high momentum
momentum: 0.99
# no gradient accumulation
iter_size: 1
max_iter: 10000
weight_decay: 0.0005
snapshot: 2000
snapshot_prefix: "snapshot/NET1"
test_initialization: false
solver_mode: GPU
</code></pre>
<p>At the beginning, the loss is starting to fall down, but again after some iterations, it is not showing good learning behavior:
<a href="https://i.stack.imgur.com/9EhNh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9EhNh.png" alt="enter image description here"></a></p>
<p>I am a beginner in deep learning and <code>caffe</code>. I really do not understand why this happens. I really appreciate if those that have expertise, please have a look on the model definition and I will be very thankful if you help me. </p> | 2017-03-03 05:38:22.930000+00:00 | 2017-03-04 12:41:23.893000+00:00 | null | deep-learning|caffe|pycaffe|matcaffe | ['https://arxiv.org/abs/1605.06211#'] | 1 |
60,979,228 | <p>Richard Critten's comment says:</p>
<blockquote>
<p>"...Some implementations may occasionally return infinity if RealType is float. This is LWG issue 2524..." source (has a link to the issue): <a href="https://en.cppreference.com/w/cpp/numeric/random/exponential_distribution" rel="nofollow noreferrer">https://en.cppreference.com/w/cpp/numeric/random/exponential_distribution</a></p>
</blockquote>
<p>Switching to <code>double</code> doesn't fix this problem, but rather, it merely reduces the chance of seeing it (or at least makes it negligible). As the LWG issue points out, the underlying issue is that <code>generate_canonical</code> (which some implementations could make use of) might return 1.0 in rare cases, so that <code>-log(1-generate_canonical())</code> could output infinity. In practice, this was more likely with <code>float</code> than with <code>double</code> since there are considerably fewer numbers that <code>generate_canonical</code> could produce with <code>float</code> than with <code>double</code> (in practice, 2^24 as opposed to 2^53). (In any case, there are other problems with naive implementations of right-tailed distributions, such as the exponential distribution; see "<a href="https://arxiv.org/abs/1704.07949" rel="nofollow noreferrer">Reconditioning your quantile function</a>".)</p> | 2020-04-01 19:53:26.243000+00:00 | 2020-04-01 20:02:05.470000+00:00 | 2020-04-01 20:02:05.470000+00:00 | null | 60,975,271 | <p>In a project, I am generating millions of expo(lambda) random variables, where lambda is potentially very large. When using <code>std::exponential_distribution<float></code> I occasionally get a return value of <code>inf</code>. I would understand this if lambda were close to 0, but when lambda is large a value very close to zero is expected. For example, the following program usually terminates after several million draws:</p>
<pre><code>#include<iostream>
#include<random>
int main(void)
{
std::random_device rd;
std::mt19937 generator(rd());
float lambda = 1000000.0;
float val = 0;
for(int i = 0; i < 1000000000; ++i)
{
std::exponential_distribution<float> dist(lambda);
val = dist(generator);
if(isinf(val))
{
std::cout << i << " " << val << " failure" << std::endl;
exit(0);
}
}
return 0;
}
</code></pre>
<p>If there is some error (due to precision) in the function, why does it return <code>inf</code> instead of the more convenient <code>0.0</code>? Is there any way to fix this, besides for manually checking that the output is finite? Thanks. </p> | 2020-04-01 16:04:16.633000+00:00 | 2020-04-01 20:02:05.470000+00:00 | null | c++|random | ['https://en.cppreference.com/w/cpp/numeric/random/exponential_distribution', 'https://arxiv.org/abs/1704.07949'] | 2 |
52,920,547 | <p>Your task is related to object detection. The difference is that you seem to have only one object in each of your images, whereas in detection there may be multiple objects or no object present. For object detection, there are networks such as YOLOv3 (<a href="https://pjreddie.com/media/files/papers/YOLOv3.pdf" rel="nofollow noreferrer">https://pjreddie.com/media/files/papers/YOLOv3.pdf</a>) or Single Shot Multibox Detector - SSD (<a href="https://arxiv.org/pdf/1512.02325.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1512.02325.pdf</a>), but also ResNet can be trained as an object detection network (as in this paper: <a href="https://arxiv.org/pdf/1506.01497.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.01497.pdf</a>).</p>
<p>I will shortly describe how YOLO solves the regression problem for bounding box x,y coordinates: </p>
<ul>
<li>YOLO uses a sigmoid activation function for x,y</li>
<li>It divides the image into grid cells and predicts offsets for a potential object in each grid-cell (see the short decoding sketch after this list). This may be helpful in case you have large images or objects at multiple locations.</li>
<li>The original paper uses MSE as a loss function, but in my favorite keras-reimplementation they use crossentropy loss with the Adam optimizer.</li>
</ul>
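<p>A hedged numpy sketch of how such grid-cell offsets are typically decoded back into normalised image coordinates (following the YOLOv2/v3 formulation, where the centre is sigmoid(offset) plus the cell index, divided by the grid size):</p>
<pre><code>import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def decode_centre(tx, ty, cx, cy, S):
    # raw offsets (tx, ty) predicted for grid cell (cx, cy) of an S x S grid
    bx = (cx + sigmoid(tx)) / S      # the sigmoid keeps the predicted centre inside its cell
    by = (cy + sigmoid(ty)) / S
    return bx, by                    # normalised image coordinates in [0, 1]

print(decode_centre(0.2, -1.3, cx=3, cy=5, S=13))
</code></pre>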
<p>In principle your setup looks fine to me. But there are many things which could result in poor performance, since you don't say much about the domain of your dataset: Are you using a pretrained network or are you training from scratch? Is it a new category which you are trying to learn or an object category the network has seen before? etc. </p>
<p>Here are some ideas which you could try:</p>
<ul>
<li>change the optimizer (to SGD or Adam)</li>
<li>change the learning rate (better too small than too large) </li>
<li>increase your dataset size. For retraining the network for a new object category my rule of thumb is to use about 500-1000 images. For retraining from scratch you need orders of magnitude more.</li>
<li>you may want to check out YOLO or SSD and modify those networks for your case</li>
</ul>
<p>I hope you find some inspiration for your solution.</p> | 2018-10-21 22:41:53.467000+00:00 | 2018-10-21 22:41:53.467000+00:00 | null | null | 52,918,613 | <p>Final objective: Object Midpoint calculation. </p>
<p>I have a small dataset (around 120 images), which has an object (the same in all cases), and the labels are the normalized x,y coordinates of the midpoint of the object in the image (always between 0 and 1)</p>
<p>e.g. x = image_005 ; y = (0.1, 0.15) for an image with the object placed near the bottom left corner</p>
<p>I am trying to use a ResNet architecture but customized for my image-size (all are identical images). Since the output values are always between 0 and 1, for both coordinates, I was wondering if it is possible to use Sigmoid activation in my last layer: </p>
<pre><code> X = Dense(2, activation='sigmoid', name='fc', kernel_initializer = glorot_uniform(seed=0))(X)
</code></pre>
<p>instead of a linear activation (as is advised often when you are trying to achieve a regression result)</p>
<p>For the loss function, I use MSE, with 'rmsprop' optimizer and in addition to accuracy and MSE, I have written a custom metric for telling me if the predicted points are off from the labels by more than 5%</p>
<pre><code>model.compile(optimizer='rmsprop', loss='mean_squared_error', metrics=['mse','acc',perc_midpoint_err])
</code></pre>
<p>I am not getting good results, after training the model on around 150 epochs (I experimented with different batch sizes too)</p>
<p>Should I change the activation layer to linear? Or is there a different modification I can do to my model? Or is ResNet completely unsuitable for this task?</p> | 2018-10-21 18:38:32.577000+00:00 | 2019-09-11 21:47:06.907000+00:00 | 2018-10-21 18:51:37.200000+00:00 | machine-learning|keras|deep-learning|conv-neural-network|activation-function | ['https://pjreddie.com/media/files/papers/YOLOv3.pdf', 'https://arxiv.org/pdf/1512.02325.pdf', 'https://arxiv.org/pdf/1506.01497.pdf'] | 3 |
64,899,431 | <p>Besides the von Neumann procedure given in other answers, there is a whole family of techniques, called <em>randomness extraction</em> (also known as <em>debiasing</em>, <em>deskewing</em>, or <em>whitening</em>), that serve to produce unbiased random bits from random numbers of unknown bias. They include Peres's (1992) iterated von Neumann procedure, as well as an "extractor tree" by Zhou and Bruck (2012). Both methods (and several others) are <em>asymptotically optimal</em>, that is, their efficiency (in terms of output bits per input) approaches the optimal limit as the number of inputs gets large (Pae 2018).</p>
<p>For example, the Peres extractor takes a list of bits (zeros and ones with the same bias) as input and is described as follows (a short Python sketch appears after the steps):</p>
<ol>
<li>Create two empty lists named U and V. Then, while two or more bits remain in the input:
<ul>
<li>If the next two bits are 0/0, append 0 to U and 0 to V.</li>
<li>Otherwise, if those bits are 0/1, append 1 to U, then write a 0.</li>
<li>Otherwise, if those bits are 1/0, append 1 to U, then write a 1.</li>
<li>Otherwise, if those bits are 1/1, append 0 to U and 1 to V.</li>
</ul>
</li>
<li>Run this algorithm recursively, reading from the bits placed in U.</li>
<li>Run this algorithm recursively, reading from the bits placed in V.</li>
</ol>
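<p>A direct, hedged Python translation of the steps above (the recursion simply stops when fewer than two bits remain):</p>
<pre><code>import random

def peres(bits):
    # iterated von Neumann extraction (Peres 1992); bits is a list of 0/1 values
    if len(bits) < 2:
        return []
    out, u, v = [], [], []
    for x, y in zip(bits[0::2], bits[1::2]):   # read the input two bits at a time
        if x != y:
            out.append(x)    # von Neumann step: an unequal pair emits its first bit
            u.append(1)
        else:
            u.append(0)
            v.append(x)
    return out + peres(u) + peres(v)           # recurse on the two derived sequences

biased = [1 if random.random() < 0.9 else 0 for _ in range(10000)]
unbiased = peres(biased)
print(len(unbiased), sum(unbiased) / len(unbiased))   # the mean should be close to 0.5
</code></pre>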
<p>This is not to mention procedures that produce unbiased random bits from biased <em>dice</em> or other biased random numbers (not just biased bits); see, e.g., Camion (1974).</p>
<p>I discuss more on randomness extractors in a <a href="https://peteroupc.github.io/randextract.html" rel="nofollow noreferrer">note on randomness extraction</a>.</p>
<p>REFERENCES:</p>
<ul>
<li>Peres, Y., "Iterating von Neumann's procedure for extracting random bits", Annals of Statistics 1992,20,1, p. 590-597.</li>
<li>Zhou, H. and Bruck, J., "<a href="https://arxiv.org/abs/1209.0730" rel="nofollow noreferrer">Streaming algorithms for optimal generation of random bits</a>", arXiv:1209.0730 [cs.IT], 2012.</li>
<li>S. Pae, "<a href="https://arxiv.org/abs/1602.06058v2" rel="nofollow noreferrer">Binarization Trees and Random Number Generation</a>", arXiv:1602.06058v2 [cs.DS].</li>
<li>Camion, Paul, "Unbiased die rolling with a biased die", North Carolina State University, Dept. of Statistics, 1974.</li>
</ul> | 2020-11-18 18:48:03.157000+00:00 | 2020-11-21 20:42:11.623000+00:00 | 2020-11-21 20:42:11.623000+00:00 | null | 1,986,859 | <p>You have a biased random number generator that produces a 1 with a probability p and 0 with a probability (1-p). You do not know the value of p. Using this make an unbiased random number generator which produces 1 with a probability 0.5 and 0 with a probability 0.5.</p>
<p><strong>Note</strong>: this problem is an exercise problem from Introduction to Algorithms by Cormen, Leiserson, Rivest, Stein.(clrs)</p> | 2009-12-31 19:45:24.833000+00:00 | 2020-11-21 20:42:11.623000+00:00 | 2016-04-24 21:53:59.417000+00:00 | algorithm|random|probability|clrs | ['https://peteroupc.github.io/randextract.html', 'https://arxiv.org/abs/1209.0730', 'https://arxiv.org/abs/1602.06058v2'] | 3 |
36,086,814 | <p>It sounds like you should go for one of the Collaborative Filtering (CF) algorithms, as users have explicit feedback in the form of ratings. First, I would suggest implementing a simple item/user-based k-Nearest Neighbours algorithm. If the results do not satisfy you, or maybe your data is very sparse, matrix factorization techniques should do the trick. A good recent survey which I read was [1] - it presents the different methods on different data settings. </p>
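<p>A tiny, purely illustrative numpy sketch of the user-based k-NN idea on a toy rating matrix (cosine similarity and a weighted average of the neighbours' ratings; all numbers are made up):</p>
<pre><code>import numpy as np

# toy user x item rating matrix, 0 = unrated
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)

def predict_user_based(R, user, item, k=2):
    # cosine similarity of every user to the target user
    norms = np.linalg.norm(R, axis=1)
    sims = R @ R[user] / (norms * norms[user] + 1e-9)
    sims[user] = -1.0                      # exclude the user themself
    rated = np.where(R[:, item] > 0)[0]    # neighbours who actually rated the item
    top = rated[np.argsort(sims[rated])[-k:]]
    if len(top) == 0 or sims[top].sum() <= 0:
        return R[R > 0].mean()             # fall back to the global mean rating
    return float(sims[top] @ R[top, item] / sims[top].sum())

print(predict_user_based(R, user=1, item=1))
</code></pre>
<p>For anything beyond a toy example, a library such as the mrec package linked below is the more sensible route.</p>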
<p>If you feel comfortable with this and you realize that what you actually need is a ranked list of Top-N predictions rather than ratings, I would suggest reading about e.g. Bayesian Personalized Ranking [2]. </p>
<p>And the best part is - those algorithms are really well known and most of them are available for almost every programming language, e.g. python -> <a href="https://github.com/Mendeley/mrec/" rel="nofollow">https://github.com/Mendeley/mrec/</a></p>
<p>[1] J. Lee, M. Sun, and G. Lebanon, "A Comparative Study of Collaborative Filtering Algorithms," arXiv, pp. 1-27, 2012. </p>
<p>[2] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-thieme, βBPR : Bayesian Personalized Ranking from Implicit Feedback,β in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 2009, vol. cs.LG, pp. 452β461.</p> | 2016-03-18 14:23:07.700000+00:00 | 2016-03-18 14:23:07.700000+00:00 | null | null | 36,085,890 | <p>I have crawled MTurk website. and I have 260 Hits as a dataset and from this dataset particular number of users has selected Hits and assigned ratings to each selected Hits. now I want to give recommendation to these users on basis of their selection. <strong>How it is possible ? Can anyone recommend me any recommendation algorithm ?</strong></p> | 2016-03-18 13:37:50.913000+00:00 | 2016-03-18 14:23:07.700000+00:00 | 2016-03-18 13:47:46.693000+00:00 | recommendation-engine|collaborative-filtering|crowdsourcing|content-based-retrieval|pearson-correlation | ['https://github.com/Mendeley/mrec/'] | 1 |
71,626,837 | <p>I have the same question, and I think their claim is a little bit misleading. I will explain my understanding.</p>
<p>First, <code>Q</code>, <code>K</code>, <code>V</code> here are only symbols used to distinguish the roles in the attention formula; it does not mean they have to be different. For vanilla self-attention, they are actually all equal to the input X, without any linear projection/transformation.</p>
<p>This can be seen in Figure 2 of the <a href="https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf" rel="nofollow noreferrer">transformer paper</a>: the linear layer is not included in the dot-product attention layer (left), while in multi-head attention there is an additional linear layer before the dot-product attention (right). So their complexity result is for vanilla self-attention, without any linear projection, i.e. <code>Q</code>=<code>K</code>=<code>V</code>=<code>X</code>.</p>
<p>Also, in these <a href="https://nlp.stanford.edu/seminar/details/lkaiser.pdf" rel="nofollow noreferrer">slides</a> from one of the authors of the transformer paper you can see clearly that <code>O(n^2 d)</code> is only for the dot-product attention, without the linear projection, while the complexity of multi-head attention is actually <code>O(n^2 d+n d^2)</code>.</p>
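<p>A small numpy sketch of the distinction (purely illustrative; the shapes follow the notation of the question):</p>
<pre><code>import numpy as np

n, d = 100, 1000                       # sequence length, model dimension
X = np.random.randn(n, d)

# vanilla dot-product self-attention with Q = K = V = X (no projections):
scores = X @ X.T / np.sqrt(d)          # (n, n)  -> O(n^2 * d) multiply-adds
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ X                      # (n, d)  -> O(n^2 * d) again

# the multi-head variant first applies learned (d, d) projections, e.g. X @ W_q,
# which is exactly the extra O(n * d^2) term discussed in the question
</code></pre>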
<p>Also I don't think the argument of <a href="https://stackoverflow.com/a/65794564/4129549">@igrinis</a> is correct. Although it didn't require to calculate QKV in original <a href="https://arxiv.org/pdf/1409.0473.pdf" rel="nofollow noreferrer">attention paper</a>, the complexity of alignment model(MLP) is actually <code>O(d^2)</code> for each pair of value, so total complexity of attention layer is <code>O(n^2Β·d^2)</code>, even larger than the QKV attention.</p> | 2022-03-26 09:29:13.800000+00:00 | 2022-03-26 09:29:13.800000+00:00 | null | null | 65,703,260 | <p>I recently went through the <a href="https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf" rel="noreferrer">Transformer</a> paper from Google Research describing how self-attention layers could completely replace traditional RNN-based sequence encoding layers for machine translation. In Table 1 of the paper, the authors compare the computational complexities of different sequence encoding layers, and state (later on) that self-attention layers are faster than RNN layers when the sequence length <code>n</code> is smaller than the dimensionality of the vector representations <code>d</code>.</p>
<p>However, the self-attention layer seems to have an inferior complexity than claimed if my understanding of the computations is correct. Let <code>X</code> be the input to a self-attention layer. Then, <code>X</code> will have shape <code>(n, d)</code> since there are <code>n</code> word-vectors (corresponding to rows) each of dimension <code>d</code>. Computing the output of self-attention requires the following steps (consider single-headed self-attention for simplicity):</p>
<ol>
<li>Linearly transforming the rows of <code>X</code> to compute the query <code>Q</code>, key <code>K</code>, and value <code>V</code> matrices, each of which has shape <code>(n, d)</code>. This is accomplished by post-multiplying <code>X</code> with 3 learned matrices of shape <code>(d, d)</code>, amounting to a computational complexity of <code>O(n d^2)</code>.</li>
<li>Computing the layer output, specified in Equation 1 of the paper as <code>SoftMax(Q Kt / sqrt(d)) V</code>, where the softmax is computed over each row. Computing <code>Q Kt</code> has complexity <code>O(n^2 d)</code>, and post-multiplying the resultant with <code>V</code> has complexity <code>O(n^2 d)</code> as well.</li>
</ol>
<p>Therefore, the total complexity of the layer is <code>O(n^2 d + n d^2)</code>, which is worse than that of a traditional RNN layer. I obtained the same result for multi-headed attention too, on considering the appropriate intermediate representation dimensionalities (<code>dk</code>, <code>dv</code>) and finally multiplying by the number of heads <code>h</code>.</p>
<p>Why have the authors ignored the cost of computing the Query, Key, and Value matrices while reporting total computational complexity?</p>
<p>I understand that the proposed layer is fully parallelizable across the <code>n</code> positions, but I believe that Table 1 does not take this into account anyway.</p> | 2021-01-13 13:47:10.647000+00:00 | 2022-03-26 09:29:13.800000+00:00 | 2021-01-13 19:48:02.180000+00:00 | machine-learning|deep-learning|neural-network|nlp|artificial-intelligence | ['https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf', 'https://nlp.stanford.edu/seminar/details/lkaiser.pdf', 'https://stackoverflow.com/a/65794564/4129549', 'https://arxiv.org/pdf/1409.0473.pdf'] | 4 |
65,794,564 | <p>First, you are correct in your complexity calculations. So, what is the source of confusion?</p>
<p>When the original <a href="https://arxiv.org/pdf/1409.0473.pdf" rel="noreferrer">Attention paper</a> was first introduced, it didn't require calculating the <code>Q</code>, <code>V</code> and <code>K</code> matrices, as the values were taken directly from the hidden states of the RNNs, and thus the complexity of the Attention layer <strong>is</strong> <code>O(n^2·d)</code>.</p>
<p>Now, to understand what <code>Table 1</code> contains, please keep in mind how most people scan papers: they read the title, the abstract, then look at the figures and tables. Only then, if the results were interesting, do they read the paper more thoroughly. So, the main idea of the <code>Attention is all you need</code> paper was to replace the RNN layers completely with an attention mechanism in the seq2seq setting, because RNNs were really slow to train. If you look at <code>Table 1</code> in this context, you see that it compares RNN, CNN and Attention and highlights the motivation for the paper: using Attention should have been beneficial over RNNs and CNNs. It should have been advantageous in 3 aspects: a constant number of calculation steps, a constant number of operations <strong>and</strong> lower computational complexity for the usual Google setting, where <code>n ~= 100</code> and <code>d ~= 1000</code>. But like any idea, it hit the hard wall of reality. And in reality, in order for that great idea to work, they had to add positional encoding, reformulate the Attention and add multiple heads to it. The result is the Transformer architecture which, while it has the computational complexity of <code>O(n^2·d + n·d^2)</code>, is still much faster than an RNN (in the sense of wall-clock time), and produces better results.</p>
<p>So the answer to your question is that the attention layer the authors refer to in <code>Table 1</code> is strictly the attention mechanism. It is not the complexity of the Transformer. They are very well aware of the complexity of their model (I quote):</p>
<blockquote>
<p>Separable convolutions [6], however, decrease the complexity
considerably, to <code>O(k·n·d + n·d^2)</code>. Even with <code>k = n</code>, however, the
complexity of a separable convolution is equal to the combination of a
self-attention layer and a point-wise feed-forward layer, the approach
we take in our model.</p>
</blockquote> | 2021-01-19 15:32:05.807000+00:00 | 2021-01-22 20:55:59.287000+00:00 | 2021-01-22 20:55:59.287000+00:00 | null | 65,703,260 | <p>I recently went through the <a href="https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf" rel="noreferrer">Transformer</a> paper from Google Research describing how self-attention layers could completely replace traditional RNN-based sequence encoding layers for machine translation. In Table 1 of the paper, the authors compare the computational complexities of different sequence encoding layers, and state (later on) that self-attention layers are faster than RNN layers when the sequence length <code>n</code> is smaller than the dimensionality of the vector representations <code>d</code>.</p>
<p>However, the self-attention layer seems to have an inferior complexity than claimed if my understanding of the computations is correct. Let <code>X</code> be the input to a self-attention layer. Then, <code>X</code> will have shape <code>(n, d)</code> since there are <code>n</code> word-vectors (corresponding to rows) each of dimension <code>d</code>. Computing the output of self-attention requires the following steps (consider single-headed self-attention for simplicity):</p>
<ol>
<li>Linearly transforming the rows of <code>X</code> to compute the query <code>Q</code>, key <code>K</code>, and value <code>V</code> matrices, each of which has shape <code>(n, d)</code>. This is accomplished by post-multiplying <code>X</code> with 3 learned matrices of shape <code>(d, d)</code>, amounting to a computational complexity of <code>O(n d^2)</code>.</li>
<li>Computing the layer output, specified in Equation 1 of the paper as <code>SoftMax(Q Kt / sqrt(d)) V</code>, where the softmax is computed over each row. Computing <code>Q Kt</code> has complexity <code>O(n^2 d)</code>, and post-multiplying the resultant with <code>V</code> has complexity <code>O(n^2 d)</code> as well.</li>
</ol>
<p>Therefore, the total complexity of the layer is <code>O(n^2 d + n d^2)</code>, which is worse than that of a traditional RNN layer. I obtained the same result for multi-headed attention too, on considering the appropriate intermediate representation dimensionalities (<code>dk</code>, <code>dv</code>) and finally multiplying by the number of heads <code>h</code>.</p>
<p>Why have the authors ignored the cost of computing the Query, Key, and Value matrices while reporting total computational complexity?</p>
<p>I understand that the proposed layer is fully parallelizable across the <code>n</code> positions, but I believe that Table 1 does not take this into account anyway.</p> | 2021-01-13 13:47:10.647000+00:00 | 2022-03-26 09:29:13.800000+00:00 | 2021-01-13 19:48:02.180000+00:00 | machine-learning|deep-learning|neural-network|nlp|artificial-intelligence | ['https://arxiv.org/pdf/1409.0473.pdf'] | 1 |
65,793,599 | <p>You cannot compare this to a traditional RNN encoder-decoder; the architecture described in the paper is meant to improve upon the classical <code>Attention Mechanism</code> first established in this <a href="https://arxiv.org/pdf/1409.0473.pdf" rel="nofollow noreferrer">paper</a>.</p>
<p>In its initial form, the attention mechanism relied on a neural network trained to retrieve the relevant hidden states of the encoder. Instead of relying on a fixed retrieval strategy (for instance: using the last hidden state), you allow the system some control over the process.</p>
<p><a href="https://i.stack.imgur.com/aW7nJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aW7nJ.png" alt="enter image description here" /></a></p>
<p>There is already a very good post on StackExchange explaining the differences in computational complexity <a href="https://stats.stackexchange.com/questions/421935/what-exactly-are-keys-queries-and-values-in-attention-mechanisms">here</a>.</p>
<p>The paper you are describing is "replacing" this neural network with a dot product between two arrays, which is less demanding computationally than having to train a neural network, and relatively more efficient. But it's not meant to be more efficient than a regular RNN-based auto-encoder without attention.</p>
<blockquote>
<p>How is this any less demanding computationally?</p>
</blockquote>
<p>In a traditional RNN / LSTM based auto-encoder, each time step is encoded into a vector <code>h</code>. The decoder usually (again, there are a lot of different architectures, but that's the basic one) takes the last vector <code>h</code> as input to produce the output sequence.</p>
<p><a href="https://i.stack.imgur.com/GxX0N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GxX0N.png" alt="enter image description here" /></a></p>
<p>In this scenario there is no attention mechanism; the decoder simply reads the last encoded state.
The problem with this architecture is that as your sequence gets longer, all the relevant information gets squeezed into the last encoded state <code>h(t)</code> and you lose relevant information.</p>
<p><strong>Introducing attention mechanism</strong></p>
<p>As described in the paper above, the original attention mechanism aims at circumventing this limitation by allowing the decoder to access not only the last encoded state but any of the previous encoded states, and to combine them in order to improve the prediction.</p>
<p>For each time step, a probability vector <code>alpha</code> is computed by a neural network to choose which encoded states to retrieve (a small numerical sketch follows the quote below):</p>
<blockquote>
<p>If we restrict α to be a one-hot vector, this operation becomes the same as retrieving from a set of elements h with index α. With the restriction removed, the attention operation can be thought of as doing "proportional retrieval" according to the probability vector α</p>
</blockquote>
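<p>A hedged numpy sketch of that step in the additive (Bahdanau-style) formulation; the weight matrices would be learned in practice and are random placeholders here:</p>
<pre><code>import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

n, d, d_att = 10, 64, 32
h = np.random.randn(n, d)          # the n encoder hidden states
s = np.random.randn(d)             # the current decoder state
W_h = np.random.randn(d, d_att)    # learned parameters in practice, random here
W_s = np.random.randn(d, d_att)
v = np.random.randn(d_att)

scores = np.tanh(h @ W_h + s @ W_s) @ v   # alignment scores from a tiny MLP, shape (n,)
alpha = softmax(scores)                    # the probability vector over encoder states
context = alpha @ h                        # "proportional retrieval" of the hidden states
</code></pre>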
<p>I won't copy-paste the SE <a href="https://stats.stackexchange.com/questions/421935/what-exactly-are-keys-queries-and-values-in-attention-mechanisms">post</a>; there you'll find the explanation of why the dot-product method is computationally more efficient than the neural network.</p>
<p><strong>Conclusion</strong></p>
<p>The key take away is that you cannot compare this to a traditional RNN encoder-decoder because there are no <code>attention mechanism</code> in such network. It would be like comparing CNN with LSTM layer, these are just different architecture.</p> | 2021-01-19 14:32:03.637000+00:00 | 2021-01-22 14:25:26.650000+00:00 | 2021-01-22 14:25:26.650000+00:00 | null | 65,703,260 | <p>I recently went through the <a href="https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf" rel="noreferrer">Transformer</a> paper from Google Research describing how self-attention layers could completely replace traditional RNN-based sequence encoding layers for machine translation. In Table 1 of the paper, the authors compare the computational complexities of different sequence encoding layers, and state (later on) that self-attention layers are faster than RNN layers when the sequence length <code>n</code> is smaller than the dimensionality of the vector representations <code>d</code>.</p>
<p>However, the self-attention layer seems to have an inferior complexity than claimed if my understanding of the computations is correct. Let <code>X</code> be the input to a self-attention layer. Then, <code>X</code> will have shape <code>(n, d)</code> since there are <code>n</code> word-vectors (corresponding to rows) each of dimension <code>d</code>. Computing the output of self-attention requires the following steps (consider single-headed self-attention for simplicity):</p>
<ol>
<li>Linearly transforming the rows of <code>X</code> to compute the query <code>Q</code>, key <code>K</code>, and value <code>V</code> matrices, each of which has shape <code>(n, d)</code>. This is accomplished by post-multiplying <code>X</code> with 3 learned matrices of shape <code>(d, d)</code>, amounting to a computational complexity of <code>O(n d^2)</code>.</li>
<li>Computing the layer output, specified in Equation 1 of the paper as <code>SoftMax(Q Kt / sqrt(d)) V</code>, where the softmax is computed over each row. Computing <code>Q Kt</code> has complexity <code>O(n^2 d)</code>, and post-multiplying the resultant with <code>V</code> has complexity <code>O(n^2 d)</code> as well.</li>
</ol>
<p>Therefore, the total complexity of the layer is <code>O(n^2 d + n d^2)</code>, which is worse than that of a traditional RNN layer. I obtained the same result for multi-headed attention too, on considering the appropriate intermediate representation dimensionalities (<code>dk</code>, <code>dv</code>) and finally multiplying by the number of heads <code>h</code>.</p>
<p>Why have the authors ignored the cost of computing the Query, Key, and Value matrices while reporting total computational complexity?</p>
<p>I understand that the proposed layer is fully parallelizable across the <code>n</code> positions, but I believe that Table 1 does not take this into account anyway.</p> | 2021-01-13 13:47:10.647000+00:00 | 2022-03-26 09:29:13.800000+00:00 | 2021-01-13 19:48:02.180000+00:00 | machine-learning|deep-learning|neural-network|nlp|artificial-intelligence | ['https://arxiv.org/pdf/1409.0473.pdf', 'https://i.stack.imgur.com/aW7nJ.png', 'https://stats.stackexchange.com/questions/421935/what-exactly-are-keys-queries-and-values-in-attention-mechanisms', 'https://i.stack.imgur.com/GxX0N.png', 'https://stats.stackexchange.com/questions/421935/what-exactly-are-keys-queries-and-values-in-attention-mechanisms'] | 5 |
73,094,973 | <p>To answer your question: static analysis tools (like FlawFinder) can generate a <em>LOT</em> of "false positives".</p>
<p>I Googled to find some quantifiable information for you, and found an interesting article about "DeFP":</p>
<blockquote>
<p><a href="https://arxiv.org/pdf/2110.03296.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2110.03296.pdf</a></p>
<p>Static analysis tools are frequently used to detect potential
vulnerabilities in software systems. However, an inevitable problem of
these tools is their large number of warnings with a high false
positive rate, which consumes time and effort for investigating. In
this paper, we present DeFP, a novel method for ranking static analysis warnings.</p>
<p>Based on the intuition that warnings which have
similar contexts tend to have similar labels (true positive or false
positive), DeFP is built with two BiLSTM models to capture the
patterns associated with the contexts of labeled warnings. After that,
for a set of new warnings, DeFP can calculate and rank them according
to their likelihoods to be true positives (i.e., actual
vulnerabilities).</p>
<p>Our experimental results on a dataset of 10
real-world projects show that using DeFP, by investigating only 60% of
the warnings, developers can find
+90% of actual vulnerabilities. Moreover, DeFP improves the state-of-the-art approach 30% in both Precision and Recall.</p>
</blockquote>
<p>Apparently, the authors built a neural network to analyze FlawFinder results, and rank them.</p>
<p>I doubt DeFP is a practical "solution" for you. But yes: if you think that specific "memcpy()" warning is a "false positive" - then I'm inclined to agree. It very well could be :)</p> | 2022-07-24 00:08:53.110000+00:00 | 2022-07-24 03:12:22.647000+00:00 | 2022-07-24 03:12:22.647000+00:00 | null | 73,094,915 | <p>I am running flawfinder on a set of libraries written in C/C++. I have a lot of generated warnings by flawfinder. My question is that, how much I can rely on these generated warnings? For example, consider the following function from numpy library (<a href="https://github.com/numpy/numpy/blob/4ada0641ed1a50a2473f8061f4808b4b0d68eff5/numpy/f2py/src/fortranobject.c" rel="nofollow noreferrer">https://github.com/numpy/numpy/blob/4ada0641ed1a50a2473f8061f4808b4b0d68eff5/numpy/f2py/src/fortranobject.c</a>):</p>
<pre><code>static PyObject *
fortran_doc(FortranDataDef def)
{
char *buf, *p;
PyObject *s = NULL;
Py_ssize_t n, origsize, size = 100;
if (def.doc != NULL) {
size += strlen(def.doc);
}
origsize = size;
buf = p = (char *)PyMem_Malloc(size);
if (buf == NULL) {
return PyErr_NoMemory();
}
if (def.rank == -1) {
if (def.doc) {
n = strlen(def.doc);
if (n > size) {
goto fail;
}
memcpy(p, def.doc, n);
p += n;
size -= n;
}
else {
n = PyOS_snprintf(p, size, "%s - no docs available", def.name);
if (n < 0 || n >= size) {
goto fail;
}
p += n;
size -= n;
}
}
else {
PyArray_Descr *d = PyArray_DescrFromType(def.type);
n = PyOS_snprintf(p, size, "'%c'-", d->type);
Py_DECREF(d);
if (n < 0 || n >= size) {
goto fail;
}
p += n;
size -= n;
if (def.data == NULL) {
n = format_def(p, size, def) == -1;
if (n < 0) {
goto fail;
}
p += n;
size -= n;
}
else if (def.rank > 0) {
n = format_def(p, size, def);
if (n < 0) {
goto fail;
}
p += n;
size -= n;
}
else {
n = strlen("scalar");
if (size < n) {
goto fail;
}
memcpy(p, "scalar", n);
p += n;
size -= n;
}
}
if (size <= 1) {
goto fail;
}
*p++ = '\n';
size--;
/* p now points one beyond the last character of the string in buf */
#if PY_VERSION_HEX >= 0x03000000
s = PyUnicode_FromStringAndSize(buf, p - buf);
#else
s = PyString_FromStringAndSize(buf, p - buf);
#endif
PyMem_Free(buf);
return s;
fail:
fprintf(stderr, "fortranobject.c: fortran_doc: len(p)=%zd>%zd=size:"
" too long docstring required, increase size\n",
p - buf, origsize);
PyMem_Free(buf);
return NULL;
}
</code></pre>
<p>There are two memcpy() API calls, and flawfinder tells me that:</p>
<pre><code>['vul_fortranobject.c:216: [2] (buffer) memcpy:\\n Does not check for buffer overflows when copying to destination (CWE-120).\\n Make sure destination can always hold the source data.\\n memcpy(p, "scalar", n);']
</code></pre>
<p>I am not sure whether the report is true.</p> | 2022-07-23 23:51:41.053000+00:00 | 2022-07-24 03:12:22.647000+00:00 | 2022-07-23 23:59:11.053000+00:00 | static-analysis|cppcheck|clang-static-analyzer|flawfinder | ['https://arxiv.org/pdf/2110.03296.pdf'] | 1 |
73,347,200 | <p>That is a non-trivial question. In the simplest case, use a 'hook' offered by <code>configure</code> and <code>configure.win</code> to pre-build a (static) library you ship in your sources and then link your package to that.</p>
<p>That said, the <a href="https://rstudio.github.io/r-manuals/r-exts/" rel="nofollow noreferrer"><em>Writing R Extensions</em></a> manual and/or the <a href="https://cran.r-project.org/web/packages/policies.html" rel="nofollow noreferrer"><em>CRAN Repository Policy</em></a> (both of which are <em>the</em> references here) expressed more of a preference for an <em>external</em> library -- which may not be an option here if PCL is too exotic.</p>
<p>As the topic comes up with Rcpp, I wrote a short paper about it (at arXiv <a href="https://arxiv.org/abs/1911.06416" rel="nofollow noreferrer">here</a>) which is also included as a <a href="https://cloud.r-project.org/web/packages/Rcpp/vignettes/Rcpp-libraries.pdf" rel="nofollow noreferrer">vignette in the package</a>. It requires a few pages to cover the common cases but even then it cannot cover all.</p>
<p>Your main source of reference may be CRAN. There are <em>lots</em> of packages in this space. A few of mine use external libraries; I contributed to package <code>nloptr</code> which uses a hybrid approach ("use system library if found, else build") and some like <code>httpuv</code> always build (a small-ish library).</p> | 2022-08-13 19:41:52.980000+00:00 | 2022-08-13 19:48:12.527000+00:00 | 2022-08-13 19:48:12.527000+00:00 | null | 73,346,029 | <p>I'm trying to write a wrapper for a C++ function I've written, making use of the Point Clouds Library (PCL). This is my first try interfacing R and C++, so I apologise if any solution is too trivial. My goal is to make a few functions available for myself and my colleagues directly in R, on <strong>mac and windows</strong>. My example function <code>cloudSize</code> is included at the bottom of the text. I will try to be as clear as possible.</p>
<p>I've installed PCL with the vcpkg package manager for winx64 at <code>C:\src\vcpkg\vcpkg</code>.
This is added to my Environmental Variable Path for my user.</p>
<p>I created an empty R-package with <code>Rcpp.package.skeleton()</code>:
<code>C:/User/csvi0001/Desktop/GitHub/RPCLpackage/PCLR</code></p>
<p>PCL is a massive library, but thankfully modular, and so I only #include the headers that are needed to compile the executable: <code>pcl/io/pcd_io.h</code>, <code>pcl/point_types.h</code>, <code>pcl/registration/icp.h</code>.</p>
<p>Now, since I'd like this to work on more than one OS - and therefore compile on install (?) - I should use a dynamic library? I'll presume that the person installing my package already has a compiled copy of pcl. However, I do not know how to find a flag showing that pcl is installed - how do I find these for inclusion in Makevars(?). CMake must find them when testing the C++ function in VSCode after adding an include path. In lieu of this:</p>
<p>I copy the pcl folder installed by vcpkg to ./src. When I tried copying all the .h files, they seemed to lose track of one another, as they refer to each other through the module they are placed in, e.g. <pcl/memory.h> cannot be found if memory.h is placed directly in ./src. However, flattening the structure of the modules means that every single dependency and #include must be manually changed; in some cases there are also files with the same name in different folders, e.g. pcl/kdtree.h and pcl/search/kdtree.h. After this, it must be done again when replacing < > with " " for each header.</p>
<p><strong>Is there any way of telling Rcpp that the library included in /src is structured?</strong></p>
<p>I'm working on Win 10 winx64.</p>
<p>Since I'm making use of the depends RcppEigen and BH; and I must have C++14 or higher (choice: C++17) I add to my DESCRIPTION file:</p>
<pre><code>LinkingTo: Rcpp, RcppEigen, BH
SystemRequirements: C++17
</code></pre>
<p>My actual C++ function:</p>
<pre><code>//PCL requires at least C++14
//[[Rcpp::plugins(cpp17)]]
//[[Rcpp::depends(RcppEigen)]]
//[[Rcpp::depends(BH)]]
#include <Rcpp.h>
#include <iostream>
#include "pcl/io/pcd_io.h"
#include "pcl/point_types.h"
#include "pcl/registration/icp.h"
//[[Rcpp::export]]
int cloudSize(Rcpp::DataFrame x)
{
pcl::PointCloud<pcl::PointXYZ> sourceCloud;
for(int i=0;i<x.nrows();i++)
{
sourceCloud.push_back(pcl::PointXYZ(x[0][i],x[1][i],x[2][i])); //This way of referring to elements in a Rcpp::DataFrame may be erroneous.
}
int cloudSize = sourceCloud->size();
return (cloudSize);
}
</code></pre> | 2022-08-13 16:37:11.210000+00:00 | 2022-08-13 19:48:12.527000+00:00 | null | rcpp|r-package | ['https://rstudio.github.io/r-manuals/r-exts/', 'https://cran.r-project.org/web/packages/policies.html', 'https://arxiv.org/abs/1911.06416', 'https://cloud.r-project.org/web/packages/Rcpp/vignettes/Rcpp-libraries.pdf'] | 4 |
42,395,245 | <p>Probably the best performing solution for the coin problem would be to use regression to solve this. Annotate 5k images with the number of objects in the scene and train your model on them. Then your model just outputs the correct number. (Hopefully)</p>
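<p>A hedged Keras sketch of what such a counting-as-regression model could look like (layer sizes and image shape are arbitrary placeholders, not a recommendation):</p>
<pre><code>import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="relu"),   # a single non-negative count estimate
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(images, counts, ...)   # counts = number of coins annotated per image
</code></pre>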
<p>Another way is to classify if an image shows a coin and use a sliding window approach like this one: <a href="https://arxiv.org/pdf/1312.6229.pdf" rel="noreferrer">https://arxiv.org/pdf/1312.6229.pdf</a> to classify for each window if it shows a coin. Then you count the found regions. This one is easier to annotate and learn and better extensible. But you have the problem of choosing good windows and using the result of those windows in a concise way. </p> | 2017-02-22 15:10:09.990000+00:00 | 2017-02-22 15:10:09.990000+00:00 | null | null | 42,375,680 | <p>New to machine learning so looking for some direction how to get started. The end goal is to be able to train a model to count the number of objects in an image using Tensorflow. My initial focus will be to train the model to count one specific type of object. So lets say I take coins. I will only train the model to count coins. Not worried about creating a generic counter for all different types of objects. I've only done Google's example of image classification of flowers and I understand the basics of that. So looking for clues how to get started. Is this an image classification problem and I can use the same logic as the flowers...etc etc?</p> | 2017-02-21 18:55:33.063000+00:00 | 2018-07-28 05:47:30.560000+00:00 | 2017-02-21 19:17:49.700000+00:00 | machine-learning|tensorflow|computer-vision|artificial-intelligence|deep-learning | ['https://arxiv.org/pdf/1312.6229.pdf'] | 1 |
47,151,382 | <p>The audio transcription problem you describe is a well known problem in the Music Information Retrieval (MIR) research community. It is not one that is easy to solve and consists of two aspects: </p>
<ul>
<li><p>detecting pitch frequencies, which is often hard due to the occurrence of harmonics and the fact that notes are often glided into (C# can be detected instead of C), also due to tuning discrepancies. </p></li>
<li><p>beat detection: audio performances are often not played in time exactly, so finding the actual onsets can be tricky. </p></li>
</ul>
<p>A promising novel approach is to use deep neural networks to solve this, e.g.: </p>
<p>Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012). <a href="https://arxiv.org/ftp/arxiv/papers/1206/1206.6392.pdf" rel="nofollow noreferrer">Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription</a>. arXiv preprint arXiv:1206.6392.</p>
<p>More info:</p>
<p>Poliner, G. E., Ellis, D. P., Ehmann, A. F., GΓ³mez, E., Streich, S., & Ong, B. (2007). Melody transcription from music audio: Approaches and evaluation. IEEE Transactions on Audio, Speech, and Language Processing, 15(4), 1247-1256.</p> | 2017-11-07 06:32:23.033000+00:00 | 2017-11-07 06:32:23.033000+00:00 | null | null | 47,124,045 | <p>I'm trying to make a program in Python from which I can upload a music file and get notes from this file (on piano). I created a <a href="https://i.stack.imgur.com/cqOUq.jpg" rel="nofollow noreferrer">Spectrogram</a>, and now how can I get a frequencies from it? How can I fix spectrogram (from half of spectrogram I have mirror reflection)? I need something like <a href="https://image.slidesharecdn.com/sigproc-selfstudy-130318120948-phpapp01/95/digital-signal-processing-through-speech-hearing-and-python-26-638.jpg?cb=1363609100" rel="nofollow noreferrer">this</a>. <a href="https://pastebin.com/t4dfKQwV" rel="nofollow noreferrer">Here</a> is my code.</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
import scipy.io.wavfile as wav
from numpy.lib import stride_tricks
""" short time fourier transform of audio signal """
def stft(sig, frameSize, overlapFac=0.5, window=np.hanning):
win = window(frameSize)
hopSize = int(frameSize - np.floor(overlapFac * frameSize))
# zeros at beginning (thus center of 1st window should be for sample nr. 0)
    samples = np.append(np.zeros(int(np.floor(frameSize/2.0))), sig)  # int() so np.zeros gets an integer length
# cols for windowing
    cols = int(np.ceil((len(samples) - frameSize) / float(hopSize))) + 1  # int() so it can be used as an array shape below
# zeros at end (thus samples can be fully covered by frames)
samples = np.append(samples, np.zeros(frameSize))
frames = stride_tricks.as_strided(samples, shape=(cols, frameSize), strides=(samples.strides[0]*hopSize, samples.strides[0])).copy()
frames *= win
return np.fft.rfft(frames)
""" scale frequency axis logarithmically """
def logscale_spec(spec, sr=44100, factor=20.):
timebins, freqbins = np.shape(spec)
scale = np.linspace(0, 1, freqbins) ** factor
scale *= (freqbins-1)/max(scale)
    scale = np.unique(np.round(scale)).astype(int)  # integer bin indices for slicing below
# create spectrogram with new freq bins
newspec = np.complex128(np.zeros([timebins, len(scale)]))
for i in range(0, len(scale)):
if i == len(scale)-1:
newspec[:,i] = np.sum(spec[:,scale[i]:], axis=1)
else:
newspec[:,i] = np.sum(spec[:,scale[i]:scale[i+1]], axis=1)
# list center freq of bins
allfreqs = np.abs(np.fft.fftfreq(freqbins*2, 1./sr)[:freqbins+1])
freqs = []
for i in range(0, len(scale)):
if i == len(scale)-1:
freqs += [np.mean(allfreqs[scale[i]:])]
else:
freqs += [np.mean(allfreqs[scale[i]:scale[i+1]])]
return newspec, freqs
""" plot spectrogram"""
def plotstft(audiopath, binsize=2**10, plotpath=None, colormap="jet"):
samplerate, samples = wav.read(audiopath)
s = stft(samples, binsize)
sshow, freq = logscale_spec(s, factor=1.0, sr=samplerate)
ims = 20.*np.log10(np.abs(sshow)/10e-6) # amplitude to decibel
timebins, freqbins = np.shape(ims)
plt.figure(figsize=(15, 7.5))
plt.imshow(np.transpose(ims), origin="lower", aspect="auto", cmap=colormap, interpolation="none")
plt.colorbar()
plt.xlabel("time (s)")
plt.ylabel("frequency (Hz)")
plt.xlim([0, timebins-1])
plt.ylim([0, freqbins])
xlocs = np.float32(np.linspace(0, timebins-1, 5))
plt.xticks(xlocs, ["%.02f" % l for l in ((xlocs*len(samples)/timebins)+(0.5*binsize))/samplerate])
ylocs = np.int16(np.round(np.linspace(0, freqbins-1, 10)))
plt.yticks(ylocs, ["%.02f" % freq[i] for i in ylocs])
if plotpath:
plt.savefig(plotpath, bbox_inches="tight")
else:
plt.show()
plt.clf()
plotstft("Sound/piano2.wav")
</code></pre> | 2017-11-05 16:43:51.867000+00:00 | 2019-01-19 05:53:53.637000+00:00 | 2019-01-19 05:53:53.637000+00:00 | python|frequency|spectrogram | ['https://arxiv.org/ftp/arxiv/papers/1206/1206.6392.pdf'] | 1 |
47,393,408 | <p>Any ordinary pre-trained classification model like vgg or resNet will extract different features of the image on each layer. While the earlier layers will respond to more basic and simple features like edges, the deeper layers will respond to more specific features. If you want to have specific features extracted from images, you have to label some data and train your model with that dataset.
For that, you can use the first couple of layers from a pre-trained model as an encoder. </p>
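<p>As a rough sketch of that encoder idea (my own illustration, assuming the TensorFlow/Keras API and the ImageNet weights mentioned below): load VGG16 without its classifier head, pool its last convolutional features into one vector per image, and then run a nearest-neighbour search over those vectors.</p>

<pre><code>import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

# VGG16 body only; global average pooling turns each image into a 512-d vector
encoder = VGG16(weights="imagenet", include_top=False, pooling="avg")

def feature_vector(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return encoder.predict(x)[0]

# `image_paths` is a hypothetical list of your own image files:
# vectors = np.stack([feature_vector(p) for p in image_paths])
# five most similar to image 0: np.argsort(np.linalg.norm(vectors - vectors[0], axis=1))[1:6]
</code></pre>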
<p>But I would guess a CNN only solution will get you better results. Here is a nice read about the subject: <a href="https://arxiv.org/ftp/arxiv/papers/1709/1709.08761.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/arxiv/papers/1709/1709.08761.pdf</a></p>
<p>Keras actually includes some applications with pre-trained weights, including vgg16: <a href="https://github.com/fchollet/keras/blob/master/keras/applications/vgg16.py" rel="nofollow noreferrer">https://github.com/fchollet/keras/blob/master/keras/applications/vgg16.py</a></p>
<p>There you can find the link to the weights for this vgg16 model (pre-trained on imageNet):
<a href="https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5" rel="nofollow noreferrer">https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5</a></p> | 2017-11-20 13:39:40.730000+00:00 | 2017-11-20 13:39:40.730000+00:00 | null | null | 47,392,633 | <p>Could someone please provide details of model available to extract the feature of images model for tensorflow or Keras? I have been looking for pre-trained models that will extract the features of the image. And then I will create a vector of the images then apply the nearest neighbor to find out similar images.</p> | 2017-11-20 12:58:53.963000+00:00 | 2017-11-20 13:39:40.730000+00:00 | null | image-processing|tensorflow|deep-learning|keras|autoencoder | ['https://arxiv.org/ftp/arxiv/papers/1709/1709.08761.pdf', 'https://github.com/fchollet/keras/blob/master/keras/applications/vgg16.py', 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5'] | 3 |
65,869,398 | <h2>TL;DR</h2>
<p>Yes, GraphQL does support a sort of pseudo-join. You can see the books and authors example below running in <a href="https://github.com/simbo1905/quarkus-graphql-bigquery" rel="noreferrer">my demo project</a>.</p>
<h2>Example</h2>
<p>Consider a simple database design for storing info about books:</p>
<pre class="lang-sql prettyprint-override"><code>create table Book ( id string, name string, pageCount string, authorId string );
create table Author ( id string, firstName string, lastName string );
</code></pre>
<p>Because we know that an Author can write many Books, the database model puts them in separate tables. Here is the GraphQL schema:</p>
<pre><code>type Query {
bookById(id: ID): Book
}
type Book {
id: ID
title: String
pageCount: Int
author: Author
}
type Author {
id: ID
firstName: String
lastName: String
}
</code></pre>
<p>Notice there is no <code>authorId</code> field on the <code>Book</code> type, but there is an <code>author</code> field of type <code>Author</code>. The database <code>authorId</code> column on the book table is not exposed to the outside world. It is an internal detail.</p>
<p>We can pull back a book and its author using this GraphQL query:</p>
<pre><code>{
bookById(id:"book-1"){
id
title
pageCount
author {
firstName
lastName
}
}
}
</code></pre>
<p>Here is a screenshot of it in action using <a href="https://github.com/simbo1905/quarkus-graphql-bigquery" rel="noreferrer">my demo project</a>:</p>
<p><a href="https://i.stack.imgur.com/XcVGK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XcVGK.png" alt="graphql pseudo join" /></a></p>
<p>The result nests the Author details:</p>
<pre class="lang-json prettyprint-override"><code>{
"data": {
"book1": {
"id": "book-1",
"title": "Harry Potter and the Philosopher's Stone",
"pageCount": 223,
"author": {
"firstName": "Joanne",
"lastName": "Rowling"
}
}
}
}
</code></pre>
<p>The single GQL query resulted in two separate fetch-by-id calls into the database. When a single logical query turns into multiple physical queries we can quickly run into the infamous <code>N+1</code> problem.</p>
<h2>The <code>N+1</code> Problem</h2>
<p>In our case above a book can only have one author. If we only query one book by ID we only get a "read amplification" against our database of 2x. Imagine you can query books whose title starts with a prefix:</p>
<pre><code>type Query {
booksByTitleStartsWith(titlePrefix: String): [Book]
}
</code></pre>
<p>Then we call it asking it to fetch the books with a title starting with "Harry":</p>
<pre><code>{
booksByTitleStartsWith(titlePrefix:"Harry"){
id
title
pageCount
author {
firstName
lastName
}
}
}
</code></pre>
<p>In this GQL query we will fetch the books by a database query of <code>title like 'Harry%'</code> to get many books including the <code>authorId</code> of each book. It will then make an individual fetch by <code>ID</code> for every author of every book. This is a total of <code>N+1</code> queries where the <code>1</code> query pulls back <code>N</code> records and we then make <code>N</code> separate fetches to build up the full picture.</p>
<p>The easy fix for that example is to not expose a field <code>author</code> on <code>Book</code> and force the person using your API to fetch all the authors in a separate query <code>authorsByIds</code> so we give them two queries:</p>
<pre><code>type Query {
booksByTitleStartsWith(titlePrefix: String): [Book] /* <- single database call */
authorsByIds(authorIds: [ID]) [Author] /* <- single database call */
}
type Book {
id: ID
title: String
pageCount: Int
}
type Author {
id: ID
firstName: String
lastName: String
}
</code></pre>
<p>The key thing to note about that last example is that there is no way in that model to walk from one entity type to another. If the person using your API wants to load the books' authors at the same time, they simply call both queries in a single post:</p>
<pre><code>query {
booksByTitleStartsWith(titlePrefix: "Harry") {
id
title
}
authorsByIds(authorIds: ["author-1","author-2","author-3") {
id
firstName
lastName
}
}
</code></pre>
<p>Here the person writing the query (perhaps using JavaScript in a web browser) sends a single GraphQL post to the server asking for both <code>booksByTitleStartsWith</code> and <code>authorsByIds</code> to be passed back at once. The server can now make two efficient database calls.</p>
<p>This approach shows that there is "no magic bullet" for how to map the "logical model" to the "physical model" when it comes to performance. This is known as the <a href="https://en.wikipedia.org/wiki/Object%E2%80%93relational_impedance_mismatch" rel="noreferrer">Object–relational impedance mismatch</a> problem. More on that below.</p>
<h2>Is Fetch-By-ID So Bad?</h2>
<p>Note that the default behaviour of GraphQL is still very helpful. You can map GraphQL onto anything. You can map it onto internal REST APIs. You can map some types into a relational database and other types into a NoSQL database. These can be in the same schema and the same GraphQL end-point. There is no reason why you cannot have <code>Author</code> stored in Postgres and <code>Book</code> stored in MongoDB. This is because GraphQL doesn't, by default, "join in the datastore"; it will fetch each type independently and build the response in memory to send back to the client. It <em><strong>may</strong></em> be the case that your model only joins against a small dataset that gets very good cache hits. You can then add caching to your system, avoid the problem, and still benefit from all the <a href="https://arxiv.org/pdf/2003.04761.pdf" rel="noreferrer">advantages of GraphQL</a>.</p>
<h2>What About ORM?</h2>
<p>There is a project called <a href="https://github.com/join-monster/join-monster" rel="noreferrer">Join Monster</a> which does look at your database schema, looks at the runtime GraphQL query, and tries to generate efficient database joins on-the-fly. That is a form of <a href="https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping" rel="noreferrer">Object Relational Mapping</a> which sometimes gets a lot of "<a href="https://martinfowler.com/bliki/OrmHate.html" rel="noreferrer">OrmHate</a>". This is mainly due to the <a href="https://en.wikipedia.org/wiki/Object%E2%80%93relational_impedance_mismatch" rel="noreferrer">Object–relational impedance mismatch</a> problem.</p>
<p>In my experience, any ORM works if you write the database model to exactly support your object API. In my experience, any ORM tends to fail when you have an existing database model that you try to map with an ORM framework.</p>
<p>IMHO, if the data model was optimised without thinking about ORM or queries (for example, optimised to conserve space in classical <a href="https://en.wikipedia.org/wiki/Third_normal_form" rel="noreferrer">third normal form</a>), then avoid ORM. My recommendation there is to avoid querying the main data model directly and to use the <a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/cqrs" rel="noreferrer">CQRS pattern</a>. See below for an example.</p>
<h2>What Is Practical?</h2>
<p>If you do want to use pseudo-joins in GraphQL but you hit an <code>N+1</code> problem, you can write code to map specific "field fetches" onto hand-written database queries. Carefully performance test using realistic data whenever any field returns an array.</p>
<p>Even when you can put in hand-written queries you may hit scenarios where those joins don't run fast enough, in which case consider the <a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/cqrs" rel="noreferrer">CQRS pattern</a> and denormalise some of the data model to allow for fast lookups.</p>
<h2>Update: GraphQL Java "Look-Ahead"</h2>
<p>In our case we use <a href="https://github.com/graphql-java/graphql-java" rel="noreferrer">graphql-java</a> and use pure configuration files to map DataFetchers to database queries. There is some generic logic that looks at the graph query being run and calls parameterized sql queries that are in a custom configuration file. We saw this article <a href="https://www.graphql-java.com/blog/deep-dive-data-fetcher-results/" rel="noreferrer">Building efficient data fetchers by looking ahead</a> which explains that you can inspect at runtime what the person who wrote the query selected to be returned. We can use that to "look-ahead" at what other entities we would be asked to fetch to satisfy the entire query. At that point we can join the data in the database and pull it all back efficiently in a single database call. The graphql-java engine will still make <code>N</code> in-memory fetches to our code. The <code>N</code> requests to get the author of each book are satisfied by simple lookups in a hashmap that we loaded out of the single database call that joined the author table to the books table returning <code>N</code> complete rows efficiently.</p>
<p>Our approach might sound a little like ORM yet we did not make any attempt to make it intelligent. The developer creating the API and our custom configuration files has to decide which graphql queries will be mapped to what database queries. Our generic logic just "looks-ahead" at what the runtime graphql query actually selects in total to understand all the database columns that it needs to load out of each row returned by the SQL to build the hashmap. Our approach can only handle parent-child-grandchild style trees of data. Yet this is a very common use case for us. The developer making the API still needs to keep a careful eye on performance. They need to adapt both the API and the custom mapping files to avoid poor performance.</p> | 2021-01-24 10:18:25.060000+00:00 | 2021-12-21 22:48:06.877000+00:00 | 2021-12-21 22:48:06.877000+00:00 | null | 51,805,890 | <p>I am very new in GraphQL and trying to do a simple join query. My sample tables look like below:</p>
<pre><code>{
phones: [
{
id: 1,
brand: 'b1',
model: 'Galaxy S9 Plus',
price: 1000,
},
{
id: 2,
brand: 'b2',
model: 'OnePlus 6',
price: 900,
},
],
brands: [
{
id: 'b1',
name: 'Samsung'
},
{
id: 'b2',
name: 'OnePlus'
}
]
}
</code></pre>
<p>I would like to have a query to return a <em>phone</em> object with its brand name in it instead of the brand code.</p>
<p>E.g. If queried for the phone with <code>id = 2</code>, it should return:</p>
<pre><code>{id: 2, brand: 'OnePlus', model: 'OnePlus 6', price: 900}
</code></pre> | 2018-08-12 05:10:56.327000+00:00 | 2021-12-21 22:48:06.877000+00:00 | 2020-08-25 21:56:35.023000+00:00 | javascript|database|graphql | ['https://github.com/simbo1905/quarkus-graphql-bigquery', 'https://github.com/simbo1905/quarkus-graphql-bigquery', 'https://i.stack.imgur.com/XcVGK.png', 'https://en.wikipedia.org/wiki/Object%E2%80%93relational_impedance_mismatch', 'https://arxiv.org/pdf/2003.04761.pdf', 'https://github.com/join-monster/join-monster', 'https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping', 'https://martinfowler.com/bliki/OrmHate.html', 'https://en.wikipedia.org/wiki/Object%E2%80%93relational_impedance_mismatch', 'https://en.wikipedia.org/wiki/Third_normal_form', 'https://docs.microsoft.com/en-us/azure/architecture/patterns/cqrs', 'https://docs.microsoft.com/en-us/azure/architecture/patterns/cqrs', 'https://github.com/graphql-java/graphql-java', 'https://www.graphql-java.com/blog/deep-dive-data-fetcher-results/'] | 14 |
50,906,634 | <p>It is often not a good idea to adjust word2vec embeddings if you do not have a sufficiently large corpus in your training. To clarify that, take an example where your corpus has <em>television</em> but not <em>TV</em>. Even though both might have word2vec embeddings, after training only <em>television</em> will be adjusted and not <em>TV</em>. So you disrupt the information coming from word2vec.</p>
<p>To solve this problem you have 3 options:</p>
<ol>
<li>You let the LSTM in the upper layer figure out what the word might mean based on its context. For example, in <em>I like choc.</em> the LSTM can figure out that it refers to an object. This was demonstrated by <a href="https://arxiv.org/abs/1410.3916" rel="nofollow noreferrer">Memory Networks</a>.</li>
<li>The easy option: pre-process and canonicalise as much as you can before passing text to the model. Spell checkers often capture these cases very well and are really fast.</li>
<li>You can use character encodings alongside word2vec. This is employed in many question answering models such as <a href="https://arxiv.org/abs/1410.3916" rel="nofollow noreferrer">BiDAF</a>, where the character representation is merged with word2vec so you have some information relating characters to words. In this case, <em>choc</em> might end up similar to <em>chocolate</em>.</li>
</ol> | 2018-06-18 09:42:54.133000+00:00 | 2018-06-18 09:42:54.133000+00:00 | null | null | 50,906,372 | <p>I am working on text classification task where my dataset contains a lot of abbreviations and proper nouns. For instance: <strong>Milka choc. bar</strong>.<br>
My idea is to use a bidirectional LSTM model with word2vec embeddings.<br>
And here is my problem: how do I encode words that do not appear in the dictionary?
I partially solved this problem by merging pre-trained vectors with randomly initialized ones. Here is my implementation:</p>
<pre><code>import gensim
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess
from gensim.models.keyedvectors import KeyedVectors
word_vectors = KeyedVectors.load_word2vec_format('ru.vec', binary=False, unicode_errors='ignore')
EMBEDDING_DIM=300
vocabulary_size=min(len(word_index)+1,num_words)
embedding_matrix = np.zeros((vocabulary_size, EMBEDDING_DIM))
for word, i in word_index.items():
if i>=num_words:
continue
try:
embedding_vector = word_vectors[word]
embedding_matrix[i] = embedding_vector
except KeyError:
embedding_matrix[i]=np.random.normal(0,np.sqrt(0.25),EMBEDDING_DIM)
def LSTMModel(X,words_nb, embed_dim, num_classes):
_input = Input(shape=(X.shape[1],))
X = embedding_layer = Embedding(words_nb,
embed_dim,
weights=[embedding_matrix],
trainable=True)(_input)
X = The_rest_of__the_LSTM_model()(X)
</code></pre>
<p>Do you think that allowing the model to adjust the embedding weights is a good idea?
Could you please tell me, how can I encode words like <strong>choc</strong>? Obviously, this abbreviation stands for <strong>chocolate</strong>. </p> | 2018-06-18 09:28:25.293000+00:00 | 2018-06-18 09:43:30.820000+00:00 | null | python|keras|nlp|word2vec | ['https://arxiv.org/abs/1410.3916', 'https://arxiv.org/abs/1410.3916'] | 2 |
41,488,871 | <p><strong>My general order is:</strong></p>
<ol>
<li>Batch size, as it will largely affect the training time of future experiments.</li>
<li>Architecture of the network:
<ul>
<li>Number of neurons in the network</li>
<li>Number of layers</li>
</ul></li>
<li>Rest (dropout, L2 reg, etc.)</li>
</ol>
<p><strong>Dependencies:</strong></p>
<p>I'd assume that the optimal values of </p>
<ul>
<li>learning rate and batch size </li>
<li>learning rate and number of neurons </li>
<li>number of neurons and number of layers</li>
</ul>
<p>strongly depend on each other. I am not an expert on that field though.</p>
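<p>Because of those dependencies, it is usually better to sample the coupled hyperparameters jointly (random search) rather than tuning them strictly one at a time. Here is a minimal sketch of that idea (my own illustration); <code>train_and_evaluate</code> is a hypothetical stand-in for your own training loop, not part of any library:</p>

<pre><code>import random

def train_and_evaluate(learning_rate, batch_size, hidden_units):
    """Hypothetical placeholder: plug in your actual model training here
    and return the validation accuracy of the trained model."""
    return random.random()  # dummy score so this sketch runs end-to-end

search_space = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [32, 64, 128, 256],
    "hidden_units": [64, 128, 256],
}

best_score, best_params = float("-inf"), None
for _ in range(20):  # fixed budget of 20 runs
    params = {name: random.choice(values) for name, values in search_space.items()}
    score = train_and_evaluate(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)
</code></pre>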
<p><strong>As for your hyperparameters:</strong></p>
<ul>
<li>For the Adam optimizer: "Recommended values in the paper are eps = 1e-8, beta1 = 0.9, beta2 = 0.999." (<a href="http://cs231n.github.io/neural-networks-3/#ada" rel="noreferrer">source</a>)</li>
<li>For the learning rate with Adam and RMSProp, I found values around 0.001 to be optimal for most problems. </li>
<li>As an alternative to Adam, you can also use RMSProp, which reduces the memory footprint by up to 33%. See <a href="https://stackoverflow.com/a/37843152/2628369">this answer</a> for more details.</li>
<li>You could also tune the initial weight values (see <a href="https://arxiv.org/abs/1511.06422" rel="noreferrer">All you need is a good init</a>). That said, the Xavier initializer seems to be a good way to avoid having to tune the weight initialization.</li>
<li>I don't tune the number of iterations / epochs as a hyperparameter. I train the net until its validation error converges. However, I give each run a time budget.</li>
</ul> | 2017-01-05 15:35:08.147000+00:00 | 2017-01-20 13:53:25.120000+00:00 | 2017-05-23 11:54:25.650000+00:00 | null | 37,467,647 | <p>I have a quite simple ANN using Tensorflow and AdamOptimizer for a regression problem and I am now at the point to tune all the hyperparameters. </p>
<p>For now, I saw many different hyperparameters that I have to tune : </p>
<ul>
<li>Learning rate : initial learning rate, learning rate decay</li>
<li>The AdamOptimizer needs 4 arguments (learning-rate, beta1, beta2, epsilon) so we need to tune them - at least epsilon</li>
<li>batch-size </li>
<li>nb of iterations</li>
<li>Lambda L2-regularization parameter</li>
<li>Number of neurons, number of layers </li>
<li>what kind of activation function for the hidden layers, for the output layer </li>
<li>dropout parameter</li>
</ul>
<p>I have 2 questions :</p>
<p>1) Do you see any other hyperparameter I might have forgotten ? </p>
<p>2) For now, my tuning is quite "manual" and I am not sure I am doing everything in a proper way.
Is there a special order to tune the parameters ? E.g. learning rate first, then batch size, then ...
I am not sure that all these parameters are independent - in fact, I am quite sure that some of them are not. Which ones are clearly independent and which ones are clearly not independent ? Should we then tune them together ?
Is there any paper or article which talks about properly tuning all the parameters in a special order ? </p>
<p>EDIT :
Here are the graphs I got for different initial learning rates, batch sizes and regularization parameters. The purple curve is completely weird to me, because its cost decreases much more slowly than the others, yet it gets stuck at a lower accuracy. Is it possible that the model is stuck in a local minimum ? </p>
<p><a href="http://i.stack.imgur.com/C0IcD.png" rel="noreferrer">Accuracy</a></p>
<p><a href="http://i.stack.imgur.com/OId4A.png" rel="noreferrer">Cost</a></p>
<p>For the learning rate, I used the decay :
LR(t) = LRI/sqrt(epoch) </p>
<p>Thanks for your help !
Paul </p> | 2016-05-26 17:38:30.630000+00:00 | 2020-07-27 11:41:58.737000+00:00 | 2016-05-27 17:45:11.033000+00:00 | neural-network|tensorflow|hyperparameters | ['http://cs231n.github.io/neural-networks-3/#ada', 'https://stackoverflow.com/a/37843152/2628369', 'https://arxiv.org/abs/1511.06422'] | 3 |
45,643,853 | <p>Proper initialisation of weights is often crucial to getting deeper neural nets to train.</p>
<p>Xavier initialisation is derived with the goal of ensuring that the variance of the output at each neuron is expected to be 1.0 (see <a href="http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization" rel="nofollow noreferrer">here</a>). This generally relies on the additional assumption that your inputs are standardised to have mean 0 and variance of 1, so it is important to also ensure this.</p>
<p>For ReLU units, I believe <a href="http://arxiv.org/abs/1502.01852" rel="nofollow noreferrer">He initialisation</a> is actually considered best practice. This requires initialising from a zero-mean Gaussian distribution with standard deviation:</p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=%5Csqrt%7B%5Cfrac%7B2%7D%7Bn%7D%7D" alt="heinitformula"> </p>
<p>Where <em>n</em> is the number of input units. See the <a href="https://lasagne.readthedocs.io/en/latest/modules/init.html#lasagne.init.He" rel="nofollow noreferrer">Lasagne docs</a> for best practices for some other activation functions.</p>
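<p>As a concrete sketch (my own illustration, assuming the TF 1.x API already used in the question's code), the <code>weight_variable</code> helper could apply He initialisation like this:</p>

<pre><code>import numpy as np
import tensorflow as tf

def he_weight_variable(shape):
    # fan_in = number of input units feeding each output unit
    # (for a conv kernel [h, w, in_ch, out_ch] this is h * w * in_ch)
    fan_in = np.prod(shape[:-1])
    stddev = np.sqrt(2.0 / fan_in)
    initial = tf.truncated_normal(shape, stddev=stddev)
    return tf.Variable(initial)
</code></pre>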
<p>On a side note, batch normalisation can often reduce the dependence of model performance on weights initialisation.</p> | 2017-08-11 21:12:35.923000+00:00 | 2017-08-11 21:12:35.923000+00:00 | null | null | 45,521,025 | <p>I am trying to implement a simple gender classifier using deep convolutional neural networks using tensorflow. I have found this <a href="http://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W08/papers/Levi_Age_and_Gender_2015_CVPR_paper.pdf" rel="noreferrer">model</a> and implemented it.</p>
<pre><code>def create_model_v2(data):
cl1_desc = {'weights':weight_variable([7,7,3,96]), 'biases':bias_variable([96])}
cl2_desc = {'weights':weight_variable([5,5,96,256]), 'biases':bias_variable([256])}
cl3_desc = {'weights':weight_variable([3,3,256,384]), 'biases':bias_variable([384])}
fc1_desc = {'weights':weight_variable([240000, 128]), 'biases':bias_variable([128])}
fc2_desc = {'weights':weight_variable([128,128]), 'biases':bias_variable([128])}
fc3_desc = {'weights':weight_variable([128,2]), 'biases':bias_variable([2])}
cl1 = conv2d(data,cl1_desc['weights'] + cl1_desc['biases'])
cl1 = tf.nn.relu(cl1)
pl1 = max_pool_nxn(cl1,3,[1,2,2,1])
lrm1 = tf.nn.local_response_normalization(pl1)
cl2 = conv2d(lrm1, cl2_desc['weights'] + cl2_desc['biases'])
cl2 = tf.nn.relu(cl2)
pl2 = max_pool_nxn(cl2,3,[1,2,2,1])
lrm2 = tf.nn.local_response_normalization(pl2)
cl3 = conv2d(lrm2, cl3_desc['weights'] + cl3_desc['biases'])
cl3 = tf.nn.relu(cl3)
pl3 = max_pool_nxn(cl3,3,[1,2,2,1])
fl = tf.contrib.layers.flatten(cl3)
fc1 = tf.add(tf.matmul(fl, fc1_desc['weights']), fc1_desc['biases'])
drp1 = tf.nn.dropout(fc1,0.5)
fc2 = tf.add(tf.matmul(drp1, fc2_desc['weights']), fc2_desc['biases'])
drp2 = tf.nn.dropout(fc2,0.5)
fc3 = tf.add(tf.matmul(drp2, fc3_desc['weights']), fc3_desc['biases'])
return fc3
</code></pre>
<p>What I need to note at this point is that I have also done all the pre-processing steps described in the paper; however, my images are resized to 100x100x3 instead of 277x277x3.</p>
<p>I have defined the logits to be <code>[0,1]</code> for females and <code>[1,0]</code> for males.</p>
<pre><code>x = tf.placeholder('float',[None,100,100,3])
y = tf.placeholder('float',[None,2])
</code></pre>
<p>And have defined the training procedure as follows:</p>
<pre><code>def train(x, hm_epochs, LR):
#prediction = create_model_v2(x)
prediction = create_model_v2(x)
cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits = prediction, labels = y) )
optimizer = tf.train.AdamOptimizer(learning_rate=LR).minimize(cost)
batch_size = 50
correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
print("hello")
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(hm_epochs):
epoch_loss = 0
i = 0
while i < (len(x_train)):
start = i
end = i + batch_size
batch_x = x_train[start:end]
batch_y = y_train[start:end]
whatever, vigen = sess.run([optimizer, cost], feed_dict = {x:batch_x, y:batch_y})
epoch_loss += vigen
i+=batch_size
print('Epoch', epoch ,'loss:',epoch_loss/len(x_train))
if (epoch+1) % 2 == 0:
j = 0
acc = []
while j < len(x_test):
acc += [accuracy.eval(feed_dict = {x:x_test[j:j + 10], y:y_test[j:j+10]})]
j+= 10
print ('accuracy after', epoch + 1, 'epochs on test set: ', sum(acc)/len(acc))
j = 0
acc = []
while j < len(x_train):
acc += [accuracy.eval(feed_dict = {x:x_train[j:j + 10], y:y_train[j:j+10]})]
j+= 10
print ('accuracy after', epoch, ' epochs on train set:', sum(acc)/len(acc))
</code></pre>
<p>Half of the code above is just for outputting test and train accuracies every 2 epochs.</p>
<p>Anyhow the loss starts high at first epoch</p>
<blockquote>
<p>('Epoch', 0, 'loss:', 148.87030902462453)</p>
<p>('Epoch', 1, 'loss:', 0.01549744715988636)</p>
<p>('accuracy after', 2, 'epochs on test set: ', 0.33052011888510396)</p>
<p>('accuracy after', 1, ' epochs on train set:', 0.49607501227222384)</p>
<p>('Epoch', 2, 'loss:', 0.015493246909976005)</p>
</blockquote>
<p>and it continues like this, keeping the accuracy at 0.5 for the train set.</p>
<p>What am I missing?</p>
<p><strong>EDIT:</strong> the functions weights variable, conv2d and max_pool_nn are</p>
<pre><code>def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def avg_pool_nxn(x, n, strides):
return tf.nn.avg_pool(x, ksize=[1,n,n,1], strides = strides,padding = 'SAME')
def max_pool_nxn(x, n, strides):
return tf.nn.max_pool(x, ksize=[1,n,n,1], strides = strides, padding = 'SAME')
def conv2d(x, W,stride = [1,1,1,1]):
return tf.nn.conv2d(x, W, strides = stride, padding = 'SAME')
</code></pre>
<p><strong>EDIT 2 - Problem solved</strong></p>
<p>The Problem was fascinatingly related to parameter initialization. Changing the weight initialization from Normal Distribution to Xavier initialization worked wonders and accuracy ended up at about 86%. If anyone is interested here is the original paper <a href="http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf" rel="noreferrer">http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf</a>, if anyone knows and cares to explain exactly why Xavier works well with convnets and images feel free to post an answer.</p> | 2017-08-05 10:48:04.393000+00:00 | 2017-08-11 21:12:35.923000+00:00 | 2020-06-20 09:12:55.060000+00:00 | tensorflow|neural-network|deep-learning|conv-neural-network|loss | ['http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization', 'http://arxiv.org/abs/1502.01852', 'https://lasagne.readthedocs.io/en/latest/modules/init.html#lasagne.init.He'] | 3 |
60,767,593 | <p>Borrowing from <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">Ronneberger et al.</a>, what we have been doing is to split the input Landsat scene and corresponding ground truth mask into overlapping tiles. Take the original image and pad it by the overlap margin (we use reflection for the padding) then split into tiles. Here is a code snippet using scikit-image:</p>
<pre><code>import skimage as sk
import skimage.util  # make sure the util submodule is loaded

# self.tile_height, self.tile_width and self.image_margin are attributes of the
# class this snippet was lifted from; raster_value['channels'] is the band count.
patches = sk.util.view_as_windows(image,
         (self.tile_height+2*self.image_margin,
          self.tile_width+2*self.image_margin, raster_value['channels']),
         (self.tile_height, self.tile_width, raster_value['channels']))
</code></pre>
<p>I don't know what you are using for a loss function for unsupervised segmentation. In our case with supervised learning, we crop the final segmentation prediction to match the ground truth output shape. In the Ronneberger paper they relied on shrinkage due to the use of valid padding.
For predictions you would do the same (split into overlapping tiles) and stitch the result. </p> | 2020-03-20 02:01:55.643000+00:00 | 2020-03-20 02:01:55.643000+00:00 | null | null | 60,555,060 | <p>I am looking at using Landsat imagery to train a CNN for unsupervised pixel-wise semantic segmentation classification. That said, I have been unable to find a method that allows me to crop images from the larger Landsat image for training and then predict on the original image. Essentially here is what I am trying to do:</p>
<p>Original Landsat image (5,000 x 5,000 - this is an arbitrary size, not exactly sure of the actual dimensions off-hand) -> crop the image into (100 x 100) chunks -> train the model on these cropped images -> output a prediction for each pixel in the original (uncropped) image.</p>
<p>That said, I am not sure if I should predict on the cropped images and stitch them together after they are predicted or if I can predict on the original image. </p>
<p>Any clarification/code examples would be greatly appreciated. For reference, I use both pytorch and tensorflow.</p>
<p>Thank you!
Lance D</p> | 2020-03-05 22:44:01.253000+00:00 | 2020-03-20 02:01:55.643000+00:00 | null | python-3.x|pytorch|conv-neural-network|tensorflow2.0|semantic-segmentation | ['https://arxiv.org/abs/1505.04597'] | 1 |
55,178,576 | <p>In the original <a href="https://arxiv.org/pdf/1505.04597.pdf" rel="nofollow noreferrer">U-Net paper</a>, the features right before the max-pool layers are used for the skip connections.</p>
<p>The logic is exactly the same with pre-trained backbones: at each spatial resolution, the deepest feature layer is selected. Thanks to qubvel on GitHub for pointing this out in an <a href="https://github.com/qubvel/segmentation_models/issues/76" rel="nofollow noreferrer">issue</a>.</p>
<p>The original U-Net paper trained a network from scratch. Are there any resources or principles on where skip connections should be placed, when a pre-trained backbone is used instead?</p>
<p>I already found some examples (e.g. <a href="https://github.com/qubvel/segmentation_models" rel="nofollow noreferrer">this repo</a>), but without any justification for the feature selection.</p> | 2019-03-14 14:29:45.707000+00:00 | 2019-03-15 08:38:55.307000+00:00 | 2019-03-14 14:36:34.600000+00:00 | machine-learning|neural-network|pytorch|image-segmentation | ['https://arxiv.org/pdf/1505.04597.pdf', 'https://github.com/qubvel/segmentation_models/issues/76'] | 2 |
65,922,110 | <p>According to the <a href="https://arxiv.org/pdf/1506.02640.pdf" rel="nofollow noreferrer">paper (section 2)</a>, the <code>S x S x (B * 5 + C)</code> shaped output represents the <code>S x S</code> grid cells that YoloV1 splits the image into. The last layer can be implemented as a fully connected layer with an output length <code>S x S x (B * 5 + C)</code>, then you can simply reshape the output to a 3D shape.</p>
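<p>As a minimal sketch of that last layer (my own illustration in Keras, using the paper's values S=7, B=2, C=20 and only a tiny placeholder standing in for the real backbone):</p>

<pre><code>from tensorflow.keras import layers, models

S, B, C = 7, 2, 20  # grid size, boxes per cell, classes (YOLOv1 paper values)

model = models.Sequential([
    layers.Input(shape=(448, 448, 3)),
    layers.Conv2D(16, 3, strides=2, activation="relu"),  # stand-in for the real backbone
    layers.GlobalAveragePooling2D(),
    layers.Dense(S * S * (B * 5 + C)),   # fully connected output of length S*S*(B*5+C)
    layers.Reshape((S, S, B * 5 + C)),   # reshape into the S x S grid of predictions
])
model.summary()
</code></pre>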
<p>The paper states that:</p>
<p>"Our system divides the input image into an S Γ S grid.
If the center of an object falls into a grid cell, that grid cell
is responsible for detecting that object."</p>
<p>This means you have to assign each label to its corresponding grid cell in order to do backpropagation. For reference, a keras/tensorflow implementation of the loss calculation can be found <a href="https://github.com/FMsunyh/keras-yolo/blob/master/core/layers/_losses.py" rel="nofollow noreferrer">here</a> (by the github user FMsunyh).</p> | 2021-01-27 15:20:50.320000+00:00 | 2021-01-27 15:20:50.320000+00:00 | null | null | 65,915,176 | <p>Hi, I coded a YOLO model from scratch and just came to realise that my dataset does not fit the model's output. This is what I mean:
The model outputs a <code>S x S x (B * 5 + C)</code> matrix.
The shape of y[0] (the answer for the first image) is <code>(7,5)</code>.
How can I make the model use my labels?
From what I have read, the labels for the YOLO algorithm come in the format <code>x,y,w,h,objectiveness_score, class_scores</code>, so how come the model outputs a 3D matrix while the labels are a 2D matrix?</p>
<p>How can I solve this issue using numpy and keras?</p> | 2021-01-27 08:12:56.073000+00:00 | 2021-01-27 15:20:50.320000+00:00 | null | python|numpy|keras|artificial-intelligence|yolo | ['https://arxiv.org/pdf/1506.02640.pdf', 'https://github.com/FMsunyh/keras-yolo/blob/master/core/layers/_losses.py'] | 2
60,921,934 | <p>The problem is that the <a href="https://developer.apple.com/documentation/gameplaykit/gkrandom/1501054-nextint" rel="nofollow noreferrer"><code>nextInt()</code></a> method of all <code>GKRandom</code> types returns an integer value in the range <code>[INT32_MIN, INT32_MAX]</code>, which means that your "non-working" implementation of <code>next()</code> returns 64-bit values with the high 32 bits equal to zero. This "violates" the requirement of the <a href="https://developer.apple.com/documentation/swift/randomnumbergenerator" rel="nofollow noreferrer"><code>RandomNumberGenerator</code></a> protocol that calls to <code>next()</code> must produce uniformly distributed 64-bit values.</p>
<p>In older Swift releases this might not have caused problems, but with the <a href="https://github.com/apple/swift/commit/2bc648c26baccd95847cde2ad025c8c1b1fd0375" rel="nofollow noreferrer">implementation</a> of <a href="https://arxiv.org/pdf/1805.10941.pdf" rel="nofollow noreferrer">Lemire's Nearly Divisionless Random Integer Generation</a> on 64-bit Intel platforms this has the effect that <code>Random.next(upperBound:)</code> always returns zero:</p>
<pre><code>var gen = SeededGenerator(seed: 234)
print((0..<20).map { _ in Int.random(in: 0..<10, using: &gen) })
// [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
</code></pre>
<p>As a consequence, the <code>shuffle()</code> method does not swap array elements at all.</p>
<p>Your alternative implementation of <code>next()</code> works because it fills both the low and the high 32 bits of the 64-bit random number.</p> | 2020-03-29 23:14:21.477000+00:00 | 2020-03-29 23:34:59.407000+00:00 | 2020-03-29 23:34:59.407000+00:00 | null | 60,920,711 | <p>Since Xcode 11.4 the array doesn't get shuffled at all (regardless of the seed) when using the second implementation of the <code>next</code> function.</p>
<p>I don't get why, since both functions generate random numbers, even though the first only populates the least significant 32 bits of the random Int64.</p>
<p>You can try this minimal example in Swift Playgrounds, commenting out one of the two functions.</p>
<pre><code> import GameplayKit
struct SeededGenerator: RandomNumberGenerator {
static var shared: SeededGenerator?
let seed: UInt64
let generator: GKMersenneTwisterRandomSource
init(seed: UInt64) {
self.seed = seed
generator = GKMersenneTwisterRandomSource(seed: seed)
}
// New alternative found to be working
mutating func next() -> UInt64 {
let next1 = UInt64(bitPattern: Int64(generator.nextInt()))
let next2 = UInt64(bitPattern: Int64(generator.nextInt()))
return next1 | (next2 << 32)
}
// Code previously in use that doesn't work anymore.
mutating func next() -> UInt64 {
return UInt64(bitPattern: Int64(abs(generator.nextInt())))
}
}
var gen = SeededGenerator(seed: 234)
var array = ["1", "2", "3"]
array.shuffle(using: &gen)
</code></pre> | 2020-03-29 21:05:24.827000+00:00 | 2020-03-29 23:34:59.407000+00:00 | null | swift|random | ['https://developer.apple.com/documentation/gameplaykit/gkrandom/1501054-nextint', 'https://developer.apple.com/documentation/swift/randomnumbergenerator', 'https://github.com/apple/swift/commit/2bc648c26baccd95847cde2ad025c8c1b1fd0375', 'https://arxiv.org/pdf/1805.10941.pdf'] | 4 |
42,214,170 | <p>In this post, I will advise you of:</p>
<ul>
<li>How to map navigational instructions to action sequences with an LSTM
neural network </li>
<li>Resources that will help you learn how to use neural
networks to accomplish your task </li>
<li>How to install and configure neural
network libraries based on what I needed to learn the hard way</li>
</ul>
<p><strong>General opinion of your idea:</strong></p>
<p>I can see what you're trying to do, and I believe that your game idea (of using randomly generated identities of adversaries that control their behavior in a way that randomly alters the way they're using artificial intelligence to behave intelligently) has a lot of potential. </p>
<p><strong>Mapping navigational instructions to action sequences with a neural network</strong></p>
<p>For processing your game board, because it involves <em>dense</em> (as opposed to <em>sparse</em>) data, you could find a Convolutional Neural Network (CNN) to be useful. However, because you need to translate the map into an action sequence, sequence-optimized neural networks (such as Recurrent Neural Networks) will likely be the most useful for you (a minimal sketch of such a network follows the references below). I did find some studies that use neural networks to map navigational instructions to action sequences, construct the game map, and move a character through a game with many types of inputs:</p>
<ul>
<li>Mei, H., Bansal, M., & Walter, M. R. (2015). Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. arXiv preprint arXiv:1506.04089. Available at: <a href="https://arxiv.org/pdf/1506.04089.pdf" rel="noreferrer" title="Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences">Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences</a></li>
<li>Summerville, A., & Mateas, M. (2016). Super Mario as a String: Platformer Level Generation Via LSTMs. arXiv preprint arXiv:1603.00930. Available at: <a href="https://arxiv.org/pdf/1603.00930.pdf" rel="noreferrer" title="Super Mario as a String: Platformer Level Generation Via LSTMs">Super Mario as a String: Platformer Level Generation Via LSTMs</a></li>
<li>Lample, G., & Chaplot, D. S. (2016). Playing FPS games with deep reinforcement learning. arXiv preprint arXiv:1609.05521. Available at: <a href="https://arxiv.org/pdf/1609.05521.pdf" rel="noreferrer" title="Playing FPS Games with Deep Reinforcement Learning">Playing FPS Games with Deep Reinforcement Learning</a></li>
<li>Schulz, R., Talbot, B., Lam, O., Dayoub, F., Corke, P., Upcroft, B., & Wyeth, G. (2015, May). Robot navigation using human cues: A robot navigation system for symbolic goal-directed exploration. In Robotics and Automation (ICRA), 2015 IEEE International Conference on (pp. 1100-1105). IEEE. Available at: <a href="http://eprints.qut.edu.au/82728/1/schulz-etal-ICRA-2015.pdf" rel="noreferrer" title="Robot Navigation Using Human Cues: A robot navigation system for symbolic goal-directed exploration">Robot Navigation Using Human Cues: A robot navigation system for symbolic goal-directed exploration</a></li>
</ul>
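<p>To make the "sequence-optimized network" suggestion above concrete, here is a minimal sketch (my own illustration with the Keras API; the shapes are hypothetical): an LSTM reads a sequence of flattened board observations (400 features per step, matching your 400-spot board idea) and outputs a probability distribution over 4 moves at every step.</p>

<pre><code>import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, TimeDistributed

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(None, 400)),   # variable-length episodes
    TimeDistributed(Dense(4, activation="softmax")),            # one action distribution per step
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Dummy data just to show the expected shapes: 32 episodes of 10 steps each.
X = np.random.rand(32, 10, 400)
y = np.eye(4)[np.random.randint(0, 4, size=(32, 10))]  # one-hot actions
model.fit(X, y, epochs=1, verbose=0)
</code></pre>

<p>In a genetic-algorithm setting like yours you would of course evolve the weights instead of calling <code>fit</code>, but the forward pass (board features in, action probabilities out) stays the same.</p>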
<p><strong>General opinion of what will help you</strong></p>
<p>It sounds like you're missing some basic understanding of how neural networks work, so <em>my primary recommendation to you is to study more of the underlying mechanics behind neural networks</em> in general. It's important to keep in mind that a neural network is a type of <em>machine learning</em> model. So, it doesn't really make sense to just construct a neural network with random parameters. A neural network is a machine learning model that is trained from sample data, and once it is trained, it can be evaluated on test data (e.g. to perform predictions). </p>
<p>The root of machine learning is largely influenced by Bayesian statistics, so you might benefit from getting a textbook on Bayesian statistics to gain a deeper understanding of how machine-based classification works in general. </p>
<p>It will also be valuable for you to learn the differences between different types of neural networks, such as Long Short Term Memory (LSTM) and Convolutional Neural Networks (CNNs). </p>
<p>If you want to <strong>tinker with how neural networks can be used</strong> for classification tasks, try this: </p>
<ul>
<li><a href="http://playground.tensorflow.org/#activation=tanh&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.03&regularizationRate=0&noise=0&networkShape=4,2&seed=0.39263&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false" rel="noreferrer" title="Tensorflow Playground">Tensorflow Playground</a></li>
</ul>
<p><strong>To learn the math:</strong>
My professional opinion is that learning the underlying math of neural networks is very important. If it's intimidating, I give you my testimony that I was able to learn all of it on my own. But if you prefer learning in a classroom environment, then I recommend that you try that. A <strong>great resource and textbook for learning the mechanics and mathematics of neural networks</strong> is: </p>
<ul>
<li><a href="http://neuralnetworksanddeeplearning.com/" rel="noreferrer" title="Free ebook on neural networks">Neural Networks and Deep Learning</a></li>
</ul>
<p><strong>Tutorials for neural network libraries</strong></p>
<p>I recommend that you try working through the tutorials for a neural network library, such as: </p>
<ul>
<li><a href="https://www.tensorflow.org/tutorials/" rel="noreferrer" title="TensorFlow tutorials">TensorFlow tutorials</a></li>
<li><a href="http://deeplearning.net/tutorial/" rel="noreferrer" title="Deep Learning tutorials with Theano">Deep Learning tutorials with Theano</a></li>
<li><a href="https://www.cntk.ai/pythondocs/tutorials.html" rel="noreferrer" title="Tutorials with Microsoft CNTK">CNTK tutorials</a> (<a href="https://github.com/Microsoft/CNTK/blob/v2.0.beta10.0/Tutorials/CNTK_205_Artistic_Style_Transfer.ipynb" rel="noreferrer" title="CNTK 205: Artistic Style Transfer">CNTK 205: Artistic Style Transfer</a> is particularly cool.)</li>
<li><a href="https://elitedatascience.com/keras-tutorial-deep-learning-in-python" rel="noreferrer" title="Keras tutorial">Keras tutorial</a> (Keras is a powerful high-level neural network library that can use either <strong>TensorFlow</strong> or <strong>Theano</strong>.)</li>
</ul> | 2017-02-13 21:49:07.693000+00:00 | 2017-02-13 21:49:07.693000+00:00 | null | null | 42,099,814 | <p>I'm new to neural networks/machine learning/genetic algorithms, and for my first implementation I am writing a network that learns to play snake (<a href="http://patorjk.com/games/snake/" rel="nofollow noreferrer" title="Snake">An example in case you haven't played it before</a>) I have a few questions that I don't fully understand:</p>
<p>Before my questions I just want to make sure I understand the general idea correctly. There is a population of snakes, each with randomly generated DNA. The DNA is the weights used in the neural network. Each time the snake moves, it uses the neural net to decide where to go (using a bias). When the population dies, select some parents (maybe highest fitness), and crossover their DNA with a slight mutation chance. </p>
<p>1) If given the whole board as an input (about 400 spots), enough hidden layers (no idea how many, maybe 256-64-32-2?), and enough time, would it learn not to box itself in?</p>
<p>2) What would be good inputs? Here are some of my ideas:</p>
<ul>
<li>400 inputs, one for each space on the board. Positive if the snake should go there (the apple) and negative if it is a wall/your body. The closer the value is to -1/1, the closer that object is.</li>
<li>6 inputs: game width, game height, snake x, snake y, apple x, and apple y (it may learn to play on different size boards if trained that way, but I'm not sure how to input its body, since it changes size)</li>
<li>Give it a field of view (maybe a 3x3 square in front of the head) that can alert the snake of a wall, apple, or its body. (The snake would only be able to see what's right in front, unfortunately, which could hinder its learning ability.)</li>
</ul>
<p>3) Given the input method, what would be a good starting place for hidden layer sizes (of course I plan on tweaking this, I just don't know what a good starting place is)?</p>
<p>4) Finally, the fitness of the snake. Besides time to get the apple, its length, and its lifetime, should anything else be factored in? In order to get the snake to learn not to block itself in, is there anything else I could add to the fitness to help with that?</p>
<p>Thank you!</p> | 2017-02-07 20:54:49.357000+00:00 | 2017-02-13 21:49:07.693000+00:00 | 2017-02-07 21:13:33.580000+00:00 | machine-learning|neural-network|artificial-intelligence|genetic-algorithm | ['https://arxiv.org/pdf/1506.04089.pdf', 'https://arxiv.org/pdf/1603.00930.pdf', 'https://arxiv.org/pdf/1609.05521.pdf', 'http://eprints.qut.edu.au/82728/1/schulz-etal-ICRA-2015.pdf', 'http://playground.tensorflow.org/#activation=tanh&batchSize=10&dataset=circle®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=4,2&seed=0.39263&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false', 'http://neuralnetworksanddeeplearning.com/', 'https://www.tensorflow.org/tutorials/', 'http://deeplearning.net/tutorial/', 'https://www.cntk.ai/pythondocs/tutorials.html', 'https://github.com/Microsoft/CNTK/blob/v2.0.beta10.0/Tutorials/CNTK_205_Artistic_Style_Transfer.ipynb', 'https://elitedatascience.com/keras-tutorial-deep-learning-in-python'] | 11 |
63,559,650 | <p>Some thoughts:</p>
<p>Why did the logic program fail: The answer to "<em>why"</em> is of course <em>"because there is no variable assignment that fulfills the constraints given by the Prolog program"</em>.</p>
<p>This is evidently rather unhelpful, but it is exactly the case of the "blue dog": there is no such thing (at least in the problem you model).</p>
<p>In fact the only acceptable answer to the blue dog problem is obtained when the system goes into full theorem-proving mode and outputs:</p>
<pre><code>blue(X) <=> ~dog(X)
</code></pre>
<p>or maybe just</p>
<pre><code>dog(X) => ~blue(X)
</code></pre>
<p>or maybe just</p>
<pre><code>blue(X) => ~dog(X)
</code></pre>
<p>depending on assumptions. "There is no evidence of blue dogs". Which is true, as that's what the program states. So a "why" in this question is a demand to rewrite the program...</p>
<p>There may not <em>be</em> a good answer: <em>"Why is there no x such that xΒ² < 0"</em> is ill-posed and may have as answer <em>"just because"</em> or <em>"because you are restricting yourself to the reals"</em> or <em>"because that 0 in the equation is just wrong"</em> ... so it depends very much.</p>
<p>To make a "<em>why</em>" more helpful, you will have to qualify this <em>"why"</em> somehow. which may be done by structuring the program and extending the query so that additional information collecting during proof tree construction is bubbling up, but you will have to decide beforehand what information that is:</p>
<p><code>query(Sought, [Info1, Info2, Info3])</code></p>
<p>And this query will always succeed (for <code>query/2</code>, "success" no longer means "success in finding a solution to the modeled problem" but "success in finishing the computation"),</p>
<p>Variable <code>Sought</code> will be the reified answer of the actual query you want answered, i.e. one of the atoms <code>true</code> or <code>false</code> (and maybe <code>unknown</code> if you have had enough with two-valued logic) and <code>Info1, Info2, Info3</code> will be additional details to help you answer a <em>why something something</em> in case <code>Sought</code> is <code>false</code>.</p>
<p>Note that much of the time, the desire to ask "why" is down to the mix-up between the two distinct failures: "failure in finding a solution to the modeled problem" and "failure in finishing the computation". For example, you want to apply <code>maplist/3</code> to two lists and expect this to work but erroneously the two lists are of different length: You will get <code>false</code> - but it will be a <code>false</code> from computation (in this case, due to a bug), not a <code>false</code> from modeling. Being heavy-handed with <code>assertion/1</code> may help here, but this is ugly in its own way.</p>
<p>In fact, compare with imperative or functional languages w/o logic programming parts: In the event of failure (maybe an exception?), what would be a corresponding "why"? It is unclear.</p>
<p><strong>Addendum</strong></p>
<p>This is a great question but the more I reflect on it, the more I think it can only be answered in a task-specific way: You must structure your logic program to be <code>why</code>-able, and you must decide what kind of information <code>why</code> should actually return. It <em>will</em> be something task-specific: something about missing information, "if only this or that were true" indications, where "this or that" are chosen from a dedicated set of predicates. This is of course expected, as there is no general way to make imperative or functional programs explain their results (or lack thereof) either.</p>
<p>I have looked a bit for papers on this (including IEEE Xplore and ACM Library), and have just found:</p>
<ul>
<li><a href="https://arxiv.org/abs/1402.0575" rel="nofollow noreferrer">Reasoning about Explanations for Negative Query Answers in DL-Lite</a> which is actually for Description Logics and uses <a href="https://en.wikipedia.org/wiki/Abductive_reasoning" rel="nofollow noreferrer">abductive reasoning</a>.</li>
<li><a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.9781" rel="nofollow noreferrer">WhyNot: Debugging Failed Queries in Large Knowledge Bases</a> which discusses a tool for <a href="https://en.wikipedia.org/wiki/Cyc" rel="nofollow noreferrer">Cyc</a>.</li>
<li>I also took a random look at the documentation for <a href="http://flora.sourceforge.net/documentation.html" rel="nofollow noreferrer">Flora-2</a> but they basically seem to say "use the debugger". But debugging is just debugging, not explaining.</li>
</ul>
<p>There must be more.</p> | 2020-08-24 10:56:00.817000+00:00 | 2020-08-24 17:25:02.527000+00:00 | 2020-08-24 17:25:02.527000+00:00 | null | 63,555,020 | <p>I'm looking for an approach, pattern, or built-in feature in Prolog that I can use to return <em>why</em> a set of predicates failed, at least as far as the predicates in the database are concerned. I'm trying to be able to say more than "That is false" when a user poses a query in a system.</p>
<p>For example, let's say I have two predicates. <code>blue/1</code> is true if something is blue, and <code>dog/1</code> is true if something is a dog:</p>
<pre><code>blue(X) :- ...
dog(X) :- ...
</code></pre>
<p>If I pose the following query to Prolog and <code>foo</code> is a dog, but not blue, Prolog would normally just return "false":</p>
<pre><code>? blue(foo), dog(foo)
false.
</code></pre>
<p>What I want is to find out <em>why</em> the conjunction of predicates was not true, even if it is an out of band call such as:</p>
<pre><code>? getReasonForFailure(X)
X = not(blue(foo))
</code></pre>
<p>I'm OK if the predicates have to be written in a certain way, I'm just looking for any approaches people have used.</p>
<p>The way I've done this to date, with some success, is by writing the predicates in a stylized way and using some helper predicates to find out the reason after the fact. For example:</p>
<pre><code>blue(X) :-
recordFailureReason(not(blue(X))),
isBlue(X).
</code></pre>
<p>And then implementing recordFailureReason/1 such that it always remembers the "reason" that happened deepest in the stack. If a query fails, whatever failure happened the deepest is recorded as the "best" reason for failure. That heuristic works surprisingly well for many cases, but does require careful building of the predicates to work well.</p>
<p>Any ideas? I'm willing to look outside of Prolog if there are predicate logic systems designed for this kind of analysis.</p> | 2020-08-24 05:06:06.297000+00:00 | 2020-08-26 20:26:31.493000+00:00 | null | prolog|first-order-logic | ['https://arxiv.org/abs/1402.0575', 'https://en.wikipedia.org/wiki/Abductive_reasoning', 'http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.9781', 'https://en.wikipedia.org/wiki/Cyc', 'http://flora.sourceforge.net/documentation.html'] | 5 |
63,568,128 | <p>As long as you remain within the pure monotonic subset of Prolog, you may consider <strong>generalizations</strong> as explanations. To take your example, the following generalizations might be thinkable depending on your precise definition of <code>blue/1</code> and <code>dog/1</code>.</p>
<pre>
?- blue(foo), * <s>dog(foo)</s>.
false.
</pre>
<p>In this generalization, the entire goal <code>dog(foo)</code> was removed. The prefix <code>*</code> is actually a predicate defined like <code>:- op(950, fy, *). *(_).</code>
Informally, the above can be read as: <em>Not only does this query fail, even this generalized query fails. There is no blue foo at all (provided there is none).</em> But maybe there <em>is</em> a blue foo, just no blue dog at all...</p>
<pre>
?- blue(_X/*<s>foo</s>*/), dog(_X/*<s>foo</s>*/).
false.
</pre>
<p>Now we have generalized the program by replacing <code>foo</code> with the new variable <code>_X</code>. In this manner the sharing between the two goals is retained.</p>
<p>There are more such generalizations possible like introducing <code>dif/2</code>.</p>
<p>This technique can be both manually and automatically applied. For more, there is a <a href="https://stackoverflow.com/a/30791637/772868">collection of example sessions</a>. Also see <a href="https://arxiv.org/abs/cs/0207044" rel="nofollow noreferrer">Declarative program development in Prolog with GUPU</a></p> | 2020-08-24 20:14:33.727000+00:00 | 2020-08-26 20:26:31.493000+00:00 | 2020-08-26 20:26:31.493000+00:00 | null | 63,555,020 | <p>I'm looking for an approach, pattern, or built-in feature in Prolog that I can use to return <em>why</em> a set of predicates failed, at least as far as the predicates in the database are concerned. I'm trying to be able to say more than "That is false" when a user poses a query in a system.</p>
<p>For example, let's say I have two predicates. <code>blue/1</code> is true if something is blue, and <code>dog/1</code> is true if something is a dog:</p>
<pre><code>blue(X) :- ...
dog(X) :- ...
</code></pre>
<p>If I pose the following query to Prolog and <code>foo</code> is a dog, but not blue, Prolog would normally just return "false":</p>
<pre><code>? blue(foo), dog(foo)
false.
</code></pre>
<p>What I want is to find out <em>why</em> the conjunction of predicates was not true, even if it is an out of band call such as:</p>
<pre><code>? getReasonForFailure(X)
X = not(blue(foo))
</code></pre>
<p>I'm OK if the predicates have to be written in a certain way, I'm just looking for any approaches people have used.</p>
<p>The way I've done this to date, with some success, is by writing the predicates in a stylized way and using some helper predicates to find out the reason after the fact. For example:</p>
<pre><code>blue(X) :-
recordFailureReason(not(blue(X))),
isBlue(X).
</code></pre>
<p>And then implementing recordFailureReason/1 such that it always remembers the "reason" that happened deepest in the stack. If a query fails, whatever failure happened the deepest is recorded as the "best" reason for failure. That heuristic works surprisingly well for many cases, but does require careful building of the predicates to work well.</p>
<p>Any ideas? I'm willing to look outside of Prolog if there are predicate logic systems designed for this kind of analysis.</p> | 2020-08-24 05:06:06.297000+00:00 | 2020-08-26 20:26:31.493000+00:00 | null | prolog|first-order-logic | ['https://stackoverflow.com/a/30791637/772868', 'https://arxiv.org/abs/cs/0207044'] | 2 |
30,460,874 | <p>Building on DavidW's answer, here's the implementation I am currently using, which is about 20% faster by using nogil and parallel computation:</p>
<pre><code>from cython.parallel import parallel, prange
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.initializedcheck(False)
cdef rsenc_cython(msg_in_r, nsym, gen_t) :
'''Reed-Solomon encoding using polynomial division, better explained at http://research.swtch.com/field'''
cdef uint8_t[::1] msg_in = bytearray(msg_in_r) # have to copy, unfortunately - can't make a memory view from a read only object
#cdef int[::1] gen = array.array('i',gen_t) # convert list to array
cdef uint8_t[::1] gen = gen_t
cdef uint8_t[::1] msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
cdef int i, j
cdef uint8_t[::1] lgen = bytearray(gen.shape[0])
for j in xrange(gen.shape[0]):
lgen[j] = gf_log_c[gen[j]]
cdef uint8_t coef,lcoef
with nogil:
for i in xrange(msg_in.shape[0]):
coef = msg_out[i]
if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw)
lcoef = gf_log_c[coef] # precaching
for j in prange(1, gen.shape[0]): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1)
msg_out[i + j] ^= gf_exp_c[lcoef + lgen[j]] # equivalent (in Galois Field 2^8) to msg_out[i+j] -= msg_out[i] * gen[j]
# Recopy the original message bytes
msg_out[:msg_in.shape[0]] = msg_in
return msg_out
</code></pre>
<p>I would still like it to be faster (on a real implementation, data is encoded at about 6.4 MB/s with n=255, n being the size of the message+codeword).</p>
<p>The main lead to a faster implementation that I have found is to use a LUT (LookUp Table) approach, by precomputing the multiplication and addition arrays. However, in my Python and Cython implementations, the LUT approach is slower than calculating XOR and addition operations.</p>
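<p>To make the idea concrete, here is a minimal, self-contained Python sketch of such a precomputation (an illustration only, not the Cython path above; the field polynomial 0x11B and generator 3 match the tables given in the question):</p>
<pre><code>gf_exp = [0] * 256
gf_log = [0] * 256
x = 1
for i in range(255):
    gf_exp[i] = x
    gf_log[x] = i
    xt = x << 1                 # multiply by 2
    if xt & 0x100:
        xt ^= 0x11B             # reduce modulo the field polynomial
    x = xt ^ x                  # 3*x = 2*x + x in GF(2^8)

# full 256x256 product table; a GF multiplication becomes a plain double lookup
GF_MUL = [[0] * 256 for _ in range(256)]
for a in range(1, 256):
    for b in range(1, 256):
        GF_MUL[a][b] = gf_exp[(gf_log[a] + gf_log[b]) % 255]

# the encoder inner loop can then do: msg_out[i + j] ^= GF_MUL[coef][gen[j]]
</code></pre>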
<p>There are other approaches to implement a faster RS encoder, but I don't have the abilities nor the time to try them out. I will leave them as references for other interested readers:</p>
<blockquote>
<ul>
<li>"Fast software implementation of finite field operations", Cheng Huang and Lihao Xu, Washington University in St. Louis, Tech. Rep (2003). <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.6595&rep=rep1&type=pdf" rel="noreferrer">link</a> and a correct code implementation <a href="http://catid.mechafetus.com/news/news.php?view=295" rel="noreferrer">here</a>.</li>
<li>Luo, Jianqiang, et al. "Efficient software implementations of large finite fields GF(2^n) for secure storage applications." ACM Transactions on Storage (TOS) 8.1 (2012): 2.</li>
<li>"A Performance Evaluation and Examination of Open-Source Erasure Coding Libraries for Storage.", Plank, J. S. and Luo, J. and Schuman, C. D. and Xu, L., and Wilcox-O'Hearn, Z, FAST. Vol. 9. 2009. <a href="https://www.usenix.org/legacy/events/fast09/tech/full_papers/plank/plank_html/" rel="noreferrer">link</a>
Or also the non extended version: "A Performance Comparison of Open-Source Erasure Coding Libraries for Storage Applications", Plank and Schuman.</li>
<li>Sourcecode of the ZFEC library, with multiplication LUT optimization <a href="https://hackage.haskell.org/package/fec-0.1.1/src/zfec/fec.c" rel="noreferrer">link</a>.</li>
<li>"Optimized Arithmetic for Reed-Solomon Encoders", Christof Paar (1997, June). In IEEE International Symposium on Information Theory (pp. 250-250). INSTITUTE OF ELECTRICAL ENGINEERS INC (IEEE). <a href="https://www.emsec.rub.de/media/crypto/veroeffentlichungen/2011/01/19/cnst.ps" rel="noreferrer">link</a></li>
<li>"A Fast Algorithm for Encoding the (255,233) Reed-Solomon Code Over GF(2^8)", R.L. Miller and T.K. Truong, I.S. Reed. <a href="http://ipnpr.jpl.nasa.gov/progress_report2/42-56/56P.PDF" rel="noreferrer">link</a></li>
<li>"Optimizing Galois Field arithmetic for diverse processor architectures and applications", Greenan, Kevin and M., Ethan and L. Miller and Thomas JE Schwarz, Modeling, Analysis and Simulation of Computers and Telecommunication Systems, 2008. MASCOTS 2008. IEEE International Symposium on. IEEE, 2008. <a href="http://www.ssrc.ucsc.edu/Papers/greenan-mascots08.pdf" rel="noreferrer">link</a></li>
<li>Anvin, H. Peter. "The mathematics of RAID-6." (2007). <a href="https://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf" rel="noreferrer">link</a> and <a href="https://www.kernel.org/doc/Documentation/crc32.txt" rel="noreferrer">link</a></li>
<li><a href="https://github.com/catid/wirehair/" rel="noreferrer">Wirehair library</a>, one of the only few implementations of Cauchy Reed-Solomon, which is said to be very fast.</li>
<li>"A logarithmic Boolean time algorithm for parallel polynomial division", Bini, D. and Pan, V. Y. (1987), Information processing letters, 24(4), 233-237. See also Bini, D., and V. Pan. "Fast parallel algorithms for polynomial division over an arbitrary field of constants." Computers & Mathematics with Applications 12.11 (1986): 1105-1118. <a href="https://www.researchgate.net/profile/Dario_Bini/publication/250727103_Fast_parallel_algorithms_for_polynomial_division_over_an_arbitrary_field_of_constants/links/53f1ad9d0cf23733e815c755.pdf" rel="noreferrer">link</a></li>
<li>Kung, H.T. "Fast evaluation and interpolation." (1973). <a href="http://www.eecs.harvard.edu/~htk/publication/1973-cmu-cs-technical-report-kung.pdf" rel="noreferrer">link</a></li>
<li>Cao, Zhengjun, and Hanyue Cao. "Note on fast division algorithm for polynomials using Newton iteration." arXiv preprint arXiv:1112.4014 (2011). <a href="http://arxiv.org/pdf/1112.4014" rel="noreferrer">link</a></li>
<li>"An Introduction to Galois Fields and Reed-Solomon Coding", James Westall and James Martin, 2010. <a href="http://people.cs.clemson.edu/~westall/851/rs-code.pdf" rel="noreferrer">link</a></li>
<li>Mamidi, Suman, et al. "Instruction set extensions for Reed-Solomon encoding and decoding." Application-Specific Systems, Architecture Processors, 2005. ASAP 2005. 16th IEEE International Conference on. IEEE, 2005. <a href="http://glossner.org/john/papers/2005_07_asap_reed_solomon.pdf" rel="noreferrer">link</a></li>
<li>Dumas, Jean-Guillaume, Laurent Fousse, and Bruno Salvy. "Simultaneous modular reduction and Kronecker substitution for small finite fields." Journal of Symbolic Computation 46.7 (2011): 823-840.</li>
<li>Greenan, Kevin M., Ethan L. Miller, and Thomas Schwarz. Analysis and construction of galois fields for efficient storage reliability. Vol. 9. Technical Report UCSC-SSRC-07, 2007. <a href="http://www.ssrc.ucsc.edu/Papers/ssrctr-07-09.pdf" rel="noreferrer">link</a></li>
</ul>
</blockquote>
<p>However, I think the best lead is to use an efficient <strong>polynomial modular reduction</strong> instead of polynomial division:</p>
<blockquote>
<ul>
<li>"Modular Reduction in GF (2 n) without Pre-computational Phase". KneΕΎevic, M., et al. Arithmetic of Finite Fields. Springer Berlin Heidelberg, 2008. 77-87.</li>
<li>"On computation of polynomial modular reduction". Wu, Huapeng. Technical report, Univ. of Waterloo, The Centre for applied cryptographic research, 2000.</li>
<li>"A fast software implementation for arithmetic operations in GF (2n)". De Win, E., Bosselaers, A., Vandenberghe, S., De Gersem, P., & Vandewalle, J. (1996, January). In Advances in CryptologyβAsiacrypt'96 (pp. 65-76). Springer Berlin Heidelberg. <a href="https://www.cosic.esat.kuleuven.be/publications/article-300.pdf" rel="noreferrer">link</a></li>
<li><a href="http://en.wikipedia.org/wiki/Barrett_reduction" rel="noreferrer">Barnett reduction</a></li>
</ul>
</blockquote>
<p>/EDIT: in fact it seems "On computation of polynomial modular reduction" just uses the same approach as I did with the variants rsenc_alt1() and rsenc_alt2() (the main idea being that we precompute the pairs of coefficients we will need, and reduce them all at once), and unfortunately it's not faster (it's actually slower, because the precomputation cannot be done once and for all since it depends on the message input).</p>
<p>/EDIT: I found a library with really interesting optimizations, many of which are not even found in any academic papers (which the author stated he has read btw), and which is probably the fastest software implementation of Reed-Solomon: the <a href="https://github.com/catid/wirehair/blob/master/wirehair-mobile/wirehair_codec_8.cpp" rel="noreferrer">wirehair project</a>; see the <a href="http://catid.mechafetus.com/news/news.php" rel="noreferrer">related blog</a> for more details. Worth noting, the author also made a <a href="https://github.com/catid/longhair" rel="noreferrer">Cauchy-Reed-Solomon codec called longhair</a> with similar optimization tricks.</p>
<p>/FINAL EDIT: it seems the fastest implementation available is based on this paper:</p>
<blockquote>
<p>Plank, James S., Kevin M. Greenan, and Ethan L. Miller. "Screaming
fast Galois field arithmetic using intel SIMD instructions." FAST.
2013. <a href="http://www.kaymgee.com/Kevin_Greenan/Publications_files/plank-fast2013.pdf" rel="noreferrer">link</a></p>
</blockquote>
<p>The <a href="https://github.com/klauspost/reedsolomon" rel="noreferrer">implementation, in pure Go, is available here and is authored by Klaus Post</a>. It's the fastest implementation I have ever read about, both in single thread and parallelized (it supports both). It claims over 1GB/s in single thread and over 4 GB/s with 8 threads. However, it relies on optimized SIMD instructions and various low-level optimizations on matrix operations (because here the RS codec is matrix oriented instead of the polynomial approach I have in my question).</p>
<p>So, if you are an interested reader and want to find the fastest Reed-Solomon codec available, that's the one.</p> | 2015-05-26 14:04:26.993000+00:00 | 2015-07-28 16:48:33.783000+00:00 | 2015-07-28 16:48:33.783000+00:00 | null | 30,363,903 | <p>I am trying to optimize a Reed-Solomon encoder, which is in fact simply a polynomial division operation over Galois Fields 2^8 (which simply means that values wrap-around over 255). The code is in fact very very similar to what can be found here for Go: <a href="http://research.swtch.com/field" rel="noreferrer">http://research.swtch.com/field</a></p>
<p>The algorithm for polynomial division used here is a <a href="http://en.wikipedia.org/wiki/Synthetic_division" rel="noreferrer">synthetic division</a> (also called Horner's method).</p>
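<p>(For readers unfamiliar with the term, here is a tiny, plain-integer illustration of synthetic division, unrelated to the GF(2^8) code below; the example polynomial is arbitrary:)</p>
<pre><code># divide x^3 - 12x^2 - 42 by (x - 3) using synthetic division
def synthetic_division(dividend, root):
    out = list(dividend)             # coefficients, highest degree first
    for i in range(1, len(out)):
        out[i] += out[i - 1] * root  # bring down, multiply, add
    return out[:-1], out[-1]         # quotient coefficients, remainder

print(synthetic_division([1, -12, 0, -42], 3))   # ([1, -9, -27], -123)
</code></pre>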
<p>I tried everything: numpy, pypy, cython. The best performance I get is by using pypy with this simple nested loop:</p>
<pre><code>def rsenc(msg_in, nsym, gen):
'''Reed-Solomon encoding using polynomial division, better explained at http://research.swtch.com/field'''
msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))])
for i in xrange(len(msg_in)):
coef = msg_out[i]
# coef = gf_mul(msg_out[i], gf_inverse(gen[0])) // for general polynomial division (when polynomials are non-monic), we need to compute: coef = msg_out[i] / gen[0]
if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw)
lcoef = gf_log[coef] # precaching
for j in xrange(1, len(gen)): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1)
msg_out[i + j] ^= gf_exp[lcoef + lgen[j]] # equivalent (in Galois Field 2^8) to msg_out[i+j] += msg_out[i] * gen[j]
# Recopy the original message bytes
msg_out[:len(msg_in)] = msg_in
return msg_out
</code></pre>
<p>Can a Python optimization wizard guide me to some clues on how to get a speedup? My goal is to get at least a speedup of 3x, but more would be awesome. Any approach or tool is accepted, as long as it is cross-platform (works at least on Linux and Windows).</p>
<p>Here is a small test script with some of the other alternatives I tried (the cython attempt is not included since it was slower than native python!):</p>
<pre><code>import random
from operator import xor
numpy_enabled = False
try:
import numpy as np
numpy_enabled = True
except ImportError:
pass
# Exponent table for 3, a generator for GF(256)
gf_exp = bytearray([1, 3, 5, 15, 17, 51, 85, 255, 26, 46, 114, 150, 161, 248, 19,
53, 95, 225, 56, 72, 216, 115, 149, 164, 247, 2, 6, 10, 30, 34,
102, 170, 229, 52, 92, 228, 55, 89, 235, 38, 106, 190, 217, 112,
144, 171, 230, 49, 83, 245, 4, 12, 20, 60, 68, 204, 79, 209, 104,
184, 211, 110, 178, 205, 76, 212, 103, 169, 224, 59, 77, 215, 98,
166, 241, 8, 24, 40, 120, 136, 131, 158, 185, 208, 107, 189, 220,
127, 129, 152, 179, 206, 73, 219, 118, 154, 181, 196, 87, 249, 16,
48, 80, 240, 11, 29, 39, 105, 187, 214, 97, 163, 254, 25, 43, 125,
135, 146, 173, 236, 47, 113, 147, 174, 233, 32, 96, 160, 251, 22,
58, 78, 210, 109, 183, 194, 93, 231, 50, 86, 250, 21, 63, 65, 195,
94, 226, 61, 71, 201, 64, 192, 91, 237, 44, 116, 156, 191, 218,
117, 159, 186, 213, 100, 172, 239, 42, 126, 130, 157, 188, 223,
122, 142, 137, 128, 155, 182, 193, 88, 232, 35, 101, 175, 234, 37,
111, 177, 200, 67, 197, 84, 252, 31, 33, 99, 165, 244, 7, 9, 27,
45, 119, 153, 176, 203, 70, 202, 69, 207, 74, 222, 121, 139, 134,
145, 168, 227, 62, 66, 198, 81, 243, 14, 18, 54, 90, 238, 41, 123,
141, 140, 143, 138, 133, 148, 167, 242, 13, 23, 57, 75, 221, 124,
132, 151, 162, 253, 28, 36, 108, 180, 199, 82, 246] * 2 + [1])
# Logarithm table, base 3
gf_log = bytearray([0, 0, 25, 1, 50, 2, 26, 198, 75, 199, 27, 104, 51, 238, 223, # BEWARE: the first entry should be None instead of 0 because it's undefined, but for a bytearray we can't set such a value
3, 100, 4, 224, 14, 52, 141, 129, 239, 76, 113, 8, 200, 248, 105,
28, 193, 125, 194, 29, 181, 249, 185, 39, 106, 77, 228, 166, 114,
154, 201, 9, 120, 101, 47, 138, 5, 33, 15, 225, 36, 18, 240, 130,
69, 53, 147, 218, 142, 150, 143, 219, 189, 54, 208, 206, 148, 19,
92, 210, 241, 64, 70, 131, 56, 102, 221, 253, 48, 191, 6, 139, 98,
179, 37, 226, 152, 34, 136, 145, 16, 126, 110, 72, 195, 163, 182,
30, 66, 58, 107, 40, 84, 250, 133, 61, 186, 43, 121, 10, 21, 155,
159, 94, 202, 78, 212, 172, 229, 243, 115, 167, 87, 175, 88, 168,
80, 244, 234, 214, 116, 79, 174, 233, 213, 231, 230, 173, 232, 44,
215, 117, 122, 235, 22, 11, 245, 89, 203, 95, 176, 156, 169, 81,
160, 127, 12, 246, 111, 23, 196, 73, 236, 216, 67, 31, 45, 164,
118, 123, 183, 204, 187, 62, 90, 251, 96, 177, 134, 59, 82, 161,
108, 170, 85, 41, 157, 151, 178, 135, 144, 97, 190, 220, 252, 188,
149, 207, 205, 55, 63, 91, 209, 83, 57, 132, 60, 65, 162, 109, 71,
20, 42, 158, 93, 86, 242, 211, 171, 68, 17, 146, 217, 35, 32, 46,
137, 180, 124, 184, 38, 119, 153, 227, 165, 103, 74, 237, 222, 197,
49, 254, 24, 13, 99, 140, 128, 192, 247, 112, 7])
if numpy_enabled:
np_gf_exp = np.array(gf_exp)
np_gf_log = np.array(gf_log)
def gf_pow(x, power):
return gf_exp[(gf_log[x] * power) % 255]
def gf_poly_mul(p, q):
r = [0] * (len(p) + len(q) - 1)
lp = [gf_log[p[i]] for i in xrange(len(p))]
for j in range(len(q)):
lq = gf_log[q[j]]
for i in range(len(p)):
r[i + j] ^= gf_exp[lp[i] + lq]
return r
def rs_generator_poly_base3(nsize, fcr=0):
g_all = {}
g = [1]
g_all[0] = g_all[1] = g
for i in range(fcr+1, fcr+nsize+1):
g = gf_poly_mul(g, [1, gf_pow(3, i)])
g_all[nsize-i] = g
return g_all
# Fastest way with pypy
def rsenc(msg_in, nsym, gen):
'''Reed-Solomon encoding using polynomial division, better explained at http://research.swtch.com/field'''
msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))])
for i in xrange(len(msg_in)):
coef = msg_out[i]
# coef = gf_mul(msg_out[i], gf_inverse(gen[0])) # for general polynomial division (when polynomials are non-monic), the usual way of using synthetic division is to divide the divisor g(x) with its leading coefficient (call it a). In this implementation, this means:we need to compute: coef = msg_out[i] / gen[0]
if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw)
lcoef = gf_log[coef] # precaching
for j in xrange(1, len(gen)): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1)
msg_out[i + j] ^= gf_exp[lcoef + lgen[j]] # equivalent (in Galois Field 2^8) to msg_out[i+j] += msg_out[i] * gen[j]
# Recopy the original message bytes
msg_out[:len(msg_in)] = msg_in
return msg_out
# Alternative 1: the loops were completely changed, instead of fixing msg_out[i] and updating all subsequent i+j items, we now fixate msg_out[i+j] and compute it at once using all couples msg_out[i] * gen[j] - msg_out[i+1] * gen[j-1] - ... since when we fixate msg_out[i+j], all previous msg_out[k] with k < i+j are already fully computed.
def rsenc_alt1(msg_in, nsym, gen):
msg_in = bytearray(msg_in)
msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))])
# Alternative 1
jlist = range(1, len(gen))
for k in xrange(1, len(msg_out)):
for x in xrange(max(k-len(msg_in),0), len(gen)-1):
if k-x-1 < 0: break
msg_out[k] ^= gf_exp[msg_out[k-x-1] + lgen[jlist[x]]]
# Recopy the original message bytes
msg_out[:len(msg_in)] = msg_in
return msg_out
# Alternative 2: a rewrite of alternative 1 with generators and reduce
def rsenc_alt2(msg_in, nsym, gen):
msg_in = bytearray(msg_in)
msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))])
# Alternative 1
jlist = range(1, len(gen))
for k in xrange(1, len(msg_out)):
items_gen = ( gf_exp[msg_out[k-x-1] + lgen[jlist[x]]] if k-x-1 >= 0 else next(iter(())) for x in xrange(max(k-len(msg_in),0), len(gen)-1) )
msg_out[k] ^= reduce(xor, items_gen)
# Recopy the original message bytes
msg_out[:len(msg_in)] = msg_in
return msg_out
# Alternative with Numpy
def rsenc_numpy(msg_in, nsym, gen):
msg_in = np.array(bytearray(msg_in))
msg_out = np.pad(msg_in, (0, nsym), 'constant')
lgen = np_gf_log[gen]
for i in xrange(msg_in.size):
msg_out[i+1:i+lgen.size] ^= np_gf_exp[np.add(lgen[1:], msg_out[i])]
msg_out[:len(msg_in)] = msg_in
return msg_out
gf_mul_arr = [bytearray(256) for _ in xrange(256)]
gf_add_arr = [bytearray(256) for _ in xrange(256)]
# Precompute multiplication and addition tables
def gf_precomp_tables(gf_exp=gf_exp, gf_log=gf_log):
global gf_mul_arr, gf_add_arr
for i in xrange(256):
for j in xrange(256):
gf_mul_arr[i][j] = gf_exp[gf_log[i] + gf_log[j]]
gf_add_arr[i][j] = i ^ j
return gf_mul_arr, gf_add_arr
# Alternative with precomputation of multiplication and addition tables, inspired by zfec: https://hackage.haskell.org/package/fec-0.1.1/src/zfec/fec.c
def rsenc_precomp(msg_in, nsym, gen=None):
msg_in = bytearray(msg_in)
msg_out = bytearray(msg_in) + bytearray(len(gen)-1)
for i in xrange(len(msg_in)): # [i for i in xrange(len(msg_in)) if msg_in[i] != 0]
coef = msg_out[i]
if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw)
mula = gf_mul_arr[coef]
for j in xrange(1, len(gen)): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1)
#msg_out[i + j] = gf_add_arr[msg_out[i+j]][gf_mul_arr[coef][gen[j]]] # slower...
#msg_out[i + j] ^= gf_mul_arr[coef][gen[j]] # faster
msg_out[i + j] ^= mula[gen[j]] # fastest
# Recopy the original message bytes
msg_out[:len(msg_in)] = msg_in # equivalent to c = mprime - b, where mprime is msg_in padded with [0]*nsym
return msg_out
def randstr(n, size):
'''Generate very fastly a random hexadecimal string. Kudos to jcdryer http://stackoverflow.com/users/131084/jcdyer'''
hexstr = '%0'+str(size)+'x'
for _ in xrange(n):
yield hexstr % random.randrange(16**size)
# Simple test case
if __name__ == "__main__":
# Setup functions to test
funcs = [rsenc, rsenc_precomp, rsenc_alt1, rsenc_alt2]
if numpy_enabled: funcs.append(rsenc_numpy)
gf_precomp_tables()
# Setup RS vars
n = 255
k = 213
import time
# Init the generator polynomial
g = rs_generator_poly_base3(n)
# Init the ground truth
mes = 'hello world'
mesecc_correct = rsenc(mes, n-11, g[k])
# Test the functions
for func in funcs:
# Sanity check
if func(mes, n-11, g[k]) != mesecc_correct: print func.__name__, ": output is incorrect!"
# Time the function
total_time = 0
for m in randstr(1000, n):
start = time.clock()
func(m, n-k, g[k])
total_time += time.clock() - start
print func.__name__, ": total time elapsed %f seconds." % total_time
</code></pre>
<p>And here is the result on my machine:</p>
<pre><code>With PyPy:
rsenc : total time elapsed 0.108183 seconds.
rsenc_alt1 : output is incorrect!
rsenc_alt1 : total time elapsed 0.164084 seconds.
rsenc_alt2 : output is incorrect!
rsenc_alt2 : total time elapsed 0.557697 seconds.
Without PyPy:
rsenc : total time elapsed 3.518857 seconds.
rsenc_alt1 : output is incorrect!
rsenc_alt1 : total time elapsed 5.630897 seconds.
rsenc_alt2 : output is incorrect!
rsenc_alt2 : total time elapsed 6.100434 seconds.
rsenc_numpy : output is incorrect!
rsenc_numpy : total time elapsed 1.631373 seconds
</code></pre>
<p>(Note: the alternatives should be correct, some index must be a bit off, but since they are slower anyway I did not try to fix them)</p>
<p>/UPDATE and goal of the bounty: I found a very interesting optimization trick that promises to speed up computations a lot: to <a href="https://hackage.haskell.org/package/fec-0.1.1/src/zfec/fec.c" rel="noreferrer">precompute the multiplication table</a>. I updated the code above with the new function rsenc_precomp(). However, there's no gain at all in my implementation, it's even a bit slower:</p>
<pre><code>rsenc : total time elapsed 0.107170 seconds.
rsenc_precomp : total time elapsed 0.108788 seconds.
</code></pre>
<p><strong>How can it be that array lookups cost more than operations like additions or XOR? Why does it work in ZFEC and not in Python?</strong></p>
<p>I will attribute the bounty to whoever can show me how to make this multiplication/addition lookup-tables optimization work (faster than the xor and addition operations) or who can explain to me with references or analysis why this optimization cannot work here (using Python/PyPy/Cython/Numpy etc.. I tried them all).</p> | 2015-05-21 03:11:39.083000+00:00 | 2015-07-28 16:48:33.783000+00:00 | 2015-05-26 13:40:01.967000+00:00 | python|numpy|optimization|cython|pypy | ['http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.6595&rep=rep1&type=pdf', 'http://catid.mechafetus.com/news/news.php?view=295', 'https://www.usenix.org/legacy/events/fast09/tech/full_papers/plank/plank_html/', 'https://hackage.haskell.org/package/fec-0.1.1/src/zfec/fec.c', 'https://www.emsec.rub.de/media/crypto/veroeffentlichungen/2011/01/19/cnst.ps', 'http://ipnpr.jpl.nasa.gov/progress_report2/42-56/56P.PDF', 'http://www.ssrc.ucsc.edu/Papers/greenan-mascots08.pdf', 'https://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf', 'https://www.kernel.org/doc/Documentation/crc32.txt', 'https://github.com/catid/wirehair/', 'https://www.researchgate.net/profile/Dario_Bini/publication/250727103_Fast_parallel_algorithms_for_polynomial_division_over_an_arbitrary_field_of_constants/links/53f1ad9d0cf23733e815c755.pdf', 'http://www.eecs.harvard.edu/~htk/publication/1973-cmu-cs-technical-report-kung.pdf', 'http://arxiv.org/pdf/1112.4014', 'http://people.cs.clemson.edu/~westall/851/rs-code.pdf', 'http://glossner.org/john/papers/2005_07_asap_reed_solomon.pdf', 'http://www.ssrc.ucsc.edu/Papers/ssrctr-07-09.pdf', 'https://www.cosic.esat.kuleuven.be/publications/article-300.pdf', 'http://en.wikipedia.org/wiki/Barrett_reduction', 'https://github.com/catid/wirehair/blob/master/wirehair-mobile/wirehair_codec_8.cpp', 'http://catid.mechafetus.com/news/news.php', 'https://github.com/catid/longhair', 'http://www.kaymgee.com/Kevin_Greenan/Publications_files/plank-fast2013.pdf', 'https://github.com/klauspost/reedsolomon'] | 23 |
45,624,249 | <p>This thread is misleading. Tried commenting on Lucas Ramadan's answer, but I don't have the right privileges yet, so I'll just put this here.</p>
<p>Batch normalization works best after the activation function, and <a href="https://arxiv.org/abs/1502.03167" rel="noreferrer">here</a> or <a href="https://standardfrancis.wordpress.com/2015/04/16/batch-normalization/" rel="noreferrer">here</a> is why: it was developed to prevent internal covariate shift. Internal covariate shift occurs when the distribution of the <em>activations</em> of a layer shifts significantly throughout training. Batch normalization is used so that the distribution of the inputs (and these inputs are literally the result of an activation function) to a specific layer doesn't change over time due to parameter updates from each batch (or at least, allows it to change in an advantageous way). It uses batch statistics to do the normalizing, and then uses the batch normalization parameters (gamma and beta in the original paper) "to make sure that the transformation inserted in the network can represent the identity transform" (quote from original paper). But the point is that we're trying to normalize the inputs to a layer, so it should always go immediately before the next layer in the network. Whether or not that's after an activation function is dependent on the architecture in question.</p> | 2017-08-10 22:17:24.913000+00:00 | 2017-08-10 22:17:24.913000+00:00 | null | null | 34,716,454 | <p>If I want to use the BatchNormalization function in Keras, then do I need to call it once only at the beginning?</p>
<p>I read this documentation for it: <a href="http://keras.io/layers/normalization/">http://keras.io/layers/normalization/</a></p>
<p>I don't see where I'm supposed to call it. Below is my code attempting to use it:</p>
<pre><code>model = Sequential()
keras.layers.normalization.BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None)
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2)
</code></pre>
<p>I ask because if I run the code with the second line including the batch normalization and if I run the code without the second line I get similar outputs. So either I'm not calling the function in the right place, or I guess it doesn't make that much of a difference.</p> | 2016-01-11 07:47:53.143000+00:00 | 2021-01-15 18:08:24.403000+00:00 | 2019-06-04 18:55:02.283000+00:00 | python|keras|neural-network|data-science|batch-normalization | ['https://arxiv.org/abs/1502.03167', 'https://standardfrancis.wordpress.com/2015/04/16/batch-normalization/'] | 2 |
<p>It is clearly a typo. As stated in the <a href="https://arxiv.org/abs/2001.05137" rel="nofollow noreferrer">linked paper (thanks Christoph)</a>, they mention the following about the max-pooling layer (emphasis mine):</p>
<blockquote>
<p>The first network is
a Fully Designed Neural Network (FD-NN). The architecture and layers of the model are displayed in Table 1. A 2D
convolutional layer with 3Γ3 filter size used, and Relu assigned as an activation function. <strong>Maxpooling with the size of
2Γ2 applied to reduce the number of features</strong>.</p>
</blockquote>
<p>If a 2 x 2 window is applied, you are correct that it should reduce the feature map from 32 x 32 x 32 to 16 x 16 x 32. In addition, the number of filters in that row is wrong: it should also still be 32. This paper is not formally published in any conference or journal and is only available as a preprint on arXiv. This means that the paper was not formally vetted for errors or proofread.</p>
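<p>As a quick sanity check (a minimal sketch, assuming any recent TensorFlow 2.x / Keras install), a 2x2 max-pooling with the default stride halves the spatial dimensions and leaves the channel count untouched:</p>
<pre><code>import tensorflow as tf

x = tf.zeros((1, 32, 32, 32))                          # one 32x32 feature map with 32 channels
y = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)  # 2x2 window, stride defaults to 2
print(y.shape)                                         # (1, 16, 16, 32)
</code></pre>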
<p>As such, if you want to seek clarity on the actual output dimensions I recommend you seek this from the original authors of the paper. However, the output dimensions for the max pooling and dropout from this table are completely incorrect.</p> | 2021-11-29 17:48:10.293000+00:00 | 2021-11-29 18:08:11.843000+00:00 | 2021-11-29 18:08:11.843000+00:00 | null | 70,137,598 | <p>According to my understanding a maxpool layer works on convolution 2d layer and reduces the dimensions of the layer by half but the architecture of this model shows it in a different manner.
Can anyone tell me why it decreased by only a small amount and not by half, as expected? What I mean is: if the max-pooling layer is applied, shouldn't the dimension be 16x16x32? Why is it 32x31x30? If there is a possibility of a custom output shape, I'd like to know why.</p>
<p><a href="https://i.stack.imgur.com/BzdEE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BzdEE.png" alt="Architecture of the Model" /></a></p> | 2021-11-27 18:29:54.890000+00:00 | 2021-11-30 14:23:29.410000+00:00 | 2021-11-30 14:23:29.410000+00:00 | image-processing|conv-neural-network|image-classification|max-pooling|max-pool-size | ['https://arxiv.org/abs/2001.05137'] | 1 |
55,631,985 | <p>This is simply a choice of a hyper-parameter. Such choices can be made by cross-validation or hyper-parameter search, meaning training a few models with different choices of the hyper-parameter and seeing which gets the best performance on the validation set.
Specifically for 3x3 convolution, this has been made popular since the <a href="https://arxiv.org/pdf/1409.1556.pdf" rel="nofollow noreferrer">VGG paper</a> which suggested that stacking many 3x3 convolutions (which is considered a small kernel) can give good performance.</p> | 2019-04-11 11:59:42.287000+00:00 | 2019-04-11 11:59:42.287000+00:00 | null | null | 55,584,228 | <p>I am reading the faster-rcnn and ssd code for object detection. The prediction layer use the 3x3 filter to predict box position and class label. </p>
<p>Why not use 2x2 filter or 4x4 filter or 5x5 filter to predict them?</p>
<p><a href="https://i.stack.imgur.com/OrXb0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OrXb0.png" alt="enter image description here"></a></p> | 2019-04-09 02:24:11.040000+00:00 | 2019-04-11 11:59:42.287000+00:00 | null | deep-learning|computer-vision|object-detection|faster-rcnn | ['https://arxiv.org/pdf/1409.1556.pdf'] | 1 |
68,112,475 | <p>The proper way to calculate a token price is by asking the liquidity pool (the pair of this token against the local PEG or some USD token) for the ratio of how much PEG was inserted against how many tokens (for more details about what a liquidity pool represents, see <a href="https://uniswap.org/docs/v2/core-concepts/pools/" rel="nofollow noreferrer">https://uniswap.org/docs/v2/core-concepts/pools/</a>).</p>
<p>So for python use:</p>
<pre class="lang-py prettyprint-override"><code>from web3 import Web3
from web3.middleware import geth_poa_middleware # Needed for Binance
from json import loads
from decimal import Decimal
ETHER = 10 ** 18
WBNB = '0xbb4CdB9CBd36B01bD1cBaEBF2De08d9173bc095c'
web3 = Web3(Web3.HTTPProvider('https://bsc-dataseed1.binance.org:443'))
web3.middleware_onion.inject(geth_poa_middleware, layer=0) # Again, this is needed for Binance, not Ethereum
CAKE_ROUTER_V2 = web3.toChecksumAddress('0x10ed43c718714eb63d5aa57b78b54704e256024e') # note: web3 must exist before toChecksumAddress() can be called
ABI = loads('[{"inputs":[],"name":"decimals","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"token0","outputs":[{"internalType":"address","name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"inputs":[],"name":"factory","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"","type":"address"},{"internalType":"address","name":"","type":"address"}],"name":"getPair","outputs":[{"internalType":"address","name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"getReserves","outputs":[{"internalType":"uint112","name":"_reserve0","type":"uint112"},{"internalType":"uint112","name":"_reserve1","type":"uint112"},{"internalType":"uint32","name":"_blockTimestampLast","type":"uint32"}],"payable":false,"stateMutability":"view","type":"function"}]')
def get_price(token, decimals, pair_contract, is_reversed, is_price_in_peg):
peg_reserve = 0
token_reserve = 0
(reserve0, reserve1, blockTimestampLast) = pair_contract.functions.getReserves().call()
if is_reversed:
peg_reserve = reserve0
token_reserve = reserve1
else:
peg_reserve = reserve1
token_reserve = reserve0
if token_reserve and peg_reserve:
if is_price_in_peg:
# CALCULATE PRICE BY TOKEN PER PEG
price = (Decimal(token_reserve) / 10 ** decimals) / (Decimal(peg_reserve) / ETHER)
else:
# CALCULATE PRICE BY PEG PER TOKEN
price = (Decimal(peg_reserve) / ETHER) / (Decimal(token_reserve) / 10 ** decimals)
return price
return Decimal('0')
if __name__ == '__main__':
CAKE_FACTORY_V2 = web3.eth.contract(address=CAKE_ROUTER_V2, abi=ABI).functions.factory().call()
token = web3.toChecksumAddress('0x126f5f2a88451d24544f79d11f869116351d46e1')
pair = web3.eth.contract(address=CAKE_FACTORY_V2, abi=ABI).functions.getPair(token, WBNB).call()
pair_contract = web3.eth.contract(address=pair, abi=ABI)
is_reversed = pair_contract.functions.token0().call() == WBNB
decimals = web3.eth.contract(address=token, abi=ABI).functions.decimals().call()
is_price_in_peg = True
print(get_price(token, decimals, pair_contract, is_reversed, is_price_in_peg), 'BNB')
</code></pre>
<p>And for JS use:</p>
<pre class="lang-js prettyprint-override"><code>var ETHER = Math.pow(10, 18);
var WBNB = '0xbb4CdB9CBd36B01bD1cBaEBF2De08d9173bc095c';
var CAKE_ROUTER_V2 = Web3.utils.toChecksumAddress('0x10ed43c718714eb63d5aa57b78b54704e256024e');
var web3 = new Web3('https://bsc-dataseed1.binance.org:443');
var ABI = [{"inputs":[],"name":"decimals","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"token0","outputs":[{"internalType":"address","name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"inputs":[],"name":"factory","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"internalType":"address","name":"","type":"address"},{"internalType":"address","name":"","type":"address"}],"name":"getPair","outputs":[{"internalType":"address","name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"getReserves","outputs":[{"internalType":"uint112","name":"_reserve0","type":"uint112"},{"internalType":"uint112","name":"_reserve1","type":"uint112"},{"internalType":"uint32","name":"_blockTimestampLast","type":"uint32"}],"payable":false,"stateMutability":"view","type":"function"}];
var get_price = async function(token, decimals, pair_contract, is_reverse, is_price_in_peg) {
var price,
peg_reserve = 0,
token_reserve = 0,
res = await pair_contract.methods.getReserves().call(),
reserve0 = res[0],
reserve1 = res[1];
if (is_reverse) {
peg_reserve = reserve0;
token_reserve = reserve1;
} else {
peg_reserve = reserve1;
token_reserve = reserve0;
}
if (token_reserve && peg_reserve) {
if (is_price_in_peg) {
// CALCULATE PRICE BY TOKEN PER PEG
price = (Number(token_reserve) / Number(Math.pow(10, decimals))) / (Number(peg_reserve) / Number(ETHER));
} else {
// CALCULATE PRICE BY PEG PER TOKEN
price = (Number(peg_reserve) / Number(ETHER)) / (Number(token_reserve) / Number(Math.pow(10, decimals)));
}
return price;
}
return Number(0);
};
var token = Web3.utils.toChecksumAddress('0x126f5f2a88451d24544f79d11f869116351d46e1');
var CAKE_FACTORY_V2 = await (new web3.eth.Contract(ABI, CAKE_ROUTER_V2)).methods.factory().call(); // assumed fix: derive the factory from the router, mirroring the Python version above
var pair = await (await (new web3.eth.Contract(ABI, CAKE_FACTORY_V2))).methods.getPair(token, WBNB).call();
var pair_contract = await new web3.eth.Contract(ABI, pair);
var is_reversed = (await pair_contract.methods.token0().call()) == WBNB;
var decimals = await (await new web3.eth.Contract(ABI, token)).methods.decimals().call();
var is_price_in_peg = true;
console.log(await get_price(token, decimals, pair_contract, is_reversed, is_price_in_peg), 'BNB')
</code></pre>
<p>NOTE 1: This applies only for tokens that have liquidity against WBNB. If the liquidity is against some other coin, you have to recursively understand all the prices in that chain and correlate them one to another until you reach WBNB (or any other PEG in other networks).</p>
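<p>A tiny sketch of what that chaining means, with purely hypothetical numbers (assume the token only has a TOKEN/BUSD pool, and BUSD has a BUSD/WBNB pool):</p>
<pre class="lang-py prettyprint-override"><code>price_token_in_busd = 0.004      # ratio read from the assumed TOKEN/BUSD pool
price_busd_in_wbnb  = 0.0018     # ratio read from the BUSD/WBNB pool
price_token_in_wbnb = price_token_in_busd * price_busd_in_wbnb
print(price_token_in_wbnb)       # the token price finally expressed in WBNB
</code></pre>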
<p>NOTE 2. According to <a href="https://arxiv.org/pdf/2009.14021.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2009.14021.pdf</a> :</p>
<blockquote>
<p>Expected Execution Price (E[P]): When a liquidity
taker issues a trade on X/Y , the taker wishes to
execute the trade with the expected execution price
E[P] (based on the AMM algorithm and X/Y state),
given the expected slippage.</p>
</blockquote>
<blockquote>
<p>Execution Price (P): During the time difference between a liquidity taker issuing a transaction, and the
transaction being executed (e.g. mined in a block),
the state of the AMM market X/Y may change.
This state change may induce unexpected slippage
resulting in an execution price P != E[P].</p>
</blockquote>
<blockquote>
<p>Unexpected Price Slippage (P - E[P]): is the difference between P and E[P].</p>
</blockquote>
<blockquote>
<p>Unexpected Slippage Rate ((P - E[P]) / E[P]): is the
unexpected slippage over the expected price.</p>
</blockquote>
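<p>To make those definitions concrete, a small numeric sketch (the values are made up, not taken from any real pool):</p>
<pre class="lang-py prettyprint-override"><code>from decimal import Decimal

expected_price = Decimal('0.0000025')   # E[P], e.g. what get_price() above returns
executed_price = Decimal('0.0000026')   # P, e.g. amount received / amount sent in the actual trade

unexpected_slippage = executed_price - expected_price             # P - E[P]
unexpected_slippage_rate = unexpected_slippage / expected_price   # (P - E[P]) / E[P]
print(unexpected_slippage, unexpected_slippage_rate)
</code></pre>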
<p>So in our situation, <code>E[P]</code> is the result of our <code>get_price()</code> and <code>P</code> is the result from <code>getAmounsOut()</code> of an amount (1Kwei for example) divided by the amount we provided (1K in this example) and thus we can even calculate the slippage by eventually subtracting <code>P β E[P]</code></p> | 2021-06-24 08:47:38.487000+00:00 | 2022-08-19 20:48:34.863000+00:00 | 2022-08-19 20:48:34.863000+00:00 | null | 67,563,429 | <p>I'm using Web3.py and I'm experiencing something strange.</p>
<p>For the following code (with Pancake Router V2):</p>
<pre class="lang-py prettyprint-override"><code>from web3 import Web3
from web3.middleware import geth_poa_middleware
web3 = Web3(Web3.HTTPProvider('https://bsc-dataseed1.binance.org:443'))
web3.middleware_onion.inject(geth_poa_middleware, layer=0)
ABI = {"inputs":[{"internalType":"uint256","name":"amountIn","type":"uint256"},{"internalType":"address[]","name":"path","type":"address[]"}],"name":"getAmountsOut","outputs":[{"internalType":"uint256[]","name":"amounts","type":"uint256[]"}],"stateMutability":"view","type":"function"}
CAKE_ROUTER_V2 = web3.toChecksumAddress('0x10ed43c718714eb63d5aa57b78b54704e256024e')
router_contract = web3.eth.contract(address=CAKE_ROUTER_V2, abi=ABI)
WBNB = web3.toChecksumAddress('0xbb4CdB9CBd36B01bD1cBaEBF2De08d9173bc095c')
CAKE = web3.toChecksumAddress('0x0e09fabb73bd3ade0a17ecc321fd13a19e81ce82')
KONGSHIBA = web3.toChecksumAddress('0x126f5f2a88451d24544f79d11f869116351d46e1')
print(router_contract.functions.getAmountsOut(1, [WBNB, CAKE]).call())
print(router_contract.functions.getAmountsOut(1, [WBNB, KONGSHIBA]).call())
</code></pre>
<p>And I'm getting the following:</p>
<pre><code>[1, 19]
[1, 160]
</code></pre>
<p>WBNB and CAKE have 18 decimals and KONGSHIBA has 17.<br>
While CAKE's worth is currently about $27.7, WBNB is $545.41291093<br>and KONGSHIBA is $0.00000000000000000332.<br>
So I should have got back:</p>
<pre><code>[1, 19]
[1, 16000000000000000000]
</code></pre>
<p>Please advise.</p> | 2021-05-17 02:50:28.470000+00:00 | 2022-08-19 20:48:34.863000+00:00 | 2022-08-19 20:47:52.733000+00:00 | python|blockchain|smartcontracts|web3js|web3py | ['https://uniswap.org/docs/v2/core-concepts/pools/', 'https://arxiv.org/pdf/2009.14021.pdf'] | 2 |
46,907,852 | <p>Well, well, well: I have been doing for probably a decade or more and there are some older posts of mine floating. In fact, the blog post you use references my script via a five-year old email -- these still work for me. </p>
<p>My current version is attached below. I have been building this all these years on "whatever is the current Ubuntu version", with a slight upgrade delay. So like you I am currently on 17.04, and I just use <code>libicu-dev</code> which gets we the <code>libicu57</code> runtime.</p>
<p>That said, you also get r-devel "prebuilt" via Docker images from our Rocker project. This this <a href="https://arxiv.org/abs/1710.03675" rel="nofollow noreferrer">recent arXiv preprint</a> descrining the project, and mentioning <code>r-devel</code> and <code>drd</code>, both available from <a href="http://hub.docker.com" rel="nofollow noreferrer">hub.docker.com</a>.</p>
<p>My script follows. There is no magic in. You may need to remove the <code>ccache</code> or install <code>ccache</code>.</p>
<pre><code>#!/bin/sh
cd ~/svn/r-devel
R_PAPERSIZE=letter \
R_BATCHSAVE="--no-save --no-restore" \
R_BROWSER=xdg-open \
PAGER=/usr/bin/pager \
PERL=/usr/bin/perl \
R_UNZIPCMD=/usr/bin/unzip \
R_ZIPCMD=/usr/bin/zip \
R_PRINTCMD=/usr/bin/lpr \
LIBnn=lib \
AWK=/usr/bin/awk \
CC="ccache gcc" \
CFLAGS="-ggdb -pipe -std=gnu99 -Wall -pedantic" \
CXX="ccache g++" \
CXXFLAGS="-ggdb -pipe -Wall -pedantic" \
FC="ccache gfortran" \
F77="ccache gfortran" \
MAKE="make -j4" \
./configure \
--prefix=/usr/local/lib/R-devel \
--enable-R-shlib \
--without-blas \
--without-lapack \
--without-recommended-packages
make
echo "*** Done -- now run 'make install'"
</code></pre>
<p>And for what it is worth I get the <em>exact same iconv message</em> which appears to be just informative:</p>
<pre><code>checking iconv.h usability... yes checking iconv.h presence... yes
checking for iconv.h... yes checking for iconv... yes
checking whether iconv accepts "UTF-8", "latin1", "ASCII" and "UCS-*"... yes checking for iconvlist... no
checking for iconv... yes checking for iconv declaration...
extern size_t iconv (iconv_t cd, char * *inbuf, size_t *inbytesleft, char * *outbuf, size_t *outbytesleft); checking wchar.h usability... yes
checking wchar.h presence... yes
</code></pre>
<p>Your problem, then, seems to be that you lack <code>libiconv-dev</code>, and presumably <em>many more</em> such <code>-dev</code> packages: it is not just <code>libc6-dev</code>.</p>
<p><em>Edit:</em> Come to think about it, the aforementioned Dockerfiles are probably a good proxy for the packages you need. See eg <a href="https://github.com/rocker-org/drd/blob/601c703fdf2373a06706b78cf7863f8305956f79/Dockerfile#L19-L62" rel="nofollow noreferrer">here</a> for the corresponding 43 lines (!!) from drd.</p> | 2017-10-24 10:14:24.080000+00:00 | 2017-10-24 10:20:19.343000+00:00 | 2017-10-24 10:20:19.343000+00:00 | null | 46,907,554 | <p>I have been spending one day to try to compile R-devel.
I have used this <a href="http://singmann.org/installing-r-devel-on-linux/" rel="nofollow noreferrer">post</a> to do so.</p>
<p>Regardless of what I do, I have:</p>
<pre><code>[...]/src/main/sysutils.c:794: undefined reference to `libiconv'
[...]
[...]/src/main/platform.c:3052: undefined reference to `u_getVersion_54'
[...]
</code></pre>
<p>add many other similar lines, similar to the last comment of <a href="http://jtremblay.github.io/software_installation/2017/06/21/Install-R-3.4.0-and-RStudio-on-Ubuntu-16.04" rel="nofollow noreferrer">this post</a>.</p>
<p>Obviously, I have:</p>
<pre><code>$ sudo apt install libc6-dev
libc6-dev is already the newest version (2.24-9ubuntu2.2).
</code></pre>
<p>The configuration step regarding libiconv seems ok:</p>
<pre><code>checking iconv.h usability... yes
checking iconv.h presence... yes
checking for iconv.h... yes
checking for iconv... yes
checking whether iconv accepts "UTF-8", "latin1", "ASCII" and "UCS-*"... yes
checking for iconvlist... no
checking for iconv... yes
checking for iconv declaration...
extern size_t iconv (iconv_t cd, char * *inbuf, size_t *inbytesleft, char * *outbuf, size_t *outbytesleft);
</code></pre>
<p>The command <code>iconv -l</code> seems to work fine. The C file described in <a href="https://stackoverflow.com/questions/4709178/how-do-i-link-glibcs-implementation-of-iconv">this other post</a> compiles with no problem either.</p>
<p>Where should I look? I use Gnome Ubuntu 17.04.</p> | 2017-10-24 09:59:44.097000+00:00 | 2018-10-25 12:50:34.993000+00:00 | 2017-10-24 10:08:38.630000+00:00 | r|compiler-errors | ['https://arxiv.org/abs/1710.03675', 'http://hub.docker.com', 'https://github.com/rocker-org/drd/blob/601c703fdf2373a06706b78cf7863f8305956f79/Dockerfile#L19-L62'] | 3 |
60,220,052 | <p>There are a few nuances and a few open research problems in Federated Learning and this question has struck a couple of them.</p>
<ol>
<li><p><strong>Training loss looks <em>much</em> better than evaluation loss</strong>: when using Federated Averaging (the optimization algorithm used in the <a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification" rel="nofollow noreferrer">Federated Learning for Image Classification tutorial</a>) one needs to be careful interpreting metrics as they have nuanced differences from centralized model training. Especially <em>training loss</em>, which is the average over many sequence steps or batches. This means after one round, each client may have fit the model to their local data very well (obtaining a high accuracy), but after averaging these updates into the global model, the global model may still be far away from "good", resulting in a low test accuracy. Additionally, 10 rounds may be too few; one of the original academic papers on Federated Learning showed that it took at least 20 rounds to reach 99% accuracy (<a href="https://arxiv.org/abs/1602.05629" rel="nofollow noreferrer">McMahan 2016</a>) with IID data, and more than 100 rounds with non-IID data.</p></li>
<li><p><strong>BatchNorm in the federated setting</strong>: it's an open research problem how to combine the batchnorm parameters, particularly with non-IID client data. Should each new client start with fresh parameters, or receive the global model parameters? TFF may not be communicating them between the server and client (since it currently is implemented only to communicate <em>trainable</em> variables), and may be leading to unexpected behavior. It may be good to print the <code>state</code> parameters and watch what happens to them each round.</p></li>
</ol> | 2020-02-14 04:35:58.460000+00:00 | 2020-02-14 04:35:58.460000+00:00 | null | null | 60,198,252 | <p>I implemented Resnet34 model in federated images classification tutorial. After 10 rounds the training accuracy can be higher than 90%, however, the evaluation accuracy using the last round's <code>state.model</code> is always around 50%.</p>
<pre><code> evaluation = tff.learning.build_federated_evaluation(model_fn)
federated_test_data = make_federated_data(emnist_test, sample_clients)
test_metrics = evaluation(state.model, federated_test_data)
str(test_metrics)
</code></pre>
<p>I am very confused about what could be wrong with the evaluation part. Also, I printed the untrainable variables (mean and variance in BatchNorm) of the server's model, which are 0 and 1 with no updates/averaging after those rounds. Should they be like that, or could that be the problem?
Thanks very much! </p>
<p><strong>Updates:</strong> </p>
<p>The codes to prepare training data and printed results:</p>
<pre><code>len(emnist_train.client_ids)
4
emnist_train.element_type_structure
OrderedDict([('label', TensorSpec(shape=(), dtype=tf.int64, name=None)),('pixels',TensorSpec(shape=(256, 256, 3), dtype=tf.float32, name=None))])
NUM_CLIENTS = 4
NUM_EPOCHS = 1
BATCH_SIZE = 30
SHUFFLE_BUFFER = 500
def preprocess(dataset):
def element_fn(element):
return collections.OrderedDict([
('x', element['pixels']),
('y', tf.reshape(element['label'], [1])),
])
return dataset.repeat(NUM_EPOCHS).map(element_fn).shuffle(
SHUFFLE_BUFFER).batch(BATCH_SIZE)
sample_clients = emnist_train.client_ids[0:NUM_CLIENTS]
federated_train_data = make_federated_data(emnist_train, sample_clients)
preprocessed_example_dataset = preprocess(example_dataset)
sample_batch = tf.nest.map_structure(
lambda x: x.numpy(), iter(preprocessed_example_dataset).next())
def make_federated_data(client_data, client_ids):
return [preprocess(client_data.create_tf_dataset_for_client(x))
for x in client_ids]
len(federated_train_data), federated_train_data[0]
(4,<BatchDataset shapes: OrderedDict([(x, (None, 256, 256, 3)), (y, (None, 1))]), types: OrderedDict([(x, tf.float32), (y, tf.int64)])>)
</code></pre>
<p>The training and evaluation codes:</p>
<pre><code> def create_compiled_keras_model():
base_model = tf.keras.applications.resnet.ResNet50(include_top=False, weights='imagenet', input_shape=(256,256,3,))
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
prediction_layer = tf.keras.layers.Dense(2, activation='softmax')
model = tf.keras.Sequential([
base_model,
global_average_layer,
prediction_layer
])
model.compile(optimizer = tf.keras.optimizers.SGD(lr = 0.001, momentum=0.9), loss = tf.keras.losses.SparseCategoricalCrossentropy(), metrics = [tf.keras.metrics.SparseCategoricalAccuracy()])
return model
def model_fn():
keras_model = create_compiled_keras_model()
return tff.learning.from_compiled_keras_model(keras_model, sample_batch)
iterative_process = tff.learning.build_federated_averaging_process(model_fn)
state = iterative_process.initialize()
for round_num in range(2, 12):
state, metrics = iterative_process.next(state, federated_train_data)
print('round {:2d}, metrics={}'.format(round_num, metrics, state))
evaluation = tff.learning.build_federated_evaluation(model_fn)
federated_test_data = make_federated_data(emnist_test, sample_clients)
len(federated_test_data), federated_test_data[0]
(4,
<BatchDataset shapes: OrderedDict([(x, (None, 256, 256, 3)), (y, (None, 1))]), types: OrderedDict([(x, tf.float32), (y, tf.int64)])>)
test_metrics = evaluation(state.model, federated_test_data)
str(test_metrics)
</code></pre>
<p>The training and evaluations results after each round:</p>
<pre><code>round 1, metrics=<sparse_categorical_accuracy=0.5089045763015747,loss=0.7813001871109009,keras_training_time_client_sum_sec=0.008826255798339844>
<sparse_categorical_accuracy=0.49949443340301514,loss=8.0671968460083,keras_training_time_client_sum_sec=0.0>
round 2, metrics=<sparse_categorical_accuracy=0.519825279712677,loss=0.7640910148620605,keras_training_time_client_sum_sec=0.011750459671020508>
<sparse_categorical_accuracy=0.49949443340301514,loss=8.0671968460083,keras_training_time_client_sum_sec=0.0>
round 3, metrics=<sparse_categorical_accuracy=0.5099126100540161,loss=0.7513422966003418,keras_training_time_client_sum_sec=0.0039823055267333984>
<sparse_categorical_accuracy=0.49949443340301514,loss=8.0671968460083,keras_training_time_client_sum_sec=0.0>
round 4, metrics=<sparse_categorical_accuracy=0.5278897881507874,loss=0.7905193567276001,keras_training_time_client_sum_sec=0.0010638236999511719>
<sparse_categorical_accuracy=0.49949443340301514,loss=8.0671968460083,keras_training_time_client_sum_sec=0.0>
round 5, metrics=<sparse_categorical_accuracy=0.5199933052062988,loss=0.7782396674156189,keras_training_time_client_sum_sec=0.012729644775390625>
<sparse_categorical_accuracy=0.49949443340301514,loss=8.0671968460083,keras_training_time_client_sum_sec=0.0>
</code></pre> | 2020-02-12 23:13:22.600000+00:00 | 2020-12-03 06:24:58.117000+00:00 | 2020-11-28 10:57:23.627000+00:00 | tensorflow|tf.keras|resnet|tensorflow-federated|federated-learning | ['https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification', 'https://arxiv.org/abs/1602.05629'] | 2 |
56,413,068 | <p>Generally, you fuse image features (CNN) and question features (RNN), pass these to another network with softmax output that corresponds to a one-word answer. See here: <a href="https://arxiv.org/pdf/1505.00468v6.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1505.00468v6.pdf</a></p>
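<p>A minimal sketch of that fusion in Keras (all sizes below are assumptions for illustration, not values taken from the paper):</p>
<pre><code>import tensorflow as tf

image_feats = tf.keras.Input(shape=(2048,))              # e.g. pooled CNN features
question    = tf.keras.Input(shape=(20,), dtype="int32") # tokenized question
q = tf.keras.layers.Embedding(10000, 300)(question)
q = tf.keras.layers.LSTM(512)(q)                         # RNN encoding of the question
v = tf.keras.layers.Dense(512)(image_feats)
fused = tf.keras.layers.Multiply()([v, q])               # element-wise fusion
answer = tf.keras.layers.Dense(1000, activation="softmax")(fused)  # top-1000 one-word answers
model = tf.keras.Model([image_feats, question], answer)
</code></pre>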
<p>I can imagine trying a decoder network to produce sentences for answers, but those will be harder to evaluate.</p> | 2019-06-02 07:35:20.863000+00:00 | 2019-06-02 07:35:20.863000+00:00 | null | null | 56,412,965 | <p>I am trying to get me head around this but having difficulty.</p>
<p>As far as I understand: </p>
<blockquote>
<p><a href="https://towardsdatascience.com/image-captioning-in-deep-learning-9cd23fb4d8d2" rel="nofollow noreferrer">Image Captioning</a> is the process of generating textual description of an image. It uses both Natural Language Processing and Computer Vision to generate the captions.</p>
</blockquote>
<hr>
<p>And from <a href="https://arxiv.org/abs/1412.6632" rel="nofollow noreferrer">this</a> paper:</p>
<blockquote>
<p>It directly models the probability distribution of generating a word given previous words and an image.</p>
</blockquote>
<hr>
<p>So if I understand correctly, using some model which takes image and previous text as input, it generates probabilities for the next word. </p>
<p>Taking an example from "Deep Visual-Semantic Alignments for Generating Image Descriptions" <a href="https://cs.stanford.edu/people/karpathy/cvpr2015.pdf" rel="nofollow noreferrer">paper</a></p>
<p><a href="https://i.stack.imgur.com/RlRLo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RlRLo.png" alt="enter image description here"></a></p>
<hr>
<p>But how is that used in VQA (Visual Question Answering is a research area about building a computer system to answer questions presented in an image and a natural language)?</p>
<p>Is the answer to a question taken from the caption generated from the image? </p> | 2019-06-02 07:20:55.607000+00:00 | 2019-06-02 07:35:20.863000+00:00 | null | image-processing|deep-learning|computer-vision|conv-neural-network|recurrent-neural-network | ['https://arxiv.org/pdf/1505.00468v6.pdf'] | 1 |
70,480,894 | <blockquote>
<p>Please help me pair my app with LoadingCache.</p>
</blockquote>
<p>Merry X-Mas! There you go:</p>
<ol>
<li><code>LoadingCache<Key, Graph></code> applied to our code must be <code>LoadingCache<String[],Map<Character, Integer>></code> matching the input and output types of our ...</li>
<li><code>createExpensiveGraph</code> method applied to our case would be <code>charCounter</code>.</li>
<li>To pair up, we would not invoke <code>charCounter(...)</code> directly, but through a (given) cache instance, so: <code>graphs.get(...)</code>.</li>
</ol>
<hr />
<p>I refactored a "little" (simplified <code>String[]</code> to <code>String</code>, removed "half" of your classes, made the main method interactive); this is what came out:</p>
<hr />
<p>pom.xml:</p>
<pre class="lang-xml prettyprint-override"><code><project>
<!-- only defaults ..., and: -->
<dependencies>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>31.0.1-jre</version>
</dependency>
</dependencies>
<properties>
<maven.compiler.source>17</maven.compiler.source>
<maven.compiler.target>17</maven.compiler.target>
</properties>
</project>
</code></pre>
<hr />
<p>Main.java</p>
<pre class="lang-java prettyprint-override"><code>package com.stackoverflow.cache.test;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.LinkedHashMap;
import java.util.Scanner;
import java.util.concurrent.ExecutionException;

public class Main {

    static final LoadingCache<String, LinkedHashMap<Character, Integer>> CACHE = CacheBuilder.newBuilder()
            .build( // with defaults, and this loader:
                    new CacheLoader<String, LinkedHashMap<Character, Integer>>() {
                        @Override
                        public LinkedHashMap<Character, Integer> load(String key) {
                            System.out.format("Key: \"%s\" not cached.%n", key);
                            return analyze(key); // invoking "expensive calculation"!
                        }
                    });

    public static void main(String[] args) throws ExecutionException {
        try (Scanner consoleScanner = new Scanner(System.in)) {
            String word = consoleScanner.nextLine().trim(); // line wise! (for word-wise: next())
            while (!"bye".equalsIgnoreCase(word)) { // from Yoda, greetings! https://blog.codinghorror.com/new-programming-jargon/#1 ;)
                System.out.println(CACHE.get(word)); // invoke cache, instead of "expensive" calculation!
                word = consoleScanner.nextLine().trim(); // line wise! (for word-wise: next())
            }
            System.out.println("Bye!");
        }
    }

    // basically your charCounter method with single parameter + stream:
    static LinkedHashMap<Character, Integer> analyze(String arg) {
        LinkedHashMap<Character, Integer> elements = new LinkedHashMap<>();
        arg.chars().forEach((num) -> {
            Character c = (char) num;
            if (elements.containsKey(c)) {
                elements.put(c, elements.get(c) + 1);
            } else {
                elements.put(c, 1);
            }
        });
        return elements;
    }
}
</code></pre>
<hr />
<p>In- and Output:</p>
<pre class="lang-text prettyprint-override"><code>>Java is the best programming language in the world!
Key: "Java is the best programming language in the world!" not cached.
{J=1, a=5, v=1, =8, i=3, s=2, t=3, h=2, e=4, b=1, p=1, r=3, o=2, g=4, m=2, n=3, l=2, u=1, w=1, d=1, !=1}
>hello
Key: "hello" not cached.
{h=1, e=1, l=2, o=1}
>hello
{h=1, e=1, l=2, o=1}
>bye
Bye!
</code></pre>
<p>(2nd calculation of "hello" is cached.)</p>
<hr />
<p>We see: Once identifying/understanding/defining</p>
<ul>
<li>"key"</li>
<li>"graph" and</li>
<li>"expensive graph operation"</li>
</ul>
<p>, it is easy to a apply a (guava) cache to a given "operation".</p>
<p>For advanced cache configuration, please refer to <a href="https://javadoc.io/doc/com.google.guava/guava/latest/com/google/common/cache/CacheBuilder.html" rel="nofollow noreferrer">CacheBuilder javadoc</a>, for advanced usage to <a href="https://guava.dev/releases/31.0-jre/api/docs/com/google/common/cache/LoadingCache.html" rel="nofollow noreferrer">LoadingCache javadoc</a>.</p>
<p>Rather advanced and theoretical but very related to this topic/ use case: <a href="https://arxiv.org/abs/1912.03888" rel="nofollow noreferrer">Similarity Caching</a>.</p>
<hr />
<p>To receive "words" from command line arguments, we can use a <code>main()</code> method like this:</p>
<pre><code>public static void main(String[] args) throws ExecutionException {
for (String word : args) {
System.out.println(CACHE.get(word)); // ...or custom "print" method
}
}
</code></pre>
<hr />
<p>To make it completely "without external libs" (i.e. without guava), we would <em>remove/clean up</em> that dependency and then use a <code>LinkedHashMap</code>, as outlined in (the accepted answer of) <a href="https://stackoverflow.com/q/224868/592355">Easy, simple to use LRU cache in java</a>:</p>
<pre class="lang-java prettyprint-override"><code>// limited version:
static final int MAX_ENTRIES = 100;
static final Map<String, Map<Character, Integer>> CACHE = new LinkedHashMap<>(
MAX_ENTRIES + 1, // initial capacity
1.0f, // load factor: better 1. than 0.75 (in this setup!?)
true // "accessOrder" flag
) {
// eviction: "...This method is invoked by put and putAll after inserting a new entry into the map"
public boolean removeEldestEntry(Map.Entry eldest) {
return size() > MAX_ENTRIES;
}
};
</code></pre>
<p>for "unlimited" cache (might be sufficient for tutor;), just:</p>
<pre class="lang-java prettyprint-override"><code>// no limit, no "order", no evict, no outdate:
static final Map<String, Map<Character, Integer>> CACHE = new HashMap<>();
</code></pre>
<p>(For a <em>thread-safe</em> version, we'd have to:)</p>
<pre class="lang-java prettyprint-override"><code> ... CACHE = Collections.synchronizedMap(new xxxMap...);
</code></pre>
<p><a href="https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/LinkedHashMap.html" rel="nofollow noreferrer">LinkedHashMap javadoc 17</a></p>
<p>We could wrap our "cache loading" then like:</p>
<pre class="lang-java prettyprint-override"><code>static Map<Character, Integer> load(String key) {
final Map<Character, Integer> result;
if (CACHE.containsKey(key)) { // cached!
result = CACHE.get(key);
// to "refresh" key, put it again (LRU):
// CACHE.put(key, result); // here or outside the if-else
} else { // "expensive" calculation:
result = analyze(key); // ... and add to cache (unlimited, or removingEldest(imlicitely!!)):
CACHE.put(key, result);
}
return result;
}
</code></pre>
<p>and main method like:</p>
<pre><code>public static void main(String[] args) {
for (String word : args) {
System.out.println(load(word));
}
}
</code></pre>
<p>;)#</p> | 2021-12-25 15:14:27.243000+00:00 | 2022-01-01 13:18:41.100000+00:00 | 2022-01-01 13:18:41.100000+00:00 | null | 70,478,158 | <p>I'm learning programming in the language java.</p>
<p>I need to write an application that takes a string and returns the number of unique characters in the string.
It is expected that a string with the same character sequence may be passed several times to the method.
Since the counting operation can be time-consuming, the method should cache the results, so that when the method is given a string previously encountered, it returns the cached result instead of counting again.</p>
<p>At this stage, my application is already able to count and display characters</p>
<pre><code>public class Main {
public static void main(String[] args) {
String[] testArray = new String[]{"Java", "is", "the", "best", "programming",
"language", "in", "the", "world!"};
CharCounter charCounter = new CharCounter();
Print print = new Print();
print.printArgs(testArray);
print.print(charCounter.charCounter(testArray));
}
}
/**
* CharCounter should takes a string and returns the number of unique
* characters in the string.
*/
public class CharCounter {
public LinkedHashMap<Character, Integer> charCounter(String[] args) {
LinkedHashMap<Character, Integer> elements = new LinkedHashMap();
List<Character> chars = new ArrayList();
for (char c : stringToCharArray(args)) {
chars.add(c);
}
for (Character element : chars) {
if (elements.containsKey(element)) {
elements.put(element, elements.get(element) + 1);
} else {
elements.put(element, 1);
}
}
return elements;
}
/**
* stringToCharArray method - convert string array to character array *
*/
private char[] stringToCharArray(String[] args) {
String s = "";
for (String agr : args) {
if (s == "") {
s = agr;
} else {
s = s + " " + agr;
}
}
return s.toCharArray();
}
}
/**
* The Print class is intended to output the result to the console
*/
public class Print {
public void print(Map map) {
Iterator<Map.Entry<Character, Integer>> iterator
= map.entrySet().iterator();
while (iterator.hasNext()) {
Map.Entry<Character, Integer> charCounterEntry = iterator.next();
System.out.printf("\"%c\" - %d\n", charCounterEntry.getKey(),
charCounterEntry.getValue());
}
}
public void printArgs(String[] args) {
for (String arg : args) {
System.out.printf("%s ", arg);
}
System.out.println();
}
}
</code></pre>
<p>The result of the application</p>
<pre><code>Java is the best programming language in the world!
"J" - 1
"a" - 5
"v" - 1
" " - 8
"i" - 3
"s" - 2
"t" - 3
"h" - 2
"e" - 4
"b" - 1
"p" - 1
"r" - 3
"o" - 2
"g" - 4
"m" - 2
"n" - 3
"l" - 2
"u" - 1
"w" - 1
"d" - 1
"!" - 1
</code></pre>
<p>Now I need to teach my application to cache and check the input data for an already existing result.</p>
<p>I think LoadingCache from Guava will help me</p>
<pre><code>LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
.maximumSize(1000)
.expireAfterWrite(10, TimeUnit.MINUTES)
.removalListener(MY_LISTENER)
.build(
new CacheLoader<Key, Graph>() {
@Override
public Graph load(Key key) throws AnyException {
return createExpensiveGraph(key);
}
});
</code></pre>
<p>Please help me pair my app with LoadingCache.</p>
<p>To all who will respond, thanks a lot!</p> | 2021-12-25 05:29:32.980000+00:00 | 2022-01-01 13:18:41.100000+00:00 | 2021-12-25 12:32:20.387000+00:00 | java|caching | ['https://javadoc.io/doc/com.google.guava/guava/latest/com/google/common/cache/CacheBuilder.html', 'https://guava.dev/releases/31.0-jre/api/docs/com/google/common/cache/LoadingCache.html', 'https://arxiv.org/abs/1912.03888', 'https://stackoverflow.com/q/224868/592355', 'https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/LinkedHashMap.html'] | 5 |
55,633,258 | <p>According to both the <a href="https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/model.py" rel="noreferrer">code comments</a> and the <a href="https://www.pydoc.io/pypi/mrcnn-0.0.1/autoapi/model/index.html#model.rpn_class_loss_graph" rel="noreferrer">documentation</a> in the Python Package Index, these losses are defined as:</p>
<ul>
<li><strong>rpn_class_loss</strong> = RPN anchor classifier loss</li>
<li><strong>rpn_bbox_loss</strong> = RPN bounding box loss graph</li>
<li><strong>mrcnn_class_loss</strong> = loss for the classifier head of Mask R-CNN</li>
<li><strong>mrcnn_bbox_loss</strong> = loss for Mask R-CNN bounding box refinement</li>
<li><strong>mrcnn_mask_loss</strong> = mask binary cross-entropy loss for the masks head</li>
</ul>
<p>Each of these loss metrics is the sum of all the loss values calculated individually for each of the regions of interest. The general <strong>loss</strong> metric given in the log is the sum of the other five losses (you can check it by summing them up) as defined by the Mask R-CNN's authors. </p>
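<p>As a quick sanity check of that relationship you can sum the five component losses from your own training log and compare with the reported total. A small sketch, assuming a dict holding one logged value per metric with the key names shown in the question (the numbers are made up):</p>
<pre><code>log = {
    "loss": 1.85,
    "rpn_class_loss": 0.05, "rpn_bbox_loss": 0.45,
    "mrcnn_class_loss": 0.20, "mrcnn_bbox_loss": 0.55, "mrcnn_mask_loss": 0.60,
}

components = ["rpn_class_loss", "rpn_bbox_loss",
              "mrcnn_class_loss", "mrcnn_bbox_loss", "mrcnn_mask_loss"]

total = sum(log[k] for k in components)
print(total, log["loss"])   # should agree, up to rounding and any extra regularization term
</code></pre>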
<p>In terms of how these losses are calculated as per the <a href="https://arxiv.org/pdf/1703.06870.pdf" rel="noreferrer">original paper</a>, they can be described as follows (note that the definitions are quite rough for the sake of a more intuitive explanation):</p>
<ul>
<li>The classification loss values are basically dependent on the confidence score of the true class, hence the <strong>classification losses reflect</strong> how confident the model is when predicting the class labels, or in other words, <strong>how close the model is to predicting the correct class</strong>. In the case of mrcnn_class_loss, all the object classes are covered, whereas in the case of rpn_class_loss the only classification that is done is labelling the anchor boxes as foreground or background (which is the reason why this loss tends to have lower values, as conceptually there are only 'two classes' than can be predicted).</li>
<li>The <strong>bounding box loss values reflect the distance between the true box parameters</strong> -that is, the (x,y) coordinates of the box location, its width and its height- <strong>and the predicted ones</strong>. It is by its nature a regression loss, and it penalizes larger absolute differences (in an approximately exponential manner for lower differences, and linearly for larger differences - see <a href="https://www.researchgate.net/profile/Zhenhua_Feng3/publication/321180616/figure/fig4/AS:631643995918366@1527607072866/Plots-of-the-L1-L2-and-smooth-L1-loss-functions.png" rel="noreferrer">Smooth L1 loss</a> function for more insight). Hence, it ultimately shows <strong>how good the model is at locating objects</strong> within the image, in the case of rpn_bbox_loss; and how good the model is <strong>at precisely predicting the area</strong>(s) within an image <strong>corresponding to the different objects</strong> that are present, in the case of mrcnn_bbox_loss.</li>
<li>The <strong>mask loss</strong>, similarly to the classification loss, <strong>penalizes wrong per-pixel binary classifications (foreground/background</strong>, in respect to the true class label). It is calculated differently for each of the regions of interest: Mask R-CNN encodes a binary mask per class for each of the RoIs, and the mask loss for a specific RoI is calculated based only on the mask corresponding to its true class, which prevents the mask loss from being affected by class predictions.</li>
</ul>
<p>As you already said, these loss metrics are indeed training losses, and the ones with the <em>val_</em> prefix are the validation losses. Fluctuations in the validation loss can occur for several different reasons, and it's hard to guess at first sight based only on your charts. They might be caused by a learning rate that is too high (making the stochastic gradient descent overshoot when trying to find a minimum), or a validation set that is too small (which gives unreliable loss values, as small changes in the output can produce big loss value changes). </p> | 2019-04-11 13:08:26.267000+00:00 | 2019-04-11 13:08:26.267000+00:00 | null | null | 55,360,262 | <p>I use Mask-R-CNN to train my data with it. When i use TensorBoard to see the result, i have the <strong>loss,</strong> <strong>mrcnn_bbox_loss</strong>, <strong>mrcnn_class_loss</strong>, <strong>mrcnn_mask_loss</strong>, <strong>rpn_bbox_loss</strong>, <strong>rpn_class_loss</strong> and all the same 6 loss for the validation: <strong>val_loss,</strong> <strong>val_mrcnn_bbox_loss</strong> etc. </p>
<p>I want to know what is each loss exactly. </p>
<p>Also i want to know if the first 6 losses are the train loss or what are they? If they aren't the train loss, how can i see the train loss?</p>
<p>My guess is:</p>
<p><strong>loss</strong>: it's all the 5 losses in summary (but i don't know how TensorBoard summarizes it).</p>
<p><strong>mrcnn_bbox_loss</strong>: is the size of the bounding box correct or not? </p>
<p><strong>mrcnn_class_loss</strong>: is the class correct? is the pixel correctly assign to the class?</p>
<p><strong>mrcnn_mask_loss</strong>: is the shape of the instance correct or not? is the pixel correctly assign to the instance?</p>
<p><strong>rpn_bbox_loss</strong>: is the size of the bbox correct?</p>
<p><strong>rpn_class_loss</strong>: is the class of the bbox correct?</p>
<p>But i am pretty sure this is not right...</p>
<p>And are some lossed irrelevant if i have only 1 class? For example only the background and 1 other class?</p>
<p>My data have only the background and 1 other class and this is my result on TensorBoard:</p>
<p><a href="https://i.stack.imgur.com/z4plk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/z4plk.png" alt="Result 1:"></a><a href="https://i.stack.imgur.com/yVMeK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/yVMeK.png" alt="Result 2:"></a><a href="https://i.stack.imgur.com/P5J1Z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/P5J1Z.png" alt="Result 3:"></a><a href="https://i.stack.imgur.com/aIb5Z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/aIb5Z.png" alt="Result :4"></a></p>
<p>My prediction is ok, but i don't know why some losses from my validation is going up and down at the end... I thought it has to be first only down and after overfitting only up.
The prediction i used is the green line on TensorBoard with the most epochs. I am not sure if my Network is overfitted, therfore i am wondering why some losses in the validation look how they look...</p>
<p>Here is my prediction:
<a href="https://i.stack.imgur.com/ovIts.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ovIts.png" alt="Example of my Trainset:"></a>
<a href="https://i.stack.imgur.com/yPVH0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/yPVH0.png" alt="This is the Ground Truth of my Testset example:"></a>
<a href="https://i.stack.imgur.com/hnHaM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hnHaM.png" alt="This is the prediction from the Testset example:"></a></p> | 2019-03-26 14:59:43.523000+00:00 | 2021-09-07 23:16:32.937000+00:00 | 2019-03-27 12:42:00.727000+00:00 | keras|instance|image-segmentation|loss|faster-rcnn | ['https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/model.py', 'https://www.pydoc.io/pypi/mrcnn-0.0.1/autoapi/model/index.html#model.rpn_class_loss_graph', 'https://arxiv.org/pdf/1703.06870.pdf', 'https://www.researchgate.net/profile/Zhenhua_Feng3/publication/321180616/figure/fig4/AS:631643995918366@1527607072866/Plots-of-the-L1-L2-and-smooth-L1-loss-functions.png'] | 4 |
53,572,697 | <p><strong>TL;DR</strong>: Simply pick an existing system which seems easy to implement for you and seems to have reasonable accuracy. This can either be a cloud offering (for example, IBM Watson Conversation, Google DialogFlow) or a library or executable (for example, RASA NLU or Natural Language Toolkit). Choosing a system solely on accuracy is non-trivial, and if you always want the best, then you will have to switch between systems often.</p>
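<p>To give an idea of the "easy to implement" end of the spectrum, here is a person-name extractor with spaCy (a minimal sketch; it assumes spaCy is installed and the small English model has been downloaded with <code>python -m spacy download en_core_web_sm</code>):</p>
<pre><code>import spacy

nlp = spacy.load("en_core_web_sm")   # small pretrained English pipeline with an NER component

text = "Barack Obama met Angela Merkel in Berlin last week."
doc = nlp(text)

# Keep only the entities tagged as person names.
persons = [ent.text for ent in doc.ents if ent.label_ == "PERSON"]
print(persons)   # e.g. ['Barack Obama', 'Angela Merkel']
</code></pre>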
<p>Your question asks which system will give the most accurate results for recognizing person names in text, while not requiring too much computational power. The natural language processing (NLP) field is rapidly changing. To show this, we can look at the current state of the art (SOTA) for named-entity recognition (NER). <a href="https://github.com/dsindex/etagger" rel="nofollow noreferrer">This</a> Github page has a nice summary for the CONLL03 NER dataset; I will copy it here and use company names since they are easier to remember:</p>
<ol>
<li><a href="http://alanakbik.github.io/papers/coling2018.pdf" rel="nofollow noreferrer">Zalando</a>. F1 score: 0.931. Date: 24 June 2018</li>
<li><a href="https://arxiv.org/pdf/1810.04805.pdf" rel="nofollow noreferrer">Google</a>. F1 score: 0.928. Date: 31 October 2018</li>
<li><a href="https://arxiv.org/pdf/1809.08370.pdf" rel="nofollow noreferrer">Stanford / Google Brain</a>. F1 score: 0.926. Date: 22 September 2018</li>
</ol>
<p>Based on this list we observe that, at the start of 2019, a new SOTA is obtained every few months. See <a href="https://rajpurkar.github.io/SQuAD-explorer/" rel="nofollow noreferrer">https://rajpurkar.github.io/SQuAD-explorer/</a> for an updated list of benchmarks for a complex NLP task. So, since the SOTA algorithm changes each month, "the most accurate system (library)" also has to change often. Furthermore, the accuracy on your data depends not only on the system, but also on the following:</p>
<ul>
<li><strong>Used algorithm.</strong> It could be that Google has published SOTA research but not implemented it. The only way to figure it out, for sure, is continually testing all systems.</li>
<li><strong>Training data size.</strong> Although bigger is better, some algorithms can handle few examples (<em>few-shot learning</em>) better.</li>
<li><strong>Domain.</strong> An algorithm could be better suitable for handling formal governmental text instead of less formal Wikipedia text.</li>
<li><strong>Data language.</strong> Since most research is focused on showing SOTA on public data sets, they are often optimized for English. How they perform on other languages might differ.</li>
</ul>
<p>Due to all these things to consider, I would advise to pick an existing system and choose based on many requirements such as pricing and ease of use.</p> | 2018-12-01 16:13:05.210000+00:00 | 2020-08-04 08:00:32.683000+00:00 | 2020-08-04 08:00:32.683000+00:00 | null | 53,430,654 | <p>I want to recognise <strong>person</strong> name from text. But i'm getting confused which NLP library I have to use for NER. I find out following best NLP library for NER
1. Stanford coreNLP
2. Spacy
3. Google cloud.</p>
<p>I unable to find out which library will give more accurate result and good performance. Please help me here.</p> | 2018-11-22 12:06:20.370000+00:00 | 2020-08-04 08:00:32.683000+00:00 | 2018-11-22 12:16:57.027000+00:00 | nlp|stanford-nlp|spacy|named-entity-recognition|google-natural-language | ['https://github.com/dsindex/etagger', 'http://alanakbik.github.io/papers/coling2018.pdf', 'https://arxiv.org/pdf/1810.04805.pdf', 'https://arxiv.org/pdf/1809.08370.pdf', 'https://rajpurkar.github.io/SQuAD-explorer/'] | 5 |
55,180,840 | <p>Ontology enables you to link your data and reason over it automatically. It is a good question you ask, because even today the Semantic Web community is still seeking the "Killer APP" for ontology.</p>
<p>I have developed some simple ontology applications, as shown below:</p>
<p>Lost Silence:
<a href="http://arxiv.org/abs/1903.05372" rel="nofollow noreferrer">http://arxiv.org/abs/1903.05372</a></p>
<p>SARA -- A Semantic Access Point Resource Allocation Service for Heterogenous Wireless Networks:
<a href="http://eprints.gla.ac.uk/179727/" rel="nofollow noreferrer">http://eprints.gla.ac.uk/179727/</a></p>
<p>NextBus reverse proxy:
<a href="https://github.com/QianruZhou333/reverseProxy_NextBus.git" rel="nofollow noreferrer">https://github.com/QianruZhou333/reverseProxy_NextBus.git</a></p> | 2019-03-15 10:44:39.667000+00:00 | 2019-03-15 10:44:39.667000+00:00 | null | null | 24,012,254 | <p>In which scenario we are going for an Ontology? Can anyone tells some real time applications of Ontology?</p>
<p>While googling I noticed about some semantic web applications are using ontology. But still i didn't got the exact idea about semantic web applications.</p>
<p>From <a href="http://protege.stanford.edu/publications/ontology_development/ontology101-noy-mcguinness.html" rel="noreferrer">this link</a> i got something about creation of Ontologies, but still I am confusing for what we are going for Ontology by neglecting the traditional databases. And i didn't found any real time applications that using Ontology concept.</p>
<p>Thank you in advance</p> | 2014-06-03 09:59:41.997000+00:00 | 2022-09-05 18:41:47.990000+00:00 | null | ontology | ['http://arxiv.org/abs/1903.05372', 'http://eprints.gla.ac.uk/179727/', 'https://github.com/QianruZhou333/reverseProxy_NextBus.git'] | 3 |
50,389,164 | <p>TL;DR: Yes, but not that much. Unless you are considering a JPEG quality parameter below 10, you should be safe.</p>
<p>Longer version: </p>
<p>I highly recommend an article called <a href="https://arxiv.org/abs/1604.04004" rel="noreferrer">
Understanding How Image Quality Affects Deep Neural Networks</a>. As you may guess, the authors checked how different distortions (JPEG, JPEG 2000, blur, and noise) affect the performance of the usual CNN architectures (VGG, AlexNet, GoogLeNet).</p>
<p>Apparently, all tested nets perform in a similar way and only severe JPEG compressions (quality < 10) can hurt them.</p>
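<p>If you want to verify this on your own data before committing to a storage format, it is cheap to simulate: re-encode a few validation images at decreasing JPEG quality and watch how your classifier's accuracy degrades. A minimal sketch with Pillow (the classifier is passed in as a plain callable, since that part depends on your setup):</p>
<pre><code>import io
from PIL import Image

def recompress(img, quality):
    """Round-trip a PIL image through JPEG at the given quality setting."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def accuracy_under_jpeg(images, labels, predict_fn, qualities=(95, 75, 50, 25, 10, 5)):
    """images: list of PIL images; predict_fn: your classifier, mapping image -> label."""
    results = {}
    for q in qualities:
        preds = [predict_fn(recompress(im, q)) for im in images]
        results[q] = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    return results
</code></pre>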
<p>The only thing is that nothing from ResNet family was tested, but I don't see why it can be drastically different.</p> | 2018-05-17 10:27:36.480000+00:00 | 2018-05-17 10:27:36.480000+00:00 | null | null | 47,497,352 | <p>We are working with a company that has more than 2 million images in jpeg. They want to collect more images. The purpose of the images are machine classification and to find small objects like bolts and small water leaks. the number of images are high, but the examples for training are small, maybe only 100 samples or less.</p>
<p>Our suggestion to the company is to store the data in the original 10 or 12 bit png/tiff format uncompressed. they want to use the jpeg format since they can collect more data in a shorter time (4 images pre second) and do not need all that disk space.</p>
<p>does anyone know how storage of jpeg compared to png format will affect both training of samples and then finding/classification later? </p>
<p>I have searched with Google. It returns many answers on how you can improve jpeg quality by using deep learning. Rest of the answers is about how to process cats and dogs using libraries on the internet. There is one article that say that jpeg compression affects the recognition, but very little about what sort of images, what type of objects you look for etc.</p>
<p>When you look for large objects like dogs and cats, you will have many features, curves, colours, histograms and other features that can be used. Looking for very small objects with few characteristics is more complex. </p>
<p>Does anyone know of any article about this subject?
Key question: Should I store my images in png or lossless tiff or can I use jpeg compression for later use in deep learning?</p> | 2017-11-26 14:47:28.123000+00:00 | 2018-05-22 17:56:14.940000+00:00 | 2018-05-17 11:12:26.723000+00:00 | image|machine-learning|deep-learning|convolutional-neural-network | ['https://arxiv.org/abs/1604.04004'] | 1 |
47,086,312 | <p>There is a Tensorflow implementation for Windows but honestly it's always 1-2 steps behind Linux.
One thing I would add to your list (which was not already mentioned) is <a href="https://arxiv.org/pdf/1703.06870.pdf" rel="nofollow noreferrer">MaskRCNN</a></p>
<p>You can also look for some implementation of it for Tensorflow like this one: <a href="https://github.com/CharlesShang/FastMaskRCNN" rel="nofollow noreferrer">https://github.com/CharlesShang/FastMaskRCNN</a> </p> | 2017-11-02 23:40:22.040000+00:00 | 2018-01-25 16:09:42.473000+00:00 | 2018-01-25 16:09:42.473000+00:00 | null | 44,482,895 | <p>I'm looking into performing object detection (not just classification) using CNNs; I currently only have access to Windows platforms but can install <code>Linux</code> distributions if necessary. I would like to assess a number of existing techniques, but most available code is for <code>Linux</code>.</p>
<p>I am aware of the following:</p>
<ul>
<li>Faster RCNN (CNTK, Caffe w/ Matlab) </li>
<li>R-CNN (Matlab with loads of
toolboxes)</li>
<li>R-FCN (Caffe w/ Matlab)</li>
</ul>
<p>From what I can see, there are no TensorFlow implementations for Windows currently available. Am I missing anything, or do I just need to install Ubuntu if I want to try more?</p>
<p>EDIT: A windows version of YOLO can be found here: <a href="https://github.com/AlexeyAB/darknet" rel="nofollow noreferrer">https://github.com/AlexeyAB/darknet</a></p> | 2017-06-11 10:42:41.927000+00:00 | 2018-01-25 16:09:42.473000+00:00 | 2017-06-11 15:08:59.150000+00:00 | windows|machine-learning|computer-vision|deep-learning|object-detection | ['https://arxiv.org/pdf/1703.06870.pdf', 'https://github.com/CharlesShang/FastMaskRCNN'] | 2 |
73,724,035 | <p>As someone who has published on the question, this has been attempted many times in many different ways but doesn't work.</p>
<p>Please see the ICLR paper on the subject <a href="https://openreview.net/pdf?id=BJgza6VtPB" rel="nofollow noreferrer">https://openreview.net/pdf?id=BJgza6VtPB</a> "Language GANs Falling Short"</p>
<p>This is not something specific to HuggingFace, but rather a consequence of the mathematical fact that words are discrete objects: you can't slightly modify a word and slightly change the meaning of a sentence, the way you can with an image. Words are discrete.</p>
<p>A way to do achieve what you are trying to do would be with plug-in play language models.</p>
<p><a href="https://www.uber.com/blog/pplm/" rel="nofollow noreferrer">https://www.uber.com/blog/pplm/</a> <a href="https://arxiv.org/abs/1912.02164" rel="nofollow noreferrer">https://arxiv.org/abs/1912.02164</a></p>
<p>What they do is train classifiers for some desirable or undesirable characteristics, and add (or subtract) the probability of that class to the probability of each potential next token (a score computed for each potential sentence including that token). That way, you either increase or decrease the probability of generating text with that characteristic.</p>
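<p>To make that mechanism concrete, here is a much-simplified sketch of the idea (so-called weighted decoding: nudging the next-token scores toward an attribute at each step). Instead of PPLM's gradient-based update of the activations it just adds a bag-of-words bonus to the logits, and the model name, word list and bonus weight are arbitrary example choices:</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# Toy "attribute": a bag of words whose tokens we want to make more (or less) likely.
topic_words = ["ocean", "wave", "ship", "sail", "sea"]
topic_ids = sorted({i for w in topic_words for i in tok.encode(" " + w)})

def steer(prompt, steps=30, bonus=4.0):
    ids = tok.encode(prompt, return_tensors="pt")
    for _ in range(steps):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]      # scores for the next token
        logits[topic_ids] += bonus                 # push the attribute up (negative bonus pushes it down)
        next_id = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
    return tok.decode(ids[0])

print(steer("The weather today is"))
</code></pre>
<p>Note that this stays non-differentiable (it only reweights the sampling step); it steers generation without trying to backpropagate through discrete tokens.</p>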
<p>Controllable text generation is a whole field, I recommend that you do a literature review.</p> | 2022-09-14 23:05:36.610000+00:00 | 2022-09-14 23:05:36.610000+00:00 | null | null | 62,565,480 | <p>I have the following goal, which I have been trying to achieve with the Huggingface Library but I encountered some roadblocks.</p>
<p><strong>The Problem:</strong></p>
<p>I want to generate sentences in a differentiable way at training time. Why am I doing this? I want to apply a discriminator to this output to generate sentences with certain properties, which are "enforced" by the discriminator. These sentences will also be conditioned on a input sentence, so I need a Encoder Decoder Model.</p>
<p>To get around the non differentiability of argmax, I simply take the softmax output of the decoder and multiply it with my embedding matrix. Then I am taking this embedded input and feed it into a transformer discriminator, which simply classifies the input as original/fake. Then I backpropagate through the encoder decoder. Just as one would do it with a normal GAN.</p>
<p>So far I have tried to use the <code>EncoderDecoderModel</code> from Huggingface. This class has a method named generate, which generates sentences in a non differentiable way (greedy or beam-search). So I dug through the source code and tried to build my own differentiable generate method. I didn't get it to work though.</p>
<p><strong>Questions:</strong></p>
<ul>
<li>Is there a reasonably easy way to do this with the Huggingface Library, as I really want to use the pretrained models and everything else that comes with it?</li>
<li>Is there a way to invoke the forward method of the decoder and only generate one new token, not the whole sequence again?</li>
</ul>
<p>Thanks for your help, I would really appreciate it, I have been stuck on this for quiet a while now.</p> | 2020-06-24 23:16:45.103000+00:00 | 2022-09-14 23:05:36.610000+00:00 | null | nlp|pytorch|huggingface-transformers|generative-adversarial-network|discriminator | ['https://openreview.net/pdf?id=BJgza6VtPB', 'https://www.uber.com/blog/pplm/', 'https://arxiv.org/abs/1912.02164'] | 3 |
19,821,488 | <p>After accepting Vitus' answer, I discovered a different way to accomplish the goal of proving a function terminates in Agda, namely using "sized types." I am providing my answer here because it seems acceptable, and also for critique of any weak points of this answer.</p>
<p>Sized types are described:
<a href="http://arxiv.org/pdf/1012.4896.pdf">http://arxiv.org/pdf/1012.4896.pdf</a></p>
<p>They are implemented in Agda, not only MiniAgda; see here: <a href="http://www2.tcs.ifi.lmu.de/~abel/talkAIM2008Sendai.pdf">http://www2.tcs.ifi.lmu.de/~abel/talkAIM2008Sendai.pdf</a>.</p>
<p>The idea is to augment the data type with a size that allows the typechecker to more easily prove termination. Size is defined in the standard library.</p>
<pre><code>open import Size
</code></pre>
<p>We define sized natural numbers:</p>
<pre><code>data Nat : {i : Size} → Set where
  zero : {i : Size} → Nat {↑ i}
  succ : {i : Size} → Nat {i} → Nat {↑ i}
</code></pre>
<p>Next, we define predecessor and subtraction (monus):</p>
<pre><code>pred : {i : Size} → Nat {i} → Nat {i}
pred .{↑ i} (zero {i}) = zero {i}
pred .{↑ i} (succ {i} n) = n

sub : {i : Size} → Nat {i} → Nat {∞} → Nat {i}
sub .{↑ i} (zero {i}) n = zero {i}
sub .{↑ i} (succ {i} m) zero = succ {i} m
sub .{↑ i} (succ {i} m) (succ n) = sub {i} m n
</code></pre>
<p>Now, we may define division via Euclid's algorithm:</p>
<pre><code>div : {i : Size} → Nat {i} → Nat → Nat {i}
div .{↑ i} (zero {i}) n = zero {i}
div .{↑ i} (succ {i} m) n = succ {i} (div {i} (sub {i} m n) n)

data ⊥ : Set where

record ⊤ : Set where

notZero : Nat → Set
notZero zero = ⊥
notZero _ = ⊤
</code></pre>
<p>We now give division for nonzero denominators.
If the denominator is nonzero, then it is of the form b+1, and we define
divPos a (b+1) = div a b,
since div a b returns the ceiling of a/(b+1).</p>
<pre><code>divPos : {i : Size} → Nat {i} → (m : Nat) → (notZero m) → Nat {i}
divPos a (succ b) p = div a b
divPos a zero ()
</code></pre>
<p>As auxiliary:</p>
<pre><code>div2 : {i : Size} → Nat {i} → Nat {i}
div2 n = divPos n (succ (succ zero)) (record {})
</code></pre>
<p>Now we can define a divide and conquer method for computing the n-th Fibonacci number.</p>
<pre><code>fibd : {i : Size} → Nat {i} → Nat
fibd zero = zero
fibd (succ zero) = succ zero
fibd (succ (succ zero)) = succ zero
fibd (succ n) with even (succ n)
fibd .{↑ i} (succ {i} n) | true =
  let
    -- When m=n+1, the input, is even, we set k = m/2
    -- Note, ceil(m/2) = ceil(n/2)
    k = div2 {i} n
    fib[k-1] = fibd {i} (pred {i} k)
    fib[k] = fibd {i} k
    fib[k+1] = fib[k-1] + fib[k]
  in
    (fib[k+1] * fib[k]) + (fib[k] * fib[k-1])
fibd .{↑ i} (succ {i} n) | false =
  let
    -- When m=n+1, the input, is odd, we set k = n/2 = (m-1)/2.
    k = div2 {i} n
    fib[k-1] = fibd {i} (pred {i} k)
    fib[k] = fibd {i} k
    fib[k+1] = fib[k-1] + fib[k]
  in
    (fib[k+1] * fib[k+1]) + (fib[k] * fib[k])
</code></pre> | 2013-11-06 19:54:04.267000+00:00 | 2013-11-07 13:53:08.497000+00:00 | 2013-11-07 13:53:08.497000+00:00 | null | 19,642,921 | <p>Suppose we define a function</p>
<pre><code>f : N \to N
f 0 = 0
f (s n) = f (n/2) -- this / operator is implemented as floored division.
</code></pre>
<p>Agda will paint f in salmon because it cannot tell if n/2 is smaller than n. I don't know how to tell Agda's termination checker anything. I see in the standard library they have a floored division by 2 and a proof that n/2 < n. However, I still fail to see how to get the termination checker to realize that recursion has been made on a smaller subproblem.</p> | 2013-10-28 18:58:36.053000+00:00 | 2016-09-20 21:02:05.163000+00:00 | 2013-10-29 04:32:27.503000+00:00 | functional-programming|termination|agda | ['http://arxiv.org/pdf/1012.4896.pdf', 'http://www2.tcs.ifi.lmu.de/~abel/talkAIM2008Sendai.pdf'] | 2 |
60,926,330 | <p>BERT is trained on pairs of sentences, therefore it is unlikely to generalize for much longer texts. Also, BERT requires quadratic memory with the length of the text, using too long texts might result in memory issues. In most implementations, it does not accept sequences longer than 512 subwords.</p>
<p>Making pre-trained Transformers work efficiently for long texts is an active research area, you can have a look at a paper called <a href="https://arxiv.org/pdf/1904.08398.pdf" rel="nofollow noreferrer"><em>DocBERT</em></a> to have an idea what people are trying. But it will take some time until there is a nicely packaged working solution.</p>
<p>There are also other methods for document embedding, for instance <a href="https://github.com/RaRe-Technologies/gensim" rel="nofollow noreferrer">Gensim</a> implements doc2vec. However, I would still stick with TF-IDF.</p>
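<p>For the TF-IDF route, the whole profile-to-news matching fits in a few lines of scikit-learn. A minimal sketch (the texts are just the examples from the question, and in practice you would add the pre-processing discussed below):</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

profile = "I loved the speech of the president in Chicago last year"
news = [
    "Trump is going to speak in Illinois",
    "New recipe ideas for the summer season",
]

vec = TfidfVectorizer(stop_words="english")      # add lemmatization, custom stopwords, etc.
X = vec.fit_transform([profile] + news)          # row 0 = user profile, the rest = news items

scores = cosine_similarity(X[0], X[1:]).ravel()  # similarity of the profile to each news item
best = scores.argmax()
print(news[best], scores[best])
</code></pre>
<p>With plain TF-IDF this particular pair will only match on weak terms, which is exactly why the entity handling and pre-processing below matter.</p>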
<p>TF-IDF is typically very sensitive to data pre-processing. You certainly need to remove stopwords, in many languages it also pays off to do lemmatization. Given the specific domain of your texts, you can also try expanding the standard list of stop words by words that appear frequently in news stories. You can get further improvements by detecting and keeping together named entities.</p> | 2020-03-30 08:06:00.333000+00:00 | 2020-03-30 08:06:00.333000+00:00 | null | null | 60,912,329 | <p>we have a news website where we have to match news to a particular user. </p>
<p>We have to use for the matching only the user textual information, like for example the interests of the user or a brief description about them.</p>
<p>I was thinking to treat both the user textual information and the news text as documents and find document similarity. </p>
<p>In this way, I hope, that if in my profile I wrote sentences like: <em>I loved the speach of the president in Chicago last year</em>, and a news talks about: <em>Trump is going to speak in Illinois</em> I can have a match (the example is purely casual). </p>
<p>I tried, first, to embed my documents using TF-IDF and then I tried a kmeans to see if there was something that makes sense, but I don't like to much the results. </p>
<p>I think the problem derives from the poor embedding that TF-IDF gives me. </p>
<p>Thus I was thinking of using BERT embedding to retrieve the embedding of my documents and then use cosine similarity to check similarity of two document (a document about the user profile and a news). </p>
<p>Is this an approach that could make sense? Bert can be used to retrieve the embedding of sentences, but there is a way to embed an entire document?</p>
<p>What would you advice me? </p>
<p>Thank you</p> | 2020-03-29 09:31:36.833000+00:00 | 2020-04-24 23:59:20.510000+00:00 | 2020-04-24 23:59:20.510000+00:00 | nlp|document|cosine-similarity|bert-language-model | ['https://arxiv.org/pdf/1904.08398.pdf', 'https://github.com/RaRe-Technologies/gensim'] | 2 |
55,192,795 | <p>I think you are gonna think about Bayesian Learning. First, talking about <strong>uncertainty</strong>.</p>
<p>For example, given several pictures of dog breeds as training dataβwhen a user uploads a photo of his dogβthe hypothetical website should return a prediction with rather high confidence. But what should happen if a user uploads a photo of a cat and asks the website to decide on a dog breed?</p>
<p>The above is an example of out of distribution test data. The model has been trained on photos of dogs of different breeds, and has (hopefully) learnt to distinguish between them well. But the model has never seen a cat before, and a photo of a cat would lie outside of the data distribution the model was trained on. This illustrative example can be extended to more serious settings, such as MRI scans with structures a diagnostics system has never observed before, or scenes an autonomous car steering system has never been trained on.</p>
<p>A possible desired behaviour of a model in such cases would be to return a prediction (attempting to extrapolate far away from our observed data), but return an answer with the added information that the point lies outside of the data distribution. We want our model to possess some quantity conveying a high level of <strong>uncertainty</strong> with such inputs (alternatively, conveying low confidence).</p>
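<p>A minimal sketch of one common way to get such an uncertainty signal in Keras is Monte Carlo dropout, which is the idea behind the paper linked below: keep dropout active at prediction time and look at how much the repeated softmax outputs disagree (the number of passes and the thresholds here are arbitrary example values, and the model is assumed to contain Dropout layers):</p>
<pre><code>import numpy as np

def mc_predict(model, x, passes=30):
    """Run the model several times with dropout still active and stack the softmax outputs."""
    preds = np.stack([model(x, training=True).numpy() for _ in range(passes)])
    return preds.mean(axis=0), preds.std(axis=0)   # predictive mean and its spread

mean, std = mc_predict(model, x_batch)             # model, x_batch: your trained CNN and input batch
confident = (mean.max(axis=1) > 0.9) & (std.max(axis=1) < 0.05)
# Where `confident` is False, report "none of the known classes" instead of the argmax.
</code></pre>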
<p>Then, I think you could read briefly <a href="https://arxiv.org/abs/1506.02142" rel="nofollow noreferrer">this paper</a> when they also apply to classification task and generate uncertainty for classes (dog, cat...). From this paper, you can extend your finding to application using this paper, and I think you will find what you want.</p> | 2019-03-16 02:06:40.523000+00:00 | 2019-03-16 02:06:40.523000+00:00 | null | null | 55,163,522 | <p>what approach should i take when I want my CNN multi-class network to output something like <code>[0.1, 0,1]</code> when image doesn't belong
to any class. Using softmax and categorical_crossentropy for multi-class would give me output that sums up to 1 so still not what I want.
I'm new to neural networks so sorry for silly question and thanks in advance for any help.</p> | 2019-03-14 13:14:29.157000+00:00 | 2019-03-16 02:06:40.523000+00:00 | null | keras|conv-neural-network | ['https://arxiv.org/abs/1506.02142'] | 1 |
27,443,874 | <p>i stumbled across something like this lately here, but no code or service for now. <a href="http://cs.stanford.edu/people/karpathy/deepimagesent/" rel="nofollow">http://cs.stanford.edu/people/karpathy/deepimagesent/</a> or here <a href="http://arxiv.org/abs/1411.4555" rel="nofollow">http://arxiv.org/abs/1411.4555</a></p> | 2014-12-12 12:38:39.560000+00:00 | 2014-12-12 12:38:39.560000+00:00 | null | null | 27,443,810 | <p>I know there are some pretty impressive detection engines out there, which will let you input an image, and out comes whatever is the "main subject" in the image.</p>
<p>Lets say you give it a picture of a football, and it will return the text "football".</p>
<p>However I dont recall what these engines are called, and I just wonder if anyone has any pointers or names for good detection engines I can use with PHP?</p> | 2014-12-12 12:34:23.340000+00:00 | 2014-12-12 12:38:39.560000+00:00 | null | php|image-processing | ['http://cs.stanford.edu/people/karpathy/deepimagesent/', 'http://arxiv.org/abs/1411.4555'] | 2 |
67,883,417 | <p>The most common way to stabilize the training of a WGAN is to replace the <strong>Gradient Clipping</strong> technique that was used in the early W-GAN with <strong>Gradient Penalty (WGAN-GP)</strong>. This technique seems to outperform the original WGAN. The paper that describes what GP is can be found here:
<a href="https://arxiv.org/pdf/1704.00028.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1704.00028.pdf</a></p>
<p>Also, if you need any help with implementing this, you can check a nice repository that I have found here:
<a href="https://github.com/kochlisGit/Keras-GAN" rel="nofollow noreferrer">https://github.com/kochlisGit/Keras-GAN</a></p>
<p>There are also other tricks that You can use to improve the overall quality of your generated images, described in the repository. For example:</p>
<ol>
<li>Add Random <strong>Gaussian Noise</strong> at the inputs of the discriminator that decays over the time.</li>
<li>Random/Adaptive <strong>Data Augmentations</strong></li>
<li>Separate <strong>fake/real batches</strong></li>
</ol>
<p>etc.</p> | 2021-06-08 07:44:11.663000+00:00 | 2021-06-08 07:44:11.663000+00:00 | null | null | 61,066,012 | <p>I am working on a project with Wasserstein GANs and more specifically with an implementation of the improved version of Wasserstein GANs. I have two theoretical questions about wGANs regarding their stability and training process. Firstly, the result of the loss function notoriously is correlated with the quality of the result of the generated samples <a href="https://lilianweng.github.io/lil-log/2017/08/20/from-GAN-to-WGAN.html" rel="nofollow noreferrer">(that is stated here)</a>. Is there some extra bibliography that supports that argument?</p>
<p>Secondly, during my experimental phase, I noticed that training my architecture using wGANs is much faster than using a simple version of GANs. Is that a common behavior? Is there also some literature analysis about that?</p>
<p>Furthermore, one question about the continuous functions that are guaranteed by using Wasserstein loss. I am having some issues understanding this concept in practice, what it means that the normal GANs loss is not continuous function?</p> | 2020-04-06 18:00:15.493000+00:00 | 2021-06-08 07:44:11.663000+00:00 | 2020-05-01 12:37:08.957000+00:00 | python|keras|neural-network | ['https://arxiv.org/pdf/1704.00028.pdf', 'https://github.com/kochlisGit/Keras-GAN'] | 2 |
61,562,059 | <ol>
<li><p>You can check <a href="https://arxiv.org/abs/1606.03498" rel="nofollow noreferrer">Inception Score</a> and <a href="https://arxiv.org/abs/1706.08500" rel="nofollow noreferrer">Frechet Inception Distance</a> for now. And also <a href="https://machinelearningmastery.com/how-to-evaluate-generative-adversarial-networks/" rel="nofollow noreferrer">here</a>. The problem is that, since GANs do not have a unified objective function (there are two networks), there is no agreed way of evaluating and comparing GAN models. Instead, people devise metrics relating the image distribution and the generator distribution.</p></li>
<li><p>WGAN could be faster due to having more stable training procedures as opposed to vanilla GAN (Wasserstein metric, weight clipping, and gradient penalty if you are using it). I don't know if there's a literature analysis for speed, and it may not always be the case that WGAN is faster than a simple GAN. WGAN cannot find the best Nash equilibrium like GAN.</p></li>
<li><p>Think of two distributions p and q. If these distributions overlap, i.e. their domains overlap, then the KL or JS divergence is differentiable. The problem arises when p and q don't overlap. As in the WGAN paper's example, say two pdfs on 2D space, V = (0, Z) and Q = (K, Z), where K is different from 0 and Z is sampled from a uniform distribution. If you try to take the derivative of the KL/JS divergence of these two pdfs, you cannot. This is because these divergences would be a binary indicator function (equal or not), and we cannot take the derivative of such functions. However, if we use the Wasserstein loss or Earth-Mover distance, we can, since we are approximating it as a distance between two points in space (see the worked example after this list). <strong>Short story: the normal GAN loss function is continuous iff the distributions have an overlap; otherwise it is discrete.</strong></p></li>
</ol>
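<p>To make point 3 above concrete, here is that example written out (it is essentially Example 1 of the WGAN paper): let Z be uniform on [0, 1], let P_0 be the distribution of (0, Z) and P_K the distribution of (K, Z), i.e. two parallel vertical segments at horizontal positions 0 and K.</p>
<pre><code>\mathrm{JS}(P_0, P_K) =
\begin{cases}
\log 2 & K \neq 0 \\
0      & K = 0
\end{cases}
\qquad\qquad
W(P_0, P_K) = |K|
</code></pre>
<p>The JS divergence is constant for every K other than 0 and jumps at 0, so its derivative with respect to K is zero (or undefined) and gives the generator no useful signal, while the Wasserstein distance is continuous in K and its derivative always points toward K = 0.</p>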
<p>Hope this helps</p> | 2020-05-02 15:43:47.277000+00:00 | 2020-05-02 16:19:53.900000+00:00 | 2020-05-02 16:19:53.900000+00:00 | null | 61,066,012 | <p>I am working on a project with Wasserstein GANs and more specifically with an implementation of the improved version of Wasserstein GANs. I have two theoretical questions about wGANs regarding their stability and training process. Firstly, the result of the loss function notoriously is correlated with the quality of the result of the generated samples <a href="https://lilianweng.github.io/lil-log/2017/08/20/from-GAN-to-WGAN.html" rel="nofollow noreferrer">(that is stated here)</a>. Is there some extra bibliography that supports that argument?</p>
<p>Secondly, during my experimental phase, I noticed that training my architecture using wGANs is much faster than using a simple version of GANs. Is that a common behavior? Is there also some literature analysis about that?</p>
<p>Furthermore, one question about the continuous functions that are guaranteed by using Wasserstein loss. I am having some issues understanding this concept in practice, what it means that the normal GANs loss is not continuous function?</p> | 2020-04-06 18:00:15.493000+00:00 | 2021-06-08 07:44:11.663000+00:00 | 2020-05-01 12:37:08.957000+00:00 | python|keras|neural-network | ['https://arxiv.org/abs/1606.03498', 'https://arxiv.org/abs/1706.08500', 'https://machinelearningmastery.com/how-to-evaluate-generative-adversarial-networks/'] | 3 |
65,306,468 | <p>Your methodology is correct: the problem is that the outputs of the first layers of a big model like ResNet50 or VGG16 are really big. Connecting a fully connected (dense) layer to a big output (like 112x112x64) leads to a very heavy weight matrix.</p>
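<p>One cheap workaround (it is also the third suggestion in the excerpt quoted below) is to pool each block's feature map over height and width before the dense probe head. A minimal sketch with the VGG16 layer names from the question:</p>
<pre><code>import tensorflow as tf

base = tf.keras.applications.VGG16(input_shape=(224, 224, 3),
                                   include_top=False, weights="imagenet")
base.trainable = False                       # train only the probe heads

def make_probe(block_name, num_classes=1000):
    feat = base.get_layer(block_name).output
    pooled = tf.keras.layers.GlobalAveragePooling2D()(feat)   # (H, W, C) -> (C,)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(pooled)
    return tf.keras.Model(base.input, out)

probe_1 = make_probe("block1_pool")   # Dense now sees 64 features instead of 112*112*64
</code></pre>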
<p>Several strategies against that problem are described in the paper <a href="https://arxiv.org/abs/1610.01644" rel="nofollow noreferrer">Understanding intermediate layers using linear classifier probes</a>:</p>
<blockquote>
<p><strong>Practical concern : Dimension reduction on features</strong></p>
<p>Another practical problem can arise when certain layers of a neural network have an exceedingly large quantity of features. The first few layers of Inception v3, for example, have a few million features when we multiply height, width and channels. This leads to parameters for a single probe taking upwards of a few gigabytes of storage, which is disproportionately large when we consider that the entire set of model parameters takes less space than that.In those cases, we have three possible suggestions for trimming down the space of features on which we fit the probes.</p>
<ul>
<li><strong>Use only a random subset of the features (but always the same ones).</strong> This is used on the Inception v3 model in Section 4.2.</li>
<li>Project the features to a lower-dimensional space. Learn this mapping. This is probably a worse idea than it sounds because the projection matrix itself can take a lot of storage (even more than the probe parameters).</li>
<li>When dealing with features in the form of images (height, width, channels), <strong>we can perform2D pooling along the (height, width) of each channel. This reduces the number of features to the number of channels.</strong> This is used on the ResNet-50 model in Section 4.1</li>
</ul>
</blockquote> | 2020-12-15 13:05:32.727000+00:00 | 2020-12-15 13:05:32.727000+00:00 | null | null | 65,305,974 | <p>I need to get the probes of a pretrained model in TensorFlow (dataset imagenet), that is for each block of a VGG16, or ResNet50 or any other pretrained model in TensorFlow, I want to have a prediction of the class <code>y_hat</code>, so an array of zeros but for the predicted class which will be 1.</p>
<p>I have written the following code to get the outputs from each block (found in another StackOverflow question):</p>
<pre><code>IMG_SHAPE = (224, 224, 3)
model = tf.keras.applications.VGG16(input_shape = IMG_SHAPE,
include_top=False,
weights='imagenet')
# Download the weights file and then:
model = tf.keras.applications.VGG16(input_shape = IMG_SHAPE,
include_top=False,
weights=None)
pretrain_model_path = "/content/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5"
model.load_weights(pretrain_model_path)
# Get blocks output:
probe1 = model.get_layer('block1_pool').output
probe2 = model.get_layer('block2_pool').output
probe3 = model.get_layer('block3_pool').output
probe4 = model.get_layer('block4_pool').output
probe5 = model.get_layer('block5_pool').output
probe_1 = tf.keras.models.Model(inputs=model.input, outputs=probe1)
probe_2 = tf.keras.models.Model(inputs=model.input, outputs=probe2)
probe_3 = tf.keras.models.Model(inputs=model.input, outputs=probe3)
probe_4 = tf.keras.models.Model(inputs=model.input, outputs=probe4)
probe_5 = tf.keras.models.Model(inputs=model.input, outputs=probe5)
</code></pre>
<p>Now, I'm stuck, because I am trying to flat and dense the output of the block to get the prediction using softmax, but it gives me thousands of errors:</p>
<pre><code>inputs = model.input
lay = model.get_layer('block1_pool').output
x = tf.keras.layers.Flatten()(lay)
outputs = tf.keras.layers.Dense(1000, activation='softmax')(x)
model_1 = tf.keras.Model(inputs=inputs, outputs=outputs)
---------------------------------------------------------------------------
ResourceExhaustedError Traceback (most recent call last)
<ipython-input-88-22a2004a95b6> in <module>()
2 lay = model.get_layer('block1_pool').output
3 x = tf.keras.layers.Flatten()(lay)
----> 4 outputs = tf.keras.layers.Dense(1000, activation='softmax')(x)
5 model_1 = tf.keras.Model(inputs=inputs, outputs=outputs)
21 frames
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
ResourceExhaustedError: OOM when allocating tensor with shape[802816,1000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:RandomUniform]
</code></pre>
<p>But nothing seems to work. Does anybody know how I can get 'final' predictions from each block?</p> | 2020-12-15 12:35:56.960000+00:00 | 2020-12-15 13:05:32.727000+00:00 | 2020-12-15 12:48:59.877000+00:00 | python|tensorflow|keras|deep-learning | ['https://arxiv.org/abs/1610.01644'] | 1 |