a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
59,197,990 | <p>@Blade Here's the solution I came up with!</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class masked_softmax_cross_entropy_loss(nn.Module):
    r"""my version of masked tf.nn.softmax_cross_entropy_with_logits"""

    def __init__(self, weight=None):
        super(masked_softmax_cross_entropy_loss, self).__init__()
        self.register_buffer('weight', weight)

    def forward(self, input, target, mask):
        if not target.is_same_size(input):
            raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
        input = F.softmax(input, dim=1)   # softmax over the class dimension
        loss = -torch.sum(target * torch.log(input), 1)
        loss = torch.unsqueeze(loss, 1)
        mask = mask / torch.mean(mask)    # normalize mask without mutating the caller's tensor
        mask = torch.unsqueeze(mask, 1)
        loss = torch.mul(loss, mask)
        return torch.mean(loss)
</code></pre>
<p>Btw: I needed this loss function at the time (Sept 2017) because I was attempting to translate Thomas Kipf's GCN (see <a href="https://arxiv.org/abs/1609.02907" rel="nofollow noreferrer">https://arxiv.org/abs/1609.02907</a>) code from TensorFlow to PyTorch. However, I now notice that Kipf has done this himself (see <a href="https://github.com/tkipf/pygcn" rel="nofollow noreferrer">https://github.com/tkipf/pygcn</a>), and in his code, he simply uses the built-in PyTorch loss function, the negative log likelihood loss, i.e.</p>
<pre><code>loss_train = F.nll_loss(output[idx_train], labels[idx_train])
</code></pre>
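<p>For completeness (my addition, not part of the original answer): with integer class labels, PyTorch's built-in cross entropy takes raw logits and applies log-softmax plus NLL internally, so it is the usual one-line counterpart of <code>softmax_cross_entropy_with_logits</code>:</p>
<pre><code># F.cross_entropy = log_softmax + nll_loss, applied to raw logits
loss_train = F.cross_entropy(output[idx_train], labels[idx_train])
</code></pre>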
<p>Hope this helps.</p>
<p>~DV</p> | 2019-12-05 15:06:07.007000+00:00 | 2019-12-05 15:06:07.007000+00:00 | null | null | 46,218,566 | <p>I was wondering whether there is an equivalent PyTorch loss function for TensorFlow's <code>softmax_cross_entropy_with_logits</code>.</p>
62,892,947 | <p>Continuous neural networks are not known to be universal approximators (in the sense of density in $L^p$ or $C(\mathbb{R})$ for the topology of uniform convergence on compacts, i.e.: as in <a href="https://www.sciencedirect.com/science/article/pii/0893608089900208" rel="nofollow noreferrer">the universal approximation theorem</a>) but only universal interpolators in the sense of this paper:
<a href="https://arxiv.org/abs/1908.07838" rel="nofollow noreferrer">https://arxiv.org/abs/1908.07838</a></p> | 2020-07-14 10:18:39.713000+00:00 | 2020-07-14 10:18:39.713000+00:00 | null | null | 3,066,353 | <p>I realize that this is probably a very niche question, but has anyone had experience with working with continuous neural networks? I'm specifically interested in what a continuous neural network may be useful for vs what you normally use discrete neural networks for.</p>
<p>For clarity I will clear up what I mean by continuous neural network as I suppose it can be interpreted to mean different things. I do <strong>not</strong> mean that the activation function is continuous. Rather I allude to the idea of increasing the number of neurons in the hidden layer to infinity.</p>
<p>So for clarity, here is the architecture of your typical discrete NN:
<a href="https://i.stack.imgur.com/XU442.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XU442.png" alt="alt text" /></a><br />
<sub>(source: <a href="https://sites.google.com/site/garamatt/nn.png" rel="nofollow noreferrer">garamatt at sites.google.com</a>)</sub></p>
<p>The <code>x</code> are the input, the <code>g</code> is the activation of the hidden layer, the <code>v</code> are the weights of the hidden layer, the <code>w</code> are the weights of the output layer, the <code>b</code> is the bias and apparently the output layer has a linear activation (namely none.)</p>
<p>The difference between a discrete NN and a continuous NN is depicted by this figure:
<a href="https://i.stack.imgur.com/kl3uA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kl3uA.png" alt="alt text" /></a><br />
<sub>(source: <a href="https://sites.google.com/site/garamatt/nn2.png" rel="nofollow noreferrer">garamatt at sites.google.com</a>)</sub></p>
<p>That is, you let the number of hidden neurons become infinite so that your final output is an integral. In practice this means that instead of computing a deterministic sum you must approximate the corresponding integral with quadrature.</p>
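<p>As a concrete toy version of that quadrature idea (a minimal sketch under my own assumed weight and bias functions, not code from the question):</p>
<pre><code>import numpy as np

# f(x) = integral over u of w(u) * g(u*x + b(u)) du, approximated by
# Gauss-Legendre quadrature instead of a finite sum over hidden neurons
def continuous_nn(x, n_quad=50):
    u, qw = np.polynomial.legendre.leggauss(n_quad)  # nodes/weights on [-1, 1]
    w = 0.5 + 0.3 * u   # assumed output-weight function w(u)
    b = 0.1 * u         # assumed bias function b(u)
    return np.sum(qw * w * np.tanh(np.outer(x, u) + b), axis=1)

print(continuous_nn(np.array([0.0, 0.5, 1.0])))
</code></pre>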
<p>Apparently it's a common misconception about neural networks that too many hidden neurons produce over-fitting.</p>
<p>My question is, specifically: given this definition of discrete and continuous neural networks, has anyone had experience working with the latter, and what sort of things have they used them for?</p>
<p>Further description on the topic can be found here:
<a href="http://www.iro.umontreal.ca/%7Elisa/seminaires/18-04-2006.pdf" rel="nofollow noreferrer">http://www.iro.umontreal.ca/~lisa/seminaires/18-04-2006.pdf</a></p> | 2010-06-17 23:37:35.123000+00:00 | 2021-04-24 15:08:54.530000+00:00 | 2021-04-24 15:08:54.530000+00:00 | algorithm|artificial-intelligence|neural-network | ['https://www.sciencedirect.com/science/article/pii/0893608089900208', 'https://arxiv.org/abs/1908.07838'] | 2 |
43,109,898 | <p>A neural network can definitely predict/approximate multiple outputs. I have experience with a neural controller whose network produced control signals for two motors.</p>
<p>I don't have hands-on experience with TensorFlow, but the framework is from Google and quite popular, so I'm almost sure it has multi-output functionality.</p>
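<p>As a minimal hedged sketch of what such a multi-output model could look like (my own illustration, mapping the 5 house features to the 3 prices with a final Dense layer of size 3):</p>
<pre><code>import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(5,)),
    tf.keras.layers.Dense(3)  # selling, buying and renting price
])
model.compile(optimizer='adam', loss='mse')
model.summary()
</code></pre>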
<p>There is a nice example <a href="https://arxiv.org/pdf/1609.07378.pdf" rel="nofollow noreferrer">of such a model.</a> </p> | 2017-03-30 06:14:18.087000+00:00 | 2017-03-30 06:14:18.087000+00:00 | null | null | 43,109,488 | <p>Need some advice here.</p>
<p>I am trying to build a model that can predict 3 different output features when 5 input features are given.</p>
<p>for example,
5 input features: size of the house, house floor, house condition, number of rooms, parking.
3 output features: price for selling, price for buying, price for renting</p>
<p>What I am confused about right now is: is it possible for the trained model to predict all 3 outputs? What I found in others' examples/tutorials is that they mostly try to do only one thing with their model.</p>
<p>Sorry if my explanations are bad, I am new to tensorflow and machine learning.</p> | 2017-03-30 05:46:42.243000+00:00 | 2017-03-30 06:53:00.837000+00:00 | null | machine-learning|tensorflow|neural-network|regression | ['https://arxiv.org/pdf/1609.07378.pdf'] | 1 |
27,892,571 | <p>Solving several polynomial equations in several variables is a hard problem. Doing so in polynomial time in the average case <a href="http://en.wikipedia.org/wiki/Smale%27s_problems" rel="nofollow">is Smale's 17th problem.</a> It is unlikely that you will find a fast and simple algorithm for doing so that actually works.</p>
<p>You might look at "Ideals, varieties, and algorithms" by Cox, Little, and O'Shea for an intro to Groebner bases. Buchberger's algorithm finds a Groebner basis for a given polynomial ideal. You can find all solutions of a given polynomial system using a Groebner basis for the ideal generated by the polynomials, though the solution comes in a slightly awkward form.</p>
<p>Newton's method is a basic method for solving a system of several nonlinear equations in several variables. Applied naively, Newton's method is heuristic; it won't always find a solution to a system even if a solution exists. However, <em>if</em> Newton's method converges, <em>then</em> it converges really fast. Thus the challenge of the theory problem posed by Smale lies in finding a provably good initial guess to start Newton's method from.</p>
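<p>For illustration, here is a minimal multivariate Newton sketch on a toy 3-variable polynomial system (the system and starting point are my own assumptions, not from the question):</p>
<pre><code>import numpy as np

def F(v):  # toy system: three polynomials in x, y, z
    x, y, z = v
    return np.array([x**2 + y - 1, y**2 + z - 2, z**2 + x - 3])

def J(v):  # its Jacobian matrix
    x, y, z = v
    return np.array([[2*x, 1.0, 0.0],
                     [0.0, 2*y, 1.0],
                     [1.0, 0.0, 2*z]])

v = np.array([1.0, 1.0, 1.0])           # initial guess; its quality matters
for _ in range(50):
    step = np.linalg.solve(J(v), F(v))  # solve J(v) @ step = F(v)
    v = v - step
    if np.linalg.norm(step) < 1e-12:    # converged
        break
print(v, F(v))
</code></pre>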
<p><a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.211.3321" rel="nofollow">Beltran and Pardo made considerable progress on Smale's 17th problem</a>, giving an algorithm that works on the average for systems with bounded degree using real-number arithmetic. This has since been <a href="http://arxiv.org/abs/1205.0869" rel="nofollow">turned into a finite-precision algorithm by Briquel, Cucker, Pena, and Roshchina</a>. Fascinating as they are, I'm not aware of any implementations, or any attempts at implementations, of these ideas; we're still very, very far away from having <em>usable code</em> for solving systems of polynomial equations.</p> | 2015-01-11 21:59:39.443000+00:00 | 2015-01-11 21:59:39.443000+00:00 | null | null | 27,891,506 | <p>I'm looking for a fast algorithm to solve a system of N polynomial equations in 3 unknown variables. That is, given the N+1 functions <code>F0(x,y,z), F1(x,y,z), ..., FN(x,y,z)</code>, I want to find <code>x, y, z</code> such that <code>F0(x,y,z) = F1(x,y,z) = ... = FN(x,y,z) = 0</code>. </p>
<p>I've tried finding the solution in several different places, but I could only find very advanced papers on topics such as algebraic geometry or cryptography. What I need, though, is a simple/quick algorithm that returns a fast numerical solution. Is there such an algorithm?</p>
62,100,576 | <p>It is correct, because you are testing whether all of your p-values come from a uniform distribution (i.e. all null hypotheses are true). The alternative hypothesis is that at least one of them is false, which in your case is very plausible.</p>
<p>We can simulate this by drawing 1000 times from a random uniform distribution, each draw the length of your p-value list:</p>
<pre><code>import numpy as np
from scipy.stats import combine_pvalues
from matplotlib import pyplot as plt

# draw 1000 sets of p-values of the same length as yours, under the null
random_p = np.random.uniform(0, 1, (1000, len(p_values_list)))
res = np.array([combine_pvalues(i, method='fisher', weights=None) for i in random_p])
plt.hist(res[:, 0])  # histogram of the simulated Fisher chi-square statistics
</code></pre>
<p><a href="https://i.stack.imgur.com/Iqdvw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iqdvw.png" alt="enter image description here"></a></p>
<p>From your results, the chi-square is 62.456, which is really huge and nowhere near the simulated chi-square statistics above.</p>
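<p>For reference, Fisher's statistic is just -2 * sum(ln p_i) with 2k degrees of freedom; a quick check (my addition, not from the original answer) reproduces both of your numbers:</p>
<pre><code>import numpy as np
from scipy.stats import chi2

p = [8.017444955844044e-06, 0.1067379119652372, 5.306374345615846e-05,
     0.7234201655194492, 0.13050605094545614, 0.0066989543716175,
     0.9541246420333787]
stat = -2 * np.sum(np.log(p))           # Fisher's chi-square statistic: ~62.456
print(stat, chi2.sf(stat, 2 * len(p)))  # survival function gives ~4.33e-08
</code></pre>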
<p>One thing to note is that the combination you did here does not take directionality into account; if that matters for your test, you might want to consider using Stouffer's Z along with weights. Another sane check is to run a simulation like the one above, generating lists of p-values under the null hypothesis, and see how they differ from what you observed.</p>
<p><a href="https://arxiv.org/pdf/1707.06897.pdf" rel="nofollow noreferrer">Interesting paper, but maybe a bit on the statistics side</a></p> | 2020-05-30 09:37:17.953000+00:00 | 2020-05-30 09:37:17.953000+00:00 | null | null | 62,077,181 | <p>I have to combine p-values and get one p-value.
I'm using the scipy.stats.combine_pvalues function, but it is giving a very small combined p-value. Is that normal?</p>
<p>e.g.:</p>
<pre><code>>>> import scipy
>>> p_values_list=[8.017444955844044e-06, 0.1067379119652372, 5.306374345615846e-05, 0.7234201655194492, 0.13050605094545614, 0.0066989543716175, 0.9541246420333787]
>>> test_statistic, combined_p_value = scipy.stats.combine_pvalues(p_values_list, method='fisher',weights=None)
>>> combined_p_value
4.331727536209026e-08
</code></pre>
<p>As you can see, <strong>the combined_p_value is smaller than any individual p-value in p_values_list</strong>.
How can that be?</p>
<p>Thanks in advance,
Burcak</p> | 2020-05-29 01:25:56.903000+00:00 | 2020-06-03 20:09:57.847000+00:00 | 2020-06-03 20:09:57.847000+00:00 | python|scipy|statistics|p-value | ['https://i.stack.imgur.com/Iqdvw.png', 'https://arxiv.org/pdf/1707.06897.pdf'] | 2 |
52,706,269 | <p>Weights are initialized randomly in neural networks, so it is possible to get different results by design. If you think about how backpropagation works and how the cost function is minimized, you will notice that you don't have any guarantee that your network will find the global minimum. Fixing the seed is one idea to get reproducible results, but on the other hand you limit your network to a fixed starting position, from which it will probably never reach the global minimum.</p>
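<p>A minimal seeding sketch (my addition, not from the original answer; note that exact reproducibility may additionally require single-threaded, CPU-only execution):</p>
<pre><code>import random
import numpy as np
import tensorflow as tf

random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)  # on TensorFlow 2.x: tf.random.set_seed(42)
</code></pre>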
<p>A lot of complex models, especially LSTMs, are unstable. You could look at convolutional approaches; I noticed they perform almost equally well and are much more stable.
<a href="https://arxiv.org/pdf/1803.01271.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1803.01271.pdf</a></p> | 2018-10-08 16:09:58.153000+00:00 | 2018-10-08 16:09:58.153000+00:00 | null | null | 52,695,534 | <p>I have one LSTM model like below:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Conv1D, LSTM, Dense, Dropout

model = Sequential()
model.add(Conv1D(3, 32, input_shape=(60, 12)))
model.add(LSTM(units=256, return_sequences=False, dropout=0.25))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()
</code></pre>
<p>Each time I use the same dataset to train it, I will get a different model. Most of the time, the performance of the trained model is acceptable, but sometimes it is really bad. I think there is some randomness during training or initialization. So how can I fix everything to get the same model from each training run? </p> | 2018-10-08 04:48:37.733000+00:00 | 2018-10-08 16:09:58.153000+00:00 | 2018-10-08 05:56:36.450000+00:00 | python|tensorflow|keras|lstm | ['https://arxiv.org/pdf/1803.01271.pdf'] | 1
65,256,601 | <p><a href="https://arxiv.org/pdf/1603.00831.pdf" rel="nofollow noreferrer">MOT16: A Benchmark for Multi-Object Tracking</a> (see <em>4.1.5 Multiple Object Tracking Precision</em>) defines MOTP as a measure of bounding box overlap.</p>
<blockquote>
<p>MOTP thereby gives the average overlap between all correctly matched hypotheses and their respective objects and ranges between 50% and 100%.</p>
</blockquote>
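<p>Concretely (paraphrasing the paper, so treat this as a sketch of the definition): MOTP = &sum;<sub>t,i</sub> d<sub>t,i</sub> / &sum;<sub>t</sub> c<sub>t</sub>, where d<sub>t,i</sub> is the bounding-box overlap (IoU) of matched hypothesis i in frame t and c<sub>t</sub> is the number of matches in frame t. In this overlap formulation higher is better, which is why these benchmarks report it as a percentage to be maximized.</p>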
<p>You can check the precise formula in the paper.</p> | 2020-12-11 18:17:16.293000+00:00 | 2020-12-11 18:17:16.293000+00:00 | null | null | 62,049,456 | <p><strong>Multiple Object Tracking Precision</strong> (MOTP) is one of the metrics defined in the <a href="https://media.proquest.com/media/pq/classic/doc/2314480661/fmt/pi/rep/NONE?_s=8WBofXOxkXhvgRhw8K10leO5bJQ%3D" rel="nofollow noreferrer">Clear MOT paper</a> for evaluating multiple object tracking algorithms. In this paper, it is defined as the average distance between the predicted object location and the ground-truth object location, over all predictions that are successfully matched to a ground-truth. This distance could either be absolute (pixel) distance, or, more commonly I think in the case of objects being denoted by bounding boxes, <code>1-IoU</code>, the intersection-over-union metric between the ground truth and the predicted bounding box. In either case, you want the distance to be small, so the MOTP metric should be as close to zero as possible.</p>
<p>This is where I am confused, because in some multiple object tracking benchmarks (see <a href="http://detrac-db.rit.albany.edu/TraRet" rel="nofollow noreferrer">UA Detrac</a> and <a href="https://motchallenge.net/results/CVPR_2019_Tracking_Challenge/?chl=11&orderBy=MOTP&orderStyle=DESC&det=Public" rel="nofollow noreferrer">MOT Challenge</a>), MOTP is listed as a percentage and the goal is for MOTP to be as high as possible. The MOT challenge website even cites the CLEAR MOT metrics as their source for this metric, when the definitions are clearly dissimilar!</p>
<p>So, to put my question succinctly, why do these benchmarks use a percentage for MOTP instead of an absolute value, and why is the goal for it to be as high as possible? What does this metric actually represent?</p> | 2020-05-27 17:39:31.130000+00:00 | 2020-12-11 18:17:16.293000+00:00 | null | computer-vision|video-tracking | ['https://arxiv.org/pdf/1603.00831.pdf'] | 1 |
8,232,155 | <p>Creating a system that can scale out means taking some trade-offs - one of these is facilitating "idempotent" operations in your application. </p>
<p>This means that you would either:</p>
<ul>
<li><p>assume that the data was written somewhere and that the node will eventually become consistent</p></li>
<li><p>fire the entire contents of the write again, perhaps sleeping a given amount of time, or retrying at a less restrictive consistency level (see the sketch after this list)</p></li>
</ul>
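<p>A generic retry sketch of the second option (illustrative only, not a Cassandra-specific API): because the batch is idempotent, replaying the entire mutation on a timeout is safe regardless of how much of it was applied.</p>
<pre><code>import time

def idempotent_write(execute_batch, attempts=3, backoff_s=0.5):
    for attempt in range(attempts):
        try:
            return execute_batch()   # fire the entire batch again each time
        except TimeoutError:         # stand-in for TimedOutException
            time.sleep(backoff_s * (attempt + 1))
    raise RuntimeError("write still timing out after retries")
</code></pre>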
<p>A good description of this approach can be found in section 6 of Pat Helland's "Building on Quicksand" paper: <a href="http://arxiv.org/pdf/0909.1788" rel="nofollow">http://arxiv.org/pdf/0909.1788</a></p> | 2011-11-22 18:51:00.283000+00:00 | 2011-11-22 18:51:00.283000+00:00 | null | null | 8,229,212 | <p>I execute batch update which modifies few rows within few column families. In case of TimedOutException some data could be modified, but possibly not whole set....</p>
<p>In order to implement a compensating transaction, I would need to know what data (rows) was modified - is there a way to find this out? Does the exception contain this information?</p>
<p>Thanks,
Maciej</p> | 2011-11-22 15:22:32.017000+00:00 | 2011-11-22 18:51:00.283000+00:00 | null | cassandra | ['http://arxiv.org/pdf/0909.1788'] | 1 |
52,765,677 | <p>Yes, it's completely normal. This happens because when you do inference on a GPU (or even on multicore CPUs), increasing the batch size allows better use of the GPU's parallel computation resources, decreasing the time per sample in the batch. If you use a small batch size, you are wasting computational resources that are available on the GPU.</p>
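<p>As a toy illustration of the effect (my own sketch; a plain matrix multiply stands in for the model's forward pass), per-sample time shrinks as fixed overheads and vectorized math are amortized over the batch:</p>
<pre><code>import time
import numpy as np

W = np.random.rand(4096, 4096).astype(np.float32)
for batch in (1, 10, 100, 1000):
    x = np.random.rand(batch, 4096).astype(np.float32)
    t0 = time.time()
    _ = x @ W
    dt = time.time() - t0
    print("batch=%4d  total=%.4fs  per-sample=%.6fs" % (batch, dt, dt / batch))
</code></pre>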
<p><a href="https://arxiv.org/abs/1605.07678" rel="nofollow noreferrer">This paper</a> describes the same effect, and one of the Figures contains a plot that shows inference time per image versus batch size. It shows the same effect as you are seeing.</p> | 2018-10-11 17:16:05.493000+00:00 | 2018-10-11 17:16:05.493000+00:00 | null | null | 52,765,373 | <p>I am using my trained model to make predictions (CPU only). I observe that both on Tensorflow and Keras with Tensorflow backend, the prediction time per sample is much lower when a batch of samples is used as compared to an individual sample. Moreover, the time per sample seems to go down with increasing batch size up to the limits imposed by memory. As an example, on pure Tensorflow, prediction of a single sample takes ~ 1.5 seconds , on 100 samples it is ~ 17 seconds (per sample time ~ 0.17s) on 1000 samples it is ~ 93 seconds (the per sample time ~ 0.093s). </p>
<p>Is this normal behavior? If so, is there an intuitive explanation for this? I guess it might have something to do with initializing the graph, but I need some clarification. Also, why does the per sample time go down as we increase the number of samples for prediction? In my use case, I have to predict on individual samples as and when they become available. So, obviously, I would be losing quite a bit in terms of speed if this is the way things work. </p>
<p>Thanks in advance for your help. </p>
<p>Edit: I am adding a minimal working example. I have one image input and 4 vector inputs to my model, which produces 4 outputs. I am initializing all inputs to 0 for speed test (I guess the actual values don't matter much for speed?). The initialization time and inference time are calculated separately. I find that the initialization time is a fraction of the inference time (~0.1s for 100 samples). </p>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import time
import numpy as np
import tensorflow as tf

t00 = time.time()

graph = tf.Graph()
graph_def = tf.GraphDef()

with open("output_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
with graph.as_default():
    tf.import_graph_def(graph_def)

# One image and 4 auxiliary scalar inputs
img_input_layer = "input"
qp4_input_layer = "qp4"
qp3_input_layer = "qp3"
qp2_input_layer = "qp2"
qp1_input_layer = "qp1"

input_name = "import/" + img_input_layer
qp4_input_name = "import/" + qp4_input_layer
qp3_input_name = "import/" + qp3_input_layer
qp2_input_name = "import/" + qp2_input_layer
qp1_input_name = "import/" + qp1_input_layer

input_operation_img = graph.get_operation_by_name(input_name)
input_operation_qp4 = graph.get_operation_by_name(qp4_input_name)
input_operation_qp3 = graph.get_operation_by_name(qp3_input_name)
input_operation_qp2 = graph.get_operation_by_name(qp2_input_name)
input_operation_qp1 = graph.get_operation_by_name(qp1_input_name)

output_operation = []
for i in range(4):
    output_operation.append(graph.get_operation_by_name("import/" + "output_" + str(i)).outputs)

# Initializing dummy inputs
n = 100  # Number of samples for inference
img = np.zeros([n, 64, 64, 1])
qp4 = np.zeros([n, 1, 1, 1])
qp3 = np.zeros([n, 2, 2, 1])
qp2 = np.zeros([n, 4, 4, 1])
qp1 = np.zeros([n, 8, 8, 1])

t01 = time.time()
print("Initialization time", t01 - t00)

t0 = time.time()
with tf.Session(graph=graph) as sess:
    results = sess.run(output_operation,
                       {input_operation_img.outputs[0]: img,
                        input_operation_qp4.outputs[0]: qp4,
                        input_operation_qp3.outputs[0]: qp3,
                        input_operation_qp2.outputs[0]: qp2,
                        input_operation_qp1.outputs[0]: qp1})
    # print(results)
t1 = time.time()
print("Inference time", t1 - t0)
</code></pre>
36,394,203 | <p>Did you try <strong>proquints</strong>?</p>
<p>A proquint is a PRO-nounceable QUINT-uplet of alternating unambiguous consonants and vowels, for example: "lusab".</p>
<p>I think they meet almost all your requirements.</p>
<p>See the proposal <a href="http://arxiv.org/html/0901.4016" rel="nofollow">here</a>.
And <a href="https://github.com/dsw/proquint" rel="nofollow">here</a> is the official implementation in C and Java.</p>
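<p>The encoding itself is tiny; here is a minimal Python sketch of one quintuplet following the proposal's 4-2-4-2-4 bit split (my own transcription, so double-check against the official implementations):</p>
<pre><code>CONS = "bdfghjklmnprstvz"   # 16 consonants, 4 bits each
VOWELS = "aiou"             # 4 vowels, 2 bits each

def quint(n16):
    """Encode a 16-bit integer as one pronounceable quintuplet."""
    return (CONS[(n16 >> 12) & 0xF] + VOWELS[(n16 >> 10) & 0x3] +
            CONS[(n16 >> 6) & 0xF] + VOWELS[(n16 >> 4) & 0x3] +
            CONS[n16 & 0xF])

print(quint(0x7F00), quint(0x0001))  # -> lusab babad (127.0.0.1 in the proposal)
</code></pre>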
<p>I've worked on a port to .NET that you can download as <a href="https://github.com/thepirat000/Proquint.NET" rel="nofollow">Proquint.NET</a>. </p> | 2016-04-04 03:38:36.647000+00:00 | 2016-04-04 03:38:36.647000+00:00 | null | null | 36,392,404 | <p>I'm looking for an algorithm which generates identifiers suitable for both, external use in e.g. URLs as well as persistence with the following requirements:</p>
<ul>
<li><strong>Short</strong>, like a max. of 8 characters</li>
<li><strong>URL-friendly</strong>, so no special characters</li>
<li><strong>Human-friendly</strong>, e.g. no ambigous characters like L/l, 0/O</li>
<li><strong>Incremental</strong> for fast indexing</li>
<li><strong>Random</strong> to prevent guessing without knowing the algorithm (would be nice, but not important)</li>
<li><strong>Unique</strong> without requiring to check the database</li>
</ul>
<p>I looked at various solutions, but all the ones I found have some major tradeoffs. For example: </p>
<ul>
<li>GUID: Too long, not incremental</li>
<li>GUID base64 encoded: Still too long, not incremental</li>
<li>GUID ascii85 encoded: Short, not incremental, too many unsuitable characters</li>
<li>GUID encodings like base32, base36: Short, but loss of information</li>
<li>Comb GUID: Too long, however incremental</li>
<li>All others based on random: Require checking the DB for uniqueness</li>
<li>Time-based: Prone to collisions in clustered or multi-threaded environments</li>
</ul>
<hr>
<p><strong>Edit</strong>: Why has this been marked off-topic? The requirements describe a specific problem to which numerous legitimate solutions can be provided. In fact, some of the solutions here are so good, I'm struggling with choosing the one to mark as answer.</p> | 2016-04-03 23:11:37.443000+00:00 | 2016-04-04 05:27:15.310000+00:00 | 2016-04-04 05:27:15.310000+00:00 | c#|.net|guid|uniqueidentifier|identifier | ['http://arxiv.org/html/0901.4016', 'https://github.com/dsw/proquint', 'https://github.com/thepirat000/Proquint.NET'] | 3 |
65,642,494 | <p>To provide an alternate view to the answer that Khalid <a href="https://stackoverflow.com/questions/57457817/adding-batch-normalization-decreases-the-performance">linked in the comments</a>, which puts a stronger focus on generalization performance rather than training loss, consider this:</p>
<p>Batch Normalization has been postulated to have a regularizing effect. <a href="https://arxiv.org/pdf/1809.00846.pdf" rel="nofollow noreferrer">Luo et al.</a> look at BN as a decomposition into population normalization and gamma decay and observe training loss curves similar to yours (comparing BN to no BN; note, however, that they use vanilla SGD and not Adam). There are a couple of things that affect BN (as also outlined in Khalid's <a href="https://stackoverflow.com/questions/57457817/adding-batch-normalization-decreases-the-performance">link</a>): the batch size, for example, should on the one hand be large enough for robust estimation of population parameters, yet with increasing batch size generalization performance can also drop (see Luo et al.'s paper: the gist is that lower batch sizes result in noisy population parameter estimates, essentially perturbing the input).</p>
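<p>A tiny illustration of that input perturbation (my addition, not from the paper): in train mode a BatchNorm layer normalizes with batch statistics, so the output for the same sample changes with the rest of the batch:</p>
<pre><code>import torch

torch.manual_seed(0)
bn = torch.nn.BatchNorm1d(1).train()
x = torch.tensor([[1.0]])
batch_a = torch.cat([x, torch.randn(7, 1)])
batch_b = torch.cat([x, torch.randn(7, 1)])
print(bn(batch_a)[0], bn(batch_b)[0])  # different normalized values for x
</code></pre>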
<p>In your case I would not intuitively have expected a big difference (given how your data is set up), but maybe someone deeper into the theoretical analysis of BN can still provide insights.</p> | 2021-01-09 12:14:04.923000+00:00 | 2021-01-09 12:14:04.923000+00:00 | null | null | 65,637,165 | <p>I am trying to implement the batch normalization with Pytorch and use a simple fully connected neural network to approximate a given function.</p>
<p>The code is as follows. The result shows that the neural network without the batch normalization performs better than that with the batch normalization technique. This means that the batch normalization makes the training even worse. Could someone explain this result? Thanks!</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import torch

class Net(torch.nn.Module):
    def __init__(self, num_inputs, num_outputs, hidden_size=256, is_bn=True):
        super(Net, self).__init__()
        self.num_inputs = num_inputs
        self.num_outputs = num_outputs
        self.is_bn = is_bn
        # no bias is needed if batch normalization is used
        if self.is_bn:
            self.linear1 = torch.nn.Linear(num_inputs, hidden_size, bias=False)
            self.linear2 = torch.nn.Linear(hidden_size, hidden_size, bias=False)
        else:
            self.linear1 = torch.nn.Linear(num_inputs, hidden_size)
            self.linear2 = torch.nn.Linear(hidden_size, hidden_size)
        self.linear3 = torch.nn.Linear(hidden_size, num_outputs)
        if self.is_bn:
            self.bn1 = torch.nn.BatchNorm1d(hidden_size)
            self.bn2 = torch.nn.BatchNorm1d(hidden_size)
        self.activation = torch.nn.ReLU()

    def forward(self, inputs):
        x = inputs
        if self.is_bn:
            x = self.activation(self.bn1(self.linear1(x)))
            x = self.activation(self.bn2(self.linear2(x)))
        else:
            x = self.activation(self.linear1(x))
            x = self.activation(self.linear2(x))
        out = self.linear3(x)
        return out

torch.manual_seed(0)  # reproducible

Nx = 100
x = torch.linspace(-1., 1., Nx)
x = torch.reshape(x, (Nx, 1))
y = torch.sin(3 * x)

fcn_bn, fcn_no_bn = Net(num_inputs=1, num_outputs=1, is_bn=True), Net(num_inputs=1, num_outputs=1, is_bn=False)

criterion = torch.nn.MSELoss()
optimizer_bn = torch.optim.Adam(fcn_bn.parameters(), lr=0.001)
optimizer_no_bn = torch.optim.Adam(fcn_no_bn.parameters(), lr=0.001)

total_epoch = 5000

# record loss history
loss_history_bn = np.zeros(total_epoch)
loss_history_no_bn = np.zeros(total_epoch)

fcn_bn.train()
fcn_no_bn.train()

for epoch in range(total_epoch):
    optimizer_bn.zero_grad()
    loss = criterion(fcn_bn(x), y)
    loss_history_bn[epoch] = loss.item()
    loss.backward()
    optimizer_bn.step()

    optimizer_no_bn.zero_grad()
    loss = criterion(fcn_no_bn(x), y)
    loss_history_no_bn[epoch] = loss.item()
    loss.backward()
    optimizer_no_bn.step()

    if epoch % 1000 == 0:
        print("epoch: %d; MSE (with bn): %.2e; MSE (without bn): %.2e" % (epoch, loss_history_bn[epoch], loss_history_no_bn[epoch]))

fcn_bn.eval()
fcn_no_bn.eval()

plt.figure()
plt.semilogy(np.arange(total_epoch), loss_history_bn, label='neural network (with bn)')
plt.semilogy(np.arange(total_epoch), loss_history_no_bn, label='neural network (without bn)')
plt.legend()

plt.figure()
plt.plot(x, y, '-', label='exact')
plt.plot(x, fcn_bn(x).detach(), 'o', markersize=2, label='neural network (with bn)')
plt.plot(x, fcn_no_bn(x).detach(), 'o', markersize=2, label='neural network (without bn)')
plt.legend()

plt.figure()
plt.plot(x, np.abs(fcn_bn(x).detach() - y), 'o', markersize=2, label='neural network (with bn)')
plt.plot(x, np.abs(fcn_no_bn(x).detach() - y), 'o', markersize=2, label='neural network (without bn)')
plt.legend()

plt.show()
</code></pre>
<p>The result is as follows:</p>
<pre><code>epoch: 0; MSE (with bn): 3.99e-01; MSE (without bn): 4.84e-01
epoch: 1000; MSE (with bn): 4.70e-05; MSE (without bn): 1.27e-06
epoch: 2000; MSE (with bn): 1.81e-04; MSE (without bn): 7.93e-07
epoch: 3000; MSE (with bn): 2.73e-04; MSE (without bn): 7.45e-07
epoch: 4000; MSE (with bn): 4.04e-04; MSE (without bn): 5.68e-07
</code></pre> | 2021-01-08 22:18:54.170000+00:00 | 2021-01-09 12:14:04.923000+00:00 | 2021-01-08 22:59:25.240000+00:00 | python|neural-network|pytorch|batch-normalization | ['https://stackoverflow.com/questions/57457817/adding-batch-normalization-decreases-the-performance', 'https://arxiv.org/pdf/1809.00846.pdf', 'https://stackoverflow.com/questions/57457817/adding-batch-normalization-decreases-the-performance'] | 3 |
57,308,661 | <p><a href="https://slurm.schedmd.com/overview.html" rel="noreferrer">Slurm</a> is open source job scheduling system for large and small Linux clusters. It is mainly used as Workload Manager/Job scheduler. Mostly used in HPC (High Performance Computing) and sometimes in BigData. </p>
<p><a href="https://kubernetes.io/" rel="noreferrer">Kubernetes</a> is an orchestration system for Docker containers using the concepts of ”labels” and ”pods” to group containers into logical units. It was mainly created to run micro-services and AFAIK currently <code>Kubernetes</code> is not supporting Slurm.</p>
<p>Slumr as Job scheduler have more scheduling options than Kubernetes, but K8s is container orchestration system not only Job scheduler. For example <code>Kubernetes</code> is supporting Array jobs and <code>Slurm</code> supports Parallel and array jobs. If you want to dive in to scheduling check <a href="https://arxiv.org/pdf/1705.03102.pdf" rel="noreferrer">this</a> article.</p>
<p>As I mentioned before, Kubernetes is more focused on container orchestration and Slumr is focused on Job/Workload scheduling.
Only thing comes to my mind is that someone needed very personal-customized cluster using <a href="https://github.com/sylabs/wlm-operator" rel="noreferrer">WLM-Operator</a> + K8s + Slurm + Singularity to execute HPC/BigData jobs.</p>
<p>Usually Slurm Workload Manager is used by many of the world's supercomputers to optimize locality of task assignments on parallel computers.</p> | 2019-08-01 12:05:35.630000+00:00 | 2019-08-01 12:05:35.630000+00:00 | null | null | 57,282,483 | <p>i saw that some people use Kubernetes co-exist with slurm, I was just curious as to why you need kubernetes with slurm? what is the main difference between kubernetes and slurm?</p> | 2019-07-31 02:48:58.413000+00:00 | 2019-08-01 12:05:35.630000+00:00 | 2019-07-31 06:21:51.770000+00:00 | kubernetes|containers|hpc|slurm | ['https://slurm.schedmd.com/overview.html', 'https://kubernetes.io/', 'https://arxiv.org/pdf/1705.03102.pdf', 'https://github.com/sylabs/wlm-operator'] | 4 |
34,577,769 | <p>In my opinion the reason random cropping helps data augmentation is that while the semantics of the image are preserved (unless you pick out a really bad crop, but let's assume that you setup your random cropping so that this is very low probability) the activations values you get in your conv net are different. So in effect our conv net learns to associate a broader range of spatial activation statistics with a certain class label and thus data augmentation via random cropping helps improve the robustness of our feature detectors in conv nets. Also in the same vein, the random crop produces different intermediate activation values and produces a different forwardpass so it's like a "new training point." </p>
<p>It's also not trivial. See the recent work on adversarial examples in neural networks (relatively shallow to AlexNet sized). Images that semantically look the same, more or less, when we pass them through a neural net with a softmax classifier on top, we can get drastically different class probabilities. So subtle changes from a semantic point of view can end up having different forward passes through a conv net. For more details see <a href="http://arxiv.org/abs/1312.6199" rel="noreferrer">Intriguing properties of neural networks</a>. </p>
<p>To answer the last part of your question: I usually just make my own random cropping script. Say my images are (3, 256, 256) (3 RGB channels, 256x256 spatial size) you can code up a loop which takes 224x224 random crops of your image by just randomly selecting a valid corner point. So I typically compute an array of valid corner points and if I want to take 10 random crops, I randomly select 10 different corner points from this set, say I choose (x0, y0) for my upper left hand corner point, I will select the crop X[x0:x0+224, y0:y0+224], something like this. I personally like to randomly choose from a pre-computed set of valid corner points instead of randomly choosing a corner one draw at a time because this way I guarantee I do not get a duplicate crop, though in reality it's probably low probability anyway.</p> | 2016-01-03 14:43:16.010000+00:00 | 2016-01-03 14:43:16.010000+00:00 | null | null | 34,574,714 | <p>I am training a convolutional neural network, but have a relatively small dataset. So I am implementing techniques to augment it. Now this is the first time i am working on a core computer vision problem so am relatively new to it. For augmenting, i read many techniques and one of them that is mentioned a lot in the papers is random cropping. Now i'm trying to implement it ,i've searched a lot about this technique but couldn't find a proper explanation. So had a few queries:</p>
<p>How is random cropping actually helping in data augmentation? Is there any library (e.g OpenCV, PIL, scikit-image, scipy) in python implementing random cropping implicitly? If not, how should i implement it?</p> | 2016-01-03 08:34:54.213000+00:00 | 2022-04-20 12:30:37.030000+00:00 | null | python|opencv|image-processing|deep-learning|conv-neural-network | ['http://arxiv.org/abs/1312.6199'] | 1 |
67,541,213 | <p>Good question. <a href="https://xgboost.readthedocs.io/en/latest/" rel="nofollow noreferrer"><code>XGBoost</code></a> has <a href="https://arxiv.org/pdf/1908.01672.pdf" rel="nofollow noreferrer">been known to do well for imbalanced datasets</a>, and includes a number of hyperparameters to help us get there.</p>
<p>For the <code>scale_pos_weight</code> feature, <a href="https://xgboost.readthedocs.io/en/latest/parameter.html" rel="nofollow noreferrer">XGBoost documentation suggests</a>:</p>
<p><code>sum(negative instances) / sum(positive instances)</code></p>
<p>For extremely unbalanced datasets, some have suggested using the <code>sqrt</code> of that formula above.</p>
<p>For weights, typically via the <code>sample_weight</code> parameter in XGBoost, you can learn <code>class_weights</code> via a <a href="http://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_sample_weight.html" rel="nofollow noreferrer">sklearn utility</a>, as described <a href="https://datascience.stackexchange.com/questions/16342/unbalanced-multiclass-data-with-xgboost">here</a>.</p>
<p>The difference between the two is <a href="https://stackoverflow.com/questions/48079973/xgboost-sample-weights-vs-scale-pos-weight">explored here</a>, but in summary:</p>
<blockquote>
<p>The sample_weight parameter allows you to specify a different weight
for each training example. The scale_pos_weight parameter lets you
provide a weight for an entire class of examples ("positive" class).</p>
</blockquote>
<p>In code, you can see these implementations below, including the square root. Please note, I had to use synthetic data since none was provided in the question.</p>
<pre><code># General imports
import pandas as pd
from sklearn import datasets
from collections import Counter
# Generate datasets
from sklearn.datasets import make_classification
from imblearn.datasets import make_imbalance
# Train, test, splits and gridsearch optimization
from sklearn.model_selection import train_test_split, GridSearchCV
# Class weights
from sklearn.utils import class_weight
# Performance
from sklearn.metrics import classification_report
# Modeling
import xgboost
import warnings
warnings.filterwarnings('ignore')
# Generate synthetic data
X, y = make_classification(n_samples=10000, n_features=20, n_informative=15, class_sep=2.0, n_classes=2, n_clusters_per_class=5, hypercube=True, random_state=30)
scaled_X, scaled_y = make_imbalance(X, y, sampling_strategy={0:200}, random_state=8)
data = pd.DataFrame(data=scaled_X, columns=['feature_{}'.format(i) for i in range(X.shape[1])])
X_train, X_test, y_train, y_test = train_test_split(data, scaled_y, random_state=8, stratify=scaled_y)
# Compare 3 XGBoost models: no changes to weights, using sample weights, and using weight_scale
# Build a model without using the scale_pos_weight parameter, fit it, and get a set of its performance measures.
model_no_scale = xgboost.XGBClassifier(random_state=30)
model_no_scale.fit(X_train, y_train)
# Print performance
print("Off the Shelf XGBoost")
print(classification_report(y_test, model_no_scale.predict(X_test)))
# Get class_weights
# https://datascience.stackexchange.com/questions/16342/unbalanced-multiclass-data-with-xgboost
model_weights = xgboost.XGBClassifier(sample_weight=class_weight.compute_sample_weight(class_weight='balanced', y=scaled_y), random_state=30)
model_weights.fit(X_train, y_train)
# Print performance
print("Weights XGBoost")
print(classification_report(y_test, model_weights.predict(X_test)))
# Get the counts of the training data per XGBoost documentation
counts = Counter(y_train)
model_scale = xgboost.XGBClassifier(scale_pos_weight=counts[0] / counts[1], random_state=30)
model_scale.fit(X_train, y_train)
# Print performance
print("Scale XGBoost")
print(classification_report(y_test, model_scale.predict(X_test)))
# Get the counts of the training data per XGBoost documentation
from math import sqrt
model_sqrt = xgboost.XGBClassifier(scale_pos_weight=sqrt(counts[0] / counts[1]), random_state=30)
model_sqrt.fit(X_train, y_train)
# Print performance
print("SQRT XGBoost")
print(classification_report(y_test, model_sqrt.predict(X_test)))
</code></pre>
<p>Results in:</p>
<pre><code>Off the Shelf XGBoost
precision recall f1-score support
0 0.95 0.38 0.54 50
1 0.98 1.00 0.99 1253
accuracy 0.98 1303
macro avg 0.96 0.69 0.77 1303
weighted avg 0.97 0.98 0.97 1303
Weights XGBoost
precision recall f1-score support
0 0.95 0.38 0.54 50
1 0.98 1.00 0.99 1253
accuracy 0.98 1303
macro avg 0.96 0.69 0.77 1303
weighted avg 0.97 0.98 0.97 1303
Scale XGBoost
precision recall f1-score support
0 0.73 0.64 0.68 50
1 0.99 0.99 0.99 1253
accuracy 0.98 1303
macro avg 0.86 0.82 0.83 1303
weighted avg 0.98 0.98 0.98 1303
SQRT XGBoost
precision recall f1-score support
0 0.96 0.46 0.62 50
1 0.98 1.00 0.99 1253
accuracy 0.98 1303
macro avg 0.97 0.73 0.81 1303
weighted avg 0.98 0.98 0.97 1303
</code></pre> | 2021-05-14 21:39:27.610000+00:00 | 2021-05-14 21:39:27.610000+00:00 | null | null | 67,303,447 | <p>I am working on binary classification problem on a dataset with extreme class imbalance. To help the model learn the signals of the minority class, I downsampled the majority class such that the training set has 20% of minority class and 80% majority class.</p>
<p>Now there is one other parameter "scale_pos_weight" . I am not sure how to set this parameter after downsampling.</p>
<p>Should i set this based on the actual class ratios or should i use the class ratios after downsampling?</p> | 2021-04-28 15:41:53.840000+00:00 | 2021-05-14 21:39:27.610000+00:00 | null | python|machine-learning|xgboost | ['https://xgboost.readthedocs.io/en/latest/', 'https://arxiv.org/pdf/1908.01672.pdf', 'https://xgboost.readthedocs.io/en/latest/parameter.html', 'http://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_sample_weight.html', 'https://datascience.stackexchange.com/questions/16342/unbalanced-multiclass-data-with-xgboost', 'https://stackoverflow.com/questions/48079973/xgboost-sample-weights-vs-scale-pos-weight'] | 6 |
65,913,021 | <p>After more investigation, turns out the CREPE model itself supports up to ~1997Hz (seen in code) or 1975.5 Hz (seen in paper)</p>
<p>The paper about CREPE:
<a href="https://arxiv.org/abs/1802.06182" rel="nofollow noreferrer">https://arxiv.org/abs/1802.06182</a></p>
<p>States:</p>
<blockquote>
<p>The 360 pitch values are denoted as c1, c2..., 360 are selected so that they cover six octaves with 20-cent intervals between C1 and B7, corresponding to 32.70 Hz and 1975.5 Hz</p>
</blockquote>
<p>The <a href="https://marl.github.io/crepe/crepe.js" rel="nofollow noreferrer">JS implementation</a> has this mapping which maps the 360 intervals to 0 - 1997Hz range:</p>
<p><code>const cent_mapping = tf.add(tf.linspace(0, 7180, 360), tf.tensor(1997.3794084376191))</code></p>
<p>This means, short of retraining the model I'm probably out of luck at using it for now.</p>
<hr />
<p>Edit:</p>
<p>After a good nights sleep I found a simple solution which works for my simple application.</p>
<p>In it's essence, it is to resample my audio buffer so it has 2 times lower pitch. CREPE than detects a pitch of 440Hz as 220Hz, and I just need to multiply it by 2.</p>
<p>The result is still more consistently correct than YIN algorithm for my real time, noisy application.</p> | 2021-01-27 04:49:12.510000+00:00 | 2021-01-29 21:05:57.327000+00:00 | 2021-01-29 21:05:57.327000+00:00 | null | 65,898,843 | <p>I'm successfully using pitch detection features of ml5:</p>
<ul>
<li>tutorial: <a href="https://ml5js.org/reference/api-PitchDetection/" rel="nofollow noreferrer">https://ml5js.org/reference/api-PitchDetection/</a></li>
<li>model: <a href="https://cdn.jsdelivr.net/gh/ml5js/ml5-data-and-models/models/pitch-detection/crepe/" rel="nofollow noreferrer">https://cdn.jsdelivr.net/gh/ml5js/ml5-data-and-models/models/pitch-detection/crepe/</a></li>
</ul>
<h2>The issue:</h2>
<p>No pitch above ±2000Hz is detected. I tried multiple devices and checked that the sounds are visible on sonograms so it's does not seem to be a mic issue.</p>
<p>I assumed it may be a result of sampling rate limitations / resampling done by the library, as the Nyquist frequency (max "recordable" frequency) is that of half of the sampling rate.</p>
<p>I hosted the ml5 sources localy and tried modifying the <a href="https://github.com/ml5js/ml5-library/tree/main/src/PitchDetection" rel="nofollow noreferrer">PitchDetection class</a></p>
<p>There I see the sampling rate seems to be resampled to 1024Hz for performance reasons. This does not sound right though as if I'm not mistaken, this would only allow detection of frequencies up to 512hz. I am definitely missing something (or a lot).</p>
<p>I tried fiddling with the rates, but increasing it to, say 2048 causes an error:
<code>Error when checking : expected crepe_input to have shape [null,1024] but got array with shape [1,2048].</code></p>
<h2>My question is:</h2>
<p>Is there something in ml5 <a href="https://github.com/ml5js/ml5-library/tree/main/src/PitchDetection" rel="nofollow noreferrer">PitchDetection class</a> I can modify, configure (perhaps a different model) to detect frequencies higher than 2000Hz using crepe model?</p> | 2021-01-26 09:32:43.457000+00:00 | 2021-01-29 21:05:57.327000+00:00 | null | tensorflow|pitch-detection|ml5.js | ['https://arxiv.org/abs/1802.06182', 'https://marl.github.io/crepe/crepe.js'] | 2 |
58,328,560 | <p>The problem lies in the fact that in <code>OneHotCategorical</code> performs a discontinuous sampling - what causes gradient computation to fail. In order to replace this discontinuous sampling with a continuous (<em>relaxed</em>) version one may try to use <a href="https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/RelaxedOneHotCategorical" rel="nofollow noreferrer"><code>RelaxedOneHotCategorical</code></a> (which is based on interesting <a href="https://arxiv.org/abs/1611.01144" rel="nofollow noreferrer">Gumbel Softmax</a> technique).</p> | 2019-10-10 17:52:25.760000+00:00 | 2019-10-10 17:52:25.760000+00:00 | null | null | 58,327,372 | <h1>Problem description</h1>
<p>I have inputs <code>x</code> that are indicator variables, and outputs <code>y</code>, where each row is a random one-hot vector that depends on the values of <code>x</code> (data sample shown below).</p>
<p>I want to train a model that essentially learns the probabilistic relationship between <code>x</code> and <code>y</code> in the form of per-column weights. The model must "choose" one, and only one, indicator to output. My current approach is to sample a categorical random variable and produce a one-hot vector as a prediction.</p>
<p>The issue is that I'm getting an error <code>ValueError: An operation has `None` for gradient</code> when I try to train my Keras model.</p>
<p>I find this error odd, because I've trained mixture networks using Keras and Tensorflow, which use <code>tf.contrib.distributions.Categorical</code>, and I did not run into any gradient-related issues.</p>
<h1>Code</h1>
<h2>Experiment</h2>
<pre><code>import tensorflow as tf
import tensorflow.contrib.distributions as tfd
import numpy as np
from keras import backend as K
from keras.layers import Layer
from keras.models import Sequential
from keras.utils import to_categorical
def make_xy_prob(rng, size=10000):
rng = np.random.RandomState(rng) if isinstance(rng, int) else rng
cols = 3
weights = np.array([[1, 2, 3]])
# generate data and drop zeros for now
x = rng.choice(2, (size, cols))
is_zeros = x.sum(axis=1) == 0
x = x[~is_zeros]
# use weights to create probabilities for determining y
weighted_x = x * weights
prob_x = weighted_x / weighted_x.sum(axis=1, keepdims=True)
y = np.row_stack([to_categorical(rng.choice(cols, p=p), cols) for p in prob_x])
# add zeros back and shuffle
zeros = np.zeros(((size - len(x), cols)))
x = np.row_stack([x, zeros])
y = np.row_stack([y, zeros])
shuffle_idx = rng.permutation(size)
x = x[shuffle_idx]
y = y[shuffle_idx]
return x, y
class OneHotGate(Layer):
def build(self, input_shape):
self.kernel = self.add_weight(name='kernel', shape=(1, input_shape[1]), initializer='ones')
def call(self, x):
zero_cond = x < 1
x_shape = tf.shape(x)
# weight indicators so that more probability is assigned to more likely columns
weighted_x = x * self.kernel
# fill zeros with -inf so that zero probability is assigned to that column
ninf_fill = tf.fill(x_shape, -np.inf)
masked_x = tf.where(zero_cond, ninf_fill, weighted_x)
onehot_gate = tf.squeeze(tfd.OneHotCategorical(logits=masked_x, dtype=x.dtype).sample(1))
# fill gate with zeros where input was originally zero
zeros_fill = tf.fill(x_shape, 0.0)
masked_gate = tf.where(zero_cond, zeros_fill, onehot_gate)
return masked_gate
def experiment(epochs=10):
K.clear_session()
rng = np.random.RandomState(2)
X, y = make_xy_prob(rng)
input_shape = (X.shape[1], )
model = Sequential()
gate_layer = OneHotGate(input_shape=input_shape)
model.add(gate_layer)
model.compile('adam', 'categorical_crossentropy')
model.fit(X, y, 64, epochs, verbose=1)
</code></pre>
<h2>Data sample</h2>
<pre><code>>>> x
array([[1., 1., 1.],
[0., 1., 0.],
[1., 0., 1.],
...,
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 0.]])
>>> y
array([[0., 0., 1.],
[0., 1., 0.],
[1., 0., 0.],
...,
[0., 0., 1.],
[1., 0., 0.],
[1., 0., 0.]])
</code></pre>
<h2>Error</h2>
<pre><code>ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
</code></pre> | 2019-10-10 16:29:33.273000+00:00 | 2019-10-11 16:22:41.537000+00:00 | 2019-10-11 16:22:41.537000+00:00 | python|tensorflow|machine-learning|keras|deep-learning | ['https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/RelaxedOneHotCategorical', 'https://arxiv.org/abs/1611.01144'] | 2 |
33,233,851 | <p>I think this question is way too broad for StackOverflow, but I'll give you some thoughts:</p>
<ol>
<li><p>Using stochastic or probability in tree searches is usually called expectimax searches. You can find a good summary and pseudo-code for <a href="http://arxiv.org/pdf/0909.0801.pdf" rel="nofollow noreferrer">Expectimax Approximation with Monte-Carlo Tree Search</a> in chapter 4, but I would recommend using a normal minimax tree search with the expectimax extension. There are a few modifications like <a href="http://jveness.info/publications/thesis.pdf" rel="nofollow noreferrer">Star1, Star2 and Star2.5</a> for a better runtime (similiar to alpha-beta pruning).</p>
<p>It boils down to not only having decision nodes, but also chance nodes. The probability of each possible outcome should be known and the expected value of each node is multiplied with its probability to know its real expected value.</p>
</li>
<li><p>2^5 nodes per move is high, but not impossibly high, especially for low number of moves and a shallow search. Even a 1-3 depth search shoulld give you <em>some</em> results. In my tetris AI, there are ~30 different possible moves to consider and I calculate the result of three following pieces (for each possible) to select my move. This is done in 2 seconds. I'm sure you have much more time for calculation since you're waiting for user input.</p>
</li>
<li><p>If you know what move the player is obvious, shouldn't it also obvious for your AI?</p>
</li>
<li><p>You don't need to consider a single value (hp), you can have several factors that are weighted different to calculate the expected value. If I come back to my tetris AI, there are 7 factors (bumpiness, highest piece, number of holes, ...) that are calculated, weighted and added together. To get the weights, you could use different methods, I used a genetic algorithm to find the combination of weights that resulted in most lines cleared.</p>
</li>
</ol> | 2015-10-20 10:11:37.800000+00:00 | 2015-10-20 10:11:37.800000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 33,220,741 | <p>I'm trying to implement a MCTS algorithm for the AI of a small game. The game is a rpg-simulation. The AI should decides what moves to play in battle. It's a turn base battle (FF6-7 style). There is no movement involved.</p>
<p>I won't go into details but we can safely assume that we know with certainty what move will chose the player in any given situation when it is its turn to play.</p>
<p>Games end-up when one party has no unit alive (4v4). It can take any number of turn (may also never end). There is a lot of RNG element in the damage computation & skill processing (attacks can hit/miss, crit or not, there is a lots of procs going on that can "proc" or not, buffs can have % value to happens ect...).
Units have around 6 skills each to give an idea of the branching factor.</p>
<p>I've build-up a preliminary version of the MCTS that gives poor results for now. I'm having trouble with a few things :</p>
<p>One of my main issue is how to handle the non-deterministic states of my moves. I've read a few papers about this but I'm still in the dark.</p>
<p>Some suggest determinizing the game information and run a MCTS tree on that, repeat the process N times to cover a broad range of possible game states and use that information to take your final decision. In the end, it does multiply by a huge factor our computing time since we have to compute N times a MCTS tree instead of one. I cannot rely on that since over the course of a fight I've got thousands of RNG element : 2^1000 MCTS tree to compute where i already struggle with one is not an option :)</p>
<p>I had the idea of adding X children for the same move but it does not seems to be leading to a good answer either. It smooth the RNG curve a bit but can shift it in the opposite direction if the value of X is too big/small compared to the percentage of a particular RNG. And since I got multiple RNG par move (hit change, crit chance, percentage to proc something etc...) I cannot find a decent value of X that satisfies every cases. More of a badband-aid than anythign else.</p>
<p>Likewise adding 1 node per RNG tuple {hit or miss ,crit or not,proc1 or not,proc2 or not,etc...} for each move should cover every possible situations but has some heavy drawbacks : with 5 RNG mecanisms only that means 2^5 node to consider for each move, it is way too much to compute. If we manage to create them all, we could assign them a probability ( linked to the probability of each RNG element in the node's tuple) and use that probability during our selection phase. This should work overall but be really hard on the cpu :/</p>
<p>I also cannot "merge" them in one single node since I've got no way of averaging the player/monsters stat's value accuractely based on two different game state and averaging the move's result during the move processing itself is doable but requieres a lot of simplifcation that are a pain to code and will hurt our accuracy really fast anyway.</p>
<p>Do you have any ideas how to approach this problem ?</p>
<p>Some other aspects of the algorithm are eluding me:</p>
<p>I cannot do a full playout untill a end state because A) It would take a lot of my computing time and B) Some battle may never ends (by design). I've got 2 solutions (that i can mix)
- Do a random playout for X turns
- Use an evaluation function to try and score the situation.</p>
<p>Even if I consider only health point to evaluate I'm failing to find a good evaluation function to return a reliable value for a given situation (between 1-4 units for the player and the same for the monsters ; I know their hp current/max value). What bothers me is that the fights can vary greatly in length / disparity of powers. That means that sometimes a 0.01% change in Hp matters (for a long game vs a boss for example) and sometimes it is just insignificant (when the player farm a low lvl zone compared to him).</p>
<p>The disparity of power and Hp variance between fights means that my Biais parameter in the UCB selection process is hard to fix. i'm currently using something very low, like 0.03. Anything > 0.1 and the exploration factor is so high that my tree is constructed depth by depth :/</p>
<p>For now I'm also using a biaised way to choose move during my simulation phase : it select the move that the player would choose in the situation and random ones for the AI, leading to a simulation biaised in favor of the player. I've tried using a pure random one for both, but it seems to give worse results. Do you think having a biaised simulation phase works against the purpose of the alogorithm? I'm inclined to think it would just give a pessimistic view to the AI and would not impact the end result too much. Maybe I'm wrong thought.</p>
<p>Any help is welcome :)</p> | 2015-10-19 17:46:48.553000+00:00 | 2015-10-20 10:11:37.800000+00:00 | 2015-10-19 17:54:42.763000+00:00 | algorithm|artificial-intelligence|montecarlo|non-deterministic|stochastic | ['http://arxiv.org/pdf/0909.0801.pdf', 'http://jveness.info/publications/thesis.pdf'] | 2 |
63,600,437 | <p>Your goal is ultimately to roll a <em>k</em>-sided die given only a <em>p</em>-sided die, without wasting randomness.</p>
<p>In this sense, by Lemma 3 in "<a href="https://perso.math.u-pem.fr/kloeckner.benoit/papiers/DiceSimulation.pdf" rel="nofollow noreferrer">Simulating a dice with a dice</a>" by B. Kloeckner, this waste is inevitable unless "every prime number dividing <em>k</em> also divides <em>p</em>". Thus, for example, if <em>p</em> is a power of 2 (and any block of random bits is the same as rolling a die with a power of 2 number of faces) and <em>k</em> has prime factors other than 2, the best you can do is get arbitrarily close to no waste of randomness.</p>
<p>Also, besides batching of bits to reduce "bit waste" (see also the <a href="http://mathforum.org/library/drmath/view/65653.html" rel="nofollow noreferrer">Math Forum</a>), there is also the technique of <em>randomness extraction</em>, discussed in <a href="https://arxiv.org/abs/1502.02539" rel="nofollow noreferrer">Devroye and Gravel 2015-2020</a> and in my <a href="https://peteroupc.github.io/randextract.html" rel="nofollow noreferrer">Note on Randomness Extraction</a>.</p>
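<p>To make the batching idea concrete, here is a minimal Python sketch (the function and its parameter choices are my own illustration, not taken from the cited papers):</p>
<pre><code>import random

def batched_digits(k=10, batch=3, bits=10):
    """Yield uniform digits in [0, k), batching to reduce wasted bits.

    With k=10, batch=3, bits=10 we accept r < 1000 out of [0, 1024),
    so roughly 10/3 = 3.33 bits are consumed per digit plus rejection overhead.
    """
    assert k ** batch <= 2 ** bits
    while True:
        r = random.getrandbits(bits)      # one raw block of random bits
        if r < k ** batch:                # rejection step keeps uniformity
            for _ in range(batch):        # split the block into `batch` digits
                yield r % k
                r //= k

gen = batched_digits()
print([next(gen) for _ in range(12)])
</code></pre>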
<p>See also the question: <a href="https://stackoverflow.com/questions/6046918/how-to-generate-a-random-integer-in-the-range-0-n-from-a-stream-of-random-bits">How to generate a random integer in the range [0,n] from a stream of random bits without wasting bits?</a>, especially my answer there.</p> | 2020-08-26 15:01:57.213000+00:00 | 2020-11-20 15:25:22.890000+00:00 | 2020-11-20 15:25:22.890000+00:00 | null | 63,596,813 | <p>Is there a way to convert uniformly distributed random numbers of one range to uniformly distributed random numbers of another range <strong>frugally</strong>?</p>
<p>Let me explain what I mean by "<strong>frugally</strong>".</p>
<p>The typical approach to generating a random number within a given range (e.g. r ∈ [0..10)) is to take some fixed number of random bits, let's say 31, which results in a non-negative random number less than 2147483648. Then make sure that the value is less than 2147483640 (because 2147483648 is not divisible by 10, and hence may lead to an uneven distribution). If the value is greater than or equal to 2147483640, throw it away and try again (get the next 31 random bits and so on). If the value is less than 2147483640, just return the remainder of division by 10. This approach consumes at least 31 bits per decimal digit. Since the theoretical limit is log<sub>2</sub>(10) = 3.321928..., it is quite wasteful.</p>
<p>We can improve this if we use 4 bits instead of 31. In this case we will consume 4 × 1.6 = 6.4 bits per decimal digit. This is more <strong>frugal</strong>, but still far from the ideal.</p>
<pre class="lang-java prettyprint-override"><code> public int nextDit() {
int result;
do {
result = next4Bits();
} while (result >= 10);
return result;
}
</code></pre>
<p>We can try to generate 3 decimal digits at once. Since 1024 is quite close to 1000, the probability that the raw source random number will be rejected is lower than in the previous case. Once we have generated 3 decimal digits, we return 1 digit and reserve the remaining 2 digits.</p>
<p>Something like the code below:</p>
<pre class="lang-java prettyprint-override"><code> private int _decDigits = 0;
private int _decCount = 0;
public int nextDit() {
if (_decCount > 0) {
// take numbers from the reserve
int result = _decDigits % 10;
_decDigits /= 10;
_decCount -= 1;
return result;
} else {
int result;
do {
result = next10Bits();
} while (result >= 1000);
// reserve 2 decimal digits
_decCount = 2;
_decDigits = result % 100;
result /= 100;
return result;
}
}
</code></pre>
<p>This approach is much more <strong>frugal</strong>: it consumes 10 × 1.024 / 3 = 3.41(3) bits per decimal digit.</p>
<p>We can go even further if we try to reuse the numbers which we previously have been throwing away. The random number r ∈ [0, 1024) falls into one of the 3 ranges: [0, 1000), [1000, 1020), [1020, 1024).</p>
<p>If it falls into [0, 1000), we do as we did before, reserve 2 decimal digits (in decimal digit reserve) and return 1 decimal digit.</p>
<p>If it falls into [1000, 1020), we subtract 1000, converting it to the range [0, 20). Then we get 1 bit by dividing it by 10 and 1 decimal digit by taking the remainder of division by 10. We put the bit into the binary digit reserve and return the decimal digit.</p>
<p>If it falls into [1020, 1024), we subtract 1020 converting to the range [0, 4). Here we get just 2 bits, which we put to the binary digits reserve.</p>
<pre class="lang-java prettyprint-override"><code> // decimal digit reserve
private int _decDigits = 0;
private int _decCount = 0;
// binary digit reserve
private int _binDigits = 0;
private int _binCount = 0;
private int nextBits(int bits, int n) {
for (int i = 0; i < n; i += 1) {
bits = (bits << 1) + _bitRandomDevice.nextBit();
}
return bits;
}
private int next10Bits() {
// take bits from the binary reserve first, then from _bitRandomDevice
int result;
if (_binCount >= 10) {
result = _binDigits >> (_binCount - 10);
            _binDigits = _binDigits & ((1 << (_binCount - 10)) - 1); // mask off the 10 bits just consumed
_binCount -= 10;
} else {
result = nextBits(_binDigits, 10 - _binCount);
_binCount = 0;
_binDigits = 0;
}
return result;
}
public int nextDit() {
if (_decCount > 0) {
// take numbers from the decimal reserve
int result = _decDigits % 10;
_decDigits /= 10;
_decCount -= 1;
return result;
} else {
int result;
while (true) {
result = next10Bits();
if (result < 1000) {
assert result >= 0 && result < 1000;
// reserve 2 decimal digits
_decCount = 2;
_decDigits = result % 100;
result /= 100;
// return 1 decimal digit
return result;
} else if (result < 1020) {
result -= 1000;
assert result >= 0 && result < 20;
// reserve 1 binary digit
_binCount += 1;
_binDigits = (_binDigits << 1) + (result / 10);
// return 1 decimal digit
return result % 10;
} else {
result -= 1020;
assert result >= 0 && result < 4;
// reserve 2 binary digits
_binCount += 2;
_binDigits = (_binDigits << 2) + result;
}
}
}
}
</code></pre>
<p>This approach consumes about 3.38... bits per decimal digit. This is the most <strong>frugal</strong> approach I can find, but it still wastes/loses some information from the source of randomness.</p>
<p>Thus, my question is: Is there any universal approach/algorithm that converts uniformly distributed random numbers of one arbitrary range [0, s) (later called source numbers) to uniformly distributed random numbers of another arbitrary range [0, t) (later called target numbers), consuming only log<sub>s</sub>(t) + C source numbers per target number, where C is some constant?
If there is no such approach, why? What prevents us from reaching the ideal limit?</p>
<p>The purpose of being frugal is to reduce the number of calls to the RNG. This can be especially worthwhile when we work with a true RNG, which often has limited throughput.</p>
<p>As for "frugality optimizations", they are based on following assumptions:</p>
<ul>
<li>given a uniform random number r ∈ [0,N), after checking that <em>r</em> < M (if M <= N), we may assume that it's uniformly distributed in [0,M). The traditional rejection approach is actually based on this assumption. Similarly, after checking that <em>r</em> >= <em>M</em>, we may assume that it's uniformly distributed in [M,N).</li>
<li>given a uniform random number r ∈ [A,B), the derived random number (r+C) is uniformly distributed in [A+C,B+C). I.e. we can add or subtract any constant to/from a random number to shift its range.</li>
<li>given a uniform random number r ∈ [0,N), where N=P × Q, the derived random numbers (r%P) and (r/P) are uniformly distributed in [0,P) and [0,Q) respectively. I.e. we can split one uniform random number into several ones.</li>
<li>given uniform random numbers p ∈ [0,P) and q ∈ [0,Q), the derived random number (q × P + p) is uniformly distributed in [0,P × Q). I.e. we can combine uniform random numbers into one.</li>
</ul> | 2020-08-26 11:36:23.047000+00:00 | 2020-11-20 15:25:22.890000+00:00 | 2020-08-31 07:16:27.520000+00:00 | algorithm|random | ['https://perso.math.u-pem.fr/kloeckner.benoit/papiers/DiceSimulation.pdf', 'http://mathforum.org/library/drmath/view/65653.html', 'https://arxiv.org/abs/1502.02539', 'https://peteroupc.github.io/randextract.html', 'https://stackoverflow.com/questions/6046918/how-to-generate-a-random-integer-in-the-range-0-n-from-a-stream-of-random-bits'] | 5 |
8,741,732 | <p>I recently came across the following paper: <a href="http://arxiv.org/abs/0711.2010" rel="nofollow">http://arxiv.org/abs/0711.2010</a>.
This paper proposes "A Polynomial Time Algorithm for Graph Isomorphism".</p>
<p>Corollary: A graph can be represented in different drawings.</p>
<p>What's the best approach to find different drawings of a graph?</p> | 2009-11-08 02:39:31.853000+00:00 | 2017-05-02 11:32:58.283000+00:00 | null | algorithm|graph | ['http://arxiv.org/abs/0711.2010'] | 1 |
6,889,826 | <p>Not sure there's a package but there's some code available here:</p>
<p><a href="http://r.789695.n4.nabble.com/Total-least-squares-linear-regression-td1475960.html" rel="nofollow">http://r.789695.n4.nabble.com/Total-least-squares-linear-regression-td1475960.html</a></p>
<p>You could also likely do a fairly inefficient search using one of R's various and powerful optimization packages. Since, according to this article <a href="http://arxiv.org/PS_cache/math/pdf/9805/9805076v1.pdf" rel="nofollow">http://arxiv.org/PS_cache/math/pdf/9805/9805076v1.pdf</a>, the best-fit line always passes through the centroid, you'd just be searching for the angle that minimizes the sum of the squared Euclidean distances. That shouldn't be too hard, but it only gets you the fit, not any diagnostics on the fit.</p> | 2011-07-31 13:48:49.527000+00:00 | 2011-07-31 13:54:36.297000+00:00 | 2011-07-31 13:54:36.297000+00:00 | null | 6,889,809 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://stackoverflow.com/questions/6872928/how-to-calculate-total-least-squares-in-r-orthogonal-regression">How to calculate Total least squares in R? (Orthogonal regression)</a> </p>
</blockquote>
<p>I have to implement a Total Least Squares model in R instead of lm() (linear regression).</p>
<p>For those who don't understand what I mean, this link may be useful: <a href="http://en.wikipedia.org/wiki/Total_least_squares" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Total_least_squares</a></p>
<p>Is there an R function for this kind of regression model?</p> | 2011-07-31 13:45:00.817000+00:00 | 2011-07-31 13:54:36.297000+00:00 | 2017-05-23 11:55:34.390000+00:00 | r | ['http://r.789695.n4.nabble.com/Total-least-squares-linear-regression-td1475960.html', 'http://arxiv.org/PS_cache/math/pdf/9805/9805076v1.pdf'] | 2 |
53,508,825 | <p>I guess you are trying to detect anomalies by using the reconstruction error of your network, i.e. train on some normal time series, then detect on a time series which includes outliers. If my further guess is right, your advisor is suggesting that the internal representation will tell you more about the nature of the anomaly, i.e. which features of the input data are most abnormal.</p>
<p>The first thing to note is that the intermediate features your network builds from only legitimate data will not change in detection mode; thus the internal representation (the weights of each intermediate layer's neurons) will not change when processing a new datapoint in this mode.</p>
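<p>One simple, architecture-agnostic way to start reasoning about root causes is to look at the reconstruction error per input dimension at detection time. A minimal NumPy sketch (the lambda below is only a stand-in for a trained model):</p>
<pre><code>import numpy as np

def per_feature_error(autoencoder, x):
    # squared reconstruction error per input dimension,
    # shape: (n_samples, n_features)
    return (x - autoencoder(x)) ** 2

x = np.random.randn(5, 3)
errors = per_feature_error(lambda z: 0.9 * z, x)
print(errors.argmax(axis=1))  # most anomalous feature per sample
</code></pre>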
<p>You will only be able to reason about the root cause of the outlier (which dimension contributes the most to the reconstruction error) if you have a good idea of which original features your intermediate features represent. This can be quite hard if you have one fully connected auto-encoder with several hidden layers, where the contribution of each feature to each neuron is increasingly interleaved with the other features. A trick is to build one auto-encoder per set of features, and use them as an ensemble for anomaly prediction. That way, each auto-encoder in the ensemble is known to be responsible for a set of features, and it makes it easier to know how each set of features contributes to the anomaly. See an example <a href="https://arxiv.org/abs/1802.09089" rel="nofollow noreferrer">here</a>.</p> | 2018-11-27 21:57:07.373000+00:00 | 2018-11-27 21:57:07.373000+00:00 | null | null | 53,462,087 | <p>I've trained an autoencoder to recognize 'positive' time series (the network is a simple fully connected network, no recurrent layers). The problem is that, from what my advisor says, I should try to detect anomalies using some statistics on the latent space (like the difference between histograms of the latent space for good and outlier data), but when I predict time series with outliers I get the same internal representation as with the good data. I believe this is because my network can only reproduce the normal data.
Do you have any hints?
Thanks </p> | 2018-11-24 20:28:09.363000+00:00 | 2018-11-27 21:57:07.373000+00:00 | null | keras|neural-network|artificial-intelligence|autoencoder|anomaly-detection | ['https://arxiv.org/abs/1802.09089'] | 1 |
26,016,529 | <p>SDM is a method to align shapes in images. It uses feature extractors (SIFT and HoG) in the process, but is not a feature extractor.
Similar methods are ASM, AAM or CLM, but SDM has better performance and accuracy. </p>
<p>In the case of SDM, during the training process the system learns descent vectors from an initial shape configuration (different from the shapes in the database) to the database sets. Those vectors have the ability to fit a new initial configuration to the face shape in the image you want to fit.</p>
<p>This link can help you to learn more about it: <a href="http://arxiv.org/pdf/1405.0601v1.pdf" rel="nofollow">http://arxiv.org/pdf/1405.0601v1.pdf</a></p>
<p>About the code, there are some demo samples on the main page of <a href="http://www.humansensing.cs.cmu.edu/intraface/" rel="nofollow">IntraFace</a>, but if you are looking for the source code, I don't think you can find it.</p> | 2014-09-24 12:07:56.373000+00:00 | 2014-09-24 12:07:56.373000+00:00 | null | null | 24,101,327 | <p>Can someone explain briefly how SDM (Supervised Descent Method) for feature extraction works?
I searched a lot on the Internet but couldn't find what I was looking for.</p>
<p>Is it only for feature extraction in videos, or can it be used in both videos and images?</p>
<p>If someone can explain, it would be of great help.</p> | 2014-06-07 20:31:42.413000+00:00 | 2015-04-30 23:26:15.013000+00:00 | 2014-06-08 02:32:31.763000+00:00 | image|video|image-processing|feature-extraction | ['http://arxiv.org/pdf/1405.0601v1.pdf', 'http://www.humansensing.cs.cmu.edu/intraface/'] | 2 |
8,106,549 | <p>This is the so-called in-place in-shuffle algorithm, and it's an extremely hard task if you want to do it efficiently. I'm just posting this entry so people don't post their so-called "solutions" claiming that they can be extended to work with O(1) space, without any proof...</p>
<p>Here is a paper for a simpler case when the list is in the form: <code>a1 a2 a3 ... an b1 b2 b3 .. bn</code>:</p>
<p><a href="http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.1598v1.pdf">http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.1598v1.pdf</a></p> | 2011-11-12 18:23:00.003000+00:00 | 2011-11-12 18:23:00.003000+00:00 | null | null | 8,106,376 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://stackoverflow.com/questions/5557326/reordering-of-array-elements">Reordering of array elements</a> </p>
</blockquote>
<p>Given an array of elements like [a1,a2,a3,..an,b1,b2,b3,..bn,c1,c2,c3,...cn], write a program to merge them like [a1,b1,c1,a2,b2,c2,...an,bn,cn].
We have to do it in O(1) extra space.</p>
<p>Sample Testcases:</p>
<pre><code>Input #00:
{1,2,3,4,5,6,7,8,9,10,11,12}
Output #00:
{1,5,9,2,6,10,3,7,11,4,8,12}
Explanation:
Here as you can notice, the array is of the form
{a1,a2,a3,a4,b1,b2,b3,b4,c1,c2,c3,c4}
</code></pre>
<p>EDIT:
I got this in an Amazon placement test and have been trying it for a long time.
Please provide pseudocode. What I tried is finding the new position p for the second element e (the 1st is already at the correct position), inserting e at p, and repeating the same for the old element at position p. But this ends in a cycle.
I tried detecting the cycle and incrementing the starting position by 1, but even this is not working.</p>
<p>EDIT2: </p>
<pre><code>#include <iostream>
using namespace std;
int pos(int i, int n)
{
if(i<n)
{
return 3*i;
}
else if(i>=n && i<2*n)
{
return 3*(i-n) + 1;
}
else if(i>=2*n && i<3*n)
{
return 3*(i-2*n) + 2;
}
return -1;
}
void printn(int* A, int n)
{
for(int i=0;i<3*n;i++)
cout << A[i]<<";";
cout << endl;
}
void merge(int A[], int n)
{
int j=1;
int k =-1;
int oldAj = A[1];
int count = 0;
int temp;
while(count<3*n-1){
printn(A,n);
k = pos(j,n);
temp = A[k];
A[k] = oldAj;
oldAj = temp;
j = k;
count++;
if(j==1) {j++;}
}
}
int main()
{
int A[21] = {1,4,7,10,13,16,19,2,5,8,11,14,17,20,3,6,9,12,15,18,21};
merge(A,7);
    cin.get();
}
</code></pre> | 2011-11-12 17:54:09.837000+00:00 | 2011-11-12 23:18:30.547000+00:00 | 2017-05-23 12:06:35.573000+00:00 | arrays|algorithm | ['http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.1598v1.pdf'] | 1 |
59,158,210 | <p>If I get your question right, it is about a noisy training data set?
If so, there are some papers available which address this problem - e.g.</p>
<p><a href="https://arxiv.org/pdf/1705.03419.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1705.03419.pdf</a></p>
<p>or </p>
<p><a href="https://jingdongwang2017.github.io/Pubs/CVPR16-DisturbLabel.pdf" rel="nofollow noreferrer">https://jingdongwang2017.github.io/Pubs/CVPR16-DisturbLabel.pdf</a></p>
<p>Hope those papers are a little helpful :)</p> | 2019-12-03 13:27:30.687000+00:00 | 2019-12-03 13:27:30.687000+00:00 | null | null | 59,158,036 | <p>I have automatically created a data set for object detection for camera images. However, the algorithm I use for this makes mistakes. But I can calculate the uncertainty</p>
<p><em>Now my question: Is it possible to consider these uncertainties when training a neural net? If so, how? Has one of you ever read a paper about it?</em> </p>
<p>Unfortunately I couldn't find anything about it myself. (Maybe I just use the wrong keywords in my search)</p>
<p>First of all, thank you very much for your help! </p>
<p>PS: A few details: I've got a robot with a lidar that calculates the position of an object with a common SLAM algorithm. I can calculate the covariance matrix for the position. I use this information to create the labels for the 2D images. Later on I'd like to use a cheap camera to do the same job as the lidar.</p> | 2019-12-03 13:18:09.470000+00:00 | 2019-12-03 13:27:30.687000+00:00 | null | deep-learning|annotations|conv-neural-network|supervised-learning | ['https://arxiv.org/pdf/1705.03419.pdf', 'https://jingdongwang2017.github.io/Pubs/CVPR16-DisturbLabel.pdf'] | 2 |
43,535,273 | <p>fastText offers more than topic modelling; it is a tool for generating word embeddings and for text classification using a shallow neural network.
The authors state that its performance is comparable with much more complex “deep learning” algorithms, while the training time is significantly lower.</p>
<p><strong>Pros:</strong></p>
<p>=> It is extremely easy to train your own fastText model,</p>
<p><code>$ ./fasttext skipgram -input data.txt -output model</code></p>
<p>Just provide your input and output file, the architecture to be used and that's all, but if you wish to customize your model a bit, fastText provides the option to change the hyper-parameters as well.</p>
<p>=> While generating word vectors, fastText takes into account sub-parts of words called character n-grams so that similar words have similar vectors even if they happen to occur in different contexts. For example, “supervised”, “supervise” and “supervisor” all are assigned similar vectors.</p>
<p>=> A previously trained model can be used to compute word vectors for out-of-vocabulary words. This one is my favorite. Even if the vocabulary of your corpus is finite, you can get a vector for almost any word that exists in the world.</p>
<p>=> fastText also provides the option to generate vectors for paragraphs or sentences. Similar documents can be found by comparing the vectors of documents.</p>
<p>=> The option to predict likely labels for a piece of text has been included too.</p>
<p>=> Pre-trained word vectors for about 90 languages trained on Wikipedia are available in the official repo.</p>
<p><strong>Cons:</strong></p>
<p>=> As fastText is command-line based, I struggled while incorporating it into my project; this might not be an issue for others though.</p>
<p>=> No in-built method to find similar words or paragraphs.</p>
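<p>If the command line is an obstacle, the official Python bindings (added to the project after this answer was written; the sketch below assumes the current <code>fasttext</code> pip package, and the file names are placeholders) expose the same functionality:</p>
<pre><code>import fasttext  # pip install fasttext

# unsupervised embeddings (skipgram), mirroring the CLI call above
model = fasttext.train_unsupervised("data.txt", model="skipgram")
print(model.get_word_vector("supervised")[:5])
print(model.get_word_vector("wordnotinvocab")[:5])  # OOV words still get vectors

# supervised classification expects lines prefixed with __label__<class>
clf = fasttext.train_supervised("train.txt")
print(clf.predict("example text to classify"))
</code></pre>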
<p>For those who wish to read more, here are the links to the official research papers:</p>
<p>1) <a href="https://arxiv.org/pdf/1607.04606.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1607.04606.pdf</a></p>
<p>2) <a href="https://arxiv.org/pdf/1607.01759.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1607.01759.pdf</a></p>
<p>And link to the official repo:</p>
<p><a href="https://github.com/facebookresearch/fastText" rel="nofollow noreferrer">https://github.com/facebookresearch/fastText</a></p> | 2017-04-21 06:18:10.970000+00:00 | 2017-10-16 04:02:57.760000+00:00 | 2017-10-16 04:02:57.760000+00:00 | null | 39,071,317 | <p>Hi Last week Facebook announced Fasttext which is a way to categorize words into bucket. Latent Dirichlet Allocation is also another way to do topic modeling. My question is did anyone do any comparison regarding pro and con within these 2.</p>
<p>I haven't tried fastText, but here are a few pros and cons of LDA based on my experience.</p>
<p>Pro</p>
<ol>
<li><p>Iterative model, having support for Apache spark</p></li>
<li><p>Takes in a corpus of document and does topic modeling.</p></li>
<li><p>Not only finds out what the document is talking about but also finds out related documents</p></li>
<li><p>Apache spark community is continuously contributing to this. Earlier they made it work on mllib now on ml libraries</p></li>
</ol>
<p>Con</p>
<ol>
<li><p>Stopwords need to be defined well and have to be related to the context of the document. For example, "document" is a word with a high frequency of appearance that may top the chart of recommended topics, but it may or may not be relevant, so we need to update the stopword list accordingly.</p></li>
<li><p>Sometimes the classification might be irrelevant. In the example below it is hard to infer what this bucket is talking about.</p></li>
</ol>
<p>Topic: </p>
<ol>
<li><p>Term:discipline </p></li>
<li><p>Term:disciplines</p></li>
<li><p>Term:notestable </p></li>
<li><p>Term:winning</p></li>
<li><p>Term:pathways </p></li>
<li><p>Term:chapterclosingtable </p></li>
<li><p>Term:metaprograms</p></li>
<li><p>Term:breakthroughs </p></li>
<li><p>Term:distinctions </p></li>
<li><p>Term:rescue</p></li>
</ol>
<p>If anyone has done research on fastText, could you please share your findings?</p> | 2016-08-22 04:15:37.870000+00:00 | 2017-10-16 04:02:57.760000+00:00 | null | facebook|scala|apache-spark | ['https://arxiv.org/pdf/1607.04606.pdf', 'https://arxiv.org/pdf/1607.01759.pdf', 'https://github.com/facebookresearch/fastText'] | 3 |
64,949,461 | <p>Take a look at <strong>SuperGlue</strong>, a graph-neural-network-based feature matcher. They do not provide training code, but two pretrained models (indoor and outdoor) are available. Links:</p>
<p><a href="https://github.com/magicleap/SuperGluePretrainedNetwork" rel="nofollow noreferrer">https://github.com/magicleap/SuperGluePretrainedNetwork</a></p>
<p><a href="https://psarlin.com/superglue/" rel="nofollow noreferrer">https://psarlin.com/superglue/</a></p>
<p><a href="https://arxiv.org/pdf/1911.11763.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1911.11763.pdf</a></p>
<p><a href="https://i.stack.imgur.com/Z0JKB.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z0JKB.gif" alt="enter image description here" /></a></p> | 2020-11-22 00:11:34.453000+00:00 | 2020-11-22 00:11:34.453000+00:00 | null | null | 64,921,919 | <p>I have developed two methods using SIFT and ORB, but it seems to me that the points do not correspond correctly. Am I using these functions wrongly or do I need something different?</p>
<pre><code>orb = cv2.ORB_create()
keypoints_X, descriptor_X = orb.detectAndCompute(car1_gray, None)
keypoints_y, descriptor_y = orb.detectAndCompute(car2_gray, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck = True)
matches = bf.match(descriptor_X, descriptor_y)
matches = sorted(matches, key = lambda x: x.distance)
result = cv2.drawMatches(car1_gray, keypoints_X, car2_gray, keypoints_y, matches[:10], car2_gray, flags = 2)
</code></pre>
<hr />
<pre><code>sift = cv2.SIFT_create()
keypoints_X, descriptor_X = sift.detectAndCompute(car1_gray, None)
keypoints_y, descriptor_y = sift.detectAndCompute(car2_gray, None)
bf = cv2.BFMatcher()
matches = bf.knnMatch(descriptor_X, descriptor_y, k=2)
bom = []
for m,n in matches:
if m.distance < 0.75*n.distance:
bom.append([m])
result = cv2.drawMatchesKnn(car1_gray, keypoints_X, car2_gray, keypoints_y, bom, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
</code></pre>
<p>Below are the results of SIFT and ORB:
<a href="https://i.stack.imgur.com/BPMYM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BPMYM.jpg" alt="SIFT and ORB result" /></a></p> | 2020-11-20 00:14:11.200000+00:00 | 2020-11-22 00:11:34.453000+00:00 | null | opencv|computer-vision|conv-neural-network|sift|orb | ['https://github.com/magicleap/SuperGluePretrainedNetwork', 'https://psarlin.com/superglue/', 'https://arxiv.org/pdf/1911.11763.pdf', 'https://i.stack.imgur.com/Z0JKB.gif'] | 4 |
71,734,048 | <p>Dataset imbalance always causes a performance decrease. There are, however, a few tricks which may be helpful in your situation:</p>
<ol>
<li>The simplest one is class weighting; the weights may be computed with sklearn's <code>compute_class_weight</code> method (see the sketch after this list).</li>
<li>A more modern approach - focal loss (<a href="https://arxiv.org/abs/1708.02002" rel="nofollow noreferrer">https://arxiv.org/abs/1708.02002</a>). Roughly, this loss function focuses more of the network's attention on 'hard-to-detect' objects (simply by increasing the loss on them), which includes underrepresented classes.</li>
</ol>
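<p>A minimal sketch of the first option (the label array below is hypothetical, mimicking your class counts):</p>
<pre><code>import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# hypothetical per-object labels gathered from the training annotations
y = np.array([0] * 12000 + [1] * 1000 + [2] * 1000)

weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y), y=y)
print(dict(zip(np.unique(y), weights)))  # rare classes weigh ~12x the common one
</code></pre>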
<p>Your low-confidence problem may be a consequence of underfitting. That's from personal experience with two-stage detectors (mostly Faster-RCNN).</p> | 2022-04-04 08:24:43.670000+00:00 | 2022-04-04 08:24:43.670000+00:00 | null | null | 71,732,857 | <p>I was wondering if using an unbalanced dataset with YOLO would cause it to train worse in terms of accuracy? Would the classes with fewer images have lower accuracy?</p>
<p>I have 3 classes with 14.4k images in total.</p>
<p>1 class has 12,000 image examples;
the other 2 have 1,000 image examples each.</p>
<p>Would this be an issue?</p>
<p>I am training YOLOR right now and my mAP is at 0.36 on my custom dataset.</p>
<p>I classified with the weights and the classification is good, but I need to set the confidence threshold very low, as the classes with fewer images have very low confidence (0.05 - 0.12) while the class with 12,000 images has confidence (0.45 - 0.90).</p> | 2022-04-04 06:27:34.727000+00:00 | 2022-04-04 08:24:43.670000+00:00 | null | computer-vision|conv-neural-network|yolo | ['https://arxiv.org/abs/1708.02002'] | 1 |
51,556,575 | <p>CRFs are still used for image labeling and semantic image segmentation tasks, alongside DNNs. In fact, CRFs and DNNs are not mutually exclusive techniques, and a lot of recent publications use both of them.</p>
<p>CRFs are based on probabilistic graphical models, where graph nodes and edges represent random variables, initialized with <em>potential functions</em>. A DNN can be used as such a potential function:</p>
<ul>
<li><a href="http://www.robots.ox.ac.uk/~tvg/publications/2017/CRFMeetCNN4SemanticSegmentation.pdf" rel="noreferrer">Conditional Random Fields Meet Deep Neural Networks for Semantic Segmentation</a></li>
<li><a href="https://www.robots.ox.ac.uk/~szheng/papers/CRFasRNN.pdf" rel="noreferrer">Conditional Random Fields as Recurrent Neural Networks</a></li>
<li><a href="http://www.professeurs.polymtl.ca/christopher.pal/miccai2014/brats2014_lisa_cnn_paper.pdf" rel="noreferrer">Brain Tumor Segmentation with Deep Neural Network</a> (Future Work Section)</li>
</ul>
<p>DCNNs may be used for the feature extraction process, which is an essential step in applying CRFs:</p>
<ul>
<li><a href="http://www.project-10.de/Kosov/files/PRJ_2018.pdf" rel="noreferrer">Environmental Microorganism Classification Using Conditional Random Fields and Deep Convolutional Neural Networks</a></li>
<li><a href="https://arxiv.org/pdf/1711.04483.pdf" rel="noreferrer">Conditional Random Field and Deep Feature Learning for Hyperspectral Image Segmentation</a></li>
</ul>
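<p>To make the combination concrete, here is a hedged sketch (assuming the <code>pydensecrf</code> package; the random arrays stand in for a real image and for DNN softmax outputs) of refining per-pixel class probabilities with a dense CRF:</p>
<pre><code>import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

h, w, n_labels = 120, 160, 3
image = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)  # stand-in RGB image
probs = np.random.dirichlet(np.ones(n_labels), size=(h, w))   # stand-in DNN softmax
probs = np.ascontiguousarray(probs.transpose(2, 0, 1))        # (labels, h, w)

d = dcrf.DenseCRF2D(w, h, n_labels)
d.setUnaryEnergy(unary_from_softmax(probs))           # the DNN provides unaries
d.addPairwiseGaussian(sxy=3, compat=3)                # smoothness term
d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=image, compat=10)  # appearance term

Q = d.inference(5)
labels = np.argmax(Q, axis=0).reshape(h, w)           # refined segmentation map
print(labels.shape)
</code></pre>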
<p>There are also toolkits, combining both CRFs and DNNs:</p>
<ul>
<li><a href="http://research.project-10.de/dgm/" rel="noreferrer">Direct graphical models C++ library</a> </li>
</ul> | 2018-07-27 10:59:38.557000+00:00 | 2018-07-27 10:59:38.557000+00:00 | null | null | 51,439,053 | <p>Are CRFs (Conditional Random Fields) still actively used in semantic segmentation tasks, or have current deep neural networks made them unnecessary?
I've seen both answers in academic papers and, since CRFs seem quite complicated to implement and run inference with, I would like to have some opinions on them before trying them out.</p>
<p>Thank you</p> | 2018-07-20 09:10:52.640000+00:00 | 2018-07-27 10:59:38.557000+00:00 | null | deep-learning|crf|semantic-segmentation | ['http://www.robots.ox.ac.uk/~tvg/publications/2017/CRFMeetCNN4SemanticSegmentation.pdf', 'https://www.robots.ox.ac.uk/~szheng/papers/CRFasRNN.pdf', 'http://www.professeurs.polymtl.ca/christopher.pal/miccai2014/brats2014_lisa_cnn_paper.pdf', 'http://www.project-10.de/Kosov/files/PRJ_2018.pdf', 'https://arxiv.org/pdf/1711.04483.pdf', 'http://research.project-10.de/dgm/'] | 6 |
67,712,404 | <p>I may be late to answer this one, but here is one potential approach.
The blur_detector library on PyPI can be used to identify regions in an image which are sharp vs. blurry. Here is the paper the library is based on: <a href="https://arxiv.org/pdf/1703.07478.pdf" rel="noreferrer">https://arxiv.org/pdf/1703.07478.pdf</a></p>
<p>The way this library operates is that it looks at every pixel in the image at multiple scales and performs the discrete cosine transform at each scale. These DCT coefficients are then filtered such that we only use the <code>high frequency</code> coefficients. All the <code>high frequency</code> DCT coefficients at multiple scales are then fused together and sorted to form the <code>multiscale-fused and sorted high-frequency transform coefficients</code>.
A subset of these sorted coefficients is selected. This is a tunable parameter and the user can experiment with it based on the application. The output of the selected DCT coefficients is then sent through max pooling to retain the maximum activation at multiple scales. This makes the algorithm quite robust at detecting blurry areas in an image.</p>
<p>Here are the results that I see on the images that you have provided in the question:
<a href="https://i.stack.imgur.com/L0N25.png" rel="noreferrer"><img src="https://i.stack.imgur.com/L0N25.png" alt="enter image description here" /></a></p>
<p>Note: I have used a face detector from the default cascade_detectors in OpenCV to select a region of interest. The output of these two approaches (spatial blur detection + face detection) can be used to get the sharpness map of the image.</p>
<p>Here we can see that in the sharp images, the intensity of the pixels in the eyes region is very high, whereas for the blurry image, it is low.</p>
<p>You can threshold this to identify which images are sharp and which images are blurry.</p>
<p>Here is the code snippet which generated the above results:</p>
<pre><code>pip install blur_detector
</code></pre>
<hr />
<pre><code>import blur_detector
import cv2
if __name__ == '__main__':
face_cascade = cv2.CascadeClassifier('cv2/data/haarcascade_frontalface_default.xml')
img = cv2.imread('1.png', 0)
blur_map1 = blur_detector.detectBlur(img, downsampling_factor=1, num_scales=3, scale_start=1)
faces = face_cascade.detectMultiScale(img, 1.1, 4)
for (x, y, w, h) in faces:
cv2.rectangle(blur_map1, (x, y), (x + w, y + h), (255, 0, 0), 2)
img = cv2.imread('2.png', 0)
blur_map2 = blur_detector.detectBlur(img, downsampling_factor=1, num_scales=3, scale_start=1)
faces = face_cascade.detectMultiScale(img, 1.1, 4)
for (x, y, w, h) in faces:
cv2.rectangle(blur_map2, (x, y), (x + w, y + h), (255, 0, 0), 2)
img = cv2.imread('3.png', 0)
blur_map3 = blur_detector.detectBlur(img, downsampling_factor=1, num_scales=3, scale_start=1)
faces = face_cascade.detectMultiScale(img, 1.1, 4)
for (x, y, w, h) in faces:
cv2.rectangle(blur_map3, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imshow('a', blur_map1)
cv2.imshow('b', blur_map2)
cv2.imshow('c', blur_map3)
cv2.waitKey(0)
</code></pre>
<p>To understand the details about the algorithm regarding the blur detector, please take a look at this github page: <a href="https://github.com/Utkarsh-Deshmukh/Spatially-Varying-Blur-Detection-python" rel="noreferrer">https://github.com/Utkarsh-Deshmukh/Spatially-Varying-Blur-Detection-python</a></p> | 2021-05-26 20:38:16.840000+00:00 | 2021-05-26 20:38:16.840000+00:00 | null | null | 57,233,870 | <p>I am working on the blur detection of images. I have used the <strong>variance of the Laplacian method</strong> in OpenCV.</p>
<pre><code>img = cv2.imread(imgPath)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
value = cv2.Laplacian(gray, cv2.CV_64F).var()
</code></pre>
<p>The function fails in some cases, such as pixelated blur. It returns a higher value for those blurred images than for actually clear images. Is there any better approach that detects pixelation as well as motion blur?</p>
<p>Sample images:</p>
<p>This image is much clearer but shows a value of <strong>266.79</strong>.</p>
<p><a href="https://i.stack.imgur.com/5yKqT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5yKqT.png" alt="enter image description here"></a></p>
<p>Whereas this image shows a value of <strong>446.51</strong>.</p>
<p><a href="https://i.stack.imgur.com/oJ5NB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oJ5NB.png" alt="enter image description here"></a></p>
<p>This image also seems much clearer but shows a value of only <strong>38.96</strong>.</p>
<p><a href="https://i.stack.imgur.com/zSV4S.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zSV4S.png" alt="enter image description here"></a></p>
<p>I need to classify the 1st and 3rd ones as not blurred and the second one as blurred.</p> | 2019-07-27 16:09:07.980000+00:00 | 2021-05-26 20:38:16.840000+00:00 | 2019-07-28 00:08:33.133000+00:00 | python|opencv | ['https://arxiv.org/pdf/1703.07478.pdf', 'https://i.stack.imgur.com/L0N25.png', 'https://github.com/Utkarsh-Deshmukh/Spatially-Varying-Blur-Detection-python'] | 3 |
48,049,238 | <p>Rather than measuring/detecting when overfitting starts to occur, it's easier to take steps to prevent it from happening. Two ideas for doing this:</p>
<ol>
<li><p>Instead of always having the agent play against itself, have it play against an agent randomly selected from a larger set of older versions of itself. This idea is somewhat similar in spirit to Lyndon's idea of testing against humans and/or alpha-beta search engines (and very similar to his idea in the last paragraph of his answer). However, the goal here is not to test and figure out when performance starts dropping against a test set of opponents; the goal is simply to create a diverse set of training opponents, so that your agent cannot afford to only overfit against one of them. I believe this approach was also used in [1, 2]. A toy sketch of such an opponent pool follows after this list.</p></li>
<li><p>Incorporate search algorithms (like MCTS) directly in the agent's action selection during training. The combination of NN + search (typically informed/biased by the NN) is usually a bit stronger than just the NN on its own. So you can always keep updating the NN to make its behaviour more like the behaviour of NN + search, and it'll generally be an improvement. The search part in this is unlikely to ever overfit against a specific opponent (because it's not learned from experience, the search always behaves in the same way). If the NN on its own starts overfitting against a particular opponent and starts suggesting moves that would be bad in general, but good against a particular opponent, a search algorithm should be able to "exploit/punish" this "mistake" by the overfitting NN, and therefore provide feedback to the NN to move away from that overfitting again. Examples of this approach can be found in [3, 4, 5].</p></li>
</ol>
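<p>A toy sketch of the opponent pool from the first idea (all names here are made-up placeholders, not a real training loop):</p>
<pre><code>import copy
import random

class Agent:                              # toy stand-in for a network-backed agent
    def __init__(self):
        self.updates = 0

def play_and_learn(agent, opponent):      # placeholder for one training episode
    agent.updates += 1

agent, opponent_pool = Agent(), []
for generation in range(10):
    opponent_pool.append(copy.deepcopy(agent))   # snapshot the current agent
    for episode in range(100):
        opponent = random.choice(opponent_pool)  # sample from past versions
        play_and_learn(agent, opponent)
</code></pre>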
<p>The second idea probably requires much more engineering effort than the first, and it also only works if you actually can implement a search algorithm like MCTS (which you can, since you know the game's rules), but it probably works a bit better. I don't know for sure if it'll work better though, I only suspect it will because it was used in later publications with better results than papers using the first idea.</p>
<hr>
<p><strong>References</strong></p>
<p>[1] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. (2016). Mastering the Game of Go with Deep Neural Networks and Tree Search. <em>Nature</em>, Vol 529, No. 7587, pp. 484-489.</p>
<p>[2] Bansal, T., Pachocki, J., Sidor, S., Sutskever, I., and Mordatch, I. (2017). Emergent Complexity via Multi-Agent Competition. <em>arXiv:1710.03748v2</em>.</p>
<p>[3] Anthony, T. W., Tian, Z., and Barber, D. (2017). Thinking Fast and Slow with Deep Learning and Tree Search. <em>arXiv:1705.08439v4</em>.</p>
<p>[4] Silver, D., Schrittwieser, J., Simonyan, K, Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T., and Hassabis, D. (2017). Mastering the Game of Go without Human Knowledge. <em>Nature</em>, Vol. 550, No. 7676, pp. 354-359.</p>
<p>[5] Silver, D., Hubert, T., Schrittweiser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., and Hassabis, D. (2017c). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. <em>arXiv:1712.01815v1</em>.</p> | 2018-01-01 11:24:18.717000+00:00 | 2018-01-01 11:24:18.717000+00:00 | null | null | 47,878,311 | <p>I have a Neural Network designed to play Connect 4, it gauges the value of a game state toward Player 1 or Player 2.</p>
<p>In order to train it, I am having it play against itself for <code>n</code> number of games.</p>
<p>What I've found is that 1000 games results in better game-play than 100,000, even though the mean squared error over every 100 games constantly improves across the 100,000 epochs.</p>
<p>(I determine this by challenging the top-ranked player at <a href="http://riddles.io" rel="nofollow noreferrer">http://riddles.io</a>)</p>
<p>I've therefore reached the conclusion that over-fitting has occurred.</p>
<p>With self-play in mind, how do you successfully measure/determine/estimate that over-fitting has occurred? I.e., how to I determine when to stop the self-play?</p> | 2017-12-19 00:15:57.890000+00:00 | 2018-01-01 11:24:18.717000+00:00 | 2017-12-19 07:56:52.897000+00:00 | neural-network|reinforcement-learning|temporal-difference | [] | 0 |
47,002,794 | <p>Keras correctly implements L1 regularization. In the context of neural networks, L1 regularization simply adds the L1 norm of the parameters to the loss function (see <a href="http://cs231n.github.io/neural-networks-2/#reg" rel="nofollow noreferrer">CS231</a>).</p>
<p>While L1 regularization does encourage sparsity, it does not guarantee that the output will be sparse. The parameter updates from stochastic gradient descent are inherently noisy. Thus, the probability that any given parameter is exactly 0 is vanishingly small.</p>
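<p>A rudimentary post-hoc fix is to threshold near-zero weights to exactly zero after training. A minimal Keras sketch (the toy model below merely stands in for an actual trained, L1-regularized network, and the threshold value is arbitrary):</p>
<pre><code>import numpy as np
from tensorflow import keras

# toy model standing in for an actual trained, L1-regularized network
model = keras.Sequential([keras.layers.Dense(4, input_shape=(8,)),
                          keras.layers.Dense(1)])

threshold = 1e-3  # tune: how close to zero should count as zero
weights = [np.where(np.abs(w) < threshold, 0.0, w) for w in model.get_weights()]
model.set_weights(weights)

sparsity = np.mean([float(np.mean(w == 0.0)) for w in weights])
print(f"fraction of exactly-zero parameters: {sparsity:.2%}")
</code></pre>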
<p>Indeed, many of the parameters of an L1-regularized network are often close to 0, so such thresholding can recover substantial sparsity. There has also been research exploring more advanced methods of generating sparse neural networks. In <a href="https://arxiv.org/abs/1611.06694" rel="nofollow noreferrer">this paper</a>, the authors simultaneously prune and train a neural network to achieve 90-95% sparsity on a number of well-known network architectures.</p> | 2017-10-29 16:53:27.670000+00:00 | 2017-10-29 16:53:27.670000+00:00 | null | null | 43,146,015 | <p>I am employing L1 regularization on my neural network parameters in Keras with <code>keras.regularizers.l1(0.01)</code> to obtain a sparse model. I am finding that, while many of my coefficients are <em>close</em> to zero, few of them are actually zero.</p>
<p>Upon looking at <a href="https://github.com/fchollet/keras/blob/master/keras/regularizers.py" rel="noreferrer">the source code for the regularization</a>, it suggests that Keras simply adds the L1 norm of the parameters to the loss function.</p>
<p>This would be incorrect because the parameters would almost certainly never go to zero (within floating point error) as intended with L1 regularization. The L1 norm is not differentiable when a parameter is zero, so subgradient methods need to be used where the parameters are set to zero if close enough to zero in the optimization routine. See the soft threshold operator <code>max(0, ..)</code> <a href="https://en.wikipedia.org/wiki/Lasso_(statistics)#Orthonormal_covariates" rel="noreferrer">here</a>.</p>
<p>Does Tensorflow/Keras do this, or is this impractical to do with stochastic gradient descent?</p>
<p>EDIT: Also <a href="http://jocelynchi.com/soft-thresholding-operator-and-the-lasso-solution" rel="noreferrer">here</a> is a superb blog post explaining the soft thresholding operator for L1 regularization.</p> | 2017-03-31 16:57:05.940000+00:00 | 2019-05-09 09:09:43.257000+00:00 | 2018-04-24 11:35:16.730000+00:00 | tensorflow|machine-learning|neural-network|deep-learning|keras | ['http://cs231n.github.io/neural-networks-2/#reg', 'https://arxiv.org/abs/1611.06694'] | 2 |
29,708,311 | <p>You'll need to read the specifications yourself, the official URL is:</p>
<p><a href="https://www.opennetworking.org/sdn-resources/technical-library" rel="nofollow">https://www.opennetworking.org/sdn-resources/technical-library</a></p>
<p>Each specification has the historical change logs for all previous versions listed in Appendix B.</p>
<p>Some advice: Start with Openflow 1.0, then read the 1.0.1 errata which clarifies things a bit. Then read the newer versions if you really need to. The newer versions > 1.0 add a <em>lot</em> of complexity which makes them hard to understand if you don't know Openflow 1.0 already.</p>
<p>Personal note: Most publicly available implementations still only support version 1.0, newer ones have recently started to support v.1.3. A lot of research is still being done with v.1.0, so it's not completely obsolete yet. Newer versions are mainly extensions of Openflow 1.0, possibly with some fixes for edge cases and the like.</p>
<p>Other resources: This document by the Open vSwitch maintainers has an overview of some things that changed over time (with tables comparing versions):</p>
<p><a href="http://openvswitch.org/support/dist-docs/DESIGN.md.txt" rel="nofollow">"Design Decisions in Openflow"</a></p>
<p>It may also help to read the Wiki pages and archived discussions leading up to the release of Openflow 1.0 on the previous website (not maintained anymore):</p>
<p><a href="http://archive.openflow.org/wk/index.php/OpenFlow_Releases" rel="nofollow">http://archive.openflow.org/wk/index.php/OpenFlow_Releases</a></p>
<p>Finally, you may want to take a look at a survey paper to get an overview over some of the projects being worked on, e.g. this one:</p>
<p><a href="http://arxiv.org/abs/1406.0440" rel="nofollow">"Software-Defined Networking: A Comprehensive Survey"</a></p> | 2015-04-17 19:57:12.930000+00:00 | 2015-07-16 13:20:24.283000+00:00 | 2015-07-16 13:20:24.283000+00:00 | null | 28,714,959 | <p>I'm new to openflow protocol. I think, there are 5 version of openflow protocol available (1.1 to 1.5). Can somebody help me out in understanding or a provide a link which summarizes the difference b/w these versions?</p>
<p>Thanks</p> | 2015-02-25 09:05:21.953000+00:00 | 2018-11-25 11:41:19.890000+00:00 | null | openflow | ['https://www.opennetworking.org/sdn-resources/technical-library', 'http://openvswitch.org/support/dist-docs/DESIGN.md.txt', 'http://archive.openflow.org/wk/index.php/OpenFlow_Releases', 'http://arxiv.org/abs/1406.0440'] | 4 |
73,118,125 | <p><strong>What is the purpose of positional embeddings?</strong></p>
<p>In transformers (BERT included) the only interaction between the different tokens is done via self-attention layers. If you look closely at the mathematical operation implemented by these layers you will notice that these layers are <strong>permutation <a href="https://en.wikipedia.org/wiki/Equivariant_map" rel="nofollow noreferrer">equivariant</a></strong>: That is, the representation of<br />
<em>"I do like coding"</em><br />
and<br />
<em>"Do I like coding"</em><br />
is the same, because the words (=tokens) are the same in both sentences, only their order is different.<br />
As you can see, this "permutation equivariance" is not a desired property in many cases.<br />
To break this symmetry/equivariance one can simply "code" the actual position of each word/token in the sentence. For example:<br />
<em>"I_1 do_2 like_3 coding_4"</em><br />
is no longer identical to<br />
<em>"Do_1 I_2 like_3 coding_4"</em></p>
<p>This is the purpose of positional encoding/embeddings -- to make self-attention layers sensitive to the order of the tokens.</p>
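<p>As a minimal PyTorch sketch (my own illustration, not from any particular library) of what a learned positional embedding looks like:</p>
<pre><code>import torch
import torch.nn as nn

class LearnedPositionalEncoding(nn.Module):
    def __init__(self, max_len: int, d_model: int):
        super().__init__()
        # one trainable vector per position, learned like any other weight
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))

    def forward(self, x):                  # x: (batch, seq_len, d_model)
        return x + self.pos[:, :x.size(1)]

enc = LearnedPositionalEncoding(max_len=512, d_model=768)
tokens = torch.randn(2, 10, 768)
print(enc(tokens).shape)                   # torch.Size([2, 10, 768])
</code></pre>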
<p>Now to your questions:</p>
<ol>
<li>learnable position encoding is indeed implemented with a simple single <code>nn.Parameter</code>. The position encoding is just a "code" added to each token marking its position in the sequence. Therefore, all it requires is a tensor of the same size as the input sequence with different values per position.</li>
<li><em>Is it enough to introduce position encoding once in a transformer architecture?</em> Yes! Since transformers stack multiple self-attention layers it is enough to add positional embeddings once at the beginning of the processing. The position information is "fused" into the semantic representation learned per token.<br />
A nice visualization of this effect in Vision Transformers (ViT) can be found in this work:<br />
<em>Shir Amir, Yossi Gandelsman, Shai Bagon and Tali Dekel</em> <a href="https://arxiv.org/abs/2112.05814" rel="nofollow noreferrer"><strong>Deep ViT Features as Dense Visual Descriptors</strong></a> (arXiv 2021).<br />
In sec. 3.1 and fig. 3 they show how the position information dominates the representation of tokens at early layers, but as you go deeper in a transformer, semantic information takes over.</li>
</ol> | 2022-07-26 05:29:46.647000+00:00 | 2022-07-26 05:29:46.647000+00:00 | null | null | 73,113,261 | <p>I was recently reading the bert source code from the hugging face project. I noticed that the so-called "learnable position encoding" seems to refer to a specific nn.Parameter layer when it comes to implementation.</p>
<pre class="lang-py prettyprint-override"><code>def __init__(self):
super()
positional_encoding = nn.Parameter()
def forward(self, x):
x += positional_encoding
</code></pre>
<p>↑ Something like the above, I think, is how the learnable position encoding is implemented. I'm not sure whether it really is that simple or whether I've understood it correctly, so I want to ask someone with experience.</p>
<p>In addition, I noticed that in the classic BERT structure the position is actually encoded only once, at the initial input. Does this mean that the subsequent BERT layers lose the ability to capture positional information?</p>
<pre class="lang-py prettyprint-override"><code>BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(...)
...
(pooler): BertPooler(...)
</code></pre>
<p>Would I get better results if the output of each layer were positionally re-encoded before the next BERT layer?</p> | 2022-07-25 17:37:50.353000+00:00 | 2022-07-26 10:04:28.623000+00:00 | 2022-07-26 10:04:28.623000+00:00 | deep-learning|pytorch|bert-language-model|transformer-model | ['https://en.wikipedia.org/wiki/Equivariant_map', 'https://arxiv.org/abs/2112.05814'] | 2 |
38,040,580 | <p>You can use feature selection for your data. some good feature selection can reduce features up to 90% and persist the classification performance.
In feature selection you select top feature(in <strong>Bag Of Word</strong> model, you select top influence words), and train model based on these words(features). this reduce the dimension of your data(also it prevent Curse Of Dimensionality)
here is a good survey:
<a href="https://arxiv.org/pdf/1602.02850.pdf" rel="nofollow">Survey on feature selection</a></p>
<p>In Brief:</p>
<p>Two feature selection approach is available: Filtering and Wrapping</p>
<p>Filtering approach is almost based on information theory. search "Mutual Information", "chi2" and... for this type of feature selection</p>
<p>Wrapping approach use the classification algorithm to estimate the most important features in the library. for example you select some words and evaluate classification performance(recall,precision).</p>
<p>Also some others approch can be usefull. LSA and LSI can outperform the classification performance and time:
<a href="https://en.wikipedia.org/wiki/Latent_semantic_analysis" rel="nofollow">https://en.wikipedia.org/wiki/Latent_semantic_analysis</a></p>
<p>You can use sickit for feature selection and LSA:</p>
<p><a href="http://scikit-learn.org/stable/modules/feature_selection.html" rel="nofollow">http://scikit-learn.org/stable/modules/feature_selection.html</a></p>
<p><a href="http://scikit-learn.org/stable/modules/decomposition.html" rel="nofollow">http://scikit-learn.org/stable/modules/decomposition.html</a></p> | 2016-06-26 16:21:16.650000+00:00 | 2016-06-26 16:21:16.650000+00:00 | null | null | 37,969,425 | <p>So i am using textblob python library, but the performance is lacking.</p>
<p>I already serialize it and load it before the loop( using pickle ).</p>
<p>It currently takes ~ 0.1( for small training data ) and ~ 0.3 on 33'000 test data. I need to make it faster, is it even possible ?</p>
<h1><strong>Some code:</strong></h1>
<pre><code># Pass trainings before loop, so we can make performance a lot better
trained_text_classifiers = load_serialized_classifier_trainings(config["ALL_CLASSIFICATORS"])
# Specify witch classifiers are used by witch classes
filter_classifiers = get_classifiers_by_resource_names(trained_text_classifiers, config["FILTER_CLASSIFICATORS"])
signal_classifiers = get_classifiers_by_resource_names(trained_text_classifiers, config["SIGNAL_CLASSIFICATORS"])
for (url, headers, body) in iter_warc_records(warc_file, **warc_filters):
start_time = time.time()
body_text = strip_html(body);
# Check if url body passess filters, if yes, index, if no, ignore
if Filter.is_valid(body_text, filter_classifiers):
print "Indexing", url.url
resp = indexer.index_document(body, body_text, signal_classifiers, url=url, headers=headers, links=bool(args.save_linkgraph_domains))
else:
print "\n"
print "Filtered out", url.url
print "\n"
resp = 0
</code></pre>
<p>This is the loop witch performs check on each of the warc file's body and metadata.</p>
<p>there are 2 text classification checks here.</p>
<p>1) In Filter( very small training data ):</p>
<pre><code>if trained_text_classifiers.classify(body_text) == "True":
return True
else:
return False
</code></pre>
<p>2) In index_document( 33'000 training data ):</p>
<pre><code>prob_dist = trained_text_classifier.prob_classify(body)
prob_dist.max()
# Return the propability of spam
return round(prob_dist.prob("spam"), 2)
</code></pre>
<p>The classify and prob_classify are the methods that take the tool on performance.</p> | 2016-06-22 13:23:27.023000+00:00 | 2016-06-26 16:40:58.637000+00:00 | 2016-06-22 13:46:25.530000+00:00 | python|performance|machine-learning|textblob | ['https://arxiv.org/pdf/1602.02850.pdf', 'https://en.wikipedia.org/wiki/Latent_semantic_analysis', 'http://scikit-learn.org/stable/modules/feature_selection.html', 'http://scikit-learn.org/stable/modules/decomposition.html'] | 4 |
71,206,804 | <p>Not sure if this will help in your particular use-case, but you could work with an approximation of the Hessian, e.g. empirical Fisher (EF). I've worked with this approach to implement Laplace approximation for Flux models (see <a href="https://github.com/pat-alt/BayesLaplace.jl" rel="nofollow noreferrer">here</a>) inspired by this <a href="https://github.com/AlexImmer/Laplace" rel="nofollow noreferrer">PyTorch implementation</a>. Below I've applied the approach to your example.</p>
<pre><code>using Flux: Chain, Dense, σ, crossentropy, params, DataLoader
using Zygote
using Random
Random.seed!(2022)
model = Chain(
x -> reshape(x, :, size(x, 4)),
Dense(2, 5),
Dense(5, 1),
x -> σ.(x)
)
n_data = 5
input = randn(2, 1, 1, n_data)
target = randn(1, n_data)
loss(x, y) = crossentropy(model(x), y)
n_params = length(reduce(vcat, [vec(θ) for θ ∈ params(model)]))
H = zeros(n_params, n_params)   # Hessian approximation (empirical Fisher)
data = DataLoader((input, target))
for d in data
    x, y = d
    gs = gradient(() -> loss(x, y), params(model))
    g = reduce(vcat, [vec(gs[θ]) for θ ∈ params(model)])   # flattened gradient
    H += g * g'   # empirical Fisher: outer product of the gradient
end
</code></pre>
<p>Should there be a way to use Zygote autodiff directly (and more efficiently), I'd also be interested to see that. Using EF for the full Hessian still scales quadratically in the number of parameters, but as shown in this NeurIPS 2021 <a href="https://arxiv.org/abs/2106.14806" rel="nofollow noreferrer">paper</a> you can further approximate the Hessian using a (block-)diagonal factorization. The paper also shows that in the context of Bayesian deep learning, treating only the last layer probabilistically generally yields good results, but again I am not sure if that is relevant in your case.</p> | 2022-02-21 13:03:22.977000+00:00 | 2022-02-21 13:03:22.977000+00:00 | null | null | 66,345,394 | <p>How would you calculate the Hessian of a loss function that consists of a neural network w.r.t. the NN's parameters?</p>
<p>For instance, consider the loss function below</p>
<pre><code>using Flux: Chain, Dense, σ, crossentropy, params
using Zygote
model = Chain(
x -> reshape(x, :, size(x, 4)),
Dense(2, 5),
Dense(5, 1),
x -> σ.(x)
)
n_data = 5
input = randn(2, 1, 1, n_data)
target = randn(1, n_data)
loss = model -> crossentropy(model(input), target)
</code></pre>
<p>I can get a gradient w.r.t parameters in two ways…</p>
<pre><code>Zygote.gradient(model -> loss(model), model)
</code></pre>
<p>or</p>
<pre><code>grad = Zygote.gradient(() -> loss(model), params(model))
grad[params(model)[1]]
</code></pre>
<p>However, I can’t find a way to get a hessian w.r.t its parameters. (I want to do something like <code>Zygote.hessian(model -> loss(model), model)</code>, but <code>Zygote.hessian</code> does not take <code>::Params</code> as an input)</p>
<p>Recently, a <code>jacobian</code> function <a href="https://github.com/FluxML/Zygote.jl/issues/910" rel="nofollow noreferrer">was added</a> to the master branch (issue #910), which <a href="https://github.com/FluxML/Zygote.jl/pull/414" rel="nofollow noreferrer">understands <code>::Params</code> as an input</a>.</p>
<p>I've been trying to combine <code>gradient</code> and <code>jacobian</code> to get a Hessian (because the Hessian is the Jacobian of the gradient of a function), but to no avail.
I think the problem is that <code>model</code> is a <code>Chain</code> object that includes generic functions like <code>reshape</code> and <code>σ.</code>, which lack parameters, but I can't get past this.</p>
<pre><code>grad = model -> Zygote.gradient(model -> loss(model), model)
jacob = model -> Zygote.jacobian(grad, model)
jacob(model) ## does not work
</code></pre>
<p>EDIT: For reference, I've <a href="https://github.com/stash-196/pytorch-derivatives/blob/main/derivatives.py" rel="nofollow noreferrer">created this in pytorch before</a></p> | 2021-02-24 05:51:44.090000+00:00 | 2022-02-21 13:03:22.977000+00:00 | 2021-07-21 12:17:26.843000+00:00 | julia|flux.jl | ['https://github.com/pat-alt/BayesLaplace.jl', 'https://github.com/AlexImmer/Laplace', 'https://arxiv.org/abs/2106.14806'] | 3 |
45,531,658 | <p>Nassim answer is believed to be True for small networks and datasets but recent <a href="https://arxiv.org/pdf/1706.10239.pdf" rel="nofollow noreferrer">articles</a> (or e.g. <a href="https://arxiv.org/abs/1412.0233" rel="nofollow noreferrer">this</a> one) makes us believe that for deeper networks (with more than 4 layers) - not shuffling your data set might be considered as some kind of regularization - as poor minima are expected to be deep but small and good minima are expected to be wide and hard to leave. </p>
<p>As for inference time, the only way this might harm your inference process is when you use the training distribution of your data in a highly coupled manner - e.g. using <code>BatchNormalization</code> or <code>Dropout</code> as in the training phase (this is sometimes used for some kinds of Bayesian Deep Learning).</p> | 2017-08-06 11:40:55.520000+00:00 | 2017-08-06 11:40:55.520000+00:00 | null | null | 45,479,626 | <p>Suppose one makes a neural network using Keras. Do the trained weights depend on the order in which the training data has been fed into the system? Is it OK to feed data belonging to one category first and then data belonging to another category, or should they be random?</p> | 2017-08-03 09:04:46.790000+00:00 | 2017-08-06 11:40:55.520000+00:00 | null | machine-learning|keras | ['https://arxiv.org/pdf/1706.10239.pdf', 'https://arxiv.org/abs/1412.0233'] | 2
53,633,247 | <p>Sure, it is possible. This is called style transfer and there has been a lot of work on that. In a way you learn a mapping function from the manifold of dogs to the manifold of cats. A famous work in that direction is the CycleGAN paper (<a href="https://arxiv.org/pdf/1703.10593.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1703.10593.pdf</a>), which uses a cycle-consistency loss to map from one domain to the other and back. This makes the training more stable and the resulting images closer to the initial images.</p> | 2018-12-05 13:16:47.813000+00:00 | 2018-12-05 13:16:47.813000+00:00 | null | null | 53,625,405 | <p>Is it possible for the <strong>Generator</strong> to learn a distribution when the <strong>noise</strong> is a specific input, say <strong>n images</strong>, instead of <strong>random noise</strong>? For example, there are two categories of images with labels <strong>0</strong> and <strong>1</strong>, say <strong>0 for cats</strong> and <strong>1 for dogs</strong>. Is it possible to train the generator so that we feed it a dog and it will generate a cat image for that dog image?
This query is somewhat like deblurring images, but what if no clear image is given for each blurred image and we are just given random clear images?</p> | 2018-12-05 04:56:22.503000+00:00 | 2018-12-05 13:16:47.813000+00:00 | null | deep-learning|computer-vision|artificial-intelligence|generative-adversarial-network|dcgan | ['https://arxiv.org/pdf/1703.10593.pdf'] | 1
70,563,103 | <p><strong>1. The difference between the two programs:</strong><br />
Conceptually, your two implementations are the same: you forward <code>gradient_accumulation_steps</code> batches for each weight update.<br />
As you already observed, the second method requires more memory resources than the first one.</p>
<p>There is, however, a slight difference: usually, loss function implementations use <code>mean</code> to reduce the loss over the batch. When you use gradient accumulation (first implementation) you reduce using <code>mean</code> over each mini-batch, but using <em><code>sum</code></em> over the accumulated <code>gradient_accumulation_steps</code> mini-batches. To make sure the accumulated gradient implementation is <em>identical</em> to the large-batch implementation you need to be very careful in the way the loss function is reduced. In many cases you will need to divide the accumulated loss by <code>gradient_accumulation_steps</code>. See <a href="https://stackoverflow.com/a/65913698/1714410">this answer</a> for a detailed implementation.</p>
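<p>For example, a minimal sketch of the first implementation with the loss scaled accordingly (assuming <code>loss_fn</code> reduces with <code>mean</code>):</p>
<pre><code>gradient_accumulation_steps = 5

for batch_idx, batch in enumerate(dataset):
    x_batch, y_true_batch = batch
    y_pred_batch = model(x_batch)
    # scale the mean-reduced mini-batch loss so the accumulated gradient
    # matches the gradient of one large batch
    loss = loss_fn(y_true_batch, y_pred_batch) / gradient_accumulation_steps
    loss.backward()
    if (batch_idx + 1) % gradient_accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
</code></pre>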
<hr />
<p><strong>2. Batch size and learning rate:</strong>
Learning rate and batch size are indeed related. When increasing the batch size one usually reduces the learning rate.<br />
See, e.g.:<br />
<em>Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le</em>, <a href="https://arxiv.org/abs/1711.00489" rel="nofollow noreferrer"><strong>Don't Decay the Learning Rate, Increase the Batch Size</strong></a> (ICLR 2018).</p> | 2022-01-03 08:13:42.170000+00:00 | 2022-01-03 08:13:42.170000+00:00 | null | null | 70,461,130 | <p>I'm trying to get a better understanding of how Gradient Accumulation works and why it is useful. To this end, I wanted to ask what is the difference (if any) between these two possible PyTorch-like implementations of a custom training loop with gradient accumulation:</p>
<pre class="lang-py prettyprint-override"><code>gradient_accumulation_steps = 5
for batch_idx, batch in enumerate(dataset):
x_batch, y_true_batch = batch
y_pred_batch = model(x_batch)
loss = loss_fn(y_true_batch, y_pred_batch)
loss.backward()
if (batch_idx + 1) % gradient_accumulation_steps == 0: # (assumption: the number of batches is a multiple of gradient_accumulation_steps)
optimizer.step()
optimizer.zero_grad()
</code></pre>
<pre class="lang-py prettyprint-override"><code>y_true_batches, y_pred_batches = [], []
gradient_accumulation_steps = 5
for batch_idx, batch in enumerate(dataset):
x_batch, y_true_batch = batch
y_pred_batch = model(x_batch)
y_true_batches.append(y_true_batch)
y_pred_batches.append(y_pred_batch)
if (batch_idx + 1) % gradient_accumulation_steps == 0: # (assumption: the number of batches is a multiple of gradient_accumulation_steps)
y_true = stack_vertically(y_true_batches)
y_pred = stack_vertically(y_pred_batches)
loss = loss_fn(y_true, y_pred)
loss.backward()
optimizer.step()
optimizer.zero_grad()
y_true_batches.clear()
y_pred_batches.clear()
</code></pre>
<p>Also, kind of as an unrelated question: Since the purpose of gradient accumulation is to mimic a larger batch size in cases where you have memory constraints, does it mean that I should also increase the learning rate proportionally?</p> | 2021-12-23 10:54:46.213000+00:00 | 2022-01-03 08:13:42.170000+00:00 | 2021-12-23 11:05:33.077000+00:00 | python|pytorch|gradient-descent | ['https://stackoverflow.com/a/65913698/1714410', 'https://arxiv.org/abs/1711.00489'] | 2 |
68,382,495 | <p>The technical answer is no. KD is a different technique from ensembling.</p>
<p>But they are related in the sense that KD was originally proposed to distill larger models, and the authors specifically cite ensemble models as the type of larger model they experimented on.</p>
<p>Net net, give KD a try on your big model to see if you can keep a lot of the performance of the bigger model but with the size of the smaller model. I have empirically found that you can retain 75%-80% of the power of a 5x larger model after distilling it down to the smaller model.</p>
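<p>If you want to try it, here is a minimal PyTorch sketch of a distillation loss (the temperature <code>T</code> and mixing weight <code>alpha</code> are assumptions you would tune; <code>teacher_logits</code> come from your big model run in eval mode):</p>
<pre><code>import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # soft targets: match the teacher's softened output distribution
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * (T * T)
    # hard targets: the usual cross-entropy on the true labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
</code></pre>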
<p>From the abstract of the KD paper:</p>
<p>A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.</p>
<p><a href="https://arxiv.org/abs/1503.02531" rel="nofollow noreferrer">https://arxiv.org/abs/1503.02531</a></p> | 2021-07-14 17:13:06.837000+00:00 | 2021-07-14 17:13:06.837000+00:00 | null | null | 68,380,183 | <p>I don't know much about knowledge distillation.
I have one question.</p>
<p>There is a model showing 99% performance (10-class image classification). But I can't use a bigger model because I have to keep the inference time.</p>
<p>Would it have an ensemble effect if I train with knowledge distillation using another big model?</p>
<p>-------option-------
Or let me know if there's any other way to improve performance.</p>
<p><a href="https://i.stack.imgur.com/pjCFD.png" rel="nofollow noreferrer">enter image description here</a></p> | 2021-07-14 14:42:28.377000+00:00 | 2021-07-14 17:13:06.837000+00:00 | null | tensorflow|deep-learning|pytorch|classification | ['https://arxiv.org/abs/1503.02531'] | 1 |
64,050,355 | <p>Your noise_multiplier is too high for your number of clients_per_round. Following the methodology in <a href="https://arxiv.org/abs/1710.06963" rel="nofollow noreferrer">"Learning Differentially Private Language Models"</a>, you should first find the largest n_m that allows training with good utility, then scale up n_m <em>and proportionally scale up c_p_r</em> to train a final model with good privacy.</p> | 2020-09-24 16:13:21.983000+00:00 | 2020-09-24 16:13:21.983000+00:00 | null | null | 64,046,242 | <p>I was trying to use Tensorflow Privacy with TFF following the two examples provided in <a href="https://github.com/tensorflow/federated/tree/master/tensorflow_federated/python/research/differential_privacy" rel="nofollow noreferrer">here</a> with my own dataset.
I made sure that samples and targets were formatted correctly and everything worked before adding the DP process with clipping and noise.
Unfortunately, in any execution with DP enabled the model diverges instead of converging, with both train and validation loss increasing at each round.</p>
<pre><code>Round 0, 68.89s per round in average.
Train: loss=5.172, accuracy=0.222
Validation: loss=6.181, accuracy=0.002
Round 1, 61.52s per round in average.
Train: loss=4.087, accuracy=0.328
Validation: loss=6.747, accuracy=0.002
Round 2, 57.98s per round in average.
Train: loss=4.659, accuracy=0.227
Validation: loss=7.475, accuracy=0.002
Round 3, 56.62s per round in average.
Train: loss=5.354, accuracy=0.198
Validation: loss=8.409, accuracy=0.002
Updating the best state...
Round 4, 55.25s per round in average.
Train: loss=6.181, accuracy=0.172
Validation: loss=9.330, accuracy=0.004
Round 5, 54.36s per round in average.
Train: loss=7.739, accuracy=0.095
Validation: loss=10.311, accuracy=0.006
Round 6, 53.83s per round in average.
Train: loss=9.188, accuracy=0.037
Validation: loss=11.243, accuracy=0.006
Round 7, 53.63s per round in average.
Train: loss=9.581, accuracy=0.080
Validation: loss=12.214, accuracy=0.009
</code></pre>
<p>I have tried different combinations of clip and noise_multiplier but without achieving any results.
Here is an example:</p>
<pre><code> 'clients_per_round' : 20,
'client_epochs_per_round' : 2,
'uniform_weighting' : True,
'server_optimizer': 'adam',
'client_optimizer': 'adam',
'clip':0.05, #l2 norm
'noise_multiplier' : 1.0,
'adaptive_clip_learning_rate' : 0,
'target_unclipped_quantile' : 0.5,
'clipped_count_budget_allocation' : 0.1,
'per_vector_clipping' : False,
</code></pre>
<p>Any idea on what could be the problem? With 'noise_multiplier' : False everything was working properly.
The definition of the DP_query and the averaging process is basically the same as used in the example:</p>
<pre><code>dp_query = tff.utils.build_dp_query(
clip=FLAGS.clip,
noise_multiplier=FLAGS.noise_multiplier,
expected_total_weight=FLAGS.clients_per_round,
adaptive_clip_learning_rate=FLAGS.adaptive_clip_learning_rate,
target_unclipped_quantile=FLAGS.target_unclipped_quantile,
clipped_count_budget_allocation=FLAGS.clipped_count_budget_allocation,
expected_clients_per_round=FLAGS.clients_per_round,
per_vector_clipping=FLAGS.per_vector_clipping,
model=model_fn())
weights_type = tff.learning.framework.weights_type_from_model(model_fn)
aggregation_process = tff.utils.build_dp_aggregate_process(
weights_type.trainable, dp_query)
</code></pre>
<p>Thank you!</p> | 2020-09-24 12:15:40.627000+00:00 | 2020-09-24 16:13:21.983000+00:00 | 2020-09-24 12:23:19.480000+00:00 | tensorflow|tensorflow-federated | ['https://arxiv.org/abs/1710.06963'] | 1 |
46,530,037 | <p>In the typical jargon, when someone refers to a conv layer with N kernels of size (x, y), it is implied that the kernels actually have size (x, y, z), where z is the depth of the input volume to that layer.</p>
<p>Imagine what happens when the input image to the network has R, G, and B channels: each of the initial kernels itself has 3 channels. Subsequent layers are the same, treating the input volume as a multi-channel image, where the channels are now maps of some other feature.</p>
<p>The motion of that 3D kernel as it "sweeps" across the input is only 2D, so it is still referred to as a 2D convolution, and the output of that convolution is a 2D feature map.</p>
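<p>You can see this directly from the weight shapes. A rough sketch with Keras (layer sizes taken from the question):</p>
<pre><code>from keras.models import Sequential
from keras.layers import InputLayer, Conv2D

model = Sequential([
    InputLayer((101, 101, 1)),
    Conv2D(8, (11, 11)),   # kernel weights: (11, 11, 1, 8)
    Conv2D(16, (7, 7))     # kernel weights: (7, 7, 8, 16)
])
for w in model.get_weights():
    print(w.shape)  # each kernel spans the full depth of its input volume
</code></pre>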
<p>Edit:</p>
<p>I found a good quote about this in a recent paper, <a href="https://arxiv.org/pdf/1809.02601v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1809.02601v1.pdf</a></p>
<p>"In a convolutional layer, the input feature map X is a W<sub>1</sub> × H<sub>1</sub> × D<sub>1</sub> cube, with W<sub>1</sub>, H<sub>1</sub> and D<sub>1</sub> indicating its width, height and depth (also referred to as the number of channels), respectively. The output feature map, similarly, is a cube Z with W<sub>2</sub> × H<sub>2</sub> × D<sub>2</sub> entries. The convolution Z = f(X) is parameterized by D<sub>2</sub> convolutional kernels, each of which is a S × S × D<sub>1</sub> cube."</p> | 2017-10-02 16:49:38.690000+00:00 | 2018-09-13 17:18:36.890000+00:00 | 2018-09-13 17:18:36.890000+00:00 | null | 46,480,699 | <p>In a typical CNN, a conv layer will have Y filters of size NxM, and thus it has N x M x Y trainable parameters (not including bias).</p>
<p>Accordingly, in the following simple Keras model, I expect the second conv layer to have 16 kernels of size (7x7), and thus kernel weights of size (7x7x16). <strong>Why then are its weights actually of size (7x7x8x16)?</strong></p>
<p>I understand the mechanics of what is happening: the Conv2D layers are actually doing a 3D convolution, treating the output maps of the previous layer as channels. It has 16 3D kernels of size (7x7x8). What I don't understand is:</p>
<ul>
<li>why this is Keras's default behavior?</li>
<li><strong>how do I get a "traditional" convolutional layer</strong> without dropping down into the low-level API (avoiding that is my reason for using Keras in the first place)?</li>
</ul>
<p>_</p>
<pre><code>from keras.models import Sequential
from keras.layers import InputLayer, Conv2D
model = Sequential([
InputLayer((101, 101, 1)),
Conv2D(8, (11, 11)),
Conv2D(16, (7, 7))
])
model.weights
</code></pre> | 2017-09-29 02:04:12.223000+00:00 | 2018-09-13 17:18:36.890000+00:00 | 2017-09-29 02:09:49.057000+00:00 | deep-learning|keras|conv-neural-network | ['https://arxiv.org/pdf/1809.02601v1.pdf'] | 1 |
47,009,439 | <p>I would strongly suggest against using frequency domain methods for segmentation:<br>
The nice thing about frequency domain is that it captures non-local properties like textures. However, the price you pay for this global information is the lack of location information: it is very non-trivial to say "where in the picture this frequency comes from". Alas, this location information is <strong>crucial</strong> for segmentation: you must know which pixel belongs to what "frequency"/texture/region...
There are frequency-based descriptors that are designed to maintain some locality information (e.g., Gabor filters). However, these techniques are not easy to implement "out-of-the-box".</p>
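<p>For illustration, a minimal OpenCV sketch of a small Gabor filter bank (the kernel size, sigma, and wavelength here are arbitrary assumptions you would tune):</p>
<pre><code>import cv2
import numpy as np

img = cv2.imread('bees.jpg', cv2.IMREAD_GRAYSCALE)
responses = []
for theta in np.arange(0, np.pi, np.pi / 4):  # 4 orientations
    kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                              lambd=10.0, gamma=0.5)
    responses.append(cv2.filter2D(img, cv2.CV_32F, kern))
# per-pixel texture descriptor = stack of filter responses
features = np.stack(responses, axis=-1)
</code></pre>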
<p>Have you considered using deep semantic segmentation methods? If you do not have a lot of labeled images, I would suggest you look at something semi-supervised like <a href="https://arxiv.org/abs/1707.00243" rel="nofollow noreferrer"><em>Ning Xu, Brian Price, Scott Cohen, Jimei Yang, Thomas Huang</em>, <strong>Deep GrabCut for Object Selection</strong> (arXiv 2017)</a>.</p> | 2017-10-30 06:46:48.830000+00:00 | 2017-10-30 06:46:48.830000+00:00 | null | null | 47,006,053 | <p>I would like to segment this image <img src="https://i.stack.imgur.com/zS8X6.jpg" alt="image to segment"><br>
(I want only the bees; I have 100 images like that, not labelled). I think that the best way to do that is to use the frequency domain, because the bees seem to have specific frequencies. But I'm not sure how to do that. How do I find the right frequencies? </p>
<p>Or maybe you think of a better way to do that ? </p>
<p>Thanks in advance !</p> | 2017-10-29 22:49:16.257000+00:00 | 2017-11-04 08:29:55.123000+00:00 | 2017-10-30 09:54:36.833000+00:00 | python|c++|image-processing|fft|image-segmentation | ['https://arxiv.org/abs/1707.00243'] | 1 |
49,184,832 | <p>There is still no convolutional autoencoder example in mxnet, though there is some <a href="https://arxiv.org/ftp/arxiv/papers/1701/1701.04949.pdf" rel="nofollow noreferrer">progress in research</a> in that area. Anyway, there is <a href="https://github.com/apache/incubator-mxnet/issues/1549" rel="nofollow noreferrer">a ticket</a> for that on MxNet's GitHub, but it is still open. You are more than welcome to contribute, by, for example, <a href="https://blog.keras.io/building-autoencoders-in-keras.html" rel="nofollow noreferrer">migrating the code from Keras</a>.</p> | 2018-03-09 00:37:00.653000+00:00 | 2018-03-09 00:37:00.653000+00:00 | null | null | 43,391,400 | <p>I'm looking for implementations of a convolutional autoencoder using MxNet. But there is only one example of an autoencoder based on Fully Connected Networks, which is <a href="https://github.com/dmlc/mxnet/tree/master/example/autoencoder" rel="nofollow noreferrer">here</a>. There is also an issue asking similar questions on GitHub, but it has received very few responses. Is there any toy example of convolutional autoencoders implemented using MxNet?</p> | 2017-04-13 11:43:22.287000+00:00 | 2019-04-02 07:25:46.263000+00:00 | 2019-04-02 07:25:46.263000+00:00 | machine-learning|computer-vision|convolution|autoencoder|mxnet | ['https://arxiv.org/ftp/arxiv/papers/1701/1701.04949.pdf', 'https://github.com/apache/incubator-mxnet/issues/1549', 'https://blog.keras.io/building-autoencoders-in-keras.html'] | 3
41,133,632 | <p>This is a very interesting sequence. It is almost but not quite the order-4 Fibonacci (a.k.a. Tetranacci) numbers. Having extracted the <a href="https://stackoverflow.com/a/40964862/6732794">doubling formulas for Tetranacci</a> from its companion matrix, I could not resist doing it again for this very similar recurrence relation. </p>
<p>Before we get into the actual code, some definitions and a short derivation of the formulas used are in order. Define an integer sequence <code>A</code> such that:</p>
<pre><code>A(n) := A(n-1) + A(n-3) + A(n-4)
</code></pre>
<p>with initial values <code>A(0), A(1), A(2), A(3) := 1, 1, 1, 2</code>.</p>
<p>For <code>n >= 0</code>, this is the number of <a href="https://en.wikipedia.org/wiki/Composition_(combinatorics)" rel="nofollow noreferrer">integer compositions</a> of <code>n</code> into parts from the set <code>{1, 3, 4}</code>. This is the sequence that we ultimately wish to compute.</p>
<p>For convenience, define a sequence <code>T</code> such that:</p>
<pre><code>T(n) := T(n-1) + T(n-3) + T(n-4)
</code></pre>
<p>with initial values <code>T(0), T(1), T(2), T(3) := 0, 0, 0, 1</code>.</p>
<p>Note that <code>A(n)</code> and <code>T(n)</code> are simply shifts of each other. More precisely, <code>A(n) = T(n+3)</code> for all integers <code>n</code>. Accordingly, as elaborated by <a href="https://stackoverflow.com/a/41112123/6732794">another answer</a>, the companion matrix for both sequences is:</p>
<pre><code>[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
[1 1 0 1]
</code></pre>
<p>Call this matrix <code>C</code>, and let:</p>
<pre><code>a, b, c, d := T(n), T(n+1), T(n+2), T(n+3)
a', b', c', d' := T(2n), T(2n+1), T(2n+2), T(2n+3)
</code></pre>
<p>By induction, it can easily be shown that:</p>
<pre><code>[0 1 0 0]^n = [d-c-a c-b b-a a]
[0 0 1 0] [ a d-c c-b b]
[0 0 0 1] [ b b+a d-c c]
[1 1 0 1] [ c c+b b+a d]
</code></pre>
<p>As seen above, for any <code>n</code>, <code>C^n</code> can be fully determined from its rightmost column alone. Furthermore, multiplying <code>C^n</code> with its rightmost column produces the rightmost column of <code>C^(2n)</code>:</p>
<pre><code>[d-c-a c-b b-a a][a] = [a'] = [a(2d - 2c - a) + b(2c - b)]
[ a d-c c-b b][b] [b'] [ a^2 + c^2 + 2b(d - c)]
[ b b+a d-c c][c] [c'] [ b(2a + b) + c(2d - c)]
[ c c+b b+a d][d] [d'] [ b^2 + d^2 + 2c(a + b)]
</code></pre>
<p>Thus, if we wish to compute <code>C^n</code> for some <code>n</code> by repeated squaring, we need only perform a matrix-vector multiplication per step instead of a full matrix-matrix multiplication. </p>
<hr>
<p>Now, the implementation, in Python:</p>
<pre class="lang-python prettyprint-override"><code># O(n) integer additions or subtractions
def A_linearly(n):
a, b, c, d = 0, 0, 0, 1 # T(0), T(1), T(2), T(3)
if n >= 0:
for _ in range(+n):
a, b, c, d = b, c, d, a + b + d
else: # n < 0
for _ in range(-n):
a, b, c, d = d - c - a, a, b, c
return d # because A(n) = T(n+3)
# O(log n) integer multiplications, additions, subtractions.
def A_by_doubling(n):
n += 3 # because A(n) = T(n+3)
if n >= 0:
a, b, c, d = 0, 0, 0, 1 # T(0), T(1), T(2), T(3)
else: # n < 0
a, b, c, d = 1, 0, 0, 0 # T(-1), T(0), T(1), T(2)
# Unroll the final iteration to avoid computing extraneous values
for i in reversed(range(1, abs(n).bit_length())):
w = a*(2*(d - c) - a) + b*(2*c - b)
x = a*a + c*c + 2*b*(d - c)
y = b*(2*a + b) + c*(2*d - c)
z = b*b + d*d + 2*c*(a + b)
if (n >> i) & 1 == 0:
a, b, c, d = w, x, y, z
else: # (n >> i) & 1 == 1
a, b, c, d = x, y, z, w + x + z
if n & 1 == 0:
return a*(2*(d - c) - a) + b*(2*c - b) # w
else: # n & 1 == 1
return a*a + c*c + 2*b*(d - c) # x
print(all(A_linearly(n) == A_by_doubling(n) for n in range(-1000, 1001)))
</code></pre>
<p>Because it was rather trivial to code, the sequence is extended to negative <code>n</code> in the usual way. Also provided is a simple linear implementation to serve as a point of reference. </p>
<p>For <code>n</code> large enough, the logarithmic implementation above is 10-20x faster than directly exponentiating the companion matrix with <code>numpy</code>, by a simple (i.e. not rigorous, and likely flawed) timing comparison. And by my estimate, it would still take ~100 years to compute <code>A(10**12)</code>! Even though the algorithm above has room for improvement, that number is simply too large. On the other hand, computing <code>A(10**12) mod M</code> for some <code>M</code> is much more attainable. </p>
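<p>For completeness, a sketch of the same doubling method reduced mod <code>M</code> (assuming <code>n >= -3</code>); only the arithmetic changes:</p>
<pre class="lang-python prettyprint-override"><code># O(log n) multiplications on numbers smaller than M
def A_mod(n, M):
    "-> A(n) % M, via the same doubling recurrences."
    n += 3  # because A(n) = T(n+3)
    a, b, c, d = 0, 0, 0, 1 % M  # T(0), T(1), T(2), T(3)
    for i in reversed(range(abs(n).bit_length())):
        w = (a*(2*(d - c) - a) + b*(2*c - b)) % M
        x = (a*a + c*c + 2*b*(d - c)) % M
        y = (b*(2*a + b) + c*(2*d - c)) % M
        z = (b*b + d*d + 2*c*(a + b)) % M
        if (n >> i) & 1 == 0:
            a, b, c, d = w, x, y, z
        else:  # (n >> i) & 1 == 1
            a, b, c, d = x, y, z, (w + x + z) % M
    return a

print(A_mod(10**12, 10**9 + 7))
</code></pre>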
<hr>
<h3>A direct relation to Lucas and Fibonacci numbers</h3>
<p>It turns out that <code>T(n)</code> is even closer to the Fibonacci and <a href="https://en.wikipedia.org/wiki/Lucas_number" rel="nofollow noreferrer">Lucas numbers</a> than it is to Tetranacci. To see this, note that the characteristic polynomial for <code>T(n)</code> is <code>x^4 - x^3 - x - 1 = 0</code> which factors into <code>(x^2 - x - 1)(x^2 + 1) = 0</code>. The first factor is the characteristic polynomial for Fibonacci & Lucas! The 4 roots of <code>(x^2 - x - 1)(x^2 + 1) = 0</code> are the two Fibonacci roots, <code>phi</code> and <code>psi = 1 - phi</code>, and <code>i</code> and <code>-i</code>--the two square roots of <code>-1</code>.</p>
<p>The closed-form expression or "Binet" formula for <code>T(n)</code> will have the general form:</p>
<pre><code>T(n) = U(n) + V(n)
U(n) = p*(phi^n) + q*(psi^n)
V(n) = r*(i^n) + s*(-i)^n
</code></pre>
<p>for some constant coefficients <code>p, q, r, s</code>. </p>
<p>Using the initial values for <code>T(n)</code>, solving for the coefficients, applying some algebra, and noting that the Lucas numbers have the closed-form expression: <code>L(n) = phi^n + psi^n</code>, we can derive the following relations:</p>
<pre><code>       L(n+1) - L(n)   L(n-1)   F(n) + F(n-2)
U(n) = ------------- = ------ = -------------
             5            5           5
</code></pre>
<p>where <code>L(n)</code> is the n'th Lucas number with <code>L(0), L(1) := 2, 1</code> and <code>F(n)</code> is the n'th Fibonacci number with <code>F(0), F(1) := 0, 1</code>. And we also have:</p>
<pre><code>V(n) =  1 / 5   if n = 0 (mod 4)
     | -2 / 5   if n = 1 (mod 4)
     | -1 / 5   if n = 2 (mod 4)
     |  2 / 5   if n = 3 (mod 4)
</code></pre>
<p>Which is ugly, but trivial to code. Note that the numerator of <code>V(n)</code> <a href="https://math.stackexchange.com/q/2057991/368127">can also be succinctly expressed</a> as <code>cos(n*pi/2) - 2sin(n*pi/2)</code> or <code>(3-(-1)^n) / 2 * (-1)^(n(n+1)/2)</code>, but we use the piece-wise definition for clarity.</p>
<p>Here's an even nicer, more direct identity:</p>
<pre><code>T(n) + T(n+2) = F(n)
</code></pre>
<p>Essentially, we can compute <code>T(n)</code> (and therefore <code>A(n)</code>) by using Fibonacci & Lucas numbers. Theoretically, this should be much more efficient than the Tetranacci-like approach.</p>
<p>It is known that the Lucas numbers can computed more efficiently than Fibonacci, therefore we will compute <code>A(n)</code> from the Lucas numbers. The most efficient, simple Lucas number algorithm I know of is one by L.F. Johnson (see his <a href="https://arxiv.org/abs/1012.0284" rel="nofollow noreferrer">2010 paper</a>: <em>Middle and Ripple, fast simple O(lg n) algorithms for Lucas Numbers</em>). Once we have a Lucas algorithm, we use the identity: <code>T(n) = L(n - 1) / 5 + V(n)</code> to compute <code>A(n)</code>. </p>
<pre class="lang-python prettyprint-override"><code># O(log n) integer multiplications, additions, subtractions
def A_by_lucas(n):
n += 3 # because A(n) = T(n+3)
offset = (+1, -2, -1, +2)[n % 4]
L = lf_johnson_2010_middle(n - 1)
return (L + offset) // 5
def lf_johnson_2010_middle(n):
"-> n'th Lucas number. See [L.F. Johnson 2010a]."
#: The following Lucas identities are used:
#:
#: L(2n) = L(n)^2 - 2*(-1)^n
#: L(2n+1) = L(2n+2) - L(2n)
#: L(2n+2) = L(n+1)^2 - 2*(-1)^(n+1)
#:
#: The first and last identities are equivalent.
#: For the unrolled iteration, the following is also used:
#:
#: L(2n+1) = L(n)*L(n+1) - (-1)^n
#:
#: Since this approach uses only square multiplications per loop,
#: It turns out to be slightly faster than standard Lucas doubling,
#: which uses 1 square and 1 regular multiplication.
if n >= 0:
a, b, sign = 2, 1, +1 # L(0), L(1), (-1)^0
else: # n < 0
a, b, sign = -1, 2, -1 # L(-1), L(0), (-1)^(-1)
# unroll the last iteration to avoid computing unnecessary values
for i in reversed(range(1, abs(n).bit_length())):
a = a*a - 2*sign # L(2k)
c = b*b + 2*sign # L(2k+2)
b = c - a # L(2k+1)
sign = +1
if (n >> i) & 1:
a, b = b, c
sign = -1
if n & 1:
return a*b - sign
else:
return a*a - 2*sign
</code></pre>
<p>You may verify that <code>A_by_lucas</code> produces the same results as the previous <code>A_by_doubling</code> function, but is roughly 5x faster. Still not fast enough to compute <code>A(10**12)</code> in any reasonable amount of time! </p> | 2016-12-14 02:02:47.807000+00:00 | 2016-12-14 20:16:57.050000+00:00 | 2017-05-23 12:09:50.057000+00:00 | null | 41,111,249 | <p>This is a question given in this presentation. <a href="https://web.stanford.edu/class/cs97si/04-dynamic-programming.pdf" rel="nofollow noreferrer">Dynamic Programming</a></p>
<p>Now I have implemented the algorithm using recursion and it works fine for small values. But when n is greater than 30 it becomes really slow. The presentation mentions that for large values of n one should consider something similar to
<a href="https://math.stackexchange.com/questions/784710/how-to-prove-fibonacci-sequence-with-matrices">the matrix form of Fibonacci numbers</a>. I am having trouble understanding how to use the matrix form of Fibonacci numbers to come up with a solution. Can someone give me some hints or pseudocode?</p>
<p>Thanks</p> | 2016-12-12 23:13:30.223000+00:00 | 2016-12-14 20:16:57.050000+00:00 | 2017-04-13 12:19:16.067000+00:00 | algorithm|matrix|dynamic-programming|fibonacci | ['https://stackoverflow.com/a/40964862/6732794', 'https://en.wikipedia.org/wiki/Composition_(combinatorics)', 'https://stackoverflow.com/a/41112123/6732794', 'https://en.wikipedia.org/wiki/Lucas_number', 'https://math.stackexchange.com/q/2057991/368127', 'https://arxiv.org/abs/1012.0284'] | 6 |
48,504,955 | <p>My impression is that your problems come from mixing various approaches for various aspects (repeated measurements/correlation vs. heteroscedasticity) that cannot be mixed so easily. Instead of using random effects you might also consider fixed effects, or instead of only adjusting the inference for heteroscedasticity you might consider a Gaussian model and model both mean and variance, etc. For me, it's hard to say what is the best route forward here. Hence, I only comment on some aspects regarding the <code>sandwich</code> package:</p>
<p>The <code>sandwich</code> package is <em>not</em> limited to <code>lm</code>/<code>glm</code> only but is in principle object-oriented, see <code>vignette("sandwich-OOP", package = "sandwich")</code> (also published as <a href="https://dx.doi.org/10.18637/jss.v016.i09" rel="nofollow noreferrer">doi:10.18637/jss.v016.i09</a>).</p>
<p>There are suitable methods for a wide variety of packages/models but not
for <code>nlme</code> or <code>lme4</code>. The reason is that it's not so obvious for which mixed-effects models the usual sandwich trick actually works. (Disclaimer: But I'm no expert in mixed-effects modeling.)</p>
<p>However, for <code>lme4</code> there is a relatively new package
called <code>merDeriv</code> (<a href="https://CRAN.R-project.org/package=merDeriv" rel="nofollow noreferrer">https://CRAN.R-project.org/package=merDeriv</a>) that
supplies <code>estfun</code> and <code>bread</code> methods so that <code>sandwich</code> covariances can be
computed for <code>lmer</code> output etc. There is also a working paper associated
with that package: <a href="https://arxiv.org/abs/1612.04911" rel="nofollow noreferrer">https://arxiv.org/abs/1612.04911</a></p> | 2018-01-29 15:43:22.307000+00:00 | 2018-01-29 15:43:22.307000+00:00 | null | null | 48,479,984 | <p><a href="https://stackoverflow.com/questions/26224877/two-fixed-factors-nested-and-crossed-factors-in-r">This question</a> asks the same question, but hasn't been answered. My question relates to how to specify the model with the lm() function and is therefore a programming (not statistical) question.</p>
<p>I have a mixed design (2 repeated and 1 independent predictor). Participants were first primed into group A or B (this is the independent predictor) and then they rated how much they liked 4 different statements (these are the two repeated predictors).
There are many great online resources on how to model this data. However, my data is heteroscedastic. So I would like to use heteroscedasticity-consistent covariance matrices. <a href="https://www.ncbi.nlm.nih.gov/pubmed/24015776" rel="nofollow noreferrer">This paper</a> explains it well. The <a href="https://cran.r-project.org/web/packages/sandwich/sandwich.pdf" rel="nofollow noreferrer">sandwich</a> and <a href="https://cran.r-project.org/web/packages/lmtest/lmtest.pdf" rel="nofollow noreferrer">lmtest</a> packages are great. <a href="https://stats.stackexchange.com/questions/117052/replicating-statas-robust-option-in-r">Here</a> is a good explanation of how to do it for an independent design in R with lm(y ~ x). </p>
<p>It seems that I have to use lm, else it won't work? </p>
<p>Here is the code for a regression model assuming that all variances are equal (which they are not as Levene's test comes back significant). </p>
<pre><code>fit3 <- nlme:::lme(DV ~ repeatedIV1*repeatedIV2*independentIV1, random = ~1|participants, df) ##works fine
</code></pre>
<p>Here is the code for an indepedent model correcting for heteroscedasticity, which works.</p>
<pre><code>fit3 <- lm(DV ~ independentIV1)
library(sandwich)
vcovHC(fit3, type = 'HC4', sandwich = F)
library(lmtest)
coeftest(fit3, vcov. = vcovHC, type = 'HC4')
</code></pre>
<p>So my question really is: how do I specify my model with lm?
Alternative approaches in R for fitting my model while accounting for heteroscedasticity are welcome too!</p>
<p>Thanks a lot!!! </p> | 2018-01-27 20:00:23.433000+00:00 | 2020-07-02 18:45:38.170000+00:00 | 2018-01-29 10:03:01.020000+00:00 | r|regression|mixed-models | ['https://dx.doi.org/10.18637/jss.v016.i09', 'https://CRAN.R-project.org/package=merDeriv', 'https://arxiv.org/abs/1612.04911'] | 3 |
<p>There is a Grover-augmented <a href="https://arxiv.org/ftp/arxiv/papers/2007/2007.10328.pdf" rel="nofollow noreferrer">Viterbi algorithm</a> with a claimed quadratic runtime speedup. Methods have also been proposed for <a href="https://arxiv.org/pdf/0810.3828.pdf" rel="nofollow noreferrer">Quantum Reinforcement Learning</a>. More relevant than the search algorithm itself is the iterative process used to rotate the state vector, which has applications in algorithms in a number of domains (most prominently these days in quantum cryptography).</p> | 2021-02-05 17:33:49.017000+00:00 | 2021-02-05 17:33:49.017000+00:00 | null | null | 65,971,394 | <p>I am trying to see applications of Grover's algorithm. I have seen that it can be applied to DNA sequence alignment. I was wondering where in machine learning (deep learning, NLP and reinforcement learning) I can use Grover's algorithm.</p> | 2021-01-30 17:59:35.197000+00:00 | 2021-02-05 17:33:49.017000+00:00 | 2021-01-30 19:51:58.753000+00:00 | machine-learning|quantum-computing | ['https://arxiv.org/ftp/arxiv/papers/2007/2007.10328.pdf', 'https://arxiv.org/pdf/0810.3828.pdf'] | 2
23,044,975 | <p>I can point you to a non-parametric way to get the best ordering with respect to a weighted linear scoring system without knowing exactly what weights you want to use (just constraints on the weights). First though, note that average daily views might be misleading because movies are probably downloaded less in later years. So the first thing I would do is fit a polynomial model (degree 10 should be good enough) that predicts total number of views as a function of how many days the movie has been available. Once you have your fit, for each date you get the predicted total number of views, which is what you divide by to get the "relative average number of views", a multiplier that tells you how many times more likely (or less likely) the movie is to be watched compared to what you expect on average given the data. So 2 would mean the movie is watched twice as much, and 1/2 would mean the movie is watched half as much. If you want 2 and 1/2 to be "negatives" of each other, which sort of makes sense from a scoring perspective, then take the log of the multiplier to get the score. </p>
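<p>For illustration, a rough numpy sketch of that first step (the toy numbers and the low polynomial degree are assumptions; with more movies you could push the degree up towards 10):</p>
<pre><code>import numpy as np

# toy inputs: days since upload and total views, one entry per movie
days  = np.array([12., 7., 6., 5., 4.])
views = np.array([18., 500., 200., 10000., 5000.])

fit = np.polyfit(days, views, 3)        # polynomial view-count model
expected = np.polyval(fit, days)        # predicted total views given age
view_score = np.log(views / expected)   # > 0 means hotter than expected
</code></pre>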
<p>Now, there are several quantities you can compute to include in an overall score, like the (log) "relative average number of views" I mentioned above, and (likes/total views) and (dislikes / total views). US News and World Report ranks universities each year, and they just use a weighted sum of 7 different category scores to get an overall score for each university that they rank by. So using a weighted linear combination of category scores is definitely not a bad way to go. (Noting that you may want to do something like a log transform on some categories before taking the linear combination of scores). The problem is you might not know exactly what weights to use to give the "most desirable" ranking. The first thing to note is that if you want the weights on the same scale, then you should normalize each category score so that it has standard deviation equal to 1 across all movies. Then, e.g., if you use equal weights, then each category is truly weighted equally. So then the question is what kinds of weights you want to use. Clearly the weights for relative number of views and proportion of likes should be positive, and the weight for proportion of dislikes should be negative, so multiply the dislike score by -1 and then you can assume all weights are positive. If you believe each category should contribute at least 20%, then you get that each weight is at least 0.2 times the sum of weights. If you believe that dislikes are more important than likes, then you can say (dislike weight) >= c*(like weight) for some c > 1, or (dislike weight) >= c*(sum of weights) + (like weight) for some c > 0. Similarly you can define other linear constraints on the weights that reflect your beliefs about what the weights should be, without picking exact values for the weights.</p>
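<p>And a minimal sketch of the normalized weighted sum (the category values and weights below are placeholders satisfying your constraints):</p>
<pre><code>import numpy as np

# per-movie category scores: [view_score, like_ratio, dislike_ratio]
X = np.array([[ 0.1, 0.82, 0.18],
              [ 1.2, 0.82, 0.18],
              [ 0.9, 1.00, 0.00],
              [-0.5, 0.00, 1.00],
              [ 0.3, 0.45, 0.55]])

Z = X / X.std(axis=0)              # put every category on the same scale
Z[:, 2] *= -1                      # flip dislikes so all weights are positive
w = np.array([0.4, 0.3, 0.3])      # placeholder weights
ranking = np.argsort(Z @ w)[::-1]  # indices of movies, best first
</code></pre>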
<p>Now here comes the fun part, which is the main thrust of my post. If you have linear inequality constraints on the weights, all of the form that a linear combination of the weights is greater than or equal to 0, but you don't know what weights to use, then you can simply compute all possible top-10 or top-20 rankings of movies that you can get for any choice of weights that satisfy your constraints, and then choose the top-k ordering which is supported by the largest VOLUME of weights, where the volume of weights is the solid angle of the polyhedral cone of weights which results in the particular top-k ordering. Then, once you've chosen the "most supported" top-k ranking, you can restrict the scoring parameters to be in the cone that gives you that ranking, and remove the top k movies, and compute all possibilities for the next top-10 or top-20 ranking of the remaining movies when the weights are restricted to respect the original top-k movies' ranking. Computing all obtainable top-k rankings of movies for restricted weights can be done much, much faster than enumerating all n(n-1)...(n-k+1) top-k possible rankings and trying them all out. If you have two or three categories then using polytope construction methods the obtainable top-k rankings can be computed in linear time in terms of the output size, i.e. the number of obtainable top-k rankings. The polyhedral computation approach also gives the inequalities that define the cone of scoring weights that give each top-k ranking, also in linear time if you have two or three categories. Then to get the volume of weights that give each ranking, you triangulate the cone and intersect with the unit sphere and compute the areas of the spherical triangles that you get. (Again linear complexity if the number of categories is 2 or 3). Furthermore, if you scale your categories to be in a range like [0,50] and round to the nearest integer, then you can prove that the number of obtainable top-k rankings is actually quite small if the number of categories is like 5 or less. (Even if you have a lot of movies and k is high). And when you fix the ordering for the current top group of movies and restrict the parameters to be in the cone that yields the fixed top ordering, this will further restrict the output size for the obtainable next best top-k movies. The output size does depend (polynomially) on k, which is why I recommended setting k=10 or 20 and computing top-k movies and choosing the best (largest volume) ordering and fixing it, and then computing the next best top-k movies that respect the ordering of the original top-k etc. </p>
<p>Anyway if this approach sounds appealing to you (iteratively finding successive choices of top-k rankings that are supported by the largest volume of weights that satisfy your weight constraints), let me know and I can produce and post a write-up on the polyhedral computations needed as well as a link to software that will allow you to do it with minimal extra coding on your part. In the meantime here is a paper <a href="http://arxiv.org/abs/0805.1026">http://arxiv.org/abs/0805.1026</a> I wrote on a similar study of 7-category university ranking data where the weights were simply restricted to all be non-negative (generalizing to arbitrary linear constraints on weights is straightforward).</p> | 2014-04-13 16:05:28.320000+00:00 | 2014-04-13 16:49:41.620000+00:00 | 2014-04-13 16:49:41.620000+00:00 | null | 22,949,258 | <p>I'm currently ranking videos on a website using a bayesian ranking algorithm, each video has:</p>
<ul>
<li><code>likes</code></li>
<li><code>dislikes</code></li>
<li><code>views</code></li>
<li><code>upload_date</code></li>
</ul>
<p>Anyone can <code>like</code> or <code>dislike</code> a video, a video is always <code>views + 1</code> when viewed and all videos have a unique <code>upload_date</code>.</p>
<p><br>
<strong>Data Structure</strong></p>
<p>The data is in the following format:</p>
<pre><code>| id | title | likes | dislikes | views | upload_date |
|------|-----------|---------|------------|---------|---------------|
| 1 | Funny Cat | 9 | 2 | 18 | 2014-04-01 |
| 2 | Silly Dog | 9 | 2 | 500 | 2014-04-06 |
| 3 | Epic Fail | 100 | 0 | 200 | 2014-04-07 |
| 4 | Duck Song | 0 | 10000 | 10000 | 2014-04-08 |
| 5 | Trololool | 25 | 30 | 5000 | 2014-04-09 |
</code></pre>
<p><br>
<strong>Current Weighted Ranking</strong></p>
<p>The following weighted ratio algorithm is used to rank and sort the videos so that the best rated are shown first.</p>
<p><em>This algorithm takes into account the <a href="http://en.wikipedia.org/wiki/Bayesian_average">bayesian average</a> to give a better overall ranking.</em></p>
<pre><code>Weighted Rating (WR) = ((AV * AR) + (V * R)) / (AV + V)

AV = Average number of total votes
AR = Average rating
V = This item's number of combined (likes + dislikes)
R = This item's current rating (likes - dislikes)
</code></pre>
<p><br>
<strong>Example current MySQL Query</strong></p>
<pre><code>SELECT id, title, (((avg_vote * avg_rating) + ((likes + dislikes) * (likes / dislikes)) ) / (avg_vote + (likes + dislikes))) AS score
FROM video
INNER JOIN (SELECT ((SUM(likes) + SUM(dislikes)) / COUNT(id)) AS avg_vote FROM video) AS t1
INNER JOIN (SELECT ((SUM(likes) - SUM(dislikes)) / COUNT(id)) AS avg_rating FROM video) AS t2
ORDER BY score DESC
LIMIT 10
</code></pre>
<p><em>Note: <code>views</code> and <code>upload_date</code> are not factored in.</em></p>
<p><br>
<strong>The Issue</strong></p>
<p>The ranking currently works well but it seems we are not making full use of all the data at our disposal.</p>
<p>Having <code>likes</code>, <code>dislikes</code>, <code>views</code> and an <code>upload_date</code> but only using two seems a waste because the <code>views</code> and <code>upload_date</code> are not factored in to account how much weight each <code>like</code> / <code>dislike</code> should have.</p>
<p>For example in the <strong>Data Structure</strong> table above, items <code>1</code> and <code>2</code> both have the same amount of <code>likes</code> / <code>dislikes</code> however item <code>2</code> was uploaded more recently so it's average daily views are higher.</p>
<p>Since item <code>2</code> has more likes and dislikes in a shorter time, those <code>likes</code> / <code>dislikes</code> should surely be weighted more strongly?</p>
<p><br>
<strong>New Algorithm Result</strong></p>
<p>Ideally the new algorithm with <code>views</code> and <code>upload_date</code> factored in would sort the data into the following result:</p>
<p><em>Note: <code>avg_views</code> would equal <code>(views / days_since_upload)</code></em></p>
<pre><code>| id | title | likes | dislikes | views | upload_date | avg_views |
|------|-----------|---------|------------|---------|---------------|-------------|
| 3 | Epic Fail | 100 | 0 | 200 | 2014-04-07 | 67 |
| 2 | Silly Dog | 9 | 2 | 500 | 2014-04-06 | 125 |
| 1 | Funny Cat | 9 | 2 | 18 | 2014-04-01 | 2 |
| 5 | Trololool | 25 | 30 | 5000 | 2014-04-09 | 5000 |
| 4 | Duck Song | 0 | 10000 | 10000 | 2014-04-08 | 5000 |
</code></pre>
<p><em>The above is a simple representation, with more data it gets a lot more complex.</em></p>
<p><br>
<strong>The question</strong></p>
<p>So to summarise, my question is how can I factor <code>views</code> and <code>upload_date</code> into my current ranking algorithm in a way that improves how videos are ranked?</p>
<p>I think the above example by calculating the <code>avg_views</code> is a good way to go but where should I then add that into the ranking algorithm that I have?</p>
<p>It's possible that <strong>better ranking algorithms may exist</strong>; if this is the case, please provide an example of a different algorithm that I could use and state the benefits of using it.</p> | 2014-04-08 22:16:00.510000+00:00 | 2014-04-18 00:02:46.707000+00:00 | 2014-04-10 23:49:01.940000+00:00 | mysql|algorithm|sorting|statistics|ranking | ['http://arxiv.org/abs/0805.1026'] | 1
<p>If you mean the model architecture then I recommend looking at the graph in TensorBoard, the graph visualisation tool provided with TensorFlow. I'm pretty sure that the demo code/tutorial already implements all the code required to import into TensorBoard, so it should just be a case of running tensorboard and pointing it to the log directory. (This should be defined in the code near the top.)</p>
<p>Then run <code>tensorboard --logdir=/path/to/logs/</code> and click the graphs tab. You will then see the various graphs for the different runs.</p>
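<p>For reference, a rough sketch of loading the downloaded .pb yourself and writing its graph for TensorBoard (TF 1.x style; the file and log paths are placeholders):</p>
<pre><code>import tensorflow as tf

with tf.gfile.GFile('classify_image_graph_def.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

# writes an events file readable by: tensorboard --logdir=/tmp/inception_logs
writer = tf.summary.FileWriter('/tmp/inception_logs', graph)
writer.close()
</code></pre>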
<p>Alternatively, there are a couple of papers on inception that describe the model and the theory behind it. One is available <a href="https://arxiv.org/pdf/1512.00567.pdf" rel="nofollow noreferrer">here</a></p>
<p>Hope that helps you understand inception a bit more.</p> | 2017-03-28 09:22:45.640000+00:00 | 2017-03-28 09:22:45.640000+00:00 | null | null | 43,059,871 | <p>I am new to tensorflow. I have downloaded and run the image classifier provided on tensorflow website. I can see <a href="http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz" rel="nofollow noreferrer">link</a> that downloads model from web.
I need to read the .pb file in a human-readable format.</p>
<p>Is this possible? If yes, how?
Thanks!</p> | 2017-03-28 02:54:52.770000+00:00 | 2017-05-31 20:29:11.157000+00:00 | null | machine-learning|tensorflow|neural-network|deep-learning|protocol-buffers | ['https://arxiv.org/pdf/1512.00567.pdf'] | 1 |
54,873,391 | <p>There are many visualization methods. Each of these methods has its strengths and weaknesses.</p>
<p>However, you have to keep in mind that the methods partly visualize different things. Here is a short overview based on this <a href="https://arxiv.org/abs/1705.05598" rel="nofollow noreferrer">paper</a>.
You can distinguish between three main visualization groups: </p>
<ul>
<li><strong>Functions</strong> (gradients, saliency map): These methods visualize how a change in input space affects the prediction</li>
<li><strong>Signal</strong> (deconvolution, Guided BackProp, PatternNet): the signal (reason for a neuron's activation) is visualized. So this visualizes what pattern caused the activation of a particular neuron.</li>
<li><strong>Attribution</strong> (LRP, Deep Taylor Decomposition, PatternAttribution): these methods visualize how much a single pixel contributed to the prediction. As a result you get a heatmap highlighting which pixels of the input image most strongly contributed to the classification.</li>
</ul>
<p>Since you are asking how much a pixel has contributed to the classification, you should use methods of attribution. Nevertheless, the other methods also have their right to exist.</p>
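<p>As a quick illustration of the attribution idea, here is a rough model-agnostic occlusion sketch (the patch size, stride, and Keras-style <code>predict</code> interface are assumptions):</p>
<pre><code>import numpy as np

def occlusion_map(model, x, class_idx, patch=8, stride=4):
    # drop a zero patch at each location and record how much the
    # target class score falls; large drops mark important pixels
    h, w = x.shape[0], x.shape[1]
    base = model.predict(x[np.newaxis])[0, class_idx]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, top in enumerate(range(0, h - patch + 1, stride)):
        for j, left in enumerate(range(0, w - patch + 1, stride)):
            x_occ = x.copy()
            x_occ[top:top + patch, left:left + patch] = 0.0
            heat[i, j] = base - model.predict(x_occ[np.newaxis])[0, class_idx]
    return heat
</code></pre>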
<p>One nice toolbox for visualizing heatmaps is <a href="https://github.com/albermax/innvestigate" rel="nofollow noreferrer">iNNvestigate</a>.
This toolbox contains the following methods:</p>
<ul>
<li><a href="https://arxiv.org/abs/1706.03825" rel="nofollow noreferrer">SmoothGrad</a></li>
<li><a href="https://arxiv.org/abs/1311.2901" rel="nofollow noreferrer">DeConvNet</a></li>
<li><a href="https://arxiv.org/abs/1412.6806" rel="nofollow noreferrer">Guided BackProp</a></li>
<li><a href="https://arxiv.org/abs/1705.05598" rel="nofollow noreferrer">PatternNet</a></li>
<li><a href="https://arxiv.org/abs/1705.05598" rel="nofollow noreferrer">PatternAttribution</a></li>
<li>Occlusion</li>
<li><a href="https://arxiv.org/abs/1605.01713" rel="nofollow noreferrer">Input times Gradient</a></li>
<li><a href="https://arxiv.org/abs/1703.01365" rel="nofollow noreferrer">Integrated Gradients</a></li>
<li><a href="https://www.sciencedirect.com/science/article/pii/S0031320316303582?via%3Dihub" rel="nofollow noreferrer">Deep Taylor</a></li>
<li><a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140" rel="nofollow noreferrer">LRP</a></li>
<li><a href="https://openreview.net/forum?id=Sy21R9JAW" rel="nofollow noreferrer">DeepLift</a></li>
</ul> | 2019-02-25 19:30:38.760000+00:00 | 2019-02-25 19:30:38.760000+00:00 | null | null | 44,731,990 | <p>What are common techniques for finding which parts of images contribute most to image classification <strong>via convolutional neural nets</strong>?</p>
<p>In general, suppose we have 2D matrices with float values between 0 and 1 as entries. Each matrix is associated with a label (single-label, multi-class) and the goal is to perform classification via (Keras) 2D CNN's.</p>
<p>I'm trying to find methods to extract relevant subsequences of rows/columns that contribute most to classification. </p>
<p>Two examples:</p>
<p><a href="https://github.com/jacobgil/keras-cam" rel="noreferrer">https://github.com/jacobgil/keras-cam</a></p>
<p><a href="https://github.com/tdeboissiere/VGG16CAM-keras" rel="noreferrer">https://github.com/tdeboissiere/VGG16CAM-keras</a></p>
<p>Other examples/resources with an eye toward Keras would be much appreciated.</p>
<p>Note my datasets are not actual images, so using methods with ImageDataGenerator might not directly apply in this case.</p> | 2017-06-24 01:51:41.957000+00:00 | 2019-02-25 19:30:38.760000+00:00 | 2017-06-24 03:39:17.290000+00:00 | deep-learning|keras | ['https://arxiv.org/abs/1705.05598', 'https://github.com/albermax/innvestigate', 'https://arxiv.org/abs/1706.03825', 'https://arxiv.org/abs/1311.2901', 'https://arxiv.org/abs/1412.6806', 'https://arxiv.org/abs/1705.05598', 'https://arxiv.org/abs/1705.05598', 'https://arxiv.org/abs/1605.01713', 'https://arxiv.org/abs/1703.01365', 'https://www.sciencedirect.com/science/article/pii/S0031320316303582?via%3Dihub', 'https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140', 'https://openreview.net/forum?id=Sy21R9JAW'] | 12 |
43,334,594 | <p>There are a few subtle differences. </p>
<ol>
<li><p>You are trying to apply an ImageNet-style architecture to CIFAR-10. The first convolution is <code>3 x 3</code>, not <code>7 x 7</code>. There is no max-pooling layer; the image is downsampled purely by using stride-2 convolutions.</p></li>
<li><p>You should probably do mean-centering by keeping <code>featurewise_center = True</code> in <code>ImageDataGenerator</code> (see the sketch after this list).</p></li>
<li><p>Do not use very high number of filters such as [512, 1024, 2048]. There are only 50,000 images for you to train unlike ImageNet which has about a million.</p></li>
</ol>
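<p>For point 2, a minimal Keras sketch of mean-centering (assuming <code>x_train</code>/<code>y_train</code> and a compiled <code>model</code> already exist):</p>
<pre><code>from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(featurewise_center=True)
datagen.fit(x_train)  # computes the dataset mean that will be subtracted
model.fit_generator(datagen.flow(x_train, y_train, batch_size=128),
                    steps_per_epoch=len(x_train) // 128, epochs=100)
</code></pre>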
<p>In short, read section 4.2 in the <a href="https://arxiv.org/pdf/1512.03385.pdf" rel="nofollow noreferrer">deep residual network paper</a> and try to replicate the network. You may also read <a href="http://florianmuellerklein.github.io/wRN_vs_pRN/" rel="nofollow noreferrer">this</a> blog post.</p> | 2017-04-10 23:51:15.120000+00:00 | 2017-04-10 23:51:15.120000+00:00 | null | null | 43,334,080 | <p><a href="https://github.com/slavaglaps/ResNet_cifar10/blob/master/resnet.ipynb" rel="nofollow noreferrer">https://github.com/slavaglaps/ResNet_cifar10/blob/master/resnet.ipynb</a></p>
<p>This is my model trained for 100 epochs.
Accuracy on similar models and similar data reaches 90%.
What is my problem?
I think it's worth reducing the learning rate as the epochs progress.
What do you think could help me?</p> | 2017-04-10 22:48:46.547000+00:00 | 2017-04-10 23:51:15.120000+00:00 | null | deep-learning|keras | ['https://arxiv.org/pdf/1512.03385.pdf', 'http://florianmuellerklein.github.io/wRN_vs_pRN/'] | 2
63,248,565 | <p><strong>Yolov4 Vs Yolov3:</strong></p>
<ul>
<li>Yolov3 uses <strong><code>Darknet53</code></strong> as backbone, Yolov4 uses
<strong><code>CSPDarknet53</code></strong> as backbone.</li>
<li>Yolov4 uses <code>PANet</code> as the method of parameter aggregation from different backbone levels for different detector levels, instead of the <code>FPN</code> used in Yolov3.</li>
</ul>
<p>YOLOv4 consists of:</p>
<ol>
<li><strong>Backbone:</strong> CSPDarknet53 (Feature Extraction)</li>
<li><strong>Neck:</strong> Additional module - SPP, PANet [this was not there in Yolov3]</li>
<li><strong>Head:</strong> YOLOv3 (Dense Prediction Block) [This part is same as Yolov3]</li>
</ol>
<p><a href="https://i.stack.imgur.com/xrOwX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xrOwX.png" alt="Yolov4-arch" /></a></p>
<blockquote>
<p><strong>Neck:</strong> Object detectors developed in recent years often insert some
layers between backbone and head, and these layers are usually used to
collect feature maps from different stages.</p>
</blockquote>
<p>References:</p>
<ul>
<li>Read <a href="https://arxiv.org/pdf/2004.10934.pdf" rel="noreferrer">Yolov4</a> paper in detail</li>
<li>Read <a href="https://medium.com/@jonathan_hui/yolov4-c9901eaa8e61" rel="noreferrer">this</a> article</li>
</ul> | 2020-08-04 14:00:55.387000+00:00 | 2020-08-04 14:00:55.387000+00:00 | null | null | 63,244,184 | <p>I try to understand the architecture of Yolo4.
It is composed of a backbone, neck, dense prediction and sparse prediction.
Knowing that Yolo 3 has already a backbone, Is Yolo 4 taking all the architecture of Yolo 3 including its backbone or just part of Yolo3 ?</p>
<p>In page number 5 in paper Yolo 4, they've mentioned anchor based for Yolo3</p>
<p>Yolo 4 : <a href="https://arxiv.org/pdf/2004.10934.pdf" rel="noreferrer">https://arxiv.org/pdf/2004.10934.pdf</a></p>
<p>Yolo 3 :<a href="https://pjreddie.com/media/files/papers/YOLOv3.pdf" rel="noreferrer">https://pjreddie.com/media/files/papers/YOLOv3.pdf</a></p> | 2020-08-04 09:44:07.850000+00:00 | 2020-08-04 14:00:55.387000+00:00 | 2020-08-04 10:02:42.233000+00:00 | deep-learning|architecture|computer-vision|object-detection|yolo | ['https://i.stack.imgur.com/xrOwX.png', 'https://arxiv.org/pdf/2004.10934.pdf', 'https://medium.com/@jonathan_hui/yolov4-c9901eaa8e61'] | 3 |
51,703,669 | <p>To visualize your conceptional error: If you are trained to recognize images of cats, and every time you correctly shout "cat" during training time you get a cookie, what would you say if you suddenly see an image of a dog? <br/>
- Exactly, say "cat", since you still get the cookie. </p>
<p>More specifically, there is no way for your network to learn what "right" or "wrong" means, without having examples for both cases during training time.
Without a negative example, your training won't work in the classical sense, as it will always be "beneficial" for the network to say whatever you're showing it is the single class it knows.</p>
<p>The research area of single-class classification exists (see <a href="https://arxiv.org/pdf/1802.06360.pdf" rel="nofollow noreferrer">this</a> and <a href="https://arxiv.org/pdf/1801.05365.pdf" rel="nofollow noreferrer">this</a> paper, for example), but so far I would say that it would make much more sense to use some negative examples to get a decent performance, especially if you have an abundance of readily available training data at hand (namely, the non-zero images in MNIST).</p> | 2018-08-06 08:41:51.390000+00:00 | 2018-08-06 08:41:51.390000+00:00 | null | null | 51,696,537 | <p>I have a deep winding code written with TensorFlow. This code is for several classes. I want to change the code to a class so that I can teach the zero class and identify any data that is not like zero class data as non-zero. I use the sigmoid function and a neuron in the last layer. My model training is easy, but at the time of testing it only recognizes the same class for any other type of data.
I put the code below.
How do I change it to recognize non-class?</p>
<pre><code>h_drop = tf.nn.dropout(h_pool_flat, keep_prob=self.keep_prob)

# Output layer (named "softmax", but it actually applies a sigmoid)
with tf.name_scope('softmax'):
    softmax_w = tf.Variable(tf.truncated_normal([num_filters_total, self.num_classes], stddev=0.1), name='softmax_w')
    softmax_b = tf.Variable(tf.constant(0.1, shape=[self.num_classes]), name='softmax_b')

    # Add L2 regularization to output layer
    self.l2_loss += tf.nn.l2_loss(softmax_w)
    self.l2_loss += tf.nn.l2_loss(softmax_b)

    self.logits = tf.matmul(h_drop, softmax_w) + softmax_b
    predictions = tf.nn.sigmoid(self.logits)
    print(predictions)
    self.predictions = tf.argmax(predictions, 1, name='predictions')  # <-- the line in question

# Loss
with tf.name_scope('loss'):
    losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)
    # Add L2 losses
    self.cost = tf.reduce_mean(losses) + self.l2_reg_lambda * self.l2_loss

# Accuracy
with tf.name_scope('accuracy'):
    correct_predictions = tf.equal(self.predictions, self.input_y)
    print(self.input_y)
    print(self.predictions)
    self.correct_num = tf.reduce_sum(tf.cast(correct_predictions, tf.float32))
    self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32), name='accuracy')
</code></pre>
<p>This line needs to be changed, but I do not know how:
<strong>self.predictions = tf.argmax(predictions, 1, name='predictions')</strong>
Can you guide me?</p>
53,319,184 | <p><a href="https://tex.stackexchange.com/a/38367">original answer</a> from <a href="https://tex.stackexchange.com/users/5969/sebastian-busch">sebastian-busch</a>:</p>
<blockquote>
<p>I've written a python script that returns the corresponding bib-entry
from an arxiv ID, you can find it on
<a href="http://www.thamnos.de/misc/look-up-bibliographical-information-from-an-arxiv-id/" rel="nofollow noreferrer">http://www.thamnos.de/misc/look-up-bibliographical-information-from-an-arxiv-id/</a>.
If you save it e.g. as <code>arxiv2bib.py</code>, you can call it as
<code>arxiv2bib.py 1234.5678</code> or as
<code>arxiv2bib.py http://arxiv.org/abs/1234.5678</code>.</p>
</blockquote>
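<p>In case the linked script goes offline, here is a minimal self-contained sketch of the same idea against the public arXiv export API (the exact BibTeX fields and key format below are my own choices):</p>
<pre><code>import sys
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def arxiv2bib(arxiv_id):
    # The arXiv API returns an Atom feed with one entry per requested id.
    url = "http://export.arxiv.org/api/query?id_list=" + arxiv_id
    with urllib.request.urlopen(url) as resp:
        entry = ET.fromstring(resp.read()).find(ATOM + "entry")
    title = " ".join(entry.find(ATOM + "title").text.split())
    authors = " and ".join(a.find(ATOM + "name").text
                           for a in entry.findall(ATOM + "author"))
    year = entry.find(ATOM + "published").text[:4]
    return ("@misc{arxiv:%s,\n  title={%s},\n  author={%s},\n"
            "  year={%s},\n  eprint={%s},\n  archivePrefix={arXiv}\n}"
            % (arxiv_id, title, authors, year, arxiv_id))

if __name__ == "__main__":
    print(arxiv2bib(sys.argv[1]))  # e.g. arxiv2bib.py 1234.5678
</code></pre>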
<hr>
<p>Someone also made a PHP script that can do it.</p>
<p><a href="https://gist.github.com/MartinThoma/8133254" rel="nofollow noreferrer">PHP to automatically create BibTeX entry from arXiv</a></p>
<p>From answer in <a href="https://tex.stackexchange.com/questions/3833/how-to-cite-an-article-from-arxiv-using-bibtex">how-to-cite-an-article-from-arxiv-using-bibtex</a></p> | 2018-11-15 12:08:48.047000+00:00 | 2018-11-21 00:53:54.433000+00:00 | 2018-11-21 00:53:54.433000+00:00 | null | 53,318,385 | <p>I am currently using the following <code>curl</code> command to get bibliographic information for a scientific article from its digital object identifier (DOI) number:</p>
<pre><code>curl -LH "Accept: text/bibliography; style=bibtex" http://dx.doi.org/10.1901/jaba.1974.7-497a
</code></pre>
<p>I would like to be able to do something similar for <a href="http://www.arxiv.org" rel="nofollow noreferrer">arXiv</a> articles, i.e. send the article's arXiv number to some web service and get bibliographic information back. How would I go about doing this?</p>
<p>I am looking for a solution in bash, zsh, or python.</p> | 2018-11-15 11:22:59.547000+00:00 | 2018-11-21 00:53:54.433000+00:00 | 2018-11-16 12:29:21.560000+00:00 | python|bash|command-line|bibtex | ['https://tex.stackexchange.com/a/38367', 'https://tex.stackexchange.com/users/5969/sebastian-busch', 'http://www.thamnos.de/misc/look-up-bibliographical-information-from-an-arxiv-id/', 'https://gist.github.com/MartinThoma/8133254', 'https://tex.stackexchange.com/questions/3833/how-to-cite-an-article-from-arxiv-using-bibtex'] | 5 |
<p>From the two examples you've uploaded, I assume you are thresholding based on differences in color/intensity. I can suggest <a href="https://docs.opencv.org/trunk/d8/d83/tutorial_py_grabcut.html" rel="nofollow noreferrer">grabcut</a> as basic foreground separation: use the edges in the mask in that ROI as input to the algorithm.
Even better: if your thresholding is as good as in the first image, just skip the edge detection part and use the threshold result as the input to grabcut.</p>
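<p>A minimal sketch of mask-initialized grabCut (the file names are placeholders and the seeding choices are just one reasonable option):</p>
<pre><code>import cv2
import numpy as np

img = cv2.imread("scene.jpg")             # the original image
rough = cv2.imread("rough_mask.png", 0)   # your thresholded/edge mask in the ROI

# Seed grabCut from the rough mask: confident background outside it,
# probable foreground inside it.
mask = np.full(img.shape[:2], cv2.GC_BGD, np.uint8)
mask[rough > 0] = cv2.GC_PR_FGD

bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)

# Keep pixels labelled as (probable) foreground.
result = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite("object_mask.png", result)
</code></pre>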
<p>======= EDIT =======</p>
<p>@RoiMulia if you need production-level results, I suggest that you leave the threshold + edge detection direction completely and try background removal techniques (the current SOTA are neural networks such as <a href="https://arxiv.org/pdf/2004.00626.pdf" rel="nofollow noreferrer">Background Matting: The World is Your Green Screen</a> (<a href="https://www.youtube.com/watch?v=JE-7OcNrPao" rel="nofollow noreferrer">example</a>)).</p>
<p>You can also try some ready-made background removal APIs such as <a href="https://www.remove.bg/" rel="nofollow noreferrer">https://www.remove.bg/</a> or <a href="https://clippingmagic.com/" rel="nofollow noreferrer">https://clippingmagic.com/</a></p> | 2020-07-09 08:09:16.673000+00:00 | 2020-07-09 12:56:08.303000+00:00 | 2020-07-09 12:56:08.303000+00:00 | null | 62,622,154 | <p>Given an image to which I applied an edge detection filter, what would be the way (hopefully an efficient/performant one) to obtain a mask of the "sum" of the points in a marked segment?</p>
<p>Image for illustration:
<a href="https://i.stack.imgur.com/eQJvi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eQJvi.png" alt="enter image description here" /></a></p>
<p>Thank you in advance.</p>
<p><strong>UPDATE:</strong></p>
<p>Added example of a lighter image (<a href="https://imgur.com/a/MN0t3pH" rel="nofollow noreferrer">https://imgur.com/a/MN0t3pH</a>).
As you'll see in the below image, we assume that when the user marks a region (ROI), there will be an object that will "stand out" from its background. Our end goal is to get the most accurate "mask" of this object, so we can use it for ML processing.</p>
<p><a href="https://i.stack.imgur.com/h0TvZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h0TvZ.jpg" alt="enter image description here" /></a></p> | 2020-06-28 11:52:18.670000+00:00 | 2020-07-12 10:21:57.800000+00:00 | 2020-07-07 10:47:06.057000+00:00 | python|opencv|computer-vision|metal|edge-detection | ['https://docs.opencv.org/trunk/d8/d83/tutorial_py_grabcut.html', 'https://arxiv.org/pdf/2004.00626.pdf', 'https://www.youtube.com/watch?v=JE-7OcNrPao', 'https://www.remove.bg/', 'https://clippingmagic.com/'] | 5 |
47,061,483 | <p>Convolutional feature maps, early and later ones, contain <em>a lot</em> of useful information. Many interesting and fun applications are based exactly on the feature maps from the pre-trained CNNs, e.g. <a href="https://cs.stackexchange.com/q/56406/77374">Google Deep Dream</a> and <a href="https://github.com/jcjohnson/neural-style" rel="nofollow noreferrer">Neural Style</a>. A common choice for a pre-trained model is VGGNet for its simplicity.</p>
<p>Also note that some CNNs, e.g. <a href="https://arxiv.org/abs/1412.6806" rel="nofollow noreferrer">All Convolutional Net</a>, replace pooling layers with convolutional ones. They still do downsampling through striding, but completely avoid maxpool or avgpool operations. This idea has become popular and applied in many modern CNN architectures.</p>
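<p>As a minimal sketch of that idea (Keras, purely illustrative layer sizes), a stride-2 convolution can take over the downsampling role of a pooling layer:</p>
<pre><code>import tensorflow as tf

# A conventional conv + max-pool downsampling block...
pooled = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
])

# ...and an "all convolutional" equivalent, where a stride-2
# convolution performs the downsampling itself.
strided = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
])
</code></pre>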
<p>The only difficulty is that CNN without downsampling may be harder to train. You need enough training data, where labels are images (I assume you have), and you'd also need some clever loss function for backpropagation. Of course, you can start with L2 norm of pixel difference, but it really depends on the problem you're solving.</p>
<p>My recommendation would be to take an existing pre-trained CNN (e.g. <a href="https://www.cs.toronto.edu/~frossard/post/vgg16/" rel="nofollow noreferrer">VGGNet for tensorflow</a>) and keep just the first two convolutional layers, up until the first downsampling. This is a fast way to try this kind of architecture.</p> | 2017-11-01 18:34:03.703000+00:00 | 2017-11-01 18:34:03.703000+00:00 | null | null | 47,054,202 | <p>I know that a usual CNN consists of both convolutional and pooling layers. Pooling layers make the output smaller, which means fewer computations, and they also make it somewhat transformation invariant, so the position of the feature matched by the kernel filter can be shifted a little in the original image. </p>
<p>But what happens when I don't use pooling layers? The reason could be that I want a feature vector for each pixel of the original image, so the output of the convolutional layers has to be the same size as the image, just with more channels. Does this make sense? Will there still be useful information in these feature vectors, or are pooling layers in CNNs necessary? Or are there approaches to get feature vectors for individual pixels while keeping pooling layers?</p> | 2017-11-01 11:53:31.710000+00:00 | 2022-09-21 09:21:39.577000+00:00 | null | machine-learning|neural-network|computer-vision|conv-neural-network|max-pooling | ['https://cs.stackexchange.com/q/56406/77374', 'https://github.com/jcjohnson/neural-style', 'https://arxiv.org/abs/1412.6806', 'https://www.cs.toronto.edu/~frossard/post/vgg16/'] | 4
1,826,889 | <p>What are Item1 and Item2? Are they distinct entities? Then the design seems fine to me.</p>
<p>For example, you might want to fill a database with solutions to the traveling salesman problem. You have a table City(cityId, latitude, longitude), and a table Path(pathId, salesmanId). Now a path where the salesman visits n+1 cities would be represented by n entries in PathSegment(pathId, segmentId, fromCityId, toCityId). Here, although fromCityId and toCityId are foreign keys that reference the same table City, they describe different attributes of the PathSegment entity, hence this does not violate NF1.</p>
<p>Edit:</p>
<p>So you want to store trees, actually, only your trees are mostly just linked lists, and most of those are linked lists with just two nodes, right? And apparently your coworker wants to do this as an adjacency list, so a tree like</p>
<pre><code>1-2-3
\-4
</code></pre>
<p>becomes</p>
<pre><code>(1,2)
(2,3)
(1,4)
</code></pre>
<p>There's nothing wrong with that, but it's not the only way to store a tree in a database. For a good summary of alternatives, <a href="https://stackoverflow.com/questions/935098/database-structure-for-tree-data-structure">see here</a>.</p>
<p>In your case, the advantage of using an adjacency list is that most of your trees have only two nodes, so most of them end up being one row in the table, keeping that simple. Also, questions about the immediate neighbours are easy. "What's the invoice for this payment?" becomes</p>
<pre><code>select item1 from link where item2 = :paymentID
</code></pre>
<p>which is neat, too. There are drawbacks, though. The order of child nodes often matters, and the list doesn't help you here, so you have to store that either as a separate column or as something like timestamps in the tables your foreign keys are referring to. Also, reconstructing an entire branch becomes a recursive task, and not all database systems can do that. So if your application often has to retrieve a message-board-like overview of the invoice history, it might require some application-side logic that turns the list of adjacent nodes into a tree on the client and works on that. If that becomes too cumbersome, you might want to consider a nested sets representation, <a href="http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/" rel="nofollow noreferrer">see here</a>.</p>
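<p>For example, walking a branch client-side over the adjacency list (a sketch using SQLite and the toy tree from above):</p>
<pre><code>import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE link (client INTEGER, item1 INTEGER, item2 INTEGER)")
con.executemany("INSERT INTO link VALUES (1, ?, ?)", [(1, 2), (2, 3), (1, 4)])

def print_branch(item, depth=0):
    # Recursively walk the adjacency list below `item`.
    print("  " * depth + str(item))
    for (child,) in con.execute(
            "SELECT item2 FROM link WHERE item1 = ?", (item,)):
        print_branch(child, depth + 1)

print_branch(1)  # prints the 1-2-3 / 1-4 tree from the example above
</code></pre>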
<p>What's best for your problem? It depends on several things: the size and shape of your trees (if they are really mostly short linked lists, an adjacency list is good), the frequency of inserts and updates (if frequent, an adjacency list is good, because its inserts are cheap), and the frequency and complexity of queries (if frequent and complex, nested sets are good, because their selects are simple and fast). So for a message board, I'd go with nested sets (or even Tropashko's <a href="http://arxiv.org/html/cs.DB/0401014" rel="nofollow noreferrer">nested intervals</a> for speed and extra coolness), but for a simple request-response (and sometimes some more response) table, I'd probably use an adjacency list.</p>
<pre><code>Table LINK:
Client (int)
Item1 (int)
Item2 (int)
</code></pre>
<p>This is the disputed design. All three fields refer to other tables. The two Item fields refer to the same other table. These are not real field names, so don't bother with discussing naming conventions (the "1" and "2" are, however, really part of the field name). I am arguing that this design is bad on 1NF-violation grounds, while the other person is arguing that even though this seems distasteful, all other options are worse for our specific use-case.</p>
<p>Notes:</p>
<ul>
<li>The vast majority of cases will only require linking two items with each other;</li>
<li>N:1 groups are however allowed; in such a case, the same Item1 is repeated on multiple lines with different Item2 values;</li>
<li>There are also a very small number of cases where some Item2 values (in an existing Item1-Item2 links) are themselves linked to other Items, and in these cases these values occur in the Item1 column, with the other linked value in the Item2 column; all linked items correspond to one group, and must be retrieved as such.</li>
</ul>
<p>My claims:</p>
<ul>
<li>This violates 1NF: Item1 and Item2 are foreign keys for the same table, and as such constitute a repeating group (the other party disagrees on the definition of repeating group);</li>
<li>For searches on Item, this means that two indexes are required instead of one, for example in a table that uses a GroupID field instead;</li>
<li>This makes queries looking for a specific Item in this table more complex, because a restriction clause must examine both Item1 and Item2 fields.</li>
<li>The retrieval for the case where chains of Item links occur will be more complex.</li>
</ul>
<p>The other side claims:</p>
<ul>
<li>The most viable alternative is a table with a single Item field, and an additional GroupID field;</li>
<li>The simpler, more common two-item link case now becomes more complex;</li>
<li>There could be concurrency issues in obtaining GroupID slots, and that needs to be managed</li>
<li>Managing the GroupID concurrency issues probably requires a second table with GroupIDs in a field with a uniqueness constraint</li>
<li>You now have to perform a join, at least some of the time, especially if an ORM is used. The join is less efficient than using a single table as in the current design.</li>
</ul>
<p>I would like to hear some opinions about this. I have read the other posts on SO about database design, and especially 1NF, but they don't deal as specifically with my case above as I would have liked. I have also come to understand, based on much research online, that so-called standards like 1NF can be defined in many different ways by different people. I have tried to be as clear as possible about both arguments, and not to bias one or the other.</p>
<p>EDIT 1:</p>
<ul>
<li>Item1 and Item2 are (financial) transactions</li>
<li>The "1" and "2" are really part of the field name</li>
</ul> | 2009-12-01 13:35:20.623000+00:00 | 2009-12-03 18:00:26.843000+00:00 | 2009-12-02 05:51:51.603000+00:00 | database-design|normalization | ['https://stackoverflow.com/questions/935098/database-structure-for-tree-data-structure', 'http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/', 'http://arxiv.org/html/cs.DB/0401014'] | 3 |
61,721,478 | <p>You might want to check how quantization works in TensorFlow - maybe you missed handling zero-point and scale?</p>
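<p>The references below spell out the exact scheme; the short version is that TFLite uint8 quantization is affine, real = scale * (q - zero_point), and both parameters are stored per tensor in the .tflite file (readable via the interpreter's <code>get_tensor_details()</code>). A minimal sketch with made-up values:</p>
<pre><code>import numpy as np

def dequantize(q, scale, zero_point):
    # TFLite uint8 quantization: real = scale * (q - zero_point)
    return scale * (q.astype(np.float32) - zero_point)

# Hypothetical scale/zero-point, for illustration only:
w_q = np.array([[130, 120], [128, 140]], dtype=np.uint8)
w_real = dequantize(w_q, scale=0.02, zero_point=128)
print(w_real)  # approximately the float weights the model was trained with
</code></pre>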
<ul>
<li><a href="https://arxiv.org/pdf/1712.05877.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1712.05877.pdf</a></li>
<li><a href="https://www.tensorflow.org/lite/performance/quantization_spec" rel="nofollow noreferrer">https://www.tensorflow.org/lite/performance/quantization_spec</a></li>
</ul> | 2020-05-11 02:44:36.193000+00:00 | 2020-05-11 02:44:36.193000+00:00 | null | null | 61,672,171 | <p>I want to export a quantized model onto an FPGA.</p>
<p>I adopted the Quantization Aware Training flow as per <a href="https://www.tensorflow.org/model_optimization/guide/quantization/training_example" rel="nofollow noreferrer">https://www.tensorflow.org/model_optimization/guide/quantization/training_example</a> to get a tflite model with uint8 quantization.</p>
<p>Dataset: MNIST</p>
<p>Model used:</p>
<pre><code> model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Flatten(),
keras.layers.Dense(32, activation=tf.nn.relu),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
</code></pre>
<p>Accuracy Results of tflite model obtained</p>
<pre><code>Quant TFLite test_accuracy: 0.9478
Quant TF test accuracy: 0.9527000188827515
</code></pre>
<p>Now, I want to use this quantized model on an FPGA. So, I export the tflite model and export the weights as numpy arrays using netron (<a href="https://github.com/lutzroeder/netron" rel="nofollow noreferrer">https://github.com/lutzroeder/netron</a>).</p>
<p>Using these weights, I run inference in Python by hand-coding the forward path as shown below. I get an accuracy of 32%, versus the accuracy of 95% obtained using the tflite interpreter.</p>
<p>Here is the hand-coded forward path:</p>
<pre><code>import numpy as np

w_32x784 = np.load("w_32x784.npy")
b_32 = np.load("b_32.npy")
w_16x32 = np.load("w_16x32.npy")
b_16 = np.load("b_16.npy")
w_10x16 = np.load("w_10x16.npy")
b_10 = np.load("b_10.npy")

def eval_q(x_inp):
    layer1_op = relu(np.squeeze(np.matmul(w_32x784, x_inp)) + b_32)
    layer2_op = relu(np.squeeze(np.matmul(w_16x32, layer1_op)) + b_16)
    layer3_op = np.squeeze(np.matmul(w_10x16, layer2_op)) + b_10
    predict = np.argmax(layer3_op)
    return predict

def relu(w):
    op = [x if x > 0 else 0 for x in w]
    return op

# Evaluate model
predictions = []
for img in test_images:
    x_inp = img.reshape(784, 1)
    predictions.append(eval_q(x_inp))
print((predictions == test_labels).mean())
</code></pre>
<pre><code>Accuracy = 0.3248
</code></pre>
<p>Please help me out with finding where I'm going wrong.</p> | 2020-05-08 04:42:36.217000+00:00 | 2020-05-11 02:44:36.193000+00:00 | null | tensorflow|machine-learning|keras|fpga | ['https://arxiv.org/pdf/1712.05877.pdf', 'https://www.tensorflow.org/lite/performance/quantization_spec'] | 2 |
39,808,784 | <p>Use <code>watch nvidia-smi</code> to check how much GPU memory your processes are using.</p>
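<p>A common heuristic for the batch size is to keep doubling it until the forward pass runs out of memory. A framework-agnostic sketch (assuming your framework raises <code>RuntimeError</code> on GPU OOM, as most do):</p>
<pre><code>def largest_batch(forward, start=32, limit=65536):
    """Double the batch size until `forward` fails, then return the last
    size that worked. `forward(n)` should run one forward pass with a
    batch of n samples."""
    batch = start
    while batch * 2 <= limit:
        try:
            forward(batch * 2)
            batch *= 2
        except RuntimeError:  # typically a GPU out-of-memory error
            break
    return batch
</code></pre>
<p>Note that the largest batch that fits is not automatically the best choice for model quality; see the reference quoted below.</p>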
<p>FYI: </p>
<ul>
<li><a href="https://stackoverflow.com/q/38724733/395857">Configuring Theano so that it doesn't directly crash when a GPU memory allocation fails</a></li>
<li><a href="https://stats.stackexchange.com/q/164876/12359">Tradeoff batch size vs. number of iterations to train a neural network</a>:</li>
</ul>
<blockquote>
<p>From Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail
Smelyanskiy, Ping Tak Peter Tang. On Large-Batch Training for Deep
Learning: Generalization Gap and Sharp Minima.
<a href="https://arxiv.org/abs/1609.04836" rel="nofollow noreferrer">https://arxiv.org/abs/1609.04836</a> :</p>
<blockquote>
<p>The stochastic gradient descent method and its variants are algorithms of choice for many Deep Learning tasks. These methods
operate in a small-batch regime wherein a fraction of the training
data, usually 32--512 data points, is sampled to compute an
approximation to the gradient. <strong>It has been observed in practice that
when using a larger batch there is a significant degradation in the
quality of the model, as measured by its ability to generalize.</strong>
There have been some attempts to investigate the cause for this
generalization drop in the large-batch regime, however the precise
answer for this phenomenon is, hitherto unknown. In this paper, we
present ample numerical evidence that supports the view that
large-batch methods tend to converge to sharp minimizers of the
training and testing functions -- and that sharp minima lead to poorer
generalization. In contrast, small-batch methods consistently converge
to flat minimizers, and our experiments support a commonly held view
that this is due to the inherent noise in the gradient estimation. We
also discuss several empirical strategies that help large-batch
methods eliminate the generalization gap and conclude with a set of
future research ideas and open questions.</p>
<p>[…]</p>
<p><strong>The lack of generalization ability is due to the fact that large-batch methods tend to converge to <em>sharp minimizers</em> of the
training function</strong>. These minimizers are characterized by large
positive eigenvalues in $\nabla^2 f(x)$ and tend to generalize less
well. In contrast, small-batch methods converge to flat minimizers
characterized by small positive eigenvalues of $\nabla^2 f(x)$. We
have observed that the loss function landscape of deep neural networks
is such that large-batch methods are almost invariably attracted to
regions with sharp minima and that, unlike small batch methods, are
unable to escape basins of these minimizers.</p>
<p>[…]</p>
<p><a href="https://i.stack.imgur.com/30I6a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/30I6a.png" alt="enter image description here"></a></p>
</blockquote>
</blockquote> | 2016-10-01 16:12:34.340000+00:00 | 2016-10-01 16:12:34.340000+00:00 | 2017-05-23 12:09:49.663000+00:00 | null | 39,807,237 | <p>When training a deep learning model, I found that the GPU is not fully utilised if I set the train and validation (test) batch size to be the same, say 32, 64, ..., 512. </p>
<p>Then I checked the NVIDIA Titan X specifications: </p>
<ol>
<li>NVIDIA CUDA® Cores: 3584</li>
<li>Memory: 12 GB GDDR5X</li>
</ol>
<p>In order to reduce the test time for a CNN model, I want to make the number of samples in a batch as large as possible. I tried:</p>
<ul>
<li>set number of samples per batch to 3584: CUDA out of memory error.</li>
<li>set number of samples per batch to 2048: CUDA out of memory error.</li>
<li>set number of samples per batch to 1024: works, but I am not sure whether the GPU is fully utilised or not.</li>
</ul>
<p>Question:</p>
<p>How do I easily pick the number of samples per batch to fully utilize the GPU for the forward pass of a deep model?</p> | 2016-10-01 13:30:48.613000+00:00 | 2017-12-26 23:14:56.197000+00:00 | 2017-12-26 23:14:56.197000+00:00 | deep-learning|nvidia|nvidia-titan | ['https://stackoverflow.com/q/38724733/395857', 'https://stats.stackexchange.com/q/164876/12359', 'https://arxiv.org/abs/1609.04836', 'https://i.stack.imgur.com/30I6a.png'] | 4
55,238,647 | <p><strong>Particle Filter</strong> is what you are looking for to localize a robot. </p>
<p>To implement a particle filter, you need an understanding of basic probability (mostly Bayes' theorem) and Gaussian distributions in 2D.</p>
<p><a href="http://ais.informatik.uni-freiburg.de/teaching/ws12/mapping/pdf/slam09-particle-filter-4.pdf" rel="nofollow noreferrer">slides</a>, <a href="https://www.youtube.com/watch?v=w0hH0bRF1zk" rel="nofollow noreferrer">video</a> </p>
<p>Watch these <a href="https://www.youtube.com/playlist?list=PLgnQpQtFTOGQrZ4O5QzbIHgl3b1JHimN_" rel="nofollow noreferrer">course videos</a>, which are really good.</p>
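<p>To make the motion/measurement/resampling cycle concrete, here is a toy 1D particle filter (all noise parameters are made-up values for illustration):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
N = 1000                           # number of particles
particles = rng.uniform(0, 10, N)  # unknown robot position on a 1D line

def step(particles, control, measurement):
    # 1. Motion update: move every particle, with Gaussian motion noise.
    particles = particles + control + rng.normal(0, 0.1, N)
    # 2. Measurement update: weight by the measurement likelihood.
    weights = np.exp(-0.5 * ((measurement - particles) / 0.5) ** 2)
    weights /= weights.sum()
    # 3. Resampling: draw particles in proportion to their weights.
    idx = rng.choice(N, N, p=weights)
    return particles[idx]

particles = step(particles, control=1.0, measurement=3.2)
print(particles.mean())  # estimated position
</code></pre>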
<blockquote>
<p>For example, in one of the research papers it was written that the Markov algorithm can be used in a global indoor positioning system or when you have a multi-modal Gaussian distribution, whereas the Kalman filter cannot be used for the same reasons. I completely didn't understand this.</p>
</blockquote>
<p>The Kalman filter or Extended Kalman filter is used for unimodal distributions, and the initial estimate must also be good enough to track.</p>
<p>The particle filter is multi-modal and doesn't need an initial guess, but it needs more particles (or samples) to converge to a better estimate.</p>
<blockquote>
<p>Second example: the Markov algorithm assumes the map is static and relies on the Markov assumption that measurements are independent and don't depend on previous measurements. But when the environment is dynamic (objects are moving), the Markov assumption is not valid and we need to modify the Markov algorithm to incorporate the dynamic environment. I don't understand why.</p>
</blockquote>
<p>If the objects are humans, it is not difficult to localize, even in a dynamic environment (unless the robot is completely covered by humans and is not able to see any part of the environment). A simple modification is to consider only the laser rays that are in conformance with the map. The paper below explains this.</p>
<p>Check this paper: <a href="https://arxiv.org/pdf/1106.0222.pdf" rel="nofollow noreferrer">Markov Localization for Mobile Robots in Dynamic Environments</a></p> | 2019-03-19 10:22:34.190000+00:00 | 2019-03-19 10:22:34.190000+00:00 | null | null | 55,177,695 | <p>In my thesis project, I need to implement the Monte Carlo Localisation algorithm (it's based on Markov Localisation). I have exactly one month to understand and implement the algorithm. I understand the basics of probability and Bayes' theorem. Now, which topics should I get familiar with to understand the Markov algorithm? I have read a couple of research papers 3-4 times and still failed to understand everything. </p>
<p>I tried to Google whatever terms I didn't understand, but I couldn't get the essence of the algorithm. I want to understand it systematically. I know what it does, but I don't fully understand how or why it does it. </p>
<p>For example, in one of the research papers it was written that the Markov algorithm can be used in a global indoor positioning system or when you have a multi-modal Gaussian distribution, whereas the Kalman filter cannot be used for the same reasons. I completely didn't understand this. </p>
<p>Second example: the Markov algorithm assumes the map is static and relies on the Markov assumption that measurements are independent and don't depend on previous measurements. But when the environment is dynamic (objects are moving), the Markov assumption is not valid and we need to modify the Markov algorithm to incorporate the dynamic environment. I don't understand why. </p>
<p>It would be great if someone could point out which topics I should learn to understand the algorithm. Please keep in mind that I have only one month.</p> | 2019-03-15 07:35:59.513000+00:00 | 2019-03-19 10:22:34.190000+00:00 | null | localization|slam|slam-algorithm | ['http://ais.informatik.uni-freiburg.de/teaching/ws12/mapping/pdf/slam09-particle-filter-4.pdf', 'https://www.youtube.com/watch?v=w0hH0bRF1zk', 'https://www.youtube.com/playlist?list=PLgnQpQtFTOGQrZ4O5QzbIHgl3b1JHimN_', 'https://arxiv.org/pdf/1106.0222.pdf'] | 4
57,126,172 | <p>To directly answer the question, those two lines of code generate different instructions, with differing performance characteristics and hardware pipeline usage.</p>
<pre><code>uint read_value = b.field; // generates a load instruction
uint read_value2 = atomicAdd(b.field, 0); // generates an atomic instruction
</code></pre>
<ul>
<li><a href="http://shader-playground.timjones.io/0733f88ba4b8ddd197c0242b1273044d" rel="nofollow noreferrer">AMD disassembly can be seen in this online Shader Playground</a> -- <code>buffer_load_dword</code> versus <code>buffer_atomic_add</code></li>
<li><a href="https://arxiv.org/abs/1804.06826" rel="nofollow noreferrer">Dissecting the NVIDIA Volta GPU Architecture via Microbenchmarking</a> -- <code>LDG</code> versus <code>ATOM</code></li>
</ul>
<p>The <a href="https://www.khronos.org/registry/OpenGL/specs/gl/GLSLangSpec.4.40.pdf" rel="nofollow noreferrer">GLSL spec</a> section 4.10 Memory Qualifiers makes a point that <code>coherent</code> is only about visibility of reads and writes across invocations (shader threads). They also left a comment on the implied performance:</p>
<blockquote>
<p>When accessing memory using variables not declared as coherent, the memory accessed by a shader may be cached by the implementation to service future accesses to the same address. Memory stores may be cached in such a way that the values written might not be visible to other shader invocations accessing the same memory. The implementation may cache the values fetched by memory reads and return the same values to any shader invocation accessing the same memory, even if the underlying memory has been modified since the first memory read. While variables not declared as coherent might not be useful for communicating between shader invocations, using non-coherent accesses may result in higher performance.</p>
</blockquote>
<p>The point-of-coherence in GPU memory systems is usually the last-level cache (L2 cache), meaning all coherent accesses must be performed by the L2 cache. This also means coherent buffers cannot be cached in L1 or other caches closer to the shader processors. Modern GPUs also have dedicated atomic hardware in the L2 caches; a plain load will not use those, but an <code>atomicAdd(..., 0)</code> will go through those. The atomic hardware usually has lower bandwidth than the full L2 cache.</p> | 2019-07-20 15:35:41.857000+00:00 | 2019-07-20 15:35:41.857000+00:00 | null | null | 57,114,620 | <p>This is with Vulkan semantics, if it makes any difference.</p>
<p>Assume the following:</p>
<pre><code>layout(...) coherent buffer B
{
uint field;
} b;
</code></pre>
<p>Say the field is being modified by other invocations of the same shader (or a derived shader) through <code>atomic*()</code> functions.</p>
<p>If a shader invocation wants to perform an atomic read from this <code>field</code> (with the same semantics as <code>atomicCounter()</code> in GLES, had this been an <code>atomic_uint</code> instead), is there any difference between the following two (other than obviously that one of them does a write as well as read)?</p>
<pre><code>uint read_value = b.field;
uint read_value2 = atomicAdd(b.field, 0);
</code></pre> | 2019-07-19 14:31:54.710000+00:00 | 2019-08-01 03:43:44.670000+00:00 | 2019-07-19 16:18:30.073000+00:00 | glsl|vulkan | ['http://shader-playground.timjones.io/0733f88ba4b8ddd197c0242b1273044d', 'https://arxiv.org/abs/1804.06826', 'https://www.khronos.org/registry/OpenGL/specs/gl/GLSLangSpec.4.40.pdf'] | 3 |
62,350,293 | <p>This is probably resolved. As pointed out in <a href="https://arxiv.org/pdf/1504.06768.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1504.06768.pdf</a>, you don't actually need to invert the shifted sparse matrix and then repeatedly apply it in some Lanczos-type method -- you just need to repeatedly solve an inverse problem (M-sigma*identity)*v(n+1)=v(n) to generate a sequence of vectors {v(n)}. This inverse problem can be done quickly for a sparse matrix after LU decomposition.</p> | 2020-06-12 18:24:54.327000+00:00 | 2020-06-12 18:24:54.327000+00:00 | null | null | 60,538,622 | <p>I'm trying to find the smallest (as in most negative, not lowest magnitude) several eigenvalues of a list of sparse Hermitian matrices in Python using scipy.sparse.linalg.eigsh. The matrices are ~1000x1000, and the list length is ~500-2000. In addition, I know upper and lower bounds on the eigenvalues of all the matrices -- call them <em>eig_UB</em> and <em>eig_LB</em>, respectively.</p>
<p>I've tried two methods:</p>
<ol>
<li>Using shift-invert mode with sigma=<em>eig_LB</em>.</li>
<li>Subtracting <em>eig_UB</em> from the diagonal of each matrix (thus shifting the smallest eigenvalues to be the largest magnitude eigenvalues), diagonalizing the resulting matrices with default eigsh settings (no shift-invert mode and using which='LM'), and then adding <em>eig_UB</em> to the resulting eigenvalues.</li>
</ol>
<p>Both methods work and their results agree, but method 1 is around 2-2.5x faster. This seems counterintuitive, since (at least as I understand the eigsh documentation) shift-invert mode subtracts sigma from the diagonal, inverts the matrix, and then finds eigenvalues, whereas default mode directly finds the largest magnitude eigenvalues. Does anyone know what could explain the difference in performance?</p>
<p>One other piece of information: I've checked, and the matrices that result from shift-inverting (that is, (M-sigma*identity)^(-1) if M is the original matrix) are no longer sparse, which seems like it should make finding their eigenvalues take even longer.</p> | 2020-03-05 05:18:58.483000+00:00 | 2020-08-28 10:25:19.797000+00:00 | 2020-08-28 10:25:19.797000+00:00 | sparse-matrix|eigenvalue|arpack | ['https://arxiv.org/pdf/1504.06768.pdf'] | 1 |
64,972,431 | <p>I think the problem comes from your classes. What is their format? Are they one-hot encoded?</p>
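<p>If the masks are plain RGB colour codes, a minimal sketch of converting them to one-hot targets (this assumes exact, un-anti-aliased colours; adjust the table to your data):</p>
<pre><code>import numpy as np

# Map each mask colour to a class index, then one-hot encode.
COLOURS = {(0, 0, 0): 0,     # background
           (255, 0, 0): 1,   # class 1 (red)
           (0, 255, 0): 2,   # class 2 (green)
           (0, 0, 255): 3}   # class 3 (blue)

def mask_to_onehot(mask_rgb, n_classes=4):
    h, w, _ = mask_rgb.shape
    onehot = np.zeros((h, w, n_classes), dtype=np.float32)
    for colour, idx in COLOURS.items():
        onehot[..., idx] = np.all(mask_rgb == colour, axis=-1)
    return onehot
</code></pre>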
<p>Your activation and loss functions look correct to me. I think your last convolutional layer should use a filter of size 1x1 (similarly to the original <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">paper</a>), which ensures better per-pixel classification.</p> | 2020-11-23 16:35:19.453000+00:00 | 2020-11-23 16:35:19.453000+00:00 | null | null | 64,835,509 | <p>I am trying to implement a UNet model on labeled image data. The dataset contains around 10,000 images and their respective masks (colored RGB).</p>
<p>Image Dimensions: 500 X 500 X 3</p>
<p>The masks are not black & white, they are colored (RGB), having 3 classes (technically 4):</p>
<ul>
<li>Background: Black</li>
<li>Class 1: Red</li>
<li>Class 2: Green</li>
<li>Class 3: Blue</li>
</ul>
<p>This is the code for the last two CONV blocks of the model:</p>
<pre><code> model = Conv2D(64,(3,3),strides=(1, 1),padding='same')(concat_5)
model = LeakyReLU(0.1)(model)
model = BatchNormalization()(model)
model = Conv2D(4,(3,3),strides=(1, 1),padding='same', activation="softmax")(model)
model = Model(base_model.input,model)
</code></pre>
<p>Model Architecture:</p>
<pre><code>Model: "functional_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 500, 500, 3) 0
__________________________________________________________________________________________________
block1_conv1 (Conv2D) (None, 500, 500, 64) 1792 input_1[0][0]
__________________________________________________________________________________________________
block1_conv2 (Conv2D) (None, 500, 500, 64) 36928 block1_conv1[0][0]
__________________________________________________________________________________________________
block1_pool (MaxPooling2D) (None, 250, 250, 64) 0 block1_conv2[0][0]
__________________________________________________________________________________________________
block2_conv1 (Conv2D) (None, 250, 250, 128 73856 block1_pool[0][0]
__________________________________________________________________________________________________
block2_conv2 (Conv2D) (None, 250, 250, 128 147584 block2_conv1[0][0]
__________________________________________________________________________________________________
block2_pool (MaxPooling2D) (None, 125, 125, 128 0 block2_conv2[0][0]
__________________________________________________________________________________________________
block3_conv1 (Conv2D) (None, 125, 125, 256 295168 block2_pool[0][0]
__________________________________________________________________________________________________
block3_conv2 (Conv2D) (None, 125, 125, 256 590080 block3_conv1[0][0]
__________________________________________________________________________________________________
block3_conv3 (Conv2D) (None, 125, 125, 256 590080 block3_conv2[0][0]
__________________________________________________________________________________________________
block3_pool (MaxPooling2D) (None, 62, 62, 256) 0 block3_conv3[0][0]
__________________________________________________________________________________________________
block4_conv1 (Conv2D) (None, 62, 62, 512) 1180160 block3_pool[0][0]
__________________________________________________________________________________________________
block4_conv2 (Conv2D) (None, 62, 62, 512) 2359808 block4_conv1[0][0]
__________________________________________________________________________________________________
block4_conv3 (Conv2D) (None, 62, 62, 512) 2359808 block4_conv2[0][0]
__________________________________________________________________________________________________
block4_pool (MaxPooling2D) (None, 31, 31, 512) 0 block4_conv3[0][0]
__________________________________________________________________________________________________
block5_conv1 (Conv2D) (None, 31, 31, 512) 2359808 block4_pool[0][0]
__________________________________________________________________________________________________
block5_conv2 (Conv2D) (None, 31, 31, 512) 2359808 block5_conv1[0][0]
__________________________________________________________________________________________________
block5_conv3 (Conv2D) (None, 31, 31, 512) 2359808 block5_conv2[0][0]
__________________________________________________________________________________________________
block5_pool (MaxPooling2D) (None, 15, 15, 512) 0 block5_conv3[0][0]
__________________________________________________________________________________________________
conv2d_transpose_5 (Conv2DTrans (None, 31, 31, 256) 1179904 block5_pool[0][0]
__________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, 31, 31, 256) 0 conv2d_transpose_5[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 31, 31, 256) 1024 leaky_re_lu_10[0][0]
__________________________________________________________________________________________________
concatenate_5 (Concatenate) (None, 31, 31, 768) 0 batch_normalization_10[0][0]
block5_conv3[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 31, 31, 512) 3539456 concatenate_5[0][0]
__________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, 31, 31, 512) 0 conv2d_6[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 31, 31, 512) 2048 leaky_re_lu_11[0][0]
__________________________________________________________________________________________________
conv2d_transpose_6 (Conv2DTrans (None, 62, 62, 512) 2359808 batch_normalization_11[0][0]
__________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, 62, 62, 512) 0 conv2d_transpose_6[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 62, 62, 512) 2048 leaky_re_lu_12[0][0]
__________________________________________________________________________________________________
concatenate_6 (Concatenate) (None, 62, 62, 1024) 0 batch_normalization_12[0][0]
block4_conv3[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 62, 62, 512) 4719104 concatenate_6[0][0]
__________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, 62, 62, 512) 0 conv2d_7[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 62, 62, 512) 2048 leaky_re_lu_13[0][0]
__________________________________________________________________________________________________
conv2d_transpose_7 (Conv2DTrans (None, 125, 125, 512 2359808 batch_normalization_13[0][0]
__________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, 125, 125, 512 0 conv2d_transpose_7[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 125, 125, 512 2048 leaky_re_lu_14[0][0]
__________________________________________________________________________________________________
concatenate_7 (Concatenate) (None, 125, 125, 768 0 batch_normalization_14[0][0]
block3_conv3[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 125, 125, 256 1769728 concatenate_7[0][0]
__________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 125, 125, 256 0 conv2d_8[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 125, 125, 256 1024 leaky_re_lu_15[0][0]
__________________________________________________________________________________________________
conv2d_transpose_8 (Conv2DTrans (None, 250, 250, 256 590080 batch_normalization_15[0][0]
__________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 250, 250, 256 0 conv2d_transpose_8[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 250, 250, 256 1024 leaky_re_lu_16[0][0]
__________________________________________________________________________________________________
concatenate_8 (Concatenate) (None, 250, 250, 384 0 batch_normalization_16[0][0]
block2_conv2[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 250, 250, 128 442496 concatenate_8[0][0]
__________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, 250, 250, 128 0 conv2d_9[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 250, 250, 128 512 leaky_re_lu_17[0][0]
__________________________________________________________________________________________________
conv2d_transpose_9 (Conv2DTrans (None, 500, 500, 128 147584 batch_normalization_17[0][0]
__________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, 500, 500, 128 0 conv2d_transpose_9[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 500, 500, 128 512 leaky_re_lu_18[0][0]
__________________________________________________________________________________________________
concatenate_9 (Concatenate) (None, 500, 500, 192 0 batch_normalization_18[0][0]
block1_conv2[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 500, 500, 64) 110656 concatenate_9[0][0]
__________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, 500, 500, 64) 0 conv2d_10[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 500, 500, 64) 256 leaky_re_lu_19[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 500, 500, 4) 2308 batch_normalization_19[0][0]
==================================================================================================
Total params: 31,948,164
Trainable params: 31,941,892
Non-trainable params: 6,272
__________________________________________________________________________________________________
</code></pre>
<p>When I am training the model, I am getting the following error:</p>
<pre><code>InvalidArgumentError: Incompatible shapes: [4,4,500,500] vs. [4,3,500,500]
[[node categorical_crossentropy/mul (defined at <ipython-input-15-b60b11ed9e76>:9) ]] [Op:__inference_train_function_5874]
Function call stack:
train_function
</code></pre>
<p>It seems that there is some problem with the dimensions of the last CONV layer of the model.
I want suggestions about the following three things:</p>
<ol>
<li>Dimensions of the last CONV layer (considering multi-class segmentation for <strong>4 classes</strong>)</li>
<li>Which activation function should I use in the last CONV layer? Currently, I am using <code>softmax</code> activation.</li>
<li>What kind of loss should I use while compiling the model? Currently, I am using <code>categorical_crossentropy</code>.</li>
</ol> | 2020-11-14 15:28:01.860000+00:00 | 2020-11-23 16:35:19.453000+00:00 | null | python|tensorflow|machine-learning|keras|deep-learning | ['https://arxiv.org/abs/1505.04597'] | 1 |
46,850,829 | <p>There are only ad hoc ways to know if it is possible to learn a function with a differentiable network from a dataset. That said, these ad hoc ways do usually work. For example, the network should be able to overfit the training set without any regularisation.</p>
<p>A common technique to gauge this is to only fit the network on a subset of the full dataset. Check that the network can overfit to that, then increase the size of the subset, and increase the size of the network as well. Unfortunately, deciding whether to add extra layers or add more units in a hidden layer is an arbitrary decision you'll have to make.</p>
<p>However, looking at your code, there are a few things that could be going wrong here:</p>
<ol>
<li>Are your outputs balanced? By that I mean, do you have the same number of 1s as 0s in the dataset targets?</li>
<li>Your initialisation in the first layer is all zeros, the gradient to this will be zero, so it can't learn anything (although, you have a real initialisation above it commented out).</li>
<li>Sigmoid nonlinearities are more difficult to optimise than simpler nonlinearities, such as ReLUs.</li>
</ol>
<p>I'd recommend using the <a href="https://www.tensorflow.org/api_guides/python/contrib.layers" rel="nofollow noreferrer">built-in definitions for layers</a> in Tensorflow to not worry about initialisation, and switching to ReLUs in any hidden layers (you need sigmoid at the output for your boolean target).</p>
<p>Finally, deep learning isn't actually very good at most "bag of features" machine learning problems because they lack structure. For example, the order of the features doesn't matter. Other methods often work better, but if you really want to use deep learning then you could look at <a href="https://arxiv.org/abs/1706.02515" rel="nofollow noreferrer">this recent paper</a>, showing improved performance by just using a very specific nonlinearity and weight initialisation (change 4 lines in your code above).</p> | 2017-10-20 14:04:14.007000+00:00 | 2017-10-20 14:04:14.007000+00:00 | null | null | 46,848,023 | <p>I'm a newbie to machine learning and this is one of the first real-world ML tasks challenged.</p>
<p>Some experimental data contains 512 independent boolean features and a boolean result.</p>
<p>There are about 1e6 real experiment records in the provided data set.</p>
<p>In a classic XOR example all 4 out of 4 possible states are required to train NN. In my case its only <code>2^(10-512) = 2^-505</code> which is close to zero.</p>
<p>I have no more information about the data nature, just these <code>(512 + 1) * 1e6</code> bits.</p>
<p>Tried NN with 1 hidden layer on available data. Output of the trained NN on the samples even from the training set are always close to 0, not a single close to "1". Played with weights initialization, gradient descent learning rate.</p>
<p>My <a href="https://pastebin.com/9uM6gDR3" rel="nofollow noreferrer">code</a> utilizing TensorFlow 1.3, Python 3. Model excerpt:</p>
<pre><code>with tf.name_scope("Layer1"):
#W1 = tf.Variable(tf.random_uniform([512, innerN], minval=-2/512, maxval=2/512), name="Weights_1")
W1 = tf.Variable(tf.zeros([512, innerN]), name="Weights_1")
b1 = tf.Variable(tf.zeros([1]), name="Bias_1")
Out1 = tf.sigmoid( tf.matmul(x, W1) + b1)
with tf.name_scope("Layer2"):
W2 = tf.Variable(tf.random_uniform([innerN, 1], minval=-2/512, maxval=2/512), name="Weights_2")
#W2 = tf.Variable(tf.zeros([innerN, 1]), name="Weights_2")
b2 = tf.Variable(tf.zeros([1]), name="Bias_2")
y = tf.nn.sigmoid( tf.matmul(Out1, W2) + b2)
with tf.name_scope("Training"):
y_ = tf.placeholder(tf.float32, [None,1])
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(
labels = y_, logits = y)
)
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)
with tf.name_scope("Testing"):
# Test trained model
correct_prediction = tf.equal( tf.round(y), tf.round(y_))
# ...
# Train
for step in range(500):
batch_xs, batch_ys = Datasets.train.next_batch(300, shuffle=False)
_, my_y, summary = sess.run([train_step, y, merged_summaries],
feed_dict={x: batch_xs, y_: batch_ys})
</code></pre>
<p>I suspect two cases:</p>
<ol>
<li>my fault – bad NN implementation, wrong architecture;</li>
<li>bad data. Compared to XOR example, incomplete training data would result in a failing NN. However, the training examples fed to the trained NN are supposed to give right predictions, aren't they?</li>
</ol>
<p><strong>How to evaluate</strong> if it is possible at all to train a neural network (a 2-layer perceptron) on the provided data to forecast the result? A case of aceptable set would be the XOR example. Opposed to some random noise.</p> | 2017-10-20 11:18:05.370000+00:00 | 2017-10-20 14:04:14.007000+00:00 | 2017-10-20 11:46:19.857000+00:00 | machine-learning|tensorflow|neural-network | ['https://www.tensorflow.org/api_guides/python/contrib.layers', 'https://arxiv.org/abs/1706.02515'] | 2 |
64,843,044 | <p>I've also encountered this issue recently and raise a similar <a href="https://stackoverflow.com/questions/64808986/scene-text-image-super-resolution-for-ocr">question</a> with more details and with a recent approach. It seems to be an unsolved problem until now. There are some recent research works that try to address such problems with deep learning. Unfortunately, none of the works reach our expectations. However, I'm sharing the info in case it may come helpful to anyone.</p>
<h2>1. Scene Text Image Super-Resolution in the Wild</h2>
<p>In our case, it may be our final choice; comparatively, it performs well enough. It's a recent research work (<a href="https://arxiv.org/abs/2005.03341" rel="noreferrer">TSRN</a>) that mainly focuses on such cases. Its main intuition is to introduce super-resolution (SR) techniques as pre-processing. This <a href="https://github.com/JasonBoy1/TextZoom" rel="noreferrer">implementation</a> looks by far the most promising. Here is an illustration of their achievement, improving a blurry image to a clean one.</p>
<p><img src="https://i.stack.imgur.com/N0vc5.jpg" alt="" /></p>
<h2>2. Neural Enhance</h2>
<p>From their <a href="https://github.com/alexjc/neural-enhance" rel="noreferrer">repo</a> demonstration, it appears that it may also have some potential to improve blurry text. However, the author has probably not maintained the repo for about 4 years.</p>
<p><img src="https://i.stack.imgur.com/PDPIL.jpg" alt="" /></p>
<h2>3. Blind Motion Deblurring with GAN</h2>
<p>The attractive part is the <strong>Blind Motion Deblurring</strong> mechanism in it, named <a href="https://github.com/KupynOrest/DeblurGAN" rel="noreferrer">DeblurGAN</a>. It looks very promising.</p>
<p><a href="https://i.stack.imgur.com/SBsow.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/SBsow.gif" alt="enter image description here" /></a></p>
<h2>4. Real-World Super-Resolution via Kernel Estimation and Noise Injection</h2>
<p>An interesting fact about <a href="https://openaccess.thecvf.com/content_CVPRW_2020/papers/w31/Ji_Real-World_Super-Resolution_via_Kernel_Estimation_and_Noise_Injection_CVPRW_2020_paper.pdf" rel="noreferrer">their work</a> is that, unlike other works in the literature, they first design a novel <strong>degradation framework</strong> for real-world images by <strong>estimating various blur kernels</strong> as well as real <strong>noise distributions</strong>. Based on that, they acquire <strong>LR</strong> images sharing a common domain with real-world images. Then, they propose a real-world super-resolution model aiming at better <strong>perception</strong>. From their article:</p>
<p><a href="https://i.stack.imgur.com/nQjLJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nQjLJ.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/hWEqX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hWEqX.png" alt="enter image description here" /></a></p>
<p>However, in my experiments, I couldn't get the expected results. I've raised an <a href="https://github.com/jixiaozhong/RealSR/issues/29" rel="noreferrer">issue on GitHub</a> and haven't received any response so far.</p>
<hr />
<h2>Convolutional Neural Networks for Direct Text Deblurring</h2>
<p>The <a href="http://www.fit.vutbr.cz/%7Eihradis/CNN-Deblur/" rel="noreferrer">paper</a> that was shared by @Ali looks very interesting and the outcomes are extremely good. It's nice that they have shared the pre-trained weights of their trained model and also Python scripts for easier use. However, they experimented with the <strong>Caffe</strong> library. I would prefer to convert it to <strong>PyTorch</strong> for better control. Below are the provided Python scripts with <strong>Caffe</strong> imports. Please note, I couldn't port it completely so far because of a lack of Caffe knowledge, so please correct me if you know it better.</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import print_function
import numpy as np
import os, sys, argparse, glob, time, cv2, Queue, caffe
# Some Helper Functions
def getCutout(image, x1, y1, x2, y2, border):
assert(x1 >= 0 and y1 >= 0)
assert(x2 > x1 and y2 >y1)
assert(border >= 0)
return cv2.getRectSubPix(image, (y2-y1 + 2*border, x2-x1 + 2*border), (((y2-1)+y1) / 2.0, ((x2-1)+x1) / 2.0))
def fillRndData(data, net):
inputLayer = 'data'
randomChannels = net.blobs[inputLayer].data.shape[1]
rndData = np.random.randn(data.shape[0], randomChannels, data.shape[2], data.shape[3]).astype(np.float32) * 0.2
rndData[:,0:1,:,:] = data
net.blobs[inputLayer].data[...] = rndData[:,0:1,:,:]
def mkdirp(directory):
if not os.path.isdir(directory):
os.makedirs(directory)
</code></pre>
<p>The main function starts here:</p>
<pre class="lang-py prettyprint-override"><code>def main(argv):
    pycaffe_dir = os.path.dirname(__file__)

    parser = argparse.ArgumentParser()
    # Optional arguments.
    parser.add_argument(
        "--model_def",
        help="Model definition file.",
        required=True
    )
    parser.add_argument(
        "--pretrained_model",
        help="Trained model weights file.",
        required=True
    )
    parser.add_argument(
        "--out_scale",
        help="Scale of the output image.",
        default=1.0,
        type=float
    )
    parser.add_argument(
        "--output_path",
        help="Output path.",
        default=''
    )
    parser.add_argument(
        "--tile_resolution",
        help="Resolution of processing tile.",
        required=True,
        type=int
    )
    parser.add_argument(
        "--suffix",
        help="Suffix of the output file.",
        default="-deblur",
    )
    parser.add_argument(
        "--gpu",
        action='store_true',
        help="Switch for gpu computation."
    )
    parser.add_argument(
        "--grey_mean",
        action='store_true',
        help="Use grey mean RGB=127. Default is the VGG mean."
    )
    parser.add_argument(
        "--use_mean",
        action='store_true',
        help="Use mean."
    )
    parser.add_argument(
        "--adversarial",
        action='store_true',
        help="Adversarial mode (feeds random channels to the net)."
    )
    args = parser.parse_args()

    mkdirp(args.output_path)

    if hasattr(caffe, 'set_mode_gpu'):
        if args.gpu:
            print('GPU mode', file=sys.stderr)
            caffe.set_mode_gpu()
        net = caffe.Net(args.model_def, args.pretrained_model, caffe.TEST)
    else:
        if args.gpu:
            print('GPU mode', file=sys.stderr)
        net = caffe.Net(args.model_def, args.pretrained_model, gpu=args.gpu)

    inputs = [line.strip() for line in sys.stdin]
    print("Classifying %d inputs." % len(inputs), file=sys.stderr)

    inputBlob = net.blobs.keys()[0]  # [innat]: input shape
    outputBlob = net.blobs.keys()[-1]
    print(inputBlob, outputBlob)

    channelCount = net.blobs[inputBlob].data.shape[1]
    net.blobs[inputBlob].reshape(1, channelCount, args.tile_resolution, args.tile_resolution)
    net.reshape()

    if channelCount == 1 or channelCount > 3:
        color = 0
    else:
        color = 1

    outResolution = net.blobs[outputBlob].data.shape[2]
    inResolution = int(outResolution / args.out_scale)
    boundary = (net.blobs[inputBlob].data.shape[2] - inResolution) / 2

    for fileName in inputs:
        img = cv2.imread(fileName, flags=color).astype(np.float32)
        original = np.copy(img)
        img = img.reshape(img.shape[0], img.shape[1], -1)
        if args.use_mean:
            if args.grey_mean or channelCount == 1:
                img -= 127
            else:
                img[:,:,0] -= 103.939
                img[:,:,1] -= 116.779
                img[:,:,2] -= 123.68
        img *= 0.004

        outShape = [int(img.shape[0] * args.out_scale),
                    int(img.shape[1] * args.out_scale),
                    net.blobs[outputBlob].channels]
        imgOut = np.zeros(outShape)

        imageStartTime = time.time()
        for x, xOut in zip(range(0, img.shape[0], inResolution), range(0, imgOut.shape[0], outResolution)):
            for y, yOut in zip(range(0, img.shape[1], inResolution), range(0, imgOut.shape[1], outResolution)):
                start = time.time()

                region = getCutout(img, x, y, x+inResolution, y+inResolution, boundary)
                region = region.reshape(region.shape[0], region.shape[1], -1)
                data = region.transpose([2, 0, 1]).reshape(1, -1, region.shape[0], region.shape[1])

                if args.adversarial:
                    fillRndData(data, net)
                    out = net.forward()
                else:
                    out = net.forward_all(data=data)

                out = out[outputBlob].reshape(out[outputBlob].shape[1], out[outputBlob].shape[2], out[outputBlob].shape[3]).transpose(1, 2, 0)

                if imgOut.shape[2] == 3 or imgOut.shape[2] == 1:
                    out /= 0.004
                    if args.use_mean:
                        if args.grey_mean:
                            out += 127
                        else:
                            out[:,:,0] += 103.939
                            out[:,:,1] += 116.779
                            out[:,:,2] += 123.68

                if out.shape[0] != outResolution:
                    print("Warning: size of net output is %d px and it is expected to be %d px" % (out.shape[0], outResolution))
                if out.shape[0] < outResolution:
                    print("Error: size of net output is %d px and it is expected to be %d px" % (out.shape[0], outResolution))
                    exit()

                xRange = min((outResolution, imgOut.shape[0] - xOut))
                yRange = min((outResolution, imgOut.shape[1] - yOut))

                imgOut[xOut:xOut+xRange, yOut:yOut+yRange, :] = out[0:xRange, 0:yRange, :]
                print(".", end="", file=sys.stderr)
                sys.stdout.flush()

        print(imgOut.min(), imgOut.max())
        print("IMAGE DONE %s" % (time.time() - imageStartTime))
        basename = os.path.basename(fileName)
        name = os.path.join(args.output_path, basename + args.suffix)
        print(name, imgOut.shape)
        cv2.imwrite(name, imgOut)

if __name__ == '__main__':
    main(sys.argv)
</code></pre>
<p>To run the program:</p>
<blockquote>
<p>cat fileListToProcess.txt | python processWholeImage.py --model_def
./BMVC_nets/S14_19_200.deploy --pretrained_model
./BMVC_nets/S14_19_FQ_178000.model --output_path ./out/
--tile_resolution 300 --suffix _out.png --gpu --use_mean</p>
</blockquote>
<p>The weight files and the above scripts can be downloaded from <a href="http://www.fit.vutbr.cz/%7Eihradis/CNN-Deblur/" rel="noreferrer">here (BMVC_net)</a>. However, you may want to convert the Caffe model to PyTorch with <a href="https://github.com/vadimkantorov/caffemodel2pytorch" rel="noreferrer">caffemodel2pytorch</a>. To do that, here is a basic starting point:</p>
<ul>
<li>install <a href="http://google.github.io/proto-lens/installing-protoc.html" rel="noreferrer">proto-lens</a></li>
<li>clone <a href="https://github.com/vadimkantorov/caffemodel2pytorch" rel="noreferrer">caffemodel2pytorch</a></li>
</ul>
<p>Next,</p>
<pre class="lang-py prettyprint-override"><code># BMVC_net, you need to download it from authors website, link above
model = caffemodel2pytorch.Net(
prototxt = './BMVC_net/S14_19_200.deploy',
weights = './BMVC_net/S14_19_FQ_178000.model',
caffe_proto = 'https://raw.githubusercontent.com/BVLC/caffe/master/src/caffe/proto/caffe.proto'
)
model.cuda()
model.eval()
torch.set_grad_enabled(False)
</code></pre>
<p>Run it on a demo tensor:</p>
<pre class="lang-py prettyprint-override"><code># make sure to have right procedure of image normalization and channel reordering
image = torch.Tensor(8, 3, 98, 98).cuda()
# outputs dict of PyTorch Variables
# in this example the dict contains the only key "prob"
#output_dict = model(data = image)
# you can remove unneeded layers:
#del model.prob
#del model.fc8
# a single input variable is interpreted as an input blob named "data"
# in this example the dict contains the only key "fc7"
output_dict = model(image)
# print(output_dict)
print(output_dict.keys())
</code></pre>
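<p>As the note below says, the nets expect VGG-style mean subtraction and a 0.004 scale on the inputs. A minimal preprocessing sketch mirroring the values hard-coded in the training script above (the helper name <code>preprocess</code> is mine, not from the original code):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def preprocess(img):
    # img: HxWx3 float32 in BGR order, as cv2.imread returns it
    img = img.astype(np.float32)
    img[:, :, 0] -= 103.939   # subtract the VGG channel means ...
    img[:, :, 1] -= 116.779
    img[:, :, 2] -= 123.68
    return img * 0.004        # ... then scale, exactly as in main() above
</code></pre>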
<p>Please note, there are some basic things to consider: the networks expect text at DPI 120-150, reasonable orientation, and reasonable black and white levels. The networks expect the mean [103.9, 116.8, 123.7] to be subtracted from inputs, and the inputs should then be multiplied by 0.004.</p> | 2020-11-15 09:51:27.690000+00:00 | 2020-11-19 11:11:26.523000+00:00 | 2020-11-19 11:11:26.523000+00:00 | null | 48,674,106 | <p>I have an image that is blurred:<br />
<img src="https://i.stack.imgur.com/gAQlO.png" alt="1" />
This is a part of a business card; it is one of the frames taken by the camera, without proper focus.</p>
<p>The clear image looks like this
<img src="https://i.stack.imgur.com/Kp124.png" alt="2" />
I'm looking for a method that could give me an image of better quality, so that it could be recognized by OCR, but it should also be reasonably fast. The image is not blurred too much (I think) but isn't good enough for OCR. I tried:</p>
<ul>
<li>different kinds of HPF,</li>
<li>Laplacian,</li>
<li>Canny detector,</li>
<li>combinations of morphological operations (opening, closing).</li>
</ul>
<p>I also tried:</p>
<ul>
<li>deconvolution with Wiener filter,</li>
<li>deconvolution and the Lucy-Richardson method.</li>
</ul>
<p>But it was not easy to find the right PSF (Point Spread Function). These methods are considered effective, but not fast enough. I also tried FFT and then IFFT with a Gaussian mask, but the results were not satisfactory. I'm looking for a general method of deblurring images with text, not only this image. Could someone help me with this problem? I'll be grateful for any advice. I'm working with OpenCV 3 (C++ and sometimes Python).</p> | 2018-02-07 22:09:21.983000+00:00 | 2022-03-11 08:03:40.207000+00:00 | 2022-03-11 08:03:40.207000+00:00 | python|c++|opencv|image-processing|ocr | ['https://stackoverflow.com/questions/64808986/scene-text-image-super-resolution-for-ocr', 'https://arxiv.org/abs/2005.03341', 'https://github.com/JasonBoy1/TextZoom', 'https://github.com/alexjc/neural-enhance', 'https://github.com/KupynOrest/DeblurGAN', 'https://i.stack.imgur.com/SBsow.gif', 'https://openaccess.thecvf.com/content_CVPRW_2020/papers/w31/Ji_Real-World_Super-Resolution_via_Kernel_Estimation_and_Noise_Injection_CVPRW_2020_paper.pdf', 'https://i.stack.imgur.com/nQjLJ.png', 'https://i.stack.imgur.com/hWEqX.png', 'https://github.com/jixiaozhong/RealSR/issues/29', 'http://www.fit.vutbr.cz/%7Eihradis/CNN-Deblur/', 'http://www.fit.vutbr.cz/%7Eihradis/CNN-Deblur/', 'https://github.com/vadimkantorov/caffemodel2pytorch', 'http://google.github.io/proto-lens/installing-protoc.html', 'https://github.com/vadimkantorov/caffemodel2pytorch'] | 15
31,119,720 | <p>There are two abstractions used for things like this in Haskell, one using <code>Arrow</code>s and the other <code>Applicative</code>s. Both can be broken down into smaller parts than those used in <code>base</code>.</p>
<hr>
<p>If you go in the <code>Arrow</code> direction and <a href="http://arxiv.org/abs/1007.2885" rel="nofollow noreferrer">break down the capabilities of <code>Arrow</code>s into component pieces</a>, you'd have a separate class for those arrows that are able to lift arbitrary functions into the arrow.</p>
<pre><code>class ArrowArr a where
    arr :: (b -> c) -> a b c
</code></pre>
<p>This would be the opposite of <code>ArrowArr</code>, arrows where any arbitrary arrow can be dropped to a function.</p>
<pre><code>class ArrowFun a where
    ($) :: a b c -> (b -> c)
</code></pre>
<p>If you just split <code>arr</code> off of <code>Arrow</code> you are left with <a href="https://stackoverflow.com/a/27387931/414413">arrow like categories that can construct and deconstruct tuples</a>.</p>
<pre><code>class Category a => ArrowLike a where
    fst   :: a (b, d) b
    snd   :: a (d, b) b
    (&&&) :: a b c -> a b c' -> a b (c,c')
</code></pre>
<hr>
<p>If you go in the <code>Applicative</code> direction, this is <a href="https://hackage.haskell.org/package/pointed-2.0.2/docs/Data-Copointed.html" rel="nofollow noreferrer"><code>Copointed</code></a> together with an "<code>Applicative</code> without <code>pure</code>" (which goes by the name <a href="https://hackage.haskell.org/package/semigroupoids-5.0.0.2/docs/Data-Functor-Apply.html#t:Apply" rel="nofollow noreferrer"><code>Apply</code></a>).</p>
<pre><code>class Copointed p where
    copoint :: p a -> a

class Functor f => Apply f where
    (<.>) :: f (a -> b) -> f a -> f b
</code></pre>
<p>When you go this way you typically drop the <code>Category</code> for functions and instead have a type constructor <code>C a</code> representing values (including function values) constructed according to a certain set of rules.</p> | 2015-06-29 15:31:17.640000+00:00 | 2015-06-29 15:31:17.640000+00:00 | 2017-05-23 11:58:14.053000+00:00 | null | 31,118,949 | <p>I had a thought to generalise <code>($)</code> like <code>Control.Category</code> generalises <code>(.)</code>, and I've done so with the code at the end of this post (<a href="https://ideone.com/Uc1398" rel="noreferrer">also ideone</a>).</p>
<p>In this code I've created a class called <code>FunctionObject</code>. This class has a function <code>($)</code> with the following signature:</p>
<pre><code>($) :: f a b -> a -> b
</code></pre>
<p>Naturally I make <code>(->)</code> an instance of this class so <code>$</code> continues to work with ordinary functions.</p>
<p>But this allows you to make special functions that, for example, know their own inverse, as the example below shows.</p>
<p>I've concluded it's one of three possibilities:</p>
<ol>
<li>I'm the first to think of it.</li>
<li>Someone else has already done it and I'm reinventing the wheel.</li>
<li>It's a bad idea.</li>
</ol>
<p>Option 1 seems unlikely, and my searches on <a href="http://hayoo.fh-wedel.de/" rel="noreferrer">hayoo</a> didn't reveal option 2, so I suspect option 3 is most likely, but if someone could explain why that is it would be good.</p>
<pre><code>import Prelude hiding ((.), ($))
import Control.Category ((.), Category)
class FunctionObject f where
    ($) :: f a b -> a -> b
    infixr 0 $

instance FunctionObject (->) where
    f $ x = f x

data InvertibleFunction a b =
    InvertibleFunction (a -> b) (b -> a)

instance Category InvertibleFunction where
    (InvertibleFunction f f') . (InvertibleFunction g g') =
        InvertibleFunction (f . g) (g' . f')

instance FunctionObject InvertibleFunction where
    (InvertibleFunction f _) $ x = f $ x

inverse (InvertibleFunction f f') = InvertibleFunction f' f

add :: (Num n) => n -> InvertibleFunction n n
add n = InvertibleFunction (+n) (subtract n)

main = do
    print $ add 2 $ 5 -- 7
    print $ inverse (add 2) $ 5 -- 3
</code></pre> | 2015-06-29 14:57:06.537000+00:00 | 2019-06-02 13:59:10.687000+00:00 | 2019-06-02 13:59:10.687000+00:00 | haskell|category-abstractions | ['https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CB8QFjAA&url=http%3A%2F%2Farxiv.org%2Fabs%2F1007.2885&ei=52KRVaLWAoTwoASV44OYCA&usg=AFQjCNF7TMVyJnsjsrNguJGMYjCIYaBXRw&sig2=uB6vQrlIINYTTpim5p5CAA', 'https://stackoverflow.com/a/27387931/414413', 'https://hackage.haskell.org/package/pointed-2.0.2/docs/Data-Copointed.html', 'https://hackage.haskell.org/package/semigroupoids-5.0.0.2/docs/Data-Functor-Apply.html#t:Apply'] | 4 |
38,417,250 | <p>The method I wrote as of my latest edit is now faster than even <code>statsmodels.tsa.stattools.acf</code> with <code>fft=True</code> until the sample size gets very large.</p>
<p><strong>Error analysis:</strong> if you want to adjust for biases &amp; get highly accurate error estimates, look at my code <a href="https://github.com/flipdazed/Hybrid-Monte-Carlo/blob/master/correlations/errors.py" rel="nofollow noreferrer">here</a>, which implements <a href="http://arxiv.org/pdf/hep-lat/0306017.pdf" rel="nofollow noreferrer">this paper</a> by Ulli Wolff <em>(<a href="https://www.physik.hu-berlin.de/de/com/ALPHAsoft" rel="nofollow noreferrer">or the original by UW in <code>Matlab</code></a>)</em></p>
<h2>Functions Tested</h2>
<ul>
<li><code>a = correlatedData(n=10000)</code> is from a routine found <a href="https://github.com/flipdazed/Hybrid-Monte-Carlo/blob/master/correlations/acorr.py" rel="nofollow noreferrer">here</a></li>
<li><code>gamma()</code> is from the same place as <code>correlated_data()</code></li>
<li><code>acorr()</code> is my function below</li>
<li><code>estimated_autocorrelation</code> is found in another answer</li>
<li><code>acf()</code> is from <code>from statsmodels.tsa.stattools import acf</code></li>
</ul>
<h2>Timings</h2>
<pre><code>%timeit a0, junk, junk = gamma(a, f=0)           # puwr.py
%timeit a1 = [acorr(a, m, i) for i in range(l)]  # my own
%timeit a2 = acf(a)                              # statsmodels
%timeit a3 = estimated_autocorrelation(a)        # numpy
%timeit a4 = acf(a, fft=True)                    # statsmodels FFT
100 loops, best of 3: 7.18 ms per loop
100 loops, best of 3: 2.15 ms per loop
10 loops, best of 3: 88.3 ms per loop
10 loops, best of 3: 87.6 ms per loop
100 loops, best of 3: 3.33 ms per loop
</code></pre>
<p>Edit: I checked again, keeping <code>l=40</code> and changing <code>n=10000</code> to <code>n=200000</code> samples. The FFT methods start to get a bit of traction and the <code>statsmodels</code> FFT implementation just edges it... (order is the same as above)</p>
<pre><code>10 loops, best of 3: 86.2 ms per loop
10 loops, best of 3: 69.5 ms per loop
1 loops, best of 3: 16.2 s per loop
1 loops, best of 3: 16.3 s per loop
10 loops, best of 3: 52.3 ms per loop
</code></pre>
<p>Edit 2: I changed my routine and re-tested vs. the FFT for <code>n=200000</code> and <code>n=10000</code> samples:</p>
<pre><code>a = correlatedData(n=200000); b=correlatedData(n=10000)
m = a.mean(); rng = np.arange(40); mb = b.mean()
%timeit a1 = map(lambda t:acorr(a, m, t), rng)
%timeit a1 = map(lambda t:acorr.acorr(b, mb, t), rng)
%timeit a4 = acf(a, fft=True)
%timeit a4 = acf(b, fft=True)
10 loops, best of 3: 73.3 ms per loop # acorr below
100 loops, best of 3: 2.37 ms per loop # acorr below
10 loops, best of 3: 79.2 ms per loop # statstools with FFT
100 loops, best of 3: 2.69 ms per loop # statstools with FFT
</code></pre>
<h2>Implementation</h2>
<pre><code>def acorr(op_samples, mean, separation, norm = 1):
"""autocorrelation of a measured operator with optional normalisation
the autocorrelation is measured over the 0th axis
Required Inputs
op_samples :: np.ndarray :: the operator samples
mean :: float :: the mean of the operator
separation :: int :: the separation between HMC steps
norm :: float :: the autocorrelation with separation=0
"""
return ((op_samples[:op_samples.size-separation] - mean)*(op_samples[separation:]- mean)).ravel().mean() / norm
</code></pre>
<p>A <strong><code>4x</code> speedup</strong> can be achieved with the variant below. You must be careful to pass <code>op_samples=a.copy()</code>, because it will otherwise modify the array <code>a</code> in place via <code>op_samples -= mean</code>:</p>
<pre><code>op_samples -= mean
return (op_samples[:op_samples.size-separation]*op_samples[separation:]).ravel().mean() / norm
</code></pre>
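<p>For completeness, a small usage sketch of <code>acorr</code> (hedged: <code>np.random.randn</code> stands in here for <code>correlatedData</code> from the repo linked above):</p>
<pre><code>import numpy as np

a = np.random.randn(10000)                 # stand-in for correlatedData(n=10000)
m = a.mean()
norm = acorr(a, m, 0)                      # zero-separation value ~ the variance
rho = [acorr(a, m, t, norm=norm) for t in range(40)]   # normalised autocorrelations
</code></pre>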
<h2>Sanity Check</h2>
<p><a href="https://i.stack.imgur.com/3lmNf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3lmNf.png" alt="enter image description here" /></a></p>
<h2>Example Error Analysis</h2>
<p>This is a bit out of scope but I can't be bothered to redo the figure without the integrated autocorrelation time or integration window calculation. The autocorrelations with errors are clear in the bottom plot
<a href="https://i.stack.imgur.com/3xFQm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3xFQm.png" alt="enter image description here" /></a></p> | 2016-07-17 02:00:28.303000+00:00 | 2017-09-28 10:25:42.787000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 14,297,012 | <p>I would like to perform Autocorrelation on the signal shown below. The time between two consecutive points is 2.5ms (or a repetition rate of 400Hz).</p>
<p><img src="https://i.stack.imgur.com/UknOq.png" alt="enter image description here"></p>
<p>This is the equation for estimating autocorrelation that I would like to use (taken from <a href="http://en.wikipedia.org/wiki/Autocorrelation" rel="noreferrer">http://en.wikipedia.org/wiki/Autocorrelation</a>, section Estimation):</p>
<p><img src="https://i.stack.imgur.com/CujxE.png" alt="enter image description here"></p>
<p>What is the simplest method of finding the estimated autocorrelation of my data in python? Is there something similar to <code>numpy.correlate</code> that I can use? </p>
<p>Or should I just calculate the mean and variance?</p>
<hr>
<p>Edit:</p>
<p>With help from <a href="https://stackoverflow.com/users/190597/unutbu">unutbu</a>, I have written:</p>
<pre><code>from numpy import *
import numpy as N
import pylab as P
fn = 'data.txt'
x = loadtxt(fn,unpack=True,usecols=[1])
time = loadtxt(fn,unpack=True,usecols=[0])
def estimated_autocorrelation(x):
    n = len(x)
    variance = x.var()
    x = x-x.mean()
    r = N.correlate(x, x, mode = 'full')[-n:]
    #assert N.allclose(r, N.array([(x[:n-k]*x[-(n-k):]).sum() for k in range(n)]))
    result = r/(variance*(N.arange(n, 0, -1)))
    return result
P.plot(time,estimated_autocorrelation(x))
P.xlabel('time (s)')
P.ylabel('autocorrelation')
P.show()
</code></pre> | 2013-01-12 19:19:24.870000+00:00 | 2017-12-17 23:46:26.423000+00:00 | 2017-05-23 11:47:08.517000+00:00 | python|numpy|signal-processing | ['https://github.com/flipdazed/Hybrid-Monte-Carlo/blob/master/correlations/errors.py', 'http://arxiv.org/pdf/hep-lat/0306017.pdf', 'https://www.physik.hu-berlin.de/de/com/ALPHAsoft', 'https://github.com/flipdazed/Hybrid-Monte-Carlo/blob/master/correlations/acorr.py', 'https://i.stack.imgur.com/3lmNf.png', 'https://i.stack.imgur.com/3xFQm.png'] | 6 |
59,548,789 | <p>Part of the problem is that there are two different omega coefficients, omega_h and omega_t. I just checked semTools, and it seems to report omega_t (which it labels as omega).</p>
<p>psych reports both. With the example data set from semTools, the answers are similar, but not identical. I suspect the difference is the psych version allows cross loadings and the sem version does not.</p>
<pre><code>reliability(fit)
          visual    textual     speed     total
alpha  0.6261171  0.8827069 0.6884550 0.7604886
omega  0.6253180  0.8851754 0.6877600 0.8453351
omega2 0.6253180  0.8851754 0.6877600 0.8453351
omega3 0.6120052  0.8850608 0.6858417 0.8596204
avevar 0.3705589  0.7210163 0.4244883 0.5145874
</code></pre>
<pre><code>summary(omega(HolzingerSwineford1939[7:15], covar=TRUE))

Omega
Alpha:               0.76
G.6:                 0.81
Omega Hierarchical:  0.44
Omega H asymptotic:  0.52
Omega Total          0.85
</code></pre>
<p>...</p>
<pre><code>Total, General and Subset omega for each subset
                                                g  F1*  F2*  F3*
Omega total for total scores and subscales   0.85 0.89 0.65 0.71
Omega general for total scores and subscales 0.44 0.27 0.24 0.17
Omega group for total scores and subscales   0.35 0.62 0.40 0.54
</code></pre>
<p>For a discussion of various estimates of reliability, see the article on PsyArXiv preprints, "Reliability from Alpha to Omega": <a href="https://psyarxiv.com/2y3w9/" rel="nofollow noreferrer">preprint of the reliability article available from PsyArXiv</a></p> | 2019-12-31 22:15:14.423000+00:00 | 2019-12-31 22:15:14.423000+00:00 | null | null | 58,748,786 | <p>Additionally to reporting Cronbach's alpha, I would like to report McDonald's omega for each scale of my survey. I know that there are two ways to get omega in R - either with the psych command omega(), which is for exploratory analysis, or with the reliability() command in semTools, which is for confirmatory analysis. Because I have difficulties calculating the latter, I would like to know whether the results of both omegas are identical in terms of value?</p>
<p>Thanks in advance! </p>
<p>Carolin</p> | 2019-11-07 12:27:46.807000+00:00 | 2019-12-31 22:15:14.423000+00:00 | null | r|reliability | ['https://psyarxiv.com/2y3w9/'] | 1 |
64,118,889 | <p>Looking at the TensorFlow definition of spatial pyramid pooling (<a href="https://github.com/tensorflow/addons/blob/v0.11.2/tensorflow_addons/layers/spatial_pyramid_pooling.py" rel="nofollow noreferrer">link</a>), it seems like the output is flattened, so in that case only a 1D convolution would make sense. However, whether there is any spatial information left for a convolution to learn, I don't know.</p>
<p><strong>EDIT:</strong> After a quick investigation of what the Spatial pyramid pooling algorithm does, I do not think that it is a good idea to use any convolution. That is because there is no real spatial information left in the data. That is also how it is used, see for example <a href="https://arxiv.org/abs/1406.4729" rel="nofollow noreferrer">this article</a>.</p> | 2020-09-29 11:42:27.320000+00:00 | 2020-09-29 11:49:17.983000+00:00 | 2020-09-29 11:49:17.983000+00:00 | null | 64,118,740 | <p>Must there be a fully connected layer after a Spatial pyramid pooling layer when doing classification? Or can convolutional layers be used after the Spatial pyramid pooling layer?</p> | 2020-09-29 11:32:00.647000+00:00 | 2020-09-29 12:09:26.337000+00:00 | 2020-09-29 12:09:26.337000+00:00 | python|tensorflow|deep-learning|conv-neural-network | ['https://github.com/tensorflow/addons/blob/v0.11.2/tensorflow_addons/layers/spatial_pyramid_pooling.py', 'https://arxiv.org/abs/1406.4729'] | 2 |
51,755,257 | <p>The important paper here is <a href="https://arxiv.org/pdf/1802.07088.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1802.07088.pdf</a> - <em>i-RevNet: Deep Invertible Networks</em> - and this git repo: <a href="https://github.com/jhjacobsen/pytorch-i-revnet" rel="nofollow noreferrer">https://github.com/jhjacobsen/pytorch-i-revnet</a></p>
<p>Reading the above paper, the critical components in i-RevNets are <strong>homeomorphic layers</strong>. On the link between topology and neural nets, cf. <a href="http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/" rel="nofollow noreferrer">http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/</a> - <strong>Neural Networks, Manifolds, and Topology</strong> (search for 'homeomorphic').</p>
<p>In <a href="https://github.com/jhjacobsen/pytorch-i-revnet" rel="nofollow noreferrer">https://github.com/jhjacobsen/pytorch-i-revnet</a>, homeomorphic layers are implemented in <code>class irevnet_block(nn.Module)</code>. Note that there are <strong>no</strong> operations that discard information, like <strong>max pooling or averaging</strong> (with the exception of the output layer); only <em>batch normalization</em> (<a href="https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c" rel="nofollow noreferrer">https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c</a>) is applied, and the ReLUs are locally strictly linear.</p>
<p><a href="https://stackoverflow.com/questions/34716454/where-do-i-call-the-batchnormalization-function-in-keras">Where do I call the BatchNormalization function in Keras?</a> shows how to implement this in Keras; simply stack the layers into a homeomorphic layer:</p>
<pre><code># homeomorphic layer -> no pooling/averaging layers
model.add(Dense(64, init='uniform'))
model.add(Activation('relu'))
model.add(BatchNormalization())
</code></pre>
<p>The rest of the code in <a href="https://github.com/jhjacobsen/pytorch-i-revnet/blob/master/models/iRevNet.py" rel="nofollow noreferrer">https://github.com/jhjacobsen/pytorch-i-revnet/blob/master/models/iRevNet.py</a>, i.e. <code>def inverse(self, x)</code> or <code>def forward(self, x)</code>, can be reproduced using the Keras functions in <a href="https://keras.io/layers/merge/" rel="nofollow noreferrer">https://keras.io/layers/merge/</a>. Cf. <a href="https://github.com/jhjacobsen/pytorch-i-revnet/blob/master/models/model_utils.py" rel="nofollow noreferrer">https://github.com/jhjacobsen/pytorch-i-revnet/blob/master/models/model_utils.py</a> on the <code>merge</code> and <code>split</code> functions; they use <code>torch.cat</code> and <code>torch.split</code>, whose Keras equivalents are in <a href="https://keras.io/layers/merge/" rel="nofollow noreferrer">https://keras.io/layers/merge/</a></p> | 2018-08-08 20:36:11.460000+00:00 | 2018-08-08 20:42:38.160000+00:00 | 2018-08-08 20:42:38.160000+00:00 | null | 51,750,347 | <p>I want to implement i-RevNet on the MNIST dataset in Keras and generate the original 28*28 input images from the output of i-RevNet, but I don't have a clue. Online resources I can find are all based on TensorFlow.</p> | 2018-08-08 15:23:54.973000+00:00 | 2021-01-22 09:16:34.413000+00:00 | null | python|keras | ['https://arxiv.org/pdf/1802.07088.pdf', 'https://github.com/jhjacobsen/pytorch-i-revnet', 'http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/', 'https://github.com/jhjacobsen/pytorch-i-revnet', 'https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c', 'https://stackoverflow.com/questions/34716454/where-do-i-call-the-batchnormalization-function-in-keras', 'https://github.com/jhjacobsen/pytorch-i-revnet/blob/master/models/iRevNet.py', 'https://keras.io/layers/merge/', 'https://github.com/jhjacobsen/pytorch-i-revnet/blob/master/models/model_utils.py', 'https://keras.io/layers/merge/'] | 10
59,231,481 | <p>Looking for algorithms means entering the realm of Academia, so it's helpful to know the proper name of the problem. We're looking for a <a href="https://en.wikipedia.org/wiki/Streaming_algorithm" rel="nofollow noreferrer">Streaming Algorithm</a>, probably Quantile Streaming although you may want other statistics too. Search for that phrase and you'll get more informed answers.</p>
<p>One easy answer is <a href="https://sites.cs.ucsb.edu/~suri/psdir/ency.pdf" rel="nofollow noreferrer">this paper</a>, a collaboration between Amazon and Academia describing the state of the art as of 2007. It provides a high-level view of the Greenwald-Khanna (GK) and Q-Digest algorithms. You can actually find those algorithms in libraries. <a href="https://github.com/sengelha/streaming-percentiles" rel="nofollow noreferrer">This library</a> has an easy to use looking C++ and JS implementation. The <a href="https://software.intel.com/en-us/mkl-ssnotes-computing-quantiles-for-streaming-data-with-vsl-ss-method-squants-zw" rel="nofollow noreferrer">Intel Math Kernel Library</a> implements Zhang 2007.</p>
<p>While the sengelha library looks easy to use and good enough for most needs, the world has moved on since 2007. A <a href="https://arxiv.org/abs/1907.00236" rel="nofollow noreferrer">paper from this year</a> (Amazon, Yahoo, and Academia) describes the "lazy kll" algorithm which is implemented in the <a href="https://datasketches.github.io/" rel="nofollow noreferrer">Data Sketches</a> library (C++, Java, Python) <a href="https://github.com/apache/incubator-datasketches-cpp/blob/master/kll/include/kll_sketch.hpp" rel="nofollow noreferrer">here</a>.</p>
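<p>To make that concrete, a minimal sketch with the Apache DataSketches Python bindings (assuming <code>pip install datasketches</code>; <code>k=200</code> is just an example accuracy/memory trade-off, and the exponential samples stand in for real latency measurements):</p>
<pre><code>from datasketches import kll_floats_sketch
import random

sk = kll_floats_sketch(200)                  # k controls accuracy vs memory
for _ in range(1000000):
    sk.update(random.expovariate(1.0))       # feed one latency sample at a time

print(sk.get_quantile(0.95))                 # approximate 95th percentile
print(sk.get_quantile(0.99))
</code></pre>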
<p>This information should be enough to let you generate quantile data from your software or even distributed software, and I hope others post even better answers.</p> | 2019-12-08 00:29:33.747000+00:00 | 2019-12-08 00:29:33.747000+00:00 | null | null | 59,231,480 | <p>It's such a common problem but the answers are hard to find. I want to measure the performance of [ web server 95th percentile response time | API calls | algorithm performance | disk I/O | whatever ]. But, you know, that's a lot of data and I don't want to store it because this is used in <em>production</em>. Also, I don't want to spend a lot of CPU time calculating how slow my software is.</p>
<p>If you search for answers, you'll see many references to ancient algorithms that store a ton of data in bins or keep a large reservoir of random sample data. Common results include P-square and binmedian, and notice it's hard to find any decent implementations because although they're commonly suggested they're also garbage and nobody with a clue uses them.</p>
<p>You'll also find clever-sounding answers you can't implement because half the explanation is missing. Maybe if you were a stats major you'd understand <a href="https://stackoverflow.com/a/2144754">this</a>.</p>
<p>So what can I use to get cheap performance statistics? Algorithm and source code, please.</p> | 2019-12-08 00:29:33.747000+00:00 | 2019-12-08 00:29:33.747000+00:00 | null | performance|quantile | ['https://en.wikipedia.org/wiki/Streaming_algorithm', 'https://sites.cs.ucsb.edu/~suri/psdir/ency.pdf', 'https://github.com/sengelha/streaming-percentiles', 'https://software.intel.com/en-us/mkl-ssnotes-computing-quantiles-for-streaming-data-with-vsl-ss-method-squants-zw', 'https://arxiv.org/abs/1907.00236', 'https://datasketches.github.io/', 'https://github.com/apache/incubator-datasketches-cpp/blob/master/kll/include/kll_sketch.hpp'] | 7 |
46,226,170 | <p>Faster convergence with a very high loss could possibly mean you are facing an exploding gradients problem. Try to use a much lower learning rate like 1e-5 or 1e-6. You can also try techniques like gradient clipping to limit your gradients in case of high learning rates.</p>
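<p>For example, a hedged sketch of gradient clipping in Keras (mirroring the SGD-with-momentum and LSTM setup from the question; <code>clipnorm</code> is a standard Keras optimizer argument, and the exact numbers are illustrative):</p>
<pre><code>from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.optimizers import SGD

model = Sequential()
model.add(LSTM(64, input_shape=(20, 10)))   # toy sequence model
model.add(Dense(1, activation='linear'))

# rescale any gradient whose L2 norm exceeds 1.0 (clipvalue would clip elementwise)
opt = SGD(lr=1e-5, momentum=0.9, clipnorm=1.0)
model.compile(optimizer=opt, loss='mse')
</code></pre>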
<p><strong>Answer 1</strong> </p>
<p>Another reason could be initialization of weights, try the below 3 methods:</p>
<ol>
<li>Method described in this paper <a href="https://arxiv.org/abs/1502.01852" rel="noreferrer">https://arxiv.org/abs/1502.01852</a></li>
<li>Xavier initialization</li>
<li>Random initialization</li>
</ol>
<p>For many cases 1st initialization method works the best.</p>
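<p>In Keras terms, a sketch of the three initializations listed above (the initializer names are the standard Keras ones I believe correspond to that list):</p>
<pre><code>from keras.layers import Dense
from keras import initializers

Dense(64, kernel_initializer='he_normal')       # 1. He et al. (arXiv:1502.01852)
Dense(64, kernel_initializer='glorot_uniform')  # 2. Xavier/Glorot initialization
Dense(64, kernel_initializer=initializers.RandomNormal(stddev=0.05))  # 3. random
</code></pre>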
<p><strong>Answer 2</strong></p>
<p>You can try different optimizers like </p>
<ol>
<li>Momentum optimizer</li>
<li>SGD or Gradient descent</li>
<li>Adam optimizer</li>
</ol>
<p>The choice of your optimizer should be based on the choice of your loss function. For example: for a logistic regression problem with MSE as the loss function, gradient-based optimizers may fail to converge, because that loss surface is non-convex.</p>
<p><strong>Answer 3</strong></p>
<p>How deep or wide your network should be is again fully dependent on which type of network you are using and what the problem is. </p>
<p>As you said, you are using a sequential model with LSTMs to learn sequences of text. No doubt your choice of model is good for this problem; you can also try 4-5 LSTM layers.</p>
<p><strong>Answer 4</strong></p>
<p>If your gradients are going to either 0 or infinity, that is called vanishing or exploding gradients (or it may simply mean early convergence); try gradient clipping with a proper learning rate and the first weight initialization technique.</p>
<p>I am sure this will definitely solve your problem.</p> | 2017-09-14 18:38:30.630000+00:00 | 2017-09-15 05:49:44.240000+00:00 | 2017-09-15 05:49:44.240000+00:00 | null | 46,224,598 | <p>This is more of a deep learning conceptual problem, and if this is not the right platform I'll take it elsewhere.</p>
<p>I'm trying to use a Keras LSTM sequential model to learn sequences of text and map them to a numeric value (a regression problem).</p>
<p>The thing is, the learning always converges too fast on high loss (both training and testing). I've tried all possible hyperparameters, and I have a feeling it's a local minima issue that causes the model's high bias.</p>
<p>My questions are basically :</p>
<ol>
<li>How to initialize weights and bias given this problem?</li>
<li>Which optimizer to use?</li>
<li>How deep I should extend the network (I'm afraid that if I use a very deep network, the training time will be unbearable and the model variance will grow)</li>
<li>Should I add more training data?</li>
</ol>
<p>Input and output are normalized with minmax.</p>
<p>I am using SGD with momentum, currently 3 LSTM layers (126,256,128) and 2 dense layers (200 and 1 output neuron)</p>
<p>I have printed the weights after few epochs and noticed that <strong>many weights
are zero and the rest basically have the value of 1</strong> (or very close to it).</p>
<p>Here are some plots from tensorboard :<a href="https://i.stack.imgur.com/adC9M.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/adC9M.jpg" alt="enter image description here"></a></p> | 2017-09-14 16:52:57.057000+00:00 | 2018-10-25 13:43:36.913000+00:00 | 2017-09-14 20:36:46.777000+00:00 | python|tensorflow|deep-learning|keras|lstm | ['https://arxiv.org/abs/1502.01852'] | 1 |
68,066,505 | <p>VAEs can't cope with missing data by default. Clean your data first, or try to apply some method for dealing with missing data, e.g.:</p>
<p><a href="https://arxiv.org/abs/2006.05301" rel="nofollow noreferrer">https://arxiv.org/abs/2006.05301</a></p> | 2021-06-21 10:44:02.047000+00:00 | 2021-06-21 10:44:02.047000+00:00 | null | null | 55,756,020 | <p>How do you resolve exploding gradient in a deep generative model(VAE)?</p>
<p>NB: the data set contains a lot of NaN values in the columns</p> | 2019-04-19 02:56:08.703000+00:00 | 2021-06-21 10:44:02.047000+00:00 | null | lstm|gradient|autoencoder|generative-adversarial-network|generative | ['https://arxiv.org/abs/2006.05301'] | 1
68,944,896 | <p>You can notice that there are only O(n^(1/2)) unique values in the set S = {⌊n/1⌋, ⌊n/2⌋, ..., ⌊n/(n-1)⌋, ⌊n/n⌋}. Therefore you can calculate the function in O(n^(1/2)).</p>
<p>Also, since this sum is symmetric (the divisor pairs mirror around √n), you can even calculate it about 2x faster by using this formula: D(n) = 2·Σ(x=1→u) ⌊n/x⌋ − u², for u = ⌊n^(1/2)⌋</p>
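<p>A quick numeric check of that identity (the helper names are mine; <code>math.isqrt</code> needs Python 3.8+, use <code>int(math.sqrt(n))</code> otherwise):</p>
<pre><code>import math

def D_naive(n):
    return sum(n // x for x in range(1, n + 1))

def D_fast(n):
    u = math.isqrt(n)
    return 2 * sum(n // x for x in range(1, u + 1)) - u * u

assert all(D_naive(n) == D_fast(n) for n in range(1, 2000))
</code></pre>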
<p>Even more complex but faster: using the method that Richard Sladkey described in <a href="https://arxiv.org/abs/1206.3369" rel="nofollow noreferrer">this paper</a> you can calculate the function in O(n^(1/3))</p> | 2021-08-26 20:22:17.190000+00:00 | 2021-08-28 08:21:55.383000+00:00 | 2021-08-28 08:21:55.383000+00:00 | null | 27,768,625 | <p>How can I sum the following sequence:</p>
<pre><code>⌊n/1⌋ + ⌊n/2⌋ + ⌊n/3⌋ + ... + ⌊n/n⌋
</code></pre>
<p>This is a simple O(n) solution in C++:</p>
<pre><code>#include <iostream>
int main()
{
    int n;
    std::cin >> n;
    unsigned long long res = 0;
    for (int i = 1; i <= n; i++)
    {
        res += n/i;
    }
    std::cout << res << std::endl;
    return 0;
}
</code></pre>
<p>Do you know any better solution than this? I mean O(1) or O(log(n)). Thank you for your time :) and solutions</p>
<p>Edit:
Thank you for all your answers. If someone wants the solution O(sqrt(n)):
Python:</p>
<pre class="lang-py prettyprint-override"><code>import math
def seq_sum(n):
    sqrtn = int(math.sqrt(n))
    return sum(n // k for k in range(1, sqrtn + 1)) * 2 - sqrtn ** 2
n = int(input())
print(seq_sum(n))
</code></pre>
<p>C++:</p>
<pre class="lang-c prettyprint-override"><code>#include <iostream>
#include <cmath>
int main()
{
    int n;
    std::cin >> n;
    int sqrtn = (int)(std::sqrt(n));
    long long res2 = 0;
    for (int i = 1; i <= sqrtn; i++)
    {
        res2 += 2*(n/i);
    }
    res2 -= sqrtn*sqrtn;
    std::cout << res2 << std::endl;
    return 0;
}
</code></pre> | 2015-01-04 18:04:39.060000+00:00 | 2021-08-28 08:21:55.383000+00:00 | 2015-01-04 19:56:02.040000+00:00 | c++|algorithm|math|discrete-mathematics | ['https://arxiv.org/abs/1206.3369'] | 1 |
27,769,242 | <p>The Polymath project sketches an algorithm for computing this function in time O(n^(1/3 + o(1))), see section 2.1 on pages 8-9 of:</p>
<p><a href="http://arxiv.org/abs/1009.3956">http://arxiv.org/abs/1009.3956</a></p>
<p>The algorithm involves slicing the region into sufficiently thin intervals and <em>estimating</em> the value on each, where the intervals are chosen to be thin enough that the estimate will be exact when rounded to the nearest integer. So you compute up to some range directly (they suggest 100n^(1/3) but you could modify this with some care) and then do the rest in these thin slices.</p>
<p>See <a href="https://oeis.org/A006218">the OEIS entry</a> for more information on this sequence.</p>
<p>Edit: I now see that Kerrek SB mentions this algorithm in the comments. In fairness, though, I added the comment to the OEIS 5 years ago so I don't feel bad for posting 'his' answer. :)</p>
<p>I should also mention that no O(1) algorithm is possible, since the answer is around n log n and hence even <em>writing it out</em> takes time > log n.</p> | 2015-01-04 19:03:58.670000+00:00 | 2015-01-04 19:12:57.303000+00:00 | 2015-01-04 19:12:57.303000+00:00 | null | 27,768,625 | <p>How can I sum the following sequence:</p>
<pre><code>⌊n/1⌋ + ⌊n/2⌋ + ⌊n/3⌋ + ... + ⌊n/n⌋
</code></pre>
<p>This is a simple O(n) solution in C++:</p>
<pre><code>#include <iostream>
int main()
{
    int n;
    std::cin >> n;
    unsigned long long res = 0;
    for (int i = 1; i <= n; i++)
    {
        res += n/i;
    }
    std::cout << res << std::endl;
    return 0;
}
</code></pre>
<p>Do you know any better solution than this? I mean O(1) or O(log(n)). Thank you for your time :) and solutions</p>
<p>Edit:
Thank you for all your answers. If someone wants the solution O(sqrt(n)):
Python:</p>
<pre class="lang-py prettyprint-override"><code>import math
def seq_sum(n):
    sqrtn = int(math.sqrt(n))
    return sum(n // k for k in range(1, sqrtn + 1)) * 2 - sqrtn ** 2
n = int(input())
print(seq_sum(n))
</code></pre>
<p>C++:</p>
<pre class="lang-c prettyprint-override"><code>#include <iostream>
#include <cmath>
int main()
{
    int n;
    std::cin >> n;
    int sqrtn = (int)(std::sqrt(n));
    long long res2 = 0;
    for (int i = 1; i <= sqrtn; i++)
    {
        res2 += 2*(n/i);
    }
    res2 -= sqrtn*sqrtn;
    std::cout << res2 << std::endl;
    return 0;
}
</code></pre> | 2015-01-04 18:04:39.060000+00:00 | 2021-08-28 08:21:55.383000+00:00 | 2015-01-04 19:56:02.040000+00:00 | c++|algorithm|math|discrete-mathematics | ['http://arxiv.org/abs/1009.3956', 'https://oeis.org/A006218'] | 2 |
4,223,640 | <p>When you say </p>
<blockquote>
<p>depth of tree is 2 nodes. i.e. each
parent msg could have two child nodes.</p>
</blockquote>
<p>I get confused.</p>
<p>If each of the two child nodes can have more children, then you are not talking about depth, but about the width of a branch of a node.</p>
<p><strong>1) depth really = 2</strong></p>
<p>If your max depth is really 2 (in other words, all nodes connect to the root, or zero-level node, in 2 steps; put differently, for each node there is no ancestor other than its parent and grandparent) then you could even use the relational model directly to store hierarchical data (either through a self join, which is not so bad with such a low maximum depth, or by splitting the data into 3 entities - grandparents, parents and children)</p>
<p><strong>2) depth >> 2</strong></p>
<p>If number 2 was the width and the depth is variable and potentially quite deep then look at nested sets, with two additional possibilities to explore</p>
<ul>
<li>using the nested set idea you could explore the geom type to store hierarchical data (the benefits might not be so interesting - few useful operators, a single field, possibly a better indexing strategy)</li>
<li><a href="http://arxiv.org/pdf/cs.DB/0402051" rel="nofollow">continued fractions</a> (based on nested sets; Tropashko offered a generalization which seemed interesting, as it promised to improve on some of the problems with nested sets; I didn't implement it though so... do your own tests).</li>
</ul> | 2010-11-19 09:28:58.970000+00:00 | 2010-11-19 09:28:58.970000+00:00 | null | null | 4,223,385 | <p>I have to store messages that my web app fetch from Twitter into a local database. The purpose of storing messages is that I need to display these messages in a hierarchical order i.e. certain messages(i.e. status updates) that user input through my application are child nodes of others (I have to show them as sub-list item of parent message). Which data model should I use Adjacency List Model OR Nested Set Model? I have to manage four types of messages & messages in each category could have two child node. One more question here is that what I see(realize) in both cases that input is controlled manually that is how reference to parent node in adjacency model or right, left are given in Nested List. My app fetch messages data from twitter like: </p>
<pre><code>foreach ($xml4->entry as $status4) {
    echo '<li>' . $status4->content . '</li>';
}
</code></pre>
<p>So it's not manual; any number of messages can be available at any time. How could I make a parent-child relation among these messages? At the moment, users enter messages in different windows that correspond to the four types of messages; my app adds keywords & fetches those back to display in different windows. All those messages are currently parent messages. Now, how do I let a user enter a message that gets saved into the database as the child of another?</p> | 2010-11-19 08:53:24.463000+00:00 | 2010-11-19 09:28:58.970000+00:00 | 2010-11-19 09:06:16.833000+00:00 | mysql|html|twitter|adjacency-list|nested-lists | ['http://arxiv.org/pdf/cs.DB/0402051,'] | 1
29,551,135 | <p>See <a href="http://arxiv.org/abs/1312.1431" rel="noreferrer">here</a> (section 4) for some peer-reviewed benchmarks which I personally worked on. The comparison was between Julia and PyPy.</p> | 2015-04-10 00:02:49.573000+00:00 | 2015-04-10 00:02:49.573000+00:00 | null | null | 29,548,803 | <p>The performance benchmarks for Julia I have seen so far, such as at <a href="http://julialang.org/" rel="nofollow noreferrer">http://julialang.org/</a>, compare Julia to pure Python or Python+NumPy. Unlike NumPy, SciPy uses the BLAS and LAPACK libraries, where we get an optimum multi-threaded SIMD implementation. If we assume that Julia and Python performance are the same when calling BLAS and LAPACK functions (under the hood), how does Julia performance compare to CPython when using Numba or NumbaPro for code that doesn't call BLAS or LAPACK functions?</p>
<p>One thing I notice is that Julia is using LLVM v3.3, while Numba uses llvmlite, which is built on LLVM v3.5. Does Julia's old LLVM prevent an optimum SIMD implementation on newer architectures, such as Intel Haswell (AVX2 instructions)?</p>
<p>I am interested in performance comparisons for both spaghetti code and small DSP loops to handle very large vectors. The latter is more efficiently handled by the CPU than the GPU for me due to the overhead of moving data in and out of the GPU device memory. I am only interested in performance on a single Intel Core-i7 CPU, so cluster performance is not important to me. Of particular interest to me is the ease and success with creating parallelized implementations of DSP functions.</p>
<p>A second part of this question is a comparison of Numba to NumbaPro (ignoring the MKL BLAS). Is NumbaPro's <code>target="parallel"</code> really needed, given the new <code>nogil</code> argument for the <code>@jit</code> decorator in Numba?</p> | 2015-04-09 20:54:44.363000+00:00 | 2017-04-28 21:36:26.517000+00:00 | 2017-01-06 06:20:49.787000+00:00 | python|performance|julia|numba|numba-pro | ['http://arxiv.org/abs/1312.1431'] | 1 |
60,231,502 | <p>There can be confusion around this issue because heuristics mean different things in different contexts. So, let me talk about the different meanings of heuristics. Then we can return to evaluation functions.</p>
<p><strong>Single Agent Heuristic Search</strong></p>
<p>In single-agent heuristic search (e.g. A*, IDA*) heuristics are usually qualified with the words <em>admissible</em> or <em>consistent</em>. In this context heuristics are lower bounds on the cost to reach the goal. That is, they are the result of a function which returns a numerical value. If the heuristic is <em>admissible</em>, the value returned does not overestimate the true distance to the goal. If the heuristic is <em>consistent</em>, the heuristic between adjacent states never changes by more than the edge cost. Consistent heuristics are admissible if the goal has a heuristic of 0, but not all admissible heuristics are consistent.</p>
<p>There are many properties proven on the combinations of heuristics and algorithms. A* and IDA* will find optimal paths with a consistent heuristic. A* is <em>optimal</em> in necessary node expansions with a consistent heuristic, but with an inconsistent heuristic A* can perform 2^N expansions of N states. (See <a href="https://www.movingai.com/SAS/INC/" rel="nofollow noreferrer">this demo</a> for an example of where this occurs.)</p>
<p><strong>Game Playing</strong></p>
<p>In game playing programs using algorithms like alpha-beta or Monte Carlo tree search (MCTS), heuristics represent approximations of the win/loss value of a game. For instance, the value might be scaled between -1 (loss) and +1 (win), with values in between representing uncertainty about the true value. Here there are no guarantees on underestimate or overestimation, but the better ordering of values (wins > draws > losses) the better the performance of the algorithms. Alpha-beta pruning will return the same result even if an affine transform is applied to the heuristic, because alpha-beta uses the relative ordering of values to search. See, <a href="https://arxiv.org/pdf/1406.0486.pdf" rel="nofollow noreferrer">this paper</a> for an example of heuristics in MCTS. Note that in this context a heuristic still has a numerical value.</p>
<p><strong>Optimization</strong></p>
<p>In search for optimization problems like SAT (satisfiability problems) or CSP (constraint satisfaction problems), algorithms are much more efficient if they can find good solutions quickly. Thus, instead of searching in a naive manner, they order their search in a way that is expected to be more efficient. If the ordering is good the search might be able to terminate earlier, but this isn't guaranteed. In this context a heuristic is a way of ordering choices that is likely to end up with finding a solution more quickly. (A satisfying assignment of variables in SAT or in a CSP.) <a href="https://arxiv.org/pdf/1401.4601.pdf" rel="nofollow noreferrer">Here is an example</a> of work that explores different ordering heuristics for these problems.</p>
<p>In this context a heuristic is used for ordering, so it doesn't have to be numerically based. If it is numerically based, the numbers would not necessarily have global meaning as heuristics do in other types of search. There are many, many variants of optimizations problems besides SAT and CSP where heuristics are used in this manner.</p>
<p><strong>Evaluation Function</strong></p>
<p>So, then, what is an evaluation function? It is probably most commonly used in the second context of games, where heuristic and evaluation function can be interchanged, but it is more generally referring to the numerical evaluation of a state. The primary point would be that an evaluation function is more specific than a heuristic function, as a heuristic has broad use in wider contexts.</p> | 2020-02-14 18:09:06.573000+00:00 | 2020-02-20 06:04:34.980000+00:00 | 2020-02-20 06:04:34.980000+00:00 | null | 23,031,657 | <p>I'm reading about searching algorithms and heuristic search and I am slightly confused about heuristic and evaluation functions. People seem to use them quite freely to describe seemingly same things. What am I missing?</p> | 2014-04-12 14:27:00.460000+00:00 | 2021-01-25 12:45:05.010000+00:00 | null | artificial-intelligence|evaluation|heuristics | ['https://www.movingai.com/SAS/INC/', 'https://arxiv.org/pdf/1406.0486.pdf', 'https://arxiv.org/pdf/1401.4601.pdf'] | 3 |
49,990,219 | <p>You can use both dropout and L2 regularization at the same time, as is commonly done. They are quite different types of regularization. However, I would note that recent literature has suggested that batch normalization has replaced the need for dropout, as noted in the original paper on batch normalization:</p>
<p><a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">https://arxiv.org/abs/1502.03167</a></p>
<blockquote>
<p>From the abstract: "It also acts as a regularizer, in some cases eliminating the need for Dropout."</p>
</blockquote>
<p>L2 regularization is typically applied when batchnorm is in use. There's nothing stopping you from applying all 3 forms of regularization; the statement above only indicates that you might not see an improvement by applying dropout when batchnorm is already in use.</p>
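<p>For instance, a hedged TF1-style sketch combining both, matching the <code>tf.contrib.rnn</code> API from the question (cell size and coefficients are illustrative, not recommendations):</p>
<pre><code>import tensorflow as tf

cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(128),
                                       output_keep_prob=0.8)
         for _ in range(2)]
multi_cell = tf.contrib.rnn.MultiRNNCell(cells)

# L2 on the trainable weights; compute this after the rest of the graph is built
l2 = 1e-4 * tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()])
# total_loss = reconstruction_loss + l2   # add the term to your seq2seq loss
</code></pre>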
<p>There are generally optimal values for the amount of L2 regularization to apply and the dropout keep probability. These are hyperparameters you tune by trial and error or a hyperparameter search algorithm.</p> | 2018-04-23 21:24:53.127000+00:00 | 2018-04-23 21:24:53.127000+00:00 | null | null | 49,987,574 | <p>I have been working on a project related with sequence to sequence autoencoder for time series forecasting. So, I have used <code>tf.contrib.rnn.MultiRNNCell</code> in encoder and decoder. I am confused in which strategy used in order to regularize my seq2seq model. Should I use L2 regularization in the loss or using DropOutWrapper (<code>tf.contrib.rnn.DropoutWrapper</code>) in the multiRNNCell? Or can I use both strategies ... L2 for weigths and bias (projection layer) and DropOutWrapper between cells in the multiRNNCell?
Thanks in advance :)</p> | 2018-04-23 18:21:06.567000+00:00 | 2018-04-23 21:24:53.127000+00:00 | null | tensorflow|regularized|dropout|seq2seq | ['https://arxiv.org/abs/1502.03167'] | 1 |
46,806,820 | <p>So you want to use a different function than WX+b for your neurons. Well, in TensorFlow you explicitly calculate this product, so for example you do</p>
<pre><code>y_ = tf.matmul(X, W)
</code></pre>
<p>you simply have to write your formula and let the network learn. It should not be difficult to implement. </p>
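<p>Any differentiable formula you write with TensorFlow ops is trainable the same way; e.g. a hedged TF1-style sketch of a quadratic unit instead of plain WX+b (the variable names are made up for illustration):</p>
<pre><code>import tensorflow as tf

X  = tf.placeholder(tf.float32, [None, 10])
W1 = tf.Variable(tf.random_normal([10, 1]))
W2 = tf.Variable(tf.random_normal([10, 1]))
b  = tf.Variable(tf.zeros([1]))

# the neuron computes X*W1 + (X^2)*W2 + b instead of plain WX + b
y_ = tf.matmul(X, W1) + tf.matmul(tf.square(X), W2) + b
</code></pre>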
<p>In addition, what you are trying to do (according to the paper you link) is called batch normalization and is relatively standard. The idea being you normalize your intermediate steps (in the different layers). Check for example <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">https://arxiv.org/abs/1502.03167</a> or <a href="https://bcourses.berkeley.edu/files/66022277/download?download_frd=1&verifier=oaU8pqXDDwZ1zidoDBTgLzR8CPSkWe6MCBKUYan7" rel="nofollow noreferrer">these lecture notes</a></p>
<p>Hope that helps,
Umberto</p> | 2017-10-18 09:24:10.057000+00:00 | 2017-10-18 09:24:10.057000+00:00 | null | null | 46,806,127 | <p>I have been working with Keras for a week or so. I know that Keras can use either TensorFlow or Theano as a backend. In my case, I am using TensorFlow.</p>
<p>So I'm wondering: is there a way to write a NN in Keras, and then print out the equivalent version in TensorFlow?</p>
<p><strong>MVE</strong></p>
<p>For instance suppose I write</p>
<pre><code># create seq model
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# add layers
model.add(Dense(100, input_dim = 10, activation = 'relu'))  # input_dim takes an int, not a tuple
model.add(Dense(1, activation = 'linear'))
# compile model
model.compile(optimizer = 'adam', loss = 'mse')
# fit
model.fit(Xtrain, ytrain, epochs = 100, batch_size = 32)
# predict
ypred = model.predict(Xtest, batch_size = 32)
# evaluate
result = model.evaluate(Xtest, ytest)
</code></pre>
<p>This code might be wrong, since I just started, but I think you get the idea. </p>
<p>What I want to do is write down this code, run it (or not even, maybe!) and then have a function or something that will produce the TensorFlow code that Keras has written to do all these calculations.</p> | 2017-10-18 08:46:24.170000+00:00 | 2017-10-19 21:25:09.777000+00:00 | 2017-10-19 21:25:09.777000+00:00 | python|machine-learning|tensorflow|neural-network|keras | ['https://www.google.ch/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0ahUKEwikh-HM7PnWAhXDXRQKHZJhD9EQFggyMAE&url=https%3A%2F%2Farxiv.org%2Fabs%2F1502.03167&usg=AOvVaw1nGzrGnhPhNGEczNwcn6WK', 'https://www.google.ch/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0ahUKEwikh-HM7PnWAhXDXRQKHZJhD9EQFghCMAM&url=https%3A%2F%2Fbcourses.berkeley.edu%2Ffiles%2F66022277%2Fdownload%3Fdownload_frd%3D1%26verifier%3DoaU8pqXDDwZ1zidoDBTgLzR8CPSkWe6MCBKUYan7&usg=AOvVaw0AHLwD_0pUr1BSsiiRoIFc'] | 2 |
63,787,312 | <p>I want to emphasize here that the word "random" means not only <em>identically distributed</em>, but also <em>independent of everything else</em> (including independent of any other choice).</p>
<p>There are numerous "randomness tests" available, including tests that estimate <em>p-values</em> from running various statistical probes, as well as tests that estimate <em>min-entropy</em>, which is roughly a minimum "compressibility" level of a bit sequence and the most relevant entropy measure for "secure random number generators". There are also various "<a href="https://peteroupc.github.io/randextract.html" rel="nofollow noreferrer">randomness extractors</a>", such as the von Neumann and Peres extractors, that could give you an idea on how much "randomness" you can extract from a bit sequence. However, all these tests and methods can only be more reliable on the first part of this definition of randomness ("identically distributed") than on the second part ("independent").</p>
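<p>As a concrete illustration, the von Neumann extractor mentioned above fits in a few lines: it debiases an i.i.d. biased coin by reading bits in pairs (01 → 0, 10 → 1, 00/11 → discard). A minimal sketch:</p>
<pre><code>def von_neumann(bits):
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:          # 01 or 10: keep the first bit
            out.append(a)
    return out              # 00 and 11 pairs are discarded

print(von_neumann([0, 1, 1, 1, 1, 0, 0, 0]))  # -> [0, 1]
</code></pre>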
<p>In general, there is no algorithm that can tell, <em>from a sequence of numbers alone</em>, whether the process generated them in an independent and identically distributed way, without knowledge on what that process is. Thus, for example, although you can tell that a given sequence of bits has more zeros than ones, you can't tell whether those bits—</p>
<ul>
<li>Were truly generated independently of any other choice, or</li>
<li>form part of an extremely long periodic sequence that is only "locally random", or</li>
<li>were simply reused from another process, or</li>
<li>were produced in some other way,</li>
</ul>
<p>...without more information on the process. As one important example, the process of a person choosing a password is rarely "random" in this sense since passwords tend to contain familiar words or names, among other reasons.</p>
<p>Also I should discuss the article added to your question in 2019. That article dealt with the task of sampling from the distribution of bit strings generated by pseudorandom quantum circuits, and doing so with a low rate of error (a task specifically designed to be exponentially easier for quantum computers than for classical computers), rather than the task of "verifying" whether a particular sequence of bits (taken out of its context) was generated "at random" in the sense given in this answer. There is an explanation on what exactly this "task" is in a <a href="https://arxiv.org/pdf/2007.07872.pdf" rel="nofollow noreferrer">July 2020 paper</a>.</p> | 2020-09-08 04:54:46.370000+00:00 | 2021-07-22 16:26:50.997000+00:00 | 2021-07-22 16:26:50.997000+00:00 | null | 1,474,382 | <p>What is the best algorithm to take a long sequence of integers (say 100,000 of them) and return a measurement of how random the sequence is?</p>
<p>The function should return a single result, say 0 if the sequence is not at all random, up to, say, 1 if perfectly random. It can give something in-between if the sequence is somewhat random, e.g. 0.95 might be a reasonably random sequence, whereas 0.50 might have some non-random parts and some random parts.</p>
<p>If I were to pass the first 100,000 digits of Pi to the function, it should give a number very close to 1. If I passed the sequence 1, 2, ... 100,000 to it, it should return 0.</p>
<p>This way I can easily take 30 sequences of numbers, identify how random each one is, and return information about their relative randomness.</p>
<p>Is there such an animal?</p>
<p>…..</p>
<p>Update 24-Sep-2019: <a href="https://www.theverge.com/2019/9/23/20879485/google-quantum-supremacy-qubits-nasa" rel="noreferrer">Google may have just ushered in an era of quantum supremacy</a> says:</p>
<blockquote>
<p>"Google’s quantum computer was reportedly able to solve a calculation — proving the randomness of numbers produced by a random number generator — in 3 minutes and 20 seconds that would take the world’s fastest traditional supercomputer, Summit, around 10,000 years. This effectively means that the calculation cannot be performed by a traditional computer, making Google the first to demonstrate quantum supremacy."</p>
</blockquote>
<p>So obviously there is an algorithm to "prove" randomness. Does anyone know what it is? Could this algorithm also provide a measure of randomness?</p> | 2009-09-24 21:56:05.773000+00:00 | 2022-07-19 00:22:15.093000+00:00 | 2019-09-25 01:04:40.203000+00:00 | algorithm|random | ['https://peteroupc.github.io/randextract.html', 'https://arxiv.org/pdf/2007.07872.pdf'] | 2 |
70,547,371 | <p>There is <a href="https://remove.bg" rel="nofollow noreferrer">remove.bg</a>.<br>
It is the easiest tool to remove backgrounds automatically.<br>
Sample code: https://www.remove.bg/api#sample-code<br>
Screenshots:
<a href="https://i.stack.imgur.com/VINyB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VINyB.png" alt="Picture" /></a>
<a href="https://i.stack.imgur.com/ChX7P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ChX7P.png" alt="picture2" /></a>
Here is an example of doing it in Python:</p>
<pre><code># Requires "requests" to be installed (see python-requests.org)
import requests
response = requests.post(
    'https://api.remove.bg/v1.0/removebg',
    files={'image_file': open('/path/to/file.jpg', 'rb')},
    data={'size': 'auto'},
    headers={'X-Api-Key': 'INSERT_YOUR_API_KEY_HERE'},
)
if response.status_code == requests.codes.ok:
    with open('no-bg.png', 'wb') as out:
        out.write(response.content)
else:
    print("Error:", response.status_code, response.text)
</code></pre>
<p>Since you are using <code>python</code>,
this wrapper, <a href="https://github.com/brilam/remove-bg" rel="nofollow noreferrer">https://github.com/brilam/remove-bg</a> by @brilam, will help you.<br>
It is a Python API wrapper for removing backgrounds from pictures using remove.bg's API.
remove.bg also offers a <code>go</code>-based command line tool: https://github.com/remove-bg/go</p>
<hr />
<blockquote>
<p>In addition to that, there are many other open-source libraries for Python on GitHub for your reference: <br> <a href="https://github.com/topics/background-removal?l=python" rel="nofollow noreferrer">https://github.com/topics/background-removal?l=python</a><br></p>
</blockquote>
<blockquote>
<p>Also, you can try<br>
<a href="https://github.com/danielgatis/rembg" rel="nofollow noreferrer">https://github.com/danielgatis/rembg</a></p>
</blockquote>
<blockquote>
<p>Or refer to this research paper for background: <a href="https://arxiv.org/pdf/2005.09007.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2005.09007.pdf</a></p>
</blockquote> | 2022-01-01 06:58:16.687000+00:00 | 2022-01-03 05:28:01.990000+00:00 | 2022-01-03 05:28:01.990000+00:00 | null | 70,516,978 | <p>I am using Pixellib library in Python to detect a person and change its background, as shown in their example <a href="https://pixellib.readthedocs.io/en/latest/change_image_bg.html" rel="nofollow noreferrer">here</a>.</p>
<p>It works flawlessly, but it takes huge processing power on my laptop and, coupled with their large (~150 MB) pascalvoc model, rendering an image takes approx. 4-5 s.</p>
<p>I need to be able to do the same via a mobile phone app, so this certainly cannot be run on a user's mobile. The alternative is to run it in the cloud and return the processed image. This is costly if user requests increase, and it will still have noticeable lag in the user's app.</p>
<p>So, how do achieve this? Apps like <a href="https://www.canva.com/pro/background-remover/" rel="nofollow noreferrer">Canva Pro</a> seem to do this seamlessly in an app <a href="https://static-cse.canva.com/video/753559/02_CANVA_ProFeatures_BackgroundRemover.mp4" rel="nofollow noreferrer">fairly quickly</a>. In fact, there are many other 'free' apps on Play store claiming to do the same.</p>
<p>Thus, is there a better way to run Pixellib to make it more performant? Or is there any other library that can provide similar (or better) output and can be run on a user's mobile?</p> | 2021-12-29 08:37:32.410000+00:00 | 2022-08-18 09:14:54.547000+00:00 | 2022-08-18 09:14:54.547000+00:00 | python|tensorflow|image-processing|pixellib | ['https://remove.bg', 'https://i.stack.imgur.com/VINyB.png', 'https://i.stack.imgur.com/ChX7P.png', 'https://github.com/brilam/remove-bg', 'https://github.com/topics/background-removal?l=python', 'https://github.com/danielgatis/rembg', 'https://arxiv.org/pdf/2005.09007.pdf'] | 7
55,603,701 | <p>The <a href="https://arxiv.org/pdf/1704.03155v2.pdf" rel="nofollow noreferrer">paper</a> contains a diagram of the output format. Instead of specifying the box in the usual way, it is specified as a set of distances (up, right, down, and left) from an offset (x, y), in addition to an angle A, the amount the box has rotated counterclockwise.
<a href="https://i.stack.imgur.com/syHjM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/syHjM.png" alt="image from Arxiv paper"></a></p>
<p>Note that the <code>scores</code> and <code>geometry</code> are indexed by <code>y, x</code>, opposite of any logic below <code>offset</code> calculation. Therefore, to get the geometry components of a highest scoring <code>y, x</code>:</p>
<pre><code>high_scores_yx = np.where(scores[0][0] >= np.max(scores[0][0]))
y, x = high_scores_yx[0][0], high_scores_yx[1][0]
h_upper, w_right, h_lower, w_left, A = geometry[0,:,y,x]
</code></pre>
<p>The code uses <code>offset</code> to store the offset of the lower-right corner of the rectangle. Since it's the lower-right, it only needs <code>w_right</code> and <code>h_lower</code>, which in the code are <code>x1_data</code> and <code>x2_data</code>, respectively.</p>
<p><a href="https://i.stack.imgur.com/HWSMl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HWSMl.png" alt="box_diagram"></a></p>
<p>The location of the lower-right corner, with respect to the original offset <code>offsetX, offsetY</code>, depends on the angle of rotation. Below, the dotted lines show the axes' orientation. The components to get from the original offset to the lower-right offset are labelled in violet (horizontal) and purple (vertical). Note that the <code>sin(A) * w_right</code> component is <em>subtracted</em> because <code>y</code> gets bigger as you go lower in this coordinate system.</p>
<p><a href="https://i.stack.imgur.com/JhCqU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JhCqU.png" alt="rotation_diagram"></a></p>
<p>So that explains</p>
<pre><code>offset = ([offsetX + cosA * x1_data[x] + sinA * x2_data[x], offsetY - sinA * x1_data[x] + cosA * x2_data[x]])
</code></pre>
<p>Next: <code>p1</code> and <code>p3</code> are the lower-left and upper-right corners of the rectangle, respectively, with rotation taken into account. <code>center</code> is just the average of these two points.</p>
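<p>For illustration, here is a minimal sketch (my own, with made-up numbers) of how one detection tuple - <code>(center, (w, h), angle in degrees)</code>, as produced by <code>decode</code> - can be turned into its four corner points with <code>cv2.boxPoints</code>:</p>
<pre><code>import cv2
import numpy as np

# One detection in the format produced by decode():
# ((center_x, center_y), (width, height), clockwise angle in degrees).
# The numbers here are made up, purely for illustration.
detection = ((120.0, 45.0), (80.0, 20.0), -15.0)

corners = cv2.boxPoints(detection)   # 4x2 array of the rotated rectangle's corners
corners = corners.astype(int)        # integer pixel coordinates for drawing
# e.g. cv2.polylines(image, [corners], True, (0, 255, 0), 2)
</code></pre>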
<p>Finally, <code>-1*angle * 180.0 / math.pi</code> converts the original counterclockwise angle in radians into a clockwise-based angle in degrees (so the final output angle should be negative for objects rotated counterclockwise). This is for compatibility with the CV2 <code>boxPoints</code> method, used in:</p>
<p><a href="https://github.com/opencv/opencv/blob/7fb70e170154d064ef12d8fec61c0ae70812ce3d/samples/dnn/text_detection.py" rel="nofollow noreferrer">https://github.com/opencv/opencv/blob/7fb70e170154d064ef12d8fec61c0ae70812ce3d/samples/dnn/text_detection.py</a></p> | 2019-04-10 02:03:42.877000+00:00 | 2019-04-10 02:03:42.877000+00:00 | null | null | 55,583,306 | <p>I'm trying to use the EAST model in OpenCV to detect text in images. I'm successfuly getting the output after I run an image through a network but I'm having a hard time understanding how the decode function I use works. I know that I get 5 numbers as output from the model and I think it's the distances from a point to the top, bottom, left and right sides of the rectangle, respectively, and the angle of rotation at the end. I'm not sure what the decode function does to get the bounding box for the text region.</p>
<p>I know why the offset is multiplied by 4 (it's shrunk by 4 when run through the model). I know why h and w are what they are. I'm not sure about anything after that.</p>
<p>scores are the confidence scores for each region;
geometry holds the geometry values for each region (the 5 numbers I mentioned);
scoreThresh is just a threshold for the non-maximum suppression.</p>
<pre><code>def decode(scores, geometry, scoreThresh):
detections = []
confidences = []
############ CHECK DIMENSIONS AND SHAPES OF geometry AND scores ############
assert len(scores.shape) == 4, "Incorrect dimensions of scores"
assert len(geometry.shape) == 4, "Incorrect dimensions of geometry"
assert scores.shape[0] == 1, "Invalid dimensions of scores"
assert geometry.shape[0] == 1, "Invalid dimensions of geometry"
assert scores.shape[1] == 1, "Invalid dimensions of scores"
assert geometry.shape[1] == 5, "Invalid dimensions of geometry"
assert scores.shape[2] == geometry.shape[2], "Invalid dimensions of scores and geometry"
assert scores.shape[3] == geometry.shape[3], "Invalid dimensions of scores and geometry"
height = scores.shape[2]
width = scores.shape[3]
for y in range(0, height):
# Extract data from scores
scoresData = scores[0][0][y]
x0_data = geometry[0][0][y]
x1_data = geometry[0][1][y]
x2_data = geometry[0][2][y]
x3_data = geometry[0][3][y]
anglesData = geometry[0][4][y]
for x in range(0, width):
score = scoresData[x]
# If score is lower than threshold score, move to next x
if(score < scoreThresh):
continue
# Calculate offset
offsetX = x * 4.0
offsetY = y * 4.0
angle = anglesData[x]
# Calculate cos and sin of angle
cosA = math.cos(angle)
sinA = math.sin(angle)
h = x0_data[x] + x2_data[x]
w = x1_data[x] + x3_data[x]
# Calculate offset
offset = ([offsetX + cosA * x1_data[x] + sinA * x2_data[x], offsetY - sinA * x1_data[x] + cosA * x2_data[x]])
# Find points for rectangle
p1 = (-sinA * h + offset[0], -cosA * h + offset[1])
p3 = (-cosA * w + offset[0], sinA * w + offset[1])
center = (0.5*(p1[0]+p3[0]), 0.5*(p1[1]+p3[1]))
detections.append((center, (w,h), -1*angle * 180.0 / math.pi))
confidences.append(float(score))
# Return detections and confidences
return [detections, confidences]
</code></pre> | 2019-04-08 23:52:22.197000+00:00 | 2019-09-24 14:08:58.383000+00:00 | null | python|opencv|deep-learning | ['https://arxiv.org/pdf/1704.03155v2.pdf', 'https://i.stack.imgur.com/syHjM.png', 'https://i.stack.imgur.com/HWSMl.png', 'https://i.stack.imgur.com/JhCqU.png', 'https://github.com/opencv/opencv/blob/7fb70e170154d064ef12d8fec61c0ae70812ce3d/samples/dnn/text_detection.py'] | 5 |
65,155,106 | <p>The linked definitions generally agree. The best one is in the <a href="https://arxiv.org/pdf/1706.03059.pdf" rel="nofollow noreferrer">article</a>.</p>
<ul>
<li>"Depthwise" (not a very intuitive name since depth is not involved) - is a series of regular 2d convolutions, just applied to layers of the data separately.</li>
<li>"Pointwise" is the same as <code>Conv2d</code> with a 1x1 kernel.</li>
</ul>
<p>I suggest a few corrections to your <code>SeparableConv2d</code> class:</p>
<ul>
<li>no need to use the depth parameter - it is the same as out_channels</li>
<li>I set padding to 1 to ensure same output size with <code>kernel=(3,3)</code>. if kernel size is different - adjust padding accordingly, using same principles as with regular Conv2d. Your example class <code>Net()</code> is no longer needed - padding is done in <code>SeparableConv2d</code>.</li>
</ul>
<p>This is the updated code, should be similar to <code>tf.keras.layers.SeparableConv2D</code> implementation:</p>
<pre><code>class SeparableConv2d(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, bias=False):
super(SeparableConv2d, self).__init__()
self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=kernel_size,
groups=in_channels, bias=bias, padding=1)
self.pointwise = nn.Conv2d(in_channels, out_channels,
kernel_size=1, bias=bias)
def forward(self, x):
out = self.depthwise(x)
out = self.pointwise(out)
return out
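
# --- my addition, not part of the original answer: a quick shape check ---
# With kernel_size=3 and padding=1 the spatial size should be preserved,
# matching tf.keras.layers.SeparableConv2D with padding='same'.
import torch
block = SeparableConv2d(in_channels=32, out_channels=64, kernel_size=3)
x = torch.randn(1, 32, 28, 28)
print(block(x).shape)  # expected: torch.Size([1, 64, 28, 28])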
</code></pre> | 2020-12-05 08:27:10.093000+00:00 | 2020-12-07 08:52:52.630000+00:00 | 2020-12-07 08:52:52.630000+00:00 | null | 65,154,182 | <h2>Main objective</h2>
<p>PyTorch equivalent for SeparableConv2D with <code>padding = 'same'</code>:</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.layers import SeparableConv2D
x = SeparableConv2D(64, (1, 16), use_bias = False, padding = 'same')(x)
</code></pre>
<h2>What is the PyTorch equivalent for SeparableConv2D?</h2>
<p>This <a href="https://discuss.pytorch.org/t/depthwise-and-separable-convolutions-in-pytorch/7315" rel="nofollow noreferrer">source</a> says:</p>
<blockquote>
<p>If groups = nInputPlane, kernel=(K, 1), (and before is a Conv2d layer with groups=1 and kernel=(1, K)), then it is separable.</p>
</blockquote>
<p>While this <a href="https://www.programmersought.com/article/1344745736/" rel="nofollow noreferrer">source</a> says:</p>
<blockquote>
<p>Its core idea is to break down a complete convolution into a two-step calculation, Depthwise Convolution and Pointwise Convolution.</p>
</blockquote>
<p>This is my attempt:</p>
<pre class="lang-py prettyprint-override"><code>class SeparableConv2d(nn.Module):
def __init__(self, in_channels, out_channels, depth, kernel_size, bias=False):
super(SeparableConv2d, self).__init__()
self.depthwise = nn.Conv2d(in_channels, out_channels*depth, kernel_size=kernel_size, groups=in_channels, bias=bias)
self.pointwise = nn.Conv2d(out_channels*depth, out_channels, kernel_size=1, bias=bias)
def forward(self, x):
out = self.depthwise(x)
out = self.pointwise(out)
return out
</code></pre>
<p>Is this correct? Is this equivalent to <code>tensorflow.keras.layers.SeparableConv2D</code>?</p>
<h2>What about <code>padding = 'same'</code>?</h2>
<p>How to ensure that my input and output size is the same while doing this?</p>
<p>My attempt:</p>
<pre><code>x = F.pad(x, (8, 7, 0, 0), )
</code></pre>
<p>Because the kernel size is <code>(1,16)</code>, I added left and right padding, 8 and 7 respectively. Is this the right way (and best way) to achieve <code>padding = 'same'</code>? How can I place this inside my <code>SeparableConv2d</code> class, and calculate on the fly given the input data dimension size?</p>
<h2>All together</h2>
<pre class="lang-py prettyprint-override"><code>class SeparableConv2d(nn.Module):
def __init__(self, in_channels, out_channels, depth, kernel_size, bias=False):
super(SeparableConv2d, self).__init__()
self.depthwise = nn.Conv2d(in_channels, out_channels*depth, kernel_size=kernel_size, groups=in_channels, bias=bias)
self.pointwise = nn.Conv2d(out_channels*depth, out_channels, kernel_size=1, bias=bias)
def forward(self, x):
out = self.depthwise(x)
out = self.pointwise(out)
return out
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.separable_conv = SeparableConv2d(
in_channels=32,
out_channels=64,
depth=1,
kernel_size=(1,16)
)
def forward(self, x):
x = F.pad(x, (8, 7, 0, 0), )
x = self.separable_conv(x)
return x
</code></pre>
<p>Any problem with these codes?</p> | 2020-12-05 05:47:40.500000+00:00 | 2020-12-07 08:52:52.630000+00:00 | 2020-12-05 08:18:07.930000+00:00 | python|tensorflow|pytorch|conv-neural-network|convolution | ['https://arxiv.org/pdf/1706.03059.pdf'] | 1 |
3,114,467 | <p>From your description, Neo4j should indeed be a very good match to the problems you are facing. For example, the relationship type support should prove useful here. There's an <a href="http://www.mail-archive.com/[email protected]/info.html" rel="nofollow noreferrer">active community</a>, which increases chances this graphdb will survive into the future. It has also been in production for a long time.</p>
<p>The <a href="http://wiki.neo4j.org/content/PHP" rel="nofollow noreferrer">PHP side of Neo4j</a> isn't that shiny so far, but I think the <a href="http://wiki.neo4j.org/content/Getting_Started_REST" rel="nofollow noreferrer">REST API</a> opens up for some interesting scenarios. There's a <a href="http://github.com/onewheelgood/Neo4J-REST-PHP-API-client" rel="nofollow noreferrer">PHP REST client</a> (quick intro <a href="http://onewheeledbicycle.com/2010/06/01/getting-started-with-neo4j-rest-api-and-php/" rel="nofollow noreferrer">here</a>) being developed. Then there's an <a href="http://github.com/coffeesnake/neo4j-php-wrapper" rel="nofollow noreferrer">experiment</a> with the PHP/Java bridge (I haven't tried that one myself).</p>
<p>Note that some of your requirements are simply put very hard problems, which can't be easily solved using any technology. For example, finding the maximum depth may be an extremely expensive operation depending on the layout of the graph. In some cases it can work out well to take a bigger hit on inserts and store for instance the "children count" on each node.</p>
<p>Regarding RDBMS, I've struggled with similar issues in a PHP/MySQL based system. Using stored procedures helped out regarding structuring the project, but performance actually got a little worse (this was at the time stored procedures were a new feature in MySQL). PostgreSQL performs better with complex queries in my experience, but writing real graph queries for it isn't really possible (read <a href="http://www.scribd.com/doc/29591188/The-Graph-Traversal-Pattern" rel="nofollow noreferrer">here</a> and <a href="http://arxiv.org/abs/1006.2361" rel="nofollow noreferrer">here</a> for why this is so!)</p>
<p>Disclaimer: I'm part of the Neo4j team</p> | 2010-06-24 22:49:26.980000+00:00 | 2010-06-24 22:49:26.980000+00:00 | null | null | 3,113,063 | <p>I have a site written in PHP. It currently uses MySQL for all of its database needs (I'm open to additional DB technologies).</p>
<p>The system's content is interrelated. These relationships can be represented as a graph where vertices are pieces of content and edges are the relationships. I need to be able to traverse that graph. In particular I need to be able to:</p>
<ul>
<li>Get the child count at a given depth (e.g. how many grandchildrean does an item have)</li>
<li>Get the cumulative child count at a given depth (e.g. how many children and grandchildren does an item have)</li>
<li>Get the maximum depth for a given root (e.g. what is the longest path from this item)</li>
<li>Get the children at a given depth (e.g. who are the grandchildren of this item)</li>
<li>Get the parents at a given depth (e.g. who are the grandparents of this item)</li>
<li>Look up which statuses (such as "hidden" or "locked") have been inherited from parents.</li>
</ul>
<p>Because it is a graph on a dynamic system and not a tree or traditional hierarchy, there are some intricacies that I <em>think</em> rule out the usual SQL-based tricks (e.g. Adjacency List and Path Enumeration).</p>
<p><strong>The Main Intricacies:</strong></p>
<ul>
<li><p>Content can have more than one child.</p></li>
<li><p>Content can have more than one parent.</p></li>
<li><p>An item's relationship graph might look different for each user. For instance, certain content might be hidden for one person but not another.</p></li>
<li><p>Items can appear more than once on a graph tree, and can appear at different path lengths (e.g. item 50 could be an immediate child while also being a 3rd generation child).</p></li>
<li><p>Graphs can contain hundreds of thousands of items.</p></li>
</ul>
<p><strong>Some Additional Intricacies:</strong></p>
<ul>
<li><p>Different types of content can be related (as in, a poll could be related to a forum post, or a user could be related to a community)</p></li>
<li><p>There are several different types of relationship (as in, parent/child relationship, ownership relationship, peer relationship)</p></li>
<li><p>Depending on the type of relationship, permissions and restrictions may or may not be passed from parent to child (e.g. if a parent is hidden, the child will be hidden as well, but if a peer item is hidden that status isn't passed along)</p></li>
</ul>
<p><strong>My Naive (slow) "Solutions"</strong></p>
<p>Currently I am taking the naive approach using SQL. I have a single "Relationships" table with these columns:</p>
<pre><code>item1ID (int)
item1TypeID (int)
item2ID (int)
item2TypeID (int)
relationshipTypeID (int)
</code></pre>
<p>In PHP, I dynamically generate queries full of inner self joins to look up the maximum depth, and then once that is figured out I generate a single query which traverses the hierarchy and retrieves whatever information I need. This is already too slow, even with proper indexing.</p>
<p>My second naive approach was going to be moving that traversal and depth lookup into stored procedures. I have no idea if this would actually create a significant speed improvement. I was also thinking of incorporating some sort of caching mechanism so I could avoid looking up maximum depths as often, but that seems like it simply avoiding the real problem.</p>
<p><strong>My Question</strong></p>
<p>There has to be a better way. What is it? I know there are a lot of questions and answers already on StackOverflow regarding the issue of hierarchical information in SQL, but this is not quite hierarchy -- it is a full blown graph.</p>
<p>Since I have strong models I can mix in another DB technology to handle the relationship side of things without ruining the existing code base. I have been looking into NoSQL solutions but I know practically nothing about them. I have also heard of "Graph Databases" (such as Neo4J) which, based on the name and the various powerpoint slides I've seen, sounds like exactly what I need. However, I don't know which ones are actually robust enough to stick around or which ones would play well with PHP.</p>
<p>Help me StackOverflow, you're my only hope.</p> | 2010-06-24 18:58:43.017000+00:00 | 2010-06-24 22:49:26.980000+00:00 | null | php|mysql|graph|nosql | ['http://www.mail-archive.com/[email protected]/info.html', 'http://wiki.neo4j.org/content/PHP', 'http://wiki.neo4j.org/content/Getting_Started_REST', 'http://github.com/onewheelgood/Neo4J-REST-PHP-API-client', 'http://onewheeledbicycle.com/2010/06/01/getting-started-with-neo4j-rest-api-and-php/', 'http://github.com/coffeesnake/neo4j-php-wrapper', 'http://www.scribd.com/doc/29591188/The-Graph-Traversal-Pattern', 'http://arxiv.org/abs/1006.2361'] | 8 |
42,795,954 | <p>An interesting read on such questions is Bengio's 2012 paper <a href="https://arxiv.org/pdf/1206.5533.pdf" rel="nofollow noreferrer">Practical Recommendations for Gradient-Based Training of Deep Architectures</a>.</p>
<p>There is a section about online learning where the distribution of training data is unknown. I quote from the original paper</p>
<blockquote>
<p>It means that online learners, when given a stream of non-repetitive training data, really optimize (maybe not in the optimal way, i.e., using a first-order gradient technique) what we really care about: generalization error.</p>
</blockquote>
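<p>To see this effect on your own data, a quick experiment could look like the following sketch (placeholder data; plug in your own model where indicated):</p>
<pre><code>import numpy as np

# Placeholder data - substitute your real training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 28 * 28))
y = rng.integers(0, 10, size=1000)

# Stream 1: non-uniform - all 0s first, then all 1s, and so on.
order_sorted = np.argsort(y, kind="stable")
# Stream 2: uniform - shuffled, so every batch mixes the classes.
order_shuffled = rng.permutation(len(y))

for name, order in [("sorted", order_sorted), ("shuffled", order_shuffled)]:
    for start in range(0, len(y), 32):
        xb = X[order[start:start + 32]]
        yb = y[order[start:start + 32]]
        # model.train_on_batch(xb, yb)   # <- train your model here
    print(name, "stream processed")
    # ...then compare validation accuracy between the two runs
</code></pre>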
<p>The best practice though to figure out how your dataset behaves under different testing scenarios would be to try them both and get experimental results of how the distribution of the training data affects your generalization error. </p> | 2017-03-14 20:28:04.973000+00:00 | 2017-03-14 20:28:04.973000+00:00 | null | null | 42,787,719 | <p>This is probably a newbie question but I'm trying to get my head around how training on small batches works.</p>
<p><strong>Scenario -</strong> </p>
<p>For the mnist classification problem, let's say that we have a model with appropriate hyerparameters that allow training on 0-9 digits. If we feed it with a small batches of uniform distribution of inputs (that have more or less same numbers of all digits in each batch), it'll learn to classify as expected.</p>
<p>Now, imagine that instead of a uniform distribution, we trained the model on images containing only 1s so that the weights are adjusted until it works perfectly for 1s. And then we start training on images that contain only 2s. Note that only the inputs have changed, the model and everything else has stayed the same.</p>
<p><strong>Question</strong> -</p>
<p>What does the training exclusively on 2s after the model was already trained exclusively on 1s do? Will it keep adjusting the weights till it has forgotten (so to say) all about 1s and is now classifying on 2s? Or will it still adjust the weights in a way that it remembers both 1s and 2s?</p>
<p>In other words, must each batch contain a uniform distribution of different classifications? Does retraining a trained model in Tensorflow overwrite previous trainings? If yes, if it is not possible to create small (< 256) batches that are sufficiently uniform, does it make sense to train on very large (>= 500-2000) batch sizes?</p> | 2017-03-14 13:45:17.667000+00:00 | 2017-03-14 20:28:04.973000+00:00 | null | tensorflow | ['https://arxiv.org/pdf/1206.5533.pdf'] | 1 |
68,813,711 | <p>I am the author of imputeTS; currently it is not possible to get multiple imputations using imputeTS.
(I have it in mind for future updates, but this will be a bigger effort, which will certainly take some time.)</p>
<p>If you desperately need multiple imputations you can use the following workaround (without using imputeTS):</p>
<p>Use <strong>mice</strong> or another multiple imputation package for cross-sectional/non-time series data. Since you only have one variable or you don't have time information added to your dataset, you have to create additional time variables from your dataset.</p>
<p>So you add lags and leads, season indicators, moving averages, ... as additional variables to your dataset. This way <strong>mice</strong> will also work for time series data and give you multiple imputations (see also <a href="https://arxiv.org/pdf/1510.03924.pdf" rel="nofollow noreferrer">this paper</a> from page 8 onward). But be careful: you will get multiple imputations, but the overall model might not be as good as when you use dedicated time series methods. With this approach you use ML methods for time series, but not every ML model is good for time series, and you have to do a good job modeling your time series features into the additional variables you add to the dataset.</p>
Any guidance/directions about the possibilities of doing that would be greatly appreciated.</p>
<p>Also, I would like to know ideas about checking for the missing mechanism (MCAR, MAR, MNAR), particularly for a univariate time series.</p> | 2021-07-28 23:09:17.797000+00:00 | 2021-08-17 08:01:56.567000+00:00 | null | imputets | ['https://arxiv.org/pdf/1510.03924.pdf'] | 1 |
61,373,129 | <p><strong>Don’t Use With Dropout</strong></p>
<p>Batch normalization offers some regularization effect, reducing generalization error, perhaps no longer requiring the use of <a href="https://machinelearningmastery.com/dropout-for-regularizing-deep-neural-networks/" rel="nofollow noreferrer">dropout for regularization</a>.</p>
<blockquote>
<p>Removing Dropout from Modified BN-Inception speeds up training, without increasing overfitting.</p>
</blockquote>
<p>— Batch Normalization: <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015</a>.</p>
<p>Further, it may not be a good idea to use batch normalization and dropout in the same network.</p>
<p>The reason is that the statistics used to normalize the activations of the prior layer may become noisy given the random dropping out of nodes during the dropout procedure.</p>
<blockquote>
<p>Batch normalization also sometimes reduces generalization error and allows dropout to be omitted, due to the noise in the estimate of the statistics used to normalize each variable.</p>
</blockquote>
<p>— Page 425, <a href="https://www.deeplearningbook.org/contents/guidelines.html" rel="nofollow noreferrer">Deep Learning</a>, 2016.</p>
<p>Source - <a href="https://machinelearningmastery.com/batch-normalization-for-training-of-deep-neural-networks/" rel="nofollow noreferrer">machinelearningmastery.com - batch normalization</a></p> | 2020-04-22 19:04:12.083000+00:00 | 2021-08-08 18:39:55.267000+00:00 | 2021-08-08 18:39:55.267000+00:00 | null | 60,043,592 | <p>I wonder if it is a problem to use BatchNormalization when there are only 2 convolutional layers in a CNN.
Can this have adverse effects on classification performance? Now I don't mean the training time, but really the accuracy. Is my network overloaded with unnecessary layers? I want to train the network with a small data set.</p>
<pre><code>model = Sequential()
model.add(Conv2D(32, kernel_size=(3,3), input_shape=(28,28,1), padding = 'same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, kernel_size=(3,3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer="Adam", loss='categorical_crossentropy', metrics=['accuracy'])
</code></pre>
<p>Many thanks.</p> | 2020-02-03 16:44:48.843000+00:00 | 2021-08-08 18:39:55.267000+00:00 | null | conv-neural-network|batch-normalization|dropout|max-pooling | ['https://machinelearningmastery.com/dropout-for-regularizing-deep-neural-networks/', 'https://arxiv.org/abs/1502.03167', 'https://www.deeplearningbook.org/contents/guidelines.html', 'https://machinelearningmastery.com/batch-normalization-for-training-of-deep-neural-networks/'] | 4 |
47,849,835 | <p>I would refer you to <a href="https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf" rel="nofollow noreferrer">the original paper</a> by Kaiming He at al.</p>
<p>In sections 3.1-3.2, they define "identity" shortcuts as <code>y = F(x, W) + x</code>, where <code>W</code> are the trainable parameters, <strong>for any residual mapping</strong> <code>F</code> to be learned. It is important that the residual mapping contains a <em>non-linearity</em>, otherwise the whole construction is one sophisticated linear layer. But the number of linearities is not limited. </p>
<p>For example, <a href="https://arxiv.org/pdf/1611.05431.pdf" rel="nofollow noreferrer">ResNeXt network</a> creates identity shortcuts around a stack of only convolutional layers (see the figure below). So there aren't <em>any</em> dense layers in the residual block.</p>
<p><a href="https://i.stack.imgur.com/A5zqg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A5zqg.png" alt="res-next-blocks"></a></p>
<p>The general answer is, thus: yes, it would work. However, in a particular neural network, reducing two dense layers to one may be a bad idea, because anyway the residual block must be flexible enough to learn the residual function. So remember to validate any design you come up with.</p> | 2017-12-16 20:50:08.757000+00:00 | 2017-12-18 13:19:18.433000+00:00 | 2017-12-18 13:19:18.433000+00:00 | null | 47,484,159 | <p>Standard in ResNets is to skip 2 linearities.
Would skipping only one work as well?</p> | 2017-11-25 08:52:18.797000+00:00 | 2017-12-18 13:19:18.433000+00:00 | 2017-12-18 13:10:28.023000+00:00 | machine-learning|neural-network|deep-learning|deep-residual-networks | ['https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf', 'https://arxiv.org/pdf/1611.05431.pdf', 'https://i.stack.imgur.com/A5zqg.png'] | 3 |
58,463,200 | <p>tl;dr: By following the paper mentioned in the comments, a C# implementation of a solver (below) handles the case of 500 randomly distributed stations in about 14 ms on an aging laptop, so in particular, this handles the 100 station case easily, and is orders of magnitude faster than using a MIP solver, as suggested in the comments.</p>
<p>Typically, the gas station problem (which we should really start calling the charging station problem, but that's a different story) assumes that the starting fuel amount is 0: The more general case may be reduced to the 0 case by adding a new starting station with free fuel at a distance to your initial starting point that causes the car to reach your initial starting point with a tank containing your given amount of fuel. This can be done without ruining the general complexity of the solution below.</p>
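<p>Concretely, here is a small Python sketch of that reduction (my own illustration, using the same distance-equals-fuel convention as the solver below; <code>U</code> is the tank capacity and <code>f</code> the initial fuel):</p>
<pre><code>def add_virtual_start(prices, distances, U, f):
    """Reduce 'start with f liters' to 'start with 0 liters': prepend a free
    station U - f before the real start, so filling the tank there (at cost 0)
    gets the car to the real start with exactly f liters left."""
    gap = U - f   # assumes f <= U
    return [0.0] + prices, [0.0] + [d + gap for d in distances]
</code></pre>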
<p>Noting this, the problem boils down to that described in <a href="http://www.cs.umd.edu/users/samir/grant/gas-j.pdf" rel="nofollow noreferrer"><em>To Fill or not to Fill: The Gas Station Problem</em></a>, as <a href="https://stackoverflow.com/questions/58289424/gas-station-problem-cheapest-and-least-amount-of-stations/58463200#comment102944110_58289424">noted by @PySeeker</a> in the comments. In particular, $O(N \log N)$ seems optimistic. In the paper, the relevant theorem handles your case in $O(\Delta N^2 \log N)$, where $\Delta$ is minimum number of stops required (which you can easily precompute in linear time if necessary). Another paper, <em><a href="https://arxiv.org/abs/1706.00195" rel="nofollow noreferrer">A fast algorithm for the gas station problem</a></em>, describes how to solve the problem in $O(\Delta N^2 + N^2 \log N)$, but let's just focus on the former paper here.</p>
<p>Its Theorem 2.2 solves the problem for a fixed $\Delta$, where you're really only interested in the lowest possible one. Since their recurrence is set up to solve the problem for increasing $\Delta$, this amounts to simply halting the algorithm once $A(s, \Delta, 0)$, in the notation of the paper, becomes finite.</p>
<p>Note also that compared to the general problem which handles general graphs whose edge weights form a metric (a requirement noted in the second of the above-mentioned papers but for some reason not in the first one), your situation is simpler with vertices $0, \dots, N - 1$ and distances $d_{uv} = d[v] - d[u]$.</p>
<p>One thing to be a bit careful when implementing the algorithm is that while the description in the paper is fine, the pseudo-code is rather buggy/lacking (cf. e.g. <a href="https://cs.stackexchange.com/q/115930/108166">this question</a>). Below we implement the various fixes needed to get the algorithm up and running, as well as a some amount of indexing to help improve performance.</p>
<p>You mention in your edit that besides the value of the optimal solution, you would also like to get the actual paths taken. The algorithm below only outputs the value, i.e. $A(0, \Delta, 0)$, but by keeping track of the argmin in a separate table whenever the table of values is updated, you'll immediately get the desired path as well. This is completely analogous to how you would implement e.g. Dijkstra's algorithm.</p>
<p>You don't specify a language in the question, so I took the liberty of writing this in C#; the code is very C'y, so it should be straightforward to port it to Java if necessary (s/List/ArrayList/g). The notation follows the paper, so let me simply refer to that for comments/documentation (but let me also apologize that without the paper at hand, the implementation is likely impossible to read).</p>
<p>The solution doesn't go all the way: As mentioned above, a different algorithm with better complexity exists, and it would be natural to try that one as well; it's not particularly complicated. Moreover, the implementation at hand has a natural performance optimization that's not included: It's not necessary to grow the table for all $q$; for example, the source vertex $u = 0$ will depend only on the previous row through <code>R[0]</code>, so by precomputing the minimal value of $\Delta$, we can avoid some redundant calculation.</p>
<pre class="lang-cs prettyprint-override"><code>private static double Solve(double[] c, double[] d, double U)
{
int n = d.Length;
int t = n - 1;
var R = new int[n][];
var indep = new double[n][];
var GV = new List<List<double>>();
var GVar = new List<Dictionary<int, int>>();
for (int u = 0; u < n; u++)
{
var l = new List<int>();
for (int v = u + 1; v < n; v++)
{
if (d[v] - d[u] <= U)
l.Add(v);
else break;
}
R[u] = l.ToArray();
indep[u] = new double[l.Count];
}
for (int u = 0; u < n; u++)
{
var l = new List<double> { 0 };
var gvar = new Dictionary<int, int>();
int i = 1;
for (int w = 0; w < u; w++)
{
if (c[w] < c[u] && d[u] - d[w] <= U)
{
l.Add(U - (d[u] - d[w]));
gvar[w] = i++;
}
}
GV.Add(l);
GVar.Add(gvar);
}
int q = 0;
var Cq1 = new double[n][];
var Cq2 = new double[n][];
for (int i = 0; i < n; i++)
{
Cq1[i] = new double[GV[i].Count];
Cq2[i] = new double[GV[i].Count];
for (int j = 0; j < GV[i].Count; j++)
{
Cq1[i][j] = double.PositiveInfinity;
Cq2[i][j] = double.PositiveInfinity;
}
}
var toggle = true;
while (true)
{
var Cq = toggle ? Cq1 : Cq2;
var prev = !toggle ? Cq1 : Cq2;
toggle = !toggle;
for (int i = 0; i < n; i++)
{
for (int j = 0; j < GV[i].Count; j++)
Cq[i][j] = double.PositiveInfinity;
}
for (int u = 0; u < n; u++)
{
Grow(u, q, t, c, d, U, R[u], GV[u], q == 0 ? null : prev, Cq, indep[u], GVar);
if (u == 0 && !double.IsPositiveInfinity(Cq[u][0]))
return Cq[u][0];
}
q++;
}
}
private static void Grow(int u, int q, int t, double[] c, double[] d, double U,
int[] r, List<double> gv, double[][] prev, double[][] ret, double[] indep,
List<Dictionary<int, int>> GVar)
{
double cost = c[u];
if (q == 0 || u == t)
{
for (int i = 0; i < gv.Count; i++)
{
var g = gv[i];
if (q == 0 && g <= d[t] - d[u] && d[t] - d[u] <= U)
ret[u][i] = (d[t] - d[u] - g) * cost;
}
return;
}
for (var i = 0; i < r.Length; i++)
{
var v = r[i];
indep[i] = c[v] <= cost ? prev[v][0] + (d[v] - d[u]) * cost : prev[v][GVar[v][u]] + U * cost;
}
Array.Sort(indep, r);
var j = 0;
var w = r[j];
for (int gi = 0; gi < gv.Count; gi++)
{
var g = gv[gi];
while (g > d[w] - d[u] && c[w] <= cost)
{
j++;
if (j == r.Length) return;
w = r[j];
}
ret[u][gi] = indep[j] - g * cost;
}
}
</code></pre>
<p>Example usage, and benchmark on a 500 station case:</p>
<pre class="lang-cs prettyprint-override"><code>static void Main(string[] args)
{
var rng = new Random();
var sw = new Stopwatch();
for (int k = 0; k < 100; k++)
{
int n = 500;
var prices = Enumerable.Range(1, n).Select(_ => rng.NextDouble()).ToArray();
var distances = Enumerable.Range(1, n).Select(_ => rng.NextDouble() * n).OrderBy(x => x).ToArray();
var capacity = 15;
sw.Start();
var result = Solve(prices, distances, capacity);
sw.Stop();
var time = sw.Elapsed;
Console.WriteLine($"{time} {result}");
sw.Reset();
}
}
</code></pre> | 2019-10-19 11:27:14.910000+00:00 | 2020-01-22 17:01:30.053000+00:00 | 2020-01-22 17:01:30.053000+00:00 | null | 58,289,424 | <p>I am working on a problem that consists of the following: You are driving a car with a certain fuel usage
m (in our example we will take 8l/100km) and you are driving a straight road of length x (example: 1000km). The car starts out with a certain amount of fuel f (example: 22l). Your car has a fuel tank of size g (example: 55l) and there are gas stations (which have a price per liter) plotted along the straight (e.g. 100km (1.45$/l), 400km (1.40$/l) and 900km (1.22$/l)). The aim of the algorithm I'm having a hard time creating is: with the least amount of stops (so not the cheapest route, but the one with the least stops at gas stations), find the cheapest way and tell the user how many liters he has to tank at which gas station and how much it will cost.</p>
<p>Currently, using recursion and for loops (presumably O(n^2)), I've created an algorithm that can solve some problems up to a certain complexity; it starts to struggle when there are about 100 gas stations.</p>
<p>How my algorithm works:</p>
<ul>
<li>Go to the tank stations that are available from the start (the 22l in the example)</li>
<li>Go to each tank station and list the tank stations (or the end) in range when having a full tank (since the car can fuel up at the tank station, you can have a full tank). I save this in a list so it's not calculated twice.</li>
<li>Then I loop over every one of those reachable tank stations and recurse. I save the smallest number of stops required, rinse and repeat, and voilà - I get the smallest answer, which (in our example) is stopping at 100, fueling 10.00 liters and paying 14.5$, and then stopping at 400, fueling 48 liters and paying 67.20$.</li>
</ul>
<p>The problems I still have:</p>
<ul>
<li><p>How (is it even possible?) can I reduce the complexity to O(N log N) or linear, so that everything (even if there are 100+ gas stations) can be checked? At the moment the recursive methods go down to 10+ recursions within recursions, which makes anything above 100 gas stations pretty much unsolvable for this algorithm.</p></li>
<li><p>At the moment my algorithm only fuels up as much as it requires to reach the next gas station or the end: What's the best way to handle the problem if the first gas station is cheaper than the second and you can fuel up "n liters more" so you buy less liters at the second gas station. Since in the ideal case you have 0 liters left at the end of the trip.</p></li>
</ul>
<p>Extra notes:</p>
<ul>
<li>Arriving at a gas station with 0 liters of fuel counts as having arrived. </li>
<li>EDIT: All paths of the same price and least amount of stations must be found.</li>
</ul>
<p>My current code (snippet) I think that the methods names are self explanitory, add comment if they are not., </p>
<pre><code> void findRoutes(List<GasStation> reachableStations, List<GasStation> previousStations) {
int currentSteps = previousStations.size();
if (currentSteps > leastSteps) {
return;
}
// Reached the end (reachableStations will be null if it can reach the end)
if (reachableStations == null) {
// less steps
if (currentSteps < leastSteps) {
routes.clear();
routes.add(previousStations);
leastSteps = previousStations.size();
return;
} else {
// same amount of steps
routes.add(previousStations);
return;
}
}
// would be too many steps
if (currentSteps + 1 > leastSteps) {
return;
}
// Check those further away so we get a smaller step amount quicker
Collections.reverse(reachableStations);
for (GasStation reachableStation : reachableStations) {
List<GasStation> newPrevious = new LinkedList<>(previousStations);
newPrevious.add(reachableStation);
findRoutes(reachableStation.getReachableGasStations(), newPrevious);
}
}
</code></pre> | 2019-10-08 15:22:32.980000+00:00 | 2020-01-22 17:01:30.053000+00:00 | 2019-10-09 10:40:18.677000+00:00 | algorithm|optimization|mathematical-optimization|shortest-path|operations-research | ['http://www.cs.umd.edu/users/samir/grant/gas-j.pdf', 'https://stackoverflow.com/questions/58289424/gas-station-problem-cheapest-and-least-amount-of-stations/58463200#comment102944110_58289424', 'https://arxiv.org/abs/1706.00195', 'https://cs.stackexchange.com/q/115930/108166'] | 4 |
48,497,614 | <p>The problem you described is a pretty well-known multi-label classification problem. Instead of assigning a single label from a predefined set - you are deciding for each label separately whether to assign it to a given image.</p>
<p>In case of a <code>keras</code> setup - you could either build a vector of length <code>nb_of_classes</code> with <code>sigmoid</code> activation (the model is then trained using <code>binary_crossentropy</code>) or set up multiple outputs (recommended if each label has multiple decisions to make - like predicting a class and some other value) for each class.</p>
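<p>As a minimal sketch of the first option (my own illustration - the layer sizes and class count are placeholders, not from the question):</p>
<pre><code>from keras import layers, models

nb_of_classes = 20   # e.g. 'Cars', 'People', ... - placeholder value

model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(128, 128, 3)),
    layers.GlobalAveragePooling2D(),
    # one independent yes/no decision per label:
    layers.Dense(nb_of_classes, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Targets are multi-hot vectors, e.g. [1, 0, 1, 0, ...] for an image
# that contains both a car and a person.
</code></pre>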
<p>To answer your questions:</p>
<ol>
<li><p>From my experience (and knowing how usual loss functions work) if you set up training for only one class - in the ideal scenario, this would lead to assigning 50%-50% (in case of two ground truth classes), 33%-33%-33% (in case of three ground truth classes), etc. As you may see - this might make problems e.g. with setting a threshold for classification. I'd personally choose the strategy with a separate output with <code>sigmoid</code> per class- remember - that having multiple pieces of information about the image should in general lead to better model performance. </p></li>
<li><p>As I mentioned earlier - providing multi-classes might help, as you are providing, e.g., implicit class correlations and resolving class conflicts when multiple classes are assigned.</p></li>
<li><p><a href="https://arxiv.org/pdf/1406.5726.pdf" rel="nofollow noreferrer">Here</a> you have a nice paper about your case.</p></li>
</ol> | 2018-01-29 09:04:14.220000+00:00 | 2018-01-29 09:04:14.220000+00:00 | null | null | 48,493,689 | <p>I built a CNN for a multi-label classification, i.e. predicting multiple labels per image.</p>
<p>I noticed that ImageNet and many of the other dataset actually include a set of examples per label. The way they structured the data is such that given a label, there is a list of examples for that label. Namely:
label -> list of images. Also Keras, which I'm using, supports a data structure of a folder per label, and in each folder a list of images as examples fo the label.</p>
<p>The problem I'm concerned about is that many images may actually have multiple labels in them. For example, if I'm classifying general objects a single folder named 'Cars' will have images of cars, but some images of cars will also have people in them (and may hinder the results on the class 'People').</p>
<p>My first question:
1) Can this (i.e. single label per image in ground truth) reduce the potential accuracy of the network?</p>
<p>If this is the case, I thought of instead creating a dataset of the form:
image1,{list of its labels}
image2,{list of its labels}
etc</p>
<p>2) Will such a structure produce better results?</p>
<p>3) What is a good academic paper about this?</p> | 2018-01-29 02:44:27.310000+00:00 | 2018-01-29 09:04:14.220000+00:00 | 2018-01-29 08:15:21.533000+00:00 | neural-network|deep-learning|keras|training-data|imagenet | ['https://arxiv.org/pdf/1406.5726.pdf'] | 1 |
55,011,213 | <p>Nobody really knows why certain architectures work; that is still a topic of ongoing discussion (see, e.g., <a href="https://arxiv.org/abs/1706.05394" rel="nofollow noreferrer">this paper</a>).</p>
<p>Finding architectures that work is mostly trial and error, and adopting or modifying existing architectures that seem to work well for related tasks and dataset sizes.</p>
<p>I would refer you to <a href="https://www.deeplearningbook.org" rel="nofollow noreferrer">Goodfellow, Bengio, and Courville's book</a>, it is a great resource to get started with machine learning and deep learning in particular.</p> | 2019-03-05 20:40:14.817000+00:00 | 2019-03-05 20:40:14.817000+00:00 | null | null | 55,010,607 | <p>I am new to machine learning and have spent some time learning python. I have started to learn TensorFlow and Keras for machine learning and I literally have no clue nor any understanding of the process to make the model. How do you know which models to use? which activation functions to use? The amount of layers and dimensions of the output space? </p>
<p>I've noticed most models were the Sequential type, and tend to have 3 layers, why is that? I couldn't find any resources that explain which to use, why we use them, and when. The best I could find was tensorflow's function details. Any elaboration or any resources to clarify would be greatly appreciated. </p>
<p>Thanks. </p> | 2019-03-05 19:59:16.610000+00:00 | 2019-03-06 11:34:01.223000+00:00 | 2019-03-06 11:34:01.223000+00:00 | python|python-3.x|tensorflow|keras|keras-layer | ['https://arxiv.org/abs/1706.05394', 'https://www.deeplearningbook.org'] | 2 |
65,944,333 | <p>I think that there is some confusion in some of the answers proposed because of the use of the word "model" in the question asked. If I am guessing correctly, you are referring to the fact that in K-fold cross-validation we learn K different predictors (or decision functions), which you call "models" (this is a bad idea because in machine learning we also do model selection, which is choosing between families of predictors, and this is something which can be done using cross-validation). Cross-validation is typically used for hyperparameter selection or to choose between different algorithms or different families of predictors. Once these are chosen, the most common approach is to relearn a predictor with the selected hyperparameters and algorithm from all the data.
However, if the loss function which is optimized is convex with respect to the predictor, then it is possible to simply average the different predictors obtained from each fold.
This is because for a convex risk, the risk of the average of the predictors is at most the average of the individual risks (by Jensen's inequality).</p>
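<p>As a small illustration (a scikit-learn sketch of my own; the data is synthetic, and the squared loss used by ridge regression is convex in the predictor):</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1., 2., 0., 0., -1.]) + rng.normal(scale=0.1, size=200)

coefs, intercepts = [], []
for train_idx, _ in KFold(n_splits=10).split(X):
    m = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])
    coefs.append(m.coef_)
    intercepts.append(m.intercept_)

# The aggregated predictor: for a convex risk, its risk is at most
# the average of the per-fold risks.
w_avg, b_avg = np.mean(coefs, axis=0), np.mean(intercepts)
y_pred = X @ w_avg + b_avg
</code></pre>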
<p>The PROs and CONs of averaging (vs retraining) are as follows.</p>
<p>PROs: (1) In each fold, the evaluation that you made on the held-out set gives you an unbiased estimate of the risk for those very predictors that you have obtained, and for these estimates the only source of uncertainty is due to the estimate of the empirical risk (the average of the loss function) on the held-out data.
This should be contrasted with the logic which is used when you are retraining, which is that the cross-validation risk is an estimate of the "expected value of the risk of a given learning algorithm" (and not of a given predictor), so that if you relearn from data from the same distribution, you should on average have the same level of performance. But note that this holds only on average, and when retraining from the whole data this could go up or down. In other words, there is an additional source of uncertainty due to the fact that you will retrain.</p>
<p>(2) The hyperparameters have been selected exactly for the number of data points that you used in each fold to learn. If you relearn from the whole dataset, the optimal value of the hyperparameter is in theory and in practice not the same anymore, and so in the idea of retraining, you really cross your fingers and hope that the hyperparameters that you have chosen are still fine for your larger dataset.
If you used leave-one-out, there is obviously no concern there, and if the number of data points is large, with 10-fold CV you should be fine. But if you are learning from 25 data points with 5-fold CV, the hyperparameters for 20 points are not really the same as for 25 points...</p>
<p>CONs: Well, intuitively you don't benefit from training with all the data at once.</p>
<p>There is unfortunately very little thorough theory on this, but the following two papers (especially the second one) consider precisely the averaging or aggregation of the predictors from K-fold CV.</p>
<p>Jung, Y. (2016). Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models. International Journal of Mathematical and Computational Sciences, 10(1), 19-25.</p>
<p>Maillard, G., Arlot, S., & Lerasle, M. (2019). Aggregated Hold-Out. arXiv preprint arXiv:1909.04890.</p> | 2021-01-28 19:57:14.360000+00:00 | 2021-01-28 19:57:14.360000+00:00 | null | null | 37,994,608 | <p>I have a question regarding cross validation in Linear regression model.</p>
<p>From my understanding, in cross validation, we split the data into (say) 10 folds and train the data from 9 folds and the remaining folds we use for testing. We repeat this process until we test all of the folds, so that every folds are tested exactly once.</p>
<p>When we are training the model from 9 folds, should we not get a different model (may be slightly different from the model that we have created when using the whole dataset)? I know that we take an average of all the "n" performances.</p>
<p>But, what about the model? Shouldn't the resulting model also be taken as the average of all the "n" models? I see that the resulting model is same as the model which we created using whole of the dataset before cross-validation. If we are considering the overall model even after cross-validation (and not taking avg of all the models), then what's the point of calculating average performance from n different models (because they are trained from different folds of data and are supposed to be different, right?)</p>
<p>I apologize if my question is not clear or too funny.
Thanks for reading, though!</p> | 2016-06-23 14:31:19.560000+00:00 | 2021-01-28 19:57:14.360000+00:00 | null | linear-regression|cross-validation | [] | 0 |
41,916,066 | <p>You didn't say what architecture you're talking about. Since you said you want to classify images, I'm assuming it's a partly convolutional, partly fully connected network like AlexNet, GoogLeNet, etc. In general, the answer to your question depends on the network type you are working with. </p>
<p>If, for example, your network only contains convolutional units - that is to say, does not contain fully connected layers - it <em>can</em> be invariant to the input image's size. Such a network <em>could</em> process the input images and in turn return another image ("convolutional all the way"); you would have to make sure that the output matches what you expect, since you have to determine the loss in some way, of course.</p>
<p>If you are using fully connected units though, you're in for trouble: Here you have a fixed number of learned weights your network has to work with, so varying inputs would require a varying number of weights - and that's not possible.</p>
<p>If that is your problem, here's some things you can do:</p>
<ul>
<li>Don't care about squashing the images. A network might learn to make sense of the content anyway; do scale and perspective mean anything to the content anyway?</li>
<li>Center-crop the images to a specific size. If you fear you're losing data, do multiple crops and use these to augment your input data, so that the original image will be split into <code>N</code> different images of correct size.</li>
<li>Pad the images with a solid color to a squared size, then resize.</li>
<li>Do a combination of that.</li>
</ul>
<p>The padding option might introduce an additional error source to the network's prediction, as the network might (read: likely will) be biased to images that contain such a padded border.
If you need some ideas, have a look at the <a href="https://www.tensorflow.org/api_docs/python/tf/image" rel="noreferrer">Images</a> section of the TensorFlow documentation; there are pieces like <code>resize_image_with_crop_or_pad</code> that take away the bigger work.</p>
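<p>For instance, a minimal sketch (the file name is hypothetical; this is the TF1-era API referenced above, renamed <code>tf.image.resize_with_crop_or_pad</code> in TF2):</p>
<pre><code>import tensorflow as tf

# Hypothetical file name; pad/crop every image to a fixed 224x224.
image = tf.image.decode_jpeg(tf.read_file("example.jpg"), channels=3)
image = tf.image.resize_image_with_crop_or_pad(image, 224, 224)
</code></pre>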
<p>As for just not caring about squashing, <a href="https://github.com/tensorflow/models/blob/f98c5ded31d7da0c2d127c28b2c16f0307a368f0/slim/preprocessing/inception_preprocessing.py#L206-L216" rel="noreferrer">here's</a> a piece of the preprocessing pipeline of the famous Inception network:</p>
<pre class="lang-py prettyprint-override"><code># This resizing operation may distort the images because the aspect
# ratio is not respected. We select a resize method in a round robin
# fashion based on the thread number.
# Note that ResizeMethod contains 4 enumerated resizing methods.
# We select only 1 case for fast_mode bilinear.
num_resize_cases = 1 if fast_mode else 4
distorted_image = apply_with_random_selector(
distorted_image,
lambda x, method: tf.image.resize_images(x, [height, width], method=method),
num_cases=num_resize_cases)
</code></pre>
<p>They're totally aware of it and do it anyway.</p>
<p>Depending on how far you want or need to go, there actually is a paper <a href="https://arxiv.org/abs/1406.4729" rel="noreferrer">here</a> called <em>Spatial Pyramid Pooling in Deep Convolution Networks for Visual Recognition</em> that handles inputs of arbitrary sizes by processing them in a very special way.</p> | 2017-01-28 23:31:24.747000+00:00 | 2019-01-19 03:16:47.473000+00:00 | 2019-01-19 03:16:47.473000+00:00 | null | 41,907,598 | <p>I am trying to train my model which classifies images.
The problem I have is that they have different sizes. How should I format my images and/or model architecture?</p> | 2017-01-28 07:58:17.117000+00:00 | 2019-11-06 10:39:56.013000+00:00 | 2019-11-06 10:39:56.013000+00:00 | deep-learning | ['https://www.tensorflow.org/api_docs/python/tf/image', 'https://github.com/tensorflow/models/blob/f98c5ded31d7da0c2d127c28b2c16f0307a368f0/slim/preprocessing/inception_preprocessing.py#L206-L216', 'https://arxiv.org/abs/1406.4729'] | 3
53,832,503 | <p>In the original YOLO or <a href="https://arxiv.org/pdf/1506.02640.pdf" rel="nofollow noreferrer">YOLOv1</a>, the prediction was done without any assumption on the shape of the target objects. Let's say that the network tries to detect humans. We know that, generally, humans fit in a vertical rectangular box, rather than a square one. However, the original YOLO tried to detect humans with rectangular and square boxes with equal probability.</p>
<p>But this is not efficient and might decrease the prediction speed.
So in <a href="https://arxiv.org/pdf/1612.08242.pdf" rel="nofollow noreferrer">YOLOv2</a>, we put some assumptions on the shapes of the objects. These are Anchor Boxes. Usually we feed the anchor boxes to the network as a list of numbers, which is a series of pairs of widths and heights:</p>
<p>anchors = [0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828]</p>
<p>In the above example, (0.57273, 0.677385) represents a single anchor box, in which the two elements are width and height respectively. That is, this list defines 5 different anchor boxes. Note that these values are in units of the 13x13 output grid (one unit = one grid cell). For example, with YOLOv2's 416x416 input and 13x13 output, you can get the absolute pixel sizes by multiplying the anchors by the stride, 416/13 = 32.</p>
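<p>As a quick sketch of that conversion (assuming the standard 416x416 input, so the stride is 416/13 = 32):</p>
<pre><code>anchors = [0.57273, 0.677385, 1.87446, 2.06253, 3.33843,
           5.47434, 7.88282, 3.52778, 9.77052, 9.16828]

stride = 416 // 13   # 32 pixels per grid cell for a 416x416 input
for w, h in zip(anchors[0::2], anchors[1::2]):
    print(f"grid units: ({w:.2f}, {h:.2f}) -> pixels: ({w * stride:.0f}, {h * stride:.0f})")
</code></pre>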
<p>Using anchor boxes made the prediction a little bit faster. But the accuracy might decrease. <a href="https://arxiv.org/pdf/1612.08242.pdf" rel="nofollow noreferrer">The paper of YOLOv2</a> says:</p>
<blockquote>
<p>Using anchor boxes we get a small decrease in accuracy. YOLO only
predicts 98 boxes per image but with anchor boxes our model predicts
more than a thousand. Without anchor boxes our intermediate model gets
69.5 mAP with a recall of 81%. With anchor boxes our model gets 69.2 mAP with a recall of 88%. Even though the mAP decreases, the increase
in recall means that our model has more room to improve</p>
</blockquote> | 2018-12-18 11:56:33.557000+00:00 | 2019-01-09 15:59:15.423000+00:00 | 2019-01-09 15:59:15.423000+00:00 | null | 49,403,497 | <p>Algorithms like YOLO or R-CNN use the concept of anchor boxes for predicting objects. <a href="https://pjreddie.com/darknet/yolo/" rel="nofollow noreferrer">https://pjreddie.com/darknet/yolo/</a></p>
<p>The anchor boxes are trained on a specific dataset; the set for the COCO dataset is:</p>
<pre><code>anchors = 0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828
</code></pre>
<p>However, I don't understand how to interpret these anchor boxes. What does a pair of values like (0.57273, 0.677385) mean?</p>
64,623,695 | <p>Yes, Adam and AdamW weight decay are different.</p>
<blockquote>
<p>Loshchilov and Hutter pointed out in their paper (<a href="https://arxiv.org/abs/1711.05101" rel="noreferrer">Decoupled Weight Decay Regularization</a>) that the way weight decay is implemented in Adam in every library seems to be wrong, and proposed a simple way (which they call AdamW) to fix it.</p>
</blockquote>
<p>In Adam, the weight decay is usually implemented by adding <code>wd*w</code> (<code>wd</code> is weight decay here) to the gradients (Ist case), rather than actually subtracting from weights (IInd case).</p>
<pre><code># Ist: Adam weight decay implementation (L2 regularization)
final_loss = loss + wd * all_weights.pow(2).sum() / 2
# IInd: equivalent to this in SGD
w = w - lr * w.grad - lr * wd * w
</code></pre>
<blockquote>
<p>These methods are the same for vanilla SGD, but as soon as we add momentum, or use a more sophisticated optimizer like Adam, L2 regularization (first equation) and weight decay (second equation) become different.</p>
</blockquote>
<p>AdamW follows the second equation for weight decay.</p>
<p>In Adam</p>
<blockquote>
<p>weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)</p>
</blockquote>
<p>In AdamW</p>
<blockquote>
<p>weight_decay (float, optional) – weight decay coefficient (default: 1e-2)</p>
</blockquote>
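<p>In practice, the two optimizers are drop-in replacements for each other; only the decay semantics differ. A minimal sketch (assuming an existing <code>model</code>):</p>
<pre><code>import torch

# Adam: weight_decay is applied as an L2 penalty via the gradients
adam = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.01)

# AdamW: weight_decay is decoupled and subtracted from the weights directly
adamw = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)
</code></pre>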
<p>Read more on the <a href="https://www.fast.ai/2018/07/02/adam-weight-decay/" rel="noreferrer">fastai blog</a>.</p> | 2020-10-31 16:00:47.330000+00:00 | 2020-10-31 16:00:47.330000+00:00 | null | null | 64,621,585 | <p>Is there any difference between <code>torch.optim.Adam(weight_decay=0.01)</code> and <code>torch.optim.AdamW(weight_decay=0.01)</code>?</p>
<p>Link to the docs: <a href="https://pytorch.org/docs/stable/optim.html" rel="noreferrer">torch.optim</a></p> | 2020-10-31 12:11:46.883000+00:00 | 2021-07-29 12:44:05.260000+00:00 | 2021-07-29 12:44:05.260000+00:00 | pytorch | ['https://arxiv.org/abs/1711.05101', 'https://www.fast.ai/2018/07/02/adam-weight-decay/'] | 2 |
55,876,529 | <p>I think that depends on what you are going to do with these numbers after you have mapped them to qubits.</p>
<p>If you need to use 2ⁿ numbers to prepare a quantum state on n qubits that is a weighted superposition of the basis states, you can use the <a href="https://docs.microsoft.com/en-us/qsharp/api/qsharp/microsoft.quantum.preparation.preparearbitrarystate" rel="nofollow noreferrer">PrepareArbitraryState</a> operation, which does exactly that. Internally it implements the paper <a href="https://arxiv.org/abs/quant-ph/0406176" rel="nofollow noreferrer">Synthesis of Quantum Logic Circuits</a> by Shende, Bullock, and Markov.</p>
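<p>For intuition about the bookkeeping (plain Python, not Q#), here is a sketch with a hypothetical amplitude vector; the array has to be padded to a power-of-two length and normalized before it can be prepared as a state:</p>
<pre><code>import numpy as np

amps = np.array([0.0020909 + 0.0016696j, 0.5, 0.25j])  # hypothetical amplitudes
n_qubits = int(np.ceil(np.log2(len(amps))))            # 3 amplitudes -> 2 qubits
padded = np.zeros(2 ** n_qubits, dtype=complex)
padded[:len(amps)] = amps
state = padded / np.linalg.norm(padded)                # amplitudes must satisfy sum(|a|^2) = 1
</code></pre>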
<p>If you need to represent these numbers in a way that would allow you to read them out by measuring the qubits, you might have to do something like converting them to binary and storing each bit in a separate qubit.</p> | 2019-04-27 01:30:51.037000+00:00 | 2019-05-10 06:50:09.907000+00:00 | 2019-05-10 06:50:09.907000+00:00 | null | 55,790,551 | <p>In theory, the state of a qubit is defined by 2 complex numbers, following this formula:</p>
<p><img src="https://i.stack.imgur.com/fG1Q4.png" alt="Image for formula of qubit superposition state"></p>
<p>The rule is that the number of complex numbers needed to define the state of a set of qubits is equal to 2ⁿ, where n is the number of qubits used.</p>
<p>If I have an array of complex numbers, how can I map or assign each number to a qubit?</p>
<p>For instance:
I have this complex number: 0.0020908999722450972 + i*0.001669629942625761.
What would the state of a qubit be in this case?
Would I need more qubits to represent this number?</p>
58,952,250 | <p>The problem you are describing is <em>trajectory reconstruction for AIS/GPS data</em>. There are a number of papers on general trajectory reconstruction (see <a href="https://arxiv.org/pdf/1808.09297.pdf" rel="nofollow noreferrer">this</a> for example), but AIS data are quite specific.</p>
<p>The irregularity of AIS data is a well-known problem with no standard approach to deal with, as far as I know.
However, there are a handful of publications that try to deal with this issue. The problem of reconstruction is connected to the <em>trajectory prediction</em> problem, since the two share some of the same methods (the latter is more popular in the scientific community, I think).</p>
<p>Traditionally, AIS trajectory reconstruction is done using some physical models, which take into account the curvature of the earth and other factors, such as data noise (see examples <a href="https://www.sciencedirect.com/science/article/pii/S0029801815005582" rel="nofollow noreferrer">here</a>, <a href="https://arxiv.org/pdf/1607.03306.pdf" rel="nofollow noreferrer">here</a>, and <a href="https://www.researchgate.net/profile/Van_Nguyen33/publication/286512533_The_Interpolation_Method_for_the_missing_AIS_Data_of_Ship/links/5ac5ad5a458515798c305dea/The-Interpolation-Method-for-the-missing-AIS-Data-of-Ship.pdf" rel="nofollow noreferrer">here</a>).
A more recent approach uses <a href="https://arxiv.org/pdf/1806.03972.pdf" rel="nofollow noreferrer">LSTM neural networks</a>.</p>
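<p>As a concrete baseline, the simplest densification is plain linear interpolation over the timestamps (a minimal sketch with made-up fixes; it ignores earth curvature and noise, which is exactly what the physical models above account for):</p>
<pre><code>import numpy as np

t = np.array([0.0, 70.0, 190.0, 400.0])        # seconds since the first AIS fix
lat = np.array([54.10, 54.12, 54.17, 54.30])   # latitudes at those fixes

t_regular = np.arange(0.0, 400.0, 60.0)        # resample onto a regular 60 s grid
lat_regular = np.interp(t_regular, t, lat)     # the same applies to longitude
</code></pre>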
<p>I don't know much about GPS data, but I think the methods are very similar to the ones mentioned above (especially taking into account the fact that you probably want to deal with maritime data).</p> | 2019-11-20 10:35:09.930000+00:00 | 2019-11-20 10:35:09.930000+00:00 | null | null | 57,071,498 | <p>I would like to increase the density of my AIS or GPS data in order to carry out more precise analyses afterwards. During my research I came across different approaches like interpolation, filtering or imputation. With the first two approaches, there is no doubt that these can be used to approximate the points between two collected data points.
In the case of imputation (e.g. MICE), however, I have not yet found an approach in the literature for determining position data.</p>
<p>That's why I wanted to ask if anyone knew a paper dealing with this subject and whether it makes sense at all to determine further position data approximately by imputation.</p> | 2019-07-17 08:18:20.463000+00:00 | 2019-11-20 10:35:09.930000+00:00 | null | gps|position|imputation|ais | ['https://arxiv.org/pdf/1808.09297.pdf', 'https://www.sciencedirect.com/science/article/pii/S0029801815005582', 'https://arxiv.org/pdf/1607.03306.pdf', 'https://www.researchgate.net/profile/Van_Nguyen33/publication/286512533_The_Interpolation_Method_for_the_missing_AIS_Data_of_Ship/links/5ac5ad5a458515798c305dea/The-Interpolation-Method-for-the-missing-AIS-Data-of-Ship.pdf', 'https://arxiv.org/pdf/1806.03972.pdf'] | 5 |
72,436,831 | <p>I currently work on a similar topic, and here are my observations on what works.</p>
<p>Step 1: Use SLIC segmentation to get the superpixels of the image<br>
Step 2: A region adjacency graph can be built from the superpixel labels (the output is a networkx graph)<br>
Step 3: Encode any features that discriminate your graphs (just as with images), e.g. pixel intensities of the RGB channels can be embedded as node features using a vector, etc.<br></p>
<pre><code>import networkx as nx
import numpy as np
from scipy import ndimage as ndi
from skimage import segmentation
# Assumed helper (missing from the original snippet): connects the center
# pixel's label to each differing neighbor, per the "Elegant SciPy" RAG recipe.
def add_edge_filter(values, graph):
    center = values[len(values) // 2]
    for neighbor in values:
        if neighbor != center and not graph.has_edge(center, neighbor):
            graph.add_edge(center, neighbor)
    return 0.0  # generic_filter requires a return value; it is unused

# `fmri` is the input image array (placeholder name; single-channel here)
superpixels_labels = segmentation.slic(fmri, compactness=30, n_segments=72,
                                       multichannel=False) + 1

def build_rag(labels, image):
    g = nx.Graph()
    footprint = ndi.generate_binary_structure(labels.ndim, connectivity=1)
    _ = ndi.generic_filter(labels, add_edge_filter, footprint=footprint,
                           mode='nearest', extra_arguments=(g,))
    for n in g:  # accumulate per-superpixel color statistics as node features
        g.nodes[n]['total color'] = np.zeros(34, np.double)  # 34 channels in this data
        g.nodes[n]['pixel count'] = 0
    for index in np.ndindex(labels.shape):
        n = labels[index]
        g.nodes[n]['total color'] += image[index]
        g.nodes[n]['pixel count'] += 1
    return g

g = build_rag(superpixels_labels, fmri)  # the original snippet never called this
# to add node features from the image
nx.set_node_attributes(g, [0.2, 0.7, 0.5], "ndata")
g.nodes[1]["ndata"]
</code></pre>
<p><strong>Useful References</strong></p>
<ol>
<li><a href="https://arxiv.org/abs/2201.12633" rel="nofollow noreferrer">https://arxiv.org/abs/2201.12633</a></li>
<li><a href="https://graph-neural-networks.github.io/static/file/chapter20.pdf" rel="nofollow noreferrer">https://graph-neural-networks.github.io/static/file/chapter20.pdf</a></li>
<li><a href="https://distill.pub/2021/gnn-intro/" rel="nofollow noreferrer">https://distill.pub/2021/gnn-intro/</a></li>
</ol> | 2022-05-30 15:37:47.073000+00:00 | 2022-05-30 15:38:27.047000+00:00 | 2022-05-30 15:38:27.047000+00:00 | null | 69,960,805 | <p>I have a task about image classification using a Graph Neural Network. Can you give me some references for it? I have only found examples on the internet where a GCN is used for CSV data classification. Thanks :)</p>
69,960,847 | <p>Here is a survey of image-based Graph Neural Networks for classification - <a href="https://arxiv.org/abs/2106.06307" rel="nofollow noreferrer">https://arxiv.org/abs/2106.06307</a></p>
<p>and a tutorial - <a href="https://medium.com/@BorisAKnyazev/tutorial-on-graph-neural-networks-for-computer-vision-and-beyond-part-1-3d9fada3b80d" rel="nofollow noreferrer">https://medium.com/@BorisAKnyazev/tutorial-on-graph-neural-networks-for-computer-vision-and-beyond-part-1-3d9fada3b80d</a></p> | 2021-11-14 06:42:39.013000+00:00 | 2021-11-14 06:42:39.013000+00:00 | null | null | 69,960,805 | <p>I have a task about image classification using a Graph Neural Network. Can you give me some references for it? I have only found examples on the internet where a GCN is used for CSV data classification. Thanks :)</p>
52,831,714 | <p>This is not really a programming problem; it's way more complicated. What you want is called out-of-distribution (OOD) detection, where the classifier has a way to tell you that a sample does not come from the training distribution.</p>
<p>There are recent research papers that deal with this problem, such as <a href="https://arxiv.org/abs/1802.04865" rel="nofollow noreferrer">https://arxiv.org/abs/1802.04865</a> and <a href="https://arxiv.org/abs/1711.09325" rel="nofollow noreferrer">https://arxiv.org/abs/1711.09325</a></p>
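<p>For concreteness, the naive approach of thresholding the maximum softmax probability would look like the sketch below (assuming a Keras-style <code>model.predict</code>; <code>model</code> and <code>x</code> are placeholders); the next paragraph explains why this alone is not reliable:</p>
<pre><code>import numpy as np

logits = model.predict(x)                              # per-class scores from your classifier
shifted = logits - logits.max(axis=1, keepdims=True)   # subtract max for numerical stability
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
confidence = probs.max(axis=1)
reject = np.less(confidence, 0.9)  # naive cutoff; softmax is often overconfident on unseen data
</code></pre>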
<p>In general, you cannot use a model that has not been trained specifically for this; for example, the probabilities produced by a softmax classifier are not calibrated for this purpose, so thresholding these probabilities will not work at all.</p> | 2018-10-16 09:00:28.973000+00:00 | 2018-10-16 09:00:28.973000+00:00 | null | null | 52,831,038 | <p>I trained my CNN classifier (using TensorFlow) with 3 data categories (ID card, passport, bills).<br>
When I test it with images that belong to one of the 3 categories, it gives the right prediction. However, when I test it with an out-of-scope image (a car image, for example), it still gives a prediction (i.e. it predicts that the car belongs to the ID card category).</p>
<p><strong>Is there a way to make it display an error message instead of giving a wrong prediction?</strong></p> | 2018-10-16 08:24:54.757000+00:00 | 2021-12-06 15:17:14.410000+00:00 | 2018-11-19 10:34:39.133000+00:00 | tensorflow|classification|conv-neural-network | ['https://arxiv.org/abs/1802.04865', 'https://arxiv.org/abs/1711.09325'] | 2 |