a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
49,435,281 | <p>The <a href="https://arxiv.org/pdf/1803.02865.pdf" rel="nofollow noreferrer">WNGrad</a> paper
states it's inspired by batch (and weight) normalization. You should use the L2 norm with respect to the weight dimensions (don't sum it all) as <a href="https://i.stack.imgur.com/9Esjp.jpg" rel="nofollow noreferrer">shown in this algorithm</a></p> | 2018-03-22 17:47:07.373000+00:00 | 2018-03-22 18:43:02.890000+00:00 | 2018-03-22 18:43:02.890000+00:00 | null | 49,282,523 | <p>I'm trying to implement the WNGrad (technically WN-Adam, algorithm 4 in the paper) optimizer (<a href="https://arxiv.org/abs/1803.02865" rel="nofollow noreferrer">WNGrad</a>) in pytorch. I've never implemented an optimizer in pytorch before so I don't know if I've done it correctly (I started from the adam implementation). The optimizer does not make much progress and falls down like I would expect (bj values can only monotonically increase, which happens quickly so no progress is made) but I'm guessing I have a bug. Standard optimizers (Adam, SGD) work fine on the same model I'm trying to optimize.</p>
<p>Does this implementation look correct?</p>
<pre class="lang-python prettyprint-override"><code>from torch.optim import Optimizer
class WNAdam(Optimizer):
"""Implements WNAdam algorithm.
It has been proposed in `WNGrad: Learn the Learning Rate in Gradient Descent`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): learning rate (default: 0.1)
beta1 (float, optional): exponential smoothing coefficient for gradient.
When beta=0 this implements WNGrad.
.. _WNGrad\: Learn the Learning Rate in Gradient Descent:
https://arxiv.org/abs/1803.02865
"""
def __init__(self, params, lr=0.1, beta1=0.9):
if not 0.0 <= beta1 < 1.0:
raise ValueError("Invalid beta1 parameter: {}".format(beta1))
defaults = dict(lr=lr, beta1=beta1)
super().__init__(params, defaults)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Learning rate adjustment
state['bj'] = 1.0
exp_avg = state['exp_avg']
beta1 = group['beta1']
state['step'] += 1
state['bj'] += (group['lr']**2)/(state['bj'])*grad.pow(2).sum()
# update exponential moving average
exp_avg.mul_(beta1).add_(1 - beta1, grad)
bias_correction = 1 - beta1 ** state['step']
p.data.sub_(group['lr'] / state['bj'] / bias_correction, exp_avg)
return loss
</code></pre> | 2018-03-14 16:04:02.087000+00:00 | 2021-06-22 15:49:53.847000+00:00 | 2018-03-14 17:16:25.420000+00:00 | deep-learning|pytorch | ['https://arxiv.org/pdf/1803.02865.pdf', 'https://i.stack.imgur.com/9Esjp.jpg'] | 2 |
49,999,344 | <p>Regarding your first question (minus the anomaly detection part), Keras creator François Chollet gives some good hints in his (highly recommended) blog post <a href="https://blog.keras.io/building-autoencoders-in-keras.html" rel="nofollow noreferrer">Building Autoencoders in Keras</a>:</p>
<blockquote>
<p><strong>What are autoencoders good for?</strong></p>
<p>They are rarely used in practical applications. In 2012 they briefly found an application in greedy layer-wise pretraining for deep convolutional neural networks, but this quickly fell out of fashion as we started realizing that better random weight initialization schemes were sufficient for training deep networks from scratch. In 2014, batch normalization started allowing for even deeper networks, and from late 2015 we could train arbitrarily deep networks from scratch using residual learning.</p>
<p>[...]</p>
<p><strong>So what's the big deal with autoencoders?</strong></p>
<p>Their main claim to fame comes from being featured in many introductory machine learning classes available online. As a result, a lot of newcomers to the field absolutely love autoencoders and can't get enough of them. This is the reason why this tutorial exists!</p>
</blockquote>
<p><strong>UPDATE</strong></p>
<p>That said, there indeed seem to be some cases where autoencoders are used in practice for anomaly detection; here are some recent papers:</p>
<p><a href="https://arxiv.org/abs/1802.00187" rel="nofollow noreferrer">Clustering and Unsupervised Anomaly Detection with L2 Normalized Deep Auto-Encoder Representations</a></p>
<p><a href="https://arxiv.org/abs/1802.03903" rel="nofollow noreferrer">Unsupervised Anomaly Detection via Variational Auto-Encoder for Seasonal KPIs in Web Applications</a></p>
<p><a href="http://www.kdd.org/kdd2017/papers/view/anomaly-detection-with-robust-deep-auto-encoders" rel="nofollow noreferrer">Anomaly Detection with Robust Deep Auto-encoders</a></p>
<p>and blog posts:</p>
<p><a href="https://medium.com/@curiousily/credit-card-fraud-detection-using-autoencoders-in-keras-tensorflow-for-hackers-part-vii-20e0c85301bd" rel="nofollow noreferrer">Credit Card Fraud Detection using Autoencoders in Keras</a></p>
<p><a href="https://www.kaggle.com/imrandude/h2o-autoencoders-and-anomaly-detection-python" rel="nofollow noreferrer">H2O - Autoencoders and anomaly detection (Python)</a></p>
<p><a href="http://How%20Deep%20Learning%20Analytics%20Can%20Keep%20Your%20Data%20and%20Decisions%20in%20Line" rel="nofollow noreferrer">How Deep Learning Analytics Can Keep Your Data and Decisions in Line</a></p> | 2018-04-24 10:26:51.260000+00:00 | 2018-04-24 12:42:50.060000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 49,993,275 | <p>I have three questions about autoEncoders and i would really appreciate your help :</p>
<p>1- I have noticed that there is a lack of research papers on deep autoencoders (AE), although the concept is explained in plenty of tutorials and examples, and most of the tutorials claim that this model is powerful. Is there a reason for the lack of research papers published using AE, especially in anomaly or novelty detection?</p>
<p>2- in all the tutorials i have seen a threshold is manually set ( hard set ) for AutoEncoder to be as a decision boundary for Anomaly detection by testing several values and selecting the best one , is there another technique to select the Threshold value , in other words what are the different thresholding mechanisms that can automatically detect the threshold</p> | 2018-04-24 04:08:37.737000+00:00 | 2021-04-23 14:42:59.240000+00:00 | 2021-04-23 14:42:59.240000+00:00 | machine-learning|deep-learning|neural-network|autoencoder | ['https://blog.keras.io/building-autoencoders-in-keras.html', 'https://arxiv.org/abs/1802.00187', 'https://arxiv.org/abs/1802.03903', 'http://www.kdd.org/kdd2017/papers/view/anomaly-detection-with-robust-deep-auto-encoders', 'https://medium.com/@curiousily/credit-card-fraud-detection-using-autoencoders-in-keras-tensorflow-for-hackers-part-vii-20e0c85301bd', 'https://www.kaggle.com/imrandude/h2o-autoencoders-and-anomaly-detection-python', 'http://How%20Deep%20Learning%20Analytics%20Can%20Keep%20Your%20Data%20and%20Decisions%20in%20Line'] | 7 |
71,112,920 | <p>Follow these simple steps to insert a URL into your GitHub ReadMe file.</p>
<p>1.) Write a short text in square brackets to represent the clickable link.</p>
<p>2.) Write the URL next to it in parentheses.</p>
<p>Kindly follow the example below:</p>
<p><a href="https://arxiv.org/pdf/1805.08620.pdf" rel="nofollow noreferrer">Click here to read the paper</a></p>
<p><a href="https://i.stack.imgur.com/K4d5V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K4d5V.png" alt="enter image description here" /></a></p> | 2022-02-14 13:48:08.307000+00:00 | 2022-02-14 13:48:08.307000+00:00 | null | null | 71,106,542 | <p>I want to insert the URL link of a published paper in my Github ReadMe file. What are the possible ways to achieve this? Please, can anyone provide a suggestion? I want a situation where I can just click on the URL link and it will immediately direct me to the paper.</p>
<p>Your kind suggestions are welcome.</p> | 2022-02-14 02:25:29.890000+00:00 | 2022-02-14 13:48:08.307000+00:00 | null | github|github-actions|github-pages|github-api|github-for-windows | ['https://arxiv.org/pdf/1805.08620.pdf', 'https://i.stack.imgur.com/K4d5V.png'] | 2 |
51,536,080 | <p>Normalizing pixel values to range <code>[0..1]</code> (instead of <code>[0..255]</code>) is common practice not only in deep learning, but also in other domains of image-processing/computer-vision.<br>
This is mainly done since the native <code>uint8</code> pixel values are not easy to work with - <code>uint8</code> values easily overflow/underflow. Therefore, it is more convenient to convert pixel values to <code>float</code> type in range <code>[0..1]</code>.</p>
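<p>A minimal NumPy sketch of that conversion (the image here is made up for illustration):</p>
<pre><code>import numpy as np

# a hypothetical 8-bit RGB image
img_uint8 = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# uint8 arithmetic silently wraps around (overflows) rather than saturating
wrapped = img_uint8 + np.uint8(200)

# so convert to float in [0..1] before doing arithmetic on pixel values
img_float = img_uint8.astype(np.float32) / 255.0
print(img_float.min(), img_float.max())  # stays within [0.0, 1.0]
</code></pre>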
<p>Trying to cope with vanishing/exploding gradients in deep nets, there are many theoretical papers analyzing the distribution of activation values (see e.g., <a href="https://arxiv.org/abs/1502.01852" rel="nofollow noreferrer">this work</a>). These works usually assume a normal distribution of values - thus the scaling. You will also come across many nets that in addition to scaling the nets subtract "image mean" from the input.</p> | 2018-07-26 09:56:35.793000+00:00 | 2018-07-26 09:56:35.793000+00:00 | null | null | 51,510,745 | <p>The <code>caffe.io.load_image()</code> function call on a png, returns a numpy 3d array, with the rgb values as normalized floats in the 0-1 range instead of 0-255. </p>
<p>Is this:</p>
<ol>
<li>common practice when loading images into array like structures?</li>
<li>have something to do with how the caffe network layers uses the images?</li>
<li>something to do with how png files are stored? </li>
</ol>
<p>Thanks</p> | 2018-07-25 04:22:07.297000+00:00 | 2018-07-26 09:56:58.067000+00:00 | 2018-07-26 09:56:58.067000+00:00 | image-processing|neural-network|deep-learning|computer-vision|caffe | ['https://arxiv.org/abs/1502.01852'] | 1 |
68,029,886 | <p>You could use the EMNIST package that can be found here: <a href="https://pypi.org/project/emnist/" rel="nofollow noreferrer">https://pypi.org/project/emnist/</a></p>
<p>To load the dataset you first need to decide which of the six different datasets you would like to work with. Details in this paper: <a href="https://arxiv.org/pdf/1702.05373v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1702.05373v1.pdf</a></p>
<p>Let's say we want to use the byclass dataset:</p>
<pre><code>from emnist import extract_training_samples, extract_test_samples
x_train, y_train = extract_training_samples('byclass')
x_test, y_test = extract_test_samples('byclass')
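
# quick sanity check: x_* are arrays of 28x28 grayscale images, y_* integer class labels
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)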
</code></pre> | 2021-06-18 05:41:31.007000+00:00 | 2021-06-18 05:41:31.007000+00:00 | null | null | 47,854,014 | <p>In all the tutorials i've seen for tensorflow, they've used the MNIST dataset, i've understood the modelling but how do i load this dataset into tensorflow?
<a href="https://www.nist.gov/itl/iad/image-group/emnist-dataset" rel="nofollow noreferrer">https://www.nist.gov/itl/iad/image-group/emnist-dataset</a></p> | 2017-12-17 10:02:19.620000+00:00 | 2021-06-18 05:41:31.007000+00:00 | null | image-processing|tensorflow|tensorflow-datasets | ['https://pypi.org/project/emnist/', 'https://arxiv.org/pdf/1702.05373v1.pdf'] | 2 |
42,090,174 | <p>Convolutions can work on any image input size (which is big enough). However, if you have a fully connected layer at the end, this layer needs a fixed input size. Hence the complete network needs a fixed image input size.</p>
<p>However, you can remove the fully connected layer and just work with convolutional layers. You can make a convolutional layer at the end which has the same number of filters as you have classes. But you want one value for each class which indicates the probability of that class. Hence you apply a pooling filter over the complete remaining feature map. This pooling is hence "global" as it always is as big as necessary. In contrast, usual pooling layers have a fixed size (e.g. of 2x2 or 3x3).</p>
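<p>This is what Caffe's <code>global_pooling</code> flag is for: as far as I can tell it simply sets the pooling window to the full size of the incoming feature map. A sketch of such a layer (layer and blob names are placeholders):</p>
<pre><code>layer {
  name: "global_pool"
  type: "Pooling"
  bottom: "conv_final"
  top: "pool_final"
  pooling_param {
    pool: AVE            # average over the whole remaining feature map
    global_pooling: true
  }
}
</code></pre>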
<p>This is a general concept. You can also find global pooling in other libraries, e.g. <a href="http://lasagne.readthedocs.io/en/latest/modules/layers/pool.html#lasagne.layers.GlobalPoolLayer" rel="noreferrer">Lasagne</a>. If you want a good reference in literature, I recommend reading <a href="https://arxiv.org/pdf/1312.4400v3.pdf" rel="noreferrer">Network In Network</a>.</p> | 2017-02-07 12:40:58.163000+00:00 | 2017-02-07 12:57:07.963000+00:00 | 2017-02-07 12:57:07.963000+00:00 | null | 42,070,528 | <p>I recently found the "global_pooling" flag in the Pooling layer in caffe, however was unable to find sth about it in the documentation here (<a href="http://caffe.berkeleyvision.org/tutorial/layers/pooling.html" rel="noreferrer">Layer Catalogue)</a>
nor here (<a href="http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1PoolingLayer.html" rel="noreferrer">Pooling doxygen doc</a>) . </p>
<p>Is there an easy, straightforward example explanation of this in comparison to the normal Pool-Layer behaviour?</p> | 2017-02-06 14:45:51.793000+00:00 | 2018-07-11 05:56:32.657000+00:00 | 2018-07-11 05:56:32.657000+00:00 | image-processing|machine-learning|deep-learning|caffe|conv-neural-network | ['http://lasagne.readthedocs.io/en/latest/modules/layers/pool.html#lasagne.layers.GlobalPoolLayer', 'https://arxiv.org/pdf/1312.4400v3.pdf'] | 2
56,068,500 | <p>I spent quite some time trying this out and it seems the fastest and the least memory intensive approach is using lxml and iterparse, but making sure to free unneeded memory. In my example, parsing arXiv dump:</p>
<pre><code>from lxml import etree
context = etree.iterparse('path/to/file', events=('end',), tag='Record')
for event, element in context:
record_id = element.findtext('.//{http://arxiv.org/OAI/arXiv/}id')
created = element.findtext('.//{http://arxiv.org/OAI/arXiv/}created')
print(record_id, created)
# Free memory.
element.clear()
while element.getprevious() is not None:
del element.getparent()[0]
</code></pre>
<p>So <code>element.clear</code> is not enough, but also the removal of any links to previous elements.</p> | 2019-05-09 22:42:55.347000+00:00 | 2019-05-09 22:42:55.347000+00:00 | null | null | 324,214 | <p>I am currently running the following code based on Chapter 12.5 of the Python Cookbook:</p>
<pre><code>from xml.parsers import expat
class Element(object):
def __init__(self, name, attributes):
self.name = name
self.attributes = attributes
self.cdata = ''
self.children = []
def addChild(self, element):
self.children.append(element)
def getAttribute(self,key):
return self.attributes.get(key)
def getData(self):
return self.cdata
def getElements(self, name=''):
if name:
return [c for c in self.children if c.name == name]
else:
return list(self.children)
class Xml2Obj(object):
def __init__(self):
self.root = None
self.nodeStack = []
def StartElement(self, name, attributes):
element = Element(name.encode(), attributes)
if self.nodeStack:
parent = self.nodeStack[-1]
parent.addChild(element)
else:
self.root = element
self.nodeStack.append(element)
def EndElement(self, name):
self.nodeStack.pop()
def CharacterData(self,data):
if data.strip():
data = data.encode()
element = self.nodeStack[-1]
element.cdata += data
def Parse(self, filename):
Parser = expat.ParserCreate()
Parser.StartElementHandler = self.StartElement
Parser.EndElementHandler = self.EndElement
Parser.CharacterDataHandler = self.CharacterData
ParserStatus = Parser.Parse(open(filename).read(),1)
return self.root
</code></pre>
<p>I am working with XML documents of about 1 GB in size. Does anyone know a faster way to parse these?</p> | 2008-11-27 16:47:54.353000+00:00 | 2022-02-11 11:52:27.920000+00:00 | 2019-02-05 13:27:57.327000+00:00 | python|xml|performance|parsing | [] | 0 |
45,045,324 | <p>So, don't use pre-trained models. Not only will they be missing domain words, but even with the words that are shared, the senses of words as most used in 'news articles' or 'Twitter' may not match your domain. </p>
<p>It's not hard to train your own word-vectors, or other doc-vectors, using the domain of interest as your training data. </p>
<p>A followup paper to the original 'Paragraph Vectors' paper, "<a href="https://arxiv.org/abs/1507.07998" rel="nofollow noreferrer">Document Embedding With Paragraph Vectors</a>", specifically evaluates Paragraph Vectors (in the PV-DBOW variant) in a topic-sensitive way. For pairs of Wikipedia articles with the same editor-assigned 'category', it checks if PV-DBOW places that pair closer to each other than some randomly chosen third article. It performs a similar check on 886,000 Arxiv papers. </p>
<p>Even if you have a small dataset, you might be able to use a similar technique. And even if the exercise provides a small dataset, perhaps other public datasets with similar vocabularies can be used to thicken your model.</p>
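<p>Training your own vectors is only a few lines; as a rough sketch with gensim (the toy corpus and parameter values below are placeholders for your own tokenized domain texts):</p>
<pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# pretend these are tokenized documents from your domain
corpus = [["laser", "interferometry", "noise"],
          ["gravitational", "wave", "detector", "noise"],
          ["deep", "learning", "classifier"]]
docs = [TaggedDocument(words=toks, tags=[i]) for i, toks in enumerate(corpus)]

# dm=0 with dbow_words=1 gives PV-DBOW with word-vector training
# (older gensim versions name these parameters size/iter instead of vector_size/epochs)
model = Doc2Vec(docs, dm=0, dbow_words=1, vector_size=100, min_count=1, epochs=40)
vec = model.infer_vector(["detector", "noise"])
</code></pre>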
<p>(The PV-DBOW mode used in the above paper, adding word-training to doc-vector training, is analogous to the <code>Doc2Vec</code> class in the Python gensim library using options <code>dm=0, dbow_words=1</code>.)</p> | 2017-07-11 22:15:28.657000+00:00 | 2017-07-11 22:15:28.657000+00:00 | null | null | 45,026,220 | <p>I'm beginning to work on my ML course's project which is to classify a scientific text and label it as if its topic is "A" or not. The problem I'm having is that they have provided me with a limited data set. Usually, scientific texts make use of complex and irregular words which don't exist normally in pre-trained word2vec models like Google news or Twitter, and these words weigh a lot in terms of the meaning of the texts. So I was wondering, what could I do to use these pre-trained models and predict what the new words mean? </p> | 2017-07-11 05:41:23.187000+00:00 | 2017-07-11 22:15:28.657000+00:00 | null | machine-learning|classification|word2vec|text-classification|text-recognition | ['https://arxiv.org/abs/1507.07998'] | 1 |
66,460,729 | <p>It looks like <strong>causalnex</strong> doesn't directly support setting the CPD's manually, but you can look at the underlying code and see that it's using the <strong>pgmpy</strong> BayesianModel to simultaneously represent the structure and CPD's within a <strong>causalnex</strong> BayesianNetwork.</p>
<p>With that, you could add the CPD's you know via <a href="https://pgmpy.org/models.html#pgmpy.models.BayesianModel.BayesianModel.add_cpds" rel="nofollow noreferrer">add_cpds</a> instead of fitting them. To get at the <code>BayesianModel</code> object it would be: <code>bn._model</code>, where <code>bn</code> is your <code>causalnex.BayesianNetwork</code> object.</p>
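<p>As a rough sketch of the underlying pgmpy calls (node names and probabilities are invented for illustration):</p>
<pre><code>from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD

# expert-defined structure: A -> B, both binary
model = BayesianModel([('A', 'B')])

cpd_a = TabularCPD(variable='A', variable_card=2, values=[[0.6], [0.4]])
cpd_b = TabularCPD(variable='B', variable_card=2,
                   values=[[0.9, 0.2],   # P(B=0 | A=0), P(B=0 | A=1)
                           [0.1, 0.8]],  # P(B=1 | A=0), P(B=1 | A=1)
                   evidence=['A'], evidence_card=[2])

model.add_cpds(cpd_a, cpd_b)
print(model.check_model())  # True if the CPDs are consistent with the structure
</code></pre>
<p>With causalnex you would do the same thing against <code>bn._model</code> instead of building the <code>BayesianModel</code> yourself.</p>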
<p>I'm not sure if this would make you just want to use <strong>pgmpy</strong> instead of <strong>causalnex</strong> or not!! It seems like the big benefit from <strong>causalnex</strong> is its use of the <a href="https://arxiv.org/abs/1803.01422" rel="nofollow noreferrer">NOTEARS</a> algorithm, which helps you build the Weighted Adjacency Matrix for your Directed Graph. Then again, it also coordinates some plotting for you.</p>
<p>Also, an important note from the <a href="https://causalnex.readthedocs.io/en/latest/03_tutorial/03_tutorial.html#Fitting-the-Conditional-Distribution-of-the-Bayesian-Network" rel="nofollow noreferrer">docs</a> to remind you that it's not <em>really</em> continuous, but discretised/binned:</p>
<blockquote>
<p>Bayesian Networks in CausalNex support only discrete distributions.
Any continuous features, or features with a large number of
categories, should be discretised prior to fitting the Bayesian
Network. Models containing variables with many possible values will
typically be badly fit, and exhibit poor performance.</p>
</blockquote> | 2021-03-03 16:07:07.637000+00:00 | 2021-03-03 18:32:16.443000+00:00 | 2021-03-03 18:32:16.443000+00:00 | null | 66,352,988 | <p>Up to now, in causalnex package, I only encountered Bayesian networks that were constucted from
data. I want to know how to create my own network with my node parameters and CPDs from expertise. Anybody has some reference to it or an example?</p> | 2021-02-24 14:35:35.677000+00:00 | 2021-03-03 18:32:16.443000+00:00 | null | python|bayesian-networks|causality | ['https://pgmpy.org/models.html#pgmpy.models.BayesianModel.BayesianModel.add_cpds', 'https://arxiv.org/abs/1803.01422', 'https://causalnex.readthedocs.io/en/latest/03_tutorial/03_tutorial.html#Fitting-the-Conditional-Distribution-of-the-Bayesian-Network'] | 3 |
40,455,030 | <h2>What output shape does the Caffe deconvolution layer produce?</h2>
<p>For this colorization model in particular you can simply refer to page 24 of <a href="https://arxiv.org/pdf/1603.08511.pdf" rel="nofollow noreferrer">their paper</a> (which is linked in their GitHub page):</p>
<p><a href="https://i.stack.imgur.com/PkdxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PkdxN.png" alt="Colorization model architecture"></a></p>
<p>So basically the output shape of this deconvolution layer in the original model is [None, 56, 56, 128]. This is what you want to pass to Keras as output_shape. The only problem is as I mention in the section below, Keras doesn't really use this parameter to determine the output shape, so you need to run a dummy prediction to find what your other parameters need to be in order for you to get what you want.</p>
<p>More generally the <a href="https://github.com/BVLC/caffe/blob/master/src/caffe/layers/deconv_layer.cpp#L18" rel="nofollow noreferrer">Caffe source code for computing its Deconvolution layer output shape</a> is:</p>
<pre><code> const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;
const int output_dim = stride_data[i] * (input_dim - 1)
+ kernel_extent - 2 * pad_data[i];
</code></pre>
<p>Which with a dilation argument equal to 1 reduces to just:</p>
<pre><code> const int output_dim = stride_data[i] * (input_dim - 1)
+ kernel_shape_data[i] - 2 * pad_data[i];
</code></pre>
<p>Note that this matches the <a href="https://keras.io/layers/convolutional/#deconvolution2d" rel="nofollow noreferrer">Keras documentation</a> when the parameter <code>a</code> is zero:</p>
<blockquote>
<p>Formula for calculation of the output shape <a href="https://github.com/BVLC/caffe/blob/master/src/caffe/layers/deconv_layer.cpp#L18" rel="nofollow noreferrer">3</a>, <a href="https://keras.io/layers/convolutional/#deconvolution2d" rel="nofollow noreferrer">4</a>: o = s (i - 1) +
a + k - 2p</p>
</blockquote>
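<p>As a quick sanity check with the <code>conv8_1</code> layer from the question (stride 2, kernel 4, pad 1), and assuming its input feature map is 28x28 as in the paper's table: o = 2*(28 - 1) + 4 - 2*1 = 56, which matches the 56x56 spatial size above.</p>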
<h2>How to verify actual output shape with your Keras backend</h2>
<p>This is tricky, because the actual output shape depends on the backend implementation and configuration. Keras is currently unable to find it on its own. So you actually have to execute a prediction on some dummy input to find the actual output shape. Here's an example of how to do this from the Keras docs for Deconvolution2D:</p>
<pre><code>To pass the correct `output_shape` to this layer,
one could use a test model to predict and observe the actual output shape.
# Examples
```python
# apply a 3x3 transposed convolution with stride 1x1 and 3 output filters on a 12x12 image:
model = Sequential()
model.add(Deconvolution2D(3, 3, 3, output_shape=(None, 3, 14, 14), border_mode='valid', input_shape=(3, 12, 12)))
# Note that you will have to change the output_shape depending on the backend used.
# we can predict with the model and print the shape of the array.
dummy_input = np.ones((32, 3, 12, 12))
# For TensorFlow dummy_input = np.ones((32, 12, 12, 3))
preds = model.predict(dummy_input)
print(preds.shape)
# Theano GPU: (None, 3, 13, 13)
# Theano CPU: (None, 3, 14, 14)
# TensorFlow: (None, 14, 14, 3)
</code></pre>
<p>Reference: <a href="https://github.com/fchollet/keras/blob/master/keras/layers/convolutional.py#L507" rel="nofollow noreferrer">https://github.com/fchollet/keras/blob/master/keras/layers/convolutional.py#L507</a></p>
<p>Also, you might be curious to know why it is that the output_shape parameter apparently doesn't really define the output shape. According to the post <a href="https://stackoverflow.com/questions/39018767/deconvolution2d-layer-in-keras">Deconvolution2D layer in keras</a>, this is why:</p>
<blockquote>
<p>Back to Keras and how the above is implemented. Confusingly, the output_shape parameter is actually not used for determining the output shape of the layer, and instead they try to deduce it from the input, the kernel size and the stride, while assuming only valid output_shapes are supplied (though it's not checked in the code to be the case). The output_shape itself is only used as input to the backprop step. Thus, you must also specify the stride parameter (subsample in Keras) in order to get the desired result (which could've been determined by Keras from the given input shape, output shape and kernel size).</p>
</blockquote> | 2016-11-06 22:02:35.967000+00:00 | 2016-11-08 08:35:03.793000+00:00 | 2017-05-23 11:53:49.037000+00:00 | null | 40,453,494 | <p>I am implementing following <a href="https://github.com/richzhang/colorization/blob/master/models/colorization_deploy_v2.prototxt" rel="nofollow noreferrer">Colorization Model written in Caffe</a>. I am confused about my output_shape parameter to supply in Keras</p>
<pre><code>model.add(Deconvolution2D(256,4,4,border_mode='same',
output_shape=(None,3,14,14),subsample=(2,2),dim_ordering='th',name='deconv_8.1'))
</code></pre>
<p>I have added a dummy output_shape parameter. But how can I determine the output parameter? In caffe model the layer is defined as:</p>
<pre><code>layer {
name: "conv8_1"
type: "Deconvolution"
bottom: "conv7_3norm"
top: "conv8_1"
convolution_param {
num_output: 256
kernel_size: 4
pad: 1
dilation: 1
stride: 2
}
</code></pre>
<p>If I do not supply this parameter the code give parameter error but I can not understand what should I supply as output_shape</p>
<p>p.s. already asked on data science forum page with no response. may be due to small user base</p> | 2016-11-06 19:20:21.110000+00:00 | 2016-11-08 08:35:03.793000+00:00 | null | convolution|keras|deconvolution | ['https://arxiv.org/pdf/1603.08511.pdf', 'https://i.stack.imgur.com/PkdxN.png', 'https://github.com/BVLC/caffe/blob/master/src/caffe/layers/deconv_layer.cpp#L18', 'https://keras.io/layers/convolutional/#deconvolution2d', 'https://github.com/BVLC/caffe/blob/master/src/caffe/layers/deconv_layer.cpp#L18', 'https://keras.io/layers/convolutional/#deconvolution2d', 'https://github.com/fchollet/keras/blob/master/keras/layers/convolutional.py#L507', 'https://stackoverflow.com/questions/39018767/deconvolution2d-layer-in-keras'] | 8 |
62,792,449 | <p>It is kind of hard to help without the dataset itself. Though one or two things I would test:</p>
<ul>
<li>I find the ReLU activation inappropriate for Dense layer, which could lead to the mono-class prediction. Try replacing the relu from your Dense(128) layer by something else (sigmoid, tanh)</li>
<li>Dropout is not really appropriate for images in general, you might want to look at <a href="https://arxiv.org/abs/1810.12890" rel="nofollow noreferrer">DropBlock</a></li>
<li>The initial learning rate is pretty low; I would start with something between 1e-3 and 1e-4</li>
<li>A stupid thing that has happened to me way too often: have you visualized the image / label combination to make sure each image has the right label?</li>
</ul>
<p>Again, not sure it will fix everything, but I hope it might help!</p> | 2020-07-08 10:13:23.090000+00:00 | 2020-07-08 10:13:23.090000+00:00 | null | null | 62,787,260 | <p>EDIT: it seems like I did not even run the model for enough epochs, so I will try that out and return with my results</p>
<p>I am trying to create a CNN that classifies 3D brain images. However, the CNN program always predicts the same class when I run it, and I am not sure what else I can do to prevent this. I have searched this problem and found many plausible solutions, but they did not work.</p>
<p>So far, I have tried:</p>
<ul>
<li>Decreasing the learning rate</li>
<li>Normalize the data to [0, 1]</li>
<li>Change optimizers</li>
<li>Only use sigmoid and binary_crossentropy</li>
<li>Add/remove dropout layers</li>
<li>Changed into a simpler CNN model</li>
<li>Balance the dataset</li>
<li>Added augmented data using a custom 3D imagedatagenerator()
<ul>
<li>Link: <a href="https://github.com/dhuy228/augmented-volumetric-image-generator" rel="nofollow noreferrer">https://github.com/dhuy228/augmented-volumetric-image-generator</a></li>
</ul>
</li>
</ul>
<p>For context, I am classifying between two groups. The amount of images I am using is a total of 200 3D brain images (about 100 for each category). To increase my training size, I used a custom data augmentation I found from github</p>
<p>Looking at the learning curve, the accuracy and loss rates are completely random. Some runs they would be decreasing, some increasing, and some fluctuating within a range</p>
<p>Any help would be appreciated!</p>
<pre><code>import os
import csv
import tensorflow as tf # 2.0
import nibabel as nib
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from keras.models import Model
from keras.layers import Conv3D, MaxPooling3D, Dense, Dropout, Activation, Flatten
from keras.layers import Input, concatenate
from keras import optimizers
from keras.utils import to_categorical
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
from augmentedvolumetricimagegenerator.generator import customImageDataGenerator
from keras.callbacks import EarlyStopping
# Administrative items
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# Where the file is located
path = r'C:\Users\jesse\OneDrive\Desktop\Research\PD\decline'
folder = os.listdir(path)
target_size = (96, 96, 96)
# creating x - converting images to array
def read_image(path, folder):
mri = []
for i in range(len(folder)):
files = os.listdir(path + '\\' + folder[i])
for j in range(len(files)):
image = np.array(nib.load(path + '\\' + folder[i] + '\\' + files[j]).get_fdata())
image = np.resize(image, target_size)
image = np.expand_dims(image, axis=3)
image /= 255.
mri.append(image)
return mri
# creating y - one hot encoder
def create_y():
excel_file = r'C:\Users\jesse\OneDrive\Desktop\Research\PD\decline_label.xlsx'
excel_read = pd.read_excel(excel_file)
excel_array = np.array(excel_read['Label'])
label = LabelEncoder().fit_transform(excel_array)
label = label.reshape(len(label), 1)
onehot = OneHotEncoder(sparse=False).fit_transform(label)
return onehot
# Splitting image train/test
x = np.asarray(read_image(path, folder))
y = np.asarray(create_y())
x_split, x_test, y_split, y_test = train_test_split(x, y, test_size=.2, stratify=y)
x_train, x_val, y_train, y_val = train_test_split(x_split, y_split, test_size=.25, stratify=y_split)
print(x_train.shape, x_val.shape, x_test.shape, y_train.shape, y_val.shape, y_test.shape)
batch_size = 10
num_classes = len(folder)
inputs = Input((96, 96, 96, 1))
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(inputs)
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(conv1)
pool1 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv1)
drop1 = Dropout(0.5)(pool1)
conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(drop1)
conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(conv2)
pool2 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv2)
drop2 = Dropout(0.5)(pool2)
conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(drop2)
conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(conv3)
pool3 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv3)
drop3 = Dropout(0.5)(pool3)
flat1 = Flatten()(drop3)
dense1 = Dense(128, activation='relu')(flat1)
drop5 = Dropout(0.5)(dense1)
dense2 = Dense(num_classes, activation='sigmoid')(drop5)
model = Model(inputs=[inputs], outputs=[dense2])
opt = optimizers.Adagrad(lr=1e-5)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
train_datagen = customImageDataGenerator(
horizontal_flip=True
)
val_datagen = customImageDataGenerator()
training_set = train_datagen.flow(x_train, y_train, batch_size=batch_size)
validation_set = val_datagen.flow(x_val, y_val, batch_size=batch_size)
callbacks = EarlyStopping(monitor='val_loss', patience=3)
history = model.fit_generator(training_set,
steps_per_epoch = 10,
epochs = 20,
validation_steps = 5,
callbacks = [callbacks],
validation_data = validation_set)
score = model.evaluate(x_test, y_test, batch_size=batch_size)
print(score)
y_pred = model.predict(x_test, batch_size=batch_size)
y_test = np.argmax(y_test, axis=1)
y_pred = np.argmax(y_pred, axis=1)
confusion = confusion_matrix(y_test, y_pred)
map = sns.heatmap(confusion, annot=True)
print(map)
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(1)
plt.plot(acc)
plt.plot(val_acc)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.title('Accuracy')
plt.figure(2)
plt.plot(loss)
plt.plot(val_loss)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.title('Loss')
</code></pre>
<p>You can find the outputs here: <a href="https://i.stack.imgur.com/FF13P.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/FF13P.jpg</a></p> | 2020-07-08 03:51:26.927000+00:00 | 2020-07-08 19:08:02.403000+00:00 | 2020-07-08 19:08:02.403000+00:00 | python|tensorflow|keras|conv-neural-network | ['https://arxiv.org/abs/1810.12890'] | 1 |
13,156,235 | <p>Each job is independent of the others, so without storing the output in an intermediate location it's not possible to share data across jobs.</p>
<p>FYI, in the MapReduce model the map tasks don't talk to each other, and the same is true for reduce tasks. <a href="http://incubator.apache.org/giraph/" rel="nofollow">Apache Giraph</a>, which runs on Hadoop, uses communication between the mappers in the same job for iterative algorithms that would otherwise require the same job to be run again and again.</p>
<p>Not sure about the algorithm being implemented and why MR, but every MR algorithm can be implemented in <a href="http://en.wikipedia.org/wiki/Bulk_synchronous_parallel" rel="nofollow">BSP</a> also. Here is a <a href="http://arxiv.org/abs/1203.2081" rel="nofollow">paper</a> comparing BSP with MR. Some of the algorithms perform well in BSP when compared to MR. <a href="http://hama.apache.org/" rel="nofollow">Apache Hama</a> is an implementation of the BSP model, the way Apache Hadoop is an implementation of MR.</p> | 2012-10-31 10:52:38.983000+00:00 | 2012-10-31 10:52:38.983000+00:00 | null | null | 13,154,720 | <p>Is it possible to share a value between successive reducer and mapper?</p>
<p>Or is it possible to store the output of the first reducer in memory so that the second mapper can access it from memory?</p>
<p>The problem is, I have written a chained MapReduce flow like Map1 --> Reducer1 --> Map2 --> Reducer2.</p>
<p>Map1 and Map2 read the same input file.</p>
<p>Reducer1 derives a value, say 'X', as its output.</p>
<p>I need 'X' and the input file for Map2.</p>
<p>How can we do this without reading the output file of Reducer1?</p>
<p>Is it possible to store 'X' in memory for Mapper2 to access?</p>
66,775,641 | <p>This paper (<a href="https://arxiv.org/pdf/1903.07288.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1903.07288.pdf</a>) studied the effect of padding types on LSTM and CNN. They found that post-padding achieved substantially lower accuracy (nearly half) compared to pre-padding in LSTMs, although there wasn't a significant difference for CNNs (post-padding was only slightly worse).</p>
<p>A simple/intuitive explanation for RNNs is that, post-padding seems to add noise to what has been learned from the sequence through time, and there aren't more timesteps for the RNN to recover from this noise. With pre-padding, however, the RNN is better able to adjust to the added noise of zeros at the beginning as it learns from the sequence through time.</p>
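<p>For concreteness, this is what the two options look like if you pad with, for example, Keras's <code>pad_sequences</code> (toy sequences of word indices):</p>
<pre><code>from keras.preprocessing.sequence import pad_sequences

seqs = [[3, 7, 2], [4, 1]]
print(pad_sequences(seqs, maxlen=4, padding='pre'))
# [[0 3 7 2]
#  [0 0 4 1]]
print(pad_sequences(seqs, maxlen=4, padding='post'))
# [[3 7 2 0]
#  [4 1 0 0]]
</code></pre>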
<p>I think more thorough experiments are needed in the community for more detailed mechanistic explanations on how padding affects performance.</p>
<p>I always recommend using pre-padding over post-padding, even for CNNs, unless the problem specifically requires post-padding.</p> | 2021-03-24 06:13:29.413000+00:00 | 2021-07-08 01:05:56.230000+00:00 | 2021-07-08 01:05:56.230000+00:00 | null | 46,298,793 | <p>I'm working on an NLP sequence labelling problem. My data consists of variable length sequences <code>(w_1, w_2, ..., w_k)</code> with corresponding labels <code>(l_1, l_2, ..., l_k)</code> (in this case the task is named entity extraction).</p>
<p>I intend to solve the problem using Recurrent Neural Networks. As the sequences are of variable length I need to pad them (I want batch size >1). I have the option of either pre zero padding them, or post zero padding them. I.e. either I make every sequence <code>(0, 0, ..., w_1, w_2, ..., w_k)</code> or <code>(w_1, w_2, ..., w_k, 0, 0, ..., 0)</code> such that the lenght of each sequence is the same. </p>
<p><strong>How does the choice between pre- and post padding impact results?</strong> </p>
<p>It seems like pre padding is more common, but I can't find an explanation of why it would be better. Due to the nature of RNNs it feels like an arbitrary choice for me, since they share weights across time steps.</p> | 2017-09-19 11:04:11+00:00 | 2021-07-08 01:05:56.230000+00:00 | null | performance|machine-learning|recurrent-neural-network | ['https://arxiv.org/pdf/1903.07288.pdf'] | 1 |
57,433,320 | <p>If I recall correctly, I think it is Google's NasNet. It's a very cool (and compute-intensive) method used to design the model architecture, but good for transfer learning and prediction. I would recommend taking a look at the <a href="https://arxiv.org/pdf/1707.07012.pdf" rel="nofollow noreferrer">NasNet paper</a>.</p>
<p>It should also be available to use through <code>keras.applications</code>.</p> | 2019-08-09 15:39:08.737000+00:00 | 2019-08-09 15:39:08.737000+00:00 | null | null | 57,432,725 | <p>Prior to 2017, it was relatively simple to understand which CNN was the best to classify images with the imagnet yearly competition.</p>
<p><a href="https://i.stack.imgur.com/l86pf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l86pf.png" alt="enter image description here"></a></p>
<p>In 2017 the imagenet competition was divided into different <a href="http://image-net.org/challenges/LSVRC/2017/results" rel="nofollow noreferrer">tasks</a> with winners such as <a href="https://arxiv.org/abs/1709.01507" rel="nofollow noreferrer">this</a>. In 2018, the competition moved to kaggle and became about 3D detection. </p>
<p>I am interested in image classification only and there no longer seems to be a competition for this.</p>
<p>Does anyone know what neural network was recognised as the best for image classification in 2018?</p> | 2019-08-09 15:01:23.793000+00:00 | 2019-08-09 17:55:22.747000+00:00 | 2019-08-09 17:55:22.747000+00:00 | machine-learning|neural-network|classification|conv-neural-network|imagenet | ['https://arxiv.org/pdf/1707.07012.pdf'] | 1 |
57,433,396 | <p>This is a really good question. I was wondering about the same and played around with some of the models that are on <a href="https://tfhub.dev/s?module-type=image-classification" rel="nofollow noreferrer">TensorFlow Hub</a>. So, here are my two cents.</p>
<p>The current best models in terms of performance on ImageNet are the ones which are obtained with <a href="https://arxiv.org/abs/1712.00559" rel="nofollow noreferrer">Progressive Neural Architecture Search</a>. On the other hand, these models are incredibly slow to train because they are huge. When it comes to the models such as InceptionNet, ResNet, and VGG, this is a <a href="https://github.com/jcjohnson/cnn-benchmarks" rel="nofollow noreferrer">good link</a> to check out the performance compared to the training/inference speed.</p>
<p>My personal experience is that if you want to maximize performance, use <a href="https://tfhub.dev/google/imagenet/resnet_v2_152/classification/3" rel="nofollow noreferrer">ResNet152</a>. If you want a relatively fast CNN, while achieving good performance, go with <a href="https://tfhub.dev/google/imagenet/resnet_v2_50/classification/3" rel="nofollow noreferrer">ResNet50</a>. When it comes to the VGG nets, I played around with the <a href="https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py" rel="nofollow noreferrer">TF-Slim implementation</a> but it was slower than ResNet50, with performance around the same. Finally, I can't say much about Inception because I didn't use it. In the end, I went with ResNet152, because it yielded the best performance for me (Please note that I was using a pre-trained version and I was fine-tuning it to my task).</p>
<p>To summarize, I think that there is no general <strong>best CNN</strong>. I would avoid using VGG16/19, because it yields worse performance than ResNet50, while being slower. If you have access to a lot of computational power, go with Resnet152 or PNASNet. Again, this my opinion based on my personal experience by playing around with the pre-trained models on TF-Hub.</p> | 2019-08-09 15:44:09.327000+00:00 | 2019-08-09 15:50:45.280000+00:00 | 2019-08-09 15:50:45.280000+00:00 | null | 57,432,725 | <p>Prior to 2017, it was relatively simple to understand which CNN was the best to classify images with the imagnet yearly competition.</p>
<p><a href="https://i.stack.imgur.com/l86pf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l86pf.png" alt="enter image description here"></a></p>
<p>In 2017 the imagenet competition was divided into different <a href="http://image-net.org/challenges/LSVRC/2017/results" rel="nofollow noreferrer">tasks</a> with winners such as <a href="https://arxiv.org/abs/1709.01507" rel="nofollow noreferrer">this</a>. In 2018, the competition moved to kaggle and became about 3D detection. </p>
<p>I am interested in image classification only and there no longer seems to be a competition for this.</p>
<p>Does anyone know what neural network was recognised as the best for image classification in 2018?</p> | 2019-08-09 15:01:23.793000+00:00 | 2019-08-09 17:55:22.747000+00:00 | 2019-08-09 17:55:22.747000+00:00 | machine-learning|neural-network|classification|conv-neural-network|imagenet | ['https://tfhub.dev/s?module-type=image-classification', 'https://arxiv.org/abs/1712.00559', 'https://github.com/jcjohnson/cnn-benchmarks', 'https://tfhub.dev/google/imagenet/resnet_v2_152/classification/3', 'https://tfhub.dev/google/imagenet/resnet_v2_50/classification/3', 'https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py'] | 6 |
7,166,033 | <p>This isn't a solution so much as it's another way to think about the problem.</p>
<p>Make the following graph: </p>
<ul>
<li>Vertices are all subsets of <code>S</code> of sizes <code>n</code> or <code>n+1</code>. </li>
<li>There is an edge between <code>v</code> and <code>w</code> if the two sets differ by one element.</li>
</ul>
<p>For example, for n=1, you get the following cycle:</p>
<pre><code> {1} --- {1,3} --- {3}
| |
| |
{1,2} --- {2} --- {2,3}
</code></pre>
<p>Your problem is to find a <a href="http://en.wikipedia.org/wiki/Hamiltonian_path" rel="nofollow">Hamiltonian cycle</a>:</p>
<blockquote>
<p>A Hamiltonian cycle (or Hamiltonian circuit) is a cycle in an
undirected graph which visits each vertex exactly once and also
returns to the starting vertex. Determining whether such paths and
cycles exist in graphs is the Hamiltonian path problem which is
NP-complete.</p>
</blockquote>
<p>In other words, this problem is hard.</p>
<p>There are a handful of theorems giving sufficient conditions for a Hamiltonian cycle to exist in a graph (e.g. if all vertices have degree at least <code>N/2</code> where <code>N</code> is the number of vertices), but none that I know immediately implies that this graph has a Hamiltonian cycle.</p>
<p>You could try one of the myriad algorithms to determine if a Hamiltonian cycle exists. For example, from the wikipedia article on the <a href="http://en.wikipedia.org/wiki/Hamiltonian_path_problem" rel="nofollow">Hamiltonian path problem</a>:</p>
<blockquote>
<p>A trivial heuristic algorithm for locating hamiltonian paths is to
construct a path abc... and extend it until no longer possible; when
the path abc...xyz cannot be extended any longer because all
neighbours of z already lie in the path, one goes back one step,
removing the edge yz and extending the path with a different neighbour
of y; if no choice produces a hamiltonian path, then one takes a
further step back, removing the edge xy and extending the path with a
different neighbour of x, and so on. This algorithm will certainly
find an hamiltonian path (if any) but it runs in exponential time.</p>
</blockquote>
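<p>If you just want to experiment with small <code>n</code>, here is a rough Python sketch of exactly that backtracking search on the subset graph described above (exponential, so only useful for tiny <code>n</code>; for n=1 it reproduces the cycle you listed):</p>
<pre><code>from itertools import combinations

def middle_levels_cycle(n):
    # vertices: all subsets of S of size n or n+1
    S = range(1, 2 * n + 2)
    verts = [frozenset(c) for k in (n, n + 1) for c in combinations(S, k)]

    def adjacent(a, b):
        return len(a ^ b) == 1  # edge if the two sets differ by one element

    start = verts[0]
    path, used = [start], {start}

    def extend():
        if len(path) == len(verts):
            return adjacent(path[-1], start)  # can we close the cycle?
        for v in verts:
            if v not in used and adjacent(path[-1], v):
                path.append(v)
                used.add(v)
                if extend():
                    return True
                path.pop()
                used.remove(v)
        return False

    return [sorted(v) for v in path] if extend() else None

print(middle_levels_cycle(1))
# [[1], [1, 2], [2], [2, 3], [3], [1, 3]]
</code></pre>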
<p>Hope this helps.</p>
<hr>
<p><strong>Good News</strong>: Though the Hamiltonian cycle problem is difficult in general, this graph is very nice: it's bipartite and <code>(n+1)</code>-regular. This means there may be a nice solution for this particular graph.</p>
<p><strong>Bad News</strong>: After doing a bit of searching, it turns out that this problem is known as the <em>Middle Levels Conjecture</em>, and it seems to have originated around 1980. As best I can tell, the problem is still open in general, but it has been computer verified for <code>n <= 17</code> (and I found a <a href="http://arxiv.org/abs/0912.4564" rel="nofollow">preprint</a> from 12/2009 claiming to verify <code>n=18</code>). These two pages have additional information about the problem and references:</p>
<ul>
<li><a href="http://www.math.uiuc.edu/~west/openp/revolving.html" rel="nofollow">http://www.math.uiuc.edu/~west/openp/revolving.html</a></li>
<li><a href="http://garden.irmacs.sfu.ca/?q=op/middle_levels_problem" rel="nofollow">http://garden.irmacs.sfu.ca/?q=op/middle_levels_problem</a></li>
</ul> | 2011-08-23 18:50:30.497000+00:00 | 2011-08-23 19:58:06.060000+00:00 | 2011-08-23 19:58:06.060000+00:00 | null | 7,165,680 | <p>Problem: Start with a set <code>S</code> of size 2n+1 and a subset <code>A</code> of <code>S</code> of size n. You have functions <code>addElement(A,x)</code> and <code>removeElement(A,x)</code> that can add or remove an element of <code>A</code>. Write a function that cycles through all the subsets of <code>S</code> of size n or n+1 using just these two operations on <code>A</code>.</p>
<p>I figured out that there are (2n+1 choose n) + (2n+1 choose n+1) = 2 * (2n+1 choose n) subsets that I need to find. So here's the structure for my function:</p>
<pre><code>for (int k=0; k<2*binomial(2n+1,n); ++k) {
if (k mod 2) {
// somehow choose x from S-A
A = addElement(A,x);
printSet(A,n+1);
} else
// somehow choose x from A
A = removeElement(A,x);
printSet(A,n);
}
}
</code></pre>
<p>The function <code>binomial(2n+1,n)</code> just gives the binomial coefficient, and the function <code>printSet</code> prints the elements of <code>A</code> so that I can see if I hit all the sets.</p>
<p>I don't know how to choose the element to add or remove, though. I tried lots of different things, but I didn't get anything that worked in general.</p>
<p>For n=1, here's a solution that I found that works:</p>
<pre><code>for (int k=0; k<6; ++k) {
if (k mod 2) {
x = S[A[0] mod 3];
A = addElement(A,x);
printSet(A,2);
} else
x = A[0];
A = removeElement(A,x);
printSet(A,1);
}
}
</code></pre>
<p>and the output for <code>S = [1,2,3]</code> and <code>A=[1]</code> is:</p>
<pre><code>[1,2]
[2]
[2,3]
[3]
[3,1]
[1]
</code></pre>
<p>But even getting this to work for n=2 I can't do. Can someone give me some help on this one?</p> | 2011-08-23 18:18:08.100000+00:00 | 2011-08-23 19:58:06.060000+00:00 | null | c|algorithm | ['http://en.wikipedia.org/wiki/Hamiltonian_path', 'http://en.wikipedia.org/wiki/Hamiltonian_path_problem', 'http://arxiv.org/abs/0912.4564', 'http://www.math.uiuc.edu/~west/openp/revolving.html', 'http://garden.irmacs.sfu.ca/?q=op/middle_levels_problem'] | 5 |
48,009,544 | <p>To follow up on @Robert Dodier 's comment, reading <a href="http://arxiv.org/pdf/1207.6002.pdf" rel="nofollow noreferrer">http://arxiv.org/pdf/1207.6002.pdf</a> you'll find a recipe for Maximum Likelihood estimation, which I adapt here:</p>
<pre><code>import scipy
import scipy.stats as sciStat
import scipy.optimize as sciOpt
def myMleEstimate(myFunc, par, data):
def lnL_av(x, par):
N = len(x)
lnL = 0.
for i in range(N):
lnL += scipy.log(myFunc(par, x[i]))
return lnL/N
objFunc = lambda s: -lnL_av(data, s)
par_mle = sciOpt.fmin(objFunc, par, disp=0)
return par_mle
</code></pre>
<p>If you'd want to model a Rayleigh, you'd:</p>
<pre><code>from scipy.stats import rayleigh
Rayleigh = lambda par, x: sciStat.rayleigh.pdf(x, loc=par[0], scale=par[1])
</code></pre>
<p>And estimate from your <em>data</em>:</p>
<pre><code>estimated = myMleEstimate(Rayleigh, [0, 1], data)
</code></pre>
<p>(Here I chose 0, 1 starting parameters).</p>
<p>To test the last line, you could first sample a thousand data points using:</p>
<pre><code># parameters
params = {
'loc': 1,
'scale': 2
}
data = rayleigh.rvs(loc=params['loc'], scale=params['scale'], size=1000)
</code></pre>
<hr>
<p>And yes, I understand that a K-distro is a compound of two gammas, not a Rayleigh. But sources such as <a href="http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA368069" rel="nofollow noreferrer">Estimating the Parameters of the K Distribution in the Intensity Domain</a> point out that it is quite difficult with ML estimation.</p>
<p><em>So you got what you asked, a Python recipe, but this may not be what you need.</em></p> | 2017-12-28 14:47:53.563000+00:00 | 2017-12-28 14:47:53.563000+00:00 | null | null | 28,112,226 | <p>I have a sample distribution of data which I would like to fit with some not Python embedded statistics in scipy.stats, such as the K pdf.
Is it then possible to do so? Are there, by chance, other modules that have the K distribution or other non-Gaussian pdfs available?</p>
<p>Thanks for your help!</p> | 2015-01-23 14:38:14.060000+00:00 | 2017-12-28 14:47:53.563000+00:00 | null | python|statistics | ['http://arxiv.org/pdf/1207.6002.pdf', 'http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA368069'] | 2 |
50,631,576 | <p>You will need a variable number of outputs which would require a recurrent network on the prediction side. Let's try building one on the existing network:</p>
<pre><code># first we'll add an extra input telling how many outputs we need
num_outs = Input(shape=(1,), dtype='int32')
# ... continuing from
answer = LSTM(32)(answer) # (samples, 32)
# answer is your encoded context-query, we will decode it into a sequence
answers = RepeatVector(num_outs[0])(answer) # (samples, num_outs, 32)
# Another RNN to give us decoding (optional)
answers = LSTM(32, return_sequences=True)(answers) # note return_sequences
answers = TimeDistributed(Dense(vocab_size, activation='softmax'))(answers)
# we have (samples, num_outs, vocab_size) so num_outs words
# ...
</code></pre>
<p>Now your targets have to be 3D as well. <em>Important</em>: you have to append an <strong>end-of-answer</strong> token to every answer so you know when to stop at prediction time. Equally, you would pad the answers within a batch after the end-of-answer token so they form a rectangular tensor.</p>
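<p>A rough sketch of that target preparation (the token ids are made up; 0 is the padding index and 1 stands for the end-of-answer token):</p>
<pre><code>import numpy as np

answers = [[16], [16, 27], [19, 16, 27]]   # variable-length answers as word indices
max_outs = 4                               # longest answer + 1 for the end-of-answer token

targets = np.zeros((len(answers), max_outs), dtype='int32')
for i, ans in enumerate(answers):
    seq = ans + [1]                        # append the end-of-answer token
    targets[i, :len(seq)] = seq            # pad the remainder with zeros
print(targets)
# [[16  1  0  0]
#  [16 27  1  0]
#  [19 16 27  1]]
</code></pre>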
<p>Now at prediction time, you can ask for 10 words and chop words after end-of-answer token, similar to how machine translation is done using seq2seq models. For reference, have a look at <a href="https://arxiv.org/abs/1506.07285" rel="nofollow noreferrer">Dynamic Memory Networks</a>.</p> | 2018-05-31 19:34:46.653000+00:00 | 2018-05-31 19:34:46.653000+00:00 | null | null | 50,630,123 | <p>I am trying to modify <a href="https://github.com/keras-team/keras/blob/master/examples/babi_memnn.py" rel="nofollow noreferrer">Keras's memory neural net using the bAbI dataset</a> from outputting a single word to outputting multiple words (3 in this example). For context, this is an NLP model using LSTM for question answering. </p>
<p>Here is a snippet of the model structure:</p>
<pre><code># placeholders
input_sequence = Input((story_maxlen,))
question = Input((query_maxlen,))
# encoders
# embed the input sequence into a sequence of vectors
input_encoder_m = Sequential()
input_encoder_m.add(Embedding(input_dim=vocab_size,
output_dim=64))
input_encoder_m.add(Dropout(0.3))
# output: (samples, story_maxlen, embedding_dim)
# embed the input into a sequence of vectors of size query_maxlen
input_encoder_c = Sequential()
input_encoder_c.add(Embedding(input_dim=vocab_size,
output_dim=query_maxlen))
input_encoder_c.add(Dropout(0.3))
# output: (samples, story_maxlen, query_maxlen)
# embed the question into a sequence of vectors
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size,
output_dim=64,
input_length=query_maxlen))
question_encoder.add(Dropout(0.3))
# output: (samples, query_maxlen, embedding_dim)
# encode input sequence and questions (which are indices)
# to sequences of dense vectors
input_encoded_m = input_encoder_m(input_sequence)
input_encoded_c = input_encoder_c(input_sequence)
question_encoded = question_encoder(question)
# compute a 'match' between the first input vector sequence
# and the question vector sequence
# shape: `(samples, story_maxlen, query_maxlen)`
match = dot([input_encoded_m, question_encoded], axes=(2, 2))
match = Activation('softmax')(match)
# add the match matrix with the second input vector sequence
response = add([match, input_encoded_c]) # (samples, story_maxlen, query_maxlen)
response = Permute((2, 1))(response) # (samples, query_maxlen, story_maxlen)
# concatenate the match matrix with the question vector sequence
answer = concatenate([response, question_encoded])
# the original paper uses a matrix multiplication for this reduction step.
# we choose to use a RNN instead.
answer = LSTM(32)(answer) # (samples, 32)
# one regularization layer -- more would probably be needed.
answer = Dropout(0.3)(answer)
answer = Dense(vocab_size)(answer) # (samples, vocab_size)
# we output a probability distribution over the vocabulary
answer = Activation('softmax')(answer)
</code></pre>
<p>and this is how it is being compiled and trained:</p>
<pre><code>model = Model([input_sequence, question], answer)
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit([inputs_train, queries_train], answers_train,
batch_size=32,
epochs=num_epochs,
validation_data=([inputs_test, queries_test], answers_test))
</code></pre>
<p>In the above example, the answers_train variable is a 1xn matrix where each item is the value for a question. So, for example, the first three answers:</p>
<pre><code>print(answers_train[:3])
</code></pre>
<p>outputs:</p>
<pre><code>[16 16 19]
</code></pre>
<h2>My Issue</h2>
<p>This is the change I made to the answers_train variable, where:</p>
<pre><code>print(answers_train[:3])
</code></pre>
<p>outputs:</p>
<pre><code>[[ 0 0 16]
[ 0 0 27]
[ 0 0 16]]
</code></pre>
<p>basically, I'm trying to get up to three words predicted instead of one.</p>
<p>When I do this and try to train the model I get this error:</p>
<blockquote>
<p>ValueError: Error when checking target: expected activation_29 to have
shape (1,) but got array with shape (3,)</p>
</blockquote>
<p>Here is the output of model.summary():</p>
<pre><code>__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 552) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 5) 0
__________________________________________________________________________________________________
sequential_1 (Sequential) multiple 2304 input_1[0][0]
__________________________________________________________________________________________________
sequential_3 (Sequential) (None, 5, 64) 2304 input_2[0][0]
__________________________________________________________________________________________________
dot_1 (Dot) (None, 552, 5) 0 sequential_1[1][0]
sequential_3[1][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 552, 5) 0 dot_1[0][0]
__________________________________________________________________________________________________
sequential_2 (Sequential) multiple 180 input_1[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 552, 5) 0 activation_1[0][0]
sequential_2[1][0]
__________________________________________________________________________________________________
permute_1 (Permute) (None, 5, 552) 0 add_1[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 5, 616) 0 permute_1[0][0]
sequential_3[1][0]
__________________________________________________________________________________________________
lstm_1 (LSTM) (None, 32) 83072 concatenate_1[0][0]
__________________________________________________________________________________________________
dropout_4 (Dropout) (None, 32) 0 lstm_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 36) 1188 dropout_4[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 36) 0 dense_1[0][0]
==================================================================================================
Total params: 89,048
Trainable params: 89,048
Non-trainable params: 0
__________________________________________________________________________________________________
</code></pre>
<p>What I understand is that the model was built to determine a single word answer (i.e. shape (1,)) and that I need to modify the model because now I expect it to determine multiple word answers (in this case, shape (3,)). What I don't understand is how to change the model structure to accomplish that.</p>
<p>I don't see anywhere in the model's summary that indicates where the shape (1,) is defined. I only see definitions for the max story size in words (552), the max query/question size in words (5), and the vocabulary size in words (36).</p>
<p>Is anyone able to help me figure out what I'm doing wrong?</p>
<hr>
<h2>Update #1</h2>
<p>I've learned a few more things while I've been continuing to research this problem. I could be wrong on all these points as I'm not familiar with the fine details of ML and NNs so feel free to call me out on anything that seems amiss.</p>
<ul>
<li>The last dense layer of shape (None, 36) is sized based on the vocabulary size, and the subsequent softmax activation layer's purpose is to produce a vector of probabilities indicating which word is the correct one. If that's the case, then by reducing the last dense layer to (None, 3) am I losing information? Would I just be getting a vector of three probabilities without any indication as to which words they apply to? Unless the last dense layer represents indices into the vectorized vocabulary? In that case I'd know the words being predicted, but then what would be the purpose of the subsequent activation layer? (See also the sketch after this list.)</li>
<li>The <code>sparse_categorical_crossentropy</code> loss function reduces the shape of the final output to (1,) in <a href="https://github.com/keras-team/keras/blob/7365a99f6e832847808c7aa28718d32fbc744b21/keras/engine/training.py#L770" rel="nofollow noreferrer">~/keras/engine/training.py on line 770</a>. Does that mean I'm using the wrong loss function? I can't use <code>categorical_crossentropy</code> because I don't want to have a one-hot vectored output. Does that mean I need to change the whole model altogether or will another loss function give me the desired output?</li>
</ul>
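<p>To make the kind of change I'm imagining concrete, here is a rough sketch of the sort of output head I had in mind for predicting three words. This is only my guess, not working code — it assumes <code>RepeatVector</code> and <code>TimeDistributed</code> are imported from <code>keras.layers</code>:</p>
<pre><code># hypothetical sketch of a three-word output head (unverified)
answer = LSTM(32)(answer)                           # (samples, 32)
answer = Dropout(0.3)(answer)
answer = RepeatVector(3)(answer)                    # (samples, 3, 32) -- one timestep per answer word
answer = LSTM(32, return_sequences=True)(answer)    # (samples, 3, 32)
answer = TimeDistributed(Dense(vocab_size, activation='softmax'))(answer)  # (samples, 3, vocab_size)
# presumably answers_train would then need shape (n, 3, 1) for
# sparse_categorical_crossentropy, or one-hot (n, 3, vocab_size) for categorical_crossentropy
</code></pre>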
<p>I guess in summation, is a tweak to the model even possible or do I need to use a completely different model? If you could also provide clarity on my confusion based on the above two points I would be very grateful.</p> | 2018-05-31 17:54:27.363000+00:00 | 2018-06-01 01:22:28.927000+00:00 | 2018-06-01 01:22:28.927000+00:00 | python|tensorflow|keras|lstm | ['https://arxiv.org/abs/1506.07285'] | 1 |
2,768,974 | <p>Almost all our public-key encryptions (ex. RSA) are based solely on math, relying on the difficulty of <a href="http://en.wikipedia.org/wiki/Factorization" rel="nofollow noreferrer">factorization</a> or <a href="http://en.wikipedia.org/wiki/Discrete_logarithm" rel="nofollow noreferrer">discrete-logarithms</a>. Both of these will be efficiently broken using <a href="http://arxiv.org/abs/quant-ph/9508027" rel="nofollow noreferrer">quantum computers</a> (though even after a bachelors in CS and Math, and having taken several classes on quantum mechanics, I still don't understand the algorithm).</p>
<p><strong>However, hashing algorithms (Ex. SHA2) and symmetric-key encryptions (ex. AES)</strong>, which are based mostly on <a href="http://en.wikipedia.org/wiki/Confusion_and_diffusion" rel="nofollow noreferrer">diffusion and confusion</a>, <strong>are still secure.</strong></p> | 2010-05-04 21:04:21.870000+00:00 | 2010-05-04 21:14:51.737000+00:00 | 2010-05-04 21:14:51.737000+00:00 | null | 2,768,807 | <p>I read a while back that Quantum Computers can break most types of hashing and encryption in use today in a very short amount of time(I believe it was mere minutes). How is it possible? I've tried reading articles about it but I get lost at the <code>a quantum bit can be 1, 0, or something else</code>. Can someone explain how this relates to cracking such algorithms in plain English without all the fancy maths? </p> | 2010-05-04 20:37:43.960000+00:00 | 2021-10-22 13:45:39.317000+00:00 | 2015-05-01 13:08:24.007000+00:00 | encryption|cryptography|quantum-computing | ['http://en.wikipedia.org/wiki/Factorization', 'http://en.wikipedia.org/wiki/Discrete_logarithm', 'http://arxiv.org/abs/quant-ph/9508027', 'http://en.wikipedia.org/wiki/Confusion_and_diffusion'] | 4 |
22,640,362 | <h1>Robust peak detection algorithm (using z-scores)</h1>
<p>I came up with an algorithm that works very well for these types of datasets. It is based on the principle of <a href="https://en.wikipedia.org/wiki/Statistical_dispersion" rel="noreferrer">dispersion</a>: if a new datapoint is a given x number of standard deviations away from some moving mean, the algorithm signals (also called <a href="https://en.wikipedia.org/wiki/Standard_score" rel="noreferrer">z-score</a>). The algorithm is very robust because it constructs a <em>separate</em> moving mean and deviation, such that signals do not corrupt the threshold. Future signals are therefore identified with approximately the same accuracy, regardless of the amount of previous signals. The algorithm takes 3 inputs: <code>lag = the lag of the moving window</code>, <code>threshold = the z-score at which the algorithm signals</code> and <code>influence = the influence (between 0 and 1) of new signals on the mean and standard deviation</code>. For example, a <code>lag</code> of 5 will use the last 5 observations to smooth the data. A <code>threshold</code> of 3.5 will signal if a datapoint is 3.5 standard deviations away from the moving mean. And an <code>influence</code> of 0.5 gives signals <em>half</em> of the influence that normal datapoints have. Likewise, an <code>influence</code> of 0 ignores signals completely for recalculating the new threshold. An influence of 0 is therefore the most robust option (but assumes <a href="https://en.wikipedia.org/wiki/Stationary_process" rel="noreferrer">stationarity</a>); putting the influence option at 1 is least robust. For non-stationary data, the influence option should therefore be put somewhere between 0 and 1.</p>
<p>It works as follows:</p>
<p><em><strong>Pseudocode</strong></em></p>
<pre class="lang-r prettyprint-override"><code># Let y be a vector of timeseries data of at least length lag+2
# Let mean() be a function that calculates the mean
# Let std() be a function that calculates the standard deviation
# Let absolute() be the absolute value function
# Settings (these are examples: choose what is best for your data!)
set lag to 5; # average and std. are based on past 5 observations
set threshold to 3.5; # signal when data point is 3.5 std. away from average
set influence to 0.5; # between 0 (no influence) and 1 (full influence)
# Initialize variables
set signals to vector 0,...,0 of length of y; # Initialize signal results
set filteredY to y(1),...,y(lag) # Initialize filtered series
set avgFilter to null; # Initialize average filter
set stdFilter to null; # Initialize std. filter
set avgFilter(lag) to mean(y(1),...,y(lag)); # Initialize first value average
set stdFilter(lag) to std(y(1),...,y(lag)); # Initialize first value std.
for i=lag+1,...,t do
if absolute(y(i) - avgFilter(i-1)) > threshold*stdFilter(i-1) then
if y(i) > avgFilter(i-1) then
set signals(i) to +1; # Positive signal
else
set signals(i) to -1; # Negative signal
end
set filteredY(i) to influence*y(i) + (1-influence)*filteredY(i-1);
else
set signals(i) to 0; # No signal
set filteredY(i) to y(i);
end
set avgFilter(i) to mean(filteredY(i-lag+1),...,filteredY(i));
set stdFilter(i) to std(filteredY(i-lag+1),...,filteredY(i));
end
</code></pre>
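<p>For convenience, here is a direct NumPy translation of the pseudocode above (a sketch: it assumes <code>y</code> is a 1-D array of length at least <code>lag+2</code>; full, tested implementations in many languages are linked further below):</p>
<pre><code>import numpy as np

def thresholding_algo(y, lag, threshold, influence):
    signals = np.zeros(len(y))
    filteredY = np.array(y, dtype=float)
    avgFilter = np.zeros(len(y))
    stdFilter = np.zeros(len(y))
    avgFilter[lag - 1] = np.mean(y[0:lag])   # initialize first value average
    stdFilter[lag - 1] = np.std(y[0:lag])    # initialize first value std.
    for i in range(lag, len(y)):
        if abs(y[i] - avgFilter[i - 1]) > threshold * stdFilter[i - 1]:
            signals[i] = 1 if y[i] > avgFilter[i - 1] else -1    # positive or negative signal
            filteredY[i] = influence * y[i] + (1 - influence) * filteredY[i - 1]
        else:
            signals[i] = 0                                       # no signal
            filteredY[i] = y[i]
        avgFilter[i] = np.mean(filteredY[(i - lag + 1):(i + 1)])
        stdFilter[i] = np.std(filteredY[(i - lag + 1):(i + 1)])
    return signals, avgFilter, stdFilter
</code></pre>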
<p>Rules of thumb for selecting good parameters for your data can be found below.</p>
<hr />
<h2>Demo</h2>
<p><a href="https://i.imgur.com/LFvEM2Y.gif" rel="noreferrer"><img src="https://i.imgur.com/LFvEM2Y.gif" alt="Demonstration of robust thresholding algorithm" /></a></p>
<p><sub>The Matlab code for this demo can be found <a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/54507329#54507329"><strong>here</strong></a>. To use the demo, simply run it and create a time series yourself by clicking on the upper chart. The algorithm starts working after drawing <code>lag</code> number of observations.</sub></p>
<hr />
<h2>Result</h2>
<p>For the original question, this algorithm will give the following output when using the following settings: <code>lag = 30, threshold = 5, influence = 0</code>:</p>
<p><a href="https://i.stack.imgur.com/KdpF7.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/KdpF7.jpg" alt="Thresholding algorithm example" /></a></p>
<hr />
<h2>Implementations in different programming languages:</h2>
<ul>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/54507329#54507329">Matlab</a> (me)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/54507329#54507329">R</a> (me)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/42144471#42144471">Golang</a> (Xeoncross)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/72502027#72502027">Golang</a> [efficient version] (Micah Parks)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/43512887#43512887">Python</a> (R Kiselev)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/56451135#56451135">Python</a> [efficient version] (delica)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/43583302/peak-detection-for-growing-time-series-using-swift/43607179#43607179">Swift</a> (me)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/46575416#46575416">Groovy</a> (JoshuaCWebDeveloper)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/46956908#46956908">C++</a> (brad)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/46998001#46998001">C++</a> (Animesh Pandey)</h2>
</li>
<li><h2><a href="https://github.com/swizard0/smoothed_z_score" rel="noreferrer">Rust</a> (swizard)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/48231877#48231877">Scala</a> (Mike Roberts)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/48772305#48772305">Kotlin</a> (leoderprofi)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/48895639#48895639">Ruby</a> (Kimmo Lehto)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/51185583#51185583">Fortran</a> [for resonance detection] (THo)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/52447946#52447946">Julia</a> (Matt Camp)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/53614452#53614452">C#</a> (Ocean Airdrop)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/54507140#54507140">C</a> (DavidC)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/56174275#56174275">Java</a> (takanuva15)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/57889588#57889588">JavaScript</a> (Dirk Lüsebrink)</h2>
</li>
<li><h2><a href="https://github.com/Bluejay47/zScore" rel="noreferrer">TypeScript</a> (Jerry Gamble)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/58859333#58859333">Perl</a> (Alen)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/59687045#59687045">PHP</a> (radhoo)</h2>
</li>
<li><h2><a href="https://github.com/gtjamesa/php-zscore" rel="noreferrer">PHP</a> (gtjamesa)</h2>
</li>
<li><h2><a href="https://stackoverflow.com/a/66700221/833188">Dart</a> (Sga)</h2>
</li>
</ul>
<hr />
<h2>Rules of thumb for configuring the algorithm</h2>
<p><em><strong><code>lag</code></strong></em>: the lag parameter determines how much your data will be smoothed and how adaptive the algorithm is to changes in the long-term average of the data. The more <a href="https://en.wikipedia.org/wiki/Stationary_process" rel="noreferrer">stationary</a> your data is, the more lags you should include (this should improve the robustness of the algorithm). If your data contains time-varying trends, you should consider how quickly you want the algorithm to adapt to these trends. I.e., if you put <code>lag</code> at 10, it takes 10 'periods' before the algorithm's threshold is adjusted to any systematic changes in the long-term average. So choose the <code>lag</code> parameter based on the trending behavior of your data and how adaptive you want the algorithm to be.</p>
<p><em><strong><code>influence</code></strong></em>: this parameter determines the influence of signals on the algorithm's detection threshold. If put at 0, signals have no influence on the threshold, such that future signals are detected based on a threshold that is calculated with a mean and standard deviation that is not influenced by past signals. If put at 0.5, signals have <em>half</em> the influence of normal data points. Another way to think about this is that if you put the influence at 0, you implicitly assume stationarity (i.e. no matter how many signals there are, you always expect the time series to return to the same average over the long term). If this is not the case, you should put the influence parameter somewhere between 0 and 1, depending on the extent to which signals can systematically influence the time-varying trend of the data. E.g., if signals lead to a <a href="https://en.wikipedia.org/wiki/Structural_break" rel="noreferrer">structural break</a> of the long-term average of the time series, the influence parameter should be put high (close to 1) so the threshold can react to structural breaks quickly.</p>
<p><em><strong><code>threshold</code></strong></em>: the threshold parameter is the number of standard deviations from the moving mean above which the algorithm will classify a new datapoint as being a signal. For example, if a new datapoint is 4.0 standard deviations above the moving mean and the threshold parameter is set as 3.5, the algorithm will identify the datapoint as a signal. This parameter should be set based on how many signals you expect. For example, if your data is normally distributed, a threshold (or: z-score) of 3.5 corresponds to a signaling probability of 0.00047 (from <a href="https://imgur.com/a/UJlXNJo" rel="noreferrer">this table</a>), which implies that you expect a signal once every 2128 datapoints (1/0.00047). The threshold therefore directly influences how sensitive the algorithm is and thereby also determines how often the algorithm signals. Examine your own data and choose a sensible threshold that makes the algorithm signal when you want it to (some trial-and-error might be needed here to get to a good threshold for your purpose).</p>
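<p>If you want to compute that expected signal rate directly rather than reading it from a table (a quick sketch; assumes SciPy is available and the data is approximately normal):</p>
<pre><code>from scipy.stats import norm

threshold = 3.5
p_signal = 2 * (1 - norm.cdf(threshold))  # two-sided tail probability, ~0.00047
print(1 / p_signal)                       # roughly one signal every ~2100 datapoints
</code></pre>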
<hr />
<p><strong>WARNING: The code above always loops over all datapoints every time it runs.</strong> When implementing this code, make sure to split the calculation of the signal into a separate function (without the loop). Then when a new datapoint arrives, update <code>filteredY</code>, <code>avgFilter</code> and <code>stdFilter</code> once. Do not recalculate the signals for all data every time there is a new datapoint (like in the example above); that would be extremely inefficient and slow in real-time applications.</p>
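<p>A minimal sketch of such a streaming version (Python/NumPy, same parameters as above; <code>update()</code> is called once per incoming datapoint and does O(lag) work):</p>
<pre><code>import numpy as np
from collections import deque

class StreamingZScore:
    def __init__(self, lag, threshold, influence):
        self.lag, self.threshold, self.influence = lag, threshold, influence
        self.filtered = deque(maxlen=lag)   # keeps only the last `lag` filtered values

    def update(self, value):
        if len(self.filtered) < self.lag:   # still filling the initial window
            self.filtered.append(value)
            return 0
        avg, std = np.mean(self.filtered), np.std(self.filtered)
        if abs(value - avg) > self.threshold * std:
            signal = 1 if value > avg else -1
            self.filtered.append(self.influence * value
                                 + (1 - self.influence) * self.filtered[-1])
        else:
            signal = 0
            self.filtered.append(value)
        return signal
</code></pre>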
<p>Other ways to modify the algorithm (for potential improvements) are:</p>
<ol>
<li>Use median instead of mean</li>
<li>Use a <a href="https://en.wikipedia.org/wiki/Robust_measures_of_scale" rel="noreferrer">robust measure of scale</a>, such as the median absolute deviation (MAD), instead of the standard deviation (see the sketch after this list)</li>
<li>Use a signalling margin, so the signal doesn't switch too often</li>
<li>Change the way the influence parameter works</li>
<li>Treat <em>up</em> and <em>down</em> signals differently (asymmetric treatment)</li>
<li>Create a separate <code>influence</code> parameter for the mean and std (<a href="https://stackoverflow.com/questions/43583302/peak-detection-for-growing-time-series-using-swift/43607179#43607179">as in this Swift translation</a>)</li>
</ol>
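<p>As an illustration of points 1 and 2 above, the mean/std. pair can be swapped for a median/MAD pair with a small helper like this (a sketch, not benchmarked as thoroughly as the standard version):</p>
<pre><code>import numpy as np

def robust_center_and_scale(window):
    # window: 1-D NumPy array holding the last `lag` filtered values
    center = np.median(window)
    mad = np.median(np.abs(window - center))
    return center, 1.4826 * mad   # 1.4826 makes the MAD consistent with the std. under normality
</code></pre>
<p>In the pseudocode, <code>avgFilter</code> and <code>stdFilter</code> would then be computed with this helper instead of <code>mean()</code> and <code>std()</code>.</p>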
<hr />
<h2>(Known) academic citations to this StackOverflow answer:</h2>
<ul>
<li><p>Cai, Y., Wang, X., Joos, G., & Kamwa, I. (2022). <a href="https://arxiv.org/pdf/2207.05356.pdf" rel="noreferrer"><strong>An Online Data-Driven Method to Locate Forced Oscillation Sources from Power Plants Based on Sparse Identification of Nonlinear Dynamics (SINDy)</strong></a>. IEEE Transactions on Power Systems.</p>
</li>
<li><p>Yang, S., Yim, J., Kim, J., & Shin, H. V. (2022). <a href="https://dl.acm.org/doi/abs/10.1145/3491102.3517461" rel="noreferrer"><strong>CatchLive: Real-time Summarization of Live Streams with Stream Content and Interaction Data</strong></a>. <em>CHI Conference on Human Factors in Computing Systems</em>, 1-20.</p>
</li>
<li><p>Feng, D., Tan, Z., Engwirda, D., Liao, C., Xu, D., Bisht, G., ... & Leung, R. (2022). <a href="https://hess.copernicus.org/preprints/hess-2022-251/" rel="noreferrer"><strong>Investigating coastal backwater effects and flooding in the coastal zone using a global river transport model on an unstructured mesh</strong></a>. <em>Hydrology and Earth System Sciences Discussions</em>, 1-31 [preprint].</p>
</li>
<li><p>Link, J., Perst, T., Stoeve, M., & Eskofier, B. M. (2022). <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9002797/" rel="noreferrer"><strong>Wearable sensors for activity recognition in ultimate frisbee using convolutional neural networks and transfer learning</strong></a>. <em>Sensors</em>, 22(7), 2560.</p>
</li>
<li><p>Romeiro, J. M. N., Torres, F. T. P., & Pirotti, F. (2021). <a href="https://www.sifet.org/bollettino/index.php/bollettinosifet/article/view/2266" rel="noreferrer"><strong>Evaluation of Effect of Prescribed Fires Using Spectral Indices and SAR Data</strong></a>. <em>Bollettino della società italiana di fotogrammetria e topografia</em>, (2), 36-56.</p>
</li>
<li><p>Moore, J., Goffin, P., Wiese, J., & Meyer, M. (2021). <a href="https://dl.acm.org/doi/abs/10.1145/3494964" rel="noreferrer"><strong>An Interview Method for Engaging Personal Data</strong></a>. <em>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies</em>, 5(4), 1-28.</p>
</li>
<li><p>Rykov, Y., Thach, T. Q., Bojic, I., Christopoulos, G., & Car, J. (2021). <a href="https://mhealth.jmir.org/2021/10/e24872" rel="noreferrer"><strong>Digital Biomarkers for Depression Screening With Wearable Devices: Cross-sectional Study With Machine Learning Modeling</strong></a>. <em>JMIR mHealth and uHealth</em>, 9(10), e24872.</p>
</li>
<li><p>Hong, Y., Xin, Y., Martin, H., Bucher, D., & Raubal, M. (2021). <a href="https://drops.dagstuhl.de/opus/volltexte/2021/14763/" rel="noreferrer"><strong>A Clustering-Based Framework for Individual Travel Behaviour Change Detection</strong></a>. In 11th International Conference on Geographic Information Science (GIScience 2021)-Part II.</p>
</li>
<li><p>Grammenos, A., Kalyvianaki, E., & Pietzuch, P. (2021). <a href="https://arxiv.org/abs/2104.13429" rel="noreferrer"><strong>Pronto: Federated Task Scheduling</strong></a>. arXiv preprint arXiv:2104.13429.</p>
</li>
<li><p>Courtial, N. (2020). <a href="https://tel.archives-ouvertes.fr/tel-03048963/" rel="noreferrer"><strong>Fusion d’images multimodales pour l’assistance de procédures d’électrophysiologie cardiaque</strong></a>. <em>Doctoral dissertation</em>, Université Rennes.</p>
</li>
<li><p>Beckman, W. F., Jiménez, M. Á. L., Moerland, P. D., Westerhoff, H. V., & Verschure, P. J. (2020). <a href="https://www.biorxiv.org/content/10.1101/2020.12.23.424175v1.full" rel="noreferrer"><strong>4sUDRB-sequencing for genome-wide transcription bursting quantification in breast cancer cells</strong></a>. bioRxiv.</p>
</li>
<li><p>Olkhovskiy, M., Müllerová, E., & Martínek, P. (2020). <a href="https://content.sciendo.com/view/journals/jee/71/6/article-p397.xml" rel="noreferrer"><strong>Impulse signals classification using one dimensional convolutional neural network</strong></a>. Journal of Electrical Engineering, 71(6), 397-405.</p>
</li>
<li><p>Gao, S., & Calderon, D. P. (2020). <a href="https://www.nature.com/articles/s41598-020-77162-3" rel="noreferrer"><strong>Robust alternative to the righting reflex to assess arousal in rodents</strong></a>. Scientific reports, 10(1), 1-11.</p>
</li>
<li><p>Chen, G. & Dong, W. (2020). <a href="https://dl.acm.org/doi/abs/10.1145/3418210" rel="noreferrer"><strong>Reactive Jamming and Attack Mitigation over Cross-Technology Communication Links</strong></a>. ACM Transactions on Sensor Networks, 17(1).</p>
</li>
<li><p>Takahashi, R., Fukumoto, M., Han, C., Sasatani, T., Narusue, Y., & Kawahara, Y. (2020). <a href="https://dl.acm.org/doi/abs/10.1145/3379337.3415873" rel="noreferrer"><strong>TelemetRing: A Batteryless and Wireless Ring-shaped Keyboard using Passive Inductive Telemetry</strong></a>. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (pp. 1161-1168).</p>
</li>
<li><p>Negus, M. J., Moore, M. R., Oliver, J. M., Cimpeanu, R. (2020). <a href="https://arxiv.org/abs/2009.09872" rel="noreferrer"><strong>Droplet impact onto a spring-supported plate: analysis and simulations</strong></a>. Journal of Engineering Mathematics, 128(3).</p>
</li>
<li><p>Yin, C. (2020). <a href="https://arxiv.org/abs/2006.00280" rel="noreferrer"><strong>Dinucleotide repeats in coronavirus SARS-CoV-2 genome: evolutionary implications</strong></a>. ArXiv e-print, accessible from: <a href="https://arxiv.org/pdf/2006.00280.pdf" rel="noreferrer">https://arxiv.org/pdf/2006.00280.pdf</a></p>
</li>
<li><p>Esnaola-Gonzalez, I., Gómez-Omella, M., Ferreiro, S., Fernandez, I., Lázaro, I., & García, E. (2020). <a href="https://www.mdpi.com/1424-8220/20/6/1549" rel="noreferrer"><strong>An IoT Platform Towards the Enhancement of Poultry Production Chains</strong></a>. Sensors, 20(6), 1549.</p>
</li>
<li><p>Gao, S., & Calderon, D. P. (2020). <a href="https://www.biorxiv.org/content/10.1101/2020.02.19.956789v1" rel="noreferrer"><strong>Continuous regimens of cortico-motor integration calibrate levels of arousal during emergence from anesthesia</strong></a>. bioRxiv.</p>
</li>
<li><p>Cloud, B., Tarien, B., Liu, A., Shedd, T., Lin, X., Hubbard, M., ... & Moore, J. K. (2019). <a href="https://journals.plos.org/plosone/article/file?type=printable&id=10.1371/journal.pone.0225690" rel="noreferrer"><strong>Adaptive smartphone-based sensor fusion for estimating competitive rowing kinematic metrics</strong></a>. PloS one, 14(12).</p>
</li>
<li><p>Ceyssens, F., Carmona, M. B., Kil, D., Deprez, M., Tooten, E., Nuttin, B., ... & Puers, R. (2019). <a href="https://doi.org/10.1016/j.snb.2018.12.030" rel="noreferrer"><strong>Chronic neural recording with probes of subcellular cross-section using 0.06 mm² dissolving microneedles as insertion device</strong></a>. <em>Sensors and Actuators B: Chemical</em>, 284, pp. 369-376.</p>
</li>
<li><p>Dons, E., Laeremans, M., Orjuela, J. P., Avila-Palencia, I., de Nazelle, A., Nieuwenhuijsen, M., ... & Nawrot, T. (2019). <a href="https://www.sciencedirect.com/science/article/pii/S1352231019304261" rel="noreferrer"><strong>Transport Most Likely to Cause Air Pollution Peak Exposures in Everyday Life: Evidence from over 2000 Days of Personal Monitoring</strong></a>. <em>Atmospheric Environment</em>, 213, 424-432.</p>
</li>
<li><p>Schaible B.J., Snook K.R., Yin J., et al. (2019). <a href="http://www.thepermanentejournal.org/issues/2019/summer/7190-twitter-poliomyelitis.html" rel="noreferrer"><strong>Twitter conversations and English news media reports on poliomyelitis in five different countries, January 2014 to April 2015</strong></a>. <em>The Permanente Journal</em>, 23, 18-181.</p>
</li>
<li><p>Lima, B. (2019). <a href="http://dx.doi.org/10.20381/ruor-24195" rel="noreferrer"><strong>Object Surface Exploration Using a Tactile-Enabled Robotic Fingertip</strong></a> (Doctoral dissertation, Université d'Ottawa/University of Ottawa).</p>
</li>
<li><p>Lima, B. M. R., Ramos, L. C. S., de Oliveira, T. E. A., da Fonseca, V. P., & Petriu, E. M. (2019). <a href="https://proceedings.cmbes.ca/index.php/proceedings/article/view/850" rel="noreferrer"><strong>Heart Rate Detection Using a Multimodal Tactile Sensor and a Z-score Based Peak Detection Algorithm</strong></a>. <em>CMBES Proceedings</em>, 42.</p>
</li>
<li><p>Lima, B. M. R., de Oliveira, T. E. A., da Fonseca, V. P., Zhu, Q., Goubran, M., Groza, V. Z., & Petriu, E. M. (2019, June). <a href="https://ieeexplore.ieee.org/abstract/document/8802209" rel="noreferrer"><strong>Heart Rate Detection Using a Miniaturized Multimodal Tactile Sensor</strong></a>. <em>In 2019 IEEE International Symposium on Medical Measurements and Applications (MeMeA)</em> (pp. 1-6). IEEE.</p>
</li>
<li><p>Ting, C., Field, R., Quach, T., Bauer, T. (2019). <a href="https://ieeexplore.ieee.org/abstract/document/8682257" rel="noreferrer"><strong>Generalized Boundary Detection Using Compression-based Analytics</strong></a>. <em>ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</em>, Brighton, United Kingdom, pp. 3522-3526.</p>
</li>
<li><p>Carrier, E. E. (2019). <a href="http://hdl.handle.net/2142/104752" rel="noreferrer"><strong>Exploiting compression in solving discretized linear systems</strong></a>. <em>Doctoral dissertation</em>, University of Illinois at Urbana-Champaign.</p>
</li>
<li><p>Khandakar, A., Chowdhury, M. E., Ahmed, R., Dhib, A., Mohammed, M., Al-Emadi, N. A., & Michelson, D. (2019). <a href="https://www.mdpi.com/1424-8220/19/7/1563" rel="noreferrer"><strong>Portable system for monitoring and controlling driver behavior and the use of a mobile phone while driving</strong></a>. <em>Sensors</em>, 19(7), 1563.</p>
</li>
<li><p>Baskozos, G., Dawes, J. M., Austin, J. S., Antunes-Martins, A., McDermott, L., Clark, A. J., ... & Orengo, C. (2019). <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6343954/" rel="noreferrer"><strong>Comprehensive analysis of long noncoding RNA expression in dorsal root ganglion reveals cell-type specificity and dysregulation after nerve injury</strong></a>. <em>Pain</em>, 160(2), 463.</p>
</li>
<li><p>Cloud, B., Tarien, B., Crawford, R., & Moore, J. (2018). <a href="https://doi.org/10.31224/osf.io/nykuh" rel="noreferrer"><strong>Adaptive smartphone-based sensor fusion for estimating competitive rowing kinematic metrics</strong></a>. <em>engrXiv Preprints</em>.</p>
</li>
<li><p>Zajdel, T. J. (2018). <a href="https://escholarship.org/uc/item/7vb3835n" rel="noreferrer"><strong>Electronic Interfaces for Bacteria-Based Biosensing</strong></a>. <em>Doctoral dissertation</em>, UC Berkeley.</p>
</li>
<li><p>Perkins, P., Heber, S. (2018). <a href="https://ieeexplore.ieee.org/abstract/document/8541902" rel="noreferrer"><strong>Identification of Ribosome Pause Sites Using a Z-Score Based Peak Detection Algorithm</strong></a>. <em>IEEE 8th International Conference on Computational Advances in Bio and Medical Sciences (ICCABS)</em>, ISBN: 978-1-5386-8520-4.</p>
</li>
<li><p>Moore, J., Goffin, P., Meyer, M., Lundrigan, P., Patwari, N., Sward, K., & Wiese, J. (2018). <a href="https://www.cs.utah.edu/%7Ewiese/publications/MAAV_paper_v40.pdf" rel="noreferrer"><strong>Managing In-home Environments through Sensing, Annotating, and Visualizing Air Quality Data</strong></a>. <em>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies</em>, 2(3), 128.</p>
</li>
<li><p>Lo, O., Buchanan, W. J., Griffiths, P., and Macfarlane, R. (2018), <a href="https://doi.org/10.1155/2018/5906368" rel="noreferrer"><strong>Distance Measurement Methods for Improved Insider Threat Detection</strong></a>, <em>Security and Communication Networks</em>, Vol. 2018, Article ID 5906368.</p>
</li>
<li><p>Apurupa, N. V., Singh, P., Chakravarthy, S., & Buduru, A. B. (2018). <a href="https://repository.iiitd.edu.in/jspui/handle/123456789/632" rel="noreferrer"><strong>A critical study of power consumption patterns in Indian Apartments</strong></a>. <em>Doctoral dissertation</em>, IIIT-Delhi.</p>
</li>
<li><p>Scirea, M. (2017). <a href="https://en.itu.dk/%7E/media/en/research/phd-programme/phd-defences/2017/phd-thesis-temporary-version-marco-scirea-pdf.pdf?la=en" rel="noreferrer"><strong>Affective Music Generation and its effect on player experience</strong></a>. <em>Doctoral dissertation</em>, IT University of Copenhagen, Digital Design.</p>
</li>
<li><p>Scirea, M., Eklund, P., Togelius, J., & Risi, S. (2017). <a href="http://sebastianrisi.com/wp-content/uploads/scirea_ceec17.pdf" rel="noreferrer"><strong>Primal-improv: Towards co-evolutionary musical improvisation</strong></a>. <em>Computer Science and Electronic Engineering (CEEC)</em>, 2017 (pp. 172-177). IEEE.</p>
</li>
<li><p>Catalbas, M. C., Cegovnik, T., Sodnik, J. and Gulten, A. (2017). <a href="https://ieeexplore.ieee.org/document/8266142/" rel="noreferrer"><strong>Driver fatigue detection based on saccadic eye movements</strong></a>, <em>10th International Conference on Electrical and Electronics Engineering (ELECO), pp. 913-917.</em></p>
</li>
</ul>
<p><strong>Other works using the algorithm from this answer</strong></p>
<ul>
<li><p>Bergamini, E. and E. Mourlon-Druol (2021). <a href="https://www.bruegel.org/wp-content/uploads/2021/03/WP-2021-04.pdf" rel="noreferrer"><strong>Talking about Europe: exploring 70 years of news archives</strong></a>. Working Paper 04/2021, Bruegel.</p>
</li>
<li><p>Cox, G. (2020). <a href="https://www.baeldung.com/cs/signal-peak-detection" rel="noreferrer"><strong>Peak Detection in a Measured Signal</strong></a>. <em>Online article on <a href="https://www.baeldung.com/cs/signal-peak-detection" rel="noreferrer">https://www.baeldung.com/cs/signal-peak-detection</a></em>.</p>
</li>
<li><p>Raimundo, D. W. (2020). <a href="https://pub.tik.ee.ethz.ch/students/2020-FS/SA-2020-09.pdf" rel="noreferrer"><strong>SwitP: Mobile Application for Real-Time Swimming Analysis.</strong></a>. <em>Semester Thesis</em>, ETH Zürich.</p>
</li>
<li><p>Bernardi, D. (2019). <a href="https://aaltodoc.aalto.fi/handle/123456789/39146" rel="noreferrer"><strong>A feasibility study on pairing a smartwatch and a mobile device through multi-modal gestures</strong></a>. <em>Master thesis</em>, Aalto University.</p>
</li>
<li><p>Lemmens, E. (2018). <a href="https://pure.tue.nl/ws/portalfiles/portal/125708557/1026332_Master_Thesis_Eef_Lemmens_BIS_269.pdf" rel="noreferrer"><strong>Outlier detection in event logs by using statistical methods</strong></a>, <em>Master thesis</em>, University of Eindhoven.</p>
</li>
<li><p>Willems, P. (2017). <a href="http://purl.utwente.nl/essays/73246" rel="noreferrer"><strong>Mood controlled affective ambiences for the elderly</strong></a>, <em>Master thesis</em>, University of Twente.</p>
</li>
<li><p>Ciocirdel, G. D. and Varga, M. (2016). <a href="https://event.cwi.nl/lsde/2016/papers/group02.pdf" rel="noreferrer"><strong>Election Prediction Based on Wikipedia Pageviews</strong></a>. <em>Project paper</em>, Vrije Universiteit Amsterdam.</p>
</li>
</ul>
<p><strong>Other applications of the algorithm from this answer</strong></p>
<ul>
<li><p><a href="https://www.avo.app/blog/avo-audit" rel="noreferrer"><strong>Avo Audit dbt package</strong></a>. Avo Company (next-generation analytics governance).</p>
</li>
<li><p><a href="https://openbci.com/community/synthesized-speech-with-openbci-system/" rel="noreferrer"><strong>Synthesized speech with OpenBCI system</strong></a>, SarahK01.</p>
</li>
<li><p><a href="https://github.com/hudson-and-thames/mlfinlab/blob/master/mlfinlab/filters/filters.py" rel="noreferrer"><strong>Python package: Machine Learning Financial Laboratory</strong></a>, based on the work of De Prado, M. L. (2018). <a href="https://link.springer.com/article/10.1007/s11408-019-00341-4" rel="noreferrer"><strong>Advances in financial machine learning</strong></a>. John Wiley & Sons.</p>
</li>
<li><p><a href="https://github.com/adafruit/Adafruit_CircuitPlayground" rel="noreferrer"><strong>Adafruit CircuitPlayground Library</strong></a>, Adafruit board (Adafruit Industries)</p>
</li>
<li><p><a href="https://github.com/jeeshnair/ubicomp" rel="noreferrer"><strong>Step tracker algorithm</strong></a>, Android App (jeeshnair)</p>
</li>
<li><p><a href="https://cran.r-project.org/package=animaltracker" rel="noreferrer"><strong>R package: animaltracker</strong></a> (Joe Champion, Thea Sukianto)</p>
</li>
</ul>
<p><strong>Links to other peak detection algorithms</strong></p>
<ul>
<li><a href="https://stackoverflow.com/questions/59557910/real-time-peak-detection-in-noisy-sinusoidal-time-series/59565524#59565524"><strong>Real-time peak detection in noisy sinusoidal time-series</strong></a></li>
</ul>
<hr />
<h2>How to reference this algorithm:</h2>
<blockquote>
<p>Brakel, J.P.G. van (2014). "Robust peak detection algorithm using z-scores". Stack Overflow. Available at: <a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/22640362#22640362"> https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/22640362#22640362</a> (version: 2020-11-08).</p>
</blockquote>
<blockquote>
<p><sub><strong>Bibtex</strong>
@misc{brakel2014, author = {Brakel, J.P.G van}, title = {Robust peak detection algorithm using z-scores}, url = {https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/22640362#22640362}, language = {en}, year = {2014}, urldate = {2022-04-12}, journal = {Stack Overflow}, howpublished = {https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/22640362#22640362}}</sub></p>
</blockquote>
<hr />
<p><em>If you use this function somewhere, please credit me by using above reference. If you have any questions about the algorithm, post them in the comments below or contact me on <a href="https://www.linkedin.com/in/jpgvb" rel="noreferrer">LinkedIn</a>.</em></p>
<hr /> | 2014-03-25 16:16:01.680000+00:00 | 2022-08-16 18:54:50.777000+00:00 | 2022-08-16 18:54:50.777000+00:00 | null | 22,583,391 | <hr />
<p><strong>Update:</strong> The best performing algorithm <em>so far</em> <a href="https://stackoverflow.com/questions/22583391/peak-recognition-in-realtime-timeseries-data/22640362#22640362"><strong>is this one</strong></a>.</p>
<hr />
<p><em>This question explores robust algorithms for detecting sudden peaks in real-time timeseries data.</em></p>
<p>Consider the following example data:</p>
<p><a href="https://i.stack.imgur.com/yUeKr.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/yUeKr.jpg" alt="Plot of data" /></a></p>
<p><sub>Example of this data is in Matlab format (but this question is not about the language but about the algorithm):</sub></p>
<pre><code>p = [1 1 1.1 1 0.9 1 1 1.1 1 0.9 1 1.1 1 1 0.9 1 1 1.1 1 1 1 1 1.1 0.9 1 1.1 1 1 0.9, ...
1 1.1 1 1 1.1 1 0.8 0.9 1 1.2 0.9 1 1 1.1 1.2 1 1.5 1 3 2 5 3 2 1 1 1 0.9 1 1, ...
3 2.6 4 3 3.2 2 1 1 0.8 4 4 2 2.5 1 1 1];
</code></pre>
<p>You can clearly see that there are three large peaks and some small peaks. This dataset is a specific example of the class of timeseries datasets that the question is about. This class of datasets has two general features:</p>
<ol>
<li>There is basic noise with a general mean</li>
<li>There are large '<em>peaks</em>' or '<em>higher data points</em>' that significantly deviate from the noise.</li>
</ol>
<p>Let's also assume the following:</p>
<ul>
<li>The width of the peaks cannot be determined beforehand</li>
<li>The height of the peaks significantly deviates from the other values</li>
<li>The algorithm updates in realtime (so updates with each new datapoint)</li>
</ul>
<p>For such a situation, a boundary value needs to be constructed which triggers signals. However, the boundary value cannot be static and must be determined in realtime based on an algorithm.</p>
<hr />
<p><strong>My Question: what is a good algorithm to calculate such thresholds in realtime?</strong> Are there specific algorithms for such situations? What are the most well-known algorithms?</p>
<hr />
<p><sub>Robust algorithms or useful insights are all highly appreciated. (can answer in any language: it's about the algorithm)</sub></p> | 2014-03-22 20:48:36.837000+00:00 | 2022-08-16 18:54:50.777000+00:00 | 2021-03-08 18:50:52.667000+00:00 | algorithm|language-agnostic|time-series|signal-processing|data-analysis | ['https://en.wikipedia.org/wiki/Statistical_dispersion', 'https://en.wikipedia.org/wiki/Standard_score', 'https://en.wikipedia.org/wiki/Stationary_process', 'https://i.imgur.com/LFvEM2Y.gif', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/54507329#54507329', 'https://i.stack.imgur.com/KdpF7.jpg', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/54507329#54507329', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/54507329#54507329', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/42144471#42144471', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/72502027#72502027', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/43512887#43512887', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/56451135#56451135', 'https://stackoverflow.com/questions/43583302/peak-detection-for-growing-time-series-using-swift/43607179#43607179', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/46575416#46575416', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/46956908#46956908', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/46998001#46998001', 'https://github.com/swizard0/smoothed_z_score', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/48231877#48231877', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/48772305#48772305', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/48895639#48895639', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/51185583#51185583', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/52447946#52447946', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/53614452#53614452', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/54507140#54507140', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/56174275#56174275', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/57889588#57889588', 'https://github.com/Bluejay47/zScore', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/58859333#58859333', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/59687045#59687045', 'https://github.com/gtjamesa/php-zscore', 'https://stackoverflow.com/a/66700221/833188', 'https://en.wikipedia.org/wiki/Stationary_process', 'https://en.wikipedia.org/wiki/Structural_break', 'https://imgur.com/a/UJlXNJo', 'https://en.wikipedia.org/wiki/Robust_measures_of_scale', 
'https://stackoverflow.com/questions/43583302/peak-detection-for-growing-time-series-using-swift/43607179#43607179', 'https://arxiv.org/pdf/2207.05356.pdf', 'https://dl.acm.org/doi/abs/10.1145/3491102.3517461', 'https://hess.copernicus.org/preprints/hess-2022-251/', 'https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9002797/', 'https://www.sifet.org/bollettino/index.php/bollettinosifet/article/view/2266', 'https://dl.acm.org/doi/abs/10.1145/3494964', 'https://mhealth.jmir.org/2021/10/e24872', 'https://drops.dagstuhl.de/opus/volltexte/2021/14763/', 'https://arxiv.org/abs/2104.13429', 'https://tel.archives-ouvertes.fr/tel-03048963/', 'https://www.biorxiv.org/content/10.1101/2020.12.23.424175v1.full', 'https://content.sciendo.com/view/journals/jee/71/6/article-p397.xml', 'https://www.nature.com/articles/s41598-020-77162-3', 'https://dl.acm.org/doi/abs/10.1145/3418210', 'https://dl.acm.org/doi/abs/10.1145/3379337.3415873', 'https://arxiv.org/abs/2009.09872', 'https://arxiv.org/abs/2006.00280', 'https://arxiv.org/pdf/2006.00280.pdf', 'https://www.mdpi.com/1424-8220/20/6/1549', 'https://www.biorxiv.org/content/10.1101/2020.02.19.956789v1', 'https://journals.plos.org/plosone/article/file?type=printable&id=10.1371/journal.pone.0225690', 'https://doi.org/10.1016/j.snb.2018.12.030', 'https://www.sciencedirect.com/science/article/pii/S1352231019304261', 'http://www.thepermanentejournal.org/issues/2019/summer/7190-twitter-poliomyelitis.html', 'http://dx.doi.org/10.20381/ruor-24195', 'https://proceedings.cmbes.ca/index.php/proceedings/article/view/850', 'https://ieeexplore.ieee.org/abstract/document/8802209', 'https://ieeexplore.ieee.org/abstract/document/8682257', 'http://hdl.handle.net/2142/104752', 'https://www.mdpi.com/1424-8220/19/7/1563', 'https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6343954/', 'https://doi.org/10.31224/osf.io/nykuh', 'https://escholarship.org/uc/item/7vb3835n', 'https://ieeexplore.ieee.org/abstract/document/8541902', 'https://www.cs.utah.edu/%7Ewiese/publications/MAAV_paper_v40.pdf', 'https://doi.org/10.1155/2018/5906368', 'https://repository.iiitd.edu.in/jspui/handle/123456789/632', 'https://en.itu.dk/%7E/media/en/research/phd-programme/phd-defences/2017/phd-thesis-temporary-version-marco-scirea-pdf.pdf?la=en', 'http://sebastianrisi.com/wp-content/uploads/scirea_ceec17.pdf', 'https://ieeexplore.ieee.org/document/8266142/', 'https://www.bruegel.org/wp-content/uploads/2021/03/WP-2021-04.pdf', 'https://www.baeldung.com/cs/signal-peak-detection', 'https://www.baeldung.com/cs/signal-peak-detection', 'https://pub.tik.ee.ethz.ch/students/2020-FS/SA-2020-09.pdf', 'https://aaltodoc.aalto.fi/handle/123456789/39146', 'https://pure.tue.nl/ws/portalfiles/portal/125708557/1026332_Master_Thesis_Eef_Lemmens_BIS_269.pdf', 'http://purl.utwente.nl/essays/73246', 'https://event.cwi.nl/lsde/2016/papers/group02.pdf', 'https://www.avo.app/blog/avo-audit', 'https://openbci.com/community/synthesized-speech-with-openbci-system/', 'https://github.com/hudson-and-thames/mlfinlab/blob/master/mlfinlab/filters/filters.py', 'https://link.springer.com/article/10.1007/s11408-019-00341-4', 'https://github.com/adafruit/Adafruit_CircuitPlayground', 'https://github.com/jeeshnair/ubicomp', 'https://cran.r-project.org/package=animaltracker', 'https://stackoverflow.com/questions/59557910/real-time-peak-detection-in-noisy-sinusoidal-time-series/59565524#59565524', 'https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/22640362#22640362', 'https://www.linkedin.com/in/jpgvb'] | 94 |
25,534,853 | <p>I'm working on <a href="https://github.com/thomasmueller/minperf" rel="nofollow noreferrer">an algorithm and Java implementation that needs less than 1.6 bits per key</a>.</p>
<p>Previously, I have implemented <a href="https://github.com/h2database/h2database/blob/master/h2/src/tools/org/h2/dev/hash/MinimalPerfectHash.java" rel="nofollow noreferrer">a minimal perfect hash function tool in Java</a> that needs less than 2.0 bits per key.</p>
<p>Other algorithms are implemented in <a href="http://cmph.sourceforge.net/" rel="nofollow noreferrer">CMPH</a>. For example CHD needs about 2.06 bits per key by default. It can be configured to use less space, but generation is then slower.</p>
<p><em>Update</em>: There is now <a href="https://arxiv.org/abs/1910.06416" rel="nofollow noreferrer">a paper</a> about my invention called "RecSplit: Minimal Perfect Hashing via Recursive Splitting"</p> | 2014-08-27 18:48:38.200000+00:00 | 2022-05-12 10:17:56.967000+00:00 | 2022-05-12 10:17:56.967000+00:00 | null | 6,743,316 | <p>I have many integers in range [0; 2^63-1]. There is only 10^8 integers, however. There is <em>no duplicates</em>. Full list is known at compile-time but it is <em>just unique random numbers</em>. These numbers <em>never changes</em>.<br>
To store one integer <em>explicitly</em>, 8 bytes required, and there is associated 1-byte values, so explicit storing requires about 860 MB.<br>
So I want to find minimal perfect hash function to map each of 10^8 integers from [0;2^63-1] to [0;10^8-1]. I should find this function only once, data never changes, and function can be complicated. But it should be minimal, perfect, and calculating should be fast. How I can do this better? Maybe it is possible to find and use some subsequences if they happens?<br>
Thanks.</p> | 2011-07-19 06:55:38.210000+00:00 | 2022-05-12 10:17:56.967000+00:00 | null | perfect-hash | ['https://github.com/thomasmueller/minperf', 'https://github.com/h2database/h2database/blob/master/h2/src/tools/org/h2/dev/hash/MinimalPerfectHash.java', 'http://cmph.sourceforge.net/', 'https://arxiv.org/abs/1910.06416'] | 4 |
53,136,471 | <p>Optimal solution of a NxNxN Rubik's cube is NP-Complete ... as discussed in this paper <a href="https://arxiv.org/abs/1706.06708" rel="nofollow noreferrer">https://arxiv.org/abs/1706.06708</a></p> | 2018-11-03 23:36:59.983000+00:00 | 2018-11-03 23:36:59.983000+00:00 | null | null | 40,109,785 | <p>I'm writing something about the relation of <code>3x3x3</code> Rubiks cube and theory of computation. I've read some texts talking about the god's number and optimally solutions, but I still can't figure out if solving a rubiks cube optimally is <code>P</code> or <code>NP</code>, if it is <code>P</code>, there is an algorithm to solve it in polynomial time? </p> | 2016-10-18 13:34:14.740000+00:00 | 2018-11-03 23:36:59.983000+00:00 | 2016-10-18 13:59:44.243000+00:00 | algorithm|complexity-theory|computation-theory|np|rubiks-cube | ['https://arxiv.org/abs/1706.06708'] | 1 |
65,285,375 | <p>Yes, In the past 2 years several papers have come out using Transformer architecture for time-series problems across business domains. These have been found to work particularly well compared to existing models. For e.g. see one such use case in medical domain - <a href="https://arxiv.org/pdf/2001.08317.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2001.08317.pdf</a>. This is a more generic writeup: <a href="https://towardsdatascience.com/attention-for-time-series-classification-and-forecasting-261723e0006d" rel="nofollow noreferrer">https://towardsdatascience.com/attention-for-time-series-classification-and-forecasting-261723e0006d</a></p> | 2020-12-14 07:52:42.120000+00:00 | 2020-12-14 07:52:42.120000+00:00 | null | null | 59,523,557 | <p>I have reading the paper on Transformer Architecture. GRU/LSTM have done well with Time Series problems. I was wondering if anyone has used Transformers for Time Series problems.</p> | 2019-12-29 21:43:18.620000+00:00 | 2020-12-14 07:52:42.120000+00:00 | null | machine-learning|deep-learning|nlp | ['https://arxiv.org/pdf/2001.08317.pdf', 'https://towardsdatascience.com/attention-for-time-series-classification-and-forecasting-261723e0006d'] | 2 |
33,835,894 | <p>There are a lot. The term is "<em>Overlapping community detection</em>".</p>
<p>Take a look at this paper, for example, where you can find a comparison of 14 such algorithms:</p>
<p><a href="http://arxiv.org/abs/1110.5813" rel="nofollow">Overlapping Community Detection in Networks: the State of the Art and Comparative Study [Xie, Kelley, Szymanski - 2011]</a></p>
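<p>If you just want to experiment quickly in Python, clique percolation — one of the classic overlapping methods — is available out of the box in NetworkX. A minimal sketch (using a toy graph; swap in your own):</p>
<pre><code>import networkx as nx
from networkx.algorithms.community import k_clique_communities

G = nx.karate_club_graph()                                   # any undirected graph
communities = [set(c) for c in k_clique_communities(G, 3)]   # 3-clique percolation

# a node may belong to several communities -> overlapping ("soft") membership
for node in G.nodes():
    memberships = [i for i, c in enumerate(communities) if node in c]
    print(node, memberships)
</code></pre>
<p>Note this gives overlapping <em>sets</em> rather than membership probabilities; for actual probabilities, a mixed-membership model such as BIGCLAM (below) is a better fit.</p>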
<p><a href="https://cs.stanford.edu/people/jure/pubs/bigclam-wsdm13.pdf" rel="nofollow">BIGCLAM</a>, an algorithm presented by Yang and Leskovec is an interesting one, as it scales very well. It is available as part of <a href="https://snap.stanford.edu/snap/description.html" rel="nofollow">SNAP</a>.</p> | 2015-11-20 20:53:02.497000+00:00 | 2015-11-21 13:49:35.917000+00:00 | 2015-11-21 13:49:35.917000+00:00 | null | 33,834,690 | <p>I feel in some cases it is reasonable for a person to be assigned to multiple communities. For example in the science field a professor may work in multiple research areas. Is there an algorithm that assigns a person to multiple communities, or a "soft-labeling approach"?(e.g. a positive probability associated with a person belonging to any detected community)</p>
<p>Thanks</p> | 2015-11-20 19:33:46.117000+00:00 | 2015-11-21 13:49:35.917000+00:00 | null | algorithm|graph|machine-learning|data-mining | ['http://arxiv.org/abs/1110.5813', 'https://cs.stanford.edu/people/jure/pubs/bigclam-wsdm13.pdf', 'https://snap.stanford.edu/snap/description.html'] | 3 |
53,073,514 | <p>Few points:</p>
<ul>
<li><p>I don't see resizing, aligning and whitening of the input face image that is fed into the network.</p></li>
<li><p>You cannot add a fixed margin of 50 to a variable-sized face. There has to be a scaling such that the face region fills almost the same region in every input image.</p></li>
<li><p>I am not sure about the model you are using, but if you are using <a href="https://arxiv.org/abs/1503.03832" rel="nofollow noreferrer">FaceNet</a>, your accepted matching threshold, 0.1, seems to be very low. It will not accept any matches unless it is the same exact image (with a distance of 0.0), or has a very minimal variation from the gallery image.</p></li>
</ul> | 2018-10-30 22:02:48.220000+00:00 | 2018-10-30 22:02:48.220000+00:00 | null | null | 52,963,149 | <p>I've been working on a face recognition attendance management system. I've built the pipeline from scratch but in the end,the script recognizes the wrong face among a group of 10 classes.
I've implemented the following pipeline using Tensorflow and Python.</p>
<ol>
<li>Capture images, resize, align them using dlib's shape predictor and store them in named folders for later comparison while performing recognition.</li>
<li><p>Pickle the images into a <code>data.pickle</code> file for later deserialization.</p></li>
<li><p>Use OpenCV with the MTCNN algorithm to detect faces in a frame captured by the webcam.</p></li>
<li>Pass these frames into a FaceNet network to create 128-D embeddings and compare them with the embeddings present in the pickle database.</li>
</ol>
<p>Given below is the main file, which runs steps 3 and 4:</p>
<pre><code>from keras import backend as K
import time
from multiprocessing.dummy import Pool
K.set_image_data_format('channels_first')
import cv2
import os
import glob
import numpy as np
from numpy import genfromtxt
import tensorflow as tf
from keras.models import load_model
from fr_utils import *
from inception_blocks_v2 import *
from mtcnn.mtcnn import MTCNN
import dlib
from imutils import face_utils
import imutils
import pickle
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
FRmodel = load_model('face-rec_Google.h5')
# detector = dlib.get_frontal_face_detector()
detector = MTCNN()
# FRmodel = faceRecoModel(input_shape=(3, 96, 96))
#
# # detector = dlib.get_frontal_face_detector()
# # predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
# def triplet_loss(y_true, y_pred, alpha = 0.3):
# """
# Implementation of the triplet loss as defined by formula (3)
#
# Arguments:
# y_pred -- python list containing three objects:
# anchor -- the encodings for the anchor images, of shape (None, 128)
# positive -- the encodings for the positive images, of shape (None, 128)
# negative -- the encodings for the negative images, of shape (None, 128)
#
# Returns:
# loss -- real number, value of the loss
# """
#
# anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
#
# pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
# neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
# basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
# loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
#
# return loss
#
# FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
# load_weights_from_FaceNet(FRmodel)
def ret_model():
return FRmodel
def prepare_database():
pickle_in = open("data.pickle","rb")
database = pickle.load(pickle_in)
return database
def unpickle_something(pickle_file):
pickle_in = open(pickle_file,"rb")
unpickled_file = pickle.load(pickle_in)
return unpickled_file
def webcam_face_recognizer(database):
cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)
while vc.isOpened():
ret, frame = vc.read()
img_rgb = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
img = frame
# We do not want to detect a new identity while the program is in the process of identifying another person
img = process_frame(img,img)
cv2.imshow("Preview", img)
cv2.waitKey(1)
vc.release()
def process_frame(img, frame):
"""
Determine whether the current frame contains the faces of people from our database
"""
# rects = detector(img)
rects = detector.detect_faces(img)
# Loop through all the faces detected and determine whether or not they are in the database
identities = []
for (i,rect) in enumerate(rects):
(x,y,w,h) = rect['box'][0],rect['box'][1],rect['box'][2],rect['box'][3]
img = cv2.rectangle(frame,(x, y),(x+w, y+h),(255,0,0),2)
identity = find_identity(frame, x-50, y-50, x+w+50, y+h+50)
cv2.putText(img, identity,(10,500), cv2.FONT_HERSHEY_SIMPLEX , 4,(255,255,255),2,cv2.LINE_AA)
if identity is not None:
identities.append(identity)
if identities != []:
cv2.imwrite('example.png',img)
return img
def find_identity(frame, x,y,w,h):
"""
Determine whether the face contained within the bounding box exists in our database
x1,y1_____________
| |
| |
|_________________x2,y2
"""
height, width, channels = frame.shape
# The padding is necessary since the OpenCV face detector creates the bounding box around the face and not the head
part_image = frame[y:y+h, x:x+w]
return who_is_it(part_image, database, FRmodel)
def who_is_it(image, database, model):
encoding = img_to_encoding(image, model)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current "emb" from the database.
dist = np.linalg.norm(db_enc.flatten() - encoding.flatten())
print('distance for %s is %s' %(name, dist))
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name
if dist < min_dist:
min_dist = dist
identity = name
if min_dist >0.1:
print('Unknown person')
else:
print(identity)
return identity
if __name__ == "__main__":
database = prepare_database()
webcam_face_recognizer(database)
</code></pre>
<p>What am I doing wrong here?
<strong>Here the FRmodel is the facenet trained model</strong></p> | 2018-10-24 07:25:11.763000+00:00 | 2018-10-30 22:02:48.220000+00:00 | null | python|opencv|face-detection|face-recognition | ['https://arxiv.org/abs/1503.03832'] | 1 |
45,073,495 | <p><a href="https://i.stack.imgur.com/I29CN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I29CN.png" alt="enter image description here"></a></p>
<p><em>Access Denied 403.</em>
Sadly, your client does not supply a proper User-Agent and is consequently excluded.</p>
<p>To fix this, pass a User-Agent header in the request:</p>
<pre><code>var options = {
url: 'https://arxiv.org/pdf/1611.10012.pdf',
headers: {
'Referer': 'https://arxiv.org',
'User-Agent': 'stagefright/1.2 (Linux;Android 5.0)'
}
}
request(options, function (error, response, body) {
console.log('error:', error);
console.log('statusCode:', response && response.statusCode);
console.log('body:', body);
});
</code></pre>
<p>List of user agents for User Agent <a href="https://gist.github.com/enginnr/ed572cf5c324ad04ff2e" rel="nofollow noreferrer">https://gist.github.com/enginnr/ed572cf5c324ad04ff2e</a></p> | 2017-07-13 06:59:23.957000+00:00 | 2017-07-13 06:59:23.957000+00:00 | null | null | 45,073,401 | <p>I am using request method for get the file stream, its works for all pdf files but when i try to get <a href="https://arxiv.org" rel="nofollow noreferrer">https://arxiv.org</a> website pdfs (<a href="https://arxiv.org/pdf/1611.10012.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.10012.pdf</a>) then its not working. </p>
<p>For <a href="https://arxiv.org/" rel="nofollow noreferrer">https://arxiv.org/</a> website pdfs its giving 403 fobidden status code whereas for other websites pdf files it return 200 status code.</p>
<p>Here is my code for getting PDFs from other websites:</p>
<pre><code> request('http://uberthings.com/mobile/intro_to_mobile.pdf', function (error, response, body) {
console.log('error:', error);
console.log('statusCode:', response && response.statusCode);
console.log('body:', body);
});
</code></pre>
<p>// Return 200 status code</p>
<p>Here is my code for <a href="https://arxiv.org" rel="nofollow noreferrer">https://arxiv.org</a> PDFs:</p>
<pre><code> request('https://arxiv.org/pdf/1611.10012.pdf', function (error, response, body) {
console.log('error:', error);
console.log('statusCode:', response && response.statusCode);
console.log('body:', body);
});
</code></pre>
<p>// Return 403 status code</p>
<p>Any Idea why request method for particular website (<a href="https://arxiv.org/pdf/1611.10012.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.10012.pdf</a>) is not working ?</p> | 2017-07-13 06:54:48.217000+00:00 | 2017-07-13 06:59:23.957000+00:00 | null | node.js|pdf|request | ['https://i.stack.imgur.com/I29CN.png', 'https://gist.github.com/enginnr/ed572cf5c324ad04ff2e'] | 2 |
34,148,259 | <p>Having only one cell (one hidden unit) is not a good idea even if you are just testing the correctness of your code. You should try 50 even for such simple problem. This paper here: <a href="http://arxiv.org/pdf/1503.04069.pdf" rel="nofollow">http://arxiv.org/pdf/1503.04069.pdf</a> gives you very clear gradient rules for updating the parameters. Having said that, there is no need to implement your own even if your dataset and/or the problem you are working on is new LSTM. Pick from the existing library (Theano, mxnet, Torch etc...) and modify from there I think is a easier way, given that it's less error prone and it supports gpu computing which is essential for training lstm within a reasonable amount of time.</p> | 2015-12-08 04:51:54.613000+00:00 | 2015-12-08 04:51:54.613000+00:00 | null | null | 28,850,154 | <p>I have attempted to program my own LSTM (long short term memory) neural network. I would like to verify that the basic functionality is working. I have implemented a Back propagation through time BPTT algorithm to train a single cell network.</p>
<p>Should a single-cell LSTM network be able to learn a simple sequence, or is more than one cell necessary? The network does not seem to be able to learn a simple sequence such as 1 0 0 0 1 0 0 0 1 0 0 0 1.</p>
<p>I am sending the sequence's 1's and 0's one by one, in order, into the network and feeding it forward. I record each output for the sequence.</p>
<p>After running the whole sequence through the LSTM cell, I feed the mean error signals back into the cell, saving the weight changes internal to the cell in a separate collection. After running all the errors through one by one and calculating the new weights after each error, I average the new weights together to get the new value for each weight in the cell.</p>
<p>Am I doing something wrong? I would very much appreciate any advice.</p>
<p>Thank you so much!</p> | 2015-03-04 08:54:24.643000+00:00 | 2016-07-28 13:55:26.910000+00:00 | 2015-04-02 20:05:47.867000+00:00 | machine-learning|artificial-intelligence|neural-network|lstm | ['http://arxiv.org/pdf/1503.04069.pdf'] | 1 |
57,299,359 | <p>The article you've referenced has a reasonable exposition of the <code>Doc2Vec</code> algorithm, but its example code includes a very damaging anti-pattern: calling <code>train()</code> multiple times in a loop, while manually managing <code>alpha</code>. <strong>This is hardly ever a good idea, and very error-prone.</strong></p>
<p>Instead, <strong>don't</strong> change the default <code>min_alpha</code>, and call <code>train()</code> just once with the desired <code>epochs</code>, and let the method smoothly manage the <code>alpha</code> itself. </p>
<p>Your general approach is reasonable: develop a repeatable way of scoring your models based on some prior ideas of what should count as similar, then try a wide range of model parameters and pick the one that scores best. </p>
<p>When you say that your own two methods of accuracy calculation don't match, that's a little concerning, because the <code>most_similar()</code> method does in fact check your query-point against all known doc-vectors, and returns those with the greatest cosine-similarity. Those should be identical as those that you've calculated to have the least cosine-distance. If you added to your question your exact code – how you're calculating cosine-distances, and how you're calling <code>most_similar()</code> – then it would probably be clear what subtle differences or errors are the cause of the discrepancy. (There shouldn't be any essential difference, but given that: you'll likely want to use the <code>most_similar()</code> results, because they're known non-buggy, and use efficient bulk array library operations that are probably faster than whatever loop you've authored.)</p>
<p>Note that you don't necessarily have to hold back your set of known-highly-similar document pairs. Since <code>Doc2Vec</code> is an unsupervised algorithm, you're <strong>not</strong> feeding it the preferred "make sure these documents are similar" results during training. It's fairly reasonable to train on the full set of documents, then pick the model that best captures your desired most-similar relationships, and believe that the inclusion of more documents actually helped you find the best parameters. </p>
<p>(Such a process might, however, slightly over-estimate the expected accuracy on future unseen docs, or some other hypothetical "other 20K" training documents. But it would still be plausibly finding the "best possible" metaparameters given your training data.) </p>
<p>(If you don't feed them all during training, then during testing you'll need to be using <code>infer_vector()</code> for the unseen docs, rather than just looking up the learned vectors from training. You haven't shown your code for such scoring/inference, but that's another step that might be done wrong. If you just train vectors for all available docs together, that possibility for error is eliminated.)</p>
<p>Checking if desired docs are in the top-5 (or top-N) most-similar is just one way to score a model. Another way, that was used in a couple of the original 'Paragraph Vector' (<code>Doc2Vec</code>) papers, is for each such pair, also pick another <em>random</em> document. Count the model as accurate each time it reports the known-similar docs as closer to each other than the 3rd randomly-chosen document. In the original 'Paragraph Vector' papers, existing search-ranking systems (which reported certain text snippets in response to the same probe queries) or hand-curated categories (as in Wikipedia or Arxiv) were used to generate such evaluation pairs: texts in the same search-results-page, or same category, were checked to see if they were 'closer' inside a model to each other than other random docs. </p>
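<p>For concreteness, here is a rough sketch of that pair-versus-random-document scoring (plain Python/numpy; <code>vecs</code> and <code>known_pairs</code> are placeholders for your own mapping of doc-tags to trained vectors and your labelled similar pairs, not anything supplied by gensim itself):</p>
<pre><code>import random
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def triplet_accuracy(vecs, known_pairs):
    # vecs: dict doc-id -> vector; known_pairs: list of (id_a, id_b) known-similar pairs
    all_ids = list(vecs.keys())
    correct = 0
    for id_a, id_b in known_pairs:
        id_rand = random.choice([d for d in all_ids if d not in (id_a, id_b)])
        if cosine(vecs[id_a], vecs[id_b]) > cosine(vecs[id_a], vecs[id_rand]):
            correct += 1
    return correct / len(known_pairs)
</code></pre>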
<p>If your question were expanded to describe more about some of the initial parameters you've tried (such as the full parameters you're supplying to <code>Doc2Vec</code> and <code>train()</code>), and what has seemed to help or hurt, it might then be possible to suggest other ranges of parameters worth checking. </p> | 2019-07-31 21:58:09.463000+00:00 | 2019-07-31 21:58:09.463000+00:00 | null | null | 57,283,636 | <p>I have around 20k documents with 60 - 150 words. Out of these 20K documents, there are 400 documents for which the similar document are known. These 400 documents serve as my test data.</p>
<p>At present I am removing those 400 documents and using the remaining 19600 documents for training the Doc2Vec model. Then I extract the vectors of the train and test data. Now, for each test document, I find its cosine distance to all 19600 train documents and select the top 5 with the smallest cosine distance. If the marked similar document is present in this top 5, I count the record as accurate. Accuracy% = No. of accurate records / Total number of records.</p>
<p>The other way I find similar documents is by using Doc2Vec's most_similar method. Then I calculate accuracy using the above formula.</p>
<p>The above two accuracies don't match. With each epoch, one increases while the other decreases.</p>
<p>I am using the code given here: <a href="https://medium.com/scaleabout/a-gentle-introduction-to-doc2vec-db3e8c0cce5e" rel="nofollow noreferrer">https://medium.com/scaleabout/a-gentle-introduction-to-doc2vec-db3e8c0cce5e</a>. For training the Doc2Vec.</p>
<p>I would like to know how to tune the hyperparameters so that I can get making accuracy by using above-mentioned formula. Should I use cosine distance to find the most similar documents or shall I use the gensim's most similar function?</p> | 2019-07-31 05:14:40.743000+00:00 | 2019-07-31 21:58:09.463000+00:00 | null | python|nlp|gensim|doc2vec|sentence-similarity | [] | 0 |
30,045,244 | <p>It is a common practice to decrease the learning rate (lr) as the optimization/learning process progresses. However, it is not clear how exactly the learning rate should be decreased as a function of the iteration number.</p>
<p>If you use <a href="https://github.com/NVIDIA/DIGITS" rel="noreferrer">DIGITS</a> as an interface to Caffe, you will be able to visually see how the different choices affect the learning rate.</p>
<p><strong>fixed:</strong> the learning rate is kept fixed throughout the learning process.</p>
<hr>
<p><strong>inv:</strong> the learning rate is decaying as ~<code>1/T</code><br>
<img src="https://i.stack.imgur.com/LScLY.png" alt="enter image description here"></p>
<hr>
<p><strong>step:</strong> the learning rate is piecewise constant, dropping every X iterations<br>
<img src="https://i.stack.imgur.com/W5h6j.png" alt="enter image description here"> </p>
<hr>
<p><strong>multistep:</strong> piecewise constant at arbitrary intervals<br>
<img src="https://i.stack.imgur.com/DW0qa.png" alt="enter image description here"></p>
<hr>
<p>You can see exactly how the learning rate is computed in the function <a href="https://github.com/BVLC/caffe/blob/master/src/caffe/solvers/sgd_solver.cpp#L27" rel="noreferrer"><code>SGDSolver<Dtype>::GetLearningRate</code></a> (<em>solvers/sgd_solver.cpp</em> line ~30).</p>
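<p>For reference, those decay rules can be written down compactly. Below is a small Python sketch of the formulas as I read them from that function (using the solver parameters <code>base_lr</code>, <code>gamma</code>, <code>power</code>, <code>stepsize</code> and <code>stepvalue</code>; double-check against your Caffe version):</p>
<pre><code>def caffe_lr(policy, it, base_lr, gamma=None, power=None, stepsize=None, stepvalues=()):
    # Sketch of SGDSolver::GetLearningRate; `it` is the current iteration.
    if policy == 'fixed':
        return base_lr
    if policy == 'step':
        return base_lr * gamma ** (it // stepsize)
    if policy == 'inv':
        return base_lr * (1 + gamma * it) ** (-power)
    if policy == 'multistep':
        current_step = sum(1 for sv in stepvalues if it >= sv)   # how many stepvalues passed
        return base_lr * gamma ** current_step
    raise ValueError('unknown policy: %s' % policy)

# e.g. caffe_lr('step', it=20000, base_lr=0.01, gamma=0.1, stepsize=10000) -> 0.0001
</code></pre>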
<hr>
<p>Recently, I came across an interesting and unconventional approach to learning-rate tuning: <a href="http://arxiv.org/abs/1506.01186" rel="noreferrer">Leslie N. Smith's work "No More Pesky Learning Rate Guessing Games"</a>. In his report, Leslie suggests to use <code>lr_policy</code> that alternates between decreasing and <em>increasing</em> the learning rate. His work also suggests how to implement this policy in Caffe.</p> | 2015-05-05 05:55:11.887000+00:00 | 2017-06-04 14:28:38.080000+00:00 | 2017-06-04 14:28:38.080000+00:00 | null | 30,033,096 | <p>I just try to find out how I can use <a href="http://caffe.berkeleyvision.org/">Caffe</a>. To do so, I just took a look at the different <code>.prototxt</code> files in the examples folder. There is one option I don't understand:</p>
<pre><code># The learning rate policy
lr_policy: "inv"
</code></pre>
<p>Possible values seem to be:</p>
<ul>
<li><code>"fixed"</code></li>
<li><code>"inv"</code></li>
<li><code>"step"</code></li>
<li><code>"multistep"</code></li>
<li><code>"stepearly"</code></li>
<li><code>"poly"</code> </li>
</ul>
<p>Could somebody please explain those options?</p> | 2015-05-04 14:47:04.150000+00:00 | 2018-03-22 06:46:36.260000+00:00 | 2018-03-22 06:46:36.260000+00:00 | machine-learning|neural-network|deep-learning|caffe|gradient-descent | ['https://github.com/NVIDIA/DIGITS', 'https://github.com/BVLC/caffe/blob/master/src/caffe/solvers/sgd_solver.cpp#L27', 'http://arxiv.org/abs/1506.01186'] | 3 |
43,179,197 | <p>Answer to your question: Yes, they can. They can be slower and they can be faster than classic descriptors. For example, using only a single filter and several max-poolings will almost certainly be faster. But the results will also certainly be crappy.</p>
<p>You should ask a much more specific question. Relevant parts are:</p>
<ul>
<li><strong>Problem</strong>: Classification / Detection / Semantic Segmentation / Instance Segmentation / Face verification / ... ?</li>
<li><strong>Constraints</strong>: Minimum accuracy / maximum speed / maximum latency?</li>
<li><strong>Evaluation specifics</strong>:
<ul>
<li>Which hardware is available (GPUs)?</li>
<li>Do you evaluate on a single image? Often you can evaluate up to 512 images in about the same time as one image.</li>
</ul></li>
</ul>
<p>Also: The input image size should not be relevant. If CNNs achieve better results on smaller inputs than classic descriptors, why should you care?</p>
<h2>Papers</h2>
<p>Please note that CNNs are usually not tweaked towards speed, but towards accuracy.</p>
<ul>
<li>Detection: <a href="http://papers.nips.cc/paper/5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks" rel="nofollow noreferrer">Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks</a>: 600px x ~800px in 200ms on a GPU</li>
<li><a href="https://arxiv.org/pdf/1703.10956.pdf" rel="nofollow noreferrer">InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image</a>: 9.79ms with GeForce GTX Titan and AlexNet to get FC7 features</li>
<li>Semantic segmentation: <a href="https://arxiv.org/pdf/1511.00513.pdf" rel="nofollow noreferrer">Pixel-wise Segmentation of Street with Neural
Networks</a> 20ms with GeForce GTX 980</li>
</ul> | 2017-04-03 07:47:11.810000+00:00 | 2017-04-03 07:54:35.247000+00:00 | 2017-04-03 07:54:35.247000+00:00 | null | 43,168,748 | <p><strong>Disclamer:</strong> I don't know almost nothing on CNNs and I have no idea where I could ask this.</p>
<p>My research is focused on high performance in computer vision applications. We generate codes representing an image in less than 20 ms, for images whose largest dimension is 500 px.</p>
<p>This is done by combining SURF descriptors and <a href="http://www.vlfeat.org/overview/encodings.html" rel="nofollow noreferrer">VLAD</a> codes, obtaining a vector representing an image that will be used in our object recognition application. </p>
<p>Can CNNs be faster? According to <a href="https://github.com/soumith/convnet-benchmarks/blob/master/README.md" rel="nofollow noreferrer">this</a> benchmark (which is based on much smaller images) the times needed is longer, almost double considering that the size of the image is half of ours. </p> | 2017-04-02 13:13:20.287000+00:00 | 2017-05-17 19:59:46.487000+00:00 | 2017-05-17 19:59:46.487000+00:00 | computer-vision|deep-learning|surf|vlad-vector | ['http://papers.nips.cc/paper/5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks', 'https://arxiv.org/pdf/1703.10956.pdf', 'https://arxiv.org/pdf/1511.00513.pdf'] | 3 |
43,173,229 | <p>Yes, they can be faster. The numbers you got are for networks trained for ImageNet classification, 1 Million images, 1000 classes. Unless your classification problem is similar, then using a ImageNet network is overkill.</p>
<p>You should also remember that these networks have weights in the order of 10-100 million, so they are quite expensive to evaluate. But you probably don't need a really big network, and you can design your own network, with fewer layers and parameters, that is much cheaper to evaluate.</p>
<p>In my experience, I designed a network to classify 96x96 sonar image patches, and with around 4000 weights in total, it can get over 95% classification accuracy and run at 40 ms per frame on a RPi2.</p>
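<p>To get a feeling for how small such a model can be, here is a rough Keras sketch (an arbitrary toy architecture, not the sonar network described above) that lets you check the parameter count before you worry about anything else:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(4, (5, 5), activation='relu', input_shape=(96, 96, 1)),
    MaxPooling2D((4, 4)),
    Conv2D(8, (5, 5), activation='relu'),
    MaxPooling2D((4, 4)),
    Flatten(),
    Dense(2, activation='softmax'),
])
print(model.count_params())   # ~1.2k weights, far below the tens of millions in ImageNet models
</code></pre>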
<p>A bigger network with 900K weights, same input size, takes 7 ms to evaluate on a Core i7. So this is surely possible, you just need to play with smaller network architectures. A good start is <a href="https://arxiv.org/abs/1602.07360" rel="nofollow noreferrer">SqueezeNet</a>, which is a network that can achieve good performance in Imagenet, but has 50 times less weights, and it is of course much faster than other networks.</p> | 2017-04-02 20:32:28.527000+00:00 | 2017-04-02 20:32:28.527000+00:00 | null | null | 43,168,748 | <p><strong>Disclamer:</strong> I don't know almost nothing on CNNs and I have no idea where I could ask this.</p>
<p>My research is focused on high performance in computer vision applications. We generate codes representing an image in less than 20 ms, for images whose largest dimension is 500 px.</p>
<p>This is done by combining SURF descriptors and <a href="http://www.vlfeat.org/overview/encodings.html" rel="nofollow noreferrer">VLAD</a> codes, obtaining a vector representing an image that will be used in our object recognition application. </p>
<p>Can CNNs be faster? According to <a href="https://github.com/soumith/convnet-benchmarks/blob/master/README.md" rel="nofollow noreferrer">this</a> benchmark (which is based on much smaller images) the times needed is longer, almost double considering that the size of the image is half of ours. </p> | 2017-04-02 13:13:20.287000+00:00 | 2017-05-17 19:59:46.487000+00:00 | 2017-05-17 19:59:46.487000+00:00 | computer-vision|deep-learning|surf|vlad-vector | ['https://arxiv.org/abs/1602.07360'] | 1 |
45,520,846 | <p>As <strong>vijay m</strong> points out correctly, by changing the <code>dct_method</code> to "INTEGER_ACCURATE" you will get the same uint8 image using cv2 or tf. The problem indeed seems to be the resizing method. I also tried to force TensorFlow to use the same interpolation method as cv2 uses by default (bilinear), but the results are still different. This might be the case because cv2 does the interpolation on integer values and TensorFlow converts to float before interpolating. But this is only a guess. If you plot the pixel-wise difference between the image resized by TF and by cv2, you'll get the following histogram:</p>
<p><a href="https://i.stack.imgur.com/aUUXK.png" rel="noreferrer">Histrogramm of pixel-wise difference</a></p>
<p>As you can see, this looks pretty normally distributed. (Also, I was surprised by the amount of pixel-wise difference.) The problem of your accuracy drop could lie exactly here. In <a href="https://arxiv.org/pdf/1412.6572.pdf" rel="noreferrer">this paper</a> Goodfellow et al. describe the effect of adversarial examples on classification systems. The problem here is something similar, I think. If the original weights you use for your network were trained using some input pipeline which gives the results of the cv2 functions, the image from the TF input pipeline is something like an adversarial example.</p>
<p>(See the image on page 3 at the top for an example...I can't post more than two links.)</p>
<p>So in the end I think if you want to use the original network weights for the same data they trained the network on, you should stay with a similar/same input pipeline. If you use the weights to finetune the network on your own data, this should not be of a big concern, because you retrain the classification layer to work with the new input images (from the TF pipeline).</p>
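<p>For completeness, a minimal sketch (TF 1.x-style API; the file name is just a placeholder) of what aligning the decode step looks like:</p>
<pre><code>import cv2
import tensorflow as tf

path = 'ILSVRC2012_val_00000001.JPEG'   # placeholder

# Decode with the slower but exact integer DCT, then RGB -> BGR to match cv2
contents = tf.read_file(path)
img_tf = tf.image.decode_jpeg(contents, channels=3, dct_method='INTEGER_ACCURATE')
img_tf = tf.reverse(img_tf, axis=[-1])

with tf.Session() as sess:
    tf_bgr = sess.run(img_tf)

cv_bgr = cv2.imread(path)                # BGR uint8
print((tf_bgr == cv_bgr).all())          # should now be True for the decoded, un-resized image
</code></pre>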
<p>And @ <strong>Ishant Mrinal</strong>: Please have a look at the code the OP provided in the GIST. He is aware of the difference of BGR (cv2) and RGB (TF) and is converting the images to the same color space.</p> | 2017-08-05 10:27:45.453000+00:00 | 2017-08-05 10:27:45.453000+00:00 | null | null | 45,516,859 | <p>I recently switched out using OpenCV for Tensorflow's <a href="https://www.tensorflow.org/api_docs/python/tf/image" rel="nofollow noreferrer">tf.image</a> module for image processing. However, my validation accuracy dropped around 10%.</p>
<p>I believe the issue is related to</p>
<ol>
<li>cv2.imread() vs. tf.image.decode_jpeg()</li>
<li>cv2.resize() vs. tf.image.resize_images()</li>
</ol>
<p>While these differences result in worse accuracy, the images seem to be human-indistinguishable when using plt.imshow(). For example, take Image #1 of the ImageNet Validation Dataset:</p>
<p><a href="https://i.stack.imgur.com/cJQAQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cJQAQ.png" alt="CV2 Image" /></a> <a href="https://i.stack.imgur.com/NyCsj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NyCsj.png" alt="enter image description here" /></a></p>
<p>First issue:</p>
<ul>
<li>cv2.imread() takes in a string and outputs a BGR 3-channel uint8 matrix</li>
<li>tf.image_decode_jpeg() takes in a string tensor and outputs an RGB 3-channel uint8 tensor.</li>
</ul>
<p>However, after converting the tf tensor to BGR format, there are very slight differences at many pixels in the image.</p>
<p>Using tf.image.decode_jpeg and then converting to BGR</p>
<pre><code>[[ 26 41 24 ..., 57 48 46]
[ 36 39 36 ..., 24 24 29]
[ 41 26 34 ..., 11 17 27]
...,
[ 71 67 61 ..., 106 105 100]
[ 66 63 59 ..., 106 105 101]
 [ 64 66 58 ..., 106 105 101]]
</code></pre>
<p>Using cv.imread</p>
<pre><code>[[ 26 42 24 ..., 57 48 48]
[ 38 40 38 ..., 26 27 31]
[ 41 28 36 ..., 14 20 31]
...,
[ 72 67 60 ..., 108 105 102]
[ 65 63 58 ..., 107 107 103]
 [ 65 67 60 ..., 108 106 102]]
</code></pre>
<p>Second issue:</p>
<ul>
<li>tf.image.resize_images() automatically converts a uint8 tensor to a float32 tensor, and seems to exacerbate the differences in pixel values.</li>
<li>I believe that tf.image.resize_images() and cv2.resize() are both using bilinear interpolation by default.</li>
</ul>
<p>tf.image.resize_images</p>
<pre><code>[[ 26. 25.41850281 35.73127747 ..., 81.85855103
59.45834351 49.82373047]
[ 38.33480072 32.90485001 50.90826797 ..., 86.28446198
74.88543701 20.16353798]
[ 51.27312469 26.86172867 39.52401352 ..., 66.86851501
81.12111664 33.37636185]
...,
[ 70.59472656 75.78851318
45.48100662 ..., 70.18637085
88.56777191 97.19295502]
[ 70.66964722 59.77249908 48.16699219 ..., 74.25527954
97.58244324 105.20263672]
[ 64.93395996 59.72298431 55.17600632 ..., 77.28720856
 98.95108032 105.20263672]]
</code></pre>
<p>cv2.resize</p>
<pre><code>[[ 36 30 34 ..., 102 59 43]
[ 35 28 51 ..., 85 61 26]
[ 28 39 50 ..., 59 62 52]
...,
[ 75 67 34 ..., 74 98 101]
[ 67 59 43 ..., 86 102 104]
 [ 66 65 48 ..., 86 103 105]]
</code></pre>
<p>Here's a <a href="https://gist.github.com/txizzle/0c1b5665a3cbad4bd4fc8a3afa424a86" rel="nofollow noreferrer">gist</a> demonstrating the behavior just mentioned. It includes the full code for how I am processing the image.</p>
<p>So my main questions are:</p>
<ul>
<li><strong>Why is the output of cv2.imread() and tf.image.decode_jpeg() different?</strong></li>
<li><strong>How are cv2.resize() and tf.image.resize_images() different if they use the same interpolation scheme?</strong></li>
</ul>
<p>Thank you!</p> | 2017-08-04 23:40:25.627000+00:00 | 2022-07-26 18:58:10.047000+00:00 | 2022-07-26 18:58:10.047000+00:00 | python|image|tensorflow|opencv|image-processing | ['https://i.stack.imgur.com/aUUXK.png', 'https://arxiv.org/pdf/1412.6572.pdf'] | 2 |
73,447,846 | <p>Just now did I see this question, hopefully it's not too late (also for those finding it later on).</p>
<p>The small variance you are observing is actually expected. The algorithm that powers <code>tfcausalimpact</code>, despite also being from the Markovian family, is not the same as the <a href="https://research.google/pubs/pub41854/" rel="nofollow noreferrer">Gibbs Sampling</a> used by the original R package, so results might change between runs, but the differences should be negligible for the most part.</p>
<p>The default algorithm uses variational inference; it's possible to change it to use <a href="https://arxiv.org/abs/2108.12107" rel="nofollow noreferrer">Hamiltonian Monte Carlo</a> (HMC), which is considered the state of the art in this field. Results will be more precise, but it'll take longer to converge. Here's how to activate it:</p>
<pre><code>ci = CausalImpact(data, pre_period, post_period, model_args={'fit_method': 'hmc'})
</code></pre>
<p>As far as the interpretation of data goes it should remain stable regardless of how many runs you test it.</p> | 2022-08-22 15:50:45.823000+00:00 | 2022-08-22 15:50:45.823000+00:00 | null | null | 69,257,795 | <p>I'm using tfcausalimpact on Python to do some structural time series modeling.
Results change from run to run, which I'd like to be able to avoid or control through some initial random state. Is there a way to obtain consistent results from run to run?</p>
<pre><code>import numpy as np
import pandas as pd
from statsmodels.tsa.arima_process import ArmaProcess
from causalimpact import CausalImpact
np.random.seed(12345)
ar = np.r_[1, 0.9]
ma = np.array([1])
arma_process = ArmaProcess(ar, ma)
X = 100 + arma_process.generate_sample(nsample=100)
y = 1.2 * X + np.random.normal(size=100)
y[70:] += 5
data = pd.DataFrame({'y': y, 'X': X}, columns=['y', 'X'])
pre_period = [0, 69]
post_period = [70, 99]
ci = CausalImpact(data, pre_period, post_period)
print(ci.summary())
print(ci.summary(output='report'))
ci.plot()
#Run 1
Posterior Inference {Causal Impact}
Average Cumulative
Actual 125.23 3756.86
Prediction (s.d.) 120.41 (0.34) 3612.35 (10.34)
95% CI [119.73, 121.08] [3591.82, 3632.35]
Absolute effect (s.d.) 4.82 (0.34) 144.52 (10.34)
95% CI [4.15, 5.5] [124.51, 165.04]
Relative effect (s.d.) 4.0% (0.29%) 4.0% (0.29%)
95% CI [3.45%, 4.57%] [3.45%, 4.57%]
#Run 2
Posterior Inference {Causal Impact}
Average Cumulative
Actual 125.23 3756.86
Prediction (s.d.) 120.32 (0.33) 3609.48 (9.76)
95% CI [119.68, 120.95] [3590.28, 3628.53]
Absolute effect (s.d.) 4.91 (0.33) 147.39 (9.76)
95% CI [4.28, 5.55] [128.33, 166.58]
Relative effect (s.d.) 4.08% (0.27%) 4.08% (0.27%)
95% CI [3.56%, 4.62%] [3.56%, 4.62%]
</code></pre> | 2021-09-20 16:20:12.750000+00:00 | 2022-08-22 15:50:45.823000+00:00 | null | python|random|time-series | ['https://research.google/pubs/pub41854/', 'https://arxiv.org/abs/2108.12107'] | 2 |
38,199,264 | <p>You need to place your CSS code inside the <code><style></code> tag, like so:</p>
<pre><code><style>
.arxiv_button {
background-color: #4CAF50; /* Green */
border: none;
color: white;
padding: 10px 32px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 16px;
margin: 4px 2px;
cursor: pointer;
}
</style>
</code></pre> | 2016-07-05 09:01:09.973000+00:00 | 2016-07-05 09:01:09.973000+00:00 | null | null | 38,199,218 | <p>I have very little experience with front end developing, so please be patient. I built a simple website, and uploaded it to my university's server. obviously, I most have done something wrong, since the snippet Google is </p>
<p><a href="https://i.stack.imgur.com/7Tljm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Tljm.png" alt="enter image description here"></a></p>
<p>Which is CSS code.
I changed my website about a week ago (to its current version, which I thought would fix the issue), so is there a chance that it is waiting for Google's indexing task to run again before showing the correct snippet?</p>
<p>In case not, what should I do in order to fix it? <a href="http://technion.ac.il/~omerbp/" rel="nofollow noreferrer">here is a link to my website</a></p> | 2016-07-05 08:58:37.297000+00:00 | 2016-07-05 11:58:18.897000+00:00 | 2016-07-05 11:58:18.897000+00:00 | css|seo|google-search | [] | 0 |
63,101,006 | <p>I think your observation relates to batch normalization. There is a <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">paper</a> written on the subject, an numerous medium/towardsdatascience posts, which i will not list here. Idea is that if you have a no non-linearities in your model and loss function, it doesn't matter. But even in MSE you do have non-linearity, which makes it sensitive to scaling of both target and source data. You can experiment with inserting <a href="https://pytorch.org/docs/master/generated/torch.nn.BatchNorm1d.html#torch.nn.BatchNorm1d" rel="nofollow noreferrer">Batch Normalization</a> Layers into your models, after dense or convolutional layers. In my experience it often improves accuracy.</p> | 2020-07-26 13:48:14.247000+00:00 | 2020-07-26 13:48:14.247000+00:00 | null | null | 63,100,933 | <p>I'm working on a regression problem in pytorch. My target values can be either between 0 to 100 or 0 to 1 (they represent % or % divided by 100).</p>
<p>The data is unbalanced, I have much more data with lower targets.</p>
<p>I've noticed that when I run the model with targets in the range 0-100, it doesn't learn - the validation loss doesn't improve, and the loss on the 25% large targets is very big, much bigger than the std in this group.</p>
<p>However, when I run the model with targets in the range 0-1, it does learn and I get good results.</p>
<p>If anyone can explain why this happens, and if using the ranges 0-1 is "cheating", that will be great.</p>
<p>Also - should I scale the targets? (either if I use the larger or the smaller range).</p>
<p>Some additional info - I'm trying to fine tune bert for a specific task. I use MSEloss.</p>
<p>Thanks!</p> | 2020-07-26 13:41:26.490000+00:00 | 2020-07-26 13:48:14.247000+00:00 | null | regression|scaling|loss-function|bert-language-model|mse | ['https://arxiv.org/abs/1502.03167', 'https://pytorch.org/docs/master/generated/torch.nn.BatchNorm1d.html#torch.nn.BatchNorm1d'] | 2 |
46,632,174 | <p>Merge/Switch are concepts taken from the dataflow processing model of the 70s
<a href="https://i.stack.imgur.com/qa5kn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qa5kn.png" alt="enter image description here"></a></p>
<p>(from Advances in Computers, 1992)</p>
<p>See section 4.4 of <a href="https://arxiv.org/pdf/1603.04467.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1603.04467.pdf</a> , discussion in <a href="https://github.com/tensorflow/tensorflow/issues/4762" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/4762</a></p>
<p>The following code may also be informative:</p>
<p>A replacement for Switch:
<a href="https://github.com/tensorflow/tensorflow/pull/9189" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/pull/9189</a></p>
<p>Adding gradients to while loop:
<a href="https://github.com/tensorflow/tensorflow/commit/301b14c2" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/commit/301b14c2</a></p> | 2017-10-08 14:25:15.117000+00:00 | 2017-10-08 15:13:34.507000+00:00 | 2017-10-08 15:13:34.507000+00:00 | null | 46,630,986 | <p>I'm looking for code of tensorflow v1.3 for using this framework more precise. </p>
<p>However, there are lots of complicated things.</p>
<p>Specifically, I'm watching the process of running the graph. </p>
<p>When one node's numerical computation is done, the output of that node will be added to the ready queue.</p>
<p>With this thought, I tracked the code of tensorflow. However in PropagateOutputs function (tensorflow/core/common_runtime/executor.cc), they divide the case into four (enter, exit, next iteration, none). </p>
<p>In this part, I have no idea which nodes are Enter or Exit nodes. Also, I cannot grasp the concepts of frame and iteration, even after reading the documentation in the TF code.</p>
<p>Can I get some knowledge of such things or can I get any reference for studying the architecture of tensorflow?</p>
<p>Thanks.</p> | 2017-10-08 12:13:16.420000+00:00 | 2017-10-08 15:13:34.507000+00:00 | 2017-10-08 12:28:14.763000+00:00 | c++|tensorflow | ['https://i.stack.imgur.com/qa5kn.png', 'https://arxiv.org/pdf/1603.04467.pdf', 'https://github.com/tensorflow/tensorflow/issues/4762', 'https://github.com/tensorflow/tensorflow/pull/9189', 'https://github.com/tensorflow/tensorflow/commit/301b14c2'] | 5 |
69,177,968 | <p>No, it is different from <a href="https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html" rel="noreferrer"><code>Dropout</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch.nn.functional import dropout
torch.manual_seed(2021)
def drop_path(x, drop_prob: float = 0., training: bool = False):
if drop_prob == 0. or not training:
return x
keep_prob = 1 - drop_prob
shape = (x.shape[0],) + (1,) * (x.ndim - 1)
random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
random_tensor.floor_() # binarize
output = x.div(keep_prob) * random_tensor
return output
x = torch.rand(3, 2, 2, 2)
# DropPath
d1_out = drop_path(x, drop_prob=0.33, training=True)
# Dropout
d2_out = dropout(x, p=0.33, training=True)
</code></pre>
<p>Let's compare the outputs (I removed the line break between channel dimension for readability):</p>
<pre class="lang-py prettyprint-override"><code># DropPath
print(d1_out)
# tensor([[[[0.1947, 0.7662],
# [1.1083, 1.0685]],
# [[0.8515, 0.2467],
# [0.0661, 1.4370]]],
#
# [[[0.0000, 0.0000],
# [0.0000, 0.0000]],
# [[0.0000, 0.0000],
# [0.0000, 0.0000]]],
#
# [[[0.7658, 0.4417],
# [1.1692, 1.1052]],
# [[1.2014, 0.4532],
# [1.4840, 0.7499]]]])
# Dropout
print(d2_out)
# tensor([[[[0.1947, 0.7662],
# [1.1083, 1.0685]],
# [[0.8515, 0.2467],
# [0.0661, 1.4370]]],
#
# [[[0.0000, 0.1480],
# [1.2083, 0.0000]],
# [[1.2272, 0.1853],
# [0.0000, 0.5385]]],
#
# [[[0.7658, 0.0000],
# [1.1692, 1.1052]],
# [[1.2014, 0.4532],
# [0.0000, 0.7499]]]])
</code></pre>
<p>As you can see, they are different. <code>DropPath</code> is dropping an entire sample from the batch, which effectively results in <em>stochastic depth</em> when used as in Eq. 2 of their <a href="https://arxiv.org/pdf/1603.09382.pdf" rel="noreferrer">paper</a>. On the other hand, <code>Dropout</code> is dropping random values, as expected (from the <a href="https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html" rel="noreferrer">docs</a>):</p>
<blockquote>
<p>During training, <strong>randomly zeroes some of the elements of the input tensor</strong> with probability <code>p</code> using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.</p>
</blockquote>
<p>Also note that both scale the output values based on the probability, i.e., the non-zeroed out elements are identical for the same <code>p</code>.</p> | 2021-09-14 12:39:56.617000+00:00 | 2021-09-14 12:39:56.617000+00:00 | null | null | 69,175,642 | <p>The code below (taken from <a href="https://github.com/rwightman/pytorch-image-models/blob/a6e8598aaf90261402f3e9e9a3f12eac81356e9d/timm/models/layers/drop.py#L140" rel="nofollow noreferrer">here</a>) seems to implement only a simple <code>Dropout</code>, neither the <code>DropPath</code> nor <code>DropConnect</code>. Is that true?</p>
<pre class="lang-py prettyprint-override"><code>def drop_path(x, drop_prob: float = 0., training: bool = False):
"""Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
'survival rate' as the argument.
"""
if drop_prob == 0. or not training:
return x
keep_prob = 1 - drop_prob
shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets
random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
random_tensor.floor_() # binarize
output = x.div(keep_prob) * random_tensor
return output
</code></pre> | 2021-09-14 09:53:52.140000+00:00 | 2021-12-08 03:04:15.357000+00:00 | 2021-09-14 12:42:05.297000+00:00 | python|deep-learning|pytorch|computer-vision|conv-neural-network | ['https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html', 'https://arxiv.org/pdf/1603.09382.pdf', 'https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html'] | 3 |
61,835,577 | <p>If like me you need to work with data that contains a mix of languages, you may be interested in my package that helps you translate word vectors between languages: <a href="https://pypi.org/project/transvec/" rel="nofollow noreferrer">transvec</a>.</p>
<p>Bear in mind that while you can use Word2Vec on any language, there are no pre-trained models (that I'm aware of) that have been trained on multiple languages. Unless the mixed-language corpora also included information about translations between the languages, they probably wouldn't be of much use anyway.</p>
<p><code>transvec</code> solves this problem by allowing you to convert word vectors between models, so you can use one model for each language and use <code>transvec</code> to find out which words in language A (the <em>target</em> language) are most similar to a word from language B (the <em>source</em> language). If you're interested in how this works, check out the README for the Python package or the <a href="https://arxiv.org/pdf/1309.4168.pdf" rel="nofollow noreferrer">original research paper</a> (which I just realised is the same paper from @gojomo's answer!) that describes the technique in more detail. </p>
<p>Here's a small example that shows how you can compare Russian word vectors to English word vectors in a meaningful way:</p>
<pre><code>import gensim.downloader
from transvec.transformers import TranslationWordVectorizer
# Pretrained models in two different languages.
ru_model = gensim.downloader.load("word2vec-ruscorpora-300")
en_model = gensim.downloader.load("glove-wiki-gigaword-300")
# Training data: pairs of English words with their Russian translations.
# The more you can provide, the better.
train = [
("king", "царь_NOUN"), ("tsar", "царь_NOUN"),
("man", "мужчина_NOUN"), ("woman", "женщина_NOUN")
]
bilingual_model = TranslationWordVectorizer(en_model, ru_model).fit(train)
# Find words with similar meanings across both languages.
bilingual_model.similar_by_word("царица_NOUN", 1) # "queen"
# [('king', 0.7763221263885498)]
</code></pre>
<p>Don't worry about the weird POS tags on the Russian words - this is just a quirk of the particular pre-trained model I used.</p>
<p>So basically, if you can provide a list of words with their translations, then you can train a <code>TranslationWordVectorizer</code> to translate <em>any</em> word that exists in your source language corpus into the target language. When I used this for real, I produced some training data by extracting all the individual Russian words from my data, running them through Google Translate and then keeping everything that translated to a single word in English. The results were pretty good (sorry I don't have any more detail for the benchmark yet; it's still a work in progress!).</p> | 2020-05-16 10:49:28.407000+00:00 | 2020-05-16 10:49:28.407000+00:00 | null | null | 42,065,113 | <p>I want to know if we can use word2vec algorithm to train models for languages other than English like Spanish, Chinese,Italian ?</p> | 2017-02-06 10:09:28.513000+00:00 | 2020-05-16 10:49:28.407000+00:00 | null | word2vec | ['https://pypi.org/project/transvec/', 'https://arxiv.org/pdf/1309.4168.pdf'] | 2 |
42,131,468 | <p>Yes! In fact one of Google's original word2vec papers highlighted its potential for use in machine-translation between language pairs:</p>
<p><em><a href="https://arxiv.org/abs/1309.4168" rel="noreferrer">Exploiting Similarities among Languages for Machine Translation</a></em></p>
<p>Note that as with English, you'll need to break example texts into word-tokens before feeding them to the Word2Vec algorithm, which may be harder in some languages.</p> | 2017-02-09 08:23:40.930000+00:00 | 2017-02-09 08:23:40.930000+00:00 | null | null | 42,065,113 | <p>I want to know if we can use word2vec algorithm to train models for languages other than English like Spanish, Chinese,Italian ?</p> | 2017-02-06 10:09:28.513000+00:00 | 2020-05-16 10:49:28.407000+00:00 | null | word2vec | ['https://arxiv.org/abs/1309.4168'] | 1 |
58,478,860 | <p>Based on the recent <a href="https://blog.dask.org/2019/09/30/dask-hyperparam-opt" rel="nofollow noreferrer">https://blog.dask.org/2019/09/30/dask-hyperparam-opt</a>, <code>HyperbandSearchCV</code> does require models implementing <code>partial_fit</code> because the point of using HyperbandSearchCV is to avoid training on the entire data in order to make a decision whether the model is good. This is where HyperbandSearchCV's speed advantage comes from. The way I interpret the blog post is that once a model is fully trained, HyperbandSearchCV cannot do anything more, there's no early-stopping left to do. However, this might be true for the Dask implementation and not necessarily for the Hyperband algorithm described in the original <a href="https://arxiv.org/pdf/1603.06560.pdf" rel="nofollow noreferrer">paper</a> which I should re-read.</p> | 2019-10-21 01:47:37.213000+00:00 | 2019-10-24 01:26:17.007000+00:00 | 2019-10-24 01:26:17.007000+00:00 | null | 58,470,021 | <p>I have been deep diving on the github pages and reading the documentation, but I am not fully understanding whether HyperbandCV will be useful to speed up hyperparameter optimization in my case.</p>
<p>I am using SKLearn's pipeline functionality. And I am also testing models like LinearRegression() which doesn't support partial_fit; it has to use all the data to fit the parameters all at once. In this case, can HyperbandCV still be used? If it is used, what exactly is it optimizing if from my understanding neither Pipeline nor said models have partial fit implemented. In Hyperband's api, it reads that it needs to have partial_fit implemented in order to use it. However, in another documentation it reads it can be a drop-in replacement for RandomizedSearchCV since it just spends less time training low performing models. </p>
<p>If anyone can clarify this for me, this will be great. </p> | 2019-10-20 04:25:06.227000+00:00 | 2022-06-14 06:54:02.547000+00:00 | null | dask|dask-ml | ['https://blog.dask.org/2019/09/30/dask-hyperparam-opt', 'https://arxiv.org/pdf/1603.06560.pdf'] | 2 |
61,423,272 | <p>I have not heard of the pipeline you just mentioned. In order to construct an LM for your use-case, you have basically two options:</p>
<ol>
<li><p>Further training the BERT (-base/-large) model on your own corpus. This process is called <em>domain adaptation</em>, as also described in this <a href="https://arxiv.org/pdf/2004.10964.pdf" rel="nofollow noreferrer">recent paper</a>. This will adapt the learned parameters of the BERT model to your specific domain (Bio/Medical text). Nonetheless, for this setting, you will need quite a large corpus to help the BERT model better update its parameters. (A rough sketch of what this step looks like in code is given after this list.)</p>
</li>
<li><p>Using a pre-trained language model that is pre-trained on a large amount of domain-specific text, either from scratch or fine-tuned from the vanilla BERT model. As you might know, the vanilla BERT model released by Google has been trained on Wikipedia and BookCorpus text. After the vanilla BERT, researchers have tried to train the BERT architecture on other domains besides the initial data collections. You may be able to use these pre-trained models, which have a deep understanding of domain-specific language. For your case, there are some models such as: <a href="https://github.com/dmis-lab/biobert" rel="nofollow noreferrer">BioBERT</a>, <a href="https://github.com/ncbi-nlp/bluebert" rel="nofollow noreferrer">BlueBERT</a>, and <a href="https://github.com/allenai/scibert" rel="nofollow noreferrer">SciBERT</a>.</p>
</li>
</ol>
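<p>To make option 1 more concrete, here is a rough sketch of the underlying training signal (masked language modelling), written against recent versions of the Hugging Face <code>transformers</code> package (older releases named some arguments differently). It only shows the objective on a single toy sentence, not a full pre-training pipeline:</p>
<pre><code>from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

text = "the patient was treated with antibiotics"   # toy "in-domain" sentence
input_ids = tokenizer.encode(text, return_tensors="pt")
labels = input_ids.clone()

# Mask one (arbitrarily chosen) position and score only that position (-100 = ignore)
input_ids[0, 3] = tokenizer.mask_token_id
labels[input_ids != tokenizer.mask_token_id] = -100

loss = model(input_ids, labels=labels)[0]   # masked-LM loss
loss.backward()   # a real run loops an optimizer over many such batches of domain text
print(float(loss))
</code></pre>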
<blockquote>
<p>Is it possible with hugging-face?</p>
</blockquote>
<p>I am not sure if huggingface developers have developed a robust approach for pre-training BERT model on custom corpora as claimed their code is still in progress, but if you are interested in doing this step, I suggest using <a href="https://github.com/google-research/bert" rel="nofollow noreferrer">Google research's bert</a> code which has been written in Tensorflow and is totally robust (released by BERT's authors). In their readme and under <code>Pre-training with BERT</code> section, the exact process has been declared. This will provide you with Tensorflow checkpoint, which can be easily converted to Pytorch checkpoint if you'd like to work with Pytorch/Transformers.</p> | 2020-04-25 09:04:13.467000+00:00 | 2021-12-05 00:02:43.780000+00:00 | 2021-12-05 00:02:43.780000+00:00 | null | 61,416,197 | <p>I was curious if it is possible to use transfer learning in text generation, and re-train/pre-train it on a specific kind of text. </p>
<p>For example, having a pre-trained BERT model and a small corpus of medical (or any "type") text, make a language model that is able to generate medical text. The assumption is that you do not have a huge amount of "medical texts" and that is why you have to use transfer learning.</p>
<p>Putting it as a pipeline, I would describe this as: </p>
<ol>
<li>Using a pre-trained BERT tokenizer. </li>
<li>Obtaining new tokens from my new text and adding them to the existing pre-trained language model (i.e., vanilla BERT). </li>
<li>Re-training the pre-trained BERT model on the custom corpus with the combined tokenizer. </li>
<li>Generating text that resembles the text within the small custom corpus.</li>
</ol>
<p>Does this sound familiar? Is it possible with hugging-face?</p> | 2020-04-24 19:38:46.420000+00:00 | 2022-05-03 08:11:27.503000+00:00 | 2020-04-26 18:27:02.387000+00:00 | deep-learning|transfer-learning|huggingface-transformers|language-model|bert-language-model | ['https://arxiv.org/pdf/2004.10964.pdf', 'https://github.com/dmis-lab/biobert', 'https://github.com/ncbi-nlp/bluebert', 'https://github.com/allenai/scibert', 'https://github.com/google-research/bert'] | 5 |
35,241,735 | <p>If your target vocabulary (or, in other words, the number of classes you want to predict) is really big, it is very hard to use the regular softmax, because you have to calculate the probability for every word in the dictionary. By using <code>sampled_softmax_loss</code> you only take into account a subset <strong>V</strong> of your vocabulary to calculate your loss.</p>
<p>Sampled softmax only makes sense if the sample (our <strong>V</strong>) is smaller than the vocabulary size. If your vocabulary (the number of labels) is small, there is no point in using <code>sampled_softmax_loss</code>.</p>
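<p>For reference, a rough sketch of how the call differs from the full softmax at the call site (TF 1.x-style API; the shapes and <code>num_sampled</code> value are arbitrary here). Sampling is normally only used during training; at evaluation time you switch back to the full softmax:</p>
<pre><code>import tensorflow as tf

vocab_size, emb_dim, batch_size, num_sampled = 50000, 128, 32, 64

inputs = tf.random_normal([batch_size, emb_dim])             # e.g. RNN outputs
labels = tf.random_uniform([batch_size, 1], maxval=vocab_size, dtype=tf.int64)
weights = tf.Variable(tf.random_normal([vocab_size, emb_dim]))
biases = tf.Variable(tf.zeros([vocab_size]))

# Training: only num_sampled negative classes are scored per example
train_loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(
    weights=weights, biases=biases, labels=labels, inputs=inputs,
    num_sampled=num_sampled, num_classes=vocab_size))

# Evaluation: full softmax over the whole vocabulary
logits = tf.matmul(inputs, weights, transpose_b=True) + biases
full_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=tf.squeeze(labels, axis=1), logits=logits))
</code></pre>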
<p>You can see implementation details in this paper:
<a href="http://arxiv.org/pdf/1412.2007v2.pdf" rel="noreferrer">http://arxiv.org/pdf/1412.2007v2.pdf</a></p>
<p>Also you can see example where it is used - Sequence to sequence translation in this <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/rnn/translate/seq2seq_model.py" rel="noreferrer">example</a></p> | 2016-02-06 13:45:51.510000+00:00 | 2016-02-06 13:51:43.947000+00:00 | 2016-02-06 13:51:43.947000+00:00 | null | 35,241,251 | <p>In tensorflow, there are methods called <a href="https://www.tensorflow.org/versions/master/api_docs/python/tf/nn/softmax_cross_entropy_with_logits" rel="nofollow noreferrer"><code>softmax_cross_entropy_with_logits</code></a> and <a href="https://www.tensorflow.org/versions/master/api_docs/python/tf/nn/sampled_softmax_loss" rel="nofollow noreferrer"><code>sampled_softmax_loss</code></a>.</p>
<p>I read the TensorFlow documentation and searched Google for more information, but I couldn't find the difference. It looks to me like both calculate the loss using the softmax function.</p>
<h3>Using <code>sampled_softmax_loss</code> to calculate the loss</h3>
<pre><code>loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(...))
</code></pre>
<h3>Using <code>softmax_cross_entropy_with_logits</code> to calculate the loss</h3>
<pre><code>loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(P, Q))
</code></pre>
<p>To me, calculating the softmax loss is the same as calculating softmaxed cross-entropy (e.g. <code>cross_entropy(softmax(train_x))</code>).</p>
<p>Could somebody tell me the why there is two different methods and which method should I use in which case?</p> | 2016-02-06 12:54:40.953000+00:00 | 2020-01-17 18:18:58.473000+00:00 | 2018-06-25 20:51:39.910000+00:00 | python|machine-learning|deep-learning|tensorflow | ['http://arxiv.org/pdf/1412.2007v2.pdf', 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/rnn/translate/seq2seq_model.py'] | 2 |
28,595,723 | <p><strong>Before we get started</strong>: The question is about the <a href="http://doc.akka.io/docs/akka/snapshot/scala/typed-actors.html" rel="noreferrer"><em>deprecated</em> "typed actors" module</a>. Which will soon be replaced with <a href="http://doc.akka.io/docs/akka/snapshot/scala/typed.html" rel="noreferrer">akka-typed</a>, a far superior take on the problem, which avoids the below explained shortcomings - please do have a look at akka-typed if you're interested in typed actors!</p>
<hr>
<p>I'll enumerate a number of downsides of using the typed actors implementation you refer to. Please do note however that we have just merged a new <a href="http://doc.akka.io/docs/akka/snapshot/scala/typed.html" rel="noreferrer">akka-typed</a> module, which brings in type safety back to the world of akka actors. For the sake of this post, I will not go in depth into the reasons developing the typed version was such a tough challenge, let's for now answer the question of "why not use the (old) typed actors".</p>
<p><strong>Firstly</strong>, they were never designed to be the core of the toolkit. They are built on top of the messaging infrastructure Akka provides. Please note that thanks to that messaging infrastructure we're able to achieve location transparency, and Akka's well known performance. They heavily use reflection and JDK proxies to translate to and from methods to message sends. This is very expensive (time wise), and downgrades the performance around 10-fold in contrast to plain Akka Actors, see below for a "ping pong" benchmark (implemented using both styles, sender tells to actor, actor replies - 100.000 times):</p>
<pre><code>Unit = ops/ms
Benchmark Mode Samples Mean Mean error Units
TellPingPongBenchmark.tell_100000_msgs thrpt 20 119973619.810 79577253.299 ops/ms
JdkProxyTypedActorTellPingPongBenchmark.tell_100000_msgs thrpt 20 16697718.988 406179.847 ops/ms
Unit = us/op
Benchmark Mode Samples Mean Mean error Units
TellPingPongBenchmark.tell_100000_msgs sample 133647 1.223 0.916 us/op
JdkProxyTypedActorTellPingPongBenchmark.tell_100000_msgs sample 222869 12.416 0.045 us/op
</code></pre>
<p>(Benchmarks are kept in <a href="https://github.com/akka/akka/tree/master/akka-bench-jmh" rel="noreferrer">akka/akka-bench-jmh</a> and run using the <a href="http://openjdk.java.net/projects/code-tools/jmh/" rel="noreferrer">OpenJDK JMH</a> tool, via the <a href="https://github.com/ktoso/sbt-jmh" rel="noreferrer">sbt-jmh</a> plugin.)</p>
<p><strong>Secondly</strong>, using methods to abstract over <strong>distributed systems</strong> is just not a good way of going about it (oh, how I remember RMI... let's <em>not</em> go there again). Using such "looks like a method" makes you stop thinking about message loss, reordering and all the things which can and <em>do</em> happen in distributed systems. It also encourages (makes it "<em>too easy to do the wrong thing</em>") using signatures like <code>def getThing(id: Int): Thing</code> - which would generate <strong>blocking</strong> code - which is horrible for performance! You really do want to stay asynchronous and <em>responsive</em>, which is why you'd end up with loads of futures when trying to work properly with these (proxy based) typed actors.</p>
<p><strong>Lastly</strong>, you basically <strong>lose one of the main Actor capabilities</strong>. The 3 canonical operations an Actor can perform are 1) send messages 2) start child actors 3) <em>change it's own behaviour based on received messages</em> (see Carl Hewitt's original paper on the <a href="http://arxiv.org/pdf/1008.1459.pdf" rel="noreferrer">Actor Model</a>). The 3rd capability is used to beautifully model state machines. For example you can say (in plain akka actors) <code>become(active)</code> and then <code>become(allowOnlyPrivileged)</code>, to switch between <code>receive</code> implementations - making finite state machine implementations (we also have a <a href="http://doc.akka.io/docs/akka/snapshot/scala/fsm.html" rel="noreferrer">DSL for FSMs</a>) a joy to work with. You can not express this nicely in JDK proxied typed actors, because you can not change the set of exposed methods. This is a major down side once you get into the thinking and modeling using state machines.</p>
<p><strong>A New Hope</strong> (Episode 1): Please do have a look at <a href="http://doc.akka.io/docs/akka/snapshot/scala/typed.html" rel="noreferrer">the upcoming akka-typed module</a> authored by Roland Kuhn (preview to be included in the 2.4 release soon), I'm pretty sure you'll like what you'll find there typesafety wise. And also, that implementation will eventually be even faster than the current untyped actors (omitting impl details here as the answer got pretty long already - short version: basically we'll remove a load of allocations thanks to the new implementation).</p>
<p>I hope you'll enjoy this thorough answer. Feel free to ask follow up questions in comments here or on <a href="https://groups.google.com/forum/#!forum/akka-user" rel="noreferrer">akka-user</a> - our official mailing list. Happy Hakking!</p> | 2015-02-18 23:08:13.210000+00:00 | 2015-02-18 23:32:54.683000+00:00 | 2015-02-18 23:32:54.683000+00:00 | null | 28,516,273 | <p>I don't understand why not to use <code>TypedActors</code> in <code>Akka</code>. Using reflection (well.. <code>instanceof</code>) to compensate for the lack of pattern matching in Java is quite ugly.<br>
As far as I understand, <code>TypedActors</code> should be like a gate between the "Akka world" and the "non-Akka world" of your software. But why would we throw away all OO principles and just use reflection!<br>
Why wouldn't you want to use an actor and know exactly what it should respond to? Or for Akka's sake of keeping the actor model, why not create a message hierarchy that uses double-dispatch in order to activate the right method in the actor (and I know you shouldn't pass Actors as parameters and use ActorRef instead).<br>
DISCLAIMER: I'm new to <code>Akka</code> and this model, and I haven't wrote a single line of code using <code>Akka</code>, but just reading the documentation is giving me a headache.</p> | 2015-02-14 13:49:13.283000+00:00 | 2019-03-02 11:44:21.060000+00:00 | 2015-02-14 16:32:47.767000+00:00 | java|akka | ['http://doc.akka.io/docs/akka/snapshot/scala/typed-actors.html', 'http://doc.akka.io/docs/akka/snapshot/scala/typed.html', 'http://doc.akka.io/docs/akka/snapshot/scala/typed.html', 'https://github.com/akka/akka/tree/master/akka-bench-jmh', 'http://openjdk.java.net/projects/code-tools/jmh/', 'https://github.com/ktoso/sbt-jmh', 'http://arxiv.org/pdf/1008.1459.pdf', 'http://doc.akka.io/docs/akka/snapshot/scala/fsm.html', 'http://doc.akka.io/docs/akka/snapshot/scala/typed.html', 'https://groups.google.com/forum/#!forum/akka-user'] | 10 |
33,907,308 | <p>Just to give some feedback:
It seems that it cannot be done directly; the full INET framework as it is cannot be parallelized, in short because it uses global variables in some places.
In this particular case, MAC address assignment is one of the issues (it uses a global variable), hence the eth interface cannot be parallelized.</p>
<p>For more details, refer to this paper explaining why this is not possible:</p>
<ul>
<li><a href="http://arxiv.org/abs/1409.0994" rel="nofollow">Enabling Distributed Simulation of OMNeT++ INET Models:</a></li>
</ul>
<p>For a reference/possible solution, refer to the authors' webpage at RWTH Aachen University, where you can download a complete copy of OMNeT++ and INET that can be parallelized:</p>
<ul>
<li><a href="https://code.comsys.rwth-aachen.de/redmine/projects/parallel-inet" rel="nofollow">project overview and code</a></li>
</ul> | 2015-11-25 02:02:33.203000+00:00 | 2015-11-25 02:10:02.490000+00:00 | 2015-11-25 02:10:02.490000+00:00 | null | 33,399,184 | <p>I'm trying to parallelize my model (I want to parallelize a single config run, not run multiple configs in parallel).
I'm using Omnet++ 4.2.2, but probably the version doesn't matter.</p>
<p>I've read the Parallel Distributed Simulation chapter of the Omnet++ manual
and the principle seems very straightforward:
simply assign different modules/submodules to different partitions.
Following the provided cqn example</p>
<pre><code>*.tandemQueue[0]**.partition-id = 0
*.tandemQueue[1]**.partition-id = 1
*.tandemQueue[2]**.partition-id = 2
</code></pre>
<p>If I try to simulate relatively simple models, everything works fine and I can partition the model as I wish.
However, when I start to run simulations that use the StandardHost module, or modules that are interconnected using Ethernet links, that doesn't work anymore. </p>
<p>If I take, for example, the INET-provided example WiredNetWithDHCP (inet/examples/dhcp/eth) as an experiment, let's say I want to run the hosts in a different partition than the switch.
I therefore assign the switch to one partition and everything else to another:</p>
<pre><code>**.switch**.partition-id = 1
**.partition-id = 0
</code></pre>
<p>The different partitions are separated by links, there is delay, and therefore it should be possible to partition this way.
When I run the model using the graphical interface, I can see that the model is correctly partitioned; however, the connections are somehow wrong and I get the following error message:</p>
<pre><code>during network initialization: the input/output datarates differ
</code></pre>
<p>Clearly the datarates don't differ (and running the model sequentially works perfectly). Checking the error message, this exception is also triggered by unconnected links, and this is indeed what happens here. It seems that the gates are not correctly linked. </p>
<p>Clearly I'm missing something in the Link connection mechanism, should I partition somewhere else?</p>
<p>Due to the simplicity of the paradigm I feel like being an idiot but I'm not able to solve this issue by myself</p> | 2015-10-28 18:46:51.923000+00:00 | 2015-11-25 02:10:02.490000+00:00 | null | parallel-processing|omnet++|inet | ['http://arxiv.org/abs/1409.0994', 'https://code.comsys.rwth-aachen.de/redmine/projects/parallel-inet'] | 2 |
60,809,799 | <p>In most (maybe even all) commonly used Transformers, position embeddings are not trained but defined using an analytically described function (unnumbered equation on
page 6 of <a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">Attention is all you need</a> paper):</p>
<p><a href="https://i.stack.imgur.com/OEYeo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OEYeo.png" alt="enter image description here"></a></p>
<p>To save the computation time in the <a href="https://github.com/huggingface/transformers" rel="nofollow noreferrer">Transformer package</a>, they are pre-computed up to the length of 512 and stored as a variable that serves as a cache which should not change during training.</p>
<p>The reason for not training the position embeddings is that the embeddings for later positions would be undertrained, but with cleverly analytically defined position embeddings, the network can learn the regularity behind the equations and generalize for longer sequences more easily.</p> | 2020-03-23 08:05:47.040000+00:00 | 2020-03-23 08:05:47.040000+00:00 | null | null | 60,772,384 | <p>I'm interested in huggingface's distillBERT work, by going through their code (<a href="https://github.com/huggingface/transformers/blob/master/examples/distillation/train.py" rel="nofollow noreferrer">https://github.com/huggingface/transformers/blob/master/examples/distillation/train.py</a>), I found that if use roBERTa as student model, they will freeze the position embedding, I'm wondering what is this for? </p>
<pre><code>def freeze_pos_embeddings(student, args):
if args.student_type == "roberta":
student.roberta.embeddings.position_embeddings.weight.requires_grad = False
elif args.student_type == "gpt2":
student.transformer.wpe.weight.requires_grad = False
</code></pre>
<p>I understand the reason for freezing token_type_embeddings, because RoBERTa never used segment embeddings, but why position embeddings?<br>
Thanks a lot for the help!</p> | 2020-03-20 10:28:34.297000+00:00 | 2020-04-25 00:01:49.337000+00:00 | 2020-04-25 00:01:49.337000+00:00 | huggingface-transformers|bert-language-model | ['https://arxiv.org/pdf/1706.03762.pdf', 'https://i.stack.imgur.com/OEYeo.png', 'https://github.com/huggingface/transformers'] | 3 |
20,523,155 | <p>I think <a href="http://arxiv.org/pdf/1202.6101.pdf" rel="nofollow">Ram & Gray 2008</a> is exactly what you're looking for. They call their structure a "cone tree".</p> | 2013-12-11 15:30:38.293000+00:00 | 2013-12-11 15:30:38.293000+00:00 | null | null | 20,512,429 | <p>A normal kd-tree is constructed by recursively split the super plane into half. And to do range search with a query point, it will only search a small bunch of points(log) in stead of all(linear).</p>
<p>I wonder whether a kd-tree can be built with dot products?</p>
<p>For example, b is a list of 3d vector:</p>
<pre><code>b = np.random.rand(10,3)
a = (1,1,1) is a query vector
</code></pre>
<p>and I want to find the nearest bk which satisfies:</p>
<pre><code>a * bk > a * bi, for i = 1, 2, ...k-1, k+1, 10
</code></pre>
<p>I do not want to calculate all a * bi dot product pairs.</p>
<p>How can I build a tree with b, and when query a come, I only calculate some of a * bi?</p> | 2013-12-11 06:50:18.983000+00:00 | 2013-12-11 15:30:38.293000+00:00 | 2013-12-11 07:15:11.860000+00:00 | python|algorithm|data-structures|nearest-neighbor|kdtree | ['http://arxiv.org/pdf/1202.6101.pdf'] | 1 |
39,353,006 | <p>Section 4 of the <a href="http://arxiv.org/pdf/1512.00567v3.pdf" rel="noreferrer">paper</a> you cite is about <em>auxiliary classifiers</em>. These are classifiers added to the lower levels of the network, that improve training by mitigating the vanishing gradients problem and speedup convergence. For running inference on a trained network, you should use the main classifier, called <code>softmax:0</code> in the model, and <strong>NOT</strong> the auxiliary classifier, called <code>auxiliary_softmax:0</code>.</p> | 2016-09-06 15:43:42.813000+00:00 | 2016-09-06 15:43:42.813000+00:00 | null | null | 39,352,108 | <p>The Inception v3 model is shown in this image:</p>
<p><img src="https://4.bp.blogspot.com/-TMOLlkJBxms/Vt3HQXpE2cI/AAAAAAAAA8E/7X7XRFOY6Xo/s1600/image03.png" alt="Inception v3 Model"></p>
<p>The image is from this blog-post:</p>
<p><a href="https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html" rel="nofollow">https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html</a></p>
<p>It seems that there are two Softmax classification outputs. Why is that?</p>
<p>Which one is used in the TensorFlow example as the output tensor with the name 'softmax:0' in this file?</p>
<p><a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py</a></p>
<p>The academic paper for the Inception v3 model doesn't seem to have this image of the Inception model:</p>
<p><a href="http://arxiv.org/pdf/1512.00567v3.pdf" rel="nofollow">http://arxiv.org/pdf/1512.00567v3.pdf</a></p>
<p>I'm trying to understand why there are these two branches of the network with seemingly two different softmax-outputs.</p>
<p>Thanks for any clarification!</p> | 2016-09-06 14:56:10.640000+00:00 | 2017-01-10 14:36:13.420000+00:00 | 2017-01-10 14:36:13.420000+00:00 | tensorflow|deep-learning|softmax | ['http://arxiv.org/pdf/1512.00567v3.pdf'] | 1 |
39,780,424 | <p>In a document classification task, if you use LSTM the outputs are typically word-level vectors, which can be either pre-trained or randomly initialized. You can combine word-level vectors with character-level vectors, e.g. <a href="https://arxiv.org/abs/1606.03475" rel="nofollow">https://arxiv.org/abs/1606.03475</a></p> | 2016-09-29 21:36:49.523000+00:00 | 2016-09-29 21:36:49.523000+00:00 | null | null | 39,774,093 | <p>I was thinking of using Keras to implement a document classification task, but the input of the LSTM layer is confusing me.</p>
<p>I know I have to generate the vectors for training, I have a corpus and one document per line in this corpus, if I would like to feed the corpus into the LSTM layer, do I need to first generate the document vectors from the corpus for training? Or instead of using word-level vectors, or character-level vectors?</p> | 2016-09-29 15:14:57.823000+00:00 | 2016-09-29 21:36:49.523000+00:00 | 2016-09-29 20:46:25.023000+00:00 | vector|keras|lstm | ['https://arxiv.org/abs/1606.03475'] | 1 |
67,453,030 | <p>The gate U in <a href="https://arxiv.org/pdf/1707.03429.pdf" rel="nofollow noreferrer">OpenQASM 2</a> was defined as a special unitary (i.e. determinant=1). But if you are writing circuits in the OpenQASM 2 language, this choice should not be consequential, as OpenQASM 2 does not have a way of explicitly dealing with global phases (and they are not observable).</p>
<p><a href="https://arxiv.org/pdf/2104.14722.pdf" rel="nofollow noreferrer">OpenQASM 3</a> on the other hand has a mechanism for controlling gates. This makes global phases consequential (controlling turns the global phase into a relative, observable phase). It turns out the new definition of U in OpenQASM 3 is the same as the <a href="https://qiskit.org/documentation/stubs/qiskit.circuit.library.UGate.html" rel="nofollow noreferrer">definition in Qiskit</a>. If you are writing circuits in OpenQASM 3 or Qiskit, then global phases matter. Therefore you should use this new definition.</p>
<p>(As a side note, the new definition is chosen because standard gates such as Paulis can be derived from it in a more straightforward way).</p> | 2021-05-08 23:23:58.383000+00:00 | 2021-05-08 23:23:58.383000+00:00 | null | null | 67,448,934 | <p>I have noticed that Open QASM and Qiskit define the universal single-qubit gate U(lambda, theta, phi) differently. The difference causes a phase difference in RZ, for example.</p>
<p>Has anyone come across this problem? Which should one choose?</p> | 2021-05-08 15:01:41.827000+00:00 | 2021-05-08 23:23:58.383000+00:00 | 2021-05-08 16:59:54.587000+00:00 | quantum-computing|qiskit | ['https://arxiv.org/pdf/1707.03429.pdf', 'https://arxiv.org/pdf/2104.14722.pdf', 'https://qiskit.org/documentation/stubs/qiskit.circuit.library.UGate.html'] | 3 |
61,819,378 | <p>I have a multi-part answer for you.</p>
<h2>How to freeze a Module</h2>
<p>It all comes down to how your optimizer is set up. The usual approach for TF1 is to initialize it with all Variables found in the TRAINABLE_VARIABLES <a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/GraphKeys" rel="nofollow noreferrer">collection</a>. The <a href="https://www.tensorflow.org/hub/api_docs/python/hub/Module" rel="nofollow noreferrer">doc for hub.Module</a> says about <code>trainable</code>: "If False, no variables are added to TRAINABLE_VARIABLES collection, ...". So, <strong>yes</strong>, setting <code>trainable=False</code> (explicitly or by default) freezes the module in the standard usage of TF1.</p>
<h2>Why not to freeze BERT</h2>
<p>That said, BERT is meant to be fine-tuned. The <a href="https://arxiv.org/abs/1810.04805" rel="nofollow noreferrer">paper</a> talks about the <em>feature-based</em> (i.e., frozen) vs <em>fine-tuning</em> approaches in more general terms, but the <a href="https://tfhub.dev/google/bert_cased_L-12_H-768_A-12/1" rel="nofollow noreferrer">module doc</a> spells it out clearly: <strong>"fine-tuning all parameters is the recommended practice."</strong> This gives the final parts of computing the pooled output a better shot at adapting to the features that matter most for the task at hand.</p>
<p>If you intend to follow this advice, please also mind <a href="https://www.tensorflow.org/hub/tf1_hub_module#fine-tuning" rel="nofollow noreferrer">tensorflow.org/hub/tf1_hub_module#fine-tuning</a> and pick the correct graph version: BERT uses <a href="https://arxiv.org/abs/1207.0580" rel="nofollow noreferrer">dropout</a> regularization during training, and you need to set <code>hub.Module(..., tags={"train"})</code> to get that. But for inference (in evaluation and prediction), where dropout does nothing, you omit the <code>tags=</code> argument (or set it to the empty <code>set()</code> or to <code>None</code>).</p>
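<p>For concreteness, a minimal TF1 sketch of that setup (illustrative only; the module handle is the one linked above, while the placeholder names and the sequence length of 128 are my own assumptions):</p>
<pre><code>import tensorflow as tf
import tensorflow_hub as hub

is_training = True  # set False for evaluation/prediction (no dropout)
bert = hub.Module(
    "https://tfhub.dev/google/bert_cased_L-12_H-768_A-12/1",
    trainable=True,                            # fine-tune all parameters (recommended)
    tags={"train"} if is_training else None)   # the "train" tag enables dropout

bert_inputs = dict(
    input_ids=tf.placeholder(tf.int32, [None, 128]),
    input_mask=tf.placeholder(tf.int32, [None, 128]),
    segment_ids=tf.placeholder(tf.int32, [None, 128]))
outputs = bert(inputs=bert_inputs, signature="tokens", as_dict=True)
pooled_output = outputs["pooled_output"]  # [batch, 768]; feed this to your task head
</code></pre>
<p>With <code>trainable=True</code>, the module's variables end up in TRAINABLE_VARIABLES and get updated by your optimizer; with <code>trainable=False</code> they stay frozen.</p>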
<h2>Outlook: TF2</h2>
<p>You asked about <code>hub.Module()</code>, which is an API for TF1, so I answered in that context. The same considerations apply for <a href="https://tfhub.dev/google/collections/bert/1" rel="nofollow noreferrer">BERT</a> in TF2 SavedModel format. There, it's all about setting <code>hub.KerasLayer(..., trainable=True)</code> or not, but the need to select a graph version has gone away (the layer picks up Keras' <code>training</code> state and applies it under the hood).</p>
<p>Happy training!</p> | 2020-05-15 12:35:59.090000+00:00 | 2020-05-15 12:35:59.090000+00:00 | null | null | 61,818,043 | <p>In this link <a href="https://medium.com/@prasad.pai/how-to-use-tensorflow-hub-with-code-examples-9100edec29af" rel="nofollow noreferrer">click here</a> the author says that:</p>
<pre><code>import tensorflow_hub as hub
module = hub.Module(<<Module URL as string>>, trainable=True)
</code></pre>
<p>If the user wishes to fine-tune/modify the weights of the model, this parameter has to be set to True.
So my doubt is if I set this to false does it mean that I am freezing all the layers of the BERT which is my intension too. I want to know if my approach is right.</p> | 2020-05-15 11:22:38.777000+00:00 | 2021-08-26 19:35:22.473000+00:00 | 2021-08-26 19:35:22.473000+00:00 | tensorflow|bert-language-model|tensorflow-hub | ['https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/GraphKeys', 'https://www.tensorflow.org/hub/api_docs/python/hub/Module', 'https://arxiv.org/abs/1810.04805', 'https://tfhub.dev/google/bert_cased_L-12_H-768_A-12/1', 'https://www.tensorflow.org/hub/tf1_hub_module#fine-tuning', 'https://arxiv.org/abs/1207.0580', 'https://tfhub.dev/google/collections/bert/1'] | 7 |
40,404,549 | <p>Simply, yes, because you want to train dlib's object detector, which requires a dataset labeled with bounding boxes (or you would use an already available, labeled dataset).</p>
<p>Also, the main function of imglab is creating bounding boxes, and it is stated in the comments of your code:</p>
<blockquote>
<p>To create your own XML files you can use the imglab tool which can be
found in the tools/imglab folder. It is a simple graphical tool for
labeling objects in images with boxes.</p>
</blockquote>
<p>For the original paper please refer to:
<a href="https://arxiv.org/pdf/1502.00046v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1502.00046v1.pdf</a></p>
<p>Actually, as you said, it is really hard. The one of main challenges in object detection or recognition is creating the dataset. That's why, researchers use <code>Mechanical Turk</code> like sites to use the power of crowd. </p> | 2016-11-03 14:47:47.393000+00:00 | 2016-11-03 14:47:47.393000+00:00 | null | null | 40,388,696 | <p>I am using D-lib library to use ocular recognition. So I am planning to train my own classifier using the options given in the documentation. I am using Python as a language platform when compared to C++.</p>
<p>So, I have created the two .xml files training and testing using the imglab tool. Do I have to label all the subject names in the imglab tool?
I have close to 20000 images. Is it not going to be difficult?
Do we have an easy way of doing it?
Please find the code matching the scenario attached.</p>
<pre><code>import os
import sys
import glob
import dlib
from skimage import io
# In this example we are going to train a face detector based on the small
# faces dataset in the examples/faces directory. This means you need to supply
# the path to this faces folder as a command line argument so we will know
# where it is.
faces_folder = "/media/praveen/SSD/NIVL_Ocular/NIR_Ocular_Training"
# Now let's do the training. The train_simple_object_detector() function has a
# bunch of options, all of which come with reasonable default values. The next
# few lines goes over some of these options.
options = dlib.simple_object_detector_training_options()
# Since faces are left/right symmetric we can tell the trainer to train a
# symmetric detector. This helps it get the most value out of the training
# data.
options.add_left_right_image_flips = False
# The trainer is a kind of support vector machine and therefore has the usual
# SVM C parameter. In general, a bigger C encourages it to fit the training
# data better but might lead to overfitting. You must find the best C value
# empirically by checking how well the trained detector works on a test set of
# images you haven't trained on. Don't just leave the value set at 5. Try a
# few different C values and see what works best for your data.
options.C = 5
# Tell the code how many CPU cores your computer has for the fastest training.
options.num_threads = 4
options.be_verbose = True
training_xml_path = os.path.join(faces_folder, "/media/praveen/SSD/NIVL_Ocular/praveen_ocular_dataset.xml")
testing_xml_path = os.path.join(faces_folder, "/media/praveen/SSD/NIVL_Ocular/praveen_ocular_test_dataset.xml")
# This function does the actual training. It will save the final detector to
# detector.svm. The input is an XML file that lists the images in the training
# dataset and also contains the positions of the face boxes. To create your
# own XML files you can use the imglab tool which can be found in the
# tools/imglab folder. It is a simple graphical tool for labeling objects in
# images with boxes. To see how to use it read the tools/imglab/README.txt
# file. But for this example, we just use the training.xml file included with
# dlib.
dlib.train_simple_object_detector(training_xml_path, "detector.svm", options)
# Now that we have a face detector we can test it. The first statement tests
# it on the training data. It will print(the precision, recall, and then)
# average precision.
print("") # Print blank line to create gap from previous output
print("Training accuracy: {}".format(
dlib.test_simple_object_detector(training_xml_path, "detector.svm")))
# However, to get an idea if it really worked without overfitting we need to
# run it on images it wasn't trained on. The next line does this. Happily, we
# see that the object detector works perfectly on the testing images.
print("Testing accuracy: {}".format(
dlib.test_simple_object_detector(testing_xml_path, "detector.svm")))
#
# # Now let's use the detector as you would in a normal application. First we
# # will load it from disk.
# detector = dlib.simple_object_detector("detector.svm")
#
# # We can look at the HOG filter we learned. It should look like a face. Neat!
# win_det = dlib.image_window()
# win_det.set_image(detector)
#
# # Now let's run the detector over the images in the faces folder and display the
# # results.
# print("Showing detections on the images in the faces folder...")
# win = dlib.image_window()
# for f in glob.glob(os.path.join(faces_folder, "*.png")):
# print("Processing file: {}".format(f))
# img = io.imread(f)
# dets = detector(img)
# print("Number of faces detected: {}".format(len(dets)))
# for k, d in enumerate(dets):
# print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
# k, d.left(), d.top(), d.right(), d.bottom()))
#
# win.clear_overlay()
# win.set_image(img)
# win.add_overlay(dets)
# dlib.hit_enter_to_continue()
</code></pre> | 2016-11-02 19:57:39.577000+00:00 | 2016-11-03 15:37:14.623000+00:00 | 2016-11-03 15:37:14.623000+00:00 | computer-vision|dlib|dlib-python | ['https://arxiv.org/pdf/1502.00046v1.pdf'] | 1 |
38,822,821 | <p>I'm not aware of any complete list. However, there is a not so comprehensive one in our report <a href="https://arxiv.org/abs/1601.03027" rel="noreferrer">Open Mobile API: Accessing the UICC on Android Devices</a> and there is another one (though now unmaintained) in the <a href="https://github.com/seek-for-android/pool/wiki/%5BUNMAINTAINED%5D-Devices#commercial-devices" rel="noreferrer">SEEK-for-Android Wiki</a>.</p>
<p>If you have access to each of the devices you are interested in, you could, of cource, check if the smartcard system service is available on them:</p>
<pre><code>final String SMARTCARD_SERVICE_PACKAGE = "org.simalliance.openmobileapi.service";
try {
PackageInfo pi = getPackageManager().getPackageInfo(SMARTCARD_SERVICE_PACKAGE, 0);
// smartcard service present
} catch (PackageManager.NameNotFoundException ex) {
// smartcard service NOT present
}
</code></pre>
<p>Or you could simply create an app that declares to require the Open Mobile API library by adding the following uses-library entry to its AndroidManifest.xml:</p>
<pre><code><uses-library android:name="org.simalliance.openmobileapi" android:required="true" />
</code></pre>
<p>If that app can be installed on a device, this indicates that the device contains the Open Mobile API library.</p>
<p>This may also be a way to obtain a more comprehensive list of supported devices: You could create such an app and publish it on Google Play. Google Play will filter based on <code><uses-library /></code> entries that have the required attribute set to <code>true</code> (<code>android:required="true"</code>); see also <a href="https://developer.android.com/guide/topics/manifest/uses-library-element.html" rel="noreferrer"><code><uses-library></code></a> and <a href="https://developer.android.com/google/play/filters.html#manifest-filters" rel="noreferrer">Filters on Google Play</a>. This means, that once you uploaded such an app to Google Play, you should be able to get a list of suuported devices that essentially matches all devices that have the Open Mobile API library available on them.</p> | 2016-08-08 06:48:32.573000+00:00 | 2016-08-08 06:48:32.573000+00:00 | null | null | 38,657,627 | <p>I'm developing with the Open Mobile API but so far haven't found a list of devices that support the API by default (by default being using the OEM ROM).</p>
<p>I realise that since API level 21, Android telephony supports sending APDUs via basic and logical channels dirctly through the TelephonyManager. But I'd like to know about devices running pre-API level 21 too.</p>
<p>So, has a list already been compiled of devices with built-in support or is there a way to find out for myself?</p> | 2016-07-29 11:24:42.010000+00:00 | 2018-09-11 16:21:46.183000+00:00 | 2016-08-08 06:50:31.730000+00:00 | android|nfc|apdu|open-mobile-api|secure-element | ['https://arxiv.org/abs/1601.03027', 'https://github.com/seek-for-android/pool/wiki/%5BUNMAINTAINED%5D-Devices#commercial-devices', 'https://developer.android.com/guide/topics/manifest/uses-library-element.html', 'https://developer.android.com/google/play/filters.html#manifest-filters'] | 4 |
53,022,636 | <p>After many years, let me give an answer to this question myself.</p>
<h1>Turing completeness</h1>
<ul>
<li>As powerful as a Turing machine (TM).</li>
<li>Requires an <strong>infinite memory</strong>. I.e. in practice, no physical device ever can be Turing complete.</li>
<li>Consider a normal <strong>personal computer (PC)</strong>:
<ul>
<li>A specific physical instance is <strong><em>not</em> Turing complete</strong>, as it has finite memory.</li>
<li>However, consider the <strong>concept of a PC</strong> as something where you could add memory on demand. All programs will still work in the same way when you add more memory. For any given input, and any given program, there is a maximum amount of memory such that it should work. (Let's not be pedantic now about the <code>int64</code> memory address limit or things like that. These are technical limits, which could be solved, e.g. by big ints, etc.) Thus, we can say that <strong>the <em>concept of a PC</em> is Turing complete</strong>. This is also called <a href="https://en.wikipedia.org/wiki/Abstract_machine" rel="nofollow noreferrer">an <strong>abstract machine</strong></a>.
So, differentiate between a specific machine or device (PC, RNN, human brain) and a programming language or abstract machine for a programming language. A programming language usually does not have a restriction on the amount of memory it can use, thus programming languages can be Turing complete. See the <a href="https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf" rel="nofollow noreferrer">official C language standard definition</a>, which uses an abstract machine to define the semantics. The main question is, which of those specific classes (PC, RNN, human brain) allows to formulate abstract machine variants, and are those abstract machine variants Turing complete?</li>
</ul>
</li>
<li>Turing completeness is really mostly about the memory concept. Consider any finite state machine/automaton (FSA), and some access to external memory. Depending on the type of memory, you end up in different classes of the <a href="https://en.wikipedia.org/wiki/Chomsky_hierarchy" rel="nofollow noreferrer">Chomsky hierarchy</a>:
<ul>
<li>no memory / finite memory: <a href="https://en.wikipedia.org/wiki/Regular_language" rel="nofollow noreferrer">regular languages</a></li>
<li>single stack: <a href="https://en.wikipedia.org/wiki/Pushdown_automaton" rel="nofollow noreferrer">pushdown automata</a>, <a href="https://en.wikipedia.org/wiki/Context-free_language" rel="nofollow noreferrer">context-free languages</a></li>
<li>two or more stacks: Turing complete, <a href="https://en.wikipedia.org/wiki/Recursively_enumerable_language" rel="nofollow noreferrer">recursively enumerable language</a></li>
</ul>
</li>
</ul>
<h1>Recurrent neural networks (RNN)</h1>
<h2><em>On the computational power of neural nets</em></h2>
<p>An often cited paper is <a href="http://binds.cs.umass.edu/papers/1992_Siegelmann_COLT.pdf" rel="nofollow noreferrer">On the computational power of neural nets, Siegelmann & Sontag, 1992</a>, which states that RNNs are Turing complete.
This paper assumes that we have rational numbers without limits in the numerator/denominator, i.e. the infinite memory is encoded as rational numbers, or floating point numbers of infinite precision. See also <a href="https://stats.stackexchange.com/questions/220907/meaning-and-proof-of-rnn-can-approximate-any-algorithm">here</a>. Usually we do not model a NN in a way that operates with rational numbers (without limit). When we restrict ourselves to (R)NNs with finite precision weights and activations, the proof in the paper falls apart, and does not apply anymore. Thus, this paper is not so relevant.</p>
<p>There is a more recent paper, <a href="https://arxiv.org/abs/1805.04908" rel="nofollow noreferrer">On the Practical Computational Power of Finite Precision RNNs for Language Recognition, Weiss et al, 2018</a>, which exactly addresses this.</p>
<h2>Universal approximator</h2>
<p>It is well known that most standard NNs are <a href="https://en.wikipedia.org/wiki/Universal_approximation_theorem" rel="nofollow noreferrer">universal approximators</a>. This states, that given any function (nonconstant, bounded, and continuous), and given some allowed threshold, you can construct a NN which approximates this function within the allowed threshold.
This is about finite dimensional vector spaces. When we speak about computability, we speak about sequences, thus we have an infinite dimensional vector space. Thus this property is not so relevant.</p>
<h2>Without external memory</h2>
<p>The question is how to define the <em>concept of a standard RNN</em>. In the paper mentioned above, it assumes infinite precision in the activations. But I argue this is not a reasonable concept of a RNN as you never have this. And with this being limited, there is no other way how a concept of a standard RNN can have infinite memory.</p>
<p>Thus, to state it explicitly:
Without external memory, the standard RNN, and also <a href="https://en.wikipedia.org/wiki/Long_short-term_memory" rel="nofollow noreferrer">LSTM</a> is <strong>not Turing complete</strong>.
There is also no straight-forward way how you could define a <em>concept of a RNN</em>, where you could add memory on demand.
The memory of a RNN are the most recent hidden activations. Adding more memory means to change the NN, i.e. adding new neurons, thus adding the internal workings of it. This is like changing the program itself.</p>
<h2>With external memory</h2>
<p>There is the <a href="https://en.wikipedia.org/wiki/Neural_Turing_machine" rel="nofollow noreferrer">Neural Turing machine (NTM)</a> and a few similar models, which have some sort of external memory.
Here it is straightforward to think about a <em>concept of a NTM</em> where you would add memory on demand.
Thus we can say that <strong>the <em>concept of a NTM</em> is Turing complete</strong>.</p>
<p>There are some details like the type of attentions used in the heads, which needs some adaptation. There is a follow up on the model, the <a href="https://en.wikipedia.org/wiki/Differentiable_neural_computer" rel="nofollow noreferrer">Differentiable neural computer (DNC)</a> which explicitly addresses this, and also has some explicit mechanism to add memory.</p>
<h2>Learning / training</h2>
<p>We spoke mostly about the theoretic computation power.
A very different question is whether the NN can learn such a function. I.e. whether training (gradient search) leads to a NN which has learned a computable function.</p>
<h2>Human brain</h2>
<p>We might interpret the human brain (or any brain) as kind of a complex neural network. We can also ask the question, whether the human brain (model) is Turing complete. See e.g. <a href="https://cs.stackexchange.com/questions/42311/how-is-the-computational-power-of-a-human-brain-comparing-to-a-turing-machine">here</a>. This question is a tricky one. The intuition would say that we are able to perform any kind of computation, thus the human brain is Turing complete. However, the arguments we have outlined here shows that a RNN is not Turing complete. Important is again the memory effect. At some point, the memory capacity of a human brain is not enough to operate on some input. Thus we would need external memory. So, the human brain together with external memory is Turing complete, obviously. But this is maybe not the interesting question (and also assumes that a human brain could be as large as needed to encode any Turing machine program, so there is no upper size of a human brain, which is not really true). More interesting is the question of just the human brain itself, without any external memory. Or basically, how to define the <em>abstract machine</em> or the <em>concept of a human brain</em>? Does this concept allows for infinite memory? Is this straightforward to define?</p>
<p>There is one aspect of the memory in a human brain which is a bit different than in a standard RNN: It can generalize to a high degree, and the addressing mechanism to access certain activations is different. Also, it has some kind of adaptive weights (which however also only can store finite information). So, considering this, it's actually already more similar to a NTM than just a RNN. There are many different kinds of memory in the human brain. Some of it is just as static as a RNN (like the current activations) but other types allow for associative memory like a NTM. And thus the human brain is also similar to a PC, which also has finite memory, whereas the concept of a PC has infinite memory.</p>
<p>So maybe we can say that a concept of a human brain has infinite memory as well, due to the associative memory, and also can be as large as needed to encode any program, and thus <strong>the concept of a human brain is Turing complete</strong>. Maybe this should be called an <strong>abstract human brain</strong> (instance of an <em>abstract machine</em>).</p> | 2018-10-27 14:02:59.323000+00:00 | 2022-06-05 13:16:35.530000+00:00 | 2022-06-05 13:16:35.530000+00:00 | null | 2,990,277 | <p>While reading some papers about the Turing completeness of recurrent neural nets (for example: Turing computability with neural nets, Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof which was given there was not really that practical. For example the referenced paper needs a neural network which neuron activity must be of infinity exactness (to reliable represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really that practical.</p>
<p>But I started to wonder now if it does make sense at all to ask for Turing completeness. By the strict definition, no computer system nowadays is Turing complete because none of them will be able to simulate the infinite tape.</p>
<p>Interestingly, programming language specifications most often leave it open whether they are Turing complete or not. It all boils down to the question whether they will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course, all available implementations are limited here, so all practical implementations of programming languages are not Turing complete.</p>
<p>So, what you can say is that all computer systems are just equally powerful as finite state machines and not more.</p>
<p>And that brings me to the question: <strong>How useful is the term Turing complete at all?</strong></p>
<p>And back to neural nets: For any practical implementation of a neural net (including our own brain), they will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness, they are not Turing complete. <strong>So does the question if neural nets are Turing complete make sense at all?</strong></p>
<p>The question if they are as powerful as finite state machines was answered already much earlier (1954 by Minsky, the answer of course: yes) and also seems easier to answer. I.e., at least in theory, that was already the proof that they are as powerful as any computer.</p>
<hr>
<p>Some other questions which are more about what I really want to know:</p>
<ul>
<li><p>Is there any theoretical term which can say something more specific about the computational power of a computer? (given its limited memory space)</p></li>
<li><p>How can you compare the computational power of practical implementations of neural nets with computers? (Turing-completeness is not useful as argumented above.)</p></li>
</ul> | 2010-06-07 14:20:05.327000+00:00 | 2022-06-05 13:16:35.530000+00:00 | 2019-06-09 03:30:44.237000+00:00 | neural-network|finite-automata|turing-complete|state-machine | ['https://en.wikipedia.org/wiki/Abstract_machine', 'https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf', 'https://en.wikipedia.org/wiki/Chomsky_hierarchy', 'https://en.wikipedia.org/wiki/Regular_language', 'https://en.wikipedia.org/wiki/Pushdown_automaton', 'https://en.wikipedia.org/wiki/Context-free_language', 'https://en.wikipedia.org/wiki/Recursively_enumerable_language', 'http://binds.cs.umass.edu/papers/1992_Siegelmann_COLT.pdf', 'https://stats.stackexchange.com/questions/220907/meaning-and-proof-of-rnn-can-approximate-any-algorithm', 'https://arxiv.org/abs/1805.04908', 'https://en.wikipedia.org/wiki/Universal_approximation_theorem', 'https://en.wikipedia.org/wiki/Long_short-term_memory', 'https://en.wikipedia.org/wiki/Neural_Turing_machine', 'https://en.wikipedia.org/wiki/Differentiable_neural_computer', 'https://cs.stackexchange.com/questions/42311/how-is-the-computational-power-of-a-human-brain-comparing-to-a-turing-machine'] | 15 |
59,307,889 | <p>You can have a single model with multiple outputs. As you seem interested in image processing, detection models for example (like SSD, RFCN, etc.) have multiple outputs, one for classes, one for box coordinates. Take a look at page 3 of <a href="https://arxiv.org/pdf/1611.10012.pdf" rel="nofollow noreferrer">this article</a> for the "feature extractor"/"classifier" split.</p>
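<p>For example, a minimal Keras sketch of such a shared-backbone, multi-output model (purely illustrative layer sizes, class counts and names, not taken from the linked article):</p>
<pre><code>from tensorflow.keras import layers, Model, Input

inputs = Input(shape=(224, 224, 3))
# shared "feature extractor" part
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# separate heads, one per kind of prediction
vehicle_type = layers.Dense(5, activation="softmax", name="vehicle_type")(x)
color = layers.Dense(10, activation="softmax", name="color")(x)
make = layers.Dense(20, activation="softmax", name="make")(x)

model = Model(inputs, [vehicle_type, color, make])
model.compile(optimizer="adam",
              loss={"vehicle_type": "categorical_crossentropy",
                    "color": "categorical_crossentropy",
                    "make": "categorical_crossentropy"})
</code></pre>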
<p>In fact, you will have a first common part of your model (with mostly the convolution layers to extract your features).
Deeper in your model, you will have separate parts, one for each kind of prediction.</p> | 2019-12-12 15:25:33.400000+00:00 | 2019-12-12 15:25:33.400000+00:00 | null | null | 59,307,823 | <p>If I am building a model where I need to predict the vehicle, color of it, and make of it, then can I use all the labels for a single image and build my model around it.</p>
<p>Like for a single image of a vehicle which is a car (car1.jpg) will have labels like - Sedan(Make), Blue(Color) and Car(Type of vehicle). Can I make a single model for this or I will have to make 3 separate models for this problem.</p> | 2019-12-12 15:21:32.330000+00:00 | 2019-12-12 15:25:33.400000+00:00 | null | tensorflow|deep-learning|conv-neural-network|pytorch|multilabel-classification | ['https://arxiv.org/pdf/1611.10012.pdf'] | 1 |
56,865,631 | <p>It works only for Gaussian noise, but at almost every noise level.
It could be used to remove some other kinds of noise, but that's not guaranteed.
However, if you look at the MATLAB documentation, it says that it uses a pre-trained model called "DnCNN".
So I think it could be useful to look at the related paper:
<a href="https://arxiv.org/abs/1608.03981" rel="nofollow noreferrer">link to paper</a></p> | 2019-07-03 08:08:19.740000+00:00 | 2019-07-03 08:08:19.740000+00:00 | null | null | 56,864,278 | <p>What kind of noise removal training does the function 'denoisingNetwork' do, which is used as a part of 'denoiseImage'? Is it specific to some kind of noise and noise level or just a generalized network that gives an average output image?</p> | 2019-07-03 06:38:45.370000+00:00 | 2019-07-03 08:08:19.740000+00:00 | null | matlab|image-processing|deep-learning | ['https://arxiv.org/abs/1608.03981'] | 1 |
53,542,804 | <p>Unfortunately, Tensorflow does not provide the tooling for post-training per layer quantization in flatbuffer (tflite) yet, but only in protobuf. The only available way now is to introduce <a href="https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_vars" rel="nofollow noreferrer">fakeQuantization</a> layers in your graph and re-train / fine-tune your model on the train or a calibration set. This is called "<a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize" rel="nofollow noreferrer">Quantization-aware training</a>".</p>
<p>Once the fakeQuant layers are introduced, you can feed in the training set and TF is going to use them in the feed-forward pass as simulated quantization layers (fp32 datatypes that represent 8-bit values) and back-propagate using full-precision values. This way, you can recover the accuracy loss caused by quantization.</p>
<p>In addition, the fakeQuant layers are going to capture the ranges per layer or per channel through moving average and store them in min / max variables. </p>
<p>Later, you can extract the graph definition and get rid of the fakeQuant nodes through <code>freeze_graph</code> tool.</p>
<p>Finally, the model can be fed into the TFLite converter (fingers crossed it won't break) to extract the u8 tflite model with the captured ranges.</p>
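<p>A rough TF 1.x sketch of the rewrite step (illustrative only; the tiny model below is a stand-in for your MobileNet, and exact module paths such as <code>tf.contrib.quantize</code> vs. later APIs depend on your TF version):</p>
<pre><code>import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Stand-in for your model; in practice this is your fine-tuned MobileNet.
    images = tf.placeholder(tf.float32, [None, 224, 224, 3], name="input_1")
    net = tf.layers.conv2d(images, 8, 3, activation=tf.nn.relu)
    net = tf.reduce_mean(net, axis=[1, 2])
    logits = tf.layers.dense(net, 10)
    probs = tf.nn.softmax(logits, name="predictions/Softmax")

    # Insert fakeQuant nodes: they simulate 8-bit math in the forward pass and
    # record per-layer min/max ranges in variables while you keep training.
    tf.contrib.quantize.create_training_graph(input_graph=graph, quant_delay=2000)

# ... fine-tune as usual, then build an inference graph, call
# tf.contrib.quantize.create_eval_graph() on it, freeze it with freeze_graph,
# and finally run tflite_convert with --inference_type=QUANTIZED_UINT8 ...
</code></pre>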
<p>A very good white-paper, explaining all these is provided by Google here : <a href="https://arxiv.org/pdf/1806.08342.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1806.08342.pdf</a></p>
<p>Hope that helps.</p> | 2018-11-29 15:50:30.200000+00:00 | 2018-11-29 15:50:30.200000+00:00 | null | null | 53,500,185 | <p>I have used Keras to finetune MobileNet v1. Now I have <code>model.h5</code> and I need to convert it to TensorFlow Lite to use it in Android app. </p>
<p>I use TFLite conversion <a href="https://www.tensorflow.org/lite/convert/cmdline_examples" rel="nofollow noreferrer">script</a> <code>tflite_convert</code>. I can convert it without quantization but I need more performance so I need to make quantization.</p>
<p>If I run this script:</p>
<pre><code>tflite_convert --output_file=model_quant.tflite \
--keras_model_file=model.h5 \
--inference_type=QUANTIZED_UINT8 \
--input_arrays=input_1 \
--output_arrays=predictions/Softmax \
--mean_values=128 \
--std_dev_values=127 \
--input_shape="1,224,224,3"
</code></pre>
<p>It fails:</p>
<blockquote>
<p>F tensorflow/contrib/lite/toco/tooling_util.cc:1634] Array
conv1_relu/Relu6, which is an input to the DepthwiseConv operator
producing the output array conv_dw_1_relu/Relu6, is lacking min/max
data, which is necessary for quantization. If accuracy matters, either
target a non-quantized output format, or run quantized training with
your model from a floating point checkpoint to change the input graph
to contain min/max information. If you don't care about accuracy, you
can pass --default_ranges_min= and --default_ranges_max= for easy
experimentation.\nAborted (core dumped)\n"</p>
</blockquote>
<p>If I use <code>default_ranges_min</code> and <code>default_ranges_max</code> (so-called "dummy quantization"), it works, but it is only for debugging performance without accuracy, as described in the error log. </p>
<p>So what do I need to do to make the Keras model correctly quantizable? Do I need to find the best <code>default_ranges_min</code> and <code>default_ranges_max</code>? How? Or is it about changes in the Keras training phase?</p>
<p>Library versions:</p>
<pre><code>Python 3.6.4
TensorFlow 1.12.0
Keras 2.2.4
</code></pre> | 2018-11-27 12:53:02.227000+00:00 | 2018-11-29 15:50:30.200000+00:00 | 2018-11-27 12:55:02.483000+00:00 | python|tensorflow|keras|tensorflow-lite | ['https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_vars', 'https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize', 'https://arxiv.org/pdf/1806.08342.pdf'] | 3 |
71,388,842 | <p>As the text from the <code>lr_find</code> method says, you can visually inspect the plot and choose a learning rate in a range where the loss is falling prior to divergence. A higher learning rate in this range will converge faster. This is an idea called an "LR range test" from <a href="https://arxiv.org/pdf/1506.01186.pdf" rel="nofollow noreferrer">Leslie Smith's paper</a> that became popular through the <a href="https://github.com/fastai/fastai" rel="nofollow noreferrer">fastai</a> library and was later adopted by other libraries like <a href="https://github.com/amaiya/ktrain" rel="nofollow noreferrer">ktrain</a> and Amazon's <a href="https://mxnet.apache.org/versions/1.6/api/python/docs/api/gluon/index.html" rel="nofollow noreferrer">Gluon</a> library. The red dot in this plot is just a numerical approximation of where the loss is dramatically falling that may be useful for automated scenarios, but not necessarily the best. In this plot, the red dot represents the steepest part of the curve, which is one strategy to automatically select a learning rate from the plot (without visual inspection). Other automated strategies include taking the learning rate associated with the minimum loss and dividing by 10, and finding the learning rate associated with the <a href="https://github.com/fastai/fastai/pull/3377" rel="nofollow noreferrer">longest valley</a>.</p> | 2022-03-08 00:53:32.053000+00:00 | 2022-03-08 00:59:02.727000+00:00 | 2022-03-08 00:59:02.727000+00:00 | null | 71,318,065 | <p>I am using ktrain package to classify text. My experiment is shown as:</p>
<p><a href="https://i.stack.imgur.com/7rBep.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7rBep.png" alt="enter image description here" /></a></p>
<p>lr_find and lr_plot are functions in ktrain. They can be used to highlight the best learning rate, which is shown as the red dot in the plot.</p>
<p>I do not understand how to interpret this plot:</p>
<ol>
<li>How do I convert the log scale to a normal linear one?</li>
<li>Why is the red dot the best learning rate?</li>
</ol> | 2022-03-02 05:58:58.443000+00:00 | 2022-03-11 22:21:03.193000+00:00 | 2022-03-11 22:21:03.193000+00:00 | plot|nlp|loss|learning-rate|ktrain | ['https://arxiv.org/pdf/1506.01186.pdf', 'https://github.com/fastai/fastai', 'https://github.com/amaiya/ktrain', 'https://mxnet.apache.org/versions/1.6/api/python/docs/api/gluon/index.html', 'https://github.com/fastai/fastai/pull/3377'] | 5 |
62,728,602 | <p>I know the question is quite old, but I would still like to give my 2 cents.
First of all, I have to point out that the architecture you are referring to here is not that of U-Net.</p>
<p><a href="https://arxiv.org/pdf/1505.04597.pdf" rel="nofollow noreferrer">U-Net</a> starts with a double convolutional layer with 64 filters each. In the above example, your model architecture starts with a filter size of 32. The padding is not supposed to be "same" but "valid", and there are a few other issues, such as not cropping the contracting layer's output before concatenating it to the expansive part.</p>
<p>You will need to have the input layer as below.</p>
<pre><code>inputs = layers.Input(shape=(512, 512, 1))
</code></pre>
<p>And you will have to build the model in the following fashion.</p>
<pre><code>model.build(input_shape=(None, 1, 512, 512))
</code></pre>
<p>That means you will have to orient your input in the above-mentioned way.
Your GitHub link is not working, and I was unable to see the input size of the image as you have used variables in place of numbers. So this is my guess here.</p>
<p>If you are interested in how the actual U-Net model should look, you can see the code below.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

def unet_model():
# declaring the input layer
# Input layer expects an RGB image, in the original paper the network consisted of only one channel.
inputs = layers.Input(shape=(572, 572, 3))
# first part of the U - contracting part
c0 = layers.Conv2D(64, activation='relu', kernel_size=3)(inputs)
c1 = layers.Conv2D(64, activation='relu', kernel_size=3)(c0) # This layer for concatenating in the expansive part
c2 = layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='valid')(c1)
c3 = layers.Conv2D(128, activation='relu', kernel_size=3)(c2)
c4 = layers.Conv2D(128, activation='relu', kernel_size=3)(c3) # This layer for concatenating in the expansive part
c5 = layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='valid')(c4)
c6 = layers.Conv2D(256, activation='relu', kernel_size=3)(c5)
c7 = layers.Conv2D(256, activation='relu', kernel_size=3)(c6) # This layer for concatenating in the expansive part
c8 = layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='valid')(c7)
c9 = layers.Conv2D(512, activation='relu', kernel_size=3)(c8)
c10 = layers.Conv2D(512, activation='relu', kernel_size=3)(c9) # This layer for concatenating in the expansive part
c11 = layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='valid')(c10)
c12 = layers.Conv2D(1024, activation='relu', kernel_size=3)(c11)
c13 = layers.Conv2D(1024, activation='relu', kernel_size=3, padding='valid')(c12)
# We will now start the second part of the U - expansive part
t01 = layers.Conv2DTranspose(512, kernel_size=2, strides=(2, 2), activation='relu')(c13)
crop01 = layers.Cropping2D(cropping=(4, 4))(c10)
concat01 = layers.concatenate([t01, crop01], axis=-1)
c14 = layers.Conv2D(512, activation='relu', kernel_size=3)(concat01)
c15 = layers.Conv2D(512, activation='relu', kernel_size=3)(c14)
t02 = layers.Conv2DTranspose(256, kernel_size=2, strides=(2, 2), activation='relu')(c15)
crop02 = layers.Cropping2D(cropping=(16, 16))(c7)
concat02 = layers.concatenate([t02, crop02], axis=-1)
c16 = layers.Conv2D(256, activation='relu', kernel_size=3)(concat02)
c17 = layers.Conv2D(256, activation='relu', kernel_size=3)(c16)
t03 = layers.Conv2DTranspose(128, kernel_size=2, strides=(2, 2), activation='relu')(c17)
crop03 = layers.Cropping2D(cropping=(40, 40))(c4)
concat03 = layers.concatenate([t03, crop03], axis=-1)
c18 = layers.Conv2D(128, activation='relu', kernel_size=3)(concat03)
c19 = layers.Conv2D(128, activation='relu', kernel_size=3)(c18)
t04 = layers.Conv2DTranspose(64, kernel_size=2, strides=(2, 2), activation='relu')(c19)
crop04 = layers.Cropping2D(cropping=(88, 88))(c1)
concat04 = layers.concatenate([t04, crop04], axis=-1)
c20 = layers.Conv2D(64, activation='relu', kernel_size=3)(concat04)
c21 = layers.Conv2D(64, activation='relu', kernel_size=3)(c20)
# This is based on our dataset. The output channels are 3, think of it as each pixel will be classified
# into three classes, but I have written 4 here, as I do padding with 0, so we end up have four classes.
outputs = layers.Conv2D(4, kernel_size=1)(c21)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name="u-netmodel")
return model
</code></pre> | 2020-07-04 11:35:28.107000+00:00 | 2020-07-04 23:25:24.927000+00:00 | 2020-07-04 23:25:24.927000+00:00 | null | 47,645,797 | <p>
I am attempting to recreate a U-Net using the Keras model API. I have collected images of cells and their segmented versions, and I am attempting to train a model with them. The goal is that I could then upload a different cell image and get the segmented version as a prediction.
</p>
<p><a href="https://github.com/JamilGafur/Unet" rel="nofollow noreferrer">https://github.com/JamilGafur/Unet</a></p>
<pre><code>from __future__ import print_function
from matplotlib import pyplot as plt
from keras import losses
import os
from keras.models import Model
from keras.layers import Input, concatenate, Conv2D, MaxPooling2D, Conv2DTranspose
from keras.optimizers import Adam
import cv2
import numpy as np
# training data
image_location = "C:/Users/JamilG-Lenovo/Desktop/train/"
image = image_location+"image"
label = image_location +"label"
class train_data():
def __init__(self, image, label):
self.image = []
self.label = []
for file in os.listdir(image):
if file.endswith(".tif"):
self.image.append(cv2.imread(image+"/"+file,0))
for file in os.listdir(label):
if file.endswith(".tif"):
#print(label+"/"+file)
self.label.append(cv2.imread(label+"/"+file,0))
def get_image(self):
return np.array(self.image)
def get_label(self):
return np.array(self.label)
def get_unet(rows, cols):
inputs = Input((rows, cols, 1))
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2)
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool3)
conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool4)
conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5)
up6 = concatenate([Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(conv5), conv4], axis=3)
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(up6)
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv6)
up7 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(conv6), conv3], axis=3)
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(up7)
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv7)
up8 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv7), conv2], axis=3)
conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(up8)
conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv8)
up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3)
conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9)
conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9)
conv10 = Conv2D(1, (1, 1), activation='sigmoid')(conv9)
model = Model(inputs=[inputs], outputs=[conv10])
model.compile(optimizer=Adam(lr=1e-5), loss = losses.mean_squared_error)
return model
def main():
# load all the training images
train_set = train_data(image, label)
# get the training image
train_images = train_set.get_image()
# get the segmented image
train_label = train_set.get_label()
print("type of train_images" + str(type(train_images[0])))
print("type of train_label" + str(type(train_label[0])))
print('\n')
print("shape of train_images" + str(train_images[0].shape))
print("shape of train_label" + str(train_label[0].shape))
plt.imshow(train_images[0], interpolation='nearest')
plt.title("actual image")
plt.show()
plt.imshow(train_label[0], interpolation='nearest')
plt.title("segmented image")
plt.show()
# create a UNet (512,512)
unet = get_unet(train_label[0].shape[0],
train_label[0].shape[1])
# look at the summary of the unet
unet.summary()
#-----------errors start here-----------------
# fit the unet with the actual image, train_images
# and the output, train_label
unet.fit(train_images, train_label, batch_size=16, epochs=10)
main()
</code></pre>
<p>When I attempt to run it, I expect it to fit over 10 epochs, but instead it throws the following error:</p>
<pre><code>File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py",
line 144, in _standardize_input_data str(array.shape))
ValueError: Error when checking input: expected input_5 to have shape (None,
512, 512, 1) but got array with shape (1, 30, 512, 512)
</code></pre>
<p>If someone could tell me what I did wrong, what the code should be, or point me in the right direction, I would much appreciate it.</p>
<p>Thank you!</p> | 2017-12-05 03:50:20.070000+00:00 | 2020-07-04 23:25:24.927000+00:00 | null | python-3.x|machine-learning|computer-vision|keras|keras-layer | ['https://arxiv.org/pdf/1505.04597.pdf'] | 1 |
16,593,009 | <p>There is a <a href="http://arxiv.org/abs/0908.3030v3" rel="noreferrer">Math.BigDecimal implementation of core mathematical functions</a> with source code available from the Cornell University Library <a href="http://arxiv.org/src/0908.3030v3/anc" rel="noreferrer">here</a> (also you can download the library as a <a href="http://arxiv.org/src/0908.3030v3" rel="noreferrer">tar.gz</a>). Here is a sample of the library use:</p>
<pre><code>import org.nevec.rjm.*;
import java.math.BigDecimal;
public class test {
public static void main(String... args) {
BigDecimal a = new BigDecimal("1.21");
BigDecimal b = new BigDecimal("0.5");
System.out.println(BigDecimalMath.pow(a, b).toString());
}
}
</code></pre>
<p>Prints out: </p>
<pre><code>1.1
</code></pre>
<hr>
<h1>Update</h1>
<p>The license information is now clear in the May 2015 update:</p>
<blockquote>
<p>The full source code is made available under the LGPL v3.0.</p>
</blockquote> | 2013-05-16 16:40:21+00:00 | 2018-04-04 02:34:58.807000+00:00 | 2018-04-04 02:34:58.807000+00:00 | null | 16,441,769 | <p>Java's <code>BigDecimal.pow(int)</code> method only accepts an integer parameter, no <code>BigDecimal</code> parameter. </p>
<p>Is there a library, like Apache's commons-lang, that supports <code>BigDecimal.pow(BigDecimal)</code>? It should be able to do calculate <code>"1.21".pow("0.5")</code> to return <code>"1.1"</code>.</p> | 2013-05-08 13:16:33.687000+00:00 | 2020-05-25 23:06:41.447000+00:00 | 2013-05-08 15:28:09.483000+00:00 | java | ['http://arxiv.org/abs/0908.3030v3', 'http://arxiv.org/src/0908.3030v3/anc', 'http://arxiv.org/src/0908.3030v3'] | 3 |
62,246,627 | <p>There is no straightforward way to show progress % with <code>mgr-pdf-viewer-react</code>, so you have to roll your own.</p>
<ul>
<li>Use a <code>ref</code> and attach it to the <code>PDF</code> component. The library maintains a state called <code>pages</code> which is set after the PDF document is loaded. Read the <code>pages</code> value through the <code>ref</code> and show a progress bar until <code>pages</code> has a value.</li>
</ul>
<p>Use some library for the progress bar; I have used <a href="https://github.com/abdennour/react-progressbar" rel="nofollow noreferrer">react-progressbar</a>.</p>
<p><a href="https://codesandbox.io/s/pdf-loading-placeholder-lb7eh?file=/src/App.js" rel="nofollow noreferrer">Working demo</a></p>
<pre class="lang-js prettyprint-override"><code>import PDFViewer from "mgr-pdf-viewer-react";
import ProgressBar from "react-progressbar";
export default function App() {
const [currentProgress, setCurrentProgress] = useState(0);
useEffect(() => {
progress();
}, []);
const ref = useRef(null);
const progress = () => {
let step = 5; // the smaller this is the slower the progress bar
let current_progress = 0;
const interval = setInterval(function() {
current_progress += step;
setCurrentProgress(current_progress);
let progress =
Math.round((Math.atan(current_progress) / (Math.PI / 2)) * 100 * 1000) /
1000;
// console.log(ref.current);
if (ref.current && ref.current.state.pages) {
console.log("cleared");
current_progress = 100;
setCurrentProgress(current_progress);
clearInterval(interval);
} else if (progress >= 70) {
step = 0.1;
}
}, 100);
};
return (
<div className="App">
<h1>Hello CodeSandbox</h1>
<p>{currentProgress + " %"}</p>
{<ProgressBar completed={currentProgress} />}
<PDFViewer
ref={ref}
document={{
url: "https://arxiv.org/pdf/quant-ph/0410100.pdf"
}}
// loader={<h2 style={{ color: "#fa5b35" }}>Custom loader element</h2>}
// loader={<ProgressBar completed={currentProgress} />}
/>
</div>
);
}
</code></pre> | 2020-06-07 14:15:35.093000+00:00 | 2020-06-07 14:15:35.093000+00:00 | null | null | 62,245,720 | <p>I am working on a pdf reader in my reactJS app. I need to show the loading indicator of a pdf file. My pdf reader only loads once the file is completely loaded. As you can see in <a href="http://balpaathmala.olenepal.org/books/read/c0286efe-adf5-49de-ad89-e94ddbc67a66" rel="nofollow noreferrer">this link</a>.</p>
<p>I think loading the pdf file separately first while showing the loading percent indicator and then only loading the pdf loader component would do the trick. </p>
<p>Also a useState to set loaded state and loaded percent would probably be a good idea to go about it. Please help me implement this and feel free to suggest anything else. </p>
<p>My current code for this is as follows:</p>
<pre><code>import PDFViewer from "mgr-pdf-viewer-react"
function Reader({ bookDetail }) {
const readerRef = useRef(null)
return (
bookDetail !== null && (
// ...
<PDFViewer
page={1}
document={{
url: bookDetail.bookUrl,
}}
/>
// ...
)
)
}
export default Reader
</code></pre> | 2020-06-07 13:02:54.157000+00:00 | 2020-06-07 14:15:35.093000+00:00 | null | javascript|reactjs|pdf|state|loader | ['https://github.com/abdennour/react-progressbar', 'https://codesandbox.io/s/pdf-loading-placeholder-lb7eh?file=/src/App.js'] | 2 |
37,429,445 | <p>1) You can bypass it, <em>but</em> then you will work in <code>D</code> dimensions, not <code>N</code> as you say, where <code>N << D</code>. That means that the algorithm has to adapt to <code>D</code> dimensions as well.</p>
<hr>
<p>2) <em>No</em>.</p>
<p>Read <a href="http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.html" rel="nofollow">SIFT from openCV</a>:</p>
<blockquote>
<ol start="4">
<li>Keypoint Descriptor</li>
</ol>
<p>Now keypoint descriptor is created. A 16x16 neighbourhood around the
keypoint is taken. It is devided into 16 sub-blocks of 4x4 size. For
each sub-block, 8 bin orientation histogram is created. So a total of
128 bin values are available. It is represented as a vector to form
keypoint descriptor. In addition to this, several measures are taken
to achieve robustness against illumination changes, rotation etc.</p>
</blockquote>
<p>Here is how I have it in my mind, hope that will be enough:</p>
<p><strong>LSH takes as input a point set of <code>n</code> points, where every point lies in <code>d</code> dimensions.</strong></p>
<p>So a query is a point in <code>d</code> dimensions, and the goal is to find its NN<sup>*</sup>.</p>
<hr>
<ol>
<li><p>Now every point represents an image descriptor. So we have <code>n</code> images in our
dataset.</p></li>
<li><p>The query, which is also a point (i.e. a vector with <code>d</code>
coordinates), represents another image descriptor.</p></li>
<li><p>We are seeking to match (i.e. to find the Nearest Neighbor) the
query image descriptor with an image descriptor from our dataset.</p></li>
</ol>
<p>So the conversion you are talking about is applied in a vector, <em>not</em> a matrix.</p>
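<p>To make that concrete, here is a minimal random-projection LSH sketch (my own illustration in Python/NumPy, not from the paper): each 128-dimensional SIFT descriptor, i.e. each row of the <code>desc x 128</code> matrix, is hashed to an <code>N</code>-bit binary code independently.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 128))   # one image: 500 SIFT descriptors, 128-D each

N = 16                                      # number of hash functions = bits per code
hyperplanes = rng.normal(size=(128, N))     # random hyperplanes define the hash bits

# each descriptor (a vector, not the whole matrix) gets its own N-bit code
codes = (descriptors @ hyperplanes > 0).astype(np.uint8)   # shape (500, 16)
</code></pre>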
<hr>
<p><em>Edit</em>:</p>
<p>Moreover, from our <a href="http://arxiv.org/pdf/1603.09596.pdf" rel="nofollow">High-dimensional approximate nearest neighbor: k-d Generalized Randomized Forests</a> paper, see this in the <em>Experiments</em> section:</p>
<blockquote>
<p>SIFT is a 128-dimensional vector that describes a local image patch by
histograms of local gradient orientations.</p>
</blockquote>
<hr>
<p>*<sub>or the <a href="https://en.wikipedia.org/wiki/Fixed-radius_near_neighbors" rel="nofollow">Fixed-radius near neighbors</a> problem</sub></p> | 2016-05-25 06:34:15.917000+00:00 | 2016-05-25 09:32:37.413000+00:00 | 2016-05-25 09:32:37.413000+00:00 | null | 37,408,722 | <p>I read some paper about LSH and I know that is used for solving the approximated k-NN problem. We can divide the algorithm in two parts:</p>
<ol>
<li><p>Given a vector in <code>D</code> dimensions (where <code>D</code> is big) of any value, translate it with a set of <code>N</code> (where <code>N<<D</code>) hash functions to a binary vector in <code>N</code> dimensions.</p></li>
<li><p>Using hamming distance, apply some search technique on the set of given binary codes obtained from phase 1 to find the k-NN.</p></li>
</ol>
<p>The keypoint is that computing the hamming distance for vectors in <code>N</code> dimensions is fast using XOR.</p>
<p>Anyway, I have two questions:</p>
<ol>
<li><p>Point 1. is still necessary if we use a binary descriptor, like ORB? Since ORB's descriptors are already binaries and we use the hamming distance for comparing them, why we should perform the first point?</p></li>
<li><p>How the conversion happens for images described by SIFT? Each SIFT descriptor is 128 bits and each image is described by a set of descriptors. So we have matrix <code>descX128</code> (where <code>desc</code> is the number of descriptors used), while LSH usually accept as input a vector.</p></li>
</ol> | 2016-05-24 08:55:07.007000+00:00 | 2016-05-25 09:32:37.413000+00:00 | 2016-05-25 06:34:57.700000+00:00 | image-processing|sift|nearest-neighbor|orb|locality-sensitive-hash | ['http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.html', 'http://arxiv.org/pdf/1603.09596.pdf', 'https://en.wikipedia.org/wiki/Fixed-radius_near_neighbors'] | 3 |
61,941,655 | <p>Maybe you should take a look at Graph Neural Networks (especially Spatial-Temporal Graph Networks). They use temporal information about graphs and their adjacency matrices to predict future node states, such as the values at the next time step.</p>
<p>You can read <a href="https://arxiv.org/pdf/1901.00596.pdf" rel="nofollow noreferrer">this</a> survey paper as a start point and follow its cited works therefore.</p> | 2020-05-21 18:46:31.540000+00:00 | 2020-05-21 18:46:31.540000+00:00 | null | null | 61,925,599 | <p>There is an adjacent matrix dataset that is based on time series. I would like to know if it is possible to build a neural network model to predict tn time point's matrix by using the previous time-series data. In my opinion, traditional models such as CNN may not fit for the sparse matrix graph. </p> | 2020-05-21 01:20:08.730000+00:00 | 2020-05-21 18:46:31.540000+00:00 | null | python|neural-network|pytorch|artificial-intelligence | ['https://arxiv.org/pdf/1901.00596.pdf'] | 1 |
64,359,267 | <ol>
<li><p>I think the reason you have [32, 248, 248, 3] instead of the [32, 256, 256, 3] that you wanted is padding. Briefly, when you do a convolution, the convolutional window (a.k.a. kernel) doesn't necessarily fit perfectly with the image, so the image has zeroes padded on the outside to make it fit. Based on your code, padding is unspecified, so try adding padding='same' to your conv2d layers.</p>
</li>
<li><p>You're correct here about what 64 and [5, 5] signify. The number of filters has no bearing here. The kernel is related to, though not the cause of, the gap in dimensions that you're experiencing, if my answer in (1) is correct. However, you should ideally be able to set it to 3, 5 or whatever, without issue. Refer to (1).</p>
</li>
<li><p>32 corresponds to samples, as you said. In other words, it's the batch size. The image size itself is 256 x 256 pixels (height and width). The 3 channels correspond to the red, green and blue intensities of each pixel. If the image were grayscale, there would be only 1 channel. In TensorFlow, the convention is (N,H,W,C), meaning Number of samples, Height, Width, Channels. Note that the number of channels is typically 3 at input and output, but it changes in the hidden layers according to the number of filters you use.</p>
</li>
<li><p>Decreasing and then increasing the height and width of the images is good for certain tasks, like image segmentation, where you're sort of reducing the information in the image. We call the first half of the network the encoding stage, and the second half the decoding stage. I think it eliminates unnecessary info in the first half by making the image dimensions smaller, and then restores it to original size in the second half. That's just my interpretation though. A well-known model that does this is called U-net. <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">https://arxiv.org/abs/1505.04597</a></p>
</li>
</ol> | 2020-10-14 18:16:08.690000+00:00 | 2020-10-14 18:16:08.690000+00:00 | null | null | 64,358,575 | <p>Please go through the code below. Image size is (32, 256 , 256 , 6). I don't think it is necessary to know what net is and what it does. My question is purely analytical.</p>
<pre><code> net = slim.conv2d(input_images, 64, [5, 5], stride=1, scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1') # image size is (batch_size, 256 , 256 , 6)
net = slim.conv2d(net, 128, [5, 5], stride=1, scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.conv2d(net, 256, [3, 3], stride=1, scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = tf.image.resize_bilinear(net, [64,64])
net = slim.conv2d(net, 256, [3, 3], stride=1, scope='conv4')
net = tf.image.resize_bilinear(net, [128,128])
net = slim.conv2d(net, 128, [3, 3], stride=1, scope='conv5')
net = tf.image.resize_bilinear(net, [256,256]) #(32 , 256 , 256 , 3)
net = slim.conv2d(net, 64, [5, 5], stride=1, scope='conv6')
net = slim.conv2d(net, 3, [5, 5], stride=1, activation_fn=tf.tanh,
normalizer_fn=None, scope='conv7') # 32*248*248*3
</code></pre>
<p>The dimension of net turns out to be (32, 248, 248, 3) according to me. But apparently according to the paper it should be (32, 256, 256, 3).</p>
<p>Ques1) Where did I go wrong?</p>
<p>Ques2) In slim.conv2d what exactly is 64 and [5,5]? I think they are the number of filters and 5*5 is the dimension of the kernel. But since I am getting the wrong dimensions am I wrong here too?</p>
<p>Ques3) When you say (32,256,256,3) does it mean that there are 32 samples, 256*256 is the pixel strength and 3 are the number of channels?</p>
<p>Ques4) I know this might be difficult to answer but could someone tell me what is the need to first decrease the dimensions and then increase them again? A link to the concept shall be much appreciated.</p> | 2020-10-14 17:29:06.020000+00:00 | 2020-10-14 18:16:08.690000+00:00 | null | tensorflow|image-processing|conv-neural-network | ['https://arxiv.org/abs/1505.04597'] | 1 |
67,529,291 | <p>You should not worry about the scale of your loss function values. Remember, the loss function is simply a measure of how far off your network is. However, you can always scale this any way you like. What does matter is the <em>trend</em> in the loss over epochs: you want it to be a smooth decrease, which is what your second figure shows.</p>
<p>Losses are just that: an arbitrary number that's only meaningful in a relative sense, for the same network, for the same dataset. It has no other meaning. In fact, losses do not correspond well with metrics either: see Huang et al., 2019.</p>
<blockquote>
<p>as they've been tested on other datasets and generalized very well,</p>
</blockquote>
<p>That's what matters.</p>
<blockquote>
<p>but the screwy loss function isn't very nice to report.</p>
</blockquote>
<p>You could scale these losses by 1,000. They're only meaningful in a relative sense.</p>
<p><strong>References:</strong></p>
<ul>
<li><a href="https://arxiv.org/pdf/1905.05895.pdf" rel="nofollow noreferrer">Huang et al., 2019. Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment</a></li>
</ul> | 2021-05-14 05:06:33.297000+00:00 | 2021-05-14 05:10:29.020000+00:00 | 2021-05-14 05:10:29.020000+00:00 | null | 67,528,138 | <p>I'm training some <code>CNN</code> networks on proprietary data using Tensorflow. We have boatloads of data, and it seems that these models are capable of learning a great deal of information about classifying data (all binary classifications so far).</p>
<p>Sometimes, the train/test accuracy curves can be remarkably good, upwards of 95% in some cases. However, the loss functions are suspicious in terms of scale. Visually, they look alright and about how I'd expect for something performing well, but it isn't the correct order of magnitude.</p>
<p>Can anyone tell me how this scaling is <em>usually</em> appropriately done in TF/Keras? I'm confident in these models, as they've been tested on other datasets and generalized very well, but the screwy loss function isn't very nice to report.</p>
<p>The learning rate is on the order of 0.0001. <code>L1</code> and <code>L2</code> are using the same lambda value, which I've had the most success with when providing to the model as somewhere between 0.01 and 0.03. I'm currently not using any dropout.</p>
<p>I'm including photos of a particularly highly variant accuracy run. This isn't always the case, but it does happen sometimes. I suspect that this problem is partly due to outlier data, or possibly the regularization values.</p>
<p><a href="https://i.stack.imgur.com/d1ytN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d1ytN.png" alt="train/test accuracy" /></a></p>
<p><a href="https://i.stack.imgur.com/xuCiv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xuCiv.png" alt="train/test loss" /></a></p>
<p>Here are relevant code snippets.</p>
<pre><code> model = tf.keras.models.Sequential()
if logistic_regression is not True:
for i in range(depth):
# 1
model.add(Conv2D(
15,
kernel_size=(10, 3),
strides=1,
padding='same',
activation='relu',
data_format='channels_last',
kernel_regularizer=tf.keras.regularizers.l1_l2(
l1=regularizer_param,
l2=regularizer_param)
))
model.add(MaxPooling2D(
pool_size=(3, 3),
strides=1,
padding='valid',
data_format='channels_last'))
model.add(BatchNormalization())
if dropout is not None:
model.add(Dropout(dropout))
# flatten
model.add(Flatten(data_format='channels_last'))
model.add(Dense(
len(self.groups),
# use_bias=True if initial_bias is not None else False,
# bias_initializer=initial_bias
# if initial_bias is not None
# else None,
kernel_regularizer=tf.keras.regularizers.l1_l2(
l1=regularizer_param,
l2=regularizer_param)
))
</code></pre>
<pre><code> model.compile(
optimizer=tf.keras.optimizers.Adagrad(
learning_rate=learning_rate,
initial_accumulator_value=0.1,
epsilon=1e-07,
name='Adagrad'),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
</code></pre> | 2021-05-14 01:53:44.673000+00:00 | 2021-05-20 17:35:32.347000+00:00 | 2021-05-20 17:35:32.347000+00:00 | python|tensorflow|machine-learning|keras | ['https://arxiv.org/pdf/1905.05895.pdf'] | 1 |
63,332,894 | <p>Although more memory efficient, depthwise 2D convolutions can indeed be slower than regular 2D convolutions.</p>
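<p>One rough way to see why (a back-of-the-envelope sketch of the "arithmetic intensity" argument quoted below, my own illustration): a depthwise convolution does far less math per byte of data it moves, so it is much more likely to be memory-bound.</p>
<pre class="lang-py prettyprint-override"><code>def arithmetic_intensity(h, w, c, k, depthwise):
    """Very rough FLOPs per byte moved (float32), ignoring caches and reuse."""
    flops = 2 * h * w * k * k * c * (1 if depthwise else c)
    weights = k * k * c * (1 if depthwise else c)
    bytes_moved = 4 * (h * w * c      # read input
                       + h * w * c    # write output
                       + weights)     # read weights
    return flops / bytes_moved

print(arithmetic_intensity(64, 64, 64, 3, depthwise=False))  # ~135 FLOPs per byte
print(arithmetic_intensity(64, 64, 64, 3, depthwise=True))   # ~2 FLOPs per byte
</code></pre>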
<p><a href="https://arxiv.org/pdf/1803.10615.pdf" rel="nofollow noreferrer">Gholami et al.</a> (SqueezeNext: Hardware-Aware Neural Network Design) states that:</p>
<blockquote>
<p>The reason for this is the inefficiency of depthwise-separable convolution in terms of hardware performance, which is due to its poor arithmetic intensity (ratio of compute to memory operations).</p>
</blockquote> | 2020-08-10 01:12:15.730000+00:00 | 2020-08-10 01:12:15.730000+00:00 | null | null | 63,332,819 | <p>I'm reimplementing <a href="https://arxiv.org/abs/1704.04861" rel="nofollow noreferrer">MobileNet</a>, but I find the depthwise convolution is no faster than conv2d(I haven't included the 1 by 1 pointwise convolution yet). Here's the test code run on colab: <a href="https://colab.research.google.com/drive/1nBuYrmmH5kM0jbtIZdsuiG6uJbU6mpA7?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1nBuYrmmH5kM0jbtIZdsuiG6uJbU6mpA7?usp=sharing</a></p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import time
x = tf.random.normal((2, 64, 64, 3))
conv = tf.keras.layers.Conv2D(16, 3, strides=1, padding='same')
dw = tf.keras.layers.DepthwiseConv2D(3, padding='same')
start = time.time()
conv(x)
print('conv2d:', time.time() - start) # approximate 0.0036s
start = time.time()
dw(x)
print('dw:', time.time() - start) # approximate 0.0034s
%timeit conv(x) # 1000 loops, best of 3: 225 µs per loop
%timeit dw(x) # 1000 loops, best of 3: 352 µs per loop
</code></pre>
<p>I also try it on my laptop using CPUs only, similar results are spotted. Why would <code>DepthwiseConv2D</code> be slower than <code>Conv2D</code>? Did I make any mistakes?</p> | 2020-08-10 00:59:07.543000+00:00 | 2020-08-10 01:12:15.730000+00:00 | 2020-08-10 01:04:32.783000+00:00 | python|tensorflow|conv-neural-network | ['https://arxiv.org/pdf/1803.10615.pdf'] | 1 |
24,441,861 | <p>When talking about convergence for SOMs, for a given map size (n x m), you want to know whether sufficient iterations of the algorithm have run to ensure the map is "stable". This means, loosely speaking: do new inputs (observations) to the map get placed at the same neurons / codebook vectors if the map is retrained many times? (Ignoring the fact that the arrangement of the map may switch around each time it is trained, which is fine as long as the clusters are still arranged in a stable way.) </p>
<p>To assist in answering the question of whether enough iterations have run, see the academic papers listed below. Both papers also touch on the issue of what map size is appropriate (what n x m values help ensure convergence of the SOM?).</p>
<p>One of the traditional approaches that has been popular in papers is given here:</p>
<p><a href="http://arxiv.org/pdf/math/0701144.pdf" rel="nofollow">Statistical tools to assess the reliability of self-organizing maps (Bodt, Cottrell, Verleysen)</a></p>
<p>More recently, this method has come about, which looks rather promising:</p>
<p><a href="http://homepage.cs.uri.edu/faculty/hamel/pubs/theses/ms-thesis-ben.pdf" rel="nofollow">A CONVERGENCE CRITERION FOR SELF-ORGANIZING MAPS
, masters thesis, Benjamin h. ott (University of Rhode island)</a></p>
<p>This thesis, in my opinion, was really well written and a pleasure to read. What's also nice is that this research has been written up as a SOM convergence test in a (rather unknown) package in R, called <code>popsom</code>. Check it out: </p>
<p><a href="http://cran.r-project.org/web/packages/popsom/index.html" rel="nofollow">popsom</a></p> | 2014-06-26 23:33:35.550000+00:00 | 2014-06-26 23:52:07.970000+00:00 | 2014-06-26 23:52:07.970000+00:00 | null | 2,557,289 | <p>I like to stop the execution when Batch SOM becomes converged.
What error function can I use to determine the convergence?</p> | 2010-03-31 23:54:58.433000+00:00 | 2014-06-26 23:52:07.970000+00:00 | 2014-06-26 23:41:55.597000+00:00 | algorithm|machine-learning|som|self-organizing-maps|convergence | ['http://arxiv.org/pdf/math/0701144.pdf', 'http://homepage.cs.uri.edu/faculty/hamel/pubs/theses/ms-thesis-ben.pdf', 'http://cran.r-project.org/web/packages/popsom/index.html'] | 3 |
47,833,219 | <p>Though the concepts may be abstract, they have found good use in recent times in machine learning / artificial intelligence.</p>
<p>This might serve as a good motivation on practical need for these theoretic concepts. In summary, you want to estimate how well your model (LSTM, CNN for example) does in approximating the target output ( using for example cross entropy or Kullback-Leibler Divergence from information theory). (check <a href="https://openreview.net/pdf?id=ry_WPG-A-" rel="nofollow noreferrer">on information bottleneck</a> and <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7133169&casa_token=Cuofu0kgJg0AAAAA:JeCJJd34wAMlvYCReIDrJc4RjNokpWs6mnjf1Toz1ffrujgcvf0X7EBMcNO4fuQFEYwsvYBDuiPV&tag=1" rel="nofollow noreferrer">deep learning and information Bottleneck principle</a> for perspectives on explaining deep learning through information theory)</p>
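<p>To make those quantities slightly less abstract, here is a tiny numeric sketch (my own illustration, assuming strictly positive distributions):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def entropy(p):                       # Shannon entropy H(p), in bits
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log2(p)).sum())

def kl_divergence(p, q):              # D(p || q): extra bits paid for modelling p with q
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float((p * np.log2(p / q)).sum())

p = [0.5, 0.25, 0.25]                 # "true" distribution
q = [1/3, 1/3, 1/3]                   # a model's approximation of it
print(entropy(p))                     # 1.5 bits
print(kl_divergence(p, q))            # ~0.085 bits
</code></pre>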
<p>In addition, you won't build a useful <a href="https://rads.stackoverflow.com/amzn/click/com/B00FB33A7Q" rel="nofollow noreferrer" rel="nofollow noreferrer">communication</a> or <a href="https://rads.stackoverflow.com/amzn/click/com/B00EANELDA" rel="nofollow noreferrer" rel="nofollow noreferrer">networked system</a> without some analysis of the channel capacity and properties.</p>
<p>In essence, it might look theoretical, but it is at the heart of the present communication age.</p>
<p>To get a more elaborate view of what I mean, I invite you to watch this ISIT lecture: <a href="https://www.youtube.com/watch?v=O_uBxFGk-U4&t=1563s" rel="nofollow noreferrer">The Spirit of Information Theory</a> by Prof. David Tse.</p>
<p>Check also the paper <a href="https://monoskop.org/images/2/2f/Shannon_Claude_E_1956_The_Bandwagon.pdf" rel="nofollow noreferrer">Bandwagon</a> by Claude Shannon himself, explaining when information theory might be useful and when it is not appropriate for use.</p>
<p>This <a href="https://arxiv.org/abs/1802.05968" rel="nofollow noreferrer">paper</a> helps you get you started and for comprehensive details read <a href="https://rads.stackoverflow.com/amzn/click/com/0471062596" rel="nofollow noreferrer" rel="nofollow noreferrer">Elements of Information theory</a>.</p> | 2017-12-15 13:18:54.680000+00:00 | 2020-07-06 15:56:11.500000+00:00 | 2020-07-06 15:56:11.500000+00:00 | null | 2,306,579 | <p>Information theory comes into play where ever encoding & decoding is present. For example: compression(multimedia), cryptography.</p>
<p>In Information Theory we encounter terms like "Entropy", "Self Information", "Mutual Information", and the entire subject is based on these terms, which just sound like nothing more than abstractions. Frankly, they don't really make any sense.</p>
<p>Is there any book/material/explanation (if you can) which explains these things in a practical way? </p>
<p><strong>EDIT:</strong></p>
<blockquote>
<p><a href="http://amzn.com/0486240614" rel="noreferrer">An Introduction to Information Theory: symbols, signals & noise by John Robinson Pierce</a> is <strong>The Book</strong> that explains it the way I want (practically). Its too good. I started reading it.</p>
</blockquote> | 2010-02-21 16:51:03.870000+00:00 | 2020-07-06 15:56:11.500000+00:00 | 2010-02-23 21:35:38.743000+00:00 | encryption|image-processing|compression|cryptography|information-theory | ['https://openreview.net/pdf?id=ry_WPG-A-', 'https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7133169&casa_token=Cuofu0kgJg0AAAAA:JeCJJd34wAMlvYCReIDrJc4RjNokpWs6mnjf1Toz1ffrujgcvf0X7EBMcNO4fuQFEYwsvYBDuiPV&tag=1', 'https://rads.stackoverflow.com/amzn/click/com/B00FB33A7Q', 'https://rads.stackoverflow.com/amzn/click/com/B00EANELDA', 'https://www.youtube.com/watch?v=O_uBxFGk-U4&t=1563s', 'https://monoskop.org/images/2/2f/Shannon_Claude_E_1956_The_Bandwagon.pdf', 'https://arxiv.org/abs/1802.05968', 'https://rads.stackoverflow.com/amzn/click/com/0471062596'] | 8 |
65,163,308 | <p>It really depends on how precisely you define what a "layer" is. This may vary between authors.</p>
<p>For your <a href="https://arxiv.org/pdf/1512.03385.pdf" rel="nofollow noreferrer">ResNet</a> example it is pretty clear: in Section 3.4 <em>Implementation</em> you'll find a description of the network, where it says:</p>
<blockquote>
<p>We adopt batch normalization (BN) right after each convolution and
before activation, [...].</p>
</blockquote>
<p>So convolution and batch normalization are considered a single layer. Figure 3 in the paper shows a picture of ResNet34 where the batch normalization layers are not even explicitly shown, and the layers sum up to 34.</p>
<p>So in conclusion, the ResNet paper does not count batch normalization as an extra layer.</p>
<p>Further, Keras makes it really easy to check those things for many <a href="https://keras.io/api/applications/" rel="nofollow noreferrer">pretrained models</a>, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
resnet = tf.keras.applications.ResNet50()
print(resnet.summary())
</code></pre> | 2020-12-06 00:05:53.953000+00:00 | 2020-12-06 00:22:23.470000+00:00 | 2020-12-06 00:22:23.470000+00:00 | null | 65,163,196 | <p>Is BatchNormalizationLayer considered a layer in a neural network?
For example, if we say, Resnet50 has 50 layers, does that mean that some of those layers may be batchnormalization layers?</p>
<p>When building models in Keras I considered it as an extra, similar to a dropout layer or when adding an “Activation layer”. But BatchNormalization has trainable parameters, so... I am confused</p> | 2020-12-05 23:46:41.993000+00:00 | 2020-12-07 05:26:13.113000+00:00 | null | neural-network|conv-neural-network | ['https://arxiv.org/pdf/1512.03385.pdf', 'https://keras.io/api/applications/'] | 2 |
7,266,188 | <p>Knuth wrote a paper called "Dancing Links", available e.g. at <a href="http://arxiv.org/pdf/cs.DS/0011047" rel="nofollow">http://arxiv.org/pdf/cs.DS/0011047</a>, on a streamlined version of backtracking search. He uses it to e.g. tile a plane with polyominoes. My guess is that this could be used to solve your problem - at some expense. I suspect that if a really general method existed to solve your problem, it could also be applied to Knuth's, which makes it unlikely that one will be found. Of course, your shapes are simpler than Knuth's, so perhaps there is some method particular to your problem which is more efficient.</p> | 2011-09-01 04:40:19.083000+00:00 | 2011-09-01 04:40:19.083000+00:00 | null | null | 7,265,956 | <p>I am trying to figure out how I would go about writing an algorithm (in C#) that, when given:</p>
<ul>
<li><p>A set of generally small tile-based shapes. Often 2x2, but not always. Sometimes the shapes will be 2x1 or non-rectangular.</p></li>
<li><p>A tilemap (two-dimensional array) in which certain tiles are marked "free" and certain tiles are "reserved". The free tiles designate the area where the tile-based shapes are allowed to go, the reserved tiles cannot be occupied by the shapes.</p></li>
</ul>
<p>Example of a tile-based shape, other than 2x2s:
<a href="http://img850.imageshack.us/img850/9057/awk.png" rel="nofollow">http://img850.imageshack.us/img850/9057/awk.png</a></p>
<p>An example of "available space" in a tilemap:
<a href="http://img641.imageshack.us/img641/8263/spaceh.png" rel="nofollow">http://img641.imageshack.us/img641/8263/spaceh.png</a></p>
<ul>
<li>The white in this image designates the space to be filled, but I want the algorithm to be able to work just as well if the gray were the space to be filled and the whites were reserved.</li>
</ul>
<p>Basically, the algorithm should place the shapes in the available space, biased towards the top. The solution it comes up with does not have to be 'perfect', but it should <em>always</em> be able to find a solution if one exists. Also I would really like to avoid using any pseudo-random numbers in this algorithm. I want it to always find the same solution given the same input, even if that solution isn't the best one.</p>
<p>I have found other topics relating to this, but all of them had to do with filling a rectangular space rather than an arbitrary space.</p>
<p>EDIT: oh, and the shapes CAN be flipped both horizontally and/or vertically, but not rotated. Forgot to mention this.</p>
<p>EDIT2: let me clarify. I don't want the space to be filled, I want to determine where the shapes should go given a finite number of them. They should default towards the top.</p> | 2011-09-01 03:50:51.823000+00:00 | 2011-09-01 13:34:11.973000+00:00 | 2011-09-01 13:34:11.973000+00:00 | c#|algorithm|shapes | ['http://arxiv.org/pdf/cs.DS/0011047'] | 1 |
21,464,598 | <p>Broadly, the two main approaches to modeling are the so-called "mechanistic" and "empirical" approaches. Both have their adherents (and their detractors). The mechanistic approach asserts that modeling should proceed from an understanding of the underlying phenomena (mechanism), which is then translated to some type of mathematical equation(s), which are then fit to the data (to test the mechanism). The empirical approach assembles a (usually long) list of models (equations) and seeks to find the one that "fits best". Empirical modeling is appealing but dangerous because assessing when you have a "good fit" is not trivial - although it is often treated that way.</p>
<p>You have not given us nearly enough information to formulate a mechanistic model, so here's an illustration of a couple of empirical models, as a cautionary tale:</p>
<p><a href="http://arxiv.org/pdf/cond-mat/0002075.pdf" rel="nofollow noreferrer">Finite-time singularity models</a> are popular with your type of data. Among other things, these models are used to "predict" <a href="http://quantivity.wordpress.com/2011/02/08/curiosity-of-lppl/" rel="nofollow noreferrer">stock market bubbles</a> (the LPPL model). The basic idea is that a catastrophe (singularity) is coming, and we want to predict when. So we use a function of the form:</p>
<blockquote>
<p>y = a × (c-x)<sup>b</sup></p>
</blockquote>
<p>With b < 0, y approaches a singularity as x -> c.</p>
<p>In R code, we can fit a model like this as follows:</p>
<pre><code># Finite-Time Singularity Model
library(minpack.lm)
f <- function(par,x) {
a <- par[1]
b <- par[2]
c <- par[3]
a * (c - x)^b
}
resid <- function(par,obs,xx) {obs-f(par,xx)}
start <- c(a=1, b=-1, c=2100)
nls.out <- nls.lm(par=start, fn=resid, obs =dat$incidents, xx=dat$year,
control = nls.lm.control(maxiter=500))
coef(nls.out)
with(dat, plot(incidents~year, main="Finite-Time Singularity Model"))
lines(dat$year,f(coef(nls.out),year), col=2, lwd=2)
</code></pre>
<p>This gives what appears to be a "pretty good fit":</p>
<p><img src="https://i.stack.imgur.com/Amcsf.png" alt=""></p>
<p>In fact, the model overstates incidents early on, and tends to understate them later (which is terrible because we want a prediction for the future). The residuals plot shows this clearly.</p>
<pre><code>with(dat,plot(year,resid(coef(nls.out),incidents,year),
main="Residuals Plot", ylab="residuals"))
</code></pre>
<p><img src="https://i.stack.imgur.com/3Z61U.png" alt=""></p>
<p>Another approach notes that your data is "counts" (e.g. number of incidents per year). This suggests a generalized linear model in the poisson family:</p>
<pre><code># generalized linear model, poisson family
fit.glm <- glm(incidents ~year,data=dat,family=poisson)
with(dat,plot(incidents~year))
lines(dat$year,predict(fit.glm,type="response"), col=2, lwd=2)
par(mfrow=c(2,2))
plot(fit.glm)
</code></pre>
<p><img src="https://i.stack.imgur.com/RvYsd.png" alt=""></p>
<p>This fit is better, but still not very good, as the diagnostic plots show. The residuals follow a trend, they are not normally distributed, and some of the data points have unacceptably high leverage.</p>
<p><img src="https://i.stack.imgur.com/LVLdl.png" alt=""></p> | 2014-01-30 18:26:05.667000+00:00 | 2014-01-30 20:21:28.190000+00:00 | 2014-01-30 20:21:28.190000+00:00 | null | 21,446,534 | <p>I need to do Probability Density Prediction of the following data in R:</p>
<pre><code>year = c(1971, 1984, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006,
2007, 2008, 2009, 2010, 2011, 2012, 2013)
incidents = c(1, 1, 1, 1, 3, 1, 6, 6, 9, 11, 21, 37, 38, 275, 226, 774, 1064)
</code></pre>
<p>They are in a data.frame in R like:</p>
<pre><code>dat <- data.frame(year,incidents)
</code></pre>
<p>The goal and idea is to make predictions based on a few years and "predict" for the last year of the data available.</p>
<p>I'm new to R, so any suggestions, advice and so forth are welcome.
Thanks in advance. </p> | 2014-01-30 02:05:41.657000+00:00 | 2014-02-07 22:36:12.440000+00:00 | 2014-01-30 02:14:15.057000+00:00 | r|probability|prediction|kernel-density|probability-density | ['http://arxiv.org/pdf/cond-mat/0002075.pdf', 'http://quantivity.wordpress.com/2011/02/08/curiosity-of-lppl/'] | 2 |
68,694,407 | <p>Unfortunately, I do not know where the breaking point is, and of course, it will depend on acceptable evaluation metrics and training data size.</p>
<p>From a technical point of view, there is no hard limit, and if you go to extremes there could be Core ML model size issues and memory issues during inference. However, that will only happen for an extremely large number of classes.</p>
<p>From a modeling perspective (which is a problem that will happen much earlier than the technical limitation) it is not as clear. As you increase the number of classes, you increase the risk of making classification mistakes. Although, the severity of a lot of the mistakes should simultaneously go down as you will have more and more classes that are naturally similar (breeds of dogs, etc.). The original YOLO9000 paper (<a href="https://arxiv.org/pdf/1612.08242.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1612.08242.pdf</a>) trained a model using 9000+ classes with reasonable results (lots of mistakes of course, but still impressive). They trained it on a combination of detection and classification data, so if they actually had detection data for all 9000, then results would presumably be even better.</p>
<p>In your experiment, it sounds like 50-60 was OK (thanks for giving us a sample point!). Anything below 100 is definitely tried and true, as long as you have the data. However, will 300 do OK? Will 1000 do OK? Theoretically, I would say yes, if you are able to provide enough training data and you adjust your expectation of what a good evaluation metric is since you know you'll make more mistakes. For instance, for classification with 1000 classes, it is common to report top-5 accuracy (that is, the correct label is in your top-5 classes for a sample).</p>
<p>Here is a useful link - <a href="https://github.com/apple/turicreate/issues/968" rel="nofollow noreferrer">https://github.com/apple/turicreate/issues/968</a></p> | 2021-08-07 16:51:12.937000+00:00 | 2021-08-08 12:04:47.577000+00:00 | 2021-08-08 12:04:47.577000+00:00 | null | 68,694,379 | <p>I'm new to the computer vision world, I'm trying to create a script with the objective to gather data from a dataset of images.</p>
<p>I'm interested in what kind of objects are in those images and getting a summary of them in a json file for every image.</p>
<p>I've checked out some YOLO implementations but the ones I've seen are almost always based on COCO and have 80 classes or have a custom dataset.</p>
<p>I've seen that there are algorithms like InceptionV3 etc. which are capable of classifying 1000 classes. But per my understanding object classification is different from object recognition.</p>
<p>Is there a way to use those big dataset classification algos for object detection?
Or any other suggestion?</p> | 2021-08-07 16:47:33.517000+00:00 | 2021-08-08 18:32:57.480000+00:00 | null | python|computer-vision|object-detection|object-recognition | ['https://arxiv.org/pdf/1612.08242.pdf', 'https://github.com/apple/turicreate/issues/968'] | 2 |
68,585,848 | <p><strong>Yes</strong>, there is a better algorithm, which runs in subquadratic time. See this recent paper of <a href="https://arxiv.org/abs/1908.04251" rel="nofollow noreferrer">Brent, Pomerance, Purdum, and Webster</a>. Their algorithm also outputs all the values in the <strong>n</strong> x <strong>n</strong> multiplication table. Note that the number of values is known to be subquadratic, due to a classic result of <a href="https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Tenenbaum%E2%80%93Ford_constant" rel="nofollow noreferrer">Erdős</a>. The linked <a href="https://mathoverflow.net/questions/31663/distinct-numbers-in-multiplication-table/400593#400593">question</a> on MathOverflow has more information.</p> | 2021-07-30 05:29:03.117000+00:00 | 2021-07-30 05:29:03.117000+00:00 | null | null | 24,614,798 | <p>Inspired by <a href="https://mathoverflow.net/questions/31663/distinct-numbers-in-multiplication-table">this question on mathoverflow</a> </p>
<p>Suppose I have a <strong>n</strong> x <strong>n</strong> multiplication table, what is the number of <strong>distinct values</strong> on it?</p>
<p>For example, a 3X3 multiplication table </p>
<p>1 2 3<br>
2 4 6<br>
3 6 9 </p>
<p>has <strong>6</strong> unique values namely <em>[1, 2, 3, 4, 6, 9]</em></p>
<p>So far I only have a <strong>O(n<sup>2</sup>)</strong> solution</p>
<pre><code> public static void findDistinctNumbers(int n) {
Set<Integer> unique = new HashSet<>();
for(int i=1; i<=n; i++) {
for(int j=1; j<=n; j++) {
unique.add(i*j);
}
}
System.out.println("number of unique values: " + unique.size());
}
</code></pre>
<p>Is there a better approach which is less than <strong>O(n<sup>2</sup>)</strong> ?</p> | 2014-07-07 15:56:30.670000+00:00 | 2021-07-30 05:29:03.117000+00:00 | 2017-04-13 12:57:55.007000+00:00 | algorithm | ['https://arxiv.org/abs/1908.04251', 'https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Tenenbaum%E2%80%93Ford_constant', 'https://mathoverflow.net/questions/31663/distinct-numbers-in-multiplication-table/400593#400593'] | 3 |
42,758,708 | <p>There are no hard, proven rules for how to construct neural networks (or CNNs). This is an open problem.</p>
<blockquote>
<p>why do most examples use these architectures: Conv 5x5 -> Pooling(2,max) -> Conv 5x5</p>
</blockquote>
<p>This is not the case. Most architectures use 3x3 kernels, because subsequent pooling layers expand the receptive field to arbitrary sizes anyway. Empirically, some researchers (e.g. <a href="https://arxiv.org/abs/1512.00567" rel="nofollow noreferrer">Rethinking the Inception Architecture for Computer Vision</a>) found that those work better.</p>
<blockquote>
<p>how can I determine if the Network is too deep / too shallow</p>
</blockquote>
<ul>
<li>The inference is too slow -> the network is too deep</li>
<li>The accuracy is too low -> depth could help</li>
</ul>
<blockquote>
<p>how can I determine if the kernel size is too big / too small?</p>
</blockquote>
<p>Just use 3x3 as default. See <a href="https://arxiv.org/abs/1512.00567" rel="nofollow noreferrer">Rethinking the Inception Architecture for Computer Vision</a> for the reason.</p>
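<p>To make the 3x3-by-default advice concrete, the factorization argument from that paper boils down to simple arithmetic: two stacked 3x3 convolutions cover the same 5x5 receptive field with fewer parameters (and an extra non-linearity in between). A quick sketch:</p>
<pre class="lang-py prettyprint-override"><code>C = 64                                  # input channels == output channels, bias ignored
params_5x5 = 5 * 5 * C * C              # one 5x5 convolution
params_3x3 = 2 * (3 * 3 * C * C)        # two stacked 3x3 convolutions, same receptive field
print(params_5x5, params_3x3)           # 102400 vs 73728 -> roughly 28% fewer parameters
</code></pre>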
<blockquote>
<p>When do I choose <code>conv -> conv -> pooling</code> instead of <code>conv -> pooling -> conv</code>?</p>
</blockquote>
<p>I would rather write <code>conv -> conv -> pooling</code> instead of <code>conv -> pooling</code>, hence the question is "how do I determine how many subsequent convolutional layers I should have?". Again, this is an open problem. Most people choose 2 or 3 subsequent convolutional layers, but in the end it seems to boil down to "just try it". (Please let me know if there is a more engineering-driven approach!)</p>
<blockquote>
<p>What impact does the stride parameter have?</p>
</blockquote>
<p>Stride reduces the size of the output feature map, so it reduces your memory footprint a lot (roughly by a factor of 1/stride^2).</p>
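<p>A quick sketch of that effect, using the standard output-size formula (my own illustration):</p>
<pre class="lang-py prettyprint-override"><code>def conv_output_size(n, kernel, stride, padding):
    """Spatial output size of a convolution along one dimension."""
    return (n - kernel + 2 * padding) // stride + 1

print(conv_output_size(64, 3, stride=1, padding=1))  # 64 -> size preserved
print(conv_output_size(64, 3, stride=2, padding=1))  # 32 -> 1/4 of the activations overall
</code></pre>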
<blockquote>
<p>is there a way to check which features a Layer is detecting?</p>
</blockquote>
<p>Zeiler & Fergus: <a href="https://arxiv.org/abs/1311.2901" rel="nofollow noreferrer">Visualizing and Understanding Convolutional Networks</a></p> | 2017-03-13 07:51:48.410000+00:00 | 2017-03-13 07:58:04.687000+00:00 | 2017-03-13 07:58:04.687000+00:00 | null | 42,747,748 | <p>How to determine the architecture of a Convolutional Neural Network</p>
<p>I'm doing some research on deep learning in computer vision.</p>
<p>I have read a lot about how neural networks, backpropagation, stochastic gradient descent, overfitting, regularization and so on work.
There are 'hard' mathematical rules: that's easy to understand.</p>
<p>But how do I know what architecture I need for my Convolutional Neural Network?
For example: I want to classify these plants:
<a href="http://www.biohof-waldegg.ch/Bilder/Blacke%201%20(Individuell).JPG" rel="nofollow noreferrer">http://www.biohof-waldegg.ch/Bilder/Blacke%201%20(Individuell).JPG</a></p>
<p>I have studied examples with the MNIST database (the handwritten digit database)
- why do most examples use these architectures: Conv 5x5 -> Pooling(2,max) -> Conv 5x5?
I have plotted the weights of the first hidden layer, but the image filters do not look very
familiar to me (neither like a high-pass filter for edge detection, nor like a low-pass filter)</p>
<ul>
<li>is it better to add more feature maps in a layer or to add more hidden layers?</li>
<li>how can I determine if the Network is too deep / too shallow </li>
<li>how can I determine if a layer has too many / too few feature maps?</li>
<li>how can I determine if the kernel size is too big / too small?</li>
<li>when do I choose conv -> conv -> pooling instead of conv -> pooling -> conv?</li>
<li>what impact does the stride parameter have? (I know what this parameter does, but not when and how I have to adjust it)</li>
<li>is there a way to check <em>which</em> features a Layer is detecting? (e.g. edges / color / shapes)</li>
</ul> | 2017-03-12 13:03:42.113000+00:00 | 2017-03-16 04:51:25.243000+00:00 | null | computer-vision|deep-learning | ['https://arxiv.org/abs/1512.00567', 'https://arxiv.org/abs/1512.00567', 'https://arxiv.org/abs/1311.2901'] | 3 |
24,282,204 | <p>For matrix inversion it's going to be more efficient to use <a href="http://en.wikipedia.org/wiki/Cholesky_decomposition" rel="nofollow noreferrer">Cholesky decomposition</a> than Gauss–Jordan elimination. See this paper: <a href="http://arxiv.org/ftp/arxiv/papers/1111/1111.4144.pdf" rel="nofollow noreferrer">Matrix Inversion Using Cholesky Decomposition</a>.</p>
<p><a href="https://stackoverflow.com/questions/22479258/cholesky-decomposition-with-openmp/23063655#23063655">I implemented Choleksy decomposition with SSE, AVX, and FMA as well as with multiple threads</a>. I get about 50% of the performance of the Intel MKL. I'm currently rewritting my kernel code so I hope to get closer to MKL performance soon.</p>
<p>Note that the inversion is often unnecessary to solve a system of equations. In my case I only need the Cholesky decomposition, and then I use backward and forward substitution to solve.</p> | 2014-06-18 09:41:40.267000+00:00 | 2014-06-18 09:41:40.267000+00:00 | 2017-05-23 11:57:35.103000+00:00 | null | 24,281,408 | <p>With the following code, I calculate the inverse of a 4x4 matrix with Cramer's rule, but how do I extend this code to an NxN matrix?</p>
<pre><code>void PIII_Inverse_4x4(float* src) {
__m128 minor0,minor1,minor2,minor3;
__m128 row0,row1,row2,row3;
__m128 det,tmp1;
tmp1= _mm_loadh_pi(_mm_loadl_pi(tmp1, (__m64*)(src)), (__m64*)(src+ 4));
row1= _mm_loadh_pi(_mm_loadl_pi(row1, (__m64*)(src+8)), (__m64*)(src+12));
row0= _mm_shuffle_ps(tmp1, row1, 0x88);
row1= _mm_shuffle_ps(row1, tmp1, 0xDD);
tmp1= _mm_loadh_pi(_mm_loadl_pi(tmp1, (__m64*)(src+ 2)), (__m64*)(src+ 6));
row3= _mm_loadh_pi(_mm_loadl_pi(row3, (__m64*)(src+10)), (__m64*)(src+14));
row2= _mm_shuffle_ps(tmp1, row3, 0x88);
row3= _mm_shuffle_ps(row3, tmp1, 0xDD);
tmp1= _mm_mul_ps(row2, row3);
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0xB1);
minor0= _mm_mul_ps(row1, tmp1);
minor1= _mm_mul_ps(row0, tmp1);
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0x4E);
minor0= _mm_sub_ps(_mm_mul_ps(row1, tmp1), minor0);
minor1= _mm_sub_ps(_mm_mul_ps(row0, tmp1), minor1);
minor1= _mm_shuffle_ps(minor1, minor1, 0x4E);
tmp1= _mm_mul_ps(row1, row2);
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0xB1);
minor0= _mm_add_ps(_mm_mul_ps(row3, tmp1), minor0);
minor3= _mm_mul_ps(row0, tmp1);
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0x4E);
minor0= _mm_sub_ps(minor0, _mm_mul_ps(row3, tmp1));
minor3= _mm_sub_ps(_mm_mul_ps(row0, tmp1), minor3);
minor3= _mm_shuffle_ps(minor3, minor3, 0x4E);
tmp1= _mm_mul_ps(_mm_shuffle_ps(row1, row1, 0x4E), row3);
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0xB1);
row2= _mm_shuffle_ps(row2, row2, 0x4E);
minor0= _mm_add_ps(_mm_mul_ps(row2, tmp1), minor0);
minor2= _mm_mul_ps(row0, tmp1);
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0x4E);
minor0= _mm_sub_ps(minor0, _mm_mul_ps(row2, tmp1));
minor2= _mm_sub_ps(_mm_mul_ps(row0, tmp1), minor2);
minor2= _mm_shuffle_ps(minor2, minor2, 0x4E);
tmp1= _mm_mul_ps(row0, row1);
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0xB1);
minor2= _mm_add_ps(_mm_mul_ps(row3, tmp1), minor2);
minor3= _mm_sub_ps(_mm_mul_ps(row2, tmp1), minor3);
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0x4E);
minor2= _mm_sub_ps(_mm_mul_ps(row3, tmp1), minor2);
minor3= _mm_sub_ps(minor3, _mm_mul_ps(row2, tmp1));
tmp1= _mm_mul_ps(row0, row3);
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0xB1);
minor1= _mm_sub_ps(minor1, _mm_mul_ps(row2, tmp1));
minor2= _mm_add_ps(_mm_mul_ps(row1, tmp1), minor2);
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0x4E);
minor1= _mm_add_ps(_mm_mul_ps(row2, tmp1), minor1);
minor2= _mm_sub_ps(minor2, _mm_mul_ps(row1, tmp1));
tmp1= _mm_mul_ps(row0, row2);
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0xB1);
minor1= _mm_add_ps(_mm_mul_ps(row3, tmp1), minor1);
minor3= _mm_sub_ps(minor3, _mm_mul_ps(row1, tmp1));
tmp1= _mm_shuffle_ps(tmp1, tmp1, 0x4E);
minor1= _mm_sub_ps(minor1, _mm_mul_ps(row3, tmp1));
minor3= _mm_add_ps(_mm_mul_ps(row1, tmp1), minor3);
// -----------------------------------------------
// -----------------------------------------------
// -----------------------------------------------
det= _mm_mul_ps(row0, minor0);
det= _mm_add_ps(_mm_shuffle_ps(det, det, 0x4E), det);
det= _mm_add_ss(_mm_shuffle_ps(det, det, 0xB1), det);
tmp1= _mm_rcp_ss(det);
det= _mm_sub_ss(_mm_add_ss(tmp1, tmp1), _mm_mul_ss(det, _mm_mul_ss(tmp1, tmp1)));
det= _mm_shuffle_ps(det, det, 0x00);
minor0 = _mm_mul_ps(det, minor0);
_mm_storel_pi((__m64*)(src), minor0);
_mm_storeh_pi((__m64*)(src+2), minor0);
minor1 = _mm_mul_ps(det, minor1);
_mm_storel_pi((__m64*)(src+4), minor1);
_mm_storeh_pi((__m64*)(src+6), minor1);
minor2 = _mm_mul_ps(det, minor2);
_mm_storel_pi((__m64*)(src+ 8), minor2);
_mm_storeh_pi((__m64*)(src+10), minor2);
minor3 = _mm_mul_ps(det, minor3);
_mm_storel_pi((__m64*)(src+12), minor3);
_mm_storeh_pi((__m64*)(src+14), minor3);
}
</code></pre>
<p>I searched on google, but I have not found anything useful... I have searched also the gauss-jordan method for inverse matrix, but nothing...</p> | 2014-06-18 09:05:39.940000+00:00 | 2014-06-18 09:41:40.267000+00:00 | 2014-06-18 09:11:43.527000+00:00 | matrix|x86|sse|simd|matrix-inverse | ['http://en.wikipedia.org/wiki/Cholesky_decomposition', 'http://arxiv.org/ftp/arxiv/papers/1111/1111.4144.pdf', 'https://stackoverflow.com/questions/22479258/cholesky-decomposition-with-openmp/23063655#23063655'] | 3 |
67,042,366 | <blockquote>
<p>How do experts design CNN's? How could I choose between Inception Modules, Dropout Regularization, Batch Normalization, convolutional filter size, size and depth of convolutional channels, number of fully-connected layers, activations neurons, etc? How do people navigate this large optimization problem in a scientific manner? The combinations are endless.</p>
</blockquote>
<p>You are right that the combinations are huge in number, and without the right approach you may end up nowhere. It has been said that machine learning is an art, not a science. Results are data-dependent. Here are a few tips regarding your concern.</p>
<ul>
<li><p><strong>Log Everything</strong>: During training, save the necessary logs of every experiment, such as training loss, validation loss, weight files, execution times, visualizations, etc. Some of them can be saved with <code>CSVLogger</code>, <code>ModelCheckpoint</code>, etc.; <code>TensorBoard</code> is a great tool for inspecting training logs, visualizations and much more (a minimal callback setup is sketched right after the example image below).</p>
</li>
<li><p><strong>Strong Validation Strategies</strong>: This is very important. To build a stable <strong>Cross-Validation (CV)</strong> scheme, we must have a good understanding of the data and the challenges faced. We'll check and make sure the <strong>validation set</strong> has a <strong>similar distribution</strong> to the <strong>training set</strong> and <strong>test set</strong>, and we'll try to make sure our models improve <strong>both</strong> on our <strong>CV</strong> and on the <strong>test set</strong> (if <code>gt</code> is available for the test set). Basically, partitioning the data randomly is usually not enough to satisfy this. Understanding the data and how we can partition it without introducing <strong>data leakage</strong> into our CV is key to avoiding <strong>overfitting</strong>.</p>
</li>
<li><p><strong>Change Only One</strong>: During the experiments, change one thing at a time and save the observations (<code>logs</code>) for those changes. For example: change the image size gradually from <code>224</code> (for example) upwards and observe the results. We should start with a small combination. While experimenting with image size, fix the others, like the <code>model</code> architecture, <code>learning rate</code>, etc. The same goes for the <code>learning rate</code> part or the <code>model</code> architecture. However, later we may also need to change more than one thing at a time once we get some promising combinations. In Kaggle competitions, these are very common approaches to follow. Below is a very simple example of this, but it is not limited to it in any way.</p>
</li>
</ul>
<p><a href="https://i.stack.imgur.com/jqgf6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jqgf6.png" alt="enter image description here" /></a></p>
<hr />
<p>However, as you said, your Ph.D. project is to <strong>reduce CO2 emissions on Earth.</strong> In my understanding, this is more of an <strong>application-specific</strong> problem than an <strong>algorithm-specific</strong> one. So, we think it's better to take advantage of well-recognized pre-trained models.</p>
<p>In case we wish to write our <code>CNN</code> on our own, we should give it a decent amount of time. Start with a very simple one, for example:</p>
<pre><code>Conv2D (16, 3, 'relu') -> MaxPool (2)
Conv2D (32, 3, 'relu') -> MaxPool (2)
Conv2D (64, 3, 'relu') -> MaxPool (2)
Conv2D (128, 3, 'relu') -> MaxPool (2)
</code></pre>
<p>Here we gradually increase the depth while reducing the feature dimension. By the final layer, more semantic information emerges. While stacking <code>Conv2D</code> layers, it's common practice to increase the channel depth in an order such as <code>16, 32, 64, 128</code>, etc. If we want to put an <code>Inception</code> or <code>Residual Block</code> inside our network, I think we should first do some basic math about what feature properties will come out of it. Following a concept like this, we may also wish to look at approaches like <code>SENet</code>, <code>ResNeSt</code>, etc. About <code>Dropout</code>: if we observe that our model is overfitting during training, we should add some. In the final layer, we may want to choose <code>GlobalAveragePooling</code> over the <code>Flatten</code> layer (<strong>FCC</strong>). We can probably now see that there are lots of ablation studies that need to be done to get a satisfactory <code>CNN</code> model.</p>
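<p>As a concrete (and still deliberately small) Keras version of the simple stack above; the input size and class count are just placeholders:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras import layers

simple_cnn = tf.keras.Sequential([
    layers.Input(shape=(224, 224, 3)),                      # placeholder input size
    layers.Conv2D(16, 3, activation='relu', padding='same'),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation='relu', padding='same'),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation='relu', padding='same'),
    layers.MaxPooling2D(2),
    layers.Conv2D(128, 3, activation='relu', padding='same'),
    layers.MaxPooling2D(2),
    layers.GlobalAveragePooling2D(),                        # instead of Flatten
    layers.Dense(10, activation='softmax'),                 # placeholder class count
])
</code></pre>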
<p>In this regard, we suggest you explore two important things: (<strong>1</strong>) read one of the pre-trained model papers/blogs/videos about the strategies used to build the algorithm. For example, check out this <a href="https://www.youtube.com/watch?v=3svIm5UC94I" rel="nofollow noreferrer">EfficientNet Explained</a>. (<strong>2</strong>) Next, explore its source code. That would give you more insight and encourage you to build your own giant.</p>
<hr />
<p>We would like to end this with one last working example. See the model diagram below; it's a <strong>small inception network</strong> (<a href="https://arxiv.org/pdf/1611.03530.pdf" rel="nofollow noreferrer">source</a>). If we look closely, we will see that it consists of the following three modules.</p>
<ul>
<li>Conv Module</li>
<li>Inception Module</li>
<li>Downsample Module</li>
</ul>
<p>Take a close look at each module's configuration, such as <strong>filter size</strong>, <strong>strides</strong>, etc. Let's try to understand and implement these modules. Before that, here are two good references (<a href="https://www.youtube.com/watch?v=C86ZXvgpejM" rel="nofollow noreferrer">1</a>, <a href="https://www.youtube.com/watch?v=KfV8CJh7hE0" rel="nofollow noreferrer">2</a>) to refresh the Inception concept.</p>
<p><a href="https://i.stack.imgur.com/Vtm4f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vtm4f.png" alt="enter image description here" /></a></p>
<h2>Conv Module</h2>
<p>From the diagram we can see that it consists of <strong>one convolutional</strong> layer, <strong>one batch normalization</strong> layer, and <strong>one ReLU</strong> activation. It produces <code>C</code> feature maps with <code>K x K</code> filters and <code>S x S</code> strides. To do that, we will create a class that inherits from the <code>tf.keras.layers.Layer</code> class.</p>
<pre><code>class ConvModule(tf.keras.layers.Layer):
def __init__(self, kernel_num, kernel_size, strides, padding='same'):
super(ConvModule, self).__init__()
# conv layer
self.conv = tf.keras.layers.Conv2D(kernel_num,
kernel_size=kernel_size,
strides=strides, padding=padding)
# batch norm layer
self.bn = tf.keras.layers.BatchNormalization()
def call(self, input_tensor, training=False):
x = self.conv(input_tensor)
x = self.bn(x, training=training)
x = tf.nn.relu(x)
return x
</code></pre>
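<p>A quick sanity check of the module (my own snippet; the input size is only an example and it assumes <code>import tensorflow as tf</code> plus the <code>ConvModule</code> class above):</p>
<pre class="lang-py prettyprint-override"><code>x = tf.random.normal((1, 32, 32, 3))                 # batch of one 32x32 RGB image (example size)
y = ConvModule(96, kernel_size=(3, 3), strides=(1, 1))(x)
print(y.shape)                                       # (1, 32, 32, 96): 'same' padding keeps H x W
</code></pre>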
<h2>Inception Module</h2>
<p>Next comes the <strong>Inception</strong> module. According to the above graph, it consists of <em>two convolutional modules</em> whose outputs are then <strong>merged</strong> together. Now, in order to merge, we need to ensure that the output feature map dimensions (<code>height</code> and <code>width</code>) are the <strong>same</strong>.</p>
<pre><code>class InceptionModule(tf.keras.layers.Layer):
def __init__(self, kernel_size1x1, kernel_size3x3):
super(InceptionModule, self).__init__()
# two conv modules: they will take same input tensor
self.conv1 = ConvModule(kernel_size1x1, kernel_size=(1,1), strides=(1,1))
self.conv2 = ConvModule(kernel_size3x3, kernel_size=(3,3), strides=(1,1))
self.cat = tf.keras.layers.Concatenate()
def call(self, input_tensor, training=False):
x_1x1 = self.conv1(input_tensor)
x_3x3 = self.conv2(input_tensor)
x = self.cat([x_1x1, x_3x3])
return x
</code></pre>
<p>Here you may notice that we have now hard-coded the exact <strong>kernel size</strong> and <strong>strides</strong> for both convolutional layers according to the network diagram. Also, in <code>ConvModule</code> we have already set padding to <code>same</code>, so that the dimensions of the feature maps will be the same for both (<code>self.conv1</code> and <code>self.conv2</code>), which is required in order to concatenate them at the end.</p>
<p>Again, in this module, two variables act as placeholders: <code>kernel_size1x1</code> and <code>kernel_size3x3</code>. This is on purpose, because we will need different numbers of feature maps at different stages of the entire model. If we look at the diagram of the model, we will see that <code>InceptionModule</code> takes a different number of filters at different stages in the model.</p>
<h2>Downsample Module</h2>
<p>Lastly, the <strong>downsampling module</strong>. The main intuition for downsampling is that we hope to get more relevant feature information that strongly represents the inputs to the model, as it tends to remove unwanted features so that the model can focus on the most relevant ones. There are many ways to reduce the dimension of the feature maps (or inputs), for example using <code>strides 2</code> or using a conventional <code>pooling</code> operation. There are many types of pooling operation, namely <code>MaxPooling</code>, <code>AveragePooling</code>, and <code>GlobalAveragePooling</code>.</p>
<p>From the diagram, we can see that the downsampling module contains <strong>one convolutional</strong> layer and <strong>one max-pooling</strong> layer whose outputs are later merged. If we look closely at the diagram (<strong>top-right</strong>), we will see that the convolutional layer uses a <code>3 x 3</code> filter with <code>strides 2 x 2</code>, and the pooling layer (here <code>MaxPooling</code>) uses pooling size <code>3 x 3</code> with <code>strides 2 x 2</code>. We must also ensure that the dimensions coming from each of them are the same in order to merge them at the end. Remember that when we designed the <code>ConvModule</code> we purposely set the value of the <code>padding</code> argument to <code>same</code>; in this case, however, we need to set it to <code>valid</code> so that its output size matches the pooling layer's output size.</p>
<pre><code>class DownsampleModule(tf.keras.layers.Layer):
def __init__(self, kernel_size):
super(DownsampleModule, self).__init__()
# conv layer
self.conv3 = ConvModule(kernel_size, kernel_size=(3,3),
strides=(2,2), padding="valid")
# pooling layer
self.pool = tf.keras.layers.MaxPooling2D(pool_size=(3, 3),
strides=(2,2))
self.cat = tf.keras.layers.Concatenate()
def call(self, input_tensor, training=False):
# forward pass
conv_x = self.conv3(input_tensor, training=training)
pool_x = self.pool(input_tensor)
# merged
return self.cat([conv_x, pool_x])
</code></pre>
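<p>Before assembling the full model, here is a small shape check (again a sketch; the input size is arbitrary) that makes the merging rules concrete: the inception module concatenates along the channel axis, while the downsample module halves the spatial size and also concatenates channels:</p>
<pre><code>import tensorflow as tf

x = tf.random.normal((1, 32, 32, 96))
print(InceptionModule(32, 48)(x).shape)   # (1, 32, 32, 80): 32 + 48 channels
print(DownsampleModule(80)(x).shape)      # (1, 15, 15, 176): spatial size halved, 80 + 96 channels
</code></pre>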
<p>Okay, now we have built all three modules, namely <strong>ConvModule</strong>, <strong>InceptionModule</strong>, and <strong>DownsampleModule</strong>. Let's initialize their parameters according to the diagram.</p>
<pre><code>class MiniInception(tf.keras.Model):
def __init__(self, num_classes=10):
super(MiniInception, self).__init__()
# the first conv module
self.conv_block = ConvModule(96, (3,3), (1,1))
# 2 inception module and 1 downsample module
self.inception_block1 = InceptionModule(32, 32)
self.inception_block2 = InceptionModule(32, 48)
self.downsample_block1 = DownsampleModule(80)
# 4 inception module and 1 downsample module
self.inception_block3 = InceptionModule(112, 48)
self.inception_block4 = InceptionModule(96, 64)
self.inception_block5 = InceptionModule(80, 80)
self.inception_block6 = InceptionModule(48, 96)
self.downsample_block2 = DownsampleModule(96)
# 2 inception module
self.inception_block7 = InceptionModule(176, 160)
self.inception_block8 = InceptionModule(176, 160)
# average pooling
self.avg_pool = tf.keras.layers.AveragePooling2D((7,7))
# model tail
self.flat = tf.keras.layers.Flatten()
        self.classifier = tf.keras.layers.Dense(num_classes, activation='softmax')
def call(self, input_tensor, training=True, **kwargs):
# forward pass
x = self.conv_block(input_tensor)
x = self.inception_block1(x)
x = self.inception_block2(x)
x = self.downsample_block1(x)
x = self.inception_block3(x)
x = self.inception_block4(x)
x = self.inception_block5(x)
x = self.inception_block6(x)
x = self.downsample_block2(x)
x = self.inception_block7(x)
x = self.inception_block8(x)
x = self.avg_pool(x)
x = self.flat(x)
        return self.classifier(x)
</code></pre>
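<p>To verify that all the blocks are wired consistently with the diagram, we can run a dummy forward pass and compile the model (a sketch; the <code>32 x 32 x 3</code> input size is an assumption based on the CIFAR-style diagram, and the loss assumes integer labels):</p>
<pre><code>import tensorflow as tf

model = MiniInception(num_classes=10)
dummy = tf.random.normal((8, 32, 32, 3))
print(model(dummy).shape)  # (8, 10)

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
</code></pre>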
<p>The number of <strong>filters</strong> for each computational block is set according to the design of the model (see the diagram). After initializing all the blocks (in the <code>__init__</code> function), we connect them according to the design (in the <code>call</code> function).</p> | 2021-04-11 07:14:59.873000+00:00 | 2021-04-11 07:21:27.007000+00:00 | 2021-04-11 07:21:27.007000+00:00 | null | 67,040,737 | <p>I am working on a Ph.D. project, whose objective is to reduce <code>CO2</code> emissions on Earth.</p>
<p>I have a dataset, and I was able to successfully implement a <code>CNN</code>, which gives <code>80%</code> accuracy (worst-case scenario). However, the field where I work is very demanding, and I have the impression that I could get better accuracy with a well-optimized CNN.</p>
<p>How do experts design <code>CNNs</code>? How could I choose between <code>Inception</code> Modules, <code>Dropout</code> Regularization, <code>Batch Normalization</code>, convolutional filter size, size and depth of convolutional channels, number of fully-connected layers, activation functions, numbers of neurons, etc.? How do people navigate this large optimization problem in a scientific manner? The combinations are endless. Are there any real-life examples where this problem is navigated, addressing its full complexity (not just optimizing a few hyper-parameters)?</p>
<p>Hopefully, my dataset is not too large, so the <code>CNN</code> models that I am considering should have very few parameters.</p> | 2021-04-11 01:38:48.643000+00:00 | 2021-05-21 10:27:55.550000+00:00 | 2021-05-21 10:27:55.550000+00:00 | tensorflow|machine-learning|computer-vision|conv-neural-network|hyperparameters | ['https://i.stack.imgur.com/jqgf6.png', 'https://www.youtube.com/watch?v=3svIm5UC94I', 'https://arxiv.org/pdf/1611.03530.pdf', 'https://www.youtube.com/watch?v=C86ZXvgpejM', 'https://www.youtube.com/watch?v=KfV8CJh7hE0', 'https://i.stack.imgur.com/Vtm4f.png'] | 6 |
38,667,808 | <p>There are usually two common ways to handle an imbalanced dataset:</p>
<ol>
<li><p>Online sampling as mentioned above. In each iteration you sample a class-balanced batch from the training set.</p></li>
<li><p>Re-weight the cost of the two classes. You'd want to give the loss on the dominant class a smaller weight. For example, this is used in the paper <a href="http://arxiv.org/abs/1504.06375" rel="nofollow">Holistically-Nested Edge Detection</a>.</p></li>
</ol> | 2016-07-29 21:34:48.277000+00:00 | 2016-07-29 21:34:48.277000+00:00 | null | null | 38,664,487 | <p>I am working on a Classification problem with 2 labels : 0 and 1. My training dataset is a very imbalanced dataset (and so will be the test set considering my problem). </p>
<p>The proportion of the imbalanced dataset is 1000:4, with label '0' appearing 250 times more often than label '1'. However, I have a lot of training samples: around 23 million. So I should get around 100,000 samples for the label '1'.</p>
<p>Considering the big number of training samples I have, I didn't consider SVM. I also read about SMOTE for Random Forests. However, I was wondering whether a NN could efficiently handle this kind of imbalanced dataset at such a large scale?</p>
<p>Also, as I am using Tensorflow to design the model, which characteristics should/could I tune to be able to handle this imbalanced situation ?</p>
<p>Thanks for your help !
Paul </p>
<hr>
<p>Update : </p>
<p>Considering the number of answers, and that they are quite similar, I will answer all of them here, as a common answer.</p>
<p>1) I tried during this weekend the 1st option, increasing the cost for the positive label. Actually, with less unbalanced proportion (like 1/10, on another dataset), this seems to help a bit to get a better result, or at least to 'bias' the precision/recall scores proportion.
However, for my situation, it seems to be very sensitive to the alpha number. With alpha = 250, which is the proportion of the unbalanced dataset, I have a precision of 0.006 and a recall score of 0.83, but the model is predicting way more 1s than it should - around 0.50 of label '1' ...
With alpha = 100, the model predicts only '0'. I guess I'll have to do some 'tuning' for this alpha parameter :/
I'll take a look at this function from TF too, as I did it manually for now: tf.nn.weighted_cross_entropy_with_logits</p>
<p>2) I will try to de-unbalance the dataset but I am afraid that I will lose a lot of info doing that, as I have millions of samples but only ~ 100k positive samples. </p>
<p>3) Using a smaller batch size seems indeed a good idea. I'll try it ! </p> | 2016-07-29 17:34:06.913000+00:00 | 2021-03-17 21:41:08.657000+00:00 | 2016-08-01 16:57:55.203000+00:00 | neural-network|tensorflow|random-forest | ['http://arxiv.org/abs/1504.06375'] | 1 |
59,572,464 | <p>If you are going to train the classifier, it should be okay. Nonetheless, I wouldn't remove it either way.</p>
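<p>A quick shape check (a sketch; it assumes a <code>224 x 224</code> input, and newer torchvision versions may want <code>weights=None</code> instead of <code>pretrained=False</code>) shows why keeping the max-pooling is fine: the GAP layer reduces whatever spatial size it receives down to <code>1 x 1</code> anyway.</p>
<pre><code>import torch
import torchvision

features = torchvision.models.vgg16(pretrained=False).features  # includes the final MaxPool2d
x = torch.randn(1, 3, 224, 224)
feats = features(x)                      # torch.Size([1, 512, 7, 7])
gap = torch.nn.AdaptiveAvgPool2d(1)
print(feats.shape, gap(feats).shape)     # ... torch.Size([1, 512, 1, 1])
</code></pre>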
<p>It is worth mentioning that the max-pooling is part of the original architecture, as can be seen in Table 1 of the original paper: <a href="https://arxiv.org/pdf/1409.1556.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1409.1556.pdf</a>.</p> | 2020-01-03 02:20:50.550000+00:00 | 2020-01-03 02:20:50.550000+00:00 | null | null | 59,572,222 | <p>I am currently using VGG16 with Global Average Pooling (GAP) before final classification layer. The VGG16 model used is the one provided by torchvision.</p>
<p>However, I noticed that before the GAP layer, there is a Max Pooling layer. Is this okay or should the Max Pooling layer be removed before the GAP layer? The network architecture can be seen below.</p>
<pre><code>VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace=True)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace=True)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace=True)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=1) #GAP Layer
(classifier): Sequential(
(0): Linear(in_features=512, out_features=7, bias=True)
)
)
</code></pre>
<p>Thanks in advance.</p> | 2020-01-03 01:39:21.433000+00:00 | 2020-01-03 02:20:50.550000+00:00 | null | machine-learning|pytorch|vgg-net|torchvision | ['https://arxiv.org/pdf/1409.1556.pdf'] | 1 |
<p>Here is a source code snippet from our sample gallery, which you can find on the Alea GPU sample gallery at <a href="http://www.quantalea.com/gallery" rel="nofollow">http://www.quantalea.com/gallery</a>.</p>
<p>Here is the single stage algorithm. It is not the fastest but reasonably simple to understand.</p>
<pre><code>public static class FloydWarshallSingleStage
{
const int BlockWidth = 16;
/// <summary>
/// Kernel for parallel Floyd Warshall algorithm on GPU.
/// </summary>
/// <param name="u">Number vertex of which is performed relaxation paths [v1, v2]</param>
/// <param name="n">Number of vertices in the graph G:=(V,E), n := |V(G)|</param>
/// <param name="d">Matrix of shortest paths d(G)</param>
/// <param name="p">Matrix of predecessors p(G)</param>
public static void KernelSingleStage(int u, int[,] d, int[,] p)
{
var n = d.GetLength(0);
var v1 = blockDim.y * blockIdx.y + threadIdx.y;
var v2 = blockDim.x * blockIdx.x + threadIdx.x;
if (v1 < n && v2 < n)
{
var newPath = d[v1, u] + d[u, v2];
var oldPath = d[v1, v2];
if (oldPath > newPath)
{
d[v1, v2] = newPath;
p[v1, v2] = p[u, v2];
}
}
}
[GpuManaged]
public static void Run(Gpu gpu, int[,] d, int[,] p)
{
var n = d.GetLength(0);
var gridDim = new dim3((n - 1) / BlockWidth + 1, (n - 1) / BlockWidth + 1, 1);
var blockDim = new dim3(BlockWidth, BlockWidth, 1);
var lp = new LaunchParam(gridDim, blockDim);
for (var u = 0; u < n; u++)
{
gpu.Launch(KernelSingleStage, lp, u, d, p);
}
}
}
</code></pre>
<p>The multi-stage version is more complicated and the listing is longer. Here I paste a version that uses automatic memory management, which simplifies the code quite a bit but also has some performance implications. The multi-stage version uses three kernels to complete the job and uses tiling to improve memory access.</p>
<pre><code>public class FloydWarshallMultiStage
{
private const int None = -1;
private const int Inf = 1061109567;
//[GpuParam]
//private readonly Constant<int> BlockSize;
//[GpuParam]
//private readonly Constant<int> ThreadSize;
//[GpuParam]
//private readonly Constant<int> VirtualBlockSize;
private const int BlockSize = 16;
private const int ThreadSize = 2;
private const int VirtualBlockSize = BlockSize*ThreadSize;
public FloydWarshallMultiStage(int blockSize, int threadSize)
{
//BlockSize = new Constant<int>(blockSize);
//ThreadSize = new Constant<int>(threadSize);
//VirtualBlockSize = new Constant<int>(blockSize * threadSize);
}
/// <summary>
/// Kernel for parallel Floyd Warshall algorithm on GPU computing independent blocks.
/// </summary>
/// <param name="block">Number block of which is performed relaxation paths [v1, v2]</param>
/// <param name="n">Number of vertices in the graph G:=(V,E), n := |V(G)|</param>
/// <param name="pitch">Width to get to next row in number of int</param>
/// <param name="d">Matrix of shortest paths d(G)</param>
/// <param name="p">Matrix of predecessors p(G)</param>
public void KernelPhaseOne(int block, int n, int pitch, int[,] d, int[,] p)
{
var newPred = 0;
var tx = threadIdx.x;
var ty = threadIdx.y;
var v1 = VirtualBlockSize*block + ty;
var v2 = VirtualBlockSize*block + tx;
var primaryD = __shared__.Array2D<int>(VirtualBlockSize, VirtualBlockSize);
var primaryP = __shared__.Array2D<int>(VirtualBlockSize, VirtualBlockSize);
if (v1 < n && v2 < n)
{
primaryD[ty, tx] = d[v1, v2];
primaryP[ty, tx] = p[v1, v2];
newPred = primaryP[ty, tx];
}
else
{
primaryD[ty, tx] = Inf;
primaryP[ty, tx] = None;
}
DeviceFunction.SyncThreads();
for (var i = 0; i < VirtualBlockSize; i++)
{
var newPath = primaryD[ty, i] + primaryD[i, tx];
DeviceFunction.SyncThreads();
if (newPath < primaryD[ty, tx])
{
primaryD[ty, tx] = newPath;
newPred = primaryP[i, tx];
}
DeviceFunction.SyncThreads();
primaryP[ty, tx] = newPred;
}
if (v1 < n && v2 < n)
{
d[v1, v2] = primaryD[ty, tx];
p[v1, v2] = primaryP[ty, tx];
}
}
/// <summary>
/// Kernel for parallel Floyd Warshall algorithm on GPU to compute block depending on a single independent block.
/// </summary>
/// <param name="block">Number block of which is performed relaxation paths [v1, v2]</param>
/// <param name="n">Number of vertices in the graph G:=(V,E), n := |V(G)|</param>
/// <param name="pitch"></param>
/// <param name="d">Matrix of shortest paths d(G)</param>
/// <param name="p">Matrix of predecessors p(G)</param>
public void KernelPhaseTwo(int block, int n, int pitch, int[,] d, int[,] p)
{
if (blockIdx.x == block) return;
var newPath = 0;
var newPred = 0;
var tx = threadIdx.x;
var ty = threadIdx.y;
var v1 = VirtualBlockSize*block + ty;
var v2 = VirtualBlockSize*block + tx;
var primaryD = __shared__.Array2D<int>(VirtualBlockSize, VirtualBlockSize);
var currentD = __shared__.Array2D<int>(VirtualBlockSize, VirtualBlockSize);
var primaryP = __shared__.Array2D<int>(VirtualBlockSize, VirtualBlockSize);
var currentP = __shared__.Array2D<int>(VirtualBlockSize, VirtualBlockSize);
if (v1 < n && v2 < n)
{
primaryD[ty, tx] = d[v1, v2];
primaryP[ty, tx] = p[v1, v2];
}
else
{
primaryD[ty, tx] = Inf;
primaryP[ty, tx] = None;
}
// load i-aligned singly dependent blocks
if (blockIdx.y == 0)
{
v1 = VirtualBlockSize*block + ty;
v2 = VirtualBlockSize*blockIdx.x + tx;
}
// load j-aligned singly dependent blocks
else
{
v1 = VirtualBlockSize*blockIdx.x + ty;
v2 = VirtualBlockSize*block + tx;
}
if (v1 < n && v2 < n)
{
currentD[ty, tx] = d[v1, v2];
currentP[ty, tx] = p[v1, v2];
newPred = currentP[ty, tx];
}
else
{
currentD[ty, tx] = Inf;
currentP[ty, tx] = None;
}
DeviceFunction.SyncThreads();
// compute i-aligned singly dependent blocks
if (blockIdx.y == 0)
{
for (var i = 0; i < VirtualBlockSize; i++)
{
newPath = primaryD[ty, i] + currentD[i, tx];
DeviceFunction.SyncThreads();
if (newPath < currentD[ty, tx])
{
currentD[ty, tx] = newPath;
newPred = currentP[i, tx];
}
DeviceFunction.SyncThreads();
currentP[ty, tx] = newPred;
}
}
// compute j-aligned singly dependent blocks
else
{
for (var i = 0; i < VirtualBlockSize; i++)
{
newPath = currentD[ty, i] + primaryD[i, tx];
DeviceFunction.SyncThreads();
if (newPath < currentD[ty, tx])
{
currentD[ty, tx] = newPath;
currentP[ty, tx] = primaryP[i, tx];
}
DeviceFunction.SyncThreads();
}
}
if (v1 < n && v2 < n)
{
d[v1, v2] = currentD[ty, tx];
p[v1, v2] = currentP[ty, tx];
}
}
/// <summary>
/// Kernel for parallel Floyd Warshall algorithm on GPU to compute dependent block depending on the singly dependent blocks.
/// </summary>
/// <param name="block">Number block of which is performed relaxation paths [v1, v2]</param>
/// <param name="n">Number of vertices in the graph G:=(V,E), n := |V(G)|</param>
/// <param name="pitch"></param>
/// <param name="d">Matrix of shortest paths d(G)</param>
/// <param name="p">Matrix of predecessors p(G)</param>
public void KernelPhaseThree(int block, int n, int pitch, int[,] d, int[,] p)
{
if (blockIdx.x == block || blockIdx.y == block) return;
var tx = threadIdx.x*ThreadSize;
var ty = threadIdx.y*ThreadSize;
var v1 = blockDim.y*blockIdx.y*ThreadSize + ty;
var v2 = blockDim.x*blockIdx.x*ThreadSize + tx;
var primaryRowD = __shared__.Array2D<int>(BlockSize*ThreadSize, BlockSize*ThreadSize);
var primaryColD = __shared__.Array2D<int>(BlockSize*ThreadSize, BlockSize*ThreadSize);
var primaryRowP = __shared__.Array2D<int>(BlockSize*ThreadSize, BlockSize*ThreadSize);
var v1Row = BlockSize*block*ThreadSize + ty;
var v2Col = BlockSize*block*ThreadSize + tx;
// load data for virtual block
for (var i = 0; i < ThreadSize; i++)
{
for (var j = 0; j < ThreadSize; j++)
{
var idx = tx + j;
var idy = ty + i;
if (v1Row + i < n && v2 + j < n)
{
primaryRowD[idy, idx] = d[v1Row + i, v2 + j];
primaryRowP[idy, idx] = p[v1Row + i, v2 + j];
}
else
{
primaryRowD[idy, idx] = Inf;
primaryRowP[idy, idx] = None;
}
if (v1 + i < n && v2Col + j < n)
{
primaryColD[idy, idx] = d[v1 + i, v2Col + j];
}
else
{
primaryColD[idy, idx] = Inf;
}
}
}
DeviceFunction.SyncThreads();
// compute data for virtual block
for (var i = 0; i < ThreadSize; i++)
{
for (var j = 0; j < ThreadSize; j++)
{
if (v1 + i < n && v2 + j < n)
{
var path = d[v1 + i, v2 + j];
var predecessor = p[v1 + i, v2 + j];
var idy = ty + i;
var idx = tx + j;
for (var k = 0; k < BlockSize*ThreadSize; k++)
{
var newPath = primaryColD[idy, k] + primaryRowD[k, idx];
if (path > newPath)
{
path = newPath;
predecessor = primaryRowP[k, idx];
}
}
d[v1 + i, v2 + j] = path;
p[v1 + i, v2 + j] = predecessor;
}
}
}
}
/// <summary>
/// Parallel multi-stage Floyd Warshall algorithm on GPU.
/// </summary>
/// <param name="gpu">The GPU on which the kernels should run</param>
/// <param name="n">Number of vertices in the graph G:=(V,E), n := |V(G)|</param>
/// <param name="g">The graph G:=(V,E)</param>
/// <param name="d">Matrix of shortest paths d(G)</param>
/// <param name="p">Matrix of predecessors p(G)</param>
public void Run(Gpu gpu, int[,] d, int[,] p, bool verbose = false)
{
var n = d.GetLength(0);
var gridDim1 = new dim3(1, 1, 1);
var gridDim2 = new dim3((n - 1)/VirtualBlockSize + 1, 2, 1);
var gridDim3 = new dim3((n - 1)/VirtualBlockSize + 1, (n - 1)/VirtualBlockSize + 1, 1);
var blockDim1 = new dim3(VirtualBlockSize, VirtualBlockSize, 1);
var blockDim2 = new dim3(VirtualBlockSize, VirtualBlockSize, 1);
var blockDim3 = new dim3(BlockSize, BlockSize, 1);
var numOfBlock = (n - 1)/VirtualBlockSize + 1;
var pitchInt = n;
if (verbose)
{
Console.WriteLine($"|V| {n}");
Console.WriteLine($"Phase 1: grid dim {gridDim1} block dim {blockDim1}");
Console.WriteLine($"Phase 2: grid dim {gridDim2} block dim {blockDim2}");
Console.WriteLine($"Phase 3: grid dim {gridDim3} block dim {blockDim3}");
}
for (var block = 0; block < numOfBlock; block++)
{
gpu.Launch(KernelPhaseOne, new LaunchParam(gridDim1, blockDim1), block, n, pitchInt, d, p);
gpu.Launch(KernelPhaseTwo, new LaunchParam(gridDim2, blockDim2), block, n, pitchInt, d, p);
gpu.Launch(KernelPhaseThree, new LaunchParam(gridDim3, blockDim3), block, n, pitchInt, d, p);
}
}
}
</code></pre>
<p>For the version with explicit memory management, it is better to download the sample from <a href="http://www.quantalea.com/gallery" rel="nofollow">http://www.quantalea.com/gallery</a> and search for Floyd-Warshall.</p>
<p>Hope that answers the question. </p>
<p>The implementation is based on the following paper:</p>
<p>Ben Lund, Justin W Smith, A Multi-Stage CUDA Kernel for Floyd-Warshall, 2010.</p>
<p><a href="https://arxiv.org/abs/1001.4108" rel="nofollow">https://arxiv.org/abs/1001.4108</a></p> | 2016-10-07 15:50:43.933000+00:00 | 2016-10-07 15:50:43.933000+00:00 | null | null | 36,482,098 | <p>I've been trying to use Alea GPU to write the parallel Floyd-Warshall algorithm in F#, and basing myself on the CUDA code another user presented here</p>
<p><a href="https://stackoverflow.com/questions/19861532/the-floyd-warshall-algorithm-in-cuda">The Floyd-Warshall algorithm in CUDA</a></p>
<p>I wrote the following simple implementation</p>
<pre><code>type FWModule<'T>(target:GPUModuleTarget, tileDim:int) =
inherit GPUModule(target)
[<Kernel;ReflectedDefinition>]
member this.FloydWKernel (width:int) (k:int) (data:deviceptr<float>) =
let col = blockIdx.x * blockDim.x + threadIdx.x
let row = blockIdx.y
if col >= width then () //out of bounds
let index = width * row + col
let best = __shared__.Variable<float>()
if threadIdx.x = 0 then best := data.[width*row+k]
__syncthreads()
let tmp = data.[k*width+col]
let candidate = !best + tmp
data.[index] <- min data.[index] candidate
member this.LaunchParams width =
let blockdim = dim3(tileDim)
let griddim = dim3(divup width tileDim, width)
LaunchParam(griddim, blockdim)
member this.FloydW (width:int) (k:int) (data:deviceptr<float>) =
let lp = this.LaunchParams width
this.GPULaunch <@ this.FloydWKernel @> lp width k idata odata
member this.FW(size:int, A:float[])=
use deviceArr = this.GPUWorker.Malloc(A)
for k in 0 .. size-1 do
this.FloydW size k deviceArr.Ptr deviceArr.Ptr
deviceArr.Gather()
let tileDim = 256
let apsp = new FWModule<float>(GPUModuleTarget.DefaultWorker, tileDim)
</code></pre>
<p>However, when the following lines are ran in <code>fsi</code></p>
<pre><code>let m = [|0.0 ; 5.0 ; 9.0 ; infinity;
infinity; 0.0 ; 1.0 ; infinity;
infinity; infinity; 0.0 ; 2.0;
infinity; 3.0 ; infinity; 0.0|];;
apsp.FW (4,m);;
</code></pre>
<p>The output is</p>
<pre><code>[|0.0; 5.0; 6.0; 8.0;
4.0; 0.0; 1.0; 3.0;
3.0; 3.0; 0.0; 1.0;
1.0; 1.0; 1.0; 0.0|]
</code></pre>
<p>Which it should not be given that the usual iterative, sequential <code>floydwarshall</code></p>
<pre><code>let floydwarshall (l:int, mat:float[]) =
let a = Array.copy mat
for k in 0 .. (l-1) do
for i in 0 .. (l-1) do
for j in 0 .. (l-1) do
a.[i*l+j] <- min a.[i*l+j] (a.[i*l+k] + a.[k*l+j])
a
</code></pre>
<p>gives me</p>
<pre><code>floydwarshall (4,m);;
[|0.0 ; 5.0; 6.0; 8.0;
infinity; 0.0; 1.0; 3.0;
infinity; 5.0; 0.0; 2.0;
infinity; 3.0; 4.0; 0.0|]
</code></pre>
<p>My question is, what's happening? </p> | 2016-04-07 16:30:20.020000+00:00 | 2016-10-07 15:50:43.933000+00:00 | 2017-05-23 12:18:18.497000+00:00 | f#|aleagpu | ['http://www.quantalea.com/gallery', 'http://www.quantalea.com/gallery', 'https://arxiv.org/abs/1001.4108'] | 3 |
45,417,389 | <p>Check out this <a href="https://arxiv.org/pdf/1608.01471.pdf" rel="noreferrer">paper</a> where they come up with a way to make the concept of IoU differentiable. I implemented their solution with amazing results!</p> | 2017-07-31 13:32:20.717000+00:00 | 2017-07-31 13:32:20.717000+00:00 | null | null | 40,475,246 | <p>When people try to solve the task of semantic segmentation with CNNs they usually use a softmax-crossentropy loss during training (see <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Long_Fully_Convolutional_Networks_2015_CVPR_paper.html" rel="noreferrer" title="FCN Long">Fully conv. - Long</a>). But when it comes to comparing the performance of different approaches, measures like intersection-over-union are reported.</p>
<p>My question is why don't people train directly on the measure they want to optimize? Seems odd to me to train on some measure during training, but evaluate on another measure for benchmarks.</p>
<p>I can see that the IOU has problems for training samples, where the class is not present (union=0 and intersection=0 => division zero by zero). But when I can ensure that every sample of my ground truth contains all classes, is there another reason for not using this measure?</p> | 2016-11-07 21:48:05.863000+00:00 | 2020-07-01 18:24:45.177000+00:00 | null | machine-learning|computer-vision|deep-learning|image-segmentation | ['https://arxiv.org/pdf/1608.01471.pdf'] | 1 |
62,016,449 | <p>This phenomenon is called catastrophic forgetting in neural networks.</p>
<p>You can read a paper on this: <a href="https://arxiv.org/pdf/1708.02072.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1708.02072.pdf</a></p>
<p>Your <code>toyX</code> and <code>toyX2</code> have completely different distributions. When you re-train your model with <code>toyX2</code> for 1000 epochs, your model completely forgets the mapping from <code>toyX</code> to <code>toyY</code>.</p>
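<p>For the toy example above, the forgetting largely goes away if the two sets are mixed and trained on together (a sketch that reuses the arrays and the <code>model</code> from the question's code):</p>
<pre><code>import numpy as np

X_mixed = np.concatenate([toyX, toyX2], axis=0)   # shape (2, 8, 1)
Y_mixed = np.concatenate([toyY, toyY2], axis=0)   # shape (2, 3)
model.fit(X_mixed, Y_mixed, epochs=1000, verbose=0)
print(model.predict(toyX))
</code></pre>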
<p>You should train for only a few epochs with a really small learning rate if you want to make sure the knowledge from the previous training stays, or just mix both sets and re-train.</p> | 2020-05-26 07:07:44.777000+00:00 | 2020-05-26 07:07:44.777000+00:00 | null | null | 62,006,706 | <p>I want to write a NN program for recognition using Keras.</p>
<p>I have used 2 sets of data for training:</p>
<pre><code>toyX = [1, 2, 3, 4, 5, 6, 7, 8]
toyX2 = [18, 17, 16, 15, 14, 13, 12, 11].
</code></pre>
<p>After training with <code>toyX</code> and then <code>toyX2</code>, the output of <code>model.predict(toyX)</code> is <code>[[0.56053144 1.0758346 1.7890009 ]]</code>. However, it should have been <code>[6, 11, 14]</code>.</p>
<p>Should I add more layers or change parameters to improve the prediction? Which parameters should I change?</p>
<p>Please help me to solve this problem.</p>
<pre><code>from keras.models import Sequential
from keras.optimizers import Adam
from keras.layers import Conv1D, MaxPooling1D
from keras.layers import Dense, Flatten
from keras.layers import Dropout
import numpy as np
model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(8, 1)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
#model.add(Dense(3, activation='softmax'))
model.add(Dense(3))
#model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
print(model.summary())
toyX = np.array([1, 2, 3, 4, 5, 6, 7, 8]).reshape(1, 8, 1)
toyX2 = np.array([18, 17, 16, 15, 14, 13, 12, 11]).reshape(1, 8, 1)
#print (toyX.shape)
toyY = np.array([6, 11, 14]).reshape(1, 3)
#print (toyY.shape)
toyY2 = np.array([1, 2, 3]).reshape(1, 3) # if flatten is active
model.fit(toyX, toyY, epochs = 1000, verbose = 0)
model.fit(toyX2, toyY2, epochs = 1000, verbose = 0)
print (model.predict(toyX))
</code></pre> | 2020-05-25 16:30:46.580000+00:00 | 2020-05-26 07:07:44.777000+00:00 | 2020-05-26 06:59:15.123000+00:00 | python|keras|conv-neural-network | ['https://arxiv.org/pdf/1708.02072.pdf'] | 1 |
52,161,194 | <p>It's a <em>very</em> broad subject, but IMHO, you should try <a href="https://github.com/clcarwin/focal_loss_pytorch" rel="noreferrer">focal loss</a>: it was introduced by <a href="https://arxiv.org/pdf/1708.02002.pdf" rel="noreferrer">Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He and Piotr Dollar</a> to handle class imbalance in object detection. Since its introduction it has also been used in the context of segmentation.<br>
The idea of the focal loss is to reduce both the loss and the gradient for correct (or almost correct) predictions while emphasizing the gradient of errors.</p>
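<p>For reference, a minimal PyTorch sketch of the binary variant (the linked repo has a complete implementation; the function name and default values here are my own):</p>
<pre><code>import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # per-element BCE, then down-weight easy examples by (1 - p_t)^gamma
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
</code></pre>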
<p>As you can see in the graph:<br>
<a href="https://i.stack.imgur.com/aWvuD.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/aWvuD.jpg" alt="enter image description here"></a></p>
<p>The blue curve is the regular cross-entropy loss: on the one hand it has non-negligible loss and gradient even for well-classified examples, and on the other hand it has a weaker gradient for the erroneously classified examples.<br>
In contrast, focal loss (all other curves) has smaller loss and a weaker gradient for the well-classified examples and stronger gradients for the erroneously classified examples.</p> | 2018-09-04 07:40:17.647000+00:00 | 2018-09-04 07:45:51.067000+00:00 | 2018-09-04 07:45:51.067000+00:00 | null | 52,160,979 | <p>I'm currently using the Cross Entropy Loss function but with the imbalanced data-set the performance is not great.</p>
<p>Is there a better loss function?</p>
55,974,946 | <p>A wonderful resource for BERT is: <a href="https://github.com/huggingface/pytorch-pretrained-BERT" rel="nofollow noreferrer">https://github.com/huggingface/pytorch-pretrained-BERT</a>. This repository contains op-for-op PyTorch reimplementations, pre-trained models and fine-tuning examples for Google's BERT model.</p>
<p>You can find the language model fine-tuning examples in the following link. The three example scripts in this folder can be used to fine-tune a pre-trained BERT model using the pretraining objective (the combination of masked language modeling and next sentence prediction loss).</p>
<ul>
<li><a href="https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/examples/lm_finetuning" rel="nofollow noreferrer">https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/examples/lm_finetuning</a></li>
</ul>
<p>By the way, BERT multilingual is available for 104 languages (<a href="https://github.com/google-research/bert/blob/master/multilingual.md" rel="nofollow noreferrer">ref</a>), and it is found to be surprisingly effective in many cross-lingual NLP tasks (<a href="https://arxiv.org/pdf/1904.09077.pdf" rel="nofollow noreferrer">ref</a>). So, make sure you use BERT appropriately in your task.</p> | 2019-05-03 17:55:15.747000+00:00 | 2019-05-03 21:51:16.210000+00:00 | 2019-05-03 21:51:16.210000+00:00 | null | 55,973,414 | <p>I wanted to pre-train BERT with data from my own language, since the multilingual BERT model (which includes my language) is not successful. Since full pre-training costs a lot, I decided to fine-tune it on its own 2 tasks: masked language model and next sentence prediction. There are previous implementations for different tasks (NER, sentiment analysis, etc.), but I couldn't find any fine-tuning on its own tasks. Is there an implementation that I couldn't see? If not, where should I start? I need some initial help.</p>
43,751,332 | <p>It would be great if you added details about what you are trying to extract from the image and about the dataset that you are trying to use.</p>
<p>A general idea about the filter mask sizes that need to be considered can be drawn from <a href="https://adeshpande3.github.io/adeshpande3.github.io/The-9-Deep-Learning-Papers-You-Need-To-Know-About.html" rel="nofollow noreferrer">Alexnet and ZFnet</a>. There is no specific formula for which size should be used for a particular format, but the size is kept low if a deeper analysis is required, since many smaller details might be missed with larger filter sizes. The above link, together with <a href="https://arxiv.org/pdf/1409.4842v1.pdf" rel="nofollow noreferrer">Inception networks</a>, describes how effectively you can utilize the computing resources. If you don't have a resource problem, then from ZFNet you can observe the visualizations in multiple layers, where many finer details are visible. We can call it a CNN even if it has just one convolution layer and one pooling layer; the number of layers depends on how fine the required details are.</p>
<p>I am not an expert, but I can recommend that if your dataset is small (a few thousand samples) and not much feature extraction is required, and if you are not sure about the size, you can simply go with small sizes (a small, popular choice is 5x5 - LeNet-5).</p> | 2017-05-03 04:51:54.530000+00:00 | 2017-05-03 05:46:56.477000+00:00 | 2017-05-03 05:46:56.477000+00:00 | null | 43,744,362 | <p>If I have an image which is WxHx3 (RGB), how do I decide how big to make the filter masks? Is it a function of the dimensions (W and H) or something else? How do the dimensions of the second, third, ... filters compare to the dimensions of the first filter? (Any concrete pointers would be appreciated.)</p>
<p>I have seen the following, but they don't answer the question.</p>
<p><a href="https://stackoverflow.com/questions/42712219/dimensions-in-convolutional-neural-network">Dimensions in convolutional neural network</a> </p>
<p><a href="https://stackoverflow.com/questions/36646618/convolutional-neural-networks-how-many-pixels-will-be-covered-by-each-of-the-fi">Convolutional Neural Networks: How many pixels will be covered by each of the filters?</a> </p>
<p><a href="https://stackoverflow.com/questions/24509921/how-do-you-decide-the-parameters-of-a-convolutional-neural-network-for-image-cla">How do you decide the parameters of a Convolutional Neural Network for image classification?</a></p> | 2017-05-02 18:00:12.313000+00:00 | 2017-05-03 05:46:56.477000+00:00 | 2017-05-23 12:26:05.070000+00:00 | conv-neural-network | ['https://adeshpande3.github.io/adeshpande3.github.io/The-9-Deep-Learning-Papers-You-Need-To-Know-About.html', 'https://arxiv.org/pdf/1409.4842v1.pdf'] | 2 |
48,554,288 | <p>Found an answer from <a href="https://arxiv.org/abs/1711.04436" rel="nofollow noreferrer">SQLNet</a>'s equation:</p>
<p><a href="https://i.stack.imgur.com/Yg21q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yg21q.png" alt="-"></a></p> | 2018-02-01 02:05:47.027000+00:00 | 2018-02-01 02:05:47.027000+00:00 | null | null | 48,312,854 | <p>I use a fully-connected network to get the distribution over the whole vocabulary from the last state of an encoder.</p>
<p>For example, there are 5 words in the vocabulary.</p>
<pre><code>P = [0.1, 0.1, 0.2, 0.2, 0.4]
</code></pre>
<p>And the ground truth is a set of words for this training sample.</p>
<p>I sample 3 words from the 5 words, and if the target set contains the 3 words, then I want the probability of those 3 words in <code>P</code> to increase for this state.</p>
<p>If one of the 3 words is not in the target set, then I want the probability of that word in <code>P</code> to decrease for this state.</p>
<p>So I wrote this code:</p>
<pre><code>reward = [0,0,0]
</code></pre>
<p>Suppose the first 3 words are sampled from <code>P</code>, and only the first 2 of the 3 words are in the target set. And the third word is not in the target set. Then</p>
<pre><code>reward = [1,1,-1]
</code></pre>
<p>Then I compute the negative sum of the element-wise product of <code>reward</code> and the log of the 3 sampled probabilities <code>P2=[0.1, 0.1, 0.2]</code> as the loss</p>
<pre><code>loss = -sum(reward * P2.log())
</code></pre>
<p>But I fail to get the desired result, i.e., that the top-probability words selected from the vocabulary are the correct ones for every state.</p>
58,213,245 | <p>I would suggest either one of these strategies</p>
<h3>Focal Loss</h3>
<p>A very interesting approach for dealing with un-balanced training data through tweaking of the loss function was introduced in<br />
<em>Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He and Piotr Dollar</em> <a href="http://openaccess.thecvf.com/content_iccv_2017/html/Lin_Focal_Loss_for_ICCV_2017_paper.html" rel="noreferrer"><strong>Focal Loss for Dense Object Detection</strong></a> (ICCV 2017).<br />
They propose to modify the binary cross entropy loss in a way that decrease the loss and gradient of easily classified examples while "focusing the effort" on examples where the model makes gross errors.</p>
<h3>Hard Negative Mining</h3>
<p>Another popular approach is to do "hard negative mining"; that is, propagate gradients only for part of the training examples - the "hard" ones.<br />
see, e.g.:<br />
<em>Abhinav Shrivastava, Abhinav Gupta and Ross Girshick</em> <a href="https://arxiv.org/abs/1604.03540" rel="noreferrer"><strong>Training Region-based Object Detectors with Online Hard Example Mining</strong></a> (CVPR 2016)</p> | 2019-10-03 06:13:43.933000+00:00 | 2019-10-03 06:13:43.933000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 58,206,286 | <p>I have a multilabel classification problem, which I am trying to solve with CNNs in Pytorch. I have 80,000 training examples and 7900 classes; every example can belong to multiple classes at the same time, mean number of classes per example is 130. </p>
<p>The problem is that my dataset is very imbalanced. For some classes, I have only ~900 examples, which is around 1%. For “overrepresented” classes I have ~12000 examples (15%). When I train the model I use BCEWithLogitsLoss from <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss" rel="nofollow noreferrer">pytorch</a> with a positive weights parameter. I calculate the weights the same way as described in the documentation: the number of negative examples divided by the number of positives.</p>
<p>As a result, my model overestimates almost every class… For both minor and major classes I get almost twice as many predictions as true labels. And my AUPRC is just 0.18, even though it’s much better than no weighting at all, since in that case the model predicts everything as zero.</p>
<p>So my question is, how do I improve the performance? Is there anything else I can do? I tried different batch sampling techniques (to oversample minority class), but they don’t seem to work.</p> | 2019-10-02 17:17:37.567000+00:00 | 2019-10-03 10:12:20.627000+00:00 | null | pytorch|multilabel-classification|imbalanced-data | ['http://openaccess.thecvf.com/content_iccv_2017/html/Lin_Focal_Loss_for_ICCV_2017_paper.html', 'https://arxiv.org/abs/1604.03540'] | 2 |
72,209,218 | <p>By digging into the documentation and paper, I came to understand that the estimated face geometry is aligned with the canonical face model. You can refer to the paper <a href="https://arxiv.org/pdf/1907.06724.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1907.06724.pdf</a> and the explanation <a href="https://developers.googleblog.com/2020/09/mediapipe-3d-face-transform.html" rel="nofollow noreferrer">here</a>.</p> | 2022-05-12 01:39:45.713000+00:00 | 2022-05-12 01:39:45.713000+00:00 | null | null | 72,196,945 | <p>Looking into MediaPipe's face_effect module's graph definition below</p>
<pre><code> node_options: {
[type.googleapis.com/mediapipe.SwitchContainerOptions] {
contained_node: {
calculator: "FaceGeometryEffectRendererCalculator"
node_options: {
[type.googleapis.com/mediapipe.FaceGeometryEffectRendererCalculatorOptions] {
effect_texture_path: "mediapipe/graphs/face_effect/data/axis.pngblob"
effect_mesh_3d_path: "mediapipe/graphs/face_effect/data/axis.binarypb"
}
}
}
</code></pre>
<p>I further checked the calculator code of FaceGeometryEffectRendererCalculator, but I couldn't pinpoint the code where it determines the exact location at which the renderer renders the axis. Maybe I don't understand OpenGL well, which leads to the misunderstanding.</p>
<p>Can someone help shed some lights where should I look for?</p>
<p>Thanks!</p> | 2022-05-11 07:21:00.750000+00:00 | 2022-05-12 01:39:45.713000+00:00 | 2022-05-11 13:47:57.427000+00:00 | mediapipe | ['https://arxiv.org/pdf/1907.06724.pdf', 'https://developers.googleblog.com/2020/09/mediapipe-3d-face-transform.html'] | 2 |
57,024,170 | <p>I assume that this is for gaining some confidence about the predictions.</p>
<p>If this is the case, there are multiple ways to do this. For example, refer to <a href="https://arxiv.org/pdf/1711.11053.pdf" rel="nofollow noreferrer">this</a> paper by Amazon on how to predict quantiles, and <a href="https://arxiv.org/pdf/1709.01907.pdf" rel="nofollow noreferrer">this</a> paper on how to use a Bayesian framework to obtain uncertainty around the predictions. </p>
<p>If you have other intentions, please clarify.</p> | 2019-07-14 02:37:21.283000+00:00 | 2019-07-14 02:37:21.283000+00:00 | null | null | 56,970,807 | <p>The sample dataset contains Location point of the user.</p>
<pre><code>df.head()
user tslot Location_point
0 0 2015-12-04 13:00:00 4356
1 0 2015-12-04 13:15:00 4356
2 0 2015-12-04 13:30:00 3659
3 0 2015-12-04 13:45:00 4356
4 0 2015-12-04 14:00:00 8563
df.shape
(576,3)
</code></pre>
<p>The location points are random and need to predict the next location point of the user for a given time. As the location points are random numbers I need to predict the set of location points at each time slot.</p>
<pre><code>Example:
If I need to predict the location point at tslot 2015-12-04 14:00:00.
my predicted output should be [8563,4356,3659,5861,3486].
</code></pre>
<p>My code</p>
<pre><code>time_steps=1
data_dim = X_train.shape[2]
model = Sequential()
model.add(LSTM(data_dim, input_shape=(time_steps,data_dim), activation='relu'))
model.add(Dense(data_dim))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(X_train, y_train, epochs=20, batch_size=96)
model.summary()
</code></pre>
<p>which helps to predict one location point for each time slot. I would like to know if this is possible, and how?</p>
61,193,229 | <p>Yes, both <code>nsl.AdversarialRegularization.perturb_on_batch()</code> and <code>cleverhans.attacks.FastGradientMethod.generate()</code> implement the Fast Gradient Sign Method in <a href="https://arxiv.org/abs/1412.6572" rel="nofollow noreferrer">Goodfellow et al. 2014</a>. And both offer parameters like epsilon and norm type to control perturbations. Since both <code>nsl</code> and <code>cleverhans</code> implements FGSM, the generated perturbations have no difference when the configurations are carefully specified. Yet some implementation details might be handled differently, especially in their default configuration. For example,</p>
<ul>
<li><code>cleverhans</code> by default takes model predictions as labels for generating adversarial perturbations, while <code>nsl</code> takes true labels.</li>
<li><code>cleverhans</code> usually expects the model to output logits (since the default <code>loss_fn</code> is <code>softmax_cross_entropy_with_logits</code>), while <code>nsl</code>'s models may output different things. In <code>nsl</code>'s adversarial training <a href="https://www.tensorflow.org/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist" rel="nofollow noreferrer">tutorial</a>, the model outputs probability distributions.</li>
</ul>
<p>There could be more differences in other places. I can take a closer look if you could provide a concrete example.</p>
<p>Regarding adversarial training, <code>nsl.keras.AdversarialRegularization</code> takes the adversarial loss as regularization, which means the model is trained on both original and adversarial examples. <code>cleverhans.loss.CrossEntropy</code> also calculates loss on both original and adversarial examples, but the weighting scheme is a bit different. In <code>nsl</code> the original and adversarial examples are weighted as <code>1:multiplier</code>, while in <code>cleverhans</code> they are weighted as <code>(1-adv_coeff):adv_coeff</code>. Note that another training approach is deployed in some of the literature, where the model is trained only on adversarial examples.</p>
<p>I'm curious more generally about what the implementation differences are for adversarial perturbation in <code>nsl.AdversarialRegularization.perturb_on_batch()</code> versus the <code>cleverhans</code> implementation of the same/similar functionality, which would be <code>FastGradientMethod.generate()</code>. </p>
<p>The <code>nsl</code> docs aren't especially clear, but they seem to imply that <code>nsl</code> is using the Fast Gradient Sign Method of <a href="https://arxiv.org/pdf/1412.6572.pdf" rel="nofollow noreferrer">Goodfellow et al. 2014</a>, which is supposedly the same method implemented in <code>FastGradientMethod</code>. For example, <code>nsl</code> refers to the Goodfellow et al. paper in the <a href="https://www.tensorflow.org/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist" rel="nofollow noreferrer">adversarial training tutorial</a> and in some of the function <a href="https://github.com/tensorflow/neural-structured-learning/blob/c1fc41808e43784b971779cfeafa213807e708df/neural_structured_learning/lib/utils.py#L82" rel="nofollow noreferrer">docs</a>. Both libraries allow specification of similar parameters, e.g. an <code>epsilon</code> to control the level of perturbation and control over the norm used to constrain it. However, the differences in adversarially-trained performance lead me to believe that these libraries are not using the same underlying implementation. <code>nsl</code> is difficult to parse, so I am particularly curious what might be happening under the hood there.</p>
<p><strong>What are the differences in implementation in <code>nsl.AdversarialRegularization.perturb_on_batch()</code> and the <code>cleverhans.attacks.FastGradientMethod.generate()</code> which could cause different perturbations for the same inputs?</strong> Are there other differences in these functions which might contribute to differences in their performance (I am not interested in speed or efficiency, but in ways in which the results of the two perturbations might be different for the same model, epsilon, and norm).</p> | 2020-04-07 00:53:42.073000+00:00 | 2020-04-13 17:26:07.610000+00:00 | 2020-04-07 04:07:28.967000+00:00 | tensorflow|cleverhans|nsl | ['https://arxiv.org/abs/1412.6572', 'https://www.tensorflow.org/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist'] | 2 |
63,407,558 | <p>I also have been trying to do something similar for my problem. You can check out the following papers:</p>
<ol>
<li><a href="http://www.lamda.nju.edu.cn/wanghan/pricai16.pdf" rel="nofollow noreferrer">Exploring Multi-Action Relationship in Reinforcement Learning</a></li>
<li><a href="https://arxiv.org/pdf/1803.05402.pdf" rel="nofollow noreferrer">Imitation Learning with Concurrent Actions in 3D Games</a></li>
<li><a href="https://arxiv.org/pdf/1711.08946.pdf" rel="nofollow noreferrer">Action Branching Architectures for Deep Reinforcement Learning</a></li>
<li><a href="https://arxiv.org/pdf/1708.04782.pdf" rel="nofollow noreferrer">StarCraft II: A New Challenge for Reinforcement Learning</a></li>
</ol> | 2020-08-14 06:18:51.933000+00:00 | 2020-08-14 06:18:51.933000+00:00 | null | null | 63,330,428 | <p>I’m trying to use Reinforcement Learning to solve a problem that involves a ton of simultaneous actions. For example, the agent will be able to take actions that can result in a single action, like shooting, or that can result in multiple actions, like shooting while jumping while turning right while doing a karate chop, etc. When all the possible actions are combined, I end up with a huge action array, say 1 x 2000. So my LSTM network output array will have that size. Of course I’ll use a dictionary to decode the action array to apply the actions(s). So my questions are, is that action array too large? Is this the way to handle simultaneous actions? Is there any other way to do this? Feel free to link any concrete examples you have seen around. Thanks.</p> | 2020-08-09 19:21:23.783000+00:00 | 2020-08-14 06:18:51.933000+00:00 | null | reinforcement-learning | ['http://www.lamda.nju.edu.cn/wanghan/pricai16.pdf', 'https://arxiv.org/pdf/1803.05402.pdf', 'https://arxiv.org/pdf/1711.08946.pdf', 'https://arxiv.org/pdf/1708.04782.pdf'] | 4 |
48,116,823 | <p>It's good to have a look at this EMNLP paper on handling 'oov' tokens by generating embeddings:</p>
<p><a href="https://arxiv.org/pdf/1707.06961.pdf" rel="nofollow noreferrer">Mimicking Word Embeddings using Subword RNNs</a></p> | 2018-01-05 15:40:35.430000+00:00 | 2018-01-05 15:40:35.430000+00:00 | null | null | 45,495,190 | <p>I am building TensorFlow model for NLP task, and I am using pretrained Glove 300d word-vector/embedding dataset.</p>
<p>Obviously some tokens can't be resolved to embeddings because they were not included in the training dataset of the word-vector embedding model, e.g. rare names.</p>
<p>I can replace those tokens with vectors of 0s, but rather than dropping this information on the floor, I prefer to encode it somehow and include it in my training data.</p>
<p>Say I have the word 'raijin', which can't be resolved to an embedding vector; what would be the best way to encode it consistently with the GloVe embedding dataset? What is the best approach to convert it to a 300d vector?</p>
<p>Thank you. </p> | 2017-08-03 21:58:11.610000+00:00 | 2018-01-05 15:40:35.430000+00:00 | 2017-08-04 01:22:07.097000+00:00 | tensorflow|embedding|word-embedding | ['https://arxiv.org/pdf/1707.06961.pdf'] | 1 |
71,331,689 | <p>That's an interesting problem. As @lvan said, this is a multi-objective optimization problem.</p>
<p>The multi-loss/multi-task setup is as follows:</p>
<pre><code>l(\theta) = f(\theta) + g(\theta)
</code></pre>
<p>The <code>l</code> is total_loss, <code>f</code> is the class loss function, <code>g</code> is the detection loss function.</p>
<p>The different loss functions have different refresh rates. As learning progresses, the rate at which the two loss functions decrease is quite inconsistent: often one decreases very quickly and the other decreases very slowly.</p>
<p>There is a paper devoted to this question:</p>
<p><a href="https://arxiv.org/abs/1705.07115" rel="nofollow noreferrer">Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics</a></p>
<p>The main idea of the paper is to estimate the uncertainty of each task and then automatically down-weight the loss of the more uncertain tasks.</p>
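<p>A minimal PyTorch sketch of that idea (homoscedastic-uncertainty weighting; I omit the paper's 0.5 factors and the regression/classification distinction, and the names are my own):</p>
<pre><code>import torch

class UncertaintyWeightedLoss(torch.nn.Module):
    """Learns one log-variance per task; each loss is scaled by exp(-log_var)
    and log_var itself is added as a regularizer (Kendall et al., CVPR 2018)."""
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = torch.nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        # losses: iterable of scalar task losses, e.g. [pos_loss, rot_loss, vel_loss, bool_loss]
        total = 0.0
        for i, task_loss in enumerate(losses):
            total = total + torch.exp(-self.log_vars[i]) * task_loss + self.log_vars[i]
        return total
</code></pre>
<p>The <code>log_vars</code> parameters are optimized jointly with the network weights, so the weighting adapts as the individual losses evolve at different rates.</p>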
<p>I am a non-native English speaker. I hope you can understand my answer and that it helps you.</p>
AFAIK, there are two ways to define a final loss function here:</p>
<p>one - the naive weighted sum of the losses</p>
<p>two - the defining coefficient for each loss to optimize the final loss.</p>
<p>So, My question is how is better to weigh these losses to obtain the final loss, correctly?</p> | 2022-03-02 03:32:07.653000+00:00 | 2022-03-03 23:58:34.473000+00:00 | 2022-03-03 23:58:34.473000+00:00 | python|optimization|pytorch|loss-function|loss | ['https://arxiv.org/abs/1705.07115'] | 1 |
60,675,258 | <p>Have you tried the <a href="https://www.aclweb.org/anthology/D18-2029.pdf" rel="nofollow noreferrer">Universal Sentence Encoder (USE)</a>, or the <a href="https://arxiv.org/abs/1907.04307" rel="nofollow noreferrer">Multilingual Universal Sentence Encoder</a>? </p>
<p>There's a colab showing how to score sentence pairs for <a href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb" rel="nofollow noreferrer">semantic textual similarity with USE</a> on the Semantic Textual Similarity Benchmark (STS-B) and another for <a href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder.ipynb" rel="nofollow noreferrer">multilingual similarity</a>.</p>
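<p>A minimal sketch of scoring a title/review pair with USE (it assumes TensorFlow 2 and the public TF-Hub module URL; the example strings are taken from the question):</p>
<pre><code>import numpy as np
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

title = ["battery works well with Samsung smart phone"]
review = ["I have an Apple phone and it's not that great."]
e_title = embed(title).numpy()
e_review = embed(review).numpy()

# USE embeddings are approximately unit-norm, so the inner product acts as cosine similarity.
print(np.inner(e_title, e_review)[0, 0])
</code></pre>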
<p>Here's a heatmap of pairwise semantic similarity scores from USE on the <a href="https://ai.googleblog.com/2018/05/advances-in-semantic-textual-similarity.html" rel="nofollow noreferrer">Google AI blog post Advances in Semantic Textual Similarity</a>. The model was trained on a large amount of web data so it should work well for a wide variety of input data.</p>
<p><img src="https://i.stack.imgur.com/bJMue.png?s=512" alt="Pairwise semantic similarity comparison via outputs from TensorFlow Hub Universal Sentence Encoder."> </p> | 2020-03-13 18:00:35.273000+00:00 | 2020-03-30 23:41:10.857000+00:00 | 2020-03-30 23:41:10.857000+00:00 | null | 60,606,705 | <p>I am trying to find sentence similarity through word emebeddings and then applying cosine similarity score. Tried CBOW/Skip Gram methods for embedding but did not solve the problem.</p>
<p>I am doing this for product review data. I have two columns:</p>
<pre><code>SNo Product_Title Customer_Review
1 101.x battery works well I have an Apple phone and it's not that
with Samsung smart phone that great.
2 112.x battery works well I have samsung smart tv and I tell that it's
with Samsung smart phone not wort buying.
3 112.x battery works well This charger works very well with samsung
with Samsung smart phone. phone. It is fast charging.
</code></pre>
<p>The first two reviews are <code>irrelevant</code> as semantic meaning of <code>Product_Title</code> and <code>Customer_Review</code> are completely different.</p>
<p>How can an algorithm find this semantic meaning of sentences and score them.</p>
<p>My Approach:</p>
<ol>
<li><p>Text pre-processing</p></li>
<li><p>Train CBOW/Skip gram using Gensim on my data-set</p></li>
<li><p>Do Sentence level encoding via averaging all word vectors in that sentence </p></li>
<li><p>Take cosine similarity of <code>product_title</code> and <code>reviews</code>.</p></li>
</ol>
<p>Problem: this approach was not able to capture the context of the sentences, and hence the results were very poor.</p>
<p>Approach 2:</p>
<p>I used pre-trained BERT without pre-processing the sentences. The results did not improve either.</p>
<p>1. Is there any other approach that would capture the context/semantics of the sentences?</p>
<p>2.How can we train BERT on our data-set from scratch without using pre-trained model?</p> | 2020-03-09 18:54:32.343000+00:00 | 2020-11-20 13:17:41.233000+00:00 | null | python|nlp | ['https://www.aclweb.org/anthology/D18-2029.pdf', 'https://arxiv.org/abs/1907.04307', 'https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb', 'https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder.ipynb', 'https://ai.googleblog.com/2018/05/advances-in-semantic-textual-similarity.html'] | 5 |
25,996,316 | <p>Note that in the case of a concave polygon the center of the bounding rectangle might be completely outside the polygon. If your polygons might be concave, I'd recommend using the center of the biggest inscribed circle as the "center" of the polygon. You can see a simple enough algorithm <a href="http://arxiv.org/ftp/arxiv/papers/1212/1212.3193.pdf" rel="nofollow">here (p. 4)</a>. If your task is to place a label on the polygon, this will also give the most aesthetically pleasing results (in which case I'd recommend using this method even if your polygons might not be concave).</p> | 2014-09-23 13:24:25.093000+00:00 | 2014-09-23 13:24:25.093000+00:00 | null | null | 3,081,021 | <p>It doesn't need to be 100% correct, it can be the center of the bounding rectangle.</p> | 2010-06-20 21:34:02.150000+00:00 | 2021-09-02 05:05:38.767000+00:00 | 2013-02-17 06:47:54.997000+00:00 | google-maps|polygon|google-maps-api-3 | ['http://arxiv.org/ftp/arxiv/papers/1212/1212.3193.pdf'] | 1
303,088 | <p>Noise on top of the Cosmic Microwave Background spectrum. Of course you must first remove some anisotropy, foreground objects, correlated detector noise, galaxy and local group velocities, polarizations etc. Many <a href="http://arxiv.org/abs/astro-ph/0703806" rel="nofollow noreferrer">pitfalls remain</a>.</p> | 2008-11-19 19:40:35.337000+00:00 | 2008-11-19 19:40:35.337000+00:00 | null | null | 300,854 | <p>Okay, I guess this is entirely subjective and whatnot, but I was thinking about entropy sources for random number generators. It goes that most generators are seeded with the current time, correct? Well, I was curious as to what other sources could be used to generate perfectly valid, random (The loose definition) numbers.</p>
<p>Would using multiple sources (Such as time + current HDD seek time [We're being fantastical here]) together create a "more random" number than a single source? What are the logical limits of the amount of sources? How much is really enough? Is the time chosen simply because it is convenient?</p>
<p>Excuse me if this sort of thing is not allowed, but I'm curious as to the theory behind the sources.</p> | 2008-11-19 02:35:42.173000+00:00 | 2017-08-16 16:17:30.113000+00:00 | null | theory|random|entropy | ['http://arxiv.org/abs/astro-ph/0703806'] | 1 |
50,274,440 | <p>The DQN algorithm you linked to is for a single agent game. You have to change it quite a bit to work with multiple agents. There are <a href="https://arxiv.org/pdf/1605.06676v2.pdf" rel="nofollow noreferrer">multiple</a> <a href="https://arxiv.org/abs/1707.04402" rel="nofollow noreferrer">papers</a> written on the subject. If you want to truly understand what your code is doing, I suggest finding a paper that tries to solve an environment similar to yours and then applying the concepts within that paper to your code.</p> | 2018-05-10 13:47:20.723000+00:00 | 2018-05-10 13:47:20.723000+00:00 | null | null | 50,228,635 | <p>I should make my own environment and apply <strong>dqn</strong> algorithm in a multi-agent environment. </p>
<p>I have <strong>4 agents</strong> . Each state of my environment has <strong>5 variables</strong> <code>state=[p1, p2, p3, p4,p5]</code>, at each time step,we update the different parameters of all states. Action is one of amount: <code>{-2,-1,0,1,2}</code> given the best q-value.</p>
<pre><code> param0,param1,param2,param3,param4=[[0 for x in range(numframe)] for y in range(number_nodes)]
 # at each timestep:
 p4[agent0]=random.randint(0,2)
 p4[agent1]=p4[agent0]+action
 p4[agent2]=p4[agent1]+action
 p4[agent3]=p4[agent2]+action
 # actions are found by a DNN in DQN and can be one of {-2,-1,0,1,2}
param0..5=[[0 for x in range(numframe)] for y in range(number_nodes)]
</code></pre>
<p><code>numframe</code> is the number of samples kept for experience replay, and <code>number_nodes=4</code> is the number of agents.</p>
<p>I have written the following code based on [dqn-keras-code][1].</p>
<p>1- How could I change it to work as <strong>multi-agent</strong>?
2- How should I write my reset? (I should reset each of the parameters to <code>0</code>.)</p>
<p>I wrote some code, but as I am a beginner in DQN and multi-agent RL, I got the following error (I know it also has some problems related to the multi-agent part):</p>
<pre><code> line 156, in <module>
state = env.reset()
TypeError: reset() missing 1 required positional argument: 'self'
</code></pre>
<p>Beyond this error, could you please help me with how to fix my <strong>reset</strong> section and <strong>step</strong> section?</p>
<p>Here is my code:</p>
<pre><code> import random
import numpy as np
import tensorflow as tf
from collections import deque
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
#-----------------------------------------------------------------
global param0,param1,param2,param3,param4,state,next_state,action_space,action_size,w,m, reward,episodes,time_t,state
#--------------------------------------------------------------------------
episodes=2000
number_nodes=5 #one more than number of nodes
timemax=500
action_size=5
state_size=5
action_space=[-2,-1,0,1,2]
m=16 #4*(ltime+ftime)=16
numframe=16
#-------------------------------------------------------------------------
class env:
def __init__(self):
self.action_space=[-2,-1,0,1,2] # X=[-2,2]
self.action_size = len(self.action_space)
self.state = None
return action_space, action_size
def reset(self):
#self.action_space=[0,0,0,0,0]
for ii in range (1,4): #both sides
param1[ii]=0
param2[ii]=0
param3[ii]=0
param4[ii]=0
param0[ii]=0
reward[ii]=0
state[ii]=[param0[ii],param1[ii],param2[ii],param3[ii],param4[ii]]
return state
# def reset(self):
# self.state = self.np_random.uniform(low=-0.05, high=0.05, size=(4,))
# self.steps_beyond_done = None
# return np.array(self.state)
def step(self,action):
state = self.state
param1, param2, param3, param4, param0 = state
param0[0]=random.randint(0,2) #produce a random param0
#relationship between parameteres for refreshing
param0[1]=param0[0]+action
param0[2]=param0[1]+action
param0[3]=param0[2]+action
param0[4]=param0[3]+action
for i in range (1,4):
param1[time_t][i]=param4[time_t][i+1]-param0[i+1]
#action[i]=agent.init(state_size, action_size)
#relationship between parameteres for refreshing
param2[time_t][i]=param0[i]+action
param3[time_t][i]=param2[time_t][i]
param4[time_t][i]=param3[time_t][i]
#param1,param3,param4,param0
next_state[i]=[param1[time_t][i],param2[time_t][i],param3[time_t][i],param4[time_t][i],param0[i]]
cp= [2, 0, 0, 0]
ch= [2, 2, 2, 2]
# reward function
if param1[i]>=0:
reward[i]+=ch[i]*param1[time_t][i]
else:
reward[i]+=cp[i]*param1[time_t][i]
return next_state, reward
#-------------------------------------------------
class DQNAgent:
def __init__(self, state_size, action_size):
self.state_size = state_size
self.action_size = action_size
self.memory = deque(maxlen=2000)
self.gamma = 0.95 # discount rate
self.epsilon = 1.0 # exploration rate
self.epsilon_min = 0.01
self.epsilon_decay = 0.995
self.learning_rate = 0.001
self.model = self._build_model()
def _build_model(self):
# Neural Net for Deep-Q learning Model
model = Sequential()
model.add(Dense(24, input_dim=self.state_size, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(self.action_size, activation='linear'))
model.compile(loss='mse',
optimizer=Adam(lr=self.learning_rate))
return model
def remember(self, state, action, reward, next_state, done):
self.memory.append((state, action, reward, next_state, done))
def act(self, state):
if np.random.rand() <= self.epsilon:
return random.randrange(self.action_size)
act_values = self.model.predict(state)
return np.argmax(act_values[0]) # returns action
def replay(self, batch_size):
minibatch = random.sample(self.memory, batch_size)
for state, action, reward, next_state, done in minibatch:
target = reward
if not done:
target = (reward + self.gamma *
np.amax(self.model.predict(next_state)[0]))
target_f = self.model.predict(state)
target_f[0][action] = target
self.model.fit(state, target_f, epochs=1, verbose=0)
if self.epsilon > self.epsilon_min:
self.epsilon *= self.epsilon_decay
def load(self, name):
self.model.load_weights(name)
def save(self, name):
self.model.save_weights(name)
if __name__ == "__main__":
#env = gym.make('CartPole-v1')
#state_size = env.observation_space.shape[0]
#action_size = env.action_space.n
state_size=4
action_size=5
agent = DQNAgent(state_size, action_size)
# agent.load("./save/cartpole-dqn.h5")
done = False
batch_size = 32
for e in range(episodes):
state = env.reset()
state = np.reshape(state, [1, state_size])
for time in range(500):
# env.render()
action = agent.act(state)
next_state, reward, done, _ = env.step(action)
reward = reward if not done else -10
next_state = np.reshape(next_state, [1, state_size])
agent.remember(state, action, reward, next_state, done)
state = next_state
if done:
print("episode: {}/{}, score: {}, e: {:.2}"
.format(e, EPISparam2DES, time, agent.epsilon))
break
if len(agent.memory) > batch_size:
agent.replay(batch_size)
# if e % 10 == 0:
# agent.save("./save/cartpole-dqn.h5")
agent = DQNAgent(state_size, action_size)
# agent.load("./save/cartpole-dqn.h5")
    # [1]: https://github.com/keon/deep-q-learning/blob/master/dqn.py
</code></pre> | 2018-05-08 07:53:44.203000+00:00 | 2018-05-10 13:47:20.723000+00:00 | null | deep-learning|reinforcement-learning|q-learning|multi-agent | ['https://arxiv.org/pdf/1605.06676v2.pdf', 'https://arxiv.org/abs/1707.04402'] | 2 |
55,038,921 | <p>With <code>gensim</code>, there is no way around the initial load time for the model. You can use a smaller model to reduce load times, but large model files will inevitably require a few seconds to load when your application is being initialized. However, if you have designed your application correctly, this should be a cost you pay only once (when your program first starts). </p>
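<p>For example, a rough sketch of paying that cost only once by caching the loaded model (this reuses the <code>KeyedVectors.load</code> call and model path from your own script):</p>
<pre class="lang-python prettyprint-override"><code>from gensim.models.keyedvectors import KeyedVectors

_MODEL = None  # loaded lazily once, then reused for the lifetime of the process

def get_model():
    global _MODEL
    if _MODEL is None:
        _MODEL = KeyedVectors.load('w2vec_wiki_id_case_doc', mmap='r')
    return _MODEL
</code></pre>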
<p>If it's really important to save those few seconds at startup, you might consider switching from <code>gensim</code>'s word2vec implementation to <a href="https://github.com/plasticityai/magnitude" rel="nofollow noreferrer"><code>magnitude</code></a>, which has an initial load time of less than one second, as well as speed/usability improvements in many other areas (many operations run hundreds to thousands of times faster than in gensim). If you're interested in reading a bit about the background behind magnitude, you can read the academic paper by its authors (free to read on arXiv):</p>
<p><a href="https://arxiv.org/abs/1810.11190" rel="nofollow noreferrer">Magnitude: A Fast, Efficient Universal Vector Embedding Utility Package</a></p> | 2019-03-07 08:10:40.547000+00:00 | 2019-03-07 08:10:40.547000+00:00 | null | null | 55,038,641 | <h3>Task</h3>
<p>I want to make a class for the <code>word2vec</code> model and call it, so that the process will be faster and more efficient.</p>
<h3>Script</h3>
<pre><code>from docsim import DocSim
from gensim.models.keyedvectors import KeyedVectors
word2vec_model = 'w2vec_wiki_id_case_doc'
model = KeyedVectors.load(word2vec_model, mmap='r')
ds = DocSim(model)
</code></pre>
<h3>Problem</h3>
<p>With my code, the process takes a long time to read the <code>word2vec</code> model. How can I solve the problem? </p> | 2019-03-07 07:53:06.583000+00:00 | 2019-03-07 14:51:53.420000+00:00 | 2019-03-07 08:20:19.457000+00:00 | python|class|word2vec | ['https://github.com/plasticityai/magnitude', 'https://arxiv.org/abs/1810.11190'] | 2 |
43,670,914 | <p>It sounds like a <a href="https://en.wikipedia.org/wiki/Placement_(EDA)" rel="nofollow noreferrer">placement</a> problem to me, so you should try investigating approaches to solving it.</p>
<p>I'll use a <a href="https://en.wikipedia.org/wiki/Satisfiability" rel="nofollow noreferrer">satisfiability</a> framework to discuss uncertainty and related matters. You can define <strong>a 2D grid</strong>; each node can hold <em>features</em> - a hill, a river, a mountain. Features could be generic or special (Mt. Foobar). Some features could be predefined - you can place a mountain at a specified node, but it's up to you. Cities are defined as features too.</p>
<p>Now, the interesting part. You can define a set of constraints <strong>over</strong> such a grid. So, for each node in the grid you can define something like this:</p>
<blockquote>
<p>IF city A takes place at node THEN a mountain must be at an adjacent node</p>
<p>IF given node contains a mountain THEN city A must be at any of adjacent nodes</p>
<p>IF city A is present at given node THEN city B can't be at any of adjacent nodes</p>
</blockquote>
<p>You can introduce a lot of different constraints in this way, even ones that count objects:</p>
<blockquote>
<p>IF a node has a river THEN no more than 2 cities could be at adjacent nodes</p>
</blockquote>
<p>To do so, you can rely on <a href="http://minisat.se/downloads/MiniSat+.pdf" rel="nofollow noreferrer">pseudo-boolean constraints</a>. You can even use them for optimizing solutions, by introducing counters for specific configurations and requiring that there must be <em>more than</em> or <em>less than</em> a given number of them.</p>
<p>To solve the resulting problem you can use any of the available SAT solvers (e.g. <a href="http://www.labri.fr/perso/lsimon/glucose/" rel="nofollow noreferrer">Glucose</a>).</p>
<p>You can generate several <em>different</em> solutions by using <a href="https://arxiv.org/abs/1510.00523" rel="nofollow noreferrer">AllSAT</a>; there are solvers for that too.</p>
<p>If a purely Boolean formulation is too complex, you can try <a href="https://en.wikipedia.org/wiki/Satisfiability_modulo_theories" rel="nofollow noreferrer">SMT</a>.</p>
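<p>For illustration, here is a tiny sketch of one such rule encoded with the Z3 SMT solver's Python bindings (the node and feature names are made up; a plain SAT solver with a DIMACS encoding would work the same way):</p>
<pre class="lang-python prettyprint-override"><code>from z3 import Bools, Solver, Implies, Or, sat

# "IF city A is at node 1 THEN a mountain must be at an adjacent node (0 or 2)"
cityA_1, mountain_0, mountain_2 = Bools('cityA_1 mountain_0 mountain_2')

s = Solver()
s.add(Implies(cityA_1, Or(mountain_0, mountain_2)))
s.add(cityA_1)  # suppose we decide to place city A at node 1

if s.check() == sat:
    print(s.model())  # one concrete placement satisfying all added rules
</code></pre>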
<p>Details of how to implement such systems are outside the question's scope; it's too broad and requires plenty of preliminary study.</p>
<p>Hope that my answer is helpful.</p>
<p>EDIT</p>
<p>A SAT solver returns the <em>first</em> correct solution it finds, and it'll be random in this sense.</p> | 2017-04-28 02:19:53.320000+00:00 | 2017-04-28 02:40:57.210000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 43,662,477 | <p>I know there's a high chance of my hitting the XY problem here, so the first chunk of this is about the more general situation.</p>
<hr />
<h2>The Problem</h2>
<p>I have a set of data points containing abstract geographic-feature information, but no actual locations (absolute or relative). For the sake of example, let's call it the following list of cities that describes the local terrain, but with no coordinates or relative positioning:</p>
<ul>
<li>City A is on an island and on a hill.</li>
<li>City B is on the coast and near a river.</li>
<li>City C is on a mountain and near a river</li>
<li>City D is on an island and on a hill.</li>
<li>City E is on an island and on plains.</li>
</ul>
<p>From this, I want to write a program that can provide relative positions for these locations. For the above list, it might derive that A and D are near each other, E might be near them but is less likely, and B and C might be near each other. You can think of it as a graph with all the edges rubbed out, and the program tries to write them back in based on the properties of the nodes. Given that, it could then come up with some arbitrary coordinates for each city, from which a map might be drawn.</p>
<p>I don't need a unique solution – my end-goal here is plausible maps of unmapped-but-described fictional places.</p>
<p>This seems to me to be the sort of problem that's a good fit for e.g. Prolog or similar logic-engines, since it's basically just constraint resolution. However, I can't quite work it out myself. The issues I'm hitting at the moment relate to the way that, for example, two cities could have similar local features without being near the same instance of that larger feature. It's the difference between "This city is near some unspecified mountain" and "This city is near Mt. Foobar." The latter provides a strong constraint (two cities both near Mt. Foobar are near each other), but the latter only provides a guideline (two cities both near mountains are more likely to be near each other than one city near a mountain and another city not near a mountain).</p>
<hr />
<h2>The Question</h2>
<p>How does one define (and provide solutions based on) probabilities, rather than absolutes, in Prolog or other logic/rules engines?</p> | 2017-04-27 15:51:26.880000+00:00 | 2017-04-29 13:13:31.990000+00:00 | 2020-06-20 09:12:55.060000+00:00 | prolog|constraint-programming|logic-programming | ['https://en.wikipedia.org/wiki/Placement_(EDA)', 'https://en.wikipedia.org/wiki/Satisfiability', 'http://minisat.se/downloads/MiniSat+.pdf', 'http://www.labri.fr/perso/lsimon/glucose/', 'https://arxiv.org/abs/1510.00523', 'https://en.wikipedia.org/wiki/Satisfiability_modulo_theories'] | 6 |
59,042,681 | <p>Reading your post, these are the things I would suggest you fix or explore:</p>
<ul>
<li><p>42% is not an impressive accuracy for the task you have at hand. Reconsider the way you are <strong>cross-validating</strong>, e.g. how you split between the validation, test and training datasets.</p></li>
<li><p>Your dataset seems very limited. Your task is to identify the speaker. A single episode might not be enough data for this task. </p></li>
<li><p>You might want to consider deep neural network libraries such as Keras and TensorFlow. Convolutions are something you can apply directly to the MFCC features (see the sketch after this list).</p></li>
<li><p>If you decide to use TensorFlow or Keras, consider triplet loss, where you present a positive and a negative example.</p></li>
<li><p>Consider reading the current state of the art for your task: <a href="https://github.com/grausof/keras-sincnet" rel="nofollow noreferrer">https://github.com/grausof/keras-sincnet</a></p></li>
<li><p>Consider reading <a href="https://arxiv.org/abs/1503.03832" rel="nofollow noreferrer">https://arxiv.org/abs/1503.03832</a> and adopting it for speech recognition.</p></li>
</ul>
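<p>For illustration, here is a minimal Keras sketch of applying convolutions directly to the MFCC features (the input shape and number of speakers are assumptions - adjust them to your data):</p>
<pre class="lang-python prettyprint-override"><code>import tensorflow as tf

n_frames, n_mfcc, n_speakers = 100, 13, 6  # assumed shapes, not taken from your data

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_frames, n_mfcc, 1)),  # one MFCC "image" per utterance
    tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), padding='same', activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(n_speakers, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
</code></pre>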
<p><strong>The easiest thing you can do to improve your results is adding CNN layers to extract features from the MFCC.</strong></p> | 2019-11-26 02:06:52.980000+00:00 | 2019-11-26 02:06:52.980000+00:00 | null | null | 59,040,393 | <p>I'm working on a speaker recognition Neural Network.</p>
<p>What I am doing is taking wav files [of the Big Bang Theory's first episode :-)], converting them to MFCC coefficients, and feeding those as input to an open-source neural network API (MLPClassifier). As output I define a unique vector for each speaker (let's say: [1,0,0,0] - Sheldon; [0,1,0,0] - Penny; etc.). I take 50 random samples for testing and the others for fitting (training).</p>
<p>This is my code. At the beginning I got roughly random accuracy for the NN, but after some help from an amazing guy I improved it to ~42%. I want more :) about 70%:</p>
<pre><code>from sklearn.neural_network import MLPClassifier
import python_speech_features
import scipy.io.wavfile as wav
import numpy as np
from os import listdir
from os.path import isfile, join
from random import shuffle
import matplotlib.pyplot as plt
from tqdm import tqdm
from random import randint
import random
winner = [] # this array count how much Bingo we had when we test the NN
random_winner = []
win_len = 0.04 # in seconds
step = win_len / 2
nfft = 2048
for TestNum in tqdm(range(20)): # in every round we build NN with X,Y that out of them we check 50 after we build the NN
X = []
Y = []
onlyfiles = [f for f in listdir("FinalAudios/") if isfile(join("FinalAudios/", f))] # Files in dir
names = [] # names of the speakers
for file in onlyfiles: # for each wav sound
# UNESSECERY TO UNDERSTAND THE CODE
if " " not in file.split("_")[0]:
names.append(file.split("_")[0])
else:
names.append(file.split("_")[0].split(" ")[0])
only_speakers = [] + names
#print only_speakers
names = list(dict.fromkeys(names)) # names of speakers
print names
vector_names = [] # vector for each name
i = 0
vector_for_each_name = [0] * len(names)
for name in names:
vector_for_each_name[i] += 1
vector_names.append(np.array(vector_for_each_name))
vector_for_each_name[i] -= 1
i += 1
for f in onlyfiles:
if " " not in f.split("_")[0]:
f_speaker = f.split("_")[0]
else:
f_speaker = f.split("_")[0].split(" ")[0]
fs, audio = wav.read("FinalAudios/" + f) # read the file
try:
mfcc_feat = python_speech_features.mfcc(audio, samplerate=fs, winlen=win_len,
winstep=step, nfft=nfft, appendEnergy=False)
flat_list = [item for sublist in mfcc_feat for item in sublist]
X.append(np.array(flat_list))
Y.append(np.array(vector_names[names.index(f_speaker)]))
except IndexError:
pass
Z = list(zip(X, Y))
shuffle(Z) # WE SHUFFLE X,Y TO PERFORM RANDOM ON THE TEST LEVEL
X, Y = zip(*Z)
X = list(X)
Y = list(Y)
X = np.asarray(X)
Y = np.asarray(Y)
Y_test = Y[:50] # CHOOSE 50 FOR TEST, OTHERS FOR TRAIN
X_test = X[:50]
X = X[50:]
Y = Y[50:]
print len(X)
clf = MLPClassifier(solver='lbfgs', alpha=3e-2, hidden_layer_sizes=(50, 20), random_state=2) # create the NN
clf.fit(X, Y) # Train it
print list(clf.predict_proba([X[0]])[0])
print list(Y_test[0])
for sample in range(len(X_test)): # add 1 to winner array if we correct and 0 if not, than in the end it plot it
arr = list(clf.predict([X_test[sample]])[0])
if arr.index(max(arr)) == list(Y_test[sample]).index(1):
winner.append(1)
else:
winner.append(0)
if only_speakers[randint(0, len(only_speakers) - 1)] == only_speakers[randint(0, len(only_speakers) - 1)]:
random_winner.append(1)
else:
random_winner.append(0)
# plot winner
plot_x = []
plot_y = []
for i in range(1, len(winner)):
plot_y.append(sum(winner[0:i])*1.0/len(winner[0:i]))
plot_x.append(i)
plot_random_x = []
plot_random_y = []
for i in range(1, len(random_winner)):
plot_random_y.append(sum(random_winner[0:i])*1.0/len(random_winner[0:i]))
plot_random_x.append(i)
plt.plot(plot_x, plot_y, 'r', label='machine learning')
plt.plot(plot_random_x, plot_random_y, 'b', label='random')
plt.xlabel('Number Of Samples')
# naming the y axis
plt.ylabel('Success Rate')
# giving a title to my graph
plt.title('Success Rate : Random Vs ML!')
# function to show the plot
plt.show()
</code></pre>
<p>This is my zip file that contains the code and the audio file : <a href="https://ufile.io/eggjm1gw" rel="nofollow noreferrer">https://ufile.io/eggjm1gw</a></p>
<p><strong>Does somebody have an idea how I can improve my accuracy?</strong></p>
<p><strong><em>Edit :</em></strong></p>
<p>I improved my data set and used a convolutional model, and got 60% accuracy, which is OK but still not good enough.</p>
<pre><code>import python_speech_features
import scipy.io.wavfile as wav
import numpy as np
from os import listdir
import os
import shutil
from os.path import isfile, join
from random import shuffle
from matplotlib import pyplot
from tqdm import tqdm
from random import randint
import tensorflow as tf
from ast import literal_eval as str2arr
from tempfile import TemporaryFile
#win_len = 0.04 # in seconds
#step = win_len / 2
#nfft = 2048
win_len = 0.05 # in seconds
step = win_len
nfft = 16384
results = []
outfile_x = None
outfile_y = None
winner = []
for TestNum in tqdm(range(40)): # We check it several times
if not outfile_x: # if path not exist we create it
X = [] # inputs
Y = [] # outputs
onlyfiles = [f for f in listdir("FinalAudios") if isfile(join("FinalAudios", f))] # Files in dir
names = [] # names of the speakers
for file in onlyfiles: # for each wav sound
# UNESSECERY TO UNDERSTAND THE CODE
if " " not in file.split("_")[0]:
names.append(file.split("_")[0])
else:
names.append(file.split("_")[0].split(" ")[0])
only_speakers = [] + names
namesWithoutDuplicate = list(dict.fromkeys(names))
namesWithoutDuplicateCopy = namesWithoutDuplicate[:]
for name in namesWithoutDuplicateCopy: # we remove low samples files
if names.count(name) < 107:
namesWithoutDuplicate.remove(name)
names = namesWithoutDuplicate
print(names) # print it
vector_names = [] # output for each name
i = 0
for name in names:
vector_for_each_name = i
vector_names.append(np.array(vector_for_each_name))
i += 1
for f in onlyfiles: # for all the files
if " " not in f.split("_")[0]:
f_speaker = f.split("_")[0]
else:
f_speaker = f.split("_")[0].split(" ")[0]
if f_speaker in namesWithoutDuplicate:
fs, audio = wav.read("FinalAudios\\" + f) # read the file
try:
# compute MFCC
mfcc_feat = python_speech_features.mfcc(audio, samplerate=fs, winlen=win_len, winstep=step, nfft=nfft, appendEnergy=False)
#flat_list = [item for sublist in mfcc_feat for item in sublist]
# Create output + inputs
for i in mfcc_feat:
X.append(np.array(i))
Y.append(np.array(vector_names[names.index(f_speaker)]))
except IndexError:
pass
else:
if not os.path.exists("TooLowSamples"): # if path not exist we create it
os.makedirs("TooLowSamples")
shutil.move("FinalAudios\\" + f, "TooLowSamples\\" + f)
outfile_x = TemporaryFile()
np.save(outfile_x, X)
outfile_y = TemporaryFile()
np.save(outfile_y, Y)
# ------------------- RANDOMIZATION, UNNECESSARY TO UNDERSTAND THE CODE ------------------- #
else:
outfile_x.seek(0)
X = np.load(outfile_x)
outfile_y.seek(0)
Y = np.load(outfile_y)
Z = list(zip(X, Y))
shuffle(Z) # WE SHUFFLE X,Y TO PERFORM RANDOM ON THE TEST LEVEL
X, Y = zip(*Z)
X = list(X)
Y = list(Y)
lenX = len(X)
# ------------------- RANDOMIZATION, UNNECESSARY TO UNDERSTAND THE CODE ------------------- #
y_test = np.asarray(Y[:4000]) # CHOOSE 100 FOR TEST, OTHERS FOR TRAIN
x_test = np.asarray(X[:4000]) # CHOOSE 100 FOR TEST, OTHERS FOR TRAIN
x_train = np.asarray(X[4000:]) # CHOOSE 100 FOR TEST, OTHERS FOR TRAIN
y_train = np.asarray(Y[4000:]) # CHOOSE 100 FOR TEST, OTHERS FOR TRAIN
x_val = x_train[-4000:] # FROM THE TRAIN CHOOSE 100 FOR VALIDATION
y_val = y_train[-4000:] # FROM THE TRAIN CHOOSE 100 FOR VALIDATION
x_train = x_train[:-4000] # FROM THE TRAIN CHOOSE 100 FOR VALIDATION
y_train = y_train[:-4000] # FROM THE TRAIN CHOOSE 100 FOR VALIDATION
x_train = x_train.reshape(np.append(x_train.shape, (1, 1))) # RESHAPE FOR INPUT
x_test = x_test.reshape(np.append(x_test.shape, (1, 1))) # RESHAPE FOR INPUT
x_val = x_val.reshape(np.append(x_val.shape, (1, 1))) # RESHAPE FOR INPUT
features_shape = x_val.shape
# -------------- OUR TENSOR FLOW NEURAL NETWORK MODEL -------------- #
model = tf.keras.models.Sequential([
tf.keras.layers.Input(name='inputs', shape=(13, 1, 1), dtype='float32'),
tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same', strides=1, name='block1_conv', input_shape=(13, 1, 1)),
tf.keras.layers.MaxPooling2D((3, 3), strides=(2,2), padding='same', name='block1_pool'),
tf.keras.layers.BatchNormalization(name='block1_norm'),
tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same', strides=1, name='block2_conv',
input_shape=(13, 1, 1)),
tf.keras.layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same', name='block2_pool'),
tf.keras.layers.BatchNormalization(name='block2_norm'),
tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same', strides=1, name='block3_conv',
input_shape=(13, 1, 1)),
tf.keras.layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same', name='block3_pool'),
tf.keras.layers.BatchNormalization(name='block3_norm'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu', name='dense'),
tf.keras.layers.BatchNormalization(name='dense_norm'),
tf.keras.layers.Dropout(0.2, name='dropout'),
tf.keras.layers.Dense(10, activation='softmax', name='pred')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# -------------- OUR TENSOR FLOW NEURAL NETWORK MODEL -------------- #
print("fitting")
history = model.fit(x_train, y_train, epochs=15, validation_data=(x_val, y_val))
print("testing")
results.append(model.evaluate(x_test, y_test)[1])
print(results)
print(sum(results)/len(results))
for i in range(10000):
f_1 = only_speakers[randint(0, len(only_speakers) - 1)]
f_2 = only_speakers[randint(0, len(only_speakers) - 1)]
if " " not in f_1.split("_")[0]:
f_speaker_1 = f_1.split("_")[0]
else:
f_speaker_1 =f_1.split("_")[0].split(" ")[0]
if " " not in f_2.split("_")[0]:
f_speaker_2 = f_2.split("_")[0]
else:
f_speaker_2 =f_2.split("_")[0].split(" ")[0]
if f_speaker_2 == f_speaker_1:
winner.append(1)
else:
winner.append(0)
print(sum(winner)/len(winner))
#]
# if onlyfiles[randint(len(onlyfiles) - 1)] == onlyfiles[randint(len(onlyfiles) - 1)]
#pyplot.plot(history.history['loss'], label='train')
    #pyplot.plot(history.history['val_loss'], label='test')
#pyplot.legend()
#pyplot.show()
</code></pre> | 2019-11-25 21:31:43.920000+00:00 | 2019-12-14 21:51:39.260000+00:00 | 2019-12-14 21:51:39.260000+00:00 | machine-learning|neural-network|identity|voice-recognition|mfcc | ['https://github.com/grausof/keras-sincnet', 'https://arxiv.org/abs/1503.03832'] | 2 |
38,030,527 | <p>One may opt to use Machine-Learning tools to build a <strong><code>learner</code></strong> to either</p>
<ul>
<li>both <strong>classify</strong> of what kind the said "asset price movement" will be<br>and<br> serve also statistical probability measures for such a <code>Classifier</code> prediction</li>
<li>both <strong>regress</strong> a real target value, to which the asset price will move<br>and<br>serve also statistical probability measures for such a <code>Regressor</code> prediction</li>
</ul>
<p><strong><code>A1:</code></strong> <sup> ( while StackOverflow <strong>strongly discourages</strong> users to ask about an opinion about a tool or a particular framework ) </sup> there would be not much damages or extra time to be spent, if one performs academia papers research and there would be quite a remarkable list of repeatedly used tools, used for ML in the context of academic R&D. For a reason, there would not be a surprise to meet <strong><code>scikit-learn</code></strong> ML-classes a lot, some other papers may work with <code>R</code>-based quantitative finance / statistical libraries. The tools, however, with all due respect, are not the core to answer all the doubts and inital confusion present in a mix of your questions. The subject confusion is.</p>
<p><strong><code>A2:</code></strong> No, it would not. Well, unless you beat all the advanced quantitative research and happen to prove that the Market exhibits a random behaviour ( which it is not and for which it would be waste of time to re-cite remarkable research published about why it is not indeed a random process ).</p>
<p><strong><code>A3:</code></strong> Do not try to jump on any wagon just because of it's attractive Tag or "contemporary popularity" in marketing minded texts. With all due respect, understanding HMM is outside of your sight while you now appear to move just to the nearest horizons to first understand what to look for.</p>
<p><strong><code>A4:</code></strong> This is a nice proof of a missed target. Your question shows in this particular point better than in others, how small amount of own research efforts were put into covering the problem-domain and acquiring at least some elementary knowledge before typing the last two questions.</p>
<p>StackOverflow encourages users to ask high quality questions, so do not hesitate to re-edit your post to add some polishing efforts to this subject.</p>
<hr />
<p>If in a need for an inspiration, try to review a nice and a powerful approach for a fast Machine Learning process, where both <strong>Classification</strong> and <strong>Regression</strong> tasks obtain also probability estimates for each predicted target value.</p>
<p>To have some idea about highly performant ML-predictors, these typically operate on much more than a set of 5 variables <sub> ( called in the ML-domain <strong>"features"</strong> ) </sub>. ( Think rather about some large <strong>hundreds</strong> to small <strong>thousands</strong> features, typically heavily non-linear transformations from the original TimeSeries' data ).</p>
<p><strong>There you go, if indeed willing to master ML for algorithmic trading.</strong></p>
<hr />
<h2>May like to read about a state-of-art research in this direction:</h2>
<blockquote>
<p><strong><code>[1]</code></strong> Mondrian Forests: Efficient Online Random Forests<br>
<a href="https://arxiv.org/pdf/1406.2673.pdf" rel="nofollow noreferrer"> >>> arXiv:1406.2673v2 [stat.ML] 16 Feb 2015 </a><br>
<strong><code>[2]</code></strong> Mondrian Forests for Large-Scale Regression <strong>when Uncertainty Matters</strong><br>
<a href="https://arxiv.org/pdf/1506.03805.pdf" rel="nofollow noreferrer">>>> arXiv:1506.03805v4 [stat.ML] 27 May 2016 >>></a></p>
</blockquote>
<hr />
<blockquote>
<p>May also enjoy other posts on subject: <a href="https://stackoverflow.com/search?tab=votes&q=user%3A3666197%20%5Balgorithmic-trading%5D"><strong><code>>>> StackOverflow Algorithmic-Trading >>></code></strong></a></p>
</blockquote> | 2016-06-25 16:26:51.150000+00:00 | 2022-05-15 09:52:17.233000+00:00 | 2022-05-15 09:52:17.233000+00:00 | null | 38,027,086 | <p>I am seeking a method to allow me to analyse/search for patterns in asset price movements using 5 variables that move and change with price (from historical data).</p>
<p>I'd like to be able to assign a <strong>probability to a forecasted price move</strong> when for example, <strong><code>var1</code></strong> and <strong><code>var2</code></strong> do this and <strong><code>var3..5</code></strong> do this, then price should do this with <strong><code>x</code></strong> amount of certainty.</p>
<p><strong><code>Q1:</code></strong> Could someone point me in the right direction as to what framework / technique can help me achieve this?</p>
<p><strong><code>Q2:</code></strong> Would this be a multivariate continuous random series analysis?</p>
<p><strong><code>Q3:</code></strong> A Hidden Markov modelling?</p>
<p><strong><code>Q4:</code></strong> Or perhaps is it a data-mining problem?</p>
<p>I'm looking for <strong>what</strong> rather then <strong>how</strong>.</p> | 2016-06-25 09:42:50.720000+00:00 | 2022-05-15 09:52:17.233000+00:00 | 2016-06-25 15:49:32.120000+00:00 | statistics|time-series|probability|hidden-markov-models|algorithmic-trading | ['https://arxiv.org/pdf/1406.2673.pdf', 'https://arxiv.org/pdf/1506.03805.pdf', 'https://stackoverflow.com/search?tab=votes&q=user%3A3666197%20%5Balgorithmic-trading%5D'] | 3 |
73,426,105 | <p>You already solved your problem. That's great. I would like to point out a different approach and address a few questions.</p>
<blockquote>
<p>Will my model size limit essentially be sum of all available GPU
memory? (35GB?)</p>
</blockquote>
<p>This depends on the training technique you use. Standard data parallelism replicates the model, gradients and optimiser states to each of the GPUs. <strong>So each GPU must have enough memory to hold all of these.</strong> The data is split across the GPUs. However, the bottleneck is usually the optimiser states and the model, not the data.</p>
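<p>In code, that baseline is essentially a one-line wrapper around your model (a rough PyTorch sketch; <code>MyModel</code> is a placeholder):</p>
<pre class="lang-python prettyprint-override"><code>import os
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# launch with: torchrun --nproc_per_node=NUM_GPUS train.py  (one process per GPU)
dist.init_process_group("nccl")
local_rank = int(os.environ["LOCAL_RANK"])

model = MyModel().to(local_rank)             # placeholder for your usual model
model = DDP(model, device_ids=[local_rank])
# every process still keeps a full copy of the parameters, gradients and
# optimizer state; only the data batches are sharded across GPUs
</code></pre>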
<p>The state-of-the-art approach in training is <strong>ZeRO</strong>. Not only the dataset, but also the model parameters, the gradients and the optimizer states are split across the GPUs. This allows you to train huge models without hitting OOM. See the nice illustration below from the paper. The baseline is the standard case I mentioned; they gradually split optimizer states, gradients and model parameters across the GPUs and compare the memory usage per GPU.</p>
<p><a href="https://i.stack.imgur.com/Xlnmm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xlnmm.png" alt="enter image description here" /></a></p>
<p>The authors of the paper created a library called DeepSpeed, and it is very easy to integrate with Hugging Face. With it I was able to increase my model size from 260 million to 11 billion parameters :)</p>
<p>If you want to understand in detail how it works, here is the paper:
<a href="https://arxiv.org/pdf/1910.02054.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1910.02054.pdf</a></p>
<p>More information on integrating DeepSpeed with Huggingface can be found here:
<a href="https://huggingface.co/docs/transformers/main_classes/deepspeed" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/main_classes/deepspeed</a></p>
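<p>For illustration, a rough sketch of enabling ZeRO through the Trainer integration (the config values are untuned assumptions; you can also point <code>deepspeed</code> at a JSON file instead of a dict):</p>
<pre class="lang-python prettyprint-override"><code>from transformers import TrainingArguments

ds_config = {
    "zero_optimization": {"stage": 2},
    "train_micro_batch_size_per_gpu": "auto",   # "auto" lets the Trainer fill these in
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    deepspeed=ds_config,
)
# build your Trainer as usual, then launch with the deepspeed launcher,
# e.g.: deepspeed train.py  (one process per GPU)
</code></pre>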
<p>PS: There is also the model parallelism technique, in which each GPU trains different layers of the model, but it has lost popularity and is not being actively used.</p> | 2022-08-20 10:58:59.530000+00:00 | 2022-08-20 10:58:59.530000+00:00 | null | null | 60,904,170 | <p>I want to fine-tune a GPT-2 model using Huggingface’s Transformers, preferably the medium model but large if possible. Currently, I have an RTX 2080 Ti with 11GB of memory and I can train the small model just fine.</p>
<p>My question is: will I run into any issues if I added an old Tesla K80 (24GB) to my machine and distributed the training? I cannot find information about using different capacity GPUs during training and issues I could run into.</p>
<p>Will my model size limit essentially be sum of all available GPU memory? (35GB?)</p>
<p>I’m not interested in doing this in AWS.</p> | 2020-03-28 17:16:37.513000+00:00 | 2022-08-20 10:58:59.530000+00:00 | null | machine-learning|huggingface-transformers | ['https://i.stack.imgur.com/Xlnmm.png', 'https://arxiv.org/pdf/1910.02054.pdf', 'https://huggingface.co/docs/transformers/main_classes/deepspeed'] | 3 |