a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
63,817,479 | <p>Here's a slightly more elegant solution: we're going to override how spacy calculates document vectors under-the-hood, which will propagate this customization to any downstream pipeline components like the TextCategorizer or whatever.</p>
<p>This is based on the documentation found here: <a href="https://spacy.io/usage/processing-pipelines#custom-components-user-hooks" rel="nofollow noreferrer">https://spacy.io/usage/processing-pipelines#custom-components-user-hooks</a></p>
<p>This solution was designed around loading pre-trained embeddings. In lieu of referencing a list of stopwords directly, I'm just going to assume that anything that's out-of-vocab for my loaded embeddings is a token I want to ignore in my document vector calculation.</p>
<pre><code>import numpy as np

class FancyDocumentVectors(object):
    def __call__(self, doc):
        doc.user_hooks["vector"] = self.vector
        return doc

    def vector(self, doc):
        """
        Constrain attention to non-zero vectors.
        Returns concatenation of mean and max pooling.
        """
        # This is the part where we filter out stop words
        # (really any token for which we couldn't calculate a vector representation).
        # If you'd rather invoke a stopword list, change the line below to something like:
        # doc_vecs = np.array([t.vector for t in doc if t.text not in STOPWORDS])
        doc_vecs = np.array([t.vector for t in doc if t.has_vector])
        if sum(doc_vecs.shape) == 0:
            doc_vecs = np.array([doc[0].vector])
        mean_pooled = doc_vecs.mean(axis=0)

        # Because I'm fancy, I'm going to augment my custom document vector with
        # some additional information. For a demonstration of the value of this
        # approach, reference the SWEM paper: https://arxiv.org/abs/1805.09843
        max_pooled = doc_vecs.max(axis=0)
        doc_vec = np.hstack([mean_pooled, max_pooled])
        return doc_vec
        # If you're not into it, just return mean_pooled instead.
        # return mean_pooled

nlp.add_pipe(FancyDocumentVectors())
</code></pre>
<p>Here's a concrete example using vectors trained on stackoverflow!</p>
<p>First, we load our pretrained embeddings into an empty language model.</p>
<pre><code>import spacy
from gensim.models.keyedvectors import KeyedVectors
# https://github.com/vefstathiou/SO_word2vec
word_vect = KeyedVectors.load_word2vec_format("SO_vectors_200.bin", binary=True)
nlp = spacy.blank('en')
nlp.vocab.vectors = spacy.vocab.Vectors(data=word_vect.syn0, keys=word_vect.index2word)
</code></pre>
<p>Default behavior before changing anything:</p>
<pre><code>doc = nlp("This is a question about spacy.")
for token in doc:
    print(token, token.vector_norm, token.vector.sum())
print(doc.vector_norm, doc.vector.sum())
# This 0.0 0.0
# is 0.0 0.0
# a 0.0 0.0
# question 25.44337 -41.958717
# about 0.0 0.0
# spacy 13.833485 -6.3489656
# . 0.0 0.0
# 4.353660220883036 -6.901098
</code></pre>
<p>Modified behavior after overriding document vector calculation:</p>
<pre><code># MAGIC!
nlp.add_pipe(FancyDocumentVectors())
doc = nlp("This is a question about spacy.")
for token in doc:
    print(token, token.vector_norm, token.vector.sum())
print(doc.vector_norm, doc.vector.sum())
# This 0.0 0.0
# is 0.0 0.0
# a 0.0 0.0
# question 25.44337 -41.958717
# about 0.0 0.0
# spacy 13.833485 -6.3489656
# . 0.0 0.0
# 24.601780061609414 109.74769
</code></pre> | 2020-09-09 18:24:46.960000+00:00 | 2020-09-09 18:24:46.960000+00:00 | null | null | 52,807,080 | <p>So right now I have a really simple program that will take a sentence and find the sentence in a given book that is most semantically similar and prints out that sentence along with the next few sentences.</p>
<pre><code>import spacy
nlp = spacy.load('en_core_web_lg')
#load alice in wonderland
from gutenberg.acquire import load_etext
from gutenberg.cleanup import strip_headers
text = strip_headers(load_etext(11)).strip()
alice = nlp(text)
sentences = list(alice.sents)
mysent = nlp(unicode("example sentence, could be whatever"))
best_match = None
best_similarity_value = 0
for sent in sentences:
    similarity = sent.similarity(mysent)
    if similarity > best_similarity_value:
        best_similarity_value = similarity
        best_match = sent
print sentences[sentences.index(best_match):sentences.index(best_match)+10]
</code></pre>
<p>I want to get better results by telling SpaCy to ignore the stop words when doing this process, but I don't know the best way to go about this. Like I could create a new blank list and append each word that isn't a stop word to the list</p>
<pre><code>for sentence in sentences:
    for word in sentence:
        if word.is_stop == 'False':
            newlist.append(word)
</code></pre>
<p>but I would have to make it more complicated than the code above because I would have to keep the integrity of the original list of sentences (because the indexes would have to be the same if I wanted to print out the full sentences later). Plus if I did it this way, I would have to run this new list of lists back through SpaCy in order to use the .similarity method.</p>
<p>I feel like there must be a better way of going about this, and I'd really appreciate any guidance. Even if there isn't a better way than appending each non-stop word to a new list, I'd appreciate any help in creating a list of lists so that the indexes will be identical to the original "sentences" variable.</p>
<p>Thanks so much!</p> | 2018-10-14 21:03:42.800000+00:00 | 2020-09-09 18:24:46.960000+00:00 | null | python|nlp|spacy | ['https://spacy.io/usage/processing-pipelines#custom-components-user-hooks'] | 1 |
40,254,471 | <p>In addition to what was stated, performing this on pandas dataframes works as well:</p>
<pre><code>some_column_hist = dataframe['some_column'].plot(bins=np.logspace(-2, np.log10(max_value), 100), kind='hist', loglog=True, xlim=(0,max_value))
</code></pre>
<p>I would caution that there may be an issue with normalizing the bins. Each bin is larger than the previous one, and therefore must be divided by its size to normalize the frequencies before plotting, and it seems that neither my solution nor HYRY's accounts for this.</p>
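<p>A rough sketch of that normalization with plain numpy/matplotlib (the toy data and the number of bins are placeholders):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

data = np.random.lognormal(mean=1.0, sigma=1.0, size=10000)

bins = np.logspace(np.log10(data.min()), np.log10(data.max()), 50)
counts, edges = np.histogram(data, bins=bins)

# divide each count by its bin width so the wider right-hand bins are not inflated
density = counts / np.diff(edges) / counts.sum()

plt.step(edges[:-1], density, where='post')
plt.xscale('log')
plt.yscale('log')
plt.show()
</code></pre>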
<p>Source: <a href="https://arxiv.org/pdf/cond-mat/0412004.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/cond-mat/0412004.pdf</a></p> | 2016-10-26 05:37:41.620000+00:00 | 2017-08-16 10:29:08.423000+00:00 | 2017-08-16 10:29:08.423000+00:00 | null | 6,855,710 | <p>As far as I know the option Log=True in the histogram function only refers to the y-axis.</p>
<pre><code>P.hist(d,bins=50,log=True,alpha=0.5,color='b',histtype='step')
</code></pre>
<p>I need the bins to be equally spaced in log10. Is there something that can do this?</p> | 2011-07-28 07:55:35.857000+00:00 | 2019-04-11 18:34:49.587000+00:00 | 2013-07-09 11:31:55.827000+00:00 | python|numpy|matplotlib|histogram | ['https://arxiv.org/pdf/cond-mat/0412004.pdf'] | 1 |
45,903,027 | <p>If I understand the question correctly, you are looking for a semantic parser for Swedish which translates natural language sentences into SQL queries suited to your product. This is a big task and can turn into a huge project. If you are looking for a quick solution, jump to the last paragraph.</p>
<p>The good old standard way in computational semantics is to write a grammar for the parser together with the translation of each word into your database vocabulary. There are examples of this in <a href="http://www.nltk.org/book/ch10.html" rel="nofollow noreferrer">this NLTK chapter</a>.</p>
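<p>A condensed version of the kind of example that chapter gives (the toy grammar <code>sql0.fcfg</code> ships with the NLTK book data and covers only a tiny English fragment; a Swedish grammar would have to be written along the same lines, so treat this purely as the shape of the approach):</p>
<pre><code>import nltk
from nltk import load_parser

nltk.download('book_grammars')  # the toy grammar is part of the NLTK book data

cp = load_parser('grammars/book_grammars/sql0.fcfg')
query = 'What cities are located in China'
trees = list(cp.parse(query.split()))
answer = trees[0].label()['SEM']
print(' '.join(s for s in answer if s))
# expected output, roughly: SELECT City FROM city_table WHERE Country="china"
</code></pre>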
<p>The challenge with this path is that you need to manually hard-code all possible vocabulary in a single grammar file. A more advanced solution along the same lines is to use <a href="http://www.grammaticalframework.org/" rel="nofollow noreferrer">Grammatical Framework</a> and write your controlled language in GF. There you can start from their large resource grammar, and you don't need to worry about the grammar of different languages. This can be built as a package that you add to your Python program.</p>
<p>If those controlled-language solutions don't serve you well, there are other options, such as a probabilistic CCG parser translating natural language to SQL (e.g. <a href="https://github.com/workday/upshot-montague/" rel="nofollow noreferrer">English-to-SQL</a> and <a href="https://arxiv.org/pdf/1704.08760.pdf" rel="nofollow noreferrer">other ideas</a>), which you would probably need to train on your own. And if you have a lot of annotated data and want state-of-the-art technology, you can take a look at parsers based on neural machine translation, e.g. <a href="https://github.com/mokemokechicken/keras_npi" rel="nofollow noreferrer">neural programmer</a>.</p>
<p>Finally, if you want to have a fast solution write regular expressions for patterns that you know. The outcome might become still very similar to <a href="https://www.wolframalpha.com/" rel="nofollow noreferrer">WolframAlpha</a> queries. You can even make some of these regular expressions dynamic based on known name entities in your database. Hopefully, this answers your question.</p> | 2017-08-27 08:24:10.140000+00:00 | 2017-08-27 08:24:10.140000+00:00 | null | null | 45,878,290 | <p>I’m currently working on a project in which I have a database with 1000 products (washing machines), where each one has 21 product attributes (such as weight, dimensions, colour, power consumption and so on.)
My aim is to use NLP to let users search the database of products with natural language queries, like:</p>
<blockquote>
<p>“Find a washing machine that can load at least 8 kg of laundry, and with height no more than 60 cm and with a front of stainless steel”</p>
<p>“I’m looking for a washing machine that costs less than 6000 SEK and has the opening in the front, not in the top”</p>
</blockquote>
<p>This NL query needs to be translated to a SQL-query to be used with my database. The problem is that I would need it to work in the Swedish language.
I’ve found a great API (<a href="https://json-tagger.com/" rel="nofollow noreferrer">https://json-tagger.com/</a>) that does the pre-processing of the sentences for me, tokenization and tagging Part of speech in Swedish. Thanks! But now I would really like some tips on how I best use this to translate it to SQL-queries?</p>
<p>I guess I would need to extract the relations and semantics of the user input in order to query the database, but I’m not sure how to do this. As it is a fairly limited area (washing machine product search) I hope I can construct some rules for doing this, but I’m not sure if that is the right way to go. Any help or ideas are very appreciated! :)</p>
<p>I kind of new to NLP and would really prefer working in Python3. Thank you!</p> | 2017-08-25 09:21:57.037000+00:00 | 2017-08-27 08:24:10.140000+00:00 | 2020-06-20 09:12:55.060000+00:00 | python|sql|nlp|search-engine | ['http://www.nltk.org/book/ch10.html', 'http://www.grammaticalframework.org/', 'https://github.com/workday/upshot-montague/', 'https://arxiv.org/pdf/1704.08760.pdf', 'https://github.com/mokemokechicken/keras_npi', 'https://www.wolframalpha.com/'] | 6 |
63,254,315 | <p>I can only give you a partial answer. You should be aware that this is pretty bleeding-edge; although the ArXiv paper (dated Feb 2020) says</p>
<blockquote>
<p>Hopefully by the time you are reading
this, the functionality will be available in the stable release on the Comprehensive R Archive Network (CRAN)</p>
</blockquote>
<p>but so far that's not even true; it's not even in the master GitHub branch, so I used <code>remotes::install_github("stan-dev/rstanarm@feature/survival")</code> to install it from source.</p>
<p>The proximal problem is that you should specify the new data frame as <code>newdata</code>, not <code>newdataEvent</code>. There seem to be a lot of mismatches between the master and this branch, and between the docs and the code ... <code>newdataEvent</code> is used in the older method for <code>stanjm</code> models, but not for <code>stansurv</code> models. You can look at the code <a href="https://github.com/stan-dev/rstanarm/blob/feature/survival/R/posterior_survfit.R#L364-L377" rel="nofollow noreferrer">here</a>, or use <code>formals(rstanarm:::posterior_survfit.stansurv)</code>. Unfortunately, because this method has an (unused, unchecked) <code>...</code> argument, that means that any misnamed arguments will be silently ignored.</p>
<p>The next problem is that if you specify the new data in your example as <code>newdata</code> you'll get</p>
<blockquote>
<p>Error: The following variables are missing from the data: subplot_by_site, New.Species.name</p>
</blockquote>
<p>That is, there doesn't seem to be an obvious way to generate a population-level posterior prediction. (Setting the random effects grouping variables to <code>NA</code> isn't allowed.) If you want to do this, you could either:</p>
<ul>
<li>expand your <code>newdata</code> to include all combinations of the grouping variables in your data set, and average the results across levels yourself;</li>
<li>post an issue on GitHub or contact the maintainers ...</li>
</ul> | 2020-08-04 20:04:50.527000+00:00 | 2020-08-04 20:04:50.527000+00:00 | null | null | 63,222,192 | <p>I am having trouble generating posterior predictions using posterior_survfit(). I am trying to use a new data frame, but it is not using the new data frame and instead is using values from the dataset I used to fit the model. The fitted variables in the model are New.Treatment (6 treatments = categorical), Openness (a continuous light index min= 2.22, mean= 6.903221 and max=10.54), subplot_by_site(categorical-720 sites), New.Species.name(categorical- 165 species). My new data frame has 94 rows and the posterior_survfit() is giving me 3017800 rows. Help, please!</p>
<pre><code>head(nd)
New.Treatment Openness
1 BE 5
2 BE 6
3 BE 7
4 BE 8
5 BE 9
6 BE 10
fit= stan_surv(formula = Surv(days, Status_surv) ~ New.Treatment*Openness + (1 |subplot_by_site)+(1|New.Species.name),
data = dataset,
basehaz = "weibull",
chains=4,
iter = 2000,
cores =4 )
Post=posterior_survfit(fit, type="surv",
newdata=nd5)
head(Post)
id cond_time time median ci_lb ci_ub
1 1 NA 62.0000 0.9626 0.9623 1.0000
2 1 NA 69.1313 0.9603 0.9600 0.9997
3 1 NA 76.2626 0.9581 0.9579 0.9696
4 1 NA 83.3939 0.9561 0.9557 0.9665
5 1 NA 90.5253 0.9541 0.9537 0.9545
6 1 NA 97.6566 0.9522 0.9517 0.9526
##Here some reproducible code to explain my problem:
library(rstanarm)
data_NHN<- expand.grid(New.Treatment = c("A","B","C"), Openness = c(seq(2, 11, by=0.15)))
data_NHN$subplot_by_site=c(rep("P1",63),rep("P2",60),rep("P3",60))
data_NHN$Status_surv=sample(0:1,183, replace=TRUE)
data_NHN$New.Species.name=c(rep("sp1",10),rep("sp2",40),rep("sp1",80),rep("sp2",20),rep("sp1",33))
data_NHN$days=sample(10, size = nrow(data_NHN), replace = TRUE)
nd_t<- expand.grid(New.Treatment = c("A","B","C"), Openness = c(seq(2, 11, by=1)))
mod= stan_surv(formula = Surv(days, Status_surv) ~ New.Treatment+Openness + (1 |subplot_by_site)+(1|New.Species.name),
data =data_NHN,
basehaz = "weibull",
chains=4,
iter = 30,
cores =4)
summary(mod)
pos=posterior_survfit(mod, type="surv",
newdataEvent=nd_t,
times = 0)
head(pos)
#I am interested in predicting values for specific Openess values
#(nd_t=20 rows)but I am getting instead values for each point in time
#(pos=18300rows)
</code></pre>
<p>Operating System: Mac OS Catalina 10.15.6
R version: 4.0
rstan version: 2.21.2
rstanarm Version: rstanarm_2.21.2
Any suggestions on why is it not working. it’s not clear how to give some sort of plot of the effects of one variable in the interaction as the other changes and the associated uncertainty (i.e. a marginal effects plot). In my example, I am interested in getting the values at specific "Openness" values and not at each specific time as appears in the posterior results. TIA.</p> | 2020-08-03 00:14:36.290000+00:00 | 2020-08-04 20:04:50.527000+00:00 | 2020-08-03 07:02:33.400000+00:00 | r|rstan|rstanarm | ['https://github.com/stan-dev/rstanarm/blob/feature/survival/R/posterior_survfit.R#L364-L377'] | 1 |
67,588,518 | <p>At the moment Amazon Textract does not support font recognition. These two projects might help you:</p>
<ol>
<li>DeepFont: Identify Your Font from An Image</li>
</ol>
<ul>
<li>Paper: <a href="https://arxiv.org/pdf/1507.03196v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1507.03196v1.pdf</a></li>
<li>GitHub: <a href="https://github.com/robinreni96/Font_Recognition-DeepFont" rel="nofollow noreferrer">https://github.com/robinreni96/Font_Recognition-DeepFont</a></li>
</ul>
<ol start="2">
<li>Typefont: The first open-source library that detects the font of text in an image. (It's read-only now.)</li>
</ol>
<ul>
<li>GitHub: <a href="https://github.com/Vasile-Peste/Typefont" rel="nofollow noreferrer">https://github.com/Vasile-Peste/Typefont</a></li>
</ul> | 2021-05-18 14:40:19.727000+00:00 | 2021-05-18 14:40:19.727000+00:00 | null | null | 67,548,410 | <p>I am using the <code>Amazon Textract API</code>, through AWS' Python API, to extract text from a document (<code>pdf</code> or <code>jpg</code>). I do get the text and coordinates of its bounding box, but I would also love to have the font type (only the major ones needed: Arial, Helvetica, Verdana, Calibri, Times New Roman + a few others).</p>
<p>Does anyone have a solution to get that piece of data?</p>
<p>The best solution may be a package, which accepts a small image, returns the font type name, and which I can run on my server. An external API would most likely be too costly (money and time-wise), as I have to run it 100+ times in a second.</p>
<h3>What Amazon Textract returns (unfortunately, no font type):</h3>
<pre><code>{'BlockType': 'LINE',
'Confidence': 99.81985473632812,
'Text': 'This is a text',
'Geometry': {'BoundingBox': {'Width': 0.7395017743110657,
'Height': 0.012546566314995289,
'Left': 0.12995509803295135,
'Top': 0.2536422610282898},
'Polygon': [{'X': 0.12995509803295135, 'Y': 0.2536422610282898},
{'X': 0.8694568872451782, 'Y': 0.2536422610282898},
{'X': 0.8694568872451782, 'Y': 0.2661888301372528},
{'X': 0.12995509803295135, 'Y': 0.2661888301372528}]},
'Id': '59f42615-7f33-41d2-9f3c-77ae5e4b6e7a',
'Relationships': ...}
</code></pre>
<h3>What I did so far</h3>
<p>I implemented a solution which calculates the ratio <code>width/height</code> of the text and compare this by programmatically drawing the same text using Python's pillow package and different font types and then comparing the ratio. However, that heuristic often leads to wrong results.</p> | 2021-05-15 15:30:52.847000+00:00 | 2021-06-19 08:33:26.890000+00:00 | 2021-06-19 08:33:26.890000+00:00 | python|ocr|image-recognition|amazon-textract | ['https://arxiv.org/pdf/1507.03196v1.pdf', 'https://github.com/robinreni96/Font_Recognition-DeepFont', 'https://github.com/Vasile-Peste/Typefont'] | 3 |
58,982,370 | <p>I get the following result if I increase the number of nodes in the tanh layer to 5:
<code>b.layer(tanh=5)</code></p>
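<p>In full, that is a one-line change to the network definition from the question; everything else (training with <code>b.learn(x,y)</code> and predicting with <code>b.think(xp)</code>) stays as it was:</p>
<pre><code>from gekko import brain

b = brain.Brain()
b.input_layer(1)
b.layer(linear=2)
b.layer(tanh=5)    # widened from tanh=2 to tanh=5
b.layer(linear=2)
b.output_layer(1)
</code></pre>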
<p>There are probably multiple answers to this question, though: increasing the number of layers or changing the activation function could also help, and you can always use different solvers. Finding the best network architecture is an optimization problem of its own. Some people have tried to figure it out with genetic algorithms, for example:</p>
<p><a href="https://arxiv.org/pdf/1808.03818.pdf" rel="noreferrer">https://arxiv.org/pdf/1808.03818.pdf</a></p>
<p><a href="https://i.stack.imgur.com/hD6BD.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hD6BD.png" alt="enter image description here"></a></p> | 2019-11-21 19:24:56.253000+00:00 | 2019-11-21 19:24:56.253000+00:00 | null | null | 58,982,159 | <p>I am learning to use Gekko's brain module for deep learning applications. </p>
<p>I have been setting up a neural network to learn the numpy.cos() function and then produce similar results. </p>
<p>I get a good fit when the bounds on my training are:</p>
<pre class="lang-py prettyprint-override"><code>x = np.linspace(0,2*np.pi,100)
</code></pre>
<p>But the model falls apart when I try to extend the bounds to:</p>
<pre class="lang-py prettyprint-override"><code>x = np.linspace(0,3*np.pi,100)
</code></pre>
<p>What do I need to change in my neural network to increase the flexibility of my model so that it works for other bounds?</p>
<p>This is my code:</p>
<pre class="lang-py prettyprint-override"><code>from gekko import brain
import numpy as np
import matplotlib.pyplot as plt
#Set up neural network
b = brain.Brain()
b.input_layer(1)
b.layer(linear=2)
b.layer(tanh=2)
b.layer(linear=2)
b.output_layer(1)
#Train neural network
x = np.linspace(0,2*np.pi,100)
y = np.cos(x)
b.learn(x,y)
#Calculate using trained neural network
xp = np.linspace(-2*np.pi,4*np.pi,100)
yp = b.think(xp)
#Plot results
plt.figure()
plt.plot(x,y,'bo')
plt.plot(xp,yp[0],'r-')
plt.show()
</code></pre>
<p>These are results to 2pi:</p>
<p><a href="https://i.stack.imgur.com/jJsQ7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jJsQ7.png" alt="enter image description here"></a></p>
<p>These are results to 3pi:</p>
<p><a href="https://i.stack.imgur.com/yazDz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yazDz.png" alt="enter image description here"></a></p> | 2019-11-21 19:12:29.093000+00:00 | 2019-11-22 17:35:38.070000+00:00 | 2019-11-21 19:58:06.073000+00:00 | neural-network|deep-learning|gekko | ['https://arxiv.org/pdf/1808.03818.pdf', 'https://i.stack.imgur.com/hD6BD.png'] | 2 |
68,674,040 | <p>Here are a few points that would be useful for you:</p>
<ul>
<li><p>At first glance your model is not learning, since your predictions are as good as a random guess. The first step would be to monitor your loss; here you only have a single epoch. At the very least you could evaluate your model on unseen data (a minimal evaluation loop is sketched after this list):</p>
<pre><code>validation_set = torchvision.datasets.MNIST('./',
download=True, train=False, transform=T.ToTensor())
validation_loader = DataLoader(validation_set, batch_size=32)
</code></pre>
</li>
<li><p>You are using a MSE loss (the L2-norm) to train a classification task which is <a href="https://stats.stackexchange.com/questions/46413/can-the-mean-squared-error-be-used-for-classification">not the right tool for this kind of task</a>. You could instead be using the negative log-likelihood. PyTorch offers <a href="https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html" rel="nofollow noreferrer"><code>nn.CrossEntropyLoss</code></a> which includes a log-softmax and the negative log-likelihood loss in one module. This change can be implemented by adding in:</p>
<pre><code>loss_function = nn.CrossEntropyLoss()
</code></pre>
<p>and using the right target shapes when applying <code>loss_function</code> (<em>see below</em>). Since the loss function will apply a log-softmax, you <strong>shouldn't have</strong> an activation function on your model's output.</p>
</li>
<li><p>You are using sigmoid as the intermediate activation function; intermediate non-linearities generally work better with <a href="https://arxiv.org/pdf/1803.08375.pdf" rel="nofollow noreferrer">ReLU</a> (<a href="https://stats.stackexchange.com/questions/226923/why-do-we-use-relu-in-neural-networks-and-how-do-we-use-it">see related post</a>). A sigmoid is better suited to a binary classification task. Again, since we are using <code>nn.CrossEntropyLoss</code>, we have to remove the activation after <code>layer2</code>.</p>
<pre><code>class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.flatten = nn.Flatten()
        self.layer1 = torch.nn.Linear(784, 800)
        self.layer2 = torch.nn.Linear(800, 10)

    def forward(self, x):
        x = self.flatten(x)
        x = torch.relu(self.layer1(x))
        x = self.layer2(x)
        return x
</code></pre>
</li>
<li><p>A less crucial point is the fact that you could infer estimations on a whole batch instead of looping through each batch one element at a time. A typical training loop for one epoch would look like:</p>
<pre><code>for images, labels in training_loader:
    optimizer.zero_grad()
    output = net(images)
    loss = loss_function(output, labels)
    loss.backward()
    optimizer.step()
</code></pre>
</li>
</ul>
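<p>To see whether those changes help, a minimal evaluation sketch (assuming the modified <code>net</code> above and the <code>validation_loader</code> from the first point) could look like this:</p>
<pre><code>correct, total = 0, 0
net.eval()
with torch.no_grad():
    for images, labels in validation_loader:
        output = net(images)
        predictions = output.argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.size(0)
print(f"validation accuracy: {correct / total:.4f}")
</code></pre>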
<p>With these kinds of modifications, you can expect to have a validation of around 80% after a single epoch.</p> | 2021-08-05 22:10:03.090000+00:00 | 2021-08-05 22:10:03.090000+00:00 | null | null | 68,672,764 | <p>I am using <a href="https://en.wikipedia.org/wiki/PyTorch" rel="nofollow noreferrer">PyTorch</a> in order to get my neural network to recognize digits from the <a href="https://en.wikipedia.org/wiki/MNIST_database" rel="nofollow noreferrer">MNIST database</a>.</p>
<pre><code>import torch
import torchvision
</code></pre>
<p>I'd like to implement a very simple design similar to what is shown in <a href="https://www.3blue1brown.com/topics/neural-networks" rel="nofollow noreferrer">3Blue1Brown's video series about neural networks</a>. The following design in particular achieved an error rate of <a href="https://en.wikipedia.org/wiki/MNIST_database#Classifiers" rel="nofollow noreferrer">1.6%</a>.</p>
<pre><code>class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.layer1 = torch.nn.Linear(784, 800)
        self.layer2 = torch.nn.Linear(800, 10)

    def forward(self, x):
        x = torch.sigmoid(self.layer1(x))
        x = torch.sigmoid(self.layer2(x))
        return x
</code></pre>
<p>The data is gathered using torchvision and organised in mini batches containing 32 images each.</p>
<pre><code>batch_size = 32
training_set = torchvision.datasets.MNIST("./", download=True, transform=torchvision.transforms.ToTensor())
training_loader = torch.utils.data.DataLoader(training_set, batch_size=32)
</code></pre>
<p>I am using the mean squared error as the loss function and stochastic gradient descent with a learning rate of 0.001 as my optimization algorithm.</p>
<pre><code>net = Net()
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.001)
</code></pre>
<p>Finally the network gets trained and saved using the following code:</p>
<pre><code>for images, labels in training_loader:
    optimizer.zero_grad()
    for i in range(batch_size):
        output = net(torch.flatten(images[i]))
        desired_output = torch.tensor([float(j == labels[i]) for j in range(10)])
        loss = loss_function(output, desired_output)
        loss.backward()
    optimizer.step()

torch.save(net.state_dict(), "./trained_net.pth")
</code></pre>
<p>However, here are the outputs of some test images:</p>
<pre><code>tensor([0.0978, 0.1225, 0.1018, 0.0961, 0.1022, 0.0885, 0.1007, 0.1077, 0.0994,
0.1081], grad_fn=<SigmoidBackward>)
tensor([0.0978, 0.1180, 0.1001, 0.0929, 0.1006, 0.0893, 0.1010, 0.1051, 0.0978,
0.1067], grad_fn=<SigmoidBackward>)
tensor([0.0981, 0.1227, 0.1018, 0.0970, 0.0979, 0.0908, 0.1001, 0.1092, 0.1011,
0.1088], grad_fn=<SigmoidBackward>)
tensor([0.1061, 0.1149, 0.1037, 0.1001, 0.0957, 0.0919, 0.1044, 0.1022, 0.0997,
0.1052], grad_fn=<SigmoidBackward>)
tensor([0.0996, 0.1137, 0.1005, 0.0947, 0.0977, 0.0916, 0.1048, 0.1109, 0.1013,
0.1085], grad_fn=<SigmoidBackward>)
tensor([0.1008, 0.1154, 0.0986, 0.0996, 0.1031, 0.0952, 0.0995, 0.1063, 0.0982,
0.1094], grad_fn=<SigmoidBackward>)
tensor([0.0972, 0.1235, 0.1013, 0.0984, 0.0974, 0.0907, 0.1032, 0.1075, 0.1001,
0.1080], grad_fn=<SigmoidBackward>)
tensor([0.0929, 0.1258, 0.1016, 0.0978, 0.1006, 0.0889, 0.1001, 0.1068, 0.0986,
0.1024], grad_fn=<SigmoidBackward>)
tensor([0.0982, 0.1207, 0.1040, 0.0990, 0.0999, 0.0910, 0.0980, 0.1051, 0.1039,
0.1078], grad_fn=<SigmoidBackward>)
</code></pre>
<p>As you can see the network seems to approach a state where the answer for every input is:</p>
<pre><code>[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
</code></pre>
<p>This neural network is not better than just guessing. Where did I go wrong in my design or code?</p> | 2021-08-05 19:53:55.597000+00:00 | 2021-08-05 22:10:03.090000+00:00 | null | python|machine-learning|neural-network|pytorch|mnist | ['https://stats.stackexchange.com/questions/46413/can-the-mean-squared-error-be-used-for-classification', 'https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html', 'https://arxiv.org/pdf/1803.08375.pdf', 'https://stats.stackexchange.com/questions/226923/why-do-we-use-relu-in-neural-networks-and-how-do-we-use-it'] | 4 |
58,202,769 | <p>A universal method to decompose 2-qubit unitaries into primitive gates is sometimes referred to as "Krauss-Cirac decomposition". Here are several sources:</p>
<ul>
<li><a href="https://arxiv.org/pdf/quant-ph/0308006.pdf" rel="noreferrer">Optimal Quantum Circuits for General Two-Qubit Gates</a> by Vatan and Williams,</li>
<li><a href="https://arxiv.org/pdf/quant-ph/0011050.pdf" rel="noreferrer">Optimal Creation of Entanglement Using a Two–Qubit Gate</a> by Kraus and Cirac.</li>
<li>“Explorations in Quantum Computing” by Williams, chapter 2.</li>
</ul>
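<p>Since the question is tagged qiskit, it may also help that the transpiler can carry out such a decomposition numerically; a rough sketch, with <code>U</code> standing in for the 4x4 unitary from the question:</p>
<pre><code>import numpy as np
from qiskit import QuantumCircuit, transpile

U = np.eye(4)  # placeholder: put your 4x4 unitary here

qc = QuantumCircuit(2)
qc.unitary(U, [0, 1])

# rewrite the unitary in terms of single-qubit rotations and CNOTs
decomposed = transpile(qc, basis_gates=['ry', 'rz', 'cx'], optimization_level=3)
print(decomposed)
</code></pre>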
<p>As a side note, such questions are usually better received at <a href="https://quantumcomputing.stackexchange.com/">Quantum Computing StackExchange</a>.</p> | 2019-10-02 13:40:46.390000+00:00 | 2019-10-02 13:40:46.390000+00:00 | null | null | 58,198,077 | <p>I would like to make a quantum circuit from the following matrix.
<a href="https://i.stack.imgur.com/ongkh.gif" rel="nofollow noreferrer">matrix to be transformed into qubit operations</a>
How can I decompose this matrix into qubit operations such as <code>Rotation Y</code>, <code>Control-NOT</code> and so on ?</p>
<p>FYI, I read a book named "Quantum Computation and Quantum Information" written by Nielsen & Chuang, in particular Section 4.5.</p> | 2019-10-02 08:41:06.720000+00:00 | 2019-10-02 15:38:44.803000+00:00 | 2019-10-02 15:38:44.803000+00:00 | quantum-computing|qiskit | ['https://arxiv.org/pdf/quant-ph/0308006.pdf', 'https://arxiv.org/pdf/quant-ph/0011050.pdf', 'https://quantumcomputing.stackexchange.com/'] | 3 |
66,046,184 | <p>The closest thing I've seen to what you are describing is the attention mechanism in an encoder-decoder model, where a Dense layer essentially controls which of the encoded hidden states should be used by the decoding layer, instead of relying solely on the last hidden state.</p>
<p>Here is the <a href="https://arxiv.org/pdf/1409.0473.pdf" rel="nofollow noreferrer">paper</a> if you want to read more.</p>
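<p>A minimal sketch of the mechanism with dot-product attention (shapes are made up; the linked paper actually uses additive attention, so this is only the flavour of the idea):</p>
<pre><code>import torch
import torch.nn.functional as F

hidden_states = torch.randn(7, 16)   # the n previous hidden states, hidden size 16
query = torch.randn(16)              # current state deciding what to attend to

scores = hidden_states @ query       # (7,)
weights = F.softmax(scores, dim=0)   # attention distribution over previous steps
context = weights @ hidden_states    # weighted sum, shape (16,)
</code></pre>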
<p>This architecture intent at circumventing the limit on how much information can be stored in one hidden state over long sequence.</p> | 2021-02-04 13:01:24.297000+00:00 | 2021-02-04 13:01:24.297000+00:00 | null | null | 66,043,890 | <p>Usually in a RNN only the previous input and hidden state is used to calculate the output.
However, what would happen if we use up to n previous steps? In essence feeding an n-gram to the neural network?
Since n-grams are generally quite good at short text generation, this added information would lessen the burden on the hidden state to memorize short-term information, letting it focus on the contextual aspect of the text.</p>
<p>This seems quite a simple thing but I'm unable to find any paper that have implemented this.</p> | 2021-02-04 10:36:12.993000+00:00 | 2021-02-04 13:01:24.297000+00:00 | null | deep-learning|neural-network|lstm|recurrent-neural-network | ['https://arxiv.org/pdf/1409.0473.pdf'] | 1 |
51,824,126 | <p>Looking at <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/protos/faster_rcnn.proto" rel="nofollow noreferrer"><code>faster_rcnn.proto</code></a>:</p>
<pre class="lang-proto prettyprint-override"><code>// Naming conventions:
// Faster R-CNN models have two stages: a first stage region proposal network
// (or RPN) and a second stage box classifier. We thus use the prefixes
// `first_stage_` and `second_stage_` to indicate the stage to which each
// parameter pertains when relevant
</code></pre>
<p>And so:</p>
<pre class="lang-proto prettyprint-override"><code>// Maximum number of RPN proposals retained after first stage postprocessing.
optional int32 first_stage_max_proposals = 15 [default=300];
</code></pre>
<p>Faster R-CNN has two networks, the first proposes regions where objects may be found and the second tries to detect objects in those. Increasing the number of proposals by the first network increases the accuracy but implies more computational work, because the second network has to search in more potential areas. For a quick explanation on how Faster R-CNN works check out <a href="https://medium.com/@smallfishbigsea/faster-r-cnn-explained-864d4fb7e3f8" rel="nofollow noreferrer">Faster R-CNN Explained</a>, and if you want to have the full picture you can look at the original publication: <a href="https://arxiv.org/abs/1506.01497" rel="nofollow noreferrer">Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks</a>.</p> | 2018-08-13 13:57:45.507000+00:00 | 2018-08-13 13:57:45.507000+00:00 | null | null | 51,823,857 | <p>I am trying to understand tensorflow object detection config fields exactly.</p>
<p>And according to this article(<a href="https://medium.com/@jonathan_hui/object-detection-speed-and-accuracy-comparison-faster-r-cnn-r-fcn-ssd-and-yolo-5425656ae359" rel="nofollow noreferrer">https://medium.com/@jonathan_hui/object-detection-speed-and-accuracy-comparison-faster-r-cnn-r-fcn-ssd-and-yolo-5425656ae359</a>), for good balance between accuracy and speed, I changed first_stage_max_proposals from origin 100 to 50.</p>
<p>The good news is that it indeed reduced the inference latency (from 4.2 seconds to 2.2 per image); the bad news is that it also decreased the accuracy.</p>
<p>Then I changed max proposals from 50 to 70, and the accuracy got better.</p>
<p>So, I want to know exactly what the max proposals setting controls. Is it related to any other config options, such as max_detections_per_class or max_total_detections, etc.?</p>
<p>I googled a lot but found very little on this topic.
I use python3.6.4 and tensorflow 1.8.0, and here is my model config:</p>
<pre><code>model {
faster_rcnn {
num_classes: 3
image_resizer {
keep_aspect_ratio_resizer {
min_dimension: 670
max_dimension: 1013
}
}
feature_extractor {
type: "faster_rcnn_resnet101"
first_stage_features_stride: 16
}
first_stage_anchor_generator {
grid_anchor_generator {
height_stride: 16
width_stride: 16
scales: 0.25
scales: 0.5
scales: 1.0
scales: 2.0
aspect_ratios: 0.5
aspect_ratios: 1.0
aspect_ratios: 2.0
}
}
first_stage_box_predictor_conv_hyperparams {
op: CONV
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
truncated_normal_initializer {
stddev: 0.01
}
}
}
first_stage_nms_score_threshold: 0.0
first_stage_nms_iou_threshold: 0.7
first_stage_max_proposals: 70
first_stage_localization_loss_weight: 2.0
first_stage_objectness_loss_weight: 1.0
initial_crop_size: 14
maxpool_kernel_size: 2
maxpool_stride: 2
second_stage_box_predictor {
mask_rcnn_box_predictor {
fc_hyperparams {
op: FC
regularizer {
l2_regularizer {
weight: 0.0
}
}
initializer {
variance_scaling_initializer {
factor: 1.0
uniform: true
mode: FAN_AVG
}
}
}
use_dropout: false
dropout_keep_probability: 1.0
}
}
second_stage_post_processing {
batch_non_max_suppression {
score_threshold: 0.3
iou_threshold: 0.6
max_detections_per_class: 30
max_total_detections: 30
}
score_converter: SOFTMAX
}
second_stage_localization_loss_weight: 2.0
second_stage_classification_loss_weight: 1.0
second_stage_batch_size: 70
}
}
train_config {
batch_size: 1
data_augmentation_options {
random_horizontal_flip {
}
}
optimizer {
momentum_optimizer {
learning_rate {
exponential_decay_learning_rate {
initial_learning_rate: 0.0003
decay_steps: 2000
decay_factor: 0.95
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint: "d:/od/tool/faster_rcnn3/model.ckpt"
from_detection_checkpoint: true
}
train_input_reader {
label_map_path: "d:/od/project/train_allinone/file/labelmap.pbtxt"
tf_record_input_reader {
input_path: "d:/od/project/train_allinone/file/tf.record"
}
}
</code></pre>
<p>Any explanation on this is greatly appreciated.</p>
<p>Thanks.</p> | 2018-08-13 13:44:40.660000+00:00 | 2018-08-13 13:57:45.507000+00:00 | null | python|tensorflow | ['https://github.com/tensorflow/models/blob/master/research/object_detection/protos/faster_rcnn.proto', 'https://medium.com/@smallfishbigsea/faster-r-cnn-explained-864d4fb7e3f8', 'https://arxiv.org/abs/1506.01497'] | 3 |
72,501,111 | <p>The method of modularity maximization (of which Leiden is an implementation) has two important properties:</p>
<ol>
<li>It only searches for assortative communities (i.e. groups with more internal connections than external ones).</li>
<li>It is a <strong>statistically inconsistent</strong> method, that will both overfit and underfit, depending on the situation. A discussion on this matter can be found here: <a href="https://skewed.de/tiago/blog/modularity-harmful" rel="nofollow noreferrer">https://skewed.de/tiago/blog/modularity-harmful</a></li>
</ol>
<p>The SBM inference method is different in both counts:</p>
<ol>
<li>It finds groups with arbitrary mixing patterns, i.e. preferences of connections to other groups. Assortativity is a special case, but there are many others possible patterns.</li>
<li>It achieves this in a statistically principled manner, avoiding both overfitting and underfitting. For a theoretical introduction, see: <a href="https://arxiv.org/abs/1705.10225" rel="nofollow noreferrer">https://arxiv.org/abs/1705.10225</a>. For a discussion on the differences between inferential and non-inferential methods see: <a href="https://arxiv.org/abs/2112.00183" rel="nofollow noreferrer">https://arxiv.org/abs/2112.00183</a></li>
</ol>
<p>Because of the above, one should not expect SBM inference and Leiden/Louvain to yield similar answers in general.</p>
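<p>To see the contrast in code, a rough sketch (assuming <code>import graph_tool.all as gt</code> and a graph <code>g</code>; the planted-partition state below is the constrained, assortative-only variant discussed in the next paragraph):</p>
<pre><code>import graph_tool.all as gt

# full (nested) SBM: arbitrary mixing patterns, hierarchy of levels
state_sbm = gt.minimize_nested_blockmodel_dl(g)

# constrained, assortative-only variant (planted partition), closer in spirit
# to what Leiden/Louvain look for
state_pp = gt.minimize_blockmodel_dl(g, state=gt.PPBlockState)
</code></pre>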
<p>Now, for whatever reason, you may be interested to find only assortative communities. You can also do that with the SBM, but using a more constrained parametrization. You can do this with graph-tool as explained here: <a href="https://graph-tool.skewed.de/static/doc/demos/inference/inference.html#assortative-community-structure" rel="nofollow noreferrer">https://graph-tool.skewed.de/static/doc/demos/inference/inference.html#assortative-community-structure</a></p> | 2022-06-04 15:49:43.320000+00:00 | 2022-06-04 15:56:40.443000+00:00 | 2022-06-04 15:56:40.443000+00:00 | null | 72,073,919 | <p>I'm calculating network communities for 4 networks using 2 methods:</p>
<ol>
<li><p>'Leiden' method, which gives me 7 (a), 13 (b), 19 (c), 22 (d) communities.</p>
</li>
<li><p>'Stochastic block Model', also checking group membership of the nodes by inspecting levels of the hierarchy, like so:</p>
</li>
</ol>
<hr />
<pre><code>state = gt.inference.minimize_nested_blockmodel_dl(g)
state.print_summary()

levels = state.get_levels()
for s in levels:
    print(s)
    if s.get_N() == 1:
        break

lstate = state.levels[0]
b = lstate.get_blocks()
print(b[10])
</code></pre>
<hr />
<p>which prints:</p>
<pre><code><BlockState object with 228 blocks (21 nonempty), degree-corrected, for graph <Graph object, undirected, with 228 vertices and 1370 edges, 1 internal vertex property, 1 internal edge property, at 0x7fbaff1c8d50>, at 0x7fba9fac1bd0>
<BlockState object with 21 blocks (6 nonempty), for graph <Graph object, undirected, with 228 vertices and 96 edges, at 0x7fb9a3c51910>, at 0x7fb9a2dd1a10>
<BlockState object with 6 blocks (1 nonempty), for graph <Graph object, undirected, with 21 vertices and 20 edges, at 0x7fb9a3c51590>, at 0x7fb9a3c51ed0>
<BlockState object with 1 blocks (1 nonempty), for graph <Graph object, undirected, with 6 vertices and 1 edge, at 0x7fb9a6f034d0>, at 0x7fb9a3c51790>
190
<Graph object, undirected, with 3459 vertices and 134046 edges, 1 internal vertex property, 1 internal edge property, at 0x7fbb62e22790>
l: 0, N: 3459, B: 294
l: 1, N: 294, B: 85
l: 2, N: 85, B: 34
l: 3, N: 34, B: 12
l: 4, N: 12, B: 4
l: 5, N: 4, B: 1
l: 6, N: 1, B: 1
</code></pre>
<hr />
<p>and draws:</p>
<p><a href="https://i.stack.imgur.com/VoSa7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VoSa7.jpg" alt="enter image description here" /></a></p>
<p>This looks like having WAY more communities than using Leiden, and I'm trying to wrap my head around why, as well as this SBM concept.</p>
<p>Are these SBM graphs depicting adicional levels of hierarchy or is there something else going on here that justifies so many more communities?</p> | 2022-05-01 04:08:07.657000+00:00 | 2022-06-04 15:56:40.443000+00:00 | 2022-05-05 17:23:12.407000+00:00 | python|graph-tool|complex-networks | ['https://skewed.de/tiago/blog/modularity-harmful', 'https://arxiv.org/abs/1705.10225', 'https://arxiv.org/abs/2112.00183', 'https://graph-tool.skewed.de/static/doc/demos/inference/inference.html#assortative-community-structure'] | 4 |
43,050,024 | <p>If you use the pretrained model, you would need to save those outputs and input the images into a character recognition network, if using neural net, or another approach.</p>
<p>What you are doing is "scene text recognition". You can check out the Reading Text in the Wild with Convolutional Neural Networks <a href="https://arxiv.org/abs/1412.1842" rel="noreferrer">paper</a>, here's a <a href="http://zeus.robots.ox.ac.uk/textsearch/#/search/" rel="noreferrer">demo</a> and <a href="http://www.robots.ox.ac.uk/~vgg/research/text/" rel="noreferrer">homepage</a>. Github user chongyangtao has a whole <a href="https://github.com/chongyangtao/Awesome-Scene-Text-Recognition" rel="noreferrer">list</a> of resources on the topic. </p> | 2017-03-27 15:06:21.803000+00:00 | 2017-03-27 15:06:21.803000+00:00 | null | null | 43,003,369 | <p>Currently I am using a deep learing model which is called "Yolov2" for object detection, and I want to use it to extract text and use save it in disk, but i don't know how to do that, if anyone know more about that, please advice me</p>
<p>I use Tensorflow</p>
<p>Thanks</p> | 2017-03-24 15:24:54.810000+00:00 | 2019-08-27 05:41:41.227000+00:00 | null | tensorflow|deep-learning|text-recognition | ['https://arxiv.org/abs/1412.1842', 'http://zeus.robots.ox.ac.uk/textsearch/#/search/', 'http://www.robots.ox.ac.uk/~vgg/research/text/', 'https://github.com/chongyangtao/Awesome-Scene-Text-Recognition'] | 4 |
55,489,945 | <p>I want to add two points:</p>
<p>1) When use special treatments, it is possible to achieve similar performance for a very large batch size while speeding-up the training process tremendously. For example,
<a href="https://arxiv.org/pdf/1706.02677.pdf" rel="nofollow noreferrer"><em>Accurate, Large Minibatch SGD:Training ImageNet in 1 Hour</em></a></p>
<p>2) Regarding your MNIST example, I really don't suggest you to over-read these numbers. Because the difference is so subtle that it could be caused by noise. I bet if you try models saved on a different epoch, you will see a different result. </p> | 2019-04-03 07:59:10.657000+00:00 | 2019-04-03 07:59:10.657000+00:00 | null | null | 55,485,837 | <p>I was using Keras' CNN to classify MNIST dataset. I found that using different batch-sizes gave different accuracies. Why is it so? </p>
<p><a href="https://i.stack.imgur.com/Ad4oO.png" rel="nofollow noreferrer">Using Batch-size 1000</a> (Acc = 0.97600)</p>
<p><a href="https://i.stack.imgur.com/CRLd3.png" rel="nofollow noreferrer">Using Batch-size 10</a> (Acc = 0.97599)</p>
<p>Although, the difference is very small, why is there even a difference?
<strong>EDIT - I have found that the difference is only because of precision issues and they are in fact equal.</strong></p> | 2019-04-03 01:42:04.910000+00:00 | 2019-04-07 01:49:59.447000+00:00 | 2019-04-07 01:49:59.447000+00:00 | machine-learning|keras|deep-learning|conv-neural-network | ['https://arxiv.org/pdf/1706.02677.pdf'] | 1 |
55,489,890 | <p>This is not connected to Keras. The batch size, together with the learning rate, are critical hyper-parameters for training neural networks with mini-batch stochastic gradient descent (SGD), which entirely affect the learning dynamics and thus the accuracy, the learning speed, etc.</p>
<p>In a nutshell, SGD optimizes the weights of a neural network by iteratively updating them towards the (negative) direction of the gradient of the loss. In mini-batch SGD, the gradient is estimated at each iteration on a subset of the training data. It is a noisy estimation, which helps regularize the model and therefore the size of the batch matters a lot. Besides, the learning rate determines how much the weights are updated at each iteration. Finally, although this may not be obvious, the learning rate and the batch size are related to each other. <a href="https://arxiv.org/abs/1711.04623" rel="nofollow noreferrer">[paper]</a></p> | 2019-04-03 07:56:36.100000+00:00 | 2019-04-03 07:56:36.100000+00:00 | null | null | 55,485,837 | <p>I was using Keras' CNN to classify MNIST dataset. I found that using different batch-sizes gave different accuracies. Why is it so? </p>
<p><a href="https://i.stack.imgur.com/Ad4oO.png" rel="nofollow noreferrer">Using Batch-size 1000</a> (Acc = 0.97600)</p>
<p><a href="https://i.stack.imgur.com/CRLd3.png" rel="nofollow noreferrer">Using Batch-size 10</a> (Acc = 0.97599)</p>
<p>Although, the difference is very small, why is there even a difference?
<strong>EDIT - I have found that the difference is only because of precision issues and they are in fact equal.</strong></p> | 2019-04-03 01:42:04.910000+00:00 | 2019-04-07 01:49:59.447000+00:00 | 2019-04-07 01:49:59.447000+00:00 | machine-learning|keras|deep-learning|conv-neural-network | ['https://arxiv.org/abs/1711.04623'] | 1 |
55,487,307 | <p>That is because of the Mini-batch gradient descent effect during training process. You can find good explanation <a href="https://machinelearningmastery.com/gentle-introduction-mini-batch-gradient-descent-configure-batch-size/" rel="nofollow noreferrer">Here</a> that I mention some notes from that link here:</p>
<blockquote>
<p>Batch size is a slider on the learning process.</p>
<ol>
<li>Small values give a learning process that converges quickly at the
cost of noise in the training process.</li>
<li>Large values give a learning
process that converges slowly with accurate estimates of the error
gradient.</li>
</ol>
</blockquote>
<p>and also one important note from that link is :</p>
<blockquote>
<p>The presented results confirm that using small batch sizes achieves the <strong>best training stability</strong> and generalization performance, for a
given computational cost, across a wide range of experiments. In all
cases the best results have been obtained with batch sizes m = 32 or
smaller</p>
</blockquote>
<p>Which is the result of <a href="https://arxiv.org/abs/1804.07612" rel="nofollow noreferrer"><strong>this paper</strong></a>.</p>
<p><strong>EDIT</strong></p>
<p>I should mention two more points Here:</p>
<ol>
<li>because of the <strong>inherent randomness in machine learning algorithms</strong> concept, generally you should not expect machine learning algorithms (like Deep learning algorithms) to have same results on different runs. You can find more details <a href="https://machinelearningmastery.com/randomness-in-machine-learning/" rel="nofollow noreferrer">Here</a>.</li>
<li>On the other hand both of your results are too close and somehow they are equal. So in your case we can say that the batch size has no effect on your network results based on the reported results.</li>
</ol> | 2019-04-03 04:50:59.220000+00:00 | 2019-04-03 11:16:25.500000+00:00 | 2019-04-03 11:16:25.500000+00:00 | null | 55,485,837 | <p>I was using Keras' CNN to classify MNIST dataset. I found that using different batch-sizes gave different accuracies. Why is it so? </p>
<p><a href="https://i.stack.imgur.com/Ad4oO.png" rel="nofollow noreferrer">Using Batch-size 1000</a> (Acc = 0.97600)</p>
<p><a href="https://i.stack.imgur.com/CRLd3.png" rel="nofollow noreferrer">Using Batch-size 10</a> (Acc = 0.97599)</p>
<p>Although, the difference is very small, why is there even a difference?
<strong>EDIT - I have found that the difference is only because of precision issues and they are in fact equal.</strong></p> | 2019-04-03 01:42:04.910000+00:00 | 2019-04-07 01:49:59.447000+00:00 | 2019-04-07 01:49:59.447000+00:00 | machine-learning|keras|deep-learning|conv-neural-network | ['https://machinelearningmastery.com/gentle-introduction-mini-batch-gradient-descent-configure-batch-size/', 'https://arxiv.org/abs/1804.07612', 'https://machinelearningmastery.com/randomness-in-machine-learning/'] | 3 |
64,555,483 | <blockquote>
<p>How could I achieve so through XGBoost ?</p>
</blockquote>
<p>If you are OK with departing from the proportional hazards assumption, try the Accelerated Failure Time (AFT) model (see the <a href="https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html" rel="nofollow noreferrer">documentation</a> and the <a href="https://arxiv.org/abs/2006.04920" rel="nofollow noreferrer">paper</a>). Instead of modeling the <code>HR</code>, AFT models try to predict the log of survival time directly.</p>
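<p>For concreteness, a condensed sketch along the lines of that AFT tutorial (<code>X</code>, <code>y_lower</code> and <code>y_upper</code> are placeholders for your features and the lower/upper bounds on survival time; the parameter values are only illustrative):</p>
<pre><code>import xgboost as xgb

dtrain = xgb.DMatrix(X)
# for uncensored rows lower == upper == observed time;
# for right-censored rows the upper bound is +inf
dtrain.set_float_info('label_lower_bound', y_lower)
dtrain.set_float_info('label_upper_bound', y_upper)

params = {'objective': 'survival:aft',
          'eval_metric': 'aft-nloglik',
          'aft_loss_distribution': 'normal',
          'aft_loss_distribution_scale': 1.2,
          'learning_rate': 0.05,
          'max_depth': 3}
bst = xgb.train(params, dtrain, num_boost_round=100)

pred_times = bst.predict(dtrain)  # predicted survival times, not hazard ratios
</code></pre>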
<p>If you want to stick to proportional hazards, looks like there is a way to estimate $h_0(t)$, at least in R - so you may have to replicate it in python (see <a href="https://datascience.stackexchange.com/questions/65266/how-do-i-predict-survival-curves-using-xgboost">this</a> stack overflow answer).</p> | 2020-10-27 13:26:42.580000+00:00 | 2020-10-27 13:26:42.580000+00:00 | null | null | 60,012,277 | <pre><code>clf = XGBRegressor(objective='survival:cox', min_child_weight=4, gamma=0.50,
subsample=0.6, eta=1, max_depth=10, booster='gblinear', reg_lambda = 2)
</code></pre>
<p>When I use this objective, I only get the hazard ratio (HR) ( i.e., as HR = exp(marginal_prediction) in the proportional hazard function h(t) = h0(t) * HR). I would like to get the whole survival time prediction. How could I achieve so through XGBoost ?</p>
<p>Thank you for your help</p> | 2020-01-31 22:47:03.680000+00:00 | 2020-10-27 13:26:42.580000+00:00 | null | python|xgboost|survival-analysis|non-linear-regression|cox-regression | ['https://xgboost.readthedocs.io/en/latest/tutorials/aft_survival_analysis.html', 'https://arxiv.org/abs/2006.04920', 'https://datascience.stackexchange.com/questions/65266/how-do-i-predict-survival-curves-using-xgboost'] | 3 |
68,471,408 | <p>I can't answer all your questions in depth, but I will try to give you some advice.</p>
<ul>
<li>you can get a better understanding of <em>avg.loss</em> by reading <a href="https://github.com/facebookresearch/fastText/issues/690#issuecomment-441570351" rel="nofollow noreferrer">this thread</a></li>
<li>learning rate is updated according <em>lrUpdateRate</em> option (read <a href="https://fasttext.cc/docs/en/options.html" rel="nofollow noreferrer">this</a>).</li>
<li>in general, increasing the number of epochs can improve learning. However, as you can read in <a href="https://arxiv.org/pdf/1906.06669.pdf" rel="nofollow noreferrer">this paper</a>, the most popular language models have a number of epochs between 10 and 100.</li>
<li>default loss function is softmax. You can also choose hs (hierarchical softmax) or ns. You can read more in the <a href="https://fasttext.cc/docs/en/supervised-tutorial.html#advanced-readers-hierarchical-softmax" rel="nofollow noreferrer">official tutorial</a>.</li>
<li>if you want to learn more about the effects of the <em>ws</em> and <em>wordngrams</em> parameters, you can read <a href="https://stackoverflow.com/questions/57507056/difference-between-max-length-of-word-ngrams-and-size-of-context-window">this answer</a>.</li>
</ul> | 2021-07-21 14:31:14.587000+00:00 | 2021-07-21 14:31:14.587000+00:00 | null | null | 68,466,879 | <p>I wanted to create a fastText unsupervised model for my text data of size 1GB. I'm using fastText command line tool to implement the model training process.</p>
<pre><code>./fasttext skipgram -input PlainText.txt -output FastText-PlainText- -dim 50 -epoch 50
</code></pre>
<p>The above are a few of the arguments I used to create the word representations.</p>
<pre><code>Read 207M words
Number of words: 501986
Number of labels: 0
Progress: 97.5% words/sec/thread: 87224 lr: 0.001260 avg.loss: 0.089536 ETA: 0h 4m 9s
</code></pre>
<p>Here, in the output of the fastText command, I see this avg.loss, and the learning rate has been decreased from the default (0.5) to 0.001. I don't really understand what this avg.loss means, or why the learning rate is dropped.</p>
<ol>
<li>Should I increase the number of epochs to make fastText learn my data better?</li>
<li>Can I use a different loss function to improve the loss? If yes, what kind of loss function would be better?</li>
<li>And how can I evaluate whether my fastText model has learned well or badly?</li>
<li>Just out of interest, can I use wordngrams to make my model learn context better in unsupervised learning?</li>
</ol> | 2021-07-21 09:10:36.507000+00:00 | 2021-07-21 14:31:14.587000+00:00 | null | word-embedding|fasttext | ['https://github.com/facebookresearch/fastText/issues/690#issuecomment-441570351', 'https://fasttext.cc/docs/en/options.html', 'https://arxiv.org/pdf/1906.06669.pdf', 'https://fasttext.cc/docs/en/supervised-tutorial.html#advanced-readers-hierarchical-softmax', 'https://stackoverflow.com/questions/57507056/difference-between-max-length-of-word-ngrams-and-size-of-context-window'] | 5 |
23,876,673 | <p>Full instructions are available in the <a href="http://scikit-learn.org/stable/developers/contributing.html#rolling-your-own-estimator" rel="noreferrer">scikit-learn docs</a>, and the principles behind the API are set out in <a href="http://arxiv.org/abs/1309.0238" rel="noreferrer">this paper by yours truly et al.</a> In short, besides <code>fit</code>, what you need for an estimator are <code>get_params</code> and <code>set_params</code> that return (as a <code>dict</code>) and set (from kwargs) the hyperparameters of the estimator, i.e. the parameters of the learning algorithm itself (as opposed to the data parameters it learns). These parameters should match the <code>__init__</code> parameters.</p>
<p>Both methods can be obtained by inheriting from the classes in <code>sklearn.base</code>, but you can provide them yourself if you don't want your code to be dependent on scikit-learn.</p>
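<p>For instance, a minimal sketch (the estimator itself is a made-up toy; only the interface matters here):</p>
<pre><code>from sklearn.base import BaseEstimator

class MeanRegressor(BaseEstimator):
    def __init__(self, shrinkage=0.0):
        # the constructor only stores hyperparameters, one attribute per argument
        self.shrinkage = shrinkage

    def fit(self, X, y):
        # input validation and the learned, trailing-underscore attributes go here
        self.mean_ = (1.0 - self.shrinkage) * sum(y) / len(y)
        return self

    def predict(self, X):
        return [self.mean_ for _ in range(len(X))]
</code></pre>
<p>Because every <code>__init__</code> argument is stored under its own name and the class inherits from <code>BaseEstimator</code>, <code>get_params</code>/<code>set_params</code> (and therefore <code>clone</code> and <code>cross_val_score</code>) work without any extra code.</p>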
<p>Note that input validation should be done in <code>fit</code>, not the constructor, because otherwise you can still set invalid parameters in <code>set_params</code> and have <code>fit</code> fail in unexpected ways.</p> | 2014-05-26 19:35:48.730000+00:00 | 2018-05-12 20:43:05.570000+00:00 | 2018-05-12 20:43:05.570000+00:00 | null | 23,866,833 | <p>I'm rolling my own predictor and want to use it like I would use any of the scikit routines (e.g. RandomForestRegressor). I have a class containing <code>fit</code> and <code>predict</code> methods that seem to work fine. However, when I try to use some of the scikit methods, such as cross validation, I get errors like:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\sklearn\cross_validation.py", line 1152, in cross_val_
score
for train, test in cv)
File "C:\Python27\lib\site-packages\sklearn\externals\joblib\parallel.py", line 516, in __
call__
for function, args, kwargs in iterable:
File "C:\Python27\lib\site-packages\sklearn\cross_validation.py", line 1152, in <genexpr>
for train, test in cv)
File "C:\Python27\lib\site-packages\sklearn\base.py", line 43, in clone
% (repr(estimator), type(estimator)))
TypeError: Cannot clone object '<__main__.Custom instance at 0x033A6990>' (type <type 'inst
ance'>): it does not seem to be a scikit-learn estimator a it does not implement a 'get_para
ms' methods.
</code></pre>
<p>I see that it wants me to implement some methods (presumably <code>get_params</code> as well as maybe <code>set_params</code> and <code>score</code>) but I'm not sure what the right specification for making these methods is. Is there some information available on this topic? Thanks.</p> | 2014-05-26 09:22:21.897000+00:00 | 2018-05-12 20:43:05.570000+00:00 | null | python|scikit-learn | ['http://scikit-learn.org/stable/developers/contributing.html#rolling-your-own-estimator', 'http://arxiv.org/abs/1309.0238'] | 2 |
48,550,118 | <p>The max-over-time pooling is usually applied in NLP (unlike ordinary max-pool, which is common in CNNs for computer vision tasks), so the setup is a little bit different.</p>
<p>The input to the max-over-time pooling is a feature map <code>c = [c(1), ..., c(n-h+1)]</code>, which is computed over a sentence of length <code>n</code> with a filter of size <code>h</code>. The convolution operation is very similar to one with images, but in this case it's applied to 1-dimensional vector of words. This is the formula (3) in the <a href="https://arxiv.org/pdf/1408.5882.pdf" rel="noreferrer">paper</a>.</p>
<p>The max-over-time pooling operation is very simple: <code>max_c = max(c)</code>, i.e., it's a single number that takes the max over the whole feature map. The reason to do this, instead of "down-sampling" the sentence like in a CNN for images, is that in NLP sentences naturally have different lengths in a corpus. This makes the feature maps differ in size across sentences, but we'd like to reduce the tensor to a fixed size to apply a softmax or regression head in the end. As stated in the paper, it allows the network to capture the most important
feature, the one with the highest value, for each feature map.</p>
<p>Note that in computer vision, images are usually<sup>1</sup> of the same size, like <code>28x28</code> or <code>32x32</code>, that's why it is unnecessary to downsample the feature maps to <code>1x1</code> immediately.</p>
<p>Sum-pooling-over-time is the same.</p>
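<p>A tiny NumPy sketch (with made-up shapes) of why this fixes the variable-length problem:</p>
<pre><code>import numpy as np

# hypothetical feature maps from two sentences of different lengths
# (one row per filter, one column per convolution position n - h + 1)
c_short = np.random.randn(100, 6)    # 100 filters, sentence with 6 windows
c_long  = np.random.randn(100, 23)   # same filters, longer sentence

# max-over-time pooling: one number per feature map, regardless of length
max_short = c_short.max(axis=1)      # shape (100,)
max_long  = c_long.max(axis=1)       # shape (100,)

# both sentences now map to fixed-size vectors a softmax head can consume
assert max_short.shape == max_long.shape
</code></pre>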
<hr>
<p><sup>1</sup> Modern CNN can be trained with images of different size, but this requires the network to be all-convolutional, so it doesn't have any pooling layers. See <a href="https://stackoverflow.com/q/48230031/712995">this question</a> for more details.</p> | 2018-01-31 19:36:05.203000+00:00 | 2018-01-31 19:36:05.203000+00:00 | null | null | 48,549,670 | <p>I understand conceptually what is happening in a max/sum pool as a CNN layer operation, but I see this term "max pool over time", or "sum pool over time" thrown around (e.g., <a href="https://arxiv.org/pdf/1408.5882.pdf" rel="noreferrer">"Convolutional Neural Networks for Sentence Classification"</a> paper by Yoon Kim). What is the difference?</p> | 2018-01-31 19:06:58.383000+00:00 | 2019-01-23 11:02:34.877000+00:00 | 2018-01-31 20:09:01.540000+00:00 | machine-learning|neural-network|nlp|convolution|max-pooling | ['https://arxiv.org/pdf/1408.5882.pdf', 'https://stackoverflow.com/q/48230031/712995'] | 2 |
61,352,997 | <p>Accuracy can be misleading as a metric for a problem with such high class imbalance; I would use the <strong>F1</strong> score instead.</p>
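<p>As a minimal sketch (with tiny made-up label arrays), macro-averaged F1 from scikit-learn gives every class equal weight, which is what you want here:</p>
<pre><code>from sklearn.metrics import f1_score, classification_report

# tiny hypothetical true/predicted labels just to show the calls
y_true = [0, 0, 0, 0, 1, 2, 2]
y_pred = [0, 0, 0, 1, 1, 2, 0]

print(f1_score(y_true, y_pred, average='macro'))  # every class counts equally
print(classification_report(y_true, y_pred))      # per-class precision/recall/F1
</code></pre>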
<p>As for the loss, you could use the <a href="https://arxiv.org/pdf/1708.02002.pdf" rel="nofollow noreferrer"><strong>focal loss</strong></a>, a variant of the <strong>categorical cross-entropy</strong> that focuses on the <strong>least represented</strong> classes. You can find an example <a href="https://github.com/umbertogriffo/focal-loss-keras/blob/master/losses.py" rel="nofollow noreferrer">here</a>; in my experience, it helps a lot with small classes on NLP classification tasks.</p> | 2020-04-21 20:50:46.640000+00:00 | 2020-04-21 20:50:46.640000+00:00 | null | null | 59,502,005 | <p>My dataset shape is <code>(91149, 12)</code></p>
<p>I used CNN to train my classifier in text classification tasks</p>
<p>I found Training Accuracy: <code>0.5923</code> and Testing Accuracy: <code>0.5780</code></p>
<p>My Class has 9 labels as below:</p>
<pre><code>df['thematique'].value_counts()
Corporate 42399
Economie collaborative 13272
Innovation 11360
Filiale 5990
Richesses Humaines 4445
Relation sociétaire 4363
Communication 4141
Produits et services 2594
Sites Internet et applis 2585
</code></pre>
<p>The model structure:</p>
<pre><code>model = Sequential()
embedding_layer = Embedding(vocab_size, 300, weights=[embedding_matrix], input_length=maxlen , trainable=False)
model.add(embedding_layer)
model.add(Conv1D(128, 7, activation='relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(9, activation='sigmoid'))
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics= ['categorical_accuracy'])
</code></pre>
<p>My data for multilabel classification is imbalanced. I need to handle imbalanced data for multilabel classification using CNN in Keras.</p> | 2019-12-27 14:14:56.600000+00:00 | 2022-04-08 08:45:53.773000+00:00 | 2019-12-27 14:56:42.600000+00:00 | python|keras|multilabel-classification|imbalanced-data | ['https://arxiv.org/pdf/1708.02002.pdf', 'https://github.com/umbertogriffo/focal-loss-keras/blob/master/losses.py'] | 2
50,390,549 | <p>According to the last 2 lines of page 3 of the original paper <a href="https://arxiv.org/pdf/1512.03385.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1512.03385.pdf</a>, their calculation only considers multiply and add operations, whereas TensorFlow's profiler additionally counts operations such as batch norm, the max operations of pooling, and ReLU. I think that is the reason for the difference.</p> | 2018-05-17 11:40:07.060000+00:00 | 2018-05-17 11:40:07.060000+00:00 | null | null | 50,288,523 | <p>Recently I used tf.profiler to calculate the FLOPs of ResNet-v1-50. I get 7084572224 (7.08 GFLOPs?). But in the original paper it is 3.8 GFLOPs.</p>
<p>I performed the same on VGG-19 and got 5628853928 (5.63 GFLOPs?), but its real value is 19.6 billion FLOPs.
Note that all tested models are from tf.slim.</p>
<p>My code is as followed:</p>
<pre><code>run_meta = tf.RunMetadata()
im = tf.placeholder(tf.float32, [1, 224, 224, 3])
with arg_scope(resnet_v1.resnet_arg_scope(use_batch_norm=True)):
ims, endpoints = resnet_v1.resnet_v1_50(im)
print(get_num_of_params(tf.get_default_graph()))
opts = tf.profiler.ProfileOptionBuilder.float_operation()
flops = tf.profiler.profile(tf.get_default_graph(), run_meta=run_meta, cmd='op', options=opts)
print(flops.total_float_ops)
</code></pre>
<p>Please someone help me.</p> | 2018-05-11 08:50:36.323000+00:00 | 2018-05-17 11:40:07.060000+00:00 | 2018-05-11 10:09:22.597000+00:00 | tensorflow|resnet|flops | ['https://arxiv.org/pdf/1512.03385.pdf'] | 1 |
48,805,904 | <p>One approach that I think may be promising is using Dynamic Memory Networks for Question Answering. The problem they are solving is a generalized version of what you are attempting to solve. In your case you would be answering just two questions: "Which is the source?" and "Which is the destination?". Have a look at the <a href="https://arxiv.org/abs/1603.01417" rel="nofollow noreferrer">paper</a> and also this <a href="http://youtube.com/watch?v=T3octNTE7Is" rel="nofollow noreferrer">video lecture</a> that explains the same approach. </p>
<p>Seems to me it should be easy to generate a training set as long as you have enough training examples with ground truth for source and destination. </p>
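<p>For illustration only (the field names below are my own, not from the paper), the training set could simply pair each annotated sentence with both fixed questions:</p>
<pre><code># hypothetical layout of training examples for the two fixed questions
training_examples = [
    {"text": "I am travelling from India to USA",
     "question": "Which is the source?",      "answer": "India"},
    {"text": "I am travelling from India to USA",
     "question": "Which is the destination?", "answer": "USA"},
]
</code></pre>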
<p>You could also make use of the fact that you are only dealing with 2 different questions - instead of computing an embedding for the question, train two different models: one that answers which is the source, and a second that finds the destination. </p> | 2018-02-15 11:15:18.993000+00:00 | 2018-02-15 11:20:53.903000+00:00 | 2018-02-15 11:20:53.903000+00:00 | null | 48,003,074 | <p>For a text (say):</p>
<p>"I am leaving India today. I am headed to USA for a week."
"I am travelling from India to USA"</p>
<p>I need to train the machine to label USA as "Destination" and India as "Source"</p>
<p>I am using SpaCy's NER to extract the locations.</p>
<p>How should I proceed to create a training set and train it. What would be my feature vector and label vector?</p> | 2017-12-28 07:01:43.567000+00:00 | 2018-02-15 11:20:53.903000+00:00 | 2018-02-15 11:08:00.820000+00:00 | python|algorithm|machine-learning|nlp|text-mining | ['https://arxiv.org/abs/1603.01417', 'http://youtube.com/watch?v=T3octNTE7Is'] | 2 |
73,505,729 | <p>BFT in Fabric is something that is currently being worked on.</p>
<p>There is, however, an <a href="https://github.com/SmartBFT-Go/fabric/" rel="nofollow noreferrer">integration</a> of <a href="https://github.com/SmartBFT-Go/consensus/" rel="nofollow noreferrer">a BFT library</a> in Fabric which is <strong>not an official Hyperledger effort</strong>.</p>
<p>This integration is planned to be ported into the official Fabric soon.</p>
<p>The paper of the project can be found <a href="https://arxiv.org/abs/2107.06922" rel="nofollow noreferrer">here</a> and <a href="https://smartbft-go.github.io/paper.pdf" rel="nofollow noreferrer">here</a>.</p> | 2022-08-26 19:30:10.037000+00:00 | 2022-08-26 19:30:10.037000+00:00 | null | null | 73,498,918 | <p>It doesn't seem to be supported by the official documentation and git yet.</p>
<ol>
<li>Is it correct that the odring service does not support pbft yet?</li>
<li>If you have to use pbft, how should you approach it?</li>
</ol>
<p>have a good day!</p> | 2022-08-26 09:32:32.277000+00:00 | 2022-08-26 19:30:10.037000+00:00 | null | blockchain|hyperledger-fabric|ibm-blockchain | ['https://github.com/SmartBFT-Go/fabric/', 'https://github.com/SmartBFT-Go/consensus/', 'https://arxiv.org/abs/2107.06922', 'https://smartbft-go.github.io/paper.pdf'] | 4 |
46,283,822 | <p>What you can do is leverage the results of the <a href="https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma" rel="nofollow noreferrer">Johnson-Lindenstrauss lemma</a>, where you embed your dataset into a lower-dimensional space and then do the kmeans computation on the smaller dataset. For instance, if your data matrix is A you can do:</p>
<pre><code>% N is the number of data points and s is the reduced dimension
S = randn(N, s) / sqrt(s);
C = A * S;
% now you can do your kmeans computation on C
[idx, ctr] = kmeans(C, k, 'Distance', 'sqEuclidean');
</code></pre>
<p>Basically you can use the <code>idx</code> and <code>ctr</code> results for the original dataset, which will give you a (1+epsilon) approximation. You can also reach better results based on work by <a href="https://pdfs.semanticscholar.org/1e76/727be351d6289311d9d1d65c494a683ace0f.pdf" rel="nofollow noreferrer">Dan Feldman</a>, which basically says that you can compute an SVD of your data and project onto the top k/epsilon singular vectors to compute the kmeans value and get a (1+epsilon) approximation.</p>
<p><hr />
<strong>UPDATE</strong></p>
<p>Based on the comment, I'd like to suggest leveraging the coresets approach, again based on the paper by Dan Feldman et al., <a href="https://pdfs.semanticscholar.org/1e76/727be351d6289311d9d1d65c494a683ace0f.pdf" rel="nofollow noreferrer">Turning Big Data Into Tiny Data</a>. The technique provides the capability to reduce a large volume of data into a much smaller one with a provable guarantee of a (1+epsilon) approximation to the optimal kmeans solution. Moreover, you can proceed with streaming coreset construction, which allows you to maintain an <code>O(logn * epsilon)</code> approximation while streaming your data (section 10, figure 3), e.g. in your case partitioned into smaller chunks. Eventually you can run the kmeans computation on the resulting coreset. </p>
<p>You might also consider taking a look at my recent <a href="https://arxiv.org/pdf/1511.08990.pdf" rel="nofollow noreferrer">publication</a> to get more details on how to handle your case. You can also find a reference implementation in my <a href="https://github.com/C0rWin/KMeanCoreset" rel="nofollow noreferrer">github account</a> if you'd like to use it.</p> | 2017-09-18 16:08:07.830000+00:00 | 2017-09-19 01:39:13.413000+00:00 | 2017-09-19 01:39:13.413000+00:00 | null | 46,283,467 | <p>I am using matlab and I have this very, very big .mat file named MeansOfK that contains almost 5,000,000 x N entries. My test data consists of Car and Non-car. My problem is that when I try to use k-means on the MeansOfK, it always runs out of memory. </p>
<pre><code>[idx, ctr] = kmeans(MeansOfK , k, 'Distance', 'sqEuclidean');
</code></pre>
<p>My Options are</p>
<p>1.i use the divide and conquer technique wherein i partition the car and non-car to smaller partitions and put it into k-means. </p>
<p>2.I separate the car and non-car classes and try to use k-means to both classes. </p>
<p>the final output would be the combined classes of car or non-car. from the k-means process. </p>
<p>so my question is?</p>
<p>Is what i will be doing feasible?
Will it affect the output of my k-means if i partition the file rather than doing it as a whole? </p>
<p>Suggestions and answers are always appreciated :)
Thanks</p> | 2017-09-18 15:49:59.817000+00:00 | 2017-09-19 01:39:13.413000+00:00 | null | algorithm|matlab|image-processing|k-means | ['https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma', 'https://pdfs.semanticscholar.org/1e76/727be351d6289311d9d1d65c494a683ace0f.pdf', 'https://pdfs.semanticscholar.org/1e76/727be351d6289311d9d1d65c494a683ace0f.pdf', 'https://arxiv.org/pdf/1511.08990.pdf', 'https://github.com/C0rWin/KMeanCoreset'] | 5 |
42,286,944 | <p>You have to keep in mind that TensorFlow is, as far as the user is concerned, "just" a machine learning API. People may happen to use it for image classification - the 2017 Dev Summit showed medical use cases in <a href="https://www.youtube.com/watch?v=toK1OSLep3s" rel="nofollow noreferrer">skin cancer detection</a> and <a href="https://www.youtube.com/watch?v=oOeZ7IgEN4o" rel="nofollow noreferrer">retinal imaging</a> - but <em>all</em> the topics of supervised and unsupervised machine learning are candidates for TensorFlow, just like they are for any other ML library; regression of sales by advertisement budget, clustering of users in a social network and recommending books based on previous purchases via collaborative filtering, just to name a few.</p>
<p>If you heard about the recent self-driving car projects, think about obtaining steering wheel and brake control commands from a live camera feed. NVIDIA had <a href="https://arxiv.org/abs/1604.07316" rel="nofollow noreferrer">a paper on it</a>, for example.</p>
<p>One rather interesting use case are <a href="https://www.tensorflow.org/tutorials/seq2seq" rel="nofollow noreferrer">sequence to sequence models</a> to transform one arbitrary sequence of inputs to another one; according to <a href="https://www.youtube.com/watch?v=RIR_-Xlbp7s" rel="nofollow noreferrer">this video</a>, Google Translate might be taking advantage of it on the phone. If you're thinking of image and video retrieval, sequence labelling is another topic, where you train a network to describe, in human words, the content of a video. Or even natural language processing, where you try to determine the <em>concepts</em> within written text.</p>
<p>There are also papers <a href="https://arxiv.org/abs/1610.09460" rel="nofollow noreferrer">like this</a> describing the usage of recurrent models like LSTMs for energy usage prediction (Note the paper isn't specific to TensorFlow, but LSTMs are part of the core library). <a href="http://de.slideshare.net/TaegyunJeon1/electricity-price-forecasting-with-recurrent-neural-networks" rel="nofollow noreferrer">Here</a> are slides on electricity price forecasting with TensorFlow, if you're interested in it.</p> | 2017-02-16 23:53:13.217000+00:00 | 2017-02-18 01:13:05.463000+00:00 | 2017-02-18 01:13:05.463000+00:00 | null | 42,286,113 | <p>When I look at tensorflow, I find lots of cool stuff I can geek out over, but I haven't been able to figure out anywhere that I would use it in the real world. Google didn't spend a gajillion dollars on it without seeing real world applications.</p>
<p>There are lots of cool tutorials on how to build cool stuff with Tensorflow, but they start with the assumption that you are already fluent in their dialect of "greek without an R" and can extrapolate business usecases from a demo of recognizing a handwritten letter e in a fixed size cell.</p>
<p>I have built demonstration neural networks in pascal, C, C++, and Java at different times, so I have some grasp of the principles. Is it possible to express this in a manner that an old-style Pascal guy who has dabbled in the underlying technology a bit can grasp? </p> | 2017-02-16 22:44:53.690000+00:00 | 2017-02-18 01:13:05.463000+00:00 | null | tensorflow | ['https://www.youtube.com/watch?v=toK1OSLep3s', 'https://www.youtube.com/watch?v=oOeZ7IgEN4o', 'https://arxiv.org/abs/1604.07316', 'https://www.tensorflow.org/tutorials/seq2seq', 'https://www.youtube.com/watch?v=RIR_-Xlbp7s', 'https://arxiv.org/abs/1610.09460', 'http://de.slideshare.net/TaegyunJeon1/electricity-price-forecasting-with-recurrent-neural-networks'] | 7
48,804,578 | <p>Depends on your training data.</p>
<p>The embeddings are learned when you train the model. If the examples containing "dog" and "doghouse" are very similar I would expect the embeddings would be close. The embeddings are close when the semantics within the training data are close.</p>
<p>If you train a model to distinguish between animals and other objects (for example) then the two words should be fairly far away.</p>
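<p>Once the model is trained you can check this yourself; a minimal sketch (the embedding matrix and vocabulary below are random stand-ins for the ones you would pull out of your trained model):</p>
<pre><code>import numpy as np

# hypothetical stand-ins for a trained embedding matrix and its vocabulary
vocab = {"dog": 0, "doghouse": 1, "guitar": 2}
embedding_weights = np.random.randn(len(vocab), 8)   # (vocab_size, embedding_dim)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

dog = embedding_weights[vocab["dog"]]
doghouse = embedding_weights[vocab["doghouse"]]
guitar = embedding_weights[vocab["guitar"]]

# after real training, words used similarly in the training data
# should give a higher cosine similarity than unrelated words
print(cosine(dog, doghouse), cosine(dog, guitar))
</code></pre>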
<p>For the second part, given that you have reasonable training examples that reflect the relation you want over the strings you should get embeddings that reflect that relationship. Check out <a href="https://arxiv.org/pdf/1310.4546.pdf" rel="nofollow noreferrer">this paper</a> where they train embedding to reflect many relationships between the words (for example capital to country).</p> | 2018-02-15 10:09:48.707000+00:00 | 2018-02-15 10:09:48.707000+00:00 | null | null | 48,803,070 | <p>I have a project where I use data where we have numeric features and String features to make a binary classifier. And I was reading the following explanation about features columns in tensorflow :<a href="https://www.tensorflow.org/get_started/feature_columns" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/feature_columns</a>.</p>
<p>I have a problem to understand exactly what embedding columns will do to String features:
in the tensorflow example, we have a feature whose values are in ["dog","spoon","scissors","guitar"]. We use a categorical column to convert the Strings to integers which are index to a lookup table where each String is finally mapped to a vector of low dimensions (initialized with random float numbers). It is said that the assignments of the embeddings vectors happen during training and that embedding columns increase our model's capabilities, since an embeddings vector learns new relationships between categories from the training data.</p>
<p><a href="https://i.stack.imgur.com/j9GLp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j9GLp.jpg" alt="embedding columns in tensorflow"></a></p>
<p>My question is : do embedding columns manage to create vectors that represent the similarity between the Strings as "the vector learns new relationships between categories" ? For instance, if we add "doghouse" to the above example vocabulary, the distance between the embeddings vectors of "dog" and "doghouse" will be shorter than with the vectors of the other words.</p>
<p>if I push the question further, we can have a String categorical feature whose values are in ["red circle","red square","blue circle","blue square"], and the question will be: will the embedding vectors express the relationship between categories with the similarity with the color and the shape?</p>
<p>Thanks in advance for your help.</p> | 2018-02-15 08:43:24.403000+00:00 | 2018-02-15 10:09:48.707000+00:00 | null | tensorflow | ['https://arxiv.org/pdf/1310.4546.pdf'] | 1 |
56,137,398 | <p><strong>Can you please explain more about your use-case?</strong> </p>
<p>"Introduction to OXPath" document provided in your question is not available anymore. This could be because a more recent version have been released in 2018. </p>
<p>Please see here: <a href="https://arxiv.org/pdf/1806.10899.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1806.10899.pdf</a></p>
<p>Referring to this document, in section 3.7, page 43 reads: </p>
<blockquote>
<pre><code>Different components of OXPath are united under the umbrella name OXPath Project.
The current version of OXPath (2.2.0) and OXPath CLI (1.0.1) are generated by OXPath
Project 1.0.3. It consists of the following main components:
OXPath Core (v.2.2.0) implementing the core functionality of the OXPath language.
WebAPI (v.1.4.0) implementing an interface to web browsers based on Selenium 2.53.1
(only Firefox 47.0.1 is currently supported).
Output Handlers are a set of modules for serialising the output tree of OXPath into
different formats. The following output handlers are available: XMLOutputHandler
for XML (see Section 1.3.1 on page 13), JsonOutputHandler for JSON (see Section 1.3.2 on page 13), RecStreamCSVOutputHandler for rscsv
</code></pre>
<p>(see Section 1.3.3
on page 14), HierarchyCSVOutputHandler for hcsv,
RecStreamJDBCOutputHandler for rsjdbc (see Section 1.3.4 on page 16), and
HierarchyJDBCOutputHandler for hjdbc.
OXPath CLI (v.1.0.1) is a command line interface for OXPath.
Java documentation API is available at <a href="https://oxpath.github.io/api-docs/1.0" rel="nofollow noreferrer">https://oxpath.github.io/api-docs/1.0</a>.
3/javadoc/.</p>
</blockquote>
<p>I don't think there is a JavaScript API currently, but they could be referring to the fact that you can use Java classes from JavaScript. See here: <a href="https://stackoverflow.com/questions/24577613/use-a-jar-in-javascript-through-java-scriptengine">Use a jar in JavaScript through Java ScriptEngine</a></p>
<p>However, considering that the underlying engine relies on Selenium opening a browser and navigating the various URLs in your OXPath query, even when using an X virtual framebuffer (Xvfb), <strong>OXPath would be unusable in any client-side code environment</strong> </p>
<p>In java I would use it like so</p>
<p><code>// load from API
package uk.ac.ox.cs.diadem.oxpath.oxpath-example;
// invoke OXPath
OXPath.ENGINE.evaluate(input, browser, outputHandler);</code></p>
<p>The complete documentation can be found <a href="http://www.oxpath.org/papers/2017-IntroductionToOxpath-ed1.pdf" rel="nofollow noreferrer">here</a>. Page 30 (last paragraph before section 3.2) says I can embed it in JavaScript, but there is only a java example.</p>
<p>How can I load OXPath into a JavaScript project?</p>
<p>EDIT</p>
<p>I have tried this:</p>
<pre><code>var oxpath = require("uk.ac.ox.cs.diadem.oxpath"); // error
</code></pre>
<p>But it throws an error:</p>
<pre><code>Error: Cannot find module 'uk.ac.ox.cs.diadem.oxpath'
</code></pre> | 2018-06-21 13:44:43.483000+00:00 | 2019-05-14 19:32:01.250000+00:00 | 2018-06-21 13:51:35.277000+00:00 | javascript|java|node.js|xpath | ['https://arxiv.org/pdf/1806.10899.pdf', 'https://oxpath.github.io/api-docs/1.0', 'https://stackoverflow.com/questions/24577613/use-a-jar-in-javascript-through-java-scriptengine'] | 3 |
49,361,723 | <p>The only peers that execute chaincode are the endorsing peers; the rest only validate at commit time whether the transaction satisfies the endorsement policy. And in order for a peer to be able to endorse a transaction proposal, someone (an admin) has to install the chaincode on it.</p>
<p>You can find more details in <a href="http://hyperledger-fabric.readthedocs.io/en/release-1.0/arch-deep-dive.html" rel="nofollow noreferrer">documentation</a> or there is a nice <a href="https://medium.com/kokster/hyperledger-fabric-endorsing-transactions-3c1b7251a709" rel="nofollow noreferrer">blog post</a> which also describes it pretty well.</p>
<p>Basically flow from high level perspective works as following:</p>
<ol>
<li>Client submits transaction proposal to endorsing peers</li>
<li>Endorsing peers invokes chaincodes</li>
<li>Endorsing peers signs over the execution results</li>
<li>Client gathers all results and check consistency</li>
<li>Client submits transaction to the ordering service</li>
<li>Ordering service cuts new block with several transactions</li>
<li>Peer gets new block via dissemination layer</li>
<li>Peer validates each transaction</li>
<li>Eventually block is committed where all valid transaction changes the sate according to the simulation results from #2.</li>
</ol>
<p>There is some more in-depth detail published in the <a href="https://arxiv.org/pdf/1801.10228.pdf" rel="nofollow noreferrer">official Fabric paper</a>.</p> | 2018-03-19 11:20:39.963000+00:00 | 2018-03-19 11:45:51.630000+00:00 | 2018-03-19 11:45:51.630000+00:00 | null | 49,354,661 | <p>I have a question.
I want to know whether all nodes execute the chain code or only endorsement nodes execute the chain code?</p> | 2018-03-19 01:43:43.363000+00:00 | 2018-03-19 11:45:51.630000+00:00 | null | hyperledger-fabric|hyperledger-composer | ['http://hyperledger-fabric.readthedocs.io/en/release-1.0/arch-deep-dive.html', 'https://medium.com/kokster/hyperledger-fabric-endorsing-transactions-3c1b7251a709', 'https://arxiv.org/pdf/1801.10228.pdf'] | 3 |
32,379,979 | <p>For your task, a CNN is definitely worth a try!</p>
<p>Many researchers have used networks pretrained for image classification and obtained state-of-the-art results on fine-grained classification, for example when classifying <a href="http://arxiv.org/abs/1406.2952" rel="nofollow">bird species</a> or cars. </p>
<p>Now, your task is not classification, but it is related. You can think about similarity as some geometric distance between features, which are basically vectors. Thus, you may carry out some experiments computing the distance between the feature vectors for all your training images (the reference) and the feature vector extracted from the query image.</p>
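<p>A minimal sketch of such an experiment (with random arrays standing in for real CNN features):</p>
<pre><code>import numpy as np

# hypothetical CNN features: one row per reference image, plus a query vector
reference_features = np.random.randn(1000, 4096)
query_feature = np.random.randn(4096)

# cosine similarity between the query and every reference image
ref_norm = reference_features / np.linalg.norm(reference_features, axis=1, keepdims=True)
q_norm = query_feature / np.linalg.norm(query_feature)
similarities = ref_norm @ q_norm

top5 = np.argsort(-similarities)[:5]   # indices of the 5 most similar images
print(top5)
</code></pre>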
<p>CNN features extracted from the first layers of the net tend to be more related to color or other low-level graphical traits, rather than more "semantic" ones.</p>
<p>Alternatively, there is some work on learning directly a similarity metric through CNN, see <a href="http://arxiv.org/abs/1404.4661" rel="nofollow">here</a> for example.</p> | 2015-09-03 15:38:10.540000+00:00 | 2015-09-03 15:38:10.540000+00:00 | null | null | 30,403,937 | <p>I am doing research in the field of computer vision, and am working on a problem related to finding visually similar images to a query image. For example, finding t-shirts of similar colour with similar patterns (Striped/ Checkered), or shoes of similar colour and shape, and so on. </p>
<p>I have explored hand-crafted image features such as Color Histograms, Texture features, Shape features (Histogram of Oriented Gradients), SIFT and so on. I have also read up literature about Deep Neural Networks (Convolutional Neural Networks), which have been trained on massive amounts of data and are currently state of the art in Image Classification.</p>
<p>I was wondering if the same features (extracted from the CNN's) can also be used for my project - finding fine-grained similarities between images. From what I understand, the CNNs have learnt good representative features that can help classify images - for example, be it a red shirt or a blue shirt or an orange shirt, it is able to identify that the image is a shirt. However it doesn't understand that an orange shirt looks more similar to a red shirt than a blue shirt does, and hence it is not able to capture these similarities. </p>
<p>Please correct me if I am wrong. I would like to know if there are any Deep Neural Networks that capture these similarities, and have proven to be superior to the hand-crafted features. Thanks in advance.</p> | 2015-05-22 18:53:04.287000+00:00 | 2017-01-17 01:16:45.583000+00:00 | null | image-processing|computer-vision|neural-network|feature-extraction|deep-learning | ['http://arxiv.org/abs/1406.2952', 'http://arxiv.org/abs/1404.4661'] | 2 |
3,429,567 | <p>This link contains a list of subfields: <a href="http://arxiv.org/corr/home" rel="nofollow noreferrer">http://arxiv.org/corr/home</a>, I won't reproduce them here as the link may change, and it would be redundant.</p>
<p>Also, I'm reminded of the quote of someone, can't remember who, along the lines of:</p>
<blockquote>
<p>Mathematics is whatever
mathematicians do</p>
</blockquote>
<p>It would seem to apply.</p> | 2010-08-07 07:29:49.750000+00:00 | 2010-08-07 07:35:10.253000+00:00 | 2010-08-07 07:35:10.253000+00:00 | null | 3,429,560 | <p>What is the technical definition of theoretical computer science? (Or, what should it be?)</p>
<p>What main subfields does it include, and what is the commonality that separates them from the rest of computer science?</p>
<p>More specifically: if some particular research has direct practical motivations, goals and outcomes but mostly involves very abstract methods, is it theoretical computer science or not? </p>
<p>Two examples to consider: </p>
<p>"Dual quaternions for rigid transformation blending" (Better mathematical representation of rotation and transform for animation)
<a href="https://www.cs.tcd.ie/publications/tech-reports/reports.06/TCD-CS-2006-46.pdf" rel="nofollow noreferrer">https://www.cs.tcd.ie/publications/tech-reports/reports.06/TCD-CS-2006-46.pdf</a> </p>
<p>"Relational Semantics for Effect-Based Program Transformations
with Dynamic Allocation" (Complier optimisation via denotational semantics): <a href="http://research.microsoft.com/pubs/67977/ppdprelational.pdf" rel="nofollow noreferrer">http://research.microsoft.com/pubs/67977/ppdprelational.pdf</a></p>
<p>[The Wikipedia article gives only a vague definition and a long list of subfields. Should just accept that there's no better definition than this? <a href="http://en.wikipedia.org/wiki/Theoretical_computer_science" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Theoretical_computer_science</a> ]</p>
<p>EDIT: I guess this question comes down to "What does the term 'theory' mean in the context of computer science?". Looking at the 6 different meanings of the word at <a href="http://en.wiktionary.org/wiki/theory" rel="nofollow noreferrer">wiktionary</a>, I don't think any of them fully fits. I guess the mathematical sense of a theory fits well for completely mathematical fields but not for others, and for VLSI, machine learning and computational biology from <a href="http://en.wikipedia.org/wiki/Theoretical_computer_science" rel="nofollow noreferrer">wikipedia:TCS</a> it basically doesn't fit.</p> | 2010-08-07 07:28:12.773000+00:00 | 2010-11-12 23:04:31.093000+00:00 | 2010-08-08 05:28:14.387000+00:00 | programming-languages|computer-science|theory|complexity-theory | ['http://arxiv.org/corr/home'] | 1 |
67,179,224 | <p>This is the <a href="https://en.wikipedia.org/wiki/Change-making_problem" rel="nofollow noreferrer">Change-making problem</a>. It can be solved using the greedy approach which works really well when the problem has the greedy choice.</p>
<p>This algorithm works for "<a href="https://arxiv.org/abs/0809.0400" rel="nofollow noreferrer">canonical coin systems</a>", i.e. coin systems for which the greedy choice always yields an optimal solution (most real-world currencies, such as [1, 2, 5, 10, 20, 50], are canonical).</p>
<p>You can see that if you give as input, let's say, <strong>Amount:</strong> 40 and <strong>Coins_available:</strong> [20, 10, 5, 2, 1], then it works perfectly,</p>
<p>but for <strong>Amount:</strong> 40 and <strong>Coins_available:</strong> [25, 20, 10, 5, 2, 1] - Check to see what we get.</p>
<pre><code>def min_coin(amount, coins_available):
# Making sure your coin array is sorted in descending order
# This way we make sure to include the largest possible coin each time
coins_available.sort(reverse=True)
# Initializing our array that will hold the coins we choose
selected_coins = []
for i in range(len(coins_available)):
while (amount >= coins_available[i]):
# Evey time we pick a coin, we reduce the amount left by that coin's amount
amount -= coins_available[i]
selected_coins.append(coins_available[i])
if(amount == 0):
break;
for coin in selected_coins:
print(coin)
</code></pre>
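<p>For example, running the second input from above shows where greedy breaks down on a non-canonical system:</p>
<pre><code>min_coin(40, [25, 20, 10, 5, 2, 1])
# prints 25, 10, 5  -> 3 coins, although 20 + 20 would need only 2
min_coin(40, [20, 10, 5, 2, 1])
# prints 20, 20     -> optimal
</code></pre>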
<p>In this solution I simplified some statements and removed unnecessary variables.</p>
<p>Try to avoid using global variables. You can do the printing from within the function, or you can just return the list and use it however you wish.</p>
<p>The way it works is it looks for the largest (in terms of value) coin we have, then checks if we can give it as change, if yes, we give it as change, reduce the amount left and check again. Once the amount left is less that the largest coin we move to the second largest.</p>
<p>This way we know our program is correct because the underlying algorithm is correct. Try to answer yourself why.</p>
<p>Hint:</p>
<blockquote class="spoiler">
<p> See if you can assume that the solution is wrong and come up with a "correct" one; if you can't, then it's correct.</p>
</blockquote>
<p>I hope this helped in understanding.</p> | 2021-04-20 12:45:41.280000+00:00 | 2021-04-20 12:45:41.280000+00:00 | null | null | 67,177,803 | <p>This is the <em>minimum coins</em>, problem.</p>
<p>I am not able to handle the case when the return type of function is <code>None</code>. I tried something like <code>if min_count != None && min_count <= counter:</code> but this is not so proper.</p>
<p>Please tell me the correct way of doing this.</p>
<pre class="lang-py prettyprint-override"><code># min coin problem using a recursive approach
# this program is giving me all the solutions available in all combination
counter = 10000
def min_coin(Amount,coins_available,count):
global counter
if (Amount == 0):
return count
for i in coins_available:
if Amount >= i:
min_count = min_coin(Amount - i,coins_available,count+1)
if min_count <= counter:
counter = min_count
Amount = 2
coins_available = [1,2]
count = 0
min_coin(Amount,coins_available,count)
print(counter)
</code></pre> | 2021-04-20 11:13:28.973000+00:00 | 2021-04-20 12:45:41.280000+00:00 | 2021-04-20 12:02:41.313000+00:00 | python | ['https://en.wikipedia.org/wiki/Change-making_problem', 'https://arxiv.org/abs/0809.0400'] | 2 |
50,085,338 | <p>There is some work around that, starting with the paper of <a href="https://arxiv.org/abs/1312.6034" rel="nofollow noreferrer">Simonyan</a>, an approach that was popularized with the "Deep Dream" buzz about 2/3 years ago. The basic idea is to start from a random image and optimize the <em>input data</em>, rather than the network itself. So in your case, you would optimize the image to maximize the probability of the image being recognized as a cat.</p>
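<p>A minimal sketch of that idea in PyTorch (the class index, model, and hyperparameters are just assumptions; any pretrained classifier would do):</p>
<pre><code>import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()  # any pretrained classifier
target_class = 281  # assumed ImageNet index for "tabby cat"

image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = -model(image)[0, target_class]  # maximize the "cat" logit
    loss.backward()
    optimizer.step()
</code></pre>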
<p>Alas, a classifier is not a generator, and therefore as you may recall from all the "deep dream" movies, the resulting images are all but realistic. (You can find such images in the paper cited above). Realistic sample generation, if that is your goal, is nowadays often achieved with GANs and their variants.</p> | 2018-04-29 09:53:09.110000+00:00 | 2018-04-29 09:53:09.110000+00:00 | null | null | 50,085,171 | <p>In a classifying image context (Tensorflow), imagine you have a retrained model with animals for example, is it possible to "ask" what a cat looks like for your model ?
I don't want to give it a picture and it recognizes a cat, I want it to "describe" what a cat is.
I was just wondering...
Thanks ! =D</p> | 2018-04-29 09:32:56.127000+00:00 | 2018-04-29 09:53:09.110000+00:00 | null | tensorflow|machine-learning|classification | ['https://arxiv.org/abs/1312.6034'] | 1 |
44,986,851 | <p>First of all, let me mention that I think METIS is the wrong tool here, because it is used for <strong>graph partitioning</strong>, where the emphasis is on <em>minimizing the number of edges between partitions</em> while <em>keeping the partitions balanced</em> (more or less equal sizes)</p>
<p>What you probably want to do is <strong>community detection</strong> within social networks, i.e. the search for clusters which <em>maximize internal connectivity</em> (large number of edges between nodes from the same cluster) and <em>minimize external connectivity</em> (small number of edges between different clusters).<br>
This can be achieved by maximizing the so-called <a href="https://en.wikipedia.org/wiki/Modularity_(networks)" rel="nofollow noreferrer">Modularity</a> of the clustering</p>
<p>There are several approaches to tackle this problem, a popular heuristic being <a href="https://arxiv.org/abs/0709.2938" rel="nofollow noreferrer">Label propagation</a>.<br>
If you don't want to implement the algorithm yourself, I would recommend using a framework like <a href="https://networkit.iti.kit.edu/" rel="nofollow noreferrer">NetworKit</a> (unfortunately, I don't know any other such frameworks yet), which implements <a href="https://networkit.iti.kit.edu/api/doxyhtml/class_networ_kit_1_1_l_p_degree_ordered.html" rel="nofollow noreferrer">Label propagation</a>, some <a href="https://networkit.iti.kit.edu/api/doxyhtml/class_networ_kit_1_1_parallel_agglomerative_clusterer.html" rel="nofollow noreferrer">modularity-based algorithms</a> and many helpful tools.</p>
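<p>If you just want to try label propagation quickly, a minimal sketch with networkx (a lighter-weight alternative to the framework above; the karate-club graph is only a stand-in for your own network):</p>
<pre><code>import networkx as nx
from networkx.algorithms.community import label_propagation_communities

G = nx.karate_club_graph()  # replace with your own social graph
for i, community in enumerate(label_propagation_communities(G)):
    print(i, sorted(community))
</code></pre>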
<p>But back to your original question:</p>
<p><strong>What is <code>-ptype=rb/kway</code>?</strong></p>
<p>There are multiple ways how you can approach the graph partitioning problem: You can either try to partition the graph into your desired number of partitions directly (k-way partitioning) or you can split the graph in half repeatedly until you have the desired number of partitions (recursive bisection, rb)</p>
<p><strong>What is the jth constraint?</strong></p>
<p>METIS allows you to try and optimize multiple balance constraints at the same time, i.e. if you have multiple types of calculations on the graph that should all be more or less balanced among the compute nodes.</p>
<p>See the manual:</p>
<blockquote>
<p>Many important types of multi-phase and multi-
physics computations require that multiple quantities be load balanced simultaneously.
[...]<br>
METIS includes partitioning routines that can be used to partition a graph in the presence of such multiple balancing
constraints. Each vertex is now assigned a vector of m weights and the objective of the partitioning routines is
to minimize the edge-cut subject to the constraints that each one of the m weights is equally distributed among the
domains.</p>
</blockquote>
<p>EDIT: Since you clarified that you wanted to look at a <strong>fixed number of clusters</strong>, I see how graph partitioning could be helpful here. Let me illustrate what <code>ufactor</code> means:</p>
<p>The <em>imbalance</em> of a partitioned graph is (in this simple case) computed as the maximum of the imbalance for each partition, which is roughly the quotient <code>partition size / average partition size</code>. So if we allow a maximum imbalance of 2, this means that the largest partition is twice as big as the average partition. Note however that <code>ufactor</code> doesn't specify the imbalance directly, it specifies how many permille away from 1 the imbalance is allowed to be.
So <code>ufactor=143</code> actually means that your maximal allowed imbalance is 1.143, which makes sense since your clusters are not that far from each other. So in your case, you will probably use larger values for ufactor to allow the groups to be of quite different sizes.</p>
<p><strong>Consequences of large imbalance</strong></p>
<p>If your imbalance is too large, it might happen that all the strongly-connected parts land in the same partition while only isolated nodes are put in the other partitions. This is due to the fact that the algorithm tries to minimize the number of <em>cut edges</em> between different partitions, which will be lower if we put all the high-degree nodes in the same partition.</p>
<p><strong>What about spectral partitioning, ...?</strong></p>
<p>The general approach of METIS works as follows:<br>
Most input graphs are too large to partition directly, which is why so-called multilevel methods are used:</p>
<ul>
<li>The graph is first <em>coarsened</em> (nodes are combined while trying to preserve the graph structure) until its size becomes feasible to partition directly</li>
<li>The coarsest graph is partitioned using an <em>initial partitioning</em> technique, where we could use a variety of approaches (combinatorial bisection, spectral bisection, exact solutions using ILPs, ...).</li>
<li>The graph is then <em>uncoarsened</em>, where in each step a small number of nodes are moved from partition to partition in a <em>local search</em> to improve the overall edge cut.</li>
</ul>
<p><strong>My personal recommendation</strong></p>
<p>I should however note that while graph partitioning might be a valid model for your case, METIS itself might not be the ideal implementation for you:</p>
<p>As you can read on the METIS homepage, it is mostly used for rather sparse graphs ('finite element methods, linear programming, VLSI, and transportation'), whereas social networks are much denser and have a different structure (degrees follow a power-law distribution)</p>
<p>The coarsening approach of METIS uses <em>heavy edge matching</em> to combine nodes which are somehow close together, which works great for the intended applications, for social networks however, <em>clustering-based coarsening</em> techniques might prove more efficient.<br>
Another library that is a bit slower in general, but implements some presets especially for social networks is <a href="http://algo2.iti.kit.edu/kahip/" rel="nofollow noreferrer">KaHIP</a>, see the <a href="http://algo2.iti.kit.edu/schulz/software_releases/kahipv2.00.pdf" rel="nofollow noreferrer">manual</a> for details.</p>
<p>(I should mention however that I am biased in this regard, since I worked extensively with this library ;-) )</p> | 2017-07-08 13:38:44.113000+00:00 | 2017-07-11 22:17:32.593000+00:00 | 2017-07-11 22:17:32.593000+00:00 | null | 44,986,614 | <p>I have been using <code>METIS</code> for clustering social media users.</p>
<p>By default, it was outputting clusters with same number of vertices in each side, which is not ideal in real world scenario. So, I was trying to find way to loosen the constraint of "same number of vertices" and get possible imbalance partition with minimized cut value.</p>
<p>I find a parameter <code>ufactor</code> in the manual which is suitable(I think) for my case but I did not grasp what it is really doing. I have large graph and tried with some value of <code>ufactor</code>. For one data set <code>ufactor=1000</code> works very well but for another dataset it could not even partition the graph. I can not interpret this result as i did not understand what it's really doing. Here is what i find in the manual about this:</p>
<blockquote>
<p>Specifies the maximum allowed load imbalance among the partitions. A value of x indicates that the
allowed load imbalance is (1 + x)/1000. The load imbalance for the jth constraint is defined to be
max_i(w[j, i])/t[j, i]), where w[j, i] is the fraction of the overall weight of the jth constraint that
is assigned to the ith partition and t[j, i] is the desired target weight of the jth constraint for the
ith partition (i.e., that specified via -tpwgts). For -ptype=rb, the default value is 1 (i.e., load
imbalance of 1.001) and for -ptype=kway, the default value is 30 (i.e., load imbalance of 1.03).</p>
</blockquote>
<p>Can anybody help me to interpret this? Here, what is <code>jth</code> constraints? what is <code>-ptype=rb/kway</code>?</p> | 2017-07-08 13:13:37.277000+00:00 | 2017-07-11 22:17:32.593000+00:00 | 2017-07-08 13:26:29.933000+00:00 | cluster-analysis|metis | ['https://en.wikipedia.org/wiki/Modularity_(networks)', 'https://arxiv.org/abs/0709.2938', 'https://networkit.iti.kit.edu/', 'https://networkit.iti.kit.edu/api/doxyhtml/class_networ_kit_1_1_l_p_degree_ordered.html', 'https://networkit.iti.kit.edu/api/doxyhtml/class_networ_kit_1_1_parallel_agglomerative_clusterer.html', 'http://algo2.iti.kit.edu/kahip/', 'http://algo2.iti.kit.edu/schulz/software_releases/kahipv2.00.pdf'] | 7 |
34,146,642 | <p>I searched for more papers and found some that are related to the subject. The main topics of my question were to:</p>
<ol>
<li>find a way to train the network efficiently with a small dataset</li>
<li>find a way to build a huge dataset with little human effort</li>
</ol>
<p>There were several papers, and two of them helped me a lot. These are the links:</p>
<p><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.9632&rep=rep1&type=pdf" rel="nofollow">Explanation-Based Neural Network Learning for Robot Control</a></p>
<p><a href="http://arxiv.org/pdf/1509.06825v1.pdf" rel="nofollow">Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700 Robot Hours</a></p> | 2015-12-08 01:51:27.283000+00:00 | 2015-12-08 01:51:27.283000+00:00 | null | null | 34,116,592 | <p>I am trying to train the robot for specific actions such as grasping or pointing by using the RNN.
The robot is composed of one arm and a head containing a camera. The workspace will be a small table on which the arm and objects are located.
The input of the recurrent neural network will be the image frame from the camera at every time step, and the output will be the target motor angle of the robot arm for the next frame.
When the current image frame is fed to the network, the network outputs the arm's motor value for the next frame. And when the arm reaches the next position, the input frame at that position again goes to the network, which again yields the next motor output.</p>
<p>However, when making the data for training, I have to make (image, motor angle) pairs for every position in the workspace. Even though the network can do some generalization by itself, the data needed is still too much and it takes lots of time since there are too many trajectories. </p>
<p>Generalizing the problem I have, the time for getting training data for network is too much. Is there any way or method that can train network with small size dataset? Or making huge dataset within relatively small human intervention? </p> | 2015-12-06 10:53:08.883000+00:00 | 2015-12-08 02:04:27.640000+00:00 | 2015-12-08 01:44:11.333000+00:00 | neural-network|robotics|recurrent-neural-network | ['http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.9632&rep=rep1&type=pdf', 'http://arxiv.org/pdf/1509.06825v1.pdf'] | 2 |
33,950,177 | <p><strong>Update July 2016</strong> The easiest way to use batch normalization in TensorFlow is through the higher-level interfaces provided in either <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/layers.py" rel="noreferrer">contrib/layers</a>, <a href="http://tflearn.org/layers/normalization/" rel="noreferrer">tflearn</a>, or <a href="https://github.com/tensorflow/models/blob/master/inception/inception/slim/ops.py" rel="noreferrer">slim</a>.</p>
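<p>For example, with <code>contrib/layers</code> a single call is usually enough (a sketch, assuming a TF 1.x graph and an <code>is_training</code> placeholder):</p>
<pre><code>import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 64])  # e.g. conv feature maps
is_training = tf.placeholder(tf.bool)

# adds the beta/gamma variables and the moving-average updates for you
y = tf.contrib.layers.batch_norm(x, decay=0.99, center=True, scale=True,
                                 is_training=is_training,
                                 updates_collections=None)
</code></pre>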
<p><strong>Previous answer if you want to DIY</strong>:
The documentation string for this has improved since the release - see the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/nn_ops.cc#L65" rel="noreferrer">docs comment in the master branch</a> instead of the one you found. It clarifies, in particular, that it's the output from <code>tf.nn.moments</code>.</p>
<p>You can see a very simple example of its use in the <a href="https://github.com/tensorflow/tensorflow/blob/3972c791b9f4d9a61b9ad6399b481df396f359ff/tensorflow/python/ops/nn_test.py#L518" rel="noreferrer">batch_norm test code</a>. For a more real-world use example, I've included below the helper class and use notes that I scribbled up for my own use (no warranty provided!):</p>
<pre class="lang-py prettyprint-override"><code>"""A helper class for managing batch normalization state.
This class is designed to simplify adding batch normalization
(http://arxiv.org/pdf/1502.03167v3.pdf) to your model by
managing the state variables associated with it.
Important use note: The function get_assigner() returns
an op that must be executed to save the updated state.
A suggested way to do this is to make execution of the
model optimizer force it, e.g., by:
update_assignments = tf.group(bn1.get_assigner(),
bn2.get_assigner())
with tf.control_dependencies([optimizer]):
optimizer = tf.group(update_assignments)
"""
import tensorflow as tf
class ConvolutionalBatchNormalizer(object):
"""Helper class that groups the normalization logic and variables.
Use:
ewma = tf.train.ExponentialMovingAverage(decay=0.99)
bn = ConvolutionalBatchNormalizer(depth, 0.001, ewma, True)
update_assignments = bn.get_assigner()
x = bn.normalize(y, train=training?)
(the output x will be batch-normalized).
"""
def __init__(self, depth, epsilon, ewma_trainer, scale_after_norm):
self.mean = tf.Variable(tf.constant(0.0, shape=[depth]),
trainable=False)
self.variance = tf.Variable(tf.constant(1.0, shape=[depth]),
trainable=False)
self.beta = tf.Variable(tf.constant(0.0, shape=[depth]))
self.gamma = tf.Variable(tf.constant(1.0, shape=[depth]))
self.ewma_trainer = ewma_trainer
self.epsilon = epsilon
self.scale_after_norm = scale_after_norm
def get_assigner(self):
"""Returns an EWMA apply op that must be invoked after optimization."""
return self.ewma_trainer.apply([self.mean, self.variance])
def normalize(self, x, train=True):
"""Returns a batch-normalized version of x."""
if train:
mean, variance = tf.nn.moments(x, [0, 1, 2])
assign_mean = self.mean.assign(mean)
assign_variance = self.variance.assign(variance)
with tf.control_dependencies([assign_mean, assign_variance]):
return tf.nn.batch_norm_with_global_normalization(
x, mean, variance, self.beta, self.gamma,
self.epsilon, self.scale_after_norm)
else:
mean = self.ewma_trainer.average(self.mean)
variance = self.ewma_trainer.average(self.variance)
local_beta = tf.identity(self.beta)
local_gamma = tf.identity(self.gamma)
return tf.nn.batch_norm_with_global_normalization(
x, mean, variance, local_beta, local_gamma,
self.epsilon, self.scale_after_norm)
</code></pre>
<p>Note that I called it a <code>ConvolutionalBatchNormalizer</code> because it pins the use of <code>tf.nn.moments</code> to sum across axes 0, 1, and 2, whereas for non-convolutional use you might only want axis 0.</p>
<p>Feedback appreciated if you use it.</p> | 2015-11-27 04:16:11.250000+00:00 | 2016-07-14 21:21:01.993000+00:00 | 2016-07-14 21:21:01.993000+00:00 | null | 33,949,786 | <p>I would like to use <em>batch normalization</em> in TensorFlow. I found the related C++ source code in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/nn_ops.cc" rel="noreferrer"><code>core/ops/nn_ops.cc</code></a>. However, I did not find it documented on tensorflow.org.</p>
<p>BN has different semantics in MLP and CNN, so I am not sure what exactly this BN does.</p>
<p>I did not find a method called <code>MovingMoments</code> either.</p> | 2015-11-27 03:17:52.330000+00:00 | 2018-07-11 17:34:39.070000+00:00 | 2018-07-08 14:42:24.463000+00:00 | python|tensorflow | ['https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/layers.py', 'http://tflearn.org/layers/normalization/', 'https://github.com/tensorflow/models/blob/master/inception/inception/slim/ops.py', 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/nn_ops.cc#L65', 'https://github.com/tensorflow/tensorflow/blob/3972c791b9f4d9a61b9ad6399b481df396f359ff/tensorflow/python/ops/nn_test.py#L518'] | 5 |
35,050,572 | <p>When N is very large, the normal method that randomly shuffles the N numbers and selects, say, the first k numbers can be prohibitive because of its space complexity. The following algorithm requires only O(k) for both time and space complexity.</p>
<p><a href="http://arxiv.org/abs/1512.00501" rel="nofollow">http://arxiv.org/abs/1512.00501</a></p>
<pre><code>import random

def random_selection_indices(num_samples, N):
modified_entries = {}
seq = []
for n in xrange(num_samples):
i = N - n - 1
j = random.randrange(i)
# swap a[j] and a[i]
a_j = modified_entries[j] if j in modified_entries else j
a_i = modified_entries[i] if i in modified_entries else i
if a_i != j:
modified_entries[j] = a_i
elif j in modified_entries: # no need to store the modified value if it is the same as index
modified_entries.pop(j)
if a_j != i:
modified_entries[i] = a_j
elif i in modified_entries: # no need to store the modified value if it is the same as index
modified_entries.pop(i)
seq.append(a_j)
return seq
</code></pre> | 2016-01-28 00:02:08.350000+00:00 | 2016-01-28 00:02:08.350000+00:00 | null | null | 48,087 | <p>I need a quick algorithm to select 5 random elements from a generic list. For example, I'd like to get 5 random elements from a <code>List<string></code>.</p> | 2008-09-07 03:12:28.120000+00:00 | 2022-06-16 12:26:47.800000+00:00 | 2016-07-21 12:27:11.983000+00:00 | c#|algorithm|collections|random|element | ['http://arxiv.org/abs/1512.00501'] | 1 |
66,959,161 | <p>The <code>SmoothGradient</code> interpreter adds random noise to embeddings. It is by design not deterministic. For details, you might have to read the paper: <a href="https://arxiv.org/abs/1706.03825" rel="nofollow noreferrer">https://arxiv.org/abs/1706.03825</a></p>
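<p>If repeatable scores matter more than the smoothing, the interpreter class can simply be swapped; a minimal sketch reusing the <code>predictor</code> and <code>words</code> from the question's snippet:</p>
<pre><code>from allennlp.interpret.saliency_interpreters import SimpleGradient

interpreter = SimpleGradient(predictor)  # `predictor` as built in the question
saliency_scores = interpreter.saliency_interpret_from_json({'sentence': words})
</code></pre>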
<p>If you need deterministic results, try <code>SimpleGradient</code> instead.</p> | 2021-04-05 20:08:23.853000+00:00 | 2021-04-05 20:08:23.853000+00:00 | null | null | 66,715,946 | <p>I am trying to get the saliency score for sentiment analysis task. Every time I run the code I get different saliency scores. Should this be the case? I am attaching my code for more reference.</p>
<pre><code>from allennlp.predictors.predictor import Predictor
import nltk
from allennlp.interpret.saliency_interpreters import SmoothGradient
data = "purchase costume year old grandson halloween arrive one week earlier expect happy grandson absolutely love glad order larger size size barely fit material durable well make think wear many time play since halloween happy purchase worth dollars spend"
words = nltk.word_tokenize(data)
predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/stanford-sentiment-treebank-roberta.2021-03-11.tar.gz")
predicted = predictor.predict(words)
saliency_scores = SmoothGradient(predictor).saliency_interpret_from_json({'sentence':words})
</code></pre>
<p>Every time I print saliency scores for the same data the values keep changing. Also the tokens that the model generates are distorted, for example halloween breaks into hall, ow and een. How can I fix this? Any help would be appreciated.</p> | 2021-03-19 21:42:50.507000+00:00 | 2021-04-05 20:08:23.853000+00:00 | null | nlp|sentiment-analysis|allennlp | ['https://arxiv.org/abs/1706.03825'] | 1 |
64,042,809 | <p>In Bahdanau's original paper, the decoder has only a single LSTM layer. There are various approaches to dealing with multiple layers. A quite common choice is to apply the attention between the layers (which you apparently did not do, see e.g., <a href="https://arxiv.org/abs/1609.08144" rel="nofollow noreferrer">a paper by Google</a>). If you use multiple decoder layers like this, you can use only the last layer (i.e., take <code>h_decoder[1]</code>); alternatively, you can concatenate the layers (i.e., in torch call <a href="https://pytorch.org/docs/stable/generated/torch.cat.html#torch-cat" rel="nofollow noreferrer"><code>torch.cat</code></a> or <a href="https://www.tensorflow.org/api_docs/python/tf/concat" rel="nofollow noreferrer"><code>tf.concat</code></a> in the 0-th dimension).</p>
<p>The matrices <em>W</em><sub>decoder</sub> and <em>W</em><sub>encoder</sub> ensure that both the encoder and decoder states get projected to the same dimension (regardless of what you did with the decoder layers), so you can do the summation.</p>
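<p>A minimal PyTorch sketch of the whole score computation (all shapes are made up):</p>
<pre><code>import torch

src_len, batch, hidden, attn_dim = 7, 2, 16, 10

h_encoder = torch.randn(src_len, batch, hidden)   # all encoder states
h_decoder = torch.randn(batch, hidden)            # last decoder layer state

W_encoder = torch.nn.Linear(hidden, attn_dim, bias=False)
W_decoder = torch.nn.Linear(hidden, attn_dim, bias=False)
v = torch.nn.Linear(attn_dim, 1, bias=False)

# project both to the shared attention dimension
proj_enc = W_encoder(h_encoder)                   # [src_len, batch, attn_dim]
proj_dec = W_decoder(h_decoder).unsqueeze(0)      # [1, batch, attn_dim]

# broadcasting adds the decoder state to every encoder position
scores = v(torch.tanh(proj_enc + proj_dec)).squeeze(-1)   # [src_len, batch]
attn_weights = torch.softmax(scores, dim=0)                # over source positions
</code></pre>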
<p>The only remaining issue is that the encoder states have the max-length dimension. The trick here is that you need to add a dimension to the projected decoder state, so the summation gets broadcasted and the projected decoder state get summed with all the encoder states. In PyTorch, just call <a href="https://pytorch.org/docs/stable/generated/torch.unsqueeze.html" rel="nofollow noreferrer"><code>unsqueeze</code></a>, in TensorFlow <a href="https://www.tensorflow.org/api_docs/python/tf/expand_dims" rel="nofollow noreferrer"><code>expand_dims</code></a> in the 0-th dimension on the projected decoder state.</p> | 2020-09-24 08:49:25.197000+00:00 | 2020-09-24 08:49:25.197000+00:00 | null | null | 64,018,890 | <p>I'm currently trying to compute this function to get Bahdanau's attention
<a href="https://i.stack.imgur.com/hpnlf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hpnlf.png" alt="enter image description here" /></a></p>
<p>My question is with the H for the decoder and the encoder.</p>
<p>In one implementation, I see an h encoder with the dimensions: [max source Len, batch size, hidden size]</p>
<p>and a h decoder with the following dimensions: [#lstm layers, batch size, hidden dim]</p>
<p>How can I compute the addition if the dimensions for the W matrices have to be the same according to:
<a href="https://blog.floydhub.com/attention-mechanism/#bahdanau-att-step1" rel="nofollow noreferrer">https://blog.floydhub.com/attention-mechanism/#bahdanau-att-step1</a></p>
<p>Thanks for the help</p> | 2020-09-22 23:16:24.527000+00:00 | 2020-09-24 08:50:31.707000+00:00 | 2020-09-24 08:50:31.707000+00:00 | deep-learning|lstm|attention-model|seq2seq | ['https://arxiv.org/abs/1609.08144', 'https://pytorch.org/docs/stable/generated/torch.cat.html#torch-cat', 'https://www.tensorflow.org/api_docs/python/tf/concat', 'https://pytorch.org/docs/stable/generated/torch.unsqueeze.html', 'https://www.tensorflow.org/api_docs/python/tf/expand_dims'] | 5 |
57,415,190 | <p>There is <a href="https://arxiv.org/abs/1512.09300" rel="nofollow noreferrer">VAE-GAN</a>, which can likely achieve what you want; you probably don't even need the "variational" part. You might also want to look into <a href="https://arxiv.org/abs/1703.10593" rel="nofollow noreferrer">CycleGAN</a>.</p> | 2019-08-08 14:41:06.777000+00:00 | 2019-08-08 14:41:06.777000+00:00 | null | null | 57,368,903 | <p>I am a newcomer to the field of GANs. I know the original GANs take latent vectors as input. But if I want to complete tasks such as style conversion and watermark removal, the input should possibly be an image.</p>
<p>Then it leads me to think that I probably need an autoencoder to translate an image to latent vector if I want to do such work based on original GAN architectures. Is it a legit idea? </p>
<p>Now I know Pix2pix is likely what I need. But what are the early-era GAN architectures to accomplish this 'image converting' task? </p>
<p>Many thanks.</p> | 2019-08-06 04:22:58.490000+00:00 | 2019-08-10 08:15:49.647000+00:00 | 2019-08-10 08:15:49.647000+00:00 | machine-learning|neural-network|deep-learning|computer-vision|generative-adversarial-network | ['https://arxiv.org/abs/1512.09300', 'https://arxiv.org/abs/1703.10593'] | 2 |
59,618,479 | <p>If training time is a concern, then one can switch the tree-growing policy <code>tree_method</code> to <code>hist</code>, which is a histogram-based method. With a GPU it should be set to <code>gpu_hist</code>. You can find more details about its xgboost implementation here: <a href="http://arxiv.org/abs/1603.02754" rel="nofollow noreferrer">http://arxiv.org/abs/1603.02754</a></p>
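<p>For illustration, a minimal sketch of requesting the histogram-based method (the tiny synthetic data here just stands in for your real matrix):</p>
<pre><code>import numpy as np
import xgboost as xgb

# Small synthetic stand-in for the real (16k x 180k) training set
X = np.random.rand(1000, 50)
y = (X[:, 0] > 0.5).astype(int)

params = {
    "objective": "binary:logistic",
    "tree_method": "hist",   # switch to "gpu_hist" when a GPU is available
}
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train(params, dtrain, num_boost_round=100)
</code></pre>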
<p>This histogram-based growth is the secret sauce that leads to very fast training without much compromise in solution quality. In fact, GPU-based training and even LightGBM etc. rely on histogram-based techniques for faster training and thus faster iterations/experiments, which matters a lot in time-constrained Kaggle-type competitions. <code>hist</code> may cut training time to half or less, and <code>gpu_hist</code> on a GPU may take it down to minutes.</p>
<p>PS: I would suggest to reduce the dimensionality of your data (16k X 180k) by removing correlated/rank-correlated features which will further improve not only your training time but also model performance.</p> | 2020-01-06 20:24:13.797000+00:00 | 2020-01-06 20:24:13.797000+00:00 | null | null | 59,605,761 | <p>I am trying to train an XGBoost classifier in Python using the xgboost package. I am using the defaults on all the parameters for the classifier and my training set has around 16,000 elements and 180,000 features for each element. I am not using the gpu to train the model, but still, the training process has taken more than five hours and is still going. I have 32GB of RAM and a 6 core Intel I7. I am wondering if this is normal time for training this classifier with the amount of data I have because I have heard of people training the model in a couple of minutes.</p> | 2020-01-06 01:42:04.703000+00:00 | 2020-01-06 20:24:13.797000+00:00 | null | python|machine-learning|xgboost|training-data | ['http://arxiv.org/abs/1603.02754'] | 1 |
41,833,187 | <p>sklearn does not seem to specify how it works internally regarding data types. However, it probably makes sense to assume it retains <em>at least</em> the precision of the input data type. So, to be on the safe side, explicitly specify <code>dtype</code> as double (<code>float64</code>) in your data.</p>
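<p>A small sketch of what that looks like (any estimator would do):</p>
<pre><code>import numpy as np
from sklearn.linear_model import LinearRegression

# Explicitly pass double-precision arrays
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=np.float64)
y = np.array([1.0, 2.0, 3.0], dtype=np.float64)

model = LinearRegression().fit(X, y)
print(model.coef_.dtype)  # float64
</code></pre>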
<p>In practice, error propagation should not be an issue, since most algorithms are approximate in nature, and some of them depend much more on the random initial conditions than on numerical accuracy. Recently, it has even been suggested that we should <em>limit</em> precision to save resources, since the impact is small. See for example
<a href="https://arxiv.org/pdf/1502.02551.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1502.02551.pdf</a></p> | 2017-01-24 16:20:38.787000+00:00 | 2017-01-24 16:20:38.787000+00:00 | null | null | 41,832,801 | <p>I am using sklearn for machine learning purposes. If I found out correctly, the float type in python works with double precision. Does sklearn work with the same precision internally? I pass data to sklearn in lists/numpy arrays filled with floats (is this even relevant?). </p>
<p>Do I have to be worried about error propagation? I guess I do not, if double precision is used.</p>
<p>Just want to make sure.</p> | 2017-01-24 16:04:31.357000+00:00 | 2017-01-24 16:20:38.787000+00:00 | null | python|statistics|scikit-learn|precision | ['https://arxiv.org/pdf/1502.02551.pdf'] | 1 |
56,611,656 | <p>Yes, but you'd want to train the <code>Doc2Vec</code> model on a large set of documents which contains the full range of document topics you want to represent – such as all Wikipedia articles – rather than just on one document. </p>
<p>Then, if the documents you want to compare were named in the training set, you can look up their vectors from the model. But if they're new documents using similar language, you can use the <code>Doc2Vec.infer_vector()</code> method on their word-tokens (which should be preprocessed/tokenized the same as the training data was).</p>
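<p>A rough gensim sketch of both steps (the three toy documents here merely stand in for a large training corpus such as Wikipedia):</p>
<pre><code>import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

docs = ["world war one was a global war originating in europe",
        "world war two was a global war involving the great powers",
        "thermodynamics is based on a set of four universal laws"]
corpus = [TaggedDocument(simple_preprocess(d), [i]) for i, d in enumerate(docs)]
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

v1, v2, v3 = (model.infer_vector(simple_preprocess(d)) for d in docs)
print(cosine(v1, v2), cosine(v1, v3))  # the first pair should usually score higher
</code></pre>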
<p>It seems in your question, you've picked two documents that are known to be somewhat-similar – perhaps because they share the same human-assigned category – and then a 3rd at random, with the hopes a model will properly determine the 1st two are more-similar to each-other than the 3rd. </p>
<p>Doing that over a large set of document-triples is a good model-evaluation process! In fact, in the follow-up paper to the original 'Paragraph Vector' (<code>Doc2Vec</code>) work, that's what's used to evaluate and parameter-optimize the algorithm, against both Wikipedia and Arxiv document corpora. See:</p>
<p><a href="https://arxiv.org/abs/1507.07998" rel="nofollow noreferrer">Document Embedding with Paragraph Vectors</a> </p> | 2019-06-15 15:18:24.487000+00:00 | 2019-06-15 15:18:24.487000+00:00 | null | null | 56,605,777 | <p>I would like to compare two documents semantically and generate a similarity score. The following docs are from wikipedia and when compare them, I expect to see a higher score for world_1 and world_2 as they have similar context.</p>
<p>Would training a Doc2vec model on "world_1" and testing other two docs with that model be a good approach? </p>
<p>thermo = "Thermodynamics is principally based on a set of four laws which are universally valid when applied to systems that fall within the constraints implied by each. In the various theoretical descriptions of thermodynamics these laws may be expressed in seemingly differing forms, but the most prominent formulations are the following:Zeroth law of thermodynamics:If two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other.This statement implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems under consideration."</p>
<p>world_1 = "World War I (often abbreviated as WWI or WW1), also known as the First World War or the Great War, was a global war originating in Europe that lasted from 28 July 1914 to 11 November 1918. Contemporaneously described as the war to end all wars,[7] it led to the mobilisation of more than 70 million military personnel, including 60 million Europeans, making it one of the largest wars in history.[8][9] It is also one of the deadliest conflicts in history,[10] with an estimated nine million combatants and seven million civilian deaths as a direct result of the war, while resulting genocides and the 1918 influenza pandemic caused another 50 to 100 million deaths worldwide. On 28 June 1914, Gavrilo Princip, a Bosnian Serb Yugoslav nationalist, assassinated the Austro-Hungarian heir Archduke Franz Ferdinand in Sarajevo, leading to the July Crisis."</p>
<p>world_2 = "World War II (often abbreviated to WWII or WW2), also known as the Second World War, was a global war that lasted from 1939 to 1945. The vast majority of the world's countries—including all the great powers—eventually formed two opposing military alliances: the Allies and the Axis. A state of total war emerged, directly involving more than 100 million people from over 30 countries. The major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, blurring the distinction between civilian and military resources. World War II was the deadliest conflict in human history, marked by 50 to 85 million fatalities, most of whom were civilians in the Soviet Union and China."</p> | 2019-06-14 22:24:15.850000+00:00 | 2019-06-15 15:18:24.487000+00:00 | null | gensim|word2vec|similarity|cosine-similarity|doc2vec | ['https://arxiv.org/abs/1507.07998'] | 1 |
44,475,009 | <p>There are a couple of things that contribute to the problem. Changing some or all of them will give you reasonable results and make learning possible. </p>
<ol>
<li><p>Some of your (polynomial) features have a huge variance and are taking on very large values. Check out <code>np.max(x_train_poly)</code>. When your weight matrix is randomly initialised, this causes the initial predictions to be largely off, and the loss to approach infinity quickly. To counteract this, you may want to standardise your features first (i.e. make mean 0 and variance 1 for each feature). Note that in very deep networks a similar idea, called "Batch Normalization", is used. If you're interested, you can read more here: <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">https://arxiv.org/abs/1502.03167</a> You can do the following to fix your example:</p>
<pre><code>means = np.mean(x_train_poly,axis=0,keepdims=True)
std = np.std(x_train_poly,axis=0,keepdims=True)
x_train_poly = (x_train_poly - means) / std
</code></pre></li>
<li><p>Your current model, doesn't have any hidden layers, which is sort of the point of a neural network and building a non-linear regressor/ classifier. What you're doing right now is applying a linear transformation to the 27 input features to get something that is close to the output. You could add an additional layer like this:</p>
<pre><code>hidden_dim = 50
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer1 = torch.nn.Linear(poly.n_output_features_, hidden_dim)
self.layer2 = torch.nn.Linear(hidden_dim, output_size)
def forward(self, x):
return self.layer2(torch.nn.ReLU()(self.layer1(x)))
</code></pre>
<p>Note that I have added a non-linearity after the first linear transformation, because else there's no point having multiple layers.</p></li>
<li><p>The initial predictions are greatly off, which drives the loss toward infinity. You're using squared loss, which essentially doubles the order of magnitude of your initial "mistake" in the loss function. And once the loss is infinite, you'll be unable to escape, because the gradient updates are essentially also infinite as you're using squared loss. An easy fix that is sometimes useful is to use the smooth L1 loss instead. It is essentially MSE on the interval [-1, 1] and L1 loss outside that interval. Change the following:</p>
<pre><code>criterion = torch.nn.SmoothL1Loss()
</code></pre></li>
<li><p>That already gets you to something sensible (i.e. no infs anymore), but now
consider tuning the learning rate and introducing weight_decay. You may also want to change the optimizer. Some suggestions that work alright:</p>
<pre><code>optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=0.1)
</code></pre></li>
</ol> | 2017-06-10 15:26:37.540000+00:00 | 2017-06-10 15:26:37.540000+00:00 | null | null | 42,795,226 | <p>I have modified the code that I found on the Pytorch github to suit my data, but my loss results are huge and with each iteration they get bigger and later become nan. The code doesn't give me any errors, just no loss results and no predictions.
I have other code that deals with simple Linear Regression and all works fine. I guess I'm missing something simple here, but I'm unable to see it. Any help would be appreciated.</p>
<p>Code:</p>
<pre><code>import sklearn.linear_model as lm
from sklearn.preprocessing import PolynomialFeatures
import torch
import torch.autograd
import torch.nn.functional as F
from torch.autograd import Variable
train_data = torch.Tensor([
[40, 6, 4],
[44, 10, 4],
[46, 12, 5],
[48, 14, 7],
[52, 16, 9],
[58, 18, 12],
[60, 22, 14],
[68, 24, 20],
[74, 26, 21],
[80, 32, 24]])
test_data = torch.Tensor([
[6, 4],
[10, 5],
[4, 8]])
x_train = train_data[:,1:3]
y_train = train_data[:,0]
POLY_DEGREE = 3
input_size = 2
output_size = 1
poly = PolynomialFeatures(input_size * POLY_DEGREE, include_bias=False)
x_train_poly = poly.fit_transform(x_train.numpy())
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.fc = torch.nn.Linear(poly.n_output_features_, output_size)
def forward(self, x):
return self.fc(x)
model = Model()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
losses = []
for i in range(10):
optimizer.zero_grad()
outputs = model(Variable(torch.Tensor(x_train_poly)))
print(outputs)
loss = criterion(outputs, Variable(y_train))
print(loss.data[0])
losses.append(loss.data[0])
loss.backward()
optimizer.step()
if loss.data[0] < 1e-4:
break
print('n_iter', i)
print(loss.data[0])
plt.plot(losses)
plt.show()
</code></pre>
<p>output:</p>
<blockquote>
<p>[393494300459008.0, inf, inf, inf, nan, nan, nan, nan, nan, nan]</p>
<p>n_iter</p>
<p>9 nan</p>
</blockquote> | 2017-03-14 19:47:08.123000+00:00 | 2017-06-10 15:26:37.540000+00:00 | 2020-06-20 09:12:55.060000+00:00 | python|machine-learning|linear-regression|polynomials|pytorch | ['https://arxiv.org/abs/1502.03167'] | 1 |
58,102,886 | <p>Yes, hyperledger fabric has a moderate throughput of transactions. Should be able to get at least 3k transactions/second. Version 2.0 is capable of 20k/sec according to some researchers who made some modifications to the ordering service. Research here: <a href="https://arxiv.org/pdf/1901.00910.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1901.00910.pdf</a> </p> | 2019-09-25 16:32:47.243000+00:00 | 2019-10-08 15:11:50.657000+00:00 | 2019-10-08 15:11:50.657000+00:00 | null | 58,099,298 | <p>Is it reasonable to use blockchain for realtime services like task scheduling?</p>
<p>Fabric architecture is suitable for my problem, but I'm not sure metrics and overheads are reasonable.</p> | 2019-09-25 13:14:38.303000+00:00 | 2019-10-08 15:11:50.657000+00:00 | 2019-09-26 08:42:00.633000+00:00 | hyperledger-fabric|blockchain|hyperledger | ['https://arxiv.org/pdf/1901.00910.pdf'] | 1 |
48,847,675 | <p>It looks like multiple instance learning might be your approach. Check out these two papers:</p>
<p><a href="https://arxiv.org/pdf/1610.03155.pdf" rel="nofollow noreferrer">Multiple Instance Learning Convolutional Neural
Networks for Object Recognition</a></p>
<p><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4908336/pdf/btw252.pdf" rel="nofollow noreferrer">Classifying and segmenting microscopy images
with deep multiple instance learning</a></p>
<p>The last one is implemented by @dancsalo (not sure if he has a stack overflow account) <a href="https://github.com/dancsalo/TensorFlow-MIL" rel="nofollow noreferrer">here</a>.</p>
<p>It looks like the second paper deals with very large images and breaks them into sub-images, but labels the entire image. So, it is like labeling a bag of images with a single label instead of having to make a label for each sub-image. In your case, you might be able to construct a matrix of images, i.e. a 10 image x 10 image master image for each of the scans...</p>
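<p>To make the bag-of-images idea concrete, here is a very rough PyTorch sketch (not taken from either paper; the tiny fully connected encoder and the 64x64 image size are placeholders): one label per scan, with max-pooling over the per-image features.</p>
<pre><code>import torch
import torch.nn as nn

class MILClassifier(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, bag):            # bag: [n_images_in_scan, 1, 64, 64]
        feats = self.encoder(bag)      # one feature vector per image
        pooled, _ = feats.max(dim=0)   # aggregate the whole bag (scan)
        return torch.sigmoid(self.head(pooled))

scan = torch.randn(100, 1, 64, 64)     # ~100 images from one scan
prob_damage = MILClassifier()(scan)    # single probability for the whole scan
</code></pre>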
<p>Let us know if you do this and if it works well on your data set!</p> | 2018-02-18 01:46:04.077000+00:00 | 2018-02-18 01:46:04.077000+00:00 | null | null | 48,840,499 | <p>I'm building CNN that will tell me if a person has brain damage. I'm planning to use <a href="https://github.com/tensorflow/models/tree/f87a58cd96d45de73c9a8330a06b2ab56749a7fa/research/inception" rel="nofollow noreferrer">tf inception v3</a> model, and <a href="https://github.com/tensorflow/models/blob/f87a58cd96d45de73c9a8330a06b2ab56749a7fa/research/inception/inception/data/build_image_data.py" rel="nofollow noreferrer">build_image_data.py</a> script to build <em>TFRecord</em>.</p>
<p>Dataset is composed of brain scans. Every scan has about 100 images(different head poses, angles). On some images, damage is visible, but on some is not. I can't label all images from the scan as a damage positive(or negative), because some of them would be labeled wrong(if scan is positive on damage, but that is not visible on specific image).</p>
<p>Is there a way to label the whole scan as positive/negative and in that way train the network?
And after training is done, pass scan as input to network(not single image) and classify it.</p> | 2018-02-17 10:49:58.827000+00:00 | 2018-02-18 01:46:04.077000+00:00 | null | python|tensorflow|deep-learning|conv-neural-network | ['https://arxiv.org/pdf/1610.03155.pdf', 'https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4908336/pdf/btw252.pdf', 'https://github.com/dancsalo/TensorFlow-MIL'] | 3 |
62,831,823 | <h2>Training</h2>
<p>In general, there are two strategies of parallelizing model training: data parallelism and model parallelism.</p>
<h3>1. Data parallelism</h3>
<p>This strategy splits training data into N partitions, each of which will be trained on different “devices” (different CPU cores, GPUs, or even machines). In contrast to training without data parallelism which produces one gradient per minibatch, we now have N gradients for each minibatch step. The next question is how we should combine these N gradients.</p>
<p>One way to do it is by averaging all the N gradients and then updating the model parameters <em>once</em> based on the average. This technique is called <strong>synchronous distributed SGD</strong>. By doing the average, we have a more accurate gradient, but with a cost of waiting all the devices to finish computing its own local gradient.</p>
<p>Another way is by not combining the gradients — each gradient will instead be used to update the model parameters independently. So, there will be N parameter updates for each minibatch step, in contrast to only one for the previous technique. This technique is called <strong>asynchronous distributed SGD</strong>. Because it doesn't have to wait other devices to finish, the async approach will take less time to complete a minibatch step than the sync approach will do. However, the async approach will produce a more noisy gradient, so it might need to complete more minibatch steps to catch up with the performance (in terms of loss) of the sync approach.</p>
<p>There are many papers proposing some improvements and optimizations on either approach, but the main idea is generally the same as described above.</p>
<p>In the literature there's been some disagreement about which technique is better in practice. In the end, most people now settle on the synchronous approach.</p>
<p><strong>Data Parallelism in PyTorch</strong></p>
<p>To do synchronous SGD, we can wrap our model with <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel" rel="noreferrer"><code>torch.nn.parallel.DistributedDataParallel</code></a>:</p>
<pre><code>from torch.nn.parallel import DistributedDataParallel as DDP
# `model` is the model we previously initialized
model = ...
# `rank` is a device number starting from 0
model = model.to(rank)
ddp_model = DDP(model, device_ids=[rank])
</code></pre>
<p>Then we can train it similarly. For more details, you can refer to <a href="https://pytorch.org/tutorials/intermediate/ddp_tutorial.html" rel="noreferrer">the official tutorial</a>.</p>
<p>For doing asynchronous SGD in PyTorch, we need to <a href="https://pytorch.org/docs/stable/notes/multiprocessing.html#asynchronous-multiprocess-training-e-g-hogwild" rel="noreferrer">implement it more manually</a> since there is no wrapper similar to <code>DistributedDataParallel</code> for it.</p>
<p><strong>Data Parallelism in TensorFlow/Keras</strong></p>
<p>For synchronous SGD, we can use <a href="https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy" rel="noreferrer"><code>tf.distribute.MirroredStrategy</code></a> to wrap the model initalization:</p>
<pre><code>import tensorflow as tf
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = Model(...)
model.compile(...)
</code></pre>
<p>Then we can train it as usual. For more details, you can refer to the official guides on <a href="https://keras.io/guides/distributed_training/" rel="noreferrer">Keras website</a> and <a href="https://www.tensorflow.org/guide/distributed_training" rel="noreferrer">TensorFlow website</a>.</p>
<p>For asynchronous SGD, we can use <a href="https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/ParameterServerStrategy" rel="noreferrer"><code>tf.distribute.experimental.ParameterServerStrategy</code></a> similarly.</p>
<h3>2. Model Parallelism</h3>
<p>This strategy splits the model into N parts, each of which will be computed on different devices. A common way to split the model is based on layers: different sets of layers are placed on different devices. But we can also split it more intricately depending on the model architecture.</p>
<p><strong>Model Parallelism in TensorFlow and PyTorch</strong></p>
<p>To implement model parallelism in either TensorFlow or PyTorch, the idea is the same: to move some model parameters into a different device.</p>
<p>In PyTorch we can use <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module.to" rel="noreferrer"><code>torch.nn.Module.to</code></a> method to move a module into a different device. For example, suppose we want to create two linear layers each of which is placed on a different GPU:</p>
<pre><code>import torch.nn as nn
linear1 = nn.Linear(16, 8).to('cuda:0')
linear2 = nn.Linear(8, 4).to('cuda:1')
</code></pre>
<p>In TensorFlow we can use <a href="https://www.tensorflow.org/api_docs/python/tf/device" rel="noreferrer"><code>tf.device</code></a> to place an operation into a specific device. To implement the PyTorch example above in TensorFlow:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers
with tf.device('/GPU:0'):
linear1 = layers.Dense(8, input_dim=16)
with tf.device('/GPU:1'):
linear2 = layers.Dense(4, input_dim=8)
</code></pre>
<p>For more details you can refer to <a href="https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html" rel="noreferrer"><code>the official PyTorch tutorial</code></a>; or if you use TensorFlow you can even use a more high-level library like <a href="https://github.com/tensorflow/mesh" rel="noreferrer">mesh</a>.</p>
<h3>3. Hybrid: Data and Model Parallelism</h3>
<p>Recall that data parallelism only splits the training data, whereas model parallelism only splits the model structures. If we have a model so large that even after using either parallelism strategy it still doesn't fit in the memory, we can always do both.</p>
<p>In practice most people prefer data parallelism to model parallelism since the former is more decoupled (in fact, independent) from the model architecture than the latter. That is, by using data parallelism they can change the model architecture as they like, without worrying which part of the model should be parallelized.</p>
<h2>Model Inference / Serving</h2>
<p>Parallelizing model serving is easier than parallelizing model training since the model parameters are already fixed and each request can be processed independently. Similar to scaling a regular Python web service, we can scale model serving by spawning more processes (to workaround <a href="https://stackoverflow.com/q/1294382/1403530">Python's GIL</a>) in a single machine, or even spawning more machine instances.</p>
<p>When we use a GPU to serve the model, though, we need to do more work to scale it. Because of how concurrency is handled differently by a GPU compared to a CPU, in order to maximize the performance, we need to do inference request batching. The idea is when a request comes, instead of immediately processing it, we wait some timeout duration for other requests to come. When the timeout is up, even if the number of requests is only one, we batch them all to be processed on the GPU.</p>
<p>In order to minimize the average request latency, we need to find the optimal timeout duration. To find it, we need to observe that there is a trade-off between minimizing the timeout duration and maximizing the batch size. If the timeout is too low, the batch size will be small, so the GPU will be underutilized. But if the timeout is too high, the requests that come early will wait too long before they get processed. So, the optimal timeout duration depends on the model complexity (hence, the inference duration) and the average number of requests received per second.</p>
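<p>A toy sketch of the idea (not production code; <code>model_fn</code> is an assumed batched-inference function and each request is an <code>(input, reply_queue)</code> pair):</p>
<pre><code>import queue
import time

REQUESTS = queue.Queue()
MAX_BATCH, TIMEOUT_S = 32, 0.01       # the trade-off knobs discussed above

def batching_loop(model_fn):
    while True:
        batch = [REQUESTS.get()]      # block until at least one request arrives
        deadline = time.monotonic() + TIMEOUT_S
        while len(batch) < MAX_BATCH:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(REQUESTS.get(timeout=remaining))
            except queue.Empty:
                break
        inputs, reply_queues = zip(*batch)
        for out, rq in zip(model_fn(list(inputs)), reply_queues):
            rq.put(out)               # hand each result back to its caller
</code></pre>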
<p>Implementing a scheduler to do request batching is not a trivial task, so instead of doing it manually, we'd better use <a href="https://github.com/tensorflow/serving" rel="noreferrer">TensorFlow Serving</a> or <a href="https://github.com/pytorch/serve" rel="noreferrer">PyTorch Serve</a> which already supports it.</p>
<hr />
<p>To learn more about parallel and distributed learning, you can read <a href="https://arxiv.org/abs/1802.09941" rel="noreferrer">this review paper</a>.</p> | 2020-07-10 09:57:51.360000+00:00 | 2020-07-10 09:57:51.360000+00:00 | null | null | 62,759,940 | <p>What strategies and forms of parallelization are feasible and available for <strong>training</strong> and <strong>serving</strong> a neural network?:</p>
<ul>
<li><strong>inside</strong> a machine <strong>across</strong> cores (e.g. GPU / TPU / CPU)</li>
<li><strong>across</strong> machines on a network or a rack</li>
</ul>
<p>I'm also looking for evidence for how they may also be used in e.g. TensorFlow, PyTorch or MXNet.</p>
<h3>Training</h3>
<p>To my knowledge, when training large neural networks on large datasets, one could at least have:</p>
<ol>
<li>Different <strong>cores</strong> <em>or</em> <strong>machines</strong> operate on <strong>different parts of the graph</strong> ("<em><strong>graph</strong> splitting</em>"). E.g. backpropagation through the graph itself can be parallelized e.g. by having different layers hosted on different machines since (I think?) the <strong>autodiff graph</strong> is always a <strong>DAG</strong>.</li>
<li>Different <strong>cores</strong> <em>or</em> <strong>machines</strong> operate on <strong>different samples</strong> of data ("<em><strong>data</strong> splitting</em>"). In SGD, the computation of gradients across batches or samples can also be parallelized (e.g. the gradients can be combined after computing them independently on different batches). I believe this is also called gradient accumulation (?).</li>
</ol>
<p>When is each strategy better for what type of problem or neural network? Which modes are supported by modern libraries? and can one combine all four (2x2) strategies?</p>
<p>On top of that, I have read about:</p>
<ul>
<li><strong>Asynchronous</strong> training</li>
<li><strong>Synchronous</strong> training</li>
</ul>
<p>but I don't know what exactly that refers to, e.g. is it the computation of <strong>gradients</strong> on <strong>different data batches</strong> or the computation of <strong>gradients</strong> on different <strong>subgraphs</strong>? Or perhaps it refers to something else altogether?</p>
<h3>Serving</h3>
<p>If the network is huge, prediction / inference may also be slow, and the model may not fit on a single machine in memory at serving time. Are there any known multi-core and multi-node prediction solutions that work that can handle such models?</p> | 2020-05-30 16:50:45.240000+00:00 | 2020-07-13 09:47:58.767000+00:00 | 2020-07-09 19:55:11.833000+00:00 | tensorflow|deep-learning|pytorch|distributed-computing|mxnet | ['https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel', 'https://pytorch.org/tutorials/intermediate/ddp_tutorial.html', 'https://pytorch.org/docs/stable/notes/multiprocessing.html#asynchronous-multiprocess-training-e-g-hogwild', 'https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy', 'https://keras.io/guides/distributed_training/', 'https://www.tensorflow.org/guide/distributed_training', 'https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/ParameterServerStrategy', 'https://pytorch.org/docs/stable/nn.html#torch.nn.Module.to', 'https://www.tensorflow.org/api_docs/python/tf/device', 'https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html', 'https://github.com/tensorflow/mesh', 'https://stackoverflow.com/q/1294382/1403530', 'https://github.com/tensorflow/serving', 'https://github.com/pytorch/serve', 'https://arxiv.org/abs/1802.09941'] | 15 |
62,864,931 | <p>As the question is quite broad, I'll try to shed a little different light and touch on different topics than what was shown in
<a href="https://stackoverflow.com/a/62831823/10886420">@Daniel's</a> in-depth answer.</p>
<h1>Training</h1>
<h2>Data parallelization vs model parallelization</h2>
<p>As mentioned by <a href="https://stackoverflow.com/a/62831823/10886420">@Daniel</a>, data parallelism is used way more often and is easier to do correctly. The major caveat of model parallelism is that parts of the neural network have to wait for each other and synchronize.</p>
<p>Say you have a simple feedforward <code>5</code> layer neural network spread across <code>5</code> different GPUs, each layer for one device. In this case, during each forward pass each device has to wait for computations from the previous layers. In this simplistic case, copying data between devices and synchronization would take a lot longer and won't bring benefits.</p>
<p>On the other hand, there are models better suited for model parallelization like <a href="https://arxiv.org/abs/1512.00567" rel="noreferrer">Inception networks</a>, see picture below:</p>
<p><a href="https://i.stack.imgur.com/LvGoO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LvGoO.png" alt="inception block" /></a></p>
<p>Here you can see <code>4</code> independent paths from previous layer which could go in parallel and only <code>2</code> synchronization points (<code>Filter concatenation</code> and <code>Previous Layer</code>).</p>
<h2>Questions</h2>
<blockquote>
<p>E.g. backpropagation through the graph itself can be parallelized e.g.
by having different layers hosted on different machines since (I
think?) the autodiff graph is always a DAG.</p>
</blockquote>
<p>It's not that easy. Gradients are calculated based on the loss value (usually) and you need to know gradients of deeper layers to calculate gradients for the more shallow ones. As above, if you have independent paths it's easier and may help, but it's way easier on a single device.</p>
<blockquote>
<p>I believe this is also called gradient accumulation (?)</p>
</blockquote>
<p>No, it's actually reduction across multiple devices. You can see some of that in the <a href="https://pytorch.org/tutorials/intermediate/dist_tuto.html" rel="noreferrer">PyTorch tutorial</a>. Gradient accumulation is when you run your forward pass (either on a single device or multiple devices) <code>N</code> times and backpropagate each time (the gradient values are added up across passes), and the optimizer only then makes a single step to change the neural network's weights (and clears the gradient). In this case, the loss is usually divided by the number of steps taken between optimizer updates. This is used for more reliable gradient estimation, usually when you are unable to use large batches.</p>
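<p>A minimal PyTorch sketch of gradient accumulation (toy model and data, just to show the pattern):</p>
<pre><code>import torch
from torch import nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
batches = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(16)]

accum_steps = 4
optimizer.zero_grad()
for i, (x, y) in enumerate(batches):
    loss = loss_fn(model(x), y) / accum_steps   # scale so accumulated grads match one big batch
    loss.backward()                             # gradients add up across iterations
    if (i + 1) % accum_steps == 0:
        optimizer.step()                        # one parameter update per accumulated "big" batch
        optimizer.zero_grad()
</code></pre>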
<p>Reduction across devices looks like this:</p>
<p><a href="https://i.stack.imgur.com/SlPTj.png" rel="noreferrer"><img src="https://i.stack.imgur.com/SlPTj.png" alt="reduction" /></a></p>
<p>This is all-reduce in data parallelization: each device calculates the values, which are sent to all other devices and backpropagated there.</p>
<blockquote>
<p>When is each strategy better for what type of problem or neural
network?</p>
</blockquote>
<p>As described above, data parallelism is almost always fine if you have enough data and the samples are big (up to <code>8k</code> samples or more can be processed at once without <strong>too</strong> big a struggle).</p>
<blockquote>
<p>Which modes are supported by modern libraries?</p>
</blockquote>
<p><code>tensorflow</code> and <code>pytorch</code> both support either approach; most modern, maintained libraries have those functionalities implemented one way or another.</p>
<blockquote>
<p>can one combine all four (2x2) strategies</p>
</blockquote>
<p>Yes, you can parallelize both model and data across and within machines.</p>
<blockquote>
<p>synchronous vs asynchronous</p>
</blockquote>
<h3>asynchronous</h3>
<p>Described by <a href="https://stackoverflow.com/a/62831823/10886420">@Daniel</a> in brief, but it's worth mentioning updates are not totally separate. That would make little sense, as we would essentially train <code>N</code> different models based on their batches.</p>
<p>Instead, there is a global parameter space, where each replica is supposed to share calculated updates asynchronously (so forward pass, backward, calculate update with optimizer and share this update to global params).</p>
<p>This approach has one problem though: there is no guarantee that, while one worker computes its forward pass, another worker has not already updated the parameters, so the update may be calculated <strong>with respect to an old set of params</strong>; this is called <strong>stale gradients</strong>. Due to this, convergence might be hurt.</p>
<p>Another approach is to calculate <code>N</code> steps and updates for each worker and synchronize them afterwards, though it's not used as often.</p>
<p>This part was based on great <a href="http://seba1511.net/dist_blog/" rel="noreferrer">blogpost</a> and you should definitely read it if interested (there is more about staleness and some solutions).</p>
<h3>synchronous</h3>
<p>This was mostly described previously; there are different approaches, but PyTorch gathers outputs from the network replicas and backpropagates on them (see <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel" rel="noreferrer"><code>torch.nn.parallel.DistributedDataParallel</code></a>). BTW, you should use solely this (not <code>torch.nn.DataParallel</code>), as it overcomes Python's GIL problem.</p>
<h2>Takeaways</h2>
<ul>
<li>Data parallelization is almost always used when going for a speed-up, as you "only" have to replicate the neural network on each device (either over the network or within a single machine), run part of the batch on each during the forward pass, concatenate the outputs into a single batch (synchronization) on one device and backpropagate on it.</li>
<li>There are multiple ways to do data parallelization, already introduced by <a href="https://stackoverflow.com/a/62831823/10886420">@Daniel</a></li>
<li>Model parallelization is done when the model is too large to fit on single machine (<a href="https://arxiv.org/abs/2005.14165" rel="noreferrer">OpenAI's GPT-3</a> would be an extreme case) or when the architecture is suited for this task, but both are rarely the case AFAIK.</li>
<li>The more and the longer parallel paths the model has (synchronization points), the better it might be suited for model parallelization</li>
<li>It's important to start workers at similar times with similar loads in order not to wait for synchronization processes in the synchronous approach, or not to get stale gradients in the asynchronous one (though in the latter case this alone is not enough).</li>
</ul>
<h1>Serving</h1>
<h2>Small models</h2>
<p>As you are after large models I won't delve into options for smaller ones, just a brief mention.</p>
<p>If you want to serve multiple users over the network, you need some way to scale your architecture (usually a cloud like GCP or AWS). You could do that using <a href="https://kubernetes.io/" rel="noreferrer">Kubernetes</a> and its pods, or pre-allocate some servers to handle requests, but that approach would be inefficient (a small number of users with servers kept running would generate pointless costs, while large numbers of users may halt the infrastructure and make it take too long to process requests).</p>
<p>Another way is to use autoscaling based on a serverless approach. Resources are provided per request, so it has large scaling abilities and you don't pay when the traffic is low. You can look at <a href="https://azure.microsoft.com/en-us/services/functions/" rel="noreferrer">Azure Functions</a>, as they are on the path to improving it for ML/DL tasks, or <a href="https://github.com/szymonmaszke/torchlambda" rel="noreferrer"><code>torchlambda</code></a> for PyTorch (disclaimer: I'm the author) for smaller models.</p>
<h2>Large models</h2>
<p>As mentioned previously, you could use Kubernetes with your custom code or ready to use tools.</p>
<p>In the first case, you can spread the model just the same as for training, but only do the <code>forward</code> pass. In this way even giant models can be put up on the network (once again, <a href="https://openai.com/blog/openai-api/" rel="noreferrer">GPT-3</a> with 175B parameters), but it requires a lot of work.</p>
<p>In the second, <a href="https://stackoverflow.com/a/62831823/10886420">@Daniel</a> provided two possibilities. Others worth mentioning could be (read respective docs as those have a lot of functionalities):</p>
<ul>
<li><a href="https://www.kubeflow.org/" rel="noreferrer">KubeFlow</a> - multiple frameworks, based on Kubernetes (so auto-scaling, multi-node), training, serving and what not, connects with other things like MLFlow below</li>
<li><a href="https://aws.amazon.com/sagemaker/" rel="noreferrer">AWS SageMaker</a> - training and serving with Python API, supported by Amazon</li>
<li><a href="https://mlflow.org/" rel="noreferrer">MLFlow</a> - multiple frameworks, for experiment handling and serving</li>
<li><a href="https://github.com/bentoml/BentoML" rel="noreferrer">BentoML</a> - multiple frameworks, training and serving</li>
</ul>
<p>For PyTorch, you could read more <a href="https://pytorch.org/blog/model-serving-in-pyorch/" rel="noreferrer">here</a>, while tensorflow has a lot of serving functionality out of the box via <a href="https://www.tensorflow.org/tfx/guide/serving" rel="noreferrer">Tensorflow EXtended (TFX)</a>.</p>
<h1>Questions from OP's comment</h1>
<blockquote>
<p>Are there any forms of parallelism that are better within a machine vs
across machines</p>
</blockquote>
<p>The best form of parallelism would probably be within one giant computer, so as to minimize transfer between devices.</p>
<p>Additionally, there are different backends (at least in PyTorch) one can choose from (<code>mpi</code>, <code>gloo</code>, <code>nccl</code>) and not all of them support direct sending, receiving, reducing etc. data between devices (some may support CPU to CPU, others GPU to GPU). If there is no direct link between devices, those have to be first copied to another device and copied again to target device (e.g. GPU on other machine -> CPU on host -> GPU on host). See <a href="https://pytorch.org/docs/stable/distributed.html" rel="noreferrer">pytorch info</a>.</p>
<p>The more data and the bigger the network, the more profitable it should be to parallelize computations. If the whole dataset fits on a single device, there is no need for parallelization. Additionally, one should take into account things like internet transfer speed and network reliability; those costs may outweigh the benefits.</p>
<p>In general, go for data parallelization if you have lots of data (say ImageNet with <code>1.000.000</code> images) or big samples (say <code>2000x2000</code> images). If possible, stay within a single machine so as to minimize between-machine transfer. Distribute the model only if there is no way around it (e.g. it doesn't fit on the GPU). Don't otherwise (there is little to no point in parallelizing when training on MNIST, as the whole dataset will easily fit in RAM and reads from it will be fastest).</p>
<blockquote>
<p>why bother build custom ML-specific hardware such as TPUs?</p>
</blockquote>
<p>CPUs are not the best suited for highly parallel computations (e.g. matrix multiplication); moreover, the CPU may be occupied with many other tasks (like data loading), hence it makes sense to use a GPU.</p>
<p>As the GPU was created with graphics in mind (so, algebraic transformations), it can take over some of the CPU's duties and can be specialized (many more cores than a CPU, but simpler ones; see <a href="https://www.nvidia.com/en-us/data-center/v100/" rel="noreferrer">V100</a> for example).</p>
<p>Now, TPUs are tailored specifically for tensor computations (so deep learning mainly) and originated at Google; they are still a work in progress when compared to GPUs. They are suited for certain types of models (mainly convolutional neural networks) and can bring speedups in that case. Additionally, one should use the largest batches with this device (see <a href="https://cloud.google.com/tpu/docs/performance-guide" rel="noreferrer">here</a>), ideally divisible by <code>128</code>. You can compare that to NVidia's Tensor Cores technology (GPU), where you are fine with batches (or layer sizes) divisible by <code>16</code> or <code>8</code> (<code>float16</code> precision and <code>int8</code> respectively) for good utilization (although the more the better; it also depends on the number of cores, the exact graphics card and many other things, see some guidelines <a href="https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9926-tensor-core-performance-the-ultimate-guide.pdf" rel="noreferrer">here</a>).</p>
<p>On the other hand, TPU support still isn't the best, although the two major frameworks support it (<code>tensorflow</code> officially, and PyTorch via the <a href="https://pytorch.org/xla/release/1.5/index.html" rel="noreferrer"><code>torch_xla</code></a> package).</p>
<p>In general, GPU is a good default choice in deep learning right now, TPUs for convolution heavy architectures, though might give some headache tbh. Also (once again thanks @Daniel), TPUs are more power effective, hence should be cheaper when comparing single floating point operation cost.</p> | 2020-07-12 18:22:19.427000+00:00 | 2020-07-13 09:47:58.767000+00:00 | 2020-07-13 09:47:58.767000+00:00 | null | 62,759,940 | <p>What strategies and forms of parallelization are feasible and available for <strong>training</strong> and <strong>serving</strong> a neural network?:</p>
<ul>
<li><strong>inside</strong> a machine <strong>across</strong> cores (e.g. GPU / TPU / CPU)</li>
<li><strong>across</strong> machines on a network or a rack</li>
</ul>
<p>I'm also looking for evidence for how they may also be used in e.g. TensorFlow, PyTorch or MXNet.</p>
<h3>Training</h3>
<p>To my knowledge, when training large neural networks on large datasets, one could at least have:</p>
<ol>
<li>Different <strong>cores</strong> <em>or</em> <strong>machines</strong> operate on <strong>different parts of the graph</strong> ("<em><strong>graph</strong> splitting</em>"). E.g. backpropagation through the graph itself can be parallelized e.g. by having different layers hosted on different machines since (I think?) the <strong>autodiff graph</strong> is always a <strong>DAG</strong>.</li>
<li>Different <strong>cores</strong> <em>or</em> <strong>machines</strong> operate on <strong>different samples</strong> of data ("<em><strong>data</strong> splitting</em>"). In SGD, the computation of gradients across batches or samples can also be parallelized (e.g. the gradients can be combined after computing them independently on different batches). I believe this is also called gradient accumulation (?).</li>
</ol>
<p>When is each strategy better for what type of problem or neural network? Which modes are supported by modern libraries? and can one combine all four (2x2) strategies?</p>
<p>On top of that, I have read about:</p>
<ul>
<li><strong>Asynchronous</strong> training</li>
<li><strong>Synchronous</strong> training</li>
</ul>
<p>but I don't know what exactly that refers to, e.g. is it the computation of <strong>gradients</strong> on <strong>different data batches</strong> or the computation of <strong>gradients</strong> on different <strong>subgraphs</strong>? Or perhaps it refers to something else altogether?</p>
<h3>Serving</h3>
<p>If the network is huge, prediction / inference may also be slow, and the model may not fit on a single machine in memory at serving time. Are there any known multi-core and multi-node prediction solutions that work that can handle such models?</p> | 2020-05-30 16:50:45.240000+00:00 | 2020-07-13 09:47:58.767000+00:00 | 2020-07-09 19:55:11.833000+00:00 | tensorflow|deep-learning|pytorch|distributed-computing|mxnet | ['https://stackoverflow.com/a/62831823/10886420', 'https://stackoverflow.com/a/62831823/10886420', 'https://arxiv.org/abs/1512.00567', 'https://i.stack.imgur.com/LvGoO.png', 'https://pytorch.org/tutorials/intermediate/dist_tuto.html', 'https://i.stack.imgur.com/SlPTj.png', 'https://stackoverflow.com/a/62831823/10886420', 'http://seba1511.net/dist_blog/', 'https://stackoverflow.com/a/62831823/10886420', 'https://arxiv.org/abs/2005.14165', 'https://kubernetes.io/', 'https://azure.microsoft.com/en-us/services/functions/', 'https://github.com/szymonmaszke/torchlambda', 'https://openai.com/blog/openai-api/', 'https://stackoverflow.com/a/62831823/10886420', 'https://www.kubeflow.org/', 'https://aws.amazon.com/sagemaker/', 'https://mlflow.org/', 'https://github.com/bentoml/BentoML', 'https://pytorch.org/blog/model-serving-in-pyorch/', 'https://www.tensorflow.org/tfx/guide/serving', 'https://pytorch.org/docs/stable/distributed.html', 'https://www.nvidia.com/en-us/data-center/v100/', 'https://cloud.google.com/tpu/docs/performance-guide', 'https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9926-tensor-core-performance-the-ultimate-guide.pdf', 'https://pytorch.org/xla/release/1.5/index.html'] | 26 |
23,529,443 | <p>Since David does not seem interested in writing it down (well obviously he <em>is</em> interested, see the other answer :), I will use <a href="http://arxiv.org/pdf/0805.1598v1.pdf" rel="nofollow">his reference</a> to arrive at an algorithm for the case with 3 partitions.</p>
<p>First note that if we can solve the problem efficiently for some <em>m < n</em> using an algorithm <em>A</em>, we can rearrange the array so that we can apply <em>A</em> and are then left with a smaller subproblem. Say the original array is</p>
<pre><code>x1 .. xm x{m+1}.. xn y1 .. ym y{m+1} .. yn z1 .. zm z{m+1} .. zn
</code></pre>
<p>We want to rearrange it to</p>
<pre><code>x1 .. xm y1 .. ym z1 .. zm x{m+1} .. xn y{m+1} .. yn z{m+1} .. zn
</code></pre>
<p>This is basically a transformation of the pattern <code>AaBbCc</code> to <code>ABCabc</code> where A, B, C and a, b, c have the same lengths, respectively. We can achieve that through a series of reversals. Let X' denote the reversal of string X here:</p>
<pre><code> AaBbCc
-> Aa(BbCc)' = Aac'C'b'B'
-> Aac'(C'b')'B' = Aac'bCB'
-> A(ac'bCB')' = ABC'b'ca'
-> ABCb'ca'
-> ABC(b'ca')' = ABCac'b
-> ABCa(c'b)' = ABCab'c
-> ABCabc
</code></pre>
<p>There's probably a shorter way, but this is still just a constant number of reversals, each of which takes linear time, so the whole step is linear. One could use a more sophisticated algorithm here to implement some of the cyclic shifts, but that's just an optimization.</p>
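<p>As a sanity check, here is a small Python sketch of this regrouping step; it uses two rotations (each done with three reversals) instead of the exact sequence above, but gives the same in-place, linear-time result:</p>
<pre><code>def rev(xs, lo, hi):                 # reverse xs[lo:hi] in place
    hi -= 1
    while lo < hi:
        xs[lo], xs[hi] = xs[hi], xs[lo]
        lo, hi = lo + 1, hi - 1

def rotate(xs, lo, mid, hi):         # bring xs[mid:hi] in front of xs[lo:mid]
    rev(xs, lo, mid); rev(xs, mid, hi); rev(xs, lo, hi)

def regroup(xs, n, m):               # AaBbCc -> ABCabc, where |A| = |B| = |C| = m and |Aa| = n
    rotate(xs, m, n, n + m)              # A a B b C c  ->  A B a b C c
    rotate(xs, 2 * m, 2 * n, 2 * n + m)  # A B a b C c  ->  A B C a b c

arr = ['x1', 'x2', 'x3', 'y1', 'y2', 'y3', 'z1', 'z2', 'z3']
regroup(arr, 3, 2)
print(arr)   # ['x1', 'x2', 'y1', 'y2', 'z1', 'z2', 'x3', 'y3', 'z3']
</code></pre>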
<p>Now we can solve the two partitions of our array recursively and we're done.</p>
<p>The question remains, what would be a nice m that allows us to solve the left part easily?</p>
<p>To figure this out, we need to realize that what we want to implement is a particular <a href="http://en.wikipedia.org/wiki/Permutation" rel="nofollow">permutation</a> P of the array indices. Every permutation <a href="http://en.wikipedia.org/wiki/Permutation#Cycle_notation" rel="nofollow">can be decomposed into a set of cycles</a> <code>a0 -> a1 -> ... -> a{k-1} -> a0</code>, for which we have P(ai) = a{(i + 1) % k}. It is easy to process such a cycle in-place, the algorithm is outlined <a href="http://en.wikipedia.org/wiki/In-place_matrix_transposition#Non-square_matrices%3a_Following_the_cycles" rel="nofollow">on Wikipedia</a>. </p>
<p>Now the problem is, after you have completed processing one of the cycles, to find an element that is part of a cycle you have not yet processed. There is no generic solution for this, but for some particular permutations there are nice formulas that describe exactly which positions are part of the different cycles.</p>
<p>For your problems, you just choose m = (5^(2k) - 1)/3, such that m < n and k is maximum. A sequence of elements that are part of all the different cycles is 5^0, 5^1, ..., 5^{k-1}. You can use those to implement the cycle-leader algorithm on the left part of the array (after the shifting) in O(m).</p>
<p>We solve the leftover right part recursively and get an algorithm to solve the problem in time</p>
<pre><code>T(n) = O(m) + T(n - m)
</code></pre>
<p>and since m >= Omega(n), we get T(n) = O(n).</p> | 2014-05-07 22:15:46.460000+00:00 | 2014-05-07 22:53:29.490000+00:00 | 2014-05-07 22:53:29.490000+00:00 | null | 23,527,241 | <p>Given an array of size 3n of the form</p>
<pre><code>[x1, x2, x3... xn, y1, y2, y3... yn, z1, z2, z3... zn]
</code></pre>
<p>Convert it to <code>[x1, y1, z1, x2, y2, z2, ... xn, yn, zn]</code></p>
<p>Here xn, yn, zn can be any integers. See example input and output below.</p>
<p>Two constraints</p>
<ol>
<li>Do in O(n)</li>
<li>O(1) memory (inplace)</li>
</ol>
<p>An example input and output are as follows.</p>
<p><strong>Input :</strong><br>
<code>[5, 8, 11, 3, 2, 17, 21, 1, 9]</code> 3n = 9. So n = 3.</p>
<p>Here
<code>x1=5 x2=8 x3=11 y1=3 y2=2 y3=17 z1=21 z2=1 z3=9</code></p>
<p><strong>Output :</strong><br>
<code>[5, 3, 21, 8, 2, 1, 11, 17, 9]</code></p>
<p><strong>One possible O(n log n) soln:</strong>
Considering just x's and y's. Now I can swap all y's to its position which will leave me x2, x4, x6 swapped out of position. Then I will swap in x2, x4's which will leave x3, x7's out of position. And next iteration would be x8, x16's. This would take me to O(n log n) but not O(n).</p> | 2014-05-07 20:00:08.093000+00:00 | 2014-05-07 22:53:29.490000+00:00 | 2014-05-07 22:48:20.920000+00:00 | algorithm | ['http://arxiv.org/pdf/0805.1598v1.pdf', 'http://en.wikipedia.org/wiki/Permutation', 'http://en.wikipedia.org/wiki/Permutation#Cycle_notation', 'http://en.wikipedia.org/wiki/In-place_matrix_transposition#Non-square_matrices%3a_Following_the_cycles'] | 4 |
23,529,396 | <p>This answer is based on work by <a href="http://arXiv.org/abs/0805.1598v1" rel="nofollow">Peiyush Jain</a> (whose bibliography is woefully incomplete, but I don't feel like taking the time to straighten out the history of the in-place transposition problem). Observe that 3 is a primitive root of 25 = 5^2, since</p>
<pre><code>>>> len(set(pow(3,n,25)for n in range(25)))
20
</code></pre>
<p>and 20 is Euler's totient of 25. By Jain's Theorem 1, a classic result in number theory, 3 is a primitive root for all 5^k.</p>
<p>When the array has length 3n, the new position of the element at position k*n + j is 3*j + k. In general, the new position of i (except for the last element) is (i*n) % (3*n - 1). Note that n is the multiplicative inverse of 3 modulo 3*n - 1, so 3 is a primitive root if and only if n is.</p>
<p>Jain's observation, in this case, is that, if 3*n - 1 is a power of 5, then the permutation above has log_5 (3*n - 1) + 1 distinct cycles, led by 5^k for k from 0 to log_5 (3*n - 1). (This is more or less the definition of primitive root.) For each cycle, all we have to do is move the leader, move the element displaced by the leader, move the element displaced by the element displaced by the leader, etc., until we return to the leader.</p>
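<p>A small Python sketch of this cycle-leader step for the special sizes (it assumes 3n - 1 is a power of 5 and uses the rule that the element at position k*n + j moves to 3*j + k, which in closed form is (3*i) % (3*n - 1)):</p>
<pre><code>def interleave(xs):
    """[x1..xn, y1..yn, z1..zn] -> [x1, y1, z1, ...] in place.
    Sketch for the special sizes only: assumes len(xs) - 1 == 3*n - 1 is a power of 5."""
    mod = len(xs) - 1                 # positions 0 and len(xs) - 1 are fixed points
    leader = 1
    while leader < mod:               # cycle leaders 5**0, 5**1, ...
        i, val = leader, xs[leader]
        while True:
            j = (3 * i) % mod         # where the carried element belongs
            xs[j], val = val, xs[j]
            i = j
            if i == leader:
                break
        leader *= 5

a = ['x1', 'x2', 'y1', 'y2', 'z1', 'z2']  # 3n = 6, so 3n - 1 = 5
interleave(a)
print(a)   # ['x1', 'y1', 'z1', 'x2', 'y2', 'z2']
</code></pre>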
<p>For other array sizes, break the array into O(log n) implicit subarrays of lengths 3 and one plus powers of 5 that are divisible by 3: 6, 126, 3126, 78126, etc. Do a series of rotations, decreasing geometrically in size, to get the subarrays contiguous, then run the above algorithm.</p>
<p>If you actually implement this, please benchmark it. I did for the base case of Jain's algorithm (3^n - 1, pairs instead of triples) and found that, <strong>on my machine the O(n log n)-time algorithm was faster for non-galactic input sizes</strong>. YMMV of course.</p> | 2014-05-07 22:11:34.203000+00:00 | 2014-05-07 22:11:34.203000+00:00 | null | null | 23,527,241 | <p>Given an array of size 3n of the form</p>
<pre><code>[x1, x2, x3... xn, y1, y2, y3... yn, z1, z2, z3... zn]
</code></pre>
<p>Convert it to <code>[x1, y1, z1, x2, y2, z2, ... xn, yn, zn]</code></p>
<p>Here xn, yn, zn can be any integers. See example input and output below.</p>
<p>Two constraints</p>
<ol>
<li>Do in O(n)</li>
<li>O(1) memory (inplace)</li>
</ol>
<p>An example input and output are as follows.</p>
<p><strong>Input :</strong><br>
<code>[5, 8, 11, 3, 2, 17, 21, 1, 9]</code> 3n = 9. So n = 3.</p>
<p>Here
<code>x1=5 x2=8 x3=11 y1=3 y2=2 y3=17 z1=21 z2=1 z3=9</code></p>
<p><strong>Output :</strong><br>
<code>[5, 3, 21, 8, 2, 1, 11, 17, 9]</code></p>
<p><strong>One possible O(n log n) soln:</strong>
Considering just x's and y's. Now I can swap all y's to its position which will leave me x2, x4, x6 swapped out of position. Then I will swap in x2, x4's which will leave x3, x7's out of position. And next iteration would be x8, x16's. This would take me to O(n log n) but not O(n).</p> | 2014-05-07 20:00:08.093000+00:00 | 2014-05-07 22:53:29.490000+00:00 | 2014-05-07 22:48:20.920000+00:00 | algorithm | ['http://arXiv.org/abs/0805.1598v1'] | 1 |
48,984,607 | <p>Doesn't have code examples but this paper may be helpful: <a href="https://arxiv.org/abs/1006.2804" rel="nofollow noreferrer">An Effective Fingerprint Verification Technique, Gogoi & Bhattacharyya</a></p>
<blockquote>
<p>This paper presents an effective method for fingerprint verification based on a data mining technique called minutiae clustering and a graph-theoretic approach to analyze the process of fingerprint comparison to give a feature space representation of minutiae and to produce a lower bound on the number of detectably distinct fingerprints. The method also proving the invariance of each individual fingerprint by using both the topological behavior of the minutiae graph and also using a distance measure called Hausdorff distance.The method provides a graph based index generation mechanism of fingerprint biometric data. The self-organizing map neural network is also used for classifying the fingerprints.</p>
</blockquote> | 2018-02-26 08:54:23.880000+00:00 | 2018-02-26 08:54:23.880000+00:00 | null | null | 22,025,196 | <p>I have already extracted the features of a fingerprint database then a Neural Network should be applied to classify the images by gender. I haven't worked with NN yet and I know a bit.</p>
<ul>
<li><p>What type of NN should be used? Is it Artificial Neural Network or Multi-layer perceptron?</p></li>
<li><p>If the image size is not the same among all, does it matter?</p></li>
</ul>
<p>Maybe some code sample in this area could help.</p> | 2014-02-25 20:16:48.220000+00:00 | 2018-02-26 08:54:23.880000+00:00 | null | matlab|neural-network|classification|fingerprint | ['https://arxiv.org/abs/1006.2804'] | 1 |
55,226,096 | <p>You could also choose to make use of the R package "<a href="https://github.com/Nth-iteration-labs/contextual" rel="nofollow noreferrer">contextual</a>", which aims to ease the implementation and evaluation of both context-free (as described in Sutton & Barto) and contextual (such as for <a href="https://github.com/Nth-iteration-labs/contextual/blob/master/R/policy_cmab_lin_ucb_disjoint.R" rel="nofollow noreferrer">example</a> <a href="https://arxiv.org/abs/1003.0146" rel="nofollow noreferrer">LinUCB</a>) Multi-Armed Bandit policies. </p>
<p>The package actually <a href="https://nth-iteration-labs.github.io/contextual/articles/sutton_barto.html" rel="nofollow noreferrer">offers a vignette</a> on how to replicate all Sutton & Barto bandit plots. For example, to generate the ε-greedy plots, just simulate <a href="https://github.com/Nth-iteration-labs/contextual/blob/master/R/policy_mab_epsilon_greedy.R" rel="nofollow noreferrer">EpsilonGreedy</a> policies against a <a href="https://github.com/Nth-iteration-labs/contextual/blob/master/R/bandit_basic_gaussian.R" rel="nofollow noreferrer">Gaussian bandit</a> :</p>
<pre><code>library(contextual)
set.seed(2)
mus <- rnorm(10, 0, 1)
sigmas <- rep(1, 10)
bandit <- BasicGaussianBandit$new(mu_per_arm = mus, sigma_per_arm = sigmas)
agents <- list(Agent$new(EpsilonGreedyPolicy$new(0), bandit, "e = 0, greedy"),
Agent$new(EpsilonGreedyPolicy$new(0.1), bandit, "e = 0.1"),
Agent$new(EpsilonGreedyPolicy$new(0.01), bandit, "e = 0.01"))
simulator <- Simulator$new(agents = agents, horizon = 1000, simulations = 2000)
history <- simulator$run()
plot(history, type = "average", regret = FALSE, lwd = 1, legend_position = "bottomright")
plot(history, type = "optimal", lwd = 1, legend_position = "bottomright")
</code></pre>
<p><a href="https://i.stack.imgur.com/fAuaV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fAuaV.png" alt="EpsilonGreedy, average rewards"></a></p>
<p><a href="https://i.stack.imgur.com/huGHb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/huGHb.png" alt="EpsilonGreedy, optimal arms"></a></p>
<p>Full disclosure: I am one of the developers of the package.</p> | 2019-03-18 16:39:32.920000+00:00 | 2019-03-19 09:32:50.467000+00:00 | 2019-03-19 09:32:50.467000+00:00 | null | 17,934,171 | <p>I'm using Sutton & Barto's ebook <em>Reinforcement Learning: An Introduction</em> to study reinforcement learning. I'm having some issues trying to emulate the results (plots) on the <a href="http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node16.html" rel="nofollow noreferrer">action-value page</a>.</p>
<p>More specifically, how can I simulate the <code>greedy</code> value for each task? The book says:</p>
<blockquote>
<p>...we can plot the performance and behavior of various methods as
they improve with experience over 1000 plays...</p>
</blockquote>
<p>So I guess I have to keep track of the <em>exploratory</em> values as better ones are found. The issue is how to do this using the <em>greedy</em> approach - since there are no exploratory moves, how do I know <em>what is a greedy behavior</em>?</p>
<p>Thanks for all the comments and answers!</p>
<p>UPDATE: See code on my answer.</p> | 2013-07-29 21:04:04.843000+00:00 | 2019-03-19 09:32:50.467000+00:00 | 2017-03-08 21:03:00.140000+00:00 | r|simulation|reinforcement-learning | ['https://github.com/Nth-iteration-labs/contextual', 'https://github.com/Nth-iteration-labs/contextual/blob/master/R/policy_cmab_lin_ucb_disjoint.R', 'https://arxiv.org/abs/1003.0146', 'https://nth-iteration-labs.github.io/contextual/articles/sutton_barto.html', 'https://github.com/Nth-iteration-labs/contextual/blob/master/R/policy_mab_epsilon_greedy.R', 'https://github.com/Nth-iteration-labs/contextual/blob/master/R/bandit_basic_gaussian.R', 'https://i.stack.imgur.com/fAuaV.png', 'https://i.stack.imgur.com/huGHb.png'] | 8 |
45,216,534 | <p>What you are looking for is a fuzzy rule-based information retrieval system. It will require some hand crafted rules and fuzzy matching (usually using Lucene) to match queries against a knowledge base of entities/documents. </p>
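<p>To make the fuzzy-matching idea concrete, here is a small, hypothetical Python sketch (a production system would typically rely on Lucene/Elasticsearch; the knowledge base and threshold below are made up for illustration):</p>
<pre><code>import difflib

# Toy knowledge base of entity names (movies, characters, actors)
knowledge_base = ["Harry Potter", "Fantastic Beasts and Where to Find Them", "Hermione Granger"]

def fuzzy_matches(query, threshold=0.6):
    # Score the query against every entity and keep matches above the threshold
    scored = [(entity, difflib.SequenceMatcher(None, query.lower(), entity.lower()).ratio())
              for entity in knowledge_base]
    return [(entity, round(score, 2)) for entity, score in scored if score >= threshold]

print(fuzzy_matches("fantastic beasts and were to find them"))  # tolerates the misspelling
</code></pre>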
<p>See this paper for an example:</p>
<p>Implementation of an efficient Fuzzy Logic based Information Retrieval System
<a href="https://arxiv.org/pdf/1503.03957.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1503.03957.pdf</a></p> | 2017-07-20 13:42:49.797000+00:00 | 2017-07-20 13:42:49.797000+00:00 | null | null | 42,211,828 | <p>I would like to match social media posts (short text) to a database of movies/TV shows. The database contains information on movie or TV show names, characters and actors. If enough evidence is found in the input text, then I want the algorithm to classify the text to the movie it belongs to, or do nothing if there is not enough evidence.</p>
<p>I'm familiar with machine learning approaches, but those require training samples and a finite number of categories. My algorithm should be able to use context and and be scale-able for new content. For example, I don't want the machine to learn to recognize "Harry Potter" movies but then fail to recognize "Fantastic beasts and where to find them" when that is released. </p>
<p>I understand that the solution to this is partial string matching, but I would like to be pointed in the right directing for some general guidelines on these sort of problems. I'm also interested in recognizing misspelled words and assigning more weight to certain matches and less to others.</p>
<p>Also, as a side note, should string matching be done through SQLite or outside it? My guess for this case would be outside, but I'd just like to make sure. </p>
<p>Thank you in advance for any help!</p> | 2017-02-13 19:17:42.183000+00:00 | 2017-07-20 13:42:49.797000+00:00 | null | database|sqlite|python-3.x|text|text-classification | ['https://arxiv.org/pdf/1503.03957.pdf'] | 1 |
17,576,442 | <p>Are you using <a href="http://en.wikipedia.org/wiki/BB84" rel="nofollow">BB84</a> to encode your information? If so, look through the work that Shor and Preskill published <a href="http://arxiv.org/pdf/quant-ph/0003004v2.pdf" rel="nofollow">to prove the security of the algorithm</a>. If you're interested in simulating this, the <a href="http://www.libquantum.de" rel="nofollow">libquantum</a> project has some good libraries put together for defining qubit registers and making measurements on them.</p> | 2013-07-10 16:51:38.567000+00:00 | 2013-07-10 16:51:38.567000+00:00 | null | null | 4,866,531 | <p>I have a project i.e. third party authentication using Quantum Key. But we are facing a lot of problems related to hardware, so now we are focusing on simulation.</p>
<p>So can anyone guide me what type of simulation we should use?</p> | 2011-02-01 18:36:09.173000+00:00 | 2016-04-14 19:59:18.390000+00:00 | 2016-04-14 19:59:18.390000+00:00 | security|networking|quantum-computing|quantumgrid | ['http://en.wikipedia.org/wiki/BB84', 'http://arxiv.org/pdf/quant-ph/0003004v2.pdf', 'http://www.libquantum.de'] | 3 |
49,041,481 | <p>ROIPooling layer is typically used for object detection networks such as <a href="https://arxiv.org/abs/1311.2524" rel="nofollow noreferrer">R-CNN</a> and its variants (<a href="https://arxiv.org/abs/1504.08083" rel="nofollow noreferrer">Fast R-CNN</a> and <a href="https://arxiv.org/abs/1506.01497" rel="nofollow noreferrer">Faster R-CNN</a>). The essential part of all these architectures is a component (neural or classical CV) that generates region proposals. These region proposals are basically ROIs that need to be fed into the ROIPooling layer. The output of ROIPooling layer is going to be a batch of tensors, where each tensor represents one cropped area of an image. Each of these tensors are processed independently for classification. For example, in R-CNN, these tensors are crops of the image in RGB, which are then run through a classification network. In Fast R-CNN and Faster R-CNN, tensors are features out of a convolutional network, for example ResNet34.</p>
<p>In your example, whether through a classic computer vision algorithm (as in R-CNN and Fast R-CNN) or using a Region Proposal Network (as in Faster R-CNN), you need to generate some ROIs that are <em>candidates</em> for containing object of interest. Once you have these ROIs for each image in one mini-batch, you then need to combine them into one NDArray of <code>[[batch_index, x1, y1, x2, y2]]</code>. What this dimensioning means is that you can basically have as many ROIs as you want, and for each ROI, you must specify which image in the batch to crop (hence the <code>batch_index</code>) and what coordinates to crop it at (hence the <code>(x1, y1)</code> for top-left-corner and <code>(x2,y2)</code> for bottom-right-corner coordinates).</p>
<p>So based on the above, if you're implementing something similar to R-CNN, you would be passing your images directly into the RoiPooling layer:</p>
<pre><code>class ClassifyObjects(gluon.HybridBlock):
def __init__(self, num_classes, pooled_size):
super(ClassifyObjects, self).__init__()
self.classifier = gluon.model_zoo.vision.resnet34_v2(classes=num_classes)
self.pooled_size = pooled_size
def hybrid_forward(self, F, imgs, rois):
return self.classifier(
F.ROIPooling(
imgs, rois, pooled_size=self.pooled_size, spatial_scale=1.0))
# num_classes are 10 categories plus 1 class for "no-object-in-this-box" category
net = ClassifyObjects(num_classes=11, pooled_size=(64, 64))
# Initialize parameters and overload pre-trained weights
net.collect_params().initialize()
pretrained_net = gluon.model_zoo.vision.resnet34_v2(pretrained=True)
net.classifier.features = pretrained_net.features
</code></pre>
<p>Now if we send dummy data through the network, you can see that if roi array contains 4 rois, the output is going to contain 4 classification results:</p>
<pre><code># Dummy forward pass through the network
imgs = x = nd.random.uniform(shape=(2, 3, 128, 128)) # shape is (batch_size, channels, height, width)
rois = nd.array([[0, 10, 10, 100, 100], [0, 20, 20, 120, 120],
[1, 15, 15, 110, 110], [1, 25, 25, 128, 128]])
out = net(imgs, rois)
print(out.shape)
</code></pre>
<p>Outputs:</p>
<pre><code>(4, 11)
</code></pre>
<p>If you want to, however, use ROIPooling with similar to Fast R-CNN or Faster R-CNN model, you need access to the features of the network before they are average pooled. These features are then ROIPooled before being passed up to classification. Here an example where the features are from the pre-trained network, the ROIPooling's <code>pooled_size</code> is 4x4, and a simple GlobalAveragePooling followed by a Dense layer is used for classification after ROIPooling. Note that because the image is max-pooled by a factor of 32 through the ResNet network, <code>spatial_scale</code> is set to <code>1.0/32</code> to let the ROIPooling layer automatically compensate the rois for that.</p>
<pre><code>def GetResnetFeatures(resnet):
resnet.features._children.pop() # Pop Flatten layer
resnet.features._children.pop() # Pop GlobalAveragePooling layer
return resnet.features
class ClassifyObjects(gluon.HybridBlock):
def __init__(self, num_classes, pooled_size):
super(ClassifyObjects, self).__init__()
# Add a placeholder for features block
self.features = gluon.nn.HybridSequential()
# Add a classifier block
self.classifier = gluon.nn.HybridSequential()
self.classifier.add(gluon.nn.GlobalAvgPool2D())
self.classifier.add(gluon.nn.Flatten())
self.classifier.add(gluon.nn.Dense(num_classes))
self.pooled_size = pooled_size
def hybrid_forward(self, F, imgs, rois):
features = self.features(imgs)
return self.classifier(
F.ROIPooling(
features, rois, pooled_size=self.pooled_size, spatial_scale=1.0/32))
# num_classes are 10 categories plus 1 class for "no-object-in-this-box" category
net = ClassifyObjects(num_classes=11, pooled_size=(4, 4))
# Initialize parameters and overload pre-trained weights
net.collect_params().initialize()
net.features = GetResnetFeatures(gluon.model_zoo.vision.resnet34_v2(pretrained=True))
</code></pre>
<p>Now if we send dummy data through the network, you can see that if roi array contains 4 rois, the output is going to contain 4 classification results:</p>
<pre><code># Dummy forward pass through the network
# shape of each image is (batch_size, channels, height, width)
imgs = x = nd.random.uniform(shape=(2, 3, 128, 128))
# rois is the output of region proposal module of your architecture
# Each ROI entry contains [batch_index, x1, y1, x2, y2]
rois = nd.array([[0, 10, 10, 100, 100], [0, 20, 20, 120, 120],
[1, 15, 15, 110, 110], [1, 25, 25, 128, 128]])
out = net(imgs, rois)
print(out.shape)
</code></pre>
<p>Outputs:</p>
<pre><code>(4, 11)
</code></pre> | 2018-03-01 01:58:24.520000+00:00 | 2018-03-01 01:58:24.520000+00:00 | null | null | 48,272,913 | <p>Assume I have a Resnet34 pretained model in MXNet and I want to add to it the premade ROIPooling Layer included in the API:</p>
<p><a href="https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.ROIPooling" rel="nofollow noreferrer">https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.ROIPooling</a></p>
<p>If the code for initializing Resnet is the following, how can I add ROIPooling at the last layer of the Resnet features before the classifier?</p>
<p>Actually, how can I utilize the ROIPooling function in my model in general?</p>
<p>How can I incorporate multiple different ROIs in the ROIpooling layer? How should they be stored? <strong>How should the data iterator be changed in order to give me the Batch index required by the ROIPooling function ?</strong></p>
<p>Let us assume that I use this along with the VOC 2012 Dataset for the task of action recognition</p>
<pre><code>batch_size = 40
num_classes = 11
init_lr = 0.001
step_epochs = [2]
train_iter, val_iter, num_samples = get_iterators(batch_size,num_classes)
resnet34 = vision.resnet34_v2(pretrained=True, ctx=ctx)
net = vision.resnet34_v2(classes=num_classes)
class ROIPOOLING(gluon.HybridBlock):
def __init__(self):
super(ROIPOOLING, self).__init__()
def hybrid_forward(self, F, x):
#print(x)
a = mx.nd.array([[0, 0, 0, 7, 7]]).tile((40,1))
return F.ROIPooling(x, a, (2,2), 1.0)
net_cl = nn.HybridSequential(prefix='resnetv20')
with net_cl.name_scope():
for l in xrange(4):
net_cl.add(resnet34.classifier._children[l])
net_cl.add(nn.Dense(num_classes, in_units=resnet34.classifier._children[-1]._in_units))
net.classifier = net_cl
net.classifier[-1].collect_params().initialize(mx.init.Xavier(rnd_type='gaussian', factor_type="in", magnitude=2), ctx=ctx)
net.features = resnet34.features
net.features._children.append(ROIPOOLING())
net.collect_params().reset_ctx(ctx)
</code></pre> | 2018-01-16 01:30:19.453000+00:00 | 2018-03-01 01:58:24.520000+00:00 | 2018-01-24 19:45:29.267000+00:00 | python|deep-learning|mxnet|resnet | ['https://arxiv.org/abs/1311.2524', 'https://arxiv.org/abs/1504.08083', 'https://arxiv.org/abs/1506.01497'] | 3 |
60,649,802 | <p>Sadly, this is not possible currently; there is an open feature request on it <a href="https://jira.mongodb.org/browse/SERVER-22497" rel="nofollow noreferrer">here</a>, so you can keep track of it if you wish.</p>
<p>Right now, though, you have two options.</p>
<ol>
<li><p>Split your call into 2 queries and add that bit of logic to your code, which is what I personally recommend.</p></li>
<li><p>Use this aggregate which looks up all 3 collections:</p></li>
</ol>
<pre><code>search.aggregate([
{
'$match': {
'id_int': 0
}
},
{
'$project': {
'_id': 0,
'collection': 1,
'id_int': 1
}
},
{
"$facet": {
"arxiv": [
{
"$lookup": {
"from": "arxiv",
"localField": "id_int",
"foreignField": "id_int",
"as": "arxiv"
}
}
],
"crossref": [
{
"$lookup": {
"from": "crossref",
"localField": "id_int",
"foreignField": "id_int",
"as": "crossref"
}
}
],
"pmc_test": [
{
"$lookup": {
"from": "pmc_test",
"localField": "id_int",
"foreignField": "id_int",
"as": "pmc_test"
}
}
]
}
},
{
"$addFields": {
"newRoot": [
{
"k": "$collection",
"v": {
"$cond": [
{
"$eq": [
"$collection",
"arxiv"
]
},
"$arxiv",
{
"$cond": [
{
"$eq": [
"$collection",
"crossref"
]
},
"$crossref",
"$pmc_test"
]
}
]
}
},
{
"k": "collection", "v": "$collection"
},
{
"k": "id_int", "v": "$id_int"
}
]
}
},
{
"$replaceRoot": {
"newRoot": {
"$arrayToObject": {
"$concatArrays": "$newRoot"
}
}
}
}
])
</code></pre>
<p>As you might have noticed, the pipeline isn't exactly sexy; if you don't care about the field name in the end result you can dump most of it.</p> | 2020-03-12 08:01:12.337000+00:00 | 2020-03-12 08:01:12.337000+00:00 | null | null | 60,642,230 | <p>I am trying to see if I can change the <strong>from</strong> in the <strong>$lookup</strong> or rearrange my query to somehow retrieve from three potential collections. So far I have managed to set up the query like so:</p>
<pre><code>const search = db.collection("search");
search.aggregate([
{
'$match': {
'id_int': 0
}
}, {
'$project': {
'_id': 0,
'collection': 1,
'id_int': 1
}
}, {
'$lookup': {
'from': 'arxiv',
'localField': 'id_int',
'foreignField': 'id_int',
'as': 'arxiv'
}
}
], function(err, cursor) ... )
</code></pre>
<p>The <strong>$match</strong> and then <strong>$project</strong> pipeline stages return a result with the following properties:</p>
<pre><code>collection:"arxiv"
id_int:0
</code></pre>
<p>The collection value will always be one of three: arxiv, crossref or pmc_test. Therefore I'd like my <strong>$lookup from</strong> to use this property value programmatically as opposed to having it hard-coded.</p>
<pre><code>'$lookup': {
'from': 'arxiv' or 'crossref' or 'pmc_test', // Dynamic based on result
...
}
</code></pre>
<p>Thanks</p>
<p><strong>Edit</strong></p>
<p>id_int will get passed in and collection will not; that's why a query is made to the search collection.</p> | 2020-03-11 18:13:33.597000+00:00 | 2020-03-12 08:01:12.337000+00:00 | 2020-03-11 18:23:51.683000+00:00 | mongodb|mongodb-query|nosql|node-mongodb-native | ['https://jira.mongodb.org/browse/SERVER-22497'] | 1
58,058,501 | <p>This term, according to <a href="https://arxiv.org/abs/1904.09751" rel="nofollow noreferrer">this article</a>, refers to situations in the text generation process where either the generator model finds a state <strong>x</strong> such that <strong>G(x) = x</strong>, meaning the generated text repeats itself indefinitely, or, having reached an error state in the middle of the generation process, the model starts to reproduce incoherent text patterns. </p> | 2019-09-23 08:32:09.077000+00:00 | 2019-09-23 08:32:09.077000+00:00 | null | null | 58,057,884 | <p>In machine learning, especially NLP, <strong>what does it mean to degenerate a text?</strong></p>
<p>I heard this phrase some days ago in my office and after googling it I saw there are some papers on it, so I thought it might be important and I'm here to ask about the terminology.</p> | 2019-09-23 07:54:02.880000+00:00 | 2022-04-07 13:06:51.903000+00:00 | 2022-04-07 13:06:51.903000+00:00 | machine-learning|deep-learning|nlp|terminology | ['https://arxiv.org/abs/1904.09751'] | 1
53,857,325 | <p>This may be loosely related to tracking algorithms. Typically, you would use an LSTM or another sequence model coupled with a CNN to predict a human's behavior in time-series images. </p>
<p>I don't see why you couldn't set up your dataset with target labels of phones vs no phones for the CNN to predict the class label. R-CNN or Yolo won't come out of the box like this, so you would need to custom-fit your algorithm and training set for this application. </p>
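<p>As a rough illustration only (the 64x64 frame size, 16-frame sequence length and layer sizes are arbitrary placeholders, not a tested recipe), a per-frame CNN wrapped in <code>TimeDistributed</code> feeding an LSTM could look like this in Keras:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, TimeDistributed, LSTM, Dense

# Per-frame feature extractor
frame_features = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
])

# Sequence model over 16 frames, binary "phone in hand" label
model = Sequential([
    TimeDistributed(frame_features, input_shape=(16, 64, 64, 3)),
    LSTM(64),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
</code></pre>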
<p>Understanding human behavior is an important and active research topic for deep learning right now. Predicting behavior for a task like this is probably not something you will find ready-made in common libraries, since these are more domain-specific tasks and the research is new, but that doesn't mean it's not possible. </p>
<p>This is a survey paper on this topic that may relate to your question: <a href="https://arxiv.org/pdf/1806.11230.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1806.11230.pdf</a>. You may also want to look into the research going on with object tracking since it is a similar concept (but covers a wider scope than just detecting what someone is holding).</p> | 2018-12-19 18:36:41.753000+00:00 | 2018-12-19 18:36:41.753000+00:00 | null | null | 53,855,354 | <p>I'd like to ask a general question about DNN based object detection algorithms such as Yolo, SSD or R-CNN.</p>
<p>Assume I'd like to detect mobile phones in small images, where - consequently - the mobile devices themselves are super small; moreover, it's nearly impossible to detect them by looking only at the pixels they appear on. For instance, in a 300x300 image the phone shows up on a 7x5 grid, and by looking at that 7x5 patch alone no one can say for sure what it shows.</p>
<p>On the other hand, if we see a subway car on the picture, where a person has something black in her/his hand, we (human beings) are almost sure that the little, black 7x5 grid stands for a mobile device.</p>
<p><a href="https://i.stack.imgur.com/HncE0.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/HncE0.jpg" alt="man_with_phone"></a></p>
<p>Is my understanding right that the current state-of-the-art DNN algorithms cannot capture the environment as humans do, but they only detect objects by their physical appearance on the image? If not, can you suggest an algorithm that does not necessarily learn on a black pixel group only, but is able to capture a human being holding a black thing in her/his hand that is likely to be a phone?</p>
<p>Thanks.</p> | 2018-12-19 16:26:13.700000+00:00 | 2018-12-19 18:36:41.753000+00:00 | null | neural-network|deep-learning|computer-vision|conv-neural-network|object-detection | ['https://arxiv.org/pdf/1806.11230.pdf'] | 1 |
48,850,780 | <p>I think your LSTM is not able to extract relevant features from the video frames in order to achieve a good accuracy.</p>
<p>The approach that usually gives the best results when dealing with images (or video frames) is extracting features with a stack of convolution + relu + max pooling layers (see <a href="https://arxiv.org/abs/1612.02903" rel="nofollow noreferrer">https://arxiv.org/abs/1612.02903</a>, a survey on facial expression recognition; the surveyed methods all use convolutions to extract useful features from images).</p>
<p>These work best with 2-dimensional input, but I see that you represent a video frame with an array of size 2048 instead of a matrix. Usually images are represented with a shape similar to <code>(rows, cols, color_channels)</code>.</p>
<p>In your case, the input would have shape <code>(1, None, rows, cols, color_channels)</code>, then the convolutions would look something like this:</p>
<pre class="lang-py prettyprint-override"><code>from keras.layers import Input, LSTM, Conv2D, MaxPool2D, TimeDistributed, Flatten
x = Input(batch_shape=(1, None, rows, cols, color_channels), name='x')
convs = TimeDistributed(Conv2D(16, kernel_size=(3,3), activation='relu', padding='same'))(x)
convs = TimeDistributed(MaxPool2D(pool_size=(2,2)))(convs)
convs = TimeDistributed(Conv2D(32, kernel_size=(3,3), activation='relu', padding='same'))(convs)
convs = TimeDistributed(MaxPool2D(pool_size=(2,2)))(convs)
lstm_input = TimeDistributed(Flatten())(convs)
lstmR = LSTM(256, return_sequences=True, name='lstmR', stateful=True)(lstm_input)
lstmL = LSTM(256, return_sequences=True, go_backwards=True, name='lstmL', stateful=True)(lstm_input)
...
</code></pre>
<p>Where <code>TimeDistributed</code> applies the given layer to each time step.</p> | 2018-02-18 10:46:29.853000+00:00 | 2018-02-18 10:46:29.853000+00:00 | null | null | 42,151,994 | <p>I'm new to Keras, and trying to implement this network
<a href="https://i.stack.imgur.com/MHyVi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MHyVi.png" alt="enter image description here"></a></p>
<p>this network takes video frames as x = {x1,........,xT}, where T is the number of frames in the video and x is the visual features of the frames
of size 2048</p>
<p>I tried to use a stateful LSTM, as each sample has a number of frames, as referenced <a href="http://philipperemy.github.io/keras-stateful-lstm/" rel="nofollow noreferrer">here</a></p>
<p>and this is my model</p>
<pre><code>x = Input(batch_shape=(1, None, 2048), name='x')
lstmR = LSTM(256, return_sequences=True, name='lstmR', stateful=True)(x)
lstmL = LSTM(256, return_sequences=True, go_backwards=True,name='lstmL', stateful=True)(x)
merge = merge([x, lstmR, lstmL], mode='concat', name='merge')
dense = Dense(256, activation='sigmoid', name='dense')(merge)
y = Dense(1, activation='sigmoid', name='y')(dense)
model = Model(input=x, output=y)
model.compile(loss='mean_squared_error',
optimizer=SGD(lr=0.01),
metrics=['accuracy'])
</code></pre>
<p>and tried to train the model using mini-batching</p>
<pre><code>for epoch in range(15):
mean_tr_acc = []
mean_tr_loss = []
for i in range(nb_samples):
x, y = get_train_sample(i)
for j in range(len(x)):
sample_x = x[j]
tr_loss, tr_acc = model.train_on_batch(np.expand_dims(np.expand_dims(sample_x, axis=0), axis=0),np.expand_dims(y, axis=0))
mean_tr_acc.append(tr_acc)
mean_tr_loss.append(tr_loss)
model.reset_states()
</code></pre>
<p>but it seems like the model cannot converge as it gives 0.3 accuracy</p>
<p>I also tried to do it with a stateless LSTM with input shape (None,1024) but it didn't converge either</p> | 2017-02-10 05:15:18.737000+00:00 | 2018-02-18 10:46:29.853000+00:00 | 2017-02-10 19:57:42.037000+00:00 | deep-learning|keras|lstm|recurrent-neural-network | ['https://arxiv.org/abs/1612.02903'] | 1
47,135,706 | <ol>
<li><p>You are right, there's no one best way to do it, just like there's no one best filter size or one best neural network architecture in general. </p>
<p>VGG-16 uses 2-3 convolutional layers between the pooling layers (the picture below), VGG-19 uses up to 4 layers, ...</p>
<p><a href="https://i.stack.imgur.com/3R0Kd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3R0Kd.png" alt="vgg-16"></a></p>
<p>... and GoogleNet applies an incredible number of convolutions (the picture below), in between and sometimes in parallel with maxpooling layers</p>
<p><a href="https://i.stack.imgur.com/BGVka.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BGVka.png" alt="google-net"></a></p>
<p>Each new layer, obviously, increases the network flexibility, so that it can approximate the more complex target functions. On the other hand, it requires more computation for training, however it's common to save computation using the <a href="https://stats.stackexchange.com/questions/194142/what-does-1x1-convolution-mean-in-a-neural-network">1x1 convolution trick</a>. How much flexibility does <em>your network</em> need? Greatly depends on the data, but usually 2-3 layers is flexible enough for most applications, and additional layers don't affect the performance. There's no better strategy than to cross-validate models of various depth. <em>(The pictures are from <a href="http://book.paddlepaddle.org/03.image_classification/" rel="noreferrer">this blog-post</a>)</em></p></li>
<li><p>This is a known issue and I'd like to mention here one particular technique that deals with too aggressive downsampling: <a href="https://arxiv.org/abs/1412.6071" rel="noreferrer">Fractional Pooling</a>. The idea is to apply <em>different-size</em> receptive fields for different neurons in the layer to reduce the image by any ratio: 90%, 75%, 66%, etc.</p>
<p><a href="https://i.stack.imgur.com/9pFeO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9pFeO.png" alt="fmp"></a></p>
<p>This is one of the ways to make deeper networks, particularly for small images like MNIST digits, and it demonstrated very good accuracy (0.32% test error).</p></li>
</ol> | 2017-11-06 11:16:53.473000+00:00 | 2017-11-06 11:16:53.473000+00:00 | null | null | 47,128,782 | <p>Generally we will insert max-pooling layers between convolution layers. The main idea is to "summarize" the features in conv. layers. But it's hard to decide when to insert. I have some questions behind this:</p>
<ol>
<li><p>How do we decide how many conv. layers to use before inserting a max-pooling layer, and what's the effect of too many/few conv. layers?</p></li>
<li><p>Max-pooling reduces the size, so if we want to use a very deep network we cannot apply max-pooling many times, otherwise the size becomes too small. For example, MNIST only has 28x28 inputs, but I do see some people experiment with very deep networks on it, so they might end up with a very small size? Actually, when the size is too small (extreme case, 1x1), it's just like a fully-connected layer, and doing convolution on it doesn't seem to make any sense.</p></li>
</ol>
<p>I know there is no golden rule, but I just want to figure out the basic intuition behind this, so that I can make reasonable choices when implementing a network</p> | 2017-11-06 02:12:48.660000+00:00 | 2017-11-06 11:16:53.473000+00:00 | null | machine-learning|neural-network|deep-learning|conv-neural-network | ['https://i.stack.imgur.com/3R0Kd.png', 'https://i.stack.imgur.com/BGVka.png', 'https://stats.stackexchange.com/questions/194142/what-does-1x1-convolution-mean-in-a-neural-network', 'http://book.paddlepaddle.org/03.image_classification/', 'https://arxiv.org/abs/1412.6071', 'https://i.stack.imgur.com/9pFeO.png'] | 6
71,494,404 | <p>The documentation of <code>pos_weight</code> is indeed a bit unclear. For <code>BCEWithLogitsLoss</code> <code>pos_weight</code> should be a <code>torch.tensor</code> of size=1:</p>
<pre class="lang-py prettyprint-override"><code>BCE_With_LogitsLoss=nn.BCEWithLogitsLoss(pos_weight=torch.tensor([class_wts[0]/class_wts[1]]))
</code></pre>
<p>However, in your case, where the positive class occurs only 2% of the time, I think setting <code>pos_weight</code> will not be enough.<br />
Please consider using <a href="https://stackoverflow.com/a/52161194/1714410">Focal loss</a>:<br />
<em>Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, Piotr Dollár</em> <a href="https://arxiv.org/abs/1708.02002" rel="nofollow noreferrer"><strong>Focal Loss for Dense Object Detection</strong></a> (ICCV 2017).<br />
Apart from describing Focal loss, this paper provides a very good explanation as to why CE loss performs so poorly in the case of imbalance. I strongly recommend reading this paper.</p>
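<p>If you want to experiment with it, here is a minimal, unofficial sketch of a binary focal loss in PyTorch (not the paper's reference code; <code>alpha</code> and <code>gamma</code> follow the paper's defaults):</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn.functional as F

def binary_focal_loss_with_logits(logits, targets, alpha=0.25, gamma=2.0):
    # Standard BCE per element, then down-weight easy examples by (1 - p_t)^gamma
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)           # prob. assigned to the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
</code></pre>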
<p>Other alternatives are listed <a href="https://stackoverflow.com/a/58213245/1714410">here</a>.</p> | 2022-03-16 09:17:32.140000+00:00 | 2022-03-16 09:17:32.140000+00:00 | null | null | 71,462,326 | <p>I have a neural network as below for binary prediction. My classes are heavily imbalanced and class 1 occurs only 2% of times. Showing last few layers only</p>
<pre><code>self.batch_norm2 = nn.BatchNorm1d(num_filters)
self.fc2 = nn.Linear(np.sum(num_filters), fc2_neurons)
self.batch_norm3 = nn.BatchNorm1d(fc2_neurons)
self.fc3 = nn.Linear(fc2_neurons, 1)
</code></pre>
<p>My loss is as below. Is this a correct way to calculate the <code>pos_weight</code> parameter? I looked into the official documentation at this <a href="https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html#torch.nn.BCEWithLogitsLoss" rel="nofollow noreferrer">link</a> and it shows that <code>pos_weight</code> needs to have one value for each class for multiclass classification. Not sure if for the binary case it is a different scenario. I tried to input 2 values and I was getting an error</p>
<p><em><strong>My question: for a binary problem, would <code>pos_weight</code> be a single value, unlike multiclass classification where it needs to be a list/array with length equal to the number of classes?</strong></em></p>
<pre><code>BCE_With_LogitsLoss=nn.BCEWithLogitsLoss(pos_weight=class_wts[0]/class_wts[1])
</code></pre>
<p>My y variable is a single variable that has 0 or 1 to represent the actual class and the neural network outputs a single value</p>
<p><strong>--------------------------------------------------Update 1</strong></p>
<p>Based upon the answer by Shai, I have the questions below:</p>
<ol>
<li><code>BCEWithLogitsLoss</code> - if it is a multiclass problem then how to use the <code>pos_weight</code> parameter?</li>
<li>Is there any example of using focal loss in pytorch? I found some links but most of them were old - dating 2 or 3 or more years</li>
<li>For training I am oversampling my class 1. Is focal loss still appropriate?</li>
</ol> | 2022-03-14 02:07:49.943000+00:00 | 2022-03-16 13:49:03.750000+00:00 | 2022-03-16 13:49:03.750000+00:00 | neural-network|pytorch | ['https://stackoverflow.com/a/52161194/1714410', 'https://arxiv.org/abs/1708.02002', 'https://stackoverflow.com/a/58213245/1714410'] | 3 |
46,122,670 | <p>Adversarial training methods (as a means of regularization) may be worth looking into.
<a href="https://arxiv.org/abs/1605.07725" rel="nofollow noreferrer">Adversarial Training Methods for Semi-Supervised Text Classification</a> </p> | 2017-09-08 18:36:32.117000+00:00 | 2017-09-08 18:36:32.117000+00:00 | null | null | 43,843,538 | <h2>Objective :</h2>
<ul>
<li>Identifying class label using user entered question (like Question
Answer system). </li>
<li>Data extracted from Big PDF file, and need to predict
page number based on user input.</li>
<li>Mainly used for policy documents, where
users have questions about the policy and we need to show the particular
page number.</li>
</ul>
<hr>
<p>Previous Implementation:
Applied elastic-search, but accuracy was very low because users can enter any text, like "I need" == "want to" </p>
<hr>
<p>Dataset information:
Each row of the dataset contains a Text (or paragraph) and a Label (the page number). Here the dataset size is small; I have only 500 rows.</p>
<h2>Current Implementation :</h2>
<ul>
<li>Applied word embedding (GloVe) with LSTM in Keras; the back-end is
TensorFlow </li>
<li>Applied Dropout </li>
<li>Applied ActivityRegularization </li>
<li>Applied L2 W_regularizer( from 0.1 to 0.001) </li>
<li>Applied different nb_epoch from 10 to 600 </li>
<li>Changed EMBEDDING_DIM from 100 to 300 of Glove Data</li>
</ul>
<hr>
<p>Applied NLP for,</p>
<ul>
<li>Convert to lower case </li>
<li>Remove Stop word of English </li>
<li>Stemming </li>
<li>Remove numbers </li>
<li>Remove URL and IP address</li>
</ul>
<p>Result: Accuracy on test data (or validation data) is 23% but on train data is 91%</p>
<hr>
<h2>Code :</h2>
<pre><code>import time
from time import strftime
import numpy as np
from keras.callbacks import CSVLogger, ModelCheckpoint
from keras.layers import Dense, Input, LSTM, ActivityRegularization
from keras.layers import Embedding, Dropout,Bidirectional
from keras.models import Model
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer
from keras.regularizers import l2
from keras.utils import to_categorical
import pickle
from DataGenerator import *
BASE_DIR = ''
GLOVE_DIR = 'D:/Dataset/glove.6B' # BASE_DIR + '/glove.6B/'
MAX_SEQUENCE_LENGTH = 50
MAX_NB_WORDS = 20000
EMBEDDING_DIM = 300
VALIDATION_SPLIT = 0.2
# first, build index mapping words in the embeddings set
# to their embedding vector
np.random.seed(1337) # for reproducibility
print('Indexing word vectors.')
t_start = time.time()
embeddings_index = {}
if os.path.exists('pickle/glove.pickle'):
print('Pickle found..')
with open('pickle/glove.pickle', 'rb') as handle:
embeddings_index = pickle.load(handle)
else:
print('Pickle not found...')
f = open(os.path.join(GLOVE_DIR, 'glove.6B.300d.txt'), encoding='utf8')
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
with open('pickle/glove.pickle', 'wb') as handle:
pickle.dump(embeddings_index, handle, protocol=pickle.HIGHEST_PROTOCOL)
print('Found %s word vectors.' % len(embeddings_index))
# second, prepare text samples and their labels
print('Processing text dataset')
texts = [] # list of text samples
labels = [] # list of label ids
labels_index = {} # dictionary mapping label name to numeric id
(texts, labels, labels_index) = get_data('D:/PolicyDocument/')
print('Found %s texts.' % len(texts))
# finally, vectorize the text samples into a 2D integer tensor
tokenizer = Tokenizer(nb_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
labels = to_categorical(np.asarray(labels))
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)
# split the data into a training set and a validation set
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
num_validation_samples = int(VALIDATION_SPLIT * data.shape[0])
x_train = data[:-num_validation_samples]
y_train = labels[:-num_validation_samples]
x_val = data[-num_validation_samples:]
y_val = labels[-num_validation_samples:]
# prepare embedding matrix
num_words = min(MAX_NB_WORDS, len(word_index))
embedding_matrix = np.zeros((num_words + 1, EMBEDDING_DIM))
print('Preparing embedding matrix. :', embedding_matrix.shape)
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
# load pre-trained word embeddings into an Embedding layer
# note that we set trainable = False so as to keep the embeddings fixed
embedding_layer = Embedding(embedding_matrix.shape[0],
embedding_matrix.shape[1],
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
mask_zero=True,
trainable=False)
print('Training model.')
csv_file = "logs/training_log_" + strftime("%Y-%m-%d %H-%M", time.localtime()) + ".csv"
model_file = "models/Model_" + strftime("%Y-%m-%d %H-%M", time.localtime()) + ".mdl"
print("Model file:" + model_file)
csv_logger = CSVLogger(csv_file)
# train a 1D convnet with global maxpooling
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
rate_drop_lstm = 0.15 + np.random.rand() * 0.25
num_lstm = np.random.randint(175, 275)
rate_drop_dense = 0.15 + np.random.rand() * 0.25
x = LSTM(num_lstm, return_sequences=True, W_regularizer=l2(0.001))(embedded_sequences)
x = Dropout(0.5)(x)
x = LSTM(64)(x)
x = Dropout(0.25)(x)
x = ActivityRegularization(l1=0.01, l2=0.001)(x)
preds = Dense(len(labels_index), activation='softmax')(x)
model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['acc'])
model_checkpoint = ModelCheckpoint(model_file, monitor='val_loss', verbose=0, save_best_only=True,
save_weights_only=False, mode='auto')
model.fit(x_train, y_train,
batch_size=1,
nb_epoch=600,
validation_data=(x_val, y_val), callbacks=[csv_logger, model_checkpoint])
score = model.evaluate(x_val, y_val, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
t_end = time.time()
total = t_end - t_start
ret_str = "Time needed(s): " + str(total)
print(ret_str)
</code></pre> | 2017-05-08 08:56:31.437000+00:00 | 2017-09-08 18:36:32.117000+00:00 | 2017-05-18 07:01:26.903000+00:00 | tensorflow|keras|lstm|text-classification|word-embedding | ['https://arxiv.org/abs/1605.07725'] | 1 |
55,999,707 | <p>The approach you might have to take will depend upon whether you wish to generalize to new unseen maps and be able to segment the map in to feasible(available for robot navigation) and infeasible (wall/other ojects/obstacles). Please be aware that you need to generate these maps dynamically if your environment will change over time(like moving obstacles/other robots/objects). For this if you have a good amount of annotated training data with maps with wall regions marked(segmented out), you could use a standard neural network based segmentation algorithm like Mask-RCNN (<a href="https://github.com/matterport/Mask_RCNN" rel="nofollow noreferrer">https://github.com/matterport/Mask_RCNN</a>) on your dataset. Alternatively, if do not have a lot of annotated data and you just want a general purpose path planning algorithm, that can plan on a path from point A to B with out running in to obstacles you could use a MPC based obstacle avoidance algorithms as ones described in <a href="https://arxiv.org/abs/1805.09633" rel="nofollow noreferrer">https://arxiv.org/abs/1805.09633</a> / <a href="https://www.tandfonline.com/doi/full/10.1080/00423114.2018.1492141" rel="nofollow noreferrer">https://www.tandfonline.com/doi/full/10.1080/00423114.2018.1492141</a></p> | 2019-05-06 06:17:19.450000+00:00 | 2019-05-06 06:17:19.450000+00:00 | null | null | 55,999,318 | <p>I want to design a neural network / ConvNet to generate a set of points on a given map, which correspond to possible positions of a robot. The map contains a lot of empty space for walls, and the robots can't be in those positions. Therefore, the network should take in the map, and generate pairs of numbers (x, y) corresponding to places on the map that is not wall. What would be an appropriate choice of neural network structure to implement this task?</p> | 2019-05-06 05:33:45.837000+00:00 | 2019-05-06 06:17:19.450000+00:00 | null | neural-network|computer-vision|conv-neural-network | ['https://github.com/matterport/Mask_RCNN', 'https://arxiv.org/abs/1805.09633', 'https://www.tandfonline.com/doi/full/10.1080/00423114.2018.1492141'] | 3 |
71,792,968 | <p>Getting a ISBN costs money, and I do not think it will work well for free online only books. However, getting a DOI is free and easy either by publishing the book as a pre-print on <a href="https://arxiv.org/" rel="nofollow noreferrer">arxiv</a> without peer review, or using <a href="https://zenodo.org/" rel="nofollow noreferrer">zenodo</a>. You can even automatically generate new DOIs for newer versions with <a href="https://github.com/ivotron/zenodo" rel="nofollow noreferrer">GitHub actions</a>.</p> | 2022-04-08 07:07:03.657000+00:00 | 2022-04-08 07:07:03.657000+00:00 | null | null | 71,792,877 | <p>I've written a technical book using Bookdown, which is hosted on GitHub and is open access. As an academic, I'd like people to be able to cite the book, and normally that would be done with an ISBN and DOI. However, I'm not sure what is the best way to get hold of these.</p>
<p>Could anyone tell me the best way to go about this? I am not looking for any royalties, and the book will likely be updated every now and then, so I don't see much point in using a self-publishing service like Amazon, or going through a classical publisher like Springer, CRC, etc.</p>
<p>My ideal end scenario would be just to have the book open access online, but so that it can be cited (I mean properly cited, ideally with ISBN and/or DOI). Any ideas?</p>
<p>p.s. apologies if Stack Overflow is not the place for asking this, not sure where else to ask.</p> | 2022-04-08 06:57:38.457000+00:00 | 2022-04-08 07:07:03.657000+00:00 | null | r|bookdown | ['https://arxiv.org/', 'https://zenodo.org/', 'https://github.com/ivotron/zenodo'] | 3 |
61,616,960 | <p>Devroye's <em><a href="http://luc.devroye.org/rnbookindex.html" rel="nofollow noreferrer">Non-Uniform Random Variate Generation</a></em>, pp. 505 and 86, mentions an inversion by sequential search algorithm.</p>
<p>Based on that algorithm, if you know the <code>mean</code> is considerably less than 1, then if you generate a uniform random variate <code>u</code> in [0, 1], the Poisson variable will be 0 if <code>u <= exp(-mean)</code>, and greater than 0 otherwise.</p>
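<p>For illustration only (in Python rather than C++, and not optimized to conserve random bits), that shortcut amounts to inversion by sequential search with the zero case checked first:</p>
<pre><code>import math, random

def poisson_small_mean(mean):
    # One uniform decides the common case of 0; otherwise fall through to
    # inversion by sequential search for the rare values >= 1
    u = random.random()
    total = prod = math.exp(-mean)
    if u <= total:
        return 0
    i = 0
    while u > total:
        prod *= mean / (i + 1)
        total += prod
        i += 1
    return i
</code></pre>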
<p>If the mean is low and you can tolerate an approximate distribution, then you can use the following approach (see Appendix A of "<a href="https://arxiv.org/pdf/2004.00010.pdf" rel="nofollow noreferrer">The Discrete Gaussian for Differential Privacy</a>"):</p>
<ol>
<li>Express <code>mean</code> in the form of a rational number, in the form <code>numer</code>/<code>denom</code>. For example, if <code>mean</code> is a fixed value, then <code>numer</code> and <code>denom</code> can be precalculated accordingly, such as at compile time.</li>
<li>Randomly generate a Bernoulli(<code>numer / denom</code>) number (generate 1 with probability <code>numer / denom</code> or 0 otherwise). If 1 was generated this way, repeat this step with Bernoulli(<code>numer / (denom * 2)</code>), Bernoulli(<code>numer / (denom * 3)</code>), and so on until 0 is generated this way. Generate these numbers using an algorithm that minimizes waste of bits, such as the one mentioned in Appendix B of Lumbroso's Fast Dice Roller paper (2013) or the "ZeroToOne" method modified from there and given in my section on <a href="https://github.com/peteroupc/peteroupc.github.io/blob/master/randomfunc.md#boolean-truefalse-conditions" rel="nofollow noreferrer">Boolean conditions</a>. See also <a href="https://stackoverflow.com/questions/60777414/uniformly-distributed-bit-sequence">this question</a>.</li>
<li>If step 2 produced an even number of ones, the Poisson variable is exactly 0.</li>
<li>If step 2 produced an odd number of ones, the Poisson variable is greater than 0, and a "slower" algorithm is necessary that samples only Poisson variables greater than 0.</li>
</ol>
<p>For example, say the mean is 1e-6 (1/1000000). Generate a Bernoulli(1/1000000) number, then Bernoulli(1/2000000), etc., until you generate 0 this way. If an even number of ones were generated, then the Poisson variable is exactly 0. Otherwise, the Poisson variable is 1 or greater and a "slower" algorithm is necessary.</p>
<p>One example is the algorithm below, which is based on the one from pages 505 and 86, but only samples Poisson variables 1 or greater:</p>
<pre><code>METHOD Poisson1OrGreater(mean)
sum=Math.exp(-mean)
prod=sum
u=RNDRANGE(sum, 1)
i=0
while i==0 or u>sum
prod*=mean/(i+1)
sum+=prod
i=i+1
end
return i
END METHOD
</code></pre>
<p>This method, though, is not very robust, especially since it uses numbers close to 1 (where the floating-point space is more sparse) rather than numbers close to 0.</p>
<hr />
<p>Note that the sum of <code>n</code> independent Poisson(<code>mean</code>) random variates is Poisson(<code>mean*n</code>) distributed (p. 501). Thus, the discussion above in this answer applies to a sum of <code>n</code> Poisson random variates as long as <code>n</code> times their mean remains small. For example, to generate a sum of 1000 Poisson random variates with a mean of 1e-6, simply generate a single Poisson random variate with a mean of 0.001. This will save considerably on calls to the pseudorandom number generator.</p>
<hr />
<p>There is another way to generate Poisson variates with low mean (1 or less). It is described by Duchon and Duvignau in "Preserving the number of cycles of length k in a growing uniform permutation", Electronic Journal of Combinatorics 23(4), 2016.</p>
<p>First, generate a Poisson(1) random variate <code>x = Poisson1()</code> using the algorithm given below which uses only integer arithmetic (where <code>RNDINT(a)</code> generates a uniform random integer in [0, <code>a</code>]):</p>
<pre><code>METHOD Poisson1()
ret=1; a=1; b=0
while true // until this method returns
j=RNDINT(a)
if j<a and j<b: return ret
if j==a: ret=ret+1
else
ret=ret-1; b=a+1
end
a=a+1
end
END METHOD
</code></pre>
<p>Now let <code>mean</code> be the desired mean. Flip a coin <code>x</code> times, where the coin shows heads with probability equal to <code>mean</code>. (In other words, generate a binomial(<code>x</code>, <code>mean</code>) random variate.) The number of heads is a Poisson(<code>mean</code>) random variate.</p> | 2020-05-05 15:35:38.750000+00:00 | 2021-10-16 17:57:07.213000+00:00 | 2021-10-16 17:57:07.213000+00:00 | null | 61,614,458 | <p>In order to draw random number from a Poisson distribution in C++, it is generally advised to use</p>
<pre><code>RNG_type rng;
std::poisson_distribution<size_t> d(1e-6);
auto r = d(rng);
</code></pre>
<p>At each call of the <code>std::poisson_distribution</code> object, an entire sequence of random bits is consumed (e.g. 32 bits with <a href="http://www.cplusplus.com/reference/random/mt19937/" rel="nofollow noreferrer">std::mt19937</a>, 64 bits for <a href="http://www.cplusplus.com/reference/random/mt19937_64/" rel="nofollow noreferrer">std::mt19937_64</a>). It strikes me that with such low mean (<code>mean = 1e-6</code>), the vast majority of times, only a few bits are enough to determine that the value to return is 0. The other bits could then be cached for later use.</p>
<p>Assuming that a sequence of bits set to true is associated to a high returned value from the Poisson distribution, when using a mean of <code>1e-6</code>, any sequence not starting with 19 trues necessarily returns a zero! Indeed, </p>
<pre><code>1 - 1/2^19 < P(0, 1e-6) < 1 - 1/2^20
</code></pre>
<p>, where <code>P(n, r)</code> denotes the probability of drawing <code>n</code> from a Poisson distribution with mean <code>r</code>. An algorithm that does not waste bits would use one bit half of the time, two bits a quarter of the times, three bits an eighth of the times, ....</p>
<p><strong>Is there an algorithm out there that can improve performance by consuming as few bits as possible when drawing Poisson numbers? Is there another way to improve performance compared to <code>std::poisson_distribution</code> when we consider a low mean?</strong> </p>
<hr>
<p>In response to @Jarod42's comment who said</p>
<blockquote>
<p>Wonder if using fewer bits don't break equiprobability...</p>
</blockquote>
<p>I don't think it would break equiprobability. In a vague attempt to test it, I consider the same question with a simple bernoulli distribution. I am sampling true with a probability <code>1/2^4</code> and sampling false with a probability <code>1 - 1/2^4</code>. The function <code>drawWithoutWastingBits</code> stops as soon as it sees a true in the cache and the function <code>drawWastingBits</code> consumes 4 bits whatever these bits are.</p>
<pre><code>#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
#include <random>
bool drawWithoutWastingBits(std::vector<bool>& cache, size_t& cache_index)
{
/*
Get a true with probability 1/2^4 (=1/16=0.0625) and a false otherwise
*/
size_t nbTrues = 0;
while (cache[cache_index])
{
++nbTrues;
++cache_index;
if (nbTrues == 4)
{
return true;
}
}
++cache_index;
return false;
}
bool drawWastingBits(std::vector<bool>& cache, size_t& cache_index)
{
/*
Get a true with probability 1/2^4 (=1/16=0.0625) and a false otherwise
*/
bool isAnyTrue = false;
for (size_t i = 0 ; i < 4; ++i)
{
if (cache[cache_index])
{
isAnyTrue = true;
}
++cache_index;
}
return !isAnyTrue;
}
int main()
{
/*
Just cache a lot of bits in advance in `cache`. The same sequence of bits will be used by both function.
I am just caching way enough bits to make sure they don't run out of bits below
I made sure to have the same number of zeros and ones so that any deviation is caused by the methodology and not by the RNG
*/
// Produce cache
std::vector<bool> cache;
size_t nbBitsToCache = 1e7;
cache.reserve(nbBitsToCache);
for (size_t i = 0 ; i < nbBitsToCache/2 ; ++i)
{
cache.push_back(false);
cache.push_back(true);
}
// Shuffle cache
{
std::mt19937 mt(std::random_device{}());
std::shuffle(cache.begin(), cache.end(), mt);
}
// Draw without wasting bits
{
size_t nbDraws = 1e6;
size_t cache_index = 0;
std::pair<size_t, size_t> outcomes = {0,0};
for (size_t r = 0 ; r < nbDraws ; ++r)
{
drawWithoutWastingBits(cache, cache_index) ? ++outcomes.first : ++outcomes.second;
assert(cache_index <= cache.size());
}
assert(outcomes.first + outcomes.second == nbDraws);
std::cout << "Draw Without Wasting Bits: prob true = " << (double)outcomes.first / nbDraws << "\n";
}
// Draw wasting bits
{
size_t nbDraws = 1e6;
size_t cache_index = 0;
std::pair<size_t, size_t> outcomes = {0,0};
for (size_t r = 0 ; r < nbDraws ; ++r)
{
drawWastingBits(cache, cache_index) ? ++outcomes.first : ++outcomes.second;
assert(cache_index <= cache.size());
}
assert(outcomes.first + outcomes.second == nbDraws);
std::cout << "Draw Wit Wasting Bits: prob true = " << (double)outcomes.first / nbDraws << "\n";
}
}
</code></pre>
<p>Possible output</p>
<pre><code>Draw Without Wasting Bits: prob true = 0.062832
Draw Wit Wasting Bits: prob true = 0.062363
</code></pre> | 2020-05-05 13:34:22.327000+00:00 | 2021-10-16 17:57:07.213000+00:00 | 2020-05-05 14:49:56.920000+00:00 | c++|performance|random|probability|poisson | ['http://luc.devroye.org/rnbookindex.html', 'https://arxiv.org/pdf/2004.00010.pdf', 'https://github.com/peteroupc/peteroupc.github.io/blob/master/randomfunc.md#boolean-truefalse-conditions', 'https://stackoverflow.com/questions/60777414/uniformly-distributed-bit-sequence'] | 4 |
66,925,379 | <p>Yes, you can use residual networks in fully connected networks. Skip connections help the learning for fully connected layers.</p>
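<p>As a rough sketch (not taken from any particular paper), a fully connected residual block can be written in PyTorch like this; the dimensions below are arbitrary:</p>
<pre><code>import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    # Fully connected block with a skip connection: out = relu(x + F(x))
    def __init__(self, dim, hidden):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)   # project back so shapes match for the skip

    def forward(self, x):
        residual = self.fc2(torch.relu(self.fc1(x)))
        return torch.relu(x + residual)

block = ResidualDenseBlock(dim=128, hidden=256)
out = block(torch.randn(4, 128))            # output keeps the input dimension
</code></pre>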
<p>Here is a nice paper (not mine unfortunately) where it is done and where the authors explain in detail why it helps the learning. <a href="https://arxiv.org/pdf/1701.09175.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1701.09175.pdf</a></p> | 2021-04-02 21:21:47.533000+00:00 | 2021-04-02 21:21:47.533000+00:00 | null | null | 62,023,882 | <p>Residual networks are always built with convolutional layers. I have never seen residual networks with only fully connected layers. Does it work to build a residual network with only fully connected layers?</p> | 2020-05-26 14:00:13.063000+00:00 | 2021-06-20 15:54:43.980000+00:00 | null | python|machine-learning|neural-network|conv-neural-network|deep-residual-networks | ['https://arxiv.org/pdf/1701.09175.pdf'] | 1 |
62,024,327 | <p>So, let's start with: what is the aim of ResNets?</p>
<p>Given an input <code>X</code> that is propagated through a certain ensemble of layers, let's call <code>F(X)</code> the output of this ensemble. If we denote by <code>H(X)</code> the desired output (the ideal mapping, i.e. <code>F(X)!=H(X)</code>), a resnet learns <code>H(X) = F(X) + X</code>, which can be written as <code>F(X) = H(X)-X</code>, i.e. the residual, from which the name residual network.</p>
<p>Thus, what is the gain of a resnet?</p>
<p>In a resnet, the mapping of a following layer performs at least as well as that of the previous one. Why? Because, at the very least, it can learn the identity mapping (<code>F(X)=X</code>).</p>
<p>This is a crucial aspect of convolutional networks. Indeed, deeper nets should perform better than networks with less depth, but this does not always happen. From this arises the necessity to build a network that guarantees such behavior.</p>
<p>Is this true also for dense networks?
No, it is not. There is a known theorem (the Universal Approximation Theorem) for dense nets, which states that any such network is equivalent to a net with two dense layers and an adequate number of hidden units distributed between them. For this reason, it is not necessary to increase the depth of a dense net; rather, it is necessary to find the right number of hidden units.</p>
<p>If you want you can explore the original <a href="https://arxiv.org/pdf/1512.03385.pdf" rel="nofollow noreferrer">paper</a> by He et al 2015.</p> | 2020-05-26 14:21:39.673000+00:00 | 2020-10-07 23:49:24.940000+00:00 | 2020-10-07 23:49:24.940000+00:00 | null | 62,023,882 | <p>Residual networks are always built with convolutional layers. I have never seen residual networks with only fully connected layers. Does it work to build a residual network with only fully connected layers?</p> | 2020-05-26 14:00:13.063000+00:00 | 2021-06-20 15:54:43.980000+00:00 | null | python|machine-learning|neural-network|conv-neural-network|deep-residual-networks | ['https://arxiv.org/pdf/1512.03385.pdf'] | 1 |
56,294,846 | <p>I think implementing for a discrete action space such as Cartpole-v1 is easier than for continuous action spaces. But for continuous action spaces, this is the most straightforward implementation I found in PyTorch, as you can clearly see how they get <code>mu</code> and <code>std</code>, whereas I could not with more renowned implementations such as OpenAI Baselines, Spinning Up or Stable Baselines. </p>
<p><a href="https://github.com/higgsfield/RL-Adventure-2/blob/master/3.ppo.ipynb" rel="nofollow noreferrer">RL-Adventure PPO</a></p>
<p>These lines from the link above:</p>
<pre><code>class ActorCritic(nn.Module):
def __init__(self, num_inputs, num_outputs, hidden_size, std=0.0):
super(ActorCritic, self).__init__()
self.critic = nn.Sequential(
nn.Linear(num_inputs, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, 1)
)
self.actor = nn.Sequential(
nn.Linear(num_inputs, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, num_outputs),
)
self.log_std = nn.Parameter(torch.ones(1, num_outputs) * std)
self.apply(init_weights)
def forward(self, x):
value = self.critic(x)
mu = self.actor(x)
std = self.log_std.exp().expand_as(mu)
dist = Normal(mu, std)
return dist, value
</code></pre>
<p>and the clipping:</p>
<pre><code>def ppo_update(ppo_epochs, mini_batch_size, states, actions, log_probs, returns, advantages, clip_param=0.2):
for _ in range(ppo_epochs):
for state, action, old_log_probs, return_, advantage in ppo_iter(mini_batch_size, states, actions, log_probs, returns, advantages):
dist, value = model(state)
entropy = dist.entropy().mean()
new_log_probs = dist.log_prob(action)
ratio = (new_log_probs - old_log_probs).exp()
surr1 = ratio * advantage
surr2 = torch.clamp(ratio, 1.0 - clip_param, 1.0 + clip_param) * advantage
</code></pre>
<p>I found the link above in the comments on this video on Youtube:</p>
<p><a href="https://www.youtube.com/watch?v=5P7I-xPq8u8" rel="nofollow noreferrer">arxiv insights PPO</a></p> | 2019-05-24 14:38:52.113000+00:00 | 2019-05-24 14:47:07.363000+00:00 | 2019-05-24 14:47:07.363000+00:00 | null | 46,422,845 | <p>I know the basics of Reinforcement Learning, but what terms it's necessary to understand to be able read <a href="https://arxiv.org/abs/1707.06347" rel="noreferrer">arxiv PPO paper</a> ?</p>
<p>What is the roadmap to learn and use <a href="https://blog.openai.com/openai-baselines-ppo/" rel="noreferrer">PPO</a> ?</p> | 2017-09-26 09:36:42.703000+00:00 | 2021-07-13 15:54:47.997000+00:00 | null | machine-learning|reinforcement-learning | ['https://github.com/higgsfield/RL-Adventure-2/blob/master/3.ppo.ipynb', 'https://www.youtube.com/watch?v=5P7I-xPq8u8'] | 2 |
46,476,149 | <p>PPO is a simple algorithm, which falls into the class of policy optimization algorithms (as opposed to value-based methods such as DQN). If you "know" the RL basics (I mean you have at least thoughtfully read the first chapters of <a href="http://www.incompleteideas.net/book/the-book-2nd.html" rel="noreferrer">Sutton's book</a>, for example), then a first logical step is to get familiar with policy gradient algorithms. You can read <a href="https://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf" rel="noreferrer">this paper</a> or chapter 13 of the new edition of <a href="http://www.incompleteideas.net/book/the-book-2nd.html" rel="noreferrer">Sutton's book</a>. Additionally, you may also read <a href="https://arxiv.org/pdf/1502.05477.pdf" rel="noreferrer">this paper</a> on TRPO, which is previous work by PPO's first author (just note that this paper has numerous notational mistakes). Hope that helps. --Mehdi</p> | 2017-09-28 18:36:34.597000+00:00 | 2017-12-15 14:53:39.710000+00:00 | 2017-12-15 14:53:39.710000+00:00 | null | 46,422,845 | <p>I know the basics of Reinforcement Learning, but what terms it's necessary to understand to be able read <a href="https://arxiv.org/abs/1707.06347" rel="noreferrer">arxiv PPO paper</a> ?</p>
<p>What is the roadmap to learn and use <a href="https://blog.openai.com/openai-baselines-ppo/" rel="noreferrer">PPO</a> ?</p> | 2017-09-26 09:36:42.703000+00:00 | 2021-07-13 15:54:47.997000+00:00 | null | machine-learning|reinforcement-learning | ['http://www.incompleteideas.net/book/the-book-2nd.html', 'https://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf', 'http://www.incompleteideas.net/book/the-book-2nd.html', 'https://arxiv.org/pdf/1502.05477.pdf'] | 4 |
50,663,200 | <p>To better understand PPO, it is helpful to look at the main contributions of the paper, which are: <strong>(1)</strong> the Clipped Surrogate Objective and <strong>(2)</strong> the use of "multiple epochs of stochastic gradient ascent to perform each policy update".</p>
<br>
<p>From the original <a href="https://arxiv.org/abs/1707.06347" rel="noreferrer">PPO paper</a>:</p>
<blockquote>
<p>We have introduced [PPO], a family of policy optimization methods that use <strong>multiple epochs of stochastic gradient ascent to perform each policy update</strong>. These methods have the stability and reliability of trust-region [<a href="https://arxiv.org/abs/1502.05477" rel="noreferrer">TRPO</a>] methods but are much simpler to implement, requiring <strong>only a few lines of code change to a vanilla policy gradient implementation</strong>, applicable in more general settings (for example, when using a joint architecture for the policy and value function), and have better overall performance.</p>
</blockquote>
<hr />
<h1 id="the-clipped-surrogate-objective-98d7">1. The Clipped Surrogate Objective</h1>
<p>The Clipped Surrogate Objective is a drop-in replacement for the policy gradient objective that is designed to improve training stability by limiting the change you make to your policy at each step.</p>
<p>For vanilla policy gradients (e.g., REINFORCE) --- which you should be familiar with, or <a href="http://karpathy.github.io/2016/05/31/rl/" rel="noreferrer">familiarize yourself with</a> before you read this --- the objective used to optimize the neural network looks like:</p>
<p><a href="https://i.stack.imgur.com/5VZRT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5VZRT.png" alt="PG objective" /></a></p>
<p>This is the standard formula that you would see in the <a href="http://incompleteideas.net/book/the-book-2nd.html" rel="noreferrer">Sutton book</a>, and <a href="http://karpathy.github.io/2016/05/31/rl/" rel="noreferrer">other</a> <a href="http://rail.eecs.berkeley.edu/deeprlcourse-fa17/index.html" rel="noreferrer">resources</a>, where the A-hat could be the discounted return (as in REINFORCE) or the advantage function (as in <a href="https://arxiv.org/abs/1506.02438" rel="noreferrer">GAE</a>) for example. By taking a gradient ascent step on this loss with respect to the network parameters, you will incentivize the actions that led to higher reward.</p>
<p>The vanilla policy gradient method uses the log probability of your action (log π(a | s)) to trace the impact of the actions, but you could imagine using another function to do this. Another such function, introduced in <a href="https://people.eecs.berkeley.edu/%7Epabbeel/cs287-fa09/readings/KakadeLangford-icml2002.pdf" rel="noreferrer">this paper</a>, uses the probability of the action under the <em>current policy</em> (π(a|s)), divided by the probability of the action under your <em>previous policy</em> (π_old(a|s)). This looks a bit similar to importance sampling if you are familiar with that:</p>
<p><a href="https://i.stack.imgur.com/bCAEy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bCAEy.png" alt="r eq" /></a></p>
<p>This r(θ) will be greater than 1 when the action is <em>more</em> probable for your <em>current</em> policy than it is for your <em>old</em> policy; it will be between 0 and 1 when the action is less probable for your current policy than for your old.</p>
<p>Now to build an objective function with this r(θ), we can simply swap it in for the log π(a|s) term. This is what is done in TRPO:</p>
<p><a href="https://i.stack.imgur.com/EMMPa.png" rel="noreferrer"><img src="https://i.stack.imgur.com/EMMPa.png" alt="TRPO objective" /></a></p>
<p><strong>But what would happen here if your action is much more probable (like 100x more) for your current policy?</strong> r(θ) will tend to be really big and lead to taking big gradient steps that might wreck your policy. To deal with this and other issues, TRPO adds several extra bells and whistles (e.g., KL Divergence constraints) to limit the amount the policy can change and help guarantee that it is monotonically improving.</p>
<p>Instead of adding all these extra bells and whistles, what if we could build these stabilizing properties into the objective function? As you might guess, this is what PPO does. It gains the same performance benefits as TRPO and avoids the complications by optimizing this simple (but kind of funny looking) Clipped Surrogate Objective:</p>
<p><a href="https://i.stack.imgur.com/zt9mz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zt9mz.png" alt="annotated clipped surrogate" /></a></p>
<p>The first term (blue) inside the minimization is the same (r(θ)A) term we saw in the TRPO objective. The second term (red) is a version where the (r(θ)) is clipped between (1 - e, 1 + e). (in the paper they state a good value for e is about 0.2, so r can vary between ~(0.8, 1.2)). Then, finally, the minimization of both of these terms is taken (green).</p>
<p>Take your time and look at the equation carefully and make sure you know what all the symbols mean, and mathematically what is happening. Looking at the code may also help; here is the relevant section in both the OpenAI <a href="https://github.com/openai/baselines/blob/9fa8e1baf1d1f975b87b369a8082122eac812eb1/baselines/ppo1/pposgd_simple.py#L111-L117" rel="noreferrer">baselines</a> and <a href="https://github.com/unixpickle/anyrl-py/blob/953ad68d6507b83583e342b3210ed98e03a86a4f/anyrl/algos/ppo.py#L149-L155" rel="noreferrer">anyrl-py</a> implementations.</p>
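<p>If it helps to see it all in one place, here is a minimal PyTorch sketch of the clipped objective (the function and variable names are mine, not taken from the paper or the linked implementations):</p>
<pre><code>import torch

def clipped_surrogate(new_log_probs, old_log_probs, advantages, eps=0.2):
    # r(theta) = pi(a|s) / pi_old(a|s), computed in log space for numerical stability
    ratio = (new_log_probs - old_log_probs).exp()
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # element-wise minimum of the two terms, averaged over the batch;
    # negate the result if your optimizer minimizes a loss
    return torch.min(unclipped, clipped).mean()
</code></pre>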
<p>Great.</p>
<p>Next, let's see what effect the L clip function creates. Here is a diagram from the paper that plots the value of the clip objective for when the Advantage is positive and negative:</p>
<p><a href="https://i.stack.imgur.com/F6SxR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/F6SxR.png" alt="Clip intro" /></a></p>
<p>On the left half of the diagram, where (A > 0), this is where the action had an estimated positive effect on the outcome. On the right half of the diagram, where (A < 0), this is where the action had an estimated negative effect on the outcome.</p>
<p>Notice how on the left half, the r-value gets clipped if it gets too high. This will happen if the action became a lot more probable under the current policy than it was for the old policy. When this happens, we do not want to get greedy and step too far (because this is just a local approximation and sample of our policy, so it will not be accurate if we step too far), and so we clip the objective to prevent it from growing. (This will have the effect in the backward pass of blocking the gradient --- the flat line causing the gradient to be 0).</p>
<p>On the right side of the diagram, where the action had an estimated <em>negative</em> effect on the outcome, we see that the clip activates near 0, where the action under the current policy is unlikely. This clipping region will similarly prevent us from updating too much to make the action much less probable after we already just took a big step to make it less probable.</p>
<p>So we see that both of these clipping regions prevent us from getting too greedy and trying to update too much at once and leaving the region where this sample offers a good estimate.</p>
<p><strong>But why are we letting the r(θ) grow indefinitely on the far right side of the diagram? This seems odd at first, but what would cause r(θ) to grow really large in this case?</strong> Growth of r(θ) in this region will be caused by a gradient step that made our action <em>a lot more probable</em>, while it turned out to make our policy <em>worse</em>. If that is the case, we would want to be able to undo that gradient step. And it just so happens that the L clip function allows this. The function is negative here, so the gradient will tell us to walk in the other direction and make the action less probable by an amount proportional to how much we screwed it up. (Note that there is a similar region on the far left side of the diagram, where the action is good and we accidentally made it less probable.)</p>
<p>These "undo" regions explain why we must include the weird minimization term in the objective function. They correspond to the unclipped r(θ)A having a lower value than the clipped version and getting returned by the minimization. This is because they were steps in the wrong direction (e.g., the action was good but we accidentally made it less probable). If we had not included the min in the objective function, these regions would be flat (gradient = 0) and we would be prevented from fixing mistakes.</p>
<p>Here is a diagram summarizing this:</p>
<p><a href="https://i.stack.imgur.com/gasbI.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gasbI.png" alt="L Clip Diagram" /></a></p>
<p>And that is the gist of it. The Clipped Surrogate Objective is just a drop-in replacement you could use in the vanilla policy gradient. The clipping limits the effective change you can make at each step in order to improve stability, and the minimization allows us to fix our mistakes in case we screwed it up. One thing I didn't discuss is what is meant by PPO objective forming a "lower bound" as discussed in the paper. For more on that, I would suggest <a href="https://youtu.be/gqX8J38tESw?t=14m1s" rel="noreferrer">this part</a> of a lecture the author gave.</p>
<h1 id="multiple-epochs-for-policy-updating-a67s">2. Multiple epochs for policy updating</h1>
<p>Unlike vanilla policy gradient methods, and <em>because of the Clipped Surrogate Objective function</em>, PPO allows you to run multiple epochs of gradient ascent on your samples without causing destructively large policy updates. This allows you to squeeze more out of your data and reduce sample inefficiency.</p>
<p>PPO runs the policy using <em>N</em> parallel actors each collecting data, and then it samples mini-batches of this data to train for <em>K</em> epochs using the Clipped Surrogate Objective function. See full algorithm below (the approximate param values are: <em>K</em> = 3-15, <em>M</em> = 64-4096, <em>T</em> (horizon) = 128-2048):</p>
<p><a href="https://i.stack.imgur.com/a6z3u.png" rel="noreferrer"><img src="https://i.stack.imgur.com/a6z3u.png" alt="PPO Algo" /></a></p>
<p>The parallel actors part was popularized by the <a href="https://arxiv.org/abs/1602.01783" rel="noreferrer">A3C paper</a> and has become a fairly standard way for collecting data.</p>
<p>The newish part is that they are able to run <em>K</em> epochs of gradient ascent on the trajectory samples. As they state in the paper, it would be nice to run the vanilla policy gradient optimization for multiple passes over the data so that you could learn more from each sample. However, this generally fails in practice for vanilla methods because they take too big of steps on the local samples and this wrecks the policy. PPO, on the other hand, has the built-in mechanism to prevent too much of an update.</p>
<p>For each iteration, after sampling the environment with π_old (line 3) and when we start running the optimization (line 6), our policy π will be exactly equal to π_old. So at first, none of our updates will be clipped and we are guaranteed to learn something from these examples. However, as we update π using multiple epochs, the objective will start hitting the clipping limits, the gradient will go to 0 for those samples, and the training will gradually stop...until we move on to the next iteration and collect new samples.</p>
<p>....</p>
<p>And that's all for now. If you are interested in gaining a better understanding, I would recommend digging more into the <a href="https://arxiv.org/abs/1707.06347" rel="noreferrer">original paper</a>, trying to implement it yourself, or diving into the <a href="https://github.com/openai/baselines/blob/b29c8020d7ac72256dca48fae85e96b4b3c75ccb/baselines/ppo2/ppo2.py" rel="noreferrer">baselines implementation</a> and playing with the code.</p>
<p>[edit: 2019/01/27]: For a better background and for how PPO relates to other RL algorithms, I would also strongly recommend checking out OpenAI's <a href="https://spinningup.openai.com/en/latest/index.html" rel="noreferrer">Spinning Up resources and implementations</a>.</p> | 2018-06-03 04:06:24.903000+00:00 | 2021-07-13 15:54:47.997000+00:00 | 2021-07-13 15:54:47.997000+00:00 | null | 46,422,845 | <p>I know the basics of Reinforcement Learning, but what terms it's necessary to understand to be able read <a href="https://arxiv.org/abs/1707.06347" rel="noreferrer">arxiv PPO paper</a> ?</p>
<p>What is the roadmap to learn and use <a href="https://blog.openai.com/openai-baselines-ppo/" rel="noreferrer">PPO</a> ?</p> | 2017-09-26 09:36:42.703000+00:00 | 2021-07-13 15:54:47.997000+00:00 | null | machine-learning|reinforcement-learning | ['https://arxiv.org/abs/1707.06347', 'https://arxiv.org/abs/1502.05477', 'http://karpathy.github.io/2016/05/31/rl/', 'https://i.stack.imgur.com/5VZRT.png', 'http://incompleteideas.net/book/the-book-2nd.html', 'http://karpathy.github.io/2016/05/31/rl/', 'http://rail.eecs.berkeley.edu/deeprlcourse-fa17/index.html', 'https://arxiv.org/abs/1506.02438', 'https://people.eecs.berkeley.edu/%7Epabbeel/cs287-fa09/readings/KakadeLangford-icml2002.pdf', 'https://i.stack.imgur.com/bCAEy.png', 'https://i.stack.imgur.com/EMMPa.png', 'https://i.stack.imgur.com/zt9mz.png', 'https://github.com/openai/baselines/blob/9fa8e1baf1d1f975b87b369a8082122eac812eb1/baselines/ppo1/pposgd_simple.py#L111-L117', 'https://github.com/unixpickle/anyrl-py/blob/953ad68d6507b83583e342b3210ed98e03a86a4f/anyrl/algos/ppo.py#L149-L155', 'https://i.stack.imgur.com/F6SxR.png', 'https://i.stack.imgur.com/gasbI.png', 'https://youtu.be/gqX8J38tESw?t=14m1s', 'https://i.stack.imgur.com/a6z3u.png', 'https://arxiv.org/abs/1602.01783', 'https://arxiv.org/abs/1707.06347', 'https://github.com/openai/baselines/blob/b29c8020d7ac72256dca48fae85e96b4b3c75ccb/baselines/ppo2/ppo2.py', 'https://spinningup.openai.com/en/latest/index.html'] | 22 |
65,011,745 | <p>I have tried quite a few symbolic regression implementations, including rgp, gplearn and a Python tool called fast-symbolic-regression. None of these was nearly comparable to Eureqa, a symbolic regression tool that I first used in 2015 and that left the market in 2017.</p>
<p>Recently a new symbolic regression tool called <a href="https://turingbotsoftware.com/" rel="nofollow noreferrer">TuringBot</a> was developed, and it was shown in <a href="https://arxiv.org/abs/2010.11328" rel="nofollow noreferrer">arXiv:2010.11328</a> to be more efficient than Eureqa at finding formulas. So I would recommend TuringBot for symbolic regression in 2020.</p> | 2020-11-25 19:47:07.513000+00:00 | 2020-11-25 19:47:07.513000+00:00 | null | null | 28,225,178 | <p>First, please excuse my ignorance because I've just started learning R today.</p>
<p>I have a data frame of two variables (x, y) as follows: <code>(1,0), (2,26), (3,88), (4,186), (5,320), (6,490), (7,541)</code>. I want to use Symbolic Regression to find a function <code>f</code> such that <code>y = f(x)</code>.</p>
<p>Following the tutorial <a href="http://rsymbolic.org/projects/rgp/wiki/Symbolic_Regression" rel="noreferrer">here</a>, I can get a plot of <code>f(x)</code>, which is close to what I expect. However, I don't know how to print out the function <code>f(x)</code>.</p>
<p>I tried with another tool called Eurequa. It is pretty easy to use, and gives me (a lot of) functions. But I can't use a commercial tool for my project. Thank you.</p>
<hr>
<h1>UPDATE</h1>
<p>Here is my code to compute Symbolic Regression and plot the function. I enter the command one by one in R environment. </p>
<pre><code>library(rgp)  # provides functionSet(), symbolicRegression() and makeStepsStopCondition()
x = c (1, 2, 3, 4, 5, 6, 7)
y = c (0, 26, 88, 186, 320, 490, 541)
data1 = data.frame(x,y)
newFuncSet <- functionSet("+","-","*")
result1 <- symbolicRegression(y ~ x, data = data1, functionSet = newFuncSet, stopCondition = makeStepsStopCondition(2000))
plot(data1$y, col=1, type="l"); points(predict(result1, newdata = data1), col=2, type="l")
</code></pre> | 2015-01-29 21:42:53.223000+00:00 | 2020-11-25 19:47:07.513000+00:00 | 2015-01-29 23:16:34.917000+00:00 | r | ['https://turingbotsoftware.com/', 'https://arxiv.org/abs/2010.11328'] | 2 |
37,645,912 | <p><strong>Classifying Digits</strong></p>
<p>You clarified in comments that you've already isolated the number part of the image pre-detection, so I'll start under that assumption.</p>
<p>Perhaps you can approximate the perspective effects and "blurriness" of the number by treating it as a hand-written number. In this case, there is a famous data-set of handwritten numerals for classification training called mnist. </p>
<p>Yann LeCun has enumerated the state of the art on this dataset here <a href="http://yann.lecun.com/exdb/mnist/" rel="nofollow noreferrer">mnist hand-written dataset</a>. </p>
<p>At the far end of the spectrum, convolutional neural networks yield <a href="http://arxiv.org/abs/1202.2745" rel="nofollow noreferrer">outrageously low error rates</a> (fractions of 1% error). For a simpler solution, k-nearest neighbours using deskewing, noise removal, blurring, and 2 pixel shift, yielded about 1% error, and is significantly faster to implement. <a href="http://docs.opencv.org/2.4/modules/ml/doc/k_nearest_neighbors.html" rel="nofollow noreferrer">Python opencv has an implementation</a>. Neural networks and support vector machines with deskewing also have some pretty impressive performance rates.</p>
<p>Note that convolutional networks don't have you pick your own features, so the important color-differential information here might just be used for narrowing the region-of-interest. Other approaches, where you define your feature space, might incorporate the known color difference more precisely.</p>
<p>Python supports a lot of machine learning techniques in the terrific package sklearn - <a href="http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html" rel="nofollow noreferrer">here are examples of sklearn applied to mnist</a>. <em>If you're looking for a tutorialized explanation of machine learning in python, <a href="http://scikit-learn.org/stable/supervised_learning.html" rel="nofollow noreferrer">sklearn's own tutorial is very verbose</a></em></p>
<p>From the sklearn link:
<a href="https://i.stack.imgur.com/PHDCw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PHDCw.png" alt="Classifying mnist"></a></p>
<p>Those are the kinds of items you're trying to classify if you learn using this approach. To emphasize how easy it is to start training some of these machine learning-based classifiers, here is an abridged section from the example code in the linked sklearn package:</p>
<pre><code>from sklearn import datasets, svm

digits = datasets.load_digits() # built-in to sklearn!
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
# Create a classifier: a support vector classifier
classifier = svm.SVC(gamma=0.001)
# We learn the digits on the first half of the digits
classifier.fit(data[:n_samples / 2], digits.target[:n_samples / 2])
</code></pre>
<p>If you're wedded to OpenCV (possibly because you want to port to a real-time system in the future), opencv3/python <a href="http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_ml/py_knn/py_knn_opencv/py_knn_opencv.html#knn-opencv" rel="nofollow noreferrer">has a tutorial on this exact topic too</a>! Their demo uses k-nearest-neighbor (listed in the LeCun page), but they also <a href="http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_ml/py_svm/py_svm_opencv/py_svm_opencv.html#svm-opencv" rel="nofollow noreferrer">have svms</a> and many of the other tools in sklearn. Their ocr page using SVMs uses deskewing, which might be useful with the perspective effect in your problem:</p>
<p><a href="https://i.stack.imgur.com/NT190.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NT190.jpg" alt="Deskewed digit"></a></p>
<hr>
<p><strong>UPDATE:</strong> I used the out-of-the-box sklearn approach outlined above on your image, heavily cropped, and it <strong>correctly classified it</strong>. A <strong>lot</strong> more testing would be required to see if this is robust in practice.</p>
<p><a href="https://i.stack.imgur.com/Gpgsk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gpgsk.png" alt="enter image description here"></a></p>
<p>^^ That tiny image is the 8x8 crop of the image you embedded in your question. mnist uses 8x8 images. That's why it trains in less than a second with default arguments in sklearn.</p>
<p>I converted it to the correct format by scaling it up to the mnist range using</p>
<pre><code>import scipy.misc

number = scipy.misc.imread("cropped_image.png")
datum = (number[:,:,0]*15).astype(int).reshape((64,))
classifier.predict(datum) # returns 8
</code></pre>
<p>I didn't change anything else from the example; here, I'm only using the first channel for classification, and no smart feature computation. 15 looked about right to me; you'll need to tune it to get within the target range or (ideally) provide your own training and testing set</p>
<hr>
<p><strong>Object Detection</strong></p>
<p>If you haven't isolated the number in the image you'll need an object detector. The literature space on this problem is gigantic and I won't start down that rabbit hole (google Viola and Jones, maybe?) <a href="http://www.pyimagesearch.com/2015/03/23/sliding-windows-for-object-detection-with-python-and-opencv/" rel="nofollow noreferrer">This blog</a> covers the fundamentals of a "sliding window" detector in python. Adrian Rosebrock looks like he's even a contributor on SO, and that page has some good examples of opencv and python-based object detectors fairly tutorialized (you actually linked to that blog in your question, I didn't realize). </p>
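<p>To make the sliding-window idea concrete, here is a skeletal version of such a loop, under the assumption that the 8x8 sklearn digit classifier from above is reused to score each window (window size, step, and scoring are all things you would tune):</p>
<pre><code>def sliding_windows(image, win=8, step=4):
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            yield (x, y), image[y:y + win, x:x + win]

# Score every window with the digit classifier and keep the most confident one,
# e.g. via classifier.decision_function() on each flattened, rescaled window.
</code></pre>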
<p>In short, classify windows across the image and pick the window of highest confidence. Narrowing down the search space with a region of interest will of course yield huge improvements in all areas of performance</p> | 2016-06-05 19:28:26.720000+00:00 | 2016-09-21 19:59:11.593000+00:00 | 2016-09-21 19:59:11.593000+00:00 | null | 37,645,576 | <p>I would like to capture the number from this kind of picture. </p>
<p><a href="https://i.stack.imgur.com/S8esF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/S8esF.png" alt="enter image description here"></a></p>
<p>I tried multi-scale matching from the following link. </p>
<p><a href="http://www.pyimagesearch.com/2015/01/26/multi-scale-template-matching-using-python-opencv/" rel="noreferrer">http://www.pyimagesearch.com/2015/01/26/multi-scale-template-matching-using-python-opencv/</a></p>
<p>All I want to know is the red number. But the problem is, the red number is blurry for openCV recognize/match template. Would there be other possible way to detect this red number on the black background?</p> | 2016-06-05 18:52:26.913000+00:00 | 2016-09-21 19:59:11.593000+00:00 | 2016-06-08 13:39:27.467000+00:00 | python|opencv|edge-detection|number-recognition | ['http://yann.lecun.com/exdb/mnist/', 'http://arxiv.org/abs/1202.2745', 'http://docs.opencv.org/2.4/modules/ml/doc/k_nearest_neighbors.html', 'http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html', 'http://scikit-learn.org/stable/supervised_learning.html', 'https://i.stack.imgur.com/PHDCw.png', 'http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_ml/py_knn/py_knn_opencv/py_knn_opencv.html#knn-opencv', 'http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_ml/py_svm/py_svm_opencv/py_svm_opencv.html#svm-opencv', 'https://i.stack.imgur.com/NT190.jpg', 'https://i.stack.imgur.com/Gpgsk.png', 'http://www.pyimagesearch.com/2015/03/23/sliding-windows-for-object-detection-with-python-and-opencv/'] | 11 |
72,241,660 | <h2>Overview</h2>
<p>This answer provides probable explanations. Put shortly, no parallel workload scales infinitely. When many cores compete for the same shared resource (e.g. DRAM), using too many cores is often detrimental because <strong>there is a point where there are enough cores to saturate a given shared resource and using more cores only increases the overheads</strong>.</p>
<p>More specifically, in your case, the L3 cache and the IMCs are likely the problem. Enabling <strong>Sub-NUMA Clustering</strong> and <strong>non-temporal prefetch</strong> should improve the performance and the scalability of your benchmark a bit. Still, there are other architectural hardware limitations that can cause the benchmark not to scale well. The next section describes how Intel Skylake SP processors deal with memory accesses and how to find the bottlenecks.</p>
<hr />
<h2>Under the hood</h2>
<p>The layout of Intel Xeon Skylake SP processors is like the following in your case:</p>
<p><a href="https://i.stack.imgur.com/7zTUM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7zTUM.png" alt="processor-configuration" /></a></p>
<p><a href="https://i.stack.imgur.com/D3Lgk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D3Lgk.png" alt="core-configuration" /></a> <br />
Source: <a href="https://www.intel.com/content/www/us/en/developer/articles/technical/xeon-processor-scalable-family-technical-overview.html" rel="nofollow noreferrer">Intel</a></p>
<p>There are two sockets connected with an UPI interconnect and each processor is connected to its own set of DRAM. There are 2 Integrated Memory Controller (IMC) per processor and each is connected to 3 DDR4 DRAM @ 2666MHz. This means the theoretical bandwidth is <code>2*2*3*2666e6*8 = 256 GB/s = 238 GiB/s</code>.</p>
<p>Assuming your benchmark is well designed and each processor only accesses its own NUMA node, I expect a very low UPI throughput and a very low number of remote NUMA pages. You can check this with hardware counters. Linux <code>perf</code> or VTune enable you to check this relatively easily.</p>
<p>The L3 cache is split into <strong>slices</strong>. All physical addresses are distributed across the cache slices using a <strong>hash function</strong> (see <a href="https://arxiv.org/abs/1508.03767" rel="nofollow noreferrer">here</a> for more information). This method enables the processor to <strong>balance the throughput</strong> between all the L3 slices. This method also enables the processor to balance the throughput between the two IMCs so that, in the end, the processor <em>looks like an SMP architecture</em> instead of a NUMA one. This was also used in Sandy Bridge and Xeon Phi processors (mainly to mitigate NUMA effects).</p>
<p>Hashing does not guarantee a perfect balancing though (no hash function is perfect, especially the ones that are fast to compute), but it is often quite good in practice, <em>especially for contiguous accesses</em>. A bad balancing decreases the memory throughput due to partial stalls. This is one reason you cannot reach the theoretical bandwidth.</p>
<p>With a good hash function, the balancing should be independent of the number of cores used. If the hash function is not good enough, one IMC can be more saturated than the other, oscillating over time. The bad news is that the hash function is undocumented and checking this behaviour is complex: AFAIK you can get hardware counters for each IMC's throughput, but they have a limited granularity which is quite coarse. On my Skylake machine the names of the hardware counters are <code>uncore_imc/data_reads/</code> and <code>uncore_imc/data_writes/</code>, but on your platform you certainly have 4 counters for that (one for each IMC).</p>
<p>Fortunately, Intel provides a feature called <strong>Sub-NUMA Clustering</strong> (SNC) on Xeon SP processors like yours. The idea is to split the processor into two NUMA nodes that each have their own dedicated IMC. This solves the balancing issue due to the hash function and so results in faster memory operations as long as your application is NUMA-friendly. Otherwise, it can actually be significantly slower due to NUMA effects. In the worst case, the pages of an application can all be mapped to the same NUMA node, resulting in only half the bandwidth being usable. Since your benchmark is supposed to be NUMA-friendly, SNC should be more efficient.</p>
<p><a href="https://i.stack.imgur.com/VJpfn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VJpfn.png" alt="Sub-NUMA Clustering" /></a> <br />
Source: Intel</p>
<p>Furthermore, having more cores accessing the L3 in parallel can cause more <strong>early evictions of prefetched cache lines</strong> which need to be fetched again later when the core actual need them (with an additional DRAM latency time to pay). This effect is not as unusual as it seems. Indeed, due to the high latency of DDR4 DRAMs, hardware prefetching units have to prefetch data a long time in advance so to reduce the impact of the latency. They also need to perform a lot of requests concurrently. This is generally not a problem with sequential accesses, but <strong>more cores causes accesses to look more random</strong> from the caches and IMCs point-of-view. The thing is DRAM are designed so that contiguous accesses are faster than random one (multiple <em>contiguous</em> cache lines should be loaded consecutively to fully saturate the bandwidth). You can analyse the value of the <code>LLC-load-misses</code> hardware counter to check if more data are re-fetched with more threads (I see such effect on my Skylake-based PC with only 6-cores but it is not strong enough to cause any visible impact on the final throughput). To mitigate this problem, you can use <strong>software non-temporal prefetch (<code>prefetchnta</code>)</strong> to request the processor to load data directly into the line fill buffer instead of the L3 cache resulting in a lower pollution (<a href="https://stackoverflow.com/questions/71818324/how-does-the-cpu-cache-affect-the-performance-of-a-c-program/71911346#71911346">here</a> is a related answer). This may be slower with fewer cores due to a lower concurrency, but it should be a bit faster with a lot of cores. Note that this does not solve the problem of having fetched address that looks more random from the IMCs point-of-view and there is not much to do about that.</p>
<p>The low-level architecture of DRAM and caches is very complex in practice. More information about memory can be found in the following links:</p>
<ul>
<li><a href="https://people.freebsd.org/%7Elstewart/articles/cpumemory.pdf" rel="nofollow noreferrer">What Every Programmer Should Know About Memory</a></li>
<li><a href="https://web.corral.tacc.utexas.edu/CompEdu/pdf/stc/EijkhoutIntroToHPC.pdf" rel="nofollow noreferrer">Introduction to High Performance Scientific Computing</a> (Section 1.3)</li>
<li><a href="https://www.youtube.com/watch?v=IUk9o9wvX1Y" rel="nofollow noreferrer">Lecture: Main Memory and the DRAM System</a></li>
<li><a href="https://www.youtube.com/watch?v=I-9XWtdW_Co" rel="nofollow noreferrer">Short lectures: Dynamic Random Access Memory</a> (in 7 parts)</li>
<li><a href="https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html" rel="nofollow noreferrer">Intel® 64 and IA-32 Architectures Software Developer's Manual</a> (Volume 3)</li>
</ul> | 2022-05-14 15:29:49.153000+00:00 | 2022-05-23 23:50:09.763000+00:00 | 2022-05-23 23:50:09.763000+00:00 | null | 72,229,540 | <p>This question is a spin-off of the one posted here: <a href="https://stackoverflow.com/questions/72182595/measuring-bandwidth-on-a-ccnuma-system">Measuring bandwidth on a ccNUMA system</a></p>
<p>I've written a micro-benchmark for the memory bandwidth on a ccNUMA system with 2x Intel(R) Xeon(R) Platinum 8168:</p>
<ol>
<li>24 cores @ 2.70 GHz,</li>
<li>L1 cache 32 kB, L2 cache 1 MB and L3 cache 33 MB.</li>
</ol>
<p>As a reference, I'm using the Intel Advisor's roof-line plot, which depicts the bandwidths of each CPU data-path available. According to this, the bandwidth is 230 GB/s.</p>
<p>Strong scaling of bandwidth:
<a href="https://i.stack.imgur.com/M8TMI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M8TMI.png" alt="enter image description here" /></a></p>
<p><strong>Question:</strong> If you look at the strong scaling diagram, you can see that the peak effective bandwidth is actually achieved at 33 CPUs, following which adding CPUs only reduces it. Why is this happening?</p> | 2022-05-13 12:21:32.673000+00:00 | 2022-05-23 23:50:09.763000+00:00 | 2022-05-23 14:37:44.033000+00:00 | performance|parallel-processing|intel|cpu-architecture|numa | ['https://i.stack.imgur.com/7zTUM.png', 'https://i.stack.imgur.com/D3Lgk.png', 'https://www.intel.com/content/www/us/en/developer/articles/technical/xeon-processor-scalable-family-technical-overview.html', 'https://arxiv.org/abs/1508.03767', 'https://i.stack.imgur.com/VJpfn.png', 'https://stackoverflow.com/questions/71818324/how-does-the-cpu-cache-affect-the-performance-of-a-c-program/71911346#71911346', 'https://people.freebsd.org/%7Elstewart/articles/cpumemory.pdf', 'https://web.corral.tacc.utexas.edu/CompEdu/pdf/stc/EijkhoutIntroToHPC.pdf', 'https://www.youtube.com/watch?v=IUk9o9wvX1Y', 'https://www.youtube.com/watch?v=I-9XWtdW_Co', 'https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html'] | 11 |
27,306,613 | <p>Your formula translates to:</p>
<p><code>conc ~</code>: the value is modelled using <code>G + F</code> fixed effects, <code>(K|F)</code> a random slope of <code>K</code> varying over <code>F</code>, and <code>(Z|G/F/K)</code> a random slope of <code>Z</code> varying over <code>K</code> nested in <code>F</code> nested in <code>G</code>. Also, since you use <code>/</code> rather than <code>:</code>, this expands to: <code>(Z|G) + (Z|G:F) + (Z|G:F:K)</code>. You do not use <code>0 +</code> or <code>- 1</code> in your definition, so the intercept is included.</p>
<p>So your model translates to: <code>conc ~ 1 + G + F + (1 + K|F) + (1 + Z|G) + (1 + Z|G:F) + (1 + Z|G:F:K)</code>. Is this what you wanted?</p>
<p>What may be problematic is that in your definition <code>K</code> is both a random slope and a grouping variable for the random effects - is that on purpose?</p>
<p>Check an article by <a href="http://arxiv.org/pdf/1406.5823.pdf" rel="nofollow">Bates et al. (in press)</a> on <code>lme4</code> and formulas in this package.</p> | 2014-12-05 00:08:34.740000+00:00 | 2014-12-05 00:13:52.100000+00:00 | 2014-12-05 00:13:52.100000+00:00 | null | 27,306,472 | <p>I have the following SAS code that I would like to write in R. I know the class statement is redundant in R (not necessary).</p>
<pre><code>proc mixed data=in_data;
class G F K kal;
model conc=;
random G F K(F) kal(G*F*K);
ods output covparms=out.cov_out;
run;
</code></pre>
<p>I tried the below code, with no luck.</p>
<p>fit <- lmer( conc ~ (1 | G) + (1 | F) + (1 | K/F) + (1 | kal/G:F:K) , sample_1)</p>
<p>with the following output. I was hoping not to get a value for kal or K. </p>
<pre><code>summary(fit)
Random effects:
Groups Name Variance Std.Dev.
G:F:K:kal (Intercept) 1.421e-04 0.011921
F:K (Intercept) 1.326e-05 0.003641
F (Intercept) 6.548e-05 0.008092
kal (Intercept) 9.852e-06 0.003139
K (Intercept) 1.272e-05 0.003567
G (Intercept) 2.165e-03 0.046527
Residual 4.647e-04 0.021557
</code></pre> | 2014-12-04 23:56:32.300000+00:00 | 2014-12-05 02:39:31.900000+00:00 | 2014-12-05 02:39:31.900000+00:00 | r|mixed-models|lmer | ['http://arxiv.org/pdf/1406.5823.pdf'] | 1 |
20,500,344 | <p>Edit: I'm wrong. In Mastermind, you also have knowledge of the number of colors that were right, but not in the right spot. That changes the number of possible solutions, so that there is no obvious exact translation between the two problems and so I can't make the conclusion below. Leaving the answer here for now, maybe it helps someone think about the problem.</p>
<p>I answered:</p>
<p>Bad news, it's at least NP-complete.</p>
<p>Your problem reminded me of the game <em>Mastermind</em> (<a href="http://en.wikipedia.org/wiki/Mastermind_%28board_game%29" rel="nofollow">Wikipedia</a>). On that page, they also mention the "Mastermind satisfiability problem": given a set of two-color Mastermind guesses and answers, is there a possible right guess?</p>
<p>So that problem has slightly more information than yours: Mastermind gives the number of correct colors in the right place (== length minus Hamming distance), plus the number of correct colors in the wrong place. And they only try to decide whether the number of possibilities is > 0. And that problem is NP-complete according to <a href="http://arxiv.org/abs/cs.CC/0512049" rel="nofollow">this paper</a>.</p>
<p>I am given a random binary vector <code>t</code> (a list in python) of length <code>l</code>. I then sample binary vectors of the same length and measure the Hamming distance (that is the number of aligned mismatches) between each of the sampled vectors and <code>t</code> and store the results. I want to determine how many binary vectors of length l are compatible with the distances found so far. Clearly <code>t</code> is but it is also likely many others are too. </p>
<p>My code is as follows.</p>
<pre><code>import math  # needed for math.log below
import random
import itertools
import operator
l=23
t = [random.randint(0,1) for b in range(l)]
stringsleft = set(itertools.product([0,1],repeat=l))
xcoords = []
ycoords = []
iters = l
for i in xrange(iters):
print i, len(stringsleft)
xcoords.append(i)
ycoords.append(math.log(len(stringsleft)))
pattern = [random.randint(0,1) for b in range(l)]
distance = sum(itertools.imap(operator.ne, pattern, t))
stringsleft = stringsleft.intersection(set([y for y in stringsleft if sum(itertools.imap(operator.ne, pattern, y)) == distance]))
</code></pre>
<p>Unfortunately it is very slow and uses a lot of memory and does not work at all if I increase l to 30. Is it possible to speed this up to solve the <code>l=30</code> case?</p>
<p><strong>Update</strong><br>
I made a small optimisation by replacing the lists by integers so now it runs with <code>l=26</code>. </p>
<pre><code>l=26
t = random.randint(0,2**l)
stringsleft = set(range(2**l))
xcoords = []
ycoords = []
iters = l
for i in xrange(iters):
print i, len(stringsleft)
if (len(stringsleft) > 1):
xcoords.append(i)
ycoords.append(math.log(len(stringsleft),2))
pattern = random.randint(0,2**l)
distance = bin(pattern ^ t).count('1')
stringsleft = stringsleft.intersection(set([y for y in stringsleft if bin(pattern ^ y).count('1') == distance]))
</code></pre>
<p>The problem that stops me getting to <code>l=30</code> is RAM usage rather than time. </p> | 2013-12-10 15:23:44.543000+00:00 | 2013-12-11 21:47:58.437000+00:00 | 2013-12-11 10:59:45.427000+00:00 | python|performance|algorithm | ['http://en.wikipedia.org/wiki/Mastermind_%28board_game%29', 'http://arxiv.org/abs/cs.CC/0512049'] | 2 |
29,576,943 | <p>I suspect you're running into trouble with namespaces. The difficulty is that the element you're after may be called <code>authors</code>, but it is living inside an arXiv-specific namespace. You'll have to adapt the XPath expression with this in mind.</p> | 2015-04-11 10:41:57.960000+00:00 | 2015-04-11 10:41:57.960000+00:00 | null | null | 29,576,273 | <p>I'm currently trying to parse some data from the arXiv. I was able to get the data in the xml format, but now I can't select certain elements. </p>
<p>For example, I want to get all authors from this xml file</p>
<p><a href="http://export.arxiv.org/oai2?verb=ListRecords&set=physics:hep-th&from=2015-03-30&until=2015-03-31&metadataPrefix=arXivRaw" rel="nofollow">http://export.arxiv.org/oai2?verb=ListRecords&set=physics:hep-th&from=2015-03-30&until=2015-03-31&metadataPrefix=arXivRaw</a></p>
<p>The xpath query</p>
<pre><code>//authors
</code></pre>
<p>or similar queries always return zero. Any ideas how to solve this problem would be great </p> | 2015-04-11 09:31:14.867000+00:00 | 2015-04-11 10:41:57.960000+00:00 | null | xml|xpath | [] | 0 |
29,576,371 | <p>I think the problem comes from the declaration of two namespaces with no prefix for them, so using <code>//authors</code> will try to find the element using the default namespace and so returns no result.</p>
<p>Try this:</p>
<pre><code>/OAI-PMH/ListRecords/record/metadata/arXivRaw/authors
</code></pre> | 2015-04-11 09:41:50.773000+00:00 | 2015-04-11 09:41:50.773000+00:00 | null | null | 29,576,273 | <p>I'm currently trying to parse some data from the arXiv. I was able to get the data in the xml format, but now I can't select certain elements. </p>
<p>For example, I want to get all authors from this xml file</p>
<p><a href="http://export.arxiv.org/oai2?verb=ListRecords&set=physics:hep-th&from=2015-03-30&until=2015-03-31&metadataPrefix=arXivRaw" rel="nofollow">http://export.arxiv.org/oai2?verb=ListRecords&set=physics:hep-th&from=2015-03-30&until=2015-03-31&metadataPrefix=arXivRaw</a></p>
<p>The xpath query</p>
<pre><code>//authors
</code></pre>
<p>or similar queries always return zero. Any ideas how to solve this problem would be great </p> | 2015-04-11 09:31:14.867000+00:00 | 2015-04-11 10:41:57.960000+00:00 | null | xml|xpath | [] | 0 |
29,576,459 | <p><code><authors></code> nodes are in the <em>default namespace</em> <code>xmlns="http://arxiv.org/OAI/arXivRaw/"</code>, which is declared at the <code><arXivRaw></code> node level.</p>
<p>Many XPath platforms (programming language APIs or other kinds of XPath tools) provide a way to register a mapping from a namespace prefix to the corresponding namespace URI. In that case you need to register a prefix that points to <code>"http://arxiv.org/OAI/arXivRaw/"</code> and use that prefix in your XPath. For example, assuming that the registered prefix is named <code>d</code>:</p>
<pre><code>//d:authors
</code></pre>
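<p>For example, with Python and <code>lxml</code> (assuming the OAI response has been saved to a local file; the prefix name is arbitrary, only the URI has to match):</p>
<pre><code>from lxml import etree

tree = etree.parse("arxiv_response.xml")  # the ListRecords response saved to disk
ns = {"d": "http://arxiv.org/OAI/arXivRaw/"}
for authors in tree.xpath("//d:authors", namespaces=ns):
    print(authors.text)
</code></pre>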
<p>But in case there is no such facility, you can resort to using a combination of XPath's <code>local-name()</code> and <code>namespace-uri()</code>:</p>
<pre><code>//*[local-name()='authors' and namespace-uri()='http://arxiv.org/OAI/arXivRaw/']
</code></pre>
<p>or maybe just ignore the namespace for simplicity* :</p>
<pre><code>//*[local-name()='authors']
</code></pre>
<p>*) with the risk of getting wrong nodes in case there are several nodes having the same local name but different namespaces</p>
<p>For example, I want to get all authors from this xml file</p>
<p><a href="http://export.arxiv.org/oai2?verb=ListRecords&set=physics:hep-th&from=2015-03-30&until=2015-03-31&metadataPrefix=arXivRaw" rel="nofollow">http://export.arxiv.org/oai2?verb=ListRecords&set=physics:hep-th&from=2015-03-30&until=2015-03-31&metadataPrefix=arXivRaw</a></p>
<p>The xpath query</p>
<pre><code>//authors
</code></pre>
<p>or similar queries always return zero. Any ideas how to solve this problem would be great </p> | 2015-04-11 09:31:14.867000+00:00 | 2015-04-11 10:41:57.960000+00:00 | null | xml|xpath | [] | 0 |
53,335,053 | <p>There are mainly two things to check.</p>
<p><strong>1. Are you sure that you are using batch normalization (BN) correctly in the train op?</strong> </p>
<p>If you read the layer documentation:</p>
<blockquote>
<p>Note: when training, the moving_mean and moving_variance need to be updated.
By default the update ops are placed in <code>tf.GraphKeys.UPDATE_OPS</code>, so they
need to be added as a dependency to the <code>train_op</code>. Also, be sure to add
any batch_normalization ops before getting the update_ops collection.
Otherwise, update_ops will be empty, and training/inference will not work
properly. </p>
</blockquote>
<p>For example:</p>
<pre><code>x_norm = tf.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
train_op = optimizer.minimize(loss)
</code></pre>
<p><strong>2. Otherwise, try lowering the "momentum" in the BN.</strong> </p>
<p>During training, the BN layer in fact keeps two moving averages, of the mean and of the variance, that are supposed to approximate the population statistics. The moving mean and variance are initialized to 0 and 1 respectively and then, at each step, they are multiplied by the momentum value (default is 0.99) and the new batch statistic, multiplied by 1 - momentum = 0.01, is added. At inference (test) time, the normalization uses these moving statistics. For this reason, it takes these values a little while to arrive at the "real" mean and variance of the data.</p>
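<p>A one-line sketch of what lowering the momentum looks like (0.9 is only an illustrative value to tune):</p>
<pre><code>x_norm = tf.layers.batch_normalization(x, training=training, momentum=0.9)
</code></pre>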
<p><em>Source:</em></p>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization</a></p>
<p><a href="https://github.com/keras-team/keras/issues/7265" rel="nofollow noreferrer">https://github.com/keras-team/keras/issues/7265</a></p>
<p><a href="https://github.com/keras-team/keras/issues/3366" rel="nofollow noreferrer">https://github.com/keras-team/keras/issues/3366</a></p>
<p><em>The original BN paper can be found here:</em> </p>
<p><a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">https://arxiv.org/abs/1502.03167</a></p> | 2018-11-16 09:37:48.217000+00:00 | 2018-11-16 10:47:34.657000+00:00 | 2018-11-16 10:47:34.657000+00:00 | null | 48,031,639 | <p>I am trying to use Batch Normalization using <a href="https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization" rel="nofollow noreferrer">tf.layers.batch_normalization()</a> and my code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>def create_conv_exp_model(fingerprint_input, model_settings, is_training):
# Dropout placeholder
if is_training:
dropout_prob = tf.placeholder(tf.float32, name='dropout_prob')
# Mode placeholder
mode_placeholder = tf.placeholder(tf.bool, name="mode_placeholder")
he_init = tf.contrib.layers.variance_scaling_initializer(mode="FAN_AVG")
# Input Layer
input_frequency_size = model_settings['bins']
input_time_size = model_settings['spectrogram_length']
net = tf.reshape(fingerprint_input,
[-1, input_time_size, input_frequency_size, 1],
name="reshape")
net = tf.layers.batch_normalization(net,
training=mode_placeholder,
name='bn_0')
for i in range(1, 6):
net = tf.layers.conv2d(inputs=net,
filters=8*(2**i),
kernel_size=[5, 5],
padding='same',
kernel_initializer=he_init,
name="conv_%d"%i)
net = tf.layers.batch_normalization(net,
training=mode_placeholder,
name='bn_%d'%i)
with tf.name_scope("relu_%d"%i):
net = tf.nn.relu(net)
net = tf.layers.max_pooling2d(net, [2, 2], [2, 2], 'SAME',
name="maxpool_%d"%i)
net_shape = net.get_shape().as_list()
net_height = net_shape[1]
net_width = net_shape[2]
net = tf.layers.conv2d( inputs=net,
filters=1024,
kernel_size=[net_height, net_width],
strides=(net_height, net_width),
padding='same',
kernel_initializer=he_init,
name="conv_f")
net = tf.layers.batch_normalization( net,
training=mode_placeholder,
name='bn_f')
with tf.name_scope("relu_f"):
net = tf.nn.relu(net)
net = tf.layers.conv2d( inputs=net,
filters=model_settings['label_count'],
kernel_size=[1, 1],
padding='same',
kernel_initializer=he_init,
name="conv_l")
### Squeeze
squeezed = tf.squeeze(net, axis=[1, 2], name="squeezed")
if is_training:
return squeezed, dropout_prob, mode_placeholder
else:
return squeezed, mode_placeholder
</code></pre>
<p>And my train step looks like this:</p>
<pre class="lang-py prettyprint-override"><code>update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate_input)
gvs = optimizer.compute_gradients(cross_entropy_mean)
capped_gvs = [(tf.clip_by_value(grad, -2., 2.), var) for grad, var in gvs]
  train_step = optimizer.apply_gradients(gvs)
</code></pre>
<p>During training, I am feeding the graph with:</p>
<pre class="lang-py prettyprint-override"><code>train_summary, train_accuracy, cross_entropy_value, _, _ = sess.run(
[
merged_summaries, evaluation_step, cross_entropy_mean, train_step,
increment_global_step
],
feed_dict={
fingerprint_input: train_fingerprints,
ground_truth_input: train_ground_truth,
learning_rate_input: learning_rate_value,
dropout_prob: 0.5,
mode_placeholder: True
})
</code></pre>
<p>During validation, </p>
<pre class="lang-py prettyprint-override"><code>validation_summary, validation_accuracy, conf_matrix = sess.run(
[merged_summaries, evaluation_step, confusion_matrix],
feed_dict={
fingerprint_input: validation_fingerprints,
ground_truth_input: validation_ground_truth,
dropout_prob: 1.0,
mode_placeholder: False
})
</code></pre>
<p>My loss and accuracy curves (orange is training, blue is validation):
<a href="https://i.stack.imgur.com/ZAqDw.png" rel="nofollow noreferrer">Plot of loss vs number of iterations</a>,
<a href="https://i.stack.imgur.com/CYKJX.png" rel="nofollow noreferrer">Plot of accuracy vs number of iterations</a></p>
<p>The validation loss (and accuracy) seem very erratic. Is my implementation of Batch Normalization wrong? Or is this normal with Batch Normalization and I should wait for more iterations?</p> | 2017-12-30 06:36:57.153000+00:00 | 2020-06-01 21:03:44.440000+00:00 | null | tensorflow|machine-learning|deep-learning|conv-neural-network|batch-normalization | ['https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization', 'https://github.com/keras-team/keras/issues/7265', 'https://github.com/keras-team/keras/issues/3366', 'https://arxiv.org/abs/1502.03167'] | 4 |
55,245,092 | <p>I am working with thermal infrared images, which are subject to a lot of noise.</p>
<p>I found that low-rank-based approaches, such as those based on Singular Value Decomposition (SVD) or the Weighted Nuclear Norm Metric (WNNM), give very good results in terms of reducing the noise while preserving the structure of the information.
Their main drawback is that they are quite slow to compute (several minutes per image).
Here is some literature:</p>
<p><a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7067415" rel="nofollow noreferrer">https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7067415</a></p>
<p><a href="https://arxiv.org/abs/1705.09912" rel="nofollow noreferrer">https://arxiv.org/abs/1705.09912</a></p>
<p>The second paper has some MATLAB code available; there are quite a lot of files, but the translation to Python should not be that complex.</p>
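<p>As a quick baseline, OpenCV's Non-Local Means denoiser (linked below) can be tried in a couple of lines; a sketch assuming a single-channel 8-bit image, with placeholder file names and a filter strength that needs tuning:</p>
<pre><code>import cv2

img = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)
# arguments: source, destination, filter strength h, template window size, search window size
denoised = cv2.fastNlMeansDenoising(img, None, 15, 7, 21)
cv2.imwrite("thermal_frame_denoised.png", denoised)
</code></pre>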
<p>OpenCV also implements (and it is available in Python) a very efficient version of the Non-Local Means algorithm:
<a href="https://docs.opencv.org/master/d5/d69/tutorial_py_non_local_means.html" rel="nofollow noreferrer">https://docs.opencv.org/master/d5/d69/tutorial_py_non_local_means.html</a></p> | 2019-03-19 15:52:59.780000+00:00 | 2019-03-19 15:52:59.780000+00:00 | null | null | 55,210,838 | <p>I'm trying to develop a way to count the number of bright spots in an <a href="https://i.stack.imgur.com/cmrue.png" rel="nofollow noreferrer">image</a>. The spots should be gaussian point sources, but there is a lot of noise. There are probably on the order of 10-20 actual point sources in this image. My first though was to use a <a href="https://i.stack.imgur.com/2mYDv.png" rel="nofollow noreferrer">gaussian convolution</a> with sigma = 15, which seems to do a good job.</p>
<p>First, is there a better way to isolate these bright spots?</p>
<p>Second, how can I 'detect' the bright spots, i.e. count them? I haven't had any luck with circular hough transforms from opencv.</p>
<p><strong>Edit</strong>: <a href="https://i.stack.imgur.com/zv4MY.png" rel="nofollow noreferrer">Here is the original without gridlines</a>, <a href="https://i.stack.imgur.com/6hefl.png" rel="nofollow noreferrer">here is the convolved image without gridlines</a>.</p> | 2019-03-17 19:06:46.300000+00:00 | 2019-03-19 15:52:59.780000+00:00 | 2019-03-17 21:36:11.680000+00:00 | opencv|image-processing|computer-vision|detection|feature-detection | ['https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7067415', 'https://arxiv.org/abs/1705.09912', 'https://docs.opencv.org/master/d5/d69/tutorial_py_non_local_means.html'] | 3 |
72,440,122 | <ol>
<li>Compose the "Raw" file URL, it will look something like: <code>https://raw.githubusercontent.com/{org/user}/{repo}/{branch}/{file}</code>, example: <a href="https://github.com/openai/gpt-3/blob/master/README.md" rel="nofollow noreferrer">https://github.com/openai/gpt-3/blob/master/README.md</a> becomes: <code>https://raw.githubusercontent.com/openai/gpt-3/master/README.md</code></li>
<li>Then, use cURL to fetch the file:
<code>curl {url}</code>. In this case:</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>curl https://raw.githubusercontent.com/openai/gpt-3/master/README.md
</code></pre>
<p>Outputs:</p>
<pre><code># GPT-3: Language Models are Few-Shot Learners
[arXiv link](https://arxiv.org/abs/2005.14165)
> Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions – something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
## Contents
- [175b_samples.jsonl](175b_samples.jsonl) - Unconditional, unfiltered 2048 token samples from GPT-3 with p=.85, t=1.&#12288;
**CONTENT WARNING:** GPT-3 was trained on arbitrary data from the web, so may contain offensive content and language.
- [data](data) - Synthetic datasets for word scramble and arithmetic tasks described in the paper.
- [dataset_statistics](dataset_statistics) - Statistics for all languages included in the training dataset mix.
- [overlap_frequency.md](overlap_frequency.md) - Samples of 13-gram overlaps between our training data and benchmarks, selected by frequency in the training set.
- [model-card.md](model-card.md) - GPT-3 Model Card.
## How to cite
`
@article{brown2020language,
title={Language Models are Few-Shot Learners},
author={Tom B. Brown and Benjamin Mann and Nick Ryder and Melanie Subbiah and Jared Kaplan and Prafulla Dhariwal and Arvind Neelakantan and Pranav Shyam and Girish Sastry and Amanda Askell and Sandhini Agarwal and Ariel Herbert-Voss and Gretchen Krueger and Tom Henighan and Rewon Child and Aditya Ramesh and Daniel M. Ziegler and Jeffrey Wu and Clemens Winter and Christopher Hesse and Mark Chen and Eric Sigler and Mateusz Litwin and Scott Gray and Benjamin Chess and Jack Clark and Christopher Berner and Sam McCandlish and Alec Radford and Ilya Sutskever and Dario Amodei},
year={2020},
eprint={2005.14165},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
`
</code></pre> | 2022-05-30 21:17:42.887000+00:00 | 2022-05-30 21:17:42.887000+00:00 | null | null | 72,439,902 | <p>I have a list of repo and I want to read some specific files present in each of the repo (all done through code). Is there way that I don't have to clone each repo individually at all but just read the info inside the file of the repo ? All these repos are public. Thank you in advanced</p> | 2022-05-30 20:49:08.783000+00:00 | 2022-05-30 21:17:42.887000+00:00 | null | git|github|clone|github-cli | ['https://github.com/openai/gpt-3/blob/master/README.md'] | 1 |
68,797,484 | <p>You can try triplet learning instead of simple classification.</p>
<p><a href="https://i.stack.imgur.com/SOn0Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SOn0Y.png" alt="triplet" /></a></p>
<p>From your 1000 users you can make c * 1000 * 999 / 2 pairs, where c is the average number of samples per class/user.</p>
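<p>As a minimal illustration (not taken from the linked paper), a triplet loss can be set up in a few lines of PyTorch; the embedding size and layer widths below are made-up placeholders for your ~2000 input features:</p>
<pre><code>import torch
import torch.nn as nn

# toy embedding network: 2000 input features -> 64-d embedding (sizes are arbitrary)
embed = nn.Sequential(nn.Linear(2000, 256), nn.ReLU(), nn.Linear(256, 64))
loss_fn = nn.TripletMarginLoss(margin=1.0)

anchor   = embed(torch.randn(32, 2000))  # samples of some user
positive = embed(torch.randn(32, 2000))  # other samples of the same user
negative = embed(torch.randn(32, 2000))  # samples of different users

loss = loss_fn(anchor, positive, negative)  # pulls anchor/positive together, pushes negative away
loss.backward()
</code></pre>
<p>At identification time you embed a new sample and compare it (e.g. by nearest neighbour) against the stored embeddings of the 1000 users.</p>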
<p><a href="https://arxiv.org/pdf/1412.6622.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1412.6622.pdf</a></p> | 2021-08-16 04:41:44.210000+00:00 | 2021-08-16 04:41:44.210000+00:00 | null | null | 68,744,129 | <p>I am new to deep learning (I just finished to read <em>deep learning with pytorch</em>), and I was wondering what is the best neural network architecture for my case.</p>
<p>I have a large multiclass classification problem (user identification problem), about 1000 classes in which each class is a user. I have about 2000 features for each user after one-hot encoding and cleaning. Data are highly imbalanced, but I can always use oversampling/downsampling techniques.</p>
<p>I was wondering what is the best architecture to implement for my case. I've always seen deep learning applied to time series or images, so I'm not sure about what to use in this case. I was thinking about a multi-layer perceptron but maybe there are better solutions.</p>
<p>Thanks for your tips and help. Have a nice day!</p> | 2021-08-11 14:39:46.243000+00:00 | 2021-08-16 04:41:44.210000+00:00 | null | deep-learning|neural-network|architecture | ['https://i.stack.imgur.com/SOn0Y.png', 'https://arxiv.org/pdf/1412.6622.pdf'] | 2 |
54,178,054 | <p>Adding a few more ideas to the answer posted by <strong>bhaskar</strong>, which are used to handle this problem.</p>
<p>You can use an <strong>Attention</strong> mechanism, which is designed to deal with long-term dependencies. For a long sequence, the network will certainly forget information, or its next prediction may not depend on all of the sequence information held in its cell. So <code>an attention mechanism helps to find reasonable weights for the characters it depends on.</code> For more info you can check this <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=2ahUKEwj5yc2I7uzfAhUCYo8KHRWzBkIQFjAAegQIBxAB&url=https%3A%2F%2Farxiv.org%2Fabs%2F1706.03762&usg=AOvVaw2ceXGQohV5Kx51VSkfkG08" rel="nofollow noreferrer">link</a></p>
<p>There is potentially a lot of research on this problem. <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=7&cad=rja&uact=8&ved=2ahUKEwjW-JWv7uzfAhUGaI8KHWIdCNwQFjAGegQIAhAC&url=https%3A%2F%2Farxiv.org%2Fpdf%2F1803.00144&usg=AOvVaw2S-aKSTJS-tomj5IGFiVgT" rel="nofollow noreferrer">This</a> is a very recent paper on this problem.</p>
<p>You can also break the sequence and use a <code>seq2seq</code> model, which encodes the features into a low-dimensional space so that the decoder can extract them. This is a <a href="https://github.com/zhangruiskyline/DeepLearning/blob/master/doc/RNN.md" rel="nofollow noreferrer">short article</a> on this.</p>
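<p>Breaking a long document into fixed-length (possibly overlapping) windows before feeding it to the model only takes a few lines; the window/stride values in this sketch are arbitrary:</p>
<pre><code>def sliding_windows(seq, window=512, stride=256):
    """Yield fixed-length, overlapping chunks of a long sequence of characters/tokens."""
    for start in range(0, max(len(seq) - window + 1, 1), stride):
        yield seq[start:start + window]

chunks = list(sliding_windows(open('novel.txt').read()))  # 'novel.txt' is a placeholder path
</code></pre>
<p>Each chunk can then be paired with metadata (its position in the document, a topic label, etc.) before training, as the question suggests.</p>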
<p>My personal advice is to break the sequence and then train it, because sliding window on the complete sequence is pretty much able to find the correlation between each sequence.</p> | 2019-01-14 08:39:44.993000+00:00 | 2019-01-14 08:39:44.993000+00:00 | null | null | 52,562,216 | <p>I built a character-level LSTM model on text data, but ultimately I'm looking to apply this model on very long text documents (such as a novel) where it's important to understand contextual information, such as where in the novel it's in. </p>
<p>For these large-scale NLP tasks, is the data usually cut into smaller pieces and concatenated with metadata - such as position within the document, detected topic, etc. - to be fed into the model? Or are there more elegant techniques?</p> | 2018-09-28 20:23:49.983000+00:00 | 2019-01-14 08:39:44.993000+00:00 | null | machine-learning|deep-learning|nlp|lstm|recurrent-neural-network | ['https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=2ahUKEwj5yc2I7uzfAhUCYo8KHRWzBkIQFjAAegQIBxAB&url=https%3A%2F%2Farxiv.org%2Fabs%2F1706.03762&usg=AOvVaw2ceXGQohV5Kx51VSkfkG08', 'https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=7&cad=rja&uact=8&ved=2ahUKEwjW-JWv7uzfAhUGaI8KHWIdCNwQFjAGegQIAhAC&url=https%3A%2F%2Farxiv.org%2Fpdf%2F1803.00144&usg=AOvVaw2S-aKSTJS-tomj5IGFiVgT', 'https://github.com/zhangruiskyline/DeepLearning/blob/master/doc/RNN.md'] | 3 |
52,599,504 | <p>Personally, I have not used LSTMs at the level of depth that you are trying to attain, but I do have some suggestions. </p>
<p>One solution to your problem, which you mentioned above, could be to simply split your document into smaller pieces and analyze them that way. You'll probably have to be creative.</p>
<p>Another solution that I think might be of interest to you is to use a Tree LSTM model in order to get that level of depth. <a href="https://arxiv.org/pdf/1503.00075.pdf" rel="nofollow noreferrer">Here's the link to the paper.</a> Using the Tree model you could feed in individual characters or words at the lowest level and then feed them upward to higher levels of abstraction. Again, I am not completely familiar with the model, so don't take my word on it, but it could be a possible solution.</p> | 2018-10-01 22:04:29.230000+00:00 | 2018-10-01 22:04:29.230000+00:00 | null | null | 52,562,216 | <p>I built a character-level LSTM model on text data, but ultimately I'm looking to apply this model on very long text documents (such as a novel) where it's important to understand contextual information, such as where in the novel it's in. </p>
<p>For these large-scale NLP tasks, is the data usually cut into smaller pieces and concatenated with metadata - such as position within the document, detected topic, etc. - to be fed into the model? Or are there more elegant techniques?</p> | 2018-09-28 20:23:49.983000+00:00 | 2019-01-14 08:39:44.993000+00:00 | null | machine-learning|deep-learning|nlp|lstm|recurrent-neural-network | ['https://arxiv.org/pdf/1503.00075.pdf'] | 1 |
47,910,243 | <blockquote>
<p>How can we train a neural network so that it ends up maximizing classification accuracy?</p>
<p>I'm asking for a way to get a continuous proxy function that's closer to the accuracy</p>
</blockquote>
<p>To start with, the loss function used today for classification tasks in (deep) neural nets was not invented with them, but it goes back several decades, and it actually comes from the early days of logistic regression. Here is the equation for the simple case of binary classification:</p>
<p><a href="https://i.stack.imgur.com/ganiv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ganiv.png" alt="enter image description here" /></a></p>
<p>The idea behind it was exactly to come up with a <em>continuous & differentiable</em> function, so that we would be able to exploit the (vast, and still expanding) arsenal of convex optimization for classification problems.</p>
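<p>For concreteness, here is a quick numpy transcription of the formula above (not tied to any particular framework):</p>
<pre><code>import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    # average of -[y*log(p) + (1-y)*log(1-p)] over the samples
    p = np.clip(p_pred, eps, 1 - eps)  # keep log() away from 0
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(binary_cross_entropy(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.6])))
</code></pre>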
<p>It is safe to say that the above loss function is the best we have <em>so far</em>, given the desired mathematical constraints mentioned above.</p>
<p>Should we consider this problem (i.e. better approximating the accuracy) solved and finished? At least in principle, no. I am old enough to remember an era when the only activation functions practically available were <code>tanh</code> and <code>sigmoid</code>; then came ReLU and gave a real boost to the field. Similarly, someone may eventually come up with a better loss function, but arguably this is going to happen in a research paper, and not as an answer to a SO question...</p>
<p>That said, the very fact that the current loss function comes from very <em>elementary</em> considerations of probability and information theory (fields that, in sharp contrast with the current field of deep learning, stand upon firm theoretical foundations) creates at least some doubt as to if a better proposal for the loss may be just around the corner.</p>
<hr />
<p>There is another subtle point on the relation between loss and accuracy, which makes the latter something qualitatively different than the former, and is frequently lost in such discussions. Let me elaborate a little...</p>
<p>All the classifiers related to this discussion (i.e. neural nets, logistic regression etc) are <em>probabilistic</em> ones; that is, they do not return hard class memberships (0/1) but class probabilities (continuous real numbers in [0, 1]).</p>
<p>Limiting the discussion for simplicity to the binary case, when converting a class probability to a (hard) class membership, we are implicitly involving a <em>threshold</em>, usually equal to 0.5, such as if <code>p[i] > 0.5</code>, then <code>class[i] = "1"</code>. Now, we can find many cases where this naive default choice of threshold will not work (heavily imbalanced datasets are the first to come to mind), and we'll have to choose a different one. But the important point for our discussion here is that this threshold selection, while being of central importance to the accuracy, is completely <em>external</em> to the mathematical optimization problem of minimizing the loss, and serves as a further "insulation layer" between them, compromising the simplistic view that loss is just a proxy for accuracy (it is not). As nicely put in the answer of <a href="https://stats.stackexchange.com/questions/312119/reduce-classification-probability-threshold">this Cross Validated thread</a>:</p>
<blockquote>
<p>the statistical component of your exercise ends when you output a probability for each class of your new sample. Choosing a threshold beyond which you classify a new observation as 1 vs. 0 is not part of the <em>statistics</em> any more. It is part of the <em>decision</em> component.</p>
</blockquote>
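<p>To make that point concrete, here is a tiny numpy sketch (the numbers are made up): once the predicted probabilities are fixed, the loss is fixed, but the accuracy still changes with the externally chosen threshold.</p>
<pre><code>import numpy as np

y_true = np.array([0, 0, 1, 1])
p_pred = np.array([0.3, 0.55, 0.6, 0.9])   # hypothetical predicted probabilities

# the loss depends only on the probabilities...
loss = -np.mean(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))

# ...while the accuracy also depends on the decision threshold
for thr in (0.5, 0.58):
    acc = np.mean((p_pred > thr).astype(int) == y_true)
    print(f"threshold={thr}: loss={loss:.3f}, accuracy={acc:.2f}")
</code></pre>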
<hr />
<p>Enlarging somewhat an already broad discussion: Can we possibly move completely away from the (very) limiting constraint of mathematical optimization of continuous & differentiable functions? In other words, can we do away with back-propagation and gradient descent?</p>
<p>Well, we are actually doing so already, at least in the sub-field of reinforcement learning: 2017 was the year when <a href="https://blog.openai.com/evolution-strategies/" rel="nofollow noreferrer">new research from OpenAI</a> on something called <em>Evolution Strategies</em> <a href="https://www.technologyreview.com/s/603921/elon-musks-openai-unveils-a-simpler-way-for-machines-to-learn/" rel="nofollow noreferrer">made headlines</a>. And as an extra bonus, here is an ultra-fresh (Dec 2017) <a href="https://arxiv.org/abs/1712.06567" rel="nofollow noreferrer">paper by Uber</a> on the subject, again generating <a href="https://twitter.com/dennybritz/status/943384648595357696" rel="nofollow noreferrer">much enthusiasm</a> in the community.</p> | 2017-12-20 16:13:11.360000+00:00 | 2021-06-08 13:35:22.050000+00:00 | 2021-06-08 13:35:22.050000+00:00 | null | 47,891,197 | <p>When we train neural networks, we typically use gradient descent, which relies on a continuous, differentiable real-valued cost function. The final cost function might, for example, take the mean squared error. Or put another way, gradient descent implicitly assumes the end goal is <em>regression</em> - to minimize a real-valued error measure.</p>
<p>Sometimes what we want a neural network to do is perform <em>classification</em> - given an input, classify it into two or more discrete categories. In this case, the end goal the user cares about is classification accuracy - the percentage of cases classified correctly.</p>
<p>But when we are using a neural network for classification, though <em>our</em> goal is classification accuracy, <em>that is not what the neural network is trying to optimize</em>. The neural network is still trying to optimize the real-valued cost function. Sometimes these point in the same direction, but sometimes they don't. In particular, I've been running into cases where a neural network trained to correctly minimize the cost function, has a classification accuracy worse than a simple hand-coded threshold comparison.</p>
<p>I've boiled this down to a minimal test case using TensorFlow. It sets up a perceptron (neural network with no hidden layers), trains it on an absolutely minimal dataset (one input variable, one binary output variable) assesses the classification accuracy of the result, then compares it to the classification accuracy of a simple hand-coded threshold comparison; the results are 60% and 80% respectively. Intuitively, this is because a single outlier with a large input value, generates a correspondingly large output value, so the way to minimize the cost function is to try extra hard to accommodate that one case, in the process misclassifying two more ordinary cases. The perceptron is correctly doing what it was told to do; it's just that this does not match what we actually want of a classifier. But the classification accuracy is not a continuous differentiable function, so we can't use it as the target for gradient descent.</p>
<p>How can we train a neural network so that it ends up maximizing classification accuracy?</p>
<pre><code>import numpy as np
import tensorflow as tf
sess = tf.InteractiveSession()
tf.set_random_seed(1)
# Parameters
epochs = 10000
learning_rate = 0.01
# Data
train_X = [
[0],
[0],
[2],
[2],
[9],
]
train_Y = [
0,
0,
1,
1,
0,
]
rows = np.shape(train_X)[0]
cols = np.shape(train_X)[1]
# Inputs and outputs
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
# Weights
W = tf.Variable(tf.random_normal([cols]))
b = tf.Variable(tf.random_normal([]))
# Model
pred = tf.tensordot(X, W, 1) + b
cost = tf.reduce_sum((pred-Y)**2/rows)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
tf.global_variables_initializer().run()
# Train
for epoch in range(epochs):
# Print update at successive doublings of time
if epoch&(epoch-1) == 0 or epoch == epochs-1:
print('{} {} {} {}'.format(
epoch,
cost.eval({X: train_X, Y: train_Y}),
W.eval(),
b.eval(),
))
optimizer.run({X: train_X, Y: train_Y})
# Classification accuracy of perceptron
classifications = [pred.eval({X: x}) > 0.5 for x in train_X]
correct = sum([p == y for (p, y) in zip(classifications, train_Y)])
print('{}/{} = perceptron accuracy'.format(correct, rows))
# Classification accuracy of hand-coded threshold comparison
classifications = [x[0] > 1.0 for x in train_X]
correct = sum([p == y for (p, y) in zip(classifications, train_Y)])
print('{}/{} = threshold accuracy'.format(correct, rows))
</code></pre> | 2017-12-19 16:28:02.923000+00:00 | 2021-06-08 13:35:22.050000+00:00 | 2019-06-08 10:02:33.310000+00:00 | machine-learning|neural-network|classification|gradient-descent|loss-function | ['https://i.stack.imgur.com/ganiv.png', 'https://stats.stackexchange.com/questions/312119/reduce-classification-probability-threshold', 'https://blog.openai.com/evolution-strategies/', 'https://www.technologyreview.com/s/603921/elon-musks-openai-unveils-a-simpler-way-for-machines-to-learn/', 'https://arxiv.org/abs/1712.06567', 'https://twitter.com/dennybritz/status/943384648595357696'] | 6 |
32,972,012 | <p>Since the hyperparameters we're talking about are related to <strong>backpropagation</strong>, which is a gradient-based approach, I believe the main reference is <a href="http://arxiv.org/abs/1206.5533" rel="nofollow">Y. Bengio</a>, along with the more classic <a href="http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf" rel="nofollow">Lecun et al.</a>.</p>
<p>There are three main approaches to find out the optimal value for an hyperparameter. The first two are well explained in the first paper I linked.</p>
<ul>
<li>Manual search. The researcher chooses the optimal value through trial and error.</li>
<li>Automatic search. The researcher relies on an automated routine in order to speed up the search.</li>
<li>Bayesian Optimization. You can find a video presenting it <a href="https://www.youtube.com/watch?v=VG2uCpKJkSg" rel="nofollow">here</a>.</li>
</ul> | 2015-10-06 14:02:34.740000+00:00 | 2015-10-06 17:34:06.910000+00:00 | 2015-10-06 17:34:06.910000+00:00 | null | 32,956,598 | <p>I was just wondering if someone could provide a good source for me to read on how I should approach choosing hyper-parameters of the solver based on the complexity of my problem.</p>
<p>Basically, I understand that many feel that they are "shooting around in the dark" when it comes to setting and then modifying these parameters and a system or benchmark for choosing parameters based on specific problem/data complexity has escaped me.</p>
<p>If you care to explain your own methodology or simply provide commentary on your source, it would be much appreciated.</p> | 2015-10-05 19:40:58.087000+00:00 | 2015-10-06 17:34:06.910000+00:00 | null | neural-network|caffe | ['http://arxiv.org/abs/1206.5533', 'http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf', 'https://www.youtube.com/watch?v=VG2uCpKJkSg'] | 3 |
70,806,866 | <p>Python's <a href="https://docs.python.org/3/library/fractions.html" rel="nofollow noreferrer"><code>fractions</code></a> module and its class, <code>Fraction</code>, implement arithmetic with rational numbers. The <code>Fraction</code> class doesn't implement a square root operation, because most square roots are irrational numbers. However, it can be used to approximate a square root with arbitrary accuracy, because a <code>Fraction</code>'s numerator and denominator are arbitrary-precision integers.</p>
<p>The following method takes a positive number <code>x</code> and a number of iterations, and returns upper and lower bounds for the square root of <code>x</code>.</p>
<pre><code>from fractions import Fraction
def sqrt(x, n):
    # Newton's (Heron's) method with exact Fraction arithmetic:
    # each iteration roughly doubles the number of correct digits.
    x = x if isinstance(x, Fraction) else Fraction(x)
    upper = x + 1                      # starting upper bound for sqrt(x) when x >= 0
    for i in range(0, n):
        upper = (upper + x/upper) / 2  # Newton step; stays an upper bound
    lower = x / upper                  # if upper >= sqrt(x) then x/upper <= sqrt(x)
    if lower > upper:
        raise ValueError("Sanity check failed")
    return (lower, upper)
</code></pre>
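<p>For example, a quick use of the helper above (the choice of 10 iterations here is arbitrary; Newton's method converges very fast):</p>
<pre><code>lo, hi = sqrt(2, 10)
print(float(lo), float(hi))   # both bracket 1.41421356... very tightly
</code></pre>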
<p>See the reference below for details on this operation's implementation. It also shows how to implement other operations with upper and lower bounds (although there is apparently at least one error with the <code>log</code> operation there).</p>
<ul>
<li>Daumas, M., Lester, D., Muñoz, C., "Verified Real Number Calculations: A Library for Interval Arithmetic", arXiv:0708.3721 [cs.MS], 2007.</li>
</ul>
<p>Alternatively, using Python's <code>math.isqrt</code>, we can calculate a square root to arbitrary precision:</p>
<ul>
<li>Square root of <code>i</code> within 1/2<sup><em>n</em></sup> of the correct value, where <code>i</code> is an integer:<code>Fraction(math.isqrt(i * 2**(n*2)), 2**n)</code>.</li>
<li>Square root of <code>i</code> within 1/10<sup><em>n</em></sup> of the correct value, where <code>i</code> is an integer:<code>Fraction(math.isqrt(i * 10**(n*2)), 10**n)</code>.</li>
<li>Square root of <code>x</code> within 1/2<sup><em>n</em></sup> of the correct value, where <code>x</code> is a multiple of 1/2<sup><em>n</em></sup>:<code>Fraction(math.isqrt(x * 2**(n*2)), 2**n)</code>.</li>
<li>Square root of <code>x</code> within 1/10<sup><em>n</em></sup> of the correct value, where <code>x</code> is a multiple of 1/10<sup><em>n</em></sup>:<code>Fraction(math.isqrt(x * 10**(n*2)), 10**n)</code> (see the quick check after this list).</li>
</ul>
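<p>A quick sanity check of these recipes (<code>n</code> is an arbitrary precision here; note that <code>math.isqrt</code> only accepts integers, hence the <code>int(...)</code> coercion in the second line):</p>
<pre><code>import math
from fractions import Fraction

n = 6
print(Fraction(math.isqrt(2 * 2**(n*2)), 2**n))                    # sqrt(2) ~ 45/32 = 1.40625
print(Fraction(math.isqrt(int(Fraction(9, 4) * 2**(n*2))), 2**n))  # sqrt(2.25) = 3/2 exactly
</code></pre>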
<p>In the foregoing, <code>i</code> or <code>x</code> must be 0 or greater.</p> | 2022-01-21 19:45:14.950000+00:00 | 2022-06-14 16:39:24.177000+00:00 | 2022-06-14 16:39:24.177000+00:00 | null | 70,793,490 | <p>I need to calculate the square root of some numbers, for example <code>√9 = 3</code> and <code>√2 = 1.4142</code>. How can I do it in Python?</p>
<p>The inputs will probably be all positive integers, and relatively small (say less than a billion), but just in case they're not, is there anything that might break?</p>
<hr />
<p><strong>Related</strong></p>
<ul>
<li><a href="https://stackoverflow.com/questions/15390807/integer-square-root-in-python">Integer square root in python</a>
<ul>
<li><a href="https://stackoverflow.com/questions/15978781/how-to-find-integer-nth-roots">How to find integer nth roots?</a></li>
</ul>
</li>
<li><a href="https://stackoverflow.com/questions/19255120/is-there-a-short-hand-for-nth-root-of-x-in-python">Is there a short-hand for nth root of x in Python?</a></li>
<li><a href="https://stackoverflow.com/questions/33684948/difference-between-1-2-math-sqrt-and-cmath-sqrt">Difference between **(1/2), math.sqrt and cmath.sqrt?</a></li>
<li><a href="https://stackoverflow.com/questions/41551529/why-is-math-sqrt-incorrect-for-large-numbers">Why is math.sqrt() incorrect for large numbers?</a></li>
<li><a href="https://stackoverflow.com/questions/28239978/python-sqrt-limit">Python sqrt limit for very large numbers?</a></li>
<li><a href="https://stackoverflow.com/questions/327002/which-is-faster-in-python-x-5-or-math-sqrtx">Which is faster in Python: x**.5 or math.sqrt(x)?</a></li>
<li><a href="https://stackoverflow.com/questions/9595135/why-does-python-give-the-wrong-answer-for-square-root">Why does Python give the "wrong" answer for square root?</a> (specific to Python 2)</li>
<li><a href="https://stackoverflow.com/questions/15424123/calculating-n-th-roots-using-python-3s-decimal-module">calculating n-th roots using Python 3's decimal module</a></li>
<li><a href="https://stackoverflow.com/questions/17766774/how-can-i-take-the-square-root-of-1-using-python">How can I take the square root of -1 using python?</a> (focused on NumPy)</li>
<li><a href="https://stackoverflow.com/questions/10725522/arbitrary-precision-of-square-roots">Arbitrary precision of square roots</a></li>
</ul>
<p><em><sub><strong>Note</strong>: This is an attempt at a <a href="https://meta.stackoverflow.com/q/291992/4518341">canonical question</a> after <a href="https://meta.stackoverflow.com/questions/415385/how-can-we-handle-a-question-where-the-problem-in-the-title-is-different-from-th">a discussion on Meta</a> about <a href="https://stackoverflow.com/questions/9595135/why-does-python-give-the-wrong-answer-for-square-root">an existing question with the same title</a>.</sub></em></p> | 2022-01-20 21:16:14.703000+00:00 | 2022-08-29 11:24:20.397000+00:00 | 2022-02-17 03:40:07+00:00 | python|math|sqrt | ['https://docs.python.org/3/library/fractions.html'] | 1 |
41,191,846 | <p>It's an extra variable that was created because you are using an <code>AdamOptimizer()</code> to train your data. You can read about the algorithm in the original paper - <a href="https://arxiv.org/pdf/1412.6980v8.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1412.6980v8.pdf</a></p> | 2016-12-16 20:16:18.400000+00:00 | 2016-12-16 20:16:18.400000+00:00 | null | null | 41,191,240 | <p>I printed all tensor value in a checkpoint file.</p>
<p>I can understand "conv1/weights". But what is "conv1/weights/Adam" in checkpoint file?</p> | 2016-12-16 19:28:36.107000+00:00 | 2016-12-16 20:16:18.400000+00:00 | null | tensorflow | ['https://arxiv.org/pdf/1412.6980v8.pdf'] | 1 |
5,416,036 | <p>See <a href="http://arxiv.org/abs/cs/0509027" rel="noreferrer">Haskell's Overlooked Object System</a> by Oleg Kiselyov and Ralf Laemmel for a detailed explanation of how OO concepts can be implemented in Haskell. But as Antal said in the comments, don't try to write a Java program in Haskell.</p>
<p>Remember that objects are a poor man's closure, and closures are a poor man's object.</p> | 2011-03-24 07:30:57.273000+00:00 | 2015-03-08 19:41:06.490000+00:00 | 2015-03-08 19:41:06.490000+00:00 | null | 5,414,323 | <p>Does it support concepts like separation of declaration and implementation (interfaces and classes in Java)?</p>
<p>How about restricting access (like access modifiers in Java)?</p> | 2011-03-24 03:10:44.813000+00:00 | 2020-03-11 14:08:28.227000+00:00 | 2018-09-28 13:45:09.963000+00:00 | haskell | ['http://arxiv.org/abs/cs/0509027'] | 1 |
57,254,836 | <p>If I understood the article, they are building an autoencoder to <strong>denoise</strong> the data in order to classify it later by adding some layers on top of it. In your case, I'm not sure the autoencoder structure is necessary as you just want to classify your texture.</p>
<p>If you want to keep your idea of using a metric to choose the textures that are the most similar to the one provided, you can for example use a strategy similar to <a href="https://arxiv.org/pdf/1511.05879.pdf" rel="nofollow noreferrer">this one</a> (the R-MAC method).</p> | 2019-07-29 13:23:04.227000+00:00 | 2019-07-29 13:34:36.853000+00:00 | 2019-07-29 13:34:36.853000+00:00 | null | 57,248,453 | <p>I'm trying to develop a CBIR (Content Based Image Retrieval) system for textures. My approach right now, due to the huge number of classes and the unlabeled data, is to use an autoencoder in order to extract the features and then use cosine similarity in order to choose the textures that are the most similar to the one provided. I have made some tests and the idea seems to work fine but I'm having lots of problems with the design of the NN. I'm using a convolutional autoencoder that right now looks like this: </p>
<pre><code>_________________________________________________________________
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
e_conv1 (Conv2D) (None, 128, 128, 32) 320
_________________________________________________________________
e_pool1 (MaxPooling2D) (None, 64, 64, 32) 0
_________________________________________________________________
e_conv2 (Conv2D) (None, 64, 64, 16) 4624
_________________________________________________________________
e_pool2 (MaxPooling2D) (None, 32, 32, 16) 0
_________________________________________________________________
e_conv3 (Conv2D) (None, 32, 32, 16) 2320
_________________________________________________________________
e_pool3 (MaxPooling2D) (None, 16, 16, 16) 0
_________________________________________________________________
e_conv41 (Conv2D) (None, 16, 16, 8) 1160
_________________________________________________________________
e_pool4 (MaxPooling2D) (None, 8, 8, 8) 0
_________________________________________________________________
e_conv42 (Conv2D) (None, 8, 8, 8) 584
_________________________________________________________________
e_pool42 (MaxPooling2D) (None, 4, 4, 8) 0
_________________________________________________________________
e_conv43 (Conv2D) (None, 4, 4, 8) 584
_________________________________________________________________
flatten (Flatten) (None, 128) 0
_________________________________________________________________
reshape (Reshape) (None, 4, 4, 8) 0
_________________________________________________________________
d_conv00 (Conv2D) (None, 4, 4, 8) 584
_________________________________________________________________
d_pool01 (UpSampling2D) (None, 8, 8, 8) 0
_________________________________________________________________
d_conv01 (Conv2D) (None, 8, 8, 8) 584
_________________________________________________________________
d_pool0 (UpSampling2D) (None, 16, 16, 8) 0
_________________________________________________________________
d_conv02 (Conv2D) (None, 16, 16, 8) 584
_________________________________________________________________
d_pool1 (UpSampling2D) (None, 32, 32, 8) 0
_________________________________________________________________
d_conv1 (Conv2D) (None, 32, 32, 16) 1168
_________________________________________________________________
d_pool2 (UpSampling2D) (None, 64, 64, 16) 0
_________________________________________________________________
d_conv2 (Conv2D) (None, 64, 64, 16) 2320
_________________________________________________________________
d_pool3 (UpSampling2D) (None, 128, 128, 16) 0
_________________________________________________________________
d_conv3 (Conv2D) (None, 128, 128, 32) 4640
_________________________________________________________________
logits (Conv2D) (None, 128, 128, 1) 289
=================================================================
Total params: 19,761
Trainable params: 19,761
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p>The optimizer is adam and the loss function mse. The images that I'm using right now are in gray scale in order to be able to make tests faster. I'm using the keras api for making tests. The input image size is 128x128 but the original images are between 500x500 and 1700x1700.</p>
<p>The biggest problem I'm facing right now is that the network is not learning high level features; it just learns positions and its gray value. The texture details are really small and the result (the decoded image) seems to be a blurred version of the input one, which does not seem to work for its classification. I'm not sure how I should design the NN because I have not found any guide that explains how to combine multiple layers to reach the desired result (but I have found lots of tutorials that explain how each individual layer works).
<a href="https://i.stack.imgur.com/WGOdX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WGOdX.png" alt=""></a>
This image is in color and 32x32 because it is from the first tests, but in gray scale and with more than 1000 textures this won't work due to its blurriness.</p>
<p>Another doubt that I have is that I'm not sure if for the training I should use the same image for the input and output or if I should use different images of the same texture. I think that this should force the NN to learn high level features but I'm not sure if this will work.</p>
<p>Another problem I'm having is that right now I do not have an adequate computer that allows me to make tests with a well trained NN (soon I will have one, right now I'm using a free k80 on google colaboratory), so I'm not sure if the bad results are due to a bad design or to a lack of training. Does the neural network learn low level features first and then slowly learn the high level ones, or should it learn the high level ones directly? I have also found that at the end of the encoding about a third of its features have a value of 0 (in all the textures in the same position) and this does not seem right to me. Is this normal? Should more training allow these features to get a value? May this be related with the problem of dying relu nodes?</p>
<p><strong>EDIT1:</strong>
If you want to know what I'm trying to do with more detail I found <a href="https://hackernoon.com/a-deep-convolutional-denoising-autoencoder-for-image-classification-26c777d3b88e" rel="nofollow noreferrer">this</a>
article some days ago in which the author has the same problems as me and takes the same approach to solve them. The only thing you have to do is to substitute "magic card" with "texture". My data is composed of 20000 images of different sizes and between 1000 and 5000 different types of textures.</p> | 2019-07-29 06:47:34.980000+00:00 | 2019-07-29 13:34:36.853000+00:00 | 2019-07-29 07:38:04.510000+00:00 | tensorflow|machine-learning|image-processing|neural-network|deep-learning | ['https://arxiv.org/pdf/1511.05879.pdf'] | 1
53,361,494 | <p>The method is a util function of Faster R-CNN, so I assume you understand what the "anchor" proposed in Faster R-CNN is.</p>
<ul>
<li>"Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" <a href="https://arxiv.org/abs/1506.01497" rel="nofollow noreferrer">https://arxiv.org/abs/1506.01497</a></li>
</ul>
<p><code>base_size</code> and <code>anchor_scales</code> determine the size of the anchor.
For example, when <code>base_size=16</code> and <code>anchor_scales=[8, 16, 32]</code> (and <code>ratio=1.0</code>), height and width of the anchor will be <code>16 * [8, 16, 32] = (128, 256, 512)</code>, as you expected.
<code>ratio</code> determines the height and width aspect ratio.</p>
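<p>As a rough sketch (not the exact ChainerCV code), the anchor heights and widths can be derived like this; <code>ratio</code> trades height against width while keeping the area roughly constant:</p>
<pre><code>import numpy as np

base_size = 16
anchor_scales = [8, 16, 32]
ratios = [0.5, 1.0, 2.0]

for scale in anchor_scales:
    for ratio in ratios:
        h = base_size * scale * np.sqrt(ratio)       # anchor height
        w = base_size * scale * np.sqrt(1. / ratio)  # anchor width
        print(scale, ratio, h, w)   # with ratio=1.0 this gives 128, 256, 512 squares
</code></pre>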
<p>(I might be wrong in below paragraph, please correct if I'm wrong.)</p>
<p>I think <code>base_size</code> needs to be set to the scale of the current hidden layer. In the <code>chainercv</code> Faster R-CNN implementation, the <code>extractor</code>'s feature is fed into the <code>rpn</code> (region proposal network) and <code>generate_anchor_base</code> is used in the <code>rpn</code>. So you need to take care of what the <code>extractor</code>'s output feature is. <code>chainercv</code> uses VGG16 as the feature extractor, and the <code>conv5_3</code> layer is used as the extracted feature (see <a href="https://github.com/chainer/chainercv/blob/master/chainercv/links/model/faster_rcnn/faster_rcnn_vgg.py#L109" rel="nofollow noreferrer">here</a>); this layer comes after <code>max_pooling_2d</code> has been applied 4 times, which results in a feature map 2^4 = 16 times smaller.</p>
<p>For the another question, I think your understanding is correct, <code>py - h / 2</code> will be negative value. But this <code>anchor_base</code> value is just a relative value. Once <code>anchor_base</code> is prepared at the initialization of model (<a href="https://github.com/chainer/chainercv/blob/master/chainercv/links/model/faster_rcnn/region_proposal_network.py#L55-L56" rel="nofollow noreferrer">here</a>), actual (absolute value) <code>anchor</code> is created in each forward call (<a href="https://github.com/chainer/chainercv/blob/master/chainercv/links/model/faster_rcnn/region_proposal_network.py#L116-L117" rel="nofollow noreferrer">here</a>) in <code>_enumerate_shifted_anchor</code> method.</p> | 2018-11-18 13:39:05.113000+00:00 | 2018-11-18 13:39:05.113000+00:00 | null | null | 53,360,552 | <p><a href="https://github.com/chainer/chainercv/blob/master/chainercv/links/model/faster_rcnn/utils/generate_anchor_base.py" rel="nofollow noreferrer">Github page</a></p>
<p>Looking at the <code>generate_anchor_base</code> method, which is a Faster R-CNN util method in ChainerCV.</p>
<p>What is the <code>base_size = 16</code>? I saw in the Documentation that it is </p>
<blockquote>
<p>The width and the height of the reference window. </p>
</blockquote>
<p>But what does "reference window" mean? </p>
<p>Also it says that <code>anchor_scales=[8, 16, 32]</code> are the areas of the anchors, but I thought that the areas are (128, 256, 512)</p>
<p>Another question:<br>
If the <code>base size</code> is 16 and <code>h = 128</code> and <code>w=128</code>, does that mean <code>anchor_base[index, 0] = py - h / 2</code> is a negative value?
since py = 8 and and h/2 = 128/2</p> | 2018-11-18 11:52:27.437000+00:00 | 2018-11-18 14:06:30.373000+00:00 | 2018-11-18 14:06:30.373000+00:00 | chainer|chainercv | ['https://arxiv.org/abs/1506.01497', 'https://github.com/chainer/chainercv/blob/master/chainercv/links/model/faster_rcnn/faster_rcnn_vgg.py#L109', 'https://github.com/chainer/chainercv/blob/master/chainercv/links/model/faster_rcnn/region_proposal_network.py#L55-L56', 'https://github.com/chainer/chainercv/blob/master/chainercv/links/model/faster_rcnn/region_proposal_network.py#L116-L117'] | 4 |
4,509,603 | <p>You're right to point out that the adjacencies for a vertex are most accurately modelled by a set (or in the case of a multigraph, a multiset). So why do data structures books write about arrays and linked lists instead? I can think of three reasons:</p>
<ol>
<li><p>The idea that programming languages should include sets as a primitive data type is fairly recent. Older writers wouldn't have considered using it, and modern writers tend to follow the traditions of the field.</p></li>
<li><p>One of the purposes of a data structures course is to enable you to think about the representation of data at a low (concrete) level as well as at a high (abstract) level. A set is an abstract datatype that (unlike linked lists and arrays) doesn't have an obvious low-level implementation: some sets are best represented as linked lists, some as hash tables, some as arrays, and so on. So it is natural for a data structures course to skip over the high level representation of sets to their low-level implementation, which you have to know about anyway in order to analyse the behaviour of algorithms that use them.</p></li>
<li><p>It's important not to be dogmatic about how to represent datatypes, because algorithms can be most efficiently expressed using particular representations. Example 1. To count the paths of length <em>n</em> between each pair of vertices in a graph, represent the graph by its adjacency matrix and raise the matrix to the power <em>n</em>. If you insist on representing the adjacencies of a vertex as a set of edges, then you'll miss this algorithm (which can be parallelized using standard techniques). Example 2. Knuth's "<a href="http://lanl.arxiv.org/pdf/cs/0011047" rel="noreferrer">Dancing Links</a>" algorithm for the exact cover problem represents sets of columns using doubly linked lists, so that the links from deleted items can be reused for efficient backtracking.</p></li>
</ol> | 2010-12-22 13:25:12.920000+00:00 | 2010-12-22 13:25:12.920000+00:00 | null | null | 4,509,270 | <p>I'm from Argentina, but i think everybody who has ever take a Data Structures class know what a graph is. If you do, you might know what kind of implementations are "common" or "standar". It can be implemented through a List, or an array. Even Wikipedia says this. As well as Mark Allen Weiss, Bruno Preiss and Luis Joyanes Aguilar.</p>
<p>The thing is: has no one ever thought that this is not a good way to do it? The most recommended way is through a List. But considering that vertices can have just one edge between them, I don't think that a List is the right interface for this. I mean, if Vertex V1 is connected with Vertex V2, then there is one and only one edge.</p>
<p>Don't you think it would be a Set instead of a list?</p>
<pre><code>class Vertex {
private Set edges;
private Object data;
/** Methods**/
}
</code></pre>
<p>Just want to know some opinions, what do you think?</p>
<p>Thanks!!</p>
<p><strong>Edit:</strong>
Also, if we think that the Graph can't have repeated elements, a HashSet would be a good choice to minimize the lookup of the vertex in the insertion.</p> | 2010-12-22 12:42:17.760000+00:00 | 2013-09-30 18:17:18.680000+00:00 | 2013-09-30 18:17:18.680000+00:00 | data-structures|graph|directed-graph | ['http://lanl.arxiv.org/pdf/cs/0011047'] | 1 |
69,880,180 | <p>You can further pretrain a BERT model with your own data with run_mlm.py at: <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling" rel="nofollow noreferrer">https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling</a>.</p>
<p>Also look at this: <a href="https://github.com/allenai/dont-stop-pretraining" rel="nofollow noreferrer">https://github.com/allenai/dont-stop-pretraining</a> and the paper: <a href="https://arxiv.org/pdf/2004.10964.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2004.10964.pdf</a> for related ideas and terminology: domain-adaptive pretraining and task-adaptive
pretraining.</p> | 2021-11-08 07:55:05.330000+00:00 | 2021-11-08 08:02:03.290000+00:00 | 2021-11-08 08:02:03.290000+00:00 | null | 62,948,077 | <p>I am trying to further pretrain the bert-base model using the custom data. The steps I'm following are as follows:</p>
<ol>
<li><p>Generate list of words from the custom data and add these words to the existing bert-base vocab file. The vocab size has been increased from <code>35022</code> to <code>35880</code>.</p>
</li>
<li><p>I created the input data using <strong>create_pretraining_data.py</strong> from <a href="https://github.com/google-research/bert" rel="nofollow noreferrer">the bert official github page</a>.</p>
</li>
<li><p>Doing the pretrain using <strong>run_pretraining.py</strong> but facing the mismatch error:</p>
</li>
</ol>
<blockquote>
<p>ValueError: Shape of variable bert/embeddings/word_embeddings:0
((35880, 128)) doesn't match with shape of tensor
bert/embeddings/word_embeddings ([30522, 128]) from checkpoint reader.</p>
</blockquote>
<p><strong>Note:</strong> I changed the <code>bert_config</code> file with the latest <code>vocab_size</code> of <code>35880</code>.</p>
<p>Please help me to understand the error and what changes should be made, so that I can pretrain with the custom vocab file.</p> | 2020-07-17 06:07:07.730000+00:00 | 2021-11-08 08:02:03.290000+00:00 | 2020-07-17 06:20:44.930000+00:00 | python|tensorflow|nlp|pre-trained-model|bert-language-model | ['https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling', 'https://github.com/allenai/dont-stop-pretraining', 'https://arxiv.org/pdf/2004.10964.pdf'] | 3 |
60,236,088 | <p>The geometric meaning of the projection matrix in the OpenGL function <code>GLFrustum()</code>, which was explained in the official guide of OpenGL (the ninth edition of the book, OpenGL Programming Guide -- The Official Guide to Learning OpenGL), actually corresponds, from an algebraic point of view, to many geometric meanings that are unfortunately different from what the authors expected. </p>
<p>A significant difference between them is:
1. in the official guide, the authors are illustrating a compound linear transform with one of its factors being a <code>central projection</code>, so the final compound perspective projection matrix should be a singular or degenerate matrix;
2. while still in the official guide, in the appendix, the <code>GLFrustum()</code> perspective projection matrix is a nonsingular 4 by 4 square matrix!</p>
<p>Note: the authors are trying to explain a nonsingular matrix in terms of a theoretically singular one!</p>
<p>The following matrix decomposition (not unique) corresponds to one of the geometric meanings of the nonsingular <code>GLFrustum()</code> matrix:</p>
<p><a href="https://i.stack.imgur.com/CUVlA.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>All of the above matrix factors except for No. (2) have their geometric meaning clearly redefined in <a href="https://arxiv.org/abs/1307.0998" rel="nofollow noreferrer">Unified Framework of Elementary Geometric Transformations</a>. If you choose this factorization as the geometric explanation of the <code>GLFrustum()</code> perspective projection matrix, you will have to make sure any computations or transforms you are doing in your code are consistent with its geometric meaning.</p>
<p>So when you are programming with OpenGL's <code>GLFrustum()</code>, you probably will have to compare what has been illustrated in the official guide with what the <code>GLFrustum()</code> perspective projection matrix really means from the viewpoint of pure algebraic projective geometry, and use whichever interpretation suits you.</p> | 2020-02-15 05:24:02.187000+00:00 | 2020-02-15 05:24:02.187000+00:00 | null | null | 39,483,586 | <p>I've been using <code>gluPerspective</code> and somewhat have a grasp on what it's doing and how it works. But I still don't quite understand how to use it for any effect beyond that of where it starts and stops. Changing the <code>zfar</code> and <code>znear</code> never seems to affect anything much. Am I missing a cool effect? What are some ranges of values and what visual distortions can you do?</p> | 2016-09-14 06:13:16.973000+00:00 | 2020-12-02 19:24:49.840000+00:00 | 2016-09-14 10:29:05.400000+00:00 | opengl | ['https://i.stack.imgur.com/CUVlA.png', 'https://arxiv.org/abs/1307.0998'] | 2
69,528,827 | <p>To try and answer your question in the comments:</p>
<blockquote>
<p>I guess my question is why swapping register bits can give us a gate realizing order finding algorithms.</p>
</blockquote>
<p>One way to think of Shor's algorithm is it takes as input:</p>
<ul>
<li>A circuit* <code>U</code>, and</li>
<li>a starting state <code>|ψ⟩</code></li>
</ul>
<p>Shor's algorithm tells us the <em>period</em> of that circuit, i.e. the number of times you need to repeat <code>U</code> to get back to your initial input. We then use a classical algorithm to map factoring to this problem by setting <code>U|y⟩≡|ay mod N⟩</code> and <code>|ψ⟩=|1⟩</code>.</p>
<p>You can confirm through simulations that the circuit in the Qiskit Textbook has that property, although it doesn't give a method of generating that circuit (I imagine it was educated guessing similar to <a href="https://quantumcomputing.stackexchange.com/questions/12849/how-to-implement-cx-mod-n-unitary/12861#12861">this answer</a> but you'll need to read <a href="https://arxiv.org/abs/quant-ph/0205095" rel="nofollow noreferrer">that paper</a> in the other answer for a general method).</p>
<p>If we already know the answer using the algorithm, then we could just find any old circuit with the correct period and plug that in. E.g. a single swap gate acting on <code>|1⟩</code> has period 2. Although this doesn't really count as "doing Shor's algorithm", it's often used to demonstrate the algorithm works <a href="https://www.nature.com/articles/s41598-021-95973-w" rel="nofollow noreferrer">1</a> <a href="https://github.com/qiskit-community/ibm-quantum-challenge-2021/blob/main/solutions%20by%20participants/ex2/ex2-AlbertoMaldonadoRomo-6cnot.ipynb" rel="nofollow noreferrer">2</a>.</p>
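<p>For instance, a toy circuit with period 2 can be written in a few lines of Qiskit (a sketch, not the factoring circuit itself):</p>
<pre><code>from qiskit import QuantumCircuit

# a "U" whose period is 2: applying it twice maps every basis state back to itself
U = QuantumCircuit(2, name="toy U")
U.swap(0, 1)

circuit = QuantumCircuit(2)
circuit.x(0)                          # prepare the starting state |01>
circuit.append(U.to_gate(), [0, 1])   # one application of U
circuit.append(U.to_gate(), [0, 1])   # second application: back to |01>
</code></pre>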
<hr />
<p>*To make the algorithm efficient, the input is really "an efficient way to make circuits for <code>U^(2^x)</code>". Fortunately, we know how to do this for the circuits needed for factoring, but the Qiskit textbook just repeats <code>U</code> inefficiently for sake of demonstration.</p> | 2021-10-11 15:32:03.963000+00:00 | 2021-10-12 11:56:20.213000+00:00 | 2021-10-12 11:56:20.213000+00:00 | null | 69,524,922 | <p>I am studying the quantum circuit realization of Shor's algorithm about factoring 15 into product of prime numbers using the python package Qiskit. See this <a href="https://qiskit.org/textbook/ch-algorithms/shor.html" rel="nofollow noreferrer">website</a> for details.</p>
<p>My question is related to the realization of U-gate in this website. In this website, the realization of U-gate is given in the form</p>
<pre><code>def c_amod15(a, power):
"""Controlled multiplication by a mod 15"""
if a not in [2,7,8,11,13]:
raise ValueError("'a' must be 2,7,8,11 or 13")
U = QuantumCircuit(4)
for iteration in range(power):
if a in [2,13]:
U.swap(0,1)
U.swap(1,2)
U.swap(2,3)
if a in [7,8]:
U.swap(2,3)
U.swap(1,2)
U.swap(0,1)
if a == 11:
U.swap(1,3)
U.swap(0,2)
if a in [7,11,13]:
for q in range(4):
U.x(q)
U = U.to_gate()
U.name = "%i^%i mod 15" % (a, power)
c_U = U.control()
return c_U
</code></pre>
<p>My question is that why this U-gate is engineered in such way by swapping qbits. How exactly the value of 'a' will affect the swapping scheme? What if I want to factor 33? How should I change this swapping scheme to factor 33?</p> | 2021-10-11 10:43:57.963000+00:00 | 2021-10-12 11:56:20.213000+00:00 | 2021-10-11 10:52:58.037000+00:00 | algorithm|physics|quantum-computing|qiskit | ['https://quantumcomputing.stackexchange.com/questions/12849/how-to-implement-cx-mod-n-unitary/12861#12861', 'https://arxiv.org/abs/quant-ph/0205095', 'https://www.nature.com/articles/s41598-021-95973-w', 'https://github.com/qiskit-community/ibm-quantum-challenge-2021/blob/main/solutions%20by%20participants/ex2/ex2-AlbertoMaldonadoRomo-6cnot.ipynb'] | 4 |
69,525,245 | <p>The value of <code>a</code> is part of the phase estimation part of Shor's algorithm, where the operation</p>
<pre><code>|y> -> |ay mod N>
</code></pre>
<p>is applied. So <code>a</code> influences the arithmetic operation and depending on how you implement the modular multiplication influences the final circuit differently.</p>
<p>The Qiskit textbook implementation seems to only support special values of <code>a</code>, but the software package itself has the general code for all values of <code>a</code>: <a href="https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/algorithms/factorizers/shor.py" rel="nofollow noreferrer">https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/algorithms/factorizers/shor.py</a></p>
<p>That code uses the Fourier transform to do the multiplication, so <code>a</code> will influence the phase shifts applied after the Fourier transform. Qiskit's implementation is based on <a href="https://arxiv.org/abs/quant-ph/0205095" rel="nofollow noreferrer">this paper</a> where you can find more information.</p> | 2021-10-11 11:10:14.760000+00:00 | 2021-10-11 11:10:14.760000+00:00 | null | null | 69,524,922 | <p>I am studying the quantum circuit realization of Shor's algorithm about factoring 15 into product of prime numbers using the python package Qiskit. See this <a href="https://qiskit.org/textbook/ch-algorithms/shor.html" rel="nofollow noreferrer">website</a> for details.</p>
<p>My question is related to the realization of U-gate in this website. In this website, the realization of U-gate is given in the form</p>
<pre><code>def c_amod15(a, power):
"""Controlled multiplication by a mod 15"""
if a not in [2,7,8,11,13]:
raise ValueError("'a' must be 2,7,8,11 or 13")
U = QuantumCircuit(4)
for iteration in range(power):
if a in [2,13]:
U.swap(0,1)
U.swap(1,2)
U.swap(2,3)
if a in [7,8]:
U.swap(2,3)
U.swap(1,2)
U.swap(0,1)
if a == 11:
U.swap(1,3)
U.swap(0,2)
if a in [7,11,13]:
for q in range(4):
U.x(q)
U = U.to_gate()
U.name = "%i^%i mod 15" % (a, power)
c_U = U.control()
return c_U
</code></pre>
<p>My question is that why this U-gate is engineered in such way by swapping qbits. How exactly the value of 'a' will affect the swapping scheme? What if I want to factor 33? How should I change this swapping scheme to factor 33?</p> | 2021-10-11 10:43:57.963000+00:00 | 2021-10-12 11:56:20.213000+00:00 | 2021-10-11 10:52:58.037000+00:00 | algorithm|physics|quantum-computing|qiskit | ['https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/algorithms/factorizers/shor.py', 'https://arxiv.org/abs/quant-ph/0205095'] | 2 |
68,473,054 | <p>This sort of discrepancy is normally due to the <a href="https://en.wikipedia.org/wiki/Percentile#The_linear_interpolation_between_closest_ranks_method" rel="nofollow noreferrer">interpolation method</a>, and is very noticeable when the sample is very small.</p>
<p>However, 6055 is exactly percentile 75 in your sample:</p>
<pre><code>1563 2731 3586 3966 4174 4971 6055 9175 15667
0/8 1/8 2/8 3/8 4/8 5/8 6/8 7/8 8/8
0 0.125 0.25 0.375 0.5 0.625 0.75 0.875 1
</code></pre>
<p>Accordingly, Numpy produces the same result using any of its <a href="https://numpy.org/doc/stable/reference/generated/numpy.quantile.html" rel="nofollow noreferrer">interpolation methods</a> (linear, lower, higher, nearest, midpoint).</p>
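<p>A quick check with the sample from the question:</p>
<pre><code>import numpy as np

data = [1563, 2731, 3586, 3966, 4174, 4971, 6055, 9175, 15667]
print(np.percentile(data, 75))   # 6055.0 with the default (linear) method
</code></pre>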
<p>Dynatrace may be using a more complex interpolation method like <a href="https://arxiv.org/abs/1902.04023" rel="nofollow noreferrer">this one</a>. One of the authors is affiliated to Dynatrace.</p> | 2021-07-21 16:14:47.720000+00:00 | 2021-07-21 16:14:47.720000+00:00 | null | null | 68,468,287 | <p>I am trying to create a report based on the data extracted from Dynatrace.</p>
<p>I am extracting the data on daily basis for the events, in my Python Django report, I need to show the Nth percentile data (like <em>30</em>th percentile, <em>60</em>th Percentile, <em>75</em>th Percentile, <em>90</em>th Percentile).</p>
<p>When I try to pull the data from Dynatrace the below list is the result:
<code>[1563,2731,3586,3966,4174,4971,6055,9175,15667]</code></p>
<p>For this list, when I use numpy.percentile or df.quantile, I am getting one value which is similar to the percentile value from the formula I used in Excel.
However, the Dynatrace PERCENTILE function is showing a different value altogether.</p>
<p>For example, From the excel and Python, I am getting 75th Percentile as - <strong>6055</strong>
From Dynatrace I am getting - <strong>6835</strong></p>
<p>I tried to use some online tools to calculate the Percentile but all of them seem to be giving 6055.
If someone can explain how <strong>DynaTrace</strong> is calculating this, that would be a great help.</p>
<p>Thanks in advance</p> | 2021-07-21 10:57:23.073000+00:00 | 2021-07-21 16:14:47.720000+00:00 | null | python|excel|numpy|percentile|dynatrace | ['https://en.wikipedia.org/wiki/Percentile#The_linear_interpolation_between_closest_ranks_method', 'https://numpy.org/doc/stable/reference/generated/numpy.quantile.html', 'https://arxiv.org/abs/1902.04023'] | 3 |
62,017,730 | <blockquote>
<p>The intended result for the application: The input is a medical image and the output is the image with <strong>positive regions highlighted</strong> with the overall positive/negative indication with a confidence percentage.</p>
</blockquote>
<p>It sounds like when you say highlighting positive regions, you mean that you want to highlight the region showing the particular disease detected by the classification model. If this is what you mean, I don't believe there is an easy way to combine those two models in the way you described, to the best of my knowledge.</p>
<blockquote>
<p>The issue with the segmentation app is I need to change the segmentation app from continuous video to single images, and model input characteristics. I don’t know if there is a easy way todo this.</p>
</blockquote>
<p>Changing from real-time video handling to single image handling should be pretty trivial. The second part, model input characteristics, would not be so trivial to change for a number of reasons.</p>
<p>1) The image segmentation (Deeplab) model expects an image as the input. We can't suddenly change the input format and hope it magically works.</p>
<p>2) The CNN based image classification models are known to be difficult for humans to understand. It simply gives you the label and confidence, but it's based on the entire image. The output doesn't contain any information about what regions in the image contributed the most to the particular (label, confidence) pair. So, if we could somehow modify the segmentation model to take additional information, the image classification model wouldn't give any such useful information in the first place.</p>
<p>Instead, I would suggest searching for some research papers on this topic and see if you can implement those ideas. My quick search gave me <a href="https://arxiv.org/abs/1801.01693" rel="nofollow noreferrer">this paper</a>, for example. And there should be many more papers on medical image classification / segmentation, in which I have no domain knowledge in to help you further.</p> | 2020-05-26 08:29:51.087000+00:00 | 2020-05-26 08:29:51.087000+00:00 | null | null | 62,010,627 | <p>Problem: I am trying to combine <a href="https://github.com/obeshor/Plant-Diseases-Detector" rel="nofollow noreferrer">https://github.com/obeshor/Plant-Diseases-Detector</a> with <a href="https://github.com/pillarpond/image-segmenter-android" rel="nofollow noreferrer">https://github.com/pillarpond/image-segmenter-android</a> </p>
<p>The first application, Plant app, allows for single images to be processed and the model already works on it.</p>
<p>The second application, the image segmentation app, has great image segmentation features for users and researchers to understand what the model is looking at. </p>
<p>Due to the complexity in the image segmentation application I want to merge the Plant app into the segmentation app.</p>
<p>The issue with the segmentation app is that I need to change it from continuous video to single images, and change the model input characteristics. I don’t know if there is an easy way to do this.</p>
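<p>For context, my current understanding of single-image inference with a TFLite interpreter is roughly the following (illustrative Python only; the model path and input size are placeholders — the Android Interpreter follows the same allocate/set/invoke/get pattern), but I am not sure how to map this onto the segmentation app's camera pipeline:</p>
<pre><code>import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="deeplabv3.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height, width = input_details[0]["shape"][1:3]

# A single preloaded image instead of a camera stream (random stand-in here)
image = np.random.rand(1, height, width, 3).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()
segmentation_map = interpreter.get_tensor(output_details[0]["index"])
print(segmentation_map.shape)
</code></pre>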
<p>The model for the medical image classification is already complete and works on the Plant app.</p>
<p>The intended result for the application:
The input is a medical image and the output is the image with positive regions highlighted, along with an overall positive/negative indication and a confidence percentage.</p>
<p>Question: Does anyone have links to methods for combining these two apps, or guides for making a new one? I have been working at this for the past five months and am having trouble locating the places in the second app's code for changing the model input characteristics, integrating labels, and changing the camera from continuous capture to single images.</p>
<p>Thank you, and I hope I included enough information.</p> | 2020-05-25 20:53:56.890000+00:00 | 2020-05-26 08:29:51.087000+00:00 | null | android-studio|tensorflow|image-processing|tensorflow-lite | ['https://arxiv.org/abs/1801.01693'] | 1
62,916,716 | <p>Using the data you provided, but taking the log, you can fit an ordinary least squares model from <a href="https://scikit-learn.org/stable/modules/linear_model.html" rel="nofollow noreferrer">scikit-learn</a>:</p>
<pre><code>import numpy as np
import pymc3 as pm
import pandas as pd
import theano.tensor as tt
from sklearn.linear_model import LinearRegression
import statsmodels.formula.api as smf
X = np.log(np.array([11.52, 11.559, 12.31, 16.46, 11.84, 7.38, 9.99, 16.72, 11.617,
11.77, 6.48, 9.035, 12.87, 11.18, 6.75]))
y = np.log(np.array([25.51658407, 24.61306145, 19.4007494, 24.85111923,
25.99397106, 14.30284824, 17.69451713, 27.37460301,
22.23326366, 18.44905152, 10.28001306, 10.68681843,
28.85399089, 14.02840557, 18.41941787]))
reg = LinearRegression().fit(X.reshape(-1, 1), y.reshape(-1, 1))
print('Af estimate: ', np.exp(reg.intercept_))
</code></pre>
<p>This gives the following estimate of <code>Af</code></p>
<pre><code>Af estimate: [2.4844087]
</code></pre>
<p>Since you don't seem to be interested in predicting new data with the model, but rather want the best estimate of the linear model parameters, you could use <a href="https://www.statsmodels.org/stable/index.html" rel="nofollow noreferrer">statsmodels</a>:</p>
<pre><code>results = smf.ols('y ~ X + 1', data=pd.DataFrame({'y':y,'X':X})).fit()
print('Statsmodels Af estimate: ', np.exp(results.params['X']))
</code></pre>
<p>which gives 2.366, quite close to the previous value. The <code>r^2</code> is similar to the one you quote.</p>
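<p>If you also want the fit quality and the uncertainty on the coefficients straight from that result object, the standard <code>OLSResults</code> attributes give them (shown here as a sketch):</p>
<pre><code>print(results.rsquared)    # r^2 of the log-log fit
print(results.conf_int())  # 95% confidence intervals for the fitted parameters
print(results.summary())   # full fit report
</code></pre>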
<p>Lastly, my suggestion would be to use <a href="https://docs.pymc.io" rel="nofollow noreferrer">pymc3</a> and get a full Bayesian fit, which naturally lets you estimate the uncertainty of the quantity you want to measure. There is undoubtedly a bit of a learning curve to pymc3, but it is a great package for probabilistic programming. It allows you to estimate the full posterior over your parameter space, which is what most people are really interested in when fitting a model. An implementation for your problem could be the following:</p>
<pre><code>with pm.Model() as model:
# Prior
alpha = pm.Normal('alpha', mu=1.35, sd=5) # centered around the literature value
beta = pm.HalfNormal('beta', sd=10) # only positive values as it goes into the sqrt. Also is height always positive here?
sigma = pm.HalfNormal("sigma", sd=1)
beta2 = pm.Deterministic('beta2', tt.sqrt(beta*9.81)) # g is very well known
alpha_f = pm.Deterministic('alpha_f', tt.exp(alpha)) # estimate directly the output value we want
# Likelihood
likelihood = pm.Normal('y', mu=alpha + beta2 * X,sigma=sigma,observed=y)
    # Sampling
trace = pm.sample(init='adapt_diag')
print(pm.summary(trace))
</code></pre>
<p>This gives the following output:</p>
<pre><code> mean sd hpd_3% hpd_97% ... ess_sd ess_bulk ess_tail r_hat
alpha 0.781 0.544 -0.232 1.864 ... 309.0 440.0 406.0 1.01
beta 0.091 0.044 0.013 0.167 ... 517.0 438.0 359.0 1.01
sigma 0.259 0.056 0.172 0.368 ... 530.0 479.0 147.0 1.00
beta2 0.917 0.229 0.439 1.316 ... 434.0 438.0 359.0 1.01
alpha_f 2.535 1.552 0.465 5.224 ... 317.0 440.0 406.0 1.01
</code></pre>
<p>and you can see that there is a lot of uncertainty in <code>Af</code>.</p>
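<p>To visualise that uncertainty rather than just reading it off the table, you can plot the posterior of <code>alpha_f</code> directly (a sketch using the built-in plotting of recent pymc3 versions):</p>
<pre><code>import matplotlib.pyplot as plt

# Posterior of the quantity of interest (Af)
pm.plot_posterior(trace, var_names=['alpha_f'])
plt.show()
</code></pre>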
<p>However, it is important to consider the data that goes in too, and not to overinterpret the results. At the moment you don't provide any uncertainty in either <code>y</code> or <code>X</code>, or the covariance matrix. It is, however, very unlikely that you know these quantities perfectly, so the rational thing to do is to include these uncertainties in your modelling. pymc3 allows this to be implemented naturally. My implementation above includes an estimate of uncertainty from the data, but you may well have your own uncertainty from the measurement device. See also this <a href="https://stackoverflow.com/questions/39677240/multivariate-linear-regression-in-pymc3">question</a> and this <a href="https://arxiv.org/abs/1008.4686" rel="nofollow noreferrer">paper</a>.</p> | 2020-07-15 14:02:02.397000+00:00 | 2020-07-15 14:02:02.397000+00:00 | null | null | 62,893,027 | <p>I have a dataset with predicted and observed data.
The equation that predicts the data is given by: y = AfT <img src="https://chart.googleapis.com/chart?cht=tx&chl=%5Csqrt%7Bgh%7D" alt="\sqrt{gh}" /></p>
<p>Here Af = a constant (currently 1.35), T = wave period, g = gravitational acceleration (9.81), and h = wave height.</p>
<p>I'd like to use linear regression to find the best-fitting coefficient (Af in the equation),
so that the predicted values are closer to the observed data.</p>
<p>With Af = 1.35 (a suggestion from the literature) I get r^2 = 0.5676.
Ideally, I'd use Python to find the best-fitting coefficient for my data.</p>
<pre><code>import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris
X = np.array([11.52, 11.559, 12.31, 16.46, 11.84, 7.38, 9.99, 16.72, 11.617, 11.77, 6.48, 9.035, 12.87, 11.18, 6.75])
y = np.array([25.51658407, 24.61306145, 19.4007494, 24.85111923, 25.99397106, 14.30284824, 17.69451713, 27.37460301, 22.23326366, 18.44905152, 10.28001306, 10.68681843, 28.85399089, 14.02840557, 18.41941787]).reshape((-1, 1))
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(random_state=0).fit(X, y)
print(clf.coef_, clf.intercept_)
</code></pre>
<p>X = observed/measured values in the field,
y = the predicted values of X using the equation</p>
<p>I have difficulties incorporating the actual equation and finding the best fit for Af.</p> | 2020-07-14 10:22:51.807000+00:00 | 2020-07-15 14:02:02.397000+00:00 | 2020-07-14 13:43:32.693000+00:00 | python|scikit-learn | ['https://scikit-learn.org/stable/modules/linear_model.html', 'https://www.statsmodels.org/stable/index.html', 'https://docs.pymc.io', 'https://stackoverflow.com/questions/39677240/multivariate-linear-regression-in-pymc3', 'https://arxiv.org/abs/1008.4686'] | 5 |
58,056,503 | <p>To add to the answer above, another way of doing this is neural style transfer, where we feed two images to a CNN, which then generates a new image combining the content of one image with the style of the other. Check out this paper for further details: <a href="https://arxiv.org/abs/1508.06576" rel="nofollow noreferrer">https://arxiv.org/abs/1508.06576</a></p>
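<p>At its core, the method in that paper optimises a generated image against two losses computed on CNN activations: a content loss on a deep layer and a style loss on Gram matrices of several layers. A rough sketch of those losses (the feature extraction from a pretrained CNN is omitted, and the layer names and weights are assumptions):</p>
<pre><code>import tensorflow as tf

def gram_matrix(features):
    # features: an (H, W, C) activation map from one CNN layer
    flat = tf.reshape(features, (-1, features.shape[-1]))
    n = tf.cast(tf.shape(flat)[0], tf.float32)
    return tf.matmul(flat, flat, transpose_a=True) / n

def style_transfer_loss(gen, content, style, content_weight=1e4, style_weight=1e-2):
    # gen / content / style: dicts mapping layer name to activation for each image
    content_loss = tf.reduce_mean(tf.square(gen["content_layer"] - content["content_layer"]))
    style_loss = tf.add_n([
        tf.reduce_mean(tf.square(gram_matrix(gen[name]) - gram_matrix(style[name])))
        for name in style if name != "content_layer"
    ])
    return content_weight * content_loss + style_weight * style_loss
</code></pre>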
<p>We could, of course, always use <a href="https://arxiv.org/abs/1406.2661" rel="nofollow noreferrer">GAN</a>s to achieve full perfection.</p> | 2019-09-23 06:08:00.323000+00:00 | 2019-09-23 06:08:00.323000+00:00 | null | null | 41,703,747 | <p>Can a CNN be created such that it outputs the input image with a feature added?
For example, if an image of a person's face is input, it outputs an image of the person's face wearing glasses.</p> | 2017-01-17 17:49:23.903000+00:00 | 2019-09-23 06:08:00.323000+00:00 | null | machine-learning | ['https://arxiv.org/abs/1508.06576', 'https://arxiv.org/abs/1406.2661'] | 2
30,447,723 | <p>Very low scores on the analogy-questions are more likely due to limitations in the amount or quality of your training data, rather than mistuned parameters. (If your training phrases are really only 5 words each, they may not capture the same rich relations as can be discovered from datasets with full sentences.)</p>
<p>You could use a window of 5 on your phrases – the training code trims the window to what's available on either side – but then every word of each phrase affects all of the other words. That might be OK: one of the Google word2vec papers ("Distributed Representations of Words and Phrases
and their Compositionality", <a href="https://arxiv.org/abs/1310.4546" rel="nofollow noreferrer">https://arxiv.org/abs/1310.4546</a>) mentions that to get the best accuracy on one of their phrase tasks, they used "the entire sentence for the context". (On the other hand, on one English corpus of short messages, I found a window size of just 2 created the vectors that scored best on the analogies-evaluation, so larger isn't necessarily better.)</p>
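<p>If you want to check this empirically on your own corpus, a sketch like the following compares window sizes on the analogy set directly (gensim 4.x parameter names — older versions use <code>size</code>/<code>iter</code> — and <code>sentences</code> stands for your tokenised phrases):</p>
<pre><code>from gensim.models import Word2Vec

# `sentences` is assumed: an iterable of token lists, e.g. your 5-word phrases split on whitespace
for window in (2, 5):
    model = Word2Vec(sentences, vector_size=100, window=window,
                     min_count=5, sg=1, epochs=10, workers=4)
    score, sections = model.wv.evaluate_word_analogies("questions-words.txt")
    print(f"window={window}: analogy accuracy {score:.3f}")
</code></pre>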
<p>A paper by Levy & Goldberg, "Dependency-Based Word Embeddings", speaks a bit about the qualitative effect of window-size:</p>
<p><a href="https://levyomer.files.wordpress.com/2014/04/dependency-based-word-embeddings-acl-2014.pdf" rel="nofollow noreferrer">https://levyomer.files.wordpress.com/2014/04/dependency-based-word-embeddings-acl-2014.pdf</a></p>
<p>They find:</p>
<p>Larger windows tend to capture more topic/domain information: what other words (of any type) are used in related discussions? Smaller windows tend to capture more about the word itself: what other words are functionally similar? (Their own extension, the dependency-based embeddings, seems best at finding most-similar words, synonyms or obvious alternatives that could drop in as replacements of the origin word.)</p> | 2015-05-26 00:35:32.140000+00:00 | 2021-09-21 09:48:21.867000+00:00 | 2021-09-21 09:48:21.867000+00:00 | null | 22,272,370 | <p>I am trying to train a word2vec model on very short phrases (5-grams). Since each sentence or example is very short, I believe the window size I can use can be at most 2. I am trying to understand what the implications of such a small window size are for the quality of the learned model, so that I can tell whether my model has learnt something meaningful or not. I tried training a word2vec model on 5-grams, but it appears the learnt model does not capture semantics etc. very well.</p>
<p>I am using the following test to evaluate the accuracy of the model:
<a href="https://code.google.com/p/word2vec/source/browse/trunk/questions-words.txt" rel="noreferrer">https://code.google.com/p/word2vec/source/browse/trunk/questions-words.txt</a></p>
<p>I used gensim.Word2Vec to train a model, and here is a snippet of my accuracy scores (using a window size of 2):</p>
<pre><code>[{'correct': 2, 'incorrect': 304, 'section': 'capital-common-countries'},
{'correct': 2, 'incorrect': 453, 'section': 'capital-world'},
{'correct': 0, 'incorrect': 86, 'section': 'currency'},
{'correct': 2, 'incorrect': 703, 'section': 'city-in-state'},
{'correct': 123, 'incorrect': 183, 'section': 'family'},
{'correct': 21, 'incorrect': 791, 'section': 'gram1-adjective-to-adverb'},
{'correct': 8, 'incorrect': 544, 'section': 'gram2-opposite'},
{'correct': 284, 'incorrect': 976, 'section': 'gram3-comparative'},
{'correct': 67, 'incorrect': 863, 'section': 'gram4-superlative'},
{'correct': 41, 'incorrect': 951, 'section': 'gram5-present-participle'},
{'correct': 6, 'incorrect': 1089, 'section': 'gram6-nationality-adjective'},
{'correct': 171, 'incorrect': 1389, 'section': 'gram7-past-tense'},
{'correct': 56, 'incorrect': 936, 'section': 'gram8-plural'},
{'correct': 52, 'incorrect': 705, 'section': 'gram9-plural-verbs'},
{'correct': 835, 'incorrect': 9973, 'section': 'total'}]
</code></pre>
<p>I also tried running the demo-word-accuracy.sh script outlined here with a window size of 2 and got poor accuracy as well:</p>
<pre><code>Sample output:
capital-common-countries:
ACCURACY TOP1: 19.37 % (98 / 506)
Total accuracy: 19.37 % Semantic accuracy: 19.37 % Syntactic accuracy: -nan %
capital-world:
ACCURACY TOP1: 10.26 % (149 / 1452)
Total accuracy: 12.61 % Semantic accuracy: 12.61 % Syntactic accuracy: -nan %
currency:
ACCURACY TOP1: 6.34 % (17 / 268)
Total accuracy: 11.86 % Semantic accuracy: 11.86 % Syntactic accuracy: -nan %
city-in-state:
ACCURACY TOP1: 11.78 % (185 / 1571)
Total accuracy: 11.83 % Semantic accuracy: 11.83 % Syntactic accuracy: -nan %
family:
ACCURACY TOP1: 57.19 % (175 / 306)
Total accuracy: 15.21 % Semantic accuracy: 15.21 % Syntactic accuracy: -nan %
gram1-adjective-to-adverb:
ACCURACY TOP1: 6.48 % (49 / 756)
Total accuracy: 13.85 % Semantic accuracy: 15.21 % Syntactic accuracy: 6.48 %
gram2-opposite:
ACCURACY TOP1: 17.97 % (55 / 306)
Total accuracy: 14.09 % Semantic accuracy: 15.21 % Syntactic accuracy: 9.79 %
gram3-comparative:
ACCURACY TOP1: 34.68 % (437 / 1260)
Total accuracy: 18.13 % Semantic accuracy: 15.21 % Syntactic accuracy: 23.30 %
gram4-superlative:
ACCURACY TOP1: 14.82 % (75 / 506)
Total accuracy: 17.89 % Semantic accuracy: 15.21 % Syntactic accuracy: 21.78 %
gram5-present-participle:
ACCURACY TOP1: 19.96 % (198 / 992)
Total accuracy: 18.15 % Semantic accuracy: 15.21 % Syntactic accuracy: 21.31 %
gram6-nationality-adjective:
ACCURACY TOP1: 35.81 % (491 / 1371)
Total accuracy: 20.76 % Semantic accuracy: 15.21 % Syntactic accuracy: 25.14 %
gram7-past-tense:
ACCURACY TOP1: 19.67 % (262 / 1332)
Total accuracy: 20.62 % Semantic accuracy: 15.21 % Syntactic accuracy: 24.02 %
gram8-plural:
ACCURACY TOP1: 35.38 % (351 / 992)
Total accuracy: 21.88 % Semantic accuracy: 15.21 % Syntactic accuracy: 25.52 %
gram9-plural-verbs:
ACCURACY TOP1: 20.00 % (130 / 650)
Total accuracy: 21.78 % Semantic accuracy: 15.21 % Syntactic accuracy: 25.08 %
Questions seen / total: 12268 19544 62.77 %
</code></pre>
<p>However, the word2vec site claims it's possible to obtain an accuracy of ~60% on these tasks.
Hence I would like to gain some insight into the effect of hyperparameters like window size and how they affect the quality of the learnt models.</p> | 2014-03-08 17:07:51.013000+00:00 | 2021-09-21 09:48:21.867000+00:00 | null | gensim|word2vec | ['https://arxiv.org/abs/1310.4546', 'https://levyomer.files.wordpress.com/2014/04/dependency-based-word-embeddings-acl-2014.pdf'] | 2