a_id (int64) | a_body (string) | a_creation_date (string) | a_last_activity_date (string) | a_last_edit_date (string) | a_tags (float64) | q_id (int64) | q_body (string) | q_creation_date (string) | q_last_activity_date (string) | q_last_edit_date (string) | q_tags (string) | _arxiv_links (string) | _n_arxiv_links (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
70,994,608 | <p>As I understand it, the mapping network is not trained separately. It is part of the generator network, and its weights are updated from the same gradients as the rest of the network.</p>
<p>In their <a href="https://github.com/NVlabs/stylegan/blob/1e0d5c781384ef12b50ef20a62fee5d78b38e88f/training/networks_stylegan.py#L300" rel="nofollow noreferrer">StyleGAN generator code implementation</a> it is written that the Generator is composed of two sub-networks, one for mapping and another for synthesis. In the <a href="https://github.com/NVlabs/stylegan3/blob/a5a69f58294509598714d1e88c9646c3d7c6ec94/training/networks_stylegan3.py#L511" rel="nofollow noreferrer">StyleGAN3 generator source</a> this is even easier to see: the output of the mapping network is passed to the synthesis network, which generates the image.</p>
<pre><code>class Generator(torch.nn.Module):
    ...
    def forward(self, z, ...):
        ws = self.mapping(z, ...)
        img = self.synthesis(ws, ...)
        return img
</code></pre>
<p>The diagram below shows the mapping network from the <a href="https://arxiv.org/abs/1812.04948" rel="nofollow noreferrer">StyleGAN 2019 paper</a>; Section 2 of the paper describes the mapping network.</p>
<h2>Generator Diagram with Mapping Layer</h2>
<p><a href="https://i.stack.imgur.com/6lCrE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6lCrE.png" alt="enter image description here" /></a></p>
<p>The mapping network is denoted <code>f</code> in the paper; it takes a noise vector <code>z</code> sampled from a normal distribution and maps it to an intermediate latent representation <code>w</code>. It is implemented as an 8-layer MLP; the <a href="https://github.com/NVlabs/stylegan/blob/1e0d5c781384ef12b50ef20a62fee5d78b38e88f/training/networks_stylegan.py#L384" rel="nofollow noreferrer">StyleGAN mapping network implementation</a> has the number of MLP layers set to 8.</p>
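<p>For intuition, here is a minimal stand-in for such a mapping network (a sketch in plain PyTorch, not the NVlabs code; the 512 dimensions and 8 layers follow the paper). Because it is an ordinary module called inside the generator's forward pass, its weights receive gradients from the same GAN loss as the synthesis network - there is no separate training step:</p>
<pre><code>import torch

class MappingNetwork(torch.nn.Module):
    def __init__(self, z_dim=512, w_dim=512, num_layers=8):
        super().__init__()
        layers = []
        in_dim = z_dim
        for _ in range(num_layers):
            layers += [torch.nn.Linear(in_dim, w_dim), torch.nn.LeakyReLU(0.2)]
            in_dim = w_dim
        self.net = torch.nn.Sequential(*layers)

    def forward(self, z):
        w = self.net(z)   # intermediate latent code
        return w
</code></pre>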
<p>In section 4 they mention,</p>
<blockquote>
<p>a common goal is a latent space that consists of linear subspaces, each of which controls one factor of variation. However, the sampling probability of each combination of factors in <code>Z</code> needs to match the corresponding density in the training data.</p>
</blockquote>
<blockquote>
<p>A major benefit of our generator architecture is that the intermediate latent space <code>W</code> does not have to support sampling according to any fixed distribution.</p>
</blockquote>
<p>So <code>z</code> and <code>w</code> have the same dimensionality, but <code>w</code> is more disentangled than <code>z</code>. Finding a <code>w</code> in the intermediate latent space <code>W</code> for a given image enables targeted image editing.</p>
<p>From <a href="https://arxiv.org/pdf/2102.02766.pdf" rel="nofollow noreferrer">Encoder for Editing</a> paper,</p>
<p><a href="https://i.stack.imgur.com/mn74n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mn74n.png" alt="enter image description here" /></a></p>
<p>In the StyleGAN2-ADA paper, among other changes, they found a mapping-network depth of 2 to work better than 8. In the <a href="https://github.com/NVlabs/stylegan3/blob/a5a69f58294509598714d1e88c9646c3d7c6ec94/training/networks_stylegan3.py#L115" rel="nofollow noreferrer">StyleGAN3 mapping layer code implementation</a> the default number of layers in the mapping network is set to 2.</p>
<h2>References</h2>
<ul>
<li><a href="https://arxiv.org/abs/1812.04948" rel="nofollow noreferrer">https://arxiv.org/abs/1812.04948</a></li>
<li><a href="https://arxiv.org/abs/2102.02766" rel="nofollow noreferrer">https://arxiv.org/abs/2102.02766</a></li>
<li><a href="https://github.com/NVlabs/stylegan3" rel="nofollow noreferrer">https://github.com/NVlabs/stylegan3</a></li>
<li><a href="https://github.com/NVlabs/stylegan" rel="nofollow noreferrer">https://github.com/NVlabs/stylegan</a></li>
</ul> | 2022-02-05 01:29:20.580000+00:00 | 2022-02-05 01:29:20.580000+00:00 | null | null | 70,869,211 | <p>I am learning StyleGAN architecture and I got confused about the purpose of the Mapping Network. In the original paper it says:</p>
<blockquote>
<p>Our mapping network consists of 8 fully-connected layers, and the
dimensionality of all input and output activations— including z and w
— is 512.</p>
</blockquote>
<p>And there is no information about this network being trained in any way.</p>
<p>Like, wouldn’t it just generate some nonsense values?</p>
<p>I've tried creating a network like that (but with a smaller shape <code>(16,)</code>):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import numpy as np
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(16)))
for i in range(7):
    model.add(tf.keras.layers.Dense(16, activation='relu'))
model.compile()
</code></pre>
<p>and then evaluated it on some random values:</p>
<pre class="lang-py prettyprint-override"><code>g = tf.random.Generator.from_seed(34)
model(
g.normal(shape=(16, 16))
)
</code></pre>
<p>And I am getting some random outputs like:</p>
<pre><code>array([[0. , 0.01045225, 0. , 0. , 0.02217731,
0.00940356, 0.02321716, 0.00556996, 0. , 0. ,
0. , 0.03117323, 0. , 0. , 0.00734158,
0. ],
[0.03159791, 0.05680077, 0. , 0. , 0. ,
0. , 0.05907414, 0. , 0. , 0. ,
0. , 0. , 0.03110216, 0.04647615, 0. ,
0.04566741],
.
. # More similar vectors goes there
.
[0. , 0.01229661, 0.00056016, 0. , 0.03534952,
0.02654905, 0.03212402, 0. , 0. , 0. ,
0. , 0.0913604 , 0. , 0. , 0. ,
0. ]], dtype=float32)>
</code></pre>
<p>What am I missing? Is there any information on the Internet about training Mapping Network? Any math explanation? Got really confused :(</p> | 2022-01-26 19:33:29.457000+00:00 | 2022-02-05 01:29:20.580000+00:00 | 2022-01-27 06:02:17.463000+00:00 | python|tensorflow|deep-learning|generative-adversarial-network|stylegan | ['https://github.com/NVlabs/stylegan/blob/1e0d5c781384ef12b50ef20a62fee5d78b38e88f/training/networks_stylegan.py#L300', 'https://github.com/NVlabs/stylegan3/blob/a5a69f58294509598714d1e88c9646c3d7c6ec94/training/networks_stylegan3.py#L511', 'https://arxiv.org/abs/1812.04948', 'https://i.stack.imgur.com/6lCrE.png', 'https://github.com/NVlabs/stylegan/blob/1e0d5c781384ef12b50ef20a62fee5d78b38e88f/training/networks_stylegan.py#L384', 'https://arxiv.org/pdf/2102.02766.pdf', 'https://i.stack.imgur.com/mn74n.png', 'https://github.com/NVlabs/stylegan3/blob/a5a69f58294509598714d1e88c9646c3d7c6ec94/training/networks_stylegan3.py#L115', 'https://arxiv.org/abs/1812.04948', 'https://arxiv.org/abs/2102.02766', 'https://github.com/NVlabs/stylegan3', 'https://github.com/NVlabs/stylegan'] | 12 |
66,801,833 | <p>The masker class provides background data to "train" your explainer against. I.e., in:</p>
<pre><code>explainer = shap.LinearExplainer(model, masker = masker)
</code></pre>
<p>you're using background data determined by masker (you may see what data is used by accessing <code>masker.data</code> attribute). You may read more about "true to model" or "true to data" explanations <a href="https://arxiv.org/pdf/2006.16234.pdf" rel="nofollow noreferrer">here</a> or <a href="https://github.com/slundberg/shap/issues/1098" rel="nofollow noreferrer">here</a>.</p>
<p>Given the above, calculation-wise you may do either:</p>
<pre><code>masker = shap.maskers.Independent(data = X_train)
</code></pre>
<hr />
<p>or</p>
<hr />
<pre><code>masker = shap.maskers.Independent(data = X_test)
explainer = shap.LinearExplainer(model, masker = masker)
</code></pre>
<p>but conceptually, imo the following makes more sense:</p>
<pre><code>masker = shap.maskers.Independent(data = X_train)
explainer = shap.LinearExplainer(model, masker = masker)
</code></pre>
<p>This is akin to the usual <code>train/test</code> paradigm, where you train your model (and explainer) on train data, and try to predict (and explain) your test data.</p>
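<p>Putting it together, a minimal end-to-end sketch (with a synthetic dataset, just to make the example self-contained) would look like this:</p>
<pre><code>import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# toy data, only for illustration
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

model = LogisticRegression(random_state=1).fit(X_train, y_train)

# background ("training" data for the explainer) comes from the train split
masker = shap.maskers.Independent(data=X_train)
explainer = shap.LinearExplainer(model, masker=masker)

# ... and we explain the test split
shap_val = explainer(X_test)   # one row of shap values per test sample
</code></pre>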
<hr />
<p>Unrelated to the question: an alternative to a masker, which samples data for you, would be to explicitly provide a background that allows comparing 2 datapoints - a point against which to compare, and the point of interest - like in <a href="https://github.com/slundberg/shap/blob/master/notebooks/tabular_examples/tree_based_models/Explaining%20a%20simple%20OR%20function.ipynb" rel="nofollow noreferrer">this</a> notebook. In such a manner one may find out why 2 seemingly similar datapoints were classified differently.</p> | 2021-03-25 14:41:02.363000+00:00 | 2022-04-02 16:11:47.990000+00:00 | 2022-04-02 16:11:47.990000+00:00 | null | 66,560,839 | <p>I have been trying to work with the <code>shap</code> package. I want to determine the shap values from my logistic regression model. Contrary to the <code>TreeExplainer</code>, the <code>LinearExplainer</code> requires a so-called masker. What exactly does this masker do, and what is the difference between the independent and partition maskers?</p>
<p>Also, I am interested in the important features from the test-set. Do I then fit the masker on the training set or the test set? Below you can see a snippet of code.</p>
<pre><code>model = LogisticRegression(random_state = 1)
model.fit(X_train, y_train)
masker = shap.maskers.Independent(data = X_train)
**or**
masker = shap.maskers.Independent(data = X_test)
explainer = shap.LinearExplainer(model, masker = masker)
shap_val = explainer(X_test)
</code></pre> | 2021-03-10 08:20:08.220000+00:00 | 2022-05-25 05:40:45.583000+00:00 | 2022-05-25 05:40:45.583000+00:00 | python|machine-learning|logistic-regression|shap | ['https://arxiv.org/pdf/2006.16234.pdf', 'https://github.com/slundberg/shap/issues/1098', 'https://github.com/slundberg/shap/blob/master/notebooks/tabular_examples/tree_based_models/Explaining%20a%20simple%20OR%20function.ipynb'] | 3 |
63,636,446 | <p><strong>In short:</strong> NNs are rarely the best models for classifying either small amounts of data or data that is already compactly represented by a few non-heterogeneous columns. Often enough, boosted methods or a GLM would produce better results from a similar amount of effort.</p>
<p><strong>What can you do with your model?</strong> Counterintuitively, sometimes restricting the network capacity can be beneficial, especially when the number of network parameters exceeds the number of training points. One can reduce the number of neurons (in your case, setting layer sizes to 16 or so and simultaneously removing layers), introduce regularization (label smoothing, weight decay, etc.), or generate more data by adding more derived columns in different (log, binary) scales.</p>
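<p>For example (a rough sketch only - the layer sizes and regularization strengths are guesses to be tuned, reusing <code>X_train</code> from your code):</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers, regularizers

# deliberately smaller network with dropout and weight decay (L2)
model = keras.Sequential([
    layers.Dense(16, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4),
                 input_shape=(X_train.shape[1],)),
    layers.Dropout(0.2),
    layers.Dense(16, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
</code></pre>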
<p>Another approach would be to search for NNs models designed for your type of data. As, for example, <a href="https://arxiv.org/abs/1706.02515" rel="nofollow noreferrer">Self-Normalizing Neural Networks</a> or <a href="https://arxiv.org/abs/1606.07792" rel="nofollow noreferrer">Wide & Deep Learning for Recommender Systems</a>.</p>
<p>If you get to try only 1 thing, I would recommend doing a grid search of the learning rate or trying a few different optimizers.</p>
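<p>A crude learning-rate sweep could look like this (assuming a <code>build_model()</code> helper of your own that returns a fresh, uncompiled copy of the network):</p>
<pre><code>from tensorflow import keras

for lr in [1e-2, 1e-3, 1e-4]:
    m = build_model()   # hypothetical helper, returns a fresh uncompiled model
    m.compile(loss='binary_crossentropy',
              optimizer=keras.optimizers.Adam(learning_rate=lr),
              metrics=['acc'])
    hist = m.fit(X_train, y_train, batch_size=128, epochs=10,
                 validation_split=0.2, verbose=0)
    print(lr, max(hist.history['val_acc']))
</code></pre>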
<p><strong>How to make a better decision about which model to use?</strong> Look through finished kaggle.com competitions and find datasets similar to the one at hand, then check out the techniques used by the top places.</p> | 2020-08-28 15:07:58.397000+00:00 | 2020-08-28 15:07:58.397000+00:00 | null | null | 63,564,017 | <p>Here is the code I tried:</p>
<pre><code># normalizing the train data
cols_to_norm = ["WORK_EDUCATION", "SHOP", "OTHER",'AM','PM','MIDDAY','NIGHT', 'AVG_VEH_CNT', 'work_traveltime', 'shop_traveltime','work_tripmile','shop_tripmile', 'TRPMILES_sum',
'TRVL_MIN_sum', 'TRPMILES_mean', 'HBO', 'HBSHOP', 'HBW', 'NHB', 'DWELTIME_mean','TRVL_MIN_mean', 'work_dweltime', 'shop_dweltime', 'firsttrip_time', 'lasttrip_time']
dataframe[cols_to_norm] = dataframe[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max()-x.min()))
# labels
y = dataframe.R_SEX.values
</code></pre>
<hr />
<pre><code># splitting train and test set
X_train, X_test, y_train, y_test =train_test_split(X, y, test_size=0.33, random_state=42)
model = Sequential()
model.add(Dense(256, input_shape=(X_train.shape[1],), activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(layers.Dropout(0.3))
model.add(Dense(256, activation='relu'))
model.add(layers.Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam' , metrics=['acc'])
print(model.summary())
model.fit(X_train, y_train , batch_size=128, epochs=30, validation_split=0.2)
</code></pre>
<hr />
<pre><code>Epoch 23/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6623 - acc: 0.5985 - val_loss: 0.6677 - val_acc: 0.5918
Epoch 24/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6618 - acc: 0.5993 - val_loss: 0.6671 - val_acc: 0.5925
Epoch 25/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6618 - acc: 0.5997 - val_loss: 0.6674 - val_acc: 0.5904
Epoch 26/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6614 - acc: 0.6001 - val_loss: 0.6669 - val_acc: 0.5911
Epoch 27/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6608 - acc: 0.6004 - val_loss: 0.6668 - val_acc: 0.5920
Epoch 28/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6605 - acc: 0.6002 - val_loss: 0.6679 - val_acc: 0.5895
Epoch 29/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6602 - acc: 0.6009 - val_loss: 0.6663 - val_acc: 0.5932
Epoch 30/30
1014/1014 [==============================] - 4s 4ms/step - loss: 0.6597 - acc: 0.6027 - val_loss: 0.6674 - val_acc: 0.5910
<tensorflow.python.keras.callbacks.History at 0x7fdd8143a278>
</code></pre>
<p>I have tried modifying the neural network and double-checking the data.</p>
<p>Is there anything I can do to improve the outcome? Is the model not deep enough? Are there any alternative models suited to my data? Does this mean these features have no predictive value? I'm kind of confused about what to do next.</p>
<p>thank you</p>
<p><strong>Update:</strong></p>
<p>I tried adding new column do my dataframe which is the outcome of a KNN model for sex classification. Here is what I did:</p>
<pre><code>#Import knearest neighbors Classifier model
from sklearn.neighbors import KNeighborsClassifier
#Create KNN Classifier
knn = KNeighborsClassifier(n_neighbors=41)
#Train the model using the training sets
knn.fit(X, y)
#predict sex for the train set so that it can be fed to the nueral net
y_pred = knn.predict(X)
#add the outcome of knn to the train set
X = X.assign(KNN_result=y_pred)
</code></pre>
<p>It improved the training and validation accuracy up to 61 percent.</p>
<pre><code>Epoch 26/30
1294/1294 [==============================] - 8s 6ms/step - loss: 0.6525 - acc: 0.6166 - val_loss: 0.6604 - val_acc: 0.6095
Epoch 27/30
1294/1294 [==============================] - 8s 6ms/step - loss: 0.6523 - acc: 0.6173 - val_loss: 0.6596 - val_acc: 0.6111
Epoch 28/30
1294/1294 [==============================] - 8s 6ms/step - loss: 0.6519 - acc: 0.6177 - val_loss: 0.6614 - val_acc: 0.6101
Epoch 29/30
1294/1294 [==============================] - 8s 6ms/step - loss: 0.6512 - acc: 0.6178 - val_loss: 0.6594 - val_acc: 0.6131
Epoch 30/30
1294/1294 [==============================] - 8s 6ms/step - loss: 0.6510 - acc: 0.6183 - val_loss: 0.6603 - val_acc: 0.6103
<tensorflow.python.keras.callbacks.History at 0x7fe981bbe438>
</code></pre>
<p>Thank you</p> | 2020-08-24 15:26:18.067000+00:00 | 2020-09-15 14:21:26.843000+00:00 | 2020-09-15 14:21:26.843000+00:00 | python|tensorflow|keras | ['https://arxiv.org/abs/1706.02515', 'https://arxiv.org/abs/1606.07792'] | 2 |
63,315,104 | <blockquote>
<p>Why do we need monadic types?</p>
</blockquote>
<p>Since it was the quandary of I/O and its observable effects in nonstrict languages like Haskell that brought the monadic interface to such prominence:</p>
<ul>
<li>
<blockquote>
<p>[...] monads are used to address the more general problem of computations (involving state, input/output, backtracking, ...) returning values: they do not solve any input/output-problems directly but rather provide an elegant and flexible abstraction of many solutions to related problems. [...] For instance, no less than three different input/output-schemes are used to solve these basic problems in <a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.9725&rep=rep1&type=pdf" rel="nofollow noreferrer">Imperative functional programming</a>, the paper which originally proposed <em>`a new model, based on monads, for performing input/output in a non-strict, <a href="http://conal.net/blog/posts/the-c-language-is-purely-functional" rel="nofollow noreferrer">purely functional</a> language'.</em> [...]</p>
<p>[Such] input/output-schemes merely provide frameworks in which side-effecting operations can safely be used with a guaranteed order of execution and without affecting the properties of the purely functional parts of the language.</p>
<p><a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.95.9846&rep=rep1&type=pdf" rel="nofollow noreferrer">Claus Reinke</a> (pages 96-97 of 210).</p>
</blockquote>
<p><sup>(emphasis by me.)</sup></p>
</li>
<li>
<blockquote>
<p>[...] When we write effectful code – monads or no monads – we have to constantly keep in mind the context of expressions we pass around.</p>
<p>The fact that monadic code ‘desugars’ (is implementable in terms of) side-effect-free code is irrelevant. When we use monadic notation, we program within that notation – without considering what this notation desugars into. Thinking of the desugared code breaks the monadic abstraction. A side-effect-free, applicative code is normally compiled to (that is, desugars into) C or machine code. If the desugaring argument has any force, it may be applied just as well to the applicative code, leading to the conclusion that it all boils down to the machine code and hence all programming is imperative.</p>
<p>[...] From the personal experience, I have noticed that the mistakes I make when writing monadic code are exactly the mistakes I made when programming in C. Actually, monadic mistakes tend to be worse, because monadic notation (compared to that of a typical imperative language) is ungainly and obscuring.</p>
<p><a href="https://arxiv.org/pdf/1905.06544" rel="nofollow noreferrer">Oleg Kiselyov</a> (page 21 of 26).</p>
</blockquote>
</li>
<li>
<blockquote>
<p>The most difficult construct for students to understand is the monad. I introduce <code>IO</code> without mentioning monads.</p>
<p><a href="https://wiki.haskell.org/index.php?title=Haskell_in_education&oldid=63321" rel="nofollow noreferrer">Olaf Chitil.</a></p>
</blockquote>
</li>
</ul>
<p>More generally:</p>
<ul>
<li>
<blockquote>
<p>Still, today, over 25 years after the introduction of the concept of monads to the world of functional programming, beginning functional programmers struggle to grasp the concept of monads. This struggle is exemplified by the numerous blog posts about the effort of trying to learn about monads. From our own experience we notice that even at university level, bachelor level students often struggle to comprehend monads and consistently score poorly on monad-related exam questions.</p>
<p>Considering that the concept of monads is not likely to disappear from the functional programming landscape any time soon, it is vital that we, as the functional programming community, somehow overcome the problems novices encounter when first studying monads.</p>
<p><a href="https://pms.cs.ru.nl/iris-diglib/src/getContent.php?id=2017-Steenvoorden-SupportLearning" rel="nofollow noreferrer">Tim Steenvoorden, Jurriën Stutterheim, Erik Barendsen and Rinus Plasmeijer.</a></p>
</blockquote>
</li>
</ul>
<p>If only there was another way to specify "<em>a guaranteed order of execution</em>" in Haskell, while keeping the ability to separate regular Haskell definitions from those involved in I/O (and its observable effects) - translating this variation of <a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.3579&rep=rep1&type=pdf" rel="nofollow noreferrer">Philip Wadler</a>'s <code>echo</code>:</p>
<pre class="lang-ml prettyprint-override"><code>fun putcML c = TextIO.output1(TextIO.stdOut,c);
fun getcML () = valOf(TextIO.input1(TextIO.stdIn));

(* echoML : unit -> unit *)
fun echoML () = let val c = getcML () in
                  if c = #"\n" then
                    ()
                  else
                    let val _ = putcML c in
                      echoML ()
                    end
                end
</code></pre>
<p>...could then be as simple as:</p>
<pre class="lang-hs prettyprint-override"><code>echo :: OI -> ()
echo u = let !(u1:u2:u3:_) = partsOI u in
         let !c = getChar u1 in
         if c == '\n' then
           ()
         else
           let !_ = putChar c u2 in
           echo u3
</code></pre>
<p>where:</p>
<pre class="lang-hs prettyprint-override"><code>data OI -- abstract
foreign import ccall "primPartOI" partOI :: OI -> (OI, OI)
⋮
foreign import ccall "primGetCharOI" getChar :: OI -> Char
foreign import ccall "primPutCharOI" putChar :: Char -> OI -> ()
⋮
</code></pre>
<p>and:</p>
<pre class="lang-hs prettyprint-override"><code>partsOI :: OI -> [OI]
partsOI u = let !(u1, u2) = partOI u in u1 : partsOI u2
</code></pre>
<p>How would this work? At run-time, <code>Main.main</code> receives an initial <code>OI</code> <a href="https://academic.oup.com/comjnl/article-pdf/31/3/243/1157325/310243.pdf" rel="nofollow noreferrer"><em>pseudo-data</em></a> value as an argument:</p>
<pre class="lang-hs prettyprint-override"><code>module Main(main) where
main :: OI -> ()
⋮
</code></pre>
<p>...from which other <code>OI</code> values are produced, using <code>partOI</code> or <code>partsOI</code>. All you have to do is ensure each new <code>OI</code> value is used at most <em>once</em>, in each call to an <code>OI</code>-based definition, foreign or otherwise. In return, you get back a plain ordinary result - it isn't e.g. paired with some odd abstract state, or requires the use of a <s>callback</s> continuation, etc.</p>
<p>Using <code>OI</code>, instead of the unit type <code>()</code> like Standard ML does, means we can avoid <em>always</em> having to use the monadic interface:</p>
<blockquote>
<p>Once you're in the <code>IO</code> monad, you're stuck there forever, and are reduced to Algol-style imperative programming.</p>
<p><a href="https://existentialtype.wordpress.com/2011/05/01/of-course-ml-has-monads" rel="nofollow noreferrer">Robert Harper</a>.</p>
</blockquote>
<p>But if you really <code>do</code> need it:</p>
<pre class="lang-hs prettyprint-override"><code>type IO a = OI -> a
unitIO :: a -> IO a
unitIO x = \ u -> let !_ = partOI u in x
bindIO :: IO a -> (a -> IO b) -> IO b
bindIO m k = \ u -> let !(u1, u2) = partOI u in
                    let !x = m u1 in
                    let !y = k x u2 in
                    y
⋮
</code></pre>
<p>So, monadic types aren't <em>always</em> needed - there are other interfaces out there:</p>
<blockquote>
<p>LML had a fully fledged implementation of oracles running of a multi-processor (a Sequent Symmetry) back in ca 1989. The description in <a href="https://cth.altocumulus.org/%7Ehallgren/Thesis/toc.html" rel="nofollow noreferrer">the Fudgets thesis</a> refers to this implementation. It was fairly pleasant to work with and quite practical.</p>
<p>[...]</p>
<p>These days everything is done with monads so other solutions are sometimes forgotten.</p>
<p><a href="http://lambda-the-ultimate.org/node/1665#comment-20339" rel="nofollow noreferrer">Lennart Augustsson</a> (2006).</p>
</blockquote>
<hr />
<p>Wait a moment: since it so closely resembles Standard ML's direct use of effects, is this approach and its use of <em>pseudo-data</em> referentially transparent?</p>
<p>Absolutely - just find a suitable definition of "<em>referential transparency</em>"; <a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.469.6920&rep=rep1&type=pdf" rel="nofollow noreferrer">there's plenty to choose from...</a></p> | 2020-08-08 11:58:22.983000+00:00 | 2022-02-28 21:59:41.363000+00:00 | 2022-02-28 21:59:41.363000+00:00 | null | 28,139,259 | <p>In my humble opinion the answers to the famous question <a href="https://stackoverflow.com/questions/44965/what-is-a-monad">"What is a monad?"</a>, especially the most voted ones, try to explain what is a monad without clearly explaining <em>why monads are really necessary</em>. Can they be explained as the solution to a problem?</p> | 2015-01-25 17:27:34.197000+00:00 | 2022-02-28 21:59:41.363000+00:00 | 2017-05-23 12:10:36.807000+00:00 | haskell|monads | ['https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.9725&rep=rep1&type=pdf', 'http://conal.net/blog/posts/the-c-language-is-purely-functional', 'https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.95.9846&rep=rep1&type=pdf', 'https://arxiv.org/pdf/1905.06544', 'https://wiki.haskell.org/index.php?title=Haskell_in_education&oldid=63321', 'https://pms.cs.ru.nl/iris-diglib/src/getContent.php?id=2017-Steenvoorden-SupportLearning', 'https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.3579&rep=rep1&type=pdf', 'https://academic.oup.com/comjnl/article-pdf/31/3/243/1157325/310243.pdf', 'https://existentialtype.wordpress.com/2011/05/01/of-course-ml-has-monads', 'https://cth.altocumulus.org/%7Ehallgren/Thesis/toc.html', 'http://lambda-the-ultimate.org/node/1665#comment-20339', 'https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.469.6920&rep=rep1&type=pdf'] | 12 |
54,957,035 | <p>I'm unfamiliar with any research or folk wisdom about the desirable "weights of the vectors" (by which I assume you mean the individual dimensions). </p>
<p>In general, since the individual dimensions <em>aren't</em> strongly interpretable, I'm not sure you could say much about how any one dimension's values should be distributed. And remember, our intuitions from low-dimensional spaces (2d, 3d, 4d) often don't hold up in high-dimensional spaces. </p>
<p>I've seen two interesting, possibly relevant observations in research:</p>
<ul>
<li><p>some have observed that the raw trained vectors for words with singular meanings tend to have a larger magnitude, and those with many meanings have smaller magnitudes. A plausible explanation for this would be that word-vectors for polysemous word-tokens are being pulled in different directions for the multiple contrasting meanings, and thus wind up "somewhere in the middle" (closer to the origin, and thus of lower magnitude). Note, though, that most word-vector-to-word-vector comparisons <em>ignore</em> the magnitudes, by using cosine-similarity to only compare angles (or largely equivalently, by normalizing all vectors to unit length before comparisons). </p></li>
<li><p>A paper "All-but-the-Top: Simple and Effective Postprocessing for Word Representations" by Mu, Bhat, & Viswanath <a href="https://arxiv.org/abs/1702.01417v2" rel="nofollow noreferrer">https://arxiv.org/abs/1702.01417v2</a> has noted that the average of all word-vectors that were trained together tends to be biased in a certain direction from the origin, but that removing that bias (and other commonalities in the vectors) can result in improved vectors for many tasks (a small sketch of the mean-removal step follows below this list). In my own personal experiments, I've observed that the magnitude of that bias-from-origin seems correlated with the number of <code>negative</code> samples chosen - and that choosing the extreme (and uncommon) value of just 1 negative sample makes such a bias negligible (but might not be best for overall quality or efficiency/speed of training). </p></li>
</ul>
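<p>A quick sketch of that de-biasing step (just the mean-removal part of the paper's recipe, in plain numpy; <code>vecs</code> is assumed to be a <code>(vocab_size, dims)</code> array such as gensim's <code>model.wv.vectors</code>):</p>
<pre><code>import numpy as np

def remove_common_mean(vecs):
    mean = vecs.mean(axis=0)        # the shared bias-from-origin direction
    print(np.linalg.norm(mean))     # how large that common component is
    return vecs - mean              # vectors with the common mean removed
</code></pre>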
<p>So there <em>may</em> be useful heuristics about vector quality from looking at the relative distributions of vectors, but I'm not sure any would be sensitive to individual dimensions (except insofar as those happen to be the projections of vectors onto a certain axis). </p> | 2019-03-02 09:25:05.667000+00:00 | 2019-03-02 09:25:05.667000+00:00 | null | null | 54,951,312 | <p>I am training my own embedding vectors as I'm focused on an academic dataset (WOS); whether the vectors are generated via word2vec or fasttext doesn't particularly matter. Say my vectors are 150 dimensions each. I'm wondering what the desired distribution of weights within a vector ought to be, if you averaged across an entire corpus's vectors? </p>
<p>I did a few experiments while looking at the distributions of a sample of my vectors and came to these conclusions (uncertain as to how absolutely they hold): </p>
<p>If one trains their model with too few epochs then the vectors don't change significantly from their initial values (easy to see if you start your vectors with weight 0 in every category). Thus if my weight distribution is centered around some point (typically 0) then I've under-trained my corpus. </p>
<p>If one trains their model with too few documents/over-trains then the vectors show significant correlation with each other (I typically visualize a random set of vectors and you can see stripes where all the vectors have weights that are either positive or negative). </p>
<p>What I imagine is a single "good" vector has various weights across the entire range of -1 to 1. For any single vector it may have significantly more dimensions near -1 or 1. However, the weight distribution of an entire corpus would balance out vectors that randomly have more values towards one end of the spectrum or another, so that the weight distribution of the entire corpus is approximately evenly distributed across the entire corpus. Is this intuition correct? </p> | 2019-03-01 19:43:21.467000+00:00 | 2019-03-02 09:25:05.667000+00:00 | null | nlp|word2vec|fasttext | ['https://arxiv.org/abs/1702.01417v2'] | 1 |
65,889,433 | <p>It turns out nl4py 0.5.0 works much better and follows the documentation: <a href="https://arxiv.org/pdf/1808.03292.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1808.03292.pdf</a>.</p>
<p>I was able to install nl4py 0.5.0 with the following command:</p>
<pre><code>pip install nl4py==0.5.0
</code></pre>
<p>Hopefully whoever updated nl4py 0.7.0 will update the documentation.</p> | 2021-01-25 17:19:32.853000+00:00 | 2021-01-25 17:19:32.853000+00:00 | null | null | 65,888,938 | <p>I am using NL4Py and a few functions have deprecation warnings. I am able to start NetLogo, open a model, and run a model through the NL4Py python package. However, when I try to set parameter values I get a DeprecationWarning, and the function suggested is the same as the one already used, setParamsRandom(). Also, setParamsRandom() does not update parameter values in the NetLogo model. I would like to be able to set parameter values in NetLogo. I am not sure if there is an updated function or something wrong with my compatibility:</p>
<p>python 3.8,
ipython 7.10.0,
nl4py 0.7.0, jdk 15.0.2, Netlogo 6.0, MacOS Catalina 10.15.7</p>
<pre><code>#start of code for nl4py
import nl4py
nl4py.initialize("/Applications/NetLogo 6.0/")
n = nl4py.netlogo_app()
n.open_model('/Users/tracykuper/Desktop/NetLogo 6.0/models/Sample Models/Earth Science/Fire.nlogo')
n.setParamsRandom()
<ipython-input-10-50e53e085382>:1: DeprecationWarning: Call to deprecated function setParamsRandom (Alias left for backward compatibility. Use setParamsRandom() since version 1.0.0.).
n.setParamsRandom()
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-10-50e53e085382> in <module>
----> 1 n.setParamsRandom()
2 #n.getParamList()
~/opt/anaconda3/envs/netpy/lib/python3.8/site-packages/nl4py/NL4PyException.py in new_func1(*args, **kwargs)
57 )
58 warnings.simplefilter('default', DeprecationWarning)
---> 59 return func1(*args, **kwargs)
60
61 return new_func1
~/opt/anaconda3/envs/netpy/lib/python3.8/site-packages/nl4py/NetLogoGUI.py in setParamsRandom(self)
185 @deprecated('Alias left for backward compatibility. Use setParamsRandom() since version 1.0.0.')
186 def setParamsRandom(self):
--> 187 self.set_params_random()
188
189 @deprecated('Alias left for backward compatibility. Use getParamNames() since version 1.0.0.')
~/opt/anaconda3/envs/netpy/lib/python3.8/site-packages/nl4py/NetLogoGUI.py in set_params_random(self)
97
98 '''
---> 99 paramSpecs = self.app.getParamList(self.path).getParamSpecs()
100
101 ##Using some bsearch code here thanks to Forrest Stonedahl
~/opt/anaconda3/envs/netpy/lib/python3.8/site-packages/py4j/java_gateway.py in __call__(self, *args)
1302
1303 answer = self.gateway_client.send_command(command)
-> 1304 return_value = get_return_value(
1305 answer, self.gateway_client, self.target_id, self.name)
1306
~/opt/anaconda3/envs/netpy/lib/python3.8/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o0.getParamList.
: java.lang.NoSuchMethodError: 'org.nlogo.api.ValueConstraint org.nlogo.agent.Observer.constraint(int)'
at bsearch.nlogolink.Utils.getDefaultConstraintsText(Utils.java:68)
at nl4py.server.NetLogoAppController.getParamList(NetLogoAppController.java:169)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:832)
</code></pre> | 2021-01-25 16:48:07.493000+00:00 | 2021-01-25 17:19:32.853000+00:00 | null | python|ipython|warnings|netlogo|deprecated | ['https://arxiv.org/pdf/1808.03292.pdf'] | 1 |
58,688,696 | <p>How does the model output represent the bus routes? Maybe you could try a reinforcement learning approach. Take a look at deep Q-learning: it basically takes an input vector (the state of the system) and outputs an action (usually represented by an index in your output layer), then it computes the reward of that action and uses it to train the model (without the need for target values).</p>
<p>Here are some resources that might help you get started:</p>
<p><a href="https://towardsdatascience.com/double-deep-q-networks-905dd8325412" rel="nofollow noreferrer">https://towardsdatascience.com/double-deep-q-networks-905dd8325412</a></p>
<p><a href="https://arxiv.org/pdf/1802.09477.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1802.09477.pdf</a></p>
<p><a href="https://arxiv.org/pdf/1509.06461.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1509.06461.pdf</a></p>
<p>Hope this was useful.</p>
<p><strong>UPDATE</strong></p>
<p>There is a second option: you could define a custom loss function. Generally these functions take only two arguments, the predicted_y and the target_y. In your case there is no target_y, so you could pass a dummy target_y and not use it inside the function (I assume that you could call your simulation process inside that function and return the metric as the "loss", as long as that metric is differentiable with respect to the model output). Here are examples in PyTorch and Keras.</p>
<p>Keras: <a href="https://stackoverflow.com/questions/45961428/make-a-custom-loss-function-in-keras">Make a custom loss function in keras</a></p>
<p>PyTorch:<a href="https://stackoverflow.com/questions/53980031/pytorch-custom-loss-function">PyTorch custom loss function</a></p> | 2019-11-04 07:10:55.070000+00:00 | 2019-11-05 16:03:18.817000+00:00 | 2019-11-05 16:03:18.817000+00:00 | null | 58,688,255 | <p>I am trying to use an RNN model that outputs bus routes and its input is the demand matrix. The bus routes are then used in a simulation which spits out a metric of how the routes performed. The question is, since there is no target value of bus routes, how do I back propagate the simulation result?</p>
<p>To explain the question with simple python code:</p>
<pre class="lang-py prettyprint-override"><code>"""
The model is an RNN that takes 400,24,24 matrix as input
dimension 0 represents time, dimension 1 represents departure bus stop and dimension 2 represents the arrival bus stop. Each value is a count of the number of passengers who departed at a bus stop with an arrival bus stop in mind in a specific time
output is 64,24 matrix which will be reshaped to 8,8,24
dimension 0 is the sequence index, dimension 1 is the index of bus (there are 8 buses), dimension 2 is the softmaxed classifier dimension of 24 different bus stops. From the output, 8 bus stops are picked per bus with a sequence
These sequences are then used for path generations of buses and they are evaluated from a simulation
"""
model.train()
optimizer.zero_grad()
out = model(demand)#out is 64,24 demand is 400,24,24
demand, performance = simulation(out)#assume performance as float
#here the out has grad_fn but the performance does not
loss = SOME_NUMBER - performance
loss = torch.FloatTensor(loss)
#here I need to back propagate and it is the confusing part
#simply doing loss.backward() does nothing because no grad_fn
#out.backward() requires 64,24 gradients computed somehow from 1 #metric, causes complete divergence within few steps
optimizer.step()
</code></pre> | 2019-11-04 06:31:46.377000+00:00 | 2019-11-05 16:03:18.817000+00:00 | 2019-11-04 16:35:27.120000+00:00 | optimization|deep-learning|pytorch|recurrent-neural-network | ['https://towardsdatascience.com/double-deep-q-networks-905dd8325412', 'https://arxiv.org/pdf/1802.09477.pdf', 'https://arxiv.org/pdf/1509.06461.pdf', 'https://stackoverflow.com/questions/45961428/make-a-custom-loss-function-in-keras', 'https://stackoverflow.com/questions/53980031/pytorch-custom-loss-function'] | 5 |
68,886,605 | <p>The closest work I'm aware of is by Selsam et al., "Learning a SAT Solver from Single-Bit Supervision." See <a href="https://arxiv.org/abs/1802.03685" rel="nofollow noreferrer">https://arxiv.org/abs/1802.03685</a></p>
<p>You should also look into Selsam's PhD thesis on this topic, <a href="https://searchworks.stanford.edu/view/13250178" rel="nofollow noreferrer">https://searchworks.stanford.edu/view/13250178</a>, which has lots of other references.</p> | 2021-08-23 02:18:55.977000+00:00 | 2021-08-23 02:18:55.977000+00:00 | null | null | 68,885,877 | <p>I'm interested in applying machine learning algorithms to SAT solving procedures. Current trends in SAT solvers seem to favour CDCL procedures.
Specifically, is there any small example to illustrate the idea?</p> | 2021-08-22 23:29:18.977000+00:00 | 2021-08-27 07:25:41.933000+00:00 | 2021-08-27 07:25:41.933000+00:00 | machine-learning|sat | ['https://arxiv.org/abs/1802.03685', 'https://searchworks.stanford.edu/view/13250178'] | 2 |
45,634,945 | <p>In this case SSD uses <strong>MobileNet</strong> as its <strong>feature extractor</strong>, in order to increase speed. If you read the MobileNet paper, it is a lightweight convolutional neural net that specifically uses <strong>separable convolution</strong> in order to reduce parameters.</p>
<p>As I understand it, separable convolution can lose information because of the channel-wise convolution.</p>
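<p>To see why it is so much lighter, compare the parameter counts of a standard 3x3 convolution and its depthwise-separable counterpart on the same 32-to-64 channel mapping (a toy sketch with tf.keras, not the actual MobileNet layers):</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(56, 56, 32))
standard = keras.Model(inp, layers.Conv2D(64, 3, padding='same')(inp))
separable = keras.Model(inp, layers.SeparableConv2D(64, 3, padding='same')(inp))

print(standard.count_params())    # 3*3*32*64 + 64          = 18496
print(separable.count_params())   # 3*3*32 + 1*1*32*64 + 64 = 2400
</code></pre>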
<p>So when quantizing a graph with the TF implementation, it makes 16-bit ops and 8-bit weights. If you read the TF quantization tutorial, they clearly mention that this operation is more like adding some noise to an already-trained net, hoping the model has generalized well.</p>
<p>So this works really well, and is almost lossless in terms of accuracy, for a heavy model like Inception, ResNet, etc. But with the lightness and simplicity of SSD with MobileNet, it really can cause an accuracy loss.</p>
<p><a href="https://arxiv.org/abs/1704.04861" rel="nofollow noreferrer">MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications</a> </p>
<p><a href="https://www.tensorflow.org/performance/quantization" rel="nofollow noreferrer">How to Quantize Neural Networks with TensorFlow</a></p> | 2017-08-11 12:11:40.033000+00:00 | 2017-08-11 12:11:40.033000+00:00 | null | null | 44,832,492 | <p>I am working on the recently released "SSD-Mobilenet" model by google for object detection.
Model downloaded from following location: <a href="https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md</a></p>
<p>The frozen graph file downloaded from the site is working as expected, however after quantization the accuracy drops significantly (mostly random predictions).</p>
<p>I built tensorflow r1.2 from source, and used following method to quantize: </p>
<p>bazel-bin/tensorflow/tools/graph_transforms/transform_graph --in_graph=frozen_inference_graph.pb --out_graph=optimized_graph.pb --inputs='image_tensor' --outputs='detection_boxes','detection_scores','detection_classes','num_detections' --<strong>transforms</strong>='add_default_attributes strip_unused_nodes(type=float, shape="1,224,224,3") fold_constants(ignore_errors=true) fold_batch_norms fold_old_batch_norms quantize_weights strip_unused_nodes sort_by_execution_order' </p>
<p>I tried various combinations in the "<strong>transforms</strong>" part, and the transforms mentioned above sometimes gave correct predictions, however nowhere close to the original model.</p>
<p>Is there any other way to improve performance of the quantized model?</p> | 2017-06-29 18:28:00.723000+00:00 | 2018-04-22 11:58:07.213000+00:00 | null | tensorflow|deep-learning|object-detection | ['https://arxiv.org/abs/1704.04861', 'https://www.tensorflow.org/performance/quantization'] | 2 |
71,030,550 | <p>What you want is a data structure for a problem called 'incremental reachability'.
There are multiple ways to construct such a data structure, each with different update/query time tradeoffs. A very simple way to achieve the goal is to keep an adjacency list and run a BFS from node 1 every time a user queries whether node "x" is reachable.
This gets you update time: O(1) and query time: O(m).</p>
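<p>A minimal sketch of that baseline (Python, adjacency list plus BFS per query; the query is O(n + m) in the worst case):</p>
<pre><code>from collections import deque

adj = {}                          # node -> list of successors

def connect(a, b):                # O(1) update
    adj.setdefault(a, []).append(b)

def reachable(x, source=1):       # BFS from node 1 on every query
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        if u == x:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
</code></pre>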
<p>A more complicated idea is 'Even-Shiloach' trees [1], where a BFS tree is maintained efficiently.
Total update time: O(nm), query time: O(1).</p>
<p>An experimental analysis of similar algorithms can be found in [2].</p>
<p>[1] Shimon Even, Yossi Shiloach: An On-Line Edge-Deletion Problem. J. ACM 28(1): 1-4 (1981) <a href="https://dl.acm.org/doi/10.1145/322234.322235" rel="nofollow noreferrer">https://dl.acm.org/doi/10.1145/322234.322235</a></p>
<p>[2] Fully Dynamic Single-Source Reachability in Practice: An Experimental Study, Kathrin Hanauer, Monika Henzinger and Christian Schulz <a href="https://arxiv.org/abs/1905.01216" rel="nofollow noreferrer">https://arxiv.org/abs/1905.01216</a></p> | 2022-02-08 08:18:43.910000+00:00 | 2022-02-08 08:31:41.500000+00:00 | 2022-02-08 08:31:41.500000+00:00 | null | 71,028,191 | <p>I understand the DSU strictly works with undirected graphs from this stack overflow question - <a href="https://stackoverflow.com/questions/61167751/can-we-detect-cycles-in-directed-graph-using-union-find-data-structure">Can we detect cycles in directed graph using Union-Find data structure?</a></p>
<p>Nevertheless, I am currently working on a problem that involves 400000 queries and a graph with at most 400000 nodes, in which there are two possible queries:</p>
<ul>
<li><p>Connect nodes a and b (directed, of course)</p>
</li>
<li><p>Output "true" if node "x" is reachable from node 1. Otherwise, print "false."</p>
</li>
</ul>
<p>However, my original instinct was to use DSU; that obviously would not work. Any suggestions? Thank you.</p> | 2022-02-08 03:20:50.697000+00:00 | 2022-02-08 16:42:32.127000+00:00 | null | algorithm|graph-theory | ['https://dl.acm.org/doi/10.1145/322234.322235', 'https://arxiv.org/abs/1905.01216'] | 2 |
44,628,011 | <p>I want to explain this with pictures from <a href="https://arxiv.org/abs/1412.0767" rel="noreferrer">C3D</a>.</p>
<p>In a nutshell, <strong>convolutional direction</strong> & <strong>output shape</strong> is important!</p>
<p><a href="https://i.stack.imgur.com/owWjX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/owWjX.png" alt="enter image description here" /></a></p>
<p>↑↑↑↑↑ <em><strong>1D Convolutions - Basic</strong></em> ↑↑↑↑↑</p>
<ul>
<li>just <strong>1</strong>-direction (time-axis) to calculate conv</li>
<li>input = [W], filter = [k], output = [W]</li>
<li>ex) input = [1,1,1,1,1], filter = [0.25,0.5,0.25], output = [1,1,1,1,1]</li>
<li>output-shape is 1D array</li>
<li>example) graph smoothing</li>
</ul>
<h3>tf.nn.conv1d code Toy Example</h3>
<pre><code>import tensorflow as tf
import numpy as np
sess = tf.Session()
ones_1d = np.ones(5)
weight_1d = np.ones(3)
strides_1d = 1
in_1d = tf.constant(ones_1d, dtype=tf.float32)
filter_1d = tf.constant(weight_1d, dtype=tf.float32)
in_width = int(in_1d.shape[0])
filter_width = int(filter_1d.shape[0])
input_1d = tf.reshape(in_1d, [1, in_width, 1])
kernel_1d = tf.reshape(filter_1d, [filter_width, 1, 1])
output_1d = tf.squeeze(tf.nn.conv1d(input_1d, kernel_1d, strides_1d, padding='SAME'))
print sess.run(output_1d)
</code></pre>
<p><a href="https://i.stack.imgur.com/hvMaU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hvMaU.png" alt="enter image description here" /></a></p>
<p>↑↑↑↑↑ <em><strong>2D Convolutions - Basic</strong></em> ↑↑↑↑↑</p>
<ul>
<li><strong>2</strong>-direction (x,y) to calculate conv</li>
<li>output-shape is <strong>2D</strong> Matrix</li>
<li>input = [W, H], filter = [k,k] output = [W,H]</li>
<li>example) <a href="https://en.wikipedia.org/wiki/Sobel_operator" rel="noreferrer">Sobel Edge Filter</a></li>
</ul>
<h3>tf.nn.conv2d - Toy Example</h3>
<pre><code>ones_2d = np.ones((5,5))
weight_2d = np.ones((3,3))
strides_2d = [1, 1, 1, 1]
in_2d = tf.constant(ones_2d, dtype=tf.float32)
filter_2d = tf.constant(weight_2d, dtype=tf.float32)
in_width = int(in_2d.shape[0])
in_height = int(in_2d.shape[1])
filter_width = int(filter_2d.shape[0])
filter_height = int(filter_2d.shape[1])
input_2d = tf.reshape(in_2d, [1, in_height, in_width, 1])
kernel_2d = tf.reshape(filter_2d, [filter_height, filter_width, 1, 1])
output_2d = tf.squeeze(tf.nn.conv2d(input_2d, kernel_2d, strides=strides_2d, padding='SAME'))
print sess.run(output_2d)
</code></pre>
<p><a href="https://i.stack.imgur.com/IvDQP.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IvDQP.png" alt="enter image description here" /></a></p>
<p>↑↑↑↑↑ <em><strong>3D Convolutions - Basic</strong></em> ↑↑↑↑↑</p>
<ul>
<li><strong>3</strong>-direction (x,y,z) to calculate conv</li>
<li>output-shape is <strong>3D</strong> Volume</li>
<li>input = [W,H,<strong>L</strong>], filter = [k,k,<strong>d</strong>] output = [W,H,M]</li>
<li><strong>d < L</strong> is important! for making volume output</li>
<li>example) C3D</li>
</ul>
<h3>tf.nn.conv3d - Toy Example</h3>
<pre><code>ones_3d = np.ones((5,5,5))
weight_3d = np.ones((3,3,3))
strides_3d = [1, 1, 1, 1, 1]
in_3d = tf.constant(ones_3d, dtype=tf.float32)
filter_3d = tf.constant(weight_3d, dtype=tf.float32)
in_width = int(in_3d.shape[0])
in_height = int(in_3d.shape[1])
in_depth = int(in_3d.shape[2])
filter_width = int(filter_3d.shape[0])
filter_height = int(filter_3d.shape[1])
filter_depth = int(filter_3d.shape[2])
input_3d = tf.reshape(in_3d, [1, in_depth, in_height, in_width, 1])
kernel_3d = tf.reshape(filter_3d, [filter_depth, filter_height, filter_width, 1, 1])
output_3d = tf.squeeze(tf.nn.conv3d(input_3d, kernel_3d, strides=strides_3d, padding='SAME'))
print sess.run(output_3d)
</code></pre>
<p><a href="https://i.stack.imgur.com/49cdt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/49cdt.png" alt="enter image description here" /></a></p>
<p>↑↑↑↑↑ <em><strong>2D Convolutions with 3D input</strong></em> - LeNet, VGG, ..., ↑↑↑↑↑</p>
<ul>
<li>Eventhough input is 3D ex) 224x224x3, 112x112x32</li>
<li>output-shape is not <strong>3D</strong> Volume, but <strong>2D</strong> Matrix</li>
<li>because filter depth = <strong>L</strong> must be matched with input channels = <strong>L</strong></li>
<li><strong>2</strong>-direction (x,y) to calculate conv! not 3D</li>
<li>input = [W,H,<strong>L</strong>], filter = [k,k,<strong>L</strong>] output = [W,H]</li>
<li>output-shape is <strong>2D</strong> Matrix</li>
<li>what if we want to train N filters (N is number of filters)</li>
<li>then output shape is (stacked 2D) <strong>3D = 2D x N</strong> matrix.</li>
</ul>
<h3>conv2d - LeNet, VGG, ... for 1 filter</h3>
<pre><code>in_channels = 32 # 3 for RGB, 32, 64, 128, ...
ones_3d = np.ones((5,5,in_channels)) # input is 3d, in_channels = 32
# filter must have 3d-shpae with in_channels
weight_3d = np.ones((3,3,in_channels))
strides_2d = [1, 1, 1, 1]
in_3d = tf.constant(ones_3d, dtype=tf.float32)
filter_3d = tf.constant(weight_3d, dtype=tf.float32)
in_width = int(in_3d.shape[0])
in_height = int(in_3d.shape[1])
filter_width = int(filter_3d.shape[0])
filter_height = int(filter_3d.shape[1])
input_3d = tf.reshape(in_3d, [1, in_height, in_width, in_channels])
kernel_3d = tf.reshape(filter_3d, [filter_height, filter_width, in_channels, 1])
output_2d = tf.squeeze(tf.nn.conv2d(input_3d, kernel_3d, strides=strides_2d, padding='SAME'))
print sess.run(output_2d)
</code></pre>
<h3>conv2d - LeNet, VGG, ... for N filters</h3>
<pre><code>in_channels = 32 # 3 for RGB, 32, 64, 128, ...
out_channels = 64 # 128, 256, ...
ones_3d = np.ones((5,5,in_channels)) # input is 3d, in_channels = 32
# filter must have 3d-shape x number of filters = 4D
weight_4d = np.ones((3,3,in_channels, out_channels))
strides_2d = [1, 1, 1, 1]
in_3d = tf.constant(ones_3d, dtype=tf.float32)
filter_4d = tf.constant(weight_4d, dtype=tf.float32)
in_width = int(in_3d.shape[0])
in_height = int(in_3d.shape[1])
filter_width = int(filter_4d.shape[0])
filter_height = int(filter_4d.shape[1])
input_3d = tf.reshape(in_3d, [1, in_height, in_width, in_channels])
kernel_4d = tf.reshape(filter_4d, [filter_height, filter_width, in_channels, out_channels])
#output stacked shape is 3D = 2D x N matrix
output_3d = tf.nn.conv2d(input_3d, kernel_4d, strides=strides_2d, padding='SAME')
print sess.run(output_3d)
</code></pre>
<p><a href="https://i.stack.imgur.com/RghcS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RghcS.png" alt="enter image description here" /></a>
↑↑↑↑↑ <em><strong>Bonus 1x1 conv in CNN</strong></em> - GoogLeNet, ..., ↑↑↑↑↑</p>
<ul>
<li>1x1 conv is confusing when you think this as 2D image filter like sobel</li>
<li>for 1x1 conv in CNN, input is 3D shape as above picture.</li>
<li>it calculate depth-wise filtering</li>
<li>input = [W,H,L], filter = <strong>[1,1,L]</strong> output = [W,H]</li>
<li>output stacked shape is <strong>3D = 2D x N</strong> matrix.</li>
</ul>
<h3>tf.nn.conv2d - special case 1x1 conv</h3>
<pre><code>in_channels = 32 # 3 for RGB, 32, 64, 128, ...
out_channels = 64 # 128, 256, ...
ones_3d = np.ones((5,5,in_channels)) # input is 3d, in_channels = 32
# a 1x1 conv filter must still have 3d-shape x number of filters = 4D
weight_4d = np.ones((1,1,in_channels, out_channels))
strides_2d = [1, 1, 1, 1]
in_3d = tf.constant(ones_3d, dtype=tf.float32)
filter_4d = tf.constant(weight_4d, dtype=tf.float32)
in_width = int(in_3d.shape[0])
in_height = int(in_3d.shape[1])
filter_width = int(filter_4d.shape[0])
filter_height = int(filter_4d.shape[1])
input_3d = tf.reshape(in_3d, [1, in_height, in_width, in_channels])
kernel_4d = tf.reshape(filter_4d, [filter_height, filter_width, in_channels, out_channels])
#output stacked shape is 3D = 2D x N matrix
output_3d = tf.nn.conv2d(input_3d, kernel_4d, strides=strides_2d, padding='SAME')
print sess.run(output_3d)
</code></pre>
<h3>Animation (2D Conv with 3D-inputs)</h3>
<p><a href="https://i.stack.imgur.com/FjvuN.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/FjvuN.gif" alt="enter image description here" /></a></p>
<ul>
<li>Original Link : <a href="https://sites.google.com/site/nttrungmtwiki/home/it/data-science---python/tensorflow/tensorflow-and-deep-learning-part-3?tmpl=%2Fsystem%2Fapp%2Ftemplates%2Fprint%2F&showPrintDialog=1" rel="noreferrer">LINK</a></li>
<li>The author: Martin Görner</li>
<li>Twitter: @martin_gorner</li>
<li>Google +: plus.google.com/+MartinGorne</li>
</ul>
<h3>Bonus 1D Convolutions with 2D input</h3>
<p><a href="https://i.stack.imgur.com/woaXM.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/woaXM.jpg" alt="enter image description here" /></a>
↑↑↑↑↑ <em><strong>1D Convolutions with 1D input</strong></em> ↑↑↑↑↑</p>
<p><a href="https://i.stack.imgur.com/9VBtu.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/9VBtu.jpg" alt="enter image description here" /></a>
↑↑↑↑↑ <em><strong>1D Convolutions with 2D input</strong></em> ↑↑↑↑↑</p>
<ul>
<li>Eventhough input is 2D ex) 20x14</li>
<li>output-shape is not <strong>2D</strong> , but <strong>1D</strong> Matrix</li>
<li>because filter height = <strong>L</strong> must be matched with input height = <strong>L</strong></li>
<li><strong>1</strong>-direction (x) to calculate conv! not 2D (see the toy example below this list)</li>
<li>input = [W,<strong>L</strong>], filter = [k,<strong>L</strong>] output = [W]</li>
<li>output-shape is <strong>1D</strong> Matrix</li>
<li>what if we want to train N filters (N is number of filters)</li>
<li>then output shape is (stacked 1D) <strong>2D = 1D x N</strong> matrix.</li>
</ul>
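<h3>tf.nn.conv1d with 2D input - Toy Example</h3>
<p>(A sketch in the same style as the snippets above, reusing <code>sess</code>, <code>np</code> and <code>tf</code> from the first example; the shapes are just illustrative.)</p>
<pre><code>in_channels = 21                          # e.g. embedding size per time-step
ones_2d = np.ones((5, in_channels))       # 5 time-steps x 21 channels
weight_2d = np.ones((3, in_channels))     # filter spans 3 steps and all channels
in_2d = tf.constant(ones_2d, dtype=tf.float32)
filter_2d = tf.constant(weight_2d, dtype=tf.float32)

in_width = int(in_2d.shape[0])
filter_width = int(filter_2d.shape[0])

input_1d = tf.reshape(in_2d, [1, in_width, in_channels])
kernel_1d = tf.reshape(filter_2d, [filter_width, in_channels, 1])
output_1d = tf.squeeze(tf.nn.conv1d(input_1d, kernel_1d, 1, padding='SAME'))
print(sess.run(output_1d))                # shape (5,) - a 1D output
</code></pre>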
<h3>Bonus C3D</h3>
<pre><code>in_channels = 32 # 3, 32, 64, 128, ...
out_channels = 64 # 3, 32, 64, 128, ...
ones_4d = np.ones((5,5,5,in_channels))
weight_5d = np.ones((3,3,3,in_channels,out_channels))
strides_3d = [1, 1, 1, 1, 1]
in_4d = tf.constant(ones_4d, dtype=tf.float32)
filter_5d = tf.constant(weight_5d, dtype=tf.float32)
in_width = int(in_4d.shape[0])
in_height = int(in_4d.shape[1])
in_depth = int(in_4d.shape[2])
filter_width = int(filter_5d.shape[0])
filter_height = int(filter_5d.shape[1])
filter_depth = int(filter_5d.shape[2])
input_4d = tf.reshape(in_4d, [1, in_depth, in_height, in_width, in_channels])
kernel_5d = tf.reshape(filter_5d, [filter_depth, filter_height, filter_width, in_channels, out_channels])
output_4d = tf.nn.conv3d(input_4d, kernel_5d, strides=strides_3d, padding='SAME')
print sess.run(output_4d)
sess.close()
</code></pre>
<h3>Input & Output in Tensorflow</h3>
<p><a href="https://i.stack.imgur.com/I25ty.png" rel="noreferrer"><img src="https://i.stack.imgur.com/I25ty.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/xIdEq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xIdEq.png" alt="enter image description here" /></a></p>
<h3>Summary</h3>
<p><a href="https://i.stack.imgur.com/HCWgp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/HCWgp.png" alt="enter image description here" /></a></p> | 2017-06-19 10:22:12.110000+00:00 | 2021-08-17 14:47:45.383000+00:00 | 2021-08-17 14:47:45.383000+00:00 | null | 42,883,547 | <p>Can anyone please clearly explain the difference between 1D, 2D, and 3D convolutions in convolutional neural networks (in deep learning) with the use of examples?</p> | 2017-03-19 06:20:20.053000+00:00 | 2021-08-17 14:47:45.383000+00:00 | 2020-03-17 00:20:49.773000+00:00 | machine-learning|deep-learning|signal-processing|conv-neural-network|convolution | ['https://arxiv.org/abs/1412.0767', 'https://i.stack.imgur.com/owWjX.png', 'https://i.stack.imgur.com/hvMaU.png', 'https://en.wikipedia.org/wiki/Sobel_operator', 'https://i.stack.imgur.com/IvDQP.png', 'https://i.stack.imgur.com/49cdt.png', 'https://i.stack.imgur.com/RghcS.png', 'https://i.stack.imgur.com/FjvuN.gif', 'https://sites.google.com/site/nttrungmtwiki/home/it/data-science---python/tensorflow/tensorflow-and-deep-learning-part-3?tmpl=%2Fsystem%2Fapp%2Ftemplates%2Fprint%2F&showPrintDialog=1', 'https://i.stack.imgur.com/woaXM.jpg', 'https://i.stack.imgur.com/9VBtu.jpg', 'https://i.stack.imgur.com/I25ty.png', 'https://i.stack.imgur.com/xIdEq.png', 'https://i.stack.imgur.com/HCWgp.png'] | 14 |
70,417,996 | <p>Because different features of the same object mean different things, and it's not logical to calculate statistics over them together. They can have different ranges, means, stds, etc. E.g. one of your features could be the age of a person and another the height of the person. If you calculate the mean of these values you will not get any meaningful number.</p>
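<p>In other words, the statistics are taken per feature, across the samples in the batch - a toy numpy illustration (numbers made up):</p>
<pre><code>import numpy as np

x = np.array([[20., 170.],    # [age, height] for sample 1
              [30., 180.],    # sample 2
              [40., 160.]])   # sample 3

mean = x.mean(axis=0)         # one mean per feature (column), not per sample
std = x.std(axis=0)           # one std per feature
x_norm = (x - mean) / (std + 1e-5)
</code></pre>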
<p>In classic machine learning (especially in linear models and KNN) you should normalize your features (i.e. calculate the mean and std of each feature over the entire dataset and transform your features to (X-mean(X)) / std(X)). Batch normalization is an analogue of this applied to stochastic optimization methods, like SGD (it's not meaningful to use global statistics on a mini batch; furthermore, you want to use batch norm more often than just before the first layer). More fundamental ideas can be found in the <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">original paper</a>.</p> | 2021-12-20 06:16:32.143000+00:00 | 2021-12-20 06:16:32.143000+00:00 | null | null | 70,417,085 | <p>Why the batch normalization is working on the different samples of the same characteristics instead of different characteristics of the same sample? Shouldn't it be the normalization of different features? In the diagram, why do we use the first row and not the first column?
Could someone help me?</p>
<p><img src="https://i.stack.imgur.com/lxiY1.png" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/6HlPa.png" alt="enter image description here" /></p> | 2021-12-20 03:19:04.350000+00:00 | 2021-12-20 08:14:19.693000+00:00 | 2021-12-20 08:14:19.693000+00:00 | machine-learning|deep-learning|normalization | ['https://arxiv.org/abs/1502.03167'] | 1 |
30,609,096 | <p>From a programming point of view, the problem appears to be your value of <code>gamma</code> and therefore the size of your collapse operators. Print out <code>gamma</code> - it is of the order <code>10**25</code> - this seems to be what is preventing the solver from converging.</p>
<p>Just for testing (I'm an engineer, not a quantum physicist...), I put in a smaller value of <code>gamma</code> (e.g. 0.1); the solver then seems to work and gives apparently reasonable output in <code>results.states</code>.</p>
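<p>For reference, a minimal sketch of that quick test (it simply overrides <code>gamma</code> in the question's script before the collapse operators are scaled; the value 0.1 is only there to check convergence and is not physically motivated):</p>
<pre><code>gamma = 0.1   # instead of ~1e25
L_ops = [gamma * L for L in [L1, L2, L3, L4, L5, L6, L7]]   # L1..L7 are the bare projectors from the question
results = mesolve(H, rho0, np.linspace(0.0, 1000, 20), L_ops, [], options=options)
print(results.states[-1])   # now returns sensible density matrices
</code></pre>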
<p>I don't quite understand your <code>gamma</code> - it <em>seems</em> to have units of cm<sup>-1</sup>s<sup>-2</sup> as you have set it up. I wonder if you only want to divide by <code>hbar</code> once, maybe. As I say, I'm not a quantum physicist, so I'm only guessing here based on what makes the programming hang together combined with a bit of dimensional analysis.</p>
<p><strong><em>EDIT</em></strong></p>
<p>OP indicates in comments that the wrong order of magnitude / units for <code>gamma</code> does seem to be the programming issue (i.e. preventing numerical calculus from converging), but isn't totally clear on how to calculate gamma. At this stage, it may be worth posting a question at either <a href="http://physics.stackexchange.com">http://physics.stackexchange.com</a> or <a href="http://math.stackexchange.com">http://math.stackexchange.com</a> about that - referencing this one for context if necessary.</p>
<p><strong><em>EDIT 2</em></strong></p>
<p>I note you asked this <a href="https://physics.stackexchange.com/questions/187734/messed-up-units">related question on the physics site</a>. This makes it clear where <a href="http://arxiv.org/pdf/0807.0929v2.pdf" rel="nofollow noreferrer">the expression for gamma comes from</a> and thereby clarifies that the constant terms presented as simply <code>30</code> and <code>150</code> in this question actually have units (Energy and frequency respectively). This changes the dimensional analysis - the units of gamma are s<sup>-1</sup> or, with appropriate conversion, cm<sup>-1</sup>.</p>
<p>It also shows the value you mention in comments - 300 cm<sup>-1</sup>.</p> | 2015-06-03 01:00:23.283000+00:00 | 2015-06-06 23:50:40.227000+00:00 | 2017-04-13 12:39:35.890000+00:00 | null | 30,605,526 | <p>I have been trying to use QuTiP to solve a quantum mechanics matrix differential equation (a Lindblad equation). Here is the code:</p>
<pre><code>from qutip import *
from matplotlib import *
import numpy as np
hamiltonian = np.array([[215, -104.1, 5.1, -4.3 ,4.7,-15.1 ,-7.8 ],
[-104.1, 220.0, 32.6 ,7.1, 5.4, 8.3, 0.8],
[ 5.1, 32.6, 0.0, -46.8, 1.0 , -8.1, 5.1],
[-4.3, 7.1, -46.8, 125.0, -70.7, -14.7, -61.5],
[ 4.7, 5.4, 1.0, -70.7, 450.0, 89.7, -2.5],
[-15.1, 8.3, -8.1, -14.7, 89.7, 330.0, 32.7],
[-7.8, 0.8, 5.1, -61.5, -2.5, 32.7, 280.0]])
H=Qobj(hamiltonian)
ground=Qobj(np.array([[ 0.0863685 ],
[ 0.17141713],
[-0.91780802],
[-0.33999268],
[-0.04835763],
[-0.01859027],
[-0.05006013]]))
rho0 = ground*ground.dag()
from scipy.constants import *
ktuple=physical_constants['Boltzmann constant in eV/K']
k = ktuple[0]* 8065.6
htuple = physical_constants['Planck constant in eV s']
hbar = (htuple[0]* 8065.6)/(2*pi)
gamma=(2*pi)*((k*300)/hbar)*(35/(150*hbar))
L1 = Qobj(np.array([[1,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]]))
L2 = Qobj(np.array([[0,0,0,0,0,0,0],[0,1,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]]))
L3 = Qobj(np.array([[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,1,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]]))
L4 = Qobj(np.array([[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,1,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]]))
L5 = Qobj(np.array([[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,1,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0]]))
L6 = Qobj(np.array([[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,1,0],[0,0,0,0,0,0,0]]))
L7 = Qobj(np.array([[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,1]]))
#Since our gamma variable cannot be directly applied onto the Lindblad operator, we must multiply it with the collapse operators:
L1=gamma*L1
L2=gamma*L2
L3=gamma*L3
L4=gamma*L4
L5=gamma*L5
L6=gamma*L6
L7=gamma*L7
options = Options(nsteps=100000)
results = mesolve(H, rho0, np.linspace(0.0, 1000, 20),[L1,L2,L3,L4,L5,L6,L7],[],options=options)
print results
</code></pre>
<p>This code is supposed to solve the following equation:</p>
<p><img src="https://i.stack.imgur.com/x2X0Z.gif" alt="Lindblad equation"></p>
<p>where L_i are matrices (in the list: [L1,L2,L3,L4,L5,L6,L7]), H is the hamiltonian, another matrix, <img src="https://i.stack.imgur.com/TeKxt.gif" alt="$\rho$"> is the density matrix, and <img src="https://i.stack.imgur.com/C2zjo.gif" alt="$\gamma$"> is a constant equal to <img src="https://i.stack.imgur.com/JaFwh.gif" alt="$2\pi kT/\hbar*E_{R}/(\hbar\omega_{c})$"> where T is the temperature, k is the Boltzmann constant, and <img src="https://i.stack.imgur.com/BYYcG.gif" alt="$\hbar$ = $h/2\pi$">, where h is Planck's constant.</p>
<p>Every time I run the code, it gives me the following error:</p>
<pre><code>ZVODE-- At T (=R1) and step size H (=R2), the
corrector convergence failed repeatedly
or with abs(H) = HMIN
In above, R1 = 0.0000000000000D+00 R2 = 0.1202322246215D-36
/usr/local/lib/python2.7/dist-packages/scipy/integrate/_ode.py:853: UserWarning: zvode: Repeated convergence failures. (Perhaps bad Jacobian supplied or wrong choice of MF o
r tolerances.)
'Unexpected istate=%s' % istate))
Traceback (most recent call last):
File "lindbladqutip.py", line 48, in <module>
results = mesolve(H, rho0, np.linspace(0.0, 1000, 20),[L1,L2,L3,L4,L5,L6,L7],[],options=options)
File "/projects/d6138712-e5f4-4d85-9d4d-77ce0a7b4a61/.local/lib/python2.7/site-packages/qutip/mesolve.py", line 264, in mesolve
progress_bar)
File "/projects/d6138712-e5f4-4d85-9d4d-77ce0a7b4a61/.local/lib/python2.7/site-packages/qutip/mesolve.py", line 692, in _mesolve_const
return _generic_ode_solve(r, rho0, tlist, e_ops, opt, progress_bar)
File "/projects/d6138712-e5f4-4d85-9d4d-77ce0a7b4a61/.local/lib/python2.7/site-packages/qutip/mesolve.py", line 866, in _generic_ode_solve
raise Exception("ODE integration error: Try to increase "
Exception: ODE integration error: Try to increase the allowed number of substeps by increasing the nsteps parameter in the Options class.
</code></pre>
<p>After doing some debugging analysis, it seems like the first or second integration fails. The error tells me to increase the nsteps parameter, which I have tried. Even then it fails. Changing the list of times (the np.linspace function makes the list of times) also has no effect. </p>
<p>I am desperate to know what I can do to fix this error. Please comment if you all need more details.</p>
<p>Thanks for all your help!</p> | 2015-06-02 20:02:15.400000+00:00 | 2015-08-23 00:32:38.430000+00:00 | 2015-08-23 00:32:38.430000+00:00 | python|numpy|scipy|integrate|qutip | ['http://physics.stackexchange.com', 'http://math.stackexchange.com', 'https://physics.stackexchange.com/questions/187734/messed-up-units', 'http://arxiv.org/pdf/0807.0929v2.pdf'] | 4 |
59,370,641 | <p>Class saliency maps as described in <a href="https://arxiv.org/pdf/1312.6034v2.pdf" rel="nofollow noreferrer">Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps</a> explain that such a map describes per pixel how much changing that pixel will influence a prediction. Hence I see no reason why this could not be applied to image segmentation tasks.</p>
<p>The resulting images from the segmentation task and saliency map have to be interpreted differently however. In an image segmentation task the output is a per pixel prediction of whether or not a pixel belongs to a class, sometimes in the form of a certainty score.</p>
<p>A class saliency map describes per pixel how much changing that pixel would change the score of the classifier. Or quote from above paper: <strong><em>"which pixels need to be changed the least to affect the class score the most"</em></strong></p>
<p><strong>Edit: Added example.</strong>
Say that a pixel gets a score of 99% for being of the class "Dog"; we can then be rather certain that this pixel actually is part of a dog. The saliency map can still show a low score for this same pixel: changing this pixel slightly would not influence the prediction of that pixel belonging to the class "Dog". In my experience so far, both the per-pixel class probability map and the saliency map show somewhat similar patterns, but this does not mean they are to be interpreted equally.</p>
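<p>For concreteness, a minimal PyTorch sketch of computing such a saliency map for a segmentation model (<code>model</code>, <code>img</code> and <code>dog_class</code> are placeholders, not from any particular library):</p>
<pre><code>import torch

img = img.detach().clone().requires_grad_(True)   # input image, shape [1, 3, H, W]
scores = model(img)                               # per-pixel class scores, shape [1, C, H, W]
scores[0, dog_class].sum().backward()             # aggregate score of the class of interest
saliency, _ = img.grad.abs().max(dim=1)           # per pixel: how much changing it affects the class score
</code></pre>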
<p>A piece of code I came across that can be applied to pytorch models (from Nikhil Kasukurthi, not mine) can be found on <a href="https://gist.github.com/Nikhil-Kasukurthi/3f75bd470380dda6e24f981d01f4c2cb" rel="nofollow noreferrer">github</a>.</p> | 2019-12-17 08:55:55.777000+00:00 | 2019-12-18 12:46:35.250000+00:00 | 2019-12-18 12:46:35.250000+00:00 | null | 59,198,096 | <p>I also know the fact that saliency map is also a form of image segmentation task.
But it has been used very widely for interpretable deep learning (see Grad-CAM etc.).
I also came across this paper (<a href="http://img.cs.uec.ac.jp/pub/conf16/161011shimok_0.pdf" rel="nofollow noreferrer">http://img.cs.uec.ac.jp/pub/conf16/161011shimok_0.pdf</a>)
which talks about Class Saliency Maps - something that rings a bell when it comes to Image Segmentation. Please tell if this concept exists for Image Segmentation or I need to read more on this subject.</p> | 2019-12-05 15:12:44.590000+00:00 | 2019-12-18 12:46:35.250000+00:00 | null | neural-network|deep-learning|reproducible-research | ['https://arxiv.org/pdf/1312.6034v2.pdf', 'https://gist.github.com/Nikhil-Kasukurthi/3f75bd470380dda6e24f981d01f4c2cb'] | 2 |
57,009,483 | <p>Basically, you will need a <a href="https://arxiv.org/abs/1503.03832" rel="nofollow noreferrer">FaceNet</a> pretrained model. A FaceNet model creates an embedding vector for a human face in an image. As mentioned in the paper, researchers have used clustering algorithms using the embedded face vectors. Hence, you get a 128 or 256-dimensional vector which represents that human face.</p>
<ul>
<li>After you've generated an embedding vector from the images of the two subjects, you can find the <a href="https://en.wikipedia.org/wiki/Cosine_similarity" rel="nofollow noreferrer">cosine similarity</a> of both the vectors, which is a common metric used for vectors comparison.</li>
<li>By some experimentation, you can find a <strong>threshold similarity score</strong>: if the similarity score exceeds this threshold, the two faces belong to the same subject (see the sketch after this list).</li>
</ul>
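<p>A minimal sketch of that comparison (<code>emb1</code>/<code>emb2</code> are the two embedding vectors; the threshold value is a placeholder you would tune on your own verification pairs):</p>
<pre><code>import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.7   # assumption: tune on a held-out set of same/different pairs

same_person = cosine_similarity(emb1, emb2) > THRESHOLD
</code></pre>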
<p>You can discover some references here:</p>
<ol>
<li><p><a href="https://medium.com/@vinayakvarrier/building-a-real-time-face-recognition-system-using-pre-trained-facenet-model-f1a277a06947" rel="nofollow noreferrer">https://medium.com/@vinayakvarrier/building-a-real-time-face-recognition-system-using-pre-trained-facenet-model-f1a277a06947</a></p></li>
<li><p><a href="https://machinelearningmastery.com/how-to-develop-a-face-recognition-system-using-facenet-in-keras-and-an-svm-classifier/" rel="nofollow noreferrer">https://machinelearningmastery.com/how-to-develop-a-face-recognition-system-using-facenet-in-keras-and-an-svm-classifier/</a></p></li>
</ol> | 2019-07-12 14:55:54.090000+00:00 | 2019-07-12 14:55:54.090000+00:00 | null | null | 57,005,176 | <p>I have failed miserably trying to train a face verification network on my own hardware. Here by face verification i mean looking at two photos and telling its the same person or not. So any recommendations for pre trained models?</p>
<p>there are many articles on implementation of face-net for face identification but none for face verification. Can anyone guide me if you know of any pre trained models that i can use?</p> | 2019-07-12 10:31:07.740000+00:00 | 2019-07-12 14:55:54.090000+00:00 | null | python|tensorflow|machine-learning|keras|computer-vision | ['https://arxiv.org/abs/1503.03832', 'https://en.wikipedia.org/wiki/Cosine_similarity', 'https://medium.com/@vinayakvarrier/building-a-real-time-face-recognition-system-using-pre-trained-facenet-model-f1a277a06947', 'https://machinelearningmastery.com/how-to-develop-a-face-recognition-system-using-facenet-in-keras-and-an-svm-classifier/'] | 4 |
72,169,588 | <p>Although correct, I believe that the method described in the answer of Dion Groothof is not what is usually of interest. Usually, researchers are interested in visualizing the causal effect of a variable <em>adjusted</em> for confounders. Simply showing the predicted survival curve for one single covariate combination does not really do the trick here. I would recommend reading up on confounder-adjusted survival curves. See <a href="https://arxiv.org/abs/2203.10002" rel="nofollow noreferrer">https://arxiv.org/abs/2203.10002</a> for example.</p>
<p>Those type of curves can be calculated in R using the <code>adjustedCurves</code> package: <a href="https://github.com/RobinDenz1/adjustedCurves" rel="nofollow noreferrer">https://github.com/RobinDenz1/adjustedCurves</a></p>
<p>In your example, the following code could be used:</p>
<pre><code>library(survival)
library(devtools)
# install adjustedCurves from github, load it
devtools::install_github("RobinDenz1/adjustedCurves")
library(adjustedCurves)
# "event" needs to be binary
lung$status <- lung$status - 1
# "variable" needs to be a factor
lung$ph.ecog <- factor(lung$ph.ecog)
fit <- coxph(Surv(time, status) ~ ph.ecog + age + sex, data=lung,
x=TRUE)
# calculate and plot curves
adj <- adjustedsurv(data=lung, variable="ph.ecog", ev_time="time",
event="status", method="direct",
outcome_model=fit, conf_int=TRUE)
plot(adj)
</code></pre>
<p>Producing the following output:</p>
<p><a href="https://i.stack.imgur.com/UkZp7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UkZp7.png" alt="Output" /></a></p>
<p>These survival curves are adjusted for the effect of <code>age</code> and <code>sex</code>. More information on how this adjustment works can be found in the documentation of the <code>adjustedCurves</code> package or the article I cited above.</p> | 2022-05-09 09:16:24.317000+00:00 | 2022-05-09 09:16:24.317000+00:00 | null | null | 70,783,093 | <p>I'm using the <code>survminer</code> package to try to generate survival and hazard function graphs for a longitudinal student-level dataset that has 5 subgroups of interest.</p>
<p>I've had success creating a model that shows the survival functions <em>without</em> adjusting for student-level covariates using <code>ggsurvplot</code>.</p>
<pre><code>ggsurvplot(survfit(Surv(expectedgr, sped) ~ langstatus_new, data=mydata), pvalue=TRUE)
</code></pre>
<p><a href="https://i.stack.imgur.com/6r3ze.png" rel="nofollow noreferrer">Output example</a></p>
<p>However, I cannot manage to get these curves adjusted for covariates. My aim is to create <a href="https://i.stack.imgur.com/OMaga.jpg" rel="nofollow noreferrer">graphs like these</a>. As you can see, these are covariate-adjusted survival curves according to some factor variable. Does anyone how such graphs can be obtained in <code>R</code>?</p> | 2022-01-20 08:43:51.733000+00:00 | 2022-05-09 09:16:24.317000+00:00 | 2022-01-21 03:52:47.143000+00:00 | r|survival-analysis|cox-regression|survminer | ['https://arxiv.org/abs/2203.10002', 'https://github.com/RobinDenz1/adjustedCurves', 'https://i.stack.imgur.com/UkZp7.png'] | 3 |
58,341,899 | <p>The most suited NLP technique for this is probably <em>language models</em>.
They predict the likelihood of a word given the previous words (or surrounding words).
They can be used for error correction.</p>
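<p>For illustration, a minimal sketch of scoring a suspect word with a pretrained masked language model (the checkpoint name is just the standard Hugging Face one; the sentence and the two candidate words are taken from the question and are only an example):</p>
<pre><code>import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

text = "it is a cycle that makes you [MASK] when you try to stop"
inputs = tok(text, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero()[0].item()

with torch.no_grad():
    logits = mlm(**inputs).logits[0, mask_pos]

for cand in ["sick", "dick"]:   # candidate corrections for the ASR output
    print(cand, logits[tok.convert_tokens_to_ids(cand)].item())   # higher score = more plausible in context
</code></pre>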
<p>You may find the following useful:<br>
<a href="https://arxiv.org/pdf/1902.07178.pdf" rel="nofollow noreferrer">article</a><br>
<a href="https://medium.com/@davidmasse8/predicting-the-next-word-back-off-language-modeling-8db607444ba9" rel="nofollow noreferrer">page</a></p> | 2019-10-11 13:05:23.037000+00:00 | 2019-10-11 13:05:23.037000+00:00 | null | null | 58,341,471 | <p>The ideal goal is to correct the output from a speech2text model according to a reference corpus (the actual text). I don't mind using any off the selves tool either in NLP space or ElasticSearch </p>
<p>I have a reference corpus like the following:</p>
<blockquote>
<p>It is a reliance that has led to a cycle of addiction that has
destroyed lives it <strong>is a cycle that makes you sick when</strong> you <strong>try to stop
and potentially takes your</strong> <strong>life when you don't and beyond</strong> its physical
effects this cycle of addiction also includes constant contact with
the criminal justice system and not just a cycle of arrests release
and violation.</p>
</blockquote>
<p>In fact its much longer ...</p>
<p>On the other hand, I have a set of sentences that are recognized from a speech-2-text model in a CSV files</p>
<pre><code>1, is a cycle that makes you dick when
2, try two stops and essentially hates your
3, posses activated
4, lives when who don't and beyond
</code></pre>
<p>As you can see there because the speech2text model is not perfect there are errors, for example</p>
<p>1) With references to the corpus these subsentences are misspelled (e.g. dick instead of sick in number the sentence number 1
2) there are sentences that do not match to the corpus at all - e.g. number 3
3) putting the sentences together does not cover the whole paragraph.</p>
<p>So basically I wonder what is this task called in the NLP topic, then I can do a better googling, and I appreciate if you name specific functions or examples that I can leverage, e.g. in Space or NLTK or any other tool.</p>
<p><em>edit</em> : * I already have experience with nlp (coursera certificate) - therefore, looking for a concrete answer and/or example rather a scientific paper. This is not a general error correction task or the next work recommendation based on sequential models. </p> | 2019-10-11 12:39:28.253000+00:00 | 2019-10-17 08:30:27.607000+00:00 | 2019-10-17 08:20:23.950000+00:00 | regex|elasticsearch|nlp|nltk|spacy | ['https://arxiv.org/pdf/1902.07178.pdf', 'https://medium.com/@davidmasse8/predicting-the-next-word-back-off-language-modeling-8db607444ba9'] | 2 |
56,464,767 | <p>You have a fair reason to prefer 0.0-1.0 (though many learning algorithms should do just fine with a -1.0 to 1.0 range). Your norm_sim rescaling of -1.0 to 1.0 to 0.0 to 1.0 is fine, if your only purpose is to get 0.0-1.0 ranges... but of course the resulting value isn't a true cosine-similarity anymore. </p>
<p>It won't necessarily matter that the values aren't real full-range angles any more. (If the algorithm needed real angles, it'd work with -1.0 to 1.0.) </p>
<p>Using the signless absolute value would be a bad idea, as it would change the rank order of similarities – moving some results that are "natively" most-dissimilar way up.</p>
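<p>A tiny illustration of both points (the similarity values are made up):</p>
<pre><code>sims = [0.9, 0.1, -0.8]

norm_sims = [(s + 1) / 2 for s in sims]   # [0.95, 0.55, 0.10] - rank order preserved
abs_sims  = [abs(s) for s in sims]        # [0.90, 0.10, 0.80] - the most dissimilar item jumps to 2nd place
</code></pre>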
<p>There's been work on constraining word-vectors to have only non-negative values in dimensions, & the usual benefit is that the resulting dimensions are more likely to be individually interpretable. (See for example <a href="https://cs.cmu.edu/~bmurphy/NNSE/" rel="nofollow noreferrer">https://cs.cmu.edu/~bmurphy/NNSE/</a>.) However, gensim doesn't support this variant, & only trying it could reveal whether it would be better for any particular project.</p>
<p>Also, there's other research that suggests usual word-vectors may not be 'balanced' around the origin (so you'll see fewer negative cosine-similiarities than would be expected from points in a random hypersphere), and that shifting them to be more balanced will usually improve them for other tasks. See: <a href="https://arxiv.org/abs/1702.01417v2" rel="nofollow noreferrer">https://arxiv.org/abs/1702.01417v2</a></p> | 2019-06-05 16:43:33.883000+00:00 | 2019-06-05 16:43:33.883000+00:00 | null | null | 56,316,903 | <p>I am interested in calculating similarity between vectors, however this similarity has to be a number between 0 and 1. There are many questions concerning tf-idf and cosine similarity, all indicating that the value lies between 0 and 1. From <a href="https://en.wikipedia.org/wiki/Cosine_similarity#Soft_cosine_measure" rel="noreferrer">Wikipedia</a>:</p>
<blockquote>
<p>In the case of information retrieval, the cosine similarity of two
documents will range from 0 to 1, since the term frequencies (using
tf–idf weights) cannot be negative. The angle between two term
frequency vectors cannot be greater than 90°.</p>
</blockquote>
<p>The peculiarity is that I wish to calculate the similarity between two vectors from two different word2vec models. These models have been aligned, though, so they should in fact represent their words in the same vector space. I can calculate the similarity between a word in <code>model_a</code> and a word in <code>model_b</code> like so</p>
<pre class="lang-py prettyprint-override"><code>import gensim as gs
from sklearn.metrics.pairwise import cosine_similarity
model_a = gs.models.KeyedVectors.load_word2vec_format(model_a_path, binary=False)
model_b = gs.models.KeyedVectors.load_word2vec_format(model_b_path, binary=False)
vector_a = model_a[word_a].reshape(1, -1)
vector_b = model_b[word_b].reshape(1, -1)
sim = cosine_similarity(vector_a, vector_b).item(0)
</code></pre>
<p>But <code>sim</code> is then a similarity metric in the [-1,1] range. Is there a scientifically sound way to map this to the [0,1] range? Intuitively I would think that something like</p>
<pre><code>norm_sim = (sim + 1) / 2
</code></pre>
<p>is okay, but I'm not sure whether that is good practice with respect to the actual meaning of cosine similarity. If not, are other similarity metrics advised? </p>
<p>The reason why I am trying to get the values to be between 0 and 1 is because the data will be transferred to a colleague who will use it as a feature for her machine learning system, which expects all values to be between 0 and 1. Her intuition was to take the absolute value, but that seems to me to be a worse alternative because then you map opposites to be identical. Considering the actual meaning of cosine similarity, though, I might be wrong. So if taking the absolute value is the good approach, we can do that as well.</p> | 2019-05-26 19:53:31.253000+00:00 | 2019-08-22 08:51:51.680000+00:00 | 2019-05-28 09:32:48.610000+00:00 | python|scikit-learn|gensim|similarity|cosine-similarity | ['https://cs.cmu.edu/~bmurphy/NNSE/', 'https://arxiv.org/abs/1702.01417v2'] | 2 |
28,699,821 | <p>Adding another answer for generating a sudoku of desired difficulty on-the-fly.</p>
<p>This means that unlike other approaches the algorithm runs <strong>only once</strong> and returns a sudoku configuration matching the desired difficulty (<strong>with high probability within a range or with probability=1</strong>)</p>
<p>Various solutions for generating (and rating) a sudoku difficulty have to do with <a href="http://www.learn-sudoku.com/basic-techniques.html" rel="nofollow noreferrer"><em>human-based</em> techniques and approaches</a>, which can be <em>easily</em> rated.</p>
<p>Then one (after having generated a sudoku configuration) <strong>re-solves</strong> the sudoku with the human-like solver and depending on the techniques the solver used (e.g <em>pairs</em>, <em>x-wing</em>, <em>swordfish</em> etc.) a difficulty rate is also assigned.</p>
<p><strong>Problems with this approach</strong>
(and requirements for the use case I had)</p>
<ol>
<li><p>In order to generate a sudoku with given difficulty, with previous method one needs to solve a sudoku twice (once with the basic algorithm and once with the human-like solver).</p></li>
<li><p>One has to (pre-)generate many sudokus which can only be rated as to difficulty <strong>after being solved</strong> by the human-like solver. So one cannot generate a desired sudoku on-the-fly once.</p></li>
<li><p>The human-like solver can be complicated and in most cases (if not all) is tightly coupled to 9x9 sudoku grids. So no easy generalisation to other sudokus (e.g 4x4, 16x16, 6x6 etc.)</p></li>
<li><p>The difficulty rating of the human-like techniques is very subjective. For example, why is <em>x-wing</em> taken to be more difficult than <em>hidden singles</em>? (I have personally solved many difficult published sudoku puzzles manually and never used such techniques.)</p></li>
</ol>
<p>Another approach was used which has the following benefits:</p>
<ol>
<li>Generalises well to arbitrary sudokus (9x9, 4x4, 6x6, 16x16 etc..)</li>
<li>The sudoku configuration, with desired difficulty, is generated once and on-the-fly</li>
<li>The difficulty rating is objective.</li>
</ol>
<p><strong>How does it work?</strong></p>
<p>First of all, the simple fact that <strong>the more difficult the puzzle, the more time it needs to be solved</strong>.</p>
<p>But time to be solved is intimately correlated to both number of clues (givens) and average alternatives to be investigated per empty cell.</p>
<p>Extending my <a href="https://stackoverflow.com/a/25110517/3591273">previous answer</a>, it was mentioned that for any sudoku puzzle the minimum number of clues is an objective property of the puzzle (for example <a href="http://arxiv.org/abs/1201.0749" rel="nofollow noreferrer"><strong>for 9x9 grids the minimum number of clues for having a valid sudoku is 17</strong></a>)</p>
<p>One can start from there and compute minimum number of clues per difficulty level (linear correlation).</p>
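<p>As a rough sketch of just that clue-count step (the linear mapping and its bounds below are illustrative, not the exact values used in sudoku.js):</p>
<pre><code>def clues_for_difficulty(difficulty, min_clues=17, max_clues=40):
    """difficulty in [0.0, 1.0]: 0.0 -> many givens (easy), 1.0 -> close to the minimum (hard)."""
    return round(max_clues - difficulty * (max_clues - min_clues))

print(clues_for_difficulty(0.675))   # -> 24 givens for a moderately difficult 9x9 puzzle
</code></pre>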
<p>Furthermore at each step of the sudoku generation process, one can make sure the average alternatives (to be investigated) per empty cell is within given bounds (as a function of desired difficulty)</p>
<p>Depending on whether the algorithm uses backtrack or not (for the use case discussed the algorithm does no backtracking) the desired difficulty can be reached either with probability=1 or with high probability within bounds (respectively).</p>
<p>Tests of the sudokus generated with this algorithm and difficulty rating based on the previous approaches (human-like solver), show a correlation of desired and estimated difficulty rates, plus a greater ability for generalisation to arbitrary sudoku configurations.</p>
<p>(I have used this online <a href="http://www.sudoku-solutions.com/" rel="nofollow noreferrer">sudoku solver</a> (and also <a href="http://www.stolaf.edu/people/hansonr/sudoku/analyst.htm" rel="nofollow noreferrer">this one</a>) to correlate the difficulty rates of the test sudokus.)</p>
<p>The code is available free <a href="https://github.com/foo123/sudoku.js" rel="nofollow noreferrer">on github sudoku.js (along with sample demo application)</a>, a scaled-down version of CrossWord.js a professional crossword builder in JavaScript, by same author</p> | 2015-02-24 15:31:44.763000+00:00 | 2015-04-09 14:16:14.087000+00:00 | 2017-05-23 12:25:48.280000+00:00 | null | 10,488,719 | <p>So, I've done a fair bit of reading into generation of a Sudoku puzzle. From what I can tell, the standard way to have a Sudoku puzzle of a desired difficulty is to generate a puzzle, and then grade it afterwards, and repeat until you have one of an acceptable rating. This can be refined by generating via backtracing using some of the more complex solving patterns (XY-wing, swordfish, etc.), but that's not quite what I'm wanting to do here.</p>
<p>What I want to do, but have been unable to find any real resource on, is generate a puzzle from a "difficulty value" (0-1.0 value, 0 being the easiest, and 1.0 being the hardest). </p>
<p>For example, I want create a moderately difficult puzzle, so the value .675 is selected. Now using that value I want to be able to generate a moderately difficult puzzle.</p>
<p>Anyone know of something like this? Or perhaps something with a similar methodology?</p> | 2012-05-07 20:33:22.333000+00:00 | 2015-04-09 14:16:14.087000+00:00 | 2014-02-27 05:45:37.897000+00:00 | puzzle|sudoku | ['http://www.learn-sudoku.com/basic-techniques.html', 'https://stackoverflow.com/a/25110517/3591273', 'http://arxiv.org/abs/1201.0749', 'http://www.sudoku-solutions.com/', 'http://www.stolaf.edu/people/hansonr/sudoku/analyst.htm', 'https://github.com/foo123/sudoku.js'] | 6 |
63,356,226 | <p>It is likely to be possible, but we can't exactly know before trying.</p>
<p>You may try using SOTA (state-of-the-art) or near-SOTA convolutional neural network architectures such as <a href="https://arxiv.org/abs/1905.11946" rel="nofollow noreferrer">EfficientNet</a> or <a href="https://arxiv.org/abs/2004.08955" rel="nofollow noreferrer">ResNeSt</a>.</p>
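<p>A minimal Keras transfer-learning sketch of that approach (the input size, class count and frozen-backbone choice are assumptions to adapt to your jar dataset):</p>
<pre><code>import tensorflow as tf

num_classes = 20   # assumption: number of jar classes

base = tf.keras.applications.EfficientNetB0(include_top=False, weights="imagenet",
                                            input_shape=(224, 224, 3), pooling="avg")
base.trainable = False   # train only the new classification head first, fine-tune later if needed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
</code></pre>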
<p>There are better preprocessing methods than the standard ones in the usual deep learning libraries, such as <a href="https://research.google/pubs/pub47890/" rel="nofollow noreferrer">AutoAugment</a>, but they are usually not necessary unless you want to push the performance to the extreme or are training with very small datasets.</p>
<p>Also, training with larger datasets is definitely going to help.</p>
16,706,588 | <p>Yes, boost.odeint and boost.python should solve your problem. You can use odeint with Thrust. There are also some OpenCL libraries (VexCL, ViennaCL) which might be easier to use than Thrust. Have a look at <a href="http://arxiv.org/abs/1212.6326" rel="nofollow">this paper</a> for a comparison and for use cases of odeint on GPUs.</p>
<p>Boost.python can do the communication between the C++ application and Python. Another approach would be a very slim command line application for solving the ODE (using boost.odeint) and which is entirely controlled by your python application. </p> | 2013-05-23 06:01:23.170000+00:00 | 2013-05-23 06:01:23.170000+00:00 | null | null | 16,700,212 | <p>I posted here not too long ago about a model I am trying to build using pycuda which solves About 9000 coupled ODEs. My model is too slow however and an SO member suggested that memory transfers from host to GPU is probably the culprit. </p>
<p>Right now cuda is being used only to calculate the rate of change of each of the 9000 species I am dealing with. Since I am passing in an array from the host to the GPU to perform this calculation and returning an array from the GPU to integrate on the host I can see how this would slow things down. </p>
<p>Would boost be the solution to my problem? From what I read, boost allows interoperability between c++ and python. It also includes c++ odeint , which I read, partnered with thrust allows quick reduction and integration all on the GPU. Is my understanding correct? </p>
<p>Thank you,
Karsten</p> | 2013-05-22 19:35:51.443000+00:00 | 2013-05-23 06:01:23.170000+00:00 | null | boost|gpu|thrust|pycuda|odeint | ['http://arxiv.org/abs/1212.6326'] | 1 |
48,068,354 | <ol>
<li>This will definitely work, and is probably the solution the vast majority of users will choose. The disadvantage lies, of course, in having to maintain the mapping.</li>
<li>A hashing function will also work. There are, in fact, approaches which use hashing to <a href="https://arxiv.org/pdf/1706.03993.pdf" rel="nofollow noreferrer">reduce the</a> <a href="https://github.com/maciejkula/spotlight/tree/master/examples/bloom_embeddings" rel="nofollow noreferrer">dimensionality</a> of the embedding layers required. One thing worth bearing in mind is that the resulting hash range should be relatively compact: most implementations will allocate parameters for all possible values, so a hashing function that can hash to very large values will require exorbitant amounts of memory. Hashing followed by a modulo function could work well (see the sketch after this list); the trade-off then is between the memory required to hold all parameters and the collision probability.</li>
</ol>
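<p>A minimal sketch of the hashing-plus-modulo idea from the second point (the bucket count is an assumption; it is the knob that trades memory against collision probability):</p>
<pre><code>import xxhash

N_BUCKETS = 2 ** 22   # bounded id range -> bounded embedding table size

def uuid_to_int(uuid_str):
    return xxhash.xxh32(uuid_str.encode("utf-8")).intdigest() % N_BUCKETS

uuid_to_int("0003374a-a35c-46ed-96d2-0ea32b753199")
</code></pre>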
<p>In LightFM as well as most other implementations, recommendations can only be made for users and items (or at least for user and item <em>features</em>) that were present during the training. The mapping will then be a part of the model itself, and be effectively frozen until a new model is trained.</p> | 2018-01-02 21:53:19.180000+00:00 | 2018-01-02 21:53:19.180000+00:00 | null | null | 48,068,147 | <p><a href="https://github.com/lyst/lightfm" rel="nofollow noreferrer">LightFM</a> and other libraries ask for a 32 bit integer id e.g for users. But, our user id is a UUID e.g. <code>0003374a-a35c-46ed-96d2-0ea32b753199</code>. I was wondering what you would recommend in scenarios like these. What I have come up with is:</p>
<ul>
<li>Create a bidirectional dictionary either in memory or in a database to keep a UUID <-> Int mapping. e.g. <a href="https://github.com/jab/bidict" rel="nofollow noreferrer">https://github.com/jab/bidict</a></li>
<li>Use a non cryptographic hash function like MurmurHash3 or xxHash. For e.g. for 10 million UUIDs, I got around 11,521 or 0.1% collision using <code>xxhash</code>. Is that negligible for a recommender system?</li>
</ul>
<p>I'm also curious on how this would apply in an online prediction scenario, where given the UUID, user interactions and the model, I have to predict the recommendations for a model which needs 32 bit integers. If I use the in memory bidict approach, then that won't work in this case and hence I may have to create a persistent key-value store in the worst case.</p> | 2018-01-02 21:33:11.163000+00:00 | 2018-01-02 21:53:19.180000+00:00 | 2018-01-02 21:41:32.793000+00:00 | hash|hashtable|recommendation-engine | ['https://arxiv.org/pdf/1706.03993.pdf', 'https://github.com/maciejkula/spotlight/tree/master/examples/bloom_embeddings'] | 2 |
26,304,197 | <p>This is currently a very active field in programming languages research.</p>
<p>I know of these two different approaches that look at the problem:</p>
<ul>
<li><a href="http://arxiv.org/pdf/1409.0166.pdf" rel="nofollow">http://arxiv.org/pdf/1409.0166.pdf</a></li>
<li><a href="http://research.microsoft.com/en-us/um/people/sumitg/pubs/cacm14.pdf" rel="nofollow">http://research.microsoft.com/en-us/um/people/sumitg/pubs/cacm14.pdf</a> (this is actually only one of very many papers by Sumit and his group)</li>
</ul>
<p>You may want to look at these things to find something that could help with your problem (and edit this answer to make it more useful).</p> | 2014-10-10 16:43:58.693000+00:00 | 2014-10-10 16:43:58.693000+00:00 | null | null | 26,303,388 | <p>For a class, I would like to automatically evaluate (parts) of the coding assignments of students. The setup I had in mind is something like:</p>
<ol>
<li>Students get a class skeleton, which they have to fill in. </li>
<li>Students ``upload'' this class definition to a server (or via webinterface)</li>
<li>The server runs a script an test on specific functions, eg class.sigmoid(x), and checks if the output of the function is correct and might give suggestions.</li>
</ol>
<p>This setup brings a whole lot of problems, since you're evaluating untrusted code. However, it would be extremely useful, for many of my classes, so I'm willing to spend some time in thinking it trough. I remember Coursera had something similar for matlab/octace assignments, but I can't get the details of that.</p>
<p>I've looked at many online python interfaces (eg, codecademy.com, ideone.com, c9.io); while they seem perfect to learn and or share code, with online evaluation. I do miss the option, that the evaluation script is "hidden" from the students (ie the evaluation script should contain a correct reference implementation to compare output on random generated data). Moreover, the course I give requires some data (eg images) and packages (sklearn / numpy), which is not always available.</p>
<p>Specifically, my questions are</p>
<ol>
<li>Do I miss an online environment which actually offers such a functionality. (that would be easiest)</li>
<li>To set this up myself, I was thinking to host it at (eg) amazon cloud (so no problem with infrastructure at University), but are there any python practices you could recommend on sandboxing the evaluation? </li>
</ol>
<p>Thanks in advance for any suggestions!</p>
<hr>
<p>Pity to hear that the question is not suitable for StackOverflow. Thanks to the people (partially) answering the question.</p>
<p>After some more feedback via other channels, I think my approach will become as follows:</p>
<ol>
<li>Student gets skeleton and fills it in</li>
<li>Student also has the evaluation script.</li>
<li>In the script, some connections with a server are made to
<ul>
<li>login </li>
<li>obtain some random data</li>
<li>check if the output of the students code is numerically identical to what the server expects.</li>
</ul></li>
</ol>
<p>In this way the students code is evaluated locally, but only output is send to the server. This limits the kind of evaluations possible, but still allows for kind of automatic evaluation of code.</p> | 2014-10-10 15:55:08.790000+00:00 | 2014-10-11 15:46:58.490000+00:00 | 2014-10-11 15:46:58.490000+00:00 | python|matlab | ['http://arxiv.org/pdf/1409.0166.pdf', 'http://research.microsoft.com/en-us/um/people/sumitg/pubs/cacm14.pdf'] | 2 |
60,615,939 | <p>The answer to this lies in the (admittedly very brief) description of what the tasks are about:</p>
<blockquote>
<p>[<code>BertForMultipleChoice</code>] [...], e.g. for RocStories/SWAG tasks.</p>
</blockquote>
<p>When looking at the <a href="https://arxiv.org/pdf/1808.05326.pdf" rel="noreferrer">paper for SWAG</a>, it seems that the task is actually learning to <em>choose from varying options</em>. This is in contrast to your "classical" classification task, in which the "choices" (i.e., classes) <em>do not vary</em> across your samples, which is exactly what <code>BertForSequenceClassification</code> is for.</p>
<p>Both variants can in fact be for an arbitrary number of classes (in the case of <code>BertForSequenceClassification</code>), respectively choices (for <code>BertForMultipleChoice</code>), via changing the <code>labels</code> parameter in the config. But, since it seems like you are dealing with a case of "classical classification", I suggest using the <code>BertForSequenceClassification</code> model.</p>
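<p>For your 5-way case, a minimal sketch of setting that up (the checkpoint name is just the standard pretrained one; everything else follows the usual Transformers workflow):</p>
<pre><code>from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)

inputs = tokenizer("an example sentence to classify", return_tensors="pt")
outputs = model(**inputs)   # outputs.logits has shape [1, 5]; apply argmax (or softmax) to get the class
</code></pre>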
<p>Shortly addressing the missing Softmax in <code>BertForSequenceClassification</code>: Since classification tasks can compute loss across classes indipendent of the sample (unlike multiple choice, where your distribution is changing), this allows you to use Cross-Entropy Loss, which factors in Softmax in the backpropagation step for <a href="https://medium.com/@zhang_yang/understanding-cross-entropy-implementation-in-pytorch-softmax-log-softmax-nll-cross-entropy-416a2b200e34" rel="noreferrer">increased numerical stability</a>.</p> | 2020-03-10 10:41:23.373000+00:00 | 2020-03-10 10:41:23.373000+00:00 | null | null | 60,610,280 | <p>I'm working on a text classification problem (e.g. sentiment analysis), where I need to classify a text string into one of five classes.</p>
<p>I just started using the <a href="https://huggingface.co/transformers/index.html" rel="noreferrer">Huggingface Transformer</a> package and BERT with PyTorch. What I need is a classifier with a softmax layer on top so that I can do 5-way classification. Confusingly, there seem to be two relevant options in the Transformer package: <a href="https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification" rel="noreferrer">BertForSequenceClassification</a> and <a href="https://huggingface.co/transformers/model_doc/bert.html#bertformultiplechoice" rel="noreferrer">BertForMultipleChoice</a>.</p>
<p><strong>Which one should I use for my 5-way classification task? What are the appropriate use cases for them?</strong></p>
<p>The documentation for <strong>BertForSequenceClassification</strong> doesn't mention softmax at all, although it does mention cross-entropy. I am not sure if this class is only for 2-class classification (i.e. logistic regression).</p>
<blockquote>
<p><em>Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.</em></p>
<ul>
<li><em><strong>labels</strong> (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).</em></li>
</ul>
</blockquote>
<p>The documentation for <strong>BertForMultipleChoice</strong> mentions softmax, but the way the labels are described, it sound like this class is for multi-label classification (that is, a binary classification for multiple labels).</p>
<blockquote>
<p><em>Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.</em></p>
<ul>
<li><em><strong>labels</strong> (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices] where num_choices is the size of the second dimension of the input tensors.</em></li>
</ul>
</blockquote>
<p>Thank you for any help.</p> | 2020-03-10 01:02:54.743000+00:00 | 2020-03-10 10:41:23.373000+00:00 | 2020-06-20 09:12:55.060000+00:00 | python|machine-learning|pytorch|bert-language-model|huggingface-transformers | ['https://arxiv.org/pdf/1808.05326.pdf', 'https://medium.com/@zhang_yang/understanding-cross-entropy-implementation-in-pytorch-softmax-log-softmax-nll-cross-entropy-416a2b200e34'] | 2 |
50,698,897 | <p>There is an SDN extension for INET, see <a href="https://arxiv.org/abs/1609.04554" rel="nofollow noreferrer">this paper</a> and the corresponding code on <a href="https://github.com/marco-tiloca-sics/INET_SDN_dev" rel="nofollow noreferrer">github</a>.</p>
<p>Regarding <code>UDPEchoApp</code>: this behavior is intended. An echo application responds to whatever request is sent; if you send the request to the controller (as in your config), and run the EchoApp on the controller, your UDP packet will be responded to from the controller. However, you don't need a controller for a non-SDN scenario at all (you'd just use <code>client->switch->host2</code>).</p> | 2018-06-05 11:24:16.690000+00:00 | 2018-06-05 11:24:16.690000+00:00 | null | null | 50,680,109 | <p>I'm trying to create SDN model on OMNet v5.2.1. However, there is no SDN controller module in INET. That's why I use standartHost module as controller. Can I obtain reasonable result?</p>
<p>In addition, I use UDP protocol on my network. <em>Since I want sending packet to follow this path:</em><br>
<strong>client -> switch -> controller -> switch -> host2</strong> ,</p>
<p>I defined client's protocol as UDPBasicApp and controller's protocol as UDPEcho. However UDPEcho protocol makes the path :<br>
<strong>client -> switch -> controller -> switch -> client</strong> </p>
<p>To sum up, the client gets back the packet which it sends. How can I fix it?</p>
<p>I'm enclosing the part of .INI file related to UDP protocols </p>
<hr>
<pre><code>[Config Step1]
network = Test
description = "Fully automatic static routing table configuration"
*.client.numUdpApps = 1
*.client.udpApp[0].typename = "UDPBasicApp"
*.client.udpApp[0].destAddresses = "controller"
*.client.udpApp[0].destPort = 5000
*.client.udpApp[0].messageLength = 1000B
*.client.udpApp[0].sendInterval = exponential(12ms)
*.client.udpApp[0].packetName = "UDPData"
*.controller.numUdpApps = 1
*.controller.udpApp[0].typename = "UDPEchoApp"
*.controller.udpApp[0].localPort = 5000
*.controller.pingApp[*].destAddr = "host2"
</code></pre> | 2018-06-04 11:59:13.637000+00:00 | 2019-07-09 18:21:39.407000+00:00 | null | c++|oop|omnet++|sdn | ['https://arxiv.org/abs/1609.04554', 'https://github.com/marco-tiloca-sics/INET_SDN_dev'] | 2 |
73,751,408 | <p>As there is a lot of active development in this space, the answer to the question which compiler is best for offloading to NVIDIA GPUs will probably vary over time/versions (as well as the application). So if you want to be sure you are getting the best performance, you will need to benchmark the most recent versions of the different compilers (See Jérôme Richard's answer) with your specific application and keep doing so in the future.</p>
<p>Depending on size and complexity of your application one might argue that the time this takes could be spent better on implementing CUDA kernels but on the other hand a bad CUDA implementation is potentially as slow as what the "worst compiler" is generating from OpenMP.</p>
<p>There are papers benchmarking different OpenMP implementations, but at this point in time I have not found any including the Intel compiler used by OP. The results in <a href="https://arxiv.org/abs/2010.09454v2" rel="nofollow noreferrer">Performance Assessment of OpenMP Compilers Targeting NVIDIA V100 GPUs (2020)</a> are probably not very meaningful anymore.</p>
<p><a href="https://arxiv.org/abs/2203.02096" rel="nofollow noreferrer">Portability for GPU-accelerated molecular docking applications for cloud and HPC: can portable compiler directives provide performance across all platforms? (2022)</a> might be worth looking into for getting an overview of implementations, optimizations and portable alternatives to OpenMP.</p>
<p>All that being said, if you have no other reason for using the DPC++ compiler, and do not want to do all that benchmarking, I would rather go for either one of the big, established FOS toolchains (GCC or Clang) due to the big user base or for the NVIDIA HPC compilers due to their interest in being fast on their own hardware. Until the Intel compiler is more established and there are more results available publicly I would only use it for offloading to Intel hardware.</p>
<p>Since new supercomputers with AMD (<a href="https://en.wikipedia.org/wiki/Frontier_(supercomputer)" rel="nofollow noreferrer">Frontier</a> and <a href="https://en.wikipedia.org/wiki/LUMI" rel="nofollow noreferrer">LUMI</a>) and Intel (<a href="https://en.wikipedia.org/wiki/Aurora_(supercomputer)" rel="nofollow noreferrer">Aurora</a>) accelerators are already here or will be in the very near future, I expect a lot of comparisons between accelerators and portable programming models to be published as many HPC libraries and applications will need to support accelerators from all vendors.</p> | 2022-09-17 00:38:01.143000+00:00 | 2022-09-17 00:38:01.143000+00:00 | null | null | 73,746,723 | <p>I'm on a mission to write a <a href="https://stackoverflow.com/questions/73472975/openmp-marking-functions-to-be-included-in-the-offloaded-code">program with OpenMP offloading to a GPU</a>. At the moment I compile my code with Intel oneAPI DPC++ compiler <code>icpx</code> v2022.1.0 and aim to utilise an NVIDIA Tesla V100 at the backend. Please find below the relevant parts of my <code>Makefile</code>:</p>
<pre><code>MKLROOT = /lustre/system/local/apps/intel/oneapi/2022.2.0/mkl/latest
CXX = icpx
INC =-I"${MKLROOT}/include"
CXXFLAGS =-qopenmp -fopenmp-targets=spir64 ${INC} --gcc-toolchain=/lustre/system/local/apps/gcc9/9.3.0
LDFLAGS =-qopenmp -fopenmp-targets=spir64 -fsycl -L${MKLROOT}/lib/intel64
LDLIBS =-lmkl_sycl -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lsycl -lOpenCL -lstdc++ -lpthread -lm -ldl
${EXE}: ${OBJ}
${CXX} ${CXXFLAGS} $^ ${LDFLAGS} ${LDLIBS} -o $@
</code></pre>
<p>The code compiles without errors and warnings, but I'm not entirely sure it does use the GPU when it runs.</p>
<ol>
<li>How can I verify that? Can I use an Intel or an NVIDIA profiler to check that?</li>
<li>Is my assumption correct, that Intel compiler supports offloading to an NVIDIA GPU?</li>
<li>Or should I better use an NVIDIA compiler to enable OpenMP offloading to NVIDIA graphics cards?</li>
</ol> | 2022-09-16 14:55:12.897000+00:00 | 2022-09-17 00:38:01.143000+00:00 | 2022-09-16 16:32:43.247000+00:00 | gpu|openmp|nvidia|intel-oneapi|offloading | ['https://arxiv.org/abs/2010.09454v2', 'https://arxiv.org/abs/2203.02096', 'https://en.wikipedia.org/wiki/Frontier_(supercomputer)', 'https://en.wikipedia.org/wiki/LUMI', 'https://en.wikipedia.org/wiki/Aurora_(supercomputer)'] | 5 |
40,572,055 | <p>I don't know if this is exactly what you were searching for, but here is my new code:</p>
<pre><code>Function Clean-InvalidFileNameChars {
param(
[Parameter(Mandatory=$true,
Position=0,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true)]
[String]$Name
)
$invalidChars = [IO.Path]::GetInvalidFileNameChars() -join ''
$re = "[{0}]" -f [RegEx]::Escape($invalidChars)
$res=($Name -replace $re)
return $res.Substring(0, [math]::Min(260, $res.Length))
}
Function Clean-InvalidPathChars {
param(
[Parameter(Mandatory=$true,
Position=0,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true)]
[String]$Name
)
$invalidChars = [IO.Path]::GetInvalidPathChars() -join ''
$re = "[{0}]" -f [RegEx]::Escape($invalidChars)
$res=($Name -replace $re)
return $res.Substring(0, [math]::Min(248, $res.Length))
}
$rootpath="c:\temp2"
$rootpathresult="c:\tempresult"
$template=@'
[3] arXiv:1611.00057 [pdf, ps, other]
Title: {title*:Holomorphy of adjoint $L$ functions for quasisplit A2}
Authors: Joseph Hundley
Comments: 18 pages
Subjects: {subject:Number Theory (math.NT)}
[4] arXiv:1611.00066 [pdf, other]
Title: {title*:Many Haken Heegaard splittings}
Authors: Alessandro Sisto
Comments: 12 pages, 3 figures
Subjects: {subject:Geometric Topology (math.GT)}
[5] arXiv:1611.00067 [pdf, ps, other]
Title: {title*:Subsumed homoclinic connections and infinitely many coexisting attractors in piecewise-linear maps}
Authors: David J.W. Simpson, Christopher P. Tuffley
Subjects: {subject:Dynamical Systems (math.DS)}
[21] arXiv:1611.00114 [pdf, ps, other]
Title: {title*:Faces of highest weight modules and the universal Weyl polyhedron}
Authors: Gurbir Dhillon, Apoorva Khare
Comments: We recall preliminaries and results from the companion paper arXiv:1606.09640
Subjects: {subject:Representation Theory (math.RT)}; Combinatorics (math.CO); Metric Geometry (math.MG)
'@
#extract utils data and clean
$listbook=gci $rootpath -File -filter *.pdf | foreach { New-Object psobject -Property @{file=$_.fullname; books= ((iwr "https://arxiv.org/abs/$($_.BaseName)").ParsedHtml.body.outerText | ConvertFrom-String -TemplateContent $template)}} | select file -ExpandProperty books | select file, @{N="Subject";E={Clean-InvalidPathChars $_.subject}}, @{N="Title";E={Clean-InvalidFileNameChars $_.title}}
#build dirs and copy+rename file
$listbook | %{$newpath="$rootpathresult\$($_.subject)"; New-Item -ItemType Directory -Path "$newpath" -Force; Copy-Item $_.file "$newpath\$($_.title).pdf" -Force}
</code></pre> | 2016-11-13 08:43:09.870000+00:00 | 2016-11-13 08:43:09.870000+00:00 | null | null | 40,541,873 | <p>To crawl and rename filenames I using this PowerShell code</p>
<pre><code>gci *.pdf | foreach { (iwr "https://arxiv.org/abs/$($_.BaseName)") -match 'primary-subject">(.*?)</span>'; $matches[1] }
</code></pre>
<p>But I need to order and move them into relative <em>Subject</em> folders</p>
<p><strong>This is HTTP</strong> <a href="https://arxiv.org/list/math/1611" rel="nofollow noreferrer">LINK</a>.</p>
<p>Text file is so formatted</p>
<pre><code>[1] arXiv:1611.00024 [pdf, other]
Symmetry, Outer Bounds, and Code Constructions: A Computer-Aided Investigation on the Fundamental Limits of Caching
Chao Tian
Comments: 34 pages, 7 figures; submitted to IEEE Trans. Information Theory
Subjects: Information Theory (cs.IT)
[2] arXiv:1611.00044 [pdf, ps, other]
Optimal Signaling for Secure Communications over Gaussian MIMO Wiretap Channels
Sergey Loyka, Charalambos D. Charalambous
Comments: accepted by IEEE Trans. Info. Theory
Subjects: Information Theory (cs.IT)
[3] arXiv:1611.00057 [pdf, ps, other]
Holomorphy of adjoint L functions for quasisplit A2
Joseph Hundley
Comments: 18 pages
Subjects: Number Theory (math.NT)
[4] arXiv:1611.00066 [pdf, other]
Many Haken Heegaard splittings
Alessandro Sisto
Comments: 12 pages, 3 figures
Subjects: Geometric Topology (math.GT)
[6] arXiv:1611.00069 [pdf, other]
On singular square metrics with vanishing Douglas curvature
Changtao Yu, Hongmei Zhu
Comments: 12 pages, 2 figures
Subjects: Differential Geometry (math.DG); Metric Geometry (math.MG)
[7] arXiv:1611.00071 [pdf, ps, other]
Eigenvalues of rotations and braids in spherical fusion categories
Daniel Barter, Corey Jones, Henry Tucker
Subjects: Quantum Algebra (math.QA); Representation Theory (math.RT)
</code></pre>
<p>I need to extract <em>Subjects</em> category name from each file text block</p>
<pre><code>Subjects: Information Theory (cs.IT)
Subjects: Information Theory (cs.IT)
Subjects: Number Theory (math.NT)
Subjects: Geometric Topology (math.GT)
Subjects: Differential Geometry (math.DG)
Subjects: Quantum Algebra (math.QA)
</code></pre>
<p>I don't want include <em>second tag</em> like</p>
<pre><code>Metric Geometry (math.MG)
Representation Theory (math.RT)
</code></pre>
<p>So finally I should make this folders like this</p>
<pre><code>[folder] Information Theory (cs.IT)
[folder] Number Theory (math.NT)
[folder] Geometric Topology (math.GT)
[folder] Differential Geometry (math.DG)
[folder] Quantum Algebra (math.QA)
</code></pre>
<p>and move finally files <em>into</em> relative subject folders like this</p>
<pre><code>[folder] Information Theory (cs.IT)
|
|__ [file] Symmetry, Outer Bounds, and Code Constructions...
|__ [file] Optimal Signaling for Secure Communications...
[folder] Number Theory (math.NT)
|
|__ [file] Holomorphy of adjoint L functions...
</code></pre> | 2016-11-11 05:47:41.613000+00:00 | 2016-11-13 08:43:09.870000+00:00 | 2016-11-11 11:05:22.287000+00:00 | file|powershell|batch-file|batch-processing|directory | [] | 0 |
40,549,555 | <pre><code>Function Clean-InvalidFileNameChars {
param(
[Parameter(Mandatory=$true,
Position=0,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true)]
[String]$Name
)
$invalidChars = [IO.Path]::GetInvalidFileNameChars() -join ''
$re = "[{0}]" -f [RegEx]::Escape($invalidChars)
$res=($Name -replace $re)
return $res.Substring(0, [math]::Min(260, $res.Length))
}
Function Clean-InvalidPathChars {
param(
[Parameter(Mandatory=$true,
Position=0,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true)]
[String]$Name
)
$invalidChars = [IO.Path]::GetInvalidPathChars() -join ''
$re = "[{0}]" -f [RegEx]::Escape($invalidChars)
$res=($Name -replace $re)
return $res.Substring(0, [math]::Min(248, $res.Length))
}
$res=Invoke-WebRequest "https://arxiv.org/list/math/1611"
$rootpath="c:\temp"
$template=@'
[3] arXiv:1611.00057 [pdf, ps, other]
Title: {title*:Holomorphy of adjoint $L$ functions for quasisplit A2}
Authors: Joseph Hundley
Comments: 18 pages
Subjects: {subject:Number Theory (math.NT)}
[4] arXiv:1611.00066 [pdf, other]
Title: {title*:Many Haken Heegaard splittings}
Authors: Alessandro Sisto
Comments: 12 pages, 3 figures
Subjects: {subject:Geometric Topology (math.GT)}
[5] arXiv:1611.00067 [pdf, ps, other]
Title: {title*:Subsumed homoclinic connections and infinitely many coexisting attractors in piecewise-linear maps}
Authors: David J.W. Simpson, Christopher P. Tuffley
Subjects: {subject:Dynamical Systems (math.DS)}
[21] arXiv:1611.00114 [pdf, ps, other]
Title: {title*:Faces of highest weight modules and the universal Weyl polyhedron}
Authors: Gurbir Dhillon, Apoorva Khare
Comments: We recall preliminaries and results from the companion paper arXiv:1606.09640
Subjects: {subject:Representation Theory (math.RT)}; Combinatorics (math.CO); Metric Geometry (math.MG)
'@
#get date and cut with format template, group by Subject and clean Title and Subject for transformation to dir and file name
$grousubject=$res.ParsedHtml.body.outerText | ConvertFrom-String -TemplateContent $template | select @{N="Subject";E={Clean-InvalidPathChars $_.subject}}, @{N="Title";E={Clean-InvalidFileNameChars $_.title}} | group Subject
#create dir and files
$grousubject | %{$path= "$rootpath\$($_.Name)" ; $_.group.title | %{New-Item -ItemType File -Path "$path\$_" -Force} }
</code></pre> | 2016-11-11 13:58:52.550000+00:00 | 2016-11-11 13:58:52.550000+00:00 | null | null | 40,541,873 | <p>To crawl and rename filenames I'm using this PowerShell code</p>
<pre><code>gci *.pdf | foreach { (iwr "https://arxiv.org/abs/$($_.BaseName)") -match 'primary-subject">(.*?)</span>'; $matches[1] }
</code></pre>
<p>But I also need to sort them and move them into the relative <em>Subject</em> folders</p>
<p><strong>This is HTTP</strong> <a href="https://arxiv.org/list/math/1611" rel="nofollow noreferrer">LINK</a>.</p>
<p>The text file is formatted like this:</p>
<pre><code>[1] arXiv:1611.00024 [pdf, other]
Symmetry, Outer Bounds, and Code Constructions: A Computer-Aided Investigation on the Fundamental Limits of Caching
Chao Tian
Comments: 34 pages, 7 figures; submitted to IEEE Trans. Information Theory
Subjects: Information Theory (cs.IT)
[2] arXiv:1611.00044 [pdf, ps, other]
Optimal Signaling for Secure Communications over Gaussian MIMO Wiretap Channels
Sergey Loyka, Charalambos D. Charalambous
Comments: accepted by IEEE Trans. Info. Theory
Subjects: Information Theory (cs.IT)
[3] arXiv:1611.00057 [pdf, ps, other]
Holomorphy of adjoint L functions for quasisplit A2
Joseph Hundley
Comments: 18 pages
Subjects: Number Theory (math.NT)
[4] arXiv:1611.00066 [pdf, other]
Many Haken Heegaard splittings
Alessandro Sisto
Comments: 12 pages, 3 figures
Subjects: Geometric Topology (math.GT)
[6] arXiv:1611.00069 [pdf, other]
On singular square metrics with vanishing Douglas curvature
Changtao Yu, Hongmei Zhu
Comments: 12 pages, 2 figures
Subjects: Differential Geometry (math.DG); Metric Geometry (math.MG)
[7] arXiv:1611.00071 [pdf, ps, other]
Eigenvalues of rotations and braids in spherical fusion categories
Daniel Barter, Corey Jones, Henry Tucker
Subjects: Quantum Algebra (math.QA); Representation Theory (math.RT)
</code></pre>
<p>I need to extract the <em>Subjects</em> category name from each text block in the file</p>
<pre><code>Subjects: Information Theory (cs.IT)
Subjects: Information Theory (cs.IT)
Subjects: Number Theory (math.NT)
Subjects: Geometric Topology (math.GT)
Subjects: Differential Geometry (math.DG)
Subjects: Quantum Algebra (math.QA)
</code></pre>
<p>I don't want to include the <em>second tag</em>, like</p>
<pre><code>Metric Geometry (math.MG)
Representation Theory (math.RT)
</code></pre>
<p>So finally I should create folders like this</p>
<pre><code>[folder] Information Theory (cs.IT)
[folder] Number Theory (math.NT)
[folder] Geometric Topology (math.GT)
[folder] Differential Geometry (math.DG)
[folder] Quantum Algebra (math.QA)
</code></pre>
<p>and finally move the files <em>into</em> the relative subject folders like this</p>
<pre><code>[folder] Information Theory (cs.IT)
|
|__ [file] Symmetry, Outer Bounds, and Code Constructions...
|__ [file] Optimal Signaling for Secure Communications...
[folder] Number Theory (math.NT)
|
|__ [file] Holomorphy of adjoint L functions...
</code></pre> | 2016-11-11 05:47:41.613000+00:00 | 2016-11-13 08:43:09.870000+00:00 | 2016-11-11 11:05:22.287000+00:00 | file|powershell|batch-file|batch-processing|directory | [] | 0 |
49,488,451 | <p>You might want to have a look at my masters thesis <a href="https://arxiv.org/pdf/1707.09725.pdf" rel="nofollow noreferrer">Analysis and Optimization of Convolutional Neural Network Architectures</a>, chapter 2.5 page 15:</p>
<blockquote>
<p>A machine learning developer has the following choices to improve the model’s quality:<br/>
(I1) Change the problem definition (e.g., the classes which are to be distinguished)<br/>
(I2) Get more training data<br/>
(I3) Clean the training data<br/>
(I4) Change the preprocessing (see Appendix B.1)<br/>
(I5) Augment the training data set (see Appendix B.2)<br/>
(I6) Change the training setup (see Appendices B.3 to B.5)<br/>
(I7) Change the model (see Appendices B.6 and B.7)</p>
</blockquote>
<p>It's always good to check thoroughly where exactly the problem is and compare it with a human baseline. Reliably getting better than a human is super hard.</p> | 2018-03-26 09:46:22.247000+00:00 | 2018-03-26 09:46:22.247000+00:00 | null | null | 49,471,441 | <p>How do you proceed to increasing accuracy of your neural network?</p>
<p>I have tried lots of architectures, yet for my image detection task (classification + localization) I can only get 75% accuracy.</p>
<p>I am using the VOC2007 dataset, and I extracted only the data where one person is present.</p>
<p>What are the steps I can think of to increase the accuracy of my object detector?</p>
<p>Thanks for the help.</p>
22,992,202 | <p>The "fast modularity" algorithm by Clauset et al. (<a href="http://arxiv.org/abs/cond-mat/0408187" rel="nofollow">paper here</a>, <a href="http://www.cs.unm.edu/~aaron/research/fastmodularity.htm" rel="nofollow">code here</a>) uses a pair of linked data structures. On the one hand, you have a sparse matrix data structure (which is really just an adjacency list in which instead of storing the elements hanging off a particular array element as a linked list, we store them using a balanced binary tree data structure), and a max-heap. All the values in the sparse matrix (which are really the dQ_ij values for the potential merges in the algorithm) are also stored in the max-heap.</p>
<p>So, the max-heap is just an efficient way of finding the edge in the sparse matrix with the most positive value. Once you have the ij pair for that edge, you want to "insert" the elements of column (row) i into the elements of column (row) j, and then you want to delete column (row) i. So, you're not going to rebuild the entire max-heap after each pop from the max-heap. Instead, you want to delete some elements from it (the ones in the row/column that you delete from the sparse matrix) and update the values of others (the ones in the updated row/column for j).</p>
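<p>For illustration, here is a much-simplified Python sketch of that pop/update bookkeeping (not the authors' implementation; it uses lazy invalidation of stale heap entries instead of the pointer links described next):</p>
<pre><code>import heapq

# Sketch only: dQ[i][j] holds the modularity gain of merging communities i and j.
# Stale heap entries are simply skipped on pop instead of being re-heapified in place.
dQ = {}
heap = []  # entries are (-value, i, j) so heapq behaves as a max-heap

def set_dq(i, j, value):
    dQ.setdefault(i, {})[j] = value
    heapq.heappush(heap, (-value, i, j))

def pop_best_merge():
    while heap:
        neg, i, j = heapq.heappop(heap)
        if dQ.get(i, {}).get(j) == -neg:   # entry still matches the matrix?
            return i, j, -neg
    return None
</code></pre>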
<p>This is where the linked data structure is helpful -- in the original implementation, each element in the sparse matrix stores a pointer to its corresponding entry in the max-heap so that if you update the value in the sparse matrix, you can then find the corresponding element in the max-heap and update its value. Once you do this, you need to re-heapify the updated heap element, by letting it move (recursively) up or down in the heap. Similarly, if you delete an element in the sparse matrix, you can find its entry in the heap and call a delete function on it.</p> | 2014-04-10 15:19:10.987000+00:00 | 2014-04-10 15:19:10.987000+00:00 | null | null | 22,982,253 | <p>I have a list of binomial_heaps and each iteration of the algorithm I have to update the priority of an element in some of the binomial_heaps. For this I use the update function of the boost <a href="http://www.boost.org/doc/libs/1_55_0/doc/html/heap/reference.html#header.boost.heap.binomial_heap_hpp" rel="nofollow" title="binomial_heap">binomial_heap</a>. However one of the binomial_heaps I have to remove and rebuild completely (as all priorities change). Instead of using push every time (which if I understand correctly would have a complexity of n*log(n)) I would like to construct it based on iterators of an underlying container (a kind of heapify or make_heap operation which would be linear time). This seems possible in the standard <a href="http://www.cplusplus.com/reference/queue/priority_queue/priority_queue/" rel="nofollow" title="priority_queue">priority_queue</a>, but not in the boost implementation. On the other hand the standard one does not provide me with an update function. Is there a way around this where I can have both, or another library that supports both. Or maybe my reasoning, that pushing all elements on an empty priority queue is slower, is not correct?</p>
<p>Some might say there is something seriously wrong with the fact that I need to rebuild an entire priority queue which would make the use of the priority queue completely superfluous. The algorithm I want to implement is "Finding community structure in very large networks by Aaron Clauset" in which the authors do exactly that (unless I didn't interpret it correctly)</p>
<p>(Sorry couldn't post the link to the paper as I don't have enough reputation to post more than 2 links)</p> | 2014-04-10 08:08:25.777000+00:00 | 2014-04-10 15:19:10.987000+00:00 | 2014-04-10 08:14:19.960000+00:00 | c++|boost|priority-queue | ['http://arxiv.org/abs/cond-mat/0408187', 'http://www.cs.unm.edu/~aaron/research/fastmodularity.htm'] | 2 |
38,200,214 | <p>You can write it as 2*sum(A*B)/(sum(A^2)+sum(B^2)). Refer to <a href="https://arxiv.org/abs/1606.04797" rel="nofollow">https://arxiv.org/abs/1606.04797</a></p> | 2016-07-05 09:48:14.720000+00:00 | 2016-07-05 09:48:14.720000+00:00 | null | null | 37,302,962 | <p>I am trying to create a custom loss function for use in lasagne. </p>
<p>I would like to use Sorensen-dice coefficient which I have written with numpy, and use for evaluation like so:</p>
<pre><code>np.sum(np.logical_and(preds == num_labs, labels == num_labs)))*2/ (np.sum(preds == num_labs) + np.sum(labels == num_labs)
</code></pre>
<p>Which is doing:</p>
<p>Dice = (2*|X & Y|)/ (|X|+ |Y|)</p>
<p>I am now trying to implement this in theano, unsure how feasible it is. </p>
<p>Is it possible to use this as a loss function? I would like to use it since I am segmenting volumes, but as the loss needs to be differentiable for back propagation, how can I change it?</p>
<p>Any thoughts?</p> | 2016-05-18 14:43:05.940000+00:00 | 2016-07-05 09:48:14.720000+00:00 | null | python|theano|conv-neural-network|lasagne | ['https://arxiv.org/abs/1606.04797'] | 1 |
46,649,215 | <p>Look at this paper <a href="https://arxiv.org/abs/1708.02551" rel="nofollow noreferrer">https://arxiv.org/abs/1708.02551</a>
It describes this loss function in the context of semantic image segmentation.</p>
47,572,466 | <p>Just an update for anyone who looked at this. I found the solution reading this paper:</p>
<p><a href="https://arxiv.org/pdf/1711.10834.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1711.10834.pdf</a></p>
<p>and the following code:</p>
<pre><code>mean_function = np.dot(np.dot(K_pt ,np.linalg.inv(K_train)), ytrain)
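# K_pt = K(test, train), K_train = K(train, train), K_tp = K(train, test), K_prior = K(test, test)
# The line above is the standard GP posterior mean; the next line is the posterior covariance.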
covariance_function = K_prior - np.dot(np.dot(K_pt ,np.linalg.inv(K_train)) , K_tp)
f = np.random.multivariate_normal(mean_function[:,0],covariance_function , 100)
</code></pre>
<p>where <code>f</code> holds samples drawn from the posterior joint Gaussian</p> | 2017-11-30 11:32:41.050000+00:00 | 2017-11-30 11:32:41.050000+00:00 | null | null | 47,517,230 | <p>I have created and sampled a jointly Gaussian prior with mean=0 using the code below:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from math import pi
from scipy.spatial.distance import cdist
import scipy.stats as sts
x_prior = np.linspace(-10,10,101)
x_prior = x_prior.reshape(-1,1)
mu = np.zeros(x_prior.shape)
#defining the Kernel for the covariance function
def sec(a,b, length_scale , sigma) :
K = sigma * np.exp(-1/(2*length_scale) * cdist(a,b)**2)
return K
#defining the Gaussian Process prior
def GP(a , b, mu , kernel , length_scale, sigma , samples ) :
f = np.random.multivariate_normal(mu.flatten(), kernel(a ,b , length_scale , sigma ) , samples)
return f
prior = GP(x_prior ,x_prior, mu , sec , 100, 1 , 5)
plt.figure()
plt.grid()
plt.title('samples from the Gaussian prior')
plt.plot(x_prior , prior.T)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/w5Yim.png" rel="noreferrer"><img src="https://i.stack.imgur.com/w5Yim.png" alt="GP prior sampling"></a></p>
<p>Then, when adding in some 'observed' data, I wish to compute the posterior over these points but this is where I become stuck. </p>
<p>Here's my code for introducing new data:</p>
<pre><code>x_train = np.array([-10,-8,5,-1,2])
x_train = x_train.reshape(-1,1)
def straight_line(m , x , c):
y = 5*x + c
return y
ytrain = straight_line(5 , x_train , 0)
</code></pre>
<p>It's my understanding that you calculate a conditional distribution over the new data given the prior and new x values associated with the observed data.</p>
<p>Do you then wish to update the multivariate prior to become the posterior by performing some sort of change to the mean values to include the new y values? </p>
<p>I have used the following resources to try and attempt this:</p>
<p><a href="http://katbailey.github.io/post/gaussian-processes-for-dummies/" rel="noreferrer">http://katbailey.github.io/post/gaussian-processes-for-dummies/</a>
<a href="https://www.robots.ox.ac.uk/~mebden/reports/GPtutorial.pdf" rel="noreferrer">https://www.robots.ox.ac.uk/~mebden/reports/GPtutorial.pdf</a></p>
<p>but I'm really trying to understand what happens at each stage, and why, so that when I get a posterior (which I can't do) I know exactly how I got there. </p>
<p>Here are some solutions I've been trying to implement, but so far to no avail:</p>
<pre><code>K_train = sec(x_train , x_train , 1,1)
K_prior = sec(x_prior , x_prior , 1,1)
K_pt = sec(x_prior , x_train , 1,1)
K_tp = sec(x_train , x_prior , 1,1) ## = k_tp transpose
prior = sts.multivariate_normal(mu.flatten(), K_prior)
#mean_test = np.dot(K_p , np.linalg.inv(K_prior))
mean_function = np.dot(np.dot(K_tp ,np.linalg.inv(K_prior).T) , prior )
covariance_function = K_train - np.dot(np.dot(K_tp ,np.linalg.inv(K_prior).T) , K_pt)
</code></pre> | 2017-11-27 18:08:38.283000+00:00 | 2021-12-16 18:43:06.763000+00:00 | 2021-12-15 19:06:41.790000+00:00 | python|machine-learning|process|gaussian|sampling | ['https://arxiv.org/pdf/1711.10834.pdf'] | 1 |
35,736,757 | <p><code>tf.gradients</code> returns the gradient with respect to the loss. This means that if your loss is a sum of per-example losses, then the gradient is also the sum of per-example loss gradients. </p>
<p>The summing up is implicit. For instance if you want to minimize the sum of squared norms of <code>Wx-y</code> errors, the gradient with respect to <code>W</code> is <code>2(WX-Y)X'</code> where <code>X</code> is the batch of observations and <code>Y</code> is the batch of labels. You never explicitly form "per-example" gradients that you later sum up, so it's not a simple matter of removing some stage in the gradient pipeline.</p>
<p>A simple way to get <code>k</code> per-example loss gradients is to use batches of size 1 and do <code>k</code> passes. Ian Goodfellow <a href="http://arxiv.org/pdf/1510.01799v2.pdf" rel="noreferrer">wrote up</a> how to get all <code>k</code> gradients in a single pass, for this you would need to specify gradients explicitly and not rely on <code>tf.gradients</code> method</p> | 2016-03-02 01:21:29.963000+00:00 | 2016-03-02 01:49:47.820000+00:00 | 2016-03-02 01:49:47.820000+00:00 | null | 35,731,506 | <p>Given a simple mini-batch gradient descent problem on mnist in tensorflow (like in this <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py" rel="noreferrer">tutorial</a>), how can I retrieve the gradients for each example in the batch individually.</p>
<p><code>tf.gradients()</code> seems to return gradients averaged over all examples in the batch. Is there a way to retrieve gradients before aggregation?</p>
<p>Edit: A first step towards this answer is figuring out at which point tensorflow averages the gradients over the examples in the batch. I thought this happened in <a href="https://github.com/tensorflow/tensorflow/blob/8048088bfb82846078f8023bc6199e6424926624/tensorflow/python/ops/gradients.py#L437" rel="noreferrer">_AggregatedGrads</a>, but that doesn't appear to be the case. Any ideas?</p> | 2016-03-01 19:16:22.593000+00:00 | 2019-03-13 13:11:48.303000+00:00 | 2016-03-01 21:48:48.033000+00:00 | tensorflow | ['http://arxiv.org/pdf/1510.01799v2.pdf'] | 1 |
33,786,063 | <p>From the <a href="http://arxiv.org/pdf/1301.3781.pdf" rel="noreferrer">original paper</a>, section 3.1, it is clear that there is no hidden layer:</p>
<blockquote>
<p>"the first proposed architecture is similar to the feedforward NNLM
where the non-linear hidden layer is removed and the projection layer is shared for all words".</p>
</blockquote>
<p>With respect to your second question (what does sharing the projection layer means), it means that you consider only one single vector, which is the centroid of the vectors of all the words in context. Thus, instead of having <code>n-1</code> word vectors as input, you consider only one vector. This is why it is called Continuous <strong>Bag of Words</strong> (because word order is lost within the context of size <code>n-1</code>).</p> | 2015-11-18 17:04:21.787000+00:00 | 2015-11-19 08:39:14.250000+00:00 | 2015-11-19 08:39:14.250000+00:00 | null | 33,374,010 | <p>When I am reading one of papers of Tomas Mikolov: <a href="http://arxiv.org/pdf/1301.3781.pdf" rel="nofollow noreferrer">http://arxiv.org/pdf/1301.3781.pdf</a></p>
<p>I have one concern on the Continuous Bag-of-Words Model section:</p>
<blockquote>
<p>The first proposed architecture is similar to the feedforward NNLM, where the non-linear hidden layer is removed and the projection layer is shared for all words (not just the projection matrix); thus, all words get projected into the same position (their vectors are averaged).</p>
</blockquote>
<p>I find some people mention that there is a hidden layer in the Word2Vec model, but from my understanding, there is only one projection layer in that model. Does this projection layer do the same work as a hidden layer?</p>
<p>Another question is: how is the input data projected into the projection layer? </p>
<p>"the projection layer is shared for all words (not just the projection matrix)", what does that mean?</p> | 2015-10-27 16:57:34.620000+00:00 | 2017-11-18 21:33:41.857000+00:00 | 2017-11-18 21:33:41.857000+00:00 | neural-network|word2vec | ['http://arxiv.org/pdf/1301.3781.pdf'] | 1 |
72,006,900 | <p>This algorithm for doing the solitaire is covered in this research paper: <a href="https://arxiv.org/pdf/math/0006067.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/math/0006067.pdf</a>.</p>
<p>It claims that a linear time algorithm exists.</p>
<hr />
<p>A valid solution looks like this:</p>
<blockquote>
<pre class="lang-py prettyprint-override"><code>L = 1
or 011
or 110
or 11 (01)* [ 00 | 00(11)+ | (11)+00 | (11)*1011 | 1101(11)* ] (10)*11 # (A)
or 11 (01)* (11)* 01 # (B)
or 10 (11)* (10)* 11 # (C)
</code></pre>
</blockquote>
<hr />
<p>To solve A, you can use regex to check for the string. However, there are multiple cases of it due to the middle section.</p>
<p>First case: <code>11(01)*00(10)*11</code><br />
Second case: <code>11(01)*(00)(11)+(10)*11</code><br />
Third case: <code>11(01)*(11)+(00)(10)*11</code><br />
Fourth case: <code>11(01)*(11)*(1011)(10)*11</code><br />
Fifth case: <code>11(01)*1101(11)*(10)*11</code></p>
<hr />
<p>To solve B and C is a simpler regex match:</p>
<p>Solution for B: <code>11(01)*(11)*01</code><br />
Solution for C: <code>10(11)*(10)*11</code></p>
<hr />
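<p>Putting the base cases and the patterns for A, B and C together, a small Python sketch of the check could look like this (the exact-match anchoring with <code>fullmatch</code> is my assumption):</p>
<pre><code>import re

# All board strings accepted by the characterisation above
SOLVABLE_PATTERNS = [
    r"1", r"011", r"110",
    r"11(01)*00(10)*11",
    r"11(01)*(00)(11)+(10)*11",
    r"11(01)*(11)+(00)(10)*11",
    r"11(01)*(11)*(1011)(10)*11",
    r"11(01)*1101(11)*(10)*11",
    r"11(01)*(11)*01",
    r"10(11)*(10)*11",
]

def is_solvable(board):
    s = ''.join(str(i) for i in board)
    return any(re.fullmatch(p, s) for p in SOLVABLE_PATTERNS)
</code></pre>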
<p>If you match (you will need to convert it to a string though, such as <code>''.join([str(i) for i in mylist])</code> for example) <code>1</code>, <code>011</code>, <code>110</code>, or any of the patterns above, then it will be solvable.</p> | 2022-04-26 00:01:01.420000+00:00 | 2022-04-26 00:01:01.420000+00:00 | null | null | 72,006,710 | <p>First I will explain the rules of peg solitaire (for 1 dimension):
you initially start with a 1 dimensional board of length n. n-1 elements are pegs (filled) and 1 element is a hole (empty). So a starting position can be <code>[1, 1, 0, 1, 1, 1]</code> where 1s represent pegs and 0s represent holes for <code>n = 6</code></p>
<p>The goal of the game is to reach a board state where n-1 elements are holes and 1 element is a peg at any given position. So a valid solution can be <code>[0, 0, 0, 1, 0, 0]</code> for <code>n = 6</code></p>
<p>Your available moves at any given position is to move one peg by two positions to the right or to the left if and only if there is a peg between the two position, then once you make that move, replace the middle peg with a hole.</p>
<p>So for a board such as <code>board = [0, 1, 1, 0, 1, 1, 0]</code> there are two available moves.</p>
<ol>
<li>move <code>board[1]</code> to <code>board[3]</code> and set <code>board[2] = 0</code></li>
<li>move <code>board[5]</code> to <code>board[3]</code> and set <code>board[4] = 0</code></li>
<li>move <code>board[2]</code> to <code>board[0]</code> and set <code>board[1] = 0</code></li>
<li>move <code>board[4]</code> to <code>board[6]</code> and set <code>board[5] = 0</code></li>
</ol>
<p>The goal of the algorithm I am trying to make is to take an input of n where n > 2 and n is an even number, then for a board of length n, find all the positions for a start state at which a hole can be placed to produce a valid solution.</p>
<p>I have created a brute force algorithm which finds all the possible moves until a valid solution is reached, but it starts taking a very long time to find a solution past n > 20. So I was wondering if there are some optimizations I could do or different solution approaches.</p>
<p>Here is my code:</p>
<pre><code>import re
def generateBoard(n):
return [1]*n
def solve(board):
if checkBoard(board):
return True
elif checkUnsolvable(board):
return False
moves = []
for i in range(len(board)):
if i < len(board)-2:
if board[i] and board[i+1] and not board[i+2]:
moves.append((i, 'right'))
if i > 1:
if board[i] and board[i-1] and not board[i-2]:
moves.append((i, 'left'))
for move in moves:
newBoard = makeMove(board, move)
if solve(newBoard):
return True
continue
return False
def makeMove(board, move):
index, direction = move
b = [element for element in board]
if direction == 'right':
b[index] = 0
b[index+1] = 0
b[index+2] = 1
elif direction == 'left':
b[index] = 0
b[index-1] = 0
b[index-2] = 1
return b
def checkBoard(board):
if sum(board) == 1:
return True
return False
def checkUnsolvable(board):
expression1 = '1000+1' #RE for a proven to be unsolvable board
expression2 = '00100' #RE for a proven to be unsolvable board
string = ''.join([str(element) for element in board])
if re.search(expression1, string) or re.search(expression2, string):
return True
return False
def countSolutions(board):
indices = []
for i in range(len(board)):
b = [element for element in board]
b[i] = 0
if solve(b):
indices.append(i+1)
return indices
n = int(input())
print(countSolutions(generateBoard(n)))
</code></pre>
<p>Optimizations I have come up with so far:</p>
<ol>
<li>A board containing <code>[1, 0, 0, ..., 0, 1]</code> is unsolvable. So when we find this pattern we skip</li>
<li>Same thing for a board containing <code>[0, 0, .. 0, 1, 0, 0, ..,0]</code></li>
<li>We only need to check half of the board, as the solutions of the other half would be symmetrical.</li>
</ol>
<p>But despite these the code is still very slow.</p> | 2022-04-25 23:22:33.823000+00:00 | 2022-04-26 00:01:01.420000+00:00 | null | python|algorithm | ['https://arxiv.org/pdf/math/0006067.pdf'] | 1 |
44,667,825 | <p>Yes, that can be done. It's explained in the paper <code>'Striving for simplicity: The all convolutional net'</code> <a href="https://arxiv.org/pdf/1412.6806.pdf" rel="noreferrer">https://arxiv.org/pdf/1412.6806.pdf</a>. Quote from the paper: </p>
<blockquote>
<p>'We find that max-pooling can simply be replaced by a convolutional
layer with increased stride without loss in accuracy on several image
recognition benchmarks'</p>
</blockquote> | 2017-06-21 05:53:30.803000+00:00 | 2017-06-21 05:53:30.803000+00:00 | null | null | 44,666,390 | <p>In most of the architectures, conv layers are being followed by a pooling layer (max / avg etc.). As those pooling layers are just selecting the output of previous layer (i.e. conv), can we just use convolution with stride 2 and expect the similar accuracy results with reduced process need? </p> | 2017-06-21 03:30:30.513000+00:00 | 2019-06-12 02:08:01.303000+00:00 | null | deep-learning|conv-neural-network|max-pooling | ['https://arxiv.org/pdf/1412.6806.pdf'] | 1 |
40,969,626 | <p>I think that the answer is in the randomizing part of the algorithm. You can find more details here:</p>
<p><a href="https://github.com/gephi/gephi/wiki/Modularity" rel="nofollow noreferrer">https://github.com/gephi/gephi/wiki/Modularity</a></p>
<p><a href="https://sites.google.com/site/findcommunities/" rel="nofollow noreferrer">https://sites.google.com/site/findcommunities/</a></p>
<p><a href="http://lanl.arxiv.org/abs/0803.0476" rel="nofollow noreferrer">http://lanl.arxiv.org/abs/0803.0476</a></p> | 2016-12-05 08:00:52.200000+00:00 | 2016-12-05 08:00:52.200000+00:00 | null | null | 40,900,931 | <p>I have run modulartiy edge_weight/randomized at a resolution of 1, atleast 20 times on the same network. This is the same network I have created based on the following rule. Two nodes are related if they have atleast one item in common. Every time I run modularity I get a little different node distribution among communities. Additionally, I get 9 or 10 communities but it is never consistent. Any comment or help is much appreciated.</p> | 2016-12-01 01:00:27.560000+00:00 | 2016-12-08 17:32:28.200000+00:00 | null | gephi | ['https://github.com/gephi/gephi/wiki/Modularity', 'https://sites.google.com/site/findcommunities/', 'http://lanl.arxiv.org/abs/0803.0476'] | 3 |
64,946,271 | <p>The short answer is NO, clustering is not the only field under unsupervised learning. Unsupervised Learning is way more broader than only clustering. Clustering is just a sub-field of (or type of) unsupervised learning.</p>
<p>Little correction: KNN is not a clustering method, it is a classification algorithm. You probably meant to say k-means.</p>
<p>The essence of unsupervised learning is basically learning data without ground truth labels. Thus, the goal of unsupervised learning is to find representations of data given. The applications of unupervised learning vary a lot, though academically it is true that the field is less attractive to researchers due to its complexity and effort to build new stuff and/or make improvements.</p>
<p><a href="https://en.wikipedia.org/wiki/Dimensionality_reduction" rel="nofollow noreferrer">Dimension reduction</a> can be considered under unsupervised learning as you want to find a good representation of data in lower dimensions. They are also useful for visualizing high-dimension data. PCA, SNE, tSNE, Isomap, etc. are type of these applications.</p>
<p><a href="https://en.wikipedia.org/wiki/Cluster_analysis" rel="nofollow noreferrer">Clustering</a> methods are type of unsupervised learning as well where you want to group and label values based on some distance/divergence measure. Some applications could be K-means, Hierarchical clustering, etc.</p>
<p><a href="https://en.wikipedia.org/wiki/Generative_model" rel="nofollow noreferrer">Generative models</a>, generative models model the conditional probability P(X|Y=y). The research in this field boomed since the publication of <a href="https://en.wikipedia.org/wiki/Generative_adversarial_network" rel="nofollow noreferrer">GAN</a> (see <a href="https://arxiv.org/abs/1406.2661" rel="nofollow noreferrer">paper</a>). GANs can learn the data distribution without seeing the data explicitly. Methods are various where GANs, VAE, Gaussian Mixture, LDA, Hidden Markov model.</p>
<p>You can read further <a href="https://en.wikipedia.org/wiki/Unsupervised_learning" rel="nofollow noreferrer">here</a> on unsupervised learning.</p> | 2020-11-21 17:53:42.970000+00:00 | 2020-11-21 17:53:42.970000+00:00 | null | null | 64,945,939 | <p>Clustering (eg: K-means , EM algorithm etc) is used for unsupervised classification by forming clusters in the data sets using some distance measurement between data points</p>
<p>My question is:
Other than clustering, what can I use to perform unsupervised classification, and how? Or is there no option other than clustering for unsupervised classification?</p>
<p>Edit: Yes I meant k-means</p> | 2020-11-21 17:21:22.897000+00:00 | 2020-11-22 07:48:21.280000+00:00 | 2020-11-22 07:48:21.280000+00:00 | machine-learning|classification|unsupervised-learning | ['https://en.wikipedia.org/wiki/Dimensionality_reduction', 'https://en.wikipedia.org/wiki/Cluster_analysis', 'https://en.wikipedia.org/wiki/Generative_model', 'https://en.wikipedia.org/wiki/Generative_adversarial_network', 'https://arxiv.org/abs/1406.2661', 'https://en.wikipedia.org/wiki/Unsupervised_learning'] | 6 |
33,851,457 | <p>If you are looking for another multilingual POS tagger, you might want to try <a href="http://rdrpostagger.sourceforge.net/" rel="noreferrer">RDRPOSTagger</a>: a robust, easy-to-use and language-independent toolkit for POS and morphological tagging. See experimental results including performance speed and tagging accuracy on 13 languages in <a href="http://arxiv.org/abs/1412.4021" rel="noreferrer">this paper</a>. RDRPOSTagger now supports pre-trained POS and morphological tagging models for Bulgarian, Czech, Dutch, English, French, German, Hindi, Italian, Portuguese, Spanish, Swedish, Thai and Vietnamese. RDRPOSTagger also supports the pre-trained Universal POS tagging models for 40 languages.</p>
<p>In Python, you can utilize the pre-trained models for tagging a raw unlabeled text corpus as:</p>
<p><code>python RDRPOSTagger.py tag PATH-TO-PRETRAINED-MODEL PATH-TO-LEXICON PATH-TO-RAW-TEXT-CORPUS</code></p>
<p>Example: <code>python RDRPOSTagger.py tag ../Models/POS/German.RDR ../Models/POS/German.DICT ../data/GermanRawTest</code></p>
<p>If you would like to program with RDRPOSTagger, please follow code lines 92-98 in <code>RDRPOSTagger.py</code> module in <code>pSCRDRTagger</code> package. Here is an example:</p>
<pre><code>r = RDRPOSTagger()
r.constructSCRDRtreeFromRDRfile("../Models/POS/German.RDR") #Load POS tagging model for German
DICT = readDictionary("../Models/POS/German.DICT") #Load a German lexicon
r.tagRawSentence(DICT, "Die Reaktion des deutschen Außenministers zeige , daß dieser die außerordentlich wichtige Rolle Irans in der islamischen Welt erkenne .")
r = RDRPOSTagger()
r.constructSCRDRtreeFromRDRfile("../Models/POS/French.RDR") # Load POS tagging model for French
DICT = readDictionary("../Models/POS/French.DICT") # Load a French lexicon
r.tagRawSentence(DICT, "Cette annonce a fait l' effet d' une véritable bombe . ")
</code></pre> | 2015-11-22 04:01:51.460000+00:00 | 2016-05-19 07:58:31.263000+00:00 | 2016-05-19 07:58:31.263000+00:00 | null | 32,740,988 | <p>Recently I approached to the NLP and I tried to use <a href="http://www.nltk.org" rel="noreferrer">NLTK</a> and <a href="http://textblob.readthedocs.org" rel="noreferrer">TextBlob</a> for analyzing texts. I would like to develop an app that analyzes reviews made by travelers and so I have to manage a lot of texts written in different languages. I need to do two main operations: POS Tagging and lemmatization. I have seen that in NLTK there is a possibility to choice the the right language for sentences tokenization like this:</p>
<pre><code>tokenizer = nltk.data.load('tokenizers/punkt/PY3/italian.pickle')
</code></pre>
<p>I haven't found the right way to set the language for POS tagging and lemmatization in different languages yet. How can I set the correct corpora/dictionary for non-English texts such as Italian, French, Spanish or German? I also see that there is a possibility to import the "TreeBank" or "WordNet" modules, but I don't understand how I can use them. Otherwise, where can I find the respective corpora?</p>
<p>Can you give me some suggestions or references? Please bear in mind that I'm not an expert in NLTK.</p>
<p>Many Thanks. </p> | 2015-09-23 13:29:59.550000+00:00 | 2022-04-15 13:10:11.960000+00:00 | 2015-09-23 13:56:04.937000+00:00 | python|nlp|nltk|pos-tagger|lemmatization | ['http://rdrpostagger.sourceforge.net/', 'http://arxiv.org/abs/1412.4021'] | 2 |
58,060,817 | <p>This very much depends on how you got your embeddings. The CBOW model has two parameters: the embedding matrix, denoted <em>v</em>, and the output projection matrix <em>v'</em>. If you want to recover the probabilities that are used in the CBOW model at training time, you need <em>v'</em> as well.
See equation (2) in the <a href="https://arxiv.org/pdf/1310.4546.pdf" rel="nofollow noreferrer">word2vec paper</a>. Tools for pre-computing word embeddings usually don't do that, so you would need to modify them yourself.</p>
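<p>As a rough illustration of the kind of probability that equation refers to (shapes and names below are made up, and this is only a sketch of the full softmax, not the paper's negative-sampling objective):</p>
<pre><code>import numpy as np

V, d = 50000, 300
v = np.random.randn(V, d)        # input embedding matrix (what tools usually export)
v_prime = np.random.randn(V, d)  # output projection matrix (usually thrown away)

context_ids = [5, 17, 123]           # the words around the position of interest
h = v[context_ids].mean(axis=0)      # CBOW context representation
scores = v_prime @ h                 # one score per vocabulary word
p = np.exp(scores - scores.max())
p /= p.sum()                         # p[w] is the model's P(w | context)
</code></pre>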
<p>Anyway, if you want to compute a probability of a word, given a context, you should rather think about using a (neural) language model than a table of word embeddings. If you search the Internet, I am sure you will find something that suits your needs.</p> | 2019-09-23 10:52:13.780000+00:00 | 2019-09-23 10:52:13.780000+00:00 | null | null | 58,059,619 | <p>I know that some methods of generating word embeddings (e.g. CBOW) are based on predicting the likelihood of a given word appearing in a given context. I'm working with polish language, which is sometimes ambiguous with respect to segmentation, e.g. 'Coś' can be either treated as one word, or two words which have been conjoined ('Co' + '-ś') depending on the context. What I want to do, is create a tokenizer which is context sensitive. Assuming that I have the vector representation of the preceding context, and all possible segmentations, could I somehow calculate, or approximate the likelihood of particular words appearing in this context?</p> | 2019-09-23 09:40:18.783000+00:00 | 2019-09-23 10:52:13.780000+00:00 | null | nlp|word-embedding|word-sense-disambiguation | ['https://arxiv.org/pdf/1310.4546.pdf'] | 1 |
53,402,564 | <p>It turns out (per TensorFlow r1.13) that if len(xs) > 1, then tf.hessians(ys, xs) returns tensors corresponding to only the block diagonal submatrices of the full Hessian matrix. Full story and solutions in this paper <a href="https://arxiv.org/pdf/1905.05559" rel="nofollow noreferrer">https://arxiv.org/pdf/1905.05559</a>, and code at <a href="https://github.com/gknilsen/pyhessian" rel="nofollow noreferrer">https://github.com/gknilsen/pyhessian</a></p> | 2018-11-20 22:30:54.990000+00:00 | 2019-05-15 06:58:43.453000+00:00 | 2019-05-15 06:58:43.453000+00:00 | null | 51,948,171 | <p>Let's say I want to compute the Hessian of a scalar-valued function with respect to some parameters W (e.g the weights and biases of a feed-forward neural network).
If you consider the following code, implementing a two-dimensional linear model trained to minimize a MSE loss: </p>
<pre><code>import numpy as np
import tensorflow as tf
x = tf.placeholder(dtype=tf.float32, shape=[None, 2]) #inputs
t = tf.placeholder(dtype=tf.float32, shape=[None,]) #labels
W = tf.placeholder(np.eye(2), dtype=tf.float32) #weights
preds = tf.matmul(x, W) #linear model
loss = tf.reduce_mean(tf.square(preds-t), axis=0) #mse loss
params = tf.trainable_variables()
hessian = tf.hessians(loss, params)
</code></pre>
<p>you'd expect <code>session.run(tf.hessian,feed_dict={})</code> to return a 2x2 matrix (equal to W). It turns out that because <code>params</code>is a 2x2 tensor, the output is rather a tensor with shape [2, 2, 2, 2]. While I can easily reshape the tensor to obtain the matrix I want, it seems that this operation might be extremely cumbersome when <code>params</code>becomes a list of tensors of varying size (i.e when the model is a deep neural network for instance).</p>
<p>It seems that are two ways around this:</p>
<ul>
<li><p>Flatten <code>params</code> to be a 1D tensor called <code>flat_params</code>:</p>
<pre><code>flat_params = tf.concat([tf.reshape(p, [-1]) for p in params])
</code></pre>
<p>so that <code>tf.hessians(loss, flat_params)</code> naturally returns a 2x2 matrix. However as noted in <a href="https://stackoverflow.com/questions/44836859/why-does-tensorflow-reshape-tf-reshape-break-the-flow-of-gradients">Why does Tensorflow Reshape tf.reshape() break the flow of gradients?</a> for tf.gradients (but also holds for tf.hessians), tensorflow is not able to see the symbolic link in the graph between <code>params</code>and <code>flat_params</code> and <code>tf.hessians(loss, flat_params)</code> will raise an error as the gradients will be seen as <code>None</code>. </p></li>
<li><p>In <a href="https://afqueiruga.github.io/tensorflow/2017/12/28/hessian-mnist.html" rel="nofollow noreferrer">https://afqueiruga.github.io/tensorflow/2017/12/28/hessian-mnist.html</a>, the author of the code goes the other way, and first create the flat parameter and reshapes its parts into <code>self.params</code>. This trick does work and gets you the hessian with its <em>expected</em> shape (2x2 matrix). However, it seems to me that this will be cumbersome to use when you have a complex model, and impossible to apply if you create your model via built-in functions (like <code>tf.layers.dense</code>, ..). </p></li>
</ul>
<p>Is there no straight-forward way to get the Hessian matrix (as in the 2x2 matrix in this example) from <code>tf.hessians</code>, when <code>self.params</code> is a list of tensor of arbitrary shapes? If not, how can you automatize the reshaping of the output tensor of <code>tf.hessians</code>?</p> | 2018-08-21 11:58:54.823000+00:00 | 2019-05-15 06:58:43.453000+00:00 | null | python|tensorflow|deep-learning|hessian-matrix | ['https://arxiv.org/pdf/1905.05559', 'https://github.com/gknilsen/pyhessian'] | 2 |
57,303,846 | <p>Why hybrid flow? The oft-documented rationale is that your app can immediately have info about the user via the id_token while the access token acquisition is still in flight. Technically this is true but it's still rarely used in the wild.</p>
<p>One real-world example is a Financial-grade API (FAPI) profile developed by a working group under the umbrella of OpenID Foundation. It recommends hybrid flow for security reasons. It's worth noting that the channel split "feature" of the flow is not enough on its own to provide the desired security properties, more "cooperation" from other moving parts is needed. From <a href="https://openid.net/specs/openid-financial-api-part-2-ID2.html" rel="nofollow noreferrer">FAPI implementer's draft part 2</a>:</p>
<blockquote>
<p>This profile describes security provisions for the server and client
that are appropriate for Financial-grade APIs by defining the measures
to mitigate:</p>
<ul>
<li>attacks that leverage the weak binding of endpoints in [RFC6749] (e.g. malicious endpoint attacks, IdP mix-up attacks),</li>
<li>attacks that modify authorization requests and responses unprotected in [RFC6749] by leveraging OpenID Connect's Hybrid Flow that returns
an ID Token in the authorization response.</li>
</ul>
</blockquote>
<p>and details</p>
<blockquote>
<p>8.3.3 Identity provider (IdP) mix-up attack </p>
<p>In this attack, the client has
registered multiple IdPs and one of them is a rogue IdP that returns
the same <code>client_id</code> that belongs to one of the honest IdPs. When a user
clicks on a malicious link or visits a compromised site, an
authorization request is sent to the rogue IdP. The rogue IdP then
redirects the client to the honest IdP that has the same <code>client_id</code>. If
the user is already logged on at the honest IdP, then the
authentication may be skipped and a code is generated and returned to
the client. Since the client was interacting with the rogue IdP, the
code is sent to the rogue IdP's token endpoint. At the point, the
attacker has a valid code that can be exchanged for an access token at
the honest IdP.</p>
<p>This is mitigated by the use of OpenID Connect Hybrid Flow in which
the honest IdP's issuer identifier is included as the value of <code>iss</code>.
The client then sends the code to the token endpoint that is
associated with the issuer identifier thus it will not get to the
attacker.</p>
<p>8.4.3. Authorization response parameter injection attack </p>
<p>This attack occurs when the victim and attacker use the same relying party client.
The attacker is somehow able to capture the authorization code and
state from the victim's authorization response and uses them in his
own authorization response.</p>
<p>This can be mitigated by using OpenID Connect Hybrid Flow where the
<code>c_hash</code>, <code>at_hash</code>, and <code>s_hash</code> can be used to verify the validity of the
authorization code, access token, and state parameters. The server can
verify that the state is the same as what was stored in the browser
session at the time of the authorization request.</p>
</blockquote>
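<p>As a rough sketch of the <code>c_hash</code> check mentioned above (assuming the ID token is signed with an algorithm based on SHA-256, such as RS256):</p>
<pre><code>import base64
import hashlib

def expected_c_hash(authorization_code: str) -> str:
    digest = hashlib.sha256(authorization_code.encode('ascii')).digest()
    left_half = digest[:len(digest) // 2]
    return base64.urlsafe_b64encode(left_half).rstrip(b'=').decode('ascii')

# The client compares expected_c_hash(code) with the c_hash claim in the ID token;
# at_hash for the access token is computed the same way.
</code></pre>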
<p>For a more technical description of these two attacks and countermeasures, see <a href="https://www.nds.rub.de/media/ei/veroeffentlichungen/2017/01/30/oidc-security.pdf" rel="nofollow noreferrer">Single Sign-On Security – An Evaluation of OpenID Connect</a> </p>
<p>For a realllly detailed description, take a look at <a href="https://arxiv.org/abs/1704.08539" rel="nofollow noreferrer">OIDC Security Analysis</a> paper. </p> | 2019-08-01 07:28:16.870000+00:00 | 2019-08-01 07:28:16.870000+00:00 | null | null | 57,302,426 | <p>I have been reading about OpenId Connect and their flows that are <strong>implicit flow</strong>, <strong>authorization code flow</strong> and <strong>hybrid flow</strong>.</p>
<p>I know that for example, the implicit flow is kind of insecure and should be used just in public clients like SPA application.</p>
<p>Now I´m trying to understand the Hybrid Flow that can be used for non-public applications like .Net MVC applications where you have a backchannel communication and thus you can save a secret password.</p>
<p>Reading about the Hybrid flow I know that it has 3 different types of response_type that can be:</p>
<ol>
<li>code id_token</li>
<li>code token</li>
<li>code id_token token</li>
</ol>
<p>For me, the best response_type would be code id_token where I can get the code in the front channel and then send that code to the Identity Server Provider and get the access token through the backchannel.</p>
<p>I've been searching for information on a real-world application of <strong>response_type=code id_token token</strong>, or <strong>code token</strong>, but other than reading that in these flows the first token/s are issued by the authorization endpoint which is the front channel and the final tokens that are issued by exchanging the authorization code are issued at the token endpoint, which is the backchannel and thus inherently accepted as being more secure, I fail to understand what you would use this for. Any information would be gladly accepted.</p> | 2019-08-01 05:39:06.423000+00:00 | 2021-05-17 03:44:19.830000+00:00 | null | oauth|oauth-2.0|openid-connect | ['https://openid.net/specs/openid-financial-api-part-2-ID2.html', 'https://www.nds.rub.de/media/ei/veroeffentlichungen/2017/01/30/oidc-security.pdf', 'https://arxiv.org/abs/1704.08539'] | 3 |
38,744,014 | <p>I'm currently working on one of my pet projects that requires face recognition: detecting the area containing a face in a photo, if one exists, on a Raspberry Pi. So I've done some analysis of this task.</p>
<p>I found <a href="https://arxiv.org/pdf/1506.02640.pdf" rel="nofollow">this</a> approach. The key idea is to avoid scanning the entire picture with windows of different sizes, as OpenCV does, and instead to divide the whole photo into 49 (7x7) squares and train the model not only to detect whether one of the classes is present inside each square, but also to determine the location and size of the detected object.</p>
<p>It's only 49 runs of the trained model, so I think it's possible to execute this in less than a second even on non-state-of-the-art smartphones. In any case, it will be a trade-off between accuracy and performance.</p>
<p>About the model:</p>
<p>I will use a <a href="https://arxiv.org/pdf/1409.1556.pdf" rel="nofollow">VGG</a>-like model, probably a bit simpler than even VGG11 (configuration A). </p>
<p>In my case a ready-made dataset already <a href="http://www.robots.ox.ac.uk/~vgg/data/vgg_face" rel="nofollow">exists</a>, so I can train a convolutional network with it.</p>
<p>Why is the deep learning approach better than options 1-3 you mentioned? Because of its higher accuracy for this kind of task; it's practically proven. You can check this on <a href="https://www.kaggle.com/" rel="nofollow">kaggle</a>: the majority of the top models in classification competitions are based on convolutional networks.</p>
<p>The only disadvantage for you is that you would probably need to create your own dataset to train the model.</p> | 2016-08-03 12:42:08.533000+00:00 | 2016-08-04 07:40:06.557000+00:00 | 2016-08-04 07:40:06.557000+00:00 | null | 38,739,303 | <p>I started learning Image Recognition a few days back and I would like to do a project in which it needs to identify different brand logos in Android.</p>
<p>For Ex: If I take a picture of Nike logo in an Android device then it needs to display "Nike".</p>
<ul>
<li>Low computational time would be the main criteria for me.</li>
</ul>
<p>For this, I have done some work and started learning OpenCV sample examples.</p>
<p>What would be the best image recognition approach for me to use?</p>
<p>1) I came to know from Template Matching that their applicability is limited mostly by the available computational power, as identification of big and complex templates can be time consuming. (and so I don't want to use it)</p>
<p>2) Feature Based detectors like SIFT/SURF/STAR (As per my knowledge this would be a better option for me)</p>
<p>3) How about Deep Learning and Pattern Recognition concepts? (I was digging into this and don't know whether it would be an option for me). Can any of you let me know whether I can use this and why it would be a better choice for me when compared with 1 and 2?</p>
<p>4) Haar cascade classifiers (From one of the posts on SO, I came to know that Haar is not rotation and scale invariant, so I haven't concentrated much on this). Would this be a better option for me if I focus on it?</p> | 2016-08-03 09:13:43.567000+00:00 | 2016-08-04 07:40:06.557000+00:00 | null | machine-learning|pattern-matching|image-recognition|feature-detection|haar-classifier | ['https://arxiv.org/pdf/1506.02640.pdf', 'https://arxiv.org/pdf/1409.1556.pdf', 'http://www.robots.ox.ac.uk/~vgg/data/vgg_face', 'https://www.kaggle.com/'] | 4 |
35,150,369 | <p><code>inv()</code> is certainly slower than <code>\</code> unless you have multiple right hand side vectors to solve for. However, the advice from MathWorks regarding inaccuracy is due to an overly conservative bound in a numerical linear algebra result. In other words, <code>inv()</code> is NOT inaccurate. The link elaborates further: <a href="http://arxiv.org/abs/1201.6035" rel="nofollow">http://arxiv.org/abs/1201.6035</a> </p>
<blockquote>
<p>Several widely-used textbooks lead the reader to believe that solving a linear system of equations Ax = b by multiplying the vector b by a computed inverse inv(A) is inaccurate. Virtually all other textbooks on numerical analysis and numerical linear algebra advise against using computed inverses without stating whether this is accurate or not. In fact, under reasonable assumptions on how the inverse is computed, x = inv(A)*b is as accurate as the solution computed by the best backward-stable solvers.</p>
</blockquote> | 2016-02-02 10:06:44.463000+00:00 | 2016-02-02 10:06:44.463000+00:00 | null | null | 1,419,580 | <p>I read at a few places (in the doc and in this blog post : <a href="http://blogs.mathworks.com/loren/2007/05/16/purpose-of-inv/" rel="noreferrer">http://blogs.mathworks.com/loren/2007/05/16/purpose-of-inv/</a> ) that the use of inv in Matlab is not recommended because it is slow and inaccurate.</p>
<p>I am trying to find the reason for this inaccuracy. So far, Google did not give me any interesting results, so I thought someone here could guide me.</p>
<p>Thanks !</p> | 2009-09-14 03:38:25.010000+00:00 | 2016-02-02 10:06:44.463000+00:00 | 2013-07-03 14:49:00.513000+00:00 | matlab|linear-algebra|numerical-analysis|matrix-inverse | ['http://arxiv.org/abs/1201.6035'] | 1 |
20,821,506 | <p>I did try to look pretty in-depth at the article, and found it very confusing and hard to read in the same way as @sth. I have an academic background so am used to reading (often poorly written) papers, but found this one pretty hard to go through.</p>
<p>I don't want to dishearten you but you should have someone go through and help you re-write it if you want to reach any significant audience.</p>
<p>Claimed proofs of or counterexamples to famous outstanding conjectures appear every day (look around <a href="http://arxiv.org" rel="nofollow">http://arxiv.org</a>, I bet P vs. NP has been solved at least once this week), and are often discredited out of hand if all three of the following things do not hold:</p>
<ul>
<li>the author is already established and has a good reputation for being correct and careful,</li>
<li>the paper clearly has a significant, new idea, rather than apparently mysteriously extracting a proof from unmotivated symbolic manipulation (if it were possible, one of the many smart, experienced people who have worked on the problem, or a computer, would likely have found such a solution),</li>
<li>the paper explains clearly what the roadblocks were to finding a solution, and how this new idea overcomes it.</li>
<li>It helps if the paper is well-written, but a poorly-written paper satisfying the above will still get some attention.</li>
</ul>
<p>Probably over 99% of claimed solutions to famous open problems are wrong, and papers that fail a 60 second smell test are most often thrown away by people who are actually equipped to evaluate them.</p>
<p>Sorry to say, you meet none of the above criteria. This doesn't mean that your proof is wrong, but it does mean that people who are able to evaluate it will be reluctant to spent the necessary time, particularly because the paper is hard to read. Never mind that it is not actually clear what you are claiming to have proved.</p>
<p>Some specific complaints:</p>
<ul>
<li>I don't see anywhere a description of an actual algorithm. If you claim to have achieved a certain time complexity improvement, you should either include an algorithm attaining it or explain why your proof cannot be adapted to be constructive.</li>
<li>You nowhere clearly describe the approaches people have attempted to solve the problem, and how your approach is similar to or different from theirs.</li>
<li>You don't state your significant new idea that solved this problem. The proof appears to use nothing beyond basic arithmetic. Sorry, I love down-to-earth concrete math, but I guarantee that everyone who has ever worked on this problem has a solid command of arithmetic, and if no other tools are necessary to attain a 4-page solution then probably someone would have found it by now.</li>
<li>I had hoped to find an implementation of an algorithm attaining your claimed time complexity (never mind that I am not clear on what the claim is) in the Python file you attach. However, to my dismay, the script apparently just runs a computation of the closed-form expression that you provide in your paper.</li>
</ul>
<p>I expect some people will come to your "defense" (despite that this is not an attack, but honest advice), because you are a high-schooler and this is "amazing" for a high-schooler. Right now there are two posts already in this spirit, and neither author seems to even know what you are claiming to prove.</p>
<p>I recommend you clean the paper up as best you can and post it on Math or CS StackExchange (<strong>Edit:</strong> apparently Math StackExchange has a ban on posting "solutions" to open problems, probably for the reasons I describe above!), where there will be a much larger audience that is equipped to look at it and evaluate it carefully. I recommend you also look for other articles on the same topic (there are certainly dozens if not hundreds), look up the authors of those articles, pick one who is relatively junior (a full professor will be harder to convince to interact with you), and send him what you have personally to see what he thinks. I would avoid emphasizing that you are in high school, as in my experience, most academics will not be impressed and will be much more ready to write you off as a likely waste of time.</p>
<p>@mrip has some nice references and advice for you as well. Good luck.</p>
<p><strong>Edit:</strong> Just for fun, here are two claimed solutions to P vs. NP from this last summer, and an article that explores the anthropological side of P vs. NP:</p>
<ul>
<li><a href="http://arxiv.org/abs/1304.1307" rel="nofollow">http://arxiv.org/abs/1304.1307</a> Quote from abstract: "First of all, we prove <code>P != NP</code>..."</li>
<li><a href="http://arxiv.org/abs/1305.4029" rel="nofollow">http://arxiv.org/abs/1305.4029</a></li>
<li><a href="http://arxiv.org/abs/0904.3074" rel="nofollow">http://arxiv.org/abs/0904.3074</a></li>
</ul>
<p><strong>Edit for record-keeping:</strong> The article linked at <a href="http://arxiv.org/abs/1112.0631" rel="nofollow">http://arxiv.org/abs/1112.0631</a> is a paper claiming to prove the same thing as you (maybe), so it's a great first place to look and first person to contact.</p> | 2013-12-29 02:22:16.740000+00:00 | 2013-12-29 06:15:11.250000+00:00 | 2013-12-29 06:15:11.250000+00:00 | null | 19,970,750 | <p>I believe that I have solved an open problem in complexity theory but I want to make sure that it's right.</p>
<p>The problem in question is: ``How many moves does it take to solve the Towers of Hanoi puzzle as the number of towers increases?''</p>
<p>What is obvious is that if the number of ''disks'' is kept bounded, then the running time asymptotically approaches <code>O(n)</code> where n is the number of ''disks''. This is significantly better than the original <code>O(2^n)</code>.</p>
<p>What I've found is that the running time is <code>O(2^n^(1/k))</code> where <code>n</code> is the number of disks, <code>k</code> is the number of pegs, and exponentiation (the <code>^</code> operator) is right associative. Although, this comes about because of a weird phenomena where there are discrete points where the running time increases linearly and then changes slope. So all in all, the running time <em>amortized</em> <code>O(2^n^(1/k))</code>.</p>
<p>If you're curious about it and want to read the proof for yourself, I set up a website where you can find it <a href="http://kcolford.com/the-towers-of-hanoi-2/" rel="nofollow noreferrer">over here</a>. (If that link is inaccessable, try <a href="https://github.com/kcolford/towersofhanoi" rel="nofollow noreferrer">github</a>. Although you'll need access to the necessary tools to build it)</p>
<p>Because I know that someone is going to ask me ''Why don't I just give it to my professor?'' or something else along those lines. The answer is that I'm not affiliated with any university/college, I'm still in high school.</p>
<p>Any help is very appreciated, thank you in advance.</p>
<p><strong>Notice:</strong> This question has been re-posted on Math Overflow <a href="https://mathoverflow.net/questions/153021/correctness-of-proof-about-reves-puzzle">over here</a></p>
<p><strong>Notice:</strong> When the recomended formatting edits are made to the paper, another bounty will be issued on a new question that will be posted as I am looking for criticsm on the <em>content</em> of the paper rather than the <em>legibility</em> of it. </p> | 2013-11-14 06:08:21.727000+00:00 | 2013-12-29 22:00:09.580000+00:00 | 2017-04-13 12:57:55.007000+00:00 | performance|algorithm|towers-of-hanoi | ['http://arxiv.org', 'http://arxiv.org/abs/1304.1307', 'http://arxiv.org/abs/1305.4029', 'http://arxiv.org/abs/0904.3074', 'http://arxiv.org/abs/1112.0631'] | 5 |
49,767,338 | <p>Both case 1 and case 2 are called ensemble learning. Both are worth it.</p>
<p>For case 1: please note that neural networks with the same architecture, same learning algorithm but different initial weights can have very different performance.</p>
<p>Similar to case 1 (taking the average, not the mode) are Schmidhuber's averaging ensembles. I published some results of them with various datasets and network architectures in <a href="https://arxiv.org/abs/1707.09725" rel="nofollow noreferrer">my master's thesis</a> (e.g. Table 5.2, Table 5.8, Table 5.9, Table 5.11, 5.13, ...).</p>
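<p>As a quick illustration (my own sketch, not from the thesis), both cases can be wired up with scikit-learn's <code>VotingClassifier</code>, where <code>voting="hard"</code> takes the mode of the member predictions and <code>voting="soft"</code> averages their predicted probabilities:</p>
<pre><code>from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# case 1: same algorithm, same data, different initial weights
case1 = VotingClassifier(
    estimators=[(f"mlp{i}", MLPClassifier(random_state=i, max_iter=500))
                for i in range(3)],
    voting="hard")  # mode of the member predictions

# case 2: different algorithms on the same data
case2 = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=500)),
                ("tree", DecisionTreeClassifier(random_state=0)),
                ("mlp", MLPClassifier(random_state=0, max_iter=500))],
    voting="soft")  # average of the predicted probabilities

for clf in (case1, case2):
    print(clf.fit(X, y).score(X, y))
</code></pre>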
<h2>See also</h2>
<ul>
<li><a href="http://scikit-learn.org/stable/modules/ensemble.html" rel="nofollow noreferrer">Ensemble methods in sklearn</a></li>
<li>My Blog post: <a href="https://martin-thoma.com/ensembles/" rel="nofollow noreferrer">Ensembles</a></li>
<li><a href="https://stats.stackexchange.com/a/187970/25741">Bagging, boosting and stacking in machine learning</a></li>
</ul> | 2018-04-11 05:42:35.320000+00:00 | 2018-04-11 06:35:45.960000+00:00 | 2018-04-11 06:35:45.960000+00:00 | null | 49,726,785 | <p>I have a question about a classifier concept.</p>
<h2>case 1</h2>
<p>If I have a classifier whose performance is up to 90%,
and I create n other classifiers with the same algorithm and the same dataset, each also reaching 90% performance.</p>
<h2>case2</h2>
<p>Same as case 1, but every classifier uses a different algorithm.</p>
<h2>Combining the results</h2>
<p>The result I get from</p>
<pre><code>mode(classifier1,classifier2,classifier3,...,classifiern)
</code></pre>
<p>Is that technique worth it or useless (case1, and case2)?</p> | 2018-04-09 06:30:17.363000+00:00 | 2018-04-12 17:37:27.123000+00:00 | 2020-06-20 09:12:55.060000+00:00 | machine-learning|classification | ['https://arxiv.org/abs/1707.09725', 'http://scikit-learn.org/stable/modules/ensemble.html', 'https://martin-thoma.com/ensembles/', 'https://stats.stackexchange.com/a/187970/25741'] | 4 |
60,316,202 | <p>Yes, this is directly related to the way that BERT is trained. Specifically, I encourage you to have a look at the <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="nofollow noreferrer">original BERT paper</a>, in which the authors introduce the meaning of the <code>[CLS]</code> token:</p>
<blockquote>
<p><code>[CLS]</code> is a special symbol added in front of every input example [...].</p>
</blockquote>
<p>Specifically, it is used for classification purposes, and is therefore the first and simplest choice for any fine-tuning for classification tasks. What your relevant code fragment is doing is basically just extracting this <code>[CLS]</code> token.</p>
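<p>To make the two pooling choices concrete, here is a minimal sketch (mine, not from the library source) contrasting the <code>[CLS]</code> slice with a masked mean over the sequence; <code>hidden_state</code> stands in for the <code>(bs, seq_len, dim)</code> transformer output from your snippet, and the mean variant is the alternative mentioned in the docs quoted further below:</p>
<pre><code>import torch

bs, seq_len, dim = 4, 128, 768
hidden_state = torch.randn(bs, seq_len, dim)   # stand-in for distilbert_output[0]
attention_mask = torch.ones(bs, seq_len)       # stand-in for the real padding mask

cls_pooled = hidden_state[:, 0]                # (bs, dim) -- first token, i.e. [CLS]

mask = attention_mask.unsqueeze(-1)            # (bs, seq_len, 1)
mean_pooled = (hidden_state * mask).sum(dim=1) / mask.sum(dim=1)  # (bs, dim)
</code></pre>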
<p>Unfortunately, the DistilBERT documentation of Huggingface's library does not explicitly refer to this, but you rather have to check out their <a href="https://huggingface.co/transformers/model_doc/bert.html" rel="nofollow noreferrer">BERT documentation</a>, where they also highlight some issues with the <code>[CLS]</code> token, analogous to your concerns:</p>
<blockquote>
<p>Alongside MLM, BERT was trained using a next sentence prediction (NSP)
objective using the [CLS] token as a sequence approximate. The user
may use this token (the first token in a sequence built with special
tokens) to get a sequence prediction rather than a token prediction.
However, averaging over the sequence may yield better results than
using the [CLS] token.</p>
</blockquote> | 2020-02-20 09:03:53.683000+00:00 | 2021-08-22 09:41:23.193000+00:00 | 2021-08-22 09:41:23.193000+00:00 | null | 60,087,613 | <p>In the last few layers of sequence classification by <a href="https://github.com/huggingface/transformers/blob/33d3072e1c54bcd235447b98c6dea1b4cb71234c/src/transformers/modeling_distilbert.py#L634" rel="noreferrer">HuggingFace</a>, they took the first hidden state of the sequence length of the transformer output to be used for classification. </p>
<pre><code>hidden_state = distilbert_output[0] # (bs, seq_len, dim) <-- transformer output
pooled_output = hidden_state[:, 0] # (bs, dim) <-- first hidden state
pooled_output = self.pre_classifier(pooled_output) # (bs, dim)
pooled_output = nn.ReLU()(pooled_output) # (bs, dim)
pooled_output = self.dropout(pooled_output) # (bs, dim)
logits = self.classifier(pooled_output) # (bs, dim)
</code></pre>
<p>Is there any benefit to taking the first hidden state over the last, average, or even the use of a Flatten layer instead?</p> | 2020-02-06 04:10:53.640000+00:00 | 2021-08-22 09:41:23.193000+00:00 | 2020-02-07 02:42:09.717000+00:00 | time-series|sequence|tensorflow2.0|text-classification|huggingface-transformers | ['https://arxiv.org/pdf/1810.04805.pdf', 'https://huggingface.co/transformers/model_doc/bert.html'] | 2 |
67,654,948 | <p>As it seems no one else had the same issue or used primitive objects as cache entries, and thus has not noticed the issue.
After replicating it and, fortunately, tracing the root cause, the following points came up:</p>
<ul>
<li>Always implement Serializable / <code>hashCode</code> / <code>equals</code> for custom objects that are going to end up being transmitted through a replicated/synchronized cache.</li>
<li>Never put in primitive arrays, as <code>hashCode</code> / <code>equals</code> would not be calculated efficiently.</li>
<li>Don't enable eviction with the remove strategy on replicated caches: upon reaching the maximum limit, entries are removed randomly - based on <a href="https://arxiv.org/abs/1512.00727" rel="nofollow noreferrer">TinyLFU</a> - rather than based on the expiration timer, and they never get removed from the JVM heap.</li>
</ul> | 2021-05-22 23:27:54.847000+00:00 | 2021-05-22 23:27:54.847000+00:00 | null | null | 66,267,902 | <p>Was doing some internal testing about a clustering solution on top of infinispan/jgroups and noticed that the expired entries were never becoming eligible for GC, due to a reference on the expiration-reaper, while having more than 1 nodes in the cluster with expiration enabled / eviction disabled.
Due to some system difficulties, the versions below are being used:</p>
<ul>
<li>JDK 1.8</li>
<li>Infinispan 9.4.20</li>
<li>JGroups 4.0.21</li>
</ul>
<p>In my example I am using a simple Java main scenario, placing a specific number of entries and expecting them to expire after a specific time period. The expiration is indeed happening, as can be confirmed both when accessing the expired entry and by the respective event listener (if it's configured), but it looks like the entry never gets removed from the available memory, even after an explicit GC or while getting close to an OOM error.</p>
<p>So the question is:</p>
<p>Is this really expected as default behavior, or am I missing a critical configuration for the cluster replication / expiration / serialization?</p>
<p>Example :</p>
<p>Cache Manager :</p>
<pre><code>return new DefaultCacheManager("infinispan.xml");
</code></pre>
<p>infinispan.xml :</p>
<pre><code> <jgroups>
<stack-file name="udp" path="jgroups.xml" />
</jgroups>
<cache-container default-cache="default">
<transport stack="udp" node-name="${nodeName}" />
<replicated-cache name="myLeakyCache" mode="SYNC">
<expiration interval="30000" lifespan="3000" max-idle="-1"/>
</replicated-cache>
</cache-container>
</code></pre>
<p>Default UDP jgroups xml as in the packaged example :</p>
<p>.....</p>
<pre><code><UDP
mcast_addr="${jgroups.udp.mcast_addr:x.x.x.x}"
mcast_port="${jgroups.udp.mcast_port:46655}"
bind_addr="${jgroups.bind.addr:y.y.y.y}"
tos="8"
ucast_recv_buf_size="200k"
ucast_send_buf_size="200k"
mcast_recv_buf_size="200k"
mcast_send_buf_size="200k"
max_bundle_size="64000"
ip_ttl="${jgroups.udp.ip_ttl:2}"
enable_diagnostics="false"
bundler_type="old"
thread_naming_pattern="pl"
thread_pool.enabled="true"
thread_pool.max_threads="30"
/>
</code></pre>
<p>The dummy cache entry :</p>
<pre><code>public class CacheMemoryLeak implements Serializable {
private static final long serialVersionUID = 1L;
Date date = new Date();
}
</code></pre>
<p>An example usage from the "service" :</p>
<pre><code>Cache<String, Object> cache = cacheManager.getCache("myLeakyCache");
cache.put(key, new CacheMemoryLeak());
</code></pre>
<p>Some info / tryouts :</p>
<ul>
<li>When there is only one node in the cluster, or when restarting the nodes
sequentially, the references are getting cleared.</li>
<li>Enabling max-idle shows the same behavior (makes sense, as the expiration
reaper is the same).</li>
<li>Enabling eviction does not resolve the issue; it just keeps the
"expired" references count within the max limit. In case this is
reached pretty fast, random eviction happens on the live entries
as well (default remove strategy)!!</li>
<li>If I change the cache entry to be a native String, then the
infinispan MortalCacheEntries are getting removed from the heap space
on the next GC cycle, once expired and marked by the expiration reaper, unlike the custom object!!</li>
<li>Enabling the expiration reaper only in one node didn't resolve the
issue, and might break the failover mechanism.</li>
<li>Upgraded to infinispan 10.1.8 Final, but faced the same issue.</li>
</ul> | 2021-02-18 20:44:15.797000+00:00 | 2021-05-22 23:27:54.847000+00:00 | 2021-02-18 21:00:41.700000+00:00 | java|caching|infinispan | ['https://arxiv.org/abs/1512.00727'] | 1 |
71,803,617 | <p>It's unclear what you're intending to predict.</p>
<p>Do you want your Keras NN to report the <em>same</em> value as the precise cosine-similarity calculation, between the two text summary vectors, would report? If so, why not just... do the calculation? It's not something I'd necessarily expect a neural architecture to approximate better.</p>
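<p>(For reference, the direct calculation is only a couple of lines; this is a generic sketch, not tied to your models:)</p>
<pre><code>import numpy as np

def cosine_similarity(u, v, eps=1e-9):
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))
</code></pre>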
<p>Alternatively, if your tiny 6-pair dataset is the target:</p>
<ol>
<li><p>Your existing 'gold standard' answers don't seem obviously correct to me. Superficially, 'olive plant' & 'black olives' seem nearly as 'similar' as 'tomato' & 'tomato substance'. Similarly, 'watering plants' & 'cornsalad plant' seem about as similar as 'sweet potatoes' & 'potato chip'.</p>
</li>
<li><p>A mere 6 examples (maybe 5 after train/test split?) is both inadequate to usefully train a larger neural classifier, <em>and</em> to the extent the classifier might be easily trained (indeed 'overfit') to the 5 training examples, it won't necessarily have learned anything generalizable to the one hold-out example (which is using vectors quite far from the training texts). (With such a paucity of training data, & testing using inputs that might be arbitrarily different than the training data, "very poor" performance would be expected. Neural nets require lots of varied training examples!)</p>
</li>
</ol>
<p>Finally, the strategy of creating combined-embeddings-by-averaging, as investigated by your linked paper, is another atypical practice that seems fishy to me. Even if it could offer some benefits, there's no reason to mix that atypical, somewhat non-intuitive extra practice into your experiment before even having things work with a more typical and simple baseline approach, for comparison, to be sure the extra 'meta'/averaging is worth the complication.</p>
<p>The paper itself doesn't really show any advantage over concatenation, which has a stronger theoretical basis (preserving each model's full independent spaces) than averaging, except by a tiny amount in 1-of-6 tests. Further, the average of GLoVe & CBOW performs <em>the same or worse</em> than GLoVe alone on 3 of their 6 evaluations – and is pretty minimally better on the 3 other evaluations. That implies to me the outperformance might be mainly random jitter introduced by the extra steps, and the averaging is – at best – a cheap option to consider as a tiny boost, not a generally-better approach.</p>
<p>The paper also leaves many natural related questions unaddressed:</p>
<ul>
<li>Is averaging better than, say, just picking a random half of each models' dimensions for concatenation? That'd be even cheaper!</li>
<li>Might some of the <em>slight</em> lift in <em>some</em> tasks be due not to the averaging, but the other transformations they've applied – the l2-normalization applied to each source model, or across the whole of each dimension for the GLoVE model? (It's unclear if this model-postprocessing was only applied before dual-model averaging, or also to GLoVe in its solo evaluation.)</li>
</ul>
<p>There's other work suggesting post-training transformations of word-vector spaces may improve performance on downstream tasks – see for example <a href="https://arxiv.org/abs/1702.01417" rel="nofollow noreferrer">'All But The Top'</a> – so which steps, exactly, get which advantages is important to distinguish.</p> | 2022-04-08 22:11:32.067000+00:00 | 2022-04-08 22:11:32.067000+00:00 | null | null | 71,802,729 | <p>I want to implement a Keras model to predict the similarity between two sentences from words embeddings as follows (I included my full script at the end):</p>
<ol>
<li>Load words embeddings models, e.g., Word2Vec and fastText.</li>
<li>Generate samples (<code>X1</code> and <code>X2</code>) by computing the average word vectors for all words in a sentence. If two or more models are used, calculate the arithmetic mean of all embeddings (<a href="https://arxiv.org/abs/1804.05262" rel="nofollow noreferrer"><em>Frustratingly Easy Meta-Embedding -- Computing Meta-Embeddings by Averaging Source Word Embeddings</em></a>).</li>
<li>Concatenate <code>X1</code> and <code>X2</code> into one array before feeding them to the network.</li>
<li>Compile (and evaluate) the Keras model.</li>
</ol>
<p>The entire script is as follows:</p>
<pre><code>import numpy as np
from gensim.models import Word2Vec
from keras.layers import Dense
from keras.models import Sequential
from sklearn.model_selection import train_test_split
def encoder_vector(v: str, model: Word2Vec) -> np.array:
wv_dim = model.vector_size
if v in model.wv:
return model.wv[v]
else:
return np.zeros(wv_dim)
def encoder_words_avg(words: list[str], model: Word2Vec) -> np.array:
dim = model.vector_size
words = [word for word in words if word in model.wv]
if len(words) >= 1:
return np.mean(model.wv[words], axis=0)
else:
return np.zeros(dim)
def load_samples(mappings, w2v_model, fast_model):
dim = w2v_model.vector_size
num = len(mappings)
X1 = np.zeros((num, dim))
X2 = np.zeros((num, dim))
y = np.zeros((num, 1))
for i in range(num):
mapping = mappings[i].split("|")
sentence_1, sentence_2 = mapping[1:]
e = np.zeros((2, dim))
# Compute meta-embedding by averaging all embeddings.
e[0, :] = encoder_words_avg(words=sentence_1.split(), model=w2v_model)
e[1, :] = encoder_words_avg(words=sentence_1.split(), model=fast_model)
X1[i] = e.mean(axis=0)
e[0, :] = encoder_words_avg(words=sentence_2.split(), model=w2v_model)
e[1, :] = encoder_words_avg(words=sentence_2.split(), model=fast_model)
X2[i] = e.mean(axis=0)
y[i] = 0.0 if mapping[0].startswith("-") else 1.0
return X1, X2, y
def baseline_model(X_train, X_test, y_train, y_test):
model = Sequential()
model.add(
Dense(
200,
input_shape=(X_train.shape[1],),
activation="relu",
kernel_initializer="he_uniform",
)
)
model.add(Dense(1, activation="sigmoid"))
model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, batch_size=8, epochs=14)
# Evaluate the trained model, using the train and test data
_, train_acc = model.evaluate(X_train, y_train, verbose=0)
_, test_acc = model.evaluate(X_test, y_test, verbose=0)
print("Train: %.3f, Test: %.3f\n" % (train_acc, test_acc))
return model
def main():
w2v_model = Word2Vec.load("")
fast_model = Word2Vec.load("")
mappings = [
"1|boiled chicken egg|hen egg whole boiled",
"2|tomato|tomato substance",
"3|sweet potatoes|potato chip",
"-1|watering plants|cornsalad plant",
"-2|butter|butane",
"-3|olive plant|black olives",
]
X1, X2, y = load_samples(mappings, w2v_model=w2v_model, fast_model=fast_model)
# Concatenate both arrays into one before feeding to the network.
X = np.concatenate([X1, X2], axis=1)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42, stratify=y
)
model = baseline_model(X_train, X_test, y_train, y_test)
model.summary()
</code></pre>
<p>The above script seems to work, but the prediction result is very poor even when using only Word2Vec (which makes me think there could be an issue with the Keras model...). Any ideas on how to improve the outcome? Am I doing something wrong?</p>
<p>Thank you.</p> | 2022-04-08 20:15:55.387000+00:00 | 2022-04-08 22:11:32.067000+00:00 | null | keras|neural-network|word2vec|similarity|word-embedding | ['https://arxiv.org/abs/1702.01417'] | 1 |
16,395,650 | <p>Use <code>/fp:strict</code> on Windows to tell the compiler to produce code that strictly follows IEEE 754, and <code>gcc -msse2 -mfpmath=sse</code> on Linux to obtain the same behavior there.</p>
<p>The reasons for the differences you are seeing have been discussed in spots on StackOverflow, but the best survey is David Monniaux's <a href="http://arxiv.org/abs/cs/0701192" rel="nofollow">article</a>.</p>
<hr>
<p>The assembly instructions I obtain when compiling with <code>gcc -msse2 -mfpmath=sse</code> are as follows. Instructions <code>cvtsi2ssq</code>, <code>divss</code>, <code>mulss</code>, <code>addss</code> are the correct instructions to use, and they result in a program where <code>p_value</code> contains at one point <code>42d5d1ec</code>.</p>
<pre><code> .globl _main
.align 4, 0x90
_main: ## @main
.cfi_startproc
## BB#0:
pushq %rbp
Ltmp2:
.cfi_def_cfa_offset 16
Ltmp3:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp4:
.cfi_def_cfa_register %rbp
subq $32, %rsp
movl $0, -4(%rbp)
movl $0, -8(%rbp)
LBB0_1: ## =>This Inner Loop Header: Depth=1
cmpl $100000, -8(%rbp) ## imm = 0x186A0
jge LBB0_4
## BB#2: ## in Loop: Header=BB0_1 Depth=1
movq _p_value@GOTPCREL(%rip), %rax
movabsq $100, %rcx
cvtsi2ssq %rcx, %xmm0
movss LCPI0_0(%rip), %xmm1
movabsq $10, %rcx
cvtsi2ssq %rcx, %xmm2
cvtsi2ss -8(%rbp), %xmm3
divss %xmm3, %xmm2
movss %xmm2, -12(%rbp)
cvtsi2ss -8(%rbp), %xmm2
mulss %xmm2, %xmm1
addss %xmm0, %xmm1
movss %xmm1, (%rax)
movl (%rax), %edx
movl %edx, -16(%rbp)
leaq L_.str(%rip), %rdi
movl -16(%rbp), %esi
movb $0, %al
callq _printf
movl %eax, -20(%rbp) ## 4-byte Spill
## BB#3: ## in Loop: Header=BB0_1 Depth=1
movl -8(%rbp), %eax
addl $1, %eax
movl %eax, -8(%rbp)
jmp LBB0_1
LBB0_4:
movl -4(%rbp), %eax
addq $32, %rsp
popq %rbp
ret
</code></pre> | 2013-05-06 09:21:45.040000+00:00 | 2014-05-31 22:46:34.553000+00:00 | 2014-05-31 22:46:34.553000+00:00 | null | 16,395,615 | <p>My programe runs both in linux and windows, I have to make sure the floating point arithmetic get the same result in different OS.</p>
<p>Here is the code:</p>
<pre><code>for (int i = 0; i < 100000; ++i)
{
float d_value = 10.0f / float(i);
float p_value = 0.01f * float(i) + 100.0f;
}
</code></pre>
<p>I use "<strong>g++ -m32 -c -static -g -O0 -ffloat-store</strong>" to build the code in linux.
I use "/fp:precise /O2" to build the code in windows with vs2005.</p>
<p>When I printf the "d_value" and the "p_value", the "d_value" is all the same both in linux and windows. But the "p_value" is different sometimes.
For example, printing the "p_value" in hexadecimal format:</p>
<pre><code>windows: 42d5d1eb
linux: 42d5d1ec
</code></pre>
<p>Why does this happen?</p>
<p>My g++ version is</p>
<pre><code>Configured with: ../src/configure -v --with-pkgversion='Debian 4.4.5-8' --with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.4 --enable-shared --enable-multiarch --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.4 --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-targets=all --with-arch-32=i586 --with-tune=generic --enable-checking=release --build=i486-linux-gnu --host=i486-linux-gnu --target=i486-linux-gnu
Thread model: posix
gcc version 4.4.5 (Debian 4.4.5-8)
</code></pre>
<p>I use the flag <code>-ffloat-store</code>, because of someone's suggestion here: <a href="https://stackoverflow.com/questions/1961442/different-math-rounding-behaviour-between-linux-mac-os-x-and-windows">Different math rounding behaviour between Linux, Mac OS X and Windows</a></p> | 2013-05-06 09:19:47.340000+00:00 | 2014-05-31 22:46:34.553000+00:00 | 2017-05-23 10:32:08.263000+00:00 | linux|windows|math|floating-point|g++ | ['http://arxiv.org/abs/cs/0701192'] | 1 |
21,782,205 | <p>Check out the object detection framework available at <a href="https://github.com/nenadmarkus/pico" rel="noreferrer">https://github.com/nenadmarkus/pico</a>.</p>
<p>The framework enables you to learn a custom object detector (for example, for finding frontal, upright faces) and then use it at runtime for rotation invariant detection.</p>
<p>This is achieved by scanning the image with a rotated version of the object detector at a number of different orientations. Note that this can be done without cascade retraining or image resampling, and it should work in real-time on modern machines (the provided face detection demo does).</p>
<p>The details are given in the paper available at <a href="http://arxiv.org/abs/1305.4537" rel="noreferrer">http://arxiv.org/abs/1305.4537</a>.</p> | 2014-02-14 14:39:18.817000+00:00 | 2014-02-14 14:39:18.817000+00:00 | null | null | 14,869,862 | <p>I'd like to create an object detector based on cascade classifier, the only problem is that LBP and Haar features are not rotation invariant. The first thing that comes to my mind is to rotate the training sample at different angles, but I doubt that the resulting classifier would have good quality, moreover, the object could have stretched proportions. There are many rotation invariant detectors, for example, iPhone recognizes faces in real time in any orientation, so I wonder how do they achieve this? I would prefer to use OpenCV for this.</p> | 2013-02-14 07:33:07.590000+00:00 | 2014-02-14 14:39:18.817000+00:00 | null | opencv|image-processing|computer-vision|object-detection|haar-wavelet | ['https://github.com/nenadmarkus/pico', 'http://arxiv.org/abs/1305.4537'] | 2 |
12,339,432 | <ol>
<li>Get largest sub-array having size 3<sup>k</sup>+1</li>
<li>Apply cycle leader algorithm to the parts of this sub-array, starting from positions 1, 3, 9, ... 3<sup>k-1</sup>: move element to its proper position in sub-array (even-indexed elements to the left of sub-array, odd-indexed - to the right), the replaced element should be also moved to its proper position, etc. until this procedure comes back to the starting position. <a href="http://arxiv.org/abs/0805.1598" rel="nofollow">This paper</a> gives number-theoretic explanation why such selection of starting positions shuffles sub-array to the correct order.</li>
<li>Process the remaining parts of the array recursively, using steps 1 and 2.</li>
<li>Now we only need to join the reordered parts together. Start with the smaller sub-arrays at the end of the whole array. To exchange sub-array halves, use reverse algorithm: reverse(reverse(a), reverse(b)); or, for sub-array halves of equal size, use pair-wise swaps.</li>
<li>Now all even-positioned elements are on the left. To get them on the right, as required, exchange elements i and i+N/2 for all i = 0 .. N/2-1.</li>
</ol>
<p>The algorithm is in-place; its time complexity is O(N).</p>
<p>Example:</p>
<pre><code>0 1 2 3 4 5 6 7 8 9 10 11 (the original array)
0 1 2 3 4 5 6 7 8 9 # 10 11 (split to sub-arrays)
0 2 4 3 8 1 6 5 7 9 # 10 11 (first cycle leader iteration, starting with 1)
0 2 4 6 8 1 3 5 7 9 # 10 11 (second cycle leader iteration, starting with 3)
0 2 4 6 8 9 7 5 3 1 # 10 11 (2nd half of 1st part & 1st half of 2nd part reversed)
0 2 4 6 8 10 1 3 5 7 9 11 (both halves reversed together)
</code></pre>
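<p>A minimal sketch (mine, in Python) of the block exchange used in step 4: two adjacent, possibly unequal, blocks are swapped in place via three reversals.</p>
<pre><code>def reverse(a, lo, hi):
    """Reverse a[lo..hi] in place (inclusive bounds)."""
    while lo < hi:
        a[lo], a[hi] = a[hi], a[lo]
        lo, hi = lo + 1, hi - 1

def exchange_blocks(a, lo, mid, hi):
    """Swap a[lo:mid] with a[mid:hi] in place: reverse(reverse(a), reverse(b))."""
    reverse(a, lo, mid - 1)
    reverse(a, mid, hi - 1)
    reverse(a, lo, hi - 1)

a = list(range(8))
exchange_blocks(a, 0, 3, 8)
print(a)  # [3, 4, 5, 6, 7, 0, 1, 2]
</code></pre>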
<p>A variation of this algorithm that does not need step 5:</p>
<ul>
<li>On step 1, get largest sub-array having size 3<sup>k</sup>-1.</li>
<li>On step 2, move even-indexed elements to the right of sub-array, odd-indexed - to the left. Use starting positions 0, 2, 8, ... 3<sup>k-1</sup>-1 for cycle-leader algorithm.</li>
</ul>
<p>Here is a different O(N log N) in-place algorithm, which does not need number-theoretic proofs:</p>
<ol>
<li>Reinterpret your array as a sequence of single-element 2*2 matrices, transpose these matrices.</li>
<li>Reinterpret the result as a sequence of two-element 2*2 matrices and transpose them.</li>
<li>Continue while matrices size is less than the array size.</li>
<li>Now we only need to join the reordered parts together (exactly as in previous algorithm).</li>
<li>Exchange elements of left and right halves of the array (exactly as in previous algorithm).</li>
</ol>
<p>Example:</p>
<pre><code>0 1 2 3 4 5 6 7 (the original array)
[0 2] [1 3] [4 6] [5 7] (first transposition)
[0 2] [4 6] [1 3] [5 7] (second transposition)
</code></pre>
<p>This problem is just a special case of the <a href="http://en.wikipedia.org/wiki/In-place_matrix_transposition" rel="nofollow">In-place matrix transposition</a>.</p> | 2012-09-09 13:22:52.357000+00:00 | 2012-10-10 20:12:07.227000+00:00 | 2012-10-10 20:12:07.227000+00:00 | null | 12,338,654 | <p>Given an array with positive and negative integers, move all the odd indexed elements to the left and even indexed elements to the right.</p>
<p>The difficult part of the problem is to do it in-place while maintaining the order.</p>
<p>e.g.</p>
<pre><code>7, 5, 6, 3, 8, 4, 2, 1
</code></pre>
<p>The output should be: </p>
<pre><code>5, 3, 4, 1, 7, 6, 8, 2
</code></pre>
<p>If the order didn't matter, we could have used the partition() algorithm of quicksort.</p>
<p>How to do it in O( N )?</p> | 2012-09-09 11:24:34.707000+00:00 | 2022-04-24 02:52:12.607000+00:00 | 2012-09-09 13:07:39.057000+00:00 | performance|algorithm|data-structures|time-complexity|in-place | ['http://arxiv.org/abs/0805.1598', 'http://en.wikipedia.org/wiki/In-place_matrix_transposition'] | 2 |
67,123,504 | <p>I have recently (April 2021) published a paper regarding this topic that you can find on arXiv (<a href="https://arxiv.org/abs/2104.07225" rel="noreferrer">https://arxiv.org/abs/2104.07225</a>).</p>
<p>There, Table 1 lets you review previous approaches to the problem in question, and the whole manuscript is about long text classification, proposing a new method called Text Guide. This new method claims to improve performance over the naive and semi-naive text selection methods used in the paper (<a href="https://arxiv.org/abs/1905.05583" rel="noreferrer">https://arxiv.org/abs/1905.05583</a>) that was mentioned in one of the previous answers to this question.</p>
<p>Long story short about your options:</p>
<ol>
<li><p>Low computational cost: use naive/semi-naive approaches to select a part of the original text instance. Examples include choosing the first n tokens, or compiling a new text instance out of the beginning and end of the original text instance.</p>
</li>
<li><p>Medium to high computational cost: use recent transformer models (like Longformer) that have a 4096-token limit instead of 512. In some cases this will allow for covering the whole text instance, and the modified attention mechanism decreases the computational cost.</p>
</li>
<li><p>High computational cost: divide the text instance into chunks that fit a model like BERT with the ‘standard’ 512-token limit per instance, deploy the model on each part separately, and join the resulting vector representations (a minimal sketch of this chunk-and-join approach is shown right after this list).</p>
</li>
</ol>
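<p>A minimal sketch of option 3 (plain chunking plus a simple join by averaging the per-chunk <code>[CLS]</code> vectors). This is only an illustration of the idea, not the Text Guide method; the model name and the pooling choice are arbitrary:</p>
<pre><code>import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_long_text(text, chunk_len=510):
    ids = tokenizer.encode(text, add_special_tokens=False)
    chunk_reprs = []
    for i in range(0, len(ids), chunk_len):
        chunk = ([tokenizer.cls_token_id] + ids[i:i + chunk_len]
                 + [tokenizer.sep_token_id])
        with torch.no_grad():
            out = model(torch.tensor([chunk]))
        chunk_reprs.append(out.last_hidden_state[:, 0])  # [CLS] vector of this chunk
    return torch.cat(chunk_reprs).mean(dim=0)            # join: average over chunks
</code></pre>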
<p>Now, in my recently published paper there is a new method proposed called Text Guide. Text Guide is a text selection method that allows for improved performance when compared to naive or semi-naive truncation methods. As a text selection method, Text Guide doesn’t interfere with the language model, so it can be used to improve performance of models with ‘standard’ limit of tokens (512 for transformer models) or ‘extended’ limit (4096 as for instance for the Longformer model). Summary: Text Guide is a low-computational-cost method that improves performance over naive and semi-naive truncation methods. If text instances are exceeding the limit of models deliberately developed for long text classification like Longformer (4096 tokens), it can also improve their performance.</p> | 2021-04-16 10:24:13.223000+00:00 | 2021-04-16 10:24:13.223000+00:00 | null | null | 58,636,587 | <p>We know that BERT has a max length limit of tokens = 512, So if an article has a length of much bigger than 512, such as 10000 tokens in text
How can BERT be used?</p> | 2019-10-31 03:34:11.420000+00:00 | 2022-01-02 20:31:17.217000+00:00 | 2021-03-13 08:52:08.453000+00:00 | nlp|text-classification|bert-language-model | ['https://arxiv.org/abs/2104.07225', 'https://arxiv.org/abs/1905.05583'] | 2 |
61,404,922 | <p>In addition to chunking data and passing it to BERT, check the following new approaches.</p>
<p>There is new research on long-document analysis. Since you asked about BERT: a similar pre-trained transformer, <a href="https://github.com/allenai/longformer" rel="nofollow noreferrer">Longformer</a>, has recently been made available from AllenNLP (<a href="https://arxiv.org/abs/2004.05150" rel="nofollow noreferrer">https://arxiv.org/abs/2004.05150</a>). Check out this link for the paper.</p>
<p>The related work section also mentions some previous work on long sequences. Google them too. I'll suggest at least go through Transformer XL (<a href="https://arxiv.org/abs/1901.02860" rel="nofollow noreferrer">https://arxiv.org/abs/1901.02860</a>). As far I know it was one of the initial models for long sequences, so would be good to use it as a foundation before moving into 'Longformers'.</p> | 2020-04-24 09:13:25.613000+00:00 | 2022-01-02 20:31:17.217000+00:00 | 2022-01-02 20:31:17.217000+00:00 | null | 58,636,587 | <p>We know that BERT has a max length limit of tokens = 512, So if an article has a length of much bigger than 512, such as 10000 tokens in text
How can BERT be used?</p> | 2019-10-31 03:34:11.420000+00:00 | 2022-01-02 20:31:17.217000+00:00 | 2021-03-13 08:52:08.453000+00:00 | nlp|text-classification|bert-language-model | ['https://github.com/allenai/longformer', 'https://arxiv.org/abs/2004.05150', 'https://arxiv.org/abs/1901.02860'] | 3 |
59,778,726 | <p>This paper compared a few different strategies: <a href="https://arxiv.org/abs/1905.05583" rel="noreferrer">How to Fine-Tune BERT for Text Classification?</a>.
On the IMDb movie review dataset, they actually found that cutting out the middle of the text (rather than truncating the beginning or the end) worked best! It even outperformed more complex "hierarchical" approaches involving breaking the article into chunks and then recombining the results.</p>
<p>As another anecdote, I applied BERT to the Wikipedia Personal Attacks dataset <a href="https://youtu.be/_eSGWNqKeeY" rel="noreferrer">here</a>, and found that simple truncation worked well enough that I wasn't motivated to try other approaches :) </p> | 2020-01-16 22:26:47.543000+00:00 | 2020-04-27 10:19:54.683000+00:00 | 2020-04-27 10:19:54.683000+00:00 | null | 58,636,587 | <p>We know that BERT has a max length limit of tokens = 512, So if an article has a length of much bigger than 512, such as 10000 tokens in text
How can BERT be used?</p> | 2019-10-31 03:34:11.420000+00:00 | 2022-01-02 20:31:17.217000+00:00 | 2021-03-13 08:52:08.453000+00:00 | nlp|text-classification|bert-language-model | ['https://arxiv.org/abs/1905.05583', 'https://youtu.be/_eSGWNqKeeY'] | 2 |
64,948,268 | <p>You can leverage from the HuggingFace Transformers library that includes the following list of Transformers that work with long texts (more than 512 tokens):</p>
<ul>
<li><a href="https://huggingface.co/transformers/model_doc/reformer.html" rel="noreferrer">Reformer</a>: that combines the modeling capacity of a Transformer with an architecture that can be executed efficiently on long sequences.</li>
<li><a href="https://huggingface.co/transformers/model_doc/longformer.html" rel="noreferrer">Longformer</a>: with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer.</li>
</ul>
<p>Eight other recently proposed efficient Transformer models include Sparse Transformers (Child et al.,2019), Linformer (Wang et al., 2020), Sinkhorn Transformers (Tay et al., 2020b), Performers (Choromanski et al., 2020b), Synthesizers (Tay et al., 2020a), Linear Transformers (Katharopoulos et al., 2020), and BigBird (Zaheeret al., 2020).</p>
<p>The <a href="https://arxiv.org/pdf/2011.04006.pdf" rel="noreferrer">paper</a> from the authors from Google Research and DeepMind tries to make a comparison between these Transformers based on Long-Range Arena "aggregated metrics":</p>
<p><a href="https://i.stack.imgur.com/TMD6h.png?s=256" rel="noreferrer"><img src="https://i.stack.imgur.com/TMD6h.png?s=256" alt="Performance, speed, and memory footprint of the Transformers" /></a></p>
<p>They also suggest that <em><strong>Longformers have better performance than Reformer when it comes to the classification task</strong></em>.</p> | 2020-11-21 21:27:09.930000+00:00 | 2020-11-21 21:27:09.930000+00:00 | null | null | 58,636,587 | <p>We know that BERT has a max length limit of tokens = 512, So if an article has a length of much bigger than 512, such as 10000 tokens in text
How can BERT be used?</p> | 2019-10-31 03:34:11.420000+00:00 | 2022-01-02 20:31:17.217000+00:00 | 2021-03-13 08:52:08.453000+00:00 | nlp|text-classification|bert-language-model | ['https://huggingface.co/transformers/model_doc/reformer.html', 'https://huggingface.co/transformers/model_doc/longformer.html', 'https://arxiv.org/pdf/2011.04006.pdf', 'https://i.stack.imgur.com/TMD6h.png?s=256'] | 4 |
63,646,337 | <p>There is an approach used in the paper Defending Against Neural Fake News ( <a href="https://arxiv.org/abs/1905.12616" rel="nofollow noreferrer">https://arxiv.org/abs/1905.12616</a>)</p>
<p>Their generative model was producing outputs of 1024 tokens and they wanted to use BERT for human vs machine generations. They extended the sequence length which BERT uses simply by initializing 512 more embeddings and training them while they were fine-tuning BERT on their dataset.</p> | 2020-08-29 11:15:57.403000+00:00 | 2020-08-29 11:15:57.403000+00:00 | null | null | 58,636,587 | <p>We know that BERT has a max length limit of tokens = 512, So if an article has a length of much bigger than 512, such as 10000 tokens in text
How can BERT be used?</p> | 2019-10-31 03:34:11.420000+00:00 | 2022-01-02 20:31:17.217000+00:00 | 2021-03-13 08:52:08.453000+00:00 | nlp|text-classification|bert-language-model | ['https://arxiv.org/abs/1905.12616'] | 1 |
73,808,100 | <p>I hate to break it to you, but this is an image (full of sound and fury) signifying nothing. When you flatten the output of your <code>Conv2d</code> layers and pass this output through 2 <code>Linear</code> layers, you lose any spatial meaning to the neurons. A "linear" or "dense" layer connects every node in the previous layer to every node in the next layer, effectively discarding the relation between any neuron/node and a locale in the original input image.</p>
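<p>(As an aside, and as a rough sketch of my own that reuses the <code>picx</code> attribute your forward pass already stores, plus the <code>net</code> and <code>images</code> objects from your script: you can at least look at the convolutional feature maps directly. Why this is of limited help is discussed next.)</p>
<pre><code>import torch
import matplotlib.pyplot as plt

with torch.no_grad():
    net(images)                      # forward pass fills net.picx
fmaps = net.picx[0]                  # feature maps of the first image: (16, 5, 5)

fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for ch, ax in enumerate(axes.flat):
    ax.imshow(fmaps[ch].cpu().numpy())
    ax.set_title(f"ch {ch}")
    ax.axis("off")
plt.show()
</code></pre>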
<p>If you want to look at the parts of the image your network is attending to in order to make its decisions, you are going to want to look inside the convolutional layers. This is a nontrivial problem with many valid approaches to it. One popular method for doing this is <a href="https://arxiv.org/abs/1610.02391" rel="nofollow noreferrer">Grad-CAM</a>. If you want something simpler, you could try plotting each channel separately of the output of one of one of your convolutional layers; but even this will be difficult to interpret.</p> | 2022-09-22 00:13:47.673000+00:00 | 2022-09-22 11:22:06.200000+00:00 | 2022-09-22 11:22:06.200000+00:00 | null | 73,808,000 | <p>I am following <a href="https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html" rel="nofollow noreferrer">this tutorial</a>.</p>
<p>However, I decided to take the linear layers, and then make them convertible into an image of 1 * 19 * 19. In doing so, I get a bunch of pixels at random places.
<a href="https://i.stack.imgur.com/8OtTa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8OtTa.png" alt="enter image description here" /></a></p>
<p>Here's what my modified code looks like. To describe what I did, I essentially cut from 0-10 for the labels, then cut from 10 to the length of the array for my images, just so I am separating the labels and the jumbled images.</p>
<pre><code>import torch
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
batch_size = 4
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5,8 * 7 * 7) # 8 * 7 * 7
self.fc2 = nn.Linear(8 * 7 * 7, 6 * 8 * 8) # 6 * 8 * 8
self.fc3 = nn.Linear(6 * 8 * 8,19 * 19 + 10) # 19 * 19 + 10
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
#### IMPORTANT: this here is where I extract the picture of this layer, comparing against the
# regression layers and this here!
self.picx = self.pool(F.relu(self.conv2(x)))
####
x2 = torch.flatten(self.picx, 1) # flatten all dimensions except batch
x2 = F.relu(self.fc1(x2))
x2 = F.relu(self.fc2(x2))
x2 = self.fc3(x2)
return x2
net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(5): # loop over the dataset multiple times
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs).to(device)
loss = criterion(outputs, labels).to(device)
loss.backward()
optimizer.step()
running_loss = loss.item()
with torch.no_grad():
dataiter = iter(testloader)
images, labels = dataiter.next()
outputs = net.forward(images) ### pic = to the extracted convulotional layer, vs the regression layer.
_, predicted = torch.max(outputs[...,0:10], 1)
print(predicted, labels)
preds = torch.reshape(outputs[...,10:1454], (4,19,19))
plt.imshow(preds[0].detach().numpy())
plt.show()
correct = 0
total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
for data in testloader:
images, labels = data
# calculate outputs by running images through the network
outputs = net(images)
# the class with the highest energy is what we choose as prediction
_, predicted = torch.max(outputs[...,0:10], 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print(f'Accuracy of the network on the 10000 test images: {100 * correct // total} %')
</code></pre>
<p>What is this image signifying? And is there any way to get a mask of where the pixels specifically detect the label? In other words, I want to have an image where it draws pixels of where it is seeing a subject, like a dog or a cat.</p> | 2022-09-21 23:51:36.560000+00:00 | 2022-09-22 11:22:06.200000+00:00 | 2022-09-22 11:21:41.853000+00:00 | python|machine-learning|pytorch|conv-neural-network | ['https://arxiv.org/abs/1610.02391'] | 1 |
14,710,958 | <p>No need to dig into the source code. You only need to read the documentation. <code>?predict.randomForest</code> states that one of its arguments is called <code>predict.all</code>:</p>
<blockquote>
<p><strong>predict.all</strong> Should the predictions of all trees be kept?</p>
</blockquote>
<p>So setting that to <code>TRUE</code> will keep a prediction for each case, for each tree, which you can then use to calculate standard error for each case.</p>
<p>I have recently been made aware of <a href="http://arxiv.org/pdf/1311.4555.pdf" rel="noreferrer">this</a> paper by Stefan Wager, Trevor Hastie and Brad Efron which investigates more rigorously the idea of standard errors for the predictions generated by random forests (and other bagged predictors).</p> | 2013-02-05 15:26:02.533000+00:00 | 2014-01-31 22:17:56.663000+00:00 | 2014-01-31 22:17:56.663000+00:00 | null | 14,709,711 | <p>R package <em>randomForest</em> reports mean squared errors for each <strong>tree</strong> in the forest. I need, however, a measure of confidence for each <strong>case</strong> in the data. Since <em>randomForest</em> calculates the casewise predictions by averaging the predictions of the single trees, I guess that it should also be possible to calculate a casewise standard error and thus a confidence interval. Can this be done using the output randomForest object (if so: how?) or do I have to dig into the source code?</p> | 2013-02-05 14:22:43.210000+00:00 | 2014-01-31 22:17:56.663000+00:00 | null | r|random-forest|confidence-interval | ['http://arxiv.org/pdf/1311.4555.pdf'] | 1 |
9,524,215 | <p>The right softening length for a problem depends upon lots of things -- timestep, configuration, scale of the problems of interest, choice of integrator, etc. Generally speaking, if you want to suppress two-body relaxation you want some function of the Hill radius [as opposed to the physical radius, as it looks like you want to suppress the effects of close encounters, not mock up a collision.]</p>
<p>See <a href="http://arxiv.org/pdf/astro-ph/0011568v1.pdf" rel="nofollow">Walter Dehnen's paper</a> on the subject of choosing an optimal softening (although I'm dating myself a little by citing that; probably there are more up-to-date references).</p> | 2012-03-01 21:14:02.893000+00:00 | 2012-03-02 14:25:41.773000+00:00 | 2012-03-02 14:25:41.773000+00:00 | null | 9,523,928 | <p>I am in the process of writing a simplified version of All Pairs N-Body simulation. I am using CUDA/OpenGL to implement the algorithm and visualize the simulation. I am assuming that all bodies are spheres of uniform radius such that the mass of each sphere is the only difference(Assume that all spheres have radius == 1). Now, I would like to know how to choose the softening factor in the equation of Acceleration?
<img src="https://i.stack.imgur.com/Ebvec.jpg" alt="http://http.developer.nvidia.com/GPUGems3/elementLinks/680equ02.jpg"></p>
<p>What I am thinking of is that <code>epsilon == 2</code> is a good choice because it is the moment when two spheres collide in my case. Is that a reasonable choice? Is there a simple explanation of how to choose the softening factor?</p>
<p>I have looked at Chapter 31 of GPU Gems 3 but it doesn't say what the chosen value is and how you would choose a suitable value. I have looked at some research papers but I am unable to penetrate those academic papers on my own.</p> | 2012-03-01 20:55:14.513000+00:00 | 2012-03-02 14:25:41.773000+00:00 | 2012-03-01 21:04:06.247000+00:00 | opengl|cuda|physics|simulation | ['http://arxiv.org/pdf/astro-ph/0011568v1.pdf'] | 1 |
55,892,336 | <p>In FastText, the sentence embedding is basically an average of the word vectors, as is shown in one of the <a href="https://arxiv.org/pdf/1607.01759.pdf" rel="nofollow noreferrer">FastText papers</a>:</p>
<p><a href="https://i.stack.imgur.com/0jpTZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0jpTZ.png" alt="FastText figure"></a></p>
<p>Given this fact, zeroes might be a logical choice. But the answer depends on what you want to do with the embeddings.</p>
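<p>(A generic sketch of that averaging, with an all-zeros fallback for the empty string; it is not tied to any particular fastText wrapper and assumes a gensim-style <code>word in model</code> / <code>model[word]</code> lookup:)</p>
<pre><code>import numpy as np

def sentence_vector(text, model, dim):
    """Average the vectors of known words; return zeros for an empty string."""
    words = [w for w in text.split() if w in model]
    if not words:
        return np.zeros(dim)
    return np.mean([model[w] for w in words], axis=0)
</code></pre>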
<p>If you use them as an input for a classifier, it should be fine to select an arbitrary vector as a representation of empty string and the classifier will learn what that means. FastText also learns a special embedding for <code></s></code>, i.e., end of a sentence. This is another natural candidate for an embedding of the empty string, especially if you do similarity search.</p> | 2019-04-28 16:32:35.733000+00:00 | 2019-04-28 16:32:35.733000+00:00 | null | null | 55,890,153 | <p>I am trying to embed texts, using pre-trained fastText models. Some are empty. How would one replace them to make embedding possible? I was thinking about replacing them with dummy words, like that (docs being a pandas DataFrame object):
<code>docs = docs.replace(np.nan, 'unknown', regex=True)</code></p>
<p>However it doesn't really make sense as the choice of this word is arbitrary and it is not equivalent to having an empty string.</p>
<p>Otherwise, I could associate the 0 vector embedding to empty strings, or the average vector, but I am not convinced either would make sense, as the embedding operation is non-linear.</p> | 2019-04-28 12:17:31.490000+00:00 | 2019-04-28 16:32:35.733000+00:00 | null | machine-learning|nlp|artificial-intelligence|text-classification|fasttext | ['https://arxiv.org/pdf/1607.01759.pdf', 'https://i.stack.imgur.com/0jpTZ.png'] | 2 |
55,417,306 | <p>Current NMT models do not work with words in the traditional sense, but with so-called subwords. Segmentation of the text into subwords is done using statistical models which ensure that frequently used words or strings of characters remain together and less frequent words get split, ultimately down to individual characters. In this way, there are no out-of-vocabulary words. The segmentation is the same for both the source and the target language, so it is easy for the model to learn to copy.</p>
<p>Currently, most frequent approaches are <a href="https://github.com/rsennrich/subword-nmt" rel="nofollow noreferrer">Byte-Pair Encoding</a> and <a href="https://github.com/google/sentencepiece" rel="nofollow noreferrer">SentencePiece</a>, both of them are available via <code>pip</code> and easy to use.</p>
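<p>For illustration, a minimal SentencePiece round trip might look like the sketch below (the file names are placeholders); note how a rare word falls apart into smaller pieces instead of becoming an unknown token:</p>
<pre><code>import sentencepiece as spm

# train a subword model on your corpus (done once, offline)
spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="subword", vocab_size=8000)

sp = spm.SentencePieceProcessor(model_file="subword.model")
pieces = sp.encode("transmogrification", out_type=str)
print(pieces)        # e.g. ['▁trans', 'mo', 'gri', 'fication'] -- no OOV token
print(sp.decode(pieces))
</code></pre>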
<p>Google in their <a href="https://arxiv.org/abs/1609.08144" rel="nofollow noreferrer">2016 paper</a> claim to use a similar technique called WordPiece, however, they may have switched to SentencePiece which was made public by Google in 2018.</p> | 2019-03-29 12:18:47.753000+00:00 | 2019-04-03 12:52:05.540000+00:00 | 2019-04-03 12:52:05.540000+00:00 | null | 55,281,727 | <p>Can anyone explain a best method to handle unknown words in <strong>Neural machine translation</strong> instead of removing it and to know how google translate is handling names while the sentence is getting translate between any two languages ?</p>
<p>I'd really appreciate your response...Thanks!</p> | 2019-03-21 13:39:02.030000+00:00 | 2019-04-03 12:52:05.540000+00:00 | null | nlp|machine-translation|natural-language-processing | ['https://github.com/rsennrich/subword-nmt', 'https://github.com/google/sentencepiece', 'https://arxiv.org/abs/1609.08144'] | 3 |
5,061,599 | <p>I agree with all that mzabsky said in his answer.
Here are a few extra things:</p>
<p>I find it useful to make statements using a <code>Text</code> or <code>DisplayFormula</code> cell then manually group a Mathematica check/proof to the statement which is then collapsed and can be displayed when you want.</p>
<p>The <a href="http://reference.wolfram.com/mathematica/guide/PalettesMenu.html" rel="nofollow">Writing Assistant Palette</a> has quite a few useful constructions in it that you can learn from.</p>
<p>Finally, I found it really useful to make my own style sheet for a couple reasons:
1) the built-in ones are a bit ugly; 2) it really helps you to understand how the notebooks work.</p>
<p>To see examples of the stylesheet I made (which I don't claim to be perfect - I didn't bother making it work in all screen environments) look at some of the files in <a href="ftp://ftp.physics.uwa.edu.au/pub/MATH2200/2010/" rel="nofollow">ftp://ftp.physics.uwa.edu.au/pub/MATH2200/2010/</a>. I use a similar stylesheet in all of my notes - I have many research projects primarily contained in Mma notebooks, eg <a href="http://arxiv.org/abs/1102.3043" rel="nofollow">http://arxiv.org/abs/1102.3043</a>.</p>
<p>Like Mr Wizard, I also recommend <a href="http://home.comcast.net/~djmpark/" rel="nofollow">David Park's notes</a> as a starting point. Also, you should study stylesheets that you like by going to the Format menu and clicking "Edit Stylesheet". Don't forget to follow the links through the cascade of stylesheets (version 6 onwards).</p>
<p>To answer the questions in your edit: Once you are viewing a notebook's stylesheet, you can save it, edit it, and use it in your own documents. Stylesheets in
<code>$UserBaseDirectory/SystemFiles/FrontEnd/StyleSheets</code> are automatically available in the menu. You can then use that stylesheet in any notebook by simply selecting it from the menu.</p>
<p>The formatting in the screenshot that you posted is all contained in the stylesheet. This includes the grey background in the table.</p>
<h2>Addendum:</h2>
<p>When distributing notebooks to others, if the stylesheet is external from the notebook, then other people will not see it as you do. To include the stylesheet into the current notebook, you need code like</p>
<pre><code>ss = StyleDefinitions /. Options[EvaluationNotebook[]]
fn = ToFileName[{$UserBaseDirectory, "SystemFiles", "FrontEnd", "StyleSheets"}, ss]
If[FileExistsQ[fn],
style=Get[fn];SetOptions[EvaluationNotebook[],StyleDefinitions->style];,
Print["Can not find file"]]
</code></pre>
<p>(Assuming the file is stored in the conventional place.)</p>
<p>Here's an <a href="http://pastebin.com/ENLbMW9L" rel="nofollow">EmbedStylesheet.m</a> that is an improved version of the above.</p> | 2011-02-21 01:57:34.060000+00:00 | 2011-09-19 23:51:13.530000+00:00 | 2011-09-19 23:51:13.530000+00:00 | null | 5,058,854 | <p>I use Mma mainly to solve relatively small problems.</p>
<p>I want to start using it also to prepare my <strong>presentations and documents</strong>, but I am having trouble learning how to do it from the embedded help, and I guess some good resources may be available elsewhere.</p>
<p>Do you know any useful pointers (books, papers, videos ...)?</p>
<p>Do you have a "bag of tricks" to post here?</p>
<p><strong>Edit</strong></p>
<p>This question received two answers so far (@mzabsky's and Mr.Wizard's) and although both are useful, perhaps my concerns are much more basic. So I am posting an example of the <em>kind</em> of things I am unable to do (or understand how to discern how others did them).</p>
<p>I took the following example from <a href="http://www.mathematica-journal.com/issue/v10i1/contents/graph_draw/graph_draw.nb" rel="noreferrer">The Mathematica Journal</a> (the notebook at the left on the following image - click on the image to see full size):</p>
<p><a href="https://i.stack.imgur.com/iPBRD.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iPBRD.png" alt="Enter image description here"></a></p>
<p>So, some issues, just to get the idea of my troubles:</p>
<p>1) I copied the text to my .nb on the right, formatted it with the same style (text), but the appearance is different, so I guess the style definition is different. How can I copy the style definitions from one .nb to the other?</p>
<p>2) The table below the text block doesn't have an attached style. How was it formatted? Where is the background color defined?</p>
<p>I would like pointers to read (or videos to look, or whatever) about these issues. I don't want you to write down here a book on Mathematica formatting!</p>
<h2>Summary of the links posted in answers</h2>
<ul>
<li><a href="http://forums.wolfram.com/mathgroup/archive/2007/Jul/msg00914.html" rel="noreferrer">A Mathgroup thread</a> (John Browne) and <a href="http://forums.wolfram.com/mathgroup/archive/2007/Jul/msg00832.html" rel="noreferrer">here</a> (David Park and
Selwyn Hollis)</li>
<li><a href="http://bobueland.com/2008/06/14/create-a-style-sheet/" rel="noreferrer">Advice from Bob Ueland</a></li>
<li><a href="http://reference.wolfram.com/mathematica/guide/PalettesMenu.html" rel="noreferrer">The Writing Assistant Palette</a></li>
<li><a href="http://home.comcast.net/~djmpark/" rel="noreferrer">David Park's notes</a></li>
<li><a href="ftp://ftp.physics.uwa.edu.au/pub/MATH2200/2010/" rel="noreferrer">Simon's documents</a></li>
<li><a href="http://www.mathematica-users.org/webMathematica/wiki/wiki.jsp?pageName=Notebook:Tips.nb" rel="noreferrer">Tips for Mathematica SlideShow presenters</a></li>
<li><a href="http://library.wolfram.com/tips/notebookformatting.html" rel="noreferrer">Notebook formatting</a></li>
<li><a href="http://library.wolfram.com/infocenter/Conferences/7564/" rel="noreferrer">Presentations with Mathematica</a></li>
<li><a href="http://www.wolfram.com/broadcast/#Tutorials-Notebooks" rel="noreferrer">Videos</a></li>
<li><a href="http://library.wolfram.com/infocenter/TechNotes/5299/" rel="noreferrer">Tips for Mathematica Slide Show Presenters</a></li>
<li><a href="http://library.wolfram.com/infocenter/MathSource/6282/" rel="noreferrer">How to - Automatic Slide Show</a></li>
<li><a href="http://reference.wolfram.com/mathematica/howto/CreateALectureNotebook.html" rel="noreferrer">Create a Lecture Notebook</a></li>
</ul> | 2011-02-20 17:59:44.337000+00:00 | 2016-02-13 02:36:56.113000+00:00 | 2016-02-13 02:36:56.113000+00:00 | wolfram-mathematica | ['http://reference.wolfram.com/mathematica/guide/PalettesMenu.html', 'ftp://ftp.physics.uwa.edu.au/pub/MATH2200/2010/', 'http://arxiv.org/abs/1102.3043', 'http://home.comcast.net/~djmpark/', 'http://pastebin.com/ENLbMW9L'] | 5 |
58,883,589 | <p>You can use <code>ast.literal_eval()</code> within your JSON handling to make the "string formatted" list be interpreted as a list, and then reference it as you correctly stated.</p>
<p>Starting from your data, this worked for me:</p>
<pre><code>import ast
print(ast.literal_eval(data[0]['link'])[1]['href'])
</code></pre>
<p>Output:</p>
<pre><code>http://arxiv.org/pdf/1802.00209v1
</code></pre> | 2019-11-15 19:36:42.593000+00:00 | 2019-11-15 19:36:42.593000+00:00 | null | null | 58,883,532 | <p>I'm new to read json file in python. I want to get the url from the file. Here is my json file. </p>
<pre><code>[
{
"author": "[{'name': 'Ahmed Osman'}, {'name': 'Wojciech Samek'}]",
"day": 1,
"id": "1802.00209v1",
"link": "[{'rel': 'alternate', 'href': 'http://arxiv.org/abs/1802.00209v1', 'type': 'text/html'}, {'rel': 'related', 'href': 'http://arxiv.org/pdf/1802.00209v1', 'type': 'application/pdf', 'title': 'pdf'}]",
"month": 2,
"summary": "We propose an architecture for VQA which utilizes recurrent layers to\ngenerate visual and textual attention. The memory characteristic of the\nproposed recurrent attention units offers a rich joint embedding of visual and\ntextual features and enables the model to reason relations between several\nparts of the image and question. Our single model outperforms the first place\nwinner on the VQA 1.0 dataset, performs within margin to the current\nstate-of-the-art ensemble model. We also experiment with replacing attention\nmechanisms in other state-of-the-art models with our implementation and show\nincreased accuracy. In both cases, our recurrent attention mechanism improves\nperformance in tasks requiring sequential or relational reasoning on the VQA\ndataset.",
"tag": "[{'term': 'cs.AI', 'scheme': 'http://arxiv.org/schemas/atom', 'label': None}, {'term': 'cs.CL', 'scheme': 'http://arxiv.org/schemas/atom', 'label': None}, {'term': 'cs.CV', 'scheme': 'http://arxiv.org/schemas/atom', 'label': None}, {'term': 'cs.NE', 'scheme': 'http://arxiv.org/schemas/atom', 'label': None}, {'term': 'stat.ML', 'scheme': 'http://arxiv.org/schemas/atom', 'label': None}]",
"title": "Dual Recurrent Attention Units for Visual Question Answering",
"year": 2018
},
{
"author": "[{'name': 'Ji Young Lee'}, {'name': 'Franck Dernoncourt'}]",
"day": 12,
"id": "1603.03827v1",
"link": "[{'rel': 'alternate', 'href': 'http://arxiv.org/abs/1603.03827v1', 'type': 'text/html'}, {'rel': 'related', 'href': 'http://arxiv.org/pdf/1603.03827v1', 'type': 'application/pdf', 'title': 'pdf'}]",
"month": 3,
"summary": "Recent approaches based on artificial neural networks (ANNs) have shown\npromising results for short-text classification. However, many short texts\noccur in sequences (e.g., sentences in a document or utterances in a dialog),\nand most existing ANN-based systems do not leverage the preceding short texts\nwhen classifying a subsequent one. In this work, we present a model based on\nrecurrent neural networks and convolutional neural networks that incorporates\nthe preceding short texts. Our model achieves state-of-the-art results on three\ndifferent datasets for dialog act prediction.",
"tag": "[{'term': 'cs.CL', 'scheme': 'http://arxiv.org/schemas/atom', 'label': None}, {'term': 'cs.AI', 'scheme': 'http://arxiv.org/schemas/atom', 'label': None}, {'term': 'cs.LG', 'scheme': 'http://arxiv.org/schemas/atom', 'label': None}, {'term': 'cs.NE', 'scheme': 'http://arxiv.org/schemas/atom', 'label': None}, {'term': 'stat.ML', 'scheme': 'http://arxiv.org/schemas/atom', 'label': None}]",
"title": "Sequential Short-Text Classification with Recurrent and Convolutional\n Neural Networks",
"year": 2016
}
]
</code></pre>
<p>I read the file by using the code as follow.</p>
<pre><code>with open(args.filename, 'r') as myfile:
data = json.loads(myfile.read())
myfile.close()
</code></pre>
<p>And I wanted to get the second <code>href</code> by using <code>data[0]["link"][1]["href"]</code>. However the type of <code>data[0]["link"]</code> is string. I wonder how I can deal with this.</p> | 2019-11-15 19:32:07.057000+00:00 | 2019-11-15 19:36:42.593000+00:00 | null | python|json | [] | 0 |
5,572,899 | <p><a href="http://arxiv.org/abs/0911.2899" rel="noreferrer">Coding guidelines for Prolog</a> by Covington <em>et al.</em> is very recent; in fact, I believe it hasn't even been formally published yet. There was some discussion about it on the <a href="https://lists.iai.uni-bonn.de/mailman/listinfo.cgi/swi-prolog" rel="noreferrer">SWI-Prolog mailing list</a> some six weeks ago.</p> | 2011-04-06 21:01:28.013000+00:00 | 2011-04-06 21:01:28.013000+00:00 | null | null | 5,572,551 | <p>Is there a (relatively) current reference for best practices in Prolog? One suitable for giving to commercial Prolog developers who have not studied logic programming or advanced texts like "The Craft of Prolog"?</p>
<p>There are plenty of general tutorials but the only one on best practices I could find was this one from 1994:</p>
<p><a href="http://www.cs.auckland.ac.nz/~j-hamer/07.363/prolog-for-se.html" rel="nofollow noreferrer">http://www.cs.auckland.ac.nz/~j-hamer/07.363/prolog-for-se.html</a></p>
<p>There's also the individual question on comp.lang.prolog or here like the following:</p>
<p><a href="https://stackoverflow.com/questions/3014933/prolog-best-practice-checking-if-a-variable-is-already-bound">Prolog Best Practice: checking if a variable is already bound.</a> </p>
<p>But nothing more comprehensive, current, and suitable to the commercial developer.</p>
<p>This issue came up during an interview for a job that would require formal mentoring and code reviews for beginner to intermediate Prolog developers. Working as an experienced Prolog contractor, I frequently advised other developers who had advanced knowledge of their product and its domain, but were self-taught or who had limited Prolog training. But the mentoring was on an ad hoc basis, responding to their particular day-to-day issues.</p>
<p>Anyway, I've been away from Prolog development for a while and the interviewers question got me thinking that there should be such a reference. If there isn't one out there, I will likely create it myself if I get this job.</p> | 2011-04-06 20:29:30.697000+00:00 | 2011-04-06 21:01:56.597000+00:00 | 2017-05-23 11:58:30.997000+00:00 | prolog | ['http://arxiv.org/abs/0911.2899', 'https://lists.iai.uni-bonn.de/mailman/listinfo.cgi/swi-prolog'] | 2 |
50,033,137 | <p><strong>TL;DR</strong>: 1. Yes 2. Yes 3. No
<hr>
<strong>TS;WM</strong>:</p>
<ol>
<li><a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">Batch normalization</a> was a great invention by Sergey Ioffe and Christian Szegedy early 2015. Back in those days, battling vanishing or exploding gradients was an everyday problem. Read that article if you want to gain a deep understanding. but basically this quote from the abstract should give you some idea:</li>
</ol>
<blockquote>
<p>Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs.</p>
</blockquote>
<p>They did in fact first use batch normalization for DCNNs, which allowed them to beat human performance in the top-5 ImageNet classification, but any network where there are nonlinearities can benefit from batch normalization, including a network consisting only of fully-connected layers.</p>
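<p>For illustration, here is a minimal sketch of batch normalization in a plain fully-connected network (my own hypothetical example written with <code>tf.keras</code>, not code from the paper):</p>

<pre><code>import tensorflow as tf

# Batch norm sits between the linear layer and its nonlinearity,
# normalizing the pre-activations over each mini-batch.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, use_bias=False, input_shape=(20,)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Activation("relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
</code></pre>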
<ol start="2">
<li><p>Yes, it is used for shallow CNNs too. Any network with more than one layer can benefit from it, although it is true that deeper networks benefit more.</p></li>
<li><p>First of all, one-hot vectors should <em>never</em> be normalized. Normalization means you subtract the mean and divide by the standard deviation, thus creating a dataset with 0 mean and 1 variance. If you do this to a one-hot vector, then the cross-entropy loss calculation will be completely off. Second, there is no point in normalizing a concat layer separately, since it does not change the values, it just concatenates them. Batch normalization is done on the <em>input</em> of a layer, so the layer after the concat, which receives the concatenated values, can do it if necessary.</p></li>
</ol> | 2018-04-26 00:34:51.093000+00:00 | 2018-04-26 20:24:20.367000+00:00 | 2018-04-26 20:24:20.367000+00:00 | null | 50,025,263 | <p>1.) Batchnorm is always used in deep convolutional neural networks. But is it also used in non-CNNs, i.e. networks with just fully-connected layers?</p>
<p>2.) Is batchnorm used in shallow CNNs?</p>
<p>3.) If I have a CNN with an input image and an input array IN_array, the output is an array after the last fully-connected layer. I call this array FC_array. If I want to concat that FC_array with the IN_array.</p>
<pre><code>CONCAT_array = tf.concat(values=[FC_array, IN_array])
</code></pre>
<p>Is it useful to have a batchnorm after the concat layer? Or should that batchnorm be just after the FC_array, before the concat layer?</p>
<p>For information, the IN_array is a tf.one_hot() vector.</p>
<p>Thank you</p> | 2018-04-25 14:48:13+00:00 | 2018-04-26 20:24:20.367000+00:00 | null | tensorflow|neural-network | ['https://arxiv.org/abs/1502.03167'] | 1 |
66,648,881 | <p>The original <a href="https://arxiv.org/pdf/1512.02325.pdf" rel="nofollow noreferrer">SSD paper</a> that came out in 2016 was designed around 2 specific input image sizes, <code>300x300</code> and <code>512x512</code> (with a VGG-16 backbone; the MobileNet backbone in the TensorFlow config is a later variant chosen with speed as the main factor). You can try resizing the images to 512x512 and then training. However, the fact that the repo has <code>300x300</code> as the default value probably means that the model works best when the inputs are of that size and not any other.</p>
<p>There are however many other models that allow an input size of <code>640x640</code></p>
<p>In Tensorflow models zoo - version 1, you have <code>ssd_resnet50_v1</code> <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync.config" rel="nofollow noreferrer">config file</a> and in <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md" rel="nofollow noreferrer">version 2</a>, you have many other variants of SSD and EfficientDet which support <code>640x640</code> (with different backbones however).</p>
<p>You will probably get better results by training using the above-mentioned models</p> | 2021-03-16 03:54:44.087000+00:00 | 2021-03-16 03:54:44.087000+00:00 | null | null | 66,558,764 | <p>I am looking to use the TensorFlow Object Detection API to train SSD Inception-V2 from scratch on a custom dataset with resolution larger than 300x300.</p>
<p>I am referencing this as a sample config file: <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_inception_v2_coco.config" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_inception_v2_coco.config</a></p>
<p>I have successfully trained a 4-class custom model with okay performance by setting:
<code>num_classes: 4</code> and pointing the training data path to my custom dataset.</p>
<p>However, the input resolution was set to 300x300 with:</p>
<pre><code>image_resizer {
fixed_shape_resizer {
height: 300
width: 300
}
}
</code></pre>
<p>My dataset has pretty small objects and I want to increase the input resolution during training.</p>
<p>However, If I just change this setting to:</p>
<pre><code>image_resizer {
fixed_shape_resizer {
height: 640
width: 640
}
}
</code></pre>
<p>The model does not train at all and the loss stays stagnant. I saw a few other threads that talked about changing the anchor boxes and customizing the SSD network to be compatible with the new resolution.</p>
<p>I have tried several configurations of anchor boxes and model customizations but I can never get the model training. (It looks like it's training, but the loss doesn't go down and the inference produces garbage outputs.)</p>
<p>Has anyone trained SSD Inception-V2 with the TensorFlow object detection API on resolution other than 300x300 and can supply more concrete steps to execute the training?</p> | 2021-03-10 05:14:18.543000+00:00 | 2021-04-06 06:14:44.787000+00:00 | 2021-03-10 10:24:45.627000+00:00 | python|tensorflow|deep-learning|object-detection|object-detection-api | ['https://arxiv.org/pdf/1512.02325.pdf', 'https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync.config', 'https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md'] | 3 |
7,016,812 | <p>One thing to note is that the Fibonacci heap is not really "based" on the Fibonacci numbers (its structure doesn't look at all like it's related to Fibonacci numbers); it's the <em>analysis</em> of the Fibonacci heap where the Fibonacci numbers appear. You use the Fibonacci sequence to lower-bound the size of the subtree rooted at a node of a given degree, which bounds the maximum degree in a heap of n elements by a value related to the Fibonacci numbers, thus demonstrating that the amortized behavior of some of the operations can't be worse than O(log n).</p>
<p>As for your question about Pell numbers, I am not aware of any data structures that rely on the sequence (I actually hadn't encountered that sequence before!). The Fibonacci sequence arises so much instead of other similar recurrent sequences due to a lot of interesting properties of the sequence that aren't necessarily true of other recurrence relations; I wrote about this <a href="https://stackoverflow.com/questions/4571670/what-is-it-about-fibonacci-numbers/4571782#4571782">in my answer to this question</a>. I would assume that Pell numbers might be usable in some data structures or analyses, but the structure required to satisfy the recurrence relation doesn't seem to arise in any data structures or algorithms I have encountered.</p>
<p><strong>EDIT</strong>: I did find an interesting paper using Pell numbers in the analysis of certain sequences of values, which you can find <a href="http://arxiv.org/pdf/math/0205206v1" rel="nofollow noreferrer">here.</a></p>
<p>Hope this helps!</p> | 2011-08-10 19:44:57.243000+00:00 | 2011-08-10 19:44:57.243000+00:00 | 2017-05-23 12:21:49.283000+00:00 | null | 7,016,157 | <p>Is there any heap based on Pell sequence(or Pell number) instead of Fibonacci number(like the Fibonacci heap)?</p> | 2011-08-10 18:53:06.470000+00:00 | 2020-02-02 15:22:49.280000+00:00 | 2012-01-31 21:32:51.437000+00:00 | data-structures|fibonacci-heap | ['https://stackoverflow.com/questions/4571670/what-is-it-about-fibonacci-numbers/4571782#4571782', 'http://arxiv.org/pdf/math/0205206v1'] | 2 |
33,906,745 | <p><a href="http://arxiv.org/abs/1412.4021" rel="nofollow">This paper</a> presents a simple lexicon-based approach for tagging unknown-words. It shows that the lexicon-based approach obtains promising tagging results of unknown words on 13 languages, including Bulgarian, Czech, Dutch, English, French, German, Hindi, Italian, Portuguese, Spanish, Swedish, Thai and Vietnamese. </p>
<p>In addition, you can also find in the paper accuracy results (for known words and unknown words) of 3 POS and morphological taggers on the 13 languages. </p> | 2015-11-25 00:57:28.623000+00:00 | 2015-11-25 00:57:28.623000+00:00 | null | null | 25,330,660 | <p>What is the correct way to apply the unknown word handling.....</p>
<p>I am confused with in the things like first I have to check that word starts with Capital or first to check for the suffix?</p>
<p>Should I gather the knowledge of Capitalize word being a noun from corpus or assign them Noun Tag blindly....</p>
<p>What would be better approached?</p> | 2014-08-15 16:51:52.380000+00:00 | 2015-11-25 00:57:28.623000+00:00 | null | nlp | ['http://arxiv.org/abs/1412.4021'] | 1 |
2,856,219 | <p>Anyone who "proves" that Cantor's diagonal method doesn't work proves only their own incompetence. Cf. Wilfred Hodges' <a href="http://www.math.ucla.edu/~asl/bsl/0401/0401-001.ps" rel="nofollow noreferrer">An editor recalls some hopeless papers</a> for a surprisingly sympathetic explanation of what kind of thing is going wrong with these attempts.</p>
<p>You can provide speculative descriptions of hyper-Turing neural nets, just as you can provide speculative descriptions of other kinds of hyper-Turing computers: there's nothing incoherent in the idea that hypercomputation is possible, and speculative descriptions of mechanical hypercomputers have been made where the hypercomputer is stipulated to have infinitely fine engravings that encode an oracle for the halting problem: the existence of such a machine is consistent with Newtonian mechanics, though not quantum mechanics. Rather, the Church-Turing thesis says that they can't be constructed, and there are two reasons to believe the Church-Turing thesis is correct:</p>
<ol>
<li>No such machines have ever been constructed; and</li>
<li>There has been work connecting models of physics to models of computation, going back to work in the early 1970s by Robin Gandy, with recent work by people such as David Deutsch (e.g., <a href="http://arxiv.org/abs/math/9911150" rel="nofollow noreferrer">Machines, Logic and Quantum Physics</a>) and John Tucker (e.g., <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.8056&rep=rep1&type=pdf" rel="nofollow noreferrer">Computations via experiments with kinematic systems</a>), which argues that physics doesn't support hypercomputation.</li>
</ol>
<p>The main point is that the truth of the Church-Turing thesis is an empirical fact, and not a mathematical fact. It's one that we can have confidence is true, but not certainty.</p> | 2010-05-18 10:07:16+00:00 | 2010-05-18 10:07:16+00:00 | null | null | 2,804,401 | <p>I've read in Wikipedia that neural-network functions defined on a field of arbitrary real/rational numbers (along with algorithmic schemas, and the speculative `transrecursive' models) have more computational power than the computers we use today. Of course it was a page of russian wikipedia (ru.wikipedia.org) and that may be not properly proven, but that's not the only source of such.. rumors</p>
<p>Now, the thing that I really do not understand is: How can a string-rewriting machine (NNs are exactly string-rewriting machines just as Turing machines are; only programming language is different) be more powerful than a universally capable U-machine? </p>
<p>Yes, the descriptive instrument is really different, but the fact is that any function of such class can be (easily or not) turned to be a legal Turing-machine. Am I wrong? Do I miss something important?</p>
<p>What is the cause of people saying that? I do know that the phenomenon of undecidability is widely accepted today (though not consistently proven according to what I've read), but I do not really see the smallest chance of NNs being able to solve that particular problem.</p>
<p>Add-in: <code>Not consistently proven according to what I've read</code> - I meant that you might want to take a look at A. Zenkin's (a Russian mathematician) papers from after the mid-90s, where he persuasively postulates the wrongness of G. Cantor's concepts, including transfinite sets, uncountable sets, the diagonalization method (the method used in the proof of undecidability by Turing) and maybe others. Even Goedel's incompleteness theorems were only proven the right way in the 21st century. That's all just to plug Zenkin's work into the post, because I don't know how widespread that knowledge is in the CS community, so forgive me if that looked stupid.</p>
<p>Thank you!</p> | 2010-05-10 16:27:28.327000+00:00 | 2011-04-08 14:13:15.560000+00:00 | 2011-04-08 14:13:15.560000+00:00 | algorithm|computer-science|neural-network|turing-complete | ['http://www.math.ucla.edu/~asl/bsl/0401/0401-001.ps', 'http://arxiv.org/abs/math/9911150', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.8056&rep=rep1&type=pdf'] | 3 |
15,230,041 | <p>I have done some research into fast KD-tree implementations a few months ago, and I agree with Anony-Mousse that quality (and "weight" of libraries) varies strongly. Here are some of my findings:</p>
<p><a href="https://github.com/jmhodges/kdtree2" rel="noreferrer">kdtree2</a> is a little known and pretty straightforward KD-tree implementation I found to be quite fast for 3D problems, especially if you allow it to copy and re-sort your data. Also, it is small and very easy to incorporate and adapt. <a href="http://arxiv.org/abs/physics/0408067" rel="noreferrer">Here</a> is a paper on the implementation by the author (don't be put off by the mentioning of Fortran in the title). This is the library I ended up using. My colleagues benchmarked its speed for 3D points against <a href="http://www.vlfeat.org" rel="noreferrer">VLFeat's</a> KD-trees and another library I don't remember (many FLANN, see below) and it won.</p>
<p><a href="http://people.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN" rel="noreferrer">FLANN</a> has a reputation of being fast, and is used and recommended quite often recently. It aims at the higher dimensional case, where it offers approximate algorithms, but is also used in the <a href="http://pointclouds.org" rel="noreferrer">Point Cloud Library</a> which deals with 3D problems.</p>
<p>I did not experiment with CGAL since I found the library to be too heavyweight. I agree that CGAL has a good reputation, but I feel it occasionally suffers from oversophistication.</p> | 2013-03-05 17:14:41.380000+00:00 | 2013-03-05 18:47:25.773000+00:00 | 2013-03-05 18:47:25.773000+00:00 | null | 15,124,900 | <p>I am using CGAL's (the latest) KD-tree implementation for searching nearest neighbors in point sets. And also Wikipedia and other resources seem to suggest that KD-trees are the way to go. But somehow they are too slow and Wiki also suggests their worst-case time of O(n), which is far from ideal.</p>
<p>[BEGIN-EDIT]
<strong>I am now using "nanoflann", which is about 100-1000 times faster than the equivalent in CGAL for K-neighbor search. And I am using "Intel Embree" for raycasting, which is about 100-200 times faster than CGAL's AABB trees.</strong>
[END-EDIT]</p>
<p>My task looks like this:</p>
<p>I have a HUGE point set, say like up to a few 100 mio. points!! and their distribution is on surfaces of triangulated geometry (yes, a photon tracer). So one could say that their distribution is 2D in 3D space, because it is sparse in 3D but dense when looking at the surfaces... This could be the problem right? Because to me this seems to trigger the worst-case performance of a KD-tree which probably could deal much better with 3D dense point sets...</p>
<p>CGAL is quite good at what it does, so I have a bit of doubt that their implementation just sucks. Their AABB tree, which I am using for raytracing, burns a straight billion raytraces into the ground in a few minutes... That is remarkable I guess. But their KD-tree on the other hand can't even deal with a mio. points and 250k samples (point queries) in any reasonable time...</p>
<p>I came up with two solutions which kick the crap out of KD-trees:</p>
<p>1) Use texture maps to store the photons in a linked list on the geometry. This is always an O(1) operation, since you have to do the raycast anyway...</p>
<p>2) Use view dependent sliced hashsets. That is the farther away you get, the more coarse the hashsets get. So basically you can think of a 1024x1024x1024 raster in NDC coordinates, but with hashsets, to save memory in sparse areas. This basically has O(1) complexity and can be parallelized efficiently, both for inserts (micro-sharding) and queries (lock-free). </p>
<p>The former solution has the disadvantage that it is close to impossible to average over neighboring photon lists, which is important in darker regions to avoid noise.
The latter solution doesn't have this problem and should be on par feature wise with KD-trees, just that it has O(1) worst case performance, lol.</p>
<p>So what do you think? Bad KD-tree implementation? If not, is there something "better" than a KD-tree for bounded nearest neighbor queries? I mean I have nothing against my second solution above, but a "proven" data-structure that delivers similar performance would be nicer!</p>
<p>Thanks!</p>
<p>Here is the code (not compilable though) that I used:</p>
<pre><code>#include "stdafx.h"
#include "PhotonMap.h"
#pragma warning (push)
#pragma warning (disable: 4512 4244 4061)
#pragma warning (disable: 4706 4702 4512 4310 4267 4244 4917 4820 4710 4514 4365 4350 4640 4571 4127 4242 4350 4668 4626)
#pragma warning (disable: 4625 4265 4371)
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Orthogonal_incremental_neighbor_search.h>
#include <CGAL/basic.h>
#include <CGAL/Search_traits.h>
#include <CGAL/point_generators_3.h>
#pragma warning (pop)
struct PhotonicPoint
{
float vec[3];
const Photon* photon;
PhotonicPoint(const Photon& photon) : photon(&photon)
{
vec[0] = photon.hitPoint.getX();
vec[1] = photon.hitPoint.getY();
vec[2] = photon.hitPoint.getZ();
}
PhotonicPoint(Vector3 pos) : photon(nullptr)
{
vec[0] = pos.getX();
vec[1] = pos.getY();
vec[2] = pos.getZ();
}
PhotonicPoint() : photon(nullptr) { vec[0] = vec[1] = vec[2] = 0; }
float x() const { return vec[0]; }
float y() const { return vec[1]; }
float z() const { return vec[2]; }
float& x() { return vec[0]; }
float& y() { return vec[1]; }
float& z() { return vec[2]; }
bool operator==(const PhotonicPoint& p) const
{
return (x() == p.x()) && (y() == p.y()) && (z() == p.z()) ;
}
bool operator!=(const PhotonicPoint& p) const
{
return ! (*this == p);
}
};
namespace CGAL
{
template <>
struct Kernel_traits<PhotonicPoint>
{
struct Kernel
{
typedef float FT;
typedef float RT;
};
};
}
struct Construct_coord_iterator
{
typedef const float* result_type;
const float* operator()(const PhotonicPoint& p) const
{
return static_cast<const float*>(p.vec);
}
const float* operator()(const PhotonicPoint& p, int) const
{
return static_cast<const float*>(p.vec+3);
}
};
typedef CGAL::Search_traits<float, PhotonicPoint, const float*, Construct_coord_iterator> Traits;
typedef CGAL::Orthogonal_incremental_neighbor_search<Traits> NN_incremental_search;
typedef NN_incremental_search::iterator NN_iterator;
typedef NN_incremental_search::Tree Tree;
struct PhotonMap_Impl
{
Tree tree;
PhotonMap_Impl(const PhotonAllocator& allocator) : tree()
{
int counter = 0, maxCount = allocator.GetAllocationCounter();
for(auto& list : allocator.GetPhotonLists())
{
int listLength = std::min((int)list.size(), maxCount - counter);
counter += listLength;
tree.insert(std::begin(list), std::begin(list) + listLength);
}
tree.build();
}
};
PhotonMap::PhotonMap(const PhotonAllocator& allocator)
{
impl = std::make_shared<PhotonMap_Impl>(allocator);
}
void PhotonMap::Sample(Vector3 where, float radius, int minCount, std::vector<const Photon*>& outPhotons)
{
NN_incremental_search search(impl->tree, PhotonicPoint(where));
int count = 0;
for(auto& p : search)
{
if((p.second > radius) && (count > minCount) || (count > 50))
break;
count++;
outPhotons.push_back(p.first.photon);
}
}
</code></pre> | 2013-02-27 23:53:22.813000+00:00 | 2018-02-23 21:19:34.847000+00:00 | 2013-03-16 03:22:43.727000+00:00 | c++|data-structures|raytracing|kdtree|cgal | ['https://github.com/jmhodges/kdtree2', 'http://arxiv.org/abs/physics/0408067', 'http://www.vlfeat.org', 'http://people.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN', 'http://pointclouds.org'] | 5 |
53,984,401 | <p><code>Q1: Why is my network still trainable with a *wrong* loss function?</code></p>
<p>A1: Because your network is optimized with gradient descent, which does not care which loss function is used as long as it is differentiable. This fact reveals how difficult it is to debug a network when it doesn't work, because the problem is not a code bug (e.g. a memory leak or numerical overflow) but a choice that is not scientifically sound (e.g. your regression target is in the range (0,100), but you use <code>sigmoid</code> as the activation function of the last dense layer).</p>
<p><code>Q2: How come `sigmoid` gives better performance than `softmax`?</code></p>
<p>A2: First, using the <code>sigmoid</code> loss function means training 10 binary classifiers, one for each class (i.e. the classic one-vs.-all or one-vs.-rest setting), and thus it is also technically sound. </p>
<p>The only difference between <code>sigmoid</code> and <code>softmax</code> is that the sum of the class-wise predicted probabilities is always 1 for the <code>softmax</code> network, while it is not necessarily 1 for the <code>sigmoid</code> network. In other words, deciding on a single label at test time can be ambiguous for the <code>sigmoid</code> network. </p>
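<p>A tiny numerical illustration of that difference (made-up logits, just to show the point about the sums):</p>

<pre><code>import numpy as np

logits = np.array([2.0, -1.0, 0.5])                  # hypothetical scores for 3 classes
sigmoid = 1.0 / (1.0 + np.exp(-logits))              # independent one-vs-rest probabilities
softmax = np.exp(logits) / np.exp(logits).sum()      # a single distribution over the classes

print(sigmoid.sum())   # generally != 1
print(softmax.sum())   # always 1
</code></pre>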
<p>Regarding why <code>sigmoid</code> is better than <code>softmax</code> here, it is related to many aspects and difficult to analyze without careful study. One possible explanation is that <code>sigmoid</code> treats rows in the weight matrix of the last dense layer independently, while <code>softmax</code> treats them dependently. Therefore, <code>sigmoid</code> may handle samples with contradicting gradient directions better. Another thought is that maybe you should try the recent <a href="https://arxiv.org/pdf/1809.04157.pdf" rel="nofollow noreferrer">heated-up softmax</a>.</p>
<p>Finally, if you believe <code>sigmoid</code> version gives you better performance but you still want a <code>softmax</code> network, you may reuse all the layers until the last dense layer in the <code>sigmoid</code> network and finetune a new <code>softmax</code> layer, or use both losses just as in a multi-task problem.</p> | 2018-12-31 06:40:08.960000+00:00 | 2018-12-31 06:40:08.960000+00:00 | null | null | 53,981,991 | <p>I am running an U-net as defined below: </p>
<pre><code>inputs = Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
s = Lambda(lambda x: x / 255) (inputs)
c1 = Conv2D(8, (3, 3), activation='relu', padding='same') (s)
c1 = Conv2D(8, (3, 3), activation='relu', padding='same') (c1)
p1 = MaxPooling2D((2, 2)) (c1)
c2 = Conv2D(16, (3, 3), activation='relu', padding='same') (p1)
c2 = Conv2D(16, (3, 3), activation='relu', padding='same') (c2)
p2 = MaxPooling2D((2, 2)) (c2)
c3 = Conv2D(32, (3, 3), activation='relu', padding='same') (p2)
c3 = Conv2D(32, (3, 3), activation='relu', padding='same') (c3)
p3 = MaxPooling2D((2, 2)) (c3)
c4 = Conv2D(64, (3, 3), activation='relu', padding='same') (p3)
c4 = Conv2D(64, (3, 3), activation='relu', padding='same') (c4)
p4 = MaxPooling2D(pool_size=(2, 2)) (c4)
c5 = Conv2D(128, (3, 3), activation='relu', padding='same') (p4)
c5 = Conv2D(128, (3, 3), activation='relu', padding='same') (c5)
u6 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same') (c5)
u6 = concatenate([u6, c4])
c6 = Conv2D(64, (3, 3), activation='relu', padding='same') (u6)
c6 = Conv2D(64, (3, 3), activation='relu', padding='same') (c6)
u7 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same') (c6)
u7 = concatenate([u7, c3])
c7 = Conv2D(32, (3, 3), activation='relu', padding='same') (u7)
c7 = Conv2D(32, (3, 3), activation='relu', padding='same') (c7)
u8 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same') (c7)
u8 = concatenate([u8, c2])
c8 = Conv2D(16, (3, 3), activation='relu', padding='same') (u8)
c8 = Conv2D(16, (3, 3), activation='relu', padding='same') (c8)
u9 = Conv2DTranspose(8, (2, 2), strides=(2, 2), padding='same') (c8)
u9 = concatenate([u9, c1], axis=3)
c9 = Conv2D(8, (3, 3), activation='relu', padding='same') (u9)
c9 = Conv2D(8, (3, 3), activation='relu', padding='same') (c9)
outputs = Conv2D(10, (1, 1), activation='sigmoid') (c9)
model = Model(inputs=[inputs], outputs=[outputs])
model.compile(optimizer='Adamax', loss = dice, metrics = [mIoU])
</code></pre>
<p>Notice that I'm doing multi-class prediction on ten classes. The inputs are <code>256x256x3</code> (RGB) images and the ground truths are binary masks of size <code>256x256x10</code> since the <code>depth=num_classes=10</code>. My question is, I accidentally forgot to change the activation function from <code>sigmoid</code> to <code>softmax</code> and ran the network. The network still ran. How is this possible?? Is it because it's treating each binary mask independently? </p>
<p>More intriguingly, the network actually yielded better results when using <code>sigmoid</code> as opposed to when I ran it with <code>softmax</code>.</p> | 2018-12-30 22:37:57.957000+00:00 | 2018-12-31 06:40:08.960000+00:00 | 2018-12-31 06:24:20.047000+00:00 | keras|neural-network | ['https://arxiv.org/pdf/1809.04157.pdf'] | 1 |
53,688,493 | <p>This is essentially a 3 part question: </p>
<ol>
<li>How to estimate time-varying effects?</li>
<li>What is the difference between different specifications of time-varying effects using <code>survival::coxph</code> function</li>
<li>How to decide what shape the time-variation has, i.e., linear, logarithmic, ...</li>
</ol>
<p>I will try to answer these questions in the following using the veteran data example, which is featured in section 4.2 of the <a href="https://cran.microsoft.com/web/packages/survival/vignettes/timedep.pdf" rel="noreferrer">vignette on time-dependent covariates and time-dependent coefficients</a> (also known as time-varying effects) in the <strong><code>survival</code></strong> package: </p>
<pre><code>library(dplyr)
library(survival)
data("veteran", package = "survival")
veteran <- veteran %>%
mutate(
trt = 1L * (trt == 2),
prior = 1L * (prior == 10))
head(veteran)
#> trt celltype time status karno diagtime age prior
#> 1 0 squamous 72 1 60 7 69 0
#> 2 0 squamous 411 1 70 5 64 1
#> 3 0 squamous 228 1 60 3 38 0
#> 4 0 squamous 126 1 60 9 63 1
#> 5 0 squamous 118 1 70 11 65 1
#> 6 0 squamous 10 1 20 5 49 0
</code></pre>
<h2>1. How to estimate time-varying effects</h2>
<p>There are different popular methods and implementations, e.g. <code>survival::coxph</code>, <code>timereg::aalen</code> or using GAMs after appropriate data transformation (see below).</p>
<p>Although the specific methods and their implementations differ, a general idea is to create a long-format data set where </p>
<ul>
<li>the follow-up is partitioned into intervals</li>
<li>for each subject, the status is 0 in all intervals except the last (if an event) </li>
<li>the time variable is updated in each interval</li>
</ul>
<p>Then, the time (or a transformation of time, e.g. log(t)) is simply a covariate and time-varying effects can be estimated by an interaction between the covariate of interest and the (transformed) covariate of time. </p>
<p>If the functional form of the time-variation is known, you can use the <code>tt()</code> aproach: </p>
<pre><code>cph_tt <- coxph(
formula = Surv(time, status) ~ trt + prior + karno + tt(karno),
data = veteran,
tt = function(x, t, ...) x * log(t + 20))
</code></pre>
<h2>2. What is the difference between different specifications of time-varying effects using <code>survival::coxph</code> function</h2>
<p>There is no difference. I assume the <code>tt()</code> function is simply a short-cut for the estimation via transformation to the long-format. You can verify that the two approaches are equivalent using the code below: </p>
<h2>transform to long format</h2>
<pre><code>veteran_long <- survSplit(Surv(time, status)~., data = veteran, id = "id",
cut = unique(veteran$time)) %>%
mutate(log_time = log(time + 20))
head(veteran_long) %>% select(id, trt, age, tstart, time, log_time, status)
#> id trt age tstart time log_time status
#> 1 1 0 69 0 1 3.044522 0
#> 2 1 0 69 1 2 3.091042 0
#> 3 1 0 69 2 3 3.135494 0
#> 4 1 0 69 3 4 3.178054 0
#> 5 1 0 69 4 7 3.295837 0
#> 6 1 0 69 7 8 3.332205 0
cph_long <- coxph(formula = Surv(tstart, time, status)~
trt + prior + karno + karno:log_time, data = veteran_long)
## models are equivalent, just different specification
cbind(coef(cph_long), coef(cph_tt))
#> [,1] [,2]
#> trt 0.01647766 0.01647766
#> prior -0.09317362 -0.09317362
#> karno -0.12466229 -0.12466229
#> karno:log_time 0.02130957 0.02130957
</code></pre>
<h2>3. How to decide what shape the time-variation has?</h2>
<p>As mentioned before, time-varying effects are simply interactions of a covariate <code>x</code> and time <code>t</code>, thus time-varying effects can have different specifications, equivalent to interactions in standard regression models, e.g.</p>
<ul>
<li><code>x*t</code>: linear covariate effect, linearly time-varying effect </li>
<li><code>f(x)*t</code>: non-linear covariate effect, linearly time-varying effect</li>
<li><code>f(t)*x</code>: linear covariate effect, non-linearly time-varying effect (for categorical x this essentially represents a stratified baseline hazard) </li>
<li><code>f(x, t)</code>: non-linear, non-linearly time-varying effect</li>
</ul>
<p>In each case, the functional form of the effect <code>f</code> can either be estimated from the data or prespecified (e.g. <code>f(t)*x = karno * log(t + 20)</code> above). </p>
<p>In most cases you would prefer to estimate <code>f</code> from the data. The support for the (penalized) estimation of such effects is to my knowledge limited in the <strong><code>survival</code></strong> package. However, you can use <code>mgcv::gam</code> to estimate any of the effects specified above (after appropriate data transformation). An example is given below and shows that the effect of <code>karno</code> goes towards 0 as time progresses, regardless of the Karnofsky score at the beginning of the follow-up (see <a href="https://adibender.github.io/pammtools/articles/tveffects.html" rel="noreferrer">here</a> for details and also Section 4.2 <a href="https://arxiv.org/pdf/1806.01042.pdf" rel="noreferrer">here</a>): </p>
<pre><code>library(pammtools)
# data transformation
ped <- as_ped(veteran, Surv(time, status)~., max_time = 400)
# model
pam <- mgcv::gam(ped_status ~ s(tend) + trt + prior + te(tend, karno, k = 10),
data = ped, family = poisson(), offset = offset, method = "REML")
p_2d <- gg_tensor(pam)
p_slice <- gg_slice(ped, pam, "karno", tend = unique(tend), karno = c(20, 50, 80), reference = list(karno = 60))
gridExtra::grid.arrange(p_2d, p_slice, nrow = 1)
</code></pre>
<p><a href="https://i.stack.imgur.com/llPOl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/llPOl.png" alt=""></a></p> | 2018-12-09 01:00:58.237000+00:00 | 2019-01-31 17:20:54.153000+00:00 | 2019-01-31 17:20:54.153000+00:00 | null | 45,870,975 | <p>In R, what is the best way to incorporate the interaction term between a covariate and time, when the proportionality test (with coxph) shows that the proportionality assumption in the Cox model is violated? I know that you can either use strata or an interaction with time term, I'm interested in the latter. I haven't been able to find a definitive clear explanation with examples on how to do this on the internet. In the most common example using the Rossi dataset, Fox suggested to do,</p>
<pre><code>coxph(formula = Surv(start, stop, arrest.time) ~ fin + age + age:stop + prio, data = Rossi.2)
</code></pre>
<p>Is there a difference between modeling with age:stop versus age:start? Does the formula have to use this format? If I use the Surv with the two parameter format, would the following also make sense?</p>
<pre><code>coxph(formula = Surv(week, arrest) ~ fin + age + age:week + prio, data = Rossi)
</code></pre>
<p>Or you have to split the dataset and use the Surv(start,stop,event) method?
Also, there is the time-transform method, so,</p>
<pre><code>coxph(formula = Surv(week, arrest) ~ fin + age + tt(age) + prio, data = Rossi, tt=function(x,t,...) x*t)
</code></pre>
<p>I know that some people would prefer model with the <code>log(t)</code> instead of <code>t</code> here. But which one of these is the correct method to model interaction with time? Do these all refer to the same/different underlying statistical model? And the end, are all modeling (for the interaction term): <code>h(t) = h0(t)exp(b*X*t)</code>?</p> | 2017-08-24 21:13:20.237000+00:00 | 2019-01-31 17:20:54.153000+00:00 | 2017-09-16 04:26:03.293000+00:00 | r|survival-analysis|cox-regression|survival | ['https://cran.microsoft.com/web/packages/survival/vignettes/timedep.pdf', 'https://adibender.github.io/pammtools/articles/tveffects.html', 'https://arxiv.org/pdf/1806.01042.pdf', 'https://i.stack.imgur.com/llPOl.png'] | 4 |
56,760,939 | <p><em>edit 03/30/2020: adding a link to the <a href="https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/scikit_learn_randomforest/Sklearn_on_SageMaker_end2end.ipynb" rel="noreferrer">SageMaker Sklearn random forest demo</a></em></p>
<p><br/></p>
<p>in SageMaker you have 3 options to write scientific code:</p>
<ul>
<li><strong>Built-in algorithms</strong></li>
<li><strong>Open-source pre-written containers</strong> (available
for sklearn, tensorflow, pytorch, mxnet, chainer. Keras can be
written in the tensorflow and mxnet containers)</li>
<li><strong>Bring your own container</strong> (for R for example)</li>
</ul>
<p><strong>At the time of writing this post there is no random forest classifier nor regressor in the built-in library</strong>. There is an algorithm called <a href="https://aws.amazon.com/blogs/machine-learning/use-the-built-in-amazon-sagemaker-random-cut-forest-algorithm-for-anomaly-detection/" rel="noreferrer">Random Cut Forest</a> in the built-in library but it is an unsupervised algorithm for anomaly detection, a different use-case than the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html" rel="noreferrer">scikit-learn random forest</a> used in a supervised fashion (also <a href="https://stackoverflow.com/questions/56728230/aws-sagemaker-randomcutforest-rcf-vs-scikit-lean-randomforest-rf?noredirect=1&lq=1">answered in StackOverflow here</a>). But it is easy to use the open-source pre-written scikit-learn container to implement your own. There is a <a href="https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/scikit_learn_randomforest/Sklearn_on_SageMaker_end2end.ipynb" rel="noreferrer">demo showing how to use Sklearn's random forest in SageMaker</a>, with training orchestration both from the high-level SDK and <code>boto3</code>. You can also use this other <a href="https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/scikit_learn_iris/Scikit-learn%20Estimator%20Example%20With%20Batch%20Transform.ipynb" rel="noreferrer">public sklearn-on-sagemaker demo</a> and change the model. A benefit of the pre-written containers over the "Bring your own" option is that the Dockerfile is already written, and so is the web-serving stack.</p>
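<p>As a rough sketch of what the script-mode approach looks like (illustrative only - the file layout and hyperparameters here are made up, and the exact estimator arguments depend on your SDK version, so follow the linked demos for the authoritative API), the entry point you hand to the pre-built scikit-learn container is just ordinary sklearn code that reads from the SageMaker environment variables:</p>

<pre><code># train.py - executed inside the SageMaker scikit-learn container,
# which sets SM_MODEL_DIR and SM_CHANNEL_TRAIN for you
import argparse, os
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--n-estimators", type=int, default=100)
    parser.add_argument("--model-dir", default=os.environ.get("SM_MODEL_DIR", "."))
    parser.add_argument("--train", default=os.environ.get("SM_CHANNEL_TRAIN", "."))
    args = parser.parse_args()

    df = pd.read_csv(os.path.join(args.train, "train.csv"))   # assumed data layout
    X, y = df.drop(columns="label"), df["label"]

    clf = RandomForestClassifier(n_estimators=args.n_estimators, n_jobs=-1)
    clf.fit(X, y)

    joblib.dump(clf, os.path.join(args.model_dir, "model.joblib"))
</code></pre>

<p>The script is then passed as the <code>entry_point</code> of the SDK's <code>SKLearn</code> estimator (see the demos above for the exact call), which takes care of spinning up the container, feeding it the S3 training channel and storing the resulting <code>model.joblib</code>.</p>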
<p>Regarding your surprise that Random Forest is not featured in the built-in algos, the library and its 18 algos already cover a rich set of use-cases. For example for supervised learning over structured data (the usual use-case for the random forest), if you want to stick to the built-ins, depending on your priorities (accuracy, inference latency, training scale, costs...) you can use SageMaker XGBoost (XGBoost has been winning tons of datamining competitions - every winning team in the top10 of the KDDcup 2015 used XGBoost <a href="https://arxiv.org/pdf/1603.02754.pdf" rel="noreferrer">according to the XGBoost paper</a> - and scales well) and linear learner, which is extremely fast at inference and can be trained at scale, in mini-batch fashion over GPU(s). <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/fact-machines-howitworks.html" rel="noreferrer">Factorization Machines</a> (linear + 2nd degree interaction with weights being column embedding dot-products) and <a href="https://aws.amazon.com/blogs/machine-learning/amazon-sagemaker-supports-knn-classification-and-regression/" rel="noreferrer">SageMaker kNN</a> are other options. Also, things are not frozen in stone, and the list of built-in algorithms is being improved fast.</p> | 2019-06-25 19:38:06.350000+00:00 | 2020-03-30 16:46:46.547000+00:00 | 2020-03-30 16:46:46.547000+00:00 | null | 56,740,609 | <p>I am looking to recreate a randomforest model built locally, and deploy it through sagemaker. The model is very basic, but for comparison I would like to use the same in sagemaker. I don't see randomforest among sagemaker's built in algorithms (which seems weird) - is my only option to go the route of <a href="https://aws.amazon.com/blogs/machine-learning/train-and-host-scikit-learn-models-in-amazon-sagemaker-by-building-a-scikit-docker-container/" rel="nofollow noreferrer">deploying my own custom model</a>? Still learning about containers, and it seems like a lot of work for something that is just a simple randomforestclassifier() call locally. I just want to baseline against the out of the box randomforest model, and show that it works the same when deployed through AWS sagemaker.</p> | 2019-06-24 16:27:13.193000+00:00 | 2020-03-30 16:46:46.547000+00:00 | null | amazon-web-services|docker|containers|random-forest|amazon-sagemaker | ['https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/scikit_learn_randomforest/Sklearn_on_SageMaker_end2end.ipynb', 'https://aws.amazon.com/blogs/machine-learning/use-the-built-in-amazon-sagemaker-random-cut-forest-algorithm-for-anomaly-detection/', 'https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html', 'https://stackoverflow.com/questions/56728230/aws-sagemaker-randomcutforest-rcf-vs-scikit-lean-randomforest-rf?noredirect=1&lq=1', 'https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/scikit_learn_randomforest/Sklearn_on_SageMaker_end2end.ipynb', 'https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/scikit_learn_iris/Scikit-learn%20Estimator%20Example%20With%20Batch%20Transform.ipynb', 'https://arxiv.org/pdf/1603.02754.pdf', 'https://docs.aws.amazon.com/sagemaker/latest/dg/fact-machines-howitworks.html', 'https://aws.amazon.com/blogs/machine-learning/amazon-sagemaker-supports-knn-classification-and-regression/'] | 9 |
65,621,082 | <p>The other two answers are good. Another option is to use more recent packages that are purpose-built for high-dimensional / high-volume data sets. They run their code using lower-level languages (C++ and/or Java) and in certain cases use parallelization.</p>
<p>I'd recommend taking a look into these three:</p>
<ul>
<li>ranger (uses C++ compiler)</li>
<li>randomForestSRC (uses C++ compiler)</li>
<li>h2o (Java compiler - needs Java version 8 or higher)</li>
</ul>
<p>Also, some additional reading to help you decide which package to choose: <a href="https://arxiv.org/pdf/1508.04409.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1508.04409.pdf</a></p>
<p>Page 8 shows benchmarks of ranger against randomForest as data size grows - ranger is WAY faster due to linear growth in runtime, rather than randomForest's non-linear growth, as tree/sample/split/feature sizes rise.</p>
<p>Good Luck!</p> | 2021-01-07 22:52:47.347000+00:00 | 2021-01-07 22:52:47.347000+00:00 | null | null | 34,706,654 | <p>I have to make a regression with randomforest in R. My problem is that my dataframe is huge: I have 12 variables and more than 400k entries. When I try - the code is written in the bottom - to get a randomForest regression the system takes many hours to process the data: after 5, 6 hours of calculation, I am obliged to stop the operation without any output. Someone can suggests me how I can get it faster?
Thanks</p>
<pre><code>library(caret)
library(randomForest)
dataset <- read.csv("/home/anonimo/Modelli/total_merge.csv", header=TRUE)
dati <- data.frame(dataset)
attach(dati)
trainSet <- dati[2:107570,]
testSet <- dati[107570:480343,]
output.forest <- randomForest(dati$Clip_pm25 ~ dati$e_1 + dati$Clipped_so + dati$Clip_no2 + dati$t2m_1 + dati$tp_1 + dati$Clipped_nh + dati$Clipped_co + dati$Clipped_o3 + dati$ssrd_1 + dati$Clipped_no + dati$Clip_pm10 + dati$sp_1, data=trainSet, ntree=250)
</code></pre> | 2016-01-10 14:40:48.827000+00:00 | 2021-01-07 22:52:47.347000+00:00 | null | r|regression|random-forest | ['https://arxiv.org/pdf/1508.04409.pdf'] | 1 |
37,294,060 | <p>Try the Bayesian blocks method, here is the paper: <strong>J. D. Scargle, J. P. Norris, B. Jackson, J. Chiang, (2012) arXiv:1207.5578.</strong></p>
<p>It is a rather long paper, but you can skip to the place where they include a MATLAB implementation.</p>
<p>What it does is that it splits your time series into time blocks in which the values fluctuate around some mean, that mean being different in each block.</p>
<p>Once you have the blocks, the ones which are low can be scaled up to remove the drops.</p>
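<p>If you are working in Python rather than MATLAB, the same algorithm is available as <code>astropy.stats.bayesian_blocks</code>. A rough sketch of the "find the blocks, then lift the low ones" idea (synthetic data; here I shift each block to a common baseline rather than truly re-scaling it):</p>

<pre><code>import numpy as np
from astropy.stats import bayesian_blocks

# synthetic signal with a drop in the middle
t = np.arange(200.0)
y = np.where((t > 80) & (t < 140), 4.0, 10.0) + np.random.normal(0, 0.3, t.size)

# change-point edges from the Scargle et al. algorithm
edges = bayesian_blocks(t, y, sigma=0.3, fitness="measures")

# per-block means, then shift low blocks up to the overall baseline
block = np.digitize(t, edges[1:-1])
means = np.array([y[block == b].mean() for b in range(block.max() + 1)])
baseline = np.median(means)
corrected = y + (baseline - means[block])
</code></pre>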
<p>NOTE: there is a parameter ncp_prior , by varying it you can change the sensitivity of the method, so that it doesn't get fooled by the fluctuations, but reproduces the drop.</p> | 2016-05-18 08:24:22.033000+00:00 | 2016-05-18 08:24:22.033000+00:00 | null | null | 35,554,971 | <p>I am wondering what would be the most efficient way to correct signal if it drops out significantly in some period of time. Like in the figure green signal dropped out between around 16:26 and 19:16 and I would like to elevate it to the same level like before 16:26 and after 19:16 using statistics. </p>
<p><a href="http://i.stack.imgur.com/FRI1Z.png" rel="nofollow">Please find here the figure</a></p>
<p>Thanks in advance!</p> | 2016-02-22 13:29:59.543000+00:00 | 2016-05-18 08:24:22.033000+00:00 | 2016-02-22 13:42:25.840000+00:00 | machine-learning|statistics|data-analysis | [] | 0 |
25,857,571 | <p>Remembering that R works well on vectors, a first step is to think of 'Words' rather than 'Word'</p>
<pre><code>## constructor, accessors, subset (also need [[, [<-, [[<- methods)
.Words <- setClass("Words",
representation(words="character", parts="character"))
words <- function(x) x@words
parts <- function(x) x@parts
setMethod("length", "Words", function(x) length(words(x)))
setMethod("[", c("Words", "ANY", "missing"), function(x, i, j, ...) {
initialize(x, words=words(x)[i], parts=parts(x)[i], ...)
})
## validity
setValidity("Words", function(object) {
if (length(words(object)) == length(parts(object)))
NULL
else
"'words()' and 'parts()' are not the same length"
})
</code></pre>
<p>@nicola's suggestion that one have a list of words has been formalized in the <a href="http://bioconductor.org/packages/release/bioc/html/IRanges.html" rel="nofollow">IRanges</a> package (actually, <a href="http://bioconductor.org/packages/devel/bioc/html/S4Vectors.html" rel="nofollow">S4Vectors</a> in the 'devel' / 3.0 branch of Bioconductor), where a 'SimpleList' takes the 'naive' approach of requiring all elements of the list to have the same class, whereas a 'CompressedList' has similar behavior but actually is implemented as a vector-like object (one with a length(), [, and [[ methods) that is 'partitioned' (either by end or width) into groups.</p>
<pre><code>library(IRanges)
.Sentences = setClass("Sentences",
contains="CompressedList",
prototype=c(elementType="Words"))
</code></pre>
<p>One would then write a more user-friendly constructor, but the basic functionality is</p>
<pre><code>## 0 Sentences
.Sentences()
## 1 sentence of 0 words
.Sentences(unlistData=.Words(), partitioning=PartitioningByEnd(0))
## 3 sentences of 2, 0, and 3 words
s3 <- .Sentences(unlistData=.Words(words=letters[1:5], parts=LETTERS[1:5]),
partitioning=PartitioningByEnd(c(2, 2, 5)))
</code></pre>
<p>leading to</p>
<pre><code>> s3[[1]]
An object of class "Words"
Slot "word":
[1] "a" "b"
Slot "part":
[1] "A" "B"
> s3[[2]]
An object of class "Words"
Slot "word":
character(0)
Slot "part":
character(0)
> s3[[3]]
An object of class "Words"
Slot "word":
[1] "c" "d" "e"
Slot "part":
[1] "C" "D" "E"
</code></pre>
<p>Notice that some typical operations are fast because they can operate on the 'unlisted' elements without creating or destroying S4 instances, e.g., coercing all 'words' to upper case</p>
<pre><code>setMethod(toupper, "Words", function(x) { x@words <- toupper(x@words); x })
setMethod(toupper, "Sentences", function(x) relist(toupper(unlist(x)), x))
</code></pre>
<p>This is 'fast' for large collections of sentences because unlist / relist is really only a slot access and the creation of a single instance of 'Words'. <a href="http://fr.arxiv.org/abs/1409.2864" rel="nofollow">Scalable Genomics with R and Bioconductor</a> outlines this and other strategies.</p>
<p>In an answer @nicola says 'R is not perfectly suited for OO programming style' but it's probably more helpful to realize that R's S4 object oriented style differs from C++ and Java, just as R differs from C. In particular it's really valuable to continue thinking in terms of vectors when working with S4 -- Words rather than Word, People rather than Person...</p> | 2014-09-15 21:58:40.380000+00:00 | 2014-09-16 13:27:22.447000+00:00 | 2014-09-16 13:27:22.447000+00:00 | null | 25,855,177 | <p>Let's say I want to define two classes classes, <code>Sentence</code> and <code>Word</code>. Each word object has a character string and a part of speech (pos). Each sentence contains some number of words and has an additional slot for data.</p>
<p>The <code>Word</code> class is straightforward to define.</p>
<pre><code>wordSlots <- list(word = "character", pos = "character")
wordProto <- list(word = "", pos = "")
setClass("Word", slots = wordSlots, prototype = wordProto)
Word <- function(word, pos) new("Word", word=word, pos=pos)
</code></pre>
<p>Now I want to make a <code>Sentence</code> class which can contain some <code>Word</code>s and some numerical data.</p>
<p>If I define the <code>Sentence</code> class as so:</p>
<pre><code>sentenceSlots <- list(words = "Word", stats = "numeric")
sentenceProto <- list(words = Word(), stats = 0)
setClass("Sentence", slots = sentenceSlots, prototype = sentenceProto)
</code></pre>
<p>Then the sentence can contain only one word. I could obviously define it with many slots, one for each word, but then it will be limited in length.</p>
<p>However, if I define the <code>Sentence</code> class like this:</p>
<pre><code>sentenceSlots <- list(words = "list", stats = "numeric")
sentenceProto <- list(words = list(Word()), stats = 0)
setClass("Sentence", slots = sentenceSlots, prototype = sentenceProto)
</code></pre>
<p>it can contain as many words as I want, but the slot <code>words</code> can contain objects which are not of the class <code>Word</code>.</p>
<p>Is there a way to accomplish this? This would be similar to the C++ thing where you can have a vector of objects of the same type.</p> | 2014-09-15 19:10:51.857000+00:00 | 2019-05-01 15:20:24.723000+00:00 | 2019-05-01 15:20:24.723000+00:00 | r|class|object|slot | ['http://bioconductor.org/packages/release/bioc/html/IRanges.html', 'http://bioconductor.org/packages/devel/bioc/html/S4Vectors.html', 'http://fr.arxiv.org/abs/1409.2864'] | 3 |
52,250,544 | <p>There are multiple places from which I suggest you try to learn:</p>
<p>1) The CNN course from coursera <a href="https://www.coursera.org/learn/convolutional-neural-networks" rel="nofollow noreferrer">https://www.coursera.org/learn/convolutional-neural-networks</a>
This course has a good explanation of YOLO (their assignment is on car detection as well, which can easily be extended to car counting), and the rest of the course is quite nice too.</p>
<p>2)<a href="https://towardsdatascience.com/yolo-you-only-look-once-real-time-object-detection-explained-492dc9230006" rel="nofollow noreferrer">https://towardsdatascience.com/yolo-you-only-look-once-real-time-object-detection-explained-492dc9230006</a></p>
<p>The article focuses on a few implementation details, talks about the YOLO and YOLOv2 papers, and helped me clear out a few issues I had when I was trying to implement YOLO. </p>
<p>3) The original paper (although this may be too advanced): <a href="https://arxiv.org/pdf/1506.02640v5.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.02640v5.pdf</a></p>
<p>4) A Keras implementation: <a href="https://github.com/experiencor/keras-yolo2" rel="nofollow noreferrer">https://github.com/experiencor/keras-yolo2</a></p>
<p>A simple git clone if you wish to simply have the code , although i do not recommend this as it has very little actual learning and is simply a download and use option </p> | 2018-09-10 03:06:51.513000+00:00 | 2018-09-10 03:34:37.980000+00:00 | 2018-09-10 03:34:37.980000+00:00 | null | 51,147,350 | <p>I'm new to YOLO and trying to make car counting application using YOLO. The cars is from video file. Is there any reference? Thank you</p> | 2018-07-03 05:43:33.763000+00:00 | 2018-09-10 03:34:37.980000+00:00 | null | image-processing|computer-vision|yolo | ['https://www.coursera.org/learn/convolutional-neural-networks', 'https://towardsdatascience.com/yolo-you-only-look-once-real-time-object-detection-explained-492dc9230006', 'https://arxiv.org/pdf/1506.02640v5.pdf', 'https://github.com/experiencor/keras-yolo2'] | 4 |
67,209,955 | <p>Your model will only do what it is trained for, regardless of what names your datasets have.</p>
<p>The name of the dataset is just an organizational detail that does not go into training and does not really affect the loss produced during a training step. What will affect your model's responses, however, is the properties of the data.</p>
<p>Sometimes data from different datasets have different properties even though the datasets serve the same purpose, e.g. images with different illumination, background, resolution etc. That surely has an effect on model performance. This is why mixing datasets should be performed with caution. You might find it useful to have a look at this <a href="https://arxiv.org/abs/1907.01341v3" rel="nofollow noreferrer">paper</a>.</p> | 2021-04-22 08:53:32.853000+00:00 | 2021-04-22 08:53:32.853000+00:00 | null | null | 67,209,768 | <p>If I had a Dataset 1 with 90% cat images and 10% dog images, and I combined Dataset 2, with only dogs to equalize the class imbalance, will my model classify which are cats and dogs or which are dataset 1 images and dataset 2 images?</p>
<p>If it's the latter, how do I get the model to classify between cats and dogs?</p> | 2021-04-22 08:42:37.330000+00:00 | 2021-04-22 08:53:32.853000+00:00 | 2021-04-22 08:44:54.203000+00:00 | machine-learning|image-recognition | ['https://arxiv.org/abs/1907.01341v3'] | 1 |
51,284,836 | <p>In your case you have a <em>fixed monthly cost</em> <strong>(FMC)</strong> and a <em>variable monthly cost</em> <strong>(VMC)</strong> for each choice of package. <strong>FMC</strong> is in {x, x+10000, x+20000}, while <strong>VMC</strong> is the sum of the <em>variable weekly costs</em> <strong>VWC</strong> for the 4 weeks. <strong>VWC</strong> is determined by the partition of the ordered set of days D = (M,T,W,T,F,S,S) into k disjoint, contiguous sub-intervals, with k in {1,3,5}.</p>
<p>Therefore you have to choose min{<strong>FMC1</strong>+<strong>VMC1*</strong>, <strong>FMC3</strong>+<strong>VMC3*</strong>, <strong>FMC5</strong>+<strong>VMC5*</strong>}, where <strong>VMCk*</strong> denotes the minimum variable monthly cost for partitioning D into k intervals (Note that for the case of k=1 the answer is trivial since there exists a single partition for each week). Since the variable weekly cost is <strong>VWC</strong>= 0.7*(<em>r1+r2+r3+r4+r5+r6+r7</em>), <em>ri</em> being the remaining amount of day <em>i</em>, it all boils down to minimizing the remaining quantity of each week. In calculating the <strong>VMCk*</strong> you can use DP-algorithm described in <a href="https://arxiv.org/pdf/math/0309285.pdf" rel="nofollow noreferrer">this</a> paper, with the objective of minimizing the remaining amount of each week.</p>
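<p>As a rough illustration of that DP (a simplified O(n²k) sketch with made-up names, not code from the linked paper), one can split the 7 daily withdrawals into k contiguous deposit rounds and minimise the 0.019% holding cost on whatever is still in the machine:</p>
<pre class="lang-py prettyprint-override"><code># Illustrative sketch only: partition the week into exactly k contiguous rounds.
RATE = 0.00019  # holding cost per remaining unit per day

def interval_cost(w, i, j):
    """Holding cost if one deposit at day i covers days i..j (inclusive)."""
    cost = 0.0
    for d in range(i, j + 1):
        remaining = sum(w[d + 1:j + 1])  # what is left at the end of day d
        cost += RATE * remaining
    return cost

def best_weekly_cost(w, k):
    """Minimum holding cost for one week split into exactly k deposit rounds."""
    n = len(w)
    INF = float("inf")
    # best[j][m] = cheapest way to cover the first j days with m rounds
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for m in range(1, k + 1):
            for i in range(m - 1, j):  # the last round covers days i..j-1
                if best[i][m - 1] < INF:
                    best[j][m] = min(best[j][m], best[i][m - 1] + interval_cost(w, i, j - 1))
    return best[n][k]

week = [13, 5, 4, 4, 2, 11, 1]  # millions withdrawn per day (example from the question)
for rounds in (1, 3, 5):
    print(rounds, round(best_weekly_cost(week, rounds), 5))
</code></pre>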
<p>So in high level:</p>
<ol>
<li>Obtain the minimum variable weekly cost -> minimum remaining amount of each week using DP, for each package of deposits {1,3,5}. And then the variable monthly cost as the sum of the 4 weeks.</li>
<li>Choose the minimum total cost from considering the fixed cost of each package and the variable one obtained at 1.</li>
</ol> | 2018-07-11 11:53:29.363000+00:00 | 2018-07-11 11:59:15.163000+00:00 | 2018-07-11 11:59:15.163000+00:00 | null | 51,278,864 | <p>A bank has an ATM machine. For a particular week, the usage of cash in millions as below.</p>
<ul>
<li>5- Monday</li>
<li>4- Tuesday</li>
<li>1- Wednesday</li>
<li>15- Thursday</li>
<li>6- Friday</li>
<li>2- Saturday</li>
<li>4- Sunday</li>
</ul>
<p>The bank hires a depositing company to deposit money in 5, 3, or 1 rounds per week.</p>
<p>The depositing company provides following packages to the bank when charging for depositing money, </p>
<ul>
<li><p>Cost for depositing in 4 rounds per month- 21135</p></li>
<li><p>Cost for depositing in 12 rounds per month- 32000</p></li>
<li><p>Cost for depositing in 20 rounds per month- 41975</p></li>
</ul>
<p>Ordering remains as Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday. This order shouldn’t be violated when categorizing values.</p>
<p><strong>Example</strong></p>
<ul>
<li>5 rounds</li>
</ul>
<p>[(5+4),1, 15, 6, (2+4)]</p>
<p>[(5+4), 1, (15+6)=20+1, 2, 4]</p>
<p>can have many other combinations which don't break order.</p>
<ul>
<li>3 rounds</li>
</ul>
<p>[(5+4+1), 15, (6+2+4)]</p>
<p>[(5+4), (1+15), (6+2+4)]</p>
<p>can have many other combinations which don't break order.</p>
<ul>
<li>1 round</li>
</ul>
<p>[(5+4+1+15+6+2+4)]</p>
<p>Also the bank has to bear a holding cost of 0.019% of the remaining amount at the end of the day. </p>
<p><strong>Example</strong></p>
<p>Consider 1st week usage of cash as follows.( in millions)</p>
<p>Mon- 13</p>
<p>Tue- 5</p>
<p>Wed- 4</p>
<p>Thu- 4</p>
<p>Fri- 2</p>
<p>Sat- 11</p>
<p>Sun- 1</p>
<p><strong><em>5 - rounds</em></strong> </p>
<p>1st week Cash depositing order - 13, (5+4), 4, (2+11), 1</p>
<p>Assuming depositing is done in 5 rounds for all 4 weeks of the month,
(5*4 = 20)</p>
<p>Total depositing cost = 41975</p>
<p>1- 13 deposited,
13 withdrawn,
0 remaining,
0 holding cost</p>
<p>2- (5+4) deposited,
5 withdrawn,
4 remaining,
4*0.00019 holding cost</p>
<p>3- 0 deposited,
4 withdrawn,
0 remaining,
0 holding cost</p>
<p>4- 4 deposited,
4 withdrawn,
0 remaining,
0 holding cost</p>
<p>5- (2+11) deposited,
2 withdrawn,
11 remaining,
11*0.00019 holding cost</p>
<p>6- 0 deposited,
11 withdrawn,
0 remaining,
0 holding cost</p>
<p>7- 1 deposited,
1 withdrawn,
0 remaining,
0 holding cost</p>
<p>Total holding cost for 1st week = 4*0.00019 + 11*0.00019 = 0.00285 millions= 2850</p>
<p>Likewise I need to find the total holding cost for the month considering each particular week. </p>
<p><strong><em>3- rounds</em></strong></p>
<p>Cash depositing order for 1st week - 13, (5+4+4), (2+11+1)=(1+1+12)</p>
<p>Edit - Assuming the 12-rounds-per-month package is chosen, therefore 3 rounds per week (3*4 = 12)</p>
<p>Total depositing cost = 32000</p>
<p>1 - 13 deposited,
13 withdrawn,
0 remaining, 0 holding cost</p>
<p>2- (5+4+4) deposited, 5 withdrawn, (4+4) remaining, (4+4)*0.00019 holding cost</p>
<p>3- 0 deposited, 4 withdrawn, 4 remaining, 4*0.00019 holding cost</p>
<p>4- 0 deposited, 4 withdrawn, 0 remaining, 0 holding cost</p>
<p>5- (2+11+1) deposited, 2 withdrawn, (11+1) remaining, (11+1)*0.00019 holding cost</p>
<p>6- 0 deposited, 11 withdrawn, 1 remaining, 1*0.00019 holding cost</p>
<p>7- 0 deposited, 1 withdrawn, 0 remaining, 0 holding cost</p>
<p>Total holding cost for 1st week = (4+4)*0.00019 + 4*0.00019 + (11+1)*0.00019 + 1*0.00019 = 0.00475 millions = 4750</p>
<p>Likewise I need to find the total holding cost for the month considering each week.</p>
<p>Edit - suppose the 41975 package is picked. Then it means cash deposited in 20 rounds per month. That means 5 rounds per week. If the 32000 package is picked, then 12 rounds per month. That means 3 rounds per week. If the 21135 package is picked, then it means for 4 rounds per month, that means 1 round per week. There are no mixed combinations of 5,3,1 for the four weeks of a particular month. Only all four weeks are done in 1, 3 or 5 rounds. We have to select the best package considering holding cost and package cost. </p>
<p>A good combination of 5 rounds which doesn't violate order, can be better than all the 3 rounds solutions and the 1 round solution. Same applies for 3 rounds solution aswell. Or else 1 round solution can be better than all 5 rounds and 3 rounds solutions.</p>
<p>When depositing rounds increase, holding cost reduces but depositing cost increases. When rounds decrease, depositing cost reduces but holding cost increases. So I need to find the order of depositing money for each week of the month and the monthly depositing package which can make a good tradeoff between total holding cost and total depositing cost, consuming the least time. </p>
<p>Any insight to the approach will be really helpful.</p> | 2018-07-11 06:45:08.790000+00:00 | 2018-07-16 04:03:00.667000+00:00 | 2018-07-16 04:03:00.667000+00:00 | algorithm|genetic-algorithm|greedy|np|set-cover | ['https://arxiv.org/pdf/math/0309285.pdf'] | 1 |
60,344,170 | <p>Author here.</p>
<blockquote>
<p>What does the reward from the initial_inference represent?</p>
</blockquote>
<p>The initial inference "predicts" the last observed reward. This isn't actually used for anything, but makes our code simpler: The prediction head can simply always predict the immediately preceding reward. For the dynamics network, this would be the reward observed after applying the action that's given as an input to the dynamics network.</p>
<p>At the beginning of the game there is no last observed reward, so we just set it to 0.</p>
<p>The reward target computation in the pseudocode was indeed misaligned; I've just uploaded a new version to arXiv. </p>
<p>Where it used to say</p>
<pre class="lang-py prettyprint-override"><code> if current_index < len(self.root_values):
targets.append((value, self.rewards[current_index],
self.child_visits[current_index]))
else:
# States past the end of games are treated as absorbing states.
targets.append((0, 0, []))
</code></pre>
<p>It should be:</p>
<pre class="lang-py prettyprint-override"><code> # For simplicity the network always predicts the most recently received
# reward, even for the initial representation network where we already
# know this reward.
if current_index > 0 and current_index <= len(self.rewards):
last_reward = self.rewards[current_index - 1]
else:
last_reward = 0
if current_index < len(self.root_values):
targets.append((value, last_reward, self.child_visits[current_index]))
else:
# States past the end of games are treated as absorbing states.
targets.append((0, last_reward, []))
</code></pre>
<p>Hope that helps!</p> | 2020-02-21 18:09:52.350000+00:00 | 2020-02-21 18:09:52.350000+00:00 | null | null | 60,234,530 | <p><a href="https://arxiv.org/abs/1911.08265" rel="nofollow noreferrer">MuZero</a>, a deep reinforcement learning technique, was just released, and I've been trying to implement it by looking at its <a href="https://arxiv.org/src/1911.08265v1/anc/pseudocode.py" rel="nofollow noreferrer">pseudocode</a> and this <a href="https://medium.com/applied-data-science/how-to-build-your-own-muzero-in-python-f77d5718061a" rel="nofollow noreferrer">helpful tutorial</a> on Medium.</p>
<p>However, there's something confusing me about how rewards are handled during training in the pseudocode, and it would be great if someone could verify that I'm reading the code correctly, and if I am, explain why this training algorithm works.</p>
<p>Here's the training function (from the <a href="https://arxiv.org/src/1911.08265v1/anc/pseudocode.py" rel="nofollow noreferrer">pseudocode</a>):</p>
<pre class="lang-py prettyprint-override"><code>def update_weights(optimizer: tf.train.Optimizer, network: Network, batch,
weight_decay: float):
loss = 0
for image, actions, targets in batch:
# Initial step, from the real observation.
value, reward, policy_logits, hidden_state = network.initial_inference(
image)
predictions = [(1.0, value, reward, policy_logits)]
# Recurrent steps, from action and previous hidden state.
for action in actions:
value, reward, policy_logits, hidden_state = network.recurrent_inference(
hidden_state, action)
predictions.append((1.0 / len(actions), value, reward, policy_logits))
hidden_state = tf.scale_gradient(hidden_state, 0.5)
for prediction, target in zip(predictions, targets):
gradient_scale, value, reward, policy_logits = prediction
target_value, target_reward, target_policy = target
l = (
scalar_loss(value, target_value) +
scalar_loss(reward, target_reward) +
tf.nn.softmax_cross_entropy_with_logits(
logits=policy_logits, labels=target_policy))
loss += tf.scale_gradient(l, gradient_scale)
for weights in network.get_weights():
loss += weight_decay * tf.nn.l2_loss(weights)
optimizer.minimize(loss)
</code></pre>
<p>I'm interested in the <code>reward</code> in the loss, specifically. Note that the loss gets all of its values from the <code>predictions</code>. The first <code>reward</code> added to <code>predictions</code> is from the <code>network.initial_inference</code> function. Afterwards, there are <code>len(actions)</code> more rewards added to <code>predictions</code>, all of which come from the <code>network.recurrent_inference</code> function.</p>
<p>Based on the tutorial <code>initial_inference</code> and <code>recurrent_inference</code> functions are built out of 3 different functions:</p>
<ol>
<li><em>Prediction</em> Input: internal game state. Output: policy, value (predicted sum of best possible future rewards)</li>
<li><em>Dynamics</em> Input: internal state of a game, action. Output: reward from taking that action, new internal state of the game.</li>
<li><em>Representation</em> Input: external state of a game. Output: internal state of the game</li>
</ol>
<p>The <code>initial_inference</code> function takes in an external game state, uses the <code>representation</code> function to turn it into an internal state, and then uses the <code>prediction</code> function on that internal game state. It outputs the internal state, the policy, and value.</p>
<p>The <code>recurrent_inference</code> function takes in an internal game state and an action. It uses the <code>dynamics</code> function to get a new internal game state and reward from the old game state and action. It then applies the <code>prediction</code> function to the new internal game state to get a policy and value of that new internal state. Thus, the final output is a new internal state, a reward, a policy, and a value.</p>
<p>However, in the pseudocode, the <code>initial_inference</code> function <em>also returns a reward</em>.</p>
<p>My main problem: <strong>What does that reward represent?</strong></p>
<p>In <a href="https://medium.com/applied-data-science/how-to-build-your-own-muzero-in-python-f77d5718061a" rel="nofollow noreferrer">the tutorial</a>, they just implicitly assume that the reward from the <code>initial_inference</code> function is 0. (See <a href="https://miro.medium.com/max/1514/1*GA72IpY7ZciGshmVvtl8kQ.png" rel="nofollow noreferrer">this image</a> from the tutorial.) So is that what's going on? Is there actually no reward, so the <code>initial_inference</code> just always returns a 0 for the reward?</p>
<p>Let's assume that that's the case.</p>
<p>Under this assumption, then, the first reward in the <code>predictions</code> list will be the 0 that the <code>initial_inference</code> function will return for the reward. Then, in the loss, this 0 will be compared with the first element of the <code>target</code> list.</p>
<p>Here's how the <code>target</code> is created:</p>
<pre class="lang-py prettyprint-override"><code> def make_target(self, state_index: int, num_unroll_steps: int, td_steps: int,
to_play: Player):
# The value target is the discounted root value of the search tree N steps
# into the future, plus the discounted sum of all rewards until then.
targets = []
for current_index in range(state_index, state_index + num_unroll_steps + 1):
bootstrap_index = current_index + td_steps
if bootstrap_index < len(self.root_values):
value = self.root_values[bootstrap_index] * self.discount**td_steps
else:
value = 0
for i, reward in enumerate(self.rewards[current_index:bootstrap_index]):
value += reward * self.discount**i # pytype: disable=unsupported-operands
if current_index < len(self.root_values):
targets.append((value, self.rewards[current_index],
self.child_visits[current_index]))
else:
# States past the end of games are treated as absorbing states.
targets.append((0, 0, []))
return targets
</code></pre>
<p>The <code>targets</code> returned by this function become the <code>target</code> list in the <code>update_weights</code> function. So the first value in <code>targets</code> is <code>self.rewards[current_index]</code>. The <code>self.rewards</code> is a list of all of the rewards received while playing a game. The only time it is edited is within this function <code>apply</code>:</p>
<pre class="lang-py prettyprint-override"><code> def apply(self, action: Action):
reward = self.environment.step(action)
self.rewards.append(reward)
self.history.append(action)
</code></pre>
<p>The <code>apply</code> function is only called here:</p>
<pre class="lang-py prettyprint-override"><code># Each game is produced by starting at the initial board position, then
# repeatedly executing a Monte Carlo Tree Search to generate moves until the end
# of the game is reached.
def play_game(config: MuZeroConfig, network: Network) -> Game:
game = config.new_game()
while not game.terminal() and len(game.history) < config.max_moves:
# At the root of the search tree we use the representation function to
# obtain a hidden state given the current observation.
root = Node(0)
current_observation = game.make_image(-1)
expand_node(root, game.to_play(), game.legal_actions(),
network.initial_inference(current_observation))
add_exploration_noise(config, root)
# We then run a Monte Carlo Tree Search using only action sequences and the
# model learned by the network.
run_mcts(config, root, game.action_history(), network)
action = select_action(config, len(game.history), root, network)
game.apply(action)
game.store_search_statistics(root)
return game
</code></pre>
<p>To me, it looks like <em>every single time an action is taken, a reward is generated</em>. So the first reward in the <code>self.rewards</code> list should be the reward from taking the first action in the game.</p>
<p>The issue becomes clear if <code>current_index = 0</code> in <code>self.rewards[current_index]</code>. In this case, the <code>predictions</code> list will have a 0 for the first reward because it always does. However, the <code>targets</code> list, will have the reward given for completing the first action.</p>
<p>So, to me, <em>it seems like the rewards are misaligned.</em></p>
<p>If we continue, the second reward in the <code>predictions</code> list will be the reward from <code>recurrent_inference</code> for completing the <em>first</em> action. However, the second reward in the <code>targets</code> list will be the reward stored in the game for completing the <em>second</em> action.</p>
<p>So, overall, I have three questions that build on one another:</p>
<ol>
<li><strong>What does the reward from the <code>initial_inference</code> represent? (What is it?)</strong></li>
<li>If it is 0, and it's supposed to represent a reward, are the rewards between the <code>predictions</code> and <code>targets</code> misaligned? (i.e. should the second reward in <code>predictions</code> actually be matched with the first reward in <code>targets</code>?)</li>
<li>If they are misaligned, will the network still train and work correctly?</li>
</ol>
<p>(Another curiosity to note is that despite this misalignment (assuming there is misalignment), both the <code>predictions</code> and <code>targets</code> length do have the same length. The targets length is defined by the line <code>for current_index in range(state_index, state_index + num_unroll_steps + 1)</code> in the <code>make_target</code> function above. Above, we also computed that the length of <code>predictions</code> is <code>len(actions) + 1</code>. And <code>len(actions)</code> is defined by <code>g.history[i:i + num_unroll_steps]</code> in the <code>sample_batch</code> function (see <a href="https://arxiv.org/src/1911.08265v1/anc/pseudocode.py" rel="nofollow noreferrer">the pseudocode</a>). Thus, the length of both lists are the same.)</p>
<p><strong>What's going on?</strong></p> | 2020-02-14 23:03:20.870000+00:00 | 2020-02-21 18:09:52.350000+00:00 | null | python|algorithm|machine-learning|artificial-intelligence|structure | [] | 0 |
31,981,057 | <p>Yes and no. As already observed, there are an infinite number of possible answers. However, it is possible to check certain types of generator to see whether they're capable of explaining the known values.</p>
<p>E.g. polynomials can be tested by looking at nth differences. Other types of generator can be tested by treating the sequence as coefficients of a generating function and looking at nth derivatives of that g.f.</p>
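<p>For instance, an illustrative (not production-grade) check for polynomial generators via repeated differences: a degree-d polynomial gives constant d-th differences, so the next value can be extrapolated by summing the last entry of each difference row.</p>
<pre class="lang-py prettyprint-override"><code># Illustrative only: detect a polynomial generator and extrapolate the next term.
def differences(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

def polynomial_next(seq, max_degree=5):
    """Return the extrapolated next term if some difference level is constant."""
    rows = [list(seq)]
    for _ in range(max_degree):
        d = differences(rows[-1])
        rows.append(d)
        if len(d) > 1 and len(set(d)) == 1:  # constant row found
            return sum(row[-1] for row in rows)
    return None  # no low-degree polynomial explains the data

print(polynomial_next([1, 4, 9, 16, 25]))       # squares -> 36
print(polynomial_next([1, 1, 2, 3, 5, 8, 13]))  # Fibonacci is not polynomial -> None
</code></pre>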
<p>See the description of the <a href="https://oeis.org/demos.html" rel="nofollow">OEIS Superseeker</a> for a brief discussion of some relevant ideas, and check out the academic literature (RATE, GFUN, <a href="http://arxiv.org/abs/math/0702086" rel="nofollow">extensions</a>).</p> | 2015-08-13 06:32:58.070000+00:00 | 2015-08-13 06:32:58.070000+00:00 | null | null | 31,980,133 | <p>Not sure how realistic this is, but anyway, say, I have sequence of some numbers : </p>
<pre><code>1, 1, 2, 3, 5, 8, 13, 21
</code></pre>
<p>I am a human, and for me it is obvious that this is the Fibonacci sequence.</p>
<p><strong>Question :</strong> is there some programmatic way to determine nature of the generator that created this sequence and generate next value in this sequence? Is it possible to determine formula that was used to generate selected sequence, at least with some approximation?</p> | 2015-08-13 05:30:10.483000+00:00 | 2015-08-13 06:32:58.070000+00:00 | null | algorithm|generator|formula | ['https://oeis.org/demos.html', 'http://arxiv.org/abs/math/0702086'] | 2 |
63,085,647 | <blockquote>
<p>A change in the number of classes does not have a significant impact on inference time.</p>
</blockquote>
<p>For example, in the case of <a href="https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov4.cfg" rel="nofollow noreferrer">Yolov4</a>, which has 3 <code>Yolo</code> layers, a change in the number of classes only changes the <strong><code>filter size</code></strong> of the <code>conv</code> layers preceding the <code>Yolo</code> layers and slightly reduces the computation within the <code>Yolo</code> layers. That is very minor compared to the overall inference time, because the <code>conv</code> layers preceding the <code>Yolo</code> layers are bottom layers with very small width and height, and the time spent on the class-dependent logic inside the <a href="https://github.com/AlexeyAB/darknet/blob/master/src/region_layer.c" rel="nofollow noreferrer">Yolo</a> layer is very small.</p>
<p><a href="https://i.stack.imgur.com/Y62zU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y62zU.png" alt="Snapshot of Yolo layer" /></a></p>
<p>Here:</p>
<pre><code>filters=(classes + 5)x3
</code></pre>
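<p>As a quick illustration (made-up channel count, not taken from the darknet source), you can see how small the class-dependent part of the network is:</p>
<pre class="lang-py prettyprint-override"><code># Only the 1x1 conv heads in front of the Yolo layers depend on the class count.
def head_filters(num_classes):
    return (num_classes + 5) * 3

for classes in (1, 20, 80):
    f = head_filters(classes)
    params = 512 * f  # one 1x1 conv head with an assumed 512 input channels
    print(classes, "classes ->", f, "filters, ~", params, "weights in that conv")
</code></pre>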
<p>Note that the tinier version of yolov4, i.e. <a href="https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov4-tiny.cfg" rel="nofollow noreferrer">tiny-yolov4</a>, has only two <code>Yolo</code> layers instead of 3.</p>
<p><strong>If your intent is to reduce inference time, especially on a Raspberry Pi or a Jetson Nano, without losing accuracy/mAP, do the following:</strong></p>
<ul>
<li><p><strong>Quantisation</strong>: Run inference with <code>INT8</code> instead of <code>FP32</code>. You can use this <a href="https://github.com/AlexeyAB/yolo2_light" rel="nofollow noreferrer">repo</a> for this purpose. You can do this for both Jetson nano and raspberry pi.</p>
</li>
<li><p>Use an inference library such as <a href="https://github.com/ceccocats/tkDNN" rel="nofollow noreferrer"><strong>tkDNN</strong></a>, which is a Deep Neural Network library built with <code>cuDNN</code> and <code>TensorRT</code> primitives, specifically designed to work on <strong>NVIDIA Jetson</strong> boards. You can use this for the Jetson Nano. Note that with <a href="https://developer.nvidia.com/tensorrt" rel="nofollow noreferrer"><strong>TensorRT</strong></a> you can use <code>INT8</code> and <code>FP16</code> instead of <code>FP32</code> to reduce detection time.</p>
</li>
</ul>
<p><strong>The following techniques can also reduce inference time, but they come at the cost of a significant drop in accuracy/mAP:</strong></p>
<ul>
<li>You can train the models with tinier versions rather than full Yolo versions.</li>
<li>Model pruning - if you can rank the neurons in the network according to how much they contribute, you can then remove the low-ranking neurons, resulting in a smaller and faster network. See the pruned YOLOv3 research <a href="https://arxiv.org/abs/1907.11093v1" rel="nofollow noreferrer">paper</a> and its <a href="https://github.com/PengyiZhang/SlimYOLOv3" rel="nofollow noreferrer">implementation</a>. <a href="https://github.com/Lam1360/YOLOv3-model-pruning" rel="nofollow noreferrer">This</a> is another pruned YOLOv3 implementation.</li>
</ul> | 2020-07-25 07:32:18.373000+00:00 | 2020-07-25 07:32:18.373000+00:00 | null | null | 63,076,707 | <p>I am using YOLOv4 to train my custom detector. Source: <a href="https://github.com/AlexeyAB/darknet" rel="nofollow noreferrer">https://github.com/AlexeyAB/darknet</a></p>
<p>One of the issues while training is the computing power of GPU and available video RAM. What is the relationship between number of object classes and the time it takes to train the model? Also, is it possible to significantly reduce the inference time of images by reducing the number of object classes? The goal is to run inference on a Raspberry Pi or a Jetson Nano.</p>
<p>Any help is much appreciated. Thanks.</p> | 2020-07-24 15:28:36+00:00 | 2020-07-27 01:00:08.477000+00:00 | 2020-07-27 01:00:08.477000+00:00 | object-detection|yolo | ['https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov4.cfg', 'https://github.com/AlexeyAB/darknet/blob/master/src/region_layer.c', 'https://i.stack.imgur.com/Y62zU.png', 'https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov4-tiny.cfg', 'https://github.com/AlexeyAB/yolo2_light', 'https://github.com/ceccocats/tkDNN', 'https://developer.nvidia.com/tensorrt', 'https://arxiv.org/abs/1907.11093v1', 'https://github.com/PengyiZhang/SlimYOLOv3', 'https://github.com/Lam1360/YOLOv3-model-pruning'] | 10 |
39,337,595 | <p><a href="https://github.com/jhlau/gensim" rel="noreferrer">This forked version of gensim</a> allows loading pre-trained word vectors for training doc2vec. <a href="https://github.com/jhlau/doc2vec/blob/master/train_model.py" rel="noreferrer">Here</a> you have an example on how to use it. The word vectors must be in the C-word2vec tool text format: one line per word vector where first comes a string representing the word and then space-separated float values, one for each dimension of the embedding.</p>
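<p>For example, a toy vector file in that format, and the way the fork's example script appears to pass it in. The <code>pretrained_emb</code> keyword below is taken from that example script and is an assumption here - the stock gensim <code>Doc2Vec</code> does not accept it, so treat this as a sketch only:</p>
<pre class="lang-py prettyprint-override"><code># Toy example of the expected text format: one word followed by its components.
with open("word_vectors.txt", "w") as f:
    f.write("the 0.418 0.24968 -0.41242\n")
    f.write("cat 0.68047 -0.039263 0.30186\n")

# Assumed usage of the fork (keyword names follow its example script, unverified):
# import gensim.models as g
# model = g.Doc2Vec(documents, size=3, pretrained_emb="word_vectors.txt", iter=20)
</code></pre>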
<p>This work belongs to a <a href="https://arxiv.org/abs/1607.05368" rel="nofollow noreferrer">paper</a> in which they claim that using pre-trained word embeddings actually helps build the document vectors. However, I am getting <em>almost</em> the same results whether I load the pre-trained embeddings or not.</p>
<p><strong>Edit:</strong> actually there is one remarkable difference in my experiments. When I loaded the pretrained embeddings I trained doc2vec for half of the iterations to get <em>almost</em> the same results (training longer than that produced worse results in my task).</p> | 2016-09-05 20:53:35.107000+00:00 | 2016-09-06 15:28:28.173000+00:00 | 2016-09-06 15:28:28.173000+00:00 | null | 27,470,670 | <p>I recently came across the doc2vec addition to Gensim. How can I use pre-trained word vectors (e.g. found in word2vec original website) with doc2vec?</p>
<p>Or is doc2vec getting the word vectors from the same sentences it uses for paragraph-vector training?</p>
<p>Thanks.</p> | 2014-12-14 15:13:43.283000+00:00 | 2020-04-05 17:56:23.823000+00:00 | 2020-04-05 17:56:23.823000+00:00 | python|nlp|gensim|word2vec|doc2vec | ['https://github.com/jhlau/gensim', 'https://github.com/jhlau/doc2vec/blob/master/train_model.py', 'https://arxiv.org/abs/1607.05368'] | 3 |
57,408,162 | <p><strong>SSD Multibox</strong> (short for Single Shot Multibox Detector) is a neural network that can detect and locate objects in an image in a single forward pass. The network is trained in a supervised manner on a dataset of images where a bounding box and a class label is given for each object of interest. The loss term</p>
<pre><code>multibox_loss = confidence_loss + alpha * location_loss
</code></pre>
<p>is made up of two parts: </p>
<p><strong>Confidence loss</strong> is a categorical cross-entropy loss for classifying the detected objects. The purpose of this term is to make sure that correct label is assigned to each detected object.</p>
<p><strong>Location loss</strong> is a regression loss (either the smooth L1 or the L2 loss) on the parameters (width, height and corner offset) of the detected bounding box. The purpose of this term is to make sure that the correct region of the image is identified for the detected objects. The <strong>alpha</strong> term is a hyper parameter used to scale the location loss.</p>
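<p>As an illustrative sketch of how the two terms combine (simplified: it skips the hard negative mining and the normalisation by the number of matched boxes that the real SSD loss uses; shapes and names are assumptions):</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn.functional as F

def multibox_loss(class_logits, box_preds, target_labels, target_boxes, alpha=1.0):
    # class_logits: (N, num_classes), target_labels: (N,) for N matched boxes
    confidence_loss = F.cross_entropy(class_logits, target_labels)
    # box_preds / target_boxes: (N, 4) encoded box parameters
    location_loss = F.smooth_l1_loss(box_preds, target_boxes)
    return confidence_loss + alpha * location_loss

logits = torch.randn(8, 21)              # 8 matched boxes, 20 classes + background
labels = torch.randint(0, 21, (8,))
loss = multibox_loss(logits, torch.randn(8, 4), labels, torch.randn(8, 4))
</code></pre>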
<p>The precise formulation of the loss is given in Equation 1 of the <a href="https://arxiv.org/abs/1512.02325" rel="nofollow noreferrer">SSD: Single Shot MultiBox Detector</a> paper.</p> | 2019-08-08 08:18:19.887000+00:00 | 2019-08-08 08:18:19.887000+00:00 | null | null | 57,407,108 | <p>I have found some expression for SSD Multibox-loss function as follows:</p>
<p><strong>multibox_loss = confidence_loss + alpha * location_loss</strong></p>
<p>Can someone explains what are the explanations for those terms?</p> | 2019-08-08 07:13:58.430000+00:00 | 2019-08-08 08:18:19.887000+00:00 | null | tensorflow|keras|deep-learning|conv-neural-network|faster-rcnn | ['https://arxiv.org/abs/1512.02325'] | 1 |
41,849,702 | <ol>
<li>So basically, in order to measure performance across different hyperparameters, the best practice is to simulate the process of training your final classifier on the training data for each parameter setup, and then compare the results with respect to the measures you want to hyperoptimize.</li>
<li>If you change the training process (e.g. by using a fixed number of epochs during the hyperoptimization phase and a different number in the final training), you shouldn't expect the results obtained during the testing phases to generalize. In my opinion this might harm your optimization process, especially since some hyperparameter setups need more time to obtain good results (e.g. when you set a really high dropout rate), and cutting the training time while choosing the best value tends to favour hyperparameter setups that give better results at an earlier training stage.</li>
<li>Good practices:
<ul>
<li>choose <a href="http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf" rel="nofollow noreferrer">random search</a>, not grid search. Usually your network is less sensitive to some of the parameters, so a full grid search is a waste of time,</li>
<li>if you want to try more sophisticated methods, you could try e.g. <a href="https://arxiv.org/pdf/1502.05700.pdf" rel="nofollow noreferrer">Bayesian hyperparameter optimization</a>,</li>
<li>use cross-validation or run your network with a given hyperparameter setup multiple times. This is because neural networks can be sensitive to the starting weights, so a single score might not generalize well,</li>
<li>parallelize your training process, e.g. run the training on different machines and then simply merge the results.</li>
</ul></li>
</ol> | 2017-01-25 10:53:47.250000+00:00 | 2017-01-25 10:53:47.250000+00:00 | null | null | 41,836,034 | <p>I am currently trying to come up with a novel structure for a CLDNN (Convolutional, LSTM, Deep Neural Network)</p>
<p>Just as any other networks, I am having a difficult time optimizing the hyper parameters.</p>
<p>I would like to try grid search and random search to get an optimal set of hyperparameters but I am not clear on few things.</p>
<ol>
<li><p>If I run a simulation of the network with a temporary set of hyperparameters, how do I measure "goodness" of the hyperparameters? I was thinking about recording the cost and training accuracy after N number of epochs for each simulations.</p></li>
<li><p>Since each simulation takes relatively long time (for my network it takes about 70 seconds to train for one epoch), is there a faster way to check the "goodness" of the hyperparameters without actually running the full training?</p></li>
<li><p>Is there a general tip/advice for hyperparameter-optimization?</p></li>
</ol> | 2017-01-24 18:12:53.490000+00:00 | 2017-01-25 19:39:48.063000+00:00 | 2017-01-25 19:39:48.063000+00:00 | optimization|machine-learning|neural-network|deep-learning|hyperparameters | ['http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf', 'https://arxiv.org/pdf/1502.05700.pdf'] | 2 |
67,308,584 | <p>See this <a href="https://arxiv.org/pdf/1502.05767.pdf" rel="nofollow noreferrer">paper</a> for exact answer, specifically section 2.1 or figure 2.</p>
<p>In short, PyTorch has a list of basic functions and the expressions of their derivatives. So what is done in your case (y = x*x) is evaluating $$ y' = 2*x $$.</p>
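<p>As an illustration (using the public <code>torch.autograd.Function</code> API, not PyTorch's internal code), this is roughly how a primitive op carries its own analytic derivative rule:</p>
<pre class="lang-py prettyprint-override"><code>import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x  # analytic rule: d(x^2)/dx = 2x

x = torch.tensor([1.0], requires_grad=True)
y = Square.apply(x)
y.backward()
print(x.grad)  # tensor([2.])
</code></pre>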
<p>The numerical method you mentioned is called numerical differentiation or finite differences, and it is an approximation of the derivative. But it is not what PyTorch does.</p> | 2021-04-28 22:13:34.620000+00:00 | 2021-04-28 22:13:34.620000+00:00 | null | null | 63,026,854 | <p>When we talk about the auto-differentiation in the pytorch, we are usually presented a graphical structures of tensors based on their formulas, and pytorch will compute the gradients by tracing down the graphical tree using chain rules. However, I want to know what will happen at the leaf nodes? Does pytorch hardcode a whole list of basic functions with their analytical derivatives, or does it compute the gradients using numerical methods? A quick example:</p>
<pre class="lang-py prettyprint-override"><code>import torch
def f(x):
return x ** 2
x = torch.tensor([1.0], requires_grad=True)
y = f(x)
y.backward()
print(x.grad) # 2.0
</code></pre>
<p>In this example, does pytorch compute the derivative by $$ (x^2)' = 2x = 2 * 1 = 2 $$, or does pytorch compute in a way similar to $$ (1.00001^2 - 1^2) / (1.000001 - 1) ~ 2 $$ ?</p>
<p>Thanks!</p> | 2020-07-22 04:25:02.707000+00:00 | 2021-04-28 22:13:34.620000+00:00 | null | python|pytorch|autodiff | ['https://arxiv.org/pdf/1502.05767.pdf'] | 1 |
41,198,193 | <p>Have a look at <a href="https://arxiv.org/pdf/1603.07466v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1603.07466v1.pdf</a>, this could help. There is a link in the document to a github project with dmn-js where a rules validtion is added, also to an online version where you can see how it works. </p> | 2016-12-17 11:40:10.027000+00:00 | 2016-12-17 11:40:10.027000+00:00 | null | null | 35,264,681 | <p>I am using camunda DMN in my application(in angular, java, spring).</p>
<p>I want to validate if rule is not overlapping while adding new rule to DMN table.</p>
<p>for example following is my DMN table,</p>
<pre><code>| x   | y    | o/p   |
| <9  | >50  | "ABC" |
| <20 | >100 | "XYZ" |
</code></pre>
<p>Consider user is so dumb :D , and can create rules like above.</p>
<p>Now if the inputs for the above DMN are x = 10 and y = 99, then it satisfies both rules.</p>
<p>If I use the UNIQUE hit policy, it won't show me an error at the time of adding a new rule; rather, it will show it while evaluating the DMN table. And I don't want that :(</p>
<p><strong>How to avoid overlapping of rules while creating of rule it self using either camunda dmn js api or camunda dmn java api ?</strong></p> | 2016-02-08 07:51:37.783000+00:00 | 2016-12-17 11:40:10.027000+00:00 | null | java|angularjs|camunda | ['https://arxiv.org/pdf/1603.07466v1.pdf'] | 1 |
54,682,459 | <p>This is called a <em>profunctor</em>! It is a <a href="http://blog.sigfpe.com/2011/07/profunctors-in-haskell.html" rel="noreferrer">very useful type of bifunctor</a> that shows up all over the place (for example, <a href="https://arxiv.org/ftp/arxiv/papers/1703/1703.10857.pdf" rel="noreferrer">in the construction of lenses</a>)! In Haskell it is available as <code>Data.Profunctor</code> in the <code>profunctors</code> package. I am not a Scala person but it looks like it is <a href="https://typelevel.org/cats/api/cats/arrow/Profunctor.html" rel="noreferrer">available</a> in <code>cats</code> as well.</p> | 2019-02-14 02:42:51.070000+00:00 | 2019-02-14 02:42:51.070000+00:00 | null | null | 54,682,429 | <p>I'm looking to see if there is a standard typeclass for a Bi-Functor that has one Contravariant parameter and one Covariant parameter.</p>
<p>punching the signature <code>(c -> a) -> (b -> d) -> f a b -> f c d</code> results in nothing that matches.</p>
<p>Basically in Scala I'm looking to do:</p>
<pre><code>trait CoContraBiFunctor[F[_, _]] {
def ccmap[A, B, C, D](fab: F[A, B])(f: C => A)(g: B => D): F[C, D]
}
implicit val ccFunction: CoContraBiFunctor[Function1] = new CoContraBiFunctor[Function] {
override def ccmap[A, B, C, D](fab: Function[A, B])(f: C => A)(g: B => D): Function[C, D] = new Function[C, D] {
override def apply(c: C): D = g(fab(f(c)))
}
}
</code></pre>
<p>Anyone have an idea? I definitely am not the first person to look for this.</p> | 2019-02-14 02:38:21.453000+00:00 | 2019-02-14 04:31:55.567000+00:00 | 2019-02-14 04:31:55.567000+00:00 | scala|haskell|functional-programming|typeclass|scala-cats | ['http://blog.sigfpe.com/2011/07/profunctors-in-haskell.html', 'https://arxiv.org/ftp/arxiv/papers/1703/1703.10857.pdf', 'https://typelevel.org/cats/api/cats/arrow/Profunctor.html'] | 3 |
58,379,734 | <p>What you're describing is known as a cardinality constraint. There are many ways of encoding these in CNF. As a starting point, some of these encodings are explained in</p>
<ul>
<li><a href="http://www.carstensinz.de/papers/CP-2005.pdf" rel="nofollow noreferrer">http://www.carstensinz.de/papers/CP-2005.pdf</a> and</li>
<li><a href="https://arxiv.org/pdf/1012.3853.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1012.3853.pdf</a>.</li>
</ul>
<p>Many are implemented in the <a href="https://pysathq.github.io/" rel="nofollow noreferrer">PySAT</a> python toolkit</p>
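<p>For instance, a hedged sketch of encoding the constraint from the question with PySAT (exact API details may differ between versions, so check the documentation linked below):</p>
<pre class="lang-py prettyprint-override"><code>from pysat.card import CardEnc, EncType
from pysat.solvers import Minisat22

lits = list(range(1, 1600))                     # variables x1..x1599
cnf = CardEnc.atmost(lits=lits, bound=29,       # "< 30" means "at most 29"
                     encoding=EncType.seqcounter)

with Minisat22(bootstrap_with=cnf.clauses) as solver:
    if solver.solve():
        model = solver.get_model()
        ones = [v for v in model if 0 < v <= 1599]  # original vars set to 1
        print(len(ones))                            # will be at most 29
</code></pre>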
<ul>
<li><a href="https://pysathq.github.io/docs/html/api/card.html" rel="nofollow noreferrer">https://pysathq.github.io/docs/html/api/card.html</a></li>
</ul> | 2019-10-14 15:21:41.053000+00:00 | 2019-10-14 15:21:41.053000+00:00 | null | null | 58,338,456 | <p>Many papers are using SAT, but few mentioned how to convert an addition to CNF.</p>
<p>Since CNF only allows AND OR NOT operation, it is difficult to describe addition operation. For example,</p>
<pre><code>x1 + x2 + x3 + ... +x1599 < 30, xi is binary.
</code></pre>
<ol>
<li>Map these equations into a Boolean circuit.</li>
<li>Apply Tseitin's transformation to the circuit and convert it into DIMACS format. </li>
</ol>
<p>But is there any way to read the results? I do think it is possible to read the results if all the variables are defined by ourself, so figuring out how to convert a linear constraint to SAT problem is necessary.</p>
<p>If there are 3 or 4 variables, i.e. x1+x2+x3 <3, we can use a truth table to solve this conversion. Also, a direct way is to choose 29 (or any number smaller than 30) of the 1600 variables to be 1 and the others to be 0. But there are too many possibilities, which makes this problem hard to solve.</p>
<p>I have used STP, but it can only give 1 answer. As the number of variables and clauses increases, it takes a long time for STP to run.</p>
<p>So I tried to use SAT to solve the CNF given by STP; it can give out answers in minutes. But the results cannot be read.</p>
<p>In the end, I found some paper,
1. Encoding Linear Constraints with Implication Chains to CNF,
2. SAT-Based Techniques for Integer Linear Constraints. This may be helpful.</p> | 2019-10-11 09:39:14.257000+00:00 | 2019-10-16 05:55:05.987000+00:00 | 2019-10-16 05:55:05.987000+00:00 | addition|sat|conjunctive-normal-form | ['http://www.carstensinz.de/papers/CP-2005.pdf', 'https://arxiv.org/pdf/1012.3853.pdf', 'https://pysathq.github.io/', 'https://pysathq.github.io/docs/html/api/card.html'] | 4 |
46,120,905 | <p>In my opinion, you are lacking some critical literature review.
Here are some good papers about RNNs and CNNs that can be used for image recognition applications:</p>
<p><a href="https://pdfs.semanticscholar.org/86ef/e7769f2b8a0e15ca213ab09881e6705caeb0.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/86ef/e7769f2b8a0e15ca213ab09881e6705caeb0.pdf</a>
<a href="https://arxiv.org/pdf/1506.00019.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.00019.pdf</a></p>
<p>What is a feature? A feature represents one of the elements of the input vector which will be used to train the model and produce output.</p>
<p>The feature set is to be determined depending on the application.
Each element of the input vector is a different (dependent or independent) feature. </p>
<p>Look at this tutorial, for example, which uses the MNIST digit data set:
<a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py" rel="nofollow noreferrer">https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py</a></p>
<p>It says:
'''
To classify images using a recurrent neural network, we consider every image
row as a sequence of pixels. Because MNIST image shape is 28*28px, we will then
handle 28 sequences of 28 steps for every sample.
'''</p>
<p>The RNN is built on sequences, hence if the image is 28 by 28 you can break it in 28 sequences of 28 features.</p>
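<p>As a small illustration (hypothetical data, just to show the shapes), a flattened image batch is simply reshaped into (batch, timesteps, features):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

batch = np.random.rand(32, 784)          # 32 flattened 28x28 images
sequences = batch.reshape(32, 28, 28)    # (batch, timesteps, num_input)
print(sequences.shape)                   # each image row is one 28-feature step
</code></pre>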
<pre><code># Network Parameters
num_input = 28 # MNIST data input (img shape: 28*28)
timesteps = 28 # timesteps
</code></pre>
<p>This is what you see in the network parameters. The 28 <strong>features</strong> (num_input = 28) represent one sequence of the image.</p>
<p>To repeat again, each element of the input vector is considered a feature. Furthermore, is the analyst's responsibility to properly define these features.</p> | 2017-09-08 16:26:45.243000+00:00 | 2017-09-08 16:26:45.243000+00:00 | null | null | 46,118,688 | <p>I sort of understand what features are, say a ML algorithm that learns SPAM, certain keywords could be a feature? </p>
<p>But in the famous MNIST digits data set, I see a matrix of numbers, is the entire matrix one single feature? Or is a feature each number in the matrix?</p> | 2017-09-08 14:22:38.320000+00:00 | 2017-09-10 04:20:35.277000+00:00 | null | machine-learning | ['https://pdfs.semanticscholar.org/86ef/e7769f2b8a0e15ca213ab09881e6705caeb0.pdf', 'https://arxiv.org/pdf/1506.00019.pdf', 'https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py'] | 3 |
72,390,305 | <p>TL;DR: It is absolutely okay to restrict actions.</p>
<p>The available actions can be state-dependent. This can be given by physical limitations (no possibility to enter the wall). A radical example of this is the application of RL to movement on a graph (see this: <a href="https://education.dellemc.com/content/dam/dell-emc/documents/en-us/2020KS_Nannapaneni-Optimal_path_routing_using_Reinforcement_Learning.pdf" rel="nofollow noreferrer">https://education.dellemc.com/content/dam/dell-emc/documents/en-us/2020KS_Nannapaneni-Optimal_path_routing_using_Reinforcement_Learning.pdf</a>).</p>
<p>Additionally, you can restrict your actions even if they are allowed (e.g. physically possible) by designing the policy. In the case of a probabilistic policy, you can set the "fire" actions to have probability zero.</p>
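<p>For example, a minimal sketch (mine, not from the linked papers) of masking invalid moves during action selection in tabular Q-learning:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def select_action(q_values, valid_actions, epsilon=0.1):
    """q_values: 1-D float array over all actions; valid_actions: allowed indices."""
    if np.random.rand() < epsilon:
        return int(np.random.choice(valid_actions))    # explore among valid moves only
    masked = np.full_like(q_values, -np.inf)
    masked[valid_actions] = q_values[valid_actions]    # mask out blocked directions
    return int(np.argmax(masked))
</code></pre>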
<p>For deeper reading: <a href="https://arxiv.org/pdf/1906.01772.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1906.01772.pdf</a></p> | 2022-05-26 10:23:08.967000+00:00 | 2022-05-26 10:23:08.967000+00:00 | null | null | 72,385,956 | <p>I am currently implementing q learning to solve a maze which contains fires which initiate randomly. Would it be considered proper for me to code the action to not be an option for the agent if there is a fire in that direction or should my reward be doing this instead?
Thanks</p> | 2022-05-26 02:11:09.183000+00:00 | 2022-05-26 10:23:08.967000+00:00 | null | machine-learning|reinforcement-learning|q-learning | ['https://education.dellemc.com/content/dam/dell-emc/documents/en-us/2020KS_Nannapaneni-Optimal_path_routing_using_Reinforcement_Learning.pdf', 'https://arxiv.org/pdf/1906.01772.pdf'] | 2 |
50,936,084 | <p>You want object detection and segmentation. For that check out the excellent work done by Kaiming He et al:</p>
<ul>
<li><a href="https://arxiv.org/pdf/1703.06870" rel="nofollow noreferrer">Paper</a></li>
<li><a href="https://github.com/facebookresearch/Detectron" rel="nofollow noreferrer">Code</a></li>
</ul>
<p>Their work (Mask R-CNN) has also been ported to TensorFlow.</p>
<ul>
<li><a href="https://github.com/CharlesShang/FastMaskRCNN" rel="nofollow noreferrer">TensorFlow Mask R-CNN</a></li>
<li><a href="https://github.com/matterport/Mask_RCNN" rel="nofollow noreferrer">Mask R-CNN on Python 3, Keras, and TensorFlow</a></li>
</ul>
<p><a href="https://i.stack.imgur.com/Gc15l.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gc15l.jpg" alt="Demo"></a></p> | 2018-06-19 20:01:01.307000+00:00 | 2018-06-19 20:01:01.307000+00:00 | null | null | 50,935,297 | <p>I have the image with some object on it - person, vehicle, building or some manually set object. For example assuming I have this image</p>
<p><a href="https://i.stack.imgur.com/uC8qK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uC8qK.jpg" alt="enter image description here"></a></p>
<p>I want to take out a house from it</p>
<p><a href="https://i.stack.imgur.com/D6AZE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D6AZE.png" alt="enter image description here"></a></p>
<p>Can I achieve that using Microsoft Cognitive API and Azure though some custom vision service? Or I should go OpenCV route and others...</p> | 2018-06-19 19:00:11.770000+00:00 | 2018-06-19 20:01:01.307000+00:00 | null | image-processing|microsoft-cognitive | ['https://arxiv.org/pdf/1703.06870', 'https://github.com/facebookresearch/Detectron', 'https://github.com/CharlesShang/FastMaskRCNN', 'https://github.com/matterport/Mask_RCNN', 'https://i.stack.imgur.com/Gc15l.jpg'] | 5 |
70,400,448 | <p>The solution I found is to create three data sets from the beginning with the correct transforms you want. Then you have 3 data set objects, and you give them to <code>torch.utils.data.Subset(train_dataset, train_indices)</code>. The crux is essentially this:</p>
<pre><code> # load the dataset
path_to_data_set: str = str(Path(path_to_data_set).expanduser())
train_dataset = datasets.MNIST(root=path_to_data_set, train=True,
download=True, transform=train_transform)
val_dataset = datasets.MNIST(root=path_to_data_set, train=True,
download=True, transform=val_transform)
indices = list(range(len(train_dataset)))
train_indices, val_indices = split_inidices(indices, test_size=val_size, random_state=seed, shuffle=shuffle)
train_dataset = torch.utils.data.Subset(train_dataset, train_indices)
val_dataset = torch.utils.data.Subset(val_dataset, val_indices)
train_loader, val_loader = get_serial_or_distributed_dataloaders(train_dataset,
val_dataset,
batch_size,
batch_size_eval,
rank,
world_size,
merge,
num_workers,
pin_memory
)
</code></pre>
<p>then you can create whatever dataloaders you want later. This way you don't have to change the transform in the first place.</p>
<hr />
<p>full code:</p>
<pre><code>"""
# - data augmentation
Current belief is that augmenting the validation set should be fine, especially if you want to actually encourage
generalization since it makes the val set harder and it allows you to make val split percentage slightly lower since
your validation set was increased size.
For reproducibility of other work, especially for scientific pursues rather than "lets beat state of the art" - to make
it easier to compare results use what they use. e.g. it seems only augmenting the train set is the common thing,
especially when I looked at the augmentation strategies in min-imagenet and mnist.
Test set augmentation helps mostly to make test set harder (so acc should go down) - but it also increases variance
since the data size was increased. If you are reporting results most likely augmenting the data set is a good idea
- especially if you are going to compute test set errors when comparing accuracy with previous work.
Also, the way CI intervals are computed with t_p * std_n / sqrt n, means that the avg test error will be smaller, so
you are fine in general.
Default code I see doesn't augment test set so I most likely won't either.
ref:
- https://stats.stackexchange.com/questions/320800/data-augmentation-on-training-set-only/320967#320967
- https://arxiv.org/abs/1809.01442, https://stats.stackexchange.com/a/390470/28986
# - pin_memory
For data loading, passing pin_memory=True to a DataLoader will automatically put the fetched data Tensors in pinned
memory, and thus enables faster data transfer to CUDA-enabled GPUs. Note on pinning:
This is an advanced tip. If you overuse pinned memory, it can cause serious problems when running low on RAM, and
you should be aware that pinning is often an expensive operation. Thus, will leave it's default as False.
ref:
- on pin_memory: https://pytorch.org/docs/stable/data.html
"""
from typing import Callable, Optional, Union
import numpy as np
import torch
from numpy.random import RandomState
from torch.utils.data import Dataset, SubsetRandomSampler, random_split, DataLoader, RandomSampler
def get_train_val_split_random_sampler(
train_dataset: Dataset,
val_dataset: Dataset,
val_size: float = 0.2,
batch_size: int = 128,
batch_size_eval: int = 64,
num_workers: int = 4,
pin_memory: bool = False
# random_seed: Optional[int] = None,
) -> tuple[DataLoader, DataLoader]:
"""
Note:
- this will use different transforms for val and train if the objects you pass have different transforms.
- note train_dataset, val_dataset whill often be the same data set object but different instances with different
transforms for each data set.
Recommended use:
- this one is recommended when you want the train & val to have different transforms e.g. when doing scientific
work - instead of beating benchmark work - and the train, val sets had different transforms.
ref:
- https://gist.github.com/MattKleinsmith/5226a94bad5dd12ed0b871aed98cb123
"""
assert 0 <= val_size <= 1.0, f"Error: {val_size} valid_size should be in the range [0, 1]."
num_train = len(train_dataset)
indices = list(range(num_train))
split_idx = int(np.floor(val_size * num_train))
# I don't think this is needed since later the sampler randomly samples data from a given list
# if shuffle == True:
# np.random.seed(random_seed)
# np.random.shuffle(indices)
train_idx, valid_idx = indices[:split_idx], indices[split_idx:]
assert len(train_idx) != 0 and len(valid_idx) != 0
# Samples elements randomly from a given list of indices, without replacement.
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=batch_size, sampler=train_sampler,
num_workers=num_workers, pin_memory=pin_memory)
valid_loader = torch.utils.data.DataLoader(val_dataset,
batch_size=batch_size_eval, sampler=valid_sampler,
num_workers=num_workers, pin_memory=pin_memory)
return train_loader, valid_loader
def get_train_val_split_with_split(
train_dataset: Dataset,
train_val_split: list[int, int], # e.g. [50_000, 10_000] for mnist
batch_size: int = 128,
batch_size_eval: int = 64,
num_workers: int = 4,
pin_memory: bool = False
) -> tuple[DataLoader, DataLoader]:
"""
Note:
- this will have the train and val sets have the same transform.
ref:
- https://gist.github.com/MattKleinsmith/5226a94bad5dd12ed0b871aed98cb123
- change transform: https://discuss.pytorch.org/t/changing-transforms-after-creating-a-dataset/64929/4
"""
train_dataset, valid_dataset = random_split(train_dataset, train_val_split)
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=batch_size, num_workers=num_workers, pin_memory=pin_memory)
valid_loader = torch.utils.data.DataLoader(valid_dataset,
batch_size=batch_size_eval, num_workers=num_workers,
pin_memory=pin_memory)
return train_loader, valid_loader
def get_serial_or_distributed_dataloaders(train_dataset: Dataset,
val_dataset: Dataset,
batch_size: int = 128,
batch_size_eval: int = 64,
rank: int = -1,
world_size: int = 1,
merge: Optional[Callable] = None,
num_workers: int = -1, # -1 means its running serially
pin_memory: bool = False,
):
"""
"""
from uutils.torch_uu.distributed import is_running_serially
if is_running_serially(rank):
train_sampler = RandomSampler(train_dataset)
val_sampler = RandomSampler(val_dataset)
num_workers = 4 if num_workers == -1 else num_workers
else:
assert (batch_size >= world_size), f'Each worker must get at least one data point, so batch_size >= world_size' \
f'but got: {batch_size}{world_size}'
from torch.utils.data import DistributedSampler
# note: shuffle = True by default
train_sampler = DistributedSampler(train_dataset, num_replicas=world_size, rank=rank)
val_sampler = DistributedSampler(val_dataset, num_replicas=world_size, rank=rank)
# set the input num_workers but for ddp 0 is recommended afaik, todo - check
num_workers = 0 if num_workers == -1 else num_workers
# get dist dataloaders
train_loader = DataLoader(train_dataset,
batch_size=batch_size,
sampler=train_sampler,
collate_fn=merge,
num_workers=num_workers,
pin_memory=pin_memory)
val_loader = DataLoader(val_dataset,
batch_size=batch_size_eval,
sampler=val_sampler,
collate_fn=merge,
num_workers=num_workers,
pin_memory=pin_memory)
# return dataloaders
# dataloaders = {'train': train_dataloader, 'val': val_dataloader, 'test': test_dataloader}
# iter(train_dataloader) # if this fails its likely your running in pycharm and need to set num_workers flag to 0
return train_loader, val_loader
def split_inidices(indices: list,
test_size: Optional = None,
random_state: Optional[Union[int, RandomState, None]] = None,
shuffle: bool = False, # false for reproducibility, and any split is as good as any other.
) -> tuple[list[int], list[int]]:
import sklearn.model_selection
# - api: https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
train_indices, val_indices = sklearn.model_selection.train_test_split(indices, test_size=test_size,
random_state=random_state,
shuffle=shuffle)
return train_indices, val_indices
# - visualization help
"""
Inspired from:
- https://gist.github.com/MattKleinsmith/5226a94bad5dd12ed0b871aed98cb123
- https://www.geeksforgeeks.org/training-neural-networks-with-validation-using-pytorch/
"""
from argparse import Namespace
from pathlib import Path
from typing import Optional, Callable
import numpy as np
import torch
from torch.utils.data import random_split, DataLoader
from torchvision import datasets
from torchvision.transforms import transforms
from uutils.torch_uu.dataloaders.common import split_inidices, \
get_serial_or_distributed_dataloaders
NORMALIZE_MNIST = transforms.Normalize((0.1307,), (0.3081,)) # MNIST
def get_train_valid_test_data_loader_helper_for_mnist(args: Namespace) -> dict:
train_kwargs = {'path_to_data_set': args.path_to_data_set,
'batch_size': args.batch_size,
'batch_size_eval': args.batch_size_eval,
'augment_train': args.augment_train,
'augment_val': args.augment_val,
'num_workers': args.num_workers,
'pin_memory': args.pin_memory,
'rank': args.rank,
'world_size': args.world_size,
'merge': None
}
test_kwargs = {'path_to_data_set': args.path_to_data_set,
'batch_size_eval': args.batch_size_eval,
'augment_test': args.augment_train,
'num_workers': args.num_workers,
'pin_memory': args.pin_memory,
'rank': args.rank,
'world_size': args.world_size,
'merge': None
}
train_loader, val_loader = get_train_valid_loader(**train_kwargs)
test_loader: DataLoader = get_test_loader(**test_kwargs)
dataloaders: dict = {'train': train_loader, 'val': val_loader, 'test': test_loader}
return dataloaders
def get_transform(augment: bool):
if augment:
transform = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
NORMALIZE_MNIST
])
else:
transform = transforms.Compose([
transforms.ToTensor(),
NORMALIZE_MNIST
])
return transform
def get_train_valid_loader(path_to_data_set: Path,
batch_size: int = 128,
batch_size_eval: int = 64,
seed: Optional[int] = None,
augment_train: bool = True,
augment_val: bool = False,
val_size: Optional[float] = 0.2,
shuffle: bool = False, # false for reproducibility, and any split is as good as any other.
num_workers: int = -1,
pin_memory: bool = False,
rank: int = -1,
world_size: int = 1,
merge: Optional[Callable] = None,
) -> tuple[DataLoader, DataLoader]:
"""
Utility function for loading and returning train and valid
multi-process iterators over the MNIST dataset. A sample
9x9 grid of the images can be optionally displayed.
If using CUDA, num_workers should be set to 1 and pin_memory to True.
"""
# train_kwargs = {'batch_size': args.batch_size}
# define transforms
train_transform = get_transform(augment_train)
val_transform = get_transform(augment_val)
# load the dataset
path_to_data_set: str = str(Path(path_to_data_set).expanduser())
train_dataset = datasets.MNIST(root=path_to_data_set, train=True,
download=True, transform=train_transform)
val_dataset = datasets.MNIST(root=path_to_data_set, train=True,
download=True, transform=val_transform)
indices = list(range(len(train_dataset)))
train_indices, val_indices = split_inidices(indices, test_size=val_size, random_state=seed, shuffle=shuffle)
train_dataset = torch.utils.data.Subset(train_dataset, train_indices)
val_dataset = torch.utils.data.Subset(val_dataset, val_indices)
train_loader, val_loader = get_serial_or_distributed_dataloaders(train_dataset,
val_dataset,
batch_size,
batch_size_eval,
rank,
world_size,
merge,
num_workers,
pin_memory
)
return train_loader, val_loader
def get_test_loader(path_to_data_set,
batch_size_eval: int = 64,
shuffle: bool = True,
augment_test: bool = False,
num_workers: int = -1,
pin_memory=False,
rank: int = -1,
world_size: int = 1,
merge: Optional[Callable] = None,
) -> DataLoader:
"""
Utility function for loading and returning a multi-process
test iterator over the MNIST dataset.
If using CUDA, num_workers should be set to 1 and pin_memory to True.
Params
------
- path_to_data_set: path directory to the dataset.
- batch_size: how many samples per batch to load.
- shuffle: whether to shuffle the dataset after every epoch.
- num_workers: number of subprocesses to use when loading the dataset.
- pin_memory: whether to copy tensors into CUDA pinned memory. Set it to
True if using GPU.
Returns
-------
- data_loader: test set iterator.
Note:
- it knows it's the test set since train=False in the body when creating the data set.
"""
# define transform
test_transform = get_transform(augment_test)
# load the dataset
path_to_data_set: str = str(Path(path_to_data_set).expanduser())
test_dataset = datasets.MNIST(root=path_to_data_set,
train=False, # ensures its test set
download=True,
transform=test_transform)
_, test_loader = get_serial_or_distributed_dataloaders(test_dataset,
test_dataset,
batch_size_eval,
batch_size_eval,
rank,
world_size,
merge,
num_workers,
pin_memory,
)
return test_loader
</code></pre>
<p>The repo this came from, with a permanent GitHub link: <a href="https://github.com/brando90/ultimate-utils/blob/ef2217c07b43aa5354f7b6f8f1761c5f16017874/ultimate-utils-proj-src/uutils/torch_uu/dataloaders/mnist.py#L22" rel="nofollow noreferrer">https://github.com/brando90/ultimate-utils/blob/ef2217c07b43aa5354f7b6f8f1761c5f16017874/ultimate-utils-proj-src/uutils/torch_uu/dataloaders/mnist.py#L22</a></p>
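<p>For completeness, here is a small, hypothetical usage sketch of the two loader functions above. The data path, batch sizes and worker counts are made-up values, and it assumes the module's other helpers (e.g. <code>get_serial_or_distributed_dataloaders</code>) are importable alongside:</p>
<pre><code>from pathlib import Path

# Assumed example values; adjust to your own setup.
train_loader, val_loader = get_train_valid_loader(path_to_data_set=Path('~/data'),
                                                  batch_size=128,
                                                  batch_size_eval=64,
                                                  augment_train=True,
                                                  augment_val=False,
                                                  val_size=0.2,
                                                  num_workers=4,
                                                  pin_memory=True)
test_loader = get_test_loader(path_to_data_set=Path('~/data'),
                              batch_size_eval=64,
                              num_workers=4,
                              pin_memory=True)

for images, labels in train_loader:
    pass  # one training step per batch goes here
</code></pre>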
<hr />
<p>related:</p>
<ul>
<li><a href="https://discuss.pytorch.org/t/changing-transformation-applied-to-data-during-training/15671/14" rel="nofollow noreferrer">https://discuss.pytorch.org/t/changing-transformation-applied-to-data-during-training/15671/14</a></li>
<li><a href="https://discuss.pytorch.org/t/changing-transforms-after-creating-a-dataset/64929/7" rel="nofollow noreferrer">https://discuss.pytorch.org/t/changing-transforms-after-creating-a-dataset/64929/7</a></li>
<li><a href="https://discuss.pytorch.org/t/apply-different-transform-data-augmentation-to-train-and-validation/63580/13" rel="nofollow noreferrer">https://discuss.pytorch.org/t/apply-different-transform-data-augmentation-to-train-and-validation/63580/13</a></li>
</ul> | 2021-12-18 01:51:16.167000+00:00 | 2021-12-18 01:51:16.167000+00:00 | null | null | 70,400,439 | <p>I noticed that a standard thing like getting the validation set in PyTorch is not as common as one would expect or not obviously available in the pytorch library.</p>
<p>I found two websites that do it their own way:</p>
<ul>
<li><a href="https://gist.github.com/MattKleinsmith/5226a94bad5dd12ed0b871aed98cb123" rel="nofollow noreferrer">https://gist.github.com/MattKleinsmith/5226a94bad5dd12ed0b871aed98cb123</a></li>
<li><a href="https://www.geeksforgeeks.org/training-neural-networks-with-validation-using-pytorch/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/training-neural-networks-with-validation-using-pytorch/</a></li>
</ul>
<p>But they both have problems: the second one forces the train and validation sets to use the same transforms, and the first one splits at the data-loader level, which then cannot easily be handed to a distributed data loader, as far as I know.</p>
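<p>To make the requirement concrete, here is a rough sketch of the behaviour I am after: two dataset objects over the same files, each with its own transform, plus one shared index split so train and val never overlap (the dataset, transforms and the 0.2 split are just placeholders):</p>
<pre><code>from pathlib import Path

import torch
from torchvision import datasets, transforms

root = str(Path('~/data').expanduser())
train_tf = transforms.Compose([transforms.RandomHorizontalFlip(), transforms.ToTensor()])
val_tf = transforms.ToTensor()

# Two dataset objects over the same files, each with its own transform.
train_ds = datasets.MNIST(root, train=True, download=True, transform=train_tf)
val_ds = datasets.MNIST(root, train=True, download=True, transform=val_tf)

# One split of indices shared by both, so the two subsets never overlap.
n = len(train_ds)
n_val = int(0.2 * n)
perm = torch.randperm(n).tolist()
train_ds = torch.utils.data.Subset(train_ds, perm[n_val:])
val_ds = torch.utils.data.Subset(val_ds, perm[:n_val])
</code></pre>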
<p>If that is not the way to do it then what is the right proper way to create a train, val and test set?</p> | 2021-12-18 01:49:27.200000+00:00 | 2021-12-18 01:51:16.167000+00:00 | null | deep-learning|pytorch | ['https://github.com/brando90/ultimate-utils/blob/ef2217c07b43aa5354f7b6f8f1761c5f16017874/ultimate-utils-proj-src/uutils/torch_uu/dataloaders/mnist.py#L22', 'https://discuss.pytorch.org/t/changing-transformation-applied-to-data-during-training/15671/14', 'https://discuss.pytorch.org/t/changing-transforms-after-creating-a-dataset/64929/7', 'https://discuss.pytorch.org/t/apply-different-transform-data-augmentation-to-train-and-validation/63580/13'] | 4 |
27,722,051 | <p>This problem is easier than the problem of sensitivity analysis of minimum spanning trees, which is to determine how much each tree/nontree edge can increase/decrease in weight before the minimum spanning tree changes. The best known algorithm for MST sensitivity analysis appears to be due to <a href="http://arxiv.org/abs/1407.1910" rel="nofollow">Seth Pettie (2005, arXived 2014)</a>, with a running time of O(|E| log alpha(|E|, |V|)). This is very close to optimal (alpha is inverse Ackermann) but also still superlinear. Several randomized algorithms with linear expected running times are known.</p> | 2014-12-31 13:50:00.807000+00:00 | 2014-12-31 14:12:11.547000+00:00 | 2014-12-31 14:12:11.547000+00:00 | null | 27,714,799 | <p>I came across this question while finding a solution for a "critical edge" problem. The original (C++) problem, which I have already solved, was:</p>
<blockquote>
<p>Consider a graph G=(V,E). Find how many edges belong to <strong>all</strong> MSTs, how many edges do <strong>not</strong> belong to <strong>any</strong> MST and how many edges belong to some MSTs, but not all.</p>
</blockquote>
<p>Let's call "green", "red" and "yellow", respectively, the edges in the 3 above cases.</p>
<p>After conducting my research, I came across <a href="https://stackoverflow.com/questions/15720155/find-all-critical-edges-of-an-mst">Find all critical edges of an MST</a>, which solves the problem. One would run a modified version of Kruskal's algorithm: if two or more edges of the same weight connect the same components (so that picking more than one of them would form a cycle), then all of these are yellow edges, i.e., edges that may or may not be included in an MST. Edges that are indisputably selected are "green", and edges that create a cycle within a single component are "red". So, the original problem has been solved.</p>
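<p>To make that classification concrete, here is a rough Python sketch of what I mean by the modified Kruskal pass; the union-find and bridge-finding helpers are just my illustration, not code taken from the linked answer:</p>
<pre><code>import sys
from collections import defaultdict
from itertools import groupby

def find_bridges(adj):
    """Bridges of an undirected multigraph {node: [(neighbour, edge_id), ...]}."""
    sys.setrecursionlimit(100000)
    disc, low, bridges, timer = {}, {}, set(), [0]
    def dfs(u, in_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, eid in adj[u]:
            if eid == in_edge:
                continue                       # do not reuse the edge we entered on
            if v in disc:
                low[u] = min(low[u], disc[v])  # back edge
            else:
                dfs(v, eid)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridges.add(eid)           # only way into v's subtree
    for u in list(adj):
        if u not in disc:
            dfs(u, None)
    return bridges

def classify_edges(n, edges):
    """edges: list of (weight, u, v); returns a colour per edge index."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    colour = {}
    order = sorted(range(len(edges)), key=lambda i: edges[i][0])
    for _, group in groupby(order, key=lambda i: edges[i][0]):
        group = list(group)
        alive = []
        for i in group:
            _, u, v = edges[i]
            if find(u) == find(v):
                colour[i] = 'red'              # already connected by strictly lighter edges
            else:
                alive.append(i)
        # Multigraph of the current components induced by this weight class.
        adj = defaultdict(list)
        for i in alive:
            _, u, v = edges[i]
            adj[find(u)].append((find(v), i))
            adj[find(v)].append((find(u), i))
        bridges = find_bridges(adj)
        for i in alive:
            colour[i] = 'green' if i in bridges else 'yellow'  # bridges are forced into every MST
        for i in alive:                        # now merge, Kruskal-style
            ru, rv = find(edges[i][1]), find(edges[i][2])
            if ru != rv:
                parent[ru] = rv
    return colour

# e.g. classify_edges(4, [(1, 0, 1), (1, 1, 2), (1, 0, 2), (2, 2, 3)])
# gives {0: 'yellow', 1: 'yellow', 2: 'yellow', 3: 'green'}
</code></pre>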
<p>The issue with the above algorithm is that it runs in <strong>O( |E| * log|V| )</strong>, which is the running time of Kruskal's algorithm (please correct me if I'm wrong). I was considering whether a modified version of Prim's algorithm could also be used, as it has a better amortized complexity of <strong>O( |E| + |V| log |V| )</strong> if a Fibonacci heap is used.</p>
<p>My feeling is that a modified version of Prim's algorithm cannot be used here, since we are obliged to iterate all edges based on ascending weight; however, I cannot prove this. So, it is possible to further reduce the complexity of this problem?</p> | 2014-12-31 00:54:34.193000+00:00 | 2015-08-27 17:37:37.820000+00:00 | null | c++|algorithm|graph|minimum-spanning-tree | ['http://arxiv.org/abs/1407.1910'] | 1 |
26,876,492 | <p>This is quite tricky if you want to handle all errors correctly. So one should ask themselves what to do if your code throws an exception, or if the <code>error</code> event handler is called. You want errors to propagate correctly, that is, to be thrown as an exception in the fiber that calls the streaming code. I implemented something like this for one of our <a href="https://github.com/peerlibrary/peerlibrary/blob/development/server/jobs/arxiv.coffee#L222" rel="nofollow">job-collection</a> tasks, for extracting <a href="https://github.com/peerlibrary/meteor-tar" rel="nofollow">tar files</a>.</p>
<p>First you need some helper functions:</p>
<pre><code>bindWithFuture = (futures, mainFuture, fun, self) ->
wrapped = (args...) ->
future = new Future()
if mainFuture
future.resolve (error, value) ->
# To resolve mainFuture early when an exception occurs
mainFuture.throw error if error and not mainFuture.isResolved()
# We ignore the value
args.push future.resolver()
try
futures.list.push future
fun.apply (self or @), args
catch error
future.throw error
# This waiting does not really do much because we are
# probably in a new fiber created by Meteor.bindEnvironment,
# but we can still try to wait
Future.wait future
Meteor.bindEnvironment wrapped, null, self
wait = (futures) ->
while futures.list.length
Future.wait futures.list
# Some elements could be added in meantime to futures,
# so let's remove resolved ones and retry
futures.list = _.reject futures.list, (f) ->
if f.isResolved()
# We get to throw an exception if there was an exception.
# This should not really be needed because exception should
# be already thrown through mainFuture and we should not even
# get here, but let's check for every case.
f.get()
true # And to remove resolved
</code></pre>
<p>And then you can run something like:</p>
<pre><code>mainFuture = new Future()
# To be able to override list with a new value in wait we wrap it in an object
futures =
list: []
bindWithOnException = (f) =>
Meteor.bindEnvironment f, (error) =>
mainFuture.throw error unless mainFuture.isResolved()
onWebpageMetaData = (metaData, callback) =>
return callback null if mainFuture.isResolved()
# Do whatever you want here.
# Call callback(null) when you finish.
# Call callback(error) if there is an error.
# If you want to call into a Meteor code inside some other callback for async code you use,
# use bindWithOnException to wrap a function and stay inside a Meteor environment and fiber.
MeteorCollection.insert
metaData: metaData
callback null
requestFuture = new Future()
request
url: job.fileURL
encoding: null
,
(error, response, body) ->
return requestFuture.throw error if error
return requestFuture.throw new Error "Expected status 200, got #{ response.statusCode }." unless response.statusCode is 200
requestFuture.return response
response = requestFuture.wait()
responseEncoding = response.headers['content-type']
throw new Error "Wrong encoding" unless responseEncoding in ['application/octet-stream', 'binary/octet-stream']
regexSplit = /WARC\/1\./
response.pipe(
zlib.createGunzip()
).pipe(
EventStream.split regexSplit
).pipe(
EventStream.map bindWithFuture futures, mainFuture, onWebpageMetaData
).on('end', =>
# It could already be resolved by an exception from bindWithFuture or bindWithOnException
mainFuture.return() unless mainFuture.isResolved()
).on('error', (error) =>
# It could already be resolved by an exception from bindWithFuture or bindWithOnException
mainFuture.throw error unless mainFuture.isResolved()
)
mainFuture.wait()
wait futures
</code></pre> | 2014-11-11 23:23:27.263000+00:00 | 2014-11-11 23:23:27.263000+00:00 | null | null | 26,772,910 | <p>I'm using the <a href="https://github.com/vsivsi/meteor-job-collection" rel="nofollow">job-collection</a> package to do the following:</p>
<ol>
<li>Download a large file with a bunch of metadata about webpages</li>
<li>Create a stream from the file metadata that is split by a regex using the NPM <code>event-stream</code> package</li>
<li>Check if there is a match of the metadata in a collection (I've been attempting to stream each webpage's metadata to another function to do this) </li>
</ol>
<p>The file is too large to buffer, so streaming is required. <a href="https://s3.amazonaws.com/ja-common-crawl/exampleWatFile.wat.gz" rel="nofollow">Here is a small file with a few examples of the metadata </a> if you wish to try this.</p>
<p>Each job from the <code>job-collection</code> package is already inside an async function: </p>
<pre><code>var request = Npm.require('request');
var zlib = Npm.require('zlib');
var EventStream = Meteor.npmRequire('event-stream');
function (job, callback) {
//This download is much too long to block
request({url: job.fileURL, encoding: null}, function (error, response, body) {
if (error) console.error('Error downloading File');
if (response.statusCode !== 200) console.error(downloadResponse.statusCode, 'Status not 200');
var responseEncoding = response.headers['content-type'];
console.log('response encoding is %s', responseEncoding);
        if (responseEncoding === 'application/octet-stream' || responseEncoding === 'binary/octet-stream') {
console.log('Received binary/octet-stream');
var regexSplit = /WARC\/1\./;
response.pipe(zlib.createGunzip()
.pipe(EventStream.split(regexSplit))
.pipe(EventStream.map(function (webpageMetaData) {
/* Need parse the metaData or pass each webpageMetaData to function
* This next function could block if it had to */
                searchWebPageMetaData(webpageMetaData); // pass each metadatum to this function to update a collection - this function can be synchronous
}));
} else {
console.error('Wrong encoding');
}
});
}
function searchWebPageMetaData(metaData) {
// Parse JSON and search collection for match
}
</code></pre>
<ul>
<li>Are there better ways to structure this? Am I on the right track?</li>
<li>Where do I put <code>Meteor.bindEnvironment</code>? Do I bind the environment each time I pass data to <code>searchWebPageMetaData()</code>? Do I need to expressly use fibers here?</li>
<li>The stream stops if I run it to <code>process.stdout</code>. Am I supposed to put the stream into one of Meteor's wrappers?</li>
<li>I'm aware of <code>Meteor.wrapAsync</code>. Do I want to wrap the innermost <code>searchWebPageMetaData()</code> function in <code>Meteor.wrapAsync</code>? (I think I'm answering this yes as I type.)</li>
<li>Will the stream slow down to compensate for the slowness of the DB calls? My guess is no, but how do I deal with that?</li>
</ul>
<p>I've spent quite a while learning about Meteor's <code>wrapAsync</code> and <code>bindEnvironment</code>, but I'm having trouble bringing it all together and understanding where to use them.</p>
<p><strong>SUPPLEMENT 1</strong></p>
<p>Just to clarify, the steps are:</p>
<ol>
<li>Download file;</li>
<li>Create stream;</li>
<li>unzip it;</li>
<li>split it into individual webPages - EventStream handles this</li>
<li>send it to a function - I don't need return values; this could be blocking, as it's just some searching and a database call</li>
</ol>
<p>I was trying to do something like this, except the core code I need help with was in a function in a different file. The following code has most of @electric-jesus' answer in there.</p>
<pre><code> processJobs('parseWatFile', {
concurrency: 1,
cargo: 1,
pollInterval: 1000,
prefetch: 1
}, function (job, callback) {
if (job.data.watZipFileLink) {
queue.pause();
console.log('queue should be paused now');
var watFileUrl = 'https://s3.amazonaws.com/ja-common-crawl/exampleWatFile.wat.gz';
function searchPageMetaData(webpageMetaData, callback) {
console.log(webpageMetaData); // Would be nice to just get this function logging each webPageMetaData
future.return(callback(webpageMetaData)); //I don't need this to return any value - do I have to return something?
}
if (!watFile)
console.error('No watFile passed to downloadAndSearchWatFileForEntity ');
var future = new Future(); // Doc Brown would be proud.
if(typeof callback !== 'function') future.throw('callbacks are supposed to be functions.');
request({url: watFile, encoding: null}, function (error, response, body) {
if (error) future.throw('Error Downloading File');
if (response.statusCode !== 200) future.throw('Expected status 200, got ' + response.statusCode + '.');
var responseEncoding = response.headers['content-type'];
            if (responseEncoding === 'application/octet-stream' || responseEncoding === 'binary/octet-stream') {
var regexSplit = /WARC\/1\./;
response.pipe(zlib.createGunzip()
.pipe(EventStream.split(regexSplit))
.pipe(EventStream.map(function (webpageMetaData) {
searchPageMetaData(webpageMetaData, callback);
})
));
} else {
future.throw('Wrong encoding');
}
});
return future.wait();
} else {
console.log('No watZipFileLink for this job');
job.log('ERROR: NO watZipFileLink from commonCrawlJob collection');
}
queue.resume();
job.done;
callback();
}
</code></pre> | 2014-11-06 06:15:59.057000+00:00 | 2014-11-12 05:58:02.267000+00:00 | 2014-11-12 05:58:02.267000+00:00 | node.js|asynchronous|meteor|stream | ['https://github.com/peerlibrary/peerlibrary/blob/development/server/jobs/arxiv.coffee#L222', 'https://github.com/peerlibrary/meteor-tar'] | 2 |
56,616,027 | <p>99% of the data is missing!!!???
Well, if your dataset has fewer than 100,000 examples, then you may want to remove those columns instead of imputing them with any method.
If you have a larger dataset, then mean imputation or KNN imputation would be ... OK, but these methods don't capture the statistics of your data and can eat up memory. Instead, use Bayesian machine-learning methods, such as fitting a Gaussian process to your data or training a Variational Auto-Encoder on those sparse columns. (A small sketch of the simpler options is included after the links below.)
<br>
1.) Here are a few links to learn about and use Gaussian processes to sample missing values from the dataset: <br>
<a href="http://web.stanford.edu/class/archive/ee/ee278/ee278.1152/lect06-2.pdf" rel="nofollow noreferrer">What is a Random Process</a>? <br>
<a href="https://pdfs.semanticscholar.org/de85/50f00947d8dbcb1385b339780597daf05d7f.pdf" rel="nofollow noreferrer">How to handle missing values with GP?</a></p>
<p>2.) You can also use a VAE to impute the missing values!!!<br>
<a href="https://arxiv.org/abs/1807.03653" rel="nofollow noreferrer">Try reading this paper</a></p>
<p>I hope this helps!</p> | 2019-06-16 04:15:43.567000+00:00 | 2019-06-16 04:22:33.557000+00:00 | 2019-06-16 04:22:33.557000+00:00 | null | 56,615,889 | <p>I am facing a dilemma with a project of mine. Few of the variables don't have enough data that means almost 99% data observations are missing.</p>
<p>I am thinking of couple of options - </p>
<ul>
<li><p>Impute missing value with mean/knn imputation </p></li>
<li><p>Impute missing value with 0.</p></li>
</ul>
<p>I couldn't think of anything in this direction. If someone can help that would be great. </p>
<p>P.S. I am not feeling comfortable using mean imputation when 99% of the data is missing. Does someone have a reasoning for that? kindly let me know. </p>
<p>Data has 397576 Observations out of which below are the missing values
<a href="https://i.stack.imgur.com/jgJUo.png" rel="nofollow noreferrer">enter image description here</a></p> | 2019-06-16 03:43:13.643000+00:00 | 2020-03-26 16:52:01.163000+00:00 | 2020-03-26 16:52:01.163000+00:00 | python|machine-learning|data-science|data-analysis|data-cleaning | ['http://web.stanford.edu/class/archive/ee/ee278/ee278.1152/lect06-2.pdf', 'https://pdfs.semanticscholar.org/de85/50f00947d8dbcb1385b339780597daf05d7f.pdf', 'https://arxiv.org/abs/1807.03653'] | 3 |
50,633,209 | <p>It appears that it is possible to track a smart phone without using GPS.</p>
<p>Sources:</p>
<p>Primary: <a href="https://arxiv.org/pdf/1802.01468.pdf" rel="nofollow noreferrer">"PinMe: Tracking a Smartphone User around the World"</a> </p>
<p>Secondary: <a href="https://gizmodo.com/how-to-track-a-cellphone-without-gps-or-consent-1821125371" rel="nofollow noreferrer">"How to Track a Cellphone Without GPS—or Consent"</a></p>
<p>I have not yet found a link to the team's final code. When I do I will post, if another has not done so.</p> | 2018-05-31 21:41:32.877000+00:00 | 2019-01-31 21:37:38.240000+00:00 | 2019-01-31 21:37:38.240000+00:00 | null | 6,694,391 | <p>Is it possible to get the current location of user without using GPS or the internet? I mean with the help of mobile network provider.</p> | 2011-07-14 14:01:03.970000+00:00 | 2019-04-02 09:56:22.197000+00:00 | 2019-01-31 21:36:29.227000+00:00 | android|geolocation|gps|android-gps | ['https://arxiv.org/pdf/1802.01468.pdf', 'https://gizmodo.com/how-to-track-a-cellphone-without-gps-or-consent-1821125371'] | 2 |
59,457,761 | <p>I don't know if the approach you are using will actually give you useful results, since how the network learns and what exactly it learns are not known. I suggest you use a different kind of autoencoder that automatically learns disentangled representations of the data in a latent space; this way you can be sure that all the parameters you find are actually contributing to the representation of your data. Check this <a href="https://arxiv.org/abs/1606.05579" rel="nofollow noreferrer">article</a>.</p> | 2019-12-23 15:31:27.763000+00:00 | 2019-12-23 15:31:27.763000+00:00 | null | null | 59,437,198 | <p>I have implemented an autoencoder using Keras that takes <code>112*112*3</code> neurons as input and <code>100</code> neurons as the compressed/encoded state. I want to find the neurons out of these 100 that learn the important features. So far I have calculated eigenvalues (e) and eigenvectors (v) using the following steps, and I found that roughly the first 30 values of (e) are greater than 0. Does that mean the first 30 modes are the important ones? Is there any other method that could find the important neurons?</p>
<p>Thanks in Advance</p>
<pre><code>x_enc = enc_model.predict(x_train, batch_size=BATCH_SIZE) # shape (3156,100)
x_mean = np.mean(x_enc, axis=0) # shape (100,)
x_stds = np.std(x_enc, axis=0) # shape (100,)
x_cov = np.cov((x_enc - x_mean).T) # shape (100,100)
e, v = np.linalg.eig(x_cov) # shape (100,) and (100,100) respectively
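# Illustrative continuation (not part of the snippet above): rank the modes by
# explained variance instead of just counting eigenvalues greater than 0.
# x_cov is symmetric, so np.linalg.eigh(x_cov) would also work and returns real values.
e = np.real(e)                              # eig can return a complex dtype with ~0 imaginary parts
order = np.argsort(e)[::-1]                 # indices of eigenvalues, largest first
e_sorted, v_sorted = e[order], v[:, order]  # reorder eigenvectors consistently
explained = e_sorted / e_sorted.sum()       # fraction of latent variance per mode
cumulative = np.cumsum(explained)
k = int(np.searchsorted(cumulative, 0.95) + 1)   # modes needed for ~95% of the variance
print(k, explained[:5])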
</code></pre> | 2019-12-21 14:58:28.803000+00:00 | 2019-12-23 15:31:27.763000+00:00 | 2019-12-22 12:12:12.090000+00:00 | machine-learning|pca|autoencoder|eigenvalue|eigenvector | ['https://arxiv.org/abs/1606.05579'] | 1 |