a_id (int64) | a_body (string) | a_creation_date (string) | a_last_activity_date (string) | a_last_edit_date (string ⌀) | a_tags (float64) | q_id (int64) | q_body (string) | q_creation_date (string) | q_last_activity_date (string) | q_last_edit_date (string ⌀) | q_tags (string) | _arxiv_links (string) | _n_arxiv_links (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
64,542,942 | <p>At the current time, the best way to detect text is by using EAST (An Efficient and Accurate Scene Text Detector).</p>
<p>The EAST pipeline is capable of predicting words and lines of text at arbitrary orientations on 720p images, and furthermore, can run at 13 FPS, according to the authors.</p>
<p>EAST quick start tutorial can be found <a href="https://www.pyimagesearch.com/2018/08/20/opencv-text-detection-east-text-detector/" rel="nofollow noreferrer">here</a></p>
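<p>For illustration, here is a minimal sketch of running the EAST detector through OpenCV's DNN module (the model file path and image name are assumptions; the frozen model must be downloaded separately, e.g. per the tutorial above):</p>
<pre><code>import cv2

# path to the pre-trained EAST model is an assumption -- download it separately
net = cv2.dnn.readNet("frozen_east_text_detection.pb")
image = cv2.imread("input.jpg")

# EAST expects input dimensions that are multiples of 32; blobFromImage resizes
blob = cv2.dnn.blobFromImage(image, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)

# scores holds text/no-text confidences; geometry holds the box parameters
scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                "feature_fusion/concat_3"])
</code></pre>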
<p>EAST paper can be found <a href="https://arxiv.org/abs/1704.03155" rel="nofollow noreferrer">here</a></p> | 2020-10-26 18:27:43.510000+00:00 | 2020-10-26 18:27:43.510000+00:00 | null | null | 10,206,526 | <p>I want to detect the text area in an image as a preprocessing step for the Tesseract OCR engine. The engine works well when the input is text only, but it fails when the input image contains non-text content, so I want to detect only the text content in an image. Any idea of how to do that will be helpful, thanks.</p> | 2012-04-18 09:25:02.710000+00:00 | 2020-10-26 18:27:43.510000+00:00 | 2012-04-18 09:32:03.237000+00:00 | c++|image-processing|tesseract|text-extraction | ['https://www.pyimagesearch.com/2018/08/20/opencv-text-detection-east-text-detector/', 'https://arxiv.org/abs/1704.03155'] | 2
14,441,803 | <p>Quicksort-Heapsort in-place hybrids are really interesting, too, since most of them only need n*log n comparisons in the worst case (they are optimal with respect to the first term of the asymptotics, so they avoid the worst-case scenarios of Quicksort), O(log n) extra space, and they preserve at least "a half" of the good behaviour of Quicksort with respect to already-ordered sets of data. An extremely interesting algorithm is presented by Diekert and Weiß in <a href="http://arxiv.org/pdf/1209.4214v1.pdf" rel="noreferrer">http://arxiv.org/pdf/1209.4214v1.pdf</a>:</p>
<ul>
<li>Select a pivot p as the median of a random sample of sqrt(n) elements (this can be done in at most 24 sqrt(n) comparisons through the algorithm of Tarjan&co, or 5 sqrt(n) comparisons through the much more convoluted spider-factory algorithm of Schönhage);</li>
<li>Partition your array in two parts as in the first step of Quicksort;</li>
<li>Heapify the smallest part and use O(log n) extra bits to encode a heap in which every left child has a value greater than its sibling;</li>
<li>Recursively extract the root of the heap, sift down the lacuna left by the root until it reaches a leaf of the heap, then fill the lacuna with an appropriate element taken from the other part of the array;</li>
<li>Recur over the remaining non-ordered part of the array (if p is chosen as the exact median, there is no recursion at all).</li>
</ul> | 2013-01-21 15:23:51.230000+00:00 | 2013-01-21 15:46:59.377000+00:00 | 2013-01-21 15:46:59.377000+00:00 | null | 2,467,751 | <p>Both quicksort and heapsort do in-place sorting. Which is better? What are the applications and cases in which either is preferred?</p> | 2010-03-18 05:45:44.310000+00:00 | 2020-09-19 11:07:54.650000+00:00 | 2010-03-24 18:47:52.240000+00:00 | algorithm|sorting|quicksort|heapsort | ['http://arxiv.org/pdf/1209.4214v1.pdf'] | 1 |
62,791,442 | <blockquote>
<p>Thread 3's acquire syncs with Thread 2's release, which comes after Thread 2's acquire which syncs with Thread 1's release. Therefore, Thread 3 is guaranteed to see the value that Thread 1 set to x, correct?</p>
</blockquote>
<p>Yes, this is correct. The acquire/release operations establish <em>synchronize-with</em> relations - i.e., <code>store_release(a)</code> synchronizes-with <code>load_acquire(a)</code> and <code>store_release(b)</code> synchronizes-with <code>load_acquire(b)</code>. And <code>load_acquire(a)</code> is sequenced-before <code>store_release(b)</code>. <em>synchronize-with</em> and <em>sequenced-before</em> are both part of the <em>happens-before</em> definition, and the happens-before relation is <em>transitive</em>. Therefore, <code>store_relaxed(x, 1)</code> happens-before <code>load_relaxed(x)</code>.</p>
<blockquote>
<p>Am I right in believing that according to the standard, there must always be a pair of non-relaxed atomic operations, one in each thread, in order for any kind of memory ordering at all to be guaranteed?</p>
</blockquote>
<p>This question is a bit too broad, but overall I would tend to say "yes". In general you have to ensure that there is a proper <em>happens-before</em> relation when operating on some (non-atomic) shared data. If one thread writes to some shared data and some other thread should read that data, you have to ensure that the write <em>happens-before</em> the read. There are different ways to achieve this - atomics with the correct memory orderings are just one way (although one could argue that almost all other methods (like <code>std::mutex</code>) also boil down to atomic operations).</p>
<p>Fences also have to be combined with other fences or atomic operations. Your example would work if <code>super_duper_memory_fence()</code> were a <code>std::atomic_thread_fence(std::memory_order_release)</code> and you put another <code>std::atomic_thread_fence(std::memory_order_acquire)</code> before your call to <code>use_data</code>.</p>
<p>For more details I can recommend this paper which I have co-authored: <a href="https://arxiv.org/abs/1803.04432" rel="nofollow noreferrer">Memory Models for C/C++ Programmers</a></p> | 2020-07-08 09:15:23.687000+00:00 | 2020-07-08 09:15:23.687000+00:00 | null | null | 62,786,334 | <p>I have some doubts about the C++11/C11 memory model that I was wondering if anyone can clarify. These are questions about the model/abstract machine, not about any real architecture.</p>
<hr />
<ol>
<li>Are acquire/release effects guaranteed to "cascade" from one thread to the next?</li>
</ol>
<p>Here is a pseudo code example of what I mean (assume all variables start as 0)</p>
<pre><code>[Thread 1]
store_relaxed(x, 1);
store_release(a, 1);
[Thread 2]
while (load_acquire(a) == 0);
store_release(b, 1);
[Thread 3]
while (load_acquire(b) == 0);
assert(load_relaxed(x) == 1);
</code></pre>
<p>Thread 3's acquire syncs with Thread 2's release, which comes after Thread 2's acquire which syncs with Thread 1's release. Therefore, Thread 3 is guaranteed to see the value that Thread 1 set to x, correct? Or do we need to use seq cst here in order to be guaranteed that the assert will not fire? I have a feeling acquire/release is enough, but I can't quite find any simple explanation that guarantees it. Most explanations of acquire/release mainly focus on the acquiring thread receiving all the stores made by the releasing thread. However in the example above, Thread 2 never touches variable x, and Thread 1/Thread 3 do not touch the same atomic variable. It's obvious that if Thread 2 were to load x, it would see 1, but is that state guaranteed to cascade over into other threads which subsequently do an acquire/release sync with Thread 2? Or does Thread 3 also need to do an acquire on variable a in order to receive Thread 1's write to x?</p>
<p>According to <a href="https://en.cppreference.com/w/cpp/atomic/memory_order" rel="nofollow noreferrer">https://en.cppreference.com/w/cpp/atomic/memory_order</a>:</p>
<blockquote>
<p>All writes in the current thread are visible in other threads that acquire the same atomic variable</p>
</blockquote>
<blockquote>
<p>All writes in other threads that release the same atomic variable are visible in the current thread</p>
</blockquote>
<p>Since Thread 1 and Thread 3 don't touch the same atomic variable, I'm not sure if acquire/release alone is enough for the above case. There's probably an answer hiding in the formal description, but I can't quite work it out.</p>
<p>*EDIT: Didn't notice until after the fact, but there is an example at the link I posted ("The following example demonstrates transitive release-acquire ordering...") that is almost the same as my example, but it uses the same atomic variable across all three threads, which seems like it might be significant. I am specifically asking about the case where the variables are not the same.</p>
<hr />
<ol start="2">
<li>Am I right in believing that according to the standard, there must always be a pair of non-relaxed atomic operations, one in each thread, in order for any kind of memory ordering at all to be guaranteed?</li>
</ol>
<p>Imagine there is a function "get_data" that allocates a buffer, writes some data to it, and returns a pointer to the buffer. And there is a function "use_data" that takes the pointer to the buffer and does something with the data. Thread 1 gets a buffer from get_data and passes it to Thread 2 using a relaxed atomic store to a global atomic pointer. Thread 2 does relaxed atomic loads in a loop until it gets the pointer, and then passes it off to use_data:</p>
<pre><code>int* get_data() {...}
void use_data(int* buf) {...}
int* global_ptr = nullptr;
[Thread 1]
int* buf = get_data();
super_duper_memory_fence();
store_relaxed(global_ptr, buf);
[Thread 2]
int* buf = nullptr;
while ((buf = load_relaxed(global_ptr)) == nullptr);
use_data(buf);
</code></pre>
<p>Is there any kind of operation at all that can be put in "super_duper_memory_fence", that will guarantee that by the time use_data gets the pointer, the data in the buffer is also visible? It is my understanding that there is not a portable way to do this, and that Thread 2 must have a matching fence or other atomic operation in order to guarantee that it receives the writes made into the buffer and not just the pointer value. Is this correct?</p> | 2020-07-08 01:47:04.823000+00:00 | 2020-07-08 21:15:16.807000+00:00 | 2020-07-08 21:15:16.807000+00:00 | multithreading|c++11|c11|memory-barriers|abstract-machine | ['https://arxiv.org/abs/1803.04432'] | 1 |
72,104,563 | <p>Hey first of all thank you for linking my question, I will do my best to clarify it :)</p>
<p>First of all, there is no big difference between pre-training and fine-tuning. The only difference is that in pre-training you train your model from scratch; in other words, you initialize the weights to some initial value (it can be random or zero). In fine-tuning, however, you load a pre-trained model and then train it again for a downstream task, so basically you are initializing the weights from a pre-trained model. Therefore you can use the knowledge that is captured by the pre-trained model.</p>
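<p>As a minimal sketch of that difference (the checkpoint name and label count below are placeholders):</p>
<pre><code>from transformers import BertConfig, BertForMaskedLM, BertForSequenceClassification

# Pre-training from scratch: the weights start from a random initialization
config = BertConfig()
scratch_model = BertForMaskedLM(config)

# Fine-tuning: the weights start from a pre-trained checkpoint,
# and only the new task-specific head is randomly initialized
finetuned_model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
</code></pre>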
<p>Let's try to understand the fine-tuning and pre-training architectures.
The following diagram shows an overview of the pre-training architecture.</p>
<p><a href="https://i.stack.imgur.com/d7n8T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d7n8T.png" alt="enter image description here" /></a></p>
<p>When you fine-tune a BERT model, you change the task-specific area and labels. When you change the task-specific area, you change the overall architecture: you replace the heads. This change also affects the naming of the model that you are using in Transformers. For example, BertForPreTraining uses both the MLM and NSP heads at the same time, whereas BertForSequenceClassification uses a linear layer as a head, just like in NSP. But what they have in common is that they wrap the BERT model. So we just change the "task-specific" area (architecture). This is what the authors meant by stating "There is minimal difference between the pre-trained architecture and the final downstream architecture." in the <a href="https://arxiv.org/abs/1810.04805" rel="nofollow noreferrer">BERT paper</a>.</p>
<p>If you want to change BERT's pre-training task, you must change the architecture of the task-specific area and the input labels, just like I did in "<a href="https://ieeexplore.ieee.org/document/9598954" rel="nofollow noreferrer">Same Sentence Prediction: A new Pre-training Task for BERT</a>" (<a href="https://github.com/kaansonmezoz/bert-same-sentence-prediction" rel="nofollow noreferrer">Github Repo</a>). The same goes for the fine-tuning process. If you want to customize the fine-tuning architecture for a downstream task, all you need to do is change the architecture of the task-specific area.</p>
<p>So it doesn't matter whether you use Trainer for pre-training or fine-tuning. Trainer basically updates the weights of the model according to the training loss. If you use pre-trained BERT with downstream task-specific heads, it will update the weights in both the BERT model and the task-specific heads (unless you tell it otherwise by freezing the weights of the BERT model). If you use an untrained BERT model with task-specific heads, it will also update the weights.</p> | 2022-05-03 19:29:18.067000+00:00 | 2022-05-03 19:29:18.067000+00:00 | null | null | 69,126,923 | <p>I found an answer about training a model from scratch in this question:
<a href="https://stackoverflow.com/questions/65646925/how-to-train-bert-from-scratch-on-a-new-domain-for-both-mlm-and-nsp">How to train BERT from scratch on a new domain for both MLM and NSP?</a></p>
<p>One answer uses Trainer and TrainingArguments like this:</p>
<pre><code>from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="/path/to/output/dir/for/training/arguments",
overwrite_output_dir=True,
num_train_epochs=2,
per_gpu_train_batch_size= 16,
save_steps=10_000,
save_total_limit=2,
prediction_loss_only=True,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
)
trainer.train()
trainer.save_model("path/to/your/model")
</code></pre>
<p>but the Hugging Face official doc <a href="https://huggingface.co/transformers/training.html" rel="nofollow noreferrer">Fine-tuning a pretrained model</a> also uses Trainer and TrainingArguments in the same way to fine-tune.
So when I use Trainer and TrainingArguments to train a model, do I train the model from scratch or just fine-tune?</p>
55,631,240 | <p>There is no easy way to do it. <a href="https://arxiv.org/pdf/1607.04606.pdf" rel="nofollow noreferrer">The FastText algorithm</a> uses character-level information, so it can infer embeddings for unseen words. This is what the FastText paper says about representing the words:</p>
<p><a href="https://i.stack.imgur.com/5Yn7Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Yn7Z.png" alt="enter image description here"></a></p>
<p>However, this makes sense only in the case of words where you can infer what they mean from knowing the parts. E.g., if you had a reliable embedding for "walk", but not for "walking" and there were plenty of words ending with "ing", FastText would be able to infer the embedding. But this obviously cannot work with words like "Microsoft".</p>
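<p>A quick sketch of this behaviour with Gensim (the toy corpus and the hyperparameters are just placeholders):</p>
<pre><code>from gensim.models import FastText

# toy corpus; in practice, train on domain text that contains your target terms
sentences = [["windows", "is", "an", "operating", "system"],
             ["ubuntu", "is", "an", "operating", "system"]]
model = FastText(sentences, vector_size=32, min_count=1, epochs=50)

# FastText composes OOV vectors from character n-grams, so even an
# unseen inflection of a known word still gets a vector:
vec = model.wv["operatings"]  # works despite not being in the corpus
</code></pre>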
<p>The best thing you can do is train your embeddings on data that contains the words you want the model to work with, from a genre as similar as possible. If your text is in English, it should not be too difficult.</p> | 2019-04-11 11:17:20.507000+00:00 | 2019-04-11 11:17:20.507000+00:00 | null | null | 55,629,368 | <p>I am using a word embeddings model (FastText via the Gensim library) to expand the terms of a search.
So, basically, if the user writes "operating system", my goal is to expand that term with very similar terms like "os", "windows", "ubuntu", "software" and so on.</p>
<p>The model works very well, but now the time has come to improve it with "external information"; by "external information" I mean OOV (out-of-vocabulary) terms OR terms that do not have good context.</p>
<p>Following the example I wrote above, when the user writes <strong>operating system</strong> I would like to expand the query with the "general" terms:</p>
<p>Terms built in the FastText model:</p>
<ul>
<li>windows </li>
<li>ubuntu</li>
<li>software</li>
</ul>
<p><strong>AND</strong> </p>
<p>terms that represent (organizations/companies) like "Microsoft", "Apple" so the complete query will be:</p>
<ul>
<li><strong>term</strong>: operating system</li>
<li><strong>query</strong>: operating system, os, software, windows, ios, Microsoft, Apple</li>
</ul>
<p>My problem is that I DO NOT have companies inside the corpus OR, if present, I do not have enough context to "link" Microsoft to "operating system".</p>
<p>For example, if I extract a piece of the corpus I can read "... i have started working at Microsoft in November 2000 with my friend John ...", so, as you can see, I cannot contextualize the word "Microsoft" because I do not have good context, indeed.</p>
<p>A small recap:</p>
<ol>
<li>I have a corpus where the companies (terms) have poor context</li>
<li>I have a big database with companies and the description of what they do.</li>
</ol>
<p>What i need to do:</p>
<p>I would like to include the companies in my FastText model and "manually" set their word context/cloud of related terms.</p>
<p>Ideas?</p> | 2019-04-11 09:40:00.350000+00:00 | 2019-04-11 13:56:53.450000+00:00 | null | python|machine-learning|nlp|gensim | ['https://arxiv.org/pdf/1607.04606.pdf', 'https://i.stack.imgur.com/5Yn7Z.png'] | 2 |
22,094,124 | <p>You are asking for an answer to a very hard problem. The Hough algorithm is not a toy solution, but it is not appropriate for all machine vision circle detection situations. Human eyes are very good at such things (if a bit imprecise). You basically need to know a lot more about your data to approach a robust solution.</p>
<p>Take a look at <a href="https://stackoverflow.com/questions/9860667/writing-robust-color-and-size-invariant-circle-detection-with-opencv-based-on">this discussion about Hough Circle detection</a> as well as this paper <a href="http://www.egr.msu.edu/classes/ece480/capstone/fall10/group03/ece480_dt3_application_note_dembelef.pdf" rel="nofollow noreferrer">Hough Circle Transform</a> for a deeper understanding of the limitations.</p>
<p>You might also want to review this paper on the <a href="http://arxiv.org/ftp/arxiv/papers/1106/1106.0962.pdf" rel="nofollow noreferrer">ant system</a> for ideas on a different approach.</p>
<p>You also might want to read up on <a href="http://homepages.inf.ed.ac.uk/rbf/HIPR2/thin.htm" rel="nofollow noreferrer">morphological thinning</a> as a possible preprocessing step before Hough.</p>
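<p>For reference, a minimal sketch of the standard Hough pipeline in OpenCV's Python API (all parameter values are assumptions and need tuning per image):</p>
<pre><code>import cv2

image = cv2.imread("circles.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# smoothing reduces false detections caused by noise
gray = cv2.GaussianBlur(gray, (5, 5), 0)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100,   # upper Canny edge threshold
                           param2=30,    # accumulator threshold (lower = more circles)
                           minRadius=10, maxRadius=100)
# circles is None or an array of (x, y, radius) candidates
</code></pre>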
<p>Best of luck</p> | 2014-02-28 11:37:35.103000+00:00 | 2014-02-28 11:37:35.103000+00:00 | 2017-05-23 12:10:56.470000+00:00 | null | 22,093,331 | <p>My problem is the following: I need to precisely measure the diameters of circles in a bitmap.
I have a bitmap with several circles. Some of them are concentric. I need the values of their diameters.
I tried OpenCV and EmguCV and their method HoughCircles. But this method finds circles in places where there are no circles (I tried a lot of combinations of input parameters). And when it does find them, it never finds exactly the same circle as in the bitmap: their centers and diameters are different from the circles in the original picture. So this method is only for some kind of game, not for my purpose (precise measuring for industry).</p>
<p>Do you know some way or algorithm to do it? (I prefer C#, but if it is in pseudocode or a different language, I will rewrite it.)</p>
<p>Thanks in advance.</p> | 2014-02-28 11:03:45.343000+00:00 | 2014-02-28 11:37:35.103000+00:00 | null | c#|opencv|image-processing|pattern-matching|emgucv | ['https://stackoverflow.com/questions/9860667/writing-robust-color-and-size-invariant-circle-detection-with-opencv-based-on', 'http://www.egr.msu.edu/classes/ece480/capstone/fall10/group03/ece480_dt3_application_note_dembelef.pdf', 'http://arxiv.org/ftp/arxiv/papers/1106/1106.0962.pdf', 'http://homepages.inf.ed.ac.uk/rbf/HIPR2/thin.htm'] | 4 |
45,028,228 | <p>There are many possibilities why something like this occurs:</p>
<ol>
<li><p><strong>Your parameters trajectory changed its basin of attraction</strong> - this means that your system left a stable trajectory and switched to another one. This was probably due to randomization like e.g. batch sampling or dropout.</p></li>
<li><p><strong>LSTM instability</strong>- <strong>LSTM</strong>s are believed to be extremely unstable in terms of training. It was also reported that very often it's really time consuming for them to stabilize.</p></li>
</ol>
<p>Due to the latest research (e.g. from <a href="https://arxiv.org/pdf/1706.10239.pdf" rel="noreferrer">here</a>) I would recommend decreasing the batch size and training for more epochs. I would also check whether the topology of the network is too complex (or too plain) in terms of the number of patterns it needs to learn. I would also try switching to either <code>GRU</code> or <code>SimpleRNN</code>.</p> | 2017-07-11 07:36:37.673000+00:00 | 2017-07-11 07:36:37.673000+00:00 | null | null | 45,027,234 | <p>I'm trying to train an LSTM for a binary classification problem. When I plot the <code>loss</code> curve after training, there are strange peaks in it. Here are some examples:</p>
<p><a href="https://i.stack.imgur.com/oxBpf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxBpf.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/9cvie.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9cvie.png" alt="enter image description here" /></a></p>
<p>Here is the basic code</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, recurrent
from keras.callbacks import EarlyStopping, ModelCheckpoint
from numpy import newaxis
import matplotlib.pyplot as plt

model = Sequential()
model.add(recurrent.LSTM(128, input_shape = (columnCount,1), return_sequences=True))
model.add(Dropout(0.5))
model.add(recurrent.LSTM(128, return_sequences=False))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
new_train = X_train[..., newaxis]
history = model.fit(new_train, y_train, nb_epoch=500, batch_size=100,
callbacks = [EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=2, verbose=0, mode='auto'),
ModelCheckpoint(filepath="model.h5", verbose=0, save_best_only=True)],
validation_split=0.1)
# list all data in history
print(history.history.keys())
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
</code></pre>
<p>I don't understand why these peaks occur. Any ideas?</p>
46,681,533 | <p>You could have a look at <code>Omniscient Debugging</code>, which its author Bil Lewis refers to as the ODB (<a href="http://arxiv.org/abs/cs/0310016" rel="nofollow noreferrer">paper</a>). The ODB is an implementation of the <strong>offline log</strong> and <strong>replay</strong> approach that makes reverse debugging in <strong>Java</strong> possible.</p> | 2017-10-11 06:47:25.443000+00:00 | 2017-10-11 06:47:25.443000+00:00 | null | null | 14,159,806 | <p>Recently, I started to use <a href="http://sourceware.org/gdb/wiki/ReverseDebug" rel="nofollow noreferrer">reverse debugging with gdb</a> in C++ and it works pretty well for certain types of problems (e.g., loops and recursive algorithms). Besides gdb, there are other commercial debuggers for C/C++ (e.g., <a href="http://undo-software.com/product/undodb-overview" rel="nofollow noreferrer">UndoDB</a>).</p>
<p>I wonder if there are good reversible debuggers for other languages?
I'm especially interested in Java and Ruby but the question is open to any language.</p>
<p>An alternative that I found is to run your application on a virtual machine and connect to it. The only implementation that I know of (but never tested) is <a href="http://www.replaydebugging.com/2008/08/vmware-workstation-65-reverse-and." rel="nofollow noreferrer">VMware's Replay Debugging</a>. I wonder for which types of debugging tasks it is suitable. Seems to be overkill for most common problems but it may be useful to debug communication or synchronization issues, which are generally hard to reproduce.</p>
<p>Background information:</p>
<ul>
<li>The term "Reverse Debugging" is used by gdb. However, there are many synonyms:
<ul>
<li>"Reversible Debugging"</li>
<li>Microsoft calls it IntelliTrace or "Historical Debugging"</li>
<li>In Java, such debuggers are sometimes known as an "Omniscient Debugger"</li>
</ul></li>
<li>(on programmers.stackexchange) <a href="https://softwareengineering.stackexchange.com/questions/181527/why-is-reverse-debugging-rarely-used">Why is reverse debugging rarely used?</a></li>
</ul> | 2013-01-04 15:16:29.793000+00:00 | 2017-10-11 06:47:25.443000+00:00 | 2017-04-12 07:31:17.220000+00:00 | java|ruby|debugging | ['http://arxiv.org/abs/cs/0310016'] | 1 |
29,900,094 | <p>You can find one of the oldest and most elaborate implementations of attributed variables in <a href="http://eclipseclp.org">ECLiPSe</a>, where it forms part of the wider infrastructure for implementing constraint solvers.</p>
<p>The main characteristics of this design are:</p>
<ul>
<li>attributes must be declared, and in return the compiler supports efficient access</li>
<li>a syntax for attributed variables, so that they can be read and written</li>
<li>a more complete set of handlers for attribute operations, so that attributes are not only taken into account for unification, but also for other generic operations such as term copying and subsumption tests</li>
<li>a clear separation between the concepts of variable attribute and suspended goals</li>
<li>used in over a dozen of ECLiPSe's libraries</li>
</ul>
<p><a href="http://arxiv.org/abs/1012.4240">This paper (section 4)</a> and the ECLiPSe documentation have more details.</p> | 2015-04-27 15:37:03.760000+00:00 | 2015-04-27 15:37:03.760000+00:00 | null | null | 29,776,832 | <p>When I was skimming some <a href="/questions/tagged/prolog" class="post-tag" title="show questions tagged 'prolog'" rel="tag">prolog</a> related questions recently, I stumbled upon <a href="https://stackoverflow.com/questions/10096124/how-to-represent-directed-cyclic-graph-in-prolog-with-direct-access-to-neighbour/10101483#10101483">this answer by @mat</a> to question <a href="https://stackoverflow.com/questions/10096124/how-to-represent-directed-cyclic-graph-in-prolog-with-direct-access-to-neighbour">How to represent directed cyclic graph in Prolog with direct access to neighbour verticies</a> .</p>
<p>So far, my personal experience with attributed variables in Prolog has been very limited. But the use-case given by @mat sparked my interest. So I tried using it for answering another question, <a href="https://stackoverflow.com/questions/6214266/ordering-lists-with-constraint-logic-programming/29657722#29657722">ordering lists with constraint logic programming</a>.</p>
<p><em>First, the good news:</em> My first use of attributed variables worked out like I wanted it to.</p>
<p><em>Then, the not so good news:</em> When I had posted by answer, I realized there were several API's and implementations for attributed variables in Prolog.</p>
<p>I feel I'm over my head here... In particular I want to know the following:</p>
<ul>
<li>What API's are in wide-spread use? Up to now, I found two: SICStus and SWI.</li>
<li>Which features do the different attributed variable implementations offer? The same ones? Or does one subsume the other?</li>
<li>Are there differences in semantics?</li>
<li>What about the actual implementation? Are some more efficient than others?</li>
<li>Can be (or is) using attributed variables a portability issue?</li>
</ul>
<p>Lots of question marks, here... Please share your experience / stance?
Thank you in advance!</p>
<hr>
<h3>Edit 2015-04-22</h3>
<p>Here's a code snippet of the <a href="https://stackoverflow.com/questions/6214266/ordering-lists-with-constraint-logic-programming/29657722#29657722">answer</a> mentioned above:</p>
<pre><code>init_att_var(X,Z) :-
put_attr(Z,value,X).
get_att_value(Var,Value) :-
get_attr(Var,value,Value).
</code></pre>
<p>So far I "only" use <a href="http://www.swi-prolog.org/pldoc/man?section=attvar" rel="nofollow noreferrer"><code>put_attr/3</code> and <code>get_attr/3</code></a>, but---according to the SICStus Prolog documentation on attributed variables---SICStus offers <a href="https://sicstus.sics.se/sicstus/docs/latest4/html/sicstus.html/lib_002datts.html" rel="nofollow noreferrer"><code>put_attr/2</code> and <code>get_attr/2</code></a>.</p>
<p>So even <em>this very shallow use-case</em> <em>requires</em> some <em>emulation</em> layer (one way or the other).</p> | 2015-04-21 15:37:16.497000+00:00 | 2016-03-05 19:30:48.907000+00:00 | 2017-05-23 11:45:03.560000+00:00 | prolog | ['http://eclipseclp.org', 'http://arxiv.org/abs/1012.4240'] | 2 |
60,644,446 | <p>In addition to what has been said, I want to add the following:</p>
<ul>
<li>For object tracking, an essential part of dealing with occlusions is writing an efficient cost function that is able to discriminate between the occluded object and the object that is occluding it. If the cost function is not good, the object instances (ids) may swap and the object will be incorrectly tracked. There are numerous ways in which cost functions can be written: some methods use CNNs<a href="https://arxiv.org/pdf/1703.07402.pdf" rel="noreferrer">[1]</a>, while some prefer to have more control and aggregate features<a href="https://www.mdpi.com/1424-8220/20/4/1110" rel="noreferrer">[2]</a>. The disadvantage of CNN models is that if you are tracking objects that are in the training set in the presence of objects which are not in the training set, and the first ones get occluded, the tracker can latch onto the wrong object and may never recover. Here is a <a href="https://youtu.be/MMQzQW1Y4h0?t=94" rel="noreferrer">video</a> showing this. The disadvantage of aggregate features is that you have to manually engineer the cost function, and this can take time and sometimes knowledge of advanced mathematics.</li>
<li><p>In the case of dense stereo vision reconstruction, occlusion happens when a region is seen by the left camera and not seen by the right (or vice versa). In the disparity map this occluded region appears black (because the corresponding pixels in that region have no equivalent in the other image). Some techniques use so-called background filling algorithms, which fill the occluded black region with pixels coming from the background. Other reconstruction methods simply leave those pixels without values in the disparity map, because the pixels coming from the background filling method may be incorrect in those regions. Below you can see the 3D projected points obtained using a dense stereo method. The points were rotated a bit to the right (in 3D space). In the presented scenario, the occluded values in the disparity map are left unreconstructed (black), and for this reason we see the black "shadow" behind the person in the 3D image.</p></li>
<p><a href="https://i.stack.imgur.com/oVoKk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oVoKk.png" alt="enter image description here"></a></p></li>
</ul> | 2020-03-11 21:06:54.020000+00:00 | 2020-03-11 21:06:54.020000+00:00 | null | null | 2,764,238 | <p>I'm developing an image processing project and I come across the word <strong><em>occlusion</em></strong> in many scientific papers, what do occlusions mean in the context of image processing? The dictionary is only giving a general definition. Can anyone describe them using an image as a context?</p> | 2010-05-04 09:47:10.407000+00:00 | 2020-03-11 21:06:54.020000+00:00 | 2018-12-12 08:07:35.120000+00:00 | image-processing|computer-vision|object-detection|imaging | ['https://arxiv.org/pdf/1703.07402.pdf', 'https://www.mdpi.com/1424-8220/20/4/1110', 'https://youtu.be/MMQzQW1Y4h0?t=94', 'https://i.stack.imgur.com/oVoKk.png'] | 4 |
56,878,909 | <p>As of now, it is not completely clear in theory what the best way to initialize the weights of a neural network is. As you have mentioned, the loss surface is highly non-convex and different things can happen based on the initialization.</p>
<p>Current popular and empirically verified techniques for initialization include the Glorot initialization (<a href="http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf?hc_location=ufi" rel="nofollow noreferrer">http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf?hc_location=ufi</a>) or the He initialization (<a href="https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf" rel="nofollow noreferrer">https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf</a>) which aim to stabilize the training process.</p>
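<p>For example, in PyTorch each of these schemes is a single call (the layer sizes below are placeholders):</p>
<pre><code>import torch.nn as nn

layer = nn.Linear(256, 128)

# Glorot / Xavier initialization
nn.init.xavier_uniform_(layer.weight)

# He initialization, intended for ReLU networks
nn.init.kaiming_normal_(layer.weight, nonlinearity="relu")
</code></pre>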
<p>There are some new theoretical guarantees on the well-behavedness of the network with a certain random initialization: <a href="https://arxiv.org/abs/1901.03611" rel="nofollow noreferrer">https://arxiv.org/abs/1901.03611</a></p>
<p>Touching a bit more closely on your question, it has recently been shown that some networks, if trained via SGD, will converge to a nearby global optimum of the loss surface: <a href="https://arxiv.org/abs/1902.04674" rel="nofollow noreferrer">https://arxiv.org/abs/1902.04674</a></p>
<p>To conclude, there is no universally accepted answer as to what the best initialization for deep neural networks is; however, there are empirically verified 'good' initializations and some recent theoretical results, and this is currently a very active field of research.</p> | 2019-07-03 23:07:24.500000+00:00 | 2019-07-03 23:07:24.500000+00:00 | null | null | 56,873,752 | <p>Is it possible to determine the best starting point for the gradient descent optimization algorithm regarding neural networks?</p>
<p>For example, looking at an example loss surface containing local AND global minima in the link below, it is clear (1) that some starting points are better than others in the sense that the global optimum would be reached faster than from other starting points, (2) that some starting points will cause descent into LOCAL, rather than GLOBAL optima and (3) that some starting points will probably never converge at all.</p>
<p><a href="https://www.researchgate.net/profile/Klaus_Raizer/publication/278036660/figure/fig7/AS:294224927969287@1447160097730/Error-surface-in-the-weigth-space-for-two-weights.png" rel="nofollow noreferrer">https://www.researchgate.net/profile/Klaus_Raizer/publication/278036660/figure/fig7/AS:294224927969287@1447160097730/Error-surface-in-the-weigth-space-for-two-weights.png</a></p>
<p>Thanks in advance for any contributions :) </p> | 2019-07-03 15:42:50.040000+00:00 | 2019-07-03 23:07:24.500000+00:00 | null | optimization|deep-learning|conv-neural-network|gradient-descent|convergence | ['http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf?hc_location=ufi', 'https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf', 'https://arxiv.org/abs/1901.03611', 'https://arxiv.org/abs/1902.04674'] | 4 |
64,989,992 | <blockquote>
<p>I wonder whether I should take <code>log</code> of <code>attn_weights</code> before passing it into <code>gumbel_softmax</code>?</p>
</blockquote>
<p>If <code>attn_weights</code> are probabilities (sum to 1; e.g., output of a softmax), then yes. Otherwise, no.</p>
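<p>A minimal sketch (the small epsilon guarding <code>log(0)</code> is my own addition):</p>
<pre><code>import torch
import torch.nn.functional as F

attn_weights = torch.softmax(torch.randn(1, 5), dim=-1)  # probabilities over 5 keys
logits = torch.log(attn_weights + 1e-10)                  # unnormalized log probabilities
one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)    # differentiable one-hot pick
</code></pre>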
<blockquote>
<p>I wonder how to choose <code>tau</code> in <code>gumbel_softmax</code>, any guidelines?</p>
</blockquote>
<p>Usually, it requires tuning. The references provided in the docs can help you with that.</p>
<p>From <a href="https://arxiv.org/pdf/1611.01144.pdf" rel="nofollow noreferrer">Categorical Reparameterization with Gumbel-Softmax</a>:</p>
<ul>
<li><p>Figure 1, caption:</p>
<blockquote>
<p>... (a) For low temperatures (τ = 0.1, τ = 0.5), the expected value of a Gumbel-Softmax random variable approaches the expected value of a categorical random variable with the same logits. As the temperature increases (τ = 1.0, τ = 10.0), the expected value converges to a uniform distribution over the categories.</p>
</blockquote>
</li>
<li><p>Section 2.2, 2nd paragraph (emphasis mine):</p>
<blockquote>
<p>While Gumbel-Softmax samples are differentiable, they are not identical to samples from the corresponding categorical distribution for non-zero temperature. For learning, <strong>there is a tradeoff between small temperatures</strong>, where samples are close to one-hot but the variance of the gradients is large, <strong>and large temperatures</strong>, where samples are smooth but the variance of the gradients is small (Figure 1). <strong>In practice, we start at a high temperature and anneal to a small but non-zero temperature</strong>.</p>
</blockquote>
</li>
<li><p>Lastly, they remind the reader that tau can be learned:</p>
<blockquote>
<p>If τ is a learned parameter (rather than annealed via a fixed
schedule), this scheme can be interpreted as entropy regularization (Szegedy et al., 2015; Pereyra et al., 2016), where the Gumbel-Softmax distribution can adaptively adjust the "confidence" of proposed samples during the training process.</p>
</blockquote>
</li>
</ul> | 2020-11-24 15:50:24.753000+00:00 | 2020-11-24 15:50:24.753000+00:00 | null | null | 64,980,330 | <p>Say I have a tensor named <code>attn_weights</code> of size [1,a], entries of which indicate the attention weights between the given query and |a| keys. I want to select the largest one using <code>torch.nn.functional.gumbel_softmax</code>.</p>
<p>I find that the <a href="https://pytorch.org/docs/stable/nn.functional.html?highlight=gumbel%20softmax#torch.nn.functional.gumbel_softmax" rel="nofollow noreferrer">docs about this function</a> describe the parameter as <em><strong>logits</strong></em> <em>- […, num_features] unnormalized log probabilities</em>. I wonder <strong>whether I should take the <code>log</code> of <code>attn_weights</code> before passing it into <code>gumbel_softmax</code>?</strong> And I find that Wikipedia defines <code>logit = log(p/(1-p))</code>, which is different from a bare logarithm. I wonder which one I should pass to the function?</p>
<p>Further, I wonder <strong>how to choose <code>tau</code></strong> in <code>gumbel_softmax</code>; any guidelines?</p>
31,492,801 | <p>It totally depends on the problem you try to model. The more layers you have, the harder it is to train the network (more computation power is needed). The deeper the network is, however, the more complex the problems it can solve.</p>
<p>Geoffrey Hinton wrote in his <a href="http://www.cs.toronto.edu/~hinton/nipstutorial/nipstut3.pdf" rel="noreferrer">tutorial</a>:</p>
<blockquote>
<p>How many lines of code should an AI program use and how long should
each line be? – This is obviously a silly question.</p>
<p>• Deep belief nets
give the creator a lot of freedom. </p>
<p>– How best to make use of that
freedom depends on the task. </p>
<p>– With enough narrow layers we can model
any distribution over binary vectors (Sutskever & Hinton, 2007) </p>
<p>• If freedom scares you, stick to convex optimization of shallow models
that are obviously inadequate for doing Artificial Intelligence.</p>
</blockquote>
<p>From what I know, the number of layers is usually not really big. <a href="http://arxiv.org/pdf/1409.4842.pdf" rel="noreferrer">Here</a> (<em>ImageNet Large-Scale Visual Recognition Challenge 2014</em>), e.g., the Google team used a net with 22 layers.</p> | 2015-07-18 15:40:47.070000+00:00 | 2015-07-18 15:40:47.070000+00:00 | null | null | 31,488,326 | <p>Can anybody tell me how many layers a Deep Neural Network "usually" has? How deep is deep enough?</p>
<p>To my knowledge, it is still difficult to give a specific number of hidden layers. But can anyone tell me, with some examples, how many hidden layers researchers and developers use in their deep learning projects?</p>
<p>Many thanks.</p> | 2015-07-18 06:06:26.867000+00:00 | 2015-07-18 15:40:47.070000+00:00 | null | machine-learning|artificial-intelligence|neural-network|deep-learning | ['http://www.cs.toronto.edu/~hinton/nipstutorial/nipstut3.pdf', 'http://arxiv.org/pdf/1409.4842.pdf'] | 2 |
61,803,365 | <p>The place where you called <code>zero_grad</code> is wrong. With your code, during each epoch the gradient is added to the previous one and then backpropagated. This makes the loss oscillate as it gets closer to the solution, but the previous gradient throws it off again.</p>
<p>Code below will easily perform the task:</p>
<pre><code>import torch
from torch.optim import Adam

# SimpleNet is the model class defined in the question;
# the EPOCHS value is an assumption (~1500 suffice for the threshold below)
EPOCHS = 20000
X = torch.randn(1000,1,1)
net = SimpleNet()
optimizer = Adam(params=net.parameters())
for epoch in range(EPOCHS):
optimizer.zero_grad() # zero the gradient buffers
loss = torch.mean((net.forward(X) - X) ** 2)
if loss < 1e-8:
print(epoch, loss)
break
loss.backward()
optimizer.step()
</code></pre>
<blockquote>
<p>1) Is it possible to achieve an arbitrarily small error (loss) purely by
means of some Pytorch optimizer?</p>
</blockquote>
<p>Yes, the precision above is reached in around ~1500 epochs; you can go lower, down to machine (float, in this case) precision.</p>
<blockquote>
<p>2) I am wondering how the optimizers can work well in complex
examples, when they do not work well even in this extremely simple
example.</p>
</blockquote>
<p>Currently, we don't have anything better (at least widespread) for network optimization than first-order methods. Those are used because it's much faster to calculate gradients than the Hessians needed for higher-order methods. And complex, non-convex functions may have a lot of minima which kinda fulfill the task we threw at them; there is no need for global minima per se (although they may be reached under some conditions, see <a href="https://arxiv.org/abs/1811.03804" rel="nofollow noreferrer">this paper</a>).</p> | 2020-05-14 17:12:51.977000+00:00 | 2020-05-14 17:12:51.977000+00:00 | null | null | 61,801,066 | <p>Consider a simple line fitting <code>a * x + b = x</code>, where <code>a</code>, <code>b</code> are the optimized parameters and <code>x</code> is the observed vector given by</p>
<pre><code>import torch
X = torch.randn(1000,1,1)
</code></pre>
<p>One can immediately see that the exact solution is <code>a=1</code>, <code>b=0</code> for any <code>x</code> and it can be found as easily as:</p>
<pre><code>import numpy as np
np.polyfit(X.numpy().flatten(), X.numpy().flatten(), 1)
</code></pre>
<p>I am trying now to find this solution by means of gradient descent in PyTorch, where the mean square error is used as an optimization criterion. </p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
from torch.optim import Adam, SGD, Adagrad, ASGD
X = torch.randn(1000,1,1) # Sample data
class SimpleNet(nn.Module): # Trivial neural network containing two weights
def __init__(self):
super(SimpleNet, self).__init__()
self.f1 = nn.Linear(1,1)
def forward(self, x):
x = self.f1(x)
return x
# Testing default setting of 3 basic optimizers
K = 500
net = SimpleNet()
optimizer = Adam(params=net.parameters())
Adam_losses = []
optimizer.zero_grad() # zero the gradient buffers
for k in range(K):
for b in range(1): # single batch
loss = torch.mean((net.forward(X[b,:,:]) - X[b,:, :])**2)
loss.backward()
optimizer.step()
Adam_losses.append(float(loss.detach()))
net = SimpleNet()
optimizer = SGD(params=net.parameters(), lr=0.0001)
SGD_losses = []
optimizer.zero_grad() # zero the gradient buffers
for k in range(K):
for b in range(1): # single batch
loss = torch.mean((net.forward(X[b,:,:]) - X[b,:, :])**2)
loss.backward()
optimizer.step()
SGD_losses.append(float(loss.detach()))
net = SimpleNet()
optimizer = Adagrad(params=net.parameters())
Adagrad_losses = []
optimizer.zero_grad() # zero the gradient buffers
for k in range(K):
for b in range(1): # single batch
loss = torch.mean((net.forward(X[b,:,:]) - X[b,:, :])**2)
loss.backward()
optimizer.step()
Adagrad_losses.append(float(loss.detach()))
</code></pre>
<p>The training progress in terms of loss evolution can be shown as
<a href="https://i.stack.imgur.com/rn9W8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rn9W8.png" alt="Convergence process of 3 optimizer algorithms"></a></p>
<p>What is surprising to me is the very slow convergence of the algorithms in their default settings. I thus have 2 questions:</p>
<p>1) Is it possible to achieve an arbitrarily small error (loss) purely by means of some PyTorch optimizer? Since the loss function is convex, it should definitely be possible; however, I am not able to figure out how to achieve this using PyTorch. Note that the above 3 optimizers cannot do that - see the loss progress in log scale for 20000 iterations:
<a href="https://i.stack.imgur.com/dYMFP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dYMFP.png" alt="Log-scale plot for 20000 training iterations"></a></p>
<p>2) I am wondering how the optimizers can work well in complex examples, when they do not work well even in this extremely simple example. Or (and that is the second question) is there something wrong in their application above that I missed?</p> | 2020-05-14 15:19:23.393000+00:00 | 2020-05-14 17:12:51.977000+00:00 | null | optimization|pytorch|gradient-descent|convergence | ['https://arxiv.org/abs/1811.03804'] | 1
48,253,884 | <blockquote>
<p>I'm trying to expand my neural network to also classify letters in license plates. But I'm worried that if I just simply add more classes to the output, for example adding 10 letters for a total of 20 classes, it would be hard for the neural network to separate features for each class.</p>
</blockquote>
<p>You're far from where it becomes problematic. ImageNet has 1000 classes and is commonly done in a single network. See the <a href="https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf" rel="nofollow noreferrer">AlexNet paper</a>. If you want to learn more about CNNs, have a look at <a href="https://arxiv.org/pdf/1707.09725.pdf" rel="nofollow noreferrer">chapter 2 of "Analysis and Optimization of
Convolutional Neural Network Architectures"</a>. And while you're at it, see chapter 4 for hierarchical classification. You can read the summary for ... well, a summary of it.</p> | 2018-01-14 20:31:41.520000+00:00 | 2018-01-14 20:31:41.520000+00:00 | null | null | 48,237,696 | <p>I'm quite new to neural networks and I recently built a neural network for number classification in vehicle license plates. It has 3 layers: 1 input layer for a 16*24 (382 neurons) number image at 150 dpi, 1 hidden layer (199 neurons) with sigmoid activation function, and 1 softmax output layer (10 neurons) for each number 0 to 9.</p>
<p>I'm trying to expand my neural network to also classify letters in license plates. But I'm worried that if I just simply add more classes to the output, for example adding 10 letters for a total of 20 classes, it would be hard for the neural network to separate features for each class. And also, I think it might cause a problem when the input is a number and the neural network wrongly classifies it as a letter with the biggest probability, even though the sum of the probabilities of all number outputs exceeds it.</p>
<p>So I wonder if it is possible to build a hierarchical neural network in the following manner:</p>
<p>There are 3 neural networks: 'Item', 'Number', 'Letter'</p>
<ol>
<li><p>'Item' neural network classifies whether input is numbers or letters.</p></li>
<li><p>If 'Item' neural network classifies input as numbers(letters), then input goes through 'Number'('Letter') neural network.</p></li>
<li><p>Return final output from Number(Letter) neural network.</p></li>
</ol>
<p>And the learning mechanism for each network is below:</p>
<ol>
<li>'Item' neural network learns all images of numbers and letters. So there are 2 output.</li>
<li>'Number'('Letter') neural network learns images of only numbers(letter).</li>
</ol>
<p>Which method should I pick for better classification? Just simply add 10 more classes, or build hierarchical neural networks with the method above?</p>
55,901,634 | <blockquote>
<p>But the idea I have is nodes that would be bumped into very rarely when someone is transiting between two other arbitrary nodes.</p>
</blockquote>
<p>As @Joel mentioned, there are many centrality measures available, and there are often strong correlations between them such that many of them will probably give you more or less what you want. </p>
<p>That being said, I think the class of centrality measures that most closely reflect your intuition are based on <a href="https://en.wikipedia.org/wiki/Random_walk_closeness_centrality" rel="nofollow noreferrer">random walks</a>. Some of these are pretty costly to compute (although see <a href="https://arxiv.org/pdf/1808.02912.pdf" rel="nofollow noreferrer">this paper</a> for some recent improvements on that front) but luckily there is a strong <a href="https://users.cs.duke.edu/~kamesh/ModernAlg/lec8.pdf" rel="nofollow noreferrer">correspondence between the Eigenvector centrality and the frequency with which nodes are visited by a random walker</a>. </p>
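<p>For instance, a minimal sketch (the random graph is just a stand-in for the actual map; "most remote" = lowest centrality):</p>
<pre><code>import networkx as nx

G = nx.gnm_random_graph(1000, 3000, seed=1)  # placeholder for the TradeWars map
centrality = nx.eigenvector_centrality(G, max_iter=1000)

# nodes rarely visited by a random walker have the lowest scores
most_remote = sorted(centrality, key=centrality.get)[:10]
</code></pre>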
<p>The implementation in networkx is available via <code>networkx.algorithms.centrality.eigenvector_centrality</code>. </p> | 2019-04-29 10:38:38.737000+00:00 | 2019-04-29 10:38:38.737000+00:00 | null | null | 55,888,924 | <p>I mapped out all the edges of a graph in the ancient BBS game 'TradeWars 2002'. There are 1000 nodes. The graph is officially a directed graph, although most edges between nodes are undirected. The graph is strongly connected.</p>
<p>I modelled the universe in <code>networkx</code>. I'd like to use networkx methods to identify the "most remote" nodes in the network. I don't know how to articulate "most remote" in graph theory terminology though. But the idea I have is nodes that would be bumped into very rarely when someone is transiting between two other arbitrary nodes. And the idea is that at the edge of the well-connected nodes, there might be a string of nodes that extend out along a single path that terminates.</p>
<p>A visualization of what I imagine is node 733. It is pretty unlikely someone accidentally stumbles onto that one, compared to other better-connected nodes.</p>
<p>What could I use from the networkx library to quantify some measure of 'remoteness'?</p>
<p><a href="https://i.stack.imgur.com/ZNZPW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZNZPW.png" alt="enter image description here"></a></p>
<p>This is the entire universe:</p>
<p><a href="https://i.stack.imgur.com/FxRbQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FxRbQ.png" alt="enter image description here"></a></p> | 2019-04-28 09:36:48.913000+00:00 | 2019-04-29 10:38:38.737000+00:00 | null | networkx | ['https://en.wikipedia.org/wiki/Random_walk_closeness_centrality', 'https://arxiv.org/pdf/1808.02912.pdf', 'https://users.cs.duke.edu/~kamesh/ModernAlg/lec8.pdf'] | 3 |
60,231,415 | <p>Great question! This comes up in survey design, where you want a few different versions of the survey that each only contain a subset of the questions, but you want every pair (or t-tuple) of questions to have been asked at least once.</p>
<p>This is called <strong>covering design</strong>, and is a variant of the classic <em>set cover problem</em>. As you can read in an excellent <a href="https://math.stackexchange.com/q/1734855/127715">Mathematics Stack Exchange post</a> on the topic, folks use notation C(v, k, t) indicating the minimum number of k-element subsets you need to draw (k=3 in your case) from a v-element set (v=5 in your case) such that every t-element subset in the entire set (t=2 in your case) is contained within one of your selected subsets. Folks have evaluated this function for many different (v, k, t) tuples; see, for instance, <a href="https://ljcr.dmgordon.org/cover/table.html" rel="nofollow noreferrer">https://ljcr.dmgordon.org/cover/table.html</a> . We can read from that table that C(5, 3, 2) = 4, with the following as one possible design:</p>
<pre><code> 1 2 3
1 4 5
2 3 4
2 3 5
</code></pre>
<p>First and foremost, this problem is NP-hard, so all known exact algorithms will scale exponentially in inputs v, k, and t. So while you may be able to solve small instances exactly by enumeration or some more clever exact method (e.g. integer programming), you will likely need to resort to heuristic methods as the problem size gets very large.</p>
<p>One possibility in this direction is lexicographic covering, as proposed in <a href="https://arxiv.org/pdf/math/9502238.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/math/9502238.pdf</a> (you will note that many of the solutions on the site linked above list "lex covering" as the method of construction). Basically you list out all possible k-tuples in lexicographic order:</p>
<pre><code>123
124
125
134
135
145
234
235
245
345
</code></pre>
<p>Then you greedily add the k-tuple that covers the most previously uncovered t-tuples, breaking ties using the lexicographic ordering.</p>
<p>Here's how the algorithm works in our case:</p>
<ol>
<li><p>At the beginning every 3-tuple covers 3 different 2-tuples, so we add <code>123</code> since it is lexicographically earliest.</p></li>
<li><p>After doing this, the 2-tuples of <code>12</code>, <code>13</code>, and <code>23</code> have been covered, while all remaining 2-tuples are not covered. A number of 3-tuples cover 3 more 2-tuples, e.g. <code>145</code> and <code>245</code>. We pick <code>145</code>, since it is lexicographically first, covering <code>14</code>, <code>45</code>, and <code>15</code>.</p></li>
<li><p>Now we have 4 remaining uncovered 2-tuples -- <code>24</code>, <code>25</code>, <code>34</code>, and <code>35</code>. No 3-tuple covers 3 of these, but several cover 2, e.g. <code>234</code> and <code>345</code>. We select <code>234</code> as the lexicographically earliest.</p></li>
<li><p>We have two remaining uncovered 2-tuples -- <code>25</code> and <code>35</code>. We select <code>235</code> as the only 3-tuple that covers both.</p></li>
</ol>
<p>We end up with the exact solution shown above. Importantly, this is just a heuristic method -- it doesn't give any guarantee that 4 is the smallest number of 3-tuples needed to cover all pairs in a set with 5 elements. In this case, a lower bound by Schönheim (a reference is provided in the linked article above) convinces us that, in fact, C(5, 3, 2) cannot be smaller than 4. We conclude that the solution from lexicographic covering is in fact optimal.</p>
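<p>Here is a compact sketch of the greedy procedure described above (a pure-Python brute force over all candidate k-subsets, so only practical for small v):</p>
<pre><code>from itertools import combinations

def lex_cover(v, k, t):
    """Greedy lexicographic covering: pick k-subsets of {1..v} until
    every t-subset is covered. A heuristic -- not guaranteed optimal."""
    uncovered = set(combinations(range(1, v + 1), t))
    candidates = list(combinations(range(1, v + 1), k))  # lexicographic order
    design = []
    while uncovered:
        # max() returns the first maximum, so ties break lexicographically
        best = max(candidates,
                   key=lambda c: len(uncovered & set(combinations(c, t))))
        design.append(best)
        uncovered -= set(combinations(best, t))
    return design

print(lex_cover(5, 3, 2))  # [(1, 2, 3), (1, 4, 5), (2, 3, 4), (2, 3, 5)]
</code></pre>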
<p>You would need a tweak to cover each t-tuple a certain number of times r. One obvious one would just be to repeat each tuple to be covered "r" times, and then run lex covering as usual (so for instance in the first step above each 3-tuple would cover 9 2-tuples with r=3). Of course this remains a heuristic for your overall problem due to the use of lex covering.</p> | 2020-02-14 18:02:59.193000+00:00 | 2020-02-17 17:47:43.343000+00:00 | 2020-02-17 17:47:43.343000+00:00 | null | 60,185,417 | <p>Let's say we have a 5x5 matrix, filled with 0s.</p>
<pre><code>myMatrix <- matrix(rep(0, 25), ncol = 5)
</code></pre>
<p>Now, let's pick a triplet of integers between 1 and 5.</p>
<pre><code>triplet <- c(1,2,3)
</code></pre>
<p>For all combinations of this triplet we now add 1 in the matrix,
with this function:</p>
<pre><code>addCombinationsToMatrix <- function(.matrix, .triplet){
indexesToChange <- as.matrix(expand.grid(.triplet, .triplet))
.matrix[indexesToChange] <- .matrix[indexesToChange] + 1
.matrix
}
</code></pre>
<p>Using the function, we go from</p>
<pre><code>myMatrix
[,1] [,2] [,3] [,4] [,5]
[1,] 0 0 0 0 0
[2,] 0 0 0 0 0
[3,] 0 0 0 0 0
[4,] 0 0 0 0 0
[5,] 0 0 0 0 0
</code></pre>
<p>to</p>
<pre><code>myMatrix <- addCombinationsToMatrix(myMatrix, triplet)
myMatrix
[,1] [,2] [,3] [,4] [,5]
[1,] 1 1 1 0 0
[2,] 1 1 1 0 0
[3,] 1 1 1 0 0
[4,] 0 0 0 0 0
[5,] 0 0 0 0 0
</code></pre>
<p>If we pick another triplet we move on to</p>
<pre><code>nextTriplet <- 2:4
myMatrix <- addCombinationsToMatrix(myMatrix, nextTriplet)
myMatrix
[,1] [,2] [,3] [,4] [,5]
[1,] 1 1 1 0 0
[2,] 1 2 2 1 0
[3,] 1 2 2 1 0
[4,] 0 1 1 1 0
[5,] 0 0 0 0 0
</code></pre>
<p>So, row-column combinations represent how often two integers have been shown
together in a triplet: 3 and 4 have been shown together once, 2 and 3 have
been shown together twice.</p>
<p><strong>Question</strong>: How can one pick triplets, such that
every combination (1-2, 1-3, 1-4...) was picked at least once
and the number of triplets is minimized.</p>
<p>I'm looking for an algorithm here that picks the next triplet.</p>
<p>Ideally it can be extended to</p>
<ul>
<li>arbitrarily big matrices (10x10, 100x100 ...)</li>
<li>arbitrarily big vectors (quadruplets, quintuplets, n-tuplets)</li>
<li>an arbitrary minimum number of times each combination must be picked</li>
</ul>
<p>Example:</p>
<pre><code>myMatrix
myMatrix <- addCombinationsToMatrix(myMatrix, 1:3)
myMatrix
myMatrix <- addCombinationsToMatrix(myMatrix, 3:5)
myMatrix
myMatrix <- addCombinationsToMatrix(myMatrix, c(1,4,5))
myMatrix
myMatrix <- addCombinationsToMatrix(myMatrix, c(2,4,5))
myMatrix
</code></pre>
<hr>
<p><strong>EDIT</strong>:
Just to be sure: the answer doesn't have to be <code>R</code> code. It can be some other language as well, or even pseudocode.</p>
<p><strong>EDIT 2</strong>:
It occurred to me now that there are different ways of measuring efficiency. What I actually meant is that the algorithm should take as few iterations as possible. The algorithm being fast is also very cool, but not the main goal here.</p>
41,858,477 | <p>You can try fitness uniform selection <a href="https://arxiv.org/abs/cs/0103015" rel="nofollow noreferrer">https://arxiv.org/abs/cs/0103015</a>.
But, IMHO, the results won't be very good.</p> | 2017-01-25 18:01:08.257000+00:00 | 2017-01-25 18:01:08.257000+00:00 | null | null | 41,855,317 | <p>My question is: are there genetic optimization algorithms where the population stays i.i.d (independent and identically distributed) during all iterations? The most common ones like NSGA2 or SPEA2 mix the current population with the previous one, so that the mixed population is no longer i.i.d. But are there algorithms where the distribution of the population changes during optimization but still remains i.i.d?</p> | 2017-01-25 15:25:02.953000+00:00 | 2017-01-25 18:01:08.257000+00:00 | null | optimization|genetic-algorithm | ['https://arxiv.org/abs/cs/0103015'] | 1
65,476,452 | <p>As per my understanding (as there is no clear explanation given anywhere) -</p>
<p>These are just multiple ways to implement the same concept.</p>
<p>The basic concept is to use sine and cosine positional encodings to encode the embeddings according to their positions in the sentence. So, if the word '<strong>it</strong>' is present in 2 separate sentences, its embeddings will differ between the two sentences due to the added positional encodings.</p>
<p>The idea is to have some way to implement the time-steps from RNNs, in which the position of each word in the sentence is encoded by default, due to the way RNNs are implemented.</p>
<p>So, if you compare the equations written in the <a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">paper</a>, the one implemented on the <a href="https://www.tensorflow.org/tutorials/text/transformer?hl=en#positional_encoding" rel="nofollow noreferrer">official tensorflow website</a> and the one implemented in the <a href="https://github.com/tensorflow/tensor2tensor/tree/5f9dd2db6d7797162e53adf152310ed13e9fc711/tensor2tensor/layers" rel="nofollow noreferrer">official tensor2tensor github repository</a>, you will notice they are all different.</p>
<p>On the <strong>tensorflow website</strong>, they have interleaved the sine and cosine encodings at even and odd indices -</p>
<pre><code>def get_angles(pos, i, d_model):
angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
return pos * angle_rates
def positional_encoding(position, d_model):
angle_rads = get_angles(np.arange(position)[:, np.newaxis],
np.arange(d_model)[np.newaxis, :],
d_model)
# apply sin to even indices in the array; 2i
angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
# apply cos to odd indices in the array; 2i+1
angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
pos_encoding = angle_rads[np.newaxis, ...]
return tf.cast(pos_encoding, dtype=tf.float32)
</code></pre>
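<p>A quick sanity check of the tutorial version above (assuming the two functions just defined, plus the usual numpy/tensorflow imports):</p>
<pre><code># 50 positions of a d_model = 512 encoding
pos_encoding = positional_encoding(50, 512)
print(pos_encoding.shape)  # (1, 50, 512)
# every position gets a distinct 512-dim vector that is added to the embeddings
</code></pre>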
<p>whereas in the <strong>tensor2tensor implementation</strong>, they have concatenated the sine and cosine encodings. (Channels here means d_model, and signal means the position encodings. Notice the divisor in the implementation: it is <em>d_model-1</em> rather than <em>d_model</em>.)</p>
<pre><code> channels = common_layers.shape_list(x)[2]
num_timescales = channels // 2
log_timescale_increment = (
math.log(float(max_timescale) / float(min_timescale)) /
(tf.to_float(num_timescales) - 1))
inv_timescales = min_timescale * tf.exp(
tf.to_float(tf.range(num_timescales)) * -log_timescale_increment)
scaled_time = (
tf.expand_dims(tf.to_float(position), 2) * tf.expand_dims(
tf.expand_dims(inv_timescales, 0), 0))
signal = tf.concat([tf.sin(scaled_time), tf.cos(scaled_time)], axis=2)
signal = tf.pad(signal, [[0, 0], [0, 0], [0, tf.mod(channels, 2)]])
signal = common_layers.cast_like(signal, x)
return x + signal
</code></pre>
<p>Ultimately, what matters is to have a different encoding for each word in the sequence, which the sine and cosine functions applied at different frequencies and phases provide. The point is to remember to keep using the same equation consistently across the model. Once you change the way of implementation, you lose the correct positional encodings.</p> | 2020-12-28 10:49:47.937000+00:00 | 2020-12-28 10:49:47.937000+00:00 | null | null | 63,295,569 | <p>I was implementing the transformer architecture in tensorflow.</p>
<p>I was following the tutorial : <a href="https://www.tensorflow.org/tutorials/text/transformer#setup_input_pipeline" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/text/transformer#setup_input_pipeline</a></p>
<p>They implement the positional encoding in this way:</p>
<pre><code>angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
</code></pre>
<p>However, in the paper i is not divided by 2 (i//2). Is this a bug, or what is the reason for this operation?</p>
<p><a href="https://i.stack.imgur.com/67ADh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/67ADh.png" alt="pos" /></a></p>
<p>Thank you</p>
55,192,940 | <p>The real way out of this is described in <a href="https://arxiv.org/abs/1806.03476" rel="nofollow noreferrer"><em>Type Variables in Patterns</em>, by Richard A. Eisenberg, Joachim Breitner, Simon Peyton Jones</a>:</p>
<blockquote>
<p>For many years, GHC has implemented an extension to Haskell that
allows type variables to be bound in type signatures and patterns, and
to scope over terms. This extension was never properly specified. We
rectify that oversight here. With the formal specification in hand,
the otherwise-labyrinthine path toward a design for binding type
variables in patterns becomes blindingly clear. We thus extend
ScopedTypeVariables to bind type variables explicitly, obviating the
Proxy workaround to the dustbin of history.</p>
</blockquote> | 2019-03-16 02:39:03.997000+00:00 | 2019-03-16 02:39:03.997000+00:00 | null | null | 22,273,684 | <p>The following code (which is not meant to do anything useful) compiles fine : </p>
<pre><code>{-# LANGUAGE ScopedTypeVariables #-}
import System.Random
uselessFunction :: (RandomGen g) => g -> [Int]
uselessFunction gen =
let (value::Int, newGen) = (random gen)
in (uselessFunction newGen)
</code></pre>
<p>Is it possible for me to use type variables in the pattern matching, in the following spirit (code fails to compile): </p>
<pre><code>{-# LANGUAGE ScopedTypeVariables #-}
import System.Random
uselessFunction :: (RandomGen g, Random a) => g -> [a]
uselessFunction gen =
let (value::a, newGen) = (random gen)
in (uselessFunction newGen)
</code></pre> | 2014-03-08 19:00:31.147000+00:00 | 2019-03-16 02:39:03.997000+00:00 | 2014-03-08 19:06:14.740000+00:00 | haskell|pattern-matching|type-variables | ['https://arxiv.org/abs/1806.03476'] | 1 |
64,264,846 | <p>There is an example of rdpmc usage by John McCalpin: <a href="https://github.com/jdmccalpin/low-overhead-timers" rel="nofollow noreferrer">https://github.com/jdmccalpin/low-overhead-timers</a>, described in <a href="https://stackoverflow.com/a/60267195">https://stackoverflow.com/a/60267195</a> (<a href="http://sites.utexas.edu/jdm4372/2018/07/23/comments-on-timing-short-code-sections-on-intel-processors/" rel="nofollow noreferrer">http://sites.utexas.edu/jdm4372/2018/07/23/comments-on-timing-short-code-sections-on-intel-processors/</a>).</p>
<p>A ready-to-use tool for measuring instructions was also mentioned there: <a href="https://arxiv.org/pdf/1911.03282.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1911.03282.pdf</a> <a href="https://github.com/andreas-abel/nanoBench" rel="nofollow noreferrer">https://github.com/andreas-abel/nanoBench</a></p>
<p>This answer <a href="https://stackoverflow.com/a/60267531">https://stackoverflow.com/a/60267531</a> has an example of using perf_event_open to set up an event counter and rdpmc to read it.</p>
<p>rdpmc is not a serializing instruction, and back-to-back rdpmc reads are not guaranteed to be monotonic, according to <a href="https://www.felixcloutier.com/x86/rdpmc" rel="nofollow noreferrer">https://www.felixcloutier.com/x86/rdpmc</a>:</p>
<blockquote>
<p>The RDPMC instruction is not a serializing instruction; that is, it does not imply that all the events caused by the preceding instructions have been completed or that events caused by subsequent instructions have not begun. If an exact event count is desired, software must insert a serializing instruction (such as the CPUID instruction) before and/or after the RDPMC instruction.</p>
<p>Performing back-to-back fast reads are not guaranteed to be monotonic. To guarantee monotonicity on back-to-back reads, a serializing instruction must be placed between the two RDPMC instructions.</p>
</blockquote>
<p>The jevents library can be used to generate PMC event selectors: <a href="https://github.com/andikleen/pmu-tools/tree/master/jevents" rel="nofollow noreferrer">https://github.com/andikleen/pmu-tools/tree/master/jevents</a>. It is used internally by recent versions of the Linux perf profiling tool. jevents also has a simple API around the rdpmc instruction:</p>
<pre><code>if (rdpmc_open(PERF_COUNT_HW_CPU_CYCLES, &ctx) < 0) ... error ...
start = rdpmc_read(&ctx);
... your workload ...
end = rdpmc_read(&ctx);
</code></pre>
<p>showevtinfo of libpfm4 may generate event ids compatible with rdpmc's ecx format, but I'm not sure: <a href="https://stackoverflow.com/a/46370111">https://stackoverflow.com/a/46370111</a></p>
<p>With nanoBench we can check the source code for Skylake events:
<a href="https://github.com/andreas-abel/nanoBench/blob/master/configs/cfg_Skylake_common.txt" rel="nofollow noreferrer">https://github.com/andreas-abel/nanoBench/blob/master/configs/cfg_Skylake_common.txt</a></p>
<pre><code>D1.01 MEM_LOAD_RETIRED.L1_HIT
D1.08 MEM_LOAD_RETIRED.L1_MISS
D1.02 MEM_LOAD_RETIRED.L2_HIT
D1.10 MEM_LOAD_RETIRED.L2_MISS
D1.04 MEM_LOAD_RETIRED.L3_HIT
D1.20 MEM_LOAD_RETIRED.L3_MISS
</code></pre>
<p>parsed in <a href="https://github.com/andreas-abel/nanoBench/blob/master/common/nanoBench.c" rel="nofollow noreferrer">https://github.com/andreas-abel/nanoBench/blob/master/common/nanoBench.c</a> <code>parse_counter_configs()</code> as <code>pfc_configs[n_pfc_configs].evt_num</code> dot <code>pfc_configs[n_pfc_configs].umask</code>; encoded in <code>configure_perf_ctrs_programmable</code> as</p>
<pre><code> uint64_t perfevtselx = read_msr(MSR_IA32_PERFEVTSEL0+i);
perfevtselx &= ~(((uint64_t)1 << 32) - 1);
perfevtselx |= ((config.cmask & 0xFF) << 24);
perfevtselx |= (config.inv << 23);
perfevtselx |= (1ULL << 22);
perfevtselx |= (config.any << 21);
perfevtselx |= (config.edge << 18);
perfevtselx |= (os << 17);
perfevtselx |= (usr << 16);
perfevtselx |= ((config.umask & 0xFF) << 8);
perfevtselx |= (config.evt_num & 0xFF);
write_msr(MSR_IA32_PERFEVTSEL0+i, perfevtselx);
</code></pre>
<p>So, the two lower bytes of the value written into the IA32_PERF_EVTSELx MSR are evt_num and umask. I am not sure how this is translated into rdpmc's ecx format.</p>
<p>John McCalpin says that the rdpmc instruction takes "something in the range of 24-40 cycles" and explains that "Intel architecture makes it impossible to change the performance counter event select programming from user space at low latency/overhead." <a href="https://community.intel.com/t5/Software-Tuning-Performance/Capturing-multiple-events-simultaneously-using-RDPMC-instruction/td-p/1097868" rel="nofollow noreferrer">https://community.intel.com/t5/Software-Tuning-Performance/Capturing-multiple-events-simultaneously-using-RDPMC-instruction/td-p/1097868</a></p>
<p>The documentation of rdpmc says the same, <a href="https://www.felixcloutier.com/x86/rdpmc" rel="nofollow noreferrer">https://www.felixcloutier.com/x86/rdpmc</a>:</p>
<blockquote>
<p>The ECX register specifies the counter type (if the processor supports architectural performance monitoring) and counter index.
General-purpose or special-purpose performance counters are specified with ECX[30] = 0</p>
</blockquote>
<p>ECX does not contain the exact event to count, but the index of the counter. There are 2, 4 or 8 "programmable performance counters", and you must first use wrmsr (in kernel mode) to set up a counter, for example with MSR IA32_PERF_EVTSEL0 to set up the counter with index 0, and then use rdpmc with ecx[30]=0 and ecx[29:0]=0; with MSR IA32_PERF_EVTSEL3, use rdpmc with ecx[30]=0 and ecx[29:0]=3.</p>
<p>I think it will be easier to use the PAPI API to set up a counter and get readings from it before and after your test code. But the API calls add overhead, so your test code should be designed to repeat the sequence under test many times (thousands of repetitions or more). By default, rdpmc/rdmsr for perf counters are disabled for user-space code by the PCE flag in CR4 - <a href="https://www.felixcloutier.com/x86/rdpmc" rel="nofollow noreferrer">https://www.felixcloutier.com/x86/rdpmc</a> (<code>echo 2 > /sys/bus/event_source/devices/cpu/rdpmc</code>); only Linux kernel access is enabled, and wrmsr for setting up the counter is restricted to the kernel as well.</p>
<p>There are several known methods of measuring cache hierarchy latency without perf counters: <a href="https://www.7-cpu.com/utils.html" rel="nofollow noreferrer">https://www.7-cpu.com/utils.html</a> and lmbench/src/lat_mem_rd.c, but getting the actual cache latency requires some manual post-processing.</p> | 2020-10-08 14:35:23.980000+00:00 | 2020-10-08 14:35:23.980000+00:00 | null | null | 64,210,648 | <p>I am wondering whether there is any single event that can capture L1D cache misses. At first I tried to capture an L1d cache miss by measuring the latency of accessing specific memory with rdtsc. On my setup, if an L1d cache miss happens, the access should hit the L2 cache. So I measured the latency of accessing memory with RDTSC and compared it with the L1 cache latency and the L2 cache latency. However, because of the noise, I could not discern whether it hit L1 or L2. So I decided to use RDPMC.</p>
<p>I found that several APIs provide functions to monitor perf events easily, but I would like to use the RDPMC instruction directly in my test program. I found that MEM_INST_RETIRED.ALL_LOADS-MEM_LOAD_RETIRED.L1_HIT can be used to count the number of retired load instructions that miss in the L1D (<a href="https://stackoverflow.com/questions/54638486/counting-l1-cache-misses-with-papi-read-counters-gives-unexpected-results">counting L1 cache misses with PAPI_read_counters gives unexpected results</a>). However, it seems that this posting talks about the PAPI API.</p>
<p>How can I find what value should be assigned to the ecx register before executing the rdpmc instruction to capture specific events? Also, I am wondering whether there is any single event that can tell me that an L1 miss happened for one memory load instruction between two back-to-back rdpmc instructions, like below.</p>
<pre class="lang-cpp prettyprint-override"><code>c = XXX; //I don't know what value should be assigned for what perf counter..
asm volatile(
"lfence"
"rdpmc"
"lfence"
"mov (0xdeadbeef), %%r10"//read memory
"mov %%eax, %%r10 //read lower 32 bits of counter
"lfence"
"rdpmc" //another rdpmc to capture difference
"sub %%r10, %%eax //sub two counter to get difference
:"=a"(a)
:"c"(c)
:"r10", "edx");
</code></pre>
<p><a href="https://i.stack.imgur.com/yM2FQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yM2FQ.png" alt="enter image description here" /></a></p>
<p>I am currently using a 9900k Coffee Lake machine, so I searched for the perf counter numbers for Coffee Lake machines in the Intel manual. It seems that just capturing two MEM_LOAD_RETIRED.L1_HIT readings, before and after the load instruction, is enough to capture the event, but I am not sure whether it is okay to do so. Also, I don't know how to encode that perf event as the ecx value.</p>
<p>Lastly, I am wondering whether back-to-back rdpmc instructions require any serializing instructions. In my case, because I only put the load instruction in between and measure whether an L1d cache miss happens, I enclose the first rdpmc instruction with lfence instructions and put one more lfence before the last rdpmc, to make sure the load instruction finishes before the second rdpmc.</p>
<p>Added code</p>
<pre class="lang-cpp prettyprint-override"><code>asm volatile (
"lfence\n\t"
"rdpmc\n\t"
"lfence\n\t"
"mov %%eax, %%esi\n\t"
//measure
"mov (%4), %%r10\n\t"
"lfence\n\t"
"rdpmc\n\t"
"lfence\n\t"
"sub %%esi, %%eax\n\t"
"mov %%eax, (%0)\n\t"
:
:"r"(&perf[1]), "r"(&perf[2]), "r"(&perf[3]),
"r"(myAddr), "c"(0x0)
:"eax","edx","esi","r10", "memory");
</code></pre>
<p>Also, I isolated core number 3 with isolcpus and disabled hyperthreading for testing. The MSR register has been configured with the command below:</p>
<pre class="lang-sh prettyprint-override"><code> sudo wrmsr -p 3 0x186 0x4108D1 #L1 MISS
</code></pre> | 2020-10-05 14:28:55.087000+00:00 | 2020-10-09 03:02:08.197000+00:00 | 2020-10-09 03:02:08.197000+00:00 | assembly|x86|perf|intel-pmu | ['https://github.com/jdmccalpin/low-overhead-timers', 'https://stackoverflow.com/a/60267195', 'http://sites.utexas.edu/jdm4372/2018/07/23/comments-on-timing-short-code-sections-on-intel-processors/', 'https://arxiv.org/pdf/1911.03282.pdf', 'https://github.com/andreas-abel/nanoBench', 'https://stackoverflow.com/a/60267531', 'https://www.felixcloutier.com/x86/rdpmc', 'https://github.com/andikleen/pmu-tools/tree/master/jevents', 'https://stackoverflow.com/a/46370111', 'https://github.com/andreas-abel/nanoBench/blob/master/configs/cfg_Skylake_common.txt', 'https://github.com/andreas-abel/nanoBench/blob/master/common/nanoBench.c', 'https://community.intel.com/t5/Software-Tuning-Performance/Capturing-multiple-events-simultaneously-using-RDPMC-instruction/td-p/1097868', 'https://www.felixcloutier.com/x86/rdpmc', 'https://www.felixcloutier.com/x86/rdpmc', 'https://www.7-cpu.com/utils.html'] | 15 |
48,474,873 | <p>For SVMs, the goal is to find a classifier. This problem can be expressed in terms of a function that you are trying to minimize.
Let's first consider the <strong>Newton iteration</strong>. Newton iteration is a numerical method for finding a solution to a problem of the form f(x) = 0.
Instead of solving it <strong>analytically</strong>, we can solve it <strong>numerically</strong> by the following iteration:</p>
<pre><code>x^k+1 = x^k - DF(x)^-1 * F(x)
</code></pre>
<p>Here <code>x^k+1</code> is the (k+1)-th iterate, <code>DF(x)^-1</code> is the inverse of the Jacobian of F(x), and <code>x^k</code> is the k-th iterate.</p>
<p>This update runs as long as we make progress in terms of step size (delta x), or until our function value is sufficiently close to 0. The termination criterion can be chosen accordingly.</p>
<p>Now consider solving the problem <code>f'(x)=0</code>. If we formulate the Newton iteration for that, we get</p>
<pre><code>x^k+1 = x - HF(x)^-1 * DF(x)
</code></pre>
<p>where <code>HF(x)^-1</code> is the inverse of the Hessian matrix and <code>DF(x)</code> the gradient of the function F. Note that we are working in n dimensions, so we cannot just take a quotient; we have to take the inverse of the matrix.</p>
<p>Now we are facing some problems: in each step, we have to calculate the Hessian matrix for the updated x, which is very expensive. We also have to solve a system of linear equations, namely <code>y = HF(x)^-1 * DF(x)</code> or <code>HF(x)*y = DF(x)</code>.
So instead of computing the Hessian in every iteration, we start off with an initial guess of the Hessian (maybe the identity matrix) and perform cheap low-rank updates after each iteration. For the exact formulas, have a look <a href="https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm" rel="nofollow noreferrer">here</a>.</p>
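<p>To make the plain Newton iteration concrete, here is a minimal NumPy sketch (an illustration of the update above, solving HF(x)*y = DF(x) rather than forming the inverse; this is not the quasi-Newton variant):</p>
<pre><code>import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method for f'(x) = 0: solve H(x) y = g(x), then step x -= y."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # terminate on small gradient
            break
        y = np.linalg.solve(hess(x), g)  # linear solve, no explicit inverse
        x = x - y
    return x

# example: f(x) = (x0 - 1)^2 + 10*(x1 + 2)^2, minimum at (1, -2)
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
hess = lambda x: np.diag([2.0, 20.0])
print(newton_minimize(grad, hess, [0.0, 0.0]))  # -> [ 1. -2.]
</code></pre>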
<p>So how does this link to SVM's?</p>
<p>When you look at the function you are trying to minimize, you can formulate a primal problem, which you can then reformulate as a dual Lagrangian problem that is convex and can be solved numerically. It is all well documented in the article, so I will not try to reproduce the formulas here in lesser quality.</p>
<p>But the idea is the following: If you have a dual problem, you can solve it numerically. There are multiple solvers available. In the link you posted, they recommend coordinate descent, which solves the optimization problem for one coordinate at a time. Or you can use subgradient descent. Another method is to use L-BFGS. It is really well explained in <a href="https://arxiv.org/abs/1402.4861" rel="nofollow noreferrer">this</a> paper.</p>
<p>Another popular algorithm for solving problems like this is ADMM (alternating direction method of multipliers). In order to use ADMM you would have to reformulate the given problem into an equivalent one that gives the same solution but has the correct format for ADMM. For that I suggest reading Boyd's lecture notes on ADMM.</p>
<p>In general: first, understand the function you are trying to minimize, and then choose the most suitable numerical method. In this case, subgradient descent and coordinate descent fit best, as stated in the Wikipedia link.</p>
<p><a href="https://stackoverflow.com/questions/23752856/support-vector-machine-primal-form-implementation/23760112#23760112">Support Vector Machine Primal Form Implementation</a></p>
<p>and "lejlot" helped me out to understand that what I am solving is a QP problem.</p>
<p>But I still don't understand how my objective function can be expressed as a QP problem.</p>
<p>(<a href="http://en.wikipedia.org/wiki/Support_vector_machine#Primal_form" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Support_vector_machine#Primal_form</a>)</p>
<p>Also, I don't understand how QP and the Quasi-Newton method are related.</p>
<p>All I know is that the Quasi-Newton method will SOLVE my QP problem, which is supposedly formulated from</p>
<p>my objective function (I don't see the connection).</p>
<p>Can anyone walk me through this please??</p> | 2014-05-22 16:22:42.300000+00:00 | 2018-01-27 10:32:00.957000+00:00 | 2017-05-23 12:15:07.893000+00:00 | svm|mathematical-optimization | ['https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm', 'https://arxiv.org/abs/1402.4861'] | 2 |
61,181,613 | <p>Your question is really broad and really hard to answer, because nonconvex optimization is rather complicated, and so is any iterative algorithm that solves such problems. As a quick hint, you can use the Mexican hat function (or a simple polynomial that gives you what you want) for your test case. Also, these papers can give you some context: <a href="https://arxiv.org/pdf/1602.04915.pdf" rel="nofollow noreferrer">Paper1</a> <a href="https://arxiv.org/pdf/1309.5549.pdf" rel="nofollow noreferrer">Paper2</a></p>
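<p>For instance, a minimal sketch of such an experiment (my own example function, not from the papers): f(x) = x^4 - 3x^2 + x has a local minimum near x = 1.13 and a global minimum near x = -1.30, and plain gradient descent lands in a different basin depending on the starting point:</p>
<pre><code>f      = lambda x: x**4 - 3 * x**2 + x   # handy for plotting
grad_f = lambda x: 4 * x**3 - 6 * x + 1

def gd(x0, lr=0.01, steps=500):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

print(gd( 2.0))  # stuck in the local minimum,  x ~  1.13
print(gd(-2.0))  # reaches the global minimum,  x ~ -1.30
</code></pre>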
<p>Good luck.</p> | 2020-04-13 04:37:53.980000+00:00 | 2020-04-13 04:37:53.980000+00:00 | null | null | 61,062,601 | <p>I have a GD algorithm, and I am trying to come up with a non-convex univariate optimization problem. I want to plot the function in Python and then show two runs of GD: one where it gets caught in a local minimum and one where it manages to make it to the global minimum. I am thinking of using different starting points to accomplish this.</p>
<p>That being said I am somewhat clueless about coming up with such a function or trying two different points, any help is appreciated.</p> | 2020-04-06 14:54:02.677000+00:00 | 2021-03-11 03:34:11.787000+00:00 | null | machine-learning|mathematical-optimization | ['https://arxiv.org/pdf/1602.04915.pdf', 'https://arxiv.org/pdf/1309.5549.pdf'] | 2 |
60,182,230 | <p>The example you linked to is doing a parameter study of the Lorenz system. That is, it solves a large number of copies of the same equations with different parameters. The state type is <code>vex::multivector<double,3></code>, which packs together the states (3D coordinates) of many Lorenz systems. This is an embarrassingly parallel problem, and one can apply the odeint algorithm to the state types in lock-step. That is, operations like <code>x += tau * dt</code>, where <code>x</code> and <code>dt</code> are large vectors, are performed on the GPU.</p>
<p>More details about odeint/vexcl implementation may be found in [1]. [2] is an interesting paper about how to extract parallelism in the case of coupled systems.</p>
<p>[1] Ahnert, Karsten, Denis Demidov, and Mario Mulansky. "Solving ordinary differential equations on GPUs." Numerical Computations with GPUs. Springer, Cham, 2014. 125-157. <a href="https://doi.org/10.1007/978-3-319-06548-9_7" rel="nofollow noreferrer">https://doi.org/10.1007/978-3-319-06548-9_7</a> (<a href="https://www.researchgate.net/profile/Denis_Demidov/publication/303728047_Solving_ordinary_differential_equations_on_GPUs/links/5dc9234892851c8180436b87/Solving-ordinary-differential-equations-on-GPUs.pdf" rel="nofollow noreferrer">pdf</a>)</p>
<p>[2] Mulansky, Mario. "Optimizing Large-Scale ODE Simulations." arXiv preprint <a href="https://arxiv.org/abs/1412.0544" rel="nofollow noreferrer">arXiv:1412.0544</a> (2014).</p> | 2020-02-12 06:19:04.767000+00:00 | 2020-02-12 06:19:04.767000+00:00 | null | null | 60,178,543 | <p>My question is related to the <a href="https://www.boost.org/doc/libs/1_72_0/libs/numeric/odeint/doc/html/boost_numeric_odeint/tutorial/using_opencl_via_vexcl.html" rel="nofollow noreferrer">tutorial</a> which explains how to implement boost::odeint with VexCL in order to achieve concurrency (the complete code can be found <a href="https://github.com/headmyshoulder/odeint-v2/blob/master/examples/vexcl/lorenz_ensemble.cpp" rel="nofollow noreferrer">here</a>).</p>
<p>The following figure shows how I think of the iterations of ODEINT:
<a href="https://i.stack.imgur.com/pYG3x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pYG3x.png" alt="enter image description here"></a></p>
<p>Now I ask myself: <strong>what exactly, or which part of it, is parallelised</strong> in VexCL?</p>
<p>My impression is that the ODE part is one single task, as all equations of the ODE are within one block in the given example. Maybe the integration part runs in three parallel tasks. This results in four tasks, where (I think) the ODE task is a bottleneck (because the equations can become very large).</p>
<p>If this is right, I would like to know <strong>how to improve this concurrency</strong>. I think it makes sense to combine ODE and INT horizontally. This results in 3 tasks, each of which cannot be further reduced at this level.</p>
70,414,630 | <p>If you're looking for the best CNN model for image classification, take a look at the EfficientNet architecture (<a href="https://github.com/lukemelas/EfficientNet-PyTorch" rel="nofollow noreferrer">Pytorch implementation</a>, <a href="https://arxiv.org/abs/1905.11946" rel="nofollow noreferrer">Paper</a>). IIRC, GoogLeNet is kinda old.</p>
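<p>A quick sketch of trying it with the linked PyTorch package (API as documented in its README; for b0 the expected input size is 224x224):</p>
<pre><code>from efficientnet_pytorch import EfficientNet
import torch

model = EfficientNet.from_pretrained('efficientnet-b0')
model.eval()

x = torch.randn(1, 3, 224, 224)   # one RGB image at the expected input size
with torch.no_grad():
    logits = model(x)
print(logits.shape)               # (1, 1000) ImageNet logits
</code></pre>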
<p>If your model requires a specific input image shape, you can just resize the images (for example, with <a href="https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.Resize" rel="nofollow noreferrer">torchvision</a> or <a href="https://www.tutorialkart.com/opencv/python/opencv-python-resize-image/" rel="nofollow noreferrer">OpenCV</a>).</p> | 2021-12-19 19:23:35.017000+00:00 | 2021-12-19 19:23:35.017000+00:00 | null | null | 70,412,953 | <p>I am new to DL and am trying to train my first CNN models with the <strong>googLeNet</strong> architecture. I've prepared my custom image data at dimensions of 50x50, but the architecture recommends 224x224. Will it be okay to use the architecture anyway? I don't want to remake my datasets to change the size of the images. So, if there are some other architectures that I can look into, please kindly recommend them to me.</p>
68,885,307 | <p>I might be able to get part way there (although this is not my area of expertise). The answer may be:</p>
<pre><code>Package ‘linkprediction’
October 19, 2018
Title Link Prediction Methods
Version 1.0-0
Description Implementations of most of the existing proximity-based methods of
link prediction in graphs. Among the 20 implemented methods are e.g.:
Adamic L. and Adar E. (2003) <doi:10.1016/S0378-8733(03)00009-1>,
Leicht E., Holme P., Newman M. (2006) <doi:10.1103/PhysRevE.73.026120>,
Zhou T. and Zhang Y (2009) <doi:10.1140/epjb/e2009-00335-8>, and
Fouss F., Pirotte A., Renders J., and Saerens M. (2007) <doi:10.1109/TKDE.2007.46>.
</code></pre>
<p>The <code>Leicht E., Holme P., Newman M. (2006) <doi:10.1103/PhysRevE.73.026120></code> citation is behind a paywall, but there is a preprint version of it that makes me think the citation you mention is a later description of the same thing: <a href="https://arxiv.org/abs/physics/0510143" rel="nofollow noreferrer">https://arxiv.org/abs/physics/0510143</a></p>
<pre><code>if(!require("linkprediction")){
    install.packages("linkprediction", dependencies = TRUE)
    library(linkprediction)
}
if(requireNamespace("igraph")) {
  g <- igraph::make_graph(~ A -- C:D:E -- B -- F -- G:H -- I)
}
# LHN
proxfun(g, method="lhn_global") # returns a matrix, possibly what your eq. described
round( proxfun(g, method="lhn_global"), 5)
1 2 3 4 5 6 7 8 9
1 0.12648 0.04052 0.04052 0.04052 0.01199 0.00329 0.00101 0.00101 0.00038
2 0.04052 0.27352 0.02352 0.02352 0.03162 0.00867 0.00266 0.00266 0.00101
3 0.04052 0.02352 0.27352 0.02352 0.03162 0.00867 0.00266 0.00266 0.00101
4 0.04052 0.02352 0.02352 0.27352 0.03162 0.00867 0.00266 0.00266 0.00101
5 0.01199 0.03162 0.03162 0.03162 0.07439 0.02039 0.00625 0.00625 0.00237
6 0.00329 0.00867 0.00867 0.00867 0.02039 0.12603 0.03862 0.03862 0.01465
7 0.00101 0.00266 0.00266 0.00266 0.00625 0.03862 0.27152 0.02152 0.05556
8 0.00101 0.00266 0.00266 0.00266 0.00625 0.03862 0.02152 0.27152 0.05556
9 0.00038 0.00101 0.00101 0.00101 0.00237 0.01465 0.05556 0.05556 0.27107
</code></pre>
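<p>If you want the closed form of Newman's recurrence itself, it can also be computed directly from the adjacency matrix. A minimal NumPy sketch for illustration (the same two lines translate directly to R's solve(); sigma = (I - alpha*A)^-1 solves sigma = alpha*A*sigma + I, and alpha must stay below 1/lambda_max for the series to converge; this is not the package's exact scaling):</p>
<pre><code>import numpy as np

def katz_similarity(A, alpha):
    """Closed form of sigma = alpha * A @ sigma + I."""
    return np.linalg.inv(np.eye(A.shape[0]) - alpha * A)

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)   # toy star graph
lam_max = max(abs(np.linalg.eigvals(A)))
print(katz_similarity(A, alpha=0.5 / lam_max).round(3))
</code></pre>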
<p>On the other hand, you may be looking for "the Katz index", which goes back to a 1953 paper. Also look at <a href="https://arxiv.org/pdf/2105.01931.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2105.01931.pdf</a> as well as <a href="https://dial.uclouvain.be/memoire/ucl/en/object/thesis%3A12878/datastream/PDF_01/view" rel="nofollow noreferrer">https://dial.uclouvain.be/memoire/ucl/en/object/thesis%3A12878/datastream/PDF_01/view</a> (which offers Matlab code for the calculations).</p> | 2021-08-22 21:28:20.067000+00:00 | 2021-08-22 22:02:30.310000+00:00 | 2021-08-22 22:02:30.310000+00:00 | null | 68,882,999 | <p>I would like to know if there is an R package which is able to compute the <em>regular equivalence</em> measure described in Newman (2010:218) as "Katz <em>similarity</em>":</p>
<p><a href="https://i.stack.imgur.com/86oEb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/86oEb.png" alt="enter image description here" /></a></p>
<p>where σ is the similarity score, A is the adjacency matrix of the graph and δ is meant to increase the "self-similarity" of diagonal elements. So far I was not able to find any specific function computing this score in R. Since the textbook also explicitly states that:</p>
<blockquote>
<p>"The Katz centrality of a vertex would then be simply
the sum of the Katz similarities of that vertex to all others." (Newman, 2010: 219)</p>
</blockquote>
<p>I was thinking that maybe there is a way to derive the similarity score from the Katz centrality measure, but I could not find a proper way to disentangle each score in the centrality.</p> | 2021-08-22 16:03:48.580000+00:00 | 2021-08-22 22:02:30.310000+00:00 | 2021-08-22 21:50:55.220000+00:00 | r|igraph|similarity|sna | ['https://arxiv.org/abs/physics/0510143', 'https://arxiv.org/pdf/2105.01931.pdf', 'https://dial.uclouvain.be/memoire/ucl/en/object/thesis%3A12878/datastream/PDF_01/view'] | 3 |
38,172,390 | <blockquote>
<p>What profunctors lack compared to arrows is the ability to compose them. If we add composition, will we get an arrow?</p>
</blockquote>
<h2>MONOIDS</h2>
<p>This is exactly the question tackled in section 6 of "<a href="http://arxiv.org/pdf/1406.4823.pdf" rel="noreferrer">Notions of Computation as Monoids</a>," which unpacks a result from the (rather dense) "<a href="http://homepages.inf.ed.ac.uk/cheunen/publications/2008/arrows/arrows.pdf" rel="noreferrer">Categorical semantics for arrows</a>". "Notions" is a great paper because while it dives deep into category theory it (1) doesn't assume the reader has more than a cursory knowledge of abstract algebra and (2) illustrates most of the migraine-inducing mathematics with Haskell code. We can briefly summarize section 6 of the paper here:</p>
<p>Say we have</p>
<pre><code>class Profunctor p where
dimap :: (contra' -> contra) -> (co -> co') -> p contra co -> p contra' co'
</code></pre>
<p>Your bog-standard, negative-and-positive dividin' encoding of profunctors in Haskell. Now this data type,</p>
<pre><code>data (⊗) f g contra co = forall x. (f contra x) ⊗ (g x co)
</code></pre>
<p>as implemented in <a href="https://hackage.haskell.org/package/profunctors-5.2/docs/Data-Profunctor-Composition.html" rel="noreferrer">Data.Profunctor.Composition</a>, acts like composition for profunctor. We can, for example, demonstrate a lawful instance <code>Profunctor</code>:</p>
<pre><code>instance (Profunctor f, Profunctor g) => Profunctor (f ⊗ g) where
dimap contra co (f ⊗ g) = (dimap contra id f) ⊗ (dimap id co g)
</code></pre>
<p>We will hand-wave the proof that it is lawful for reasons of time and space.</p>
<p>OK. Now the fun part. Say we have this typeclass:</p>
<pre><code>class Profunctor p => ProfunctorMonoid p where
e :: (a -> b) -> p a b
m :: (p ⊗ p) a b -> p a b
</code></pre>
<p>This is, with a lot more hand-waving, a way of encoding the notion of profunctor monoids in Haskell. Specifically this is a monoid in the monoidal category <code>Pro</code>, which is a monoidal structure for the functor category <code>[C^op x C, Set]</code> with <code>⊗</code> as the tensor and <code>Hom</code> as its unit. So there's a lot of ultraspecific mathematical diction to unpack here but for that you should just read the paper.</p>
<p>We then see that <code>ProfunctorMonoid</code> is isomorphic to <code>Arrow</code> ... almost.</p>
<pre><code>instance ProfunctorMonoid p => Category p where
  id = e id  -- lift the identity function; dimap id id has type p a b -> p a b, not p a a
(.) pbc pab = m (pab ⊗ pbc)
instance ProfunctorMonoid p => Arrow p where
arr = e
first = undefined
instance Arrow p => Profunctor p where
lmap = (^>>)
rmap = (>>^)
instance Arrow p => ProfunctorMonoid p where
e = arr
  m (pax ⊗ pxb) = pax >>> pxb  -- (>>>) is left-to-right Category composition
</code></pre>
<p>Of course we are ignoring the typeclass laws here but, as the paper shows, they do work out fantastically.</p>
<p>Now I said almost because crucially we were unable to implement <code>first</code>. What we have really done is demonstrated an isomorphism between <code>ProfunctorMonoid</code> and <em>pre-arrows</em> .The paper calls <code>Arrow</code> without <code>first</code> a <em>pre-arrow</em>. It then goes on to show that</p>
<pre><code>class Profunctor p => StrongProfunctor p where
first :: p x y -> p (x, z) (y, z)
class StrongProfunctor p => StrongProfunctorMonoid p where
e :: (a -> b) -> p a b
m :: (p ⊗ p) a b -> p a b
</code></pre>
<p>is necessary and sufficient for the desired isomorphism to <code>Arrow</code>. The word "strong" comes from a specific notion in category theory and is described by the paper in better writing and richer detail than I could ever muster.</p>
<p>So to summarize:</p>
<ul>
<li><p>A monoid in the category of profunctors is a pre-arrow, and vice versa. (A previous version of the paper used the term "weak arrows" instead of pre-arrows, and that's OK too I guess.)</p></li>
<li><p>A monoid in the category of strong profunctors is an arrow, and vice versa.</p></li>
<li><p>Since monad is a monoid in the category of endofunctors we can think of the SAT analogy <code>Functor : Profunctor :: Monad : Arrow</code>. This is the real thrust of the notions-of-computation-as-monoids paper.</p></li>
<li><p>Monoids and monoidal categories are gentle sea creatures that appear everywhere, and it's a shame that some students will go through computer science or software engineering education without being taught monoids.</p></li>
<li><p>Category theory is fun.</p></li>
<li><p>Haskell is fun.</p></li>
</ul> | 2016-07-03 17:13:24.427000+00:00 | 2016-07-03 17:51:16.693000+00:00 | 2016-07-03 17:51:16.693000+00:00 | null | 38,169,453 | <p>Apparently, every <code>Arrow</code> is a <a href="https://hackage.haskell.org/package/profunctors-4.3.2/docs/Data-Profunctor.html#t:Strong" rel="noreferrer"><code>Strong</code></a> profunctor. Indeed <a href="https://hackage.haskell.org/package/base-4.9.0.0/docs/Control-Arrow.html#v:-94--62--62-" rel="noreferrer"><code>^>></code></a> and <code>>>^</code> correspond to <code>lmap</code> and <code>rmap</code>. And <code>first'</code> and <code>second'</code> are just the same as <code>first</code> and <code>second</code>. Similarly every <code>ArrowChoice</code> is also <a href="https://hackage.haskell.org/package/profunctors-4.3.2/docs/Data-Profunctor.html#t:Choice" rel="noreferrer"><code>Choice</code></a>.</p>
<p>What profunctors lack compared to arrows is the ability to compose them. If we add composition, will we get an arrow? In other words, if a (strong) profunctor is also a <a href="https://hackage.haskell.org/package/base-4.9.0.0/docs/Control-Category.html#t:Category" rel="noreferrer">category</a>, is it already an arrow? If not, what's missing?</p> | 2016-07-03 11:24:15.233000+00:00 | 2019-06-02 23:44:37.477000+00:00 | 2019-06-02 23:44:37.477000+00:00 | haskell|functor|category-theory|arrows|profunctor | ['http://arxiv.org/pdf/1406.4823.pdf', 'http://homepages.inf.ed.ac.uk/cheunen/publications/2008/arrows/arrows.pdf', 'https://hackage.haskell.org/package/profunctors-5.2/docs/Data-Profunctor-Composition.html'] | 3 |
5,660,348 | <p>This is the idiomatic way of writing a (nested) if-then-else in Prolog:</p>
<pre><code>testif(X,Y) :-
    (X >= 6 ->
        writeln('X> 6'),
        (Y >= 3 ->
            writeln('X >6 and Y> 3')
        ;
            writeln('X >6 and Y<3')
        )
    ;
        writeln('X<6')
    ),
    writeln('Test over').
</code></pre>
<p>Indentation does not change the meaning of a Prolog program, but a consistent layout like this makes the condition structure obvious. See <a href="http://arxiv.org/abs/0911.2899" rel="nofollow">Covington <em>et al.</em></a> for a styleguide.</p>
<pre><code>testif(X,Y) :-
X >= 6 ->
writeln('X> 6'),
Y>= 3 ->
writeln('X >6 and Y> 3');
writeln('X >6 and Y<3'),
writeln('X<6');
writeln('Test over').
test(X,Y):-
testif(X,Y).
?- test(7, 3).
Yes (0.00s cpu)
X> 6
X >6 and Y> 3
</code></pre>
<p>Why doesn't it output 'Test over'?</p>
<pre><code>?- test(4, 3).
Yes (0.00s cpu, solution 1, maybe more)
X >6 and Y<3 % why output this line?
X<6
</code></pre>
<p>Why doesn't it output 'Test over' either?</p>
<p>Thanks very much :)</p> | 2011-04-13 16:59:44.370000+00:00 | 2011-04-14 08:14:21.270000+00:00 | null | prolog | ['http://arxiv.org/abs/0911.2899'] | 1 |
35,056,552 | <p>Word2Vec captures a distributed representation of a word, which essentially means <em>multiple neurons capture a single concept</em> (a concept can be word meaning/sentiment/part of speech etc.), and also <em>a single neuron contributes to multiple concepts</em>.</p>
<p>These concepts are automatically learnt and not pre-defined, hence you can think of them as latent/hidden. Also for the same reason, the word vectors can be used for multiple applications.</p>
<p>The larger the size parameter, the larger the capacity of your neural network to represent these concepts, but more data will be required to train these vectors (as they are initialised randomly). In the absence of a sufficient number of sentences/computing power, it's better to keep the <code>size</code> small.</p>
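<p>A minimal gensim sketch of what <code>size</code> changes in practice (parameter names as in your example; newer gensim versions call it <code>vector_size</code>):</p>
<pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument

texts = ["the cat sat", "the dog ran", "cats and dogs"]
docs = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(texts)]

small = Doc2Vec(docs, size=100, window=8, min_count=1, workers=4)
large = Doc2Vec(docs, size=200, window=8, min_count=1, workers=4)

# only the dimensionality of the learnt vectors differs
print(len(small.docvecs[0]), len(large.docvecs[0]))  # 100 200
</code></pre>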
<p><a href="http://arxiv.org/pdf/1405.4053v2.pdf">Doc2Vec</a> follows slightly different neural network architecture as compared to Word2Vec, but the meaning of <code>size</code> is analogous. </p> | 2016-01-28 08:41:05.773000+00:00 | 2016-01-28 08:41:05.773000+00:00 | null | null | 34,948,650 | <p>I am using <code>Doc2Vec</code> function of <a href="https://radimrehurek.com/gensim/models/doc2vec.html" rel="noreferrer">gensim</a> in Python to convert a document to a vector.</p>
<p>An example of usage</p>
<p><code>model = Doc2Vec(documents, size=100, window=8, min_count=5, workers=4)</code></p>
<p>How should I interpret the <code>size</code> parameter? I know that if I set <code>size = 100</code>, the length of the output vector will be 100, but what does it mean? For instance, if I increase <code>size</code> to 200, what is the difference?</p> | 2016-01-22 14:07:52.770000+00:00 | 2016-01-28 08:41:05.773000+00:00 | null | python|gensim|word2vec | ['http://arxiv.org/pdf/1405.4053v2.pdf'] | 1
21,210,735 | <p>I think <a href="https://stackoverflow.com/users/68063/gareth-rees">Gareth</a>'s answer above is excellent, but just to add (I don't have any reputation to add comments): Dots and Boxes has been shown (at least with a sketch) to be NP-hard according to this: <a href="http://arxiv.org/pdf/cs/0106019v2.pdf" rel="nofollow noreferrer">arxiv.org/pdf/cs/0106019v2.pdf</a></p>
<p>I wrote a JavaScript version of dots and boxes that tries to incorporate the strategies mentioned above: <a href="http://dotsandboxes.org" rel="nofollow noreferrer">dotsandboxes.org</a>. It's not the best one available (it doesn't yet incorporate all the techniques that Gareth mentions), but the graphics are nice and it beats most humans and other implementations :) Feel free to have a look at the code; there are also links to other people's versions of the game that you can train yours on.</p>
<p>I'm representing the dots and boxes board as a matrix in Python. Winning the game is top priority: algorithm efficiency is not that important.</p>
<p>Is there a good, not-too-complex algorithm for automatically figuring out what move we should make, given a board?</p>
<p>P.S. - You don't need to give me any code if you don't want to... English descriptions of algorithms are perfectly acceptable.</p>
46,838,332 | <p>UPDATE: it appears that a new and very good algorithm has appeared, called KLL. See <a href="https://arxiv.org/abs/1603.05346" rel="nofollow noreferrer">paper</a>. It has an implementation <a href="https://github.com/edoliberty/streaming-quantiles/blob/master/kll.py" rel="nofollow noreferrer">in Python</a> and <a href="https://github.com/dgryski/go-kll" rel="nofollow noreferrer">in Go</a>.</p>
<p><a href="https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf" rel="nofollow noreferrer">t-digest</a> has implementations in several languages and satisfies all of your requirements. See <a href="https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf" rel="nofollow noreferrer">the paper</a> that makes comparisons to some other algorithms, e.g. to Q-Digest. You can look for more comparisons in the <a href="https://graphics.stanford.edu/courses/cs468-05-winter/Papers/Information_Aggregation/Suri_sensys04.pdf" rel="nofollow noreferrer">Q-Digest paper</a>.</p>
<p>Generally, both of these algorithms are far superior to sampling-based algorithms for estimating quantiles, in terms of giving much better accuracy given the same amount of storage. You can look for a discussion of many more approximate algorithms in the excellent book <a href="https://rads.stackoverflow.com/amzn/click/com/193301914X" rel="nofollow noreferrer" rel="nofollow noreferrer">Data Streams: Algorithms and Applications</a> (it does not discuss t-digest because it was created after the book was published).</p>
<p>There might be other, better algorithms that I'm not familiar with.</p>
<p>There is currently no Beam wrapper for the t-digest library, but it should not be difficult to develop one using a custom <code>CombineFn</code>. See, for example, <a href="https://github.com/apache/beam/pull/3991" rel="nofollow noreferrer">a current pending PR</a> adding support for a different approximate algorithm using a <code>CombineFn</code>.</p> | 2017-10-19 20:24:35.397000+00:00 | 2017-10-20 18:51:29.797000+00:00 | 2017-10-20 18:51:29.797000+00:00 | null | 46,827,512 | <p>I am trying to compute quantiles (can be approximate with some accuracy guarantees or error bounds) for a huge dataset (terabytes of data) . How can i efficiently compute quantiles . The requirements are </p>
<pre><code>1) Can be computed efficiently (one-pass) or in a distributed way (merging)
2) High accuracy (or at least can be controlled)
3) Can be re-computed or reproduced in multiple language (java and python)
4) Incrementally updated (not a requirement but good to have)
</code></pre>
<p>The few approaches I am looking at are :</p>
<blockquote>
<p>1) The naive solution: reservoir sampling (I am not sure how to do it in a
distributed map-reduce way, especially how to merge different reservoir
samples for the same data or two different distributions; are there any
good implementations?)</p>
<p>2) t-digest </p>
<p>3) Gurmeet Singh Manku, Sridhar Rajagopalan, and Bruce G. Lindsay.
Approximate medians and other quantiles in one pass and with
limited memory. (Reason being, I think some map-reduce frameworks like
Dataflow and BigQuery already implement a variation of this, AFAIK.)</p>
</blockquote>
<p>Can someone who has prior experience working with these algorithms and techniques give me some pointers about the caveats, pros, and cons of each? When should one use which method? Is one approach arguably better than another if the requirements are efficient computation and better accuracy?</p>
<p>I have not used the digest-based approaches in particular, and I would like to understand better why and when I would prefer something like t-digest over something simple like reservoir sampling to compute approximate quantiles.</p>
73,257,326 | <p>This is certainly a valid application of <code>FedAvg</code> and the variants proposed in the linked paper, though one that is only studied empirically in a subset of the literature. On the other hand, many theoretical analyses of <code>FedAvg</code> assume a similar situation to the one you're describing; at the bottom of page 4 of that linked paper, you will see that the analysis is performed in this so-called 'full participation' regime, where every client participates on every round.</p>
<p>Often the setting you describe is called 'cross silo'; see, e.g., <a href="https://arxiv.org/pdf/1912.04977.pdf" rel="nofollow noreferrer">section 7.5 of Advances and Open Problems in Federated Learning</a>, which will also contain many useful pointers for the cross-silo literature.</p>
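<p>Mechanically, 'full participation' in a TFF simulation is just a matter of which client datasets you pass to each round. A rough sketch against the classic simulation API (names like <code>model_fn</code>, <code>num_rounds</code> and <code>all_six_client_datasets</code> are placeholders you would define; details vary by TFF version):</p>
<pre><code>import tensorflow as tf
import tensorflow_federated as tff

# model_fn and the six client tf.data.Datasets are assumed to exist already
iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))

state = iterative_process.initialize()
for round_num in range(num_rounds):
    # full participation: every training client in every round
    state, metrics = iterative_process.next(state, all_six_client_datasets)
</code></pre>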
<p>Finally, depending on the application, consider that it may be more natural to literally train on <em>all</em> clients, reserving portions of each client's data for validation and test. Questions around natural partitions of data to model the 'setting we care about' are often thorny in the federated setting.</p> | 2022-08-06 05:08:31.363000+00:00 | 2022-08-06 05:08:31.363000+00:00 | null | null | 73,198,156 | <p>I am building a federated learning model using Tensorflow Federated.
Based on what I have read in the tutorials and papers, I understood that the state-of-the-art method (FedAvg) works by selecting a random subset of clients at each round.</p>
<p>My concern is:</p>
<ul>
<li>I have a small number of clients. In total I have 8 clients; I select 6 for training and keep 2 for testing.</li>
<li>All of the data is available on my local device, so I am using TFF as the simulation environment.</li>
<li>If I use all 6 clients in every federated communication round, would this be an incorrect execution of the FedAvg method?</li>
<li>Note that I am also planning to use the same experiment as in this <a href="https://arxiv.org/abs/2003.00295" rel="nofollow noreferrer">paper</a>, which uses different server optimization methods and compares their performance. So, would the all-clients-participating procedure work here or not?</li>
</ul>
<p>Thanks in advance</p> | 2022-08-01 18:35:35.130000+00:00 | 2022-08-06 05:08:31.363000+00:00 | null | tensorflow|tensorflow-federated | ['https://arxiv.org/pdf/1912.04977.pdf'] | 1 |
45,731,612 | <p>Let's go through your model:</p>
<p>You have your input layer with dimension 1120. Connected to it, you have your first hidden layer with 512 neurons, followed by your batch normalization layer, then your activation function, and then your dropout layer. Note that you can use the command <code>model.summary()</code> to visualize your model.</p>
<p>In theory, you can (and should) consider these layers as a single layer to which you apply the following transformations: batch normalization, activation and dropout. In practice, each part is implemented as an individual layer in Keras because this makes the implementation modular: instead of coding all the possible ways a layer can be designed, the user can choose whether to add batch norm or dropout to the layer. To look at the modular implementation, I recommend having a look at <a href="http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture4.pdf" rel="nofollow noreferrer">http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture4.pdf</a> and, in general, at <a href="http://cs231n.stanford.edu/syllabus.html" rel="nofollow noreferrer">http://cs231n.stanford.edu/syllabus.html</a> if you want to gain deeper knowledge.</p>
<p>For the batch normalization layer, as you can notice, you have 4 parameters: two adjustable parameters, gamma and beta, and two parameters that are set by the data (the mean and the standard deviation). To learn what they are, look at the Stanford class; you can also find this in the original paper about batch normalization, <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">https://arxiv.org/abs/1502.03167</a>. It is just a trick to improve learning speed and accuracy by normalizing your data at each layer, like you would do in a preprocessing step for your input data.</p>
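<p>In code, the per-feature transformation those 4 parameters implement is simply the following (a NumPy sketch of inference-time batch norm, not Keras' internals):</p>
<pre><code>import numpy as np

def batch_norm_inference(x, gamma, beta, moving_mean, moving_var, eps=1e-3):
    """y = gamma * (x - mean) / sqrt(var + eps) + beta, feature-wise."""
    x_hat = (x - moving_mean) / np.sqrt(moving_var + eps)
    return gamma * x_hat + beta

x = np.random.randn(32, 512)  # a batch of 512-dim activations
y = batch_norm_inference(x, np.ones(512), np.zeros(512),
                         np.zeros(512), np.ones(512))
print(y.shape)  # (32, 512); one 512-unit layer has 4 * 512 such parameters
</code></pre>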
<p>From what I said, you can infer the rest of your model.</p>
<p>N.B.: I wouldn't use the batch normalization layer in the last step before the softmax.</p>
<p>Is it clearer?</p> | 2017-08-17 09:40:11.680000+00:00 | 2017-08-17 11:01:57.360000+00:00 | 2017-08-17 11:01:57.360000+00:00 | null | 45,730,955 | <p>I have some trouble understanding DNN models that use batch normalization, specifically using Keras. Can somebody explain to me the structure and content of each layer in this model that I built?</p>
<pre><code>modelbatch = Sequential()
modelbatch.add(Dense(512, input_dim=1120))
modelbatch.add(BatchNormalization())
modelbatch.add(Activation('relu'))
modelbatch.add(Dropout(0.5))
modelbatch.add(Dense(256))
modelbatch.add(BatchNormalization())
modelbatch.add(Activation('relu'))
modelbatch.add(Dropout(0.5))
modelbatch.add(Dense(num_classes))
modelbatch.add(BatchNormalization())
modelbatch.add(Activation('softmax'))
# Compile model
modelbatch.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model
start = time.time()
model_info = modelbatch.fit(X_2, y_2, batch_size=500, \
epochs=20, verbose=2, validation_data=(X_test, y_test))
end = time.time()
</code></pre>
<p>These are, I think, all the layers of my model:</p>
<pre><code>print(modelbatch.layers[0].get_weights()[0].shape)
(1120, 512)
print(modelbatch.layers[0].get_weights()[1].shape)
(512,)
print(modelbatch.layers[1].get_weights()[0].shape)
(512,)
print(modelbatch.layers[1].get_weights()[1].shape)
(512,)
print(modelbatch.layers[1].get_weights()[2].shape)
(512,)
print(modelbatch.layers[1].get_weights()[3].shape)
(512,)
print(modelbatch.layers[4].get_weights()[0].shape)
(512, 256)
print(modelbatch.layers[4].get_weights()[1].shape)
(256,)
print(modelbatch.layers[5].get_weights()[0].shape)
(256,)
print(modelbatch.layers[5].get_weights()[1].shape)
(256,)
print(modelbatch.layers[5].get_weights()[2].shape)
(256,)
print(modelbatch.layers[5].get_weights()[3].shape)
(256,)
print(modelbatch.layers[8].get_weights()[0].shape)
(256, 38)
print(modelbatch.layers[8].get_weights()[1].shape)
(38,)
print(modelbatch.layers[9].get_weights()[0].shape)
(38,)
print(modelbatch.layers[9].get_weights()[1].shape)
(38,)
print(modelbatch.layers[9].get_weights()[2].shape)
(38,)
print(modelbatch.layers[9].get_weights()[3].shape)
(38,)
</code></pre>
<p>I will appreciate your help, thanks in advance.</p> | 2017-08-17 09:09:49.737000+00:00 | 2021-02-14 23:47:23.827000+00:00 | 2021-02-14 23:47:23.827000+00:00 | keras|batch-normalization | ['http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture4.pdf', 'http://cs231n.stanford.edu/syllabus.html', 'https://arxiv.org/abs/1502.03167'] | 3 |
65,344,089 | <p>Referring to <a href="https://arxiv.org/abs/1511.07122" rel="nofollow noreferrer">Multi-Scale Context Aggregation by Dilated Convolutions</a>, yes, you can save some memory while having a larger receptive field. You might want to use dilated convolutions if you want an exponential expansion of the receptive field without loss of resolution or coverage. This gives a larger receptive field at the same computation and memory cost while preserving resolution. Pooling and strided convolutions can also "expand" the receptive field, but they reduce the data's resolution.</p>
<p>Generally, dilated convolutions have also been shown to perform better, for example in image segmentation in <a href="https://arxiv.org/pdf/1606.00915" rel="nofollow noreferrer">DeepLab</a> and in speech in <a href="https://arxiv.org/pdf/1609.03499.pdf" rel="nofollow noreferrer">WaveNet</a>.</p>
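<p>As a concrete illustration (a minimal sketch using the Keras API; the input size is arbitrary), a 3x3 kernel with <code>dilation_rate=2</code> covers a 5x5 input window while still using only 9 weights:</p>
<pre><code>import numpy as np
from tensorflow.keras.layers import Conv2D

# dilation_rate=2 inserts one gap between the taps of a 3x3 kernel,
# so each output position "sees" a 5x5 window of the input.
x = np.zeros((1, 32, 32, 1), dtype="float32")
print(Conv2D(1, kernel_size=3, dilation_rate=1)(x).shape)  # (1, 30, 30, 1)
print(Conv2D(1, kernel_size=3, dilation_rate=2)(x).shape)  # (1, 28, 28, 1)
</code></pre>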
<p><a href="https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md" rel="nofollow noreferrer">Here</a> shows a neat visualization of what dilation does.</p> | 2020-12-17 15:56:47.163000+00:00 | 2020-12-17 15:56:47.163000+00:00 | null | null | 65,344,036 | <p>I do not understand what is the use of dilated convolution and when should we use it. When we want a larger receptive field while saving memory? By increasing <code>dilation</code> size, it increases the spacing between the kernel points?</p> | 2020-12-17 15:53:48.590000+00:00 | 2020-12-17 15:56:47.163000+00:00 | null | machine-learning | ['https://arxiv.org/abs/1511.07122', 'https://arxiv.org/pdf/1606.00915', 'https://arxiv.org/pdf/1609.03499.pdf', 'https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md'] | 4 |
51,396,340 | <p>I think you are looking for something called <a href="https://arxiv.org/abs/1610.02984" rel="nofollow noreferrer">ReID</a>. There are a lot of papers about it in CVPR2018.</p>
<p>You can imagine that you would need some sort of stored characteristic vector for each person. For each detected person, give a new ID if it does not match any previous record, or return the existing ID if it does match one. The key is how to compute this characteristic vector. CNN features (from an intermediate layer) can be one option; Gaussian mixtures over the colors of the detected person patch can be another.</p>
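<p>To make that concrete, here is a minimal sketch of the matching logic only (the feature extractor is left abstract, and the threshold of 0.8 is a hypothetical placeholder you would need to tune):</p>
<pre><code>import numpy as np

gallery = []  # list of (person_id, feature_vector) records

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign_id(feature, threshold=0.8):
    """Return an existing ID if the feature matches a stored record,
    otherwise register the feature under a fresh ID."""
    if gallery:
        best_id, best_vec = max(gallery, key=lambda rec: cosine(feature, rec[1]))
        if cosine(feature, best_vec) >= threshold:
            return best_id
    new_id = len(gallery)
    gallery.append((new_id, feature))
    return new_id
</code></pre>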
<p>It is still a very active research field, and I think it would be quite hard to make an accurate system if you don't have many resources or much time at hand.</p> | 2018-07-18 07:44:37.477000+00:00 | 2018-07-18 07:44:37.477000+00:00 | null | null | 51,395,132 | <p>Currently I am researching viable approaches to identify a certain object with image processing techniques. However, I am struggling to find them. For example, I have a CNN capable of detecting certain objects, like a person, and I can then track the person as well. However, my issue is that I want to identify the detected and tracked person, i.e. save its credentials and give it an ID. I do not want something like who he/she is; just assigning an ID in that manner.</p>
<p>Any help/resource will be appreciated.</p> | 2018-07-18 06:39:15.637000+00:00 | 2018-07-18 07:51:01.217000+00:00 | null | python|opencv|image-processing|computer-vision | ['https://arxiv.org/abs/1610.02984'] | 1 |
23,821,940 | <p>I'm surprised that you found chained hashing to be faster than linear probing - in practice, linear probing is typically significantly faster than chaining. This is primarily due to <a href="https://en.wikipedia.org/wiki/Locality_of_reference" rel="noreferrer">locality of reference</a>, since the accesses performed in linear probing tend to be closer in memory than the accesses performed in chained hashing.</p>
<p>There are other wins in linear probing. For example, insertions into a linear probing hash table don't require any new allocations (unless you're rehashing the table), so in applications like network routers where memory is scarce, it's nice to know that once the table is set up, the elements can be placed into it with no risk of a <code>malloc</code> fail.</p>
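<p>For reference, here is a minimal sketch of linear probing (fixed capacity, no deletion or resizing, and it assumes the table never becomes completely full):</p>
<pre><code>class LinearProbingMap:
    """Open addressing with linear probing; inserts never allocate."""
    def __init__(self, capacity=16):
        self.slots = [None] * capacity  # each slot is (key, value) or None

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        # Walk forward to the next adjacent slot until we find the key
        # or an empty slot; adjacency is what gives the cache friendliness.
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot is not None else None
</code></pre>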
<p>One weakness of linear probing is that, with a bad choice of hash function, <a href="https://en.wikipedia.org/wiki/Primary_clustering" rel="noreferrer">primary clustering</a> can cause the performance of the table to degrade significantly. While chained hashing can still suffer from bad hash functions, it's less sensitive to elements with nearby hash codes, which don't adversely impact the runtime. Theoretically, linear probing only gives expected O(1) lookups if the hash functions are <a href="http://arxiv.org/abs/1509.04549" rel="noreferrer">5-independent</a> or if <a href="https://www.eecs.harvard.edu/~michaelm/postscripts/soda2008b.pdf" rel="noreferrer">there's sufficient entropy in the keys</a>. There are many ways to address this, such as using the <a href="http://sebastiansylvan.com/post/robin-hood-hashing-should-be-your-default-hash-table-implementation/" rel="noreferrer">Robin Hood hashing</a> technique or <a href="http://people.csail.mit.edu/shanir/publications/disc2008_submission_98.pdf" rel="noreferrer">hopscotch hashing</a>, both of which have significantly better worst-cases than vanilla linear probing.</p>
<p>The other weakness of linear probing is that its performance significantly degrades as the load factor approaches 1. You can address this either by rehashing periodically or by using the Robin Hood hashing technique described above.</p>
<p>Hope this helps!</p> | 2014-05-23 05:57:39.440000+00:00 | 2016-05-17 19:03:21.310000+00:00 | 2016-05-17 19:03:21.310000+00:00 | null | 23,821,764 | <p>I recently learned about different methods to deal with collisions in hash tables and saw that the separate chaining with linked lists is always more time efficient than linear probing. For space efficiency, we allocate a predefined memory for linear probing which later on we might not use, but for separate chaining we use memory dynamically.</p>
<p>Is separate chaining with linked list more efficient than linear probing? If so, why do we then use linear probing at all?</p> | 2014-05-23 05:44:45.650000+00:00 | 2020-09-05 19:39:11.347000+00:00 | 2020-09-05 19:39:11.347000+00:00 | performance|algorithm|hash|hashtable|time-complexity | ['https://en.wikipedia.org/wiki/Locality_of_reference', 'https://en.wikipedia.org/wiki/Primary_clustering', 'http://arxiv.org/abs/1509.04549', 'https://www.eecs.harvard.edu/~michaelm/postscripts/soda2008b.pdf', 'http://sebastiansylvan.com/post/robin-hood-hashing-should-be-your-default-hash-table-implementation/', 'http://people.csail.mit.edu/shanir/publications/disc2008_submission_98.pdf'] | 6 |
65,201,190 | <p>Many of the solutions are great, but <strong>none of them mentioned the <a href="https://medium.com/p/2ab4c0335e8b" rel="nofollow noreferrer">state-of-the-art</a> one</strong>.</p>
<p>It has <strong>O(1) worst-time</strong> complexity for the <code>fill(v), read(i), write(i, v)</code> operations<br>
(calling <em>fill(v)</em> sets all the values in the array to v, and <em>read</em>/<em>write</em> are self explanatory),<br>
while taking only <strong>1 bit of extra space</strong> besides the array. Yep.</p>
<p>So an int32_t array of size 1,000,000 will take O(1) worst-time to be initialized (and filled),<br>
and will take only 32,000,001 bits of memory.</p>
<p>It is mentioned in the <a href="https://arxiv.org/abs/1709.08900" rel="nofollow noreferrer">In-Place Initializable Arrays</a> paper,<br>and explained in an <a href="https://medium.com/p/2ab4c0335e8b" rel="nofollow noreferrer">Article</a> I've written about the subject.</p>
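<p>For intuition, here is a sketch of the much simpler folklore variant of the same idea (not the 1-extra-bit construction from the paper): after an O(n) construction, it gives O(1) <em>fill</em>/<em>read</em>/<em>write</em> at the cost of O(n) extra space for per-slot timestamps:</p>
<pre><code>class ConstantFillArray:
    """O(1) fill/read/write via a fill version plus per-slot stamps."""
    def __init__(self, n, default=0):
        self.values = [default] * n
        self.stamps = [0] * n    # version in which values[i] was last written
        self.version = 0         # current fill version
        self.fill_value = default

    def fill(self, v):           # O(1): just bump the version counter
        self.version += 1
        self.fill_value = v

    def read(self, i):
        if self.stamps[i] == self.version:
            return self.values[i]
        return self.fill_value   # slot was not written since the last fill

    def write(self, i, v):
        self.values[i] = v
        self.stamps[i] = self.version
</code></pre>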
<p>I wrote a C++ header-only library called <a href="https://github.com/tomhea/farray" rel="nofollow noreferrer">Farray</a> that features the <strong>F</strong>illable <strong>Array</strong>,<br>
which is a templated implementation of the paper above.</p> | 2020-12-08 14:51:10.803000+00:00 | 2020-12-11 00:21:29.070000+00:00 | 2020-12-11 00:21:29.070000+00:00 | null | 10,005,544 | <p>I encountered the following interview question on the Internet.</p>
<p><strong>Describe a data structure for which getValue(int index), setValue(int index, int value), and setAllValues(int value) are all O(1).</strong></p>
<p>Though an array is good enough for the first and second operations to be performed in O(1), what can be proposed for the third (setAllValues)?</p> | 2012-04-04 05:44:43.483000+00:00 | 2021-08-08 21:14:00.813000+00:00 | 2018-11-11 03:33:20.740000+00:00 | data-structures | ['https://medium.com/p/2ab4c0335e8b', 'https://arxiv.org/abs/1709.08900', 'https://medium.com/p/2ab4c0335e8b', 'https://github.com/tomhea/farray'] | 4
63,159,738 | <p>It can be done with SQL syntax using the <code>sqldf</code> package.</p>
<p>Here is how it works with your example</p>
<pre><code>library(sqldf)
data <- data.frame(id = factor(c(0, 1, 2, 3, 4, 5)),
from_id = factor(c("A", "B", "B", "C", "C", "D")),
to_id = factor(c("B", "C", "E", "B", "D", "F")),
amount = c(200, 185, 50, 170, 40, 38),
                   date_trx = c(0708, 0820, 1019, 0508, 0919, 0713))
</code></pre>
<p>Since all info is from one year, I treated the variable <code>date_trx</code> as month-day numeric, but it could be treated as year-month-day also.</p>
<p>The following query returns what is expected:</p>
<pre><code>sqldf("select a.*, coalesce(b.suspicious, 'N') as suspicious
from data a
left join (select distinct b.from_id, 'Y' as suspicious
from data a
inner join data b
on a.to_id = b.from_id and
a.date_trx < b.date_trx and
b.amount/a.amount > 0.9) b
on a.from_id = b.from_id")
</code></pre>
<p>which returns in the R console</p>
<pre><code> id from_id to_id amount date_trx suspicious
1 0 A B 200 708 N
2 1 B C 185 820 Y
3 2 B E 50 1019 Y
4 3 C B 170 508 N
5 4 C D 40 919 N
6 5 D F 38 713 N
</code></pre>
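<p>For comparison, the same self-join logic can be sketched in Python/pandas (this is an illustration, not part of the original R answer):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "from_id": ["A", "B", "B", "C", "C", "D"],
    "to_id":   ["B", "C", "E", "B", "D", "F"],
    "amount":  [200, 185, 50, 170, 40, 38],
    "date_trx": [708, 820, 1019, 508, 919, 713],
})

# Join each transaction a to the follow-up transactions b sent by a's
# receiver, happening later and moving more than 90% of a's amount.
pairs = df.merge(df, left_on="to_id", right_on="from_id", suffixes=("_a", "_b"))
mask = (pairs["date_trx_a"] < pairs["date_trx_b"]) & \
       (pairs["amount_b"] / pairs["amount_a"] > 0.9)
suspicious = set(pairs.loc[mask, "from_id_b"])
df["suspicious"] = ["Y" if s in suspicious else "N" for s in df["from_id"]]
print(df)
</code></pre>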
<p>It seems that you are working on fraud detection. If that is the case, this approach does not work in real time, but you can look at figure 1 in this paper, <a href="https://arxiv.org/pdf/2002.05988.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2002.05988.pdf</a>, to get an idea of how to implement real-time suspicious-transaction detection.</p> | 2020-07-29 17:58:18.963000+00:00 | 2020-07-29 18:17:11.557000+00:00 | 2020-07-29 18:17:11.557000+00:00 | null | 63,156,688 | <p>This is my transaction data:</p>
<pre><code>
data:
id from_id to_id amount date_trx
<fctr> <fctr> <fctr> <dbl> <date>
0 7468 5695 700.0 2005-01-04
1 6213 9379 11832.0 2005-01-08
2 7517 8170 1000.0 2005-01-10
3 6143 9845 4276.0 2005-01-12
4 6254 9640 200.0 2005-01-14
5 6669 5815 200.0 2005-01-20
6 6934 8583 49752.0 2005-01-24
7 9240 8314 19961.0 2005-01-26
8 6374 8865 1000.0 2005-01-30
9 6143 6530 13.4 2005-01-31
...
</code></pre>
<p>I want to filter data as follows:</p>
<p>If, at any point in the transfer chain, the ratio of the money leaving an account to the money it previously received is above a threshold value, say 0.9 (that is, 1 ≥ (second_transacted_amount / first_transacted_amount) > 0.9), then I want to extract these accounts and save them for later investigation.</p>
<p>For example, here the account "7468" sends 700.0 dollars to the account "5695" on 2005-01-04. After this transaction, let's say "5695" makes a transaction of over 90 percent of 700.0, but not exactly 700.0. Here is another important point: the second transaction should always come after the first, so the date_trx variable also matters in this case. How can I do that for the whole data set (in R)?</p>
<p>We can have a little dataset to test on:</p>
<pre><code>id from_id to_id amount date_trx
<fctr> <fctr> <fctr> <dbl> <date>
0 A B 200.0 2005-07-08
1 B C 185.0 2005-08-20
2 B E 50.0 2005-10-19
3 C B 170.0 2005-05-08
4 C D 40.0 2005-09-19
5 D F 38.0 2005-07-13
</code></pre>
<p>Considering the transactions from or to B,</p>
<pre><code>0 A B 200.0 2005-07-08
1 B C 185.0 2005-08-20
2 B E 50.0 2005-10-19
3 C B 170.0 2005-05-08
</code></pre>
<p>due to the following transaction pair, B should be flagged as suspicious</p>
<pre><code>0 A B 200.0 2005-07-08
1 B C 185.0 2005-08-20
</code></pre>
<p>With the same reasoning, considering the transactions from or to C</p>
<pre><code>1 B C 185.0 2005-08-20
3 C B 170.0 2005-05-08
4 C D 40.0 2005-09-19
</code></pre>
<p>C shouldn't be flagged as suspicious</p>
<p>Flagging can be done based on from_id:</p>
<p>The output could be something similar to this:</p>
<pre><code>id from_id to_id amount date_trx suspicious
<fctr> <fctr> <fctr> <dbl> <date> <fctr>
0 A B 200.0 2005-07-08 N
1 B C 185.0 2005-08-20 Y
2 B E 50.0 2005-10-19 Y
3 C B 170.0 2005-05-08 N
4 C D 40.0 2005-09-19 N
5 D F 38.0 2005-07-13 N
</code></pre> | 2020-07-29 15:07:15.833000+00:00 | 2020-07-29 18:17:11.557000+00:00 | 2020-07-29 16:28:35.723000+00:00 | r|networking|igraph|social-networking|network-analysis | ['https://arxiv.org/pdf/2002.05988.pdf'] | 1 |
66,864,134 | <p>Don't want to steal the thunder, but OttoV's comment actually gave the right command that works for me.</p>
<pre><code>aws s3 ls --request-payer requester s3://arxiv/src/
</code></pre>
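<p>If you are scripting the download, the same flag is available in boto3 (a sketch; the object key is illustrative):</p>
<pre><code>import boto3

s3 = boto3.client("s3")
# RequestPayer tells S3 that you, the requester, accept the charges.
s3.download_file(
    "arxiv", "src/arXiv_src_manifest.xml", "arXiv_src_manifest.xml",
    ExtraArgs={"RequestPayer": "requester"},
)
</code></pre>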
<p>My EC2 is in Region us-east-2, but the arXiv S3 buckets are in Region us-east-1, so the data transfer is not free; since the buckets are configured as Requester Pays, the <code>--request-payer requester</code> flag tells S3 that you accept those charges.</p>
<p>From <a href="https://aws.amazon.com/s3/pricing/?nc=sn&loc=4" rel="nofollow noreferrer">https://aws.amazon.com/s3/pricing/?nc=sn&loc=4</a> :</p>
<blockquote>
<p>You pay for all bandwidth into and out of Amazon S3, except for the following:</p>
<p>• Data transferred in from the internet.</p>
<p>• Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance, when the instance is in the same AWS Region as the S3 bucket (including to a different account in the same AWS region).</p>
<p>• Data transferred out to Amazon CloudFront (CloudFront).</p>
</blockquote> | 2021-03-30 02:50:33.900000+00:00 | 2021-03-30 02:50:33.900000+00:00 | null | null | 28,784,528 | <p>I have been struggling for about a week to download arXiv articles as mentioned here: <a href="http://arxiv.org/help/bulk_data_s3#src" rel="noreferrer">http://arxiv.org/help/bulk_data_s3#src</a>.</p>
<p>I have tried lots of things: <code>s3Browser</code>, <code>s3cmd</code>. I am able to log in to my buckets, but I am unable to download data from the arXiv bucket.</p>
<p>I tried:</p>
<ol>
<li><code>s3cmd get s3://arxiv/pdf/arXiv_pdf_1001_001.tar</code></li>
</ol>
<p>See:</p>
<pre><code>$ s3cmd get s3://arxiv/pdf/arXiv_pdf_1001_001.tar
s3://arxiv/pdf/arXiv_pdf_1001_001.tar -> ./arXiv_pdf_1001_001.tar [1 of 1]
s3://arxiv/pdf/arXiv_pdf_1001_001.tar -> ./arXiv_pdf_1001_001.tar [1 of 1]
ERROR: S3 error: Unknown error
</code></pre>
<ol start="2">
<li><code>s3cmd get</code> with <code>x-amz-request-payer:requester</code></li>
</ol>
<p>It gave me the same error again:</p>
<pre><code>$ s3cmd get --add-header="x-amz-request-payer:requester" s3://arxiv/pdf/arXiv_pdf_manifest.xml
s3://arxiv/pdf/arXiv_pdf_manifest.xml -> ./arXiv_pdf_manifest.xml [1 of 1]
s3://arxiv/pdf/arXiv_pdf_manifest.xml -> ./arXiv_pdf_manifest.xml [1 of 1]
ERROR: S3 error: Unknown error
</code></pre>
<ol start="3">
<li>Copying</li>
</ol>
<p>I have tried copying files from that folder too.</p>
<pre><code>$ aws s3 cp s3://arxiv/pdf/arXiv_pdf_1001_001.tar .
A client error (403) occurred when calling the HeadObject operation: Forbidden
Completed 1 part(s) with ... file(s) remaining
</code></pre>
<p>This probably means that I made a mistake. The problem is I don't know how and what to add that will convey my permission to pay for download.</p>
<p>I am unable to figure out what I should do to download data from S3. I have been reading a lot on the AWS sites, but nowhere can I find a pinpoint solution to my problem.</p>
<p>How can I bulk download the arXiv data?</p> | 2015-02-28 17:14:38.500000+00:00 | 2021-03-30 02:50:33.900000+00:00 | 2015-10-12 11:40:50.803000+00:00 | amazon-web-services|amazon-s3 | ['https://aws.amazon.com/s3/pricing/?nc=sn&loc=4'] | 1 |
29,185,978 | <p>At the bottom of <a href="http://arxiv.org/help/bulk_data_s3" rel="nofollow">this page</a>, arXiv explains that s3cmd gets denied because it does not support access to Requester Pays buckets as a non-owner, and that you have to apply a patch to the s3cmd source code. However, the version of s3cmd they used is outdated, and the patch does not apply to the latest version of s3cmd.</p>
<p>Basically, you need to allow s3cmd to add the "x-amz-request-payer" header to its HTTP requests to buckets. Here is how to fix it:</p>
<ol>
<li>Download the source code of s3cmd.</li>
<li>Open S3/S3.py with a text editor.</li>
<li><p>Add these two lines of code at the bottom of the <code>__init__</code> function:</p>
<pre><code>if self.s3.config.extra_headers:
self.headers.update(self.s3.config.extra_headers)
</code></pre></li>
<li>Install s3cmd as instructed.</li>
</ol> | 2015-03-21 17:49:31.403000+00:00 | 2015-03-21 17:49:31.403000+00:00 | null | null | 28,784,528 | 2015-02-28 17:14:38.500000+00:00 | 2021-03-30 02:50:33.900000+00:00 | 2015-10-12 11:40:50.803000+00:00 | amazon-web-services|amazon-s3 | ['http://arxiv.org/help/bulk_data_s3'] | 1
33,019,940 | <p>Try downloading <code>s3cmd</code> version <code>1.6.0</code>: <a href="http://sourceforge.net/projects/s3tools/files/s3cmd/" rel="noreferrer">http://sourceforge.net/projects/s3tools/files/s3cmd/</a></p>
<pre><code>$ s3cmd --configure
</code></pre>
<p>Enter your credentials found in the account management tab of the Amazon AWS website interface.</p>
<pre><code>$ s3cmd get --recursive --skip-existing s3://arxiv/src/ --requester-pays
</code></pre> | 2015-10-08 15:23:59.873000+00:00 | 2015-10-12 16:49:05.503000+00:00 | 2015-10-12 16:49:05.503000+00:00 | null | 28,784,528 | 2015-02-28 17:14:38.500000+00:00 | 2021-03-30 02:50:33.900000+00:00 | 2015-10-12 11:40:50.803000+00:00 | amazon-web-services|amazon-s3 | ['http://sourceforge.net/projects/s3tools/files/s3cmd/'] | 1
28,800,148 | <p><strong>Requester Pays</strong> is a feature on Amazon S3 buckets that requires the user of the bucket to pay Data Transfer costs associated with accessing data.</p>
<p>Normally, the owner of an S3 bucket pays Data Transfer costs, but this can be expensive for free / Open Source projects. Thus, the bucket owner can activate Requester Pays to reduce the portion of costs they will be charged.</p>
<p>Therefore, when accessing a Requester Pays bucket, you will need to authenticate yourself so that S3 knows whom to charge.</p>
<p>I recommend using the official <strong><a href="http://aws.amazon.com/cli/" rel="nofollow noreferrer">AWS Command-Line Interface (CLI)</a></strong> to access AWS services. You can provide your credentials via:</p>
<pre><code>aws configure
</code></pre>
<p>and then view the bucket via:</p>
<pre><code>aws s3 ls s3://arxiv/pdf/
</code></pre>
<p>and download via:</p>
<pre><code>aws s3 cp s3://arxiv/pdf/arXiv_pdf_1001_001.tar .
</code></pre>
<p><strong>UPDATE:</strong> I just tried the above myself, and received <code>Access Denied</code> error messages (both on the bucket listing and the download command). When using <code>s3cmd</code>, it says <code>ERROR: S3 error: Access Denied</code>. <strong>It would appear that the permissions on the bucket no longer permit access.</strong> You should contact the owners of the bucket to request access.</p> | 2015-03-01 22:46:29.193000+00:00 | 2015-03-01 22:46:29.193000+00:00 | null | null | 28,784,528 | 2015-02-28 17:14:38.500000+00:00 | 2021-03-30 02:50:33.900000+00:00 | 2015-10-12 11:40:50.803000+00:00 | amazon-web-services|amazon-s3 | ['http://aws.amazon.com/cli/'] | 1
24,590,623 | <p>You are seeking an approach for the <em>K</em>-nearest neighbor problem consisting of two steps:</p>
<ol>
<li>Finding the Euclidean distances between the elements;</li>
<li>Finding the first <code>K</code> elements providing the <code>K</code> smallest distances.</li>
</ol>
<p>It seems that such an approach already exists and has been implemented in</p>
<p><a href="http://arxiv.org/pdf/0906.0231.pdf" rel="nofollow">K.Kato and T.Hosino, "Solving <em>k</em>-Nearest Neighbor Problem on Multiple Graphics Processors"</a></p>
<p>and presented at the 2009 GTC Conference as</p>
<p><a href="http://on-demand.gputechconf.com/gtc/2009/presentations/1034-Multi-GPU-Recommendation-System.pdf" rel="nofollow">K.Kato and T.Hosino, "You Might Also Like: A Multi-GPU Recommendation System"</a>.</p>
<p>The approach solves the above two steps by</p>
<ol>
<li>using the classical <em>N</em>-body approach developed in L.Nyland, M.Harris, J.Prins, "Fast <em>N</em>-body simulation with CUDA," In: GPU Gems III. NVIDIA (2007) 677–695 to calculate the Euclidean distances;</li>
<li>using the <a href="http://www.lsi.upc.edu/~conrado/research/talks/aofa04.pdf" rel="nofollow">partial sorting</a> technique, based on a parallel heapsort idea (see the sketch after this list).</li>
</ol>
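<p>To illustrate the partial-selection idea itself (a plain Python sketch; the papers above parallelize this per query point on the GPU), a bounded max-heap of size <em>K</em> finds the <em>K</em> smallest distances without sorting the whole array:</p>
<pre><code>import heapq

def k_smallest(distances, k):
    """Keep a max-heap of size k (values negated, since heapq is a min-heap);
    anything larger than the current k-th smallest is discarded immediately."""
    heap = []
    for d in distances:
        if len(heap) < k:
            heapq.heappush(heap, -d)
        elif d < -heap[0]:          # d beats the largest of the k kept so far
            heapq.heapreplace(heap, -d)
    return sorted(-x for x in heap)

print(k_smallest([9.0, 1.0, 7.0, 3.0, 5.0], k=3))  # [1.0, 3.0, 5.0]
</code></pre>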
<p>Again, as mentioned in my comment above, a better approach avoiding your "brute-force" one would be to use <a href="https://sites.google.com/a/nirmauni.ac.in/cudacodes/cuda-projects/proj-12-gpu-nearest-neighbors-using-a-minimal-kd-tree" rel="nofollow">KD-trees</a>. </p> | 2014-07-05 21:13:19.573000+00:00 | 2014-07-05 21:13:19.573000+00:00 | null | null | 24,579,487 | <p>I have implemented a <em>K</em>-nearest neighbor on the GPU using both pure CUDA and Thrust library function calls.</p>
<p>Euclidean distances are computed with a pure CUDA kernel. Then, Thrust sorting facilities (radix sort) are used to sort the distances in increasing order. Finally, the <em>K</em> first elements (i.e. the <em>K</em> nearest neighbors) are retrieved from the sorted vectors.</p>
<p>My implementation works well. However, sorting the entire euclidean distances matrix (sets can contain more than <code>250000</code> train samples) just to retrieve the <em>K</em>-nn seems non-optimal.</p>
<p>Therefore, I'm searching for a GPU algorithm implementation that allows stopping the sorting computation once the <em>K</em> smallest elements are found, or that performs an efficient <em>K</em>-out-of-<em>N</em> selection. It would indeed be faster for small <em>K</em> than sorting the entire matrix.</p>
<p>If such an implementation is not available, I would also be interested in advice on implementing it efficiently in pure CUDA or Thrust. I was thinking of using a few threads per test sample to look for the <em>K</em> nearest, each thread running over a part of the Euclidean distances. I would maintain a buffer of size <em>K</em> in shared memory. I would run through the distances and insert the <em>K</em>-nn into the shared-memory vector. However, it would require some warp-level synchronization and cause thread divergence.</p>
<p>Thank you for your help.</p> | 2014-07-04 18:41:09.360000+00:00 | 2014-07-05 21:18:47.030000+00:00 | 2014-07-05 21:18:47.030000+00:00 | sorting|cuda|gpu|thrust|knn | ['http://arxiv.org/pdf/0906.0231.pdf', 'http://on-demand.gputechconf.com/gtc/2009/presentations/1034-Multi-GPU-Recommendation-System.pdf', 'http://www.lsi.upc.edu/~conrado/research/talks/aofa04.pdf', 'https://sites.google.com/a/nirmauni.ac.in/cudacodes/cuda-projects/proj-12-gpu-nearest-neighbors-using-a-minimal-kd-tree'] | 4 |
43,981,820 | <p>After some testing, I think I might know the reason for the bad performance of the deeper model. It seems to be an issue with the initialization of the weights, which, I guess, is more critical in a deeper model. I have updated my model to use the weight initializer proposed in the <a href="https://arxiv.org/pdf/1502.01852v1.pdf" rel="nofollow noreferrer">Delving Deep into Rectifiers</a> paper, together with stochastic gradient descent and a learning rate of 0.1. This seems to solve the problem!</p>
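<p>For reference, that initializer is a one-line choice in current TensorFlow/Keras (a sketch; the code in the question uses a custom <code>msra_initializer</code> helper instead):</p>
<pre><code>import tensorflow as tf

# He (MSRA) initialization from "Delving Deep into Rectifiers":
# weights ~ N(0, sqrt(2 / fan_in)), which keeps activation variance
# stable across many stacked ReLU layers.
conv = tf.keras.layers.Conv2D(
    64, kernel_size=7, padding="same", activation="relu",
    kernel_initializer=tf.keras.initializers.HeNormal(),
)
</code></pre>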
<p>Is what I'm thinking correct? Does the weight initialization become this much more critical when using a deeper model?</p> | 2017-05-15 14:17:27.287000+00:00 | 2017-05-15 14:17:27.287000+00:00 | null | null | 43,967,615 | <p>I'm working on an implementation of SegNet in TensorFlow, which I am using to segment aerial images into two classes: "Building" and "Not building". I have a small version of the network, which achieves up to 82% mIoU.</p>
<p>However, I wanted to expand the network by adding multiple convolutional layers, as the original SegNet has, but I can't get it to work.</p>
<p>This is how I implemented the small model that works:</p>
<pre><code>def inference_basic(images, phase_train, batch_size, keep_prob):
conv1 = conv_layer_with_bn(images, [7, 7, images.get_shape().as_list()[3], 64], phase_train, name="conv1")
pool1, pool1_indices = tf.nn.max_pool_with_argmax(conv1, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME', name='pool1')
conv2 = conv_layer_with_bn(pool1, [7, 7, 64, 64], phase_train, name="conv2")
pool2, pool2_indices = tf.nn.max_pool_with_argmax(conv2, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME', name='pool2')
conv3 = conv_layer_with_bn(pool2, [7, 7, 64, 64], phase_train, name="conv3")
pool3, pool3_indices = tf.nn.max_pool_with_argmax(conv3, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME', name='pool3')
conv4 = conv_layer_with_bn(pool3, [7, 7, 64, 64], phase_train, name="conv4")
pool4, pool4_indices = tf.nn.max_pool_with_argmax(conv4, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME', name='pool4')
""" End of encoder """
""" start decoder """
upsample4 = deconv_layer(pool4, [2, 2, 64, 64], [batch_size, FLAGS.image_h//8, FLAGS.image_w//8, 64], 2, "up4")
conv_decode4 = conv_layer_with_bn(upsample4, [7, 7, 64, 64], phase_train, False, name="conv_decode4")
upsample3= deconv_layer(conv_decode4, [2, 2, 64, 64], [batch_size, FLAGS.image_h//4, FLAGS.image_w//4, 64], 2, "up3")
conv_decode3 = conv_layer_with_bn(upsample3, [7, 7, 64, 64], phase_train, False, name="conv_decode3")
upsample2= deconv_layer(conv_decode3, [2, 2, 64, 64], [batch_size, FLAGS.image_h//2, FLAGS.image_w//2, 64], 2, "up2")
conv_decode2 = conv_layer_with_bn(upsample2, [7, 7, 64, 64], phase_train, False, name="conv_decode2")
upsample1= deconv_layer(conv_decode2, [2, 2, 64, 64], [batch_size, FLAGS.image_h, FLAGS.image_w, 64], 2, "up1")
conv_decode1 = conv_layer_with_bn(upsample1, [7, 7, 64, 64], phase_train, False, name="conv_decode1")
""" end of decoder """
""" Start Classify """
with tf.variable_scope('conv_classifier') as scope:
kernel = _variable_with_weight_decay('weights',
shape=[1, 1, 64, FLAGS.num_class],
initializer=msra_initializer(1, 64),
wd=0.0005)
conv = tf.nn.conv2d(conv_decode1, kernel, [1, 1, 1, 1], padding='SAME')
biases = _variable_on_cpu('biases', [FLAGS.num_class], tf.constant_initializer(0.0))
conv_classifier = tf.nn.bias_add(conv, biases, name=scope.name)
return conv_classifier
</code></pre>
<p>And this is the extended model, that gets really bad results:</p>
<pre><code>def inference(images, phase_train, batch_size):
conv1_1 = conv_layer_with_bn(images, [7, 7, images.get_shape().as_list()[3], 64], phase_train, name="conv1_1")
conv1_2 = conv_layer_with_bn(conv1_1, [7, 7, 64, 64], phase_train, name="conv1_2")
pool1, pool1_indices = tf.nn.max_pool_with_argmax(conv1_2, ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1], padding='SAME', name='pool1')
conv2_1 = conv_layer_with_bn(pool1, [7, 7, 64, 64], phase_train, name="conv2_1")
conv2_2 = conv_layer_with_bn(conv2_1, [7, 7, 64, 64], phase_train, name="conv2_2")
pool2, pool2_indices = tf.nn.max_pool_with_argmax(conv2_2, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME', name='pool2')
conv3_1 = conv_layer_with_bn(pool2, [7, 7, 64, 64], phase_train, name="conv3_1")
conv3_2 = conv_layer_with_bn(conv3_1, [7, 7, 64, 64], phase_train, name="conv3_2")
conv3_3 = conv_layer_with_bn(conv3_2, [7, 7, 64, 64], phase_train, name="conv3_3")
pool3, pool3_indices = tf.nn.max_pool_with_argmax(conv3_3, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME', name='pool3')
conv4_1 = conv_layer_with_bn(pool3, [7, 7, 64, 64], phase_train, name="conv4_1")
conv4_2 = conv_layer_with_bn(conv4_1, [7, 7, 64, 64], phase_train, name="conv4_2")
conv4_3 = conv_layer_with_bn(conv4_2, [7, 7, 64, 64], phase_train, name="conv4_3")
pool4, pool4_indices = tf.nn.max_pool_with_argmax(conv4_3, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME', name='pool4')
conv5_1 = conv_layer_with_bn(pool4, [7, 7, 64, 64], phase_train, name="conv5_1")
conv5_2 = conv_layer_with_bn(conv5_1, [7, 7, 64, 64], phase_train, name="conv5_2")
conv5_3 = conv_layer_with_bn(conv5_2, [7, 7, 64, 64], phase_train, name="conv5_3")
pool5, pool5_indices = tf.nn.max_pool_with_argmax(conv5_3, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME', name='pool5')
""" End of encoder """
""" Start decoder """
upsample5 = deconv_layer(pool5, [2, 2, 64, 64], [batch_size, FLAGS.image_h//16, FLAGS.image_w//16, 64], 2, "up5")
conv_decode5_1 = conv_layer_with_bn(upsample5, [7, 7, 64, 64], phase_train, True, name="conv_decode5_1")
conv_decode5_2 = conv_layer_with_bn(conv_decode5_1, [7, 7, 64, 64], phase_train, True, name="conv_decode5_2")
conv_decode5_3 = conv_layer_with_bn(conv_decode5_2, [7, 7, 64, 64], phase_train, True, name="conv_decode5_3")
upsample4 = deconv_layer(conv_decode5_3, [2, 2, 64, 64], [batch_size, FLAGS.image_h//8, FLAGS.image_w//8, 64], 2, "up4")
conv_decode4_1 = conv_layer_with_bn(upsample4, [7, 7, 64, 64], phase_train, True, name="conv_decode4_1")
conv_decode4_2 = conv_layer_with_bn(conv_decode4_1, [7, 7, 64, 64], phase_train, True, name="conv_decode4_2")
conv_decode4_3 = conv_layer_with_bn(conv_decode4_2, [7, 7, 64, 64], phase_train, True, name="conv_decode4_3")
upsample3 = deconv_layer(conv_decode4_3, [2, 2, 64, 64], [batch_size, FLAGS.image_h//4, FLAGS.image_w//4, 64], 2, "up3")
conv_decode3_1 = conv_layer_with_bn(upsample3, [7, 7, 64, 64], phase_train, True, name="conv_decode3_1")
conv_decode3_2 = conv_layer_with_bn(conv_decode3_1, [7, 7, 64, 64], phase_train, True, name="conv_decode3_2")
conv_decode3_3 = conv_layer_with_bn(conv_decode3_2, [7, 7, 64, 64], phase_train, True, name="conv_decode3_3")
upsample2= deconv_layer(conv_decode3_3, [2, 2, 64, 64], [batch_size, FLAGS.image_h//2, FLAGS.image_w//2, 64], 2, "up2")
conv_decode2_1 = conv_layer_with_bn(upsample2, [7, 7, 64, 64], phase_train, True, name="conv_decode2_1")
conv_decode2_2 = conv_layer_with_bn(conv_decode2_1, [7, 7, 64, 64], phase_train, True, name="conv_decode2_2")
upsample1 = deconv_layer(conv_decode2_2, [2, 2, 64, 64], [batch_size, FLAGS.image_h, FLAGS.image_w, 64], 2, "up1")
conv_decode1_1 = conv_layer_with_bn(upsample1, [7, 7, 64, 64], phase_train, True, name="conv_decode1_1")
conv_decode1_2 = conv_layer_with_bn(conv_decode1_1, [7, 7, 64, 64], phase_train, True, name="conv_decode1_2")
""" End of decoder """
""" Start Classify """
# output predicted class number
with tf.variable_scope('conv_classifier') as scope: #all variables prefixed with "conv_classifier/"
kernel = _variable_with_weight_decay('weights',
shape=[1, 1, 64, FLAGS.num_class],
initializer=msra_initializer(1, 64),
wd=0.0005)
conv = tf.nn.conv2d(conv_decode1_2, kernel, [1, 1, 1, 1], padding='SAME')
biases = _variable_on_cpu('biases', [FLAGS.num_class], tf.constant_initializer(0.0))
conv_classifier = tf.nn.bias_add(conv, biases, name=scope.name)
#logit = conv_classifier = prediction
return conv_classifier
</code></pre>
<p>Convolutional layer:</p>
<pre><code>def conv_layer_with_bn(inputT, shape, train_phase, activation=True, name=None):
in_channel = shape[2]
out_channel = shape[3]
k_size = shape[0]
with tf.variable_scope(name) as scope:
kernel = _variable_with_weight_decay('weights',
shape=shape,
initializer=msra_initializer(k_size, in_channel),
wd=None)
conv = tf.nn.conv2d(inputT, kernel, [1, 1, 1, 1], padding='SAME')
biases = _variable_on_cpu('biases', [out_channel], tf.constant_initializer(0.0))
bias = tf.nn.bias_add(conv, biases)
if activation is True:
conv_out = tf.nn.relu(batch_norm_layer(bias, train_phase, scope.name))
else:
conv_out = batch_norm_layer(bias, train_phase, scope.name)
return conv_out
def batch_norm_layer(inputT, is_training, scope):
"""Used in conv_layer_with_bn()"""
return tf.cond(is_training,
lambda: tf.contrib.layers.batch_norm(inputT, is_training=True,
center=False, updates_collections=None, scope=scope+"_bn"),
lambda: tf.contrib.layers.batch_norm(inputT, is_training=False,
updates_collections=None, center=False, scope=scope+"_bn", reuse = True))
</code></pre>
<p>The extended model gets around 10% mIoU because all the pixels in the images get classified into the "Not building" class. Can anyone help me understand why this is happening? I have looked at the <a href="https://github.com/alexgkendall/SegNet-Tutorial/blob/master/Models/segnet_inference.prototxt" rel="nofollow noreferrer">Caffe implementation</a> of SegNet, and I can't see the difference between the two implementations.</p> | 2017-05-14 18:46:41.673000+00:00 | 2017-05-15 14:17:27.287000+00:00 | null | tensorflow|neural-network|deep-learning|image-segmentation|encoder-decoder | ['https://arxiv.org/pdf/1502.01852v1.pdf'] | 1
30,280,141 | <p>There are multiple libraries and extensions available for this task.</p>
<p><strong>Extensions</strong></p>
<p>Stem</p>
<ul>
<li><a href="https://pecl.php.net/package/stem" rel="nofollow">https://pecl.php.net/package/stem</a></li>
<li><a href="http://snowball.tartarus.org/" rel="nofollow">http://snowball.tartarus.org/</a> (<a href="http://snowball.tartarus.org/demo.php" rel="nofollow">DEMO</a>)</li>
</ul>
<p>php-stemmer</p>
<p><a href="https://github.com/hthetiot/php-stemmer" rel="nofollow">https://github.com/hthetiot/php-stemmer</a></p>
<p><strong>Libraries</strong></p>
<p>These Porter-Stemmer libs will also do the job (at least for the English-language parts):</p>
<ul>
<li><a href="https://github.com/andyceo/PHP-Porter-Stemmer" rel="nofollow">https://github.com/andyceo/PHP-Porter-Stemmer</a></li>
<li><a href="http://www.chuggnutt.com/stemmer" rel="nofollow">http://www.chuggnutt.com/stemmer</a></li>
</ul>
<p>phpMorphy</p>
<p><a href="http://phpmorphy.sourceforge.net/dokuwiki/" rel="nofollow">http://phpmorphy.sourceforge.net/dokuwiki/</a></p>
<p>--</p>
<p>Urdu is a mixed language, so "basic" Porter stemming will not be enough (and might only be sufficient for the English-language parts of Urdu). You will have to model the language rules. The Urdu language is really challenging for NLP because of its rich morphology.</p>
<p>If you want to implement a rule-based stemmer, then take a look at this paper, which explains the algorithm used: "<a href="http://arxiv.org/ftp/arxiv/papers/1310/1310.0581.pdf" rel="nofollow">Rule Based Stemmer in Urdu</a>" by Vaishali Gupta, Nisheeth Joshi, and Iti Mathur.</p> | 2015-05-16 19:56:07.953000+00:00 | 2015-05-17 14:23:45.733000+00:00 | 2015-05-17 14:23:45.733000+00:00 | null | 30,280,006 | <p>There are lots of stemming libraries, but they are in other languages; I need an API, library, or algorithm that can be used for stemming Urdu words. I want to find the root of a word, like in English, e.g.</p>
<blockquote>
<p>sadness => sad</p>
</blockquote> | 2015-05-16 19:41:34.693000+00:00 | 2015-05-17 14:36:14.377000+00:00 | 2015-05-17 14:36:14.377000+00:00 | php|api | ['https://pecl.php.net/package/stem', 'http://snowball.tartarus.org/', 'http://snowball.tartarus.org/demo.php', 'https://github.com/hthetiot/php-stemmer', 'https://github.com/andyceo/PHP-Porter-Stemmer', 'http://www.chuggnutt.com/stemmer', 'http://phpmorphy.sourceforge.net/dokuwiki/', 'http://arxiv.org/ftp/arxiv/papers/1310/1310.0581.pdf'] | 8 |
63,118,955 | <p>A few methods have been developed to process images with multiple sizes, including images with unequal horizontal and vertical dimensions. For example, <a href="https://arxiv.org/abs/1406.4729" rel="nofollow noreferrer">spatial pyramid pooling</a> or <a href="https://arxiv.org/abs/1803.09218" rel="nofollow noreferrer">scale recurrent neural networks</a>. You could also set network <a href="https://github.com/keras-team/keras/issues/1920" rel="nofollow noreferrer">dimensions to be variable</a>, then use a pooling operation (e.g. global average pooling) to get fixed-size dimensions before fully connected or other layers that need a fixed size.</p>
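<p>A minimal sketch of the variable-input idea in Keras (spatial dimensions left as <code>None</code> accept any height and width):</p>
<pre><code>from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(None, None, 3)),   # any H x W, 3 channels
    layers.Conv2D(32, 3, activation="relu"),
    # Global average pooling collapses any spatial size to 32 features,
    # so the Dense classifier below always sees a fixed-size vector.
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),
])
model.summary()
</code></pre>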
<p>The simplest approach is cropping or padding images (e.g. with zeros) so that they are all the same size.</p> | 2020-07-27 15:55:21.217000+00:00 | 2020-07-27 22:05:28.250000+00:00 | 2020-07-27 22:05:28.250000+00:00 | null | 63,118,141 | <p>I want to use Tensorflow and Keras to train a dataset composed of images with very different sizes in order to classify them. But some of them are horizontal (1400x100) and some of them are vertical (100x1000).
As far as I understand, Keras accepts same size input images.
I'm not sure if it's wise to convert all of them to a classical resolution like 150x150 or 180x180 since they are horizontal and vertical.</p>
<p>How can I solve this problem?</p> | 2020-07-27 15:05:51.307000+00:00 | 2020-07-27 22:05:28.250000+00:00 | 2020-07-27 15:22:25.897000+00:00 | python|tensorflow|keras | ['https://arxiv.org/abs/1406.4729', 'https://arxiv.org/abs/1803.09218', 'https://github.com/keras-team/keras/issues/1920'] | 3 |
46,812,931 | <p>It is not that simple. For example, in later stages the variable could be reduced to 0.</p>
<p>I'd have a look at <a href="https://arxiv.org/abs/1606.05386" rel="noreferrer">LIME</a> (Local Interpretable Model-Agnostic Explanations). The basic idea is to set some inputs to zero, pass the input through the model, and see if the result is similar. If it is, then that variable might not be that important. There is more to it, though, and if you want to know the details, you should read the paper.</p>
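<p>A minimal usage sketch with the lime package (assuming <code>X_train</code>, <code>feature_names</code>, and a fitted Keras <code>model</code> whose <code>predict</code> returns class probabilities already exist):</p>
<pre><code>from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                 mode="classification")
explanation = explainer.explain_instance(X_train[0], model.predict,
                                         num_features=5)
print(explanation.as_list())  # top features with their local weights
</code></pre>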
<p>See <a href="https://github.com/marcotcr/lime" rel="noreferrer">marcotcr/lime</a> on GitHub.</p> | 2017-10-18 14:43:29.850000+00:00 | 2017-10-18 14:43:29.850000+00:00 | null | null | 44,119,207 | <p>I am looking for a proper or best way to get variable importance in a Neural Network created with Keras. The way I currently do it is I just take the weights (not the biases) of the variables in the first layer with the assumption that more important variables will have higher weights in the first layer. Is there another/better way of doing it?</p> | 2017-05-22 17:54:26.430000+00:00 | 2020-10-09 18:29:17.450000+00:00 | 2017-10-18 14:40:38.120000+00:00 | tensorflow|deep-learning|keras|keras-layer|keras-2 | ['https://arxiv.org/abs/1606.05386', 'https://github.com/marcotcr/lime'] | 2 |
41,976,515 | <p>Apparently, <code>float128</code> is much slower than <code>double</code>: up to 100x slower, though the precise slowdown depends on the operations, of course.</p>
<p><a href="https://www.nsc.liu.se/~pla/blog/2013/10/17/quadruple-precision/" rel="nofollow noreferrer">Multiplying two 256x256 matrices</a> is 100x slower.</p>
<p>The paper <a href="http://perso.ens-lyon.fr/philippe.theveny/benchs/binary128_emu.pdf" rel="nofollow noreferrer">Benchmark of an MPFR emulation of Binary128 arithmetic</a> cites 250x slowdown for summation, 120x slowdown for product, 400x slowdown for dot product.</p>
<p>Another paper <a href="https://arxiv.org/pdf/1412.5316.pdf" rel="nofollow noreferrer">Twofolds in C and C++</a> says <code>__float128</code> summation is 150x (25x times 6x) slower than <code>double</code> summation.</p> | 2017-02-01 09:46:52.687000+00:00 | 2017-02-01 09:46:52.687000+00:00 | null | null | 27,771,354 | <p>Does anyone have experience with the <code>float128</code> type?</p>
<p>I would like to know about its performance compared with <code>double</code> and other high-precision types such as <code>boost::multiprecision::cpp_dec_float</code>. Are there any existing benchmarks?</p> | 2015-01-04 23:06:21.040000+00:00 | 2017-02-01 09:46:52.687000+00:00 | 2015-01-04 23:24:42.343000+00:00 | floating-point|performance-testing|arbitrary-precision | ['https://www.nsc.liu.se/~pla/blog/2013/10/17/quadruple-precision/', 'http://perso.ens-lyon.fr/philippe.theveny/benchs/binary128_emu.pdf', 'https://arxiv.org/pdf/1412.5316.pdf'] | 3
41,390,991 | <p>Take a look at the work done by Rodrigo Benenson.</p>
<p><a href="https://arxiv.org/abs/1602.01237" rel="nofollow noreferrer">How Far are We from Solving Pedestrian Detection?</a></p>
<p><a href="http://rodrigob.github.io/documents/2014_eccvw_ten_years_of_pedestrian_detection_with_supplementary_material.pdf" rel="nofollow noreferrer">Ten Years of pedestrian detection</a></p>
<p>It's a really good starting point for pedestrian detection and for understanding the different approaches that have been used in the last decade.</p> | 2016-12-30 05:18:10.833000+00:00 | 2016-12-30 05:18:10.833000+00:00 | null | null | 41,377,787 | <p>I am working on an application where I need to detect and track people in a crowded indoor area (like a mall). Right now I am using the OpenCV Background Subtraction class (MOG2) to detect blobs, plus a Kalman filter and the Hungarian algorithm for tracking (based on <a href="https://www.youtube.com/watch?v=2fW5TmAtAXM" rel="nofollow noreferrer">this video</a>).</p>
<p>The issues I'm having are: </p>
<ol>
<li>The blobs merging together when two people come close to each other </li>
<li>Parts of the person not getting detected which leads to false and multiple detections on a person </li>
<li>The background subtraction itself results in too many false detections.</li>
</ol>
<p>I would like your suggestions for improving this, and any solutions to these problems. Is there an alternative way to detect humans? I am not using HOG because I didn't get detections unless the person's entire body was in the frame, and it resulted in false detections as well.</p>
<p>Thanks in advance!</p>
<p>BTW, I'm using OpenCV 3.1, C++.</p>
<p>edit:</p>
<p>This is what I mean by false detections with HOG:</p>
<p><a href="https://i.stack.imgur.com/UQlnw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UQlnw.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/9pYpx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9pYpx.png" alt="Person not detected when only half is body is present in the frame"></a></p>
<p><a href="https://i.stack.imgur.com/DDkMp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DDkMp.png" alt="enter image description here"></a></p> | 2016-12-29 10:29:28.923000+00:00 | 2016-12-30 08:04:06.857000+00:00 | 2016-12-30 08:04:06.857000+00:00 | opencv|image-processing|computer-vision|blob|background-subtraction | ['https://arxiv.org/abs/1602.01237', 'http://rodrigob.github.io/documents/2014_eccvw_ten_years_of_pedestrian_detection_with_supplementary_material.pdf'] | 2 |
5,559,092 | <p>The common feature of your examples is the representation of some (mathematical) object by <em>functions</em>. This is common in functional languages, but not as practical as in mathematics because functions in programs are used extensionally (you cannot inspect their definitions, only observe their actions on arguments), and only with computable operations (you can only observe a finite number of arguments).</p>
<p>In mathematics, you don't bother with such stuff, for example you very often say "if f is analytic, then let (a_n) be its sequence of coefficients, and...". In a computer language, if you start with a function of type <code>Double -> Integer -> [Double]</code>, it will be painful to convert it to a representation where you can easily recover the coefficients. In programming languages, functions <em>really</em> are black boxes.</p>
<p>For this reason, programmers often try to use explicit data representations instead of function black boxes. You can easily obtain a function from a data representation (it's a kind of evaluation or interpretation), while the other way around can be more difficult, less efficient, etc. See Conal Elliott's <a href="http://conal.net/blog/posts/everything-is-a-function-in-haskell/" rel="noreferrer">“Everything is a function” in Haskell?</a>.</p>
<p>Functions are however still used in cases where we really want extensional objects, that can only be observed instead of inspected. For each possible observation on the object you want to define, you give a function that realize this observation. In your example, you only have one function because you only have one observation. This is the core idea of Object Oriented Programming as defined by William Cook in his <a href="http://lambda-the-ultimate.org/node/3668" rel="noreferrer">On Understanding Data Abstraction, Revisited</a> paper.</p>
<p>I think the reason why you relate this to the term "duality" (a term that is, in the Haskell intelligentsia, rather related to category-theoretic concepts) is that the shift from an object to some particular form of observation of it is sometimes called duality in maths, and has the effect of adding functions everywhere. For example, there is the classical example of the dual of a vector space, and in particular the bidual construction which is really a conversion from a vector to its observations by linear functions : you switch from <code>V</code> to <code>(V -> K) -> K</code>, for <code>K</code> the field underlying your vector space.</p>
<p>(Would one think of continuations when reading my last example? Of course they are related, as this representation of continuations is really an "observation" of concrete evaluation contexts, represented by their action on values.)</p>
<p>Your representation of probability measures is actually used to define probability measure monads in functional languages. There are different ways to define probability monads. See for example <a href="http://www.cs.tufts.edu/~nr/pubs/pmonad-abstract.html" rel="noreferrer">http://www.cs.tufts.edu/~nr/pubs/pmonad-abstract.html</a> by Norman Ramsey and Avi Pfeffer. Most real-world implementation of probability DSL however use a more concrete representation such as a <code>[(prob,event)]</code> list of pair (Haskell <a href="http://www.haskell.org/haskellwiki/Probabilistic_Functional_Programming" rel="noreferrer">probability</a> library and OCaml <a href="http://okmij.org/ftp/kakuritu/" rel="noreferrer">HANSEI</a>).</p>
<p>Finally, for an example of the representation of real numbers as functions, see Russell O'Connor's <a href="http://arxiv.org/abs/cs.NA/0605058" rel="noreferrer">A Monadic, Functional Implementation of Real Numbers</a>. Numerous representations of "computable" numbers exist and have different merits, and most of them are based on sequences and may therefore be represented as <code>Integer -> ...</code> functions.</p> | 2011-04-05 22:01:29.977000+00:00 | 2015-06-19 21:33:43.003000+00:00 | 2015-06-19 21:33:43.003000+00:00 | null | 5,557,810 | <p>I'd like to know what kind of <strong>real life</strong> problems can be tackled by "duality methods" in functional programming. More precisely, I'd like to know whether someone has actually used a duality method like the ones I present below, or whether there are other interesting examples. I'd be particularly interested in <strong>existing implementations</strong>, probably in Haskell.</p>
<p><em>[Since the majority of people who will be interested in this question likely know Haskell, let me please add the Haskell tag, even if the question is quite language-independent]</em></p>
<p>Let me explain what I mean by duality (for lack of a better name) through a few examples. The first one is the <strong>real numbers</strong>. Assume the existence of an <code>Integer</code> and a <code>Rational</code> type, and define a real number as a function (pardon my Haskell, I'm no hardcore haskeller)</p>
<pre><code>type Real = Integer -> Rational
</code></pre>
<p>such that whenever <code>x :: Real</code> denotes the real number x, <code>x n</code> yields a rational number which is within <code>2^(-n)</code> of x.</p>
<p>Now one can do</p>
<pre><code>(+) :: Real -> Real -> Real
(+) x y n = (x $ n + 1) + (y $ n + 1)
</code></pre>
<p>or likewise for other arithmetic operations. Given a continuous real function f, one can also compute <code>f x</code> as soon as one can compute a <a href="http://en.wikipedia.org/wiki/Modulus_of_continuity" rel="nofollow noreferrer">modulus of continuity</a> for <code>f</code>.</p>
<p>This has the advantage that one can write natural looking code, and at the end, get a result at the desired level of precision automatically. However, it is no longer possible to compare real numbers for equality. The only kind of comparison possible between <code>x</code> and <code>y</code> is <code>x < y + eps</code>.</p>
<p>Another example of duality is <a href="https://stackoverflow.com/questions/395981">this question on <strong>probability measures</strong></a>, which triggered the present question in my head. Let us write</p>
<pre><code>type Measure a = (a -> Double) -> Double
</code></pre>
<p>and define measures as integration procedures against functions. In the linked question, I show how natural it is in this framework to express concepts like convolution or pushforward which are much more difficult (computationally, but also theoretically) to define at the level of probability <em>densities</em>.</p>
<p>It allows one to compose building blocks from probability theory, and in principle allows one to build complex Monte Carlo procedures, and even allows one to work with explicit probability densities (at the expense of numerical integration). I'd be especially interested in any attempt at a real world library on this topic.</p>
<p>Another example that I have in mind, but did not quite formalize yet is the notion of <strong>vector fields</strong> (from differential geometry), that one can express as <em>differentiation operators</em>. For this, one needs a suitable type of "smooth real valued functions", and then a vector field is like this:</p>
<pre><code>type VectorField = SmoothFunction -> SmoothFunction
</code></pre>
<p>such that <code>v (f * g) = f * (v g) + g * (v f)</code>.</p>
<p>Of course, describing a sheaf of regular functions in, say, Haskell would not be easy. But by doing that, we could express all the machinery of differential geometry in a totally coordinate-independent way, and plug in coordinates at the very end.</p>
<p>There are other examples, eg. <strong>Taylor series</strong> have been discussed in Sigfpe's blog (I can't find this particular post though), where an analytic function is the following type:</p>
<pre><code>type AnalyticFunction = Double -> Integer -> [Double]
</code></pre>
<p>and where <code>f x n</code> returns the first <code>n</code> partial sums of the Taylor expansion of <code>f</code> around <code>x</code>. This allows us to seamlessly write all kinds of arithmetic on analytic functions, including stuff like <code>f / g</code> where <code>f</code> and <code>g</code> both can vanish at a point (along with some of their derivatives), or even <code>f^(-1)</code> (provided <code>f'</code> does not vanish). At the end, only the necessary terms of the intermediate series are computed to yield the value of a given expression.</p> | 2011-04-05 20:05:21.067000+00:00 | 2015-06-19 21:33:43.003000+00:00 | 2017-05-23 12:07:57.733000+00:00 | math|haskell|functional-programming | ['http://conal.net/blog/posts/everything-is-a-function-in-haskell/', 'http://lambda-the-ultimate.org/node/3668', 'http://www.cs.tufts.edu/~nr/pubs/pmonad-abstract.html', 'http://www.haskell.org/haskellwiki/Probabilistic_Functional_Programming', 'http://okmij.org/ftp/kakuritu/', 'http://arxiv.org/abs/cs.NA/0605058'] | 6
65,055,834 | <p>This problem is a well-known one in <a href="https://en.wikipedia.org/wiki/Computational_geometry" rel="nofollow noreferrer">Computational Geometry</a>. A simplified version of this problem (without a query point) is briefly described <a href="https://en.wikipedia.org/wiki/Largest_empty_rectangle" rel="nofollow noreferrer">here</a>. The problem <em>with</em> query point can be formulated in the following way:</p>
<blockquote>
<p>Let P be a set of n points in a fixed axis-parallel rectangle B in the plane. A P-empty rectangle (or just an empty rectangle for short) is any axis-parallel rectangle that is contained in
B and its interior does not contain any point of P. We consider the problem of preprocessing
P into a data structure so that, given a query point q, we can efficiently find the largest-area
P-empty rectangle containing q.</p>
</blockquote>
<p>The paragraph above has been copied from <a href="https://arxiv.org/abs/1106.3628" rel="nofollow noreferrer">this</a> paper, where the authors describe an algorithm and a data structure for a set of <code>N</code> points in the plane, which allows one to find a maximal empty rectangle for any query point in <code>O(log^4(N))</code> time. Sorry to say, it's a theoretical paper, which doesn't contain any implementation details.</p> | 2020-11-29 00:31:10.337000+00:00 | 2020-11-30 17:18:14.923000+00:00 | 2020-11-30 17:18:14.923000+00:00 | null | 64,981,923 | <h1>Problem</h1>
<p>Given an occupancy grid, for example:</p>
<pre><code>...................*
*...............*...
*..*.............*..
...........*........
....................
..*.......X.........
............*.*.*...
....*..........*....
...*........*.......
..............*.....
</code></pre>
<p>Where, <code>*</code> represents an occupied block, <code>.</code> represents a free block and <code>X</code> represents a point (or block) of interest, what is the most time-efficient algorithm to find the <strong>largest</strong> rectangle which includes <code>X</code>, but does not include any obstacles, i.e. any <code>*</code>?</p>
<p>For example, the solution to the provided grid would be:</p>
<pre><code>.....######........*
*....######.....*...
*..*.######......*..
.....######*........
.....######.........
..*..#####X.........
.....######.*.*.*...
....*######....*....
...*.######.*.......
.....######...*.....
</code></pre>
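<p>(For sanity-checking any candidate algorithm on small grids, a brute-force reference is handy; a toy Python sketch, where <code>grid</code> is a list of strings and <code>(qr, qc)</code> is the cell of <code>X</code>:)</p>
<pre><code>def largest_rect_containing(grid, qr, qc):
    # brute force: try every rectangle that contains (qr, qc)
    R, C = len(grid), len(grid[0])
    best, best_area = None, 0
    for r1 in range(qr + 1):
        for r2 in range(qr, R):
            for c1 in range(qc + 1):
                for c2 in range(qc, C):
                    cells = [grid[r][c] for r in range(r1, r2 + 1)
                                        for c in range(c1, c2 + 1)]
                    if '*' not in cells and len(cells) > best_area:
                        best_area, best = len(cells), (r1, c1, r2, c2)
    return best  # (top, left, bottom, right), or None
</code></pre>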
<h1>My Thoughts</h1>
<p>Given we have a known starting point <code>X</code>, I can't help but think there must be a straightforward solution to "snap" lines to the outer boundaries to create the largest rectangle.</p>
<p>My current thinking is to snap lines to the maximum position offsets (i.e. go to the next row or column until you encounter an obstacle) in a cyclic manner. E.g. you propagate a horizontal line from the point <code>X</code> down until there is an obstacle along that line, then you propagate a vertical line left until you encounter an obstacle, then a horizontal line up and a vertical line right. You repeat this starting with each of the four moving lines to get four rectangles, and then you select the rectangle with the largest area. However, I do not know whether this is optimal, nor whether it is the quickest approach.</p> | 2020-11-24 07:26:33.717000+00:00 | 2020-11-30 17:18:14.923000+00:00 | 2020-11-29 00:36:51.017000+00:00 | algorithm|complexity-theory|computational-geometry | ['https://en.wikipedia.org/wiki/Computational_geometry', 'https://en.wikipedia.org/wiki/Largest_empty_rectangle', 'https://arxiv.org/abs/1106.3628'] | 3
59,879,196 | <p>Here is a <a href="https://arxiv.org/pdf/1904.10674.pdf" rel="nofollow noreferrer">survey on DL algorithms</a> used to classify hyperspectral data.</p>
<p>Since you have data of varying size, you will have to create patches; you won't be able to feed inputs of different sizes directly.</p>
<p>For example, you could feed patches of <code>(16, 16, 220)</code> to your network; a minimal extraction sketch follows.</p>
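<p>A minimal sketch of such patch extraction (my own illustration, assuming the samples are loaded as NumPy arrays):</p>
<pre><code>import numpy as np

def extract_patches(cube, size=16):
    # cube: (H, W, 220) hyperspectral array -> (size, size, 220) patches
    H, W, _ = cube.shape
    for i in range(0, H - size + 1, size):
        for j in range(0, W - size + 1, size):
            yield cube[i:i + size, j:j + size, :]
</code></pre>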
<p>I worked on a CNN with multispectral images; I had fewer bands than you have. The patch size was obviously important; I used a U-Net for image segmentation.</p>
<p>Edit with an example using<code>(None, None, 220)</code> as input :</p>
<pre class="lang-py prettyprint-override"><code>model = Sequential()
# this applies 32 convolution filters of size 3x3 each.
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(None, None, 220)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# model.add(Flatten())
# Replace flatten by GlobalPooling example :
model.add(GlobalMaxPooling2D())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
adam = Adam(lr=1e-4)
model.compile(loss='categorical_crossentropy', optimizer=adam)
</code></pre> | 2020-01-23 13:00:49.117000+00:00 | 2020-01-24 09:15:19.013000+00:00 | 2020-01-24 09:15:19.013000+00:00 | null | 59,878,530 | <p>I am looking for an approach to train on hyperspectral image data in TensorFlow.
The training sample is encoded in CSV and has an arbitrary x-y dimension but constant depth:
The data looks like this:</p>
<p><em>Sample1.csv</em>: <strong>50x4x220</strong> (Row 1-50 is supposed to be aligned with row 51-100, 101-150, and 151-200)<br/>
<em>Sample2.csv</em>: <strong>18x71x220</strong> (Row 1-18 is supposed to be aligned with row 19-36, etc.)<br/>
<em>Sample3.csv</em>: <strong>33x41x220</strong> (same as above)<br/>
....<br/>
<em>Sample100.csv</em>: <strong>15x8x220</strong> (same as above)<br/></p>
<p>Is there any project example that I can use? Thanks in advance.</p> | 2020-01-23 12:24:03.320000+00:00 | 2020-01-24 09:15:19.013000+00:00 | null | csv|dataframe|tensorflow|multidimensional-array|keras | ['https://arxiv.org/pdf/1904.10674.pdf'] | 1 |
68,584,914 | <p>Objectness is a binary cross entropy loss term over 2 classes (object/not object) associated with each anchor box in the first stage (RPN), and the classification loss is a normal cross-entropy term over C classes. Both the first-stage region proposals and the second-stage bounding boxes are also penalized with a smooth L1 loss term.</p>
<p>It should also be noted that the authors train the first and second stage alternately, since both rely on the same features computed with the convolutional layers + FPN; this aids training convergence.</p>
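<p>To make that concrete, a rough sketch of the loss terms in PyTorch (illustrative only; not torchvision's internal code):</p>
<pre><code>import torch.nn.functional as F

def rpn_losses(obj_logits, obj_targets, box_deltas, box_targets):
    # first stage: objectness is BCE over object / not-object anchors,
    # plus smooth L1 on the proposal box regression deltas
    obj_loss = F.binary_cross_entropy_with_logits(obj_logits, obj_targets)
    box_loss = F.smooth_l1_loss(box_deltas, box_targets)
    return obj_loss + box_loss

def roi_losses(cls_logits, labels, box_deltas, box_targets):
    # second stage: C-way cross entropy, plus smooth L1 on the boxes
    cls_loss = F.cross_entropy(cls_logits, labels)
    box_loss = F.smooth_l1_loss(box_deltas, box_targets)
    return cls_loss + box_loss
</code></pre>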
<p>Not a very clear description? I'd recommend reading the original <a href="https://arxiv.org/pdf/1506.01497.pdf" rel="nofollow noreferrer">Faster-RCNN paper</a> as it is pretty foundational and will probably do a better job describing the loss terms than me.</p> | 2021-07-30 02:54:49.870000+00:00 | 2021-07-30 10:42:13.863000+00:00 | 2021-07-30 10:42:13.863000+00:00 | null | 68,584,185 | <p>I am using a pretrained model from this tutorial. <a href="https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#defining-your-model" rel="nofollow noreferrer">https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#defining-your-model</a></p>
<p>The model is PyTorch's Faster R-CNN ResNet-50 FPN model. Does anyone know what the classification, box regression, and objectness loss functions are (i.e. cross entropy or something else)? Thanks in advance,
Sriram A.</p> | 2021-07-30 00:37:40.480000+00:00 | 2021-07-30 10:42:13.863000+00:00 | null | python|pytorch|loss-function|resnet|faster-rcnn | ['https://arxiv.org/pdf/1506.01497.pdf'] | 1 |
49,558,286 | <p><strong>Preface</strong>: What you have implemented is, in an important way, <strong>a lot better</strong> than the built-in construct you mentioned. I will discuss this point in more detail below.</p>
<p>Regarding the <strong>literal</strong> question: I think you are <em>quite close</em> already, since you can already <em>nest</em> this to some extent:</p>
<pre>
<b>?- ifthenelse(C1, ifthenelse(C2,X=1,X=2), X=3).</b>
C1 = C2, C2 = true,
X = 1 ;
C1 = true,
C2 = false,
X = 2 ;
C1 = false,
X = 3.
</pre>
<p>What now remains is to make it nestable <strong>in the condition</strong>. For this, you need a way to <em>reify</em> the outcome of conditions, that is, to turn the truth value into a Prolog <em>term</em> that you can reason about symbolically.</p>
<p>See <code>if_/3</code> for more information: <a href="https://arxiv.org/abs/1607.01590" rel="noreferrer"><strong>Indexing <code>dif/2</code></strong></a></p>
<p>A key property this construct preserves is called <a href="/questions/tagged/logical-purity" class="post-tag" title="show questions tagged 'logical-purity'" rel="tag">logical-purity</a>. In particular, do not erroneously commit to one branch if both are logically possible!</p>
<hr>
<p><strong>Note on purity</strong>: From a declarative point of view, what you have implemented is very nice and has a clear logical interpretation. Taking for example the last two clauses, we can read <code>ifthenelse(C,G1,G2)</code> as: <em>If</em> <code>C</code> is <code>true</code>, then the predicate holds if <code>G1</code> holds. If <code>C</code> is <code>false</code>, then it holds if <code>G2</code> holds. Perfectly fine. A semantic "if...then" is not in any way problematic; in fact, every single Horn clause can be read in this way, as an implication from right to left.</p>
<p>In contrast, the built-in construct you mention <em>lacks</em> such a declarative reading in general. For example:</p>
<pre>
?- ( ( X = 1 ; X = 2 ) -> true ; true ).
<b>X = 1.</b>
</pre>
<p>But on the other hand:</p>
<pre>
?- <b>X = 2</b>, ( ( X = 1 ; X = 2 ) -> true ; true ).
<b>X = 2.</b>
</pre>
<p>So <em>adding a constraint</em> has led to a <em>new</em> solution. A classical logician's nightmare.</p>
<p>Your construct <em>avoids</em> such problems, because it does not prematurely <strong>commit</strong> to any particular branch. This yields a much more versatile predicate that can also be used in other directions. For example, see that <strong>all</strong> possible solutions are correctly generated:</p>
<pre>
<b>?- ifthenelse(C, true, true).</b>
true ;
C = true ;
C = false.
</pre>
<p>So, I highly encourage your way of formulating this: As you have made perfectly clear, it's <strong>your own</strong> 'if-then-else', and you are using only pure monotonic constructs to express it.</p>
<p>On a psychological note, I generally prefer to give more room to pure declarative constructs and I added this section only because the comments expressed genuine interest in these aspects.</p> | 2018-03-29 14:09:04.150000+00:00 | 2018-03-29 17:00:30.557000+00:00 | 2018-03-29 17:00:30.557000+00:00 | null | 49,557,647 | <p>I'm experimenting with Prolog and tried to make my own "<strong><code>if-then-else</code></strong>" method, so that I don't use the <code>-> ;</code> method, for experimenting's sake.
My goal is to make it so that there can be nested ifs and elses in my code if required.
Thus far I have got this:</p>
<pre><code>ifthenelse(_, G, G):- G. %no matter the condition, the goals are the same
ifthenelse(true,G,_):- G. %if
ifthenelse(false,_,G):- G. %else
</code></pre>
<p>I think my way hardly seems correct. How can I make my own <code>ifthenelse/3</code> properly?</p>
<p>Thank you</p> | 2018-03-29 13:41:22.053000+00:00 | 2018-03-29 17:00:30.557000+00:00 | 2018-03-29 14:06:55.900000+00:00 | if-statement|prolog | ['https://arxiv.org/abs/1607.01590', '/questions/tagged/logical-purity'] | 2 |
66,807,322 | <p>I found this very recent paper <a href="https://arxiv.org/pdf/1901.01926.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1901.01926.pdf</a>, where everything is nicely typeset and explained.
Apparently there is always a compromise between time and space complexity and there is no algorithm (yet?) that is O(n)-in-time and in-place.
(The paper claims to be the first subquadratic deterministic in-place algorithm.)</p>
<p>The algorithm you posted is not listed in the paper and would be O(n) in time and O(n) in space (i.e. out-of-place).</p>
<p>I post it here for reference, and also to check the correctness of other implementations.
I used zero-based indexing for simplicity.</p>
<pre><code>// O(n) out-of-place algorithm
template<class It, class Size, class ItOut>
void inverse_permutation_n(It first, Size n, ItOut d_first){
for(Size i = 0; i != n; ++i){
d_first[first[i]] = i;
}
}
</code></pre>
<p>Then there is the algorithm listed as "folklore" in the table, translated here from the pseudocode in the paper (Listing 3 in the paper).</p>
<pre><code>#include<algorithm> // for std::min
template<class It, class Index>
void reverse_cycle(It t, Index start){
auto cur = t[start];
auto prev = start;
while( cur != start ){
auto next = t[cur];
t[cur] = prev;
prev = cur;
cur = next;
}
t[start] = prev;
}
template<class It, class Index>
auto cycle_leader(It t, Index start){
auto cur = t[start];
auto smallest = start;
while(cur != start){
smallest = std::min(smallest, cur);
cur = t[cur];
}
return smallest;
}
// O(n²) in-place algorithm
template<class It, class Size>
void inverse_permutation_n(It first, Size n){
for(Size i = 0; i != n; ++i){
if( cycle_leader(first, i) == i ) reverse_cycle(first, i);
}
}
</code></pre>
<p>The above algorithm is O(n²)-time in the average case.
The reason is that for each point you have to follow a cycle of length O(n) to find the leader.</p>
<hr />
<p>Then you can build on top of this by putting a shortcut in the cycle leader search: once the candidate cycle leader is less than the starting point, the search returns false.</p>
<pre><code>template<class It, class Index>
auto cycle_leader_shortcut(It t, Index start){
auto cur = t[start];
while(cur != start){
if(cur < start) return false;
cur = t[cur];
}
return true;
}
// O(n log n) in-place algorithm
template<class It, class Size>
void inverse_permutation_shortcut_n(It first, Size n){
for(Size i = 0; i != n; ++i){
if( cycle_leader_shortcut(first, i) ) reverse_cycle(first, i);
}
}
</code></pre>
<p>This algorithm is fortunately O(N log N) (on average, I think).
The reason is that the cycle iterations become shorter as the starting point increases, because it becomes more and more likely to encounter a value smaller than the start, which aborts the leader search early.</p>
<hr />
<p>This is the benchmark and the result:</p>
<pre><code>#include<numeric> // for iota
#include<random>
// initialize rng
std::random_device rd;
std::mt19937 g(rd());
static void BM_inverse_permutation(benchmark::State& state){
// allocate memory and initialize test buffer and reference solution
auto permutation = std::vector<std::size_t>(state.range(0));
std::iota(permutation.begin(), permutation.end(), 0);
auto reference_inverse_permutation = std::vector<std::size_t>(permutation.size());
for(auto _ : state){
state.PauseTiming(); // to random shuffle and calculate reference solution
std::shuffle(permutation.begin(), permutation.end(), g);
// inverse_permutation_n(permutation.cbegin(), permutation.size(), reference_inverse_permutation.begin());
benchmark::DoNotOptimize(permutation.data());
benchmark::ClobberMemory();
state.ResumeTiming();
inverse_permutation_n(permutation.begin(), permutation.size());
benchmark::DoNotOptimize(permutation.data());
benchmark::ClobberMemory();
// state.PauseTiming(); // to check that the solution is correct
// if(reference_inverse_permutation != permutation) throw std::runtime_error{""};
// state.ResumeTiming();
}
state.SetItemsProcessed(state.iterations()*permutation.size() );
state.SetComplexityN(state.range(0));
}
BENCHMARK(BM_inverse_permutation)->RangeMultiplier(2)->Range(8, 8<<10)->Complexity(benchmark::oNSquared);
static void BM_inverse_permutation_shortcut(benchmark::State& state){
// allocate memory and initialize test buffer and reference solution
auto permutation = std::vector<std::size_t>(state.range(0));
std::iota(permutation.begin(), permutation.end(), 0);
auto reference_inverse_permutation = std::vector<std::size_t>(permutation.size());
for(auto _ : state){
state.PauseTiming(); // to random shuffle and calculate reference solution
std::shuffle(permutation.begin(), permutation.end(), g);
// inverse_permutation_n(permutation.cbegin(), permutation.size(), reference_inverse_permutation.begin());
benchmark::DoNotOptimize(permutation.data());
benchmark::ClobberMemory();
state.ResumeTiming();
inverse_permutation_shortcut_n(permutation.begin(), permutation.size());
benchmark::DoNotOptimize(permutation.data());
benchmark::ClobberMemory();
// state.PauseTiming(); // to check that the solution is correct
// if(reference_inverse_permutation != permutation) throw std::runtime_error{""};
// state.ResumeTiming();
}
state.SetItemsProcessed(state.iterations()*permutation.size() );
state.SetComplexityN(state.range(0));
}
BENCHMARK(BM_inverse_permutation_shortcut)->RangeMultiplier(2)->Range(8, 8<<10)->Complexity(benchmark::oNLogN);
BENCHMARK_MAIN();
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ c++ a.cpp -O3 -DNDEBUG -lbenchmark && ./a.out
2021-03-30 21:16:55
Running ./a.out
Run on (12 X 4600 MHz CPU s)
CPU Caches:
L1 Data 32K (x6)
L1 Instruction 32K (x6)
L2 Unified 256K (x6)
L3 Unified 12288K (x1)
Load Average: 1.26, 1.80, 1.76
***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
-----------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
-----------------------------------------------------------------------------------------------
BM_inverse_permutation/8 476 ns 478 ns 1352259 items_per_second=16.7276M/s
BM_inverse_permutation/16 614 ns 616 ns 1124905 items_per_second=25.9688M/s
BM_inverse_permutation/32 1106 ns 1107 ns 630398 items_per_second=28.9115M/s
BM_inverse_permutation/64 2929 ns 2931 ns 238236 items_per_second=21.835M/s
BM_inverse_permutation/128 10748 ns 10758 ns 64708 items_per_second=11.898M/s
BM_inverse_permutation/256 41556 ns 41582 ns 16600 items_per_second=6.15656M/s
BM_inverse_permutation/512 164006 ns 164023 ns 4245 items_per_second=3.12151M/s
BM_inverse_permutation/1024 621341 ns 620840 ns 1056 items_per_second=1.64938M/s
BM_inverse_permutation/2048 2468060 ns 2466060 ns 293 items_per_second=830.474k/s
BM_inverse_permutation/4096 10248540 ns 10244982 ns 93 items_per_second=399.805k/s
BM_inverse_permutation/8192 55926122 ns 55926230 ns 10 items_per_second=146.479k/s
BM_inverse_permutation_BigO 0.82 N^2 0.82 N^2
BM_inverse_permutation_RMS 18 % 18 %
BM_inverse_permutation_shortcut/8 499 ns 501 ns 1193871 items_per_second=15.9827M/s
BM_inverse_permutation_shortcut/16 565 ns 567 ns 1225056 items_per_second=28.2403M/s
BM_inverse_permutation_shortcut/32 740 ns 742 ns 937909 items_per_second=43.1509M/s
BM_inverse_permutation_shortcut/64 1121 ns 1121 ns 619016 items_per_second=57.0729M/s
BM_inverse_permutation_shortcut/128 1976 ns 1977 ns 355982 items_per_second=64.745M/s
BM_inverse_permutation_shortcut/256 3644 ns 3645 ns 191387 items_per_second=70.2375M/s
BM_inverse_permutation_shortcut/512 7282 ns 7288 ns 95434 items_per_second=70.2481M/s
BM_inverse_permutation_shortcut/1024 14732 ns 14752 ns 47417 items_per_second=69.4165M/s
BM_inverse_permutation_shortcut/2048 30590 ns 30398 ns 23079 items_per_second=67.3728M/s
BM_inverse_permutation_shortcut/4096 64374 ns 64039 ns 10766 items_per_second=63.9613M/s
BM_inverse_permutation_shortcut/8192 196961 ns 195786 ns 3646 items_per_second=41.8416M/s
BM_inverse_permutation_shortcut_BigO 1.74 NlgN 1.73 NlgN
BM_inverse_permutation_shortcut_RMS 27 % 27 %
</code></pre> | 2021-03-25 20:41:54.320000+00:00 | 2021-03-31 04:18:46.410000+00:00 | 2021-03-31 04:18:46.410000+00:00 | null | 56,603,153 | <p>I tried to write a function that would invert an array and print its <a href="https://www.geeksforgeeks.org/inverse-permutation/" rel="nofollow noreferrer">inverse permutation</a> without creating a new array.</p>
<blockquote>
<p>Given an array of size n of integers in range from 1 to n, we need to find the inverse permutation of that array.</p>
</blockquote>
<blockquote>
<p>An inverse permutation is a permutation which you will get by inserting position of an element at the position specified by the element value in the array.</p>
</blockquote>
<p>I wrote code that inverts by producing an output array, but I have to create a new array in order to do that. <strong>How to do it in-place?</strong></p>
<pre class="lang-cpp prettyprint-override"><code>#include<iostream>
using namespace std;
void inverse(int arr[], int size) {
int arr2[size];
for (int i = 0; i < size; i++)
arr2[arr[i] - 1] = i + 1;
for (int i = 0; i < size; i++)
cout << arr2[i] << " ";
}
int main() {
int arr[] = {2, 3, 4, 5, 1};
int size = sizeof(arr) / sizeof(arr[0]);
inverse(arr, size);
return 0;
}
</code></pre> | 2019-06-14 18:00:31.153000+00:00 | 2022-06-18 16:00:48.420000+00:00 | 2021-03-25 04:38:43.700000+00:00 | c++ | ['https://arxiv.org/pdf/1901.01926.pdf'] | 1 |
10,396,041 | <p>In fact,
the constraint of a Hamiltonian path is that you have to visit each vertex exactly once, whereas an Eulerian path traverses each edge exactly once but may revisit the same vertex (and, in directed graphs, may even use the same pair of vertices again via the edge in the opposite direction). In most cases, however, it is possible to add a new vertex to a graph, but rarely possible to add an edge.</p>
<p>Bellman, R. (1962), "Dynamic programming treatment of the travelling salesman problem", Journal of the ACM 9: 61–63, doi:10.1145/321105.321111 .</p>
<p>If you check that article, there is a dynamic programming treatment for graphs (not for all kinds of graphs, of course). There are also some HMM implementations as well; a short sketch of the DP idea follows.</p>
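<p>For concreteness, the Bellman/Held-Karp idea fits in a short Python sketch (my own illustration, O(2^n * n^2), testing Hamiltonian path existence):</p>
<pre><code>def hamiltonian_path_exists(adj):
    # bitmask DP: dp[mask][v] = some path visits exactly `mask`, ending at v
    n = len(adj)
    dp = [[False] * n for _ in range(1 << n)]
    for v in range(n):
        dp[1 << v][v] = True
    for mask in range(1 << n):
        for v in range(n):
            if dp[mask][v]:
                for w in range(n):
                    if adj[v][w] and not (mask >> w) & 1:
                        dp[mask | (1 << w)][w] = True
    return any(dp[(1 << n) - 1])
</code></pre>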
<p>Björklund, Andreas (2010), "Determinant sums for undirected Hamiltonicity", Proc. 51st IEEE Symposium on Foundations of Computer Science (FOCS '10), pp. 173–182, arXiv:1008.0541 , doi:10.1109/FOCS.2010.24.</p>
<p>The good part of the Eulerian path is that you can work on subgraphs (branch-and-bound style) and then assemble the total cycle graph. Truth be told, the Eulerian approach is mostly useful for local solutions.</p>
<p>Hope that helps.</p>
71,540,948 | <p>It is element-wise multiplication, so in most libraries it is quite literally the <code>*</code> operator. The source is here:
<a href="https://arxiv.org/pdf/1807.06521.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1807.06521.pdf</a></p>
<p><a href="https://i.stack.imgur.com/v2y8N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v2y8N.png" alt="enter image description here" /></a></p> | 2022-03-19 18:31:06.870000+00:00 | 2022-03-19 18:31:06.870000+00:00 | null | null | 71,538,199 | <p>I would like to understand SAM, the model during YOLOv4.
However, I am confused as to the meaning of the "x" in the diagram.
Please tell me what module name this "x" corresponds to in yolov4.cfg in darknet.</p>
<p>Thank you in advance.</p>
<p><a href="https://i.stack.imgur.com/S0Uzj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S0Uzj.png" alt="enter image description here" /></a></p> | 2022-03-19 12:38:26.300000+00:00 | 2022-03-19 18:31:06.870000+00:00 | null | deep-learning|conv-neural-network|object-detection|yolov4 | ['https://arxiv.org/pdf/1807.06521.pdf', 'https://i.stack.imgur.com/v2y8N.png'] | 2 |
67,744,062 | <p>Here is a solution in Python/OpenCV. It creates transformation maps that define the equations from output back to input and applies them using cv2.remap(). The equations come from <a href="https://arxiv.org/pdf/1509.06344.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1509.06344.pdf</a> for the Elliptical Grid Mapping approach.</p>
<p>Input:</p>
<p><a href="https://i.stack.imgur.com/BiKqQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BiKqQ.png" alt="enter image description here" /></a></p>
<pre><code>import numpy as np
import cv2
import math
# References:
# https://arxiv.org/pdf/1509.06344.pdf
# http://squircular.blogspot.com/2015/09/mapping-circle-to-square.html
# Evaluate:
# u = x*sqrt(1-y**2/2)
# v = y*sqrt(1-x**2/2)
# u,v are input circle coordinates and x,y are output square coordinates
# read input
img = cv2.imread("rings.png")
# get dimensions and center
h, w = img.shape[:2]
xcent = w / 2
ycent = h / 2
# set up the maps as float32 from output square (x,y) to input circle (u,v)
map_u = np.zeros((h, w), np.float32)
map_v = np.zeros((h, w), np.float32)
# create u and v maps where x,y is measured from the center and scaled from -1 to 1
for y in range(h):
Y = (y - ycent)/ycent
for x in range(w):
X = (x - xcent)/xcent
map_u[y, x] = xcent * X * math.sqrt(1 - 0.5*Y**2) + xcent
map_v[y, x] = ycent * Y * math.sqrt(1 - 0.5*X**2) + ycent
# do the remap
result = cv2.remap(img, map_u, map_v, cv2.INTER_LINEAR, borderMode = cv2.BORDER_REFLECT_101, borderValue=(0,0,0))
# save results
cv2.imwrite("rings_circle2square.png", result)
# display images
cv2.imshow('img', img)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/KcML9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KcML9.png" alt="enter image description here" /></a></p>
<p>Here is another example:</p>
<p>Input:</p>
<p><a href="https://i.stack.imgur.com/gQm8o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gQm8o.png" alt="enter image description here" /></a></p>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/LiWuE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LiWuE.png" alt="enter image description here" /></a></p>
<p>And here is a 3rd example:</p>
<p>Input:</p>
<p><a href="https://i.stack.imgur.com/TbH2k.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TbH2k.jpg" alt="enter image description here" /></a></p>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/uw8vV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uw8vV.png" alt="enter image description here" /></a></p>
<p><strong>ADDITION</strong></p>
<p>Here is an alternate approach based upon the Simple Stretch equations in the reference above:</p>
<pre><code>import numpy as np
import cv2
import math
# References:
# https://arxiv.org/pdf/1509.06344.pdf
# Simple stretch equations
# read input
img = cv2.imread("rings.png")
#img = cv2.imread("ICM.png")
#img = cv2.imread("soccerball_small.jpg")
# get dimensions and center
h, w = img.shape[:2]
xcent = w / 2
ycent = h / 2
# set up the maps as float32 from output square (x,y) to input circle (u,v)
map_u = np.zeros((h, w), np.float32)
map_v = np.zeros((h, w), np.float32)
# create u and v maps where x,y is measured from the center and scaled from -1 to 1
# note: copysign(1,x) is signum(x) and returns 1 ,0, or -1 depending upon sign of x
for y in range(h):
Y = (y - ycent)/ycent
for x in range(w):
X = (x - xcent)/xcent
X2 = X*X
Y2 = Y*Y
XY = X*Y
R = math.sqrt(X2+Y2)
if R == 0:
map_u[y, x] = xcent
map_v[y, x] = ycent
elif X2 >= Y2:
map_u[y, x] = xcent * math.copysign(1, X) * X2/R + xcent
map_v[y, x] = ycent * math.copysign(1, X) * XY/R + ycent
else:
map_u[y, x] = xcent * math.copysign(1, Y) * XY/R + xcent
map_v[y, x] = ycent * math.copysign(1, Y) * Y2/R + ycent
# do the remap
result = cv2.remap(img, map_u, map_v, cv2.INTER_LINEAR, borderMode = cv2.BORDER_REFLECT_101, borderValue=(0,0,0))
# save results
cv2.imwrite("rings_circle2square2.png", result)
#cv2.imwrite("ICM_circle2square2.png", result)
#cv2.imwrite("soccerball_small_circle2square2.png", result)
# display images
cv2.imshow('img', img)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>Input:</p>
<p><a href="https://i.stack.imgur.com/BjSfd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BjSfd.png" alt="enter image description here" /></a></p>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/qmXZz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qmXZz.png" alt="enter image description here" /></a></p>
<p>Input:</p>
<p><a href="https://i.stack.imgur.com/9S8V9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9S8V9.png" alt="enter image description here" /></a></p>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/tAn0h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tAn0h.png" alt="enter image description here" /></a></p>
<p>Input:</p>
<p><a href="https://i.stack.imgur.com/7zoMI.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7zoMI.jpg" alt="enter image description here" /></a></p>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/E5wMj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E5wMj.png" alt="enter image description here" /></a></p> | 2021-05-28 18:42:23.930000+00:00 | 2021-05-29 04:28:15.253000+00:00 | 2021-05-29 04:28:15.253000+00:00 | null | 67,694,258 | <p>OpenCV / Python related:</p>
<p>Given a photo of a round object, how can you output that object flattened, <em>while adjusting for surface area?</em> Here is an example image of an input:</p>
<p><a href="https://i.stack.imgur.com/uotyY.jpg" rel="nofollow noreferrer">Soccer ball</a></p>
<p>It is similar to adjusting for camera distortion (turning a round object into flat one), but in this case the distortion comes from the object itself and not the camera.</p>
<p><a href="https://i.stack.imgur.com/1QSTz.jpg" rel="nofollow noreferrer">Distorted image</a>:</p>
<p><a href="https://i.stack.imgur.com/byDyY.jpg" rel="nofollow noreferrer">Undistorted image</a>:</p>
<p>Any suggestions would help. Thank you!</p>
<p>Edit: The package squircle is just what I needed, thank you fmw42!</p> | 2021-05-25 19:15:34.477000+00:00 | 2021-05-29 04:28:15.253000+00:00 | 2021-05-26 21:45:39.257000+00:00 | python|opencv | ['https://arxiv.org/pdf/1509.06344.pdf', 'https://i.stack.imgur.com/BiKqQ.png', 'https://i.stack.imgur.com/KcML9.png', 'https://i.stack.imgur.com/gQm8o.png', 'https://i.stack.imgur.com/LiWuE.png', 'https://i.stack.imgur.com/TbH2k.jpg', 'https://i.stack.imgur.com/uw8vV.png', 'https://i.stack.imgur.com/BjSfd.png', 'https://i.stack.imgur.com/qmXZz.png', 'https://i.stack.imgur.com/9S8V9.png', 'https://i.stack.imgur.com/tAn0h.png', 'https://i.stack.imgur.com/7zoMI.jpg', 'https://i.stack.imgur.com/E5wMj.png'] | 13 |
68,328,995 | <p>Not sure whether the question still needs to be answered, but I guess it boils down to how to implement models like <a href="https://arxiv.org/abs/1703.06103" rel="nofollow noreferrer">R-GCN</a> or <a href="https://arxiv.org/abs/2003.01332" rel="nofollow noreferrer">HGT</a> with DGL. Some of these layers come built-in with DGL <a href="https://docs.dgl.ai/api/python/nn.html" rel="nofollow noreferrer">here</a>. But it is also easy to implement your own computations. The following explanation only makes sense if you know the basic computational process of DGL during a forward pass through a graph layer (message, reduce, apply_node); if not, DGL has good tutorials on that as well. To extend the usual graph computation to, for example, edges of different types, you need to create a heterogeneous Graph object and call <code>multi_update_all</code> on that graph object. You can pass a dictionary to that function which specifies the computation per edge type.</p> | 2021-07-10 15:16:59.223000+00:00 | 2021-07-10 15:16:59.223000+00:00 | null | null | 57,779,973 | <p>I'm trying to implement a graph convolutional network (GCN) in the Deep Graph Library (DGL) package for Python. In many papers, edges have discrete features, and each possible value is associated with a different weight matrix or set of weight matrices. An example would be <a href="https://pubs.acs.org/doi/full/10.1021/acscentsci.8b00507" rel="nofollow noreferrer">here</a>. Is anyone familiar with how to implement a model like this in DGL? The DGL team's example of GCNs for <a href="https://docs.dgl.ai/tutorials/basics/4_batch.html" rel="nofollow noreferrer">graph classification</a> does not cover this, nor does <a href="https://iwatobipen.wordpress.com/2019/02/01/try-gcn-qspr-with-pytorch-based-graph-library-rdkit-pytorch-dgl/" rel="nofollow noreferrer">another example</a> I found online.</p> | 2019-09-04 00:07:53.517000+00:00 | 2021-07-10 15:16:59.223000+00:00 | null | conv-neural-network|pytorch|graph-databases|chemistry | ['https://arxiv.org/abs/1703.06103', 'https://arxiv.org/abs/2003.01332', 'https://docs.dgl.ai/api/python/nn.html'] | 3
17,724,401 | <p>You can try the A-ES algorithm from <a href="http://arxiv.org/pdf/1012.0256.pdf" rel="nofollow noreferrer">this paper by S. Efraimidis</a>. It's quite simple to code and very efficient.</p>
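<p>For reference, one well-known heap-based variant from that line of work (often called A-Res) takes only a few lines; a Python sketch of mine, assuming <code>stream</code> yields <code>(item, weight)</code> pairs with positive weights:</p>
<pre><code>import heapq
import random

def weighted_reservoir(stream, k):
    # keep the k items with the largest keys u**(1/w), u ~ Uniform(0, 1)
    heap = []  # min-heap of (key, item) pairs
    for item, weight in stream:
        key = random.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]
</code></pre>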
<p>Hope this helps,</p>
<p>Benoit</p> | 2013-07-18 13:09:22.543000+00:00 | 2013-07-18 13:09:22.543000+00:00 | null | null | 17,117,872 | <p>Is there an algorithm for how to perform reservoir sampling when the points in the data stream have associated weights?</p> | 2013-06-14 21:59:14.597000+00:00 | 2018-12-12 23:05:31.593000+00:00 | 2018-12-12 23:05:31.593000+00:00 | language-agnostic|sampling | ['http://arxiv.org/pdf/1012.0256.pdf'] | 1 |
35,642,647 | <p>To my understanding, irrespective of which architecture is used (skip-gram/CBOW), word vectors are read from the same word-vector matrix.</p>
<p>As suggested in the second footnote of the <a href="http://arxiv.org/pdf/1402.3722v1.pdf" rel="nofollow">paper</a>, <em>v_in</em> and <em>v'_out</em> of the same word (say <em>dog</em>) should be different, and they are assumed to come from different vocabularies during the derivation of the loss function.</p>
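<p>Concretely, in skip-gram with negative sampling <em>both</em> tables receive gradient updates; a toy NumPy sketch (my own illustration, not the reference implementation):</p>
<pre><code>import numpy as np

def sgns_step(V_in, V_out, center, context, negatives, lr=0.025):
    # V_in holds input (center-word) vectors, V_out output (context) vectors
    v = V_in[center]
    grad_v = np.zeros_like(v)
    for word, label in [(context, 1.0)] + [(w, 0.0) for w in negatives]:
        u = V_out[word]
        g = 1.0 / (1.0 + np.exp(-v @ u)) - label   # sigmoid(v.u) - label
        grad_v += g * u
        V_out[word] -= lr * g * v                  # update the output vector
    V_in[center] -= lr * grad_v                    # update the input vector
</code></pre>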
<p>In practice, the probability of a word appearing in its own context is very low. Most implementations keep both tables during training, but typically only the input vectors are saved as the final model, for memory and efficiency.</p> | 2016-02-26 03:03:32.017000+00:00 | 2016-02-26 03:03:32.017000+00:00 | null | null | 35,434,499 | <p>I'm reading the raw word2vec paper: <a href="http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf" rel="nofollow noreferrer">http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf</a></p>
<p>According to the equation below, every word has two vectors: one is used as the center word to predict context words, and the other is used when the word appears as a context word. The former can be updated with gradient descent in each iteration, but how is the latter updated? And which vector is the final vector in the final model?
<a href="https://i.stack.imgur.com/m4x7n.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m4x7n.jpg" alt="enter image description here"></a></p> | 2016-02-16 13:56:24.453000+00:00 | 2017-01-11 15:26:45.193000+00:00 | 2017-01-11 15:26:45.193000+00:00 | machine-learning|word2vec|softmax | ['http://arxiv.org/pdf/1402.3722v1.pdf'] | 1 |
63,108,600 | <p>I suggest you refer to the config.py of the original implementation of PixelLink, available at the following link:
<a href="https://github.com/ZJULearning/pixel_link/blob/master/config.py" rel="nofollow noreferrer">https://github.com/ZJULearning/pixel_link/blob/master/config.py</a></p>
<p>Additionally, I would also encourage you to read the paper, “PixelLink: Detecting Scene Text via Instance Segmentation”, available at the following link:
<a href="https://arxiv.org/abs/1801.01315" rel="nofollow noreferrer">https://arxiv.org/abs/1801.01315</a></p> | 2020-07-27 04:11:18.400000+00:00 | 2020-07-27 04:11:18.400000+00:00 | null | null | 62,963,060 | <p>i am trying to train text detection <a href="https://github.com/opencv/openvino_training_extensions/tree/develop/tensorflow_toolkit/text_detectio.." rel="nofollow noreferrer">https://github.com/opencv/openvino_training_extensions/tree/develop/tensorflow_toolkit/text_detectio..</a>. and default it is set for image size 1280 * 768 , but i want to train it on cropped vehicle number plate, i have resized my images to 200*120px size with padding keeping the aspect ratio.</p>
<p>Is there any documentation available to understand config.yaml?</p>
<p>Some of the fields there are, for example:</p>
<p>min_area: 300</p>
<p>score_map_shape: [128,128]</p>
<p>train_image_shape : [512,512]</p>
<p>Can someone please explain these?
I tried setting train_image_shape to 200,120, and I got the error "operands could not be broadcast together with shapes (8,13,2) (8,14,2)".</p>
<p>Thanks & Regards</p>
<p>Rawat</p> | 2020-07-17 23:37:01.527000+00:00 | 2020-07-27 04:11:18.400000+00:00 | 2020-07-18 00:04:10.117000+00:00 | openvino | ['https://github.com/ZJULearning/pixel_link/blob/master/config.py', 'https://arxiv.org/abs/1801.01315'] | 2 |
52,371,123 | <p>@rabin-poudyal, please note: <em>data splitting/cross-validation has NOTHING to do with whether a dataset is labelled or unlabelled</em>. On the contrary, cross-validation has been applied to clustering in both research and practice. See these papers for reference: <a href="https://arxiv.org/pdf/1702.02658.pdf" rel="nofollow noreferrer">1</a>, <a href="https://www.tandfonline.com/doi/abs/10.1198/106186005X59243" rel="nofollow noreferrer">2</a>, <a href="https://www.sciencedirect.com/science/article/pii/S0004370299000946" rel="nofollow noreferrer">3</a>, <a href="https://link.springer.com/article/10.1023/A:1008940618127" rel="nofollow noreferrer">4</a>, and many more. Also see this discussion on <a href="https://stats.stackexchange.com/questions/95453/cross-validation-for-comparing-clustering-techniques">SE</a>.</p>
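<p>For instance, instead of a train/test split, the number of clusters can be selected with an internal quality score; a toy sketch (where <code>X</code> stands for the filled dataframe from the question):</p>
<pre><code>from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, init='k-means++', n_init=10,
                    random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)
best_k = max(scores, key=scores.get)  # highest average silhouette wins
</code></pre>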
<p>As pointed out earlier, k-means works only for continuous data. Since you're dealing with text data, I suggest using another clustering algorithm that can work with categorical data, for instance <a href="http://www.math.le.ac.uk/people/ag153/homepage/KmeansKmedoids/Kmeans_Kmedoids.html" rel="nofollow noreferrer">k-medoids</a>.</p> | 2018-09-17 15:26:15.167000+00:00 | 2018-09-17 15:26:15.167000+00:00 | null | null | 52,345,054 | <p>I am doing a machine learning project and I have a dataset that contains the frequency of words that occur in each email. I need to find the cluster that each mail belongs to. What I did is load the data into a pandas dataframe, then train a KMeans algorithm.
The dataset looks like the following:</p>
<pre><code>[
{
"adwords": 2,
"google": 4,
"ads": 2,
"facebook": 1,
"shyam": 2
},
{
"facebook": 4,
"post": 2,
"is": 1,
"comment": 2,
"likes": 1,
"google": 1
},...]
</code></pre>
<p>Then my python code looks like this:</p>
<pre><code>import numpy as np
import pandas as pd
data = pd.read_json('data.json', orient='records')
data = data.fillna(0)
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=5, init='k-means++')
kmeans.fit_predict(data)
</code></pre>
<p>Now, since I have only 30 emails and I need to cluster them according to the words given, I cannot split into train/test sets either. Is this the right approach to solve the problem? Please suggest which algorithm would do best and what I should be doing. Thanks in advance.</p> | 2018-09-15 13:17:23.490000+00:00 | 2018-09-17 15:26:15.167000+00:00 | null | python|machine-learning|scikit-learn|cluster-analysis|data-mining | ['https://arxiv.org/pdf/1702.02658.pdf', 'https://www.tandfonline.com/doi/abs/10.1198/106186005X59243', 'https://www.sciencedirect.com/science/article/pii/S0004370299000946', 'https://link.springer.com/article/10.1023/A:1008940618127', 'https://stats.stackexchange.com/questions/95453/cross-validation-for-comparing-clustering-techniques', 'http://www.math.le.ac.uk/people/ag153/homepage/KmeansKmedoids/Kmeans_Kmedoids.html'] | 6
8,948,028 | <p><a href="http://arxiv.org/abs/1106.4064" rel="nofollow noreferrer">http://arxiv.org/abs/1106.4064</a> might interest you.</p>
<blockquote>
<p><strong>Algorithmic Programming Language Identification</strong></p>
<p>David Klein, Kyle Murray, Simon Weber</p>
<p>(Submitted on 21 Jun 2011 (v1), last revised 9 Nov 2011 (this version, v2))</p>
<p>Motivated by the amount of code that goes unidentified on the web, we introduce a practical method for algorithmically identifying the programming language of source code. Our work is based on supervised learning and intelligent statistical features. We also explored, but abandoned, a grammatical approach. In testing, our implementation greatly outperforms that of an existing tool that relies on a Bayesian classifier. Code is written in Python and available under an MIT license.</p>
</blockquote> | 2012-01-20 21:23:37.260000+00:00 | 2012-01-20 21:23:37.260000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 8,947,794 | <blockquote>
<p><strong>Possible Duplicates:</strong><br>
<a href="https://stackoverflow.com/questions/3326494/parsing-css-in-javascript-jquery">Parsing CSS in JavaScript / jQuery</a><br>
<a href="https://stackoverflow.com/questions/4412469/parsing-css-string-with-regex-in-javascript">Parsing CSS string with RegEx in JavaScript</a> </p>
</blockquote>
<p>How can I find out if a string contains CSS rules?</p>
<p>Example rules:</p>
<pre><code>selector {
property:value;
}
selector { property:value; }
selector{property:value}
...
</code></pre>
<p>Basically I want to find out if a text block represents either PHP + HTML or CSS code.</p>
<p>One way to do this: I was thinking of trimming the text, then matching the first character of the text against <code>#</code>, <code>.</code> or a CSS selector such as <code>body</code>, <code>p</code>, etc. Do you think it's a good idea?</p> | 2012-01-20 21:00:00.580000+00:00 | 2012-01-20 21:51:32.883000+00:00 | 2017-05-23 12:14:25.897000+00:00 | javascript|jquery|css|regex | ['http://arxiv.org/abs/1106.4064'] | 1
69,414,947 | <p>Computing the SVD of an m × n matrix has complexity O(mn min(n, m)). Since this is super-linear in the size of the data, it becomes computationally expensive for large data sets. However, if we have a low rank matrix, we would need only k basis vectors, where k << m, n. One way of computing the rank k approximation is to compute the SVD of the full matrix and retain only the k largest singular values and vectors.</p>
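<p>In NumPy, the rank-k truncation reads, for example (toy sketch):</p>
<pre><code>import numpy as np

A = np.random.randn(200, 120)
k = 10
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k]   # best rank-k approximation of A
</code></pre>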
<p><a href="https://arxiv.org/pdf/1710.02812.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1710.02812.pdf</a></p> | 2021-10-02 07:36:36.330000+00:00 | 2021-10-02 07:36:36.330000+00:00 | null | null | 29,218,823 | <p>I've been trying to implement SVD in C for the past few weeks now, and currently I've been using the algorithm 6 found <a href="http://www.cs.utexas.edu/users/inderjit/public_papers/HLA_SVD.pdf" rel="noreferrer">here</a>, and from my understanding this algorithm will run in time O(n^5) because there are two loops (One of the loops does not go from 0 to n, I know but n^5 works as a crude bound), and inside the inner loop matrix multiplication has to be done which is an n^3 process.</p>
<p>However, according to <a href="http://rakaposhi.eas.asu.edu/s01-cse494-mailarchive/msg00028.html" rel="noreferrer">this website</a>, for an n by n matrix, the SVD can be calculated in O(2n^3). Does anyone know where I can find an algorithm with that time complexity?</p> | 2015-03-23 19:33:28.087000+00:00 | 2021-10-02 07:36:36.330000+00:00 | null | c|algorithm|matrix|linear-algebra | ['https://arxiv.org/pdf/1710.02812.pdf'] | 1
61,784,597 | <p>There's an <strong>MNIST sequence dataset</strong> from Edwin de Jong:</p>
<ul>
<li>Paper: <a href="https://arxiv.org/pdf/1611.03068.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.03068.pdf</a></li>
<li>Blog: <a href="https://edwin-de-jong.github.io/blog/mnist-sequence-data/" rel="nofollow noreferrer">https://edwin-de-jong.github.io/blog/mnist-sequence-data/</a></li>
<li>Github: <a href="https://github.com/edwin-de-jong/mnist-digits-as-stroke-sequences/" rel="nofollow noreferrer">https://github.com/edwin-de-jong/mnist-digits-as-stroke-sequences/</a></li>
</ul>
<p>and an <strong>MNIST classification using RNN</strong> by Ryan Epp:</p>
<ul>
<li><a href="https://www.ryanepp.com/blog/mnist-classification-using-stroke-paths" rel="nofollow noreferrer">https://www.ryanepp.com/blog/mnist-classification-using-stroke-paths</a></li>
</ul>
<p>In both projects, the direction a stroke takes at a T-junction depends on the algorithm and is often counterintuitive. This means that there's more to learn for sequences, since many stroke patterns will produce the same image.</p>
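<p>If you just need a quick way to pull a few trajectory points out of a bitmap, a crude sketch (my own, assuming scikit-image; the greedy walk can misbehave at junctions):</p>
<pre><code>import numpy as np
from skimage.morphology import skeletonize

def sample_stroke_points(img, n=8, thresh=0.5):
    # img: 2-D array, e.g. a 28x28 MNIST bitmap scaled to [0, 1]
    pts = [tuple(p) for p in np.argwhere(skeletonize(img > thresh))]
    path = [min(pts)]                      # start at the top-most pixel
    rest = set(pts) - {path[0]}
    while rest:                            # greedy nearest-neighbour walk
        r, c = path[-1]
        nxt = min(rest, key=lambda p: (p[0] - r) ** 2 + (p[1] - c) ** 2)
        path.append(nxt)
        rest.remove(nxt)
    idx = np.linspace(0, len(path) - 1, n).astype(int)
    return [path[i] for i in idx]          # n (row, col) points on the stroke
</code></pre>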
<p><a href="https://i.stack.imgur.com/hu3Jp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hu3Jp.png" alt="digit 4 as a sequence"></a></p> | 2020-05-13 20:51:23.413000+00:00 | 2020-05-13 20:51:23.413000+00:00 | null | null | 58,402,851 | <p><img src="https://i.stack.imgur.com/KnBXM.jpg" alt="Trained image"> I've trained a decision tree on a dataset (handwritten) which contains 8 x-y points sampled along the length of the number (number digit dataset). The test dataset given (assignment), is the MNIST dataset, which is the pixel intensities in a 28x28 bitmap image. I need to sample 8 points and along the trajectory of the number so that it performs well.</p>
<p>I'm doing this in Python. I don't know what to do with the image to sample those points. Any package/procedure will help.</p> | 2019-10-15 21:25:57.033000+00:00 | 2020-05-13 20:51:23.413000+00:00 | 2019-10-18 19:26:24+00:00 | python|image | ['https://arxiv.org/pdf/1611.03068.pdf', 'https://edwin-de-jong.github.io/blog/mnist-sequence-data/', 'https://github.com/edwin-de-jong/mnist-digits-as-stroke-sequences/', 'https://www.ryanepp.com/blog/mnist-classification-using-stroke-paths', 'https://i.stack.imgur.com/hu3Jp.png'] | 5 |
27,546,396 | <p>Hmmm. Somebody is interested in exactly the same thing I have been working on.</p>
<p>Below is my code, in Python 2.</p>
<pre><code>from collections import OrderedDict

def invert(u):
    identity = sorted(u)
    ui = []
    for x in identity:
        index = u.index(x)
        ui.append(identity[index])
    print "Given U is:\n", u
    print "Inverse of U is:\n", ui
    return identity, ui

def r_vector(x, y, id):
    id_x_Map = OrderedDict(zip(id, x))
    id_y_Map = OrderedDict(zip(id, y))
    r = []
    for x_index, x_value in id_x_Map.items():
        for y_index, y_value in id_y_Map.items():
            if (x_value == y_index):
                r.append(y_value)
    print r
    return r

def xr_vector(x):
    values_checked = []
    unorderd_xr = []
    ordered_xr = []
    for value in x:
        values_to_right = []
        for n in x[x.index(value)+1:]:
            values_to_right.append(n)
        result = [i for i in values_to_right if i < value]
        if (len(result) != 0):
            values_checked.append(value)
            unorderd_xr.append(len(result))
    value_ltValuePair = OrderedDict(zip(values_checked, unorderd_xr))
    for key in sorted(value_ltValuePair):
        # print key, value_ltValuePair[key]
        ordered_xr.append(value_ltValuePair[key])
    print "Xr= ", ordered_xr
    print "Kendall Tau distance = ", sum(ordered_xr)

if __name__ == '__main__':
    print "***********************************************************"
    print "Enter the first string (U):"
    u = raw_input().split()
    print "Enter the second string (V):"
    v = raw_input().split()
    print "***********************************************************"
    print "Step 1: Find U Inverse"
    identity, uinverse = invert(u)
    print "***********************************************************"
    print "Step 2: Find R = V.UInverse"
    r = r_vector(v, uinverse, identity)
    print "***********************************************************"
    print "Step 3: Finding XR and Kendall Tau"
    xr_vector(r)
</code></pre>
<p>About the approach/algorithm to find the Kendall tau distance this way, I would either leave it to you, or point towards the research paper <a href="http://arxiv.org/pdf/1408.4963v1" rel="nofollow"><strong>Optimal Permutation Codes and the Kendall’s τ-Metric</strong></a>.</p>
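<p>For comparison, the distance itself is just the number of discordant pairs, so a direct O(n²) check is much shorter (my own sketch, for two permutations of the same items):</p>
<pre><code>def kendall_tau_distance(a, b):
    pos = {v: i for i, v in enumerate(b)}
    return sum(1
               for i in range(len(a))
               for j in range(i + 1, len(a))
               if pos[a[i]] > pos[a[j]])
</code></pre>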
<p>You can implement the same approach in R.</p> | 2014-12-18 12:15:21.597000+00:00 | 2014-12-18 12:15:21.597000+00:00 | null | null | 20,224,871 | <p>How can the Kendall tau distance (a.k.a. bubble-sort distance) between two permutations be calculated in R without loading additional libraries?</p> | 2013-11-26 18:18:01.560000+00:00 | 2014-12-18 12:15:21.597000+00:00 | 2013-11-27 10:41:38.150000+00:00 | r|algorithm|permutation | ['http://arxiv.org/pdf/1408.4963v1'] | 1
55,496,653 | <p>This question is very old. A lot of developments have happened in the NLP area in the last 7 years.</p>
<p><a href="https://en.wikipedia.org/wiki/Convolutional_neural_network" rel="nofollow noreferrer">Convolutional_neural_network</a> and <a href="https://en.wikipedia.org/wiki/Recurrent_neural_network" rel="nofollow noreferrer">Recurrent_neural_network</a> evolved during this time. </p>
<p><em>Word Embeddings:</em> Words appearing within similar contexts possess similar meanings. Word embeddings are pre-trained on a task where the objective is to predict a word based on its context.</p>
<p><a href="https://i.stack.imgur.com/ynzh8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ynzh8.png" alt="Word embeddings"></a></p>
<p>CNN for NLP:</p>
<ol>
<li><p>Sentences are first tokenized into words, which are further transformed into a word embedding matrix (i.e., the input embedding layer) of dimension d. </p></li>
<li><p>Convolutional filters are applied on this input embedding layer to produce a feature map.</p></li>
<li><p>A max-pooling operation on each filter obtains a fixed-length output and reduces the dimensionality of the output (a minimal Keras sketch follows the figure below).</p></li>
</ol>
<p><a href="https://i.stack.imgur.com/BekQ1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BekQ1.png" alt="CNN for NLP"></a></p>
<p>Since CNNs have the shortcoming of not preserving long-distance contextual information, RNNs were introduced. </p>
<p>RNNs are specialized neural-based approaches that are effective at processing sequential information.</p>
<p>An RNN memorizes the results of previous computations and uses them in the current computation. </p>
<p><a href="https://i.stack.imgur.com/vuGKe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vuGKe.png" alt="RNN for NLP"></a></p>
<p>There are a few variations of RNNs - Long Short-Term Memory units (LSTMs) and Gated Recurrent Units (GRUs). </p>
<p>Have a look at below resources:</p>
<p><a href="https://medium.com/dair-ai/deep-learning-for-nlp-an-overview-of-recent-trends-d0d8f40a776d" rel="nofollow noreferrer">deep-learning-for-nlp</a></p>
<p><a href="https://arxiv.org/pdf/1708.02709.pdf" rel="nofollow noreferrer">Recent trends in deep learning paper</a></p> | 2019-04-03 13:41:45.033000+00:00 | 2019-04-10 13:15:55.433000+00:00 | 2019-04-10 13:15:55.433000+00:00 | null | 2,434,536 | <p>We need to decide between Support Vector Machines and Fast Artificial Neural Network for some text processing project.</p>
<p>It includes Contextual Spelling Correction and then tagging the text to certain phrases and their synonyms.</p>
<p>Which would be the right approach? Or is there an alternative to both of these... Something more appropriate than FANN as well as SVM?</p> | 2010-03-12 17:24:45.120000+00:00 | 2019-04-10 13:15:55.433000+00:00 | null | artificial-intelligence|machine-learning|neural-network | ['https://en.wikipedia.org/wiki/Convolutional_neural_network', 'https://en.wikipedia.org/wiki/Recurrent_neural_network', 'https://i.stack.imgur.com/ynzh8.png', 'https://i.stack.imgur.com/BekQ1.png', 'https://i.stack.imgur.com/vuGKe.png', 'https://medium.com/dair-ai/deep-learning-for-nlp-an-overview-of-recent-trends-d0d8f40a776d', 'https://arxiv.org/pdf/1708.02709.pdf'] | 7
58,722,371 | <p>The <strong>Attention Mechanism</strong> is used for this exactly; programmatic implementation isn't simple, but use-ready repositories exist - see below. Example output below.</p>
<p>As to what attention 'is', see <a href="https://stats.stackexchange.com/questions/344508/what-are-attention-mechanisms-exactly">this SE answer</a>, and/or <a href="https://www.quora.com/What-is-exactly-the-attention-mechanism-introduced-to-RNN-recurrent-neural-network-It-would-be-nice-if-you-could-make-it-easy-to-understand" rel="nofollow noreferrer">this Quora answer</a>; in a nutshell, it's a means of identifying the most 'important' timesteps, effectively mapping out a temporal 'heatmap' (a minimal sketch follows the list below).</p>
<ul>
<li><a href="https://github.com/albermax/innvestigate" rel="nofollow noreferrer">iNNvestigate</a>, classifier introspection (first image below; can be applied to timeseries)</li>
<li><a href="https://raghakot.github.io/keras-vis/visualizations/activation_maximization/" rel="nofollow noreferrer">Saliency maps</a>, extracted features introspection</li>
<li><a href="https://stackoverflow.com/questions/58356868/how-visualize-attention-lstm-using-keras-self-attention-package/58357581#58357581">LSTM/CNN Visualization</a>, simple function (second image below)</li>
<li><a href="https://github.com/ningshixian/LSTM_Attention" rel="nofollow noreferrer">LSTM_Attention</a> - includes research paper-specific implementations.</li>
</ul>
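<p>As a starting point, here is a minimal attention-pooling sketch matching the question's shapes (my own illustration, not taken from the repositories above); the softmax weights are the temporal 'heatmap':</p>
<pre><code>from keras.models import Model
from keras.layers import Input, LSTM, Dense, Flatten, Activation, Lambda
import keras.backend as K

inp = Input(shape=(25, 4))
seq = LSTM(100, return_sequences=True)(inp)                # (batch, 25, 100)
score = Dense(1)(seq)                                      # score per timestep
att = Activation('softmax', name='att')(Flatten()(score))  # (batch, 25)
ctx = Lambda(lambda z: K.batch_dot(z[0], z[1], axes=(1, 1)))([att, seq])
out = Dense(1, activation='sigmoid')(Dense(50, activation='relu')(ctx))
model = Model(inp, out)
att_model = Model(inp, model.get_layer('att').output)      # inspect heatmap
</code></pre>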
<p>Lastly, as a tip, ditch LSTMs for <a href="https://arxiv.org/abs/1803.04831" rel="nofollow noreferrer">IndRNNs</a>; where the former struggles w/ 800-1000 timesteps, the latter's shown to succeed w/ 5000+. Features are also more interpretable, as each channel is independent, absent LSTM-type gating mechanisms. Though if speed is important, there is no <a href="https://keras.io/layers/recurrent/#cudnnlstm" rel="nofollow noreferrer"><code>CuDNNIndRNN</code></a>.</p>
<p><hr>
<img src="https://raw.githubusercontent.com/albermax/innvestigate/master/examples/images/analysis_grid.png" width="700">
<img src="https://i.stack.imgur.com/l26NF.png" width="700"></p> | 2019-11-06 02:45:52.803000+00:00 | 2019-11-06 02:45:52.803000+00:00 | null | null | 58,722,282 | <p>I have a binary classification problem where for each data point I have 3 time-series as follows.</p>
<pre><code>data_point, time_series1, time_series2, time_series3, label
d1, [0.1, ....., 0.5], [0.8, ....., 0.6], [0.8, ....., 0.8], 1
and so on
</code></pre>
<p>I am using the following code to perform my binary classification.</p>
<pre><code>from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(100, input_shape=(25,4)))
model.add(Dense(50))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre>
<p>Since I am currently treating my classification as a black-box task, I would like to dig deeper and see what happens inside.</p>
<p>More specifically, I would like to know the important features used by the LSTM to classify my data points. More importantly, I want to answer the following questions:</p>
<ul>
<li>Which time series (i.e. <code>time_series1</code>, <code>time_series2</code>,
<code>time_series3</code>) most influenced the classification</li>
<li>What are the features extracted from the most influential time series?</li>
</ul>
<p>I am happy to provide more details if needed.</p> | 2019-11-06 02:34:15.850000+00:00 | 2019-11-06 02:46:34.280000+00:00 | 2019-11-06 02:46:34.280000+00:00 | python|keras|deep-learning|data-visualization|lstm | ['https://stats.stackexchange.com/questions/344508/what-are-attention-mechanisms-exactly', 'https://www.quora.com/What-is-exactly-the-attention-mechanism-introduced-to-RNN-recurrent-neural-network-It-would-be-nice-if-you-could-make-it-easy-to-understand', 'https://github.com/albermax/innvestigate', 'https://raghakot.github.io/keras-vis/visualizations/activation_maximization/', 'https://stackoverflow.com/questions/58356868/how-visualize-attention-lstm-using-keras-self-attention-package/58357581#58357581', 'https://github.com/ningshixian/LSTM_Attention', 'https://arxiv.org/abs/1803.04831', 'https://keras.io/layers/recurrent/#cudnnlstm'] | 8 |
59,071,268 | <p>Well, generally I (personal opinion based on my experience) think that rewards should be proportional to the impact they have on the agent. If the problem is sparse rewards, you can have a look at this <a href="https://www.youtube.com/watch?v=0Ey02HT_1Ho&t=338s" rel="nofollow noreferrer">Arxiv Insights YouTube video</a> to see how that can be solved.</p>
<p>I can give one example that might be challenging: if the good reward is much larger in magnitude than the bad rewards, the agent will probably not care too much about risking the states with negative rewards in order to acquire the big positive reward. So you might end up with a risk-seeking agent.</p> | 2019-11-27 13:25:37.857000+00:00 | 2019-11-27 13:25:37.857000+00:00 | null | null | 59,048,803 | <p>I am new to reinforcement learning and experimenting with training of RL agents.</p>
<p>I have a doubt about reward formulation: from a given state, if an agent takes a good action I give a positive reward, and if the action is bad, I give a negative reward. So if I give the agent very high positive rewards when it takes a good action, say 100 times larger in magnitude than the negative rewards, will it help the agent during training?</p>
<p>Intuitively I feel, it will help the agent training, but will there be any drawbacks of such skewed reward structure?</p> | 2019-11-26 10:28:15.780000+00:00 | 2019-11-27 13:25:37.857000+00:00 | null | artificial-intelligence|reinforcement-learning|montecarlo|reward|dqn | ['https://www.youtube.com/watch?v=0Ey02HT_1Ho&t=338s'] | 1 |
46,499,987 | <p>I've been working on this problem myself for some time. I totally agree with the other answers that it really depends on your problem, and you must match your input to the output that you expect.
I found that for certain tasks like sentiment analysis it's OK to remove lots of nuances by preprocessing, but e.g. for text generation, it is quite essential to keep everything. </p>
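<p>As a minimal sketch of what I mean by "light" vs. "heavier" preprocessing (the tiny stopword list is just an illustration, not a recommendation):</p>
<pre><code>import re

STOPWORDS = {"the", "a", "is", "on"}  # illustrative only

def light_preprocess(text):
    # simple tokenization only: lowercase + split on non-word characters
    return [tok for tok in re.split(r"\W+", text.lower()) if tok]

def heavy_preprocess(text):
    # additionally drop stopwords; stemming/lemmatization would go here
    return [tok for tok in light_preprocess(text) if tok not in STOPWORDS]

print(light_preprocess("The cat is on the mat."))  # ['the', 'cat', 'is', 'on', 'the', 'mat']
print(heavy_preprocess("The cat is on the mat."))  # ['cat', 'mat']
</code></pre>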
<p>I'm currently working on generating Latin text and therefore I need to keep quite a lot of structure in the data.</p>
<p>I found a very interesting paper doing some analysis on that topic, but it covers only a small area. However, it might give you some more hints:</p>
<p>On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis
by Jose Camacho-Collados and Mohammad Taher Pilehvar</p>
<p><a href="https://arxiv.org/pdf/1707.01780.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1707.01780.pdf</a></p>
<p>Here is a quote from their conclusion:</p>
<p>"Our evaluation highlights the importance of being consistent in the preprocessing strategy employed across training and evaluation data. In general a simple tokenized corpus works equally or better than more complex preprocessing techniques such as lemmatization or multiword grouping, except for a dataset corresponding to a specialized domain, like health, in which sole tokenization performs poorly. Addi- tionally, word embeddings trained on multiword- grouped corpora perform surprisingly well when applied to simple tokenized datasets."</p> | 2017-09-30 05:52:45.600000+00:00 | 2017-09-30 05:52:45.600000+00:00 | null | null | 44,291,798 | <p>In the traditional "one-hot" representation of words as vectors you have a vector of the same dimension as the cardinality of your vocabulary. To reduce dimensionality usually stopwords are removed, as well as applying stemming, lemmatizing, etc. to normalize the features you want to perform some NLP task on.</p>
<p>I'm having trouble understanding whether/how to preprocess text to be embedded (e.g. word2vec). My goal is to use these word embeddings as features for a NN to classify texts into topic A vs. not topic A, and then perform event extraction on documents of topic A (using a second NN).</p>
<p>My first instinct is to preprocess by removing stopwords, lemmatizing, stemming, etc. But as I learn about NNs a bit more, I realize that applied to natural language, the CBOW and skip-gram models would in fact require the whole set of words to be present: to be able to predict a word from context, one would need to know the actual context, not a reduced form of the context after normalizing... right? The actual sequence of POS tags seems to be key for a human-feeling prediction of words.</p>
<p>I've found <a href="https://groups.google.com/forum/#!topic/word2vec-toolkit/TI-TQC-b53w" rel="noreferrer">some guidance online</a> but I'm still curious to know what the community here thinks:</p>
<ol>
<li>Are there any recent commonly accepted best practices regarding punctuation, stemming, lemmatizing, stopwords, numbers, lowercase etc?</li>
<li>If so, what are they? Is it better in general to process as little as possible, or more on the heavier side to normalize the text? Is there a trade-off?</li>
</ol>
<p>My thoughts: </p>
<p>It is better to remove punctuation (but e.g. in Spanish don't remove the accents, because they do convey contextual information), change written numbers to numeric, not lowercase everything (useful for entity extraction), and do no stemming and no lemmatizing. </p>
<p>Does this sound right?</p> | 2017-05-31 18:03:37.833000+00:00 | 2017-09-30 05:52:45.600000+00:00 | null | neural-network|nlp | ['https://arxiv.org/pdf/1707.01780.pdf'] | 1 |
72,402,527 | <p>The <code>setup_lifting</code> is only useful if you intend to use the lifting-and-transfer framework, for the details of which you can read <a href="https://www21.in.tum.de/%7Ekuncar/documents/huffman-kuncar-cpp2013.pdf" rel="nofollow noreferrer">Huffman and Kuncar's CPP paper</a>. Some recent progress in this framework is mostly related to bounded natural functors, for which you can have a look at this <a href="https://arxiv.org/abs/2104.05348" rel="nofollow noreferrer">paper</a>.</p>
<p>Proposing suitable relators requires insight into the data type. Here <code>setup_lifting</code> gives us some relatively easy relator theorems for free.</p>
<p>(Answer from the Isabelle Zulip Chat.)</p> | 2022-05-27 08:24:15.353000+00:00 | 2022-05-27 08:24:15.353000+00:00 | null | null | 72,392,804 | <p>In Isabelle/HOL, I would like to work with two custom <code>typedef</code>s that build on top of each other, and show that both of them instantiate certain type classes (e.g. from <code>HOL.Rings</code>). I do not understand what definitions or facts are required to properly call <code>setup_lifting</code> in this case. Moreover, I would like to understand what precise purpose this keyword serves after declaring a <code>typedef</code> (currently I simply try to mimic the structure of existing work) and what the so-called <em>relators</em> are which are used in the process. Neither the <em>locale</em> nor the <em>typeclass</em> PDF tutorials go into much detail on this command.</p>
<p>For explicitness, consider the <a href="https://www.isa-afp.org/browser_info/current/AFP/Polynomials/Polynomials.html" rel="nofollow noreferrer">AFP entry implementing polynomials</a>, which schematically defines monomials as</p>
<pre><code>type_synonym 'v monom_list = "('v × nat)list"
[...]
typedef (overloaded) 'v monom = "Collect (monom_inv :: ...)"
</code></pre>
<p>for an adequate invariant <code>monom_inv</code>. The same entry proceeds to define polynomials as</p>
<pre><code>type_synonym ('v,'a)poly = "('v monom × 'a)list"
</code></pre>
<p>and also defines a suitable invariant <code>poly_inv</code>.</p>
<p>After the typedef of <code>'v monom</code>, it appears to be no problem to call <code>setup_lifting type_definition_monom</code> as the necessary <em>relators</em> (?) for the underlying types (in particular lists) are part of the standard library. (Please correct if this is a misunderstanding of what's actually going on.)</p>
<p>However, after a new <code>typedef (overloaded) ('v, 'a) polyq = "Collect (poly_inv :: ...)"</code> (which is not part of the AFP entry cited above), things are different. Now, calling</p>
<pre><code>setup_lifting type_definition_polyq
</code></pre>
<p>yields a warning which reads: <em>Generation of a parametrized correspondence relation failed. Reason: No relator for the type "Polynomials.monom" found.</em></p>
<p><strong>Do I need to address this issue to afterwards call for example <code>instantiation polyq :: (...) comm_semiring_1</code>? If yes, how can I go about defining a relator for the <code>monom</code> type?</strong></p> | 2022-05-26 13:40:57.623000+00:00 | 2022-05-27 08:24:15.353000+00:00 | null | isabelle | ['https://www21.in.tum.de/%7Ekuncar/documents/huffman-kuncar-cpp2013.pdf', 'https://arxiv.org/abs/2104.05348'] | 2
54,056,457 | <p>There are a bunch of algorithms out there to convert graphs into feature vectors. Two famous examples are:</p>
<ul>
<li><a href="https://snap.stanford.edu/node2vec/" rel="nofollow noreferrer">Node2vec</a> from Stanford</li>
<li><a href="https://arxiv.org/abs/1403.6652" rel="nofollow noreferrer">Deepwalk</a> from Univ. Sherbrooke</li>
</ul>
<p>Their implementations are available on GitHub. </p>
<p>The underlying idea behind these methods is almost the same: 1) do random walks over the graph, 2) generate node sequences from those walks, 3) feed the sequences to word2vec (skip-gram) or another DL method, 4) use the output as a feature vector in other tasks.</p>
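<p>To make that pipeline concrete, here is a minimal DeepWalk-style sketch (my own toy code, not the official implementations; the gensim parameter names assume gensim 4.x):</p>
<pre><code>import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()

def random_walk(graph, start, length=10):
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]  # Word2Vec expects string tokens

# 1) + 2) random walks -> node "sentences"
walks = [random_walk(G, node) for node in G.nodes() for _ in range(20)]

# 3) skip-gram over the walks
model = Word2Vec(walks, vector_size=64, window=5, min_count=0, sg=1)

# 4) per-node feature vector for downstream ML
print(model.wv["0"].shape)  # (64,)
</code></pre>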
<p><a href="https://i.stack.imgur.com/FwCWf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FwCWf.png" alt="enter image description here"></a></p> | 2019-01-05 21:31:32.767000+00:00 | 2019-01-05 21:31:32.767000+00:00 | null | null | 54,055,663 | <p>I have almost 1000 of pandas DataFrame, which I converted into graphs. Now I can access edges and nodes of those graphs. For one DataFrame it looks like following:</p>
<pre><code>nx.edges(FG)
Out[59]: EdgeView([('Dy0O7', 'Dy1O6'), ('Dy0O7', 'Dy2O6'), ('Dy0O7', 'Dy3O7'), ('Dy0O7', 'Dy4O6'), ('Dy1O6', 'Dy3O7'), ('Dy1O6', 'Dy5O6'), ('Dy2O6', 'Dy4O6'), ('Dy3O7', 'Dy4O6'), ('Dy3O7', 'Dy5O6')])
nx.nodes(FG)
Out[61]: NodeView(('Dy0O7', 'Dy1O6', 'Dy2O6', 'Dy3O7', 'Dy4O6', 'Dy5O6'))
</code></pre>
<p>Also I can have adjancy view, which gives the information about connected nodes with corresponding weights.</p>
<pre><code>FG.adj
Out[64]: AdjacencyView({'Dy0O7': {'Dy1O6': {'weight': 3.0}, 'Dy2O6': {'weight': 1.0}, 'Dy3O7': {'weight': 2.0}, 'Dy4O6': {'weight': 1.0}}, 'Dy1O6': {'Dy0O7': {'weight': 3.0}, 'Dy3O7': {'weight': 1.0}, 'Dy5O6': {'weight': 1.0}}, 'Dy2O6': {'Dy0O7': {'weight': 1.0}, 'Dy4O6': {'weight': 1.0}}, 'Dy3O7': {'Dy0O7': {'weight': 2.0}, 'Dy1O6': {'weight': 1.0}, 'Dy4O6': {'weight': 3.0}, 'Dy5O6': {'weight': 1.0}}, 'Dy4O6': {'Dy0O7': {'weight': 1.0}, 'Dy2O6': {'weight': 1.0}, 'Dy3O7': {'weight': 3.0}}, 'Dy5O6': {'Dy1O6': {'weight': 1.0}, 'Dy3O7': {'weight': 1.0}}})
</code></pre>
<p>I want to use such graph properties as input to a machine learning algorithm such as a NN. How can I do that?</p> | 2019-01-05 19:51:04.400000+00:00 | 2019-01-05 21:32:22.063000+00:00 | 2019-01-05 21:32:22.063000+00:00 | machine-learning|deep-learning | ['https://snap.stanford.edu/node2vec/', 'https://arxiv.org/abs/1403.6652', 'https://i.stack.imgur.com/FwCWf.png'] | 3
49,776,477 | <p>Generally speaking, you are correct. The input shape should be (window length, n features).</p>
<p>However, there has been some success in transforming the input in the way you describe above. Below is a whitepaper where they were able to beat many top-performing algorithms by doing so; they used 1D convolutional layers to handle the time-series pattern through a separate input.</p>
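<p>As a quick sketch of the difference (a hedged example; the shapes and stand-in data are my own, assuming a univariate series and a window of <code>look_back</code> steps):</p>
<pre><code>import numpy
from keras.models import Sequential
from keras.layers import LSTM, Dense

look_back = 25
trainX = numpy.zeros((100, look_back))          # stand-in data
# (samples, timesteps, features): let the LSTM unroll over the window
trainX = numpy.reshape(trainX, (trainX.shape[0], look_back, 1))

model = Sequential()
model.add(LSTM(4, input_shape=(look_back, 1)))  # 25 timesteps, 1 feature
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
</code></pre>
<p>With <code>input_shape=(1, look_back)</code>, as in the tutorial, the recurrence only runs for a single step, which is why it behaves like a plain feed-forward layer.</p>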
<p><a href="https://arxiv.org/pdf/1709.05206.pdf" rel="nofollow noreferrer">LSTM Fully Convolutional Networks for Time
Series Classification</a></p> | 2018-04-11 13:30:40.197000+00:00 | 2018-04-11 13:30:40.197000+00:00 | null | null | 49,776,331 | <p>I came across a tutorial where the author uses an LSTM network for a time series prediction like this:</p>
<pre class="lang-python prettyprint-override"><code>trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = numpy.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
</code></pre>
<p>We agree that the LSTM here acts like a normal NN (and is useless?) since the LSTM gets only one time step without stateful = True. Am I right?</p> | 2018-04-11 13:24:13.240000+00:00 | 2018-04-11 13:56:02.213000+00:00 | 2018-04-11 13:56:02.213000+00:00 | deep-learning|keras|lstm | ['https://arxiv.org/pdf/1709.05206.pdf'] | 1
28,392,269 | <p>You could use any of the objective functions optimized when performing community detection, see (<a href="http://arxiv.org/abs/0906.0612" rel="nofollow">Fortunato 09</a>). The most widespread one is certainly the <em>modularity</em> (<a href="http://arxiv.org/abs/cond-mat/0308217" rel="nofollow">Newman & Girvan 03</a>), which is implemented in Gephi (see <a href="https://wiki.gephi.org/index.php/Modularity" rel="nofollow">this page</a>), and most other tools.</p>
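<p>For instance, here is a quick sketch with networkx (my choice of tool, assuming a recent version; Gephi will give you the same modularity number from its GUI):</p>
<pre><code>import networkx as nx
from networkx.algorithms.community import modularity

G = nx.karate_club_graph()
# the two communities, defined intrinsically by a node attribute
part1 = {n for n, d in G.nodes(data=True) if d["club"] == "Mr. Hi"}
part2 = set(G.nodes()) - part1

print(modularity(G, [part1, part2]))    # higher = better separated communities
print(nx.cut_size(G, part1, part2))     # raw number of links between the parts
print(nx.conductance(G, part1, part2))  # a normalized variant (see the edit below)
</code></pre>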
<p>Edit: from your comments, you might be more interested in the <a href="http://en.wikipedia.org/wiki/Cut_%28graph_theory%29" rel="nofollow">cut size</a> (number of links between the two communities), or one of its normalized variants such as the conductance, ratio cut, normalized cut. See <a href="http://arxiv.org/abs/0906.0612" rel="nofollow">Fortunato'10</a> (section 4.1) for a review.</p> | 2015-02-08 09:01:40.987000+00:00 | 2015-02-16 08:02:54.033000+00:00 | 2015-02-16 08:02:54.033000+00:00 | null | 28,346,924 | <p>Consider a large graph with several thousands nodes. Nodes belong to two communities (defined intrinsically by a node attribute). I am looking for a metric (if possible already implemented in gephi, cytoscape, or other software) able to tell me how much these two communities/subgraphs are interconnected (and compare several case studies). I am sure this must be a standard problem for people studying social dynamics in communities or network theory...</p> | 2015-02-05 14:47:34.737000+00:00 | 2015-02-16 08:02:54.033000+00:00 | null | networking|graph|connectivity | ['http://arxiv.org/abs/0906.0612', 'http://arxiv.org/abs/cond-mat/0308217', 'https://wiki.gephi.org/index.php/Modularity', 'http://en.wikipedia.org/wiki/Cut_%28graph_theory%29', 'http://arxiv.org/abs/0906.0612'] | 5 |
70,571,549 | <p>Yes, you can use transformer-based models for NER task. You can check this paper <a href="https://arxiv.org/abs/1810.04805" rel="nofollow noreferrer">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a>. The method can be employing a pre-trained BERT model by fine-tuning it to NER task.</p> | 2022-01-03 21:17:09.923000+00:00 | 2022-01-03 21:17:09.923000+00:00 | null | null | 59,425,902 | <p>Can we use neural machine translation (like seq2seq) for named entity recognition?such as USING TRANSFORMER nerual network FOR NER TASK. SOURCE IS WORD SEQUENCE, TARGET IS TAG sequence LIKE "o o o PERSON o o o location",is it possible? </p> | 2019-12-20 13:18:20.277000+00:00 | 2022-01-03 21:17:09.923000+00:00 | null | tensorflow|pytorch | ['https://arxiv.org/abs/1810.04805'] | 1 |
71,907,453 | <p>I've also been working in this super-resolution field and have found some promising results, though I haven't tried them yet.
The <a href="https://doi.org/10.1016/j.heliyon.2021.e08341" rel="nofollow noreferrer">first paper</a> (license-plate-based text) implements the image enhancement first and then does the super-resolution in a later stage.
In the <a href="https://arxiv.org/abs/2106.15368" rel="nofollow noreferrer">second paper</a> (with its <a href="https://github.com/mjq11302010044/TPGSR" rel="nofollow noreferrer">github</a>), they use a text prior to guide the super-resolution network.</p> | 2022-04-18 03:54:53.310000+00:00 | 2022-04-18 03:54:53.310000+00:00 | null | null | 64,808,986 | <p>I am working on an OCR system. A challenge I'm facing in recognizing the text within the <strong>ROI</strong> is <strong>shakiness</strong> or a <strong>motion effect</strong> in the shot, or text that is <strong>not in focus</strong> due to <strong>angle positions</strong>. Please consider the following demo sample.</p>
<p><a href="https://i.stack.imgur.com/R3UwZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R3UwZ.jpg" alt="enter image description here" /></a></p>
<p>If you notice the texts (for example, the ones marked in red), in such cases the OCR system can't properly recognize the text. However, this scenario can also occur with a straight-on shot where the image is so blurry that the OCR system can't recognize, or only partially recognizes, the text. Sometimes the texts are <strong>blurry</strong>, or very <strong>low resolution</strong>, or <strong>pixelated</strong>. For example:</p>
<p><a href="https://i.stack.imgur.com/dpkKl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dpkKl.jpg" alt="enter image description here" /></a></p>
<h2>Methods we've tried</h2>
<p>Firstly, we tried various methods available on SO, but sadly with no luck.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/54497882/how-improve-image-quality-to-extract-text-from-image-using-tesseract">How to improve image quality to extract text from image using Tesseract</a></li>
<li><a href="https://stackoverflow.com/questions/52004133/how-to-improve-image-quality">How to improve image quality? [closed]</a></li>
<li><a href="https://stackoverflow.com/questions/32848301/image-quality-improvement-in-opencv">Image quality improvement in Opencv</a></li>
</ul>
<p>Next, we tried the following three most promising methods.</p>
<p><strong>1.TSRN</strong></p>
<p>A recent research work (<a href="https://arxiv.org/abs/2005.03341" rel="nofollow noreferrer">TSRN</a>) mainly focuses on such cases. The main intuition behind it is to introduce <strong>super-resolution</strong> (SR) techniques as pre-processing. This <a href="https://github.com/JasonBoy1/TextZoom" rel="nofollow noreferrer">implementation</a> looks by far the most promising. However, it fails to do magic on our custom dataset (for example the second image above, the blue text). Here are some examples from their demonstration:</p>
<p><a href="https://i.stack.imgur.com/N0vc5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N0vc5.jpg" alt="enter image description here" /></a></p>
<p><strong>2. Neural Enhance</strong></p>
<p>After looking at its illustration on <a href="https://github.com/alexjc/neural-enhance" rel="nofollow noreferrer">its page</a>, we believed it might work. But sadly it also couldn't address the problem. However, I was a bit confused even by their showcased examples, because I couldn't reproduce them either. I've raised an <a href="https://github.com/alexjc/neural-enhance/issues/259" rel="nofollow noreferrer">issue on github</a> where I demonstrate this in more detail. Here are some examples from their demonstration:</p>
<p><a href="https://i.stack.imgur.com/PDPIL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PDPIL.jpg" alt="enter image description here" /></a></p>
<p><strong>3. ISR</strong></p>
<p>The last choice, with minimal hope, was <a href="https://github.com/idealo/image-super-resolution" rel="nofollow noreferrer">this</a> implementation. No luck either.</p>
<h3>Update 1</h3>
<ul>
<li><p>[Method]: Apart from the above, we also tried some traditional approaches such as the <a href="https://docs.opencv.org/master/de/d3c/tutorial_out_of_focus_deblur_filter.html" rel="nofollow noreferrer">Out-of-focus Deblur Filter</a> (Wiener filter and also the unsupervised Wiener filter). We also checked the <a href="https://scikit-image.org/docs/dev/auto_examples/filters/plot_deconvolution.html" rel="nofollow noreferrer">Richardson-Lucy</a> method, but saw no improvement with this approach either.</p>
</li>
<li><p>[Method]: We’ve checked out a GAN-based deblurring solution, <a href="https://github.com/KupynOrest/DeblurGAN" rel="nofollow noreferrer">DeblurGAN</a>. I have tried this network; what attracted me was the approach of its <strong>blind motion deblurring</strong> mechanism.</p>
</li>
</ul>
<p>Lastly, from this <a href="https://stackoverflow.com/questions/48674106/deblur-image-with-text-to-be-recognized-by-ocr">discussion</a> we encountered <a href="http://www.fit.vutbr.cz/%7Eihradis/CNN-Deblur/" rel="nofollow noreferrer">this research work</a>, which seems quite promising. We haven't tried it yet.</p>
<p><a href="https://i.stack.imgur.com/xhozQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xhozQ.jpg" alt="enter image description here" /></a></p>
<h3>Update 2</h3>
<ol>
<li><p>[Method]: <strong>Real-World Super-Resolution via Kernel Estimation and Noise Injection</strong>
We tried this method. It is promising; however, it didn't work in our case. <a href="https://github.com/jixiaozhong/RealSR" rel="nofollow noreferrer">Code</a>.</p>
</li>
<li><p>[Method]: <strong>Photo Restoration</strong>
Compared to all the above methods, it surprisingly performs best at text super-resolution for OCR. It greatly removes noise, blurriness, etc., makes the image much clearer, and improves model generalization. <a href="https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life" rel="nofollow noreferrer">Code</a>.</p>
</li>
</ol>
<h2>My Query</h2>
<p>Are there any effective workarounds to tackle such cases? Any methods that could improve such <strong>blurry</strong> or <strong>low-resolution</strong> pixels, whether the text is <strong>up close</strong> or <strong>far away</strong> due to the camera angle?</p> | 2020-11-12 17:41:20.980000+00:00 | 2022-04-18 03:54:53.310000+00:00 | 2022-01-14 19:54:40.340000+00:00 | python|opencv|ocr|generative-adversarial-network|text-recognition | ['https://doi.org/10.1016/j.heliyon.2021.e08341', 'https://arxiv.org/abs/2106.15368', 'https://github.com/mjq11302010044/TPGSR'] | 3
36,760,930 | <p>There's <a href="https://github.com/glample/tagger" rel="noreferrer">this implementation</a> by Guillaume Lample from the paper "<a href="http://arxiv.org/abs/1603.01360" rel="noreferrer">Neural Architectures for Named Entity Recognition</a>" that you can use for starter.</p> | 2016-04-21 06:05:07.410000+00:00 | 2016-04-21 06:05:07.410000+00:00 | null | null | 33,078,423 | <p>I need to implement a bidirectional LSTM network with a CRF layer at the end. Specifically the model presented in this paper, and train it.</p>
<p><a href="http://www.aclweb.org/anthology/P15-1109">http://www.aclweb.org/anthology/P15-1109</a></p>
<p>I want to implement it in Python preferably. Can anyone suggest some libraries or sample code showing how this can be done? I looked at PyBrain but couldn't really understand it.</p>
<p>I'm also open to tool-kits in other programming languages.</p> | 2015-10-12 10:05:02.927000+00:00 | 2017-12-14 23:55:36.393000+00:00 | null | python|crf|lstm | ['https://github.com/glample/tagger', 'http://arxiv.org/abs/1603.01360'] | 2 |
7,842 | <p>You can do this in two lines in Python with</p>
<pre><code>allSums = set(a+b for a in X for b in X)
allSums = sorted(allSums)
</code></pre>
<p>The cost of this is <code>n^2</code> (maybe an extra log factor for the set?) for the iteration and s * log(s) for the sorting where s is the size of the set.</p>
<p>The size of the set could be as big as <code>n*(n-1)/2</code> for example if <code>X = [1,2,4,...,2^n]</code>. So if you want to generate this list it will take at least <code>n^2/2</code> in the worst case since this is the size of the output.</p>
<p>However if you want to select the first k elements of the result you can do this in O(kn) using a selection algorithm for sorted <code>X+Y</code> matrices by Frederickson and Johnson (<a href="http://arxiv.org/abs/0804.0936" rel="nofollow noreferrer">see here for gory details)</a>. Although this can probably be modified to generate them online by reusing computation and get an efficient generator for this set.</p>
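<p>As a concrete sketch of that online-generation idea (not Frederickson and Johnson's algorithm itself, just a simple heap-based generator costing O(log n) per emitted sum with O(n) memory):</p>
<pre><code>import heapq
import itertools

def sorted_pair_sums(xs):
    """Yield xs[i] + xs[j] (i <= j) in ascending order; xs must be sorted."""
    heap = [(xs[0] + xs[j], 0, j) for j in range(len(xs))]
    heapq.heapify(heap)
    while heap:
        s, i, j = heapq.heappop(heap)
        yield s
        if i + 1 <= j:
            heapq.heappush(heap, (xs[i + 1] + xs[j], i + 1, j))

print(list(itertools.islice(sorted_pair_sums([1, 2, 4, 8]), 6)))
# [2, 3, 4, 5, 6, 8]
</code></pre>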
<p>@deuseldorf, Peter
There is some confusion about <code>(n!)</code>. I seriously doubt deuseldorf meant "n factorial" but simply "n, (very excited)!"</p> | 2008-08-11 14:47:31.227000+00:00 | 2020-07-27 01:15:48.273000+00:00 | 2020-07-27 01:15:48.273000+00:00 | null | 826 | <p>You have an ascending list of numbers; what is the most efficient algorithm you can think of to get the ascending list of sums of every two numbers in that list? Duplicates in the resulting list are irrelevant; you can remove them or avoid them if you like.</p>
<p>To be clear, I'm interested in the algorithm. Feel free to post code in any language and paradigm that you like.</p> | 2008-08-03 21:08:54.977000+00:00 | 2020-07-27 01:15:48.273000+00:00 | 2008-08-03 21:38:52.623000+00:00 | algorithm|language-agnostic | ['http://arxiv.org/abs/0804.0936'] | 1 |
49,294,072 | <p>It should still work with somewhat similar results. There are no problems with skipping only one layer.</p>
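<p>For illustration, a minimal single-convolution residual block might look like this (a sketch in PyTorch; the channel count and kernel size are my own choices):</p>
<pre><code>import torch

class OneConvResidual(torch.nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # padding=1 keeps the spatial size so the shapes can be added
        self.conv = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x) + x)

block = OneConvResidual()
print(block(torch.randn(1, 32, 8, 8)).shape)  # torch.Size([1, 32, 8, 8])
</code></pre>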
<p>See <a href="https://openreview.net/forum?id=HkwBEMWCZ" rel="nofollow noreferrer">https://openreview.net/forum?id=HkwBEMWCZ</a>
and densenets <a href="https://arxiv.org/abs/1608.06993" rel="nofollow noreferrer">https://arxiv.org/abs/1608.06993</a></p> | 2018-03-15 07:53:03.390000+00:00 | 2018-03-15 07:53:03.390000+00:00 | null | null | 49,293,450 | <p><a href="https://i.stack.imgur.com/TfMe9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TfMe9.png" alt="enter image description here"></a></p>
<p>This figure shows a basic block of a residual network. Why does it have two convolutional layers? What will happen when it has only one convolutional layer?</p> | 2018-03-15 07:09:14.340000+00:00 | 2018-03-15 07:53:03.390000+00:00 | null | machine-learning|deep-learning|conv-neural-network|deeplearning4j|deep-residual-networks | ['https://openreview.net/forum?id=HkwBEMWCZ', 'https://arxiv.org/abs/1608.06993'] | 2
51,815,101 | <p>I have found the following in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition":</p>
<blockquote>
<p>Rather than using relatively large receptive fields in the first conv. layers (e.g. 11×11 with stride 4 in (Krizhevsky et al., 2012), or 7×7 with stride 2 in (Zeiler & Fergus, 2013; Sermanet et al., 2014)), we use very small 3 × 3 receptive fields throughout the whole net, which are convolved with the input at every pixel (with stride 1). It is easy to see that a stack of two 3×3 conv. layers (without spatial pooling in between) has an effective receptive field of 5×5; three such layers have a 7 × 7 effective receptive field. </p>
</blockquote>
<ol>
<li><p>Two stacked 3*3 convolution filters have the same effective receptive field as one 5*5 convolution filter.</p></li>
<li><p>Two 3*3 convolution filters have fewer parameters than one 5*5 convolution filter (see the quick check after this list).</p></li>
<li><p>Two 3*3 convolution filters make the network deeper and extract more complex features than one 5*5 convolution filter.</p></li>
</ol>
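<p>A quick check of the parameter counts mentioned in point 2 (a sketch; bias terms ignored, and 64 channels is just an example I picked):</p>
<pre><code>C = 64  # input and output channels

params_two_3x3 = 2 * (3 * 3 * C * C)  # 73,728
params_one_5x5 = 5 * 5 * C * C        # 102,400

print(params_two_3x3, params_one_5x5)
</code></pre>
<p>So the stacked 3×3 layers cover the same 5×5 receptive field with roughly 28% fewer parameters, plus an extra nonlinearity in between.</p>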
<p>Paper: <a href="https://arxiv.org/pdf/1409.1556.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1409.1556.pdf</a></p> | 2018-08-13 04:11:14.263000+00:00 | 2018-08-15 00:11:37.097000+00:00 | 2018-08-15 00:11:37.097000+00:00 | null | 51,015,834 | <p>For example, as to the <code>famous AlexNet architecture</code> <a href="https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf" rel="nofollow noreferrer">(original paper)</a>, what's the difference between using <code>two 3*3 convolution filters</code> and using <code>one 5*5 convolution filter</code>?</p>
<p>The <code>two 3*3 convolution filters</code> and <code>one 5*5 convolution filter</code> have been highlighted by the red rectangles in the images below.</p>
<p><a href="https://i.stack.imgur.com/sY2pe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sY2pe.png" alt="https://medium.com/@smallfishbigsea/a-walk-through-of-alexnet-6cbd137a5637"></a></p>
<p><a href="https://i.stack.imgur.com/CWVZr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CWVZr.png" alt="enter image description here"></a></p>
<p>What about using another <code>5*5 convolution filter</code> to supersede the <code>two 3*3 convolution filters</code>, or vice versa?</p> | 2018-06-25 03:11:59.243000+00:00 | 2020-12-29 03:15:39.277000+00:00 | null | deep-learning|conv-neural-network | ['https://arxiv.org/pdf/1409.1556.pdf'] | 1
51,114,946 | <p>You can look into these two papers, which try to answer your questions:</p>
<p><a href="http://jmlr.org/papers/v18/17-269.html" rel="nofollow noreferrer">http://jmlr.org/papers/v18/17-269.html</a></p>
<p><a href="https://arxiv.org/abs/1804.03515" rel="nofollow noreferrer">https://arxiv.org/abs/1804.03515</a></p>
<p>tuneRanger is a package specifically for tuning random forests in R. </p> | 2018-06-30 12:30:31.213000+00:00 | 2018-06-30 12:30:31.213000+00:00 | null | null | 51,087,897 | <p>I want to tune hyperparameters for a random forest using the mlr package. I have a few questions:</p>
<p>1) How do I decide which of the parameters I should tune? I heard something about keeping num.trees as high as computationally possible and tuning mtry? (I couldn't find anything online backing this up though)</p>
<p>2) What should my range for tuning mtry be? Is there a good rule of thumb, such as between 0 and 1/3 of the number of parameters? If so, how would I integrate that in the code below if I have different data sets (i.e., what would I write instead of lower = 0 and upper = 10)?</p>
<p>3) Lastly, does it even make sense to create the learner twice, once with makeLearner function where I set the parameter in par.vals and then once with makeTuneWrapper function? Doesn’t it overwrite it then anyways?</p>
<pre><code>learnerRF = makeLearner("regr.ranger", par.vals = list("num.trees" = 5000))
parsRF = makeParamSet(
makeIntegerParam("mtry", lower = 0 , upper = 10),
)
tuneRF = makeTuneControlGrid()
inner = makeResampleDesc("CV", iters = 10)
learnerRF = makeTuneWrapper(learnerRF, resampling = inner, par.set = parsRF,control = tuneRF, show.info = FALSE)
</code></pre> | 2018-06-28 16:45:48.617000+00:00 | 2018-07-03 07:18:57.030000+00:00 | null | random-forest|hyperparameters|mlr | ['http://jmlr.org/papers/v18/17-269.html', 'https://arxiv.org/abs/1804.03515'] | 2 |
57,233,045 | <p>You can't do it solely using <code>torch.nn.Sequential</code> as it requires operations to go, as the name suggests, sequentially, while yours are parallel.</p>
<p>You could, in principle, construct your own <code>block</code> really easily like this:</p>
<pre><code>import torch
class ResNet(torch.nn.Module):
def __init__(self, module):
super().__init__()
self.module = module
def forward(self, inputs):
return self.module(inputs) + inputs
</code></pre>
<p>Which one can use something like this:</p>
<pre><code>model = torch.nn.Sequential(
torch.nn.Conv2d(3, 32, kernel_size=7),
# 32 filters in and out, no max pooling so the shapes can be added
ResNet(
torch.nn.Sequential(
torch.nn.Conv2d(32, 32, kernel_size=3),
torch.nn.ReLU(),
torch.nn.BatchNorm2d(32),
torch.nn.Conv2d(32, 32, kernel_size=3),
torch.nn.ReLU(),
torch.nn.BatchNorm2d(32),
)
),
# Another ResNet block, you could make more of them
# Downsampling using maxpool and others could be done in between etc. etc.
ResNet(
torch.nn.Sequential(
torch.nn.Conv2d(32, 32, kernel_size=3),
torch.nn.ReLU(),
torch.nn.BatchNorm2d(32),
torch.nn.Conv2d(32, 32, kernel_size=3),
torch.nn.ReLU(),
torch.nn.BatchNorm2d(32),
)
),
    # Pool all the 32 filters to 1; you may need `torch.squeeze` after this layer
torch.nn.AdaptiveAvgPool2d(1),
# 32 10 classes
torch.nn.Linear(32, 10),
)
</code></pre>
<p>A fact that is usually overlooked (without real consequences when it comes to shallower networks) is that the skip connection should be left <strong>without</strong> any nonlinearities like <code>ReLU</code> or convolutional layers, and that's what you can see above (source: <a href="https://arxiv.org/pdf/1603.05027.pdf" rel="noreferrer">Identity Mappings in Deep Residual Networks</a>).</p> | 2019-07-27 14:27:49.813000+00:00 | 2020-09-09 18:16:27.007000+00:00 | 2020-09-09 18:16:27.007000+00:00 | null | 57,229,054 | <p>I want to implement a ResNet network (or rather, <strong>residual blocks</strong>) but I really want it to be in the sequential network form.</p>
<p>What I mean by sequential network form is the following:</p>
<pre><code>## mdl5, from cifar10 tutorial
from collections import OrderedDict
import torch.nn as nn

mdl5 = nn.Sequential(OrderedDict([
('pool1', nn.MaxPool2d(2, 2)),
('relu1', nn.ReLU()),
('conv1', nn.Conv2d(3, 6, 5)),
    ('pool2', nn.MaxPool2d(2, 2)),  # renamed from the duplicate 'pool1' key, which silently dropped a layer
('relu2', nn.ReLU()),
('conv2', nn.Conv2d(6, 16, 5)),
    ('relu3', nn.ReLU()),  # renamed from the duplicate 'relu2' key
('Flatten', Flatten()),
('fc1', nn.Linear(1024, 120)), # figure out equation properly
('relu4', nn.ReLU()),
('fc2', nn.Linear(120, 84)),
('relu5', nn.ReLU()),
('fc3', nn.Linear(84, 10))
]))
</code></pre>
<p>but of course with the NN lego blocks being “ResNet”.</p>
<p>I know the equation is something like:</p>
<p><a href="https://i.stack.imgur.com/gn2va.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gn2va.png" alt="enter image description here" /></a></p>
<p>but I am not sure how to do it in Pytorch AND Sequential. Sequential is key for me!</p>
<hr />
<h1>Bounty:</h1>
<p>I'd like to see an example with a fully connected net, showing where the BN layers would have to go (and where the dropout layers would go too). Ideally on a toy example/data if possible.</p>
<hr />
<p>Cross-posted:</p>
<ul>
<li><a href="https://discuss.pytorch.org/t/how-to-have-residual-network-using-only-sequential-blocks/51541" rel="nofollow noreferrer">https://discuss.pytorch.org/t/how-to-have-residual-network-using-only-sequential-blocks/51541</a></li>
<li><a href="https://www.quora.com/unanswered/How-does-one-implement-my-own-ResNet-with-torch-nn-Sequential-in-Pytorch" rel="nofollow noreferrer">https://www.quora.com/unanswered/How-does-one-implement-my-own-ResNet-with-torch-nn-Sequential-in-Pytorch</a></li>
<li><a href="https://www.reddit.com/r/pytorch/comments/uyyr28/how_to_implement_my_own_resnet_with/" rel="nofollow noreferrer">https://www.reddit.com/r/pytorch/comments/uyyr28/how_to_implement_my_own_resnet_with/</a></li>
</ul> | 2019-07-27 04:08:33.397000+00:00 | 2022-05-27 13:54:50.927000+00:00 | 2022-05-27 13:54:50.927000+00:00 | machine-learning|neural-network|deep-learning|conv-neural-network|pytorch | ['https://arxiv.org/pdf/1603.05027.pdf'] | 1 |
64,961,525 | <p>This is the code I developed to auto-fill the URL field from the DOI or arXiv number. All URLs are hyperlinked.</p>
<p>You must use</p>
<blockquote>
<p>\usepackage[hidelinks]{hyperref}</p>
</blockquote>
<p>for the hyperlinks to work. Here is the code I made for the .bst file.</p>
<pre><code>FUNCTION {format.url}
{ is.use.url
{ url empty$
{ doi empty$
{ eprint empty$
{ ""
}
{ this.to.prev.status
this.status.std
cap.yes 'status.cap :=
"\href{https://arxiv.org/pdf/" eprint * "}{ [Online]. Available: https://arxiv.org/pdf/" * eprint * "}" * output
punct.no 'this.status.punct :=
punct.period 'prev.status.punct :=
space.normal 'this.status.space :=
space.normal 'prev.status.space :=
quote.no 'prev.status.quote :=
}
if$
}
{ this.to.prev.status
this.status.std
cap.yes 'status.cap :=
"\href{http://dx.doi.org/" doi * "}{[Online]. Available: http://dx.doi.org/" * doi * "}" * output
punct.no 'this.status.punct :=
punct.period 'prev.status.punct :=
space.normal 'this.status.space :=
space.normal 'prev.status.space :=
quote.no 'this.status.quote :=
}
if$
}
{ this.to.prev.status
this.status.std
cap.yes 'status.cap :=
name.url.prefix " " *
"\url{" * url * "}" *
punct.no 'this.status.punct :=
punct.period 'prev.status.punct :=
space.normal 'this.status.space :=
space.normal 'prev.status.space :=
quote.no 'this.status.quote :=
}
if$
}
{ "" }
if$
}
</code></pre> | 2020-11-23 01:36:25.463000+00:00 | 2020-11-23 01:36:25.463000+00:00 | null | null | 64,307,157 | <p>How can I add a \url{} command to a .bst function? In this case, if the URL field is empty, I want it to be filled with data from the DOI field. See the code below. I need to add this to the 4th line, but every way I have tried just causes it to crash.</p>
<pre><code>FUNCTION {format.url}
{ is.use.url
{ url empty$
{"[Online]. Available: https://doi.org/" doi * }
{ this.to.prev.status
this.status.std
cap.yes 'status.cap :=
name.url.prefix " " *
"\url{" * url * "}" *
punct.no 'this.status.punct :=
punct.period 'prev.status.punct :=
space.normal 'this.status.space :=
space.normal 'prev.status.space :=
quote.no 'this.status.quote :=
}
if$
}
{ "" }
if$
}
</code></pre>
<p>This comes from the IEEE.bst file and can be found around line 1920.</p> | 2020-10-11 17:41:05.263000+00:00 | 2020-11-23 01:36:25.463000+00:00 | null | latex|tex|bibtex|biblatex | [] | 0
60,666,752 | <p>A properly trained FaceNet model should already be somewhat invariant to lighting conditions, pose and other features that should not be a part of identifying a face. At least that is what is claimed in a draft of the <a href="https://arxiv.org/pdf/1503.03832.pdf" rel="nofollow noreferrer">FaceNet paper</a>. If you only intend to compare feature vectors generated from the network, and intend to recognize a small group of people, your own dataset likely does not have to be particularly large.</p>
<p>Personally I have done something quite similar to what you are trying to achieve for a group of around ~100 people. The dataset consisted of 1 image per person and I used a 1-NN classifier to classify the generated feature vectors. While I do not remember the exact results, it did work quite well. The pretrained network's architecture was different from FaceNet's, but the overall idea was the same.</p>
<p>The only way to truly answer your question though would be to experiment and see how well things work out in practice.</p> | 2020-03-13 07:54:26.630000+00:00 | 2020-03-13 07:54:26.630000+00:00 | null | null | 60,665,931 | <p>I am working on building a custom facial recognition for our office.</p>
<p>I am planning to use <strong>Google FaceNet</strong>.
Now, my question is this: you can find or create your own version of the <strong>FaceNet</strong> model in Keras or PyTorch, so there's no issue in that. But regarding creating the <strong>dataset</strong>, I want to know the best practices for capturing photos of a person when I don't have any prior photo of that person; all I have is a <strong>camera</strong> and the person. Should I create variance by changing the <strong>lighting</strong> conditions, the <strong>orientation</strong>, or the face <strong>size</strong>?</p> | 2020-03-13 06:43:14.073000+00:00 | 2020-03-13 07:54:26.630000+00:00 | null | machine-learning|deep-learning|computer-vision|dataset|facial-identification | ['https://arxiv.org/pdf/1503.03832.pdf'] | 1
27,098,116 | <p><em>Labeling</em> topics is completely distinct from topic modeling. Here's an article that describes using a keyword extraction technique (KERA) to apply meaningful labels to topics: <a href="http://arxiv.org/abs/1308.2359" rel="nofollow">http://arxiv.org/abs/1308.2359</a></p> | 2014-11-24 04:57:07.277000+00:00 | 2014-11-24 04:57:07.277000+00:00 | null | null | 27,097,779 | <p>I am new to Python and trying to implement topic modelling. I have successfully implemented LDA in Python using gensim, but I am not able to give any label/name to these topics.
How do we name these topics? Please help with the best way to implement this in Python.
My LDA output is somewhat like this (please let me know if you need the code):</p>
<pre><code>0.024*research + 0.021*students + 0.019*conference + 0.019*chi + 0.017*field + 0.014*work + 0.013*student + 0.013*hci + 0.013*group + 0.013*researchers
0.047*research + 0.034*students + 0.020*ustars + 0.018*underrepresented + 0.017*participants + 0.012*researchers + 0.012*mathematics + 0.012*graduate + 0.012*mathematical + 0.012*conference
0.027*students + 0.026*research + 0.018*conference + 0.017*field + 0.015*new + 0.014*participants + 0.013*chi + 0.012*robotics + 0.010*researchers + 0.010*student
0.023*students + 0.019*robotics + 0.018*conference + 0.017*international + 0.016*interact + 0.016*new + 0.016*ph.d. + 0.016*meet + 0.016*ieee + 0.015*u.s.
0.033*research + 0.030*flow + 0.028*field + 0.023*visualization + 0.020*challenges + 0.017*students + 0.015*project + 0.013*shape + 0.013*visual + 0.012*data
0.044*research + 0.020*mathematics + 0.017*program + 0.014*june + 0.014*conference + 0.014*- + 0.013*mathematicians + 0.013*conferences + 0.011*field + 0.011*mrc
0.023*research + 0.021*students + 0.015*field + 0.014*hovering + 0.014*mechanisms + 0.014*dpiv + 0.013*aerodynamic + 0.012*unsteady + 0.012*conference + 0.012*hummingbirds
0.031*research + 0.018*mathematics + 0.016*program + 0.014*flow + 0.014*mathematicians + 0.012*conferences + 0.011*field + 0.011*june + 0.010*visualization + 0.010*communities
0.028*students + 0.028*research + 0.018*ustars + 0.018*mathematics + 0.015*underrepresented + 0.010*program + 0.010*encouraging + 0.010*'', + 0.010*participants + 0.010*conference
0.049*research + 0.021*conference + 0.021*program + 0.020*mathematics + 0.014*mathematicians + 0.013*field + 0.013*- + 0.011*conferences + 0.010*areas
</code></pre> | 2014-11-24 04:15:12.583000+00:00 | 2014-11-24 04:57:07.277000+00:00 | null | python|label|lda | ['http://arxiv.org/abs/1308.2359'] | 1
59,035,058 | <p>I recently read this paper: <a href="https://arxiv.org/pdf/1805.07917.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1805.07917.pdf</a>
I haven't used this method in particular, so I can't really vouch for its usefulness, but the explanation of this problem seemed convincing to me:</p>
<blockquote>
<p>For instance, during the course of learning, the cheetah benefits from leaning forward to increase its speed which gives rise to a strong gradient in this direction. However, if the cheetah leans too much, it falls over. The gradient-based methods seem to often fall into this trap and then fail to recover as the gradient information from the new state has no guarantees of undoing the last gradient update.</p>
</blockquote> | 2019-11-25 15:23:44.523000+00:00 | 2019-11-25 15:23:44.523000+00:00 | null | null | 59,032,466 | <p>Hi, I am training reinforcement learning agents for a control problem using the PPO algorithm. I am tracking the accumulated rewards for each episode during the training process. Several times during training I see a sudden dip in the accumulated rewards. I am not able to figure out why this is happening or how to avoid it. I have tried changing some of the hyperparameters, like the number of neurons in the neural network layers and the learning rate, but I still see this happening consistently.
If I debug and check the actions that are being taken during the dips, the actions are obviously very bad, hence causing a decrease in rewards.</p>
<p>Can someone help me understand why this is happening or how to avoid it?</p>
<p>Some plots of my training process:</p>
<p><a href="https://i.stack.imgur.com/E1Oz9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E1Oz9.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/7lv1y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7lv1y.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/m3HZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m3HZD.png" alt="enter image description here"></a></p> | 2019-11-25 13:01:14.217000+00:00 | 2019-11-25 15:23:44.523000+00:00 | null | artificial-intelligence|reinforcement-learning|agent|temporal-difference|dqn | ['https://arxiv.org/pdf/1805.07917.pdf'] | 1 |
53,777,653 | <p>Unlike the core caches, the LLC has an additional level of set mapping. </p>
<p>You usually don't want entire regions of memory to map into adjacent sets in the same LLC slice, because then a core that is far from that slice would have horrible access times. Instead, for the common case, you want your data to be spread out as uniformly as possible. To achieve that, they added a hash function as part of the mapping that determines the slice.</p>
<p>The exception is use-cases where you want what's called "sub-NUMA clustering" or "core-clustering" - in these cases you can override that distribution by explicit configuration.</p>
<p>The hashing should work on higher bits as well to better scatter memory regions, so skipping by the LLC size won't work. You're still landing on the same set in <em>multiple</em> slices, which means that you get effectively the associativity multiplied by the number of slices. However, depending on your structure alignment, you wouldn't necessarily get optimal distribution, so you could start seeing jumps ahead of that associativity level.</p>
<p>Here's a paper with some more details about slice hashing: <a href="https://arxiv.org/pdf/1508.03767.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1508.03767.pdf</a></p> | 2018-12-14 10:15:07.513000+00:00 | 2018-12-14 10:15:07.513000+00:00 | null | null | 53,764,676 | <p>I'm trying to determine the associativity of my processor.
I have an Intel Core i5-2500:</p>
<p>L1 data: 32 Kb, 8-way set associative</p>
<p>L1 instruction: 32 Kb, 8-way set associative</p>
<p>L2: 256 Kb, 8-way set associative</p>
<p>L3: 6 Mb, 12-way set associative, shared between all cores</p>
<p>I measure the average access time to an element of an array in processor ticks. The array is divided into fragments.</p>
<p>In the loop, I increase the number of fragments. The distance between two adjacent fragments is equal to the L3 cache size. I access the first elements of all fragments, then the second elements, and so on. Each element contains the index of the next element. The last element contains the index of the first.</p>
<p>It looks like this: <a href="https://i.stack.imgur.com/ICErR.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>When the number of fragments becomes greater than the associativity of the cache, the average access time should increase.</p>
<p>I got the following results:
<a href="https://i.stack.imgur.com/63tQS.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>The first jump corresponds to the associativity of the TLB, and the second corresponds to the associativity of the L1 and L2 caches, but I do not understand why the time does not increase after exceeding the associativity of the L3 cache.</p>
<p>I also tried different sizes and offsets</p>
<p>Am I doing something wrong? Or do I have some mistakes in the code?</p>
<p>Can you please explain it to me?
Here is the code:</p>
<pre><code>#include <assert.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#define SIZE 6291456 //6 Mb
#define OFFSET 6291456
#define ROUNDS 200
#define MIN_FRAGMENTS_COUNT 1
#define MAX_FRAGMENTS_COUNT 32
void FreeArray(int* array, int size) {
assert(size > 0 && "Size must be gerater than zero\n");
if (NULL == array) {
return;
}
for (int i = 0; i < size; ++i) {
array[i] = 0;
}
free(array);
}
int* CreateArray(int size) {
assert(size > 0 && "Size must be greater than zero\n");
return calloc(size, sizeof(int));
}
unsigned long long int GetTicksCount(void) {
unsigned int high = 0;
unsigned int low = 0;
__asm__ __volatile__("rdtsc" : "=a"(low), "=d"(high));
return (((unsigned long long int)high) << 32 | (unsigned long long int)low);
}
void SetIndexes(int* array, int fragment_size, int offset,
int fragments_count) {
assert(NULL != array && "Pointer to array must not be NULL\n");
assert(fragment_size > 0 && "Fragmnet size must be greater than zero\n");
assert(offset > 0 && "Offset must be greater than zero\n");
assert(fragments_count > 0 && "Fragments count must be greater than zero\n");
assert(fragment_size <= offset &&
"Fragment size must not be greater than offset\n");
int last_fragment = fragments_count - 1;
int last_element = fragment_size - 1;
for (int i = 0; i < last_element; ++i) {
for (int j = 0; j < last_fragment; ++j) {
array[j * offset + i] = (j + 1) * offset + i; //Go in the same element of next fragment
}
array[last_fragment * offset + i] = i + 1; // Go in the next element from last fragment
}
array[last_fragment * offset + last_element] = 0; // Go in first element from last element
}
unsigned long long int CalcAccessTime(int* array, int size) {
assert(NULL != array && "Pointer to array must not be NULL\n");
assert(size > 0 && "Size must be greater than zero\n");
unsigned long long int start = 0;
unsigned long long int end = 0;
unsigned long long int min_time = ULLONG_MAX;
int index = 0;
for (int i = 0; i < ROUNDS; ++i) {
start = GetTicksCount();
for (int j = 0; j < size; ++j) {
index = array[index];
}
end = GetTicksCount();
unsigned long long int cur_time = (end - start) / size;
if (cur_time < min_time) {
min_time = cur_time;
}
}
return min_time;
}
int main(int argc, char** argv) {
int integers_count = SIZE / sizeof(int);
int offset_int = OFFSET / sizeof(int);
for (int i = MIN_FRAGMENTS_COUNT; i <= MAX_FRAGMENTS_COUNT; ++i) {
int size = i * offset_int;
int* array = CreateArray(size);
if (NULL == array) {
return -1;
}
SetIndexes(array, integers_count / i, offset_int, i);
printf("Fragments: %d\n", i);
printf("Time: %llu\n", CalcAccessTime(array, integers_count));
FreeArray(array, size);
}
return 0;
}
</code></pre> | 2018-12-13 15:02:27.393000+00:00 | 2018-12-14 10:15:07.513000+00:00 | 2018-12-13 18:03:28.290000+00:00 | c|caching|cpu-cache | ['https://arxiv.org/pdf/1508.03767.pdf'] | 1 |
34,642,719 | <p>I am not very familiar with Synaptic (but it does look kinda cool), but here are some general issues you could look into:</p>
<ul>
<li><p>Weight initialization is important. Proper weight initialization allows our gradients to backpropagate through our network and for learning to occur. Is there an option to initialize weights in your network? Common initialization schemes are the Xavier Glorot Initialization given in <a href="http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf" rel="nofollow">Understanding the difficulty of training deep feedforward neural networks</a> and more recently in <a href="http://arxiv.org/abs/1502.01852" rel="nofollow">Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification</a> (a small sketch follows this list). </p></li>
<li><p>Is your step size aka learning rate too large? It seems like your network is outputting constant values. If you are using saturating nonlinearities (i.e. bounded activation functions like sigmoid or tanh), then a large learning rate may cause your nonlinearities to saturate; learning effectively halts and this may result in outputting constant values. </p></li>
<li><p>Related to the previous point: what type of nonlinearities are you using in your hidden layers? Again, if it's a saturating nonlinearity, this may be hindering your training. You can try rectifying linear units (ReLUs) which have the form $f(x) = \max(0,x)$. They are unbounded, so they do not saturate and they have gradient equal to 1 when $x > 0$. They have the interpretation of "activating" when the input is greater than 0. In that case they act like a switch and allow gradient to propagate through. </p></li>
</ul>
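<p>For reference, here is a small sketch of Glorot/Xavier uniform initialization (numpy here just to show the math; Synaptic may or may not expose this directly):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 2, 3  # e.g. the input->hidden weights of the XOR net below

# Glorot/Xavier uniform: W ~ U(-limit, limit), limit = sqrt(6 / (fan_in + fan_out))
limit = np.sqrt(6.0 / (fan_in + fan_out))
W = rng.uniform(-limit, limit, size=(fan_in, fan_out))
print(W)
</code></pre>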
<p>There might be other issues that hopefully others can comment on as well. These are the 3 that immediately come to mind for me. </p>
<p>I am not familiar with Synaptic, so I am not sure how much control it gives you or what its default setup and parameters are. </p>
<p>Hope this helps!</p> | 2016-01-06 21:04:17.990000+00:00 | 2016-01-06 21:47:07.997000+00:00 | 2016-01-06 21:47:07.997000+00:00 | null | 34,636,231 | <p>I am just getting started with neural networks and using <a href="http://synaptic.juancazala.com/" rel="nofollow">Synaptic</a> to get started (I know I know, neural networks in JavaScript, gasp!).</p>
<p>This is the example code <a href="https://github.com/cazala/synaptic#perceptron" rel="nofollow">given in this section</a> for creating a neural network for learning the XOR function:</p>
<pre><code>var myPerceptron = new Architect.Perceptron(2, 3, 1);
var myTrainer = new Trainer(myPerceptron);
myTrainer.XOR();
console.log(myPerceptron.activate([0, 0])); // 0.0268581547421616
console.log(myPerceptron.activate([1, 0])); // 0.9829673642853368
console.log(myPerceptron.activate([0, 1])); // 0.9831714267395621
console.log(myPerceptron.activate([1, 1])); // 0.02128894618097928
</code></pre>
<p>I am experimenting with adding more layers and seeing what happens. Adding one additional hidden layer doesn't have much effect, but adding 2 layers makes the output identical regardless of the input. </p>
<pre><code>var myPerceptron = new Architect.Perceptron(2, 3, 3, 3, 1);
var myTrainer = new Trainer(myPerceptron);
myTrainer.XOR();
console.log(myPerceptron.activate([0, 0])); // 0.521076904986927
console.log(myPerceptron.activate([1, 0])); // 0.5210769149857782
console.log(myPerceptron.activate([0, 1])); // 0.5210769118775331
console.log(myPerceptron.activate([1, 1])); // 0.5210769209325651
</code></pre>
<p>Why does this happen? Is this simply because a more complex network needs a lot more training, or is it because this kind of network is intrinsically not suitable for this kind of problem?</p> | 2016-01-06 15:01:22.343000+00:00 | 2016-01-06 21:47:07.997000+00:00 | null | javascript|machine-learning|neural-network | ['http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf', 'http://arxiv.org/abs/1502.01852'] | 2 |
11,935,266 | <p>It depends on your random graph model. The simplest model is the <a href="http://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model" rel="nofollow">Erdős–Rényi model</a>, where you specify the number of nodes and the probability of a link between any given pair. This is easy to generate but the resulting graphs will not be very interesting because they aren't at all similar to most networks observed in the real world. Real-world networks usually have power law degree distributions and higher clustering coefficients. There are a few other standard models you might be interested in that address this (<a href="http://en.wikipedia.org/wiki/Watts_and_Strogatz_model" rel="nofollow">Watts-Strogatz</a> or <a href="http://en.wikipedia.org/wiki/Barab%C3%A1si%E2%80%93Albert_model" rel="nofollow">Barabási–Albert</a>). I have also used the LFR model described in <a href="http://arxiv.org/pdf/0908.1062.pdf" rel="nofollow">this paper</a> which has source code available <a href="https://sites.google.com/site/andrealancichinetti/files" rel="nofollow">here</a>.</p> | 2012-08-13 13:30:25.970000+00:00 | 2012-08-13 13:30:25.970000+00:00 | null | null | 11,795,727 | <p>I am currently working on developing an application to find the maximum clique in a graph for my final year project. I have most of the project complete and am just starting to test the application. </p>
<p>The application currently uses an adjacency list as an input, and I was wondering if anyone knows of a random adjacency-list generator, so I can test my application?</p>
<p>Many thanks </p> | 2012-08-03 12:27:01.393000+00:00 | 2012-08-13 13:30:25.970000+00:00 | null | c#|graph-theory|adjacency-list|clique-problem | ['http://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model', 'http://en.wikipedia.org/wiki/Watts_and_Strogatz_model', 'http://en.wikipedia.org/wiki/Barab%C3%A1si%E2%80%93Albert_model', 'http://arxiv.org/pdf/0908.1062.pdf', 'https://sites.google.com/site/andrealancichinetti/files'] | 5 |
54,429,829 | <p>You should estimate <strong>Q(s,a)</strong> using the <strong>critic</strong>, not the <strong>actor</strong>.</p>
<p>Remember that in the <strong>actor-critic</strong> setting (e.g. A2C), the actor(s) will <strong>output</strong> the probability distribution over all your actions at state <code>s</code>. From this distribution, you'll sample an action <code>a</code> to take in the environment. Then, the environment will give you a reward <code>r</code> and the next state <code>s'</code>. </p>
<p>After <code>N</code> steps, you'll use the <strong>critic</strong> to estimate the <strong>state value</strong> <code>V(s)</code> and will calculate the <strong>advantage</strong> to point out, for example, how much better your actions were than the average. With the advantage, you'll update your policy (<strong>actor</strong>) to increase/decrease the probability of taking the action <code>a</code> at state <code>s</code>.</p>
<p>Therefore, to use your <strong>advantage function</strong> in this framework, you could use the critic to estimate <code>Q(s,a)</code>, which is the value for each pair of action-state. Then, you can estimate <code>V(s)</code> with:</p>
<p><a href="https://i.stack.imgur.com/ru0vc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ru0vc.png" alt="enter image description here"></a></p>
<p>You can take a look at this <a href="https://datascience.stackexchange.com/a/31791">answer</a> and at this <a href="https://danieltakeshi.github.io/2017/04/02/notes-on-the-generalized-advantage-estimation-paper/" rel="nofollow noreferrer">post</a> to have a better idea. Note that, to estimate <code>Q(s,a)</code> your critic network should have <code>|A|</code> output units, instead of just one as in the case of <code>V(s)</code>. There are also other options to try as your advantage function.</p>
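<p>Numerically, the computation is tiny; here is a sketch with made-up numbers:</p>
<pre><code>import numpy as np

q = np.array([1.0, 2.0, 0.5])    # critic's Q(s, a), one entry per action
pi = np.array([0.2, 0.5, 0.3])   # actor's probabilities pi(a | s)

v = np.sum(pi * q)               # V(s) = sum_a pi(a|s) * Q(s,a)
advantage = q - v                # A(s, a) = Q(s, a) - V(s)
print(v, advantage)              # 1.35 [-0.35  0.65 -0.85]
</code></pre>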
<p>Remember that the only purpose of the advantage function is to tell your model how much to increase/decrease the probabilities of taking action <code>a</code> at state <code>s</code>. If it's better than average you increase otherwise you decrease.</p>
<p>This <a href="https://arxiv.org/pdf/1506.02438.pdf" rel="nofollow noreferrer">paper</a> is a very good reference.</p> | 2019-01-29 21:26:55.437000+00:00 | 2019-01-29 21:32:19.200000+00:00 | 2019-01-29 21:32:19.200000+00:00 | null | 48,586,709 | <p>I have trouble understanding how to get data to compute the advantage in actor-critic settings.</p>
<p>I know that <code>A(s,a) = Q(s,a) - V(s)</code>. It seems straightforward to get the state value estimate <code>V(s)</code>, but how can we estimate <code>Q(s,a)</code> given that the policy only outputs probabilities? </p>
<p>Thanks! </p> | 2018-02-02 16:15:24.273000+00:00 | 2019-01-29 21:32:19.200000+00:00 | 2018-02-02 18:11:57.540000+00:00 | deep-learning|reinforcement-learning | ['https://i.stack.imgur.com/ru0vc.png', 'https://datascience.stackexchange.com/a/31791', 'https://danieltakeshi.github.io/2017/04/02/notes-on-the-generalized-advantage-estimation-paper/', 'https://arxiv.org/pdf/1506.02438.pdf'] | 4 |
<p>In the context of super resolution, this seems to be called a pixel shuffle layer (though it may just be a special case of sub-pixel convolution).</p>
<p><a href="https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html</a></p>
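<p>A minimal usage sketch (PyTorch; the channel counts and upscale factor below are illustrative, not prescribed): a convolution produces <code>C * r^2</code> channels, and <code>PixelShuffle(r)</code> rearranges them into an output upscaled by a factor of <code>r</code> in each spatial dimension:</p>
<pre><code>import torch
import torch.nn as nn

r = 2                                          # upscale factor
conv = nn.Conv2d(64, 3 * r ** 2, kernel_size=3, padding=1)
shuffle = nn.PixelShuffle(r)

x = torch.randn(1, 64, 32, 32)                 # low-resolution feature map
y = shuffle(conv(x))                           # shape: (1, 3, 64, 64)
</code></pre>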
<p>Paper: <a href="https://arxiv.org/pdf/1609.05158.pdf" rel="nofollow noreferrer">Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network by Shi et al. (2016)</a></p> | 2021-10-05 16:03:24.817000+00:00 | 2021-10-05 16:03:24.817000+00:00 | null | null | 59,853,676 | <p>I am working on a super-resolution project, and for now I am using transposed convolution for upsampling the image. But I have heard a lot that sub-pixel convolutions do it better. So what is sub-pixel convolution, and what is the math behind it? And why is it better than transposed convolution? (Some example would be great.)</p> | 2020-01-22 06:17:44.297000+00:00 | 2021-10-05 16:03:24.817000+00:00 | 2020-04-17 06:53:08.557000+00:00 | image-processing|deep-learning|neural-network|conv-neural-network|convolution | ['https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html', 'https://arxiv.org/pdf/1609.05158.pdf'] | 2
<p>This answer is based on the reference that Professor @AchimZeileis provided in his article (<a href="https://arxiv.org/abs/1906.10179" rel="nofollow noreferrer">https://arxiv.org/abs/1906.10179</a>), and it addresses the second part of my original question: <em><strong>how to count the correctly specified models (trees)?</strong></em></p>
<h1>The short answer.</h1>
<p>The article divides the problem into two different types of data generating process (DGP). In the first one, the data has only one break in one variable ("<em>stump</em>" case), and the authors count the number of correctly identified models based on the number of times that the M-fluctuation test correctly identified the variable generating the break (just one real break in one variable, plus 9 noisy candidate splitting variables with no break). The second DGP was a model with two breaks (<em>"tree"</em> case), and they used the <a href="https://en.wikipedia.org/wiki/Rand_index" rel="nofollow noreferrer">Adjusted Rand Index</a> (ARI), a metric of the similarity between the real tree and the predicted one, to assess the performance of the model.</p>
<h1>The <s>very</s> long answer.</h1>
<p>Let's break down the <a href="https://en.wikipedia.org/wiki/Rand_index" rel="nofollow noreferrer">ARI</a> for 6 different illustrative possible trees that can be obtained at different sample sizes. The code used here is highly based on the supplemental material of the <a href="https://arxiv.org/abs/1906.10179" rel="nofollow noreferrer">article</a> recommended by @AchimZeileis.</p>
<h2>Data generating process: Tree structure</h2>
<p>The real dgp has 2 breaks, as illustrated in the picture below. The first one is generated by the variable <code>z2</code> and the second one by <code>z1</code>. In the snippet of the code below, delta is equal to 1. The threshold value for the first break (depending on <code>z2</code>) is equal to 0.3, and the threshold value for the second break (depending on <code>z1</code>) is equal to -0.3 (the values can be seen in the object <code>xi = c(-0.3, 0.3)</code>)</p>
<p><a href="https://i.stack.imgur.com/mpT5n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mpT5n.png" alt="Figure from " /></a></p>
<pre><code>#function from https://arxiv.org/src/1906.10179v1/anc
dgp_tree <- function(nobs = 1000, delta = 1, xi = c(-0.3, 0.3),
sigma = 1, seed = 7,
changetype = "abrupt",
variation = "all",
beta0 = NULL, beta1 = NULL)
{
# check input values
if(variation != "all") stop("variation can only be set to 'all' in dgp_tree")
if(changetype != "abrupt") stop("changetype can only be abrupt in dgp_tree")
if(!is.null(beta0) | !is.null(beta1)) warning("values for beta0 or beta1 are ignored since variation='all' for dgp_tree")
set.seed(seed)
if(length(xi)==1){
xi1 <- xi2 <- xi
} else {
xi1 <- xi[1]
xi2 <- xi[2]
}
z1 <- runif(nobs,-1,1)
z2 <- runif(nobs,-1,1)
z3 <- rnorm(nobs, 0, 1)
z4 <- rnorm(nobs, 0, 1)
z5 <- rnorm(nobs, 0, 1)
z6 <- rnorm(nobs, 0, 1)
z7 <- rnorm(nobs, 0, 1)
z8 <- runif(nobs, -1, 1)
z9 <- runif(nobs, -1, 1)
z10 <- runif(nobs, -1, 1)
id <- numeric(length(z1))
x <- runif(nobs, min = -1, max = 1)
beta0 <- delta * (-1)^(z1<xi1) * 0^(z2<xi2)
beta1 <- delta * (-1)^(z2>=xi2)
id <- 1 + (z2>=xi2) + (z2>=xi2)*(z1>=xi1)
mu <- beta0 + beta1 * x
y <- rnorm(nobs, mu, sigma)
d <- data.frame(y = y, x = x,
z1 = z1, z2 = z2, z3 = z3, z4 = z4, z5 = z5, z6 = z6, z7 = z7, z8 = z8, z9 = z9, z10 = z10,
beta0 = beta0, beta1 = beta1, mu = mu, sigma = rep.int(sigma, times = length(y)), id = id)
return(d)
}
</code></pre>
<h2>The <a href="https://en.wikipedia.org/wiki/Rand_index" rel="nofollow noreferrer">Adjusted Rand Index</a></h2>
<p>Among the functions included in the <a href="https://arxiv.org/abs/1906.10179" rel="nofollow noreferrer">article</a>, there is one to compute the <a href="https://en.wikipedia.org/wiki/Rand_index" rel="nofollow noreferrer">ARI</a>, and it is listed below for use in the following examples. It follows, almost letter by letter, the notation used <a href="https://en.wikipedia.org/wiki/Rand_index" rel="nofollow noreferrer">here</a>.</p>
<pre><code># function to compute adjusted Rand Index from https://arxiv.org/src/1906.10179v1/anc
adj_rand_index <- function(x, y) {
tab <- table(x, y)
a <- rowSums(tab)
b <- colSums(tab)
M <- sum(choose(tab, 2))
N <- choose(length(x), 2)
A <- sum(choose(a, 2))
B <- sum(choose(b, 2))
c(ARI = (M - (A * B) / N) / (0.5 * (A + B) - (A * B) / N))
}
</code></pre>
<h2>Simulating the data</h2>
<pre><code>library(partykit)
library(future.apply) ## for parallel stuff
plan(multisession) ## use all available cores
ols_formula <- y ~ x | z1 + z2 +z3 +z4 + z5 +z6 +z7+ z8 +z9 +z10
ols <- function(y, x, start = NULL, weights = NULL, offset = NULL, ...) {lm(y ~ 0 + x)}
sim_ari <- function(n){
tree_data <- dgp_tree(nobs = n)
ols_mob <- mob(ols_formula,
data = tree_data,
fit = ols)
prednode <- predict(ols_mob ,
type = "node")
cross_table <- table(prednode,tree_data$id)
ari <- adj_rand_index(prednode,
tree_data$id)
print(n)
print(ari)
return(
list(
ols_mob = ols_mob,
cross_table = cross_table,
ari=ari,
data = tree_data)
)
}
n_levels <- c(55, ## no break
87, ## only one break
123, ## Correct structure, but poor performance
199, ## Nested break in second leaf
667, ## Additional break in first leaf
5000 ## Perfect model
)
ari <- future_lapply(n_levels, sim_ari, future.seed = 1234L)
</code></pre>
<h1>A Tale of Six Trees.</h1>
<p>The following six cases are analyzed in terms of how the <a href="https://en.wikipedia.org/wiki/Rand_index" rel="nofollow noreferrer">ARI</a> can accurately capture the degree of similarity between the correct and the estimated tree. The key to comparing the trees is <code>id</code>, which shows which leaf of the tree each observation should belong to. For example, if an observation has an <code>id</code> value of <code>1</code>, it meets the requirements assigned to node number 2 in the figure above. On the other hand, if <code>id</code> is equal to 2, the observation should be assigned to node number 4 in the same picture. Finally, if <code>id</code> is equal to 3, it is assigned to node number 5. You can check this reasoning in the following line: <code>id <- 1 + (z2>=xi2) + (z2>=xi2)*(z1>=xi1)</code>.</p>
<h3>The first tree (n=55): No breaks</h3>
<p>The first case analyzed corresponds to the situation where no breaks are identified. In this case, the ARI is equal to 0.</p>
<pre><code>##### First Tree (n=55): No break ####
ari[[1]][[1]]
## Fitted party:
## [1] root: n = 55
## x(Intercept) xx
## -0.01309586 0.39291089
##
## Number of inner nodes: 0
## Number of terminal nodes: 1
## Number of parameters per node: 2
## Objective function: 95.58631
</code></pre>
<p>Here it is interesting to note that all the observations are assigned to the root node. Therefore, when we cross-tabulate the predicted nodes <code>prednode_1</code>, we see that all possible <code>id</code> values belong to the root node <code>[1]</code> of the predicted tree (basically, because there is no other option). By using the function <code>adj_rand_index()</code>, you can check that this leads to an ARI equal to 0.</p>
<pre><code>#data first tree (n=55)
data_1 <- ari[[1]][[4]]
#predicted node first iteration
data_1$prednode_1 <- predict(ari[[1]][[1]], type = "node")
#Cross table
with(data_1, table(prednode_1 ,id))
## id
## prednode_1 1 2 3
## 1 37 7 11
#adj_rand_index
ari[[1]][[3]]
</code></pre>
<h3>The second tree (n=87): Just one break identified</h3>
<p>This case is interesting because it partially identifies the tree's structure (i.e., the break on <code>z1</code> is missing).</p>
<pre><code>##### Second Tree (n=87): Just one break identified ####
ari[[2]][[1]]
# Fitted party:
# [1] root
# | [2] z2 <= 0.29288: n = 57
# | x(Intercept) xx
# | 0.133293 1.082701
# | [3] z2 > 0.29288: n = 30
# | x(Intercept) xx
# | 0.2598309 -1.8014133
#
# Number of inner nodes: 1
# Number of terminal nodes: 2
# Number of parameters per node: 2
# Objective function: 122.0116
</code></pre>
<p>Additionally, cross-tabulating the predicted and the real nodes shows that some observations meet the criteria even in this imperfect tree: there are <code>57</code> observations correctly assigned to the first node and <code>9</code> correctly assigned to the second branch. Finally, <code>30</code> were misassigned because the last node was not identified at all. This leads to an ARI equal to <code>0.8577366</code>, which is a huge improvement over the first tree.</p>
<pre><code>#data second iteration (n=87)
data_2 <- ari[[2]][[4]]
#predicted node first iteration
data_2$prednode_2 <- predict(ari[[2]][[1]], type = "node")
#Cross table
with(data_2, table(prednode_2 ,id))
# id
# prednode_2 1 2 3
# 2 57 0 0
# 3 1 9 20
#adj_rand_index
ari[[2]][[3]]
# > ari[[2]][[3]]
# ARI
# 0.8577366
</code></pre>
<h3>The third tree (n=123): Correct structure but poor performance</h3>
<p>This case is interesting because it <strong>does</strong> recover the real tree structure, but it has worse performance than the previous tree, which only partially identified the structure.</p>
<pre><code>##### Third Tree (n=123): Correct structure but poor performance ####
ari[[3]][[1]]
# Fitted party:
# [1] root
# | [2] z2 <= 0.07319: n = 60
# | x(Intercept) xx
# | -0.1723388 1.1071878
# | [3] z2 > 0.07319
# | | [4] z1 <= -0.35485: n = 22
# | | x(Intercept) xx
# | | -0.7166565 -0.6791717
# | | [5] z1 > -0.35485: n = 41
# | | x(Intercept) xx
# | | 0.7096033 -0.8605967
#
# Number of inner nodes: 2
# Number of terminal nodes: 3
# Number of parameters per node: 2
# Objective function: 156.4397
</code></pre>
<p>Below, cross-tabulating the predicted and real nodes shows that <code>16</code> (<code>10 + 6</code>) observations were incorrectly classified, which leads to an ARI of <code>0.6117612</code>.</p>
<pre><code>#data third iteration (n=123)
data_3 <- ari[[3]][[4]]
#predicted node first iteration
data_3$prednode_3 <- predict(ari[[3]][[1]], type = "node")
#Cross table
with(data_3, table(prednode_3 ,id))
# id
# prednode_3 1 2 3
# 2 60 0 0
# 4 6 16 0
# 5 10 0 31
#adj_rand_index
ari[[3]][[3]]
# > ari[[3]][[3]]
# ARI
# 0.6117612
</code></pre>
<h3>The fourth tree (n=199): Extra leaf at node <code>[5]</code></h3>
<p>The tree identified here deviates from the original because it has an extra split at <code>node[5]</code>, which does not exist in the real data.</p>
<pre><code>##### Fourth Tree (n=199): Extra leaf at node[5] ####
ari[[4]][[1]]
# Fitted party:
# [1] root
# | [2] z2 <= -0.19806: n = 79
# | x(Intercept) xx
# | 0.06455217 1.51512672
# | [3] z2 > -0.19806
# | | [4] z1 <= -0.27127: n = 44
# | | x(Intercept) xx
# | | -0.4863122 -0.3860951
# | | [5] z1 > -0.27127
# | | | [6] z2 <= 0.17481: n = 23
# | | | x(Intercept) xx
# | | | -0.1335096 0.2046050
# | | | [7] z2 > 0.17481: n = 53
# | | | x(Intercept) xx
# | | | 1.0868488 -0.0290925
#
# Number of inner nodes: 3
# Number of terminal nodes: 4
# Number of parameters per node: 2
# Objective function: 282.6727
</code></pre>
<p>Here the cross-tabulation of the real and predicted nodes is interesting because nodes <code>[6]</code> and <code>[7]</code> are nonexistent in the real data, yet they receive observations that should, for example, be assigned to node <code>[1]</code> (<code>23</code> and <code>7</code> observations, respectively). This misallocation brings the ARI down to <code>0.4649789</code>.</p>
<pre><code>#data forth iteration (n=199)
data_4 <- ari[[4]][[4]]
#predicted node first iteration
data_4$prednode_4 <- predict(ari[[4]][[1]], type = "node")
#Cross table
with(data_4, table(prednode_4 ,id))
# id
# prednode_4 1 2 3
# 2 79 0 0
# 4 16 27 1
# 6 23 0 0
# 7 7 0 46
#adj_rand_index
ari[[4]][[3]]
# ARI
# 0.4649789
</code></pre>
<h3>The fifth tree (n=667): Extra leaf at node <code>[2]</code></h3>
<p>This is another example of a tree with an incorrect structure, where an extra split (based on a partition on <code>z5</code>, which is incorrect!) is attached to node <code>[2]</code>.</p>
<pre><code>##### Fifth Tree (n=667): Extra leaf at node[2] ####
ari[[5]][[1]]
# Fitted party:
# [1] root
# | [2] z2 <= 0.28476
# | | [3] z5 <= 0.76285: n = 322
# | | x(Intercept) xx
# | | -0.1322881 0.9535337
# | | [4] z5 > 0.76285: n = 96
# | | x(Intercept) xx
# | | 0.1686863 1.3878776
# | [5] z2 > 0.28476
# | | [6] z1 <= -0.32001: n = 89
# | | x(Intercept) xx
# | | -0.9139858 -0.7957158
# | | [7] z1 > -0.32001: n = 160
# | | x(Intercept) xx
# | | 0.7661154 -0.8656553
#
# Number of inner nodes: 3
# Number of terminal nodes: 4
# Number of parameters per node: 2
# Objective function: 927.9088
</code></pre>
<p>The cross-tabulation of predicted and correct nodes shows us that most of the observations (<code>322</code>) that, in reality, belong to the first node <code>[1]</code> were assigned to the predicted node <code>[3]</code>. Finally, this poor structure leads to an ARI of <code>0.6932132</code>.</p>
<pre><code>#data third iteration (n=667)
data_5 <- ari[[5]][[4]]
#predicted node first iteration
data_5$prednode_5 <- predict(ari[[5]][[1]], type = "node")
#Cross table
with(data_5, table(prednode_5 ,id))
# id
# prednode_5 1 2 3
# 3 322 0 0
# 4 96 0 0
# 6 0 89 0
# 7 3 3 154
#adj_rand_index
ari[[5]][[3]]
# ARI
# 0.6932132
</code></pre>
<h3>The sixth tree (n=5000): The Golden Tree!</h3>
<p>This final tree recovers the data perfectly, both in the tree structure and in allocating the observation to each leaf.</p>
<pre><code>##### Sixth Tree (n=5000): Perfect model ####
ari[[6]][[1]]
# Fitted party:
# [1] root
# | [2] z2 <= 0.29971: n = 3187
# | x(Intercept) xx
# | -0.008719923 1.022232280
# | [3] z2 > 0.29971
# | | [4] z1 <= -0.30286: n = 609
# | | x(Intercept) xx
# | | -0.9488846 -0.9813765
# | | [5] z1 > -0.30286: n = 1204
# | | x(Intercept) xx
# | | 1.0281410 -0.9565637
#
# Number of inner nodes: 2
# Number of terminal nodes: 3
# Number of parameters per node: 2
# Objective function: 6992.848
</code></pre>
<p>Here we can see from the cross-tabulation of the predicted and real nodes that every observation is allocated <em>perfectly</em> to the node where it belongs, leading to an ARI equal to <code>1</code>.</p>
<pre><code>#data sixt iteration (n=5000)
data_6 <- ari[[6]][[4]]
#predicted node first iteration
data_6$prednode_6 <- predict(ari[[6]][[1]], type = "node")
#Cross table
with(data_6, table(prednode_6 ,id))
# id
# prednode_6 1 2 3
# 2 3187 0 0
# 4 0 609 0
# 5 0 0 1204
#adj_rand_index
ari[[6]][[3]]
# ARI
# 1
</code></pre>
<h3>The conclusions.</h3>
<p>Some important takeaways can be recovered from the illustration above.</p>
<p>1.- The <a href="https://en.wikipedia.org/wiki/Rand_index" rel="nofollow noreferrer">ARI</a> is useful for assessing the degree of similarity between a predicted tree and the real tree of the data generating process, even when their structures are very different.</p>
<p>2.- Recovering the tree's correct structure does not necessarily lead to an <a href="https://en.wikipedia.org/wiki/Rand_index" rel="nofollow noreferrer">ARI</a> equal to one.</p>
<p>3.- Incorrect trees will not necessarily have an <a href="https://en.wikipedia.org/wiki/Rand_index" rel="nofollow noreferrer">ARI</a> equal to zero.</p>
<h3>A final simulation.</h3>
<p>To conclude, here is a small simulation to see how the <a href="https://en.wikipedia.org/wiki/Rand_index" rel="nofollow noreferrer">ARI</a> index behaves when the sample size increases.</p>
<pre><code>
### Final simulation
n_levels <- seq(from = 10, to = 2000, by = 5)
ari_sim <- lapply(n_levels, sim_ari)  # store as 'ari_sim', which is used below
ari_models <- function(i){
  ari <- ari_sim[[i]]$ari
  n <- nobs(ari_sim[[i]]$ols_mob)
  return(
    list(ari = ari, n = n)
  )
}
ari_n_list <- lapply(1:length(ari_sim), ari_models)
df <- data.frame(matrix(unlist(ari_n_list), nrow = length(ari_n_list), byrow = TRUE))
colnames(df) <- c("ARI", "N")
library(ggplot2)
ggplot(df, aes(N)) +
geom_line(aes(y = ARI, colour = "ARI"))
</code></pre>
<p><a href="https://i.stack.imgur.com/6pClY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6pClY.png" alt="enter image description here" /></a></p> | 2020-11-29 17:36:41.123000+00:00 | 2020-11-30 09:14:20.580000+00:00 | 2020-11-30 09:14:20.580000+00:00 | null | 64,970,510 | <p>I am using the <code>mob()</code> function from the <code>partykit</code> package and I am having some problems when parsing the obtained model.</p>
<p>My main aim is to check <em>approximately</em> how large a sample size needs to be in order to detect the real
structure of a data generating process (DGP) when breaks are present.</p>
<p>The code below performs a Montecarlo simulation of data with breaks and tries to identify if the break was captured by the M-fluctuation test or not.</p>
<p>More specifically, I want to count the number of times, over the total number of simulations (<code>nreps</code>), that the model actually captures the DGP, conditional on a fixed sample size (<code>N</code>), to get a feeling for how much data I would need to capture the real DGP.</p>
<p>At the end of the code, you can see the list that I get out of my simulations. The problem is that I cannot recover the information displayed on the console.</p>
<p>Additionally, I have some doubts about how to count the "correctly identified models". For now, what I have in mind is to count a model as a positive if it has a break in the correct variable (<code>z1</code>) at the specified region (<code>z1>0</code>), with some tolerance in the break region; for example, if the break is at <code>z1>0.1</code> or <code>z1>-0.4</code>, it is also valid as a positive for me. Therefore, is there any simple way of counting the models that have the stated characteristics?</p>
<p>I hope my example is clear enough for you to help me out. Thank you a lot in advance.</p>
<h3>Simulate model</h3>
<ol>
<li>Ingredients to generate the DGP.</li>
</ol>
<pre><code>library("partykit")
library(data.table) ## optional, but what I'll use to coerce the list into a DT
library(future.apply) ## for parallel stuff
plan(multisession) ## use all available cores
#sample size
N <- 300
# coefficients
betas <- list()
betas$b0 <- 1
betas$b1_up <- 2.4
betas$b1_down <- 2
betas$b2_up <- 2.4
betas$b2_down <- 2
#mob() ingredients
ols_formula <- y ~ x1 + x2 | z1 + z2
# the "0 +" suppresses the 'double' intercept
ols <- function(y, x, start = NULL, weights = NULL, offset = NULL, ...) {lm(y ~ 0 + x)}
</code></pre>
<ol start="2">
<li>Function that generates the data and fit OLS using mob algorithm.</li>
</ol>
<pre><code>reg_simulation_mob <- function(...){
#data
data <- data.frame(
x1 = rnorm(N),
x2 = rnorm(N),
z1 = rnorm(N),
z2 = rnorm(N),
e = rnorm(N))
#dependent variable
data$y <- betas$b0 + with(data, ifelse(z1>0,
betas$b1_up * x1 + betas$b2_up * x2 ,
betas$b1_down * x1 + betas$b2_down * x2 )
+ e )
#Estimate mob()-OLS
ols_mob <- mob(ols_formula,
data = data,
fit = ols)
# return(ols$coefficients)
return(ols_mob)
}
</code></pre>
<ol start="3">
<li>Montecarlo simulation (only 2 trials) of the above-described setup.</li>
</ol>
<pre><code># N repetitions
nreps <- 2
## Parallel version
results <- future_lapply(1:nreps, reg_simulation_mob, future.seed = 1234L)
</code></pre>
<h3>Obtained result</h3>
<p>As you can see below in the first trial (<code>results[[1]]</code>), the model finds the correct break but in the second (<code>results[[2]]</code>) it fails to find it.</p>
<pre><code>> results
[[1]]
Model-based recursive partitioning (ols)
Model formula:
y ~ x1 + x2 | z1 + z2
Fitted party:
[1] root
| [2] z1 <= 0.00029: n = 140
| x(Intercept) xx1 xx2
| 0.9597894 1.7552122 2.1360788
| [3] z1 > 0.00029: n = 160
| x(Intercept) xx1 xx2
| 0.9371795 2.4745728 2.5087608
Number of inner nodes: 1
Number of terminal nodes: 2
Number of parameters per node: 3
Objective function: 422.2329
[[2]]
Model-based recursive partitioning (ols)
Model formula:
y ~ x1 + x2 | z1 + z2
Fitted party:
[1] root: n = 300
x(Intercept) xx1 xx2
1.015224 2.175625 2.200746
Number of inner nodes: 0
Number of terminal nodes: 1
Number of parameters per node: 3
Objective function: 422.3085
</code></pre>
<p>In the picture below, you can observe the structure of the list <code>results</code>, where I cannot find the information displayed on the console (i.e., number of nodes, parameters, threshold values, etc.).
<a href="https://i.stack.imgur.com/lhz7Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lhz7Q.png" alt="enter image description here" /></a></p> | 2020-11-23 14:36:54.363000+00:00 | 2020-11-30 09:14:20.580000+00:00 | 2020-11-29 15:51:43.673000+00:00 | r|decision-tree|party | ['https://arxiv.org/abs/1906.10179', 'https://en.wikipedia.org/wiki/Rand_index', 'https://en.wikipedia.org/wiki/Rand_index', 'https://arxiv.org/abs/1906.10179', 'https://i.stack.imgur.com/mpT5n.png', 'https://en.wikipedia.org/wiki/Rand_index', 'https://arxiv.org/abs/1906.10179', 'https://en.wikipedia.org/wiki/Rand_index', 'https://en.wikipedia.org/wiki/Rand_index', 'https://en.wikipedia.org/wiki/Rand_index', 'https://en.wikipedia.org/wiki/Rand_index', 'https://en.wikipedia.org/wiki/Rand_index', 'https://en.wikipedia.org/wiki/Rand_index', 'https://en.wikipedia.org/wiki/Rand_index', 'https://i.stack.imgur.com/6pClY.png'] | 15 |
<p>First, I would recommend using the <code>lmtree()</code> function rather than vanilla <code>mob()</code>. The former is faster, comes with better defaults for printing and plotting, and has more options for predictions.</p>
<p>Second, I recommend that you consult the <code>vignette("partykit", package = "partykit")</code> which explains how <code>party</code> objects are constructed and which classes and methods are involved.</p>
<p>Third, to determine which variable (if any) was used for splitting in the root node it is probably of interest to extract the results from all parameter instability tests. There is a dedicated <code>sctest()</code> (structural change test) method for obtaining this:</p>
<pre><code>library("strucchange")
sctest(results[[1]], node = 1)
## z1 z2
## statistic 4.072483e+01 6.1762164
## p.value 5.953672e-07 0.9153013
sctest(results[[2]], node = 1)
## z1 z2
## statistic 12.2810548 10.1944484
## p.value 0.2165527 0.4179998
</code></pre>
<p>The corresponding <code>partysplit</code> object for the <code>$split</code> (if any) in the root <code>$node</code> is probably easiest to extract manually:</p>
<pre><code>results[[1]]$node$split
## $varid
## [1] 4
##
## $breaks
## [1] 0.0002853492
##
## $index
## NULL
##
## $right
## [1] TRUE
##
## $prob
## NULL
##
## $info
## NULL
##
## attr(,"class")
## [1] "partysplit"
results[[2]]$node$split
## NULL
</code></pre>
<p>The variable id pertains to the order of the variables in:</p>
<pre><code>names(results[[1]]$data)
## [1] "y" "x1" "x2" "z1" "z2"
</code></pre>
<p>Finally, as for the question <em>what</em> to evaluate: Everything depends on identifying the correct variable for splitting. If this is done correctly, then the split point estimates converge fast to the true values, and then the parameter estimates also converge. See for example our recent arXiv paper (<a href="https://arxiv.org/abs/1906.10179" rel="nofollow noreferrer">https://arxiv.org/abs/1906.10179</a>) which contains a larger simulation study and also provides replication code.</p>
<p>Therefore, typically, I either evaluate the probability of selecting the correct variable in the first split, or, alternatively, I look at the RMSE of the estimated vs. true coefficients <em>for each observation</em>.</p>
<p><em><strong>Update:</strong></em> Beyond the root node you can use <code>nodeapply()</code> to extract information from various nodes. However, I do not recommend evaluating all splits because it becomes increasingly difficult to match which estimated split corresponds to which of the true splits. Instead, we often assess how similar the fitted partition is to the true partition, e.g., using the adjusted Rand Index. Again, you can find an example of this in the arXiv paper mentioned above.</p> | 2020-11-23 21:31:33.573000+00:00 | 2020-11-25 20:09:30.093000+00:00 | 2020-11-25 20:09:30.093000+00:00 | null | 64,970,510 | <p>I am using the <code>mob()</code> function from the <code>partykit</code> package and I am having some problems when parsing the obtained model.</p>
<p>My main aim is to check <em>approximately</em> how large a sample size needs to be in order to detect the real
structure of a data generating process (DGP) when breaks are present.</p>
<p>The code below performs a Montecarlo simulation of data with breaks and tries to identify if the break was captured by the M-fluctuation test or not.</p>
<p>More specifically, I want to count the number of times, over the total number of simulations (<code>nreps</code>), that the model actually captures the DGP, conditional on a fixed sample size (<code>N</code>), to get a feeling for how much data I would need to capture the real DGP.</p>
<p>At the end of the code, you can see the list that I get out of my simulations. The problem is that I cannot recover the information displayed on the console.</p>
<p>Additionally, I have some doubts about how to count the "correctly identified models". For now, what I have in mind is to count a model as a positive if it has a break in the correct variable (<code>z1</code>) at the specified region (<code>z1>0</code>), with some tolerance in the break region; for example, if the break is at <code>z1>0.1</code> or <code>z1>-0.4</code>, it is also valid as a positive for me. Therefore, is there any simple way of counting the models that have the stated characteristics?</p>
<p>I hope my example is clear enough for you to help me out. Thank you a lot in advance.</p>
<h3>Simulate model</h3>
<ol>
<li>Ingredients to generate the DGP.</li>
</ol>
<pre><code>library("partykit")
library(data.table) ## optional, but what I'll use to coerce the list into a DT
library(future.apply) ## for parallel stuff
plan(multisession) ## use all available cores
#sample size
N <- 300
# coefficients
betas <- list()
betas$b0 <- 1
betas$b1_up <- 2.4
betas$b1_down <- 2
betas$b2_up <- 2.4
betas$b2_down <- 2
#mob() ingredients
ols_formula <- y ~ x1 + x2 | z1 + z2
# the "0 +" suppresses the 'double' intercept
ols <- function(y, x, start = NULL, weights = NULL, offset = NULL, ...) {lm(y ~ 0 + x)}
</code></pre>
<ol start="2">
<li>Function that generates the data and fit OLS using mob algorithm.</li>
</ol>
<pre><code>reg_simulation_mob <- function(...){
#data
data <- data.frame(
x1 = rnorm(N),
x2 = rnorm(N),
z1 = rnorm(N),
z2 = rnorm(N),
e = rnorm(N))
#dependent variable
data$y <- betas$b0 + with(data, ifelse(z1>0,
betas$b1_up * x1 + betas$b2_up * x2 ,
betas$b1_down * x1 + betas$b2_down * x2 )
+ e )
#Estimate mob()-OLS
ols_mob <- mob(ols_formula,
data = data,
fit = ols)
# return(ols$coefficients)
return(ols_mob)
}
</code></pre>
<ol start="3">
<li>Montecarlo simulation (only 2 trials) of the above-described setup.</li>
</ol>
<pre><code># N repetitions
nreps <- 2
## Parallel version
results <- future_lapply(1:nreps, reg_simulation_mob, future.seed = 1234L)
</code></pre>
<h3>Obtained result</h3>
<p>As you can see below in the first trial (<code>results[[1]]</code>), the model finds the correct break but in the second (<code>results[[2]]</code>) it fails to find it.</p>
<pre><code>> results
[[1]]
Model-based recursive partitioning (ols)
Model formula:
y ~ x1 + x2 | z1 + z2
Fitted party:
[1] root
| [2] z1 <= 0.00029: n = 140
| x(Intercept) xx1 xx2
| 0.9597894 1.7552122 2.1360788
| [3] z1 > 0.00029: n = 160
| x(Intercept) xx1 xx2
| 0.9371795 2.4745728 2.5087608
Number of inner nodes: 1
Number of terminal nodes: 2
Number of parameters per node: 3
Objective function: 422.2329
[[2]]
Model-based recursive partitioning (ols)
Model formula:
y ~ x1 + x2 | z1 + z2
Fitted party:
[1] root: n = 300
x(Intercept) xx1 xx2
1.015224 2.175625 2.200746
Number of inner nodes: 0
Number of terminal nodes: 1
Number of parameters per node: 3
Objective function: 422.3085
</code></pre>
<p>In the picture below, you can observe the structure of the list <code>results</code>, where I cannot find the information displayed on the console (i.e., number of nodes, parameters, threshold values, etc.).
<a href="https://i.stack.imgur.com/lhz7Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lhz7Q.png" alt="enter image description here" /></a></p> | 2020-11-23 14:36:54.363000+00:00 | 2020-11-30 09:14:20.580000+00:00 | 2020-11-29 15:51:43.673000+00:00 | r|decision-tree|party | ['https://arxiv.org/abs/1906.10179'] | 1 |
65,430,875 | <blockquote>
<p>I've always used minimum runtime over multiple runs because any deviation from the minimum will be due to the CPU being busy with unrelated things, but I couldn't find any reliable sources confirming that that's the best practice. Other obvious choices are average or median run time. (Maximum seems odd, as it will probably be dominated by unrelated CPU spikes.) Are there any better ways to make sense of the statistical data gathered from several runs?</p>
</blockquote>
<p><a href="https://arxiv.org/abs/1608.04295" rel="nofollow noreferrer">Chen, J. and Revels, J., 2016. Robust benchmarking in noisy environments. arXiv preprint arXiv:1608.04295.</a> has some moderately extensive and mathematically justified arguments about how to do robust benchmarking.
The simple version is: use minimum.</p>
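<p>As a minimal sketch of that practice in Python (the statement being timed is just a placeholder):</p>
<pre><code>import timeit

# Repeat the measurement several times and keep the minimum,
# since noise can only ever inflate the measured time.
times = timeit.repeat(stmt="sorted(range(10000))", repeat=7, number=100)
print("best of 7 runs:", min(times))
</code></pre>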
<p>This is what you would expect, because it boils down to:
<em>noise can only ever make things run slower, never faster.</em>
As such, <code>minimum</code>, not <code>mean</code> or <code>median</code>, is the best estimator of the true time:
the model is <code>true_time + noise</code> where <code>noise</code> is nonnegative, so the smallest sample must have the smallest error (<code>true_time</code> doesn't change between samples, only the noise does).</p> | 2020-12-23 20:59:12.357000+00:00 | 2020-12-23 20:59:12.357000+00:00 | null | null | 24,980,500 | <p>When I'm benchmarking some code, either on a fixed set of input data, or on random input where the randomness doesn't affect the control flow: what is the best metric to use to assess the performance of my code?</p>
<p>I've always used minimum runtime over multiple runs because any deviation from the minimum will be due to the CPU being busy with unrelated things, but I couldn't find any reliable sources confirming that that's the best practice. Other obvious choices are average or median run time. (Maximum seems odd, as it will probably be dominated by unrelated CPU spikes.) Are there any better ways to make sense of the statistical data gathered from several runs?</p>
<p>As paxdiablo points out, if I can measure CPU time directly that would be ideal. But what do I do when I can only benchmark wall time?</p>
<p>As I said I was unable to find anything reliable on this, but maybe I just didn't find the right Google keywords, so if you can point me to anything existing, that would already be a great help. Also, please feel free to migrate this to Programmers.SE, if this question is too general for SO.</p> | 2014-07-27 11:33:50.870000+00:00 | 2020-12-24 00:55:32.273000+00:00 | 2014-07-27 11:41:50.513000+00:00 | language-agnostic|benchmarking | ['https://arxiv.org/abs/1608.04295'] | 1 |
15,611,461 | <p>I had the same problem and the solution for it was to match the two arrays of MFCCs using the <a href="http://en.wikipedia.org/wiki/Dynamic_time_warping" rel="nofollow">Dynamic Time Warping</a> algorithm.</p>
<p>After computing the MFCCs you should now have, for each of your two signals, an array where each element contains the MFCCs for a frame (an array of arrays). The first step would be to compute "distances" between every element of one array and every element of the other, i.e. distances between every two sets of MFCCs (you could try using the <a href="http://en.wikipedia.org/wiki/Euclidean_distance" rel="nofollow">Euclidean distance</a> for this). </p>
<p>This should leave you with a 2-dimensional array (let's call it "dist") where element (i,j) represents the distance between the MFCCs of the i-th frame in the first signal and the MFCCs of the j-th frame of your second signal.</p>
<p>On this array you can now apply the DTW algorithm:</p>
<ul>
<li>dtw(1,1) = dist(1,1)</li>
<li>dtw(i,j) = min (dtw(i-1, j-1), dtw(i-1, j), dtw(i, j-1)) + dist(i,j).</li>
</ul>
<p>The value representing the "difference" between your two files is dtw(n,m), where n = nr. of frames in the first signal, m = nr. of frames of the second one.</p>
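<p>For illustration, here is a minimal sketch of the above in Python/NumPy (a padded row and column handle the boundary cases of the recurrence; the same logic translates directly to MATLAB):</p>
<pre><code>import numpy as np

def dtw_distance(mfcc1, mfcc2):
    # mfcc1: (n, d) array, mfcc2: (m, d) array -- one row of d MFCCs per frame
    dist = np.linalg.norm(mfcc1[:, None, :] - mfcc2[None, :, :], axis=2)
    n, m = dist.shape
    dtw = np.full((n + 1, m + 1), np.inf)
    dtw[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dtw[i, j] = dist[i - 1, j - 1] + min(dtw[i - 1, j - 1],
                                                 dtw[i - 1, j],
                                                 dtw[i, j - 1])
    return dtw[n, m]
</code></pre>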
<p>For further reading, <a href="http://arxiv.org/ftp/arxiv/papers/1003/1003.4083.pdf" rel="nofollow">this paper</a> might give you an overall view of applying DTW to MFCCs, and <a href="http://www.psb.ugent.be/cbd/papers/gentxwarper/DTWAlgorithm.ppt" rel="nofollow">this presentation</a> of the DTW algorithm might also help.</p> | 2013-03-25 09:27:41.123000+00:00 | 2013-03-25 09:27:41.123000+00:00 | null | null | 6,932,096 | <p>I have extracted two series of MFCC coefficients from two roughly 30-second audio files consisting of the same speech content. The audio files are recorded at the same location from different sources. An estimation should be made of whether the audio contains the same conversation or a different conversation. Currently I have tested a correlation calculation of the two MFCC series, but the result is not very reasonable. Are there best practices for this scenario?</p> | 2011-08-03 19:23:53.167000+00:00 | 2020-04-30 06:00:57.203000+00:00 | null | matlab|audio|matching|similarity|mfcc | ['http://en.wikipedia.org/wiki/Dynamic_time_warping', 'http://en.wikipedia.org/wiki/Euclidean_distance', 'http://arxiv.org/ftp/arxiv/papers/1003/1003.4083.pdf', 'http://www.psb.ugent.be/cbd/papers/gentxwarper/DTWAlgorithm.ppt'] | 4
11,743,151 | <p>There is the Road Coloring Problem:</p>
<p><strong>The problem:</strong> <em>Given a directed graph G, colour the edges such that for every vertex, there is a set of instructions that leads to that vertex from every other vertex.</em></p>
<p>(<a href="http://en.wikipedia.org/wiki/Road_coloring_problem" rel="nofollow">link</a>)</p>
<p>It was recently proved (<a href="http://arxiv.org/abs/0709.0099" rel="nofollow">Trahtman 2009</a>) that if the graph is aperiodic and every vertex has the same out-degree, such a coloring exists:</p>
<p><strong>Theorem:</strong> <em>Every finite <strong>strongly connected aperiodic</strong> directed graph of <strong>uniform out-degree</strong> has a synchronizing coloring.</em></p>
<p>Trahtman also gives an O(n^3) algorithm for the problem.</p>
<p>You should search for "road coloring problem algorithm" and its variants (for example, one can relax the condition to aperiodicity, but I think it's an open problem so far).</p> | 2012-07-31 15:06:14.880000+00:00 | 2012-07-31 15:23:45.553000+00:00 | 2012-07-31 15:23:45.553000+00:00 | null | 11,741,675 | <p>Some years ago I read about an algorithm: it labels a graph's edges so that the path from a source node X to a destination node Y is always the same sequence of labels, independently of which node you select as the source X. What is it called?</p>
<p>(I can't remember which kind of conditions should be satisfied by the graph.)</p>
<p>Here is an example (created by me):</p>
<p><img src="https://i.stack.imgur.com/uH6y6.png" alt="Example graph"></p>
<ul>
<li>Vertex 1: Red/Black/Red</li>
<li>Vertex 2: Red/Red/Black</li>
<li>Vertex 3: Red/Red/Black/Green</li>
<li>Vertex 4: Red/Black/Red/Green</li>
</ul>
<p>Starting from any vertex as the source and using the paths above, you always reach the destination vertex.</p> | 2012-07-31 13:49:30.210000+00:00 | 2012-07-31 15:23:45.553000+00:00 | null | algorithm|graph|path | ['http://en.wikipedia.org/wiki/Road_coloring_problem', 'http://arxiv.org/abs/0709.0099'] | 2
56,224,965 | <p>As of release <code>0.2.0</code>, TensorFlow Federated includes an implementation of FedSGD (<a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_sgd_process" rel="nofollow noreferrer"><code>tff.learning.build_federated_sgd_process()</code></a>), as described by the paper:</p>
<p><em>Communication-Efficient Learning of Deep Networks from Decentralized Data</em>
H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Aguera y Arcas. AISTATS 2017.
<a href="https://arxiv.org/abs/1602.05629" rel="nofollow noreferrer">https://arxiv.org/abs/1602.05629</a></p>
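<p>A minimal usage sketch (following the usual TFF iterative-process pattern; <code>model_fn</code> and <code>federated_train_data</code> are assumed to be defined as in the TFF tutorials):</p>
<pre><code>import tensorflow_federated as tff

# model_fn: a no-argument function returning a tff.learning.Model
iterative_process = tff.learning.build_federated_sgd_process(model_fn)

state = iterative_process.initialize()
for round_num in range(10):
    # each round: clients compute gradients, the server applies the aggregate
    state, metrics = iterative_process.next(state, federated_train_data)
</code></pre>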
<p>Code can be found in <a href="https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_sgd.py" rel="nofollow noreferrer"><code>tensorflow_federated/python/learning/federated_sgd.py</code></a>, which shows gradients being aggregated on the client and an aggregated gradient being sent back to the server.</p> | 2019-05-20 16:36:07.250000+00:00 | 2019-05-20 16:36:07.250000+00:00 | null | null | 56,215,147 | <p>Currently, TensorFlow's federated learning API seems to only include things like federated averaging, which works on the model's trainable variables. How would I go about implementing algorithms that require the gradients for aggregation at the server?</p>
<p>Thanks</p> | 2019-05-20 06:08:30.147000+00:00 | 2019-05-20 16:36:07.250000+00:00 | 2019-05-20 15:33:33.397000+00:00 | tensorflow|tensorflow-federated | ['https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_sgd_process', 'https://arxiv.org/abs/1602.05629', 'https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_sgd.py'] | 3 |
59,879,769 | <p>If I've understood correctly, there are two questions here:</p>
<ol>
<li><p>Is it possible to determine what features in the input have activated neurons?</p></li>
<li><p>If so, is it possible to use this information to generate samples from <code>p(x|y)</code>?</p></li>
</ol>
<p>Regarding <code>1</code>, a basic way to determine if a neuron is sensitive to an input feature <code>x_i</code> is to compute the gradient of this neuron's output w.r.t <code>x_i</code>. A high gradient will indicate sensitivity to a particular input element. There is a rich literature on the subject, for example, you can have a look at guided backpropagation or at <a href="https://arxiv.org/abs/1610.02391" rel="nofollow noreferrer">GradCam</a> (the latter is about classification with convnets, but it does contain useful ideas).</p>
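<p>A minimal sketch of this in PyTorch (reusing the <code>model</code> from your question; the neuron index is arbitrary):</p>
<pre><code>import torch

x = torch.randn(1, 1000, requires_grad=True)   # one input with gradient tracking
neuron = model(x)[0, 3]                        # output of a single neuron, e.g. index 3
neuron.backward()
sensitivity = x.grad.abs()                     # large entries = features this neuron is sensitive to
</code></pre>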
<p>As for <code>2</code>, I don't think that your approach to "reversing the problem" is correct. The problem is that your network is discriminative and what it outputs can be seen as <code>argmax_y p(y|x)</code>. Note that this is a point-wise estimation, not a full modeling of the distribution. However, the inverse problem that you're interested in seems to be sampling from </p>
<pre><code>p(x|y)=constant*p(y|x)p(x).
</code></pre>
<p>You don't know how to sample from <code>p(y|x)</code> and you don't know anything about <code>p(x)</code>. Even if you use a method to discover correlations between the neurons and specific input features, you have only discovered which features were more important to the network's prediction, but depending on the nature of <code>y</code> this might be insufficient. Consider a toy example where your inputs <code>x</code> are 2d points distributed according to some distribution in <code>R^2</code> and where the output <code>y</code> is binary, such that any <code>(a,b) in R^2</code> is classified as <code>1</code> if <code>a<1</code> and it is classified as <code>0</code> if <code>a>1</code>. Then a discriminative network could learn the vertical line <code>x=1</code> as its decision boundary. Inspecting correlations between neurons and input features will reveal that only the first coordinate was useful in this prediction, but this information is not sufficient for sampling from the full 2d distribution of inputs. </p>
<p>I think that <a href="https://en.wikipedia.org/wiki/Autoencoder#Variational_autoencoder_(VAE)" rel="nofollow noreferrer">Variational autoencoders</a> could be what you're looking for. </p> | 2020-01-23 13:30:42.700000+00:00 | 2020-01-23 13:30:42.700000+00:00 | null | null | 59,878,319 | <p>Can we activate the outputs of a NN to gain insight into how the neurons are connected to input features?</p>
<p>If I take a basic NN example from the PyTorch tutorials. Here is an example of a <code>f(x,y)</code> training example.</p>
<pre class="lang-py prettyprint-override"><code>import torch
N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-4
for t in range(500):
y_pred = model(x)
loss = loss_fn(y_pred, y)
model.zero_grad()
loss.backward()
with torch.no_grad():
for param in model.parameters():
param -= learning_rate * param.grad
</code></pre>
<p>After I've finished training the network to predict <code>y</code> from <code>x</code> inputs, is it possible to reverse the trained NN so that it can now predict <code>x</code> from <code>y</code> inputs?</p>
<p>I don't expect <code>y</code> to match the original <em>inputs</em> that trained the <code>y</code> outputs. So I expect to see what <em>features</em> the model activates on to match <code>x</code> and <code>y</code>.</p>
<p>If it is possible, then how do I rearrange the <code>Sequential</code> model without breaking all the weights and connections?</p> | 2020-01-23 12:13:11.680000+00:00 | 2020-01-23 20:31:48.403000+00:00 | 2020-01-23 12:18:37.140000+00:00 | python|neural-network|pytorch | ['https://arxiv.org/abs/1610.02391', 'https://en.wikipedia.org/wiki/Autoencoder#Variational_autoencoder_(VAE)'] | 2 |
<p>A way to get what you want is to generate truncated normally distributed random radii r with \sigma=1 in the range +-R, and uniformly distributed random values \theta in 0..pi for the polar angle (note that r may be negative, which covers the half of the ellipse that \theta in 0..pi alone would miss). If A and B are the major and minor axes aligned with x and y, then the points with respect to the origin are</p>
<pre><code>x = r A/R cos \theta, y = r B/R sin \theta
</code></pre>
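<p>For illustration, a small sketch in Python (truncation is done here by simple rejection, which is fine for moderate R; the references below discuss faster truncated-normal samplers):</p>
<pre><code>import math
import random

def sample_point(A, B, R):
    r = random.gauss(0.0, 1.0)
    while abs(r) > R:                  # rejection: redraw until r lies within +-R
        r = random.gauss(0.0, 1.0)
    theta = random.uniform(0.0, math.pi)
    return (r * A / R * math.cos(theta),
            r * B / R * math.sin(theta))
</code></pre>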
<p>When you choose parameter R to be large, the values will be concentrated toward the center of the ellipse. When it's small, they'll be evenly distributed across it.</p>
<p>Generating pseudo random values that are truncated-normally distributed is not trivial, but not too hard either. See e.g. <a href="http://arxiv.org/pdf/1201.6140.pdf" rel="nofollow">Chopin's paper</a> for a good discussion. <a href="http://miv.u-strasbg.fr/mazet/rtnorm/" rel="nofollow">This C++ implementation</a> looks useful.</p>
<p>If you don't care about all the points lying inside the ellipse, you can use the full normal distribution. The Box Muller algorithm for this is very simple to implement, and it's built into many libraries <a href="http://docs.oracle.com/javase/7/docs/api/java/util/Random.html#nextGaussian()" rel="nofollow">including Java's</a>.</p> | 2015-07-17 03:25:13.457000+00:00 | 2015-07-17 03:25:13.457000+00:00 | null | null | 31,467,501 | <p>I feel like the question is quite descriptive, but to illustrate it in words:</p>
<p>Picture a radial gradient (black in the centre, white on the edge). I want to generate a random point that is more likely to fall in the black, less likely to fall in the grey, and even less likely to fall in the white.</p>
<p>Could someone point me in the right direction? I'm quite stumped :/</p> | 2015-07-17 02:21:52.363000+00:00 | 2015-07-18 18:24:52.560000+00:00 | null | algorithm | ['http://arxiv.org/pdf/1201.6140.pdf', 'http://miv.u-strasbg.fr/mazet/rtnorm/', 'http://docs.oracle.com/javase/7/docs/api/java/util/Random.html#nextGaussian()'] | 3 |