a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
60,227,691 | <p>There is this paper: <a href="https://arxiv.org/pdf/1810.09111.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1810.09111.pdf</a><br>
It looks like what you want to do!</p>
<p>In the paper they output a heatmap of the changes between the two images, but you could simply detect whether there is any activation, or a certain amount of activation, to produce your binary classification.</p>
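<p>That thresholding idea can be sketched like this (a rough NumPy illustration — the heatmap, the threshold values and the function name are invented for the example, not taken from the paper):</p>

```python
import numpy as np

def changed(heatmap, activation_thresh=0.5, area_thresh=0.01):
    """Return 1 if enough of the heatmap is 'activated', else 0.

    heatmap: 2D array of per-pixel change scores in [0, 1].
    activation_thresh: score above which a pixel counts as changed.
    area_thresh: fraction of changed pixels needed to call the pair 'changed'.
    """
    active = heatmap > activation_thresh
    return int(active.mean() > area_thresh)

# Toy example: an all-zero heatmap vs. one with a small changed region.
no_change = np.zeros((64, 64))
with_change = np.zeros((64, 64))
with_change[10:20, 10:20] = 0.9  # a 100-pixel "changed" patch

print(changed(no_change))   # 0
print(changed(with_change)) # 1
```

<p>Both thresholds would need tuning on a validation set, of course.</p>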
<p>Hope that can help you!</p> | 2020-02-14 13:53:38.093000+00:00 | 2020-02-14 13:53:38.093000+00:00 | null | null | 60,223,311 | <p>I am not really sure this is the right place to ask this kind of question. I am looking for some recommendations for a solution to a specific problem.</p>
<p>I am struggling to find a solution for my university project.
Let's say I have 2 images taken at the same location but at different times. I need to build a model that detects if there is any change between these 2 images.</p>
<p>This is somehow similar to the foreground segmentation/background subtraction/scene change detection problems, on which plenty of research has been done (for <a href="https://www.researchgate.net/publication/330695424_ChangeNet_A_Deep_Learning_Architecture_for_Visual_Change_Detection_Subvolume_B" rel="nofollow noreferrer">reference</a>):
<a href="https://i.stack.imgur.com/JZ92A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JZ92A.png" alt="enter image description here"></a></p>
<p>However, the scope of those problems is far beyond what I want to do. They extract features from the original images with a CNN and then turn the extracted features into a greyscale output image with deconvolutional techniques. For me, I just want to extract features and eventually output a binary (1 or 0) value: 1 if there is any change between the 2 images, 0 if there is not. In other words, I don't want to implement the deconvolutional part.</p>
<p>I have 2 problems:</p>
<ol>
<li>The CD2014 dataset (and also other relevant datasets) has only
ground truths in greyscale image format, not my desired output
(binary 1/0 format). </li>
<li>Furthermore, since my problem is different from
those papers, I cannot find a suitable model to use. I am inclined to use
VGG-16, but there is no proof that it will work for my problem.</li>
</ol>
<p>Can you suggest any solutions or materials in this scope?
I truly appreciate any recommendations.</p>
<p>Thank you and have a nice day!</p> | 2020-02-14 09:18:06.587000+00:00 | 2020-02-14 13:53:38.093000+00:00 | null | image-processing|deep-learning|conv-neural-network|deconvolution | ['https://arxiv.org/pdf/1810.09111.pdf'] | 1 |
63,644,769 | <p>Fuzzing a JavaScript engine draws a lot of attention as the number of browser users is about 4 billion. Several works have been done to find bugs in JS engines, including popular large engines, e.g., v8, webkit, chakracore, gecko, and some small embedded engines, like jerryscript, QuickJS, jsish, mjs, mujs.</p>
<p>It is really difficult to find bugs using AFL, as the mutation mechanisms provided by AFL are not practical for JS files; e.g., a bitflip can hardly produce a valid mutation. Since JS is a structured language, several works use the ECMAScript grammar to mutate/generate JS files (seeds):</p>
<p><a href="https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final73.pdf" rel="nofollow noreferrer">LangFuzz</a> parses sample JS files and splits them into code fragments. It then recombines the fragments to produce test cases.</p>
<p><a href="https://github.com/MozillaSecurity/funfuzz" rel="nofollow noreferrer">jsfunfuzz</a> randomly generates syntactically valid JS statements from JS grammar manually written for fuzzing.</p>
<p><a href="https://github.com/MozillaSecurity/dharma" rel="nofollow noreferrer">Dharma</a> is a generation-based, context-free grammar fuzzer, generating files based on given grammar.</p>
<p><a href="https://arxiv.org/pdf/1812.01197.pdf" rel="nofollow noreferrer">Superion</a> extends AFL using tree-based mutation guided by JS grammar.</p>
<p>The above works can easily pass the syntax checks but fail at the semantic checks: many of the generated JS seeds are semantically invalid.</p>
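<p>As a toy illustration of grammar-based generation — and of the semantic gap just described — here is a tiny context-free generator (the grammar is invented for the example and is far smaller than real ECMAScript):</p>

```python
import random

# A miniature JS-like grammar, invented for this example -- each
# nonterminal maps to a list of possible expansions.
GRAMMAR = {
    "<stmt>": [["var ", "<id>", " = ", "<expr>", ";"],
               ["<id>", ".foo(", "<expr>", ");"]],
    "<expr>": [["<id>"], ["<num>"], ["<expr>", " + ", "<expr>"]],
    "<id>":   [["a"], ["b"], ["c"]],
    "<num>":  [["0"], ["1"], ["42"]],
}

def generate(symbol="<stmt>", depth=0):
    """Randomly expand a nonterminal into a string of terminals."""
    if symbol not in GRAMMAR:
        return symbol                       # terminal: emit as-is
    choices = GRAMMAR[symbol]
    if depth > 8:                           # bound the <expr> recursion
        choices = [c for c in choices if len(c) == 1] or choices
    expansion = random.choice(choices)
    return "".join(generate(s, depth + 1) for s in expansion)

random.seed(0)
for _ in range(3):
    print(generate())
# Every output parses as a statement, but a generated `b.foo(c);` may
# still fail at run time because `b` was never defined -- the semantic
# gap that CodeAlchemist and the IR-level fuzzers try to close.
```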
<p><a href="https://github.com/SoftSec-KAIST/CodeAlchemist" rel="nofollow noreferrer">CodeAlchemist</a> uses a semantics-aware approach to generate code segments based on a static type analysis.</p>
<p>There are two levels of bugs related to JS engines: simple parser/interpreter bugs and deeper logic bugs. Recently there is a trend that the number of simple bugs is decreasing while more and more deep bugs are surfacing.</p>
<p><a href="https://github.com/sslab-gatech/DIE" rel="nofollow noreferrer">DIE</a> uses aspect-preserving mutation to preserve the desirable properties of past CVEs. It also uses type analysis to generate semantically valid test cases.</p>
<p>Some works focus on mutating intermediate representations.</p>
<p><a href="https://github.com/googleprojectzero/fuzzilli" rel="nofollow noreferrer">Fuzzilli</a> is a coverage-guided fuzzer based on mutation on the IR level. The mutations on IR can guarantee semantic validity and can be transferred to JS.</p>
<p>Fuzzing JS is an interesting and hot topic according to the top conference of security/SE in recent years. Hope this information is helpful.</p> | 2020-08-29 08:01:36.420000+00:00 | 2020-08-29 08:01:36.420000+00:00 | null | null | 63,560,866 | <p>The <a href="https://lcamtuf.coredump.cx/afl/" rel="nofollow noreferrer">American Fuzzy Lop</a>, and the conceptually related LLVM <a href="https://llvm.org/docs/LibFuzzer.html" rel="nofollow noreferrer">libfuzzer</a> not only generate random fuzzy strings, but they also watch branch coverage of the code under test and use genetic algorithms to try to cover as many branches as possible. This increases the hit frequency of the more interesting code further downstream as otherwise most of the generated inputs will be stopped early in some deserialization or validation.</p>
<p>But those tools work at native code level, which is not useful for JavaScript applications as it would be trying to cover the interpreter, but not really the interpreted code.</p>
<p>So is there a way to fuzz JavaScript (preferably in browser, but tests running in node.js would help too) with coverage guidance?</p>
<p>I looked at the tools mentioned in <a href="https://stackoverflow.com/questions/16521143/">this old question</a>, but those that do JavaScript don't seem to mention anything about coverage profiling. And while <a href="https://gitlab.com/akihe/radamsa" rel="nofollow noreferrer">radamsa</a> mentions optionally pairing it with coverage analysis, I haven't found any documentation on how to actually do it.</p>
<p>How can one fuzz-test a JavaScript (in-browser) application with coverage guidance?</p> | 2020-08-24 12:15:01.917000+00:00 | 2020-08-29 08:01:36.420000+00:00 | 2020-08-24 13:33:41.223000+00:00 | javascript|fuzzing | ['https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final73.pdf', 'https://github.com/MozillaSecurity/funfuzz', 'https://github.com/MozillaSecurity/dharma', 'https://arxiv.org/pdf/1812.01197.pdf', 'https://github.com/SoftSec-KAIST/CodeAlchemist', 'https://github.com/sslab-gatech/DIE', 'https://github.com/googleprojectzero/fuzzilli'] | 7
32,100,108 | <p>Prolog lets you represent knowledge as facts and rules. A
fact and a rule have the following format:</p>
<pre><code> A :- A1, .., An
</code></pre>
<p>Where A, A1, .., An are so-called literals. If n=0 then it's
a fact; if n>0 then it's a rule.</p>
<p>A literal has the following syntax, where name is the predicate name
and the terms are the arguments of the predicate:</p>
<pre><code> literal = atom [ "(" term { "," term } ")" ].
</code></pre>
<p>Knowledge representation is an art in itself. There can be
many requirements on the representation, which can force it
to take a certain form.</p>
<p>But you can think of a literal as an Excel sheet that is used
to hold a table. The column names themselves are
not entered into Prolog as facts and rules, but you can use
Prolog <a href="http://arxiv.org/abs/0911.2899" rel="nofollow">comments</a> to record the column names, such as:</p>
<pre><code> % distance_between_cities(Atom, Atom, Float)
</code></pre>
<p>Or more specific:</p>
<pre><code> % distance_between_cities(CityId, CityId, DistanceMiles)
</code></pre>
<p>After the first comment, you just enter the facts:</p>
<pre><code> distance_between_cities('New York, US','Los Angeles, US',2443.85).
distance_between_cities('New York, US','San Francisco, US',2563.89).
distance_between_cities('Los Angeles, US','San Francisco, US',347.18).
</code></pre>
<p>Different predicate names can name different Excel sheets,
so to speak. Some Prolog systems even have <a href="http://www.swi-prolog.org/pldoc/man?section=csv" rel="nofollow">CSV</a> interfaces.</p>
<p>Bye</p> | 2015-08-19 15:43:08.910000+00:00 | 2015-08-19 16:31:39.077000+00:00 | 2015-08-19 16:31:39.077000+00:00 | null | 27,081,108 | <p>I am learning Prolog. I am wondering how I can represent the following fact in Prolog:
<br>"There are 300 miles between someCityA and someCityB".
Can anyone help, please? I have searched enough but can't find a solution to my specific problem.</p> | 2014-11-22 18:47:09.353000+00:00 | 2015-08-19 16:31:39.077000+00:00 | 2014-11-23 10:57:02.760000+00:00 | prolog | ['http://arxiv.org/abs/0911.2899', 'http://www.swi-prolog.org/pldoc/man?section=csv'] | 2
38,409,015 | <p>Dropout is used in unsupervised learning. For example:</p>
<blockquote>
<p>Shuangfei Zhai, Zhongfei Zhang: Dropout Training of Matrix Factorization and Autoencoder for Link Prediction in Sparse Graphs (<a href="http://arxiv.org/abs/1512.04483" rel="nofollow">arxiv</a>, 14 Dec 2015)</p>
</blockquote> | 2016-07-16 07:45:39.333000+00:00 | 2016-07-16 07:45:39.333000+00:00 | null | null | 19,666,598 | <p>All or nearly all of the papers using dropout are using it for supervised learning. It seems that it could just as easily be used to regularize deep autoencoders, RBMs and DBNs. So why isn't dropout used in unsupervised learning?</p> | 2013-10-29 18:41:36.400000+00:00 | 2021-08-18 16:48:37.040000+00:00 | null | machine-learning|neural-network|unsupervised-learning|supervised-learning | ['http://arxiv.org/abs/1512.04483'] | 1 |
69,152,922 | <p>In general, the compiler will create a <code>vtable</code> and virtual method calls are dispatched through it, i.e., there is one added level of indirection in calls.</p>
<p>But optimizing compilers do try to avoid this. This optimization is generally called "devirtualization".
When and how this works very much depends on the compiler and code in question.
<a href="https://quuxplusone.github.io/blog/2021/02/15/devirtualization/" rel="nofollow noreferrer">Here is a nice blog post about it.</a></p>
<p>Some more resources:</p>
<p>Talk: <a href="https://www.youtube.com/watch?v=w0sz5WbS5AM#t=3090" rel="nofollow noreferrer">Matt Godbolt on speculative devirtualization in LLVM</a></p>
<p>Talk: <a href="https://www.youtube.com/watch?v=qMhV6d3B1Vk" rel="nofollow noreferrer">2016 LLVM Developers’ Meeting: P. Padlewski “Devirtualization in LLVM”</a></p>
<p>Talk: <a href="https://www.youtube.com/watch?v=Dt4UehzzcsE" rel="nofollow noreferrer">2018 LLVM Developers’ Meeting: P. Padlewski “Sound Devirtualization in LLVM”</a></p>
<p>Paper: <a href="https://sites.cs.ucsb.edu/%7Eckrintz/papers/devirt-ibmjit.pdf" rel="nofollow noreferrer">Ishizaki et al, 2000, "A Study of Devirtualization Techniques
for a Java™ Just-In-Time Compiler"</a></p>
<p>Paper: <a href="https://arxiv.org/pdf/2003.04228.pdf" rel="nofollow noreferrer">Padlewski et al, 2020, "Modeling the Invariance of Virtual Pointers in LLVM"</a></p> | 2021-09-12 15:37:36.057000+00:00 | 2021-09-18 11:04:18.240000+00:00 | 2021-09-18 11:04:18.240000+00:00 | null | 69,150,509 | <p>I have a question regarding the resolving timing of a C++ virtual function. From chapter OOP in C++ Primer, it mentioned that:</p>
<blockquote>
<p><strong>Calls to Virtual Functions May Be Resolved at Run Time</strong>
When a virtual function is called through a reference or pointer, the compiler
generates code to decide at <strong>run time</strong> which function to call. The function that is called
is the one that corresponds to the dynamic type of the object bound to that pointer or
reference.</p>
</blockquote>
<p>I understand what the above statement describes: when a virtual function is executed, which version is actually resolved depends on the dynamic type of the object the calling pointer/reference is bound to. If that type is the base class, the base class's virtual function is the one that actually runs, and vice versa. Obviously this needs to happen at run time.</p>
<p>However, the example following by the above statement in C++ primer has confused me for a while:</p>
<pre><code>double print_total(ostream &os, const Quote &item, size_t n)
{
// depending on the type of the object bound to the item parameter
// calls either Quote::net_price or Bulk_quote::net_price
double ret = item.net_price(n);
return ret;
}
// Quote is base class, Bulk_quote is derived class
Quote base("0-201-82470-1", 50);
print_total(cout, base, 10); // calls Quote::net_price
Bulk_quote derived("0-201-82470-1", 50, 5, .19);
print_total(cout, derived, 10); // calls Bulk_quote::net_price
</code></pre>
<p>My questions are:</p>
<ol>
<li>To my understanding, in this example the compiler is able to know at <strong>compile time</strong> the "real type" of <strong>instance base</strong> and <strong>instance derived</strong>, as they are plainly declared in the code! So I think the calls in this example could be resolved at compile time. Am I right about that?</li>
<li>Can virtual function calls be resolved at compile time? Or, as a matter of convenience, does C++ just resolve all virtual calls at run time? Since C++ Primer says "Calls to Virtual Functions <strong>May</strong> Be Resolved at Run Time", I am not quite sure whether run time is always the case.</li>
</ol>
<p>I think to really understand the resolved time of virtual functions is very important to every C++ user. I tried to find the knowledge about compile time/run time but none of them can help me figure out my question. Do anyone have any thoughts on my questions. Thank you in advance!</p> | 2021-09-12 10:19:26.963000+00:00 | 2021-09-19 09:44:12.823000+00:00 | 2021-09-12 14:46:43.140000+00:00 | c++|polymorphism|virtual-functions | ['https://quuxplusone.github.io/blog/2021/02/15/devirtualization/', 'https://www.youtube.com/watch?v=w0sz5WbS5AM#t=3090', 'https://www.youtube.com/watch?v=qMhV6d3B1Vk', 'https://www.youtube.com/watch?v=Dt4UehzzcsE', 'https://sites.cs.ucsb.edu/%7Eckrintz/papers/devirt-ibmjit.pdf', 'https://arxiv.org/pdf/2003.04228.pdf'] | 6 |
21,020,265 | <p>It's called the <a href="http://en.wikipedia.org/wiki/In_shuffle" rel="nofollow">in-place in-shuffle problem</a>. Here is an implementation of it in C++ based on <a href="http://arxiv.org/abs/0805.1598" rel="nofollow">this paper</a>.</p>
<pre><code>void in_place_in_shuffle(int arr[], int length)
{
assert(arr && length>0 && !(length&1));
// shuffle to {5, 0, 6, 1, 7, 2, 8, 3, 9, 4}
int i,startPos=0;
while(startPos<length)
{
i=_LookUp(length-startPos);
_ShiftN(&arr[startPos+(i-1)/2],(length-startPos)/2,(i-1)/2);
_PerfectShuffle(&arr[startPos],i-1);
startPos+=(i-1);
}
// local swap to {0, 5, 1, 6, 2, 7, 3, 8, 4, 9}
for (int i=0; i<length; i+=2)
swap(arr[i], arr[i+1]);
}
// cycle
void _Cycle(int Data[],int Lenth,int Start)
{
int Cur_index,Temp1,Temp2;
Cur_index=(Start*2)%(Lenth+1);
Temp1=Data[Cur_index-1];
Data[Cur_index-1]=Data[Start-1];
while(Cur_index!=Start)
{
Temp2=Data[(Cur_index*2)%(Lenth+1)-1];
Data[(Cur_index*2)%(Lenth+1)-1]=Temp1;
Temp1=Temp2;
Cur_index=(Cur_index*2)%(Lenth+1);
}
}
// reverse and rotate (via three reversals) helpers
void _Reverse(int Data[],int Len)
{
int i,Temp;
for(i=0;i<Len/2;i++)
{
Temp=Data[i];
Data[i]=Data[Len-i-1];
Data[Len-i-1]=Temp;
}
}
void _ShiftN(int Data[],int Len,int N)
{
_Reverse(Data,Len-N);
_Reverse(&Data[Len-N],N);
_Reverse(Data,Len);
}
// perfect shuffle of a block whose length satisfies Lenth = 3^k - 1
void _PerfectShuffle(int Data[],int Lenth)
{
int i=1;
if(Lenth==2)
{
i=Data[Lenth-1];
Data[Lenth-1]=Data[Lenth-2];
Data[Lenth-2]=i;
return;
}
while(i<Lenth)
{
_Cycle(Data,Lenth,i);
i=i*3;
}
}
// look for the largest power 3^k with 3^k <= N+1
int _LookUp(int N)
{
int i=3;
while(i<=N+1) i*=3;
if(i>3) i=i/3;
return i;
}
</code></pre>
<hr>
<p><strong>Test:</strong></p>
<pre><code>int arr[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
int length = sizeof(arr)/sizeof(int);
in_place_in_shuffle(arr, length);
</code></pre>
<p>After this, <code>arr[]</code> will be <code>{0, 5, 1, 6, 2, 7, 3, 8, 4, 9}</code>.</p> | 2014-01-09 12:28:11.603000+00:00 | 2014-01-09 12:28:11.603000+00:00 | null | null | 1,777,901 | <p>Suppose we have an array
a1, a2,... , an, b1, b2, ..., bn.</p>
<p>The goal is to change this array to
a1, b1, a2, b2, ..., an, bn in O(n) time and in O(1) space.
In other words, we need a linear-time algorithm to modify the array in place, with no more than a constant amount of extra storage.</p>
<p>How can this be done?</p> | 2009-11-22 05:26:44.363000+00:00 | 2021-10-04 10:39:17.067000+00:00 | 2021-10-02 01:39:09.407000+00:00 | arrays|algorithm|permutation | ['http://en.wikipedia.org/wiki/In_shuffle', 'http://arxiv.org/abs/0805.1598'] | 2 |
<p>Fully connected layers can indeed be very heavy. Please look at section "3.1 Truncated SVD for faster detection" in <a href="https://arxiv.org/abs/1504.08083" rel="nofollow noreferrer"><em>Girshick, R</em> <strong>Fast-RCNN</strong> ICCV 2015</a>, which describes how to use the <a href="https://en.wikipedia.org/wiki/Singular_value_decomposition" rel="nofollow noreferrer">SVD</a> trick to <em>significantly</em> reduce the burden of fully connected layers. Hence, you can replace your three fully connected layers with 6 very thin layers.</p>
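<p>The gist of that trick can be sketched with NumPy (the layer sizes and rank here are made up, not Fast-RCNN's actual ones): a rank-t truncated SVD factors one fat u×v weight matrix into two thin layers of sizes u×t and t×v:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up 256x512 fully connected weight matrix of low effective rank.
u, v, t = 256, 512, 16
W = rng.standard_normal((u, t)) @ rng.standard_normal((t, v))

# Truncated SVD: W ~= (U_t * S_t) @ Vt_t, i.e. two layers of sizes
# (u x t) and (t x v) instead of one (u x v) layer.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
L1 = U[:, :t] * S[:t]          # first thin layer:  u x t
L2 = Vt[:t, :]                 # second thin layer: t x v

x = rng.standard_normal(u)
print(np.allclose(x @ W, (x @ L1) @ L2))  # True
print(W.size, L1.size + L2.size)          # 131072 12288 -- ~10x fewer weights
```

<p>On real trained layers the truncation is only approximate, so a bit of fine-tuning afterwards is usually needed to recover accuracy.</p>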
<p>Steps to go from model <code>A</code> to <code>B</code>: </p>
<ol>
<li><p>Create <code>B.prototxt</code> that has the 5 convolution layers <strong>with the same <code>"name"</code>s</strong> as <code>A</code>. </p></li>
<li><p>Give the single fully connected layer in <code>B</code> a new <code>"name"</code> that does not exist in <code>A</code>.</p></li>
<li><p>in python </p>
<pre><code>import caffe
B = caffe.Net('/path/to/B.prototxt', '/path/to/weights_A.caffemodel', caffe.TEST)
B.save('/path/to/weights_B.caffemodel')
</code></pre></li>
<li><p>Now you have weights for <code>B</code> that are the same as the weights of <code>A</code> for all convolutional layers and <strong>random</strong> for the new single fully connected layer.</p></li>
<li><p>fine tune model <code>B</code> starting from <code>'/path/to/weights_B.caffemodel'</code> to learn the weights for the new single fully connected layer.</p></li>
</ol> | 2016-11-08 06:53:17.263000+00:00 | 2016-11-08 06:53:17.263000+00:00 | null | null | 40,479,864 | <p>I need to update a caffe model from an existing caffe model where I will drop last two layers. It is needed to reduce caffe model size so that it would be easier and lesser size to deploy. Say my existing caffe model is <strong>A1.caffemodel</strong> which has <strong><em>5 convolution layers</em></strong> and <strong><em>3 fully connected layers</em></strong>. I want to generate a new model from it named <strong>B1.caffemodel</strong> which will have <strong><em>5 convolution layers</em></strong> and <strong><em>1 fully connected layer</em></strong> (last 2 fc layers discarded).</p>
<p>I appreciate your all valuable suggestions and helpful code snippet. </p> | 2016-11-08 05:49:19.193000+00:00 | 2016-11-08 09:45:14.673000+00:00 | 2016-11-08 09:44:11.583000+00:00 | python|c++|caffe | ['https://arxiv.org/abs/1504.08083', 'https://en.wikipedia.org/wiki/Singular_value_decomposition'] | 2 |
66,193,020 | <p>Now you can use BERT or related variants and here you can find all the pre-trained models: <a href="https://huggingface.co/transformers/pretrained_models.html" rel="nofollow noreferrer">https://huggingface.co/transformers/pretrained_models.html</a></p>
<p>And it is possible to pre-train and fine-tune RNN, and you can refer to this paper: <a href="https://arxiv.org/pdf/1706.08838.pdf" rel="nofollow noreferrer">TimeNet: Pre-trained deep recurrent neural network for time series classification</a>.</p> | 2021-02-14 06:36:30.343000+00:00 | 2021-02-14 06:36:30.343000+00:00 | null | null | 46,713,734 | <p>I am trying to solve a time series prediction problem. I tried with ANN and LSTM, played around a lot with the various parameters, but all I could get was 8% better than the persistence prediction.</p>
<p>So I was wondering: since you can save models in keras; are there any pre-trained model (LSTM, RNN, or any other ANN) for time series prediction? If so, how to I get them? Are there in Keras?</p>
<p>I mean it would be super useful if there a website containing pre trained models, so that people wouldn't have to speent too much time training them..</p>
<p>Similarly, another question:</p>
<p>Is it possible to do the following?
1. Suppose I have a dataset now and I use it to train my model. Suppose that in a month, I will have access to another dataset (corresponding to same data or similar data, in the future possibly, but not exclusively). Will it be possible to continue training the model then? It is not the same thing as training it in batches. When you do it in batches you have all the data in one moment.
Is it possible? And how?</p> | 2017-10-12 15:41:15.467000+00:00 | 2021-02-14 06:36:30.343000+00:00 | null | python|machine-learning|neural-network|keras|recurrent-neural-network | ['https://huggingface.co/transformers/pretrained_models.html', 'https://arxiv.org/pdf/1706.08838.pdf'] | 2 |
44,449,630 | <p><em>"Several methods for finding saliency have been described by other authors. Among them are sensitivity
based approaches [4, 5, 6], deconvolution based ones [7, 8], or more complex ones like
layer-wise relevance propagation (LRP) [9]."</em></p>
<p>source : <a href="https://arxiv.org/pdf/1704.07911.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1704.07911.pdf</a></p>
<p>They are doing what you want, but with a CNN, maybe you should go from a MLP to a CNN, that would seem appropriate for medical imaging classification.</p>
<p>Or maybe this paper would fit better:</p>
<p><a href="https://www.researchgate.net/publication/222561214_Illuminating_the_black_box_A_randomization_approach_for_understanding_variable_contributions_in_artificial_neural_networks" rel="nofollow noreferrer">Randomization approach for understanding variable contributions in artificial neural networks</a></p> | 2017-06-09 05:01:41.267000+00:00 | 2017-06-09 05:01:41.267000+00:00 | null | null | 44,443,721 | <p>I've trained a multiplayer perceptron on a medical imaging classification task (classifying whether an ultrasound scanning image belongs to the healthy or disease condition). The network consists of 2 fully connected hidden layers and 1 output unit. I then want to examine the weights to see which features in the images (e.g., clusters of pixels) are the most important for the network to distinguish between different classes. Since my network has two layers of hidden weights, how do I use these weights to quantify the importance of each image pixel? Could someone experienced with this point me to the right literature? Thanks.</p> | 2017-06-08 19:05:50.467000+00:00 | 2017-06-09 05:01:41.267000+00:00 | null | machine-learning|deep-learning|perceptron | ['https://arxiv.org/pdf/1704.07911.pdf', 'https://www.researchgate.net/publication/222561214_Illuminating_the_black_box_A_randomization_approach_for_understanding_variable_contributions_in_artificial_neural_networks'] | 2 |
69,493,985 | <p>There are actually some methods that could increase the accuracy of the model:</p>
<ol>
<li>Random Resampling</li>
<li>Residual Adaptive Refinement (RAR): <a href="https://arxiv.org/pdf/1907.04502.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1907.04502.pdf</a></li>
</ol>
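<p>The RAR idea itself is easy to sketch (a toy, framework-free illustration — the "residual" function below is invented and stands in for the PDE residual a trained network would report): sample candidate points, evaluate the residual, and add the worst points to the training set.</p>

```python
import random

def rar_select(residual, train_pts, n_candidates=100, n_add=5):
    """One round of residual-based adaptive refinement (toy sketch).

    residual: function x -> |PDE residual| at x.
    Returns the training set augmented with the worst candidate points.
    """
    candidates = [random.uniform(0.0, 1.0) for _ in range(n_candidates)]
    worst = sorted(candidates, key=residual, reverse=True)[:n_add]
    return train_pts + worst

# Toy "residual": pretend the model is worst near a spike at x = 0.5.
random.seed(1)
spike = lambda x: 1.0 / (1e-3 + abs(x - 0.5))
pts = rar_select(spike, train_pts=[])
print(pts)  # five points clustered near x = 0.5
```

<p>Repeating this between training rounds concentrates collocation points where the solution is hardest, which is exactly what helps with "spiky" solutions.</p>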
<p>They even have an implemented example in their github repository:</p>
<p><a href="https://github.com/lululxvi/deepxde/blob/master/examples/Burgers_RAR.py" rel="nofollow noreferrer">https://github.com/lululxvi/deepxde/blob/master/examples/Burgers_RAR.py</a></p>
<p>Also, You could try using a different architecture such as Multi-Scale Fourier NNs. They seem to outperform PINNs, in cases where the solution contains lots of "spikes".</p> | 2021-10-08 09:55:34.513000+00:00 | 2021-10-08 09:55:34.513000+00:00 | null | null | 67,103,081 | <p>We used DeepXDE for solving differential equations. (DeepXDE is a framework for solving differential equations, based on TensorFlow). It works fine, but the accuracy of the solution is limited, and optimizing the meta-parameters did not help. Is this limitation a well-known problem? How the accuracy of solutions can be increased? We used the Adam-optimizer; are there optimizers that are more suitable for numerical problems, if high precision is needed?</p>
<p>(I think the problem is not specific for some concrete equation, but if needed I add an example.)</p> | 2021-04-15 06:07:34.923000+00:00 | 2021-10-08 09:55:34.513000+00:00 | null | tensorflow|optimization|deep-learning | ['https://arxiv.org/pdf/1907.04502.pdf', 'https://github.com/lululxvi/deepxde/blob/master/examples/Burgers_RAR.py'] | 2 |
<p>Breaking up text into individual characters is not as easy as it sounds at first. You can try to find some rules and manipulate the image by them, but there will be just too many exceptions. For example you can try to find disjoint marks, but the fourth one in your image, <strong><em>0715</em></strong>, has its "5" broken up into three pieces, and the 9<sup>th</sup> one, <strong><em>17.00</em></strong>, has the two zeros overlapping.</p>
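<p>The simplest version of the disjoint-marks idea is a vertical projection profile: sum the ink in each pixel column and split wherever the sum drops to zero. Here is a sketch on a toy binary image (real scans need a "soft" threshold instead of exact zero — and the overlapping zeros above are exactly where this baseline breaks):</p>

```python
def split_digits(img):
    """Return (start, end) column ranges of connected ink runs.

    img: 2D list of 0/1 pixels (1 = ink). Overlapping or broken digits
    will fool this -- it is only the naive baseline.
    """
    width = len(img[0])
    ink_per_col = [sum(row[c] for row in img) for c in range(width)]
    runs, start = [], None
    for c, ink in enumerate(ink_per_col + [0]):  # sentinel closes the last run
        if ink and start is None:
            start = c
        elif not ink and start is not None:
            runs.append((start, c))
            start = None
    return runs

# Toy 4x10 image: two blobs separated by a blank column gap.
img = [
    [0,1,1,0,0,0,1,1,1,0],
    [0,1,0,0,0,0,0,1,0,0],
    [0,1,1,0,0,0,1,1,1,0],
    [0,0,0,0,0,0,0,0,0,0],
]
print(split_digits(img))  # [(1, 3), (6, 9)]
```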
<p>You are very lucky with the horizontal lines - at least it's easy to separate different entries. But you have to come up with a lot of ideas related to semi-fixed character width, a "soft" disjointness rule, etc.</p>
<p>I did a project like that two years ago and we ended up using an external open source library called <a href="https://github.com/tesseract-ocr/tesseract" rel="nofollow noreferrer">Tesseract</a>. Here's <a href="https://arxiv.org/ftp/arxiv/papers/1003/1003.5898.pdf" rel="nofollow noreferrer">this article</a> of <strong><em>Roman</em></strong> numerals recognition with it, up to about 90% accuracy. You might also want to look into the <a href="http://lipitk.sourceforge.net/lipi-toolkit.htm" rel="nofollow noreferrer">Lipi Toolkit</a>, but I have no experience with that.</p>
<p>You might also want to consider to just train a network to recognize the four digits at once. So the input would be the whole field with the four handwritten digits and the output would be the four numbers. And let the network sort out where the characters are. If you have enough training data, that's probably the easiest approach.</p>
<p><strong>EDIT:</strong>
Inspired by @Link's answer, I just came up with this idea, you can give it a try. Once you extracted the area between the two lines, trim the image to get rid of white space all around. Then make an educated guess about how big the characters are. Use maybe the height of the area? Then create a sliding window over the image, and run the recognition all the way. There will most likely be four peaks which would correspond to the four digits.</p> | 2018-05-14 12:18:28.760000+00:00 | 2018-05-14 15:14:09.737000+00:00 | 2018-05-14 15:14:09.737000+00:00 | null | 50,327,815 | <p>I've gotten access to a lot of reports which are filled out by hand. One of the columns in the report contains a timestamp, which I would like to attempt to identify without going through each report manually.</p>
<p>I am playing with the idea of splitting the times, e.g. 00:30, into four digits, and running these through a classifier trained on MNIST to identify the actual timestamps.</p>
<p>When I manually extract the four digits in Photoshop and run these through an MNIST classifier, it works perfectly. But so far I haven't been able to figure out how to programatically split the number sequences into single digits. I tried to use different types of countour finding in OpenCV, but it didn't work very reliably.</p>
<p>Any suggestions?</p>
<p>I've <a href="https://i.stack.imgur.com/CpAyE.jpg" rel="nofollow noreferrer">added a screenshot</a> of some of the relevant columns in the reports.</p> | 2018-05-14 10:08:43.810000+00:00 | 2018-05-15 09:08:10.730000+00:00 | null | opencv|tensorflow|machine-learning|computer-vision|mnist | ['https://github.com/tesseract-ocr/tesseract', 'https://arxiv.org/ftp/arxiv/papers/1003/1003.5898.pdf', 'http://lipitk.sourceforge.net/lipi-toolkit.htm'] | 3 |
23,292,798 | <p>The <a href="http://www.mathworks.com/help/stats/kdtreesearcher-class.html" rel="nofollow noreferrer">code</a> builds a <a href="http://en.wikipedia.org/wiki/K-d_tree#Nearest_neighbour_search" rel="nofollow noreferrer">KD-tree</a> space-partitioning structure to speed up <a href="https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm" rel="nofollow noreferrer">nearest neighbor search</a>, think of it like building <a href="https://en.wikipedia.org/wiki/Database_index" rel="nofollow noreferrer">indexes</a> commonly used in RDBMS to speed up lookup operations.</p>
<p>In addition to nearest neighbor(s) searches, this structure also speeds up <a href="http://www.mathworks.com/help/stats/kdtreesearcher.rangesearch.html" rel="nofollow noreferrer">range-searches</a>, which finds all points that are within a distance <code>r</code> from a query point.</p>
<p>As pointed by @SamRoberts, the core of the code is implemented in C/C++ as a MEX-function.</p>
<p>Note that <code>knnsearch</code> chooses to build a <a href="http://www.mathworks.com/help/stats/classification-using-nearest-neighbors.html#bsfsm9c" rel="nofollow noreferrer">KD-tree</a> only under certain conditions, and falls back to an exhaustive search otherwise (by naively searching all points for the nearest one).</p>
<p>Keep in mind that in cases of very high-dimensional data (and few instances), the algorithm degenerates and is no better than an exhaustive search. In general, as you go beyond <code>d>30</code> dimensions, the cost of searching KD-trees increases until you are searching almost all the points; it can even become worse than a brute-force search due to the overhead involved in building the tree.</p>
<p>There are other variations to the algorithm that deals with high dimensions such as the <a href="https://en.wikipedia.org/wiki/Ball_tree" rel="nofollow noreferrer">ball trees</a> which partitions the data in a series of nesting hyper-spheres (as opposed to partitioning the data along Cartesian axes like KD-trees). Unfortunately those are not implemented in the official Statistics toolbox. If you are interested, here is <a href="http://arxiv.org/abs/1007.0085" rel="nofollow noreferrer">a paper</a> which presents a survey of available kNN algorithms.</p>
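<p>For illustration outside MATLAB, the core of the KD-tree speed-up — recursive axis-aligned splits plus pruning during search — can be sketched in plain Python (a didactic toy, not <code>knnsearch</code>'s actual C/C++ implementation):</p>

```python
import random

def build(points, depth=0):
    """Recursively split the points along alternating axes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest(node, query, best=None):
    """Depth-first search that prunes half-spaces that cannot win."""
    if node is None:
        return best
    if best is None or dist2(query, node["point"]) < dist2(query, best):
        best = node["point"]
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    if diff * diff < dist2(query, best):    # splitting plane is close, so
        best = nearest(far, query, best)    # the far side might still win
    return best

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(500)]
tree = build(pts)
query = (0.5, 0.5)
# The tree search agrees with brute force, but touches far fewer points.
assert nearest(tree, query) == min(pts, key=lambda p: dist2(query, p))
```

<p>The pruning test is what makes the average query cost logarithmic in low dimensions — and it is also what stops helping once the dimension grows, as noted above.</p>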
<p><img src="https://i.stack.imgur.com/l9xPN.png" alt="kdtree_search"></p>
<p><em>(The above is an illustration of searching a kd-tree partitioned 2d space, borrowed from the docs)</em></p> | 2014-04-25 12:12:10.793000+00:00 | 2014-04-25 12:12:10.793000+00:00 | null | null | 23,175,491 | <p>I wrote a basic O(n^2) algorithm for a nearest neighbor search. As usual Matlab 2013a's knnsearch(..) method works a lot faster.</p>
<p>Can someone tell me what kind of optimization they used in their implementation?</p>
<p>I am okay with reading any documentation or paper that you may point me to.</p>
<p>PS: I understand the documentation on the site mentions the paper on kd trees as a reference. But as far as I understand kd trees are the default option when column number is less than 10. Mine is 21. Correct me if I'm wrong about it.</p> | 2014-04-19 20:59:38.477000+00:00 | 2014-04-25 12:12:10.793000+00:00 | null | matlab|machine-learning|knn | ['http://www.mathworks.com/help/stats/kdtreesearcher-class.html', 'http://en.wikipedia.org/wiki/K-d_tree#Nearest_neighbour_search', 'https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm', 'https://en.wikipedia.org/wiki/Database_index', 'http://www.mathworks.com/help/stats/kdtreesearcher.rangesearch.html', 'http://www.mathworks.com/help/stats/classification-using-nearest-neighbors.html#bsfsm9c', 'https://en.wikipedia.org/wiki/Ball_tree', 'http://arxiv.org/abs/1007.0085'] | 8 |
72,101,499 | <p>You can use a deep learning based text detector called <a href="https://arxiv.org/abs/1704.03155" rel="nofollow noreferrer">Efficient and Accurate Scene Text - EAST</a>. It can be used with OpenCV functions but first you need to download the trained model from <a href="https://github.com/YCICI/EAST-OpenCv" rel="nofollow noreferrer">frozen_east_text_detection.pb</a></p>
<p>The following code and its comments were borrowed in their entirety from <a href="https://github.com/YCICI/EAST-OpenCv/blob/master/text_detection.py" rel="nofollow noreferrer">here -<code>text_detection.py</code></a>. Remember to pass the downloaded <code>.pb</code> file into <code>cv2.dnn.readNet()</code></p>
<p><em><strong>Highlights:</strong></em></p>
<ul>
<li>The trained model is passed into <code>cv2.dnn.readNet()</code> as a <code>.pb</code> file.</li>
<li>This model only accepts images whose dimensions are multiples of 32. (Here we set the width and height of the input image to 320 by default.)</li>
<li>Two output layers are defined in <code>layerNames</code>: one for the probability that a region contains text and one for the bounding box coordinates.</li>
<li>We cannot pass an image into the model the way we normally do to OpenCV functions. Every image is first passed into <code>cv2.dnn.blobFromImage()</code>, where it is converted into a <strong>blob</strong> and undergoes <strong>mean subtraction</strong>, <strong>scaling</strong> and <strong>channel swapping</strong>. <a href="https://pyimagesearch.com/2017/11/06/deep-learning-opencvs-blobfromimage-works/" rel="nofollow noreferrer">More details on these here</a></li>
<li>The input blob is passed to <code>net.setInput()</code>, and the output layer names are passed to <code>net.forward()</code>.</li>
<li>The output is a tuple <code>(scores, geometry)</code> containing:
<ul>
<li>the probability of whether a region is text or not</li>
<li>the bounding box coordinate of the text region</li>
</ul>
</li>
<li>We filter out predictions below a certain probability</li>
<li>On the remaining predictions we perform non-maximal suppression to remove overlapping boxes</li>
</ul>
<p>For more code explanation <a href="https://pyimagesearch.com/2018/08/20/opencv-text-detection-east-text-detector/" rel="nofollow noreferrer">Please refer here</a></p>
<p><em><strong>Code:</strong></em></p>
<pre><code>import cv2
import numpy as np
from imutils.object_detection import non_max_suppression  # pip install imutils

image = cv2.imread('path_to_image')
orig = image.copy()
(H, W) = image.shape[:2]
# set the new width and height and then determine the ratio in change
# for both the width and height
(newW, newH) = (320, 320)
rW = W / float(newW)
rH = H / float(newH)
# resize the image and grab the new image dimensions
image = cv2.resize(image, (newW, newH))
(H, W) = image.shape[:2]
# define the two output layer names for the EAST detector model that
# we are interested -- the first is the output probabilities and the
# second can be used to derive the bounding box coordinates of text
layerNames = [
"feature_fusion/Conv_7/Sigmoid",
"feature_fusion/concat_3"]
# load the pre-trained EAST text detector
print("[INFO] loading EAST text detector...")
net = cv2.dnn.readNet('path_containing_frozen_east_text_detection.pb')
# construct a blob from the image and then perform a forward pass of
# the model to obtain the two output layer sets
blob = cv2.dnn.blobFromImage(image, 1.0, (W, H),(123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
(scores, geometry) = net.forward(layerNames)
# grab the number of rows and columns from the scores volume, then
# initialize our set of bounding box rectangles and corresponding
# confidence scores
(numRows, numCols) = scores.shape[2:4]
rects = []
confidences = []
# loop over the number of rows
for y in range(0, numRows):
# extract the scores (probabilities), followed by the geometrical
# data used to derive potential bounding box coordinates that
# surround text
scoresData = scores[0, 0, y]
xData0 = geometry[0, 0, y]
xData1 = geometry[0, 1, y]
xData2 = geometry[0, 2, y]
xData3 = geometry[0, 3, y]
anglesData = geometry[0, 4, y]
for x in range(0, numCols):
# ignore probability values below 0.75
if scoresData[x] < 0.75:
continue
# compute the offset factor as our resulting feature maps will
# be 4x smaller than the input image
(offsetX, offsetY) = (x * 4.0, y * 4.0)
# extract the rotation angle for the prediction and then
# compute the sin and cosine
angle = anglesData[x]
cos = np.cos(angle)
sin = np.sin(angle)
# use the geometry volume to derive the width and height of
# the bounding box
h = xData0[x] + xData2[x]
w = xData1[x] + xData3[x]
# compute both the starting and ending (x, y)-coordinates for
# the text prediction bounding box
endX = int(offsetX + (cos * xData1[x]) + (sin * xData2[x]))
endY = int(offsetY - (sin * xData1[x]) + (cos * xData2[x]))
startX = int(endX - w)
startY = int(endY - h)
# add the bounding box coordinates and probability score to
# our respective lists
rects.append((startX, startY, endX, endY))
confidences.append(scoresData[x])
# apply non-maxima suppression to suppress weak, overlapping bounding
# boxes
boxes = non_max_suppression(np.array(rects), probs=confidences)
# loop over the bounding boxes
for (startX, startY, endX, endY) in boxes:
# scale the bounding box coordinates based on the respective
# ratios
startX = int(startX * rW)
startY = int(startY * rH)
endX = int(endX * rW)
endY = int(endY * rH)
# draw the bounding box on the image
cv2.rectangle(orig, (startX, startY), (endX, endY), (0, 255, 0), 2)
cv2.imwrite('path_to_save', orig)
</code></pre>
<p><em><strong>Result:</strong></em></p>
<p><a href="https://i.stack.imgur.com/MDZIm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MDZIm.jpg" alt="enter image description here" /></a></p>
<p>Although the result is not as expected, it is pretty close</p>
<p><strong>UPDATE:</strong></p>
<p>To crop and save each individual bounding box as an image do the following:</p>
<pre><code># take a copy of the original image
image2 = orig.copy()
for i, (startX, startY, endX, endY) in enumerate(boxes):
startX = int(startX * rW)
startY = int(startY * rH)
endX = int(endX * rW)
endY = int(endY * rH)
cropped = image2[startY:endY, startX:endX]
cv2.imwrite(r'Cropped_result\crop_img_{}.jpg'.format(i), cropped)
</code></pre> | 2022-05-03 15:07:08.913000+00:00 | 2022-05-27 19:21:05.263000+00:00 | 2022-05-27 19:21:05.263000+00:00 | null | 24,385,714 | <p>I have an image and want to detect the text regions in it. </p>
<p>I tried the TiRG_RAW_20110219 project, but the results are not satisfactory. If the input image is <a href="https://imgur.com/yCxOvQS,GD38rCa" rel="noreferrer">http://imgur.com/yCxOvQS,GD38rCa</a> it produces <a href="https://imgur.com/yCxOvQS,GD38rCa#1" rel="noreferrer">http://imgur.com/yCxOvQS,GD38rCa#1</a> as output. </p>
<p>Can anyone suggest some alternative. I wanted this to improve the output of tesseract by sending it only the text region as input.</p> | 2014-06-24 11:41:39.190000+00:00 | 2022-09-07 15:13:35.253000+00:00 | 2020-01-30 00:40:19.517000+00:00 | python|image|opencv|image-processing|python-tesseract | ['https://arxiv.org/abs/1704.03155', 'https://github.com/YCICI/EAST-OpenCv', 'https://github.com/YCICI/EAST-OpenCv/blob/master/text_detection.py', 'https://pyimagesearch.com/2017/11/06/deep-learning-opencvs-blobfromimage-works/', 'https://pyimagesearch.com/2018/08/20/opencv-text-detection-east-text-detector/', 'https://i.stack.imgur.com/MDZIm.jpg'] | 6 |
68,336,557 | <p>It is hard to provide you with a detailed answer without knowing what you are trying to achieve.</p>
<p>In a nutshell, you first need to decide whether you want to apply a <strong>discrete</strong> (DWT) or a <strong>continuous</strong> (CWT) wavelet transform to your time series.</p>
<p>A DWT will allow you to decompose your input data into a set of discrete levels, providing you with information about the frequency content of the signal <em>i.e.</em> determining whether the signal contains high frequency variations or low frequency trends. Think of it as applying several band-pass filters to your input data.</p>
<p>I do not think that you should apply a DWT to your entire time series at once. Since you are working with financial data, maybe decomposing your input signal into 1-day windows and applying a DWT on these subsets would do the trick for you.</p>
<p>In any case, I would suggest:</p>
<ul>
<li>Installing the <code>pywt</code> toolbox and playing with a dummy time series to understand how wavelet decomposition works.</li>
<li>Checking out the abundant literature available about wavelet analysis of financial data. For instance, if you are interested in financial time series forecasting, you might want to read <a href="https://arxiv.org/pdf/1605.07278.pdf" rel="nofollow noreferrer">this paper</a>.</li>
<li>Posting your future questions on the <a href="https://dsp.stackexchange.com/">DSP stack exchange</a>, unless you have a specific coding-related question.</li>
</ul> | 2021-07-11 13:09:10.603000+00:00 | 2022-03-17 13:56:52.410000+00:00 | 2022-03-17 13:56:52.410000+00:00 | null | 68,303,601 | <p>I am trying to use wavelets coefficients as feature for neural networks on a time series data and I am bit confused on usage of the same. Do I need to find the coefficients on entire time series at once, or use a sliding window for finding the same. I mean, will finding coefficients on entire time series for once, include the future data points while determining those coefficients? What should be the approach to go about using Wavelets on a time series data without look ahead bias if any?</p> | 2021-07-08 14:29:23.610000+00:00 | 2022-03-17 13:56:52.410000+00:00 | 2022-03-17 13:56:35.703000+00:00 | python|quantitative-finance|wavelet|pywavelets | ['https://arxiv.org/pdf/1605.07278.pdf', 'https://dsp.stackexchange.com/'] | 2 |
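For the windowed approach suggested above, the mechanics can be illustrated with a hand-rolled single-level Haar transform in pure Python. This is a sketch only — in practice you would use `pywt.dwt` — and the window length of 8 is an arbitrary assumption:

```python
import math

def haar_dwt(window):
    # One level of the Haar DWT: pairwise averages (low-pass trend) and
    # pairwise differences (high-pass detail), each scaled by 1/sqrt(2).
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(window[0::2], window[1::2])]
    detail = [(a - b) * s for a, b in zip(window[0::2], window[1::2])]
    return approx, detail

# Non-overlapping sliding windows: the coefficients for a window use only
# samples inside that window, so no future data leaks into earlier features.
series = [math.sin(0.3 * t) + 0.1 * t for t in range(64)]
window_len = 8
features = []
for start in range(0, len(series), window_len):
    approx, detail = haar_dwt(series[start:start + window_len])
    features.append(approx + detail)

assert len(features) == 64 // window_len
assert all(len(f) == window_len for f in features)
```

Because each feature vector is built only from past, in-window samples, this construction avoids the look-ahead bias that a whole-series decomposition would introduce.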
4,078,788 | <p>This is the original proposal for MinMaxHeaps:</p>
<p><a href="http://www.cs.otago.ac.nz/staffpriv/mike/Papers/MinMaxHeaps/MinMaxHeaps.pdf" rel="noreferrer">http://www.cs.otago.ac.nz/staffpriv/mike/Papers/MinMaxHeaps/MinMaxHeaps.pdf</a></p>
<p>I've implemented a heap from this paper and found it to be very simple. An improvement which I've personally never implemented is a min-max fine heap:</p>
<p><a href="http://arxiv.org/ftp/cs/papers/0007/0007043.pdf" rel="noreferrer">http://arxiv.org/ftp/cs/papers/0007/0007043.pdf</a></p> | 2010-11-02 14:45:05.373000+00:00 | 2010-11-02 14:45:05.373000+00:00 | null | null | 4,077,101 | <p>I searching <strong>minmax heap</strong> algorithm implementation,
I remember some things about this structure: its implementation is on a single heap.
Even levels (floors) in the heap tree are min-colored, and the rest of the nodes are max-colored.
I remember a rough sketch of how it works, but I am searching for a good document about it or some <code>C</code> or <code>C++</code> code snippet. I can't find any useful information through Google; I think it is a not very widespread algorithm.</p>
<p>Greetings and Thanks for helpful answers.</p> | 2010-11-02 11:18:34.307000+00:00 | 2010-11-02 14:45:05.373000+00:00 | null | c++|c|algorithm|heap | ['http://www.cs.otago.ac.nz/staffpriv/mike/Papers/MinMaxHeaps/MinMaxHeaps.pdf', 'http://arxiv.org/ftp/cs/papers/0007/0007043.pdf'] | 2 |
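As a quick illustration of the structure described above — a single array where even depths are min levels and odd depths are max levels — here is a pure-Python checker for the min-max invariant. It is a sketch for experimenting, not an implementation of the heap operations themselves:

```python
def is_min_max_heap(a):
    # Array-based heap rooted at index 0.  The depth of node i is the bit
    # length of i+1 minus one; even depths are min levels, odd depths max.
    def descendants(i):
        for c in (2 * i + 1, 2 * i + 2):
            if c < len(a):
                yield a[c]
                yield from descendants(c)
    for i in range(len(a)):
        depth = (i + 1).bit_length() - 1
        for d in descendants(i):
            if depth % 2 == 0 and a[i] > d:   # min level: <= all descendants
                return False
            if depth % 2 == 1 and a[i] < d:   # max level: >= all descendants
                return False
    return True

#            1            <- min level
#        90      80       <- max level
#      5   10  20  30     <- min level
#    50 60 40 45          <- max level
assert is_min_max_heap([1, 90, 80, 5, 10, 20, 30, 50, 60, 40, 45])
assert not is_min_max_heap([1, 90, 80, 5, 10, 20, 30, 50, 60, 40, 95])
```

The second assertion fails the invariant because 95 exceeds its max-level grandparent 90, exactly the kind of violation the papers' insert/delete routines are designed to repair.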
65,754,453 | <p>The Fourier Transform relies on its kernels being defined with extreme precision at each point, float32, 64, and beyond, which makes most NNs, which are approximators, horrible candidates. It's also not exactly productive to learn what's already been perfected, unless for some feature extraction study.</p>
<p>The FT kernel <em>has</em>, however, been utilized as a mechanism, just like activation functions, in <a href="https://arxiv.org/abs/2010.08895" rel="nofollow noreferrer">Fourier Neural Operator for Parametric Partial Differential Equations</a>, with promising results:</p>
<p><a href="https://i.stack.imgur.com/XJFbB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XJFbB.png" alt="enter image description here" /></a></p>
<p>Regardless, such a design question is better suited for <a href="https://datascience.stackexchange.com/">Data Science</a> or <a href="https://ai.stackexchange.com/">AI</a> networks, and try being more specific with what you seek to accomplish.</p> | 2021-01-16 20:35:38.910000+00:00 | 2021-01-16 20:35:38.910000+00:00 | null | null | 65,753,737 | <p>If I want to make a neural net that can compute the Fourier Transform of any input signal, what should be the architecture, dataset, loss function, etc.</p>
<p>My way would be to make a normal fully connected NN that takes a signal (an array of some size) as input and outputs its Fourier transform (a vector of the same size). The loss function might be a coefficient error that measures how far the output is from the actual Fourier transform (maybe dynamic time warping).
any suggestions would be appreciated.</p> | 2021-01-16 19:23:46.483000+00:00 | 2021-01-16 21:12:19.677000+00:00 | null | python|deep-learning|signal-processing | ['https://arxiv.org/abs/2010.08895', 'https://i.stack.imgur.com/XJFbB.png', 'https://datascience.stackexchange.com/', 'https://ai.stackexchange.com/'] | 4 |
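One way to make the answer's point about precision concrete: the discrete Fourier transform is a fixed linear map, so a single dense layer initialized with exactly the right complex weights computes it with no training at all — there is nothing for an approximator to learn. A small pure-Python sketch:

```python
import cmath

def dft_matrix(n):
    # The n x n DFT matrix: entry (j, k) is exp(-2*pi*i*j*k/n).  A linear
    # layer with these weights implements the DFT exactly, not approximately.
    return [[cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n)]
            for j in range(n)]

def apply(matrix, x):
    # Plain matrix-vector product, i.e. one forward pass of a dense layer.
    return [sum(w * v for w, v in zip(row, x)) for row in matrix]

x = [1.0, 2.0, 0.0, -1.0]
X = apply(dft_matrix(4), x)
# The DC bin is just the sum of the samples.
assert abs(X[0] - sum(x)) < 1e-9
```

This is why learning the transform with a generic network mostly reproduces, imperfectly, a matrix that can simply be written down.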
49,823,872 | <p>In CNNs for images, normalizing per channel is helpful because the convolution weights are shared across spatial locations, so every activation within a channel is produced by the same filter.
The figure below, taken from another paper, shows the axes over which different normalization schemes compute their statistics; it is helpful for understanding this better.</p>
<p><img src="https://i.stack.imgur.com/DLwRc.png" alt="figure from Group Normalization paper" /></p>
<p>Figure taken from</p>
<blockquote>
<p>Wu, Y. and He, K., 2018. Group normalization. arXiv preprint arXiv: 1803.08494.</p>
</blockquote> | 2018-04-13 19:26:20.790000+00:00 | 2022-01-15 07:47:01.413000+00:00 | 2022-01-15 07:47:01.413000+00:00 | null | 45,799,926 | <p>I am wondering, if in Convolutional Neural Networks batch normalization should be applied with respect to every pixel separately, or should I take the mean of pixels with respect to each channel?</p>
<p>I saw that in the description of Tensorflow's <a href="https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization" rel="noreferrer">tf.layers.batch_normalization</a> it is suggested to perform bn with respect to the channels, but if I recall correctly, I have used the other approach with good results.</p> | 2017-08-21 14:40:57.507000+00:00 | 2022-01-15 07:47:01.413000+00:00 | null | machine-learning|computer-vision|convolution|batch-normalization | [] | 0 |
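A sketch of what "per channel" means in practice: one (mean, variance) pair is pooled over the batch and both spatial axes for each channel. This is plain Python for illustration; frameworks such as TensorFlow do this internally along the channel axis:

```python
def batchnorm_per_channel(x, eps=1e-5):
    # x has shape [N][C][H][W] as nested lists; statistics are taken over
    # the batch and spatial dimensions jointly, one pair per channel.
    channels = len(x[0])
    out = [[[[0.0] * len(row) for row in ch] for ch in img] for img in x]
    for c in range(channels):
        vals = [v for img in x for row in img[c] for v in row]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        for n, img in enumerate(x):
            for i, row in enumerate(img[c]):
                for j, v in enumerate(row):
                    out[n][c][i][j] = (v - mean) / (var + eps) ** 0.5
    return out

# Two images, two channels, 2x2 spatial grid.
x = [[[[1.0, 2.0], [3.0, 4.0]], [[10.0, 10.0], [10.0, 12.0]]],
     [[[5.0, 6.0], [7.0, 8.0]], [[10.0, 14.0], [10.0, 10.0]]]]
y = batchnorm_per_channel(x)
for c in range(2):
    vals = [v for img in y for row in img[c] for v in row]
    assert abs(sum(vals) / len(vals)) < 1e-6   # each channel is centred
```

Normalizing per pixel instead would give each spatial location its own statistics, which ignores that the same filter produced all of them.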
56,408,096 | <p>J.F. Williamson, "Random selection of points distributed on curved surfaces", <em>Physics in Medicine & Biology</em> 32(10), 1987, describes a general method of choosing a uniformly random point on a parametric surface. It is an acceptance/rejection method that accepts or rejects each candidate point depending on its stretch factor (norm-of-gradient). To use this method for a parametric surface, several things have to be known about the surface, namely—</p>
<ul>
<li><p><code>x(u, v)</code>, <code>y(u, v)</code> and <code>z(u, v)</code>, which are functions that generate 3-dimensional coordinates from two dimensional coordinates <code>u</code> and <code>v</code>,</p>
</li>
<li><p>The ranges of <code>u</code> and <code>v</code>,</p>
</li>
<li><p><code>g(point)</code>, the norm of the gradient ("stretch factor") at each point on the surface, and</p>
</li>
<li><p><code>gmax</code>, the maximum value of <code>g</code> for the entire surface.</p>
</li>
</ul>
<p>The algorithm is then:</p>
<ul>
<li>Generate a point on the surface, <code>xyz</code>.</li>
<li>If <code>g(xyz) >= RNDU01()*gmax</code>, where <code>RNDU01()</code> is a uniform random variate in [0, 1), accept the point. Otherwise, repeat this process.</li>
</ul>
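Here is a sketch of the acceptance/rejection scheme above for an axis-aligned ellipsoid, using the usual spherical parametrization. The bound <code>gmax = max(ab, ac, bc)</code> is an assumption that follows from the stretch factor being a convex combination of those products:

```python
import math, random

def sample_ellipsoid(a, b, c, rng=random):
    # Acceptance/rejection on the stretch factor g = |r_theta x r_phi|,
    # the norm of the cross product of the parametric partial derivatives.
    gmax = max(a * b, a * c, b * c)          # safe upper bound on g
    while True:
        theta = math.pi * rng.random()        # uniform in parameter space
        phi = 2.0 * math.pi * rng.random()
        st, ct = math.sin(theta), math.cos(theta)
        g = st * math.sqrt((b * c * st * math.cos(phi)) ** 2 +
                           (a * c * st * math.sin(phi)) ** 2 +
                           (a * b * ct) ** 2)
        if rng.random() * gmax < g:           # accept proportionally to g
            return (a * st * math.cos(phi), b * st * math.sin(phi), c * ct)

random.seed(7)
points = [sample_ellipsoid(3.0, 2.0, 1.0) for _ in range(300)]
assert all(abs((x / 3.0) ** 2 + (y / 2.0) ** 2 + z ** 2 - 1.0) < 1e-9
           for x, y, z in points)
```

For a sphere (a = b = c) the stretch factor reduces to the familiar sin(theta) weighting, so this degenerates to the classic uniform-sphere sampler.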
<p>Chen and Glotzer (2007) apply the method to the surface of a prolate spheroid (one form of ellipsoid) in "Simulation studies of a phenomenological model for elongated virus capsid formation", <em>Physical Review E</em> 75(5), 051504 (<a href="https://arxiv.org/abs/cond-mat/0701125" rel="nofollow noreferrer">preprint</a>).</p> | 2019-06-01 15:38:24.660000+00:00 | 2022-05-30 11:19:14.717000+00:00 | 2022-05-30 11:19:14.717000+00:00 | null | 56,404,399 | <p>I am trying to sample around 1000 points from a 3-D ellipsoid, uniformly. Is there some way to code it such that we can get points starting from the equation of the ellipsoid?</p>
<p>I want points on the surface of the ellipsoid.</p> | 2019-06-01 06:29:28.133000+00:00 | 2022-06-13 11:19:42.477000+00:00 | 2019-06-02 07:05:31.930000+00:00 | python|math|random|geometry|ellipse | ['https://arxiv.org/abs/cond-mat/0701125'] | 1 |
42,649,165 | <p>Normally for deep learning this does not have to be the case. Convolutional Neural Networks do not depend on the image size and filters can be applied on all image sizes.</p>
<p>Still, many frameworks and practically all papers use the same image sizes for training. In <a href="https://arxiv.org/pdf/1409.1556/" rel="noreferrer">https://arxiv.org/pdf/1409.1556/</a> they used different sizes for evaluating the network. To achieve this you can use either resizing or crops or a combination of both. Keep in mind that changing the aspect ratio is almost always a bad idea.</p>
<p>To choose a good image size it is important to note that bigger image sizes will normally give you better accuracy. However, all the filters take longer to apply, and the memory requirements rise with the image size. Additionally, larger sizes yield diminishing improvements. I normally use 224x224, because it is often divisible by 2 and ImageNet models use it too.</p>
<p>Finally, the image size does not have to be square, but it is most of the time a good idea, because CNNs often cut the image size in half and often end up at something like 4x4 or 6x6. Doing this with a non-square starting size will give you an awkward ending size like 4x2 or 6x3.</p>
<p>Do all the images have to have the same size? And does it have to be a square?</p>
<p>If so, what would be the ideal size or how to choose it?</p> | 2017-03-07 12:27:49.277000+00:00 | 2021-09-05 18:09:02.987000+00:00 | 2021-09-05 18:09:02.987000+00:00 | deep-learning | ['https://arxiv.org/pdf/1409.1556/'] | 1 |
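The divisibility point in the answer can be checked with a tiny helper (a sketch only; real networks may also involve padding and odd strides):

```python
def downsampled_sizes(height, width, halvings):
    # Each pooling / stride-2 stage cuts both spatial dimensions in half.
    sizes = [(height, width)]
    for _ in range(halvings):
        height, width = height // 2, width // 2
        sizes.append((height, width))
    return sizes

# 224 survives five halvings cleanly; a non-square start ends up lopsided.
assert downsampled_sizes(224, 224, 5)[-1] == (7, 7)
assert downsampled_sizes(224, 112, 5)[-1] == (7, 3)
```

This is why sizes like 224x224 or 320x320 (multiples of high powers of 2) are popular defaults.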
59,731,968 | <p><a href="https://i.stack.imgur.com/amG1p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/amG1p.png" alt="enter image description here"></a><br>
Multiple MSDU packets can be combined into an AMSDU. This AMSDU unit serves as one packet as passed down by higher layers to the MAC. The CRC is calculated on each of these AMSDUs. So if any single AMSDU transmission fails, the entire AMSDU has to be retransmitted. Thus the effective packet error rate (PER) for a considered bit error rate (BER) is determined by the size of the AMSDU. </p>
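The PER-versus-size relation mentioned above can be sketched numerically. The BER value and frame sizes below are illustrative assumptions, and independent bit errors are a simplification:

```python
def packet_error_rate(ber, frame_bytes):
    # Probability that at least one bit in the frame is corrupted,
    # assuming independent bit errors (a simplifying assumption).
    return 1.0 - (1.0 - ber) ** (8 * frame_bytes)

# A large A-MSDU fails far more often than a small MSDU at the same BER,
# and the whole aggregate must then be retransmitted.
small = packet_error_rate(1e-5, 1500)
large = packet_error_rate(1e-5, 11000)
assert small < large
```

This is exactly why retrying at the A-MSDU granularity inside an A-MPDU, rather than retrying the whole aggregate, keeps the effective loss manageable.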
<p>However, if the protocol supported only the AMSDU layer of aggregation, the benefits of aggregation achieved by aggregating multiple MAC layer units would have been limited by the achievable PER for the aggregate size. Instead, the WiFi protocol allows the sender to aggregate multiple AMSDU (also referred to as MPDUs) units into a single AMPDU while allowing CRC checks and retries for each AMSDU within an AMPDU. Thus the WiFi protocol allows us to achieve higher MAC efficiency by transmitting AMPDUs while limiting PERs and re-transmissions at the AMSDU level. </p>
<p>Including AMSDUs as a part of AMPDUs is more efficient because this results in: </p>
<ul>
<li>Fewer CRC calculations for smaller packet sizes at sender and receiver – once
per AMSDU as opposed to once every MSDU</li>
<li>Fewer MAC headers (MSDU headers).</li>
</ul>
<p>For more information, you can read <a href="https://arxiv.org/pdf/1704.07015.pdf" rel="nofollow noreferrer">A Brief Tutorial on WiFi
Aggregation Support</a> and here <a href="https://dot11ap.wordpress.com/a-mpdu-vs-a-msdu/" rel="nofollow noreferrer">A-MPDU vs. A-MSDU</a></p> | 2020-01-14 10:31:33.033000+00:00 | 2020-01-14 10:31:33.033000+00:00 | null | null | 59,721,172 | <p>Can someone please tell me why the need of two aggregations in the 11n.
If there is no A-MPDU in 11n, what will be the impact? </p>
<p>Note: in 11AC, only A-MPDU is there.</p> | 2020-01-13 17:18:52.200000+00:00 | 2020-01-14 10:31:33.033000+00:00 | null | authentication|client|wireless|wifi | ['https://i.stack.imgur.com/amG1p.png', 'https://arxiv.org/pdf/1704.07015.pdf', 'https://dot11ap.wordpress.com/a-mpdu-vs-a-msdu/'] | 3 |
41,416,583 | <p>You could try if <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">this paper</a> helps.
They say that if you don't use normalization, you need to train "more carefully", meaning using a lower learning rate. </p>
<p>Skimming the first pages, I could imagine it works like this: </p>
<p>For some nonlinearities, there's a 'good input value range', and batch norm brings values into that range. High input values are bad and lead to saturation (little slope in the function and "vanishing gradients").</p>
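The saturation argument can be made concrete with the sigmoid's derivative — a small sketch illustrating why large, unnormalized activations receive almost no gradient:

```python
import math

def sigmoid_grad(x):
    # Derivative of the logistic sigmoid: s(x) * (1 - s(x)).
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

# Near zero the sigmoid has a useful slope; far from zero it saturates,
# so large unnormalized activations get almost no gradient signal.
assert sigmoid_grad(0.0) == 0.25
assert sigmoid_grad(10.0) < 1e-4
```

With inputs kept near zero (as batch norm does), the gradient stays orders of magnitude larger, which is what tolerates a higher learning rate.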
<p>So, if you don't normalize, you need to make smaller steps - a lower learning rate - to avoid 'jumping' into weights that lead to high values within the net. And also be more careful how you initialize weights. I guess if you use ReLus, that's not so much of a problem. But please correct me if someone else has had different experiences with ReLus.</p> | 2017-01-01 15:33:46.370000+00:00 | 2017-01-01 15:33:46.370000+00:00 | null | null | 39,934,549 | <p>I saw the following code under caffe framework. The whole code is trying to write caffe <code>train_val.prototxt</code> and <code>solver.prototxt</code>.</p>
<pre><code># Use different initial learning rate.
if use_batchnorm:
base_lr = 0.0004
else:
base_lr = 0.00004
</code></pre>
<p>Why is the base learning rate different?</p> | 2016-10-08 16:13:48.887000+00:00 | 2017-01-01 18:38:41.427000+00:00 | 2017-01-01 18:38:41.427000+00:00 | neural-network|normalization|caffe | ['https://arxiv.org/abs/1502.03167'] | 1 |
72,552,288 | <p>This is not a full answer, but is what <a href="https://beta.openai.com/playground?model=code-davinci-002" rel="nofollow noreferrer">OpenAI's Davinci coding system</a> was able to come up with in a few tries. (It's free to use right now!) Some modification to this approach should actually be correct.</p>
<pre><code>'''
As input, I have:
A master list of strings (in this case, the names of API calls) that contains every possible valid string in the input list.
An input list of strings, each of which is guaranteed to exist in the master list, but may not contain every element of the master list.
As output, I want:
Process the input list, using * as a wildcard, with as few elements in the output list as possible, to list every element of the input without "incorrectly" pulling in additional items.
Note: A wildcard can match zero or more characters.
What approach can I take to implementing this somewhat efficiently on the CPU in Python? I'm going to be handling master_list and input_list sizes of O(50), so even if there is a lot of computational complexity, Python can probably chew through it given the smallish sizes of the input lists (I'm hoping).
Note: before invoking the function that would be the answer to this question, I'm going to call .lower() on all input strings, both in the master list and the input list, so the solution doesn't need to think about casing.
Example
master_list = [ "GetFoo", "GetBar", "GetBaz",
"PutFoo", "PutBar", "PutBaz", "PutFooThing", "PutFooOtherThing",
"ViewFoo", "ViewBar", "ViewBarThing", "ViewBaz" ]
input_list = [ "GetBar", "GetBaz", "PutFooOtherThing", "ViewBar", "ViewBarThing" ]
# Do processing here ...
#This is what I expect as output.
correct_output_list = [ "GetB*", "PutFooO*", "ViewBar*" ]
wrong_output_list = [ "Get*", "PutFooOtherThing", "ViewB*" ]
'''
# Solution
def process_input(master_list, input_list):
# Create a dictionary of all the possible prefixes, and the
# list of strings in the master list that begin with that prefix.
prefix_dict = {}
for string in master_list:
for i in range(len(string)):
prefix = string[:i]
if prefix not in prefix_dict:
prefix_dict[prefix] = []
prefix_dict[prefix].append(string)
# Sort the input list by length, longest first.
input_list = sorted(input_list, key=len, reverse=True)
# Initialize our output list.
output_list = []
# Iterate over the input list.
for string in input_list:
# Iterate over the range of possible prefixes.
for i in range(len(string)):
prefix = string[:i]
# If the prefix is a key in the dictionary, and if the
# current string is the only string in the list of strings
# that begin with that prefix, then add the prefix to the
# output list, and remove the prefix from the dictionary.
if prefix in prefix_dict and len(prefix_dict[prefix]) == 1:
output_list.append(prefix + '*')
del prefix_dict[prefix]
break
# Return the output list.
return output_list
# Test
master_list = [ "GetFoo", "GetBar", "GetBaz",
"PutFoo", "PutBar", "PutBaz", "PutFooThing", "PutFooOtherThing",
"ViewFoo", "ViewBar", "ViewBarThing", "ViewBaz" ]
input_list = [ "GetBar", "GetBaz", "PutFooOtherThing", "ViewBar", "ViewBarThing" ]
print process_input(master_list, input_list)
</code></pre>
<p>Completion started after the <code>def process_input(master_list, input_list):</code> line, and was done with a temperature of 0.46 (no particular reason for that, just dragged the slider up a bit).</p>
<p>If you fix the final print line to be Python3-compliant, the output from this code is <code>['PutFooO*', 'ViewBar*']</code> which is not perfect, but should at least provide a basis for something that works.</p>
<p><strong>EDIT:
After actually looking at the above code, it seems to be really bad for doing what you actually want.</strong> (As you note, you had to significantly modify the code.)</p>
<p>I also have not looked at the following code, but the explanation of how to solve the problem may be helpful. I followed a practice from <a href="https://arxiv.org/abs/2205.11916" rel="nofollow noreferrer">this paper</a>, and also used the new insert function instead of completion.</p>
<p>However, it is worth noting that the following code produces the output <code>['', '', '', '', '']</code>, which is very much not what you're looking for. So this code likely also needs a lot of fixing.</p>
<pre><code>'''
As input, I have:
A master list of strings (in this case, the names of API calls) that contains every possible valid string in the input list.
An input list of strings, each of which is guaranteed to exist in the master list, but may not contain every element of the master list.
As output, I want:
Process the input list, using * as a wildcard, with as few elements in the output list as possible, to list every element of the input without "incorrectly" pulling in additional items.
Note: A wildcard can match zero or more characters.
What approach can I take to implementing this somewhat efficiently on the CPU in Python? I'm going to be handling master_list and input_list sizes of O(50), so even if there is a lot of computational complexity, Python can probably chew through it given the smallish sizes of the input lists (I'm hoping).
Note: before invoking the function that would be the answer to this question, I'm going to call .lower() on all input strings, both in the master list and the input list, so the solution doesn't need to think about casing.
Example
master_list = [ "GetFoo", "GetBar", "GetBaz",
"PutFoo", "PutBar", "PutBaz", "PutFooThing", "PutFooOtherThing",
"ViewFoo", "ViewBar", "ViewBarThing", "ViewBaz" ]
input_list = [ "GetBar", "GetBaz", "PutFooOtherThing", "ViewBar", "ViewBarThing" ]
# Do processing here ...
#This is what I expect as output.
correct_output_list = [ "GetB*", "PutFooO*", "ViewBar*" ]
wrong_output_list = [ "Get*", "PutFooOtherThing", "ViewB*" ]
Solution
Let's think step by step. How do you find the longest matching prefix for two strings? You could iterate over the characters of both strings, comparing the characters at each index. If they match, compare the next character. If they don't match, you've found your longest matching prefix.
Let's write a function to do this.
def longest_common_prefix(a, b):
'''
a, b: two strings
returns: the longest prefix of a, up to the first character where they differ
'''
longest_prefix = ''
for i, (char_a, char_b) in enumerate(zip(a, b)):
if char_a == char_b:
longest_prefix += char_a
else:
return longest_prefix
return longest_prefix
Now, let's apply this to our problem.
We can work through the master list and the input list, comparing each element of the input list to every element of the master list, and keeping track of the longest matching prefix for each input element.
def longest_common_prefixes(master_list, input_list):
'''
master_list: a master list of strings
input_list: a list of strings
returns: a list of the longest matching prefix for each element of the input list
'''
longest_prefixes = []
for i, input_value in enumerate(input_list):
longest_prefix = input_value
for master_value in master_list:
longest_prefix = longest_common_prefix(longest_prefix, master_value)
longest_prefixes.append(longest_prefix)
return longest_prefixes
This gives us a list of the longest matching prefixes for each element of the input list. But how do we get our final output, which is the shortest list of strings that correctly matches all of the input strings?
In this case, we can use a simple greedy approach. We can iterate through the longest_prefixes list, and for each element, we can see if we can eliminate that element without causing any of the other elements to be incorrectly matched. If so, we can safely remove that element from the list, and continue iterating.
def shortest_matching_prefixes(longest_prefixes):
'''
longest_prefixes: a list of the longest matching prefix for each element of the input list
returns: a list of the shortest matching prefixes for all of the input list
'''
shortest_prefixes = longest_prefixes
while True:
# Iterate over all of the longest prefixes
for longest_prefix in longest_prefixes:
# See if there is a longest_prefix that is a suffix of any other longest prefix
# If so, it can be safely removed
if any(longest_prefix.startswith(other_longest_prefix) and longest_prefix != other_longest_prefix
for other_longest_prefix in longest_prefixes):
shortest_prefixes.remove(longest_prefix)
break
else:
# No longest prefix can be safely removed
return shortest_prefixes
Putting these functions together, we have our solution.
def wildcard_matching(master_list, input_list):
'''
master_list: a master list of strings
input_list: a list of strings
returns: a list of the shortest matching prefixes for all of the input list
'''
longest_prefixes = longest_common_prefixes(master_list, input_list)
shortest_prefixes = shortest_matching_prefixes(longest_prefixes)
return shortest_prefixes
'''
Notes on Time Complexity
The longest_common_prefixes function is linear in the number of elements in the master list, and quadratic with respect to the number of elements in the input list. We can follow the same pattern in the longest_prefixes and shortest_prefixes functions, for a total running time of O(m * n^2), where m is the number of elements in the master list, and n is the number of elements in the input list.
The wildcard_matching function should be replaced with a function that can control the size of the input list. This function would be linear in the number of elements in the master list, and would take some constant amount of time to do processing per element of the input list, giving a total running time of O(m * n).
This solution can be further optimized by using a more complex algorithm to find the longest common prefix of two strings, but the time complexity will remain O(m * n). In this case, it might be better to use a suffix tree, which can be constructed in linear time.
'''
</code></pre> | 2022-06-08 21:15:23.437000+00:00 | 2022-06-09 03:46:31.897000+00:00 | 2022-06-09 03:46:31.897000+00:00 | null | 72,552,116 | <p>As input, I have:</p>
<ul>
<li>A <em>master list</em> of strings (in this case, the names of API calls) that contains every possible valid string in the <em>input list</em>.</li>
<li>An <em>input list</em> of strings, each of which is guaranteed to exist in the master list, but may not contain every element of the master list.</li>
</ul>
<p>As output, I want:</p>
<ul>
<li>Process the <em>input list</em>, using <code>*</code> as a wildcard, with as few <em>elements</em> in the output list as possible, to list every element of the input without "incorrectly" pulling in additional items.</li>
<li>Note: A wildcard can match <em>zero or more</em> characters.</li>
</ul>
<p>What approach can I take to implementing this somewhat efficiently on the CPU in Python? I'm going to be handling <code>master_list</code> and <code>input_list</code> sizes of O(50), so even if there is a lot of computational complexity, Python can probably chew through it given the smallish sizes of the input lists (I'm hoping).</p>
<p>Note: before invoking the function that would be the answer to this question, I'm going to call <code>.lower()</code> on <strong>all</strong> input strings, both in the master list and the input list, so the solution doesn't need to think about casing.</p>
<p>Note(2): Before running the answer to this question, I have already sanitized the <code>input_list</code> to verify that every element of it exists within <code>master_list</code>. It is an error to provide input in <code>input_list</code> that does not exist in <code>master_list</code>.</p>
<hr />
<h1>Example</h1>
<pre class="lang-py prettyprint-override"><code>master_list = [ "GetFoo", "GetBar", "GetBaz",
"PutFoo", "PutBar", "PutBaz", "PutFooThing", "PutFooOtherThing",
"ViewFoo", "ViewBar", "ViewBarThing", "ViewBaz" ]
input_list = [ "GetBar", "GetBaz", "PutFooOtherThing", "ViewBar", "ViewBarThing" ]
# Do processing here ...
#This is what I expect as output.
correct_output_list = [ "GetB*", "PutFooO*", "ViewBar*" ]
wrong_output_list = [ "Get*", "PutFooOtherThing", "ViewB*" ]
</code></pre>
<p><code>correct_output_list</code> is correct because:</p>
<ul>
<li><code>"GetB*"</code> refers to <code>GetBar</code> and <code>GetBaz</code> from <code>master_list</code>, and both of these are specified in <code>input_list</code>.</li>
<li><code>"PutFooO*"</code> only refers to <code>PutFooOtherThing</code> in <code>master_list</code>, but that's OK -- we still use a wildcard because this is shorter, and therefore saves some bytes (saving bytes is the goal of this function).</li>
<li><code>"ViewBar*"</code> refers to <code>ViewBar</code> and <code>ViewBarThing</code>, but, crucially, <em>excludes</em> <code>ViewBaz</code>, which is part of <code>master_list</code> but <strong>not</strong> <code>input_list</code>.</li>
</ul>
<p><code>wrong_output_list</code> is wrong because:</p>
<ul>
<li><code>"Get*"</code> incorrectly pulls in <code>GetFoo</code>, which is part of <code>master_list</code> but <strong>not</strong> in <code>input_list</code>.</li>
<li><code>"PutFooOtherThing"</code> doesn't pull in any extra elements of <code>master_list</code> that are absent in <code>input_list</code>, which is "fine" in a sense, but it doesn't take advantage of the opportunistic "compression" of shortening <code>PutFooOtherThing</code> to <code>PutFooO*</code> to save bytes.</li>
<li><code>"ViewB*"</code> is too broad, because it pulls in <code>ViewBaz</code>, which is not specified in <code>input_list</code>, only in <code>master_list</code>.</li>
</ul> | 2022-06-08 20:57:35.790000+00:00 | 2022-06-09 03:46:31.897000+00:00 | null | python|string | ['https://beta.openai.com/playground?model=code-davinci-002', 'https://arxiv.org/abs/2205.11916'] | 2 |
56,819,947 | <p>In ImageMagick command line, you can do that as follows. Suppose you want 8 pages from the PDF.</p>
<p>Input PDF from <a href="http://www.arxiv-sanity.com" rel="nofollow noreferrer">http://www.arxiv-sanity.com</a>:</p>
<pre><code>convert image.pdf[0-7] -thumbnail 140x140 -background white +smush 20 -bordercolor white -border 10 result.jpg
</code></pre>
<p><br>
<a href="https://i.stack.imgur.com/UfW24.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UfW24.jpg" alt="enter image description here"></a></p>
<p>This takes the first 8 pages, makes thumbnails of size 140x140, appends them side-by-side with 20 pixels of white spacing between them, and adds a 10-pixel white border around it all.</p>
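<p>If you prefer to drive this from a script, here is a minimal sketch (Python purely for illustration) that builds the same command for <code>subprocess</code>; actually running it still requires ImageMagick (and Ghostscript for PDF input) to be installed:</p>

```python
import subprocess  # needed only if you actually run the command

def montage_cmd(pdf_path, pages=8, size="140x140", out="result.jpg"):
    # Build the ImageMagick invocation shown above: take the first `pages`
    # pages, thumbnail them, and append them side-by-side with white
    # spacing and a white border.
    return ["convert", f"{pdf_path}[0-{pages - 1}]",
            "-thumbnail", size,
            "-background", "white", "+smush", "20",
            "-bordercolor", "white", "-border", "10", out]

# subprocess.run(montage_cmd("image.pdf"), check=True)
```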
<p>Sorry, I do not know Node.js. But apparently there is a module that integrates ImageMagick. See <a href="https://github.com/yourdeveloper/node-imagemagick" rel="nofollow noreferrer">https://github.com/yourdeveloper/node-imagemagick</a></p> | 2019-06-29 18:32:44.297000+00:00 | 2019-06-29 21:48:36.370000+00:00 | 2019-06-29 21:48:36.370000+00:00 | null | 42,264,950 | <p>I'm currently looking for a way to generate the thumbnail image for a given pdf file, which shows several pages in the same image. The output should look like what is shown on the <a href="http://www.arxiv-sanity.com" rel="nofollow noreferrer">arxiv sanity</a> website. I want to know if there is any <code>npm</code> package which supports this functionality. Thanks.</p> | 2017-02-16 04:26:38.527000+00:00 | 2019-06-29 21:48:36.370000+00:00 | null | node.js|pdf|npm | ['http://www.arxiv-sanity.com', 'https://i.stack.imgur.com/UfW24.jpg', 'https://github.com/yourdeveloper/node-imagemagick'] | 3
71,617,734 | <p>Since there is a 1:1 mapping of package and core energy consumption for workloads that only use core resources [1, Fig. 9b], this is <strong>very</strong> likely.</p>
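<p>You can sanity-check this mapping on your own machine via the kernel's powercap sysfs interface; a minimal Python sketch (zone layout and names vary by driver and platform, and reading <code>energy_uj</code> may require elevated privileges):</p>

```python
from pathlib import Path

def read_rapl_energy_uj(base="/sys/class/powercap"):
    # Return {zone_name: cumulative energy in microjoules} for every RAPL
    # zone the kernel powercap framework exposes under `base`.
    readings = {}
    for zone in sorted(Path(base).glob("intel-rapl*")):
        name_file, energy_file = zone / "name", zone / "energy_uj"
        if name_file.is_file() and energy_file.is_file():
            readings[name_file.read_text().strip()] = int(energy_file.read_text())
    return readings

# Sample twice with a delay and diff the counters to get average power.
```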
<p>[1] <a href="https://arxiv.org/pdf/2108.00808" rel="nofollow noreferrer">https://arxiv.org/pdf/2108.00808</a></p> | 2022-03-25 13:27:47.847000+00:00 | 2022-03-25 13:27:47.847000+00:00 | null | null | 68,572,529 | <p>Intel CPUs provide power monitoring via RAPL for several power domains - PKG, DRAM, PPx, Platform. Many sources describe these power domains and their relations; a nice figure is in <em>Khan, K. et al. “RAPL in Action.” ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS) 3 (2018): 1 - 26.</em>
<a href="https://i.stack.imgur.com/wPt6F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wPt6F.png" alt="enter image description here" /></a></p>
<p>AMD CPUs also have a RAPL interface for power monitoring of (as far as I know) the PKG and Core domains; however, I have not found any source presenting the relations of these power domains. <strong>Does the PKG domain include the Core domains?</strong> One would expect that it does, but it is just an assumption.</p> | 2021-07-29 08:23:33.963000+00:00 | 2022-03-31 07:14:19.923000+00:00 | 2022-03-25 18:43:55.640000+00:00 | x86-64|cpu|amd-processor|energy | ['https://arxiv.org/pdf/2108.00808'] | 1
71,316,687 | <p>Just use a strongly consistent classifier. See <a href="https://arxiv.org/abs/2201.08528" rel="nofollow noreferrer">https://arxiv.org/abs/2201.08528</a></p> | 2022-03-02 02:11:31.900000+00:00 | 2022-03-02 02:11:31.900000+00:00 | null | null | 71,101,448 | <p>I have been working on the case study where data is highly imbalanced. We have been taught we can handle the imbalanced data by either under sampling the majority class or over sampling the minority class.
I wanted to ask if there is any other way/method that can be used to handle imbalanced data?</p>
<p>This question is more on the conceptual side than programming.</p>
<p>For example, I was thinking we could put some weight on the minority class (conceptually) to make the model emphasize identifying patterns in the minority class. I don't know how that can be done, but this concept theoretically should work.</p>
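<p>That weighting idea is standard: for instance, scikit-learn's <code>class_weight='balanced'</code> option uses the heuristic w_c = n_samples / (n_classes * count_c). A pure-Python sketch of that heuristic (the function name is mine):</p>

```python
from collections import Counter

def balanced_class_weights(labels):
    # w_c = n_samples / (n_classes * count_c): rare classes get large
    # weights, so misclassifying them costs more during training.
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

weights = balanced_class_weights([0] * 95 + [1] * 5)
# the minority class ends up weighted 10.0 vs ~0.53 for the majority
```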
<p>feel free to put crazy ideas too.</p> | 2022-02-13 14:11:29.427000+00:00 | 2022-09-13 13:06:58.610000+00:00 | null | machine-learning|data-science|imbalanced-data | ['https://arxiv.org/abs/2201.08528'] | 1 |
28,962,334 | <p>The <a href="http://en.wikipedia.org/wiki/Change-making_problem" rel="nofollow noreferrer">wikipedia link</a> is sparse on details about how to decide if a greedy algorithm such as yours will work. A better reference is linked in this <a href="https://cs.stackexchange.com/questions/6552/when-can-a-greedy-algorithm-solve-the-coin-change-problem">CS StackExchange question</a>. Essentially, if the coin system is <em>canonical</em>, a greedy algorithm will provide an optimal solution. So, is [1, 2, 5, 10, 20] canonical? (using 10s of cents for units, so that the sequence starts at 1)</p>
<p>According to <a href="http://arxiv.org/PS_cache/arxiv/pdf/0809/0809.0400v1.pdf" rel="nofollow noreferrer">this article</a>, a 5-coin system is <em>non</em>-canonical if and only if it satisfies exactly one of the following conditions:</p>
<ul>
<li>[1, c2, c3] is non-canonical (false for [1, 2, 5])</li>
<li>it cannot be written as [1, 2, c3, c3+1, 2*c3] (true for [1, 2, 5, 10, 20])</li>
<li>the greedyAnswerSize((k+1) * c4) > k+1 with k*c4 < c5 < (k+1) * c4; in this case, this would require a k*10 < 20 < (k+1)*10; there is no integer k in that range, so this is false for [1, 2, 5, 10, 20].</li>
</ul>
<p>Therefore, since the greedy algorithm will not provide optimal answers (and even if it did, it would break down once coin counts are limited), you should try dynamic programming or some enlightened backtracking:</p>
<pre><code>import java.util.HashSet;
import java.util.PriorityQueue;
public class Main {
public static class Answer implements Comparable<Answer> {
public static final int coins[] = {1, 2, 5, 10, 20};
private int availableCoins[] = new int[coins.length];
private int totalAvailable;
private int totalRemaining;
private int coinsUsed;
public Answer(int availableCoins[], int totalRemaining) {
for (int i=0; i<coins.length; i++) {
this.availableCoins[i] = availableCoins[i];
totalAvailable += coins[i] * availableCoins[i];
}
this.totalRemaining = totalRemaining;
}
public boolean hasCoin(int coinIndex) {
return availableCoins[coinIndex] > 0;
}
public boolean isPossibleBest(Answer oldBest) {
boolean r = totalRemaining >= 0
&& totalAvailable >= totalRemaining
&& (oldBest == null || oldBest.coinsUsed > coinsUsed);
return r;
}
public boolean isAnswer() {
return totalRemaining == 0;
}
public Answer useCoin(int coinIndex) {
Answer a = new Answer(availableCoins, totalRemaining - coins[coinIndex]);
a.availableCoins[coinIndex]--;
a.totalAvailable = totalAvailable - coins[coinIndex];
a.coinsUsed = coinsUsed+1;
return a;
}
public int getCoinsUsed() {
return coinsUsed;
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder("{");
for (int c : availableCoins) sb.append(c + ",");
sb.setCharAt(sb.length()-1, '}');
return sb.toString();
}
// try to be greedy first
@Override
public int compareTo(Answer a) {
int r = totalRemaining - a.totalRemaining;
return (r==0) ? coinsUsed - a.coinsUsed : r;
}
}
// returns an minimal set of coins to solve
public static int makeChange(int change, int[] availableCoins) {
PriorityQueue<Answer> queue = new PriorityQueue<Answer>();
queue.add(new Answer(availableCoins, change));
HashSet<String> known = new HashSet<String>();
Answer best = null;
int expansions = 0;
while ( ! queue.isEmpty()) {
Answer current = queue.remove();
expansions ++;
String s = current.toString();
if (current.isPossibleBest(best) && ! known.contains(s)) {
known.add(s);
if (current.isAnswer()) {
best = current;
} else {
for (int i=0; i<Answer.coins.length; i++) {
if (current.hasCoin(i)) {
queue.add(current.useCoin(i));
}
}
}
}
}
// debug
System.out.println("After " + expansions + " expansions");
return (best != null) ? best.getCoinsUsed() : -1;
}
public static void main(String[] args) {
for (int i=0; i<100; i++) {
System.out.println("Solving for " + i + ":"
+ makeChange(i, new int[]{100,5,2,5,1}));
}
}
}
</code></pre> | 2015-03-10 11:19:18.917000+00:00 | 2015-03-10 11:19:18.917000+00:00 | 2017-04-13 12:48:30.793000+00:00 | null | 28,959,633 | <p>I am stuck on the coin denomination problem.</p>
<p>I am trying to find the lowest number of coins used to make up $5.70 (or 570 cents). For example, if the coin array is {100,5,2,5,1} (100 x 10c coins, 5 x 20c, 2 x 50c, 5 x $1, and 1 x $2 coin), then the result should be {0,1,1,3,1}
At the moment the coin array will consist of the same denominations ( $2, $1, 50c, 20c, 10c)</p>
<pre><code>public static int[] makeChange(int change, int[] coins) {
// while you have coins of that denomination left and the total
// remaining amount exceeds that denomination, take a coin of that
// denomination (i.e add it to your result array, subtract it from the
// number of available coins, and update the total remainder). –
for(int i= 0; i< coins.length; i++){
while (coins[i] > 0) {
if (coins[i] > 0 & change - 200 >= 0) {
coins[4] = coins[4]--;
change = change - 200;
} else
if (coins[i] > 0 & change - 100 >= 0) {
coins[3] = coins[3]--;
change = change - 100;
} else
if (coins[i] > 0 & change - 50 >= 0) {
coins[2] = coins[2]--;
change = change - 50;
} else
if (coins[i] > 0 & change - 20 >= 0) {
coins[1] = coins[1]--;
change = change - 20;
} else
if (coins[i] > 0 & change - 10 >= 0) {
coins[0] = coins[0]--;
change = change - 10;
}
}
}
return coins;
}
</code></pre>
<p>I am stuck on how to deduct the values from the coins array and return it.</p>
<p>EDIT: New code</p> | 2015-03-10 09:02:29.487000+00:00 | 2015-03-10 11:19:18.917000+00:00 | 2015-03-10 09:22:55.337000+00:00 | java|int | ['http://en.wikipedia.org/wiki/Change-making_problem', 'https://cs.stackexchange.com/questions/6552/when-can-a-greedy-algorithm-solve-the-coin-change-problem', 'http://arxiv.org/PS_cache/arxiv/pdf/0809/0809.0400v1.pdf'] | 3 |
65,944,995 | <p>In fact it does recognize faces; the <code>model</code> parameter specifies which architecture is used, in this case <code>resnet50</code>. See: <a href="https://github.com/rcmalli/keras-vggface#available-models" rel="nofollow noreferrer">https://github.com/rcmalli/keras-vggface#available-models</a>.</p>
<p>Don't know much about <code>resnet50</code>, but it's a residual network, which means the output of an earlier layer is fed forward over a skip connection and added to a later layer's output, a technique that makes very deep networks easier to train (this one has ~50 layers).</p>
<p>More details: <a href="https://arxiv.org/abs/1512.03385" rel="nofollow noreferrer">https://arxiv.org/abs/1512.03385</a></p> | 2021-01-28 20:47:18.423000+00:00 | 2021-01-28 20:47:18.423000+00:00 | null | null | 65,942,486 | <p>I came across this line of code:</p>
<p><code>VGGFace(model='resnet50', include_top=False)</code></p>
<p>Could someone please explain what this means? From my knowledge, VGGFace is a model trained to recognise faces, and then it accepts another model as an argument. So do we have two models? I am confused.</p>
<p>Thanks in advance.</p> | 2021-01-28 17:44:34.110000+00:00 | 2021-01-28 20:47:18.423000+00:00 | null | deep-learning|conv-neural-network|resnet | ['https://github.com/rcmalli/keras-vggface#available-models', 'https://arxiv.org/abs/1512.03385'] | 2 |
33,602,229 | <p>Kalchbrenner et al. (2014) implemented a CNN that accepts variable-length input and pools it down to k elements. If there are fewer than k elements to begin with, the remaining positions are zero-padded. Their experiments with sentence classification show that such networks successfully represent grammatical structures.</p>
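<p>The pooling step can be illustrated with a tiny sketch (my own simplification of k-max pooling; the paper's dynamic variant additionally scales k with the sentence length):</p>

```python
def k_max_pooling(seq, k):
    # Keep the k largest values of `seq`, preserving their original order,
    # and zero-pad on the right when the sequence is shorter than k.
    if len(seq) < k:
        return list(seq) + [0.0] * (k - len(seq))
    top = sorted(sorted(range(len(seq)), key=lambda i: seq[i], reverse=True)[:k])
    return [seq[i] for i in top]

k_max_pooling([1, 5, 2, 9, 3], 3)  # [5, 9, 3]
```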
<p>For details check out:</p>
<ul>
<li>the paper (<a href="http://arxiv.org/pdf/1404.2188v1.pdf" rel="nofollow">http://arxiv.org/pdf/1404.2188v1.pdf</a>)</li>
<li>Matlab code (link on page 2 of the paper)</li>
<li>suggestion for DCNNs for Theano/Keras (<a href="https://github.com/fchollet/keras/issues/373" rel="nofollow">https://github.com/fchollet/keras/issues/373</a>)</li>
</ul> | 2015-11-09 03:45:52.947000+00:00 | 2015-11-09 03:45:52.947000+00:00 | null | null | 29,818,284 | <p>I tried to process the tweets dataset using CNN in Theano. Different from images, the lenght of different tweets (corresponding to the image shape) is variable. So the shape of each tweet is different. However, in Theano, the convolution need that the shape information are constant values. So my question is that is there some way to make the image_shape dynamic?</p> | 2015-04-23 08:49:59.773000+00:00 | 2015-11-09 03:45:52.947000+00:00 | null | nlp|convolution|theano|deep-learning | ['http://arxiv.org/pdf/1404.2188v1.pdf', 'https://github.com/fchollet/keras/issues/373'] | 2 |
36,288,472 | <p>A promising algorithm for measuring the similarity of two lists is the Spearman footrule distance <a href="http://people.revoledu.com/kardi/tutorial/Similarity/FootruleDistance.html" rel="nofollow">http://people.revoledu.com/kardi/tutorial/Similarity/FootruleDistance.html</a>, or, more involved and taking rank order into account, the discounted cumulative gain (DCG) <a href="https://www.kaggle.com/wiki/NormalizedDiscountedCumulativeGain" rel="nofollow">https://www.kaggle.com/wiki/NormalizedDiscountedCumulativeGain</a>.</p>
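<p>For two lists of different lengths, one simple variant restricts the footrule sum to the items the lists share (this restriction is my simplification; the resources linked below discuss more principled extensions):</p>

```python
def footrule_distance(a, b):
    # Spearman footrule over the items both rankings contain:
    # sum of |rank_in_a - rank_in_b| for each shared item.
    pos_a = {x: i for i, x in enumerate(a)}
    pos_b = {x: i for i, x in enumerate(b)}
    return sum(abs(pos_a[x] - pos_b[x]) for x in pos_a.keys() & pos_b.keys())

footrule_distance([1, 2, 3, 4], [1, 3, 2, 4, 5])  # 2: items 2 and 3 are each off by one
```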
<p>A very good resource to that topic is</p>
<p><a href="http://arxiv.org/pdf/1107.2691.pdf" rel="nofollow">http://arxiv.org/pdf/1107.2691.pdf</a></p>
<p>and</p>
<p><a href="http://theory.stanford.edu/~sergei/slides/www10-metrics.pdf" rel="nofollow">http://theory.stanford.edu/~sergei/slides/www10-metrics.pdf</a></p> | 2016-03-27 19:27:02.103000+00:00 | 2016-03-29 07:28:26.230000+00:00 | null | null | 36,288,470 | <p>Assume I have two lists:</p>
<pre><code>L1: [1,2,3,4]
L2: [1,3,2,4,5]
</code></pre>
<p>How can I compute the similarity between these two lists?</p>
<p>If these two lists were of the same length, Spearman and Kendall seem to be the answer, but can this principle also be extended to lists of diverging length?</p> | 2016-03-24 12:00:53.390000+00:00 | 2016-03-29 15:21:22.527000+00:00 | null | statistics|information-theory | ['http://people.revoledu.com/kardi/tutorial/Similarity/FootruleDistance.html', 'https://www.kaggle.com/wiki/NormalizedDiscountedCumulativeGain', 'http://arxiv.org/pdf/1107.2691.pdf', 'http://theory.stanford.edu/~sergei/slides/www10-metrics.pdf'] | 4
23,687,695 | <p>This is an expanded and corrected version of <a href="https://stackoverflow.com/a/18294951/2144669">Subhasis's answer</a>.</p>
<p>Formally, the problem is, given an <em>n</em>-letter alphabet <em>V</em> and two <em>m</em>-letter words, <em>x</em> and <em>y</em>, for which there exists a permutation <em>p</em> such that <em>p</em>(<em>x</em>) = <em>y</em>, determine the least number of swaps (permutations that fix all but two elements) whose composition <em>q</em> satisfies <em>q</em>(<em>x</em>) = <em>y</em>. Assuming that <em>m</em>-letter words are maps from the set {1, ..., <em>m</em>} to <em>V</em> and that <em>p</em> and <em>q</em> are permutations on {1, ..., <em>m</em>}, the action <em>p</em>(<em>x</em>) is defined as the composition <em>p</em> followed by <em>x</em>.</p>
<p>The least number of swaps whose composition is <em>p</em> can be expressed in terms of the cycle decomposition of <em>p</em>. When <em>j</em><sub>1</sub>, ..., <em>j</em><sub><em>k</em></sub> are pairwise distinct in {1, ..., <em>m</em>}, the cycle (<em>j</em><sub>1</sub> ... <em>j</em><sub><em>k</em></sub>) is a permutation that maps <em>j</em><sub><em>i</em></sub> to <em>j</em><sub><em>i</em> + 1</sub> for <em>i</em> in {1, ..., <em>k</em> - 1}, maps <em>j</em><sub><em>k</em></sub> to <em>j</em><sub>1</sub>, and maps every other element to itself. The permutation <em>p</em> is the composition of every distinct cycle (<em>j</em> <em>p</em>(<em>j</em>) <em>p</em>(<em>p</em>(<em>j</em>)) ... <em>j'</em>), where <em>j</em> is arbitrary and <em>p</em>(<em>j</em>') = <em>j</em>. The order of composition does not matter, since each element appears in exactly one of the composed cycles. A <em>k</em>-element cycle (<em>j</em><sub>1</sub> ... <em>j</em><sub><em>k</em></sub>) can be written as the product (<em>j</em><sub>1</sub> <em>j</em><sub><em>k</em></sub>) (<em>j</em><sub>1</sub> <em>j</em><sub><em>k</em> - 1</sub>) ... (<em>j</em><sub>1</sub> <em>j</em><sub>2</sub>) of <em>k</em> - 1 cycles. In general, every permutation can be written as a composition of <em>m</em> swaps minus the number of cycles comprising its cycle decomposition. A straightforward induction proof shows that this is optimal.</p>
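<p>The "<em>m</em> swaps minus the number of cycles" count is easy to check with a short sketch (0-indexed permutations; the function name is mine):</p>

```python
def swaps_needed(p):
    # Minimum number of swaps whose composition equals permutation p:
    # m minus the number of cycles in p's cycle decomposition.
    m, seen, cycles = len(p), [False] * len(p), 0
    for i in range(m):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j]
    return m - cycles

swaps_needed([1, 2, 0])  # 2: one 3-cycle, so 3 - 1 swaps
```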
<p><strong>Now we get to the heart of Subhasis's answer.</strong> Instances of the asker's problem correspond one-to-one with <strong>Eulerian</strong> (for every vertex, in-degree equals out-degree) digraphs <em>G</em> with vertices <em>V</em> and <em>m</em> arcs labeled 1, ..., <em>m</em>. For <em>j</em> in {1, ..., <em>m</em>}, the arc labeled <em>j</em> goes from <em>y</em>(<em>j</em>) to <em>x</em>(<em>j</em>). The problem in terms of <em>G</em> is to determine how many parts a partition of the arcs of <em>G</em> into directed cycles can have. (Since <em>G</em> is Eulerian, such a partition always exists.) This is because the permutations <em>q</em> such that <em>q</em>(<em>x</em>) = <em>y</em> are in one-to-one correspondence with the partitions, as follows. For each cycle (<em>j</em><sub>1</sub> ... <em>j</em><sub><em>k</em></sub>) of <em>q</em>, there is a part whose directed cycle is comprised of the arcs labeled <em>j</em><sub>1</sub>, ..., <em>j</em><sub><em>k</em></sub>.</p>
<p><strong>The problem</strong> with Subhasis's NP-hardness reduction is that arc-disjoint cycle packing on <em>Eulerian</em> digraphs is a special case of arc-disjoint cycle packing on general digraphs, so an NP-hardness result for the latter has no direct implications for the complexity status of the former. In <a href="http://arxiv.org/abs/1402.2137" rel="noreferrer">very recent work</a> (see the citation below), however, it has been shown that, indeed, even the Eulerian special case is NP-hard. Thus, by the correspondence above, the asker's problem is as well.</p>
<p>As Subhasis hints, this problem can be solved in polynomial time when <em>n</em>, the size of the alphabet, is fixed (fixed-parameter tractable). Since there are <em>O</em>(<em>n</em>!) distinguishable cycles when the arcs are unlabeled, we can use dynamic programming on a state space of size <em>O</em>(<em>m</em><sup><em>n</em></sup>), the number of distinguishable subgraphs. In practice, that might be sufficient for (let's say) a binary alphabet, but if I were to try to solve this problem exactly on instances with large alphabets, then I likely would try branch and bound, obtaining bounds by using linear programming with column generation to pack cycles fractionally.</p>
<pre><code>@article{DBLP:journals/corr/GutinJSW14,
author = {Gregory Gutin and
Mark Jones and
Bin Sheng and
Magnus Wahlstr{\"o}m},
title = {Parameterized Directed \$k\$-Chinese Postman Problem and \$k\$
Arc-Disjoint Cycles Problem on Euler Digraphs},
journal = {CoRR},
volume = {abs/1402.2137},
year = {2014},
ee = {http://arxiv.org/abs/1402.2137},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
</code></pre> | 2014-05-15 20:10:08.197000+00:00 | 2014-05-15 20:10:08.197000+00:00 | 2017-05-23 12:01:06.053000+00:00 | null | 18,292,202 | <p>I was looking through a programming question, when the following question suddenly seemed related.</p>
<p>How do you convert a string to another string using as few swaps as possible? The strings are guaranteed to be interconvertible (they have the same set of characters, this is given), <strong>but the characters can be repeated</strong>. I saw web results on the same question, without the characters being repeated though.
Any two characters in the string can be swapped. </p>
<p>For instance: "aabbccdd" can be converted to "ddbbccaa" in two swaps, and "abcc" can be converted to "accb" in one swap.</p>
<p>Thanks! </p> | 2013-08-17 18:45:28.767000+00:00 | 2016-01-24 18:29:30.583000+00:00 | null | string|algorithm|swap | ['https://stackoverflow.com/a/18294951/2144669', 'http://arxiv.org/abs/1402.2137'] | 2 |
44,295,851 | <p>Suffix arrays are the right idea, but there's a non-trivial piece missing, namely, identifying what are known in the literature as "supermaximal repeats". Here's a GitHub repo with working code: <a href="https://github.com/eisenstatdavid/commonsub" rel="nofollow noreferrer">https://github.com/eisenstatdavid/commonsub</a> . Suffix array construction uses the SAIS library, vendored in as a submodule. The supermaximal repeats are found using a corrected version of the pseudocode from <code>findsmaxr</code> in <a href="https://arxiv.org/pdf/1304.0528.pdf" rel="nofollow noreferrer">Efficient repeat finding via suffix arrays
(Becher–Deymonnaz–Heiber)</a>.</p>
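<p>For intuition, the definition can be checked against the question's examples with a brute-force sketch (restricted to length ≥ 2 as the question requires, and far slower than the suffix-array code; names are mine):</p>

```python
def supermaximal_repeats(s):
    # Every substring of length >= 2 that occurs at least twice
    # (overlapping occurrences allowed)...
    repeats = set()
    n = len(s)
    for i in range(n):
        for j in range(i + 2, n + 1):
            if s.find(s[i:j], i + 1) != -1:
                repeats.add(s[i:j])
    # ...kept only if it is not a substring of another repeat.
    return {r for r in repeats
            if not any(r != other and r in other for other in repeats)}

supermaximal_repeats("the boy fell by the bell")  # {'ell', 'the b', 'y '}
```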
<pre><code>static void FindRepeatedStrings(void) {
// findsmaxr from https://arxiv.org/pdf/1304.0528.pdf
printf("[");
bool needComma = false;
int up = -1;
for (int i = 1; i < Len; i++) {
if (LongCommPre[i - 1] < LongCommPre[i]) {
up = i;
continue;
}
if (LongCommPre[i - 1] == LongCommPre[i] || up < 0) continue;
for (int k = up - 1; k < i; k++) {
if (SufArr[k] == 0) continue;
unsigned char c = Buf[SufArr[k] - 1];
if (Set[c] == i) goto skip;
Set[c] = i;
}
if (needComma) {
printf("\n,");
}
printf("\"");
for (int j = 0; j < LongCommPre[up]; j++) {
unsigned char c = Buf[SufArr[up] + j];
if (iscntrl(c)) {
printf("\\u%.4x", c);
} else if (c == '\"' || c == '\\') {
printf("\\%c", c);
} else {
printf("%c", c);
}
}
printf("\"");
needComma = true;
skip:
up = -1;
}
printf("\n]\n");
}
</code></pre>
<p>Here's a sample output on the text of the first paragraph:</p>
<pre><code>Davids-MBP:commonsub eisen$ ./repsub input
["\u000a"
," S"
," as "
," co"
," ide"
," in "
," li"
," n"
," p"
," the "
," us"
," ve"
," w"
,"\""
,"&ndash;"
,"("
,")"
,". "
,"0"
,"He"
,"Suffix array"
,"`"
,"a su"
,"at "
,"code"
,"com"
,"ct"
,"do"
,"e f"
,"ec"
,"ed "
,"ei"
,"ent"
,"ere's a "
,"find"
,"her"
,"https://"
,"ib"
,"ie"
,"ing "
,"ion "
,"is"
,"ith"
,"iv"
,"k"
,"mon"
,"na"
,"no"
,"nst"
,"ons"
,"or"
,"pdf"
,"ri"
,"s are "
,"se"
,"sing"
,"sub"
,"supermaximal repeats"
,"te"
,"ti"
,"tr"
,"ub "
,"uffix arrays"
,"via"
,"y, "
]
</code></pre> | 2017-05-31 22:49:08.840000+00:00 | 2017-05-31 22:49:08.840000+00:00 | null | null | 44,122,262 | <p>I am trying to find patterns that:</p>
<ul>
<li>occur more than once</li>
<li>are more than 1 character long</li>
<li>are not substrings of any other known pattern</li>
</ul>
<p>without knowing any of the patterns that might occur.</p>
<p>For example:</p>
<ul>
<li>The string "the boy fell by the bell" would return <code>'ell', 'the b', 'y '</code>.</li>
<li>The string "the boy fell by the bell, the boy fell by the bell" would return <code>'the boy fell by the bell'</code>.</li>
</ul>
<p>Using double for-loops, it can be brute forced <strong><em>very</em></strong> inefficiently:</p>
<pre><code>ArrayList<String> patternsList = new ArrayList<>();
int length = string.length();
for (int i = 0; i < length; i++) {
int limit = (length - i) / 2;
for (int j = limit; j >= 1; j--) {
int candidateEndIndex = i + j;
String candidate = string.substring(i, candidateEndIndex);
if(candidate.length() <= 1) {
continue;
}
if (string.substring(candidateEndIndex).contains(candidate)) {
boolean notASubpattern = true;
for (String pattern : patternsList) {
if (pattern.contains(candidate)) {
notASubpattern = false;
break;
}
}
if (notASubpattern) {
patternsList.add(candidate);
}
}
}
}
</code></pre>
<p>However, this is incredibly slow when searching large strings with tons of patterns.</p> | 2017-05-22 21:15:20.490000+00:00 | 2017-06-02 06:20:06.890000+00:00 | 2017-05-22 21:16:08.440000+00:00 | java|algorithm|substring | ['https://github.com/eisenstatdavid/commonsub', 'https://arxiv.org/pdf/1304.0528.pdf'] | 2 |
48,485,675 | <p>This question is known as the <strong>direct problem</strong> in the study of <a href="https://en.wikipedia.org/wiki/Geodesy" rel="noreferrer">geodesy</a>. </p>
<p>This is indeed a very popular question and one that is a constant cause of confusion. The reason is that most people are looking for a simple and straight-forward answer. But there is none, because most people asking this question are not supplying enough information, simply because they are not aware that:</p>
<ol>
<li>Earth is not a perfect sphere, since it is flattened/compressed at its poles</li>
<li>Because of (1) earth does not have a constant Radius, <code>R</code>. See <a href="https://en.wikipedia.org/wiki/Earth_ellipsoid" rel="noreferrer">here</a>.</li>
<li>Earth is not perfectly smooth (variations in altitude) etc. </li>
<li>Due to tectonic plate movement, a geographic point's lat/lon position may change by several millimeters (at least), every year. </li>
</ol>
<p>Therefore there are many different assumptions used in the various geometric models that apply differently, depending on your needed accuracy. So to answer the question, you need to consider what <strong>accuracy</strong> you would like your result to have.</p>
<p><strong>Some examples:</strong></p>
<ul>
<li>I'm just looking for an approximate location to the nearest few kilometers for small (<strong><</strong> 100 km) distances in <code>latitudes</code> between <code>0-70 deg</code> <strong>N|S</strong>. (Earth is ~flat model.)</li>
<li>I want an answer that is good anywhere on the globe, but only accurate to about a few meters </li>
<li>I want a super accurate positioning that is valid down to atomic scales of <code>nanometers</code> [nm].</li>
<li>I want answers that are very fast and easy to calculate and not computationally intensive. </li>
</ul>
<p>So you can have many choices in which algorithm to use. In addition, each programming language has its own implementation or "package", multiplied by the number of models and the model developers' specific needs. For all practical purposes here, it pays off to ignore any language apart from <code>javascript</code>, since it very closely resembles pseudo-code by its nature. Thus it can be easily converted to any other language, with minimal changes.</p>
<p>Then the main models are: </p>
<ul>
<li><code>Euclidian/Flat earth model</code>: good for very short distances under ~10 km </li>
<li><code>Spherical model</code>: good for large longitudinal distances, but with small latitudinal difference. Popular model:
<ul>
<li><a href="https://www.movable-type.co.uk/scripts/latlong.html" rel="noreferrer">Haversine</a>: <strong>meter</strong> accuracy on [km] scales, very simple code.</li>
</ul></li>
<li><code>Ellipsoidal models</code>: Most accurate at any lat/lon and distance, but these are still numerical approximations whose precision depends on what accuracy you need. Some popular models are:
<ul>
<li><a href="https://en.wikipedia.org/wiki/Geographical_distance#Lambert%27s_formula_for_long_lines" rel="noreferrer">Lambert</a>: <strong>~10 meter</strong> precision over 1000's of <em>km</em>. </li>
<li><a href="http://www.dtic.mil/get-tr-doc/pdf?AD=AD0627893" rel="noreferrer">Paul D.Thomas</a>: Andoyer-Lambert approximation</li>
<li><a href="https://www.movable-type.co.uk/scripts/latlong-vincenty.html" rel="noreferrer">Vincenty</a>: <strong>millimeter</strong> precision and computational efficiency</li>
<li><a href="http://arxiv.org/pdf/1109.4448.pdf" rel="noreferrer">Kerney</a>: <strong>nanometer</strong> precision</li>
</ul></li>
</ul>
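<p>As a concrete example of the spherical model, the standard destination-point ("direct problem") formula fits in a few lines (a sketch; <code>R</code> here is the conventional mean Earth radius, and all angle conversions go through <code>math.radians</code>/<code>math.degrees</code>):</p>

```python
import math

def destination_point(lat_deg, lon_deg, bearing_deg, distance_km, R=6371.0):
    # Spherical-model direct problem: start point in degrees, bearing in
    # degrees clockwise from north, distance in km; returns (lat, lon).
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    brng = math.radians(bearing_deg)
    d = distance_km / R  # angular distance in radians

    lat2 = math.asin(math.sin(lat1) * math.cos(d)
                     + math.cos(lat1) * math.sin(d) * math.cos(brng))
    lon2 = lon1 + math.atan2(math.sin(brng) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)
```

<p>A common slip is converting degrees to radians by multiplying by π·180 instead of π/180; using <code>math.radians</code> avoids that entirely.</p>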
<p><strong>References:</strong></p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Reference_ellipsoid" rel="noreferrer">https://en.wikipedia.org/wiki/Reference_ellipsoid</a></li>
<li><a href="https://en.wikipedia.org/wiki/Haversine_formula" rel="noreferrer">https://en.wikipedia.org/wiki/Haversine_formula</a></li>
<li><a href="https://en.wikipedia.org/wiki/Earth_ellipsoid" rel="noreferrer">https://en.wikipedia.org/wiki/Earth_ellipsoid</a></li>
<li><a href="https://en.wikipedia.org/wiki/Geodesics_on_an_ellipsoid" rel="noreferrer">https://en.wikipedia.org/wiki/Geodesics_on_an_ellipsoid</a></li>
<li><a href="https://en.wikipedia.org/wiki/Vincenty%27s_formulae" rel="noreferrer">https://en.wikipedia.org/wiki/Vincenty%27s_formulae</a></li>
<li><a href="https://geographiclib.sourceforge.io/scripts/geod-calc.html" rel="noreferrer">https://geographiclib.sourceforge.io/scripts/geod-calc.html</a></li>
</ul> | 2018-01-28 11:09:50.270000+00:00 | 2018-08-22 16:44:54.097000+00:00 | 2018-08-22 16:44:54.097000+00:00 | null | 7,222,382 | <p>Given an existing point in lat/long, distance in (in KM) and bearing (in degrees converted to radians), I would like to calculate the new lat/long. <a href="http://www.movable-type.co.uk/scripts/latlong.html" rel="noreferrer">This</a> site crops up over and over again, but I just can't get the formula to work for me. </p>
<p>The formulas as taken from the above link are:</p>
<pre><code>lat2 = asin(sin(lat1)*cos(d/R) + cos(lat1)*sin(d/R)*cos(θ))
lon2 = lon1 + atan2(sin(θ)*sin(d/R)*cos(lat1), cos(d/R)−sin(lat1)*sin(lat2))
</code></pre>
<p>The above formulas are for MS Excel, where:</p>
<pre><code>asin = arc sin()
d = distance (in any unit)
R = Radius of the earth (in the same unit as above)
and hence d/r = is the angular distance (in radians)
atan2(a,b) = arc tan(b/a)
θ is the bearing (in radians, clockwise from north);
</code></pre>
<p>Here's the code I've got in Python.</p>
<pre><code>import math
R = 6378.1 #Radius of the Earth
brng = 1.57 #Bearing is 90 degrees converted to radians.
d = 15 #Distance in km
#lat2 52.20444 - the lat result I'm hoping for
#lon2 0.36056 - the long result I'm hoping for.
lat1 = 52.20472 * (math.pi * 180) #Current lat point converted to radians
lon1 = 0.14056 * (math.pi * 180) #Current long point converted to radians
lat2 = math.asin( math.sin(lat1)*math.cos(d/R) +
math.cos(lat1)*math.sin(d/R)*math.cos(brng))
lon2 = lon1 + math.atan2(math.sin(brng)*math.sin(d/R)*math.cos(lat1),
math.cos(d/R)-math.sin(lat1)*math.sin(lat2))
print(lat2)
print(lon2)
</code></pre>
<p>I get </p>
<pre><code>lat2 = 0.472492248844
lon2 = 79.4821662373
</code></pre> | 2011-08-28 16:56:14.227000+00:00 | 2022-08-22 21:38:14.663000+00:00 | 2015-03-03 11:38:37.727000+00:00 | python|gis|distance|latitude-longitude | ['https://en.wikipedia.org/wiki/Geodesy', 'https://en.wikipedia.org/wiki/Earth_ellipsoid', 'https://www.movable-type.co.uk/scripts/latlong.html', 'https://en.wikipedia.org/wiki/Geographical_distance#Lambert%27s_formula_for_long_lines', 'http://www.dtic.mil/get-tr-doc/pdf?AD=AD0627893', 'https://www.movable-type.co.uk/scripts/latlong-vincenty.html', 'http://arxiv.org/pdf/1109.4448.pdf', 'https://en.wikipedia.org/wiki/Reference_ellipsoid', 'https://en.wikipedia.org/wiki/Haversine_formula', 'https://en.wikipedia.org/wiki/Earth_ellipsoid', 'https://en.wikipedia.org/wiki/Geodesics_on_an_ellipsoid', 'https://en.wikipedia.org/wiki/Vincenty%27s_formulae', 'https://geographiclib.sourceforge.io/scripts/geod-calc.html'] | 13 |
<p>There's the method for building a recommendation system - <a href="https://arxiv.org/abs/1205.2618" rel="nofollow noreferrer">Bayesian personalized ranking from implicit feedback</a>. I also wrote <a href="https://medium.com/heyjobs-tech/building-recommendation-system-based-bayesian-personalized-ranking-using-tensorflow-2-1-b814d2704130" rel="nofollow noreferrer">an article</a> on how it can be implemented using TensorFlow.</p>
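<p>For intuition, the core of BPR is a pairwise logistic loss over (user, interacted item, non-interacted item) triples. A minimal pure-Python sketch, assuming pre-computed embedding vectors (all names and numbers here are illustrative, not taken from the paper or the article):</p>

```python
import math

def bpr_loss(user_vec, pos_vec, neg_vec):
    """-log sigmoid(x_ui - x_uj): pushes the seen item i above the unseen item j."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    x_ui = dot(user_vec, pos_vec)   # predicted preference for the interacted item
    x_uj = dot(user_vec, neg_vec)   # predicted preference for a sampled negative
    return -math.log(1.0 / (1.0 + math.exp(x_uj - x_ui)))

u = [0.5, 1.0]
loss_correct = bpr_loss(u, [1.0, 1.0], [0.0, 0.0])  # ranking already right: small loss
loss_wrong = bpr_loss(u, [0.0, 0.0], [1.0, 1.0])    # ranking inverted: large loss
```

<p>Minimizing this loss pushes the score of items the user interacted with above the score of sampled negatives, which is how implicit feedback can be used without ever converting it into explicit ratings.</p>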
<p>There's no "right" answer to the question of how to translate implicit feedback into explicit feedback. The answer will depend on business requirements. If the task is to increase the click rate, you should try to use the clicks. If the task is increasing conversion, you need to work with purchases.</p>
<p>I'm currently in the process of building a recommendation system with implicit data (e.g. clicks, views, purchases), however much of the research I've looked at seems to skip the step of "aggregating implicit data". For example, how do you aggregate multiple clicks and purchases over time into a single user rating (as is required for a standard matrix factorization model)?</p>
<p>I've been experimenting with several Matrix Factorization based methods, including Neural Collaborative Filtering, Deep Factorization Machines, LightFM, and Variational Autoencoders for Collaborative Filtering. None of these papers seem to address the issue of aggregating implicit data. They also do not discuss how to weight different types of user events (e.g. clicks vs purchase) when calculating a score.</p>
<p>For now I've been using a confidence score approach (the conference score corresponds to the count of events) as outlined in this paper: <a href="http://yifanhu.net/PUB/cf.pdf" rel="nofollow noreferrer">http://yifanhu.net/PUB/cf.pdf</a>. However this approach doesn't address incorporating other types of user events (other than clicks), nor does it address negative implicit feedback (e.g. a ton of impressions with zero clicks).</p>
<p>Anyway, I'd love some insight on this topic! Any thoughts at all would be hugely appreciated!</p> | 2020-06-29 18:56:27.737000+00:00 | 2020-06-29 19:43:57.230000+00:00 | null | machine-learning|deep-learning|recommendation-engine|recommendation-system|matrix-factorization | ['https://arxiv.org/abs/1205.2618', 'https://medium.com/heyjobs-tech/building-recommendation-system-based-bayesian-personalized-ranking-using-tensorflow-2-1-b814d2704130'] | 2 |
<p>Due to the high number of parameters it is hard, if not impossible, to reason about the optimization landscape, so any speculations are really just that, speculations.</p>
<p>If you assume that the model got stuck somewhere, that is, that the gradient is getting very small (it's sometimes worth plotting the distribution of the entries of the gradient over time, or at least its magnitude), it is sometimes worth artificially forcing the optimizer to adapt by changing the environment. One popular way to do so is weight decay: for instance, using ordinary weight decay with <code>SGD</code>, or, if you're using <code>Adam</code>, switching to <a href="https://arxiv.org/abs/1711.05101" rel="nofollow noreferrer"><code>AdamW</code></a>. Alternatives based on a similar idea are <a href="https://arxiv.org/abs/1608.03983" rel="nofollow noreferrer">warm restarts</a>.</p>
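<p>To make the warm-restart idea concrete, here is a simplified pure-Python sketch of the cosine-annealing schedule with restarts from the linked paper (a fixed period and made-up learning-rate bounds; the paper also lengthens the period after each restart):</p>

```python
import math

def sgdr_lr(epoch, lr_min=1e-4, lr_max=1e-1, period=50):
    """Cosine-annealed learning rate, reset to lr_max every `period` epochs."""
    t = epoch % period  # position inside the current restart cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / period))
```

<p>The sudden jump back to <code>lr_max</code> at each restart is what can kick the optimizer out of a flat region.</p>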
<p>Finally, it might very well be possible that you have reached the limits of what your model can achieve. A dice score in the neighbourhood of 0.9 is already quite good in many of today's segmentation tasks.</p>
<p>I am training a deep model for MRI segmentation. The models I am using are U-Net++ and UNet3+. However, when plotting the validation and training losses of these models over time, I find that they all end with a sudden drop in loss, and a permanent plateau. Any ideas for what could be causing this plateau? Or any ideas for how I could surpass it?</p>
<p>Here are the plots for the training and validation loss curves, and the corresponding segmentation performance (dice score) on the validation set. The drop in loss occurs at around epoch 80 and is pretty obvious in the graphs.</p>
<p>In regard to the things I've tried:</p>
<ul>
<li>Perhaps a local minima is being found, which is hard to escape, so I tried resuming training at epoch 250 with the learning rate increased by a factor of 10, but the plateau stays the exact same regardless of how many epochs I keep training. I also tried resuming with a reduced LR of factor 10 and 100 and no change either.</li>
<li>Perhaps the model has too many parameters, i.e. the plateau is happening due to over-fitting. So I tried training models that have fewer parameters. This changed the actual loss value (Y-axis value) that the plateau ends up occurring at, but the same general shape of a sudden drop and plateau remains the same. I also tried increasing the parameters (because it was easy to do), and the same problem is observed.</li>
</ul>
<p>Any ideas for what could be causing this plateau? Or any ideas for how I could surpass it?</p>
<p><a href="https://i.stack.imgur.com/dDbhy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dDbhy.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/PDdvV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PDdvV.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/Mh1UC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mh1UC.png" alt="enter image description here" /></a></p> | 2022-02-14 15:28:35.150000+00:00 | 2022-02-14 16:14:08.880000+00:00 | null | machine-learning|deep-learning|pytorch|statistics|evaluation | ['https://arxiv.org/abs/1711.05101', 'https://arxiv.org/abs/1608.03983'] | 2 |
<p>Great question. Detecting multiple objects in the same image is essentially a "segmentation problem". Two nice and popular algorithms are
YOLO (You Only Look Once) and SSD (Single Shot Multibox Detector). I included links to them at the bottom. </p>
<p>I would watch a few videos on how YOLO works, and see if you grasp the idea. Then read the paper on SSD, and see if you get why this algorithm is even faster and more precise. </p>
<p>Both algorithms are single-pass: they only look at the image "once" and predict bounding boxes for the categories they spot. There are more precise algorithms, but they are slower (they first pick many spots they want to look, and then run a classifier on only that spot. The result is that they run this classifier many times per image, which is slow). </p>
<p>As you stated you are a newbie to Tensorflow, you can try this code other people made: <a href="https://github.com/thtrieu/darkflow" rel="noreferrer">https://github.com/thtrieu/darkflow</a> . The very extensive readme shows you how to get started on your own dataset. </p>
<p>Good luck, and let us know if you have other questions, or if these algorithms do not fit your use-case. </p>
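<p>Whichever detector you try, its predicted bounding boxes are typically scored against ground truth with intersection-over-union (IoU). A tiny hedged Python sketch, assuming boxes are given as (x1, y1, x2, y2) corners:</p>

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)
```

<p>A perfect prediction gives 1.0, disjoint boxes give 0.0, and detectors are usually judged by how many predictions exceed some IoU threshold such as 0.5.</p>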
<ul>
<li>YOLO 9000 (<a href="https://pjreddie.com/darknet/yolo/" rel="noreferrer">https://pjreddie.com/darknet/yolo/</a>)</li>
<li>SSD (Single shot multibox detector) (<a href="https://arxiv.org/abs/1512.02325" rel="noreferrer">https://arxiv.org/abs/1512.02325</a>)</li>
</ul>
<p>I am a newbie in TensorFlow.</p>
<p>Currently, I am testing some classification examples ("Convolutional Neural Network") from the TensorFlow website, and they explain how to classify input images into pre-defined classes. The problem is: I can't figure out how to locate multiple objects in the same image. For example, given an input image with a cat and a dog, I want my graph to display in the output that both of them, "a cat and a dog", are in the image.</p>
<p>Previously, I thought I found examples of explicitly defined monads without a transformer, but those examples were incorrect.</p>
<p>The transformer for <code>Either a (z -> a)</code> is <code>m (Either a (z -> m a))</code>, where <code>m</code> is an arbitrary foreign monad. The transformer for <code>(a -> n p) -> n a</code> is <code>(a -> t m p) -> t m a</code>, where <code>t</code> is the monad transformer for the monad <code>n</code>.</p>
<ol>
<li>The free pointed monad.</li>
</ol>
<p>The monad type constructor <code>L</code> for this example is defined by</p>
<pre><code> type L z a = Either a (z -> a)
</code></pre>
<p>The intent of this monad is to embellish the ordinary reader monad <code>z -> a</code> with an explicit <code>pure</code> value (<code>Left x</code>). The ordinary reader monad's <code>pure</code> value is a constant function <code>pure x = _ -> x</code>. However, if we are given a value of type <code>z -> a</code>, we will not be able to determine whether this value is a constant function. With <code>L z a</code>, the <code>pure</code> value is represented explicitly as <code>Left x</code>. Users can now pattern-match on <code>L z a</code> and determine whether a given monadic value is pure or has an effect. Other than that, the monad <code>L z</code> does exactly the same thing as the reader monad.</p>
<p>The monad instance:</p>
<pre><code> instance Monad (L z) where
return x = Left x
(Left x) >>= f = f x
     (Right q) >>= f = Right (join merged) where
join :: (z -> z -> r) -> z -> r
join f x = f x x -- the standard `join` for Reader monad
merged :: z -> z -> r
merged = merge . f . q -- `f . q` is the `fmap` of the Reader monad
merge :: Either a (z -> a) -> z -> a
merge (Left x) _ = x
merge (Right p) z = p z
</code></pre>
<p>This monad <code>L z</code> is a specific case of a more general construction, <code>(Monad m) => Monad (L m)</code> where <code>L m a = Either a (m a)</code>. This construction embellishes a given monad <code>m</code> by adding an explicit <code>pure</code> value (<code>Left x</code>), so that users can now pattern-match on <code>L m</code> to decide whether the value is pure. In all other ways, <code>L m</code> represents the same computational effect as the monad <code>m</code>.</p>
<p>The monad instance for <code>L m</code> is almost the same as for the example above, except the <code>join</code> and <code>fmap</code> of the monad <code>m</code> need to be used, and the helper function <code>merge</code> is defined by</p>
<pre><code> merge :: Either a (m a) -> m a
merge (Left x) = return @m x
merge (Right p) = p
</code></pre>
<p>I checked that the laws of the monad hold for <code>L m</code> with an arbitrary monad <code>m</code>.</p>
<p>This construction gives the free pointed functor on the given monad <code>m</code>. This construction guarantees that the free pointed functor on a monad is also a monad.</p>
<p>The transformer for the free pointed monad is defined like this:</p>
<pre><code> type LT m n a = n (Either a (mT n a))
</code></pre>
<p>where <code>mT</code> is the monad transformer of the monad <code>m</code> (which needs to be known).</p>
<ol start="2">
<li>Another example:</li>
</ol>
<p><code>type S a = (a -> Bool) -> Maybe a</code></p>
<p>This monad appeared in the context of "search monads" <a href="https://lukepalmer.wordpress.com/2010/11/17/searchable-data-types/" rel="nofollow noreferrer">here</a>. The <a href="https://arxiv.org/pdf/1406.2058.pdf" rel="nofollow noreferrer">paper by Jules Hedges</a> also mentions the search monad, and more generally, "selection" monads of the form</p>
<pre><code> type Sq n q a = (a -> n q) -> n a
</code></pre>
<p>for a given monad <code>n</code> and a fixed type <code>q</code>. The search monad above is a particular case of the selection monad with <code>n a = Maybe a</code> and <code>q = ()</code>. The paper by Hedges claims (without proof, but he proved it later using Coq) that <code>Sq</code> is a monad transformer for the monad <code>(a -> q) -> a</code>.</p>
<p>However, the monad <code>(a -> q) -> a</code> has another monad transformer <code>(m a -> q) -> m a</code> of the "composed outside" type. This is related to the property of "rigidity" explored in the question <a href="https://stackoverflow.com/questions/39649497/is-this-property-of-a-functor-stronger-than-a-monad?rq=1">Is this property of a functor stronger than a monad?</a> Namely, <code>(a -> q) -> a</code> is a rigid monad, and all rigid monads have monad transformers of the "composed-outside" type.</p>
<ol start="3">
<li>Generally, transformed monads don't themselves automatically possess a monad transformer. That is, once we take some foreign monad <code>m</code> and apply some monad transformer <code>t</code> to it, we obtain a new monad <code>t m</code>, and this monad doesn't have a transformer: given a new foreign monad <code>n</code>, we don't know how to transform <code>n</code> with the monad <code>t m</code>. If we know the transformer <code>mT</code> for the monad <code>m</code>, we can first transform <code>n</code> with <code>mT</code> and then transform the result with <code>t</code>. But if we don't have a transformer for the monad <code>m</code>, we are stuck: there is no construction that creates a transformer for the monad <code>t m</code> out of the knowledge of <code>t</code> alone and works for arbitrary foreign monads <code>m</code>.</li>
</ol>
<p>However, in practice all explicitly defined monads have explicitly defined transformers, so this problem does not arise.</p>
<ol start="4">
<li>@JamesCandy's answer suggests that <strong>for any monad</strong> (including <code>IO</code>?!), one can write a (general but complicated) type expression that represents the corresponding monad transformer. Namely, you first need to Church-encode your monad type, which makes the type look like a continuation monad, and then define its monad transformer as if for the continuation monad. But I think this is incorrect - it does not give a recipe for producing a monad transformer in general.</li>
</ol>
<p>Taking the Church encoding of a type <code>a</code> means writing down the type</p>
<pre><code> type ca = forall r. (a -> r) -> r
</code></pre>
<p>This type <code>ca</code> is completely isomorphic to <code>a</code> by Yoneda's lemma. So far we have achieved nothing other than made the type a lot more complicated by introducing a quantified type parameter <code>forall r</code>.</p>
<p>Now let's Church-encode a base monad <code>L</code>:</p>
<pre><code> type CL a = forall r. (L a -> r) -> r
</code></pre>
<p>Again, we have achieved nothing so far, since <code>CL a</code> is fully equivalent to <code>L a</code>.</p>
<p>Now pretend for a second that <code>CL a</code> a continuation monad (which it isn't!), and write the monad transformer as if it were a continuation monad transformer, by replacing the result type <code>r</code> through <code>m r</code>:</p>
<pre><code> type TCL m a = forall r. (L a -> m r) -> m r
</code></pre>
<p>This is claimed to be the "Church-encoded monad transformer" for <code>L</code>. But this seems to be incorrect. We need to check the properties:</p>
<ul>
<li><code>TCL m</code> is a lawful monad for any foreign monad <code>m</code> and for any base monad <code>L</code></li>
<li><code>m a -> TCL m a</code> is a lawful monadic morphism</li>
</ul>
<p>The second property holds, but I believe the first property fails, - in other words, <code>TCL m</code> is not a monad for an arbitrary monad <code>m</code>. Perhaps some monads <code>m</code> admit this but others do not. I was not able to find a general monad instance for <code>TCL m</code> corresponding to an arbitrary base monad <code>L</code>.</p>
<p>Another way to argue that <code>TCL m</code> is not in general a monad is to note that <code>forall r. (a -> m r) -> m r</code> is indeed a monad for any type constructor <code>m</code>. Denote this monad by <code>CM</code>. Now, <code>TCL m a = CM (L a)</code>. If <code>TCL m</code> were a monad, it would imply that <code>CM</code> can be composed with any monad <code>L</code> and yields a lawful monad <code>CM (L a)</code>. However, it is highly unlikely that a nontrivial monad <code>CM</code> (in particular, one that is not equivalent to <code>Reader</code>) will compose with all monads <code>L</code>. Monads usually do not compose without stringent further constraints.</p>
<p>A specific example where this does not work is for reader monads. Consider <code>L a = r -> a</code> and <code>m a = s -> a</code> where <code>r</code> and <code>s</code> are some fixed types. Now, we would like to consider the "Church-encoded monad transformer" <code>forall t. (L a -> m t) -> m t</code>. We can simplify this type expression using the Yoneda lemma,</p>
<pre><code> forall t. (x -> t) -> Q t = Q x
</code></pre>
<p>(for any functor <code>Q</code>) and obtain</p>
<pre><code> forall t. (L a -> s -> t) -> s -> t
= forall t. ((L a, s) -> t) -> s -> t
= s -> (L a, s)
= s -> (r -> a, s)
</code></pre>
<p>So this is the type expression for <code>TCL m a</code> in this case. If <code>TCL</code> were a monad transformer then <code>P a = s -> (r -> a, s)</code> would be a monad. But one can check explicitly that this <code>P</code> is actually not a monad (one cannot implement <code>return</code> and <code>bind</code> that satisfy the laws).</p>
<p>Even if this worked (i.e. <strong>assuming that I made a mistake in claiming that <code>TCL m</code> is in general not a monad</strong>), this construction has certain disadvantages:</p>
<ul>
<li>It is not functorial (i.e. not covariant) with respect to the foreign monad <code>m</code>, so we cannot do things like interpret a transformed free monad into another monad, or merge two monad transformers as explained here <a href="https://stackoverflow.com/questions/18364808/is-there-a-principled-way-to-compose-two-monad-transformers-if-they-are-of-diffe">Is there a principled way to compose two monad transformers if they are of different type, but their underlying monad is of the same type?</a></li>
<li>The presence of a <code>forall r</code> makes the type quite complicated to reason about and may lead to performance degradation (see the "Church encoding considered harmful" paper) and stack overflows (since Church encoding is usually not stack-safe)</li>
<li>The Church-encoded monad transformer for an identity base monad (<code>L = Id</code>) does not yield the unmodified foreign monad: <code>T m a = forall r. (a -> m r) -> m r</code> and this is not the same as <code>m a</code>. In fact it's quite difficult to figure out what that monad is, given a monad <code>m</code>.</li>
</ul>
<p>As an example showing why <code>forall r</code> makes reasoning complicated, consider the foreign monad <code>m a = Maybe a</code> and try to understand what the type <code>forall r. (a -> Maybe r) -> Maybe r</code> actually means. I was not able to simplify this type or to find a good explanation about what this type does, i.e. what kind of "effect" it represents (since it's a monad, it must represent some kind of "effect") and how one would use such a type.</p>
<ul>
<li>The Church-encoded monad transformer is not equivalent to the standard well-known monad transformers such as <code>ReaderT</code>, <code>WriterT</code>, <code>EitherT</code>, <code>StateT</code> and so on.</li>
</ul>
<p>It is not clear how many other monad transformers exist and in what cases one would use one or another transformer.</p>
<ol start="5">
<li>One of the questions in the post is to find an explicit example of a monad <code>m</code> that has two transformers <code>t1</code> and <code>t2</code> such that for some foreign monad <code>n</code>, the monads <code>t1 n</code> and <code>t2 n</code> are not equivalent.</li>
</ol>
<p>I believe that the <code>Search</code> monad provides such an example.</p>
<pre><code> type Search a = (a -> p) -> a
</code></pre>
<p>where <code>p</code> is a fixed type.</p>
<p>The transformers are</p>
<pre><code> type SearchT1 n a = (a -> n p) -> n a
type SearchT2 n a = (n a -> p) -> n a
</code></pre>
<p>I checked that both <code>SearchT1 n</code> and <code>SearchT2 n</code> are lawful monads for any monad <code>n</code>. We have liftings <code>n a -> SearchT1 n a</code> and <code>n a -> SearchT2 n a</code> that work by returning constant functions (just return <code>n a</code> as given, ignoring the argument). We have <code>SearchT1 Identity</code> and <code>SearchT2 Identity</code> obviously equivalent to <code>Search</code>.</p>
<p>The big difference between <code>SearchT1</code> and <code>SearchT2</code> is that <code>SearchT1</code> is not functorial in <code>n</code>, while <code>SearchT2</code> is. This may have implications for "running" ("interpreting") the transformed monad, since normally we would like to be able to lift an interpreter <code>n a -> n' a</code> into a "runner" <code>SearchT n a -> SearchT n' a</code>. This is possibly only with <code>SearchT2</code>.</p>
<p>A similar deficiency is present in the standard monad transformers for the continuation monad and the codensity monad: they are not functorial in the foreign monad.</p>
<p>So far, every monad (that can be represented as a data type) that I have encountered had a corresponding monad transformer, or could have one. Is there such a monad that can't have one? Or <strong>do all monads have a corresponding transformer?</strong></p>
<p>By a <em>transformer <code>t</code> corresponding to monad <code>m</code></em> I mean that <code>t Identity</code> is isomorphic to <code>m</code>. And of course that it satisfies the monad transformer laws and that <code>t n</code> is a monad for any monad <code>n</code>.</p>
<p>I'd like to see either a proof (ideally a constructive one) that every monad has one, or an example of a particular monad that doesn't have one (with a proof). I'm interested in both more Haskell-oriented answers, as well as (category) theoretical ones.</p>
<p>As a follow-up question, is there a monad <code>m</code> that has two distinct transformers <code>t1</code> and <code>t2</code>? That is, <code>t1 Identity</code> is isomorphic to <code>t2 Identity</code> and to <code>m</code>, but there is a monad <code>n</code> such that <code>t1 n</code> is not isomorphic to <code>t2 n</code>.</p>
<p>(<code>IO</code> and <code>ST</code> have a special semantics so I don't take them into account here and let's disregard them completely. Let's focus only on "pure" monads that can be constructed using data types.)</p> | 2014-07-01 17:12:17.690000+00:00 | 2020-04-26 16:27:55.537000+00:00 | 2014-07-01 20:12:10.343000+00:00 | haskell|monads|monad-transformers|category-theory | ['https://lukepalmer.wordpress.com/2010/11/17/searchable-data-types/', 'https://arxiv.org/pdf/1406.2058.pdf', 'https://stackoverflow.com/questions/39649497/is-this-property-of-a-functor-stronger-than-a-monad?rq=1', 'https://stackoverflow.com/questions/18364808/is-there-a-principled-way-to-compose-two-monad-transformers-if-they-are-of-diffe'] | 4 |
<blockquote>
<p>If I've understood correctly, I can transform a 3-Layered NN into a DL NN by adding a RelU after the hidden layer, then repeating the hidden layer + RelU</p>
</blockquote>
<p>Deep learning is pretty much a buzzword. It can mean networks with anything from 3 learning layers up to at least 16, depending on the author and on when you asked what "deep" means. For example, the <a href="https://arxiv.org/abs/1512.03385" rel="nofollow noreferrer">deep residual learning paper</a> set the bar much higher, at several hundreds of layers.</p>
<p>What is important, is that you have at least one hidden layer with a non-linearity. A network with an input layer, a hidden layer and an output layer (thus two learning layers). The non-linearities (logistic function, tanh, ReLU, ...) make neural networks so powerful.</p>
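<p>To see how such a hidden layer plus non-linearity composes, here is a minimal NumPy forward pass (the sizes are illustrative):</p>

```python
import numpy as np

M, N, P = 784, 512, 10                        # input, hidden, and output sizes
x = np.random.rand(1, M)                      # one flattened 28 x 28 image
w1, w2 = np.random.rand(M, N), np.random.rand(N, P)

h = np.maximum(0, x @ w1)                     # (1, M) @ (M, N) -> (1, N), then ReLU
out = h @ w2                                  # (1, N) @ (N, P) -> (1, P)
```

<p>Every extra hidden layer is just another (N, N') matrix inserted between <code>h</code> and the output, followed by another ReLU.</p>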
<p>For the dimensions: those are matrix multiplications / vector additions. Your input is of dimension (1, M) and gets multiplied with a matrix of dimension (M, N). The result is of dimension (1, N). The next matrix to be multiplied has to be of dimension (N, whatever). So you have to make sure the neighboring matrices have dimensions that fit. Just linear algebra 101.</p>
<p>If I've understood correctly, I can transform a <code>3-Layered NN</code> into a <code>DL NN</code> by adding a <code>RelU</code> after <code>the hidden layer</code>, then repeating the <code>hidden layer</code> + <code>RelU</code>.</p>
<p>I'm having trouble visualizing how the dimensionality will work out. I now have the following from a <a href="https://github.com/autojazari/xiaonet/blob/master/XiaoNet.ipynb" rel="nofollow noreferrer">small library</a> I am putting together so I can sink in the concepts</p>
<pre><code>import numpy as np

M = 784 # 28 x 28 pixels
N = 512 # hidden neurons
P = 10 # number of possible classes
w1 = np.random.normal(0.0, pow(10, -0.5), (M, N))
w2 = np.random.normal(0.0, pow(10, -0.5), (N, P))
b1 = np.random.normal(0.0, pow(10, -0.5), (N))
b2 = np.random.normal(0.0, pow(10, -0.5), (P))
x = Input(w1, b1)
h = Hidden(x, w2, b2)
g = Softmax(h)
cost = CrossEntropy(g) # numpy.mean(CrossEntropy) over BATCH SIZE
train_data()
</code></pre>
<p>But I want to go to </p>
<pre><code>x = Input(w1, b1)
h = Hidden(x, w2, b2)
r = ReLU(h)
h2 = Hidden(r, ??, ??) # 1
r2 = ReLU(h2) # 2
.. <repeat 1 and 2>
g = Softmax(h)
cost = CrossEntropy(g) # numpy.mean(CrossEntropy) over BATCH SIZE
train_data()
</code></pre>
<p><a href="http://sdc.autojazari.com/gtsd-nn/" rel="nofollow noreferrer">Related article I am writing about this</a></p>
<p>I believe that this is not possible. This is similar to hard attention as described in this <a href="https://arxiv.org/pdf/1502.03044.pdf" rel="nofollow noreferrer">paper</a>. Hard attention is used in image captioning to allow the model to focus only on a certain part of the image at each step. Hard attention is not differentiable, but there are two ways to get around this:</p>
<p>1- Use Reinforcement Learning (RL): RL is made to train models that makes decisions. Even though, the loss function won't back-propagate any gradients to the softmax used for the decision, you can use RL techniques to optimize the decision. For a simplified example, you can consider the loss as penalty, and send to the node, with the maximum value in the softmax layer, a policy gradient proportional to the penalty in order to decrease the score of the decision if it was bad (results in a high loss).</p>
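<p>A toy pure-Python sketch of that score-function (REINFORCE-style) idea; the penalty value, step size, and starting logits are made up for the example:</p>

```python
import math

def softmax(logits):
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reinforce_step(logits, picked, reward, lr=0.5):
    """Gradient-ascent step on reward * log p(picked); a penalty is a negative reward."""
    p = softmax(logits)
    return [v + lr * reward * ((1.0 if k == picked else 0.0) - p[k])
            for k, v in enumerate(logits)]

logits = [2.0, 0.0, 0.0]
before = softmax(logits)[0]
for _ in range(10):                # operation 0 keeps producing a high loss (penalty)
    logits = reinforce_step(logits, picked=0, reward=-1.0)
after = softmax(logits)[0]
```

<p>After a few penalized steps the probability of re-selecting the bad operation drops, even though no gradient ever flowed through the hard selection itself.</p>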
<p>2- Use something like soft attention: instead of picking only one operation, mix them with weights based on the softmax. so instead of:</p>
<pre><code>output = values * mask
</code></pre>
<p>Use:</p>
<pre><code>output = values * softmax
</code></pre>
<p>Now, the operations will converge down to zero based on how much the softmax will <strong>not</strong> select them. This is easier to train compared to RL but it won't work if you must completely remove the non-selected operations from the final result (set them to zero completely). </p>
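<p>A runnable pure-Python version of this soft mixing (the three operation outputs and the logits are made-up values):</p>

```python
import math

def softmax(logits):
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

values = [3.0, -1.0, 0.5]             # outputs of the three candidate operations
weights = softmax([4.0, 0.0, 0.0])    # the network strongly prefers operation 0
output = sum(w * v for w, v in zip(weights, values))
```

<p>The result stays close to the preferred operation's output, but the other operations still contribute a little, which is what keeps everything differentiable.</p>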
<p>This is another answer that talks about hard and soft attention that you may find helpful: <a href="https://stackoverflow.com/a/35852153/6938290">https://stackoverflow.com/a/35852153/6938290</a></p>
<p>I am trying to produce a mathematical-operation-selection nn model, which is based on a scalar input. The operation is selected based on the softmax result which is produced by the nn. Then this operation has to be applied to the scalar input in order to produce the final output. So far I've come up with applying argmax and onehot on the softmax output in order to produce a mask which is then applied to the concatenated values matrix from all the possible operations to be performed (as shown in the pseudo code below). The issue is that neither argmax nor onehot appears to be differentiable. I am new to this, so any help would be highly appreciated. Thanks in advance.</p>
<pre><code> #perform softmax
logits = tf.matmul(current_input, W) + b
softmax = tf.nn.softmax(logits)
#perform all possible operations on the input
op_1_val = tf_op_1(current_input)
op_2_val = tf_op_2(current_input)
op_3_val = tf_op_3(current_input)
values = tf.concat([op_1_val, op_2_val, op_3_val], 1)
#create a mask
argmax = tf.argmax(softmax, 1)
mask = tf.one_hot(argmax, num_of_operations)
#produce the input, by masking out those operation results which have not been selected
output = values * mask
</code></pre> | 2017-07-03 09:30:29.130000+00:00 | 2017-07-05 15:16:50.737000+00:00 | null | machine-learning|tensorflow|neural-network|recurrent-neural-network|calculus | ['https://arxiv.org/pdf/1502.03044.pdf', 'https://stackoverflow.com/a/35852153/6938290'] | 2 |
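As a quick illustration of the soft-selection idea from the answer above, here is a minimal plain-Python sketch (no TensorFlow; the three candidate operations are made-up stand-ins for `tf_op_1`..`tf_op_3`). The hard `argmax`/`one_hot` mask is replaced by the softmax weights themselves, so the mixing stays differentiable:

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of scalars
    exps = [math.exp(v - max(logits)) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_select(values, logits):
    # Differentiable alternative to argmax + one_hot: mix the candidate
    # operation outputs with the softmax weights instead of a hard mask.
    weights = softmax(logits)
    return sum(v * w for v, w in zip(values, weights))

# Three candidate operation outputs for some scalar input x = 2.0
# (illustrative stand-ins, not the original tf_op_* functions)
x = 2.0
values = [x + 1.0, x * x, -x]
output = soft_select(values, [5.0, 0.0, 0.0])  # strongly favors the first op
</code>```

With a confident softmax (logits `[5, 0, 0]`) the result sits close to the first operation's output, while staying a smooth function of the logits.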
49,360,202 | <p>You could use character-level embeddings (i.e. your input classes are the different characters, so 'a' is class 1, 'b' is class 2 etc..). One-hot encoding the classes and then passing them through an embedding layer will yield unique representations for each character. A string can then be treated as a character-sequence (or equally a vector-sequence), which can be used as an input for either a recurrent or convolutional network. If you feel like reading, this <a href="https://arxiv.org/abs/1508.06615" rel="nofollow noreferrer">paper</a> by Kim et al. will provide you all the necessary theoretical backbone.</p> | 2018-03-19 10:03:46.020000+00:00 | 2018-03-19 10:03:46.020000+00:00 | null | null | 49,358,277 | <p>If you check my <a href="https://github.com/raady07" rel="nofollow noreferrer">github</a>, I have successfully implemented CNN, KNN for classifying signal faults. For that, I have taken the signal with little preprocessing for dimensionality reduction and provided it to the network, using its class information I trained the network, later the trained network is tested with testing samples to determine the class and computed the accuracy.</p>
<p>My question here is how do I input the text information to CNN or any other network. For inputs, I took the Twitter database from Kaggle and selected 2 columns, which contain names and gender information. I have gone through some algorithms which classify gender based on blog data, but I wasn't clear how to apply them to my data (in my case, I just want to classify using the names alone).</p>
<p>In some examples which I understood, I saw a sparse matrix being computed for the text, but for 20,000 samples the sparse matrix is huge to give as input. I have no problem in implementing the CNN architectures (I want to implement them because no features are required) or any other network. I am stuck here: how do I input the data to the network? What kind of conversions can I apply so that the names and gender information can be used to train the network?</p>
<p>If my method of thinking is wrong please provide me suggestion which algorithm is the best way. Deep learning or any other methods are ok!</p> | 2018-03-19 08:08:23.787000+00:00 | 2018-03-19 10:03:46.020000+00:00 | 2020-06-20 09:12:55.060000+00:00 | python|python-3.x|nlp|deep-learning|kaggle | ['https://arxiv.org/abs/1508.06615'] | 1 |
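A minimal sketch of the character-level encoding suggested in the answer: each name becomes a fixed-length sequence of character class ids that could then feed an embedding layer. The helper names and the 0-for-padding convention are illustrative assumptions, not from the original post:

```python
def build_vocab(names):
    # 0 is reserved for padding; every other character gets its own class id
    chars = sorted({c for name in names for c in name.lower()})
    return {c: i + 1 for i, c in enumerate(chars)}

def encode(name, vocab, max_len):
    # unknown characters map to the padding id 0
    ids = [vocab.get(c, 0) for c in name.lower()[:max_len]]
    return ids + [0] * (max_len - len(ids))   # pad to a fixed length

names = ["Anna", "Bob"]
vocab = build_vocab(names)
x = [encode(n, vocab, max_len=6) for n in names]
```

These integer sequences are what an embedding layer (followed by a recurrent or convolutional network) would consume, avoiding a huge sparse matrix.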
42,361,797 | <ol>
<li><p>If you pool twice with <code>stride=2</code>, you reduce the image size by a factor of 2 twice, resulting in a total x4 reduction (sub-sampling) of the image size. Hence, if you start with an image of size 256: 256/4=64.</p></li>
<li><p>How do you choose the kernel size, the number of outputs of each layer, the strides and other design parameters? There's actually no single answer for that; many papers/works approach the same tasks with different settings. AFAIK there are no clear guidelines or an obvious choice of parameters that suits any specific task.<br>
That being said, you can find <a href="https://arxiv.org/abs/1611.00847" rel="nofollow noreferrer">this work</a> surveying some emerging deep-net design patterns.</p></li>
</ol> | 2017-02-21 08:07:40.937000+00:00 | 2017-02-21 08:07:40.937000+00:00 | null | null | 42,361,500 | <p>I am a beginner and learning deep learning with baby steps. I have a question about designing the nets. I see in the papers, there are layers with different inputs/outputs and I do not know how to calculate/design before implementation.
For instance, in this <a href="https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf" rel="nofollow noreferrer">paper</a>, there are some numbers beside the schematic layers' outputs (see the following figure). How are these filter sizes and other parameters specified for a network with a specific image size as input?
<a href="https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V6L42.png" alt="enter image description here"></a></p>
<p>or in another paper, they have the following design:
<a href="https://i.stack.imgur.com/qFjQI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qFjQI.png" alt="enter image description here"></a></p>
<p>and they have mentioned, For a <code>256x256</code> input image,
the total sub-sampling factor of the network is <code>4</code>, resulting in
a <code>64x64xL array</code>, where L is the number of class labels. How is this <code>64x64</code> size obtained?</p>
<p>How can I learn to design the net and calculate the inputs/outputs of layers?</p>
<p>Thank you for any help</p> | 2017-02-21 07:50:24.890000+00:00 | 2017-02-21 08:07:57.537000+00:00 | 2017-02-21 08:07:57.537000+00:00 | neural-network|deep-learning|caffe|pycaffe|matconvnet | ['https://arxiv.org/abs/1611.00847'] | 1 |
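The `64x64` figure follows from the strides alone: each stride-2 layer halves the spatial size, so two of them give a total sub-sampling factor of 4. A tiny helper illustrating the arithmetic (an assumption here: padding is such that only the strides shrink the feature map, as with 'same'-padded convolutions and poolings):

```python
def output_size(input_size, strides):
    # Spatial size after a stack of layers whose only size reduction
    # comes from their strides; the total sub-sampling factor is the
    # product of all layer strides.
    size = input_size
    for s in strides:
        size //= s
    return size

# two stride-2 layers -> total sub-sampling factor 4, so 256 -> 64
size = output_size(256, [2, 2])
```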
62,638,151 | <p>It might be related to the <strong>scale</strong> of your Q-Values. I have the same behavior in my DQN loss, my agent easily solves the environment but the loss is growing through training.</p>
<p>If you look at this part of the DQN algorithm you might get some insights:</p>
<p><a href="https://i.stack.imgur.com/hWktP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hWktP.png" alt="enter image description here" /></a></p>
<ul>
<li>First you will notice that the target <em>y</em> is built upon the <em>max</em> Q-values of the target network. It could induce a constant overestimation of the target Q-value as it is demonstrated in the <a href="https://arxiv.org/pdf/1509.06461.pdf" rel="nofollow noreferrer">Double-DQN paper</a>. Since the target could be constantly overestimated while the prediction is not, a delta will always exist between predictions and targets</li>
<li>Second, this delta will grow in scale as the Q-values grow too. I think it is a normal behavior since your Q function will learn that many states have an important value, so the error at the beginning of training might be way smaller than the error at the end</li>
<li>Third the target Q-network is frozen for some steps while the prediction Q-network changes constantly, that also contributes to this delta</li>
</ul>
<p>Hope this helps, note that it is a purely intuitive and personal explanation, I did not conduct any test to check my hypotheses. And I think that the second point might be the most important here.</p> | 2020-06-29 12:28:06.730000+00:00 | 2020-06-29 12:28:06.730000+00:00 | null | null | 62,586,436 | <p>I wrote a DQN to play the OpenAI gym cart pole game with TensorFlow and tf_agents. The code looks like the following:</p>
<pre class="lang-py prettyprint-override"><code>def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
def train_model(
num_iterations=config.default_num_iterations,
collect_steps_per_iteration=config.default_collect_steps_per_iteration,
replay_buffer_max_length=config.default_replay_buffer_max_length,
batch_size=config.default_batch_size,
learning_rate=config.default_learning_rate,
log_interval=config.default_log_interval,
num_eval_episodes=config.default_num_eval_episodes,
eval_interval=config.default_eval_interval,
checkpoint_saver_directory=config.default_checkpoint_saver_directory,
model_saver_directory=config.default_model_saver_directory,
visualize=False,
static_plot=False,
):
env_name = 'CartPole-v0'
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
fc_layer_params = (100,)
q_net = q_network.QNetwork(
train_env.observation_spec(),
train_env.action_spec(),
fc_layer_params=fc_layer_params)
optimizer = Adam(learning_rate=learning_rate)
train_step_counter = tf.Variable(0)
agent = dqn_agent.DqnAgent(
train_env.time_step_spec(),
train_env.action_spec(),
q_network=q_net,
optimizer=optimizer,
td_errors_loss_fn=common.element_wise_squared_loss,
train_step_counter=train_step_counter)
agent.initialize()
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
iterator = iter(dataset)
agent.train_step_counter.assign(0)
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = []
loss = []
for _ in range(num_iterations):
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns.append(avg_return)
</code></pre>
<p>Although the average reward keeps getting better and eventually reaches 200, the maximum score, the loss is not obviously decreasing.</p>
<p>Here is the loss plot:</p>
<p><a href="https://i.stack.imgur.com/AcIYE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AcIYE.png" alt="loss plot" /></a></p>
<p>Here is the reward plot:</p>
<p><a href="https://i.stack.imgur.com/J01qu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J01qu.png" alt="reward plot" /></a></p>
<p>The good point is that the model is successful, and it can play the game really well. However, I would really love to get some insight into why this is happening where an extremely high loss still yields a good reward.</p> | 2020-06-26 00:58:09.823000+00:00 | 2020-06-29 12:28:06.730000+00:00 | null | python|tensorflow|machine-learning|reinforcement-learning|openai-gym | ['https://i.stack.imgur.com/hWktP.png', 'https://arxiv.org/pdf/1509.06461.pdf'] | 2 |
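The overestimation point raised in the answer above can be made concrete with a toy sketch of the two target computations (plain Python, independent of the tf_agents code; the Q-values are made up):

```python
def dqn_target(reward, gamma, target_q_next):
    # Standard DQN target: bootstrap from the max of the *target* network's
    # Q-values, which tends to overestimate the true value.
    return reward + gamma * max(target_q_next)

def double_dqn_target(reward, gamma, online_q_next, target_q_next):
    # Double DQN: the *online* network selects the action, the target
    # network evaluates it, which reduces the overestimation bias.
    best = max(range(len(online_q_next)), key=lambda a: online_q_next[a])
    return reward + gamma * target_q_next[best]

# Toy Q-values where the target network overestimates action 0:
vanilla = dqn_target(1.0, 0.99, [5.0, 0.5])                    # bootstraps from 5.0
decoupled = double_dqn_target(1.0, 0.99, [1.0, 2.0], [5.0, 0.5])  # evaluates action 1
```

When the target network's spurious maximum (5.0) disagrees with the online network's preferred action, the vanilla target is far larger than the decoupled one — a constant gap of this kind keeps the TD loss high even while the policy itself is good.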
66,076,890 | <ol>
<li>Yes. But if you want to export the log and make a chart out of it, you can try this command:</li>
</ol>
<p><code>./darknet detector train data/obj.data cfg/yolov4.cfg yolov4.weights -map | tee results.log</code></p>
<ol start="2">
<li><p>The <strong>blue curve</strong> is the training loss or the error on the training dataset (specifically the Complete Intersection-over-Union or CIoU loss for YOLOv4). For more details on CIoU loss, <a href="https://arxiv.org/abs/1911.08287" rel="noreferrer">check this paper</a>. The <strong>red line</strong> is the mean average precision at a 50% Intersection-over-Union threshold ([email protected]), which checks whether your model is generalizing well on a <em>never-before-seen dataset</em> or <em>validation set</em>. If you want to understand mAP more, you can refer to this <a href="https://jonathan-hui.medium.com/map-mean-average-precision-for-object-detection-45c121a31173" rel="noreferrer">easy-to-understand blogpost.</a></p>
</li>
<li><p>Are you using a custom dataset? The drop near iteration 1200 might be caused by some problems in your dataset. To check, try these:</p>
<p>(a) Check your dataset - run training with flag <code>-show_imgs</code> i.e. <code>./darknet detector train ... -show_imgs</code> and look at the <code>aug_...jpg</code> images, do you see correct truth bounded boxes?</p>
<p>(b) Check generated files <code>bad.list</code> and <code>bad_label.list</code> if they exist. These files contain the label files that may have problems.</p>
</li>
<li><p>Yes. But if you enable the log file (check my answer - no. 1), then, no.</p>
</li>
</ol> | 2021-02-06 12:25:15.700000+00:00 | 2021-02-06 12:35:15.203000+00:00 | 2021-02-06 12:35:15.203000+00:00 | null | 66,074,481 | <p>I'm still new to "You Only Look Once" object detection algorithm (YOLOv4 to be exact). I have some questions regarding the mAP and loss chart.</p>
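To make the [email protected] threshold from point 2 concrete, here is plain IoU for axis-aligned boxes (an illustrative sketch; this is ordinary IoU, not the CIoU loss used for training):

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); [email protected] counts a detection as correct
    # when its IoU with the ground-truth box is at least 0.5.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```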
<p>I tried to follow the instructions from <a href="https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects" rel="noreferrer">AlexeyAB Darknet</a>, and train my custom object detector using Google Colabs. After the training, it shows the loss and mAP chart as shown below.</p>
<p>Loss and mAP chart:</p>
<p><a href="https://i.stack.imgur.com/ftGtE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ftGtE.png" alt="image" /></a></p>
<p>My questions are:</p>
<ol>
<li>Is there any chart other than this?</li>
<li>Is this loss for training or validation?</li>
<li>Why is there a sudden drop near iteration 1200?</li>
<li>Is the output of the training only that chart and the weight files?</li>
</ol> | 2021-02-06 07:16:52.430000+00:00 | 2021-11-30 07:19:03.190000+00:00 | 2021-02-23 10:59:12.387000+00:00 | deep-learning|computer-vision|object-detection|yolo|darknet | ['https://arxiv.org/abs/1911.08287', 'https://jonathan-hui.medium.com/map-mean-average-precision-for-object-detection-45c121a31173'] | 2 |
38,800,356 | <p>I have to qualify my answer: I have limited knowledge of Parsey McParseface. However, since nobody else has answered, I hope I can add some value. </p>
<p>I think a major problem with most machine learning models is a lack of interpretability. This relates to your first question: "why is this happening?" It's very difficult to tell because this tool is founded on a 'black box' model, namely, a neural network. I will say that it seems extremely surprising, given the <a href="https://research.googleblog.com/2016/05/announcing-syntaxnet-worlds-most.html" rel="nofollow">strong claims made about Parsey</a>, that a common word like 'is' fools it consistently. Is it possible you've made some mistake? It's hard to tell without a code sample.</p>
<p>I'll assume you haven't made a mistake, in which case, I think you could solve this (or mitigate it) by taking advantage of your observation that the word 'is' seems to throw the model off. You could simply check the sentence in question for the word 'is' and use GCloud (or another parser) in that case. Conveniently, once you are using both, you can use GCloud as a fallback for other cases where Parsey seems to fail, should you find them in the future.</p>
<p>As for improving the base model, if you care enough, you could recreate it using the <a href="http://arxiv.org/pdf/1603.06042.pdf" rel="nofollow">original paper</a>, and perhaps optimize the training to suit your situation.</p> | 2016-08-06 03:47:48.647000+00:00 | 2016-08-06 03:47:48.647000+00:00 | null | null | 38,711,353 | <p>It seems to me that Parsey has severe issues with correctly tagging questions and any sentence with "is" in it. </p>
<hr>
<p><strong>Text: Is Barrack Obama from Hawaii?</strong></p>
<p>GCloud Tokens (correct):</p>
<ul>
<li>Is - [root] VERB</li>
<li>Barrack - [nn] NOUN</li>
<li>Obama - [nsubj] NOUN</li>
<li>from - [adp] PREP</li>
<li>Hawaii - [pobj] NOUN</li>
</ul>
<p>Parsey Tokens (wrong):</p>
<ul>
<li>Is - [cop] VERB</li>
<li>Barrack - [nsubj] NOUN</li>
<li>Obama - [root] NOUN</li>
<li>from - [adp] PREP</li>
<li>Hawaii - [pobj] NOUN</li>
</ul>
<p>Parsey decides to make the noun (!) Obama the root, which messes up everything else.</p>
<hr>
<p><strong>Text: My name is Philipp</strong></p>
<p>GCloud Tokens (correct):</p>
<ul>
<li>My [poss] PRON</li>
<li>name [nsubj] NOUN</li>
<li>is [root] VERB</li>
<li>Philipp [attr] NOUN</li>
</ul>
<p>Parsey Tokens (incorrect):</p>
<ul>
<li>My [poss] PRON</li>
<li>name [nsubj] NOUN</li>
<li>is [cop] VERB</li>
<li>Philipp [root] NOUN</li>
</ul>
<p>Again parsey chooses the NOUN as root and struggles with COP.</p>
<hr>
<p>Any ideas why this is happening and how I could fix it?</p>
<p>Thanks,
Phil</p> | 2016-08-02 03:34:46.980000+00:00 | 2016-09-08 23:56:52.287000+00:00 | null | nlp|tensorflow|pos-tagger|dependency-parsing|parsey-mcparseface | ['https://research.googleblog.com/2016/05/announcing-syntaxnet-worlds-most.html', 'http://arxiv.org/pdf/1603.06042.pdf'] | 2 |
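The fallback suggested in the answer above can be sketched as a tiny dispatcher. Everything here is hypothetical: `parsey_parse` and `gcloud_parse` are stand-ins for whatever client code wraps the two parsers, not real APIs:

```python
def parse_with_fallback(sentence, parsey_parse, gcloud_parse):
    # Route sentences containing the observed failure trigger ("is")
    # to the GCloud parser; use Parsey for everything else.
    if "is" in sentence.lower().split():
        return gcloud_parse(sentence)
    return parsey_parse(sentence)
```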
37,565,570 | <p>I assume that your "<code>LATCH_match.cpp</code> in <code>opencv_3.1.0</code>" is
<a href="https://github.com/Itseez/opencv/blob/3.1.0/samples/cpp/tutorial_code/xfeatures2D/LATCH_match.cpp" rel="nofollow noreferrer">https://github.com/Itseez/opencv/blob/3.1.0/samples/cpp/tutorial_code/xfeatures2D/LATCH_match.cpp</a></p>
<p>In that file, you <a href="https://github.com/Itseez/opencv/blob/3.1.0/samples/cpp/tutorial_code/xfeatures2D/LATCH_match.cpp#L13" rel="nofollow noreferrer">find</a>:</p>
<pre><code>// If you find this code useful, please add a reference to the following paper in your work:
// Gil Levi and Tal Hassner, "LATCH: Learned Arrangements of Three Patch Codes", arXiv preprint arXiv:1501.03719, 15 Jan. 2015
</code></pre>
<p>And so, looking at <a href="http://arxiv.org/pdf/1501.03719v1.pdf" rel="nofollow noreferrer">http://arxiv.org/pdf/1501.03719v1.pdf</a> you will find</p>
<blockquote>
<p>For each set, we compare the first image against each of the remaining
five and check for correspondences. Performance is measured using the
code from [16, 17]<sup>1</sup> , which computes recall and 1-precision
using <strong>known ground truth homographies between the images</strong>.</p>
</blockquote>
<p>I think that the image <code>../data/graf1.png</code> is <a href="https://github.com/Itseez/opencv/blob/3.1.0/samples/data/graf1.png" rel="nofollow noreferrer">https://github.com/Itseez/opencv/blob/3.1.0/samples/data/graf1.png</a> that I show here:</p>
<p><a href="https://i.stack.imgur.com/ZpO0H.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZpO0H.jpg" alt="enter image description here"></a></p>
<p>According to the comment <a href="https://stackoverflow.com/questions/37564541/homography-matrix-in-opencv/37565570?noredirect=1#comment62624557_37565570">Homography matrix in Opencv?</a> by <a href="https://stackoverflow.com/users/6055233/catree">Catree</a> the original dataset is at <a href="http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/graf.tar.gz" rel="nofollow noreferrer">http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/graf.tar.gz</a> where it is said that</p>
<blockquote>
<p>Homographies between image pairs included.</p>
</blockquote>
<p>So I think that the homography stored in file <code>../data/H1to3p.xml</code> is the homography between image 1 and image 3.</p> | 2016-06-01 10:05:32.407000+00:00 | 2016-06-01 13:19:10.477000+00:00 | 2017-05-23 12:15:54.783000+00:00 | null | 37,564,541 | <p>In <code>LATCH_match.cpp</code> in <code>opencv_3.1.0</code> the homography matrix is defined and used as:</p>
<pre><code>Mat homography;
FileStorage fs("../data/H1to3p.xml", FileStorage::READ);
...
fs.getFirstTopLevelNode() >> homography;
...
Mat col = Mat::ones(3, 1, CV_64F);
col.at<double>(0) = matched1[i].pt.x;
col.at<double>(1) = matched1[i].pt.y;
col = homography * col;
...
</code></pre>
<p>Why <code>H1to3p.xml</code> is:</p>
<pre><code><opencv_storage><H13 type_id="opencv-matrix"><rows>3</rows><cols>3</cols><dt>d</dt><data>
7.6285898e-01 -2.9922929e-01 2.2567123e+02
3.3443473e-01 1.0143901e+00 -7.6999973e+01
3.4663091e-04 -1.4364524e-05 1.0000000e+00 </data></H13></opencv_storage>
</code></pre>
<p>With which criteria these numbers were chosen? They can be used for any other homography test for filtering keypoints (as in <code>LATCH_match.cpp</code>)?</p> | 2016-06-01 09:24:55.987000+00:00 | 2016-06-01 13:19:10.477000+00:00 | null | opencv|image-processing|homography | ['https://github.com/Itseez/opencv/blob/3.1.0/samples/cpp/tutorial_code/xfeatures2D/LATCH_match.cpp', 'https://github.com/Itseez/opencv/blob/3.1.0/samples/cpp/tutorial_code/xfeatures2D/LATCH_match.cpp#L13', 'http://arxiv.org/pdf/1501.03719v1.pdf', 'https://github.com/Itseez/opencv/blob/3.1.0/samples/data/graf1.png', 'https://i.stack.imgur.com/ZpO0H.jpg', 'https://stackoverflow.com/questions/37564541/homography-matrix-in-opencv/37565570?noredirect=1#comment62624557_37565570', 'https://stackoverflow.com/users/6055233/catree', 'http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/graf.tar.gz'] | 8 |
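For reference, applying the stored matrix to a keypoint amounts to multiplying by H and then dividing by the third (projective) coordinate. A plain-Python sketch using the exact numbers from `H1to3p.xml` (Python rather than the question's C++, purely for illustration):

```python
# The 3x3 homography from H1to3p.xml (image 1 -> image 3 of the graffiti set)
H = [
    [7.6285898e-01, -2.9922929e-01, 2.2567123e+02],
    [3.3443473e-01, 1.0143901e+00, -7.6999973e+01],
    [3.4663091e-04, -1.4364524e-05, 1.0000000e+00],
]

def apply_homography(H, x, y):
    # [x', y', w'] = H * [x, y, 1], then divide by the projective coordinate w'
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xp / w, yp / w

corner = apply_homography(H, 0.0, 0.0)  # where image 1's origin lands in image 3
```

Mapping a keypoint this way and comparing against the matched keypoint's position is exactly how such samples decide whether a match is an inlier.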
46,040,322 | <p>some good holiday reading on the latest research in this area: </p>
<p><a href="https://arxiv.org/abs/1706.04972" rel="nofollow noreferrer">https://arxiv.org/abs/1706.04972</a></p> | 2017-09-04 15:35:57.460000+00:00 | 2017-09-04 15:35:57.460000+00:00 | null | null | 45,989,568 | <p>1) I have seen that unless stated otherwise, a subgraph in simple_placer.cc is placed on task 0 (mapped to device 0), but, before that, it occurs graph partitioning. So, if after this operation we have two subgraphs, then they are going to be mapped to different tasks/devices?</p>
<p>2) Is there a way to have automatic device placement for model parallelism considering distributed execution or should I have to set it manually? Broadly speaking, not only model parallelism, but automatic task parallelism?</p> | 2017-08-31 20:26:32.560000+00:00 | 2017-09-24 16:24:55.990000+00:00 | 2017-09-01 17:44:09.130000+00:00 | tensorflow|distributed-computing | ['https://arxiv.org/abs/1706.04972'] | 1 |
53,383,439 | <p>You can use this code: it first downloads the PDF file and then reads the text with the Apache PDFBox library. You need to add some JARs manually,
and you need to set your local PDF file path, which is by default "download.pdf".</p>
<pre><code>import com.gnostice.pdfone.PdfDocument;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.ConnectException;
import java.net.URL;
import java.net.URLConnection;
public class LoadDocumentFromURL {
public static void main(String[] args) throws IOException {
URL url1 = new URL("https://arxiv.org/pdf/1811.06933.pdf");
byte[] ba1 = new byte[1024];
int baLength;
FileOutputStream fos1 = new FileOutputStream("download.pdf");
try {
// Contacting the URL
// System.out.print("Connecting to " + url1.toString() + " ... ");
URLConnection urlConn = url1.openConnection();
// Checking whether the URL contains a PDF
if (!urlConn.getContentType().equalsIgnoreCase("application/pdf")) {
System.out.println("FAILED.\n[Sorry. This is not a PDF.]");
} else {
try {
// Read the PDF from the URL and save to a local file
InputStream is1 = url1.openStream();
while ((baLength = is1.read(ba1)) != -1) {
fos1.write(ba1, 0, baLength);
}
fos1.flush();
fos1.close();
is1.close();
// Load the PDF document and display its page count
//System.out.print("DONE.\nProcessing the PDF ... ");
PdfDocument doc = new PdfDocument();
try {
doc.load("download.pdf");
// System.out.println("DONE.\nNumber of pages in the PDF is " + doc.getPageCount());
// System.out.println(doc.getAuthor());
// System.out.println(doc.getKeywords());
// System.out.println(doc.toString());
doc.close();
} catch (Exception e) {
System.out.println("FAILED.\n[" + e.getMessage() + "]");
}
} catch (ConnectException ce) {
//System.out.println("FAILED.\n[" + ce.getMessage() + "]\n");
}
}
} catch (NullPointerException npe) {
//System.out.println("FAILED.\n[" + npe.getMessage() + "]\n");
}
        File file = new File("download.pdf"); // the local path the PDF was saved to above
PDDocument document = PDDocument.load(file);
PDFTextStripper pdfStripper = new PDFTextStripper();
String text = pdfStripper.getText(document);
System.out.println(text);
document.close();
}
}
</code></pre> | 2018-11-19 22:16:34.457000+00:00 | 2018-11-19 22:16:34.457000+00:00 | null | null | 29,998,020 | <p>I need to be able to parse the text contained in a file online with a given url, i.e. <code>http://website.com/document.pdf</code>.</p>
<p>I am making a search engine which basically can tell me if the searched word is in some file online, and retrieve the file's URL, so I don't need to download the file but to just read it.</p>
<p>I was looking for a way and found something with <code>InputStream</code> and <code>OpenConnection</code> but didn't managed to actually do it.</p>
<p>I am using jsoup in order to crawl around a website in order to retrieve the URLs, and I was trying to parse it with a Jsoup method, but it does not work.</p>
<p>So what is the best way to do this?</p>
<p>EDIT:</p>
<p>I want to be able to do something like this:</p>
<pre><code>File in = new File("http://website.com/document.pdf");
Document doc = Jsoup.parse(in, "UTF-8");
System.out.println(doc.toString());
</code></pre> | 2015-05-02 03:18:49.303000+00:00 | 2018-11-19 22:16:34.457000+00:00 | null | java|parsing|pdf|stream|jsoup | [] | 0 |
55,427,297 | <p>Here's how I implemented it</p>
<pre><code>class LayerCardinalConv(object):
"""Aggregated Residual Transformations for Deep Neural Networks https://arxiv.org/abs/1611.05431"""
def __init__(self, name, w, nin, card, use_bias=True, init='he'):
self.group = nin // card
with tf.name_scope(name):
self.conv = tf.Variable(weight_init(nin, self.group, [*w, nin, self.group], init), name='conv')
self.bias = tf.Variable(tf.zeros([nin]), name='bias') if use_bias else 0
def __call__(self, vin, train):
s = tf.shape(vin)
vout = tf.nn.depthwise_conv2d(vin, self.conv, strides=[1] * 4, padding='SAME')
vout = tf.reshape(vout, [s[0], s[1], s[2], self.group, s[3]])
vout = tf.reduce_sum(vout, 3)
return vout + self.bias
</code></pre>
<p>Notes:</p>
<ul>
<li>w is the kernel shape, (3, 3) for example</li>
<li>nin is the number of input channels</li>
<li>card is the cardinality, i.e. the number of groups</li>
</ul>
<p>Hope it helps.</p> | 2019-03-30 01:23:13.737000+00:00 | 2019-03-30 01:23:13.737000+00:00 | null | null | 48,994,369 | <p>I'm attempting to improve the performance of my ResNeXt implementation in Tensorflow. David Berthelot mentioned a potential improvement over <a href="https://twitter.com/D_Berthelot_ML/status/908200291274711040" rel="nofollow noreferrer">on twitter</a>. I'd like to apply this to my implementation - how does reshape+sum fit into this?</p>
<pre><code># one resnext block per figure 3c
# see also https://arxiv.org/pdf/1611.05431.pdf
def bottleneck(x, strides, dim):
x = tf.layers.conv2d(x, filters=64, kernel_size=1, strides=strides)
x = tf.layers.batch_normalization(x, training=is_training)
x = tf.nn.relu(x)
w = tf.get_variable(name='depthwise_filter', shape=[3, 3, 64, cardinality])
x = tf.nn.depthwise_conv2d_native(x, w, strides=1, padding='same')
x = tf.layers.batch_normalization(x, training=is_training)
x = tf.nn.relu(x)
x = tf.layers.conv2d(x, filters=dim, kernel_size=1, strides=1)
x = tf.layers.batch_normalization(x, training=is_training)
return tf.nn.relu(x)
</code></pre>
<p><strong>EDIT:</strong> I thought this implementation was correct, and I just needed to add a couple operations to improve the performance. Taking another look at David's comment, the depthwise+reshape+sum wasn't instead of a single depthwise operation, but instead of some other method; the code above does not compute the equivalent of the bottleneck block version 3d.</p> | 2018-02-26 17:57:15.350000+00:00 | 2019-03-30 01:23:13.737000+00:00 | 2018-03-01 20:58:34.033000+00:00 | tensorflow|convolution | [] | 0 |
49,041,584 | <p>Depthwise convolutions and grouped convolutions are very similar. Grouped convolutions apply a set of independent kernels across a number of channel groups, whereas depthwise convolutions apply a set of independent kernels for every input channel. Crucially, individual connections between input and output channels use weights that are not shared with any other input-output channel pair in both cases. As a result, we can apply (as the man said!) a reshape and sum to emulate a grouped convolution with a depthwise convolution. This approach comes at the expense of memory, as we must allocate a tensor that is multiple times larger to perform the intermediary computation.</p>
<p>The depthwise convolutions map individual input channels to multiple output channels, and the grouped convolution maps blocks of input channels to blocks of output channels. If we want to apply a grouped convolution with 32 groups to a 128-channel input, we can instead apply a depthwise convolution with a channel multiplier of 128/32=4. The output tensor represents a decomposed version of the equivalent grouped convolution output - the first sixteen channels of the depthwise convolution output correspond to the first four channels of the grouped convolution output. We can reshape these channels into a set of 4x4 spaces, and sum along one of the new axes to realize the equivalent of the grouped convolution output. Across all the output channels, we just reshape by adding two new axes with a dimensionality of 4, sum, and reshape back to 128 channels.</p>
<pre><code># one resnext block per figure 3c
# see also https://arxiv.org/pdf/1611.05431.pdf
def bottleneck(x, strides, dim, is_training):
input_channels = x.shape.as_list()[-1]
bottleneck_depth = input_channels // 2
x = tf.layers.conv2d(x, filters=bottleneck_depth, kernel_size=1, strides=strides)
x = tf.layers.batch_normalization(x, training=is_training)
x = tf.nn.relu(x)
group_size = bottleneck_depth // cardinality
w = tf.get_variable(name='depthwise_filter', shape=[3, 3, bottleneck_depth, group_size])
x = tf.nn.depthwise_conv2d_native(x, w, strides=1, padding='same')
depthwise_shape = x.shape.as_list()
x = tf.reshape(x, depthwise_shape[:3] + [cardinality, group_size, group_size])
x = tf.reduce_sum(x, axis=4)
x = tf.reshape(x, depthwise_shape[:3] + [bottleneck_depth])
x = tf.layers.batch_normalization(x, training=is_training)
x = tf.nn.relu(x)
x = tf.layers.conv2d(x, filters=dim, kernel_size=1, strides=1)
x = tf.layers.batch_normalization(x, training=is_training)
return tf.nn.relu(x)
</code></pre>
<p><strong>EDIT:</strong> It seems I didn't formulate the reshape/sum correctly. I've updated the code sample above to reflect what I now believe is the correct transformation. The older version was reducible to a depthwise convolution with a <code>channel_multiplier</code> of 1.</p>
<p>I'll illustrate the incorrect and correct behavior using numpy with weights fixed at 1 to better understand the difference. We'll look at a simpler 8-channel input with two groups.</p>
<pre><code>input = np.arange(8)
# => [0, 1, 2, 3, 4, 5, 6, 7]
# the result of applying a depthwise convolution with a channel multiplier of 4 and weights fixed at 1
depthwise_output = np.repeat(input, 4)
# => [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, ..., 6, 6, 7, 7, 7, 7]
</code></pre>
<p>Incorrect transformation:</p>
<pre><code>x = depthwise_output.reshape((8, 4))
# => [[0, 0, 0, 0],
# [1, 1, 1, 1],
# [2, 2, 2, 2],
# [3, 3, 3, 3],
# [4, 4, 4, 4],
# [5, 5, 5, 5],
# [6, 6, 6, 6],
# [7, 7, 7, 7]]
x = x.sum(axis=1)
# => [ 0, 4, 8, 12, 16, 20, 24, 28]
</code></pre>
<p>Correct transformation:</p>
<pre><code>x = depthwise_output.reshape((2, 4, 4))
# => [[[0, 0, 0, 0],
# [1, 1, 1, 1],
# [2, 2, 2, 2],
# [3, 3, 3, 3]],
#
# [[4, 4, 4, 4],
# [5, 5, 5, 5],
# [6, 6, 6, 6],
# [7, 7, 7, 7]]]
x = x.sum(axis=1)
# => [[ 6, 6, 6, 6],
# [22, 22, 22, 22]])
x = x.reshape((8,))
# => [ 6, 6, 6, 6, 22, 22, 22, 22]
</code></pre> | 2018-03-01 02:11:50.563000+00:00 | 2018-03-01 20:55:54.157000+00:00 | 2018-03-01 20:55:54.157000+00:00 | null | 48,994,369 | <p>I'm attempting to improve the performance of my ResNeXt implementation in Tensorflow. David Berthelot mentioned a potential improvement over <a href="https://twitter.com/D_Berthelot_ML/status/908200291274711040" rel="nofollow noreferrer">on twitter</a>. I'd like to apply this to my implementation - how does reshape+sum fit into this?</p>
<pre><code># one resnext block per figure 3c
# see also https://arxiv.org/pdf/1611.05431.pdf
def bottleneck(x, strides, dim):
x = tf.layers.conv2d(x, filters=64, kernel_size=1, strides=strides)
x = tf.layers.batch_normalization(x, training=is_training)
x = tf.nn.relu(x)
w = tf.get_variable(name='depthwise_filter', shape=[3, 3, 64, cardinality])
x = tf.nn.depthwise_conv2d_native(x, w, strides=[1, 1, 1, 1], padding='SAME')
x = tf.layers.batch_normalization(x, training=is_training)
x = tf.nn.relu(x)
x = tf.layers.conv2d(x, filters=dim, kernel_size=1, strides=1)
x = tf.layers.batch_normalization(x, training=is_training)
return tf.nn.relu(x)
</code></pre>
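<p>As a sanity check of how reshape+sum fits in, here is a small numpy sketch (1x1 spatial case, toy sizes of my own choosing, not from the tweet) showing that a depthwise convolution followed by a reshape and a sum over the within-group axis reproduces an explicit grouped ("split-transform-merge") convolution:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
C, G = 8, 2                      # input channels, cardinality (number of groups)
g = C // G                       # channels per group
x = rng.normal(size=C)           # one spatial position of one example
w = rng.normal(size=(C, g))      # depthwise weights: g outputs per input channel

# depthwise conv at a single 1x1 position: channel k yields the g values x[k] * w[k, :]
depthwise = x[:, None] * w                       # shape (C, g)

# reshape+sum: group the input channels and sum within each group
grouped = depthwise.reshape(G, g, g).sum(axis=1).reshape(C)

# reference: explicit grouped convolution, one small matmul per group
ref = np.concatenate([x[i * g:(i + 1) * g] @ w[i * g:(i + 1) * g] for i in range(G)])

print(np.allclose(grouped, ref))  # True
```

<p>For full <code>[N, H, W, C]</code> tensors the same bookkeeping happens on the channel axis after the spatial dimensions.</p>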
<p><strong>EDIT:</strong> I thought this implementation was correct, and I just needed to add a couple operations to improve the performance. Taking another look at David's comment, the depthwise+reshape+sum wasn't instead of a single depthwise operation, but instead of some other method; the code above does not compute the equivalent of the bottleneck block version 3d.</p> | 2018-02-26 17:57:15.350000+00:00 | 2019-03-30 01:23:13.737000+00:00 | 2018-03-01 20:58:34.033000+00:00 | tensorflow|convolution | [] | 0 |
14,138,534 | <p>While my first answer should do the trick for this simple problem, I can't help but mention that there exist general techniques for dealing with these kinds of special cases.</p>
<p><a href="http://arxiv.org/pdf/math/9410209.pdf" rel="nofollow">This article</a> describes a technique for dealing with these kinds of issues in general. And one of the first examples they provide happens to be the algorithm you ask about!</p>
<p>The idea is to apply <a href="http://en.wikipedia.org/wiki/Automatic_differentiation" rel="nofollow">Automatic differentiation</a> aka <a href="http://en.wikipedia.org/wiki/Dual_number" rel="nofollow">Dual numbers</a> to compute symbolic perturbations.</p>
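<p>A minimal sketch of the dual-number mechanics (my own illustrative Python, not code from the paper): each value carries an infinitesimal part that arithmetic propagates, which is also exactly how forward-mode AD computes derivatives:</p>

```python
# Dual numbers: a + b*eps with eps^2 = 0. The eps part acts as a symbolic
# infinitesimal perturbation (equivalently, a derivative).
class Dual:
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps

    def _coerce(self, o):
        return o if isinstance(o, Dual) else Dual(o)

    def __add__(self, o):
        o = self._coerce(o)
        return Dual(self.re + o.re, self.eps + o.eps)
    __radd__ = __add__

    def __mul__(self, o):
        o = self._coerce(o)
        # product rule: (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)
    __rmul__ = __mul__

# Seed eps = 1 to differentiate: d/dx (x*x + 3x) at x = 2 is 2*2 + 3 = 7.
x = Dual(2.0, 1.0)
y = x * x + 3 * x
print(y.re, y.eps)  # 10.0 7.0
```

<p>For the point-in-polygon test, perturbing the query point by such an eps breaks the "ray passes exactly through a vertex" tie consistently, without enumerating the special cases by hand.</p>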
<p>By the way the same technique can also be used to avoid handling 0/0 as a special case in programs!</p>
<p><a href="http://blog.sigfpe.com/2008/05/desingularisation-and-its-applications.html" rel="nofollow">Here</a> is the blog post I originally learned this from, it gives some great background to the technique, and the author blogs a lot about automatic differentiation (AD).</p>
<p>Despite appearances AD is a very practical technique especially in languages with good support for operator overloading (eg: C++, Haskell, Python ...) and I have used it in "real life" (industrial applications in C++).</p> | 2013-01-03 12:01:07.863000+00:00 | 2013-01-04 20:46:23.243000+00:00 | 2013-01-04 20:46:23.243000+00:00 | null | 14,130,742 | <p>To detect if a point is in a polygon, you project a line from the point, to infinity, and see how many of polygon's vertices it intersects with... simple enough. My problem is that if the ray intersects the polygon on one of the points, then it is counted as intersecting two segments, and considered outside the polygon. I altered my function to make it only count one of the segments when the ray intersects a point of the polygon, but there are cases where a line could intersect the point while still being outside as well. Take this image as an example:</p>
<p><img src="https://i.stack.imgur.com/CQlsY.png" alt="two examples of a ray crossing a polygon vertex"></p>
<p>If you assume the point in the top left is "infinity", and cast a ray to either of the other points, both intersect at a point of the polygon, and would count as intersecting the same number of vertices even though one is inside, and one is outside. </p>
<p>Is there a way to compensate for that, or do I just have to assume that those fringe cases won't pop up?</p> | 2013-01-02 23:14:13.317000+00:00 | 2013-01-04 21:10:32.977000+00:00 | 2013-01-04 08:53:13.610000+00:00 | geometry|gis|polygon|point | ['http://arxiv.org/pdf/math/9410209.pdf', 'http://en.wikipedia.org/wiki/Automatic_differentiation', 'http://en.wikipedia.org/wiki/Dual_number', 'http://blog.sigfpe.com/2008/05/desingularisation-and-its-applications.html'] | 4 |
67,635,068 | <p>You can get a basic recommended setup by using <a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/dp_aggregator" rel="nofollow noreferrer"><code>tff.learning.dp_aggregator</code></a>, and use it as</p>
<pre><code>iterative_process = tff.learning.build_federated_averaging_process(
...,
model_update_aggregation_factory=tff.learning.dp_aggregator(...))
</code></pre>
<p>For guidance on how to use it within learning algorithms in general, see tutorial: <a href="https://www.tensorflow.org/federated/tutorials/tuning_recommended_aggregators" rel="nofollow noreferrer">Tuning recommended aggregators for learning</a>.</p>
<p>The default clipping method used corresponds to "flat clipping", as termed in the paper you link to. However, the clipping norm is not fixed, but automatically adapted based on values seen in previous rounds of training. For details, see documentation and the paper <a href="https://arxiv.org/abs/1905.03871" rel="nofollow noreferrer">Differentially Private Learning with Adaptive Clipping</a>.</p>
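<p>To make the two clipping styles concrete, here is a small numpy sketch (toy update and clip bounds of my own choosing; plain numpy, not TFF code):</p>

```python
import numpy as np

# A toy model update: two "layers" with very different norms (5.0 and 0.5).
layers = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]

def flat_clip(updates, clip):
    """Flat clipping: bound the global L2 norm of all layers jointly."""
    global_norm = np.sqrt(sum(np.sum(u ** 2) for u in updates))
    scale = min(1.0, clip / global_norm)
    return [u * scale for u in updates]

def per_layer_clip(updates, clips):
    """Per-layer clipping: bound each layer's L2 norm to its own budget."""
    return [u * min(1.0, c / np.linalg.norm(u)) for u, c in zip(updates, clips)]

flat = flat_clip(layers, clip=1.0)
per = per_layer_clip(layers, clips=[0.5, 0.5])

print(np.sqrt(sum(np.sum(u ** 2) for u in flat)))  # global norm -> ~1.0
print([float(np.linalg.norm(u)) for u in per])     # each layer norm -> ~0.5
```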
<p>If you want to use a fixed clipping norm <code>my_clip_norm</code>, you can look at the implementation and see what components can be modified. I believe you should be able to simply use:</p>
<p><code>tff.aggregators.DifferentiallyPrivateFactory.gaussian_fixed(..., clip=my_clip_norm)</code></p>
<p>If you wanted to use some form of per-layer clipping, you would need to write your own aggregator. Implementation of <code>tff.aggregators.DifferentiallyPrivateFactory</code> could be a good start, and see also tutorial <a href="https://www.tensorflow.org/federated/tutorials/custom_aggregators" rel="nofollow noreferrer">Implementing Custom Aggregations</a>.</p> | 2021-05-21 10:25:17.863000+00:00 | 2021-05-21 10:25:17.863000+00:00 | null | null | 67,633,859 | <p>I'm training a DP federated learning model using the "DP-FedAvg" algorithm, which is based on below paper:</p>
<p><a href="https://arxiv.org/abs/1710.06963" rel="nofollow noreferrer">Learning Differentially Private Recurrent Language Models</a></p>
<p>The paper proposes two norm clipping techniques "flat clipping" and "per-layer clipping", then performs the experiments using "per-layer clipping".</p>
<p>In the case of TFF, when attaching a DP-query and an aggregation process to the federated model, which clipping technique is implemented by default? Is there a way to specify the clipping technique used?</p>
72,302,882 | <p>This is a typical challenge when performing multi-task learning. There are many methods to handle this, but as for all things in this field, there is no single solution to solve them all. The most straightforward approach is to weigh the different loss components indeed. You can do so by performing a grid search or random search on the three weights or try and level the three components of your loss by looking at the orders of magnitude for each of them. The general idea behind this is if you're giving high precedence for one of the loss terms, then the gradient corresponding to this term will be much more prominent when performing back propagation and parameter update.</p>
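<p>As a toy illustration of fixed weighting (numbers here are illustrative; the <code>hm</code>/<code>off</code>/<code>wh</code> head names suggest a CenterNet-style detector, whose paper weights the size head down with a small factor such as 0.1):</p>

```python
# Raw loss terms often live on very different scales, e.g.:
hm_loss, off_loss, wh_loss = 4.2, 0.03, 0.9

# Fixed weights chosen so no single term dominates the summed gradient.
w_hm, w_off, w_wh = 1.0, 1.0, 0.1
loss = w_hm * hm_loss + w_off * off_loss + w_wh * wh_loss
print(loss)
```

<p>In a framework like PyTorch the weighted sum is built the same way from the three loss tensors, and the weights become hyperparameters of the grid/random search mentioned above.</p>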
<p>I recommend you read more on multi-task learning. For example you could start with <a href="https://arxiv.org/pdf/2004.13379.pdf" rel="nofollow noreferrer"><em>Multi-Task Learning for Dense Prediction Tasks A Survey</em></a>: Simon Vandenhende <em>et al.</em>, in <strong>TPAMI'21</strong>.</p> | 2022-05-19 10:24:55.507000+00:00 | 2022-05-19 10:24:55.507000+00:00 | null | null | 72,302,269 | <p><a href="https://i.stack.imgur.com/kkNov.jpg" rel="nofollow noreferrer">network architecture</a></p>
<p>I have a neural network with 3 heads, one of them with a focal loss and two others with L1 losses. They are combined by summing: <code>loss = hm_loss + off_loss + wh_loss</code>.
However, the ranges of typical values for the loss elements are different. Is that an issue? Should I weight the loss elements, or should I normalize the network outputs?</p>
55,158,136 | <p>I don't think anyone can say "yes, framework X can definitely handle your workload", because it depends a lot on what you need out of your message processing, e.g. regarding messaging reliability, and how your data streams can be partitioned.</p>
<p>You may be interested in <a href="https://arxiv.org/ftp/arxiv/papers/1802/1802.08496.pdf" rel="nofollow noreferrer">Benchmarking Distributed Stream Processing Engines</a>. The paper is using versions of Storm/Flink/Spark that are a few years old (looks like they were released in 2016), but maybe the authors would be willing to let you use their benchmark to evaluate newer versions of the three frameworks? </p>
<p>A very common setup for streaming analytics is to go data source -> Kafka/Pulsar -> analytics framework -> long term data store. This decouples processing from data ingest, and lets you do stuff like reprocessing historical data as if it were new.</p>
<p>I think the first step for you should be to see if you can get the data volume you need through Kafka/Pulsar. Either generate a test set manually, or grab some data you think could be representative from your production environment, and see if you can put it through Kafka/Pulsar at the throughput/latency you need.</p>
<p>Remember to consider partitioning of your data. If some of your data streams could be processed independently (i.e. ordering doesn't matter), you should not be putting them in the same partitions. For example, there is probably no reason to mix sensor measurements and the video feed streams. If you can separate your data into independent streams, you are less likely to run into bottlenecks both in Kafka/Pulsar and the analytics framework. Separate data streams would also allow you to parallelize processing in the analytics framework much better, as you could run e.g. video feed and sensor processing on different machines.</p>
<p>Once you know whether you can get enough throughput through Kafka/Pulsar, you should write a small example for each of the 3 frameworks. To start, I would just receive and drop the data from Kafka/Pulsar, which should let you know early whether there's a bottleneck in the Kafka/Pulsar -> analytics path. After that, you can extend the example to do something interesting with the example data, e.g. do a bit of processing like what you might want to do in production.</p>
<p>You also need to consider which kinds of processing guarantees you need for your data streams. Generally you will pay a performance penalty for guaranteeing at-least-once or exactly-once processing. For some types of data (e.g. the video feed), it might be okay to occasionally lose messages. Once you decide on a needed guarantee, you can configure the analytics frameworks appropriately (e.g. disable acking in Storm), and try benchmarking on your test data.</p>
<p>Just to answer some of your questions more explicitly:</p>
<p>The live data analysis/monitoring use case sounds like it fits the Storm/Flink systems fairly well. Hooking it up to Kafka/Pulsar directly, and then doing whatever analytics you need sounds like it could work for you.</p>
<p>Reprocessing of historical data is going to depend on what kind of queries you need to do. If you simply need a time interval + id, you can likely do that with Kafka plus a filter or appropriate partitioning. Kafka lets you start processing at a specific timestamp, and if your data is partitioned by id or you filter it as the first step in your analytics, you could start at the provided timestamp and stop processing when you hit a message outside the time window. This only applies if the timestamp you're interested in is when the message was added to Kafka though. I also don't believe Kafka supports below-millisecond resolution on the timestamps it generates.</p>
<p>If you need to do more advanced queries (e.g. you need to look at timestamps generated by your sensors), you could look at using <a href="http://cassandra.apache.org/" rel="nofollow noreferrer">Cassandra</a> or <a href="https://www.elastic.co/" rel="nofollow noreferrer">Elasticsearch</a> or <a href="http://lucene.apache.org/solr/" rel="nofollow noreferrer">Solr</a> as your permanent data store. You will also want to investigate how to get the data from those systems back into your analytics system. For example, I believe Spark ships with a connector for reading from Elasticsearch, while Elasticsearch provides a connector for Storm. You should check whether such a connector exists for your data store/analytics system combination, or be willing to write your own.</p>
<p>Edit: Elaborating to answer your comment.</p>
<p>I was not aware that Kafka or Pulsar supported timestamps specified by the user, but sure enough, they <a href="https://github.com/apache/pulsar/wiki/PIP-5:-Event-time" rel="nofollow noreferrer">both</a> <a href="https://kafka.apache.org/0110/documentation/streams/core-concepts#streams_time" rel="nofollow noreferrer">do</a>. I don't see that Pulsar supports sub-millisecond timestamps though? </p>
<p>The idea you describe can definitely be supported by Kafka.</p>
<p>What you need is the ability to start a Kafka/Pulsar client at a specific timestamp, and read forward. Pulsar doesn't seem to support this yet, but Kafka does.</p>
<p>You need to guarantee that when you write data into a partition, they arrive in order of timestamp. This means that you are not allowed to e.g. write first message 1 with timestamp 10, and then message 2 with timestamp 5.</p>
<p>If you can make sure you write messages in order to Kafka, the example you describe will work. Then you can say "Start at timestamp 'last night at midnight'", and Kafka will start there. As live data comes in, it will receive it and add it to the end of its log. When the consumer/analytics framework has read all the data from last midnight to current time, it will start waiting for new (live) data to arrive, and process it as it comes in. You can then write custom code in your analytics framework to make sure it stops processing when it reaches the first message with timestamp 'tomorrow night'. </p>
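<p>The replay logic just described can be sketched with a plain list standing in for the Kafka consumer (names and timestamps are illustrative):</p>

```python
def replay_window(messages, start_ns, end_ns):
    """messages: (timestamp_ns, payload) pairs in timestamp order, as they
    would come off a partition after seeking to the start timestamp."""
    for ts, payload in messages:
        if ts < start_ns:
            continue   # before the window; a real consumer would seek past these
        if ts >= end_ns:
            break      # first message past the window: stop processing
        yield payload

msgs = [(5, "a"), (9_000_000, "b"), (9_000_010, "c"), (12_000_000, "d")]
print(list(replay_window(msgs, 9_000_000, 12_000_000)))  # ['b', 'c']
```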
<p>With regard to support of sub-millisecond timestamps, I don't think Kafka or Pulsar will support it out of the box, but you can work around it reasonably easily. Just put the sub-millisecond timestamp in the message as a custom field. When you want to start at e.g. timestamp 9ms 10ns, you ask Kafka to start at 9ms, and use a filter in the analytics framework to drop all messages between 9ms and 9ms 10ns.</p> | 2019-03-14 08:41:28.100000+00:00 | 2019-03-14 10:15:17.223000+00:00 | 2019-03-14 10:15:17.223000+00:00 | null | 55,147,517 | <p>I'm still quite new to the world of stream and batch processing and trying to understand concepts and speech. It is admittedly very possible that the answer to my question is well known, easy to find or even answered a hundred times here at SO, but I was not able to find it. </p>
<p>The background: </p>
<p>I am working in a big scientific project (nuclear fusion research), and we are producing tons of measurement data during experiment runs. Those data are mostly streams of samples tagged with a nanosecond timestamp, where samples can be anything from a single by ADC value, via an array of such, via deeply structured data (with up to hundreds of entries from 1 bit booleans to 64bit double precision floats) to raw HD video frames or even string text messages. If I understand the common terminologies right, I would regard our data as "tabular data", for the most part.</p>
<p>We are working with mostly self-made software solutions from data acquisition over simple online (streaming) analysis (like scaling, subsampling and such) to our own data storage, management and access facilities.</p>
<p>In view of the scale of the operation and the effort for maintaining all those implementations, we are investigating the possibilities to use standard frameworks and tools for more of our tasks. </p>
<p>My question:</p>
<p>In particular at this stage, we are facing the need for more and more sophisticated (automated and manual) data analytics on live/online/realtime data as well as "after the fact" offline/batch analytics of "historic" data. In this endeavor, I am trying to understand if and how existing analytics frameworks like Spark, Flink, Storm etc. (possibly supported by message queues like Kafka, Pulsar,...) can support a scenario, where</p>
<ul>
<li>data is flowing/streamed into the platform/framework, attached an identifier like a URL or an ID or such</li>
<li>the platform interacts with integrated or external storage to persist the streaming data (for years), associated with the identifier</li>
<li>analytics processes can now transparently query/analyse data addressed by an identifier and an arbitrary (open or closed) time window, and the framework suplies data batches/samples for the analysis either from backend storage or coming in live from data acquisition</li>
</ul>
<p>Simply streaming the online data into storage and querying from there is not an option, as we need both raw and analysed data for live monitoring and realtime feedback control of the experiment.
Also, letting the user query a live input signal and a historic batch from storage differently would not be ideal, as our physicists are mostly not data scientists and we would like to keep such "technicalities" away from them; ideally the exact same algorithms should be used for analysing new real time data and old stored data from previous experiments.</p>
<p>Side notes:</p>
<ul>
<li>we are talking about peak data loads in the range of tens of gigabits per second, coming in bursts of increasing length, from seconds up to minutes - could this be handled by the candidates?</li>
<li>we are using timestamps in nanosecond resolution, even thinking about pico - this poses some limitations on the list of possible candidates if I understand correctly?</li>
</ul>
<p>I would be very grateful if anyone would be able to understand my question and to shed some light on the topic for me :-)</p>
<p>Many Thanks and kind regards,
Beppo</p> | 2019-03-13 17:03:58.900000+00:00 | 2019-04-26 15:28:39.313000+00:00 | null | apache-spark|apache-flink|apache-storm|apache-pulsar | ['https://arxiv.org/ftp/arxiv/papers/1802/1802.08496.pdf', 'http://cassandra.apache.org/', 'https://www.elastic.co/', 'http://lucene.apache.org/solr/', 'https://github.com/apache/pulsar/wiki/PIP-5:-Event-time', 'https://kafka.apache.org/0110/documentation/streams/core-concepts#streams_time'] | 6 |
71,052,748 | <p>You can try to use this modified peer node:</p>
<p><a href="https://github.com/trustbloc/fabric-mod" rel="nofollow noreferrer">https://github.com/trustbloc/fabric-mod</a></p>
<p><a href="https://github.com/trustbloc/trustbloc-did-method/blob/main/docs/spec/trustbloc-did-method.md" rel="nofollow noreferrer">https://github.com/trustbloc/trustbloc-did-method/blob/main/docs/spec/trustbloc-did-method.md</a></p>
<p>or read this research:</p>
<p><a href="https://arxiv.org/pdf/2104.03277.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2104.03277.pdf</a></p>
<p>or check other did:methods that support HLF:</p>
<p><a href="https://www.w3.org/TR/did-spec-registries/" rel="nofollow noreferrer">https://www.w3.org/TR/did-spec-registries/</a></p>
<p>or look to this project:</p>
<p><a href="https://github.com/BLOCKOTUS/blockotus-organism" rel="nofollow noreferrer">https://github.com/BLOCKOTUS/blockotus-organism</a></p> | 2022-02-09 15:55:39.523000+00:00 | 2022-02-09 15:55:39.523000+00:00 | null | null | 70,289,149 | <p>I'm trying to use DIDs/VCs from hyperledger Indy with Hyperledger Fabric. Simply put, I want to replace Fabric's certificate-based identity/MSP with DIDs/VCs. However, as far as I understand this is not straightforward. The existing code base has lots of dependencies on Fabric-CA. Could someone help me to figure out potential starting points to do this customisation?</p>
65,621,031 | <p>Another option is to actually use more recent packages that are purpose-built for highly dimensional / high volume data sets. They run their code using lower-level languages (C++ and/or Java) and in certain cases use parallelization.</p>
<p>I'd recommend taking a look into these three:</p>
<p>ranger (uses C++ compiler)
randomForestSRC (uses C++ compiler)
h2o (Java compiler - needs Java version 8 or higher)
Also, some additional reading here to give you more to go off on which package to choose: <a href="https://arxiv.org/pdf/1508.04409.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1508.04409.pdf</a></p>
<p>Page 8 shows benchmarks of ranger against randomForest for growing data sizes - ranger is WAY faster due to linear rather than non-linear growth in runtime for rising tree/sample/split/feature sizes.</p>
<p>Good Luck!</p> | 2021-01-07 22:48:20.683000+00:00 | 2021-01-07 22:48:20.683000+00:00 | null | null | 23,075,506 | <p>I have a training set of size 38 MB (12 attributes with 420000 rows). I am running the below <code>R</code> snippet, to train the model using <code>randomForest</code>. This is taking hours for me.</p>
<pre><code>rf.model <- randomForest(
Weekly_Sales~.,
data=newdata,
keep.forest=TRUE,
importance=TRUE,
ntree=200,
do.trace=TRUE,
na.action=na.roughfix
)
</code></pre>
<p>I think, due to <code>na.roughfix</code>, it is taking a long time to execute. There are so many <code>NA's</code> in the training set.</p>
<p>Could someone let me know how can I improve the performance?</p>
<p>My system configuration is:</p>
<pre><code>Intel(R) Core i7 CPU @ 2.90 GHz
RAM - 8 GB
HDD - 500 GB
64 bit OS
</code></pre> | 2014-04-15 05:43:43.587000+00:00 | 2021-01-07 22:48:20.683000+00:00 | 2017-02-15 09:34:19.807000+00:00 | r|performance|random-forest | ['https://arxiv.org/pdf/1508.04409.pdf'] | 1 |
48,449,699 | <p>You might be interested in <a href="https://arxiv.org/pdf/1801.07779.pdf" rel="nofollow noreferrer">The WiLI benchmark dataset for written language identification</a>. The high-level answer, which you can also find in the paper, is the following:</p>
<ul>
<li>Clean the text: Remove things you don't want / need; make unicode unambiguous by applying a normal form.</li>
<li>Feature Extraction: Count n-grams, create tf-idf features. Something like that</li>
<li>Train a classifier on the features: Neural networks, SVMs, Naive Bayes, ... whatever you think could work.</li>
</ul> | 2018-01-25 18:36:01.867000+00:00 | 2018-01-25 18:36:01.867000+00:00 | null | null | 7,670,427 | <p>I have been wondering for some time how does Google translate(or maybe a hypothetical translator) detect language from the string entered in the "from" field. I have been thinking about this and only thing I can think of is looking for words that are unique to a language in the input string. The other way could be to check sentence formation or other semantics in addition to keywords. But this seems to be a very difficult task considering different languages and their semantics. I did some research to find that there are ways that use n-gram sequences and use some statistical models to detect language. Would appreciate a high level answer too.</p> | 2011-10-06 04:57:44.147000+00:00 | 2018-01-25 18:36:01.867000+00:00 | 2011-10-06 05:41:29.120000+00:00 | algorithm|nlp|pattern-matching | ['https://arxiv.org/pdf/1801.07779.pdf'] | 1 |
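<p>The clean-text → n-gram features → classifier recipe can be sketched end to end with plain character trigram profiles (toy training strings of my own; a real system would train on a corpus such as WiLI):</p>

```python
from collections import Counter

train = {
    "english": "the quick brown fox jumps over the lazy dog and the cat",
    "german": "der schnelle braune fuchs springt ueber den faulen hund und die katze",
}

def ngrams(text, n=3):
    text = " " + text.lower() + " "   # cleaning step: normalize case, pad edges
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

profiles = {lang: ngrams(t) for lang, t in train.items()}

def guess(text):
    grams = ngrams(text)
    # score = shared trigram mass with each language profile (a crude classifier)
    return max(profiles, key=lambda lang: sum((grams & profiles[lang]).values()))

print(guess("the dog and the fox"))     # english
print(guess("der hund und der fuchs"))  # german
```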
16,507,080 | <p>If your compiler can be made to accept <a href="http://en.wikipedia.org/wiki/C++11" rel="nofollow">C++11</a> standard, you could use <em>raw string literals</em> like eg:</p>
<pre><code> std::cout << R"*(<!DOCTYPE html>
<html>
<head>
<title>Title with a backslash \ here
and double " quote</title>)*";
</code></pre>
<p>Hence with raw string literals there is no forbidden sequence of characters in those raw string literals. Any sequence of characters could appear in them (but you can define the ending sequence of the raw string)</p>
<hr>
<p>And you could use <code>#{</code> and <code>}#</code> like I do in <a href="http://www.gcc-melt.org/tutomeltlang.html" rel="nofollow">MELT macro-strings</a>; <a href="http://gcc-melt.org/" rel="nofollow">MELT</a> is Lisp-like <a href="http://arxiv.org/abs/1109.0779" rel="nofollow">domain specific language</a> to extend GCC, and you can embed code in it with e.g.</p>
<pre><code>(code_chunk hellocount_chk
#{ /* $HELLOCOUNT_CHK chunk */
static int $HELLOCOUNT_CHK#_counter;
$HELLOCOUNT_CHK#_counter++;
$HELLOCOUNT_CHK#_lab:
printf ("Hello World, counted %d\n",
$HELLOCOUNT_CHK#_counter);
if (random() % 4 == 0) goto $HELLOCOUNT_CHK#_lab;
}#)
</code></pre>
<p>The <code>#{</code> and <code>}#</code> are enclosing macro-strings (these character sequences are unlikely to appear in C or C++ code, except in string literals and comments), with the <code>$</code> starting symbols in such macro-strings (up to a non-letter or <code>#</code> character).</p>
<p><sup>Using <code>#{</code> and <code>}#</code> is not fool-proof (e.g. because of raw string literals) but good enough: a cooperative user could manage to avoid them.</sup></p> | 2013-05-12 11:55:37.513000+00:00 | 2013-05-12 12:40:46.350000+00:00 | 2013-05-12 12:40:46.350000+00:00 | null | 16,507,056 | <p>I'm currently working on a toy language that works like this: one can embed blocks written in this language into a C++ source, and before compilation, these blocks are translated into C++ in an extra preprocessing step, producing a valid C++ source.</p>
<p>I want to make sure that these blocks can always be identified in the source unambiguously and also, whenever such a block is present in the source, it cannot be valid C++. Moreover, I want to achieve these by putting as few constraints to the embedded language as possible (the language itself is still somewhat fluid).</p>
<p>The obvious way would be to introduce a pair of special multi-character parentheses, made of characters that cannot appear together in valid C++ code (or in the embedded language). However, I'm not sure how to ensure that particular a character sequence is good for this purpose (not after <a href="http://www.gotw.ca/gotw/078.htm" rel="nofollow">GotW #78</a>, anyway (: ).</p>
<p>So what is a good way to escape these blocks?</p> | 2013-05-12 11:51:08.937000+00:00 | 2013-05-12 12:40:46.350000+00:00 | null | c++|lexer|embedded-language | ['http://en.wikipedia.org/wiki/C++11', 'http://www.gcc-melt.org/tutomeltlang.html', 'http://gcc-melt.org/', 'http://arxiv.org/abs/1109.0779'] | 4 |
7,322,375 | <p>Indeed, to do traditional cross validation with F1-score or V-Measure as scoring function you would need some labeled data as ground truth. But in this case you could just count the number of classes in the ground truth dataset and use it as your optimal value for K, hence no need for cross-validation.</p>
<p>Alternatively you could use a <strong>cluster stability measure as unsupervised performance evaluation</strong> and do some kind of cross validation procedure for that. However this is not yet implemented in scikit-learn even though it's still on my personal todo list.</p>
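<p>A rough flavor of the stability idea in plain numpy/scipy (my own toy sketch, not a library feature): cluster the same data from two different random initializations, and measure how often pairs of points land together in both labelings. The "right" k should give the most reproducible labeling:</p>

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
# Two well-separated blobs, so the "true" number of clusters is 2.
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])

def pair_agreement(a, b):
    """Fraction of point pairs on which two labelings agree (same vs. different cluster)."""
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    return float((same_a == same_b).mean())

stability = {}
for k in (2, 3, 4):
    np.random.seed(1); _, l1 = kmeans2(data, k, minit='++')
    np.random.seed(2); _, l2 = kmeans2(data, k, minit='++')
    stability[k] = pair_agreement(l1, l2)

print(stability)  # k=2 should score (near) 1.0, unstable choices of k lower
```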
<p>You can find additional info on this approach in the following <a href="http://metaoptimize.com/qa/questions/7105/quantifying-the-success-of-clustering#7111" rel="nofollow">answer on metaoptimize.com/qa</a>. In particular you should read <a href="http://arxiv.org/abs/1007.1075" rel="nofollow">Clustering Stability: An Overview by Ulrike von Luxburg</a>.</p> | 2011-09-06 15:30:05.660000+00:00 | 2011-09-06 15:30:05.660000+00:00 | null | null | 6,629,165 | <p>In a document clustering process, as a data pre-processing step, I first applied singular vector decomposition to obtain <code>U</code>, <code>S</code> and <code>Vt</code> and then by choosing a suitable number of eigen values I truncated <code>Vt</code>, which now gives me a good document-document correlation from what I read <a href="http://en.wikipedia.org/wiki/Latent_semantic_analysis" rel="nofollow noreferrer">here</a>. Now I am performing clustering on the columns of the matrix <code>Vt</code> to cluster similar documents together and for this I chose k-means and the initial results looked acceptable to me (with k = 10 clusters) but I wanted to dig a bit deeper on choosing the k value itself. To determine the number of clusters <code>k</code> in k-means, I was <a href="https://stackoverflow.com/questions/6615665/kmeans-without-knowing-the-number-of-clusters">suggested</a> to look at cross-validation. </p>
<p>Before implementing it I wanted to figure out if there is a built-in way to achieve it using numpy or scipy. Currently, the way I am performing <code>kmeans</code> is to simply use the function from scipy.</p>
<pre><code>import numpy
from scipy.linalg import svd
from scipy.cluster.vq import whiten, kmeans2

# Preprocess the data and compute svd
U, S, Vt = svd(A) # A is the TFIDF representation of the original term-document matrix
# Obtain the document-document correlations from Vt
# This 50 is the threshold obtained after examining a scree plot of S
docvectors = numpy.transpose(Vt[0:50, :])
# Prepare the data to run k-means
whitened = whiten(docvectors)
res, idx = kmeans2(whitened, 10, iter=20)
</code></pre>
<p>Assuming my methodology is correct so far (please correct me if I am missing some step), at this stage, what is the standard way of using the output to perform cross-validation? Any reference/implementations/suggestions on how this would be applied to k-means would be greatly appreciated.</p> | 2011-07-08 19:00:11.047000+00:00 | 2020-10-12 03:00:42.777000+00:00 | 2017-05-23 10:28:47.277000+00:00 | python|statistics|numpy|nlp|machine-learning | ['http://metaoptimize.com/qa/questions/7105/quantifying-the-success-of-clustering#7111', 'http://arxiv.org/abs/1007.1075'] | 2 |
28,547,654 | <p>You are basically on the right track. I would try to apply a classifier with the features you already have and see how well it works, before doing anything else.</p>
<p>Actually the best way to improve your work is to google for subjectivity classification papers and read them (there are quite a <a href="https://scholar.google.ru/scholar?as_ylo=2011&q=subjectivity%20classification&hl=en&as_sdt=0,5" rel="nofollow">number of them</a>). For example <a href="http://arxiv.org/ftp/arxiv/papers/1312/1312.6962.pdf" rel="nofollow">this one</a> lists typical features for this task.</p>
<p>And yes, chi-squared can be used to construct dictionaries for text classification (other commonly used methods are TF-IDF, pointwise mutual information and LDA)</p>
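As an illustration of the chi-squared dictionary idea (a hand-rolled sketch; in practice scikit-learn's <code>SelectKBest</code> with <code>chi2</code> does this for you), each term is scored from its 2x2 term/class contingency table, and the top-scoring terms go into the dictionary:

```python
def chi_squared(n11, n10, n01, n00):
    """Chi-squared score for one term from a 2x2 contingency table:
    n11 = subjective docs containing the term, n10 = subjective docs without it,
    n01 = objective docs containing the term,  n00 = objective docs without it."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
    return num / den if den else 0.0

# A term seen only in subjective documents scores high,
# while a term spread evenly over both classes scores zero.
```

Ranking all candidate terms by this score and keeping the top-k gives a class-discriminative dictionary instead of a generic lexicon.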
<p>Also, new neural network-based methods for text classification such as <a href="http://arxiv.org/pdf/1405.4053v2.pdf" rel="nofollow">paragraph vector</a> and <a href="http://arxiv.org/pdf/1406.3830v1.pdf" rel="nofollow">dynamic convolutional neural networks with k-max pooling</a> have recently demonstrated state-of-the-art results on sentiment analysis, so they should probably be good for subjectivity classification as well.</p> | 2015-02-16 18:06:37.553000+00:00 | 2015-02-16 18:06:37.553000+00:00 | null | null | 28,535,136 | <p>I am trying to build a classifier to detect subjectivity. I have text files tagged as subjective and objective. I am a little lost with the concept of feature creation from this data. I have found the lexicon of the subjective and objective tags. One thing that I can do is to create a feature for whether words are present in the respective dictionary. Maybe the count of words present in the subjective and objective dictionaries. After that I intend to use Naive Bayes or SVM to develop the model</p>
<p>My problem is as follows:</p>
<ol>
<li>Is my approach correct ?</li>
<li>Can I create more features ? If possible suggest some or point me to some paper or link</li>
<li>Can I do some test like chi -sq etc to identify effective words from the dictionary ?</li>
</ol> | 2015-02-16 05:36:56.753000+00:00 | 2015-02-16 18:06:37.553000+00:00 | null | nlp|text-mining|sentiment-analysis | ['https://scholar.google.ru/scholar?as_ylo=2011&q=subjectivity%20classification&hl=en&as_sdt=0,5', 'http://arxiv.org/ftp/arxiv/papers/1312/1312.6962.pdf', 'http://arxiv.org/pdf/1405.4053v2.pdf', 'http://arxiv.org/pdf/1406.3830v1.pdf'] | 4 |
50,199,898 | <p><strong>A note on your current approach</strong></p>
<p>Training with MSE is equivalent to optimizing the likelihood of your data under a Gaussian with fixed variance and mean given by your model. So you are already training an autoencoder, though you do not formulate it so.</p>
<h2>About the things you do</h2>
<ol>
<li><p><strong>You don't give the LSTM a chance</strong></p>
<p>Since you provide data from the last 24 hours only, the LSTM cannot possibly learn a weekly pattern.
It could at best learn that the value should be similar to what it was 24 hours before (though that is very unlikely, see next point) -- and then you break it with Fri-Sat and Sun-Mon data. From the LSTM's point of view, your holiday 'anomaly' looks pretty much the same as the weekend data you were providing during training.</p>
<p>So you would first need to provide longer contexts during learning (I assume that you carry the hidden state on during test time).</p></li>
<li><p><strong>Even if you gave it a chance, it wouldn't care</strong></p>
<p>Assuming that your data really follows a simple pattern -- high value during and only during working hours, plus some variations of smaller scale -- the LSTM doesn't need any long-term knowledge for most of the datapoints. Putting in all my human imagination, I can only envision the LSTM benefiting from long-term dependencies at the beginning of the working hours, so just for one or two samples out of the 96.</p>
<p>So even if the loss at those points would need to backpropagate through > 7 * 96 timesteps to learn about your weekly pattern, there are 7 * 95 other loss terms that are likely to prevent the LSTM from deviating from its current local optimum.</p>
<p>Thus it <em>may help</em> to weight the samples at the beginning of working hours more, so that the respective loss can actually influence representations from far history.</p></li>
<li><p><strong>Your solutions is a good thing</strong></p>
<p>It is difficult to model sequences at multiple scales in a single model. Even you, as a human, need to "zoom out" to judge longer trends -- that's why all the Wall Street people have Month/Week/Day/Hour/... charts to watch their shares' prices on. Such multiscale modeling is especially difficult for an RNN, because it needs to process all the information, always, with the same weights.</p>
<p>If you really want on model to learn it all, you may have more success with deep feedforward architectures employing some sort of time-convolution, eg. <a href="https://en.wikipedia.org/wiki/Time_delay_neural_network" rel="nofollow noreferrer">TDNNs</a>, <a href="https://www.semanticscholar.org/paper/Residual-Memory-Networks-in-Language-Modeling:-the-Benes-Baskar/2ee7ee38745e9fcf89860dfb3d41c2155521e3a3" rel="nofollow noreferrer">Residual Memory Networks</a> (Disclaimer: I'm one of the authors.), or the recent one-architecture-to-rule-them-all, <a href="https://deepmind.com/blog/wavenet-generative-model-raw-audio/" rel="nofollow noreferrer">WaveNet</a>. As these have skip connections over longer temporal context and apply different transformations at different levels, they have better chances of discovering and exploiting such an unexpected long-term dependency.</p>
<p>There are implementations of WaveNet in Keras laying around on GitHub, e.g. <a href="https://github.com/basveeling/wavenet" rel="nofollow noreferrer">1</a> or <a href="https://github.com/usernaamee/keras-wavenet/blob/master/simple-generative-model.py" rel="nofollow noreferrer">2</a>. I did not play with them (I've actually moved away from Keras some time ago), but esp. the second one seems really easy, with the <code>AtrousConvolution1D</code>.</p>
<p>If you want to stay with RNNs, <a href="https://arxiv.org/abs/1402.3511" rel="nofollow noreferrer">Clockwork RNN</a> is probably the model to fit your needs.</p></li>
</ol>
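The per-sample weighting suggested in point 2 could be sketched as below. The 96-samples-per-day grid and the work-start index are assumptions for illustration only; in Keras the resulting vector would be passed to <code>fit</code> via <code>sample_weight</code>:

```python
import numpy as np

def working_day_start_weights(n_samples, samples_per_day=96,
                              work_start=36, boost=5.0):
    # Weight the loss more heavily at the first timestep of working
    # hours, where long-range context actually matters.
    w = np.ones(n_samples)
    w[np.arange(n_samples) % samples_per_day == work_start] = boost
    return w

# e.g. model.fit(X, y, sample_weight=working_day_start_weights(len(y)))
```

This only changes which timesteps dominate the gradient; the architecture and data stay the same.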
<h2>About things you may want to consider for your problem</h2>
<ul>
<li><p><strong>So are there two data distributions?</strong></p>
<p>This one is a bit philosophical.
Your current approach shows that you have a very strong belief that there are two different setups: working hours and the rest. You're even OK with changing part of your model (the Gaussian) according to it. </p>
<p>So perhaps your data actually comes from two distributions and <strong>you should therefore train two models</strong> and switch between them as appropriate?</p>
<p>Given what you have told us, I would actually go for this one (to have a theoretically sound system). You cannot expect your LSTM to learn that there will be low values on Dec 25. Or that there is a deadline and this weekend consists purely of working hours.</p></li>
<li><p><strong>Or are there two definitions of anomaly?</strong></p>
<p>One more philosophical point. Perhaps you personally consider two different types of anomaly: </p>
<p>A weird temporal trajectory, unexpected peaks, oscillations, whatever is unusual in your domain. Your LSTM supposedly handles these already. </p>
<p>And then there is a different notion of anomaly: a value outside certain bounds in certain time intervals. Perhaps a simple linear regression / small MLP from time to value would do here?</p></li>
<li><p><strong>Let the NN do all the work</strong></p>
<p>Currently, you effectively model the distribution of your quantity in two steps: First, the LSTM provides the mean. Second, you supply the variance.</p>
<p>You might instead let your NN (together with additional 2 affine transformations) directly provide you with a complete Gaussian by producing its mean and variance; much like in Variational AutoEncoders (<a href="https://arxiv.org/pdf/1312.6114.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1312.6114.pdf</a>, appendix C.2). Then, you need to optimize directly the likelihood of your following sample under the NN-distribution, rather than just MSE between the sample and the NN output. </p>
<p>This will allow your model to tell you when it is very strict about the following value and when "any" sample will be OK.</p>
<p>Note, that you can take this approach further and have your NN produce "any" suitable distribution. E.g. if your data live in-/can be sensibly transformed to- a limited domain, you may try to produce a Categorical distribution over the space by having a Softmax on the output, much like WaveNet does (<a href="https://arxiv.org/pdf/1609.03499.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1609.03499.pdf</a>, Section 2.2).</p></li>
</ul> | 2018-05-06 12:50:14.730000+00:00 | 2018-05-16 17:55:32.103000+00:00 | 2018-05-16 17:55:32.103000+00:00 | null | 50,199,225 | <p>I am implementing an anomaly detection system that will be used on different time series (one observation every 15 min for a total of 5 months). All these time series have a common pattern: high levels during working hours and low levels otherwise.</p>
<p>The idea presented in many papers is the following: build a model to predict future values and calculate an anomaly score based on the residuals.</p>
<p><strong>What I have so far</strong></p>
<p>I use an LSTM to predict the next time step given the previous 96 (1 day of observations) and then I calculate the anomaly score as the likelihood that the residuals come from one of the two normal distributions fitted on the residuals obtained with the validation test. I am using two different distributions, one for working hours and one for non working hours.</p>
<p>The model detects point anomalies very well, such as sudden falls and peaks, but it fails during holidays, for example.</p>
<p>If a holiday falls during the week, I expect my model to detect more anomalies, because it is an unusual daily pattern compared to a normal working day.
But the predictions simply follow the previous observations.</p>
<p><strong>My solution</strong> </p>
<p>Use a second and more lightweight model (based on time series decomposition) which is fed with <em>daily</em> aggregations instead of 15min aggregations to detect <em>daily</em> anomalies.</p>
<p><strong>The question</strong></p>
<p>This combination of two models lets me catch both kinds of anomalies and it works very well, but my idea was to use only one model because I expected the LSTM to be able to "learn" the weekly pattern as well. Instead it strictly follows the previous time steps without taking into account that it is a working hour and the level should be much higher.
I tried adding exogenous variables to the input (hour of day, day of week) and adding layers and cells, but the situation did not improve much.</p>
<p>Any consideration is appreciated.
Thank you</p> | 2018-05-06 11:37:20.750000+00:00 | 2018-05-16 17:55:32.103000+00:00 | 2018-05-06 11:43:16.140000+00:00 | python|lstm|prediction|rnn|anomaly-detection | ['https://en.wikipedia.org/wiki/Time_delay_neural_network', 'https://www.semanticscholar.org/paper/Residual-Memory-Networks-in-Language-Modeling:-the-Benes-Baskar/2ee7ee38745e9fcf89860dfb3d41c2155521e3a3', 'https://deepmind.com/blog/wavenet-generative-model-raw-audio/', 'https://github.com/basveeling/wavenet', 'https://github.com/usernaamee/keras-wavenet/blob/master/simple-generative-model.py', 'https://arxiv.org/abs/1402.3511', 'https://arxiv.org/pdf/1312.6114.pdf', 'https://arxiv.org/pdf/1609.03499.pdf'] | 8 |
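To make the "let the NN do all the work" suggestion from the answer above concrete, here is a plain-numpy sketch of the Gaussian negative log-likelihood that would replace MSE. The network would output <code>mu</code> and <code>log_var</code> per timestep; this is an illustrative formulation, not the original poster's code:

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    # Negative log-likelihood of y under N(mu, exp(log_var)).
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var)
                  + np.log(2.0 * np.pi))

# The model can now be "strict" (small variance) during working hours
# and permissive (large variance) elsewhere; the anomaly score for a
# new observation is simply its NLL under the predicted distribution.
```

Note how a large predicted variance makes a large residual cheap, which is exactly the behaviour the two hand-fitted normal distributions were approximating.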
38,711,403 | <p>Truncated back-propagation aims to speed up the learning of sequences (e.g. with LSTMs) by computing approximate gradients on "short" subsequences instead of on full sequences. I guess this is what the documentation means by making "the learning process tractable".</p>
<p>This method seems to originate from the work of Mikolov on "Statistical Language Models based on Neural Networks" (his thesis). And as explained by Alex Graves in a <a href="http://arxiv.org/abs/1308.0850" rel="nofollow">well-cited paper</a> (page 9):</p>
<blockquote>
<p>This form of truncated back propagation has been considered before for RNN language modelling [23], and found to speed up training (by reducing the sequence length and hence increasing the frequency of stochastic weight updates) without affecting the network’s ability to learn long-range dependencies.</p>
</blockquote>
<p>[23] is Mikolov's thesis.</p>
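Mechanically, the trick amounts to cutting each training sequence into fixed-length chunks and letting gradients flow only within a chunk, while the hidden state is still carried across chunk boundaries. A framework-agnostic sketch (the RNN step is a placeholder, and <code>num_steps</code> plays the role of TensorFlow's unroll length):

```python
def truncated_bptt_chunks(sequence, num_steps):
    # Split a long sequence into windows of length num_steps; each
    # window is one forward/backward pass, and the hidden state is
    # carried from one window into the next.
    return [sequence[i:i + num_steps]
            for i in range(0, len(sequence), num_steps)]

# for chunk in truncated_bptt_chunks(tokens, 20):
#     state, loss = rnn_step(chunk, state)   # placeholder step
#     loss.backprop()  # gradients stop at the chunk boundary
```

The state carried forward preserves information across chunks, but no gradient ever crosses a chunk boundary, which is why very long-range dependencies may be learned only indirectly.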
<hr>
<p>In short, truncated back-propagation is a "trick" to speed up learning over sequences, without losing (too much) significant information. Note that truncating too much can backfire (i.e. lose significant information).</p> | 2016-08-02 03:41:22.567000+00:00 | 2016-08-03 06:27:34.257000+00:00 | 2016-08-03 06:27:34.257000+00:00 | null | 38,689,863 | <p>I'm learning a tensorflow tutorial about LSTM: <a href="https://www.tensorflow.org/versions/master/tutorials/recurrent/index.html#truncated-backpropagation" rel="nofollow">Truncated Backpropagation</a>. </p>
<p>This section says the code uses "truncated backpropagation", so what exactly does this mean?</p> | 2016-08-01 01:27:58.490000+00:00 | 2020-08-06 07:14:52.530000+00:00 | null | tensorflow|recurrent-neural-network|lstm | ['http://arxiv.org/abs/1308.0850'] | 1 |
54,731,648 | <p>Let's say you would like to add <a href="https://arxiv.org/abs/1710.05941" rel="nofollow noreferrer"><code>swish</code></a> or <a href="https://arxiv.org/abs/1606.08415" rel="nofollow noreferrer"><code>gelu</code></a> to Keras; the previous methods are nice inline insertions. But you could also insert them into the set of Keras activation functions, so that you can call your custom function as you would call <code>ReLU</code>. I tested this with keras 2.2.2 (any v2 would do). Append to the file <code>$HOME/anaconda2/lib/python2.7/site-packages/keras/activations.py</code> the definition of your custom function (the path can differ depending on your Python and Anaconda versions).</p>
<p>In keras internal:</p>
<pre><code>$HOME/anaconda2/lib/python2.7/site-packages/keras/activations.py
def swish(x, alpha=1.0, beta=1.0):
    return alpha * x * K.sigmoid(beta * x)
</code></pre>
<p>Then in your python file:</p>
<pre><code>$HOME/Documents/neural_nets.py
model = Sequential()
model.add(Activation('swish'))
</code></pre> | 2019-02-17 09:03:53.297000+00:00 | 2019-02-17 09:03:53.297000+00:00 | null | null | 43,915,482 | <p>Sometimes the default <a href="https://keras.io/activations/" rel="noreferrer">standard activations</a> like ReLU, tanh, softmax, ... and the <a href="https://keras.io/layers/advanced-activations/" rel="noreferrer">advanced activations</a> like LeakyReLU aren't enough. And it might also not be in <a href="https://github.com/farizrahman4u/keras-contrib" rel="noreferrer">keras-contrib</a>.</p>
<p>How do you create your own activation function?</p> | 2017-05-11 12:30:20.230000+00:00 | 2021-01-04 18:24:41.620000+00:00 | 2019-09-03 14:14:39.777000+00:00 | python|keras|keras-layer | ['https://arxiv.org/abs/1710.05941', 'https://arxiv.org/abs/1606.08415'] | 2 |
71,111,984 | <p>Add the Tailwind class <code>z-10</code> to the navbar container to give the navbar a higher z-index, so it stacks above the MathJax output.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><link href="https://unpkg.com/tailwindcss@^2/dist/tailwind.min.css" rel="stylesheet"
<div class="flex flex-col">
<!-- navbar -->
<div class="fixed inset-x-0 left-0 right-0 w-full text-gray-700 bg-white dark-mode:text-gray-200 dark-mode:bg-gray-800 z-10">
<div x-data="{ open: false }" class="flex flex-col max-w-screen-xl px-4 mx-auto md:items-center md:justify-between md:flex-row md:px-6 lg:px-8">
<div class="p-4 flex flex-row items-center justify-between"> <a href="#" class="text-lg font-semibold tracking-widest text-gray-900 uppercase rounded-lg dark-mode:text-white focus:outline-none focus:shadow-outline">Sachchidanand Prasad</a>
<button class="md:hidden rounded-lg focus:outline-none focus:shadow-outline" @click="open = !open">
<svg fill="currentColor" viewBox="0 0 20 20" class="w-6 h-6">
<path x-show="!open" fill-rule="evenodd" d="M3 5a1 1 0 011-1h12a1 1 0 110 2H4a1 1 0 01-1-1zM3 10a1 1 0 011-1h12a1 1 0 110 2H4a1 1 0 01-1-1zM9 15a1 1 0 011-1h6a1 1 0 110 2h-6a1 1 0 01-1-1z" clip-rule="evenodd"></path>
<path x-show="open" fill-rule="evenodd" d="M4.293 4.293a1 1 0 011.414 0L10 8.586l4.293-4.293a1 1 0 111.414 1.414L11.414 10l4.293 4.293a1 1 0 01-1.414 1.414L10 11.414l-4.293 4.293a1 1 0 01-1.414-1.414L8.586 10 4.293 5.707a1 1 0 010-1.414z" clip-rule="evenodd"></path>
</svg>
</button>
</div>
<nav :class="{'flex': open, 'hidden': !open}" class="flex-col flex-grow pb-4 md:pb-0 hidden md:flex md:justify-end md:flex-row"> <a class="px-4 py-2 mt-2 text-sm font-semibold bg-transparent rounded-lg dark-mode:bg-gray-700 dark-mode:hover:bg-gray-600 dark-mode:focus:bg-gray-600 dark-mode:focus:text-white dark-mode:hover:text-white dark-mode:text-gray-200 md:mt-0 hover:text-gray-900 focus:text-gray-900 hover:bg-gray-200 focus:bg-gray-200 focus:outline-none focus:shadow-outline" href="index.html">
Home
</a> <a class="px-4 py-2 mt-2 text-sm font-semibold bg-gray-200 text-gray-900 rounded-lg dark-mode:bg-transparent dark-mode:hover:bg-gray-600 dark-mode:focus:bg-gray-600 dark-mode:focus:text-white dark-mode:hover:text-white dark-mode:text-gray-200 md:mt-0 md:ml-4 hover:text-gray-900 focus:text-gray-900 hover:bg-gray-200 focus:bg-gray-200 focus:outline-none focus:shadow-outline" href="research.html">
Research
</a> <a class="px-4 py-2 mt-2 text-sm font-semibold bg-transparent rounded-lg dark-mode:bg-transparent dark-mode:hover:bg-gray-600 dark-mode:focus:bg-gray-600 dark-mode:focus:text-white dark-mode:hover:text-white dark-mode:text-gray-200 md:mt-0 md:ml-4 hover:text-gray-900 focus:text-gray-900 hover:bg-gray-200 focus:bg-gray-200 focus:outline-none focus:shadow-outline" href="cv.html">
CV
</a> </nav>
</div>
</div>
<!-- section -->
<section class="text-gray-600 body-font">
<div class="container lg:px-36 py-24 mx-auto">
<div class="flex flex-col items-center text-center justify-center">
<h2 class="text-gray-900 text-3xl title-font font-medium">Research</h2>
<div class="w-24 h-1 bg-indigo-500 rounded mt-2 mb-4"></div>
</div>
<div class="flex flex-wrap sm:-m-4 -mx-4 -mb-10 -mt-4 md:space-y-0 space-y-6">
<div class="p-4 md:w-full flex">
<div class="flex-grow lg:px-32 px-6">
<p class="mt-4 leading-relaxed text-lg text-justify"> My area of interests include <em>differential geometry</em>, <em>differential topology</em> and <em>algebraic topology</em>. More specifically, I am working on <em class="text-green-600">cut locus of a submanifold</em>. For a given Riemannian manifold $M$ and $N\subset M$ the <em class="text-green-600">cut locus of $N$</em>, $\mathrm{Cu}(N)$, is the collection of points $q\in M$ such that extension of a distance minimal geodesic joining $N$ to $q$ is no longer minimal. Here by the <em class="text-green-600">distance minimal geodesic $\gamma$ joining $N$ to $q$</em> we mean that there exists $p\in N$ such that the length of $\gamma$ from $p$ to $q$ is same as the distance from $N$ to $q$. In my recent <a href="#publications" class="text-blue-500 hover:text-blue-800">paper</a> (joint with Dr Somnath Basu) we have discussed the cut locus of a closed submanifold and described the relation between cut locus, Thom spaces and Morse-Bott functions. </p>
<p class="mt-4 leading-relaxed text-lg text-justify"> Currently, I am working on the cut locus of a quotient manifold and application to the classifying spaces. </p>
<p class="mt-4 leading-relaxed text-lg text-justify:"> </p>
<h2 class="mt-4 mb-4 text-gray-900 text-xl title-font font-medium" id="publications">Publications and Preprints</h2>
<ol class="pl-8 list-decimal">
<li class="text-lg"> <em>A connection between cut locus, Thom spaces and Morse-Bott functions</em> (joint with Dr Somnath Basu) <a href="ml-8 https://arxiv.org/abs/2011.02972" target="_blank" class="text-blue-500 hover:text-blue-800">ArXiv link</a>
<div class="mt-2 ml-4 border-l-2 border-indigo-500 hidden md:block xs:hidden">
<p class="pl-2 leading-relaxed text-base text-gray-500 text-justify"> <span class="text-gray-900">Abstract:</span> Associated to every closed, embedded submanifold $N$ in a connected Riemannian manifold $M$, there is the distance function $d_N$ which measures the distance of a point in $M$ from $N$. We analyze the square of this function and show that it is Morse-Bott on the complement of the cut locus $\mathrm{Cu}(N)$ of $N$, provided $M$ is complete. Moreover, the gradient flow lines provide a deformation retraction of $M-\mathrm{Cu}(N)$ to $N$. If $M$ is a closed manifold, then we prove that the Thom space of the normal bundle of $N$ is homeomorphic to $M/\mathrm{N}$. We also discuss several interesting results which are either applications of these or related observations regarding the theory of cut locus. These results include, but are not limited to, a computation of the local homology of singular matrices, a classification of the homotopy type of the cut locus of a homology sphere inside a sphere, a deformation of the indefinite unitary group $U(p,q)$ to $U(p)\times U(q)$ and a geometric deformation of $GL(n,\mathbb{R})$ to $O(n,\mathbb{R})$ which is different from the Gram-Schmidt retraction. </p>
</div>
</li>
</ol>
</div>
</div>
</div>
</div>
</section>
<!-- footer -->
<footer class="foot mt-auto border-t border-gray-200">
<div class=" container flex flex-col flex-wrap px-4 py-1 mx-auto md:items-center lg:items-start md:flex-row md:flex-nowrap">
<div class="justify-between w-full mt-4 text-center lg:flex">
<div class="w-full px-4 lg:w-1/3 md:w-1/2">
<h2 class="mb-2 font-bold tracking-widest text-gray-900">
Useful Links
</h2>
<ul class="mb-8 space-y-2 text-sm list-none">
<li> <a class="text-gray-600 hover:text-gray-800" href="https://mathscinet.ams.org/mathscinet/" target="_blank">MathSciNet</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="https://www.ams.org/open-math-notes" target="_blank">AMS open Math Notes</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="https://mtts.org.in/" target="_blank">MTTS</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="https://www.atmschools.org/" target="_blank">ATM School</a> </li>
</ul>
</div>
<div class="w-full px-4 lg:w-1/3 md:w-1/2">
<h2 class="mb-2 font-bold tracking-widest text-gray-900">
Useful Links
</h2>
<ul class="mb-8 space-y-2 text-sm list-none">
<li> <a class="text-gray-600 hover:text-gray-800" href="http://www.nbhm.dae.gov.in/" target="_blank">NBHM</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800">About Us</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800">Blogs</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800">Contact Us</a> </li>
</ul>
</div>
<div class="w-full px-4 lg:w-1/3 md:w-1/2">
<h2 class="mb-2 font-bold tracking-widest text-gray-900">
Social Networks
</h2>
<ul class="mb-8 space-y-2 text-sm list-none">
<li> <a class="text-gray-600 hover:text-gray-800" href="">Facebook</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="">Twitter</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="">Instagram</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="">Github</a> </li>
</ul>
</div>
</div>
</div>
<div class="flex justify-center">
<p class="text-base text-gray-400"> All rights reserved by @ <a class="text-gray-400 hover:text-gray-800" href="index.html">Sachchidanand</a> 2022 </p>
</div>
</footer>
</div>
<script src="https://cdn.jsdelivr.net/gh/alpinejs/[email protected]/dist/alpine.min.js" defer></script>
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<script>
MathJax = {
tex: {
inlineMath: [
['$', '$'],
['\\(', '\\)']
],
displayMath: [
['$$', '$$'],
['\\[', '\\]']
]
},
svg: {
fontCache: 'global'
}
};
</script></code></pre>
</div>
</div>
</p> | 2022-02-14 12:33:07.090000+00:00 | 2022-02-14 12:33:07.090000+00:00 | null | null | 71,111,797 | <p>Sorry to ask a question again as I asked a problem in the morning also.</p>
<p>This time I am facing an issue with Mathjax text. As my navbar is fixed, my texts are going inside the navbar whereas the math expressions are going above the navbar. This is annoying me from today's morning itself.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><link href="https://unpkg.com/tailwindcss@^2/dist/tailwind.min.css" rel="stylesheet"
<div class="flex flex-col h-screen">
<!-- navbar -->
<div class="fixed inset-x-0 left-0 right-0 w-full text-gray-700 bg-white dark-mode:text-gray-200 dark-mode:bg-gray-800">
<div x-data="{ open: false }" class="flex flex-col max-w-screen-xl px-4 mx-auto md:items-center md:justify-between md:flex-row md:px-6 lg:px-8">
<div class="p-4 flex flex-row items-center justify-between"> <a href="#" class="text-lg font-semibold tracking-widest text-gray-900 uppercase rounded-lg dark-mode:text-white focus:outline-none focus:shadow-outline">Sachchidanand Prasad</a>
<button class="md:hidden rounded-lg focus:outline-none focus:shadow-outline" @click="open = !open">
<svg fill="currentColor" viewBox="0 0 20 20" class="w-6 h-6">
<path x-show="!open" fill-rule="evenodd" d="M3 5a1 1 0 011-1h12a1 1 0 110 2H4a1 1 0 01-1-1zM3 10a1 1 0 011-1h12a1 1 0 110 2H4a1 1 0 01-1-1zM9 15a1 1 0 011-1h6a1 1 0 110 2h-6a1 1 0 01-1-1z" clip-rule="evenodd"></path>
<path x-show="open" fill-rule="evenodd" d="M4.293 4.293a1 1 0 011.414 0L10 8.586l4.293-4.293a1 1 0 111.414 1.414L11.414 10l4.293 4.293a1 1 0 01-1.414 1.414L10 11.414l-4.293 4.293a1 1 0 01-1.414-1.414L8.586 10 4.293 5.707a1 1 0 010-1.414z" clip-rule="evenodd"></path>
</svg>
</button>
</div>
<nav :class="{'flex': open, 'hidden': !open}" class="flex-col flex-grow pb-4 md:pb-0 hidden md:flex md:justify-end md:flex-row"> <a class="px-4 py-2 mt-2 text-sm font-semibold bg-transparent rounded-lg dark-mode:bg-gray-700 dark-mode:hover:bg-gray-600 dark-mode:focus:bg-gray-600 dark-mode:focus:text-white dark-mode:hover:text-white dark-mode:text-gray-200 md:mt-0 hover:text-gray-900 focus:text-gray-900 hover:bg-gray-200 focus:bg-gray-200 focus:outline-none focus:shadow-outline" href="index.html">
Home
</a> <a class="px-4 py-2 mt-2 text-sm font-semibold bg-gray-200 text-gray-900 rounded-lg dark-mode:bg-transparent dark-mode:hover:bg-gray-600 dark-mode:focus:bg-gray-600 dark-mode:focus:text-white dark-mode:hover:text-white dark-mode:text-gray-200 md:mt-0 md:ml-4 hover:text-gray-900 focus:text-gray-900 hover:bg-gray-200 focus:bg-gray-200 focus:outline-none focus:shadow-outline" href="research.html">
Research
</a> <a class="px-4 py-2 mt-2 text-sm font-semibold bg-transparent rounded-lg dark-mode:bg-transparent dark-mode:hover:bg-gray-600 dark-mode:focus:bg-gray-600 dark-mode:focus:text-white dark-mode:hover:text-white dark-mode:text-gray-200 md:mt-0 md:ml-4 hover:text-gray-900 focus:text-gray-900 hover:bg-gray-200 focus:bg-gray-200 focus:outline-none focus:shadow-outline" href="cv.html">
CV
</a> </nav>
</div>
</div>
<!-- section -->
<section class="text-gray-600 body-font">
<div class="container lg:px-36 py-24 mx-auto">
<div class="flex flex-col items-center text-center justify-center">
<h2 class="text-gray-900 text-3xl title-font font-medium">Research</h2>
<div class="w-24 h-1 bg-indigo-500 rounded mt-2 mb-4"></div>
</div>
<div class="flex flex-wrap sm:-m-4 -mx-4 -mb-10 -mt-4 md:space-y-0 space-y-6">
<div class="p-4 md:w-full flex">
<div class="flex-grow lg:px-32 px-6">
<p class="mt-4 leading-relaxed text-lg text-justify"> My area of interests include <em>differential geometry</em>, <em>differential topology</em> and <em>algebraic topology</em>. More specifically, I am working on <em class="text-green-600">cut locus of a submanifold</em>. For a given Riemannian manifold $M$ and $N\subset M$ the <em class="text-green-600">cut locus of $N$</em>, $\mathrm{Cu}(N)$, is the collection of points $q\in M$ such that extension of a distance minimal geodesic joining $N$ to $q$ is no longer minimal. Here by the <em class="text-green-600">distance minimal geodesic $\gamma$ joining $N$ to $q$</em> we mean that there exists $p\in N$ such that the length of $\gamma$ from $p$ to $q$ is same as the distance from $N$ to $q$. In my recent <a href="#publications" class="text-blue-500 hover:text-blue-800">paper</a> (joint with Dr Somnath Basu) we have discussed the cut locus of a closed submanifold and described the relation between cut locus, Thom spaces and Morse-Bott functions. </p>
<p class="mt-4 leading-relaxed text-lg text-justify"> Currently, I am working on the cut locus of a quotient manifold and application to the classifying spaces. </p>
<p class="mt-4 leading-relaxed text-lg text-justify:"> </p>
<h2 class="mt-4 mb-4 text-gray-900 text-xl title-font font-medium" id="publications">Publications and Preprints</h2>
<ol class="pl-8 list-decimal">
<li class="text-lg"> <em>A connection between cut locus, Thom spaces and Morse-Bott functions</em> (joint with Dr Somnath Basu) <a href="ml-8 https://arxiv.org/abs/2011.02972" target="_blank" class="text-blue-500 hover:text-blue-800">ArXiv link</a>
<div class="mt-2 ml-4 border-l-2 border-indigo-500 hidden md:block xs:hidden">
<p class="pl-2 leading-relaxed text-base text-gray-500 text-justify"> <span class="text-gray-900">Abstract:</span> Associated to every closed, embedded submanifold $N$ in a connected Riemannian manifold $M$, there is the distance function $d_N$ which measures the distance of a point in $M$ from $N$. We analyze the square of this function and show that it is Morse-Bott on the complement of the cut locus $\mathrm{Cu}(N)$ of $N$, provided $M$ is complete. Moreover, the gradient flow lines provide a deformation retraction of $M-\mathrm{Cu}(N)$ to $N$. If $M$ is a closed manifold, then we prove that the Thom space of the normal bundle of $N$ is homeomorphic to $M/\mathrm{N}$. We also discuss several interesting results which are either applications of these or related observations regarding the theory of cut locus. These results include, but are not limited to, a computation of the local homology of singular matrices, a classification of the homotopy type of the cut locus of a homology sphere inside a sphere, a deformation of the indefinite unitary group $U(p,q)$ to $U(p)\times U(q)$ and a geometric deformation of $GL(n,\mathbb{R})$ to $O(n,\mathbb{R})$ which is different from the Gram-Schmidt retraction. </p>
</div>
</li>
</ol>
</div>
</div>
</div>
</div>
</section>
<!-- footer -->
<footer class="foot mt-auto border-t border-gray-200">
<div class=" container flex flex-col flex-wrap px-4 py-1 mx-auto md:items-center lg:items-start md:flex-row md:flex-nowrap">
<div class="justify-between w-full mt-4 text-center lg:flex">
<div class="w-full px-4 lg:w-1/3 md:w-1/2">
<h2 class="mb-2 font-bold tracking-widest text-gray-900">
Useful Links
</h2>
<ul class="mb-8 space-y-2 text-sm list-none">
<li> <a class="text-gray-600 hover:text-gray-800" href="https://mathscinet.ams.org/mathscinet/" target="_blank">MathSciNet</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="https://www.ams.org/open-math-notes" target="_blank">AMS open Math Notes</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="https://mtts.org.in/" target="_blank">MTTS</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="https://www.atmschools.org/" target="_blank">ATM School</a> </li>
</ul>
</div>
<div class="w-full px-4 lg:w-1/3 md:w-1/2">
<h2 class="mb-2 font-bold tracking-widest text-gray-900">
Useful Links
</h2>
<ul class="mb-8 space-y-2 text-sm list-none">
<li> <a class="text-gray-600 hover:text-gray-800" href="http://www.nbhm.dae.gov.in/" target="_blank">NBHM</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800">About Us</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800">Blogs</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800">Contact Us</a> </li>
</ul>
</div>
<div class="w-full px-4 lg:w-1/3 md:w-1/2">
<h2 class="mb-2 font-bold tracking-widest text-gray-900">
Social Networks
</h2>
<ul class="mb-8 space-y-2 text-sm list-none">
<li> <a class="text-gray-600 hover:text-gray-800" href="">Facebook</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="">Twitter</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="">Instagram</a> </li>
<li> <a class="text-gray-600 hover:text-gray-800" href="">Github</a> </li>
</ul>
</div>
</div>
</div>
<div class="flex justify-center">
<p class="text-base text-gray-400"> All rights reserved by @ <a class="text-gray-400 hover:text-gray-800" href="index.html">Sachchidanand</a> 2022 </p>
</div>
</footer>
</div>
<script src="https://cdn.jsdelivr.net/gh/alpinejs/[email protected]/dist/alpine.min.js" defer></script>
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<script>
MathJax = {
tex: {
inlineMath: [
['$', '$'],
['\\(', '\\)']
],
displayMath: [
['$$', '$$'],
['\\[', '\\]']
]
},
svg: {
fontCache: 'global'
}
};
</script></code></pre>
</div>
</div>
</p>
<p><a href="https://i.stack.imgur.com/Mq9o6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mq9o6.png" alt="enter image description here" /></a></p>
<p>We can see in the above image that the math text s over the navbar. I don't know what exactly the problem is. Any help will be appreciated.</p> | 2022-02-14 12:18:21.903000+00:00 | 2022-02-14 12:33:07.090000+00:00 | null | javascript|html|css|tailwind-css|mathjax | [] | 0 |
48,902,301 | <p>Two straightforward ways to handle this are:</p>
<ul>
<li>Oversample the minority class or undersample the majority class to create a training set with balanced class sizes. Since you have so many instances to play with, I'd have thought undersampling should be fine i.e. use all the positive-class instances and a randomly chosen subset of the negative-class instances, but if you have less data then the oversampling technique <a href="https://arxiv.org/pdf/1106.1813.pdf" rel="nofollow noreferrer">SMOTE</a> is popular.</li>
<li>Use the <code>Prior</code> argument to <code>fitctree</code> to specify the prior probability of each class, rather than allow <code>fitctree</code> to calculate it from the class sizes (the default):</li>
</ul>
<blockquote>
<p><strong><code>'Prior'</code> — Prior probabilities</strong><br>
<code>'empirical' (default) | 'uniform' | vector of scalar values | structure</code> </p>
<p>Prior probabilities for each
class, specified as the comma-separated pair consisting of <code>'Prior'</code> and
one of the following.</p>
<p>A character vector: 'empirical' determines class probabilities from
class frequencies in Y. If you pass observation weights, fitctree uses
the weights to compute the class probabilities. 'uniform' sets all
class probabilities equal.<br>
...</p>
</blockquote>
<p>In your case simply adding <code>'Prior', 'uniform'</code> to the arguments to <code>fitctree</code> might do the trick.</p>
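<p>The undersampling idea from the first bullet is language-agnostic; here is a minimal sketch in Python/NumPy (the array names and class sizes are hypothetical, not taken from the question):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real data: 100 positive (minority) and
# 900 negative (majority) instances with 4 features each.
X = rng.normal(size=(1000, 4))
y = np.array([1] * 100 + [0] * 900)

pos_idx = np.flatnonzero(y == 1)
neg_idx = np.flatnonzero(y == 0)

# Keep every minority instance; draw an equally sized random subset of the majority.
neg_sample = rng.choice(neg_idx, size=pos_idx.size, replace=False)
keep = np.concatenate([pos_idx, neg_sample])
X_bal, y_bal = X[keep], y[keep]
```

<p>The balanced <code>X_bal</code>/<code>y_bal</code> would then be used for training, while evaluation stays on the original class distribution.</p>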
<p>Whichever option you choose, remember that the test set(s) you use for evaluating the performance of the classifier should have the real distribution of positive and negative class (or, the distribution you expect to encounter in real data).</p> | 2018-02-21 09:28:33.403000+00:00 | 2018-02-21 09:28:33.403000+00:00 | null | null | 48,817,307 | <p>I'm trying to build a decision tree in MATLAB for binary classification. I have 4 features for each instance. There are around 25,000 instances in the positive class and 350,000 instances in the negative class. </p>
<p>I've tried building classifiers both within the classification learner app and using fitctree but both seem to just identify everything as the negative class. I'm guessing that MATLAB is structuring the tree to yield the highest "accuracy."</p>
<p>Is there a way to structure the decision tree towards a more sensitive model? (i.e. rather than "accuracy", can I use fitctree to build a model where sensitivity is at 70/80/90% or where sensitivity and specificity are similar?)</p> | 2018-02-15 22:24:53.660000+00:00 | 2018-02-21 09:28:33.403000+00:00 | 2018-02-15 22:25:41.590000+00:00 | matlab|classification|decision-tree|feature-selection | ['https://arxiv.org/pdf/1106.1813.pdf'] | 1 |
69,477,164 | <p>You might need to distinguish between a BERT model (the architecture) and a pre-trained BERT model. The former can definitely support emoji; the latter will only have reserved code points for them if they were in the data that was used to create the WordPiece tokenizer.</p>
<p>Here is <a href="http://juditacs.github.io/2019/02/19/bert-tokenization-stats.html" rel="nofollow noreferrer">an analysis of the 119,547-token WordPiece vocab used in the HuggingFace multilingual model</a>. It does not mention emoji. Note that 119K is very large for a vocab; more typical sizes are 8K, 16K or 32K. The vocab size has quite a big influence on the model size: the first and last layers of a Transformer (e.g. BERT) model have far more weights than any other layer.</p>
<p>I've just been skimming how the paper <a href="https://arxiv.org/abs/1910.13793" rel="nofollow noreferrer">Time to Take Emoji Seriously: They Vastly Improve Casual Conversational Models</a> deals with it. They append 3267 emoji to the end of the vocabulary. Then train it on some data with emoji in so it can try and learn what to do with those new characters.</p>
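<p>The vocabulary-extension idea can be sketched in plain Python (a toy WordPiece-style vocab, not the actual HuggingFace implementation; with the <code>transformers</code> library you would pair <code>tokenizer.add_tokens(...)</code> with <code>model.resize_token_embeddings(len(tokenizer))</code>):</p>

```python
# Toy sketch: appending emoji to a WordPiece-style vocab gives them fresh ids
# past the original vocabulary; any token still absent falls back to [UNK].
vocab = {"[UNK]": 0, "[CLS]": 1, "[SEP]": 2, "the": 3, "cat": 4}

for e in ["😀", "😂", "🙃"]:
    vocab.setdefault(e, len(vocab))  # matching new rows would be appended to the embedding matrix

def token_to_id(tok):
    return vocab.get(tok, vocab["[UNK]"])

ids = [token_to_id(t) for t in ["the", "cat", "😀", "🚀"]]  # 🚀 was never added
```

<p>Here the three added emoji get ids 5–7, while the never-added 🚀 maps back to <code>[UNK]</code> (id 0), which is exactly the behaviour described in the question.</p>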
<p>BTW, a search of the HuggingFace github repository found they are using <code>from emoji import demojize</code>. This sounds like they convert emoji into text. Depending on what you are doing, you might need to disable it, or conversely you might need to be using that in your pipeline.</p> | 2021-10-07 07:38:42.273000+00:00 | 2021-10-07 07:38:42.273000+00:00 | null | null | 69,465,778 | <p>My research interest is effect of emojis in text. I am trying to classify sarcastic tweets in text. A month ago I have used a dataset where I added the tokens using:</p>
<blockquote>
<p>tokenizer.add_tokens('List of Emojis').</p>
</blockquote>
<p>So when I tested it, the BERT model had successfully added the tokens. But two days ago, when I did the same thing for another dataset, the BERT model categorized them as 'UNK' tokens. My question is: has there been a recent change in the BERT model? I have tried it with the following tokenizer,</p>
<blockquote>
<p>BertTokenizer.from_pretrained('bert-base-uncased')</p>
</blockquote>
<p>The same happens for DistilBERT. It does not recognize the emojis despite my explicitly adding them. I had read somewhere that there is no need to add them to the tokenizer because BERT or DistilBERT already has those emojis among its 30,000 tokens, but I tried both ways, with and without adding them. In both cases it does not recognize the emojis.</p>
<p>What can I do to solve this issue. Your thoughts on this would be appreciated.</p> | 2021-10-06 12:32:09.890000+00:00 | 2021-10-07 07:38:42.273000+00:00 | null | python|nlp|sentiment-analysis | ['http://juditacs.github.io/2019/02/19/bert-tokenization-stats.html', 'https://arxiv.org/abs/1910.13793'] | 2 |
61,682,329 | <blockquote>
<ol start="4">
<li>Increased the number of rounds to 5 ...</li>
</ol>
</blockquote>
<p>Running only a few <em>rounds</em> of federated learning sounds insufficient. One of the earliest Federated Averaging papers (<a href="https://arxiv.org/abs/1602.05629" rel="nofollow noreferrer">McMahan 2016</a>) required running for hundreds of rounds when the MNIST data had non-iid splits. More recently (<a href="https://arxiv.org/abs/2003.00295" rel="nofollow noreferrer">Reddi 2020</a>) required thousands of <em>rounds</em> for CIFAR-100. One thing to note is that each "round" is one "step" of the global model. That step may be larger with more client epochs, but these are averaged and diverging clients may reduce the magnitude of the global step.</p>
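<p>The "one round = one global step" point can be seen in a toy Federated Averaging sketch (plain NumPy, scalar model; the client data and learning rates below are made up for illustration):</p>

```python
import numpy as np

def client_update(w, data, lr=0.1, epochs=5):
    # Each client runs a few local gradient steps on its own data
    # (toy objective: mean squared distance to the client's points).
    for _ in range(epochs):
        grad = np.mean(2 * (w - data))
        w = w - lr * grad
    return w

def fedavg_round(w_global, client_datasets):
    # One federated round: clients train locally from the same global model,
    # then the server averages the results -- a single (averaged) global step.
    return np.mean([client_update(w_global, d) for d in client_datasets])

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]  # deliberately non-iid
w = 0.0
for _ in range(50):  # even this toy problem takes a number of rounds to converge
    w = fedavg_round(w, clients)
```

<p>After the loop <code>w</code> sits at 2.5, the optimum of the pooled data; progress toward it happens only round by round, which is why stopping after a handful of rounds can leave a real model essentially untrained.</p>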
<blockquote>
<p>I also save the tf keras model weights after each round and make predictions on the test set - no changes.</p>
</blockquote>
<p>This can be concerning. It will be easier to debug if you could share the code used in the FL training loop.</p> | 2020-05-08 15:16:21.190000+00:00 | 2020-05-11 15:28:48.090000+00:00 | 2020-05-11 15:28:48.090000+00:00 | null | 61,677,696 | <p>I have followed <a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification" rel="nofollow noreferrer">this emnist tutorial</a> to create an image classification experiment (7 classes) with the aim of training a classifier on 3 silos of data with the TFF framework.</p>
<p>Before training begins, I convert the model to a tf keras model using <code>tff.learning.assign_weights_to_keras_model(model,state.model)</code> to evaluate on my validation set. Regardless of the label, the model only predicts one class. This is to be expected as no training of the model has occurred yet. However, I repeat this step after each federated averaging round and the problem persists. All validation images are predicted to one class. I also save the tf keras model weights after each round and make predictions on the test set - no changes.</p>
<p>Some of the steps I have taken to check the source of the issue:</p>
<ol>
<li>Checked if the tf keras model weights are updating when the FL model is converted after each round - they are updating.</li>
<li>Ensured that the buffer size is greater than the training dataset size for each client.</li>
<li>Compared the predictions to the class distribution in the training datasets. There is a class imbalance but the one class that the model predicts is not necessarily the majority class. Also, it is not always the same class. For the most part, it predicts only class 0.</li>
<li>Increased the number of rounds to 5 and epochs per round to 10. This is computationally very intensive as it is quite a large model being trained with approx 1500 images per client.</li>
<li>Investigated the TensorBoard logs from each training attempt. The training loss is decreasing as the round progresses.</li>
<li>Tried a much simpler model - basic CNN with 2 conv layers. This allowed me to greatly increase the number of epochs and rounds. When evaluating this model on the test set, it predicted 4 different classes but the performance remains very bad. This would indicate that I just would need to increase the number of rounds and epochs for my original model to increase the variation in predictions. This is difficult due the large training time that would be a result.</li>
</ol>
<p>Model details:</p>
<p>The model uses the XceptionNet as the base model with the weights unfrozen. This performs well on the classification task when all the training images are pooled into a global dataset. Our aim is to hopefully achieve a comparable performance with FL.</p>
<pre><code>base_model = Xception(include_top=False,
                      weights=weights,
                      pooling='max',
                      input_shape=input_shape)
x = GlobalAveragePooling2D()( base_model.output )
predictions = Dense( num_classes, activation='softmax' )( x )
model = Model( base_model.input, outputs=predictions )
</code></pre>
<p>Here is my training code:</p>
<pre><code>def fit(self):
    """Train FL model"""
    # self.load_data()
    summary_writer = tf.summary.create_file_writer(
        self.logs_dir
    )
    federated_averaging = self._construct_iterative_process()
    state = federated_averaging.initialize()
    tfkeras_model = self._convert_to_tfkeras_model( state )
    print( np.argmax( tfkeras_model.predict( self.val_data ), axis=-1 ) )
    val_loss, val_acc = tfkeras_model.evaluate( self.val_data, steps=100 )
    with summary_writer.as_default():
        for round_num in tqdm( range( 1, self.num_rounds ), ascii=True, desc="FedAvg Rounds" ):
            print( "Beginning fed avg round..." )
            # Round of federated averaging
            state, metrics = federated_averaging.next(
                state,
                self.training_data
            )
            print( "Fed avg round complete" )
            # Saving logs
            for name, value in metrics._asdict().items():
                tf.summary.scalar(
                    name,
                    value,
                    step=round_num
                )
            print( "round {:2d}, metrics={}".format( round_num, metrics ) )
            tff.learning.assign_weights_to_keras_model(
                tfkeras_model,
                state.model
            )
            # tfkeras_model = self._convert_to_tfkeras_model(
            #     state
            # )
            val_metrics = {}
            val_metrics["val_loss"], val_metrics["val_acc"] = tfkeras_model.evaluate(
                self.val_data,
                steps=100
            )
            for name, metric in val_metrics.items():
                tf.summary.scalar(
                    name=name,
                    data=metric,
                    step=round_num
                )
            self._checkpoint_tfkeras_model(
                tfkeras_model,
                round_num,
                self.checkpoint_dir
            )

def _checkpoint_tfkeras_model(self,
                              model,
                              round_number,
                              checkpoint_dir):
    # Obtaining model dir path
    model_dir = os.path.join(
        checkpoint_dir,
        f'round_{round_number}',
    )
    # Creating directory
    pathlib.Path(
        model_dir
    ).mkdir(
        parents=True
    )
    model_path = os.path.join(
        model_dir,
        f'model_file_round{round_number}.h5'
    )
    # Saving model
    model.save(
        model_path
    )

def _convert_to_tfkeras_model(self, state):
    """Converts global TFF model to a TF Keras model

    Takes the weights of the global model
    and pushes them back into a standard
    Keras model

    Args:
        state: The state of the FL server
            containing the model and
            optimization state

    Returns:
        (model): TF Keras model
    """
    model = self._load_tf_keras_model()
    model.compile(
        loss=self.loss,
        metrics=self.metrics
    )
    tff.learning.assign_weights_to_keras_model(
        model,
        state.model
    )
    return model

def _load_tf_keras_model(self):
    """Loads tf keras models

    Raises:
        KeyError: A model name was not defined
            correctly

    Returns:
        (model): TF keras model object
    """
    model = create_models(
        model_type=self.model_type,
        input_shape=[self.img_h, self.img_w, 3],
        freeze_base_weights=self.freeze_weights,
        num_classes=self.num_classes,
        compile_model=False
    )
    return model

def _define_model(self):
    """Model creation function"""
    model = self._load_tf_keras_model()
    tff_model = tff.learning.from_keras_model(
        model,
        dummy_batch=self.sample_batch,
        loss=self.loss,
        # Using self.metrics throws an error
        metrics=[tf.keras.metrics.CategoricalAccuracy()] )
    return tff_model

def _construct_iterative_process(self):
    """Constructing federated averaging process"""
    iterative_process = tff.learning.build_federated_averaging_process(
        self._define_model,
        client_optimizer_fn=lambda: tf.keras.optimizers.SGD( learning_rate=0.02 ),
        server_optimizer_fn=lambda: tf.keras.optimizers.SGD( learning_rate=1.0 ) )
    return iterative_process
</code></pre> | 2020-05-08 11:05:19.387000+00:00 | 2020-05-28 11:29:27.863000+00:00 | 2020-05-11 09:44:51.353000+00:00 | python|tensorflow|keras|deep-learning|tensorflow-federated | ['https://arxiv.org/abs/1602.05629', 'https://arxiv.org/abs/2003.00295'] | 2 |
59,713,254 | <p>I had the very same question after reading the <a href="https://arxiv.org/abs/1706.03762" rel="noreferrer">Transformer paper</a>. I found no complete and detailed answer to the question in the Internet so I'll try to explain my understanding of Masked Multi-Head Attention.</p>
<p>The short answer is - we need masking to make the training parallel. And the parallelization is good as it allows the model to train faster.</p>
<p>Here's an example explaining the idea. Let's say we train to translate "I love you" to German. The encoder works in parallel mode - it can produce vector representation of the input sequence ("I love you") within a constant number of steps (i.e. the number of steps doesn't depend on the length of the input sequence).</p>
<p>Let's say the encoder produces the numbers <code>11, 12, 13</code> as the vector representations of the input sequence. In reality these vectors will be much longer but for simplicity we use the short ones. Also for simplicity we ignore the service tokens, like - beginning of the sequence, - end of the sequence and others.</p>
<p>During the training we know that the translation should be "Ich liebe dich" (we always know the expected output during the training). Let's say the expected vector representations of the "Ich liebe dich" words are <code>21, 22, 23</code>.</p>
<p>If we make the decoder training in sequential mode, it'll look like the training of the Recurrent Neural Network. The following sequential steps will be performed:</p>
<ul>
<li>Sequential operation #1. Input: <code>11, 12, 13</code>.
<ul>
<li>Trying to predict <code>21</code>.</li>
<li>The predicted output won't be exactly <code>21</code>, let's say it'll be <code>21.1</code>.</li>
</ul></li>
<li>Sequential operation #2. Input: <code>11, 12, 13</code>, and also <code>21.1</code> as the previous output.
<ul>
<li>Trying to predict <code>22</code>.</li>
<li>The predicted output won't be exactly <code>22</code>, let's say it'll be <code>22.3</code>.</li>
</ul></li>
<li>Sequential operation #3. Input <code>11, 12, 13</code>, and also <code>22.3</code> as the previous output.
<ul>
<li>Trying to predict <code>23</code>.</li>
<li>The predicted output won't be exactly <code>23</code>, let's say it'll be <code>23.5</code>.</li>
</ul></li>
</ul>
<p>This means we'll need to make 3 sequential operations (in general case - a sequential operation per each input). Also we'll have an accumulating error on each next iteration. Also we don't use attention as we only look to a single previous output.</p>
<p>As we actually know the expected outputs we can adjust the process and make it parallel. There's no need to wait for the previous step output.</p>
<ul>
<li>Parallel operation #A. Inputs: <code>11, 12, 13</code>.
<ul>
<li>Trying to predict <code>21</code>.</li>
</ul></li>
<li>Parallel operation #B. Inputs: <code>11, 12, 13</code>, and also <code>21</code>.
<ul>
<li>Trying to predict <code>22</code>.</li>
</ul></li>
<li>Parallel operation #C. Inputs: <code>11, 12, 13</code>, and also <code>21, 22</code>.
<ul>
<li>Trying to predict <code>23</code>.</li>
</ul></li>
</ul>
<p>This algorithm can be executed in parallel and also it doesn't accumulate the error. And this algorithm uses attention (i.e. looks to all previous inputs) thus has more information about the context to consider while making the prediction.</p>
<p>And here is where we need the masking. The training algorithm knows the entire expected output (<code>21, 22, 23</code>). It hides (masks) a part of this known output sequence for each of the parallel operations.</p>
<ul>
<li>When it executes #A - it hides (masks) the entire output.</li>
<li>When it executes #B - it hides 2nd and 3rd outputs.</li>
<li>When it executes #C - it hides 3rd output.</li>
</ul>
<p>Masking itself is implemented as the following (from the <a href="https://arxiv.org/abs/1706.03762" rel="noreferrer">original paper</a>): </p>
<blockquote>
<p>We implement this inside of scaled dot-product attention by masking
out (setting to −∞) all values in the input of the softmax which
correspond to illegal connections</p>
</blockquote>
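<p>That quoted rule can be sketched in a few lines of NumPy (an illustrative re-implementation of the idea, not the repository's TensorFlow code):</p>

```python
import numpy as np

def causal_softmax(scores):
    # scores: (T, T) raw attention logits; position i may only attend to j <= i.
    T = scores.shape[0]
    legal = np.tril(np.ones((T, T), dtype=bool))   # lower triangle = legal connections
    masked = np.where(legal, scores, -1e9)         # "-inf" on illegal connections
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

w = causal_softmax(np.zeros((3, 3)))
# Row i spreads attention uniformly over positions 0..i; future positions get weight 0.
```

<p>This is exactly why the lower-triangular matrix appears in the code below: the softmax assigns zero weight to every "future" (upper-triangular) position.</p>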
<p>Note: during the inference (not training) the decoder works in the sequential (not parallel) mode as it doesn't know the output sequence initially. But it's different from RNN approach as Transformer inference still uses self-attention and looks at all previous outputs (but not only the very previous one).</p>
<p>Note 2: I've seen in some materials that masking can be used differently for non-translation applications. For example, for language modeling the masking can be used to hide some words from the input sentence and the model will try to predict them during the training using other, non-masked words (i.e. learn to understand the context).</p> | 2020-01-13 08:55:22.800000+00:00 | 2020-01-13 13:53:19+00:00 | 2020-01-13 13:53:19+00:00 | null | 58,127,059 | <p>I'm currently studying code of transformer, but I can not understand the masked multi-head of decoder. The paper said that it is to prevent you from seeing the generating word, but I can not unserstand if the words after generating word have not been generated, how can them be seen? </p>
<p>I tried to read the code of the Transformer (link: <a href="https://github.com/Kyubyong/transformer" rel="noreferrer">https://github.com/Kyubyong/transformer</a>). The code that implements the mask is shown below. It uses a lower-triangular matrix to mask, and I cannot understand why.</p>
<pre class="lang-py prettyprint-override"><code>padding_num = -2 ** 32 + 1
diag_vals = tf.ones_like(inputs[0, :, :]) # (T_q, T_k)
tril = tf.linalg.LinearOperatorLowerTriangular(diag_vals).to_dense() # (T_q, T_k)
masks = tf.tile(tf.expand_dims(tril, 0), [tf.shape(inputs)[0], 1, 1]) # (N, T_q, T_k)
paddings = tf.ones_like(masks) * padding_num
outputs = tf.where(tf.equal(masks, 0), paddings, inputs)
</code></pre> | 2019-09-27 02:40:48.810000+00:00 | 2022-05-05 11:33:14.080000+00:00 | null | tensorflow|deep-learning|transformer-model|attention-model | ['https://arxiv.org/abs/1706.03762', 'https://arxiv.org/abs/1706.03762'] | 2 |
57,313,430 | <p>Using BeautifulSoup:</p>
<pre><code>import requests
import json
from bs4 import BeautifulSoup as bs

url = 'http://www.arxiv-sanity.com/top?timefilter=year&vfilter=all'
res = requests.get(url)
text = res.text
soup = bs(text, "html.parser")

# The paper data is embedded in the page's 7th script tag as a JS literal:
# "var papers = [{...}, {...}];"
extract = soup.select('script')[6]
target = extract.decode().split('var papers = ')[1]

# Split the JS array into individually parseable JSON objects.
target2 = target.replace("}, {", "}xxx{").replace('[{', '{').replace('}];', '}')
final = target2.split('xxx')
for i in range(len(final)):
    if i == len(final) - 1:
        # The last chunk still carries trailing script code ("var pid = ..."); trim it.
        last = final[i].split('var pid')[0]
        d = json.loads(last)
        print(d['title'], d['link'], d['abstract'])
    else:
        d = json.loads(final[i])
        print(d['title'], d['link'], d['abstract'])
</code></pre>
<p>Sample output:</p>
<pre><code>BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
http://arxiv.org/abs/1810.04805v2
We introduce a new language representation model called BERT, which stands
for Bidirectional Encoder Representations from Transformers. Unlike recent
language representation models, BERT is designed to pre-train deep
bidirectional representations from unlabeled text by jointly conditioning on
both left and right context in all layers...
</code></pre>
<p>etc.</p> | 2019-08-01 16:35:20.770000+00:00 | 2019-08-01 16:35:20.770000+00:00 | null | null | 57,312,256 | <p>I want crawling "link", "title" and "abstract"</p>
<p>How can I crawl this page?</p>
<p>I tried</p>
<pre><code>import requests
import json
url = 'http://www.arxiv-sanity.com/top?timefilter=year&vfilter=all'
res = requests.get(url)
text = res.text
# print(text)
d = json.loads(text)
print(d['title'], d['link'], d['abstract'])
</code></pre>
<p>but <code>SONDecodeError: Expecting value: line 1 column 1 (char 0)</code> occur</p> | 2019-08-01 15:20:56.233000+00:00 | 2019-08-01 16:35:20.770000+00:00 | null | python|web-crawler | [] | 0 |
29,975,391 | <p>You can use OpenCV 3.0's "Scene Text Detection" functions. They are based on 'Class-specific Extremal Regions for Scene Text Detection' and come with sample code.</p>
<p><em>You can find them at:</em></p>
<p>[1] <a href="http://docs.opencv.org/3.0-beta/modules/text/doc/erfilter.html" rel="nofollow">http://docs.opencv.org/3.0-beta/modules/text/doc/erfilter.html</a></p>
<p>[2] <a href="https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/textdetection.cpp" rel="nofollow">https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/textdetection.cpp</a></p>
<p><strong>Papers:</strong></p>
<p>[Neumann12] Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012. The paper is available online at <a href="http://cmp.felk.cvut.cz/~neumalu1/neumann-cvpr2012.pdf" rel="nofollow">http://cmp.felk.cvut.cz/~neumalu1/neumann-cvpr2012.pdf</a></p>
<p>[Neumann11] Neumann L., Matas J.: Text Localization in Real-world Images using Efficiently Pruned Exhaustive Search, ICDAR 2011. The paper is available online at <a href="http://cmp.felk.cvut.cz/~neumalu1/icdar2011_article.pdf" rel="nofollow">http://cmp.felk.cvut.cz/~neumalu1/icdar2011_article.pdf</a></p>
<p>[Gomez13] Gomez L. and Karatzas D.: Multi-script Text Extraction from Natural Scenes, ICDAR 2013. The paper is available online at <a href="http://158.109.8.37/files/GoK2013.pdf" rel="nofollow">http://158.109.8.37/files/GoK2013.pdf</a></p>
<p>[Gomez14] Gomez L. and Karatzas D.: A Fast Hierarchical Method for Multi-script and Arbitrary Oriented Scene Text Extraction, arXiv:1407.7504 [cs.CV]. The paper is available online at <a href="http://arxiv.org/abs/1407.7504" rel="nofollow">http://arxiv.org/abs/1407.7504</a></p> | 2015-04-30 18:43:35.370000+00:00 | 2015-05-01 11:37:46.950000+00:00 | 2015-05-01 11:37:46.950000+00:00 | null | 12,197,947 | <p>I would like to ask you if you know any good text localization algorithms that would detect text candidates in an image (for my OCR project)</p>
<p>Essentially, after 'applying' this algorithm I would like to be able to get regions (bounding boxes) with character candidates, e.g. </p>
<p><img src="https://i.stack.imgur.com/GX1E6.png" alt="enter image description here"></p>
<p>I am trying to find something that I might use but even if I find something it's most likely in an extremely difficult paper with really high maths that needs to be applied. I have already encountered MSER (<a href="http://en.wikipedia.org/wiki/Maximally_stable_extremal_regions" rel="noreferrer">Maximally Stable Extremal Regions</a>) or Gradient Vector Flow method but both of them are quite difficult for me (although I understand a lot in maths I still have hard time figuring these out)</p> | 2012-08-30 13:25:56.740000+00:00 | 2020-09-29 04:53:38.720000+00:00 | null | image|algorithm|image-processing|text|localization | ['http://docs.opencv.org/3.0-beta/modules/text/doc/erfilter.html', 'https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/textdetection.cpp', 'http://cmp.felk.cvut.cz/~neumalu1/neumann-cvpr2012.pdf', 'http://cmp.felk.cvut.cz/~neumalu1/icdar2011_article.pdf', 'http://158.109.8.37/files/GoK2013.pdf', 'http://arxiv.org/abs/1407.7504'] | 6 |
55,483,480 | <p><strong>Give a try to <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.lstsq.html#scipy.linalg.lstsq" rel="nofollow noreferrer"><code>scipy.linalg.lstsq()</code></a> using <code>lapack_driver='gelsy'</code>!</strong></p>
<p>Let's review the different routines for solving linear least squares and the approaches they take:</p>
<ul>
<li><p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html" rel="nofollow noreferrer"><code>numpy.linalg.lstsq()</code></a> wraps LAPACK's <a href="http://www.netlib.org/lapack/explore-html/db/d6a/dgelsd_8f_source.html" rel="nofollow noreferrer"><code>xGELSD()</code></a>, as shown in <a href="https://github.com/numpy/numpy/blob/v1.16.1/numpy/linalg/umath_linalg.c.src" rel="nofollow noreferrer">umath_linalg.c.src</a> on line 2841+. This routine reduces the matrix V to bidiagonal form using a devide and conquer strategy and compute the SVD of that bidiagonal matrix.</p></li>
<li><p>scipy's <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.lstsq.html#scipy.linalg.lstsq" rel="nofollow noreferrer"><code>scipy.linalg.lstsq()</code></a> wraps LAPACK's <code>xGELSD()</code>, <a href="http://www.netlib.org/lapack/explore-html/d6/d4b/dgelsy_8f_source.html" rel="nofollow noreferrer"><code>xGELSY()</code></a> and <a href="http://www.netlib.org/lapack/explore-html/d9/d4e/dgelss_8f_source.html" rel="nofollow noreferrer"><code>xGELSS()</code></a>: the argument <code>lapack_driver</code> can be modified to switch from one to another. According to the benchmark of LAPACK, <code>xGELSS()</code> is slower than <code>xGELSD()</code> and <code>xGELSY()</code> is about 5 times faster than <code>xGELSD()</code>. <code>xGELSY()</code> makes use of a QR factorization of V with column pivoting. And the good news is that this switch was <a href="https://docs.scipy.org/doc/scipy-1.1.0/reference/generated/scipy.linalg.lstsq.html#scipy.linalg.lstsq" rel="nofollow noreferrer">already available in scipy 1.1.0</a>!</p></li>
<li>LAPACK's <a href="http://www.netlib.org/lapack/explore-html/d8/dde/dgels_8f_source.html" rel="nofollow noreferrer"><code>xGELS()</code></a> makes use of the QR decomposition of the matrix V, but it assumes that this matrix has full rank. According to the benchmark of LAPACK, one can expect <code>dgels()</code> to be about 5 times faster than <code>dgelsd()</code>, but it is also more vulnerable to the condition number of the matrix and may become inaccurate. See details and further references in <a href="https://stackoverflow.com/questions/41637108/the-difference-between-c-lapack-sgels-and-python-numpy-lstsq-results">The difference between C++ (LAPACK, sgels) and Python (Numpy, lstsq) results</a>. <code>xGELS()</code> is available in the <a href="https://docs.scipy.org/doc/scipy/reference/linalg.cython_lapack.html#module-scipy.linalg.cython_lapack" rel="nofollow noreferrer">cython-lapack interface of scipy</a>.</li>
</ul>
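<p>Trying the recommended driver is a one-argument change. A minimal sketch (the Vandermonde system and coefficients below are made up for illustration):</p>

```python
import numpy as np
from scipy.linalg import lstsq

rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, size=200)
V = np.vander(t, 4)                       # tall Vandermonde design matrix
coef = np.array([1.0, -2.0, 0.5, 3.0])
b = V @ coef                              # exact right-hand side for this sketch

# Same call as the default SVD-based solver, but routed to xGELSY (QR with pivoting).
x, residues, rank, sv = lstsq(V, b, lapack_driver='gelsy')
```

<p>The recovered <code>x</code> matches the generating coefficients, and only the <code>lapack_driver</code> argument needed to change relative to the default call.</p>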
<p>While very tempting, computing and using <code>V^T·V</code> to solve the normal equation is likely not the way to go. Indeed, the precision is endangered by the condition number of that matrix, about <a href="https://www.cs.ubc.ca/~rbridson/courses/542g-fall-2008/notes-oct1.pdf" rel="nofollow noreferrer">the square of the condition number of the matrix V</a>. Since <a href="https://arxiv.org/abs/1504.02118" rel="nofollow noreferrer">Vandermonde matrices tend to be badly ill-conditioned, except for the matrices of the discrete Fourier transform</a>, it can become hazardous... <strong>Finally, you may even keep using <code>xGELSD()</code> to avoid problems related to conditionning. If you switch to <code>xGELSY()</code>, consider <a href="http://www.netlib.org/lapack/lug/node82.html" rel="nofollow noreferrer">estimating the error</a>.</strong></p> | 2019-04-02 20:59:12.480000+00:00 | 2019-04-02 20:59:12.480000+00:00 | null | null | 55,367,024 | <p>In <a href="https://math.stackexchange.com/a/2233298/340174">https://math.stackexchange.com/a/2233298/340174</a> it is mentioned that solving a linear equation "M·x = b" (matrix M is square) is slow if done via LU decomposition (but even slower using QR decomposition). Now I noticed that <code>numpy.linalg.solve</code> is in fact using LU decomposition. In truth, I want to solve "V·x = b" for a non-squared Vandermonde design matrix V for the least squares. I don't want regularization. I see multiple approaches:</p>
<ol>
<li>Solve "V·x = b" with <code>numpy.linalg.lstsq</code>, which uses Fortran "xGELSD" based on SVD. The SVD should be even slower than LU decomposition, but I don't need to calculate "(V^T·V)".</li>
<li>Solve "(V^T·V)·x = (V^T·b)" with <code>numpy.linalg.solve</code>, which uses LU decomposition.</li>
<li>Solve "A·x = b" with <code>numpy.linalg.solve</code>, which uses LU decomposition, but calculating "A=xV^T·V" directly according to <a href="https://math.stackexchange.com/a/3155891/340174">https://math.stackexchange.com/a/3155891/340174</a></li>
</ol>
<p>Alternatively I could use the newest <code>solve</code> from scipy (<a href="https://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.linalg.solve.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.linalg.solve.html</a>) which can use diagonal pivoting for the symmetric matrix "A" (which is faster than using LU decomposition, I guess), but my scipy is stuck on 1.1.0, so I don't have access to that.</p>
<p>From <a href="https://stackoverflow.com/a/45535523/4533188">https://stackoverflow.com/a/45535523/4533188</a> it seems that <code>solve</code> is faster than <code>lstsq</code>, including calculating "V^T·V", but when I tried it, <code>lstsq</code> was faster. Maybe I am doing something wrong?</p>
<p><strong>What is the fastest way of solving my linear problem?</strong></p>
<hr>
<h3>No real options</h3>
<ul>
<li><code>statsmodels.regression.linear_model.OLS.fit</code> is uses either Moore-Penrose pseudoinverse or QR-factorization + <code>np.linalg.inv</code> + <code>np.linalg.svd</code> + <code>numpy.linalg.solve</code>, which does not seem too efficient to me.</li>
<li><code>sklearn.linear_model.LinearRegression</code> uses scipy.linalg.lstsq.</li>
<li><code>scipy.linalg.lstsq</code> uses also xGELSD.</li>
<li>I expect calculating the inverse of "(V^T·V)" to be pretty expensive, so I discarded the direct computation of "x = (V^T·V)^-1·(V^T·b)"</li>
</ul> | 2019-03-26 22:24:06.337000+00:00 | 2022-02-21 09:26:23.677000+00:00 | null | python|numpy|scipy | ['https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.lstsq.html#scipy.linalg.lstsq', 'https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html', 'http://www.netlib.org/lapack/explore-html/db/d6a/dgelsd_8f_source.html', 'https://github.com/numpy/numpy/blob/v1.16.1/numpy/linalg/umath_linalg.c.src', 'https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.lstsq.html#scipy.linalg.lstsq', 'http://www.netlib.org/lapack/explore-html/d6/d4b/dgelsy_8f_source.html', 'http://www.netlib.org/lapack/explore-html/d9/d4e/dgelss_8f_source.html', 'https://docs.scipy.org/doc/scipy-1.1.0/reference/generated/scipy.linalg.lstsq.html#scipy.linalg.lstsq', 'http://www.netlib.org/lapack/explore-html/d8/dde/dgels_8f_source.html', 'https://stackoverflow.com/questions/41637108/the-difference-between-c-lapack-sgels-and-python-numpy-lstsq-results', 'https://docs.scipy.org/doc/scipy/reference/linalg.cython_lapack.html#module-scipy.linalg.cython_lapack', 'https://www.cs.ubc.ca/~rbridson/courses/542g-fall-2008/notes-oct1.pdf', 'https://arxiv.org/abs/1504.02118', 'http://www.netlib.org/lapack/lug/node82.html'] | 14 |
52,957,787 | <p>If you want to calculate the mutual information between, say, X and Y, it depends on the underlying assumptions you can make. If you have very high-dimensional data and complicated distributions, I suggest binning, which is non-parametric. There are also some more sophisticated methods that I am using.
See <a href="https://github.com/BiuBiuBiLL/NPEET_LNC" rel="nofollow noreferrer">here</a>, <a href="https://github.com/gregversteeg/NPEET" rel="nofollow noreferrer">here</a> and <a href="https://arxiv.org/pdf/1801.04062.pdf" rel="nofollow noreferrer">here</a>.</p>
<p>The first two don't really scale well and the last one involves some hyperparameter tuning that can get your numbers totally off (or I am doing something wrong), but scales relatively well.</p> | 2018-10-23 21:01:36.177000+00:00 | 2018-10-23 21:01:36.177000+00:00 | null | null | 52,955,118 | <p>I'm learning how to use tensorflow and have run into a problem in implementing a custom loss function. Specifically, I'm trying to compute the average mutual information between all pairs of variables (the idea being to determine what predictions for one class are tightly correlated with another).</p>
<p>For example, if I have an array</p>
<pre><code># In simple case, 2 entries of data showing predictions for non-exclusive
# properties A, B, C, and D in that order.
data = np.array([[0.99, 0.05, 0.85, 0.2], [0.97, 0.57, 0.88, 0.1]])
</code></pre>
<p>I'd like to get back a tensor showing the mutual information between A and B, A and C, A and D, B and A, etc. where A is the 1st element of each row vector, B is the 2nd, etc. I would also be ok with just getting the average pairwise mutual information for each variable (e.g. average of MI(A, B), MI(A, C), and MI(A, D))</p>
<p>The way I would do this is by calculating the entropy across rows of every pair of variables and then subtracting off the entropy for each variable alone.</p>
<p>As a starting point, I looked at existing code for computing the covariance of two variables:</p>
<pre><code>def tf_cov(x):
mean_x = tf.reduce_mean(x, axis=0, keepdims=True)
mx = tf.matmul(tf.transpose(mean_x), mean_x)
vx = tf.matmul(tf.transpose(x), x)/tf.cast(tf.shape(x)[0], tf.float32)
cov_xx = vx - mx
return cov_xx
</code></pre>
<p>This is a nice example of how to get pair-wise statistics, but it doesn't get me quite the metrics I want.</p>
<p>I'm able to compute the entropy for a single variable as well:</p>
<pre><code>def tf_entropy(prob_a):
# Calculates the entropy along each column
    col_entropy = -tf.reduce_sum(prob_a * tf.log(prob_a), axis=0)
return col_entropy
</code></pre>
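For comparison, here is how I would compute the pairwise quantities in plain NumPy with histogram binning (my own sketch — the function name and bin count are arbitrary; a TF version would mirror it with tf ops):

```python
import numpy as np

def pairwise_mutual_info(x, bins=4):
    """Plug-in (histogram) mutual information between every pair of columns of x."""
    n, d = x.shape
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            joint, _, _ = np.histogram2d(x[:, i], x[:, j], bins=bins)
            p = joint / joint.sum()
            px = p.sum(axis=1, keepdims=True)   # marginal of column i
            py = p.sum(axis=0, keepdims=True)   # marginal of column j
            nz = p > 0                          # skip empty cells in the log
            mi[i, j] = np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))
    return mi

data = np.random.default_rng(0).random((100, 4))  # stand-in for the predictions
mi = pairwise_mutual_info(data)
```

The same histogram table also yields the pairwise entropies, since MI(i, j) = H(i) + H(j) − H(i, j).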
<p>Does anyone know of a good way to compute the pairwise entropies? I imagine it will look a lot like matmul, but instead of summing the element-wise products, I would compute the entropy. Of course, if you know of existing tensorflow functions that already do what I want, that would be great. I've been reading up on various entropy-related functions, but they never seem to be quite what I want.</p> | 2018-10-23 17:56:42.713000+00:00 | 2018-10-23 21:01:36.177000+00:00 | null | python|tensorflow|entropy|loss-function | ['https://github.com/BiuBiuBiLL/NPEET_LNC', 'https://github.com/gregversteeg/NPEET', 'https://arxiv.org/pdf/1801.04062.pdf'] | 3 |
48,443,832 | <p><strong>Initial note:</strong> Since you talk about having "created a simple model" and having said model "identify this image as car", I'll assume you're not actually using a model for object detection, but one that does simple classification.</p>
<p>The problem you're trying to solve is a different problem than the one you trained your network to solve.</p>
<p>You have a network that was trained to tell you <strong>whether an image you feed to it contains a car</strong>. This is a classification problem.</p>
<p>What you want now is <strong>the area where the car actually is in the image</strong>. This is a much harder problem to solve, because now your network doesn't need to output anymore "I see a car" vs. "I don't see a car", but instead, in the <strong>simplest</strong> formulation, "I see a car in the rectangle (x,y,w,h)". In another formulation, more similar to what your desired output would be, you would have <strong>per <em>each pixel</em> a classification like "it's a car" or "not a car"</strong>. These problems are then object <em>detection</em> and <em>segmentation</em>.</p>
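To make the gap concrete: the crudest bridge from a plain classifier to localisation is a brute-force sliding window over crops — purely a sketch, where <code>classify</code> stands in for a trained model:

```python
import numpy as np

def best_window(image, classify, win=64, stride=32):
    """Score every window with the classifier and keep the best one.

    classify(crop) -> float is assumed to return something like P(car).
    Returns ((x, y, w, h), score) of the highest-scoring window.
    """
    h, w = image.shape[:2]
    best_score, best_box = -np.inf, None
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            s = classify(image[y:y + win, x:x + win])
            if s > best_score:
                best_score, best_box = s, (x, y, win, win)
    return best_box, best_score

# Toy check with a "classifier" that simply likes bright crops.
img = np.zeros((128, 128))
img[64:, 64:] = 1.0
box, score = best_window(img, classify=lambda crop: crop.mean())
```

Real detectors avoid this per-window cost by sharing convolutional features across windows, which is exactly what the approaches below do.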
<p>There are studies out there that tackle these problems (<a href="https://arxiv.org/abs/1504.08083" rel="nofollow noreferrer">one example</a> and <a href="https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf" rel="nofollow noreferrer">another</a>), but my suggestion is to have a look at Tensorflow's <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">object detection API</a> which has pretrained models you might exploit for your use-case.</p> | 2018-01-25 13:19:57.263000+00:00 | 2018-01-25 13:19:57.263000+00:00 | null | null | 48,443,389 | <p>I have created a simple model in TF to identify cars.
It identified the image below as a car:</p>
<p><a href="https://i.stack.imgur.com/1POkv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1POkv.jpg" alt="enter image description here"></a></p>
<p>What I would like to have is the area (or a crop of the area) of the identified car, as follows:
<a href="https://i.stack.imgur.com/HuJzm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HuJzm.jpg" alt="enter image description here"></a></p>
<p>any ideas if it is possible with Tensorflow? my current python code is looking like that:</p>
<pre><code>file_name = 'mustangTest.png'
input_height = 299
input_width = 299
input_mean = 0
input_std = 255
input_layer = "Mul"
output_layer = "final_result"
t = read_tensor_from_image_file(file_name, input_height=input_height, input_width=input_width, input_mean=input_mean, input_std=input_std)
input_name = "import/" + input_layer
output_name = "import/" + output_layer
input_operation = graph.get_operation_by_name(input_name)
output_operation = graph.get_operation_by_name(output_name)
with tf.Session(graph=graph) as sess:
results = sess.run(output_operation.outputs[0],{input_operation.outputs[0]: t})
results = np.squeeze(results)
top_k = results.argsort()[-5:][::-1]
print("car is", top_k[0])  # index of the top-scoring class
</code></pre> | 2018-01-25 12:53:17.260000+00:00 | 2018-01-25 13:19:57.263000+00:00 | null | python|opencv|tensorflow | ['https://arxiv.org/abs/1504.08083', 'https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf', 'https://github.com/tensorflow/models/tree/master/research/object_detection'] | 3 |
70,927,496 | <p>I was able to run the following code successfully, <em>however</em> it requires additional steps</p>
<ol>
<li>Install the babel runtime - <code>yarn add @babel/runtime</code> - courtesy <a href="https://stackoverflow.com/a/66264883/13749957">this</a> post.</li>
<li>Material UI icon gives an error, so similarly add the material UI dependencies</li>
</ol>
<p>Assuming this is what you want:
<a href="https://i.stack.imgur.com/Yzjoa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yzjoa.png" alt="enter image description here" /></a></p>
<p>Stackblitz - <strong><a href="https://stackblitz.com/edit/nextjs-utkd32?file=pages%2Findex.js" rel="nofollow noreferrer">https://stackblitz.com/edit/nextjs-utkd32?file=pages%2Findex.js</a></strong></p>
<pre><code>import Head from 'next/head';
import styles from '../styles/Home.module.css';
import PDFViewer from 'pdf-viewer-reactjs';
export default function Home() {
return (
<div className={styles.container}>
<Head>
<title>Create Next App</title>
</Head>
<main className={styles.main}>
<PDFViewer
document={{
url: 'https://arxiv.org/pdf/quant-ph/0410100.pdf',
}}
/>
</main>
<footer className={styles.footer}>
<a
href="https://vercel.com?utm_source=create-next-app&utm_medium=default-template&utm_campaign=create-next-app"
target="_blank"
rel="noopener noreferrer"
>
Powered by{' '}
</a>
</footer>
</div>
);
}
</code></pre>
<h3>Dynamic Imports</h3>
<p>If you want to do a <code>dynamic</code> import as you were trying to do above, the export of the individual module may need to be linked to directly - potentially <code>pdf-viewer-reactjs/pdf-viewer-reactjs</code> - this needs to be looked into further.</p>
<pre><code>error - ./node_modules/pdfjs-dist/build/pdf.js 2094:26
Module parse failed: Unexpected token (2094:26)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
| async destroy() {
| this.destroyed = true;
> await this._transport?.destroy();
| this._transport = null;
|
</code></pre>
<hr />
<p><strong>So far I've tried both of the following ways, but nothing has worked for me!</strong></p>
<p><strong>How can I use this <code>npm</code> package in my project without any errors?</strong></p>
<p>The code (Way 1):</p>
<pre><code>import React from 'react';
import StudentPortalLayout from "../../../components/Layout/StudentLayout";
import dynamic from "next/dynamic";
const PDFViewer = dynamic(() => import("pdf-viewer-reactjs"), {
ssr: false
});
function PDFView() {
return (
<StudentPortalLayout hideSidebar={true}>
<PDFViewer
document={{
url: 'https://arxiv.org/pdf/quant-ph/0410100.pdf',
}}
/>
</StudentPortalLayout>
)
}
export default PDFView;
</code></pre>
<p>The code (Way 2):</p>
<pre><code>import React from 'react';
import StudentPortalLayout from "../../../components/Layout/StudentLayout";
import PDFViewer from 'pdf-viewer-reactjs'
function PDFView() {
return (
<StudentPortalLayout hideSidebar={true}>
<PDFViewer
document={{
url: 'https://arxiv.org/pdf/quant-ph/0410100.pdf',
}}
/>
</StudentPortalLayout>
)
}
export default PDFView;
</code></pre> | 2022-01-31 09:05:10.390000+00:00 | 2022-01-31 14:17:07.737000+00:00 | null | javascript|reactjs|npm|next.js | ['https://stackoverflow.com/a/66264883/13749957', 'https://i.stack.imgur.com/Yzjoa.png', 'https://stackblitz.com/edit/nextjs-utkd32?file=pages%2Findex.js'] | 3 |
69,453,039 | <p>The <code>cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST</code> value is the threshold used to filter out low-scored bounding boxes predicted by the <a href="https://arxiv.org/abs/1504.08083" rel="noreferrer">Fast R-CNN</a> component of the model during inference/test time.</p>
<p>Basically, any prediction with a confidence score above the threshold value is kept, and the remaining are discarded.</p>
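In plain NumPy terms, that filtering boils down to a boolean mask (toy values, purely for illustration):

```python
import numpy as np

scores = np.array([0.9, 0.4, 0.7, 0.1])
boxes = np.arange(16).reshape(4, 4)   # one dummy (x1, y1, x2, y2) box per score

score_thresh = 0.5                    # what cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST becomes
keep = scores > score_thresh          # same mask as in fast_rcnn_inference_single_image
filtered_boxes = boxes[keep]
filtered_scores = scores[keep]
```

Raising the threshold keeps only high-confidence detections; lowering it lets more (possibly noisy) boxes through to NMS.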
<p>This thresholding can be seen in the Detectron2 code <a href="https://github.com/facebookresearch/detectron2/blob/c55d3816c52f22bb2d7cbe4d08a2e760cf6b8e31/detectron2/modeling/roi_heads/fast_rcnn.py#L151" rel="noreferrer">here</a>.</p>
<pre><code>def fast_rcnn_inference_single_image(
boxes,
scores,
image_shape: Tuple[int, int],
score_thresh: float,
nms_thresh: float,
topk_per_image: int,
):
### clipped code ###
# 1. Filter results based on detection scores. It can make NMS more efficient
# by filtering out low-confidence detections.
filter_mask = scores > score_thresh # R x K
### clipped code ###
</code></pre>
<p>You can also see <a href="https://github.com/facebookresearch/detectron2/blob/c55d3816c52f22bb2d7cbe4d08a2e760cf6b8e31/detectron2/modeling/roi_heads/fast_rcnn.py#L251" rel="noreferrer">here</a> to confirm that that parameter value originates from the config.</p>
<pre><code>class FastRCNNOutputLayers(nn.Module):
"""
Two linear layers for predicting Fast R-CNN outputs:
1. proposal-to-detection box regression deltas
2. classification scores
"""
### clipped code ###
@classmethod
def from_config(cls, cfg, input_shape):
return {
"input_shape": input_shape,
"box2box_transform": Box2BoxTransform(weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS),
# fmt: off
"num_classes" : cfg.MODEL.ROI_HEADS.NUM_CLASSES,
"cls_agnostic_bbox_reg" : cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG,
"smooth_l1_beta" : cfg.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA,
"test_score_thresh" : cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST,
"test_nms_thresh" : cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST,
"test_topk_per_image" : cfg.TEST.DETECTIONS_PER_IMAGE,
"box_reg_loss_type" : cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_TYPE,
"loss_weight" : {"loss_box_reg": cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_WEIGHT},
# fmt: on
}
### clipped code ###
</code></pre> | 2021-10-05 15:11:38.297000+00:00 | 2021-10-05 15:11:38.297000+00:00 | null | null | 69,448,588 | <p>Hope you're doing great!</p>
<p>I didn't really understand these 2 lines from the detectron2 <a href="https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5" rel="nofollow noreferrer">colab notebook</a> tutorial. I tried looking in the official documentation but I didn't understand much; can someone please explain this to me:</p>
<pre><code>cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set threshold for this model
# Find a model from detectron2's model zoo. You can use the https://dl.fbaipublicfiles... url as well
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
</code></pre>
<p>I thank you in advance and wish you a great day!</p> | 2021-10-05 10:09:54.113000+00:00 | 2021-10-05 15:11:38.297000+00:00 | null | python|object-detection-api|detectron | ['https://arxiv.org/abs/1504.08083', 'https://github.com/facebookresearch/detectron2/blob/c55d3816c52f22bb2d7cbe4d08a2e760cf6b8e31/detectron2/modeling/roi_heads/fast_rcnn.py#L151', 'https://github.com/facebookresearch/detectron2/blob/c55d3816c52f22bb2d7cbe4d08a2e760cf6b8e31/detectron2/modeling/roi_heads/fast_rcnn.py#L251'] | 3 |
48,778,163 | <p>Theano 1.0.0, Keras 2.1.3, Python 2.7.10</p>
<p><strong>Shape of Input and Output:</strong> To my (yet narrow) understanding of keras and CNN you'll need to train the net on samples of shape <code>(n_samples, sequence_length, features)</code>. That means expanding your time series in a <a href="http://arogozhnikov.github.io/2015/09/30/NumpyTipsAndTricks2.html#Strides-and-training-on-sequences" rel="nofollow noreferrer">rolling window view</a>, broadcasting shape <code>(6400, 1, 1)</code> to <code>(6400, 64, 1)</code>. You do this via <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.lib.stride_tricks.as_strided.html" rel="nofollow noreferrer"><code>as_strided</code></a>:</p>
<pre><code>## reshape data
cos_view = np.lib.stride_tricks.as_strided(
cos,
shape=[cos.shape[0] - sequence_length, sequence_length, 1],
strides=cos.strides
)
expected_output_view = np.lib.stride_tricks.as_strided(
expected_output,
shape=[expected_output.shape[0] - sequence_length, sequence_length, 1],
strides=expected_output.strides
)
</code></pre>
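As a quick sanity check of what such a rolling-window view contains (row i is the window starting at sample i; a toy-sized stand-in for <code>cos</code>):

```python
import numpy as np

x = np.arange(10, dtype=float).reshape(-1, 1, 1)   # stand-in for `cos`, shape (10, 1, 1)
seq = 4
view = np.lib.stride_tricks.as_strided(
    x, shape=[x.shape[0] - seq, seq, 1], strides=x.strides)

print(view.shape)        # (6, 4, 1)
print(view[0].ravel())   # [0. 1. 2. 3.]
print(view[2].ravel())   # [2. 3. 4. 5.]
```

No data is copied: every window is just a strided view into the original buffer, which is why this scales to long series.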
<p>However the <code>expected_output</code> must also be of a higher dimension. In <code>gen_cosine_amp</code>:</p>
<pre><code>def gen_cosine_amp(amp=100, period=25, x0=0, xn=500, step=1, k=0.0001):
...
expected_output = np.zeros((len(cos), 1, 1))
for i in range(len(cos) - lahead):
expected_output[i, 0, 0] = np.mean(cos[i + 1:i + lahead + 1])
</code></pre>
<p>Now you can train your model on these broadcast views:</p>
<pre><code>UFCNN_1.fit(cos_view, expected_output_view, verbose=1,epochs=1,shuffle=False, batch_size=batch_size)
</code></pre>
<p><strong>Fixing Model Bug:</strong> But be aware that there is another bug in your code. The layer <code>conv_7</code> is your output-layer. Its output / filters dimension shouldn't be <code>nb_filter</code>, but <code>output_dim</code></p>
<pre><code>conv_7 = Conv1D(filters=output_dim, kernel_size=filter_length, padding='same')(relu_6)
</code></pre>
<p><strong>Concerns:</strong> Even though with these modifications training on the data works quite well, I suppose this model will look into the future of the time series (assuming your signal is a time series as is in lukovkin's <a href="https://github.com/lukovkin/ufcnn-keras/blob/master/models/UFCNN1.py" rel="nofollow noreferrer"><code>ufcnn-keras</code></a>). Maybe a <a href="https://keras.io/layers/convolutional/" rel="nofollow noreferrer"><code>Conv1D</code></a>-layer with <code>padding="causal"</code> and <code>dilation_rate=x</code> (x > 1) will be better. (I'm still experimenting myself with time series prediction)</p>
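For intuition about that suggestion: a causal convolution makes y[t] depend only on current and past samples, and dilation widens the receptive field without more weights. A minimal NumPy sketch (my own illustration, not Keras code):

```python
import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    """y[t] depends only on x[t], x[t-d], x[t-2d], ... (zero-padded past)."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])   # pad the past with zeros
    return np.array([sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

y = causal_conv1d(np.arange(5, dtype=float), kernel=[1.0, 1.0], dilation=2)
print(y)   # [0. 1. 2. 4. 6.]
```

Because no output ever touches a future input, a model built from such layers cannot leak future values of the series into its predictions.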
<p>Furthermore, be aware that the model merges the layers via <a href="https://keras.io/layers/merge/#add" rel="nofollow noreferrer"><code>add</code></a>, not via <a href="https://keras.io/layers/merge/#concatenate" rel="nofollow noreferrer"><code>concatenate</code></a> which would resemble the UFCNN-model described in the paper of <a href="https://arxiv.org/pdf/1508.00317.pdf" rel="nofollow noreferrer">Roni Mittelman</a>. <strong>The code below doesn't reflect these concerns.</strong></p>
<p><strong>Putting it all together:</strong></p>
<pre><code>from __future__ import absolute_import
from __future__ import print_function
import numpy as np
import keras
from keras.models import Model
from keras.models import Sequential
from keras.layers import Input, merge
from keras.layers.core import Activation
from keras.layers.convolutional import Conv1D
import matplotlib.pyplot as plt
from keras.preprocessing import sequence
def ufcnn_regression_model(sequence_length=5000,
features=1,
nb_filter=150,
filter_length=5,
output_dim=1,
optimizer='adagrad',
loss='mse'):
inputs = Input(shape=(sequence_length, features), name = 'input')
#########################################################
conv_1 = Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(inputs)
relu_1 = Activation('relu')(conv_1)
#########################################################
conv_2 = Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(relu_1)
relu_2 = Activation('relu')(conv_2)
#########################################################
conv_3 = Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(relu_2)
relu_3 = Activation('relu')(conv_3)
#########################################################
conv_4 = Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(relu_3)
relu_4 = Activation('relu')(conv_4)
#########################################################
merge_1 = keras.layers.add([relu_2, relu_4])
conv_5 =Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(merge_1)
relu_5 = Activation('relu')(conv_5)
#########################################################
merge_2 = keras.layers.add([relu_1, relu_5])
conv_6 = Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(merge_2)
relu_6 = Activation('relu')(conv_6)
#########################################################
conv_7 = Conv1D(filters=output_dim, kernel_size=filter_length, padding='same')(relu_6)
#########################################################
model = Model(inputs = inputs, outputs = conv_7)
model.compile(optimizer=optimizer, loss=loss)
print(model.summary())
return model
## Input & Output function
def gen_cosine_amp(amp=100, period=25, x0=0, xn=500, step=1, k=0.0001):
cos = np.zeros(((xn - x0) * step, 1, 1))
print("Cos. Shape",cos.shape)
for i in range(len(cos)):
idx = x0 + i * step
cos[i, 0, 0] = amp * np.cos(idx / (2 * np.pi * period))
cos[i, 0, 0] = cos[i, 0, 0] * np.exp(-k * idx)
lahead = 1
expected_output = np.zeros((len(cos), 1, 1))
for i in range(len(cos) - lahead):
expected_output[i, 0, 0] = np.mean(cos[i + 1:i + lahead + 1])
return cos, expected_output
## Parameter
sequence_length = 64
features = 1
nb_filter = 150
filter_length = 5
output_dim = 1
epochs = 5
batch_size = 128
## UFCNN_1 model summary
UFCNN_1 = ufcnn_regression_model(sequence_length=sequence_length, nb_filter=nb_filter)
## Inputs and ouputs to be trained
cos, expected_output = gen_cosine_amp(xn = sequence_length * 100)
## reshape data
cos_view = np.lib.stride_tricks.as_strided(
cos,
shape=[cos.shape[0] - sequence_length, sequence_length, 1],
strides=cos.strides
)
expected_output_view = np.lib.stride_tricks.as_strided(
expected_output,
shape=[expected_output.shape[0] - sequence_length, sequence_length, 1],
strides=expected_output.strides
)
print("Cos. Shape Input ",cos_view.shape)
## Training
for i in range(epochs):
print('Epoch', i, '/', epochs)
UFCNN_1.fit(cos_view, expected_output_view, verbose=1,epochs=1,shuffle=False, batch_size=batch_size)
print('Predicting')
predicted_output = UFCNN_1.predict(cos_view, batch_size=batch_size)
rmse = np.sqrt(((predicted_output - cos_view) ** 2).mean(axis=None))
print ("RMSE ", rmse)
</code></pre> | 2018-02-14 01:06:27.313000+00:00 | 2018-04-04 13:13:58.820000+00:00 | 2018-04-04 13:13:58.820000+00:00 | null | 43,039,931 | <p>Tensorflow 1.0.1 Keras 2.0 and Python 3.4</p>
<p>I am running a regression training using the UFCNN model, following lukovkin/ufcnn-keras' model of ufcnn-keras/notebook/UFCNN.ipynb ("<a href="https://github.com/lukovkin/ufcnn-keras/tree/master/notebook" rel="nofollow noreferrer">https://github.com/lukovkin/ufcnn-keras/tree/master/notebook</a>") and the keras functional API tutorials.
But an error shows that "ValueError: Error when checking model input: expected input to have shape (None, 64, 1) but got array with shape (6400, 1, 1)". I hope someone can help me out. Here is my code below:</p>
<pre><code>from __future__ import absolute_import
from __future__ import print_function
import numpy as np
import keras
from keras.models import Model
from keras.models import Sequential
from keras.layers import Input, merge
from keras.layers.core import Activation
from keras.layers.convolutional import Conv1D
import matplotlib.pyplot as plt
from keras.preprocessing import sequence
## UFCNN function
def ufcnn_regression_model(sequence_length=5000,
features=1,
nb_filter=150,
filter_length=5,
output_dim=1,
optimizer='adagrad',
loss='mse'):
inputs = Input(shape=(sequence_length, features), name = 'input')
#########################################################
conv_1 = Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(inputs)
relu_1 = Activation('relu')(conv_1)
#########################################################
conv_2 = Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(relu_1)
relu_2 = Activation('relu')(conv_2)
#########################################################
conv_3 = Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(relu_2)
relu_3 = Activation('relu')(conv_3)
#########################################################
conv_4 = Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(relu_3)
relu_4 = Activation('relu')(conv_4)
#########################################################
merge_1 = keras.layers.add([relu_2, relu_4])
conv_5 =Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(merge_1)
relu_5 = Activation('relu')(conv_5)
#########################################################
merge_2 = keras.layers.add([relu_1, relu_5])
conv_6 = Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(merge_2)
relu_6 = Activation('relu')(conv_6)
#########################################################
conv_7 = Conv1D(filters=nb_filter, kernel_size=filter_length, padding='same')(relu_6)
#########################################################
model = Model(inputs = inputs, outputs = conv_7)
model.compile(optimizer=optimizer, loss=loss)
print(model.summary())
return model
## Input & Output function
def gen_cosine_amp(amp=100, period=25, x0=0, xn=500, step=1, k=0.0001):
cos = np.zeros(((xn - x0) * step, 1, 1))
print("Cos. Shape",cos.shape)
for i in range(len(cos)):
idx = x0 + i * step
cos[i, 0, 0] = amp * np.cos(idx / (2 * np.pi * period))
cos[i, 0, 0] = cos[i, 0, 0] * np.exp(-k * idx)
lahead = 1
expected_output = np.zeros((len(cos), 1))
for i in range(len(cos) - lahead):
expected_output[i, 0] = np.mean(cos[i + 1:i + lahead + 1])
return cos, expected_output
## Parameter
sequence_length = 64
features = 1
nb_filter = 150
filter_length = 5
output_dim = 1
epochs = 5
batch_size = 128
## UFCNN_1 model summary
UFCNN_1 = ufcnn_regression_model(sequence_length=sequence_length)
## Inputs and outputs to be trained
cos, expected_output = gen_cosine_amp(xn = sequence_length * 100)
## Training
for i in range(epochs):
print('Epoch', i, '/', epochs)
UFCNN_1.fit(cos, expected_output, verbose=1,epochs=1,shuffle=False, batch_size=batch_size)
print('Predicting')
## Predicting
predicted_output = UFCNN_1.predict(cos, batch_size=batch_size)
</code></pre>
<p>My Error is:</p>
<pre><code>Epoch 0 / 5
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-8-d49a856b74bd> in <module>()
1 for i in range(epochs):
2 print('Epoch', i, '/', epochs)
----> 3 UFCNN_1.fit(cos, expected_output, verbose=1,epochs=1,shuffle=False, batch_size=batch_size)
4 print('Predicting')
5 predicted_output = model.predict(cos, batch_size=batch_size)
/usr/local/lib/python3.4/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
1403 class_weight=class_weight,
1404 check_batch_axis=False,
-> 1405 batch_size=batch_size)
1406 # prepare validation data
1407 if validation_data:
/usr/local/lib/python3.4/dist-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size)
1293 self._feed_input_shapes,
1294 check_batch_axis=False,
-> 1295 exception_prefix='model input')
1296 y = _standardize_input_data(y, self._feed_output_names,
1297 output_shapes,
/usr/local/lib/python3.4/dist-packages/keras/engine/training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
131 ' to have shape ' + str(shapes[i]) +
132 ' but got array with shape ' +
--> 133 str(array.shape))
134 return arrays
135
ValueError: Error when checking model input: expected input to have shape (None, 64, 1) but got array with shape (6400, 1, 1)
</code></pre>
<p>Thank you for your help to make it work!!!!!</p>
<p>BTW: this model is almost the same as lukovkin/ufcnn-keras' model, only with the code updated to fit a newer version of keras and tensorflow.</p>
64,012,895 | <p>Ok, finally I found a <a href="https://arxiv.org/abs/1706.07156" rel="nofollow noreferrer">paper</a> that talks about it. In the paper they say:</p>
<blockquote>
<p>All audio clips were standardized by padding/clipping to a 4 second
duration</p>
</blockquote>
<p>So yes, what you say impacts your performance is exactly what they do in papers, from what I see.</p>
<p>An example of this kind of application is the <a href="https://urbansounddataset.weebly.com/" rel="nofollow noreferrer">UrbanSoundDataset</a>. It's a dataset of audio clips of different lengths, and therefore any paper that uses it (for a non-RNN network) will be forced to use this or another approach that converts sounds to a same-length vector/matrix. I recommend the paper <a href="https://arxiv.org/abs/1608.04363" rel="nofollow noreferrer">Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification</a> or <a href="https://ieeexplore.ieee.org/document/7324337" rel="nofollow noreferrer">ENVIRONMENTAL SOUND CLASSIFICATION WITH CONVOLUTIONAL NEURAL NETWORKS</a>. The latter has its code open-sourced, and you can see that it also gets audio to 4 seconds in the function <code>_load_audio</code> in <a href="https://github.com/karolpiczak/paper-2015-esc-convnet/blob/master/Code/_Datasets/Setup.ipynb" rel="nofollow noreferrer">this</a> notebook.</p>
<p><strong>How to clip audio</strong></p>
<pre><code>import pydub

audio = pydub.AudioSegment.silent(duration=duration_ms)  # the target length
audio = audio.overlay(pydub.AudioSegment.from_wav(path))
raw = audio.split_to_mono()[0].get_array_of_samples()  # I only keep the left channel
</code></pre>
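If you work with raw sample arrays instead, the same standardization is a short pad/clip in NumPy (a sketch; <code>target_len</code> is whatever fixed sample count you standardize to):

```python
import numpy as np

def fix_length(samples, target_len):
    """Pad with zeros or clip so every clip has exactly target_len samples."""
    if len(samples) >= target_len:
        return samples[:target_len]
    return np.pad(samples, (0, target_len - len(samples)))

short = np.ones(3)
long = np.arange(10)
print(fix_length(short, 5))   # [1. 1. 1. 0. 0.]
print(fix_length(long, 5))    # [0 1 2 3 4]
```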
<p><strong>Mel-spectrogram</strong></p>
<p>The standard is to use a <a href="https://medium.com/analytics-vidhya/understanding-the-mel-spectrogram-fca2afa2ce53" rel="nofollow noreferrer">mel-spectrum</a> for this kind of application. You could use the Python library <a href="https://pypi.org/project/essentia/" rel="nofollow noreferrer">Essentia</a> and follow <a href="https://essentia.upf.edu/essentia_python_tutorial.html#computing-spectrum-mel-bands-energies-and-mfccs" rel="nofollow noreferrer">this</a> example or use librosa like this:</p>
<pre><code># Attention, I do not cut / pad this example
y, sr = librosa.load('your-wav-file.wav')
mel_spect = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=1024)
</code></pre> | 2020-09-22 15:26:34.210000+00:00 | 2020-09-23 13:34:43.523000+00:00 | 2020-09-23 13:34:43.523000+00:00 | null | 37,045,126 | <p>A lot of articles are using CNNs to extract audio features. The input data is a spectrogram with two dimensions, time and frequency.</p>
<p>When creating an audio spectrogram, you need to specify the exact size of both dimensions. But they are usually not fixed. One can specify the size of the frequency dimension through the window size, but what about the time domain? The lengths of audio samples are different, but the size of the input data of CNNs should be fixed. </p>
<p>In my datasets, the audio length ranges from 1s to 8s. Padding or cutting always impacts the results too much.</p>
<p>So I want to know more about this method.</p> | 2016-05-05 07:40:06.310000+00:00 | 2020-09-23 13:34:43.523000+00:00 | 2017-03-07 08:40:04.493000+00:00 | signal-processing|speech-recognition|conv-neural-network|spectrogram | ['https://arxiv.org/abs/1706.07156', 'https://urbansounddataset.weebly.com/', 'https://arxiv.org/abs/1608.04363', 'https://ieeexplore.ieee.org/document/7324337', 'https://github.com/karolpiczak/paper-2015-esc-convnet/blob/master/Code/_Datasets/Setup.ipynb', 'https://medium.com/analytics-vidhya/understanding-the-mel-spectrogram-fca2afa2ce53', 'https://pypi.org/project/essentia/', 'https://essentia.upf.edu/essentia_python_tutorial.html#computing-spectrum-mel-bands-energies-and-mfccs'] | 8 |
44,964,060 | <p>For the requirements</p>
<blockquote>
<ol>
<li>generate a sequence of random numbers r_i from a whole number interval I = [-(R+1), R], R > 0 with a statistical distribution like
java.util.Random</li>
<li>the sequence r_i must be strictly increasing (r_i > r_j for i > j)</li>
</ol>
</blockquote>
<p>we could come up with a simple algorithm</p>
<pre><code>A1:
- draw a random number r_i from I via a library call
- discard it if it is less than or equal to the last draw, and try another pick
</code></pre>
<p>The possible complaint would be that this algorithm would probably not give the right number of generated r_i; there is a fuzzy requirement of about N=10^12 expected numbers in total.</p>
<ol start="3">
<li>"need a generator for many (up to one trillion, 10^12) unique random 64-bit numbers"</li>
</ol>
</blockquote>
<p>The solution for this would be </p>
<pre><code>A2:
- to generate N numbers and then
- sort them
</code></pre>
<p>However, there is another requirement: there is not enough memory available.</p>
<blockquote>
<ol start="4">
<li>"I'd like to use at most a few MB of internal state."</li>
</ol>
</blockquote>
<p>My conjecture is that it is not possible to fulfill all these requirements at once. </p>
<p>As a compromise I suggest</p>
<pre><code>A3:
R = 2^63 ≈ 9.2 * 10^18
N=1 Trillion = 10^12
- divide the range I=[-R,R-1] into N intervals of length 2R/N each
- visit each of those intervals (visiting one interval after another)
- draw a random number from that interval
</code></pre>
<p>This will give N random numbers, in increasing order. </p>
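<p>A minimal Python sketch of A3 (illustrative only; the function name and parameters are my own). Because the strata are disjoint and visited in order, the output is strictly increasing, with only O(1) generator state:</p>

```python
import random

def sorted_unique_randoms(n, lo=-2**63, hi=2**63 - 1, seed=0):
    """A3: split [lo, hi] into n equal strata and draw one value per stratum.

    The strata are disjoint and visited left to right, so the stream is
    strictly increasing by construction - no sorting, no duplicate set.
    """
    rng = random.Random(seed)
    span = hi - lo + 1
    for i in range(n):
        start = lo + i * span // n          # first value of stratum i
        end = lo + (i + 1) * span // n - 1  # last value of stratum i
        yield rng.randint(start, end)

sample = list(sorted_unique_randoms(1000))
```

<p>Note that this is not the same distribution as sorting N independent uniform draws (each stratum contributes exactly one value), which is precisely the compromise A3 describes.</p>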
<p><strong>Update:</strong></p>
<p>After skimming the <a href="https://arxiv.org/abs/1702.03154" rel="nofollow noreferrer">BBHash paper</a> and <a href="https://github.com/rizkg/BBHash" rel="nofollow noreferrer">sources</a> a couple of times this is my understanding:</p>
<p>Given some integer set I and a subset S with N=|S| elements, the BBHash procedure will calculate a function f which maps S to some permutation of {1,..,N} (which permutation appears to be implicitly decided by the BBHash procedure) and maps all other elements from I to a special value Imax from I.</p>
<p>Possible tests:</p>
<p>Given S and f one might check if membership in S for some arbitrary element from I is properly calculated.</p>
<p>One might also check if f(S) = {1,..,N}.</p>
<p>My guess is that the requested algorithm is intended to calculate a sample set S for N=10^12 on the fly under a tight memory budget, needing uniqueness of the random number sequence rather than monotonicity.</p>
<p>To quote <a href="https://stackoverflow.com/a/35050835/2579220">https://stackoverflow.com/a/35050835/2579220</a></p>
<blockquote>
<p>Probabilistic data structures can't give you a definite answer,
instead they provide you with a reasonable approximation of the answer
and a way to approximate this estimation. They are extremely useful
for big data and streaming application because they allow to
dramatically decrease the amount of memory needed (in comparison to
data structures that give you exact answers).</p>
<p>In majority of the cases these data structures use hash functions to
randomize the items. Because they ignore collisions they keep the size
constant, but this is also a reason why they can't give you exact
values.</p>
</blockquote>
<p>In case of BBHash a sequence of different hash functions h_i is used. One applies different h_i until no collision occurs. This only works if the input is unique. It will only work if the implementation has enough different h_i in store for the particular S. </p> | 2017-07-07 06:36:10.957000+00:00 | 2017-07-09 15:18:50.533000+00:00 | 2017-07-09 15:18:50.533000+00:00 | null | 44,963,859 | <p>I need a generator for many (up to one trillion, 10^12) unique random 64-bit numbers.
The generator needs to return the numbers in sorted order (Long.MIN_VALUE to Long.MAX_VALUE). The problem is that sorting 10^12 numbers is slow. The use case is replicating a test that was run for <a href="https://github.com/rizkg/BBHash" rel="nofollow noreferrer">BBHash</a> (in the <a href="https://arxiv.org/pdf/1702.03154.pdf" rel="nofollow noreferrer">paper</a>, 4.5 Indexing a trillion keys).</p>
<p>The straightforward solution is to create a set in memory, using a huge bit set or so
to ensure no duplicates are returned.
But that uses too much memory or I/O.
I'd like to use at most a few MB of internal state.</p>
<p>The generator should use a java.util.Random internally.
It should be as "fair" as possible (have the same statistical distribution as if generated otherwise). I would also like to have a version for 128-bit numbers (2 longs).</p>
<p>What I have so far is code to create a set in memory (Java code):</p>
<pre><code>public static void main(String... args) {
for(long x : randomSet(10, 0)) {
System.out.println(x);
}
}
static Iterable<Long> randomSet(int size, int seed) {
Random r = new Random(seed);
TreeSet<Long> set = new TreeSet<Long>();
while (set.size() < size) {
set.add(r.nextLong());
}
return set;
}
-8292973307042192125
-7423979211207825555
-6688467811848818630
-4962768465676381896
-2228689144322150137
-1083761183081836303
-279624296851435688
4437113781045784766
6146794652083548235
7105486291024734541
</code></pre>
<p>The simplest (wrong) solution, which is not random, is to distribute the results evenly.
I don't think a solution along the lines of "add a random gap" will work,
because it is slow, and the sum of such gaps, after 10^12, will not land where it should (well, maybe: remember how many numbers are left, then re-calculate the distribution...). I think the following should work, but it is complex, and I am not sure what formulas to use: for each bit level,
recursively, calculating how many 0s / 1s will likely occur
(using the Binomial distribution or the approximation, the normal / Gaussian distribution, somehow).
Stop at some point (say, blocks of 1 million entries or less),
use the code above, for speed.
But maybe there is an elegant solution.
Maybe this is related to the Metropolis–Hastings algorithm, not sure.
I read "An Efficient Algorithm for Sequential Random Sampling",
but I think it is only for small n, and I find it hard to get to a simple algorithm from this.</p>
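<p>For illustration, here is a rough sketch of that recursive bit-level idea (my own naming; it assumes n is much smaller than the range, and a real implementation would replace the naive Bernoulli loop with a proper binomial sampler or its normal approximation):</p>

```python
import random

def sorted_sample(n, lo, hi, rng, leaf=256):
    """Recursively decide, via a binomial draw, how many of the n values
    fall into each half of [lo, hi]; below `leaf`, draw directly and sort."""
    if n == 0:
        return []
    if n <= leaf:
        vals = set()
        while len(vals) < n:            # rejection keeps the leaf unique
            vals.add(rng.randint(lo, hi))
        return sorted(vals)
    mid = (lo + hi) // 2
    p = (mid - lo + 1) / (hi - lo + 1)  # probability of the lower half
    k = sum(rng.random() < p for _ in range(n))  # naive Binomial(n, p) draw
    return sorted_sample(k, lo, mid, rng, leaf) + \
        sorted_sample(n - k, mid + 1, hi, rng, leaf)

s = sorted_sample(1000, -2**63, 2**63 - 1, random.Random(0))
```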
<p>Java code would be best, but C is fine (anyway at some point I might have to convert it to C / C++). I would like to not use too many libraries, to simplify porting.</p> | 2017-07-07 06:23:11.237000+00:00 | 2017-07-12 16:18:05.677000+00:00 | 2017-07-09 09:24:41.523000+00:00 | java|algorithm|random|sequence|distribution | ['https://arxiv.org/abs/1702.03154', 'https://github.com/rizkg/BBHash', 'https://stackoverflow.com/a/35050835/2579220'] | 3 |
73,534,829 | <p>For all 32 pieces still on the board, there is a bound of 1.89x10<sup>33</sup> states. That could be encoded in 14 bytes. See <a href="https://arxiv.org/pdf/2112.09386.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2112.09386.pdf</a>. Not sure how that's useful though, since there is such a thing as captures. You'd also need to handle other possible sets of pieces, due to promotions.</p> | 2022-08-29 21:25:09.590000+00:00 | 2022-08-29 23:55:21.953000+00:00 | 2022-08-29 23:55:21.953000+00:00 | null | 73,533,416 | <p>I want to minimize the bytes needed when I save a chess state with all figures:
Currently I need about 18 bytes to store the full board. I simply encode the possible positions for the White King and the other figures in a single number.</p>
<pre><code>BigInteger big1 = ZERO;
BigInteger small1 = ONE;
big1 = big1.add(small1.multiply(tn.binomialCache(64,1, cache).subtract(ONE))); // White K
small1 = small1.multiply(tn.binomialCache(64,1, cache));
big1 = big1.add(small1.multiply(tn.binomialCache(63,1, cache).subtract(ONE))); // White Q
small1 = small1.multiply(tn.binomialCache(63,1, cache));
big1 = big1.add(small1.multiply(tn.binomialCache(62,1, cache).subtract(ONE))); // Black K
small1 = small1.multiply(tn.binomialCache(62,1, cache));
big1 = big1.add(small1.multiply(tn.binomialCache(61,1, cache).subtract(ONE))); // Black Q
small1 = small1.multiply(tn.binomialCache(61,1, cache));
big1 = big1.add(small1.multiply(tn.binomialCache(60,2, cache).subtract(ONE))); // White T
small1 = small1.multiply(tn.binomialCache(60,2, cache));
big1 = big1.add(small1.multiply(tn.binomialCache(58,2, cache).subtract(ONE))); // Black T
small1 = small1.multiply(tn.binomialCache(58,2, cache));
big1 = big1.add(small1.multiply(tn.binomialCache(56,2, cache).subtract(ONE))); // White H
small1 = small1.multiply(tn.binomialCache(56,2, cache));
big1 = big1.add(small1.multiply(tn.binomialCache(54,2, cache).subtract(ONE))); // Black H
small1 = small1.multiply(tn.binomialCache(54,2, cache));
big1 = big1.add(small1.multiply(tn.binomialCache(52,2, cache).subtract(ONE))); // White R
small1 = small1.multiply(tn.binomialCache(52,2, cache));
big1 = big1.add(small1.multiply(tn.binomialCache(50,2, cache).subtract(ONE))); // Black R
small1 = small1.multiply(tn.binomialCache(50,2, cache));
big1 = big1.add(small1.multiply(tn.binomialCache(48,8, cache).subtract(ONE))); // White P
small1 = small1.multiply(tn.binomialCache(48,8, cache));
big1 = big1.add(small1.multiply(tn.binomialCache(40,8, cache).subtract(ONE))); // Black P
// nummber of states which a full chessboard can have
// big1 = 4634726695587809641192045982323285670400000
</code></pre>
<p>Is it possible to encode/decode the state of a chess board with all figures in less than 18 bytes?</p> | 2022-08-29 18:50:41.850000+00:00 | 2022-08-29 23:55:21.953000+00:00 | 2022-08-29 19:05:39.773000+00:00 | java|math|compression|chess|lossless-compression | ['https://arxiv.org/pdf/2112.09386.pdf'] | 1 |
48,963,537 | <h2>Which predictions get updated?</h2>
<blockquote>
<p>Stepping through the code, the reason it's happening is all the Q values that aren't meant to be updated (the ones for actions we didn't take) increase slightly. It's my understanding that passing the networks own output to the network during training should keep the output the same, not increase or decrease it.</p>
</blockquote>
<p>Below I have drawn a very simple neural network with 3 input nodes, 3 hidden nodes, and 3 output nodes. Suppose that you have only set a new target for the first action, and simply use the existing predictions as targets again for the other actions. This results in only a non-zero (for simplicity I'll just assume greater than zero) error (denoted by <code>delta</code> in the image) for the first action/output, and errors of <code>0</code> for the others. </p>
<p>I have drawn the connections through which this error will be propagated from output to hidden layer in bold. Note how each of the nodes in the hidden layer still gets an error. When these nodes then propagate their errors back to the input layer, they'll do this through <strong>all</strong> of the connections between input and hidden layer, so <strong>all</strong> of those weights can be modified. </p>
<p>So, imagine all those weights got updated, and now imagine doing a new forwards pass with the original inputs. Do you expect output nodes 2 and 3 to have exactly the same outputs as before? No, probably not; the connections from hidden nodes to the last two outputs may still have the same weights, but all three hidden nodes will have different activation levels. So no, the other outputs are not guaranteed to remain the same.</p>
<p><a href="https://i.stack.imgur.com/XCjnz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XCjnz.png" alt="Example Neural Network"></a></p>
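<p>This can be checked numerically. The tiny two-layer network below (an illustrative sketch, not the question's Keras model) takes one gradient step with a non-zero error on output 0 only, yet outputs 1 and 2 still change, because the shared hidden activations move:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 3))      # input -> hidden weights
W2 = rng.normal(size=(3, 3))      # hidden -> output weights
x = rng.normal(size=3)

def forward(W1, W2, x):
    h = np.tanh(W1 @ x)           # hidden activations shared by all outputs
    return h, W2 @ h

h, out = forward(W1, W2, x)
target = out.copy()
target[0] += 1.0                  # new target for action 0 only

err = out - target                # exactly zero for outputs 1 and 2
W2 -= 0.1 * np.outer(err, h)      # rows 1 and 2 of W2 stay untouched
dh = (W2.T @ err) * (1 - h**2)    # yet every hidden node receives an error
W1 -= 0.1 * np.outer(dh, x)       # so all input->hidden weights move

_, new_out = forward(W1, W2, x)
changed = np.abs(new_out[1:] - out[1:])   # outputs 1 and 2 did not stay fixed
```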
<blockquote>
<p>Is there some way I can mask the update so it only updates the relevant Q value?</p>
</blockquote>
<p>Not easily, no, if at all. The problem is that the connections between pairs of layers other than the final pair are not action-specific, and I don't think you want them to be either.</p>
<h2>Target Network</h2>
<blockquote>
<p>Is there something wrong with my model?</p>
</blockquote>
<p>One thing I'm seeing is that you seem to be updating the same network that is used to generate targets:</p>
<pre><code>target_f = self.target_model.predict(state)
</code></pre>
<p>and</p>
<pre><code>self.target_model.fit(state, target_f, epochs=1, verbose=0)
</code></pre>
<p>both use <code>self.target_model</code>. You should use separate copies of the network for those two lines, and only after longer periods of time copy the updated network's weights into the network used to compute targets. For a bit more on this, see Addition 3 in <a href="https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-4-deep-q-networks-and-beyond-8438a3e2b8df" rel="noreferrer">this post</a>.</p>
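<p>The pattern looks roughly like this (a framework-agnostic sketch, not the question's Keras code; the <code>Agent</code> class and its fields are invented for illustration):</p>

```python
import copy

class Agent:
    """Online/target split: train the online net every step, refresh the
    frozen target copy only every `sync_every` steps."""

    def __init__(self, sync_every=1000):
        self.online = {"w": 0.0}                  # stands in for a Q-network
        self.target = copy.deepcopy(self.online)  # frozen copy for targets
        self.sync_every = sync_every
        self.steps = 0

    def train_step(self, grad):
        # targets would be computed from self.target; updates hit self.online
        self.online["w"] -= 0.1 * grad
        self.steps += 1
        if self.steps % self.sync_every == 0:
            self.target = copy.deepcopy(self.online)

agent = Agent(sync_every=3)
for g in [1.0, 1.0, 1.0, 1.0]:
    agent.train_step(g)
# the target network lags the online network between syncs
```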
<h2>Double DQN</h2>
<p>Apart from that, it is well known that DQN can still have a tendency to overestimate Q values (though it generally shouldn't completely explode). This can be addressed by using <a href="https://arxiv.org/abs/1509.06461" rel="noreferrer">Double DQN</a> (note: this is an improvement that was added later on top of DQN).</p> | 2018-02-24 13:46:16.123000+00:00 | 2018-02-24 14:13:54.420000+00:00 | 2018-02-24 14:13:54.420000+00:00 | null | 48,898,104 | <p>I'm training a DQN to play OpenAI's Atari environment, but the Q-values of my network quickly explode far above what is realistic.</p>
<p>Here's the relevant portion of the code:</p>
<pre><code>for state, action, reward, next_state, done in minibatch:
if not done:
# To save on memory, next_state is just one frame
# So we have to add it to the current state to get the actual input for the network
next_4_states = np.array(state)
next_4_states = np.roll(next_4_states, 1, axis=3)
next_4_states[:, :, :, 0] = next_state
target = reward + self.gamma * \
np.amax(self.target_model.predict(next_4_states))
else:
target = reward
target_f = self.target_model.predict(state)
target_f[0][action] = target
self.target_model.fit(state, target_f, epochs=1, verbose=0)
</code></pre>
<p>The discount factor is 0.99 (it doesn't happen with discount factor 0.9, but also doesn't converge because it can't think far enough ahead).</p>
<p>Stepping through the code, the reason it's happening is all the Q values that aren't meant to be updated (the ones for actions we didn't take) increase slightly. It's my understanding that passing the networks own output to the network during training should keep the output the same, not increase or decrease it. Is there something wrong with my model? Is there some way I can mask the update so it only updates the relevant Q value?</p>
<p>EDIT: My model creation code is here:</p>
<pre><code>def create_model(self, input_shape, num_actions, learning_rate):
model = Sequential()
model.add(Convolution2D(32, 8, strides=(4, 4),
activation='relu', input_shape=(input_shape)))
model.add(Convolution2D(64, 4, strides=(2, 2), activation='relu'))
model.add(Convolution2D(64, 3, strides=(1, 1), activation='relu'))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dense(num_actions))
model.compile(loss='mse', optimizer=Adam(lr=learning_rate))
return model
</code></pre>
<p>I create two of these. One for the online network and one for the target.</p> | 2018-02-21 04:18:47.467000+00:00 | 2021-05-07 06:28:32.223000+00:00 | 2018-02-27 07:51:55.073000+00:00 | python|tensorflow|machine-learning|neural-network|keras | ['https://i.stack.imgur.com/XCjnz.png', 'https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-4-deep-q-networks-and-beyond-8438a3e2b8df', 'https://arxiv.org/abs/1509.06461'] | 3 |
68,983,257 | <p>Based on the replies, I tested two options: the <a href="https://cran.r-project.org/package=tdigest" rel="nofollow noreferrer"><code>tdigest</code> package</a> and the built-in parallelization routine from the <code>terra</code> package (the <code>cores</code> parameter). The t-Digest construction algorithm, by <a href="https://arxiv.org/abs/1902.04023v1" rel="nofollow noreferrer">Dunning et al. (2019)</a>, uses a variant of 1-dimensional k-means clustering to produce a very compact data structure that allows accurate estimation of quantiles. I recommend using the <code>tquantile</code> function, which reduced the processing time by a third on the tested dataset.</p>
<p>For those who were thinking about <code>foreach</code> parallelization, there is no simple solution for running a foreach loop with terra objects. For such tasks, I'm still using the good old raster package. <a href="https://stackoverflow.com/questions/67445883/terra-package-returns-error-when-try-to-run-parallel-operations">It is a planned update, but not in the short term - see here</a>. More details below.</p>
<h1>Toy dataset</h1>
<pre><code>library(terra)
# load the elevation raster shipped with the terra package
r <- rast( system.file("ex/elev.tif", package="terra") )
plot(r)
# number of iteration
n_it <- 20
# With `stats::quantile()` function
start_time <- Sys.time()
for (i in 1:n_it){
ra <- aggregate(r, 2 , fun = function(x) quantile(x, probs = .5, na.rm = T))
}
end_time <- Sys.time()
print(end_time-start_time)
</code></pre>
<blockquote>
<p>Time difference of 6.013551 secs</p>
</blockquote>
<h1>With <code>tdigest::tquantile()</code></h1>
<pre><code>library(tdigest)
start_time_tdigest <- Sys.time()
for (i in 1:n_it){
ra_tdigest <- aggregate(r, 2 , fun = function(x) tquantile(tdigest(na.omit(x)), probs = .5))
}
end_time_tdigest <- Sys.time()
print(end_time_tdigest-start_time_tdigest)
</code></pre>
<blockquote>
<p>Time difference of 1.922526 secs</p>
</blockquote>
<p>As suspected by Martin, the use of the <code>cores</code> parameter in the <code>terra:aggregate</code> function did not improve the processing time:</p>
<h1><code>stats::quantile()</code> + parallelization</h1>
<pre><code>start_time_parallel <- Sys.time()
for (i in 1:n_it){
ra_tdigest_parallel <- aggregate(r, 2 , fun = function(x) quantile(x, probs = .5, na.rm = T), cores = 2)
}
end_time_parallel <- Sys.time()
print(end_time_parallel-start_time_parallel)
</code></pre>
<blockquote>
<p>Time difference of 8.537751 secs</p>
</blockquote>
<h1><code>tdigest::tquantile()</code> + parallelization</h1>
<pre><code>tdigest_quantil_terra <- function(x) {
  require(tdigest)
  tquantile(tdigest(na.omit(x)), probs = .5)
}
start_time_tdigest_parallel <- Sys.time()
for (i in 1:n_it){
  ra_tdigest_parallel <- aggregate(r, 2,
                                   fun = function(x, ff) ff(x), cores = 2,
                                   ff = tdigest_quantil_terra)
}
end_time_tdigest_parallel <- Sys.time()
print(end_time_tdigest_parallel - start_time_tdigest_parallel)
</code></pre>
<blockquote>
<p>Time difference of 7.520231 secs</p>
</blockquote>
<p>In a nutshell:</p>
<blockquote>
<p>1 tdigest 1.922526 secs</p>
<p>2 base_quantile 6.013551 secs</p>
<p>3 tdigest_parallel 7.520231 secs</p>
<p>4 base_quantile_parallel 8.537751 secs</p>
</blockquote> | 2021-08-30 11:19:37.020000+00:00 | 2021-08-30 12:22:19.610000+00:00 | 2021-08-30 12:22:19.610000+00:00 | null | 68,864,334 | <p>I would like to use <code>aggregate</code> function from the <code>terra</code> <code>R</code> package to aggregate raster with a quantiles approach as aggregation function. Here below, I used the <code>quantile</code> function from <code>R base</code> to compute 50th percentile (i.e. the median) using a raster from the local package directory. I chose the 50th percentile for comparison with median but my goal is indeed to compute other quantile(s)...</p>
<pre><code>library(terra)
# load the elevation raster shipped with the terra package
r <- rast( system.file("ex/elev.tif", package="terra") )
plot(r)
# number of iteration
n_it <- 20
# with a custom function
start_time <- Sys.time()
for (i in 1:n_it){
ra <- aggregate(r, 2 , fun = function(x) quantile(x, probs = .5, na.rm = T))
}
end_time <- Sys.time()
</code></pre>
<p>It took my computer approx. 6 secs to do it 20 times.</p>
<pre><code>print(end_time-start_time)
</code></pre>
<blockquote>
<p>Time difference of 6.052727 secs</p>
</blockquote>
<p>When I ran the same <code>aggregate</code> call with the built-in median function, it took approx. 40 times less time to perform the very same 20 iterations!</p>
<pre><code># with a built-in function
start_time <- Sys.time()
for (i in 1:n_it){
ra <- aggregate(r, 2 , fun = median)
}
end_time <- Sys.time()
print(end_time-start_time)
</code></pre>
<blockquote>
<p>Time difference of 0.1456101 secs</p>
</blockquote>
<p>As I would like to compute other percentiles than the 50th, could someone provide some advises to speed up <code>aggregate</code> when using custom functions?</p> | 2021-08-20 15:05:43.013000+00:00 | 2021-08-30 13:02:14.473000+00:00 | 2021-08-30 13:02:14.473000+00:00 | r|r-raster|terra | ['https://cran.r-project.org/package=tdigest', 'https://arxiv.org/abs/1902.04023v1', 'https://stackoverflow.com/questions/67445883/terra-package-returns-error-when-try-to-run-parallel-operations'] | 3 |
32,647,314 | <p>This is not an ASIFT or better-than-ASIFT problem. Basically, ASIFT solves the "wide baseline stereo" problem - finding correspondences and geometric transformations between different views of the SAME object or scene.</p>
<p>What you are looking for is some kind of image (object) similarity. The state-of-the-art method for this is to train a neural net, get a fixed-length descriptor of the image from it, and compare descriptors by the Euclidean distance between them.</p>
<p>For example, have a look into "Neural Codes for Image Retrieval" paper - <a href="http://arxiv.org/abs/1404.1777" rel="nofollow">http://arxiv.org/abs/1404.1777</a> </p>
<p>P.S. If you still need correspondences and gave us different glasses by mistake, you can try MODS <a href="http://cmp.felk.cvut.cz/wbs/index.html" rel="nofollow">http://cmp.felk.cvut.cz/wbs/index.html</a>
The difference from ASIFT is that it can handle much bigger angular differences, and it is more stable and much faster.</p> | 2015-09-18 08:27:45.327000+00:00 | 2015-09-18 08:27:45.327000+00:00 | null | null | 32,592,185 | <p>I'm currently working on comparing objects at different angles for image detection. Basically, I want to know whether the object from image 1 is similar to the object from image 2 (% of similarity would be great).</p>
<p>Image1:</p>
<p><a href="https://i.stack.imgur.com/zVrm0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zVrm0.png" alt="Black glass in angle 1"></a></p>
<p>Image2:</p>
<p><a href="https://i.stack.imgur.com/Vppct.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vppct.png" alt="Black glass in angle 2"></a></p>
<p>I have already looked around on the Internet and it seems like ASIFT (<a href="http://www.ipol.im/pub/art/2011/my-asift/" rel="nofollow noreferrer">LINK</a>) is a great solution. However, when I implement their demo and rerun the demo multiple times with the same inputs, ASIFT gives out different results on matched vertices. </p>
<p>Why does ASIFT give out different results each time I rerun the demo with the same inputs?</p>
<p>PS:<br>
Some <em>comments</em> regarding alternative solutions like ASIFT or SIFT for comparing objects in a different angle (having a more consistent result) would be appreciated as well.</p> | 2015-09-15 17:28:30.717000+00:00 | 2015-09-18 08:43:59.367000+00:00 | 2015-09-16 21:14:29.793000+00:00 | image|opencv|image-processing|sift|asift | ['http://arxiv.org/abs/1404.1777', 'http://cmp.felk.cvut.cz/wbs/index.html'] | 2 |
27,573,513 | <p>You don't say which Operating System or CPU. It doesn't matter whether you choose libpcap or not, the underlying network performance is still burdened by the Operating System Memory Management and its network driver. libpcap has kept up with the paces and can handle 10Gbps, but there's more.</p>
<p>If you want the best CPU so that you can do number-crunching and run virtual machines while capturing packets, go with an AMD Opteron CPU, which still outperforms the Intel Xeon Quadcore 5540 2.53GHz (despite Intel's XIO/DDIO introduction, and mostly because of Intel dual cores sharing the same L2 cache). For the best ready-made OS, go with the latest FreeBSD as-is (which still outperforms Linux 3.10 networking on basic hardware). Otherwise, Intel and Linux will work just fine for basic drop-free 10Gbps capture, provided you are eager to roll up your sleeves.</p>
<p>If you're pushing for breakneck speed all the time while doing financial-like or stochastic or large matrix predictive computational crunching (or something), then read-on...</p>
<p>As RedHat has <a href="http://people.netfilter.org/hawk/presentations/nfws2014/dp-accel-10G-challenge.pdf" rel="nofollow noreferrer">discovered</a>, 67.2 nanoseconds is what it takes to process one minimal-sized packet at a 10Gbps rate. I assert it's closer to 81.6 nanoseconds for a 64-byte Ethernet payload, but they are talking about the 46-byte minimum as a theoretical figure.</p>
<p>To cut it short, you WON'T be able to DO or USE any of the following if you want 0% packet drop at full-rate by staying under 81.6 ns for each packet:</p>
<ul>
<li>Make an SKB call for each packet (to minimize that overhead, amortize this over several hundreds of packets)</li>
<li>TLB (Translation lookaside buffer, to avoid that, use HUGE page allocations) </li>
<li>Short latency (you did say 'capture', so latency is irrelevant here). It's called Interrupt Coalesce
(<code>ethtool -C rx-frames 1024+</code>). </li>
<li>Float processes across multi-CPU (must lock them down, one per network interface interrupt)</li>
<li>libc <code>malloc()</code> (must replace it with a faster one, preferably HUGE-based one)</li>
</ul>
<p>So, Linux has an edge over FreeBSD to capture the 10Gbps rate in 0% drop rate AND run several virtual machines (and other overheads). Just that it requires a new memory management (MM) of some sort for a specific network device and not necessarily the whole operating system. Most new super-high-performance network driver are now making devices use HUGE memory that were allocated at userland then using driver calls to pass a bundle of packets at a time.</p>
<p>Many new network-driver having repurposed MM are out (in no particular order):</p>
<ul>
<li>netmap</li>
<li>PF-RING</li>
<li>PF-RING+netmap</li>
<li>OpenOnload</li>
<li>DPDK</li>
<li>PacketShader</li>
</ul>
<p>The maturity level of each code is highly dependent on which Linux (or distro) version you choose. I've tried a few of them and once I understood the basic design, it became apparent what I needed. YMMV.</p>
<p>Updated: White paper on high speed packet architecture: <a href="https://arxiv.org/pdf/1901.10664.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1901.10664.pdf</a> </p>
<p>Good luck.</p> | 2014-12-19 20:34:37.777000+00:00 | 2019-02-28 19:52:00.797000+00:00 | 2019-02-28 19:52:00.797000+00:00 | null | 7,763,321 | <p>I want to capture packets from 10Gbps network card with 0 packet loss.
I am using libpcap for a 100Mbps NIC and it is working fine.
Will libpcap be able to handle 10Gbps NIC traffic?
If not, what are the alternative ways to achieve this?</p> | 2011-10-14 05:12:07.103000+00:00 | 2019-02-28 19:52:00.797000+00:00 | 2012-02-24 18:55:03.320000+00:00 | c++|networking | ['http://people.netfilter.org/hawk/presentations/nfws2014/dp-accel-10G-challenge.pdf', 'https://arxiv.org/pdf/1901.10664.pdf'] | 2 |
63,885,025 | <p>The problem is that your flags are ordinary <code>bool</code> variables, which results in a data race and therefore UB. That basically means that all bets are off about how your program behaves! For example, the compiler could hoist the load in the while-loop, effectively transforming the loop into an infinite loop. But infinite loops without side effects are also UB, so the compiler is well in its right to remove the loop entirely!</p>
<p>But even if the compiler does not perform these optimizations, the code still does not guarantee mutual exclusion, because the flag operations are not sequentially consistent. Essentially what can happen is that both threads store <code>true</code> in their respective flag, but it is not guaranteed that this updated value is visible to the other thread. So it can happen that the subsequent load operations in the while-loop still return false <em>for both threads</em>.</p>
<p>To get the desired behavior, the operations on the flags need to be <em>sequentially consistent</em>, which means that all such operations have a <em>single total order</em>. To achieve that you have to define your flags as <code>std::atomic<bool></code>. All operations on atomics are sequentially consistent by default (unless specified otherwise).</p>
<p>However, since this is the third attempt by Dekker and not the final (correct) version, it does provide mutual exclusion (under a sequentially consistent memory model), but is prone to deadlocks!</p>
<p>For more details you should familiarize yourself with the C++ memory model. I can recommend the paper <a href="https://arxiv.org/abs/1803.04432" rel="nofollow noreferrer">Memory Models for C/C++ Programmers</a>, which I have co-authored.</p> | 2020-09-14 13:10:39.113000+00:00 | 2020-09-15 10:43:27.470000+00:00 | 2020-09-15 10:43:27.470000+00:00 | null | 63,879,644 | <p>I'm studying operating systems and the program that Dekker wrote for his third attempt at mutual exclusion.
I wrote my code in C++ in Visual Studio; the code is below, but I wonder how these two threads can still be in the critical section at the same time.
The output of the program is below:</p>
<pre><code>#include<iostream>
#include<conio.h>
#include<thread>
using namespace std;
bool flag0 = false;
bool flag1 = false;
void p1()
{
flag0 = true;
while (flag1);
for (int i = 1; i <= 10; i++)
cout << "p1 : " << i << endl;
flag0 = false;
}
void p2()
{
flag1 = true;
while (flag0);
for (int i = -1; i >= -10; i--)
cout << "p2 : " << i << endl;
flag1 = false;
}
int main()
{
thread t1(p1);
thread t2(p2);
t1.join();
t2.join();
_getch();
return 0;
}
</code></pre>
<p>Output:</p>
<pre><code>p1 : p2 : -11
p2 : p1 : 2
p1 : 3
p1 : 4
p1 : 5
p1 : 6
-2
p2 : -3
p2 : -4
p2 : -5
p2 : -6
p2 : -7
p2 : -8
p2 : -9
p2 : p1 : -107
p1 : 8
p1 : 9
p1 : 10
</code></pre> | 2020-09-14 07:12:57.777000+00:00 | 2021-04-14 16:18:19.510000+00:00 | 2020-09-15 14:01:10.133000+00:00 | multithreading|visual-studio|c++11|visual-c++|operating-system | ['https://arxiv.org/abs/1803.04432'] | 1 |
38,455,581 | <p>Functional languages may not be directly useful for building great apps, but we use the functional programming paradigm heavily for building our apps. Pure functional programming imposes the constraint of "no side effects". This ensures that pure function calls will yield the same result in whatever order they are called. This is not ideal for web development, but if functional programming is combined with a state-changing system, a robust web app can be built. Have a look at my paper for more details: <a href="http://arxiv.org/abs/1607.05075" rel="nofollow">FAST Server</a>
Also see these <a href="http://www.slideshare.net/GopiSuvanam/function-augmented-state-transfer-fast-architecture-64151967" rel="nofollow">slides</a>.</p> | 2016-07-19 10:12:53.177000+00:00 | 2016-07-19 10:12:53.177000+00:00 | null | null | 292,033 | <p>I've been seeing so much recently about functional programming, and Clojure looks particularly interesting. While I 'understand' the basic description of what it is, I can't figure out how I would use it on a day-to-day basis as a web developer, if I can at all. A lot of what I have read focuses on the maths side of functional programming rather than typical programming situations found in regular OO.</p>
<p>Have I got the wrong end of the stick? Is functional programming totally un-related to web development? If not, are there any examples of it being used 'for the web'?</p> | 2008-11-15 02:27:40.570000+00:00 | 2020-10-23 15:18:54.190000+00:00 | 2016-07-22 22:03:51.787000+00:00 | clojure|functional-programming | ['http://arxiv.org/abs/1607.05075', 'http://www.slideshare.net/GopiSuvanam/function-augmented-state-transfer-fast-architecture-64151967'] | 2 |
63,254,942 | <p>From my experience benchmarking MCF in an industry setting, there are three publicly available implementations that are competitive:</p>
<ol>
<li>Andrew V Goldberg's cost scaling <a href="https://github.com/iveney/cs2" rel="nofollow noreferrer">implementation</a>.</li>
<li>Coin-OR's Lemon library cost scaling <a href="http://lemon.cs.elte.hu/pub/doc/latest/a00102.html" rel="nofollow noreferrer">implementation</a>.</li>
<li>Coin-OR's Network Simplex <a href="http://lemon.cs.elte.hu/pub/doc/latest/a00269.html" rel="nofollow noreferrer">implementation</a>.</li>
</ol>
<p>I would try those in that order if you are limited on time. Other honorable mentions are:</p>
<ol>
<li>Google-OR's cost scaling <a href="https://github.com/google/or-tools/blob/stable/ortools/graph/min_cost_flow.cc" rel="nofollow noreferrer">implementation</a>. I haven't benchmarked this, but I'd expect it to be competitive with those above.</li>
<li>MCFClass has several <a href="http://groups.di.unipi.it/optimize/Software/MCF.html" rel="nofollow noreferrer">implementations</a> listed under various restricted licenses for commercial use. RelaxIV is very competitive but restrictive.</li>
</ol>
<p>In terms of studying the literature and a survey of competitive algorithms, the work of <a href="https://arxiv.org/pdf/1207.6381.pdf" rel="nofollow noreferrer">Király and Kovács</a> is an excellent starting point.</p> | 2020-08-04 20:59:09.533000+00:00 | 2020-08-05 01:17:31.840000+00:00 | 2020-08-05 01:17:31.840000+00:00 | null | 63,251,259 | <p>Can someone tell me which is the best algorithm for minimum cost maximum flow (and easy to implement)? Pointers on where to read about it would be helpful. I searched online and got the names of many algorithms but am unable to decide which one to study.</p> | 2020-08-04 16:33:29.790000+00:00 | 2020-08-05 01:17:31.840000+00:00 | null | algorithm | ['https://github.com/iveney/cs2', 'http://lemon.cs.elte.hu/pub/doc/latest/a00102.html', 'http://lemon.cs.elte.hu/pub/doc/latest/a00269.html', 'https://github.com/google/or-tools/blob/stable/ortools/graph/min_cost_flow.cc', 'http://groups.di.unipi.it/optimize/Software/MCF.html', 'https://arxiv.org/pdf/1207.6381.pdf'] | 6 |
47,018,029 | <blockquote>
<p>And the fully-connected layer is something like a feature list abstracted from convoluted layers.</p>
</blockquote>
<p>Yes, it's correct. The goal of this layer is to combine the features detected from the image patches for a particular task. In some (very simplified) sense, conv layers are smart <em>feature extractors</em>, and the FC layers are the actual network.</p>
<blockquote>
<p>Why two? And why not three or four or more?</p>
</blockquote>
<p>I can't say the exact reasons for these particular networks, but I can imagine a few possible reasons why this choice makes sense:</p>
<ul>
<li><p>You don't want to make your first FC layer too big, because it contains most of model parameters, or, in other words, consumes most memory. E.g. VGGNet has <code>7*7*512*4096 = 102,760,448</code> parameters in FC layer, which is 72% of all network parameters. Making it twice as big will make it 85%!</p>
<p>Hence, two smaller FC layers, one after another, is generally more flexible, given memory constraints, than one big FC layer.</p></li>
<li><p>Conv layers are much more important in terms of accuracy than the way they are combined in the top layers. There's nothing wrong with three or more FC layers, but I don't think you'll see any significant changes if you try that.</p>
<p>In fact, the case of <a href="http://arxiv.org/abs/1412.6806" rel="noreferrer">all-convolutional network</a> has shown that one can greatly simplify the network by replacing FC layers with convolutional layers without visible performance degradation. I'd like to stress here: <em>these networks do not contain FC layers at all</em>. I won't be surprised if the authors didn't spend too much time on FC part and were focused on the earlier layers. The latest CNNs tend to get rid of FC layers as well.</p></li>
</ul>
<p>By the way, I don't think that computational cost is a big factor as far as the FC layer is concerned, because most of the computation is happening in the first conv layer. Remember that convolution is a much more expensive operation than matrix multiplication.</p>
So why two? Is there any intrinsic idea behind this?</p>
<p>I try to understand it in this way: before the fully-connected layer, we have a bunch of convolutional layers, which might contain various high-level features. And the fully-connected layer is something like a feature list abstracted from the convolutional layers. But in this sense, one FC layer should be enough. Why two? And why not three or four or more? I guess a constraint behind this might be computing cost. But is it necessary that more FC layers always provide better results? And what might be the reason for choosing two?</p>
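The parameter-count argument in the answer above is easy to sanity-check with a few lines of arithmetic (counting weights only, biases ignored; shapes are VGG's):

```python
# Back-of-the-envelope parameter counts for VGG-style layers,
# illustrating why the first FC layer dominates memory.

def conv_params(k, c_in, c_out):
    # a k x k convolution mapping c_in channels to c_out channels
    return k * k * c_in * c_out

def fc_params(n_in, n_out):
    # a fully-connected layer is just an n_in x n_out weight matrix
    return n_in * n_out

first_fc = fc_params(7 * 7 * 512, 4096)   # VGG's first FC layer
last_conv = conv_params(3, 512, 512)      # one 3x3 conv at 512 channels

print(first_fc)    # -> 102760448, matching the figure quoted above
print(last_conv)   # -> 2359296, roughly 40x smaller
```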
73,539,240 | <p>I think the algorithm presented here <a href="https://arxiv.org/abs/1406.7130" rel="nofollow noreferrer">https://arxiv.org/abs/1406.7130</a> could work for your problem. It is implemented in Julia here <a href="https://github.com/twMisc/Clustering-ToMaTo" rel="nofollow noreferrer">https://github.com/twMisc/Clustering-ToMaTo</a>
I forked the project to refactor it as a package here
<a href="https://pnavaro.github.io/ClusteringToMaTo.jl" rel="nofollow noreferrer">https://pnavaro.github.io/ClusteringToMaTo.jl</a>
with some examples <a href="https://pnavaro.github.io/ClusteringToMaTo.jl/dev/demo2/" rel="nofollow noreferrer">https://pnavaro.github.io/ClusteringToMaTo.jl/dev/demo2/</a></p>
<p>Perhaps you can put the code inside your project and adapt it. I hope it helps.</p>
<p>My objective is to offer this algorithm in a cleaner package <a href="https://github.com/pnavaro/GeometricClusterAnalysis.jl" rel="nofollow noreferrer">https://github.com/pnavaro/GeometricClusterAnalysis.jl</a>
but it is not finished yet.</p> | 2022-08-30 08:20:17.213000+00:00 | 2022-08-30 08:24:58.193000+00:00 | 2022-08-30 08:24:58.193000+00:00 | null | 73,538,425 | <p>I am trying to find clusters in some data with high noise (see plot below).</p>
<p><a href="https://i.stack.imgur.com/iFux4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iFux4.png" alt="enter image description here" /></a></p>
<p>I tried using DBSCAN, which sort of worked, but it required quite a bit of manual tuning of the input parameters to find the clusters properly. Are there any other good clustering algorithms for dealing with this kind of data?</p>
<p>Some considerations:</p>
<ul>
<li><p>I am using Julia to do my data processing.</p>
</li>
<li><p>The data has periodic boundary conditions in both directions.</p>
</li>
<li><p>The number of clusters is known a priori.</p>
</li>
<li><p>I am planning to process many datasets in this way, so it should run relatively fast and not require <em>too</em> much manual fiddling.</p>
</li>
</ul>
<p>Thanks!</p> | 2022-08-30 07:05:34.610000+00:00 | 2022-08-30 11:06:56.737000+00:00 | null | julia|data-science|cluster-analysis | ['https://arxiv.org/abs/1406.7130', 'https://github.com/twMisc/Clustering-ToMaTo', 'https://pnavaro.github.io/ClusteringToMaTo.jl', 'https://pnavaro.github.io/ClusteringToMaTo.jl/dev/demo2/', 'https://github.com/pnavaro/GeometricClusterAnalysis.jl'] | 5 |
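One practical detail for the periodic boundary conditions mentioned above: many clustering implementations accept a precomputed distance matrix, so it is often enough to define a wraparound (toroidal) metric and build that matrix yourself. A minimal sketch in Python (it translates directly to Julia; coordinates are assumed to be normalized to [0, L) in each dimension):

```python
import math

def periodic_distance(p, q, box=(1.0, 1.0)):
    """Euclidean distance on a torus with side lengths given by `box`."""
    total = 0.0
    for a, b, length in zip(p, q, box):
        d = abs(a - b) % length
        d = min(d, length - d)      # take the shorter way around the torus
        total += d * d
    return math.sqrt(total)
```

Feeding pairwise distances computed this way to a density-based method (e.g. DBSCAN with a precomputed metric) makes clusters that straddle the boundary come out whole.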
56,021,224 | <p>As a general rule of thumb, Bayesian optimisation becomes ineffective when the dimension of the search space is > 15. This will obviously depend on volume and utility function landscape. Check <a href="https://arxiv.org/abs/1902.10675" rel="nofollow noreferrer">https://arxiv.org/abs/1902.10675</a> for an example of bayesian optimisation coupled with some reversible dimensional reduction (an auto-encoder there).</p> | 2019-05-07 11:06:42.167000+00:00 | 2019-05-07 11:06:42.167000+00:00 | null | null | 55,537,093 | <p>Does anybody know how quickly the Bayesian Optimization algorithm slows down as a function of the dimension of the search space? What is a good estimate of the maximum dimension that one can reasonably use? I am thinking especially of GPyOpt and GPFlowOpt.</p> | 2019-04-05 14:00:19.707000+00:00 | 2019-05-07 11:06:42.167000+00:00 | null | python|machine-learning|optimization | ['https://arxiv.org/abs/1902.10675'] | 1 |
56,981,156 | <p>Yes, you can do Multi-Horizon forecasting with LSTM. You can find a similar architecture by Amazon <a href="https://arxiv.org/pdf/1711.11053.pdf" rel="nofollow noreferrer">here</a>. </p>
<p>PS: these types of questions seem more suitable for <a href="https://datascience.stackexchange.com">https://datascience.stackexchange.com</a>. </p>
<pre><code>time feature1 feature2 feature3 output
day1 ....................................
day2.....................................
day3....................................
day4...................................
day5...................................
...
...
day30.................................
</code></pre>
<p>Assume I got the data from an experiment and collected 30 days' worth of data in the format shown above. Using an LSTM I can definitely predict the output of day 31 as long as I obtain the input (features 1 to 3 in this case) on day 31. My question is: if I miss the experimental input data from day 31 to 50 (too busy to do the experiment), could I still use an LSTM to predict the output of day 51? (I may have time to do the experiment again on day 51 ^.^). </p>
<p>This problem is essentially unlike the stock prediction problem that is typically analyzed with LSTMs, since in the stock prediction problem the output at time t can be considered the input at time t+1. However, in this particular problem, the inputs (features 1 to 3) cannot be directly linked to the output.</p>
<p>Can anyone help to clarify/solve this?</p>
<p>Thank you very much.</p> | 2019-07-10 18:13:14.770000+00:00 | 2019-07-11 03:17:05.690000+00:00 | null | python|keras|lstm | ['https://arxiv.org/pdf/1711.11053.pdf', 'https://datascience.stackexchange.com'] | 2 |
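One common way to set up the direct multi-horizon targets suggested in the answer above is to window the series so that each training pair maps a block of past feature rows to the next few outputs; no predicted output ever has to be fed back in as an input. A minimal sketch (names are illustrative, not from any particular library):

```python
def make_multihorizon_samples(features, outputs, window, horizon):
    """Build (X, Y) pairs: `window` past feature rows -> next `horizon` outputs."""
    X, Y = [], []
    for t in range(window, len(features) - horizon + 1):
        X.append(features[t - window:t])   # inputs up to (not including) time t
        Y.append(outputs[t:t + horizon])   # the next `horizon` target values
    return X, Y
```

With day 31-50 missing, a gap like that simply produces no training pairs that span it; the model trained on the remaining windows can still be asked for a `horizon`-step forecast from the last observed window.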
67,796,866 | <p>Short answer to the question: yes, they are overfitting. Most of the important NLP data sets are not actually well-crafted enough to test what they claim to test, and instead test the ability of the model to find subtle (and not-so-subtle) patterns in the data.</p>
<p>The best tool I know for creating data sets that help deal with this is <a href="https://github.com/marcotcr/checklist/" rel="nofollow noreferrer">Checklist</a>. The corresponding paper, <a href="https://arxiv.org/abs/2005.04118" rel="nofollow noreferrer">"Beyond Accuracy: Behavioral Testing of NLP models with CheckList"</a>, is very readable and goes into depth on this type of issue. They have a very relevant table... but first, some terms:</p>
<blockquote>
<p>We prompt users to evaluate each capability with three different test types (when possible): Minimum Functionality tests, Invariance, and Directional Expectation tests... A Minimum Functionality test (MFT), is a collection of simple examples (and labels) to check a behavior within a capability. MFTs are similar to creating small and focused testing datasets, and are particularly useful for detecting when models use shortcuts to handle complex inputs without actually mastering the capability.</p>
</blockquote>
<blockquote>
<p>...An Invariance test (INV) is when we apply label-preserving perturbations to inputs and expect the model prediction to remain the same.</p>
</blockquote>
<blockquote>
<p>A Directional Expectation test (DIR) is similar, except that the label is expected to change in a certain way. For example, we expect that sentiment will not become more positive if we add “You are lame.” to the end of tweets directed at an airline (Figure 1C).</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/s743t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s743t.png" alt="A table from the Checklist paper about duplicate issue identification" /></a></p> | 2021-06-01 22:41:31.723000+00:00 | 2021-06-01 22:41:31.723000+00:00 | null | null | 67,714,372 | <p>I've been working on a sentence transformation task that involves paraphrase identification as a critical step: if we are confident enough that the state of the program (a sentence repeatedly modified) has become a paraphrase of a target sentence, stop transforming. The overall goal is actually to study potential reasoning in predictive models that can generate language prior to a target sentence. The approach is just one specific way of reaching that goal. Nevertheless, I've become interested in the paraphrase identification task itself, as it's received some boost from language models recently.</p>
<p>The problem I run into is when I manipulate sentences from examples or datasets. For example, in this <a href="https://huggingface.co/transformers/v2.6.0/usage.html#sequence-classification" rel="nofollow noreferrer">HuggingFace example</a>, if I negate either sequence or change the subject to Bloomberg, I still get a majority "is paraphrase" prediction. I started going through many examples in the <a href="https://github.com/wasiahmad/paraphrase_identification/blob/master/dataset/msr-paraphrase-corpus/msr_paraphrase_train.txt" rel="nofollow noreferrer">MSRPC</a> training set and negating one sentence in a positive example or making one sentence in a negative example a paraphrase of the other, especially when doing so would be a few word edit. I found to my surprise that various language models, like <code>bert-base-cased-finetuned-mrpc</code> and <code>textattack/roberta-base-MRPC</code>, don't change their confidences much on these sorts of changes. It's surprising as these models claim an f1 score of <a href="https://huggingface.co/textattack/facebook-bart-large-MRPC/blob/main/eval_results_mrpc.txt" rel="nofollow noreferrer">0.918</a>+. The dataset is clearly missing a focus on negative examples and small perturbative examples.</p>
<p>My question is, are there datasets, techniques, or models that deal well when given small edits? I know that this is an extremely generic question, much more than is typically asked on StackOverflow, but my concern is in finding practical tools. If there is a theoretical technique, then it might not be suitable as I'm in the category of "available tools define your approach" rather than vice-versa. So I hope that the community would have a recommendation on this.</p> | 2021-05-27 00:52:49.370000+00:00 | 2021-07-10 00:11:29.463000+00:00 | null | nlp|huggingface-transformers|msrpc | ['https://github.com/marcotcr/checklist/', 'https://arxiv.org/abs/2005.04118', 'https://i.stack.imgur.com/s743t.png'] | 3 |
57,801,585 | <p>Following Apache Tika <a href="https://tika.apache.org/1.17/index.html" rel="nofollow noreferrer">changelog</a> I came to <a href="https://issues.apache.org/jira/browse/TIKA-2262" rel="nofollow noreferrer">this</a> feature request for image captioning. There, the author stated that they used Google </p>
<blockquote>
<p>'show and tell' neural network</p>
</blockquote>
<p>mentioned in <a href="https://ai.googleblog.com/2016/09/show-and-tell-image-captioning-open.html" rel="nofollow noreferrer">this</a> blog post.</p>
<p>Also, <a href="https://arxiv.org/abs/1609.06647" rel="nofollow noreferrer">here</a> is a link to the paper if you want to compare it to the current state of the art methods.</p> | 2019-09-05 08:39:17.277000+00:00 | 2019-09-05 08:39:17.277000+00:00 | null | null | 57,801,343 | <p>Iam working on an image captioning tool and came across the apache tika</p>
<p><a href="https://tika.apache.org/1.18/api/org/apache/tika/parser/captioning/tf/TensorflowRESTCaptioner.html" rel="nofollow noreferrer">TensorflowRESTCaptioner</a></p>
<p>and would like to now which model does it use internally and how good are the results when compared with the state of the art right now in the market</p>
<p><a href="https://github.com/facebookresearch/pythia" rel="nofollow noreferrer">pythia - BUTF - FacebookResearch</a></p> | 2019-09-05 08:24:55.850000+00:00 | 2019-09-05 08:39:17.277000+00:00 | null | apache|tensorflow|deep-learning|apache-tika | ['https://tika.apache.org/1.17/index.html', 'https://issues.apache.org/jira/browse/TIKA-2262', 'https://ai.googleblog.com/2016/09/show-and-tell-image-captioning-open.html', 'https://arxiv.org/abs/1609.06647'] | 4 |
71,209,632 | <p>XAI methods primarily help in improving models based on better understanding and faster debugging. Thus, as an engineer you can use targeted measures to improve a model.</p>
<p>To the best of my knowledge, there is just one scientific work (see below) that uses explanations of XAI methods directly in the training process of models. Essentially, the paper proposes a novel reasoning approach. First, a model makes a decision. Then, an explanation of the decision is computed. Then, the same (or possibly another) model uses the original input and the explanation to come to a final decision, i.e., in some sense the network "reflects"/contemplates on its initial decision and its reasons to come to a final conclusion.</p>
<p>"Reflective-Net: Learning from Explanations" by Schneider et al.
<a href="https://arxiv.org/abs/2011.13986" rel="nofollow noreferrer">https://arxiv.org/abs/2011.13986</a></p>
<p>Disclaimer: I am the inventor and first author of the paper.</p> | 2022-02-21 16:17:12.330000+00:00 | 2022-02-21 16:17:12.330000+00:00 | null | null | 70,749,714 | <p>I'm learning about explainable AI (XAI) and some of papers I've read say that we can use XAI to improve model's performance. It seems quite a new problem cuz I think when the model has already converged, it's impossible to find a new global minimum and this contradicts the above statement. I want to ask if there is anyways to improve the model's results that relevant to XAI methods? And if there is, how do they work? Tks a lots!!</p> | 2022-01-18 02:36:49.703000+00:00 | 2022-02-21 16:17:12.330000+00:00 | 2022-01-18 02:45:01.417000+00:00 | deep-learning|convergence|xai | ['https://arxiv.org/abs/2011.13986'] | 1 |
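The decide, explain, re-decide loop described above can be sketched schematically. Everything below is a stand-in (a linear "model" and a gradient-times-input "explanation"), chosen only to show the data flow; it is not the paper's architecture:

```python
def decide(weights, x):
    # stand-in "model": a plain linear scorer
    return sum(w * xi for w, xi in zip(weights, x))

def explain(weights, x):
    # for a linear model, gradient-x-input saliency is simply w_i * x_i
    return [w * xi for w, xi in zip(weights, x)]

def reflective_decide(w1, w2, x):
    first = decide(w1, x)               # 1. initial decision
    saliency = explain(w1, x)           # 2. explanation of that decision
    # 3. second pass sees the original input plus the explanation and
    #    the initial decision, and produces the final output
    return decide(w2, x + saliency + [first])
```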
67,669,548 | <p>Changing <strong>num_layers</strong> won't help you in this case.<br />
SSD has several detection branches, each derived from a different feature map resolution.<br />
For example, for mobilenet_v1_ssd found in the TF OD API there are 6 branches.<br />
branch 0 - 19x19<br />
branch 1 - 10x10<br />
branch 2 - 5x5<br />
branch 3 - 3x3<br />
branch 4 - 2x2<br />
branch 5 - 1x1</p>
<p>So the argument <strong>num_layers</strong> will actually change the number of branches, but not the number of anchors per pixel.</p>
<p>In case you want to have only one anchor box per pixel, you will have to change several things (read until the end :) ):</p>
<ol>
<li><strong>Aspect ratios</strong> - By changing the number of aspect ratios you change the number of anchors per layer. So:</li>
</ol>
<pre class="lang-py prettyprint-override"><code> anchor_generator {
ssd_anchor_generator {
num_layers: 6
min_scale: 0.2
max_scale: 0.95
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
aspect_ratios: 3.0
aspect_ratios: 0.3333
}
}
</code></pre>
<p>So removing all but 1.0, for example, will leave you with <strong>almost</strong> one anchor per pixel.
2. Why almost? SSD (<a href="https://arxiv.org/abs/1512.02325" rel="nofollow noreferrer">https://arxiv.org/abs/1512.02325</a>, see the paper) adds another anchor box with a scale equal to the geometric mean of the current branch scale and the next branch scale. So you will actually have 2 anchors per pixel for branches 1-5. To disable it, set the parameter named <strong>interpolated_scale_aspect_ratio</strong> to 0.<br />
<a href="https://i.stack.imgur.com/q2HjJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q2HjJ.png" alt="enter image description here" /></a><br />
You can do it by adding this argument to the config file (see the protobuf of ssd anchors - <a href="https://github.com/tensorflow/models/blob/2ad3e213838f71e92af198d917ac5574c9d60294/research/object_detection/protos/ssd_anchor_generator.proto" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/2ad3e213838f71e92af198d917ac5574c9d60294/research/object_detection/protos/ssd_anchor_generator.proto</a>)
so under anchor_generator add:</p>
<pre><code>interpolated_scale_aspect_ratio: 0
</code></pre>
<ol start="3">
<li>If you followed 1 and 2, you will see that you have 1 anchor per pixel per branch, but only for branches 1-5. For some reason you will have 3 anchors for the first branch... This is because the first branch gets three anchors by default. You can disable it by using the following argument:<br />
<a href="https://i.stack.imgur.com/qvanz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qvanz.png" alt="enter image description here" /></a></li>
</ol>
<pre><code>reduce_boxes_in_lowest_layer: false
</code></pre> | 2021-05-24 09:20:21.023000+00:00 | 2021-05-24 09:20:21.023000+00:00 | null | null | 61,768,401 | <p>How I can implement a single shot detector with only one anchor box per feature map cell?
I have used ssd mobilenet coco 2018 v1 and I have changed the anchor generator part in config file I.e num layers = 1 in ssd anchor generator. Will that be okay??</p>
<p><a href="https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v1_coco.config" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v1_coco.config</a></p>
<p>anchor_generator {
ssd_anchor_generator {
num_layers: 6
min_scale: 0.2
max_scale: 0.95
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
aspect_ratios: 3.0
aspect_ratios: 0.3333</p> | 2020-05-13 07:09:23.010000+00:00 | 2021-07-11 14:18:40.373000+00:00 | 2020-05-14 23:59:51.543000+00:00 | python|tensorflow|object-detection|tensorflow-ssd | ['https://arxiv.org/abs/1512.02325', 'https://i.stack.imgur.com/q2HjJ.png', 'https://github.com/tensorflow/models/blob/2ad3e213838f71e92af198d917ac5574c9d60294/research/object_detection/protos/ssd_anchor_generator.proto', 'https://i.stack.imgur.com/qvanz.png'] | 4 |
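To sanity-check a config against the three knobs described in the answer above, the simplified counting rule can be written down explicitly (my own helper, not part of the TF OD API):

```python
def anchors_per_pixel(aspect_ratios, interpolated_scale_aspect_ratio=1.0,
                      first_branch=False, reduce_boxes_in_lowest_layer=True):
    """Anchors per pixel for one branch, per the simplified rules above."""
    if first_branch and reduce_boxes_in_lowest_layer:
        return 3                      # the hard-coded lowest-layer default
    n = len(aspect_ratios)
    if interpolated_scale_aspect_ratio > 0:
        n += 1                        # the extra geometric-mean-scale box
    return n

# defaults of the config quoted above: 5 ratios + interpolated box
print(anchors_per_pixel([1.0, 2.0, 0.5, 3.0, 0.3333]))              # -> 6
# the single-anchor setup from steps 1-3
print(anchors_per_pixel([1.0], interpolated_scale_aspect_ratio=0))  # -> 1
```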
11,317,531 | <p>Yes. See <a href="http://arxiv.org/abs/1106.2578" rel="noreferrer">this paper</a> if you're curious about how it's implemented. In general, all of the syntactic forms that are not listed on <a href="http://docs.racket-lang.org/reference/syntax-model.html#(part._fully-expanded)" rel="noreferrer">this page</a> in the docs are built as macros.</p> | 2012-07-03 19:07:55.123000+00:00 | 2012-07-03 19:07:55.123000+00:00 | null | null | 11,315,559 | <p>Simple question - is the <code>match</code> form in Racket a macro? It certainly seems like it could be defined as a macro, but I thought it might be baked further into the implementation to make it faster or something...</p> | 2012-07-03 16:52:26.200000+00:00 | 2012-07-03 19:07:55.123000+00:00 | null | macros|pattern-matching|racket | ['http://arxiv.org/abs/1106.2578', 'http://docs.racket-lang.org/reference/syntax-model.html#(part._fully-expanded)'] | 2 |
27,110,551 | <p>This paper describes a method:</p>
<p>Oberdorf, R.; Ferguson, A.; Jacobsen, J.L.; Kondev, J. - <a href="http://arxiv.org/abs/cond-mat/0508094" rel="nofollow">Secondary Structures in Long Compact Polymers</a> (arXiv.org)</p>
<p>The method roughly consists of the following: start with a zig-zag pattern (a non-random Hamiltonian path on the grid) and repeatedly apply a transformation (called "backbite") to the path. A backbite consists of adding an edge from one of the endpoints A to an adjacent vertex B other than the one A is connected to (thus creating a loop), and then removing the edge at B that is not the one just added and that causes a loop (there will always be exactly one such edge other than the one just added).</p>
<p>The authors add some conditions to obtain rough uniformity (including an estimation on how many times to apply the backbite move). Details in the paper.</p>
<p>The authors also prove empirically that the probability of their method generating adjacent endpoints approximately matches the theoretical probability in uniform random Hamiltonian paths.</p>
<p>There is an implementation of the algorithm in JavaScript here: <a href="http://lattice.complex.unimelb.edu.au/hamiltonian_path.html" rel="nofollow">Hamiltonian Path Generator</a></p> | 2014-11-24 17:25:12.773000+00:00 | 2014-11-24 17:49:06.110000+00:00 | 2014-11-24 17:49:06.110000+00:00 | null | 7,371,227 | <p>I'm looking for an efficient algorithm that is able to find an as random as possible <a href="http://en.wikipedia.org/wiki/Hamiltonian_path" rel="noreferrer">Hamiltonian path</a> in a bidirectional N*M grid.</p>
<p>Does anyone know where I can find, or how to go about constructing such an algorithm?</p>
<hr>
<p>I've already found an efficient approach (see image below). The end result here is a Hamiltonian cycle. Removing a random edge will make it a Hamiltonian path. This algorithm is efficient, but does not provide enough randomness. This approach will always have the begin and end point of the path right next to each other, while I'd like to have those in random locations.
<a href="http://img593.imageshack.us/img593/8060/sfc.png" rel="noreferrer">Space-filling curve http://img593.imageshack.us/img593/8060/sfc.png</a>
Image taken from <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.35.3648&rep=rep1&type=pdf" rel="noreferrer">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.35.3648&rep=rep1&type=pdf</a></p> | 2011-09-10 10:57:11.377000+00:00 | 2022-06-16 15:51:25.577000+00:00 | null | algorithm|graph-algorithm|hamiltonian-cycle | ['http://arxiv.org/abs/cond-mat/0508094', 'http://lattice.complex.unimelb.edu.au/hamiltonian_path.html'] | 2 |
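The backbite procedure described in the answer above is short enough to sketch in pure Python (grid cells as (row, col) tuples; the iteration count is left to the caller, since the proper mixing estimate is in the paper):

```python
import random

def neighbors(cell, rows, cols):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < rows and 0 <= c + dc < cols:
            yield (r + dr, c + dc)

def backbite(path, rows, cols, rng):
    if rng.random() < 0.5:                 # act on a random endpoint
        path.reverse()
    head = path[0]
    options = [c for c in neighbors(head, rows, cols) if c != path[1]]
    if not options:                        # degenerate 1xN grids
        return
    b = rng.choice(options)
    i = path.index(b)                      # O(V) lookup; fine for a sketch
    # add edge head-b, drop the loop-causing edge at b: done by reversing
    # the prefix path[0:i], which leaves a valid Hamiltonian path
    path[:i] = reversed(path[:i])

def random_hamiltonian_path(rows, cols, iters, seed=None):
    rng = random.Random(seed)
    # start from a deterministic zig-zag (boustrophedon) Hamiltonian path
    path = [(r, c if r % 2 == 0 else cols - 1 - c)
            for r in range(rows) for c in range(cols)]
    for _ in range(iters):
        backbite(path, rows, cols, rng)
    return path
```

Each move keeps the path Hamiltonian by construction; uniformity (approximately) comes from running enough moves, per the paper.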
55,675,732 | <p>Well, this is a situation ("negative images") where, as it was revealed relatively recently, the results are not what we may seem to expect them to be...</p>
<p>There is an unpublished paper @ ArXiv, which shows exactly that CNN models that have achieved almost perfect test accuracy at datasets like MNIST & CIFAR-10, fail to give similar performance in the respective "negative" images (i.e. with inverted background & foreground, like your case here):</p>
<p><a href="https://arxiv.org/abs/1703.06857" rel="nofollow noreferrer">On the Limitation of Convolutional Neural Networks in Recognizing Negative Images</a></p>
<p>Here is the main result of the paper:</p>
<p><a href="https://i.stack.imgur.com/MyeOc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MyeOc.png" alt="enter image description here"></a></p>
<p>The issue is rather non-trivial, and there has been strong disagreement in the community as to if this result is indeed expected & unsurprising or not; see the (now archived) relevant discussion @ <a href="https://www.reddit.com/r/MachineLearning/comments/60tex8/r170306857_deep_neural_networks_do_not_recognize/" rel="nofollow noreferrer">Reddit</a>, as well as a relevant piece @ <a href="https://www.kdnuggets.com/2017/04/negative-results-images-flaw-deep-learning.html" rel="nofollow noreferrer">KDNuggets</a>.</p>
<p>All in all, as the paper also suggests, you can do it with <em>one</em> model, but you will need to include at least some such "negative" images in your training. See also the SO thread <a href="https://stackoverflow.com/questions/48839618/high-training-accuracy-but-low-prediction-performance-for-tensorflows-official">High training accuracy but low prediction performance for Tensorflow's official MNIST model</a>.</p> | 2019-04-14 13:19:44.187000+00:00 | 2019-04-16 09:53:16.507000+00:00 | 2019-04-16 09:53:16.507000+00:00 | null | 55,675,344 | <p>I've recently started with Deep Learning and CNN which as quoted, attempts to extract the most optimal features from samples on it's own.
I made a model to recognize characters where the training set had images with black background and script in white.
<a href="https://i.stack.imgur.com/HxsbJ.png" rel="nofollow noreferrer">Image Sample</a></p>
<p>This type of model, though, fails to recognize images with the pattern in black on a white background (I tried with my own input and the negative of the previous set also). <a href="https://i.stack.imgur.com/DL9YS.png" rel="nofollow noreferrer">Negative of Image Sample</a></p>
<p>Is it possible to recognize both types of images using the same model or do I need to train two separate models?
I don't know if it's possible using the ImageDataGenerator class.
Following is the current code snippet:</p>
<pre><code>train_datagen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
</code></pre> | 2019-04-14 12:38:25.223000+00:00 | 2019-04-16 09:53:16.507000+00:00 | 2019-04-14 13:21:14.607000+00:00 | machine-learning|keras|deep-learning|computer-vision|conv-neural-network | ['https://arxiv.org/abs/1703.06857', 'https://i.stack.imgur.com/MyeOc.png', 'https://www.reddit.com/r/MachineLearning/comments/60tex8/r170306857_deep_neural_networks_do_not_recognize/', 'https://www.kdnuggets.com/2017/04/negative-results-images-flaw-deep-learning.html', 'https://stackoverflow.com/questions/48839618/high-training-accuracy-but-low-prediction-performance-for-tensorflows-official'] | 5 |
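Following the answer's suggestion to include "negative" images in training, one lightweight option with this ImageDataGenerator setup is a random-invert preprocessing function. ImageDataGenerator does accept a `preprocessing_function` argument; the helper below is my own sketch and assumes pixel values lie in [0, 1] after rescaling:

```python
import random

def make_random_invert(p=0.5, max_val=1.0, rng=random):
    """Return a function that inverts its input with probability p.

    `max_val - x` works elementwise on numpy image arrays as well as on
    plain floats, so the result can be passed as preprocessing_function.
    """
    def _invert(x):
        return max_val - x if rng.random() < p else x
    return _invert

# e.g. (hypothetical usage):
# train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2,
#                                    zoom_range=0.2, horizontal_flip=True,
#                                    preprocessing_function=make_random_invert())
```

This way a single model sees both polarities during training instead of requiring two separate models.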
50,401,315 | <p>Not a full answer. I have just replaced gradient descent with something similar to a simple <a href="https://arxiv.org/abs/1802.08842" rel="nofollow noreferrer">evolution strategy (ES)</a>. This works, so there probably is no bug in your forward pass.</p>
<pre><code># [...] sigmoid(), forward(), pred() not modified
X = np.array([[0,0],[0,1],[1,0],[1,1]]) # features 4 * 2
Y = np.array([[0],[1],[1],[0]]) # labels 4 * 1
W1 = np.zeros((2,2)) # map from input to hidden
b1 = np.zeros((2,1)) # bias1
W2 = np.zeros((1,2)) # map from hidden to output
b2 = np.zeros((1,1)) # bias2
epoch = 2000 # maximum training turns
for turn in range(epoch):
print('turn:',turn,end=' ')
epoch_cost = 0
for index in range(X.shape[0]):
x = X[index,:].reshape((-1,1))
y = Y[index,:].reshape((-1,1))
a,z,b,y_pred = forward(x,W1,W2,b1,b2) # feed forward
cost = -y.dot(np.log(y_pred)) - (1-y).dot(np.log(1-y_pred)) # calculate cost
epoch_cost += cost # calculate cumulative cost of this epoch
if turn == 0 or epoch_cost < epoch_cost_best:
epoch_cost_best = epoch_cost
W1_best = W1
b1_best = b1
W2_best = W2
b2_best = b2
epsilon = 0.12 # perturb all weighs between -0.12 ~ 0.12
W1 = W1_best + np.random.random((2,2)) * epsilon * 2 - epsilon
b1 = b1_best + np.random.random((2,1)) * epsilon * 2 - epsilon
W2 = W2_best + np.random.random((1,2)) * epsilon * 2 - epsilon
b2 = b2_best + np.random.random((1,1)) * epsilon * 2 - epsilon
print('cost:',epoch_cost)
print('prediction\n',pred(X,W1_best,W2_best,b1_best,b2_best))
print('ground-truth\n',Y)
</code></pre> | 2018-05-17 22:25:14.400000+00:00 | 2018-05-17 22:35:00.737000+00:00 | 2018-05-17 22:35:00.737000+00:00 | null | 50,397,142 | <p>I am trying to implement Neural Network(NN) in python with numpy, and I found that my NN doesn't work as expected. </p>
<p>I have checked the numerical gradient and compare it with gradient calculated by Back Propagation. It turns out that I'm right. But the cost decreases very slowly and it rebounds after some epochs.</p>
<p>I'm trying to solve the problem of Exclusive Or. But my NN seems ignore the input vector of each sample and tend to predict all the samples to the percentage of samples which label is 1(E.g if I feed it with 3 positive samples and 1 negative sample ,it will predict all the four samples to about 0.75).</p>
<p>Can anyone help me with this problem? This has already perplexed me for a long time.</p>
<p>Here is the structure of neural network and some formula</p>
<p><a href="https://i.stack.imgur.com/ltI4J.png" rel="nofollow noreferrer">structure of NN</a></p>
<p><a href="https://i.stack.imgur.com/5C88X.png" rel="nofollow noreferrer">formula</a></p>
<p>Here is my code</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
np.random.seed(565113221)
def sigmoid(x): # sigmoid function
return 1/(1+np.power(np.e,-x))
def forward(x,W1,W2,b1,b2): # feed forward
a = W1.dot(x)
z = sigmoid(a+b1)
b = W2.dot(z)
y = sigmoid(b+b2)
return a,z,b,y
def pred(X,W1,W2,b1,b2): # predict
y_pred = np.zeros((X.shape[0],1))
for i in range(X.shape[0]):
_,_,_,y_pred[i] = forward(x.reshape((-1,1)),W1,W2,b1,b2)
return y_pred
X = np.array([[0,0],[0,1],[1,0],[1,1]]) # features 4 * 2
Y = np.array([[0],[1],[1],[0]]) # labels 4 * 1
epsilon = 0.12 # initialize all weighs between -0.12 ~ 0.12
W1 = np.random.random((2,2)) * epsilon * 2 - epsilon # map from input to hidden
b1 = np.random.random((2,1)) * epsilon * 2 - epsilon # bias1
W2 = np.random.random((1,2)) * epsilon * 2 - epsilon # map from hidden to output
b2 = np.random.random((1,1)) * epsilon * 2 - epsilon # bias2
epoch = 50 # maximum training turns
alpha = 0.01 # learning rate
for turn in range(epoch):
print('turn:',turn,end=' ')
epoch_cost = 0
for index in range(X.shape[0]):
x = X[index,:].reshape((-1,1))
y = Y[index,:].reshape((-1,1))
a,z,b,y_pred = forward(x,W1,W2,b1,b2) # feed forward
cost = -y.dot(np.log(y_pred)) - (1-y).dot(np.log(1-y_pred)) # calculate cost
epoch_cost += cost # calculate cumulative cost of this epoch
for k in range(W2.shape[0]): # update W2
for j in range(W2.shape[1]):
W2[k,j] -= alpha * (y_pred - y) * z[j,0]
for k in range(b2.shape[0]): # update b2
b2[k,0] -= alpha * (y_pred - y)
for j in range(W1.shape[0]): # update W1
for i in range(W1.shape[1]):
for k in range(W2.shape[0]):
W1[j,i] -= alpha * (y_pred - y) * W2[k,j] * z[j,0] * (1 - z[j,0]) * x[i]
for j in range(b1.shape[0]): # update b1
b1[j,0] -= alpha * (y_pred - y) * W2[k,j] * z[j,0] * (1 - z[j,0])
print('cost:',epoch_cost)
print('prediction\n',pred(X,W1,W2,b1,b2))
print('ground-truth\n',Y)
</code></pre> | 2018-05-17 17:17:44.047000+00:00 | 2018-05-17 22:35:00.737000+00:00 | 2018-05-17 17:30:15.590000+00:00 | python|numpy|neural-network | ['https://arxiv.org/abs/1802.08842'] | 1 |
64,255,872 | <p>Well, these are the "while loops" of Prolog on the one hand, and the inductive definitions of mathematical logic on the other hand (See also: <a href="https://arxiv.org/abs/cs/9301109" rel="nofollow noreferrer">Logic Programming, Functional Programming, and Inductive Definitions</a>, <em>Lawrence C. Paulson, Andrew W. Smith, 2001</em>), so it's not surprising to find them multiple times in a program - syntactically similar, with slight deviations.</p>
<p>In this case, you just have a binary decision - whether something is the case or not - and you "branch" (or rather, decide to not fail the body and press on with the selected clause) on that. The "guard" (the test which supplements the head unification), in this case <code>member(X,Ys)</code> or <code>\+ member(X,Ys)</code> is a binary decision (it also is exhaustive, i.e. covers the whole space of possible <code>X</code>)</p>
<pre><code>intersect([X|Xs],Ys,[X|Acc]) :- % if the head could unify with the goal
member(X,Ys), % then additionally check that ("guard")
(...action...). % and then do something
intersect([X|Xs],Ys,Acc) :- % if the head could unify with the goal
\+ member(X,Ys), % then additionally check that ("guard")
(...action...). % and then do something
</code></pre>
<p>Other applications may need the equivalent of a multiple-decision switch statement here, and so N>2 clauses may have to be written instead of 2.</p>
<pre><code>foo(X) :-
member(X,Set1),
(...action...).
foo(X) :-
member(X,Set2),
(...action...).
foo(X) :-
member(X,Set3),
(...action...).
% inefficient pseudocode for the case where Set1, Set2, Set3
% do not cover the whole range of X. Such a predicate may or
% may not be necessary; the default behaviour would be "failure"
% of foo/1 if this clause does not exist:
foo(X) :-
\+ (member(X,Set1);member(X,Set2);member(X,Set3)),
(...action...).
</code></pre>
<p><em>Note</em>:</p>
<ul>
<li>Use <code>memberchk/2</code> (which fails or succeeds-once) instead of <code>member/2</code> (which fails or succeeds-and-then-tries-to-succeed-again-for-the-rest-of-the-set) to make the program deterministic in its decision whether <code>member(X,L)</code>.</li>
<li>Similarly, "cut" after the clause guard to tell Prolog that if a guard of one clause succeeds, there is no point in trying the other clauses because they will all turn out false: <code>member(X,Ys),!,...</code></li>
<li>Finally, use term comparison <code>==</code> and <code>\==</code> instead of unification <code>=</code> or unification failure <code>\=</code> for <code>delete/3</code>.</li>
</ul> | 2020-10-08 04:36:40.063000+00:00 | 2020-10-08 04:53:30.317000+00:00 | 2020-10-08 04:53:30.317000+00:00 | null | 64,252,358 | <p>All of these predicates are defined in pretty much the same way. The base case is defined for the empty list. For non-empty lists we unify in the head of the clause when a certain predicate holds, but do not unify if that predicate does not hold. These predicates look too similar for me to think it is a coincidence. Is there a name for this, or a defined abstraction?</p>
<pre><code>intersect([],_,[]).
intersect(_,[],[]).
intersect([X|Xs],Ys,[X|Acc]) :-
member(X,Ys),
intersect(Xs,Ys,Acc).
intersect([X|Xs],Ys,Acc) :-
\+ member(X,Ys),
intersect(Xs,Ys,Acc).
</code></pre>
<pre><code>without_duplicates([],[]).
without_duplicates([X|Xs],[X|Acc]) :-
\+ member(X,Acc),
without_duplicates(Xs,Acc).
without_duplicates([X|Xs],Acc) :-
member(X,Acc),
without_duplicates(Xs,Acc).
</code></pre>
<pre><code>difference([],_,[]).
difference([X|Xs],Ys,[X|Acc]) :-
\+ member(X,Ys),
difference(Xs,Ys,Acc).
difference([X|Xs],Ys,Acc) :-
member(X,Ys),
difference(Xs,Ys,Acc).
</code></pre>
<pre><code>delete(_,[],[]).
delete(E,[X|Xs],[X|Ans]) :-
E \= X,
delete(E,Xs,Ans).
delete(E,[X|Xs],Ans) :-
E = X,
delete(E,Xs,Ans).
</code></pre> | 2020-10-07 21:11:46.377000+00:00 | 2020-10-09 19:58:01.303000+00:00 | null | prolog | ['https://arxiv.org/abs/cs/9301109'] | 1 |
45,454,561 | <p>Regarding the second step, b), there are recent developments that significantly speed up the calculation of signatures:</p>
<ul>
<li>Optimal Densification for Fast and Accurate Minwise Hashing, 2017,
<a href="https://arxiv.org/abs/1703.04664" rel="nofollow noreferrer">https://arxiv.org/abs/1703.04664</a></li>
<li>Fast Similarity Sketching, 2017, <a href="https://arxiv.org/abs/1704.04370" rel="nofollow noreferrer">https://arxiv.org/abs/1704.04370</a></li>
<li>SuperMinHash - A New Minwise Hashing Algorithm for Jaccard Similarity Estimation, 2017, <a href="https://arxiv.org/abs/1706.05698" rel="nofollow noreferrer">https://arxiv.org/abs/1706.05698</a></li>
<li>ProbMinHash - A Class of Locality-Sensitive Hash Algorithms for the (Probability) Jaccard Similarity, 2019, <a href="https://arxiv.org/pdf/1911.00675.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1911.00675.pdf</a></li>
</ul> | 2017-08-02 07:44:30.957000+00:00 | 2020-01-13 05:39:22.477000+00:00 | 2020-01-13 05:39:22.477000+00:00 | null | 44,355,031 | <p>To my understanding, the scientific consensus in NLP is that the most effective method for near-duplicate detection in large-scale scientific document collections (more than 1 billion documents) is the one found here: </p>
<p><a href="http://infolab.stanford.edu/~ullman/mmds/ch3.pdf" rel="noreferrer">http://infolab.stanford.edu/~ullman/mmds/ch3.pdf</a></p>
<p>which can be briefly described by: </p>
<ul>
<li>a) shingling of the documents</li>
<li>b) minhashing to obtain the minhash signatures of the shingles</li>
<li>c) locality-sensitive hashing, to avoid doing pairwise similarity calculations for all signatures and instead focus only on pairs that fall within the same buckets</li>
</ul>
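<p>To check my own understanding, here is a minimal plain-Python sketch of steps a)-c) on toy data. The shingle size, the 128 hash functions, the 32 bands and the crc32-based hash family are arbitrary illustration choices on my part, not values prescribed by the chapter:</p>

```python
import random
import zlib

def shingles(text, k=3):
    """Step a): the set of all k-character shingles of a document."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(shingle_set, num_hashes=128, seed=1):
    """Step b): one random affine hash function per signature row;
    each row keeps the minimum hash value over the shingle set."""
    rng = random.Random(seed)
    p = (1 << 61) - 1  # a prime larger than any crc32 value
    coeffs = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_hashes)]
    return [min((a * zlib.crc32(s.encode()) + b) % p for s in shingle_set)
            for a, b in coeffs]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of agreeing rows estimates the true Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

def lsh_band_keys(signature, bands=32):
    """Step c): split the signature into bands; two documents that share
    any band key become a candidate pair, all other pairs are skipped."""
    rows = len(signature) // bands
    return {(i, tuple(signature[i * rows:(i + 1) * rows])) for i in range(bands)}

doc_a = "near duplicate detection in large document collections"
doc_b = "near duplicate detection in large scientific document collections"
sig_a = minhash_signature(shingles(doc_a))
sig_b = minhash_signature(shingles(doc_b))
# estimated_jaccard(sig_a, sig_b) should land near the true Jaccard similarity
# of the two shingle sets, and shared band keys flag the pair as candidates
# without an all-pairs comparison.
is_candidate_pair = bool(lsh_band_keys(sig_a) & lsh_band_keys(sig_b))
```

<p>A real implementation would of course distribute these steps (e.g. one Map-Reduce stage per step), but the data flow is the same.</p>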
<p>I am ready to implement this algorithm in Map-Reduce or Spark, but because I am new to the field (I have been reading up on large-scale near-duplicate detection for about two weeks) and the above was published quite a few years ago, I am wondering whether there are known limitations of the above algorithm and whether there are different approaches that are more efficient (offering a more appealing performance/complexity trade-off).</p>
<p>Thanks in advance! </p> | 2017-06-04 14:13:13.567000+00:00 | 2020-01-13 05:39:22.477000+00:00 | null | machine-learning|nlp | ['https://arxiv.org/abs/1703.04664', 'https://arxiv.org/abs/1704.04370', 'https://arxiv.org/abs/1706.05698', 'https://arxiv.org/pdf/1911.00675.pdf'] | 4 |
58,273,356 | <p>No, this is not what the Transformer module does. The Transformer is primarily used for pre-training general use models for NLP on large bodies of text. If you're curious to learn more, I strongly recommend you read the article which introduced the architecture, <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">"Attention is All You Need"</a>. If you've heard of models like <a href="https://github.com/huggingface/transformers" rel="nofollow noreferrer">BERT or GPT-2</a>, these are examples of transformers.</p>
<p>It's not entirely clear what you are trying to accomplish when you ask how to "transform an image into another image." I'm thinking maybe you are looking for something like this? <a href="https://junyanz.github.io/CycleGAN/" rel="nofollow noreferrer">https://junyanz.github.io/CycleGAN/</a></p>
<p>In any event, to re-answer your question: no, that's not how you use nn.Transformer. You should try to clarify what you are trying to accomplish with "transforming one picture into another," and post that description as a separate question.</p> | 2019-10-07 16:11:16.773000+00:00 | 2019-10-08 20:25:58.940000+00:00 | 2019-10-08 20:25:58.940000+00:00 | null | 58,267,531 | <p>If I want to transform an image to another image,
then</p>
<pre><code>transformer_model = nn.Transformer(img_size, n_heads)
transformer_model(source_image, target_image)
</code></pre>
<p>is this the correct way to use nn.Transformer?</p> | 2019-10-07 10:15:09.657000+00:00 | 2019-10-08 20:25:58.940000+00:00 | null | pytorch | ['https://arxiv.org/abs/1706.03762', 'https://github.com/huggingface/transformers', 'https://junyanz.github.io/CycleGAN/'] | 3 |
63,989,579 | <blockquote>
<p>In my (limited) understanding this seems to be issue because the train
data has different sizes and hence numpy can't convert it into a
matrix,</p>
</blockquote>
<p>True, that's exactly the issue.</p>
<blockquote>
<p>so how do I fix this? Can I pad it? If so whats the size I should use?
Or is this a mistake on my part?</p>
</blockquote>
<p>There are 2 options I can think of: one is to use a network that supports different-size inputs (Option 1), the other is to pad with zeros (Option 2). One could also change the length of the audio by other means (pitch-preserving or not), but I have not found any applications of that in any paper, so I will not post it as an option.</p>
<p><strong>Option 1: Use a network that can deal with different sizes</strong></p>
<p>Normally, one uses <a href="https://en.wikipedia.org/wiki/Recurrent_neural_network" rel="nofollow noreferrer">Recurrent Neural Network</a> (RNN) as it can cope with different sizes of audio.</p>
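<p>To make Option 1 concrete, here is a minimal numpy sketch with made-up random weights (not a trained model; the hidden size of 8 is an arbitrary illustration value). The point is only that the recurrence produces a fixed-size summary no matter how long the input is:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size = 8
W_xh = rng.normal(scale=0.1, size=(hidden_size, 1))            # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden-to-hidden weights
b_h = np.zeros(hidden_size)

def rnn_encode(signal):
    """Run one tanh-RNN step per audio sample; the final hidden state
    has shape (hidden_size,) regardless of the clip length."""
    h = np.zeros(hidden_size)
    for x_t in np.asarray(signal, dtype=float):
        h = np.tanh(W_xh @ np.array([x_t]) + W_hh @ h + b_h)
    return h

short_clip = rng.normal(size=100)  # pretend: a short audio clip
long_clip = rng.normal(size=250)   # a longer clip; same network, no padding
```

<p>Both clips come out as vectors of the same fixed size, so a single classification layer can sit on top. In practice you would use an LSTM/GRU from a deep-learning framework rather than this hand-rolled loop.</p>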
<p><strong>Option 2: zero-pad / truncate</strong></p>
<p>I couldn't honestly find a standard here. You could choose a fixed duration and then:</p>
<ul>
<li>For shorter audios: Add silence at the end and/or start of the audio</li>
<li>For longer audios: Cut them</li>
</ul>
<pre><code>from pydub import AudioSegment
audio = AudioSegment.silent(duration=duration_ms)  # duration_ms: the fixed length you want, in milliseconds
audio = audio.overlay(AudioSegment.from_wav(path))  # shorter clips end up padded with trailing silence
raw = audio.split_to_mono()[0].get_array_of_samples()  # I only keep the left channel
</code></pre>
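<p>If you would rather not depend on pydub, the same pad-or-truncate step can be sketched with numpy alone, assuming the signal is already loaded as a 1-D array (the default <code>target_len</code> below, 22050 samples, is roughly one second at a 22.05 kHz sample rate and is only an illustration value):</p>

```python
import numpy as np

def pad_or_truncate(signal, target_len=22050):
    """Zero-pad short signals at the end and cut long ones, so that every
    example becomes a vector of exactly target_len samples."""
    signal = np.asarray(signal, dtype=np.float32)
    if signal.shape[0] >= target_len:
        return signal[:target_len]       # too long: truncate
    out = np.zeros(target_len, dtype=np.float32)
    out[:signal.shape[0]] = signal       # too short: trailing zeros (silence)
    return out
```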
<p>An example of this kind of application is the <a href="https://urbansounddataset.weebly.com/" rel="nofollow noreferrer">UrbanSoundDataset</a>. It's a dataset of audio clips of different lengths, and therefore any paper that uses it (with a non-RNN network) is forced to use this or another approach that converts the sounds to same-length vectors/matrices. I recommend the papers <a href="https://arxiv.org/abs/1608.04363" rel="nofollow noreferrer">Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification</a> or <a href="https://ieeexplore.ieee.org/document/7324337" rel="nofollow noreferrer">ENVIRONMENTAL SOUND CLASSIFICATION WITH CONVOLUTIONAL NEURAL NETWORKS</a>. The latter has its code open-sourced, and you can see that it uses (roughly) the method I explained in the function <code>_load_audio</code> in <a href="https://github.com/karolpiczak/paper-2015-esc-convnet/blob/master/Code/_Datasets/Setup.ipynb" rel="nofollow noreferrer">this</a> notebook.</p>
<hr />
<p>A bit off topic, but for this kind of application it is highly recommended to use a <strong>mel-spectrogram</strong>.</p>
<p>The standard (as far as I know) is to use a <a href="https://medium.com/analytics-vidhya/understanding-the-mel-spectrogram-fca2afa2ce53" rel="nofollow noreferrer">mel-spectrogram</a> for these kinds of applications. You could use the Python library <a href="https://pypi.org/project/essentia/" rel="nofollow noreferrer">Essentia</a> and follow <a href="https://essentia.upf.edu/essentia_python_tutorial.html#computing-spectrum-mel-bands-energies-and-mfccs" rel="nofollow noreferrer">this</a> example, or use librosa like this:</p>
<pre><code>y, sr = librosa.load('your-wav-file.wav')
mel_spect = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=1024)
</code></pre> | 2020-09-21 09:30:40.480000+00:00 | 2020-09-24 08:01:57.253000+00:00 | 2020-09-24 08:01:57.253000+00:00 | null | 42,164,538 | <p>I'm new to ML and wanted to try a project myself to learn, so please excuse any blatant mistakes. I'm trying to classify a few files (ringtones and such) using audiolab and sklearn in python. </p>
<p>Here's the code:</p>
<pre><code>from scikits.audiolab.pysndfile.matapi import oggread, wavread
import numpy as np
from sklearn import svm
files = ["Basic_Bell.ogg", "Beep-Beep.ogg", "Beep_Once.ogg", "Calling_You.ogg", "Time_Up.ogg"]
labels = [2,1,1,2,2]
train = []
for f in files:
data, fs, enc = oggread("Tones/"+f)
train.append(data)
clf = svm.SVC()
clf.fit(train, labels)
</code></pre>
<p>I'm getting an error message:</p>
<pre><code>Traceback (most recent call last):
File "/home/athul/Projects/Audio Analysis/read.py", line 18, in <module>
clf.fit(train, labels)
File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 150, in fit
X = check_array(X, accept_sparse='csr', dtype=np.float64, order='C')
File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/validation.py", line 373, in check_array
array = np.array(array, dtype=dtype, order=order, copy=copy)
ValueError: setting an array element with a sequence.
</code></pre>
<p>In my (limited) understanding this seems to be issue because the train data has different sizes and hence numpy can't convert it into a matrix, so how do I fix this? Can I pad it? If so whats the size I should use? Or is this a mistake on my part? </p> | 2017-02-10 16:47:37.050000+00:00 | 2020-09-24 08:01:57.253000+00:00 | null | python|numpy|machine-learning|scikit-learn | ['https://en.wikipedia.org/wiki/Recurrent_neural_network', 'https://urbansounddataset.weebly.com/', 'https://arxiv.org/abs/1608.04363', 'https://ieeexplore.ieee.org/document/7324337', 'https://github.com/karolpiczak/paper-2015-esc-convnet/blob/master/Code/_Datasets/Setup.ipynb', 'https://medium.com/analytics-vidhya/understanding-the-mel-spectrogram-fca2afa2ce53', 'https://pypi.org/project/essentia/', 'https://essentia.upf.edu/essentia_python_tutorial.html#computing-spectrum-mel-bands-energies-and-mfccs'] | 8 |