a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
65,958,473 | <p>The information provided is not really sufficient for a definitive answer, but you may consider the following:</p>
<ol>
<li>It seems to me that you have more than one RID per VID but only one VID for each RID. If this is true, it seems more reasonable to me to store this 1:M relationship in the Registration DB.</li>
<li>In what circumstances do you need the Validation schema information behind the VID?
What I mean is:</li>
</ol>
<ul>
<li>Do you have a method that returns all the registrations connected to a specific VID?</li>
<li>Or maybe you have a method returning the Validation schema used for a specific registration?</li>
</ul>
<p>Do you see the difference and why this question is important?</p>
<p>Finally, you may be interested in <a href="https://arxiv.org/pdf/1906.01553.pdf" rel="nofollow noreferrer">this article</a>, especially in chapter 4.4 Decentralisation -> Shared persistence.</p>
<p>You don't share the same storage, but it seems to me that you may have split things more than needed, and it may be a good idea to combine the Registration and Validation services into one. Of course, this is a very speculative statement. But if you are unsure whether I am right, ask yourself:</p>
<ul>
<li>Do other services / clients use the Validation service?</li>
<li>Does the Validation service represent a dedicated business unit / domain, or is it just part of another unit's processes?</li>
</ul>
<p>And things like that.</p>
<p>And finally: the microservices world doesn't prescribe where to put your data, but rather what to think about when you decide where to put it. The main things to consider are:</p>
<ul>
<li>Your services should be deployed autonomously and should operate autonomously.</li>
<li>Your services shouldn't share their storage (because of the previous one)</li>
<li>You should be able to scale individual services as needed, without touching the others (this is why we need autonomous deployment and operability)</li>
<li>The granularity principle depends heavily on your concrete project. When you decide "how much", you should take care of the business domain and the ability to maintain all the other principles.</li>
</ul>
<p>Remark: The principles above are by no means exhaustive, but I hope that all this gives you some direction to get your job done.</p> | 2021-01-29 16:43:56.200000+00:00 | 2021-01-29 16:43:56.200000+00:00 | null | null | 65,957,058 | <p>We have two microservices (M1 and M2), and each microservice has its own schema: DB1 and DB2.</p>
<ol>
<li>M1 receives the request for registration</li>
<li>M1 calls M2 for validation</li>
<li>M2 returns validation results (with validation id - VID) to M1</li>
<li>M1 completes the registration and persists in DB1 and each registration will have Record Identifier (RID)</li>
</ol>
<p>My question here is where do we persist the relationship between RID and Validation Results for RID?</p>
<p>Should they be persisted in DB1 (associated to M1) or DB2 (validation schema)?</p>
<p>If the relationship needs to be persisted in M2, then M1 has to make a call to M2 with RID and VID (validation id).</p>
<p>What is the recommended approach in the microservices world?</p> | 2021-01-29 15:12:59.463000+00:00 | 2021-01-29 16:43:56.200000+00:00 | null | spring-boot|microservices | ['https://arxiv.org/pdf/1906.01553.pdf'] | 1
61,340,466 | <p>Yes, on x86 aligned loads are atomic, <strong>BUT</strong> this is an architectural detail that you should <strong>NOT</strong> rely on!</p>
<p>Since you are writing C++ code, you have to abide by the rules of the C++ standard, i.e., you have to use atomics instead of volatile. The fact
that <code>volatile</code> has been part of that language long before the introduction
of threads in C++11 should be a strong enough indication that <code>volatile</code> was
never designed or intended to be used for multi-threading. It is important to
note that in C++ <code>volatile</code> is something fundamentally different from <code>volatile</code>
in languages like Java or C# (in these languages <code>volatile</code> is in
fact related to the memory model and therefore much more like an atomic in C++).</p>
<p>In C++, <code>volatile</code> is used for what is often referred to as "unusual memory".
This is typically memory that can be read or modified outside the current process,
for example when using memory mapped I/O. <code>volatile</code> forces the compiler to
<em>execute all operations in the exact order as specified</em>. This prevents
some optimizations that would be perfectly legal for atomics, while also allowing
some optimizations that are actually <em>illegal</em> for atomics. For example:</p>
<pre><code>volatile int x;
int y;
volatile int z;
x = 1;
y = 2;
z = 3;
z = 4;
...
int a = x;
int b = x;
int c = y;
int d = z;
</code></pre>
<p>In this example, there are two assignments to <code>z</code>, and two read operations on <code>x</code>.
If <code>x</code> and <code>z</code> were atomics instead of volatile, the compiler would be free to treat
the first store as irrelevant and simply remove it. Likewise it could just reuse the
value returned by the first load of <code>x</code>, effectively generating code like <code>int b = a</code>.
But since <code>x</code> and <code>z</code> are volatile, these optimizations are not possible. Instead,
the compiler has to ensure that <em>all</em> volatile operations are executed in the
<em>exact order as specified</em>, i.e., the volatile operations cannot be reordered with
respect to each other. However, this does not prevent the compiler from reordering
non-volatile operations. For example, the operations on <code>y</code> could freely be moved
up or down - something that would not be possible if <code>x</code> and <code>z</code> were atomics. So
if you were to try implementing a lock based on a volatile variable, the compiler
could simply (and legally) move some code outside your critical section.</p>
<p>Last but not least it should be noted that marking a variable as <code>volatile</code> does
not prevent it from participating in a data race. In those rare cases where you
have some "unusual memory" (and therefore really require <code>volatile</code>) that is
also accessed by multiple threads, you have to use volatile atomics.</p>
<p>Since aligned loads are actually atomic on x86, the compiler will translate an <code>atomic.load()</code> call to a simple <code>mov</code> instruction, so an atomic load is not slower than reading a volatile variable. An <code>atomic.store()</code> is actually slower than writing a volatile variable, but for good reasons, since in contrast to the volatile write it is by default <em>sequentially consistent</em>. You can relax the memory orders, but you <em>really</em> have to know what you are doing!!</p>
<p>If you want to learn more about the C++ memory model, I can recommend this paper: <a href="https://arxiv.org/abs/1803.04432" rel="nofollow noreferrer">Memory Models for C/C++ Programmers</a></p> | 2020-04-21 09:42:43.020000+00:00 | 2020-04-21 09:42:43.020000+00:00 | null | null | 61,339,630 | <p>Can I base a mission-critical application on the results of this test, that 100 threads reading a pointer set a billion times by a main thread never see a tear?</p>
<p>Any other potential problems doing this besides tearing?</p>
<p>Here's a stand-alone demo that compiles with <code>g++ -g tear.cxx -o tear -pthread</code>.</p>
<pre><code>#include <atomic>
#include <cstdint>   // for intptr_t
#include <cstdio>    // for printf
#include <thread>
#include <vector>
using namespace std;
void* pvTearTest;
atomic<int> iTears( 0 );
void TearTest( void ) {
while (1) {
void* pv = (void*) pvTearTest;
intptr_t i = (intptr_t) pv;
if ( ( i >> 32 ) != ( i & 0xFFFFFFFF ) ) {
printf( "tear: pv = %p\n", pv );
iTears++;
}
if ( ( i >> 32 ) == 999999999 )
break;
}
}
int main( int argc, char** argv ) {
printf( "\n\nTEAR TEST: are normal pointer read/writes atomic?\n" );
vector<thread> athr;
// Create lots of threads and have them do the test simultaneously.
for ( int i = 0; i < 100; i++ )
athr.emplace_back( TearTest );
for ( int i = 0; i < 1000000000; i++ )
pvTearTest = (void*) (intptr_t)
( ( i % (1L<<32) ) * 0x100000001 );
for ( auto& thr: athr )
thr.join();
if ( iTears )
printf( "%d tears\n", iTears.load() );
else
printf( "\n\nTEAR TEST: SUCCESS, no tears\n" );
}
</code></pre>
<p>The actual application is a <code>malloc()</code>'ed and sometimes <code>realloc()</code>'d array (size is power of two; realloc doubles storage) that many child threads will absolutely be hammering in a mission-critical but also high-performance-critical way.</p>
<p>From time to time a thread will need to add a new entry to the array, and will do so by setting the next array entry to point to something, then increment an <code>atomic<int> iCount</code>. Finally it will add data to some data structures that would cause other threads to attempt to dereference that cell.</p>
<p>It all seems fine (except I'm not positive if the increment of count is assured of happening before following non-atomic updates)... <em>except</em> for one thing: <code>realloc()</code> will typically change the address of the array, and <em>further</em> frees the old one, the pointer to which is still visible to other threads.</p>
<p>OK, so instead of <code>realloc()</code>, I <code>malloc()</code> a new array, manually copy the contents, set the pointer to the array. I would free the old array but I realize other threads may still be accessing it: they read the array base; I free the base; a third thread allocates it writes something else there; the first thread then adds the indexed offset to the base and expects a valid pointer. I'm happy to leak those though. (Given the doubling growth, all old arrays combined are about the same size as the current array so overhead is simply an extra 16 bytes per item, and it's memory that soon is never referenced again.)</p>
<p>So, here's the crux of the question: once I allocate the bigger array, can I write its base address with a non-atomic write, in utter safety? Or, despite my billion-access test, do I actually have to make it atomic<> and thus slow down all worker threads that read it?</p>
<p>(As this is surely environment dependent, we're talking 2012-or-later Intel, g++ 4 to 9, and Red Hat of 2012 or later.)</p>
<p>EDIT: here is a modified test program that matches my planned scenario much more closely, with only a small number of writes. I've also added a count of the reads. I see when switching from void* to atomic I go from 2240 reads/sec to 660 reads/sec (with optimization disabled). The machine language for the read is shown after the source.</p>
<pre><code>#include <atomic>
#include <chrono>
#include <cstdint>   // for intptr_t, uint64_t
#include <cstdio>    // for printf
#include <thread>
#include <vector>
using namespace std;
using namespace std::chrono_literals;   // for the 10ms literal
chrono::time_point<chrono::high_resolution_clock> tp1, tp2;
// void*: 1169.093u 0.027s 2:26.75 796.6% 0+0k 0+0io 0pf+0w
// atomic<void*>: 6656.864u 0.348s 13:56.18 796.1% 0+0k 0+0io 0pf+0w
// Different definitions of the target variable.
atomic<void*> pvTearTest;
//void* pvTearTest;
// Children sum the tears they find, and at end, total checks performed.
atomic<int> iTears( 0 );
atomic<uint64_t> iReads( 0 );
bool bEnd = false; // main thr sets true; children all finish.
void TearTest( void ) {
uint64_t i;
for ( i = 0; ! bEnd; i++ ) {
intptr_t iTearTest = (intptr_t) (void*) pvTearTest;
// Make sure top 4 and bottom 4 bytes are the same. If not it's a tear.
if ( ( iTearTest >> 32 ) != ( iTearTest & 0xFFFFFFFF ) ) {
      printf( "tear: pv = %lx\n", iTearTest );
iTears++;
}
// Output periodically to prove we're seeing changing values.
if ( ( (i+1) % 50000000 ) == 0 )
printf( "got: pv = %lx\n", iTearTest );
}
iReads += i;
}
int main( int argc, char** argv ) {
printf( "\n\nTEAR TEST: are normal pointer read/writes atomic?\n" );
vector<thread> athr;
// Create lots of threads and have them do the test simultaneously.
for ( int i = 0; i < 100; i++ )
athr.emplace_back( TearTest );
tp1 = chrono::high_resolution_clock::now();
#if 0
// Change target as fast as possible for fixed number of updates.
for ( int i = 0; i < 1000000000; i++ )
pvTearTest = (void*) (intptr_t)
( ( i % (1L<<32) ) * 0x100000001 );
#else
// More like our actual app: change target only periodically, for fixed time.
for ( int i = 0; i < 100; i++ ) {
pvTearTest.store( (void*) (intptr_t) ( ( i % (1L<<32) ) * 0x100000001 ),
std::memory_order_release );
this_thread::sleep_for(10ms);
}
#endif
bEnd = true;
for ( auto& thr: athr )
thr.join();
tp2 = chrono::high_resolution_clock::now();
chrono::duration<double> dur = tp2 - tp1;
  printf( "%lu reads in %.4f secs: %.2f reads/usec\n",
iReads.load(), dur.count(), iReads.load() / dur.count() / 1000000 );
if ( iTears )
printf( "%d tears\n", iTears.load() );
else
printf( "\n\nTEAR TEST: SUCCESS, no tears\n" );
}
</code></pre>
<pre><code>Dump of assembler code for function TearTest():
0x0000000000401256 <+0>: push %rbp
0x0000000000401257 <+1>: mov %rsp,%rbp
0x000000000040125a <+4>: sub $0x10,%rsp
0x000000000040125e <+8>: movq $0x0,-0x8(%rbp)
0x0000000000401266 <+16>: movzbl 0x6e83(%rip),%eax # 0x4080f0 <bEnd>
0x000000000040126d <+23>: test %al,%al
0x000000000040126f <+25>: jne 0x40130c <TearTest()+182>
=> 0x0000000000401275 <+31>: mov $0x4080d8,%edi
0x000000000040127a <+36>: callq 0x40193a <std::atomic<void*>::operator void*() const>
0x000000000040127f <+41>: mov %rax,-0x10(%rbp)
0x0000000000401283 <+45>: mov -0x10(%rbp),%rax
0x0000000000401287 <+49>: sar $0x20,%rax
0x000000000040128b <+53>: mov -0x10(%rbp),%rdx
0x000000000040128f <+57>: mov %edx,%edx
0x0000000000401291 <+59>: cmp %rdx,%rax
0x0000000000401294 <+62>: je 0x4012bb <TearTest()+101>
0x0000000000401296 <+64>: mov -0x10(%rbp),%rax
0x000000000040129a <+68>: mov %rax,%rsi
0x000000000040129d <+71>: mov $0x40401a,%edi
0x00000000004012a2 <+76>: mov $0x0,%eax
0x00000000004012a7 <+81>: callq 0x401040 <printf@plt>
0x00000000004012ac <+86>: mov $0x0,%esi
0x00000000004012b1 <+91>: mov $0x4080e0,%edi
0x00000000004012b6 <+96>: callq 0x401954 <std::__atomic_base<int>::operator++(int)>
0x00000000004012bb <+101>: mov -0x8(%rbp),%rax
0x00000000004012bf <+105>: lea 0x1(%rax),%rcx
0x00000000004012c3 <+109>: movabs $0xabcc77118461cefd,%rdx
0x00000000004012cd <+119>: mov %rcx,%rax
0x00000000004012d0 <+122>: mul %rdx
0x00000000004012d3 <+125>: mov %rdx,%rax
0x00000000004012d6 <+128>: shr $0x19,%rax
0x00000000004012da <+132>: imul $0x2faf080,%rax,%rax
0x00000000004012e1 <+139>: sub %rax,%rcx
0x00000000004012e4 <+142>: mov %rcx,%rax
0x00000000004012e7 <+145>: test %rax,%rax
0x00000000004012ea <+148>: jne 0x401302 <TearTest()+172>
0x00000000004012ec <+150>: mov -0x10(%rbp),%rax
0x00000000004012f0 <+154>: mov %rax,%rsi
0x00000000004012f3 <+157>: mov $0x40402a,%edi
0x00000000004012f8 <+162>: mov $0x0,%eax
0x00000000004012fd <+167>: callq 0x401040 <printf@plt>
0x0000000000401302 <+172>: addq $0x1,-0x8(%rbp)
0x0000000000401307 <+177>: jmpq 0x401266 <TearTest()+16>
0x000000000040130c <+182>: mov -0x8(%rbp),%rax
0x0000000000401310 <+186>: mov %rax,%rsi
0x0000000000401313 <+189>: mov $0x4080e8,%edi
0x0000000000401318 <+194>: callq 0x401984 <std::__atomic_base<unsigned long>::operator+=(unsigned long)>
0x000000000040131d <+199>: nop
0x000000000040131e <+200>: leaveq
0x000000000040131f <+201>: retq
</code></pre> | 2020-04-21 08:58:01.040000+00:00 | 2020-04-22 20:06:34.920000+00:00 | 2020-04-22 20:06:34.920000+00:00 | c++11|c++14|c++17|stdthread|stdatomic | ['https://arxiv.org/abs/1803.04432'] | 1 |
63,286,868 | <p>Based on Kula's <a href="https://arxiv.org/pdf/1507.08439.pdf" rel="nofollow noreferrer">paper</a>, LightFM starts from a CF matrix factorization algorithm while also learning both user and item embeddings in the process (if such data is available). However, if no user/item features are provided to the model, its behaviour will be that of a plain MF (Matrix Factorization) method.</p> | 2020-08-06 15:27:31.420000+00:00 | 2020-10-07 02:34:29.377000+00:00 | 2020-10-07 02:34:29.377000+00:00 | null | 60,834,508 | <p>I'm new to recommender systems and trying to understand the fundamental difference between standard collaborative filtering (CF) and hybrid methods like LightFM. As I researched online, most of the posts mentioned that hybrid methods combine both CF and content-based methods. But from a matrix/math standpoint, LightFM also learns an item-user interaction embedding, like CF. How are they different?</p>
<p>Thank you so much in advance.</p> | 2020-03-24 15:48:11.170000+00:00 | 2020-10-07 02:34:29.377000+00:00 | 2020-03-24 16:29:14.490000+00:00 | machine-learning|collaborative-filtering|recommendation-system|lightfm | ['https://arxiv.org/pdf/1507.08439.pdf'] | 1 |
58,155,729 | <p>In general, feed-forward networks treat features as independent; convolutional networks focus on relative location and proximity; RNNs and LSTMs have memory limitations and tend to read in one direction.</p>
<p>In contrast to these, attention and the transformer can grab context about a word from distant parts of a sentence, both earlier and later than the word appears, in order to encode information to help us understand the word and its role in the system called sentence.</p>
<p>There is a good model for a feed-forward network with an attention mechanism here:</p>
<p><a href="https://arxiv.org/pdf/1512.08756.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1512.08756.pdf</a></p>
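To make the weighted-average idea concrete, here is a minimal, framework-free sketch of attention pooling over a set of input vectors. The linear scorer and the toy vectors are illustrative assumptions of mine, not the exact model from the linked paper:

```python
import math

def attention_pool(states, score_weights):
    # Score each input vector with a toy linear scorer (a stand-in for
    # the small learned scoring network used in attention mechanisms).
    scores = [sum(w * x for w, x in zip(score_weights, h)) for h in states]
    # Softmax over the scores gives attention weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    # The context vector is the attention-weighted sum of the inputs.
    dim = len(states[0])
    return [sum(a * h[i] for a, h in zip(alphas, states)) for i in range(dim)]

# Three 2-d "hidden states"; the scorer favors the first coordinate.
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context = attention_pool(states, score_weights=[2.0, 0.0])
```

Because the weighted sum is differentiable, the scorer can be trained end-to-end with the rest of a feed-forward network, which is the key point of attention-based pooling.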
<p>Hope this is useful.</p> | 2019-09-29 13:34:05.647000+00:00 | 2019-09-29 13:34:05.647000+00:00 | null | null | 58,154,954 | <p>Recently, I have been learning about encoder-decoder networks and the attention mechanism, and found that many papers and blogs implement the attention mechanism on RNN networks.</p>
<p>I am interested in whether other networks can incorporate attention mechanisms. For example, suppose the encoder is a feed-forward neural network and the decoder is an RNN. Can feed-forward neural networks without time series use attention mechanisms? If so, please give me some suggestions. Thank you in advance!</p> | 2019-09-29 11:54:44.870000+00:00 | 2019-09-29 13:34:05.647000+00:00 | null | deep-learning|recurrent-neural-network|attention-model|feed-forward | ['https://arxiv.org/pdf/1512.08756.pdf'] | 1
57,170,280 | <p>I would like to point out that ANY distribution law (uniform, gaussian, exponential, ...) will produce numbers <code>a</code>, <code>b</code> and <code>c</code> meeting your condition as soon as you normalize and sort them, so there should be some domain knowledge to prefer one over the other.</p>
<p>As an alternative, I would propose to use <a href="https://en.wikipedia.org/wiki/Dirichlet_distribution" rel="nofollow noreferrer">Dirichlet distribution</a> which produce numbers naturally satisfying your first condition: a+b+c=1. It was applied to rainfall modelling as well, I believe (<a href="https://arxiv.org/pdf/1801.02962.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1801.02962.pdf</a>)</p>
<pre><code>library(MCMCpack)
n <- 500                          # number of triples to draw
abc <- rdirichlet(n, c(1, 1, 1))  # each row (a, b, c) sums to 1
abc <- t(apply(abc, 1, sort))     # sort within each row so a < b < c
sum(abc) # should output n
</code></pre>
<p>You could vary the Dirichlet concentration parameters (the <code>c(1,1,1)</code> vector) to shape the data and, of course, sort each triple to satisfy your second condition. For many cases it is easy to reason about your model's behavior if it uses a Dirichlet (the Dirichlet being the prior for the multinomial in the Bayesian approach, for example).</p>
<ol>
<li>a+ b+ c = 1 and</li>
<li>a < b < c</li>
</ol>
<p>Here is a basic sample of generating random numbers; however, I need to generate them based on the aforementioned conditions.</p>
<pre><code>Coeff = data.frame(a=runif(500, min = 0, max = 1),
b=runif(500, min = 0, max = 1),
c=runif(500, min = 0, max = 1))
</code></pre> | 2019-07-22 20:57:48.120000+00:00 | 2019-07-23 18:35:06.520000+00:00 | null | r|dataframe|if-statement|random|conditional-statements | ['https://en.wikipedia.org/wiki/Dirichlet_distribution', 'https://arxiv.org/pdf/1801.02962.pdf'] | 2 |
55,928,715 | <p>Take a look at <a href="https://arxiv.org/abs/1709.01195" rel="nofollow noreferrer">Parallel Statistical Computing with R: An Illustration on Two Architectures</a>, which gives two ways to parallelize random forest calculations: with <code>mclapply</code> and with <code>pbdMPI</code>.</p> | 2019-04-30 21:13:40.713000+00:00 | 2019-04-30 21:13:40.713000+00:00 | null | null | 55,718,368 | <p>How can I get the following code (alternative code would be great too) for enhancing the speed of randomForest analyses on a regression equation using multiple cores in a parallel approach to work? </p>
<pre><code>#Parallelized Random Forest Model
RFcores <- detectCores()/3 + 4
RFcores
RFtrees <- 1000/RFcores
RFtrees
cl <- makeCluster(RFcores)
registerDoParallel(cl)
timer <- proc.time()
form <- as.formula(paste(a, "~", b))
fit <- foreach(ntree = rep(RFtrees, RFcores), .combine = gtable_combine, .packages = 'randomForest') %dopar%
{
randomForest(form, data = maindf, mtry = 4,
keep.forest = FALSE, nodesize = 10000, do.trace = TRUE, maxnodes = 5,
improve = 0.01, doBest = TRUE, importance = TRUE, ntree = ntree)}
proc.time() - timer
stopCluster(cl)
}
</code></pre>
<p>I keep getting the following error related to the <code>.combine argument</code> in the <code>foreach</code> function.</p>
<pre><code>error calling combine function:
<simpleError in align_2(x, y, along = along, join = join): Both gtables must have names along dimension to be aligned>
</code></pre>
<p>I look forward to any thoughts on this issue.</p> | 2019-04-17 00:42:51.943000+00:00 | 2019-04-30 21:13:40.713000+00:00 | null | r|parallel-processing|rstudio|random-forest | ['https://arxiv.org/abs/1709.01195'] | 1 |
44,438,490 | <p>In this paper : <a href="https://arxiv.org/pdf/1311.2901.pdf" rel="nofollow noreferrer">“Visualizing and Understanding Convolutional Neural Networks”</a> , Zeiler and Fergus discussed the idea that this renewed interest in CNNs is due to the accessibility of large training sets and increased computational power with the usage of GPUs. They found and also talked about the interesting ways of visualizing feature maps and weights. It is one of the best and prevalent papers in the field of Deep Learning. Writers also emphasized about the limited knowledge that researchers had on inner mechanisms of these models.</p>
<blockquote>
<p>Saying that without this insight, the <strong><em>“development of better models
is reduced to trial and error”</em></strong>.</p>
</blockquote>
<p>They also use AlexNet like model in their paper. This model which was named as ZFNET was the winner of ILSVRC 2013 .I am sure by reading this paper you will have better understanding about overall DL concept and possible solutions to your question.</p> | 2017-06-08 14:27:45.877000+00:00 | 2017-06-08 14:27:45.877000+00:00 | null | null | 44,438,222 | <p>I have a small deep learning problem. Here I built my network (CNN) with the bookstore Keras. I am interested in visualizing the weights of my CNN. My architecture is AlexNet type and my color images (RGB) are divided into 72 classes. For the first convolution which has 96 filters whose filter kernel is 11 by 11 I recover a 4 dimensional tensor at the output [11] [11] [3] [96]. So each filter has 3 matrix 11 by 11 which we will call kernel.</p>
<p>At this level for the visualization of my weights I took an image I split it in 3 channels. For a given filter each channel was convoluted with a kernel. Each result of these convolutions operations has been gathered to give a resulting image.</p>
<p>Now the second convolution that takes input the output of the first is set with 383 filters whose filter kernel is 5 * 5. The output of this second convolution gives me a tensor 4d of size [5] [5] [96] [383]. This means that for a given filter it has 96 filters (at least that's what I understand). So there for a given filter I'm still with my famous image splitted on these 3 channels facing 96 filters.</p>
<p>I do not know if it is a problem of understanding but I block total because on the output of the second convolution I do not know interpreted the 96 kernels for each filter.</p>
<p>I would like to from my weights reconstitute the filters a convolution layer.</p>
<p>I am really novice in deep learning it is an interesting science but full of mystery for me. If anyone had the kindness to enlighten me I would thank him.</p> | 2017-06-08 14:16:26.813000+00:00 | 2017-06-09 09:12:50.883000+00:00 | null | python|deep-learning|keras | ['https://arxiv.org/pdf/1311.2901.pdf'] | 1 |
65,867,666 | <p>In general, there are <a href="https://stackoverflow.com/questions/5027757/data-structure-for-loaded-dice/63166311#63166311">many ways to choose an integer</a> with a custom distribution, but most of them take weights that are <em>proportional</em> to the given probabilities. If the weights are log probabilities instead, then a slightly different approach is needed. Perhaps the simplest algorithm for this is rejection sampling, described below and implemented in Python. In the following algorithm, the maximum log-probability is <code>max</code>, and there are <code>k</code> integers to choose from.</p>
<ol>
<li>Choose a uniform random integer <code>i</code> in [0, <code>k</code>).</li>
<li>Get the log-weight corresponding to <code>i</code>, then generate an exponential(1) random number, call it <code>ex</code>.</li>
<li>If <code>max</code> minus <code>ex</code> is less than the log-weight, return <code>i</code>. Otherwise, go to step 1.</li>
</ol>
<p>The time complexity for rejection sampling is constant on average, especially if <code>max</code> is set to equal the true maximum weight. On the other hand, the expected number of iterations per sample depends greatly on the shape of the distribution. See also <a href="https://www.keithschwarz.com/darts-dice-coins/" rel="nofollow noreferrer">Keith Schwarz's discussion</a> on the "Fair Die/Biased Coin Loaded Die" algorithm.</p>
<p>Now, Python code for this algorithm follows.</p>
<pre><code>import random
import math
def categ(c):
# Do a weighted choice of an item with the
# given log-probabilities.
cm=max(c) # Find max log probability
while True:
# Choose an item at random
x=random.randint(0,len(c)-1)
# Choose it with probability proportional
# to exp(c[x])
y=cm-random.expovariate(1)
# Alternatively: y=math.log(random.random())+cm
if y<c[x]:
return x
</code></pre>
<p>The code above generates one variate at a time and uses only Python's base modules, rather than NumPy. <a href="https://stackoverflow.com/a/64881410/815724">Another answer</a> shows how rejection sampling can be implemented in NumPy by blocks of random variates at a time (demonstrated on a different random sampling task, though).</p>
<hr />
<p>The so-called "<a href="https://arxiv.org/abs/2110.01515" rel="nofollow noreferrer">Gumbel max trick</a>", used above all in machine learning, can be used to sample from a distribution with unnormalized log probabilities. This involves—</p>
<ol>
<li>("Gumbel") adding a separate <em>Gumbel</em> random variate to each log probability, namely −ln(−ln(<em>U</em>)) where <em>U</em> is a random variate greater than 0 and less than 1, then</li>
<li>("max") choosing the item corresponding to the highest log probability.</li>
</ol>
<p>However, the time complexity for this algorithm is linear in the number of items.</p>
<p>The following code illustrates the Gumbel max trick:</p>
<pre><code>import random
import math
def categ(c):
# Do a weighted choice of an item with the
# given log-probabilities, using the Gumbel max trick
return max([[c[i]-math.log(-math.log(random.random())),i] \
for i in range(len(c))])[1]
# Or:
# return max([[c[i]-math.log(random.expovariate(1)),i] \
# for i in range(len(c))])[1]
</code></pre> | 2021-01-24 06:16:27.673000+00:00 | 2022-05-30 22:35:59.497000+00:00 | 2022-05-30 22:35:59.497000+00:00 | null | 65,867,476 | <p>I have a 1-D <code>np.ndarray</code> filled with unnormalized log-probabilities that define a categorical distribution. I would like to sample an integer index from this distribution. Since many of the probabilities are small, normalizing and exponentiating the log-probabilities introduces significant numerical error, therefore I cannot use <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html" rel="nofollow noreferrer"><code>np.random.choice</code></a>. Effectively, I am looking for a NumPy equivalent to TensorFlow's <a href="https://www.tensorflow.org/api_docs/python/tf/random/categorical" rel="nofollow noreferrer"><code>tf.random.categorical</code></a>, which works on unnormalized log-probabilities.</p>
<p>If there is not a function in NumPy that achieves this directly, what is an efficient manner to implement such sampling?</p> | 2021-01-24 05:38:05.480000+00:00 | 2022-05-30 22:35:59.497000+00:00 | 2021-01-24 09:05:52.813000+00:00 | python|numpy|random | ['https://stackoverflow.com/questions/5027757/data-structure-for-loaded-dice/63166311#63166311', 'https://www.keithschwarz.com/darts-dice-coins/', 'https://stackoverflow.com/a/64881410/815724', 'https://arxiv.org/abs/2110.01515'] | 4 |
50,466,255 | <p>After LDA you have topics characterized as distributions over words. If you plan to compare these probability vectors (weight vectors, if you prefer), you can simply use any cosine similarity implemented for Python, <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.cosine_similarity.html" rel="nofollow noreferrer">sklearn</a> for instance. </p>
<p>However, this approach will only tell you which topics assign broadly similar probabilities to the same words.</p>
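As a concrete illustration, cosine similarity between two topic weight vectors can be computed in a few lines without any external library; the toy topic vectors over a shared four-word vocabulary are made up for this example:

```python
import math

def cosine_similarity(u, v):
    # cos(u, v) = dot(u, v) / (||u|| * ||v||)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy topic-word weights over the same vocabulary order.
topic_a = [0.5, 0.3, 0.1, 0.1]
topic_b = [0.4, 0.4, 0.1, 0.1]
sim = cosine_similarity(topic_a, topic_b)
```

In practice you would apply sklearn's cosine_similarity to the full topic-word matrix from the LDA model, which computes all pairwise topic similarities at once.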
<p>If you want to measure similarities based on semantic information instead of word occurrences, you may want to use word vectors (as those learned by Word2Vec, GloVe or FastText). </p>
<p>These models provide learned low-dimensional vectors representing words, encoding certain semantic information. They're easy to use in <a href="https://rare-technologies.com/word2vec-tutorial/" rel="nofollow noreferrer">Gensim</a>, and the typical approach is loading a pre-trained model, learned on Wikipedia articles or news.</p>
<p>If you have topics defined by words, you can represent these words as vectors and obtain an average of the cosine similarities between the words in two topics (we did it for a <a href="http://jesusfbes.es/wp-content/uploads/2017/09/Roca2017Workshop.pdf" rel="nofollow noreferrer">workshop</a>). There are some sources using these Word Vectors (also called Word Embeddings) to somehow represent topics or documents. For instance, <a href="https://arxiv.org/pdf/1605.02019.pdf" rel="nofollow noreferrer">this</a> one. </p>
<p>There are some recent publications combining Topic Models and Word Embeddings, you can look for them if you're interested. </p> | 2018-05-22 11:12:17.790000+00:00 | 2018-05-22 11:12:17.790000+00:00 | null | null | 50,463,415 | <p>I did LDA over a corpus of documents with topic_number=5. As a result, I have five vectors of words, each word associates with a weight or degree of importance, like this:</p>
<pre><code>Topic_A = {(word_A1,weight_A1), (word_A2, weight_A2), ... ,(word_Ak, weight_Ak)}
Topic_B = {(word_B1,weight_B1), (word_B2, weight_B2), ... ,(word_Bk, weight_Bk)}
.
.
Topic_E = {(word_E1,weight_E1), (word_E2, weight_E2), ... ,(word_Ek, weight_Ek)}
</code></pre>
<p>Some of the words are common between documents. Now, I want to know, how I can calculate the similarity between these vectors. I can calculate cosine similarity (and other similarity measures) by programming from scratch, but I was thinking, there might be an easier way to do it. Any help would be appreciated. Thank you in advance for spending time on this.</p>
<blockquote>
<ul>
<li><p>I am programming with Python 3.6 and gensim library (but I am open to any other library)</p>
</li>
<li><p>I know someone else has asked similar question (<a href="https://stackoverflow.com/questions/48115965/cosine-similarity-and-lda-topics">Cosine Similarity and LDA topics</a>) but becasue he didn't get the answer, I ask it again</p>
</li>
</ul>
</blockquote> | 2018-05-22 08:45:36.017000+00:00 | 2018-05-22 12:59:01.153000+00:00 | 2020-06-20 09:12:55.060000+00:00 | python-3.x|nlp|gensim|lda|spacy | ['http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.cosine_similarity.html', 'https://rare-technologies.com/word2vec-tutorial/', 'http://jesusfbes.es/wp-content/uploads/2017/09/Roca2017Workshop.pdf', 'https://arxiv.org/pdf/1605.02019.pdf'] | 4 |
65,604,874 | <h2>The problem with conditionals in neural networks</h2>
<p>The issue with a switch or conditionals (like if-then-else) as part of a neural network is that conditionals are not differentiable everywhere. Therefore the automatic differentiation methods would not work directly and solving this is super complex. Check <a href="https://cs.stackexchange.com/a/70620">this</a> for more details.</p>
<p>A shortcut is to train 3 separate models independently, and then during inference use a control flow of conditionals to infer from them.</p>
<pre><code>#Training -
model1 = model.fit(all images, P(cat/dog))
model2 = model.fit(all images, P(cat))
model3 = model.fit(all images, P(dog))
final prediction = argmax(model2, model3)

#Inference -
if model1.predict == Cat:
    model2.predict
else:
    model3.predict
</code></pre>
<p>But I don't think you are looking for that. <strong>I think you are looking to include conditionals as part of the computation graph itself.</strong></p>
<p>Sadly, there is no direct way for you to build an if-then condition as part of a computation graph as per my knowledge. The <code>keras.switch</code> that you see allows you to work with tensor outputs but not with layers of a graph during training. That's why you will see it being used as part of loss functions and not in computation graphs (throws input errors).</p>
<h2>A possible Solution - Skip connections & soft-switching</h2>
<p>You can, however, try to build something similar with <code>skip connections</code> and <code>soft switching</code>.</p>
<p>A skip connection is a connection from a previous layer to another layer that allows you to pass information to the subsequent layers. This is quite common in very deep networks where information from the original data is subsequently lost. Check <a href="https://arxiv.org/pdf/1505.04597.pdf" rel="noreferrer">U-net</a> or <a href="https://arxiv.org/pdf/1512.03385.pdf" rel="noreferrer">Resnet</a> for example, which use skip connections between layers to pass information to future layers.</p>
<p><a href="https://i.stack.imgur.com/9sroF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9sroF.png" alt="enter image description here" /></a></p>
<p>The next issue is the issue of switching. You want to switch between 2 possible paths in the graph. What you can do is use a soft-switching method, which I took as inspiration from <a href="https://arxiv.org/pdf/1905.08743.pdf" rel="noreferrer">this paper</a>. Notice that in order to <code>switch</code> between 2 distributions of words (one from the decoder and another from the input), the authors multiply them with <code>p</code> and <code>(1-p)</code> to get a combined distribution. This is a soft-switch that allows the model to pick the next predicted word from either the decoder or from the input itself. (helps when you want your chatbot to speak the words that were input by the user as part of its response to them!)</p>
<p><a href="https://i.stack.imgur.com/Gpv0C.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Gpv0C.png" alt="enter image description here" /></a></p>
<p>With an understanding of these 2 concepts, let's try to intuitively build our architecture.</p>
<ol>
<li><p>First we need a single-input multi-output graph since we are training 2 models</p>
</li>
<li><p>Our first model is a multi-class classification that predicts individual probabilities for Cat and Dog separately. This will be trained with the activation of <code>softmax</code> and a <code>categorical_crossentropy</code> loss.</p>
</li>
<li><p>Next, let's take the logit which predicts the probability of Cat, and multiply the convolution layer 3 with it. This can be done with a <code>Lambda</code> layer.</p>
</li>
<li><p>And similarly, let's take the probability of Dog and multiply it with the convolution layer 2. This can be seen as the following -</p>
<ul>
<li>If my first model predicts a cat and not a dog, perfectly, then the computation will be <code>1*(Conv3)</code> and <code>0*(Conv2)</code>.</li>
<li>If the first model predicts a dog and not a cat, perfectly, then the computation will be <code>0*(Conv3)</code> and <code>1*(Conv2)</code></li>
<li>You can think of this as either a <code>soft-switch</code> OR a <code>forget gate</code> from <a href="https://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="noreferrer">LSTM</a>. The <code>forget gate</code> is a sigmoid (0 to 1) output that multiplies the cell state to gate it and allow the LSTM to forget or remember previous time-steps. Similar concept here!</li>
</ul>
</li>
<li><p>These Conv3 and Conv2 can now be further be processed, flattened, concatenated, and passed to another Dense layer for the final prediction.</p>
</li>
</ol>
<p>This way, if the model is not sure about a dog or a cat, both conv2 and conv3 features participate in the second model's predictions. This is how you can use <code>skip connections</code> and a <code>soft switch</code>-inspired mechanism to add some amount of conditional control flow to your network.</p>
<p>Check my implementation of the computation graph below.</p>
<pre><code>from tensorflow.keras import layers, Model, utils
import numpy as np
X = np.random.random((10,500,500,3))
y = np.random.random((10,2))
#Model
inp = layers.Input((500,500,3))
x = layers.Conv2D(6, 3, name='conv1')(inp)
x = layers.MaxPooling2D(3)(x)
c2 = layers.Conv2D(9, 3, name='conv2')(x)
c2 = layers.MaxPooling2D(3)(c2)
c3 = layers.Conv2D(12, 3, name='conv3')(c2)
c3 = layers.MaxPooling2D(3)(c3)
x = layers.Conv2D(15, 3, name='conv4')(c3)
x = layers.MaxPooling2D(3)(x)
x = layers.Flatten()(x)
out1 = layers.Dense(2, activation='softmax', name='first')(x)
c = layers.Lambda(lambda x: x[:,:1])(out1)
d = layers.Lambda(lambda x: x[:,1:])(out1)
c = layers.Multiply()([c3, c])
d = layers.Multiply()([c2, d])
c = layers.Conv2D(15, 3, name='conv5')(c)
c = layers.MaxPooling2D(3)(c)
c = layers.Flatten()(c)
d = layers.Conv2D(12, 3, name='conv6')(d)
d = layers.MaxPooling2D(3)(d)
d = layers.Conv2D(15, 3, name='conv7')(d)
d = layers.MaxPooling2D(3)(d)
d = layers.Flatten()(d)
x = layers.concatenate([c,d])
x = layers.Dense(32)(x)
out2 = layers.Dense(2, activation='softmax',name='second')(x)
model = Model(inp, [out1, out2])
model.compile(optimizer='adam', loss='categorical_crossentropy', loss_weights=[0.5, 0.5])
model.fit(X, [y, y], epochs=5)
utils.plot_model(model, show_layer_names=False, show_shapes=True)
</code></pre>
<pre><code>Epoch 1/5
1/1 [==============================] - 1s 1s/step - loss: 0.6819 - first_loss: 0.7424 - second_loss: 0.6214
Epoch 2/5
1/1 [==============================] - 0s 423ms/step - loss: 0.6381 - first_loss: 0.6361 - second_loss: 0.6400
Epoch 3/5
1/1 [==============================] - 0s 442ms/step - loss: 0.6137 - first_loss: 0.6126 - second_loss: 0.6147
Epoch 4/5
1/1 [==============================] - 0s 434ms/step - loss: 0.6214 - first_loss: 0.6159 - second_loss: 0.6268
Epoch 5/5
1/1 [==============================] - 0s 427ms/step - loss: 0.6248 - first_loss: 0.6184 - second_loss: 0.6311
</code></pre>
<p><a href="https://i.stack.imgur.com/8JTb2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8JTb2.png" alt="enter image description here" /></a></p> | 2021-01-06 23:54:33.710000+00:00 | 2021-01-07 08:24:32.227000+00:00 | 2021-01-07 08:24:32.227000+00:00 | null | 65,451,045 | <p>I am trying to build a <code>conditional CNN</code> model. The model is,</p>
<p><a href="https://i.stack.imgur.com/ww0dP.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ww0dP.png" alt="enter image description here" /></a></p>
<p>At the <code>first stage</code> of my model, I feed my data to <code>Model 1</code>; then, <code>based on the prediction of Model 1</code>, I want to <code>train the Conditional Cat model or the Conditional Dog model</code> and finally give the output from the Conditional Cat model or Conditional Dog model. <strong>How can I do this?</strong></p>
<p><strong>Note:</strong>
My effort is,</p>
<pre><code>import keras
from keras.layers import *
from keras.models import *
from keras.utils import *
img_rows,img_cols,number_of_class = 256,256,2
input = Input(shape=(img_rows,img_cols,3))
#----------- main model (Model 1) ------------------------------------
conv_01 = Convolution2D(64, 3, 3, activation='relu',name = 'conv_01') (input)
conv_02 = Convolution2D(64, 3, 3, activation='relu',name = 'conv_02') (conv_01)
skip_dog = conv_02
conv_03 = Convolution2D(64, 3, 3, activation='relu',name = 'conv_03') (conv_02)
skip_cat = conv_03
conv_04 = Convolution2D(64, 3, 3, activation='relu',name = 'conv_04') (conv_03)
flatten_main_model = Flatten() (conv_04)
Output_main_model = Dense(units = number_of_class , activation = 'softmax', name = "Output_layer")(flatten_main_model)
#----------- Conditional Cat model ------------------------------------
conv_05 = Convolution2D(64, 3, 3, activation='relu',name = 'conv_05') (skip_cat)
flatten_cat_model = Flatten() (conv_05)
Output_cat_model = Dense(units = number_of_class , activation = 'softmax', name = "Output_layer_cat")(flatten_cat_model)
#----------- Conditional Dog model ------------------------------------
conv_06 = Convolution2D(64, 3, 3, activation='relu',name = 'conv_06') (skip_dog)
flatten_dog_model = Flatten() (conv_06)
Output_dog_model = Dense(units = number_of_class , activation = 'softmax', name = "Output_layer_dog")(flatten_dog_model)
#----------------------------- My discrete 3 models --------------------------------
model_01 = Model(inputs = input , outputs = Output_main_model,name = 'model_main')
model_02_1 = Model(inputs = input , outputs = Output_cat_model ,name = 'Conditional_cat_model')
model_02_2 = Model(inputs = input , outputs = Output_dog_model ,name = 'Conditional_dog_model')
</code></pre>
<p>How can I merge these 3 models (<code>model_01, model_02_1, model_02_2</code>) based on these conditions?</p>
<p><strong>Conditions are:</strong></p>
<ol>
<li>Feed data to model <code>model_01</code></li>
<li>Based on <code>model_01</code> result feed data to <code>model_02_1 or model_02_2</code></li>
<li>Next, predict the final output from <code>model_02_1 or model_02_2</code></li>
</ol> | 2020-12-25 19:17:40.880000+00:00 | 2021-01-26 13:43:23.063000+00:00 | 2021-01-26 13:43:23.063000+00:00 | python|machine-learning|keras|deep-learning|neural-network | ['https://cs.stackexchange.com/a/70620', 'https://arxiv.org/pdf/1505.04597.pdf', 'https://arxiv.org/pdf/1512.03385.pdf', 'https://i.stack.imgur.com/9sroF.png', 'https://arxiv.org/pdf/1905.08743.pdf', 'https://i.stack.imgur.com/Gpv0C.png', 'https://colah.github.io/posts/2015-08-Understanding-LSTMs/', 'https://i.stack.imgur.com/8JTb2.png'] | 8 |
33,906,826 | <p>I think the HMM-based <a href="http://www.coli.uni-saarland.de/~thorsten/tnt/" rel="nofollow">TnT tagger</a> provides a better approach to handle unknown words (see the approach in <a href="http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf" rel="nofollow">TnT tagger's paper</a>). </p>
<p>The accuracy results (for known words and unknown words) of TnT and two other POS and morphological taggers on 13 languages including Bulgarian, Czech, Dutch, English, French, German, Hindi, Italian, Portuguese, Spanish, Swedish, Thai and Vietnamese can be found in <a href="http://arxiv.org/abs/1412.4021" rel="nofollow">this article</a>.</p>
<pre><code> P(T*) = argmax P(Word/Tag)*P(Tag/TagPrev)
T
</code></pre>
<p>But when 'Word' does not appear in the training corpus, P(Word/Tag) produces zero for all possible tags, which leaves no room for choosing the best. </p>
<p>I have tried a few ways: </p>
<p>1) Assigning a small constant probability to all unknown words, P(UnknownWord/AnyTag)~Epsilon... This completely ignores P(Word/Tag) for unknown words by assigning a constant probability, so the decision on an unknown word is made by the prior probabilities alone. As expected, it does not produce good results. </p>
<p>2) Laplace Smoothing
I am confused by this. I don't know what the difference is between (1) and this. My understanding is that Laplace smoothing adds a constant probability (lambda) to all unknown and known words, so all unknown words get a constant probability (a fraction of lambda) and the known words' probabilities stay the same relative to each other, since every word's probability is increased by lambda.
Is Laplace smoothing the same as the previous approach?</p>
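<p>To make the comparison concrete, here is a minimal sketch of add-alpha (Laplace) smoothed emission probabilities (the counts and vocabulary size are made up). Note that the unknown-word probability it produces depends on the tag's total emission count, so it is not one constant epsilon shared by all tags:</p>

```python
from collections import Counter

def emission_probs(tag_word_counts, vocab_size, alpha=1.0):
    """Add-alpha (Laplace) smoothed P(word | tag) for one tag."""
    total = sum(tag_word_counts.values())
    def p(word):
        return (tag_word_counts.get(word, 0) + alpha) / (total + alpha * vocab_size)
    return p

counts_noun = Counter({"dog": 3, "cat": 2})  # 5 emissions seen for this tag
p_noun = emission_probs(counts_noun, vocab_size=10)
print(p_noun("dog"))     # (3+1)/(5+10) = 0.2666...
print(p_noun("unseen"))  # (0+1)/(5+10) = 0.0666...
```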
<p>*)Is there any better way of dealing with unknown words ?</p> | 2012-09-27 02:37:43.290000+00:00 | 2015-11-25 02:35:51.690000+00:00 | 2013-05-29 06:24:31.127000+00:00 | nlp|pos-tagger|oov | ['http://www.coli.uni-saarland.de/~thorsten/tnt/', 'http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf', 'http://arxiv.org/abs/1412.4021'] | 3 |
62,631,570 | <p><em>Cross Modality Pre-training</em> may be the method you need. Proposed by <a href="https://arxiv.org/abs/1608.00859" rel="nofollow noreferrer">Wang et al. (2016)</a>, this method averages the weights of the pre-trained model across the channels in the first layer and replicates the mean by the number of target channels. The experiment result indicates that the network gets better performance by using this kind of pre-training method even it has 20 input channels and its input modality is not RGB.</p>
<p>To apply this, one can refer to <a href="https://stackoverflow.com/questions/53251827/pretrained-tensorflow-model-rgb-rgby-channel-extension">another answer</a> that uses layer.get_weights() and layer.set_weights() to manually set the weights in the first layer of the pre-trained model.</p>
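<p>A minimal sketch of the averaging step described above, using plain NumPy (the kernel values are random stand-ins for pretrained RGB weights; with Keras you would then pass the result to <code>layer.set_weights</code> along with the bias):</p>

```python
import numpy as np

def expand_first_layer_kernel(kernel, target_channels):
    """Cross-modality init: average an (h, w, 3, filters) kernel over its
    input-channel axis and tile the mean to `target_channels` channels."""
    mean = kernel.mean(axis=2, keepdims=True)         # (h, w, 1, filters)
    return np.tile(mean, (1, 1, target_channels, 1))  # (h, w, C, filters)

rgb_kernel = np.random.rand(3, 3, 3, 64)  # stand-in for a pretrained RGB kernel
new_kernel = expand_first_layer_kernel(rgb_kernel, target_channels=6)
print(new_kernel.shape)  # (3, 3, 6, 64)
```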
<p>I tried applying a convolution first to reduce the channel dimension to 3 and then passed that output to <code>tf.keras.applications.DenseNet121()</code> but received the following error:</p>
<pre><code>import tensorflow as tf
dense_input = tf.keras.layers.Input(shape=(448, 448, 6))
dense_filter = tf.keras.layers.Conv2D(3, 3, padding='same')(dense_input)
dense_stem = tf.keras.applications.DenseNet121(include_top=False, weights='imagenet', input_tensor=dense_filter)
*** ValueError: You are trying to load a weight file containing 241 layers into a model with 242 layers.
</code></pre>
<p>Is there a better way to use pretrained models on data with a different number of input channels in keras? Will pretraining even help when the number of input channels is different?</p> | 2019-05-02 01:58:54.447000+00:00 | 2022-03-08 13:37:27.987000+00:00 | null | python|tensorflow|keras|deep-learning | ['https://arxiv.org/abs/1608.00859', 'https://stackoverflow.com/questions/53251827/pretrained-tensorflow-model-rgb-rgby-channel-extension'] | 2 |
54,542,551 | <p>For branches, some are like <code>jc .somewhere</code> where the CPU only really needs to guess if the branch will be taken or not taken to be able to speculate down the guessed path. However, some branches are like <code>jmp [table+eax*8]</code> where there can be over 4 billion possible directions, and for those cases the CPU needs to guess the target address to be able to speculate down the guessed path. Because there's very different types of branches, the CPU uses very different types of predictors.</p>
<p>For Spectre, there's a "meta pattern" - the attacker uses speculative execution to trick CPU into leaving information in something, then extracts that information from the something. There are multiple possibilities for "something" (data caches, instruction caches, TLBs, branch target buffer, branch direction buffer, return stack, write-combining buffers, ...) and therefore there's are many possible variations of spectre (and not just the "well known first two variations" that were made public in early 2018).</p>
<p>For spectre v1 (where "something" is a data cache) the attacker needs some way to trick the CPU into putting data into the data cache (e.g. a load and then a second load that depends on the value from the first load, which can be executed speculatively) and some way to extract the information (flush everything in the cache, then use the amount of time that a load takes to determine how the state of the data cache changed).</p>
<p>For spectre v2 (where "something" is the branch direction buffer that's used for instructions like <code>jc .somewhere</code>) the attacker needs some way to trick the CPU into putting data into the branch direction buffer (e.g. a load and then a branch that depends on the load, which can be executed speculatively) and some way to extract the information (set the branch direction buffer to a known state beforehand, then use the amount of time that a branch takes to determine how the state of the branch direction buffer changed).</p>
<p>For all of the many possible variations of spectre, the only important thing (for defense) is what the "something" can be (and how to prevent information from getting into the "something", or flush/overwrite/destroy information that got into the "something"). Everything else (specific details of one of the many possible implementations of code to attack any one of the many possible spectre variations) is unimportant.</p>
<p><strong>Vague History Of Spectre</strong></p>
<p>The original Spectre (v1, using cache timing) was found in 2017 and publicly announced in January 2018. It was like a dam bursting, and a few other variants (e.g. v2, using branch prediction) quickly followed. These early variations grabbed a lot of publicity. In the ~6 months or so after that multiple other variants were found, but didn't get as much publicity and a lot of people weren't (and still aren't) aware of them. By the "latter half" of 2018 people (e.g. me) started losing track of which variants were proven (via. "proof of concept" implementations) and which were still unproven, and some researchers started trying to enumerate the possibilities and establish naming conventions for them. The best example of this that I've seen so far is "A Systematic Evaluation of Transient Execution Attacks and Defenses" (see <a href="https://arxiv.org/pdf/1811.05441.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1811.05441.pdf</a> ).</p>
<p>However, the "hole in the dam wall" isn't something that can be plugged easily, and (for random guesses) I think it's going to take several years before we can assume all possibilities have been explored (and I think the need for mitigation will never go away).</p> | 2019-02-05 20:30:03.677000+00:00 | 2019-02-06 22:38:18.633000+00:00 | 2019-02-06 22:38:18.633000+00:00 | null | 54,541,157 | <p>I have done some reading about Spectre v2 and obviously you get the non technical explanations. Peter Cordes has a more in-depth <a href="https://security.stackexchange.com/questions/177100/why-are-amd-processors-not-less-vulnerable-to-meltdown-and-spectre/177101#177101">explanation</a> but it doesn't fully address a few details. Note: I have never performed a Spectre v2 attack so I do not have hands on experience. I have only read up about about the theory.</p>
<p>My understanding of Spectre v2 is that you make an indirect branch mispredict for instance <code>if (input < data.size)</code>. If the Indirect Target Array (which I'm not too sure of the details of -- i.e. why it is separate from the BTB structure) -- which is rechecked at decode for RIPs of indirect branches -- does not contain a prediction then it will insert the new jump RIP (branch execution will eventually insert the target RIP of the branch), but for now it does not know the target RIP of the jump so any form of static prediction will not work. My understanding is it is always going to predict not taken for new indirect branches and when Port 6 eventually works out the jump target RIP and prediction it will roll back using the BOB and update the ITA with the correct jump address and then update the local and global branch history registers and the saturating counters accordingly. </p>
<p>The hacker needs to train the saturating counters to always predict taken which, I imagine, they do by running the <code>if(input < data.size)</code> multiple times in a loop where <code>input</code> is set to something that is indeed less than <code>data.size</code> (catching errors accordingly) and on the final iteration of the loop, make <code>input</code> more than <code>data.size</code> (1000 for instance); the indirect branch will be predicted taken and it will jump to the body of the if statement where the cache load takes place.</p>
<p>The if statement contains <code>secret = data[1000]</code> (A particular memory address (data[1000]) that contains secret data is targeted for loading from memory to cache) then this will be allocated to the load buffer speculatively. The preceding indirect branch is still in the branch execution unit and waiting to complete.</p>
<p>I believe the premise is that the load needs to be executed (assigned a line fill buffer) before the load buffers are flushed on the misprediction. If it has been assigned a line fill buffer already then nothing can be done. It makes sense that there isn't a mechanism to cancel a line fill buffer allocation because the line fill buffer would have to pend before storing to the cache after returning it to the load buffer. This could cause line fill buffers to become saturated because instead of deallocating when required (keeping it in there for speed of other loads to the same address but deallocating when the there are no other available line buffers). It would not be able to deallocate until it receives some signal that a flush is <em>not</em> going to occur, meaning it has to halt for the previous branch to execute instead of immediately making the line fill buffer available for the stores of the other logical core. This signalling mechanism could be difficult to implement and perhaps it didn't cross their minds (pre-Spectre thinking) and it would also introduce delay in the event that branch execution takes enough time for hanging line fill buffers to cause a performance impact i.e. if <code>data.size</code> is purposefully flushed from the cache (<code>CLFLUSH</code>) before the final iteration of the loop meaning branch execution could take up to 100 cycles.</p>
<p>I hope my thinking is correct but I'm not 100% sure. If anyone has anything to add or correct then please do.</p> | 2019-02-05 18:49:13.417000+00:00 | 2019-02-27 21:35:02.033000+00:00 | 2019-02-16 06:13:39.930000+00:00 | x86|intel|cpu-architecture|branch-prediction|spectre | ['https://arxiv.org/pdf/1811.05441.pdf'] | 1 |
68,411,391 | <p>I think the problem might be with the training step rather than the "averaging" algorithm.</p>
<p>According to the paper that proposes the FedAvg algorithm (<a href="https://arxiv.org/pdf/1602.05629.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1602.05629.pdf</a>), the local models apply stochastic gradient descent to the global model rather than training new local models from scratch.</p>
<p>Here you have a tutorial from TensorFlow that applies Federated Averaging: <a href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2#gradient_descent_on_a_single_batch" rel="nofollow noreferrer">https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2#gradient_descent_on_a_single_batch</a></p> | 2021-07-16 15:11:46.450000+00:00 | 2021-07-16 15:11:46.450000+00:00 | null | null | 66,472,157 | <p>I am working with federated learning. I am using a global server where I defined a cnn based classifier. The global server compiles the model with hyper-parameters and send it to the edge(clients), currently I am using two clients. Each client uses its local data (for now I am using same data, and model on each client). After training model, each client has above 95 percent accuracy, precision and recall in their local models. clients sends their trained local model to the server. The server gets the model and and gets the weights from each received model and computes average according to <a href="https://i.stack.imgur.com/iZxZr.png." rel="nofollow noreferrer">this formula</a>. Below is the code I wrote to implement this formula in python. when I set the average weights to models and try to predict, the accuracy, recall and precision fall below 20%.</p>
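<p>For reference, the FedAvg aggregation step itself is just a sample-size-weighted average of the client weights; a minimal NumPy sketch (toy one-layer "models"):</p>

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client weight lists (FedAvg aggregation)."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

w1 = [np.array([1.0, 2.0])]  # client 1 weights (100 local samples)
w2 = [np.array([3.0, 4.0])]  # client 2 weights (300 local samples)
avg = fed_avg([w1, w2], client_sizes=[100, 300])
print(avg[0])  # [2.5 3.5]
```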
<p>Am I doing something wrong in implementation?</p>
<pre><code># initial weights of global model, set to zero
ave_weights = model.get_weights()
ave_weights = [i * 0 for i in ave_weights]
count = 0

# Multithreaded Python server : TCP Server Socket Thread Pool
def ClientThread_send(conn, address, weights):
    # send model to client
    conn.send(model)
    print("Model Sent to :", address)
    print("waiting for weights")
    model_recv = conn.recv(1024)
    print("weights received from:", address)
    global count
    global ave_weights
    # receive weights from clients
    rec_weight = model.get_weights()
    # multiply the client weights by the number of samples in the client's local data
    rec_weight = [i * 100000 for i in rec_weight]
    # divide the weights by the total number of samples of all participants
    rec_weight = [i / 200000 for i in rec_weight]
    # sum the weights of all clients
    ave_weights = [x + y for x, y in zip(ave_weights, rec_weight)]
    count = count + 1
    conn.close()
    if count == 2:
        # set the global model weights if the count (number of clients) is two
        model.set_weights(ave_weights)

while True:
    conn, address = s.accept()
    start_new_thread(ClientThread_send, (conn, address, ave_weights))
</code></pre> | 2021-03-04 09:27:37.537000+00:00 | 2021-07-16 15:11:46.450000+00:00 | 2021-03-04 09:29:29.080000+00:00 | python|machine-learning|artificial-intelligence|conv-neural-network|tensorflow-federated | ['https://arxiv.org/pdf/1602.05629.pdf', 'https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2#gradient_descent_on_a_single_batch'] | 2 |
53,408,005 | <p>As described in the <a href="https://arxiv.org/pdf/1704.04861.pdf" rel="noreferrer">paper</a>: </p>
<ol>
<li><p>The role of the <strong>width multiplier <em>α</em></strong> is to thin a network uniformly at each layer. For a given layer and width multiplier <em>α</em>, the number of input channels <em>M</em> becomes <em>αM</em> and the number of output channels <em>N</em> becomes <em>αN</em>.</p></li>
<li><p>The <strong>resolution multiplier <em>ρ</em></strong> is applied to the input image and the internal representation of every layer is subsequently reduced by the same multiplier. In practice we <strong>implicitly</strong> set <em>ρ</em> by setting the input resolution.</p></li>
</ol>
<p>In the <a href="https://github.com/tensorflow/models/blob/9f7a5fa353df0ee2010f8e7a5494ca6b188af8bc/research/slim/nets/mobilenet_v1.py#L216" rel="noreferrer">code</a>:
The <strong>depth_multiplier</strong> is used to reduce the number of channels at each layer. <strong>So the depth_multiplier corresponds to the width multiplier <em>α</em>.</strong></p>
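<p>As a sketch, the TF-slim implementation linked above derives each layer's channel count from the multiplier roughly like this (the <code>min_depth</code> floor and the exact rounding are implementation details of that file; treat this as illustrative):</p>

```python
def scaled_channels(channels, depth_multiplier, min_depth=8):
    """Number of filters after applying the width multiplier (alpha)."""
    return max(int(channels * depth_multiplier), min_depth)

print(scaled_channels(64, 0.5))   # 32
print(scaled_channels(16, 0.25))  # 8 -- floored at min_depth
```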
<p>The param depth_multiplier is documented as:</p>
<blockquote>
<p>depth_multiplier: Float multiplier for the depth (number of channels)
for all convolution ops. The value must be greater than zero. Typical
usage will be to set this value in (0, 1) to reduce the number of
parameters or computation cost of the model</p>
</blockquote>
<p>But in the (<a href="https://arxiv.org/pdf/1704.04861.pdf" rel="noreferrer">paper</a>), they mention 2 types of multipliers: the width multiplier and the resolution multiplier, so which one corresponds to the depth multiplier?</p>
<p>On <a href="https://keras.rstudio.com/reference/application_mobilenet.html" rel="noreferrer">Keras</a>, they say that:</p>
<blockquote>
<p>depth_multiplier: depth multiplier for depthwise convolution (also
called the resolution multiplier)</p>
</blockquote>
<p>I'm so confused!</p> | 2018-04-24 04:38:48.747000+00:00 | 2018-11-21 08:34:04.407000+00:00 | null | tensorflow | ['https://arxiv.org/pdf/1704.04861.pdf', 'https://github.com/tensorflow/models/blob/9f7a5fa353df0ee2010f8e7a5494ca6b188af8bc/research/slim/nets/mobilenet_v1.py#L216'] | 2 |
68,062,164 <p>It seems like the code is from <a href="https://arxiv.org/pdf/1305.1878.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1305.1878.pdf</a> (Appendix A).</p>
<p>First, the backticks in the code are actually <code>'</code>. I guess the paper prints backticks just because the journal (or the authors) didn't handle the character properly.</p>
<p>Now <code>not i:2 if i==2 else None:2 if i==1 else 1</code> is just a slice, like the <code>start:stop:step</code> part of <code>A[start:stop:step]</code>.</p>
<p><code>x if a else y</code> is what's called a <a href="https://docs.python.org/3/reference/expressions.html#conditional-expressions" rel="nofollow noreferrer">conditional expression</a>; it evaluates to <code>x</code> if <code>a</code> is true, and <code>y</code> otherwise.</p>
<p>So, for example, if <code>i == 1</code>, then</p>
<ol>
<li><code>not i</code> becomes <code>False</code>, since <code>1</code> is Truthy.</li>
<li><code>2 if i == 2 else None</code> becomes <code>None</code>.</li>
<li><code>2 if i == 1 else 1</code> becomes <code>2</code>.</li>
</ol>
<p>So <code>not i:2 if i==2 else None:2 if i==1 else 1</code> in this case becomes <code>False:None:2</code>, which is <code>0::2</code> when used as a slice. Likewise, when <code>j == 1</code>, the second one <code>not j:2 if j==2 else None:2 if j==1 else 1</code> becomes <code>0::2</code>. Therefore <code>a = A[::2, ::2]</code> if <code>i == 1, j == 1</code>.</p>
<p>An example follows:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def cofactor(A, ij):
    '''Cofactor[i,j] of 3x3 matrix A'''
    i, j = ij
    a = A[not i:2 if i==2 else None:2 if i==1 else 1,
          not j:2 if j==2 else None:2 if j==1 else 1]
    return (-1)**(i+j) * (a[0,0]*a[1,1] - a[1,0]*a[0,1])

A = np.arange(9).reshape(3, 3)  # an example matrix; [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
print(cofactor(A, (1, 2)))  # 6
</code></pre>
<p>Note that I modified the function a little bit; defining a function with arguments <code>(A, (i, j))</code> as in the paper is not valid syntax in Python 3 (tuple parameter unpacking was removed in Python 3; the paper's code is Python 2). So I added a line to explicitly unpack the tuple <code>ij</code> into <code>i</code> and <code>j</code>.</p>
Besides, how can I use the function <code>cofactor(A, (i, j))</code>? Can you give an example?</p>
<pre><code>def cofactor(A, (i, j)):
’’’Cofactor[i,j] of 3x3 matrix A’’’
a = A[not i:2 if i==2 else None:2 if i==1 else 1,
not j:2 if j==2 else None:2 if j==1 else 1]
return (-1)**(i+j) * (a[0,0]*a[1,1] - a[1,0]*a[0,1])
</code></pre> | 2021-06-21 03:40:28.190000+00:00 | 2021-06-21 07:11:13.697000+00:00 | 2021-06-21 05:46:06.540000+00:00 | python|python-3.x | ['https://arxiv.org/pdf/1305.1878.pdf', 'https://docs.python.org/3/reference/expressions.html#conditional-expressions'] | 2 |
28,194,256 | <p>That's an open problem in image recognition. Besides sliding windows, existing approaches include predicting the object location in the image as a CNN output, predicting borders (classifying pixels as belonging to the object boundary or not), and so on. See for example <a href="http://arxiv.org/pdf/1312.6229v4.pdf" rel="noreferrer">this paper</a> and the references therein. </p>
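<p>For contrast with those approaches, the sliding-window baseline the question mentions can be sketched in a few lines. The scoring function here is a stand-in for a real CNN classifier, and all names are illustrative:</p>

```python
import numpy as np

def sliding_window_detect(image, score_fn, win=8, stride=4):
    """Slide a fixed-size window over the image, score every crop,
    and return the best-scoring box as (row, col, win, score)."""
    best = None
    h, w = image.shape
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            s = score_fn(image[r:r + win, c:c + win])
            if best is None or s > best[3]:
                best = (r, c, win, s)
    return best

# Toy example: a bright 8x8 blob plays the "object",
# and mean brightness plays the "classifier".
img = np.zeros((32, 32))
img[12:20, 16:24] = 1.0
r, c, win, score = sliding_window_detect(img, np.mean)
print(r, c, score)  # the best window lands exactly on the blob
```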
<p>Also note that with CNN using max-pooling, one can identify positions of feature detectors that contributed to object recognition, and use that to suggest possible object location region. </p> | 2015-01-28 14:06:07.353000+00:00 | 2015-01-28 14:06:07.353000+00:00 | null | null | 28,178,054 | <p>As far as I know, CNN rely on sliding window techniques and can only indicate if a certain pattern is present or not anywhere in given bounding boxes. Is that true?</p>
<p>Can one achieve localization with CNN without any help of such techniques?</p> | 2015-01-27 19:05:29.387000+00:00 | 2017-07-18 17:54:58.363000+00:00 | null | computer-vision|neural-network|feature-detection|deep-learning | ['http://arxiv.org/pdf/1312.6229v4.pdf'] | 1 |
11,573,116 | <p>It turns out this question was already asked on <a href="https://mathoverflow.net/questions/39159/isodiametric-hull">Math Overflow</a>, and people concluded it was likely to be a difficult problem. There are even some unanswered basic questions such as whether such a shape is unique.</p>
<p>So I don't have an exact solution, but hopefully this will get you closer or at least give you some ideas.</p>
<h2>Background</h2>
<p>For simplicity we can assume without loss of generality that the diameter of the initial polygon is 1.</p>
<p><a href="http://cdm.ucalgary.ca/cdm/index.php/cdm/article/download/190/126" rel="nofollow noreferrer">On a generalization of the Blaschke-Lebesgue theorem for disk-polygons</a> (M. Bezdek, 2009) describes a number of useful concepts. Relevant ones include:</p>
<ul>
<li>a disk-polygon is, informally, a convex set that forms a "fat" polygon where edges are replaced with arcs of curvature 1.</li>
<li>the set of points which we can add to a set of points <em>D</em> so that the resulting shape is of diameter at most 1 is called the dual disk-polygon <em>D*</em> of <em>D</em>.</li>
<li>the dual of the dual <em>D**</em> is called the spindle convex hull of <em>D</em>: it is the smallest disk-polygon containing <em>D</em>.</li>
</ul>
<p>Instead of working with polygons, it suffices to work with disk-polygons: we can always replace the original polygon with its spindle convex hull without changing the result.</p>
<p>We have that <em>D</em> ⊆ <em>D*</em> when <em>D</em> has diameter 1, and <em>D</em> = <em>D*</em> if and only if <em>D</em> has constant width 1. The solution <em>S</em> will have constant width 1 (although this is of course not sufficient). Therefore <em>D</em> ⊆ <em>S</em> if and only if <em>D</em> ⊆ <em>S</em> ⊆ <em>D*</em>: in particular, to approximate <em>S</em>, we only need to find a large enough disk-polygonal subset <em>D</em> of <em>S</em>. This is very powerful, because as we will see, saying that some point belongs or does not belong to <em>S</em> translates to both an upper bound <em>and</em> a lower bound on <em>S</em> (and therefore its area).</p>
<h2>Theoretical problems</h2>
<p>Ideally to find an efficient algorithm it would be useful to answer the following questions:</p>
<ul>
<li>is a globally optimal shape, i.e. a solution, necessarily unique?</li>
<li>is a locally optimal shape necessarily unique?</li>
<li>is the isodiametric hull of a polygon necessarily a circle of diameter 1 or a Reuleaux polygon of width 1?</li>
<li>if so, are the vertices of the Reuleaux polygon derived from finitely many unit-radius circle intersections, starting from the vertices of the original polygon?</li>
<li>is there a bound on the number of vertices of the Reuleaux polygon as a function of the number of vertices of the original polygon?</li>
</ul>
<p>Questions on the area of disk-polygons can be difficult: the problem solved in <a href="http://arxiv.org/pdf/math.MG/0108098" rel="nofollow noreferrer">Pushing disks apart - the Kneser-Poulsen conjecture in the plane</a> (K. Bezdek, R. Connelly, 2001) was a simple question regarding the area of intersections of disks in the plane which had remained unsolved for a long time.</p>
<h2>Practical(?) approaches</h2>
<p><strong>Global search</strong>:<br>
Start with the spindle convex hull of the polygon, and lazily construct an infinite search tree of increasing disk-polygons where each node partitions the set of all constant-width <em>X</em> satisfying <em>D</em> ⊆ <em>X</em> ⊆ <em>D*</em>, depending on whether some point <em>x</em> of <em>D*</em> \ <em>D</em> belongs or does not belong to <em>X</em>. The left branch is the spindle convex hull of <em>D</em> ∪ {<em>x</em>}. The right branch is the dual disk-polygon of <em>D*</em> ∩ {<em>y</em> : <em>x</em> ∉ [<em>y</em>, <em>z</em>] for all <em>z</em> in <em>D</em>}.</p>
<p>Unless you choose <em>x</em> very poorly (e.g. on the boundary of <em>D*</em> \ <em>D</em>), every infinite path of that tree should converge to a constant-width curve.</p>
<p>The idea is to explore the tree in a somewhat breadth-first way. Hopefully, if <em>x</em> is chosen in a sensible way, you will be able to discard all the branches where <em>D*</em> has a smaller area than the greatest area of a <em>D</em> found so far, as such branches cannot contain the solution. Then you will have a set of disk-polygons that converge to the set of solutions to the problem as you go deeper in the tree, hopefully while not growing too fast.</p>
<p>Some heuristics for <em>x</em> could be: take a point as close as possible to the inside of <em>D*</em> \ <em>D</em>, take a random point, and so on. It may also be interesting to incorporate some amount of depth-first search to have more precise lower bounds of the area of the solution which would allow to discard whole branches of the tree sooner.</p>
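<p>To experiment with these sets numerically, the two primitives involved — the diameter of a vertex set, and the area of the dual <em>D*</em>, i.e. the points within distance 1 of every vertex — can be estimated with a crude grid sketch (my own illustrative helper, not taken from the linked papers; for a convex polygon of diameter ≤ 1, checking the vertices suffices):</p>

```python
import itertools, math

def diameter(pts):
    """Diameter of a convex polygon: the max distance over all vertex pairs."""
    return max(math.dist(p, q) for p, q in itertools.combinations(pts, 2))

def dual_area(pts, n=400):
    """Grid estimate of the area of the dual D* = {y : dist(y, v) <= 1 for
    every vertex v} -- valid when the convex polygon has diameter <= 1."""
    xs = [v[0] for v in pts]
    ys = [v[1] for v in pts]
    x0, x1 = min(xs) - 1.0, max(xs) + 1.0
    y0, y1 = min(ys) - 1.0, max(ys) + 1.0
    cell = ((x1 - x0) / n) * ((y1 - y0) / n)
    hits = 0
    for i in range(n):
        for j in range(n):
            p = (x0 + (i + 0.5) * (x1 - x0) / n,
                 y0 + (j + 0.5) * (y1 - y0) / n)
            if all(math.dist(p, v) <= 1.0 for v in pts):
                hits += 1
    return hits * cell

# Unit-side equilateral triangle: its dual is the Reuleaux triangle of
# width 1, whose exact area is (pi - sqrt(3)) / 2, roughly 0.7048.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
d = diameter(tri)
area = dual_area(tri)
print(d, area)
```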
<p><strong>Local search</strong>:<br>
We could also work only with constant-width disk-polygons (Reuleaux polygons), and look at the effect of small deviations. But the search space is pretty large, so it's not clear how to do that.</p> | 2012-07-20 04:57:04.473000+00:00 | 2012-07-20 04:57:04.473000+00:00 | 2017-04-13 12:57:55.007000+00:00 | null | 3,707,231 | <p>Given a convex polygon, I am trying to grow its shape (as in "maximal area") while preserving its diameter. The diameter is defined as the length of the longest segment that can be placed within the polygon. Since the polygon is convex, I assume that this diameter can always be found by scanning all vertex pairs.</p>
<p>For example, given an equilateral triangle as an input polygon, the diameter of the triangle is the length of any edge; smoothing this would result in 3 circle segments as shown in the image<img src="https://i.stack.imgur.com/wVOvu.png" alt="before-and-after-smoothing"></p>
<p>For arbitrary convex polygons, a very inefficient algorithm is to compute the intersection of the maximum-diameter radius circles centered on each polygon vertex; this is what I am currently using (in Java). Is there anything better? Any pseudo-code or pointer to algorithm will be appreciated.</p>
<p>Another example: a squashed pentagon and its corresponding diameter-preserving maximal shape. The idea is that you cannot increase the area of this shape without increasing the diameter (that is, making it possible to draw a straight line within the bounds of the shape which is longer than the original diameter). In this particular case, it seems that a single circle with radius = polygon_diameter/2 (pink) is better than the intersection of multiple larger circles with radius = polygon_diameter (light-blue). The second image superimposes both areas to make comparison easier, but areas should completely enclose the polygon.</p>
<p><img src="https://i.stack.imgur.com/uFuGm.png" alt="enter image description here"></p> | 2010-09-14 08:31:09.383000+00:00 | 2012-07-20 04:57:04.473000+00:00 | 2012-07-16 08:53:56.570000+00:00 | computational-geometry|shapes|convex-polygon | ['https://mathoverflow.net/questions/39159/isodiametric-hull', 'http://cdm.ucalgary.ca/cdm/index.php/cdm/article/download/190/126', 'http://arxiv.org/pdf/math.MG/0108098'] | 3 |
47,274,319 | <p>There is a paper available here (<a href="http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120" rel="nofollow noreferrer">http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120</a>) and (non-paywalled version) here (<a href="https://arxiv.org/abs/1707.02212" rel="nofollow noreferrer">https://arxiv.org/abs/1707.02212</a>) that describes how to use Intel Secure Key, which is a cryptographically secure random number generator implemented on-chip. It can be accessed via RdRand and RdSeed instructions.</p>
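<p>Whatever the entropy source, raw 64-bit outputs (from RdRand or anywhere else) can be turned into uniform doubles in [0, 1) the same way NumPy's generators do it: keep the top 53 bits (a double's significand) and scale. The sketch below uses <code>os.urandom</code> as a stand-in for the hardware instruction, so treat it as an illustration of the conversion rather than actual RdRand access:</p>

```python
import os, struct

def uniforms_from_uint64(u64s):
    """Map raw 64-bit integers to uniform floats in [0, 1):
    keep the top 53 bits and scale by 2**-53."""
    return [(x >> 11) * 2.0 ** -53 for x in u64s]

def hardware_stand_in(n):
    """Stand-in entropy source; in practice this would be RdRand output."""
    raw = os.urandom(8 * n)
    return struct.unpack("<%dQ" % n, raw)

us = uniforms_from_uint64(hardware_stand_in(1000))
assert all(0.0 <= u < 1.0 for u in us)
```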
<p>But the author seems to say you should go for implementing it in C/C++ instead of Python. The rdrand python module runs about 100x slower than the Python default random number generator, and about 1000x slower than the one in Numpy (paper section 5.2).</p> | 2017-11-13 22:03:10.643000+00:00 | 2017-11-14 16:17:33.283000+00:00 | 2017-11-14 16:17:33.283000+00:00 | null | 22,680,441 | <p>Are there any ready made libraries so that the intel hardware prng (rdrand) can be used by numpy programs to fill buffers of random numbers?</p>
<p>Failing this can someone point me in the right direction for some C code that I could adapt or use (I use CPython and Cython with numpy so the bare minimum wrapper shd be enough).</p>
<p>The random generators I want are uniform random numbers between [0,1).</p> | 2014-03-27 07:01:06.800000+00:00 | 2017-11-14 16:17:33.283000+00:00 | 2014-03-27 14:21:53.697000+00:00 | python|random|numpy|rdrand | ['http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120', 'https://arxiv.org/abs/1707.02212'] | 2 |
67,109,702 | <p>It is possible to use DOM queries to get the parsed values. Please check the code snippet below.</p>
<pre class="lang-js prettyprint-override"><code>fetch("https://export.arxiv.org/api/query?id_list=1804.10436")
.then(response => response.text())
.then(str => new window.DOMParser().parseFromString(str, "text/xml"))
.then(xml => Array.from(xml.querySelectorAll('author>name')).map(e => e.textContent).join(", "))
.then(console.log);
</code></pre> | 2021-04-15 13:47:04.310000+00:00 | 2021-04-15 13:47:04.310000+00:00 | null | null | 67,109,535 | <p>I have the javascript below that fetches an XML feed:</p>
<pre class="lang-js prettyprint-override"><code>fetch("https://export.arxiv.org/api/query?id_list=1804.10436")
.then(response => response.text())
.then(str => new window.DOMParser().parseFromString(str, "text/xml"))
</code></pre>
<p>The XML looks like below:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<link href="http://arxiv.org/api/query?search_query%3D%26id_list%3D1804.10436%26start%3D0%26max_results%3D10" rel="self" type="application/atom+xml"/>
<title type="html">ArXiv Query: search_query=&amp;id_list=1804.10436&amp;start=0&amp;max_results=10</title>
<id>http://arxiv.org/api/nUEsN1vTKh1gSfUw4HiR2ZTFdzs</id>
<updated>2021-04-15T00:00:00-04:00</updated>
<opensearch:totalResults xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">1</opensearch:totalResults>
<opensearch:startIndex xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">0</opensearch:startIndex>
<opensearch:itemsPerPage xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">10</opensearch:itemsPerPage>
<entry>
<id>http://arxiv.org/abs/1804.10436v1</id>
<updated>2018-04-27T10:57:45Z</updated>
<published>2018-04-27T10:57:45Z</published>
<title>Characterizing the highly cited articles: a large-scale bibliometric
analysis of the top 1% most cited research</title>
<author>
<name>Pablo Dorta-González</name>
</author>
<author>
<name>Yolanda Santana-Jiménez</name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">23 pages, 6 tables, 2 figures</arxiv:comment>
<link href="http://arxiv.org/abs/1804.10436v1" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1804.10436v1" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.DL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.DL" scheme="http://arxiv.org/schemas/atom"/>
</entry>
</feed>
</code></pre>
<p>How can I construct a comma-separate string that contains <em>all</em> values in <code>author/name</code> tags? In the XML above I want to get <code>Pablo Dorta-González, Yolanda Santana-Jiménez</code></p> | 2021-04-15 13:36:26.953000+00:00 | 2021-04-15 13:47:04.310000+00:00 | null | javascript|xml|domparser | [] | 0 |
36,916,326 | <p>That's a subcategory problem within machine learning. You can learn a lot reading this survey: "One-Class Classification: Taxonomy of Study and Review of
Techniques" (<a href="http://arxiv.org/pdf/1312.0049.pdf" rel="nofollow">http://arxiv.org/pdf/1312.0049.pdf</a>). Hope it helps.</p> | 2016-04-28 13:34:47.443000+00:00 | 2016-04-28 13:34:47.443000+00:00 | null | null | 36,909,310 | <p>I have a dataset which contains visiting history from customers. </p>
<p>The dataset has three columns: customer ID, AM/PM (visit in the AM or PM) and Weekday/Weekend (visit on a weekday or the weekend). </p>
<p>I want to learn from this dataset and select the top 50 customers who have the highest chance of visiting for a specified input (like AM / Weekday).</p>
<p>For now, I create a model for each customer using a one-class SVM (I only have positive (visit) data). Since the one-class SVM only has binary output, I can only tell whether a certain customer will visit or not for a specified input, rather than selecting the top 50 customers.</p>
<p>I was wondering if there is an algorithm that can learn from a positive-only dataset and give a score or probability-like output?</p> | 2016-04-28 08:32:09.477000+00:00 | 2016-04-28 13:34:47.443000+00:00 | null | python|machine-learning|scikit-learn|classification | ['http://arxiv.org/pdf/1312.0049.pdf'] | 1
32,472,492 | <p>It's a very difficult problem in general. I'd suggest the easiest way is to constrain the problem as much as possible - control the lighting, the size/orientation of the cars to detect, and avoid occlusions. </p>
<p>This constraining has been the philosophy image processing has followed up until recently. Now the trend is that, instead of constraining your problem, you obtain a massive amount of example data to train a supervised learning algorithm. In fact it's possible that you can use a pre-trained model that would let you detect cars, as has been suggested in a previous answer.</p>
<p>There has been recently massive progress in the area of object detection in images and here are a few of the state of the art approaches based on neural network based approaches:</p>
<ul>
<li><p><a href="http://arxiv.org/pdf/1311.2901" rel="nofollow">OverFeat</a></p></li>
<li><p>Rich feature hierarchies for accurate object detection and semantic segmentation (<a href="http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf" rel="nofollow">R-CNN paper</a>)</p></li>
<li>Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition (<a href="http://arxiv.org/pdf/1406.4729" rel="nofollow">paper</a>)</li>
</ul>
<p>Framework that you could use include:</p>
<ul>
<li>Caffe: <a href="http://caffe.berkeleyvision.org/" rel="nofollow">http://caffe.berkeleyvision.org/</a></li>
<li>Theano</li>
<li>Torch</li>
</ul> | 2015-09-09 06:37:55.323000+00:00 | 2015-09-09 06:37:55.323000+00:00 | null | null | 18,432,497 | <p>I am currently studying image processing and learning matlab for my project.<br/>
I need to know if there is any method to detect a car from a traffic image or parking lot image and then segment it out.<br/>
I have googled a lot but mostly the content is video based and I don't know anything about image processing.
<br/>
Language preferred: MATLAB<br/>
I am supposed to do this on images only not videos.</p> | 2013-08-25 18:55:08.297000+00:00 | 2015-09-09 06:37:55.323000+00:00 | 2013-08-26 11:52:57.363000+00:00 | matlab|image-processing|image-segmentation|object-detection | ['http://arxiv.org/pdf/1311.2901', 'http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf', 'http://arxiv.org/pdf/1406.4729', 'http://caffe.berkeleyvision.org/'] | 4 |
47,346,807 | <p>Let's just take an example: I'll transpose the floating point problem to a model in base 10 with only 2 significant digits to keep it simple, with each operation result rounded to nearest.</p>
<p>Say we must sum the 3 numbers <code>9.9 + 8.4 + 1.4</code><br>
The exact result is <code>19.7</code>, but we have only two digits, so it should be rounded to <code>20.</code></p>
<p>If we first sum <code>9.9 + 8.4</code> we get <code>18.3</code> which is then rounded to <code>18.</code><br>
We then sum <code>18. + 1.4</code> we get <code>19.4</code> rounded to <code>19.</code>.</p>
<p>If we first sum the last two terms <code>8.4 + 1.4</code> we get <code>9.8</code>, no rounding required yet.<br>
Then <code>9.9 + 9.8</code> we get <code>19.7</code> rounded to <code>20.</code>, a different result.</p>
<p><code>(9.9 + 8.4) + 1.4</code> differs from <code>9.9 + (8.4 + 1.4)</code>, the sum operation is not associative and this is due to intermediate rounding. We could exhibit similar examples with other rounding modes too...</p>
<p>The problem is exactly the same in base 2 with 53 bits significand: intermediate rounding will be causing the non associativity whatever the base or significand length.</p>
<p>To eliminate the problem, you could either sort the numbers so that the order is always the same, or eliminate the intermediate rounding and keep only the final one, for example with a super accumulator like this <a href="https://arxiv.org/pdf/1505.05571.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1505.05571.pdf</a><br>
...Or just accept living with an approximate result (up to you to analyze the average or worst-case error and decide if it is acceptable...).</p> | 2017-11-17 09:05:12.437000+00:00 | 2017-11-17 09:19:12.280000+00:00 | 2017-11-17 09:19:12.280000+00:00 | null | 47,296,419 | <p>Here is pseudocode of my problem.</p>
<p>I have an array of <strong>IEEE 754</strong> double precision positive numbers.</p>
<p>The array can come in a random order but numbers are always the same, just scrambled in their positions. Also these numbers <strong>can vary in a very wide range</strong> in the valid IEEE range of the <code>double</code> representation.</p>
<p>Once I have the list, I initialize a variable:</p>
<pre><code>double sum_result = 0.0;
</code></pre>
<p>And I accumulate the sum on <code>sum_result</code>, in a loop over the whole array. At each step I do:</p>
<pre><code>sum_result += my_double_array[i]
</code></pre>
<p>Is it guaranteed that, whatever the order of the initial array of <code>double</code>, if the numbers are the same, the printed out sum result will be always the same?</p> | 2017-11-14 22:37:48.727000+00:00 | 2017-11-17 09:19:12.280000+00:00 | 2017-11-14 23:18:04.287000+00:00 | floating-point|precision|ieee-754 | ['https://arxiv.org/pdf/1505.05571.pdf'] | 1 |
32,269,840 | <p>Necroposting, but for others like me who come across this question, there is <a href="http://lab.fs.uni-lj.si/lasin/wp/IMIT_files/neural/doc/Omondi2006.pdf" rel="noreferrer">an in-depth, though old, treatment of implementing neural networks using FPGAs</a>.</p>
<p>It's been three years since I posted this, but it is still being viewed so I thought I'd add another two papers from last year I recently found.</p>
<p>The first talks about <a href="http://www.nallatech.com/fpga-acceleration-convolutional-neural-networks/" rel="noreferrer">FPGA Acceleration of Convolutional Neural Networks</a>. Nallatech performed the work. It's more marketing than an academic paper, but still an interesting read, and might be a jumping-off point for someone interested in experimenting. I am not connected to Nallatech in any way. </p>
<p>The second paper came out of the University of Birmingham, UK, written by Yufeng Hao. It presents <a href="https://arxiv.org/ftp/arxiv/papers/1711/1711.05860.pdf" rel="noreferrer">A General Neural Network Hardware Architecture on FPGA</a>.</p> | 2015-08-28 11:18:05.817000+00:00 | 2018-05-16 14:57:37.247000+00:00 | 2018-05-16 14:57:37.247000+00:00 | null | 2,190,470 | <p>To learn FPGA programming, I plan to code up a simple Neural Network in FPGA (since it's massively parallel; it's one of the few things where an FPGA implementation might have a chance of being faster than a CPU implementation).</p>
<p>Though I'm familiar with C programming (10+ years). I'm not so sure with FPGA development stuff. Can you provide a guided list of what I should do / learn / buy?</p>
<p>Thanks!</p> | 2010-02-03 08:01:55.663000+00:00 | 2020-05-22 18:52:06.603000+00:00 | null | neural-network|fpga | ['http://lab.fs.uni-lj.si/lasin/wp/IMIT_files/neural/doc/Omondi2006.pdf', 'http://www.nallatech.com/fpga-acceleration-convolutional-neural-networks/', 'https://arxiv.org/ftp/arxiv/papers/1711/1711.05860.pdf'] | 3 |
69,673,504 | <p>There can be various reasons.</p>
<p>One reason is that this is due to the so-called <a href="https://arxiv.org/pdf/1412.6568.pdf" rel="nofollow noreferrer">hubness problem</a> of embedding spaces, which is an artifact of the high-dimensional space. Some words end up close to a large part of the space and act as sort of hubs in the nearest neighbor search, so through these words, you can get quickly from everywhere to everywhere.</p>
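<p>The hubness effect is easy to measure on your own embeddings: count how often each word appears in other words' top-k neighbour lists (the "k-occurrence"); in a hubby space this distribution is heavily skewed, with a few words appearing far more often than the average of k. A rough sketch (illustrative only, not tied to the tutorial's code):</p>

```python
import numpy as np

def k_occurrence(emb, k=10):
    """For each vector, count how many other vectors list it among
    their k nearest neighbours (cosine similarity)."""
    x = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)            # exclude self-matches
    topk = np.argsort(-sim, axis=1)[:, :k]    # k nearest for every row
    return np.bincount(topk.ravel(), minlength=len(emb))

rng = np.random.default_rng(0)
counts = k_occurrence(rng.normal(size=(500, 100)), k=10)
# Every point contributes k entries, so counts sum to n*k;
# hubs show up as entries far above the mean value k.
print(counts.sum(), counts.max())
```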
<p>Another reason might be that the model is just undertrained for this particular word. Word embeddings are typically trained on very large datasets, such that every word appears in sufficiently many contexts. If a word does not appear frequently enough or in too ambiguous contexts, then it also ends up to be similar to basically everything.</p> | 2021-10-22 08:12:53.500000+00:00 | 2021-10-22 08:12:53.500000+00:00 | null | null | 69,628,951 | <p>I'm trying out the Word2Vec tutorial at tensorflow (see here: <a href="https://www.tensorflow.org/tutorials/text/word2vec" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/text/word2vec</a>)</p>
<p>While all seems to work fine, the output is somewhat unexpected to me, especially the small cluster in the PCA. The 'closet' words in the embedding dimension also don't make much sense, especially compared to other examples.</p>
<p>Am I doing something (trivially) wrong? Or is this expected?</p>
<p>For completeness, I run this in the nvidia-docker image, but also found similar results running cpu only.</p>
<p>Here is the projected embedding showing the cluster.
<a href="https://i.stack.imgur.com/SnTCH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SnTCH.png" alt="enter image description here" /></a></p> | 2021-10-19 10:10:01.183000+00:00 | 2021-10-22 08:12:53.500000+00:00 | null | tensorflow|pca|word2vec|embedding | ['https://arxiv.org/pdf/1412.6568.pdf'] | 1 |
56,836,197 | <p>To avoid the problems, load <code>natbib</code> after <code>babel</code>:</p>
<pre><code>\documentclass{report}
%\documentclass[12pt,twoside]{mitthesis}
% Hebrew
\usepackage[utf8]{inputenc}
\usepackage[english,hebrew]{babel}
\usepackage[top=2cm,bottom=2cm,left=2.5cm,right=2cm]{geometry}
\usepackage{natbib}
\begin{filecontents}{bibliography.bib}
@article{example1,
title={Title1},
author={Author},
journal={arXiv preprint arXiv:1706.04902},
year={2019}
}
@article{example2,
title={Title2},
author={Author},
journal={arXiv preprint arXiv:1706.04902},
year={2019}
}
\end{filecontents}
\begin{document}
% \selectlanguage{english}
\cite{example1,example2}
{
\selectlanguage{english}
\cite{example1,example2}
}
% bibliography
% \selectlanguage{english}
\bibliography{bibliography}
\bibliographystyle{plainnat}
\selectlanguage{english}
\bibliography{bibliography}
\bibliographystyle{plainnat}
\end{document}
</code></pre>
<p><a href="https://www.overleaf.com/read/kgrzpznmzjcz" rel="nofollow noreferrer">https://www.overleaf.com/read/kgrzpznmzjcz</a></p> | 2019-07-01 13:01:23.560000+00:00 | 2019-07-01 13:01:23.560000+00:00 | null | null | 56,826,229 | <p>When including LaTeX foreign language packages:</p>
<pre><code>\usepackage[utf8x]{inputenc}
\usepackage[english,hebrew]{babel}
\usepackage[top=2cm,bottom=2cm,left=2.5cm,right=2cm]{geometry}
</code></pre>
<p>I get an error for the citations:</p>
<p>Example:</p>
<pre><code>Missing number, treated as zero.
<to be read again>
\afterassignment
l.19 ...{example1}}}{\@@number {27}}
A number should have been here; I inserted `0'.
(If you can't figure out why I needed to see a number,
look up `weird error' in the index to The TeXbook.)
</code></pre>
<p>Another Example:</p>
<pre><code>Improper alphabetic constant.
<to be read again>
\afterassignment
l.86 ...15normalized}. \citet{example2}
further improved the resu...
A one-character control sequence belongs after a ` mark.
So I'm essentially inserting \0 here.
! Missing = inserted for \ifnum.
<to be read again>
\afterassignment
l.86 ...15normalized}. \citet{example2}
further improved the resu...
I was expecting to see `<', `=', or `>'. Didn't.
</code></pre>
<p>And the citations are empty.
When I don't include the foreign language packages I don't get the error and the citations are great. </p>
<p>The format I used for citations is \usepackage{natbib} and for the document \documentclass[12pt,twoside]{mitthesis}.</p>
<p>I also tried other formats and get the same error. </p>
<p>bibliography.bib is traditional:</p>
<pre><code>@article{example1,
title={Title1},
author={Author},
journal={arXiv preprint arXiv:1706.04902},
year={2019}
}
@article{example2,
title={Title2},
author={Author},
journal={arXiv preprint arXiv:1706.04902},
year={2019}
}
</code></pre>
<p>The Minimal Working Example is super basic (the minimal not-working example is when you uncomment the Hebrew part):</p>
<pre><code>\documentclass{report}
%\documentclass[12pt,twoside]{mitthesis}
\usepackage{natbib}
% Hebrew
% \usepackage[utf8x]{inputenc}
% \usepackage[english,hebrew]{babel}
% \usepackage[top=2cm,bottom=2cm,left=2.5cm,right=2cm]{geometry}
\begin{document}
%\selectlanguage{english}
\include{introduction} %some text with \cite{example1} and so on...
% bibliography
\bibliography{bibliography}
\bibliographystyle{plainnat}
\end{document}
</code></pre> | 2019-06-30 15:58:10.973000+00:00 | 2019-07-01 13:01:23.560000+00:00 | 2019-07-01 10:51:49.957000+00:00 | latex | ['https://www.overleaf.com/read/kgrzpznmzjcz'] | 1 |
41,841,173 | <p>According to the <a href="https://arxiv.org/pdf/1512.03385v1.pdf" rel="nofollow noreferrer">paper</a> provided in the Keras documentation, you should provide a <code>224 x 224 RGB [0 - 255]</code> image. The actual dimension ordering depends on the backend you use in your Keras installation. </p>
<p>The data preparation was performed as in <a href="https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf" rel="nofollow noreferrer">AlexNet</a> so the mean activation was subtracted from each color channel. The mean vector for RGB is <code>103.939, 116.779, 123.68</code>.</p>
<p>If your color values were to exceed the <code>-255, 255</code> range, it could harm your training because the magnitude of the data would be unknown to the network. The network could still adapt to such changes, but it would usually take more time and make training more chaotic.</p>
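<p>A minimal version of that preparation (assuming channels-last ordering, and using the mean values quoted above) might look like the sketch below; in real code, prefer <code>keras.applications.resnet50.preprocess_input</code>, which does the equivalent for you:</p>

```python
import numpy as np

# Per-channel means quoted above (an assumption of this sketch;
# the library helper also handles the channel-order details).
MEAN = np.array([103.939, 116.779, 123.68])

def preprocess(img):
    """img: array of shape (224, 224, 3) -- or (224, 224) for
    monochrome -- with values in [0, 255]."""
    img = img.astype("float64")
    if img.ndim == 2:                      # monochrome: repeat to 3 channels
        img = np.repeat(img[..., None], 3, axis=-1)
    return img - MEAN                      # zero-center each channel

x = preprocess(np.full((224, 224), 128.0))
print(x.shape)   # (224, 224, 3)
```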
<p>In the case of monochromatic images, a commonly used technique is repeating the same channel 3 times in order to make the dimensions plausible for the network architecture.</p> | 2017-01-25 00:05:03.737000+00:00 | 2017-01-25 00:05:03.737000+00:00 | null | null | 41,821,975 | <p>I'm trying to use Keras's implementation of resnet for a transfer learning task with a quite different set of images (B&W 16 bit). So what does Keras expect as an input? An image with 3 channels and a -127-128 range (that's what I assume a zero-centered 8-bit image would be)? 0-255? What would happen if I pass something outside this range?</p>
<p>Thanks.</p> | 2017-01-24 07:03:38.130000+00:00 | 2017-01-25 19:40:33.297000+00:00 | 2017-01-25 19:40:33.297000+00:00 | python|neural-network|deep-learning|theano|keras | ['https://arxiv.org/pdf/1512.03385v1.pdf', 'https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf'] | 2 |
55,590,314 | <p>First of all, we should understand the purpose of <code>roi pooling</code>: <strong>to get a fixed-size feature representation from proposal regions on the feature maps</strong>. Because the proposed regions come in various sizes, if we directly use the features from the regions, they are in different shapes and therefore cannot be fed to fully-connected layers for prediction. (As we know, fully-connected layers require fixed-shape inputs.) For further reading, <a href="https://stackoverflow.com/questions/43430056/what-is-the-purpose-of-the-roi-layer-in-a-fast-r-cnn">here</a> is a nice answer.</p>
<p>So we understood that <code>roi</code> pooling essentially requires two inputs, <strong>proposed regions</strong> and <strong>feature maps</strong>. As is clearly described in the following <a href="https://arxiv.org/abs/1506.01497" rel="noreferrer">figure</a> <a href="https://i.stack.imgur.com/S71NG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/S71NG.png" alt="figure"></a>.</p>
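<p>Concretely, roi pooling just snaps the proposal onto the feature-map grid, splits it into a fixed grid of bins, and max-pools each bin. A bare-bones single-channel sketch (illustrative, not any framework's actual implementation):</p>

```python
import numpy as np

def roi_max_pool(fmap, roi, out=(2, 2)):
    """fmap: (H, W) feature map; roi: (r0, c0, r1, c1) in feature-map
    coordinates; returns a fixed out[0] x out[1] pooled array."""
    r0, c0, r1, c1 = roi
    region = fmap[r0:r1, c0:c1]
    h, w = region.shape
    rows = np.array_split(np.arange(h), out[0])   # bin row indices
    cols = np.array_split(np.arange(w), out[1])   # bin col indices
    return np.array([[region[np.ix_(r, c)].max() for c in cols]
                     for r in rows])

fmap = np.arange(36).reshape(6, 6)
pooled = roi_max_pool(fmap, (0, 0, 4, 6))   # 4x6 region -> fixed 2x2 output
print(pooled)
```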
<p>So why don't <strong>YOLO</strong> and <strong>SSD</strong> use <code>roi pooling</code>? Simply because they don't use <strong>region proposals</strong>! They are inherently designed differently from models like <strong>R-CNN, Fast R-CNN, Faster R-CNN</strong>; in fact <strong>YOLO</strong> and <strong>SSD</strong> are categorized as <code>one-stage</code> detectors while the R-CNN series (<strong>R-CNN, Fast R-CNN, Faster R-CNN</strong>) are called <code>two-stage</code> detectors, simply because the latter propose regions first and then perform classification and regression.</p>
<p>For <code>one-stage</code> detectors, <strong>predictions (classification and regression) are performed directly from feature maps</strong>. Their method is to divide images into grids, and each grid cell predicts a fixed number of bounding boxes with confidence scores and class scores. The original <strong>YOLO</strong> used a single-scale feature map while <strong>SSD</strong> used multi-scale feature maps, as clearly shown in the following <a href="https://arxiv.org/abs/1512.02325" rel="noreferrer">fig</a> <a href="https://i.stack.imgur.com/xA4qz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xA4qz.png" alt="enter image description here"></a></p>
<p>We can see that with <strong>YOLO</strong> and <strong>SSD</strong>, the final output is a fixed-shape tensor. Therefore they behave very similarly to problems like <code>linear regression</code>, which is why they are called <code>one-stage</code> detectors. </p> | 2019-04-09 10:11:10.590000+00:00 | 2019-04-09 10:22:36.667000+00:00 | 2019-04-09 10:22:36.667000+00:00 | null | 55,587,129 | <p>We know that object detection frameworks like <code>faster-rcnn</code> and <code>mask-rcnn</code> have an <code>roi pooling layer</code> or <code>roi align layer</code>. But why do the ssd and yolo frameworks have no such layers? </p> | 2019-04-09 07:19:08.143000+00:00 | 2019-04-09 10:22:36.667000+00:00 | null | computer-vision|object-detection|yolo|faster-rcnn | ['https://stackoverflow.com/questions/43430056/what-is-the-purpose-of-the-roi-layer-in-a-fast-r-cnn', 'https://arxiv.org/abs/1506.01497', 'https://i.stack.imgur.com/S71NG.png', 'https://arxiv.org/abs/1512.02325', 'https://i.stack.imgur.com/xA4qz.png'] | 5
46,274,246 | <p>FCN uses per-pixel softmax and a multinomial loss. This means that the mask prediction task (the boundaries of the object) and the class prediction task (what is the object being masked) are coupled.<br>
Mask-RCNN decouples these tasks: the existing bounding-box prediction (AKA the localization task) head predicts the class, like faster-RCNN, and the mask branch generates a mask <strong>for each class</strong>, without competition among classes (e.g. if you have 21 classes the mask branch predicts 21 masks instead of FCN's single mask with 21 channels). The loss being used is per-pixel sigmoid + binary loss.<br>
Bottom line, it's Sigmoid in Mask-RCNN vs. Soft-max in FCN.<br>
(<a href="https://arxiv.org/pdf/1703.06870.pdf" rel="noreferrer">See table 2.b. in Mask RCNN paper - Ablation section</a>).</p> | 2017-09-18 07:55:09.010000+00:00 | 2017-09-18 09:06:47.157000+00:00 | 2017-09-18 09:06:47.157000+00:00 | null | 46,272,841 | <p>The paper has clearly mentioned the <strong>classification and regression</strong> losses are identical to the RPN network in the Faster RCNN . Can someone explain the Mask Loss function . How the use FCN to improve ? </p> | 2017-09-18 06:27:20.437000+00:00 | 2020-01-16 03:11:46.480000+00:00 | null | deep-learning|object-detection | ['https://arxiv.org/pdf/1703.06870.pdf'] | 1 |
55,781,699 | <p>Better go for padding zeroes at the beginning, as this paper suggests: <a href="https://arxiv.org/abs/1903.07288" rel="noreferrer">Effects of padding on LSTMs and CNNs</a>.</p>
<blockquote>
<p>Though the post-padding model peaked in efficiency at 6 epochs and started to overfit after that, its accuracy is way less than pre-padding's.</p>
</blockquote>
<p>Check table 1, where the accuracy of pre-padding (padding zeroes at the beginning) is around 80%, but for post-padding (padding zeroes at the end), it is only around 50%.</p> | 2019-04-21 09:54:58.957000+00:00 | 2019-04-21 09:54:58.957000+00:00 | null | null | 44,131,718 | <p>I have a dataset of time series that I use as input to an LSTM-RNN for action anticipation. The time series comprises a time of 5 seconds at 30 fps (i.e. 150 data points), and the data represents the position/movement of facial features.</p>
<p>I sample additional sub-sequences of smaller length from my dataset in order to add redundancy in the dataset and reduce overfitting. In this case I know the starting and ending frame of the sub-sequences.</p>
<p>In order to train the model in batches, all time series need to have the same length, and according to many papers in the literature padding should not affect the performance of the network.</p>
<p>Example:</p>
<p>Original sequence:</p>
<pre><code> 1 2 3 4 5 6 7 8 9 10
</code></pre>
<p>Subsequences:</p>
<pre><code>4 5 6 7
8 9 10
2 3 4 5 6
</code></pre>
<p>considering that my network is trying to <em>anticipate</em> an action (meaning that as soon as P(action) > threshold as it goes from t = 0 to T = tmax, it will predict that action), will it matter where the padding goes? </p>
<p><strong>Option 1</strong>: Zeros go to substitute original values</p>
<pre><code>0 0 0 4 5 6 7 0 0 0
0 0 0 0 0 0 0 8 9 10
0 2 3 4 5 6 0 0 0 0
</code></pre>
<p><strong>Option 2</strong>: all zeros at the end</p>
<pre><code>4 5 6 7 0 0 0 0 0 0
8 9 10 0 0 0 0 0 0 0
2 3 4 5 0 0 0 0 0 0
</code></pre>
<p>Moreover, some of the time series are missing a number of frames, but it is not known which ones they are - meaning that if we only have 60 frames, we don't know whether they are taken from 0 to 2 seconds, from 1 to 3s, etc. These need to be padded before the subsequences are even taken. What is the best practice for padding in this case?</p>
<p>Thank you in advance.</p> | 2017-05-23 10:04:19.643000+00:00 | 2021-11-12 08:47:41.737000+00:00 | null | machine-learning|deep-learning|padding|lstm|recurrent-neural-network | ['https://arxiv.org/abs/1903.07288'] | 1 |
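<p>For what it's worth, the three layouts discussed above can be written down in a few lines of plain Python (my own sketch; Keras users would typically reach for <code>pad_sequences</code> with <code>padding='pre'</code> or <code>'post'</code>, and the position-preserving variant needs the known start frame):</p>

```python
# Option 1 keeps frames at their original time indices, Option 2 is
# post-padding, and pre-padding is the layout the cited paper favours.
def pad_positional(seq, start, total):   # Option 1: zeros on both sides
    return [0] * start + seq + [0] * (total - start - len(seq))

def pad_pre(seq, total):                 # pre-padding: zeros at the beginning
    return [0] * (total - len(seq)) + seq

def pad_post(seq, total):                # Option 2: zeros at the end
    return seq + [0] * (total - len(seq))

sub = [4, 5, 6, 7]                       # frames 4..7 of a 10-frame clip
print(pad_positional(sub, 3, 10))        # [0, 0, 0, 4, 5, 6, 7, 0, 0, 0]
print(pad_pre(sub, 10))                  # [0, 0, 0, 0, 0, 0, 4, 5, 6, 7]
print(pad_post(sub, 10))                 # [4, 5, 6, 7, 0, 0, 0, 0, 0, 0]
```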
14,246,709 | <p>This is what I do when I have to read arbitrary data from a heterogeneous binary file.<br>
Numpy allows you to interpret a bit pattern in an arbitrary way by changing the <a href="http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html" rel="nofollow">dtype</a> of the array.
The Matlab code in the question reads a <code>char</code> and two <code>uint</code> values.</p>

<p>Read this <a href="http://arxiv.org/pdf/1102.1523.pdf" rel="nofollow">paper</a> (easy reading at the user level, not just for scientists) on what one can achieve by changing the dtype, strides, and dimensionality of an array.</p>
<pre><code>import numpy as np
data = np.arange(10, dtype='&lt;i4')  # little-endian 4-byte ints -&gt; 40-byte file
data.tofile('f')

x = np.fromfile('f', dtype='u1')    # view the whole file as raw bytes
print(x.size)
# 40
second = x[8]
print('second', second)
# second 2 (first byte of the 3rd int, little-endian)
total_cycles = x[8:12].view('&lt;u4')  # reinterpret 4 bytes as one uint32
print('total_cycles', total_cycles)
# total_cycles [2]
start_cycle = x[12:16].view('&lt;u4')
print('start_cycle', start_cycle)
# start_cycle [3]
x = x.view('&lt;u4')
print('x', x)
# x [0 1 2 3 4 5 6 7 8 9]
x[3] = 423
print('start_cycle', start_cycle)   # the views share memory with x
# start_cycle [423]
</code></pre>
<pre><code>fid = fopen(fname);
fseek(fid, 8, 'bof');
second = fread(fid, 1, 'schar');
fseek(fid, 100, 'bof');
total_cycles = fread(fid, 1, 'uint32', 0, 'l');
start_cycle = fread(fid, 1, 'uint32', 0, 'l');
</code></pre>
<p>Thanks!</p> | 2013-01-09 19:39:30.370000+00:00 | 2020-08-03 10:07:17.510000+00:00 | null | python|numpy|scipy | ['http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html', 'http://arxiv.org/pdf/1102.1523.pdf'] | 2 |
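<p>A sketch of the direct equivalent (the file layout and values below are dummies I made up so the snippet runs end to end): <code>np.fromfile</code> reads from the current position of an open file object, so a plain <code>f.seek()</code> plays the role of <code>fseek</code>; NumPy 1.17+ also accepts an <code>offset=</code> argument.</p>

```python
import numpy as np

# Build a small dummy file; the offsets (8 and 100) mirror the MATLAB code.
raw = bytearray(108)
raw[8] = 7                                       # the 'schar' at offset 8
raw[100:108] = np.array([42, 3], dtype='<u4').tobytes()
with open('f', 'wb') as fh:
    fh.write(raw)

with open('f', 'rb') as f:
    f.seek(8)                                    # fseek(fid, 8, 'bof')
    second = np.fromfile(f, dtype=np.int8, count=1)[0]
    f.seek(100)                                  # fseek(fid, 100, 'bof')
    total_cycles, start_cycle = np.fromfile(f, dtype='<u4', count=2)

print(second, total_cycles, start_cycle)         # 7 42 3
```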
61,965,358 | <p>There are two options to approach the problem:</p>
<p>1.) You consider the problem as a two-class classification task. Then you train a classifier with samples from the class (images showing a mouse) and samples where the images do not depict a mouse. This approach works in general. To this end, you can still train an SVM, or you can train a neural network. If you have only a few samples, using an SVM or transfer learning with neural networks works well in most cases. Note that for applying an SVM, you have to define discriminative features (like HOG).</p>
<p>A question that arises with this approach is which images to select for the not-mouse class. </p>
<p>2.) You consider the problem as a one-class classification task. In this case, you only need samples of your desired class and the machine learning model is trained to recognize deviations from the class.</p>
<p>There are classical approaches like the one-class SVM, where you train only on positive samples by fitting a hypersphere around the feature vectors of your class.</p>
<p>If you want to apply neural networks, you could use, for example, DROCC (<a href="https://arxiv.org/abs/2002.12718" rel="nofollow noreferrer">https://arxiv.org/abs/2002.12718</a>).</p> | 2020-05-22 23:23:01.447000+00:00 | 2020-05-30 20:44:58.867000+00:00 | 2020-05-30 20:44:58.867000+00:00 | null | 61,930,631 | <p>I have a trained model that determines if an image contains either a cat or a dog. I'm using SVM to classify. I want a new model that determines if an image is a mouse or not. It's different from the first model, which classifies into 2 classes. This new model will return TRUE or FALSE only.</p>
<p>I don't think I can use a classifier model since I have only 1 class: the mouse. I don't intend to use the first model or anything related to it in the 2nd model.</p>
<p>What is the best way to approach this?</p> | 2020-05-21 08:46:27.957000+00:00 | 2020-09-13 20:09:35.933000+00:00 | 2020-09-13 20:09:35.933000+00:00 | python-3.x|machine-learning|computer-vision | ['https://arxiv.org/abs/2002.12718'] | 1 |
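<p>A minimal sketch of the hypersphere idea from option 2.) above (my own toy illustration with made-up features, not a real one-class SVM; in practice you would use something like <code>sklearn.svm.OneClassSVM</code>): fit a center and radius on positive samples only, and flag anything outside as "not a mouse".</p>

```python
import numpy as np

# Fit on the positive ("mouse") class only; the features are random
# stand-ins for whatever descriptor you extract from the images.
rng = np.random.default_rng(0)
mouse_feats = rng.normal(size=(200, 8))

center = mouse_feats.mean(axis=0)
dists = np.linalg.norm(mouse_feats - center, axis=1)
radius = np.quantile(dists, 0.95)                # tolerate 5% training outliers

def is_mouse(x):
    return np.linalg.norm(x - center) <= radius  # TRUE / FALSE, as requested

print(bool(is_mouse(center)))                    # True
print(bool(is_mouse(np.full(8, 10.0))))          # False (far from the sphere)
```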
71,226,381 | <p>This problem is called image outpainting, and you can see very advanced deep-learning-based solutions in papers on <a href="https://paperswithcode.com/" rel="nofollow noreferrer">paperswithcode.com</a>. The current state of the art is given by Basile Van Hoorick in the work <a href="https://arxiv.org/pdf/1912.10960v2.pdf" rel="nofollow noreferrer">Image Outpainting and Harmonization using Generative Adversarial Networks</a>. Usable code is available in his <a href="https://github.com/basilevh/image-outpainting" rel="nofollow noreferrer">github profile</a>. See the <em>Usage</em> section on github for how to use it.</p>
<p>There are 3 models available to use in the <em>Pretrained models</em> section:</p>
<ul>
<li>G_art.pt: Artistic</li>
<li>G_nat.pt: Natural</li>
<li>G_rec.pt: Reconstruction loss only (no adversarial loss)</li>
</ul>
<p>I believe you'll have to use transfer learning to train and use this architecture in your use case.</p>
<p>for example:</p>
<p><strong>before:</strong>
<a href="https://i.stack.imgur.com/NaG2r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NaG2r.png" alt="before" /></a></p>
<p><strong>after - the red line is only for you to see the difference</strong>
<a href="https://i.stack.imgur.com/S8k9B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S8k9B.png" alt="after: (the red line is only for you to see the difference)" /></a></p>
<p><strong>note:</strong> it's important not to change the dimensions of the image, it's being used for geographical needs.</p>
<p>I would like to hear any ideas or suggestions from you, code snippets will be welcomed.</p>
<p>thank you!</p> | 2022-02-22 08:09:55.630000+00:00 | 2022-02-22 18:39:03.923000+00:00 | 2022-02-22 08:33:00.827000+00:00 | python|numpy|opencv|computer-vision | ['https://paperswithcode.com/', 'https://arxiv.org/pdf/1912.10960v2.pdf', 'https://github.com/basilevh/image-outpainting'] | 3 |
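<p>If a simple classical fallback is acceptable instead of the GAN-based approach above, nearest-edge replication can be sketched in a few lines of numpy (my own illustration; here 0 marks "no data" in each row, whereas a real raster would use an alpha or nodata mask, and a second pass over columns would handle top/bottom margins):</p>

```python
import numpy as np

# Per row, copy the first/last data pixel outwards into the empty margin.
img = np.array([[0, 0, 5, 6, 0],
                [0, 7, 8, 9, 0],
                [0, 0, 0, 0, 0]], dtype=float)

out = img.copy()
for row in out:                           # rows are views, edited in place
    idx = np.nonzero(row)[0]
    if idx.size:
        row[:idx[0]] = row[idx[0]]        # stretch the left edge
        row[idx[-1] + 1:] = row[idx[-1]]  # stretch the right edge

print(out)
```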
64,752,011 | <p>@Susmit Argawal may give you a solution; you can try it. I will share my idea of why it always generates a seven.</p>
<p>You didn't do anything wrong; GANs are a fight between the Generator (G) and the Discriminator (D). Think of it philosophically: when you can fool somebody in some way, you will try to do the same thing next time. That problem appears in GANs too: when G finds one way to "minimize" the loss, it learns less from the data and produces samples with less variation (nearly identical; this failure is known as mode collapse).</p>
<p>Some newer GAN models try to reduce this in multiple ways, for example, the "minibatch standard deviation" layer in the <a href="https://arxiv.org/pdf/1710.10196.pdf" rel="nofollow noreferrer">ProGAN paper</a>. There are <a href="https://github.com/soumith/ganhacks" rel="nofollow noreferrer">several tips</a> for training GAN models; following them will save you a lot of time.</p>
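<p>For reference, the "minibatch standard deviation" trick can be sketched like this (my simplified single-group numpy version of the layer described in the ProGAN paper, NCHW shapes): the discriminator receives one extra feature map holding the average per-feature standard deviation across the batch, so it can punish batches where every sample looks the same.</p>

```python
import numpy as np

def minibatch_stddev(x, eps=1e-8):
    std = np.sqrt(x.var(axis=0) + eps)        # per-feature std over the batch
    mean_std = std.mean()                     # one scalar for the whole batch
    n, _, h, w = x.shape
    extra = np.full((n, 1, h, w), mean_std)   # broadcast as a new feature map
    return np.concatenate([x, extra], axis=1)

x = np.random.default_rng(0).normal(size=(4, 3, 8, 8))
print(minibatch_stddev(x).shape)              # (4, 4, 8, 8)
```

<p>A collapsed batch (all samples identical) makes the extra map nearly zero, which the discriminator can learn to exploit.</p>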
<p>Training GANs and tuning their parameters is painful and needs a little luck, so just try as much as possible.</p>
<p>Good luck!</p> | 2020-11-09 12:49:24.057000+00:00 | 2020-11-09 12:49:24.057000+00:00 | null | null | 64,744,345 | <p>i am learning to code GAN model</p>
<p>the code link <a href="https://github.com/saleh1312/python/blob/master/GAN%20model" rel="nofollow noreferrer">here</a></p>
<p>After 100 epochs, when I run the generator, it generates only one digit: "7".</p>
<p>Why does it generate only one digit, although I change the random numbers every time before I give them to the generator? The result is 7 every time.</p>
<p>I use this code to test the generator:</p>
<pre><code>u = np.random.randn(1, 100)  # a batch of 1 noise vector (0 would give an empty batch)
ph = tf.reshape(generator(u)[0], (28, 28)).numpy()
plt.imshow(ph, cmap='gray')
</code></pre>
<p>It gives me images of the digit 7 every time:</p>
<p><a href="https://i.stack.imgur.com/SoZcQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SoZcQ.jpg" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/dS245.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dS245.jpg" alt="enter image description here" /></a></p>
<p>Am I making a mistake in the code, or what?</p>
<p>The generator and discriminator models are available <a href="http://www.mediafire.com/file/574007nmqav02tz/models.rar/file" rel="nofollow noreferrer">here</a> to download if you want to try them :)</p> | 2020-11-09 00:10:03.453000+00:00 | 2020-11-09 12:49:24.057000+00:00 | 2020-11-09 04:16:38.447000+00:00 | python|keras|tensorflow2.0|generative-adversarial-network | ['https://arxiv.org/pdf/1710.10196.pdf', 'https://github.com/soumith/ganhacks'] | 2
70,938,900 | <p>Here are some of my suggestions. Since you have a dataset consisting of two columns [Text, topic_labels], where topic_labels has only 6 categories (for example [plants, animals, birds, insects, etc.]), this is a relatively small task. I recommend you choose a model that focuses on accuracy rather than speed and memory. Accuracy is defined as follows.</p>
<p><a href="https://i.stack.imgur.com/BGN2Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BGN2Y.png" alt="enter image description here" /></a></p>
<p>TP, FP, TN, FN denote true positive, false positive, true negative, and false negative.</p>
<p>I recommend the models stated in this <a href="https://arxiv.org/pdf/2004.03705.pdf" rel="nofollow noreferrer">paper</a>. In general there are two categories:</p>
<ol>
<li>Rule-based methods. These classify text into different categories using a set of pre-defined linguistic rules, and require deep domain knowledge, such as linguistics. One of the most successful rule-based algorithms in topic classification is transformation-based learning (TBL).</li>
<li>Machine learning (data-driven) based methods</li>
</ol>
<p>Since you mentioned deep learning, you want the second category. In the second category, an accurate method is feed-forward networks. Even though they are quite simple, they have achieved high accuracy on many text-classification (or topic-classification, if you will) benchmarks.</p>
<blockquote>
<p>Feed-forward networks view text as a bag of words. For each word, they
learn a vector representation using an embedding model such as
word2vec or Glove, take the vector sum or average of the embeddings as
the representation of the text, pass it through one or more
feed-forward layers, known as Multi-Layer Perceptrons (MLPs), and then
perform classification on the final layer’s representation using a
classifier such as logistic regression, Naïve Bayes, or SVM.</p>
</blockquote>
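<p>A toy sketch of the recipe quoted above (my own illustration: random vectors stand in for word2vec/GloVe embeddings, and the tiny vocabulary, the 6 topic labels, and the untrained weights are all made up):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"plant": 0, "leaf": 1, "wing": 2, "beak": 3}
emb = rng.normal(size=(len(vocab), 16))        # |V| x d embedding table
W, b = rng.normal(size=(16, 6)), np.zeros(6)   # 6 topic labels

def predict_topic(tokens):
    doc = emb[[vocab[t] for t in tokens]].mean(axis=0)  # bag-of-words average
    logits = doc @ W + b                                # one feed-forward layer
    p = np.exp(logits - logits.max())
    return p / p.sum()                                  # softmax over 6 topics

probs = predict_topic(["wing", "beak"])
print(probs.shape)   # (6,) -- a probability for each topic label
```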
<p>However, if you want fancier models and the latest state of the art, you can read the following table.</p>
<p><a href="https://i.stack.imgur.com/bbaIG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bbaIG.png" alt="enter image description here" /></a></p>
<p>Among all the models in the benchmark, the most accurate one is <em>XLNet-Large (ensemble)</em>.</p> | 2022-02-01 10:22:23.517000+00:00 | 2022-02-03 08:05:35.497000+00:00 | 2022-02-03 08:05:35.497000+00:00 | null | 70,938,641 | <p>I have a dataset consisting of two columns [Text, topic_labels].
Topic_labels has 6 categories, for example: [plants, animals, birds, insects, etc.]</p>
<p>I want to build deep learning-based models in order to be able to classify topic_labels.
So far I have implemented both supervised [SVM, Logistic] and unsupervised [topic-LDA, Guided-LDA] approaches in a traditional way by applying both Word2Vec and TF-IDF, but I want to implement state-of-the-art deep learning classification techniques for the text data.</p>
<p>Please suggest the best deep learning model for text topic classification.</p> | 2022-02-01 10:03:12.357000+00:00 | 2022-02-03 08:05:35.497000+00:00 | 2022-02-01 11:01:22.577000+00:00 | python|deep-learning|nlp|topic-modeling|multiclass-classification | ['https://i.stack.imgur.com/BGN2Y.png', 'https://arxiv.org/pdf/2004.03705.pdf', 'https://i.stack.imgur.com/bbaIG.png'] | 3
23,968,075 | <blockquote>
<p>Is C# type system decidable?</p>
</blockquote>
<p>A type system is "decidable" if the compiler is in theory always able to decide whether the program type checks or not in finite time.</p>
<p><strong>The C# type system is not decidable.</strong></p>
<p>C# has "nominal" subtyping -- that is, you give classes and interfaces <em>names</em> and say what the base classes and interfaces are <em>by name</em> when you declare a class.</p>
<p>C# also has generic types, and, as of C# 4, covariance and contravariance of generic interfaces.</p>
<p>Those three things -- nominal subtyping, generic interfaces, and contravariance -- are sufficient to make a type system undecidable (in the absence of other restrictions on the ways that subtypes may mention each other.)</p>
<p>When this answer was originally written in 2014, that was suspected but not known. The history of this discovery is interesting.</p>
<p>First, the designers of the C# generic type system wondered the same thing, and wrote a paper in 2007 describing different ways in which type checking can go wrong, and what restrictions one can put on a nominal subtyping system that make it decidable.</p>
<p><a href="https://www.microsoft.com/en-us/research/publication/on-decidability-of-nominal-subtyping-with-variance/" rel="noreferrer">https://www.microsoft.com/en-us/research/publication/on-decidability-of-nominal-subtyping-with-variance/</a></p>
<p>A more gentle introduction to the problem can be found on my blog, here:</p>
<p><a href="https://ericlippert.com/2008/05/07/covariance-and-contravariance-part-11-to-infinity-but-not-beyond/" rel="noreferrer">https://ericlippert.com/2008/05/07/covariance-and-contravariance-part-11-to-infinity-but-not-beyond/</a></p>
<p><a href="https://cstheory.stackexchange.com/questions/18846/a-simple-decision-problem-whose-decidability-is-not-known/18866#18866">I have written about this subject on SE sites before</a>; a researcher noticed the problem mentioned in that posting and solved it; we now know that nominal subtyping is in general undecidable if there is generic contravariance thrown into the mix. You can encode a Turing Machine into the type system and force the compiler to emulate its operation, and since the question "does this TM halt?" is undecidable, so must type checking be undecidable.</p>
<p>See <a href="https://arxiv.org/abs/1605.05274" rel="noreferrer">https://arxiv.org/abs/1605.05274</a> for the details.</p>
<blockquote>
<p>Is the C# type system sound?</p>
</blockquote>
<p>A type system is "sound" if we are guaranteed that a program which type checks at compile time has no type errors at runtime.</p>
<p><strong>The C# type system is not sound.</strong></p>
<p>There are many reasons why it is not, but my least favourite is array covariance:</p>
<pre><code>Giraffe[] giraffes = new[] { new Giraffe() };
Animal[] animals = giraffes; // This is legal!
animals[0] = new Tiger(); // crashes at runtime with a type error
</code></pre>
<p>The idea here is that most methods that take arrays only read the array, they do not write it, and it is safe to read an animal out of an array of giraffes. Java allows this, and so the CLR allows it because the CLR designers wanted to be able to implement variations on Java. C# allows it because the CLR allows it. The consequence is that <em>every time you write anything into an array of a base class, the runtime must do a check to verify that the array is not an array of an incompatible derived class</em>. The common case gets slower so that the rare error case can get an exception.</p>
<p>That brings up a good point though: C# is at least well-defined as to the consequences of a type error. Type errors at runtime produce sane behaviour in the form of exceptions. It's not like C or C++ where the compiler can and will blithely generate code that does arbitrarily crazy things.</p>
<p>There are a few other ways in which the C# type system is unsound by design.</p>
<ul>
<li><p>If you consider getting a null reference exception to be a kind of runtime type error, then C# pre C# 8 is very unsound in that it does almost nothing to prevent this kind of error. C# 8 has many improvements in support for detecting nullity errors statically, but the null reference type checking is not sound; it has both false positives and false negatives. The idea is that some compile-time checking is better than none, even if it is not 100% reliable.</p>
</li>
<li><p>Many cast expressions allow the user to override the type system and declare "I know this expression will be of a more specific type at runtime, and if I'm wrong, throw an exception". (Some casts mean the opposite: "I know this expression is of type X, please generate code to convert it to an equivalent value of type Y". Those are generally safe.) Since this is a place where the developer is specifically saying that they know better than the type system, one can hardly blame the type system for the resulting crash.</p>
</li>
</ul>
<p>There are also a handful of features that generate cast-like behaviour even though there is no cast in the code. For example, if you have a list of animals you can say</p>
<pre><code>foreach(Giraffe g in animals)
</code></pre>
<p>and if there is a tiger in there, your program will crash. As the specification notes, the compiler simply inserts a cast on your behalf. (If you want to loop over all the giraffes and ignore the tigers, that's <code>foreach(Giraffe g in animals.OfType<Giraffe>())</code>.)</p>
<ul>
<li>The <code>unsafe</code> subset of C# makes all bets off; you can break the rules of the runtime arbitrarily with it. Turning off a safety system <strong>turns a safety system off</strong>, so it should not be surprising that C# is not sound when you turn off soundness checking.</li>
</ul> | 2014-05-31 08:36:07.660000+00:00 | 2021-01-06 15:53:21.320000+00:00 | 2021-01-06 15:53:21.320000+00:00 | null | 23,939,168 | <p>I know that Java's type system is unsound (it fails to type check constructs that are semantically legal) and undecidable (it fails to type check some construct).</p>
<p>For instance, if you copy/paste the following snippet in a class and compile it, the compiler will crash with a <code>StackOverflowException</code> (how apt). This is undecidability.</p>
<pre><code>static class ListX<T> {}
static class C<P> extends ListX<ListX<? super C<C<P>>>> {}
ListX<? super C<Byte>> crash = new C<Byte>();
</code></pre>
<p>Java uses wildcards with type bounds, which are a form of use-site variance. C# on the other hand, uses declaration site variance annotation (with the <code>in</code> and <code>out</code> keywords). It is known that declaration-site variance is weaker than use-site variance (use-site variance can express everything declaration-site variance can, and more -- on the down side, it's much more verbose).</p>
<p>So my question is: Is C# type system sound and decidable? If not, why?</p> | 2014-05-29 17:20:23.353000+00:00 | 2021-04-29 22:03:15.813000+00:00 | 2021-04-29 22:03:15.813000+00:00 | c#|types|covariance|type-systems | ['https://www.microsoft.com/en-us/research/publication/on-decidability-of-nominal-subtyping-with-variance/', 'https://ericlippert.com/2008/05/07/covariance-and-contravariance-part-11-to-infinity-but-not-beyond/', 'https://cstheory.stackexchange.com/questions/18846/a-simple-decision-problem-whose-decidability-is-not-known/18866#18866', 'https://arxiv.org/abs/1605.05274'] | 4 |
10,389,377 | <p>There is a new paper that discusses exactly how to solve your problem: <a href="http://arxiv.org/pdf/1204.6216.pdf" rel="noreferrer">Geodesics in Heat</a>. (Just spotted it and it reminded me of your question.) The idea is that the heat equation can be thought of as describing the diffusion of particles from some central point. Although it models random diffusion, if you run the heat equation for a short enough time then any particles that get from A to B must have followed the shortest path so mathematically you can get an estimate of distance.</p>
<p>The catch is that the proportion of particles that follow a path close to the shortest path is tiny so you have to solve a differential equation that starts large at some region and rapidly ends up small elsewhere. That's not likely to be well behaved numerically. The trick is that for larger t, even though it doesn't measure distance correctly, it does give the gradient of the distance function and this can be used with other methods to get the distance.</p>
<p>TL;DR The linked paper solves distance from every point in a mesh to any subdomain, including finite sets of seed points.</p>
<p>Oh...and I haven't tested it myself.</p> | 2012-04-30 19:24:04.027000+00:00 | 2012-04-30 19:24:04.027000+00:00 | null | null | 6,940,051 | <p>given a mesh made entirely of quads, where every vertex has valence n (with n >= 3), and does not lie on the same plane, I need to find the distance of every vertex in the mesh from a closed set of seed vertices. That is, given one or more mesh vertices (a seed set), I need to build a distance map that stores the distance of each mesh vertex from the seed set (which will have distance 0 from themselves).</p>
<p>after spending some time searching for possible solutions, I got the following picture:</p>
<p>1) it is not trivial, and different approaches have been developed during the last 20 years or so</p>
<p>2) every algorithm that takes into account a 3d domain is restricted to a triangular domain</p>
<p>said that, this is the picture I got:</p>
<p>Dijkstra algorithm may be used as a way to find the shortest path between 2 vertices, following the edges of the mesh, but it is very inaccurate and will lead to an erroneous geodesic. Lanthier (LA) proposed an improvement, but the error is still quite high.</p>
<p>Kimmel and Sethian (KS) proposed a Fast Marching Method -FMM- to solve the Eikonal equation, addressing the issue calculating the propagation of a wave starting at the seed points and recording the time the wave crosses every vertex. Unfortunately this algorithm, while simple enough to implement, still brings a quite inaccurate result, and care has to be taken to avoid obtuse triangles, or treat them in a very special way.
Novotni (NV) addressed the problem of (KS) precision in a single seed scenario, but it is unclear to me if:</p>
<p>a) it still suffers from the obtuse angle problem</p>
<p>b) when used in a multiple seed points scenario, a single FMM has to be implemented for every single seed in order to find the minimum distance for each mesh vertex from each seed (that is, in a 10 seed points scenario, FMM would have to be run 10 times per each mesh vertex)</p>
<p>On the other hand, an exact algorithm -MMP- that leads to 0 error was presented by Mitchell & al. (MI) in '87, and AFAIK has never been really implemented (probably due to the computing power required). Following the same exact approach, Surazhsky & al. (SU) provided an alternative exact algorithm based on MMP that should outperform the latter in terms of speed, while still leading to a correct result. Unfortunately the computing power required for the calculation, even if much less than the original MMP, is still high enough that realtime interactive implementation is not feasible at this time.
(SU) also proposed an approximation of their exact algorithm, which they called flat-exact. It should take the same computational time as FMM while bringing only 1/5th of the error, but:</p>
<p>c) it is unclear to me if it can be used in a multiple seeds scenario.</p>
<p>Other exact shortest path algorithms have been proposed by Chen & Han (CH) and Kapoor (KP), but while the first is absolutely slow, the second is just too complicated to be implemented in practice.</p>
<p>so.. the bottom line is: I need a distance from a set, not the shortest path between 2 points.</p>
<p>if I got it right,</p>
<p>either I use FMM to get a distance of each vertex from a set in a single pass,</p>
<p>-or-</p>
<p>use another algorithm to calculate the geodesic from every mesh vertex to every seed point and find the shortest one (and if I got it right, that would mean calling that algorithm on every seed point for every mesh vertex; that is, on a 10,000-vertex mesh with a seed set of 50 points, I would have to calculate 500,000 geodesics in order to get the 10,000 shortest ones)</p>
<p>Am I missing something? is FMM the only way to deal with multiple seeds distances in a single pass? Someone knows if the flat-exact algorithm may be used in a multiple seed points scenario?</p>
<p>thnx</p>
<p>Notes:</p>
<p>(LA): Lanthier & al. "Approximating weighted shortest paths on polyhedral surfaces"</p>
<p>(KS): Kimmel, Sethian "Computing geodesic paths on manifolds"</p>
<p>(NV): Novotni "Computing geodesic distances on triangular meshes"</p>
<p>(MI): Mitchell & al. "The discrete geodesic problem"</p>
<p>(SU): Surazhsky, Kirsanov & al. "Fast exact and approximate geodesics on meshes"</p>
<p>(CH): Chen, Han, "Shortest paths on polyhedron"</p>
<p>(KP): Kapoor "Efficient computation of geodesic shortest paths"</p> | 2011-08-04 10:48:46.223000+00:00 | 2014-11-25 03:26:49.463000+00:00 | null | algorithm|math|graphics|computational-geometry|graph-algorithm | ['http://arxiv.org/pdf/1204.6216.pdf'] | 1
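<p>As a side note on the "distance from a set, not the shortest path between 2 points" part of the question: whatever per-edge metric is used, a single multi-source pass is enough, since Dijkstra can be seeded with every seed at distance 0. The sketch below (my own, on a toy graph) only yields the edge-path (Lanthier-style) approximation the question already rules out for accuracy, but it shows why no per-seed loop is needed:</p>

```python
import heapq

def multi_source_dijkstra(adj, seeds):
    # adj: {vertex: [(neighbour, edge_weight), ...]}
    dist = {v: 0.0 for v in seeds}          # every seed starts at distance 0
    heap = [(0.0, v) for v in seeds]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 2.0)],
       2: [(1, 2.0), (3, 1.0)], 3: [(2, 1.0)]}
print(multi_source_dijkstra(adj, [0, 3]))   # {0: 0.0, 3: 0.0, 1: 1.0, 2: 1.0}
```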
59,197,779 | <p>First of all: off-topic, as there is <a href="https://softwarerecs.stackexchange.com/">https://softwarerecs.stackexchange.com/</a> for these kinds of questions (but personally I don't mind). </p>
<p>Secondly, you can't prove a negative but <em>if your data is continuous and multidimensional</em>, I would say that probably nothing ticks all of these boxes out-of-the-box. </p>
<p>I have implemented the Kraskov estimator and a bunch of related measures in python as at the time there wasn't anything else publicly available apart from a couple of dubious scripts written in MATLAB on the Mathworks exchange (<a href="https://github.com/paulbrodersen/entropy_estimators/" rel="nofollow noreferrer">my project can be found here</a>). Most of the heavy lifting is either pushed down to C (as I use <code>cKDTree</code> to find nearest neighbours) or to LAPACK/BLAS (i.e. Fortran), so I don't think that there is much to be gained by further optimization. At least for my data sets, the python "overhead" is small compared to everything else.
I don't do any bias correction in the published version of the repository. This is by design as I think that if your interactions between variables are small enough that you need to worry about biases then you <strong>really</strong> need to worry about it. All bias correction methods have a bunch of assumptions baked in and providing anything out of the box does more harm than good, IMO.</p>
<p>Then there is <a href="https://github.com/gregversteeg/NPEET" rel="nofollow noreferrer">NPEET</a>, which is also in python, also build around the Kraskov estimator, and very, very similar to my stuff (so similar in fact that when I first read the source, I thought they had forked my repo until I saw that they first published their code a month before me). </p>
<p>Finally, there is <a href="https://arxiv.org/abs/1801.04062" rel="nofollow noreferrer">MINE</a>, an algorithm developed in Yoshua Bengio's group. Their approach is conceptually very different from Kozachenko/Kraskov, and a very interesting read. They published their method last year but there are already a couple of implementations on github. I haven't had a chance to try it out myself, nor have I looked in detail at any of the implementations, so I don't have an informed opinion on it (other than that I am a big fan of Yoshua Bengio's work in general). The paper looks very promising but I haven't seen an independent evaluation so far (that doesn't mean there isn't one, though). However, they are training a neural network with gradient descent on mini-batches to estimate the mutual information, so I don't expect it to be fast. At all. </p>
<p>For discrete/binned data, there is <a href="https://nemenmanlab.org/~ilya/index.php/Entropy_Estimation" rel="nofollow noreferrer">Ilya Nemenman's NSB estimator</a>, which ticks all boxes apart from your first one, which presumably is your crucial criterion. </p> | 2019-12-05 14:55:05.057000+00:00 | 2019-12-05 14:55:05.057000+00:00 | null | null | 59,159,816 | <p>I am looking for a state of the art library for estimating differential entropy from finite samples. In an ideal world, it would have the following features:</p>
<ul>
<li>Work with real-valued multi-dimensional data</li>
<li>Optimized for high performance (e.g. Implemented in C)</li>
<li>Be aware of biases in sample entropy estimators and correct them (see, e.g. <a href="https://www.cns.nyu.edu/pub/lcv/paninski-infoEst-2003.pdf" rel="nofollow noreferrer">Paninski2003</a>)</li>
<li>Use something better than naive binning estimator (e.g. <a href="https://journals.aps.org/pre/abstract/10.1103/PhysRevE.69.066138" rel="nofollow noreferrer">Kraskov</a> estimator)</li>
</ul>
<p>What are my options?</p> | 2019-12-03 14:55:53.583000+00:00 | 2019-12-05 14:55:05.057000+00:00 | null | entropy | ['https://softwarerecs.stackexchange.com/', 'https://github.com/paulbrodersen/entropy_estimators/', 'https://github.com/gregversteeg/NPEET', 'https://arxiv.org/abs/1801.04062', 'https://nemenmanlab.org/~ilya/index.php/Entropy_Estimation'] | 5 |
57,719,613 | <p>I would recommend using something like Roaring Bitmaps (described in this <a href="https://arxiv.org/pdf/1603.06549.pdf" rel="nofollow noreferrer">paper</a>). There are implementations in Python, Java, and C, and they automatically switch between 3 different formats for optimal storage density. If you want to implement something similar yourself, it essentially combines:</p>
<ol>
<li>Arrays (the default)</li>
<li>Bitsets (good for more dense collections)</li>
<li>Run-length encoding (store the start of every "run" of continuous numbers and the length of that run)</li>
</ol>
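<p>For intuition, the run-length idea from item 3 is tiny to implement — a hypothetical pure-Python sketch (a real Roaring implementation also handles the array/bitset containers and 16-bit chunking):</p>

```python
def rle_encode(sorted_rows):
    """Collapse a sorted list of row ids into (start, run_length) pairs."""
    runs = []
    for r in sorted_rows:
        if runs and r == runs[-1][0] + runs[-1][1]:
            runs[-1][1] += 1          # extends the current run
        else:
            runs.append([r, 1])       # starts a new run
    return [tuple(run) for run in runs]

def rle_decode(runs):
    return [start + i for start, length in runs for i in range(length)]

print(rle_encode([1, 2, 3, 4, 5]))   # [(1, 5)] -- the "far" posting list
print(rle_encode([1, 4, 5]))         # [(1, 1), (4, 2)] -- the "car" posting list
```

<p>A contiguous 10M-row match then stores as a single pair instead of 10M integers, which is exactly the degenerate low-cardinality case this question worries about.</p>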
<p>The concept works for 32-bit unsigned integers, which could comfortably contain information on 10M rows without any additional work necessary for customizing your own solution.</p> | 2019-08-30 02:01:00.300000+00:00 | 2019-08-30 02:01:00.300000+00:00 | null | null | 57,719,184 | <p>I have a prefix-to-rows mapping that looks something like this:</p>
<pre><code>{
"x": [9],
"far": [1,2,3,4,5],
"car": [1,4,5]
}
</code></pre>
<p>The key is the indexed search term and the array is a sorted list of the rows that have a match. Simple enough, a basic inverted index. And for this question, let's suppose <code>a-z0-9</code> characters with a maximum length of three
characters (upper bound of 36+(36^2)+(36^3)=47,988 combinations, though probably much less in practice, let's say around 10k combinations).</p>
<p>However, the tricky part is that I may have ~ 10M rows, and low-cardinality items could have a (meaningless) list of all 10M rows. In my calculations a 10M-row array itself comes out to 88.9MB uncompressed. </p>
<p>What would be a suggested way to compress these often-repeated arrays? It seems that this must be a very common occurrence in search, and I'd like to learn a bit more about the best method of handling large and repeating prefix maps, such as with the above.</p> | 2019-08-30 00:45:38.237000+00:00 | 2019-08-30 06:13:25.987000+00:00 | 2019-08-30 06:13:25.987000+00:00 | c|algorithm|search|compression|inverted-index | ['https://arxiv.org/pdf/1603.06549.pdf'] | 1 |
45,970,236 | <p>A bunch of things have been called 'doc2vec', but it seems to most-often refer to the 'Paragraph Vector' technique from Le and Mikolov.</p>
<p>The original <a href="https://cs.stanford.edu/~quocle/paragraph_vector.pdf" rel="noreferrer">'Paragraph Vector' paper</a> describes evaluating it on three datasets:</p>
<ul>
<li>'Stanford Sentiment Treebank': 11,825 sentences of movie-reviews (which were further broken into 239,232 fragment-phrases of a few words each)</li>
<li>'IMDB Dataset': 100,000 movie-reviews (often of a few hundred words each)</li>
<li>Search-result 'snippet' paragraphs: 10,000,000 paragraphs, collected from the top-10 Google search results for each of the top 1,000,000 most-common queries</li>
</ul>
<p>The 1st two are publicly available, so you can also review their total sizes in words, typical document sizes, and vocabularies. (Note, though, that no one has been able to fully-reproduce that paper's sentiment-classification results on either of those first two datasets, implying some missing info or error in their reporting. It's possible to get close on the IMDB dataset.)</p>
<p>A <a href="https://arxiv.org/abs/1507.07998" rel="noreferrer">followup paper</a> applied the algorithm to discovering topical-relationships in the datasets:</p>
<ul>
<li>Wikipedia: 4,490,000 article body-texts</li>
<li>Arxiv: 886,000 academic-paper texts extracted from PDFs</li>
</ul>
<p>So the corpuses used in those two early papers ranged from tens-of-thousands to millions of documents, and document sizes from a few word phrases to thousands-of-word articles. (But those works did not necessarily mix wildly-differently-sized documents.)</p>
<p>In general, word2vec/paragraph-vector techniques benefit from a lot of data and variety of word-contexts. I wouldn't expect good results without at least tens-of-thousands of documents. Documents longer than a few words each work much better. Results may be harder to interpret if wildly-different-in-size or -kind documents are mixed in the same training – such as mixing tweets and books. </p>
<p>But you really have to evaluate it with your corpus and goals, because what works with some data, for some purposes, may not be generalizable to very-different projects. </p> | 2017-08-30 21:51:04.037000+00:00 | 2017-09-01 18:31:39.770000+00:00 | 2017-09-01 18:31:39.770000+00:00 | null | 45,959,618 | <p>How does doc2vec perform when trained on different sized datasets? There is no mention of dataset size in the original corpus, so I am wondering what is the minimum size required to get good performance out of doc2vec. </p> | 2017-08-30 11:48:23.557000+00:00 | 2017-09-01 18:31:39.770000+00:00 | null | nlp|doc2vec | ['https://cs.stanford.edu/~quocle/paragraph_vector.pdf', 'https://arxiv.org/abs/1507.07998'] | 2 |
72,876,253 | <p>The angst about <code>Prelude.seq</code> (frequently in association with <code>⊥</code>) is mostly attributed to a few reasons:</p>
<ol>
<li><p>it weakens <em>extensionality</em></p>
<pre><code> -- (\ x -> f x) == f
seq (\ x -> ⊥ x) y = y
seq ⊥ y = ⊥
</code></pre>
</li>
<li><p>it weakens <em>parametricity</em></p>
<pre><code> -- foldr k z (build g) == g k z
foldr ⊥ 0 (build seq) = foldr ⊥ 0 (seq (:) [])
= foldr ⊥ 0 []
= 0
seq ⊥ 0 = ⊥
</code></pre>
</li>
<li><p>it invalidates various laws e.g. those for the monadic interface</p>
<pre><code> -- m >>= return == m
seq (⊥ >>= return :: State s a) True = True
seq (⊥ :: State s a) True = ⊥
</code></pre>
</li>
</ol>
<p>However:</p>
<ol>
<li><p>Extensionality is also weakened by the combination of call-by-need semantics and the use of weak-head normal form, <a href="https://www.pauldownen.com/publications/first-class-call-stacks.pdf" rel="nofollow noreferrer">as noted by</a> Philip Johnson-Freyd, Paul Downen and Zena Ariola.</p>
</li>
<li><p>Parametricity is similarly weakened by the combination of GADTs and the corresponding map functions satisfying the functor laws, <a href="https://arxiv.org/pdf/2105.03389v3" rel="nofollow noreferrer">as shown by</a> Patricia Johann, Enrico Ghiorzi, and Daniel Jeffries.</p>
</li>
<li><p>The presence of the polymorphic fixed-point operator:</p>
<pre><code>yet :: (a -> a) -> a
yet f = f (yet f)
</code></pre>
<p>also imposes restrictions on parametricity, <a href="https://dl.acm.org/doi/pdf/10.1145/99370.99404" rel="nofollow noreferrer">as specified by</a> Philip Wadler.</p>
</li>
<li><p>That certain combinations of operators and values can invalidate basic logical or mathematical rules isn't new e.g. division and zero. Like division, <a href="https://www.janis-voigtlaender.eu/papers/TamingSelectiveStrictness.pdf" rel="nofollow noreferrer">selective strictness</a> is necessary - algorithms for determining strictness (a nontrivial property) are subject to <a href="http://kilby.stanford.edu/%7Ervg/154/handouts/Rice.html" rel="nofollow noreferrer">Rice's theorem</a>: they may not always succeed, resulting in unexpectedly-excessive memory usage (i.e. space leaks) in programs. As for the choice to use primitive operators or annotations (for <a href="https://downloads.haskell.org/%7Eghc/7.8.4/docs/html/users_guide/bang-patterns.html" rel="nofollow noreferrer">patterns</a> or <a href="https://www.haskell.org/onlinereport/haskell2010/haskellch4.html" rel="nofollow noreferrer">types</a>), that usually doesn't change the adverse impact on important theorems and laws.</p>
<p>Another option could be to use an <a href="https://www.cs.nott.ac.uk/%7Epszgmh/clairvoyant.pdf" rel="nofollow noreferrer">augmented form</a> of call-by-value semantics, but that presumes the method of augmentation being used is sufficiently <em>"well-behaved"</em>.</p>
</li>
</ol>
<hr />
<p>At some time in the future, perhaps there will be one or more advances in the computing sciences that resolve the matter. Until then, the most practical option is to manage the awkward interaction between useful laws and theorems and selective strictness, along with other <a href="https://d-nb.info/1045276316/34" rel="nofollow noreferrer">real-world programming features</a>.</p> | 2012-10-02 09:04:18.460000+00:00 | 2022-08-27 10:48:48.290000+00:00 | 2022-08-27 10:48:48.290000+00:00 | null | 12,687,392 | <p>Haskell has a magical function named <code>seq</code>, which takes an argument of any type and reduces it to <em>Weak Head Normal Form</em> (WHNF).</p>
<p>I've read a couple of sources [not that I can remember who they were <em>now</em>...] which claim that "polymorphic <code>seq</code> is bad". In what way are they "bad"?</p>
<p>Similarly, there is the <code>rnf</code> function, which reduces an argument to <em>Normal Form</em> (NF). But <em>this</em> is a class method; it does not work for arbitrary types. It seems "obvious" to me that one could alter the language spec to provide this as a built-in primitive, similar to <code>seq</code>. This, presumably, would be "even more bad" than just having <code>seq</code>. In what way is this so?</p>
<p>Finally, somebody suggested that giving <code>seq</code>, <code>rnf</code>, <code>par</code> and similars the same type as the <code>id</code> function, rather than the <code>const</code> function as it is now, would be an improvement. How so?</p> | 2012-10-02 09:04:18.460000+00:00 | 2022-08-27 10:48:48.290000+00:00 | 2012-10-11 20:43:20.577000+00:00 | haskell|lazy-evaluation | ['https://www.pauldownen.com/publications/first-class-call-stacks.pdf', 'https://arxiv.org/pdf/2105.03389v3', 'https://dl.acm.org/doi/pdf/10.1145/99370.99404', 'https://www.janis-voigtlaender.eu/papers/TamingSelectiveStrictness.pdf', 'http://kilby.stanford.edu/%7Ervg/154/handouts/Rice.html', 'https://downloads.haskell.org/%7Eghc/7.8.4/docs/html/users_guide/bang-patterns.html', 'https://www.haskell.org/onlinereport/haskell2010/haskellch4.html', 'https://www.cs.nott.ac.uk/%7Epszgmh/clairvoyant.pdf', 'https://d-nb.info/1045276316/34'] | 9 |
70,841,691 | <p>The part of the instrument audio that gives its distinctive sound, independently from the pitch played, is called the <a href="https://en.wikipedia.org/wiki/Timbre" rel="nofollow noreferrer">timbre</a>. The modern approach to get a vector representation, would be to train a neural network. This kind of learned vector representation is often called to create an <em>audio embedding</em>.</p>
<p>An example implementation of this is described in <a href="https://arxiv.org/abs/1906.08152" rel="nofollow noreferrer">Learning Disentangled Representations Of Timbre And Pitch For Musical Instrument Sounds Using Gaussian Mixture Variational Autoencoders</a> (2019).</p> | 2022-01-24 23:21:55.073000+00:00 | 2022-01-24 23:21:55.073000+00:00 | null | null | 70,841,114 | <p>I'm looking to write a function that takes an audio signal (assuming it contains a single instrument playing), from which I would like to extract the instrument-like features into a vector space. So in theory, if I had two signals with similar-sounding instruments (such as two pianos), their respective vectors should be fairly similar (by Euclidean distance/cosine similarity/etc.). How would one go about doing this?</p>
<p>What I've tried: I'm currently extracting (and temporally averaging) the chroma energy, spectral contrast, MFCC (and their 1st and 2nd derivatives), as well as the Mel spectrogram and concatenating them into a single representation vector:</p>
<pre><code>import librosa
import torch
import torchaudio

# expects a torch tensor (dimensions: [1, num_samples],
# as returned by torchaudio.load()).
# assume all signals contain a constant number of samples, sampled at 44.1kHz
def extract_instrument_features(signal, sr):
    # define hyperparameters:
    FRAME_LENGTH = 1024
    HOP_LENGTH = 512
    # librosa expects a 1-D numpy array:
    signal_np = signal[0].numpy()
    # compute and temporally average the chroma energy:
    ce = torch.Tensor(librosa.feature.chroma_cens(y=signal_np, sr=sr))
    ce = torch.mean(ce, dim=1)
    # compute and temporally average the spectral contrast:
    spc = torch.Tensor(librosa.feature.spectral_contrast(y=signal_np, sr=sr))
    spc = torch.mean(spc, dim=1)
    # extract MFCC and its first & second derivatives (as numpy, then to torch):
    mfcc_np = librosa.feature.mfcc(y=signal_np, sr=sr, n_mfcc=13)
    mfcc = torch.Tensor(mfcc_np)
    mfcc_1st = torch.Tensor(librosa.feature.delta(mfcc_np))
    mfcc_2nd = torch.Tensor(librosa.feature.delta(mfcc_np, order=2))
    # temporal averaging of MFCCs:
    mfcc = torch.mean(mfcc, dim=1)
    mfcc_1st = torch.mean(mfcc_1st, dim=1)
    mfcc_2nd = torch.mean(mfcc_2nd, dim=1)
    # define the mel spectrogram transform:
    mel_spectrogram = torchaudio.transforms.MelSpectrogram(
        sample_rate=sr,
        n_fft=FRAME_LENGTH,
        hop_length=HOP_LENGTH,
        n_mels=64
    )
    # extract the mel spectrogram and average over the time axis (dim 2):
    ms = mel_spectrogram(signal)
    ms = torch.mean(ms, dim=2)[0]
    # concatenate and return the feature vector (mel features included):
    features = [ce, spc, mfcc, mfcc_1st, mfcc_2nd, ms]
    return torch.cat(features).numpy()
</code></pre> | 2022-01-24 22:09:24.960000+00:00 | 2022-01-24 23:21:55.073000+00:00 | null | data-extraction|audio-processing|data-preprocessing | ['https://en.wikipedia.org/wiki/Timbre', 'https://arxiv.org/abs/1906.08152'] | 2 |
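<p>Once two such feature vectors exist, the comparison step mentioned in the question above is straightforward — an illustrative sketch:</p>

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # ~ 1.0 (identical)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))            # 0.0 (orthogonal)
```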
47,943,662 | <p>I can't run your model, because neither the question nor the GitHub repo contains the data. That's why I am only 90% sure of my answer.</p>
<p>But I think the main problem of your network is the <code>sigmoid</code> activation function after the dense layers. I assume it will train well when there are just two of them, but four is too much.</p>
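<p>As a rough illustration of the vanishing-gradient intuition: the sigmoid's derivative never exceeds 0.25, so each extra sigmoid layer can shrink the backpropagated gradient by a factor of 4 or more:</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

peak = sigmoid_grad(0.0)   # the derivative is maximal at x = 0
print(peak)                # 0.25
print(peak ** 4)           # best case after 4 sigmoid layers: 0.00390625
```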
<p>Unfortunately, NVidia's <a href="https://arxiv.org/pdf/1604.07316v1.pdf" rel="nofollow noreferrer">End to End Learning for Self-Driving Cars</a> paper doesn't specify it explicitly, but these days the default activation is no longer <code>sigmoid</code> (as it once was), but <code>relu</code>. See <a href="https://stats.stackexchange.com/q/176794/130598">this discussion</a> if you're interested why that is so. So the solution I'm proposing is try this model:</p>
<pre class="lang-py prettyprint-override"><code>from keras.models import Sequential
from keras.layers import Lambda, Cropping2D, Conv2D, Flatten, Dense

model = Sequential()
model.add(Lambda(lambda x: x/255.0 - 0.5, input_shape = (160,320,3)))
model.add(Cropping2D(cropping=((70,25),(0,0))))
model.add(Conv2D(24, (5, 5), strides=(2, 2), activation="relu"))
model.add(Conv2D(36, (5, 5), strides=(2, 2), activation="relu"))
model.add(Conv2D(48, (5, 5), strides=(2, 2), activation="relu"))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation="relu"))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation="relu"))
model.add(Flatten())
model.add(Dense(1164, activation="relu"))
model.add(Dense(100, activation="relu"))
model.add(Dense(50, activation="relu"))
model.add(Dense(10, activation="relu"))
model.add(Dense(1))
</code></pre>
<p>It mimics NVidia's network architecture and does not suffer from vanishing gradients.</p> | 2017-12-22 15:12:27.403000+00:00 | 2017-12-22 15:12:27.403000+00:00 | null | null | 47,846,824 | <p>How can I prevent a lazy Convolutional Neural Network? I end up with a ‘lazy CNN’ after training it with KERAS. Whatever the input is, the output is constant. What do you think the problem is? </p>
<p>I am trying to repeat the experiment from NVIDIA’s End to End Learning for Self-Driving Cars (<a href="https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/" rel="nofollow noreferrer">the paper</a>). Of course, I do not have a real car, but Udacity’s <a href="https://github.com/udacity/self-driving-car-sim" rel="nofollow noreferrer">simulator</a>. The simulator generates images of the view in front of the car. </p>
<p><a href="https://i.stack.imgur.com/9Z8P3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Z8P3.jpg" alt="enter image description here"></a></p>
<p>A CNN receives the image, and it outputs the steering angle needed to keep the car on the track. The goal of the game is to keep the simulated car running on the track safely. It is not very difficult. </p>
<p>The strange thing is that sometimes I end up with a lazy CNN after training with KERAS, one which gives constant steering angles. The simulated car will go off the track, but the output of the CNN does not change. This happens especially as the network gets deeper, e.g. with the CNN in <a href="https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/" rel="nofollow noreferrer">the paper</a>. </p>
<p>If I use a CNN like this, I can get a useful model after training.</p>
<pre><code>model = Sequential()
model.add(Lambda(lambda x: x/255.0 - 0.5, input_shape = (160,320,3)))
model.add(Cropping2D(cropping=((70,25),(0,0))))
model.add(Conv2D(24, 5, strides=(2, 2)))
model.add(Activation('relu'))
model.add(Conv2D(36, 5, strides=(2, 2)))
model.add(Activation('relu'))
model.add(Conv2D(48, 5, strides=(2, 2)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(50))
model.add(Activation('sigmoid'))
model.add(Dense(10))
model.add(Activation('sigmoid'))
model.add(Dense(1))
</code></pre>
<p>But if I use a deeper CNN, I am more likely to end up with a lazy CNN.
Specifically, if I use a CNN like NVIDIA’s, I end up with a lazy CNN after almost every training run. </p>
<pre><code>model = Sequential()
model.add(Lambda(lambda x: x/255.0 - 0.5, input_shape = (160,320,3)))
model.add(Cropping2D(cropping=((70,25),(0,0))))
model.add(Conv2D(24, 5, strides=(2, 2)))
model.add(Activation('relu'))
model.add(Conv2D(36, 5, strides=(2, 2)))
model.add(Activation('relu'))
model.add(Conv2D(48, 5, strides=(2, 2)))
model.add(Activation('relu'))
model.add(Conv2D(64, 3, strides=(1, 1)))
model.add(Activation('relu'))
model.add(Conv2D(64, 3, strides=(1, 1)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(1164))
model.add(Activation('sigmoid'))
model.add(Dense(100))
model.add(Activation('sigmoid'))
model.add(Dense(50))
model.add(Activation('sigmoid'))
model.add(Dense(10))
model.add(Activation('sigmoid'))
model.add(Dense(1))
</code></pre>
<p>I use ‘relu’ for the convolution layers, and the activation function for the fully connected layers is ‘sigmoid’. I have tried changing the activation functions, but there was no effect. </p>
<p>Here is my analysis. I do not believe there is a bug in my program, because I can successfully drive the car with the same code and a simpler CNN. I think the reason is the simulator or the structure of the neural network. In a real self-driving car, the training signal, that is the steering angle, contains noise, because the driver never holds the wheel perfectly still on a real road. But in the simulator, the training signal is very clean: almost 60% of the steering angles are zero. The optimizer can easily do its job by driving the output of the CNN close to zero. It seems the optimizer is lazy too. However, when we really want the CNN to output something, it still gives zeros. So I added small noise to these zero steering angles. The chance that I get a lazy CNN is smaller, but it does not disappear. </p>
<p>What do you think about my analysis? Are there other strategies that I can use? I am wondering whether similar problems have been solved in the long history of CNN research.</p>
<p><strong>resource</strong>:</p>
<p>The related files have been uploaded to <a href="https://github.com/BlueBirdHouse/CarND-Behavioral-Cloning-P3/" rel="nofollow noreferrer">GitHub</a>. You can repeat the entire experiment with these files.</p> | 2017-12-16 14:47:42.060000+00:00 | 2017-12-22 15:12:27.403000+00:00 | 2017-12-18 01:25:08.857000+00:00 | machine-learning|computer-vision|keras|conv-neural-network|robotics | ['https://arxiv.org/pdf/1604.07316v1.pdf', 'https://stats.stackexchange.com/q/176794/130598'] | 2 |
51,972,785 | <p>The definition of consensus in Hyperledger Fabric is quite different from that of a traditional consensus protocol such as PBFT; consensus in Hyperledger Fabric has a wider meaning. Its core architecture is called "Execute-Order-Validate". And unlike PBFT, where all nodes can play the same role, Hyperledger Fabric separates execution nodes from ordering nodes. The consensus you mentioned is involved only in the ordering phase, meaning the ordering nodes run a consensus protocol such as Kafka or PBFT.
In short, I think you are confusing consensus in the ordering phase with consensus in the whole protocol (the transaction flow).
For more detail, I recommend <a href="https://arxiv.org/abs/1801.10228" rel="nofollow noreferrer">this paper</a>.</p> | 2018-08-22 18:09:56.143000+00:00 | 2018-08-22 18:09:56.143000+00:00 | null | null | 50,997,481 | <p>In Hyperledger Fabric, the orderer takes care of replicating blocks through atomic broadcast after transactions are endorsed and blocks are validated, thus achieving consensus. Why, then, do we need consensus algorithms such as Kafka, PBFT, etc. to be integrated with Hyperledger Fabric? </p> | 2018-06-23 02:35:28.933000+00:00 | 2018-08-22 18:09:56.143000+00:00 | 2018-06-25 06:57:01.443000+00:00 | hyperledger|consensus | ['https://arxiv.org/abs/1801.10228'] | 1
70,940,387 | <p>Your first approach should be to try the pre-trained weights. Generally, it works well. However, if you are working on a different domain (e.g.: medicine), then you'll need to fine-tune on data from the new domain. Again, you might be able to find pre-trained models for such domains (e.g.: <a href="https://arxiv.org/abs/1901.08746" rel="nofollow noreferrer">BioBERT</a>).</p>
<p>For adding a layer, there are slightly different approaches depending on your task. E.g.: for question answering, have a look at the <a href="https://arxiv.org/abs/1911.04118" rel="nofollow noreferrer">TANDA</a> paper (Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection). It is a very nice, easily readable paper which explains the transfer and adaptation strategy. Again, Hugging Face has modified and pre-trained models for most of the standard tasks.</p> | 2022-02-01 12:13:22.393000+00:00 | 2022-02-01 12:13:22.393000+00:00 | null | null | 70,939,904 | <p>So if I understand correctly there are mainly two ways to adapt BERT to a specific task: fine-tuning (all weights are changed, even pretrained ones) and feature-based (pretrained weights are frozen). However, I am confused.</p>
<ol>
<li>When to use which one? If you have unlabeled data (unsupervised learning), should you then use fine-tuning?</li>
<li>If I want to fine-tuned BERT, isn't the only option to do that using masked language model and next sentence prediction? And also: is it necessary to put another layer of neural network on top?</li>
</ol>
<p>Thank you.</p> | 2022-02-01 11:35:17.343000+00:00 | 2022-02-01 12:13:22.393000+00:00 | null | nlp|bert-language-model | ['https://arxiv.org/abs/1901.08746', 'https://arxiv.org/abs/1911.04118'] | 2 |
40,539,054 | <p>From <a href="https://stats.stackexchange.com/q/164876/12359">Tradeoff batch size vs. number of iterations to train a neural network</a>:</p>
<p>From Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. <a href="https://arxiv.org/abs/1609.04836" rel="nofollow noreferrer">https://arxiv.org/abs/1609.04836</a> :</p>
<blockquote>
<p>The stochastic gradient descent method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, usually 32--512 data points, is sampled to compute an approximation to the gradient. <strong>It has been observed in practice that when using a larger batch there is a significant degradation in the quality of the model, as measured by its ability to generalize.</strong> There have been some attempts to investigate the cause for this generalization drop in the large-batch regime, however the precise answer for this phenomenon is, hitherto unknown. In this paper, we present ample numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions -- and that sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We also discuss several empirical strategies that help large-batch methods eliminate the generalization gap and conclude with a set of future research ideas and open questions.</p>
<p>[…]</p>
<p><strong>The lack of generalization ability is due to the fact that large-batch methods tend to converge to <em>sharp minimizers</em> of the training function</strong>. These minimizers are characterized by large positive eigenvalues in $\nabla^2 f(x)$ and tend to generalize less well. In contrast, small-batch methods converge to flat minimizers characterized by small positive eigenvalues of $\nabla^2 f(x)$. We have observed that the loss function landscape of deep neural networks is such that large-batch methods are almost invariably attracted to regions with sharp minima and that, unlike small batch methods, are unable to escape basins of these minimizers.</p>
<p>[…]</p>
<p><a href="https://i.stack.imgur.com/30I6a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/30I6a.png" alt="enter image description here"></a></p>
</blockquote>
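<p>Beyond the generalization question, there is also a simple statistical reason for diminishing returns from larger batches: averaging m per-example gradients only shrinks the noise in the estimate like 1/sqrt(m). A toy simulation of that scaling:</p>

```python
import numpy as np

rng = np.random.default_rng(42)
# pretend per-example "gradients": i.i.d. with unit variance around
# the true gradient (here 0), as in a stochastic objective
per_example_grads = rng.normal(size=100_000)

for m in (10, 100, 1000):
    # resample many minibatches and look at the spread of the batch-mean gradient
    batches = rng.choice(per_example_grads, size=(2000, m))
    print(m, batches.mean(axis=1).std())  # shrinks roughly like 1/sqrt(m)
```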
<p>Also, some good insights from <a href="https://www.quora.com/profile/Ian-Goodfellow" rel="nofollow noreferrer">Ian Goodfellow</a>
answering <a href="https://www.quora.com/Deep-learning-why-do-not-use-the-whole-training-set-to-compute-the-gradient" rel="nofollow noreferrer">why do not use the whole training set to compute the gradient?
</a> on Quora:</p>
<blockquote>
<p>The size of the learning rate is limited mostly by factors like how
curved the cost function is. You can think of gradient descent as
making a linear approximation to the cost function, then moving
downhill along that approximate cost. If the cost function is highly
non-linear (highly curved) then the approximation will not be very
good for very far, so only small step sizes are safe. You can read
more about this in Chapter 4 of the deep learning textbook, on
numerical computation:
<a href="http://www.deeplearningbook.org/contents/numerical.html" rel="nofollow noreferrer">http://www.deeplearningbook.org/contents/numerical.html</a></p>
<p>When you put
m examples in a minibatch, you need to do O(m) computation and use
O(m) memory, but you reduce the amount of uncertainty in the gradient
by a factor of only O(sqrt(m)). In other words, there are diminishing
marginal returns to putting more examples in the minibatch. You can
read more about this in Chapter 8 of the deep learning textbook, on
optimization algorithms for deep learning:
<a href="http://www.deeplearningbook.org/contents/optimization.html" rel="nofollow noreferrer">http://www.deeplearningbook.org/contents/optimization.html</a></p>
<p>Also, if
you think about it, even using the entire training set doesn’t really
give you the true gradient. The true gradient would be the expected
gradient with the expectation taken over all possible examples,
weighted by the data generating distribution. Using the entire
training set is just using a very large minibatch size, where the size
of your minibatch is limited by the amount you spend on data
collection, rather than the amount you spend on computation.</p>
</blockquote> | 2016-11-10 23:53:11.727000+00:00 | 2016-11-10 23:53:11.727000+00:00 | 2017-04-13 12:44:13.837000+00:00 | null | 40,535,679 | <p>I am doing a neural network regression with 4 features. How do I determine the size of mini-batch for my problem? I see people use 100 ~ 1000 batch size for computer vision with 32*32*3 features for each image, does that mean I should use batch size of 1 million? I have billions of data and tens of GB of memory so there is no hard requirement for me not to do that.</p>
<p>I also observed using a mini-batch with size ~ 1000 makes the convergence much faster than batch size of 1 million. I thought it should be the other way around, since the gradient calculated with a larger batch size is most representative of the gradient of the whole sample? Why does using mini-batch make the convergence faster?</p> | 2016-11-10 19:38:08.777000+00:00 | 2016-11-10 23:53:11.727000+00:00 | null | neural-network|gradient-descent | ['https://stats.stackexchange.com/q/164876/12359', 'https://arxiv.org/abs/1609.04836', 'https://i.stack.imgur.com/30I6a.png', 'https://www.quora.com/profile/Ian-Goodfellow', 'https://www.quora.com/Deep-learning-why-do-not-use-the-whole-training-set-to-compute-the-gradient', 'http://www.deeplearningbook.org/contents/numerical.html', 'http://www.deeplearningbook.org/contents/optimization.html'] | 7 |
70,028,164 | <p>For a better phrase embedding, you can try Phrase-BERT for phrase embeddings.</p>
<ul>
<li>Paper: <em>Wang, Shufan, Laure Thompson, and Mohit Iyyer. "<a href="https://arxiv.org/pdf/2109.06304.pdf" rel="nofollow noreferrer">Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration.</a>" EMNLP 2021</em>.</li>
<li><a href="https://github.com/sf-wa-326/phrase-bert-topic-model" rel="nofollow noreferrer">Code</a>.</li>
</ul>
<p>The paper also mentions related previous work, e.g. SentBERT and SpanBERT.</p>
<p>These embeddings are not conditioned on the surrounding context, though, I believe.</p> | 2021-11-18 23:47:51.460000+00:00 | 2021-11-18 23:47:51.460000+00:00 | null | null | 62,595,908 | <p>I use <a href="https://github.com/UKPLab/sentence-transformers" rel="nofollow noreferrer">https://github.com/UKPLab/sentence-transformers</a> to obtain sentence embeddings from BERT. Using this I am able to obtain embeddings for sentences or phrases. For example, I can get the embedding of a sentence like <strong>"system not working given to service center but no response on replacement"</strong>. I can also get the embedding of a phrase like <strong>"no response"</strong>.</p>
<p>However I want to get embedding of <strong>"no response"</strong> in the context of <strong>"system not working given to service center but no response on replacement"</strong>. Any pointers on how to obtain this will be helpful. Thanks in advance.</p>
<p>I am trying to do this because the phrase <strong>"no response"</strong> has different contexts in different sentences. For example the context of "no response" is different in the following two sentences:
<strong>"system not working given to service center but no response on replacement"
"we tried recovery procedure on the patient but there was no response"</strong></p> | 2020-06-26 13:33:32.687000+00:00 | 2021-11-18 23:47:51.460000+00:00 | null | nlp|bert-language-model | ['https://arxiv.org/pdf/2109.06304.pdf', 'https://github.com/sf-wa-326/phrase-bert-topic-model'] | 2 |
65,260,013 | <p><strong>Summary of your results:</strong></p>
<ul>
<li>a) CNN with Softmax activation function -> accuracy ~ 0.50, loss ~ 7.60</li>
<li>b) CNN with Sigmoid activation function -> accuracy ~ 0.98, loss ~ 0.06</li>
</ul>
<p><strong>TLDR</strong></p>
<p>Update:</p>
<p>Now I also see that you are <strong>using only 1 output neuron with Softmax, so you will not be able to capture the second class</strong> in binary classification. <strong>With Softmax you need to define K neurons in the output layer</strong> - where K is the number of classes you want to predict. With Sigmoid, by contrast, 1 output neuron is sufficient for binary classification.</p>
<p>so in short, this should change in your code when using softmax for 2 classes:</p>
<pre><code>#use 2 neurons with softmax
model.add(tf.keras.layers.Dense(2, activation='softmax'))
</code></pre>
<p>Additionally:</p>
<p>When doing <strong>binary classification</strong>, a <strong>sigmoid function is more suitable</strong> as it is simply <strong>computationally more efficient</strong> compared to the more generalized softmax function (which is normally used for multi-class prediction when you have K>2 classes).</p>
<hr />
<p><strong>Further Reading:</strong></p>
<p><strong>Some attributes of selected activation functions</strong></p>
<p>If the short answer above is not enough for you, I can share with you some things I've learned from my research about activation functions with NNs in short:</p>
<p>To begin with, let's be clear with the terms activation and activation function</p>
<blockquote>
<p>activation (alpha): the state of a neuron. The state of neurons in hidden or output layers is quantified by the weighted sum of input signals from the previous layer</p>
</blockquote>
<blockquote>
<p>activation function f(alpha): a function that transforms an activation into a neuron signal, usually a non-linear and differentiable function such as the sigmoid function, which has been used in many applications and much research (see Bengio & Courville, 2016, p. 67 ff.). Mostly the same activation function is used throughout the neural network, but it is possible to use multiple (e.g. different ones in different layers).</p>
</blockquote>
<p>Now to the effects of activation functions:</p>
<blockquote>
<p>The choice of activation function can have an immense impact on the learning of neural networks (as you have seen in your example). Historically it was common to use the sigmoid function, as it was a good function to depict a saturated neuron. Today, especially in CNNs, other activation functions, including piecewise linear ones (like relu), are preferred over the sigmoid function. There are many different functions, just to name some: sigmoid, tanh, relu, prelu, elu, maxout, max, argmax, softmax etc.</p>
</blockquote>
<p>Now let's only compare sigmoid, relu/maxout and softmax:</p>
<pre><code># pseudo code / formula
sigmoid = f(alpha) = 1 / (1 + exp(-alpha))
relu = f(alpha) = max(0,alpha)
maxout = f(alpha) = max(alpha1, alpha2)
softmax = f(alpha_j) = exp(alpha_j) / sum_k(exp(alpha_k))
</code></pre>
<p>sigmoid:</p>
<ul>
<li>in binary classification preferably used for output layer</li>
<li>values can range between [0,1], suitable for a probabilistic interpretation (+)</li>
<li>saturated neurons can eliminate gradient (-)</li>
<li>not zero centered (-)</li>
<li>exp() is computationally expensive (-)</li>
</ul>
<p>relu:</p>
<ul>
<li>no saturated neurons in positive regions (+)</li>
<li>computationally less expensive (+)</li>
<li>not zero centered (-)</li>
<li>saturated neurons in negative regions (-)</li>
</ul>
<p>maxout:</p>
<ul>
<li>positive attributes of relu (+)</li>
<li>doubles the number of parameters per neuron, normally requires an increased learning effort (-)</li>
</ul>
<p>softmax:</p>
<ul>
<li>can be seen as a generalization of the sigmoid function</li>
<li>mainly being used as output activation function in multi-class prediction problems</li>
<li>values range between [0,1], suitable for a probabilistic interpretation (+)</li>
<li>computationally more expensive because of exp() terms (-)</li>
</ul>
<p>Some good references for further reading:</p>
<ul>
<li><a href="http://cs231n.stanford.edu/2020/syllabus" rel="nofollow noreferrer">http://cs231n.stanford.edu/2020/syllabus</a></li>
<li><a href="http://deeplearningbook.org" rel="nofollow noreferrer">http://deeplearningbook.org</a> (Bengio & Courtville)</li>
<li><a href="https://arxiv.org/pdf/1811.03378.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1811.03378.pdf</a></li>
<li><a href="https://papers.nips.cc/paper/2018/file/6ecbdd6ec859d284dc13885a37ce8d81-Paper.pdf" rel="nofollow noreferrer">https://papers.nips.cc/paper/2018/file/6ecbdd6ec859d284dc13885a37ce8d81-Paper.pdf</a></li>
</ul> | 2020-12-11 23:31:20.760000+00:00 | 2020-12-12 05:58:05.483000+00:00 | 2020-12-12 05:58:05.483000+00:00 | null | 65,258,468 | <p>I've been trying to build an image classifier with CNN. There are 2300 images in my dataset and two categories: men and women. Here's the model I used:</p>
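The single-neuron failure mode described in the TLDR is easy to verify numerically: softmax over one logit always returns 1.0, so the network can only ever predict one class (hence the ~0.50 accuracy on balanced data), while two output neurons give an informative distribution. A small numpy sketch:

```python
import numpy as np

def softmax(logits):
    # subtract the max for numerical stability
    e = np.exp(logits - np.max(logits, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# One output neuron: the "probability" is 1.0 whatever the logit is.
for logit in (-5.0, 0.0, 3.7):
    print(softmax(np.array([logit])))   # always [1.]

# Two output neurons: the logits compete and produce a real distribution.
print(softmax(np.array([2.0, 0.5])))    # approx [0.8176 0.1824]
```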
<pre><code>early_stopping = EarlyStopping(min_delta = 0.001, patience = 30, restore_best_weights = True)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(256, (3, 3), input_shape=X.shape[1:], activation = 'relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Conv2D(256, (3, 3), input_shape=X.shape[1:], activation = 'relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(tf.keras.layers.Dense(64))
model.add(tf.keras.layers.Dense(1, activation='softmax'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
h= model.fit(xtrain, ytrain, validation_data=(xval, yval), batch_size=32, epochs=30, callbacks = [early_stopping], verbose = 0)
</code></pre>
<p>Accuracy of this model is 0.501897 and loss 7.595693 (the model is stuck on these numbers in every epoch), but if I replace the Softmax activation with Sigmoid, accuracy is about 0.98 and loss 0.06. Why does such a strange thing happen with Softmax? All the info I could find said that these two activations are similar and softmax is even better, but I couldn't find anything about such an abnormality. I'll be glad if someone could explain what the problem is.</p>
54,398,386 | <p>That is because vanilla Keras does not include implementations of methods/models for object detection. </p>
<p>There are many approaches to object detection with deep learning (see <a href="https://arxiv.org/abs/1807.05511" rel="nofollow noreferrer">Object Detection with Deep Learning: A Review</a> for a survey), but none of them is implemented as part of the Keras library, and hence there are no official models either. I have a feeling that François Chollet tries to keep it simple and minimalistic, so bloating the code with something like <a href="https://github.com/tensorflow/models" rel="nofollow noreferrer">TensorFlow models</a> would be against its philosophy.</p>
<p>However, Keras is easily extendable, so there are plenty of unofficial implementations (e.g. <a href="https://github.com/pierluigiferrari/ssd_keras" rel="nofollow noreferrer">SSD</a> or <a href="https://github.com/matterport/Mask_RCNN" rel="nofollow noreferrer">Mask R-CNN</a>) supplied with trained models, though. See the <a href="https://modelzoo.co/framework/keras" rel="nofollow noreferrer">Keras model zoo</a> for more.</p>
<p>Does anyone know why it is the case? Object detection is a big part of problems when dealing with visual problems.</p> | 2019-01-28 06:10:39.773000+00:00 | 2019-08-18 05:42:58.057000+00:00 | null | tensorflow|machine-learning|keras|pytorch | ['https://arxiv.org/abs/1807.05511', 'https://github.com/tensorflow/models', 'https://github.com/pierluigiferrari/ssd_keras', 'https://github.com/matterport/Mask_RCNN', 'https://modelzoo.co/framework/keras'] | 5 |
33,326,134 | <p>A program in language C for the complex error function (aka the Faddeeva function) that can be run from Mathematica is also available in <a href="http://arxiv.org/pdf/1407.0748v1.pdf" rel="nofollow">RooFit</a>. Read the article by <a href="http://arxiv.org/pdf/1407.0748v1.pdf" rel="nofollow">Karbach <em>et al.</em> arXiv:1407.0748</a> for more information.</p> | 2015-10-25 04:19:24.730000+00:00 | 2015-10-25 04:24:51.303000+00:00 | 2015-10-25 04:24:51.303000+00:00 | null | 6,805,164 | <p>The <a href="http://en.wikipedia.org/wiki/Error_function" rel="nofollow">complex error function</a> w(z) is defined as <code>e^(-x^2) erfc(-ix)</code>. The problem with using w(z) as defined above is that the erfc tends to explode out for larger x (complemented by the exponential going to 0 so everything stays small), so that Mathematica reverts to arbitrary precision calculations that make life VERY slow. The function is used in implementing the voigt profile - a line shape commonly used in spectroscopy and other related areas. Right now I'm reverting to calculating the lineshape once and using an interpolation to speed things up, however this doesn't let me alter the parameters of the lineshape (or fit to them) easily. </p>
<p>scipy has a nice and fast implementation of w(z) as <code>scipy.special.wofz</code>, and I was wondering if there is an equivalent in Mathematica.</p> | 2011-07-24 05:47:15.250000+00:00 | 2015-10-25 04:24:51.303000+00:00 | 2011-07-25 01:12:07.520000+00:00 | wolfram-mathematica | ['http://arxiv.org/pdf/1407.0748v1.pdf', 'http://arxiv.org/pdf/1407.0748v1.pdf'] | 2 |
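The defining identity quoted in the question, w(z) = exp(-z^2) * erfc(-iz) (writing z for the complex argument), can be checked numerically against scipy's wofz. A quick sketch, assuming scipy is available and that scipy.special.erfc accepts complex arguments (it does in modern scipy, via the same Faddeeva package):

```python
import numpy as np
from scipy.special import wofz, erfc

# Compare scipy's Faddeeva function against the defining identity at a few points.
for z in (0.0 + 0.0j, 1.0 + 1.0j, 0.5 + 2.0j):
    lhs = wofz(z)
    rhs = np.exp(-z**2) * erfc(-1j * z)
    print(z, abs(lhs - rhs))  # differences at floating-point level

print(wofz(0.0))  # w(0) = erfc(0) = 1
```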
68,323,086 | <p>You can rewrite your regex to have the structure of a trie (instead of <code>word|worse|wild</code> use <code>w(or(d|se)|ild)</code>), or even better, ditch the regex and use the <a href="https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm" rel="nofollow noreferrer">Aho–Corasick</a> algorithm. Of course you can use a library for that, for instance <a href="https://github.com/vi3k6i5/flashtext" rel="nofollow noreferrer">FlashText</a> (which is a <a href="https://arxiv.org/abs/1711.00046" rel="nofollow noreferrer">slimmed down version</a> of Aho-Corasick, specialized for searching and replacing whole words as in your case).</p>
<p>The author of FlashText claims <em>»<a href="https://www.freecodecamp.org/news/regex-was-taking-5-days-flashtext-does-it-in-15-minutes-55f04411025f/" rel="nofollow noreferrer">Regex was taking 5 days to run. So I built a tool that did it in 15 minutes.</a>«</em></p> | 2021-07-09 22:02:35.290000+00:00 | 2021-07-09 22:14:42.337000+00:00 | 2021-07-09 22:14:42.337000+00:00 | null | 68,322,915 | <p>I have a large corpus that I want to remove certain words from. Similar to removing stopwords from the text, but rather I now want to remove bigrams from the corpus. I have my list of bigrams, but obviously the simple list comprehension way to remove stopwords isn't going to cut it. I was thinking to use regex and compile a pattern from a list of words and then substituting the words. Here is some sample code:</p>
<pre><code>txt = 'He was the type of guy who liked Christmas lights on his house in the middle of July. He picked up trash in his spare time to dump in his neighbors yard. If eating three-egg omelets causes weight-gain, budgie eggs are a good substitute. We should play with legos at camp. She cried diamonds. She had some amazing news to share but nobody to share it with. He decided water-skiing on a frozen lake wasn’t a good idea. His eyes met mine on the street. When he asked her favorite number, she answered without hesitation that it was diamonds. She is never happy until she finds something to be unhappy about; then, she is overjoyed.'
</code></pre>
<p>--</p>
<pre><code>import re
words_to_remove = ['this is', 'We should', 'Christmas lights']
pattrn = re.compile(r' | '.join(words_to_remove))
pattrn.sub(' ',txt)
%timeit pattrn.sub(' ',txt)
</code></pre>
<p>--</p>
<pre><code>timeit 1: 9.18 µs ± 11.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
</code></pre>
<p>Is there a faster way for me to remove these bigrams? The length of the actual corpus is 556,694,135 characters and the number of bigrams is 3,205,182; this is really slow when doing it on the actual dataset.</p>
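The answer's first suggestion (rewriting word|worse|wild as w(or(d|se)|ild)) can be automated: build a character trie over the word list and serialize it back into a regex, so the engine abandons non-matching branches after the shared prefix instead of retrying every alternative from scratch. A minimal sketch of that trie-regex idea (not the FlashText library itself):

```python
import re

def trie_regex(words):
    """Compile a list of literal words into one prefix-sharing regex."""
    trie = {}
    for word in words:
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node[""] = True  # end-of-word marker

    def serialize(node):
        if "" in node and len(node) == 1:
            return ""  # nothing can follow this end-of-word
        branches = [re.escape(ch) + serialize(child)
                    for ch, child in sorted(node.items()) if ch]
        body = branches[0] if len(branches) == 1 else "(?:" + "|".join(branches) + ")"
        if "" in node:  # a word may also end here, so the tail is optional
            body = "(?:" + body + ")?" if len(branches) == 1 else body + "?"
        return body

    return serialize(trie)

words = ["word", "worse", "wild"]
pattern = re.compile(r"\b(?:" + trie_regex(words) + r")\b")
print(pattern.pattern)  # \b(?:w(?:ild|or(?:d|se)))\b
print(pattern.sub(" ", "a word, a worse word, a wild guess, a wildcat"))
```

Note the \b anchors keep "wildcat" intact while whole words are removed, matching the whole-word semantics FlashText provides.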
23,474,463 | <p>Computers don't really do exponents. You would think they do, but what they do is high-accuracy polynomial approximations.</p>
<p>References:</p>
<ul>
<li><a href="http://www.math.vanderbilt.edu/~esaff/texts/13.pdf" rel="nofollow noreferrer">http://www.math.vanderbilt.edu/~esaff/texts/13.pdf</a></li>
<li><a href="http://deepblue.lib.umich.edu/bitstream/handle/2027.42/33109/0000495.pdf" rel="nofollow noreferrer">http://deepblue.lib.umich.edu/bitstream/handle/2027.42/33109/0000495.pdf</a></li>
<li><a href="http://www.cs.yale.edu/homes/sachdeva/pubs/fast-algos-via-approx-theory.pdf" rel="nofollow noreferrer">http://www.cs.yale.edu/homes/sachdeva/pubs/fast-algos-via-approx-theory.pdf</a></li>
</ul>
<p>The last reference looked quite nice. Perhaps it should have been first.</p>
<p>Since you are working on images, you likely have discrete number of intensity levels (255 typically). This can allow reduced sampling, or lookups, depending on the nature of "A". One way to check this is to do something like the following for a sufficiently representative group of values of "x":</p>
<pre><code>y = A*x;
cdfplot(y(:))
</code></pre>
<p>If you were able to pre-segment your images into "more interesting" and "not as interesting" - like if you were looking at an x-ray being able to trim out all the "outside the human body" locations and clamp them to zero to pre-sparsify your data, that could reduce your number of unique values. You might consider the previous for each unique "mode" inside the data. </p>
<p>My approaches would include:</p>
<ul>
<li>look at alternate formulations of exp(x) that are lower accuracy but higher speed</li>
<li>consider table lookups if you have few enough levels of "x"</li>
<li>consider a combination of interpolation and table lookups if you have "slightly too many" levels to do a table lookup</li>
<li>consider a single lookup (or alternate formulation) based on segmented mode. If you know it is a bone and are looking for a vein, then maybe it should get less high-cost data processing applied.</li>
</ul>
<p>Now I have to ask myself why you would be living in so many iterations of exp(A*x)*x and I think you might be switching back and forth between frequency/wavenumber domain and time/space domain. You also might be dealing with probabilities using exp(x) as a basis, and doing some Bayesian fun. I don't know that exp(x) is a good conjugate prior, so I'm going to go with the Fourier material.</p>
<p>Other options:</p>
<ul>
<li>consider use of fft, fft2, or fftn given your matrices - they are fast and might do part of what you are looking for.</li>
</ul>
<p>I am sure there is a Fourier domain variation on the following:</p>
<ul>
<li><a href="https://mathoverflow.net/questions/34173/fast-matrix-multiplication">https://mathoverflow.net/questions/34173/fast-matrix-multiplication</a></li>
<li><a href="http://www-cc.cs.uni-saarland.de/media/oldmaterial/bc.pdf" rel="nofollow noreferrer">http://www-cc.cs.uni-saarland.de/media/oldmaterial/bc.pdf</a></li>
<li><a href="http://arxiv.org/PS_cache/math/pdf/0511/0511460v1.pdf" rel="nofollow noreferrer">http://arxiv.org/PS_cache/math/pdf/0511/0511460v1.pdf</a></li>
</ul>
<p>You might be able to mix the lookup with a compute using the woodbury matrix. I would have to think about that some to be sure though. (<a href="http://en.wikipedia.org/wiki/Woodbury_matrix_identity" rel="nofollow noreferrer">link</a>) At one point I knew that everything that mattered (CFD, FEA, FFT) were all about the matrix inversion, but I have since forgotten the particular details.</p>
<p>Now, if you are living in MatLab then you might consider using "coder" which converts MatLab code to c-code. No matter how much fun an interpreter may be, a good c-compiler can be a lot faster. The mnemonic (hopefully not too ambitious) that I use is shown here: <a href="https://www.youtube.com/watch?v=YyEReiAYGlU" rel="nofollow noreferrer">link</a> starting around 13:49. It is really simple, but it shows the difference between a canonical interpreted language (python) and compiled version of the same (cython/c).</p>
<p>I'm sure that if I had some more specifics, and was requested to, then I could engage more aggressively in a more specifically relevant answer.</p>
<p>You might not have a good way to do it on conventional hardware, buy you might consider something like a GPGPU. CUDA and its peers have massively parallel operations that allow substantial speedup for the cost of a few video cards. You can have thousands of "cores" (overglorified pipelines) doing the work of a few ALU's and if the job is properly parallelizable (as this looks like) then it can get done a LOT faster.</p>
<p>EDIT:</p>
<p>I was thinking about <a href="http://www.nutonian.com/" rel="nofollow noreferrer">Eureqa</a>. One option that I would consider if I had some "big iron" for development but not production would be to use their Eureqa product to come up with a fast enough, accurate enough approximation. </p>
<p>If you performed a 'quick' singular value decomposition of your "A" matrix, you would find that the dominant behavior is governed by at most 81 singular vectors (A has only 81 columns). I would look at the singular values and see if only a few of those 81 singular vectors provide the majority of the information. If that is the case, then you can clamp the others to zero, and construct a simple transformation.</p>
<p>Now, if it were me, I would want to get "A" out of the exponent. I'm wondering if you can look at the 81x81 right singular vector matrix and "x" and think a little about linear algebra, and what space you are projecting your vectors into. Is there any way that you can make a function that looks like the following:</p>
<p>f(x) = B2 * exp( B1 * x )</p>
<p>such that the </p>
<p>B1 * x</p>
<p>is much smaller rank than your current </p>
<p>Ax.</p> | 2014-05-05 14:03:04.773000+00:00 | 2014-05-27 19:55:47.593000+00:00 | 2017-04-13 12:57:55.007000+00:00 | null | 23,264,559 | <p>I need to calculate <code>f(x)=exp(A*x)</code> repeatedly for a tiny, variable column vector <code>x</code> and a huge, constant matrix <code>A</code> (many rows, few columns). In other words, the <code>x</code> are few, but the <code>A*x</code> are many. My problem dimensions are such that <code>A*x</code> takes about as much runtime as the exp() part.</p>
<p>Apart from Taylor expansion and pre-calculating a range of values <code>exp(y)</code> (assuming known the range <code>y</code> of values of <code>A*x</code>), which I haven't managed to speed up considerably (while maintaining accuracy) with respect to what MATLAB is doing on its own, I am thinking about analytically restating the problem in order to be able to precalculate some values.</p>
<p>For example, I find that <code>exp(A*x)_i = exp(\sum_j A_ij x_j) = \prod_j exp(A_ij x_j) = \prod_j exp(A_ij)^x_j</code></p>
<p>This would allow me to precalculate <code>exp(A)</code> once, but the required exponentiation in the loop is as costly as the original <code>exp()</code> function call, and the multiplications (\prod) have to be carried out in addition.</p>
<p>Is there any other idea that I could follow, or solutions within MATLAB that I may have missed?</p>
<p><strong>Edit:</strong> some more details</p>
<p><code>A</code> is 26873856 by 81 in size (yes, it's that huge), so <code>x</code> is 81 by 1.
<code>nnz(A) / numel(A)</code> is <code>0.0012</code>, <code>nnz(A*x) / numel(A*x)</code> is <code>0.0075</code>. I already use a sparse matrix to represent <code>A</code>, however, exp() of a sparse matrix is not sparse any longer. So in fact, I store <code>x</code> non-sparse and I calculate <code>exp(full(A*x))</code> which turned out to be as fast/slow as <code>full(exp(A*x))</code> (I think <code>A*x</code> is non-sparse anyway, since x is non-sparse.) <code>exp(full(A*sparse(x)))</code> is a way to have a sparse <code>A*x</code>, but is slower. Even slower variants are <code>exp(A*sparse(x))</code> (with doubled memory impact for a non-sparse matrix of type sparse) and <code>full(exp(A*sparse(x))</code> (which again yields a non-sparse result).</p>
<pre><code>sx = sparse(x);
tic, for i = 1 : 10, exp(full(A*x)); end, toc
tic, for i = 1 : 10, full(exp(A*x)); end, toc
tic, for i = 1 : 10, exp(full(A*sx)); end, toc
tic, for i = 1 : 10, exp(A*sx); end, toc
tic, for i = 1 : 10, full(exp(A*sx)); end, toc
Elapsed time is 1.485935 seconds.
Elapsed time is 1.511304 seconds.
Elapsed time is 2.060104 seconds.
Elapsed time is 3.194711 seconds.
Elapsed time is 4.534749 seconds.
</code></pre>
<p>Yes, I do calculate element-wise exp, I update the above equation to reflect that.</p>
<p><strong>One more edit:</strong> I tried to be smart, with little success:</p>
<pre><code>tic, for i = 1 : 10, B = exp(A*x); end, toc
tic, for i = 1 : 10, C = 1 + full(spfun(@(x) exp(x) - 1, A * sx)); end, toc
tic, for i = 1 : 10, D = 1 + full(spfun(@(x) exp(x) - 1, A * x)); end, toc
tic, for i = 1 : 10, E = 1 + full(spfun(@(x) exp(x) - 1, sparse(A * x))); end, toc
tic, for i = 1 : 10, F = 1 + spfun(@(x) exp(x) - 1, A * sx); end, toc
tic, for i = 1 : 10, G = 1 + spfun(@(x) exp(x) - 1, A * x); end, toc
tic, for i = 1 : 10, H = 1 + spfun(@(x) exp(x) - 1, sparse(A * x)); end, toc
Elapsed time is 1.490776 seconds.
Elapsed time is 2.031305 seconds.
Elapsed time is 2.743365 seconds.
Elapsed time is 2.818630 seconds.
Elapsed time is 2.176082 seconds.
Elapsed time is 2.779800 seconds.
Elapsed time is 2.900107 seconds.
</code></pre> | 2014-04-24 09:10:24.630000+00:00 | 2014-05-27 19:55:47.593000+00:00 | 2014-04-24 17:20:16.057000+00:00 | performance|matlab|simplify|exp | ['http://www.math.vanderbilt.edu/~esaff/texts/13.pdf', 'http://deepblue.lib.umich.edu/bitstream/handle/2027.42/33109/0000495.pdf', 'http://www.cs.yale.edu/homes/sachdeva/pubs/fast-algos-via-approx-theory.pdf', 'https://mathoverflow.net/questions/34173/fast-matrix-multiplication', 'http://www-cc.cs.uni-saarland.de/media/oldmaterial/bc.pdf', 'http://arxiv.org/PS_cache/math/pdf/0511/0511460v1.pdf', 'http://en.wikipedia.org/wiki/Woodbury_matrix_identity', 'https://www.youtube.com/watch?v=YyEReiAYGlU', 'http://www.nutonian.com/'] | 9 |
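The asker's restatement exp(A*x)_i = prod_j exp(A_ij)^x_j is just exp-of-a-sum turned into a product of exps, and it is easy to confirm on a small case (a sketch with tiny random data, kept small so the product form does not overflow):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) * 0.1  # tiny stand-in for the 26873856-by-81 matrix
x = rng.standard_normal(3)

direct = np.exp(A @ x)                 # elementwise exp of the matrix-vector product
expA = np.exp(A)                       # can be precalculated once
factored = np.prod(expA ** x, axis=1)  # prod_j exp(A_ij)^x_j, x broadcast over columns

print(np.allclose(direct, factored))   # True
```

This also makes the asker's observation concrete: the factored form replaces one exp() per output element with as many powers and a product, so precalculating exp(A) buys nothing.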
58,084,728 | <p>If I understood correctly, your step 2 involves creating a project of 200,000 rows whose cells each contain 200,000 elements, i.e. 200,000^2 = 40 billion elements <- that is way too much for OpenRefine to deal with.</p>
<p>Then you would have to do 40 billion more calculations to compare each of your 200,000 dates with the other 200,000. At the rate of 1 millisecond per calculation, this would take about 11,111 hours of calculation, or roughly 463 days...</p>
<p>You definitely need another tool than OpenRefine (<a href="https://recordlinkage.readthedocs.io/en/latest/about.html" rel="nofollow noreferrer">example in Python</a>). And in any case, you will need to develop a technique to reduce the number of comparisons. The most common in this type of case is <a href="https://arxiv.org/abs/1407.3191" rel="nofollow noreferrer">blocking</a>. It consists in comparing only values that are already close enough (for example only dates with the same year). But it is difficult to suggest a strategy without knowing your data. </p> | 2019-09-24 16:36:56.653000+00:00 | 2019-09-24 20:25:00.703000+00:00 | 2019-09-24 20:25:00.703000+00:00 | null | 58,078,754 | <p>I have a set of data with a column named <code>START</code> containing date values.</p>
<p>Is there a way to compare the <code>START</code> value of a row against all other <code>START</code> values inside the project?</p>
<p>I'd like to create a new column with a message like this <code>"Rows n° x,y,...,z contains greater START values".</code></p>
<p>I've tried the following using <code>"cell.cross"</code> function to compare two projects (actually the same project) but it seems to be too resource and time consuming:</p>
<pre><code>1. create a new column COMPARE with a fixed value
2. use "cell.cross" function against column COMPARE to import in every row all the START values of the project, collapsed in a new column named ALL_START_VALUES
3. compare START value against the array in ALL_START_VALUES and generate log
</code></pre>
<p>Too bad, my 200k rows project freezes during step n°2.</p>
<p>This is what i'd like to obtain:</p>
<blockquote>
<p>ROW | START | LOG</p>
<p>0 | 2019-01-<strong>01</strong>T00:00:00Z | Rows n° 1,2 contain greater START values</p>
<p>1 | 2019-01-<strong>02</strong>T00:00:00Z | Rows n° 2 contain greater START values</p>
<p>2 | 2019-01-<strong>03</strong>T00:00:00Z |</p>
</blockquote> | 2019-09-24 10:55:58.130000+00:00 | 2019-09-25 08:18:36.973000+00:00 | 2020-06-20 09:12:55.060000+00:00 | openrefine | ['https://recordlinkage.readthedocs.io/en/latest/about.html', 'https://arxiv.org/abs/1407.3191'] | 2 |
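The blocking idea mentioned at the end of the answer can be illustrated without any record-linkage library: pairs are generated only within groups that share a cheap key (here the year), so the expensive comparison runs on a small fraction of the naive n*(n-1)/2 candidate pairs. A toy sketch with made-up dates:

```python
from collections import defaultdict
from itertools import combinations

dates = ["2019-01-01", "2019-03-15", "2020-01-01",
         "2020-06-30", "2021-12-25", "2019-07-04"]

# Naive approach: every unordered pair of records is a candidate.
naive_pairs = list(combinations(range(len(dates)), 2))

# Blocking: only records sharing the blocking key (the year) are paired.
blocks = defaultdict(list)
for i, d in enumerate(dates):
    blocks[d[:4]].append(i)
blocked_pairs = [pair for ids in blocks.values()
                 for pair in combinations(ids, 2)]

print(len(naive_pairs), len(blocked_pairs))  # 15 4
```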
71,258,315 | <p>Solution is to not use negative hard mining, but instead use focal loss. This is in line with the Focal Loss / RetinaNet paper : <a href="https://arxiv.org/pdf/1708.02002v2.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1708.02002v2.pdf</a></p> | 2022-02-24 21:12:51.307000+00:00 | 2022-02-24 21:12:51.307000+00:00 | null | null | 71,056,302 | <p>I am using the Tensorflow Object Detection API to build a detection model. Hard example mining seemed to work really well with <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v2_coco.config" rel="nofollow noreferrer">SSD+Mobilenetv2 model</a>(used with the TF1 version of the API). However with similar settings in the TF2 version with FPN <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/configs/tf2/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config" rel="nofollow noreferrer">SSD+Mobilenetv2+FPN model</a>, I achieve similar metrics for mAP on relevant category but see a lot more false positives in evaluation even after adding hard example mining. What could be the possible reasons for that, any other ways to reduce false positives?</p> | 2022-02-09 20:29:54.663000+00:00 | 2022-02-24 21:12:51.307000+00:00 | null | tensorflow|computer-vision|tensorflow2.0|object-detection|object-detection-api | ['https://arxiv.org/pdf/1708.02002v2.pdf'] | 1 |
70,616,435 | <p>A couple of ways to do it is by:</p>
<ol>
<li>freezing the lower layers of the discriminator,</li>
<li>changing the embeddings.</li>
</ol>
<p>References:</p>
<ol>
<li>Mo S, Cho M, Shin J. Freeze the discriminator: a simple baseline for fine-tuning GANs. arXiv preprint arXiv:2002.10964. 2020 Feb 25. <a href="https://arxiv.org/pdf/2002.10964.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2002.10964.pdf</a>
Code: <a href="https://github.com/sangwoomo/FreezeD" rel="nofollow noreferrer">https://github.com/sangwoomo/FreezeD</a></li>
<li>Li Q, Mai L, Alcorn MA, Nguyen A. A cost-effective method for improving and re-purposing large, pre-trained GANs by fine-tuning their class-embeddings. InProceedings of the Asian Conference on Computer Vision 2020. <a href="https://anhnguyen.me/project/biggan-am/" rel="nofollow noreferrer">https://anhnguyen.me/project/biggan-am/</a>
Code: <a href="https://github.com/qilimk/biggan-am" rel="nofollow noreferrer">https://github.com/qilimk/biggan-am</a></li>
</ol> | 2022-01-07 03:56:02.290000+00:00 | 2022-01-12 01:24:15.637000+00:00 | 2022-01-12 01:24:15.637000+00:00 | null | 69,345,540 | <p>I would like to fine tune a pre-trained GAN available online using my own images. For example, BigGAN, which was trained on ImageNet, can generate realistic images. However, I do not want to generate the classes of images in ImageNet. I want to generate artificial images of my own image sets. How can I fine tune the pre-train models? Is it the same as fine-tuning other neural networks like a CNN image classification model? Is just replacing/retrain the last few layers is enough? It would be nice if I have see some examples in code of Tensorflow/Keras. Thanks so much!</p>
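The first option, freezing the lower layers, maps to layer.trainable = False in Keras or param.requires_grad_(False) in PyTorch. The framework-free sketch below just shows the principle, with hypothetical toy parameter shapes and a fake all-ones gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer "discriminator"; the two lower layers keep their pre-trained weights.
params = {f"layer{i}": rng.standard_normal((4, 4)) for i in range(3)}
frozen = {"layer0", "layer1"}

grads = {name: np.ones_like(w) for name, w in params.items()}  # fake gradients
before = {name: w.copy() for name, w in params.items()}

lr = 0.1
for name, w in params.items():
    if name not in frozen:        # the fine-tuning update skips frozen layers
        w -= lr * grads[name]

print(np.allclose(params["layer0"], before["layer0"]))  # True  (frozen, unchanged)
print(np.allclose(params["layer2"], before["layer2"]))  # False (head was updated)
```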
<p>BigGAN
<a href="https://tfhub.dev/deepmind/biggan-deep-256/1" rel="nofollow noreferrer">https://tfhub.dev/deepmind/biggan-deep-256/1</a></p> | 2021-09-27 11:06:31.150000+00:00 | 2022-01-12 01:24:15.637000+00:00 | null | tensorflow|deep-learning|generative-adversarial-network|transfer-learning|tensorflow-hub | ['https://arxiv.org/pdf/2002.10964.pdf', 'https://github.com/sangwoomo/FreezeD', 'https://anhnguyen.me/project/biggan-am/', 'https://github.com/qilimk/biggan-am'] | 4 |
54,472,611 | <p>I think something like <a href="https://github.com/shangjingbo1226/AutoNER" rel="nofollow noreferrer">AutoNER</a> might be useful for this. Essentially, the input to the system is text documents from a particular domain and a list of domain-specific entities that you'd like the system to recognize (like Hockey players in your case). </p>
<p>According to their results in <a href="https://arxiv.org/pdf/1809.03599.pdf" rel="nofollow noreferrer">this paper</a>, they perform well on recognizing chemical names and disease names among others.</p> | 2019-02-01 03:48:03.593000+00:00 | 2019-02-01 03:48:03.593000+00:00 | null | null | 9,987,681 | <p>I need to build a classifier which identifies NEs in a specific domain. So for instance if my domain is Hockey or Football, the classifier should go accept NEs in that domain but NOT all pronouns it sees on web pages. My ultimate goal is to improve text classification through NER. </p>
<p>For people working in this area, please suggest how I should build such a classifier.
Thanks!</p> | 2012-04-03 05:41:57.080000+00:00 | 2019-02-01 03:48:03.593000+00:00 | null | text|nlp|machine-learning|classification|named-entity-recognition | ['https://github.com/shangjingbo1226/AutoNER', 'https://arxiv.org/pdf/1809.03599.pdf'] | 2
46,754,071 | <p>The click event is handled by jQuery's <code>.on("click", function(){})</code>, but we can't detect when the file download finishes; my information is based on this Stack Overflow question: <a href="https://stackoverflow.com/questions/2343418/browser-event-when-downloaded-file-is-saved-to-disk">Browser event when downloaded file is saved to disk</a></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>$(".file a").on("click",function(e){
var originalHtml=$(this).html();
$(this).html('<div class="load-container load8"><div class="loader">Loading...</div></div>'); // do your UI thing here
e.preventDefault();
var destination = this.href;
var clickedLink=$(this);
setTimeout(function() {
clickedLink.html(originalHtml);
window.location = destination;
},1000);
$('<iframe>').hide().appendTo('body').load(function() {
window.location = destination;
}).attr('src', destination);
});</code></pre>
<pre class="snippet-code-css lang-css prettyprint-override"><code>.loader,
.loader:before,
.loader:after {
border-radius: 50%;
width: 2.5em;
height: 2.5em;
-webkit-animation-fill-mode: both;
animation-fill-mode: both;
-webkit-animation: load7 1.8s infinite ease-in-out;
animation: load7 1.8s infinite ease-in-out;
}
.loader {
color: darkblue;
font-size: 10px;
margin: 80px auto;
position: relative;
text-indent: -9999em;
-webkit-transform: translateZ(0);
-ms-transform: translateZ(0);
transform: translateZ(0);
-webkit-animation-delay: -0.16s;
animation-delay: -0.16s;
}
.loader:before,
.loader:after {
content: '';
position: absolute;
top: 0;
}
.loader:before {
left: -3.5em;
-webkit-animation-delay: -0.32s;
animation-delay: -0.32s;
}
.loader:after {
left: 3.5em;
}
@-webkit-keyframes load7 {
0%,
80%,
100% {
box-shadow: 0 2.5em 0 -1.3em;
}
40% {
box-shadow: 0 2.5em 0 0;
}
}
@keyframes load7 {
0%,
80%,
100% {
box-shadow: 0 2.5em 0 -1.3em;
}
40% {
box-shadow: 0 2.5em 0 0;
}
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<html>
<body style="color: black; background-color: #EFF6E4;font-family: myFirstFont; ">
<ol class="tree">
<li>
<label for="folder2">First Semestar</label> <input type="checkbox" id="folder2" />
<ol>
<li>
<label for="subfolder2">Electronics</label> <input type="checkbox" id="subfolder2" />
<ol>
<li class="file">
<a href="
https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUm9fLVd2dVpuQ28" >Bogart Chapter 8 Solutions </a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Bogart Digital to Analog Chapter</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGSjM4VDNuZ180VWM
">An Introduction to Error Analysis </a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Basic of Electronics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdkNWZHJBbUY4WGM">Solution from Bogart Book</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUTdvN29GU0VoY0U">Least Square Solutions</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTY5RmtMLXFoWWs">Estimation of Error</a></li>
</ol>
</li>
<li>
<label for="subfolder3">Mathematical Physics</label> <input type="checkbox" id="subfolder3" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGd2NCWm5iMmluWVU">Tutorial One and Two</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVFFwc1VTeGVtVTA">Tutorial One and Two II</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGOFBkTmRLc1hVN1E">Tutorial 4</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeFN2Q2N6QjNnSTQ">Tutorial 3</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGNEE4ZG9zT2NKSG8">Curvature and Torison Curves</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdmJLVUFxMEdZU2M">Arfken Solution</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's tensor Analysis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbmEwM2ZrclF5ODA">George B. Arfken, Hans J. Weber, Manual_ Mathematical Methods for Physicists</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's Tensor Calculus</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZjg1MDVYRWI0TW8">Vector space and Eigen Value</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcmMwT1FGUGFod0E">Vector in Functional space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVzVtWndjVktwTDg">Vector funtional Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdHFCbUVnOEp6RDQ">Orothonormal Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbG9SQXd2RE1DWUk">Linear Vector Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGemNnQ3ZJZkhBV3c">Linear Transformation</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGWllDS0pBVFpQemc">Change of Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZzZ2blRkclhhczg">Operators</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdU9udFd5UzBZSVk">Operators</a></li>
</ol>
</li>
<li>
<label for="subfolder4">Quantum Mechanics</label> <input type="checkbox" id="subfolder4" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZllRR2pvTzdYVU0">Binil Sir Lecture ( 1-5 ) From Quantum Spin</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcWFVcTdvejRoQjQ">Schaum's Series</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTBmSE94aDB5bEU">Basis and Dimension</a></li>
</ol>
</li>
<li>
<label for="subfolder5">Classical Mechanics</label> <input type="checkbox" id="subfolder5" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVEFmU0V4bGNiR1k">Non-Linear Dynamics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUkdOaEg0Mmdaa00">Goldstein Chp 8 Solutions ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZmVEZ2N1aEVwOE0">Goldstein Chp 9 Solutions Handwritten ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGT0pfWnhXZFNIQmc">Goldstein Chp 9 Solutions (BG Sir Homework)</a></li>
<li class="file"><a href="https://archive.org/download/arxiv-math-ph0012051/math-ph0012051_jp2.zip">Operator formalism of quantum mechanics</a></li>
</ol>
</li>
</body>
</html></code></pre>
</div>
</div>
</p> | 2017-10-15 10:31:43.123000+00:00 | 2017-10-15 12:59:12.650000+00:00 | 2017-10-15 12:59:12.650000+00:00 | null | 46,753,945 | <p>I'm making a website with tons of notes and books to download. I would like to show an image or some text when a download link is clicked, and remove the image or loading text once the request completes and the file starts to download.</p>
<p>I've gone through lots of forums but couldn't find any solution.</p>
<p>Here is my HTML file:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><body style="color: black; background-color: #EFF6E4;font-family: myFirstFont; ">
<ol class="tree">
<li>
<label for="folder2">First Semestar</label> <input type="checkbox" id="folder2" />
<ol>
<li>
<label for="subfolder2">Electronics</label> <input type="checkbox" id="subfolder2" />
<ol>
<li class="file">
<a href="
https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUm9fLVd2dVpuQ28" >Bogart Chapter 8 Solutions </a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Bogart Digital to Analog Chapter</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGSjM4VDNuZ180VWM
">An Introduction to Error Analysis </a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Basic of Electronics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdkNWZHJBbUY4WGM">Solution from Bogart Book</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUTdvN29GU0VoY0U">Least Square Solutions</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTY5RmtMLXFoWWs">Estimation of Error</a></li>
</ol>
</li>
<li>
<label for="subfolder3">Mathematical Physics</label> <input type="checkbox" id="subfolder3" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGd2NCWm5iMmluWVU">Tutorial One and Two</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVFFwc1VTeGVtVTA">Tutorial One and Two II</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGOFBkTmRLc1hVN1E">Tutorial 4</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeFN2Q2N6QjNnSTQ">Tutorial 3</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGNEE4ZG9zT2NKSG8">Curvature and Torison Curves</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdmJLVUFxMEdZU2M">Arfken Solution</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's tensor Analysis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbmEwM2ZrclF5ODA">George B. Arfken, Hans J. Weber, Manual_ Mathematical Methods for Physicists</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's Tensor Calculus</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZjg1MDVYRWI0TW8">Vector space and Eigen Value</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcmMwT1FGUGFod0E">Vector in Functional space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVzVtWndjVktwTDg">Vector funtional Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdHFCbUVnOEp6RDQ">Orothonormal Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbG9SQXd2RE1DWUk">Linear Vector Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGemNnQ3ZJZkhBV3c">Linear Transformation</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGWllDS0pBVFpQemc">Change of Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZzZ2blRkclhhczg">Operators</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdU9udFd5UzBZSVk">Operators</a></li>
</ol>
</li>
<li>
<label for="subfolder4">Quantum Mechanics</label> <input type="checkbox" id="subfolder4" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZllRR2pvTzdYVU0">Binil Sir Lecture ( 1-5 ) From Quantum Spin</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcWFVcTdvejRoQjQ">Schaum's Series</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTBmSE94aDB5bEU">Basis and Dimension</a></li>
</ol>
</li>
<li>
<label for="subfolder5">Classical Mechanics</label> <input type="checkbox" id="subfolder5" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVEFmU0V4bGNiR1k">Non-Linear Dynamics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUkdOaEg0Mmdaa00">Goldstein Chp 8 Solutions ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZmVEZ2N1aEVwOE0">Goldstein Chp 9 Solutions Handwritten ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGT0pfWnhXZFNIQmc">Goldstein Chp 9 Solutions (BG Sir Homework)</a></li>
<li class="file"><a href="https://archive.org/download/arxiv-math-ph0012051/math-ph0012051_jp2.zip">Operator formalism of quantum mechanics</a></li>
</ol>
</li>
</body>
</html></code></pre>
</div>
</div>
</p>
<p>Any help would be much much appreciated.</p> | 2017-10-15 10:14:41.883000+00:00 | 2017-10-15 12:59:12.650000+00:00 | null | javascript|jquery|html|onclick | ['https://stackoverflow.com/questions/2343418/browser-event-when-downloaded-file-is-saved-to-disk'] | 1 |
46,755,216 | <p>Here I found another approach that shows a CSS spinner instead, which would be a better alternative, but the problem is that it never fades away and keeps loading forever. How can I make it stop and return to the original state after some time?</p>
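<p>One likely reason the spinner never goes away (a guess, based on reading the snippet below): it registers <code>$(window).on('load', ...)</code> inside the click handler, but by the time anyone clicks, the window <code>load</code> event has usually already fired, so <code>removeLoader</code> is never called. A small readiness check that works either way:</p>

```javascript
// Runs fn immediately if the page has already finished loading,
// otherwise defers it until the 'load' event fires.
// doc/win are passed in so the helper can be tested without a browser;
// in a page you would call runWhenLoaded(document, window, removeLoader).
function runWhenLoaded(doc, win, fn) {
  if (doc.readyState === 'complete') {
    fn();                              // page already loaded: run now
  } else {
    win.addEventListener('load', fn);  // still loading: defer
  }
}
```

<p>The parameterized form is just for testability; the names are illustrative.</p>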
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code> $(".file a").on("click",function(){
$(this).parent().parent().append('<div style="" id="loadingDiv"><div class="loader">Loading...</div></div>');
        $(window).on('load', function(){ // note: window 'load' has usually already fired by the time a link is clicked, so this handler may never run
        setTimeout(removeLoader, 100); // wait 100 ms after the load event
});
function removeLoader(){
$( "#loadingDiv" ).fadeOut(0, function() {
// fadeOut complete. Remove the loading div
$( "#loadingDiv" ).remove(); //makes page more lightweight
});
}
});</code></pre>
<pre class="snippet-code-css lang-css prettyprint-override"><code>.loader,
.loader:after {
border-radius: 50%;
width: 10em;
height: 10em;
}
.loader {
margin: 60px auto;
font-size: 10px;
position: relative;
text-indent: -9999em;
border-top: 1.1em solid rgba(255, 255, 255, 0.2);
border-right: 1.1em solid rgba(255, 255, 255, 0.2);
border-bottom: 1.1em solid rgba(255, 255, 255, 0.2);
border-left: 1.1em solid #ffffff;
-webkit-transform: translateZ(0);
-ms-transform: translateZ(0);
transform: translateZ(0);
-webkit-animation: load8 1.1s infinite linear;
animation: load8 1.1s infinite linear;
}
@-webkit-keyframes load8 {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
}
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg);
}
}
@keyframes load8 {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
}
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg);
}
}
#loadingDiv {
position:absolute;;
top:0;
left:0;
width:100%;
height:100%;
background-color:#000;
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<body style="color: black; background-color: #EFF6E4;font-family: myFirstFont; ">
<ol class="tree">
<li>
<label for="folder2">First Semestar</label> <input type="checkbox" id="folder2" />
<ol>
<li>
<label for="subfolder2">Electronics</label> <input type="checkbox" id="subfolder2" />
<ol>
<li class="file">
<a href="
https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUm9fLVd2dVpuQ28" >Bogart Chapter 8 Solutions </a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Bogart Digital to Analog Chapter</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGSjM4VDNuZ180VWM
">An Introduction to Error Analysis </a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Basic of Electronics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdkNWZHJBbUY4WGM">Solution from Bogart Book</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUTdvN29GU0VoY0U">Least Square Solutions</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTY5RmtMLXFoWWs">Estimation of Error</a></li>
</ol>
</li>
<li>
<label for="subfolder3">Mathematical Physics</label> <input type="checkbox" id="subfolder3" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGd2NCWm5iMmluWVU">Tutorial One and Two</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVFFwc1VTeGVtVTA">Tutorial One and Two II</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGOFBkTmRLc1hVN1E">Tutorial 4</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeFN2Q2N6QjNnSTQ">Tutorial 3</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGNEE4ZG9zT2NKSG8">Curvature and Torison Curves</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdmJLVUFxMEdZU2M">Arfken Solution</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's tensor Analysis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbmEwM2ZrclF5ODA">George B. Arfken, Hans J. Weber, Manual_ Mathematical Methods for Physicists</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's Tensor Calculus</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZjg1MDVYRWI0TW8">Vector space and Eigen Value</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcmMwT1FGUGFod0E">Vector in Functional space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVzVtWndjVktwTDg">Vector funtional Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdHFCbUVnOEp6RDQ">Orothonormal Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbG9SQXd2RE1DWUk">Linear Vector Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGemNnQ3ZJZkhBV3c">Linear Transformation</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGWllDS0pBVFpQemc">Change of Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZzZ2blRkclhhczg">Operators</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdU9udFd5UzBZSVk">Operators</a></li>
</ol>
</li>
<li>
<label for="subfolder4">Quantum Mechanics</label> <input type="checkbox" id="subfolder4" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZllRR2pvTzdYVU0">Binil Sir Lecture ( 1-5 ) From Quantum Spin</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcWFVcTdvejRoQjQ">Schaum's Series</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTBmSE94aDB5bEU">Basis and Dimension</a></li>
</ol>
</li>
<li>
<label for="subfolder5">Classical Mechanics</label> <input type="checkbox" id="subfolder5" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVEFmU0V4bGNiR1k">Non-Linear Dynamics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUkdOaEg0Mmdaa00">Goldstein Chp 8 Solutions ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZmVEZ2N1aEVwOE0">Goldstein Chp 9 Solutions Handwritten ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGT0pfWnhXZFNIQmc">Goldstein Chp 9 Solutions (BG Sir Homework)</a></li>
<li class="file"><a href="https://archive.org/download/arxiv-math-ph0012051/math-ph0012051_jp2.zip">Operator formalism of quantum mechanics</a></li>
</ol>
</li>
</body>
</html></code></pre>
</div>
</div>
</p> | 2017-10-15 12:49:25.260000+00:00 | 2017-10-15 12:49:25.260000+00:00 | null | null | 46,753,945 | <p>I'm making a website with tons of notes and books to download. I would like to show an image or some text when a download link is clicked, and remove the image or loading text once the request completes and the file starts to download.</p>
<p>I've gone through lots of forums but couldn't find any solution.</p>
<p>Here is my HTML file:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><body style="color: black; background-color: #EFF6E4;font-family: myFirstFont; ">
<ol class="tree">
<li>
<label for="folder2">First Semestar</label> <input type="checkbox" id="folder2" />
<ol>
<li>
<label for="subfolder2">Electronics</label> <input type="checkbox" id="subfolder2" />
<ol>
<li class="file">
<a href="
https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUm9fLVd2dVpuQ28" >Bogart Chapter 8 Solutions </a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Bogart Digital to Analog Chapter</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGSjM4VDNuZ180VWM
">An Introduction to Error Analysis </a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Basic of Electronics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdkNWZHJBbUY4WGM">Solution from Bogart Book</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUTdvN29GU0VoY0U">Least Square Solutions</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTY5RmtMLXFoWWs">Estimation of Error</a></li>
</ol>
</li>
<li>
<label for="subfolder3">Mathematical Physics</label> <input type="checkbox" id="subfolder3" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGd2NCWm5iMmluWVU">Tutorial One and Two</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVFFwc1VTeGVtVTA">Tutorial One and Two II</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGOFBkTmRLc1hVN1E">Tutorial 4</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeFN2Q2N6QjNnSTQ">Tutorial 3</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGNEE4ZG9zT2NKSG8">Curvature and Torison Curves</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdmJLVUFxMEdZU2M">Arfken Solution</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's tensor Analysis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbmEwM2ZrclF5ODA">George B. Arfken, Hans J. Weber, Manual_ Mathematical Methods for Physicists</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's Tensor Calculus</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZjg1MDVYRWI0TW8">Vector space and Eigen Value</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcmMwT1FGUGFod0E">Vector in Functional space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVzVtWndjVktwTDg">Vector funtional Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdHFCbUVnOEp6RDQ">Orothonormal Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbG9SQXd2RE1DWUk">Linear Vector Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGemNnQ3ZJZkhBV3c">Linear Transformation</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGWllDS0pBVFpQemc">Change of Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZzZ2blRkclhhczg">Operators</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdU9udFd5UzBZSVk">Operators</a></li>
</ol>
</li>
<li>
<label for="subfolder4">Quantum Mechanics</label> <input type="checkbox" id="subfolder4" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZllRR2pvTzdYVU0">Binil Sir Lecture ( 1-5 ) From Quantum Spin</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcWFVcTdvejRoQjQ">Schaum's Series</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTBmSE94aDB5bEU">Basis and Dimension</a></li>
</ol>
</li>
<li>
<label for="subfolder5">Classical Mechanics</label> <input type="checkbox" id="subfolder5" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVEFmU0V4bGNiR1k">Non-Linear Dynamics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUkdOaEg0Mmdaa00">Goldstein Chp 8 Solutions ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZmVEZ2N1aEVwOE0">Goldstein Chp 9 Solutions Handwritten ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGT0pfWnhXZFNIQmc">Goldstein Chp 9 Solutions (BG Sir Homework)</a></li>
<li class="file"><a href="https://archive.org/download/arxiv-math-ph0012051/math-ph0012051_jp2.zip">Operator formalism of quantum mechanics</a></li>
</ol>
</li>
</body>
</html></code></pre>
</div>
</div>
</p>
<p>Any help would be much much appreciated.</p> | 2017-10-15 10:14:41.883000+00:00 | 2017-10-15 12:59:12.650000+00:00 | null | javascript|jquery|html|onclick | [] | 0 |
46,754,722 | <p>Here I modified your code; the working solution is:</p>
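<p>To address the "stuck on Loading..." problem mentioned at the end of this post, a hypothetical helper (not part of the original answer; the names are mine) can remember the original link label and hand back a restore function to call later, e.g. from a <code>setTimeout</code>:</p>

```javascript
// Remembers the current label, switches to the busy state,
// and returns a function that restores the original label.
function makeLabelRestorer(getLabel, setLabel) {
  var original = getLabel();   // remember the current text
  setLabel('Loading...');      // show the busy state
  return function restore() {  // call later to undo it
    setLabel(original);
  };
}
```

<p>With jQuery this could be wired up as <code>var restore = makeLabelRestorer(function () { return link.text(); }, function (t) { link.text(t); }); setTimeout(restore, 1000);</code> — again, illustrative names only.</p>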
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code><script type="text/javascript">
    $(".file a").on("click",function(e){ // "e" must be declared here for e.preventDefault() below
$(this).text('Loading...'); // do your UI thing here
e.preventDefault();
var destination = this.href;
setTimeout(function() {
window.location = destination;
},100);
$('<iframe>').hide().appendTo('body').load(function() {
window.location = destination;
}).attr('src', destination);
});
</script></code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<body style="color: black; background-color: #EFF6E4;font-family: myFirstFont; ">
<ol class="tree">
<li>
<label for="folder2">First Semestar</label> <input type="checkbox" id="folder2" />
<ol>
<li>
<label for="subfolder2">Electronics</label> <input type="checkbox" id="subfolder2" />
<ol>
<li class="file">
<a href="
https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUm9fLVd2dVpuQ28" >Bogart Chapter 8 Solutions </a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Bogart Digital to Analog Chapter</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGSjM4VDNuZ180VWM
">An Introduction to Error Analysis </a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Basic of Electronics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdkNWZHJBbUY4WGM">Solution from Bogart Book</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUTdvN29GU0VoY0U">Least Square Solutions</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTY5RmtMLXFoWWs">Estimation of Error</a></li>
</ol>
</li>
<li>
<label for="subfolder3">Mathematical Physics</label> <input type="checkbox" id="subfolder3" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGd2NCWm5iMmluWVU">Tutorial One and Two</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVFFwc1VTeGVtVTA">Tutorial One and Two II</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGOFBkTmRLc1hVN1E">Tutorial 4</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeFN2Q2N6QjNnSTQ">Tutorial 3</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGNEE4ZG9zT2NKSG8">Curvature and Torison Curves</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdmJLVUFxMEdZU2M">Arfken Solution</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's tensor Analysis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbmEwM2ZrclF5ODA">George B. Arfken, Hans J. Weber, Manual_ Mathematical Methods for Physicists</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's Tensor Calculus</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZjg1MDVYRWI0TW8">Vector space and Eigen Value</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcmMwT1FGUGFod0E">Vector in Functional space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVzVtWndjVktwTDg">Vector funtional Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdHFCbUVnOEp6RDQ">Orothonormal Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbG9SQXd2RE1DWUk">Linear Vector Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGemNnQ3ZJZkhBV3c">Linear Transformation</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGWllDS0pBVFpQemc">Change of Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZzZ2blRkclhhczg">Operators</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdU9udFd5UzBZSVk">Operators</a></li>
</ol>
</li>
<li>
<label for="subfolder4">Quantum Mechanics</label> <input type="checkbox" id="subfolder4" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZllRR2pvTzdYVU0">Binil Sir Lecture ( 1-5 ) From Quantum Spin</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcWFVcTdvejRoQjQ">Schaum's Series</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTBmSE94aDB5bEU">Basis and Dimension</a></li>
</ol>
</li>
<li>
<label for="subfolder5">Classical Mechanics</label> <input type="checkbox" id="subfolder5" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVEFmU0V4bGNiR1k">Non-Linear Dynamics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUkdOaEg0Mmdaa00">Goldstein Chp 8 Solutions ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZmVEZ2N1aEVwOE0">Goldstein Chp 9 Solutions Handwritten ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGT0pfWnhXZFNIQmc">Goldstein Chp 9 Solutions (BG Sir Homework)</a></li>
<li class="file"><a href="https://archive.org/download/arxiv-math-ph0012051/math-ph0012051_jp2.zip">Operator formalism of quantum mechanics</a></li>
</ol>
</li>
</body>
</html></code></pre>
</div>
</div>
</p>
<p>But it won't return to the original text and gets stuck forever on Loading.</p> | 2017-10-15 11:53:46.220000+00:00 | 2017-10-15 11:53:46.220000+00:00 | null | null | 46,753,945 | <p>I'm making a website with tons of notes and books to download. I would like to show an image or some text when a download link is clicked, and remove the image or loading text once the request is complete and the file starts to download.</p>
<p>I've gone through lots of forums but couldn't find any solution.</p>
<p>Here is the html File of mine</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><body style="color: black; background-color: #EFF6E4;font-family: myFirstFont; ">
<ol class="tree">
<li>
<label for="folder2">First Semestar</label> <input type="checkbox" id="folder2" />
<ol>
<li>
<label for="subfolder2">Electronics</label> <input type="checkbox" id="subfolder2" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUm9fLVd2dVpuQ28">Bogart Chapter 8 Solutions</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Bogart Digital to Analog Chapter</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGSjM4VDNuZ180VWM">An Introduction to Error Analysis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbUw2Y2RzeXQyYTQ">Basic of Electronics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdkNWZHJBbUY4WGM">Solution from Bogart Book</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUTdvN29GU0VoY0U">Least Square Solutions</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTY5RmtMLXFoWWs">Estimation of Error</a></li>
</ol>
</li>
<li>
<label for="subfolder3">Mathematical Physics</label> <input type="checkbox" id="subfolder3" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGd2NCWm5iMmluWVU">Tutorial One and Two</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVFFwc1VTeGVtVTA">Tutorial One and Two II</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGOFBkTmRLc1hVN1E">Tutorial 4</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeFN2Q2N6QjNnSTQ">Tutorial 3</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGNEE4ZG9zT2NKSG8">Curvature and Torsion Curves</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdmJLVUFxMEdZU2M">Arfken Solution</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's tensor Analysis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbmEwM2ZrclF5ODA">George B. Arfken, Hans J. Weber, Manual_ Mathematical Methods for Physicists</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGRmdjUVpLaXN1UGM">Schaum's Tensor Calculus</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZjg1MDVYRWI0TW8">Vector space and Eigen Value</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcmMwT1FGUGFod0E">Vector in Functional space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVzVtWndjVktwTDg">Vector Functional Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdHFCbUVnOEp6RDQ">Orthonormal Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGbG9SQXd2RE1DWUk">Linear Vector Space</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGemNnQ3ZJZkhBV3c">Linear Transformation</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGWllDS0pBVFpQemc">Change of Basis</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZzZ2blRkclhhczg">Operators</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGdU9udFd5UzBZSVk">Operators</a></li>
</ol>
</li>
<li>
<label for="subfolder4">Quantum Mechanics</label> <input type="checkbox" id="subfolder4" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZllRR2pvTzdYVU0">Binil Sir Lecture ( 1-5 ) From Quantum Spin</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGcWFVcTdvejRoQjQ">Schaum's Series</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGeTBmSE94aDB5bEU">Basis and Dimension</a></li>
</ol>
</li>
<li>
<label for="subfolder5">Classical Mechanics</label> <input type="checkbox" id="subfolder5" />
<ol>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGVEFmU0V4bGNiR1k">Non-Linear Dynamics</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGUkdOaEg0Mmdaa00">Goldstein Chp 8 Solutions ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGZmVEZ2N1aEVwOE0">Goldstein Chp 9 Solutions Handwritten ( BG Sir Homework )</a></li>
<li class="file"><a href="https://drive.google.com/uc?export=download&id=0B2j3ckof7nFGT0pfWnhXZFNIQmc">Goldstein Chp 9 Solutions (BG Sir Homework)</a></li>
<li class="file"><a href="https://archive.org/download/arxiv-math-ph0012051/math-ph0012051_jp2.zip">Operator formalism of quantum mechanics</a></li>
</ol>
</li>
</body>
</html></code></pre>
</div>
</div>
</p>
<p>Any help would be much much appreciated.</p> | 2017-10-15 10:14:41.883000+00:00 | 2017-10-15 12:59:12.650000+00:00 | null | javascript|jquery|html|onclick | [] | 0 |
42,549,265 | <p>There are a couple of TensorFlow implementations here: <a href="https://github.com/yselivonchyk/Tensorflow_WhatWhereAutoencoder/blob/master/pooling.py" rel="nofollow noreferrer" title="pooling.py">pooling.py</a></p>
<p>Namely:</p>
<p>1) an unpool operation (<a href="https://arxiv.org/abs/1506.02351" rel="nofollow noreferrer">source</a>) that utilizes the output of <code>tf.nn.max_pool_with_argmax</code>. Please note, though, that as of TensorFlow 1.0 <code>tf.nn.max_pool_with_argmax</code> is GPU-only.</p>
<p>2) an upsample operation that mimics the inverse of max-pooling by filling the positions of the unpooled region with either zeros or copies of the max element.
Compared to <a href="https://github.com/ppwwyyxx/tensorpack/blob/master/tensorpack/models/pool.py#L66" rel="nofollow noreferrer">tensorpack</a>, it allows copies of elements instead of zeros and supports strides other than <code>[2, 2]</code>.</p>
<p>No recompile, back-prop friendly.</p>
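As a framework-free illustration of the "zeros" behaviour described in (1) and (2): max-pooling records the flat argmax indices, and unpooling scatters the pooled maxima back to those positions, zero-filling everything else. A minimal pure-Python sketch (the names and toy data here are mine, not from <code>pooling.py</code>):

```python
def unpool_1d(pooled, argmax, size):
    """Scatter pooled maxima back to their recorded flat positions.

    Every position that was not a max gets zero (the 'zeros' variant);
    the 'copies' variant would instead repeat each max over its window.
    """
    out = [0.0] * size
    for value, index in zip(pooled, argmax):
        out[index] = value
    return out

# Max-pooling [1, 5, 2, 4] with window 2 keeps values [5, 4] at indices [1, 3].
print(unpool_1d([5.0, 4.0], [1, 3], 4))  # [0.0, 5.0, 0.0, 4.0]
```

The same scatter idea generalizes to 2-D tensors by using the flattened indices that <code>tf.nn.max_pool_with_argmax</code> returns.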
<p>Illustration:
<a href="https://github.com/ppwwyyxx/tensorpack/blob/master/tensorpack/models/pool.py#L66" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7nKkL.png" alt="Upsampling"></a></p>
<p><a href="https://i.stack.imgur.com/7nKkL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oHaap.png" alt="Unpooling"></a></p> | 2017-03-02 07:31:13.320000+00:00 | 2019-04-09 12:47:40.033000+00:00 | 2019-04-09 12:47:40.033000+00:00 | null | 36,548,736 | <p>Is there TensorFlow native function that does unpooling for Deconvolutional Networks ? </p>
<p>I have written this in normal Python, but it gets complicated when I want to translate it to TensorFlow, as its objects do not even support item assignment at the moment, and I think this is a great inconvenience with TF.</p> | 2016-04-11 12:29:21.293000+00:00 | 2022-02-17 19:31:46.857000+00:00 | null | tensorflow|conv-neural-network|deconvolution | ['https://github.com/yselivonchyk/Tensorflow_WhatWhereAutoencoder/blob/master/pooling.py', 'https://arxiv.org/abs/1506.02351', 'https://github.com/ppwwyyxx/tensorpack/blob/master/tensorpack/models/pool.py#L66', 'https://github.com/ppwwyyxx/tensorpack/blob/master/tensorpack/models/pool.py#L66', 'https://i.stack.imgur.com/7nKkL.png'] | 5
65,378,708 | <p>We typically call this problem modelling contagion on a network, or epidemics on networks. Specifically, you are trying to simulate an SIR model (read up on that).</p>
<p>There are several libraries on this you may want to check out:</p>
<ul>
<li>EoN <a href="https://arxiv.org/pdf/2001.02436.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2001.02436.pdf</a></li>
<li>igraph</li>
<li>Netpidemix</li>
<li>epydemic</li>
<li>EpiModel</li>
<li>Graph-tool</li>
</ul>
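For the exact quantity the question asks about (the average number of infected V nodes over all labelings), a brute-force, dependency-free sketch is enough on small graphs. The three-node path graph and all names below are my own invention, and it assumes, as the question states, that any I–V edge always infects:

```python
from itertools import permutations

# Hypothetical example graph: a path a - b - c, as an undirected adjacency list.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}

def infections(labeling):
    """Count V nodes with at least one I neighbour (each V counts once)."""
    return sum(
        1
        for node, lab in labeling.items()
        if lab == "V" and any(labeling[n] == "I" for n in adj[node])
    )

def average_infections(labels):
    """Exact average of `infections` over every distinct labeling."""
    nodes = sorted(adj)
    labelings = set(permutations(labels))  # dedupe repeated labels
    return sum(infections(dict(zip(nodes, p))) for p in labelings) / len(labelings)

print(average_infections(["I", "V", "R"]))  # 4 of the 6 labelings infect -> 2/3
```

Enumeration is exponential in general; for larger graphs the libraries above, or plain Monte Carlo sampling of random labelings, are the practical route.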
<p>You may also want to clarify:</p>
<ul>
<li>What do you mean by the 'labelling of the graph'? Are you talking about time steps? This is confusing: "what is the average number of infections that occurs in each possible labeling of the graph?"</li>
<li>Are you using python or networkx or c++ ...?</li>
<li>Are you saying there is a 100% chance of infection if connected to node with status I? re "If there is an edge between an I and V node., an infection occurs."</li>
</ul> | 2020-12-20 09:58:26.200000+00:00 | 2020-12-20 09:58:26.200000+00:00 | null | null | 65,115,708 | <p>Hi I am working on a graph problem for a research project and I am curious how other people would approach the following problem. Apologies if this is the wrong forum for posting this.</p>
<p>Say you have a graph G with N nodes and E edges between the nodes (edges are bidirectional / go both ways).</p>
<p>I is the number of nodes which are infected, V is the number of nodes which are vulnerable, and the rest of the nodes are R (resistant). If there is an edge between an I and a V node, an infection occurs. Any other edges result in no infection. Even if a V node is connected to many I nodes, it only counts as one infection.</p>
<p>Now, among all possible ways you can label the nodes, what is the average number of infections that occur, taken over all possible labelings of the graph?</p> | 2020-12-02 20:33:08.427000+00:00 | 2020-12-20 09:58:26.200000+00:00 | null | graph|graph-theory|graph-algorithm | ['https://arxiv.org/pdf/2001.02436.pdf'] | 1
48,475,483 | <p>The Gauss–Jordan elimination algorithm has <a href="https://i.stack.imgur.com/VfTjJ.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VfTjJ.gif" alt="O(n^3)"></a> time complexity.</p>
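For concreteness, that cubic cost is what plain Gaussian elimination with partial pivoting achieves when solving Ax = B directly, without forming the full inverse. A dependency-free sketch (my own code, not contest-golfed):

```python
def solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting, O(n^3)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix [A | b]
    for col in range(n):
        # Partial pivoting: bring the row with the largest pivot into place.
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # approx. [0.8, 1.4]
```

Solving this way is usually preferable numerically to computing the inverse first and then multiplying.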
<p>It can be shown that a divide and conquer algorithm that uses <a href="https://en.wikipedia.org/wiki/Invertible_matrix#Blockwise_inversion" rel="nofollow noreferrer">blockwise inversion</a> to invert an <a href="https://i.stack.imgur.com/8RXNF.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8RXNF.gif" alt="n x n"></a> matrix runs with the same time complexity as the matrix multiplication algorithm that is used internally.</p>
<p>So if you implement the <a href="https://en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_algorithm" rel="nofollow noreferrer">Coppersmith–Winograd algorithm</a> for matrix multiplication you can achieve a time complexity of <a href="https://i.stack.imgur.com/ZJJw4.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZJJw4.gif" alt="O(n^{2.376})"></a> or <a href="https://arxiv.org/abs/1401.7714" rel="nofollow noreferrer">even better</a>.</p>
<p>In your <a href="https://i.stack.imgur.com/AHTTt.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AHTTt.gif" alt="Ax = B"></a> linear system, once you have found the matrix inverse <a href="https://i.stack.imgur.com/ONlTN.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ONlTN.gif" alt="A^{-1}"></a> of matrix <a href="https://i.stack.imgur.com/CZMvc.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CZMvc.gif" alt="A"></a>, the solution will be <a href="https://i.stack.imgur.com/ArEx9.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ArEx9.gif" alt="x = A^{-1}B"></a>.</p> | 2018-01-27 11:48:37.847000+00:00 | 2018-01-27 11:48:37.847000+00:00 | null | null | 48,471,230 | <p>i was wondering what is the shortest code(needed for contests like ACM) for calculating linear algebra such as Ax=B.
Here's my Java code for solving Ax=B with Gaussian elimination:</p>
<pre><code>public static void linearAlgebra(double[][] a , double[] b , double[] x , int n)
{
for(int j = 0 ; j<n ; j++)
{
for(int i = 0 ; i<n ; i++)
{
if(j<i)//lower triangle of A matrix
{
double factor = 1 ;
int k=i-1;
while(k>=0)//find the Gaussian factor from the upper rows of the current row
{
if(k==i)continue;
if(a[k][j]!=0)//if the upper rows same column is not zero
{
factor = (double)(-a[i][j])/a[k][j];
break;
}//if
k--;
}//while checking the upper rows of the current row to find the Gaussian factor
for(int m = 0 ; m<n ; m++)
{
a[i][m] += factor*a[k][m];// Gaussian factor
}//for
b[i] += factor*b[k];//do the same thing that we did with A to B
}//if j<i
}//i
}//j
for(int i=n-1 ; i>=0 ;i--)
{
//for example if we have 2*x2 + 3*x3 = b2 we already found x3 the rest is history :)
double sum = 0 ;
for(int j = i+1 ; j<n ; j++)
{
sum += a[i][j]*x[j];
}//j
x[i] = (b[i]-sum)/a[i][i];//calculate x_i --> for example the first loop will find x_n
}//i
//output the linear Algebra
System.out.print("A's matrix after Gaussian:");
System.out.println();
for(int row = 0 ; row<n ; row++)
{
for(int col = 0 ; col<n ; col++)
{
System.out.print(a[row][col]+ " ");
}
System.out.println();
}//i
System.out.println();
System.out.print("B's matrix after Gaussian:");
for(int col = 0 ; col<n ; col++)
{
System.out.println(b[col]);
}//for
System.out.println();
System.out.print("x's vector is:");
for(int col = 0 ; col<n ; col++)
{
System.out.println(x[col]);
}//for
}//linearAlgebra method
</code></pre>
<p>Is there any faster method, or is Gaussian elimination the best one to code if you need to code fast?
By the way, this code can help those in need of doing linear algebra in Java. This code is tested, but if it has any bugs please let me know.</p> | 2018-01-26 23:59:34.970000+00:00 | 2018-01-27 11:48:37.847000+00:00 | null | java | ['https://i.stack.imgur.com/VfTjJ.gif', 'https://en.wikipedia.org/wiki/Invertible_matrix#Blockwise_inversion', 'https://i.stack.imgur.com/8RXNF.gif', 'https://en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_algorithm', 'https://i.stack.imgur.com/ZJJw4.gif', 'https://arxiv.org/abs/1401.7714', 'https://i.stack.imgur.com/AHTTt.gif', 'https://i.stack.imgur.com/ONlTN.gif', 'https://i.stack.imgur.com/CZMvc.gif', 'https://i.stack.imgur.com/ArEx9.gif'] | 10
46,181,636 | <p>After reading the original papers of batch normalization (<a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">https://arxiv.org/abs/1502.03167</a>) and SELU (<a href="https://arxiv.org/abs/1706.02515" rel="nofollow noreferrer">https://arxiv.org/abs/1706.02515</a>), I have a better understanding of them:</p>
<ol>
<li><p>batch normalization is an "isolation" procedure that ensures the input (in any mini-batch) to the next layer has a fixed distribution, so the so-called "internal covariate shift" problem is fixed. The affine transform ( γ*x^ + β ) just tunes the standardized x^ to another <strong>fixed distribution</strong> for better expressiveness. For simple normalization only, we need to set the <code>center</code> and <code>scale</code> parameters to <code>False</code> when calling <code>tf.layers.batch_normalization</code>.</p></li>
<li><p>Make sure the <code>epsilon</code> (still in <code>tf.layers.batch_normalization</code>) is set at least two orders of magnitude below the smallest magnitude in the input data. The default value of <code>epsilon</code> is 0.001. In my case, some features have values as low as 1e-6, so I had to change <code>epsilon</code> to 1e-8.</p></li>
<li><p>The inputs to SELU have to be normalized before feeding them into the model. <code>tf.layers.batch_normalization</code> is not designed for that purpose.</p></li>
</ol> | 2017-09-12 16:38:59.793000+00:00 | 2017-09-12 23:37:51.347000+00:00 | 2017-09-12 23:37:51.347000+00:00 | null | 46,136,165 | <p>The SELU activation function (<a href="https://github.com/bioinf-jku/SNNs/blob/master/selu.py" rel="nofollow noreferrer">https://github.com/bioinf-jku/SNNs/blob/master/selu.py</a>) requires the input to be normalized to have the mean value of 0.0 and the variance of 1.0. Therefore, I tried to apply <code>tf.layers.batch_normalization</code> (<code>axis=-1</code>) on the raw data to meet that requirement. The raw data in each batch have the shape of <code>[batch_size, 15]</code>, where 15 refers to the number of features. The graph below shows the variances of 5 of these features returned from <code>tf.layers.batch_normalization</code> (~20 epochs). They are not all close to 1.0 as expected. The mean values are not all close to 0.0 as well (graphs not shown). </p>
<p>How should I get the 15 features all normalized independently (I expect every feature after normalization will have mean = 0 and var = 1.0)? </p>
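For reference, the behaviour being asked for is just column-wise (x - mean) / sqrt(var + eps). A plain-Python sketch of that target (the toy data and names are my own, not TensorFlow's implementation):

```python
def standardize_columns(rows, eps=1e-8):
    """Standardize each feature column to mean 0 and (population) variance 1."""
    n = len(rows)
    cols = list(zip(*rows))
    means = [sum(c) / n for c in cols]
    varis = [sum((v - m) ** 2 for v in c) / n for c, m in zip(cols, means)]
    return [
        [(v - m) / (var + eps) ** 0.5 for v, m, var in zip(row, means, varis)]
        for row in rows
    ]

data = [[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]]  # shape [batch_size, n_features]
normed = standardize_columns(data)
# Each column of `normed` now has mean ~0 and variance ~1.
```

Note the small `eps` guards features whose variance is tiny, as discussed in the answer above.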
<p><a href="https://i.stack.imgur.com/8EcQg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EcQg.png" alt="enter image description here"></a></p> | 2017-09-10 00:07:37.550000+00:00 | 2017-09-12 23:55:54.287000+00:00 | 2017-09-12 23:55:54.287000+00:00 | tensorflow|batch-normalization | ['https://arxiv.org/abs/1502.03167', 'https://arxiv.org/abs/1706.02515'] | 2 |
61,241,054 | <p>Neural networks operate on a continuous space, and don't know what to do with a discrete space like words. That's why NLP tasks start by embedding the discrete word IDs into a continuous space.</p>
<p>Fast Gradient Sign Method, which clearly uses the gradient and also operates that continuous space, can get you as far as an adversarial embedding. But if you want an adversarial <em>example</em>, then you need to somehow go from that adversarial embedding to an adversarial word.</p>
<p>This paper on <a href="https://arxiv.org/abs/1801.04354" rel="nofollow noreferrer">Black-box Generation of Adversarial Text Sequences</a> describes one such idea.</p>
<blockquote>
<p>Multiple recent studies [21, 25] defined adversarial perturbations
on RNN-based text classifiers. [21] first chose the word at a random
position in a text input, then used a projected Fast Gradient Sign
Method to perturb the word’s embedding vector. The perturbed vector is projected to the nearest word vector in the word embedding
space, resulting in an adversarial sequence (adversarial examples
in the text case).</p>
</blockquote>
<p>But right after that quote they said this technique does not always generate great examples. Perhaps it will be suitable for your purposes, or perhaps you will want to dive deeper into the paper to see how their black-box idea works.</p>
<p>Or maybe you don't need to generate adversarial words, and an adversarial embedding is sufficient. If so, read on.</p>
<hr>
<p><strong>Older idea of mine, not backed by research.</strong></p>
<p>Another path forward is to generate the adversarial example on top of the embedding, instead of the indices the embedding is based on. That is:</p>
<ol>
<li>Run the embedding.</li>
<li>Feed it directly to the <code>answer</code> part of your model, which gives one half of your loss.</li>
<li>Update the <em>embedding</em> in an adversarial way. This will now work because you are working on the embeddings, which are floating point and suitable for the FGSM update.</li>
<li>Feed the adversarial example to your <code>answer</code> subnet, which gives the second half of your loss.</li>
</ol>
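The core of step 3 is tiny. A dependency-free sketch of a single FGSM update on an embedding vector (the toy loss and all names here are mine):

```python
def fgsm_step(embedding, grad, eps):
    """x_adv = x + eps * sign(dL/dx): nudge each coordinate to increase the loss."""
    return [x + eps * ((g > 0) - (g < 0)) for x, g in zip(embedding, grad)]

# Toy linear loss L(x) = w . x, whose gradient w.r.t. x is simply w.
w = [0.5, -2.0, 0.0]
x = [1.0, 1.0, 1.0]
print(fgsm_step(x, w, eps=0.1))  # [1.1, 0.9, 1.0]
```

In a real model the gradient comes from automatic differentiation through the layers after the embedding, exactly as in step 3.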
<p>This is straightforward to do in PyTorch, but unfortunately I do not know a convenient way to do so in Keras given the up-front requirement to <code>compile()</code> the model instead of leaving it in two pieces.</p> | 2020-04-16 01:02:37.907000+00:00 | 2020-04-17 04:13:08.860000+00:00 | 2020-04-17 04:13:08.860000+00:00 | null | 61,240,758 | <p>I would like to implement a custom loss function for my neural network in order to consider also the contribution of adversarial examples during training, computed with the Fast Gradient Sign Method.</p>
<p><img src="https://i.stack.imgur.com/O5v7Q.png" alt="Custom Loss Function"></p>
<p>where <strong><em>J</em></strong> is a classic categorical cross-entropy computed w.r.t. the inputs. And <strong><em>x + delta</em></strong> is the adversarial example.</p>
<p><strong>Network Structure</strong></p>
<p>More in details, my network is the following:</p>
<pre><code>sentence = Input(shape=(story_maxlen,))
encoded_sentence = Embedding(vocab_size, embed_size, input_length=story_maxlen)(sentence)
question = Input(shape=(query_maxlen,))
encoded_question = Embedding(vocab_size, embed_size, input_length=query_maxlen)(question)
merged = concatenate([encoded_sentence, encoded_question], axis=1)
answer = LSTM(lstm_size, return_sequences=True)(merged)
answer = Dense(mlp_size, activation='tanh')(merged)
answer = Dropout(dropout_rate)(answer)
answer = Flatten()(answer)
answer = Dense(vocab_size, activation='softmax')(answer)
model = Model([sentence, question], answer)
model.compile(optimizer="adam", loss=my_loss_wrapper([sentence,question]), metrics=['accuracy'])
</code></pre>
<p>And then my custom loss function with also the function to generate the adversarial examples:</p>
<pre><code>def generate_advers(model, epsilon):
x1 = input_tensor[0]
x2 = input_tensor[1]
answer = y_true
x1 = tf.Variable(x1)
x2 = tf.Variable(x2)
with tf.GradientTape() as tape:
tape.watch([x1, x2])
proba = model([x1, x2])
loss = K.categorical_crossentropy(answer, proba[0])
    # Get the gradients of the loss w.r.t. the input.
gradient = tape.gradient(loss, [x1, x2])
g1 = gradient[0]
g2 = gradient[1]
signed_grad_st = tf.sign(g1)
signed_grad_qu = tf.sign(g2)
delta_1 = tf.multiply(signed_grad_st, epsilon)
delta_2 = tf.multiply(signed_grad_qu, epsilon)
x1_adv = tf.add(x1, delta_1)
x2_adv = tf.add(x2, delta_2)
proba_adv = model([x1_adv, x2_adv])
    loss_advers = K.categorical_crossentropy(answer, proba_adv[0])
return loss_advers
def my_loss_wrapper(input_tensor):
def my_loss(y_true, y_pred):
alpha = 0.05
alpha_compl = 1.0 - alpha
epsilon = 0.15
loss_advers = generate_advers(model, epsilon)
loss_advers = alpha_compl*loss_advers
loss_true = K.categorical_crossentropy(y_true, y_pred)
loss_true = alpha*loss_true
total = loss_true + loss_advers
return total
return my_loss
</code></pre>
<p>Given that my input is an encoded vector of vocabulary indices of the form:</p>
<pre><code>[1,5,4,3,6,9...]
</code></pre>
<p>I don't understand how to compute the gradient of the loss w.r.t. the input (it is always None), which is fundamental for implementing the FGSM. Do you have any suggestions? Also, do you think I'm on the right track?</p>
<p><strong>Important</strong></p>
<p>I'm able to compute the gradient if and only if I <strong>remove</strong> the Embedding layer from the network. But then the problem is that I can't train my embeddings, and so the accuracy does not increase. So I need the Embedding layer to be in the network.</p> | 2020-04-16 00:26:07.160000+00:00 | 2020-04-17 04:13:08.860000+00:00 | 2020-04-16 01:48:30.967000+00:00 | python|tensorflow|keras|neural-network | ['https://arxiv.org/abs/1801.04354'] | 1
61,086,443 | <p>The precalculated features released with AudioSet are "embeddings" from a deep net that was trained to predict video-level tags from soundtracks (see <a href="https://arxiv.org/abs/1609.09430" rel="nofollow noreferrer">https://arxiv.org/abs/1609.09430</a>). The embedding layer is further processed via PCA to reduce dimensionality; this processing is included to make the features compatible with the ones release in <a href="https://research.google.com/youtube8m/" rel="nofollow noreferrer">https://research.google.com/youtube8m/</a> . So, vggish_model.ckpt gives the weights of the VGG-like deep CNN used to calculate the embedding from mel-spectrogram patches, and vggish_pca_params.npz gives the bases for the PCA transformation.</p>
<p>The only content released as part of <a href="https://research.google.com/audioset/download.html" rel="nofollow noreferrer">AudioSet</a> are these precalculated embedding features. If you train a model based on these features, then want to use it to classify new inputs, you must convert the new input to the same domain, and thus you have to use vggish_model and vggish_pca_params. </p>
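The PCA step itself is just an affine projection of the embedding onto the stored bases. A toy, dependency-free sketch (the 3-D to 2-D numbers are invented for illustration; the real vggish_pca_params.npz holds the mean and eigenvector parameters for the 128-D VGGish embedding, and the released postprocessing also clips and quantizes the result):

```python
def apply_pca(embedding, pca_means, pca_eigen_vectors):
    """Project an embedding onto stored PCA bases: (x - mean) dotted with each basis row."""
    centered = [x - m for x, m in zip(embedding, pca_means)]
    return [sum(c * b for c, b in zip(centered, row)) for row in pca_eigen_vectors]

# Invented 3-D -> 2-D parameters purely for illustration.
means = [1.0, 1.0, 1.0]
eigen = [[1.0, 0.0, 0.0],   # each row: one principal direction
         [0.0, 1.0, 0.0]]
print(apply_pca([2.0, 3.0, 4.0], means, eigen))  # [1.0, 2.0]
```

The point is that any new input must pass through the same transformation before it is comparable to the released features.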
<p>If AudioSet had included waveforms, none of this would be needed. But YouTube terms of service do not allow download and redistribution of its users' content.</p> | 2020-04-07 18:00:40.663000+00:00 | 2020-04-07 18:00:40.663000+00:00 | null | null | 61,080,619 | <p>I am trying to understand some aspects of audio classification and came by "vggish_model.ckpt" and "vggish_pca_params.npz". I am trying to have a good understanding of these two. Are they part of tensorflow or google audio set? Why do I need to use them when building audio features? I couldn't see any documentation about them!</p> | 2020-04-07 13:00:35.293000+00:00 | 2020-04-07 18:00:40.663000+00:00 | null | tensorflow|artificial-intelligence|feature-extraction | ['https://arxiv.org/abs/1609.09430', 'https://research.google.com/youtube8m/', 'https://research.google.com/audioset/download.html'] | 3 |
6,634,668 | <p>Despite the title of your question, I think you’re actually looking for the minimum <em>dissection</em> into rectangles of a rectilinear polygon. (Jason’s links are about minimum <em>covers</em> by rectangles, which is quite a different problem.)</p>
<p><a href="http://www.ics.uci.edu/~eppstein/" rel="noreferrer">David Eppstein</a> discusses this problem in section 3 of his 2010 survey article <a href="http://arxiv.org/pdf/0908.3916v1" rel="noreferrer">Graph-Theoretic Solutions to Computational Geometry Problems</a>, and he gives a nice summary in <a href="https://mathoverflow.net/questions/28303/split-polygon-into-minimum-amount-of-rectangles-and-triangles/28350#28350">this answer on mathoverflow.net</a>:</p>
<blockquote>
<p>The idea is to find the maximum number of disjoint axis-parallel diagonals that have two concave vertices as endpoints, split along those, and then form one more split for each remaining concave vertex. To find the maximum number of disjoint axis-parallel diagonals, form the intersection graph of the diagonals; this graph is bipartite so its maximum independent set can be found in polynomial time by graph matching techniques.</p>
</blockquote>
<p>Here’s my gloss on this admirably terse description, using figure 2 from Eppstein’s article. Suppose we have a rectilinear polygon, possibly with holes.</p>
<p><img src="https://i.stack.imgur.com/fCyRM.png" alt=""></p>
<p>When the polygon is dissected into rectangles, each of the concave vertices must be met by at least one edge of the dissection. So we get the <em>minimum</em> dissection if as many of these edges as possible do double-duty, that is, they connect two of the concave vertices.</p>
<p>So let’s draw the axis-parallel diagonals between two concave vertices that are contained entirely within the polygon. (‘Axis-parallel’ means ‘horizontal or vertical’ here, and a <a href="http://en.wikipedia.org/wiki/Diagonal#Polygons" rel="noreferrer">diagonal of a polygon</a> is a line connecting two non-adjacent vertices.) We want to use as many of these lines as possible in the dissection as long as they don’t intersect.</p>
<p><img src="https://i.stack.imgur.com/EzPPP.png" alt=""></p>
<p>(If there are no axis-parallel diagonals, the dissection is trivial—just make a cut from each concave vertex. Or if there are no intersections between the axis-parallel diagonals then we use them all, plus a cut from each remaining concave vertex. Otherwise, read on.)</p>
<p>The <a href="http://en.wikipedia.org/wiki/Intersection_graph" rel="noreferrer">intersection graph</a> of a set of line segments has a node for every line segment, and an edge joins two nodes if the lines cross. Here’s the intersection graph for the axis-parallel diagonals:</p>
<p><img src="https://i.stack.imgur.com/4SWYk.png" alt=""></p>
<p>It’s <a href="http://en.wikipedia.org/wiki/Bipartite_graph" rel="noreferrer">bipartite</a> with the vertical diagonals in one part, and the horizontal diagonals in the other part. Now, we want to pick as many of the diagonals as possible as long as they don’t intersect. This corresponds to finding the <a href="http://en.wikipedia.org/wiki/Independent_set_%28graph_theory%29" rel="noreferrer">maximum independent set</a> in the intersection graph.</p>
<p>Finding the maximum independent set in a general graph is an NP-hard problem, but in the special case of a bipartite graph, <a href="http://en.wikipedia.org/wiki/K%C3%B6nig%27s_theorem_%28graph_theory%29" rel="noreferrer">König’s theorem</a> shows that it’s equivalent to the problem of finding a maximum matching, which can be solved in polynomial time, for example by the <a href="http://en.wikipedia.org/wiki/Hopcroft%E2%80%93Karp_algorithm" rel="noreferrer">Hopcroft–Karp algorithm</a>. A given graph can have several maximum matchings, but any of them will do, as they all have the same size. In the example, all the maximum matchings have three pairs of vertices, for example {(2, 4), (6, 3), (7, 8)}:</p>
<p><img src="https://i.stack.imgur.com/q5MD5.png" alt=""></p>
<p>(Other maximum matchings in this graph include {(1, 3), (2, 5), (7, 8)}; {(2, 4), (3, 6), (5, 7)}; and {(1, 3), (2, 4), (7, 8)}.)</p>
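The matching step is short to code. Below is a sketch of Kuhn’s augmenting-path algorithm on a small hypothetical crossing graph (not the exact graph from Eppstein’s figure); by König’s theorem the maximum independent set then has size (number of diagonals minus the size of the maximum matching):

```python
def max_matching(crossings):
    """Kuhn's augmenting-path maximum matching on a bipartite graph.

    crossings[v] lists the horizontal diagonals crossed by vertical diagonal v.
    Returns {horizontal: vertical} for the matched pairs.
    """
    match = {}

    def augment(v, seen):
        for h in crossings[v]:
            if h not in seen:
                seen.add(h)
                if h not in match or augment(match[h], seen):
                    match[h] = v
                    return True
        return False

    for v in crossings:
        augment(v, set())
    return match

# Hypothetical crossings between vertical diagonals a,b,c and horizontal 1,2,3.
crossings = {"a": [1], "b": [1, 2], "c": [2, 3]}
m = max_matching(crossings)
independent = len(crossings) + 3 - len(m)  # König: total vertices - max matching
print(len(m), independent)
```

Hopcroft–Karp is asymptotically faster, but this simpler algorithm is already polynomial and fine at the sizes that arise from a polygon’s diagonals.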
<p>To get from a maximum matching to the corresponding <a href="http://en.wikipedia.org/wiki/Vertex_cover" rel="noreferrer">minimum vertex cover</a>, apply the <a href="https://en.wikipedia.org/wiki/K%C5%91nig%27s_theorem_(graph_theory)#Proof" rel="noreferrer">proof of König’s theorem</a>. In the matching shown above, the left set is <em>L</em> = {1, 2, 6, 7}, the right set is <em>R</em> = {3, 4, 5, 8}, and the set of unmatched vertices in <em>L</em> is <em>U</em> = {1}. There is only one alternating path starting in <em>U</em>, namely 1–3–6, so the set of vertices in alternating paths is <em>Z</em> = {1, 3, 6} and the minimum vertex cover is thus <em>K</em> = (<em>L</em> \ <em>Z</em>) ∪ (<em>R</em> ∩ <em>Z</em>) = {2, 3, 7}, shown in red below, with the maximum independent set in green:</p>
<p><img src="https://i.stack.imgur.com/Im3Nd.png" alt=""></p>
<p>Translating this back into the dissection problem, this means that we can use five axis-parallel diagonals in the dissection:</p>
<p><img src="https://i.stack.imgur.com/d6lLD.png" alt=""></p>
<p>Finally, make a cut from each remaining concave vertex to complete the dissection:</p>
<p><img src="https://i.stack.imgur.com/4hJaS.png" alt=""></p> | 2011-07-09 12:15:36.610000+00:00 | 2018-08-09 08:02:03.953000+00:00 | 2018-08-09 08:02:03.953000+00:00 | null | 5,919,298 | <p>I have a set of rectangles and I would like to "reduce" the set so I have the fewest number of rectangles to describe the same area as the original set. If possible, I would like it to also be fast, but I am more concerned with getting the number of rectangles as low as possible. I have an approach now which works most of the time.</p>
<p>Currently, I start at the top-left most rectangle and see if I can expand it out right and down while keeping it a rectangle. I do that until it can't expand anymore, remove and split all intersecting rectangles, and add the expanded rectangle back in the list. Then I start the process again with the next top-left most rectangle, and so on. But in some cases, it doesn't work. For example:
<img src="https://i.stack.imgur.com/XUvZs.png" alt="enter image description here"></p>
<p>With this set of three rectangles, the correct solution would end up with two rectangles, like this:
<img src="https://i.stack.imgur.com/6FI93.png" alt="enter image description here"></p>
<p>However, in this case, my algorithm starts by processing the blue rectangle. This expands downwards and splits the yellow rectangle (correctly). But then when the remainder of the yellow rectangle is processed, instead of expanding downwards, it expands right first and takes back the portion that was previously split off. Then the last rectangle is processed and it can't expand right or down, so the original set of rectangles is left. I could tweak the algorithm to expand down first and then right. That would fix this case, but it would cause the same problem in a similar, flipped scenario.</p>
<p><strong>Edit:</strong> Just to clarify, the original set of rectangles do not overlap and do not have to be connected. And if a subset of rectangles are connected, the polygon which completely covers them can have holes in it.</p> | 2011-05-07 05:21:13.177000+00:00 | 2022-04-21 19:35:26.070000+00:00 | 2016-11-22 12:50:51.983000+00:00 | algorithm|language-agnostic|geometry|rectangles | ['http://www.ics.uci.edu/~eppstein/', 'http://arxiv.org/pdf/0908.3916v1', 'https://mathoverflow.net/questions/28303/split-polygon-into-minimum-amount-of-rectangles-and-triangles/28350#28350', 'http://en.wikipedia.org/wiki/Diagonal#Polygons', 'http://en.wikipedia.org/wiki/Intersection_graph', 'http://en.wikipedia.org/wiki/Bipartite_graph', 'http://en.wikipedia.org/wiki/Independent_set_%28graph_theory%29', 'http://en.wikipedia.org/wiki/K%C3%B6nig%27s_theorem_%28graph_theory%29', 'http://en.wikipedia.org/wiki/Hopcroft%E2%80%93Karp_algorithm', 'http://en.wikipedia.org/wiki/Vertex_cover', 'https://en.wikipedia.org/wiki/K%C5%91nig%27s_theorem_(graph_theory)#Proof'] | 11 |
48,629,359 | <p>The code above uses Theorem 4; it seems to me that you want to use Theorem 5 instead (from the paper in the next paragraph).</p>
<p>Note, however, that if the number of identifiers is really the problem then the incremental approach below isn't going to work either---at some point the dictionaries are going to get too large.</p>
<p>Below you can find a proof-of-concept Python implementation that follows the description from <a href="https://arxiv.org/abs/1403.6348" rel="nofollow noreferrer">Updating Formulas and Algorithms for Computing Entropy and Gini Index from Time-Changing Data Streams</a>.</p>
<pre><code>import collections
import math
import random
def log2(p):
return math.log(p, 2) if p > 0 else 0
CountChange = collections.namedtuple('CountChange', ('label', 'change'))
class EntropyHolder:
def __init__(self):
self.counts_ = collections.defaultdict(int)
self.entropy_ = 0
self.sum_ = 0
def update(self, count_changes):
r = sum([change for _, change in count_changes])
residual = self._compute_residual(count_changes)
self.entropy_ = self.sum_ * (self.entropy_ - log2(self.sum_ / (self.sum_ + r))) / (self.sum_ + r) - residual
self._update_counts(count_changes)
return self.entropy_
def _compute_residual(self, count_changes):
r = sum([change for _, change in count_changes])
residual = 0
for label, change in count_changes:
p_new = (self.counts_[label] + change) / (self.sum_ + r)
p_old = self.counts_[label] / (self.sum_ + r)
residual += p_new * log2(p_new) - p_old * log2(p_old)
return residual
def _update_counts(self, count_changes):
for label, change in count_changes:
self.sum_ += change
self.counts_[label] += change
def entropy(self):
return self.entropy_
def naive_entropy(counts):
s = sum(counts)
return sum([-(r/s) * log2(r/s) for r in counts])
if __name__ == '__main__':
print(naive_entropy([1, 1]))
print(naive_entropy([1, 1, 1, 1]))
entropy = EntropyHolder()
freq = collections.defaultdict(int)
for _ in range(100):
index = random.randint(0, 5)
entropy.update([CountChange(index, 1)])
freq[index] += 1
print(naive_entropy(freq.values()))
print(entropy.entropy())
</code></pre> | 2018-02-05 18:40:49.897000+00:00 | 2018-02-05 19:08:07.177000+00:00 | 2018-02-05 19:08:07.177000+00:00 | null | 48,601,396 | <p>I have a set of data in which each record has an ID, a timestamp, and identifiers. I have to go through it, calculate the entropy and save some other links for the data. At each step more identifiers are added to the identifiers dictionary and I have to re-compute the entropy and append it. I have a really large amount of data and the program gets stuck due to the growing number of identifiers and their entropy calculation after each step. I read the following solution, but it is about data consisting of numbers.
<a href="https://stackoverflow.com/questions/17104673/incremental-entropy-computation">Incremental entropy computation</a></p>
<p>I have copied two functions from this page and the incremental calculation of entropy gives different values than the classical full entropy calculation at every step.
Here is the code I have:</p>
<pre><code>from math import log
# ---------------------------------------------------------------------#
# Functions copied from https://stackoverflow.com/questions/17104673/incremental-entropy-computation
# maps x to -x*log2(x) for x>0, and to 0 otherwise
h = lambda p: -p*log(p, 2) if p > 0 else 0
# entropy of union of two samples with entropies H1 and H2
def update(H1, S1, H2, S2):
S = S1+S2
return 1.0*H1*S1/S+h(1.0*S1/S)+1.0*H2*S2/S+h(1.0*S2/S)
# compute entropy using the classic equation
def entropy(L):
n = 1.0*sum(L)
return sum([h(x/n) for x in L])
# ---------------------------------------------------------------------#
# Below is the input data (Actually I read it from a csv file)
input_data = [["1","2008-01-06T02:13:38Z","foo,bar"], ["2","2008-01-06T02:12:13Z","bar,blup"], ["3","2008-01-06T02:13:55Z","foo,bar"],
["4","2008-01-06T02:12:28Z","foo,xy"], ["5","2008-01-06T02:12:44Z","foo,bar"], ["6","2008-01-06T02:13:00Z","foo,bar"],
["7","2008-01-06T02:13:00Z","x,y"]]
total_identifiers = {} # To store the occurrences of identifiers. Values shows the number of occurrences
all_entropies = [] # Classical way of calculating entropy at every step
updated_entropies = [] # Incremental way of calculating entropy at every step
for item in input_data:
temp = item[2].split(",")
identifiers_sum = sum(total_identifiers.values()) # Sum of all identifiers
old_entropy = 0 if all_entropies[-1:] == [] else all_entropies[-1] # Get previous entropy calculation
for identifier in temp:
S_new = len(temp) # sum of new samples
temp_dictionaty = {a:1 for a in temp} # Store current identifiers and their occurrence
if identifier not in total_identifiers:
total_identifiers[identifier] = 1
else:
total_identifiers[identifier] += 1
current_entropy = entropy(total_identifiers.values()) # Entropy for current set of identifiers
updated_entropy = update(old_entropy, identifiers_sum, current_entropy, S_new)
updated_entropies.append(updated_entropy)
entropy_value = entropy(total_identifiers.values()) # Classical entropy calculation for comparison. This step becomes too expensive with big data
all_entropies.append(entropy_value)
print(total_identifiers)
print('Sum of Total Identifiers: ', identifiers_sum) # Gives 12 while the sum is 14 ???
print("All Classical Entropies: ", all_entropies) # print for comparison
print("All Updated Entropies: ", updated_entropies)
</code></pre>
<p>The other issue is that when I print "Sum of total_identifiers", it gives <strong>12</strong> instead of <strong>14</strong>! (Due to the very large amount of data, I read the actual file line by line and write the results directly to disk, and I do not store anything in memory apart from the dictionary of identifiers.)</p> | 2018-02-03 19:55:02.973000+00:00 | 2018-02-08 01:28:49.310000+00:00 | 2018-02-05 03:08:22.703000+00:00 | python|python-3.x|math|python-3.5|entropy | ['https://arxiv.org/abs/1403.6348'] | 1
33,748,086 | <p>Grasp detection is an open problem, with new state-of-the-art solutions appearing every month or so. As far as I know, PCL has no ready-to-use solution for it, so you'd better search for publications with the newest solutions in this area.
Here are the articles I found useful:</p>
<ul>
<li><a href="http://wiki.ros.org/agile_grasp" rel="nofollow">agile_grasp</a> ROS node and "<a href="http://arxiv.org/pdf/1501.03100.pdf" rel="nofollow">Using Geometry to Detect Grasps</a>." </li>
<li><a href="http://arxiv.org/abs/1412.3128" rel="nofollow">Real-Time Grasp Detection Using Convolutional Neural Networks</a></li>
<li><a href="http://pr.cs.cornell.edu/deepgrasping/" rel="nofollow">Deep Learning for Detecting Robotic Grasps</a></li>
<li><a href="http://www.cs.columbia.edu/~cmatei/graspit/" rel="nofollow">GraspIt!</a></li>
</ul>
<p>Hope this helps.</p> | 2015-11-17 02:21:10.390000+00:00 | 2015-11-17 02:33:00.077000+00:00 | 2015-11-17 02:33:00.077000+00:00 | null | 33,721,942 | <p>I have point cloud data in which the detected objects are bounded by bounding boxes. We were planning to suck (grasp) each object from its top plane. How do I go about determining the coordinates for grasping on any surface using PCL? I am new to programming; any help extended here is appreciated.</p>
<p>Thanks!</p> | 2015-11-15 16:21:18.463000+00:00 | 2015-11-17 02:33:00.077000+00:00 | null | point-cloud-library|point-clouds | ['http://wiki.ros.org/agile_grasp', 'http://arxiv.org/pdf/1501.03100.pdf', 'http://arxiv.org/abs/1412.3128', 'http://pr.cs.cornell.edu/deepgrasping/', 'http://www.cs.columbia.edu/~cmatei/graspit/'] | 5 |
71,256,471 | <p>SpanBERT is a version of BERT pre-trained to produce useful embeddings on text spans. SpanBERT itself has nothing to do with coreference resolution. The original paper is <a href="https://arxiv.org/abs/1907.10529" rel="nofollow noreferrer">https://arxiv.org/abs/1907.10529</a>, and the original source code is <a href="https://github.com/facebookresearch/SpanBERT" rel="nofollow noreferrer">https://github.com/facebookresearch/SpanBERT</a>, though you might have an easier time using the huggingface version at <a href="https://huggingface.co/SpanBERT" rel="nofollow noreferrer">https://huggingface.co/SpanBERT</a>.</p>
<p>It is definitely possible to get the embeddings as output, along with the coreference predictions. I recommend cloning <a href="https://github.com/allenai/allennlp-models" rel="nofollow noreferrer">https://github.com/allenai/allennlp-models</a>, getting it to run in your environment, and then changing the code until it gives you the output you want.</p> | 2022-02-24 18:14:59.817000+00:00 | 2022-02-24 18:14:59.817000+00:00 | null | null | 70,987,577 | <p>I'm an Italian student approaching the NLP world.
First of all I'd like to thank you for the amazing work you've done with the paper "Higher-order Coreference Resolution with Coarse-to-fine Inference".
I am using the model provided by the allennlp library, and I have two questions for you.</p>
<ol>
<li><p>in <a href="https://demo.allennlp.org/coreference-resolution" rel="nofollow noreferrer">https://demo.allennlp.org/coreference-resolution</a> it is written that the embedding used is SpanBERT. Is this a BERT embedding trained regardless of the coreference task? I mean, could I possibly use this embedding just as a pretrained model on the english language to embed sentences? (e.g. like <a href="https://huggingface.co/facebook/bart-base" rel="nofollow noreferrer">https://huggingface.co/facebook/bart-base</a> )</p>
</li>
<li><p>is it possible to modify the code in order to return, along with the coreference prediction, also the aforementioned embeddings of each sentence?</p>
</li>
</ol>
<p>I really hope you can help me.
Meanwhile I thank you in advance for your great availability.
Sincerely,
Emanuele Gusso</p> | 2022-02-04 13:58:33.497000+00:00 | 2022-02-24 18:14:59.817000+00:00 | null | allennlp|coreference-resolution | ['https://arxiv.org/abs/1907.10529', 'https://github.com/facebookresearch/SpanBERT', 'https://huggingface.co/SpanBERT', 'https://github.com/allenai/allennlp-models'] | 4 |
45,480,956 | <p>It sounds like you are looking for <strong>Tree drawing algorithms</strong>. Let me give a short overview:</p>
<p>In graph drawing, you can basically describe a drawing by three groups of properties:</p>
<ul>
<li><p>Your <strong>drawing conventions</strong>, i.e. how the <em>nodes</em> and <em>edges</em> are represented in your visualization. Possible choices are:</p>
<ul>
<li>Represent your nodes as <em>points</em>, your edges as non-intersecting <em>curves</em> between them.<br>
This might be specialized by e.g. requiring the points to be placed on a regular grid, on concentric circles around the root or something similar. Also, you could require the edges to be straight lines, circular arcs, ...<br>
This is probably what you first think of when you talk about tree visualizations.
<a href="https://i.stack.imgur.com/GdLYF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GdLYF.png" alt="straight-line drawing, from "Trees with convex faces and optimal angles." J. Carlson and D. Eppstein. arXiv:cs.CG/0607113. 14th Int. Symp. Graph Drawing, Karlsruhe, Germany, 2006. Lecture Notes in Comp. Sci. 4372, 2007, pp. 77-88."></a></li>
<li>Represent your nodes as <em>rectangles</em>, your edges implicitly by nesting the rectangles. This is commonly known as a <em>tree map</em>
<a href="https://i.stack.imgur.com/jynEQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jynEQ.png" alt="tree map, from Wikipedia"></a></li>
</ul></li>
<li><p>Some <strong>aesthetics</strong> you want to optimize, i.e.</p>
<ul>
<li>Maximize angles between edges</li>
<li>Minimize the area the tree takes</li>
<li>Minimize the potential energy if you build a physical model based on the tree (see Wikipedia: <a href="https://en.wikipedia.org/wiki/Force-directed_graph_drawing" rel="nofollow noreferrer">Force-directed graph drawing</a>)<br>
...</li>
</ul></li>
<li><p>You might also place some <strong>constraints</strong> on your nodes or edges, e.g. fix certain nodes to a position, enforce certain edge lengths or similar ideas.</p></li>
</ul>
<p>Now that I've explained the basics, let me list some approaches:</p>
<ul>
<li>As you already showed, draw your tree in a circle-based fashion, either by placing the root at the center and the children in concentric circles around the root or aligning all subtrees of a node in circular fashion around it. These are implemented in the <a href="http://www.graphviz.org/" rel="nofollow noreferrer">GraphViz</a> tool as <code>circo</code> and <code>twopi</code></li>
<li>Use an <em>HV layout</em> similar to your tree view to mostly stick with horizontal and vertical edges, but allow for a bit more freedom in the subtree placement
<img src="https://i.stack.imgur.com/EgiI0.gif" alt="HV layout, from the goblin2 documentation"></li>
<li>Use any of the many straight-line drawing techniques that operate level-wise, i.e. the root is placed at the top, its children one level below, ...
<a href="https://i.stack.imgur.com/qtqlP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtqlP.png" alt="level-wise graph drawing"></a></li>
<li>Represent your tree as a <em>tree map</em> similar to the example above.</li>
</ul>
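<p>For instance, feeding a tree to GraphViz's radial engine is straightforward. Here is a minimal, hedged Python sketch (node names invented for illustration) that emits a DOT file you could then render with <code>twopi -Tpng tree.dot</code>:</p>

```python
def tree_to_dot(tree, root):
    """Emit a DOT digraph for `tree` (dict: node -> list of children).

    Rendering the result with GraphViz's `twopi` engine gives a radial
    layout with `root` at the center.
    """
    lines = ["digraph tree {", '  graph [root="%s"];' % root]
    stack = [root]
    while stack:
        node = stack.pop()
        for child in tree.get(node, []):
            lines.append('  "%s" -> "%s";' % (node, child))
            stack.append(child)
    lines.append("}")
    return "\n".join(lines)


# Invented example hierarchy: a company, its departments, some employees.
company = {"Company": ["Sales", "R&D"], "Sales": ["Alice", "Bob"]}
print(tree_to_dot(company, "Company"))
```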
<p>Most if not all of these techniques are described and visualized in the <a href="https://cs.brown.edu/~rt/gdhandbook/chapters/trees.pdf" rel="nofollow noreferrer">Tree-drawing chapter</a> of the <em>Handbook of Graph Drawing and Visualization</em> by Tamassia et al.</p> | 2017-08-03 10:01:48.063000+00:00 | 2017-08-03 10:07:23.627000+00:00 | 2017-08-03 10:07:23.627000+00:00 | null | 45,479,834 | <p>I am working on some research whose subject is to find a proper way to represent a hierarchical structure on a simple web page.
To be precise: it's a huge amount of data.</p>
<p>Let's have some contextualisation first:
Let's say you have a company composed of departments, each of which contains several employees...</p>
<p>What is commonly used : </p>
<p>Tree architecture :</p>
<p><a href="https://i.stack.imgur.com/gfTGj.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gfTGj.gif" alt="Tree architecture (example)"></a></p>
<p>But I don't like it because, with a huge amount of data, expanding and collapsing nodes while looking for several objects can get tricky...</p>
<p>2 others approaches that might bring some flexibility :</p>
<p>Circles Mode : </p>
<p><a href="https://i.stack.imgur.com/e2gIb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e2gIb.jpg" alt="Circle Mode (example)"></a></p>
<p>and Nodes Mode :</p>
<p><a href="https://i.stack.imgur.com/CoLRT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CoLRT.png" alt="Nodes example"></a></p>
<p>One functionality that I think can help the end user is an Elasticsearch bar, but the goal here is to give the user some flexibility to navigate through the structure.</p>
<p>I want to use JSF, but the technology doesn't really matter here; it's a conceptual phase.</p>
<p>Please share your opinions, ideas, leads...?</p> | 2017-08-03 09:14:15.103000+00:00 | 2017-08-03 10:07:23.627000+00:00 | null | user-interface|data-structures|conceptual | ['https://i.stack.imgur.com/GdLYF.png', 'https://i.stack.imgur.com/jynEQ.png', 'https://en.wikipedia.org/wiki/Force-directed_graph_drawing', 'http://www.graphviz.org/', 'https://i.stack.imgur.com/qtqlP.png', 'https://cs.brown.edu/~rt/gdhandbook/chapters/trees.pdf'] | 6
31,210,520 | <p>The machine learning methods currently used for modelling words, such as <code>word2vec</code> and <code>dl4j</code>, are based on the <a href="http://www.aclweb.org/aclwiki/index.php?title=Distributional_Hypothesis" rel="nofollow noreferrer">distributional hypothesis</a>. They train models of words and phrases based on their context. There are no ontological aspects in these word models. At best, a trained model based on these tools can say whether two words can appear in similar contexts. That is how their similarity measure works.</p>
<p>The Mikolov papers (<a href="http://arxiv.org/pdf/1301.3781.pdf" rel="nofollow noreferrer">a</a>, <a href="http://arxiv.org/pdf/1310.4546.pdf" rel="nofollow noreferrer">b</a> and <a href="http://research.microsoft.com/pubs/189726/rvecs.pdf" rel="nofollow noreferrer">c</a>), which suggest that these models can learn "linguistic regularities", do not include any ontological analysis; they only suggest that these models are capable of predicting "similarity between members of the word pairs". This kind of prediction doesn't help your task. These models are even incapable of distinguishing <em>similarity</em> from <em>relatedness</em> (e.g. see the <a href="http://www.cl.cam.ac.uk/%7Efh295/simlex.html" rel="nofollow noreferrer">SimLex test set</a>).</p>
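<p>To make the point concrete: the similarity these models give you is typically cosine similarity, which is symmetric by construction, so a similarity score alone cannot tell you which of two words is the more general one. A small sketch (the vectors below are made up for illustration):</p>

```python
import math


def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


movies = [0.9, 0.1, 0.3]      # invented embedding for "movies"
inception = [0.8, 0.2, 0.4]   # invented embedding for "Inception"

# The score is identical in both directions, so a "more/less general"
# (is-a) direction cannot be read off a similarity score by itself.
assert cosine(movies, inception) == cosine(inception, movies)
```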
<p>I would say that you need an ontological database to solve your problem. More specifically, for <code>String 1</code> and <code>String 2</code> in your examples:</p>
<pre><code>String 1 = "a"
String 2 = "b"
</code></pre>
<p>You are trying to check <a href="https://en.wikipedia.org/wiki/Entailment_(pragmatics)" rel="nofollow noreferrer">entailment</a> relations in sentences:</p>
<blockquote>
<p>(1) "<em>c</em> is <em>b</em>"</p>
<p>(2) "<em>c</em> is <em>a</em>"</p>
<p>(3) "<em>c</em> is related to <em>a</em>".</p>
</blockquote>
<p>Where:</p>
<blockquote>
<p>(1) entails (2)</p>
</blockquote>
<p>or</p>
<blockquote>
<p>(1) entails (3)</p>
</blockquote>
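<p>A hedged sketch of what the ontological-database route mentioned above could look like: store explicit <em>is-a</em> facts and answer such entailment queries by walking the chain transitively (the taxonomy entries below are invented for illustration):</p>

```python
# Toy is-a taxonomy (invented entries, for illustration only).
ISA = {
    "inception": "movie",
    "movie": "artwork",
    "service tax 2015": "service tax",
}


def is_a(x, y):
    """True if x is (transitively) a kind of y along the is-a chain."""
    while x in ISA:
        x = ISA[x]
        if x == y:
            return True
    return False


print(is_a("inception", "artwork"))  # True: via inception -> movie -> artwork
print(is_a("movie", "inception"))    # False: the relation is directed
```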
<p>In your first two examples, you can probably use semantic knowledge bases to solve the problem. But your third example will probably need syntactic parsing before the difference between two phrases can be understood. For example, consider these phrases:</p>
<blockquote>
<p>"men"</p>
<p>"all men"</p>
<p>"tall men"</p>
<p>"men in black"</p>
<p>"men in general"</p>
</blockquote>
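<p>For illustration, one crude way to compare the phrases above is by word count alone, a rough "longer usually means less general" heuristic (the special-word list here is ad hoc, and this is only a proxy, not a precise tool):</p>

```python
SPECIAL = {"all", "every", "general"}  # quantifier words that break the heuristic


def more_general(p1, p2):
    """Guess which phrase is more general purely by word count.

    Returns the guessed-more-general phrase, or None when the phrases tie or
    contain special quantifier words, where the heuristic is unreliable.
    """
    w1, w2 = p1.lower().split(), p2.lower().split()
    if (SPECIAL & set(w1)) or (SPECIAL & set(w2)):
        return None
    if len(w1) != len(w2):
        return p1 if len(w1) < len(w2) else p2
    return None


print(more_general("men", "tall men"))        # men
print(more_general("men", "men in general"))  # None: "general" disables the heuristic
```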
<p>Solving your problem fully needs logical understanding. However, you can analyse it based on the <em>economy of language</em>: adding more words to a phrase usually makes it <em>less general</em>, so longer phrases tend to be less general compared to shorter phrases. This doesn't give you a precise tool to solve the problem, but it can help to analyse phrases that contain no special words such as <code>all</code>, <code>general</code> or <code>every</code>.</p> | 2015-07-03 15:51:28.827000+00:00 | 2015-07-03 16:04:00.333000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 30,796,385 | <p>I have this problem of matching two strings for 'more general', 'less general', 'same meaning', 'opposite meaning' etc.</p>
<p>The strings can be from any domain. Assume that the strings can be from people's emails. </p>
<p>To give an example, </p>
<pre><code>String 1 = "movies"
String 2 = "Inception"
</code></pre>
<p>Here I should know that Inception is less general than movies (sort of is-a relationship)</p>
<pre><code>String 1 = "Inception"
String 2 = "Christopher Nolan"
</code></pre>
<p>Here I should know that Inception is less general than Christopher Nolan </p>
<pre><code>String 1 = "service tax"
String 2 = "service tax 2015"
</code></pre>
<p>At a glance it appears to me that S-match will do the job. But I am not sure if S-match can be made to work on knowledge bases other than WordNet or GeoWordNet (as mentioned in their page). </p>
<p>If I use <code>word2vec</code> or <code>dl4j</code>, I guess it can give me the similarity scores. But does it also support telling whether a string is <code>more general</code> or <code>less general</code> than the other?</p>
<p>But I do see word2vec can be based on a training set or large corpus like wikipedia etc.</p>
<p>Can some one throw light on the way to go forward?</p> | 2015-06-12 06:13:56.890000+00:00 | 2015-07-03 16:04:00.333000+00:00 | 2015-06-12 06:37:47.457000+00:00 | semantic-analysis|word2vec | ['http://www.aclweb.org/aclwiki/index.php?title=Distributional_Hypothesis', 'http://arxiv.org/pdf/1301.3781.pdf', 'http://arxiv.org/pdf/1310.4546.pdf', 'http://research.microsoft.com/pubs/189726/rvecs.pdf', 'http://www.cl.cam.ac.uk/%7Efh295/simlex.html', 'https://en.wikipedia.org/wiki/Entailment_(pragmatics)'] | 6 |
70,825,170 | <p>Using Rocker images to install <code>rstan</code> is indeed a good idea, and I demonstrated it via (essentially) a single command in <a href="http://dirk.eddelbuettel.com/blog/2017/12/22#014_finding_binary_deb_packages" rel="nofollow noreferrer">this older post</a> on my blog.</p>
<p>Your best bet, really, is to rely on Rocker containers already set up for the c2d4u.team repo. And, as @Oliver noted in a comment to your question, those are <em>not</em> the r-ver images which (for their own valid reason) follow a different internal model (which makes them less ideal for the binaries we show in use here).</p>
<h4>First example: rocker/r-ubuntu:20.04</h4>
<p>This "simply" relies on the fact that <code>rstan</code> exists here as a binary package <code>r-cran-rstan</code> -- so we install it in one command after updating the <code>apt</code> index.</p>
<pre><code>edd@rob:~$ docker run --rm -ti rocker/r-ubuntu:20.04 bash
root@11e89aea64f6:/# apt update -qq
85 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@11e89aea64f6:/# apt install r-cran-rstan
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libpng-tools libtbb2 pandoc pandoc-data r-cran-backports r-cran-bh r-cran-brio r-cran-callr
r-cran-checkmate r-cran-cli r-cran-colorspace r-cran-cpp11 r-cran-crayon r-cran-data.table
r-cran-desc r-cran-diffobj r-cran-digest r-cran-ellipsis r-cran-evaluate r-cran-fansi r-cran-farver
r-cran-fastmatch r-cran-ggplot2 r-cran-glue r-cran-gridextra r-cran-gtable r-cran-inline
r-cran-isoband r-cran-jsonlite r-cran-labeling r-cran-lifecycle r-cran-loo r-cran-magrittr
r-cran-matrixstats r-cran-munsell r-cran-pillar r-cran-pkgbuild r-cran-pkgconfig r-cran-pkgload
r-cran-praise r-cran-prettyunits r-cran-processx r-cran-ps r-cran-r6 r-cran-rcolorbrewer r-cran-rcpp
r-cran-rcppeigen r-cran-rcppparallel r-cran-rematch2 r-cran-rlang r-cran-rprojroot r-cran-rstudioapi
r-cran-scales r-cran-stanheaders r-cran-svglite r-cran-systemfonts r-cran-testthat r-cran-tibble
r-cran-utf8 r-cran-vctrs r-cran-viridislite r-cran-waldo r-cran-withr
Suggested packages:
texlive-latex-recommended texlive-xetex texlive-luatex pandoc-citeproc texlive-latex-extra context
wkhtmltopdf librsvg2-bin groff ghc nodejs php python ruby libjs-mathjax node-katex r-cran-devtools
r-cran-knitr r-cran-rmarkdown r-cran-tinytest r-cran-covr
The following NEW packages will be installed:
libpng-tools libtbb2 pandoc pandoc-data r-cran-backports r-cran-bh r-cran-brio r-cran-callr
r-cran-checkmate r-cran-cli r-cran-colorspace r-cran-cpp11 r-cran-crayon r-cran-data.table
r-cran-desc r-cran-diffobj r-cran-digest r-cran-ellipsis r-cran-evaluate r-cran-fansi r-cran-farver
r-cran-fastmatch r-cran-ggplot2 r-cran-glue r-cran-gridextra r-cran-gtable r-cran-inline
r-cran-isoband r-cran-jsonlite r-cran-labeling r-cran-lifecycle r-cran-loo r-cran-magrittr
r-cran-matrixstats r-cran-munsell r-cran-pillar r-cran-pkgbuild r-cran-pkgconfig r-cran-pkgload
r-cran-praise r-cran-prettyunits r-cran-processx r-cran-ps r-cran-r6 r-cran-rcolorbrewer r-cran-rcpp
r-cran-rcppeigen r-cran-rcppparallel r-cran-rematch2 r-cran-rlang r-cran-rprojroot r-cran-rstan
r-cran-rstudioapi r-cran-scales r-cran-stanheaders r-cran-svglite r-cran-systemfonts r-cran-testthat
r-cran-tibble r-cran-utf8 r-cran-vctrs r-cran-viridislite r-cran-waldo r-cran-withr
0 upgraded, 64 newly installed, 0 to remove and 85 not upgraded.
Need to get 60.7 MB of archives.
After this operation, 345 MB of additional disk space will be used.
Do you want to continue? [Y/n]
</code></pre>
<h4>Second example: rocker/r-bspm:20.04</h4>
<p>It gets even nicer thanks to <a href="https://cloud.r-project.org/package=bspm" rel="nofollow noreferrer">bspm</a> and its integration (Inaki and I have an <a href="https://arxiv.org/abs/2103.08069" rel="nofollow noreferrer">arXiv paper on this</a>) because we can use <code>install.packages()</code> (via a script) to fetch <code>r-cran-rstan</code> for us:</p>
<pre><code>edd@rob:~$ docker run --rm -ti rocker/r-bspm:20.04 bash
root@ef07add5e9e7:/# apt update -qq
28 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@ef07add5e9e7:/# install.r rstan
(loaded the methods namespace)
Loading required package: utils Tracing function "install.packages" in package "utils"
Install system packages as root...
Reading package lists... Done
Building dependency tree
Reading state information... Done
Hit http://archive.ubuntu.com/ubuntu focal InRelease
Hit http://security.ubuntu.com/ubuntu focal-security InRelease
Hit http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu focal InRelease
Hit http://archive.ubuntu.com/ubuntu focal-updates InRelease
Hit http://archive.ubuntu.com/ubuntu focal-backports InRelease
Hit http://ppa.launchpad.net/edd/r-4.0/ubuntu focal InRelease
Hit http://ppa.launchpad.net/marutter/rrutter4.0/ubuntu focal InRelease
Fetched 0 B in 0s (0 B/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
Get:1 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu focal/main amd64 r-cran-backports amd64 1.4.1-
1cran1.2004.0 [94.9 kB]
Get:2 http://archive.ubuntu.com/ubuntu focal/main amd64 libpng-tools amd64 1.6.37-2 [26.1 kB]
Get:3 http://archive.ubuntu.com/ubuntu focal/universe amd64 pandoc-data all 2.5-3build2 [76.0 kB]
Get:4 http://archive.ubuntu.com/ubuntu focal/universe amd64 pandoc amd64 2.5-3build2 [15.4 MB]
[... stuff omitted here for brevity ...]
Setting up r-cran-stanheaders (2.21.0-7-1cran1.2004.0) ...
Setting up r-cran-callr (3.7.0-1cran1.2004.0) ...
Setting up r-cran-tibble (3.1.6-1cran1.2004.0) ...
Setting up r-cran-pkgbuild (1.3.1-1cran1.2004.0) ...
Setting up r-cran-ggplot2 (3.3.5-1cran1.2004.0) ...
Setting up r-cran-rstan (2.21.3-1cran1.2004.0) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
root@d3045995c945:/# R
R version 4.1.2 (2021-11-01) -- "Bird Hippie"
Copyright (C) 2021 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
Natural language support but running in an English locale
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
Loading required package: utils
Tracing function "install.packages" in package "utils"
> library(rstan)
Loading required package: StanHeaders
Loading required package: ggplot2
rstan (Version 2.21.3, GitRev: 2e1f913d3ca3)
For execution on a local, multicore CPU with excess RAM we recommend calling
options(mc.cores = parallel::detectCores()).
To avoid recompilation of unchanged Stan programs, we recommend calling
rstan_options(auto_write = TRUE)
>
</code></pre>
<p>Note that we use <code>install.r</code> here -- a wrapper for <code>install.packages()</code> from <a href="https://cloud.r-project.org/package=littler" rel="nofollow noreferrer">littler</a> -- to point at the R package yet we get the binary. And <em>all</em> its depends. In one command. I think I even posted an animated gif showing just this once ...</p>
<p>The key insight, though, is that none of this is limited to Docker. I am using the <code>bspm</code> approach on one small laptop where I don't want to compile from sources. It works flawlessly, and has for months.</p>
<p>Let us know here or on the <code>r-sig-debian</code> list if you have questions.</p>
<p><em>Edit:</em> The animated gif I just created is too large to post it here (2mb limit) but fits for Twitter so here it is: <a href="https://twitter.com/eddelbuettel/status/1485318710818754567" rel="nofollow noreferrer">https://twitter.com/eddelbuettel/status/1485318710818754567</a></p> | 2022-01-23 18:13:09.367000+00:00 | 2022-01-23 21:11:45.803000+00:00 | 2022-01-23 21:11:45.803000+00:00 | null | 70,824,760 | <p>I have the following image:</p>
<pre><code>FROM rocker/r-ver:4.1.2
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
apt-utils \
ed \
libnlopt-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/
RUN apt-get update
RUN apt install software-properties-common -y
# https://github.com/stan-dev/rstan/wiki/Configuring-C-Toolchain-for-Linux
RUN add-apt-repository -y ppa:marutter/rrutter4.0
RUN add-apt-repository -y ppa:c2d4u.team/c2d4u4.0+
RUN apt-get update
RUN apt install -y r-cran-rstan
# https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started#installation-of-rstan
RUN Rscript -e 'Sys.setenv(DOWNLOAD_STATIC_LIBV8 = 1); install.packages("rstan", repos = "https://cloud.r-project.org/", dependencies = TRUE)'
</code></pre>
<p>After building the image with <code>docker build -f Dockerfile -t docker_r_stan_test .</code> (no errors output), then running: <code>example(stan_model, package = "rstan", run.dontrun = TRUE)</code> I'm expecting to see something similar to what's at: <a href="https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started#verify-installation" rel="nofollow noreferrer">https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started#verify-installation</a></p>
<p>I'm instead getting the output:</p>
<pre><code>Error in find.package(package, lib.loc, verbose = verbose) :
there is no package called ‘rstan’
</code></pre>
<p>I have a similar error trying to use <code>library(rstan)</code>:</p>
<pre><code>> library("rstan")
Error in library("rstan") : there is no package called ‘rstan’
</code></pre>
<p>I don't understand why <code>rstan</code> isn't installed properly within this image, as it seems I've followed the steps closely</p> | 2022-01-23 17:32:57.940000+00:00 | 2022-01-23 21:11:45.803000+00:00 | null | r|docker|ubuntu|rstan | ['http://dirk.eddelbuettel.com/blog/2017/12/22#014_finding_binary_deb_packages', 'https://cloud.r-project.org/package=bspm', 'https://arxiv.org/abs/2103.08069', 'https://cloud.r-project.org/package=littler', 'https://twitter.com/eddelbuettel/status/1485318710818754567'] | 5 |
458,603 | <p>It is hard to understand what you mean here. Shor's algorithm is a quantum algorithm <a href="http://www.scottaaronson.com/blog/?p=208" rel="nofollow noreferrer">designed to factor integers</a>. With some tweaking of the <a href="http://en.wikipedia.org/wiki/Quantum_Fourier_transform" rel="nofollow noreferrer">main idea</a> you can make it <a href="http://arxiv.org/abs/quant-ph/0301141" rel="nofollow noreferrer">break other crypto-systems</a>; however, how are you planning to build a crypto-system? On the other hand, <a href="http://en.wikipedia.org/wiki/Quantum_cryptography" rel="nofollow noreferrer">quantum crypto</a> sits on much more solid ground than quantum computation (i.e. we may actually see quantum crypto-systems in our lifetime).</p> | 2009-01-19 18:23:30.063000+00:00 | 2009-01-19 18:23:30.063000+00:00 | null | null | 458,481 | <p>Can <a href="http://en.wikipedia.org/wiki/Shor%27s_algorithm" rel="nofollow noreferrer">Shor's algorithm</a> be used for encryption, more specifically WEP encryption?</p> | 2009-01-19 17:47:43.800000+00:00 | 2009-01-19 19:16:25.073000+00:00 | 2009-01-19 19:16:25.090000+00:00 | algorithm|encryption | ['http://www.scottaaronson.com/blog/?p=208', 'http://en.wikipedia.org/wiki/Quantum_Fourier_transform', 'http://arxiv.org/abs/quant-ph/0301141', 'http://en.wikipedia.org/wiki/Quantum_cryptography'] | 4
32,542,105 | <p>I think you will need to read a paper about the training process. Basically the values of the vectors are the node values of the trained neural network.</p>
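<p>To make that concrete: after training, the vector for <code>John</code> is simply the row of the network's input weight matrix at John's vocabulary index. A toy sketch (the matrix here is left random, standing in for trained weights; names and sizes are illustrative):</p>

```python
import numpy as np

# Toy vocabulary built from the example corpus; indices are arbitrary.
vocab = {"hello": 0, "my": 1, "name": 2, "is": 3, "john": 4, "google": 5}

embedding_dim = 4
rng = np.random.default_rng(0)
# In real word2vec these weights start random and are nudged by training
# (predicting context words); here they stay random, since the point is
# only that a "word vector" is a row of this matrix.
W_in = rng.standard_normal((len(vocab), embedding_dim))

def word_vector(word):
    return W_in[vocab[word]]

print(word_vector("john").shape)  # (4,)
```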
<p>I tried to read <a href="http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf" rel="nofollow noreferrer">the original paper</a>, but I think the paper <a href="https://arxiv.org/abs/1411.2738" rel="nofollow noreferrer">"word2vec Parameter Learning Explained"</a> by Xin Rong has a more detailed explanation.</p> | 2015-09-12 18:09:11.920000+00:00 | 2017-07-10 10:02:37.733000+00:00 | 2017-07-10 10:02:37.733000+00:00 | null | 32,458,269 | <p>I have been reading a lot of papers on NLP and came across many models. I understood the SVD model and representing it in 2-D, but I still did not get how we make a word vector by giving a corpus to the word2vec/skip-gram model. Is it also a co-occurrence matrix representation for each word? Can you explain it by taking an example corpus:</p>
<pre><code>Hello, my name is John.
John works in Google.
Google has the best search engine.
</code></pre>
<p>Basically, how does skip gram convert <code>John</code> to a vector?</p> | 2015-09-08 12:46:46.427000+00:00 | 2019-06-01 00:54:09.700000+00:00 | null | nlp|word2vec | ['http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf', 'https://arxiv.org/abs/1411.2738'] | 2 |
58,654,134 | <p>In the paper that you mentioned, 49 networks are trained for 49 games: "A different network was trained on each game: the same network architecture, learning algorithm and hyperparameter settings (see Extended Data Table 1) were used across all games, showing that our approach is robust enough to work on a variety of games while incorporating only minimal prior knowledge", which is quoted from the paper.</p>
<p>There are algorithms which train only one network for all 49 games, e.g. <a href="https://arxiv.org/pdf/1809.04474.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1809.04474.pdf</a></p> | 2019-11-01 04:32:42.700000+00:00 | 2019-11-01 04:32:42.700000+00:00 | null | null | 58,623,123 | <p>I would like to have a clarification about the article "Human level control through deep reinforcement learning" in Nature 2015. When I read it, I understand that they use a DQN with the same algorithm, network architecture and hyperparameters. Great! But they don't specify if they train each game from scratch, so that we obtain one neural network per game (meaning 49 neural networks for the 49 games), or if they train all the games with a single neural network (meaning only one neural network can play all 49 games).</p>
<p>Is there someone who knows what the correct answer is? Because it is not the same thing at all! :)</p>
<p>Thanks,</p> | 2019-10-30 10:18:39.960000+00:00 | 2019-11-01 04:32:42.700000+00:00 | null | reinforcement-learning | ['https://arxiv.org/pdf/1809.04474.pdf'] | 1 |
53,968,277 | <p>Not a direct answer, but when I need to “hard-code” a name in a GHC plugin, I don’t use TH. Instead, I use <code>findImportedModule</code> and <code>lookupOrig</code> to look it up, e.g. as in </p>
<pre><code>lookupJDITyCon :: TcPluginM Class
lookupJDITyCon = do
Found _ md <- findImportedModule jdiModule Nothing
jdiTcNm <- lookupOrig md (mkTcOcc "JustDoIt")
tcLookupClass jdiTcNm
where
jdiModule = mkModuleName "GHC.JustDoIt"
</code></pre>
<p>from the code of my <a href="https://github.com/nomeata/ghc-justdoit/blob/master/GHC/JustDoIt/Plugin.hs" rel="nofollow noreferrer"><code>ghc-justdoit</code></a> plugin.</p>
<p>I use Template Haskell names when the <em>user</em> needs to mention names, e.g. in splices or annotations, that I want to pick up in the plugin. This is what I do in <a href="https://github.com/nomeata/inspection-testing/blob/e11bf6ddf615bb62aa452ce761ded47599163bd1/src/Test/Inspection/Plugin.hs#L234" rel="nofollow noreferrer"><code>inspection-testing</code></a>. I discuss this a bit in the <a href="https://arxiv.org/abs/1803.07130" rel="nofollow noreferrer">appendix of the Inspection Testing paper</a>.</p> | 2018-12-29 09:19:33.913000+00:00 | 2018-12-29 09:19:33.913000+00:00 | null | null | 53,964,198 | <p>[Update: Turns out this was a GHC bug and it is now fixed, slated for the 8.6.4 release: <a href="https://ghc.haskell.org/trac/ghc/ticket/16104#comment:8" rel="nofollow noreferrer">https://ghc.haskell.org/trac/ghc/ticket/16104#comment:8</a> ]</p>
<p>I'm trying to port a core plugin to GHC 8.6.3, which was last working fine with GHC 8.4 series. Unfortunately, I'm running into issues. Wondering if pluging programming requirements have changed, or is this a regression in GHC itself. I boiled it down to the following example and would like some guidance on how to make this work:</p>
<p>I have the following in file <code>TestPlugin.hs</code>:</p>
<pre><code>{-# LANGUAGE TemplateHaskell #-}
module TestPlugin (plugin) where
import GhcPlugins
import Data.Bits
plugin :: Plugin
plugin = defaultPlugin {installCoreToDos = install}
where install _ todos = return (test : todos)
test = CoreDoPluginPass "Test" check
check :: ModGuts -> CoreM ModGuts
check m = do mbN <- thNameToGhcName 'complement
case mbN of
Just _ -> liftIO $ putStrLn "Found complement!"
Nothing -> error "Failed to locate complement"
return m
</code></pre>
<p>And I have a very simple <code>Test.hs</code> file:</p>
<pre><code>{-# OPTIONS_GHC -fplugin TestPlugin #-}
main :: IO ()
main = return ()
</code></pre>
<p>With GHC-8.4.2, I have:</p>
<pre><code>$ ghc-8.4.2 --make -package ghc -c TestPlugin.hs
[1 of 1] Compiling TestPlugin ( TestPlugin.hs, TestPlugin.o )
$ ghc-8.4.2 -package ghc -c Test.hs
Found complement!
</code></pre>
<p>But with GHC 8.6.3, I get:</p>
<pre><code>$ ghc-8.6.3 --make -package ghc -c TestPlugin.hs
[1 of 1] Compiling TestPlugin ( TestPlugin.hs, TestPlugin.o )
$ ghc-8.6.3 -package ghc -c Test.hs
ghc: panic! (the 'impossible' happened)
(GHC version 8.6.3 for x86_64-apple-darwin):
Failed to locate complement
</code></pre>
<p>The problem goes away if I change <code>Test.hs</code> to:</p>
<pre><code>{-# OPTIONS_GHC -fplugin TestPlugin #-}
import Data.Bits -- Should not be required in the client code!
main :: IO ()
main = return ()
</code></pre>
<p>That is, if I explicitly import <code>Data.Bits</code>. But this is quite undesirable, since <code>Test.hs</code> is client code and the users of the plugin have no reason to import all the modules the plugin might need for its own purposes. (In practice, this would require clients to import a whole bunch of irrelevant modules; quite unworkable and not maintainable.)</p>
<p>I've found the following stack-overflow ticket, which seems to suffer from a similar problem: <a href="https://stackoverflow.com/questions/50236926/how-to-replicate-the-behaviour-of-name-in-a-th-splice">How to replicate the behaviour of 'name in a TH splice</a> However, the answer suggested there is just not OK in this case (and perhaps wasn't really OK there either) since it would require unnecessary changes to client code in my case that is just not reasonable to expect. (Perhaps @JoachimBretner has an idea?) I've also filed this as a GHC ticket (<a href="https://ghc.haskell.org/trac/ghc/ticket/16104#ticket" rel="nofollow noreferrer">https://ghc.haskell.org/trac/ghc/ticket/16104#ticket</a>), but feedback from the stack-overflow community is greatly appreciated.</p>
<p>Should I be coding my plugin differently? Or is this a GHC regression?</p> | 2018-12-28 20:55:09.593000+00:00 | 2019-01-19 20:16:14.600000+00:00 | 2019-01-19 20:16:14.600000+00:00 | haskell | ['https://github.com/nomeata/ghc-justdoit/blob/master/GHC/JustDoIt/Plugin.hs', 'https://github.com/nomeata/inspection-testing/blob/e11bf6ddf615bb62aa452ce761ded47599163bd1/src/Test/Inspection/Plugin.hs#L234', 'https://arxiv.org/abs/1803.07130'] | 3 |
64,749,055 | <p>Attention can be interpreted as a soft vector retrieval.</p>
<ul>
<li><p>You have some <strong>query vectors</strong>. For each query, you want to retrieve some</p>
</li>
<li><p><strong>values</strong>, such that you compute a weighted sum of them,</p>
</li>
<li><p>where the weights are obtained by comparing a query with <strong>keys</strong> (the number of keys must be the same as the number of values, and often they are the same vectors).</p>
</li>
</ul>
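<p>A minimal NumPy sketch of this soft retrieval (all shapes and values here are illustrative):</p>

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d = 4                                 # vector dimension (illustrative)
query = rng.standard_normal(d)        # e.g. a decoder state
keys = rng.standard_normal((5, d))    # compared against the query
values = rng.standard_normal((5, d))  # what we actually retrieve

weights = softmax(keys @ query)       # probability distribution over values
retrieved = weights @ values          # weighted sum of the values

print(round(weights.sum(), 6), retrieved.shape)  # 1.0 (4,)
```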
<p>In sequence-to-sequence models, the query is the decoder state, and the keys and values are the encoder states.</p>
<p>In a classification task, you do not have such an explicit query. The easiest way to get around this is to train a "universal" query that is used to collect relevant information from the hidden states (similar to what was originally described <a href="https://arxiv.org/abs/1703.03130" rel="nofollow noreferrer">in this paper</a>).</p>
<p>If you approach the problem as sequence labeling, assigning a label not to an entire sequence, but to individual time steps, you might want to use a self-attentive layer instead.</p> | 2020-11-09 09:34:14.523000+00:00 | 2020-11-09 09:34:14.523000+00:00 | null | null | 64,747,663 | <p>I am training a model for speech emotion recognition.</p>
<p>I wish to apply an attention layer to the model. The instruction <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Attention" rel="nofollow noreferrer">page</a> is hard to understand.</p>
<pre><code>def bi_duo_LSTM_model(X_train, y_train, X_test,y_test,num_classes,batch_size=68,units=128, learning_rate=0.005, epochs=20, dropout=0.2, recurrent_dropout=0.2):
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if (logs.get('acc') > 0.95):
print("\nReached 99% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2])))
model.add(tf.keras.layers.Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout,return_sequences=True)))
model.add(tf.keras.layers.Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout)))
# model.add(tf.keras.layers.Bidirectional(LSTM(32)))
model.add(Dense(num_classes, activation='softmax'))
adamopt = tf.keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-8)
RMSopt = tf.keras.optimizers.RMSprop(lr=learning_rate, rho=0.9, epsilon=1e-6)
SGDopt = tf.keras.optimizers.SGD(lr=learning_rate, momentum=0.9, decay=0.1, nesterov=False)
model.compile(loss='binary_crossentropy',
optimizer=adamopt,
metrics=['accuracy'])
history = model.fit(X_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(X_test, y_test),
verbose=1,
callbacks=[callbacks])
score, acc = model.evaluate(X_test, y_test,
batch_size=batch_size)
yhat = model.predict(X_test)
return history, yhat
</code></pre>
<p>How can I apply it to my model?</p>
<p>And are <code>use_scale</code>, <code>causal</code> and <code>dropout</code> all the arguments?</p>
<p>If there is a <code>dropout</code> in the attention layer, how do we deal with it, since we already have <code>dropout</code> in the LSTM layer?</p> | 2020-11-09 07:53:39.207000+00:00 | 2020-11-09 09:34:14.523000+00:00 | null | python|tensorflow|keras|attention-model | ['https://arxiv.org/abs/1703.03130'] | 1
52,053,099 | <p>Take a look at this <a href="https://www.kaggle.com/tigurius/recuplots-and-cnns-for-time-series-classification" rel="nofollow noreferrer">Kaggle challenge</a>. I think you also want to implement parts of <a href="https://arxiv.org/pdf/1710.00886.pdf" rel="nofollow noreferrer">this paper</a>, like they do.</p>
<p>Maybe you can also use the function that they adopted from another SO question:</p>
<pre><code># modified from https://stackoverflow.com/questions/33650371/recurrence-plot-in-python
import numpy as np
from sklearn.metrics.pairwise import pairwise_distances

def recurrence_plot(s, eps=None, steps=None):
    # s: array of shape (n_points, n_features), e.g. ts.reshape(-1, 1)
    if eps is None: eps = 0.1
    if steps is None: steps = 10
    d = pairwise_distances(s)
    d = np.floor(d / eps)
    d[d > steps] = steps
    return d
</code></pre> | 2018-08-28 08:13:26.333000+00:00 | 2018-08-28 08:42:09.290000+00:00 | 2018-08-28 08:42:09.290000+00:00 | null | 50,526,033 | <p>I have a numpy array X of time series. Something like this:</p>
<pre><code>[[0.05, -0.021, 0.003, 0.025, -0.001, -0.023, 0.095, 0.001, -0.018]
[0.015, 0.011, -0.032, -0.044, -0.002, 0.032, -0.051, -0.03, -0.020]
[0.04, 0.081, -0.02, 0.014, 0.063, -0.077, 0.059, 0.031, 0.025]]
</code></pre>
<p>I can plot this with</p>
<pre><code>fig, axes = plt.subplots(3, 1)
for i in range(3):
axes[i].plot(X[i])
plt.show()
</code></pre>
<p>Then something like the following appears (the plots do <strong>not</strong> show the demo values I wrote above but other values with similar structure). So each row in X is one timeseries.
<a href="https://i.stack.imgur.com/lKykf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lKykf.png" alt="enter image description here"></a></p>
<p>But I want to have a numpy array which describes each timeseries as a grayscale image (because I want to use it for a CNN later). So I think what I need should be something like this:</p>
<pre><code>[[[0, 0, 0, 0, 0, 1]
[0, 0, 0, 0, 1, 0]
[0, 0, 0, 0, 0, 1]
[0, 0, 1, 0, 0, 0]]
[[0, 0, 1, 0, 0, 0]
[0, 0, 0, 1, 0, 0]
[0, 1, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0]]...]
</code></pre>
<p>How can each timeseries be converted (efficiently, if possible) into a matrix which describes the timeseries as an image? So each row in the old array (e.g. this:</p>
<p><code>[0.05, -0.021, 0.003, 0.025, -0.001, -0.023, 0.095, 0.001, -0.018]</code>) </p>
<p>should be converted to a 2D matrix (e.g. something like this: </p>
<p><code>[[0, 0, 0, 0, 0, 1]
[0, 0, 0, 0, 1, 0]
[0, 0, 0, 0, 0, 1]
[0, 0, 1, 0, 0, 0]]</code></p>
<p><strong>Alternative description:</strong>
Every row in X describes one timeseries. For each row in X I need a 2D matrix describing the timeseries as an image (like the plot shown above)</p>
<p><strong>"Solution"</strong>: It seems there is no nice solution to do this, so I used this workaround:</p>
<pre><code>fig = plt.figure()
fig.add_subplot(111)
fig.tight_layout(pad=0)
plt.axis('off')
plt.plot(X[0], linewidth=3)
fig.canvas.draw()
data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)  # np.fromstring is deprecated
data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
</code></pre>
<p>The <code>data</code> contains the 2D matrix now and could be plotted with <code>plt.imshow(data)</code> again with some loss of quality.</p> | 2018-05-25 09:36:02.233000+00:00 | 2018-08-28 08:42:09.290000+00:00 | 2018-05-25 11:08:00.193000+00:00 | python|numpy|matplotlib|numpy-ndarray | ['https://www.kaggle.com/tigurius/recuplots-and-cnns-for-time-series-classification', 'https://arxiv.org/pdf/1710.00886.pdf'] | 2 |
59,588,429 | <p>Depending on the case, you might be interested in using one of the following methods:</p>
<p><strong>Method 0: Use an API or library</strong></p>
<ul>
<li><a href="https://pypi.org/project/cld2-cffi/" rel="nofollow noreferrer">cld2-cffi</a></li>
<li><a href="https://cloud.google.com/translate/docs/basic/detecting-language" rel="nofollow noreferrer">Google Cloud Translation - Basic (v2)</a></li>
<li><a href="https://textblob.readthedocs.io/en/dev/" rel="nofollow noreferrer">TextBlob</a></li>
<li><a href="https://github.com/Mimino666/langdetect" rel="nofollow noreferrer">langdetect</a></li>
<li>etc.</li>
</ul>
<p>Usually, there are a few problems with these libraries: some are not accurate for small texts, miss some languages, are slow, require an internet connection, or are non-free... But generally speaking, they will suit most needs.</p>
<p><strong>Method 1: Language models</strong></p>
<p>A language model gives us the probability of a sequence of words. This is important because it allows us to robustly detect the language of a text, even when the text contains words in other languages (e.g.: <em>"'Hola' means 'hello' in spanish"</em>).</p>
<p>You can use N language models (one per language) to score your text. The detected language will be the language of the model that gave you the highest score.</p>
<p>If you want to build a simple language model for this, I'd go for 1-grams. To do this, you only need to count the number of times each word from a big text (e.g. Wikipedia Corpus in "X" language) has appeared.</p>
<p>Then, the probability of a word will be its frequency divided by the total number of words analyzed (sum of all frequencies).</p>
<pre><code>the 23135851162
of 13151942776
and 12997637966
to 12136980858
a 9081174698
in 8469404971
for 5933321709
...
=> P("'Hola' means 'hello' in spanish") = P("hola") * P("means") * P("hello") * P("in") * P("spanish")
</code></pre>
<p>If the text to detect is quite big, I recommend sampling N random words and then using the sum of logarithms instead of multiplications to avoid floating-point precision problems.</p>
<pre><code>P(s) = 0.03 * 0.01 * 0.014 = 0.0000042
P(s) = log10(0.03) + log10(0.01) + log10(0.014) = -5.376
</code></pre>
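<p>Putting Method 1 together, a toy scorer might look like this (the frequency tables are invented for illustration; real ones would come from a large corpus such as a Wikipedia dump):</p>

```python
import math

# Tiny invented unigram counts; real tables would come from a large corpus.
counts = {
    "english": {"hello": 30, "means": 10, "in": 90, "the": 120},
    "spanish": {"hola": 30, "de": 100, "la": 90, "casa": 20},
}

def log_score(text, lang, smoothing=1e-9):
    table = counts[lang]
    total = sum(table.values())
    # sum of log-probabilities instead of a raw product, for stability
    return sum(math.log10(table.get(w, 0) / total + smoothing)
               for w in text.lower().split())

def detect(text):
    return max(counts, key=lambda lang: log_score(text, lang))

print(detect("hello the means"))  # english
print(detect("hola la casa"))     # spanish
```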
<p><strong>Method 2: Intersecting sets</strong></p>
<p>An even simpler approach is to prepare N sets (one per language) with the top M most frequent words. Then intersect your text with each set. The set with the highest number of intersections will be your detected language.</p>
<pre><code>spanish_set = {"de", "hola", "la", "casa",...}
english_set = {"of", "hello", "the", "house",...}
czech_set = {"z", "ahoj", "závěrky", "dům",...}
...
text_set = {"hola", "means", "hello", "in", "spanish"}
spanish_votes = text_set.intersection(spanish_set) # 1
english_votes = text_set.intersection(english_set) # 4
czech_votes = text_set.intersection(czech_set) # 0
...
</code></pre>
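<p>To turn the votes into a decision, pick the language with the largest intersection. A runnable version of the sketch above (the word sets here are tiny and purely illustrative):</p>

```python
def detect_language(text, word_sets):
    words = set(text.lower().split())
    # the language whose frequent-word set overlaps the text most wins
    return max(word_sets, key=lambda lang: len(words & word_sets[lang]))

word_sets = {
    "spanish": {"de", "hola", "la", "casa"},
    "english": {"of", "hello", "the", "house"},
}

print(detect_language("hello to the house of cards", word_sets))  # english
```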
<p><strong>Method 3: Zip compression</strong></p>
<p>This is more a curiosity than anything else, but here it goes... You can compress your text (e.g. LZ77) and then measure the zip-distance with regard to a reference compressed text (target language). Personally, I didn't like it because it's slower, less accurate and less descriptive than other methods. Nevertheless, there might be interesting applications for this method.
To read more: <a href="https://arxiv.org/abs/cond-mat/0108530" rel="nofollow noreferrer">Language Trees and Zipping</a></p> | 2020-01-04 06:18:52.673000+00:00 | 2020-01-04 15:03:40.313000+00:00 | 2020-01-04 15:03:40.313000+00:00 | null | 39,142,778 | <p>I want to get this:</p>
<pre class="lang-none prettyprint-override"><code>Input text: "ру́сский язы́к"
Output text: "Russian"
Input text: "中文"
Output text: "Chinese"
Input text: "にほんご"
Output text: "Japanese"
Input text: "العَرَبِيَّة"
Output text: "Arabic"
</code></pre>
<p>How can I do it in python?</p> | 2016-08-25 10:26:00.440000+00:00 | 2022-09-19 08:41:41.320000+00:00 | 2022-06-13 15:59:31.613000+00:00 | python|nlp | ['https://pypi.org/project/cld2-cffi/', 'https://cloud.google.com/translate/docs/basic/detecting-language', 'https://textblob.readthedocs.io/en/dev/', 'https://github.com/Mimino666/langdetect', 'https://arxiv.org/abs/cond-mat/0108530'] | 5 |
48,904,594 | <p>First, the <code>match</code> variable doesn't only identify matched words; it gives a probability distribution over the input. These probabilities can be seen as weights for each input sentence.</p>
<p>The input sequence is embedded using two different matrices, whose results are <code>input_encoded_c</code> and <code>input_encoded_m</code> in the code. Using the 1st embedding, we find the match weights. Then, applying those weights to the 2nd embedded vectors, we find the answer. It wouldn't be logical to apply the weights to the same vectors from which we calculated them.</p>
<p>Then comes <a href="https://keras.io/layers/core/#permute" rel="nofollow noreferrer">Permute</a>. To generate the answer, we combine the query with the <code>response</code>; to make their dimensions compatible, we permute the dimensions of the response.</p>
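<p>The shapes can be traced with plain NumPy to see why the permute is needed (sizes are illustrative, the batch dimension is dropped, and the softmax here is only a crude stand-in for the Keras layer):</p>

```python
import numpy as np

story_maxlen, query_maxlen, embed_dim = 6, 3, 4   # illustrative sizes
rng = np.random.default_rng(0)

# stand-ins for the two story embeddings and the question embedding
input_encoded_m = rng.standard_normal((story_maxlen, embed_dim))
input_encoded_c = rng.standard_normal((story_maxlen, query_maxlen))
question_encoded = rng.standard_normal((query_maxlen, embed_dim))

# match: dot the 1st story embedding with the question over embed_dim
match = input_encoded_m @ question_encoded.T    # (story_maxlen, query_maxlen)
match = np.exp(match) / np.exp(match).sum()     # crude softmax stand-in

# response: combine the weights with the 2nd story embedding, then permute
response = (match + input_encoded_c).T          # (query_maxlen, story_maxlen)

# after the permute, rows align with the question, so concatenation works
answer = np.concatenate([response, question_encoded], axis=-1)
print(answer.shape)  # (3, 10)
```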
<p>Reading section 2.1 of the paper <a href="https://arxiv.org/pdf/1503.08895.pdf" rel="nofollow noreferrer">End-to-End Memory Networks</a> will help you understand.</p> | 2018-02-21 11:21:52.983000+00:00 | 2018-02-21 11:21:52.983000+00:00 | null | null | 44,527,066 | <p>I am going through the following code of a memory network using Keras on the bAbI dataset:</p>
<pre><code> '''Trains a memory network on the bAbI dataset.
References:
- Jason Weston, Antoine Bordes, Sumit Chopra, Tomas Mikolov, Alexander M. Rush,
"Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks",
http://arxiv.org/abs/1502.05698
- Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus,
"End-To-End Memory Networks",
http://arxiv.org/abs/1503.08895
Reaches 98.6% accuracy on task 'single_supporting_fact_10k' after 120 epochs.
Time per epoch: 3s on CPU (core i7).
'''
from __future__ import print_function
from keras.models import Sequential, Model
from keras.layers.embeddings import Embedding
from keras.layers import Input, Activation, Dense, Permute, Dropout, add, dot, concatenate
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from functools import reduce
import tarfile
import numpy as np
import re
def tokenize(sent):
'''Return the tokens of a sentence including punctuation.
>>> tokenize('Bob dropped the apple. Where is the apple?')
['Bob', 'dropped', 'the', 'apple', '.', 'Where', 'is', 'the', 'apple', '?']
'''
return [x.strip() for x in re.split('(\W+)?', sent) if x.strip()]
def parse_stories(lines, only_supporting=False):
'''Parse stories provided in the bAbi tasks format
If only_supporting is true, only the sentences
that support the answer are kept.
'''
data = []
story = []
for line in lines:
line = line.decode('utf-8').strip()
nid, line = line.split(' ', 1)
nid = int(nid)
if nid == 1:
story = []
if '\t' in line:
q, a, supporting = line.split('\t')
q = tokenize(q)
substory = None
if only_supporting:
# Only select the related substory
supporting = map(int, supporting.split())
substory = [story[i - 1] for i in supporting]
else:
# Provide all the substories
substory = [x for x in story if x]
data.append((substory, q, a))
story.append('')
else:
sent = tokenize(line)
story.append(sent)
return data
def get_stories(f, only_supporting=False, max_length=None):
'''Given a file name, read the file,
retrieve the stories,
and then convert the sentences into a single story.
If max_length is supplied,
any stories longer than max_length tokens will be discarded.
'''
data = parse_stories(f.readlines(), only_supporting=only_supporting)
flatten = lambda data: reduce(lambda x, y: x + y, data)
data = [(flatten(story), q, answer) for story, q, answer in data if not max_length or len(flatten(story)) < max_length]
return data
def vectorize_stories(data, word_idx, story_maxlen, query_maxlen):
X = []
Xq = []
Y = []
for story, query, answer in data:
x = [word_idx[w] for w in story]
xq = [word_idx[w] for w in query]
# let's not forget that index 0 is reserved
y = np.zeros(len(word_idx) + 1)
y[word_idx[answer]] = 1
X.append(x)
Xq.append(xq)
Y.append(y)
return (pad_sequences(X, maxlen=story_maxlen),
pad_sequences(Xq, maxlen=query_maxlen), np.array(Y))
try:
path = get_file('babi-tasks-v1-2.tar.gz', origin='https://s3.amazonaws.com/text-datasets/babi_tasks_1-20_v1-2.tar.gz')
except:
print('Error downloading dataset, please download it manually:\n'
'$ wget http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz\n'
'$ mv tasks_1-20_v1-2.tar.gz ~/.keras/datasets/babi-tasks-v1-2.tar.gz')
raise
tar = tarfile.open(path)
challenges = {
# QA1 with 10,000 samples
'single_supporting_fact_10k': 'tasks_1-20_v1-2/en-10k/qa1_single-supporting-fact_{}.txt',
# QA2 with 10,000 samples
'two_supporting_facts_10k': 'tasks_1-20_v1-2/en-10k/qa2_two-supporting-facts_{}.txt',
}
challenge_type = 'single_supporting_fact_10k'
challenge = challenges[challenge_type]
print('Extracting stories for the challenge:', challenge_type)
train_stories = get_stories(tar.extractfile(challenge.format('train')))
test_stories = get_stories(tar.extractfile(challenge.format('test')))
vocab = set()
for story, q, answer in train_stories + test_stories:
vocab |= set(story + q + [answer])
vocab = sorted(vocab)
# Reserve 0 for masking via pad_sequences
vocab_size = len(vocab) + 1
story_maxlen = max(map(len, (x for x, _, _ in train_stories + test_stories)))
query_maxlen = max(map(len, (x for _, x, _ in train_stories + test_stories)))
print('-')
print('Vocab size:', vocab_size, 'unique words')
print('Story max length:', story_maxlen, 'words')
print('Query max length:', query_maxlen, 'words')
print('Number of training stories:', len(train_stories))
print('Number of test stories:', len(test_stories))
print('-')
print('Here\'s what a "story" tuple looks like (input, query, answer):')
print(train_stories[0])
print('-')
print('Vectorizing the word sequences...')
word_idx = dict((c, i + 1) for i, c in enumerate(vocab))
inputs_train, queries_train, answers_train = vectorize_stories(train_stories,
word_idx,
story_maxlen,
query_maxlen)
inputs_test, queries_test, answers_test = vectorize_stories(test_stories,
word_idx,
story_maxlen,
query_maxlen)
print('-')
print('inputs: integer tensor of shape (samples, max_length)')
print('inputs_train shape:', inputs_train.shape)
print('inputs_test shape:', inputs_test.shape)
print('-')
print('queries: integer tensor of shape (samples, max_length)')
print('queries_train shape:', queries_train.shape)
print('queries_test shape:', queries_test.shape)
print('-')
print('answers: binary (1 or 0) tensor of shape (samples, vocab_size)')
print('answers_train shape:', answers_train.shape)
print('answers_test shape:', answers_test.shape)
print('-')
print('Compiling...')
# placeholders
input_sequence = Input((story_maxlen,))
question = Input((query_maxlen,))
# encoders
# embed the input sequence into a sequence of vectors
input_encoder_m = Sequential()
input_encoder_m.add(Embedding(input_dim=vocab_size,
output_dim=64))
input_encoder_m.add(Dropout(0.3))
# output: (samples, story_maxlen, embedding_dim)
# embed the input into a sequence of vectors of size query_maxlen
input_encoder_c = Sequential()
input_encoder_c.add(Embedding(input_dim=vocab_size,
output_dim=query_maxlen))
input_encoder_c.add(Dropout(0.3))
# output: (samples, story_maxlen, query_maxlen)
# embed the question into a sequence of vectors
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size,
output_dim=64,
input_length=query_maxlen))
question_encoder.add(Dropout(0.3))
# output: (samples, query_maxlen, embedding_dim)
# encode input sequence and questions (which are indices)
# to sequences of dense vectors
input_encoded_m = input_encoder_m(input_sequence)
input_encoded_c = input_encoder_c(input_sequence)
question_encoded = question_encoder(question)
# compute a 'match' between the first input vector sequence
# and the question vector sequence
# shape: `(samples, story_maxlen, query_maxlen)`
match = dot([input_encoded_m, question_encoded], axes=(2, 2))
match = Activation('softmax')(match)
# add the match matrix with the second input vector sequence
response = add([match, input_encoded_c]) # (samples, story_maxlen, query_maxlen)
response = Permute((2, 1))(response) # (samples, query_maxlen, story_maxlen)
# concatenate the match matrix with the question vector sequence
answer = concatenate([response, question_encoded])
# the original paper uses a matrix multiplication for this reduction step.
# we choose to use a RNN instead.
answer = LSTM(32)(answer) # (samples, 32)
# one regularization layer -- more would probably be needed.
answer = Dropout(0.3)(answer)
answer = Dense(vocab_size)(answer) # (samples, vocab_size)
# we output a probability distribution over the vocabulary
answer = Activation('softmax')(answer)
# build the final model
model = Model([input_sequence, question], answer)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
metrics=['accuracy'])
# train
model.fit([inputs_train, queries_train], answers_train,
batch_size=32,
epochs=120,
validation_data=([inputs_test, queries_test], answers_test))
</code></pre>
<p>This is what my understanding is for the model creation part - </p>
<p>After creating dense vectors of story and question part with below code -</p>
<pre><code> input_encoded_m = input_encoder_m(input_sequence)
input_encoded_c = input_encoder_c(input_sequence)
question_encoded = question_encoder(question)
</code></pre>
<p>outputs will have below shapes</p>
<p>input_encoded_m will have shape - <strong>samples, story_maxlen, query_maxlen</strong>
input_encoded_c will have shape - <strong>samples, story_maxlen, query_maxlen</strong>
question_encoded will have shape - <strong>samples, query_maxlen, embedding_dim</strong></p>
<p>input_encoded_m and input_encoded_c have same input embedded in different dimensions i.e. (68 and 4). and question_encoded will have question embedded.</p>
<p>now below part matches the word in story and question and applies softmax activation on the output which means matching words are identified -</p>
<pre><code> match = dot([input_encoded_m, question_encoded], axes=(2, 2))
match = Activation('softmax')(match)
</code></pre>
<p>I am not clear about why the same input, embedded differently, is being added to the match matrix from the above step. The comment says "second input vector", but we are not dealing with a 2nd input yet. Not able to understand this; any help?</p>
<pre><code># add the match matrix with the second input vector sequence
response = add([match, input_encoded_c])  # (samples, story_maxlen, query_maxlen)
</code></pre>
<p>Also, what does permuting the output of the above step do in this context?</p>
<pre><code>response = Permute((2, 1))(response)  # (samples, query_maxlen, story_maxlen)
</code></pre>
<p>Is this just concatenating the story from the above part with the question for the LSTM layer? Please correct me if my understanding is wrong here:</p>
<pre><code> # concatenate the match matrix with the question vector sequence
answer = concatenate([response, question_encoded])
</code></pre>
<p>I couldn't find any intuitive explanation of this anywhere, so I am posting here.</p>
<p>Any help is highly appreciated!</p>
<p>Thanks.</p> | 2017-06-13 16:33:57.727000+00:00 | 2018-02-21 11:21:52.983000+00:00 | null | python|keras|recurrent-neural-network | ['https://keras.io/layers/core/#permute', 'https://arxiv.org/pdf/1503.08895.pdf'] | 2 |
58,300,051 | <p>FastAlign is an implementation of <a href="https://en.wikipedia.org/wiki/IBM_alignment_models#Model_2" rel="nofollow noreferrer">IBM Model 2</a>; the score is the probability estimated by this model. The details of the model are very nicely explained in <a href="http://mt-class.org/jhu/slides/lecture-ibm-model1.pdf" rel="nofollow noreferrer">these slides from JHU</a>.</p>
<p>The score is a probability of the source sentence given the target sentence words and the alignment. The algorithm iteratively estimates:</p>
<ol>
<li>The probability of being each other's translation for (virtually all) pairs of source-language and target-language words.</li>
<li>Optimal alignment given the word-to-word translation probabilities.</li>
</ol>
<p>The score is then a product of the word-to-word translation probabilities with the alignment the algorithm converged to. So, in theory, this should correlate with how parallel the sentences are, but there are many ways in which this can break. For instance, rare words have unreliable probability estimates. Another problem might be that some words (such as "of") can be part of multi-word expressions that are a single word in other languages, which skews the probability estimates as well. So it is no wonder that the probability is not to be trusted.</p>
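<p>As a toy illustration of that product (the probabilities below are made up, not real fast_align estimates):</p>

```python
import math

# Made-up lexical table t(f|e); fast_align learns these from the corpus
t = {
    ("la", "the"): 0.6,
    ("maison", "house"): 0.8,
    ("bleue", "blue"): 0.7,
}

def model2_logscore(src, tgt, alignment, t):
    """Log of prod_j p(a_j) * t(f_j | e_{a_j}); a uniform stand-in
    replaces the learned Model 2 alignment/distortion term here."""
    logp = 0.0
    for j, i in enumerate(alignment):
        logp += math.log((1.0 / len(tgt)) * t[(src[j], tgt[i])])
    return logp

score = model2_logscore(["la", "maison", "bleue"],
                        ["the", "blue", "house"],
                        alignment=[0, 2, 1], t=t)
print(score)  # a log-probability, so always <= 0
```

<p>Note that longer sentences multiply in more factors that are all below 1, which is one reason raw scores of sentences with different lengths are hard to compare directly.</p>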
<p>If your goal is to filter the parallel corpus and remove the incorrectly aligned sentence pairs, I would recommend something else. You can, e.g., use Multilingual BERT as they did in <a href="https://arxiv.org/abs/1906.01502" rel="nofollow noreferrer">a paper by Google</a>, where they used centered vectors for cross-lingual retrieval. Or just google "parallel corpus filtering."</p> | 2019-10-09 08:31:59.400000+00:00 | 2019-10-09 08:31:59.400000+00:00 | null | null | 58,292,601 | <p>I'm using the alignment toolkit fast_align: <a href="https://github.com/clab/fast_align" rel="nofollow noreferrer">https://github.com/clab/fast_align</a>, to get word-to-word alignment of a parallel corpus. There is an option to print out the alignment score -- how do I interpret this score? Does the score measure the degree of alignment between the parallel sentences? I know that some of the sentences in the corpus are well aligned and others are not, but so far I see no correlation between the score and how well aligned they are. Should I adjust for the number of words in the sentence?</p> | 2019-10-08 19:06:29.907000+00:00 | 2019-10-09 08:31:59.400000+00:00 | null | nlp|alignment|language-translation|machine-translation | ['https://en.wikipedia.org/wiki/IBM_alignment_models#Model_2', 'http://mt-class.org/jhu/slides/lecture-ibm-model1.pdf', 'https://arxiv.org/abs/1906.01502'] | 3
40,572,200 | <pre><code>Function Clean-InvalidFileNameChars {
param(
[Parameter(Mandatory=$true,
Position=0,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true)]
[String]$Name
)
$invalidChars = [IO.Path]::GetInvalidFileNameChars() -join ''
$re = "[{0}]" -f [RegEx]::Escape($invalidChars)
$res=($Name -replace $re)
return $res.Substring(0, [math]::Min(260, $res.Length))
}
Function Clean-InvalidPathChars {
param(
[Parameter(Mandatory=$true,
Position=0,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true)]
[String]$Name
)
$invalidChars = [IO.Path]::GetInvalidPathChars() -join ''
$re = "[{0}]" -f [RegEx]::Escape($invalidChars)
$res=($Name -replace $re)
return $res.Substring(0, [math]::Min(248, $res.Length))
}
$rootpath="c:\temp2"
$rootpathresult="c:\tempresult"
$template=@'
[3] arXiv:1611.00057 [pdf, ps, other]
Title: {title*:Holomorphy of adjoint $L$ functions for quasisplit A2}
Authors: Joseph Hundley
Comments: 18 pages
Subjects: {subject:Number Theory (math.NT)}
[4] arXiv:1611.00066 [pdf, other]
Title: {title*:Many Haken Heegaard splittings}
Authors: Alessandro Sisto
Comments: 12 pages, 3 figures
Subjects: {subject:Geometric Topology (math.GT)}
[5] arXiv:1611.00067 [pdf, ps, other]
Title: {title*:Subsumed homoclinic connections and infinitely many coexisting attractors in piecewise-linear maps}
Authors: David J.W. Simpson, Christopher P. Tuffley
Subjects: {subject:Dynamical Systems (math.DS)}
[21] arXiv:1611.00114 [pdf, ps, other]
Title: {title*:Faces of highest weight modules and the universal Weyl polyhedron}
Authors: Gurbir Dhillon, Apoorva Khare
Comments: We recall preliminaries and results from the companion paper arXiv:1606.09640
Subjects: {subject:Representation Theory (math.RT)}; Combinatorics (math.CO); Metric Geometry (math.MG)
'@
#extract utils data and clean
$listbook=gci $rootpath -File -filter *.pdf | foreach { New-Object psobject -Property @{file=$_.fullname; books= ((iwr "https://arxiv.org/abs/$($_.BaseName)").ParsedHtml.body.outerText | ConvertFrom-String -TemplateContent $template)}} | select file -ExpandProperty books | select file, @{N="Subject";E={Clean-InvalidPathChars $_.subject}}, @{N="Title";E={Clean-InvalidFileNameChars $_.title}}
#build dirs and copy+rename file
$listbook | %{$newpath="$rootpathresult\$($_.subject)"; New-Item -ItemType Directory -Path "$newpath" -Force; Copy-Item $_.file "$newpath\$($_.title).pdf" -Force}
</code></pre> | 2016-11-13 09:02:51.470000+00:00 | 2016-11-13 09:02:51.470000+00:00 | null | null | 40,571,297 | <p>For <em>matching</em> the category with the filename I use this code:</p>
<pre><code> gci *.pdf | foreach { (iwr "https://arxiv.org/abs/$($_.BaseName)")`
-match 'primary-subject">(.*?)</span>'; $matches[1] }
</code></pre>
<p>To give an idea of what I mean: <a href="https://i.imgur.com/57KjJr6.png" rel="nofollow noreferrer">http://i.imgur.com/57KjJr6.png</a></p>
<p>To <em>rename</em> independently I can use the following, but this is not useful because I process folder by folder and it takes a long time (the number of folders is large):</p>
<pre><code>#All PDFs | Rename { query Arxiv for the abstract by filename, use the page title + ".pdf"}
Get-ChildItem *.pdf | Rename-Item -NewName {
$title = (Invoke-WebRequest "https://arxiv.org/abs/$($_.BaseName)").parsedhtml.title
$title = $title -replace '[\\/:\*\?"<>\|]', '-' # replace forbidden characters
"$title.pdf" # in filenames with -
}
</code></pre>
<p>I should make the folders like this (without <em>[folder]</em>):</p>
<pre><code>[folder] Information Theory (cs.IT)
[folder] Number Theory (math.NT)
....
</code></pre>
<p>I am trying to <em>join</em> the two operations:</p>
<p>MOVING by Subject</p>
<pre><code>[folder] Geometric Topology (cs.IT)
|
|__ [file] 1611.00066
|__ [file] .....
[folder] Number Theory (math.NT)
|
|__ [file] 1611.00057
</code></pre>
<p>and RENAMING by Title</p>
<pre><code>[folder] Geometric Topology (cs.IT)
|
|__ [file] 1611.00066
|__ [file] .....
[folder] Number Theory (math.NT)
|
|__ [file] 1611.00057
</code></pre>
<p>To loop and join the operations I made a <em>.ps1</em> file. I inserted this code, but it doesn't work:</p>
<pre><code>$res=Invoke-WebRequest "https://arxiv.org/abs/$($_.BaseName)"
$rootpath="c:\temp"
Function Clean-InvalidFileNameChars {
param(
[Parameter(Mandatory=$true,
Position=0,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true)]
[String]$Name
)
$invalidChars = [IO.Path]::GetInvalidFileNameChars() -join ''
$re = "[{0}]" -f [RegEx]::Escape($invalidChars)
$res=($Name -replace $re)
return $res.Substring(0, [math]::Min(260, $res.Length))
}
Function Clean-InvalidPathChars {
param(
[Parameter(Mandatory=$true,
Position=0,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true)]
[String]$Name
)
$invalidChars = [IO.Path]::GetInvalidPathChars() -join ''
$re = "[{0}]" -f [RegEx]::Escape($invalidChars)
$res=($Name -replace $re)
return $res.Substring(0, [math]::Min(248, $res.Length))
}
gci *.pdf | foreach { (iwr "https://arxiv.org/abs/$($_.BaseName)")`
-match 'primary-subject">(.*?)</span>'; $matches[1] }
#get date and cut with format template, group by Subject and clean Title and Subject for transformation to dir and file name
$grousubject=$res.ParsedHtml.body.outerText | ConvertFrom-String -TemplateContent $template | select @{N="Subject";E={Clean-InvalidPathChars $_.subject}}, @{N="Title";E={Clean-InvalidFileNameChars $_.title}} | group Subject
#create dir and files
$grousubject | %{$path= "$rootpath\$($_.Name)" ; $_.group.title | %{New-Item -ItemType File -Path "$path\$_" -Force} }
Get-ChildItem *.pdf | Rename-Item -NewName {
$title = (Invoke-WebRequest "https://arxiv.org/abs/$($_.BaseName)").parsedhtml.title
$title = $title -replace '[\\/:\*\?"<>\|]', '-' # replace forbidden characters
"$title.pdf" # in filenames with -
}
</code></pre>
<p>My PowerShell version is 4.</p>
<p><strong>EDIT:</strong> Esmeraldo's solution works like this: <a href="https://i.imgur.com/NEio868.png" rel="nofollow noreferrer">http://i.imgur.com/NEio868.png</a></p>
<p>Thank you</p> | 2016-11-13 06:40:23.747000+00:00 | 2016-11-13 11:14:25.770000+00:00 | 2016-11-13 11:14:25.770000+00:00 | powershell|powershell-4.0 | [] | 0 |
55,534,488 | <p>I think you are looking for GANs (Generative Adversarial Networks), which are proposed in <a href="https://arxiv.org/abs/1406.2661" rel="nofollow noreferrer">this paper</a>.</p>
<p>A GAN is a type of algorithm that contains two different models: one model, named the Discriminator, tries to learn to determine whether its input data comes from the data set or not, while the other, named the Generator, tries to learn how to generate data so that the Discriminator wrongly recognizes it as coming from the data set.</p>
<p><a href="https://i.stack.imgur.com/nmHEV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nmHEV.jpg" alt="enter image description here"></a></p>
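<p>A minimal numpy sketch of the two competing objectives, with arbitrary numbers, just to show which direction each model pushes the Discriminator's output:</p>

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy for a single prediction p against label y."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

d_real = 0.9  # Discriminator's output on a real sample (it wants this -> 1)
d_fake = 0.2  # Discriminator's output on a generated sample (it wants this -> 0)

d_loss = bce(d_real, 1) + bce(d_fake, 0)  # Discriminator: label real as 1, fake as 0
g_loss = bce(d_fake, 1)                   # Generator: make the Discriminator say 1 on fakes

print(d_loss, g_loss)
```

<p>During training the two losses are minimized in alternation, each model improving against the other.</p>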
<p>You can find more details in the following links:</p>
<p><a href="https://searchenterpriseai.techtarget.com/definition/generative-adversarial-network-GAN" rel="nofollow noreferrer">generative adversarial network (GAN)</a></p>
<p><a href="https://blog.statsbot.co/generative-adversarial-networks-gans-engine-and-applications-f96291965b47" rel="nofollow noreferrer">Generative Adversarial Networks (GANs): Engine and Applications</a></p>
<p><a href="https://towardsdatascience.com/gan-by-example-using-keras-on-tensorflow-backend-1a6d515a60d0" rel="nofollow noreferrer">GAN by Example using Keras on Tensorflow Backend</a></p> | 2019-04-05 11:34:49.417000+00:00 | 2019-04-05 11:34:49.417000+00:00 | null | null | 55,534,025 | <p>If i have a dataset consisting by a list of images each associated with a series of features; there is a model that, once trained, generates new images upon entering a new list of features?</p> | 2019-04-05 11:06:09.947000+00:00 | 2019-04-05 11:34:49.417000+00:00 | null | python|keras|deep-learning|conv-neural-network|dcgan | ['https://arxiv.org/abs/1406.2661', 'https://i.stack.imgur.com/nmHEV.jpg', 'https://searchenterpriseai.techtarget.com/definition/generative-adversarial-network-GAN', 'https://blog.statsbot.co/generative-adversarial-networks-gans-engine-and-applications-f96291965b47', 'https://towardsdatascience.com/gan-by-example-using-keras-on-tensorflow-backend-1a6d515a60d0'] | 5 |
36,726,247 | <p><code>JuMP</code> already benefits from sparse matrix in different ways, I've not checked the source but refer to a <a href="http://arxiv.org/pdf/1312.1431v1" rel="nofollow">cited paper</a> from <a href="https://github.com/JuliaOpt/JuMP.jl" rel="nofollow">JuMP.jl</a>:</p>
<blockquote>
<p>In the case of LP, the input data structures are the vectors c and b
and the matrix A in <strong>sparse</strong> format, and the routines to generate these
data structures are called matrix generators</p>
</blockquote>
<p>One point to note is that, the main task of algebraic modeling languages (AMLs) like JuMP is to generate input data structures for solvers. AMLs like JuMP do not solve generated problems themselves but they call standard appropriate solvers to do the task.</p> | 2016-04-19 18:10:21.557000+00:00 | 2016-04-19 18:10:21.557000+00:00 | null | null | 36,698,433 | <p>How do I deal with sparse matrices in <a href="https://github.com/JuliaOpt/JuMP.jl" rel="nofollow">JuMP</a>?</p>
<p>For example, suppose I want to impose a constrain of the form:</p>
<pre><code>A * x == 0
</code></pre>
<p>where <code>A</code> is a sparse matrix and <code>x</code> a vector of variables. I assume that the sparsity of <code>A</code> could be exploited to make the optimization faster. How can I take advantage of this in JuMP?</p> | 2016-04-18 15:33:38.483000+00:00 | 2016-10-03 00:10:28.020000+00:00 | 2016-10-03 00:10:28.020000+00:00 | julia|julia-jump | ['http://arxiv.org/pdf/1312.1431v1', 'https://github.com/JuliaOpt/JuMP.jl'] | 2 |
72,164,547 | <p>Maybe the poor performance is due to gradients being applied to the BERT backbone. Validate it like so:</p>
<pre><code>print([p.requires_grad for p in bert_distil.distilbert.parameters()])
</code></pre>
<p>As an alternative solution, try freezing the weights of your trained model:</p>
<pre><code>for param in bert_distil.distilbert.parameters():
param.requires_grad = False
</code></pre>
<p>As you are trying to optimize the weights of a trained model during fine-tuning on your data, you face issues described, among other sources, in the ULMIfit (<a href="https://arxiv.org/abs/1801.06146" rel="nofollow noreferrer">https://arxiv.org/abs/1801.06146</a>) paper</p> | 2022-05-08 19:41:26.127000+00:00 | 2022-05-08 19:41:26.127000+00:00 | null | null | 63,218,778 | <p>I am relatively new to PyTorch and Huggingface-transformers and experimented with DistillBertForSequenceClassification on this <a href="https://www.kaggle.com/c/nlp-getting-started" rel="nofollow noreferrer">Kaggle-Dataset</a>.</p>
<pre><code>from transformers import DistilBertForSequenceClassification
import torch.optim as optim
import torch.nn as nn
from transformers import get_linear_schedule_with_warmup
n_epochs = 5 # or whatever
batch_size = 32 # or whatever
bert_distil = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
#bert_distil.classifier = nn.Sequential(nn.Linear(in_features=768, out_features=1), nn.Sigmoid())
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(bert_distil.parameters(), lr=0.1)
X_train = []
Y_train = []
for row in train_df.iterrows():
seq = tokenizer.encode(preprocess_text(row[1]['text']), add_special_tokens=True, pad_to_max_length=True)
X_train.append(torch.tensor(seq).unsqueeze(0))
Y_train.append(torch.tensor([row[1]['target']]).unsqueeze(0))
X_train = torch.cat(X_train)
Y_train = torch.cat(Y_train)
running_loss = 0.0
bert_distil.cuda()
bert_distil.train(True)
for epoch in range(n_epochs):
permutation = torch.randperm(len(X_train))
j = 0
for i in range(0,len(X_train), batch_size):
optimizer.zero_grad()
indices = permutation[i:i+batch_size]
batch_x, batch_y = X_train[indices], Y_train[indices]
batch_x.cuda()
batch_y.cuda()
outputs = bert_distil.forward(batch_x.cuda())
loss = criterion(outputs[0],batch_y.squeeze().cuda())
loss.requires_grad = True
loss.backward()
optimizer.step()
running_loss += loss.item()
j+=1
if j == 20:
#print(outputs[0])
print('[%d, %5d] running loss: %.3f loss: %.3f ' %
(epoch + 1, i*1, running_loss / 20, loss.item()))
running_loss = 0.0
j = 0
</code></pre>
<blockquote>
<p>[1, 608] running loss: 0.689 loss: 0.687
[1, 1248] running loss: 0.693 loss: 0.694
[1, 1888] running loss: 0.693 loss: 0.683
[1, 2528] running loss: 0.689 loss: 0.701
[1, 3168] running loss: 0.690 loss: 0.684
[1, 3808] running loss: 0.689 loss: 0.688
[1, 4448] running loss: 0.689 loss: 0.692 etc...</p>
</blockquote>
<p>Regardless on what I tried, loss did never decrease, or even increase, nor did the prediction get better. It seems to me that I forgot something so that weights are actually not updated. Someone has an idea?
O</p>
<p><strong>what I tried</strong></p>
<ul>
<li>Different loss functions
<ul>
<li>BCE</li>
<li>CrossEntropy</li>
<li>even MSE-loss</li>
</ul>
</li>
<li>One-Hot Encoding vs A single neuron output</li>
<li>Different learning rates, and optimizers</li>
<li>I even changed all the targets to only one single label, but even then, the network did'nt converge.</li>
</ul> | 2020-08-02 17:02:08.167000+00:00 | 2022-05-08 19:41:26.127000+00:00 | null | nlp|pytorch|text-classification|loss-function|huggingface-transformers | ['https://arxiv.org/abs/1801.06146'] | 1 |
63,593,996 | <p>I would highlight two possible reasons for your "stable" results:</p>
<ol>
<li>I agree that the <strong>learning rate</strong> is surely too <strong>high</strong> that prevents model from any significant updates.</li>
<li>But what is important to know is that based on the state-of-the-art papers <strong>finetuning has very marginal effect</strong> on the core NLP abilities of Transformers. For example, the <a href="https://arxiv.org/pdf/1909.04925.pdf" rel="nofollow noreferrer">paper</a> says that finetuning only applies really small weight changes. Citing it: "Finetuning barely affects accuracy on NEL, COREF and REL indicating that those tasks are already sufficiently covered by pre-training". Several papers suggest that finetuning for classification tasks is basically waste of time. Thus, considering that DistilBert is actually a student model of BERT, maybe you won't get better results. <em><strong>Try pre-training</strong></em> with your data first. Generally, pre-training has a more significant impact.</li>
</ol> | 2020-08-26 08:51:34.120000+00:00 | 2020-08-26 08:51:34.120000+00:00 | null | null | 63,218,778 | <p>I am relatively new to PyTorch and Huggingface-transformers and experimented with DistillBertForSequenceClassification on this <a href="https://www.kaggle.com/c/nlp-getting-started" rel="nofollow noreferrer">Kaggle-Dataset</a>.</p>
<pre><code>from transformers import DistilBertForSequenceClassification
import torch.optim as optim
import torch.nn as nn
from transformers import get_linear_schedule_with_warmup
n_epochs = 5 # or whatever
batch_size = 32 # or whatever
bert_distil = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
#bert_distil.classifier = nn.Sequential(nn.Linear(in_features=768, out_features=1), nn.Sigmoid())
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(bert_distil.parameters(), lr=0.1)
X_train = []
Y_train = []
for row in train_df.iterrows():
seq = tokenizer.encode(preprocess_text(row[1]['text']), add_special_tokens=True, pad_to_max_length=True)
X_train.append(torch.tensor(seq).unsqueeze(0))
Y_train.append(torch.tensor([row[1]['target']]).unsqueeze(0))
X_train = torch.cat(X_train)
Y_train = torch.cat(Y_train)
running_loss = 0.0
bert_distil.cuda()
bert_distil.train(True)
for epoch in range(n_epochs):
permutation = torch.randperm(len(X_train))
j = 0
for i in range(0,len(X_train), batch_size):
optimizer.zero_grad()
indices = permutation[i:i+batch_size]
batch_x, batch_y = X_train[indices], Y_train[indices]
batch_x.cuda()
batch_y.cuda()
outputs = bert_distil.forward(batch_x.cuda())
loss = criterion(outputs[0],batch_y.squeeze().cuda())
loss.requires_grad = True
loss.backward()
optimizer.step()
running_loss += loss.item()
j+=1
if j == 20:
#print(outputs[0])
print('[%d, %5d] running loss: %.3f loss: %.3f ' %
(epoch + 1, i*1, running_loss / 20, loss.item()))
running_loss = 0.0
j = 0
</code></pre>
<blockquote>
<p>[1, 608] running loss: 0.689 loss: 0.687
[1, 1248] running loss: 0.693 loss: 0.694
[1, 1888] running loss: 0.693 loss: 0.683
[1, 2528] running loss: 0.689 loss: 0.701
[1, 3168] running loss: 0.690 loss: 0.684
[1, 3808] running loss: 0.689 loss: 0.688
[1, 4448] running loss: 0.689 loss: 0.692 etc...</p>
</blockquote>
<p>Regardless on what I tried, loss did never decrease, or even increase, nor did the prediction get better. It seems to me that I forgot something so that weights are actually not updated. Someone has an idea?
O</p>
<p><strong>what I tried</strong></p>
<ul>
<li>Different loss functions
<ul>
<li>BCE</li>
<li>CrossEntropy</li>
<li>even MSE-loss</li>
</ul>
</li>
<li>One-Hot Encoding vs A single neuron output</li>
<li>Different learning rates, and optimizers</li>
<li>I even changed all the targets to only one single label, but even then, the network did'nt converge.</li>
</ul> | 2020-08-02 17:02:08.167000+00:00 | 2022-05-08 19:41:26.127000+00:00 | null | nlp|pytorch|text-classification|loss-function|huggingface-transformers | ['https://arxiv.org/pdf/1909.04925.pdf'] | 1 |
64,952,126 | <p>Okay, so first of all there is no data :( so I am just taking a sample ID from the <a href="https://pypi.org/project/semanticscholar/" rel="nofollow noreferrer">semanticscholar</a> documentation. Looking at your code, I can see plenty of mistakes:</p>
<ol>
<li>Don't always stick to <code>pd.DataFrame</code> for your work! DataFrames are great, but they are also slow! You just need to get the IDs from <code>'paperIds-1975-2005-2015-2.tsv'</code>, so you can either read the file using <code>file.readline()</code> or save the data into a list:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>data = pd.read_csv('paperIds-1975-2005-2015-2.tsv', sep='\t', names=["id"]).id.values
</code></pre>
<ol start="2">
<li>From the code flow, what I understand is that you want to save the scraped data into a <em>single</em> CSV file, right? So why are you appending the data and saving the file again and again? This makes the code orders of magnitude slower!</li>
<li>I really don't understand the purpose of the <code>time.sleep(60)</code> you have added. If there is some error, you should print it and move on - why wait?</li>
<li>For checking the progress, you can use the <code>tqdm</code> <a href="https://www.geeksforgeeks.org/python-how-to-make-a-terminal-progress-bar-using-tqdm/" rel="nofollow noreferrer">library</a> which shows a nice progress bar for your code!</li>
</ol>
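<p>To see why point 2 matters, here is a quick illustrative timing (exact numbers will vary on your machine; a <code>StringIO</code> stands in for the CSV file):</p>

```python
import io
import time
import pandas as pd

rows = [{"id": i, "val": i * 2} for i in range(2000)]

# Slow pattern: grow the DataFrame and rewrite the whole CSV on every iteration
t0 = time.perf_counter()
df = pd.DataFrame()
for r in rows[:200]:                                            # only 200 iterations
    df = pd.concat([df, pd.DataFrame([r])], ignore_index=True)  # copies all previous rows
    df.to_csv(io.StringIO(), index=False)                       # stands in for rewriting the file
slow = time.perf_counter() - t0

# Fast pattern: collect rows in plain Python, build the frame and save once
t0 = time.perf_counter()
pd.DataFrame(rows).to_csv(io.StringIO(), index=False)
fast = time.perf_counter() - t0

print(slow > fast)  # the incremental version is far slower
```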
<p>Taking these into consideration, I have modified your code as follows:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import semanticscholar as sch
from tqdm import tqdm as TQ # for progree-bar
data = ['10.1093/mind/lix.236.433', '10.1093/mind/lix.236.433'] # using list or np.ndarray looks more logical!
print(data)
>> ['10.1093/mind/lix.236.433', '10.1093/mind/lix.236.433']
</code></pre>
<p>Once you have done this, you can now go and scrape the data. Before that, note that a pandas <code>DataFrame</code> is basically a dictionary with advanced features. So, for our purpose, we will first add all the information to a dictionary and then create the DataFrame. I personally prefer this process - it gives me more control if any changes need to be done.</p>
<pre class="lang-py prettyprint-override"><code>cols = ['id', 'abstract', 'arxivId', 'authors', 'citationVelocity', 'citations',
'corpusId', 'doi', 'fieldsOfStudy', 'influentialCitationCount', 'is_open_access',
'is_publisher_licensed', 'paperId', 'references', 'title', 'topics', 'url', 'venue', 'year']
outputData = dict((k, []) for k in cols)
print(outputData)
{'id': [],
'abstract': [],
'arxivId': [],
'authors': [],
'citationVelocity': [],
'citations': [],
'corpusId': [],
'doi': [],
'fieldsOfStudy': [],
'influentialCitationCount': [],
'is_open_access': [],
'is_publisher_licensed': [],
'paperId': [],
'references': [],
'title': [],
'topics': [],
'url': [],
'venue': [],
'year': []}
</code></pre>
<p>Now you can simply fetch the data and save it into your dataframe as below:</p>
<pre class="lang-py prettyprint-override"><code>for _paperID in TQ(data):
    paper = sch.paper(_paperID, timeout=10)  # scrape the paper
    for key in cols:
        if key in paper:
            outputData[key].append(paper[key])
        else:
            outputData[key].append(None)  # if there is no data, append None
            print(f"{key} not found for {_paperID}")
pd.DataFrame(outputData).to_csv('output_file_name.csv', index=False)
</code></pre>
<p>This is the output that I have obtained:
<a href="https://i.stack.imgur.com/eo3iv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eo3iv.png" alt="enter image description here" /></a></p> | 2020-11-22 08:39:27.820000+00:00 | 2020-11-22 08:39:27.820000+00:00 | null | null | 64,951,767 | <p>I am doing a task that requires scraping. I have a dataset with ids and for each id I need to scrape some new information. This dataset has around 4 million rows. Here is my code:</p>
<pre><code>import pandas as pd
import numpy as np
import semanticscholar as sch
import time
# dataset with ids
df = pd.read_csv('paperIds-1975-2005-2015-2.tsv', sep='\t', names=["id"])
# columns that will be produced
cols = ['id', 'abstract', 'arxivId', 'authors',
'citationVelocity', 'citations',
'corpusId', 'doi', 'fieldsOfStudy',
'influentialCitationCount', 'is_open_access',
'is_publisher_licensed', 'paperId',
'references', 'title', 'topics',
'url', 'venue', 'year']
# a new dataframe that we will append the scraped results
new_df = pd.DataFrame(columns=cols)
# a counter so we know when every 100000 papers are scraped
c = 0
i = 0
while i < df.shape[0]:
try:
paper = sch.paper(df.id[i], timeout=10) # scrape the paper
new_df = new_df.append([df.id[i]]+paper, ignore_index=True) # append to the new dataframe
new_df.to_csv('abstracts_impact.csv', index=False) # save it
if i % 100000 == 0: # to check how much we did
print(c)
c += 1
i += 1
except:
time.sleep(60)
</code></pre>
<p>The problem is that the dataset is pretty big and this approach is not working. I left it running for 2 days and it scraped around 100,000 ids, then it suddenly froze, and all the data that was saved was just empty rows.
I was thinking that the best solution would be to parallelize and use batch processing. I have never done this before and I am not familiar with these concepts. Any help would be appreciated. Thank you!</p> | 2020-11-22 07:50:16.460000+00:00 | 2020-11-22 08:39:27.820000+00:00 | null | python|pandas|web-scraping|batch-processing | ['https://pypi.org/project/semanticscholar/', 'https://www.geeksforgeeks.org/python-how-to-make-a-terminal-progress-bar-using-tqdm/', 'https://i.stack.imgur.com/eo3iv.png'] | 3
53,923,608 | <blockquote>
<p>If I have to use pretrained word vectors as embedding layer in Neural
Networks (eg. say CNN), How do I deal with index 0?</p>
</blockquote>
<p><strong>Answer</strong></p>
<p>In general, empty entries can be handled via a weighted cost of the model and the targets.
However, when dealing with words and sequential data, things can be a little tricky and there are several things that can be considered. Let's make some assumptions and work with that.</p>
<p><strong>Assumptions</strong></p>
<ol>
<li>We begin with a pre-trained word2vec model.</li>
<li>We have sequences with varying lengths, with at most <code>max_length</code> words.</li>
</ol>
<p><strong>Details</strong></p>
<ul>
<li>Word2Vec is a model that learns a mapping (embedding) from discrete variables (word token = word unique id) to a continuous vector space.</li>
<li>The representation in the vector space is such that the cost function (CBOW, Skip-gram; essentially predicting a word from its context in a bi-directional way) is minimized on the corpus.</li>
<li>Reading basic tutorials (like <a href="https://www.tensorflow.org/tutorials/representation/word2vec" rel="nofollow noreferrer">Google's word2vec tutorial</a> on <a href="https://www.tensorflow.org/tutorials/" rel="nofollow noreferrer">Tensorflow tutorials</a>) reveals some details on the algorithm, including <a href="https://en.wikipedia.org/wiki/Word2vec" rel="nofollow noreferrer">negative sampling</a>.</li>
<li>The implementation is a lookup table. It is faster than the alternative one-hot encoding technique, since the dimensions of a one-hot encoded matrix are huge (say 10,000 columns for 10,000 words, <code>n</code> rows for <code>n</code> sequential words). So the lookup (hash) table is significantly faster, and it selects rows from the embedding matrix (as row vectors).</li>
</ul>
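<p>A quick numpy sketch of that last point, with toy sizes: the lookup is plain row selection, and it gives the same result as the much larger one-hot product.</p>

```python
import numpy as np

vocab_size, emb_dim = 10000, 8
rng = np.random.default_rng(0)
E = rng.random((vocab_size, emb_dim))      # embedding matrix: one row per word id

ids = np.array([3, 41, 7])                 # a short sequence of word tokens

looked_up = E[ids]                         # lookup: row selection, O(sequence length)

onehot = np.zeros((len(ids), vocab_size))  # one-hot alternative: a (3, 10000) matrix
onehot[np.arange(len(ids)), ids] = 1.0
via_onehot = onehot @ E                    # same result via a much larger matmul

print(np.allclose(looked_up, via_onehot))  # True
```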
<p><strong>Task</strong></p>
<ul>
<li>Add missing entries (no words) and use them in the model.</li>
</ul>
<p><strong>Suggestions</strong></p>
<ul>
<li>If there is some use for the cost of missing data, such as using a prediction from that entry when there is a label for that entry, you can add a new value as suggested (it can be the 0 index, but all indexes must shift by <code>i=i+1</code> and the embedding matrix should have a new row at position 0).</li>
<li>Following the first suggestion, you need to train the added row. You can use negative sampling for the NaN class vs all. <em>I do not suggest it for handling missing values. It is a good trick to handle an "Unknown word" class.</em></li>
<li>You can weight the cost of those entries by a constant 0 for each sample that is shorter than <code>max_length</code>. That is, if we have a sequence of word tokens <code>[0,5,6,2,178,24,0,NaN,NaN]</code>, the corresponding weight vector is <code>[1,1,1,1,1,1,1,0,0]</code></li>
<li>You should worry about re-indexing the words and the cost of it. In memory, there is almost no difference (<code>1</code> vs <code>N</code> words, <code>N</code> is large). In complexity, it is something that can later be incorporated in the initial tokenize function. The predictions and model complexity are a larger issue and a more important requirement of the system.</li>
<li>There are numerous ways to tackle varying lengths (LSTMs, RNNs; now we try CNNs and cost tricks). Read the state-of-the-art literature on this issue; I'm sure there is much work. For example, see the <a href="https://arxiv.org/abs/1404.2188" rel="nofollow noreferrer">A Convolutional Neural Network for Modelling Sentences</a> paper.</li>
</ul> | 2018-12-25 15:26:09.547000+00:00 | 2018-12-25 15:26:09.547000+00:00 | null | null | 53,923,344 | <p>If I have to use pretrained word vectors as embedding layer in Neural Networks (eg. say CNN), How do I deal with index 0?</p>
<p><strong>Detail:</strong> </p>
<p>We usually start by creating a zero numpy 2D array. Later we fill in the indices of words from the vocabulary.
The problem is, 0 is already the index of another word in our vocabulary (say, 'i' is at index 0). Hence, we are basically initializing the whole matrix filled with 'i' instead of empty words. So, how do we deal with padding all the sentences to equal length?</p>
<p>One easy idea that pops to mind is to use another index, numberOfWordsInVocab+1, to pad. But wouldn't that take more space? [Help me!]</p> | 2018-12-25 14:46:37.583000+00:00 | 2018-12-25 15:26:09.547000+00:00 | null | python|tensorflow|nlp|artificial-intelligence|word2vec | ['https://www.tensorflow.org/tutorials/representation/word2vec', 'https://www.tensorflow.org/tutorials/', 'https://en.wikipedia.org/wiki/Word2vec', 'https://arxiv.org/abs/1404.2188'] | 4
53,802,505 | <p>Applying tSNE and fitting k-means is one of the basic things you can start from.
I would say consider using a different f-divergence.</p>
<p>Stochastic Neighbor Embedding under f-divergences <a href="https://arxiv.org/pdf/1811.01247.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1811.01247.pdf</a></p>
<p>This paper tries five different f-divergence functions: KL, RKL, JS, CH (Chi-Square), HL (Hellinger).</p>
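<p>For intuition, here are those five divergences computed for two toy discrete distributions (standard definitions; check the paper for the exact forms it uses):</p>

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])

kl  = np.sum(p * np.log(p / q))                      # KL(p || q)
rkl = np.sum(q * np.log(q / p))                      # reverse KL
m = (p + q) / 2
js  = 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))
chi = np.sum((p - q) ** 2 / q)                       # chi-square
hl  = 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)   # squared Hellinger

print(kl, rkl, js, chi, hl)                          # all non-negative
```

<p>Each one penalizes mismatches between the distributions differently, which is why they trade off precision and recall differently, as the paper discusses.</p>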
<p>The paper goes over which divergence emphasizes what in terms of precision and recall.</p> | 2018-12-16 13:14:34.723000+00:00 | 2018-12-16 13:14:34.723000+00:00 | null | null | 48,550,697 | <p>I have a dataset which I need to cluster and display in a way wherein elements in the same cluster should appear closer together. The dataset is based out of a research study, and has around 16 rows (entries) and about 50 features. I do agree that it's not an ideal dataset to begin with, but unfortunately that is the situation at hand.</p>
<p>Following is the approach I took:</p>
<p>I first applied KMeans on the dataset after normalizing it.</p>
<p>In parallel I also tried using TSNE to map the data into 2 dimensions and plotted them on a scatterplot. From my understanding of TSNE, that technique should already place items in the same cluster closer to each other. When I look at the scatterplot, however, the clusters are really all over the place.</p>
<p>The result of the scatterplot can be found here: <a href="https://imgur.com/ZPhPjHB" rel="nofollow noreferrer">https://imgur.com/ZPhPjHB</a></p>
<p>Is this because TSNE and KMeans intrinsically work differently? Should I just do TSNE and try to label the clusters (and if so, how?) or should I be using TSNE output to feed into KMeans somehow?</p>
<p>I am really new in this space and advice would be greatly appreciated!</p>
<p>Thanks in advance once again</p>
<p>Edit: The same overlap happens if I first use TSNE to reduce dimensions to 2 and then use those reduced dimensions to cluster using KMeans</p> | 2018-01-31 20:13:28.673000+00:00 | 2018-12-16 13:14:34.723000+00:00 | null | cluster-analysis|data-science | ['https://arxiv.org/pdf/1811.01247.pdf'] | 1 |
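As a concrete version of the "run k-means on the 2-D t-SNE embedding" approach discussed above, here is a plain-NumPy sketch of Lloyd's algorithm. The two blobs stand in for a 2-D embedding; the cluster count and initialisation are illustrative assumptions, not anything from the original posts:

```python
import numpy as np

def kmeans(points, centers, n_iter=50):
    """Plain Lloyd's algorithm. `points` is (n, d); `centers` is the (k, d) init."""
    centers = centers.astype(float).copy()
    for _ in range(n_iter):
        # assign every point to its nearest center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned points
        for c in range(len(centers)):
            if (labels == c).any():
                centers[c] = points[labels == c].mean(axis=0)
    return labels, centers

# Two well-separated blobs standing in for a 2-D t-SNE embedding.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0.0, 0.1, (8, 2)), rng.normal(5.0, 0.1, (8, 2))])
labels, _ = kmeans(emb, centers=emb[[0, 8]])
```

In practice scikit-learn's `TSNE` and `KMeans` do the same job; with only 16 samples the t-SNE perplexity must be set well below the sample count.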
56,620,137 | <p>It could be the fault of the ELBO loss; try using <a href="https://arxiv.org/abs/1706.02262" rel="nofollow noreferrer">MMD</a> instead.</p> | 2019-06-16 15:18:28.243000+00:00 | 2019-06-16 15:18:28.243000+00:00 | null | null | 56,235,077 | <p>I am trying to create a convolutional autoencoder that reconstructs (800,800,1) images. What I found is that if I use a deeper architecture (more than 2 layers for the encoder and decoder), the outputs tend to have a small variance and a lower mean value, but I don't understand what's wrong.</p> | 2019-05-21 09:18:00.097000+00:00 | 2019-06-16 15:18:28.243000+00:00 | null | python|keras|deep-learning|autoencoder | ['https://arxiv.org/abs/1706.02262'] | 1
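For reference, the MMD suggested in the answer above can be estimated with an RBF kernel in a few lines. This is an illustrative sketch only (the kernel bandwidth, sample sizes and latent dimensionality are made up), not the implementation from the linked paper:

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x (n, d) and y (m, d)
    under an RBF kernel; the bandwidth choice here is an assumption."""
    def gram(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, (64, 2))         # e.g. encoder outputs
z = rng.normal(0.0, 1.0, (64, 2))         # prior samples
mismatch = rng.normal(5.0, 1.0, (64, 2))  # a clearly different distribution
```

In an MMD-VAE ("InfoVAE") style setup, `rbf_mmd2(encoded_batch, prior_batch)` would replace the KL term of the ELBO.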
64,467,213 | <p>You can try a requests and Beautiful Soup approach. There is no need to click the "more" link.</p>
<pre><code>from requests import get
from bs4 import BeautifulSoup
# you can change the size to retrieve all the results at one shot.
url = 'https://arxiv.org/search/?query=healthcare&searchtype=all&abstracts=show&order=-announced_date_first&size=50&start=0'
response = get(url,verify = False)
soup = BeautifulSoup(response.content, "lxml")
#print(soup)
queryresults = soup.find_all("li", attrs={"class": "arxiv-result"})
for result in queryresults:
title = result.find("p",attrs={"class": "title is-5 mathjax"})
print(title.text)
# If you need the full abstract content, try this (no need to click the "more" button)
for result in queryresults:
abstractFullContent = result.find("span",attrs={"class": "abstract-full has-text-grey-dark mathjax"})
print(abstractFullContent.text)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> Interpretable Deep Learning for Automatic Diagnosis of 12-lead Electrocardiogram
Leveraging Technology for Healthcare and Retaining Access to Personal Health Data to Enhance Personal Health and Well-being
Towards new forms of particle sensing and manipulation and 3D imaging on a smartphone for healthcare applications
</code></pre> | 2020-10-21 15:46:24.147000+00:00 | 2020-10-21 18:32:44.437000+00:00 | 2020-10-21 18:32:44.437000+00:00 | null | 64,465,823 | <p>I'm trying to scrape <a href="https://arxiv.org/search/?query=healthcare&searchtype=allI" rel="nofollow noreferrer">https://arxiv.org/search/?query=healthcare&searchtype=allI</a> using Selenium and Python. The for loop takes too long to execute. I tried scraping with headless browsers and PhantomJS, but it doesn't scrape the abstract field (I need the abstract field expanded, with the "More" button clicked).</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import selenium
import re
import time
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver import Firefox
browser = Firefox()
url_healthcare = 'https://arxiv.org/search/?query=healthcare&searchtype=all'
browser.get(url_healthcare)
dfs = []
for i in range(1, 39):
articles = browser.find_elements_by_tag_name('li[class="arxiv-result"]')
for article in articles:
title = article.find_element_by_tag_name('p[class="title is-5 mathjax"]').text
arxiv_id = article.find_element_by_tag_name('a').text.replace('arXiv:','')
arxiv_link = article.find_elements_by_tag_name('a')[0].get_attribute('href')
pdf_link = article.find_elements_by_tag_name('a')[1].get_attribute('href')
authors = article.find_element_by_tag_name('p[class="authors"]').text.replace('Authors:','')
try:
link1 = browser.find_element_by_link_text('▽ More')
link1.click()
except:
time.sleep(0.1)
abstract = article.find_element_by_tag_name('p[class="abstract mathjax"]').text
date = article.find_element_by_tag_name('p[class="is-size-7"]').text
date = re.split(r"Submitted|;",date)[1]
tag = article.find_element_by_tag_name('div[class="tags is-inline-block"]').text.replace('\n', ',')
try:
doi = article.find_element_by_tag_name('div[class="tags has-addons"]').text
doi = re.split(r'\s', doi)[1]
except NoSuchElementException:
doi = 'None'
all_combined = [title, arxiv_id, arxiv_link, pdf_link, authors, abstract, date, tag, doi]
dfs.append(all_combined)
print('Finished Extracting Page:', i)
try:
link2 = browser.find_element_by_class_name('pagination-next')
link2.click()
except:
        browser.close()
time.sleep(0.1)
</code></pre> | 2020-10-21 14:30:39.637000+00:00 | 2020-10-21 21:43:51.140000+00:00 | 2020-10-21 21:43:51.140000+00:00 | python|selenium|web-scraping|optimization | [] | 0 |
64,470,329 | <p>The following implementation achieves this in <strong>16 seconds</strong>.</p>
<p>To speed up the execution process I have taken the following measures:</p>
<ul>
<li>Removed <code>Selenium</code> entirely (No clicking required)</li>
<li>For <code>abstract</code>, used <code>BeautifulSoup</code>'s output and processed it later</li>
<li>Added <code>multiprocessing</code> to speed up the process significantly</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Process, Manager
import requests
from bs4 import BeautifulSoup
import re
import time
start_time = time.time()
def get_no_of_pages(showing_text):
no_of_results = int((re.findall(r"(\d+,*\d+) results for all",showing_text)[0].replace(',','')))
pages = no_of_results//200 + 1
print("total pages:",pages)
return pages
def clean(text):
return text.replace("\n", '').replace(" ",'')
def get_data_from_page(url,page_number,data):
print("getting page",page_number)
response = requests.get(url+"start="+str(page_number*200))
soup = BeautifulSoup(response.content, "lxml")
arxiv_results = soup.find_all("li",{"class","arxiv-result"})
for arxiv_result in arxiv_results:
paper = {}
paper["titles"]= clean(arxiv_result.find("p",{"class","title is-5 mathjax"}).text)
links = arxiv_result.find_all("a")
paper["arxiv_ids"]= links[0].text.replace('arXiv:','')
paper["arxiv_links"]= links[0].get('href')
paper["pdf_link"]= links[1].get('href')
paper["authors"]= clean(arxiv_result.find("p",{"class","authors"}).text.replace('Authors:',''))
split_abstract = arxiv_result.find("p",{"class":"abstract mathjax"}).text.split("▽ More\n\n\n",1)
if len(split_abstract) == 2:
paper["abstract"] = clean(split_abstract[1].replace("△ Less",''))
else:
paper["abstract"] = clean(split_abstract[0].replace("△ Less",''))
paper["date"] = re.split(r"Submitted|;",arxiv_results[0].find("p",{"class":"is-size-7"}).text)[1]
paper["tag"] = clean(arxiv_results[0].find("div",{"class":"tags is-inline-block"}).text)
doi = arxiv_results[0].find("div",{"class":"tags has-addons"})
if doi is None:
paper["doi"] = "None"
else:
paper["doi"] = re.split(r'\s', doi.text)[1]
data.append(paper)
print(f"page {page_number} done")
if __name__ == "__main__":
url = 'https://arxiv.org/search/?searchtype=all&query=healthcare&abstracts=show&size=200&order=-announced_date_first&'
response = requests.get(url+"start=0")
soup = BeautifulSoup(response.content, "lxml")
with Manager() as manager:
data = manager.list()
processes = []
get_data_from_page(url,0,data)
showing_text = soup.find("h1",{"class":"title is-clearfix"}).text
for i in range(1,get_no_of_pages(showing_text)):
p = Process(target=get_data_from_page, args=(url,i,data))
p.start()
processes.append(p)
for p in processes:
p.join()
print("Number of entires scraped:",len(data))
stop_time = time.time()
print("Time taken:", stop_time-start_time,"seconds")
</code></pre>
<p>Output:</p>
<pre><code>>>> python test.py
</code></pre>
<pre><code>getting page 0
page 0 done
total pages: 10
getting page 1
getting page 4
getting page 2
getting page 6
getting page 5
getting page 3
getting page 7
getting page 9
getting page 8
page 9 done
page 4 done
page 1 done
page 6 done
page 2 done
page 7 done
page 3 done
page 5 done
page 8 done
Number of entires scraped: 1890
Time taken: 15.911492586135864 seconds
</code></pre>
<p>Note:</p>
<ul>
<li>Please write the above code in a <code>.py</code> file. For Jupyter notebook refer <a href="https://stackoverflow.com/a/47374811/11573842">this</a>.</li>
<li>Multiprocessing code taken from <a href="https://stackoverflow.com/a/42490484/11573842">here</a>.</li>
<li>The ordering of entries in the <code>data</code> list won't match the ordering on the website as <code>Manager</code> will append <code>dictionaries</code> into it as they come.</li>
<li>The above code finds the number of pages on its own and is thus generalized to work on any arxiv search result. Unfortunately, to do this it first <code>gets</code> <code>page 0</code> and then calculates the <code>number of pages</code> and then goes for <code>multiprocessing</code> for the remaining pages. This has the disadvantage that while the <code>0th page</code> was being worked on, no other process was running. So if you remove that part and simply run the loop for <code>10 pages</code> then the time taken should fall at around <strong>8 seconds</strong>.</li>
</ul> | 2020-10-21 19:20:59.083000+00:00 | 2020-10-21 20:51:11.797000+00:00 | 2020-10-21 20:51:11.797000+00:00 | null | 64,465,823 | <p>I'm trying to scrape <a href="https://arxiv.org/search/?query=healthcare&searchtype=allI" rel="nofollow noreferrer">https://arxiv.org/search/?query=healthcare&searchtype=allI</a> using Selenium and Python. The for loop takes too long to execute. I tried scraping with headless browsers and PhantomJS, but it doesn't scrape the abstract field (I need the abstract field expanded, with the "More" button clicked).</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import selenium
import re
import time
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver import Firefox
browser = Firefox()
url_healthcare = 'https://arxiv.org/search/?query=healthcare&searchtype=all'
browser.get(url_healthcare)
dfs = []
for i in range(1, 39):
articles = browser.find_elements_by_tag_name('li[class="arxiv-result"]')
for article in articles:
title = article.find_element_by_tag_name('p[class="title is-5 mathjax"]').text
arxiv_id = article.find_element_by_tag_name('a').text.replace('arXiv:','')
arxiv_link = article.find_elements_by_tag_name('a')[0].get_attribute('href')
pdf_link = article.find_elements_by_tag_name('a')[1].get_attribute('href')
authors = article.find_element_by_tag_name('p[class="authors"]').text.replace('Authors:','')
try:
link1 = browser.find_element_by_link_text('▽ More')
link1.click()
except:
time.sleep(0.1)
abstract = article.find_element_by_tag_name('p[class="abstract mathjax"]').text
date = article.find_element_by_tag_name('p[class="is-size-7"]').text
date = re.split(r"Submitted|;",date)[1]
tag = article.find_element_by_tag_name('div[class="tags is-inline-block"]').text.replace('\n', ',')
try:
doi = article.find_element_by_tag_name('div[class="tags has-addons"]').text
doi = re.split(r'\s', doi)[1]
except NoSuchElementException:
doi = 'None'
all_combined = [title, arxiv_id, arxiv_link, pdf_link, authors, abstract, date, tag, doi]
dfs.append(all_combined)
print('Finished Extracting Page:', i)
try:
link2 = browser.find_element_by_class_name('pagination-next')
link2.click()
except:
        browser.close()
time.sleep(0.1)
</code></pre> | 2020-10-21 14:30:39.637000+00:00 | 2020-10-21 21:43:51.140000+00:00 | 2020-10-21 21:43:51.140000+00:00 | python|selenium|web-scraping|optimization | ['https://stackoverflow.com/a/47374811/11573842', 'https://stackoverflow.com/a/42490484/11573842'] | 2 |
72,646,084 | <p>I think that your statement about the number of predictions of the network could be misleading. Assuming a 13 x 13 grid and 5 anchor boxes, the output of the network has, as I understand it, the following shape: 13 x 13 x 5 x (4 + 1 + nbOfClasses)</p>
<ul>
<li>13 x 13: the grid</li>
<li>x 5: the anchors</li>
<li>x (4 + 1 + nbOfClasses): (x, y)-coordinates of the center of the bounding box (in the coordinate system of each cell), (w, h)-deviations of the bounding box relative to the prior anchor boxes, an objectness (confidence) score, and a softmax-activated class vector indicating a probability for each class.</li>
</ul>
<p>If you want more information about how the anchor priors are determined, you can take a look at the original paper on arXiv: <a href="https://arxiv.org/pdf/1612.08242.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1612.08242.pdf</a>.</p> | 2022-06-16 12:44:28.437000+00:00 | 2022-06-16 12:44:28.437000+00:00 | null | null | 52,710,248 | <p>I have gone through a couple of <code>YOLO</code> tutorials but I am finding it somewhat hard to figure out whether the anchor boxes for each cell the image is divided into are predetermined. In one of the guides I went through, the image was divided into <strong>13x13</strong> cells and it stated each cell predicts <strong>5</strong> anchor boxes (bigger than it; OK, here's my first problem, because it also says it would first detect what object is present in the small cell before predicting the boxes).</p>
<p>How can the small cell predict anchor boxes for an object bigger than itself? Also, it's said that each cell classifies before predicting its anchor boxes; how can the small cell classify the right object without querying neighbouring cells if only a small part of the object falls within the cell?</p>
<p><code>E.g.</code> say one of the <strong>13</strong> cells contains only the white pocket part of a man wearing a T-shirt; how can that cell correctly classify that a man is present without being linked to its neighbouring cells? With a normal CNN, when trying to localize a single object, I know the bounding box prediction relates to the whole image, so at least I can say the network has an idea of what's going on everywhere in the image before deciding where the box should be.</p>
<p><strong>PS:</strong> What I currently think of how YOLO works is basically that each cell is assigned predetermined anchor boxes with a classifier at each end, before the boxes with the highest scores for each class are then selected, but I am sure it doesn't add up somewhere.</p>
<blockquote>
<p><strong>UPDATE:</strong> Made a mistake with this question, it should have been about how regular bounding boxes were decided rather than anchor/prior boxes. So I am marking <code>@craq</code>'s answer as correct because that's how anchor boxes are decided according to the YOLO v2 paper</p>
</blockquote> | 2018-10-08 21:18:50.980000+00:00 | 2022-06-16 12:44:28.437000+00:00 | 2019-10-29 15:41:35.233000+00:00 | deep-learning|artificial-intelligence|object-detection|yolo | ['https://arxiv.org/pdf/1612.08242.pdf'] | 1 |
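For concreteness, here is a hedged NumPy sketch of decoding a YOLOv2-style output tensor, where each anchor carries 4 box terms, an objectness score and the class scores, following the YOLOv2 paper. The anchor prior values below are illustrative (the tiny-YOLOv2 VOC set), and the "network output" is random noise standing in for real activations:

```python
import numpy as np

S, B, C = 13, 5, 20  # grid size, anchors per cell, number of classes
# Illustrative anchor priors (w, h) in cell units.
anchors = np.array([[1.08, 1.19], [3.42, 4.41], [6.63, 11.38],
                    [9.42, 5.11], [16.62, 10.52]])
raw = np.random.default_rng(0).normal(size=(S, S, B, 4 + 1 + C))

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

cy, cx = np.meshgrid(np.arange(S), np.arange(S), indexing="ij")
# Box centers: the sigmoid keeps the offset inside the responsible cell,
# which is why a small cell can still "own" a large object's center.
bx = (sigmoid(raw[..., 0]) + cx[..., None]) / S
by = (sigmoid(raw[..., 1]) + cy[..., None]) / S
# Box sizes: exponential scaling of the anchor priors, so boxes can be
# much larger than the cell itself.
bw = anchors[:, 0] * np.exp(raw[..., 2]) / S
bh = anchors[:, 1] * np.exp(raw[..., 3]) / S
objectness = sigmoid(raw[..., 4])
class_scores = raw[..., 5:]
class_probs = np.exp(class_scores) / np.exp(class_scores).sum(-1, keepdims=True)
```

Note how the decoded centers always stay inside the image while the widths and heights are free to exceed one cell.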
63,279,256 | <p>You should have a look at XlsxWriter, a module for creating Excel files.
Your code could then look like this:</p>
<pre><code>import xlsxwriter
from refextract import extract_references_from_url
workbook = xlsxwriter.Workbook('References.xlsx')
worksheet = workbook.add_worksheet()
references = extract_references_from_url('https://arxiv.org/pdf/1503.07589.pdf')
# write each extracted reference on its own row
for row, ref in enumerate(references):
    worksheet.write(row, 0, str(ref))
workbook.close()
</code></pre>
<p>(modified based upon <a href="https://xlsxwriter.readthedocs.io/tutorial01.html" rel="nofollow noreferrer">https://xlsxwriter.readthedocs.io/tutorial01.html</a>)</p> | 2020-08-06 08:02:17.340000+00:00 | 2020-08-06 08:02:17.340000+00:00 | null | null | 63,279,088 | <p>I'm a novice in python and I need to extract references from scientific literature. Following is the code I'm using</p>
<pre><code>from refextract import extract_references_from_url
references = extract_references_from_url('https://arxiv.org/pdf/1503.07589.pdf')
print(references)
</code></pre>
<p>So, please guide me on how to extract this printed information into an XLS file. Thank you so much.</p> | 2020-08-06 07:50:56.593000+00:00 | 2020-08-24 13:51:20.237000+00:00 | null | python|xlsx|xls|python-3.8 | ['https://xlsxwriter.readthedocs.io/tutorial01.html'] | 1
63,279,160 | <p>You could use the pandas library to write the references into excel.</p>
<pre><code>from refextract import extract_references_from_url
import pandas as pd
references = extract_references_from_url('https://arxiv.org/pdf/1503.07589.pdf')
print(references)
# convert to pandas dataframe
dfref = pd.DataFrame(references)
# write dataframe into excel
dfref.to_excel('./refs.xlsx')
</code></pre> | 2020-08-06 07:55:12.303000+00:00 | 2020-08-24 13:51:20.237000+00:00 | 2020-08-24 13:51:20.237000+00:00 | null | 63,279,088 | <p>I'm a novice in python and I need to extract references from scientific literature. Following is the code I'm using</p>
<pre><code>from refextract import extract_references_from_url
references = extract_references_from_url('https://arxiv.org/pdf/1503.07589.pdf')
print(references)
</code></pre>
<p>So, please guide me on how to extract this printed information into an XLS file. Thank you so much.</p> | 2020-08-06 07:50:56.593000+00:00 | 2020-08-24 13:51:20.237000+00:00 | null | python|xlsx|xls|python-3.8 | [] | 0
34,272,580 | <p>The length of the predicted list is indeed not differentiable. You need to add an extra softmax output to the model predicting the length of the list, or add many sigmoid outputs predicting which entries should be included.</p>
<p>I wrote a paper about transcribing variable-length text sequences from images, and the appendix goes into a lot of detail with a worked example for how the math works:
<a href="http://arxiv.org/abs/1312.6082" rel="nofollow">http://arxiv.org/abs/1312.6082</a></p> | 2015-12-14 16:59:29.977000+00:00 | 2015-12-14 16:59:29.977000+00:00 | null | null | 34,247,661 | <p>I am trying to predict medications given to patients. For each medication I have a column in the predictions (through softmax) indicating the probability that the patient will get this medication.</p>
<p>But obviously people can get several meds at once, so I have another model that tries to predict the number of different medications given.</p>
<p>I would like to evaluate them in a single TensorFlow call (I currently have a bunch of slow NumPy hacks), but I can't pass <code>tensorflow.nn.top_k</code> an array of <code>k</code>s (one for each patient, i.e. row), only a fixed integer - which doesn't work because different patients will get different numbers of meds.<br>
Ultimately I'm trying to <code>tensorflow.list_diff</code> between the actually prescribed medication indices and the predicted ones. And then maybe the <code>tensorflow.size</code> of it.</p>
<pre><code>tensorflow.list_diff(
tensorflow.where( # get indices of medications
tensorflow.equal(medication_correct_answers, 1) # convert 1 to True
),
tensorflow.nn.top_k( # get most likely medications
medication_soft_max, # medication model
tensorflow.argmax(count_soft_max, 1) # predicted count
)[1] # second element are the indices
)[:, 0] # get unmatched medications elements
</code></pre>
<p><em>Bonus question: Would it be possible to train a model directly on this instead of two seperate cross entropies? It doesn't really look differentiable to me - or do only the underlying softmaxes need to be differentiable?</em></p> | 2015-12-13 03:25:51.880000+00:00 | 2015-12-14 16:59:29.977000+00:00 | null | python|tensorflow | ['http://arxiv.org/abs/1312.6082'] | 1 |
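Outside the graph, the per-row variable-k selection the question is after is straightforward to express; here is a NumPy sketch of that evaluation step (all numbers are made up), leaving the differentiability concern to the extra count head described in the answer:

```python
import numpy as np

# Made-up example: 2 patients, 4 possible medications.
probs = np.array([[0.5, 0.1, 0.3, 0.1],
                  [0.2, 0.4, 0.1, 0.3]])   # per-medication probabilities
counts = np.array([1, 2])                   # predicted number of meds per patient

order = np.argsort(-probs, axis=1)          # columns sorted by descending probability
predicted = [set(row[:k]) for row, k in zip(order, counts)]

actual = [{0}, {1, 2}]                      # medications actually prescribed
missed = [a - p for a, p in zip(actual, predicted)]   # like list_diff, per patient
```

The equivalent in-graph trick is usually to take `top_k` with the maximum k and mask out the tail per row, since `top_k` itself only accepts a scalar k.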
61,239,777 | <p>If you can train two networks, to the same accuracy, but one of them only needs to process half as much data, then yes that is a good thing.</p>
<p>The resulting network will not be any faster to execute during inference time, but there are still several important benefits to the training process.</p>
<ul>
<li>Training will take half as long. This is valuable by itself. It is extra valuable when you consider that you can now try twice as many ideas in the same amount of time. That will improve results quality for the entire process.</li>
<li>Faster convergence can reduce generalization error and overfitting. The optimization does not have as many opportunities to "fidget" and find opportunities to overfit.</li>
<li>Extremely fast convergence, called <a href="https://arxiv.org/abs/1708.07120" rel="nofollow noreferrer">super-convergence</a>, can improve the final <em>training</em> error while still keeping generalization error low, leading to better validation scores too.</li>
</ul>
<p>Speaking more generally, there is a lot of research and other activity on the topic of how to make networks train as quickly and cheaply as possible. One such benchmark is <a href="https://dawn.cs.stanford.edu/benchmark/" rel="nofollow noreferrer">DAWNBench</a>, which sets a target accuracy to achieve and then ranks approaches based on how fast they reach that target, and how much the GPUs or other infrastructure cost to do it.</p>
<p>This general idea of "cost reduction" is also one of the drivers behind the general idea of <a href="https://en.wikipedia.org/wiki/Transfer_learning" rel="nofollow noreferrer">Transfer Learning</a>.</p> | 2020-04-15 22:35:52.650000+00:00 | 2020-04-15 22:35:52.650000+00:00 | null | null | 61,231,219 | <p>I want to know how beneficial it would be if we could reduce the number of back propagation steps by 50%.</p>
<p>For example, let's say a neural network performed back propagation 1000 times for training, and another neural network performs back propagation 500 times to get trained (let's assume that both of them give the same accuracy after training). Will the second one be significantly faster? Or does it not matter much? It would increase the speed of training.</p> | 2020-04-15 14:25:48.333000+00:00 | 2020-04-18 03:08:58.380000+00:00 | 2020-04-18 03:08:58.380000+00:00 | optimization|deep-learning|neural-network | ['https://arxiv.org/abs/1708.07120', 'https://dawn.cs.stanford.edu/benchmark/', 'https://en.wikipedia.org/wiki/Transfer_learning'] | 3
42,383,358 | <p>Found the problem: I was initializing my weights according to a Gaussian distribution with standard deviation 0.001. This worked for the original SRCNN paper because it had fewer layers, but in my deeper network it was causing the gradient to vanish. The initialization scheme I ended up using comes from <a href="https://arxiv.org/pdf/1502.01852.pdf" rel="nofollow noreferrer">this paper</a> on PReLU optimization.</p> | 2017-02-22 05:26:40.470000+00:00 | 2017-02-22 05:26:40.470000+00:00 | null | null | 42,381,661 | <p>So I'm currently trying to implement a fast super-resolution CNN (<a href="https://arxiv.org/pdf/1608.00367.pdf" rel="nofollow noreferrer">this paper</a>) by modifying <a href="https://github.com/tegg89/SRCNN-Tensorflow" rel="nofollow noreferrer">this repository</a> (a TensorFlow implementation of the original super-resolution CNN).</p>
<p>The problem is that the network instantly reaches a high loss after a few epochs and then stops learning immediately, no matter how many times I reset the network it always converges to the exact same high loss. If I try to feed-forward an image the result ends up being a shade of gray.</p>
<p>On the other hand though if I hook up the first convolution layer directly to the final deconvolution layer the network actually trains and feed-forwarding creates a new up-scaled image. Of course this network is too shallow to actually learn any real features though.</p>
<p>So what I'm wondering is what's going wrong between my first convolution layer conv1 and my last layer conv8?</p>
<p>These are the network layers:</p>
<pre><code># Feature Extraction
conv1 = prelu(tf.nn.conv2d(self.images, self.weights['w1'], strides=[1,1,1,1], padding='SAME') + self.biases['b1'], 1)
# Shrinking
conv2 = prelu(tf.nn.conv2d(conv1, self.weights['w2'], strides=[1,1,1,1], padding='SAME') + self.biases['b2'], 2)
# Mapping
conv3 = prelu(tf.nn.conv2d(conv2, self.weights['w3'], strides=[1,1,1,1], padding='SAME') + self.biases['b3'], 3)
conv4 = prelu(tf.nn.conv2d(conv3, self.weights['w4'], strides=[1,1,1,1], padding='SAME') + self.biases['b4'], 4)
conv5 = prelu(tf.nn.conv2d(conv4, self.weights['w5'], strides=[1,1,1,1], padding='SAME') + self.biases['b5'], 5)
conv6 = prelu(tf.nn.conv2d(conv5, self.weights['w6'], strides=[1,1,1,1], padding='SAME') + self.biases['b6'], 6)
# Expanding
conv7 = prelu(tf.nn.conv2d(conv6, self.weights['w7'], strides=[1,1,1,1], padding='SAME') + self.biases['b7'], 7)
# Deconvolution
deconv_output = [self.batch_size, self.label_size, self.label_size, 1]
deconv_stride = [1, self.scale, self.scale, self.c_dim]
conv8 = tf.nn.conv2d_transpose(conv7, self.weights['w8'], output_shape=deconv_output, strides=deconv_stride, padding='SAME') + self.biases['b8']
</code></pre>
<p>With their respective weights and biases:</p>
<pre><code>self.weights = {
'w1': tf.Variable(tf.random_normal([5, 5, 1, 56], stddev=1e-3), name='w1'),
'w2': tf.Variable(tf.random_normal([1, 1, 56, 12], stddev=1e-3), name='w2'),
'w3': tf.Variable(tf.random_normal([3, 3, 12, 12], stddev=1e-3), name='w3'),
'w4': tf.Variable(tf.random_normal([3, 3, 12, 12], stddev=1e-3), name='w4'),
'w5': tf.Variable(tf.random_normal([3, 3, 12, 12], stddev=1e-3), name='w5'),
'w6': tf.Variable(tf.random_normal([3, 3, 12, 12], stddev=1e-3), name='w6'),
'w7': tf.Variable(tf.random_normal([1, 1, 12, 56], stddev=1e-3), name='w7'),
'w8': tf.Variable(tf.random_normal([9, 9, 1, 56], stddev=1e-3), name='w8')
}
self.biases = {
'b1': tf.Variable(tf.zeros([56]), name='b1'),
'b2': tf.Variable(tf.zeros([12]), name='b2'),
'b3': tf.Variable(tf.zeros([12]), name='b3'),
'b4': tf.Variable(tf.zeros([12]), name='b4'),
'b5': tf.Variable(tf.zeros([12]), name='b5'),
'b6': tf.Variable(tf.zeros([12]), name='b6'),
'b7': tf.Variable(tf.zeros([56]), name='b7'),
'b8': tf.Variable(tf.zeros([1]), name='b8')
}
</code></pre>
<p>Thank you!</p> | 2017-02-22 02:33:30.383000+00:00 | 2017-02-22 05:26:40.470000+00:00 | null | machine-learning|tensorflow|neural-network | ['https://arxiv.org/pdf/1502.01852.pdf'] | 1 |
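For reference, the PReLU-paper initialization mentioned in the answer (He et al., the linked arXiv paper) sets each layer's Gaussian standard deviation from its fan-in. A small sketch, with the tensor layout following the question's weight shapes:

```python
import numpy as np

def he_std(shape, a=0.0):
    """He et al. (2015) std-dev for layers followed by (P)ReLU:
    sqrt(2 / ((1 + a**2) * fan_in)); `a` is the PReLU slope (0 for plain ReLU).
    `shape` is (kh, kw, in_channels, out_channels)."""
    kh, kw, in_ch, _ = shape
    fan_in = kh * kw * in_ch
    return np.sqrt(2.0 / ((1.0 + a ** 2) * fan_in))

# e.g. the 3x3x12x12 mapping layers from the question
shape = (3, 3, 12, 12)
std = he_std(shape)
w = np.random.default_rng(0).normal(0.0, std, size=shape)
```

Compare this to the fixed stddev=1e-3 used in the question: for a 3x3x12 fan-in, He initialization gives a stddev roughly 100x larger, which is why the deeper network's gradients stop vanishing.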
33,013,840 | <ol>
<li><p>Well, the first choice that you mentioned corresponds to a very challenging task in the computer vision community: fine-grained image classification, where you want to classify the subordinates of a base class, say Car. To get more info on this, you may see <a href="http://arxiv.org/pdf/1411.6447" rel="nofollow">this paper</a>.
According to the literature on image classification, classifying high-level classes such as car/truck is much simpler for CNNs to learn, since more discriminative features may exist. I suggest following the second approach, that is, classifying all types of cars vs. trucks and so on.</p></li>
<li><p>The number of training samples needed is mainly proportional to the number of parameters; that is, if you want to train a shallow model, far fewer samples are required. It also depends on your decision to fine-tune a pre-trained model or train a network from scratch. When sufficient samples are not available, you have to fine-tune a model on your task. </p></li>
<li><p>Wrestling with over-fitting has always been a problematic issue in machine learning, and even CNNs are not free of it. Within the literature, some practical suggestions have been introduced to reduce the occurrence of over-fitting, such as dropout layers and data-augmentation procedures.</p></li>
</ol>
<p>Though not included in your questions, it seems that you should follow the fine-tuning procedure, that is, initializing the network with the pre-trained weights of a model from another task (say ILSVRC 201X) and adapting the weights according to your new task. This procedure is known as transfer learning (and sometimes domain adaptation) in the community.</p>
<ol>
<li><p>Are there any best practices in organizing classes for training a
CNN? i.e. number of classes and number of samples for each class?
For example, would I be better off this way:</p>
<ul>
<li>(a) Vehicles - Car-Sedans/Car-Hatchback/Car-SUV/Truck-18-wheeler/.... (note this could mean several thousand classes), or </li>
<li>(b) have a higher level
model that classifies between car/truck/2-wheeler and so on...
and if car type then query the Car Model to get the car type<br>
(sedan/hatchback etc)</li>
</ul></li>
<li><p>How many training images per class is a typical best practice? I know there are several other variables that affect the accuracy of
the CNN, but what rough number is good to shoot for in each class?
Should it be a function of the number of classes in the model? For
example, if I have many classes in my model, should I provide more
samples per class?</p></li>
<li><p>How do we ensure we are not overfitting to class? Is there way to measure heterogeneity in training samples for a class?</p></li>
</ol>
<p>Thanks in advance.</p> | 2015-10-06 09:54:44.823000+00:00 | 2015-10-10 09:23:45.330000+00:00 | null | machine-learning|computer-vision|deep-learning|caffe | ['http://arxiv.org/pdf/1411.6447'] | 1 |
55,250,993 | <p>Read this: <a href="https://arxiv.org/pdf/1811.02308.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1811.02308.pdf</a>.
It has the math of the adaptive bilateral filter. Let me know if you need help.</p>
<p>The kernel size is used for the local variance calculation, and where pixels will contribute (in a weighted manner).</p>
<p>sigmaSpace is the filter sigma in the coordinate space. A larger value of the parameter means that pixels farther apart will influence each other.</p>
<p>For Example:
img = cv2.bilateralFilter(image, 20, 5)</p> | 2019-03-19 22:37:37.257000+00:00 | 2019-03-19 22:37:37.257000+00:00 | null | null | 45,137,319 | <p>I want to use adaptive bilateral filter in python using opencv. But I am not able to understand how to put the parameters or what should be the values. This is what I found in OpenCV 2.4 documentation.</p>
<p>cv2.adaptiveBilateralFilter(src, ksize, sigmaSpace[, dst[, maxSigmaColor[, anchor[, borderType]]]])</p>
<p>Can anybody give me an example of how to use this function?</p> | 2017-07-17 06:29:07.553000+00:00 | 2019-03-19 22:37:37.257000+00:00 | null | python-2.7|opencv|imagefilter|noise-reduction | ['https://arxiv.org/pdf/1811.02308.pdf'] | 1
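To make the sigmaSpace/sigmaColor interplay discussed above concrete, here is a brute-force pure-NumPy sketch of a (non-adaptive) bilateral filter. It mirrors what `cv2.bilateralFilter` does conceptually, but it is not OpenCV's implementation, and the parameter values are illustrative:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_space=2.0, sigma_color=25.0):
    """Brute-force bilateral filter on a 2-D grayscale array."""
    img = img.astype(np.float64)
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # spatial weights: nearby pixels count more (controlled by sigma_space)
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_space ** 2))
    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weights: similar intensities count more (sigma_color),
            # which is what preserves edges while smoothing flat regions
            rangew = np.exp(-(patch - img[i, j]) ** 2 / (2.0 * sigma_color ** 2))
            weight = spatial * rangew
            out[i, j] = (weight * patch).sum() / weight.sum()
    return out
```

On a step edge of 100 gray levels with sigma_color=25, the cross-edge range weights are essentially zero, so the edge survives while noise within each side is averaged away.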