a_id (int64) | a_body (string) | a_creation_date (string) | a_last_activity_date (string) | a_last_edit_date (string) | a_tags (float64) | q_id (int64) | q_body (string) | q_creation_date (string) | q_last_activity_date (string) | q_last_edit_date (string) | q_tags (string) | _arxiv_links (string) | _n_arxiv_links (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
12,375,243 | <p>Your algorithm loses information by compressing the string's original data set, so I'm not sure you can recover the full information set without doing far more work than comparing the original string. Also, while your data set appears easier for human readability, it currently takes up as much space as the original string, and a difference map of the string (where the values are the distance between the prior character and the current character) may have a more comparable information set.</p>
<p>However, as to how you can detect all common patterns, you should look at Longest Common Subsequence (LCS) algorithms to find the largest matching pattern. It is a well-defined algorithm and is efficient -- O(n * m), where n and m are the lengths of the strings. See <a href="https://stackoverflow.com/questions/8257655/lcs-algorithm-example">LCS on SO</a> and <a href="http://en.wikipedia.org/wiki/Longest_common_subsequence_problem" rel="nofollow noreferrer">Wikipedia</a>. If you also want to see patterns which wrap around a string (as a circular string -- where <code>abeab</code> and <code>eabab</code> should match) then you'll need a circular LCS, which is described in a paper by <a href="http://arxiv.org/abs/1208.0396" rel="nofollow noreferrer">Andy Nguyen</a>.</p>
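<p>For reference, here is a minimal sketch of the standard (non-circular) LCS length computation in Python. It works on any pair of sequences, so you can feed it either the raw strings or your lists of symbol indices (the names below are only illustrative):</p>
<pre><code>def lcs_length(a, b):
    # dp[i][j] = length of the longest common subsequence of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend the match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

print(lcs_length("babcbc", "dababd"))  # 3 (e.g. "bab")
</code></pre>
<p>Recovering the matching pattern itself is the usual backtrack over the <code>dp</code> table, and it is this table that the extra dimensions described below would be added to.</p>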
<p>You'll need to change the algorithm slightly to account for the number of variations so far. My advice would be to add two additional dimensions to the LCS table, representing the number of unique numbers encountered in the past k characters of both original strings, along with your compressed version of each string. Then you could do an LCS solve where you are always moving in the direction which matches on your compressed strings AND matches the same number of unique characters in both strings for the past k characters. This should encode all possible unique substring matches.</p>
<p>The tricky part will be always choosing the direction which maximizes the k that contains the same number of unique characters. Thus at each element of the LCS table you'll have an additional search for the best value of k. Since a longer sequence always contains all possible smaller sequences, if you maximize your choice of k during each step you know that the best k on the next iteration is at most 1 step away, so once the 4D table is filled out it should be solvable in a similar fashion to the original LCS table. Note that because you have a 4D table the logic does get more complicated, but if you read how LCS works you'll be able to see how you can define consistent rules for moving towards the upper left corner at each step. Thus the LCS algorithm stays the same, just scaled to more dimensions.</p>
<p>This solution is quite complicated once it's complete, so you may want to rethink what you're trying to achieve/if this pattern encodes the information you actually want before you start writing such an algorithm.</p> | 2012-09-11 17:53:07.390000+00:00 | 2012-09-11 17:53:07.390000+00:00 | 2017-05-23 10:27:16.340000+00:00 | null | 12,374,259 | <p>A string "abab" could be thought of as a pattern of indexed symbols "0101". And a string "bcbc" would also be represented by "0101". That's pretty nifty and makes for powerful comparisons, but it quickly falls apart out of perfect cases.</p>
<p>"babcbc" would be "010202". If I wanted to note that it contains a pattern equal to "0101" (the bcbc part), I can only think of doing some sort of normalization process at each index to "re-represent" the substring from n to length symbolically for comparison. And that gets complicated if I'm trying to see if "babcbc" and "dababd" (010202 vs 012120) have anything in common. So inefficient!</p>
<p>How could this be done efficiently, taking care of all possible nested cases? Note that I'm looking for similar patterns, not similar sub-strings in the actual text.</p> | 2012-09-11 16:41:03.330000+00:00 | 2015-11-11 17:49:03.370000+00:00 | 2015-11-11 17:49:03.370000+00:00 | string|algorithm|data-structures|pattern-matching | ['https://stackoverflow.com/questions/8257655/lcs-algorithm-example', 'http://en.wikipedia.org/wiki/Longest_common_subsequence_problem', 'http://arxiv.org/abs/1208.0396'] | 3 |
50,163,077 | <p>I am the author of the R package <b>optimParallel</b>, which could be helpful in your case. The package provides parallel versions of the gradient-based optimization methods of <code>optim()</code>. The main function of the package is <code>optimParallel()</code>, which has the same usage and output as <code>optim()</code>. Using <code>optimParallel()</code> can significantly reduce optimization times as illustrated in the following figure (<code>p</code> is the number of parameters).
<a href="https://i.stack.imgur.com/PiEZM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PiEZM.png" alt="enter image description here"></a>
See <a href="https://cran.r-project.org/package=optimParallel" rel="noreferrer">https://cran.r-project.org/package=optimParallel</a> and <a href="http://arxiv.org/abs/1804.11058" rel="noreferrer">http://arxiv.org/abs/1804.11058</a> for more information. </p> | 2018-05-03 20:06:20.017000+00:00 | 2018-05-03 20:06:20.017000+00:00 | null | null | 3,757,321 | <p>I am trying to use R to estimate a multinomial logit model with a manual specification. I have found a few packages that allow you to estimate MNL models <a href="http://hosho.ees.hokudai.ac.jp/~kubo/Rdoc/library/VGAM/html/multinomial.html" rel="noreferrer">here</a> or <a href="http://cran.r-project.org/web/packages/mlogit/index.html" rel="noreferrer">here</a>. </p>
<p>I've found some other writings on "rolling" your own MLE function <a href="http://www.mayin.org/ajayshah/KB/R/documents/mle/mle.html" rel="noreferrer">here</a>. However, from my digging around - all of these functions and packages rely on the internal <code>optim</code> function. </p>
<p>In my benchmark tests, <code>optim</code> is the bottleneck. Using a simulated dataset with ~16000 observations and 7 parameters, R takes around 90 seconds on my machine. The equivalent model in <a href="http://transp-or.epfl.ch/page63023.html" rel="noreferrer">Biogeme</a> takes ~10 seconds. A colleague who writes his own code in <a href="http://www.doornik.com/" rel="noreferrer">Ox</a> reports around 4 seconds for this same model.</p>
<p>Does anyone have experience with writing their own MLE function or can point me in the direction of something that is optimized beyond the default <code>optim</code> function (no pun intended)? </p>
<p>If anyone wants the R code to recreate the model, let me know - I'll gladly provide it. I haven't provided it since it isn't directly relevant to the problem of optimizing the <code>optim</code> function and to preserve space...</p>
<p><em>EDIT: Thanks to everyone for your thoughts. Based on a myriad of comments below, we were able to get R in the same ballpark as Biogeme for more complicated models, and R was actually faster for several smaller / simpler models that we ran. I think the long term solution to this problem is going to involve writing a separate maximization function that relies on a fortran or C library, but am certainly open to other approaches.</em></p> | 2010-09-21 04:32:29.253000+00:00 | 2018-05-03 20:06:20.017000+00:00 | 2010-09-23 02:35:59.973000+00:00 | optimization|r | ['https://i.stack.imgur.com/PiEZM.png', 'https://cran.r-project.org/package=optimParallel', 'http://arxiv.org/abs/1804.11058'] | 3 |
42,730,743 | <p>Here is the code for the Spearman correlation:</p>
<pre><code>predictions_rank = tf.nn.top_k(predictions_batch, k=samples, sorted=True, name='prediction_rank').indices
real_rank = tf.nn.top_k(real_outputs_batch, k=samples, sorted=True, name='real_rank').indices
rank_diffs = predictions_rank - real_rank
rank_diffs_squared_sum = tf.reduce_sum(rank_diffs * rank_diffs)
six = tf.constant(6)
one = tf.constant(1.0)
numerator = tf.cast(six * rank_diffs_squared_sum, dtype=tf.float32)
divider = tf.cast(samples * samples * samples - samples, dtype=tf.float32)
spearman_batch = one - numerator / divider
</code></pre>
<p>The problem with the Spearman correlation is that you need to use a sorting algorithm (<code>top_k</code> in my code), and there is no way to translate it into a loss value: there is no derivative of a sorting algorithm. You can use a normal correlation, but I think there is no mathematical difference from using the mean squared error.</p>
<p>I am working on this right now for images. From what I have read in papers, the way to add the ranking into the loss function is to compare 2 or 3 images (wherever I say images, you can substitute anything you want to rank).</p>
<p>Comparing two elements:</p>
<p><a href="https://i.stack.imgur.com/9CnfN.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9CnfN.gif" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/H5aHo.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H5aHo.gif" alt="enter image description here"></a></p>
<p>Where N is the total number of elements and α a margin value. I got this equation from <a href="https://arxiv.org/abs/1606.01621" rel="nofollow noreferrer">Photo Aesthetics Ranking Network with Attributes and Content Adaptation</a></p>
<p>You can also use losses with 3 elements, where you compare two of them that have a similar ranking against another one with a different ranking:</p>
<p><a href="https://i.stack.imgur.com/9M8a1.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9M8a1.gif" alt="enter image description here"></a></p>
<p>But in this equation you also need to add the direction of the ranking; more details are in <a href="https://arxiv.org/abs/1611.05203" rel="nofollow noreferrer">Will People Like Your Image?</a>. In the case of this paper they use a vector encoding instead of a real value, but you can do it for just a number too.</p>
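<p>As a rough illustration only (this is not the exact formula from either paper; <code>s_i</code>/<code>s_j</code> are predicted scores and <code>y_i</code>/<code>y_j</code> the corresponding ground-truth values, all assumed to be float tensors), a generic pairwise margin ranking loss looks like this:</p>
<pre><code>margin = 0.1  # plays the role of the alpha margin above; the value is arbitrary here
direction = tf.sign(y_i - y_j)  # +1 if element i should rank above j, -1 otherwise
pairwise_loss = tf.maximum(0.0, margin - direction * (s_i - s_j))
loss = tf.reduce_mean(pairwise_loss)
</code></pre>
<p>Unlike the Spearman expression above, this is differentiable almost everywhere, so it can be used directly as a training loss.</p>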
<p>In the case of images, the comparison between images makes more sense when those images are related. So it is a good idea to run a clustering algorithm to create (maybe?) 10 clusters, so you can use elements of the same cluster to make comparisons instead of very different things. This will help the network as the inputs are related somehow and not completely different.</p>
<p>As a side note you should know what is more important for you, if it is the final rank order or the rank value. If it is the value you should go with mean square error, if it is the rank order you can use the losses I wrote before. Or you can even combine them.</p>
<blockquote>
<p>How do you determine the size of the tensors you're looking at during runtime?</p>
</blockquote>
<p><code>tf.shape(tensor)</code> returns a tensor with the shape. Then you can use <code>tf.gather(tensor,index)</code> to get the value you want.</p> | 2017-03-11 02:46:22.683000+00:00 | 2017-03-11 02:46:22.683000+00:00 | null | null | 38,487,410 | <p>I'm working with extremely noisy data occasionally peppered with outliers, so I'm relying mostly on correlation as a measure of accuracy in my NN.</p>
<p>Is it possible to explictly use something like rank correlation (the Spearman correlation coefficient) as my cost function? Up to now, I've relied mostly on MSE as a proxy for correlation.</p>
<p>I have three major stumbling blocks right now:</p>
<p>1) The notion of ranking becomes much fuzzier with mini-batches.</p>
<p>2) How do you dynamically perform rankings? Will TensorFlow not have a gradient error/be unable to track how a change in a weight/bias affects the cost? </p>
<p>3) How do you determine the size of the tensors you're looking at during runtime?</p>
<p>For example, the code below is what I'd like to roughly do if I were to just use correlation. In practice, length needs to be passed in rather than determined at runtime.</p>
<pre><code>length = tf.shape(x)[1] ## Example code. This line not meant to work.
original_loss = -1 * length * tf.reduce_sum(tf.mul(x, y)) - (tf.reduce_sum(x) * tf.reduce_sum(y))
divisor = tf.sqrt(
(length * tf.reduce_sum(tf.square(x)) - tf.square(tf.reduce_sum(x))) *
(length * tf.reduce_sum(tf.square(y)) - tf.square(tf.reduce_sum(y)))
)
original_loss = tf.truediv(original_loss, divisor)
</code></pre> | 2016-07-20 17:48:41.643000+00:00 | 2017-03-11 02:46:22.683000+00:00 | null | python|tensorflow | ['https://i.stack.imgur.com/9CnfN.gif', 'https://i.stack.imgur.com/H5aHo.gif', 'https://arxiv.org/abs/1606.01621', 'https://i.stack.imgur.com/9M8a1.gif', 'https://arxiv.org/abs/1611.05203'] | 5 |
25,462,764 | <p>Naively, no; you need to transform each of the n terms of the product into the "Montgomery space", so you have n full reductions mod m, the same as the "usual" algorithm.</p>
<p>However, a factorial isn't just an arbitrary product of n terms; it's much more structured. In particular, if you already have the "Montgomerized" <code>kr mod m</code>, then you can use a very cheap reduction to get <code>(k+1)r mod m</code>.</p>
<p>So this is perfectly feasible, though I haven't seen it done before. I went ahead and wrote a quick-and-dirty implementation (very untested, I wouldn't trust it very far at all):</p>
<pre><code>// returns m^-1 mod 2**64 via clever 2-adic arithmetic (http://arxiv.org/pdf/1209.6626.pdf)
uint64_t inverse(uint64_t m) {
assert(m % 2 == 1);
uint64_t minv = 2 - m;
uint64_t m_1 = m - 1;
for (int i=1; i<6; i+=1) { m_1 *= m_1; minv *= (1 + m_1); }
return minv;
}
uint64_t montgomery_reduce(__uint128_t x, uint64_t minv, uint64_t m) {
return x + (__uint128_t)((uint64_t)x*-minv)*m >> 64;
}
uint64_t montgomery_multiply(uint64_t x, uint64_t y, uint64_t minv, uint64_t m) {
return montgomery_reduce(full_product(x, y), minv, m);
}
uint64_t montgomery_factorial(uint64_t x, uint64_t m) {
assert(x < m && m % 2 == 1);
uint64_t minv = inverse(m); // m^-1 mod 2**64
uint64_t r_mod_m = -m % m; // 2**64 mod m
uint64_t mont_term = r_mod_m;
uint64_t mont_result = r_mod_m;
for (uint64_t k=2; k<=x; k++) {
// Compute the montgomerized product term: kr mod m = (k-1)r + r mod m.
mont_term += r_mod_m;
if (mont_term >= m) mont_term -= m;
// Update the result by multiplying in the new term.
mont_result = montgomery_multiply(mont_result, mont_term, minv, m);
}
// Final reduction
return montgomery_reduce(mont_result, minv, m);
}
</code></pre>
<p>and benchmarked it against the usual implementation:</p>
<pre><code>__uint128_t full_product(uint64_t x, uint64_t y) {
return (__uint128_t)x*y;
}
uint64_t naive_factorial(uint64_t x, uint64_t m) {
assert(x < m);
uint64_t result = x ? x : 1;
while (x --> 2) result = full_product(result,x) % m;
return result;
}
</code></pre>
<p>and against the usual implementation with some inline asm to fix a minor inefficiency:</p>
<pre><code>uint64_t x86_asm_factorial(uint64_t x, uint64_t m) {
assert(x < m);
uint64_t result = x ? x : 1;
while (x --> 2) {
__asm__("mov %[result], %%rax; mul %[x]; div %[m]"
: [result] "+d" (result) : [x] "r" (x), [m] "r" (m) : "%rax", "flags");
}
return result;
}
</code></pre>
<p>Results were as follows on my Haswell laptop for reasonably large x:</p>
<pre><code>implementation speedup
---------------------------
naive 1.00x
x86_asm 1.76x
montgomery 5.68x
</code></pre>
<p>So this really does seem to be a pretty nice win. The codegen for the Montgomery implementation is pretty decent, but could probably be improved somewhat further with hand-written assembly as well.</p>
<p>This is an interesting approach for "modest" x and m. Once x gets large, the various approaches that have sub-linear complexity in x will necessarily win out; factorial has so much structure that this method doesn't take advantage of.</p> | 2014-08-23 14:04:56.070000+00:00 | 2014-08-23 22:50:26.847000+00:00 | 2014-08-23 22:50:26.847000+00:00 | null | 24,850,470 | <p>This question originates in a comment I almost wrote below <a href="https://stackoverflow.com/questions/24850272/big-number-factorial-modulo-big-prime-number">this question</a>, where Zack is computing the factorial of a large number modulo a large number (that we will assume to be prime for the sake of this question). Zack is using the traditional computation of factorial, taking the remainder at each multiplication.</p>
<p>I almost commented that an alternative to consider was <a href="http://www.hackersdelight.org/MontgomeryMultiplication.pdf" rel="nofollow noreferrer">Montgomery multiplication</a>, but thinking more about it, I have only seen this technique used to speed up several multiplications by the same multiplicand (in particular, to speed up the computation of a<sup>n</sup> mod p).</p>
<p>My question is: can Montgomery multiplication be used to speed up the computation of n! mod p for large n and p?</p> | 2014-07-20 12:24:55.717000+00:00 | 2014-08-23 22:50:26.847000+00:00 | 2017-05-23 12:13:09.037000+00:00 | modulo|factorial|modular-arithmetic|montgomery-multiplication | [] | 0 |
62,522,755 | <p>The Fabric model for transacting follows 'endorse' (or, execute), 'order', 'commit'. A proposal is sent for endorsement at one or more peers, if the endorsements are successful, the client assembles the proposal and endorsements into a transaction which is submitted to ordering. Once the transaction is ordered, the peers receive it in a block and perform validation (ensuring that the correct endorsements are present) and version control checks (ensuring that all of the inputs to execution are unmodified). For a detailed architectural review see <a href="https://arxiv.org/abs/1801.10228v2" rel="nofollow noreferrer">this paper</a>.</p>
<p>Event listeners are for 'commit events', which are emitted in that last stage where the peer checks to see if the transaction is valid/consistent. If the initial endorsement fails, then the client never submits the transaction to ordering, and therefore, no commit event occurs.</p>
<p>You may be confusing "endorsement failure" with "endorsement policy failure". An endorsement policy failure occurs when the client does not seek enough endorsements, or endorsements from the right peers, but submits the transaction anyway. You will see an event with a failure in this case, but there was no error at endorsement time.</p> | 2020-06-22 20:13:44.003000+00:00 | 2020-06-22 20:13:44.003000+00:00 | null | null | 62,513,715 | <p>I have coded below logic to use fabric network event listener which will listen to transaction commit. However, it is working fine when transaction endorsed successfully but not when transaction endorsed unsuccessfully. Kindly let me know if I am missing something.</p>
<p><strong>Code snapshot:</strong></p>
<pre><code>const transaction = networkObj.contract.createTransaction('addOrg');
const tx_id = transaction.getTransactionID().getTransactionID();
await transaction.addCommitListener((err: any, txId: any, status: any, blockHeight: any) => {
console.log('inside listener');
if (err) {
console.log(err)
return
}
if (status === 'VALID') {
console.log('transaction committed');
console.log('txId: ',txId,'status: ',status,'blockHeight: ',blockHeight);
console.log('transaction committed end');
} else {
console.log('err transaction failed');
console.log(status);
}
});
transaction.submit(OrgAdd.organization, OrgAdd.orgShortName, OrgAdd.orgType, OrgAdd.industryType)
let responseMessage: ResponseMessage = new ResponseMessage({ message: tx_id });
console.log('before return');
return responseMessage;
</code></pre>
<p>Logs when transaction is endorsed successfully vs unsuccessfully.</p>
<p>Successful:</p>
<pre><code>Connected to mychannel.
Connected to contract. p2pmembers
Done connecting to network.
OrgAdd: {
organization: 'Manufacturer 10',
orgShortName: 'MF10',
orgType: 'manufacturer',
industryType: 'Electronics'
}
before return
inside listener
transaction committed
<<txId:>> 7b1767397a9821e0e2e0b16c7f7ad4ada9d15a8a7b838c5cc542be50e260d497 <<status:>> VALID <<blockHeight:>> 116
transaction committed end
</code></pre>
<p>Unsuccessful</p>
<pre><code>Connected to mychannel.
Connected to contract. p2pmembers
Done connecting to network.
OrgAdd: {
organization: 'Manufacturer 10',
orgShortName: 'MF10',
orgType: 'manufacturer',
industryType: 'Electronics'
}
before return
2020-06-22T11:32:13.973Z - warn: [DiscoveryEndorsementHandler]: _build_endorse_group_member >> G0:0 - endorsement failed - Error: transaction returned with failure: Error: MF10 organization does exist
2020-06-22T11:32:13.975Z - error: [DiscoveryEndorsementHandler]: _endorse - endorsement failed::Error: Endorsement has failed
</code></pre> | 2020-06-22 11:47:48.700000+00:00 | 2020-07-17 08:45:43.917000+00:00 | 2020-06-22 15:25:27.440000+00:00 | hyperledger-fabric | ['https://arxiv.org/abs/1801.10228v2'] | 1 |
27,253,171 | <p>In the case of an <strong>undirected</strong> graph, a recently published paper (<em>Optimal listing of cycles and st-paths in undirected graphs</em>) offers an asymptotically optimal solution. You can read it here <a href="http://arxiv.org/abs/1205.2766" rel="nofollow">http://arxiv.org/abs/1205.2766</a> or here <a href="http://dl.acm.org/citation.cfm?id=2627951" rel="nofollow">http://dl.acm.org/citation.cfm?id=2627951</a>.
I know it doesn't answer your question, but since the title of your question doesn't mention direction, it might still be useful for Google search</p> | 2014-12-02 15:35:41.923000+00:00 | 2014-12-02 15:47:32.343000+00:00 | 2014-12-02 15:47:32.343000+00:00 | null | 546,655 | <p>How can I find (iterate over) ALL the cycles in a directed graph from/to a given node?</p>
<p>For example, I want something like this:</p>
<pre><code>A->B->A
A->B->C->A
</code></pre>
<p>but not:
B->C->B</p> | 2009-02-13 16:40:27.753000+00:00 | 2021-07-05 15:00:22.343000+00:00 | 2017-04-26 02:43:09.740000+00:00 | algorithm|graph-theory|graph-algorithm | ['http://arxiv.org/abs/1205.2766', 'http://dl.acm.org/citation.cfm?id=2627951'] | 2 |
72,619,211 | <p>I will use your JSON source data instead of the XML, since that is easier to handle in DataTables.</p>
<p>Here is a basic demo, to start with, followed by some explanatory notes:</p>
<pre><code><!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Demo</title>
<script src="https://code.jquery.com/jquery-3.5.1.min.js"></script>
<script src="https://cdn.datatables.net/1.10.21/js/jquery.dataTables.min.js"></script>
<link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.10.21/css/jquery.dataTables.min.css">
</head>
<body>
<div style="margin: 20px;">
<table id="arxivtable" class="display" style="width:100%">
<thead>
<tr>
<th>title</th>
<th>id</th>
<th>link</th>
<th>author</th>
<th>published</th>
<th>summary</th>
</tr>
</thead>
</table>
</div>
<script type="text/javascript">
$(document).ready(function(){
$('#arxivtable').DataTable({
"ajax": {
url: "YOUR_URL_GOES_HERE",
dataSrc: "feed.entry"
},
"columns": [
{"data": "title"},
{ "data": "id" },
{ "data": "link[].@href" },
{ "data": "author.name" },
{ "data": "published" },
{ "data": "summary" }
]
});
});
</script>
</body>
</html>
</code></pre>
<p><strong>Notes</strong></p>
<p>1 - Because you have provided hard-coded HTML column headers, you need to make sure the number of those headers matches the number of columns defined in the DataTable. Alternatively, you can remove the HTML <code><thead></code> section and use the DataTables <a href="https://datatables.net/reference/option/columns.title" rel="nofollow noreferrer"><code>columns.title</code></a> option.</p>
<p>2 - Your Ajax JSON source data contains an array <code>[ ... ]</code>. DataTables needs to know where this array is located in your JSON response, as part of the Ajax handling option, so that it can iterate over that array. Each element in the array will be used to create a row of HTML table data. The <code>ajax.dataSrc</code> option therefore needs to be set accordingly:</p>
<pre><code>dataSrc: "feed.entry"
</code></pre>
<p>Once you have set the above Ajax JSON starting point correctly, then you can use field names for each separate column <code>data</code> value - as shown below.</p>
<p>3 - The <code>author</code> JSON value is actually an object:</p>
<pre><code>"author": {
"name": "Xiaomin Chen"
},
</code></pre>
<p>Therefore you need to drill down into that to get the field you want to show in the DataTable:</p>
<pre><code>{ "data": "author.name" },
</code></pre>
<p>4 - I removed your column renderer function to keep my initial demo simple, but it can be used to access fields and sub-fields - and concatenate strings and other values as needed (as in your example in the question).</p>
<p>5 - The <code>link</code> JSON value is actually an array of objects. For my basic demo, I just accessed the final entry in that array, and then took the href field:</p>
<pre><code>{ "data": "link[].@href" },
</code></pre>
<p>This may not be what you want. You may want to only choose links of a certain type, or choose all links, or something different.</p>
<p>This is where DataTables is limited in what it can handle. It cannot display arbitrary nested JSON values of this type (not surprisingly).</p>
<p>In such cases, you would need to re-structure the JSON, prior to sending it to DataTables - or restructure it in a <a href="https://datatables.net/reference/option/ajax.dataSrc#Examples" rel="nofollow noreferrer"><code>dataSrc</code> function</a> inside DataTables itself:</p>
<pre><code>"dataSrc": function ( json ) { ...transform and return your JSON here... }
</code></pre>
<p>6 - I was not sure what you wanted to display for <code>{ "data": "journal" }</code>. I did not see anything called <code>journal</code> in the JSON.</p>
<p>7 - Note that all the source JSON data outside of the <code>feed.entry</code> array is also not available to DataTables. DataTables can only iterate over that outer array. Anything you may also need which is not in that outer array would need to be added to the array, to be accessible to DataTables.</p>
<hr />
<p>See also <a href="https://datatables.net/examples/ajax/objects_subarrays.html" rel="nofollow noreferrer">Nested object data (arrays)</a> and <a href="https://datatables.net/examples/ajax/deep.html" rel="nofollow noreferrer">Nested object data (objects)</a> for more related notes.</p> | 2022-06-14 14:58:26.477000+00:00 | 2022-06-14 14:58:26.477000+00:00 | null | null | 72,611,683 | <p>How to output response of type atom/xml feed (from arxiv call) into Jquery DataTable?</p>
<p>I have the datatable working for a <a href="https://datatables.net/manual/ajax" rel="nofollow noreferrer">simple json from Ajax call</a> to flask server example.</p>
<p>When I try to do it with the XML from an arXiv API response, I can't seem to get it to display in the datatable (though I can just print the raw XML using <code>&lt;pre lang="xml" &gt;</code> or JSON).</p>
<p>I also tried to convert to JSON first via a Python dictionary, but still couldn't get it formatted into the datatable, as I'm unsure how to access the properties properly in the Ajax call when they're deeper than the first level as in the basic example linked.</p>
<p>The HTML in template:</p>
<pre><code><table id="arxivtable" class="display" style="width:100%">
<thead>
<tr>
<th>title</th>
<th>id</th>
<th>link</th>
<th>author</th>
<th>published</th>
</tr>
</thead>
</table>
</code></pre>
<p>I tried via xml :</p>
<pre><code> $('#arxivtable').DataTable({
"ajax": {
// "url": "static/objects2.txt", // This works for the static file
"url": "/get_arxivpapers", // This now works too thanks to @kthorngren
"dataType": "xml",
"type":"GET",
"dataSrc": "{{name}}",
"contentType":"application/atom+xml"
},
"columns": [
{"data": "title"},
{
"data": "link",
"render": function(data, type, row, meta){
if(type === 'display'){
data = '<a href="' + data + '">' + data + '</a>';
}
return data;
}
},
{ "data": "id" },
{ "data": "link" },
{ "data": "author" },
{ "data": "journal" },
{ "data": "published" },
{ "data": "summary" }
]
});
</code></pre>
<p>JSON from AJAX call:</p>
<pre><code> {
"feed": {
"@xmlns": "http://www.w3.org/2005/Atom",
"link": {
"@href": "http://arxiv.org/api/query?search_query%3Dall%3Aeinstein%26id_list%3D%26start%3D0%26max_results%3D2",
"@rel": "self",
"@type": "application/atom+xml"
},
"title": {
"@type": "html",
"#text": "ArXiv Query: search_query=all:einstein&id_list=&start=0&max_results=2"
},
"id": "http://arxiv.org/api/vehKAQR+bheXtHwJw3qx/OG/XXw",
"updated": "2022-06-14T00:00:00-04:00",
"opensearch:totalResults": {
"@xmlns:opensearch": "http://a9.com/-/spec/opensearch/1.1/",
"#text": "36970"
},
"opensearch:startIndex": {
"@xmlns:opensearch": "http://a9.com/-/spec/opensearch/1.1/",
"#text": "0"
},
"opensearch:itemsPerPage": {
"@xmlns:opensearch": "http://a9.com/-/spec/opensearch/1.1/",
"#text": "2"
},
"entry": [
{
"id": "http://arxiv.org/abs/1801.05533v2",
"updated": "2018-11-22T14:04:43Z",
"published": "2018-01-17T03:05:51Z",
"title": "Einstein-Weyl structures on almost cosymplectic manifolds",
"summary": "",
"author": {
"name": "Xiaomin Chen"
},
"arxiv:comment": {
"@xmlns:arxiv": "http://arxiv.org/schemas/atom",
"#text": "accepted by Periodica Mathematica Hungarica, 14 pages, no figures"
},
"link": [
{
"@href": "http://arxiv.org/abs/1801.05533v2",
"@rel": "alternate",
"@type": "text/html"
},
{
"@title": "pdf",
"@href": "http://arxiv.org/pdf/1801.05533v2",
"@rel": "related",
"@type": "application/pdf"
}
],
"arxiv:primary_category": {
"@xmlns:arxiv": "http://arxiv.org/schemas/atom",
"@term": "math.DG",
"@scheme": "http://arxiv.org/schemas/atom"
},
"category": [
{
"@term": "math.DG",
"@scheme": "http://arxiv.org/schemas/atom"
},
{
"@term": "53D10, 53D15",
"@scheme": "http://arxiv.org/schemas/atom"
}
]
},
{
"id": "http://arxiv.org/abs/0802.2137v3",
"updated": "2008-04-01T04:36:21Z",
"published": "2008-02-15T04:40:56Z",
"title": "",
"summary": ".",
"author": {
"name": ""
},
"arxiv:comment": {
"@xmlns:arxiv": "http://arxiv.org/schemas/atom",
"#text": "18 pages, added Theorem 5"
},
"link": [
{
"@href": "http://arxiv.org/abs/0802.2137v3",
"@rel": "alternate",
"@type": "text/html"
},
{
"@title": "pdf",
"@href": "http://arxiv.org/pdf/0802.2137v3",
"@rel": "related",
"@type": "application/pdf"
}
],
"arxiv:primary_category": {
"@xmlns:arxiv": "http://arxiv.org/schemas/atom",
"@term": "math.DG",
"@scheme": "http://arxiv.org/schemas/atom"
},
"category": [
{
"@term": "math.DG",
"@scheme": "http://arxiv.org/schemas/atom"
},
{
"@term": "53C30; 53C25",
"@scheme": "http://arxiv.org/schemas/atom"
}
]
}
]
}
}
</code></pre>
<p>Or the original atom/xml:</p>
<pre><code><feed xmlns="http://www.w3.org/2005/Atom">
<link href="http://arxiv.org/api/query?search_query%3Dall%3Aeinstein%26id_list%3D%26start%3D0%26max_results%3D2" rel="self" type="application/atom+xml">
<title type="html">ArXiv Query: search_query=all:einstein&amp;id_list=&amp;start=0&amp;max_results=2</title>
<id>http://arxiv.org/api/vehKAQR+bheXtHwJw3qx/OG/XXw</id>
<updated>2022-06-14T00:00:00-04:00</updated>
<opensearch:totalresults xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">36970</opensearch:totalresults>
<opensearch:startindex xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">0</opensearch:startindex>
<opensearch:itemsperpage xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">2</opensearch:itemsperpage>
<entry>
<id>http://arxiv.org/abs/1801.05533v2</id>
<updated>2018-11-22T14:04:43Z</updated>
<published>2018-01-17T03:05:51Z</published>
<title></title>
<summary>
</summary>
<author>
<name></name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">accepted by Periodica Mathematica Hungarica, 14 pages, no figures</arxiv:comment>
<link href="http://arxiv.org/abs/1801.05533v2" rel="alternate" type="text/html">
<link title="pdf" href="http://arxiv.org/pdf/1801.05533v2" rel="related" type="application/pdf">
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="math.DG" scheme="http://arxiv.org/schemas/atom">
<category term="math.DG" scheme="http://arxiv.org/schemas/atom">
<category term="53D10, 53D15" scheme="http://arxiv.org/schemas/atom">
</category></category></arxiv:primary_category></entry>
<entry>
<id>http://arxiv.org/abs/0802.2137v3</id>
<updated>2008-04-01T04:36:21Z</updated>
<published>2008-02-15T04:40:56Z</published>
<title></title>
<summary>
</summary>
<author>
<name></name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom"></arxiv:comment>
<link href="http://arxiv.org/abs/0802.2137v3" rel="alternate" type="text/html">
<link title="pdf" href="http://arxiv.org/pdf/0802.2137v3" rel="related" type="application/pdf">
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="math.DG" scheme="http://arxiv.org/schemas/atom">
<category term="math.DG" scheme="http://arxiv.org/schemas/atom">
<category term="53C30; 53C25" scheme="http://arxiv.org/schemas/atom">
</category></category></arxiv:primary_category></entry>
</feed>
</code></pre>
<p>The End Point:</p>
<pre><code>@app.route('/get_arxivpapers')
def getArxivPapers(name="einstein"):
max_results = 2
searchterm = name.replace("_", "&#32")
url = 'http://export.arxiv.org/api/query?search_query=all:' + searchterm + '&start=0&' + 'max_results='+ str(max_results)
data = urllib.request.urlopen(url)
# data_dict = xmltodict.parse(data)
# json_data = json.dumps(data_dict)
# print(json_data)
# return jsonify(json_data)
return data.read().decode('utf-8')
</code></pre> | 2022-06-14 04:51:58.603000+00:00 | 2022-06-14 14:58:26.477000+00:00 | null | json|xml|flask|datatable|atom-feed | ['https://datatables.net/reference/option/columns.title', 'https://datatables.net/reference/option/ajax.dataSrc#Examples', 'https://datatables.net/examples/ajax/objects_subarrays.html', 'https://datatables.net/examples/ajax/deep.html'] | 4 |
73,674,021 | <p>The Transformer model was <a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">originally proposed for machine translation</a> in 2017, where it was directly trained on translation tasks. The additional step of pretraining Transformer-based models via self-supervised learning came later with models like GPT and BERT. The post you are looking at is an example of the former kind of approach.</p> | 2022-09-10 17:48:28.477000+00:00 | 2022-09-10 17:48:28.477000+00:00 | null | null | 73,591,224 | <p>I am looking at the below tensorflow transformers implementation.</p>
<p><a href="https://www.tensorflow.org/text/tutorials/transformer" rel="nofollow noreferrer">https://www.tensorflow.org/text/tutorials/transformer</a></p>
<p>I am not sure I understood correctly. When initialising a transformers model, it needs to be trained on a lot of raw text in an unsupervised way so that it learns the language, and then you can fit it to a particular task.</p>
<p>In this example, I am not sure if the training data is used to train the transformers model itself? It looks like there is only one "fitting" procedure. Is this correct?</p> | 2022-09-03 09:46:01.110000+00:00 | 2022-09-10 17:48:28.477000+00:00 | 2022-09-03 23:54:22.097000+00:00 | tensorflow|machine-learning|nlp|transformer-model | ['https://arxiv.org/pdf/1706.03762.pdf'] | 1
50,455,702 | <p>There might be several approaches to solving your problem.</p>
<p>First - it might not be a problem after all. If the mislabeled data accounts for a small part of your training set, it might not matter. Actually, there are some cases when adding mislabeled data or just random noise improves robustness and generalization power of your classifier.</p>
<p>Second - you might want to use the training set to train the classifier and then check the data points for which the classifier gave the incorrect classification. It is possible that the classifier was actually right and directs you to the incorrectly labeled data. This data can be subsequently manually checked if such a thing is possible.</p>
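<p>A minimal sketch of that second approach with scikit-learn (the model, the synthetic data and the 5-fold setting are only placeholders; in your case <code>X</code> and <code>y</code> would be your features and your possibly-noisy diabetes labels):</p>
<pre><code>import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)  # stand-in data
predicted = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)
suspect_idx = np.where(predicted != y)[0]  # out-of-fold prediction disagrees with the label
print(len(suspect_idx), "points to review manually")
</code></pre>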
<p>Third - you can filter the data up front using methods like consensus filters. This article might be a good way to start your research on this topic: <a href="https://arxiv.org/pdf/1106.0219" rel="nofollow noreferrer">Identifying Mislabeled Training Data - C.E. Brody and M.A. Friedl</a>.</p> | 2018-05-21 19:49:59.433000+00:00 | 2018-05-21 19:49:59.433000+00:00 | null | null | 50,455,527 | <p>I have a dataset which consists of people who have diabetes, and who have not. Using this data, I want to train a model to calculate a risk probability for people with unknown diabetes status. I know that the majority of people who have not been diagnosed with diabetes in the training do not have diabetes, but it is likely that some of these people may have undiagnosed diabetes.</p>
<p>This appears to present a catch 22 situation. I want to identify people who are at-risk, or potentially have undiagnosed diabetes, however I know some of the people in my training dataset are incorrectly labelled as not having diabetes because they have not yet been diagnosed. Has anyone encountered such a problem? Can one still proceed on the basis that there may be some incorrectly labelled data, if it only counts for a small percentage of the data?</p> | 2018-05-21 19:33:50.913000+00:00 | 2018-05-21 19:49:59.433000+00:00 | null | machine-learning|training-data | ['https://arxiv.org/pdf/1106.0219'] | 1 |
46,136,571 | <p>If you are interested in the details of the algorithm, you should take a look at <a href="https://arxiv.org/pdf/1603.02754.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1603.02754.pdf</a> . It is the original paper describing the xgboost algorithm. </p>
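<p>In short, each boosting round fits one new regression tree to the gradient statistics of the ensemble built so far, always drawing on the same feature set, and the final prediction is the sum of all the trees. A quick way to see this with the Python API (a sketch; <code>X</code> and <code>y</code> are stand-ins for your training data):</p>
<pre><code>import numpy as np
import xgboost as xgb

X = np.random.rand(500, 10)                  # 10 features, as in your example
y = (X[:, 0] + X[:, 1] > 1).astype(int)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"booster": "gbtree", "max_depth": 3}, dtrain, num_boost_round=100)
print(len(booster.get_dump()))               # 100 trees, one per boosting round
</code></pre>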
<p>Chapter 2 in this paper (TREE BOOSTING IN A NUTSHELL) might be enough if you want to get a high level overview. </p> | 2017-09-10 01:44:00.853000+00:00 | 2017-09-10 01:44:00.853000+00:00 | null | null | 46,131,515 | <p>guys, I am new to xgboost and not native English speaker, so I hope you will not get mad if I have made some stupid mistakes.
I just wonder how xgboost (with the boosting model set to gbtree) generates the base learner in every iteration. For example, if there are 10 features in the training data and <em>num_boost_round</em> is set to 100, the model will generate an optimal base learner in every iteration, so how does xgboost generate these 100 base learners using these 10 features?
Thanks for your help in advance!</p> | 2017-09-09 14:04:29.687000+00:00 | 2017-09-10 01:44:00.853000+00:00 | null | python|xgboost | ['https://arxiv.org/pdf/1603.02754.pdf'] | 1 |
42,049,984 | <p>That's correct. The phone needs to implement the Open Mobile API (by means of the smartcard system service) in order for your app to be able to use it. Not all devices implement this. It's mainly devices from Samsung, Sony, and HTC which support the Open Mobile API.</p>
<p>In addition to that restriction, you need the SE (UICC/eSE) set up to allow your application (this is handled by GlobalPlatform SE Access Control) to interact with the SE.</p>
<p>Finally, I'm not aware of any complete list (and there probably is none). However, have a look at the question <a href="https://stackoverflow.com/q/38657627/2425802">List of OMAPI supported devices</a> to get some ideas on how to test devices and how to let Play Store generate a list for you.</p>
<p>You may also want to read our report <a href="https://arxiv.org/abs/1601.03027" rel="nofollow noreferrer">Open Mobile API: Accessing the UICC on Android Devices</a> to get some idea about how the Open Mobile API works.</p> | 2017-02-05 08:30:27.163000+00:00 | 2017-02-05 08:30:27.163000+00:00 | 2017-05-23 12:00:14.453000+00:00 | null | 42,037,149 | <p>I understand that to access SIM/eSE from an Android app we need to install Open Mobile API addon on Android Studio. However, is it true that it will not work on all NFC phones? For example, do some OEM limited access to SIM/eSE? Or are there phones where only custom firmware will work with Open Mobile API?</p>
<p>Also, is there a list of phones that support Open Mobile API by default?</p> | 2017-02-04 06:10:01.510000+00:00 | 2017-02-05 08:32:43.497000+00:00 | 2017-02-05 08:32:43.497000+00:00 | android|nfc|sim-card|open-mobile-api|secure-element | ['https://stackoverflow.com/q/38657627/2425802', 'https://arxiv.org/abs/1601.03027'] | 2 |
62,413,102 | <p>A vanilla CNN is usually incapable of inferring that sort of spatial information without a bit of extra help. There have been numerous attempts to remedy that, one of which is <a href="https://arxiv.org/abs/1807.03247" rel="nofollow noreferrer">CoordConv</a>. The tl;dr is that in cases when you want to regress positions in an array like in your problem, it's useful to supply the network with a tensor/matrix/vector/whatever which contains (usually normalized) coordinates. You can do that either at input or at different levels. For example, in your case, your input could be modified to look like this:</p>
<pre><code>#Tensor of size 1x1x2x3100
[0, ..., non_zero_val, 0, other_non_zero_val, 0, 0]
[0, 1 , ... 3099]/3099 #element-wise division just to normalise
</code></pre> | 2020-06-16 16:20:31.007000+00:00 | 2020-06-16 16:20:31.007000+00:00 | null | null | 62,412,031 | <p>I want to train a CNN that takes as an input a numpy array of shape (1600, 800, 1) which would contain all 0s except at few pixels where I can have values from range 10 to 3100(This numpy array is not an image) and the output should be of size 310 where each element is a pair containing coordinates(x, y) positions of the points in the input that had non zero values.</p>
<p>Is there any way of doing this? Any insight on this is greatly appreciated. Thanks in advance!</p> | 2020-06-16 15:25:47.167000+00:00 | 2020-06-16 16:20:31.007000+00:00 | null | python|computer-vision|data-science|conv-neural-network | ['https://arxiv.org/abs/1807.03247'] | 1 |
39,305,005 | <p>Fine-tuning is a very useful trick for achieving promising accuracy compared to hand-crafted features. <a href="https://stackoverflow.com/a/36842553/1714410">@Shai</a> already posted a good tutorial for fine-tuning GoogLeNet using Caffe, so I just want to give some recommendations and tricks for fine-tuning in general cases.</p>
<p>Most of the time, we face a classification task where the new dataset (e.g. the <a href="http://www.robots.ox.ac.uk/~vgg/data/flowers/102/" rel="nofollow noreferrer">Oxford 102 flower dataset</a> or <a href="http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/" rel="nofollow noreferrer">Cat&Dog</a>) falls into one of the following four common situations (<a href="http://cs231n.github.io/transfer-learning/" rel="nofollow noreferrer">CS231n</a>):</p>
<ol>
<li>New dataset is small and similar to original dataset.</li>
<li>New dataset is small but is different to original dataset (Most common cases)</li>
<li>New dataset is large and similar to original dataset.</li>
<li>New dataset is large but is different to original dataset.</li>
</ol>
<p>In practice, most of the time we do not have enough data to train the network from scratch, but it may be enough to fine-tune a pre-trained model. Whichever of the cases above applies, the only thing we must care about is whether we have enough data to train the CNN.</p>
<p>If yes, we can train the CNN from scratch. However, in practice it is still beneficial to initialize the weights from a pre-trained model.</p>
<p>If not, we need to check whether the data is very different from the original dataset. If it is very similar, we can just fine-tune the fully connected layers or <a href="http://arxiv.org/pdf/1403.6382v3.pdf" rel="nofollow noreferrer">fine-tune with SVM</a>. However, if it is very different from the original dataset, we may need to <a href="http://arxiv.org/pdf/1411.1792v1.pdf" rel="nofollow noreferrer">fine-tune the convolutional neural network to improve the generalization</a>.</p> | 2016-09-03 08:44:44.633000+00:00 | 2016-09-04 06:29:44.397000+00:00 | 2017-05-23 12:26:35.987000+00:00 | null | 36,841,158 | <p>I trained a GoogLeNet model from scratch, but it didn't give me promising results.<br>
As an alternative, I would like to do fine-tuning of the GoogLeNet model on my dataset. Does anyone know what steps I should follow?</p> | 2016-04-25 12:52:05.063000+00:00 | 2022-01-18 19:39:47.107000+00:00 | 2022-01-18 19:39:47.107000+00:00 | machine-learning|deep-learning|computer-vision|conv-neural-network|caffe | ['https://stackoverflow.com/a/36842553/1714410', 'http://www.robots.ox.ac.uk/~vgg/data/flowers/102/', 'http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/', 'http://cs231n.github.io/transfer-learning/', 'http://arxiv.org/pdf/1403.6382v3.pdf', 'http://arxiv.org/pdf/1411.1792v1.pdf'] | 6
39,917,340 | <p>One problem with a Tensorflow placeholder is that you can only feed it with a Python list or Numpy array (I think). So you can't save the state between runs in tuples of LSTMStateTuple. </p>
<p>I solved this by saving the state in a tensor like this</p>
<p><code>initial_state = np.zeros((num_layers, 2, batch_size, state_size))</code></p>
<p>You have two components in an LSTM layer, the <strong>cell state</strong> and the <strong>hidden state</strong>; that's what the "2" comes from. (This article is great: <a href="https://arxiv.org/pdf/1506.00019.pdf" rel="noreferrer">https://arxiv.org/pdf/1506.00019.pdf</a>)</p>
<p>When building the graph you unpack and create the tuple state like this:</p>
<pre><code>state_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size])
l = tf.unpack(state_placeholder, axis=0)
rnn_tuple_state = tuple(
[tf.nn.rnn_cell.LSTMStateTuple(l[idx][0],l[idx][1])
for idx in range(num_layers)]
)
</code></pre>
<p>Then you get the new state the usual way</p>
<pre><code>cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)
cell = tf.nn.rnn_cell.MultiRNNCell([cell] * num_layers, state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(cell, series_batch_input, initial_state=rnn_tuple_state)
</code></pre>
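<p>For completeness, this is roughly how the training loop then carries the state across batches (a sketch: <code>train_step</code>, <code>labels</code>, <code>batches</code> and <code>sess</code> are assumed to exist elsewhere in your graph and input pipeline; the fetched <code>state</code> comes back as nested tuples of NumPy arrays, which can be fed straight back into the placeholder):</p>
<pre><code>current_state = np.zeros((num_layers, 2, batch_size, state_size))  # reset at sequence start
for x_batch, y_batch in batches:
    _, current_state = sess.run(
        [train_step, state],
        feed_dict={series_batch_input: x_batch,
                   labels: y_batch,
                   state_placeholder: current_state})
</code></pre>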
<p>It shouldn't be like this... perhaps they are working on a solution.</p> | 2016-10-07 12:29:24.860000+00:00 | 2016-10-07 14:47:53.903000+00:00 | 2016-10-07 14:47:53.903000+00:00 | null | 39,112,622 | <p>I have written an <a href="https://github.com/wpm/tfrnnlm" rel="noreferrer">RNN language model using TensorFlow</a>. The model is implemented as an <code>RNN</code> class. The graph structure is built in the constructor, while <code>RNN.train</code> and <code>RNN.test</code> methods run it.</p>
<p>I want to be able to reset the RNN state when I move to a new document in the training set, or when I want to run a validation set during training. I do this by managing the state inside the training loop, passing it into the graph via a feed dictionary.</p>
<p>In the constructor I define the the RNN like so</p>
<pre><code> cell = tf.nn.rnn_cell.LSTMCell(hidden_units)
rnn_layers = tf.nn.rnn_cell.MultiRNNCell([cell] * layers)
self.reset_state = rnn_layers.zero_state(batch_size, dtype=tf.float32)
self.state = tf.placeholder(tf.float32, self.reset_state.get_shape(), "state")
self.outputs, self.next_state = tf.nn.dynamic_rnn(rnn_layers, self.embedded_input, time_major=True,
initial_state=self.state)
</code></pre>
<p>The training loop looks like this</p>
<pre><code> for document in document:
state = session.run(self.reset_state)
for x, y in document:
_, state = session.run([self.train_step, self.next_state],
feed_dict={self.x:x, self.y:y, self.state:state})
</code></pre>
<p><code>x</code> and <code>y</code> are batches of training data in a document. The idea is that I pass the latest state along after each batch, except when I start a new document, when I zero out the state by running <code>self.reset_state</code>.</p>
<p>This all works. Now I want to change my RNN to use the recommended <code>state_is_tuple=True</code>. However, I don't know how to pass the more complicated LSTM state object via a feed dictionary. Also I don't know what arguments to pass to the <code>self.state = tf.placeholder(...)</code> line in my constructor.</p>
<p>What is the correct strategy here? There still isn't much example code or documentation for <code>dynamic_rnn</code> available.</p>
<hr>
<p>TensorFlow issues <a href="https://github.com/tensorflow/tensorflow/issues/2695" rel="noreferrer">2695</a> and <a href="https://github.com/tensorflow/tensorflow/issues/2838" rel="noreferrer">2838</a> appear relevant.</p>
<p>A <a href="http://www.wildml.com/2016/08/rnns-in-tensorflow-a-practical-guide-and-undocumented-features/" rel="noreferrer">blog post</a> on WILDML addresses these issues but doesn't directly spell out the answer.</p>
<p>See also <a href="https://stackoverflow.com/questions/38241410/tensorflow-remember-lstm-state-for-next-batch-stateful-lstm">TensorFlow: Remember LSTM state for next batch (stateful LSTM)</a>.</p> | 2016-08-24 00:22:34.377000+00:00 | 2018-03-22 16:16:49.970000+00:00 | 2017-05-23 12:34:30.953000+00:00 | python|machine-learning|tensorflow | ['https://arxiv.org/pdf/1506.00019.pdf'] | 1 |
70,901,390 | <p>In addition to the two great answers already posted, there are some important points to take into account in order to write an efficient implementation.</p>
<p>First of all, using a <strong>hash-map can be very efficient</strong> if the number of different items is small so that the hash-map can <strong>fit in cache</strong>. Classical sorting algorithms (e.g. introsort/mergesort/heapsort) tend to be significantly slower because of the <code>log n</code> factor in the complexity. However, when the number of different items is big, a sort is generally much faster since the <em>unpredictable random-like hash-map access pattern in RAM</em> can be very expensive: each access can be as slow as the RAM <strong>latency</strong> (typically 40~100 ns).</p>
<p>Additionally, sort implementations can be vectorized using <strong>SIMD instructions</strong> (see <a href="https://arxiv.org/pdf/1704.08579.pdf" rel="nofollow noreferrer">here</a> for AVX-512 and <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8627236/" rel="nofollow noreferrer">there</a> for SVE) and parallelized using <strong>multithreading</strong>. This is especially true in this case since <code>np.unique</code> is generally applied on numbers. C++17 provides parallel algorithms (PSTL) including parallel sorts. Numpy sorting algorithms have recently been improved to use AVX-512 resulting in an order of magnitude faster execution. Alternatively, an <code>O(n)</code> radix sort can be used to efficiently sort small array with short-sized keys (eg. 4-bytes).</p>
<p>Moreover, <strong>not all hash-map implementations are equivalent in terms of performance</strong>. The STL <code>std::unordered_map</code> tends to be pretty slow. This is due to restrictions in the C++ standard (about the invalidation of iterators) that (almost) force the STL not to use an <a href="https://en.wikipedia.org/wiki/Open_addressing" rel="nofollow noreferrer">open addressing</a> strategy. Instead, STL implementations generally use linked lists to resolve hash conflicts, and this causes slow allocations as described by @Homer512. The fastest existing hash-map implementations mainly use open addressing. For example, <a href="https://en.wikipedia.org/wiki/Hopscotch_hashing" rel="nofollow noreferrer">hopscotch hashing</a> provides good performance with some interesting guarantees (e.g. good performance when the <a href="https://en.wikipedia.org/wiki/Hash_table#Key_statistics" rel="nofollow noreferrer">load factor</a> is close to 90~100%). <a href="https://tessil.github.io/2016/08/29/benchmark-hopscotch-map.html" rel="nofollow noreferrer">Tessil</a> has provided a very interesting benchmark of several hash-map implementations and has written very fast implementations (generally much faster than the ones in mainstream STL implementations).</p>
<p><code>std::map</code> is usually implemented using a <a href="https://en.wikipedia.org/wiki/Red%E2%80%93black_tree" rel="nofollow noreferrer">red-black tree</a>. It is written in a pretty efficient way in mainstream STL implementations. However, the <strong>allocator</strong> used by default is clearly not optimized for this use-case. Indeed, the number of nodes to be inserted is bounded. Thus, one can use a <a href="https://en.cppreference.com/w/cpp/memory/monotonic_buffer_resource" rel="nofollow noreferrer">monotonic allocator</a> to speed up the computation. An optimized implementation can even use few simple big arrays with <em>no additional allocation</em>.</p>
<p>Finally, note that <code>np.unique</code> provides the <strong>index of the first unique value</strong> and not all of them. This enables further optimisations. For example, no <code>std::vector</code> is needed for the <code>std::map</code>-based implementation of @Dúthomhas, resulting in a smaller memory footprint and probably higher performance.</p>
<ol>
<li>the <em>sorted unique elements</em> of a numpy array (<em>i.e.</em> with no duplicates)</li>
</ol>
<p>In addition, <a href="https://numpy.org/doc/stable/reference/generated/numpy.unique.html" rel="noreferrer">numpy.unique()</a> can also return :</p>
<ol start="2">
<li>the indices of the input array that give the unique values</li>
<li>the indices of the unique array that reconstruct the input array</li>
</ol>
<p>The C++ standard library also implements a <code>unique</code> algorithm (<a href="https://en.cppreference.com/w/cpp/algorithm/unique" rel="noreferrer">here</a>), that somehow eliminates consecutive duplicates. Combined with <code>std::vector<>::erase</code> and <code>std::sort()</code>, this can return the <em>sorted unique elements</em> of the vector (output 1 of <code>numpy.unique()</code>).</p>
<p>My question is : is there any algorithm in the <code>stl</code> or elsewhere that can also return outputs 2 and 3 of <code>numpy.unique()</code>. If not, is there a way to efficiently implement it ?</p> | 2022-01-26 18:12:30.837000+00:00 | 2022-02-15 15:14:37.050000+00:00 | null | c++|numpy|stl | ['https://arxiv.org/pdf/1704.08579.pdf', 'https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8627236/', 'https://en.wikipedia.org/wiki/Open_addressing', 'https://en.wikipedia.org/wiki/Hopscotch_hashing', 'https://en.wikipedia.org/wiki/Hash_table#Key_statistics', 'https://tessil.github.io/2016/08/29/benchmark-hopscotch-map.html', 'https://en.wikipedia.org/wiki/Red%E2%80%93black_tree', 'https://en.cppreference.com/w/cpp/memory/monotonic_buffer_resource'] | 8 |
24,235,741 | <p><a href="http://arxiv.org/ftp/arxiv/papers/1211/1211.2038.pdf" rel="nofollow">http://arxiv.org/ftp/arxiv/papers/1211/1211.2038.pdf</a>
You might be better off using OpenMP for its lower initialization times.</p> | 2014-06-16 01:51:02.610000+00:00 | 2014-06-16 01:51:02.610000+00:00 | null | null | 20,080,892 | <p>There's not much on this subject, perhaps because it isn't a good idea in the first place.</p>
<p>I want to create a realtime audio synthesis/processing engine that runs on the GPU. The reason for this is because I will also be using a physics library that runs on the GPU, and the audio output will be determined by the physics state. Is it true that GPU only carries audio output and can't generate it? Would this mean a large increase in latency, if I were to read the data back on the CPU and output it to the soundcard? I'm looking for a latency between 10 and 20ms in terms of the time between synthesis and playback.</p>
<p>Would the GPU accelerate synthesis by any worthwhile amount? I'm going to have a large number of synthesizers running at once, each of which I imagine could take up their own parallel process. AMD is coming out with GPU audio, so there must be something to this.</p> | 2013-11-19 19:59:13.180000+00:00 | 2021-08-06 15:26:50.850000+00:00 | null | audio|opencl|real-time | ['http://arxiv.org/ftp/arxiv/papers/1211/1211.2038.pdf'] | 1 |
25,561,646 | <h2>Yes, sort of..</h2>
<p>When you ask the question, "Can the scroll-bars of a browser be removed in some way, rather than simply hidden or camouflaged", everyone will say "Not possible" because it is not possible to <em>remove</em> the scrollbars from all browsers in a compliant and cross-compatible way, and then there's the whole argument of usability.</p>
<p>However, it is possible to prevent the browser from ever having the need to generate and display scrollbars if you do not allow your webpage to overflow.</p>
<p>This just means that we have to proactively substitute the same behavior that the browser would typically do for us and tell the browser thanks but no thanks buddy. Rather than try to remove scrollbars (which we all know is not possible) we can avoid scrolling (perfectly feasible) and scroll within the elements that we make and have more control over.</p>
<p>Create a div with overflow hidden. Detect when the user attempts to scroll, but is unable to because we've disabled the browser's ability to scroll with overflow: hidden, and instead move the content up using JavaScript when this occurs. This creates our own scrolling without the browser's default scrolling; alternatively, use a plugin like <a href="https://github.com/cubiq/iscroll" rel="noreferrer">iScroll</a>.</p>
<h2>---</h2>
<p><em>For the sake of being thorough; all the vendor specific ways of manipulating scroll-bars:</em></p>
<h2>Internet Explorer 5.5+</h2>
<p>*These properties were never part of the CSS specification, nor were they ever approved or vendor prefixed, but they work in Internet Explorer and Konqueror. These can also be set locally in the user style sheet for each application. In Internet Explorer you find it under the "Accessibility" tab, in Konqueror under the "Stylesheets" tab.</p>
<pre><code>body, html { /* These are defaults and can be replaced by hexadecimal color values */
scrollbar-base-color: aqua;
scrollbar-face-color: ThreeDFace;
scrollbar-highlight-color: ThreeDHighlight;
scrollbar-3dlight-color: ThreeDLightShadow;
scrollbar-shadow-color: ThreeDDarkShadow;
scrollbar-darkshadow-color: ThreeDDarkShadow;
scrollbar-track-color: Scrollbar;
scrollbar-arrow-color: ButtonText;
}
</code></pre>
<p>As of Internet Explorer 8 these properties were vendor prefixed by Microsoft, but they were still never approved by <a href="http://en.wikipedia.org/wiki/World_Wide_Web_Consortium" rel="noreferrer">W3C</a>.</p>
<pre><code>-ms-scrollbar-base-color
-ms-scrollbar-face-color
-ms-scrollbar-highlight-color
-ms-scrollbar-3dlight-color
-ms-scrollbar-shadow-color
-ms-scrollbar-darkshadow-color
-ms-scrollbar-base-color
-ms-scrollbar-track-color
</code></pre>
<h3>Further details about Internet Explorer</h3>
<p>Internet Explorer makes <code>scroll</code> available, which specifies whether to enable or disable the scroll bars; it can also be used to get the value of the position of the scroll bars.</p>
<p>With Microsoft Internet Explorer 6 and later, when you use the <code>!DOCTYPE</code> declaration to specify standards-compliant mode, this attribute applies to the HTML element. When standards-compliant mode is not specified, as with earlier versions of Internet Explorer, this attribute applies to the <code>BODY</code> element, <strong>NOT</strong> the <code>HTML</code> element.</p>
<p>It's also worth noting that when working with .NET the ScrollBar class in <code>System.Windows.Controls.Primitives</code> in the Presentation framework is responsible for rendering the scrollbars.</p>
<p><a href="http://msdn.microsoft.com/en-us/library/ie/ms534393(v=vs.85).aspx" rel="noreferrer">http://msdn.microsoft.com/en-us/library/ie/ms534393(v=vs.85).aspx</a></p>
<ul>
<li><a href="http://msdn.microsoft.com/en-us/library/ie/hh772048%28v=vs.85%29.aspx" rel="noreferrer">MSDN. Basic UI properties</a></li>
<li><a href="http://www.w3.org/Style/Examples/007/scrollbars.en.html" rel="noreferrer">W3C. About non-standard scrollbar properties</a></li>
<li><a href="http://msdn.microsoft.com/en-us/library/system.windows.controls.primitives.scrollbar%28v=vs.110%29.aspx" rel="noreferrer">MSDN. .NET ScrollBar Class</a></li>
</ul>
<hr>
<h2>WebKit</h2>
<p>WebKit extensions related to scroll-bar customization are:</p>
<pre><code>::-webkit-scrollbar {} /* 1 */
::-webkit-scrollbar-button {} /* 2 */
::-webkit-scrollbar-track {} /* 3 */
::-webkit-scrollbar-track-piece {} /* 4 */
::-webkit-scrollbar-thumb {} /* 5 */
::-webkit-scrollbar-corner {} /* 6 */
::-webkit-resizer {} /* 7 */
</code></pre>
<p><img src="https://i.stack.imgur.com/BhMto.jpg" alt="Enter image description here"></p>
<p>These can each be combined with additional pseudo-selectors:</p>
<ul>
<li><code>:horizontal</code> – The horizontal pseudo-class applies to any scrollbar pieces that have a horizontal orientation.</li>
<li><code>:vertical</code> – The vertical pseudo-class applies to any scrollbar pieces that have a vertical orientation.</li>
<li><code>:decrement</code> – The decrement pseudo-class applies to buttons and track pieces. It indicates whether or not the button or track piece will decrement the view’s position when used (e.g., up on a vertical scrollbar, left on a horizontal scrollbar).</li>
<li><code>:increment</code> – The increment pseudo-class applies to buttons and track pieces. It indicates whether or not a button or track piece will increment the view’s position when used (e.g., down on a vertical scrollbar, right on a horizontal scrollbar).</li>
<li><code>:start</code> – The start pseudo-class applies to buttons and track pieces. It indicates whether the object is placed before the thumb.</li>
<li><code>:end</code> – The end pseudo-class applies to buttons and track pieces. It indicates whether the object is placed after the thumb.</li>
<li><code>:double-button</code> – The double-button pseudo-class applies to buttons and track pieces. It is used to detect whether a button is part of a pair of buttons that are together at the same end of a scrollbar. For track pieces it indicates whether the track piece abuts a pair of buttons.</li>
<li><code>:single-button</code> – The single-button pseudo-class applies to buttons and track pieces. It is used to detect whether a button is by itself at the end of a scrollbar. For track pieces it indicates whether the track piece abuts a singleton button.</li>
<li><code>:no-button</code> – Applies to track pieces and indicates whether or not the track piece runs to the edge of the scrollbar, i.e., there is no button at that end of the track.</li>
<li><code>:corner-present</code> – Applies to all scrollbar pieces and indicates whether or not a scrollbar corner is present.</li>
<li><code>:window-inactive</code> – Applies to all scrollbar pieces and indicates whether or not the window containing the scrollbar is currently active. (In recent nightlies, this pseudo-class now applies to ::selection as well. We plan to extend it to work with any content and to propose it as a new standard pseudo-class.)</li>
</ul>
<p><strong>Examples of these combinations</strong></p>
<pre><code>::-webkit-scrollbar-track-piece:start { /* Select the top half (or left half) or scrollbar track individually */ }
::-webkit-scrollbar-thumb:window-inactive { /* Select the thumb when the browser window isn't in focus */ }
::-webkit-scrollbar-button:horizontal:decrement:hover { /* Select the down or left scroll button when it's being hovered by the mouse */ }
</code></pre>
<ul>
<li><a href="https://www.webkit.org/blog/363/styling-scrollbars/" rel="noreferrer">Styling Scrollbars - Webkit.org</a></li>
</ul>
<h3>Further details about Chrome</h3>
<blockquote>
<p><strong>addWindowScrollHandler</strong>
public static HandlerRegistration addWindowScrollHandler(Window.ScrollHandler handler)</p>
<p> Adds a Window.ScrollEvent handler
Parameters:
handler - the handler
Returns:
returns the handler registration
<a href="http://www.gwtproject.org/javadoc/latest/com/google/gwt/user/client/Window.html#addWindowScrollHandler(com.google.gwt.user.client.Window.ScrollHandler)" rel="noreferrer"><em>Source</em></a></p>
</blockquote>
<hr>
<h2>Mozilla</h2>
<p>Mozilla does have some extensions for manipulating the scroll-bars, but they are all recommended not to be used.</p>
<ul>
<li><code>-moz-scrollbars-none</code> They recommend using overflow:hidden in place of this.</li>
<li><code>-moz-scrollbars-horizontal</code> Similar to overflow-x</li>
<li><code>-moz-scrollbars-vertical</code> Similar to overflow-y</li>
<li><p><code>-moz-hidden-unscrollable</code> Only works internally within a user's profile settings. Disables scrolling of XML root elements and disables using arrow keys and the mouse wheel to scroll web pages.</p></li>
<li><p><a href="https://developer.mozilla.org/en-US/docs/Web/CSS/overflow" rel="noreferrer">Mozilla Developer Docs on 'Overflow'</a></p></li>
</ul>
<h3>Further details about Mozilla</h3>
<p>This is not really useful as far as I know, but it's worth noting that the attribute which controls whether or not scrollbars are displayed in Firefox is (<a href="https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Reference/Interface/nsIDOMWindow" rel="noreferrer">reference link</a>):</p>
<ul>
<li><strong>Attribute:</strong> scrollbars</li>
<li><strong>Type:</strong> nsIDOMBarProp</li>
<li><strong>Description:</strong> The object that controls whether or not scrollbars are shown in the window. This attribute is "replaceable" in JavaScript. Read only</li>
</ul>
<h2>Last but not least, padding is like magic.</h2>
<p>As has been previously mentioned in some other answers, here is an illustration which is sufficiently self-explanatory.</p>
<p><img src="https://i.stack.imgur.com/fafgt.gif" alt="Enter image description here"></p>
<hr>
<h2>History lesson</h2>
<p><a href="https://i.stack.imgur.com/C1Wd0.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/C1Wd0.jpg" alt="Scroll bars"></a></p>
<p>Just because I'm curious, I wanted to learn about the origin of scrollbars, and these are the best references I found.</p>
<ul>
<li><a href="https://arxiv.org/pdf/1404.6752.pdf" rel="noreferrer">10 Inventions on Scrolling and Scrollbars</a></li>
<li><a href="https://tools.ietf.org/id/draft-hellstrom-textpreview-02.txt" rel="noreferrer">https://tools.ietf.org/id/draft-hellstrom-textpreview-02.txt</a></li>
<li><a href="https://tools.ietf.org/id/draft-mrose-blocks-service-01.txt" rel="noreferrer">https://tools.ietf.org/id/draft-mrose-blocks-service-01.txt</a></li>
</ul>
<h2>Miscellaneous</h2>
<p><a href="http://www.w3.org/TR/2014/CR-html5-20140204/embedded-content-0.html#attr-iframe-seamless" rel="noreferrer">In an HTML5 specification draft, the <code>seamless</code> attribute was defined to prevent scroll-bars from appearing in iFrames so that they could be blended with surrounding content on a page</a>. Though this element does not appear in the latest revision.</p>
<p>The <code>scrollbar</code> BarProp object is a child of the <code>window</code> object and represents the user interface element that contains a scrolling mechanism, or some similar interface concept. <code>window.scrollbars.visible</code> will return <code>true</code> if the scroll bars are visible.</p>
<pre><code>interface Window {
// The current browsing context
readonly attribute WindowProxy window;
readonly attribute WindowProxy self;
attribute DOMString name;
[PutForwards=href] readonly attribute Location location;
readonly attribute History history;
readonly attribute UndoManager undoManager;
Selection getSelection();
[Replaceable] readonly attribute BarProp locationbar;
[Replaceable] readonly attribute BarProp menubar;
[Replaceable] readonly attribute BarProp personalbar;
[Replaceable] readonly attribute BarProp scrollbars;
[Replaceable] readonly attribute BarProp statusbar;
[Replaceable] readonly attribute BarProp toolbar;
void close();
void focus();
void blur();
// Truncated
</code></pre>
<p>The History API also includes features for scroll restoration on page navigation to persist the scroll position on page load.</p>
<p><code>window.history.scrollRestoration</code> can be used to check the status of scroll restoration or change its status (by assigning <code>"auto"</code> or <code>"manual"</code>). Auto is the default value. Changing it to manual means that you as the developer will take ownership of any scroll changes that may be required when a user traverses the app's history. If you need to, you can keep track of the scroll position as you push history entries with history.pushState().</p>
<h2>---</h2>
<h1>Further reading:</h1>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Scrollbar" rel="noreferrer">Scrollbar on Wikipedia</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/windows/desktop/bb787529(v=vs.85).aspx" rel="noreferrer">Scroll bar (Windows)</a></li>
<li><a href="http://help.dottoro.com/ljtxlmfr.php" rel="noreferrer">The Scroll Method</a></li>
<li><a href="http://msdn.microsoft.com/en-us/library/ms536726(VS.85).aspx" rel="noreferrer">The Scroll Method - Microsoft Dev Network</a></li>
<li><a href="https://github.com/cubiq/iscroll" rel="noreferrer">iScroll on Github (referenced in the first section above)</a></li>
<li><a href="http://www.nngroup.com/articles/scrolling-and-scrollbars/" rel="noreferrer">Scrolling and Scrollbars an article about usability by Jakob Nielsen</a></li>
</ul>
<h2>Examples</h2>
<ul>
<li><a href="https://benfrain.com/independent-scrolling-panels-body-scroll-using-just-css/" rel="noreferrer">Independent scrolling panels with no body scroll (using just CSS) - Ben Frain (10-21-2014)</a></li>
</ul> | 2014-08-29 04:17:42.943000+00:00 | 2019-07-13 16:18:04.377000+00:00 | 2019-07-13 16:18:04.377000+00:00 | null | 3,296,644 | <p>Can CSS be used to hide the scroll bar? How would you do this?</p> | 2010-07-21 05:57:43.340000+00:00 | 2022-01-14 13:37:41.780000+00:00 | 2019-07-13 13:16:08.523000+00:00 | css|browser|scrollbar | ['https://github.com/cubiq/iscroll', 'http://en.wikipedia.org/wiki/World_Wide_Web_Consortium', 'http://msdn.microsoft.com/en-us/library/ie/ms534393(v=vs.85).aspx', 'http://msdn.microsoft.com/en-us/library/ie/hh772048%28v=vs.85%29.aspx', 'http://www.w3.org/Style/Examples/007/scrollbars.en.html', 'http://msdn.microsoft.com/en-us/library/system.windows.controls.primitives.scrollbar%28v=vs.110%29.aspx', 'https://www.webkit.org/blog/363/styling-scrollbars/', 'http://www.gwtproject.org/javadoc/latest/com/google/gwt/user/client/Window.html#addWindowScrollHandler(com.google.gwt.user.client.Window.ScrollHandler)', 'https://developer.mozilla.org/en-US/docs/Web/CSS/overflow', 'https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Reference/Interface/nsIDOMWindow', 'https://i.stack.imgur.com/C1Wd0.jpg', 'https://arxiv.org/pdf/1404.6752.pdf', 'https://tools.ietf.org/id/draft-hellstrom-textpreview-02.txt', 'https://tools.ietf.org/id/draft-mrose-blocks-service-01.txt', 'http://www.w3.org/TR/2014/CR-html5-20140204/embedded-content-0.html#attr-iframe-seamless', 'https://en.wikipedia.org/wiki/Scrollbar', 'https://msdn.microsoft.com/en-us/library/windows/desktop/bb787529(v=vs.85).aspx', 'http://help.dottoro.com/ljtxlmfr.php', 'http://msdn.microsoft.com/en-us/library/ms536726(VS.85).aspx', 'https://github.com/cubiq/iscroll', 'http://www.nngroup.com/articles/scrolling-and-scrollbars/', 'https://benfrain.com/independent-scrolling-panels-body-scroll-using-just-css/'] | 22 |
48,487,020 | <p>The idea is that it is harder to overfit due to <em>gradient noise</em>. But accuracy does not keep improving as the batch size shrinks. See Table 5.9 on page 59 of <a href="https://arxiv.org/pdf/1707.09725.pdf" rel="nofollow noreferrer">Analysis and Optimization of Convolutional Neural Network Architectures</a>. If you make the batch size too small, the accuracy decreases again.</p> | 2018-01-28 13:44:22.350000+00:00 | 2018-01-28 13:44:22.350000+00:00 | null | null | 48,482,059 | <p>My understanding about batch size was that the smaller, the noisier and the less computationally efficient, however I developed a model and I'm using a certain dataset in which I try different configurations, and all I can see is that the accuracy gets better as the batch size decreases (while keeping the rest of the parameters constant). I tried batch sizes of 2, 4, 8, 16, 32 and 64. I expected that the accuracy would increase from 2-8, and it would be stable/oscillating in the others, but the improvement over the reduction of the batch size is totally clear (2 times 5-fold cross-validation).</p>
<p>My question is, why is this happening? What can I say about my model and dataset when this is happening?</p> | 2018-01-28 00:42:00.073000+00:00 | 2018-01-28 13:44:22.350000+00:00 | null | tensorflow|deep-learning | ['https://arxiv.org/pdf/1707.09725.pdf'] | 1 |
<p>If you want some ways to preserve the non-black colors in your style transfer model, I suggest checking out the GitHub repo <a href="https://github.com/rrmina/neural-style-pytorch" rel="nofollow noreferrer">here</a>. It has .ipynb notebooks with entire training pipelines, model weights, a good readme, etc. to reference. According to their readme, they try to implement this <a href="https://arxiv.org/pdf/1606.05897.pdf" rel="nofollow noreferrer">paper</a> on preserving color in neural artistic style transfer, which should help you. You can also reference other repos and run some of them on Colab via the Papers with Code page <a href="https://paperswithcode.com/paper/preserving-color-in-neural-artistic-style" rel="nofollow noreferrer">here</a>, though I suggest looking at the first repo first.</p>
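<p>To make one of that paper's ideas concrete, here is a minimal sketch (my own illustration, not code from the repo) of the luminance-only trick with PIL: keep the stylised luminance channel but copy the chrominance channels back from the content image, so the original colors survive the transfer. The file names are hypothetical placeholders.</p>
<pre><code>from PIL import Image

def keep_content_colors(content_path, generated_path, out_path):
    # Convert both images to YCbCr: Y is luminance, Cb/Cr carry the color.
    content = Image.open(content_path).convert("YCbCr")
    generated = Image.open(generated_path).convert("YCbCr").resize(content.size)

    y_gen, _, _ = generated.split()   # stylised luminance
    _, cb, cr = content.split()       # original color channels

    Image.merge("YCbCr", (y_gen, cb, cr)).convert("RGB").save(out_path)

# hypothetical file names, adjust to your own paths
keep_content_colors("content.jpg", "generated.png", "color_preserved.png")
</code></pre>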
<p>If you would rather do the color transfer outside your style transfer model, i.e. transfer color between two images with the help of some library functions, then I recommend looking at this <a href="https://www.pyimagesearch.com/2014/06/30/super-fast-color-transfer-images/" rel="nofollow noreferrer">tutorial</a>.</p>
<p>Sarthak Jain</p> | 2021-07-17 21:54:13.803000+00:00 | 2021-07-17 21:54:13.803000+00:00 | null | null | 68,423,074 | <p>I've got a neural style transfer model. I'm currently working on trying to use different parts of an image to transfer different pictures. I'm wondering how can I get the model to just use the colours present in an image. Below is an example:</p>
<p><a href="https://i.stack.imgur.com/fMtyq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fMtyq.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/mBHSO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mBHSO.jpg" alt="enter image description here" /></a></p>
<p>The picture above is the style image that I have gotten from using thresholding along with the original image. Now the transferred picture is below:</p>
<p><a href="https://i.stack.imgur.com/LQPgs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LQPgs.png" alt="enter image description here" /></a></p>
<p>Obviously it's transferred some of the black parts of the image but I only want the non black colours present to be transferred. Below is my code for my model:</p>
<pre><code>import torch
import torch.nn as nn
import torch.optim as optim
from PIL import Image
import torchvision.transforms as transforms
import torchvision.models as models
from torchvision.utils import save_image
class VGG(nn.Module):
def __init__(self):
super(VGG, self).__init__()
self.chosen_features = ["0", "5", "10", "19", "28"]
self.model = models.vgg19(pretrained=True).features[:29]
def forward(self, x):
# Store relevant features
features = []
for layer_num, layer in enumerate(self.model):
x = layer(x)
if str(layer_num) in self.chosen_features:
features.append(x)
return features
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def load_image(image_name):
image = Image.open(image_name)
image = loader(image).unsqueeze(0)
return image.to(device)
imsize = 384
loader = transforms.Compose(
[
transforms.Resize((imsize, imsize)),
transforms.ToTensor(),
# transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
]
)
original_img = load_image("Content Image.jpg")
style_img = load_image("Adaptive Image 2.jpg")
# initialized generated as white noise or clone of original image.
# Clone seemed to work better for me.
generated = original_img.clone().requires_grad_(True)
# generated = load_image("20epoctom.png")
model = VGG().to(device).eval()
# Hyperparameters
total_steps = 10000
learning_rate = 0.001
alpha = 1
beta = 0.01
optimizer = optim.Adam([generated], lr=learning_rate)
for step in range(total_steps):
# Obtain the convolution features in specifically chosen layers
generated_features = model(generated)
original_img_features = model(original_img)
style_features = model(style_img)
# Loss is 0 initially
style_loss = original_loss = 0
# iterate through all the features for the chosen layers
for gen_feature, orig_feature, style_feature in zip(
generated_features, original_img_features, style_features
):
# batch_size will just be 1
batch_size, channel, height, width = gen_feature.shape
original_loss += torch.mean((gen_feature - orig_feature) ** 2)
# Compute Gram Matrix of generated
G = gen_feature.view(channel, height * width).mm(
gen_feature.view(channel, height * width).t()
)
# Compute Gram Matrix of Style
A = style_feature.view(channel, height * width).mm(
style_feature.view(channel, height * width).t()
)
style_loss += torch.mean((G - A) ** 2)
total_loss = alpha * original_loss + beta * style_loss
optimizer.zero_grad()
total_loss.backward()
optimizer.step()
if step % 500 == 0:
print(total_loss)
save_image(generated, f"Generated Pictures/{step//500} Iterations Generated Picture.png")
</code></pre>
<p>Any idea of where to potentially go as well would be appreciated!</p> | 2021-07-17 18:02:11.873000+00:00 | 2021-07-17 21:54:13.803000+00:00 | null | python|opencv|pytorch|image-thresholding | ['https://github.com/rrmina/neural-style-pytorch', 'https://arxiv.org/pdf/1606.05897.pdf', 'https://paperswithcode.com/paper/preserving-color-in-neural-artistic-style', 'https://www.pyimagesearch.com/2014/06/30/super-fast-color-transfer-images/'] | 4 |
62,582,914 | <p>I suggest you use the U-Net container on NGC <a href="https://ngc.nvidia.com/catalog/resources/nvidia:unet_industrial_for_tensorflow" rel="nofollow noreferrer">https://ngc.nvidia.com/catalog/resources/nvidia:unet_industrial_for_tensorflow</a>
I also suggest you read this: Mixed Precision Training: <a href="https://arxiv.org/abs/1710.03740" rel="nofollow noreferrer">https://arxiv.org/abs/1710.03740</a>
<a href="https://developer.nvidia.com/blog/mixed-precision-training-deep-neural-networks/" rel="nofollow noreferrer">https://developer.nvidia.com/blog/mixed-precision-training-deep-neural-networks/</a></p>
<p>Let me know how you are progressing and, if there is a public repo, I am happy to have a look.</p> | 2020-06-25 19:26:23.673000+00:00 | 2020-06-25 19:26:23.673000+00:00 | null | null | 62,575,809 | <p>I am training a U-Net architecture for a segmentation task. This is in Python using Keras. I have now run into an issue that I am trying to understand:</p>
<p>I have two very similar images from a microscopy image series (these are consecutive images), where my current U-Net model performs very well on one, but extremely poorly on the immediately following one. However, there is little difference between the two to the eye and the histograms also look very much alike. Also, on some measurements the model performs great across the whole frame range, but then this issue appears on other measurements.</p>
<p>I am using data-augmentation during training (histogram stretching, affine transformation, noise-addition) and I am surprised that still the model is so brittle.</p>
<p>Since the U-Net is still mostly a black-box to me, I want to find out steps I can take to better understand the issue and then adjust the training/model accordingly.</p>
<p>I know there are ways to visualize what individual layers learn (e.g., as discussed in F. Chollet's book; <a href="https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.4-visualizing-what-convnets-learn.ipynb" rel="nofollow noreferrer">see here</a>) and I should be able to apply these to U-Nets, which are fully convolutional.</p>
<p>However, these kinds of methods are practically always discussed in the realm of classifying networks - not semantic segmentation.</p>
<p>So my question is:</p>
<p>Is this the best/most direct approach to reach an understanding of how U-Net models attain a segmentation result? If not, what are better ways to understand/debug U-Nets?</p> | 2020-06-25 12:52:22.607000+00:00 | 2020-06-25 19:26:23.673000+00:00 | null | python|tensorflow|computer-vision|conv-neural-network|semantic-segmentation | ['https://ngc.nvidia.com/catalog/resources/nvidia:unet_industrial_for_tensorflow', 'https://arxiv.org/abs/1710.03740', 'https://developer.nvidia.com/blog/mixed-precision-training-deep-neural-networks/'] | 3 |
60,308,200 | <p>Since the <a href="https://github.com/neo4j-contrib/neo4j-apoc-procedures/blob/4.0/src/main/java/apoc/text/Strings.java" rel="nofollow noreferrer">implementations</a> of <code>apoc.text.levenshteinDistance</code> and <code>apoc.text.levenshteinSimilarity</code> simply rely on <a href="https://commons.apache.org/proper/commons-text/apidocs/org/apache/commons/text/similarity/LevenshteinDistance.html" rel="nofollow noreferrer">org.apache.commons.text.similarity.LevenshteinDistance</a> to do the calculation, the APOC library does not introduce any complexity improvements.</p>
<p>In any case, such a calculation should just compare 2 strings of text and should not in any way rely on the graphical nature of the DB.</p>
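<p>For reference, the underlying computation is the textbook dynamic program over an n-by-m table; a rough Python sketch (not the commons-text code itself) looks like this:</p>
<pre><code>def levenshtein(a: str, b: str) -> int:
    # Classic O(len(a) * len(b)) dynamic program, kept to two rows of the table.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (free if chars match)
            ))
        prev = curr
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
</code></pre>
<p>Every cell depends only on three neighbouring cells, which is exactly where the quadratic bound comes from.</p>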
<p>And finally, <a href="https://ui.adsabs.harvard.edu/abs/2014arXiv1412.0348B/abstract" rel="nofollow noreferrer">it has been proven</a> that the complexity <strong>cannot</strong> be improved (unless the <a href="https://en.wikipedia.org/wiki/Exponential_time_hypothesis" rel="nofollow noreferrer">Strong Exponential Time Hypothesis</a> is wrong).</p> | 2020-02-19 19:53:00.537000+00:00 | 2020-02-20 01:09:54.967000+00:00 | 2020-02-20 01:09:54.967000+00:00 | null | 60,303,724 | <p>Would the Levenshtein (Edit Distance) have better time complexity in a native graph database such as Neo4j than the current limit of O(n*m)? If so, why?</p> | 2020-02-19 15:26:13.847000+00:00 | 2020-02-20 01:09:54.967000+00:00 | null | neo4j|time-complexity|graph-databases|levenshtein-distance|edit-distance | ['https://github.com/neo4j-contrib/neo4j-apoc-procedures/blob/4.0/src/main/java/apoc/text/Strings.java', 'https://commons.apache.org/proper/commons-text/apidocs/org/apache/commons/text/similarity/LevenshteinDistance.html', 'https://ui.adsabs.harvard.edu/abs/2014arXiv1412.0348B/abstract', 'https://en.wikipedia.org/wiki/Exponential_time_hypothesis'] | 4 |
43,169,781 | <p>As it turns out, this fantastic little paper on arxiv gives a nice clear description of balancing: <a href="https://arxiv.org/pdf/1401.5766.pdf" rel="noreferrer">https://arxiv.org/pdf/1401.5766.pdf</a>. When I implement this balancing, the eigenvalues agree almost perfectly with numpy. It would be great if Eigen would balance the matrix prior to taking eigenvalues.</p>
<pre><code>void balance_matrix(const Eigen::MatrixXd &A, Eigen::MatrixXd &Aprime, Eigen::MatrixXd &D) {
// https://arxiv.org/pdf/1401.5766.pdf (Algorithm #3)
const int p = 2;
double beta = 2; // Radix base (2?)
Aprime = A;
D = Eigen::MatrixXd::Identity(A.rows(), A.cols());
bool converged = false;
do {
converged = true;
for (Eigen::Index i = 0; i < A.rows(); ++i) {
double c = Aprime.col(i).lpNorm<p>();
double r = Aprime.row(i).lpNorm<p>();
double s = pow(c, p) + pow(r, p);
double f = 1;
while (c < r / beta) {
c *= beta;
r /= beta;
f *= beta;
}
while (c >= r*beta) {
c /= beta;
r *= beta;
f /= beta;
}
if (pow(c, p) + pow(r, p) < 0.95*s) {
converged = false;
D(i, i) *= f;
Aprime.col(i) *= f;
Aprime.row(i) /= f;
}
}
} while (!converged);
}
</code></pre> | 2017-04-02 14:55:39.170000+00:00 | 2017-04-02 14:55:39.170000+00:00 | null | null | 43,151,853 | <p>My experience (like some others: <a href="https://stackoverflow.com/questions/23912310/how-do-i-get-specified-eigenvectors-from-the-generalized-schur-factorization-of">How do I get specified Eigenvectors from the generalized Schur factorization of a matrix pair using LAPACK?</a>) is that the eigenvalues obtained from Eigen (I don't care about the eigenvectors) are not nearly as reliable as those obtained from numpy, matlab, etc. when the matrix is ill-conditioned.</p>
<p>The internet (<a href="https://www.mathworks.com/help/matlab/ref/balance.html" rel="nofollow noreferrer">https://www.mathworks.com/help/matlab/ref/balance.html</a>) suggests that balancing is the solution, but I can't figure out how to do this in Eigen. Can anyone help?</p>
<p>At the moment I have an annoying two-layer solution that involves python and C++ and I would like to push everything into C++; the eigenvalue solver is the only part that is holding me back.</p> | 2017-04-01 01:35:24.813000+00:00 | 2017-04-02 14:55:39.170000+00:00 | 2017-05-23 12:02:43.670000+00:00 | c++|eigen|eigen3 | ['https://arxiv.org/pdf/1401.5766.pdf'] | 1 |
39,288,513 | <p>You can use blocks of size 3! Yes, I'm as surprised as you are. In 2014 (you asked in 2010) a paper appeared which shows how to do so.</p>
<p>The idea is as follows: instead of doing <code>median3</code>, <code>partition</code>, <code>median3</code>, <code>partition</code>, ..., you do <code>median3</code>, <code>median3</code>, <code>partition</code>, <code>median3</code>, <code>median3</code>, <code>partition</code>, ... . In the paper this is called "The Repeated Step Algorithm".</p>
<p>So instead of:</p>
<pre><code>T(n) <= T(n/3) + T(2n/3) + O(n)
T(n) = O(nlogn)
</code></pre>
<p>one gets:</p>
<pre><code>T(n) <= T(n/9) + T(7n/9) + O(n)
T(n) = Theta(n)
</code></pre>
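<p>To make the "repeated step" idea concrete, here is a rough Python sketch of the selection routine (my own illustration of the scheme, not the tuned version from the papers): medians of groups of 3 are taken twice before recursing for the pivot.</p>
<pre><code>def select(arr, k):
    """Return the k-th smallest element of arr (0-indexed)."""
    if len(arr) <= 9:
        return sorted(arr)[k]

    # First pass: medians of groups of 3.
    medians = [sorted(arr[i:i + 3])[len(arr[i:i + 3]) // 2]
               for i in range(0, len(arr), 3)]
    # Repeated step: medians of groups of 3 of those medians (~n/9 elements).
    medians2 = [sorted(medians[i:i + 3])[len(medians[i:i + 3]) // 2]
                for i in range(0, len(medians), 3)]

    pivot = select(medians2, len(medians2) // 2)

    lows = [x for x in arr if x < pivot]
    pivots = [x for x in arr if x == pivot]
    highs = [x for x in arr if x > pivot]

    if k < len(lows):
        return select(lows, k)
    if k < len(lows) + len(pivots):
        return pivot
    return select(highs, k - len(lows) - len(pivots))
</code></pre>
<p>For quicksort pivoting you would call <code>select(a, len(a) // 2)</code> on the partition and split around the returned value.</p>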
<p>The said article is <a href="http://arxiv.org/abs/1409.3600" rel="noreferrer">Select with Groups of 3 or 4 Takes Linear Time</a> by K. Chen and A. Dumitrescu (2014, arxiv), or <a href="http://www.cs.uwm.edu/faculty/ad/select.pdf" rel="noreferrer">Select with groups of 3 or 4</a> (2015, author's homepage).</p>
<p>PS: See also <a href="https://arxiv.org/abs/1606.00484" rel="noreferrer">Fast Deterministic Selection</a> by A. Alexandrescu (of D language fame!), which shows how to implement the above even more efficiently.</p> | 2016-09-02 09:07:33.223000+00:00 | 2016-09-02 09:07:33.223000+00:00 | null | null | 3,908,073 | <p>I'm working on a quicksort-variant implementation based on <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm" rel="noreferrer">the Select algorithm</a> for choosing a good pivot element. Conventional wisdom seems to be to divide the array into 5-element blocks, take the median of each, and then recursively apply the same blocking approach to the resulting medians to get a "median of medians".</p>
<p>What's confusing me is the choice of 5-element blocks rather than 3-element blocks. With 5-element blocks, it seems to me that you perform <code>n/4 = n/5 + n/25 + n/125 + n/625 + ...</code> median-of-5 operations, whereas with 3-element blocks, you perform <code>n/2 = n/3 + n/9 + n/27 + n/81 + ...</code> median-of-3 operations. Being that each median-of-5 is 6 comparisons, and each median-of-3 is 2 comparisons, that results in <code>3*n/2</code> comparisons using median-of-5 and <code>n</code> comparisons using median-of-3.</p>
<p>Can anyone explain this discrepancy, and what the motivation for using 5-element blocks could be? I'm not familiar with usual practices for applying these algorithms, so maybe there's some way you can cut out some steps and still get "close enough" to the median to ensure a good pivot, and that approach works better with 5-element blocks?</p> | 2010-10-11 16:25:14.063000+00:00 | 2016-09-02 09:07:33.223000+00:00 | null | algorithm|language-agnostic|sorting|quicksort|median | ['http://arxiv.org/abs/1409.3600', 'http://www.cs.uwm.edu/faculty/ad/select.pdf', 'https://arxiv.org/abs/1606.00484'] | 3 |
67,957,782 | <p>Please see <a href="https://arxiv.org/abs/1704.04110" rel="nofollow noreferrer">DeepAR</a> - an LSTM-based forecaster that predicts more than one step into the future.</p>
<blockquote>
<p>The main contributions of the paper are twofold: (1) we propose an RNN
architecture for probabilistic forecasting, incorporating a negative
Binomial likelihood for count data as well as special treatment for
the case when the magnitudes of the time series vary widely; (2) we
demonstrate empirically on several real-world data sets that this
model produces accurate probabilistic forecasts across a range of
input characteristics, thus showing that modern deep learning-based
approaches can effective address the probabilistic forecasting
problem, which is in contrast to common belief in the field and the
mixed results</p>
</blockquote>
<p>In this paper, they forecast multiple steps into the future, which counters exactly what you describe here: the propagation of error.<br />
Predicting several steps at once allows the model to make more accurate predictions further into the future.</p>
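<p>As a toy illustration of that idea (emitting the whole horizon in one shot instead of feeding predictions back in), a Keras sketch could look like the following. This is not DeepAR itself, just the direct multi-step setup with made-up sizes and random data:</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

lookback, horizon, n_features = 24, 6, 3   # hypothetical sizes

model = tf.keras.Sequential([
    layers.Input(shape=(lookback, n_features)),  # past y(t) plus external inputs u_i(t)
    layers.LSTM(64),
    layers.Dense(horizon),                       # all future steps emitted at once
])
model.compile(optimizer="adam", loss="mse")

# random data just to show the shapes
x = np.random.rand(128, lookback, n_features)
y = np.random.rand(128, horizon)
model.fit(x, y, epochs=1, verbose=0)

forecast = model.predict(x[:1])   # shape (1, horizon): no predicted value is fed back
</code></pre>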
<p>One more thing done in this paper is predicting percentiles and interpolating between them, rather than predicting the value directly. This adds stability and provides an error assessment.</p>
<hr />
<p>Disclaimer - I read an older version of this paper.</p> | 2021-06-13 11:33:44.017000+00:00 | 2021-06-13 11:33:44.017000+00:00 | null | null | 67,957,684 | <p>I have a confusion about the way the LSTM networks work when forecasting with an horizon that is not finite but I'm rather searching for a prediction in whatever time in future. In physical terms I would call it the evolution of the system.</p>
<p>Suppose I have a time series $y(t)$ (output) I want to forecast, and some external inputs $u_1(t), u_2(t),\cdots u_N(t)$ on which the series $y(t)$ depends.</p>
<p>It's common to use the lagged value of the output $y(t)$ as input for the network, such that I schematically have something like (let's consider for simplicity just lag 1 for the output and no lag for the external input):</p>
<p>[y(t-1), u_1(t), u_2(t),\cdots u_N(t)] \to y(t)</p>
<p>With this way of setting up the network, when one wants to do a recursive forecast, it is forced to use the predicted value at the previous step as input for the next step. In this way we have an error-propagation effect that makes the long-term forecast behave badly.</p>
<p>Now, my confusion is this: I think of an RNN as a kind of (simple) implementation of a state space model where I have the inputs, my output and one or more state variables responsible for the memory of the system. These variables are hidden and not observed.</p>
<p>So now the question: if there is this kind of variable that already takes into account previous states of the system, why would I need to use the lagged output value as input to my network/model?</p>
<p>If I get rid of this, would my long-term forecast be better, since I am no longer expecting the propagation of the error of the forecasted output? (I guess there will anyway be an error propagating in the internal state.)</p>
<p>Thanks !</p> | 2021-06-13 11:23:09.817000+00:00 | 2021-06-13 11:34:01.927000+00:00 | 2021-06-13 11:34:01.927000+00:00 | deep-learning|time-series|lstm|forecasting | ['https://arxiv.org/abs/1704.04110'] | 1 |
66,923,372 | <p>Generally, the idea is to measure the reconstruction and classify anomalies as those datapoints that cause a significant deviation from the input. Thus, one can use other norms such as <code>mae</code>. However, the results will probably be very similar.</p>
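<p>For example, scoring with a per-sample MAE instead of MSE only changes the reduction step. A small self-contained sketch (untrained toy autoencoder, random data, and a hypothetical threshold choice):</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Tiny autoencoder just to make the scoring code runnable;
# in practice this would be your trained model.
ae = tf.keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(4, activation="relu"),
    layers.Dense(20),
])

x_normal = np.random.rand(200, 20)   # stand-in for "known normal" data
x_test = np.random.rand(50, 20)

mae_normal = np.mean(np.abs(x_normal - ae.predict(x_normal, verbose=0)), axis=1)
mae_test = np.mean(np.abs(x_test - ae.predict(x_test, verbose=0)), axis=1)

threshold = np.quantile(mae_normal, 0.99)   # e.g. flag the worst 1 % as anomalous
anomalies = mae_test > threshold
</code></pre>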
<p>I would suggest different flavors of the autoencoder. First of all, if you are not already using it, the <a href="https://www.jeremyjordan.me/variational-autoencoders/" rel="nofollow noreferrer">variational autoencoder</a> is better than a standard autoencoder in all respects.</p>
<p>Second, the performance of a variational autoencoder can be significantly improved by using the <a href="http://dm.snu.ac.kr/static/docs/TR/SNUDM-TR-2015-03.pdf" rel="nofollow noreferrer">reconstruction probability</a>. The idea is to output the parameters for probability distributions not only for the latent space but also for the feature space. This means that the decoder would output a mean and a variance to parameterize a normal distribution when used with continuous data. Then the reconstruction probability is basically the negative log likelihood of the normal distribution <code>N(x; decoder_mu, decoder_var)</code>. Using the 2-sigma rule, the variance can be interpreted as a confidence interval, and thus even small deviations can lead to a high anomaly score.</p>
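<p>A rough sketch of that scoring step (my own toy illustration, untrained and with the bottleneck kept deterministic for brevity): the decoder ends in two heads, a mean and a log-variance per feature, and the anomaly score is the Gaussian negative log-likelihood of the input under those parameters.</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_features, latent_dim = 20, 4

inputs = layers.Input(shape=(n_features,))
h = layers.Dense(16, activation="relu")(inputs)
z = layers.Dense(latent_dim)(h)              # deterministic bottleneck for brevity
h_dec = layers.Dense(16, activation="relu")(z)
mu = layers.Dense(n_features)(h_dec)         # reconstruction mean
log_var = layers.Dense(n_features)(h_dec)    # reconstruction log-variance
model = tf.keras.Model(inputs, [mu, log_var])

x = np.random.rand(8, n_features).astype("float32")
mu_hat, log_var_hat = model.predict(x, verbose=0)

# Negative log-likelihood of x under N(mu_hat, exp(log_var_hat)), summed over features.
nll = 0.5 * np.sum(
    log_var_hat + (x - mu_hat) ** 2 / np.exp(log_var_hat) + np.log(2 * np.pi),
    axis=1,
)
# Larger nll means a less probable reconstruction, i.e. a more anomalous sample.
</code></pre>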
<p>Other than that, there are other flavors like <a href="https://www.mdpi.com/1424-8220/20/13/3738" rel="nofollow noreferrer"><code>vae-gan</code></a>, which combines a VAE and a GAN and uses a combined anomaly score based on the reconstruction error and the discriminator prediction. Also, depending on your problem type, you can go down the route of a <a href="https://arxiv.org/pdf/2103.12998.pdf" rel="nofollow noreferrer"><code>vae-sl</code></a> that adds an additional classifier in the bottleneck. The model is then trained on mixed data which can be fully or sparsely labelled. The classifier can then be used for anomaly detection.</p> | 2021-04-02 17:59:33.230000+00:00 | 2021-04-02 17:59:33.230000+00:00 | null | null | 66,921,984 | <p>Other than mean square error, are there other quantities that we can use to detect anomalies using autoencoder in keras?</p> | 2021-04-02 15:58:58.480000+00:00 | 2021-04-02 17:59:33.230000+00:00 | null | keras|autoencoder|anomaly-detection | ['https://www.jeremyjordan.me/variational-autoencoders/', 'http://dm.snu.ac.kr/static/docs/TR/SNUDM-TR-2015-03.pdf', 'https://www.mdpi.com/1424-8220/20/13/3738', 'https://arxiv.org/pdf/2103.12998.pdf'] | 4 |
56,745,037 | <p>Yes, that is how current neural networks work. The only exceptions are some state-of-the-art networks such as <a href="https://arxiv.org/abs/1711.11503" rel="nofollow noreferrer">CNNs with Adaptive Inference Graphs</a>.</p> | 2019-06-24 23:20:04.390000+00:00 | 2019-06-24 23:20:04.390000+00:00 | null | null | 56,744,862 | <p>When you have an input that you want to make a prediction on, does the input have to be run through the entire neural net?</p> | 2019-06-24 22:49:05.280000+00:00 | 2019-06-25 11:16:56.163000+00:00 | 2019-06-25 11:16:56.163000+00:00 | machine-learning|neural-network|predict|inference | ['https://arxiv.org/abs/1711.11503'] | 1 |
18,293,138 | <p>This might help you:
<a href="http://arxiv.org/pdf/1308.3466v1.pdf" rel="nofollow">http://arxiv.org/pdf/1308.3466v1.pdf</a> </p>
<p>If you store the last $k$ input symbols you can easily find palindromes up to a length of $k$.<br>
If you use the algorithms of the paper you can find the midpoints of palindromes and an estimate of their length. </p> | 2013-08-17 20:40:45.073000+00:00 | 2013-08-17 20:40:45.073000+00:00 | null | null | 4,963,560 | <p>I don't even know if a solution exists or not. Here is the problem in detail. You are a program that is accepting an infinitely long stream of characters (for simplicity you can assume characters are either 1 or 0). At any point, I can stop the stream (let's say after N characters were passed through) and ask you if the string received so far is a palindrome or not. How can you do this using sub-linear space and/or time?</p>
61,003,774 | <p>The idea is super cool! I think that option A will probably work fairly well if the posts are very formulaic, but it really isn't that exciting. </p>
<p>Option B, like you point out will need training data.</p>
<p>Option C, really isn't the right use case of an autoencoder to try to extract latent information and somehow get from unstructured data to structured classifications. </p>
<p>I'd like to throw my hat in the ring with option D, it combines some of all 3 (or at least B and C). I suggest using BERT (or some flavor of it, like RoBERTa) which pulls in some of option C, then throwing a simple classifier on top of it for prediction. Because we're using BERT we can make do with a very small dataset. A suggestion for classification, I would mask the location names (found using NER) and then do predictions. For example, "I'm going from LA to San Fran" (Spacy picks up both as GPE, I did some tests and it is actually surprisingly good at abbreviations) would become "I'm going from A to B", then have the prediction be A to B or B to A. This would reduce classes and allow multiple locations, if we had "A to B to C" it would be several classification problems: A to B, then B to C. You could then do the computation again, just changing masks (technically it requires n choose 4 computations, choosing the two highest activation, maybe throwing out reverses)</p>
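<p>A small sketch of that masking step with spaCy (a hypothetical helper of my own; the expected output in the comments assumes the model tags both place names as GPE, as described above):</p>
<pre><code>import spacy

nlp = spacy.load("en_core_web_sm")   # assumes this model is installed

def mask_locations(text):
    """Replace detected place names with A, B, C, ... so the classifier
    only has to predict the direction, not the cities themselves."""
    doc = nlp(text)
    placeholders = iter("ABCDEFGH")
    mapping = {}
    for ent in doc.ents:
        if ent.label_ in ("GPE", "LOC", "FAC") and ent.text not in mapping:
            mapping[ent.text] = next(placeholders)
    masked = text
    for name, letter in mapping.items():
        masked = masked.replace(name, letter)
    return masked, mapping

masked, mapping = mask_locations("I'm gonna drive from LA to San Fran tomorrow")
# masked  -> "I'm gonna drive from A to B tomorrow"   (if both are tagged as GPE)
# mapping -> {"LA": "A", "San Fran": "B"}
</code></pre>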
<p>I would get the dataset by bootstrapping it using option A, or even better quickly entering it yourself (because we're using BERT it shouldn't require too much data).</p>
<p>As for paper recommendations, I'm just in love with BERT lately <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1810.04805.pdf</a>. I'm really into political applications, so I thought TD Parse was awesome <a href="https://www.aclweb.org/anthology/E17-1046.pdf" rel="nofollow noreferrer">https://www.aclweb.org/anthology/E17-1046.pdf</a>. Tell me how this project goes!</p> | 2020-04-03 01:30:39.597000+00:00 | 2020-04-03 01:30:39.597000+00:00 | null | null | 61,002,376 | <p>I am member of a Facebook group for local ride shares. The group is specific for two cities and everything that is in-between, so the post are (mostly) as such:</p>
<ul>
<li><em>"I'm gonna drive from city A to city Z tomorrow afternoon"</em></li>
<li><em>"Anybody wanna join from city Z -> city A tonight"</em></li>
<li><em>"Tomorrow at 4 pm, I need to drive from city D to city Z"</em></li>
</ul>
<p>So I've been thinking about possible ways to build a simple search engine for it where people can select a date/time and the direction where they need to go. I'm thinking that in the end, I would like to have a structured tuple such as <code>{start: 'city A', end: 'city Z', time: '15/04/2020 14:00'}</code>. (I'd probably get the date from the post metadata.)</p>
<p>I'm not that advanced in NLP/text mining techniques that could make it in production, so I'm looking for some input on my ideas here:</p>
<h3>Option a): A rule-based approach</h3>
<ul>
<li>Use a common NLP library like StanfordNLP</li>
<li>Build a classic pipeline with preprocessing (stop word removal, ...), POS tagging etc.</li>
<li>Annotate all cities that we know about and define synonyms for abbreviations</li>
<li>Create enough specific rules to cover most cases</li>
<li>Probably a solid baseline, but as always: Edge cases would most likely be tedious</li>
</ul>
<h3>Option b): Supervised Learning</h3>
<ul>
<li>Turn it into a classification problem with "City A -> Z" and "City Z -> A" being the classes</li>
<li>Problem 1: Need for a hand-labeled dataset</li>
<li>Problem 2: Sub-routes in-between city A and Z become difficult</li>
<li>Not really my favorite option</li>
</ul>
<h3>Option c): Unsupervised Learning</h3>
<ul>
<li>Use an Autoencoder to extract the useful information from the posts</li>
<li>No need for hand-labeling data</li>
<li>Ideally, the latent space representation would contain all the information I need</li>
</ul>
<p>Option c) is my favorite and also the technically most interesting option, but I just started reading about this topic. Some thoughts I have about it:</p>
<ul>
<li>How would I point the Autoencoder towards the information I'm specifically interested in?</li>
<li>I read that with variational Autoencoders, you can manually set the bottleneck "thin enough" so that the compressed code contains what you're looking for. Is this a trial-and-error process or is there any intuition behind it?</li>
<li>Is an Autoencoder even the right choice to do structured data extraction from text?</li>
<li>Do you see any alternative approaches that I might have missed?</li>
</ul>
<p>I would really appreciate some thoughts, comments and paper or book recommendations. With all the current down time, I'm hoping to do some hands-on work on this and get some more experience in unsupervised learning.</p> | 2020-04-02 22:46:59.347000+00:00 | 2020-04-03 01:30:39.597000+00:00 | 2020-06-20 09:12:55.060000+00:00 | nlp|text-mining|information-retrieval|unsupervised-learning|information-extraction | ['https://arxiv.org/pdf/1810.04805.pdf', 'https://www.aclweb.org/anthology/E17-1046.pdf'] | 2 |
57,192,114 | <p>Variational inference is an approximate algorithm and we don't expect it to provide the same answer as full Bayes implemented through MCMC. The best thing to read on evaluating whether variational inference even gets close is this arXiv paper by Yuling Yao and colleagues, <a href="https://arxiv.org/abs/1802.02538" rel="nofollow noreferrer">Yes, but does it work? Evaluating variational inference</a>. There's a good description of how the approximations work in Bishop's machine learning text.</p>
<p>I don't think anything has changed in Stan's variational inference algorithm between versions recently. Variational inference can be much more sensitive to the parameters of the algorithm and to initializations than full Bayes. That's why it's still marked as "experimental" in all of our interfaces. You might try running old versions controlling for initialization and making sure there are enough iterations. Variational inference can fail pretty badly on the optimization step, winding up with suboptimal approximations. It can also fail if the best variational approximation is not very good. </p> | 2019-07-24 22:37:33.117000+00:00 | 2019-07-24 22:37:33.117000+00:00 | null | null | 57,186,920 | <p>I am working on an <a href="https://github.com/alan-turing-institute/PosteriorBootstrap/" rel="nofollow noreferrer">R package</a> that depends on RStan and I seem to have hit a failure mode in the latter.</p>
<p>I run a Bayesian logistic regression with exact inference (<code>rstan::stan()</code>) and get very different results with variational inference (<code>rstan::vb()</code>). The following code downloads the German Statlog Credit data and runs both inferences on that data:</p>
<pre class="lang-r prettyprint-override"><code>library("rstan")
seed <- 123
prior_sd <- 10
n_bootstrap <- 1000
# Index of coefficients in the plot and summary statistics
x_index <- 21
y_index <- 22
# Get the dat from online repository
library(data.table)
raw_data <- fread('http://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data-numeric', data.table = FALSE)
statlog <- list()
statlog$y <- raw_data[, 25] - 1
statlog$x <- cbind(1, scale(raw_data[, 1:24]))
# Bayesian logit in RStan
train_dat <- list(n = length(statlog$y), p = ncol(statlog$x), x = statlog$x, y = statlog$y, beta_sd = prior_sd)
stan_file <- "bayes_logit.stan"
bayes_log_reg <- rstan::stan(stan_file, data = train_dat, seed = seed,
iter = n_bootstrap * 2, chains = 1)
stan_bayes_sample <- rstan::extract(bayes_log_reg)$beta
# Variational Bayes in RStan
stan_model <- rstan::stan_model(file = stan_file)
stan_vb <- rstan::vb(object = stan_model, data = train_dat, seed = seed,
output_samples = n_bootstrap)
stan_vb_sample <- rstan::extract(stan_vb)$beta
</code></pre>
<p>The Stan file <code>bayes_logit.stan</code> with the model is:</p>
<pre><code>// Code for 0-1 loss Bayes Logistic Regression model
data {
int<lower=0> n; // number of observations
int<lower=0> p; // number of covariates
matrix[n,p] x; // Matrix of covariates
int<lower=0,upper=1> y[n]; // Responses
real<lower=0> beta_sd; // Stdev of beta
}
parameters {
vector[p] beta;
}
model {
beta ~ normal(0,beta_sd);
y ~ bernoulli_logit(x * beta); // Logistic regression
}
</code></pre>
<p>The results for coefficients 21 and 22 are very different:</p>
<pre><code>> mean(stan_bayes_sample[, 21])
[1] 0.1316655
> mean(stan_vb_sample[, 21])
[1] 0.3832403
> mean(stan_bayes_sample[, 22])
[1] -0.05473327
> mean(stan_vb_sample[, 22])
[1] 0.1570745
</code></pre>
<p>And a plot clearly shows the difference, where the dots are exact inference and the lines are the density for variational inference:</p>
<p><a href="https://i.stack.imgur.com/tZerD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tZerD.png" alt="Plot of exact and variational inferences"></a></p>
<p>I get the same results on my machine and on Azure. I have noted that exact inference gives the same results when the data is scaled and centered and variational inference gives different results, so I may unwittingly trigger a different step of data processing.</p>
<p>Even more confusing is that the same code with the same version of RStan, as recently as May 30th 2019, was giving very similar results for the two methods, as shown below, where the red dots are roughly in the same place but the blue lines are different in location and scale (and the green lines are for the method I am implementing, which I did not include in the minimal reproducible example):</p>
<p><a href="https://i.stack.imgur.com/Rn7SU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rn7SU.png" alt="Plot of exact and variational inference in a previous version"></a></p>
<p>Does anyone have a hint?</p>
<h1>Code for the plot</h1>
<p>The code for the plot is a bit long:</p>
<pre class="lang-r prettyprint-override"><code>requireNamespace("dplyr", quietly = TRUE)
requireNamespace("ggplot2", quietly = TRUE)
requireNamespace("tibble", quietly = TRUE)
#The first argument is required, either NULL or an arbitrary string.
stat_density_2d1_proto <- ggplot2::ggproto(NULL,
ggplot2::Stat,
required_aes = c("x", "y"),
compute_group = function(data, scales, bins, n) {
# Choose the bandwidth of Gaussian kernel estimators and increase it for
# smoother densities in small sample sizes
h <- c(MASS::bandwidth.nrd(data$x) * 1.5,
MASS::bandwidth.nrd(data$y) * 1.5)
# Estimate two-dimensional density
dens <- MASS::kde2d(
data$x, data$y, h = h, n = n,
lims = c(scales$x$dimension(), scales$y$dimension())
)
# Store in data frame
df <- data.frame(expand.grid(x = dens$x, y = dens$y), z = as.vector(dens$z))
# Add a label of this density for ggplot2
df$group <- data$group[1]
# plot
ggplot2::StatContour$compute_panel(df, scales, bins)
}
)
# Wrap that ggproto in a ggplot2 object
stat_density_2d1 <- function(data = NULL,
geom = "density_2d",
position = "identity",
n = 100,
...) {
ggplot2::layer(
data = data,
stat = stat_density_2d1_proto,
geom = geom,
position = position,
params = list(
n = n,
...
)
)
}
append_to_plot <- function(plot_df, sample, method,
x_index, y_index) {
new_plot_df <- rbind(plot_df, tibble::tibble(x = sample[, x_index],
y = sample[, y_index],
Method = method))
return(new_plot_df)
}
plot_df <- tibble::tibble()
plot_df <- append_to_plot(plot_df, sample = stan_bayes_sample,
method = "Bayes-Stan",
x_index = x_index, y_index = y_index)
plot_df <- append_to_plot(plot_df, sample = stan_vb_sample,
method = "VB-Stan",
x_index = x_index, y_index = y_index)
ggplot2::ggplot(ggplot2::aes_string(x = "x", y = "y", colour = "Method"),
data = dplyr::filter(plot_df, plot_df$Method != "Bayes-Stan")) +
stat_density_2d1(bins = 5) +
ggplot2::geom_point(alpha = 0.1, size = 1,
data = dplyr::filter(plot_df,
plot_df$Method == "Bayes-Stan")) +
ggplot2::theme_grey(base_size = 8) +
ggplot2::xlab(bquote(beta[.(x_index)])) +
ggplot2::ylab(bquote(beta[.(y_index)])) +
ggplot2::theme(legend.position = "none",
plot.margin = ggplot2::margin(0, 10, 0, 0, "pt"))
</code></pre> | 2019-07-24 15:54:24.660000+00:00 | 2019-07-24 22:37:33.117000+00:00 | null | r|rstan | ['https://arxiv.org/abs/1802.02538'] | 1 |
50,123,755 | <p>The first (strongly) polynomial-time algorithm was <a href="https://arxiv.org/abs/1307.6809" rel="nofollow noreferrer">published</a> by Végh in 2013, and has since been <a href="https://arxiv.org/abs/1611.01778" rel="nofollow noreferrer">improved</a> by Olver and Végh to a worst-case runtime in O((<em>m</em> + <em>n</em> log <em>n</em>) <em>m</em> <em>n</em> log(<em>n</em>^2 / <em>m</em>)).
But I don't know of any public implementation for this algorithm.</p>
<p>The linked papers also contain references to earlier (weakly) polynomial-time algorithms as well as approximate algorithms, some of which may have public implementations. (<a href="https://www.cs.princeton.edu/~wayne/papers/gain-scaling.pdf" rel="nofollow noreferrer">This older paper</a> by Tardos and Wayne mentions a C++ implementation.)</p> | 2018-05-01 20:59:13.890000+00:00 | 2018-05-01 20:59:13.890000+00:00 | null | null | 10,601,730 | <p>I'm trying to find an efficient, publically available algorithm, preferably with implementation, for solving maximum flow in a generalized (non-pure) network with gains.
All multipliers, capacities and flow values are non-zero integers. </p>
<p>Does such an algorithm exist, or is this problem not solvable in polynomial time?</p> | 2012-05-15 13:27:11.970000+00:00 | 2018-05-01 20:59:13.890000+00:00 | null | algorithm|data-structures|graph-theory|graph-algorithm|max-flow | ['https://arxiv.org/abs/1307.6809', 'https://arxiv.org/abs/1611.01778', 'https://www.cs.princeton.edu/~wayne/papers/gain-scaling.pdf'] | 3 |
50,042,382 | <p>You could also try entity embeddings to reduce hundreds of boolean features into vectors of small dimension.</p>
<p>It is similar to word embeddings for categorical features. In practical terms you define an embedding of your discrete space of features into a vector space of low dimension. It can enhance your results and save on memory. The downside is that you do need to train a neural network model to define the embedding beforehand.</p>
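<p>As a rough sketch of what that looks like in Keras (made-up sizes and random data; the learned vectors could then be fed to the tree model instead of a one-hot encoding):</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_items, embed_dim = 10_000, 8   # ~10,000 categories squeezed into 8 dimensions

item_id = layers.Input(shape=(1,), dtype="int32")
emb = layers.Embedding(input_dim=n_items, output_dim=embed_dim)(item_id)
x = layers.Flatten()(emb)
x = layers.Dense(32, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)   # e.g. "did the user interact?"

model = tf.keras.Model(item_id, out)
model.compile(optimizer="adam", loss="binary_crossentropy")

# random toy data, just to show the shapes
ids = np.random.randint(0, n_items, size=(256, 1))
labels = np.random.randint(0, 2, size=(256, 1))
model.fit(ids, labels, epochs=1, verbose=0)

# one 8-dimensional vector per category, usable as features elsewhere
item_vectors = model.get_layer(index=1).get_weights()[0]   # shape (10000, 8)
</code></pre>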
<p>Check <a href="https://arxiv.org/abs/1604.06737" rel="nofollow noreferrer">this article</a> for more information.</p> | 2018-04-26 11:56:19.247000+00:00 | 2018-04-26 11:56:19.247000+00:00 | null | null | 46,442,266 | <p>I have a question regarding random forests. Imagine that I have data on users interacting with items. The number of items is large, around 10 000. My output of the random forest should be the items that the user is likely to interact with (like a recommender system). For any user, I want to use a feature that describes the items that the user has interacted with in the past. However, mapping the categorical product feature as a one-hot encoding seems very memory inefficient as a user interacts with no more than a couple of hundred of the items at most, and sometimes as little as 5. </p>
<p>How would you go about constructing a random forest when one of the input features is a categorical variable with ~10 000 possible values and the output is a categorical variable with ~10 000 possible values? Should I use CatBoost with the features as categorical? Or should I use one-hot encoding, and if so, do you think XGBoost or CatBoost does better?</p> | 2017-09-27 07:49:44.237000+00:00 | 2018-04-26 11:56:19.247000+00:00 | null | machine-learning|random-forest|xgboost|categorical-data|catboost | ['https://arxiv.org/abs/1604.06737'] | 1 |
58,693,582 | <p>I agree with Horizon_Net that it depends on personal preference.
I like to have something which looks similar to LaTeX.
An example is provided below.
Note that it demonstrates a numeric citation style.
Alphabetic or reading style are possible too.
For numeric citation style, higher numbers should appear later in the text, and this can make satisfying numeric citation style cumbersome.
To avoid this problem, I typically use an alphabetic citation style.</p>
<pre><code>"...the **go to** statement should be abolished..." [[1]](#1).
## References
<a id="1">[1]</a>
Dijkstra, E. W. (1968).
Go to statement considered harmful.
Communications of the ACM, 11(3), 147-148.
</code></pre>
<blockquote>
<p>"...the <strong>go to</strong> statement should be abolished..." [1].</p>
<p><strong>References</strong></p>
<p>[1]
Dijkstra, E. W. (1968).
Go to statement considered harmful.
Communications of the ACM, 11(3), 147-148.</p>
</blockquote>
<p>On GitHub flavored Markdown and most other Markdown flavors, you can actually click on [1] to jump to the reference.
Apologies for taking Dijkstra's sentence out of context.
The full sentence would make this example more difficult to read.</p>
<p><strong>EDIT</strong>:
If the references all have a stable link, it is also possible to use those:</p>
<pre><code>The field of natural language processing (NLP) has become mostly dominated by deep learning approaches
(Young et al., [2018](https://doi.org/10.1109/MCI.2018.2840738)).
Some are based on transformer neural networks
(e.g., Devlin et al, [2018](https://arxiv.org/abs/1810.04805)).
</code></pre>
<blockquote>
<p>The field of natural language processing (NLP) has become mostly dominated by deep learning approaches
(Young et al., <a href="https://doi.org/10.1109/MCI.2018.2840738" rel="noreferrer">2018</a>).
Some are based on transformer neural networks
(e.g., Devlin et al, <a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">2018</a>).</p>
</blockquote> | 2019-11-04 12:35:23.317000+00:00 | 2021-10-23 09:21:46.253000+00:00 | 2021-10-23 09:21:46.253000+00:00 | null | 26,587,527 | <p>I am coding a readme for a repo in github, and I want to add a reference to a paper. What is the most adequate way to code in the citation? e.g. As a blockquote, as code, as simple text, etc?</p>
<p>Suggestions?</p> | 2014-10-27 12:23:46.800000+00:00 | 2021-10-23 20:07:49.147000+00:00 | null | github|github-flavored-markdown | ['https://doi.org/10.1109/MCI.2018.2840738', 'https://arxiv.org/abs/1810.04805'] | 2 |
38,518,260 | <p>For starters, <a href="https://en.wikipedia.org/wiki/Data_erasure#Limitations" rel="noreferrer">secure file deletion on flash media</a> is a complex problem, with no quick and easy answers. The paper <a href="https://www.usenix.org/legacy/events/fast11/tech/full_papers/Wei.pdf" rel="noreferrer">Reliably Erasing Data From Flash-Based Solid State Drives</a> gives a good overview of the problems, the potential solutions, and their limitations. They conclude that</p>
<blockquote>
<p>For sanitizing <em>entire disks</em>, ... software techniques work most, but not
all, of the time. We found that none of the available software
techniques for sanitizing <em>individual files</em> were effective. [emphasis added]</p>
</blockquote>
<p><a href="http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-88r1.pdf" rel="noreferrer">NIST 800-88</a> also has a good overview of the technology trends contributing to the problem, along with some minimum recommendations (appendix A) for Android devices. However they tend to be either whole-disk erasure (factory reset), or rely on cryptographic erasure (CE), rather than being general file erasure methods.</p>
<p>But all is not lost. Even if you can't sanitize individual files, you could hope to wipe all the unallocated space after deleting files. The article <a href="http://arxiv.org/PS_cache/arxiv/pdf/1106/1106.0917v1.pdf" rel="noreferrer">Secure Deletion on Log-structured File Systems</a> (Reardon, et al.) describes a fairly promising way to do that in user-mode software. Android's internal memory uses (always?) a log-structured file system. </p>
<p>This paper's "purging" method does not require kernel-level access, and doesn't seem to require any native code on Android. (Note that the term "purging" is used a little differently in documents like NIST 800-88.) The basic idea is to delete all the sensitive data, then fill in the remaining space on the drive with a junk data file, and finally delete the junk data file.</p>
<p>While that takes more time and effort than just overwriting the deleted files themselves (several times in different patterns), it seems to be very robust even when you have to deal with the possibility of wear-leveling and log-structure FS.</p>
<h1>Caveat and Further Measures</h1>
<p>The main caveat for me is about the conditions mentioned by Reardon et al. in the above paper:</p>
<blockquote>
<p>Purging will work for any log-structured file system provided both the
<strong>user’s disk quota is unlimited</strong> and the file system always performs
garbage collection <strong>to reclaim even a single chunk of memory</strong> before
declaring that the drive is unwritable. [emphasis mine]</p>
</blockquote>
<p>The second condition seems pretty likely to be fulfilled, but I don't know about the first one. Does Android (or some manufacturers' versions of it) enforce quotas on disk space used by apps? I have not found any info about user quotas, but there are quotas for other niches like browser persistent storage. Does Android reserve some space for system use, or for each app's caching, for example, that can't be used for other things? If so, it should help (albeit with no guarantees) if we begin purging immediately after the sensitive files are deleted, so there is little time for other filesystem activity to stake a claim to the recently freed space.</p>
<p>Maybe we could mitigate these risks by cyclical purging:</p>
<ul>
<li>Determine the remaining space available (call it S) on the relevant partition, e.g. using <code>File.getUsableSpace()</code></li>
<li>Write a series of files to the partition; each one is, say, 20% of the initial S (subject to file size limits).</li>
<li>When we run out of space, delete the first couple of files that we created, then write another file or two as space allows.</li>
<li>Repeat that last step a few times, until you've reached a threshold you're satisfied with. Maybe up to the point where you've written 2*S worth of filler files; tweak that number to balance speed against thoroughness. How much you actually need to do this would be an area for more research.</li>
<li>Delete the remaining filler files.</li>
</ul>
<p>The idea with cyclical purging is that if we run out of quota to overwrite all free space, deleting the filler files just written will free up more quota; and then the way log-structured filesystems allocate new blocks should allow us to continue overwriting the remaining blocks of free space in sequence, rather than rewriting the same space again.</p>
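<p>For illustration only, here is a rough sketch of that purging loop. It is written in Python for brevity (an Android app would do the same thing with <code>java.io.File</code> against the partition it wants to sanitize), and all sizes and file names are arbitrary assumptions; it is a sketch of the idea above, not a vetted secure-deletion tool.</p>
<pre><code>import os
import shutil
import uuid

def purge_free_space(target_dir, cycles=3, chunk_mb=64):
    # Overwrite free space on the filesystem holding target_dir by filling it
    # with junk files and then deleting them (the "purging" idea described above).
    chunk = chunk_mb * 1024 * 1024
    written = []
    try:
        for _ in range(cycles):
            # Fill until the filesystem refuses more data.
            while shutil.disk_usage(target_dir).free >= chunk:
                path = os.path.join(target_dir, "filler-" + uuid.uuid4().hex)
                try:
                    with open(path, "wb") as f:
                        f.write(os.urandom(chunk))
                        f.flush()
                        os.fsync(f.fileno())
                    written.append(path)
                except OSError:          # out of space or quota
                    if os.path.exists(path):
                        os.remove(path)
                    break
            # Free a little space, then loop again so newly released blocks
            # get overwritten too (the cyclical part).
            for path in written[:2]:
                os.remove(path)
            written = written[2:]
    finally:
        for path in written:
            os.remove(path)
</code></pre>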
<p>I'm implementing this method in a test app, and will post it when it's working.</p>
<h1>What about FAT-formatted microSD cards?</h1>
<p>Would the same methods work on external storage or microSD cards? FAT is block-structured, so would the purge method apply to FAT-formatted SD cards?</p>
<blockquote>
<p>On most contemporary flash memory devices, such as CompactFlash and
Secure Digital cards, [wear leveling] techniques are implemented in
hardware by a built-in microcontroller. On such devices, wear leveling
is transparent and most conventional file systems can be used on them
as-is. (<a href="https://en.wikipedia.org/wiki/Wear_leveling" rel="noreferrer">https://en.wikipedia.org/wiki/Wear_leveling</a>)</p>
</blockquote>
<p>...which suggests to me that even on a FAT-formatted SD card, wear leveling means that the traditional <a href="https://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html#epilogue" rel="noreferrer">Gutmann methods</a> would not work (see his "<a href="https://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html#epilogue" rel="noreferrer">Even Further Epilogue</a>") and that a method like "purging" would be necessary.</p>
<p>Whether purging is sufficient, depends on your security parameters. Wear leveling seems to imply that a block could potentially be "retired" at any time, in which case there is no way to erase it without bypassing the microcontroller's wear leveling. AFAIK this can't be done in software, even if you had kernel privileges; you'd have to design special hardware.</p>
<p>However, "retiring" a bad block should be a fairly rare event relative to the life of the media, so for many scenarios, a purging method would be secure enough.</p>
<h2>Erasing the traces</h2>
<p>Note that Gutmann's method has an important strength, namely, to erase <a href="https://en.wikipedia.org/wiki/Data_remanence" rel="noreferrer">possible traces of old data</a> on the storage media that could remain even after a block is overwritten with new data. These traces could theoretically be read by a determined attacker with lots of resources. A truly thorough approach to secure deletion would <em>augment</em> a method like Gutmann's with purging, rather than replacing it.</p>
<p>However, on log-structured and wear-leveled filesystems, the much bigger problem is trying to ensure that the sensitive blocks get overwritten at all.</p>
<h1>Do existing apps use these methods?</h1>
<p>I don't have any inside information about apps in the app store, but looking at reviews for apps like <a href="https://play.google.com/store/apps/details?id=com.projectstar.ishredder.android.standard&hl=en#details-reviews" rel="noreferrer">iShredder</a> would suggest that at best, they use methods like Reardon's "purging." For example, they can take <em>several hours</em> to do a single-pass wipe of 32GB of free space.</p>
<p>Also note limitations: The reviews on some of the secure deletion apps say that in some cases, the "deleted" files were still accessible after running the "secure delete" operation. Of course we take these reviews with a grain of salt -- there is a possibility of user error. Nevertheless, I wouldn't assume these apps are effective, without good testing.</p>
<p><a href="https://play.google.com/store/apps/details?id=com.protectstar.ishredder.ent" rel="noreferrer">iShredder 4 Enterprise</a> helpfully names some of the algorithms they use, in their app description:</p>
<blockquote>
<p>Depending on the edition, the iShredder™ package comes with deletion
algorithms such as DoD 5220.22-M E, US Air Force (AFSSI-5020), US Army
AR380-19, DoD 5220.22-M ECE, BSI/VS-ITR TL-03423 Standard,
BSI-VS-2011, NATO Standard, Gutmann, HMG InfoSec No.5, DoD 5220.22 SSD
and others.</p>
</blockquote>
<p>This impressive-sounding list gives us some pointers for further research. It's not clear how these methods are used -- singly or in combination -- and in particular whether any of them are represented as being effective on their own. We know that Gutmann's method would not be. Similarly, DoD 5220.22-M, AFSSI-5020, AR380-19, and Infosec No. 5 specify Gutmann-like procedures for overwriting sectors on hard drives, which would not be effective for flash-based media. In fact, "<a href="http://www.destructdata.com/dod-standard/" rel="noreferrer">The U.S. Department of Defense no longer references DoD 5220.22-M as a method for secure HDD erasure</a>", let alone for flash-based media, so this reference is misleading to the uninformed. (The DoD is said to reference <a href="http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-88r1.pdf" rel="noreferrer">NIST 800-88</a> instead.) "DoD 5220.22 SSD" sounds promising, but I can't find any informative references for it. I haven't chased down the other algorithms listed, but the results so far are not encouraging.</p> | 2016-07-22 04:37:42.957000+00:00 | 2016-07-28 18:54:45.733000+00:00 | 2016-07-28 18:54:45.733000+00:00 | null | 33,322,531 | <p>I found an Android app named <a href="https://play.google.com/store/apps/details?id=tk.hasankassem.supererase&hl=en">Super Erase</a> that deletes files and folders permanently from an Android device so that the deleted files can't be recovered anymore; that is the application I am talking about. I was wondering how to do that; I know it is made with Android Studio. I tried the regular way to delete, <code>file.delete()</code>, but the file can still be recovered. Can I have any help?</p>
36,277,340 | <blockquote>
<p>What is the difference between online mode and offline mode?
Why offline mode decreases its accuracy? Is there any solution with better accuracy?</p>
</blockquote>
<p>The offline mode is based on a model that has a file size of approx. 20.3MB; since no internet connection is needed, no data has to be sent or received. On top of that, this model does speech-to-text about 6.5-7x faster than the online version. Of note here is that this model has a word error rate of 13.5%, which, although not very high in absolute terms, is still fairly high, a consequence of the limited data and algorithms it has access to.</p>
<p>An online system would obviously have access to far more training data, and can run the audio through more sophisticated algorithms. I don't think the offline version should be considered a replacement, but rather a substitute for when the online version is not available. I have read articles where users have claimed that 'English US' works better than 'English UK', the reasons for which are not entirely known to me.</p>
<p>3G cannot carry voice and data at the same time. WiFi/4G does not have this issue. There are multiple other known issues like constraints from service providers, LTE/non-LTE, CDMA, etc. If you have such a constraint, one way could be to incorporate some design changes that let you cache the audio data and then send it to the online engine after the call is completed.</p>
<p>In my limited experience, for offline functionality, CMUSphinx seems like a better bet (since Google is limited to 50 calls a day(?)). A few other available API's are listed <a href="https://www.quora.com/What-are-the-top-ten-speech-recognition-APIs" rel="nofollow">here</a>.</p>
<p>The research paper that enabled offline speech-to-text is linked here [<a href="http://arxiv.org/pdf/1603.03185.pdf" rel="nofollow">link</a>].</p> | 2016-03-29 06:50:59.417000+00:00 | 2016-03-29 18:17:45.890000+00:00 | 2016-03-29 18:17:45.890000+00:00 | null | 36,277,050 | <p>I am working on Speech to text android application. Google API's are available for online and offline speech to text conversions.</p>
<p>I have done testing of speech to text with the Google APIs (online as well as the offline API). I have observed that online speech to text gives better accuracy compared to offline. Now my questions are: </p>
<ol>
<li>What is the difference between online mode and offline mode?</li>
<li>Why offline mode decreases its accuracy? Is there any solution with better accuracy?</li>
<li>When we receive a phone call, data connectivity gets lost. Is there any solution with which I can achieve both at the same time?</li>
</ol> | 2016-03-29 06:33:22.107000+00:00 | 2016-03-29 18:17:45.890000+00:00 | 2016-03-29 06:34:13.420000+00:00 | android|speech-to-text | ['https://www.quora.com/What-are-the-top-ten-speech-recognition-APIs', 'http://arxiv.org/pdf/1603.03185.pdf'] | 2 |
62,587,393 | <p>A promising approach would be metric embedding. In this paper: <a href="https://arxiv.org/pdf/2001.11692.pdf" rel="nofollow noreferrer">Convolutional Embedding for Edit Distance</a> the researchers state that the algorithm can accelerate the search by orders of magnitude. After training the metric embedding you can apply <a href="https://github.com/google-research/google-research/tree/master/scann" rel="nofollow noreferrer">approximate nearest neighbor</a> algorithms to find the k texts with the smallest distances.</p>
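<p>A minimal sketch of that retrieval stage, under the assumption that you already have some embedding function mapping a string to a fixed-length vector whose distances approximate edit distance (the cheap character-bigram counts below are only a stand-in for a trained model such as the one in the paper):</p>
<pre><code>import numpy as np
from sklearn.neighbors import NearestNeighbors

def embed(s, dims=256):
    # Stand-in embedding: hashed character-bigram counts.
    v = np.zeros(dims)
    for a, b in zip(s, s[1:]):
        v[hash(a + b) % dims] += 1.0
    return v

corpus = ["apple pie", "apple pies", "maple syrup", "people say"]
X = np.stack([embed(s) for s in corpus])

index = NearestNeighbors(n_neighbors=2, metric="euclidean").fit(X)
dist, idx = index.kneighbors(embed("apple pie").reshape(1, -1))
print([corpus[i] for i in idx[0]])   # closest corpus entries by embedding distance
</code></pre>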
<p>HTH.</p> | 2020-06-26 03:04:34.933000+00:00 | 2020-07-29 11:47:57.710000+00:00 | 2020-07-29 11:47:57.710000+00:00 | null | 62,275,915 | <p>I have a big corpus and I'm trying to find the most similar n-grams in the corpus. For that, I'm using <a href="https://kite.com/python/docs/difflib.get_close_matches" rel="nofollow noreferrer"><code>get_close_matches</code></a>.</p>
<p>The problem is that this procedure takes a lot of time. A friend suggested that I convert the n-grams to MD5 hashes and then calculate the distance on those. I suspect that it will work. Is the distance invariant under hashing? Can the distance calculation run efficiently on MD5-hashed strings?</p>
<p>Post scriptum, what is the most efficient way to calculate the distance between strings (like n-grams) in a large corpus?</p> | 2020-06-09 05:52:26.647000+00:00 | 2020-07-29 11:47:57.710000+00:00 | null | python|nlp|md5|corpus|edit-distance | ['https://arxiv.org/pdf/2001.11692.pdf', 'https://github.com/google-research/google-research/tree/master/scann'] | 2 |
44,944,485 | <p>Selu is not in your <code>activations.py</code> of keras (most likely because it was added Jun 14, 2017, only <a href="https://github.com/fchollet/keras/commit/21cf50734a6996da7023dc500bdcc8ac7d74ef48#diff-2a1c58fc96ce9cf0b88487249b21a319" rel="nofollow noreferrer">22 days</a> ago). You can just add the <a href="https://github.com/fchollet/keras/blob/master/keras/activations.py" rel="nofollow noreferrer">missing code</a> in the <code>activations.py</code> file or create your own <code>selu</code> activation in the script.</p>
<p><strong>Example code</strong></p>
<pre><code>from keras.activations import elu
def selu(x):
"""Scaled Exponential Linear Unit. (Klambauer et al., 2017)
# Arguments
x: A tensor or variable to compute the activation function for.
# References
- [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)
"""
alpha = 1.6732632423543772848170429916717
scale = 1.0507009873554804934193349852946
return scale * elu(x, alpha)
model.add(Dense(32, input_shape=(input_length - 1,), activation=selu))
</code></pre>
<hr>
<p>NOTE:</p>
<p>With tensorflow 2.0 keras is included. You can get the selu activation with:</p>
<pre><code>from tensorflow.keras.activations import selu
</code></pre> | 2017-07-06 09:08:16.960000+00:00 | 2019-12-17 10:23:37.610000+00:00 | 2019-12-17 10:23:37.610000+00:00 | null | 44,943,323 | <p>I'm using Keras with Tensorflow backend. When I'm trying to use the 'selu' activation function using: </p>
<pre><code>model.add(Dense(32, input_shape=(input_length - 1,)))
model.add(Activation('selu'))
</code></pre>
<p>The error I get is: </p>
<pre><code>ValueError: Unknown activation function:selu
</code></pre>
<p>Is there any solution to this?</p> | 2017-07-06 08:14:40.210000+00:00 | 2019-12-17 10:23:37.610000+00:00 | 2017-07-06 09:32:01.037000+00:00 | python|tensorflow|neural-network|keras | ['https://github.com/fchollet/keras/commit/21cf50734a6996da7023dc500bdcc8ac7d74ef48#diff-2a1c58fc96ce9cf0b88487249b21a319', 'https://github.com/fchollet/keras/blob/master/keras/activations.py'] | 2 |
61,308,793 | <p>There are three reasons to choose a batch size.</p>
<ol>
<li>Speed. If you are using a GPU then larger batches are often nearly as fast to process as smaller batches. That means individual cases are much faster, which means each epoch is faster too.</li>
<li>Regularization. Smaller batches add regularization, similar to increasing dropout, increasing the learning rate, or adding weight decay. Larger batches will reduce regularization.</li>
<li>Memory constraints. This one is a hard limit. At a certain point your GPU just won't be able to fit all the data in memory, and you can't increase batch size any more.</li>
</ol>
<p>That suggests that larger batch sizes are better until you run out of memory. Unless you are having trouble with overfitting, a larger and still-working batch size will (1) speed up training and (2) allow a larger learning rate, which also speeds up the training process.</p>
<p>That second point comes about because of regularization. If you increase batch size, the reduced regularization gives back some "regularization budget" to spend on an increased learning rate, which will add that regularization back.</p>
<hr>
<p>Regularization, by the way, is just a way to think about how noisy or smooth your training process is.</p>
<p>Low regularization means that training is very smooth, which means that it is easy for training to converge but also easy for training to overfit.</p>
<p>High regularization means that training is more noisy or difficult, but validation results are better because the noisy training process reduces overfitting and the resulting generalization error.</p>
<p>If you are familiar with the <a href="https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff" rel="nofollow noreferrer">Bias-Variance Tradeoff</a>, adding regularization is a way of adding a bit of bias in order to reduce the variance. Here is one of many good write ups on the subject: <a href="https://towardsdatascience.com/regularization-the-path-to-bias-variance-trade-off-b7a7088b4577" rel="nofollow noreferrer">Regularization: the path to bias-variance trade-off</a>.</p>
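<p>In Keras terms, the two knobs being traded off above look like the sketch below. It is a self-contained toy with made-up data, and the proportional learning-rate bump is a common heuristic I am assuming here, not something prescribed by the papers below.</p>
<pre><code>import numpy as np
import tensorflow as tf

x = np.random.rand(1024, 20).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1))

base_batch, base_lr = 32, 1e-3
batch_size = 128                         # larger batch: faster epochs, less regularization
lr = base_lr * batch_size / base_batch   # spend some of the freed "budget" on a larger lr

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, batch_size=batch_size, epochs=5, verbose=0)
</code></pre>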
<hr>
<p>On the broader topic of regularization, training schedules, and hyper-parameter tuning, I highly recommend two papers on the subject by Leslie N. Smith.</p>
<ul>
<li><a href="https://arxiv.org/abs/1708.07120" rel="nofollow noreferrer">Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates</a></li>
<li><a href="https://arxiv.org/abs/1803.09820" rel="nofollow noreferrer">A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay</a></li>
</ul>
<p>The first paper, on Super-Convergence, will also address some of your questions on how many epochs to use.</p>
<hr>
<p>After that there are no correct answers for how many epochs to use, only guidance. What I do is:</p>
<ul>
<li>Keep the training schedule as fast as possible for as long as possible while you are working on the model. Faster training means can try more ideas and tune your hyper-parameters more finely.</li>
<li>When you are ready to fine-tune results for some reason (submitting to Kaggle, deploying a model to production) then you can increase epochs and do some final hyper-parameter tuning until validation results stop improving "enough", where "enough" is a combination of your patience and the need for better results.</li>
</ul> | 2020-04-19 17:34:49.967000+00:00 | 2020-04-19 19:25:32.030000+00:00 | 2020-04-19 19:25:32.030000+00:00 | null | 61,304,854 | <p>I know there are a number of related questions but I was hoping someone could provide some advice specific to the model I am trying to build. </p>
<p>It is an image classification model. At the moment I am trying to classify 40 different classes (40 different types of animals). Within each class there are between 120 and 220 images. My training set is 4708 images and my validation set is 2512 images. </p>
<p>I ran a sequential model (code below) where I used a batch size of 64 and 30 epochs. The code took a long time to run. The accuracy after 30 epochs was about 67 on the validation set and about 70 on the training set. The loss on the validation set was about 1.2 and about 1 on the training set (I have included the last 12 epoch results below). It appears to be tapering off after about 25 epochs.</p>
<p>My questions are around batch size and epochs. Is there value to using larger or smaller batch sizes (than 64), and should I be using more epochs? I read that generally between 50 and 100 epochs are common practice, but if my results are tapering off after 25, is there value to adding more? </p>
<p>Model</p>
<pre><code>history = model.fit_generator(
train_data_gen,
steps_per_epoch= 4708 // batch_size,
epochs=30,
validation_data=val_data_gen,
validation_steps= 2512 // batch_size
)
</code></pre>
<p>Results</p>
<pre><code>Epoch 18/30
73/73 [==============================] - 416s 6s/step - loss: 1.0982 - accuracy: 0.6843 - val_loss: 1.3010 - val_accuracy: 0.6418
Epoch 19/30
73/73 [==============================] - 414s 6s/step - loss: 1.1215 - accuracy: 0.6712 - val_loss: 1.2761 - val_accuracy: 0.6454
Epoch 20/30
73/73 [==============================] - 414s 6s/step - loss: 1.0848 - accuracy: 0.6809 - val_loss: 1.2918 - val_accuracy: 0.6442
Epoch 21/30
73/73 [==============================] - 413s 6s/step - loss: 1.0276 - accuracy: 0.7013 - val_loss: 1.2581 - val_accuracy: 0.6430
Epoch 22/30
73/73 [==============================] - 415s 6s/step - loss: 1.0985 - accuracy: 0.6854 - val_loss: 1.2626 - val_accuracy: 0.6575
Epoch 23/30
73/73 [==============================] - 413s 6s/step - loss: 1.0621 - accuracy: 0.6949 - val_loss: 1.3168 - val_accuracy: 0.6346
Epoch 24/30
73/73 [==============================] - 415s 6s/step - loss: 1.0718 - accuracy: 0.6869 - val_loss: 1.1658 - val_accuracy: 0.6755
Epoch 25/30
73/73 [==============================] - 419s 6s/step - loss: 1.0368 - accuracy: 0.6957 - val_loss: 1.1962 - val_accuracy: 0.6739
Epoch 26/30
73/73 [==============================] - 419s 6s/step - loss: 1.0231 - accuracy: 0.7067 - val_loss: 1.3491 - val_accuracy: 0.6426
Epoch 27/30
73/73 [==============================] - 434s 6s/step - loss: 1.0520 - accuracy: 0.6919 - val_loss: 1.2039 - val_accuracy: 0.6683
Epoch 28/30
73/73 [==============================] - 417s 6s/step - loss: 0.9810 - accuracy: 0.7151 - val_loss: 1.2047 - val_accuracy: 0.6711
Epoch 29/30
73/73 [==============================] - 436s 6s/step - loss: 0.9915 - accuracy: 0.7140 - val_loss: 1.1737 - val_accuracy: 0.6711
Epoch 30/30
73/73 [==============================] - 424s 6s/step - loss: 1.0006 - accuracy: 0.7087 - val_loss: 1.2213 - val_accuracy: 0.6619
</code></pre> | 2020-04-19 13:08:42.613000+00:00 | 2020-05-28 12:38:20.670000+00:00 | 2020-04-19 13:16:05.603000+00:00 | python|tensorflow|machine-learning|keras|neural-network | ['https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff', 'https://towardsdatascience.com/regularization-the-path-to-bias-variance-trade-off-b7a7088b4577', 'https://arxiv.org/abs/1708.07120', 'https://arxiv.org/abs/1803.09820'] | 4 |
42,393,166 | <h1>Declarative wording</h1>
<p>Almost always, when a Prolog task is formulated in a rather <em>imperative</em> way, the solution will be comparatively limited. This means that we typically can only use it in a few modes and directions, while other modes may even yield wrong results.</p>
<p>Therefore, I suggest to use more <em>declarative</em> wording.</p>
<p>You say:</p>
<blockquote>
<p>a predicate that <strong>takes</strong> a list and <strong>succeeds</strong> if the list <strong>contains</strong> elements "a, b, c" in that order anywhere in the list, otherwise it <strong>fails</strong>.</p>
</blockquote>
<p>That's a rather <em>procedural</em> way to look at this. Note that in Prolog, any argument can also be a logical <em>variable</em>, and thus there may not even be a list to "take". Instead, we expect the predicate to <strong>generate</strong> such lists in these cases!</p>
<p>Watch your wording! Very often, when you are able to express the task <em>declaratively</em>, an elegant and general Prolog solution will be straight-forward and often follows quite naturally from the task description.</p>
<h1>Describing solutions</h1>
<p>First, let us focus on <strong>what holds</strong>. There is no need to express what <em>doesn't hold</em>, because the predicate <em>will not succeed</em> anyways in such cases.</p>
<p>What do we want to <em>describe</em>?</p>
<p>Essentially, we want to <em>describe</em> lists of the form <code>[...,a,b,c,...]</code>.</p>
<p>There are already some answers, with various drawbacks.</p>
<p>A <strong>pure</strong> way to do it uses the meta-predicate <code>if_/3</code> from <a href="https://arxiv.org/abs/1607.01590" rel="nofollow noreferrer"><em>Indexing dif/2</em></a>:</p>
<pre>
abc([X,Y,Z|Vs]) :-
if_((X=a,Y=b,Z=c), true, abc([Y,Z|Vs])).
</pre>
<h1>Generality</h1>
<p>This works in all directions. First, let us try the <strong>most general</strong> query, where the single argument is a fresh <em>variable</em>:</p>
<pre>
<b>?- abc(Vs).</b>
Vs = [a, b, c|_5032] ;
Vs = [a, b, a, b, c|_5144] ;
Vs = [a, b, a, b, a, b, c|_5286] .
</pre>
<p>Thus, we can <em>generate</em> solutions, which is a very nice property of a relation!</p>
<p>The predicate is <em>monotonic</em>, and therefore <em>iterative deepening</em> is possible to <em>fairly</em> enumerate answers:</p>
<pre>
<b>?- length(Vs, _), abc(Vs).</b>
Vs = [a, b, c] ;
Vs = [a, b, c, _11600] ;
Vs = [a, a, b, c] ;
Vs = [_11982, a, b, c],
dif(_11982, a) ;
Vs = [a, b, c, _11600, _11606] .
</pre>
<p>From this, it follows that there are <em>no solutions</em> with less than 3 elements. In this case, that's quite obvious. In other cases, such results may be much less obvious from the task description.</p>
<h1>Efficiency</h1>
<p>The predicate is <strong>deterministic</strong> if its argument is sufficiently instantiated.</p>
<p>For example:</p>
<pre>
?- abc([a,b,c]).
<b>true.</b>
?- abc([z,a,b,c]).
<b>true.</b>
?- abc([a,b,c,z]).
<b>true.</b>
</pre>
<p>Note that <em>no choice points</em> remain in these cases!</p> | 2017-02-22 13:42:54.173000+00:00 | 2017-02-22 13:42:54.173000+00:00 | null | null | 42,380,563 | <p>I have to write a predicate that takes a List and succeeds if the list contains elements "a, b, c"in that order anywhere in the list, other wise it fails. I am pretty lost on where to start(not looking for a solution, just a hint to the right direction).</p> | 2017-02-22 00:30:12.463000+00:00 | 2019-02-01 19:02:54.667000+00:00 | 2019-02-01 19:02:54.667000+00:00 | list|prolog | ['https://arxiv.org/abs/1607.01590'] | 1 |
56,302,042 | <p>As per this <a href="https://arxiv.org/pdf/1404.2188.pdf" rel="noreferrer">paper</a>, k-Max Pooling is a pooling operation that is a generalisation of the max pooling over the time dimension used in the Max-TDNN sentence model
and different from the local max pooling operations applied in a convolutional network for object recognition (LeCun et al., 1998).</p>
<p><a href="https://i.stack.imgur.com/oshJY.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/oshJY.jpg" alt="enter image description here"></a></p>
<p>The k-max pooling operation makes it possible to pool the k most active features in p that may be a number of positions apart; it preserves the order of the features, but is insensitive to their specific positions.</p>
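<p>A minimal sketch of the operation in TensorFlow/Keras terms (my own illustration, not code from the paper): take the top-k values along the time axis and re-sort them into their original temporal order, so that feature order is preserved as described above.</p>
<pre><code>import tensorflow as tf

class KMaxPooling(tf.keras.layers.Layer):
    def __init__(self, k=3, **kwargs):
        super().__init__(**kwargs)
        self.k = k

    def call(self, inputs):
        # inputs: (batch, time, channels) -> (batch, k, channels)
        perm = tf.transpose(inputs, [0, 2, 1])        # (batch, channels, time)
        idx = tf.math.top_k(perm, k=self.k).indices   # k most active positions
        idx = tf.sort(idx, axis=-1)                   # keep original temporal order
        out = tf.gather(perm, idx, batch_dims=2)
        return tf.transpose(out, [0, 2, 1])

x = tf.random.normal((8, 50, 16))        # e.g. 50 time steps, 16 feature maps
print(KMaxPooling(k=5)(x).shape)         # (8, 5, 16)
</code></pre>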
<p>There are a few resources which show how to implement it in TensorFlow or Keras:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/51299181/how-to-implement-k-max-pooling-in-tensorflow-or-keras">How to implement K-Max pooling in Tensorflow or Keras?</a></li>
<li><a href="https://github.com/keras-team/keras/issues/373" rel="noreferrer">https://github.com/keras-team/keras/issues/373</a></li>
<li><a href="https://homes.cs.washington.edu/~yjzhang/notebooks/2016/2016-3-10%20-%20New%20Pooling%20Layers.html" rel="noreferrer">New Pooling Layers For Varying-Length Convolutional Networks</a></li>
</ul> | 2019-05-25 05:35:33.470000+00:00 | 2019-05-25 05:35:33.470000+00:00 | null | null | 56,300,553 | <p>I have to add a k-max pooling layer in CNN model to detect fake reviews. Please can you let me know how to implement it using keras.</p>
<p>I searched the internet but I got no good resources.</p> | 2019-05-24 23:40:59.750000+00:00 | 2021-03-30 03:56:02.577000+00:00 | 2019-09-09 22:29:57.783000+00:00 | machine-learning|keras|deep-learning|max-pooling | ['https://arxiv.org/pdf/1404.2188.pdf', 'https://i.stack.imgur.com/oshJY.jpg', 'https://stackoverflow.com/questions/51299181/how-to-implement-k-max-pooling-in-tensorflow-or-keras', 'https://github.com/keras-team/keras/issues/373', 'https://homes.cs.washington.edu/~yjzhang/notebooks/2016/2016-3-10%20-%20New%20Pooling%20Layers.html'] | 5 |
439,919 | <p>I think what is important in cryptography is not the primes themselves, but the <strong>difficulty</strong> of the <em>prime factorization problem</em>.</p>
<p>Suppose you have a very, very large integer which is known to be the product of two primes m and n; it is not easy to find out what m and n are. Algorithms such as RSA depend on this fact.</p>
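<p>A toy illustration of that asymmetry (this is my own sketch, not real cryptography: the primes here are tiny and the factoring method is the naive one):</p>
<pre><code>import time
from sympy import randprime

p = randprime(10**6, 10**7)
q = randprime(10**6, 10**7)
n = p * q                      # multiplying the primes is instantaneous

def trial_factor(n):
    # naive trial division, roughly O(sqrt(n)) work
    if n % 2 == 0:
        return 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return i
        i += 2
    return n

start = time.time()
f = trial_factor(n)
print(f, n // f, "recovered in %.2f s" % (time.time() - start))
</code></pre>
<p>Even at this toy scale the factoring step is noticeably slower than the multiplication, and real RSA moduli are thousands of bits long, where even the best known classical algorithms become infeasible.</p>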
<p>By the way, there is a <a href="http://arxiv.org/abs/quant-ph/9508027" rel="nofollow noreferrer">published paper</a> on an algorithm which can "solve" this prime factorization problem in acceptable time using a quantum computer. So newer algorithms in cryptography may no longer rely on this "difficulty" of prime factorization once quantum computers come to town :)</p> | 2009-01-13 17:27:13.950000+00:00 | 2009-01-13 17:27:13.950000+00:00 | null | null | 439,870 | <p>One thing that always strikes me as a non-cryptographer: Why is it so important to use prime numbers? What makes them so special in cryptography?</p>
<p>Does anyone have a <em>simple</em> short explanation? (I am aware that there are many primers and that Applied Cryptography is the Bible, but as said: I am not looking to implement my own cryptographic algorithm, and the stuff that I found just made my brain explode - no ten pages of math formulas please).</p> | 2009-01-13 17:12:17.663000+00:00 | 2022-04-13 16:18:39.483000+00:00 | 2022-04-13 16:18:39.483000+00:00 | cryptography|primes | ['http://arxiv.org/abs/quant-ph/9508027'] | 1 |
62,985,555 | <p><strong>Some of the preferred ways to improve <em>detection time</em> with an already trained Yolov3 model are:</strong></p>
<ul>
<li>Quantisation: Run inference with INT8 instead of FP32. You can use this <a href="https://github.com/AlexeyAB/yolo2_light" rel="nofollow noreferrer">repo</a> for this purpose.</li>
<li>Use an inference accelerator such as <a href="https://developer.nvidia.com/tensorrt" rel="nofollow noreferrer">TensorRT</a> since you're using Nvidia's GPU. The tool includes a good amount of inference-oriented optimisations along with quantisation optimisations (INT8 and FP16) to reduce detection time; a minimal reduced-precision sketch in plain OpenCV follows this list. This <a href="https://forums.developer.nvidia.com/t/yolov3-with-tensorrt-5/68705" rel="nofollow noreferrer">thread</a> talks about Yolov3 inference with TensorRT5. Use <a href="https://github.com/wang-xinyu/tensorrtx" rel="nofollow noreferrer">this</a> repo for Yolov3 on TensorRT7.</li>
<li>Use inference library such as <a href="https://github.com/ceccocats/tkDNN" rel="nofollow noreferrer">tkDNN</a>, which is a Deep Neural Network library built with cuDNN and tensorRT primitives, specifically thought to work on NVIDIA Jetson Boards.</li>
</ul>
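<p>Since the question already runs inference through OpenCV's DNN module, a related low-effort check is making sure the CUDA backend is selected and trying the reduced-precision CUDA target. The sketch below uses the standard OpenCV 4.2+ API; the file names are placeholders, and note that <code>DNN_TARGET_CUDA_FP16</code> only pays off on GPUs with fast native FP16, so on some cards the plain <code>DNN_TARGET_CUDA</code> target may be the faster choice.</p>
<pre><code>import cv2

image = cv2.imread("tile_416.jpg")                    # placeholder input tile
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)

# Try both targets and keep whichever is faster on your GPU:
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)      # full precision
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)  # half precision

blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
</code></pre>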
<p><strong>If you're open to doing the model training, there are a few more options besides the ones mentioned above:</strong></p>
<ul>
<li>You can train the models with tinier versions rather than full Yolo versions, of course this comes at the cost of drop in accuracy/mAP. You can train <a href="https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov4-tiny-custom.cfg" rel="nofollow noreferrer">tiny-yolov4</a> (latest model) or train <a href="https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov3-tiny_obj.cfg" rel="nofollow noreferrer">tiny-yolov3</a>.</li>
<li>Model Pruning - If you could rank the neurons in the network according to how much they contribute, you could then remove the low ranking neurons from the network, resulting in a smaller and faster network. Pruned yolov3 research <a href="https://arxiv.org/abs/1907.11093v1" rel="nofollow noreferrer">paper</a> and it's <a href="https://github.com/PengyiZhang/SlimYOLOv3" rel="nofollow noreferrer">implementation</a>. <a href="https://github.com/Lam1360/YOLOv3-model-pruning" rel="nofollow noreferrer">This</a> is another pruned Yolov3 implementation.</li>
</ul> | 2020-07-19 20:55:18.180000+00:00 | 2020-07-19 20:55:18.180000+00:00 | null | null | 62,796,683 | <p>I'm using YOLOv3 custom trained model with OpenCV 4.2.0 compiled with CUDA. When I'm testing code in Python I'm using OpenCV on GPU (GTX1050 Ti) but detection on single image (416px x 416px) takes 0.055 s (~20 FPS). My config file is set to small object detection, because I need to detect ~ 10px x 10px objects on 2500px x 2000px images so I split original image into 30 smaller pieces. My goal is to reach 0.013 s (~80 FPS) on 416px x 416px image. Is it possible in Python with OpenCV? If not, how to do it in proper way?</p>
<p>PS. Currently detection takes like 50% of CPU, 5GB RAM and 6% GPU.</p> | 2020-07-08 14:06:46.223000+00:00 | 2020-07-21 19:40:13.790000+00:00 | null | python|opencv|darknet | ['https://github.com/AlexeyAB/yolo2_light', 'https://developer.nvidia.com/tensorrt', 'https://forums.developer.nvidia.com/t/yolov3-with-tensorrt-5/68705', 'https://github.com/wang-xinyu/tensorrtx', 'https://github.com/ceccocats/tkDNN', 'https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov4-tiny-custom.cfg', 'https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov3-tiny_obj.cfg', 'https://arxiv.org/abs/1907.11093v1', 'https://github.com/PengyiZhang/SlimYOLOv3', 'https://github.com/Lam1360/YOLOv3-model-pruning'] | 10 |
38,185,069 | <p>To cut a long story short: using the "conventional" graph cut formulation, the pair-wise term is used to encourage smoothness. It can be tuned based on local gradients to be weaker across image boundary, but as you correctly noted, it is always non-negative, thus, it never encourages changing of label (unless strong evidence exists in the per-pixel term).</p>
<p>The requirement for non-negative pair-wise terms is called "sub-modularity", and only for sub-modular energies does a polynomial-time solution exist (exact for the 2-labels case, a good approximation for multiple labels).<br>
You can read more about the "sub-modularity" in this context at <a href="http://www.cs.cornell.edu/~rdz/papers/kz-pami04.pdf" rel="nofollow noreferrer"><em>Kolmogorov and Zabih</em>, <strong>What Energy Functions can be Minimized via Graph Cuts?</strong>, PAMI 2004</a>.</p>
<p>However, you can define non sub-modular weights for the pairwise terms across image boundaries. You will no longer be able to use the "conventional" GCMex implementation; however, there are some nice approximation algorithms that can give not unreasonable results even in the shaky realm of non sub-modular energies.<br>
The first approximation to explore, for the two-labels case, is <a href="https://github.com/shaibagon/large_scale_cc/tree/master/QPBO-v1.3.src" rel="nofollow noreferrer">QPBO</a>, described at <a href="http://pub.ist.ac.at/~vnk/papers/KR-PAMI07.pdf" rel="nofollow noreferrer"><em>Kolmogorov and Rother</em>, <strong>Minimizing non-submodular functions with graph cuts - a review</strong>, PAMI 2007</a>.<br>
The next step is a multi-label, <a href="https://github.com/shaibagon/discrete_multiscale" rel="nofollow noreferrer">multi-scale approximation</a>, described at <a href="http://arxiv.org/abs/1204.4867" rel="nofollow noreferrer"><em>Bagon and Galun</em>, <strong>A Multiscale Framework for Challenging Discrete Optimization</strong>, NIPS 2012</a>.</p>
<p>Last but not least, if you only have pair-wise interactions (positive <strong>and</strong> negative) and no clear per-pixel terms, you might want to consider segmentation using <a href="https://stackoverflow.com/a/19510366/1714410">Correlation Clustering</a> functional.</p> | 2016-07-04 12:42:14.587000+00:00 | 2016-07-04 12:42:14.587000+00:00 | 2017-05-23 12:01:59.317000+00:00 | null | 37,277,301 | <p>Up to now I have been segmenting some grayscale images using either Otsu's method or K-means. I noticed however that the segmentation cannot be "perfect" only based on the intensity. At least one would have to consider local gradients in the image.</p>
<p>I then did some research and came across the graph cut segmentation algorithm. I thought this algorithm would be beneficial since</p>
<ol>
<li>The relationship between neighboring pixels is considered</li>
<li>I could incorporate prior knowledge if I know of pixels that belong to a certain class beforehand.</li>
</ol>
<p>I now have been doing some tests using <a href="https://github.com/shaibagon/GCMex" rel="nofollow">Shai's MATLAB Graph Cut wrapper</a> and noticed that Graph Cut does not seem as beneficial as I thought. Based on the gradient I can reduce the penalty for a class border, but I can't <em>encourage</em> the algorithm to draw boundaries at edges - if a boundary is not present in the initialization via K-Means/Otsus (to create Dc) the algorithm won't draw one even though there might be a strong local edge. I think this is due to the fact that costs need to be positive.
Hence it seems that Graph-Cut here only helps to smooth boundaries but won't help me to introduce "new ones".</p>
<p>Long story short: Does my story above sound plausible to you, i.e. do my conclusions make sense? Or, is there a way to use edges to create boundaries?</p>
<p>Thanks!</p>
<p>PS. Sorry, I can't show a real image I am talking about here :(</p> | 2016-05-17 13:16:44.033000+00:00 | 2016-07-04 12:43:38.587000+00:00 | 2016-07-04 12:43:38.587000+00:00 | matlab|graph|image-segmentation | ['http://www.cs.cornell.edu/~rdz/papers/kz-pami04.pdf', 'https://github.com/shaibagon/large_scale_cc/tree/master/QPBO-v1.3.src', 'http://pub.ist.ac.at/~vnk/papers/KR-PAMI07.pdf', 'https://github.com/shaibagon/discrete_multiscale', 'http://arxiv.org/abs/1204.4867', 'https://stackoverflow.com/a/19510366/1714410'] | 6 |
39,421,649 | <p>First you claim that "<em>most languages are context-free with some exceptions</em>", this is not totally true. When designing a computer language, we mostly try to keep it as context-free as possible, since CFGs are the de-facto standard for that. It will ease a lot of work. This is not always feasible, though, and a lot<sup>[?]</sup> of languages depend on the semantic analysis phase to disambiguate any possible ambiguities.</p>
<p>Parser combinators usually do not use a formal model; PEGs, on the other hand, are a formalism for grammars, as are CFGs. Over the last decade a few people have decided to use PEGs over CFGs due to two facts: PEGs are, by design, unambiguous, and they can always be parsed in linear time. A parser combinator library <em>might</em> use PEGs as the underlying formalism, but might as well use CFGs or even none.</p>
<p>PEGs are attractive for designing computer languages because we usually do not want to handle ambiguities, which is something hard (or even impossible) to avoid when using CFGs. And, because of that, they can be parsed in O(n) time by using dynamic programming (the so-called packrat parser). <a href="https://arxiv.org/pdf/1304.3177.pdf" rel="nofollow">It's not simple to "add ambiguities to them" for a few reasons, most importantly because the language they recognize depends on the fact that the options are deterministic, which is used for example when checking for lookahead</a>. It isn't as simple as "just picking the first choice". For example, you could define a PEG:</p>
<pre><code>S = "a" S "a" / "aa"
</code></pre>
<p>This only parses <em>sequences of N "a", where N is a power of 2</em>. So it recognizes sequences of 2, 4, 8, 16, 32, 64, etc. letters "a". By adding ambiguity, as a CFG would have, you would recognize any even number of "a" (2, 4, 6, 8, 10, etc.), <strong>which is a different language</strong>.</p>
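<p>You can check that behaviour with a small hand-written matcher for exactly this grammar (my own sketch of PEG ordered choice with memoisation, not a full PEG library):</p>
<pre><code>import functools

def make_matcher(text):
    # S = "a" S "a" / "aa", with PEG ordered choice and packrat-style memoisation.
    @functools.lru_cache(maxsize=None)
    def S(i):
        # first alternative: "a" S "a"
        if i < len(text) and text[i] == "a":
            j = S(i + 1)
            if j is not None and j < len(text) and text[j] == "a":
                return j + 1
        # second alternative: "aa"
        if text[i:i + 2] == "aa":
            return i + 2
        return None
    return S

for n in range(2, 20, 2):
    parse = make_matcher("a" * n)
    print(n, parse(0) == n)   # True only for n = 2, 4, 8, 16, ...
</code></pre>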
<p>To answer your question,</p>
<blockquote>
<p>How would you go about solving it? Is it possible to add ambiguating combinator to PEG?</p>
</blockquote>
<p>First I must say that this is probably not a good idea. If you wish to keep ambiguity on the AST, you probably should use a CFG parser instead.</p>
<p>One could, for example, make a parser for PEGs which is similar to a parser for <a href="https://en.wikipedia.org/wiki/Boolean_grammar" rel="nofollow">boolean grammars</a>, but then our asymptotic parsing time would grow from O(n) to O(n<sup>3</sup>) by keeping all alternatives alive while keeping the same language. And we actually lose both good things about PEGs at once.</p>
<p>Another way would be to keep a packrat parser in memory, and traverse its table to handle the semantics from the AST. Not really a good idea either, since this would imply a large memory footprint.</p>
<p>Ideally, one should build an AST which already has information regarding possible ambiguities by changing the grammar structure. While this requires manual work, and usually isn't simple, you wouldn't have to go back a phase to check the grammar again.</p> | 2016-09-10 01:31:29.220000+00:00 | 2016-09-10 01:31:29.220000+00:00 | null | null | 37,883,261 | <p>As far as I understand, most languages are context-free with some exceptions. For instance, <code>a * b</code> may stand for <code>type * pointer_declaration</code> or multiplication in C++. Which one takes place depends on the context, the meaning of the first identifier. Another example is <code>name</code> production in VHDL</p>
<pre><code>enum_literal ::= char_literal | identifer
physical_literal ::= [num] unit_identifier
func_call ::= func_identifier [parenthized_args]
array_indexing ::= arr_name (index_expr)
name ::= func_call | physical_literal | enum_litral | array_indexing
</code></pre>
<p>You see that syntactic forms are different but they can match if optional parameters are omitted, like <code>f</code>, does it stand for func_call, physical_literal, like 1 meter with optional amount 1 is implied, or enum_literal. </p>
<p>Talking to Scala plugin designers, I was educated to know that you build AST to re-evaluate it when dependencies change. There is no need to re-parse the file if you have its AST. AST also worth to display the file contents. But, AST is invalidated if grammar is context-sensitive (suppose that <code>f</code> was a function, defined in another file, but later user requalified it into enum literal or undefined). AST changes in this case. AST changes on whenever you change the dependencies. Another option, that I am asking to evaluate and let me know how to make it, is to build an ambiguous AST.</p>
<p>As far as I know, parser combinators are of <a href="http://blog.reverberate.org/2013/09/ll-and-lr-in-context-why-parsing-tools.htm" rel="nofollow">PEG kind. They hide the ambiguity by returning you the first matched production</a> and <code>f</code> would match a function call because it is the first alternative in my grammar. I am asking for a combinator that instead of falling back on the first success, it proceeds to the next alternative. In the end, it would return me a list of all matching alternatives. It would return me an ambiguity. </p>
<p>I do not know how you would display the ambiguous file contents tree to the user, but it would eliminate the need to re-parse the dependent files. I would also be happy to know how modern language designs solve this problem.</p>
<p>Once ambiguous node is parsed and ambiguity of results is returned, I would like the parser to converge because I would like to proceed parsing beyond the <code>name</code> and I do not want to parse to the end of file after every ambiguity. The situation is complicated by situations like <code>f(10)</code>, which can be a function call with a single argument or a nullary function call, which return an array, which is indexed afterwards. So, f(10) can match name two ways, either as <code>func_call</code> directly or recursively, as <code>arr_indexing -> name ~ (expr)</code>. So, it won't be ambiguity like several parallel rules, like <code>fcall | literal</code>. Some branches may be longer than 1 parser before re-converging, like <code>fcall ~ (expr) | fcall</code>.</p>
<p>How would you go about solving it? Is it possible to add ambiguating combinator to PEG?</p> | 2016-06-17 13:46:06.033000+00:00 | 2016-09-10 01:31:29.220000+00:00 | 2016-08-19 07:22:04.763000+00:00 | parsing|parser-combinators|peg|context-sensitive-grammar | ['https://arxiv.org/pdf/1304.3177.pdf', 'https://en.wikipedia.org/wiki/Boolean_grammar'] | 2 |
43,874,772 | <p>I am a little confused by your environment. I am assuming that your problem is not flappy bird, and you are trying to port over code from flappy bird into your own environment. So even though I don't know your environment or your code, I still think there is enough to answer some potential issues to get you on the right track. </p>
<p>First, you mention the three models that you have tried. Of course, picking the right function approximation is very important for generalized reinforcement learning, but there are so many more hyper-parameters that could be important in solving your problem. For example, there is the gamma, learning rate, exploration and exploration decay rate, replay memory length in certain cases, batch size of training, etc. The fact that your Q-value is not changing in a state that you believe should in fact change leads me to believe that limited exploration is being done for models one and two. In the code example, epsilon starts at .1, maybe try different values there up to 1. Also that will require messing with the decay rate of the exploration rate as well. If your q values are shooting up drastically across episodes, I would also look at the learning rate as well (although in the code sample, it looks pretty small). On the same note, gamma can be extremely important. If it is too small, your learner will be myopic.</p>
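<p>As a concrete illustration of the exploration point, a simple epsilon schedule looks like the sketch below. The numbers are placeholders to experiment with, not recommended values:</p>
<pre><code>epsilon, epsilon_min, decay = 1.0, 0.05, 0.995   # start fully exploratory

for episode in range(1000):
    # inside the episode: with probability epsilon pick a random action,
    # otherwise pick argmax Q(s, a)
    epsilon = max(epsilon_min, epsilon * decay)   # slowly shift to exploitation
</code></pre>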
<p>You also mention you have 400 output nodes. Does your environment have 400 actions? Large action spaces also come with their own set of challenges. Here is a good white paper to look at if indeed you do have 400 actions <a href="https://arxiv.org/pdf/1512.07679.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1512.07679.pdf</a>. If you do not have 400 actions, something is wrong with your network structure. You should treat each of the output nodes as a probability of which action to select. For example, in the code example you posted, they have two actions and use relu.</p>
<p>Getting the parameters of deep q learning right is very difficult, especially when you account for how slow training is. </p> | 2017-05-09 16:05:35.650000+00:00 | 2017-05-09 16:05:35.650000+00:00 | null | null | 43,451,252 | <p>I'm experimenting with deep q learning using <code>Keras</code> , and i want to teach an agent to perform a task .</p>
<p>In my problem I want to teach an agent to avoid hitting objects in its path by changing its speed (accelerate or decelerate).</p>
<p>The agent is moving horizontally and the objects to avoid are moving vertically, and I want it to learn to change its speed in a way that avoids hitting them.
I based my code on this: <a href="https://github.com/yanpanlau/Keras-FlappyBird" rel="nofollow noreferrer">Keras-FlappyBird</a></p>
<p>I tried 3 different models (I'm not using a convolutional network):</p>
<ol>
<li><p>a model with 10 dense hidden layers with the sigmoid activation function, with 400 output nodes</p></li>
<li><p>a model with 10 dense hidden layers with the <code>Leaky ReLU</code> activation function</p></li>
<li>a model with 10 dense hidden layers with the <code>ReLU</code> activation function, with 400 output nodes</li>
</ol>
<p>I feed the network the coordinates and speeds of all the objects in my world.</p>
<p>I trained it for 1 million frames but still can't see any result.
Here are my q-value plots for the 3 models:</p>
<p><strong>Model 1 : q-value</strong>
<a href="https://i.imgur.com/5iHLWO6.png" rel="nofollow noreferrer"><img src="https://i.imgur.com/5iHLWO6.png" alt="enter image description here"></a>
<strong>Model 2 : q-value</strong></p>
<p><a href="https://i.imgur.com/Bo7oZdP.png" rel="nofollow noreferrer"><img src="https://i.imgur.com/Bo7oZdP.png" alt="enter image description here"></a>
<strong>Model 3 : q-value</strong></p>
<p><a href="https://i.imgur.com/9VxfAYz.png" rel="nofollow noreferrer"><img src="https://i.imgur.com/9VxfAYz.png" alt="enter image description here"></a>
<strong>Model 3 : q-value zoomed</strong></p>
<p><a href="https://i.imgur.com/6vcMoMk.png" rel="nofollow noreferrer"><img src="https://i.imgur.com/6vcMoMk.png" alt="enter image description here"></a> </p>
<p>As you can see, the q values aren't improving at all, and the same goes for the reward. Please help me figure out what I'm doing wrong.</p>
51,645,638 | <p>Although your problem is quite simple, it is poorly scaled: <code>x</code> ranges from 255 to 200K. This poor scaling leads to numerical instability and overall makes the training process unnecessarily unstable.<br>
To overcome this technical issue, you simply need to scale your inputs to <code>[-1, 1]</code> (or <code>[0, 1]</code>) range.</p>
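<p>Concretely, with the constants from your code that could look like the following (a sketch; any affine map onto <code>[-1, 1]</code> will do):</p>
<pre><code>LOW_X, HIGH_X = 255, 200000

def scale_x(x):
    # map [LOW_X, HIGH_X] onto [-1, 1] before feeding the network
    return 2.0 * (x - LOW_X) / (HIGH_X - LOW_X) - 1.0
</code></pre>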
<p>Note that this scaling is quite ubiquitous in deep-learning: images are scaled to <code>[-1, 1]</code> range (see, e.g., <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.Normalize" rel="nofollow noreferrer"><code>torchvision.transforms.Normalize</code></a>).<br>
To understand better the importance of scaled responses, you can look into the mathematical analysis done in <a href="https://arxiv.org/abs/1502.01852" rel="nofollow noreferrer">this paper</a>.</p> | 2018-08-02 04:42:29.653000+00:00 | 2018-08-02 04:42:29.653000+00:00 | null | null | 51,640,064 | <p>I’ve tried to train a 2 layer neural network on a simple linear interpolation for a discrete function, I’ve tried lots of different learning rates as well as different activation functions, and it seems like nothing is being learned!</p>
<p>I’ve literally spent the last 6 hours trying to debug the following code, but it seems like there’s no bug! What's the explanation?</p>
<pre class="lang-py prettyprint-override"><code> from torch.utils.data import Dataset
import os
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import random
LOW_X=255
MID_X=40000
HIGH_X=200000
LOW_Y=torch.Tensor([0,0,1])
MID_Y=torch.Tensor([0.2,0.5,0.3])
HIGH_Y=torch.Tensor([1,0,0])
BATCH_SIZE=4
def x_to_tensor(x):
if x<=MID_X:
return LOW_Y+(x-LOW_X)*(MID_Y-LOW_Y)/(MID_X-LOW_X)
if x<=HIGH_X:
return MID_Y+(x-MID_X)*(HIGH_Y-MID_Y)/(HIGH_X-MID_X)
return HIGH_Y
class XYDataset(Dataset):
LENGTH=10000
def __len__(self):
return self.LENGTH
def __getitem__(self, idx):
x=random.randint(LOW_X,HIGH_X)
y=x_to_tensor(x)
return x,y
class Interpolate(nn.Module):
def __init__(self, num_outputs,hidden_size=10):
super(Interpolate, self).__init__()
self.hidden_size=hidden_size
self.x_to_hidden = nn.Linear(1, hidden_size)
self.hidden_to_out = nn.Linear(hidden_size,num_outputs)
self.activation = nn.Tanh() #I have tried Sigmoid and Relu activations as well
self.softmax=torch.nn.Softmax(dim=1)
def forward(self, x):
out = self.x_to_hidden(x)
out = self.activation(out)
out = self.hidden_to_out(out)
out = self.softmax(out)
return out
dataset=XYDataset()
trainloader = torch.utils.data.DataLoader(dataset, batch_size=BATCH_SIZE,
shuffle=True, num_workers=4)
criterion= nn.MSELoss()
def train_net(net,epochs=10,lr=5.137871216190041e-05,l2_regularization=2.181622809797563e-12):
optimizer= optim.Adam(net.parameters(),lr=lr,weight_decay=l2_regularization)
net.train(True)
running_loss=0.0
for epoch in range(epochs):
for i,data in enumerate(trainloader):
inputs,targets=data
inputs,targets=torch.FloatTensor(inputs.float()).view(-1,1),torch.FloatTensor(targets.float())
optimizer.zero_grad()
outputs=net(inputs)
loss=criterion(outputs,targets)
loss.backward()
optimizer.step()
running_loss+=loss.item()
if (len(trainloader)*epoch+i)%200==199:
running_loss=running_loss/(200*BATCH_SIZE)
print('[%d,%5d] loss: %.6f ' % (epoch+1,i+1,running_loss))
running_loss=0.0
for i in range(-11,3):
net=Interpolate(num_outputs=3)
train_net(net,lr=10**i,epochs=1)
print('for learning rate {} net output on low x is {}'.format(i,net(torch.Tensor([255]).view(-1,1))))
</code></pre> | 2018-08-01 18:30:22.757000+00:00 | 2019-10-09 06:26:55.187000+00:00 | 2019-10-09 06:26:55.187000+00:00 | machine-learning|neural-network|deep-learning|pytorch | ['https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.Normalize', 'https://arxiv.org/abs/1502.01852'] | 2 |
42,111,073 | <p>I suggest starting with a base architecture used in practice like this one in nerve-segmentation: <a href="https://github.com/EdwardTyantov/ultrasound-nerve-segmentation" rel="nofollow noreferrer">https://github.com/EdwardTyantov/ultrasound-nerve-segmentation</a>. Here a dice_loss is used as a loss function. This works very well for a two class problem as has been shown in literature: <a href="https://arxiv.org/pdf/1608.04117.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1608.04117.pdf</a>.</p>
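<p>For reference, a commonly used Keras formulation of such a (soft) dice loss looks like the sketch below; this is a generic version under the assumption of binary masks shaped <code>(batch, height, width, 1)</code>, not a verbatim copy of the linked repository:</p>
<pre><code>from tensorflow.keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    return 1.0 - dice_coef(y_true, y_pred)

# model.compile(optimizer="adam", loss=dice_loss, metrics=[dice_coef])
</code></pre>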
<p>Another loss function that has been widely used for such a problem is cross entropy. For problems like yours, long and short skip connections are most commonly deployed to stabilize training, as noted in the paper above. </p> | 2017-02-08 10:50:31.653000+00:00 | 2017-02-08 10:50:31.653000+00:00 | null | null | 42,093,632 | <p>Apologies for any misuse of technical terms.
I am working on a project of semantic segmentation via CNNs ; trying to implement an architecture of type Encoder-Decoder, therefore output is the same size as the input.</p>
<p>How do you design the labels?
What loss function should one apply? Especially in the situation of heavy class imbalance (where the ratio between the classes varies from image to image). </p>
<p>The problem deals with two classes (objects of interest and background). I am using Keras with tensorflow backend.</p>
<p>So far, I am going with designing expected outputs to be the same dimensions as the input images, applying pixel-wise labeling. Final layer of model has either softmax activation (for 2 classes), or sigmoid activation ( to express probability that the pixels belong to the objects class). I am having trouble with designing a suitable objective function for such a task, of type:</p>
<p>function(y_pred,y_true), </p>
<p>in agreement with Keras. </p>
<p>Please,try to be specific with the dimensions of tensors involved (input/output of the model). Any thoughts and suggestions are much appreciated. Thank you !</p> | 2017-02-07 15:27:43.700000+00:00 | 2017-03-14 10:24:21.727000+00:00 | null | tensorflow|deep-learning|keras | ['https://github.com/EdwardTyantov/ultrasound-nerve-segmentation', 'https://arxiv.org/pdf/1608.04117.pdf'] | 2 |
4,178,690 | <p>I'm not aware that this is possible to do in the general case, but if you know certain properties of the graph (such as its "distance from cycle-freeness" as described in the paper below), there exist randomized algorithms that with high probability will find a cycle quickly. Specifically, see the first algorithm in section 3 of the linked paper, with the corresponding analysis explaining how to extract the cycle.</p>
<p>As for deterministic algorithms, Mr. Saurav's answer is correct. In the worst case, you'll at least have to scan the entire input in order to correctly determine whether or not there is a cycle, which already requires O(|V| + |E|) time.</p>
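<p>As a concrete illustration of the deterministic approach (my own sketch, not taken from the paper): a depth-first search over a directed graph, given as an adjacency dict, that returns the first cycle it finds in O(|V| + |E|) time (recursive for clarity).</p>
<pre><code>def find_cycle(graph):
    # graph: dict mapping node -> list of successor nodes (directed graph assumed).
    # Returns one cycle as a list of nodes, or None if the graph is acyclic.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in graph}
    parent = {}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, ()):
            if color.get(v, WHITE) == WHITE:
                parent[v] = u
                found = dfs(v)
                if found:
                    return found
            elif color.get(v) == GRAY:
                # Back edge u -> v closes a cycle; walk parents back from u to v.
                cycle = [u]
                while cycle[-1] != v:
                    cycle.append(parent[cycle[-1]])
                return list(reversed(cycle))
        color[u] = BLACK
        return None

    for node in list(graph):
        if color[node] == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

print(find_cycle({0: [1], 1: [2], 2: [3], 3: [1]}))  # e.g. [1, 2, 3]
</code></pre>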
<p>[1] <a href="http://arxiv.org/abs/1007.4230" rel="nofollow">http://arxiv.org/abs/1007.4230</a></p> | 2010-11-14 17:31:19.053000+00:00 | 2010-11-14 17:31:19.053000+00:00 | null | null | 4,178,100 | <p>Also are there any randomized algorithms for that. I need to find a single cycle as fast as possible, not all cycles.</p> | 2010-11-14 15:20:22.323000+00:00 | 2010-11-14 17:33:50.557000+00:00 | 2010-11-14 15:28:51.207000+00:00 | algorithm|graph|cycle | ['http://arxiv.org/abs/1007.4230'] | 1 |
72,992,243 | <p>Here are some useful links you can get started with:</p>
<pre><code>https://www.tensorflow.org/text/guide/word_embeddings
https://arxiv.org/abs/1810.04805
https://machinelearningmastery.com/what-are-word-embeddings/
https://www.analyticsvidhya.com/blog/2017/06/word-embeddings-count-word2veec/
</code></pre> | 2022-07-15 09:56:43.087000+00:00 | 2022-07-15 09:56:43.087000+00:00 | null | null | 72,896,436 | <p>I am trying to perform in python the cosine similarity between two words which are in a dataset of texts (each text represents a tweet). I want to evaluate the similarity based on the context where they are placed.</p>
<p>I have set a code like the following:</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
corpus = dataset
// corpus is a list of texts (in this case is a list of tweets)
vectorizer = TfidfVectorizer()
trsfm = vectorizer.fit_transform(corpus)
sims = cosine_similarity(trsfm, trsfm)
counts = count_vect.fit_transform(corpus)
pd.DataFrame(trsfm.toarray(), columns = vectorizer.get_feature_names(), index = corpus)
vectorizer.get_feature_names()
</code></pre>
<p>The result is the similarity between the texts but I want the similarity between two words.</p>
<p>So, how can I obtain the similarity between two words and not between two texts?
For instance, I want the similarity between these couples of words: {["covid","vaccine"], ["work","covid"], ["environment","pollution"]}.</p>
<p>In addition, I want to represent these words in a Cartesian plane in order to graphically display the distances among them. So I need to calculate their Cartesian coordinates.</p>
<p>Is there anyone who can help me?</p> | 2022-07-07 10:46:06.647000+00:00 | 2022-07-15 09:56:43.087000+00:00 | null | python|cosine-similarity | [] | 0 |
60,237,174 | <p>The learning rate you define for optimizers like ADAM is an upper bound. You can see this in the <a href="https://arxiv.org/pdf/1412.6980.pdf" rel="nofollow noreferrer">paper</a> in Section 2.1. The stepsize α in the paper is the learning rate.</p>
<blockquote>
<p>The effective magnitude of the steps taken in parameter space at each timestep are approximately bounded by the stepsize setting α </p>
</blockquote>
<p>Also this stepsize α is directly used and multiplied with the step size correction, which is learned. So changing the learning rate, e.g. reducing it, will reduce all individual learning rates and reduce the upper bound. This can be helpful during the "end" of training, to reduce the overall step sizes, so only smaller steps occur, which might help the network to find a minimum in the loss function.</p>
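<p>In practice this means you can still combine Adam with a scheduler that shrinks the upper bound over time. A minimal PyTorch sketch (my own illustration, with a placeholder model and the 0.8-every-10-epochs decay mentioned below):</p>
<pre><code>import torch
from torch import nn, optim

model = nn.Linear(10, 1)                                 # placeholder model
optimizer = optim.Adam(model.parameters(), lr=1e-3)      # 1e-3 acts as the upper bound
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.8)

for epoch in range(100):
    # ... run the training batches: forward, loss.backward(), optimizer.step() ...
    scheduler.step()  # multiply the (upper-bound) learning rate by 0.8 every 10 epochs
</code></pre>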
<p>I saw learning rate decay in some papers using ADAM and used it myself and it did help. What I found is that you should do it slower than e.g. with SGD. With one model I just multiply it with 0.8 every 10 epochs. So it is a gradual decay which I think works better than more drastic steps since you don't "invalidate" the estimated momentums to much. But this is just my theory.</p> | 2020-02-15 08:43:15.987000+00:00 | 2020-02-15 08:43:15.987000+00:00 | null | null | 60,229,897 | <p>In PyTorch, the weight adjustment policy is determined by the optimizer, and the learning rate is adjusted with a scheduler. When the optimizer is SGD, there is only one learning rate and this is straightforward. When using Adagrad, Adam, or any similar optimizer which inherently adjusts the learning rate on a per-parameter basis, is there something in particular to look out for? Can I ignore the scheduler completely since the algorithm adjusts its own learning rates? Should I parameterize it very differently than if I'm using SGD?</p> | 2020-02-14 16:13:06.653000+00:00 | 2020-02-15 08:43:15.987000+00:00 | null | python|deep-learning|pytorch|gradient-descent | ['https://arxiv.org/pdf/1412.6980.pdf'] | 1 |
53,912,694 | <p>I've been able to test the new method as mentioned in the comments, and it worked perfectly fine.</p>
<p><a href="https://arxiv.org/ftp/arxiv/papers/1505/1505.03090.pdf" rel="nofollow noreferrer">The algorithm that was linked above</a>, implicitly states that the point shall be individually dropped down into the partition tree, passing all the random tests and creating new nodes as it is dropped down.</p>
<hr>
<p>But there is a significant problem with this method, since in order to have a balanced efficient and shallow tree, left and right nodes must be distributed evenly. </p>
<p>Hence, in order to split the node, at every level of the tree, every point of the node must be passed to either left or right node (by a random test), until the tree reaches the depth where all nodes at that level are leaf.</p>
<hr>
<p>In mathematical terms, the root node contains a vector space which is divided by the separating hyper-plane into left and right nodes containing convex polyhedrons bounded by supporting hyper-planes:</p>
<p><a href="https://i.stack.imgur.com/o5tZa.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o5tZa.gif" alt="enter image description here"></a></p>
<p>The negative term of the equation (I believe we can call it the bias) is where the splitting ratio comes into play: it should be a percentile of all node points between 100*r and 100*(1-r), so that the tree is separated more evenly and is more shallow. Basically it decides how even the hyper-plane separation should be, which is why we require nodes that contain all the points.</p>
<hr>
<p>I have been able to implement such system:</p>
<pre><code>def index_space(self):
shuffled_space = self.shuffle_space()
current_tree = PartitionTree()
level = 0
root_node = RootNode(shuffled_space, self.capacity, self.split_ratio, self.indices)
current_tree.root_node = root_node
current_tree.node_array.append(root_node)
current_position = root_node.node_position
node_array = {0: [root_node]}
while True:
current_nodes = node_array[level]
if all([node.is_leaf() for node in current_nodes]):
break
else:
level += 1
node_array[level] = []
for current_node in current_nodes:
if not current_node.is_leaf():
left_child = InternalNode(self.capacity, self.split_ratio, self.indices,
self._append(current_position, [-1]), current_node)
right_child = InternalNode(self.capacity, self.split_ratio, self.indices,
self._append(current_position, [1]), current_node)
for point in current_node.list_points():
if current_node.random_test(point) == 1:
right_child.add_point(point)
else:
left_child.add_point(point)
node_array[level].extend([left_child, right_child])
</code></pre>
<p>where <code>node_array</code> contains all the nodes of the tree (root, internal and leaf). </p>
<p>Unfortunately, <code>node.random_test(x)</code> method:</p>
<pre><code>def random_test(self, main_point):
random_coefficients = self.random_coefficients()
scale_values = [np.inner(self.random_coefficients(), point[:self.indices].ravel())
for point in self.points]
percentile = np.percentile(scale_values, self.ratio * 100)
main_term = np.inner(main_point[:self.indices].ravel(), random_coefficients)
if self.is_leaf():
return 0 # Next node is the center leaf child
else:
if (main_term - percentile) >= 0: # Hyper-plane equation defined in the document
return -1 # Next node is the left child
else:
return 1 # Next node is the right child
</code></pre>
<p>is inefficient, since calculating percentile takes too much time. Hence I have to find another way to calculate percentile (perhaps by performing short-circuited binary search to optimize percentile). </p>
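<p>For what it's worth, one way to avoid the full <code>np.percentile</code> call (my own suggestion, not part of the original implementation) is to use <code>np.partition</code>, which selects the k-th smallest projection in roughly linear time instead of sorting everything:</p>
<pre><code>import numpy as np

def fast_percentile_threshold(scale_values, ratio):
    # Order-statistic approximation of np.percentile(scale_values, ratio * 100):
    # np.partition places the k-th smallest element at index k without a full sort.
    values = np.asarray(scale_values)
    k = int(ratio * (len(values) - 1))
    return np.partition(values, k)[k]
</code></pre>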
<hr>
<h1>Conclusion:</h1>
<p>This is just a large extension of Clinton Ray Mulligan's answer - which briefly explains the solution to create such trees and hence will remain as an accepted answer. </p>
<p>I have just added more details in case anyone is interested in implementing randomized binary partition trees.</p> | 2018-12-24 11:13:21.440000+00:00 | 2018-12-24 11:13:21.440000+00:00 | null | null | 53,889,906 | <p>I'm trying to implement <a href="https://arxiv.org/ftp/arxiv/papers/1505/1505.03090.pdf" rel="nofollow noreferrer">this algorithm</a> in Python, but due to my lack of understanding tree structures I'm confused about creation process of the partition tree.</p>
<p><strong>Brief Explanation</strong>:</p>
<p>The algorithm that was linked is for partitioning a high-dimensional feature space into internal and leaf nodes so that queries can be performed quickly. </p>
<p>It divides a large space using a specific random test, a hyperplane that splits one large cell into two.</p>
<p><a href="https://math.stackexchange.com/a/3044426/513294">This answer explains everything much more precisely</a>.</p>
<p><a href="https://i.stack.imgur.com/Ijl26.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ijl26.png" alt="enter image description here"></a> </p>
<p>(taken from the link above)</p>
<p><strong>Code Fragments</strong>:</p>
<pre><code>def random_test(self, main_point): # Main point is np.ndarray instance
dimension = main_point.ravel().size
random_coefficients = self.random_coefficients(dimension)
scale_values = np.array(sorted([np.inner(random_coefficients, point.ravel())
for point in self.points]))
percentile = random.choice([np.percentile(scale_values, 100 * self.ratio), # Just as described on Section 3.1
np.percentile(scale_values, 100 * (1 - self.ratio))])
main_term = np.inner(main_point.ravel(), random_coefficients)
if self.is_leaf():
return 0 # Next node is the center leaf child
else:
if (main_term - percentile) >= 0: # Hyper-plane equation defined in the document
return -1 # Next node is the left child
else:
return 1 # Next node is the right child
</code></pre>
<p><code>self.ratio</code> as mentioned in the algorithm linked above, is determining how balanced and shallow the tree will be, at <code>1/2</code> it is supposed to generate the most balanced and shallow tree.</p>
<p>Then we move onto the iterative part, where the tree keeps dividing the space further and further until it <strong>reaches</strong> the leaf node (notice the keyword <strong>reaches</strong>); the problem is, it will <strong>never truly</strong> reach the leaf node. </p>
<p>Since the definition of a leaf node in the document linked above is this:</p>
<pre><code>def is_leaf(self):
return (self.capacity * self.ratio) <= self.cell_count() <= self.capacity
</code></pre>
<p>where <code>self.cell_count()</code> is number of points in the cell, <code>self.capacity</code> is the maximum amount of points that the cell can have and <code>self.ratio</code> is the split ratio.</p>
<p><a href="https://github.com/ShellRox/Lucifitas/blob/master/core/semantic/polyconvex.py" rel="nofollow noreferrer">My full code</a> should basically divide the feature space by creating new nodes at initial iteration until the leaf node is created (but the leaf node is never created). <a href="https://github.com/ShellRox/Lucifitas/blob/master/core/semantic/polyconvex.py/#L230-L256" rel="nofollow noreferrer">See the fragment that contains the division process</a>.</p>
<p><a href="https://i.stack.imgur.com/EiqXo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EiqXo.png" alt="enter image description here"></a></p>
<p>(taken from the document linked above)</p>
<h1>tl;dr:</h1>
<p>Are binary partition trees prepared (filled with empty nodes) before we add any points to them? If so, don't we require to define the level (depth) of the tree?</p>
<p><strong>If not</strong>, are binary partition trees created while adding points to them? If so, then how is the first point (from the first iteration) added to the tree?</p> | 2018-12-21 19:32:51.220000+00:00 | 2018-12-24 14:19:38.163000+00:00 | 2018-12-21 21:24:10.720000+00:00 | python|algorithm|numpy|search|kdtree | ['https://arxiv.org/ftp/arxiv/papers/1505/1505.03090.pdf', 'https://i.stack.imgur.com/o5tZa.gif'] | 2 |
70,986,702 | <p>There is a way to access pre-activation layers for pretrained Keras models using TF version 2.7.0. Here's how to access two intermediate pre-activation outputs from VGG19 in a <em>single</em> forward pass.</p>
<p>Initialize VGG19 model. We can omit top layers to avoid loading unnecessary parameters into memory.</p>
<pre class="lang-py prettyprint-override"><code>vgg19 = tf.keras.applications.VGG19(
include_top=False,
weights="imagenet"
)
</code></pre>
<p><strong>This is the important part: Create a deepcopy of the intermediate layer form which you like to have the features, change the activation of the conv layers to linear (i.e. no activation), rename the layer (otherwise two layers in the model will have the same name which will raise errors) and finally pass the output of the <em>previous</em> through the copied conv layer.</strong></p>
<pre class="lang-py prettyprint-override"><code># for more intermediate features wrap a loop around it to avoid copy paste
b5c4_layer = deepcopy(vgg19.get_layer("block5_conv4"))
b5c4_layer.activation = tf.keras.activations.linear
b5c4_layer._name = b5c4_layer.name + str("_preact")
b5c4_preact_output = b5c4_layer(vgg19.get_layer("block5_conv3").output)
b2c2_layer = deepcopy(vgg19.get_layer("block2_conv2"))
b2c2_layer.activation = tf.keras.activations.linear
b2c2_layer._name = b2c2_layer.name + str("_preact")
b2c2_preact_output = b2c2_layer(vgg19.get_layer("block2_conv1").output)
</code></pre>
<p>Finally, get the outputs and check if they equal post-activation outputs when we apply ReLU-activation.</p>
<pre class="lang-py prettyprint-override"><code>vgg19_features = Model(vgg19.input, [b2c2_preact_output, b5c4_preact_output])
vgg19_features_control = Model(vgg19.input, [vgg19.get_layer("block2_conv2").output, vgg19.get_layer("block5_conv4").output])
b2c2_preact, b5c4_preact = vgg19_features(tf.keras.applications.vgg19.preprocess_input(img))
b2c2, b5c4 = vgg19_features_control(tf.keras.applications.vgg19.preprocess_input(img))
print(np.allclose(tf.keras.activations.relu(b2c2_preact).numpy(),b2c2.numpy()))
print(np.allclose(tf.keras.activations.relu(b5c4_preact).numpy(),b5c4.numpy()))
</code></pre>
<pre><code>True
True
</code></pre>
<p>Here's a visualization similar to Fig. 6 of <a href="https://arxiv.org/pdf/1809.00219.pdf" rel="nofollow noreferrer">Wang et al.</a> to see the effect in the feature space.
<a href="https://i.stack.imgur.com/QjAyJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QjAyJ.png" alt="VGG19-intermediate" /></a></p>
<p>Input image</p>
<p><a href="https://i.stack.imgur.com/CJ3PA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CJ3PA.png" alt="input image" /></a></p> | 2022-02-04 12:52:52.553000+00:00 | 2022-02-05 11:01:34.380000+00:00 | 2022-02-05 11:01:34.380000+00:00 | null | 64,303,825 | <p>Is it possible to access pre-activation tensors in a Keras Model? For example, given this model:</p>
<pre class="lang-python prettyprint-override"><code>import tensorflow as tf
image_ = tf.keras.Input(shape=[224, 224, 3], batch_size=1)
vgg19 = tf.keras.applications.VGG19(include_top=False, weights='imagenet', input_tensor=image_, input_shape=image_.shape[1:], pooling=None)
</code></pre>
<p>the usual way to access layers is:</p>
<pre class="lang-python prettyprint-override"><code>intermediate_layer_model = tf.keras.models.Model(inputs=image_, outputs=[vgg19.get_layer('block1_conv2').output])
intermediate_layer_model.summary()
</code></pre>
<p>This gives the ReLU outputs for a layer, while I would like the ReLU inputs. I tried doing this:</p>
<pre class="lang-python prettyprint-override"><code>graph = tf.function(vgg19, [tf.TensorSpec.from_tensor(image_)]).get_concrete_function().graph
outputs = [graph.get_tensor_by_name(tname) for tname in [
'vgg19/block4_conv3/BiasAdd:0',
'vgg19/block4_conv4/BiasAdd:0',
'vgg19/block5_conv1/BiasAdd:0'
]]
intermediate_layer_model = tf.keras.models.Model(inputs=image_, outputs=outputs)
intermediate_layer_model.summary()
</code></pre>
<p>but I get the error</p>
<pre class="lang-python prettyprint-override"><code>ValueError: Unknown graph. Aborting.
</code></pre>
<p>The only workaround I've found is to edit the model file to manually expose the intermediates, turning every layer like this:</p>
<pre class="lang-python prettyprint-override"><code>x = layers.Conv2D(256, (3, 3), activation="relu", padding="same", name="block3_conv1")(x)
</code></pre>
<p>into 2 layers where the 1st one can be accessed before activations:</p>
<pre class="lang-python prettyprint-override"><code>x = layers.Conv2D(256, (3, 3), activation=None, padding="same", name="block3_conv1")(x)
x = layers.ReLU(name="block3_conv1_relu")(x)
</code></pre>
<p>Is there a way to acces pre-activation tensors in a Model without essentially editing Tensorflow 2 source code, or reverting to Tensorflow 1 which had full flexibility accessing intermediates?</p> | 2020-10-11 12:07:36.570000+00:00 | 2022-02-05 11:01:34.380000+00:00 | 2020-11-12 15:33:40.360000+00:00 | python|tensorflow|keras|tensorflow2.0 | ['https://arxiv.org/pdf/1809.00219.pdf', 'https://i.stack.imgur.com/QjAyJ.png', 'https://i.stack.imgur.com/CJ3PA.png'] | 3 |
54,038,110 | <p>I found an article dealing with exactly the problem I was facing:
<a href="https://arxiv.org/pdf/1812.09057.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1812.09057.pdf</a></p>
<p>It introduces a technique called "Singular Spectrum Analysis for advanced reduction of dimensionality" (SSA-FARI).</p> | 2019-01-04 11:29:26.973000+00:00 | 2019-01-04 11:29:26.973000+00:00 | null | null | 50,735,926 | <p>I have given a time-series in various channels. There are two major oscillations "hidden" in the time-series and distributed over all channels. I want to extract these oscillations using multivariate Singular Spectrum (mSSA) Analysis.</p>
<p>I am new to SSA and it seems to me that SSA is not really a dimensionality reduction method but more a "denoising" method. I.e. is it true that I cannot really extract the major oscillations, as after grouping, backprojection and diagonal averaging, I get signal in all channels, but not really a single signal which is the major oscillation (as PCA would provide)?</p>
<p>On the other hand, the eigenvectors (altough shrinked in time due to hankelization) seem to be exactly the oscillations that I am looking for. Can I use SSA for dimensionality reduction by simply treating the eigenvectors as the major oscillations?</p> | 2018-06-07 08:02:19.360000+00:00 | 2019-01-04 11:29:26.973000+00:00 | 2018-08-22 15:36:47.447000+00:00 | time-series|pca|dimensionality-reduction|ssa | ['https://arxiv.org/pdf/1812.09057.pdf'] | 1 |
51,783,051 | <p>While it's <a href="https://arxiv.org/pdf/0907.5058.pdf" rel="noreferrer">possible to decide whether two regular expressions accept the same language</a>, it seems to be rather complicated and not all that terribly useful for everyday regex usage. Therefore, equality on compiled regex patterns is just referential equality:</p>
<pre><code>val x = "abc".r
val y = "abc".r
x == y
// res0: Boolean = false
</code></pre> | 2018-08-10 09:17:09.707000+00:00 | 2018-08-10 09:51:36.810000+00:00 | 2018-08-10 09:51:36.810000+00:00 | null | 51,782,957 | <p>In ScalaTest, I have the following check:</p>
<pre><code>"abc".r shouldBe "abc".r
</code></pre>
<p>But it is not equal. I don't understand.</p>
<pre><code>abc was not equal to abc
ScalaTestFailureLocation: com.ing.cybrct.flink.clickstream.ConfigsTest$$anonfun$6 at (ConfigsTest.scala:97)
Expected :abc
Actual :abc
</code></pre> | 2018-08-10 09:12:01.283000+00:00 | 2018-10-10 09:45:27.110000+00:00 | 2018-08-10 14:17:33.767000+00:00 | scala|equality|scalatest | ['https://arxiv.org/pdf/0907.5058.pdf'] | 1 |
36,883,726 | <p>There are several algorithms to perform this task: the "state-elimination method" from Brzozowski and Mc Cluskey, the resolution of a system of linear equations, the method from McNaughton and Yamada, etc. They are very well described in <a href="http://arxiv.org/abs/1502.03573" rel="noreferrer" title="Automata and rational expressions">Automata and rational expressions</a> by Jacques Sakarovitch.</p>
<p>The state-elimination method in particular is simple to understand. The key idea is that you are going to build an automaton labeled by rational (aka regular) expressions rather than letters. First, make sure you have a single initial state and a single final state (you may add fresh states and spontaneous transitions if necessary). Then choose a state s to eliminate, say state 1 in the following picture.</p>
<p><a href="https://i.stack.imgur.com/r4cH9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/r4cH9.png" alt="A simple automaton"></a></p>
<p>Then consider all the couples (p, q) where p is a predecessor (states from which a transition reaches s, 0 in the picture) and q a successor (state 2). For each such couple (p, q) add a transition from p to q which is labeled by E(p, q) + E(p, s)E(s, s)*E(s, q), where E(p, s) means "the expression that labels the transition from p to s". Once you have treated all the couples (p, q), remove the state s. In the previous example:</p>
<p><a href="https://i.stack.imgur.com/ZF4fm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZF4fm.png" alt="An automaton labeled by regexps"></a></p>
<p>Do that until you have eliminated all the inner states (i.e., keep only the initial state and the final state), and just read the result on the transition from the initial state to the final state (here d+ab*c).</p>
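<p>If you prefer code to pictures, here is a rough Python sketch of that elimination step (my own illustration, not Vcsn; it does not add parentheses around composite sub-expressions, so it is only meant for letter-labelled inputs like the example):</p>
<pre><code>def eliminate(edges, states, initial, final):
    # edges: dict mapping (p, q) -> rational expression (a string) labelling p -> q.
    for s in [st for st in states if st not in (initial, final)]:
        loop = edges.get((s, s))
        star = '(' + loop + ')*' if loop else ''
        preds = {p for (p, q) in edges if q == s and p != s}
        succs = {q for (p, q) in edges if p == s and q != s}
        for p in preds:
            for q in succs:
                new = edges[(p, s)] + star + edges[(s, q)]
                old = edges.get((p, q))
                edges[(p, q)] = old + '+' + new if old else new
        for key in [k for k in edges if s in k]:  # drop every transition touching s
            del edges[key]
    return edges.get((initial, final), '')

# The automaton of the pictures: 0 --a--> 1, 1 --b--> 1 (loop), 1 --c--> 2 and 0 --d--> 2.
print(eliminate({(0, 1): 'a', (1, 1): 'b', (1, 2): 'c', (0, 2): 'd'}, [0, 1, 2], 0, 2))
# prints d+a(b)*c, i.e. the d+ab*c of the example
</code></pre>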
<p>You may toy with this algorithm using <a href="http://vcsn.lrde.epita.fr" rel="noreferrer" title="Vcsn">Vcsn</a>, a tool for rational expressions and automata. Here is a complete example you may reproduce at <a href="http://vcsn-sandbox.lrde.epita.fr" rel="noreferrer" title="Vcsn Sandbox">Vcsn Sandbox</a>.</p>
<p><a href="https://i.stack.imgur.com/ko9jA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ko9jA.png" alt="A complete run of the state elimination method"></a></p> | 2016-04-27 07:57:28.987000+00:00 | 2017-09-14 07:04:47.867000+00:00 | 2017-09-14 07:04:47.867000+00:00 | null | 36,853,077 | <p>Is there a tool (or an algorithm) to convert a <em>finite state machine</em> into a <em>regular expression</em>?</p>
<p>(not the other way around, that would be easy).</p> | 2016-04-25 23:50:05.793000+00:00 | 2020-09-30 10:02:36.183000+00:00 | null | regex|algorithm|fsm | ['http://arxiv.org/abs/1502.03573', 'https://i.stack.imgur.com/r4cH9.png', 'https://i.stack.imgur.com/ZF4fm.png', 'http://vcsn.lrde.epita.fr', 'http://vcsn-sandbox.lrde.epita.fr', 'https://i.stack.imgur.com/ko9jA.png'] | 6 |
37,984,092 | <p>No, that won't work. The Open Mobile API <em>library</em> is only an interface to the Open Mobile API service ("SmartcardService"). Thus, the library only helps your app to communicate with the service through a standardized interface (= the Open Mobile <em>API</em>). If you try to use the library on a device without the SmartcardService, the library won't be able to bind to that service and, consequently, the API calls will fail.</p>
<h3>What you can do</h3>
<ol>
<li>Starting with Android 5 (API level 21), the <a href="https://developer.android.com/reference/android/telephony/TelephonyManager.html" rel="nofollow">TelephonyManager</a> provides an API to exchange APDU commands with applications on the SIM/UICC. See <a href="https://developer.android.com/reference/android/telephony/TelephonyManager.html#iccOpenLogicalChannel%28java.lang.String%29" rel="nofollow"><code>iccOpenLogicalChannel</code></a>, <a href="https://developer.android.com/reference/android/telephony/TelephonyManager.html#iccTransmitApduBasicChannel(int,%20int,%20int,%20int,%20int,%20java.lang.String)" rel="nofollow"><code>iccTransmitApduBasicChannel</code></a>, <a href="https://developer.android.com/reference/android/telephony/TelephonyManager.html#iccTransmitApduLogicalChannel(int,%20int,%20int,%20int,%20int,%20int,%20java.lang.String)" rel="nofollow"><code>iccTransmitApduLogicalChannel</code></a>, etc. Just as with the Open Mobile API, your app would, of course, need to have the permission to access that API.</li>
<li>If there is an implementation of the SmartcardService for your device <strong>that also implements access to the SIM/UICC</strong> through the RIL, you could install that service together with the Open Mobile API library to get access to the SIM/UICC. See my report <a href="https://arxiv.org/abs/1601.03027" rel="nofollow">Open Mobile API: Accessing the UICC on Android Devices</a>.</li>
</ol> | 2016-06-23 06:44:47.323000+00:00 | 2016-06-23 06:44:47.323000+00:00 | null | null | 37,942,706 | <p>I have a device without NFC. This device also does not support the Open Mobile API. I need access to the SIM applet on that device.</p>
<p>Now I wonder if I could add that functionality...If I have a copy of the Open Mobile API library, would it work if pushed that Open Mobile API library to my device through ADB? Could I then exchange APDUs with my SIM applet?</p> | 2016-06-21 11:04:11.667000+00:00 | 2016-06-23 15:38:19.623000+00:00 | 2016-06-23 15:38:19.623000+00:00 | android|nfc|apdu|sim-card|open-mobile-api | ['https://developer.android.com/reference/android/telephony/TelephonyManager.html', 'https://developer.android.com/reference/android/telephony/TelephonyManager.html#iccOpenLogicalChannel%28java.lang.String%29', 'https://developer.android.com/reference/android/telephony/TelephonyManager.html#iccTransmitApduBasicChannel(int,%20int,%20int,%20int,%20int,%20java.lang.String)', 'https://developer.android.com/reference/android/telephony/TelephonyManager.html#iccTransmitApduLogicalChannel(int,%20int,%20int,%20int,%20int,%20int,%20java.lang.String)', 'https://arxiv.org/abs/1601.03027'] | 5 |
59,156,122 | <p>LDA and NTM have different scientific logic:</p>
<p><a href="https://docs.aws.amazon.com/sagemaker/latest/dg/lda.html" rel="noreferrer">SageMaker LDA</a> (Latent Dirichlet Allocation, not to be confused with <a href="https://scikit-learn.org/0.16/modules/generated/sklearn.lda.LDA.html" rel="noreferrer">Linear Discriminant Analysis</a>) model works by assuming that documents are formed by sampling words from a finite set of topics. It is made of 2 moving parts: (1) the word composition per topic and (2) the topic composition per document</p>
<p><a href="https://docs.aws.amazon.com/sagemaker/latest/dg/ntm.html" rel="noreferrer">SageMaker NTM</a> on the other hand doesn't explicitly learn a word distribution per topic, it is a neural network that passes document through a bottleneck layer and tries to reproduce the input document (presumably a Variational Auto Encoder (VAE) according to <a href="https://aws.amazon.com/blogs/machine-learning/introduction-to-the-amazon-sagemaker-neural-topic-model/" rel="noreferrer">AWS documentation</a>). That means that the bottleneck layer ends up containing all necessary information to predict document composition and its coefficients can be considered as topics</p>
<p>Here are considerations for choosing one or the other:</p>
<ol>
<li><strong>VAE-based method such as SageMaker NTM may do a better job of discerning relevant topics than LDA</strong>, presumably because of their possibly deeper expressive power. <a href="https://arxiv.org/pdf/1809.02687.pdf" rel="noreferrer">A benchmark here</a> (featuring a VAE-NTM that could be different that SageMaker NTM) shows that NTMs can beat LDA in both metrics of topic coherence and perplexity</li>
<li><strong>So far there seems to be more community knowledge about LDA than about VAEs, NTMs and SageMaker NTM</strong>. That means a possibly easier learning and troubleshooting path if you play with LDAs. Things change fast though, so this point may be less and less relevant as DL knowledge grows</li>
<li><strong>SageMaker NTM has more flexible hardware options than SageMaker LDA and may scale better</strong>: SageMaker NTM can run on CPU, GPU, multi-GPUs instances and multi-instance context. For example, the official NTM demo uses an ephemeral cluster of 2 <code>ml.c4.xlarge</code> instances. SageMaker LDA currently only support single-instance CPU training.</li>
</ol> | 2019-12-03 11:30:35.970000+00:00 | 2019-12-03 11:30:35.970000+00:00 | null | null | 59,109,982 | <p>I am looking for difference between LDA and NTM . What are some use case where you will use LDA over NTM?</p>
<p>As per AWS doc:</p>
<p>LDA : The Amazon SageMaker Latent Dirichlet Allocation (LDA) algorithm is an unsupervised learning algorithm that attempts to describe a set of observations as a mixture of distinct categories. LDA is most commonly used to discover a user-specified number of topics shared by documents within a text corpus. </p>
<p>Although you can use both the Amazon SageMaker NTM and LDA algorithms for topic modeling, they are distinct algorithms and can be expected to produce different results on the same input data. </p> | 2019-11-29 19:15:21.047000+00:00 | 2019-12-03 11:30:35.970000+00:00 | 2019-11-29 23:57:12.233000+00:00 | algorithm|topic-modeling | ['https://docs.aws.amazon.com/sagemaker/latest/dg/lda.html', 'https://scikit-learn.org/0.16/modules/generated/sklearn.lda.LDA.html', 'https://docs.aws.amazon.com/sagemaker/latest/dg/ntm.html', 'https://aws.amazon.com/blogs/machine-learning/introduction-to-the-amazon-sagemaker-neural-topic-model/', 'https://arxiv.org/pdf/1809.02687.pdf'] | 5 |
51,218,340 | <p>You're not doing anything wrong. The CDF is not implemented for the Multivariate Normal. (I agree the error message is confusing. The error message is being thrown by <code>TransformedDistribution</code> which is responsible for implementing the <code>cdf</code>.)</p>
<p>If you can tolerate a Monte Carlo approximation, I suggest doing something like:</p>
<pre><code>def approx_multivariate_cdf(dist, bound, num_samples=int(100e3), seed=None):
s = dist.sample(num_samples, seed=seed)
in_box = tf.cast(tf.reduce_all(s <= bound, axis=-1), dist.dtype)
return tf.reduce_mean(in_box, axis=0)
</code></pre>
<p>(With some thought, I'm sure someone can do better than this.)</p>
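<p>A usage sketch with the distribution from your snippet (TF 1.x graph mode assumed, matching the question):</p>
<pre><code>import tensorflow as tf
ds = tf.contrib.distributions

mvn = ds.MultivariateNormalFullCovariance(
    loc=[0., 0., 0.],
    covariance_matrix=[[0.36, 0.12, 0.06],
                       [0.12, 0.29, -0.13],
                       [0.06, -0.13, 0.26]])
cdf_at_zero = approx_multivariate_cdf(mvn, tf.constant([0., 0., 0.]), num_samples=int(1e5))

with tf.Session() as sess:
    # Monte Carlo estimate of P(X1 <= 0, X2 <= 0, X3 <= 0)
    print(sess.run(cdf_at_zero))
</code></pre>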
<p>There might also be a more clever solution described here: <a href="https://arxiv.org/abs/1603.04166" rel="nofollow noreferrer">https://arxiv.org/abs/1603.04166</a></p> | 2018-07-06 22:27:29.473000+00:00 | 2018-07-06 22:27:29.473000+00:00 | null | null | 50,988,462 | <p>I want to evaluate the cdf of a multivariate normal distribution using tensorflow. What I have tried so far:</p>
<pre><code>import tensorflow as tf
ds = tf.contrib.distributions
# Initialize a single 3-variate Gaussian.
mu = [0., 0., 0.]
cov = [[ 0.36, 0.12, 0.06],
[ 0.12, 0.29, -0.13],
[ 0.06, -0.13, 0.26]]
mvn = ds.MultivariateNormalFullCovariance(
loc=mu,
covariance_matrix=cov)
value = tf.constant([0., 0., 0.])
with tf.Session() as sess:
print mvn.cdf(value).eval()
</code></pre>
<p>This yields the error:</p>
<pre><code>NotImplementedError: cdf is not implemented when overriding event_shape
</code></pre>
<p>I don't understand why I am overriding the event_shape since event_shape and the shape of value are the same. What am I doing wrong?</p> | 2018-06-22 12:57:27.760000+00:00 | 2018-07-06 22:27:29.473000+00:00 | null | python|tensorflow|cdf | ['https://arxiv.org/abs/1603.04166'] | 1 |
50,348,112 | <p>I haven't run any experiment yet, but Scalable K-Means++ seems rather good for very large data sets (perhaps for those even larger than what you describe).
You can find the paper <a href="https://arxiv.org/pdf/1203.6402" rel="nofollow noreferrer">here</a> and another post explaining it <a href="https://stats.stackexchange.com/questions/135656/k-means-a-k-a-scalable-k-means">here</a>.</p>
<p>Unfortunately, I haven't seen any code around I'd trust...</p> | 2018-05-15 10:38:15.770000+00:00 | 2018-05-16 09:25:45.540000+00:00 | 2018-05-16 09:25:45.540000+00:00 | null | 50,153,049 | <p>I used this k-means++ python code for initializing k centers but it is very long for large data, for example 400000 points of 2 dimension:</p>
<pre><code>class KPlusPlus(KMeans):
def _dist_from_centers(self):
cent = self.mu
X = self.X
D2 = np.array([min([np.linalg.norm(x-c)**2 for c in cent]) for x in X])
self.D2 = D2
def _choose_next_center(self):
self.probs = self.D2/self.D2.sum()
self.cumprobs = self.probs.cumsum()
r = random.random()
ind = np.where(self.cumprobs >= r)[0][0]
return(self.X[ind])
def init_centers(self):
self.mu = random.sample(self.X, 1)
while len(self.mu) < self.K:
self._dist_from_centers()
self.mu.append(self._choose_next_center())
def plot_init_centers(self):
X = self.X
fig = plt.figure(figsize=(5,5))
plt.xlim(-1,1)
plt.ylim(-1,1)
plt.plot(zip(*X)[0], zip(*X)[1], '.', alpha=0.5)
plt.plot(zip(*self.mu)[0], zip(*self.mu)[1], 'ro')
plt.savefig('kpp_init_N%s_K%s.png' % (str(self.N),str(self.K)), \
bbox_inches='tight', dpi=200)
</code></pre>
<p>Is there a way to speed up k-means++?</p> | 2018-05-03 10:42:02.130000+00:00 | 2018-05-16 09:25:45.540000+00:00 | null | python|machine-learning|bigdata|cluster-analysis|k-means | ['https://arxiv.org/pdf/1203.6402', 'https://stats.stackexchange.com/questions/135656/k-means-a-k-a-scalable-k-means'] | 2 |
50,153,346 | <p>Initial seeding has a large impact on k-means execution time. In <a href="http://datasciencelab.wordpress.com/2014/01/15/improved-seeding-for-clustering-with-k-means/" rel="nofollow noreferrer">this post</a> you can find some strategies to speed it up.</p>
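<p>Independently of the linked post, a large part of the cost in the code you posted is the nested Python loop that recomputes D2 at every step. A small NumPy sketch (my own illustration) of the usual vectorisation that replaces it with broadcasting:</p>
<pre><code>import numpy as np

def dist_from_centers(X, centers):
    # Squared distance from every point to its nearest already-chosen center.
    X = np.asarray(X)                      # shape (n, d)
    C = np.asarray(centers)                # shape (k, d)
    diff = X[:, None, :] - C[None, :, :]   # shape (n, k, d)
    return (diff ** 2).sum(axis=2).min(axis=1)
</code></pre>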
<p>Perhaps you could consider using <a href="https://arxiv.org/abs/1701.04600" rel="nofollow noreferrer">Siddhesh Khandelwal's K-means variant</a>, which was published in the Proceedings of the European Conference on Information Retrieval (ECIR 2017).
Siddhesh provided the python implementation <a href="https://github.com/siddheshk/Faster-Kmeans" rel="nofollow noreferrer">in GitHub</a>, and it is accompanied by some other previous heuristic algorithms. </p> | 2018-05-03 10:55:59.607000+00:00 | 2018-05-03 10:55:59.607000+00:00 | null | null | 50,153,049 | <p>I used this k-means++ python code for initializing k centers but it is very long for large data, for example 400000 points of 2 dimension:</p>
<pre><code>class KPlusPlus(KMeans):
def _dist_from_centers(self):
cent = self.mu
X = self.X
D2 = np.array([min([np.linalg.norm(x-c)**2 for c in cent]) for x in X])
self.D2 = D2
def _choose_next_center(self):
self.probs = self.D2/self.D2.sum()
self.cumprobs = self.probs.cumsum()
r = random.random()
ind = np.where(self.cumprobs >= r)[0][0]
return(self.X[ind])
def init_centers(self):
self.mu = random.sample(self.X, 1)
while len(self.mu) < self.K:
self._dist_from_centers()
self.mu.append(self._choose_next_center())
def plot_init_centers(self):
X = self.X
fig = plt.figure(figsize=(5,5))
plt.xlim(-1,1)
plt.ylim(-1,1)
plt.plot(zip(*X)[0], zip(*X)[1], '.', alpha=0.5)
plt.plot(zip(*self.mu)[0], zip(*self.mu)[1], 'ro')
plt.savefig('kpp_init_N%s_K%s.png' % (str(self.N),str(self.K)), \
bbox_inches='tight', dpi=200)
</code></pre>
<p>Is there a way to speed up k-means++?</p> | 2018-05-03 10:42:02.130000+00:00 | 2018-05-16 09:25:45.540000+00:00 | null | python|machine-learning|bigdata|cluster-analysis|k-means | ['http://datasciencelab.wordpress.com/2014/01/15/improved-seeding-for-clustering-with-k-means/', 'https://arxiv.org/abs/1701.04600', 'https://github.com/siddheshk/Faster-Kmeans'] | 3 |
72,107,179 | <p>A CNN is rotation-invariant provided that every convolution kernel satisfies K = T{K} (e.g., symmetrical kernels) and the first flatten layer is replaced with a merge convolution layer. I called it the transformation-identical CNN (TI-CNN): <a href="https://arxiv.org/abs/1806.03636" rel="nofollow noreferrer">https://arxiv.org/abs/1806.03636</a> and <a href="https://arxiv.org/abs/1807.11156" rel="nofollow noreferrer">https://arxiv.org/abs/1807.11156</a></p>
<p>If you wish to establish a rotation-identical CNN (virtually arbitrary small angle), I would introduce the geared rotation-identical CNN (GRI-CNN) <a href="https://arxiv.org/abs/1808.01280" rel="nofollow noreferrer">https://arxiv.org/abs/1808.01280</a></p> | 2022-05-04 01:54:21.327000+00:00 | 2022-05-04 01:54:21.327000+00:00 | null | null | 40,952,163 | <p>As known nVidia DetectNet - CNN (convolutional neural network) for object detection is based on approach from Yolo/DenseBox: <a href="https://devblogs.nvidia.com/parallelforall/deep-learning-object-detection-digits/" rel="noreferrer">https://devblogs.nvidia.com/parallelforall/deep-learning-object-detection-digits/</a></p>
<blockquote>
<p>DetectNet is an extension of the popular GoogLeNet network. The
extensions are similar to approaches taken in the <strong>Yolo and DenseBox</strong>
papers.</p>
</blockquote>
<p>And as shown here, DetectNet can detect objects (cars) with any rotation: <a href="https://devblogs.nvidia.com/parallelforall/detectnet-deep-neural-network-object-detection-digits/" rel="noreferrer">https://devblogs.nvidia.com/parallelforall/detectnet-deep-neural-network-object-detection-digits/</a></p>
<p><a href="https://i.stack.imgur.com/lsz4u.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/lsz4u.jpg" alt="enter image description here"></a></p>
<p>Are modern CNNs (convolutional neural networks) such as DetectNet rotation invariant?</p>
<p>Can I train DetectNet on thousands of different images that all have the same rotation angle of the object, and still detect objects at any rotation angle?</p>
<p><a href="https://i.stack.imgur.com/ZHpEs.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/ZHpEs.jpg" alt="enter image description here"></a></p>
<p>And what about rotate invariant of: Yolo, Yolo v2, DenseBox on which based DetectNet?</p> | 2016-12-03 20:27:16.890000+00:00 | 2022-05-04 01:54:21.327000+00:00 | null | machine-learning|computer-vision|neural-network|deep-learning|conv-neural-network | ['https://arxiv.org/abs/1806.03636', 'https://arxiv.org/abs/1807.11156', 'https://arxiv.org/abs/1808.01280'] | 3 |
40,953,261 | <p>No</p>
<p>In classification problems, CNNs are not rotate invariant. You need to include in your training set images with every possible rotation.</p>
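<p>One common way to get those rotations without collecting new data is random rotation augmentation at training time. A Keras sketch (my own illustration, not tied to DetectNet/DIGITS; the directory path is a placeholder):</p>
<pre><code>from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=180,      # rotate up to +/- 180 degrees
                             horizontal_flip=True,
                             fill_mode='nearest')
train_iter = datagen.flow_from_directory('data/train',  # hypothetical dataset layout
                                         target_size=(224, 224),
                                         batch_size=32)
</code></pre>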
<p>You can train a CNN to classify images into predefined categories (if you want to detect several objects in an image, as in your example, you need to scan every place of the image with your classifier).</p>
<p>However, this is an object detection problem, not only a classification problem.</p>
<p>In object detection problems, you can use a sliding window approach, but it is extremely inefficient. Instead of a simple CNN, other architectures are the state of the art. For example:</p>
<ul>
<li>Faster RCNN: <a href="https://arxiv.org/pdf/1506.01497.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.01497.pdf</a></li>
<li>YOLO NET: <a href="https://pjreddie.com/darknet/yolo/" rel="nofollow noreferrer">https://pjreddie.com/darknet/yolo/</a></li>
<li>SSD: <a href="https://arxiv.org/pdf/1512.02325.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1512.02325.pdf</a></li>
</ul>
<p>These architectures can detect the object anywhere in the image, but you also must include in the training set samples with different rotations (and the training set must be labelled using bounding boxes, that it is very time consuming).</p> | 2016-12-03 22:37:04.060000+00:00 | 2020-03-04 12:55:10.750000+00:00 | 2020-03-04 12:55:10.750000+00:00 | null | 40,952,163 | <p>As known nVidia DetectNet - CNN (convolutional neural network) for object detection is based on approach from Yolo/DenseBox: <a href="https://devblogs.nvidia.com/parallelforall/deep-learning-object-detection-digits/" rel="noreferrer">https://devblogs.nvidia.com/parallelforall/deep-learning-object-detection-digits/</a></p>
<blockquote>
<p>DetectNet is an extension of the popular GoogLeNet network. The
extensions are similar to approaches taken in the <strong>Yolo and DenseBox</strong>
papers.</p>
</blockquote>
<p>And as shown here, DetectNet can detect objects (cars) with any rotation: <a href="https://devblogs.nvidia.com/parallelforall/detectnet-deep-neural-network-object-detection-digits/" rel="noreferrer">https://devblogs.nvidia.com/parallelforall/detectnet-deep-neural-network-object-detection-digits/</a></p>
<p><a href="https://i.stack.imgur.com/lsz4u.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/lsz4u.jpg" alt="enter image description here"></a></p>
<p>Are modern CNNs (convolutional neural networks) such as DetectNet rotation invariant?</p>
<p>Can I train DetectNet on thousands of different images that all have the same rotation angle of the object, and still detect objects at any rotation angle?</p>
<p><a href="https://i.stack.imgur.com/ZHpEs.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/ZHpEs.jpg" alt="enter image description here"></a></p>
<p>And what about rotate invariant of: Yolo, Yolo v2, DenseBox on which based DetectNet?</p> | 2016-12-03 20:27:16.890000+00:00 | 2022-05-04 01:54:21.327000+00:00 | null | machine-learning|computer-vision|neural-network|deep-learning|conv-neural-network | ['https://arxiv.org/pdf/1506.01497.pdf', 'https://pjreddie.com/darknet/yolo/', 'https://arxiv.org/pdf/1512.02325.pdf'] | 3 |
16,574,276 | <p>Before teaching a neural network via <code>neuralnet</code>, it is strongly advised to scale your data:</p>
<pre class="lang-R prettyprint-override"><code>learn <- scale(learn)
# be honest and use the mean and scaling inferred from the training set -
# the test set could in principle contain only one element causing an incorrect scaling
test <- scale(test, center = attributes(learn)$`scaled:center`, scale = attributes(learn)$`scaled:scale`)
model <- neuralnet(formula, learn, ...)
compute(model, test)$net.result
</code></pre>
<p>Neural networks are sensitive to shifting and scaling of the data. Additionally, the initial weights are chosen randomly from a distribution akin to a standard normal one.</p>
<p>See, for example, chapter 3.2, "Preprocessing" (and much more) in an excellent paper by Yoshua Bengio [1].</p>
<p><strong>Modern update:</strong> Modern networks usually approach this sensitivity by using normalization layers, possibly with trained parameters. The most well-known and popular is Batch Normalization [2].</p>
<p>[1] <a href="http://arxiv.org/abs/1206.5533" rel="nofollow noreferrer">http://arxiv.org/abs/1206.5533</a></p>
<p>[2] <a href="https://en.wikipedia.org/wiki/Batch_normalization" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Batch_normalization</a></p> | 2013-05-15 20:21:33.807000+00:00 | 2020-07-20 16:37:48.387000+00:00 | 2020-07-20 16:37:48.387000+00:00 | null | 9,062,522 | <p>I am using neuralnet package and using neuralnet function to train my data and compute to predict. </p>
<pre><code>x <- neuralnet( X15 ~ X1 + X2 + X3 + X8, norm_ind[1:15000,],2,act.fct="tanh",linear.output=TRUE)
pr <- compute(x,testdata)
</code></pre>
<p>The problem I am facing is <code>pr$net.result</code> value is almost constant for all data points. </p>
<p>I am predicting stock returns and providing the stock's real return one day ahead as the target function, i.e. <code>X15</code> in the formula. The output I am getting is almost constant, as you can see below.
Could anyone tell me what needs to be done? </p>
<pre><code>1084 0.00002217204168
1085 0.00002217204168
1086 0.00002217204168
1087 0.00002217204168
1088 0.00002217204168
1089 0.00002217204168
1090 0.00002217204168
1091 0.00002217204168
1092 0.00002217204168
1093 0.00002217204168
1094 0.00002217204168
1095 0.00002217204168
1096 0.00002217204168
1097 0.00002217204168
1098 0.00002217204168
1099 0.00002217204168
1100 0.00002217204168
</code></pre> | 2012-01-30 10:36:11.480000+00:00 | 2020-07-20 16:37:48.387000+00:00 | 2012-01-30 10:39:59.360000+00:00 | r|neural-network | ['http://arxiv.org/abs/1206.5533', 'https://en.wikipedia.org/wiki/Batch_normalization'] | 2 |
72,057,836 | <p>Reverse engineering the HDL from the bitstream (what the binary file that is used to configure the FPGA is called) is an extremely difficult task, possibly even an impossible one.</p>
<p>You could possibly extract a netlist (the circuit schematic at the cell level), which would essentially give you how all the logic elements are connected on the FPGA. From there you would need to rebuild the actual hardware, which is extremely difficult (see <a href="https://arxiv.org/pdf/1910.01519.pdf" rel="nofollow noreferrer">this paper</a>, which goes into the required steps and their complexity for hardware reverse engineering).</p>
<p>In your case, if your current bitstream was generated inside the company, there should be at least some documentation if the source is no longer available.</p>
<p>If it is a design that was purchased, I would try to get the original design and work from there.</p>
<p>Otherwise, you are out of luck and will need to re-create a new design.</p> | 2022-04-29 12:16:26.327000+00:00 | 2022-04-29 12:16:26.327000+00:00 | null | null | 72,057,695 | <p>I am new to the world of FPGAs besides a course in undergrad using an Altera training board, but I've recently been assigned to do some work with a Xilinx Nexys Video A7 FPGA. Since this project involves updating firmware, my first task is to download the existing program that is currently on the FPGA. However, I can't find anything online that describes how to do so.</p>
<p>Can one even extract the behavior from an FPGA into a VHDL program, or does it only go one way? I find it hard to conceptualize turning hardware back into HDL unless the HDL itself gets stored somewhere on the board upon upload.</p>
<p>Again, I'm quite new to the FPGA world, so sorry if this is a dumb question. Thank you for your help.</p> | 2022-04-29 12:03:43.933000+00:00 | 2022-04-29 12:16:26.327000+00:00 | null | fpga|xilinx | ['https://arxiv.org/pdf/1910.01519.pdf'] | 1 |
62,529,481 | <p>You can use <a href="https://arxiv.org/abs/1205.2618" rel="nofollow noreferrer">Bayesian Personalized Ranking for implicit feedback</a>. I wrote about my experience of building such <a href="https://medium.com/heyjobs-tech/building-recommendation-system-based-bayesian-personalized-ranking-using-tensorflow-2-1-b814d2704130" rel="nofollow noreferrer">recommendation systems using Tensorflow</a>.</p>
<p>Regarding timeliness, you should use only active items to find recommendations.</p>
<p>An example workflow can look like:</p>
<ul>
<li>you need to recommend 5 items;</li>
<li>you ask the system to give you 30 recommendations using only active items;</li>
<li>then exclude items that will expire in the next 2 days;</li>
<li>then randomly select 5 from those who stayed;</li>
</ul> | 2020-06-23 07:27:04.580000+00:00 | 2020-06-23 07:27:04.580000+00:00 | null | null | 62,527,218 | <p>I have a dataset that has, <strong>users</strong>, <strong>items</strong> and <strong>views</strong>,which is the interaction between user and item.</p>
<p>The only difference in this dataset from the other recommendation datasets is that, the items have strong timeliness i.e. the items expires after a certain time period and won't be considered anymore.(Items life span can range from 1 week - 4 months)</p> | 2020-06-23 04:13:56.800000+00:00 | 2020-06-25 05:13:27.537000+00:00 | 2020-06-25 05:13:27.537000+00:00 | machine-learning|recommendation-engine|collaborative-filtering | ['https://arxiv.org/abs/1205.2618', 'https://medium.com/heyjobs-tech/building-recommendation-system-based-bayesian-personalized-ranking-using-tensorflow-2-1-b814d2704130'] | 2 |
68,079,407 | <pre class="lang-py prettyprint-override"><code>def backward_batchnorm2d(input, output, grad_output, layer):
gamma = layer.weight
gamma = gamma.view(1,-1,1,1) # edit
# beta = layer.bias
# avg = layer.running_mean
# var = layer.running_var
eps = layer.eps
B = input.shape[0] * input.shape[2] * input.shape[3] # edit
# add new
mean = input.mean(dim = (0,2,3), keepdim = True)
variance = input.var(dim = (0,2,3), unbiased=False, keepdim = True)
x_hat = (input - mean)/(torch.sqrt(variance + eps))
dL_dxi_hat = grad_output * gamma
# dL_dvar = (-0.5 * dL_dxi_hat * (input - avg) / ((var + eps) ** 1.5)).sum((0, 2, 3), keepdim=True)
# dL_davg = (-1.0 / torch.sqrt(var + eps) * dL_dxi_hat).sum((0, 2, 3), keepdim=True) + dL_dvar * (-2.0 * (input - avg)).sum((0, 2, 3), keepdim=True) / B
dL_dvar = (-0.5 * dL_dxi_hat * (input - mean)).sum((0, 2, 3), keepdim=True) * ((variance + eps) ** -1.5) # edit
dL_davg = (-1.0 / torch.sqrt(variance + eps) * dL_dxi_hat).sum((0, 2, 3), keepdim=True) + (dL_dvar * (-2.0 * (input - mean)).sum((0, 2, 3), keepdim=True) / B) #edit
dL_dxi = (dL_dxi_hat / torch.sqrt(variance + eps)) + (2.0 * dL_dvar * (input - mean) / B) + (dL_davg / B) # dL_dxi_hat / sqrt()
# dL_dgamma = (grad_output * output).sum((0, 2, 3), keepdim=True)
dL_dgamma = (grad_output * x_hat).sum((0, 2, 3), keepdim=True) # edit
dL_dbeta = (grad_output).sum((0, 2, 3), keepdim=True)
return dL_dxi, dL_dgamma, dL_dbeta
</code></pre>
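<p>As a quick sanity check (my own addition, not part of the original fix), you can compare the manual gradients against autograd on a small <code>BatchNorm2d</code> layer in training mode; all three comparisons should print <code>True</code> if the formulas match:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch import nn

torch.manual_seed(0)
layer = nn.BatchNorm2d(3)
x = torch.randn(2, 3, 4, 4, requires_grad=True)
y = layer(x)
grad_out = torch.randn_like(y)

dx_ref, dgamma_ref, dbeta_ref = torch.autograd.grad(y, (x, layer.weight, layer.bias), grad_out)
dx, dgamma, dbeta = backward_batchnorm2d(x, y, grad_out, layer)

print(torch.allclose(dx_ref, dx, atol=1e-5))
print(torch.allclose(dgamma_ref, dgamma.view(-1), atol=1e-5))
print(torch.allclose(dbeta_ref, dbeta.view(-1), atol=1e-5))
</code></pre>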
<ol>
<li>Because you didn't upload your forward snippet: if your gamma has a 1-dimensional shape, you need to reshape it to <code>[1,gamma.shape[0],1,1]</code>.</li>
<li>The formula follows the 1D case, where the scale factor is the batch size. However, in 2D the summation should be over 3 dimensions, so <code>B = input.shape[0] * input.shape[2] * input.shape[3]</code>.</li>
<li>The <code>running_mean</code> and <code>running_var</code> are only used in test/inference mode; we don't use them in training (you can find this in <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">the paper</a>). The mean and variance you need are computed from the input; you can store the mean, variance and <code>x_hat = (x-mean)/sqrt(variance + eps)</code> in your <code>layer</code> object or re-compute them as I did in the code above under <code># add new</code>. Then use them in the formulas for <code>dL_dvar, dL_davg, dL_dxi</code>.</li>
<li>Your <code>dL_dgamma</code> is incorrect since you multiplied the gradient by the layer <code>output</code>; it should be <code>grad_output * x_hat</code>.</li>
</ol> | 2021-06-22 07:40:18.877000+00:00 | 2021-07-15 04:28:15.227000+00:00 | 2021-07-15 04:28:15.227000+00:00 | null | 67,968,913 | <p>In my network, I want to calculate the forward pass and backward pass of my network both in the forward pass.
For this, I have to manually define all the backward pass methods of the forward pass layers.<br />
For the activation functions, that's easy, and for the linear and conv layers it worked well too. But I'm really struggling with BatchNorm, since the BatchNorm paper only discusses the 1D case.
So far, my implementation looks like this:</p>
<pre class="lang-py prettyprint-override"><code>def backward_batchnorm2d(input, output, grad_output, layer):
gamma = layer.weight
beta = layer.bias
avg = layer.running_mean
var = layer.running_var
eps = layer.eps
B = input.shape[0]
# avg, var, gamma and beta are of shape [channel_size]
# while input, output, grad_output are of shape [batch_size, channel_size, w, h]
# for my calculations I have to reshape avg, var, gamma and beta to [batch_size, channel_size, w, h] by repeating the channel values over the whole image and batches
dL_dxi_hat = grad_output * gamma
dL_dvar = (-0.5 * dL_dxi_hat * (input - avg) / ((var + eps) ** 1.5)).sum((0, 2, 3), keepdim=True)
dL_davg = (-1.0 / torch.sqrt(var + eps) * dL_dxi_hat).sum((0, 2, 3), keepdim=True) + dL_dvar * (-2.0 * (input - avg)).sum((0, 2, 3), keepdim=True) / B
dL_dxi = dL_dxi_hat / torch.sqrt(var + eps) + 2.0 * dL_dvar * (input - avg) / B + dL_davg / B # dL_dxi_hat / sqrt()
dL_dgamma = (grad_output * output).sum((0, 2, 3), keepdim=True)
dL_dbeta = (grad_output).sum((0, 2, 3), keepdim=True)
return dL_dxi, dL_dgamma, dL_dbeta
</code></pre>
<p>When I check my gradients with torch.autograd.grad() I notice that <code>dL_dgamma</code> and <code>dL_dbeta</code> are correct, but <code>dL_dxi</code> is incorrect, (by a lot). But I can't find my mistake. Where is my mistake?</p>
<p>For reference, here is the definition of BatchNorm:</p>
<p><a href="https://i.stack.imgur.com/p2Mm2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p2Mm2.png" alt="enter image description here" /></a></p>
<p>And here are the formulas for the derivatives for the 1D case:<a href="https://i.stack.imgur.com/ut4N3m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ut4N3m.png" alt="enter image description here" /></a></p> | 2021-06-14 10:49:18.517000+00:00 | 2021-07-15 04:28:15.227000+00:00 | 2021-06-22 09:55:42.893000+00:00 | pytorch|derivative|autograd | ['https://arxiv.org/abs/1502.03167'] | 1 |
10,576,266 | <p>This game is another version of <a href="http://en.wikipedia.org/wiki/SameGame" rel="nofollow">Same Game</a>. The question of whether an optimal solution exists was shown in <a href="http://arxiv.org/abs/cs.CC/0107031" rel="nofollow">this paper</a> to be NP complete. What this means is that in general,
an optimal solution will take exponential time to find. On the other hand, if you turn the problem into an instance of the boolean satisfiability problem, you may be able to use a <a href="http://en.wikipedia.org/wiki/SAT_solver#Algorithms_for_solving_SAT" rel="nofollow">SAT solver</a> to solve the problem more quickly than an ad-hoc approach.</p> | 2012-05-14 00:18:41.807000+00:00 | 2012-05-14 00:18:41.807000+00:00 | null | null | 10,575,886 | <p>Can anyone suggest me a strategy for solving this game <a href="http://puzzle-games.pogo.com/games/poppit" rel="nofollow">http://puzzle-games.pogo.com/games/poppit</a> in least possible steps. </p>
<p>My idea is to find the group of balloons (same-colored neighbours) which after being removed leaves us with the fewest number of groups. </p>
<p>My implementation however, is not good enough. The only thing I can think of is to collect all groups of balloons and check, for each group, how many groups would be left if I removed it. This of course is quite a heavy operation, since it involves rearranging the balloons after I remove a group and then restoring the original order. </p>
<p>If someone comes up with a better way of implementing my algorithm or a completely other approach to the problem, I would be really thankful!</p> | 2012-05-13 22:59:34.490000+00:00 | 2018-09-13 18:32:30.610000+00:00 | 2018-09-13 18:32:30.610000+00:00 | algorithm | ['http://en.wikipedia.org/wiki/SameGame', 'http://arxiv.org/abs/cs.CC/0107031', 'http://en.wikipedia.org/wiki/SAT_solver#Algorithms_for_solving_SAT'] | 3 |
49,662,463 | <p>I've had the same problem and found a solution. The code can be found here (<a href="https://github.com/lFatality/tensorflow2caffe" rel="noreferrer">https://github.com/lFatality/tensorflow2caffe</a>) and I've also documented the code in some Youtube videos.</p>
<hr>
<p><a href="https://www.youtube.com/watch?v=9iJheyF7x4Y" rel="noreferrer">Part 1</a> covers the creation of the architecture of <a href="https://arxiv.org/pdf/1409.1556.pdf" rel="noreferrer">VGG-19</a> in Caffe and <a href="http://tflearn.org/" rel="noreferrer">tflearn</a> (higher level API for TensorFlow, with some changes to the code native TensorFlow should also work).</p>
<hr>
<p>In <a href="https://www.youtube.com/watch?v=LNsEfZV_24c" rel="noreferrer">Part 2</a> the export of the weights and biases out of the TensorFlow model into a numpy file is described. In tflearn you can get the weights of a layer like this: </p>
<pre><code>#get parameters of a certain layer
conv2d_vars = tflearn.variables.get_layer_variables_by_name(layer_name)
#get weights out of the parameters
weights = model.get_weights(conv2d_vars[0])
#get biases out of the parameters
biases = model.get_weights(conv2d_vars[1])
</code></pre>
<p>For a convolutional layer, the layer_name is <code>Conv_2D</code>. Fully-Connected layers are called <code>FullyConnected</code>. If you use more than one layer of a certain type, a raising integer with a preceding underscore is used (e.g. the 2nd conv layer is called <code>Conv_2D_1</code>). I've found these names in the graph of the TensorBoard. If you name the layers in your architecture definition, then these layer_names might change to the names you defined. </p>
<p>In native TensorFlow the export will need different code but the format of the parameters should be the same so subsequent steps should still be applicable. </p>
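<p>For reference, in native TensorFlow (1.x) one way to dump every trainable parameter into a numpy file looks roughly like this (a hedged sketch; the checkpoint file names are only placeholders for whatever your training script saved):</p>
<pre><code>import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    # restore the graph and the trained values from a checkpoint
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    saver.restore(sess, 'model.ckpt')
    # map variable names (e.g. 'conv1/weights:0') to their numpy values
    params = {v.name: sess.run(v) for v in tf.trainable_variables()}
    np.save('tf_params.npy', params)
</code></pre>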
<hr>
<p><a href="https://www.youtube.com/watch?v=kvXHOIn3-8s" rel="noreferrer">Part 3</a> covers the actual conversion. What's critical is the conversion of the weights when you create the caffemodel (the biases can be carried over without change). TensorFlow and Caffe use different formats when saving a filter. While TensorFlow uses <code>[height, width, depth, number of filters]</code> (<a href="https://www.tensorflow.org/extend/tool_developers/" rel="noreferrer">TensorFlow docs, at the bottom</a>), Caffe uses <code>[number of filters, depth, height, width]</code> (<a href="http://caffe.berkeleyvision.org/tutorial/net_layer_blob.html" rel="noreferrer">Caffe docs, chapter 'Blob storage and communication'</a>). To convert between the formats you can use the <code>transpose</code> function (for example: <code>weights_of_first_conv_layer.transpose((3,2,0,1))</code>. The 3,2,0,1 sequence can be obtained by enumerating the TensorFlow format (origin) and then switching it to the Caffe format (target format) while keeping the numbers at their specific variable.).<br>
If you want to connect a tensor output to a fully-connected layer, things get a little tricky. If you use VGG-19 with an input size of 112x112 it looks like this.</p>
<pre><code>fc1_weights = data_file[16][0].reshape((4,4,512,4096))
fc1_weights = fc1_w.transpose((3,2,0,1))
fc1_weights = fc1_w.reshape((4096,8192))
</code></pre>
<p>What you get from TensorFlow if you export the parameters at the connection between tensor and fully-connected layer is an array with the shape <code>[entries in the tensor, units in the fc-layer]</code> (here: <code>[8192, 4096]</code>). You have to find out what the shape of your output tensor is and then reshape the array so that it fits the TensorFlow format (see above, <code>number of filters</code> being the <code>number of units in the fc-layer</code>). After that you use the transpose-conversion you've used previously and then reshape the array again, but the other way around. While TensorFlow saves fc-layer weights as <code>[number of inputs, number of outputs]</code>, Caffe does it the other way around.<br>
If you connect two fc-layers to each other, you don't have to do the complex process previously described but you will have to account for the different fc-layer format by transposing again (<code>fc_layer_weights.transpose((1,0))</code>)</p>
<p>You can then set the parameters of the network using</p>
<pre><code>net.params['layer_name_in_prototxt'][0].data[...] = weights
net.params['layer_name_in_prototxt'][1].data[...] = biases
</code></pre>
<p>This was a quick overview. If you want all the code, it's in my github repository. I hope it helps. :)</p>
<hr>
<p>Cheers,<br>
Fatality</p> | 2018-04-05 00:39:38.983000+00:00 | 2018-04-05 00:39:38.983000+00:00 | null | null | 41,138,481 | <p>I would like to be able to convert a Tensorflow model to Caffe model.</p>
<p>I searched on Google, but I was only able to find converters from Caffe to TensorFlow, not the opposite.</p>
<p>Does anyone have an idea on how to do it?</p>
<p>Thanks,
Evi</p> | 2016-12-14 09:01:17.253000+00:00 | 2020-12-18 08:32:35.140000+00:00 | null | tensorflow|caffe | ['https://github.com/lFatality/tensorflow2caffe', 'https://www.youtube.com/watch?v=9iJheyF7x4Y', 'https://arxiv.org/pdf/1409.1556.pdf', 'http://tflearn.org/', 'https://www.youtube.com/watch?v=LNsEfZV_24c', 'https://www.youtube.com/watch?v=kvXHOIn3-8s', 'https://www.tensorflow.org/extend/tool_developers/', 'http://caffe.berkeleyvision.org/tutorial/net_layer_blob.html'] | 8 |
50,333,549 | <p>If you replaced the Q-table of an untrained SAC model with a trained DDPG's Q-table then you'd be using a converged policy produced from the DDPG method. Likewise, replacing an untrained DDPG model's Q-table with the Q-table from a trained SAC model will give it the converged policy from following the SAC method.</p>
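<p>If the goal is instead to reuse the collected experience itself (as the question asks), note that both DDPG and SAC are off-policy, so in principle they can sample from the same pool of transitions. A minimal sketch of such a shared buffer (illustrative only, not tied to any particular library):</p>
<pre><code>import random
from collections import deque

class SharedReplayBuffer:
    """(state, action, reward, next_state, done) transitions usable by either learner."""
    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        batch = random.sample(self.buffer, min(batch_size, len(self.buffer)))
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones
</code></pre>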
<p>If you have not already, you should check out <a href="https://arxiv.org/pdf/1801.01290.pdf" rel="nofollow noreferrer">this paper</a> which discusses and experiments with the differences between DDPG and SAC.</p> | 2018-05-14 15:04:33.747000+00:00 | 2018-05-14 15:16:00.260000+00:00 | 2018-05-14 15:16:00.260000+00:00 | null | 50,328,080 | <p>I'm working on NIPS 2017 Learning to Run project. I have limited time and I need to try 2 models(DDPG and Soft Actor Critic). The simulation is slow and it takes too much time.
I wonder:
after I have trained one of them, is it possible to use its state-action-reward data for training the other one?</p> | 2018-05-14 10:24:58.847000+00:00 | 2018-05-14 15:16:00.260000+00:00 | 2018-05-14 12:18:14.420000+00:00 | deep-learning|reinforcement-learning | ['https://arxiv.org/pdf/1801.01290.pdf'] | 1
62,082,638 | <p>You have a <a href="https://en.cppreference.com/w/cpp/language/memory_model#Threads_and_data_races" rel="nofollow noreferrer">data race</a> on shutdown.</p>
<blockquote>
<p>When an evaluation of an expression writes to a memory location and another evaluation reads or modifies the same memory location, the expressions are said to conflict. A program that has two conflicting evaluations has a data race [...]</p>
</blockquote>
<p>In <code>shut()</code> you set the <code>shutdown</code> flag using a mutex, but the check is performed <em>without</em> the mutex (and the <code>State</code> destructor doesn't use a mutex either). Thus you have conflicting operations (read + write) on a non-atomic variable, without the proper happens-before relation. This is a data race, which results in undefined behavior.</p>
<p>The simple solution would be to make <code>shutdown</code> an <code>std::atomic<bool></code>, then you wouldn't even need the mutex to set the flag.</p>
<p>For more details about data races and the C++ memory model I can recommend this paper which I have co-authored: <a href="https://arxiv.org/abs/1803.04432" rel="nofollow noreferrer">Memory Models for C/C++ Programmers</a></p> | 2020-05-29 09:23:14.523000+00:00 | 2020-05-29 09:23:14.523000+00:00 | null | null | 62,077,786 | <p>I have 2 threads monitoring the same global <code>state</code>, if the <code>state.shutdown</code> becomes <code>false</code>, the thread <code>run()</code> should return. The code is below.</p>
<pre><code>#include <iostream>
#include <chrono>
#include <thread>
#include <mutex>
using namespace std;
struct State {
bool shutdown = false;
~State() {
shutdown = true;
}
};
State state;
#define CHECK_SHUTDOWN \
{ \
std::cout << (state.shutdown ? " SHUTDOWN " : " NOSHUT ") << typeid(*this).name() << std::endl; \
if (state.shutdown) { \
return; \
} \
}
class Mythread {
public:
void join();
void run();
void launch();
std::thread self_thread;
};
void Mythread::run() {
while(1) {
CHECK_SHUTDOWN
}
}
void Mythread::join() {
if (self_thread.joinable()) {
self_thread.join();
}
}
void Mythread::launch() {
self_thread = std::thread(&Mythread::run, this);
}
std::mutex mtx;
void shut() {
std::lock_guard<std::mutex> lock(mtx);
state.shutdown = true;
}
int main()
{
Mythread thread1;
Mythread thread2;
thread1.launch();
thread2.launch();
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
//state.shutdown = true;
shut(); //This makes no difference with the line above
std::this_thread::sleep_for(std::chrono::milliseconds(100));
thread1.join();
thread2.join();
return 0;
}
</code></pre>
<p>However, even when I manually set <code>state.shutdown</code> to true, the threads never detect it. I got prints like:</p>
<pre><code> NOSHUT 8Mythread
NOSHUT 8Mythread
NOSHUT 8Mythread
...Program finished with exit code 0
Press ENTER to exit console.
</code></pre>
<p>at the end. I'm also confused because, given that the <code>run()</code> function never returns, the thread joins should hang. However, the threads join successfully. </p>
<p>Any help would be very appreciated here!</p> | 2020-05-29 02:38:08.483000+00:00 | 2020-05-29 09:23:14.523000+00:00 | null | c++|multithreading|global-variables|mutex | ['https://en.cppreference.com/w/cpp/language/memory_model#Threads_and_data_races', 'https://arxiv.org/abs/1803.04432'] | 2 |
35,760,826 | <p>The purpose of <code>cudaGetDeviceProperties()</code> is, like the equivalent <code>cpuid</code> facility on x86 CPUs, to return relevant microarchitectural parameters. As on CPUs, performance characteristics on GPUs can differ even if the microarchitectural parameters are identical, for example due to different clock frequencies, or due to differing specifications of the attached DRAM, and the way these interact with various buffering and caching mechanisms inside the processor. In general, there is no single "memory latency" number one can assign, nor am I aware of a way to compute possible ranges from known microarchitectural parameters.</p>
<p>On both CPUs and GPU, one therefore has to utilize sophisticated microbenchmarks to determine performance parameters such as DRAM latency. How to construct such microbenchmarks for each desired parameter would be too broad to cover here. Multiple papers have been published that discuss this in detail with regard to NVDIA GPUs. One of the earliest relevant publications is (<a href="http://www.stuffedcow.net/files/gpuarch-ispass2010.pdf" rel="nofollow">online draft</a>):</p>
<p><em>Wong, Henry, et al. "Demystifying GPU microarchitecture through microbenchmarking." In Proceedings: 2010 IEEE International Symposium on Performance Analysis of Systems & Software (ISPASS), pp. 235-246</em></p>
<p>A recent work that includes coverage of the Kepler architecture is (<a href="http://arxiv.org/pdf/1509.02308.pdf" rel="nofollow">online draft</a>):</p>
<p><em>Xinxin Mei, Xiaowen Chu. "Dissecting GPU Memory Hierarchy through Microbenchmarking." Arxiv manuscript, September 2015, pp. 1-14</em></p>
<p>Short of constructing one's own microbenchmarks, one has to rely on published results such as the ones cited above for various implementation-specific performance parameters of specific GPUs. </p>
<p>In many years of optimizing for GPU platforms, I have not had a need for knowledge of this kind of data; in general, the performance metrics of the CUDA profiler(s) should be sufficient to track down specific bottlenecks.</p> | 2016-03-03 00:18:37.550000+00:00 | 2016-03-03 05:50:42.497000+00:00 | 2016-03-03 05:50:42.497000+00:00 | null | 35,757,357 | <p>The <code>cudaGetDeviceProperties()</code> API call does not seem to tell us much about the global memory's latency (not even a typical value, or a min/max pair etc).</p>
<p><strong>Edit:</strong> When I say latency, I actually mean the different latencies for the various cases of having to read data from main device memory. So, if we take <a href="http://arxiv.org/pdf/1509.02308.pdf" rel="nofollow">this paper</a>, it's actually 6 figures: { TLB L1 hit, TLB L2 hit, TLB miss } x L1 data cache turned { on, off }.</p>
<p><strong>Q1: Is there a way to obtain these figures, other than to measure them myself?</strong><br>Even a rule-of-thumb calculation based on SM version, SM clock and mem clock might do.</p>
<p>I would ask the secondary question, being:<br>
<strong>Q2: If not, is there a utility which does this for you?</strong> <br>
<sub>(although that might be off-topic for the site.)</sub></p> | 2016-03-02 20:24:10.960000+00:00 | 2017-12-08 18:42:54.047000+00:00 | 2016-03-12 00:24:30.690000+00:00 | memory|cuda|gpgpu|latency | ['http://www.stuffedcow.net/files/gpuarch-ispass2010.pdf', 'http://arxiv.org/pdf/1509.02308.pdf'] | 2 |
65,978,235 | <p>By modifying your class to use the <code>BallTree</code> data structure (with build time <code>O(n.(log n)^2)</code>, refer to <a href="https://arxiv.org/ftp/arxiv/papers/1210/1210.6122.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/arxiv/papers/1210/1210.6122.pdf</a>) together with a custom <code>DistanceMetric</code> (callable functions in the metric parameter are NOT supported for <code>KDTree</code>, as noted here: <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.BallTree.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.BallTree.html</a>), you can use the following code, which also removes the explicit prediction loop:
<pre><code>import numpy as np
from sklearn.neighbors import BallTree
from sklearn.neighbors import DistanceMetric
from scipy.stats import mode
class GlobalWeightedKNN:
"""
A k-NN classifier with feature weights
Returns: predictions of k-NN.
"""
def __init__(self):
self.X_train = None
self.y_train = None
self.k = None
self.weights = None
self.tree = None
self.predictions = list()
def fit(self, X_train, y_train, k, weights):
self.X_train = X_train
self.y_train = y_train
self.k = k
self.weights = weights
self.tree = BallTree(X_train, \
metric=DistanceMetric.get_metric('wminkowski', p=2, w=weights))
def predict(self, testing_data):
"""
Takes a 2d array of query cases.
Returns a list of predictions for k-NN classifier
"""
indexes = self.tree.query(testing_data, self.k, return_distance=False)
y_answers = self.y_train[indexes]
self.predictions = np.apply_along_axis(lambda x: mode(x)[0], 1, y_answers)
return self.predictions
</code></pre>
<p>Training:</p>
<pre><code>from time import time
n, d = 10000, 2
begin = time()
cls = GlobalWeightedKNN()
X_train = np.random.rand(n,d)
y_train = np.random.choice(2,n, replace=True)
cls.fit(X_train, y_train, k=3, weights=np.random.rand(d))
end = time()
print('time taken to train {} instances = {} s'.format(n, end - begin))
# time taken to train 10000 instances = 0.01998615264892578 s
</code></pre>
<p>Testing / prediction:</p>
<pre><code>begin = time()
X_test = np.random.rand(n,d)
cls.predict(X_test)
end = time()
print('time taken to predict {} instances = {} s'.format(n, end - begin))
#time taken to predict 10000 instances = 3.732935905456543 s
</code></pre> | 2021-01-31 10:51:49.800000+00:00 | 2021-02-01 04:52:50.803000+00:00 | 2021-02-01 04:52:50.803000+00:00 | null | 51,688,568 | <p>I want to code my own kNN algorithm from scratch, the reason is that I need to weight the features. The problem is that my program is still really slow despite removing for loops and using built in numpy functionality.</p>
<p>Can anyone suggest a way to speed this up? I don't use <code>np.sqrt</code> for the L2 distance because it's unnecessary and actually slows it all up quite a bit.</p>
<pre><code>class GlobalWeightedKNN:
"""
A k-NN classifier with feature weights
Returns: predictions of k-NN.
"""
def __init__(self):
self.X_train = None
self.y_train = None
self.k = None
self.weights = None
self.predictions = list()
def fit(self, X_train, y_train, k, weights):
self.X_train = X_train
self.y_train = y_train
self.k = k
self.weights = weights
def predict(self, testing_data):
"""
Takes a 2d array of query cases.
Returns a list of predictions for k-NN classifier
"""
np.fromiter((self.__helper(qc) for qc in testing_data), float)
return self.predictions
def __helper(self, qc):
neighbours = np.fromiter((self.__weighted_euclidean(qc, x) for x in self.X_train), float)
neighbours = np.array([neighbours]).T
indexes = np.array([range(len(self.X_train))]).T
neighbours = np.append(indexes, neighbours, axis=1)
# Sort by second column - distances
neighbours = neighbours[neighbours[:,1].argsort()]
k_cases = neighbours[ :self.k]
indexes = [x[0] for x in k_cases]
y_answers = [self.y_train[int(x)] for x in indexes]
answer = max(set(y_answers), key=y_answers.count) # get most common value
self.predictions.append(answer)
def __weighted_euclidean(self, qc, other):
"""
Custom weighted euclidean distance
returns: floating point number
"""
return np.sum( ((qc - other)**2) * self.weights )
</code></pre> | 2018-08-04 18:39:24.153000+00:00 | 2021-02-01 04:52:50.803000+00:00 | 2021-01-31 10:57:14.100000+00:00 | python|machine-learning|scikit-learn|knn | ['https://arxiv.org/ftp/arxiv/papers/1210/1210.6122.pdf', 'https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.BallTree.html'] | 2 |
48,360,101 | <p>There is a single (sliced) L3 cache in a single-socket chip, and several L2 caches (one per real physical core).
The L3 cache stores data in 64-byte segments (cache lines), and there is a special <a href="https://en.wikipedia.org/wiki/Cache_coherence" rel="noreferrer">cache coherence protocol</a> between the L3 and the different L2/L1 caches (and between several chips in NUMA/ccNUMA multi-socket systems too); it tracks which cache line is current, which is shared between several caches, and which has just been modified (and should be invalidated in the other caches). Some of the protocols (possible cache-line states and state transitions): <a href="https://en.wikipedia.org/wiki/MESI_protocol" rel="noreferrer">https://en.wikipedia.org/wiki/MESI_protocol</a>, <a href="https://en.wikipedia.org/wiki/MESIF_protocol" rel="noreferrer">https://en.wikipedia.org/wiki/MESIF_protocol</a>, <a href="https://en.wikipedia.org/wiki/MOESI_protocol" rel="noreferrer">https://en.wikipedia.org/wiki/MOESI_protocol</a></p>
<p>In older chips (the Core 2 era) cache coherence was <a href="https://en.wikipedia.org/wiki/Bus_snooping" rel="noreferrer">snooped</a> on a shared bus; now it is checked with the help of a <a href="https://en.wikipedia.org/wiki/Directory-based_cache_coherence" rel="noreferrer">directory</a>.</p>
<p>In real life the L3 is not just a single block but is divided into several slices, each of them having high-speed access ports. There is some method of selecting the slice based on the physical address, which allows a multicore system to do many accesses at every moment (each access will be directed by an <a href="https://software.intel.com/en-us/forums/software-tuning-performance-optimization-platform-monitoring/topic/673649" rel="noreferrer">undocumented method</a> to some slice; when two cores use the same physical address, their accesses will be served by the same slice or by slices which will do the cache coherence protocol checks).
Information about the L3 cache slices was reverse-engineered in several papers: </p>
<ul>
<li><a href="https://cmaurice.fr/pdf/raid15_maurice.pdf" rel="noreferrer">https://cmaurice.fr/pdf/raid15_maurice.pdf</a> Reverse Engineering Intel Last-Level Cache Complex Addressing Using Performance Counters</li>
<li><a href="https://eprint.iacr.org/2015/690.pdf" rel="noreferrer">https://eprint.iacr.org/2015/690.pdf</a> Systematic Reverse Engineering of Cache Slice Selection in Intel Processors</li>
<li><a href="https://arxiv.org/pdf/1508.03767.pdf" rel="noreferrer">https://arxiv.org/pdf/1508.03767.pdf</a> Cracking Intel Sandy Bridge’s Cache Hash Function</li>
</ul>
<p>With recent chips programmer has ability to partition the L3 cache between applications "Cache Allocation Technology" (v4 Family): <a href="https://software.intel.com/en-us/articles/introduction-to-cache-allocation-technology" rel="noreferrer">https://software.intel.com/en-us/articles/introduction-to-cache-allocation-technology</a> <a href="https://software.intel.com/en-us/articles/introduction-to-code-and-data-prioritization-with-usage-models" rel="noreferrer">https://software.intel.com/en-us/articles/introduction-to-code-and-data-prioritization-with-usage-models</a> <a href="https://danluu.com/intel-cat/" rel="noreferrer">https://danluu.com/intel-cat/</a> <a href="https://lwn.net/Articles/659161/" rel="noreferrer">https://lwn.net/Articles/659161/</a></p> | 2018-01-20 19:14:53.067000+00:00 | 2018-01-20 19:14:53.067000+00:00 | null | null | 28,891,349 | <p>Given that CPUs are now multi-core and have their own L1/L2 caches, I was curious as to how the L3 cache is organized given that its shared by multiple cores. I would imagine that if we had, say, 4 cores, then the L3 cache would contain 4 pages worth of data, each page corresponding to the region of memory that a particular core is referencing. Assuming I'm somewhat correct, is that as far as it goes? It could, for example, divide each of these pages into sub-pages. This way when multiple threads run on the same core each thread may find their data in one of the sub-pages. I'm just coming up with this off the top of my head so I'm very interested in educating myself on what is really going on underneath the scenes. Can anyone share their insights or provide me with a link that will cure me of my ignorance? </p>
<p>Many thanks in advance.</p> | 2015-03-06 02:18:45.350000+00:00 | 2018-07-28 21:57:13.527000+00:00 | null | cpu|intel|cpu-cache | ['https://en.wikipedia.org/wiki/Cache_coherence', 'https://en.wikipedia.org/wiki/MESI_protocol', 'https://en.wikipedia.org/wiki/MESIF_protocol', 'https://en.wikipedia.org/wiki/MOESI_protocol', 'https://en.wikipedia.org/wiki/Bus_snooping', 'https://en.wikipedia.org/wiki/Directory-based_cache_coherence', 'https://software.intel.com/en-us/forums/software-tuning-performance-optimization-platform-monitoring/topic/673649', 'https://cmaurice.fr/pdf/raid15_maurice.pdf', 'https://eprint.iacr.org/2015/690.pdf', 'https://arxiv.org/pdf/1508.03767.pdf', 'https://software.intel.com/en-us/articles/introduction-to-cache-allocation-technology', 'https://software.intel.com/en-us/articles/introduction-to-code-and-data-prioritization-with-usage-models', 'https://danluu.com/intel-cat/', 'https://lwn.net/Articles/659161/'] | 14 |
45,311,974 | <p>You can implement <strong>supervised LDA</strong> with PyMC, using a Metropolis sampler to learn the latent variables in the following graphical model:
<a href="https://i.stack.imgur.com/G4zo1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/G4zo1.png" alt="sLDA graphical model"></a></p>
<p>The training corpus consists of 10 movie reviews (5 positive and 5 negative) along with the associated star rating for each document. The star rating is known as a response variable which is a quantity of interest associated with each document. The documents and response variables are modeled jointly in order to find latent topics that will best predict the response variables for future unlabeled documents. For more information, check out the <a href="https://arxiv.org/pdf/1003.0783.pdf" rel="noreferrer">original paper</a>.
Consider the following code:</p>
<pre><code>import pymc as pm
import numpy as np
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer
train_corpus = ["exploitative and largely devoid of the depth or sophistication ",
"simplistic silly and tedious",
"it's so laddish and juvenile only teenage boys could possibly find it funny",
"it shows that some studios firmly believe that people have lost the ability to think",
"our culture is headed down the toilet with the ferocity of a frozen burrito",
"offers that rare combination of entertainment and education",
"the film provides some great insight",
"this is a film well worth seeing",
"a masterpiece four years in the making",
"offers a breath of the fresh air of true sophistication"]
test_corpus = ["this is a really positive review, great film"]
train_response = np.array([3, 1, 3, 2, 1, 5, 4, 4, 5, 5]) - 3
#LDA parameters
num_features = 1000 #vocabulary size
num_topics = 4 #fixed for LDA
tfidf = TfidfVectorizer(max_features = num_features, max_df=0.95, min_df=0, stop_words = 'english')
#generate tf-idf term-document matrix
A_tfidf_sp = tfidf.fit_transform(train_corpus) #size D x V
print "number of docs: %d" %A_tfidf_sp.shape[0]
print "dictionary size: %d" %A_tfidf_sp.shape[1]
#tf-idf dictionary
tfidf_dict = tfidf.get_feature_names()
K = num_topics # number of topics
V = A_tfidf_sp.shape[1] # number of words
D = A_tfidf_sp.shape[0] # number of documents
data = A_tfidf_sp.toarray()
#Supervised LDA Graphical Model
Wd = [len(doc) for doc in data]
alpha = np.ones(K)
beta = np.ones(V)
theta = pm.Container([pm.CompletedDirichlet("theta_%s" % i, pm.Dirichlet("ptheta_%s" % i, theta=alpha)) for i in range(D)])
phi = pm.Container([pm.CompletedDirichlet("phi_%s" % k, pm.Dirichlet("pphi_%s" % k, theta=beta)) for k in range(K)])
z = pm.Container([pm.Categorical('z_%s' % d, p = theta[d], size=Wd[d], value=np.random.randint(K, size=Wd[d])) for d in range(D)])
@pm.deterministic
def zbar(z=z):
zbar_list = []
for i in range(len(z)):
hist, bin_edges = np.histogram(z[i], bins=K)
zbar_list.append(hist / float(np.sum(hist)))
return pm.Container(zbar_list)
eta = pm.Container([pm.Normal("eta_%s" % k, mu=0, tau=1.0/10**2) for k in range(K)])
y_tau = pm.Gamma("tau", alpha=0.1, beta=0.1)
@pm.deterministic
def y_mu(eta=eta, zbar=zbar):
y_mu_list = []
for i in range(len(zbar)):
y_mu_list.append(np.dot(eta, zbar[i]))
return pm.Container(y_mu_list)
#response likelihood
y = pm.Container([pm.Normal("y_%s" % d, mu=y_mu[d], tau=y_tau, value=train_response[d], observed=True) for d in range(D)])
# cannot use p=phi[z[d][i]] here since phi is an ordinary list while z[d][i] is stochastic
w = pm.Container([pm.Categorical("w_%i_%i" % (d,i), p = pm.Lambda('phi_z_%i_%i' % (d,i), lambda z=z[d][i], phi=phi: phi[z]),
value=data[d][i], observed=True) for d in range(D) for i in range(Wd[d])])
model = pm.Model([theta, phi, z, eta, y, w])
mcmc = pm.MCMC(model)
mcmc.sample(iter=1000, burn=100, thin=2)
#visualize topics
phi0_samples = np.squeeze(mcmc.trace('phi_0')[:])
phi1_samples = np.squeeze(mcmc.trace('phi_1')[:])
phi2_samples = np.squeeze(mcmc.trace('phi_2')[:])
phi3_samples = np.squeeze(mcmc.trace('phi_3')[:])
ax = plt.subplot(221)
plt.bar(np.arange(V), phi0_samples[-1,:])
ax = plt.subplot(222)
plt.bar(np.arange(V), phi1_samples[-1,:])
ax = plt.subplot(223)
plt.bar(np.arange(V), phi2_samples[-1,:])
ax = plt.subplot(224)
plt.bar(np.arange(V), phi3_samples[-1,:])
plt.show()
</code></pre>
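<p>After sampling, the learned regression weights and per-document topic proportions can be pulled out of the traces for inspection (a small sketch, assuming the variables defined above are still in scope):</p>
<pre><code># posterior mean of the regression coefficient for each topic
eta_mean = np.array([mcmc.trace('eta_%d' % k)[:].mean() for k in range(K)])
print(eta_mean)

# posterior mean topic proportions of the first training document
theta0_mean = mcmc.trace('theta_0')[:].mean(axis=0)
print(theta0_mean)
</code></pre>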
<p>Given the training data (observed words and response variables), we can learn the global topics (beta) and regression coefficients (eta) for predicting the response variable (Y) in addition to topic proportions for each document (theta).
In order to make predictions of Y given the learned beta and eta, we can define a new model where we do not observe Y and use the previously learned beta and eta to obtain the following result:</p>
<p><a href="https://i.stack.imgur.com/IGeSL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IGeSL.png" alt="sLDA prediction"></a></p>
<p>Here we predicted a positive review (approx 2 given review rating range of -2 to 2) for the test corpus consisting of one sentence: "this is a really positive review, great film" as shown by the mode of the posterior histogram on the right.
See <a href="https://github.com/vsmolyakov/experiments_with_python/blob/master/chp01/supervised_lda.ipynb" rel="noreferrer">ipython notebook</a> for a complete implementation.</p> | 2017-07-25 19:28:14.947000+00:00 | 2017-07-30 18:41:19.050000+00:00 | 2017-07-30 18:41:19.050000+00:00 | null | 13,555,021 | <p>I have a bunch of already human-classified documents in some groups. </p>
<p>Is there a modified version of lda which I can use to train a model and then later classify unknown documents with it?</p> | 2012-11-25 20:12:20.140000+00:00 | 2021-03-04 06:50:13.040000+00:00 | null | machine-learning|nlp|classification|document-classification|lda | ['https://i.stack.imgur.com/G4zo1.png', 'https://arxiv.org/pdf/1003.0783.pdf', 'https://i.stack.imgur.com/IGeSL.png', 'https://github.com/vsmolyakov/experiments_with_python/blob/master/chp01/supervised_lda.ipynb'] | 4 |
33,839,195 | <p>There's quite a bit wrong here so I'll just start from the beginning.</p>
<p>Ok, the first thing you do after opening an image is thresholding. I strongly recommend that you have another look at the OpenCV manual on <a href="http://docs.opencv.org/2.4/doc/tutorials/imgproc/threshold/threshold.html" rel="noreferrer">thresholding</a> and the exact meaning of the <a href="http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#threshold" rel="noreferrer">threshold methods</a>. </p>
<p>The manual mentions that</p>
<blockquote>
<p>cv2.threshold(src, thresh, maxval, type[, dst]) → retval, dst</p>
<p>the special value THRESH_OTSU may be combined with one of the above
values. In this case, the function determines the optimal threshold
value using the Otsu’s algorithm and uses it instead of the specified
thresh .</p>
</blockquote>
<p>I know it's a bit confusing because you don't actually <em>combine</em> THRESH_OTSU with any of the other methods (THRESH_BINARY etc...); unfortunately the manual can be like that. What this method actually does is assume that there's a "foreground" and a "background" that follow a bi-modal histogram, and then it applies THRESH_BINARY, I believe. </p>
<p>Imagine this as if you're taking an image of a cathedral or a tall building at midday. On a sunny day the sky will be very bright and blue, and the cathedral/building will be quite a bit darker. This means the group of pixels belonging to the sky will all have high brightness values, that is, they will be on the right side of the histogram, and the pixels belonging to the building will be darker, that is, towards the middle and left side of the histogram.</p>
<p>Otsu uses this to try and guess the right "cutoff" point, called thresh. For your image Otsu's alg. supposes that all that white on the side of the map is the background, and the map itself the foreground. Therefore your image after thresholding looks like this:</p>
<p><a href="https://i.stack.imgur.com/Z0IJx.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Z0IJx.jpg" alt="Image after OP's thresholding"></a></p>
<p>After this point it's not hard to guess what goes wrong. But let's go on, What you're trying to achieve is, I believe, something like this:</p>
<pre><code>flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
</code></pre>
<p><a href="https://i.stack.imgur.com/nHgVu.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/nHgVu.jpg" alt="Image with manually guessed threshold."></a></p>
<p>Then you go on and try to erode the image. I'm not sure why you're doing this: was your intention to "bold" the lines, or to remove noise? In any case you never assigned the result of the erosion to anything. NumPy arrays, which are the way images are represented, are mutable, but that's not the way this syntax works:</p>
<pre><code>cv2.erode(src, kernel, [optionalOptions] ) → dst
</code></pre>
<p>So you have to write:</p>
<pre><code>b = cv2.erode(b,element)
</code></pre>
<p>Ok, now for the element and how the erosion works. Erosion drags a kernel over an image. Kernel is a simple matrix with 1's and 0's in it. One of the elements of that matrix, usually centre one, is called an anchor. An anchor is the element that will be replaced at the end of the operation. When you created </p>
<pre><code>cv2.getStructuringElement(cv2.MORPH_CROSS, (1, 1))
</code></pre>
<p>what you created is actually a 1x1 matrix (1 column, 1 row). This makes erosion completely useless. </p>
<p>What erosion does, is firstly retrieves all the values of pixel brightness from the original image where the kernel element, overlapping the image segment, has a "1". Then it finds a minimal value of retrieved pixels and replaces the anchor with that value.</p>
<p>What this means, in your case, is that you drag <code>[1]</code> matrix over the image, compare if the source image pixel brightness is larger, equal or smaller than itself and then you replace it with itself. </p>
<p>If your intention was to remove "noise", then it's probably better to use a rectangular kernel over the image. Think of it this way, "noise" is that thing that "doesn't fit in" with the surroundings. So if you compare your centre pixel with it's surroundings and you find it doesn't fit, it's most likely noise. </p>
<p>Additionally, I've said it replaces the anchor with the minimal value retrieved by the kernel. Numerically, minimal value is 0, which is coincidentally how black is represented in the image. This means that in your case of a predominantly white image, erosion would "bloat up" the black pixels. Erosion would replace the 255 valued white pixels with 0 valued black pixels if they're in the reach of the kernel. In any case it shouldn't be of a shape (1,1), ever. </p>
<pre><code>>>> cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
array([[0, 1, 0],
[1, 1, 1],
[0, 1, 0]], dtype=uint8)
</code></pre>
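<p>A tiny numeric illustration of that min-filter behaviour (here a 3x3 white block shrinks to its single centre pixel under a 3x3 rectangular kernel):</p>
<pre><code>import numpy as np
import cv2

img = np.zeros((5, 5), np.uint8)
img[1:4, 1:4] = 255                    # a 3x3 white block
kernel = np.ones((3, 3), np.uint8)     # rectangular structuring element
print(cv2.erode(img, kernel))          # only img[2, 2] is still 255
</code></pre>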
<p>If we erode the second image with a 3x3 rectangular kernel we get the image below.</p>
<p><a href="https://i.stack.imgur.com/v45xw.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/v45xw.jpg" alt="Eroded threshed image."></a></p>
<p>Ok, now we got that out of the way, next thing you do is you find edges using Canny edge detection. The image you get from that is:</p>
<p><a href="https://i.stack.imgur.com/PbaBh.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/PbaBh.jpg" alt="Canny edges"></a></p>
<p>Ok, now we look for <strong>EXACTLY</strong> vertical and <strong>EXACTLY</strong> horizontal lines <strong>ONLY</strong>. Of course there are no such lines apart from the meridian on the left of the image (is that what it's called?) and the end image you get after you did it right would be this:</p>
<p><a href="https://i.stack.imgur.com/wI1Oy.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/wI1Oy.jpg" alt="enter image description here"></a></p>
<p>Now since you never described your exact idea, and my best guess is that you want the parallels and meridians, you'll have more luck on maps with lesser scale, because those aren't lines to begin with, they are curves. Additionally, is there a specific reason to use the Probabilistic Hough? Doesn't the "regular" Hough suffice?</p>
<p>Sorry for the too-long post, hope it helps a bit.</p>
<hr>
<p>Text here was added as a request for clarification from the OP Nov. 24th. because there's no way to fit the answer into a char limited comment.</p>
<p>I'd suggest OP asks a new question more specific to the detection of <strong>curves</strong> because you are dealing with curves op, not horizontal and vertical <strong>lines</strong>.</p>
<p>There are several ways to detect curves but none of them are easy. In the order of simplest-to-implement to hardest:</p>
<ol>
<li>Use the RANSAC algorithm. Develop a formula describing the nature of the long. and lat. lines depending on the map in question. I.e. latitude curves will be almost perfectly straight lines on the map when you're near the equator, with the equator being the perfectly straight line, but will be very curved, resembling circle segments, when you're at high latitudes (near the poles). SciPy already has <a href="http://scipy-cookbook.readthedocs.org/items/RANSAC.html" rel="noreferrer">RANSAC</a> implemented as a class; all you have to do is find and then programmatically define the model you want to try to fit to the curves. Of course there's the ever-useful 4dummies text <a href="http://old.vision.ece.ucsb.edu/~zuliani/Research/RANSAC/docs/RANSAC4Dummies.pdf" rel="noreferrer">here</a>. This is the easiest because all you have to do is the math. </li>
<li>A bit harder to do would be to create a rectangular grid and then try to use cv findHomography to warp the grid into place on the image. For various geometric transformations you can do to the grid you can check out <a href="http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html" rel="noreferrer">OpenCv manual</a>. This is sort of a hack-ish approach and might work worse than 1. because it depends on the fact that you can re-create a grid with enough details and objects on it that cv can identify the structures on the image you're trying to warp it to. This one requires you to do similar math to 1. and just a bit of coding to compose the end solution out of several different functions.</li>
<li>To actually do it. There are mathematically neat ways of describing curves as a list of tangent lines on the curve. You can try to fit a bunch of shorter HoughLines to your image or image segment and then try to group all found lines and determine, by assuming that they're tangents to a curve, if they really follow a curve of the desired shape or are they random. See <a href="http://arxiv.org/pdf/1501.03124.pdf" rel="noreferrer">this paper</a> on this matter. Out of all approaches this one is the hardest because it requires a quite a bit of solo-coding and some math about the method.</li>
</ol>
<p>There could be easier ways, I've never actually had to deal with curve detection before. Maybe there are tricks to do it easier, I don't know. If you ask a new question, one that hasn't been closed as an answer already you might have more people notice it. Do make sure to ask a full and complete question on the exact topic you're interested in. People won't usually spend so much time writing on such a broad topic.</p>
<p>To show you what you can do with just Hough transform check out bellow:</p>
<pre><code>import cv2
import numpy as np
def draw_lines(hough, image, nlines):
n_x, n_y=image.shape
#convert to color image so that you can see the lines
draw_im = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
for (rho, theta) in hough[0][:nlines]:
try:
x0 = np.cos(theta)*rho
y0 = np.sin(theta)*rho
pt1 = ( int(x0 + (n_x+n_y)*(-np.sin(theta))),
int(y0 + (n_x+n_y)*np.cos(theta)) )
pt2 = ( int(x0 - (n_x+n_y)*(-np.sin(theta))),
int(y0 - (n_x+n_y)*np.cos(theta)) )
alph = np.arctan( (pt2[1]-pt1[1])/( pt2[0]-pt1[0]) )
alphdeg = alph*180/np.pi
#OpenCv uses weird angle system, see: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
if abs( np.cos( alph - 180 )) > 0.8: #0.995:
cv2.line(draw_im, pt1, pt2, (255,0,0), 2)
if rho>0 and abs( np.cos( alphdeg - 90)) > 0.7:
cv2.line(draw_im, pt1, pt2, (0,0,255), 2)
except:
pass
cv2.imwrite("/home/dino/Desktop/3HoughLines.png", draw_im,
[cv2.IMWRITE_PNG_COMPRESSION, 12])
img = cv2.imread('a.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
cv2.imwrite("1tresh.jpg", b)
element = np.ones((3,3))
b = cv2.erode(b,element)
cv2.imwrite("2erodedtresh.jpg", b)
edges = cv2.Canny(b,10,100,apertureSize = 3)
cv2.imwrite("3Canny.jpg", edges)
hough = cv2.HoughLines(edges, 1, np.pi/180, 200)
draw_lines(hough, b, 100)
</code></pre>
<p>As you can see from the image below, the only straight lines are longitudes. Latitudes are not as straight, therefore for each latitude you have several detected lines that behave like tangents on the line. Blue lines are drawn by the <code>if abs( np.cos( alph - 180 )) > 0.8:</code> condition while the red lines are drawn by the <code>rho>0 and abs( np.cos( alphdeg - 90)) > 0.7</code> condition. Pay close attention when comparing the original image with the image with lines drawn on it. The resemblance is uncanny (heh, get it?) but because they're not lines a lot of it only looks like junk (especially that highest detected latitude line that seems like it's too "angled", but in reality those lines make a perfect tangent to the latitude line at its thickest point, just as the Hough algorithm demands). Acknowledge that there are limitations to detecting curves with a line-detection algorithm.</p>
<p><a href="https://i.stack.imgur.com/OlM2F.png" rel="noreferrer"><img src="https://i.stack.imgur.com/OlM2F.png" alt="Best possible detected lines."></a></p> | 2015-11-21 02:54:56.653000+00:00 | 2015-11-24 16:45:51.580000+00:00 | 2015-11-24 16:45:51.580000+00:00 | null | 33,838,156 | <p>I am using OpenCV HoughlinesP to find horizontal and vertical lines. It is not finding any lines most of the time. Even when it finds a lines it is not even close to actual image.</p>
<pre><code>import cv2
import numpy as np
img = cv2.imread('image_with_edges.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
flag,b = cv2.threshold(gray,0,255,cv2.THRESH_OTSU)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(1,1))
cv2.erode(b,element)
edges = cv2.Canny(b,10,100,apertureSize = 3)
lines = cv2.HoughLinesP(edges,1,np.pi/2,275, minLineLength = 100, maxLineGap = 200)[0].tolist()
for x1,y1,x2,y2 in lines:
for index, (x3,y3,x4,y4) in enumerate(lines):
if y1==y2 and y3==y4: # Horizontal Lines
diff = abs(y1-y3)
elif x1==x2 and x3==x4: # Vertical Lines
diff = abs(x1-x3)
else:
diff = 0
if diff < 10 and diff is not 0:
del lines[index]
gridsize = (len(lines) - 2) / 2
cv2.line(img,(x1,y1),(x2,y2),(0,0,255),2)
cv2.imwrite('houghlines3.jpg',img)
</code></pre>
<p>Input Image:
<a href="https://i.stack.imgur.com/1BXDZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1BXDZ.jpg" alt="input image"></a></p>
<p>Output Image: (see the Red Line):
<a href="https://i.stack.imgur.com/iNAzT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iNAzT.jpg" alt="enter image description here"></a></p>
<p>@ljetibo Try this with:
<a href="http://www.lib.utexas.edu/maps/onc/txu-pclmaps-oclc-8322829_c_6.jpg" rel="nofollow noreferrer">c_6.jpg</a></p> | 2015-11-21 00:15:26.230000+00:00 | 2015-11-24 18:30:58.283000+00:00 | 2015-11-24 18:30:58.283000+00:00 | python|opencv|computer-vision|houghlinesp | ['http://docs.opencv.org/2.4/doc/tutorials/imgproc/threshold/threshold.html', 'http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#threshold', 'https://i.stack.imgur.com/Z0IJx.jpg', 'https://i.stack.imgur.com/nHgVu.jpg', 'https://i.stack.imgur.com/v45xw.jpg', 'https://i.stack.imgur.com/PbaBh.jpg', 'https://i.stack.imgur.com/wI1Oy.jpg', 'http://scipy-cookbook.readthedocs.org/items/RANSAC.html', 'http://old.vision.ece.ucsb.edu/~zuliani/Research/RANSAC/docs/RANSAC4Dummies.pdf', 'http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html', 'http://arxiv.org/pdf/1501.03124.pdf', 'https://i.stack.imgur.com/OlM2F.png'] | 12 |
43,922,384 | <p>You can use the built-in <code>columnSimilarities()</code> method on a <code>RowMatrix</code>, which can either calculate the exact cosine similarities, or estimate them using the <a href="https://arxiv.org/abs/1304.1467" rel="noreferrer">DIMSUM</a> method, which will be considerably faster for larger datasets. The difference in usage is that for the latter, you'll have to specify a <code>threshold</code>. </p>
<p>Here's a small reproducible example:</p>
<pre><code>from pyspark.mllib.linalg.distributed import RowMatrix
rows = sc.parallelize([(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12)])
# Convert to RowMatrix
mat = RowMatrix(rows)
# Calculate exact and approximate similarities
exact = mat.columnSimilarities()
approx = mat.columnSimilarities(0.05)
# Output
exact.entries.collect()
[MatrixEntry(0, 2, 0.991935352214),
MatrixEntry(1, 2, 0.998441152599),
MatrixEntry(0, 1, 0.997463284056)]
</code></pre> | 2017-05-11 17:46:42.263000+00:00 | 2017-05-11 17:46:42.263000+00:00 | null | null | 43,921,636 | <p>For a Recommender System, I need to compute the cosine similarity between all the columns of a whole Spark DataFrame.</p>
<p>In Pandas I used to do this:</p>
<pre><code>import sklearn.metrics as metrics
import pandas as pd
df= pd.DataFrame(...some dataframe over here :D ...)
metrics.pairwise.cosine_similarity(df.T,df.T)
</code></pre>
<p>That generates the Similarity Matrix between the columns (since I used the transposition)</p>
<p>Is there any way to do the same thing in Spark (Python)?</p>
<p>(I need to apply this to a matrix made of tens of millions of rows, and thousands of columns, so that's why I need to do it in Spark)</p> | 2017-05-11 17:02:13.007000+00:00 | 2019-01-14 16:20:52.490000+00:00 | 2019-01-14 16:20:52.490000+00:00 | python|apache-spark|pyspark|apache-spark-sql|cosine-similarity | ['https://arxiv.org/abs/1304.1467'] | 1 |
48,718,302 | <p><code>vw</code>'s SGD is highly enhanced compared to vanilla, naive SGD, so pre-scaling isn't needed.</p>
<p>If you have very few instances (small data-set), pre-scaling may help somewhat.</p>
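<p>If you do decide to pre-scale such a small data set, you can standardize the numeric features before emitting the <code>vw</code> lines. A sketch based on the input format from the question (the values and names are made up):</p>
<pre><code>import numpy as np

labels  = [1, -1, 1]
ages    = np.array([80.0, 25.0, 43.0])
heights = np.array([180.0, 165.0, 172.0])

def standardize(x):
    return (x - x.mean()) / x.std()

# emit vw-formatted lines with standardized numeric features
for y, a, h in zip(labels, standardize(ages), standardize(heights)):
    print("%d |n age:%.3f height:%.3f" % (y, a, h))
</code></pre>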
<p><code>vw</code> does automatic normalization for scale by remembering the range of each feature as it goes, so pre-scaling is rarely needed to achieve good results.</p>
<p>Normalization for scale, rarity and importance is applied by default. The relevant <code>vw</code> options are:</p>
<pre><code>--normalized
--adaptive
--invariant
</code></pre>
<p>If any of them appears on the command line, the others are not applied. By default all three are applied.</p>
<p><em>See also:</em> <a href="https://stackoverflow.com/a/32770631/1296044">this stackoverflow answer</a></p>
<p><em>The paper explaining the enhanced SGD algorithm in <code>vw</code> is:</em></p>
<p><a href="https://arxiv.org/abs/1011.1576" rel="nofollow noreferrer">Online Importance Weight Aware Updates - Nikos Karampatziakis & John Langford</a></p> | 2018-02-10 07:21:17.677000+00:00 | 2018-02-10 07:21:17.677000+00:00 | null | null | 48,687,328 | <p>I know that vw can handle very raw data(e.g. raw text) but for instance should one consider scaling numerical features before feeding the data to vw?
Consider the following line:</p>
<p><code>1 |n age: 80.0 height: 180.0 |c male london |d the:1 cat:2 went:3 out:4</code></p>
<p>Assuming that typical age ranges from 1 to 100 and height (in centimeters) may range from 140 to 220, is it better to transform/scale the <code>age</code> and <code>height</code> so they share a common range? I think many algorithms may need this kind of preprocessing on their input data, for example Linear Regression.</p> | 2018-02-08 14:02:37.497000+00:00 | 2018-04-21 21:10:28.217000+00:00 | 2018-04-21 21:10:28.217000+00:00 | r|machine-learning|data-processing|vowpalwabbit | ['https://stackoverflow.com/a/32770631/1296044', 'https://arxiv.org/abs/1011.1576'] | 2
57,487,456 | <p>I'm not familiar with DL4J's approach, but at the core 'Paragraph Vector'/'Doc2Vec' level, documents typically have an identifier assigned by the user – most typically, a single unique ID. Sometimes, though, these (provided) IDs have been called "labels", and further, sometimes it can be useful to re-use known-labels as if they were per-document doc-tokens, which can lead to confusion. In the Python gensim library, we call those user-provided tokens "tags" to distinguish from "labels" that might be from a totally different, and downstream, vocabulary. </p>
<p>So in a follow-up paper like "<a href="https://arxiv.org/abs/1507.07998" rel="nofollow noreferrer">Document Embedding with Paragraph Vectors</a>", each document has a unique ID - its title or identifier within Wikipedia or arXiv. But then the resulting doc-vectors are evaluated by how well they place documents with the same category-labels closer to each other than to third documents. So there's both a learned doc-tag space, and a downstream evaluation based on other labels (that weren't in any way provided to the unsupervised Paragraph Vector algorithm). </p>
<p>Similarly, you might give all training documents unique IDs, but then later train a separate classifier (of any algorithm) to use the doc-vectors as inputs, and learn to predict <em>other</em> labels. That's my understanding of the IMDB experiment in the original 'Paragraph Vectors' paper: every review has a unique ID during training, and thus got its own doc-vector. But then a downstream classifier was trained to predict positive/negative review sentiment based on those doc-vectors. So, the assessment/prediction of <em>labels</em> ("positive"/"negative") was a separate downstream step. </p>
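<p>To make that two-step setup concrete, here is a minimal sketch in Python with gensim and scikit-learn (this is the "separate downstream classifier" variant, not the DL4J internals; the toy data and parameters are made up, and a recent gensim is assumed):</p>
<pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

docs = [("this movie was great fun", "pos"),
        ("dull and far too long", "neg"),
        ("a wonderful heartfelt film", "pos"),
        ("i want those two hours back", "neg")]

# step 1: unsupervised Paragraph Vectors, one unique tag per document
corpus = [TaggedDocument(text.split(), [str(i)]) for i, (text, _) in enumerate(docs)]
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# step 2: a separate classifier trained on doc-vectors, predicting the labels
X = [model.infer_vector(text.split()) for text, _ in docs]
y = [label for _, label in docs]
clf = LogisticRegression().fit(X, y)

print(clf.predict([model.infer_vector("a great film".split())]))
</code></pre>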
<p>As mentioned, it's sometimes the case that re-using known category-labels as doc-ids – either as the only doc-ID, or as an extra ID in addition to a unique-per-document ID – can be useful. In a way, it creates synthetic combined documents for training, made up of all documents with the same label. This <em>may</em> tend to influence the final space/coordinates to be more discriminative with regard to the known labels, and thus make the resulting doc-vectors more helpful to downstream classifiers. But then you've replaced classic 'Paragraph Vector', with one ID per doc, with a similar semi-supervised approach where known labels influence training. </p> | 2019-08-14 02:27:20.230000+00:00 | 2019-08-14 02:27:20.230000+00:00 | null | null | 57,459,175 | <p>I just read the paper <a href="https://cs.stanford.edu/~quocle/paragraph_vector.pdf" rel="nofollow noreferrer">Distributed Representations of Sentences and Documents</a>. In the sentiment analysis experiment section, it says, "After learning the vector representations for training sentences and their subphrases, we feed them to a logistic regression to learn a predictor of the movie rating." So it uses logistic regression algorithm as a classifier to determine what the label is. </p>
<p>Then I moved on to dl4j, I read the example "ParagraphVectorsClassifierExample" the code shows as below:</p>
<pre><code> void makeParagraphVectors() throws Exception {
ClassPathResource resource = new ClassPathResource("paravec/labeled");
// build a iterator for our dataset
iterator = new FileLabelAwareIterator.Builder()
.addSourceFolder(resource.getFile())
.build();
tokenizerFactory = new DefaultTokenizerFactory();
tokenizerFactory.setTokenPreProcessor(new CommonPreprocessor());
// ParagraphVectors training configuration
paragraphVectors = new ParagraphVectors.Builder()
.learningRate(0.025)
.minLearningRate(0.001)
.batchSize(1000)
.epochs(20)
.iterate(iterator)
.trainWordVectors(true)
.tokenizerFactory(tokenizerFactory)
.build();
// Start model training
paragraphVectors.fit();
}
void checkUnlabeledData() throws IOException {
/*
At this point we assume that we have model built and we can check
which categories our unlabeled document falls into.
So we'll start loading our unlabeled documents and checking them
*/
ClassPathResource unClassifiedResource = new ClassPathResource("paravec/unlabeled");
FileLabelAwareIterator unClassifiedIterator = new FileLabelAwareIterator.Builder()
.addSourceFolder(unClassifiedResource.getFile())
.build();
/*
Now we'll iterate over unlabeled data, and check which label it could be assigned to
Please note: for many domains it's normal to have 1 document fall into few labels at once,
with different "weight" for each.
*/
MeansBuilder meansBuilder = new MeansBuilder(
(InMemoryLookupTable<VocabWord>)paragraphVectors.getLookupTable(),
tokenizerFactory);
LabelSeeker seeker = new LabelSeeker(iterator.getLabelsSource().getLabels(),
(InMemoryLookupTable<VocabWord>) paragraphVectors.getLookupTable());
while (unClassifiedIterator.hasNextDocument()) {
LabelledDocument document = unClassifiedIterator.nextDocument();
INDArray documentAsCentroid = meansBuilder.documentAsVector(document);
List<Pair<String, Double>> scores = seeker.getScores(documentAsCentroid);
/*
please note, document.getLabel() is used just to show which document we're looking at now,
as a substitute for printing out the whole document name.
So, labels on these two documents are used like titles,
just to visualize our classification done properly
*/
log.info("Document '" + document.getLabels() + "' falls into the following categories: ");
for (Pair<String, Double> score: scores) {
log.info(" " + score.getFirst() + ": " + score.getSecond());
}
}
}
</code></pre>
<p>It demonstrates how doc2vec associates arbitrary documents with labels, but it hides the implementation behind the scenes. My question is: does it also do so with logistic regression? If not, what does it use? And how can I do it with logistic regression?</p> | 2019-08-12 10:04:29.130000+00:00 | 2019-08-14 02:27:20.230000+00:00 | null | java|nlp|label|doc2vec|dl4j | ['https://arxiv.org/abs/1507.07998'] | 1
42,585,623 | <p>The problem was caused by <code>nan</code> values in the loss function and weights, as described in <a href="https://stackoverflow.com/q/37448557/41977">this question</a>.</p>
<p>By introducing a different standard deviation for each weights tensor, based on its dimensions (as described <a href="https://stackoverflow.com/a/37755443/41977">in this answer</a> and originally in He <em>et al.</em> [1]), I was able to train the network successfully.</p>
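<p>Concretely, for the ReLU layers built by <code>hlayers</code> this amounts to something like the following when each weights tensor is created (a sketch; <code>fan_in</code> is the first dimension of the weight matrix):</p>
<pre><code>import numpy as np
import tensorflow as tf

def he_weights(fan_in, fan_out, name):
    # He et al. (2015): stddev = sqrt(2 / fan_in) for layers followed by ReLU
    initial = tf.truncated_normal([fan_in, fan_out], stddev=np.sqrt(2.0 / fan_in))
    return tf.Variable(initial, name=name)
</code></pre>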
<p>[1]: He <em>et al.</em> (2015) <a href="https://arxiv.org/pdf/1502.01852.pdf" rel="nofollow noreferrer">Delving Deep into Rectifiers:
Surpassing Human-Level Performance on ImageNet Classification</a></p> | 2017-03-03 18:03:25.890000+00:00 | 2017-03-03 18:03:25.890000+00:00 | 2017-05-23 10:30:04.887000+00:00 | null | 42,578,096 | <p>I am following the third Jupyter notebook on <a href="https://github.com/jdwittenauer/ipython-notebooks/blob/master/notebooks/tensorflow/Tensorflow-3-Regularization.ipynb" rel="nofollow noreferrer">Tensorflow examples</a>.</p>
<p>Running problem 4, I tried to implement a function which builds automatically a number of hidden layers, without manually coding the configuration of each layer.</p>
<p>However, the model runs providing very low accuracy (10%) so I thought that maybe such function could not be compatible with the graph builder of Tensorflow.</p>
<p>My code is the following:</p>
<pre><code>def hlayers(n_layers, n_nodes, i_size, a, r=0, keep_p=1):
for i in range(n_layers):
if i > 0:
i_size = n_nodes
w = tf.Variable(tf.truncated_normal([i_size, n_nodes]), name=f'W{i}')
b = tf.Variable(tf.zeros([n_nodes]), name=f'b{i}')
pa = tf.nn.relu(tf.add(tf.matmul(a, w), b))
a = tf.nn.dropout(pa, keep_prob=keep_p, name=f'a{i}')
r += tf.nn.l2_loss(w, name=f'r{i}')
return a, r
batch_size = 128
num_nodes = 1024
beta = 0.01
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(
tf.float32,
shape=(batch_size, image_size * image_size),
name='Dataset')
tf_train_labels = tf.placeholder(
tf.float32,
shape=(batch_size, num_labels),
name='Labels')
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
keep_p = tf.placeholder(tf.float32, name='KeepProb')
# Hidden layers.
a, r = hlayers(
n_layers=3,
n_nodes=num_nodes,
i_size=image_size * image_size,
a=tf_train_dataset,
keep_p=keep_p)
# Output layer.
wo = tf.Variable(tf.truncated_normal([num_nodes, num_labels]), name='Wo')
bo = tf.Variable(tf.zeros([num_labels]), name='bo')
logits = tf.add(tf.matmul(a, wo), bo, name='Logits')
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(
labels=tf_train_labels, logits=logits))
# Regularizer.
regularizers = tf.add(r, tf.nn.l2_loss(wo))
loss = tf.reduce_mean(loss + beta * regularizers, name='Loss')
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
a, _ = hlayers(
n_layers=3,
n_nodes=num_nodes,
i_size=image_size * image_size,
a=tf_valid_dataset)
valid_prediction = tf.nn.softmax(tf.add(tf.matmul(a, wo), bo))
a, _ = hlayers(
n_layers=3,
n_nodes=num_nodes,
i_size=image_size * image_size,
a=tf_test_dataset)
test_prediction = tf.nn.softmax(tf.add(tf.matmul(a, wo), bo))
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {
tf_train_dataset : batch_data,
tf_train_labels : batch_labels,
keep_p : 0.5}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
</code></pre> | 2017-03-03 11:47:37.747000+00:00 | 2017-03-03 18:03:25.890000+00:00 | null | python|machine-learning|tensorflow|deep-learning | ['https://stackoverflow.com/q/37448557/41977', 'https://stackoverflow.com/a/37755443/41977', 'https://arxiv.org/pdf/1502.01852.pdf'] | 3 |
58,734,171 | <p>Have you turned off the Dropout layers during the testing phase?</p>
<p>Since Dropout layers are only used during the training phase to prevent overfitting, they should not be active during testing. That's one reason tf.Estimator is popular nowadays, since you can turn Dropout off more easily with <code>is_training=True/False</code>.</p>
<p>You can turn it off with <code>tf.keras.backend.set_learning_phase(0)</code>. Please make sure you are using tensorflow.keras (e.g. <code>from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Input, Flatten</code>); there is a difference between tf.keras and keras, and tf.keras is the better choice. </p>
<p>If you have already turned it off, below are my techniques to prevent overfitting:<br>
- Do an error analysis. You can refer to Prof. Andrew Ng's material: <a href="https://www.coursera.org/learn/machine-learning-projects?specialization=deep-learning" rel="nofollow noreferrer">https://www.coursera.org/learn/machine-learning-projects?specialization=deep-learning</a><br>
- Check the train and test set distributions, and use data augmentation (flip, rotate, ...); see the sketch below<br>
- Increase the input shape for more features. One of the best current techniques is the compound scaling method from <a href="https://arxiv.org/pdf/1905.11946.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1905.11946.pdf</a> </p>
<p>Hope this helps! Happy Coding! </p> | 2019-11-06 16:03:03.697000+00:00 | 2019-11-06 16:03:03.697000+00:00 | null | null | 57,076,930 | <p>I made this model for an image classification problem. The problem I'm encountering is that the validation accuracy is always from 5-8% lower than the training accuracy and the validation loss is way higher than the training loss. Here's an example of one of my epochs: loss: 0.2232 - acc: 0.9245 - val_loss: 0.4131 - val_acc: 0.8700</p>
<pre><code>from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, Activation
from keras.optimizers import RMSprop
from keras.preprocessing.image import ImageDataGenerator

model = Sequential()
model.add(Conv2D(32, 3, 3, border_mode='same', input_shape=(150,
150, 3), activation='relu'))
model.add(Conv2D(32, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(Conv2D(64, 3, 3, border_mode='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, 3, 3, border_mode='same',
activation='relu'))
model.add(Conv2D(128, 3, 3, border_mode='same',
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(256, 3, 3, border_mode='same',
activation='relu'))
model.add(Conv2D(256, 3, 3, border_mode='same',
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.0001),
metrics=['accuracy'])
# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.1,
zoom_range=0.1,
horizontal_flip=True)
# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)
</code></pre>
<p>I've tried Bayesian Hyperparameter Optimization using Hyperas, but the model hyperparameters that it's recommending aren't really working for me. What should I change in my model to prevent it from Overfitting? I'm not using much data to train and validate the model because I won't have much data for what the model will be used in real-life. Any recommendation will be greatly appreciated.</p> | 2019-07-17 13:23:02.650000+00:00 | 2019-11-06 16:03:03.697000+00:00 | null | python|tensorflow|keras|deep-learning|conv-neural-network | ['https://www.coursera.org/learn/machine-learning-projects?specialization=deep-learning', 'https://arxiv.org/pdf/1905.11946.pdf'] | 2 |
42,578,376 | <p>If the labels you are looking for are contained in ImageNet or CIFAR-100, you can definitely use them for training. Currently, sliding windows are used for finding multiple labels in an image. To read up on that approach you can use: <a href="https://arxiv.org/pdf/1312.6229.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1312.6229.pdf</a>.</p> | 2017-03-03 12:00:52.717000+00:00 | 2017-03-03 12:00:52.717000+00:00 | null | null | 42,559,458 | <p>I'm new to deep learning. I have a picture set and each image can contain several objects (bridge, bed, river, etc.). I want to use deep learning to detect objects in each image.</p>
<p>For example:<br>
- image 1 contains car and dog<br>
- image 2 contains river, person and boat</p>
<p>I don't have a labeled training dataset. Can I use an open image dataset like ImageNet or CIFAR-100 to train my model?</p> | 2017-03-02 15:28:38.003000+00:00 | 2017-03-03 12:00:52.717000+00:00 | null | deep-learning | ['https://arxiv.org/pdf/1312.6229.pdf'] | 1
59,056,914 | <p>But ML might be a good solution. It does everything you want and solves anything easily...at least you might get the impression listening to the buuuuuzzword people :)</p>
<p>Using only the distribution of the value will probably not work. But you could try to use a combination of distribution of values and distribution of gradients to make an educated guess.</p>
<p>This literature <em>might</em> describe the approach:</p>
<p><a href="https://rstudio-pubs-static.s3.amazonaws.com/161075_05ce98dc51c844e0833c06835c9ce4c3.html" rel="nofollow noreferrer">https://rstudio-pubs-static.s3.amazonaws.com/161075_05ce98dc51c844e0833c06835c9ce4c3.html</a></p>
<p><a href="https://arxiv.org/pdf/1905.08850" rel="nofollow noreferrer">https://arxiv.org/pdf/1905.08850</a></p> | 2019-11-26 18:02:08.903000+00:00 | 2019-11-26 18:02:08.903000+00:00 | null | null | 59,055,680 | <p>I'm working with time series data which roughly oscillates and has around 3000 data points. There are 2 things I would like to accomplish:
1) smooth data to remove jagged edges
2) predict the next data point with weight based off location in the distribution diagram</p>
<p>I have included the data distribution diagram and a sample plot of the data in blue with the data points in black. The yellow line represents Lowess-smoothed data points with a frac of 4/len(df), so the window for local regression stays consistent with additional data. The problem is that it is horrific for predicting the next data point. A simple/exponential moving average is not an option because of the lag. I have used several scipy modules under signal and optimize, such as curve_fit, but have not found anything that comes close to matching the statsmodels lowess accuracy, aside from the forecasting of the next data point. I'm trying to stay away from going to ML if possible.</p>
<p>My searching has been pointing to using a Gaussian process with Bayesian optimization, but this is a bit beyond my abilities to implement as my own custom function.</p>
<p>If I am stuck building my own custom function, any links or feedback on how to proceed would be greatly appreciated. </p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

c_list = [2.8, 2.1, 4.0, 4.7, 4.7, 3.0, 0.2, -0.4, -3.2, 1.0, 4.0, -3.7, -3.7, -4.3, -2.7, 0.2, 3.4, 4.3, 4.2, 3.8, -0.3, 2.4, -0.2, -0.2, -2.6, -3.3, -4.3, -3.6, 0.5, 0.3, 0.9, 3.3, 3.3, 3.6, 3.9, 4.1, -0.3, -0.9, -2.9, -0.9, 1.9, 2.8, 4.4, 3.9, 3.3, -2.6, -3.1, -3.2, -0.2, 3.2]
c_series = pd.Series(c_list)
x = c_series.index.values
y = c_series.values
window = 4/len(c_series)
l = lowess(y, x, window)
c_series.plot()
plt.scatter(x, y, s=9)
plt.plot(l[:,1])
</code></pre>
<p><a href="https://i.stack.imgur.com/3mqtw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3mqtw.png" alt="Ploted data points in blue with lowess in yellow"></a></p>
<p><a href="https://i.stack.imgur.com/txnc1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/txnc1.png" alt="Distribution graph of data points"></a></p> | 2019-11-26 16:44:50.250000+00:00 | 2019-11-26 20:21:53.987000+00:00 | 2019-11-26 20:21:53.987000+00:00 | python|forecasting|smoothing | ['https://rstudio-pubs-static.s3.amazonaws.com/161075_05ce98dc51c844e0833c06835c9ce4c3.html', 'https://arxiv.org/pdf/1905.08850'] | 2 |
56,745,139 | <p>I see two things that jump out at me.</p>
<ol>
<li><p>First, the double use of <code>learn_rate</code>. Federated Averaging as introduced <a href="https://arxiv.org/pdf/1602.05629.pdf" rel="nofollow noreferrer">here</a> first computes a client update, where the gradients are scaled by the learning rate, then aggregates these as a weighted average at the server. In particular, the server does <em>not</em> scale by the learning rate as well. Scaling the update by the learning rate twice will result in an effective squaring of the learning rate, which could easily explain a dramatic slowdown.</p></li>
<li><p>Second, the use of batch normalization. Batch normalization in federated optimization is very much an open area of research; it is not clear if the same optimization benefits will materialize in the federated setting as in the datacenter. Your beliefs on the answer to this question should depend on what you believe to be the mechanism through which BatchNorm works its magic, and there has been <a href="https://arxiv.org/abs/1805.11604" rel="nofollow noreferrer">debate</a> on this <a href="https://arxiv.org/abs/1805.10694" rel="nofollow noreferrer">recently</a>.</p></li>
</ol>
<p>With that said, I would try setting the server's learning rate to 1, removing the BatchNorm layer, and ensuring that the centralized model and the federated model are processing equivalent amounts of data, in order to perform a direct comparison. Federated averaging in the extreme case simply reduces to gradient descent, so if you are seeing a stark contrast there is likely a misspecification in the optimization problem.</p>
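<p>For the first point, a minimal sketch (reusing the identifiers from the question's own code, so this is only illustrative): keep the client-side learning rate inside <code>model_fn_Federated</code>, and fix the server optimizer's learning rate to 1.0 so the averaged update is not scaled a second time.</p>
<pre><code># Sketch only: same call as in the question, but the server optimizer uses
# learning_rate=1.0 so the aggregated client update is applied as-is.
trainer_Itr_Process = tff.learning.build_federated_averaging_process(
    model_fn_Federated,
    server_optimizer_fn=(lambda: gradient_descent.SGD(learning_rate=1.0)),
    client_weight_fn=None)
</code></pre>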
<p>Hope this helps!!</p> | 2019-06-24 23:38:06.423000+00:00 | 2019-06-24 23:38:06.423000+00:00 | null | null | 56,514,398 | <p>I'm evaluating keras and tensorflow-federated model performance for a regression problem. The performance is basically the MSE for both. The only difference is:
1. the way of splitting the dataset.
2. the loss function:</p>
<pre><code># keras model loss function
def loss_fn():
return tf.keras.losses.MeanSquaredError()
# Federated model loss function
def loss_fn_Federated(y_true, y_pred):
return tf.reduce_mean(tf.keras.losses.MSE(y_true, y_pred))
</code></pre>
<p>Please help me improve the federated model.</p>
<pre><code>
tf.compat.v1.enable_v2_behavior()
train_perc = 0.8
Norm_Input = True
Norm_Output = True
Input_str = ['latitude', 'longitude']
if Norm_Output:
Output_str = ['BeamRSRP_max_Normazlied']
else:
Output_str = ['BeamRSRP_max']
Final_Tag = 'Tag4' # here just decide which tagging method do you want to use
Num_Clients = 2 # cannot be less than one
Num_Cons_Sample_Client = 20 # cannot be less than one
max_thr_all = 1000000000
learn_rate = .01
Val_Train_Split = 0.8
SNN_epoch = 50
SNN_batch_size = 1100
shuffle_buffer = 200
SNN_Layers = [10,100,100,100,100,10] # layers Dense
SNN_epsilon =0.1
SNN_decay = 0.01
datetime
Sim_Feature_Name = "-All-"
add_path_name = "Norm"+str(Norm_Output*1) +Sim_Feature_Name
tosave_Path = add_path_name+str(datetime.datetime.now().hour) + '-'+str(datetime.datetime.now().minute)+'-' + str(datetime.datetime.now().second)+'/'
Data_2018 = False
if Data_2018:
tmp_removed = ["gpsTime","UEC/CSI/CSI_rs_ssb_idx","UEC/CSI/CSI_rs_ssb_rsrp","TX/CSI_RX/Beam RSRP dBm","TX/CSI_RX/Channel Quality Indicator"]
TobeRemoved_Array = ["gpsTime_float","core","ssbRSRP_max","RX/PBCH_Rx/Cell id","ssbidx_max","TX/CSI_RX/Precoding Matrix Indicator","CQI_max","UEC/UEC/L1_RX_throughput_mbps","BeamRSRP_max","UEC/PBCH/PBCH_SINR","BeamRSRP_max_Normazlied","log","is_training"]
else:
tmp_removed = ["gpsTime","TX/CSI_RX/Beam RSRP dBm","nmea_raw","core_y"]
TobeRemoved_Array = ['TX/CSI_RX/Channel Quality Indicator', 'core', 'epoch', 'TX/CSI_RX/Efficiency', 'TX/CSI_RX/Estimated Freq Error', 'TX/CSI_RX/Estimated Time Error', 'TX/CSI_RX/Precoding Matrix Indicator', 'TX/CSI_RX/Rank Indicator', 'log', 'BeamRSRP_max', 'CQI_max', 'gpsTime_float', 'BeamRSRP_max_Normazlied', 'is_training']
if not os.path.isdir(tosave_Path):
os.makedirs(tosave_Path)
# Load simulation data.
##############################################
dir_name = 'pickle-data/'
file_name = 'all_logs_april_2019.pickle'
files = os.listdir('pickle-data/')
dataframe = Import_Pickle.Import_v1(dir_name,file_name,Data_2018) # choose False to use 2019 data
# Just to reduce the processing
ave = dataframe.core.min() + max_thr_all
#df2 = dataframe.drop(dataframe[dataframe.core < ave].index)
df2 = dataframe[dataframe.core < ave]
df = Import_Pickle.PreProcessing_v2019(df2,Norm_Input,tmp_removed)
train_df,test_df,X_traindf,X_testdf,Y_traindf,Y_testdf,XY_traindf,XY_testdf = Import_Pickle.Splitting_Train_Test(df,train_perc,Norm_Output,TobeRemoved_Array)
########## splitting for clients ############
def Tag_per_day(train_df_loc,TagNum):
train_df_loc['log2'] = train_df_loc['log'].apply(lambda x: x.replace("_",""))
tag_Index = train_df_loc.log2.apply(lambda x: x.index("201"))
tag_Index2 = tag_Index.values[1]
tag_date =train_df_loc.log2.apply(lambda x: x[tag_Index2:tag_Index2+8])
train_df_loc.loc[:,'Tag'+str(TagNum)] = pd.Series(tag_date.to_list(),index=train_df.index) # to be fixed
return train_df_loc
# Introduce time as input
X_traindf['gpsTime_float'] = train_df['gpsTime_float']
# introduce first tag per day
TagNum=1
train_df = Tag_per_day(train_df,TagNum)
#examples on groupby
Unq_tag1_grps = list(train_df.groupby(train_df.Tag1).groups.keys())
train_df.groupby(train_df.Tag1).first()
train_df.groupby(train_df.Tag1)['gpsTime_float'].count()
X_traindf['Tag'+str(TagNum)] = train_df['Tag'+str(TagNum)]
#############################
# introduce epoch as tag
#############################
TagNum=2
train_df['Tag'+str(TagNum)] = train_df.epoch
X_traindf['Tag'+str(TagNum)] = train_df['Tag'+str(TagNum)]
#############################
# introduce core as tag
#############################
TagNum=3
train_df['Tag'+str(TagNum)] = train_df.core
X_traindf['Tag'+str(TagNum)] = train_df['Tag'+str(TagNum)]
#############################
# introduce day as tag per client
#############################
TagNum = 4
RepNum = np.ceil(train_df.shape[0]/(Num_Cons_Sample_Client*Num_Clients))
Part_Tag_Array=[]
for i in np.arange(Num_Clients):
Part_Tag_Tmp = list(map(lambda _: i+1,range(Num_Cons_Sample_Client)))
Part_Tag_Array.extend(Part_Tag_Tmp)
Full_Tag_Array2 = Part_Tag_Array * int(RepNum)
extra_tags = np.abs(len(Full_Tag_Array2) - train_df.shape[0])
Full_Tag_Array = Full_Tag_Array2[:-extra_tags]
train_df.loc[:,'Tag'+str(TagNum)] = pd.Series(Full_Tag_Array,index=train_df.index)
X_traindf.loc[:,'Tag'+str(TagNum)] = train_df['Tag'+str(TagNum)]
#############################
# END day as tag per client
#############################
######### Introduce gpsTime and Tag to the input
Input_str.extend(['gpsTime_float',Final_Tag])
#FLObj = FLTest()
#FLObj.test_self_contained_example(X_traindf[Input_str].values, Y_traindf[Output_str].values)
###### Adding StandardSalarization:
scaler = StandardScaler()
removed_column = Input_str.pop()
X_train_ScaledTmp = scaler.fit_transform(X_traindf[Input_str],Y_traindf[Output_str])
# Adding Int tag per client without scalarization
X_train_Scaled = np.c_[X_train_ScaledTmp, train_df[removed_column].values.reshape(train_df.shape[0],1)]
# X_train_Scaled = scaler.transform(X_traindf[Input_str])
# All In/Out data Numpy
Act_Inputs_Int_Tag = X_train_Scaled
Act_Outputs_Int = Y_traindf[Output_str].values
# Remove Tags
Act_Inputs_Int = np.delete(Act_Inputs_Int_Tag,-1,axis=1)
# prepare In/Out per Client
All_Act_Inputs_Int_Tag = [Act_Inputs_Int_Tag[np.where(Act_Inputs_Int_Tag[:,-1]== x)] for x in np.arange(1,Num_Clients+1)]
All_Act_Outputs_Int = [Act_Outputs_Int[np.where(Act_Inputs_Int_Tag[:,-1]== x)] for x in np.arange(1,Num_Clients+1)]
# Remove Tags
All_Act_Inputs_Int = [np.delete(All_Act_Inputs_Int_Tag[x],-1,axis=1) for x in np.arange(0,Num_Clients) ]
# a need conversion to float32
Act_Inputs = np.float32(Act_Inputs_Int)
Act_Outputs = np.float32(Act_Outputs_Int)
# convert dataset to client based dataset
All_Act_Inputs = [np.float32(All_Act_Inputs_Int[x]) for x in np.arange(0,Num_Clients)]
All_Act_Outputs = [np.float32(All_Act_Outputs_Int[x]) for x in np.arange(0,Num_Clients)]
# convert to OrderedDict
new_batch = collections.OrderedDict([('In', Act_Inputs),('Out', Act_Outputs)])
All_new_batch = [collections.OrderedDict([('In', All_Act_Inputs[x]),('Out', All_Act_Outputs[x])]) for x in np.arange(0,Num_Clients)]
# Convert to tensor
dataset_input = tf.data.Dataset.from_tensor_slices(new_batch)#,,maxval=100, dtype=tf.float32)
# All_new_batch has different item per In / Out
All_dataset_input = [tf.data.Dataset.from_tensor_slices(All_new_batch[x]) for x in np.arange(0,Num_Clients)]
# Select among the datasets
Used_dataset= dataset_input
All_Used_dataset= All_dataset_input
with eager_mode():
def preprocess(new_dataset):
#return Used_dataset.repeat(2).batch(2)
def map_fn(elem):
return collections.OrderedDict([('x', tf.reshape(elem['In'], [-1])),('y', tf.reshape(elem['Out'],[1]))])
DS2= new_dataset.map(map_fn)
#return DS2.repeat(SNN_epoch).map(map_fn).shuffle(shuffle_buffer).batch(SNN_batch_size)
return DS2.repeat(SNN_epoch).batch(SNN_batch_size)
train_data = [preprocess(Used_dataset)]
#######changes###############33
def make_federated_data(client_data, client_ids):
return [preprocess(client_data[x]) for x in client_ids]
#@test {"output": "ignore"}
# sample_clients = [0:Num_Clients]
federated_train_data = make_federated_data(All_Used_dataset, np.arange(0,Num_Clients))
sample_batch = tf.contrib.framework.nest.map_structure(lambda x: x.numpy(), next(iter(train_data[0])))
########## END Changes ############
def create_SK_model():
modelF = tf.keras.models.Sequential([tf.keras.layers.Dense(SNN_Layers[0],activation=tf.nn.relu,input_shape=(Act_Inputs.shape[1],), kernel_initializer='RandomNormal'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(SNN_Layers[1], activation=tf.nn.relu, kernel_initializer='RandomNormal'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(1, activation=tf.nn.relu, kernel_initializer='RandomNormal'),
])
return modelF
# keras model loss function
def loss_fn():
return tf.keras.losses.MeanSquaredError()
# Federated model loss function
def loss_fn_Federated(y_true, y_pred):
return tf.reduce_mean(tf.keras.losses.MSE(y_true, y_pred))
def model_fn_Federated():
return tff.learning.from_keras_model(create_SK_model(),sample_batch,
loss=loss_fn_Federated,
optimizer=gradient_descent.SGD(learn_rate))
YTrain = Act_Outputs #np.random.rand(50,1)
XTrain = Act_Inputs #np.random.rand(50,100)
# locally compile the model
Local_model = create_SK_model()
Local_model.compile(loss=tf.keras.losses.MeanSquaredError(),optimizer=tf.keras.optimizers.SGD(lr=learn_rate,decay=1e-6,momentum=0.9,nesterov=True))
# fitting without federated learning
trained_local_Model = Local_model.fit(XTrain,YTrain, validation_split=Val_Train_Split, epochs=SNN_epoch, batch_size=SNN_batch_size) #tbuc
# Loss of local model
Local_Loss = trained_local_Model.history['loss'] # tbuc
# Copy local model for comparison purposes
Local_model_Fed = Local_model
# training/fitting with TF federated learning
trainer_Itr_Process = tff.learning.build_federated_averaging_process(model_fn_Federated,server_optimizer_fn=(lambda : gradient_descent.SGD(learning_rate=learn_rate)),client_weight_fn=None)
FLstate = trainer_Itr_Process.initialize()
FL_Loss_arr = []
Fed_eval_arr = []
# Track loss of different ...... of federated iteration
for round_num in range(2,10):
"""
The second of the pair of federated computations, next, represents a single round of Federated Averaging, which consists of pushing the server state (including the model parameters) to the clients, on-device training on their local data, collecting and averaging model updates, and producing a new updated model at the server.
"""
FLstate, FLoutputs = trainer_Itr_Process.next(FLstate, federated_train_data)
# Track the loss.
FL_Loss_arr.append(FLoutputs.loss)
# Setting federated weights on copied Object of local model
tff.learning.assign_weights_to_keras_model(Local_model_Fed,FLstate.model)
#Local_model_Fed.set_weights(tff_weights)
print(tff.__name__)
# Evaluate loss of the copied federated weights on local model
Fed_predicted = Local_model_Fed.predict(XTrain)
Fed_eval = Local_model_Fed.evaluate(XTrain,YTrain)
Fed_eval_arr.append(Fed_eval)
if True:
FieldnamesSNN = ['Local_Loss', 'FL_Loss_arr','Fed_eval_arr']
Valuesall2 = [Local_Loss,FL_Loss_arr,Fed_eval_arr]
# ValuesallSNN = Valuesall2.transpose()
ValuesallSNN = Valuesall2
workbook = xlsxwriter.Workbook(tosave_Path + Sim_Feature_Name+'SNN_loss.xlsx')
worksheetSNN = workbook.add_worksheet(Sim_Feature_Name+'SNN_loss')
row = 0
col = 0
#Write Validation results
prev_col_len=0
for names in FieldnamesSNN:
row=0
worksheetSNN.write(row,col,names)
# values = ValuesallSNN[:,col]
values = np.array(ValuesallSNN)[col]
row=row + 1
for val in values:
print(val)
worksheetSNN.write(row,col,val)
row=row+1
col = col +1
workbook.close()
</code></pre>
<p>The result currently is as follows
(Local_Loss is for the Keras model, FL_Loss_arr is the loss per client, Fed_eval_arr is the loss for the aggregated model):</p>
<pre><code>Local_Loss FL_Loss_arr Fed_eval_arr
0.361531615257263 0.027410915121436 0.386061603840212
0.354410231113434 0.026805186644197 0.378279162582626
0.32423609495163 0.026369236409664 0.370627223614037
0.287901371717453 0.02615818567574 0.363125243503663
0.244472771883011 0.025971807539463 0.355770364471598
0.203615099191666 0.025779465213418 0.348538321804381
0.165129363536835 0.025623736903071 0.341443773817459
0.130221307277679 0.025475736707449 0.334481204779932
0.103743642568588
0.084212586283684
0.065002344548702
0.057881370186806
0.054710797965527
0.050441317260265
0.050083305686712
0.049112796783447
0.050076562911272
0.051196228712797
0.05450239777565
0.053276151418686
</code></pre> | 2019-06-09 12:00:01.843000+00:00 | 2019-06-24 23:38:06.423000+00:00 | null | tensorflow-federated | ['https://arxiv.org/pdf/1602.05629.pdf', 'https://arxiv.org/abs/1805.11604', 'https://arxiv.org/abs/1805.10694'] | 3 |
27,891,745 | <p>Indicating that a statement is a hypothesis can be done with reification.</p>
<p><a href="http://www.w3.org/TR/2014/REC-rdf11-mt-20140225/#reification" rel="nofollow">reification in the RDF 1.1 specification</a></p>
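<p>As a rough illustration, a sketch using rdflib in Python (the namespace and the property names such as o:hypothesis are made-up examples; only rdf:Statement and its subject/predicate/object terms are standard):</p>
<pre><code># Reify the hypothesis triple with the standard rdf:Statement vocabulary,
# then let the equation point at the statement resource.
from rdflib import Graph, BNode, Namespace, Literal, RDF, RDFS

o = Namespace("http://example.org/onto#")   # assumed namespace
g = Graph()

equation = BNode()   # _:Equation123
hypo = BNode()       # resource standing for the hypothesis triple

g.add((hypo, RDF.type, RDF.Statement))
g.add((hypo, RDF.subject, o.Function))
g.add((hypo, RDF.predicate, o.hasProperty))
g.add((hypo, RDF.object, o.Linear))
g.add((hypo, RDFS.label, Literal("abc")))

# Link the equation to its (non-asserted) hypothesis
g.add((equation, o.hypothesis, hypo))
print(g.serialize(format="turtle"))
</code></pre>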
<p>Reification does not entail that the reified triple exists. Therefore, there is another approach that entails the triple: <a href="http://arxiv.org/abs/1406.3399" rel="nofollow">reification done right</a>. Note: this approach is not a standard.</p> | 2015-01-11 20:31:05.937000+00:00 | 2015-01-12 16:11:35.950000+00:00 | 2015-01-12 16:11:35.950000+00:00 | null | 27,860,142 | <p>I wonder how to describe scientific theories.</p>
<p>In the physical sciences a key element is to link an equation
to the hypothesis that determines its domain of validity.</p>
<p>N-ary sentences can be described as follows:</p>
<blockquote>
<p>_: Equation123 o:hypothesis _:HypothesisABC .</p>
<p>_:HypothesisABC rdfs:label "abc",</p>
<p>o:expression "a = b" .</p>
</blockquote>
<p>Yet, an hypothesis is a triple in itself, not only a datatype literal.</p>
<p>Therefore I see 3 ways to write the hypothesis itself, and I think
there is still a better way:</p>
<ul>
<li>Encode the triple in Turtle as a datatype literal
_:HypothesisABC o:expression "a=b" .</li>
<li>Encode the triple in an N-ary sentence:
_:HypothesisABC a:has subject "db:Function" ,
o:has Predicate "o:has Property" ,
o:hasObject "o:Linear" .</li>
<li>store the triple in a distinct RDF graph
and indicate the URI of the RDF graph</li>
</ul>
<blockquote>
<p>o:HypothesisABC o:storedIn <a href="http://example.org/graph" rel="nofollow">http://example.org/graph</a>; .</p>
</blockquote>
<p>Yet, these three ways of writing the link between
an equation and the underlying
hypothesis bring new problems,
as the computation of the RDF graph requires first to:</p>
<ul>
<li>parse the datatype literal as triple (case 1 and 2),</li>
<li>load the graph where the triple is stored (case 3).</li>
</ul>
<p>Is there another solution I didn't consider?
If not, is the 3rd solution reasonable?</p>
<p>PS: I read <a href="http://www.lncc.br/seminarioDEXL/palestras/Bernardo.pdf" rel="nofollow">this PowerPoint presentation</a> and, like some others, it has no reference to the RDF syntax corresponding to the model.</p>
<p>Edit (2015-10-10 13:10):</p>
<p>Here is another solution I thought of.</p>
<p>It would be to integrate the notion of hypothesis into the property, that is to say, defining object properties and datatype properties in the ontology whose names indicate that they carry a hypothesis value.</p>
<p>Therefore two versions of a property (e.g. hasProperty, hasSupposedProperty) have the same function (attribution of a property to the object), yet they make it possible to write triples
corresponding to hypotheses differently from triples corresponding to true sentences.</p>
<p>Sincerely yours,</p>
<p>jeybee</p> | 2015-01-09 12:02:33.823000+00:00 | 2015-01-12 16:11:35.950000+00:00 | 2015-01-10 12:11:56.397000+00:00 | rdf|physics|semantics | ['http://www.w3.org/TR/2014/REC-rdf11-mt-20140225/#reification', 'http://arxiv.org/abs/1406.3399'] | 2 |
37,804,375 | <p>A few points you should consider:</p>
<ol>
<li><p>Your network is <em>not</em> a Siamese network: it contains two paths, left and right, but these paths do <em>not</em> share the same filters. See <a href="http://caffe.berkeleyvision.org/gathered/examples/siamese.html" rel="nofollow">this tutorial</a> on how to build a Siamese network that shares filters across layers.</p></li>
<li><p><code>"HDF5Data"</code> layer is not restricted to two outputs ("top"s), it can have as many as you'd like. Thus, you can have a single layer for train and a single layer for test:</p>
<pre><code>layer {
name: "data"
type: "HDF5Data"
top: "data_left"
top: "data_right"
top: "labels"
hdf5_data_param { ... }
include { phase: TRAIN }
}
</code></pre>
<p>The corresponding hdf5 files should have three datasets specified for the <code>h5write</code> command (instead of only two in your code).</p></li>
<li><p>Have you considered using <a href="http://arxiv.org/abs/1605.07270" rel="nofollow">minibatch loss</a>, instead of pairs loss?</p></li>
</ol> | 2016-06-14 06:23:34.373000+00:00 | 2016-06-14 06:23:34.373000+00:00 | null | null | 37,803,958 | <p>This question refers to a question answered <a href="https://stackoverflow.com/questions/34903975/how-to-create-caffedb-training-data-for-siamese-networks-out-of-image-directory">here</a>.<br>
The accepted answer suggests creating labels on the fly. I have a very similar problem but need to use HDF5.</p>
<hr>
<p>Here is my prototxt:</p>
<pre><code>name: "StereoNet"
layer {
name: "layer_data_left"
type: "HDF5Data"
top: "data_left"
top: "labels_left"
include {
phase: TRAIN
}
hdf5_data_param {
source: "/home/ubuntu/trainLeftPatches.txt"
batch_size: 128
}
}
layer {
name: "layer_data_right"
type: "HDF5Data"
top: "data_right"
top: "labels_right"
include {
phase: TRAIN
}
hdf5_data_param {
source: "/home/ubuntu/trainRightPatches.txt"
batch_size: 128
}
}
... etc.
</code></pre>
<p>As you hopefully understand, I create two separate data HDF5 data files. They consist of positive and negative samples by having <em>on the same index</em> a left and a right image that in combination are a positive or negative sample. The labels_left and labels_right are identical matlab arrays of 1's and 0's. I tried to use a single labels array before but caffe gave an error, which seemed to indicate that two processes were clashing. When changing to a copy of the labels array, the training could start.</p>
<p>Here is part of the Matlab data creation file I am now using, the data are the KITTI data:</p>
<pre><code>h5create('trainLeftPatches.h5','/data_left',[9 9 1 numberOfTrainingPatches],'Datatype','double');
h5create('trainLeftPatches.h5','/labels_left',[1 numberOfTrainingPatches],'Datatype','double');
h5create('trainRightPatches.h5','/data_right',[9 9 1 numberOfTrainingPatches],'Datatype','double');
h5create('trainRightPatches.h5','/labels_right',[1 numberOfTrainingPatches],'Datatype','double');
h5create('valLeftPatches.h5','/data_left',[9 9 1 numberOfValidatePatches],'Datatype','double');
h5create('valLeftPatches.h5','/labels_left',[1 numberOfValidatePatches],'Datatype','double');
h5create('valRightPatches.h5','/data_right',[9 9 1 numberOfValidatePatches],'Datatype','double');
h5create('valRightPatches.h5','/labels_right',[1 numberOfValidatePatches],'Datatype','double');
h5write('trainLeftPatches.h5','/data_left', dataLeft_permutated(:, :, :, 1:numberOfTrainingPatches));
h5write('trainLeftPatches.h5','/labels_left', labels_permutated(:, 1:numberOfTrainingPatches));
h5write('trainRightPatches.h5','/data_right', dataRight_permutated(:, :, :, 1:numberOfTrainingPatches));
h5write('trainRightPatches.h5','/labels_right', labels_permutated(:, 1:numberOfTrainingPatches));
h5write('valLeftPatches.h5','/data_left', dataLeft_permutated(:, :, :, numberOfTrainingPatches+1:end));
h5write('valLeftPatches.h5','/labels_left', labels_permutated(:, numberOfTrainingPatches+1:end));
h5write('valRightPatches.h5','/data_right', dataRight_permutated(:, :, :, numberOfTrainingPatches+1:end));
h5write('valRightPatches.h5','/labels_right', labels_permutated(:, numberOfTrainingPatches+1:end));
toc;
</code></pre>
<p>the loss is acceptable on mini batches at the end, but stays too high on the tests
<img src="https://i.stack.imgur.com/Ws0jL.png" alt=""></p>
<p>Please advise. (It may not work.) If there is an error, it is probably very subtle.</p> | 2016-06-14 05:56:03.390000+00:00 | 2016-06-15 09:13:31.727000+00:00 | 2017-05-23 10:29:06.880000+00:00 | matlab|neural-network|deep-learning|caffe|stereoscopy | ['http://caffe.berkeleyvision.org/gathered/examples/siamese.html', 'http://arxiv.org/abs/1605.07270'] | 2
60,427,302 | <p>I think you are having trouble understanding how basic object detection works.
I recommend you read this paper first:</p>
<p><a href="https://arxiv.org/pdf/1807.05511.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1807.05511.pdf</a></p> | 2020-02-27 06:19:40.540000+00:00 | 2020-02-27 06:19:40.540000+00:00 | null | null | 60,426,663 | <p>Let's say we have n images of cat and dog separately and we trained an image classification model to classify a new image with a probability score saying whether it's a cat or a dog.</p>
<p>Now, we are getting images that contain multiple cats and dogs in the same image; how can we detect and localize the objects (cats and dogs here)?</p>
<p>If it is possible, can we also depict the focus areas considered by the model for prediction so that a bounding box could be drawn?</p> | 2020-02-27 05:17:45.240000+00:00 | 2020-02-27 08:57:56.023000+00:00 | null | keras|deep-learning|computer-vision|artificial-intelligence|object-detection | ['https://arxiv.org/pdf/1807.05511.pdf'] | 1
55,204,287 | <p><strong>SSD</strong> - single shot detector - is a NN architecture designed for detection purposes - which means localization(bounding boxes) and classification at once.</p>
<p><strong>MobileNet</strong> (<a href="https://arxiv.org/abs/1704.04861" rel="noreferrer">https://arxiv.org/abs/1704.04861</a>) - an efficient architecture introduced by Google (using depthwise and pointwise convolutions). It can be used for classification purposes, or as a feature extractor for other tasks (e.g. detection).</p>
<p>In the SSD paper they present the use of a VGG network as the <strong>feature extractor</strong> for the detection: the feature maps are taken from several different layers (resolutions) and fed to their corresponding classification and localization layers (classification head and regression head).</p>
<p>So actually, one can decide to use a different kind of feature extractor - like MobileNet-SSD - which means you use SSD arch. while your feature extractor is mobilenet arch.</p>
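<p>A rough sketch of that idea (this is <em>not</em> the TensorFlow Object Detection API code, just an illustration of a MobileNet backbone feeding SSD-style classification and box-regression heads; the class and anchor counts are made up):</p>
<pre><code>import tensorflow as tf

num_classes, num_anchors = 21, 6
backbone = tf.keras.applications.MobileNet(
    weights=None, include_top=False, input_shape=(224, 224, 3))
features = backbone.output   # one of several feature maps SSD would tap

# Per-location predictions, as in SSD's classification and regression heads.
cls_head = tf.keras.layers.Conv2D(num_anchors * num_classes, 3, padding='same')(features)
box_head = tf.keras.layers.Conv2D(num_anchors * 4, 3, padding='same')(features)

detector = tf.keras.Model(backbone.input, [cls_head, box_head])
detector.summary()
</code></pre>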
<p>By reading the SSD paper and the MobileNet paper you will be able to understand the models that exist in the model zoo.</p> | 2019-03-17 05:58:38.283000+00:00 | 2019-03-17 05:58:38.283000+00:00 | null | null | 50,585,597 | <p>I am confused about SSD and MobileNet. As far as I know, both of them are neural networks. SSD provides localization while MobileNet provides classification. Thus the combination of SSD and MobileNet can produce object detection. The image is taken from the <a href="https://arxiv.org/abs/1512.02325" rel="noreferrer">SSD paper</a>. The default classification network of SSD is VGG-16. So, for SSD MobileNet, VGG-16 is replaced with MobileNet. Are my statements correct?</p>
<p>Where can I get more information about SSD MobileNet, especially the one available in the TensorFlow model zoo?</p>
<p><a href="https://i.stack.imgur.com/g30dc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/g30dc.png" alt="enter image description here"></a></p> | 2018-05-29 13:26:42.567000+00:00 | 2021-03-22 16:49:55.120000+00:00 | null | machine-learning|neural-network|object-detection | ['https://arxiv.org/abs/1704.04861'] | 1 |
34,112,044 | <p>Have a look at <a href="http://arxiv.org/abs/1412.6765" rel="nofollow">Performance comparison between Java and JNI for optimal implementation of computational micro-kernels</a>. They show that the Java HotSpot VM server compiler supports auto-vectorization using Super-word Level Parallelism, which is limited to simple cases of parallelism inside a loop. This article will also give you some guidance on whether your data size is large enough to justify going the JNI route.</p> | 2015-12-05 23:11:34.647000+00:00 | 2015-12-05 23:11:34.647000+00:00 | null | null | 10,784,951 | <p>Let's say the bottleneck of my Java program really is some tight loops to compute a bunch of vector dot products. Yes I've profiled, yes it's the bottleneck, yes it's significant, yes that's just how the algorithm is, yes I've run Proguard to optimize the byte code, etc.</p>
<p>The work is, essentially, dot products. As in, I have two <code>float[50]</code> and I need to compute the sum of pairwise products. I know processor instruction sets exist to perform these kind of operations quickly and in bulk, like SSE or MMX.</p>
<p>Yes I can probably access these by writing some native code in JNI. The JNI call turns out to be pretty expensive.</p>
<p>I know you can't guarantee what a JIT will compile or not compile. Has anyone <em>ever</em> heard of a JIT generating code that uses these instructions? and if so, is there anything about the Java code that helps make it compilable this way?</p>
<p>Probably a "no"; worth asking.</p> | 2012-05-28 12:48:21.910000+00:00 | 2021-12-22 11:22:57.950000+00:00 | null | java|floating-point|jit|sse|vectorization | ['http://arxiv.org/abs/1412.6765'] | 1 |
60,073,820 | <p>If you add tokens to the tokenizer, you indeed make the tokenizer tokenize the text differently, but this is not the tokenization BERT was trained with, so you are basically adding noise to the input. The word embeddings are not trained and the rest of the network never saw them in context. You would need a lot of data to teach BERT to deal with the newly added words.</p>
<p>There are also some ways how to compute a single word embedding, such that it would not hurt BERT like in <a href="https://arxiv.org/pdf/1910.07181.pdf" rel="nofollow noreferrer">this paper</a> but it seems pretty complicated and should not make any difference.</p>
<p>BERT uses a word-piece-based vocabulary, so it should not really matter if the words are present in the vocabulary as a single token or get split into multiple wordpieces. The model probably saw the split word during pre-training and will know what to do with it.</p>
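<p>A quick sketch of that point (assuming the transformers library): an out-of-vocabulary word is simply split into known wordpieces, without any call to <code>add_tokens</code>:</p>
<pre><code>from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.tokenize('walrus'))   # e.g. ['wal', '##rus']
</code></pre>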
<p>Regarding the <code>##</code>-prefixed tokens, those are tokens that can only appear as the continuation (suffix) of another wordpiece. E.g., <code>walrus</code> gets split into <code>['wal', '##rus']</code> and you need both of the wordpieces to be in the vocabulary, but not <code>##wal</code> or <code>rus</code>.</p> | 2020-02-05 10:32:28.317000+00:00 | 2020-02-05 10:32:28.317000+00:00 | null | null | 60,068,129 | <p>Referring to the <a href="https://huggingface.co/transformers/main_classes/tokenizer.html" rel="nofollow noreferrer">documentation</a> of the awesome Transformers library from Huggingface, I came across the <code>add_tokens</code> function.</p>
<pre><code>tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
model.resize_token_embeddings(len(tokenizer))
</code></pre>
<p>I tried the above by adding previously absent words in the default vocabulary. However, keeping all else constant, I noticed a decrease in accuracy of the fine tuned classifier making use of this updated <code>tokenizer</code>. I was able to replicate similar behavior even when just 10% of the previously absent words were added.</p>
<p>My questions</p>
<ol>
<li>Am I missing something?</li>
<li>Instead of whole words, is the <code>add_tokens</code> function expecting masked tokens, for example : <code>'##ah'</code>, <code>'##red'</code>, <code>'##ik'</code>, <code>'##si</code>', etc.? If yes, is there a procedure to generate such masked tokens?</li>
</ol>
<p>Any help would be appreciated.</p>
<p>Thanks in advance.</p> | 2020-02-05 02:16:04.023000+00:00 | 2020-02-05 10:32:28.317000+00:00 | null | pytorch|bert-language-model|huggingface-transformers | ['https://arxiv.org/pdf/1910.07181.pdf'] | 1 |
68,045,715 | <p>Binary search trees (BSTs) are not totally equivalent to the proposed data structure. Their asymptotic complexity is better when it comes to both inserting and removing sorted values dynamically (assuming they are balanced correctly). For example, when you want to build an index of the top-k values dynamically:</p>
<pre><code>while not end_of_stream(stream):
    value <- stream.pop_value()
    tree.insert(value)
    tree.remove_max()
</code></pre>
<p>Sorted arrays are not efficient in this case because of the linear-time insertion. The complexity of bucketed lists is not better than that of plain lists asymptotically, and they also suffer from a linear-time search. One can note that a heap can be used in this case, and in fact it is probably better to use a heap here, although they are not always interchangeable.</p>
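<p>For illustration, a small sketch (Python's heapq; the names and the example stream are made up) of the heap alternative, keeping the k smallest values seen in a stream, mirroring the insert/remove_max loop above:</p>
<pre><code>import heapq

def k_smallest_of_stream(stream, k):
    heap = []                       # max-heap (via negation) of the k smallest values seen so far
    for value in stream:
        heapq.heappush(heap, -value)
        if len(heap) > k:
            heapq.heappop(heap)     # drop the current maximum, as tree.remove_max() does
    return sorted(-v for v in heap)

print(k_smallest_of_stream([5, 1, 9, 3, 7, 2], k=3))   # [1, 2, 3]
</code></pre>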
<p>That being said, you are right: BSTs are slow, cause a lot of cache misses and fragmentation, etc. Thus, they are often replaced by more compact variants like <a href="https://en.wikipedia.org/wiki/B-tree" rel="nofollow noreferrer"><strong>B-trees</strong></a>. A B-tree uses a sorted array index to reduce the number of node jumps and make the data structure much more compact. They can be mixed with some 4-byte pointer optimizations to make them even more compact. B-trees are to BSTs what bucketed linked-lists are to plain linked-lists. B-trees are very good for building a dynamic database index of huge datasets stored on a slow storage device (because of the size): they enable applications to fetch values associated with a key using very <em>few storage-device lookups</em> (which are very slow on an HDD, for example). Another real-world use-case is interval trees.</p>
<p>Note that memory fragmentation can be reduced using compaction methods. For BSTs/B-trees, one can reorder the nodes like in a heap. However, compaction is not always easy to apply, especially in native languages with pointers like C/C++, although some <a href="https://arxiv.org/pdf/1902.04738.pdf" rel="nofollow noreferrer">very clever methods exist</a> to do so.</p>
<p>Keep in mind that B-trees shine only on big datasets (especially the ones that do not fit in cache). On relatively small ones, using just plain arrays or even a sorted array is often a very good solution.</p> | 2021-06-19 10:04:44.883000+00:00 | 2021-06-19 10:11:48.087000+00:00 | 2021-06-19 10:11:48.087000+00:00 | null | 68,043,002 | <p>I'm watching university lectures on algorithms and it seems so many of them rely almost entirely on binary search trees of some particular sort for querying/database/search tasks.</p>
<p>I don't understand this obsession with <em>Binary Search <strong>Trees</strong></em>. It seems like in the <strong>vast</strong> majority of scenarios, a BST could be replaced with a sorted array in the case of static data, or a sorted bucketed list if insertions occur dynamically, and then a <em>Binary Search</em> could be employed over them.</p>
<p>With this approach, you get the same algorithmic complexity (for querying at least) as a BST, <strong>way</strong> better cache coherency, way less memory fragmentation (and less gc allocs depending on what language you're in), and are likely much simpler to write.</p>
<p>The fundamental issue is that BSTs are completely memory naïve -- their focus is <strong>entirely</strong> on big-O complexity and they ignore the very real performance considerations of memory fragmentation and cache coherency... Am I missing something?</p> | 2021-06-19 01:59:15.240000+00:00 | 2021-06-19 10:11:48.087000+00:00 | 2021-06-19 03:06:47.690000+00:00 | performance|memory-management|binary-search-tree|binary-search | ['https://en.wikipedia.org/wiki/B-tree', 'https://arxiv.org/pdf/1902.04738.pdf'] | 2
20,321,175 | <p>There are recent algorithms which remove backgrounds (detect foreground) far better than the standard GMM implementation in OpenCV.</p>
<p>For example, there is a block-based classifier cascade approach described in <a href="http://arxiv.org/pdf/1303.4160v1.pdf">this journal article</a>, along with its C++ based <a href="http://arma.sourceforge.net/foreground/">source code</a>.</p> | 2013-12-02 04:59:35.353000+00:00 | 2013-12-02 04:59:35.353000+00:00 | null | null | 10,458,633 | <p>I'm using OpenCV2.2 to implement moving object detection with the method of Background Subtraction. And I use the Gaussian Mixture Model (GMM) method to model the background reference image. </p>
<p>I directly get the foreground pixels (or foreground mask) by using the class cv::BackgroundSubtractorMOG provided in OpenCV2.2. It's convenient, but the foreground mask returned by cv::BackgroundSubtractorMOG is not as good as I expected. In addition, it seems that cv::BackgroundSubtractorMOG performs worse than the GMM method written in C provided in OpenCV1.0.</p>
<p>The following is my code in OpenCV2.2:</p>
<pre><code>cv::BackgroundSubtractorMOG mog;
mog(frame, fgMask, 0.01);
</code></pre>
<p>So, did I use the method in a wrong way? </p>
<p>By the way, does cv::BackgroundSubtractorMOG perform shadow removal on the foreground pixels? </p>
<p>Thank you very much.</p> | 2012-05-05 03:39:28.260000+00:00 | 2017-08-16 17:06:28.307000+00:00 | 2012-06-01 10:49:32.273000+00:00 | opencv|computer-vision|background-subtraction|mog|shadow-removal | ['http://arxiv.org/pdf/1303.4160v1.pdf', 'http://arma.sourceforge.net/foreground/'] | 2 |
49,907,341 | <p>When using a <code>size=100</code>, there are <em>not</em> "100 vectors" per text example – there is <em>one</em> vector, which includes 100 scalar dimensions (each a floating-point value, like <code>0.513</code> or <code>-1.301</code>). </p>
<p>Note that the values represent points in 100-dimensional space, and the individual dimensions/axes don't have easily-interpretable meanings. Rather, it is only the <em>relative distances</em> and <em>relative directions</em> between individual vectors that have useful meaning for text-based applications, such as assisting in information-retrieval or automatic classification.</p>
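<p>As a small illustration (a sketch with gensim; the tiny corpus is made up), a text of 5 words still maps to a single 100-dimensional vector:</p>
<pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [TaggedDocument(words=['a', 'b', 'c', 'd', 'e'], tags=[0]),
        TaggedDocument(words=['f', 'g', 'h'], tags=[1])]
model = Doc2Vec(docs, vector_size=100, min_count=1, epochs=10)

vec = model.infer_vector(['a', 'b', 'c', 'd', 'e'])
print(vec.shape)   # (100,): one vector with 100 dimensions, not 100 vectors
</code></pre>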
<p>The method for computing the vectors is described in the paper <a href="https://arxiv.org/abs/1405.4053" rel="nofollow noreferrer">'Distributed Representation of Sentences and Documents' by Le & Mikolov</a>. But, it is closely associated to the 'word2vec' algorithm, so understanding that 1st may help, such as via its <a href="https://arxiv.org/abs/1301.3781" rel="nofollow noreferrer">first</a> and <a href="https://arxiv.org/abs/1310.4546" rel="nofollow noreferrer">second</a> papers. If that style of paper isn't your style, queries like <code>[word2vec tutorial]</code> or <code>[how does word2vec work]</code> or <code>[doc2vec intro]</code> should find more casual beginning descriptions. </p> | 2018-04-18 19:11:17.280000+00:00 | 2018-04-18 19:11:17.280000+00:00 | null | null | 49,898,287 | <p>If I pass a Sentence containing 5 words to the Doc2Vec model and if the size is 100, there are 100 vectors. I'm not getting what are those vectors. If I increase the size to 200, there are 200 vectors for just a simple sentence. Please tell me how are those vectors calculated.</p> | 2018-04-18 11:13:18.457000+00:00 | 2018-04-18 19:11:17.280000+00:00 | null | python-3.x|nlp|doc2vec | ['https://arxiv.org/abs/1405.4053', 'https://arxiv.org/abs/1301.3781', 'https://arxiv.org/abs/1310.4546'] | 3 |
69,640,479 | <p>There are 3 ways you can approach your problem-</p>
<ol>
<li>There exists a very cool tool called <a href="https://github.com/hanxiao/bert-as-service" rel="nofollow noreferrer">bert-as-service</a>. It maps a sentence to a fixed length word embeddings based on the model you choose to use. The documentation is very well written.
Install</li>
</ol>
<pre><code>pip install bert-serving-server # server
pip install bert-serving-client # client, independent of bert-serving-server
</code></pre>
<p>Download one of the pre-trained models available at official BERT repo- <a href="https://github.com/google-research/bert" rel="nofollow noreferrer">link</a></p>
<p>Start the server</p>
<pre><code>bert-serving-start -model_dir /model_directory/ -num_worker=4
</code></pre>
<p>Generate embedding</p>
<pre><code>from bert_serving.client import BertClient
bc = BertClient()
vectors=bc.encode(your_list_of_sentences)
</code></pre>
<ol start="2">
<li><p>There exists an academic paper by the name of <a href="https://arxiv.org/abs/1908.10084" rel="nofollow noreferrer">Sentence-BERT</a>, along with its <a href="https://github.com/UKPLab/sentence-transformers" rel="nofollow noreferrer">GitHub repo</a></p>
</li>
<li><p>You are doing a lot of manual work - padding, attention mask, etc.
The tokenizer does it for you automatically; check the <a href="https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__" rel="nofollow noreferrer">documentation</a>. And, if you look at the implementation of the <a href="https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L1012" rel="nofollow noreferrer">forward()</a> call of the model, it returns:</p>
</li>
</ol>
<pre><code> return (sequence_output, pooled_output) + encoder_outputs[1:]
</code></pre>
<p>For BERT base (hidden size 768), the sequence output is the embedding of every token of the sequence, so if your input size [max_len] is 510, then each token is embedded in a space of 768 dimensions, making the sequence output of size 768 * 510 * 1.</p>
<p>The pooled output is the one where all the embeddings are squished into a single space of 768 * 1 dimensions.</p>
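<p>A minimal sketch of that (assuming transformers with PyTorch; the input sentence is just an example), taking the [CLS] token's last hidden state, next to the pooled output:</p>
<pre><code>import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')   # no trailing comma, so this stays a model, not a tuple

inputs = tokenizer("I like to play cricket", return_tensors='pt',
                   max_length=510, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

cls_embedding = outputs[0][:, 0, :]   # last hidden state of the [CLS] token, shape [1, 768]
pooled_output = outputs[1]            # [CLS] state after the pooler layer, shape [1, 768]
</code></pre>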
<p>So I think you will want to use Pooled output for simple embeddings.</p> | 2021-10-20 04:49:03.787000+00:00 | 2021-10-23 15:15:38.483000+00:00 | 2021-10-23 15:15:38.483000+00:00 | null | 68,603,462 | <p>I am following this link:</p>
<p><a href="https://stackoverflow.com/questions/63209960/bert-document-embedding">BERT document embedding</a></p>
<p>I want to extract sentence-embedding using <code>BERT</code> model using <code>CLS</code> token. Here is the code:</p>
<pre><code>import torch
from keras.preprocessing.sequence import pad_sequences
import tensorflow as tf
def text_to_embedding(tokenizer, model, in_text):
'''
Uses the provided BERT 'model' and 'tokenizer' to generate a vector
representation of the input string, 'in_text'.
Returns the vector stored as a numpy ndarray.
'''
# ===========================
# STEP 1: Tokenization
# ===========================
MAX_LEN = 510
# 'encode' will:
# (1) Tokenize the sentence
# (2) Prepend the '[CLS]' token to the start.
# (3) Append the '[SEP]' token to the end.
# (4) Map tokens to their IDs.
input_ids = tokenizer.encode(
in_text, # sentence to encode.
add_special_tokens = True, # Add '[CLS]' and '[SEP]'
max_length = MAX_LEN, # Truncate all sentences.
#return_tensors = 'pt' # Return pytorch tensors.
)
print(input_ids)
print(tokenizer.decode(input_ids))
# Pad our input tokens. Truncation was handled above by the 'encode'
# function, which also makes sure that the '[SEP]' token is placed at the
# end *after* truncating.
# Note: 'pad_sequences' expects a list of lists, but we only have one
# piece of text, so we surround 'input_ids' with an extra set of brackets.
results = tokenizer(in_text, max_length=MAX_LEN, truncation=True)
input_ids = results.input_ids
attn_mask = results.attention_mask
print(results)
# Cast to tensors.
input_ids = torch.tensor(input_ids)
attn_mask = torch.tensor(attn_mask)
# Add an extra dimension for the "batch" (even though there is only one
# input in this batch)
input_ids = input_ids.unsqueeze(0)
attn_mask = attn_mask.unsqueeze(0)
# ===========================
# STEP 1: Tokenization
# ===========================
# Put the model in evaluation mode--the dropout layers behave differently
# during evaluation.
#model.eval()
# Copy the inputs to the GPU
#input_ids = input_ids.to(device)
#attn_mask = attn_mask.to(device)
# telling the model not to build the backward graph will make this
# a little quicker.
with torch.no_grad():
# Forward pass, returns hidden states and predictions
# This will return the logits rather than the loss because we have
# not provided labels.
outputs = model(input_ids = input_ids,token_type_ids = None,attention_mask = attn_mask)
hidden_states = outputs[2]
#Sentence Vectors
#To get a single vector for our entire sentence we have multiple
#application-dependent strategies, but a simple approach is to
#average the second to last hiden layer of each token producing
#a single 768 length vector.
# `hidden_states` has shape [13 x 1 x ? x 768]
# `token_vecs` is a tensor with shape [? x 768]
token_vecs = hidden_states[-2][0]
# Calculate the average of all ? token vectors.
sentence_embedding = torch.mean(token_vecs, dim=0)
# Move to the CPU and convert to numpy ndarray.
sentence_embedding = sentence_embedding.detach().cpu().numpy()
return(sentence_embedding)
from transformers import BertTokenizer, BertModel
# Load pre-trained model (weights)
model = BertModel.from_pretrained('bert-base-uncased',output_hidden_states = True), # Whether the model returns all hidden-states.
#model.cuda()
from transformers import BertTokenizer
# Load the BERT tokenizer.
print('Loadin BERT tokenizer...')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
k=text_to_embedding(tokenizer, model, "I like to play cricket")
</code></pre>
<p>Output:</p>
<pre><code><ipython-input-14-f03410b60544> in text_to_embedding(tokenizer, model, in_text)
77 # This will return the logits rather than the loss because we have
78 # not provided labels.
---> 79 outputs = model(input_ids = input_ids,token_type_ids = None,attention_mask = attn_mask)
80
81
TypeError: 'tuple' object is not callable
</code></pre>
<p>I get an error in this line <code>outputs = model(input_ids = input_ids,token_type_ids = None,attention_mask = attn_mask)</code></p>
<p>Instead of using average of hidden layer, I want to modify code to get embedding for input sentence using <code>CLS </code> token.</p> | 2021-07-31 15:28:41.577000+00:00 | 2021-10-23 15:15:38.483000+00:00 | null | python-3.x|embedding|bert-language-model|transformer-model | ['https://github.com/hanxiao/bert-as-service', 'https://github.com/google-research/bert', 'https://arxiv.org/abs/1908.10084', 'https://github.com/UKPLab/sentence-transformers', 'https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__', 'https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L1012'] | 6 |
42,619,527 | <ul>
<li><p>Your loss is <code>-8.27284e-09</code>, which practically speaking is zero and not negative (caffe uses single-precision floating point numbers and not double precision).<br>
What loss layer are you using? <code>"SoftmaxWithLoss"</code>?</p></li>
<li><p><code>bias_filler</code> and <code>weight_filler</code> parameters are added when we want caffe to <strong>randomly</strong> initialize the weights of the layer, usually when we start training from scratch. If you start training from an existing model (i.e. fine-tuning), these arguments have no meaning.</p></li>
<li><p><code>std</code> value is computed based on the fan-in and fan-out (i.e., number of in-channels and out channels) in order to keep the statistics of the Blob values roughly zero-mean and unit variance.<br>
You can find an analysis of these parameters in <a href="https://arxiv.org/abs/1502.01852" rel="nofollow noreferrer"><em>Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun</em> <strong>Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification</strong> (arXiv 2015)</a>.</p></li>
</ul> | 2017-03-06 07:09:46.333000+00:00 | 2017-03-06 07:09:46.333000+00:00 | null | null | 42,610,050 | <p>After I added <code>xavier</code> initialization to every convolution layer, the loss starts becoming <strong>negative</strong>. Could someone give any suggestion/reason?
I added the following lines to all convolutional layers:</p>
<pre><code>weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
</code></pre>
<hr>
<pre><code>I0305 14:31:53.356343 11179 solver.cpp:219] Iteration 0 (-4.02766e+28 iter/s, 0.528933s/100 iters), loss = 2.05371
I0305 14:31:53.356374 11179 solver.cpp:238] Train net output #0: accuracy = 0.11937
I0305 14:31:53.356384 11179 solver.cpp:238] Train net output #1: loss = 2.05371 (* 1 = 2.05371 loss)
I0305 14:31:53.356395 11179 sgd_solver.cpp:105] Iteration 0, lr = 0.0001
I0305 14:32:28.728870 11179 solver.cpp:219] Iteration 100 (2.82699 iter/s, 35.3733s/100 iters), loss = 0.0270034
I0305 14:32:28.729014 11179 solver.cpp:238] Train net output #0: accuracy = 1
I0305 14:32:28.729028 11179 solver.cpp:238] Train net output #1: loss = 0 (* 1 = 0 loss)
I0305 14:32:28.729034 11179 sgd_solver.cpp:105] Iteration 100, lr = 0.0001
I0305 14:33:03.729997 11179 solver.cpp:219] Iteration 200 (2.85701 iter/s, 35.0017s/100 iters), loss = -8.27284e-09
I0305 14:33:03.730154 11179 solver.cpp:238] Train net output #0: accuracy = 1
I0305 14:33:03.730167 11179 solver.cpp:238] Train net output #1: loss = 0 (* 1 = 0 loss)
I0305 14:33:03.730172 11179 sgd_solver.cpp:105] Iteration 200, lr = 0.0001
I0305 14:33:38.885211 11179 solver.cpp:219] Iteration 300 (2.84449 iter/s, 35.1557s/100 iters), loss = -8.27284e-09
I0305 14:33:38.885368 11179 solver.cpp:238] Train net output #0: accuracy = 1
I0305 14:33:38.885383 11179 solver.cpp:238] Train net output #1: loss = 0 (* 1 = 0 loss)
I0305 14:33:38.885387 11179 sgd_solver.cpp:105] Iteration 300, lr = 0.0001
I0305 14:34:14.174548 11179 solver.cpp:219] Iteration 400 (2.83368 iter/s, 35.2898s/100 iters), loss = -8.27284e-09
I0305 14:34:14.174702 11179 solver.cpp:238] Train net output #0: accuracy = 1
I0305 14:34:14.174720 11179 solver.cpp:238] Train net output #1: loss = 0 (* 1 = 0 loss)
I0305 14:34:14.174724 11179 sgd_solver.cpp:105] Iteration 400, lr = 0.0001
I0305 14:34:49.578112 11179 solver.cpp:219] Iteration 500 (2.82453 iter/s, 35.4041s/100 iters), loss = -8.27284e-09
I0305 14:34:49.578254 11179 solver.cpp:238] Train net output #0: accuracy = 1
I0305 14:34:49.578269 11179 solver.cpp:238] Train net output #1: loss = 0 (* 1 = 0 loss)
I0305 14:34:49.578272 11179 sgd_solver.cpp:105] Iteration 500, lr = 0.0001
I0305 14:35:25.042238 11179 solver.cpp:219] Iteration 600 (2.81971 iter/s, 35.4646s/100 iters), loss = -8.27284e-09
I0305 14:35:25.042421 11179 solver.cpp:238] Train net output #0: accuracy = 1
I0305 14:35:25.042438 11179 solver.cpp:238] Train net output #1: loss = 0 (* 1 = 0 loss)
I0305 14:35:25.042443 11179 sgd_solver.cpp:105] Iteration 600, lr = 0.0001
I0305 14:36:00.540053 11179 solver.cpp:219] Iteration 700 (2.81704 iter/s, 35.4983s/100 iters), loss = -8.27284e-09
I0305 14:36:00.540194 11179 solver.cpp:238] Train net output #0: accuracy = 1
I0305 14:36:00.540207 11179 solver.cpp:238] Train net output #1: loss =
</code></pre>
<p>Another question of mine is that in some networks, a <code>Gaussian</code> filler is used instead. Like:</p>
<pre><code>weight_filler {
type: "gaussian"
std: 0.005
}
bias_filler {
type: "constant"
value: 0.1
}
</code></pre>
<ol>
<li><p>Why are we adding these parameters to the convolutional layer? Is it
because we are training the network from scratch?</p></li>
<li><p>How is a specific value assigned to <code>std</code> and/or the <code>bias_filler</code>
value?</p></li>
</ol>
<p>I really appreciate your help.</p> | 2017-03-05 15:15:11.267000+00:00 | 2017-03-06 07:09:46.333000+00:00 | null | deep-learning|caffe|pycaffe|matcaffe | ['https://arxiv.org/abs/1502.01852'] | 1 |
40,083,061 | <p>You get ~99% accuracy when you test your model with <code>is_training=True</code> only because of the batch size of 100.
If you change the batch size to 1 your accuracy will decrease.</p>
<p>This is due to the fact that you're computing the exponential moving average and variance for the input batch and then you're (batch-)normalizing the layers' output using these values.</p>
<p>The <code>batch_norm</code> function has the parameter <code>variables_collections</code> that helps you store the computed moving average and variance during the training phase and reuse them during the test phase.</p>
<p>If you define a collection for these variables, then the <code>batch_norm</code> layer will use them during the testing phase, instead of calculating new values.</p>
<p>Therefore, if you change your batch normalization layer definition to</p>
<pre><code>local4_bn = tf.contrib.layers.batch_norm(local4, is_training=True, variables_collections=["batch_norm_non_trainable_variables_collection"])
</code></pre>
<p>The layer will store the computed variables into the <code>"batch_norm_non_trainable_variables_collection"</code> collection.</p>
<p>In the test phase, when you pass the <code>is_training=False</code> parameter, the layer will re-use the computed values that it finds in the collection.</p>
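<p>For example (a sketch based on the same call as above, not additional API), the test-time layer definition would simply flip the flag and point at the same collection:</p>
<pre><code># test phase: reuse the statistics stored in the collection during training
local4_bn = tf.contrib.layers.batch_norm(
    local4,
    is_training=False,
    variables_collections=["batch_norm_non_trainable_variables_collection"])
</code></pre>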
<p>Note that the moving average and the variance are not trainable parameters, so if you save only your model's trainable parameters in the checkpoint files, you have to manually add the non-trainable variables stored in the previously defined collection.</p>
<p>You can do it when you create the <code>Saver</code> object:</p>
<pre><code>saver = tf.train.Saver(tf.trainable_variables() + tf.get_collection_ref("batch_norm_non_trainable_variables_collection") + otherlistofvariables)
</code></pre>
<p>In addition, since batch normalization can limit the expressive power of the layer it is applied to (because it restricts the range of the values), you should let the network learn the parameters <code>gamma</code> and <code>beta</code> (the affine transformation coefficients described in the <a href="https://arxiv.org/abs/1502.03167" rel="nofollow">paper</a>), which allow the layer to recover an affine transformation and thus increase its representational power.</p>
<p>You can enable the learning of these parameters by setting the <code>center</code> and <code>scale</code> parameters of the <code>batch_norm</code> function to <code>True</code>, in this way:</p>
<pre><code>local4_bn = tf.contrib.layers.batch_norm(
local4,
is_training=True,
center=True, # beta
scale=True, # gamma
variables_collections=["batch_norm_non_trainable_variables_collection"])
</code></pre> | 2016-10-17 09:42:34.457000+00:00 | 2016-10-18 09:45:35.073000+00:00 | 2016-10-18 09:45:35.073000+00:00 | null | 40,081,697 | <p>I am using the official Batch Normalization (BN) function (<a href="https://github.com/tensorflow/tensorflow/blob/b826b79718e3e93148c3545e7aa3f90891744cc0/tensorflow/contrib/layers/python/layers/layers.py#L100" rel="noreferrer">tf.contrib.layers.batch_norm()</a>) of Tensorflow on the MNIST data. I use the following code for adding BN:</p>
<pre><code>local4_bn = tf.contrib.layers.batch_norm(local4, is_training=True)
</code></pre>
<p>During testing, I change "is_training=False" in the above line of code and observe only 20% accuracy. However, it gives ~99% accuracy if I use the above code also for testing (i.e., keeping is_training=True) with a batch size of 100 images. This observation indicates that the <em>exponential moving average and variance</em> computed by <a href="https://github.com/tensorflow/tensorflow/blob/b826b79718e3e93148c3545e7aa3f90891744cc0/tensorflow/contrib/layers/python/layers/layers.py#L100" rel="noreferrer">batch_norm()</a> are probably incorrect or I am missing something in my code.</p>
<p>Can anyone please answer about the solution of the above problem.</p> | 2016-10-17 08:28:39.503000+00:00 | 2018-05-09 05:45:59.717000+00:00 | null | python|tensorflow | ['https://arxiv.org/abs/1502.03167'] | 1 |
46,267,023 | <blockquote>
<p>What do the x, b, W and y variables represent?</p>
</blockquote>
<p>These are symbolic variables that the model is going to use - input, output and neural network parameters.
<code>x</code> and <code>y</code> are the data; they don't change, which is why they're defined as <code>tf.placeholder</code>. <code>W</code> and <code>b</code> are learnable parameters (in TF terminology <em>trainable</em>). Their initial value is less important than their dimensions (in fact, <a href="https://arxiv.org/abs/1511.06422" rel="nofollow noreferrer">not exactly</a>, but that's an advanced topic). In this example, both <code>W</code> and <code>b</code> are one-dimensional, though usually <code>W</code> is a matrix and <code>b</code> is a vector.</p>
<p>Together, all defined variables form a so-called <em>computational graph</em>.</p>
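<p>For contrast, a more typical dense layer, where <code>W</code> is a matrix and <code>b</code> is a vector, could be declared like this (a sketch using the same TF 1.x API as the question; the shapes are made up purely for illustration):</p>
<pre><code>import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])  # a batch of 3-feature inputs
W = tf.Variable(tf.zeros([3, 2]))                # matrix: 3 inputs -> 2 outputs
b = tf.Variable(tf.zeros([2]))                   # one bias per output
layer = tf.matmul(x, W) + b                      # output shape: [None, 2]
</code></pre>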
<blockquote>
<p>Why is the code passing in a parameter value of 0.01 to the
GradientDescentOptimizer function?</p>
</blockquote>
<p>This is the <em>learning rate</em>. In simple terms, it's the step size the optimizer takes when it minimizes the objective function <code>loss</code>. The learning rate is usually close to 0, but the exact value depends on many factors. In fact, it's one of the common hyperparameters that researchers tune manually. <code>0.01</code> seems like a good starting point, because it works well enough in many cases.</p>
<blockquote>
<p>What does y_train represent here?</p>
</blockquote>
<p><code>x_train</code> and <code>y_train</code> are the training data; the first one is the input and the second one is the expected label. In this case, you are saying that input <code>1</code> should lead to result <code>0</code>, input <code>2</code> to <code>-1</code>, and so on. Hopefully, the network is going to figure out from these 4 examples that it should learn the mapping <code>y = 1 - x</code> (side note: this "network" is just a linear model, which fits these points perfectly). It's called <em>supervised learning</em>.</p>
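<p>A tiny check of that claim (plain Python, no TensorFlow needed):</p>
<pre><code># every training pair satisfies y = 1 - x, which is what W ~ -1, b ~ 1 encodes
for x_i, y_i in zip([1, 2, 3, 4], [0, -1, -2, -3]):
    assert y_i == 1 - x_i
</code></pre>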
<blockquote>
<p>What do the variables curr_W, curr_b represent here?</p>
</blockquote>
<p>First of all, note that <code>curr_W</code> and <code>curr_b</code> are ordinary Python variables, while <code>W</code> and <code>b</code> are symbolic variables. Symbolic variables define how your computation is organized and they take different values during training. <code>curr_W</code> and <code>curr_b</code> are just one of those values after some iterations. Basically, you take a snapshot of the model and print it out to see what the neural network has learned. The result values <code>-1</code> and <code>1</code> (almost) mean that the neural network successfully learned the linear transformation <code>y = 1 - x</code>.</p> | 2017-09-17 17:42:41.870000+00:00 | 2017-09-17 17:42:41.870000+00:00 | null | null | 46,258,954 | <p>I need help from someone to explain to me the code below. I'm kind of new to TensorFlow, but I have specific questions defined within the code below</p>
<pre><code>import tensorflow as tf
# Model parameters
# Why are the variables initialized with .3 and -.3?
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
</code></pre>
<p>What do the x, b, W and y variables represent?</p>
<pre><code># Model input and output
x = tf.placeholder(tf.float32) # this is the input
linear_model = W * x + b # this is the linear_model operation
y = tf.placeholder(tf.float32) # Is this the output we're trying to predict.
</code></pre>
<p>Why is the code passing in a parameter value of 0.01 to the GradientDescentOptimizer function?</p>
<pre><code># loss - measure how far apart the current model is from the provided data.
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01) # Why are we passing the value '0.01'
train = optimizer.minimize(loss)
</code></pre>
<p>What does y_train represent here? </p>
<pre><code># training data
x_train = [1, 2, 3, 4] # the input variables we know
y_train = [0, -1, -2, -3] #
# training loop
init = tf.global_variables_initializer() # init is a handle to the TensorFlow sub-graph that initializes all the global variables. Until we call sess.run, the variables are uninitialized
sess = tf.Session() # Session encapsulates the control and state of the TensorFlow runtime. It's used to evaluate the nodes; we must run the computational graph within a session
sess.run(init) # reset values to wrong
for i in range(1000):
sess.run(train, {x: x_train, y: y_train})
</code></pre>
<p>What do the variables curr_W, curr_b represent here?</p>
<pre><code># evaluate training accuracy
# What do the variables W and b represent?
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
</code></pre>
<p>The code example comes from Tensorflow website: <a href="https://www.tensorflow.org/get_started/get_started#complete_program" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/get_started#complete_program</a></p> | 2017-09-16 22:22:26.303000+00:00 | 2017-09-17 17:42:41.870000+00:00 | 2017-09-16 22:39:20.340000+00:00 | python|machine-learning|tensorflow | ['https://arxiv.org/abs/1511.06422'] | 1 |
69,059,998 | <p>In a recent paper, the authors designed a neuron they called the <a href="https://arxiv.org/pdf/2108.12943.pdf" rel="nofollow noreferrer">Growing Cosine Unit (GCU)</a>.</p>
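<p>As a quick aside (my own illustration, not from the paper): even a plain cosine activation lets a single unit with hand-picked, not learned, weights separate XOR, which is exactly the effect of non-monotonicity the question asks about:</p>
<pre><code>import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
w = np.array([np.pi, np.pi])            # pre-activation z = pi * (x1 + x2)
pred = (np.cos(X @ w) < 0).astype(int)  # threshold the non-monotonic activation
print(pred)                             # [0 1 1 0] -> XOR
</code></pre>
<p>The GCU activation itself, f(z) = z * cos(z), is plotted below:</p>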
<p><a href="https://i.stack.imgur.com/o3bP7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o3bP7.png" alt="enter image description here" /></a></p> | 2021-09-05 01:26:26.447000+00:00 | 2021-09-05 01:26:26.447000+00:00 | null | null | 30,412,427 | <p>I've always heard that the XOR problem can not be solved by a single layer perceptron (not using a hidden layer) since it is not linearly separable. I understand that there is no linear function that can separate the classes. </p>
<p>However, what if we use a non-monotonic activation function like sin() or cos() is this still the case? I would imagine these types of functions might be able to separate them.</p> | 2015-05-23 12:02:39.257000+00:00 | 2021-09-05 01:26:26.447000+00:00 | 2015-05-23 12:28:35.663000+00:00 | neural-network|xor|perceptron | ['https://arxiv.org/pdf/2108.12943.pdf', 'https://i.stack.imgur.com/o3bP7.png'] | 2 |
58,404,801 | <p>You may be interested in this paper (accepted to EMNLP 2019): <a href="https://arxiv.org/abs/1903.11222" rel="nofollow noreferrer">https://arxiv.org/abs/1903.11222</a></p>
<p>In this paper, we experiment with several different ways of dealing with this exact problem (including the 2 mentioned by @christopher-manning above). TLDR, the main takeaways are:</p>
<ol>
<li>Using a truecaser on test data is a bad idea, because truecasers perform more poorly than you think.</li>
<li>Caseless models work pretty well.</li>
<li>But overall the best option is to augment the original training data with caseless training data (just <code>train_data.lower()</code>) and retrain the model. </li>
</ol> | 2019-10-16 02:06:59.897000+00:00 | 2019-10-16 02:06:59.897000+00:00 | null | null | 45,097,507 | <p>I got a problem that CoreNLP can only recognize named entity such as Kobe Bryant that is beginning with a uppercase char, but can't recognize kobe bryant as a person!!! So how to recognize a named entity that is beginning with a lowercase char by CoreNLP ???? Appreciate it !!!!</p> | 2017-07-14 07:46:48.047000+00:00 | 2019-10-16 02:06:59.897000+00:00 | null | java|stanford-nlp | ['https://arxiv.org/abs/1903.11222'] | 1 |
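<p>A minimal sketch of that augmentation step (the variable name and sentences are made up; in practice you would lower-case your own labeled training sentences):</p>
<pre><code>train_sentences = ["Kobe Bryant joined the Lakers in 1996 .",
                   "Stanford is in California ."]
augmented = train_sentences + [s.lower() for s in train_sentences]
# retrain the NER model on `augmented` instead of `train_sentences`
</code></pre>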
30,470,525 | <p>One interesting angle on this question is the idea of using "combinatorial block designs". We want each of our sorts to give us as much information as possible, so we don't want the same pair of elements in two different sorts. That is actually achievable: we can use a combinatorial structure called a "balanced incomplete block design" (BIBD). We are looking for a (25,5,1)-BIBD, meaning there are 25 elements (25), blocked by five at a time (5), such that each pair of elements appears in exactly one block (1).</p>
<p>Such block designs have been extensively explored. It turns out that there is a (25,5,1)-BIBD. An explicit construction is given in, e.g., <a href="http://arxiv.org/ftp/arxiv/papers/0909/0909.3533.pdf" rel="nofollow">http://arxiv.org/ftp/arxiv/papers/0909/0909.3533.pdf</a> page 8.</p>
<pre><code>{(1,2,3,4,5) (6,7,8,9,10) (11,12,13,14,15) (16,17,18,19,20) (21,22,23,24,25)
(1,6,11,16,21) (2,7,12,17,21) (3,8,13,18,21) (4,9,14,19,21) (5,10,15,20,21)
(2,8,14,20,22) (3,10,11,19,22) (5,9,12,16,22) (1,7,15,18,22) (4,6,13,17,22)
(3,9,15,17,23) (5,6,14,18,23) (4,7,11,20,23) (2,10,13,16,23) (1,8,12,19,23)
(4,10,12,18,24) (1,9,13,20,24) (2,6,15,19,24) (5,8,11,17,24) (3,7,14,16,24)
(5,7,13,19,25) (4,8,15,16,25) (1,10,14,17,25) (3,6,12,20,25) (2,9,11,18,25)}
</code></pre>
<p><a href="http://www.sagemath.org/" rel="nofollow">Sage</a> can also be used to construct BIBDs.</p>
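<p>If you just want to sanity-check a candidate design, counting pair occurrences is enough; a small Python sketch (where <code>blocks</code> is assumed to hold the 30 blocks quoted above as tuples of integers):</p>
<pre><code>from itertools import combinations
from collections import Counter

def is_25_5_1_design(blocks):
    # every one of the C(25,2) = 300 unordered pairs must occur in exactly one block
    pairs = Counter(frozenset(p) for b in blocks for p in combinations(b, 2))
    elements = {e for b in blocks for e in b}
    return (len(elements) == 25
            and all(len(b) == 5 for b in blocks)
            and len(pairs) == 300
            and set(pairs.values()) == {1})
</code></pre>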
<p>That block design has 30 blocks, so it's far from optimal for this problem. But perhaps it can be combined with transitivity to devise a faster algorithm for the problem.</p>
<p><strong>Update</strong></p>
<p>It turns out that my suggestion is not much use in this problem, except to place an upper bound on the number of steps in a solution (30, the size of the BIBD). To gauge its performance, I wrote some "test bed" software (see below) which gives a visual representation of the progress of the sort. </p>
<p>I represented the state of sortedness of the data with two graphs: g, the graph of all potential relationships among the 25 items, which starts as a complete directed graph on the 25 items, and g1, the graph of all known relationships among the 25 items. g and g1 have an obvious relationship, so keeping track of two graphs is clearly redundant, but different kinds of information can be easily extracted from g and g1 which I why I keep track of them both.</p>
<p>g starts with 600 edges, each of the 25*24 directed edges between two items. We are done when g has no non-trivial (i.e., size greater than 1) strongly connected components, in which case g can be unambiguously toposorted to give the correct ordering. That occurs when there are exactly 300 edges in g. Similarly, g1 starts with no edges, and we are done when the same 300 edges appear in g1 as in g.</p>
<p>Picking 5 items and then sorting them immediately adds up to 4 + 3 + 2 + 1 = 10 new edges to g1 (and removes the same number of edges from g). I say "up to" because if any of those edges are already in g1, they don't get added to the count. So if we already know A -> B and nothing about relationships between any other pair in A,B,C,D,E, then sorting those five only gives 9 new edges in g1.</p>
<p>On the other hand, we can often get more arrows from a sort by exploiting transitivity. If we know that B -> F, and a sort tells us that A -> B, we can deduce that A -> F, which is an extra arrow in g1. The set of all additional arrows found through transitivity can be obtained by finding the "transitive closure" of g1 with the new arrows added as a result of the sort. The corresponding effect on g (removing the reversed arrows) is then easy to apply.</p>
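<p>A minimal sketch of that bookkeeping in plain Python (sets of ordered pairs instead of JGraphT graphs; <code>known</code> plays the role of g1, and the naive closure loop stands in for the library's transitive closure):</p>
<pre><code>def add_sorted_block(known, block_in_order):
    # record the 10 relations implied by one 5-element sort
    for i, a in enumerate(block_in_order):
        for b in block_in_order[i + 1:]:
            known.add((a, b))
    # naive transitive closure: keep adding implied relations until stable
    changed = True
    while changed:
        changed = False
        for (a, b) in list(known):
            for (c, d) in list(known):
                if b == c and (a, d) not in known:
                    known.add((a, d))
                    changed = True
    return known
</code></pre>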
<p>In my software below, I give an image of the graph g, which starts out as 600 directed edges pictured as 300 blue edges with arrows on each end. Each sorting step followed by transitive closure will replace some double-sided blue arrows with single-sided yellow arrows. We know we're finished when all the blue arrows are gone; equivalently, when g has no non-trivial strongly connected components.</p>
<p>In the software below, I chose the 5 items to be sorted by picking a random unused block from the BIBD I gave earlier, then marked the block as used. It generally takes all 30 blocks to sort the 25 items in this manner. As you can see from the visualization, the process starts out well enough, but misses obvious speedups from steps 20 to 30. My conclusion is that the fastest solutions to this problem will have to be adaptive, picking 5 items to sort based on prior results.</p>
<p>Looking at the problem in this way has given me some new ideas that I may explore in another answer to the question. In the meantime, here are the two Java classes for the program I used.</p>
<p>The GUI framework:</p>
<pre><code>package org.jgrapht.demo;
import java.awt.BorderLayout;
import java.awt.Button;
import java.awt.Color;
import java.awt.EventQueue;
import java.awt.Font;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.BoxLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.UIManager;
public class FiveSort extends JFrame {
private static final long serialVersionUID = 1L;
private Font smallFont = new Font(Font.DIALOG, Font.PLAIN, 12);
private Font largeFont = new Font(Font.DIALOG, Font.PLAIN, 36);
private JLabel stepsLabel = new JLabel("0");
private JLabel maxLabel = new JLabel("0");
private JLabel averageLabel = new JLabel("");
private int rounds = 0;
private int totalSteps = 0;
private double averageSteps = 0;
private int maxSteps = 0;
public static void main(String[] args) {
EventQueue.invokeLater(new Runnable() {
@Override
public void run() {
new FiveSort();
}
});
}
public FiveSort() {
initGUI();
setLocationRelativeTo(null);
setTitle("Five Sort");
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
pack();
setVisible(true);
}
public void initGUI() {
try {
// UIManager.setLookAndFeel( UIManager.getCrossPlatformLookAndFeelClassName() );
UIManager.setLookAndFeel( UIManager.getSystemLookAndFeelClassName() );
} catch (Exception e) {
e.printStackTrace();
}
// title label
JLabel titleLabel = new JLabel("Five Sort");
titleLabel.setFont(largeFont);
titleLabel.setBackground(Color.BLACK);
titleLabel.setForeground(Color.WHITE);
titleLabel.setOpaque(true);
titleLabel.setHorizontalAlignment(JLabel.CENTER);
add(titleLabel,BorderLayout.PAGE_START);
// main panel
JPanel mainPanel = new JPanel();
mainPanel.setLayout(new BoxLayout(mainPanel,BoxLayout.Y_AXIS));
add(mainPanel);
// graph panel
FiveSortPanel graphPanel = new FiveSortPanel();
mainPanel.add(graphPanel,BorderLayout.CENTER);
// stats panel
JPanel statsPanel = new JPanel();
statsPanel.setBackground(Color.BLACK);
mainPanel.add(statsPanel);
JLabel stepsTitleLabel = new JLabel("Current Steps: ");
stepsTitleLabel.setFont(smallFont);
stepsTitleLabel.setForeground(Color.WHITE);
statsPanel.add(stepsTitleLabel);
stepsLabel.setFont(largeFont);
stepsLabel.setForeground(Color.WHITE);
stepsLabel.setText("" + graphPanel.getSteps());
statsPanel.add(stepsLabel);
JLabel maxTitleLabel = new JLabel("Max Steps: ");
maxTitleLabel.setFont(smallFont);
maxTitleLabel.setForeground(Color.WHITE);
statsPanel.add(maxTitleLabel);
maxLabel.setFont(largeFont);
maxLabel.setForeground(Color.WHITE);
maxLabel.setText("" + maxSteps);
statsPanel.add(maxLabel);
JLabel averageTitleLabel = new JLabel("Avg Steps: ");
averageTitleLabel.setFont(smallFont);
averageTitleLabel.setForeground(Color.WHITE);
statsPanel.add(averageTitleLabel);
averageLabel.setFont(largeFont);
averageLabel.setForeground(Color.WHITE);
averageLabel.setText("");
statsPanel.add(averageLabel);
// button panel
JPanel buttonPanel = new JPanel();
buttonPanel.setBackground(Color.BLACK);
add(buttonPanel,BorderLayout.PAGE_END);
Button newButton = new Button("Step");
newButton.setFocusable(false);
newButton.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
if (!graphPanel.isComplete()) {
graphPanel.step();
stepsLabel.setText("" + graphPanel.getSteps());
}
}
});
buttonPanel.add(newButton);
Button restartButton = new Button("Restart");
restartButton.setFocusable(false);
restartButton.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
if (graphPanel.isComplete()) {
++rounds;
totalSteps += graphPanel.getSteps();
averageSteps = ((int)(totalSteps / (double)rounds * 10))/10.0;
maxSteps = Math.max(maxSteps, graphPanel.getSteps());
}
graphPanel.restart();
stepsLabel.setText("" + graphPanel.getSteps());
maxLabel.setText("" + maxSteps);
averageLabel.setText("" + averageSteps);
}
});
buttonPanel.add(restartButton);
}
}
</code></pre>
<p>The graph manipulation routines (which require jgrapht-ext-0.9.1-uber.jar, freely available from the JGraphT site):</p>
<pre><code>package org.jgrapht.demo;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Comparator;
import java.util.List;
import java.util.Set;
import javax.swing.JPanel;
import org.jgrapht.DirectedGraph;
import org.jgrapht.Graphs;
import org.jgrapht.ListenableGraph;
import org.jgrapht.alg.StrongConnectivityInspector;
import org.jgrapht.alg.TransitiveClosure;
import org.jgrapht.ext.JGraphXAdapter;
import org.jgrapht.graph.DefaultEdge;
import org.jgrapht.graph.ListenableDirectedGraph;
import org.jgrapht.graph.SimpleDirectedGraph;
import com.mxgraph.layout.mxCircleLayout;
import com.mxgraph.swing.mxGraphComponent;
import com.mxgraph.util.mxConstants;
public class FiveSortPanel extends JPanel {
private static final long serialVersionUID = 1L;
private static final int ARRAY_SIZE = 25;
private static final int SORT_SIZE = 5;
private static final String ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
private static final String STROKE_YELLOW = "strokeColor=#CCCC00";
private Integer[][] BIBD = {
{1,2,3,4,5}, {6,7,8,9,10}, {11,12,13,14,15}, {16,17,18,19,20}, {21,22,23,24,0},
{1,6,11,16,21}, {2,7,12,17,21}, {3,8,13,18,21}, {4,9,14,19,21}, {5,10,15,20,21},
{2,8,14,20,22}, {3,10,11,19,22}, {5,9,12,16,22}, {1,7,15,18,22}, {4,6,13,17,22},
{3,9,15,17,23}, {5,6,14,18,23}, {4,7,11,20,23}, {2,10,13,16,23}, {1,8,12,19,23},
{4,10,12,18,24}, {1,9,13,20,24}, {2,6,15,19,24}, {5,8,11,17,24}, {3,7,14,16,24},
{5,7,13,19,0}, {4,8,15,16,0}, {1,10,14,17,0}, {3,6,12,20,0}, {2,9,11,18,0}
};
private int steps = 0;
private boolean complete = false;
class Node<T extends Comparable<T>> implements Comparable<Node<T>> {
String label;
T value;
Node(String label, T value) {
this.label = label;
this.value = value;
}
@Override
public String toString() {
return label + ": " + value.toString();
}
@Override
public int compareTo(Node<T> other) {
return value.compareTo(other.value);
}
}
// g represents all potential orders; starts as complete graph
private ListenableGraph<Node<Integer>, DefaultEdge> g;
// g1 represents all actual orders; starts with no edges
private SimpleDirectedGraph<Node<Integer>, DefaultEdge> g1;
private JGraphXAdapter<Node<Integer>, DefaultEdge> jgxAdapter;
@SuppressWarnings("unchecked")
Node<Integer>[] vertexArray = new Node[ARRAY_SIZE];
List<Set<Node<Integer>>> connectedComponentsOfG;
HashMap<Node<Integer>,com.mxgraph.model.mxICell> vertexToCellMap;
HashMap<DefaultEdge,com.mxgraph.model.mxICell> edgeToCellMap;
// sort sets in descending order by number of elements
public class SetComparator implements Comparator<Set<Node<Integer>>> {
@Override
public int compare(Set<Node<Integer>> s1, Set<Node<Integer>> s2) {
return s2.size() - s1.size();
}
}
TransitiveClosure transitiveClosure = TransitiveClosure.INSTANCE;
public FiveSortPanel() {
restart();
}
public int getSteps() {
return steps;
}
public boolean isComplete() {
return complete;
}
private void updateConnectedComponents() {
@SuppressWarnings("unchecked")
StrongConnectivityInspector<Node<Integer>,DefaultEdge> sci
= new StrongConnectivityInspector<Node<Integer>,DefaultEdge>(
(DirectedGraph<Node<Integer>, DefaultEdge>) g);
connectedComponentsOfG = sci.stronglyConnectedSets();
Collections.sort(connectedComponentsOfG, new SetComparator());
}
public void step() {
if (!complete) {
chooseFiveAndSort();
++steps;
}
updateConnectedComponents();
complete = true;
for (Set<Node<Integer>> s : connectedComponentsOfG) {
if (s.size() > 1) {
complete = false;
}
}
}
public void restart() {
removeAll();
steps = 0;
complete = false;
shuffleBIBD();
g = new ListenableDirectedGraph<Node<Integer>, DefaultEdge>(DefaultEdge.class);
g1 = new SimpleDirectedGraph<Node<Integer>, DefaultEdge>(DefaultEdge.class);
jgxAdapter = new JGraphXAdapter<Node<Integer>, DefaultEdge>(g);
vertexToCellMap = jgxAdapter.getVertexToCellMap();
edgeToCellMap = jgxAdapter.getEdgeToCellMap();
jgxAdapter.getStylesheet().getDefaultEdgeStyle().put(mxConstants.STYLE_NOLABEL, "1");
add(new mxGraphComponent(jgxAdapter));
ArrayList<Integer> permutation = new ArrayList<Integer>();
for (int i = 0; i < ARRAY_SIZE; ++i) {
permutation.add(i);
}
Collections.shuffle(permutation);
@SuppressWarnings("unchecked")
Node<Integer>[] n = new Node[ARRAY_SIZE];
for (int i = 0; i < ARRAY_SIZE; ++i) {
n[i] = new Node<Integer>(ALPHABET.substring(i, i+1), permutation.get(i));
vertexArray[i] = n[i];
g.addVertex(n[i]);
g1.addVertex(n[i]);
for (int j = 0; j < i; ++j) {
g.addEdge(n[i], n[j]);
g.addEdge(n[j], n[i]);
}
}
updateConnectedComponents();
mxCircleLayout layout = new mxCircleLayout(jgxAdapter);
layout.execute(jgxAdapter.getDefaultParent());
//repaint();
}
private void chooseFiveAndSort() {
Node<Integer>[] fiveNodes = chooseFive();
for (int i = 0; i < fiveNodes.length-1; ++i) {
g1.addEdge(fiveNodes[i],fiveNodes[i+1]);
}
transitiveClosure.closeSimpleDirectedGraph(g1);
List<Object> edgeCellList = new ArrayList<Object>();
for (int i = 0; i < fiveNodes.length-1; ++i) {
List<Node<Integer>> predList = Graphs.predecessorListOf(g1,fiveNodes[i]);
predList.add(fiveNodes[i]);
List<Node<Integer>> succList = Graphs.successorListOf(g1,fiveNodes[i+1]);
succList.add(fiveNodes[i+1]);
for (Node<Integer> np : predList) {
for (Node<Integer> ns : succList) {
g.removeEdge(ns,np);
edgeCellList.add((Object)(edgeToCellMap.get(g.getEdge(np, ns))));
}
}
}
jgxAdapter.setCellStyle(STROKE_YELLOW, edgeCellList.toArray());
}
private Node<Integer>[] chooseFive() {
return chooseFiveRandomBIBD();
}
private void shuffleBIBD() {
List<Integer[]> BIBDList = (List<Integer[]>) Arrays.asList(BIBD);
Collections.shuffle(BIBDList);
BIBD = BIBDList.toArray(new Integer[0][0]);
}
private Node<Integer>[] chooseFiveRandomBIBD() {
Integer[] indexArray = BIBD[steps];
@SuppressWarnings("unchecked")
Node<Integer>[] nodeArray = new Node[SORT_SIZE];
for (int i = 0; i < SORT_SIZE; ++i) {
nodeArray[i] = vertexArray[indexArray[i]];
}
Arrays.sort(nodeArray);
return nodeArray;
}
}
</code></pre> | 2015-05-26 23:11:38.647000+00:00 | 2015-08-12 02:43:20.287000+00:00 | 2015-08-12 02:43:20.287000+00:00 | null | 30,265,140 | <blockquote>
<p>Assume that you have <strong>25 objects</strong> and a <strong>machine that can sort 5 of them</strong> by some criterion that you don't even know. Using this machine is very expensive (1$ per launch), so what is the minimal cost of sorting all of your objects?</p>
</blockquote>
<p>My current solution is very simple (similar idea to <a href="https://en.wikipedia.org/wiki/Merge_sort">merge sort</a>): </p>
<ol>
<li>Randomly divide them into 5 groups of 5 objects</li>
<li>Sort each of them (+5 launches)</li>
<li>Now, sort the minimal elements of these five groups (+1 launch)</li>
<li>Now we have the minimal element of the whole set. Remove it from the group it belongs to, and repeat step <strong>3</strong> until only 5 unsorted objects are left overall (+19 launches)</li>
<li>Sort the rest 5 objects (+1 launch)</li>
</ol>
<p>So, in general, we have to pay <strong>26$</strong> (26 launches). </p>
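<p>A quick way to convince yourself of that count (and that the result really is sorted) is to simulate the machine with a call counter; a rough Python sketch of the scheme above, where <code>machine_sort</code> stands in for the 1$ machine:</p>
<pre><code>import random

calls = 0
def machine_sort(items):                  # the 1$ oracle: sorts at most 5 items
    global calls
    assert len(items) <= 5
    calls += 1
    return sorted(items)

data = random.sample(range(100), 25)
groups = [machine_sort(data[i:i + 5]) for i in range(0, 25, 5)]  # step 2: 5 launches
result = []
while sum(len(g) for g in groups) > 5:                           # steps 3-4: 20 launches
    heads = [g[0] for g in groups if g]
    smallest = machine_sort(heads)[0]
    next(g for g in groups if g and g[0] == smallest).pop(0)
    result.append(smallest)
result += machine_sort([x for g in groups for x in g])           # step 5: 1 launch
print(calls, result == sorted(data))                             # 26 True
</code></pre>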
<blockquote>
<p>Question: Is there any way to make it cheaper (sort them in the least number of launches)?</p>
</blockquote> | 2015-05-15 17:11:27.387000+00:00 | 2015-10-09 20:26:46.390000+00:00 | null | algorithm|sorting | ['http://arxiv.org/ftp/arxiv/papers/0909/0909.3533.pdf', 'http://www.sagemath.org/'] | 2 |
59,732,165 | <p>An idea could be building <a href="https://en.wikipedia.org/wiki/Word_embedding" rel="nofollow noreferrer">embeddings</a> of your text using <a href="https://arxiv.org/abs/1810.04805" rel="nofollow noreferrer">Bert</a> or other pretrained models (take a look to <a href="https://github.com/huggingface/transformers" rel="nofollow noreferrer">transformers</a>) and later compare (for instance using cosine distance) such embeddings with your query (the question) and get the most similar ones interpreting as the section or chapter containing them.</p> | 2020-01-14 10:42:31.287000+00:00 | 2020-01-14 10:42:31.287000+00:00 | null | null | 59,731,721 | <p>I would like to use Tensorflow to create a smart faq. I've seen how to manage a chatbot, but my need is to let the user searching for help and the result must be the most probable chapter or section of a manual.</p>
<p>For example the user can ask: </p>
<blockquote>
<p>"What are the O.S. supported?"</p>
</blockquote>
<p>The reply must be a list of all the possible sections of the manual in which could be the correct answer.
My text record set for the training procedure is only the manual itself. I've followed the text classification example, but i don't think is what i need because in that case it would only understand if a given text belongs to a category or another one.</p>
<p>What's the best practice to accomplish this task (i use Python)?</p>
<p>Thank you in advance</p> | 2020-01-14 10:19:11.827000+00:00 | 2020-01-14 10:42:31.287000+00:00 | 2020-01-14 10:39:40.453000+00:00 | python|tensorflow|deep-learning|recurrent-neural-network|text-classification | ['https://en.wikipedia.org/wiki/Word_embedding', 'https://arxiv.org/abs/1810.04805', 'https://github.com/huggingface/transformers'] | 3 |
48,067,635 | <p>It is unlikely to be practical because it does not even try to be efficient (number of lookups) or reliable (failure rate multiplied by number of lookups). And that is for a single keyword, not boolean queries which would blow up the lookup complexity even further.</p>
<p>Not to mention that it doesn't even solve the hard problems of distributed searching such as avoiding spam and censoring.</p>
<p>Additional problems are that each node could only publish one torrent under a keyword and it would require multiple nodes to somehow coordinate what they publish under which keyword before they run into the collision problem.</p>
<p>Of course you might be able to make it work in a handful of instances, but that is irrelevant because uses of p2p protocols should be designed so that they still work in the case that <em>all nodes</em> use the feature in a similar fashion. Clearly an (m * n * 10)-fold [m = torrents per keyword, n = number of search terms] blowup of network traffic is not acceptable.</p>
<p>If you are seriously interested in distributed keyword search I recommend that you hit google scholar and arxiv and look for existing research, it is a non-trivial topic.</p>
<p>For bittorrent specifically you should also look beyond BEP 5. BEP 44 provides arbitrary data storage, BEPs 46, 49 and 51 describe additional building blocks and abstractions. But I would consider none of them sufficient for a realtime distributed multi-keyword search as one would expect it from a local database or an indexing website.</p> | 2018-01-02 20:47:28.273000+00:00 | 2018-01-02 20:47:28.273000+00:00 | null | null | 48,065,176 | <p>I have an idea to implement a real-time keyword-based torrent search mechanism using the existing BitTorrent DHT, and I would like to know if it is feasible and realistic.</p>
<hr>
<p>We have a torrent, and we would like to be able to find it from a <code>keyword</code> using the DHT only.</p>
<ul>
<li><code>H</code> is a hash function with a 20 bytes output</li>
<li><code>infohash</code> is the info_hash of the torrent (20 bytes)</li>
<li><code>sub(hash, i)</code> returns 2 bytes of <code>hash</code> starting at byte <code>i</code> (for example, <code>sub(0x62616463666568676a696c6b6e6d706f72717473, 2) = 0x6463</code>)</li>
<li><code>announce_peer(hash, port)</code> publishes a fake peer associated with a fake info_hash <code>hash</code>. The IP of the fake peer is irrelevant and we use the <code>port</code> number to store data (2 bytes).</li>
<li><code>get_peers(hash)</code> retrieves fake peers associated with fake info_hash <code>hash</code>. Let's consider that this function returns a list of port number only.</li>
<li><code>a ++ b</code> means concatenate <code>a</code> and <code>b</code> (for example, <code>0x01 ++ 0x0203 = 0x010203</code>)</li>
</ul>
<h2>Publication</h2>
<pre><code>id <- sub(infohash, 0)
announce_peer( H( 0x0000 ++ 0x00 ++ keyword ), id )
announce_peer( H( id ++ 0x01 ++ keyword ), sub(infohash, 2 ))
announce_peer( H( id ++ 0x02 ++ keyword ), sub(infohash, 4 ))
announce_peer( H( id ++ 0x03 ++ keyword ), sub(infohash, 6 ))
announce_peer( H( id ++ 0x04 ++ keyword ), sub(infohash, 8 ))
announce_peer( H( id ++ 0x05 ++ keyword ), sub(infohash, 10))
announce_peer( H( id ++ 0x06 ++ keyword ), sub(infohash, 12))
announce_peer( H( id ++ 0x07 ++ keyword ), sub(infohash, 14))
announce_peer( H( id ++ 0x08 ++ keyword ), sub(infohash, 16))
announce_peer( H( id ++ 0x09 ++ keyword ), sub(infohash, 18))
</code></pre>
<h2>Search</h2>
<pre><code>ids <- get_peers(H( 0x0000 ++ 0x00 ++ keyword ))
foreach (id : ids)
{
part1 <- get_peers(H( id ++ 0x01 ++ keyword ))[0]
part2 <- get_peers(H( id ++ 0x02 ++ keyword ))[0]
part3 <- get_peers(H( id ++ 0x03 ++ keyword ))[0]
part4 <- get_peers(H( id ++ 0x04 ++ keyword ))[0]
part5 <- get_peers(H( id ++ 0x05 ++ keyword ))[0]
part6 <- get_peers(H( id ++ 0x06 ++ keyword ))[0]
part7 <- get_peers(H( id ++ 0x07 ++ keyword ))[0]
part8 <- get_peers(H( id ++ 0x08 ++ keyword ))[0]
part9 <- get_peers(H( id ++ 0x09 ++ keyword ))[0]
result_infohash <- id ++ part1 ++ part2 ++ ... ++ part9
print("search result:" ++ result_infohash)
}
</code></pre>
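<p>A rough Python rendering of the pseudocode above (hashlib provides <code>H</code>; <code>keyword</code> is assumed to be a bytes object, and <code>announce_peer</code>/<code>get_peers</code> are placeholders for the real DHT calls, keeping the same abstraction as above):</p>
<pre><code>import hashlib

def H(data):                    # 20-byte hash
    return hashlib.sha1(data).digest()

def sub(h, i):                  # 2 bytes of h starting at byte i
    return h[i:i + 2]

def publish(infohash, keyword, announce_peer):
    kid = sub(infohash, 0)
    announce_peer(H(b"\x00\x00" + b"\x00" + keyword), kid)
    for n in range(1, 10):
        announce_peer(H(kid + bytes([n]) + keyword), sub(infohash, 2 * n))

def search(keyword, get_peers):
    results = []
    for kid in get_peers(H(b"\x00\x00" + b"\x00" + keyword)):
        parts = [get_peers(H(kid + bytes([n]) + keyword))[0] for n in range(1, 10)]
        results.append(kid + b"".join(parts))
    return results
</code></pre>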
<hr>
<p>I know there would be collisions with <code>id</code> (2 bytes only), but with relatively specific keywords it should work...</p>
<p>We could also build more specific keywords by concatenating several words in alphanumeric order. For example, if we have words <code>A</code>, <code>B</code> and <code>C</code> associated with a torrent, we could publish keywords <code>A</code>, <code>B</code>, <code>C</code>, <code>A ++ B</code>, <code>A ++ C</code>, <code>B ++ C</code> and <code>A ++ B ++ C</code>.</p>
<hr>
<p>So, is this awful hack feasible :D ? I know that <a href="http://retroshare.sourceforge.net/wiki/index.php/Frequently_Asked_Questions#How_does_RetroShare_know_my_friend.27s_IP_address_and_port.3F_Why_don.27t_I_need_a_static_IP_address.3F_What_is_DHT_for.3F" rel="nofollow noreferrer">Retroshare is using BitTorrent's DHT</a>.</p> | 2018-01-02 17:18:20.910000+00:00 | 2018-01-02 20:47:28.273000+00:00 | 2018-01-02 18:23:45.823000+00:00 | bittorrent|dht|kademlia | [] | 0 |